{"questions":"argocd Release Assets Asset Description Verification of Argo CD Artifacts Prerequisites crane for container verification only cosign or higher slsa verifier","answers":"# Verification of Argo CD Artifacts\n\n## Prerequisites\n- cosign `v2.0.0` or higher [installation instructions](https:\/\/docs.sigstore.dev\/cosign\/installation)\n- slsa-verifier [installation instructions](https:\/\/github.com\/slsa-framework\/slsa-verifier#installation)\n- crane [installation instructions](https:\/\/github.com\/google\/go-containerregistry\/blob\/main\/cmd\/crane\/README.md) (for container verification only)\n\n***\n## Release Assets\n| Asset                    | Description                       |\n|--------------------------|-----------------------------------|\n| argocd-darwin-amd64      | CLI Binary                        |\n| argocd-darwin-arm64      | CLI Binary                        |\n| argocd-linux-amd64       | CLI Binary                        |\n| argocd-linux-arm64       | CLI Binary                        |\n| argocd-linux-ppc64le     | CLI Binary                        |\n| argocd-linux-s390x       | CLI Binary                        |\n| argocd-windows-amd64     | CLI Binary                        |\n| argocd-cli.intoto.jsonl  | Attestation of CLI binaries       |\n| argocd-sbom.intoto.jsonl | Attestation of SBOM               |\n| cli_checksums.txt        | Checksums of binaries             |\n| sbom.tar.gz              | SBOM                              |\n| sbom.tar.gz.pem          | Certificate used to sign the SBOM |\n| sbom.tar.gz.sig          | Signature of the SBOM             |\n\n***\n## Verification of container images\n\nArgo CD container images are signed by [cosign](https:\/\/github.com\/sigstore\/cosign) using identity-based (\"keyless\") signing and transparency. 
The following command can be used to verify the signature of a container image:\n\n```bash\ncosign verify \\\n--certificate-identity-regexp https:\/\/github.com\/argoproj\/argo-cd\/.github\/workflows\/image-reuse.yaml@refs\/tags\/v \\\n--certificate-oidc-issuer https:\/\/token.actions.githubusercontent.com \\\n--certificate-github-workflow-repository \"argoproj\/argo-cd\" \\\nquay.io\/argoproj\/argocd:v2.11.3 | jq\n```\nThe command should output the following if the container image was correctly verified:\n```bash\nThe following checks were performed on each of these signatures:\n  - The cosign claims were validated\n  - Existence of the claims in the transparency log was verified offline\n  - Any certificates were verified against the Fulcio roots.\n[\n  {\n    \"critical\": {\n      \"identity\": {\n        \"docker-reference\": \"quay.io\/argoproj\/argo-cd\"\n      },\n      \"image\": {\n        \"docker-manifest-digest\": \"sha256:63dc60481b1b2abf271e1f2b866be8a92962b0e53aaa728902caa8ac8d235277\"\n      },\n      \"type\": \"cosign container image signature\"\n    },\n    \"optional\": {\n      \"1.3.6.1.4.1.57264.1.1\": \"https:\/\/token.actions.githubusercontent.com\",\n      \"1.3.6.1.4.1.57264.1.2\": \"push\",\n      \"1.3.6.1.4.1.57264.1.3\": \"a6ec84da0eaa519cbd91a8f016cf4050c03323b2\",\n      \"1.3.6.1.4.1.57264.1.4\": \"Publish ArgoCD Release\",\n      \"1.3.6.1.4.1.57264.1.5\": \"argoproj\/argo-cd\",\n      \"1.3.6.1.4.1.57264.1.6\": \"refs\/tags\/<version>\",\n      ...\n```\n\n***\n## Verification of container image with SLSA attestations\n\nA [SLSA](https:\/\/slsa.dev\/) Level 3 provenance is generated using [slsa-github-generator](https:\/\/github.com\/slsa-framework\/slsa-github-generator).\n\nThe following command will verify the signature of an attestation and how it was issued. 
It will contain the payloadType, payload, and signature.\n\nRun the following command as per the [slsa-verifier documentation](https:\/\/github.com\/slsa-framework\/slsa-verifier\/tree\/main#containers):\n\n```bash\n# Get the immutable container image to prevent TOCTOU attacks https:\/\/github.com\/slsa-framework\/slsa-verifier#toctou-attacks\nIMAGE=quay.io\/argoproj\/argocd:v2.7.0\nIMAGE=\"${IMAGE}@\"$(crane digest \"${IMAGE}\")\n# Verify provenance, including the tag to prevent rollback attacks.\nslsa-verifier verify-image \"$IMAGE\" \\\n    --source-uri github.com\/argoproj\/argo-cd \\\n    --source-tag v2.7.0\n```\n\nIf you only want to verify up to the major or minor version of the source repository tag (instead of the full tag), use the `--source-versioned-tag` option, which performs semantic versioning verification:\n\n```shell\nslsa-verifier verify-image \"$IMAGE\" \\\n    --source-uri github.com\/argoproj\/argo-cd \\\n    --source-versioned-tag v2 # Note: May use v2.7 for minor version verification.\n```\n\nThe attestation payload contains a non-forgeable provenance which is base64-encoded and can be viewed by passing the `--print-provenance` option to the commands above:\n\n```bash\nslsa-verifier verify-image \"$IMAGE\" \\\n    --source-uri github.com\/argoproj\/argo-cd \\\n    --source-tag v2.7.0 \\\n    --print-provenance | jq\n```\n\nIf you prefer using cosign, follow these [instructions](https:\/\/github.com\/slsa-framework\/slsa-github-generator\/blob\/main\/internal\/builders\/container\/README.md#cosign).\n\n!!! tip\n    Both `cosign` and `slsa-verifier` can be used to verify image attestations.\n    Check the documentation of each binary for detailed instructions.\n\n***\n\n## Verification of CLI artifacts with SLSA attestations\n\nA single attestation (`argocd-cli.intoto.jsonl`) from each release is provided. 
This can be used with [slsa-verifier](https:\/\/github.com\/slsa-framework\/slsa-verifier#verification-for-github-builders) to verify that a CLI binary was generated using Argo CD workflows on GitHub and that it was cryptographically signed.\n\n```bash\nslsa-verifier verify-artifact argocd-linux-amd64 \\\n  --provenance-path argocd-cli.intoto.jsonl \\\n  --source-uri github.com\/argoproj\/argo-cd \\\n  --source-tag v2.7.0\n```\n\nIf you only want to verify up to the major or minor version of the source repository tag (instead of the full tag), use the `--source-versioned-tag` option, which performs semantic versioning verification:\n\n```shell\nslsa-verifier verify-artifact argocd-linux-amd64 \\\n  --provenance-path argocd-cli.intoto.jsonl \\\n  --source-uri github.com\/argoproj\/argo-cd \\\n  --source-versioned-tag v2 # Note: May use v2.7 for minor version verification.\n```\n\nThe payload is a non-forgeable provenance which is base64-encoded and can be viewed by passing the `--print-provenance` option to the commands above:\n\n```bash\nslsa-verifier verify-artifact argocd-linux-amd64 \\\n  --provenance-path argocd-cli.intoto.jsonl \\\n  --source-uri github.com\/argoproj\/argo-cd \\\n  --source-tag v2.7.0 \\\n  --print-provenance | jq\n```\n\n## Verification of SBOM\n\nA single attestation (`argocd-sbom.intoto.jsonl`) from each release is provided along with the SBOM (`sbom.tar.gz`). This can be used with [slsa-verifier](https:\/\/github.com\/slsa-framework\/slsa-verifier#verification-for-github-builders) to verify that the SBOM was generated using Argo CD workflows on GitHub and that it was cryptographically signed.\n\n```bash\nslsa-verifier verify-artifact sbom.tar.gz \\\n  --provenance-path argocd-sbom.intoto.jsonl \\\n  --source-uri github.com\/argoproj\/argo-cd \\\n  --source-tag v2.7.0\n```\n\n***\n## Verification on Kubernetes\n\n### Policy controllers\n!!! 
note\n    We encourage all users to verify signatures and provenances with your admission\/policy controller of choice. Doing so will verify that an image was built by us before it's deployed on your Kubernetes cluster.\n\nCosign signatures and SLSA provenances are compatible with several types of admission controllers. Please see the [cosign documentation](https:\/\/docs.sigstore.dev\/cosign\/overview\/#kubernetes-integrations) and [slsa-github-generator](https:\/\/github.com\/slsa-framework\/slsa-github-generator\/blob\/main\/internal\/builders\/container\/README.md#verification) for supported controllers.","site":"argocd"}
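The Release Assets list above includes `cli_checksums.txt`, but no worked example of using it. The following is a minimal sketch of the checksum-verification flow with GNU coreutils `sha256sum`; for illustration it creates a local stand-in file in place of a binary downloaded from the release page, which is an assumption of this sketch — in a real verification you would download `argocd-linux-amd64` and `cli_checksums.txt` from the GitHub release.

```shell
# Stand-in for the downloaded release binary (a real run would download
# argocd-linux-amd64 and cli_checksums.txt instead of creating them locally).
printf 'example binary contents' > argocd-linux-amd64
sha256sum argocd-linux-amd64 > cli_checksums.txt

# --ignore-missing skips checksum entries for binaries you did not download;
# the command exits non-zero and prints FAILED if the file was tampered with.
sha256sum --check --ignore-missing cli_checksums.txt
```

A successful check prints `argocd-linux-amd64: OK`; any mismatch fails the command, which makes it suitable for use in CI pipelines.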
{"questions":"argocd operations and the API TLS configuration The user facing endpoint of the workload which serves the UI The endpoint of the which is accessed by Argo CD provides three inbound TLS endpoints that can be configured and workloads to request repository","answers":"# TLS configuration\n\nArgo CD provides three inbound TLS endpoints that can be configured:\n\n* The user-facing endpoint of the `argocd-server` workload which serves the UI\n  and the API\n* The endpoint of the `argocd-repo-server`, which is accessed by `argocd-server`\n  and `argocd-application-controller` workloads to request repository\n  operations.\n* The endpoint of the `argocd-dex-server`, which is accessed by `argocd-server`\n  to handle OIDC authentication.\n\nBy default, and without further configuration, these endpoints will be\nset up to use an automatically generated, self-signed certificate. However,\nmost users will want to explicitly configure the certificates for these TLS\nendpoints, possibly using automated means such as `cert-manager` or using\ntheir own dedicated Certificate Authority.\n\n## Configuring TLS for argocd-server\n\n### Inbound TLS options for argocd-server\n\nYou can configure certain TLS options for the `argocd-server` workload by\nsetting command line parameters. 
The following parameters are available:\n\n|Parameter|Default|Description|\n|---------|-------|-----------|\n|`--insecure`|`false`|Disables TLS completely|\n|`--tlsminversion`|`1.2`|The minimum TLS version to be offered to clients|\n|`--tlsmaxversion`|`1.3`|The maximum TLS version to be offered to clients|\n|`--tlsciphers`|`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384:TLS_RSA_WITH_AES_256_GCM_SHA384`|A colon separated list of TLS cipher suites to be offered to clients|\n\n### TLS certificates used by argocd-server\n\nThere are two ways to configure the TLS certificates used by `argocd-server`:\n\n* Setting the `tls.crt` and `tls.key` keys in the `argocd-server-tls` secret\n  to hold PEM data of the certificate and the corresponding private key. The\n  `argocd-server-tls` secret may be of type `tls`, but does not have to be.\n* Setting the `tls.crt` and `tls.key` keys in the `argocd-secret` secret to\n  hold PEM data of the certificate and the corresponding private key. This\n  method is considered deprecated, and only exists for purposes of backwards\n  compatibility. 
The `argocd-secret` secret should no longer be used to override the\n  TLS certificate.\n\nArgo CD decides which TLS certificate to use for the endpoint of\n`argocd-server` as follows:\n\n* If the `argocd-server-tls` secret exists and contains a valid key pair in the\n  `tls.crt` and `tls.key` keys, this will be used for the certificate of the\n  endpoint of `argocd-server`.\n* Otherwise, if the `argocd-secret` secret contains a valid key pair in the\n  `tls.crt` and `tls.key` keys, this will be used as the certificate for the\n  endpoint of `argocd-server`.\n* If no `tls.crt` and `tls.key` keys are found in either of the two mentioned\n  secrets, Argo CD will generate a self-signed certificate and persist it in\n  the `argocd-secret` secret.\n\nThe `argocd-server-tls` secret contains only information for TLS configuration\nto be used by `argocd-server` and is safe to be managed via third-party tools\nsuch as `cert-manager` or `SealedSecrets`.\n\nTo create this secret manually from an existing key pair, you can use `kubectl`:\n\n```shell\nkubectl create -n argocd secret tls argocd-server-tls \\\n  --cert=\/path\/to\/cert.pem \\\n  --key=\/path\/to\/key.pem\n```\n\nArgo CD will pick up changes to the `argocd-server-tls` secret automatically\nand will not require a restart of the pods to use a renewed certificate.\n\n## Configuring inbound TLS for argocd-repo-server\n\n### Inbound TLS options for argocd-repo-server\n\nYou can configure certain TLS options for the `argocd-repo-server` workload by\nsetting command line parameters. 
The following parameters are available:\n\n|Parameter|Default|Description|\n|---------|-------|-----------|\n|`--disable-tls`|`false`|Disables TLS completely|\n|`--tlsminversion`|`1.2`|The minimum TLS version to be offered to clients|\n|`--tlsmaxversion`|`1.3`|The maximum TLS version to be offered to clients|\n|`--tlsciphers`|`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384:TLS_RSA_WITH_AES_256_GCM_SHA384`|A colon-separated list of TLS cipher suites to be offered to clients|\n\n### Inbound TLS certificates used by argocd-repo-server\n\nTo configure the TLS certificate used by the `argocd-repo-server` workload,\ncreate a secret named `argocd-repo-server-tls` in the namespace where Argo CD\nis running, with the certificate's key pair stored in the `tls.crt` and\n`tls.key` keys. If this secret does not exist, `argocd-repo-server` will\ngenerate and use a self-signed certificate.\n\nTo create this secret, you can use `kubectl`:\n\n```shell\nkubectl create -n argocd secret tls argocd-repo-server-tls \\\n  --cert=\/path\/to\/cert.pem \\\n  --key=\/path\/to\/key.pem\n```\n\nIf the certificate is self-signed, you will also need to add `ca.crt` to the secret\nwith the contents of your CA certificate.\n\nPlease note that, as opposed to `argocd-server`, the `argocd-repo-server` is\nnot able to pick up changes to this secret automatically. If you create (or\nupdate) this secret, the `argocd-repo-server` pods need to be restarted.\n\nAlso note that the certificate should be issued with the correct SAN entries\nfor the `argocd-repo-server`, containing at least the entries for\n`DNS:argocd-repo-server` and `DNS:argocd-repo-server.argo-cd.svc`, depending\non how your workloads connect to the repository server.\n\n## Configuring inbound TLS for argocd-dex-server\n\n### Inbound TLS options for argocd-dex-server\n\nYou can configure certain TLS options for the `argocd-dex-server` workload by\nsetting command line parameters. 
The following parameters are available:\n\n|Parameter|Default|Description|\n|---------|-------|-----------|\n|`--disable-tls`|`false`|Disables TLS completely|\n\n### Inbound TLS certificates used by argocd-dex-server\n\nTo configure the TLS certificate used by the `argocd-dex-server` workload,\ncreate a secret named `argocd-dex-server-tls` in the namespace where Argo CD\nis running, with the certificate's key pair stored in the `tls.crt` and\n`tls.key` keys. If this secret does not exist, `argocd-dex-server` will\ngenerate and use a self-signed certificate.\n\nTo create this secret, you can use `kubectl`:\n\n```shell\nkubectl create -n argocd secret tls argocd-dex-server-tls \\\n  --cert=\/path\/to\/cert.pem \\\n  --key=\/path\/to\/key.pem\n```\n\nIf the certificate is self-signed, you will also need to add `ca.crt` to the secret\nwith the contents of your CA certificate.\n\nPlease note that, as opposed to `argocd-server`, the `argocd-dex-server` is\nnot able to pick up changes to this secret automatically. If you create (or\nupdate) this secret, the `argocd-dex-server` pods need to be restarted.\n\nAlso note that the certificate should be issued with the correct SAN entries\nfor the `argocd-dex-server`, containing at least the entries for\n`DNS:argocd-dex-server` and `DNS:argocd-dex-server.argo-cd.svc`, depending\non how your workloads connect to the dex server.\n\n## Configuring TLS between Argo CD components\n\n### Configuring TLS to argocd-repo-server\n\nBoth `argocd-server` and `argocd-application-controller` communicate with the\n`argocd-repo-server` using a gRPC API over TLS. By default,\n`argocd-repo-server` generates a non-persistent, self-signed certificate\nto use for its gRPC endpoint on startup. Because the `argocd-repo-server` has\nno means to connect to the K8s control plane API, this certificate is not\navailable to outside consumers for verification. 
Both the\n`argocd-server` and `argocd-application-controller` will use a non-validating\nconnection to the `argocd-repo-server` for this reason.\n\nTo change this behavior to be more secure by having the `argocd-server` and\n`argocd-application-controller` validate the TLS certificate of the\n`argocd-repo-server` endpoint, the following steps need to be performed:\n\n* Create a persistent TLS certificate to be used by `argocd-repo-server`, as\n  shown above\n* Restart the `argocd-repo-server` pod(s)\n* Modify the pod startup parameters for `argocd-server` and\n  `argocd-application-controller` to include the `--repo-server-strict-tls`\n  parameter.\n\nThe `argocd-server` and `argocd-application-controller` workloads will now\nvalidate the TLS certificate of the `argocd-repo-server` by using the\ncertificate stored in the `argocd-repo-server-tls` secret.\n\n!!!note \"Certificate expiry\"\n    Please make sure that the certificate has a proper lifetime. Keep in\n    mind that when you have to replace the certificate, all workloads have\n    to be restarted in order to work properly again.\n\n### Configuring TLS to argocd-dex-server\n\n`argocd-server` communicates with the `argocd-dex-server` using an HTTPS API\nover TLS. By default, `argocd-dex-server` generates a non-persistent,\nself-signed certificate to use for its HTTPS endpoint on startup. 
Because the\n`argocd-dex-server` has no means to connect to the K8s control plane API,\nthis certificate is not available to outside consumers for verification.\nThe `argocd-server` will use a non-validating connection to the `argocd-dex-server`\nfor this reason.\n\nTo change this behavior to be more secure by having the `argocd-server` validate\nthe TLS certificate of the `argocd-dex-server` endpoint, the following steps need\nto be performed:\n\n* Create a persistent TLS certificate to be used by `argocd-dex-server`, as\n  shown above\n* Restart the `argocd-dex-server` pod(s)\n* Modify the pod startup parameters for `argocd-server` to include the\n  `--dex-server-strict-tls` parameter.\n\nThe `argocd-server` workload will now validate the TLS certificate of the\n`argocd-dex-server` by using the certificate stored in the `argocd-dex-server-tls`\nsecret.\n\n!!!note \"Certificate expiry\"\n    Please make sure that the certificate has a proper lifetime. Keep in\n    mind that when you have to replace the certificate, all workloads have\n    to be restarted in order to work properly again.\n\n### Disabling TLS to argocd-repo-server\n\nIn some scenarios where mTLS through side-car proxies is involved (e.g.\nin a service mesh), you may want to configure the connections from the\n`argocd-server` and `argocd-application-controller` to `argocd-repo-server`\nto not use TLS at all.\n\nIn this case, you will need to:\n\n* Configure `argocd-repo-server` with TLS on the gRPC API disabled by specifying\n  the `--disable-tls` parameter to the pod container's startup arguments.\n  Also, consider restricting listening addresses to the loopback interface by\n  specifying the `--listen 127.0.0.1` parameter, so that the insecure endpoint is\n  not exposed on the pod's network interfaces, but is still available to the\n  side-car container.\n* Configure `argocd-server` and `argocd-application-controller` to not use TLS\n  for connections to the `argocd-repo-server` by specifying the 
parameter\n  `--repo-server-plaintext` to the pod container's startup arguments\n* Configure `argocd-server` and `argocd-application-controller` to connect to\n  the side-car instead of directly to the `argocd-repo-server` service by\n  specifying its address via the `--repo-server <address>` parameter\n\nAfter this change, the `argocd-server` and `argocd-application-controller` will\nuse a plain text connection to the side-car proxy, which will handle all aspects\nof TLS to the `argocd-repo-server`'s TLS side-car proxy.\n\n### Disabling TLS to argocd-dex-server\n\nIn some scenarios where mTLS through side-car proxies is involved (e.g.\nin a service mesh), you may want to configure the connections between\n`argocd-server` and `argocd-dex-server` to not use TLS at all.\n\nIn this case, you will need to:\n\n* Configure `argocd-dex-server` with TLS on the HTTPS API disabled by specifying\n  the `--disable-tls` parameter to the pod container's startup arguments\n* Configure `argocd-server` to not use TLS for connections to the `argocd-dex-server`\n  by specifying the parameter `--dex-server-plaintext` to the pod container's startup\n  arguments\n* Configure `argocd-server` to connect to the side-car instead of directly to the\n  `argocd-dex-server` service by specifying its address via the `--dex-server <address>`\n  parameter\n\nAfter this change, the `argocd-server` will use a plain text connection to the side-car\nproxy, which will handle all aspects of TLS to the `argocd-dex-server`'s TLS side-car proxy.\n","site":"argocd"}
server  communicates with the  argocd dex server  using an HTTPS API over TLS  By default   argocd dex server  generates a non persistent  self signed certificate to use for its HTTPS endpoint on startup  Because the   argocd dex server  has no means to connect to the K8s control plane API  this certificate is not being available to outside consumers for verification  The  argocd server  will use a non validating connection to the  argocd dex server  for this reason   To change this behavior to be more secure by having the  argocd server  validate  the TLS certificate of the  argocd dex server  endpoint  the following steps need to be performed     Create a persistent TLS certificate to be used by  argocd dex server   as   shown above   Restart the  argocd dex server  pod s    Modify the pod startup parameters for  argocd server  to include the     dex server strict tls  parameter   The  argocd server  workload will now validate the TLS certificate of the  argocd dex server  by using the certificate stored in the  argocd dex server tls  secret      note  Certificate expiry      Please make sure that the certificate has a proper life time  Keep in     mind that when you have to replace the certificate  all workloads have     to be restarted in order to properly work again       Disabling TLS to argocd repo server  In some scenarios where mTLS through side car proxies is involved  e g  in a service mesh   you may want configure the connections between the  argocd server  and  argocd application controller  to  argocd repo server  to not use TLS at all   In this case  you will need to     Configure  argocd repo server  with TLS on the gRPC API disabled by specifying   the    disable tls  parameter to the pod container s startup arguments    Also  consider restricting listening addresses to the loopback interface by specifying      listen 127 0 0 1  parameter  so that insecure endpoint is not exposed on   the pod s network interfaces  but still available to the side 
car container    Configure  argocd server  and  argocd application controller  to not use TLS   for connections to the  argocd repo server  by specifying the parameter      repo server plaintext  to the pod container s startup arguments   Configure  argocd server  and  argocd application controller  to connect to   the side car instead of directly to the  argocd repo server  service by   specifying its address via the    repo server  address   parameter  After this change  the  argocd server  and  argocd application controller  will use a plain text connection to the side car proxy  that will handle all aspects of TLS to the  argocd repo server  s TLS side car proxy       Disabling TLS to argocd dex server  In some scenarios where mTLS through side car proxies is involved  e g  in a service mesh   you may want configure the connections between  argocd server  to  argocd dex server  to not use TLS at all   In this case  you will need to     Configure  argocd dex server  with TLS on the HTTPS API disabled by specifying   the    disable tls  parameter to the pod container s startup arguments   Configure  argocd server  to not use TLS for connections to the  argocd dex server     by specifying the parameter    dex server plaintext  to the pod container s startup   arguments   Configure  argocd server  to connect to the side car instead of directly to the     argocd dex server  service by specifying its address via the    dex server  address     parameter  After this change  the  argocd server  will use a plain text connection to the side car  proxy  that will handle all aspects of TLS to the  argocd dex server  s TLS side car proxy  "}
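The strict-TLS steps above can be sketched as a manifest change. This is only a sketch under the assumption of the default upstream install manifests (container name `argocd-server`, binary invoked via `args`); adjust it to match your installation:

```yaml
# Sketch: make argocd-server validate the repo-server certificate
# (stored in the argocd-repo-server-tls secret) instead of using a
# non-validating connection. Container layout is an assumption.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
  namespace: argocd
spec:
  template:
    spec:
      containers:
      - name: argocd-server
        args:
        - /usr/local/bin/argocd-server
        - --repo-server-strict-tls
```

The same `--repo-server-strict-tls` flag must also be added to the `argocd-application-controller` workload, and the `argocd-repo-server` pods restarted after the `argocd-repo-server-tls` secret is created.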
# Resource Actions

## Overview
Argo CD allows operators to define custom actions which users can perform on specific resource types. This is used internally to provide actions like `restart` for a `DaemonSet`, or `retry` for an Argo Rollout.

Operators can add actions to custom resources in the form of a Lua script and expand those capabilities.

## Built-in Actions

The following are actions that are built-in to Argo CD. Each action name links to its Lua script definition:

{!docs/operator-manual/resource_actions_builtin.md!}

See the [RBAC documentation](rbac.md#the-action-action) for information on how to control access to these actions.

## Custom Resource Actions

Argo CD supports custom resource actions written in [Lua](https://www.lua.org/). This is useful if you:

* Have a custom resource for which Argo CD does not provide any built-in actions.
* Have a commonly performed manual task that might be error-prone if executed by users via `kubectl`.

The resource actions act on a single object.

You can define your own custom resource actions in the `argocd-cm` ConfigMap.

### Custom Resource Action Types

#### An action that modifies the source resource

This action modifies and returns the source resource. This kind of action was the only one available until 2.8, and it is still supported.

#### An action that produces a list of new or modified resources

**An alpha feature, introduced in 2.8.**

This action returns a list of impacted resources; each impacted resource has a K8s resource and an operation to perform on it.
Currently supported operations are "create" and "patch"; "patch" is only supported for the source resource.
Creating new resources is possible by specifying a "create" operation for each such resource in the returned list.
One of the returned resources can be the modified source object, with a "patch" operation, if needed.
See the definition examples below.

### Define a Custom Resource Action in `argocd-cm` ConfigMap

Custom resource actions can be defined in the `resource.customizations.actions.<group_kind>` field of `argocd-cm`. The following example demonstrates a set of custom actions for `CronJob` resources; each such action returns the modified CronJob.
The customizations key is in the format of `resource.customizations.actions.<apiGroup_Kind>`.

```yaml
resource.customizations.actions.batch_CronJob: |
  discovery.lua: |
    actions = {}
    actions["suspend"] = {["disabled"] = true}
    actions["resume"] = {["disabled"] = true}

    local suspend = false
    if obj.spec.suspend ~= nil then
        suspend = obj.spec.suspend
    end
    if suspend then
        actions["resume"]["disabled"] = false
    else
        actions["suspend"]["disabled"] = false
    end
    return actions
  definitions:
  - name: suspend
    action.lua: |
      obj.spec.suspend = true
      return obj
  - name: resume
    action.lua: |
      if obj.spec.suspend ~= nil and obj.spec.suspend then
          obj.spec.suspend = false
      end
      return obj
```

The `discovery.lua` script must return a table where the key name represents the action name. You can optionally include logic to enable or disable certain actions based on the current object state.

Each action name must be represented in the list of `definitions` with an accompanying `action.lua` script to control the resource modifications. The `obj` is a global variable which contains the resource. Each action script returns an optionally modified version of the resource. In this example, we are simply setting `.spec.suspend` to either `true` or `false`.

By default, defining a resource action customization will override any built-in action for this resource kind. As of Argo CD version 2.13.0, if you want to retain the built-in actions, you can set the `mergeBuiltinActions` key to `true`. Your custom actions will have precedence over the built-in actions.

```yaml
resource.customizations.actions.argoproj.io_Rollout: |
  mergeBuiltinActions: true
  discovery.lua: |
    actions = {}
    actions["do-things"] = {}
    return actions
  definitions:
  - name: do-things
    action.lua: |
      return obj
```

#### Creating new resources with a custom action

!!! important
    Creating resources via the Argo CD UI is an intentional, strategic departure from GitOps principles. We recommend that you use this feature sparingly and only for resources that are not part of the desired state of the application.

The resource the action is invoked on is referred to as the source resource.
The new resource and all the resources implicitly created as a result must be permitted on the AppProject level, otherwise the creation will fail.

##### Creating child resources of the source resource with a custom action

If the new resource represents a k8s child of the source resource, the source resource's ownerReference must be set on the new resource.
Here is an example Lua snippet that constructs a Job resource as a child of a source CronJob resource; `obj` is a global variable which contains the source resource:

```lua
-- ...
ownerRef = {}
ownerRef.apiVersion = obj.apiVersion
ownerRef.kind = obj.kind
ownerRef.name = obj.metadata.name
ownerRef.uid = obj.metadata.uid
job = {}
job.metadata = {}
job.metadata.ownerReferences = {}
job.metadata.ownerReferences[1] = ownerRef
-- ...
```

##### Creating independent child resources with a custom action

If the new resource is independent of the source resource, by default it is not known to the App of the source resource (as it is not part of the desired state and does not have an `ownerReference`).
To make the App aware of the new resource, the `app.kubernetes.io/instance` label (or other Argo CD tracking label, if configured) must be set on the resource.
It can be copied from the source resource, like this:

```lua
-- ...
newObj = {}
newObj.metadata = {}
newObj.metadata.labels = {}
newObj.metadata.labels["app.kubernetes.io/instance"] = obj.metadata.labels["app.kubernetes.io/instance"]
-- ...
```

While the new resource will be part of the App with the tracking label in place, it will be immediately deleted if auto prune is set on the App.
To keep the resource, set the `Prune=false` annotation on the resource, with this Lua snippet:

```lua
-- ...
newObj.metadata.annotations = {}
newObj.metadata.annotations["argocd.argoproj.io/sync-options"] = "Prune=false"
-- ...
```

(With `Prune=false` set, the resource will not be deleted upon the deletion of the App, and will require a manual cleanup.)

The resource and the App will now appear out of sync, which is the expected Argo CD behavior upon creating a resource that is not part of the desired state.

If you wish to treat such an App as a synced one, add the following resource annotation in Lua code:

```lua
-- ...
newObj.metadata.annotations["argocd.argoproj.io/compare-options"] = "IgnoreExtraneous"
-- ...
```

#### An action that produces a list of resources - a complete example

```yaml
resource.customizations.actions.ConfigMap: |
  discovery.lua: |
    actions = {}
    actions["do-things"] = {}
    return actions
  definitions:
  - name: do-things
    action.lua: |
      -- Create a new ConfigMap
      cm1 = {}
      cm1.apiVersion = "v1"
      cm1.kind = "ConfigMap"
      cm1.metadata = {}
      cm1.metadata.name = "cm1"
      cm1.metadata.namespace = obj.metadata.namespace
      cm1.metadata.labels = {}
      -- Copy the Argo CD tracking label so that the resource is recognized by the App
      cm1.metadata.labels["app.kubernetes.io/instance"] = obj.metadata.labels["app.kubernetes.io/instance"]
      cm1.metadata.annotations = {}
      -- For Apps with auto-prune, set Prune=false on the resource, so it does not get deleted
      cm1.metadata.annotations["argocd.argoproj.io/sync-options"] = "Prune=false"
      -- Keep the App synced even though it has a resource that is not in Git
      cm1.metadata.annotations["argocd.argoproj.io/compare-options"] = "IgnoreExtraneous"
      cm1.data = {}
      cm1.data.myKey1 = "myValue1"
      impactedResource1 = {}
      impactedResource1.operation = "create"
      impactedResource1.resource = cm1

      -- Patch the original cm
      obj.metadata.labels["aKey"] = "aValue"
      impactedResource2 = {}
      impactedResource2.operation = "patch"
      impactedResource2.resource = obj

      result = {}
      result[1] = impactedResource1
      result[2] = impactedResource2
      return result
```
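Combining the discovery, ownerReference, and impacted-resource pieces above, here is a sketch of a hypothetical `trigger` action on a CronJob that creates a child Job. The action name, the Job name suffix, and the reuse of `jobTemplate.spec` are illustrative assumptions, not a built-in action:

```yaml
resource.customizations.actions.batch_CronJob: |
  discovery.lua: |
    actions = {}
    actions["trigger"] = {}
    return actions
  definitions:
  - name: trigger
    action.lua: |
      -- Build a Job owned by the source CronJob (obj), reusing its jobTemplate.
      job = {}
      job.apiVersion = "batch/v1"
      job.kind = "Job"
      job.metadata = {}
      -- Hypothetical fixed suffix; a real action should make this unique per run.
      job.metadata.name = obj.metadata.name .. "-manual"
      job.metadata.namespace = obj.metadata.namespace
      -- Set the ownerReference so the Job is recognized as a child of the CronJob.
      ownerRef = {}
      ownerRef.apiVersion = obj.apiVersion
      ownerRef.kind = obj.kind
      ownerRef.name = obj.metadata.name
      ownerRef.uid = obj.metadata.uid
      job.metadata.ownerReferences = {}
      job.metadata.ownerReferences[1] = ownerRef
      job.spec = obj.spec.jobTemplate.spec
      -- Return the new Job as a "create" impacted resource.
      impactedResource = {}
      impactedResource.operation = "create"
      impactedResource.resource = job
      result = {}
      result[1] = impactedResource
      return result
```

Remember that the created Job must be permitted on the AppProject level, otherwise the creation will fail.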
# Reconcile Optimization

By default, an Argo CD Application is refreshed every time a resource that belongs to it changes.

Kubernetes controllers often update the resources they watch periodically, causing continuous reconcile operations on the Application and high CPU usage on the `argocd-application-controller`. Argo CD allows you to optionally ignore resource updates on specific fields for [tracked resources](../user-guide/resource_tracking.md).
For untracked resources, you can [use the argocd.argoproj.io/ignore-resource-updates annotation](#ignoring-updates-for-untracked-resources).

When a resource update is ignored, if the resource's [health status](./health.md) does not change, the Application that this resource belongs to will not be reconciled.

## System-Level Configuration

By default, `resource.ignoreResourceUpdatesEnabled` is set to `true`, enabling Argo CD to ignore resource updates. This default setting ensures that Argo CD maintains sustainable performance by reducing unnecessary reconcile operations. If you need to alter this behavior, you can explicitly set `resource.ignoreResourceUpdatesEnabled` to `false` in the `argocd-cm` ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.ignoreResourceUpdatesEnabled: "false"
```

Argo CD allows ignoring resource updates at a specific JSON path, using [RFC6902 JSON patches](https://tools.ietf.org/html/rfc6902) and [JQ path expressions](https://stedolan.github.io/jq/manual/#path(path_expression)). It can be configured for a specified group and kind in the `resource.customizations` key of the `argocd-cm` ConfigMap.

Following is an example of a customization which ignores the `refreshTime` status field of an [`ExternalSecret`](https://external-secrets.io/main/api/externalsecret/) resource:

```yaml
data:
  resource.customizations.ignoreResourceUpdates.external-secrets.io_ExternalSecret: |
    jsonPointers:
    - /status/refreshTime
    # JQ equivalent of the above:
    # jqPathExpressions:
    # - .status.refreshTime
```

It is possible to configure `ignoreResourceUpdates` to be applied to all tracked resources in every Application managed by an Argo CD instance. In order to do so, resource customizations can be configured like in the example below:

```yaml
data:
  resource.customizations.ignoreResourceUpdates.all: |
    jsonPointers:
    - /status
```

### Using ignoreDifferences to ignore reconcile

It is possible to use existing system-level `ignoreDifferences` customizations to ignore resource updates as well. Instead of copying all configurations, the `ignoreDifferencesOnResourceUpdates` setting can be used to add all ignored differences as ignored resource updates:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.compareoptions: |
    ignoreDifferencesOnResourceUpdates: true
```

## Default Configuration

By default, the metadata fields `generation`, `resourceVersion` and `managedFields` are always ignored for all resources.

## Finding Resources to Ignore

The application controller logs when a resource change triggers a refresh. You can use these logs to find high-churn resource kinds and then inspect those resources to find which fields to ignore.

To find these logs, search for `"Requesting app refresh caused by object update"`. The logs include structured fields for `api-version` and `kind`. Counting the number of refreshes triggered by api-version/kind should reveal the high-churn resource kinds.

!!!note
    These logs are at the `debug` level. Configure the application-controller's log level to `debug`.

Once you have identified some resources which change often, you can try to determine which fields are changing. Here is one approach:

```shell
kubectl get <resource> -o yaml > /tmp/before.yaml
# Wait a minute or two.
kubectl get <resource> -o yaml > /tmp/after.yaml
diff /tmp/before.yaml /tmp/after.yaml
```

The diff can give you a sense for which fields are changing and should perhaps be ignored.

## Checking Whether Resource Updates are Ignored

Whenever Argo CD skips a refresh due to an ignored resource update, the controller logs the following line:
"Ignoring change of object because none of the watched resource fields have changed".

Search the application-controller logs for this line to confirm that your resource ignore rules are being applied.

!!!note
    These logs are at the `debug` level. Configure the application-controller's log level to `debug`.

## Examples

### argoproj.io/Application

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.customizations.ignoreResourceUpdates.argoproj.io_Application: |
    jsonPointers:
    # Ignore when ownerReferences change, for example when a parent ApplicationSet changes often.
    - /metadata/ownerReferences
    # Ignore reconciledAt, since by itself it doesn't indicate any important change.
    - /status/reconciledAt
    jqPathExpressions:
    # Ignore lastTransitionTime for conditions; helpful when SharedResourceWarnings are being regularly updated but not
    # actually changing in content.
    - .status?.conditions[]?.lastTransitionTime
```

## Ignoring updates for untracked resources

Argo CD will only apply the `ignoreResourceUpdates` configuration to tracked resources of an application. This means dependent resources, such as a `ReplicaSet` and `Pod` created by a `Deployment`, will not ignore any updates and will trigger a reconcile of the application on any change.

If you want to apply the `ignoreResourceUpdates` configuration to an untracked resource, you can add the `argocd.argoproj.io/ignore-resource-updates=true` annotation in the dependent resource's manifest.

## Example

### CronJob

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
  namespace: test-cronjob
spec:
  schedule: "* * * * *"
  jobTemplate:
    metadata:
      annotations:
        argocd.argoproj.io/ignore-resource-updates: "true"
    spec:
      template:
        metadata:
          annotations:
            argocd.argoproj.io/ignore-resource-updates: "true"
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```

The resource updates will be ignored based on the `ignoreResourceUpdates` configuration in the `argocd-cm` ConfigMap:

`argocd-cm`:
```yaml
resource.customizations.ignoreResourceUpdates.batch_Job: |
    jsonPointers:
      - /status
resource.customizations.ignoreResourceUpdates.Pod: |
    jsonPointers:
      - /status
```
group and kind in  resource customizations  key of the  argocd cm  ConfigMap   Following is an example of a customization which ignores the  refreshTime  status field of an   ExternalSecret   https   external secrets io main api externalsecret   resource      yaml data    resource customizations ignoreResourceUpdates external secrets io ExternalSecret        jsonPointers         status refreshTime       JQ equivalent of the above        jqPathExpressions           status refreshTime      It is possible to configure  ignoreResourceUpdates  to be applied to all tracked resources in every Application managed by an Argo CD instance  In order to do so  resource customizations can be configured like in the example below      yaml data    resource customizations ignoreResourceUpdates all        jsonPointers         status          Using ignoreDifferences to ignore reconcile  It is possible to use existing system level  ignoreDifferences  customizations to ignore resource updates as well  Instead of copying all configurations  the  ignoreDifferencesOnResourceUpdates  setting can be used to add all ignored differences as ignored resource updates      yaml apiVersion  v1 kind  ConfigMap metadata    name  argocd cm data    resource compareoptions        ignoreDifferencesOnResourceUpdates  true         Default Configuration  By default  the metadata fields  generation    resourceVersion  and  managedFields  are always ignored for all resources      Finding Resources to Ignore  The application controller logs when a resource change triggers a refresh  You can use these logs to find high churn resource kinds and then inspect those resources to find which fields to ignore   To find these logs  search for   Requesting app refresh caused by object update    The logs include structured fields for  api version  and  kind    Counting the number of refreshes triggered  by api version kind should reveal the high churn resource kinds      note      These logs are at the  debug  level  
Configure the application controller s log level to  debug    Once you have identified some resources which change often  you can try to determine which fields are changing  Here is one approach      shell kubectl get  resource   o yaml    tmp before yaml   Wait a minute or two  kubectl get  resource   o yaml    tmp after yaml diff  tmp before yaml  tmp after      The diff can give you a sense for which fields are changing and should perhaps be ignored      Checking Whether Resource Updates are Ignored  Whenever Argo CD skips a refresh due to an ignored resource update  the controller logs the following line   Ignoring change of object because none of the watched resource fields have changed    Search the application controller logs for this line to confirm that your resource ignore rules are being applied      note     These logs are at the  debug  level  Configure the application controller s log level to  debug       Examples      argoproj io Application     yaml apiVersion  v1 kind  ConfigMap metadata    name  argocd cm data    resource customizations ignoreResourceUpdates argoproj io Application        jsonPointers        Ignore when ownerReferences change  for example when a parent ApplicationSet changes often         metadata ownerReferences       Ignore reconciledAt  since by itself it doesn t indicate any important change         status reconciledAt     jqPathExpressions        Ignore lastTransitionTime for conditions  helpful when SharedResourceWarnings are being regularly updated but not       actually changing in content         status  conditions    lastTransitionTime         Ignoring updates for untracked resources  ArgoCD will only apply  ignoreResourceUpdates  configuration to tracked resources of an application  This means dependant resources  such as a  ReplicaSet  and  Pod  created by a  Deployment   will not ignore any updates and trigger a reconcile of the application for any changes   If you want to apply the  ignoreResourceUpdates  
configuration to an untracked resource  you can add the  argocd argoproj io ignore resource updates true  annotation in the dependent resources manifest      Example      CronJob     yaml apiVersion  batch v1 kind  CronJob metadata    name  hello   namespace  test cronjob spec    schedule                jobTemplate      metadata        annotations          argocd argoproj io ignore resource updates   true      spec        template          metadata            annotations              argocd argoproj io ignore resource updates   true          spec            containers              name  hello             image  busybox 1 28             imagePullPolicy  IfNotPresent             command                 bin sh                c               date  echo Hello from the Kubernetes cluster           restartPolicy  OnFailure      The resource updates will be ignored based on your the  ignoreResourceUpdates  configuration in the  argocd cm  configMap    argocd cm      yaml resource customizations ignoreResourceUpdates batch Job        jsonPointers           status resource customizations ignoreResourceUpdates Pod        jsonPointers           status          "}
{"questions":"argocd NOTE The HA installation will require at least three different nodes due to pod anti affinity roles in the specs Additionally IPv6 only clusters are not supported Scaling Up High Availability A set of are provided for users who wish to run Argo CD in a highly available manner This runs more containers and runs Redis in HA mode Argo CD is largely stateless All data is persisted as Kubernetes objects which in turn is stored in Kubernetes etcd Redis is only used as a throw away cache and can be lost When lost it will be rebuilt without loss of service","answers":"# High Availability\n\nArgo CD is largely stateless. All data is persisted as Kubernetes objects, which in turn are stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.\n\nA set of [HA manifests](https:\/\/github.com\/argoproj\/argo-cd\/tree\/master\/manifests\/ha) is provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.\n\n> **NOTE:** The HA installation will require at least three different nodes due to pod anti-affinity rules in the\n> specs. Additionally, IPv6-only clusters are not supported.\n\n## Scaling Up\n\n### argocd-repo-server\n\n**settings:**\n\nThe `argocd-repo-server` is responsible for cloning the Git repository, keeping it up to date, and generating manifests using the appropriate tool.\n\n* `argocd-repo-server` forks\/execs the config management tool to generate manifests. The fork can fail due to a lack of memory or a limit on the number of OS threads.\nThe `--parallelismlimit` flag controls how many manifest generations run concurrently and helps avoid OOM kills.\n\n* The `argocd-repo-server` ensures that the repository is in a clean state during manifest generation with config management tools such as Kustomize, Helm,\nor a custom plugin. 
As a result, Git repositories with multiple applications might affect repository server performance.\nRead [Monorepo Scaling Considerations](#monorepo-scaling-considerations) for more information.\n\n* `argocd-repo-server` clones the repository into `\/tmp` (or the path specified in the `TMPDIR` env variable). The Pod might run out of disk space if it has too many repositories\nor if the repositories have a lot of files. To avoid this problem, mount a persistent volume.\n\n* `argocd-repo-server` uses `git ls-remote` to resolve ambiguous revisions such as `HEAD`, a branch or a tag name. This operation happens frequently\nand might fail. To avoid failed syncs, use the `ARGOCD_GIT_ATTEMPTS_COUNT` environment variable to retry failed requests.\n\n* `argocd-repo-server` caches generated manifests: every 3m (by default) Argo CD checks for changes to the app manifests. Argo CD assumes by default that manifests only change when the repo changes, so it caches the generated manifests (for 24h by default). With Kustomize remote bases, or in case a Helm chart gets changed without bumping its version number, the expected manifests can change even though the repo has not changed. By reducing the cache time, you can get the changes without waiting for 24h. Use `--repo-cache-expiration duration`; in low-volume environments we'd suggest trying `1h`. Bear in mind that this will negate the benefits of caching if set too low.\n\n* `argocd-repo-server` executes config management tools such as `helm` or `kustomize` and enforces a 90-second timeout. This timeout can be changed by using the `ARGOCD_EXEC_TIMEOUT` env variable. The value should be in the Go time duration string format, for example, `2m30s`.\n\n**metrics:**\n\n* `argocd_git_request_total` - Number of git requests. This metric provides two tags: `repo` - Git repo URL; `request_type` - `ls-remote` or `fetch`.\n\n* `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` - an environment variable that enables collecting RPC performance metrics. 
Enable it if you need to troubleshoot performance issues. Note: This metric is expensive to both query and store!\n\n### argocd-application-controller\n\n**settings:**\n\nThe `argocd-application-controller` uses `argocd-repo-server` to get generated manifests and the Kubernetes API server to get the actual cluster state.\n\n* Each controller replica uses two separate queues to process application reconciliation (milliseconds) and app syncing (seconds). The number of queue processors for each queue is controlled by\nthe `--status-processors` (20 by default) and `--operation-processors` (10 by default) flags. Increase the number of processors if your Argo CD instance manages too many applications.\nFor 1000 applications we use 50 for `--status-processors` and 25 for `--operation-processors`.\n\n* The manifest generation typically takes the most time during reconciliation. The duration of manifest generation is limited to make sure the controller refresh queue does not overflow.\nThe app reconciliation fails with a `Context deadline exceeded` error if the manifest generation is taking too much time. As a workaround, increase the value of `--repo-server-timeout-seconds` and\nconsider scaling up the `argocd-repo-server` deployment.\n\n* The controller uses Kubernetes watch APIs to maintain a lightweight Kubernetes cluster cache. This avoids querying Kubernetes during app reconciliation and significantly improves\nperformance. For performance reasons, the controller monitors and caches only the preferred versions of a resource. During reconciliation, the controller might have to convert cached resources from the\npreferred version into a version of the resource stored in Git. If `kubectl convert` fails because the conversion is not supported, then the controller falls back to a Kubernetes API query, which slows down\nreconciliation. In this case, we advise using the preferred resource version in Git.\n\n* The controller polls Git every 3m by default. 
You can change this duration using the `timeout.reconciliation` and `timeout.reconciliation.jitter` settings in the `argocd-cm` ConfigMap. The values of these fields are duration strings, e.g. `60s`, `1m`, `1h` or `1d`.\n\n* If the controller is managing too many clusters and uses too much memory, then you can shard clusters across multiple\ncontroller replicas. To enable sharding, increase the number of replicas in the `argocd-application-controller` `StatefulSet`\nand repeat the number of replicas in the `ARGOCD_CONTROLLER_REPLICAS` environment variable. The strategic merge patch below demonstrates the changes required to configure two controller replicas.\n\n* By default, the controller will update the cluster information every 10 seconds. If a problem in your cluster's network environment causes the update to take a long time, you can try modifying the environment variable `ARGO_CD_UPDATE_CLUSTER_INFO_TIMEOUT` to increase the timeout (the unit is seconds).\n\n```yaml\napiVersion: apps\/v1\nkind: StatefulSet\nmetadata:\n  name: argocd-application-controller\nspec:\n  replicas: 2\n  template:\n    spec:\n      containers:\n      - name: argocd-application-controller\n        env:\n        - name: ARGOCD_CONTROLLER_REPLICAS\n          value: \"2\"\n```\n\n* In order to manually set the cluster's shard number, specify the optional `shard` property when creating a cluster. If not specified, it will be calculated on the fly by the application controller.\n\n* The shard distribution algorithm of the `argocd-application-controller` can be set by using the `--sharding-method` parameter. 
Supported sharding methods are: `legacy` (default), `round-robin`, and `consistent-hashing`:\n- `legacy` mode uses a `uid`-based distribution (non-uniform).\n- `round-robin` uses an equal distribution across all shards.\n- `consistent-hashing` uses the consistent hashing with bounded loads algorithm, which tends toward an equal distribution and also reduces cluster or application reshuffling in case of additions or removals of shards or clusters.\n\nThe `--sharding-method` parameter can also be overridden by setting the key `controller.sharding.algorithm` in the `argocd-cmd-params-cm` `configMap` (preferably) or by setting the `ARGOCD_CONTROLLER_SHARDING_ALGORITHM` environment variable and specifying the same possible values.\n\n!!! warning \"Alpha Features\"\n    The `round-robin` shard distribution algorithm is an experimental feature. Reshuffling is known to occur in certain scenarios with cluster removal. If the cluster at rank-0 is removed, reshuffling all clusters across shards will occur and may temporarily have negative performance impacts.\n    The `consistent-hashing` shard distribution algorithm is an experimental feature. Extensive benchmarks have been documented on the [CNOE blog](https:\/\/cnoe.io\/blog\/argo-cd-application-scalability) with encouraging results. 
Community feedback is highly appreciated before moving this feature to a production ready state.\n\n* A cluster can be manually assigned and forced to a `shard` by patching the `shard` field in the cluster secret to contain the shard number, e.g.\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mycluster-secret\n  labels:\n    argocd.argoproj.io\/secret-type: cluster\ntype: Opaque\nstringData:\n  shard: 1\n  name: mycluster.example.com\n  server: https:\/\/mycluster.example.com\n  config: |\n    {\n      \"bearerToken\": \"<authentication token>\",\n      \"tlsClientConfig\": {\n        \"insecure\": false,\n        \"caData\": \"<base64 encoded certificate>\"\n      }\n    }\n```\n\n* `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` - environment variable that enables collecting RPC performance metrics. Enable it if you need to troubleshoot performance issues. Note: This metric is expensive to both query and store!\n\n* `ARGOCD_CLUSTER_CACHE_LIST_PAGE_BUFFER_SIZE` - environment variable controlling the number of pages the controller\n  buffers in memory when performing a list operation against the K8s api server while syncing the cluster cache. This\n  is useful when the cluster contains a large number of resources and cluster sync times exceed the default etcd\n  compaction interval timeout. In this scenario, when attempting to sync the cluster cache, the application controller\n  may throw an error that the `continue parameter is too old to display a consistent list result`. Setting a higher\n  value for this environment variable configures the controller with a larger buffer in which to store pre-fetched\n  pages which are processed asynchronously, increasing the likelihood that all pages have been pulled before the etcd\n  compaction interval timeout expires. 
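As a sketch, this variable can be set on the application controller `StatefulSet` from the HA manifests; the buffer size shown is illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-application-controller
spec:
  template:
    spec:
      containers:
      - name: argocd-application-controller
        env:
        # Illustrative value; size this against the resource counts
        # in your largest cluster.
        - name: ARGOCD_CLUSTER_CACHE_LIST_PAGE_BUFFER_SIZE
          value: "50"
```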
In the most extreme case, operators can set this value such that\n  `ARGOCD_CLUSTER_CACHE_LIST_PAGE_SIZE * ARGOCD_CLUSTER_CACHE_LIST_PAGE_BUFFER_SIZE` exceeds the largest resource\n  count (grouped by k8s api version, the granularity of parallelism for list operations). In this case, all resources will\n  be buffered in memory -- no api server request will be blocked by processing.\n\n* `ARGOCD_APPLICATION_TREE_SHARD_SIZE` - environment variable controlling the max number of resources stored in one Redis\n  key. Splitting the application tree into multiple keys helps to reduce the amount of traffic between the controller and Redis.\n  The default value is 0, which means that the application tree is stored in a single Redis key. A reasonable value is 100.\n\n**metrics**\n\n* `argocd_app_reconcile` - reports application reconciliation duration in seconds. Can be used to build a reconciliation duration heat map to get a high-level reconciliation performance picture.\n* `argocd_app_k8s_request_total` - number of k8s requests per application. Because fallback Kubernetes API queries are counted here, this metric is useful to identify which application has a resource with a\nnon-preferred version and is causing performance issues.\n\n### argocd-server\n\nThe `argocd-server` is stateless and probably the least likely to cause issues. To ensure there is no downtime during upgrades, consider increasing the number of replicas to `3` or more and repeat the number in the `ARGOCD_API_SERVER_REPLICAS` environment variable. 
The strategic merge patch below\ndemonstrates this.\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: argocd-server\nspec:\n  replicas: 3\n  template:\n    spec:\n      containers:\n      - name: argocd-server\n        env:\n        - name: ARGOCD_API_SERVER_REPLICAS\n          value: \"3\"\n```\n\n**settings:**\n\n* The `ARGOCD_API_SERVER_REPLICAS` environment variable is used to divide [the limit of concurrent login requests (`ARGOCD_MAX_CONCURRENT_LOGIN_REQUESTS_COUNT`)](.\/user-management\/index.md#failed-logins-rate-limiting) between each replica.\n* The `ARGOCD_GRPC_MAX_SIZE_MB` environment variable allows specifying the max size of the server response message in megabytes.\nThe default value is 200. You might need to increase this for an Argo CD instance that manages 3000+ applications.\n\n### argocd-dex-server, argocd-redis\n\nThe `argocd-dex-server` uses an in-memory database, and two or more instances would have inconsistent data. `argocd-redis` is pre-configured with the understanding of only three total redis servers\/sentinels.\n\n## Monorepo Scaling Considerations\n\nArgo CD repo server maintains one repository clone locally and uses it for application manifest generation. If manifest generation requires changing a file in the local repository clone, then only one concurrent manifest generation per server instance is allowed. This limitation might significantly slow down Argo CD if you have a monorepo with multiple applications (50+).\n\n### Enable Concurrent Processing\n\nArgo CD determines if manifest generation might change local files in the local repository clone based on the config management tool and application settings.\nIf the manifest generation has no side effects, then requests are processed in parallel without a performance penalty. 
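One way to gauge whether this serialization affects you is to count how many Applications share each repository. A sketch using `jq` over the output of `argocd app list -o json` (the sample data below is hypothetical; against a live instance, pipe the CLI output instead):

```shell
# Hypothetical sample of `argocd app list -o json` output, reduced to
# the single field this pipeline reads.
cat > /tmp/apps.json <<'EOF'
[
  {"spec": {"source": {"repoURL": "https://example.com/monorepo.git"}}},
  {"spec": {"source": {"repoURL": "https://example.com/monorepo.git"}}},
  {"spec": {"source": {"repoURL": "https://example.com/single-app.git"}}}
]
EOF

# Count Applications per repository; repos shared by many apps (50+) are
# the monorepos most likely to be affected.
jq -r '.[].spec.source.repoURL' /tmp/apps.json | sort | uniq -c | sort -rn
```

Repositories at the top of this list are the first candidates for the workarounds in this section.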
The following are known cases that might cause slowness and their workarounds:\n\n  * **Multiple Helm-based applications pointing to the same directory in one Git repository:** for historical reasons Argo CD generates Helm manifests sequentially. To enable parallel generation, set `ARGOCD_HELM_ALLOW_CONCURRENCY=true` on the `argocd-repo-server` deployment or create a `.argocd-allow-concurrency` file.\n    Future versions of Argo CD will enable this by default.\n\n  * **Multiple custom-plugin-based applications:** avoid creating temporary files during manifest generation and create a `.argocd-allow-concurrency` file in the app directory, or use the sidecar plugin option, which processes each application using a temporary copy of the repository.\n\n  * **Multiple Kustomize applications in the same repository with [parameter overrides](..\/user-guide\/parameters.md):** sorry, no workaround for now.\n\n### Manifest Paths Annotation\n\nArgo CD aggressively caches generated manifests and uses the repository commit SHA as a cache key. A new commit to the Git repository invalidates the cache for all applications configured in the repository.\nThis can negatively affect repositories with multiple applications. You can use [webhooks](https:\/\/github.com\/argoproj\/argo-cd\/blob\/master\/docs\/operator-manual\/webhook.md) and the `argocd.argoproj.io\/manifest-generate-paths` Application CRD annotation to solve this problem and improve performance.\n\nThe `argocd.argoproj.io\/manifest-generate-paths` annotation contains a semicolon-separated list of paths within the Git repository that are used during manifest generation. It will use the paths specified in the annotation to compare the last cached revision to the latest commit. 
If no modified files match the paths specified in `argocd.argoproj.io\/manifest-generate-paths`, then it will not trigger application reconciliation and the existing cache will be considered valid for the new commit.\n\nInstallations that use a different repository for each application are **not** subject to this behavior and will likely get no benefit from using these annotations.\n\nSimilarly, applications referencing an external Helm values file will not get the benefits of this feature when an unrelated change happens in the external source.\n\nFor webhooks, the comparison is done using the files specified in the webhook event payload instead.\n\n!!! note\n    Application manifest paths annotation support for webhooks depends on the git provider used for the Application. It is currently only supported for GitHub, GitLab, and Gogs based repos.\n\n* **Relative path** The annotation might contain a relative path. In this case the path is considered relative to the path specified in the application source:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: guestbook\n  namespace: argocd\n  annotations:\n    # resolves to the 'guestbook' directory\n    argocd.argoproj.io\/manifest-generate-paths: .\nspec:\n  source:\n    repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps.git\n    targetRevision: HEAD\n    path: guestbook\n# ...\n```\n\n* **Absolute path** The annotation value might be an absolute path starting with '\/'. In this case path is considered as an absolute path within the Git repository:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: guestbook\n  annotations:\n    argocd.argoproj.io\/manifest-generate-paths: \/guestbook\nspec:\n  source:\n    repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps.git\n    targetRevision: HEAD\n    path: guestbook\n# ...\n```\n\n* **Multiple paths** It is possible to put multiple paths into the annotation. 
Paths must be separated with a semicolon (`;`):\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: guestbook\n  annotations:\n    # resolves to 'my-application' and 'shared'\n    argocd.argoproj.io\/manifest-generate-paths: .;..\/shared\nspec:\n  source:\n    repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps.git\n    targetRevision: HEAD\n    path: my-application\n# ...\n```\n\n* **Glob paths** The annotation might contain a glob pattern path, which can be any pattern supported by the [Go filepath Match function](https:\/\/pkg.go.dev\/path\/filepath#Match):\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: guestbook\n  namespace: argocd\n  annotations:\n    # resolves to any file matching the pattern of *-secret.yaml in the top level shared folder\n    argocd.argoproj.io\/manifest-generate-paths: \"\/shared\/*-secret.yaml\"\nspec:\n  source:\n    repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps.git\n    targetRevision: HEAD\n    path: guestbook\n# ...\n```\n\n!!! note\n    If application manifest generation using the `argocd.argoproj.io\/manifest-generate-paths` annotation feature is enabled, only the resources specified by this annotation will be sent to the CMP server for manifest generation, rather than the entire repository. To determine the appropriate resources, a common root path is calculated based on the paths provided in the annotation. The application path serves as the deepest path that can be selected as the root.\n\n### Application Sync Timeout & Jitter\n\nArgo CD has a timeout for application syncs. It will trigger a refresh for each application periodically when the timeout expires.\nWith a large number of applications, this will cause a spike in the refresh queue and can cause a spike to the repo-server component. 
To avoid this, you can set a jitter to the sync timeout which will spread out the refreshes and give time to the repo-server to catch up.\n\nThe jitter is the maximum duration that can be added to the sync timeout, so if the sync timeout is 5 minutes and the jitter is 1 minute, then the actual timeout will be between 5 and 6 minutes.\n\nTo configure the jitter, you can set the following environment variables:\n\n* `ARGOCD_RECONCILIATION_JITTER` - The jitter to apply to the sync timeout. Disabled when the value is 0. Defaults to 0.\n\n## Rate Limiting Application Reconciliations\n\nTo prevent high controller resource usage or sync loops caused either by misbehaving apps or by other environment-specific factors,\nwe can configure rate limits on the workqueues used by the application controller. There are two types of rate limits that can be configured:\n\n  * Global rate limits\n  * Per item rate limits\n\nThe final rate limiter uses a combination of both and calculates the final backoff as `max(globalBackoff, perItemBackoff)`.\n\n### Global rate limits\n\nThis is disabled by default. It is a simple bucket-based rate limiter that limits the number of items that can be queued per second.\nThis is useful to prevent a large number of apps from being queued at the same time.\n\nTo configure the bucket limiter, you can set the following environment variables:\n\n  * `WORKQUEUE_BUCKET_SIZE` - The number of items that can be queued in a single burst. Defaults to 500.\n  * `WORKQUEUE_BUCKET_QPS` - The number of items that can be queued per second. Defaults to MaxFloat64, which disables the limiter.\n\n### Per item rate limits\n\nBy default this returns a fixed base delay\/backoff value, but it can be configured to return exponential values.\nThe per item rate limiter limits the number of times a particular item can be queued. 
This is based on exponential backoff, where the backoff time for an item keeps increasing exponentially\nif it is queued multiple times in a short period, but the backoff is reset automatically if a configured `cool down` period has elapsed since the last time the item was queued.\n\nTo configure the per item limiter, you can set the following environment variables:\n\n  * `WORKQUEUE_FAILURE_COOLDOWN_NS`: the cool down period in nanoseconds; once this period has elapsed for an item, its backoff is reset. Exponential backoff is disabled if set to 0 (default), e.g. 10 * 10^9 (= 10s)\n  * `WORKQUEUE_BASE_DELAY_NS`: the base delay in nanoseconds; this is the initial backoff used in the exponential backoff formula. Defaults to 1000 (= 1\u03bcs)\n  * `WORKQUEUE_MAX_DELAY_NS`: the max delay in nanoseconds; this is the max backoff limit. Defaults to 3 * 10^9 (= 3s)\n  * `WORKQUEUE_BACKOFF_FACTOR`: the backoff factor; this is the factor by which the backoff is increased for each retry. Defaults to 1.5\n\nThe formula used to calculate the backoff time for an item, where `numRequeue` is the number of times the item has been queued\nand `lastRequeueTime` is the time at which the item was last queued:\n\n- When `WORKQUEUE_FAILURE_COOLDOWN_NS` != 0 :\n\n```\nbackoff = time.Since(lastRequeueTime) >= WORKQUEUE_FAILURE_COOLDOWN_NS ?\n          WORKQUEUE_BASE_DELAY_NS :\n          min(\n              WORKQUEUE_MAX_DELAY_NS,\n              WORKQUEUE_BASE_DELAY_NS * WORKQUEUE_BACKOFF_FACTOR ^ (numRequeue)\n              )\n```\n\n- When `WORKQUEUE_FAILURE_COOLDOWN_NS` = 0 :\n\n```\nbackoff = WORKQUEUE_BASE_DELAY_NS\n```\n\nFor example, with the default settings and a non-zero cool down, an item requeued 4 times within the cool down window backs off for `1000 * 1.5^4` = 5062.5ns (about 5\u03bcs), and the backoff can never exceed `WORKQUEUE_MAX_DELAY_NS` (3s).\n\n## HTTP Request Retry Strategy\n\nIn scenarios where network instability or transient server errors occur, the retry strategy ensures the robustness of HTTP communication by automatically resending failed requests. 
It uses a combination of maximum retries and backoff intervals to prevent overwhelming the server or thrashing the network.\n\n### Configuring Retries\n\nThe retry logic can be fine-tuned with the following environment variables:\n\n* `ARGOCD_K8SCLIENT_RETRY_MAX` - The maximum number of retries for each request. The request will be dropped after this count is reached. Defaults to 0 (no retries).\n* `ARGOCD_K8SCLIENT_RETRY_BASE_BACKOFF` - The initial backoff delay on the first retry attempt in ms. Subsequent retries will double this backoff time up to a maximum threshold. Defaults to 100ms.\n\n### Backoff Strategy\n\nThe backoff strategy employed is a simple exponential backoff without jitter. The backoff time increases exponentially with each retry attempt until a maximum backoff duration is reached.\n\nThe formula for calculating the backoff time is:\n\n```\nbackoff = min(retryWaitMax, baseRetryBackoff * (2 ^ retryAttempt))\n```\nWhere `retryAttempt` starts at 0 and increments by 1 for each subsequent retry. With the defaults (100ms base backoff, 10s cap), the waits are 100ms, 200ms, 400ms, and so on, reaching the 10s cap from `retryAttempt` 7 onward (since 100ms * 2^7 = 12.8s > 10s).\n\n### Maximum Wait Time\n\nThere is a cap on the backoff time to prevent excessive wait times between retries. This cap is defined by:\n\n`retryWaitMax` - The maximum duration to wait before retrying. This ensures that retries happen within a reasonable timeframe. Defaults to 10 seconds.\n\n### Non-Retriable Conditions\n\nNot all HTTP responses are eligible for retries. The following conditions will not trigger a retry:\n\n* Responses with a status code indicating client errors (4xx) except for 429 Too Many Requests.\n* Responses with the status code 501 Not Implemented.\n\n## CPU\/Memory Profiling\n\nArgo CD optionally exposes a profiling endpoint that can be used to profile the CPU and memory usage of the Argo CD component.\nThe profiling endpoint is available on the metrics port of each component. See [metrics](.\/metrics.md) for more information about the port.\nFor security reasons, the profiling endpoint is disabled by default. 
The endpoint can be enabled by setting the `server.profile.enabled`\nor `controller.profile.enabled` key of the [argocd-cmd-params-cm](argocd-cmd-params-cm.yaml) ConfigMap to `true`.\nOnce the endpoint is enabled, you can use the Go pprof tool to collect CPU and memory profiles. Example:\n\n```bash\n$ kubectl port-forward svc\/argocd-metrics 8082:8082\n$ go tool pprof http:\/\/localhost:8082\/debug\/pprof\/heap\n```","site":"argocd","answers_cleaned":"# High Availability\n\nArgo CD is largely stateless. All data is persisted as Kubernetes objects, which in turn is stored in Kubernetes' etcd. Redis is only used as a throw-away cache and can be lost. When lost, it will be rebuilt without loss of service.\n\nA set of [HA manifests](https:\/\/github.com\/argoproj\/argo-cd\/tree\/master\/manifests\/ha) are provided for users who wish to run Argo CD in a highly available manner. This runs more containers, and runs Redis in HA mode.\n\n!!! note\n    The HA installation will require at least three different nodes due to pod anti-affinity roles in the specs. Additionally, IPv6 only clusters are not supported.\n\n## Scaling Up\n\n### argocd-repo-server\n\n**settings:**\n\nThe `argocd-repo-server` is responsible for cloning the Git repository, keeping it up to date and generating manifests using the appropriate tool.\n\n* `argocd-repo-server` fork\/exec config management tools to generate manifests. The fork can fail due to lack of memory or a limit on the number of OS threads. The `--parallelismlimit` flag controls how many manifest generations are running concurrently and helps avoid OOM kills.\n\n* the `argocd-repo-server` ensures that the repository is in the clean state during the manifest generation using config management tools such as Kustomize, Helm or a custom plugin. As a result, Git repositories with multiple applications might affect repository server performance. Read [Monorepo Scaling Considerations](#monorepo-scaling-considerations) for more information.\n\n* `argocd-repo-server` clones the repository 
into `\/tmp` (or the path specified in the `TMPDIR` env variable). The Pod might run out of disk space if it has too many repositories or if the repositories have a lot of files. To avoid this problem mount a persistent volume.\n\n* `argocd-repo-server` uses `git ls-remote` to resolve ambiguous revisions such as `HEAD`, a branch or a tag name. This operation happens frequently and might fail. To avoid failed syncs use the `ARGOCD_GIT_ATTEMPTS_COUNT` environment variable to retry failed requests.\n\n* `argocd-repo-server` Every 3m (by default) Argo CD checks for changes to the app manifests. Argo CD assumes by default that manifests only change when the repo changes, so it caches the generated manifests (for 24h by default). With Kustomize remote bases, or in case a Helm chart gets changed without bumping its version number, the expected manifests can change even though the repo has not changed. By reducing the cache time, you can get the changes without waiting for 24h. Use `--repo-cache-expiration duration`, and we'd suggest in low volume environments you try `1h`. Bear in mind that this will negate the benefits of caching if set too low.\n\n* `argocd-repo-server` executes config management tools such as `helm` or `kustomize` and enforces a 90 second timeout. This timeout can be changed by using the `ARGOCD_EXEC_TIMEOUT` env variable. The value should be in the Go time duration string format, for example, `2m30s`.\n\n**metrics:**\n\n* `argocd_git_request_total` - Number of git requests. This metric provides two tags: `repo` - Git repo URL; `request_type` - `ls-remote` or `fetch`.\n* `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` - Is an environment variable that enables collecting RPC performance metrics. Enable it if you need to troubleshoot performance issues. Note: This metric is expensive to both query and store.\n\n### argocd-application-controller\n\n**settings:**\n\nThe `argocd-application-controller` uses `argocd-repo-server` to get generated manifests and Kubernetes API server to 
get the actual cluster state.\n\n* each controller replica uses two separate queues to process application reconciliation (milliseconds) and app syncing (seconds). The number of queue processors for each queue is controlled by the `--status-processors` (20 by default) and `--operation-processors` (10 by default) flags. Increase the number of processors if your Argo CD instance manages too many applications. For 1000 applications we use 50 for `--status-processors` and 25 for `--operation-processors`.\n\n* The manifest generation typically takes the most time during reconciliation. The duration of manifest generation is limited to make sure the controller refresh queue does not overflow. The app reconciliation fails with a `Context deadline exceeded` error if the manifest generation is taking too much time. As a workaround, increase the value of `--repo-server-timeout-seconds` and consider scaling up the `argocd-repo-server` deployment.\n\n* The controller uses Kubernetes watch APIs to maintain a lightweight Kubernetes cluster cache. This allows avoiding querying Kubernetes during app reconciliation and significantly improves performance. For performance reasons the controller monitors and caches only the preferred versions of a resource. During reconciliation, the controller might have to convert cached resources from the preferred version into a version of the resource stored in Git. If `kubectl convert` fails because the conversion is not supported then the controller falls back to a Kubernetes API query, which slows down reconciliation. In this case, we advise to use the preferred resource version in Git.\n\n* The controller polls Git every 3m by default. You can change this duration using the `timeout.reconciliation` and `timeout.reconciliation.jitter` settings in the `argocd-cm` ConfigMap. The value of the fields is a duration string e.g. `60s`, `1m`, `1h` or `1d`.\n\n* If the controller is managing too many clusters and uses too much memory then you can shard clusters across 
multiple controller replicas. To enable sharding, increase the number of replicas in the `argocd-application-controller` `StatefulSet` and repeat the number of replicas in the `ARGOCD_CONTROLLER_REPLICAS` environment variable. The strategic merge patch below demonstrates changes required to configure two controller replicas.\n\n* By default, the controller will update the cluster information every 10 seconds. If there is a problem with your cluster network environment that is causing the update time to take a long time, you can try modifying the environment variable `ARGO_CD_UPDATE_CLUSTER_INFO_TIMEOUT` to increase the timeout (the unit is seconds).\n\n```yaml\napiVersion: apps\/v1\nkind: StatefulSet\nmetadata:\n  name: argocd-application-controller\nspec:\n  replicas: 2\n  template:\n    spec:\n      containers:\n      - name: argocd-application-controller\n        env:\n        - name: ARGOCD_CONTROLLER_REPLICAS\n          value: \"2\"\n```\n\nIn order to manually set the cluster's shard number, specify the optional `shard` property when creating a cluster. If not specified, it will be calculated on the fly by the application controller.\n\nThe shard distribution algorithm of the `argocd-application-controller` can be set by using the `--sharding-method` parameter. Supported sharding methods are: `legacy` (default), `round-robin`, `consistent-hashing`:\n\n* `legacy` mode uses an `uid` based distribution (non-uniform)\n* `round-robin` uses an equal distribution across all shards\n* `consistent-hashing` uses the consistent hashing with bounded loads algorithm which tends to equal distribution and also reduces cluster or application reshuffling in case of additions or removals of shards or clusters\n\nThe `--sharding-method` parameter can also be overridden by setting the key `controller.sharding.algorithm` in the `argocd-cmd-params-cm` configMap (preferably) or by setting the `ARGOCD_CONTROLLER_SHARDING_ALGORITHM` environment variable and by specifying the same possible values.\n\n!!! warning \"Alpha Features\"\n    The `round-robin` shard distribution algorithm is an experimental feature. Reshuffling is known to occur in certain scenarios with cluster removal. If the cluster at rank-0 is removed, reshuffling all clusters across shards will occur and may temporarily have negative performance impacts.\n\n    The `consistent-hashing` shard distribution algorithm is an experimental feature. Extensive benchmarks have been documented on the [CNOE blog](https:\/\/cnoe.io\/blog\/argo-cd-application-scalability) with encouraging results. Community feedback is highly appreciated before moving this feature to a production-ready state.\n\nA cluster can be manually assigned and forced to a `shard` by patching the `shard` field in the cluster secret to contain the shard number, e.g.:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mycluster-secret\n  labels:\n    argocd.argoproj.io\/secret-type: cluster\ntype: Opaque\nstringData:\n  shard: \"1\"\n  name: mycluster.example.com\n  server: https:\/\/mycluster.example.com\n  config: |\n    {\n      \"bearerToken\": \"<authentication token>\",\n      \"tlsClientConfig\": {\n        \"insecure\": false,\n        \"caData\": \"<base64 encoded certificate>\"\n      }\n    }\n```\n\n* `ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM` - environment variable that enables collecting RPC performance metrics. Enable it if you need to troubleshoot performance issues. Note: This metric is expensive to both query and store.\n\n* `ARGOCD_CLUSTER_CACHE_LIST_PAGE_BUFFER_SIZE` - environment variable controlling the number of pages the controller buffers in memory when performing a list operation against the K8s api server while syncing the cluster cache. This is useful when the cluster contains a large number of resources and cluster sync times exceed the default etcd compaction interval timeout. In this scenario, when attempting to sync the cluster cache, the application controller may throw an error that the `continue parameter is too old to display a consistent list result`. Setting a 
higher value for this environment variable configures the controller with a larger buffer in which to store pre-fetched pages which are processed asynchronously, increasing the likelihood that all pages have been pulled before the etcd compaction interval timeout expires. In the most extreme case, operators can set this value such that `ARGOCD_CLUSTER_CACHE_LIST_PAGE_SIZE * ARGOCD_CLUSTER_CACHE_LIST_PAGE_BUFFER_SIZE` exceeds the largest resource count (grouped by k8s api version, the granule of parallelism for list operations). In this case, all resources will be buffered in memory -- no api server request will be blocked by processing.\n\n* `ARGOCD_APPLICATION_TREE_SHARD_SIZE` - environment variable controlling the max number of resources stored in one Redis key. Splitting the application tree into multiple keys helps to reduce the amount of traffic between the controller and Redis. The default value is 0, which means that the application tree is stored in a single Redis key. The reasonable value is 100.\n\n**metrics:**\n\n* `argocd_app_reconcile` - reports application reconciliation duration in seconds. Can be used to build a reconciliation duration heat map to get a high-level reconciliation performance picture.\n* `argocd_app_k8s_request_total` - number of k8s requests per application. The number of fallback Kubernetes API queries - useful to identify which application has a resource with non-preferred version and causes performance issues.\n\n### argocd-server\n\nThe `argocd-server` is stateless and probably the least likely to cause issues. To ensure there is no downtime during upgrades, consider increasing the number of replicas to `3` or more and repeat the number in the `ARGOCD_API_SERVER_REPLICAS` environment variable. The strategic merge patch below demonstrates this.\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: argocd-server\nspec:\n  replicas: 3\n  template:\n    spec:\n      containers:\n      - name: argocd-server\n        env:\n        - name: ARGOCD_API_SERVER_REPLICAS\n          value: \"3\"\n```\n\n**settings:**\n\nThe `ARGOCD_API_SERVER_REPLICAS` environment variable is used to divide [the limit of concurrent login requests (`ARGOCD_MAX_CONCURRENT_LOGIN_REQUESTS_COUNT`)](user-management\/index.md#failed-logins-rate-limiting) between each replica.\n\nThe `ARGOCD_GRPC_MAX_SIZE_MB` environment variable allows specifying the max size of the server response message in megabytes. The default value is 200. You might need to increase this for an Argo CD instance that manages 3000+ applications.\n\n### argocd-dex-server, argocd-redis\n\nThe `argocd-dex-server` uses an in-memory database, and two or more instances would have inconsistent data. `argocd-redis` is pre-configured with the understanding of only three total redis servers\/sentinels.\n\n## Monorepo Scaling Considerations\n\nArgo CD repo server maintains one repository clone locally and uses it for application manifest generation. If the manifest generation requires to change a file in the local repository clone then only one concurrent manifest generation per server instance is allowed. This limitation might significantly slowdown Argo CD if you have a mono repository with multiple applications (50+).\n\n### Enable Concurrent Processing\n\nArgo CD determines if manifest generation might change local files in the local repository clone based on the config management tool and application settings. If the manifest generation has no side effects then requests are processed in parallel without a performance penalty. The following are known cases that might cause slowness and their workarounds:\n\n* **Multiple Helm based applications pointing to the same directory in one Git repository:** for historical reasons Argo CD generates Helm manifests sequentially. To enable parallel generation set `ARGOCD_HELM_ALLOW_CONCURRENCY=true` to the `argocd-repo-server` deployment or create a `.argocd-allow-concurrency` file. Future versions of Argo CD will enable this by default.\n\n 
* **Multiple Custom plugin based applications:** avoid creating temporal files during manifest generation and create a `.argocd-allow-concurrency` file in the app directory, or use the sidecar plugin option, which processes each application using a temporary copy of the repository.\n\n* **Multiple Kustomize applications in same repository with [parameter overrides](..\/user-guide\/parameters.md):** sorry, no workaround for now.\n\n## Manifest Paths Annotation\n\nArgo CD aggressively caches generated manifests and uses the repository commit SHA as a cache key. A new commit to the Git repository invalidates the cache for all applications configured in the repository. This can negatively affect repositories with multiple applications. You can use [webhooks](https:\/\/github.com\/argoproj\/argo-cd\/blob\/master\/docs\/operator-manual\/webhook.md) and the `argocd.argoproj.io\/manifest-generate-paths` Application CRD annotation to solve this problem and improve performance.\n\nThe `argocd.argoproj.io\/manifest-generate-paths` annotation contains a semicolon-separated list of paths within the Git repository that are used during manifest generation. It will use the paths specified in the annotation to compare the last cached revision to the latest commit. If no modified files match the paths specified in `argocd.argoproj.io\/manifest-generate-paths`, then it will not trigger application reconciliation and the existing cache will be considered valid for the new commit.\n\nInstallations that use a different repository for each application are **not** subject to this behavior and will likely get no benefit from using these annotations.\n\nSimilarly, applications referencing an external Helm values file will not get the benefits of this feature when an unrelated change happens in the external source.\n\nFor webhooks, the comparison is done using the files specified in the webhook event payload instead.\n\n!!! note\n    Application manifest paths annotation support for webhooks depends on the git provider 
used for the Application. It is currently only supported for GitHub, GitLab, and Gogs based repos.\n\n* **Relative path:** The annotation might contain a relative path. In this case the path is considered relative to the path specified in the application source:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: guestbook\n  namespace: argocd\n  annotations:\n    # resolves to the 'guestbook' directory\n    argocd.argoproj.io\/manifest-generate-paths: .\nspec:\n  source:\n    repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps.git\n    targetRevision: HEAD\n    path: guestbook\n```\n\n* **Absolute path:** The annotation value might be an absolute path starting with `\/`. In this case path is considered as an absolute path within the Git repository:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: guestbook\n  annotations:\n    argocd.argoproj.io\/manifest-generate-paths: \/guestbook\nspec:\n  source:\n    repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps.git\n    targetRevision: HEAD\n    path: guestbook\n```\n\n* **Multiple paths:** It is possible to put multiple paths into the annotation. Paths must be separated with a semicolon (`;`):\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: guestbook\n  annotations:\n    # resolves to 'my-application' and 'shared'\n    argocd.argoproj.io\/manifest-generate-paths: .;..\/shared\nspec:\n  source:\n    repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps.git\n    targetRevision: HEAD\n    path: my-application\n```\n\n* **Glob paths:** The annotation might contain a glob pattern path, which can be any pattern supported by the [Go filepath Match function](https:\/\/pkg.go.dev\/path\/filepath#Match):\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: guestbook\n  namespace: argocd\n  annotations:\n    # resolves to any file matching the pattern of *-secret.yaml in the top level shared folder\n    argocd.argoproj.io\/manifest-generate-paths: \"\/shared\/*-secret.yaml\"\nspec:\n  source:\n    repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps.git\n    targetRevision: HEAD\n    path: guestbook\n```\n\n!!! note\n    If application manifest generation using the `argocd.argoproj.io\/manifest-generate-paths` annotation feature is enabled, only the resources specified by this annotation will be sent to the CMP server for manifest generation, rather than the entire repository. To determine the appropriate resources, a common root path is calculated based on the paths provided in the annotation. The application path serves as the deepest path that can be selected as the root.\n\n## Application Sync Timeout + Jitter\n\nArgo CD has a timeout for application syncs. It will trigger a refresh for each application periodically when the timeout expires. With a large number of applications, this will cause a spike in the refresh queue and can cause a spike to the repo-server component. To avoid this, you can set a jitter to the sync timeout which will spread out the refreshes and give time to the repo-server to catch up.\n\nThe jitter is the maximum duration that can be added to the sync timeout, so if the sync timeout is 5 minutes and the jitter is 1 minute, then the actual timeout will be between 5 and 6 minutes.\n\nTo configure the jitter you can set the following environment variables:\n\n* `ARGOCD_RECONCILIATION_JITTER` - The jitter to apply to the sync timeout. Disabled when value is 0. Defaults to 0.\n\n## Rate Limiting Application Reconciliations\n\nTo prevent high controller resource usage or sync loops caused either due to misbehaving apps or other environment specific factors, we can configure rate limits on the workqueues used by the application controller. There are two types of rate limits that can be configured:\n\n* Global rate limits\n* Per item rate limits\n\nThe final rate limiter uses a combination of both and calculates the final backoff as `max(globalBackoff, perItemBackoff)`.\n\n### Global rate limits\n\n 
This is disabled by default; it is a simple bucket-based rate limiter that limits the number of items that can be queued per second. This is useful to prevent a large number of apps from being queued at the same time.\n\nTo configure the bucket limiter you can set the following environment variables:\n\n* `WORKQUEUE_BUCKET_SIZE` : The number of items that can be queued in a single burst. Defaults to 500.\n* `WORKQUEUE_BUCKET_QPS` : The number of items that can be queued per second. Defaults to MaxFloat64, which disables the limiter.\n\n### Per item rate limits\n\nThis by default returns a fixed base delay\/backoff value but can be configured to return exponential values. Per item rate limiter limits the number of times a particular item can be queued."}
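The per-item backoff formula above can be sketched as a small Python function. This is a minimal illustration of the documented formula only, not the controller's actual Go implementation; the parameter defaults mirror the documented environment-variable defaults.

```python
# Minimal sketch of the documented per-item workqueue backoff formula.
# Not Argo CD's actual Go implementation; defaults mirror the documented
# environment-variable defaults.

def backoff_ns(num_requeue: int, ns_since_last_requeue: int,
               cooldown_ns: int = 10 * 10**9,   # WORKQUEUE_FAILURE_COOLDOWN_NS
               base_delay_ns: int = 1000,       # WORKQUEUE_BASE_DELAY_NS
               max_delay_ns: int = 3 * 10**9,   # WORKQUEUE_MAX_DELAY_NS
               factor: float = 1.5) -> float:   # WORKQUEUE_BACKOFF_FACTOR
    if cooldown_ns == 0:
        # Exponential backoff disabled: always the fixed base delay.
        return base_delay_ns
    if ns_since_last_requeue >= cooldown_ns:
        # Cool-down period elapsed: the backoff resets to the base delay.
        return base_delay_ns
    # Otherwise grow exponentially, capped at the max delay.
    return min(max_delay_ns, base_delay_ns * factor ** num_requeue)

# A 10th requeue inside the cool-down window grows the delay ...
print(backoff_ns(10, 1))            # 1000 * 1.5^10 ≈ 57665 ns
# ... repeated requeues are capped at WORKQUEUE_MAX_DELAY_NS,
print(backoff_ns(60, 1))            # 3000000000 ns
# and the delay resets once the cool-down period has elapsed.
print(backoff_ns(60, 20 * 10**9))   # 1000 ns
```

Note how the cap, not the requeue count, dominates once `factor ^ numRequeue` overtakes `WORKQUEUE_MAX_DELAY_NS / WORKQUEUE_BASE_DELAY_NS`.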
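The K8s client retry backoff formula above can also be checked numerically with a short sketch (illustrative only; the function name and parameters are ours, standing in for `ARGOCD_K8SCLIENT_RETRY_BASE_BACKOFF` and the documented 10-second `retryWaitMax` cap):

```python
# Illustrative sketch of the documented HTTP retry backoff:
#   backoff = min(retryWaitMax, baseRetryBackoff * 2^retryAttempt)

def retry_backoff_ms(attempt: int, base_ms: int = 100,
                     wait_max_ms: int = 10_000) -> int:
    # attempt starts at 0 and increments by 1 per retry
    return min(wait_max_ms, base_ms * 2 ** attempt)

# With the defaults, delays double until hitting the 10-second cap:
print([retry_backoff_ms(a) for a in range(8)])
# [100, 200, 400, 800, 1600, 3200, 6400, 10000]
```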
{"questions":"argocd note Git webhook notifications from GitHub GitLab Bitbucket Bitbucket Server Azure DevOps and Gogs The following explains how to configure this delay from polling the API server can be configured to receive webhook events Argo CD supports Overview Git Webhook Configuration Argo CD polls Git repositories every three minutes to detect changes to the manifests To eliminate a Git webhook for GitHub but the same process should be applicable to other providers","answers":"# Git Webhook Configuration\n\n## Overview\n\nArgo CD polls Git repositories every three minutes to detect changes to the manifests. To eliminate\nthis delay from polling, the API server can be configured to receive webhook events. Argo CD supports\nGit webhook notifications from GitHub, GitLab, Bitbucket, Bitbucket Server, Azure DevOps and Gogs. The following explains how to configure\na Git webhook for GitHub, but the same process should be applicable to other providers.\n\n!!! note\n    The webhook handler does not differentiate between branch events and tag events where the branch and tag names are\n    the same. A hook event for a push to branch `x` will trigger a refresh for an app pointing at the same repo with\n    `targetRevision: refs\/tags\/x`.\n\n## 1. Create The WebHook In The Git Provider\n\nIn your Git provider, navigate to the settings page where webhooks can be configured. The payload\nURL configured in the Git provider should use the `\/api\/webhook` endpoint of your Argo CD instance\n(e.g. `https:\/\/argocd.example.com\/api\/webhook`). If you wish to use a shared secret, input an\narbitrary value in the secret. This value will be used when configuring the webhook in the next step.\n\nTo prevent DDoS attacks with unauthenticated webhook events (the `\/api\/webhook` endpoint currently lacks rate limiting protection), it is recommended to limit the payload size. 
You can achieve this by configuring the `argocd-cm` ConfigMap with the `webhook.maxPayloadSizeMB` attribute. The default value is 1GB.\n\n## GitHub\n\n![Add Webhook](..\/assets\/webhook-config.png \"Add Webhook\")\n\n!!! note\n    When creating the webhook in GitHub, the \"Content type\" needs to be set to \"application\/json\". The default value \"application\/x-www-form-urlencoded\" is not supported by the library used to handle the hooks.\n\n## Azure DevOps\n\n![Add Webhook](..\/assets\/azure-devops-webhook-config.png \"Add Webhook\")\n\nAzure DevOps optionally supports securing the webhook using basic authentication. To use it, specify the username and password in the webhook configuration and configure the same username\/password in the `argocd-secret` Kubernetes secret under the\n`webhook.azuredevops.username` and `webhook.azuredevops.password` keys.\n\n## 2. Configure Argo CD With The WebHook Secret (Optional)\n\nConfiguring a webhook shared secret is optional, since Argo CD will still refresh applications\nrelated to the Git repository, even with unauthenticated webhook events. This is safe to do since\nthe contents of webhook payloads are considered untrusted, and will only result in a refresh of the\napplication (a process which already occurs at three-minute intervals). 
If Argo CD is publicly\naccessible, then configuring a webhook secret is recommended to prevent a DDoS attack.\n\nIn the `argocd-secret` Kubernetes secret, configure one of the following keys with the Git\nprovider's webhook secret configured in step 1.\n\n| Provider        | K8s Secret Key                   |\n|-----------------|----------------------------------|\n| GitHub          | `webhook.github.secret`          |\n| GitLab          | `webhook.gitlab.secret`          |\n| BitBucket       | `webhook.bitbucket.uuid`         |\n| BitBucketServer | `webhook.bitbucketserver.secret` |\n| Gogs            | `webhook.gogs.secret`            |\n| Azure DevOps    | `webhook.azuredevops.username`   |\n|                 | `webhook.azuredevops.password`   |\n\nEdit the Argo CD Kubernetes secret:\n\n```bash\nkubectl edit secret argocd-secret -n argocd\n```\n\nTIP: for ease of entering secrets, Kubernetes supports inputting secrets in the `stringData` field,\nwhich saves you the trouble of base64 encoding the values and copying it to the `data` field.\nSimply copy the shared webhook secret created in step 1, to the corresponding\nGitHub\/GitLab\/BitBucket key under the `stringData` field:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: argocd-secret\n  namespace: argocd\ntype: Opaque\ndata:\n...\n\nstringData:\n  # github webhook secret\n  webhook.github.secret: shhhh! it's a GitHub secret\n\n  # gitlab webhook secret\n  webhook.gitlab.secret: shhhh! it's a GitLab secret\n\n  # bitbucket webhook secret\n  webhook.bitbucket.uuid: your-bitbucket-uuid\n\n  # bitbucket server webhook secret\n  webhook.bitbucketserver.secret: shhhh! it's a Bitbucket server secret\n\n  # gogs server webhook secret\n  webhook.gogs.secret: shhhh! 
it's a gogs server secret\n\n  # azuredevops username and password\n  webhook.azuredevops.username: admin\n  webhook.azuredevops.password: secret-password\n```\n\nAfter saving, the changes should take effect automatically.\n\n### Alternative\n\nIf you want to store webhook data in **another** Kubernetes `Secret` instead of `argocd-secret`, set the corresponding key in `argocd-secret` to a value that starts with `$`, followed by the name of your Kubernetes `Secret`, a `:` (colon), and the key within that `Secret`.\n\nSyntax: `$<k8s_secret_name>:<a_key_in_that_k8s_secret>`\n\n> NOTE: The Secret must have the label `app.kubernetes.io\/part-of: argocd`\n\nFor more information refer to the corresponding section in the [User Management Documentation](user-management\/index.md#alternative).","site":"argocd"}
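For illustration, a sketch of this alternative layout (the Secret name `my-webhook-secrets` and its key are hypothetical): the value in `argocd-secret` references the external `Secret` by name and key.

```yaml
# Hypothetical external Secret holding the webhook shared secret;
# note the required part-of label.
apiVersion: v1
kind: Secret
metadata:
  name: my-webhook-secrets
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
stringData:
  github-secret: shhhh! it's a GitHub secret
---
# In argocd-secret, reference it using the $<k8s_secret_name>:<key> syntax
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
stringData:
  webhook.github.secret: $my-webhook-secrets:github-secret
```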
{"questions":"argocd Argo CD applications projects and settings can be defined declaratively using Kubernetes manifests These can be updated using without needing to touch the command line tool Quick Reference Declarative Setup Atomic configuration All resources including and specs have to be installed in the Argo CD namespace by default","answers":"# Declarative Setup\n\nArgo CD applications, projects and settings can be defined declaratively using Kubernetes manifests. These can be updated using `kubectl apply`, without needing to touch the `argocd` command-line tool.\n\n## Quick Reference\n\nAll resources, including `Application` and `AppProject` specs, have to be installed in the Argo CD namespace (by default `argocd`).\n\n### Atomic configuration\n\n| Sample File                                                           | Resource Name                                                                      | Kind      | Description                                                                          |\n|-----------------------------------------------------------------------|------------------------------------------------------------------------------------|-----------|--------------------------------------------------------------------------------------|\n| [`argocd-cm.yaml`](argocd-cm-yaml.md)                                 | argocd-cm                                                                          | ConfigMap | General Argo CD configuration                                                        |\n| [`argocd-repositories.yaml`](argocd-repositories-yaml.md)             | my-private-repo \/ istio-helm-repo \/ private-helm-repo \/ private-repo               | Secrets   | Sample repository connection details                                                 |\n| [`argocd-repo-creds.yaml`](argocd-repo-creds-yaml.md)                    | argoproj-https-creds \/ argoproj-ssh-creds \/ github-creds \/ github-enterprise-creds | Secrets   | Sample repository 
credential templates                                               |\n| [`argocd-cmd-params-cm.yaml`](argocd-cmd-params-cm-yaml.md)           | argocd-cmd-params-cm                                                               | ConfigMap | Argo CD env variables configuration                                                  |\n| [`argocd-secret.yaml`](argocd-secret-yaml.md)                         | argocd-secret                                                                      | Secret    | User Passwords, Certificates (deprecated), Signing Key, Dex secrets, Webhook secrets |\n| [`argocd-rbac-cm.yaml`](argocd-rbac-cm-yaml.md)                       | argocd-rbac-cm                                                                     | ConfigMap | RBAC Configuration                                                                   |\n| [`argocd-tls-certs-cm.yaml`](argocd-tls-certs-cm-yaml.md)             | argocd-tls-certs-cm                                                                | ConfigMap | Custom TLS certificates for connecting Git repositories via HTTPS (v1.2 and later)   |\n| [`argocd-ssh-known-hosts-cm.yaml`](argocd-ssh-known-hosts-cm-yaml.md) | argocd-ssh-known-hosts-cm                                                          | ConfigMap | SSH known hosts data for connecting Git repositories via SSH (v1.2 and later)        |\n\nFor each specific kind of ConfigMap and Secret resource, there is only a single supported resource name (as listed in the above table) - if you need to merge configuration from several sources, do so before creating these resources.\n\n!!!warning "A note about ConfigMap resources"\n    Be sure to label your ConfigMap resources with `app.kubernetes.io\/part-of: argocd`, otherwise Argo CD will not be able to use them.\n\n### Multiple configuration objects\n\n| Sample File                                                      | Kind        | Description              
|\n|------------------------------------------------------------------|-------------|--------------------------|\n| [`application.yaml`](..\/user-guide\/application-specification.md) | Application | Example application spec |\n| [`project.yaml`](.\/project-specification.md)                     | AppProject  | Example project spec     |\n| [`argocd-repositories.yaml`](.\/argocd-repositories-yaml.md)      | Secret      | Repository credentials   |\n\nFor `Application` and `AppProject` resources, the name of the resource equals the name of the application or project within Argo CD. This also means that application and project names are unique within a given Argo CD installation - you cannot have the same application name for two different applications.\n\n## Applications\n\nThe Application CRD is the Kubernetes resource object representing a deployed application instance\nin an environment. It is defined by two key pieces of information:\n\n* `source` reference to the desired state in Git (repository, revision, path, environment)\n* `destination` reference to the target cluster and namespace. For the cluster, one of `server` or `name` can be used, but not both (specifying both results in an error). Under the hood, when `server` is omitted, it is resolved from `name` and used for all operations.\n\nA minimal Application spec is as follows:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: guestbook\n  namespace: argocd\nspec:\n  project: default\n  source:\n    repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps.git\n    targetRevision: HEAD\n    path: guestbook\n  destination:\n    server: https:\/\/kubernetes.default.svc\n    namespace: guestbook\n```\n\nSee [application.yaml](application.yaml) for additional fields. 
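As an aside, the `destination` may equally reference the cluster by its registered `name` instead of the API server URL (a sketch; `in-cluster` is the name Argo CD registers for the local cluster by default):

```yaml
destination:
  # use either name or server, never both - setting both is an error
  name: in-cluster
  namespace: guestbook
```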
As long as you have completed the first step of [Getting Started](..\/getting_started.md#1-install-argo-cd), you can apply this with `kubectl apply -n argocd -f application.yaml` and Argo CD will start deploying the guestbook application.\n\n!!! note\n    The namespace must match the namespace of your Argo CD instance - typically this is `argocd`.\n\n!!! note\n    When creating an application from a Helm repository, the `chart` attribute must be specified instead of the `path` attribute within `spec.source`.\n\n```yaml\nspec:\n  source:\n    repoURL: https:\/\/argoproj.github.io\/argo-helm\n    chart: argo\n```\n\n!!! warning\n    Without the `resources-finalizer.argocd.argoproj.io` finalizer, deleting an application will not delete the resources it manages. To perform a cascading delete, you must add the finalizer. See [App Deletion](..\/user-guide\/app_deletion.md#about-the-deletion-finalizer).\n\n```yaml\nmetadata:\n  finalizers:\n    - resources-finalizer.argocd.argoproj.io\n```\n\n### App of Apps\n\nYou can create an app that creates other apps, which in turn can create other apps.\nThis allows you to declaratively manage a group of apps that can be deployed and configured in concert.\n\nSee [cluster bootstrapping](cluster-bootstrapping.md).\n\n## Projects\n\nThe AppProject CRD is the Kubernetes resource object representing a logical grouping of applications.\nIt is defined by the following key pieces of information:\n\n* `sourceRepos` reference to the repositories that applications within the project can pull manifests from.\n* `destinations` reference to clusters and namespaces that applications within the project can deploy into.\n* `roles` list of entities with definitions of their access to resources within the project.\n\n!!!warning \"Projects which can deploy to the Argo CD namespace grant admin access\"\n    If a Project's `destinations` configuration allows deploying to the namespace in which Argo CD is installed, then\n    Applications under that 
project have admin-level access. [RBAC access](https:\/\/argo-cd.readthedocs.io\/en\/stable\/operator-manual\/rbac\/)\n    to admin-level Projects should be carefully restricted, and push access to allowed `sourceRepos` should be limited\n    to only admins.\n\nAn example spec is as follows:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: AppProject\nmetadata:\n  name: my-project\n  namespace: argocd\n  # Finalizer that ensures the project is not deleted until it is no longer referenced by any application\n  finalizers:\n    - resources-finalizer.argocd.argoproj.io\nspec:\n  description: Example Project\n  # Allow manifests to deploy from any Git repos\n  sourceRepos:\n  - '*'\n  # Only permit applications to deploy to the guestbook namespace in the same cluster\n  destinations:\n  - namespace: guestbook\n    server: https:\/\/kubernetes.default.svc\n  # Deny all cluster-scoped resources from being created, except for Namespace\n  clusterResourceWhitelist:\n  - group: ''\n    kind: Namespace\n  # Allow all namespace-scoped resources to be created, except for ResourceQuota, LimitRange, NetworkPolicy\n  namespaceResourceBlacklist:\n  - group: ''\n    kind: ResourceQuota\n  - group: ''\n    kind: LimitRange\n  - group: ''\n    kind: NetworkPolicy\n  # Deny all namespace-scoped resources from being created, except for Deployment and StatefulSet\n  namespaceResourceWhitelist:\n  - group: 'apps'\n    kind: Deployment\n  - group: 'apps'\n    kind: StatefulSet\n  roles:\n  # A role which provides read-only access to all applications in the project\n  - name: read-only\n    description: Read-only privileges to my-project\n    policies:\n    - p, proj:my-project:read-only, applications, get, my-project\/*, allow\n    groups:\n    - my-oidc-group\n  # A role which provides sync privileges to only the guestbook-dev application, e.g. 
to provide\n  # sync privileges to a CI system\n  - name: ci-role\n    description: Sync privileges for guestbook-dev\n    policies:\n    - p, proj:my-project:ci-role, applications, sync, my-project\/guestbook-dev, allow\n    # NOTE: JWT tokens can only be generated by the API server and the token is not persisted\n    # anywhere by Argo CD. It can be prematurely revoked by removing the entry from this list.\n    jwtTokens:\n    - iat: 1535390316\n```\n\n## Repositories\n\n!!!note\n    Some Git hosting providers - notably GitLab and possibly on-premise GitLab instances as well - require you to\n    specify the `.git` suffix in the repository URL, otherwise they will send an HTTP 301 redirect to the\n    repository URL suffixed with `.git`. Argo CD will **not** follow these redirects, so you have to\n    adjust your repository URL to be suffixed with `.git`.\n\nRepository details are stored in secrets. To configure a repo, create a secret which contains repository details.\nConsider using [bitnami-labs\/sealed-secrets](https:\/\/github.com\/bitnami-labs\/sealed-secrets) to store an encrypted secret definition as a Kubernetes manifest.\nEach repository must have a `url` field and, depending on whether you connect using HTTPS, SSH, or GitHub App, `username` and `password` (for HTTPS), `sshPrivateKey` (for SSH), or `githubAppPrivateKey` (for GitHub App).\nCredentials can be scoped to a project using the optional `project` field. 
When omitted, the credential will be used as the default for all projects without a scoped credential.\n\n!!!warning\n    When using [bitnami-labs\/sealed-secrets](https:\/\/github.com\/bitnami-labs\/sealed-secrets) the labels will be removed and have to be readded as described here: https:\/\/github.com\/bitnami-labs\/sealed-secrets#sealedsecrets-as-templates-for-secrets\n\nExample for HTTPS:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: private-repo\n  namespace: argocd\n  labels:\n    argocd.argoproj.io\/secret-type: repository\nstringData:\n  type: git\n  url: https:\/\/github.com\/argoproj\/private-repo\n  password: my-password\n  username: my-username\n  project: my-project\n```\n\nExample for SSH:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: private-repo\n  namespace: argocd\n  labels:\n    argocd.argoproj.io\/secret-type: repository\nstringData:\n  type: git\n  url: git@github.com:argoproj\/my-private-repository.git\n  sshPrivateKey: |\n    -----BEGIN OPENSSH PRIVATE KEY-----\n    ...\n    -----END OPENSSH PRIVATE KEY-----\n```\n\nExample for GitHub App:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: github-repo\n  namespace: argocd\n  labels:\n    argocd.argoproj.io\/secret-type: repository\nstringData:\n  type: git\n  url: https:\/\/github.com\/argoproj\/my-private-repository\n  githubAppID: 1\n  githubAppInstallationID: 2\n  githubAppPrivateKey: |\n    -----BEGIN OPENSSH PRIVATE KEY-----\n    ...\n    -----END OPENSSH PRIVATE KEY-----\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: github-enterprise-repo\n  namespace: argocd\n  labels:\n    argocd.argoproj.io\/secret-type: repository\nstringData:\n  type: git\n  url: https:\/\/ghe.example.com\/argoproj\/my-private-repository\n  githubAppID: 1\n  githubAppInstallationID: 2\n  githubAppEnterpriseBaseUrl: https:\/\/ghe.example.com\/api\/v3\n  githubAppPrivateKey: |\n    -----BEGIN OPENSSH PRIVATE KEY-----\n    ...\n    -----END OPENSSH PRIVATE 
KEY-----\n```\n\nExample for Google Cloud Source repositories:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: github-repo\n  namespace: argocd\n  labels:\n    argocd.argoproj.io\/secret-type: repository\nstringData:\n  type: git\n  url: https:\/\/source.developers.google.com\/p\/my-google-project\/r\/my-repo\n  gcpServiceAccountKey: |\n    {\n      \"type\": \"service_account\",\n      \"project_id\": \"my-google-project\",\n      \"private_key_id\": \"REDACTED\",\n      \"private_key\": \"-----BEGIN PRIVATE KEY-----\\nREDACTED\\n-----END PRIVATE KEY-----\\n\",\n      \"client_email\": \"argocd-service-account@my-google-project.iam.gserviceaccount.com\",\n      \"client_id\": \"REDACTED\",\n      \"auth_uri\": \"https:\/\/accounts.google.com\/o\/oauth2\/auth\",\n      \"token_uri\": \"https:\/\/oauth2.googleapis.com\/token\",\n      \"auth_provider_x509_cert_url\": \"https:\/\/www.googleapis.com\/oauth2\/v1\/certs\",\n      \"client_x509_cert_url\": \"https:\/\/www.googleapis.com\/robot\/v1\/metadata\/x509\/argocd-service-account%40my-google-project.iam.gserviceaccount.com\"\n    }\n```\n\n!!! tip\n    The Kubernetes documentation has [instructions for creating a secret containing a private key](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/#use-case-pod-with-ssh-keys).\n\n### Repository Credentials\n\nIf you want to use the same credentials for multiple repositories, you can configure credential templates. 
Credential templates can carry the same credentials information as repositories.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: first-repo\n  namespace: argocd\n  labels:\n    argocd.argoproj.io\/secret-type: repository\nstringData:\n  type: git\n  url: https:\/\/github.com\/argoproj\/private-repo\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: second-repo\n  namespace: argocd\n  labels:\n    argocd.argoproj.io\/secret-type: repository\nstringData:\n  type: git\n  url: https:\/\/github.com\/argoproj\/other-private-repo\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: private-repo-creds\n  namespace: argocd\n  labels:\n    argocd.argoproj.io\/secret-type: repo-creds\nstringData:\n  type: git\n  url: https:\/\/github.com\/argoproj\n  password: my-password\n  username: my-username\n```\n\nIn the above example, every repository accessed via HTTPS whose URL is prefixed with `https:\/\/github.com\/argoproj` would use a username stored in the key `username` and a password stored in the key `password` of the secret `private-repo-creds` for connecting to Git.\n\nIn order for Argo CD to use a credential template for any given repository, the following conditions must be met:\n\n* The repository must either not be configured at all, or if configured, must not contain any credential information (i.e. contain none of `sshPrivateKey`, `username`, `password`)\n* The URL configured for a credential template (e.g. `https:\/\/github.com\/argoproj`) must be a prefix of the repository URL (e.g. `https:\/\/github.com\/argoproj\/argocd-example-apps`).\n\n!!! note\n    Matching credential template URL prefixes is done on a _best match_ basis, so the longest (best) match will take precedence. 
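The best-match rule can be sketched as follows (an illustration only, not Argo CD's actual implementation): among all template URLs that are a prefix of the repository URL, the longest one wins.

```python
def best_credential_template(template_urls, repo_url):
    """Return the template URL with the longest prefix match, or None.

    Mirrors the documented best-match rule: definition order is
    irrelevant; only prefix length decides.
    """
    matches = [t for t in template_urls if repo_url.startswith(t)]
    return max(matches, key=len, default=None)

templates = ["https://github.com", "https://github.com/argoproj"]
print(best_credential_template(templates, "https://github.com/argoproj/argocd-example-apps"))
# → https://github.com/argoproj
```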
The order of definition is not important, unlike in pre-v1.4 configuration.\n\nThe following keys are valid to refer to credential secrets:\n\n#### SSH repositories\n\n* `sshPrivateKey` refers to the SSH private key for accessing the repositories\n\n#### HTTPS repositories\n\n* `username` and `password` refer to the username and\/or password for accessing the repositories\n* `tlsClientCertData` and `tlsClientCertKey` refer to secrets where a TLS client certificate (`tlsClientCertData`) and the corresponding private key `tlsClientCertKey` are stored for accessing the repositories\n\n#### GitHub App repositories\n\n* `githubAppPrivateKey` refers to the GitHub App private key for accessing the repositories\n* `githubAppID` refers to the GitHub Application ID for the application you created.\n* `githubAppInstallationID` refers to the Installation ID of the GitHub app you created and installed.\n* `githubAppEnterpriseBaseUrl` refers to the base API URL for GitHub Enterprise (e.g. `https:\/\/ghe.example.com\/api\/v3`)\n* `tlsClientCertData` and `tlsClientCertKey` refer to secrets where a TLS client certificate (`tlsClientCertData`) and the corresponding private key `tlsClientCertKey` are stored for accessing GitHub Enterprise if custom certificates are used.\n\n### Repositories using self-signed TLS certificates (or signed by a custom CA)\n\nYou can manage the TLS certificates used to verify the authenticity of your repository servers in a ConfigMap object named `argocd-tls-certs-cm`. The data section should contain a map, with the repository server's hostname part (not the complete URL) as key, and the certificate(s) in PEM format as data. So, if you connect to a repository with the URL `https:\/\/server.example.com\/repos\/my-repo`, you should use `server.example.com` as key. The certificate data should be either the server's certificate (in case of self-signed certificate) or the certificate of the CA that was used to sign the server's certificate. 
You can configure multiple certificates for each server, e.g. if you are having a certificate roll-over planned.\n\nIf there are no dedicated certificates configured for a repository server, the system's default trust store is used for validating the server's repository. This should be good enough for most (if not all) public Git repository services such as GitLab, GitHub and Bitbucket as well as most privately hosted sites which use certificates from well-known CAs, including Let's Encrypt certificates.\n\nAn example ConfigMap object:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-tls-certs-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/name: argocd-cm\n    app.kubernetes.io\/part-of: argocd\ndata:\n  server.example.com: |\n    -----BEGIN CERTIFICATE-----\n    MIIF1zCCA7+gAwIBAgIUQdTcSHY2Sxd3Tq\/v1eIEZPCNbOowDQYJKoZIhvcNAQEL\n    BQAwezELMAkGA1UEBhMCREUxFTATBgNVBAgMDExvd2VyIFNheG9ueTEQMA4GA1UE\n    BwwHSGFub3ZlcjEVMBMGA1UECgwMVGVzdGluZyBDb3JwMRIwEAYDVQQLDAlUZXN0\n    c3VpdGUxGDAWBgNVBAMMD2Jhci5leGFtcGxlLmNvbTAeFw0xOTA3MDgxMzU2MTda\n    Fw0yMDA3MDcxMzU2MTdaMHsxCzAJBgNVBAYTAkRFMRUwEwYDVQQIDAxMb3dlciBT\n    YXhvbnkxEDAOBgNVBAcMB0hhbm92ZXIxFTATBgNVBAoMDFRlc3RpbmcgQ29ycDES\n    MBAGA1UECwwJVGVzdHN1aXRlMRgwFgYDVQQDDA9iYXIuZXhhbXBsZS5jb20wggIi\n    MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCv4mHMdVUcafmaSHVpUM0zZWp5\n    NFXfboxA4inuOkE8kZlbGSe7wiG9WqLirdr39Ts+WSAFA6oANvbzlu3JrEQ2CHPc\n    CNQm6diPREFwcDPFCe\/eMawbwkQAPVSHPts0UoRxnpZox5pn69ghncBR+jtvx+\/u\n    P6HdwW0qqTvfJnfAF1hBJ4oIk2AXiip5kkIznsAh9W6WRy6nTVCeetmIepDOGe0G\n    ZJIRn\/OfSz7NzKylfDCat2z3EAutyeT\/5oXZoWOmGg\/8T7pn\/pR588GoYYKRQnp+\n    YilqCPFX+az09EqqK\/iHXnkdZ\/Z2fCuU+9M\/Zhrnlwlygl3RuVBI6xhm\/ZsXtL2E\n    Gxa61lNy6pyx5+hSxHEFEJshXLtioRd702VdLKxEOuYSXKeJDs1x9o6cJ75S6hko\n    Ml1L4zCU+xEsMcvb1iQ2n7PZdacqhkFRUVVVmJ56th8aYyX7KNX6M9CD+kMpNm6J\n    kKC1li\/Iy+RI138bAvaFplajMF551kt44dSvIoJIbTr1LigudzWPqk31QaZXV\/4u\n    
kD1n4p\/XMc9HYU\/was\/CmQBFqmIZedTLTtK7clkuFN6wbwzdo1wmUNgnySQuMacO\n    gxhHxxzRWxd24uLyk9Px+9U3BfVPaRLiOPaPoC58lyVOykjSgfpgbus7JS69fCq7\n    bEH4Jatp\/10zkco+UQIDAQABo1MwUTAdBgNVHQ4EFgQUjXH6PHi92y4C4hQpey86\n    r6+x1ewwHwYDVR0jBBgwFoAUjXH6PHi92y4C4hQpey86r6+x1ewwDwYDVR0TAQH\/\n    BAUwAwEB\/zANBgkqhkiG9w0BAQsFAAOCAgEAFE4SdKsX9UsLy+Z0xuHSxhTd0jfn\n    Iih5mtzb8CDNO5oTw4z0aMeAvpsUvjJ\/XjgxnkiRACXh7K9hsG2r+ageRWGevyvx\n    CaRXFbherV1kTnZw4Y9\/pgZTYVWs9jlqFOppz5sStkfjsDQ5lmPJGDii\/StENAz2\n    XmtiPOgfG9Upb0GAJBCuKnrU9bIcT4L20gd2F4Y14ccyjlf8UiUi192IX6yM9OjT\n    +TuXwZgqnTOq6piVgr+FTSa24qSvaXb5z\/mJDLlk23npecTouLg83TNSn3R6fYQr\n    d\/Y9eXuUJ8U7\/qTh2Ulz071AO9KzPOmleYPTx4Xty4xAtWi1QE5NHW9\/Ajlv5OtO\n    OnMNWIs7ssDJBsB7VFC8hcwf79jz7kC0xmQqDfw51Xhhk04kla+v+HZcFW2AO9so\n    6ZdVHHQnIbJa7yQJKZ+hK49IOoBR6JgdB5kymoplLLiuqZSYTcwSBZ72FYTm3iAr\n    jzvt1hxpxVDmXvRnkhRrIRhK4QgJL0jRmirBjDY+PYYd7bdRIjN7WNZLFsgplnS8\n    9w6CwG32pRlm0c8kkiQ7FXA6BYCqOsDI8f1VGQv331OpR2Ck+FTv+L7DAmg6l37W\n    +LB9LGh4OAp68ImTjqf6ioGKG0RBSznwME+r4nXtT1S\/qLR6ASWUS4ViWRhbRlNK\n    XWyb96wrUlv+E8I=\n    -----END CERTIFICATE-----\n\n```\n\n!!! note\n    The `argocd-tls-certs-cm` ConfigMap will be mounted as a volume at the mount path `\/app\/config\/tls` in the pods of `argocd-server` and `argocd-repo-server`. It will create files for each data key in the mount path directory, so above example would leave the file `\/app\/config\/tls\/server.example.com`, which contains the certificate data. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration.\n\n### SSH known host public keys\n\nIf you are configuring repositories to use SSH, Argo CD will need to know their SSH public keys. 
In order for Argo CD to connect via SSH the public key(s) for each repository server must be pre-configured in Argo CD (unlike TLS configuration), otherwise the connections to the repository will fail.\n\nYou can manage the SSH known hosts data in the `argocd-ssh-known-hosts-cm` ConfigMap. This ConfigMap contains a single entry, `ssh_known_hosts`, with the public keys of the SSH servers as its value. The value can be filled in from any existing `ssh_known_hosts` file, or from the output of the `ssh-keyscan` utility (which is part of OpenSSH's client package). The basic format is `<server_name> <keytype> <base64-encoded_key>`, one entry per line.\n\nHere is an example of running `ssh-keyscan`:\n```bash\n$ for host in bitbucket.org github.com gitlab.com ssh.dev.azure.com vs-ssh.visualstudio.com ; do ssh-keyscan $host 2> \/dev\/null ; done\nbitbucket.org ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I\/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr\/6mrui\/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc\/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL\/BvbZ\/iRNhItLqNyieoQj\/uh\/7Iv4uyH\/cV\/0b4WDSd3DptigWq84lJubb9t\/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf\/97P5zauIhxcjX+xHv4M=\ngithub.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl\ngithub.com ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr\/C56SJMy\/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9\/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS\/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4\/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1\/wsjk=\ngithub.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT\/y6v0mKV0U2w0WZ2YB\/++Tpockg=\ngitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=\ngitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn\/nOeHHE5UOzRdf\ngitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq\/U0tCNyokEi\/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT\/ia1NEKjunUqu1xOB\/StKDHMoX4\/OKyIzuS0q\/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR\/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9\nssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0\/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY\/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3\/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26\/s0lfKob4Kw8H\nvs-ssh.visualstudio.com ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0\/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY\/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3\/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26\/s0lfKob4Kw8H\n```\n\nHere is an example `ConfigMap` object using the output from `ssh-keyscan` above:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  labels:\n    app.kubernetes.io\/name: argocd-ssh-known-hosts-cm\n    app.kubernetes.io\/part-of: argocd\n  name: argocd-ssh-known-hosts-cm\ndata:\n  ssh_known_hosts: |\n    # This file was automatically generated by hack\/update-ssh-known-hosts.sh. DO NOT EDIT\n    [ssh.github.com]:443 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT\/y6v0mKV0U2w0WZ2YB\/++Tpockg=\n    [ssh.github.com]:443 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl\n    [ssh.github.com]:443 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr\/C56SJMy\/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9\/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS\/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4\/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1\/wsjk=\n    bitbucket.org ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPIQmuzMBuKdWeF4+a2sjSSpBK0iqitSQ+5BM9KhpexuGt20JpTVM7u5BDZngncgrqDMbWdxMWWOGtZ9UgbqgZE=\n    bitbucket.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIazEu89wgQZ4bqs3d63QSMzYVa0MuJ2e2gKTKqu+UUO\n    bitbucket.org ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDQeJzhupRu0u0cdegZIa8e86EG2qOCsIsD1Xw0xSeiPDlCr7kq97NLmMbpKTX6Esc30NuoqEEHCuc7yWtwp8dI76EEEB1VqY9QJq6vk+aySyboD5QF61I\/1WeTwu+deCbgKMGbUijeXhtfbxSxm6JwGrXrhBdofTsbKRUsrN1WoNgUa8uqN1Vx6WAJw1JHPhglEGGHea6QICwJOAr\/6mrui\/oB7pkaWKHj3z7d1IC4KWLtY47elvjbaTlkN04Kc\/5LFEirorGYVbt15kAUlqGM65pk6ZBxtaO3+30LVlORZkxOh+LKL\/BvbZ\/iRNhItLqNyieoQj\/uh\/7Iv4uyH\/cV\/0b4WDSd3DptigWq84lJubb9t\/DnZlrJazxyDCulTmKdOR7vs9gMTo+uoIrPSb8ScTtvw65+odKAlBj59dhnVp9zd7QUojOpXlL62Aw56U4oO+FALuevvMjiWeavKhJqlR7i5n9srYcrNV7ttmDw7kf\/97P5zauIhxcjX+xHv4M=\n    github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT\/y6v0mKV0U2w0WZ2YB\/++Tpockg=\n    github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl\n    github.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr\/C56SJMy\/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9\/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS\/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4\/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1\/wsjk=\n    gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=\n    gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn\/nOeHHE5UOzRdf\n    gitlab.com ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq\/U0tCNyokEi\/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT\/ia1NEKjunUqu1xOB\/StKDHMoX4\/OKyIzuS0q\/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR\/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9\n    ssh.dev.azure.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0\/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY\/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3\/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26\/s0lfKob4Kw8H\n    vs-ssh.visualstudio.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7Hr1oTWqNqOlzGJOfGJ4NakVyIzf1rXYd4d7wo6jBlkLvCA4odBlL0mDUyZ0\/QUfTTqeu+tm22gOsv+VrVTMk6vwRU75gY\/y9ut5Mb3bR5BV58dKXyq9A9UeB5Cakehn5Zgm6x1mKoVyf+FFn26iYqXJRgzIZZcZ5V6hrE0Qg39kZm4az48o0AUbf6Sp4SLdvnuMa2sVNwHBboS7EJkm57XQPVU3\/QpyNLHbWDdzwtrlS+ez30S3AdYhLKEOxAG8weOnyrtLJAUen9mTkol8oII1edf7mWWbWVf0nBmly21+nZcmCTISQBtdcyPaEno7fFQMDD26\/s0lfKob4Kw8H\n```\n\n!!! note\n    The `argocd-ssh-known-hosts-cm` ConfigMap will be mounted as a volume at the mount path `\/app\/config\/ssh` in the pods of `argocd-server` and `argocd-repo-server`. It will create a file `ssh_known_hosts` in that directory, which contains the SSH known hosts data used by Argo CD for connecting to Git repositories via SSH. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration.\n\n### Configure repositories with proxy\n\nProxy for your repository can be specified in the `proxy` field of the repository secret, along with a corresponding `noProxy` config. Argo CD uses this proxy\/noProxy config to access the repository and do related helm\/kustomize operations. 
Argo CD looks for the standard proxy environment variables in the repository server if the custom proxy config is absent.\n\nAn example repository with proxy and noProxy:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: private-repo\n  namespace: argocd\n  labels:\n    argocd.argoproj.io\/secret-type: repository\nstringData:\n  type: git\n  url: https:\/\/github.com\/argoproj\/private-repo\n  proxy: https:\/\/proxy-server-url:8888\n  noProxy: \".internal.example.com,company.org,10.123.0.0\/16\"\n  password: my-password\n  username: my-username\n```\n\nA note on noProxy: Argo CD uses exec to interact with different tools such as helm and kustomize. Not all of these tools support the same noProxy syntax as the [httpproxy go package](https:\/\/cs.opensource.google\/go\/x\/net\/+\/internal-branch.go1.21-vendor:http\/httpproxy\/proxy.go;l=38-50) does. If you run into trouble with noProxy not being respected, try using the full domain instead of a wildcard pattern or IP range to find a common syntax that all tools support.\n\n### Legacy behaviour\n\nIn Argo CD version 2.0 and earlier, repositories were stored as part of the `argocd-cm` config map.
For\nbackward compatibility, Argo CD will still honor repositories in the config map, but this style of repository\nconfiguration is deprecated and support for it will be removed in a future version.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\ndata:\n  repositories: |\n    - url: https:\/\/github.com\/argoproj\/my-private-repository\n      passwordSecret:\n        name: my-secret\n        key: password\n      usernameSecret:\n        name: my-secret\n        key: username\n  repository.credentials: |\n    - url: https:\/\/github.com\/argoproj\n      passwordSecret:\n        name: my-secret\n        key: password\n      usernameSecret:\n        name: my-secret\n        key: username\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: my-secret\n  namespace: argocd\nstringData:\n  password: my-password\n  username: my-username\n```\n\n## Clusters\n\nCluster credentials are stored in secrets, just like repositories and repository credentials. Each secret must have the label\n`argocd.argoproj.io\/secret-type: cluster`.\n\nThe secret data must include the following fields:\n\n* `name` - cluster name\n* `server` - cluster API server URL\n* `namespaces` - optional comma-separated list of namespaces which are accessible in that cluster. Cluster-level resources are ignored if the namespace list is not empty.\n* `clusterResources` - optional boolean string (`\"true\"` or `\"false\"`) determining whether Argo CD can manage cluster-level resources on this cluster.
This setting is used only if the list of managed namespaces is not empty.\n* `project` - optional string to designate this as a project-scoped cluster.\n* `config` - JSON representation of the following data structure:\n\n```yaml\n# Basic authentication settings\nusername: string\npassword: string\n# Bearer authentication settings\nbearerToken: string\n# IAM authentication configuration\nawsAuthConfig:\n    clusterName: string\n    roleARN: string\n    profile: string\n# Configure external command to supply client credentials\n# See https:\/\/godoc.org\/k8s.io\/client-go\/tools\/clientcmd\/api#ExecConfig\nexecProviderConfig:\n    command: string\n    args: [\n      string\n    ]\n    env: {\n      key: value\n    }\n    apiVersion: string\n    installHint: string\n# Proxy URL for the Kubernetes client to use when connecting to the cluster API server\nproxyUrl: string\n# Transport layer security configuration settings\ntlsClientConfig:\n    # Base64 encoded PEM-encoded bytes (typically read from a certificate authority file).\n    caData: string\n    # Base64 encoded PEM-encoded bytes (typically read from a client certificate file).\n    certData: string\n    # Server should be accessed without verifying the TLS certificate\n    insecure: boolean\n    # Base64 encoded PEM-encoded bytes (typically read from a client certificate key file).\n    keyData: string\n    # ServerName is passed to the server for SNI and is used in the client to check server\n    # certificates against. If ServerName is empty, the hostname used to contact the\n    # server is used.\n    serverName: string\n# Disable automatic compression for requests to the cluster\ndisableCompression: boolean\n```\n\nNote that if you specify a command to run under `execProviderConfig`, that command must be available in the Argo CD image.
See [BYOI (Build Your Own Image)](custom_tools.md#byoi-build-your-own-image).\n\nCluster secret example:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mycluster-secret\n  labels:\n    argocd.argoproj.io\/secret-type: cluster\ntype: Opaque\nstringData:\n  name: mycluster.example.com\n  server: https:\/\/mycluster.example.com\n  config: |\n    {\n      \"bearerToken\": \"<authentication token>\",\n      \"tlsClientConfig\": {\n        \"insecure\": false,\n        \"caData\": \"<base64 encoded certificate>\"\n      }\n    }\n```\n\n### EKS\n\nEKS cluster secret example using argocd-k8s-auth and [IRSA](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts.html):\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mycluster-secret\n  labels:\n    argocd.argoproj.io\/secret-type: cluster\ntype: Opaque\nstringData:\n  name: \"eks-cluster-name-for-argo\"\n  server: \"https:\/\/xxxyyyzzz.xyz.some-region.eks.amazonaws.com\"\n  config: |\n    {\n      \"awsAuthConfig\": {\n        \"clusterName\": \"my-eks-cluster-name\",\n        \"roleARN\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<IAM_ROLE_NAME>\"\n      },\n      \"tlsClientConfig\": {\n        \"insecure\": false,\n        \"caData\": \"<base64 encoded certificate>\"\n      }        \n    }\n```\n\nThis setup requires:\n\n1. [IRSA enabled](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/enable-iam-roles-for-service-accounts.html) on your Argo CD EKS cluster\n2. An IAM role (\"management role\") for your Argo CD EKS cluster that has an appropriate trust policy and permission policies (see below)\n3. A role created for each cluster being added to Argo CD that is assumable by the Argo CD management role\n4. 
An [Access Entry](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/access-entries.html) within each EKS cluster added to Argo CD that gives the cluster's role (from point 3) RBAC permissions\nto perform actions within the cluster\n    - Or, alternatively, an entry within the `aws-auth` ConfigMap within the cluster added to Argo CD ([deprecated by EKS](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/auth-configmap.html))\n\n#### Argo CD Management Role\n\nThe role created for Argo CD (the \"management role\") will need to have a trust policy suitable for assumption by certain\nArgo CD Service Accounts *and by itself*.\n\nThe service accounts that need to assume this role are:\n\n- `argocd-application-controller`\n- `argocd-applicationset-controller`\n- `argocd-server`\n\nIf we create role `arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>` for this purpose, the following\nis an example trust policy suitable for this need. Ensure that the Argo CD cluster has an [IAM OIDC provider configured](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/enable-iam-roles-for-service-accounts.html).\n\n```json\n{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Sid\": \"ExplicitSelfRoleAssumption\",\n            \"Effect\": \"Allow\",\n            \"Principal\": {\n                \"AWS\": \"*\"\n            },\n            \"Action\": \"sts:AssumeRole\",\n            \"Condition\": {\n                \"ArnLike\": {\n                  \"aws:PrincipalArn\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>\"\n                }\n            }\n        },\n        {\n            \"Sid\": \"ServiceAccountRoleAssumption\",\n            \"Effect\": \"Allow\",\n            \"Principal\": {\n                \"Federated\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider\/oidc.eks.<AWS_REGION>.amazonaws.com\/id\/EXAMPLED539D4633E53DE1B71EXAMPLE\"\n            },\n            \"Action\": 
\"sts:AssumeRoleWithWebIdentity\",\n            \"Condition\": {\n                \"StringEquals\": {\n                    \"oidc.eks.<AWS_REGION>.amazonaws.com\/id\/EXAMPLED539D4633E53DE1B71EXAMPLE:sub\": [\n                        \"system:serviceaccount:argocd:argocd-application-controller\",\n                        \"system:serviceaccount:argocd:argocd-applicationset-controller\",\n                        \"system:serviceaccount:argocd:argocd-server\"\n                    ],\n                    \"oidc.eks.<AWS_REGION>.amazonaws.com\/id\/EXAMPLED539D4633E53DE1B71EXAMPLE:aud\": \"sts.amazonaws.com\"\n                }\n            }\n        }\n    ]\n}\n```\n\n#### Argo CD Service Accounts\n\nThe three service accounts need to be modified to include an annotation with the Argo CD management role ARN.\n\nHere are example service account configurations for `argocd-application-controller`, `argocd-applicationset-controller`, and `argocd-server`.\n\n!!! warning\n    Once the annotations have been set on the service accounts, the application controller and server pods need to be restarted.\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  annotations:\n    eks.amazonaws.com\/role-arn: \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>\"\n  name: argocd-application-controller\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  annotations:\n    eks.amazonaws.com\/role-arn: \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>\"\n  name: argocd-applicationset-controller\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  annotations:\n    eks.amazonaws.com\/role-arn: \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>\"\n  name: argocd-server\n```\n\n#### IAM Permission Policy\n\nThe Argo CD management role (`arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>` in our example) additionally\nneeds to be allowed to assume a role for each cluster added to Argo CD.\n\nIf we 
create a role named `<IAM_CLUSTER_ROLE>` for an EKS cluster we are adding to Argo CD, we would update the permission \npolicy of the Argo CD management role to include the following:\n\n```json\n{\n    \"Version\" : \"2012-10-17\",\n    \"Statement\" : {\n      \"Effect\" : \"Allow\",\n      \"Action\" : \"sts:AssumeRole\",\n      \"Resource\" : [\n        \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<IAM_CLUSTER_ROLE>\"\n      ]\n    }\n  }\n```\n\nThis allows the Argo CD management role to assume the cluster role.\n\nYou can add permissions like above to the Argo CD management role for each cluster being managed by Argo CD (assuming you\ncreate a new role per cluster).\n\n#### Cluster Role Trust Policies\n\nAs stated, each EKS cluster being added to Argo CD should have its own corresponding role. This role should not have any\npermission policies. Instead, it will be used to authenticate against the EKS cluster's API. The Argo CD management role\nassumes this role, and calls the AWS API to get an auth token via argocd-k8s-auth. That token is used when connecting to\nthe added cluster's API endpoint.\n\nIf we create role `arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<IAM_CLUSTER_ROLE>` for a cluster being added to Argo CD, we should\nset its trust policy to give the Argo CD management role permission to assume it. 
Note that we're granting the Argo CD \nmanagement role permission to assume this role above, but we also need to permit that action via the cluster role's\ntrust policy.\n\nA suitable trust policy allowing the `IAM_CLUSTER_ROLE` to be assumed by the `ARGO_CD_MANAGEMENT_IAM_ROLE_NAME` role looks like this:\n\n```json\n{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Principal\": {\n                \"AWS\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>\"\n            },\n            \"Action\": \"sts:AssumeRole\"\n        }\n    ]\n}\n```\n\n#### Access Entries\n\nEach cluster's role (e.g. `arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<IAM_CLUSTER_ROLE>`) has no permission policy. Instead, we\nassociate that role with an EKS permission policy, which grants that role the ability to generate authentication tokens\nto the cluster's API. This EKS permission policy decides what RBAC permissions are granted in that process.\n\nAn [access entry](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/access-entries.html) (and the policy associated to the role) can be created using the following commands:\n\n```bash\n# For each cluster being added to Argo CD\naws eks create-access-entry \\\n    --cluster-name my-eks-cluster-name \\\n    --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<IAM_CLUSTER_ROLE> \\\n    --type STANDARD \\\n    --kubernetes-groups [] # No groups needed\n\naws eks associate-access-policy \\\n    --cluster-name my-eks-cluster-name \\\n    --policy-arn arn:aws:eks::aws:cluster-access-policy\/AmazonEKSClusterAdminPolicy \\\n    --access-scope type=cluster \\\n    --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<IAM_CLUSTER_ROLE>\n```\n\nThe above role is granted cluster admin permissions via `AmazonEKSClusterAdminPolicy`. 
The Argo CD management role that\nassumes this role is therefore granted the same cluster admin permissions when it generates an API token while adding the\nassociated EKS cluster.\n\n**AWS Auth (Deprecated)**\n\nInstead of using Access Entries, you may need to use the deprecated `aws-auth` ConfigMap.\n\nIf so, the `roleARN` of each managed cluster needs to be added to each respective cluster's `aws-auth` config map (see\n[Enabling IAM principal access to your cluster](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/add-user-role.html)), as\nwell as having an assume role policy which allows it to be assumed by the Argo CD pod role.\n\nAn example assume role policy for a cluster which is managed by Argo CD:\n\n```json\n{\n    \"Version\" : \"2012-10-17\",\n    \"Statement\" : {\n      \"Effect\" : \"Allow\",\n      \"Action\" : \"sts:AssumeRole\",\n      \"Principal\" : {\n        \"AWS\" : \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>\"\n      }\n    }\n  }\n```\n\nExample kube-system\/aws-auth configmap for your cluster managed by Argo CD:\n\n```yaml\napiVersion: v1\ndata:\n  # Other groups and accounts omitted for brevity. Ensure that no other rolearns and\/or groups are inadvertently removed,\n  # or you risk breaking access to your cluster.\n  #\n  # The group name is a RoleBinding which you use to map to a [Cluster]Role. See https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/rbac\/#role-binding-examples\n  mapRoles: |\n    - \"groups\":\n      - \"<GROUP-NAME-IN-K8S-RBAC>\"\n      \"rolearn\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<IAM_CLUSTER_ROLE>\"\n      \"username\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<IAM_CLUSTER_ROLE>\"\n```\n\nUse the role ARN for both `rolearn` and `username`.\n\n#### Alternative EKS Authentication Methods\n\nIn some scenarios it may not be possible to use IRSA, such as when the Argo CD cluster is running on a different cloud\nprovider's platform. In this case, there are two options:\n1. 
Use `execProviderConfig` to call the AWS authentication mechanism, which enables the injection of environment variables to supply credentials\n2. Leverage the AWS profile option available since Argo CD release 2.10\n\nBoth of these options will require the steps involving IAM and the `aws-auth` config map (defined above) to provide the\nprincipal with access to the cluster.\n\n##### Using execProviderConfig with Environment Variables\n```yaml\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mycluster-secret\n  labels:\n    argocd.argoproj.io\/secret-type: cluster\ntype: Opaque\nstringData:\n  name: mycluster\n  server: https:\/\/mycluster.example.com\n  namespaces: \"my,managed,namespaces\"\n  clusterResources: \"true\"\n  config: |\n    {\n      \"execProviderConfig\": {\n        \"command\": \"argocd-k8s-auth\",\n        \"args\": [\"aws\", \"--cluster-name\", \"my-eks-cluster\"],\n        \"apiVersion\": \"client.authentication.k8s.io\/v1beta1\",\n        \"env\": {\n          \"AWS_REGION\": \"xx-east-1\",\n          \"AWS_ACCESS_KEY_ID\": \"\",\n          \"AWS_SECRET_ACCESS_KEY\": \"\",\n          \"AWS_SESSION_TOKEN\": \"\"\n        }\n      },\n      \"tlsClientConfig\": {\n        \"insecure\": false,\n        \"caData\": \"\"\n      }\n    }\n```\n\nThis example assumes that the role is attached to the credentials that have been supplied. If this is not the case,\nthe role can be appended to the `args` section like so:\n\n```yaml\n...\n    \"args\": [\"aws\", \"--cluster-name\", \"my-eks-cluster\", \"--role-arn\", \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<IAM_ROLE_NAME>\"],\n...\n```\nThis construct can be used in conjunction with something like the External Secrets Operator to avoid storing the keys in\nplain text, and it additionally provides a foundation for key rotation.\n\n##### Using An AWS Profile For Authentication\nThe option to use profiles, added in release 2.10, provides a method for supplying credentials while still using 
the\nstandard Argo CD EKS cluster declaration with an additional command flag that points to an AWS credentials file:\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mycluster-secret\n  labels:\n    argocd.argoproj.io\/secret-type: cluster\ntype: Opaque\nstringData:\n  name: \"mycluster.com\"\n  server: \"https:\/\/mycluster.com\"\n  config: |\n    {\n      \"awsAuthConfig\": {\n        \"clusterName\": \"my-eks-cluster-name\",\n        \"roleARN\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:role\/<IAM_ROLE_NAME>\",\n        \"profile\": \"\/mount\/path\/to\/my-profile-file\"\n      },\n      \"tlsClientConfig\": {\n        \"insecure\": false,\n        \"caData\": \"<base64 encoded certificate>\"\n      }\n    }\n```\nThis will instruct Argo CD to read the file at the provided path and use the credentials defined within to authenticate to AWS.\nThe profile must be mounted in both the `argocd-server` and `argocd-application-controller` components in order for this to work.\nFor example, the following values can be defined in a Helm-based Argo CD deployment (note that the mount's `name` must match the volume's `name`):\n\n```yaml\ncontroller:\n  extraVolumes:\n    - name: my-profile-volume\n      secret:\n        secretName: my-aws-profile\n        items:\n          - key: my-profile-file\n            path: my-profile-file\n  extraVolumeMounts:\n    - name: my-profile-volume\n      mountPath: \/mount\/path\/to\n      readOnly: true\n\nserver:\n  extraVolumes:\n    - name: my-profile-volume\n      secret:\n        secretName: my-aws-profile\n        items:\n          - key: my-profile-file\n            path: my-profile-file\n  extraVolumeMounts:\n    - name: my-profile-volume\n      mountPath: \/mount\/path\/to\n      readOnly: true\n```\n\nWhere the secret is defined as follows:\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: my-aws-profile\ntype: Opaque\nstringData:\n  my-profile-file: |\n    [default]\n    region = <aws_region>\n    aws_access_key_id = <aws_access_key_id>\n    aws_secret_access_key = 
<aws_secret_access_key>\n    aws_session_token = <aws_session_token>\n```\n\n> \u26a0\ufe0f Secret mounts are updated on an interval, not in real time. If rotation is a requirement, ensure the token lifetime outlives the mount update interval and that the rotation process doesn't immediately invalidate the existing token.\n\n\n### GKE\n\nGKE cluster secret example using argocd-k8s-auth and [Workload Identity](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/workload-identity):\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mycluster-secret\n  labels:\n    argocd.argoproj.io\/secret-type: cluster\ntype: Opaque\nstringData:\n  name: mycluster.example.com\n  server: https:\/\/mycluster.example.com\n  config: |\n    {\n      \"execProviderConfig\": {\n        \"command\": \"argocd-k8s-auth\",\n        \"args\": [\"gcp\"],\n        \"apiVersion\": \"client.authentication.k8s.io\/v1beta1\"\n      },\n      \"tlsClientConfig\": {\n        \"insecure\": false,\n        \"caData\": \"<base64 encoded certificate>\"\n      }\n    }\n```\n\nNote that you must enable Workload Identity on your GKE cluster, create a GCP service account with an appropriate IAM role, and bind it to the Kubernetes service accounts for argocd-application-controller and argocd-server (the latter for showing Pod logs in the UI). See [Use Workload Identity](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/workload-identity) and [Authenticating to the Kubernetes API server](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/api-server-authentication).\n\n### AKS\n\nAzure cluster secret example using argocd-k8s-auth and [kubelogin](https:\/\/github.com\/Azure\/kubelogin). The *azure* option to the argocd-k8s-auth execProviderConfig encapsulates the *get-token* command for kubelogin. Depending upon which authentication flow is desired (devicecode, spn, ropc, msi, azurecli, workloadidentity), set the environment variable AAD_LOGIN_METHOD to this value. 
Set other appropriate environment variables depending upon which authentication flow is desired.\n\n|Variable Name|Description|\n|-------------|-----------|\n|AAD_LOGIN_METHOD|One of devicecode, spn, ropc, msi, azurecli, or workloadidentity|\n|AAD_SERVICE_PRINCIPAL_CLIENT_CERTIFICATE|AAD client cert in pfx. Used in the spn login flow|\n|AAD_SERVICE_PRINCIPAL_CLIENT_ID|AAD client application ID|\n|AAD_SERVICE_PRINCIPAL_CLIENT_SECRET|AAD client application secret|\n|AAD_USER_PRINCIPAL_NAME|Used in the ropc flow|\n|AAD_USER_PRINCIPAL_PASSWORD|Used in the ropc flow|\n|AZURE_TENANT_ID|The AAD tenant ID|\n|AZURE_AUTHORITY_HOST|Used in the WorkloadIdentityLogin flow|\n|AZURE_FEDERATED_TOKEN_FILE|Used in the WorkloadIdentityLogin flow|\n|AZURE_CLIENT_ID|Used in the WorkloadIdentityLogin flow|\n\nIn addition to the environment variables above, argocd-k8s-auth accepts two extra environment variables to set the AAD environment and the AAD server application ID. The AAD server application ID defaults to 6dae42f8-4368-4678-94ff-3960e28e3630 if not specified. See [here](https:\/\/github.com\/azure\/kubelogin#exec-plugin-format) for details.\n\n|Variable Name|Description|\n|-------------|-----------|\n|AAD_ENVIRONMENT_NAME|The Azure environment to use, defaults to AzurePublicCloud|\n|AAD_SERVER_APPLICATION_ID|The optional AAD server application ID, defaults to 6dae42f8-4368-4678-94ff-3960e28e3630|\n\nThis is an example of using the [federated workload login flow](https:\/\/github.com\/Azure\/kubelogin#azure-workload-federated-identity-non-interactive). The federated token file needs to be mounted as a secret into Argo CD, so it can be used in the flow. 
The location of the token file needs to be set in the environment variable AZURE_FEDERATED_TOKEN_FILE.\n\nIf your AKS cluster utilizes the [Mutating Admission Webhook](https:\/\/azure.github.io\/azure-workload-identity\/docs\/installation\/mutating-admission-webhook.html) from the Azure Workload Identity project, follow these steps to enable the `argocd-application-controller` and `argocd-server` pods to use the federated identity:\n\n1. **Label the Pods**: Add the `azure.workload.identity\/use: \"true\"` label to the `argocd-application-controller` and `argocd-server` pods.\n\n2. **Create Federated Identity Credential**: Generate an Azure federated identity credential for the `argocd-application-controller` and `argocd-server` service accounts. Refer to the [Federated Identity Credential](https:\/\/azure.github.io\/azure-workload-identity\/docs\/topics\/federated-identity-credential.html) documentation for detailed instructions.\n\n3. **Add Annotations to Service Accounts**: Add `\"azure.workload.identity\/client-id\": \"$CLIENT_ID\"` and `\"azure.workload.identity\/tenant-id\": \"$TENANT_ID\"` annotations to the `argocd-application-controller` and `argocd-server` service accounts using the details from the federated credential.\n\n4. 
**Set the AZURE_CLIENT_ID**: Update the `AZURE_CLIENT_ID` in the cluster secret to match the client id of the newly created federated identity credential.\n\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mycluster-secret\n  labels:\n    argocd.argoproj.io\/secret-type: cluster\ntype: Opaque\nstringData:\n  name: mycluster.example.com\n  server: https:\/\/mycluster.example.com\n  config: |\n    {\n      \"execProviderConfig\": {\n        \"command\": \"argocd-k8s-auth\",\n        \"env\": {\n          \"AAD_ENVIRONMENT_NAME\": \"AzurePublicCloud\",\n          \"AZURE_CLIENT_ID\": \"fill in client id\",\n          \"AZURE_TENANT_ID\": \"fill in tenant id\", # optional, injected by workload identity mutating admission webhook if enabled\n          \"AZURE_FEDERATED_TOKEN_FILE\": \"\/opt\/path\/to\/federated_file.json\", # optional, injected by workload identity mutating admission webhook if enabled\n          \"AZURE_AUTHORITY_HOST\": \"https:\/\/login.microsoftonline.com\/\", # optional, injected by workload identity mutating admission webhook if enabled\n          \"AAD_LOGIN_METHOD\": \"workloadidentity\"\n        },\n        \"args\": [\"azure\"],\n        \"apiVersion\": \"client.authentication.k8s.io\/v1beta1\"\n      },\n      \"tlsClientConfig\": {\n        \"insecure\": false,\n        \"caData\": \"<base64 encoded certificate>\"\n      }\n    }\n```\n\nThis is an example of using the spn (service principal name) flow.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mycluster-secret\n  labels:\n    argocd.argoproj.io\/secret-type: cluster\ntype: Opaque\nstringData:\n  name: mycluster.example.com\n  server: https:\/\/mycluster.example.com\n  config: |\n    {\n      \"execProviderConfig\": {\n        \"command\": \"argocd-k8s-auth\",\n        \"env\": {\n          \"AAD_ENVIRONMENT_NAME\": \"AzurePublicCloud\",\n          \"AAD_SERVICE_PRINCIPAL_CLIENT_SECRET\": \"fill in your service principal client secret\",\n          
\"AZURE_TENANT_ID\": \"fill in tenant id\",\n          \"AAD_SERVICE_PRINCIPAL_CLIENT_ID\": \"fill in your service principal client id\",\n          \"AAD_LOGIN_METHOD\": \"spn\"\n        },\n        \"args\": [\"azure\"],\n        \"apiVersion\": \"client.authentication.k8s.io\/v1beta1\"\n      },\n      \"tlsClientConfig\": {\n        \"insecure\": false,\n        \"caData\": \"<base64 encoded certificate>\"\n      }\n    }\n```\n\n## Helm Chart Repositories\n\nNon-standard Helm chart repositories have to be registered explicitly.\nEach repository must have `url`, `type` and `name` fields. For private Helm repos you may need to configure access credentials and HTTPS settings using the `username`, `password`,\n`tlsClientCertData` and `tlsClientCertKey` fields.\n\nExample:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: istio\n  namespace: argocd\n  labels:\n    argocd.argoproj.io\/secret-type: repository\nstringData:\n  name: istio.io\n  url: https:\/\/storage.googleapis.com\/istio-prerelease\/daily-build\/master-latest-daily\/charts\n  type: helm\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: argo-helm\n  namespace: argocd\n  labels:\n    argocd.argoproj.io\/secret-type: repository\nstringData:\n  name: argo\n  url: https:\/\/argoproj.github.io\/argo-helm\n  type: helm\n  username: my-username\n  password: my-password\n  tlsClientCertData: ...\n  tlsClientCertKey: ...\n```\n\n## Resource Exclusion\/Inclusion\n\nResources can be excluded from discovery and sync so that Argo CD is unaware of them. For example, the apiGroup\/kind `events.k8s.io\/*`, `metrics.k8s.io\/*` and `coordination.k8s.io\/Lease` are always excluded. Use cases:\n\n* You have temporary issues and want to exclude problematic resources.\n* There are so many resources of one kind that they impact Argo CD's performance.\n* You want to restrict Argo CD's access to certain kinds of resources, e.g. secrets. 
See [security.md#cluster-rbac](security.md#cluster-rbac).\n\nTo configure this, edit the `argocd-cm` config map:\n\n```shell\nkubectl edit configmap argocd-cm -n argocd\n```\n\nAdd `resource.exclusions`, e.g.:\n\n```yaml\napiVersion: v1\ndata:\n  resource.exclusions: |\n    - apiGroups:\n      - \"*\"\n      kinds:\n      - \"*\"\n      clusters:\n      - https:\/\/192.168.0.20\nkind: ConfigMap\n```\n\nThe `resource.exclusions` node is a list of objects. Each object can have:\n\n* `apiGroups` - A list of globs to match the API group.\n* `kinds` - A list of kinds to match. Can be `\"*\"` to match all.\n* `clusters` - A list of globs to match the cluster.\n\nIf all three match, then the resource is ignored.\n\nIn addition to exclusions, you can configure the list of included resources using the `resource.inclusions` setting.\nBy default, all resource group\/kinds are included. The `resource.inclusions` setting allows customizing the list of included group\/kinds:\n\n```yaml\napiVersion: v1\ndata:\n  resource.inclusions: |\n    - apiGroups:\n      - \"*\"\n      kinds:\n      - Deployment\n      clusters:\n      - https:\/\/192.168.0.20\nkind: ConfigMap\n```\n\nThe `resource.inclusions` and `resource.exclusions` settings can be used together.
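As a sketch (the API group and kind chosen here are illustrative), you might track every kind in the `apps` group on all clusters while carving out one noisy kind:

```yaml
apiVersion: v1
kind: ConfigMap
data:
  # Include every kind in the apps API group on every cluster...
  resource.inclusions: |
    - apiGroups:
      - "apps"
      kinds:
      - "*"
      clusters:
      - "*"
  # ...but exclude ReplicaSets, leaving Deployments, StatefulSets, etc.
  resource.exclusions: |
    - apiGroups:
      - "apps"
      kinds:
      - ReplicaSet
      clusters:
      - "*"
```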
The final list of resources includes group\/kinds specified in `resource.inclusions` minus group\/kinds\nspecified in the `resource.exclusions` setting.\n\nNotes:\n\n* Quote globs in your YAML to avoid parsing errors.\n* Invalid globs result in the whole rule being ignored.\n* If you add a rule that matches existing resources, these will appear in the interface as `OutOfSync`.\n\n## Mask sensitive Annotations on Secrets\n\nAn optional comma-separated list of `metadata.annotations` keys can be configured with `resource.sensitive.mask.annotations` to mask their values in the UI\/CLI on Secrets.\n\n```yaml\n  resource.sensitive.mask.annotations: openshift.io\/token-secret.value, api-key\n```\n\n## Auto respect RBAC for controller\n\nThe Argo CD controller can be restricted from discovering\/syncing specific resources using controller RBAC alone, without having to manually configure resource exclusions.\nThis feature can be enabled by setting the `resource.respectRBAC` key in the `argocd-cm` ConfigMap. Once it is set, the controller will automatically stop watching resources\nthat it does not have permission to list\/access.
Possible values for `resource.respectRBAC` are:\n\n- `strict`: Checks whether the list call made by the controller is forbidden\/unauthorized and, if it is, cross-checks the permission by making a `SelfSubjectAccessReview` call for the resource.\n- `normal`: Only checks whether the list call response is forbidden\/unauthorized and skips the `SelfSubjectAccessReview` call, to minimize extra API server calls.\n- unset\/empty (default): Disables the feature; the controller continues to monitor all resources.\n\nUsers who are comfortable with an increase in Kubernetes API server calls can opt for the `strict` option, while users who are concerned about higher API call volume and willing to compromise on accuracy can opt for the `normal` option.\n\nNotes:\n\n* When set to `strict` mode, the controller must have RBAC permission to `create` a `SelfSubjectAccessReview` resource.\n* The `SelfSubjectAccessReview` request is only made for the `list` verb. It is assumed that if `list` is allowed for a resource, then all other permissions are also available to the controller.\n\nExample `argocd-cm` ConfigMap with `resource.respectRBAC` set to `strict`:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cm\ndata:\n  resource.respectRBAC: \"strict\"\n```\n\n## Resource Custom Labels\n\nCustom labels configured with `resource.customLabels` (comma-separated string) will be displayed in the UI (for any resource that defines them).\n\n## Labels on Application Events\n\nAn optional comma-separated list of `metadata.labels` keys can be configured with `resource.includeEventLabelKeys` to add to Kubernetes events generated for Argo CD Applications. When events are generated for Applications containing the specified labels, the controller adds the matching labels to the event. This establishes an easy link between the event and the application, allowing for filtering using labels.
## Resource Custom Labels

Custom labels configured with `resource.customLabels` (comma-separated string) will be displayed in the UI (for any resource that defines them).

## Labels on Application Events

An optional comma-separated list of `metadata.labels` keys can be configured with `resource.includeEventLabelKeys` to add to Kubernetes events generated for Argo CD Applications. When events are generated for Applications containing the specified labels, the controller adds the matching labels to the event. This establishes an easy link between the event and the application, allowing for filtering using labels. In case of conflict between labels on the Application and AppProject, the Application label values are prioritized and added to the event.

```yaml
  resource.includeEventLabelKeys: team,env*
```

To exclude certain labels from events, use the `resource.excludeEventLabelKeys` key, which takes a comma-separated list of `metadata.labels` keys.

```yaml
  resource.excludeEventLabelKeys: environment,bu
```

Both `resource.includeEventLabelKeys` and `resource.excludeEventLabelKeys` support wildcards.

## SSO & RBAC

* SSO configuration details: [SSO](./user-management/index.md)
* RBAC configuration details: [RBAC](./rbac.md)

## Manage Argo CD Using Argo CD

Argo CD is able to manage itself since all settings are represented by Kubernetes manifests. The suggested way is to create a [Kustomize](https://github.com/kubernetes-sigs/kustomize)-based application which uses the base Argo CD manifests from [https://github.com/argoproj/argo-cd](https://github.com/argoproj/argo-cd/tree/stable/manifests) and applies the required changes on top.

Example of `kustomization.yaml`:

```yaml
# additional resources like ingress rules, cluster and repository secrets.
resources:
- github.com/argoproj/argo-cd//manifests/cluster-install?ref=stable
- clusters-secrets.yaml
- repos-secrets.yaml

# changes to config maps
patches:
- path: overlays/argo-cd-cm.yaml
```

A live example of self-managed Argo CD configuration is available at [https://cd.apps.argoproj.io](https://cd.apps.argoproj.io), with its configuration stored at [argoproj/argoproj-deployments](https://github.com/argoproj/argoproj-deployments/tree/master/argocd).

!!! note
    You will need to sign in using your GitHub account to get access to [https://cd.apps.argoproj.io](https://cd.apps.argoproj.io)
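The `overlays/argo-cd-cm.yaml` patch referenced in the `kustomization.yaml` above is not shown in the source; as an illustration only, such a patch could look like the following sketch (the data keys shown are examples of settings you might override, and the URL is a placeholder):

```yaml
# Hypothetical overlay patch for argocd-cm applied on top of the base
# manifests; keys and values are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  url: https://argocd.example.com
  application.instanceLabelKey: argocd.argoproj.io/instance
```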
# Declarative Setup

Argo CD applications, projects and settings can be defined declaratively using Kubernetes manifests. These can be updated using `kubectl apply`, without needing to touch the `argocd` command-line tool.

## Quick Reference

All resources, including `Application` and `AppProject` specs, have to be installed in the Argo CD namespace (by default `argocd`).

### Atomic configuration

| Sample File | Resource Name | Kind | Description |
|-------------|---------------|------|-------------|
| [argocd-cm.yaml](argocd-cm.yaml.md) | argocd-cm | ConfigMap | General Argo CD configuration |
| [argocd-repositories.yaml](argocd-repositories.yaml.md) | my-private-repo / istio-helm-repo / private-helm-repo / private-repo | Secrets | Sample repository connection details |
| [argocd-repo-creds.yaml](argocd-repo-creds.yaml.md) | argoproj-https-creds / argoproj-ssh-creds / github-creds / github-enterprise-creds | Secrets | Sample repository credential templates |
| [argocd-cmd-params-cm.yaml](argocd-cmd-params-cm.yaml.md) | argocd-cmd-params-cm | ConfigMap | Argo CD env variables configuration |
| [argocd-secret.yaml](argocd-secret.yaml.md) | argocd-secret | Secret | User Passwords, Certificates (deprecated), Signing Key, Dex secrets, Webhook secrets |
| [argocd-rbac-cm.yaml](argocd-rbac-cm.yaml.md) | argocd-rbac-cm | ConfigMap | RBAC Configuration |
| [argocd-tls-certs-cm.yaml](argocd-tls-certs-cm.yaml.md) | argocd-tls-certs-cm | ConfigMap | Custom TLS certificates for connecting Git repositories via HTTPS (v1.2 and later) |
| [argocd-ssh-known-hosts-cm.yaml](argocd-ssh-known-hosts-cm.yaml.md) | argocd-ssh-known-hosts-cm | ConfigMap | SSH known hosts data for connecting Git repositories via SSH (v1.2 and later) |

For each specific kind of ConfigMap and Secret resource, there is only a single supported resource name (as listed in the above table) - if you need to merge things you need to do it before creating them.

!!! warning "A note about ConfigMap resources"
    Be sure to annotate your ConfigMap resources using the label `app.kubernetes.io/part-of: argocd`, otherwise Argo CD will not be able to use them.

### Multiple configuration objects

| Sample File | Kind | Description |
|-------------|------|-------------|
| [application.yaml](../user-guide/application-specification.md) | Application | Example application spec |
| [project.yaml](./project-specification.md) | AppProject | Example project spec |
| [argocd-repositories.yaml](argocd-repositories.yaml.md) | Secret | Repository credentials |

For `Application` and `AppProject` resources, the name of the resource equals the name of the application or project within Argo CD. This also means that application and project names are unique within a given Argo CD installation - you cannot have the same application name for two different applications.

## Applications

The Application CRD is the Kubernetes resource object representing a deployed application instance in an environment. It is defined by two key pieces of information:

* `source` reference to the desired state in Git (repository, revision, path, environment).
* `destination` reference to the target cluster and namespace. For the cluster, one of `server` or `name` can be used, but not both (which will result in an error). Under the hood, when the `server` is missing, it is calculated based on the `name` and used for any operations.

A minimal Application spec is as follows:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
```

See [application.yaml](application.yaml) for additional fields. As long as you have completed the first step of [Getting Started](../getting_started.md#1-install-argo-cd), you can apply this with `kubectl apply -n argocd -f application.yaml` and Argo CD will start deploying the guestbook application.

!!! note
    The namespace must match the namespace of your Argo CD instance - typically this is `argocd`.

!!! note
    When creating an application from a Helm repository, the `chart` attribute must be specified instead of the `path` attribute within `spec.source`.

```yaml
spec:
  source:
    repoURL: https://argoproj.github.io/argo-helm
    chart: argo
```
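Putting the Helm-repository note together with the minimal spec, a complete Application might look like the following sketch (the chart name, revision, and namespaces are illustrative, not from the source):

```yaml
# Hypothetical Helm-based Application: note `chart` in place of `path`.
# Chart name and targetRevision are examples only.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo-rollouts
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://argoproj.github.io/argo-helm
    chart: argo-rollouts
    targetRevision: 2.35.0
  destination:
    server: https://kubernetes.default.svc
    namespace: argo-rollouts
```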
!!! warning
    Without the `resources-finalizer.argocd.argoproj.io` finalizer, deleting an application will not delete the resources it manages. To perform a cascading delete, you must add the finalizer. See [App Deletion](../user-guide/app_deletion.md#about-the-deletion-finalizer).

```yaml
metadata:
  finalizers:
    - resources-finalizer.argocd.argoproj.io
```

## App of Apps

You can create an app that creates other apps, which in turn can create other apps. This allows you to declaratively manage a group of apps that can be deployed and configured in concert. See [cluster bootstrapping](cluster-bootstrapping.md).

## Projects

The AppProject CRD is the Kubernetes resource object representing a logical grouping of applications. It is defined by the following key pieces of information:

* `sourceRepos` reference to the repositories that applications within the project can pull manifests from.
* `destinations` reference to clusters and namespaces that applications within the project can deploy into.
* `roles` list of entities with definitions of their access to resources within the project.

!!! warning "Projects which can deploy to the Argo CD namespace grant admin access"
    If a Project's `destinations` configuration allows deploying to the namespace in which Argo CD is installed, then Applications under that project have admin-level access. [RBAC access](https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/) to admin-level Projects should be carefully restricted, and push access to allowed `sourceRepos` should be limited to only admins.

An example spec is as follows:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-project
  namespace: argocd
  # Finalizer that ensures that project is not deleted until it is not referenced by any application
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  description: Example Project
  # Allow manifests to deploy from any Git repos
  sourceRepos:
  - '*'
  # Only permit applications to deploy to the guestbook namespace in the same cluster
  destinations:
  - namespace: guestbook
    server: https://kubernetes.default.svc
  # Deny all cluster-scoped resources from being created, except for Namespace
  clusterResourceWhitelist:
  - group: ''
    kind: Namespace
  # Allow all namespaced-scoped resources to be created, except for ResourceQuota, LimitRange, NetworkPolicy
  namespaceResourceBlacklist:
  - group: ''
    kind: ResourceQuota
  - group: ''
    kind: LimitRange
  - group: ''
    kind: NetworkPolicy
  # Deny all namespaced-scoped resources from being created, except for Deployment and StatefulSet
  namespaceResourceWhitelist:
  - group: 'apps'
    kind: Deployment
  - group: 'apps'
    kind: StatefulSet
  roles:
  # A role which provides read-only access to all applications in the project
  - name: read-only
    description: Read-only privileges to my-project
    policies:
    - p, proj:my-project:read-only, applications, get, my-project/*, allow
    groups:
    - my-oidc-group
  # A role which provides sync privileges to only the guestbook-dev application, e.g. to provide
  # sync privileges to a CI system
  - name: ci-role
    description: Sync privileges for guestbook-dev
    policies:
    - p, proj:my-project:ci-role, applications, sync, my-project/guestbook-dev, allow
    # NOTE: JWT tokens can only be generated by the API server and the token is not persisted
    # anywhere by Argo CD. It can be prematurely revoked by removing the entry from this list.
    jwtTokens:
    - iat: 1535390316
```

## Repositories

!!! note
    Some Git hosters - notably GitLab and possibly on-premise GitLab instances as well - require you to specify the `.git` suffix in the repository URL, otherwise they will send a HTTP 301 redirect to the repository URL suffixed with `.git`. Argo CD will **not** follow these redirects, so you have to adjust your repository URL to be suffixed with `.git`.

Repository details are stored in secrets. To configure a repo, create a secret which contains repository details. Consider using [bitnami-labs/sealed-secrets](https://github.com/bitnami-labs/sealed-secrets) to store an encrypted secret definition as a Kubernetes manifest. Each repository must have a `url` field and, depending on whether you connect using HTTPS, SSH, or GitHub App, `username` and `password` (for HTTPS), `sshPrivateKey` (for SSH), or `githubAppPrivateKey` (for GitHub App).

Credentials can be scoped to a project using the optional `project` field. When omitted, the credential will be used as the default for all projects without a scoped credential.

!!! warning
    When using [bitnami-labs/sealed-secrets](https://github.com/bitnami-labs/sealed-secrets) the labels will be removed and have to be readded as described [here](https://github.com/bitnami-labs/sealed-secrets#sealedsecrets-as-templates-for-secrets).

Example for HTTPS:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/argoproj/private-repo
  password: my-password
  username: my-username
  project: my-project
```

Example for SSH:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:argoproj/my-private-repository.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
```

Example for GitHub App:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/argoproj/my-private-repository
  githubAppID: "1"
  githubAppInstallationID: "2"
  githubAppPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
---
apiVersion: v1
kind: Secret
metadata:
  name: github-enterprise-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://ghe.example.com/argoproj/my-private-repository
  githubAppID: "1"
  githubAppInstallationID: "2"
  githubAppEnterpriseBaseUrl: https://ghe.example.com/api/v3
  githubAppPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
```

Example for Google Cloud Source repositories:

```yaml
kind: Secret
metadata:
  name: github-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://source.developers.google.com/p/my-google-project/r/my-repo
  gcpServiceAccountKey: |
    {
      "type": "service_account",
      "project_id": "my-google-project",
      "private_key_id": "REDACTED",
      "private_key": "-----BEGIN PRIVATE KEY-----\nREDACTED\n-----END PRIVATE KEY-----\n",
      "client_email": "argocd-service-account@my-google-project.iam.gserviceaccount.com",
      "client_id": "REDACTED",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://oauth2.googleapis.com/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/argocd-service-account%40my-google-project.iam.gserviceaccount.com"
    }
```

!!! tip
    The Kubernetes documentation has [instructions for creating a secret containing a private key](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys).

## Repository Credentials

If you want to use the same credentials for multiple repositories, you can configure credential templates. Credential templates can carry the same credentials information as repositories:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: first-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/argoproj/private-repo
---
apiVersion: v1
kind: Secret
metadata:
  name: second-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/argoproj/other-private-repo
---
apiVersion: v1
kind: Secret
metadata:
  name: private-repo-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
stringData:
  type: git
  url: https://github.com/argoproj
  password: my-password
  username: my-username
```

In the above example, every repository accessed via HTTPS whose URL is prefixed with `https://github.com/argoproj` would use a username stored in the key `username` and a password stored in the key `password` of the secret `private-repo-creds` for connecting to Git.

In order for Argo CD to use a credential template for any given repository, the following conditions must be met:

* The repository must either not be configured at all, or if configured, must not contain any credential information (i.e. contain none of `sshPrivateKey`, `username`, `password`).
* The URL configured for a credential template (e.g. `https://github.com/argoproj`) must match as prefix for the repository URL (e.g. `https://github.com/argoproj/argocd-example-apps`).

!!! note
    Matching credential template URL prefixes is done on a best-match effort, so the longest (best) match will take precedence. The order of definition is not important, as opposed to pre-v1.4 configuration.

The following keys are valid to refer to credential secrets:

#### SSH repositories

* `sshPrivateKey` refers to the SSH private key for accessing the repositories.

#### HTTPS repositories

* `username` and `password` refer to the username and/or password for accessing the repositories.
* `tlsClientCertData` and `tlsClientCertKey` refer to secrets where a TLS client certificate (`tlsClientCertData`) and the corresponding private key (`tlsClientCertKey`) are stored for accessing the repositories.

#### GitHub App repositories

* `githubAppPrivateKey` refers to the GitHub App private key for accessing the repositories.
* `githubAppID` refers to the GitHub Application ID for the application you created.
* `githubAppInstallationID` refers to the Installation ID of the GitHub app you created and installed.
* `githubAppEnterpriseBaseUrl` refers to the base api URL for GitHub Enterprise (e.g. `https://ghe.example.com/api/v3`).
* `tlsClientCertData` and `tlsClientCertKey` refer to secrets where a TLS client certificate (`tlsClientCertData`) and the corresponding private key (`tlsClientCertKey`) are stored for accessing GitHub Enterprise if custom certificates are used.
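The "longest prefix wins" template matching described above can be sketched in a few lines of shell. This is an illustration under assumed URLs, not Argo CD's actual Go implementation:

```bash
# Illustration only: pick the credential template whose URL is the
# longest prefix of the repository URL. Template URLs are hypothetical.
repo_url="https://github.com/argoproj/argocd-example-apps"
templates="https://github.com https://github.com/argoproj"

best=""
for t in $templates; do
  case "$repo_url" in
    "$t"*)
      # keep the longest matching prefix seen so far
      if [ ${#t} -gt ${#best} ]; then best=$t; fi
      ;;
  esac
done
echo "matched: $best"   # → matched: https://github.com/argoproj
```

Because the longest prefix wins, the repository picks up the `https://github.com/argoproj` template rather than the broader `https://github.com` one, regardless of definition order.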
## Repositories using self-signed TLS certificates (or are signed by custom CA)

You can manage the TLS certificates used to verify the authenticity of your repository servers in a ConfigMap object named `argocd-tls-certs-cm`. The data section should contain a map, with the repository server's hostname part (not the complete URL) as key, and the certificate(s) in PEM format as data. So, if you connect to a repository with the URL `https://server.example.com/repos/my-repo`, you should use `server.example.com` as key. The certificate data should be either the server's certificate (in case of self-signed certificate) or the certificate of the CA that was used to sign the server's certificate. You can configure multiple certificates for each server, e.g. if you are having a certificate roll-over planned.

If there are no dedicated certificates configured for a repository server, the system's default trust store is used for validating the server's repository. This should be good enough for most (if not all) public Git repository services such as GitLab, GitHub and Bitbucket as well as most privately hosted sites which use certificates from well-known CAs, including Let's Encrypt certificates.

An example ConfigMap object:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-tls-certs-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  server.example.com: |
    -----BEGIN CERTIFICATE-----
    MIIF1zCC... (example certificate data, abbreviated)
    -----END CERTIFICATE-----
```

!!! note
    The `argocd-tls-certs-cm` ConfigMap will be mounted as a volume at the mount path `/app/config/tls` in the pods of `argocd-server` and `argocd-repo-server`. It will create files for each data key in the mount path directory, so the above example would leave the file `/app/config/tls/server.example.com`, which contains the certificate data. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration.

## SSH known host public keys

If you are configuring repositories to use SSH, Argo CD will need to know their SSH public keys. In order for Argo CD to connect via SSH, the public key(s) for each repository server must be pre-configured in Argo CD (unlike TLS configuration), otherwise the connections to the repository will fail.

You can manage the SSH known hosts data in the `argocd-ssh-known-hosts-cm` ConfigMap. This ConfigMap contains a single entry, `ssh_known_hosts`, with the public keys of the SSH servers as its value. The value can be filled in from any existing `ssh_known_hosts` file, or from the output of the `ssh-keyscan` utility (which is part of OpenSSH's client package). The basic format is `<server_name> <keytype> <base64-encoded key>`, one entry per line.

Here is an example of running `ssh-keyscan`:

```bash
for host in bitbucket.org github.com gitlab.com ssh.dev.azure.com vs-ssh.visualstudio.com ; do
  ssh-keyscan $host 2> /dev/null
done
bitbucket.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIazEu89wgQZ4bqs3d63QSMzYVa0MuJ2e2gKTKqu+UUO
github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
# (ssh-rsa and ecdsa-sha2-nistp256 entries for these hosts abbreviated)
```

Here is an example `ConfigMap` object using the output from `ssh-keyscan` above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: argocd-ssh-known-hosts-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-ssh-known-hosts-cm
data:
  ssh_known_hosts: |
    # This file was automatically generated by hack/update-ssh-known-hosts.sh. DO NOT EDIT
    [ssh.github.com]:443 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
    bitbucket.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIazEu89wgQZ4bqs3d63QSMzYVa0MuJ2e2gKTKqu+UUO
    github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl
    gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
    # (ssh-rsa and ecdsa-sha2-nistp256 entries for github.com, gitlab.com,
    #  bitbucket.org, ssh.dev.azure.com and vs-ssh.visualstudio.com abbreviated)
```

!!! note
    The `argocd-ssh-known-hosts-cm` ConfigMap will be mounted as a volume at the mount path `/app/config/ssh` in the pods of `argocd-server` and `argocd-repo-server`. It will create a file `ssh_known_hosts` in that directory, which contains the SSH known hosts data used by Argo CD for connecting to Git repositories via SSH. It might take a while for changes in the ConfigMap to be reflected in your pods, depending on your Kubernetes configuration.

## Configure repositories with proxy

A proxy for your repository can be specified in the `proxy` field of the repository secret, along with a corresponding `noProxy` config. Argo CD uses this proxy/noProxy config to access the repository and do related Helm/Kustomize operations.
Argo CD looks for the standard proxy environment variables in the repository server if the custom proxy config is absent.

An example repository with proxy and noProxy:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/argoproj/private-repo
  proxy: https://proxy-server-url:8888
  noProxy: ".internal.example.com,company.org,10.123.0.0/16"
  password: my-password
  username: my-username
```

A note on noProxy: Argo CD uses exec to interact with different tools such as Helm and Kustomize. Not all of these tools support the same noProxy syntax as the [httpproxy go package](https://cs.opensource.google/go/x/net/+/internal-branch.go1.21-vendor:http/httpproxy/proxy.go;l=38-50) does. If you run into trouble with noProxy not being respected, you might want to try using the full domain instead of a wildcard pattern or IP range to find a common syntax that all tools support.

### Legacy behaviour

In Argo CD version 2.0 and earlier, repositories were stored as part of the `argocd-cm` ConfigMap. For backward compatibility, Argo CD will still honor repositories in the ConfigMap, but this style of repository configuration is deprecated and support for it will be removed in a future version.

```yaml
apiVersion: v1
kind: ConfigMap
data:
  repositories: |
    - url: https://github.com/argoproj/my-private-repository
      passwordSecret:
        name: my-secret
        key: password
      usernameSecret:
        name: my-secret
        key: username
  repository.credentials: |
    - url: https://github.com/argoproj
      passwordSecret:
        name: my-secret
        key: password
      usernameSecret:
        name: my-secret
        key: username
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: argocd
stringData:
  password: my-password
  username: my-username
```
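The noProxy caveat above - that different tools match entries differently - can be illustrated with a simplified sketch of domain-suffix matching, similar in spirit to (but not a copy of) Go's `httpproxy` behavior; it is not Argo CD's implementation, and the hostnames are hypothetical:

```bash
# Illustration only: a domain entry matches the domain itself and its
# subdomains. Tools that do exact matching would miss the subdomain case.
no_proxy="internal.example.com,company.org"

matches_no_proxy() {
  host=$1
  for entry in $(echo "$no_proxy" | tr ',' ' '); do
    case "$host" in
      "$entry"|*".$entry") return 0 ;;
    esac
  done
  return 1
}

matches_no_proxy git.company.org && echo "git.company.org: bypass proxy"
matches_no_proxy github.com || echo "github.com: use proxy"
```

Under this scheme `git.company.org` bypasses the proxy because it is a subdomain of `company.org`, while `github.com` still goes through it; a tool doing exact matching would send `git.company.org` through the proxy too, which is why using the full domain in `noProxy` is the safer common denominator.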
## Clusters

Cluster credentials are stored in secrets same as repositories or repository credentials. Each secret must have label
`argocd.argoproj.io/secret-type: cluster`.

The secret data must include the following fields:

* `name` - cluster name
* `server` - cluster api server url
* `namespaces` - optional comma-separated list of namespaces which are accessible in that cluster. Cluster-level resources would be ignored if namespace list is not empty.
* `clusterResources` - optional boolean string (`"true"` or `"false"`) determining whether Argo CD can manage cluster-level resources on this cluster. This setting is used only if the list of managed namespaces is not empty.
* `project` - optional string to designate this as a project-scoped cluster.
* `config` - JSON representation of following data structure:

```yaml
# Basic authentication settings
username: string
password: string
# Bearer authentication settings
bearerToken: string
# IAM authentication configuration
awsAuthConfig:
    clusterName: string
    roleARN: string
    profile: string
# Configure external command to supply client credentials
# See https://godoc.org/k8s.io/client-go/tools/clientcmd/api#ExecConfig
execProviderConfig:
    command: string
    args: [
      string
    ]
    env: {
      key: value
    }
    apiVersion: string
    installHint: string
# Proxy URL for the kubernetes client to use when connecting to the cluster api server
proxyUrl: string
# Transport layer security configuration settings
tlsClientConfig:
    # Base64 encoded PEM-encoded bytes (typically read from a client certificate file).
    caData: string
    # Base64 encoded PEM-encoded bytes (typically read from a client certificate file).
    certData: string
    # Server should be accessed without verifying the TLS certificate
    insecure: boolean
    # Base64 encoded PEM-encoded bytes (typically read from a client certificate key file).
    keyData: string
    # ServerName is passed to the server for SNI and is used in the client to check server
    # certificates against. If ServerName is empty, the hostname used to contact the
    # server is used.
    serverName: string
# Disable automatic compression for requests to the cluster
disableCompression: boolean
```

Note that if you specify a command to run under `execProviderConfig`, that command must be available in the Argo CD
image. See [BYOI (Build Your Own Image)](custom_tools.md#byoi-build-your-own-image).

Cluster secret example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.example.com
  server: https://mycluster.example.com
  config: |
    {
      "bearerToken": "<authentication token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

### EKS

EKS cluster secret example using argocd-k8s-auth and
[IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: "eks-cluster-name-for-argo"
  server: "https://xxxyyyzzz.xyz.some-region.eks.amazonaws.com"
  config: |
    {
      "awsAuthConfig": {
        "clusterName": "my-eks-cluster-name",
        "roleARN": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

This setup requires:

1. [IRSA enabled](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) on your Argo CD EKS cluster
2. An IAM role ("management role") for your Argo CD EKS cluster that has an appropriate trust policy and permission policies (see below)
3. A role created for each cluster being added to Argo CD that is assumable by the Argo CD management role
4. An [Access Entry](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) within each EKS cluster added to Argo CD that gives the cluster's role (from point 3) RBAC permissions to perform actions within the cluster
    * Or, alternatively, an entry within the `aws-auth` ConfigMap within the cluster added to Argo CD ([deprecated by EKS](https://docs.aws.amazon.com/eks/latest/userguide/auth-configmap.html))

#### Argo CD Management Role

The role created for Argo CD (the "management role") will need to have a trust policy suitable for assumption by
certain Argo CD Service Accounts *and by itself*.

The service accounts that need to assume this role are:

* `argocd-application-controller`
* `argocd-applicationset-controller`
* `argocd-server`

If we create role `arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>` for this purpose, the
following is an example trust policy suitable for this need. Ensure that the Argo CD cluster has an
[IAM OIDC provider configured](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ExplicitSelfRoleAssumption",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "ArnLike": {
          "aws:PrincipalArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
        }
      }
    },
    {
      "Sid": "ServiceAccountRoleAssumption",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": [
            "system:serviceaccount:argocd:argocd-application-controller",
            "system:serviceaccount:argocd:argocd-applicationset-controller",
            "system:serviceaccount:argocd:argocd-server"
          ],
          "oidc.eks.<AWS_REGION>.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

#### Argo CD Service Accounts

The 3 service accounts need to be modified to include an annotation with the Argo CD management role ARN.

Here's an example service account configuration for `argocd-application-controller`,
`argocd-applicationset-controller`, and `argocd-server`:

!!! warning
    Once the annotations have been set on the service accounts, the application controller and server pods need to be restarted.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
  name: argocd-application-controller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
  name: argocd-applicationset-controller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
  name: argocd-server
```

#### IAM Permission Policy

The Argo CD management role (`arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>` in our example)
additionally needs to be allowed to assume a role for each cluster added to Argo CD.

If we create a role named `<IAM_CLUSTER_ROLE>` for an EKS cluster we are adding to Argo CD, we would update the
permission policy of the Argo CD management role to include the following:

```json
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": [
      "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>"
    ]
  }
}
```

This allows the Argo CD management role to assume the cluster role.

You can add permissions like above to the Argo CD management role for each cluster being managed by Argo CD (assuming
you create a new role per cluster).

#### Cluster Role Trust Policies

As stated, each EKS cluster being added to Argo CD should have its own corresponding role. This role should not have
any permission policies. Instead, it will be used to authenticate against the EKS cluster's API. The Argo CD management
role assumes this role, and calls the AWS API to get an auth token via argocd-k8s-auth. That token is used when
connecting to the added cluster's API endpoint.

If we create role `arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>` for a cluster being added to Argo CD, we
should set its trust policy to give the Argo CD management role permission to assume it. Note that we're granting the
Argo CD management role permission to assume this role above, but we also need to permit that action via the cluster
role's trust policy.

A suitable trust policy allowing the `<IAM_CLUSTER_ROLE>` to be assumed by the `<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>`
role looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

#### Access Entries

Each cluster's role (e.g. `arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>`) has no permission policy. Instead,
we associate that role with an EKS permission policy, which grants that role the ability to generate authentication
tokens to the cluster's API. This EKS permission policy decides what RBAC permissions are granted in that process.

An [access entry](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) (and the policy associated to
the role) can be created using the following commands:

```bash
# For each cluster being added to Argo CD
aws eks create-access-entry \
  --cluster-name my-eks-cluster-name \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE> \
  --type STANDARD \
  --kubernetes-groups [] # No groups needed

aws eks associate-access-policy \
  --cluster-name my-eks-cluster-name \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster \
  --principal-arn arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>
```

The above role is granted cluster admin permissions via `AmazonEKSClusterAdminPolicy`. The Argo CD management role that
assumes this role is therefore granted the same cluster admin permissions when it generates an API token when adding
the associated EKS cluster.

#### AWS Auth (Deprecated)

Instead of using Access Entries, you may need to use the deprecated `aws-auth`.

If so, the `roleARN` of each managed cluster needs to be added to each respective cluster's `aws-auth` config map (see
[Enabling IAM principal access to your cluster](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html)),
as well as having an assume role policy which allows it to be assumed by the Argo CD pod role.

An example assume role policy for a cluster which is managed by Argo CD:

```json
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Principal": {
      "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<ARGO_CD_MANAGEMENT_IAM_ROLE_NAME>"
    }
  }
}
```

Example kube-system aws-auth configmap for your cluster managed by Argo CD:

```yaml
apiVersion: v1
data:
  # Other groups and accounts omitted for brevity. Ensure that no other rolearns and/or groups are
  # inadvertently removed, or you risk borking access to your cluster.
  #
  # The group name is a RoleBinding which you use to map to a [Cluster]Role. See
  # https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-binding-examples
  mapRoles: |
    - "groups":
      - "<GROUP-NAME-IN-K8S-RBAC>"
      "rolearn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>"
      "username": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_CLUSTER_ROLE>"
      # Use the role ARN for both rolearn and username.
```

#### Alternative EKS Authentication Methods

In some scenarios it may not be possible to use IRSA, such as when the Argo CD cluster is running on a different cloud
provider's platform. In this case, there are two options:

1. Use `execProviderConfig` to call the AWS authentication mechanism which enables the injection of environment variables to supply credentials
2. Leverage the new AWS profile option available in Argo CD release 2.10

Both of these options will require the steps involving IAM and the `aws-auth` config map (defined above) to provide the
principal with access to the cluster.

##### Using execProviderConfig with Environment Variables

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster
  server: https://mycluster.example.com
  namespaces: "my,managed,namespaces"
  clusterResources: "true"
  config: |
    {
      "execProviderConfig": {
        "command": "argocd-k8s-auth",
        "args": ["aws", "--cluster-name", "my-eks-cluster"],
        "apiVersion": "client.authentication.k8s.io/v1beta1",
        "env": {
          "AWS_REGION": "xx-east-1",
          "AWS_ACCESS_KEY_ID": "",
          "AWS_SECRET_ACCESS_KEY": "",
          "AWS_SESSION_TOKEN": ""
        }
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": ""
      }
    }
```

This example assumes that the role is attached to the credentials
that have been supplied. If this is not the case, the role can be appended to the `args` section like so:

```yaml
...
      "args": ["aws", "--cluster-name", "my-eks-cluster", "--role-arn", "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>"],
...
```

This construct can be used in conjunction with something like the External Secrets Operator to avoid storing the keys
in plain text and additionally helps to provide a foundation for key rotation.

##### Using An AWS Profile For Authentication

The option to use profiles, added in release 2.10, provides a method for supplying credentials while still using the
standard Argo CD EKS cluster declaration with an additional command flag that points to an AWS credentials file:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: "mycluster.com"
  server: "https://mycluster.com"
  config: |
    {
      "awsAuthConfig": {
        "clusterName": "my-eks-cluster-name",
        "roleARN": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>",
        "profile": "/mount/path/to/my-profile-file"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

This will instruct Argo CD to read the file at the provided path and use the credentials defined within to authenticate
to AWS.

The profile must be mounted in both the `argocd-server` and `argocd-application-controller` components in order for
this to work. For example, the following values can be defined in a Helm-based Argo CD deployment:

```yaml
controller:
  extraVolumes:
    - name: my-profile-volume
      secret:
        secretName: my-aws-profile
        items:
          - key: my-profile-file
            path: my-profile-file
  extraVolumeMounts:
    - name: my-profile-mount
      mountPath: /mount/path/to
      readOnly: true

server:
  extraVolumes:
    - name: my-profile-volume
      secret:
        secretName: my-aws-profile
        items:
          - key: my-profile-file
            path: my-profile-file
  extraVolumeMounts:
    - name: my-profile-mount
      mountPath: /mount/path/to
      readOnly: true
```

Where the secret is defined as follows:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-aws-profile
type: Opaque
stringData:
  my-profile-file: |
    [default]
    region = <aws_region>
    aws_access_key_id = <aws_access_key_id>
    aws_secret_access_key = <aws_secret_access_key>
    aws_session_token = <aws_session_token>
```

!!! note
    Secret mounts are updated on an interval, not in real time. If rotation is a requirement, ensure the token lifetime
    outlives the mount update interval and the rotation process doesn't immediately invalidate the existing token.

### GKE

GKE cluster secret example using argocd-k8s-auth and
[Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.example.com
  server: https://mycluster.example.com
  config: |
    {
      "execProviderConfig": {
        "command": "argocd-k8s-auth",
        "args": ["gcp"],
        "apiVersion": "client.authentication.k8s.io/v1beta1"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

Note that you must enable Workload Identity on your GKE cluster, create a GCP service account with the appropriate IAM
role and bind it to the Kubernetes service accounts for argocd-application-controller and argocd-server (showing Pod
logs on UI). See [Use Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) and
[Authenticating to the Kubernetes API server](https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication).

### AKS

Azure cluster secret example using argocd-k8s-auth and [kubelogin](https://github.com/Azure/kubelogin). The option
*azure* to the argocd-k8s-auth execProviderConfig encapsulates the *get-token* command for kubelogin. Depending upon
which authentication flow is desired (devicecode, spn, ropc, msi, azurecli, workloadidentity), set the environment
variable AAD_LOGIN_METHOD with this value. Set other appropriate environment variables depending upon which
authentication flow is desired.

| Variable Name | Description |
|---------------|-------------|
| AAD_LOGIN_METHOD | One of devicecode, spn, ropc, msi, azurecli, or workloadidentity |
| AAD_SERVICE_PRINCIPAL_CLIENT_CERTIFICATE | AAD client cert in pfx. Used in spn login |
| AAD_SERVICE_PRINCIPAL_CLIENT_ID | AAD client application ID |
| AAD_SERVICE_PRINCIPAL_CLIENT_SECRET | AAD client application secret |
| AAD_USER_PRINCIPAL_NAME | Used in the ropc flow |
| AAD_USER_PRINCIPAL_PASSWORD | Used in the ropc flow |
| AZURE_TENANT_ID | The AAD tenant ID |
| AZURE_AUTHORITY_HOST | Used in the WorkloadIdentityLogin flow |
| AZURE_FEDERATED_TOKEN_FILE | Used in the WorkloadIdentityLogin flow |
| AZURE_CLIENT_ID | Used in the WorkloadIdentityLogin flow |

In addition to the environment variables above, argocd-k8s-auth accepts two extra environment variables to set the AAD
environment, and to set the AAD server application ID. The AAD server application ID will default to
6dae42f8-4368-4678-94ff-3960e28e3630 if not specified. See
[here](https://github.com/azure/kubelogin#exec-plugin-format) for details.

| Variable Name | Description |
|---------------|-------------|
| AAD_ENVIRONMENT_NAME | The azure environment to use, default of AzurePublicCloud |
| AAD_SERVER_APPLICATION_ID | The optional AAD server application ID, defaults to 6dae42f8-4368-4678-94ff-3960e28e3630 |

This is an example of using the
[federated workload login flow](https://github.com/Azure/kubelogin#azure-workload-federated-identity-non-interactive).
The federated token file needs to be mounted as a secret into Argo CD, so it can be used in the flow. The
location of the token file needs to be set in the environment variable AZURE_FEDERATED_TOKEN_FILE.

If your AKS cluster utilizes the
[Mutating Admission Webhook](https://azure.github.io/azure-workload-identity/docs/installation/mutating-admission-webhook.html)
from the Azure Workload Identity project, follow these steps to enable the `argocd-application-controller` and
`argocd-server` pods to use the federated identity:

1. **Label the Pods**: Add the `azure.workload.identity/use: "true"` label to the `argocd-application-controller` and `argocd-server` pods.
2. **Create Federated Identity Credential**: Generate an Azure federated identity credential for the `argocd-application-controller` and `argocd-server` service accounts. Refer to the [Federated Identity Credential](https://azure.github.io/azure-workload-identity/docs/topics/federated-identity-credential.html) documentation for detailed instructions.
3. **Add Annotations to Service Account**: Add `"azure.workload.identity/client-id": "<CLIENT_ID>"` and `"azure.workload.identity/tenant-id": "<TENANT_ID>"` annotations to the `argocd-application-controller` and `argocd-server` service accounts using the details from the federated credential.
4. **Set the AZURE_CLIENT_ID**: Update the `AZURE_CLIENT_ID` in the cluster secret to match the client id of the newly created federated identity credential.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.example.com
  server: https://mycluster.example.com
  config: |
    {
      "execProviderConfig": {
        "command": "argocd-k8s-auth",
        "env": {
          "AAD_ENVIRONMENT_NAME": "AzurePublicCloud",
          "AZURE_CLIENT_ID": "fill in client id",
          "AZURE_TENANT_ID": "fill in tenant id", # optional, injected by workload identity mutating admission webhook if enabled
          "AZURE_FEDERATED_TOKEN_FILE": "/opt/path/to/federated_file.json", # optional, injected by workload identity mutating admission webhook if enabled
          "AZURE_AUTHORITY_HOST": "https://login.microsoftonline.com/", # optional, injected by workload identity mutating admission webhook if enabled
          "AAD_LOGIN_METHOD": "workloadidentity"
        },
        "args": ["azure"],
        "apiVersion": "client.authentication.k8s.io/v1beta1"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

This is an example of using the spn (service principal name) flow:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.example.com
  server: https://mycluster.example.com
  config: |
    {
      "execProviderConfig": {
        "command": "argocd-k8s-auth",
        "env": {
          "AAD_ENVIRONMENT_NAME": "AzurePublicCloud",
          "AAD_SERVICE_PRINCIPAL_CLIENT_SECRET": "fill in your service principal client secret",
          "AZURE_TENANT_ID": "fill in tenant id",
          "AAD_SERVICE_PRINCIPAL_CLIENT_ID": "fill in your service principal client id",
          "AAD_LOGIN_METHOD": "spn"
        },
        "args": ["azure"],
        "apiVersion": "client.authentication.k8s.io/v1beta1"
      },
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
```

## Helm Chart Repositories

Non-standard Helm Chart repositories have to be registered explicitly. Each repository must have `url`, `type`, and
`name` fields. For private Helm repos you may need to configure access credentials and HTTPS settings using
`username`, `password`, `tlsClientCertData`, and `tlsClientCertKey` fields.

Example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: istio
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: istio.io
  url: https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
  type: helm
---
apiVersion: v1
kind: Secret
metadata:
  name: argo-helm
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: argo
  url: https://argoproj.github.io/argo-helm
  type: helm
  username: my-username
  password: my-password
  tlsClientCertData: ...
  tlsClientCertKey: ...
```

## Resource Exclusion/Inclusion

Resources can be excluded from discovery and sync so that Argo CD is unaware of them. For example, the apiGroup/kind
`events.k8s.io/*`, `metrics.k8s.io/*`, and `coordination.k8s.io/Lease` are always excluded. Use cases:

* You have temporary issues and you want to exclude problematic resources.
* There are many of a kind of resources that impacts Argo CD's performance.
* Restrict Argo CD's access to certain kinds of resources, e.g. secrets. See [security.md#cluster-rbac](security.md#cluster-rbac).

To configure this, edit the `argocd-cm` config map:

```shell
kubectl edit configmap argocd-cm -n argocd
```

Add `resource.exclusions`, e.g.:

```yaml
apiVersion: v1
data:
  resource.exclusions: |
    - apiGroups:
        - "*"
      kinds:
        - "*"
      clusters:
        - https://192.168.0.20
kind: ConfigMap
```

The `resource.exclusions` node is a list of objects. Each object can have:

* `apiGroups`: A list of globs to match the API group.
* `kinds`: A list of kinds to match. Can be `"*"` to match all.
* `clusters`: A list of globs to match the cluster.

If all three match, then the resource is ignored.

In addition to exclusions, you might configure the list of included resources using the `resource.inclusions` setting.
By default, all resource group/kinds are included. The `resource.inclusions` setting allows customizing the list of
included group/kinds:

```yaml
apiVersion: v1
data:
  resource.inclusions: |
    - apiGroups:
        - "*"
      kinds:
        - Deployment
      clusters:
        - https://192.168.0.20
kind: ConfigMap
```

The
`resource.inclusions` and `resource.exclusions` might be used together. The final list of resources includes
group/kinds specified in `resource.inclusions` minus group/kinds specified in the `resource.exclusions` setting.

Notes:

* Quote globs in your YAML to avoid parsing errors.
* Invalid globs result in the whole rule being ignored.
* If you add a rule that matches existing resources, these will appear in the interface as `OutOfSync`.

## Mask sensitive Annotations on Secrets

An optional comma-separated list of `metadata.annotations` keys can be configured with
`resource.sensitive.mask.annotations` to mask their values in UI/CLI on Secrets.

```yaml
  resource.sensitive.mask.annotations: openshift.io/token-secret.value, api-key
```

## Auto respect RBAC for controller

The Argo CD controller can be restricted from discovering/syncing specific resources using just controller RBAC,
without having to manually configure resource exclusions. This feature can be enabled by setting the
`resource.respectRBAC` key in `argocd-cm`; once it is set, the controller will automatically stop watching resources
that it does not have permission to list/access. Possible values for `resource.respectRBAC` are:

* `strict`: This setting checks whether the list call made by the controller is forbidden/unauthorized and, if it is, it will cross-check the permission by making a `SelfSubjectAccessReview` call for the resource.
* `normal`: This will only check whether the list call response is forbidden/unauthorized and skip the `SelfSubjectAccessReview` call, to minimize any extra api-server calls.
* unset/empty (default): This will disable the feature and the controller will continue to monitor all resources.

Users who are comfortable with an increase in kube api-server calls can opt for the `strict` option, while users who
are concerned with higher api calls and are willing to compromise on accuracy can opt for the `normal` option.

Notes:

* When set to use `strict` mode, the controller must have RBAC permission to `create` a `SelfSubjectAccessReview` resource.
* The `SelfSubjectAccessReview` request will only be made for the `list` verb; it is assumed that if `list` is allowed for a resource then all other permissions are also available to the controller.

Example `argocd-cm` with `resource.respectRBAC` set to `strict`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.respectRBAC: "strict"
```

## Resource Custom Labels

Custom Labels configured with `resource.customLabels` (comma separated string) will be displayed in the UI (for any
resource that defines them).

## Labels on Application Events

An optional comma-separated list of `metadata.labels` keys can be configured with `resource.includeEventLabelKeys` to
add to Kubernetes events generated for Argo CD Applications. When events are generated for Applications containing the
specified labels, the controller adds the matching labels to the event. This establishes an easy link between the event
and the application, allowing for filtering using labels. In case of conflict between labels on the Application and
AppProject, the Application label values are prioritized and added to the event.

```yaml
  resource.includeEventLabelKeys: team,env
```

To exclude certain labels from events, use the `resource.excludeEventLabelKeys` key, which takes a comma-separated list
of `metadata.labels` keys.

```yaml
  resource.excludeEventLabelKeys: environment,bu
```

Both `resource.includeEventLabelKeys` and `resource.excludeEventLabelKeys` support wildcards.

## SSO & RBAC

* SSO configuration details: [SSO](./user-management/index.md)
* RBAC configuration details: [RBAC](./rbac.md)

## Manage Argo CD Using Argo CD

Argo CD is able to manage itself since all settings are represented by Kubernetes manifests. The suggested way is to
create a [Kustomize](https://github.com/kubernetes-sigs/kustomize)-based application which uses base Argo CD manifests
from [https://github.com/argoproj/argo-cd](https://github.com/argoproj/argo-cd/tree/stable/manifests) and apply
required changes on top.

Example of `kustomization.yaml`:

```yaml
# additional resources like ingress rules, cluster and repository secrets.
resources:
- github.com/argoproj/argo-cd//manifests/cluster-install?ref=stable
- clusters-secrets.yaml
- repos-secrets.yaml

# changes to config maps
patches:
- path: overlays/argo-cd-cm.yaml
```

The live example of self-managed Argo CD config is available at
[https://cd.apps.argoproj.io](https://cd.apps.argoproj.io) and with configuration stored at
[argoproj/argoproj-deployments](https://github.com/argoproj/argoproj-deployments/tree/master/argocd).

!!! note
    You will need to sign in using your GitHub account to get access to [https://cd.apps.argoproj.io](https://cd.apps.argoproj.io)
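The inclusion/exclusion semantics described earlier (the final list is the group/kinds matched by
`resource.inclusions` minus those matched by `resource.exclusions`, with all three of `apiGroups`, `kinds`, and
`clusters` having to match for a rule to apply) can be sketched as follows. This is an illustrative model, not Argo
CD's actual code: the helpers `matches` and `is_included` are hypothetical, and `fnmatch` stands in for Argo CD's glob
library.

```python
from fnmatch import fnmatch

def matches(rule: dict, api_group: str, kind: str, cluster: str) -> bool:
    """A rule applies only when apiGroups, kinds, AND clusters all match.
    In this sketch, a missing list defaults to ["*"] (match everything)."""
    def any_glob(patterns, value):
        return any(fnmatch(value, p) for p in patterns)
    return (any_glob(rule.get("apiGroups", ["*"]), api_group)
            and any_glob(rule.get("kinds", ["*"]), kind)
            and any_glob(rule.get("clusters", ["*"]), cluster))

def is_included(inclusions, exclusions, api_group, kind, cluster) -> bool:
    # By default every group/kind is included; inclusions narrow the set,
    # exclusions then subtract from it.
    included = not inclusions or any(matches(r, api_group, kind, cluster) for r in inclusions)
    excluded = any(matches(r, api_group, kind, cluster) for r in exclusions)
    return included and not excluded

# Rule shapes mirror the argocd-cm examples above.
inclusions = [{"apiGroups": ["*"], "kinds": ["Deployment"], "clusters": ["https://192.168.0.20"]}]
exclusions = [{"apiGroups": ["events.k8s.io", "metrics.k8s.io"], "kinds": ["*"]}]

print(is_included(inclusions, exclusions, "apps", "Deployment", "https://192.168.0.20"))   # True
print(is_included(inclusions, exclusions, "apps", "StatefulSet", "https://192.168.0.20"))  # False
print(is_included([], exclusions, "events.k8s.io", "Event", "https://192.168.0.20"))       # False
```

The last case illustrates why the always-excluded groups (`events.k8s.io/*`, `metrics.k8s.io/*`) stay hidden even when
no inclusions are configured.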
{"questions":"argocd Config Management Plugins a Config Management Plugin CMP Argo CD s native config management tools are Helm Jsonnet and Kustomize If you want to use a different config task of building manifests to the plugin Helm OCI or git repository When a config management plugin is correctly configured the repo server may delegate the management tools or if Argo CD s native tool support does not include a feature you need you might need to turn to The Argo CD repo server component is in charge of building Kubernetes manifests based on some source files from a","answers":"\n# Config Management Plugins\n\nArgo CD's \"native\" config management tools are Helm, Jsonnet, and Kustomize. If you want to use a different config\nmanagement tools, or if Argo CD's native tool support does not include a feature you need, you might need to turn to\na Config Management Plugin (CMP).\n\nThe Argo CD \"repo server\" component is in charge of building Kubernetes manifests based on some source files from a\nHelm, OCI, or git repository. When a config management plugin is correctly configured, the repo server may delegate the\ntask of building manifests to the plugin.\n\nThe following sections will describe how to create, install, and use plugins. Check out the\n[example plugins](https:\/\/github.com\/argoproj\/argo-cd\/tree\/master\/examples\/plugins) for additional guidance.\n\n!!! warning\n    Plugins are granted a level of trust in the Argo CD system, so it is important to implement plugins securely. Argo\n    CD administrators should only install plugins from trusted sources, and they should audit plugins to weigh their\n    particular risks and benefits.\n\n## Installing a config management plugin\n\n### Sidecar plugin\n\nAn operator can configure a plugin tool via a sidecar to repo-server. 
The following changes are required to configure a new plugin:\n\n#### Write the plugin configuration file\n\nPlugins will be configured via a ConfigManagementPlugin manifest located inside the plugin container.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ConfigManagementPlugin\nmetadata:\n  # The name of the plugin must be unique within a given Argo CD instance.\n  name: my-plugin\nspec:\n  # The version of your plugin. Optional. If specified, the Application's spec.source.plugin.name field\n  # must be <plugin name>-<plugin version>.\n  version: v1.0\n  # The init command runs in the Application source directory at the beginning of each manifest generation. The init\n  # command can output anything. A non-zero status code will fail manifest generation.\n  init:\n    # Init always happens immediately before generate, but its output is not treated as manifests.\n    # This is a good place to, for example, download chart dependencies.\n    command: [sh]\n    args: [-c, 'echo \"Initializing...\"']\n  # The generate command runs in the Application source directory each time manifests are generated. Standard output\n  # must be ONLY valid Kubernetes Objects in either YAML or JSON. A non-zero exit code will fail manifest generation.\n  # To write log messages from the command, write them to stderr; they will always be displayed.\n  # Error output will be sent to the UI, so avoid printing sensitive information (such as secrets).\n  generate:\n    command: [sh, -c]\n    args:\n      - |\n        echo \"{\\\"kind\\\": \\\"ConfigMap\\\", \\\"apiVersion\\\": \\\"v1\\\", \\\"metadata\\\": { \\\"name\\\": \\\"$ARGOCD_APP_NAME\\\", \\\"namespace\\\": \\\"$ARGOCD_APP_NAMESPACE\\\", \\\"annotations\\\": {\\\"Foo\\\": \\\"$ARGOCD_ENV_FOO\\\", \\\"KubeVersion\\\": \\\"$KUBE_VERSION\\\", \\\"KubeApiVersion\\\": \\\"$KUBE_API_VERSIONS\\\",\\\"Bar\\\": \\\"baz\\\"}}}\"\n  # The discovery config is applied to a repository. 
If every configured discovery tool matches, then the plugin may be\n  # used to generate manifests for Applications using the repository. If the discovery config is omitted then the plugin \n  # will not match any application but can still be invoked explicitly by specifying the plugin name in the app spec. \n  # Only one of fileName, find.glob, or find.command should be specified. If multiple are specified then only the \n  # first (in that order) is evaluated.\n  discover:\n    # fileName is a glob pattern (https:\/\/pkg.go.dev\/path\/filepath#Glob) that is applied to the Application's source \n    # directory. If there is a match, this plugin may be used for the Application.\n    fileName: \".\/subdir\/s*.yaml\"\n    find:\n      # This does the same thing as fileName, but it supports double-star (nested directory) glob patterns.\n      glob: \"**\/Chart.yaml\"\n      # The find command runs in the repository's root directory. To match, it must exit with status code 0 _and_ \n      # produce non-empty output to standard out.\n      command: [sh, -c, find . -name env.yaml]\n  # The parameters config describes what parameters the UI should display for an Application. It is up to the user to\n  # actually set parameters in the Application manifest (in spec.source.plugin.parameters). The announcements _only_\n  # inform the \"Parameters\" tab in the App Details page of the UI.\n  parameters:\n    # Static parameter announcements are sent to the UI for _all_ Applications handled by this plugin.\n    # Think of the `string`, `array`, and `map` values set here as \"defaults\". 
It is up to the plugin author to make \n    # sure that these default values actually reflect the plugin's behavior if the user doesn't explicitly set different\n    # values for those parameters.\n    static:\n      - name: string-param\n        title: Description of the string param\n        tooltip: Tooltip shown when the user hovers the parameter name\n        # If this field is set to true, the UI will indicate to the user that they must set the value.\n        required: false\n        # itemType tells the UI how to present the parameter's value (or, for arrays and maps, values). Default is\n        # \"string\". Examples of other types which may be supported in the future are \"boolean\" or \"number\".\n        # Even if the itemType is not \"string\", the parameter value from the Application spec will be sent to the plugin\n        # as a string. It's up to the plugin to do the appropriate conversion.\n        itemType: \"\"\n        # collectionType describes what type of value this parameter accepts (string, array, or map) and allows the UI\n        # to present a form to match that type. Default is \"string\". This field must be present for non-string types.\n        # It will not be inferred from the presence of an `array` or `map` field.\n        collectionType: \"\"\n        # This field communicates the parameter's default value to the UI. Setting this field is optional.\n        string: default-string-value\n      # All the fields above besides \"string\" apply to both the array and map type parameter announcements.\n      - name: array-param\n        # This field communicates the parameter's default value to the UI. Setting this field is optional.\n        array: [default, items]\n        collectionType: array\n      - name: map-param\n        # This field communicates the parameter's default value to the UI. 
Setting this field is optional.\n        map:\n          some: value\n        collectionType: map\n    # Dynamic parameter announcements are announcements specific to an Application handled by this plugin. For example,\n    # the values for a Helm chart's values.yaml file could be sent as parameter announcements.\n    dynamic:\n      # The command is run in an Application's source directory. Standard output must be JSON matching the schema of the\n      # static parameter announcements list.\n      command: [echo, '[{\"name\": \"example-param\", \"string\": \"default-string-value\"}]']\n\n  # If set to `true` then the plugin receives repository files with original file mode. Dangerous since the repository\n  # might have executable files. Set to true only if you trust the CMP plugin authors.\n  preserveFileMode: false\n\n  # If set to `true` then the plugin can retrieve git credentials from the reposerver during generate. Plugin authors \n  # should ensure these credentials are appropriately protected during execution.\n  provideGitCreds: false\n```\n\n!!! note\n    While the ConfigManagementPlugin _looks like_ a Kubernetes object, it is not actually a custom resource. \n    It only follows Kubernetes-style spec conventions.\n\nThe `generate` command must print a valid Kubernetes YAML or JSON object stream to stdout. Both `init` and `generate` commands are executed inside the application source directory.\n\nThe `discover.fileName` is used as a [glob](https:\/\/pkg.go.dev\/path\/filepath#Glob) pattern to determine whether an\napplication repository is supported by the plugin or not. \n\n```yaml\n  discover:\n    find:\n      command: [sh, -c, find . -name env.yaml]\n```\n\nIf `discover.fileName` is not provided, the `discover.find.command` is executed in order to determine whether an\napplication repository is supported by the plugin or not. 
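The matching contract can be exercised outside Argo CD; in this sketch, a temporary directory stands in for an Application source directory:

```shell
# Minimal sketch of the discover.find.command contract: the plugin matches
# only when the command exits 0 AND prints non-empty output to stdout.
repo=$(mktemp -d)            # stand-in for the Application source directory
touch "$repo/env.yaml"
out=$(cd "$repo" && find . -name env.yaml)
if [ -n "$out" ]; then
  echo "plugin matches"      # find printed ./env.yaml and exited with 0
else
  echo "no match"
fi
```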
The `find` command should return a non-error exit code\nand produce output to stdout when the application source type is supported.\n\n#### Place the plugin configuration file in the sidecar\n\nArgo CD expects the plugin configuration file to be located at `\/home\/argocd\/cmp-server\/config\/plugin.yaml` in the sidecar.\n\nIf you use a custom image for the sidecar, you can add the file directly to that image.\n\n```dockerfile\nWORKDIR \/home\/argocd\/cmp-server\/config\/\nCOPY plugin.yaml .\/\n```\n\nIf you use a stock image for the sidecar or would rather maintain the plugin configuration in a ConfigMap, just nest the\nplugin config file in a ConfigMap under the `plugin.yaml` key and mount the ConfigMap in the sidecar (see next section).\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: my-plugin-config\ndata:\n  plugin.yaml: |\n    apiVersion: argoproj.io\/v1alpha1\n    kind: ConfigManagementPlugin\n    metadata:\n      name: my-plugin\n    spec:\n      version: v1.0\n      init:\n        command: [sh, -c, 'echo \"Initializing...\"']\n      generate:\n        command: [sh, -c, 'echo \"{\\\"kind\\\": \\\"ConfigMap\\\", \\\"apiVersion\\\": \\\"v1\\\", \\\"metadata\\\": { \\\"name\\\": \\\"$ARGOCD_APP_NAME\\\", \\\"namespace\\\": \\\"$ARGOCD_APP_NAMESPACE\\\", \\\"annotations\\\": {\\\"Foo\\\": \\\"$ARGOCD_ENV_FOO\\\", \\\"KubeVersion\\\": \\\"$KUBE_VERSION\\\", \\\"KubeApiVersion\\\": \\\"$KUBE_API_VERSIONS\\\",\\\"Bar\\\": \\\"baz\\\"}}}\"']\n      discover:\n        fileName: \".\/subdir\/s*.yaml\"\n```\n\n#### Register the plugin sidecar\n\nTo install a plugin, patch argocd-repo-server to run the plugin container as a sidecar, with argocd-cmp-server as its \nentrypoint. You can use either an off-the-shelf or a custom-built plugin image as the sidecar image. For example:\n\n```yaml\ncontainers:\n- name: my-plugin\n  command: [\/var\/run\/argocd\/argocd-cmp-server] # Entrypoint should be Argo CD lightweight CMP server i.e. 
argocd-cmp-server\n  image: ubuntu # This can be an off-the-shelf or custom-built image\n  securityContext:\n    runAsNonRoot: true\n    runAsUser: 999\n  volumeMounts:\n    - mountPath: \/var\/run\/argocd\n      name: var-files\n    - mountPath: \/home\/argocd\/cmp-server\/plugins\n      name: plugins\n    # Remove this volumeMount if you've chosen to bake the config file into the sidecar image.\n    - mountPath: \/home\/argocd\/cmp-server\/config\/plugin.yaml\n      subPath: plugin.yaml\n      name: my-plugin-config\n    # Starting with v2.4, do NOT mount the same tmp volume as the repo-server container. The filesystem separation helps \n    # mitigate path traversal attacks.\n    - mountPath: \/tmp\n      name: cmp-tmp\nvolumes:\n- configMap:\n    name: my-plugin-config\n  name: my-plugin-config\n- emptyDir: {}\n  name: cmp-tmp\n``` \n\n!!! important \"Double-check these items\"\n    1. Make sure to use `\/var\/run\/argocd\/argocd-cmp-server` as an entrypoint. The `argocd-cmp-server` is a lightweight gRPC service that allows Argo CD to interact with the plugin.\n    2. Make sure that the sidecar container is running as user 999.\n    3. Make sure that the plugin configuration file is present at `\/home\/argocd\/cmp-server\/config\/plugin.yaml`. It can either be volume-mapped via a ConfigMap or baked into the image.\n\n### Using environment variables in your plugin\n\nPlugin commands have access to:\n\n1. The system environment variables of the sidecar\n2. [Standard build environment variables](..\/user-guide\/build-environment.md)\n3. 
Variables in the Application spec (References to system and build variables will get interpolated in the variables' values):\n\n        apiVersion: argoproj.io\/v1alpha1\n        kind: Application\n        spec:\n          source:\n            plugin:\n              env:\n                - name: FOO\n                  value: bar\n                - name: REV\n                  value: test-$ARGOCD_APP_REVISION\n    \n    Before reaching the `init.command`, `generate.command`, and `discover.find.command` commands, Argo CD prefixes all \n    user-supplied environment variables (#3 above) with `ARGOCD_ENV_`. This prevents users from directly setting \n    potentially-sensitive environment variables.\n\n4. Parameters in the Application spec:\n\n        apiVersion: argoproj.io\/v1alpha1\n        kind: Application\n        spec:\n         source:\n           plugin:\n             parameters:\n               - name: values-files\n                 array: [values-dev.yaml]\n               - name: helm-parameters\n                 map:\n                   image.tag: v1.2.3\n   \n    The parameters are available as JSON in the `ARGOCD_APP_PARAMETERS` environment variable. The example above would\n    produce this JSON:\n   \n        [{\"name\": \"values-files\", \"array\": [\"values-dev.yaml\"]}, {\"name\": \"helm-parameters\", \"map\": {\"image.tag\": \"v1.2.3\"}}]\n   \n    !!! note\n        Parameter announcements, even if they specify defaults, are _not_ sent to the plugin in `ARGOCD_APP_PARAMETERS`.\n        Only parameters explicitly set in the Application spec are sent to the plugin. It is up to the plugin to apply\n        the same defaults as the ones announced to the UI.\n   \n    The same parameters are also available as individual environment variables. 
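As a quick illustration of the `ARGOCD_ENV_` prefixing described in item 3, a user-supplied variable `FOO` is only visible to plugin commands under the prefixed name (a standalone sketch, not something Argo CD runs itself):

```shell
# Sketch: Argo CD exposes the user-supplied FOO=bar to plugin commands as
# ARGOCD_ENV_FOO; a plugin command must read the prefixed name, not $FOO.
ARGOCD_ENV_FOO=bar sh -c 'echo "user value: $ARGOCD_ENV_FOO"'
# prints: user value: bar
```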
The names of the environment variables\n    follow this convention:\n   \n           - name: some-string-param\n             string: some-string-value\n           # PARAM_SOME_STRING_PARAM=some-string-value\n           \n           - name: some-array-param\n             array: [item1, item2]\n           # PARAM_SOME_ARRAY_PARAM_0=item1\n           # PARAM_SOME_ARRAY_PARAM_1=item2\n           \n           - name: some-map-param\n             map:\n               image.tag: v1.2.3\n           # PARAM_SOME_MAP_PARAM_IMAGE_TAG=v1.2.3\n   \n!!! warning \"Sanitize\/escape user input\" \n    As part of Argo CD's manifest generation system, config management plugins are treated with a level of trust. Be\n    sure to escape user input in your plugin to prevent malicious input from causing unwanted behavior.\n\n## Using a config management plugin with an Application\n\nYou may leave the `name` field\nempty in the `plugin` section for the plugin to be automatically matched with the Application based on its discovery rules. If you do specify the name, make sure \nit is either `<metadata.name>-<spec.version>` if a version is mentioned in the `ConfigManagementPlugin` spec, or else just `<metadata.name>`. When the name is explicitly \nspecified, only that particular plugin will be used, and only if its discovery pattern\/command matches the provided application repo.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: guestbook\n  namespace: argocd\nspec:\n  project: default\n  source:\n    repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps.git\n    targetRevision: HEAD\n    path: guestbook\n    plugin:\n      env:\n        - name: FOO\n          value: bar\n```\n\nIf you don't need to set any environment variables, you can set an empty plugin section.\n\n```yaml\n    plugin: {}\n```\n\n!!! important\n    If your CMP command runs too long, the command will be killed, and the UI will show an error. 
The CMP server\n    respects the timeouts set by the `server.repo.server.timeout.seconds` and `controller.repo.server.timeout.seconds` \n    items in `argocd-cm`. Increase their values from the default of 60s.\n\n    Each CMP command will also independently time out based on the `ARGOCD_EXEC_TIMEOUT` set for the CMP sidecar. The default\n    is 90s. So if you increase the repo server timeout beyond 90s, be sure to set `ARGOCD_EXEC_TIMEOUT` on the\n    sidecar.\n    \n!!! note\n    Each Application can only have one config management plugin configured at a time. If you're converting an existing\n    plugin configured through the `argocd-cm` ConfigMap to a sidecar, make sure to update the plugin name to either `<metadata.name>-<spec.version>` \n    if a version was mentioned in the `ConfigManagementPlugin` spec or else just use `<metadata.name>`. You can also remove the name altogether \n    and let automatic discovery identify the plugin.\n\n!!! note\n    If a CMP renders blank manifests, and `prune` is set to `true`, Argo CD will automatically remove resources. CMP plugin authors should ensure errors are reflected in the exit code. Commonly, something like `kustomize build . | cat` won't propagate errors because of the pipe. Consider setting `set -o pipefail` so anything piped will pass errors on failure.\n\n## Debugging a CMP\n\nIf you are actively developing a sidecar-installed CMP, keep a few things in mind:\n\n1. If you are mounting plugin.yaml from a ConfigMap, you will have to restart the repo-server Pod so the plugin will\n   pick up the changes.\n2. If you have baked plugin.yaml into your image, you will have to build, push, and force a re-pull of that image on the\n   repo-server Pod so the plugin will pick up the changes. If you are using `:latest`, the Pod will always pull the new\n   image. If you're using a different, static tag, set `imagePullPolicy: Always` on the CMP's sidecar container.\n3. CMP errors are cached by the repo-server in Redis. 
Restarting the repo-server Pod will not clear the cache. Always\n   do a \"Hard Refresh\" when actively developing a CMP so you have the latest output.\n4. Verify your sidecar has started properly by viewing the Pod and seeing that two containers are running: `kubectl get pod -l app.kubernetes.io\/component=repo-server -n argocd`\n5. Write log messages to stderr and set the `--loglevel=info` flag in the sidecar. This will print everything written to stderr, even on successful command execution.\n\n### Other Common Errors\n| Error Message | Cause |\n| -- | -- |\n| `no matches for kind \"ConfigManagementPlugin\" in version \"argoproj.io\/v1alpha1\"` | The `ConfigManagementPlugin` CRD was deprecated in Argo CD 2.4 and removed in 2.8. This error means you've tried to put the configuration for your plugin directly into Kubernetes as a CRD. Refer to this [section of documentation](#write-the-plugin-configuration-file) for how to write the plugin configuration file and place it properly in the sidecar. |\n\n## Plugin tar stream exclusions\n\nIn order to increase the speed of manifest generation, certain files and folders can be excluded from being sent to your\nplugin. We recommend excluding your `.git` folder if it isn't necessary. Use Go's\n[filepath.Match](https:\/\/pkg.go.dev\/path\/filepath#Match) syntax. For example, use `.git\/*` to exclude the `.git` folder.\n\nYou can set it one of three ways:\n\n1. The `--plugin-tar-exclude` argument on the repo server.\n2. The `reposerver.plugin.tar.exclusions` key if you are using `argocd-cmd-params-cm`.\n3. Directly setting the `ARGOCD_REPO_SERVER_PLUGIN_TAR_EXCLUSIONS` environment variable on the repo server.\n\nFor option 1, the flag can be repeated multiple times. 
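As a sketch of option 3, the environment variable takes semicolon-separated globs; the `docs/*` pattern here is only an illustrative second exclusion:

```shell
# Illustrative only: exclude the .git folder and a hypothetical docs folder
# from the tar stream sent to the plugin, via the repo server env var.
export ARGOCD_REPO_SERVER_PLUGIN_TAR_EXCLUSIONS='.git/*;docs/*'
# Each semicolon-separated entry is applied with Go's filepath.Match syntax:
echo "$ARGOCD_REPO_SERVER_PLUGIN_TAR_EXCLUSIONS" | tr ';' '\n'
```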
For options 2 and 3, you can specify multiple globs by separating\nthem with semicolons.\n\n## Application manifest generation using argocd.argoproj.io\/manifest-generate-paths\n\nTo enhance the application manifest generation process, you can enable the use of the `argocd.argoproj.io\/manifest-generate-paths` annotation. When this flag is enabled, the resources specified by this annotation will be passed to the CMP server for generating application manifests, rather than sending the entire repository. This can be particularly useful for monorepos.\n\nYou can set it one of three ways:\n\n1. The `--plugin-use-manifest-generate-paths` argument on the repo server.\n2. The `reposerver.plugin.use.manifest.generate.paths` key if you are using `argocd-cmd-params-cm`.\n3. Directly setting the `ARGOCD_REPO_SERVER_PLUGIN_USE_MANIFEST_GENERATE_PATHS` environment variable on the repo server to `true`.\n\n## Migrating from argocd-cm plugins\n\nInstalling plugins by modifying the argocd-cm ConfigMap is deprecated as of v2.4 and has been completely removed starting in v2.8.\n\nCMP plugins work by adding a sidecar to `argocd-repo-server` along with a configuration in that sidecar located at `\/home\/argocd\/cmp-server\/config\/plugin.yaml`. An argocd-cm plugin can be easily converted with the following steps.\n\n### Convert the ConfigMap entry into a config file\n\nFirst, copy the plugin's configuration into its own YAML file. Take for example the following ConfigMap entry:\n\n```yaml\ndata:\n  configManagementPlugins: |\n    - name: pluginName\n      init:                          # Optional command to initialize application source directory\n        command: [\"sample command\"]\n        args: [\"sample args\"]\n      generate:                      # Command to generate Kubernetes Objects in either YAML or JSON\n        command: [\"sample command\"]\n        args: [\"sample args\"]\n      lockRepo: true                 # Defaults to false. 
See below.\n```\n\nThe `pluginName` item would be converted to a config file like this:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ConfigManagementPlugin\nmetadata:\n  name: pluginName\nspec:\n  init:                          # Optional command to initialize application source directory\n    command: [\"sample command\"]\n    args: [\"sample args\"]\n  generate:                      # Command to generate Kubernetes Objects in either YAML or JSON\n    command: [\"sample command\"]\n    args: [\"sample args\"]\n```\n\n!!! note\n    The `lockRepo` key is not relevant for sidecar plugins, because sidecar plugins do not share a single source repo\n    directory when generating manifests.\n\nNext, we need to decide how this yaml is going to be added to the sidecar. We can either bake the yaml directly into the image, or we can mount it from a ConfigMap. \n\nIf using a ConfigMap, our example would look like this:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: pluginName\n  namespace: argocd\ndata:\n  pluginName.yaml: |\n    apiVersion: argoproj.io\/v1alpha1\n    kind: ConfigManagementPlugin\n    metadata:\n      name: pluginName\n    spec:\n      init:                          # Optional command to initialize application source directory\n        command: [\"sample command\"]\n        args: [\"sample args\"]\n      generate:                      # Command to generate Kubernetes Objects in either YAML or JSON\n        command: [\"sample command\"]\n        args: [\"sample args\"]\n```\n\nThen this would be mounted in our plugin sidecar.\n\n### Write discovery rules for your plugin\n\nSidecar plugins can use either discovery rules or a plugin name to match Applications to plugins. 
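When matching by name, recall the convention described earlier: the effective plugin name is `<metadata.name>-<spec.version>` when a version is set, otherwise just `<metadata.name>`. A quick sketch, with illustrative name and version values:

```shell
# Sketch of how the effective plugin name is derived; the name and version
# values here are examples, not anything Argo CD ships with.
name="pluginName"
version="v1.0"    # leave empty if spec.version is not set
if [ -n "$version" ]; then
  echo "${name}-${version}"
else
  echo "${name}"
fi
```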
If the discovery rule is omitted \nthen you have to explicitly specify the plugin by name in the app spec or else that particular plugin will not match any app.\n\nIf you want to use discovery instead of the plugin name to match applications to your plugin, write rules applicable to \nyour plugin [using the instructions above](#write-the-plugin-configuration-file) and add them to your configuration \nfile.\n\nTo use the name instead of discovery, update the name in your application manifest to `<metadata.name>-<spec.version>` \nif a version was mentioned in the `ConfigManagementPlugin` spec or else just use `<metadata.name>`. For example:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: guestbook\nspec:\n  source:\n    plugin:\n      name: pluginName  # Delete this for auto-discovery (and set `plugin: {}` if `name` was the only value) or use the proper sidecar plugin name\n```\n\n### Make sure the plugin has access to the tools it needs\n\nPlugins configured with argocd-cm ran on the Argo CD image. This gave them access to all the tools installed on that\nimage by default (see the [Dockerfile](https:\/\/github.com\/argoproj\/argo-cd\/blob\/master\/Dockerfile) for the base image and\ninstalled tools).\n\nYou can either use a stock image (like ubuntu, busybox, or alpine\/k8s) or design your own base image with the tools your plugin needs. For\nsecurity, avoid using images with more binaries installed than what your plugin actually needs.\n\n### Test the plugin\n\nAfter installing the plugin as a sidecar [according to the directions above](#installing-a-config-management-plugin),\ntest it out on a few Applications before migrating all of them to the sidecar plugin.\n\nOnce tests have checked out, remove the plugin entry from your argocd-cm ConfigMap.\n\n### Additional Settings\n\n#### Preserve repository files mode\n\nBy default, a config management plugin receives source repository files with reset file mode. 
This is done for security\nreasons. If you want to preserve the original file mode, you can set `preserveFileMode` to `true` in the plugin spec:\n\n!!! warning\n    Make sure you trust the plugin you are using. If you set `preserveFileMode` to `true` then the plugin might receive\n    files with executable permissions, which can be a security risk.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ConfigManagementPlugin\nmetadata:\n  name: pluginName\nspec:\n  init:\n    command: [\"sample command\"]\n    args: [\"sample args\"]\n  generate:\n    command: [\"sample command\"]\n    args: [\"sample args\"]\n  preserveFileMode: true\n```\n\n#### Provide Git Credentials\n\nBy default, the config management plugin is responsible for providing its own credentials to additional Git repositories\nthat may need to be accessed during manifest generation. The reposerver has these credentials available in its git creds\nstore. When credential sharing is allowed, the git credentials used by the reposerver to clone the repository contents\nare shared for the lifetime of the execution of the config management plugin, utilizing git's `ASKPASS` method to make a\ncall from the config management sidecar container to the reposerver to retrieve the initialized git credentials.\n\nUtilizing `ASKPASS` means that credentials are not proactively shared, but rather only provided when an operation requires\nthem.\n\nTo allow the plugin to access the reposerver git credentials, you can set `provideGitCreds` to `true` in the plugin spec:\n\n!!! warning\n    Make sure you trust the plugin you are using. 
If you set `provideGitCreds` to `true` then the plugin will receive\n    credentials used to clone the source Git repository.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ConfigManagementPlugin\nmetadata:\n  name: pluginName\nspec:\n  init:\n    command: [\"sample command\"]\n    args: [\"sample args\"]\n  generate:\n    command: [\"sample command\"]\n    args: [\"sample args\"]\n  provideGitCreds: true\n```\n","site":"argocd","answers_cleaned":"
Application spec (references to system and build variables will get interpolated in the variables' values):

   ```yaml
   apiVersion: argoproj.io/v1alpha1
   kind: Application
   spec:
     source:
       plugin:
         env:
           - name: FOO
             value: bar
           - name: REV
             value: test-$ARGOCD_APP_REVISION
   ```

   Before reaching the `init.command`, `generate.command`, and `discover.find.command` commands, Argo CD prefixes all user-supplied environment variables (#3 above) with `ARGOCD_ENV_`. This prevents users from directly setting potentially-sensitive environment variables.

4. Parameters in the Application spec:

   ```yaml
   apiVersion: argoproj.io/v1alpha1
   kind: Application
   spec:
     source:
       plugin:
         parameters:
           - name: values-files
             array: [values-dev.yaml]
           - name: helm-parameters
             map:
               image.tag: v1.2.3
   ```

   The parameters are available as JSON in the `ARGOCD_APP_PARAMETERS` environment variable. The example above would produce this JSON:

   ```json
   [{"name": "values-files", "array": ["values-dev.yaml"]}, {"name": "helm-parameters", "map": {"image.tag": "v1.2.3"}}]
   ```

   !!! note
       Parameter announcements, even if they specify defaults, are _not_ sent to the plugin in `ARGOCD_APP_PARAMETERS`. Only parameters explicitly set in the Application spec are sent to the plugin. It is up to the plugin to apply the same defaults as the ones announced to the UI.

   The same parameters are also available as individual environment variables. The names of the environment variables follow this convention:

   ```yaml
   - name: some-string-param
     string: some-string-value
   # PARAM_SOME_STRING_PARAM=some-string-value

   - name: some-array-param
     value: [item1, item2]
   # PARAM_SOME_ARRAY_PARAM_0=item1
   # PARAM_SOME_ARRAY_PARAM_1=item2

   - name: some-map-param
     map:
       image.tag: v1.2.3
   # PARAM_SOME_MAP_PARAM_IMAGE_TAG=v1.2.3
   ```

!!! warning "Sanitize/escape user input"
    As part of Argo CD's manifest generation system, config management plugins are treated with a level of trust. Be sure to escape user input in your plugin to prevent malicious input from causing unwanted behavior.

## Using a config management plugin with an Application

You may leave the `name` field empty in the `plugin` section for the plugin to be automatically matched with the Application based on its discovery rules. If you do mention the name, make sure it is either `<metadata.name>-<spec.version>` (if a version is mentioned in the `ConfigManagementPlugin` spec) or else just `<metadata.name>`. When the name is explicitly specified, only that particular plugin will be used, and only if its discovery pattern/command matches the provided application repository.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
    plugin:
      env:
        - name: FOO
          value: bar
```

If you don't need to set any environment variables, you can set an empty plugin section.

```yaml
    plugin: {}
```

!!! important
    If your CMP command runs too long, the command will be killed, and the UI will show an error. The CMP server respects the timeouts set by the `server.repo.server.timeout.seconds` and `controller.repo.server.timeout.seconds` items in `argocd-cm`. Increase their values from the default of 60s.

    Each CMP command will also independently time out on the `ARGOCD_EXEC_TIMEOUT` set for the CMP sidecar. The default is 90s. So if you increase the repo server timeout beyond 90s, be sure to
    set `ARGOCD_EXEC_TIMEOUT` on the sidecar.

!!! note
    Each Application can only have one config management plugin configured at a time. If you're converting an existing plugin configured through the `argocd-cm` ConfigMap to a sidecar, make sure to update the plugin name to either `<metadata.name>-<spec.version>` (if a version was mentioned in the `ConfigManagementPlugin` spec) or else just use `<metadata.name>`. You can also remove the name altogether and let the automatic discovery identify the plugin.

!!! note
    If a CMP renders blank manifests, and `prune` is set to `true`, Argo CD will automatically remove resources. CMP plugin authors should ensure errors are part of the exit code. Commonly something like `kustomize build . | cat` won't pass errors because of the pipe. Consider setting `set -o pipefail` so anything piped will pass errors on failure.

## Debugging a CMP

If you are actively developing a sidecar-installed CMP, keep a few things in mind:

1. If you are mounting plugin.yaml from a ConfigMap, you will have to restart the repo-server Pod so the plugin will pick up the changes.
2. If you have baked plugin.yaml into your image, you will have to build, push, and force a re-pull of that image on the repo-server Pod so the plugin will pick up the changes. If you are using `:latest`, the Pod will always pull the new image. If you're using a different, static tag, set `imagePullPolicy: Always` on the CMP's sidecar container.
3. CMP errors are cached by the repo-server in Redis. Restarting the repo-server Pod will not clear the cache. Always do a "Hard Refresh" when actively developing a CMP so you have the latest output.
4. Verify your sidecar has started properly by viewing the Pod and seeing that two containers are running:

   ```shell
   kubectl get pod -l app.kubernetes.io/component=repo-server -n argocd
   ```

5. Write log messages to stderr and set the `--loglevel=info` flag in the sidecar. This will print everything written to stderr, even on successful command execution.

### Other Common Errors

| Error message | Cause |
| -- | -- |
| `no matches for kind "ConfigManagementPlugin" in version "argoproj.io/v1alpha1"` | The `ConfigManagementPlugin` CRD was deprecated in Argo CD 2.4 and removed in 2.8. This error means you've tried to put the configuration for your plugin directly into Kubernetes as a CRD. Refer to this [section of documentation](#write-the-plugin-configuration-file) for how to write the plugin configuration file and place it properly in the sidecar. |

## Plugin tar stream exclusions

In order to increase the speed of manifest generation, certain files and folders can be excluded from being sent to your plugin. We recommend excluding your `.git` folder if it isn't necessary. Use Go's [filepath.Match](https://pkg.go.dev/path/filepath#Match) syntax. For example, `.git/*` to exclude the `.git` folder.

You can set it one of three ways:

1. The `--plugin-tar-exclude` argument on the repo server.
2. The `reposerver.plugin.tar.exclusions` key if you are using `argocd-cmd-params-cm`.
3. Directly setting the `ARGOCD_REPO_SERVER_PLUGIN_TAR_EXCLUSIONS` environment variable on the repo server.

For option 1, the flag can be repeated multiple times. For options 2 and 3, you can specify multiple globs by separating them with semicolons.

## Application manifests generation using argocd.argoproj.io/manifest-generate-paths

To enhance the application manifests generation process, you can enable the use of the `argocd.argoproj.io/manifest-generate-paths` annotation. When this flag is enabled, the resources specified by this annotation will be passed to the CMP server for generating application manifests, rather than sending the entire repository. This can be particularly useful for monorepos.

You can set it one of three ways:

1. The `--plugin-use-manifest-generate-paths` argument on the repo server.
2. The `reposerver.plugin.use.manifest.generate.paths` key if you are using `argocd-cmd-params-cm`.
3. Directly setting the `ARGOCD_REPO_SERVER_PLUGIN_USE_MANIFEST_GENERATE_PATHS` environment variable on the repo server to `true`.

## Migrating from argocd-cm plugins

Installing plugins by modifying the argocd-cm ConfigMap is deprecated as of v2.4 and has been completely removed starting in v2.8.

CMP plugins work by adding a sidecar to `argocd-repo-server` along with a configuration in that sidecar located at `/home/argocd/cmp-server/config/plugin.yaml`. An argocd-cm plugin can be easily converted with the following steps:

### Convert the ConfigMap entry into a config file

First, copy the plugin's configuration into its own YAML file. Take for example the following ConfigMap entry:

```yaml
data:
  configManagementPlugins: |
    - name: pluginName
      init:                          # Optional command to initialize application source directory
        command: ["sample command"]
        args: ["sample args"]
      generate:                      # Command to generate Kubernetes Objects in either YAML or JSON
        command: ["sample command"]
        args: ["sample args"]
      lockRepo: true                 # Defaults to false. See below.
```

The `pluginName` item would be converted to a config file like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: pluginName
spec:
  init:                          # Optional command to initialize application source directory
    command: ["sample command"]
    args: ["sample args"]
  generate:                      # Command to generate Kubernetes Objects in either YAML or JSON
    command: ["sample command"]
    args: ["sample args"]
```

!!! note
    The `lockRepo` key is not relevant for sidecar plugins, because sidecar plugins do not share a single source repo directory when generating manifests.

Next, we need to decide how this YAML is going to be added to the sidecar. We can either bake the YAML directly into the image, or we can mount it from a ConfigMap. If using a
ConfigMap, our example would look like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pluginName
  namespace: argocd
data:
  pluginName.yaml: |
    apiVersion: argoproj.io/v1alpha1
    kind: ConfigManagementPlugin
    metadata:
      name: pluginName
    spec:
      init:                          # Optional command to initialize application source directory
        command: ["sample command"]
        args: ["sample args"]
      generate:                      # Command to generate Kubernetes Objects in either YAML or JSON
        command: ["sample command"]
        args: ["sample args"]
```

Then this would be mounted in our plugin sidecar.

### Write discovery rules for your plugin

Sidecar plugins can use either discovery rules or a plugin name to match Applications to plugins. If the discovery rule is omitted, then you have to explicitly specify the plugin by name in the app spec, or else that particular plugin will not match any app.

If you want to use discovery instead of the plugin name to match applications to your plugin, write rules applicable to your plugin [using the instructions above](#1-write-the-plugin-configuration-file) and add them to your configuration file.

To use the name instead of discovery, update the name in your application manifest to `<metadata.name>-<spec.version>` (if a version was mentioned in the `ConfigManagementPlugin` spec) or else just use `<metadata.name>`. For example:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
spec:
  source:
    plugin:
      name: pluginName  # Delete this for auto-discovery (and set `plugin: {}` if `name` was the only value) or use proper sidecar plugin name
```

### Make sure the plugin has access to the tools it needs

Plugins configured with argocd-cm ran on the Argo CD image. This gave them access to all the tools installed on that image by default (see the [Dockerfile](https://github.com/argoproj/argo-cd/blob/master/Dockerfile) for the base image and installed tools).

You can either use a stock image (like ubuntu, busybox, or alpine/k8s) or design your own base image with the tools your plugin needs. For security, avoid using images with more binaries installed than what your plugin actually needs.

### Test the plugin

After installing the plugin as a sidecar [according to the directions above](#installing-a-config-management-plugin), test it out on a few Applications before migrating all of them to the sidecar plugin.

Once tests have checked out, remove the plugin entry from your argocd-cm ConfigMap.

## Additional Settings

### Preserve repository files mode

By default, the config management plugin receives source repository files with a reset file mode. This is done for security reasons. If you want to preserve the original file mode, you can set `preserveFileMode` to `true` in the plugin spec:

!!! warning
    Make sure you trust the plugin you are using. If you set `preserveFileMode` to `true` then the plugin might receive files with executable permissions, which can be a security risk.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: pluginName
spec:
  init:
    command: ["sample command"]
    args: ["sample args"]
  generate:
    command: ["sample command"]
    args: ["sample args"]
  preserveFileMode: true
```

### Provide Git Credentials

By default, the config management plugin is responsible for providing its own credentials to additional Git repositories that may need to be accessed during manifest generation. The reposerver has these credentials available in its git creds store. When credential sharing is allowed, the git credentials used by the reposerver to clone the repository contents are shared for the lifetime of the execution of the config management plugin, utilizing git's `ASKPASS` method to make a call from the config management sidecar container to the reposerver to retrieve the initialized git credentials. Utilizing `ASKPASS` means that credentials are not proactively shared, but rather only provided when an operation requires them.

To allow the plugin to access the reposerver git credentials, you can set `provideGitCreds` to `true` in the plugin spec:

!!! warning
    Make sure you trust the plugin you are using. If you set `provideGitCreds` to `true` then the plugin will receive credentials used to clone the source Git repository.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: pluginName
spec:
  init:
    command: ["sample command"]
    args: ["sample args"]
  generate:
    command: ["sample command"]
    args: ["sample args"]
  provideGitCreds: true
```
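As a quick illustration of the earlier note about piped `generate` commands swallowing errors: the exit status of a plain pipeline is that of its last command, so a failure in the first stage is masked unless `pipefail` is set. A minimal sketch (assuming `bash` is available in the sidecar image; `false` stands in for a failing command like `kustomize build`):

```shell
# Without pipefail, the pipeline's exit status is that of the LAST command (cat),
# so the failing first stage is silently masked and the CMP server sees success.
bash -c 'false | cat'; echo "without pipefail: exit=$?"   # exit=0

# With pipefail, the pipeline fails if ANY stage fails, so the CMP server
# sees a non-zero exit code and reports an error instead of applying blank manifests.
bash -c 'set -o pipefail; false | cat'; echo "with pipefail: exit=$?"   # exit=1
```

This is why a `generate.command` wrapper script should start with `set -o pipefail` whenever it pipes the manifest-producing tool into another command.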
# Metrics

Argo CD exposes different sets of Prometheus metrics per server.

## Application Controller Metrics

Metrics about applications. Scraped at the `argocd-metrics:8082/metrics` endpoint.

| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_app_info` | gauge | Information about Applications. It contains labels such as `sync_status` and `health_status` that reflect the application state in Argo CD. |
| `argocd_app_condition` | gauge | Reports Application conditions. It contains the conditions currently present in the application status. |
| `argocd_app_k8s_request_total` | counter | Number of Kubernetes requests executed during application reconciliation. |
| `argocd_app_labels` | gauge | Argo Application labels converted to Prometheus labels. Disabled by default. See the section below about how to enable it. |
| `argocd_app_orphaned_resources_count` | gauge | Number of orphaned resources per application. |
| `argocd_app_reconcile` | histogram | Application reconciliation performance in seconds. |
| `argocd_app_sync_total` | counter | Counter for application sync history. |
| `argocd_cluster_api_resource_objects` | gauge | Number of k8s resource objects in the cache. |
| `argocd_cluster_api_resources` | gauge | Number of monitored Kubernetes API resources. |
| `argocd_cluster_cache_age_seconds` | gauge | Cluster cache age in seconds. |
| `argocd_cluster_connection_status` | gauge | The k8s cluster's current connection status. |
| `argocd_cluster_events_total` | counter | Number of processed k8s resource events. |
| `argocd_cluster_info` | gauge | Information about the cluster. |
| `argocd_kubectl_exec_pending` | gauge | Number of pending kubectl executions. |
| `argocd_kubectl_exec_total` | counter | Number of kubectl executions. |
| `argocd_redis_request_duration` | histogram | Redis request duration. |
| `argocd_redis_request_total` | counter | Number of redis requests executed during application reconciliation. |

If your Argo CD installation sees frequent application and project creation and deletion, the metrics page will keep the application and project history in its cache. If you are having issues because of high metrics cardinality due to deleted resources, you can schedule a metrics reset to clean the history with an application controller flag, for example: `--metrics-cache-expiration="24h0m0s"`.

### Exposing Application labels as Prometheus metrics

There are use cases where Argo CD Applications contain labels that are desired to be exposed as Prometheus metrics. Some examples are:

* Having the team name as a label to allow routing alerts to specific receivers
* Creating dashboards broken down by business units

As the Application labels are specific to each company, this feature is disabled by default. To enable it, add the `--metrics-application-labels` flag to the Argo CD application controller.

The example below will expose the Argo CD Application labels `team-name` and `business-unit` to Prometheus:

```yaml
    containers:
    - command:
      - argocd-application-controller
      - --metrics-application-labels
      - team-name
      - --metrics-application-labels
      - business-unit
```

In this case, the metric would look like:

```
# TYPE argocd_app_labels gauge
argocd_app_labels{label_business_unit="bu-id-1",label_team_name="my-team",name="my-app-1",namespace="argocd",project="important-project"} 1
argocd_app_labels{label_business_unit="bu-id-1",label_team_name="my-team",name="my-app-2",namespace="argocd",project="important-project"} 1
argocd_app_labels{label_business_unit="bu-id-2",label_team_name="another-team",name="my-app-3",namespace="argocd",project="important-project"} 1
```

### Exposing Application conditions as Prometheus metrics

There are use cases where Argo CD Applications contain conditions that are desired to be exposed as Prometheus metrics. Some examples are:

* Hunting orphaned resources across all deployed applications
* Knowing which resources are excluded from Argo CD

As the Application conditions are specific to each company, this feature is disabled by default. To enable it, add the `--metrics-application-conditions` flag to the Argo CD application controller.

The example below will expose the Argo CD Application conditions `OrphanedResourceWarning` and `ExcludedResourceWarning` to Prometheus:

```yaml
    containers:
    - command:
      - argocd-application-controller
      - --metrics-application-conditions
      - OrphanedResourceWarning
      - --metrics-application-conditions
      - ExcludedResourceWarning
```

## Application Set Controller metrics

The Application Set controller exposes the following metrics for application sets.

| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_appset_info` | gauge | Information about Application Sets. It contains labels for the name and namespace of an application set, as well as `Resource_update_status`, which reflects the `ResourcesUpToDate` property. |
| `argocd_appset_reconcile` | histogram | Application reconciliation performance in seconds. It contains labels for the name and namespace of an applicationset. |
| `argocd_appset_labels` | gauge | Applicationset labels translated to Prometheus labels. Disabled by default. |
| `argocd_appset_owned_applications` | gauge | Number of applications owned by the applicationset. It contains labels for the name and namespace of an applicationset. |

Similar to the equivalent application controller metric (`argocd_app_labels`), the metric `argocd_appset_labels` is disabled by default. You can enable it by providing the `--metrics-applicationset-labels` argument to the applicationset controller.

Once enabled, it works exactly the same as the application controller metric (`label_` is prepended to the normalized label name). Available labels include the name and namespace of the applicationset, plus all labels enabled by the command-line options and their values (exactly like the application controller metrics described in the previous section).

## API Server Metrics

Metrics about API Server API request and response activity (request totals, response codes, etc...). Scraped at the `argocd-server-metrics:8083/metrics` endpoint.

| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_redis_request_duration` | histogram | Redis request duration. |
| `argocd_redis_request_total` | counter | Number of redis requests executed. |
| `grpc_server_handled_total` | counter | Total number of RPCs completed on the server, regardless of success or failure. |
| `grpc_server_msg_sent_total` | counter | Total number of gRPC stream messages sent by the server. |
| `argocd_proxy_extension_request_total` | counter | Number of requests sent to the configured proxy extensions. |
| `argocd_proxy_extension_request_duration_seconds` | histogram | Request duration in seconds between the Argo CD API server and the proxy extension backend. |

## Repo Server Metrics

Metrics about the Repo Server. Scraped at the `argocd-repo-server:8084/metrics` endpoint.

| Metric | Type | Description |
|--------|:----:|-------------|
| `argocd_git_request_duration_seconds` | histogram | Git request duration in seconds. |
| `argocd_git_request_total` | counter | Number of git requests performed by the repo server. |
| `argocd_git_fetch_fail_total` | counter | Number of failed git fetch requests by the repo server. |
| `argocd_redis_request_duration_seconds` | histogram | Redis request duration in seconds. |
| `argocd_redis_request_total` | counter | Number of redis requests executed during application reconciliation. |
| `argocd_repo_pending_request_total` | gauge | Number of pending requests requiring a repository lock. |

## Prometheus Operator

If using Prometheus Operator, the following ServiceMonitor example manifests can be used. Add the namespace where Argo CD is installed and change `metadata.labels.release` to the name of the label selected by your Prometheus.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-metrics
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-metrics
  endpoints:
  - port: metrics
```

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-server-metrics
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-server-metrics
  endpoints:
  - port: metrics
```

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-repo-server-metrics
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-repo-server
  endpoints:
  - port: metrics
```

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-applicationset-controller-metrics
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-applicationset-controller
  endpoints:
  - port: metrics
```

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-dex-server
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-dex-server
  endpoints:
    - port: metrics
```

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-redis-haproxy-metrics
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-redis-ha-haproxy
  endpoints:
  - port: http-exporter-port
```

For the notifications controller, you additionally need to add the following:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-notifications-controller
  labels:
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-notifications-controller-metrics
  endpoints:
    - port: metrics
```

## Dashboards

You can find an example Grafana dashboard [here](https://github.com/argoproj/argo-cd/blob/master/examples/dashboard.json) or check the demo instance [dashboard](https://grafana.apps.argoproj.io).

![dashboard](../assets/dashboard.jpg)
Type   Description                                      argocd git request duration seconds    histogram   Git requests duration seconds       argocd git request total    counter   Number of git requests performed by repo server      argocd git fetch fail total    counter   Number of git fetch requests failures by repo server      argocd redis request duration seconds    histogram   Redis requests duration seconds       argocd redis request total    counter   Number of Kubernetes requests executed during application reconciliation       argocd repo pending request total    gauge   Number of pending requests requiring repository lock       Prometheus Operator  If using Prometheus Operator  the following ServiceMonitor example manifests can be used  Add a namespace where Argo CD is installed and change  metadata labels release  to the name of label selected by your Prometheus      yaml apiVersion  monitoring coreos com v1 kind  ServiceMonitor metadata    name  argocd metrics   labels      release  prometheus operator spec    selector      matchLabels        app kubernetes io name  argocd metrics   endpoints      port  metrics         yaml apiVersion  monitoring coreos com v1 kind  ServiceMonitor metadata    name  argocd server metrics   labels      release  prometheus operator spec    selector      matchLabels        app kubernetes io name  argocd server metrics   endpoints      port  metrics         yaml apiVersion  monitoring coreos com v1 kind  ServiceMonitor metadata    name  argocd repo server metrics   labels      release  prometheus operator spec    selector      matchLabels        app kubernetes io name  argocd repo server   endpoints      port  metrics         yaml apiVersion  monitoring coreos com v1 kind  ServiceMonitor metadata    name  argocd applicationset controller metrics   labels      release  prometheus operator spec    selector      matchLabels        app kubernetes io name  argocd applicationset controller   endpoints      port  metrics         
yaml apiVersion  monitoring coreos com v1 kind  ServiceMonitor metadata    name  argocd dex server   labels      release  prometheus operator spec    selector      matchLabels        app kubernetes io name  argocd dex server   endpoints        port  metrics         yaml apiVersion  monitoring coreos com v1 kind  ServiceMonitor metadata    name  argocd redis haproxy metrics   labels      release  prometheus operator spec    selector      matchLabels        app kubernetes io name  argocd redis ha haproxy   endpoints      port  http exporter port      For notifications controller  you need to additionally add following      yaml apiVersion  monitoring coreos com v1 kind  ServiceMonitor metadata    name  argocd notifications controller   labels      release  prometheus operator spec    selector      matchLabels        app kubernetes io name  argocd notifications controller metrics   endpoints        port  metrics          Dashboards  You can find an example Grafana dashboard  here  https   github com argoproj argo cd blob master examples dashboard json  or check demo instance  dashboard  https   grafana apps argoproj io      dashboard     assets dashboard jpg "}
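The metrics record above explains that `argocd_app_labels` exposes Application labels with a `label_` prefix "appended to normalized label name" (its example turns `business-unit` into `label_business_unit`). A minimal sketch of that conversion, assuming Prometheus-style normalization where characters outside `[a-zA-Z0-9_]` become `_`; the `normalize_label` helper name is hypothetical, not part of Argo CD:

```shell
# Hypothetical helper (not part of Argo CD): mimic how an Application
# label key such as "team-name" surfaces on the argocd_app_labels
# metric, assuming Prometheus-style normalization (characters outside
# [a-zA-Z0-9_] mapped to "_") plus the "label_" prefix the docs show.
normalize_label() {
  printf 'label_%s\n' "$(printf '%s' "$1" | sed 's/[^a-zA-Z0-9_]/_/g')"
}

normalize_label "team-name"      # label_team_name
normalize_label "business-unit"  # label_business_unit
```

This matches the example metric line in the record, `argocd_app_labels{label_business_unit="bu-id-1",label_team_name="my-team",...}`.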
{"questions":"argocd The following groups of features won t be available in this engine capable of getting the desired state from Git repositories and Introduction Argo CD Core mode With this installation you will have a fully functional GitOps applying it in Kubernetes Argo CD Core is a different installation that runs Argo CD in headless","answers":"# Argo CD Core\n\n## Introduction\n\nArgo CD Core is a different installation that runs Argo CD in headless\nmode. With this installation, you will have a fully functional GitOps\nengine capable of getting the desired state from Git repositories and\napplying it in Kubernetes.\n\nThe following groups of features won't be available in this\ninstallation:\n\n- Argo CD RBAC model\n- Argo CD API\n- Argo CD Notification Controller\n- OIDC based authentication\n\nThe following features will be partially available (see the\n[usage](#using) section below for more details):\n\n- Argo CD Web UI\n- Argo CD CLI\n- Multi-tenancy (strictly GitOps based on git push permissions)\n\nA few use-cases that justify running Argo CD Core are:\n\n- As a cluster admin, I want to rely on Kubernetes RBAC only.\n- As a devops engineer, I don't want to learn a new API or depend on\n  another CLI to automate my deployments. I want to rely on the\n  Kubernetes API only.\n- As a cluster admin, I don't want to provide Argo CD UI or Argo CD\n  CLI to developers.\n\n## Architecture\n\nBecause Argo CD is designed with a component based architecture in\nmind, it is possible to have a more minimalist installation. In this\ncase fewer components are installed and yet the main GitOps\nfunctionality remains operational.\n\nIn the diagram below, the Core box, shows the components that will be\ninstalled while opting for Argo CD Core:\n\n![Argo CD Core](..\/assets\/argocd-core-components.png)\n\nNote that even if the Argo CD controller can run without Redis, it\nisn't recommended. 
The Argo CD controller uses Redis as an important\ncaching mechanism reducing the load on Kube API and in Git. For this\nreason, Redis is also included in this installation method.\n\n## Installing\n\nArgo CD Core can be installed by applying a single manifest file that\ncontains all the required resources.\n\nExample:\n\n```\nexport ARGOCD_VERSION=<desired argo cd release version (e.g. v2.7.0)>\nkubectl create namespace argocd\nkubectl apply -n argocd -f https:\/\/raw.githubusercontent.com\/argoproj\/argo-cd\/$ARGOCD_VERSION\/manifests\/core-install.yaml\n```\n\n## Using\n\nOnce Argo CD Core is installed, users will be able to interact with it\nby relying on GitOps. The available Kubernetes resources will be the\n`Application` and the `ApplicationSet` CRDs. By using those resources,\nusers will be able to deploy and manage applications in Kubernetes.\n\nIt is still possible to use Argo CD CLI even when running Argo CD\nCore. In this case, the CLI will spawn a local API server process that\nwill be used to handle the CLI command. Once the command is concluded,\nthe local API Server process will also be terminated. This happens\ntransparently for the user with no additional command required. Note\nthat Argo CD Core will rely only on Kubernetes RBAC and the user (or\nthe process) invoking the CLI needs to have access to the Argo CD\nnamespace with the proper permission in the `Application` and\n`ApplicationSet` resources for executing a given command.\n\nTo use Argo CD CLI in core mode, it is required to pass the `--core`\nflag with the `login` subcommand.\n\nExample:\n\n```bash\nkubectl config set-context --current --namespace=argocd # change current kube context to argocd namespace\nargocd login --core\n```\n\nSimilarly, users can also run the Web UI locally if they prefer to\ninteract with Argo CD using this method. 
The Web UI can be started\nlocally by running the following command:\n\n```\nargocd admin dashboard -n argocd\n```\n\nArgo CD Web UI will be available at `http:\/\/localhost:8080`\n","site":"argocd","answers_cleaned":"  Argo CD Core     Introduction  Argo CD Core is a different installation that runs Argo CD in headless mode  With this installation  you will have a fully functional GitOps engine capable of getting the desired state from Git repositories and applying it in Kubernetes   The following groups of features won t be available in this installation     Argo CD RBAC model   Argo CD API   Argo CD Notification Controller   OIDC based authentication  The following features will be partially available  see the  usage   using  section below for more details      Argo CD Web UI   Argo CD CLI   Multi tenancy  strictly GitOps based on git push permissions   A few use cases that justify running Argo CD Core are     As a cluster admin  I want to rely on Kubernetes RBAC only    As a devops engineer  I don t want to learn a new API or depend on   another CLI to automate my deployments  I want to rely on the   Kubernetes API only    As a cluster admin  I don t want to provide Argo CD UI or Argo CD   CLI to developers      Architecture  Because Argo CD is designed with a component based architecture in mind  it is possible to have a more minimalist installation  In this case fewer components are installed and yet the main GitOps functionality remains operational   In the diagram below  the Core box  shows the components that will be installed while opting for Argo CD Core     Argo CD Core     assets argocd core components png   Note that even if the Argo CD controller can run without Redis  it isn t recommended  The Argo CD controller uses Redis as an important caching mechanism reducing the load on Kube API and in Git  For this reason  Redis is also included in this installation method      Installing  Argo CD Core can be installed by applying a single manifest file that 
contains all the required resources   Example       export ARGOCD VERSION  desired argo cd release version  e g  v2 7 0   kubectl create namespace argocd kubectl apply  n argocd  f https   raw githubusercontent com argoproj argo cd  ARGOCD VERSION manifests core install yaml         Using  Once Argo CD Core is installed  users will be able to interact with it by relying on GitOps  The available Kubernetes resources will be the  Application  and the  ApplicationSet  CRDs  By using those resources  users will be able to deploy and manage applications in Kubernetes   It is still possible to use Argo CD CLI even when running Argo CD Core  In this case  the CLI will spawn a local API server process that will be used to handle the CLI command  Once the command is concluded  the local API Server process will also be terminated  This happens transparently for the user with no additional command required  Note that Argo CD Core will rely only on Kubernetes RBAC and the user  or the process  invoking the CLI needs to have access to the Argo CD namespace with the proper permission in the  Application  and  ApplicationSet  resources for executing a given command   To use Argo CD CLI in core mode  it is required to pass the    core  flag with the  login  subcommand   Example      bash kubectl config set context   current   namespace argocd   change current kube context to argocd namespace argocd login   core      Similarly  users can also run the Web UI locally if they prefer to interact with Argo CD using this method  The Web UI can be started locally by running the following command       argocd admin dashboard  n argocd      Argo CD Web UI will be available at  http   localhost 8080  "}
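The Argo CD Core record above installs from a version-pinned manifest URL but leaves `ARGOCD_VERSION` as a placeholder. As a sketch, expanding that URL for a concrete release (`v2.7.0` is only the example version the text itself uses):

```shell
# Pin a concrete release (v2.7.0 is the example version from the text)
# and build the core-install manifest URL that `kubectl apply -f` would
# consume.
ARGOCD_VERSION=v2.7.0
MANIFEST_URL="https://raw.githubusercontent.com/argoproj/argo-cd/${ARGOCD_VERSION}/manifests/core-install.yaml"
echo "$MANIFEST_URL"
```

Pinning a tag rather than a branch keeps the applied manifests reproducible across installs.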
{"questions":"argocd Please read this documentation carefully before you enable this feature Misconfiguration could lead to potential security issues warning Argo CD administrators can define a certain set of namespaces where resources may be created updated and reconciled in However applications in these additional namespaces will only be allowed to use certain as configured by the Argo CD administrators This allows ordinary Argo CD users e g application teams to use patterns like declarative management of resources implementing app of apps and others without the risk of a privilege escalation through usage of other that would exceed the permissions granted to the application teams As of version 2 5 Argo CD supports managing resources in namespaces other than the control plane s namespace which is usually but this feature has to be explicitly enabled and configured appropriately Introduction Applications in any namespace","answers":"# Applications in any namespace\n\n!!! warning\n    Please read this documentation carefully before you enable this feature. Misconfiguration could lead to potential security issues.\n\n## Introduction\n\nAs of version 2.5, Argo CD supports managing `Application` resources in namespaces other than the control plane's namespace (which is usually `argocd`), but this feature has to be explicitly enabled and configured appropriately.\n\nArgo CD administrators can define a certain set of namespaces where `Application` resources may be created, updated and reconciled in. However, applications in these additional namespaces will only be allowed to use certain `AppProjects`, as configured by the Argo CD administrators. This allows ordinary Argo CD users (e.g. 
application teams) to use patterns like declarative management of `Application` resources, implementing app-of-apps and others without the risk of a privilege escalation through usage of other `AppProjects` that would exceed the permissions granted to the application teams.\n\nSome manual steps will need to be performed by the Argo CD administrator in order to enable this feature. \n\nOne additional advantage of adopting applications in any namespace is to allow end-users to configure notifications for their Argo CD application in the namespace where Argo CD application is running in. See notifications [namespace based configuration](notifications\/index.md#namespace-based-configuration) page for more information.\n\n## Prerequisites\n\n### Cluster-scoped Argo CD installation\n\nThis feature can only be enabled and used when your Argo CD is installed as a cluster-wide instance, so it has permissions to list and manipulate resources on a cluster scope. It will not work with an Argo CD installed in namespace-scoped mode.\n\n### Switch resource tracking method\n\nAlso, while technically not necessary, it is strongly suggested that you switch the application tracking method from the default `label` setting to either `annotation` or `annotation+label`. The reasoning for this is, that application names will be a composite of the namespace's name and the name of the `Application`, and this can easily exceed the 63 characters length limit imposed on label values. Annotations have a notably greater length limit.\n\nTo enable annotation based resource tracking, refer to the documentation about [resource tracking methods](..\/..\/user-guide\/resource_tracking\/)\n\n## Implementation details\n\n### Overview\n\nIn order for an application to be managed and reconciled outside the Argo CD's control plane namespace, two prerequisites must match:\n\n1. 
The `Application`'s namespace must be explicitly enabled using the `--application-namespaces` parameter for the `argocd-application-controller` and `argocd-server` workloads. This parameter controls the list of namespaces that Argo CD will be allowed to source `Application` resources from globally. Any namespace not configured here cannot be used from any `AppProject`.\n1. The `AppProject` referenced by the `.spec.project` field of the `Application` must have the namespace listed in its `.spec.sourceNamespaces` field. This setting will determine whether an `Application` may use a certain `AppProject`. If an `Application` specifies an `AppProject` that is not allowed, Argo CD refuses to process this `Application`. As stated above, any namespace configured in the `.spec.sourceNamespaces` field must also be enabled globally.\n\n`Applications` in different namespaces can be created and managed just like any other `Application` in the `argocd` namespace previously, either declaratively or through the Argo CD API (e.g. using the CLI, the web UI, the REST API, etc).\n\n### Reconfigure Argo CD to allow certain namespaces\n\n#### Change workload startup parameters\n\nIn order to enable this feature, the Argo CD administrator must reconfigure the `argocd-server` and `argocd-application-controller` workloads to add the `--application-namespaces` parameter to the container's startup command.\n\nThe `--application-namespaces` parameter takes a comma-separated list of namespaces where `Applications` are to be allowed in. Each entry of the list supports:\n\n- shell-style wildcards such as `*`, so for example the entry `app-team-*` would match `app-team-one` and `app-team-two`. To enable all namespaces on the cluster where Argo CD is running on, you can just specify `*`, i.e. 
`--application-namespaces=*`.\n- regex, requires wrapping the string in ```\/```, example to allow all namespaces except a particular one: ```\/^((?!not-allowed).)*$\/```.\n  \nThe startup parameters for both, the `argocd-server` and the `argocd-application-controller` can also be conveniently set up and kept in sync by specifying the `application.namespaces` settings in the `argocd-cmd-params-cm` ConfigMap _instead_ of changing the manifests for the respective workloads. For example:\n\n```yaml\ndata:\n  application.namespaces: app-team-one, app-team-two\n```\n\nwould allow the `app-team-one` and `app-team-two` namespaces for managing `Application` resources. After a change to the `argocd-cmd-params-cm` namespace, the appropriate workloads need to be restarted:\n\n```bash\nkubectl rollout restart -n argocd deployment argocd-server\nkubectl rollout restart -n argocd statefulset argocd-application-controller\n```\n\n#### Adapt Kubernetes RBAC\n\nWe decided to not extend the Kubernetes RBAC for the `argocd-server` workload by default for the time being. If you want `Applications` in other namespaces to be managed by the Argo CD API (i.e. the CLI and UI), you need to extend the Kubernetes permissions for the `argocd-server` ServiceAccount.\n\nWe supply a `ClusterRole` and `ClusterRoleBinding` suitable for this purpose in the `examples\/k8s-rbac\/argocd-server-applications` directory. For a default Argo CD installation (i.e. installed to the `argocd` namespace), you can just apply them as-is:\n\n```shell\nkubectl apply -k examples\/k8s-rbac\/argocd-server-applications\/\n```\n\n`argocd-notifications-controller-rbac-clusterrole.yaml` and `argocd-notifications-controller-rbac-clusterrolebinding.yaml` are used to support notifications controller to notify apps in all namespaces.\n\n!!! 
note\n    At some later point in time, we may make this cluster role part of the default installation manifests.\n\n### Allowing additional namespaces in an AppProject\n\nAny user with Kubernetes access to the Argo CD control plane's namespace (`argocd`), especially those with permissions to create or update `Applications` in a declarative way, is to be considered an Argo CD admin.\n\nThis prevented unprivileged Argo CD users from declaratively creating or managing `Applications` in the past. Those users were constrained to using the API instead, subject to Argo CD RBAC which ensures only `Applications` in allowed `AppProjects` were created.\n\nFor an `Application` to be created outside the `argocd` namespace, the `AppProject` referred to in the `Application`'s `.spec.project` field must include the `Application`'s namespace in its `.spec.sourceNamespaces` field.\n\nFor example, consider the two following (incomplete) `AppProject` specs:\n\n```yaml\nkind: AppProject\napiVersion: argoproj.io\/v1alpha1\nmetadata:\n  name: project-one\n  namespace: argocd\nspec:\n  sourceNamespaces:\n  - namespace-one\n```\n\nand\n\n```yaml\nkind: AppProject\napiVersion: argoproj.io\/v1alpha1\nmetadata:\n  name: project-two\n  namespace: argocd\nspec:\n  sourceNamespaces:\n  - namespace-two\n```\n\nIn order for an Application to set `.spec.project` to `project-one`, it would have to be created in either namespace `namespace-one` or `argocd`. 
Likewise, in order for an Application to set `.spec.project` to `project-two`, it would have to be created in either namespace `namespace-two` or `argocd`.\n\nIf an Application in `namespace-two` set its `.spec.project` to `project-one`, or an Application in `namespace-one` set its `.spec.project` to `project-two`, Argo CD would consider this a permission violation and refuse to reconcile the Application.\n\nAlso, the Argo CD API will enforce these constraints, regardless of the Argo CD RBAC permissions.\n\nThe `.spec.sourceNamespaces` field of the `AppProject` is a list that can contain an arbitrary number of namespaces, and each entry supports shell-style wildcards, so that you can allow namespaces with patterns like `team-one-*`.\n\n!!! warning\n    Do not add user controlled namespaces in the `.spec.sourceNamespaces` field of any privileged AppProject like the `default` project. Always make sure that the AppProject follows the principle of granting least required privileges. Never grant access to the `argocd` namespace within the AppProject.\n\n!!! note\n    For backwards compatibility, Applications in the Argo CD control plane's namespace (`argocd`) are allowed to set their `.spec.project` field to reference any AppProject, regardless of the restrictions placed by the AppProject's `.spec.sourceNamespaces` field.\n  \n### Application names\n\nFor the CLI and UI, applications are now referred to and displayed in the format `<namespace>\/<name>`. \n\nFor backwards compatibility, if the namespace of the Application is the control plane's namespace (i.e. `argocd`), the `<namespace>` can be omitted from the application name when referring to it. 
For example, the application names `argocd\/someapp` and `someapp` are semantically the same and refer to the same application in the CLI and the UI.\n\n### Application RBAC\n\nThe RBAC syntax for Application objects has been changed from `<project>\/<application>` to `<project>\/<namespace>\/<application>` to accommodate the need to restrict access based on the source namespace of the Application to be managed.\n\nFor backwards compatibility, Applications in the `argocd` namespace can still be referred to as `<project>\/<application>` in the RBAC policy rules.\n\nWildcards do not make any distinction between project and application namespaces yet. For example, the following RBAC rule would match any application belonging to project `foo`, regardless of the namespace it is created in:\n\n```\np, somerole, applications, get, foo\/*, allow\n```\n\nIf you want to restrict access to be granted only to `Applications` in project `foo` within namespace `bar`, the rule would need to be adapted as follows:\n\n```\np, somerole, applications, get, foo\/bar\/*, allow\n```\n  \n## Managing applications in other namespaces\n\n### Declaratively\n\nFor declarative management of Applications, just create the Application from a YAML or JSON manifest in the desired namespace. Make sure that the `.spec.project` field refers to an AppProject that allows this namespace. 
For example, the following (incomplete) Application manifest creates an Application in the namespace `some-namespace`:\n\n```yaml\nkind: Application\napiVersion: argoproj.io\/v1alpha1\nmetadata:\n  name: some-app\n  namespace: some-namespace\nspec:\n  project: some-project\n  # ...\n```\n\nThe project `some-project` will then need to specify `some-namespace` in the list of allowed source namespaces, e.g.\n\n```yaml\nkind: AppProject\napiVersion: argoproj.io\/v1alpha1\nmetadata:\n    name: some-project\n    namespace: argocd\nspec:\n    sourceNamespaces:\n    - some-namespace\n```\n\n### Using the CLI\n\nYou can use all existing Argo CD CLI commands for managing applications in other namespaces, exactly as you would use the CLI to manage applications in the control plane's namespace.\n\nFor example, to retrieve the `Application` named `bar` in the namespace `foo`, you can use the following CLI command:\n\n```shell\nargocd app get foo\/bar\n```\n\nLikewise, to manage this application, keep referring to it as `foo\/bar`:\n\n```bash\n# Create an application\nargocd app create foo\/bar ...\n# Sync the application\nargocd app sync foo\/bar\n# Delete the application\nargocd app delete foo\/bar\n# Retrieve application's manifest\nargocd app manifests foo\/bar\n```\n\nAs stated previously, for applications in Argo CD's control plane namespace, you can omit the namespace from the application name.\n\n### Using the UI\n\nSimilar to the CLI, you can refer to the application in the UI as `foo\/bar`.\n\nFor example, to create an application named `bar` in the namespace `foo` in the web UI, set the application name in the creation dialogue's _Application Name_ field to `foo\/bar`. If the namespace is omitted, the control plane's namespace will be used.\n\n### Using the REST API\n\nIf you are using the REST API, the namespace for the `Application` cannot be specified as part of the application name, and resources need to be specified using the optional `appNamespace` query parameter. 
For example, to work with the `Application` resource named `foo` in the namespace `bar`, the request would look as follows:\n\n```bash\nGET \/api\/v1\/applications\/foo?appNamespace=bar\n```\n\nFor other operations such as `POST` and `PUT`, the `appNamespace` parameter must be part of the request's payload.\n\nFor `Application` resources in the control plane namespace, this parameter can be omitted.","site":"argocd"}
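The `<namespace>/<name>` application-name convention described in the record above (a bare name defaults to the control-plane namespace) can be sketched in Python. `parse_app_name` and the hard-coded `argocd` default are illustrative assumptions, not part of the Argo CD codebase:

```python
# Hypothetical helper illustrating the "<namespace>/<name>" application-name
# convention from the docs above; not Argo CD source code.

ARGOCD_CONTROL_PLANE_NS = "argocd"  # assumed control-plane namespace

def parse_app_name(qualified: str, control_plane_ns: str = ARGOCD_CONTROL_PLANE_NS):
    """Split 'namespace/name' into (namespace, name).

    A bare name (no '/') refers to an application in the control-plane
    namespace, so 'someapp' and 'argocd/someapp' are equivalent.
    """
    if "/" in qualified:
        namespace, name = qualified.split("/", 1)
        return namespace, name
    return control_plane_ns, qualified

assert parse_app_name("foo/bar") == ("foo", "bar")        # app 'bar' in namespace 'foo'
assert parse_app_name("someapp") == ("argocd", "someapp") # control-plane default
```

This mirrors the backwards-compatibility rule in the record: `argocd/someapp` and `someapp` resolve to the same application.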
{"questions":"argocd warning Alpha Feature Since 2 13 0 Please read this documentation carefully before you enable this feature Misconfiguration could lead to potential security issues warning the control plane operations feature that allows you to control the service account used for the sync operation The configured service account Application Sync using impersonation could have lesser privileges required for creating resources compared to the highly privileged access required for This is an experimental","answers":"# Application Sync using impersonation\n\n!!! warning \"Alpha Feature (Since 2.13.0)\"\n    This is an experimental, [alpha-quality](https:\/\/github.com\/argoproj\/argoproj\/blob\/main\/community\/feature-status.md#alpha) \n    feature that allows you to control the service account used for the sync operation. The configured service account \n    could have lesser privileges required for creating resources compared to the highly privileged access required for \n    the control plane operations.\n\n!!! warning\n    Please read this documentation carefully before you enable this feature. Misconfiguration could lead to potential security issues.\n\n## Introduction\n\nBy default, Argo CD syncs `Application` resources using the same service account that it uses for its control plane operations. This feature enables users to decouple the service account used for application sync from the service account used for control plane operations.\n\nBecause application syncs in Argo CD have the same privileges as the Argo CD control plane, in a multi-tenant setup the Argo CD control plane privileges need to match those of the tenant that needs the highest privileges. As an example, if an Argo CD instance has 10 Applications and only one of them requires admin privileges, then the Argo CD control plane must have admin privileges in order to be able to sync that one Application. This provides an opportunity for malicious tenants to gain admin-level access. 
Argo CD provides a multi-tenancy model to restrict what each `Application` is authorized to do using `AppProjects`; however, this model alone is not sufficient, and if Argo CD is compromised, attackers could easily gain `cluster-admin` access to the cluster.\n\nSome manual steps will need to be performed by the Argo CD administrator in order to enable this feature, as it is disabled by default.\n\n!!! note\n    This feature is considered alpha as of now. Some of the implementation details may change over the course of time until it is promoted to a stable status. We will be happy if early adopters use this feature and provide us with bug reports and feedback.\n\n### What is Impersonation\n\nImpersonation is a Kubernetes feature, also exposed through the `kubectl` CLI client, that allows a user to act as another user through impersonation headers. For example, an admin could use this feature to debug an authorization policy by temporarily impersonating another user and seeing if a request was denied.\n\nImpersonation requests first authenticate as the requesting user, then switch to the impersonated user info.\n\n## Prerequisites\n\nIn a multi-team\/multi-tenant environment, a team\/tenant is typically granted access to a target namespace to self-manage their Kubernetes resources in a declarative way.\nA typical tenant onboarding process looks like this:\n1. The platform admin creates a tenant namespace, and the service account to be used for creating the resources is created in the same tenant namespace.\n2. The platform admin creates one or more Role(s) to manage Kubernetes resources in the tenant namespace.\n3. The platform admin creates one or more RoleBinding(s) to map the service account to the role(s) created in the previous steps.\n4. The platform admin can choose to use either the [apps-in-any-namespace](.\/app-any-namespace.md) feature or provide access to tenants to create applications in the ArgoCD control plane namespace.\n5. 
If the platform admin chooses the apps-in-any-namespace feature, tenants can self-service their Argo applications in their respective tenant namespaces, and no additional access needs to be provided for the control plane namespace.\n\n## Implementation details\n\n### Overview\n\nIn order for an application to use a different service account for the application sync operation, the following steps need to be performed:\n\n1. The impersonation feature flag should be enabled. Please refer to the steps provided in [Enable application sync with impersonation feature](#enable-application-sync-with-impersonation-feature)\n\n2. The `AppProject` referenced by the `.spec.project` field of the `Application` must have the `DestinationServiceAccounts` mapping the destination server and namespace to a service account to be used for the sync operation. Please refer to the steps provided in [Configuring destination service accounts](#configuring-destination-service-accounts)\n\n\n### Enable application sync with impersonation feature\n\nIn order to enable this feature, the Argo CD administrator must reconfigure the `application.sync.impersonation.enabled` setting in the `argocd-cm` ConfigMap as below:\n\n```yaml\ndata:\n  application.sync.impersonation.enabled: \"true\"\n```\n\n### Disable application sync with impersonation feature\n\nIn order to disable this feature, the Argo CD administrator must reconfigure the `application.sync.impersonation.enabled` setting in the `argocd-cm` ConfigMap as below:\n\n```yaml\ndata:\n  application.sync.impersonation.enabled: \"false\"\n```\n\n!!! note\n    This feature is disabled by default.\n\n!!! note\n    This feature can be enabled\/disabled only at the system level, and once enabled\/disabled, it applies to all Applications managed by ArgoCD.\n\n## Configuring destination service accounts\n\nDestination service accounts can be added to the `AppProject` under `.spec.destinationServiceAccounts`. 
Specify the target destination `server` and `namespace`, and provide the service account to be used for the sync operation using the `defaultServiceAccount` field. Applications that refer to this `AppProject` will use the corresponding service account configured for their destination.\n\nDuring the application sync operation, the controller loops through the available `destinationServiceAccounts` in the mapped `AppProject` and tries to find a matching candidate. If there are multiple matches for a destination server and namespace combination, then the first valid match will be considered. If there are no matches, then an error is reported during the sync operation. In order to avoid such sync errors, it is highly recommended to configure a valid service account as a catch-all entry for all target destinations and to keep it at the lowest order of priority.\n\nIt is possible to specify a service account along with its namespace, e.g. `tenant1-ns:guestbook-deployer`. If no namespace is provided for the service account, then the Application's `spec.destination.namespace` will be used. If no namespace is provided for the service account and the optional `spec.destination.namespace` field is also not provided in the `Application`, then the Application's namespace will be used.\n\n`DestinationServiceAccounts` associated with an `AppProject` can be created and managed, either declaratively or through the Argo CD API (e.g. 
using the CLI, the web UI, the REST API, etc).\n\n### Using declarative YAML\n\nFor declaratively configuring destination service accounts, create a YAML file for the `AppProject` as below and apply the changes using the `kubectl apply` command.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: AppProject\nmetadata:\n  name: my-project\n  namespace: argocd\nspec:\n  description: Example Project\n  # Allow manifests to deploy from any Git repos\n  sourceRepos:\n    - '*'\n  destinations:\n    - '*'\n  destinationServiceAccounts:\n    - server: https:\/\/kubernetes.default.svc\n      namespace: guestbook\n      defaultServiceAccount: guestbook-deployer\n    - server: https:\/\/kubernetes.default.svc\n      namespace: guestbook-dev\n      defaultServiceAccount: guestbook-dev-deployer\n    - server: https:\/\/kubernetes.default.svc\n      namespace: guestbook-stage\n      defaultServiceAccount: guestbook-stage-deployer\n    - server: https:\/\/kubernetes.default.svc # catch-all configuration\n      namespace: '*'\n      defaultServiceAccount: default\n```\n\n### Using the CLI\n\nDestination service accounts can be added to an `AppProject` using the ArgoCD CLI.\n\nFor example, to add a destination service account for the `in-cluster` server and the `guestbook` namespace, you can use the following CLI command:\n\n```shell\nargocd proj add-destination-service-account my-project https:\/\/kubernetes.default.svc guestbook guestbook-sa\n```\n\nLikewise, to remove the destination service account from an `AppProject`, you can use the following CLI command:\n\n```shell\nargocd proj remove-destination-service-account my-project https:\/\/kubernetes.default.svc guestbook\n```\n\n### Using the UI\n\nSimilar to the CLI, you can add a destination service account when creating or updating an `AppProject` from the UI.","site":"argocd"}
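The first-match resolution described in the record above (scan `destinationServiceAccounts` in order, fall back to a trailing catch-all, prefix-pin the service account's namespace with `ns:name`) can be sketched as follows. This is a simplified model, not the controller's code, and the use of shell-style glob matching for `server`/`namespace` is an assumption based on the documented `'*'` catch-all entry:

```python
# Simplified sketch (not Argo CD source) of the destinationServiceAccounts
# matching rules described above: entries are scanned in order and the first
# server/namespace match wins; a trailing '*' entry acts as a catch-all.
from fnmatch import fnmatchcase  # glob matching here is an illustrative assumption

def resolve_sync_service_account(dest_service_accounts, server, namespace):
    """Return 'serviceaccount-namespace:name' for the first matching entry."""
    for entry in dest_service_accounts:
        if fnmatchcase(server, entry["server"]) and fnmatchcase(namespace, entry["namespace"]):
            sa = entry["defaultServiceAccount"]
            # 'ns:name' pins the service account's namespace; otherwise the
            # destination namespace is assumed (a simplification of the docs).
            return sa if ":" in sa else f"{namespace}:{sa}"
    raise LookupError("no destinationServiceAccount matches; the sync would error")

project_dsa = [  # mirrors the AppProject example in the record above
    {"server": "https://kubernetes.default.svc", "namespace": "guestbook",
     "defaultServiceAccount": "guestbook-deployer"},
    {"server": "https://kubernetes.default.svc", "namespace": "*",  # catch-all, lowest priority
     "defaultServiceAccount": "default"},
]

assert resolve_sync_service_account(
    project_dsa, "https://kubernetes.default.svc", "guestbook") == "guestbook:guestbook-deployer"
assert resolve_sync_service_account(
    project_dsa, "https://kubernetes.default.svc", "other-ns") == "other-ns:default"
```

Keeping the catch-all entry last matters because, as the record states, the first valid match is taken.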
{"questions":"argocd user management system and has only one built in user The user is a superuser and Once SSO or local users are configured additional RBAC roles can be defined and SSO groups or local users can then be mapped to roles it has unrestricted access to the system RBAC requires or The RBAC feature enables restrictions of access to Argo CD resources Argo CD does not have its own The global RBAC config map see RBAC Configuration There are two main components where RBAC configuration can be defined","answers":"# RBAC Configuration\n\nThe RBAC feature enables restrictions of access to Argo CD resources. Argo CD does not have its own\nuser management system and has only one built-in user, `admin`. The `admin` user is a superuser and\nit has unrestricted access to the system. RBAC requires [SSO configuration](user-management\/index.md) or [one or more local users setup](user-management\/index.md).\nOnce SSO or local users are configured, additional RBAC roles can be defined, and SSO groups or local users can then be mapped to roles.\n\nThere are two main components where RBAC configuration can be defined:\n\n- The global RBAC config map (see [argocd-rbac-cm.yaml](argocd-rbac-cm-yaml.md))\n- The [AppProject's roles](..\/user-guide\/projects.md#project-roles)\n\n## Basic Built-in Roles\n\nArgo CD has two pre-defined roles, but RBAC configuration allows defining additional roles and groups (see below).\n\n- `role:readonly`: read-only access to all resources\n- `role:admin`: unrestricted access to all resources\n\nThese default built-in role definitions can be seen in [builtin-policy.csv](https:\/\/github.com\/argoproj\/argo-cd\/blob\/master\/assets\/builtin-policy.csv)\n\n## Default Policy for Authenticated Users\n\nWhen a user is authenticated in Argo CD, they will be granted the role specified in `policy.default`.\n\n!!! warning \"Restricting Default Permissions\"\n\n    **All authenticated users get _at least_ the permissions granted by the default policies. 
This access cannot be blocked\n    by a `deny` rule.** It is recommended to create a new `role:authenticated` with the minimum set of permissions possible,\n    then grant permissions to individual roles as needed.\n\n## Anonymous Access\n\nEnabling anonymous access to the Argo CD instance allows users to assume the default role permissions specified by `policy.default` **without being authenticated**.\n\nAnonymous access to Argo CD can be enabled using the `users.anonymous.enabled` field in `argocd-cm` (see [argocd-cm.yaml](argocd-cm-yaml.md)).\n\n!!! warning\n\n    When enabling anonymous access, consider creating a new default role and assigning it to the default policies\n    with `policy.default: role:unauthenticated`.\n\n## RBAC Model Structure\n\nThe model syntax is based on [Casbin](https:\/\/casbin.org\/docs\/overview). There are two different types of syntax: one for assigning policies, and another one for assigning users to internal roles.\n\n**Group**: Allows assigning authenticated users\/groups to internal roles.\n\nSyntax: `g, <user\/group>, <role>`\n\n- `<user\/group>`: The entity to whom the role will be assigned. It can be a local user or a user authenticated with SSO.\n  When SSO is used, the `user` will be based on the `sub` claim, while the group is one of the values returned by the `scopes` configuration.\n- `<role>`: The internal role to which the entity will be assigned.\n\n**Policy**: Allows assigning permissions to an entity.\n\nSyntax: `p, <role\/user\/group>, <resource>, <action>, <object>, <effect>`\n\n- `<role\/user\/group>`: The entity to whom the policy will be assigned.\n- `<resource>`: The type of resource on which the action is performed.\n- `<action>`: The operation that is being performed on the resource.\n- `<object>`: The object identifier representing the resource on which the action is performed. 
Depending on the resource, the object's format will vary.\n- `<effect>`: Whether this policy should grant or restrict the operation on the target object. One of `allow` or `deny`.\n\nBelow is a table that summarizes all possible resources and which actions are valid for each of them.\n\n| Resource\\Action     | get | create | update | delete | sync | action | override | invoke |\n| :------------------ | :-: | :----: | :----: | :----: | :--: | :----: | :------: | :----: |\n| **applications**    | \u2705  |   \u2705   |   \u2705   |   \u2705   |  \u2705  |   \u2705   |    \u2705    |   \u274c   |\n| **applicationsets** | \u2705  |   \u2705   |   \u2705   |   \u2705   |  \u274c  |   \u274c   |    \u274c    |   \u274c   |\n| **clusters**        | \u2705  |   \u2705   |   \u2705   |   \u2705   |  \u274c  |   \u274c   |    \u274c    |   \u274c   |\n| **projects**        | \u2705  |   \u2705   |   \u2705   |   \u2705   |  \u274c  |   \u274c   |    \u274c    |   \u274c   |\n| **repositories**    | \u2705  |   \u2705   |   \u2705   |   \u2705   |  \u274c  |   \u274c   |    \u274c    |   \u274c   |\n| **accounts**        | \u2705  |   \u274c   |   \u2705   |   \u274c   |  \u274c  |   \u274c   |    \u274c    |   \u274c   |\n| **certificates**    | \u2705  |   \u2705   |   \u274c   |   \u2705   |  \u274c  |   \u274c   |    \u274c    |   \u274c   |\n| **gpgkeys**         | \u2705  |   \u2705   |   \u274c   |   \u2705   |  \u274c  |   \u274c   |    \u274c    |   \u274c   |\n| **logs**            | \u2705  |   \u274c   |   \u274c   |   \u274c   |  \u274c  |   \u274c   |    \u274c    |   \u274c   |\n| **exec**            | \u274c  |   \u2705   |   \u274c   |   \u274c   |  \u274c  |   \u274c   |    \u274c    |   \u274c   |\n| **extensions**      | \u274c  |   \u274c   |   \u274c   |   \u274c   |  \u274c  |   \u274c   |    \u274c    |   \u2705   |\n\n### Application-Specific Policy\n\nSome policies only have meaning within an application. 
This is the case for the following resources:\n\n- `applications`\n- `applicationsets`\n- `logs`\n- `exec`\n\nWhile they can be set in the global configuration, they can also be configured in [AppProject's roles](..\/user-guide\/projects.md#project-roles).\nThe expected `<object>` value in the policy structure is replaced by `<app-project>\/<app-name>`.\n\nFor instance, these policies would grant `example-user` access to get any application,\nbut only to see the logs of the `my-app` application, which is part of the `example-project` project.\n\n```csv\np, example-user, applications, get, *, allow\np, example-user, logs, get, example-project\/my-app, allow\n```\n\n#### Application in Any Namespaces\n\nWhen [application in any namespace](app-any-namespace.md) is enabled, the expected `<object>` value in the policy structure is replaced by `<app-project>\/<app-ns>\/<app-name>`.\nSince multiple applications could have the same name in the same project, the policy below makes sure to restrict access only to `app-namespace`.\n\n```csv\np, example-user, applications, get, *\/app-namespace\/*, allow\np, example-user, logs, get, example-project\/app-namespace\/my-app, allow\n```\n\n### The `applications` resource\n\nThe `applications` resource is an [Application-Specific Policy](#application-specific-policy).\n\n#### Fine-grained Permissions for `update`\/`delete` actions\n\nThe `update` and `delete` actions, when granted on an application, will allow the user to perform the operation on the application itself **and** all of its resources.\nIt can be desirable to only allow `update` or `delete` on specific resources within an application.\n\nTo do so, when the action is performed on an application's resource, the `<action>` will have the `<action>\/<group>\/<kind>\/<ns>\/<name>` format.\n\nFor instance, to grant `example-user` permission to delete only Pods in the `prod-app` Application, the policy could be:\n\n```csv\np, example-user, applications, delete\/*\/Pod\/*\/*, 
default\/prod-app, allow\n```\n\n!!! warning \"Understand glob pattern behavior\"\n\n    Argo CD RBAC does not use `\/` as a separator when evaluating glob patterns. So the pattern `delete\/*\/kind\/*`\n    will match `delete\/<group>\/kind\/<namespace>\/<name>` but also `delete\/<group>\/<kind>\/kind\/<name>`.\n\n    The fact that both of these match will generally not be a problem, because resource kinds generally contain capital\n    letters, and namespaces cannot contain capital letters. However, it is possible for a resource kind to be lowercase.\n    So it is safest to always include all the parts of the resource in the pattern (in other words, always use four\n    slashes).\n\nIf we want to grant the user access to update all resources of an application, but not the application itself:\n\n```csv\np, example-user, applications, update\/*, default\/prod-app, allow\n```\n\nIf we want to explicitly deny deletion of the application, but allow the user to delete Pods:\n\n```csv\np, example-user, applications, delete, default\/prod-app, deny\np, example-user, applications, delete\/*\/Pod\/*\/*, default\/prod-app, allow\n```\n\n!!! 
note\n\n    It is not possible to deny fine-grained permissions for a sub-resource if the action was **explicitly allowed on the application**.\n    For instance, the following policies will **allow** a user to delete the Pod and any other resources in the application:\n\n    ```csv\n    p, example-user, applications, delete, default\/prod-app, allow\n    p, example-user, applications, delete\/*\/Pod\/*\/*, default\/prod-app, deny\n    ```\n\n#### The `action` action\n\nThe `action` action corresponds to either built-in resource customizations defined\n[in the Argo CD repository](https:\/\/github.com\/argoproj\/argo-cd\/tree\/master\/resource_customizations),\nor to [custom resource actions](resource_actions.md#custom-resource-actions) defined by you.\n\nSee the [resource actions documentation](resource_actions.md#built-in-actions) for a list of built-in actions.\n\nThe `<action>` has the `action\/<group>\/<kind>\/<action-name>` format.\n\nFor example, a resource customization path `resource_customizations\/extensions\/DaemonSet\/actions\/restart\/action.lua`\ncorresponds to the `action` path `action\/extensions\/DaemonSet\/restart`. 
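For instance, a minimal sketch (the user name `example-user` and the `default` project are illustrative assumptions, consistent with the other examples in this page) of a policy granting only this restart action:\n\n```csv\np, example-user, applications, action\/extensions\/DaemonSet\/restart, default\/*, allow\n```\n\n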
If the resource is not under a group (for example, Pods or ConfigMaps),\nthen the path will be `action\/\/Pod\/<action-name>`.\n\nThe following policies allow the user to perform any action on DaemonSet resources, as well as the `maintenance-off` action on a Pod:\n\n```csv\np, example-user, applications, action\/\/Pod\/maintenance-off, default\/*, allow\np, example-user, applications, action\/extensions\/DaemonSet\/*, default\/*, allow\n```\n\nTo allow the user to perform any action:\n\n```csv\np, example-user, applications, action\/*, default\/*, allow\n```\n\n#### The `override` action\n\nWhen granted along with the `sync` action, the `override` action will allow a user to synchronize local manifests to the Application.\nThese manifests will be used instead of the configured source, until the next sync is performed.\n\n### The `applicationsets` resource\n\nThe `applicationsets` resource is an [Application-Specific Policy](#application-specific-policy).\n\n[ApplicationSets](applicationset\/index.md) provide a declarative way to automatically create\/update\/delete Applications.\n\nAllowing the `create` action on the resource effectively grants the ability to create Applications. While it doesn't allow the\nuser to create Applications directly, they can create Applications via an ApplicationSet.\n\n!!! note\n\n    In v2.5, it is not possible to create an ApplicationSet with a templated Project field (e.g. `project: `)\n    via the API (or, by extension, the CLI). 
Disallowing templated projects makes project restrictions via RBAC safe.\n\nWith the resource being application-specific, the `<object>` of the applicationsets policy will have the format `<app-project>\/<app-name>`.\nHowever, since an ApplicationSet does not belong to any project, the `<app-project>` value represents the projects in which the ApplicationSet will be able to create Applications.\n\nWith the following policy, a `dev-group` user will be unable to create an ApplicationSet capable of creating Applications\noutside the `dev-project` project.\n\n```csv\np, dev-group, applicationsets, *, dev-project\/*, allow\n```\n\n### The `logs` resource\n\nThe `logs` resource is an [Application-Specific Policy](#application-specific-policy).\n\nWhen granted with the `get` action, this policy allows a user to see the logs of an application's Pods via\nthe Argo CD UI. The functionality is similar to `kubectl logs`.\n\n### The `exec` resource\n\nThe `exec` resource is an [Application-Specific Policy](#application-specific-policy).\n\nWhen granted with the `create` action, this policy allows a user to `exec` into Pods of an application via\nthe Argo CD UI. 
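As a sketch (assuming a hypothetical user `example-user` and applications in the `default` project; the `get` policy is included so the user can see the applications in the first place), terminal access could be granted like this:\n\n```csv\np, example-user, applications, get, default\/*, allow\np, example-user, exec, create, default\/*, allow\n```\n\n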
The functionality is similar to `kubectl exec`.\n\nSee [Web-based Terminal](web_based_terminal.md) for more info.\n\n### The `extensions` resource\n\nWith the `extensions` resource, it is possible to configure permissions to invoke [proxy extensions](..\/developer-guide\/extensions\/proxy-extensions.md).\nThe `extensions` RBAC validation works in conjunction with the `applications` resource.\nA user **needs to have read permission on the application** that the request originates from.\n\nConsider the example below; it will allow `example-user` to invoke the `httpbin` extension in all\napplications under the `default` project.\n\n```csv\np, example-user, applications, get, default\/*, allow\np, example-user, extensions, invoke, httpbin, allow\n```\n\n### The `deny` effect\n\nWhen `deny` is used as an effect in a policy, it will be effective if the policy matches.\nEven if more specific policies with the `allow` effect match as well, the `deny` will have priority.\n\nThe order in which the policies appear in the policy file configuration has no impact, and the result is deterministic.\n\n## Policies Evaluation and Matching\n\nThe evaluation of access is done in two parts: validating against the default policy configuration, then validating against the policies for the current user.\n\n**If an action is allowed or denied by the default policies, then this effect will be effective without further evaluation**.\nWhen the effect is undefined, the evaluation will continue with subject-specific policies.\n\nAccess will be evaluated for the user, then for each configured group that the user is part of.\n\nThe matching engine, configured in `policy.matchMode`, can use two different match modes to compare the values of tokens:\n\n- `glob`: based on the [`glob` package](https:\/\/pkg.go.dev\/github.com\/gobwas\/glob).\n- `regex`: based on the [`regexp` package](https:\/\/pkg.go.dev\/regexp).\n\nWhen all tokens match during the evaluation, the effect will be 
returned. The evaluation will continue until all matching policies are evaluated, or until a policy with the `deny` effect matches.\nAfter all policies are evaluated, if there was at least one `allow` effect and no `deny`, access will be granted.\n\n### Glob matching\n\nWhen `glob` is used, the policy tokens are treated as single terms, without separators.\n\nConsider the following policy:\n\n```\np, example-user, applications, action\/extensions\/*, default\/*, allow\n```\n\nWhen the `example-user` executes the `extensions\/DaemonSet\/test` action, the following `glob` matches will happen:\n\n1. The current user `example-user` matches the token `example-user`.\n2. The value `applications` matches the token `applications`.\n3. The value `action\/extensions\/DaemonSet\/test` matches `action\/extensions\/*`. Note that `\/` is not treated as a separator and the use of `**` is not necessary.\n4. The value `default\/my-app` matches `default\/*`.\n\n## Using SSO Users\/Groups\n\nThe `scopes` field controls which OIDC scopes to examine during RBAC enforcement (in addition to `sub` scope).\nIf omitted, it defaults to `'[groups]'`. 
The scope value can be a string, or a list of strings.\n\nFor more information on `scopes`, please review the [User Management Documentation](user-management\/index.md).\n\nThe following example shows targeting `email` as well as `groups` from your OIDC provider.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-rbac-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/name: argocd-rbac-cm\n    app.kubernetes.io\/part-of: argocd\ndata:\n  policy.csv: |\n    p, my-org:team-alpha, applications, sync, my-project\/*, allow\n    g, my-org:team-beta, role:admin\n    g, user@example.org, role:admin\n  policy.default: role:readonly\n  scopes: '[groups, email]'\n```\n\nThis can be useful to associate users' emails and groups directly in an AppProject.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: AppProject\nmetadata:\n  name: team-beta-project\n  namespace: argocd\nspec:\n  roles:\n    - name: admin\n      description: Admin privileges to team-beta\n      policies:\n        - p, proj:team-beta-project:admin, applications, *, *, allow\n      groups:\n        - user@example.org # Value from the email scope\n        - my-org:team-beta # Value from the groups scope\n```\n\n## Local Users\/Accounts\n\n[Local users](user-management\/index.md#local-usersaccounts) are assigned access by either grouping them with a role or by assigning policies directly\nto them.\n\nThe example below shows how to assign a policy directly to a local user.\n\n```csv\np, my-local-user, applications, sync, my-project\/*, allow\n```\n\nThis example shows how to assign a role to a local user.\n\n```csv\ng, my-local-user, role:admin\n```\n\n!!! warning \"Ambiguous Group Assignments\"\n\n    If you have [enabled SSO](user-management\/index.md#sso), any SSO user with a scope that matches a local user will be\n    added to the same roles as the local user. 
For example, if local user `sally` is assigned to `role:admin`, and if an\n    SSO user has a scope which happens to be named `sally`, that SSO user will also be assigned to `role:admin`.\n\n    An example of where this may be a problem is if your SSO provider is an SCM, and org members are automatically\n    granted scopes named after the orgs. If a user can create or add themselves to an org in the SCM, they can gain the\n    permissions of the local user with the same name.\n\n    To avoid ambiguity, if you are using local users and SSO, it is recommended to assign policies directly to local\n    users, and not to assign roles to local users. In other words, instead of using `g, my-local-user, role:admin`, you\n    should explicitly assign policies to `my-local-user`:\n\n    ```csv\n    p, my-local-user, *, *, *, allow\n    ```\n\n## Policy CSV Composition\n\nIt is possible to provide additional entries in the `argocd-rbac-cm` configmap to compose the final policy CSV.\nIn this case, the key must follow the pattern `policy.<any string>.csv`.\nArgo CD will concatenate all additional policies it finds with this pattern below the main one (`policy.csv`).\nThe order of the additional policies is determined by their key strings.\n\nExample: if two additional policies are provided with keys `policy.A.csv` and `policy.B.csv`,\nit will first concatenate `policy.A.csv` and then `policy.B.csv`.\n\nThis is useful for composing policies in config management tools like Kustomize, Helm, etc.\n\nThe example below shows how a Kustomize patch can be provided in an overlay to add additional configuration to an existing RBAC ConfigMap.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-rbac-cm\n  namespace: argocd\ndata:\n  policy.tester-overlay.csv: |\n    p, role:tester, applications, *, *\/*, allow\n    p, role:tester, projects, *, *, allow\n    g, my-org:team-qa, role:tester\n```\n\n## Validating and testing your RBAC policies\n\nIf you want to 
ensure that your RBAC policies are working as expected, you can\nuse the [`argocd admin settings rbac` command](..\/user-guide\/commands\/argocd_admin_settings_rbac.md) to validate them.\nThis tool allows you to test whether a certain role or subject can perform the requested action with a policy\nthat's not live yet in the system, i.e. from a local file or config map.\nAdditionally, it can be used against the live RBAC configuration in the cluster your Argo CD is running in.\n\n### Validating a policy\n\nTo check whether your new policy configuration is valid and understood by Argo CD's RBAC implementation,\nyou can use the [`argocd admin settings rbac validate` command](..\/user-guide\/commands\/argocd_admin_settings_rbac_validate.md).\n\n### Testing a policy\n\nTo test whether a role or subject (group or local user) has sufficient\npermissions to execute certain actions on certain resources, you can\nuse the [`argocd admin settings rbac can` command](..\/user-guide\/commands\/argocd_admin_settings_rbac_can.md).","site":"argocd","answers_cleaned":"  RBAC Configuration  The RBAC feature enables restrictions of access to Argo CD resources  Argo CD does not have its own user management system and has only one built in user   admin   The  admin  user is a superuser and it has unrestricted access to the system  RBAC requires  SSO configuration  user management index md  or  one or more local users setup  user management index md   Once SSO or local users are configured  additional RBAC roles can be defined  and SSO groups or local users can then be mapped to roles   There are two main components where RBAC configuration can be defined     The global RBAC config map  see  argo rbac cm yaml  argocd rbac cm yaml md     The  AppProject s roles     user guide projects md project roles      Basic Built in Roles  Argo CD has two pre defined roles but RBAC configuration allows defining roles and groups  see below       role readonly   read only access to all resources    role 
admin   unrestricted access to all resources  These default built in role definitions can be seen in  builtin policy csv  https   github com argoproj argo cd blob master assets builtin policy csv      Default Policy for Authenticated Users  When a user is authenticated in Argo CD  it will be granted the role specified in  policy default        warning  Restricting Default Permissions         All authenticated users get  at least  the permissions granted by the default policies  This access cannot be blocked     by a  deny  rule    It is recommended to create a new  role authenticated  with the minimum set of permissions possible      then grant permissions to individual roles as needed      Anonymous Access  Enabling anonymous access to the Argo CD instance allows users to assume the default role permissions specified by  policy default    without being authenticated     The anonymous access to Argo CD can be enabled using the  users anonymous enabled  field in  argocd cm   see  argocd cm yaml  argocd cm yaml md         warning      When enabling anonymous access  consider creating a new default role and assigning it to the default policies     with  policy default  role unauthenticated       RBAC Model Structure  The model syntax is based on  Casbin  https   casbin org docs overview   There are two different types of syntax  one for assigning policies  and another one for assigning users to internal roles     Group    Allows to assign authenticated users groups to internal roles   Syntax   g   user group    role        user group    The entity to whom the role will be assigned  It can be a local user or a user authenticated with SSO    When SSO is used  the  user  will be based on the  sub  claims  while the group is one of the values returned by the  scopes  configuration      role    The internal role to which the entity will be assigned     Policy    Allows to assign permissions to an entity   Syntax   p   role user group    resource    action    object    
effect        role user group    The entity to whom the policy will be assigned     resource    The type of resource on which the action is performed      action    The operation that is being performed on the resource      object    The object identifier representing the resource on which the action is performed  Depending on the resource  the object s format will vary      effect    Whether this policy should grant or restrict the operation on the target object  One of  allow  or  deny    Below is a table that summarizes all possible resources and which actions are valid for each of them     Resource Action       get   create   update   delete   sync   action   override   invoke                                                                                                    applications                                                                         applicationsets                                                                      clusters                                                                             projects                                                                             repositories                                                                         accounts                                                                             certificates                                                                         gpgkeys                                                                              logs                                                                                 exec                                                                                 extensions                                                                            Application Specific Policy  Some policy only have meaning within an application  It is the case with the following resources      applications     applicationsets     logs     exec   While they can be set in the global configuration  they can also be configured in  
AppProject s roles     user guide projects md project roles   The expected   object   value in the policy structure is replaced by   app project   app name     For instance  these policies would grant  example user  access to get any applications  but only be able to see logs in  my app  application part of the  example project  project      csv p  example user  applications  get     allow p  example user  logs  get  example project my app  allow           Application in Any Namespaces  When  application in any namespace  app any namespace md  is enabled  the expected   object   value in the policy structure is replaced by   app project   app ns   app name    Since multiple applications could have the same name in the same project  the policy below makes sure to restrict access only to  app namespace       csv p  example user  applications  get    app namespace    allow p  example user  logs  get  example project app namespace my app  allow          The  applications  resource  The  applications  resource is an  Application Specific Policy   application specific policy         Fine grained Permissions for  update   delete  action  The  update  and  delete  actions  when granted on an application  will allow the user to perform the operation on the application itself   and   all of its resources  It can be desirable to only allow  update  or  delete  on specific resources within an application   To do so  when the action if performed on an application s resource  the   action   will have the   action   group   kind   ns   name   format   For instance  to grant access to  example user  to only delete Pods in the  prod app  Application  the policy could be      csv p  example user  applications  delete   Pod      default prod app  allow         warning  Understand glob pattern behavior       Argo CD RBAC does not use     as a separator when evaluating glob patterns  So the pattern  delete   kind        will match  delete  group  kind  namespace   name   but also  
delete  group   kind  kind  name         The fact that both of these match will generally not be a problem  because resource kinds generally contain capital      letters  and namespaces cannot contain capital letters  However  it is possible for a resource kind to be lowercase       So it is better to just always include all the parts of the resource in the pattern  in other words  always use four      slashes    If we want to grant access to the user to update all resources of an application  but not the application itself      csv p  example user  applications  update    default prod app  allow      If we want to explicitly deny delete of the application  but allow the user to delete Pods      csv p  example user  applications  delete  default prod app  deny p  example user  applications  delete   Pod      default prod app  allow          note      It is not possible to deny fine grained permissions for a sub resource if the action was   explicitly allowed on the application        For instance  the following policies will   allow   a user to delete the Pod and any other resources in the application          csv     p  example user  applications  delete  default prod app  allow     p  example user  applications  delete   Pod      default prod app  deny               The  action  action  The  action  action corresponds to either built in resource customizations defined  in the Argo CD repository  https   github com argoproj argo cd tree master resource customizations   or to  custom resource actions  resource actions md custom resource actions  defined by you   See the  resource actions documentation  resource actions md built in actions  for a list of built in actions   The   action   has the  action  group   kind   action name   format   For example  a resource customization path  resource customizations extensions DaemonSet actions restart action lua  corresponds to the  action  path  action extensions DaemonSet restart   If the resource is not under a group  
for example  Pods or ConfigMaps   then the path will be  action  Pod action name    The following policies allows the user to perform any action on the DaemonSet resources  as well as the  maintenance off  action on a Pod      csv p  example user  applications  action  Pod maintenance off  default    allow p  example user  applications  action extensions DaemonSet    default    allow      To allow the user to perform any actions      csv p  example user  applications  action    default    allow           The  override  action  When granted along with the  sync  action  the override action will allow a user to synchronize local manifests to the Application  These manifests will be used instead of the configured source  until the next sync is performed       The  applicationsets  resource  The  applicationsets  resource is an  Application Specific policy   application specific policy     ApplicationSets  applicationset index md  provide a declarative way to automatically create update delete Applications   Allowing the  create  action on the resource effectively grants the ability to create Applications  While it doesn t allow the user to create Applications directly  they can create Applications via an ApplicationSet       note      In v2 5  it is not possible to create an ApplicationSet with a templated Project field  e g   project         via the API  or  by extension  the CLI   Disallowing templated projects makes project restrictions via RBAC safe   With the resource being application specific  the   object   of the applicationsets policy will have the format   app project   app name    However  since an ApplicationSet does belong to any project  the   app project   value represents the projects in which the ApplicationSet will be able to create Applications   With the following policy  a  dev group  user will be unable to create an ApplicationSet capable of creating Applications outside the  dev project  project      csv p  dev group  applicationsets     dev 
project    allow          The  logs  resource  The  logs  resource is an  Application Specific Policy   application specific policy    When granted with the  get  action  this policy allows a user to see Pod s logs of an application via the Argo CD UI  The functionality is similar to  kubectl logs        The  exec  resource  The  exec  resource is an  Application Specific Policy   application specific policy    When granted with the  create  action  this policy allows a user to  exec  into Pods of an application via the Argo CD UI  The functionality is similar to  kubectl exec    See  Web based Terminal  web based terminal md  for more info       The  extensions  resource  With the  extensions  resource  it is possible to configure permissions to invoke  proxy extensions     developer guide extensions proxy extensions md   The  extensions  RBAC validation works in conjunction with the  applications  resource  A user   needs to have read permission on the application   where the request is originated from   Consider the example below  it will allow the  example user  to invoke the  httpbin  extensions in all applications under the  default  project      csv p  example user  applications  get  default    allow p  example user  extensions  invoke  httpbin  allow          The  deny  effect  When  deny  is used as an effect in a policy  it will be effective if the policy matches  Even if more specific policies with the  allow  effect match as well  the  deny  will have priority   The order in which the policies appears in the policy file configuration has no impact  and the result is deterministic      Policies Evaluation and Matching  The evaluation of access is done in two parts  validating against the default policy configuration  then validating against the policies for the current user     If an action is allowed or denied by the default policies  then this effect will be effective without further evaluation    When the effect is undefined  the evaluation will 
continue with subject specific policies   The access will be evaluated for the user  then for each configured group that the user is part of   The matching engine  configured in  policy matchMode   can use two different match modes to compare the values of tokens      glob   based on the   glob  package  https   pkg go dev github com gobwas glob      regex   based on the   regexp  package  https   pkg go dev regexp    When all tokens match during the evaluation  the effect will be returned  The evaluation will continue until all matching policies are evaluated  or until a policy with the  deny  effect matches  After all policies are evaluated  if there was at least one  allow  effect and no  deny   access will be granted       Glob matching  When  glob  is used  the policy tokens are treated as single terms  without separators   Consider the following policy       p  example user  applications  action extensions    default    allow      When the  example user  executes the  extensions DaemonSet test  action  the following  glob  matches will happen   1  The current user  example user  matches the token  example user   2  The value  applications  matches the token  applications   3  The value  action extensions DaemonSet test  matches  action extensions     Note that     is not treated as a separator and the use of      is not necessary  4  The value  default my app  matches  default         Using SSO Users Groups  The  scopes  field controls which OIDC scopes to examine during RBAC enforcement  in addition to  sub  scope   If omitted  it defaults to    groups     The scope value can be a string  or a list of strings   For more information on  scopes  please review the  User Management Documentation  user management index md    The following example shows targeting  email  as well as  groups  from your OIDC provider      yaml apiVersion  v1 kind  ConfigMap metadata    name  argocd rbac cm   namespace  argocd   labels      app kubernetes io name  argocd rbac cm     
```yaml
    app.kubernetes.io/part-of: argocd
data:
  policy.csv: |
    p, my-org:team-alpha, applications, sync, my-project/*, allow
    g, my-org:team-beta, role:admin
    g, user@example.org, role:admin
  policy.default: role:readonly
  scopes: '[groups, email]'
```

This can be useful to associate users' emails and groups directly in AppProject:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-beta-project
  namespace: argocd
spec:
  roles:
    - name: admin
      description: Admin privileges to team-beta
      policies:
        - p, proj:team-beta-project:admin, applications, *, *, allow
      groups:
        - user@example.org # Value from the email scope
        - my-org:team-beta # Value from the groups scope
```

## Local Users/Accounts

[Local users](user-management/index.md#local-usersaccounts) are assigned access by either grouping them with a role or by assigning policies directly to them.

The example below shows how to assign a policy directly to a local user:

```yaml
p, my-local-user, applications, sync, my-project/*, allow
```

This example shows how to assign a role to a local user:

```yaml
g, my-local-user, role:admin
```

!!!warning "Ambiguous Group Assignments"
    If you have [enabled SSO](user-management/index.md#sso), any SSO user with a scope that matches a local user will be
    added to the same roles as the local user. For example, if local user `sally` is assigned `role:admin`, and if an
    SSO user has a scope which happens to be named `sally`, that SSO user will also be assigned `role:admin`.

    An example of where this may be a problem is if your SSO provider is an SCM, and org members are automatically
    granted scopes named after the orgs. If a user can create or add themselves to an org in the SCM, they can gain the
    permissions of the local user with the same name.

    To avoid ambiguity, if you are using local users and SSO, it is recommended to assign policies directly to local
    users, and not to assign roles to local users. In other words, instead of using `g, my-local-user, role:admin`, you
    should explicitly assign policies to `my-local-user`:

    ```yaml
    p, my-local-user, *, *, *, allow
    ```

## Policy CSV Composition

It is possible to provide additional entries in the `argocd-rbac-cm` configmap to compose the final policy csv. In this case, the key must follow the pattern `policy.<any string>.csv`. Argo CD will concatenate all additional policies it finds with this pattern below the main one (`policy.csv`). The order of the additional policies is determined by the key string. Example: if two additional policies are provided with keys `policy.A.csv` and `policy.B.csv`, it will first concatenate `policy.A.csv` and then `policy.B.csv`.

This is useful for composing policies in config management tools like Kustomize, Helm, etc.

The example below shows how a Kustomize patch can be provided in an overlay to add additional configuration to an existing RBAC ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.tester-overlay.csv: |
    p, role:tester, applications, *, */*, allow
    p, role:tester, projects, *, *, allow
    g, my-org:team-qa, role:tester
```

## Validating and testing your RBAC policies

If you want to ensure that your RBAC policies are working as expected, you can use the [`argocd admin settings rbac` command](../user-guide/commands/argocd_admin_settings_rbac.md) to validate them. This tool allows you to test whether a certain role or subject can perform the requested action with a policy that's not live yet in the system, i.e. from a local file or config map. Additionally, it can be used against the live RBAC configuration in the cluster your Argo CD is running in.

### Validating a policy

To check whether your new policy configuration is valid and understood by Argo CD's RBAC implementation, you can use the [`argocd admin settings rbac validate` command](../user-guide/commands/argocd_admin_settings_rbac_validate.md).

### Testing a policy

To test whether a role or subject (group or local user) has sufficient permissions to execute certain actions on certain resources, you can use the [`argocd admin settings rbac can` command](../user-guide/commands/argocd_admin_settings_rbac_can.md).
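The key-ordering rule for composed policies can be sketched in a few lines of Python. This is an illustrative sketch of the concatenation order described under "Policy CSV Composition" above, not Argo CD's actual implementation; the `compose_policy` helper and the sample keys are invented for the example:

```python
def compose_policy(data: dict) -> str:
    """Concatenate policy.csv with every policy.<key>.csv entry,
    ordered lexicographically by key (sketch of the documented rule)."""
    parts = [data.get("policy.csv", "")]
    extra = sorted(
        k for k in data
        if k.startswith("policy.") and k.endswith(".csv") and k != "policy.csv"
    )
    parts += [data[k] for k in extra]
    # Drop empty fragments and join the final CSV.
    return "\n".join(p.strip() for p in parts if p.strip())

cm = {
    "policy.csv": "g, my-org:team-beta, role:admin",
    "policy.B.csv": "g, my-org:team-qa, role:tester",
    "policy.A.csv": "p, role:tester, projects, get, *, allow",
}
# policy.A.csv sorts before policy.B.csv, so it is appended first.
print(compose_policy(cm))
```

Note that the ordering is purely by key string, so a naming scheme like `policy.10-base.csv`, `policy.20-teams.csv` keeps overlays predictable.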
# Custom Styles

Argo CD imports the majority of its UI stylesheets from the [argo-ui](https://github.com/argoproj/argo-ui) project.
Sometimes, it may be desired to customize certain components of the UI for branding purposes or to
help distinguish between multiple instances of Argo CD running in different environments.

Such custom styling can be applied either by supplying a URL to a remotely hosted CSS file, or by
loading a CSS file directly onto the argocd-server container. Both mechanisms are driven by modifying
the argocd-cm configMap.

## Adding Styles Via Remote URL

The first method simply requires the addition of the remote URL to the argocd-cm configMap:

### argocd-cm
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  ...
  name: argocd-cm
data:
  ui.cssurl: "https://www.example.com/my-styles.css"
```

## Adding Styles Via Volume Mounts

The second method requires mounting the CSS file directly onto the argocd-server container and then
providing the argocd-cm with the properly configured path to that file.
In the following example,
the CSS file is actually defined inside of a separate configMap (the same effect could be achieved
by generating or downloading a CSS file in an initContainer):

### argocd-cm
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  ...
  name: argocd-cm
data:
  ui.cssurl: "./custom/my-styles.css"
```

Note that the `cssurl` should be specified relative to the "/shared/app" directory,
not as an absolute path.

### argocd-styles-cm
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  ...
  name: argocd-styles-cm
data:
  my-styles.css: |
    .sidebar {
      background: linear-gradient(to bottom, #999, #777, #333, #222, #111);
    }
```

### argocd-server
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
  ...
spec:
  template:
    ...
    spec:
      containers:
      - command:
        ...
        volumeMounts:
        ...
        - mountPath: /shared/app/custom
          name: styles
      ...
      volumes:
      ...
      - configMap:
          name: argocd-styles-cm
        name: styles
```

Note that the CSS file should be mounted within a subdirectory of the "/shared/app" directory
(e.g. "/shared/app/custom"). Otherwise, the file will likely fail to be imported by the browser with an
"incorrect MIME type" error. The subdirectory can be changed using the `server.staticassets` key of the
[argocd-cmd-params-cm.yaml](./argocd-cmd-params-cm.yaml) ConfigMap.

## Developing Style Overlays

The styles specified in the injected CSS file should be specific to components and classes defined in [argo-ui](https://github.com/argoproj/argo-ui).
It is recommended to test out the styles you wish to apply first by making use of your browser's built-in developer tools.
For a more full-featured
experience, you may wish to build a separate project using the [Argo CD UI dev server](https://webpack.js.org/configuration/dev-server/).

## Banners

Argo CD can optionally display a banner that can be used to notify your users of upcoming maintenance and operational changes. This feature can be enabled by specifying the banner message using the `ui.bannercontent` field in the `argocd-cm` ConfigMap, and Argo CD will display this message at the top of every UI page. You can optionally add a link to this message by setting `ui.bannerurl`. You can also make the banner sticky (permanent) by setting `ui.bannerpermanent` to true, and change its position using `ui.bannerposition`: `"both"` displays the banner on both the top and bottom, while `"bottom"` displays it exclusively at the bottom.

### argocd-cm
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  ...
  name: argocd-cm
data:
  ui.bannercontent: "Banner message linked to a URL"
  ui.bannerurl: "www.bannerlink.com"
  ui.bannerpermanent: "true"
  ui.bannerposition: "bottom"
```

![banner with link](../assets/banner.png)
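The note above about keeping `ui.cssurl` relative can be illustrated with a quick path calculation (illustrative only, using Python's `posixpath`; `/shared/app` is the static-assets root named above):

```python
import posixpath

# A relative cssurl from argocd-cm is resolved under the static-assets
# root (/shared/app), which is why an absolute path would not work here.
root = "/shared/app"
cssurl = "./custom/my-styles.css"
resolved = posixpath.normpath(posixpath.join(root, cssurl))
print(resolved)  # /shared/app/custom/my-styles.css
```

The resolved path lands inside the mounted subdirectory, matching the `mountPath` in the Deployment example above.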
# Cluster Bootstrapping

This guide is for operators who have already installed Argo CD, and have a new cluster and are looking to install many apps in that cluster.

There's no one particular pattern to solve this problem, e.g. you could write a script to create your apps, or you could even manually create them. However, users of Argo CD tend to use the **app of apps pattern**.

!!!warning "App of Apps is an admin-only tool"
    The ability to create Applications in arbitrary [Projects](./declarative-setup.md#projects)
    is an admin-level capability. Only admins should have push access to the parent Application's source repository.
    Admins should review pull requests to that repository, paying particular attention to the `project` field in each
    Application. Projects with access to the namespace in which Argo CD is installed effectively have admin-level
    privileges.

## App Of Apps Pattern

[Declaratively](declarative-setup.md) specify one Argo CD app that consists only of other apps.

![Application of Applications](../assets/application-of-applications.png)

### Helm Example

This example shows how to use Helm to achieve this.
You can, of course, use another tool if you like.

A typical layout of your Git repository for this might be:

```
├── Chart.yaml
├── templates
│   ├── guestbook.yaml
│   ├── helm-dependency.yaml
│   ├── helm-guestbook.yaml
│   └── kustomize-guestbook.yaml
└── values.yaml
```

`Chart.yaml` is boiler-plate.

`templates` contains one file for each child app, roughly:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: argocd
    server: 
  project: default
  source:
    path: guestbook
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
```

The sync policy is set to automated + prune, so that child apps are automatically created, synced, and deleted when the manifest is changed, but you may wish to disable this. The finalizer has also been added, which will ensure that your apps are deleted correctly.

Pin the revision to a specific Git commit SHA to make sure that, even if the child apps' repo changes, the apps will only change when the parent app changes that revision. Alternatively, you can set it to HEAD or a branch name.

As you probably want to override the cluster server, this value is templated.

`values.yaml` contains the default values:

```yaml
spec:
  destination:
    server: https://kubernetes.default.svc
```

Next, you need to create and sync your parent app, e.g.
via the CLI:

```bash
argocd app create apps \
    --dest-namespace argocd \
    --dest-server https://kubernetes.default.svc \
    --repo https://github.com/argoproj/argocd-example-apps.git \
    --path apps
argocd app sync apps
```

The parent app will appear as in-sync but the child apps will be out of sync:

![New App Of Apps](../assets/new-app-of-apps.png)

> NOTE: You may want to modify this behavior to bootstrap your cluster in waves; see [v1.8 upgrade notes](upgrading/1.7-1.8.md) for information on changing this.

You can either sync via the UI, first filtering by the correct label:

![Filter Apps](../assets/filter-apps.png)

Then select the "out of sync" apps and sync:

![Sync Apps](../assets/sync-apps.png)

Or, via the CLI:

```bash
argocd app sync -l app.kubernetes.io/instance=apps
```

View [the example on GitHub](https://github.com/argoproj/argocd-example-apps/tree/master/apps).

### Cascading deletion

If you want to ensure that child apps and all of their resources are deleted when the parent app is deleted, make sure to add the appropriate [finalizer](../user-guide/app_deletion.md#about-the-deletion-finalizer) to your `Application` definition:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
 ...
```

### Ignoring differences in child applications

To allow changes in child apps without triggering an out-of-sync status, or to allow modification for debugging etc., the app of apps pattern works with [diff customization](../user-guide/diffing/).
The example below shows how to ignore changes to syncPolicy and other common values.

```yaml
spec:
  ...
  syncPolicy:
    ...
    syncOptions:
      - RespectIgnoreDifferences=true
    ...
  ignoreDifferences:
    - group: "*"
      kind: "Application"
      namespace: "*"
      jsonPointers:
        # Allow manually disabling auto sync for apps, useful for debugging.
        - /spec/syncPolicy/automated
        # These are automatically updated on a regular basis. Not ignoring last applied configuration since it's used for computing diffs after normalization.
        - /metadata/annotations/argocd.argoproj.io~1refresh
        - /operation
  ...
```
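The templated `server` field mentioned in the Helm example above can be wired up roughly as follows. This is a sketch, not the exact upstream template; the value path simply mirrors the `values.yaml` structure shown earlier:

```yaml
# templates/guestbook.yaml (sketch): the destination server comes from
# values.yaml, so an overlay or a `helm ... --set spec.destination.server=...`
# can point child apps at a different cluster without editing the template.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: argocd
    server: {{ .Values.spec.destination.server }}
  project: default
  source:
    path: guestbook
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
```

With the default `values.yaml`, the rendered manifest targets `https://kubernetes.default.svc`.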
# Installation

Argo CD has two types of installations: multi-tenant and core.

## Multi-Tenant

The multi-tenant installation is the most common way to install Argo CD. This type of installation is typically used to service multiple application developer teams
in the organization and maintained by a platform team.

The end-users can access Argo CD via the API server using the Web UI or `argocd` CLI. The `argocd` CLI has to be configured using the `argocd login <server-host>` command
(learn more [here](../user-guide/commands/argocd_login.md)).

Two types of installation manifests are provided:

### Non High Availability:

Not recommended for production use. This type of installation is typically used during the evaluation period for demonstrations and testing.

* [install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/install.yaml) - Standard Argo CD installation with cluster-admin access. Use this
  manifest set if you plan to use Argo CD to deploy applications in the same cluster that Argo CD runs
  in (i.e. kubernetes.default.svc). It will still be able to deploy to external clusters with inputted
  credentials.

  > Note: The ClusterRoleBinding in the installation manifest is bound to a ServiceAccount in the argocd namespace.
  > Be cautious when modifying the namespace, as changing it may cause permission-related errors unless the ClusterRoleBinding is correctly adjusted to reflect the new namespace.

* [namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/namespace-install.yaml) - Installation of Argo CD which requires only
  namespace level privileges (does not need cluster roles). Use this manifest set if you do not
  need Argo CD to deploy applications in the same cluster that Argo CD runs in, and will rely solely
  on inputted cluster credentials. An example of using this set of manifests is if you run several
  Argo CD instances for different teams, where each instance will be deploying applications to
  external clusters. It will still be possible to deploy to the same cluster (kubernetes.default.svc)
  with inputted credentials (i.e. `argocd cluster add <CONTEXT> --in-cluster --namespace <YOUR NAMESPACE>`).

  > Note: Argo CD CRDs are not included in [namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/namespace-install.yaml)
  > and have to be installed separately. The CRD manifests are located in the [manifests/crds](https://github.com/argoproj/argo-cd/blob/master/manifests/crds) directory.
  > Use the following command to install them:
  > ```
  > kubectl apply -k https://github.com/argoproj/argo-cd/manifests/crds\?ref\=stable
  > ```

### High Availability:

High Availability installation is recommended for production use.
This bundle includes the same components but tuned for high availability and resiliency.

* [ha/install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/ha/install.yaml) - the same as install.yaml but with multiple replicas for
  supported components.

* [ha/namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/ha/namespace-install.yaml) - the same as namespace-install.yaml but
  with multiple replicas for supported components.

## Core

The Argo CD Core installation is primarily used to deploy Argo CD in
headless mode. This type of installation is most suitable for cluster
administrators who independently use Argo CD and don't need
multi-tenancy features. This installation includes fewer components
and is easier to set up. The bundle does not include the API server or
UI, and installs the lightweight (non-HA) version of each component.

The installation manifest is available at [core-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/core-install.yaml).

For more details about Argo CD Core please refer to the [official
documentation](./core.md).

## Kustomize

The Argo CD manifests can also be installed using Kustomize.
It is recommended to include the manifest as a remote resource and apply additional customizations
using Kustomize patches.

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: argocd
resources:
- https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.2/manifests/install.yaml
```

For an example of this, see the [kustomization.yaml](https://github.com/argoproj/argoproj-deployments/blob/master/argocd/kustomization.yaml)
used to deploy the [Argoproj CI/CD infrastructure](https://github.com/argoproj/argoproj-deployments#argoproj-deployments).

#### Installing Argo CD in a Custom Namespace

If you want to install Argo CD in a namespace other than the default argocd, you can use Kustomize to apply a patch that updates the ClusterRoleBinding to reference the correct namespace for the ServiceAccount. This ensures that the necessary permissions are correctly set in your custom namespace.

Below is an example of how to configure your kustomization.yaml to install Argo CD in a custom namespace:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: <your-custom-namespace>
resources:
  - https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.2/manifests/install.yaml

patches:
  - patch: |-
      - op: replace
        path: /subjects/0/namespace
        value: <your-custom-namespace>
    target:
      kind: ClusterRoleBinding
```

This patch ensures that the ClusterRoleBinding correctly maps to the ServiceAccount in your custom namespace, preventing any permission-related issues during the deployment.

## Helm

Argo CD can be installed using [Helm](https://helm.sh/).
The Helm chart is currently community maintained and available at
[argo-helm/charts/argo-cd](https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd).

## Supported versions

For detailed information regarding Argo CD's version support policy, please refer to the [Release Process and Cadence documentation](https://argo-cd.readthedocs.io/en/stable/developer-guide/release-process-and-cadence/).

## Tested versions

The following table shows the versions of Kubernetes that are tested with each version of Argo CD.

{!docs/operator-manual/tested-kubernetes-versions.md!}
errors unless the ClusterRoleBinding is correctly adjusted to reflect the new namespace.

* [namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/namespace-install.yaml) - Installation of Argo CD which requires only namespace level privileges (does not need cluster roles). Use this manifest set if you do not need Argo CD to deploy applications in the same cluster that Argo CD runs in, and will rely solely on inputted cluster credentials. An example of using this set of manifests is if you run several Argo CD instances for different teams, where each instance will be deploying applications to external clusters. It will still be possible to deploy to the same cluster (kubernetes.svc.default) with inputted credentials (i.e. `argocd cluster add <CONTEXT> --in-cluster --namespace <YOUR NAMESPACE>`).

    !!! note
        Argo CD CRDs are not included into [namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/namespace-install.yaml) and have to be installed separately. The CRD manifests are located in the [manifests/crds](https://github.com/argoproj/argo-cd/blob/master/manifests/crds) directory. Use the following command to install them:

        ```bash
        kubectl apply -k "https://github.com/argoproj/argo-cd/manifests/crds?ref=stable"
        ```

## High Availability

High Availability installation is recommended for production use. This bundle includes the same components but tuned for high availability and resiliency.

* [ha/install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/ha/install.yaml) - the same as install.yaml but with multiple replicas for supported components.
* [ha/namespace-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/ha/namespace-install.yaml) - the same as namespace-install.yaml but with multiple replicas for supported components.

## Core

The Argo CD Core installation is primarily used to deploy Argo CD in headless mode. This type of installation is most suitable for cluster administrators who independently use Argo CD and don't need multi-tenancy features. This installation includes fewer components and is easier to set up. The bundle does not include the API server or UI, and installs the lightweight (non-HA) version of each component.

The installation manifest is available at [core-install.yaml](https://github.com/argoproj/argo-cd/blob/master/manifests/core-install.yaml).

For more details about Argo CD Core please refer to the [official documentation](core.md).

## Kustomize

The Argo CD manifests can also be installed using Kustomize. It is recommended to include the manifest as a remote resource and apply additional customizations using Kustomize patches.

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
- https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.2/manifests/install.yaml
```

For an example of this, see the [kustomization.yaml](https://github.com/argoproj/argoproj-deployments/blob/master/argocd/kustomization.yaml) used to deploy the [Argoproj CI/CD infrastructure](https://github.com/argoproj/argoproj-deployments#argoproj-deployments).

### Installing Argo CD in a Custom Namespace

If you want to install Argo CD in a namespace other than the default `argocd`, you can use Kustomize to apply a patch that updates the ClusterRoleBinding to reference the correct namespace for the ServiceAccount. This ensures that the necessary permissions are correctly set in your custom namespace.

Below is an example of how to configure your `kustomization.yaml` to install Argo CD in a custom namespace:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: <your-custom-namespace>
resources:
- https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.2/manifests/install.yaml
patches:
- patch: |-
    - op: replace
      path: /subjects/0/namespace
      value: <your-custom-namespace>
  target:
    kind: ClusterRoleBinding
```

This patch ensures that the ClusterRoleBinding correctly maps to the ServiceAccount in your custom namespace, preventing any permission-related issues during the deployment.

## Helm

Argo CD can be installed using [Helm](https://helm.sh/). The Helm chart is currently community maintained and available at [argo-helm/charts/argo-cd](https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd).

## Supported versions

For detailed information regarding Argo CD's version support policy, please refer to the [Release Process and Cadence documentation](https://argo-cd.readthedocs.io/en/stable/developer-guide/release-process-and-cadence/).

## Tested versions

The following table shows the versions of Kubernetes that are tested with each version of Argo CD.

{!docs/operator-manual/tested-kubernetes-versions.md!}
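To make the custom-namespace patch above concrete, here is a minimal sketch of what an RFC 6902 `replace` operation does to a ClusterRoleBinding's subject list (illustrative data only; the real patching is performed by Kustomize, and the helper below is hypothetical, not part of any Argo CD tooling):

```python
# Minimal sketch of a JSON-patch "replace" op, as used in the
# Kustomization above. Kustomize does this natively; this is only
# to illustrate what the patch changes.

def apply_replace(doc, path, value):
    """Apply a single RFC 6902 'replace' operation to a nested structure."""
    parts = [p for p in path.split("/") if p]
    target = doc
    for part in parts[:-1]:
        # List indices arrive as strings in a JSON-patch path
        target = target[int(part)] if isinstance(target, list) else target[part]
    last = parts[-1]
    if isinstance(target, list):
        target[int(last)] = value
    else:
        target[last] = value
    return doc

# A stripped-down ClusterRoleBinding, as Kustomize would see it
crb = {
    "kind": "ClusterRoleBinding",
    "subjects": [
        {"kind": "ServiceAccount",
         "name": "argocd-application-controller",
         "namespace": "argocd"},
    ],
}
apply_replace(crb, "/subjects/0/namespace", "my-custom-namespace")
print(crb["subjects"][0]["namespace"])  # my-custom-namespace
```

The ServiceAccount now points at the custom namespace, which is exactly what keeps the cluster-scoped binding working after the install manifests are re-namespaced.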
# Ingress Configuration

Argo CD API server runs both a gRPC server (used by the CLI), as well as a HTTP/HTTPS server (used by the UI).
Both protocols are exposed by the argocd-server service object on the following ports:

* 443 - gRPC/HTTPS
* 80 - HTTP (redirects to HTTPS)

There are several ways in which Ingress can be configured.

## [Ambassador](https://www.getambassador.io/)

The Ambassador Edge Stack can be used as a Kubernetes ingress controller with [automatic TLS termination](https://www.getambassador.io/docs/latest/topics/running/tls/#host) and routing capabilities for both the CLI and the UI.

The API server should be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md). Given the `argocd` CLI includes the port number in the request `host` header, 2 Mappings are required.
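Since setting `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap comes up repeatedly below, here is a minimal sketch of what that looks like (merge the `data` key into your existing ConfigMap rather than replacing it wholesale):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
data:
  # Run the API server without TLS so the ingress controller
  # (or service mesh) can terminate TLS in front of it
  server.insecure: "true"
```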
Note: Disabling TLS is not required if you are using grpc-web.

### Option 1: Mapping CRD for Host-based Routing
```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server-ui
  namespace: argocd
spec:
  host: argocd.example.com
  prefix: /
  service: https://argocd-server:443
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server-cli
  namespace: argocd
spec:
  # NOTE: the port must be ignored if you have strip_matching_host_port enabled on envoy
  host: argocd.example.com:443
  prefix: /
  service: argocd-server:80
  regex_headers:
    Content-Type: "^application/grpc.*$"
  grpc: true
```

Login with the `argocd` CLI:

```shell
argocd login <host>
```

### Option 2: Mapping CRD for Path-based Routing

The API server must be configured to be available under a non-root path (e.g. `/argo-cd`). Edit the `argocd-server` deployment to add the `--rootpath=/argo-cd` flag to the argocd-server command.

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server
  namespace: argocd
spec:
  prefix: /argo-cd
  rewrite: /argo-cd
  service: https://argocd-server:443
```

Example of the `argocd-cmd-params-cm` ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
data:
  ## Server properties
  # Value for base href in index.html. Used if Argo CD is running behind reverse proxy under subpath different from / (default "/")
  server.basehref: "/argo-cd"
  # Used if Argo CD is running behind reverse proxy under subpath different from /
  server.rootpath: "/argo-cd"
```

Login with the `argocd` CLI using the extra `--grpc-web-root-path` flag for non-root paths.

```shell
argocd login <host>:<port> --grpc-web-root-path /argo-cd
```

## [Contour](https://projectcontour.io/)
The Contour ingress controller can terminate TLS ingress traffic at the edge.

The Argo CD API server should be run with TLS disabled. Edit the `argocd-server` Deployment to add the `--insecure` flag to the argocd-server container command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

It is also possible to provide an internal-only ingress path and an external-only ingress path by deploying two instances of Contour: one behind a private-subnet LoadBalancer service and one behind a public-subnet LoadBalancer service. The private Contour deployment will pick up Ingresses annotated with `kubernetes.io/ingress.class: contour-internal` and the public Contour deployment will pick up Ingresses annotated with `kubernetes.io/ingress.class: contour-external`.

This provides the opportunity to deploy the Argo CD UI privately but still allow for SSO callbacks to succeed.

### Private Argo CD UI with Multiple Ingress Objects and BYO Certificate
Since Contour Ingress supports only a single protocol per Ingress object, define three Ingress objects.
One for private HTTP/HTTPS, one for private gRPC, and one for public HTTPS SSO callbacks.

Internal HTTP/HTTPS Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http
  annotations:
    kubernetes.io/ingress.class: contour-internal
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: internal.path.to.argocd.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
  tls:
  - hosts:
    - internal.path.to.argocd.io
    secretName: your-certificate-name
```

Internal gRPC Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-grpc
  annotations:
    kubernetes.io/ingress.class: contour-internal
spec:
  rules:
  - host: grpc-internal.path.to.argocd.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - grpc-internal.path.to.argocd.io
    secretName: your-certificate-name
```

External HTTPS SSO Callback Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-external-callback-http
  annotations:
    kubernetes.io/ingress.class: contour-external
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: external.path.to.argocd.io
    http:
      paths:
      - path: /api/dex/callback
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
  tls:
  - hosts:
    - external.path.to.argocd.io
    secretName: your-certificate-name
```

The argocd-server Service needs to be annotated with `projectcontour.io/upstream-protocol.h2c: "https,443"` to wire up the gRPC
protocol proxying.

The API server should then be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

### Contour HTTPProxy CRD

Using a Contour HTTPProxy CRD allows you to use the same hostname for the gRPC and REST APIs.

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: argocd-server
  namespace: argocd
spec:
  ingressClassName: contour
  virtualhost:
    fqdn: path.to.argocd.io
    tls:
      secretName: wildcard-tls
  routes:
    - conditions:
        - prefix: /
        - header:
            name: Content-Type
            contains: application/grpc
      services:
        - name: argocd-server
          port: 80
          protocol: h2c # allows for unencrypted http2 connections
      timeoutPolicy:
        response: 1h
        idle: 600s
        idleConnection: 600s
    - conditions:
        - prefix: /
      services:
        - name: argocd-server
          port: 80
```

## [kubernetes/ingress-nginx](https://github.com/kubernetes/ingress-nginx)

### Option 1: SSL-Passthrough

Argo CD serves multiple protocols (gRPC/HTTPS) on the same port (443). This presents a challenge when attempting to define a single nginx Ingress object and rule for the argocd-service, since the `nginx.ingress.kubernetes.io/backend-protocol` [annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol) accepts only a single value for the backend protocol (e.g.
HTTP, HTTPS, GRPC, GRPCS).

In order to expose the Argo CD API server with a single ingress rule and hostname, the `nginx.ingress.kubernetes.io/ssl-passthrough` [annotation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough) must be used to pass through TLS connections and terminate TLS at the Argo CD API server.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
```

The above rule terminates TLS at the Argo CD API server, which detects the protocol being used, and responds appropriately.
Note that the `nginx.ingress.kubernetes.io/ssl-passthrough` annotation requires that the `--enable-ssl-passthrough` flag be added to the command line arguments to `nginx-ingress-controller`.

#### SSL-Passthrough with cert-manager and Let's Encrypt

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # If you encounter a redirect loop or are getting a 307 response code
    # then you need to force the nginx ingress to connect to the backend using HTTPS.
    #
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - argocd.example.com
    secretName: argocd-server-tls # as expected by argocd-server
```

### Option 2: SSL Termination at Ingress Controller

An alternative approach is to perform the SSL termination at the Ingress. Since an `ingress-nginx` Ingress supports only a single protocol per Ingress object, two Ingress objects need to be defined using the `nginx.ingress.kubernetes.io/backend-protocol` annotation, one for HTTP/HTTPS and the other for gRPC.

Each ingress will be for a different domain (`argocd.example.com` and `grpc.argocd.example.com`).
This requires that the Ingress resources use different TLS `secretName`s to avoid unexpected behavior.

HTTP/HTTPS Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
    host: argocd.example.com
  tls:
  - hosts:
    - argocd.example.com
    secretName: argocd-ingress-http
```

gRPC Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-grpc-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
    host: grpc.argocd.example.com
  tls:
  - hosts:
    - grpc.argocd.example.com
    secretName: argocd-ingress-grpc
```

The API server should then be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

The obvious disadvantage of this approach is that it requires two separate hostnames for the API server -- one for gRPC and the other for HTTP/HTTPS.
However, it allows TLS termination to happen at the ingress controller.

## [Traefik (v3.0)](https://docs.traefik.io/)

Traefik can be used as an edge router and provide [TLS](https://docs.traefik.io/user-guides/grpc/) termination within the same deployment.

It currently has an advantage over NGINX in that it can terminate both TCP and HTTP connections _on the same port_, meaning you do not require multiple hosts or paths.

The API server should be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command or set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

### IngressRoute CRD
```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.example.com`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.example.com`) && Header(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    certResolver: default
```

## AWS Application Load Balancers (ALBs) And Classic ELB (HTTP Mode)

AWS ALBs can be used as an L7 Load Balancer for both UI and gRPC traffic, whereas Classic ELBs and NLBs can be used as L4 Load Balancers for both.

When using an ALB, you'll want to create a second service for argocd-server.
This is necessary because we need to tell the ALB to send the gRPC traffic to a different target group than the UI traffic, since the backend protocol is HTTP2 instead of HTTP1.

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: GRPC # This tells AWS to send traffic from the ALB using gRPC. Plain HTTP2 can be used, but the health checks won't be available because Argo CD currently downgrades non-gRPC calls to HTTP1
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
```

Once we create this service, we can configure the Ingress to conditionally route all `application/grpc` traffic to the new HTTP2 backend, using the `alb.ingress.kubernetes.io/conditions` annotation, as seen below. Note: The value after the `.`
in the condition annotation _must_ be the same name as the service that you want traffic to route to - and will be applied on any path with a matching serviceName.

```yaml
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      alb.ingress.kubernetes.io/backend-protocol: HTTPS
      # Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
      alb.ingress.kubernetes.io/conditions.argogrpc: |
        [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    name: argocd
    namespace: argocd
  spec:
    rules:
    - host: argocd.argoproj.io
      http:
        paths:
        - path: /
          backend:
            service:
              name: argogrpc # The grpc service must be placed before the argocd-server for the listening rules to be created in the correct order
              port:
                number: 443
          pathType: Prefix
        - path: /
          backend:
            service:
              name: argocd-server
              port:
                number: 443
          pathType: Prefix
    tls:
    - hosts:
      - argocd.argoproj.io
```

## [Istio](https://www.istio.io)

You can put Argo CD behind Istio using the following configurations. Here we will achieve both serving Argo CD behind Istio and using a subpath on Istio.

First we need to make sure that we can run Argo CD with a subpath (i.e. `/argocd`).
For this we have used install.yaml from the Argo CD project as-is:

```bash
curl -kLs -o install.yaml https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

Save the following file as kustomization.yml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./install.yaml

patches:
- path: ./patch.yml
```

And the following lines as patch.yml:

```yaml
# Use --insecure so Ingress can send traffic with HTTP
# --basehref /argocd is the subpath like https://IP/argocd
# env was added because of https://github.com/argoproj/argo-cd/issues/3572 error
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
spec:
  template:
    spec:
      containers:
      - args:
        - /usr/local/bin/argocd-server
        - --staticassets
        - /shared/app
        - --redis
        - argocd-redis:6379
        - --insecure
        - --basehref
        - /argocd
        - --rootpath
        - /argocd
        name: argocd-server
        env:
        - name: ARGOCD_MAX_CONCURRENT_LOGIN_REQUESTS_COUNT
          value: "0"
```

After that, install Argo CD (only the three YAML files defined above should be in the current directory):

```bash
kubectl apply -k ./ -n argocd --wait=true
```

Be sure you create the secret for Istio (in our case the secret name is `argocd-server-tls` in the `argocd` namespace).
After that we create the Istio resources:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: argocd-gateway
  namespace: argocd
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      credentialName: argocd-server-tls
      maxProtocolVersion: TLSV1_3
      minProtocolVersion: TLSV1_2
      mode: SIMPLE
      cipherSuites:
        - ECDHE-ECDSA-AES128-GCM-SHA256
        - ECDHE-RSA-AES128-GCM-SHA256
        - ECDHE-ECDSA-AES128-SHA
        - AES128-GCM-SHA256
        - AES128-SHA
        - ECDHE-ECDSA-AES256-GCM-SHA384
        - ECDHE-RSA-AES256-GCM-SHA384
        - ECDHE-ECDSA-AES256-SHA
        - AES256-GCM-SHA384
        - AES256-SHA
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: argocd-virtualservice
  namespace: argocd
spec:
  hosts:
  - "*"
  gateways:
  - argocd-gateway
  http:
  - match:
    - uri:
        prefix: /argocd
    route:
    - destination:
        host: argocd-server
        port:
          number: 80
```

And now we can browse to `http://<host>/argocd` (it will be redirected to `https://<host>/argocd`).

## Google Cloud load balancers with Kubernetes Ingress

You can make use of the integration of GKE with Google Cloud to deploy Load Balancers using just Kubernetes objects.

For this we will need these five objects:

- A Service
- A BackendConfig
- A FrontendConfig
- A secret with your SSL certificate
- An Ingress for GKE

If you need details on all the options available for these Google integrations, you can check the [Google docs on configuring Ingress features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features).

### Disable internal TLS

First, to avoid
internal redirection loops from HTTP to HTTPS, the API server should be run with TLS disabled.

Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

### Creating a service

Now you need an externally accessible service. This is practically the same as the internal service Argo CD has, but with Google Cloud annotations. Note that this service is annotated to use a [Network Endpoint Group](https://cloud.google.com/load-balancing/docs/negs) (NEG) to allow your load balancer to send traffic directly to your pods without using kube-proxy, so remove the `neg` annotation if that's not what you want.

The service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"http":"argocd-backend-config"}}'
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
```

### Creating a BackendConfig

See that previous service referencing a backend config called `argocd-backend-config`?
So let's deploy it using this YAML:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: argocd-backend-config
  namespace: argocd
spec:
  healthCheck:
    checkIntervalSec: 30
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /healthz
    port: 8080
```

It uses the same health check as the pods.

### Creating a FrontendConfig

Now we can deploy a frontend config with an HTTP to HTTPS redirect:

```yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: argocd-frontend-config
  namespace: argocd
spec:
  redirectToHttps:
    enabled: true
```

---
!!! note

    The next two steps (the certificate secret and the Ingress) are described supposing that you manage the certificate yourself, and you have the certificate and key files for it. In the case that your certificate is Google-managed, fix the next two steps using the [guide to use a Google-managed SSL certificate](https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_google-managed_certificate).

---

### Creating a certificate secret

We now need to create a secret with the SSL certificate we want in our load balancer. It's as easy as executing this command in the path where you have your certificate keys stored:

```
kubectl -n argocd create secret tls secret-yourdomain-com \
  --cert cert-file.crt --key key-file.key
```

### Creating an Ingress

And finally, to top it all, our Ingress. Note the reference to our frontend config, the service, and the certificate secret.

---
!!! note

    For GKE clusters running versions earlier than `1.21.3-gke.1600`, [the only supported value for the pathType field](https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress#creating_an_ingress) is `ImplementationSpecific`, so you must check your GKE cluster's version.
    You need to use different YAML depending on the version.

---

If you use a version earlier than `1.21.3-gke.1600`, you should use the following Ingress resource:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
  tls:
    - secretName: secret-example-com
  rules:
    - host: argocd.example.com
      http:
        paths:
        - pathType: ImplementationSpecific
          path: "/*"   # "*" is needed. Without this, the UI Javascript and CSS will not load properly
          backend:
            service:
              name: argocd-server
              port:
                number: 80
```

If you use version `1.21.3-gke.1600` or later, you should use the following Ingress resource:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
  tls:
    - secretName: secret-example-com
  rules:
    - host: argocd.example.com
      http:
        paths:
        - pathType: Prefix
          path: "/"
          backend:
            service:
              name: argocd-server
              port:
                number: 80
```

As you may know already, it can take a few minutes to deploy the load balancer and for it to become ready to accept connections. Once it's ready, get the public IP address of your Load Balancer, go to your DNS server (Google or third party) and point your domain or subdomain (i.e.
argocd.example.com) to that IP address.

You can get that IP address by describing the Ingress object like this:

```
kubectl -n argocd describe ingresses argocd | grep Address
```

Once the DNS change is propagated, you're ready to use Argo CD with your Google Cloud Load Balancer.

## Authenticating through multiple layers of authenticating reverse proxies

Argo CD endpoints may be protected by one or more reverse proxy layers. In that case, you can provide additional headers through the `argocd` CLI `--header` parameter to authenticate through those layers.

```shell
$ argocd login <host>:<port> --header 'x-token1:foo' --header 'x-token2:bar' # can be repeated multiple times
$ argocd login <host>:<port> --header 'x-token1:foo,x-token2:bar' # headers can also be comma separated
```

## ArgoCD Server and UI Root Path (v1.5.3)

Argo CD server and UI can be configured to be available under a non-root path (e.g. `/argo-cd`).
To do this, add the `--rootpath` flag into the `argocd-server` deployment command:

```yaml
spec:
  template:
    spec:
      name: argocd-server
      containers:
      - command:
        - /argocd-server
        - --repo-server
        - argocd-repo-server:8081
        - --rootpath
        - /argo-cd
```

NOTE: The flag `--rootpath` changes both the API Server and UI base URL.

Example nginx.conf:

```
worker_processes 1;

events { worker_connections 1024; }

http {

    sendfile on;

    server {
        listen 443;

        location /argo-cd/ {
            proxy_pass         https://localhost:8080/argo-cd/;
            proxy_redirect     off;
            proxy_set_header   Host $host;
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Host $server_name;
            # buffering should be disabled for api/v1/stream/applications to support chunked response
proxy_buffering off;\n        }\n    }\n}\n```\nFlag ```--grpc-web-root-path ``` is used to provide a non-root path (e.g. \/argo-cd)\n\n```shell\n$ argocd login <host>:<port> --grpc-web-root-path \/argo-cd\n```\n\n## UI Base Path\n\nIf the Argo CD UI is available under a non-root path (e.g. `\/argo-cd` instead of `\/`) then the UI path should be configured in the API server.\nTo configure the UI path add the `--basehref` flag into the `argocd-server` deployment command:\n\n```yaml\nspec:\n  template:\n    spec:\n      name: argocd-server\n      containers:\n      - command:\n        - \/argocd-server\n        - --repo-server\n        - argocd-repo-server:8081\n        - --basehref\n        - \/argo-cd\n```\n\nNOTE: The flag `--basehref` only changes the UI base URL. The API server will keep using the `\/` path so you need to add a URL rewrite rule to the proxy config.\nExample nginx.conf with URL rewrite:\n\n```\nworker_processes 1;\n\nevents { worker_connections 1024; }\n\nhttp {\n\n    sendfile on;\n\n    server {\n        listen 443;\n\n        location \/argo-cd {\n            rewrite \/argo-cd\/(.*) \/$1  break;\n            proxy_pass         https:\/\/localhost:8080;\n            proxy_redirect     off;\n            proxy_set_header   Host $host;\n            proxy_set_header   X-Real-IP $remote_addr;\n            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;\n            proxy_set_header   X-Forwarded-Host $server_name;\n            # buffering should be disabled for api\/v1\/stream\/applications to support chunked response\n            proxy_buffering off;\n        }\n    }\n}\n```","site":"argocd","answers_cleaned":"  Ingress Configuration  Argo CD API server runs both a gRPC server  used by the CLI   as well as a HTTP HTTPS server  used by the UI   Both protocols are exposed by the argocd server service object on the following ports     443   gRPC HTTPS   80   HTTP  redirects to HTTPS   There are several ways how Ingress can be 
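The effect of the nginx `rewrite /argo-cd/(.*) /$1 break;` rule can be illustrated with a small sketch of the equivalent prefix-stripping logic (illustrative only; nginx performs this natively, and `strip_base_path` is a hypothetical helper, not part of Argo CD):

```python
import re

# Equivalent of the nginx rule: rewrite /argo-cd/(.*) /$1 break;
def strip_base_path(uri: str, base: str = "/argo-cd") -> str:
    """Strip the UI base path from a request URI before proxying
    to the API server, which still serves everything under /."""
    m = re.match(re.escape(base) + r"/(.*)", uri)
    return "/" + m.group(1) if m else uri

print(strip_base_path("/argo-cd/api/v1/applications"))  # /api/v1/applications
print(strip_base_path("/healthz"))                      # /healthz (unmatched URIs pass through)
```

This is why `--basehref` alone is not enough: the UI asks for `/argo-cd/...` URLs, but the API server only answers on `/`, so the proxy must strip the prefix on the way through.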
## [Ambassador](https://www.getambassador.io/)

The Ambassador Edge Stack can be used as a Kubernetes ingress controller with [automatic TLS termination](https://www.getambassador.io/docs/latest/topics/running/tls/#host) and routing capabilities for both the CLI and the UI.

The API server should be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md). Given the `argocd` CLI includes the port number in the request `host` header, 2 Mappings are required.

Note: Disabling TLS is not required if you are using grpc-web.

### Option 1: Mapping CRD for Host-based Routing

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server-ui
  namespace: argocd
spec:
  host: argocd.example.com
  prefix: /
  service: https://argocd-server:443
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server-cli
  namespace: argocd
spec:
  # NOTE: the port must be ignored if you have strip_matching_host_port enabled on envoy
  host: argocd.example.com:443
  prefix: /
  service: argocd-server:80
  regex_headers:
    Content-Type: "application/grpc"
  grpc: true
```

Login with the `argocd` CLI:

```shell
argocd login <host>
```

### Option 2: Mapping CRD for Path-based Routing

The API server must be configured to be available under a non-root path (e.g. `/argo-cd`). Edit the `argocd-server` deployment to add the `--rootpath=/argo-cd` flag to the argocd-server command.

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: argocd-server
  namespace: argocd
spec:
  prefix: /argo-cd
  rewrite: /argo-cd
  service: https://argocd-server:443
```

Example of `argocd-cmd-params-cm` ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
data:
  ## Server properties
  # Value for base href in index.html. Used if Argo CD is running behind reverse proxy under subpath different from / (default "/")
  server.basehref: "/argo-cd"
  # Used if Argo CD is running behind reverse proxy under subpath different from /
  server.rootpath: "/argo-cd"
```

Login with the `argocd` CLI using the extra `--grpc-web-root-path` flag for non-root paths:

```shell
argocd login <host>:<port> --grpc-web-root-path /argo-cd
```

## [Contour](https://projectcontour.io/)

The Contour ingress controller can terminate TLS ingress traffic at the edge.

The Argo CD API server should be run with TLS disabled. Edit the `argocd-server` Deployment to add the `--insecure` flag to the argocd-server container command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

It is also possible to provide an internal-only ingress path and an external-only ingress path by deploying two instances of Contour: one behind a private-subnet LoadBalancer service and one behind a public-subnet LoadBalancer service. The private Contour deployment will pick up Ingresses annotated with `kubernetes.io/ingress.class: contour-internal` and the public Contour deployment will pick up Ingresses annotated with `kubernetes.io/ingress.class: contour-external`.

This provides the opportunity to deploy the Argo CD UI privately, but still allow for SSO callbacks to succeed.

### Private Argo CD UI with Multiple Ingress Objects and BYO Certificate

Since Contour Ingress supports only a single protocol per Ingress object, define three Ingress objects: one for private HTTP/HTTPS, one for private gRPC, and one for public HTTPS SSO callbacks.

Internal HTTP/HTTPS Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http
  annotations:
    kubernetes.io/ingress.class: contour-internal
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: internal.path.to.argocd.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
  tls:
  - hosts:
    - internal.path.to.argocd.io
    secretName: your-certificate-name
```

Internal gRPC Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-grpc
  annotations:
    kubernetes.io/ingress.class: contour-internal
spec:
  rules:
  - host: grpc-internal.path.to.argocd.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - grpc-internal.path.to.argocd.io
    secretName: your-certificate-name
```

External HTTPS SSO Callback Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-external-callback-http
  annotations:
    kubernetes.io/ingress.class: contour-external
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: external.path.to.argocd.io
    http:
      paths:
      - path: /api/dex/callback
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
  tls:
  - hosts:
    - external.path.to.argocd.io
    secretName: your-certificate-name
```

The argocd-server Service needs to be annotated with `projectcontour.io/upstream-protocol.h2c: "https,443"` to wire up the gRPC protocol proxying.

The API server should then be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).
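The Ambassador and Contour configurations above both rely on the same trick: CLI (gRPC) traffic is distinguished from browser traffic by its `Content-Type` header. A minimal Python sketch of that routing decision (the function name and backend labels are illustrative, not part of any controller's API):

```python
def select_backend(headers: dict) -> str:
    """Mimic the ingress rules above: gRPC requests carry a
    Content-Type header beginning with application/grpc."""
    content_type = headers.get("Content-Type", "")
    if content_type.startswith("application/grpc"):
        return "argocd-server:443 (h2c/gRPC backend)"
    return "argocd-server:80 (HTTP backend)"

# The argocd CLI sends gRPC; the UI sends plain HTTP(S):
print(select_backend({"Content-Type": "application/grpc"}))
print(select_backend({"Content-Type": "text/html"}))
```

This is why a single backend-protocol annotation is not enough on most controllers: the same hostname and port must fan out to two different upstream protocols based on a request header.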
### Contour httpproxy CRD

Using a Contour httpproxy CRD allows you to use the same hostname for the gRPC and REST API.

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: argocd-server
  namespace: argocd
spec:
  ingressClassName: contour
  virtualhost:
    fqdn: path.to.argocd.io
    tls:
      secretName: wildcard-tls
  routes:
    - conditions:
        - prefix: /
        - header:
            name: Content-Type
            contains: application/grpc
      services:
        - name: argocd-server
          port: 80
          protocol: h2c # allows for unencrypted http2 connections
      timeoutPolicy:
        response: 1h
        idle: 600s
        idleConnection: 600s
    - conditions:
        - prefix: /
      services:
        - name: argocd-server
          port: 80
```

## [kubernetes/ingress-nginx](https://github.com/kubernetes/ingress-nginx)

### Option 1: SSL-Passthrough

Argo CD serves multiple protocols (gRPC/HTTPS) on the same port (443). This provides a challenge when attempting to define a single nginx ingress object and rule for the argocd-service, since the [`nginx.ingress.kubernetes.io/backend-protocol`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol) annotation accepts only a single value for the backend protocol (e.g. HTTP, HTTPS, GRPC, GRPCS).

In order to expose the Argo CD API server with a single ingress rule and hostname, the [`nginx.ingress.kubernetes.io/ssl-passthrough`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough) annotation must be used to passthrough TLS connections and terminate TLS at the Argo CD API server.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
```

The above rule terminates TLS at the Argo CD API server, which detects the protocol being used and responds appropriately. Note that the `nginx.ingress.kubernetes.io/ssl-passthrough` annotation requires that the `--enable-ssl-passthrough` flag be added to the command line arguments to `nginx-ingress-controller`.

#### SSL-Passthrough with cert-manager and Let's Encrypt

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # If you encounter a redirect loop or are getting a 307 response code
    # then you need to force the nginx ingress to connect to the backend using HTTPS.
    #
    # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
  tls:
  - hosts:
    - argocd.example.com
    secretName: argocd-server-tls # as expected by argocd-server
```

### Option 2: SSL Termination at Ingress Controller

An alternative approach is to perform the SSL termination at the Ingress. Since an `ingress-nginx` Ingress supports only a single protocol per Ingress object, two Ingress objects need to be defined using the `nginx.ingress.kubernetes.io/backend-protocol` annotation: one for HTTP/HTTPS and the other for gRPC.

Each ingress will be for a different domain (`argocd.example.com` and `grpc.argocd.example.com`). This requires that the Ingress resources use different TLS `secretName`s to avoid unexpected behavior.

HTTP/HTTPS Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: http
    host: argocd.example.com
  tls:
  - hosts:
    - argocd.example.com
    secretName: argocd-ingress-http
```

gRPC Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-grpc-ingress
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              name: https
    host: grpc.argocd.example.com
  tls:
  - hosts:
    - grpc.argocd.example.com
    secretName: argocd-ingress-grpc
```

The API server should then be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

The obvious disadvantage to this approach is that this technique requires two separate hostnames for the API server: one for gRPC and the other for HTTP/HTTPS. However, it allows TLS termination to happen at the ingress controller.

## [Traefik (v3.0)](https://docs.traefik.io/)

Traefik can be used as an edge router and provide [TLS](https://docs.traefik.io/user-guides/grpc/) termination within the same deployment.

It currently has an advantage over NGINX in that it can terminate both TCP and HTTP connections on the same port, meaning you do not require multiple hosts or paths.

The API server should be run with TLS disabled. Edit the `argocd-server` deployment to add the `--insecure` flag to the argocd-server command, or set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

### IngressRoute CRD

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.example.com`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.example.com`) && Header(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    certResolver: default
```

## AWS Application Load Balancers (ALBs) And Classic ELB (HTTP Mode)

AWS ALBs can be used as an L7 Load Balancer for both UI and gRPC traffic, whereas Classic ELBs and NLBs can be used as L4 Load Balancers for both.

When using an ALB, you'll want to create a second service for argocd-server. This is necessary because we need to tell the ALB to send the gRPC traffic to a different target group than the UI traffic, since the backend protocol is HTTP2 instead of HTTP1.

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: GRPC # This tells AWS to send traffic from the ALB using gRPC. Plain HTTP2 can be used, but the health checks won't be available because argo currently downgrades non-grpc calls to HTTP1
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
```

Once we create this service, we can configure the Ingress to conditionally route all `application/grpc` traffic to the new HTTP2 backend.
This is done using the `alb.ingress.kubernetes.io/conditions` annotation, as seen below. Note: The value after the `.` in the condition annotation _must_ be the same name as the service that you want traffic to route to, and will be applied on any path with a matching serviceName.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
  name: argocd
  namespace: argocd
spec:
  rules:
  - host: argocd.argoproj.io
    http:
      paths:
      - path: /
        backend:
          service:
            name: argogrpc # The grpc service must be placed before the argocd-server for the listening rules to be created in the correct order
            port:
              number: 443
        pathType: Prefix
      - path: /
        backend:
          service:
            name: argocd-server
            port:
              number: 443
        pathType: Prefix
  tls:
  - hosts:
    - argocd.argoproj.io
```

## [Istio](https://www.istio.io)

You can put Argo CD behind Istio using the following configurations. Here we will achieve both serving Argo CD behind Istio and using a subpath on Istio.

First we need to make sure that we can run Argo CD with a subpath (i.e. /argocd). For this we have used the install.yaml from the argocd project as is:

```bash
curl -kLs -o install.yaml https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

Save the following file as kustomization.yml:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./install.yaml
patches:
- path: ./patch.yml
```

And the following lines as patch.yml:

```yaml
# Use --insecure so Ingress can send traffic with HTTP
# --basehref /argocd is the subpath like https://IP/argocd
# env was added because of https://github.com/argoproj/argo-cd/issues/3572 error
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
spec:
  template:
    spec:
      containers:
      - args:
        - /usr/local/bin/argocd-server
        - --staticassets
        - /shared/app
        - --redis
        - argocd-redis:6379
        - --insecure
        - --basehref
        - /argocd
        - --rootpath
        - /argocd
        name: argocd-server
        env:
        - name: ARGOCD_MAX_CONCURRENT_LOGIN_REQUESTS_COUNT
          value: "0"
```

After that, install Argo CD (there should be only the 3 yml files defined above in the current directory):

```bash
kubectl apply -k ./ -n argocd --wait=true
```

Be sure you create the secret for Istio (in our case the secretname is `argocd-server-tls` in the argocd Namespace). After that we create the Istio resources:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: argocd-gateway
  namespace: argocd
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      credentialName: argocd-server-tls
      maxProtocolVersion: TLSV1_3
      minProtocolVersion: TLSV1_2
      mode: SIMPLE
      cipherSuites:
        - "ECDHE-ECDSA-AES128-GCM-SHA256"
        - "ECDHE-RSA-AES128-GCM-SHA256"
        - "ECDHE-ECDSA-AES128-SHA"
        - "AES128-GCM-SHA256"
        - "AES128-SHA"
        - "ECDHE-ECDSA-AES256-GCM-SHA384"
        - "ECDHE-RSA-AES256-GCM-SHA384"
        - "ECDHE-ECDSA-AES256-SHA"
        - "AES256-GCM-SHA384"
        - "AES256-SHA"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: argocd-virtualservice
  namespace: argocd
spec:
  hosts:
  - "*"
  gateways:
  - argocd-gateway
  http:
  - match:
    - uri:
        prefix: /argocd
    route:
    - destination:
        host: argocd-server
        port:
          number: 80
```

And now we can browse http://IP/argocd (it will be rewritten to https://IP/argocd).

## Google Cloud load balancers with Kubernetes Ingress

You can make use of the integration of GKE with Google Cloud to deploy Load Balancers using just Kubernetes objects.

For this we will need these five objects:

- A Service
- A BackendConfig
- A FrontendConfig
- A secret with your SSL certificate
- An Ingress for GKE

If you need detail for all the options available for these Google integrations, you can check the [Google docs on configuring Ingress features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features).

### Disable internal TLS

First, to avoid internal redirection loops from HTTP to HTTPS, the API server should be run with TLS disabled.

Edit the `--insecure` flag in the `argocd-server` command of the argocd-server deployment, or simply set `server.insecure: "true"` in the `argocd-cmd-params-cm` ConfigMap [as described here](server-commands/additional-configuration-method.md).

### Creating a service

Now you need an externally accessible service. This is practically the same as the internal service Argo CD has, but with Google Cloud annotations. Note that this service is annotated to use a [Network Endpoint Group](https://cloud.google.com/load-balancing/docs/negs) (NEG) to allow your load balancer to send traffic directly to your pods without using kube-proxy, so remove the `neg` annotation if that's not what you want.

The service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"http":"argocd-backend-config"}}'
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
```
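The `cloud.google.com/backend-config` annotation value is a small JSON document mapping Service port names to BackendConfig resource names; GKE's ingress controller parses it to decide which BackendConfig applies to which port. A quick, purely illustrative sketch of that structure:

```python
import json

# Annotation value from the Service manifest above
annotation = '{"ports": {"http": "argocd-backend-config"}}'

# The controller reads this mapping to attach the named BackendConfig
# to the Service port called "http"
ports_to_configs = json.loads(annotation)["ports"]
print(ports_to_configs["http"])  # argocd-backend-config
```

Keeping the key (`http`) identical to the Service port name is what ties the two objects together.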
### Creating a BackendConfig

See that previous service referencing a backend config called `argocd-backend-config`? So let's deploy it using this yaml:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: argocd-backend-config
  namespace: argocd
spec:
  healthCheck:
    checkIntervalSec: 30
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /healthz
    port: 8080
```

It uses the same health check as the pods.

### Creating a FrontendConfig

Now we can deploy a frontend config with an HTTP-to-HTTPS redirect:

```yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: argocd-frontend-config
  namespace: argocd
spec:
  redirectToHttps:
    enabled: true
```

!!! note
    The next two steps (the certificate secret and the Ingress) are described supposing that you manage the certificate yourself, and you have the certificate and key files for it. In the case that your certificate is Google-managed, fix the next two steps using the [guide to use a Google-managed SSL certificate](https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_google-managed_certificate).

### Creating a certificate secret

We now need to create a secret with the SSL certificate we want in our load balancer. It's as easy as executing this command in the path where you have your certificate keys stored:

```
kubectl -n argocd create secret tls secret-yourdomain-com \
  --cert cert-file.crt --key key-file.key
```

### Creating an Ingress

And finally, to top it all, our Ingress. Note the reference to our frontend config, the service, and to the certificate secret.

!!! note
    For GKE clusters running versions earlier than `1.21.3-gke.1600`, [the only supported value for the pathType field](https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress#creating_an_ingress) is `ImplementationSpecific`. So you must check your GKE cluster's version: you need to use different YAML depending on the version.

If you use a version earlier than `1.21.3-gke.1600`, you should use the following Ingress resource:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
  tls:
  - secretName: secret-example-com
  rules:
  - host: argocd.example.com
    http:
      paths:
      - pathType: ImplementationSpecific
        path: "/*" # "*" is needed. Without this, the UI Javascript and CSS will not load properly
        backend:
          service:
            name: argocd-server
            port:
              number: 80
```

If you use the version `1.21.3-gke.1600` or later, you should use the following Ingress resource:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
  tls:
  - secretName: secret-example-com
  rules:
  - host: argocd.example.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: argocd-server
            port:
              number: 80
```

As you may know already, it can take some minutes to deploy the load balancer and for it to become ready to accept connections. Once it's ready, get the public IP address for your Load Balancer, go to your DNS server (Google or third party) and point your domain or subdomain (i.e. argocd.example.com) to that IP address.

You can get that IP address by describing the Ingress object like this:

```
kubectl -n argocd describe ingresses argocd | grep Address
```

Once the DNS change is propagated, you're ready to use Argo with your Google Cloud Load Balancer.

## Authenticating through multiple layers of authenticating reverse proxies

Argo CD endpoints may be protected by one or more reverse proxy layers. In that case, you can provide additional headers through the `argocd` CLI `--header` parameter to authenticate through those layers.

```shell
argocd login <host>:<port> --header 'x-token1:foo' --header 'x-token2:bar' # can be repeated multiple times
argocd login <host>:<port> --header 'x-token1:foo,x-token2:bar' # headers can also be comma separated
```

## ArgoCD Server and UI Root Path (v1.5.3)

Argo CD server and UI can be configured to be available under a non-root path (e.g. `/argo-cd`). To do this, add the `--rootpath` flag into the `argocd-server` deployment command:

```yaml
spec:
  template:
    spec:
      name: argocd-server
      containers:
      - command:
        - argocd-server
        - --repo-server
        - argocd-repo-server:8081
        - --rootpath
        - /argo-cd
```

NOTE: The flag `--rootpath` changes both API Server and UI base URL.

Example nginx.conf:

```
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen 443;

        location /argo-cd {
            proxy_pass         https://localhost:8080/argo-cd;
            proxy_redirect     off;
            proxy_set_header   Host $host;
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Host $server_name;
            # buffering should be disabled for api/v1/stream/applications to support chunked response
            proxy_buffering off;
        }
    }
}
```

Flag `--grpc-web-root-path` is used to provide a non-root path (e.g. /argo-cd):

```shell
argocd login <host>:<port> --grpc-web-root-path /argo-cd
```

## UI Base Path

If the Argo CD UI is available under a non-root path (e.g. `/argo-cd` instead of `/`), then the UI path should be configured in the API server. To configure the UI path, add the `--basehref` flag into the `argocd-server` deployment command:

```yaml
spec:
  template:
    spec:
      name: argocd-server
      containers:
      - command:
        - argocd-server
        - --repo-server
        - argocd-repo-server:8081
        - --basehref
        - /argo-cd
```

NOTE: The flag `--basehref` only changes the UI base URL. The API server will keep using the `/` path, so you need to add a URL rewrite rule to the proxy config.

Example nginx.conf with URL rewrite:

```
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    server {
        listen 443;

        location /argo-cd {
            rewrite /argo-cd/(.*) /$1 break;
            proxy_pass         https://localhost:8080;
            proxy_redirect     off;
            proxy_set_header   Host $host;
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header   X-Forwarded-Host $server_name;
            # buffering should be disabled for api/v1/stream/applications to support chunked response
            proxy_buffering off;
        }
    }
}
```
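The practical difference between `--rootpath` and `--basehref` is whether the proxy strips the subpath before forwarding. The nginx `rewrite` rule used in the `--basehref` setup can be mimicked in a few lines of Python (a hypothetical helper, for illustration only; nginx's own regex is unanchored, while this sketch anchors the prefix):

```python
import re

def rewrite(path: str) -> str:
    """Mimic nginx's `rewrite /argo-cd/(.*) /$1 break;`:
    strip the /argo-cd prefix before proxying to the API server."""
    return re.sub(r"^/argo-cd/(.*)", r"/\1", path)

print(rewrite("/argo-cd/api/v1/applications"))  # /api/v1/applications
print(rewrite("/healthz"))                      # /healthz (unchanged)
```

With `--rootpath`, no such rewrite is needed because the API server itself expects the subpath in incoming URLs.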
{"questions":"argocd Observed generation is equal to desired generation specific types of Kubernetes resources surfaced to the overall Application health status as a whole The following checks are made for Deployment ReplicaSet StatefulSet DaemonSet Resource Health Argo CD provides built in health assessment for several standard Kubernetes types which is then Overview Number of updated replicas equals the number of desired replicas","answers":"# Resource Health\n\n## Overview\nArgo CD provides built-in health assessment for several standard Kubernetes types, which is then\nsurfaced to the overall Application health status as a whole. The following checks are made for\nspecific types of Kubernetes resources:\n\n### Deployment, ReplicaSet, StatefulSet, DaemonSet\n* Observed generation is equal to desired generation.\n* Number of **updated** replicas equals the number of desired replicas.\n\n### Service\n* If service type is of type `LoadBalancer`, the `status.loadBalancer.ingress` list is non-empty,\nwith at least one value for `hostname` or `IP`.\n\n### Ingress\n* The `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`.\n\n### Job\n* If job `.spec.suspended` is set to 'true', then the job and app health will be marked as suspended.\n### PersistentVolumeClaim\n* The `status.phase` is `Bound`\n\n### Argocd App\n\nThe health assessment of `argoproj.io\/Application` CRD has been removed in argocd 1.8 (see [#3781](https:\/\/github.com\/argoproj\/argo-cd\/issues\/3781) for more information).\nYou might need to restore it if you are using app-of-apps pattern and orchestrating synchronization using sync waves. 
Add the following resource customization in\n`argocd-cm` ConfigMap:\n\n```yaml\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/name: argocd-cm\n    app.kubernetes.io\/part-of: argocd\ndata:\n  resource.customizations.health.argoproj.io_Application: |\n    hs = {}\n    hs.status = \"Progressing\"\n    hs.message = \"\"\n    if obj.status ~= nil then\n      if obj.status.health ~= nil then\n        hs.status = obj.status.health.status\n        if obj.status.health.message ~= nil then\n          hs.message = obj.status.health.message\n        end\n      end\n    end\n    return hs\n```\n\n## Custom Health Checks\n\nArgo CD supports custom health checks written in [Lua](https:\/\/www.lua.org\/). This is useful if you:\n\n* Are affected by known issues where your `Ingress` or `StatefulSet` resources are stuck in `Progressing` state because of bug in your resource controller.\n* Have a custom resource for which Argo CD does not have a built-in health check.\n\nThere are two ways to configure a custom health check. The next two sections describe those ways.\n\n### Way 1. Define a Custom Health Check in `argocd-cm` ConfigMap\n\nCustom health checks can be defined in\n```yaml\n  resource.customizations.health.<group>_<kind>: |\n```\nfield of `argocd-cm`. 
If you are using argocd-operator, this is overridden by [the argocd-operator resourceCustomizations](https:\/\/argocd-operator.readthedocs.io\/en\/latest\/reference\/argocd\/#resource-customizations).\n\nThe following example demonstrates a health check for `cert-manager.io\/Certificate`.\n\n```yaml\ndata:\n  resource.customizations.health.cert-manager.io_Certificate: |\n    hs = {}\n    if obj.status ~= nil then\n      if obj.status.conditions ~= nil then\n        for i, condition in ipairs(obj.status.conditions) do\n          if condition.type == \"Ready\" and condition.status == \"False\" then\n            hs.status = \"Degraded\"\n            hs.message = condition.message\n            return hs\n          end\n          if condition.type == \"Ready\" and condition.status == \"True\" then\n            hs.status = \"Healthy\"\n            hs.message = condition.message\n            return hs\n          end\n        end\n      end\n    end\n\n    hs.status = \"Progressing\"\n    hs.message = \"Waiting for certificate\"\n    return hs\n```\n\nIn order to prevent duplication of custom health checks for potentially multiple resources, it is also possible to\nspecify a wildcard in the resource kind, and anywhere in the resource group, like this:\n\n```yaml\n  resource.customizations: |\n    ec2.aws.crossplane.io\/*:\n      health.lua: |\n        ...\n```\n\n```yaml\n  # If a key _begins_ with a wildcard, please ensure that the GVK key is quoted.\n  resource.customizations: |\n    \"*.aws.crossplane.io\/*\":\n      health.lua: |\n        ...\n```\n\n!!!important\n    Please, note that wildcards are only supported when using the `resource.customizations` key, the `resource.customizations.health.<group>_<kind>`\nstyle keys do not work since wildcards (`*`) are not supported in Kubernetes configmap keys.\n\nThe `obj` is a global variable which contains the resource. 
The script must return an object with status and optional message field.\nThe custom health check might return one of the following health statuses:\n\n  * `Healthy` - the resource is healthy\n  * `Progressing` - the resource is not healthy yet but still making progress and might be healthy soon\n  * `Degraded` - the resource is degraded\n  * `Suspended` - the resource is suspended and waiting for some external event to resume (e.g. suspended CronJob or paused Deployment)\n\nBy default, health typically returns a `Progressing` status.\n\nNOTE: As a security measure, access to the standard Lua libraries will be disabled by default. Admins can control access by\nsetting `resource.customizations.useOpenLibs.<group>_<kind>`. In the following example, standard libraries are enabled for health check of `cert-manager.io\/Certificate`.\n\n```yaml\ndata:\n  resource.customizations.useOpenLibs.cert-manager.io_Certificate: true\n  resource.customizations.health.cert-manager.io_Certificate: |\n    # Lua standard libraries are enabled for this script\n```\n\n### Way 2. Contribute a Custom Health Check\n\nA health check can be bundled into Argo CD. Custom health check scripts are located in the `resource_customizations` directory of [https:\/\/github.com\/argoproj\/argo-cd](https:\/\/github.com\/argoproj\/argo-cd). This must have the following directory structure:\n\n```\nargo-cd\n|-- resource_customizations\n|    |-- your.crd.group.io               # CRD group\n|    |    |-- MyKind                     # Resource kind\n|    |    |    |-- health.lua            # Health check\n|    |    |    |-- health_test.yaml      # Test inputs and expected results\n|    |    |    +-- testdata              # Directory with test resource YAML definitions\n```\n\nEach health check must have tests defined in `health_test.yaml` file. 
The `health_test.yaml` is a YAML file with the following structure:\n\n```yaml\ntests:\n- healthStatus:\n    status: ExpectedStatus\n    message: Expected message\n  inputPath: testdata\/test-resource-definition.yaml\n```\n\nTo test the implemented custom health checks, run `go test -v .\/util\/lua\/`.\n\nThe [PR#1139](https:\/\/github.com\/argoproj\/argo-cd\/pull\/1139) is an example of Cert Manager CRDs custom health check.\n\nPlease note that bundled health checks with wildcards are not supported.\n\n## Overriding Go-Based Health Checks\n\nHealth checks for some resources were [hardcoded as Go code](https:\/\/github.com\/argoproj\/gitops-engine\/tree\/master\/pkg\/health) \nbecause Lua support was introduced later. Also, the logic of health checks for some resources were too complex, so it \nwas easier to implement it in Go.\n\nIt is possible to override health checks for built-in resource. Argo will prefer the configured health check over the\nGo-based built-in check.\n\nThe following resources have Go-based health checks:\n\n* PersistentVolumeClaim\n* Pod\n* Service\n* apiregistration.k8s.io\/APIService\n* apps\/DaemonSet\n* apps\/Deployment\n* apps\/ReplicaSet\n* apps\/StatefulSet\n* argoproj.io\/Workflow\n* autoscaling\/HorizontalPodAutoscaler\n* batch\/Job\n* extensions\/Ingress\n* networking.k8s.io\/Ingress\n\n## Health Checks\n\nAn Argo CD App's health is inferred from the health of its immediate child resources (the resources represented in \nsource control). The App health will be the worst health of its immediate child sources. The priority of most to least \nhealthy statuses is: `Healthy`, `Suspended`, `Progressing`, `Missing`, `Degraded`, `Unknown`. So, for example, if an App\nhas a `Missing` resource and a `Degraded` resource, the App's health will be `Missing`.\n\nBut the health of a resource is not inherited from child resources - it is calculated using only information about the \nresource itself. 
A resource's status field may or may not contain information about the health of a child resource, and \nthe resource's health check may or may not take that information into account.\n\nThe lack of inheritance is by design. A resource's health can't be inferred from its children because the health of a\nchild resource may not be relevant to the health of the parent resource. For example, a Deployment's health is not\nnecessarily affected by the health of its Pods. \n\n```\nApp (healthy)\n\u2514\u2500\u2500 Deployment (healthy)\n    \u2514\u2500\u2500 ReplicaSet (healthy)\n        \u2514\u2500\u2500 Pod (healthy)\n    \u2514\u2500\u2500 ReplicaSet (unhealthy)\n        \u2514\u2500\u2500 Pod (unhealthy)\n```\n\nIf you want the health of a child resource to affect the health of its parent, you need to configure the parent's health\ncheck to take the child's health into account. Since only the parent resource's state is available to the health check,\nthe parent resource's controller needs to make the child resource's health available in the parent resource's status \nfield.\n\n```\nApp (healthy)\n\u2514\u2500\u2500 CustomResource (healthy) <- This resource's health check needs to be fixed to mark the App as unhealthy\n    \u2514\u2500\u2500 CustomChildResource (unhealthy)\n```\n## Ignoring Child Resource Health Check in Applications\n\nTo ignore the health check of an immediate child resource within an Application, set the annotation `argocd.argoproj.io\/ignore-healthcheck` to `true`. 
For example:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  annotations:\n    argocd.argoproj.io\/ignore-healthcheck: \"true\"\n```\n\nBy doing this, the health status of the Deployment will not affect the health of its parent Application.","site":"argocd"}
{"questions":"argocd compliance https www pcisecuritystandards org requirements The following are some security topics and implementation details of Argo CD JWTs Username password bearer tokens are not used for authentication The JWT is obtained managed Authentication Argo CD has undergone rigorous internal security reviews and penetration testing to satisfy PCI Security Authentication to Argo CD API server is performed exclusively using","answers":"# Security\n\nArgo CD has undergone rigorous internal security reviews and penetration testing to satisfy [PCI\ncompliance](https:\/\/www.pcisecuritystandards.org) requirements. The following are some security\ntopics and implementation details of Argo CD.\n\n## Authentication\n\nAuthentication to Argo CD API server is performed exclusively using [JSON Web Tokens](https:\/\/jwt.io)\n(JWTs). Username\/password bearer tokens are not used for authentication. The JWT is obtained\/managed\nin one of the following ways:\n\n1. For the local `admin` user, a username\/password is exchanged for a JWT using the `\/api\/v1\/session`\n   endpoint. This token is signed & issued by the Argo CD API server itself and it expires after 24\u00a0hours \n   (this token used not to expire, see [CVE-2021-26921](https:\/\/github.com\/argoproj\/argo-cd\/security\/advisories\/GHSA-9h6w-j7w4-jr52)).\n   When the admin password is updated, all existing admin JWT tokens are immediately revoked.\n   The password is stored as a bcrypt hash in the [`argocd-secret`](https:\/\/github.com\/argoproj\/argo-cd\/blob\/master\/manifests\/base\/config\/argocd-secret.yaml) Secret.\n\n2. For Single Sign-On users, the user completes an OAuth2 login flow to the configured OIDC identity\n   provider (either delegated through the bundled Dex provider, or directly to a self-managed OIDC\n   provider). This JWT is signed & issued by the IDP, and expiration and revocation is handled by\n   the provider. Dex tokens expire after 24 hours.\n\n3. 
Automation tokens are generated for a project using the `\/api\/v1\/projects\/{project}\/roles\/{role}\/token`\n   endpoint, and are signed & issued by Argo CD. These tokens are limited in scope and privilege,\n   and can only be used to manage application resources in the project to which they belong. Project\n   JWTs have a configurable expiration and can be immediately revoked by deleting the JWT reference\n   ID from the project role.\n\n## Authorization\n\nAuthorization is performed by iterating over the group memberships in a user's JWT groups claim,\nand comparing each group against the roles\/rules in the [RBAC](.\/rbac.md) policy. Any matched rule\npermits access to the API request.\n\n## TLS\n\nAll network communication is performed over TLS including service-to-service communication between\nthe three components (argocd-server, argocd-repo-server, argocd-application-controller). The Argo CD\nAPI server can enforce the use of TLS 1.2 using the flag: `--tlsminversion 1.2`.\nCommunication with Redis is performed over plain HTTP by default. TLS can be set up with command-line arguments.\n\n## Git & Helm Repositories\n\nGit and helm repositories are managed by a stand-alone service, called the repo-server. The\nrepo-server does not carry any Kubernetes privileges and does not store credentials to any services\n(including git). The repo-server is responsible for cloning repositories which have been permitted\nand trusted by Argo CD operators, and generating Kubernetes manifests at a given path in the\nrepository. For performance and bandwidth efficiency, the repo-server maintains local clones of\nthese repositories so that subsequent commits to the repository are efficiently downloaded.\n\nThere are security considerations when configuring git repositories that Argo CD is permitted to\ndeploy from. 
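Stepping back to the Authorization section above, the group-matching loop it describes can be sketched as follows (the role and policy shapes here are hypothetical simplifications; Argo CD's real RBAC policies are CSV rules described in the RBAC docs):

```python
# Hypothetical role -> (groups, allowed actions) mapping; real Argo CD
# policies are CSV rules evaluated by its RBAC engine.
POLICY = {
    "role:readonly": {"groups": {"devs", "viewers"}, "actions": {"applications/get"}},
    "role:admin": {"groups": {"platform-team"}, "actions": {"applications/get", "applications/sync"}},
}

def is_allowed(jwt_groups, action):
    """Permit the request if any group claim maps to a role allowing the action."""
    return any(
        action in role["actions"]
        for role in POLICY.values()
        if role["groups"] & set(jwt_groups)  # group membership matches the role
    )

print(is_allowed(["devs"], "applications/sync"))           # False
print(is_allowed(["platform-team"], "applications/sync"))  # True
```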
In short, gaining unauthorized write access to a git repository trusted by Argo CD\nwill have the serious security implications outlined below.\n\n### Unauthorized Deployments\n\nSince Argo CD deploys the Kubernetes resources defined in git, an attacker with access to a trusted\ngit repo would be able to affect the Kubernetes resources which are deployed. For example, an\nattacker could update a deployment manifest to deploy malicious container images to the environment,\nor delete resources in git causing them to be pruned in the live environment.\n\n### Tool command invocation\n\nIn addition to raw YAML, Argo CD natively supports two popular Kubernetes config management tools,\nhelm and kustomize. When rendering manifests, Argo CD executes these config management tools\n(i.e. `helm template`, `kustomize build`) to generate the manifests. It is possible that an attacker\nwith write access to a trusted git repository may construct malicious helm charts or kustomizations\nthat attempt to read files out-of-tree. This includes adjacent git repos, as well as files on the\nrepo-server itself. Whether or not this is a risk to your organization depends on whether the contents\nof the git repos are sensitive in nature. By default, the repo-server itself does not contain\nsensitive information, but might be configured with Config Management Plugins which do\n(e.g. decryption keys). If such plugins are used, extreme care must be taken to ensure the\nrepository contents can be trusted at all times.\n\nThe built-in config management tools can optionally be disabled individually.\nIf you know that your users will not need a certain config management tool, it's advisable\nto disable that tool.\nSee [Tool Detection](..\/user-guide\/tool_detection.md) for more information.\n\n### Remote bases and helm chart dependencies\n\nArgo CD's repository allow-list only restricts the initial repository which is cloned. 
However, both\nkustomize and helm contain features to reference and follow *additional* repositories\n(e.g. kustomize remote bases, helm chart dependencies) which might not be in the repository\nallow-list. Argo CD operators must understand that users with write access to trusted git\nrepositories could reference other remote git repositories containing Kubernetes resources not\neasily searchable or auditable in the configured git repositories.\n\n## Sensitive Information\n\n### Secrets\n\nArgo CD never returns sensitive data from its API, and redacts all sensitive data in API payloads\nand logs. This includes:\n\n* cluster credentials\n* Git credentials\n* OAuth2 client secrets\n* Kubernetes Secret values\n\n### External Cluster Credentials\n\nTo manage external clusters, Argo CD stores the credentials of the external cluster as a Kubernetes\nSecret in the argocd namespace. This secret contains the K8s API bearer token associated with the\n`argocd-manager` ServiceAccount created during `argocd cluster add`, along with connection options\nto that API server (TLS configuration\/certs, AWS role-arn, etc.).\nThe information is used to reconstruct a REST config and kubeconfig to the cluster used by Argo CD\nservices.\n\nTo rotate the bearer token used by Argo CD, the token can be deleted (e.g. using kubectl), which\ncauses Kubernetes to generate a new secret with a new bearer token. The new token can be supplied\nto Argo CD again by re-running `argocd cluster add`. Run the following commands against the *_managed_*\ncluster:\n\n```bash\n# run using a kubeconfig for the externally managed cluster\nkubectl delete secret argocd-manager-token-XXXXXX -n kube-system\nargocd cluster add CONTEXTNAME\n```\n\n!!! 
note\n    Kubernetes 1.24 [stopped automatically creating tokens for Service Accounts](https:\/\/github.com\/kubernetes\/kubernetes\/blob\/master\/CHANGELOG\/CHANGELOG-1.24.md#no-really-you-must-read-this-before-you-upgrade).\n    [Starting in Argo CD 2.4](https:\/\/github.com\/argoproj\/argo-cd\/pull\/9546), `argocd cluster add` creates a \n    ServiceAccount _and_ a non-expiring Service Account token Secret when adding 1.24 clusters. In the future, Argo CD \n    will [add support for the Kubernetes TokenRequest API](https:\/\/github.com\/argoproj\/argo-cd\/issues\/9610) to avoid \n    using long-lived tokens.\n\nTo revoke Argo CD's access to a managed cluster, delete the RBAC artifacts against the *_managed_*\ncluster, and remove the cluster entry from Argo CD:\n\n```bash\n# run using a kubeconfig for the externally managed cluster\nkubectl delete sa argocd-manager -n kube-system\nkubectl delete clusterrole argocd-manager-role\nkubectl delete clusterrolebinding argocd-manager-role-binding\nargocd cluster rm https:\/\/your-kubernetes-cluster-addr\n```\n<!-- markdownlint-disable MD027 -->\n> NOTE: for AWS EKS clusters, the [get-token](https:\/\/docs.aws.amazon.com\/cli\/latest\/reference\/eks\/get-token.html) command\n  is used to authenticate to the external cluster, which uses IAM roles in lieu of locally stored\n  tokens, so token rotation is not needed, and revocation is handled through IAM.\n<!-- markdownlint-enable MD027 -->\n\n## Cluster RBAC\n\nBy default, Argo CD uses a [cluster-admin level role](https:\/\/github.com\/argoproj\/argo-cd\/blob\/master\/manifests\/base\/application-controller-roles\/argocd-application-controller-role.yaml)\nin order to:\n\n1. watch & operate on cluster state\n2. deploy resources to the cluster\n\nAlthough Argo CD requires cluster-wide **_read_** privileges to resources in the managed cluster to\nfunction properly, it does not necessarily need full **_write_** privileges to the cluster. 
The\nClusterRole used by argocd-server and argocd-application-controller can be modified such\nthat write privileges are limited to only the namespaces and resources that you wish Argo CD to\nmanage.\n\nTo fine-tune privileges of externally managed clusters, edit the `argocd-manager-role` ClusterRole:\n\n```bash\n# run using a kubeconfig for the externally managed cluster\nkubectl edit clusterrole argocd-manager-role\n```\n\nTo fine-tune privileges which Argo CD has against its own cluster (i.e. `https:\/\/kubernetes.default.svc`),\nedit the following ClusterRoles in the cluster where Argo CD is running:\n\n```bash\n# run using a kubeconfig to the cluster Argo CD is running in\nkubectl edit clusterrole argocd-server\nkubectl edit clusterrole argocd-application-controller\n```\n\n!!! tip\n    If you want to deny Argo CD access to a kind of resource then add it as an [excluded resource](declarative-setup.md#resource-exclusion).\n\n## Auditing\n\nAs a GitOps deployment tool, the Git commit history provides a natural audit log of what changes\nwere made to application configuration, when they were made, and by whom. However, this audit log\nonly applies to what happened in Git and does not necessarily correlate one-to-one with events\nthat happen in a cluster. For example, User A could have made multiple commits to application\nmanifests, but User B could have simply synced those changes to the cluster sometime later.\n\nTo complement the Git revision history, Argo CD emits Kubernetes Events of application activity,\nindicating the responsible actor when applicable. 
For example:\n\n```bash\n$ kubectl get events\nLAST SEEN   FIRST SEEN   COUNT   NAME                         KIND          SUBOBJECT   TYPE      REASON               SOURCE                          MESSAGE\n1m          1m           1       guestbook.157f7c5edd33aeac   Application               Normal    ResourceCreated      argocd-server                   admin created application\n1m          1m           1       guestbook.157f7c5f0f747acf   Application               Normal    ResourceUpdated      argocd-application-controller   Updated sync status:  -> OutOfSync\n1m          1m           1       guestbook.157f7c5f0fbebbff   Application               Normal    ResourceUpdated      argocd-application-controller   Updated health status:  -> Missing\n1m          1m           1       guestbook.157f7c6069e14f4d   Application               Normal    OperationStarted     argocd-server                   admin initiated sync to HEAD (8a1cb4a02d3538e54907c827352f66f20c3d7b0d)\n1m          1m           1       guestbook.157f7c60a55a81a8   Application               Normal    OperationCompleted   argocd-application-controller   Sync operation to 8a1cb4a02d3538e54907c827352f66f20c3d7b0d succeeded\n1m          1m           1       guestbook.157f7c60af1ccae2   Application               Normal    ResourceUpdated      argocd-application-controller   Updated sync status: OutOfSync -> Synced\n1m          1m           1       guestbook.157f7c60af5bc4f0   Application               Normal    ResourceUpdated      argocd-application-controller   Updated health status: Missing -> Progressing\n1m          1m           1       guestbook.157f7c651990e848   Application               Normal    ResourceUpdated      argocd-application-controller   Updated health status: Progressing -> Healthy\n```\n\nThese events can then be persisted for longer periods of time using other tools such as\n[Event Exporter](https:\/\/github.com\/GoogleCloudPlatform\/k8s-stackdriver\/tree\/master\/event-exporter) 
or\n[Event Router](https:\/\/github.com\/heptiolabs\/eventrouter).\n\n## WebHook Payloads\n\nPayloads from webhook events are considered untrusted. Argo CD only examines the payload to infer\nthe involved applications of the webhook event (e.g. which repo was modified), then refreshes\nthe related application for reconciliation. This refresh is the same refresh which occurs regularly\nat three minute intervals, just fast-tracked by the webhook event.\n\n## Logging\n\n### Security field\n\nSecurity-related logs are tagged with a `security` field to make them easier to find, analyze, and report on.\n\n| Level | Friendly Level | Description                                                                                       | Example                                     |\n|-------|----------------|---------------------------------------------------------------------------------------------------|---------------------------------------------|\n| 1     | Low            | Unexceptional, non-malicious events                                                               | Successful access                           |\n| 2     | Medium         | Could indicate malicious events, but has a high likelihood of being user\/system error             | Access denied                               |\n| 3     | High           | Likely malicious events but one that had no side effects or was blocked                           | Out of bounds symlinks in repo              |\n| 4     | Critical       | Any malicious or exploitable event that had a side effect                                         | Secrets being left behind on the filesystem |\n| 5     | Emergency      | Unmistakably malicious events that should NEVER occur accidentally and indicates an active attack | Brute forcing of accounts                   |\n\nWhere applicable, a `CWE` field is also added specifying the [Common Weakness Enumeration](https:\/\/cwe.mitre.org\/index.html) number.\n\n!!! 
warning\n    Please be aware that not all security logs are comprehensively tagged yet and these examples are not necessarily implemented.\n\n### API Logs\n\nArgo CD logs payloads of most API requests except requests that are considered sensitive, such as\n`\/cluster.ClusterService\/Create`, `\/session.SessionService\/Create` etc. The full list of methods\ncan be found in [server\/server.go](https:\/\/github.com\/argoproj\/argo-cd\/blob\/abba8dddce8cd897ba23320e3715690f465b4a95\/server\/server.go#L516).\n\nArgo CD does not log IP addresses of clients requesting API endpoints, since the API server is typically behind a proxy. Instead, it is recommended\nto configure IP address logging in the proxy server that sits in front of the API server.\n\n## ApplicationSets\n\nArgo CD's ApplicationSets feature has its own [security considerations](.\/applicationset\/Security.md). Be aware of those\nissues before using ApplicationSets.\n\n## Limiting Directory App Memory Usage\n\n> >2.2.10, 2.1.16, >2.3.5\n\nDirectory-type Applications (those whose source is raw JSON or YAML files) can consume significant\n[repo-server](architecture.md#repository-server) memory, depending on the size and structure of the YAML files.\n\nTo avoid over-using memory in the repo-server (potentially causing a crash and denial of service), set the\n`reposerver.max.combined.directory.manifests.size` config option in [argocd-cmd-params-cm](argocd-cmd-params-cm.yaml).\n\nThis option limits the combined size of all JSON or YAML files in an individual app. Note that the in-memory\nrepresentation of a manifest may be as much as 300x the size of the manifest on disk. Also note that the limit is per\nApplication. If manifests are generated for multiple applications at once, memory usage will be higher.\n\n**Example:**\n\nSuppose your repo-server has a 10G memory limit, and you have ten Applications which use raw JSON or YAML files. 
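The sizing rule just described can be captured in a small helper (hypothetical, for illustration only; the 300x factor is the worst-case growth ratio from this section):

```python
# Hypothetical sizing helper for reposerver.max.combined.directory.manifests.size.
# Worst-case in-memory expansion of a manifest is ~300x its on-disk size.
GROWTH_FACTOR = 300

def max_combined_manifest_size(mem_limit_bytes, num_apps):
    """Max safe combined JSON/YAML size per Application, in bytes."""
    return mem_limit_bytes // (GROWTH_FACTOR * num_apps)

limit = max_combined_manifest_size(10 * 1024**3, 10)  # 10G limit, 10 Apps
print(limit // 1024**2, "MiB")  # roughly 3 MiB per app
```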
To\ncalculate the max safe combined file size per Application, divide 10G by 300 * 10 Apps (300 being the worst-case memory\ngrowth factor for the manifests).\n\n```\n10G \/ (300 * 10) = ~3M\n```\n\nSo a reasonably safe configuration for this setup would be a 3M limit per app.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cmd-params-cm\ndata:\n  reposerver.max.combined.directory.manifests.size: '3M'\n```\n\nThe 300x ratio assumes a maliciously-crafted manifest file. If you only want to protect against accidental excessive\nmemory use, it is probably safe to use a smaller ratio.\n\nKeep in mind that if a malicious user can create additional Applications, they can increase the total memory usage.\nGrant [App creation privileges](rbac.md) carefully.","site":"argocd","answers_cleaned":"
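The sizing arithmetic above can be sketched as a small helper. The function name and the byte-based units are illustrative only; the 10G limit, ten Applications, and 300x worst-case growth factor come from the example.

```python
def max_safe_manifest_size(repo_server_limit_bytes: int,
                           num_apps: int,
                           growth_factor: int = 300) -> int:
    """Worst-case-safe combined manifest size per Application, in bytes.

    Divides the repo server's memory limit by the number of Applications
    times the assumed in-memory growth factor of a manifest (300x in the
    documented worst case).
    """
    return repo_server_limit_bytes // (growth_factor * num_apps)

# 10G repo-server limit, 10 directory-type Applications -> roughly 3M per app,
# matching the 3M value used in the ConfigMap above.
limit = max_safe_manifest_size(10 * 1024**3, num_apps=10)
print(limit // 1024**2, "MiB")
```

Using a smaller ratio than 300 simply means passing a smaller `growth_factor`, which yields a larger (less conservative) per-app limit.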
{"questions":"argocd warning For ApplicationSets with a templated field If the field in your ApplicationSet is templated developers may be able to create Applications under Projects with excessive permissions Git generators are often used to make it easier for non admin developers to create Applications in the case of git generators PRs must require admin approval The Git generator contains two subtypes the Git directory generator and Git file generator Git generator does not support Signature Verification For ApplicationSets with a templated field Git Generator","answers":"# Git Generator\n\nThe Git generator contains two subtypes: the Git directory generator, and Git file generator.\n\n!!! warning\n    Git generators are often used to make it easier for (non-admin) developers to create Applications.\n    If the `project` field in your ApplicationSet is templated, developers may be able to create Applications under Projects with excessive permissions.\n    For ApplicationSets with a templated `project` field, [the source of truth _must_ be controlled by admins](.\/Security.md#templated-project-field)\n    - in the case of git generators, PRs must require admin approval.\n    - Git generator does not support Signature Verification For ApplicationSets with a templated `project` field.\n    \n\n## Git Generator: Directories\n\nThe Git directory generator, one of two subtypes of the Git generator, generates parameters using the directory structure of a specified Git repository.\n\nSuppose you have a Git repository with the following directory structure:\n```\n\u251c\u2500\u2500 argo-workflows\n\u2502   \u251c\u2500\u2500 kustomization.yaml\n\u2502   \u2514\u2500\u2500 namespace-install.yaml\n\u2514\u2500\u2500 prometheus-operator\n    \u251c\u2500\u2500 Chart.yaml\n    \u251c\u2500\u2500 README.md\n    \u251c\u2500\u2500 requirements.yaml\n    \u2514\u2500\u2500 values.yaml\n```\n\nThis repository contains two directories, one for each of the workloads to deploy:\n\n- 
an Argo Workflow controller kustomization YAML file\n- a Prometheus Operator Helm chart\n\nWe can deploy both workloads, using this example:\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: cluster-addons\n  namespace: argocd\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - git:\n      repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n      revision: HEAD\n      directories:\n      - path: applicationset\/examples\/git-generator-directory\/cluster-addons\/*\n  template:\n    metadata:\n      name: ''\n    spec:\n      project: \"my-project\"\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n        targetRevision: HEAD\n        path: ''\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: ''\n      syncPolicy:\n        syncOptions:\n        - CreateNamespace=true\n```\n(*The full example can be found [here](https:\/\/github.com\/argoproj\/argo-cd\/tree\/master\/applicationset\/examples\/git-generator-directory).*)\n\nThe generator parameters are:\n\n- ``: The directory paths within the Git repository that match the `path` wildcard.\n- ``: The directory paths within the Git repository that match the `path` wildcard, split into array elements (`n` - array index)\n- ``: For any directory path within the Git repository that matches the `path` wildcard, the right-most path name is extracted (e.g. `\/directory\/directory2` would produce `directory2`).\n- ``: This field is the same as `path.basename` with unsupported characters replaced with `-` (e.g. a `path` of `\/directory\/directory_2`, and `path.basename` of `directory_2` would produce `directory-2` here).\n\n**Note**: The right-most path name always becomes ``. 
For example, for `- path: /one/two/three/four`, `{{.path.basename}}` is `four`.

**Note**: If the `pathParamPrefix` option is specified, all `path`-related parameter names above will be prefixed with the specified value and a dot separator. E.g., if `pathParamPrefix` is `myRepo`, then the generated parameter name would be `.myRepo.path` instead of `.path`. Using this option is necessary in a Matrix generator where both child generators are Git generators (to avoid conflicts when merging the child generators' items).

Whenever a new Helm chart/Kustomize YAML/Application/plain subdirectory is added to the Git repository, the ApplicationSet controller will detect this change and automatically deploy the resulting manifests within new `Application` resources.

As with other generators, clusters *must* already be defined within Argo CD, in order to generate Applications for them.

### Exclude directories

The Git directory generator will automatically exclude directories that begin with `.` (such as `.git`).

The Git directory generator also supports an `exclude` option in order to exclude directories in the repository from being scanned by the ApplicationSet controller:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - git:
      repoURL: https://github.com/argoproj/argo-cd.git
      revision: HEAD
      directories:
      - path: applicationset/examples/git-generator-directory/excludes/cluster-addons/*
      - path: applicationset/examples/git-generator-directory/excludes/cluster-addons/exclude-helm-guestbook
        exclude: true
  template:
    metadata:
      name: '{{.path.basename}}'
    spec:
      project: "my-project"
      source:
        repoURL: https://github.com/argoproj/argo-cd.git
        targetRevision: HEAD
        path: '{{.path.path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{.path.basename}}'
```

(*The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/git-generator-directory/excludes).*)

This example excludes the `exclude-helm-guestbook` directory from the list of directories scanned for this `ApplicationSet` resource.

!!! note "Exclude rules have higher priority than include rules"

    If a directory matches at least one `exclude` pattern, it will be excluded. Or, said another way, *exclude rules take precedence over include rules.*

    As a corollary, which directories are included/excluded is not affected by the order of `path`s in the `directories` field list (because, as above, exclude rules always take precedence over include rules).

For example, with these directories:

```
.
└── d
    ├── e
    ├── f
    └── g
```

Say you want to include `/d/e`, but exclude `/d/f` and `/d/g`. This will *not* work:

```yaml
- path: /d/e
  exclude: false
- path: /d/*
  exclude: true
```

Why? Because the `/d/*` exclude rule will take precedence over the `/d/e` include rule.
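The precedence rule can be sketched in a few lines. This is only an illustration of the documented semantics, not the controller's actual implementation, and Python's `fnmatch` stands in for Go's `path.Match`:

```python
from fnmatch import fnmatch

def is_included(directory: str, rules: list[dict]) -> bool:
    """Documented semantics: a directory is scanned only if it matches at
    least one include rule and no exclude rule (excludes win)."""
    included = any(fnmatch(directory, r["path"]) for r in rules if not r.get("exclude"))
    excluded = any(fnmatch(directory, r["path"]) for r in rules if r.get("exclude"))
    return included and not excluded

# The broken config above: /d/e matches the /d/* exclude rule, so it is skipped.
broken = [{"path": "/d/e"}, {"path": "/d/*", "exclude": True}]
assert not is_included("/d/e", broken)

# A working config: include /d/*, explicitly exclude only /d/f and /d/g.
working = [{"path": "/d/*"}, {"path": "/d/[fg]", "exclude": True}]
assert is_included("/d/e", working)
assert not is_included("/d/f", working) and not is_included("/d/g", working)
```

Note that rule order never matters in `is_included`, mirroring the corollary stated in the admonition above.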
When the `/d/e` path in the Git repository is processed by the ApplicationSet controller, the controller detects that at least one exclude rule is matched, and thus that directory should not be scanned.

You would instead need to do:

```yaml
- path: /d/*
- path: /d/f
  exclude: true
- path: /d/g
  exclude: true
```

Or, a shorter way (using [path.Match](https://golang.org/pkg/path/#Match) syntax) would be:

```yaml
- path: /d/*
- path: /d/[fg]
  exclude: true
```

### Root Of Git Repo

The Git directory generator can be configured to deploy from the root of the git repository by providing `'*'` as the `path`.

To exclude directories, you only need to put the name/[path.Match](https://golang.org/pkg/path/#Match) of the directory you do not want to deploy.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - git:
      repoURL: https://github.com/example/example-repo.git
      revision: HEAD
      directories:
      - path: '*'
      - path: donotdeploy
        exclude: true
  template:
    metadata:
      name: '{{.path.basename}}'
    spec:
      project: "my-project"
      source:
        repoURL: https://github.com/example/example-repo.git
        targetRevision: HEAD
        path: '{{.path.path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{.path.basename}}'
```

### Pass additional key-value pairs via `values` field

You may pass additional, arbitrary string key-value pairs via the `values` field of the git directory generator. Values added via the `values` field are added as `values.(field)`.

In this example, a `cluster` parameter value is passed. It is interpolated from the `branch` and `path` variables, to then be used to determine the destination namespace.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - git:
      repoURL: https://github.com/example/example-repo.git
      revision: HEAD
      directories:
      - path: '*'
      values:
        cluster: '{{.branch}}-{{.path.basename}}'
  template:
    metadata:
      name: '{{.path.basename}}'
    spec:
      project: "my-project"
      source:
        repoURL: https://github.com/example/example-repo.git
        targetRevision: HEAD
        path: '{{.path.path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{.values.cluster}}'
```

!!! note
    The `values.` prefix is always prepended to values provided via the `generators.git.values` field. Ensure you include this prefix in the parameter name within the `template` when using it.

In `values` we can also interpolate all fields set by the git directory generator as mentioned above.

## Git Generator: Files

The Git file generator is the second subtype of the Git generator.
The Git file generator generates parameters using the contents of JSON/YAML files found within a specified repository.

Suppose you have a Git repository with the following directory structure:

```
├── apps
│   └── guestbook
│       ├── guestbook-ui-deployment.yaml
│       ├── guestbook-ui-svc.yaml
│       └── kustomization.yaml
├── cluster-config
│   └── engineering
│       ├── dev
│       │   └── config.json
│       └── prod
│           └── config.json
└── git-generator-files.yaml
```

The directories are:

- `guestbook` contains the Kubernetes resources for a simple guestbook application
- `cluster-config` contains JSON/YAML files describing the individual engineering clusters: one for `dev` and one for `prod`.
- `git-generator-files.yaml` is the example `ApplicationSet` resource that deploys `guestbook` to the specified clusters.

The `config.json` files contain information describing the cluster (along with extra sample data):

```json
{
  "aws_account": "123456",
  "asset_id": "11223344",
  "cluster": {
    "owner": "cluster-admin@company.com",
    "name": "engineering-dev",
    "address": "https://1.2.3.4"
  }
}
```

Git commits containing changes to the `config.json` files are automatically discovered by the Git generator, and the contents of those files are parsed and converted into template parameters.
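The conversion described here (nested JSON fields become dotted parameter keys) can be sketched as follows. This mirrors the documented behavior for illustration; it is not the controller's actual code:

```python
import json

def flatten(obj: dict, prefix: str = "") -> dict:
    """Flatten nested JSON objects into dotted key/value parameters,
    as the Git file generator does with discovered config files."""
    params = {}
    for key, value in obj.items():
        dotted = f"{prefix}{key}"
        if isinstance(value, dict):
            params.update(flatten(value, prefix=f"{dotted}."))
        else:
            params[dotted] = value
    return params

config = json.loads("""{
  "aws_account": "123456",
  "asset_id": "11223344",
  "cluster": {
    "owner": "cluster-admin@company.com",
    "name": "engineering-dev",
    "address": "https://1.2.3.4"
  }
}""")

params = flatten(config)
assert params["cluster.name"] == "engineering-dev"
assert params["cluster.address"] == "https://1.2.3.4"
```

Top-level scalars such as `aws_account` keep their key unchanged; only nested objects gain the dotted prefix.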
Here are the parameters generated for the above JSON:

```text
aws_account: 123456
asset_id: 11223344
cluster.owner: cluster-admin@company.com
cluster.name: engineering-dev
cluster.address: https://1.2.3.4
```

And the generated parameters for all discovered `config.json` files will be substituted into the ApplicationSet template:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - git:
      repoURL: https://github.com/argoproj/argo-cd.git
      revision: HEAD
      files:
      - path: "applicationset/examples/git-generator-files-discovery/cluster-config/**/config.json"
  template:
    metadata:
      name: '{{.cluster.name}}-guestbook'
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj/argo-cd.git
        targetRevision: HEAD
        path: "applicationset/examples/git-generator-files-discovery/apps/guestbook"
      destination:
        server: '{{.cluster.address}}'
        namespace: guestbook
```

(*The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/git-generator-files-discovery).*)

Any `config.json` files found under the `cluster-config` directory will be parameterized based on the `path` wildcard pattern specified. Within each file, JSON fields are flattened into key/value pairs, with this ApplicationSet example using the `cluster.address` and `cluster.name` parameters in the template.

As with other generators, clusters *must* already be defined within Argo CD, in order to generate Applications for them.

In addition to the flattened key/value pairs from the configuration file, the following generator parameters are provided:

- `{{.path.path}}`: The path to the directory containing the matching configuration file within the Git repository. Example: `/clusters/clusterA`, if the config file was `/clusters/clusterA/config.json`
- `{{index .path.segments n}}`: The path to the matching configuration file within the Git repository, split into array elements (`n` - array index). Example: `index .path.segments 0: clusters`, `index .path.segments 1: clusterA`
- `{{.path.basename}}`: Basename of the path to the directory containing the configuration file (e.g. `clusterA`, with the above example.)
- `{{.path.basenameNormalized}}`: This field is the same as `.path.basename` with unsupported characters replaced with `-` (e.g. a `path` of `/directory/directory_2`, and `.path.basename` of `directory_2` would produce `directory-2` here).
- `{{.path.filename}}`: The matched filename. e.g., `config.json` in the above example.
- `{{.path.filenameNormalized}}`: The matched filename with unsupported characters replaced with `-`.

**Note**: The right-most *directory* name always becomes `{{.path.basename}}`. For example, from `- path: /one/two/three/four/config.json`, `{{.path.basename}}` will be `four`.
The filename can always be accessed using `{{.path.filename}}`.

**Note**: If the `pathParamPrefix` option is specified, all `path`-related parameter names above will be prefixed with the specified value and a dot separator. E.g., if `pathParamPrefix` is `myRepo`, then the generated parameter name would be `myRepo.path` instead of `path`. Using this option is necessary in a Matrix generator where both child generators are Git generators (to avoid conflicts when merging the child generators' items).

**Note**: The default behavior of the Git file generator is very greedy. Please see [Git File Generator Globbing](./Generators-Git-File-Globbing.md) for more information.

### Pass additional key-value pairs via `values` field

You may pass additional, arbitrary string key-value pairs via the `values` field of the git files generator. Values added via the `values` field are added as `values.(field)`.

In this example, a `base_dir` parameter value is passed.
It is interpolated from `path` segments, to then be used to determine the source path.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - git:
      repoURL: https://github.com/argoproj/argo-cd.git
      revision: HEAD
      files:
      - path: "applicationset/examples/git-generator-files-discovery/cluster-config/**/config.json"
      values:
        base_dir: "{{index .path.segments 0}}/{{index .path.segments 1}}/{{index .path.segments 2}}"
  template:
    metadata:
      name: '{{.path.basename}}-guestbook'
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj/argo-cd.git
        targetRevision: HEAD
        path: "{{.values.base_dir}}/apps/guestbook"
      destination:
        server: '{{.cluster.address}}'
        namespace: guestbook
```

!!! note
    The `values.` prefix is always prepended to values provided via the `generators.git.values` field. Ensure you include this prefix in the parameter name within the `template` when using it.

In `values` we can also interpolate all fields set by the git files generator as mentioned above.

## Webhook Configuration

When using a Git generator, ApplicationSet polls Git repositories every three minutes to detect changes. To eliminate this delay from polling, the ApplicationSet webhook server can be configured to receive webhook events. ApplicationSet supports Git webhook notifications from GitHub and GitLab. The following explains how to configure a Git webhook for GitHub, but the same process should be applicable to other providers.

!!! note
    The ApplicationSet controller webhook does not use the same webhook as the API server as defined [here](../webhook.md). ApplicationSet exposes a webhook server as a service of type ClusterIP. An ApplicationSet-specific Ingress resource needs to be created to expose this service to the webhook source.

### 1. Create the webhook in the Git provider

In your Git provider, navigate to the settings page where webhooks can be configured. The payload URL configured in the Git provider should use the `/api/webhook` endpoint of your ApplicationSet instance (e.g. `https://applicationset.example.com/api/webhook`). If you wish to use a shared secret, input an arbitrary value in the secret. This value will be used when configuring the webhook in the next step.

![Add Webhook](../../assets/applicationset/webhook-config.png "Add Webhook")

!!! note
    When creating the webhook in GitHub, the "Content type" needs to be set to "application/json". The default value "application/x-www-form-urlencoded" is not supported by the library used to handle the hooks.

### 2. Configure ApplicationSet with the webhook secret (Optional)

Configuring a webhook shared secret is optional, since ApplicationSet will still refresh applications generated by Git generators, even with unauthenticated webhook events. This is safe to do since the contents of webhook payloads are considered untrusted, and will only result in a refresh of the application (a process which already occurs at three-minute intervals).
If ApplicationSet is publicly\naccessible, then configuring a webhook secret is recommended to prevent a DDoS attack.\n\nIn the `argocd-secret` Kubernetes secret, include the Git provider's webhook secret configured in step 1.\n\nEdit the Argo CD Kubernetes secret:\n\n```bash\nkubectl edit secret argocd-secret -n argocd\n```\n\nTIP: for ease of entering secrets, Kubernetes supports inputting secrets in the `stringData` field,\nwhich saves you the trouble of base64 encoding the values and copying them to the `data` field.\nSimply copy the shared webhook secret created in step 1 to the corresponding\nGitHub\/GitLab\/BitBucket key under the `stringData` field:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: argocd-secret\n  namespace: argocd\ntype: Opaque\ndata:\n...\n\nstringData:\n  # github webhook secret\n  webhook.github.secret: shhhh! it's a github secret\n\n  # gitlab webhook secret\n  webhook.gitlab.secret: shhhh! it's a gitlab secret\n```\n\nAfter saving, please restart the ApplicationSet pod for the changes to take effect.\n\n## Repository credentials for ApplicationSets\nIf your [ApplicationSets](index.md) use a repository that requires credentials for access, you need to add the repository as a \"non project scoped\" repository.  
\n- When doing that through the UI, set this to a **blank** value in the dropdown menu.\n- When doing that through the CLI, make sure you **DO NOT** supply the parameter `--project` ([argocd repo add docs](..\/..\/user-guide\/commands\/argocd_repo_add.md))\n- When doing that declaratively, make sure you **DO NOT** have `project:` defined under `stringData:` ([complete yaml example](..\/argocd-repositories-yaml.md))","site":"argocd"}
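Step 2 above configures a shared webhook secret. For GitHub, that secret is used to sign each delivery payload with HMAC-SHA256, sent in the `X-Hub-Signature-256` header; the receiving server recomputes the digest over the raw body and compares it in constant time. A minimal sketch of that check (the function name and sample values are illustrative, not ApplicationSet's actual code):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Validate a GitHub 'X-Hub-Signature-256' header against the raw payload."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the expected digest through timing differences
    return hmac.compare_digest(expected, signature_header)

# Simulate a delivery signed with the shared secret from step 1
secret = b"shhhh! it's a github secret"
payload = b'{"ref": "refs/heads/main"}'
header = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(verify_github_signature(secret, payload, header))           # True
print(verify_github_signature(secret, b"tampered body", header))  # False
```

Using `hmac.compare_digest` rather than `==` is the standard way to compare MACs without a timing side channel.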
{"questions":"argocd Templates Template fields An Argo CD Application is created by combining the parameters from the generator with fields of the template via and from that a concrete resource is produced and applied to the cluster The template fields of the ApplicationSet are used to generate Argo CD resources ApplicationSet is using but will be soon deprecated in favor of Go Template","answers":"# Templates\n\nThe template fields of the ApplicationSet `spec` are used to generate Argo CD `Application` resources.\n\nApplicationSet currently uses [fasttemplate](https:\/\/github.com\/valyala\/fasttemplate), but fasttemplate will soon be deprecated in favor of Go Template.\n\n## Template fields\n\nAn Argo CD Application is created by combining the parameters from the generator with fields of the template (via `{{values}}`), and from that a concrete `Application` resource is produced and applied to the cluster.\n\nHere is the template subfield from a Cluster generator:\n\n```yaml\n# (...)\n template:\n   metadata:\n     name: '{{.name}}-guestbook'\n   spec:\n     source:\n       repoURL: https:\/\/github.com\/infra-team\/cluster-deployments.git\n       targetRevision: HEAD\n       path: guestbook\/\n     destination:\n       server: '{{.server}}'\n       namespace: guestbook\n```\n\nFor details on all available parameters (like `.name`, `.nameNormalized`, etc.) 
please refer to the [Cluster Generator docs](.\/Generators-Cluster.md).\n\nThe template subfields correspond directly to [the spec of an Argo CD `Application` resource](..\/..\/declarative-setup\/#applications):\n\n- `project` refers to the [Argo CD Project](..\/..\/user-guide\/projects.md) in use (`default` may be used here to utilize the default Argo CD Project)\n- `source` defines from which Git repository to extract the desired Application manifests\n    - **repoURL**: URL of the repository (e.g. `https:\/\/github.com\/argoproj\/argocd-example-apps.git`)\n    - **targetRevision**: Revision (tag\/branch\/commit) of the repository (e.g. `HEAD`)\n    - **path**: Path within the repository where Kubernetes manifests (and\/or Helm, Kustomize, Jsonnet resources) are located\n- `destination`: Defines which Kubernetes cluster\/namespace to deploy to\n    - **name**: Name of the cluster (within Argo CD) to deploy to\n    - **server**: API Server URL for the cluster (Example: `https:\/\/kubernetes.default.svc`)\n    - **namespace**: Target namespace in which to deploy the manifests from `source` (Example: `my-app-namespace`)\n\nNote:\n\n- Referenced clusters must already be defined in Argo CD for the ApplicationSet controller to use them\n- Only **one** of `name` or `server` may be specified: if both are specified, an error is returned.\n- Signature Verification does not work with the templated `project` field when using the git generator.\n\nThe `metadata` field of the template may also be used to set an Application `name`, or to add labels or annotations to the Application.\n\nWhile the ApplicationSet spec provides a basic form of templating, it is not intended to replace the full-fledged configuration management capabilities of tools such as Kustomize, Helm, or Jsonnet.\n\n### Deploying ApplicationSet resources as part of a Helm chart\n\nApplicationSet uses the same templating notation as Helm (`{{}}`). 
If the ApplicationSet templates aren't written as\nHelm string literals, Helm will throw an error like `function \"cluster\" not defined`. To avoid that error, write the\ntemplate as a Helm string literal. For example:\n\n```yaml\n    metadata:\n      name: '{{`{{cluster}}`}}-guestbook'\n```\n\nThis _only_ applies if you use Helm to deploy your ApplicationSet resources.\n\n## Generator templates\n\nIn addition to specifying a template within the `.spec.template` of the `ApplicationSet` resource, templates may also be specified within generators. This is useful for overriding the values of the `spec`-level template.\n\nThe generator's `template` field takes precedence over the `spec`'s template fields:\n\n- If both templates contain the same field, the generator's field value will be used.\n- If only one of those templates' fields has a value, that value will be used.\n\nGenerator templates can thus be thought of as patches against the outer `spec`-level template fields.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: guestbook\nspec:\n  generators:\n  - list:\n      elements:\n        - cluster: engineering-dev\n          url: https:\/\/kubernetes.default.svc\n      template:\n        metadata: {}\n        spec:\n          project: \"default\"\n          source:\n            targetRevision: HEAD\n            repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n            # New path value is generated here:\n            path: 'applicationset\/examples\/template-override\/{{cluster}}-override'\n          destination: {}\n\n  template:\n    metadata:\n      name: '{{cluster}}-guestbook'\n    spec:\n      project: \"default\"\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n        targetRevision: HEAD\n        # This 'default' value is not used: it is replaced by the generator's template path, above\n        path: applicationset\/examples\/template-override\/default\n      destination:\n        server: '{{url}}'\n        namespace: 
guestbook\n```\n(*The full example can be found [here](https:\/\/github.com\/argoproj\/argo-cd\/tree\/master\/applicationset\/examples\/template-override).*)\n\nIn this example, the ApplicationSet controller will generate an `Application` resource using the `path` generated by the List generator, rather than the `path` value defined in `.spec.template`.\n\n## Template Patch\n\nTemplating is only available on string fields. However, some use cases may require applying templating on other types.\n\nExamples:\n\n- Conditionally set the automated sync policy.\n- Conditionally switch the prune boolean to `true`.\n- Add multiple Helm value files from a list.\n\nThe `templatePatch` feature enables advanced templating, with support for `json` and `yaml`.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: guestbook\nspec:\n  goTemplate: true\n  generators:\n  - list:\n      elements:\n        - cluster: engineering-dev\n          url: https:\/\/kubernetes.default.svc\n          autoSync: true\n          prune: true\n          valueFiles:\n            - values.large.yaml\n            - values.debug.yaml\n  template:\n    metadata:\n      name: '{{.cluster}}-deployment'\n    spec:\n      project: \"default\"\n      source:\n        repoURL: https:\/\/github.com\/infra-team\/cluster-deployments.git\n        targetRevision: HEAD\n        path: guestbook\/\n      destination:\n        server: '{{.url}}'\n        namespace: guestbook\n  templatePatch: |\n    spec:\n      source:\n        helm:\n          valueFiles:\n          {{- range $valueFile := .valueFiles }}\n            - {{ $valueFile }}\n          {{- end }}\n    {{- if .autoSync }}\n      syncPolicy:\n        automated:\n          prune: {{ .prune }}\n    {{- end }}\n```\n\n!!! important\n    The `templatePatch` can apply arbitrary changes to the template. If parameters include untrustworthy user input, it \n    may be possible to inject malicious changes into the template. It is recommended to use `templatePatch` only with \n    trusted input or to carefully escape the input before using it in the template. 
Piping input to `toJson` should help\n    prevent, for example, a user from successfully injecting a string with newlines.\n\n    The `spec.project` field is not supported in `templatePatch`. If you need to change the project, you can use the\n    `spec.project` field in the `template` field.\n\n!!! important\n    When writing a `templatePatch`, you're crafting a patch. So, if the patch includes an empty `spec: # nothing in here`, it will effectively clear out existing fields. See [#17040](https:\/\/github.com\/argoproj\/argo-cd\/issues\/17040) for an example of this behavior.","site":"argocd","answers_cleaned":"  Templates  The template fields of the ApplicationSet  spec  are used to generate Argo CD  Application  resources   ApplicationSet is using  fasttemplate  https   github com valyala fasttemplate  but will be soon deprecated in favor of Go Template       Template fields  An Argo CD Application is created by combining the parameters from the generator with fields of the template  via      and from that a concrete  Application  resource is produced and applied to the cluster   Here is the template subfield from a Cluster generator      yaml          template     metadata       name    guestbook     spec       source         repoURL  https   github com infra team cluster deployments git        targetRevision  HEAD        path  guestbook       destination         server            namespace  guestbook      For details on all available parameters  like   name     nameNormalized   etc   please refer to the  Cluster Generator docs    Generators Cluster md    The template subfields correspond directly to  the spec of an Argo CD  Application  resource        declarative setup  applications       project  refers to the  Argo CD Project        user guide projects md  in use   default  may be used here to utilize the default Argo CD Project     source  defines from which Git repository to extract the desired Application manifests         repoURL    URL of the 
repository (e.g. `https://github.com/argoproj/argocd-example-apps.git`)
    - `targetRevision`: Revision (tag/branch/commit) of the repository (e.g. `HEAD`)
    - `path`: Path within the repository where Kubernetes manifests (and/or Helm, Kustomize, Jsonnet resources) are located
- `destination`: Defines which Kubernetes cluster/namespace to deploy to:
    - `name`: Name of the cluster (within Argo CD) to deploy to
    - `server`: API Server URL for the cluster (Example: `https://kubernetes.default.svc`)
    - `namespace`: Target namespace in which to deploy the manifests from `source` (Example: `my-app-namespace`)

!!! note
    - Referenced clusters must already be defined in Argo CD for the ApplicationSet controller to use them.
    - Only **one** of `name` or `server` may be specified: if both are specified, an error is returned.
    - Signature Verification does not work with the templated `project` field when using the git generator.

The `metadata` field of template may also be used to set an Application `name`, or to add labels or annotations to the Application.

While the ApplicationSet spec provides a basic form of templating, it is not intended to replace the full-fledged configuration management capabilities of tools such as Kustomize, Helm, or Jsonnet.

### Deploying ApplicationSet resources as part of a Helm chart

ApplicationSet uses the same templating notation as Helm. If the ApplicationSet templates aren't written as Helm string literals, Helm will throw an error like `function "cluster" not defined`. To avoid that error, write the template as a Helm string literal. For example:

```yaml
    metadata:
      name: '{{`{{.cluster}}`}}-guestbook'
```

This **only** applies if you use Helm to deploy your ApplicationSet resources.

## Generator templates

In addition to specifying a template within the `.spec.template` of the `ApplicationSet` resource, templates may also be specified within generators. This is useful for overriding the values of the `spec`-level template.

The generator's `template` field takes precedence over the `spec`'s template fields:

- If both templates contain the same field, the generator's field value will be used.
- If only one of those templates' fields has a value, that value will be used.

Generator templates can thus be thought of as patches against the outer `spec`-level template fields.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  - list:
      elements:
        - cluster: engineering-dev
          url: https://kubernetes.default.svc
      template:
        metadata: {}
        spec:
          project: "default"
          source:
            targetRevision: HEAD
            repoURL: https://github.com/argoproj/argo-cd.git
            # New path value is generated here:
            path: 'applicationset/examples/template-override/{{cluster}}-override'
          destination: {}
  template:
    metadata:
      name: '{{cluster}}-guestbook'
    spec:
      project: "default"
      source:
        repoURL: https://github.com/argoproj/argo-cd.git
        targetRevision: HEAD
        # This 'default' value is not used: it is replaced by the generator's template path, above
        path: applicationset/examples/template-override/default
      destination:
        server: '{{url}}'
        namespace: guestbook
```

(The full example can be found [here](https://github.com/argoproj/argo-cd/tree/master/applicationset/examples/template-override).)

In this example, the ApplicationSet controller will generate an `Application` resource using the `path` generated by the List generator, rather than the `path` value defined in `.spec.template`.

## Template Patch

Templating is only available on string type. However, some use cases may require applying templating on other types.

Example:

- Conditionally set the automated sync policy.
- Conditionally switch the prune boolean to `true`.
- Add multiple helm value files from a list.

The `templatePatch` feature enables advanced templating, with support for `json` and `yaml`.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  goTemplate: true
  generators:
  - list:
      elements:
        - cluster: engineering-dev
          url: https://kubernetes.default.svc
          autoSync: true
          prune: true
          valueFiles:
            - values.large.yaml
            - values.debug.yaml
  template:
    metadata:
      name: '{{.cluster}}-deployment'
    spec:
      project: "default"
      source:
        repoURL: https://github.com/infra-team/cluster-deployments.git
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{.url}}'
        namespace: guestbook
  templatePatch: |
    spec:
      source:
        helm:
          valueFiles:
          {{- range $valueFile := .valueFiles }}
            - {{ $valueFile }}
          {{- end }}
      {{- if .autoSync }}
      syncPolicy:
        automated:
          prune: {{ .prune }}
      {{- end }}
```

!!! important
    The `templatePatch` can apply arbitrary changes to the template. If parameters include untrustworthy user input, it
    may be possible to inject malicious changes into the template. It is recommended to use `templatePatch` only with
    trusted input or to carefully escape the input before using it in the template. Piping input to `toJson` should help
    prevent, for example, a user from successfully injecting a string with newlines.

    The `spec.project` field is not supported in `templatePatch`. If you need to change the project, you can use the
    `spec.project` field in the `template` field.

!!! important
    When writing a `templatePatch`, you're crafting a patch. So, if the patch includes an empty `spec` (nothing in here), it will effectively clear out existing fields. See [#17040](https://github.com/argoproj/argo-cd/issues/17040) for an example of this behavior."}
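The `toJson` advice in the note above can be illustrated outside of Go templates. Below is a minimal Python sketch (illustrative only, not Argo CD code; the `malicious` value and the patch snippet are hypothetical) showing why JSON-encoding an untrusted value neutralizes newline injection: the encoder escapes the newlines, so the value can no longer introduce new YAML keys.

```python
import json

# Hypothetical untrusted generator parameter: the attacker embeds newlines to
# smuggle an extra syncPolicy block into a templatePatch.
malicious = "values.yaml\n      syncPolicy:\n        automated: {}"

# Substituting the raw value keeps the literal newlines, so the injected
# lines become part of the patch document:
raw_patch = "      valueFiles:\n        - " + malicious

# Encoding the value first (the Python analogue of piping to `toJson` in a
# Go template) turns it into a single quoted scalar with escaped newlines:
safe_patch = "      valueFiles:\n        - " + json.dumps(malicious)

# The encoded form contains no literal newline at all:
print("\n" in json.dumps(malicious))  # False
```

The same reasoning applies to any templating system: escaping the value confines it to a single scalar, regardless of its content.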
{"questions":"argocd You can write in any language You can use it in a sidecar or standalone deployment Plugin Generator Simple a plugin just responds to RPC HTTP requests Plugins allow you to provide your own generator You can get your plugin running today no need to wait 3 5 months for review approval merge and an Argo software release You can combine it with Matrix or Merge","answers":"# Plugin Generator\n\nPlugins allow you to provide your own generator.\n\n- You can write in any language\n- Simple: a plugin just responds to RPC HTTP requests.\n- You can use it in a sidecar, or standalone deployment.\n- You can get your plugin running today, no need to wait 3-5 months for review, approval, merge and an Argo software\n  release.\n- You can combine it with Matrix or Merge.\n\nTo start working on your own plugin, you can generate a new repository based on the example\n[applicationset-hello-plugin](https:\/\/github.com\/argoproj-labs\/applicationset-hello-plugin).\n\n## Simple example\n\nUsing a generator plugin without combining it with Matrix or Merge.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myplugin\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n    - plugin:\n        # Specify the configMap where the plugin configuration is located.\n        configMapRef:\n          name: my-plugin\n        # You can pass arbitrary parameters to the plugin. 
`input.parameters` is a map, but values may be any type.\n        # These parameters will also be available on the generator's output under the `generator.input.parameters` key.\n        input:\n          parameters:\n            key1: \"value1\"\n            key2: \"value2\"\n            list: [\"list\", \"of\", \"values\"]\n            boolean: true\n            map:\n              key1: \"value1\"\n              key2: \"value2\"\n              key3: \"value3\"\n\n        # You can also attach arbitrary values to the generator's output under the `values` key. These values will be\n        # available in templates under the `values` key.\n        values:\n          value1: something\n\n        # When using a Plugin generator, the ApplicationSet controller polls every `requeueAfterSeconds` interval (defaulting to every 30 minutes) to detect changes.\n        requeueAfterSeconds: 30\n  template:\n    metadata:\n      name: myplugin\n      annotations:\n        example.from.input.parameters: \"\"\n        example.from.values: \"\"\n        # The plugin determines what else it produces.\n        example.from.plugin.output: \"\"\n```\n\n- `configMapRef.name`: A `ConfigMap` name containing the plugin configuration to use for RPC call.\n- `input.parameters`: Input parameters included in the RPC call to the plugin. (Optional)\n\n!!! note\n    The concept of the plugin should not undermine the spirit of GitOps by externalizing data outside of Git. The goal is to be complementary in specific contexts.\n    For example, when using one of the PullRequest generators, it's impossible to retrieve parameters related to the CI (only the commit hash is available), which limits the possibilities. 
By using a plugin, it's possible to retrieve the necessary parameters from a separate data source and use them to extend the functionality of the generator.\n\n### Add a ConfigMap to configure the access of the plugin\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: my-plugin\n  namespace: argocd\ndata:\n  token: \"$plugin.myplugin.token\" # Alternatively $<some_K8S_secret>:plugin.myplugin.token\n  baseUrl: \"http:\/\/myplugin.plugin-ns.svc.cluster.local.\"\n  requestTimeout: \"60\"\n```\n\n- `token`: Pre-shared token used to authenticate the HTTP request (points to the right key you created in the `argocd-secret` Secret)\n- `baseUrl`: BaseUrl of the k8s service exposing your plugin in the cluster.\n- `requestTimeout`: Timeout of the request to the plugin in seconds (default: 30)\n\n### Store credentials\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: argocd-secret\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/name: argocd-secret\n    app.kubernetes.io\/part-of: argocd\ntype: Opaque\ndata:\n  # ...\n  # The secret value must be base64 encoded **once**.\n  # This value corresponds to: `printf \"strong-password\" | base64`.\n  plugin.myplugin.token: \"c3Ryb25nLXBhc3N3b3Jk\"\n  # ...\n```\n\n#### Alternative\n\nIf you want to store sensitive data in **another** Kubernetes `Secret` instead of `argocd-secret`, ArgoCD knows how to look up a key under `data` in that Kubernetes `Secret` whenever a value in a ConfigMap starts with `$`, followed by your Kubernetes `Secret` name, a `:` (colon), and the key name.\n\nSyntax: `$<k8s_secret_name>:<a_key_in_that_k8s_secret>`\n\n> NOTE: The Secret must have the label `app.kubernetes.io\/part-of: argocd`\n\n##### Example\n\n`another-secret`:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: another-secret\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/part-of: argocd\ntype: Opaque\ndata:\n  # ...\n  # Store client secret like below.\n  # The secret value must 
be base64 encoded **once**.\n  # This value corresponds to: `printf \"strong-password\" | base64`.\n  plugin.myplugin.token: \"c3Ryb25nLXBhc3N3b3Jk\"\n```\n\n### HTTP server\n\n#### A Simple Python Plugin\n\nYou can deploy it either as a sidecar or as a standalone deployment (the latter is recommended).\n\nIn the example, the token is stored in a file at this location: `\/var\/run\/argo\/token`\n\n```\nstrong-password\n```\n\n```python\nimport json\nfrom http.server import BaseHTTPRequestHandler, HTTPServer\n\nwith open(\"\/var\/run\/argo\/token\") as f:\n    plugin_token = f.read().strip()\n\n\nclass Plugin(BaseHTTPRequestHandler):\n\n    def args(self):\n        return json.loads(self.rfile.read(int(self.headers.get('Content-Length'))))\n\n    def reply(self, reply):\n        self.send_response(200)\n        self.end_headers()\n        self.wfile.write(json.dumps(reply).encode(\"UTF-8\"))\n\n    def forbidden(self):\n        self.send_response(403)\n        self.end_headers()\n\n    def unsupported(self):\n        self.send_response(404)\n        self.end_headers()\n\n    def do_POST(self):\n        if self.headers.get(\"Authorization\") != \"Bearer \" + plugin_token:\n            self.forbidden()\n            return  # stop here so unauthenticated requests never reach the handler below\n\n        if self.path == '\/api\/v1\/getparams.execute':\n            args = self.args()\n            self.reply({\n                \"output\": {\n                    \"parameters\": [\n                        {\n                            \"key1\": \"val1\",\n                            \"key2\": \"val2\"\n                        },\n                        {\n                            \"key1\": \"val2\",\n                            \"key2\": \"val2\"\n                        }\n                    ]\n                }\n            })\n        else:\n            self.unsupported()\n\n\nif __name__ == '__main__':\n    httpd = HTTPServer(('', 4355), Plugin)\n    httpd.serve_forever()\n```\n\nExecute getparams with curl:\n\n```\ncurl 
http:\/\/localhost:4355\/api\/v1\/getparams.execute -H \"Authorization: Bearer strong-password\" -d \\\n'{\n  \"applicationSetName\": \"fake-appset\",\n  \"input\": {\n    \"parameters\": {\n      \"param1\": \"value1\"\n    }\n  }\n}'\n```\n\nSome things to note here:\n\n- You only need to implement the call `\/api\/v1\/getparams.execute`.\n- You should check that the `Authorization` header contains the same bearer value as `\/var\/run\/argo\/token`. Return 403 if not.\n- The input parameters are included in the request body and can be accessed using the `input.parameters` variable.\n- The output must always be a list of object maps nested under the `output.parameters` key in a map.\n- `generator.input.parameters` and `values` are reserved keys. If present in the plugin output, these keys will be overwritten by the\n  contents of the `input.parameters` and `values` keys in the ApplicationSet's plugin generator spec.\n\n## With matrix and pull request example\n\nIn the following example, the plugin implementation returns a set of image digests for the given branch. 
The returned list contains only one item, corresponding to the latest built image for the branch.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: fb-matrix\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n    - matrix:\n        generators:\n          - pullRequest:\n              github: ...\n              requeueAfterSeconds: 30\n          - plugin:\n              configMapRef:\n                name: cm-plugin\n              input:\n                parameters:\n                  branch: \"{{.branch}}\" # provided by generator pull request\n              values:\n                branchLink: \"https:\/\/git.example.com\/org\/repo\/tree\/{{.branch}}\"\n  template:\n    metadata:\n      name: \"fb-matrix-{{.branch}}\"\n    spec:\n      source:\n        repoURL: \"https:\/\/github.com\/myorg\/myrepo.git\"\n        targetRevision: \"HEAD\"\n        path: charts\/my-chart\n        helm:\n          releaseName: fb-matrix-{{.branch}}\n          valueFiles:\n            - values.yaml\n          values: |\n            front:\n              image: myregistry:{{.branch}}@{{.digestFront}} # digestFront is generated by the plugin\n            back:\n              image: myregistry:{{.branch}}@{{.digestBack}} # digestBack is generated by the plugin\n      project: default\n      syncPolicy:\n        automated:\n          prune: true\n          selfHeal: true\n        syncOptions:\n          - CreateNamespace=true\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: \"{{.branch}}\"\n      info:\n        - name: Link to the Application's branch\n          value: \"{{.values.branchLink}}\"\n```\n\nTo illustrate:\n\n- The generator pullRequest would return, for example, 2 branches: `feature-branch-1` and `feature-branch-2`.\n\n- The generator plugin would then perform 2 requests as follows:\n\n```shell\ncurl http:\/\/localhost:4355\/api\/v1\/getparams.execute -H \"Authorization: Bearer strong-password\" -d \\\n'{\n  \"applicationSetName\": \"fb-matrix\",\n  \"input\": {\n    \"parameters\": {\n   
   \"branch\": \"feature-branch-1\"\n    }\n  }\n}'\n```\n\nThen,\n\n```shell\ncurl http:\/\/localhost:4355\/api\/v1\/getparams.execute -H \"Authorization: Bearer strong-password\" -d \\\n'{\n  \"applicationSetName\": \"fb-matrix\",\n  \"input\": {\n    \"parameters\": {\n      \"branch\": \"feature-branch-2\"\n    }\n  }\n}'\n```\n\nFor each call, it would return a unique result such as :\n\n```json\n{\n  \"output\": {\n    \"parameters\": [\n      {\n        \"digestFront\": \"sha256:a3f18c17771cc1051b790b453a0217b585723b37f14b413ad7c5b12d4534d411\",\n        \"digestBack\": \"sha256:4411417d614d5b1b479933b7420079671facd434fd42db196dc1f4cc55ba13ce\"\n      }\n    ]\n  }\n}\n```\n\nThen,\n\n```json\n{\n  \"output\": {\n    \"parameters\": [\n      {\n        \"digestFront\": \"sha256:7c20b927946805124f67a0cb8848a8fb1344d16b4d0425d63aaa3f2427c20497\",\n        \"digestBack\": \"sha256:e55e7e40700bbab9e542aba56c593cb87d680cefdfba3dd2ab9cfcb27ec384c2\"\n      }\n    ]\n  }\n}\n```\n\nIn this example, by combining the two, you ensure that one or more pull requests are available and that the generated tag has been properly generated. 
This wouldn't have been possible with just a commit hash because a hash alone does not certify the success of the build.","site":"argocd"}
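For local development it can be handy to sanity-check a plugin's reply before wiring it into the controller. The following is a minimal Python sketch (an assumption on my part, not Argo CD source code; the function name is hypothetical) that checks the response shape described above: a map whose `output.parameters` value is a list of object maps.

```python
import json

def valid_plugin_response(body: str) -> bool:
    """Return True if `body` has the getparams.execute response shape:
    {"output": {"parameters": [ {...}, {...}, ... ]}}"""
    try:
        doc = json.loads(body)
    except json.JSONDecodeError:
        return False
    if not isinstance(doc, dict):
        return False
    output = doc.get("output")
    if not isinstance(output, dict):
        return False
    params = output.get("parameters")
    # The parameters value must be a list of object maps, never a bare map.
    return isinstance(params, list) and all(isinstance(p, dict) for p in params)

# The digest responses above pass; a bare map under "parameters" does not.
ok = valid_plugin_response('{"output": {"parameters": [{"digestFront": "sha256:abc"}]}}')
bad = valid_plugin_response('{"output": {"parameters": {"key": "val"}}}')
print(ok, bad)  # True False
```

A check like this makes protocol mistakes (such as returning a single map instead of a list) visible before the controller silently rejects the output.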
{"questions":"argocd Please read this documentation carefully before you enable this feature Misconfiguration could lead to potential security issues This feature is in the stage It is generally considered stable but there may be unhandled edge cases warning ApplicationSet in any namespace Introduction warning Beta Feature Since v2 8 0","answers":"# ApplicationSet in any namespace\n\n!!! warning \"Beta Feature (Since v2.8.0)\"\n    This feature is in the [Beta](https:\/\/github.com\/argoproj\/argoproj\/blob\/main\/community\/feature-status.md#beta) stage. \n    It is generally considered stable, but there may be unhandled edge cases.\n\n!!! warning\n    Please read this documentation carefully before you enable this feature. Misconfiguration could lead to potential security issues.\n\n## Introduction\n\nAs of version 2.8, Argo CD supports managing `ApplicationSet` resources in namespaces other than the control plane's namespace (which is usually `argocd`), but this feature has to be explicitly enabled and configured appropriately.\n\nArgo CD administrators can define a certain set of namespaces where `ApplicationSet` resources may be created, updated and reconciled in. \n\nAs Applications generated by an ApplicationSet are generated in the same namespace as the ApplicationSet itself, this works in combination with [App in any namespace](..\/app-any-namespace.md).\n    \n## Prerequisites\n\n### App in any namespace configured\n\nThis feature needs [App in any namespace](..\/app-any-namespace.md) feature activated. The list of namespaces must be the same.\n\n### Cluster-scoped Argo CD installation\n\nThis feature can only be enabled and used when your Argo CD ApplicationSet controller is installed as a cluster-wide instance, so it has permissions to list and manipulate resources on a cluster scope. 
It will *not* work with an Argo CD installed in namespace-scoped mode.\n\n### SCM Providers secrets consideration\n\nBy allowing ApplicationSet in any namespace, you must be aware that any secrets can be exfiltrated using the `scmProvider` or `pullRequest` generators. This means that if the ApplicationSet controller is configured to allow namespace `appNs`, and some user is allowed to create\nan ApplicationSet in the `appNs` namespace, then that user can install a malicious Pod into the `appNs` namespace as described below\nand read out the content of the secret indirectly, thus exfiltrating the secret value.\n\nHere is an example:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\n  namespace: appNs\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - scmProvider:\n      gitea:\n        # The Gitea owner to scan.\n        owner: myorg\n        # With this malicious setting, the user can send all requests to a Pod that will log incoming requests, including headers with tokens\n        api: http:\/\/my-service.appNs.svc.cluster.local\n        # If true, scan every branch of every repository. If false, scan only the default branch. Defaults to false.\n        allBranches: true\n        # By changing this token reference, the user can exfiltrate any secret\n        tokenRef:\n          secretName: gitea-token\n          key: token\n  template:\n```\n\nIn order to prevent the scenario above, the administrator must restrict the URLs of the allowed SCM Providers (example: `https:\/\/git.mydomain.com\/,https:\/\/gitlab.mydomain.com\/`) by setting the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_ALLOWED_SCM_PROVIDERS` (or the argocd-cmd-params-cm key `applicationsetcontroller.allowed.scm.providers`). 
If another URL is used, it will be rejected by the applicationset controller.\n\nFor example:\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cmd-params-cm\ndata:\n  applicationsetcontroller.allowed.scm.providers: https:\/\/git.mydomain.com\/,https:\/\/gitlab.mydomain.com\/\n```\n\n!!! note\n    Please note that the URL used in the `api` field of the `ApplicationSet` must match the URL declared by the administrator, including the protocol.\n\n!!! warning\n    The allow-list only applies to SCM providers for which the user may configure a custom `api`. Where an SCM or PR\n    generator does not accept a custom API URL, the provider is implicitly allowed.\n\nIf you do not intend to allow users to use the SCM or PR generators, you can disable them entirely by setting the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_SCM_PROVIDERS` (or the argocd-cmd-params-cm key `applicationsetcontroller.enable.scm.providers`) to `false`.\n\n#### `tokenRef` Restrictions\n\nIt is **highly recommended** to enable SCM Providers secrets restrictions to avoid any secrets exfiltration. This\nrecommendation applies even when AppSets-in-any-namespace is disabled, but is especially important when it is enabled,\nsince non-Argo-admins may attempt to reference out-of-bounds secrets in the `argocd` namespace from an AppSet\n`tokenRef`.\n\nWhen this mode is enabled, the referenced secret must have a label `argocd.argoproj.io\/secret-type` with value\n`scm-creds`.\n\nTo enable this mode, set the `ARGOCD_APPLICATIONSET_CONTROLLER_TOKENREF_STRICT_MODE` environment variable to `true` in the\n`argocd-applicationset-controller` deployment. 
You can do this by adding the following to your `argocd-cmd-params-cm`\nConfigMap:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cmd-params-cm\ndata:\n  applicationsetcontroller.tokenref.strict.mode: \"true\"\n```\n\n### Overview\n\nIn order for an ApplicationSet to be managed and reconciled outside Argo CD's control plane namespace, two prerequisites must be met:\n\n1. The namespace list from which `argocd-applicationset-controller` can source `ApplicationSets` must be explicitly set using the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_NAMESPACES` or, alternatively, using the parameter `--applicationset-namespaces`.\n2. The enabled namespaces must be entirely covered by [App in any namespace](..\/app-any-namespace.md), otherwise the Applications generated outside the allowed Application namespaces won't be reconciled.\n\nThis can also be achieved by setting the argocd-cmd-params-cm key `applicationsetcontroller.namespaces` instead of the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_NAMESPACES`.\n\n`ApplicationSets` in different namespaces can be created and managed just like any other `ApplicationSet` in the `argocd` namespace previously, either declaratively or through the Argo CD API (e.g. using the CLI, the web UI, the REST API, etc.).\n\n### Reconfigure Argo CD to allow certain namespaces\n\n#### Change workload startup parameters\n\nIn order to enable this feature, the Argo CD administrator must reconfigure the `argocd-applicationset-controller` workload to add the `--applicationset-namespaces` parameter to the container's startup command.\n\n### Safely template project\n\nAs [App in any namespace](..\/app-any-namespace.md) is a prerequisite, it is possible to safely template the project. 
\n\nLet's take an example with two teams and an infra project:\n\n```yaml\nkind: AppProject\napiVersion: argoproj.io\/v1alpha1\nmetadata:\n  name: infra-project\n  namespace: argocd\nspec:\n  destinations:\n    - namespace: '*'\n```\n\n```yaml\nkind: AppProject\napiVersion: argoproj.io\/v1alpha1\nmetadata:\n  name: team-one-project\n  namespace: argocd\nspec:\n  sourceNamespaces:\n  - team-one-cd\n```\n\n```yaml\nkind: AppProject\napiVersion: argoproj.io\/v1alpha1\nmetadata:\n  name: team-two-project\n  namespace: argocd\nspec:\n  sourceNamespaces:\n  - team-two-cd\n```\n\nCreating the following `ApplicationSet` generates two Applications, `infra-escalation` and `team-two-escalation`. Both will be rejected: as they are outside the `argocd` namespace, their projects' `sourceNamespaces` will be checked\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: team-one-product-one\n  namespace: team-one-cd\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n    - list:\n        elements:\n          - name: infra\n            project: infra-project\n          - name: team-two\n            project: team-two-project\n  template:\n    metadata:\n      name: '{{.name}}-escalation'\n    spec:\n      project: \"{{.project}}\"\n```\n  \n### ApplicationSet names\n\nFor the CLI, applicationSets are now referred to and displayed in the format `<namespace>\/<name>`. \n\nFor backwards compatibility, if the namespace of the ApplicationSet is the control plane's namespace (i.e. `argocd`), the `<namespace>` can be omitted from the applicationset name when referring to it. 
For example, the names `argocd\/someappset` and `someappset` are semantically the same and refer to the same ApplicationSet in the CLI and the UI.\n\n### ApplicationSets RBAC\n\nThe RBAC syntax for ApplicationSet objects has been changed from `<project>\/<applicationset>` to `<project>\/<namespace>\/<applicationset>` to accommodate the need to restrict access based on the source namespace of the ApplicationSet to be managed.\n\nFor backwards compatibility, ApplicationSets in the `argocd` namespace can still be referred to as `<project>\/<applicationset>` in the RBAC policy rules.\n\nWildcards do not make any distinction between project and ApplicationSet namespaces yet. For example, the following RBAC rule would match any ApplicationSet belonging to project `foo`, regardless of the namespace it is created in:\n\n```\np, somerole, applicationsets, get, foo\/*, allow\n```\n\nIf you want to restrict access to be granted only to `ApplicationSets` in project `foo` within namespace `bar`, the rule would need to be adapted as follows:\n\n```\np, somerole, applicationsets, get, foo\/bar\/*, allow\n```\n\n## Managing ApplicationSets in other namespaces\n\n### Using the CLI\n\nYou can use all existing Argo CD CLI commands for managing ApplicationSets in other namespaces, exactly as you would use the CLI to manage ApplicationSets in the control plane's namespace.\n\nFor example, to retrieve the `ApplicationSet` named `bar` in the namespace `foo`, you can use the following CLI command:\n\n```shell\nargocd appset get foo\/bar\n```\n\nLikewise, to manage this ApplicationSet, keep referring to it as `foo\/bar`:\n\n```bash\n# Delete the ApplicationSet\nargocd appset delete foo\/bar\n```\n\nThere is no change to the create command, as it uses a file. 
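\n\nFor illustration, a minimal such file might look like the sketch below (the names, repository, and generator used here are all assumptions):\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: my-appset\n  # The ApplicationSet's own namespace; it must be one of the namespaces\n  # enabled via --applicationset-namespaces.\n  namespace: some-namespace\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - list:\n      elements:\n      - cluster: in-cluster\n        url: https:\/\/kubernetes.default.svc\n  template:\n    metadata:\n      name: 'my-appset-guestbook'\n    spec:\n      project: default\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps\/\n        targetRevision: HEAD\n        path: guestbook\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: guestbook\n```\n\n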
You just need to add the namespace in the `metadata.namespace` field.\n\nAs stated previously, for ApplicationSets in Argo CD's control plane namespace, you can omit the namespace from the ApplicationSet name.\n\n### Using the REST API\n\nIf you are using the REST API, the namespace for an `ApplicationSet` cannot be specified as part of the ApplicationSet name; instead, it needs to be specified using the optional `appsetNamespace` query parameter. For example, to work with the `ApplicationSet` resource named `foo` in the namespace `bar`, the request would look as follows:\n\n```bash\nGET \/api\/v1\/applicationsets\/foo?appsetNamespace=bar\n```\n\nFor other operations such as `POST` and `PUT`, the `appsetNamespace` parameter must be part of the request's payload.\n\nFor `ApplicationSet` resources in the control plane namespace, this parameter can be omitted.\n\n## Cluster secrets consideration\n\nBy allowing ApplicationSets in any namespace, you must be aware that clusters can be discovered and used.\n\nExample: the following will discover all clusters\n\n```yaml\nspec:\n  generators:\n  - clusters: {} # Automatically use all clusters defined within Argo CD\n```\n\nIf you don't want to allow users to discover all clusters with ApplicationSets from other namespaces, you may consider deploying Argo CD in namespace-scoped mode or using OPA rules.","site":"argocd"}
{"questions":"argocd kind ApplicationSet The List generator generates parameters based on an arbitrary list of key value pairs as long as the values are string values In this example we re targeting a local cluster named apiVersion argoproj io v1alpha1 namespace argocd List Generator spec metadata yaml name guestbook","answers":"# List Generator\n\nThe List generator generates parameters based on an arbitrary list of key\/value pairs (as long as the values are string values). In this example, we're targeting a local cluster named `engineering-dev`:\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: guestbook\n  namespace: argocd\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - list:\n      elements:\n      - cluster: engineering-dev\n        url: https:\/\/kubernetes.default.svc\n      # - cluster: engineering-prod\n      #   url: https:\/\/kubernetes.default.svc\n  template:\n    metadata:\n      name: '-guestbook'\n    spec:\n      project: \"my-project\"\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n        targetRevision: HEAD\n        path: applicationset\/examples\/list-generator\/guestbook\/\n      destination:\n        server: ''\n        namespace: guestbook\n```\n(*The full example can be found [here](https:\/\/github.com\/argoproj\/argo-cd\/tree\/master\/applicationset\/examples\/list-generator).*)\n\nIn this example, the List generator passes the `url` and `cluster` fields as parameters into the template. If we wanted to add a second environment, we could uncomment the second element and the ApplicationSet controller would automatically target it with the defined application.\n\nWith the ApplicationSet v0.1.0 release, one could *only* specify `url` and `cluster` element fields (plus arbitrary `values`). 
As of ApplicationSet v0.2.0, any key\/value `element` pair is supported (which is also fully backwards compatible with the v0.1.0 form):\n```yaml\nspec:\n  generators:\n  - list:\n      elements:\n        # v0.1.0 form - requires cluster\/url keys:\n        - cluster: engineering-dev\n          url: https:\/\/kubernetes.default.svc\n          values:\n            additional: value\n        # v0.2.0+ form - does not require cluster\/URL keys\n        # (but they are still supported).\n        - staging: \"true\"\n          gitRepo: https:\/\/kubernetes.default.svc\n# (...)\n```\n\n!!! note \"Clusters must be predefined in Argo CD\"\n    These clusters *must* already be defined within Argo CD, in order to generate applications for these values. The ApplicationSet controller does not create clusters within Argo CD (for instance, it does not have the credentials to do so).\n\n## Dynamically generated elements\nThe List generator can also dynamically generate its elements from YAML\/JSON produced by a previous generator, such as the Git generator, by combining the two with a Matrix generator. 
In this example, we use the Matrix generator with a Git generator followed by a List generator, and pass the content of a file in Git as input to the `elementsYaml` field of the List generator:\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: elements-yaml\n  namespace: argocd\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - matrix:\n      generators:\n      - git:\n          repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n          revision: HEAD\n          files:\n          - path: applicationset\/examples\/list-generator\/list-elementsYaml-example.yaml\n      - list:\n          elementsYaml: \"\"\n  template:\n    metadata:\n      name: ''\n    spec:\n      project: default\n      syncPolicy:\n        automated:\n          selfHeal: true\n        syncOptions:\n        - CreateNamespace=true\n      sources:\n        - chart: ''\n          repoURL: ''\n          targetRevision: ''\n          helm:\n            releaseName: ''\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: ''\n```\n\nwhere the content of `list-elementsYaml-example.yaml` is:\n```yaml\nkey:\n  components:\n    - name: component1\n      chart: podinfo\n      version: \"6.3.2\"\n      releaseName: component1\n      repoUrl: \"https:\/\/stefanprodan.github.io\/podinfo\"\n      namespace: component1\n    - name: component2\n      chart: podinfo\n      version: \"6.3.3\"\n      releaseName: component2\n      repoUrl: \"ghcr.io\/stefanprodan\/charts\"\n      namespace: component2\n```","site":"argocd"}
{"questions":"argocd Cluster Generator name but normalized to contain only lowercase alphanumeric characters or For each cluster registered with Argo CD the Cluster generator produces parameters based on the list of items found within the cluster secret In Argo CD managed clusters in the Argo CD namespace The ApplicationSet controller uses those same Secrets to generate parameters to identify and target available clusters It automatically provides the following parameter values to the Application template for each cluster","answers":"# Cluster Generator\n\nIn Argo CD, managed clusters [are stored within Secrets](..\/..\/declarative-setup\/#clusters) in the Argo CD namespace. The ApplicationSet controller uses those same Secrets to generate parameters to identify and target available clusters.\n\nFor each cluster registered with Argo CD, the Cluster generator produces parameters based on the list of items found within the cluster secret.\n\nIt automatically provides the following parameter values to the Application template for each cluster:\n\n- `name`\n- `nameNormalized` *('name' but normalized to contain only lowercase alphanumeric characters, '-' or '.')*\n- `server`\n- `project` *(the Secret's 'project' field, if present; otherwise, it defaults to '')*\n- `metadata.labels.<key>` *(for each label in the Secret)*\n- `metadata.annotations.<key>` *(for each annotation in the Secret)*\n\n!!! note\n    Use the `nameNormalized` parameter if your cluster name contains characters (such as underscores) that are not valid for Kubernetes resource names. 
This prevents rendering invalid Kubernetes resources with names like `my_cluster-app1`, and instead would convert them to `my-cluster-app1`.\n\n\nWithin [Argo CD cluster Secrets](..\/..\/declarative-setup\/#clusters) are data fields describing the cluster:\n```yaml\nkind: Secret\ndata:\n  # Within Kubernetes these fields are actually encoded in Base64; they are decoded here for convenience.\n  # (They are likewise decoded when passed as parameters by the Cluster generator)\n  config: \"{'tlsClientConfig':{'insecure':false}}\"\n  name: \"in-cluster2\"\n  server: \"https:\/\/kubernetes.default.svc\"\nmetadata:\n  labels:\n    argocd.argoproj.io\/secret-type: cluster\n# (...)\n```\n\nThe Cluster generator will automatically identify clusters defined with Argo CD, and extract the cluster data as parameters:\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: guestbook\n  namespace: argocd\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - clusters: {} # Automatically use all clusters defined within Argo CD\n  template:\n    metadata:\n      name: '-guestbook' # 'name' field of the Secret\n    spec:\n      project: \"my-project\"\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps\/\n        targetRevision: HEAD\n        path: guestbook\n      destination:\n        server: '' # 'server' field of the secret\n        namespace: guestbook\n```\n(*The full example can be found [here](https:\/\/github.com\/argoproj\/argo-cd\/tree\/master\/applicationset\/examples\/cluster).*)\n\nIn this example, the cluster secret's `name` and `server` fields are used to populate the `Application` resource `name` and `server` (which are then used to target that same cluster).\n\n### Label selector\n\nA label selector may be used to narrow the scope of targeted clusters to only those matching a specific label:\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: 
ApplicationSet\nmetadata:\n  name: guestbook\n  namespace: argocd\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - clusters:\n      selector:\n        matchLabels:\n          staging: \"true\"\n        # The cluster generator also supports matchExpressions.\n        #matchExpressions:\n        #  - key: staging\n        #    operator: In\n        #    values:\n        #      - \"true\"\n  template:\n  # (...)\n```\n\nThis would match an Argo CD cluster secret containing:\n```yaml\napiVersion: v1\nkind: Secret\ndata:\n  # (... fields as above ...)\nmetadata:\n  labels:\n    argocd.argoproj.io\/secret-type: cluster\n    staging: \"true\"\n# (...)\n```\n\nThe cluster selector also supports set-based requirements, as used by [several core Kubernetes resources](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/#resources-that-support-set-based-requirements).\n\n### Deploying to the local cluster\n\nIn Argo CD, the 'local cluster' is the cluster upon which Argo CD (and the ApplicationSet controller) is installed. This is to distinguish it from 'remote clusters', which are those that are added to Argo CD [declaratively](..\/..\/declarative-setup\/#clusters) or via the [Argo CD CLI](..\/..\/getting_started.md\/#5-register-a-cluster-to-deploy-apps-to-optional).\n \nThe cluster generator will automatically target both local and non-local clusters, for every cluster that matches the cluster selector.\n\nIf you wish to target only remote clusters with your Applications (e.g. 
you want to exclude the local cluster), then use a cluster selector with labels, for example:\n```yaml\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - clusters:\n      selector:\n        matchLabels:\n          argocd.argoproj.io\/secret-type: cluster\n        # The cluster generator also supports matchExpressions.\n        #matchExpressions:\n        #  - key: staging\n        #    operator: In\n        #    values:\n        #      - \"true\"\n```\n\nThis selector will not match the default local cluster, since the default local cluster does not have a Secret (and thus does not have the `argocd.argoproj.io\/secret-type` label on that secret). Any cluster selector that selects on that label will automatically exclude the default local cluster.\n\nHowever, if you do wish to target both local and non-local clusters, while also using label matching, you can create a secret for the local cluster within the Argo CD web UI:\n\n1. Within the Argo CD web UI, select *Settings*, then *Clusters*.\n2. Select your local cluster, usually named `in-cluster`.\n3. Click the *Edit* button, and change the *NAME* of the cluster to another value, for example `in-cluster-local`. Any other value here is fine.\n4. Leave all other fields unchanged.\n5. Click *Save*.\n\nThese steps might seem counterintuitive, but the act of changing one of the default values for the local cluster causes the Argo CD Web UI to create a new secret for this cluster. In the Argo CD namespace, you should now see a Secret resource named `cluster-(cluster suffix)` with the label `argocd.argoproj.io\/secret-type: cluster`. You may also create a local [cluster secret declaratively](..\/..\/declarative-setup\/#clusters), or with the CLI using `argocd cluster add \"(context name)\" --in-cluster`, rather than through the Web UI.\n\n### Fetch clusters based on their K8s version\n\nThere is also the possibility to fetch clusters based upon their Kubernetes version. 
To do this, the label `argocd.argoproj.io\/auto-label-cluster-info` needs to be set to `true` on the cluster secret.\nOnce that has been set, the controller will dynamically label the cluster secret with the Kubernetes version it is running on. To retrieve that value, you need to use the\n`argocd.argoproj.io\/kubernetes-version` label, as the example below demonstrates:\n\n```yaml\nspec:\n  goTemplate: true\n  generators:\n  - clusters:\n      selector:\n        matchLabels:\n          argocd.argoproj.io\/kubernetes-version: \"1.28\"\n        # matchExpressions are also supported.\n        #matchExpressions:\n        #  - key: argocd.argoproj.io\/kubernetes-version\n        #    operator: In\n        #    values:\n        #      - \"1.27\"\n        #      - \"1.28\"\n```\n\n### Pass additional key-value pairs via `values` field\n\nYou may pass additional, arbitrary string key-value pairs via the `values` field of the cluster generator. Values added via the `values` field are added as `values.(field)`.\n\nIn this example, a `revision` parameter value is passed, based on matching labels on the cluster secret:\n```yaml\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - clusters:\n      selector:\n        matchLabels:\n          type: 'staging'\n      # A key-value map for arbitrary parameters\n      values:\n        revision: HEAD # staging clusters use HEAD branch\n  - clusters:\n      selector:\n        matchLabels:\n          type: 'production'\n      values:\n        # production uses a different revision value, for 'stable' branch\n        revision: stable\n  template:\n    metadata:\n      name: '-guestbook'\n    spec:\n      project: \"my-project\"\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps\/\n        # The cluster values field for each generator will be substituted here:\n        targetRevision: ''\n        path: guestbook\n      destination:\n        server: ''\n        namespace: 
guestbook\n```\n\nIn this example the `revision` value from the `generators.clusters` fields is passed into the template as `values.revision`, containing either `HEAD` or `stable` (based on which generator generated the set of parameters).\n\n!!! note\n    The `values.` prefix is always prepended to values provided via `generators.clusters.values` field. Ensure you include this prefix in the parameter name within the `template` when using it.\n\nIn `values` we can also interpolate the following parameter values (i.e. the same values as presented in the beginning of this page)\n\n- `name`\n- `nameNormalized` *('name' but normalized to contain only lowercase alphanumeric characters, '-' or '.')*\n- `server`\n- `metadata.labels.<key>` *(for each label in the Secret)*\n- `metadata.annotations.<key>` *(for each annotation in the Secret)*\n\nExtending the example above, we could do something like this:\n\n```yaml\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - clusters:\n      selector:\n        matchLabels:\n          type: 'staging'\n      # A key-value map for arbitrary parameters\n      values:\n        # If `my-custom-annotation` is in your cluster secret, `revision` will be substituted with it.\n        revision: '' \n        clusterName: ''\n  - clusters:\n      selector:\n        matchLabels:\n          type: 'production'\n      values:\n        # production uses a different revision value, for 'stable' branch\n        revision: stable\n        clusterName: ''\n  template:\n    metadata:\n      name: '-guestbook'\n    spec:\n      project: \"my-project\"\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps\/\n        # The cluster values field for each generator will be substituted here:\n        targetRevision: ''\n        path: guestbook\n      destination:\n        # In this case this is equivalent to just using \n        server: ''\n        namespace: guestbook\n```\n### Gather 
cluster information as a flat list\n\nYou may sometimes need to gather your clusters information, without having to deploy one application per cluster found.\nFor that, you can use the option `flatList` in the cluster generator.\n\nHere is an example of cluster generator using this option:\n```yaml\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - clusters:\n      selector:\n        matchLabels:\n          type: 'staging'\n      flatList: true\n  template:\n    metadata:\n      name: 'flat-list-guestbook'\n    spec:\n      project: \"my-project\"\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps\/\n        # The cluster values field for each generator will be substituted here:\n        targetRevision: 'HEAD'\n        path: helm-guestbook\n        helm:\n          values: |\n            clusters:\n            \n              - name: \n            \n      destination:\n        # In this case this is equivalent to just using \n        server: 'my-cluster'\n        namespace: guestbook\n```\n\nGiven that you have two cluster secrets matching with names cluster1 and cluster2, this would generate the **single** following Application:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: flat-list-guestbook\n  namespace: guestbook\nspec:\n  project: \"my-project\"\n  source:\n    repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps\/\n    targetRevision: 'HEAD'\n    path: helm-guestbook\n    helm:\n      values: |\n        clusters:\n          - name: cluster1\n          - name: cluster2\n```\n\nIn case you are using several cluster generators, each with the flatList option, one Application would be generated by cluster generator, as we can't simply merge values and templates that would potentially differ in each generator","site":"argocd","answers_cleaned":"  Cluster Generator  In Argo CD  managed clusters  are stored within Secrets        declarative setup 
"}
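The `nameNormalized` parameter mentioned above rewrites a cluster name into a form that is valid for Kubernetes resource names (only lowercase alphanumerics, `-` or `.`). As a rough illustration only — this is a hedged Python sketch with a function name of our choosing, not the controller's actual Go implementation — the normalization can be thought of as:

```python
import re

def name_normalized(name: str) -> str:
    """Sketch of the Cluster generator's `nameNormalized` parameter:
    lowercase the cluster name, then replace any character that is not
    a lowercase alphanumeric, '-' or '.' (e.g. underscores) with '-'."""
    normalized = re.sub(r"[^a-z0-9.-]", "-", name.lower())
    # Kubernetes names must start and end with an alphanumeric character.
    return normalized.strip("-.")

print(name_normalized("my_cluster"))  # -> my-cluster
```

With something like this, a template such as `{{.nameNormalized}}-app1` for a cluster named `my_cluster` renders `my-cluster-app1` instead of an invalid resource name like `my_cluster-app1`.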
{"questions":"argocd kind ApplicationSet apiVersion argoproj io v1alpha1 Pull Request Generator spec metadata yaml The Pull Request generator uses the API of an SCMaaS provider GitHub Gitea or Bitbucket Server to automatically discover open pull requests within a repository This fits well with the style of building a test environment when you create a pull request name myapps","answers":"# Pull Request Generator\n\nThe Pull Request generator uses the API of an SCMaaS provider (GitHub, Gitea, or Bitbucket Server) to automatically discover open pull requests within a repository. This fits well with the style of building a test environment when you create a pull request.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - pullRequest:\n      # When using a Pull Request generator, the ApplicationSet controller polls every `requeueAfterSeconds` interval (defaulting to every 30 minutes) to detect changes.\n      requeueAfterSeconds: 1800\n      # See below for provider specific options.\n      github:\n        # ...\n```\n\n!!! 
note\n    Know the security implications of PR generators in ApplicationSets.\n    [Only admins may create ApplicationSets](.\/Security.md#only-admins-may-createupdatedelete-applicationsets) to avoid\n    leaking Secrets, and [only admins may create PRs](.\/Security.md#templated-project-field) if the `project` field of\n    an ApplicationSet with a PR generator is templated, to avoid granting management of out-of-bounds resources.\n\n## GitHub\n\nSpecify the repository from which to fetch the GitHub pull requests.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - pullRequest:\n      github:\n        # The GitHub organization or user.\n        owner: myorg\n        # The GitHub repository\n        repo: myrepository\n        # For GitHub Enterprise (optional)\n        api: https:\/\/git.example.com\/\n        # Reference to a Secret containing an access token. (optional)\n        tokenRef:\n          secretName: github-token\n          key: token\n        # (optional) use a GitHub App to access the API instead of a PAT.\n        appSecretName: github-app-repo-creds\n        # Labels are used to filter the PRs that you want to target. (optional)\n        labels:\n        - preview\n      requeueAfterSeconds: 1800\n  template:\n  # ...\n```\n\n* `owner`: Required name of the GitHub organization or user.\n* `repo`: Required name of the GitHub repository.\n* `api`: If using GitHub Enterprise, the URL to access it. (Optional)\n* `tokenRef`: A `Secret` name and key containing the GitHub access token to use for requests. If not specified, the generator will make anonymous requests, which have a lower rate limit and can only see public repositories. (Optional)\n* `labels`: Filter the PRs to those containing **all** of the labels listed. 
(Optional)\n* `appSecretName`: A `Secret` name containing a GitHub App secret in [repo-creds format][repo-creds].\n\n[repo-creds]: ..\/declarative-setup.md#repository-credentials\n\n## GitLab\n\nSpecify the project from which to fetch the GitLab merge requests.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - pullRequest:\n      gitlab:\n        # The GitLab project ID.\n        project: \"12341234\"\n        # For self-hosted GitLab (optional)\n        api: https:\/\/git.example.com\/\n        # Reference to a Secret containing an access token. (optional)\n        tokenRef:\n          secretName: gitlab-token\n          key: token\n        # Labels are used to filter the MRs that you want to target. (optional)\n        labels:\n        - preview\n        # MR state is used to filter MRs only with a certain state. (optional)\n        pullRequestState: opened\n        # If true, skips validating the SCM provider's TLS certificate - useful for self-signed certificates.\n        insecure: false\n        # Reference to a ConfigMap containing trusted CA certs - useful for self-signed certificates. (optional)\n        caRef:\n          configMapName: argocd-tls-certs-cm\n          key: gitlab-ca\n      requeueAfterSeconds: 1800\n  template:\n  # ...\n```\n\n* `project`: Required project ID of the GitLab project.\n* `api`: If using self-hosted GitLab, the URL to access it. (Optional)\n* `tokenRef`: A `Secret` name and key containing the GitLab access token to use for requests. If not specified, the generator will make anonymous requests, which have a lower rate limit and can only see public repositories. (Optional)\n* `labels`: Labels are used to filter the MRs that you want to target. (Optional)\n* `pullRequestState`: An additional MR filter to select only MRs with a certain state. 
Default: \"\" (all states)\n* `insecure`: If true, skips validating the SCM provider's TLS certificate (default: false) - useful for self-signed certificates.\n* `caRef`: Optional `ConfigMap` name and key containing the GitLab certificates to trust - useful for self-signed TLS certificates. This may reference the Argo CD ConfigMap that holds the trusted certificates.\n\nAs a preferable alternative to setting `insecure` to true, you can configure self-signed TLS certificates for GitLab by [mounting the self-signed certificate to the applicationset controller](.\/Generators-SCM-Provider.md#self-signed-tls-certificates).\n\n## Gitea\n\nSpecify the repository from which to fetch the Gitea pull requests.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - pullRequest:\n      gitea:\n        # The Gitea organization or user.\n        owner: myorg\n        # The Gitea repository\n        repo: myrepository\n        # The Gitea URL to use\n        api: https:\/\/gitea.mydomain.com\/\n        # Reference to a Secret containing an access token. (optional)\n        tokenRef:\n          secretName: gitea-token\n          key: token\n        # Many Gitea deployments use TLS, but self-hosted instances often use self-signed certificates\n        insecure: true\n      requeueAfterSeconds: 1800\n  template:\n  # ...\n```\n\n* `owner`: Required name of the Gitea organization or user.\n* `repo`: Required name of the Gitea repository.\n* `api`: The URL of the Gitea instance.\n* `tokenRef`: A `Secret` name and key containing the Gitea access token to use for requests. If not specified, the generator will make anonymous requests, which have a lower rate limit and can only see public repositories. 
(Optional)\n* `insecure`: Allow for self-signed certificates, primarily for testing.\n\n## Bitbucket Server\n\nFetch pull requests from a repo hosted on a Bitbucket Server (not the same as Bitbucket Cloud).\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - pullRequest:\n      bitbucketServer:\n        project: myproject\n        repo: myrepository\n        # URL of the Bitbucket Server. Required.\n        api: https:\/\/mycompany.bitbucket.org\n        # Credentials for Basic authentication (App Password). Either basicAuth or bearerToken\n        # authentication is required to access private repositories\n        basicAuth:\n          # The username to authenticate with\n          username: myuser\n          # Reference to a Secret containing the password or personal access token.\n          passwordRef:\n            secretName: mypassword\n            key: password\n        # Credentials for Bearer Token (App Token) authentication. Either basicAuth or bearerToken\n        # authentication is required to access private repositories\n        bearerToken:\n          # Reference to a Secret containing the bearer token.\n          tokenRef:\n            secretName: repotoken\n            key: token\n        # If true, skips validating the SCM provider's TLS certificate - useful for self-signed certificates.\n        insecure: true\n        # Reference to a ConfigMap containing trusted CA certs - useful for self-signed certificates. (optional)\n        caRef:\n          configMapName: argocd-tls-certs-cm\n          key: bitbucket-ca\n      # Labels are not supported by Bitbucket Server, so filtering by label is not possible.\n      # Filter PRs using the source branch name. 
(optional)\n      filters:\n      - branchMatch: \".*-argocd\"\n  template:\n  # ...\n```\n\n* `project`: Required name of the Bitbucket project.\n* `repo`: Required name of the Bitbucket repository.\n* `api`: Required URL to access the Bitbucket REST API. For the example above, an API request would be made to `https:\/\/mycompany.bitbucket.org\/rest\/api\/1.0\/projects\/myproject\/repos\/myrepository\/pull-requests`.\n* `branchMatch`: Optional regexp filter which should match the source branch name. This is an alternative to labels, which are not supported by Bitbucket Server.\n\nIf you want to access a private repository, you must also provide credentials, using either Basic auth or a bearer token.\n\nTo use Basic auth (App Password), use the `basicAuth` section:\n* `username`: The username to authenticate with. It only needs read access to the relevant repo.\n* `passwordRef`: A `Secret` name and key containing the password or personal access token to use for requests.\n\nTo use a Bitbucket App Token, use the `bearerToken` section:\n* `tokenRef`: A `Secret` name and key containing the app token to use for requests.\n\nFor self-signed Bitbucket Server certificates, the following options can be useful:\n* `insecure`: If true, skips validating the SCM provider's TLS certificate (default: false) - useful for self-signed certificates.\n* `caRef`: Optional `ConfigMap` name and key containing the Bitbucket Server certificates to trust - useful for self-signed TLS certificates. This may reference the Argo CD ConfigMap that holds the trusted certificates.\n\n## Bitbucket Cloud\n\nFetch pull requests from a repo hosted on Bitbucket Cloud.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n    - pullRequest:\n        bitbucket:\n          # Workspace name under which the repository is stored. Required.\n          owner: myproject\n          # Repository slug. 
Required.\n          repo: myrepository\n          # URL of the Bitbucket Cloud API. (optional) Will default to 'https:\/\/api.bitbucket.org\/2.0'.\n          api: https:\/\/api.bitbucket.org\/2.0\n          # Credentials for Basic authentication (App Password). Either basicAuth or bearerToken\n          # authentication is required to access private repositories\n          basicAuth:\n            # The username to authenticate with\n            username: myuser\n            # Reference to a Secret containing the password or personal access token.\n            passwordRef:\n              secretName: mypassword\n              key: password\n          # Credentials for Bearer Token (App Token) authentication. Either basicAuth or bearerToken\n          # authentication is required to access private repositories\n          bearerToken:\n            # Reference to a Secret containing the bearer token.\n            tokenRef:\n              secretName: repotoken\n              key: token\n        # Labels are not supported by Bitbucket Cloud, so filtering by label is not possible.\n        # Filter PRs using the source branch name. (optional)\n        filters:\n          - branchMatch: \".*-argocd\"\n  template:\n  # ...\n```\n\n- `owner`: Required name of the Bitbucket workspace.\n- `repo`: Required name of the Bitbucket repository.\n- `api`: Optional URL to access the Bitbucket REST API. For the example above, an API request would be made to `https:\/\/api.bitbucket.org\/2.0\/repositories\/{workspace}\/{repo_slug}\/pullrequests`. If not set, defaults to `https:\/\/api.bitbucket.org\/2.0`.\n- `branchMatch`: Optional regexp filter which should match the source branch name. This is an alternative to labels, which are not supported by Bitbucket Cloud.\n\nIf you want to access a private repository, Argo CD will need credentials to access the repository in Bitbucket Cloud. 
You can use a Bitbucket App Password (generated per user, with access to the whole workspace), or a Bitbucket App Token (generated per repository, with access limited to the repository scope only). If both an App Password and an App Token are defined, the App Token will be used.\n\nTo use a Bitbucket App Password, use the `basicAuth` section.\n- `username`: The username to authenticate with. It only needs read access to the relevant repo.\n- `passwordRef`: A `Secret` name and key containing the password or personal access token to use for requests.\n\nTo use a Bitbucket App Token, use the `bearerToken` section.\n- `tokenRef`: A `Secret` name and key containing the app token to use for requests.\n\n## Azure DevOps\n\nSpecify the organization, project, and repository from which you want to fetch pull requests.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - pullRequest:\n      azuredevops:\n        # Azure DevOps org to scan. Required.\n        organization: myorg\n        # Azure DevOps project name to scan. Required.\n        project: myproject\n        # Azure DevOps repo name to scan. Required.\n        repo: myrepository\n        # The Azure DevOps API URL to talk to. If blank, use https:\/\/dev.azure.com\/.\n        api: https:\/\/dev.azure.com\/\n        # Reference to a Secret containing an access token. (optional)\n        tokenRef:\n          secretName: azure-devops-token\n          key: token\n        # Labels are used to filter the PRs that you want to target. (optional)\n        labels:\n        - preview\n      requeueAfterSeconds: 1800\n  template:\n  # ...\n```\n\n* `organization`: Required name of the Azure DevOps organization.\n* `project`: Required name of the Azure DevOps project.\n* `repo`: Required name of the Azure DevOps repository.\n* `api`: If using self-hosted Azure DevOps Repos, the URL to access it. 
(Optional)\n* `tokenRef`: A `Secret` name and key containing the Azure DevOps access token to use for requests. If not specified, the generator will make anonymous requests, which have a lower rate limit and can only see public repositories. (Optional)\n* `labels`: Filter the PRs to those containing **all** of the labels listed. (Optional)\n\n## Filters\n\nFilters allow selecting which pull requests to generate for. Each filter can declare one or more conditions, all of which must pass. If multiple filters are present, any can match for a pull request to be included. If no filters are specified, all pull requests will be processed.\nCurrently, only a subset of filters is available compared with the [SCM provider](Generators-SCM-Provider.md) filters.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - pullRequest:\n      # ...\n      # Include any pull request whose source branch ends with \"argocd\". 
(optional)\n      filters:\n      - branchMatch: \".*-argocd\"\n  template:\n  # ...\n```\n\n* `branchMatch`: A regexp matched against source branch names.\n* `targetBranchMatch`: A regexp matched against target branch names.\n\n[GitHub](#github) and [GitLab](#gitlab) also support a `labels` filter.\n\n## Template\n\nAs with all generators, several keys are available for replacement in the generated application.\n\nThe following is a comprehensive Helm Application example:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - pullRequest:\n    # ...\n  template:\n    metadata:\n      name: 'myapp-{{.branch}}-{{.number}}'\n    spec:\n      source:\n        repoURL: 'https:\/\/github.com\/myorg\/myrepo.git'\n        targetRevision: '{{.head_sha}}'\n        path: kubernetes\/\n        helm:\n          parameters:\n          - name: \"image.tag\"\n            value: \"pull-{{.branch_slug}}-{{.number}}\"\n      project: \"my-project\"\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: default\n```\n\nAnd here is a robust Kustomize example:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - pullRequest:\n    # ...\n  template:\n    metadata:\n      name: 'myapp-{{.branch}}-{{.number}}'\n    spec:\n      source:\n        repoURL: 'https:\/\/github.com\/myorg\/myrepo.git'\n        targetRevision: '{{.head_sha}}'\n        path: kubernetes\/\n        kustomize:\n          nameSuffix: '{{.branch_slug}}'\n          commonLabels:\n            app.kubernetes.io\/instance: '{{.branch_slug}}-{{.number}}'\n          images:\n          - 'ghcr.io\/myorg\/myrepo:{{.branch_slug}}-{{.number}}'\n      project: \"my-project\"\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: default\n```\n\n* `number`: The ID number of the pull request.\n* `title`: The title of the pull request.\n* `branch`: The name of the 
branch of the pull request head.\n* `branch_slug`: The branch name will be cleaned to conform to the DNS label standard as defined in [RFC 1123](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#dns-label-names), and truncated to 50 characters to leave room for appending a suffix of up to 13 more characters.\n* `target_branch`: The name of the target branch of the pull request.\n* `target_branch_slug`: The target branch name will be cleaned to conform to the DNS label standard as defined in [RFC 1123](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#dns-label-names), and truncated to 50 characters to leave room for appending a suffix of up to 13 more characters.\n* `head_sha`: This is the SHA of the head of the pull request.\n* `head_short_sha`: This is the short SHA of the head of the pull request (8 characters long or the length of the head SHA if it's shorter).\n* `head_short_sha_7`: This is the short SHA of the head of the pull request (7 characters long or the length of the head SHA if it's shorter).\n* `labels`: The array of pull request labels. (Supported only for Go Template ApplicationSet manifests.)\n* `author`: The author\/creator of the pull request.\n\n## Webhook Configuration\n\nWhen using a Pull Request generator, the ApplicationSet controller polls every `requeueAfterSeconds` interval (defaulting to every 30 minutes) to detect changes. To eliminate this delay from polling, the ApplicationSet webhook server can be configured to receive webhook events, which will trigger Application generation by the Pull Request generator.\n\nThe configuration is almost the same as the one described [in the Git generator](Generators-Git.md), but there is one difference: if you want to use the Pull Request Generator as well, additionally configure the following settings.\n\n!!! note\n    The ApplicationSet controller webhook does not use the same webhook as the API server as defined [here](..\/webhook.md). 
ApplicationSet exposes a webhook server as a service of type ClusterIP. An ApplicationSet-specific Ingress resource needs to be created to expose this service to the webhook source.\n\n### Github webhook configuration\n\nIn section 1, _\"Create the webhook in the Git provider\"_, add an event so that a webhook request will be sent when a pull request is created, closed, or has its labels changed.\n\nAdd the webhook URL with URI `\/api\/webhook` and select the content type `json`.\n![Add Webhook URL](..\/..\/assets\/applicationset\/webhook-config-pullrequest-generator.png \"Add Webhook URL\")\n\nSelect `Let me select individual events` and enable the checkbox for `Pull requests`.\n\n![Add Webhook](..\/..\/assets\/applicationset\/webhook-config-pull-request.png \"Add Webhook Pull Request\")\n\nThe Pull Request Generator will requeue when any of the following actions occurs:\n\n- `opened`\n- `closed`\n- `reopened`\n- `labeled`\n- `unlabeled`\n- `synchronized`\n\nFor more information about each event, please refer to the [official documentation](https:\/\/docs.github.com\/en\/developers\/webhooks-and-events\/webhooks\/webhook-events-and-payloads).\n\n### Gitlab webhook configuration\n\nEnable the checkbox for \"Merge request events\" in the triggers list.\n\n![Add Gitlab Webhook](..\/..\/assets\/applicationset\/webhook-config-merge-request-gitlab.png \"Add Gitlab Merge request Webhook\")\n\nThe Pull Request Generator will requeue when any of the following actions occurs:\n\n- `open`\n- `close`\n- `reopen`\n- `update`\n- `merge`\n\nFor more information about each event, please refer to the [official documentation](https:\/\/docs.gitlab.com\/ee\/user\/project\/integrations\/webhook_events.html#merge-request-events).\n\n## Lifecycle\n\nAn Application will be generated when a Pull Request is discovered and the configured criteria are met - i.e., for GitHub, when a Pull Request matches the specified `labels` and\/or `pullRequestState`. 
Application will be removed when a Pull Request no longer meets the specified criteria.","site":"argocd","answers_cleaned":"  Pull Request Generator  The Pull Request generator uses the API of an SCMaaS provider  GitHub  Gitea  or Bitbucket Server  to automatically discover open pull requests within a repository  This fits well with the style of building a test environment when you create a pull request      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    goTemplate  true   goTemplateOptions    missingkey error     generators      pullRequest          When using a Pull Request generator  the ApplicationSet controller polls every  requeueAfterSeconds  interval  defaulting to every 30 minutes  to detect changes        requeueAfterSeconds  1800         See below for provider specific options        github                         note     Know the security implications of PR generators in ApplicationSets       Only admins may create ApplicationSets    Security md only admins may createupdatedelete applicationsets  to avoid     leaking Secrets  and  only admins may create PRs    Security md templated project field  if the  project  field of     an ApplicationSet with a PR generator is templated  to avoid granting management of out of bounds resources      GitHub  Specify the repository from which to fetch the GitHub Pull requests      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    goTemplate  true   goTemplateOptions    missingkey error     generators      pullRequest        github            The GitHub organization or user          owner  myorg           The Github repository         repo  myrepository           For GitHub Enterprise  optional          api  https   git example com            Reference to a Secret containing an access token   optional          tokenRef            secretName  github token           key  token            optional  use a GitHub App to access the API 
instead of a PAT          appSecretName  github app repo creds           Labels is used to filter the PRs that you want to target   optional          labels            preview       requeueAfterSeconds  1800   template                  owner   Required name of the GitHub organization or user     repo   Required name of the GitHub repository     api   If using GitHub Enterprise  the URL to access it   Optional     tokenRef   A  Secret  name and key containing the GitHub access token to use for requests  If not specified  will make anonymous requests which have a lower rate limit and can only see public repositories   Optional     labels   Filter the PRs to those containing   all   of the labels listed   Optional     appSecretName   A  Secret  name containing a GitHub App secret in  repo creds format  repo creds     repo creds      declarative setup md repository credentials     GitLab  Specify the project from which to fetch the GitLab merge requests      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    goTemplate  true   goTemplateOptions    missingkey error     generators      pullRequest        gitlab            The GitLab project ID          project   12341234            For self hosted GitLab  optional          api  https   git example com            Reference to a Secret containing an access token   optional          tokenRef            secretName  gitlab token           key  token           Labels is used to filter the MRs that you want to target   optional          labels            preview           MR state is used to filter MRs only with a certain state   optional          pullRequestState  opened           If true  skips validating the SCM provider s TLS certificate   useful for self signed certificates          insecure  false           Reference to a ConfigMap containing trusted CA certs   useful for self signed certificates   optional          caRef            configMapName  argocd tls certs cm           key  
gitlab ca       requeueAfterSeconds  1800   template                  project   Required project ID of the GitLab project     api   If using self hosted GitLab  the URL to access it   Optional     tokenRef   A  Secret  name and key containing the GitLab access token to use for requests  If not specified  will make anonymous requests which have a lower rate limit and can only see public repositories   Optional     labels   Labels is used to filter the MRs that you want to target   Optional     pullRequestState   PullRequestState is an additional MRs filter to get only those with a certain state  Default      all states     insecure   By default  false    Skip checking the validity of the SCM s certificate   useful for self signed TLS certificates     caRef   Optional  ConfigMap  name and key containing the GitLab certificates to trust   useful for self signed TLS certificates  Possibly reference the ArgoCD CM holding the trusted certs   As a preferable alternative to setting  insecure  to true  you can configure self signed TLS certificates for Gitlab by  mounting self signed certificate to the applicationset controller    Generators SCM Provider md self signed tls certificates       Gitea  Specify the repository from which to fetch the Gitea Pull requests      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    goTemplate  true   goTemplateOptions    missingkey error     generators      pullRequest        gitea            The Gitea organization or user          owner  myorg           The Gitea repository         repo  myrepository           The Gitea url to use         api  https   gitea mydomain com            Reference to a Secret containing an access token   optional          tokenRef            secretName  gitea token           key  token           many gitea deployments use TLS  but many are self hosted and self signed certificates         insecure  true       requeueAfterSeconds  1800   template                  owner   
Required name of the Gitea organization or user     repo   Required name of the Gitea repository     api   The url of the Gitea instance     tokenRef   A  Secret  name and key containing the Gitea access token to use for requests  If not specified  will make anonymous requests which have a lower rate limit and can only see public repositories   Optional     insecure    Allow for self signed certificates  primarily for testing       Bitbucket Server  Fetch pull requests from a repo hosted on a Bitbucket Server  not the same as Bitbucket Cloud       yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    goTemplate  true   goTemplateOptions    missingkey error     generators      pullRequest        bitbucketServer          project  myproject         repo  myrepository           URL of the Bitbucket Server  Required          api  https   mycompany bitbucket org           Credentials for Basic authentication  App Password   Either basicAuth or bearerToken           authentication is required to access private repositories         basicAuth              The username to authenticate with           username  myuser             Reference to a Secret containing the password or personal access token            passwordRef              secretName  mypassword             key  password           Credentials for Bearer Token  App Token  authentication  Either basicAuth or bearerToken           authentication is required to access private repositories         bearerToken              Reference to a Secret containing the bearer token            tokenRef              secretName  repotoken             key  token           If true  skips validating the SCM provider s TLS certificate   useful for self signed certificates          insecure  true           Reference to a ConfigMap containing trusted CA certs   useful for self signed certificates   optional          caRef            configMapName  argocd tls certs cm           key  bitbucket ca         
Labels are not supported by Bitbucket Server  so filtering by label is not possible          Filter PRs using the source branch name   optional        filters          branchMatch      argocd    template                  project   Required name of the Bitbucket project    repo   Required name of the Bitbucket repository     api   Required URL to access the Bitbucket REST API  For the example above  an API request would be made to  https   mycompany bitbucket org rest api 1 0 projects myproject repos myrepository pull requests     branchMatch   Optional regexp filter which should match the source branch name  This is an alternative to labels which are not supported by Bitbucket server   If you want to access a private repository  you must also provide the credentials for Basic auth  this is the only auth supported currently      username   The username to authenticate with  It only needs read access to the relevant repo     passwordRef   A  Secret  name and key containing the password or personal access token to use for requests   In case of Bitbucket App Token  go with  bearerToken  section     tokenRef   A  Secret  name and key containing the app token to use for requests   In case self signed BitBucket Server certificates  the following options can be usefully     insecure   By default  false    Skip checking the validity of the SCM s certificate   useful for self signed TLS certificates     caRef   Optional  ConfigMap  name and key containing the BitBucket server certificates to trust   useful for self signed TLS certificates  Possibly reference the ArgoCD CM holding the trusted certs      Bitbucket Cloud  Fetch pull requests from a repo hosted on a Bitbucket Cloud      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    goTemplate  true   goTemplateOptions    missingkey error     generators        pullRequest          bitbucket              Workspace name where the repoistory is stored under  Required            owner  
myproject             Repository slug  Required            repo  myrepository             URL of the Bitbucket Server   optional  Will default to  https   api bitbucket org 2 0             api  https   api bitbucket org 2 0             Credentials for Basic authentication  App Password   Either basicAuth or bearerToken             authentication is required to access private repositories           basicAuth                The username to authenticate with             username  myuser               Reference to a Secret containing the password or personal access token              passwordRef                secretName  mypassword               key  password             Credentials for Bearer Token  App Token  authentication  Either basicAuth or bearerToken             authentication is required to access private repositories           bearerToken                Reference to a Secret containing the bearer token              tokenRef                secretName  repotoken               key  token           Labels are not supported by Bitbucket Cloud  so filtering by label is not possible            Filter PRs using the source branch name   optional          filters              branchMatch      argocd    template                  owner   Required name of the Bitbucket workspace    repo   Required name of the Bitbucket repository     api   Optional URL to access the Bitbucket REST API  For the example above  an API request would be made to  https   api bitbucket org 2 0 repositories  workspace   repo slug  pullrequests   If not set  defaults to  https   api bitbucket org 2 0     branchMatch   Optional regexp filter which should match the source branch name  This is an alternative to labels which are not supported by Bitbucket server   If you want to access a private repository  Argo CD will need credentials to access repository in Bitbucket Cloud  You can use Bitbucket App Password  generated per user  with access to whole workspace   or Bitbucket App Token  generated 
per repository  with access limited to repository scope only   If both App Password and App Token are defined  App Token will be used   To use Bitbucket App Password  use  basicAuth  section     username   The username to authenticate with  It only needs read access to the relevant repo     passwordRef   A  Secret  name and key containing the password or personal access token to use for requests   In case of Bitbucket App Token  go with  bearerToken  section     tokenRef   A  Secret  name and key containing the app token to use for requests      Azure DevOps  Specify the organization  project and repository from which you want to fetch pull requests      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    goTemplate  true   goTemplateOptions    missingkey error     generators      pullRequest        azuredevops            Azure DevOps org to scan  Required          organization  myorg           Azure DevOps project name to scan  Required          project  myproject           Azure DevOps repo name to scan  Required          repo  myrepository           The Azure DevOps API URL to talk to  If blank  use https   dev azure com           api  https   dev azure com            Reference to a Secret containing an access token   optional          tokenRef            secretName  azure devops token           key  token           Labels is used to filter the PRs that you want to target   optional          labels            preview       requeueAfterSeconds  1800   template                  organization   Required name of the Azure DevOps organization     project   Required name of the Azure DevOps project     repo   Required name of the Azure DevOps repository     api   If using self hosted Azure DevOps Repos  the URL to access it   Optional     tokenRef   A  Secret  name and key containing the Azure DevOps access token to use for requests  If not specified  will make anonymous requests which have a lower rate limit and can only see 
public repositories   Optional     labels   Filter the PRs to those containing   all   of the labels listed   Optional      Filters  Filters allow selecting which pull requests to generate for  Each filter can declare one or more conditions  all of which must pass  If multiple filters are present  any can match for a repository to be included  If no filters are specified  all pull requests will be processed  Currently  only a subset of filters is available when comparing with  SCM provider  Generators SCM Provider md  filters      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    goTemplate  true   goTemplateOptions    missingkey error     generators      pullRequest                      Include any pull request ending with  argocd    optional        filters          branchMatch      argocd    template                  branchMatch   A regexp matched against source branch names     targetBranchMatch   A regexp matched against target branch names    GitHub   github  and  GitLab   gitlab  also support a  labels  filter      Template  As with all generators  several keys are available for replacement in the generated application   The following is a comprehensive Helm Application example      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    goTemplate  true   goTemplateOptions    missingkey error     generators      pullRequest              template      metadata        name   myapp        spec        source          repoURL   https   github com myorg myrepo git          targetRevision             path  kubernetes          helm            parameters              name   image tag              value   pull          project   my project        destination          server  https   kubernetes default svc         namespace  default      And  here is a robust Kustomize example      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    goTemplate  true   
goTemplateOptions    missingkey error     generators      pullRequest              template      metadata        name   myapp        spec        source          repoURL   https   github com myorg myrepo git          targetRevision             path  kubernetes          kustomize            nameSuffix               commonLabels              app kubernetes io instance                images               ghcr io myorg myrepo          project   my project        destination          server  https   kubernetes default svc         namespace  default         number   The ID number of the pull request     title   The title of the pull request     branch   The name of the branch of the pull request head     branch slug   The branch name will be cleaned to be conform to the DNS label standard as defined in  RFC 1123  https   kubernetes io docs concepts overview working with objects names  dns label names   and truncated to 50 characters to give room to append suffix ing it with 13 more characters     target branch   The name of the target branch of the pull request     target branch slug   The target branch name will be cleaned to be conform to the DNS label standard as defined in  RFC 1123  https   kubernetes io docs concepts overview working with objects names  dns label names   and truncated to 50 characters to give room to append suffix ing it with 13 more characters     head sha   This is the SHA of the head of the pull request     head short sha   This is the short SHA of the head of the pull request  8 characters long or the length of the head SHA if it s shorter      head short sha 7   This is the short SHA of the head of the pull request  7 characters long or the length of the head SHA if it s shorter      labels   The array of pull request labels   Supported only for Go Template ApplicationSet manifests      author   The author creator of the pull request      Webhook Configuration  When using a Pull Request generator  the ApplicationSet controller polls every  
requeueAfterSeconds  interval  defaulting to every 30 minutes  to detect changes  To eliminate this delay from polling  the ApplicationSet webhook server can be configured to receive webhook events  which will trigger Application generation by the Pull Request generator   The configuration is almost the same as the one described  in the Git generator  Generators Git md   but there is one difference  if you want to use the Pull Request Generator as well  additionally configure the following settings       note     The ApplicationSet controller webhook does not use the same webhook as the API server as defined  here     webhook md   ApplicationSet exposes a webhook server as a service of type ClusterIP  An ApplicationSet specific Ingress resource needs to be created to expose this service to the webhook source       Github webhook configuration  In section 1    Create the webhook in the Git provider    add an event so that a webhook request will be sent when a pull request is created  closed  or label changed   Add Webhook URL with uri   api webhook  and select content type as json   Add Webhook URL        assets applicationset webhook config pullrequest generator png  Add Webhook URL    Select  Let me select individual events  and enable the checkbox for  Pull requests      Add Webhook        assets applicationset webhook config pull request png  Add Webhook Pull Request    The Pull Request Generator will requeue when the next action occurs      opened     closed     reopened     labeled     unlabeled     synchronized   For more information about each event  please refer to the  official documentation  https   docs github com en developers webhooks and events webhooks webhook events and payloads        Gitlab webhook configuration  Enable checkbox for  Merge request events  in triggers list     Add Gitlab Webhook        assets applicationset webhook config merge request gitlab png  Add Gitlab Merge request Webhook    The Pull Request Generator will requeue when the 
next action occurs      open     close     reopen     update     merge   For more information about each event  please refer to the  official documentation  https   docs gitlab com ee user project integrations webhook events html merge request events       Lifecycle  An Application will be generated when a Pull Request is discovered when the configured criteria is met   i e  for GitHub when a Pull Request matches the specified  labels  and or  pullRequestState   Application will be removed when a Pull Request no longer meets the specified criteria "}
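A minimal end-to-end sketch tying together the `pullRequest` generator, the template variables, and the webhook-driven lifecycle described above; the organization, repository, label, and naming scheme here are illustrative assumptions, not values from the documentation:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: preview-apps
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
  - pullRequest:
      github:
        owner: myorg            # assumed organization
        repo: myrepository      # assumed repository
        labels:
        - preview               # only PRs carrying this label generate Applications
      # Polling fallback; a configured webhook removes this delay
      requeueAfterSeconds: 1800
  template:
    metadata:
      # branch_slug is DNS-label-safe and truncated, so it can be embedded in names
      name: 'preview-{{.branch_slug}}-{{.number}}'
    spec:
      project: default
      source:
        repoURL: 'https://github.com/myorg/myrepository.git'
        # Pin to the PR head commit so each push re-deploys the preview
        targetRevision: '{{.head_sha}}'
        path: kubernetes
      destination:
        server: https://kubernetes.default.svc
        # One namespace per pull request; removed with the Application when the PR closes
        namespace: 'preview-{{.branch_slug}}'
      syncPolicy:
        syncOptions:
        - CreateNamespace=true
```

When the pull request is merged or closed and no longer matches the criteria, the generated Application (and its preview namespace resources) is removed, as described in the Lifecycle section.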
{"questions":"argocd The ApplicationSet controller supports a number of settings that limit the ability of the controller to make changes to generated Applications for example preventing the controller from deleting child Applications Dry run prevent ApplicationSet from creating modifying or deleting all Applications These settings allow you to exert control over when and how changes are made to your Applications and to their corresponding cluster resources etc Here are some of the controller settings that may be modified to alter the ApplicationSet controller s resource handling behaviour Controlling if when the ApplicationSet controller modifies resources","answers":"# Controlling if\/when the ApplicationSet controller modifies `Application` resources\n\nThe ApplicationSet controller supports a number of settings that limit the ability of the controller to make changes to generated Applications, for example, preventing the controller from deleting child Applications.\n\nThese settings allow you to exert control over when, and how, changes are made to your Applications, and to their corresponding cluster resources (`Deployments`, `Services`, etc).\n\nHere are some of the controller settings that may be modified to alter the ApplicationSet controller's resource-handling behaviour.\n\n## Dry run: prevent ApplicationSet from creating, modifying, or deleting all Applications\n\nTo prevent the ApplicationSet controller from creating, modifying, or deleting any `Application` resources, you may enable `dry-run` mode. 
This essentially switches the controller into a \"read-only\" mode, where the controller's reconcile loop will run, but no resources will be modified.\n\nTo enable dry-run, add `--dryrun true` to the ApplicationSet Deployment's container launch parameters.\n\nSee 'How to modify ApplicationSet container parameters' below for detailed steps on how to add this parameter to the controller.\n\n## Managed Applications modification Policies\n\nThe ApplicationSet controller supports a parameter `--policy`, which is specified on launch (within the controller Deployment container), and which restricts what types of modifications will be made to managed Argo CD `Application` resources.\n\nThe `--policy` parameter takes four values: `sync`, `create-only`, `create-delete`, and `create-update`. (`sync` is the default, which is used if the `--policy` parameter is not specified; the other policies are described below.)\n\nIt is also possible to set this policy per ApplicationSet.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nspec:\n  # (...)\n  syncPolicy:\n    applicationsSync: create-only # create-update, create-delete, sync\n\n```\n\n- Policy `create-only`: Prevents the ApplicationSet controller from modifying or deleting Applications. **WARNING**: It doesn't prevent the Application controller from deleting Applications according to [ownerReferences](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/owners-dependents\/) when the ApplicationSet is deleted.\n- Policy `create-update`: Prevents the ApplicationSet controller from deleting Applications. Update is allowed. **WARNING**: It doesn't prevent the Application controller from deleting Applications according to [ownerReferences](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/owners-dependents\/) when the ApplicationSet is deleted.\n- Policy `create-delete`: Prevents the ApplicationSet controller from modifying Applications. 
Delete is allowed.\n- Policy `sync`: Update and Delete are allowed.\n\nIf the controller parameter `--policy` is set, it takes precedence over the field `applicationsSync`. A per-ApplicationSet sync policy can be allowed by setting the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_POLICY_OVERRIDE`, by setting `applicationsetcontroller.enable.policy.override` in the `argocd-cmd-params-cm` ConfigMap, or directly with the controller parameter `--enable-policy-override` (defaults to `false`).\n\n### Policy - `create-only`: Prevent ApplicationSet controller from modifying and deleting Applications\n\nTo allow the ApplicationSet controller to *create* `Application` resources, but prevent any further modification, such as *deletion* or modification of Application fields, add this parameter to the ApplicationSet controller:\n\n**WARNING**: \"*deletion*\" refers to Applications that no longer exist when the newly generated set of Applications is compared with the previously generated set. It doesn't refer to Applications deleted via their ownerReferences when the ApplicationSet itself is deleted. See [How to prevent Application controller from deleting Applications when deleting ApplicationSet](#how-to-prevent-application-controller-from-deleting-applications-when-deleting-applicationset)\n\n```\n--policy create-only\n```\n\nAt the ApplicationSet level:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nspec:\n  # (...)\n  syncPolicy:\n    applicationsSync: create-only\n```\n\n### Policy - `create-update`: Prevent ApplicationSet controller from deleting Applications\n\nTo allow the ApplicationSet controller to create or modify `Application` resources, but prevent Applications from being deleted, add the following parameter to the ApplicationSet controller `Deployment`:\n\n**WARNING**: \"*deletion*\" refers to Applications that no longer exist when the newly generated set of Applications is compared with the previously generated set. 
It doesn't refer to Applications deleted via their ownerReferences when the ApplicationSet itself is deleted. See [How to prevent Application controller from deleting Applications when deleting ApplicationSet](#how-to-prevent-application-controller-from-deleting-applications-when-deleting-applicationset)\n\n```\n--policy create-update\n```\n\nThis may be useful to users looking for additional protection against deletion of the Applications generated by the controller.\n\nAt the ApplicationSet level:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nspec:\n  # (...)\n  syncPolicy:\n    applicationsSync: create-update\n```\n\n### How to prevent Application controller from deleting Applications when deleting ApplicationSet\n\nBy default, the `create-only` and `create-update` policies do not prevent deletion of Applications when the ApplicationSet itself is deleted.\nTo prevent deletion in that case, you must add the finalizer to the ApplicationSet and use background cascading deletion.\nIf you use foreground cascading deletion, Applications are not guaranteed to be preserved.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  finalizers:\n  - resources-finalizer.argocd.argoproj.io\nspec:\n  # (...)\n```\n\n## Ignore certain changes to Applications\n\nThe ApplicationSet spec includes an `ignoreApplicationDifferences` field, which allows you to specify which fields of \nthe ApplicationSet should be ignored when comparing Applications.\n\nThe field supports multiple ignore rules. 
Each ignore rule may specify a list of either `jsonPointers` or \n`jqPathExpressions` to ignore.\n\nYou may optionally also specify a `name` to apply the ignore rule to a specific Application, or omit the `name` to apply\nthe ignore rule to all Applications.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nspec:\n  ignoreApplicationDifferences:\n    - jsonPointers:\n        - \/spec\/source\/targetRevision\n    - name: some-app\n      jqPathExpressions:\n        - .spec.source.helm.values\n```\n\n### Allow temporarily toggling auto-sync\n\nOne of the most common use cases for ignoring differences is to allow temporarily toggling auto-sync for an Application.\n\nFor example, if you have an ApplicationSet that is configured to automatically sync Applications, you may want to temporarily\ndisable auto-sync for a specific Application. You can do this by adding an ignore rule for the `spec.syncPolicy.automated` field.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nspec:\n  ignoreApplicationDifferences:\n    - jsonPointers:\n        - \/spec\/syncPolicy\n```\n\n### Limitations of `ignoreApplicationDifferences`\n\nWhen an ApplicationSet is reconciled, the controller will compare the ApplicationSet spec with the spec of each Application\nthat it manages. If there are any differences, the controller will generate a patch to update the Application to match the\nApplicationSet spec.\n\nThe generated patch is a MergePatch. According to the MergePatch documentation, \"existing lists will be completely \nreplaced by new lists\" when there is a change to the list.\n\nThis limits the effectiveness of `ignoreApplicationDifferences` when the ignored field is in a list. 
For example, if you\nhave an application with multiple sources, and you want to ignore changes to the `targetRevision` of one of the sources,\nchanges in other fields or in other sources will cause the entire `sources` list to be replaced, and the `targetRevision`\nfield will be reset to the value defined in the ApplicationSet.\n\nFor example, consider this ApplicationSet:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nspec:\n  ignoreApplicationDifferences:\n    - jqPathExpressions:\n        - .spec.sources[] | select(.repoURL == \"https:\/\/git.example.com\/org\/repo1\").targetRevision\n  template:\n    spec:\n      sources:\n      - repoURL: https:\/\/git.example.com\/org\/repo1\n        targetRevision: main\n      - repoURL: https:\/\/git.example.com\/org\/repo2\n        targetRevision: main\n```\n\nYou can freely change the `targetRevision` of the `repo1` source, and the ApplicationSet controller will not overwrite\nyour change.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nspec:\n  sources:\n  - repoURL: https:\/\/git.example.com\/org\/repo1\n    targetRevision: fix\/bug-123\n  - repoURL: https:\/\/git.example.com\/org\/repo2\n    targetRevision: main\n```\n\nHowever, if you change the `targetRevision` of the `repo2` source, the ApplicationSet controller will overwrite the entire\n`sources` field.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nspec:\n  sources:\n  - repoURL: https:\/\/git.example.com\/org\/repo1\n    targetRevision: main\n  - repoURL: https:\/\/git.example.com\/org\/repo2\n    targetRevision: main\n```\n\n!!! note\n    [Future improvements](https:\/\/github.com\/argoproj\/argo-cd\/issues\/15975) to the ApplicationSet controller may \n    eliminate this problem. For example, the `ref` field might be made a merge key, allowing the ApplicationSet \n    controller to generate and use a StrategicMergePatch instead of a MergePatch. 
    You could then target a specific source by `ref`, ignore changes to a field in that source, and changes to
    other sources would not cause the ignored field to be overwritten.

## Prevent an `Application`'s child resources from being deleted when the parent Application is deleted

By default, when an `Application` resource is deleted by the ApplicationSet controller, all of the child resources of the Application will be deleted as well (such as the Application's `Deployments`, `Services`, etc.).

To prevent an Application's child resources from being deleted when the parent Application is deleted, add the `preserveResourcesOnDeletion: true` field to the `syncPolicy` of the ApplicationSet:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  # (...)
  syncPolicy:
    preserveResourcesOnDeletion: true
```

More information on the specific behaviour of `preserveResourcesOnDeletion`, and on deletion in the ApplicationSet controller and Argo CD in general, can be found on the [Application Deletion](Application-Deletion.md) page.

## Prevent an Application's child resources from being modified

Changes made to the ApplicationSet will propagate to the Applications managed by the ApplicationSet, and Argo CD will then propagate the Application changes to the underlying cluster resources (as per [Argo CD Integration](Argo-CD-Integration.md)).

The propagation of Application changes to the cluster is managed by the [automated sync settings](../../user-guide/auto_sync.md), which are referenced in the ApplicationSet `template` field:

- `spec.template.syncPolicy.automated`: If enabled, changes to Applications will automatically propagate to the target cluster's resources.
    - Unset this within the ApplicationSet template to 'pause' updates to cluster resources managed by the `Application` resource.
- `spec.template.syncPolicy.automated.prune`: By default, automated sync will not delete resources when Argo CD detects that the resource is no longer defined in Git.
    - For extra safety, set this to `false` to prevent unexpected changes to the backing Git repository from affecting cluster resources.

## How to modify ApplicationSet container launch parameters

There are a couple of ways to modify the ApplicationSet container parameters in order to enable the settings above.

### A) Use `kubectl edit` to modify the deployment on the cluster

Edit the applicationset-controller `Deployment` resource on the cluster:
```bash
kubectl edit deployment/argocd-applicationset-controller -n argocd
```

Locate the `.spec.template.spec.containers[0].command` field, and add the required parameter(s):
```yaml
spec:
    # (...)
  template:
    # (...)
    spec:
      containers:
      - command:
        - entrypoint.sh
        - argocd-applicationset-controller
        # Insert new parameters here, for example:
        # --policy create-only
    # (...)
```

Save and exit the editor. Wait for a new `Pod` to start with the updated parameters.

### B) Edit the `install.yaml` manifest for the ApplicationSet installation

Rather than directly editing the cluster resource, you may instead choose to modify the installation YAML that is used to install the ApplicationSet controller. This method applies to ApplicationSet versions earlier than 0.4.0:
```bash
# Clone the repository
git clone https://github.com/argoproj/applicationset

# Checkout the version that corresponds to the one you have installed.
git checkout "(version of applicationset)"
# example: git checkout "0.1.0"

cd applicationset/manifests

# Open 'install.yaml' in a text editor and make the same modifications to the
# Deployment as described in the previous section.

# Apply the change to the cluster
kubectl apply -n argocd -f install.yaml
```

## Preserving changes made to an Application's annotations and labels

!!! note
    The same behavior can be achieved on a per-app basis using the [`ignoreApplicationDifferences`](#ignore-certain-changes-to-applications)
    feature described above. However, preserved fields may be configured globally, a feature that is not yet available
    for `ignoreApplicationDifferences`.

It is common practice in Kubernetes to store state in annotations; operators will often make use of this. To allow for this, it is possible to configure a list of annotations that the ApplicationSet should preserve when reconciling.

For example, imagine that we have an Application created from an ApplicationSet, but a custom annotation and label have since been added (to the Application) that do not exist in the `ApplicationSet` resource:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  # This annotation and label exist only on this Application, and not in
  # the parent ApplicationSet template:
  annotations:
    my-custom-annotation: some-value
  labels:
    my-custom-label: some-value
spec:
  # (...)
```

To preserve this annotation and label, we can use the `preservedFields` property of the `ApplicationSet` like so:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
spec:
  # (...)
  preservedFields:
    annotations: ["my-custom-annotation"]
    labels: ["my-custom-label"]
```

The ApplicationSet controller will leave this
annotation and label as-is when reconciling, even though they are not defined in the metadata of the ApplicationSet itself.

By default, the Argo CD notifications and the Argo CD refresh type annotations are also preserved.

!!! note
    One can also set global preserved fields for the controller by passing a comma-separated list of annotations and labels to
    `ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_ANNOTATIONS` and `ARGOCD_APPLICATIONSET_CONTROLLER_GLOBAL_PRESERVED_LABELS` respectively.

## Debugging unexpected changes to Applications

When the ApplicationSet controller makes a change to an application, it logs the patch at the debug level. To see these logs, set the log level to debug in the `argocd-cmd-params-cm` ConfigMap in the `argocd` namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  applicationsetcontroller.log.level: debug
```

## Previewing changes

To preview the changes that the ApplicationSet controller would make to Applications, you can create the AppSet in dry-run mode. This works whether the AppSet already exists or not.

```shell
argocd appset create --dry-run ./appset.yaml -o json | jq -r '.status.resources[].name'
```

The dry-run will populate the returned ApplicationSet's status with the Applications which would be managed with the given config.
You can compare to the existing Applications to see what would change.
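That comparison can be sketched in a few lines of Python; the `status.resources` shape mirrors the `jq` filter above, and the sample Application names are hypothetical:

```python
# Hypothetical dry-run output, modeling only the fields used by the jq filter
# above (.status.resources[].name).
dry_run = {"status": {"resources": [
    {"name": "guestbook-dev"},
    {"name": "guestbook-prod"},
]}}
# Names of Applications that already exist on the cluster (sample values).
existing = {"guestbook-dev", "legacy-app"}

# Set difference in both directions shows what the AppSet would add or drop.
would_manage = {r["name"] for r in dry_run["status"]["resources"]}
print("would be created:", sorted(would_manage - existing))    # -> ['guestbook-prod']
print("no longer generated:", sorted(existing - would_manage))  # -> ['legacy-app']
```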
{"questions":"argocd Installation Have a file default location is Getting Started Requirements This guide assumes you are familiar with Argo CD and its basic concepts See the for more information Installed command line tool","answers":"# Getting Started\n\nThis guide assumes you are familiar with Argo CD and its basic concepts. See the [Argo CD documentation](..\/..\/core_concepts.md) for more information.\n    \n## Requirements\n\n* Installed [kubectl](https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl\/) command-line tool\n* Have a [kubeconfig](https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/configure-access-multiple-clusters\/) file (default location is `~\/.kube\/config`).\n\n## Installation\n\nThere are a few options for installing the ApplicationSet controller.\n\n\n### A) Install ApplicationSet as part of Argo CD\n\nStarting with Argo CD v2.3, the ApplicationSet controller is bundled with Argo CD. It is no longer necessary to install the ApplicationSet controller separately from Argo CD.\n\nFollow the [Argo CD Getting Started](..\/..\/getting_started.md) instructions for more information.\n\n\n\n### B) Install ApplicationSet into an existing Argo CD install (pre-Argo CD v2.3)\n\n**Note**: These instructions only apply to versions of Argo CD before v2.3.0.\n\nThe ApplicationSet controller *must* be installed into the same namespace as the Argo CD it is targeting.\n\nPresuming that Argo CD is installed into the `argocd` namespace, run the following command:\n\n```bash\nkubectl apply -n argocd -f https:\/\/raw.githubusercontent.com\/argoproj\/applicationset\/v0.4.0\/manifests\/install.yaml\n```\n\nOnce installed, the ApplicationSet controller requires no additional setup.\n\nThe `manifests\/install.yaml` file contains the Kubernetes manifests required to install the ApplicationSet controller:\n\n- CustomResourceDefinition for `ApplicationSet` resource\n- Deployment for `argocd-applicationset-controller`\n- ServiceAccount for use by 
ApplicationSet controller, to access Argo CD resources
- Role granting RBAC access to needed resources, for ServiceAccount
- RoleBinding to bind the ServiceAccount and Role

<!-- ### C) Install development builds of ApplicationSet controller for access to the latest features

Development builds of the ApplicationSet controller can be installed by running the following command:
```bash
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/applicationset/master/manifests/install.yaml
```

With this option you will need to ensure that Argo CD is already installed into the `argocd` namespace.

How it works:

- After each successful commit to *argoproj/applicationset* `master` branch, a GitHub action will run that performs a container build/push to [`argoproj/argocd-applicationset:latest`](https://quay.io/repository/argoproj/argocd-applicationset?tab=tags)
- [Documentation for the `master`-branch-based developer builds](https://argocd-applicationset.readthedocs.io/en/master/) is available from Read the Docs.

!!! warning
    Development builds contain newer features and bug fixes, but are more likely to be unstable, as compared to release builds.

See the `master` branch [Read the Docs](https://argocd-applicationset.readthedocs.io/en/master/) page for documentation on post-release features.
-->

<!-- ## Upgrading to a Newer Release

To upgrade from an older release (eg 0.1.0, 0.2.0) to a newer release (eg 0.3.0), you only need to `kubectl apply` the `install.yaml` for the new release, as described under *Installation* above.

There are no manual upgrade steps required between any release of ApplicationSet controller (including 0.1.0, 0.2.0, and 0.3.0) as of this writing; however, see the behaviour changes in ApplicationSet controller v0.3.0, below.

### Behaviour changes in ApplicationSet controller v0.3.0

There are no breaking changes, however, a couple of behaviours have changed from v0.2.0 to v0.3.0. See the [v0.3.0 upgrade page](upgrading/v0.2.0-to-v0.3.0.md) for details. -->

## Enabling high availability mode

To enable high availability, set the `--enable-leader-election=true` flag on the `argocd-applicationset-controller` container and increase the number of replicas.

Make the following change in `manifests/install.yaml`:

```yaml
    spec:
      containers:
      - command:
        - entrypoint.sh
        - argocd-applicationset-controller
        - --enable-leader-election=true
```

### Optional: Additional Post-Upgrade Safeguards

See the [Controlling Resource Modification](Controlling-Resource-Modification.md) page for information on additional parameters you may wish to add to the ApplicationSet Resource in `install.yaml`, to provide extra security against any initial, unexpected post-upgrade behaviour.
For instance, to temporarily prevent the upgraded ApplicationSet controller from making any changes, you could:

- Enable dry-run
- Use a create-only policy
- Enable `preserveResourcesOnDeletion` on your ApplicationSets
- Temporarily disable automated sync in your ApplicationSets' template

These parameters would allow you to observe and control the behaviour of the new version of the ApplicationSet controller in your environment, to ensure you are happy with the result (see the ApplicationSet log file for details). Just don't forget to remove any temporary changes when you are done testing!

However, as mentioned above, these steps are not strictly necessary: upgrading the ApplicationSet controller should be a minimally invasive process, and these are only suggested as an optional precaution for extra safety.

## Next Steps

Once your ApplicationSet controller is up and running, proceed to [Use Cases](Use-Cases.md) to learn more about the supported scenarios, or proceed directly to [Generators](Generators.md) to see example `ApplicationSet` resources.
","site":"argocd","answers_cleaned":"  Getting Started  This guide assumes you are familiar with Argo CD and its basic concepts  See the  Argo CD documentation        core concepts md  for more information          Requirements    Installed  kubectl  https   kubernetes io docs tasks tools install kubectl   command line tool   Have a  kubeconfig  https   kubernetes io docs tasks access application cluster configure access multiple clusters   file  default location is     kube config        Installation  There are a few options for installing the ApplicationSet controller        A  Install ApplicationSet as part of Argo CD  Starting with Argo CD v2 3  the ApplicationSet controller is bundled with Argo CD  It is no longer necessary to install the ApplicationSet controller separately from Argo CD   Follow the  Argo CD Getting Started        getting started md  instructions for more information         B  Install ApplicationSet into an existing Argo CD install  pre Argo CD v2 3     Note    These instructions only apply to versions of Argo CD before v2 3 0   The ApplicationSet controller  must  be installed into the same namespace as the Argo CD it is targeting   Presuming that Argo CD is installed into the  argocd  namespace  run the following command      bash kubectl apply  n argocd  f https   raw githubusercontent com argoproj applicationset v0 4 0 manifests install yaml      Once installed  the ApplicationSet controller requires no additional setup   The  manifests install yaml  file contains the Kubernetes manifests required to install the ApplicationSet controller     CustomResourceDefinition for  ApplicationSet  resource   Deployment for  argocd applicationset controller    ServiceAccount for use by ApplicationSet controller  to access Argo CD resources   Role granting RBAC access to needed resources  for ServiceAccount   RoleBinding to bind the ServiceAccount and Role            C  Install development builds of ApplicationSet controller for access to the latest 
features  Development builds of the ApplicationSet controller can be installed by running the following command     bash kubectl apply  n argocd  f https   raw githubusercontent com argoproj applicationset master manifests install yaml      With this option you will need to ensure that Argo CD is already installed into the  argocd  namespace   How it works     After each successful commit to  argoproj applicationset   master  branch  a GitHub action will run that performs a container build push to   argoproj argocd applicationset latest   https   quay io repository argoproj argocd applicationset tab tags      Documentation for the  master  branch based developer builds  https   argocd applicationset readthedocs io en master    is available from Read the Docs       warning     Development builds contain newer features and bug fixes  but are more likely to be unstable  as compared to release builds   See the  master  branch  Read the Docs  https   argocd applicationset readthedocs io en master   page for documentation on post release features                Upgrading to a Newer Release  To upgrade from an older release  eg 0 1 0  0 2 0  to a newer release  eg 0 3 0   you only need to  kubectl apply  the  install yaml  for the new release  as described under  Installation  above   There are no manual upgrade steps required between any release of ApplicationSet controller   including 0 1 0  0 2 0  and 0 3 0  as of this writing  however  see the behaviour changes in ApplicationSet controller v0 3 0  below       Behaviour changes in ApplicationSet controller v0 3 0  There are no breaking changes  however  a couple of behaviours have changed from v0 2 0 to v0 3 0  See the  v0 3 0 upgrade page  upgrading v0 2 0 to v0 3 0 md  for details         Enabling high availability mode  To enable high availability  you have to set the command       enable leader election true      in argocd applicationset controller container and increase the replicas    do following changes in 
manifests install yaml     bash     spec        containers          command            entrypoint sh           argocd applicationset controller             enable leader election true          Optional  Additional Post Upgrade Safeguards  See the  Controlling Resource Modification  Controlling Resource Modification md  page for information on additional parameters you may wish to add to the ApplicationSet Resource in  install yaml   to provide extra security against any initial  unexpected post upgrade behaviour    For instance  to temporarily prevent the upgraded ApplicationSet controller from making any changes  you could     Enable dry run   Use a create only policy   Enable  preserveResourcesOnDeletion  on your ApplicationSets   Temporarily disable automated sync in your ApplicationSets  template  These parameters would allow you to observe control the behaviour of the new version of the ApplicationSet controller in your environment  to ensure you are happy with the result  see the ApplicationSet log file for details   Just don t forget to remove any temporary changes when you are done testing   However  as mentioned above  these steps are not strictly necessary  upgrading the ApplicationSet controller should be a minimally invasive process  and these are only suggested as an optional precaution for extra safety      Next Steps  Once your ApplicationSet controller is up and running  proceed to  Use Cases  Use Cases md  to learn more about the supported scenarios  or proceed directly to  Generators  Generators md  to see example  ApplicationSet  resources  "}
{"questions":"argocd ApplicationSet is able to use To activate this feature add to your ApplicationSet manifest is available in addition to the default Go Text Template functions The except for and Go Template Introduction","answers":"# Go Template\n\n## Introduction\n\nApplicationSet is able to use [Go Text Template](https:\/\/pkg.go.dev\/text\/template). To activate this feature, add \n`goTemplate: true` to your ApplicationSet manifest.\n\nThe [Sprig function library](https:\/\/masterminds.github.io\/sprig\/) (except for `env`, `expandenv` and `getHostByName`) \nis available in addition to the default Go Text Template functions.\n\nAn additional `normalize` function makes any string parameter usable as a valid DNS name by replacing invalid characters \nwith hyphens and truncating at 253 characters. This is useful when making parameters safe for things like Application\nnames.\n\nAnother `slugify` function has been added which, by default, sanitizes and smart truncates (it doesn't cut a word into 2). This function accepts a couple of arguments:\n\n- The first argument (if provided) is an integer specifying the maximum length of the slug.\n- The second argument (if provided) is a boolean indicating whether smart truncation is enabled.\n- The last argument (if provided) is the input name that needs to be slugified.\n\n#### Usage example\n\n```\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: test-appset\nspec:\n  ... \n  template:\n    metadata:\n      name: 'hellos3--'\n      annotations:\n        label-1: ''\n        label-2: ''\n        label-3: ''\n```\n\nIf you want to customize [options defined by text\/template](https:\/\/pkg.go.dev\/text\/template#Template.Option), you can\nadd the `goTemplateOptions: [\"opt1\", \"opt2\", ...]` key to your ApplicationSet next to `goTemplate: true`. 
Note that at\nthe time of writing, there is only one useful option defined, which is `missingkey=error`.\n\nThe recommended setting of `goTemplateOptions` is `[\"missingkey=error\"]`, which ensures that if undefined values are\nlooked up by your template then an error is reported instead of being ignored silently. This is not currently the default\nbehavior, for backwards compatibility.\n\n## Motivation\n\nGo Template is the Go Standard for string templating. It is also more powerful than fasttemplate (the default templating \nengine) as it allows doing complex templating logic.\n\n## Limitations\n\nGo templates are applied on a per-field basis, and only on string fields. Here are some examples of what is **not** \npossible with Go text templates:\n\n- Templating a boolean field.\n\n        ::yaml\n        apiVersion: argoproj.io\/v1alpha1\n        kind: ApplicationSet\n        spec:\n          goTemplate: true\n          goTemplateOptions: [\"missingkey=error\"]\n          template:\n            spec:\n              source:\n                helm:\n                  useCredentials: \"\"  # This field may NOT be templated, because it is a boolean field.\n\n- Templating an object field:\n\n        ::yaml\n        apiVersion: argoproj.io\/v1alpha1\n        kind: ApplicationSet\n        spec:\n          goTemplate: true\n          goTemplateOptions: [\"missingkey=error\"]\n          template:\n            spec:\n              syncPolicy: \"\"  # This field may NOT be templated, because it is an object field.\n\n- Using control keywords across fields:\n\n        ::yaml\n        apiVersion: argoproj.io\/v1alpha1\n        kind: ApplicationSet\n        spec:\n          goTemplate: true\n          goTemplateOptions: [\"missingkey=error\"]\n          template:\n            spec:\n              source:\n                helm:\n                  parameters:\n                  # Each of these fields is evaluated as an independent template, so the first one will fail with an 
error.\n                  - name: \"\"\n                  - name: \"\"\n                    value: \"\"\n                  - name: throw-away\n                    value: \"\"\n\n- Signature verification is not supported for the templated `project` field when using the Git generator.\n\n        ::yaml\n        apiVersion: argoproj.io\/v1alpha1\n        kind: ApplicationSet\n        spec:\n          goTemplate: true\n          template:\n            spec:\n              project: \n\n\n## Migration guide\n\n### Globals\n\nAll your templates must replace parameters with GoTemplate Syntax:\n\nExample: `` becomes ``\n\n### Cluster Generators\n\nBy activating Go Templating, `` becomes an object.\n\n- `` becomes ``\n- `` becomes ``\n\n### Git Generators\n\nBy activating Go Templating, `` becomes an object. Therefore, some changes must be made to the Git \ngenerators' templating:\n\n- `` becomes ``\n- `` becomes ``\n- `` becomes ``\n- `` becomes ``\n- `` becomes ``\n- `` becomes ``\n- `` if being used in the file generator becomes ``\n\nHere is an example:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: cluster-addons\nspec:\n  generators:\n  - git:\n      repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n      revision: HEAD\n      directories:\n      - path: applicationset\/examples\/git-generator-directory\/cluster-addons\/*\n  template:\n    metadata:\n      name: ''\n    spec:\n      project: default\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n        targetRevision: HEAD\n        path: ''\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: ''\n```\n\nbecomes\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: cluster-addons\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - git:\n      repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n      revision: HEAD\n      
directories:\n      - path: applicationset\/examples\/git-generator-directory\/cluster-addons\/*\n  template:\n    metadata:\n      name: ''\n    spec:\n      project: default\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n        targetRevision: HEAD\n        path: ''\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: ''\n```\n\nIt is also possible to use Sprig functions to construct the path variables manually:\n\n| with `goTemplate: false` | with `goTemplate: true` | with `goTemplate: true` + Sprig |\n| ------------ | ----------- | --------------------- |\n| `` | `` | `` |\n| `` | `` | `` |\n| `` | `` | `` |\n| `` | `` | `` |\n| `` | `` | `` |\n| `` | `-` | `` |\n\n## Available template functions\n\nApplicationSet controller provides:\n\n- all [sprig](http:\/\/masterminds.github.io\/sprig\/) Go templates function except `env`, `expandenv` and `getHostByName`\n- `normalize`: sanitizes the input so that it complies with the following rules:\n    1. contains no more than 253 characters\n    2. contains only lowercase alphanumeric characters, '-' or '.'\n    3. 
starts and ends with an alphanumeric character\n\n- `slugify`: sanitizes like `normalize` and smart truncates (it doesn't cut a word into 2) like described in the [introduction](#introduction) section.\n- `toYaml` \/ `fromYaml` \/ `fromYamlArray` helm like functions\n\n\n## Examples\n\n### Basic Go template usage\n\nThis example shows basic string parameter substitution.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: guestbook\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - list:\n      elements:\n      - cluster: engineering-dev\n        url: https:\/\/1.2.3.4\n      - cluster: engineering-prod\n        url: https:\/\/2.4.6.8\n      - cluster: finance-preprod\n        url: https:\/\/9.8.7.6\n  template:\n    metadata:\n      name: '-guestbook'\n    spec:\n      project: my-project\n      source:\n        repoURL: https:\/\/github.com\/infra-team\/cluster-deployments.git\n        targetRevision: HEAD\n        path: guestbook\/\n      destination:\n        server: ''\n        namespace: guestbook\n```\n\n### Fallbacks for unset parameters\n\nFor some generators, a parameter of a certain name might not always be populated (for example, with the values generator\nor the git files generator). 
In these cases, you can use a Go template to provide a fallback value.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: guestbook\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - list:\n      elements:\n      - cluster: engineering-dev\n        url: https:\/\/kubernetes.default.svc\n      - cluster: engineering-prod\n        url: https:\/\/kubernetes.default.svc\n        nameSuffix: -my-name-suffix\n  template:\n    metadata:\n      name: ''\n    spec:\n      project: default\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n        targetRevision: HEAD\n        path: applicationset\/examples\/list-generator\/guestbook\/\n      destination:\n        server: ''\n        namespace: guestbook\n```\n\nThis ApplicationSet will produce an Application called `engineering-dev` and another called \n`engineering-prod-my-name-suffix`.\n\nNote that unset parameters are an error, so you need to avoid looking up a property that doesn't exist. Instead, use\ntemplate functions like `dig` to do the lookup with a default. 
If you prefer to have unset parameters default to zero,\nyou can remove `goTemplateOptions: [\"missingkey=error\"]` or set it to `goTemplateOptions: [\"missingkey=invalid\"]`","site":"argocd","answers_cleaned":"  Go Template     Introduction  ApplicationSet is able to use  Go Text Template  https   pkg go dev text template   To activate this feature  add   goTemplate  true  to your ApplicationSet manifest   The  Sprig function library  https   masterminds github io sprig    except for  env    expandenv  and  getHostByName    is available in addition to the default Go Text Template functions   An additional  normalize  function makes any string parameter usable as a valid DNS name by replacing invalid characters  with hyphens and truncating at 253 characters  This is useful when making parameters safe for things like Application names   Another  slugify  function has been added which  by default  sanitizes and smart truncates  it doesn t cut a word into 2   This function accepts a couple of arguments     The first argument  if provided  is an integer specifying the maximum length of the slug    The second argument  if provided  is a boolean indicating whether smart truncation is enabled    The last argument  if provided  is the input name that needs to be slugified        Usage example      apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  test appset spec           template      metadata        name   hellos3          annotations          label 1             label 2             label 3          If you want to customize  options defined by text template  https   pkg go dev text template Template Option   you can add the  goTemplateOptions    opt1    opt2         key to your ApplicationSet next to  goTemplate  true   Note that at the time of writing  there is only one useful option defined  which is  missingkey error    The recommended setting of  goTemplateOptions  is    missingkey error     which ensures that if undefined values are looked up 
by your template then an error is reported instead of being ignored silently  This is not currently the default behavior  for backwards compatibility      Motivation  Go Template is the Go Standard for string templating  It is also more powerful than fasttemplate  the default templating  engine  as it allows doing complex templating logic      Limitations  Go templates are applied on a per field basis  and only on string fields  Here are some examples of what is   not    possible with Go text templates     Templating a boolean field             yaml         apiVersion  argoproj io v1alpha1         kind  ApplicationSet         spec            goTemplate  true           goTemplateOptions    missingkey error             template              spec                source                  helm                    useCredentials        This field may NOT be templated  because it is a boolean field     Templating an object field             yaml         apiVersion  argoproj io v1alpha1         kind  ApplicationSet         spec            goTemplate  true           goTemplateOptions    missingkey error             template              spec                syncPolicy        This field may NOT be templated  because it is an object field     Using control keywords across fields             yaml         apiVersion  argoproj io v1alpha1         kind  ApplicationSet         spec            goTemplate  true           goTemplateOptions    missingkey error             template              spec                source                  helm                    parameters                      Each of these fields is evaluated as an independent template  so the first one will fail with an error                      name                         name                         value                         name  throw away                     value        Signature verification is not supported for the templated  project  field when using the Git generator             yaml         apiVersion  
argoproj io v1alpha1         kind  ApplicationSet         spec            goTemplate  true           template              spec                project        Migration guide      Globals  All your templates must replace parameters with GoTemplate Syntax   Example     becomes         Cluster Generators  By activating Go Templating     becomes an object        becomes         becomes         Git Generators  By activating Go Templating     becomes an object  Therefore  some changes must be made to the Git  generators  templating        becomes         becomes         becomes         becomes         becomes         becomes         if being used in the file generator becomes     Here is an example      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  cluster addons spec    generators      git        repoURL  https   github com argoproj argo cd git       revision  HEAD       directories          path  applicationset examples git generator directory cluster addons     template      metadata        name         spec        project  default       source          repoURL  https   github com argoproj argo cd git         targetRevision  HEAD         path           destination          server  https   kubernetes default svc         namespace          becomes     yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  cluster addons spec    goTemplate  true   goTemplateOptions    missingkey error     generators      git        repoURL  https   github com argoproj argo cd git       revision  HEAD       directories          path  applicationset examples git generator directory cluster addons     template      metadata        name         spec        project  default       source          repoURL  https   github com argoproj argo cd git         targetRevision  HEAD         path           destination          server  https   kubernetes default svc         namespace          It is also possible to use Sprig functions to construct the 
path variables manually     with  goTemplate  false    with  goTemplate  true    with  goTemplate  true    Sprig                                                                                                                                                                     Available template functions  ApplicationSet controller provides     all  sprig  http   masterminds github io sprig   Go templates function except  env    expandenv  and  getHostByName     normalize   sanitizes the input so that it complies with the following rules      1  contains no more than 253 characters     2  contains only lowercase alphanumeric characters      or         3  starts and ends with an alphanumeric character     slugify   sanitizes like  normalize  and smart truncates  it doesn t cut a word into 2  like described in the  introduction   introduction  section     toYaml     fromYaml     fromYamlArray  helm like functions      Examples      Basic Go template usage  This example shows basic string parameter substitution      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  guestbook spec    goTemplate  true   goTemplateOptions    missingkey error     generators      list        elements          cluster  engineering dev         url  https   1 2 3 4         cluster  engineering prod         url  https   2 4 6 8         cluster  finance preprod         url  https   9 8 7 6   template      metadata        name    guestbook      spec        project  my project       source          repoURL  https   github com infra team cluster deployments git         targetRevision  HEAD         path  guestbook        destination          server             namespace  guestbook          Fallbacks for unset parameters  For some generators  a parameter of a certain name might not always be populated  for example  with the values generator or the git files generator   In these cases  you can use a Go template to provide a fallback value      yaml apiVersion  argoproj io 
v1alpha1 kind  ApplicationSet metadata    name  guestbook spec    goTemplate  true   goTemplateOptions    missingkey error     generators      list        elements          cluster  engineering dev         url  https   kubernetes default svc         cluster  engineering prod         url  https   kubernetes default svc         nameSuffix   my name suffix   template      metadata        name         spec        project  default       source          repoURL  https   github com argoproj argo cd git         targetRevision  HEAD         path  applicationset examples list generator guestbook        destination          server             namespace  guestbook      This ApplicationSet will produce an Application called  engineering dev  and another called   engineering prod my name suffix    Note that unset parameters are an error  so you need to avoid looking up a property that doesn t exist  Instead  use template functions like  dig  to do the lookup with a default  If you prefer to have unset parameters default to zero  you can remove  goTemplateOptions    missingkey error    or set it to  goTemplateOptions    missingkey invalid   "}
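The `missingkey` behavior described above comes straight from Go's `text/template` package. A standalone sketch (independent of Argo CD; the template text and `render` helper here are illustrative) shows the difference between the default option and `missingkey=error`:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render executes a tiny template against data, optionally applying a
// text/template option string such as "missingkey=error".
func render(opt string, data map[string]any) (string, error) {
	t := template.Must(template.New("app").Parse("name-{{.cluster}}"))
	if opt != "" {
		t.Option(opt)
	}
	var buf bytes.Buffer
	err := t.Execute(&buf, data)
	return buf.String(), err
}

func main() {
	// With missingkey=error, looking up an absent key fails loudly.
	_, err := render("missingkey=error", map[string]any{})
	fmt.Println("missingkey=error failed:", err != nil) // prints "missingkey=error failed: true"

	// The default behavior silently renders "<no value>" for the missing key.
	out, _ := render("", map[string]any{})
	fmt.Println(out) // prints "name-<no value>"
}
```

This mirrors why `["missingkey=error"]` is the recommended setting: a typo in a parameter name surfaces as a generation error rather than a silently wrong Application name.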
{"questions":"argocd Using a Merge generator is appropriate when a subset of parameter sets require overriding Merge Generator As an example imagine that we have two clusters The Merge generator combines parameters produced by the base first generator with matching parameter sets produced by subsequent generators A matching parameter set has the same values for the configured merge keys Non matching parameter sets are discarded Override precedence is bottom to top the values from a matching parameter set produced by generator 3 will take precedence over the values from the corresponding parameter set produced by generator 2 Example Base Cluster generator override Cluster generator List generator","answers":"# Merge Generator\n\nThe Merge generator combines parameters produced by the base (first) generator with matching parameter sets produced by subsequent generators. A _matching_ parameter set has the same values for the configured _merge keys_. _Non-matching_ parameter sets are discarded. 
Override precedence is bottom-to-top: the values from a matching parameter set produced by generator 3 will take precedence over the values from the corresponding parameter set produced by generator 2.\n\nUsing a Merge generator is appropriate when a subset of parameter sets require overriding.\n\n## Example: Base Cluster generator + override Cluster generator + List generator \n\nAs an example, imagine that we have two clusters:\n\n- A `staging` cluster (at `https:\/\/1.2.3.4`)\n- A `production` cluster (at `https:\/\/2.4.6.8`)\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: cluster-git\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n    # merge 'parent' generator\n    - merge:\n        mergeKeys:\n          - server\n        generators:\n          - clusters:\n              values:\n                kafka: 'true'\n                redis: 'false'\n          # For clusters with a specific label, enable Kafka.\n          - clusters:\n              selector:\n                matchLabels:\n                  use-kafka: 'false'\n              values:\n                kafka: 'false'\n          # For a specific cluster, enable Redis.\n          - list:\n              elements: \n                - server: https:\/\/2.4.6.8\n                  values.redis: 'true'\n  template:\n    metadata:\n      name: ''\n    spec:\n      project: ''\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n        targetRevision: HEAD\n        path: app\n        helm:\n          parameters:\n            - name: kafka\n              value: ''\n            - name: redis\n              value: ''\n      destination:\n        server: ''\n        namespace: default\n```\n\nThe base Cluster generator scans the [set of clusters defined in Argo CD](Generators-Cluster.md), finds the staging and production cluster secrets, and produces two corresponding sets of parameters:\n```yaml\n- name: staging\n  
server: https:\/\/1.2.3.4\n  values.kafka: 'true'\n  values.redis: 'false'\n  \n- name: production\n  server: https:\/\/2.4.6.8\n  values.kafka: 'true'\n  values.redis: 'false'\n```\n\nThe override Cluster generator scans the [set of clusters defined in Argo CD](Generators-Cluster.md), finds the staging cluster secret (which has the required label), and produces the following parameters:\n```yaml\n- name: staging\n  server: https:\/\/1.2.3.4\n  values.kafka: 'false'\n```\n\nWhen merged with the base generator's parameters, the `values.kafka` value for the staging cluster is set to `'false'`.\n```yaml\n- name: staging\n  server: https:\/\/1.2.3.4\n  values.kafka: 'false'\n  values.redis: 'false'\n\n- name: production\n  server: https:\/\/2.4.6.8\n  values.kafka: 'true'\n  values.redis: 'false'\n```\n\nFinally, the List cluster generates a single set of parameters:\n```yaml\n- server: https:\/\/2.4.6.8\n  values.redis: 'true'\n```\n\nWhen merged with the updated base parameters, the `values.redis` value for the production cluster is set to `'true'`. This is the merge generator's final output:\n```yaml\n- name: staging\n  server: https:\/\/1.2.3.4\n  values.kafka: 'false'\n  values.redis: 'false'\n\n- name: production\n  server: https:\/\/2.4.6.8\n  values.kafka: 'true'\n  values.redis: 'true'\n```\n\n## Example: Use value interpolation in merge\n\nSome generators support additional values and interpolating from generated variables to selected values. 
This can be used to teach the merge generator which generated variables to use to combine different generators.\n\nThe following example combines discovered clusters and a git repository by cluster labels and the branch name:\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: cluster-git\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n    # merge 'parent' generator:\n    # Use the selector set by both child generators to combine them.\n    - merge:\n        mergeKeys:\n          # Note that this would not work with goTemplate enabled,\n          # nested merge keys are not supported there.\n          - values.selector\n        generators:\n          # Assuming, all configured clusters have a label for their location:\n          # Set the selector to this location.\n          - clusters:\n              values:\n                selector: ''\n          # The git repo may have different directories which correspond to the\n          # cluster locations, using these as a selector.\n          - git:\n              repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps\/\n              revision: HEAD\n              directories:\n              - path: '*'\n              values:\n                selector: ''\n  template:\n    metadata:\n      name: ''\n    spec:\n      project: ''\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argocd-example-apps\/\n        # The cluster values field for each generator will be substituted here:\n        targetRevision: HEAD\n        path: ''\n      destination:\n        server: ''\n        namespace: default\n```\n\nAssuming a cluster named `germany01` with the label `metadata.labels.location=Germany` and a git repository containing a directory called `Germany`, this could combine to values as follows:\n\n```yaml\n  # From the cluster generator\n- name: germany01\n  server: https:\/\/1.2.3.4\n  # From the git generator\n  path: Germany\n  # 
Combining selector with the merge generator\n  values.selector: 'Germany'\n  # More values from cluster & git generator\n  # [\u2026]\n```\n\n\n## Restrictions\n\n1. You should specify only a single generator per array entry. This is not valid:\n\n        - merge:\n            generators:\n            - list: # (...)\n              git: # (...)\n\n    - While this *will* be accepted by Kubernetes API validation, the controller will report an error on generation. Each generator should be specified in a separate array element, as in the examples above.\n\n1. The Merge generator does not support [`template` overrides](Template.md#generator-templates) specified on child generators. This `template` will not be processed:\n\n        - merge:\n            generators:\n              - list:\n                  elements:\n                    - # (...)\n                  template: { } # Not processed\n\n1. Combination-type generators (Matrix or Merge) can only be nested once. For example, this will not work:\n\n        - merge:\n            generators:\n              - merge:\n                  generators:\n                    - merge:  # This third level is invalid.\n                        generators:\n                          - list:\n                              elements:\n                                - # (...)\n\n1. Merging on nested values while using `goTemplate: true` is currently not supported, this will not work\n\n        spec:\n          goTemplate: true\n          generators:\n          - merge:\n              mergeKeys:\n                - values.merge\n\n1. 
When using a Merge generator nested inside another Matrix or Merge generator, [Post Selectors](Generators-Post-Selector.md) for this nested generator's generators will only be applied when enabled via `spec.applyNestedSelectors`.\n\n        - merge:\n            generators:\n              - merge:\n                  generators:\n                    - list:\n                        elements:\n                          - # (...)\n                      selector: { } # Only applied when applyNestedSelectors is true","site":"argocd"}
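The Merge generator semantics described in the entry above (a base generator's parameter sets, later generators matched on `mergeKeys`, bottom-to-top override precedence, non-matching sets discarded) can be sketched in a few lines. This is an illustrative sketch, not Argo CD source code; the dict-based parameter sets and the `merge_params` helper are hypothetical.

```python
def merge_params(merge_keys, base, *overrides):
    """Mimic the Merge generator: parameter sets from later generators
    override matching sets from the base generator (matched on the
    merge-key values); non-matching sets are discarded."""
    def key_of(params):
        # A parameter set matches on the tuple of its merge-key values.
        return tuple(params.get(k) for k in merge_keys)

    merged = {key_of(p): dict(p) for p in base}
    for generator in overrides:
        for params in generator:
            k = key_of(params)
            if k in merged:               # non-matching sets are discarded
                merged[k].update(params)  # later generators take precedence
    return list(merged.values())

# The staging/production example from the entry above:
base = [
    {"server": "https://1.2.3.4", "values.kafka": "true", "values.redis": "false"},
    {"server": "https://2.4.6.8", "values.kafka": "true", "values.redis": "false"},
]
override = [{"server": "https://1.2.3.4", "values.kafka": "false"}]
fixed = [{"server": "https://2.4.6.8", "values.redis": "true"}]

result = merge_params(["server"], base, override, fixed)
# staging ends up with kafka disabled, production with redis enabled,
# mirroring the merge generator's final output in the example.
```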
{"questions":"argocd And so on Git File Generator List Generator Providing a list of applications to deploy via configuration files with optional configuration options and deploying them to a fixed list of clusters Git Directory Generator Cluster Decision Resource Generator Locate application resources contained within folders of a Git repository and deploy them to a list of clusters provided via an external custom resource SCM Provider Generator Cluster Generator Scanning the repositories of a GitHub organization for application resources and targeting those resources to all available clusters The Matrix generator combines the parameters generated by two child generators iterating through every combination of each generator s generated parameters Matrix Generator By combining both generators parameters to produce every possible combination this allows you to gain the intrinsic properties of both generators For example a small subset of the many possible use cases include","answers":"# Matrix Generator\n\nThe Matrix generator combines the parameters generated by two child generators, iterating through every combination of each generator's generated parameters.\n\nBy combining both generators' parameters to produce every possible combination, you gain the intrinsic properties of both generators. 
For example, a small subset of the many possible use cases include:\n\n- *SCM Provider Generator + Cluster Generator*: Scanning the repositories of a GitHub organization for application resources, and targeting those resources to all available clusters.\n- *Git File Generator + List Generator*: Providing a list of applications to deploy via configuration files, with optional configuration options, and deploying them to a fixed list of clusters.\n- *Git Directory Generator + Cluster Decision Resource Generator*: Locate application resources contained within folders of a Git repository, and deploy them to a list of clusters provided via an external custom resource.\n- And so on...\n\nAny set of generators may be used, with the combined values of those generators inserted into the `template` parameters, as usual.\n\n**Note**: If both child generators are Git generators, one or both of them must use the `pathParamPrefix` option to avoid conflicts when merging the child generators\u2019 items.\n\n## Example: Git Directory generator + Cluster generator\n\nAs an example, imagine that we have two clusters:\n\n- A `staging` cluster (at `https:\/\/1.2.3.4`)\n- A `production` cluster (at `https:\/\/2.4.6.8`)\n\nAnd our application YAMLs are defined in a Git repository:\n\n- [Argo Workflows controller](https:\/\/github.com\/argoproj\/argo-cd\/tree\/master\/applicationset\/examples\/git-generator-directory\/cluster-addons\/argo-workflows)\n- [Prometheus operator](https:\/\/github.com\/argoproj\/argo-cd\/tree\/master\/applicationset\/examples\/git-generator-directory\/cluster-addons\/prometheus-operator)\n\nOur goal is to deploy both applications onto both clusters, and, more generally, in the future to automatically deploy new applications in the Git repository, and to new clusters defined within Argo CD, as well.\n\nFor this we will use the Matrix generator, with the Git and the Cluster as child generators:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: 
ApplicationSet\nmetadata:\n  name: cluster-git\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n    # matrix 'parent' generator\n    - matrix:\n        generators:\n          # git generator, 'child' #1\n          - git:\n              repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n              revision: HEAD\n              directories:\n                - path: applicationset\/examples\/matrix\/cluster-addons\/*\n          # cluster generator, 'child' #2\n          - clusters:\n              selector:\n                matchLabels:\n                  argocd.argoproj.io\/secret-type: cluster\n  template:\n    metadata:\n      name: '-'\n    spec:\n      project: ''\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n        targetRevision: HEAD\n        path: ''\n      destination:\n        server: ''\n        namespace: ''\n```\n\nFirst, the Git directory generator will scan the Git repository, discovering directories under the specified path. 
It discovers the argo-workflows and prometheus-operator applications, and produces two corresponding sets of parameters:\n```yaml\n- path: \/examples\/git-generator-directory\/cluster-addons\/argo-workflows\n  path.basename: argo-workflows\n\n- path: \/examples\/git-generator-directory\/cluster-addons\/prometheus-operator\n  path.basename: prometheus-operator\n```\n\nNext, the Cluster generator scans the [set of clusters defined in Argo CD](Generators-Cluster.md), finds the staging and production cluster secrets, and produces two corresponding sets of parameters:\n```yaml\n- name: staging\n  server: https:\/\/1.2.3.4\n\n- name: production\n  server: https:\/\/2.4.6.8\n```\n\nFinally, the Matrix generator will combine both sets of outputs, and produce:\n```yaml\n- name: staging\n  server: https:\/\/1.2.3.4\n  path: \/examples\/git-generator-directory\/cluster-addons\/argo-workflows\n  path.basename: argo-workflows\n\n- name: staging\n  server: https:\/\/1.2.3.4\n  path: \/examples\/git-generator-directory\/cluster-addons\/prometheus-operator\n  path.basename: prometheus-operator\n\n- name: production\n  server: https:\/\/2.4.6.8\n  path: \/examples\/git-generator-directory\/cluster-addons\/argo-workflows\n  path.basename: argo-workflows\n\n- name: production\n  server: https:\/\/2.4.6.8\n  path: \/examples\/git-generator-directory\/cluster-addons\/prometheus-operator\n  path.basename: prometheus-operator\n```\n(*The full example can be found [here](https:\/\/github.com\/argoproj\/argo-cd\/tree\/master\/applicationset\/examples\/matrix).*)\n\n## Using Parameters from one child generator in another child generator\n\nThe Matrix generator allows using the parameters generated by one child generator inside another child generator. 
\nBelow is an example that uses a git-files generator in conjunction with a cluster generator.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: cluster-git\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n    # matrix 'parent' generator\n    - matrix:\n        generators:\n          # git generator, 'child' #1\n          - git:\n              repoURL: https:\/\/github.com\/argoproj\/applicationset.git\n              revision: HEAD\n              files:\n                - path: \"examples\/git-generator-files-discovery\/cluster-config\/**\/config.json\"\n          # cluster generator, 'child' #2\n          - clusters:\n              selector:\n                matchLabels:\n                  argocd.argoproj.io\/secret-type: cluster\n                  kubernetes.io\/environment: ''\n  template:\n    metadata:\n      name: '-guestbook'\n    spec:\n      project: default\n      source:\n        repoURL: https:\/\/github.com\/argoproj\/applicationset.git\n        targetRevision: HEAD\n        path: \"examples\/git-generator-files-discovery\/apps\/guestbook\"\n      destination:\n        server: ''\n        namespace: guestbook\n```\nHere is the corresponding folder structure for the git repository used by the git-files generator:\n\n```\n\u251c\u2500\u2500 apps\n\u2502   \u2514\u2500\u2500 guestbook\n\u2502       \u251c\u2500\u2500 guestbook-ui-deployment.yaml\n\u2502       \u251c\u2500\u2500 guestbook-ui-svc.yaml\n\u2502       \u2514\u2500\u2500 kustomization.yaml\n\u251c\u2500\u2500 cluster-config\n\u2502   \u2514\u2500\u2500 engineering\n\u2502       \u251c\u2500\u2500 dev\n\u2502       \u2502   \u2514\u2500\u2500 config.json\n\u2502       \u2514\u2500\u2500 prod\n\u2502           \u2514\u2500\u2500 config.json\n\u2514\u2500\u2500 git-generator-files.yaml\n```\nIn the above example, the `` parameters produced by the git-files generator will resolve to `dev` and `prod`.\nIn the 2nd child 
generator, the label selector with label `kubernetes.io\/environment: ` will resolve with the values produced by the first child generator's parameters (`kubernetes.io\/environment: prod` and `kubernetes.io\/environment: dev`). \n\nSo in the above example, clusters with the label `kubernetes.io\/environment: prod` will have only prod-specific configuration (i.e. `prod\/config.json`) applied to them, whereas clusters\nwith the label `kubernetes.io\/environment: dev` will have only dev-specific configuration (i.e. `dev\/config.json`).\n\n## Overriding parameters from one child generator in another child generator\n\nThe Matrix Generator allows parameters with the same name to be defined in multiple child generators. This is useful, for example, to define default values for all stages in one generator and override them with stage-specific values in another generator. The example below generates a Helm-based application using a matrix generator with two git generators: the first provides stage-specific values (one directory per stage) and the second provides global values for all stages.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: parameter-override-example\nspec:\n  generators:\n    - matrix:\n        generators:\n          - git:\n              repoURL: https:\/\/github.com\/example\/values.git\n              revision: HEAD\n              files:\n                - path: \"**\/stage.values.yaml\"\n          - git:\n              repoURL: https:\/\/github.com\/example\/values.git\n              revision: HEAD\n              files:\n                - path: \"global.values.yaml\"\n  goTemplate: true\n  template:\n    metadata:\n      name: example\n    spec:\n      project: default\n      source:\n        repoURL: https:\/\/github.com\/example\/example-app.git\n        targetRevision: HEAD\n        path: .\n        helm:\n          values: |\n            ` }}\n      destination:\n        server: in-cluster\n        namespace: 
default\n```\n\nGiven the following structure\/content of the example\/values repository:\n\n```\n\u251c\u2500\u2500 test\n\u2502   \u2514\u2500\u2500 stage.values.yaml\n\u2502         stageName: test\n\u2502         cpuRequest: 100m\n\u2502         debugEnabled: true\n\u251c\u2500\u2500 staging\n\u2502   \u2514\u2500\u2500 stage.values.yaml\n\u2502         stageName: staging\n\u251c\u2500\u2500 production\n\u2502   \u2514\u2500\u2500 stage.values.yaml\n\u2502         stageName: production\n\u2502         memoryLimit: 512Mi\n\u2502         debugEnabled: false\n\u2514\u2500\u2500 global.values.yaml\n      cpuRequest: 200m\n      memoryLimit: 256Mi\n      debugEnabled: true\n```\n\nThe matrix generator above would yield the following results:\n\n```yaml\n- stageName: test\n  cpuRequest: 100m\n  memoryLimit: 256Mi\n  debugEnabled: true\n  \n- stageName: staging\n  cpuRequest: 200m\n  memoryLimit: 256Mi\n  debugEnabled: true\n\n- stageName: production\n  cpuRequest: 200m\n  memoryLimit: 512Mi\n  debugEnabled: false\n```\n\n## Example: Two Git Generators Using `pathParamPrefix`\n\nThe matrix generator will fail if its children produce results containing identical keys with differing values.\nThis poses a problem for matrix generators where both children are Git generators since they auto-populate `path`-related parameters in their outputs.\nTo avoid this problem, specify a `pathParamPrefix` on one or both of the child generators to avoid conflicting parameter keys in the output.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: two-gits-with-path-param-prefix\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n    - matrix:\n        generators:\n          # git file generator referencing files containing details about each\n          # app to be deployed (e.g., `appName`).\n          - git:\n              repoURL: https:\/\/github.com\/some-org\/some-repo.git\n              revision: HEAD\n        
      files:\n                - path: \"apps\/*.json\"\n              pathParamPrefix: app\n          # git file generator referencing files containing details about\n          # locations to which each app should deploy (e.g., `region` and\n          # `clusterName`).\n          - git:\n              repoURL: https:\/\/github.com\/some-org\/some-repo.git\n              revision: HEAD\n              files:\n                - path: \"targets\/\/*.json\"\n              pathParamPrefix: target\n  template: {} # ...\n```\n\nThen, given the following file structure\/content:\n\n```\n\u251c\u2500\u2500 apps\n\u2502   \u251c\u2500\u2500 app-one.json\n\u2502   \u2502   { \"appName\": \"app-one\" }\n\u2502   \u2514\u2500\u2500 app-two.json\n\u2502       { \"appName\": \"app-two\" }\n\u2514\u2500\u2500 targets\n    \u251c\u2500\u2500 app-one\n    \u2502   \u251c\u2500\u2500 east-cluster-one.json\n    \u2502   \u2502   { \"region\": \"east\", \"clusterName\": \"cluster-one\" }\n    \u2502   \u2514\u2500\u2500 east-cluster-two.json\n    \u2502       { \"region\": \"east\", \"clusterName\": \"cluster-two\" }\n    \u2514\u2500\u2500 app-two\n        \u251c\u2500\u2500 east-cluster-one.json\n        \u2502   { \"region\": \"east\", \"clusterName\": \"cluster-one\" }\n        \u2514\u2500\u2500 west-cluster-three.json\n            { \"region\": \"west\", \"clusterName\": \"cluster-three\" }\n```\n\n\u2026the matrix generator above would yield the following results:\n\n```yaml\n- appName: app-one\n  app.path: \/apps\n  app.path.filename: app-one.json\n  # plus additional path-related parameters from the first child generator, all\n  # prefixed with \"app\".\n  region: east\n  clusterName: cluster-one\n  target.path: \/targets\/app-one\n  target.path.filename: east-cluster-one.json\n  # plus additional path-related parameters from the second child generator, all\n  # prefixed with \"target\".\n\n- appName: app-one\n  app.path: \/apps\n  app.path.filename: app-one.json\n  region: 
east\n  clusterName: cluster-two\n  target.path: \/targets\/app-one\n  target.path.filename: east-cluster-two.json\n\n- appName: app-two\n  app.path: \/apps\n  app.path.filename: app-two.json\n  region: east\n  clusterName: cluster-one\n  target.path: \/targets\/app-two\n  target.path.filename: east-cluster-one.json\n\n- appName: app-two\n  app.path: \/apps\n  app.path.filename: app-two.json\n  region: west\n  clusterName: cluster-three\n  target.path: \/targets\/app-two\n  target.path.filename: west-cluster-three.json\n```\n\n## Restrictions\n\n1. The Matrix generator currently supports combining the outputs of only two child generators (e.g. it does not support generating combinations for 3 or more).\n\n1. You should specify only a single generator per array entry, e.g. this is not valid:\n\n        - matrix:\n            generators:\n            - list: # (...)\n              git: # (...)\n\n    - While this *will* be accepted by Kubernetes API validation, the controller will report an error on generation. Each generator should be specified in a separate array element, as in the examples above.\n\n1. The Matrix generator does not currently support [`template` overrides](Template.md#generator-templates) specified on child generators, e.g. this `template` will not be processed:\n\n        - matrix:\n            generators:\n              - list:\n                  elements:\n                    - # (...)\n                  template: { } # Not processed\n\n1. Combination-type generators (matrix or merge) can only be nested once. For example, this will not work:\n\n        - matrix:\n            generators:\n              - matrix:\n                  generators:\n                    - matrix:  # This third level is invalid.\n                        generators:\n                          - list:\n                              elements:\n                                - # (...)\n\n1. 
When using parameters from one child generator inside another child generator, the child generator that *consumes* the parameters **must come after** the child generator that *produces* the parameters.\nFor example, the configuration below would be invalid (the cluster generator must come after the git-files generator):\n\n        - matrix:\n            generators:\n              # cluster generator, 'child' #1\n              - clusters:\n                  selector:\n                    matchLabels:\n                      argocd.argoproj.io\/secret-type: cluster\n                      kubernetes.io\/environment: '' #  is produced by git-files generator\n              # git generator, 'child' #2\n              - git:\n                  repoURL: https:\/\/github.com\/argoproj\/applicationset.git\n                  revision: HEAD\n                  files:\n                    - path: \"examples\/git-generator-files-discovery\/cluster-config\/**\/config.json\"\n\n1. You cannot have both child generators consuming parameters from each other. In the example below, the cluster generator is consuming the `` parameter produced by the git-files generator, whereas the git-files generator is consuming the `` parameter produced by the cluster generator. 
This will result in a circular dependency, which is invalid.\n\n        - matrix:\n            generators:\n              # cluster generator, 'child' #1\n              - clusters:\n                  selector:\n                    matchLabels:\n                      argocd.argoproj.io\/secret-type: cluster\n                      kubernetes.io\/environment: '' #  is produced by git-files generator\n              # git generator, 'child' #2\n              - git:\n                  repoURL: https:\/\/github.com\/argoproj\/applicationset.git\n                  revision: HEAD\n                  files:\n                    - path: \"examples\/git-generator-files-discovery\/cluster-config\/engineering\/**\/config.json\" #  is produced by cluster generator\n\n1. When using a Matrix generator nested inside another Matrix or Merge generator, [Post Selectors](Generators-Post-Selector.md) for this nested generator's generators will only be applied when enabled via `spec.applyNestedSelectors`. You may also need to enable this even if your Post Selectors are not within the nested matrix or Merge generator, but are instead a sibling of a nested Matrix or Merge generator.\n\n        - matrix:\n            generators:\n              - matrix:\n                  generators:\n                    - list:\n                        elements:\n                          - # (...)\n                      selector: { } # Only applied when applyNestedSelectors is true","site":"argocd"}
          git                    repoURL  https   github com argoproj applicationset git                   revision  HEAD                   files                        path   examples git generator files discovery cluster config    config json   1  You cannot have both child generators consuming parameters from each another  In the example below  the cluster generator is consuming the    parameter produced by the git files generator  whereas the git files generator is consuming the    parameter produced by the cluster generator  This will result in a circular dependency  which is invalid             matrix              generators                  cluster generator   child   1                 clusters                    selector                      matchLabels                        argocd argoproj io secret type  cluster                       kubernetes io environment        is produced by git files generator                 git generator   child   2                 git                    repoURL  https   github com argoproj applicationset git                   revision  HEAD                   files                        path   examples git generator files discovery cluster config engineering    config json     is produced by cluster generator  1  When using a Matrix generator nested inside another Matrix or Merge generator   Post Selectors  Generators Post Selector md  for this nested generator s generators will only be applied when enabled via  spec applyNestedSelectors   You may also need to enable this even if your Post Selectors are not within the nested matrix or Merge generator  but are instead a sibling of a nested Matrix or Merge generator             matrix              generators                  matrix                    generators                        list                         elements                                                            selector        Only applied when applyNestedSelectors is true"}
{"questions":"argocd The ApplicationSet controller is a that adds support for an CRD This controller CRD enables both automation and greater flexibility managing Applications across a large number of clusters and within monorepos plus it makes self service usage possible on multitenant Kubernetes clusters The ApplicationSet controller works alongside an existing Argo CD is a declarative GitOps continuous delivery tool which allows developers to define and control deployment of Kubernetes application resources from within their existing Git workflow Starting with Argo CD v2 3 the ApplicationSet controller is bundled with Argo CD Introduction Introduction to ApplicationSet controller","answers":"# Introduction to ApplicationSet controller\n\n## Introduction\n\nThe ApplicationSet controller is a [Kubernetes controller](https:\/\/kubernetes.io\/docs\/concepts\/architecture\/controller\/) that adds support for an `ApplicationSet` [CustomResourceDefinition](https:\/\/kubernetes.io\/docs\/tasks\/extend-kubernetes\/custom-resources\/custom-resource-definitions\/) (CRD). This controller\/CRD enables both automation and greater flexibility managing [Argo CD](..\/..\/index.md) Applications across a large number of clusters and within monorepos, plus it makes self-service usage possible on multitenant Kubernetes clusters.\n\nThe ApplicationSet controller works alongside an existing [Argo CD installation](..\/..\/index.md). Argo CD is a declarative, GitOps continuous delivery tool, which allows developers to define and control deployment of Kubernetes application resources from within their existing Git workflow.\n\nStarting with Argo CD v2.3, the ApplicationSet controller is bundled with Argo CD.\n\nThe ApplicationSet controller supplements Argo CD by adding additional features in support of cluster-administrator-focused scenarios. 
The `ApplicationSet` controller provides:\n\n- The ability to use a single Kubernetes manifest to target multiple Kubernetes clusters with Argo CD\n- The ability to use a single Kubernetes manifest to deploy multiple applications from one or multiple Git repositories with Argo CD\n- Improved support for monorepos: in the context of Argo CD, a monorepo is multiple Argo CD Application resources defined within a single Git repository\n- Within multitenant clusters, improves the ability of individual cluster tenants to deploy applications using Argo CD (without needing to involve privileged cluster administrators in enabling the destination clusters\/namespaces)\n\n!!! note\n    Be aware of the [security implications](.\/Security.md) of ApplicationSets before using them.\n\n## The ApplicationSet resource\n\nThis example defines a new `guestbook` resource of kind `ApplicationSet`:\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: guestbook\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - list:\n      elements:\n      - cluster: engineering-dev\n        url: https:\/\/1.2.3.4\n      - cluster: engineering-prod\n        url: https:\/\/2.4.6.8\n      - cluster: finance-preprod\n        url: https:\/\/9.8.7.6\n  template:\n    metadata:\n      name: '-guestbook'\n    spec:\n      project: my-project\n      source:\n        repoURL: https:\/\/github.com\/infra-team\/cluster-deployments.git\n        targetRevision: HEAD\n        path: guestbook\/\n      destination:\n        server: ''\n        namespace: guestbook\n```\n\nIn this example, we want to deploy our `guestbook` application (with the Kubernetes resources for this application coming from Git, since this is GitOps) to a list of Kubernetes clusters (with the list of target clusters defined in the List items element of the `ApplicationSet` resource).\n\nWhile there are multiple types of *generators* that are available to use with the 
`ApplicationSet` resource, this example uses the List generator, which simply contains a fixed, literal list of clusters to target. This list of clusters will be the clusters upon which Argo CD deploys the `guestbook` application resources, once the ApplicationSet controller has processed the `ApplicationSet` resource.\n\nGenerators, such as the List generator, are responsible for generating *parameters*. Parameters are key-value pairs that are substituted into the `template:` section of the ApplicationSet resource during template rendering.\n\nThere are multiple generators currently supported by the ApplicationSet controller:\n\n- **List generator**: Generates parameters based on a fixed list of cluster name\/URL values, as seen in the example above.\n- **Cluster generator**: Rather than a literal list of clusters (as with the list generator), the cluster generator automatically generates cluster parameters based on the clusters that are defined within Argo CD.\n- **Git generator**: The Git generator generates parameters based on files or folders that are contained within the Git repository defined within the generator resource.\n    - Files containing JSON values will be parsed and converted into template parameters.\n    - Individual directory paths within the Git repository may be used as parameter values, as well.\n- **Matrix generator**: The Matrix generator combines the generated parameters of two other generators.\n\nSee the [generator section](Generators.md) for more information about individual generators, and the other generators not listed above.\n\n## Parameter substitution into templates\n\nIndependent of which generator is used, parameters generated by a generator are substituted into `` values within the `template:` section of the `ApplicationSet` resource. 
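Purely as an illustration (the real controller does its own templating; the `render` helper and the `{{key}}` placeholder syntax below are simplified, hypothetical stand-ins), the substitution step can be sketched in Python:

```python
# Sketch: substitute generator-produced parameters into a template string,
# once per parameter set, yielding one rendered result per set.

def render(template: str, params: dict) -> str:
    # Replace every '{{key}}' placeholder with the matching parameter value.
    for key, value in params.items():
        template = template.replace('{{' + key + '}}', str(value))
    return template

# One parameter set per target cluster, as a List generator would produce.
param_sets = [
    {'cluster': 'engineering-dev', 'url': 'https://1.2.3.4'},
    {'cluster': 'engineering-prod', 'url': 'https://2.4.6.8'},
    {'cluster': 'finance-preprod', 'url': 'https://9.8.7.6'},
]

template = '{{cluster}}-guestbook -> {{url}}'
applications = [render(template, p) for p in param_sets]
```

Each rendered string stands in for a full Application manifest; the controller renders the entire `template:` block in this one-result-per-parameter-set fashion.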
In this example, the List generator defines `cluster` and `url` parameters, which are then substituted into the template's `` and `` values, respectively.\n\nAfter substitution, this `guestbook` `ApplicationSet` resource is applied to the Kubernetes cluster:\n\n1. The ApplicationSet controller processes the generator entries, producing a set of template parameters.\n2. These parameters are substituted into the template, once for each set of parameters.\n3. Each rendered template is converted into an Argo CD `Application` resource, which is then created (or updated) within the Argo CD namespace.\n4. Finally, the Argo CD controller is notified of these `Application` resources and is responsible for handling them.\n\n\nWith the three different clusters defined in our example -- `engineering-dev`, `engineering-prod`, and `finance-preprod` -- this will produce three new Argo CD `Application` resources: one for each cluster.\n\nHere is an example of one of the `Application` resources that would be created, for the `engineering-dev` cluster at `1.2.3.4`:\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  name: engineering-dev-guestbook\nspec:\n  source:\n    repoURL: https:\/\/github.com\/infra-team\/cluster-deployments.git\n    targetRevision: HEAD\n    path: guestbook\/engineering-dev\n  destination:\n    server: https:\/\/1.2.3.4\n    namespace: guestbook\n```\nWe can see that the generated values have been substituted into the `server` and `path` fields of the template, and the template has been rendered into a fully-fleshed out Argo CD Application.\n\nThe Applications are now also visible from within the Argo CD UI:\n\n![List generator example in Argo CD Web UI](..\/..\/assets\/applicationset\/Introduction\/List-Example-In-Argo-CD-Web-UI.png)\n\nThe ApplicationSet controller will ensure that any changes, updates, or deletions made to `ApplicationSet` resources are automatically applied to the corresponding `Application`(s).\n\nFor instance, 
if a new cluster\/URL list entry was added to the List generator, a new Argo CD `Application` resource would be accordingly created for this new cluster. Any edits made to the `guestbook` `ApplicationSet` resource will affect all the Argo CD Applications that were instantiated by that resource, including the new Application.\n\nWhile the List generator's literal list of clusters is fairly simplistic, much more sophisticated scenarios are supported by the other available generators in the ApplicationSet controller.","site":"argocd"}
{"questions":"argocd kind ApplicationSet apiVersion argoproj io v1alpha1 spec metadata SCM Provider Generator The SCM Provider generator uses the API of an SCMaaS provider eg GitHub to automatically discover repositories within an organization This fits well with GitOps layout patterns that split microservices across many repositories yaml name myapps","answers":"# SCM Provider Generator\n\nThe SCM Provider generator uses the API of an SCMaaS provider (eg GitHub) to automatically discover repositories within an organization. This fits well with GitOps layout patterns that split microservices across many repositories.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  generators:\n  - scmProvider:\n      # Which protocol to clone using.\n      cloneProtocol: ssh\n      # See below for provider specific options.\n      github:\n        # ...\n```\n\n* `cloneProtocol`: Which protocol to use for the SCM URL. Default is provider-specific but ssh if possible. Not all providers necessarily support all protocols, see provider documentation below for available options.\n\n!!! note\n    Know the security implications of using SCM generators. 
[Only admins may create ApplicationSets](.\/Security.md#only-admins-may-createupdatedelete-applicationsets)\n    to avoid leaking Secrets, and [only admins may create repos\/branches](.\/Security.md#templated-project-field) if the\n    `project` field of an ApplicationSet with an SCM generator is templated, to avoid granting management of\n    out-of-bounds resources.\n\n## GitHub\n\nThe GitHub mode uses the GitHub API to scan an organization in either github.com or GitHub Enterprise.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  generators:\n  - scmProvider:\n      github:\n        # The GitHub organization to scan.\n        organization: myorg\n        # For GitHub Enterprise:\n        api: https:\/\/git.example.com\/\n        # If true, scan every branch of every repository. If false, scan only the default branch. Defaults to false.\n        allBranches: true\n        # Reference to a Secret containing an access token. (optional)\n        tokenRef:\n          secretName: github-token\n          key: token\n        # (optional) use a GitHub App to access the API instead of a PAT.\n        appSecretName: gh-app-repo-creds\n  template:\n  # ...\n```\n\n* `organization`: Required name of the GitHub organization to scan. If you have multiple organizations, use multiple generators.\n* `api`: If using GitHub Enterprise, the URL to access it.\n* `allBranches`: By default (false) the template will only be evaluated for the default branch of each repo. If this is true, every branch of every repository will be passed to the filters. If using this flag, you likely want to use a `branchMatch` filter.\n* `tokenRef`: A `Secret` name and key containing the GitHub access token to use for requests. 
If not specified, will make anonymous requests which have a lower rate limit and can only see public repositories.\n* `appSecretName`: A `Secret` name containing a GitHub App secret in [repo-creds format][repo-creds].\n\n[repo-creds]: ..\/declarative-setup.md#repository-credentials\n\nFor label filtering, the repository topics are used.\n\nAvailable clone protocols are `ssh` and `https`.\n\n## Gitlab\n\nThe GitLab mode uses the GitLab API to scan an organization in either gitlab.com or self-hosted GitLab.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  generators:\n  - scmProvider:\n      gitlab:\n        # The base GitLab group to scan.  You can either use the group id or the full namespaced path.\n        group: \"8675309\"\n        # For self-hosted GitLab:\n        api: https:\/\/gitlab.example.com\/\n        # If true, scan every branch of every repository. If false, scan only the default branch. Defaults to false.\n        allBranches: true\n        # If true, recurses through subgroups. If false, it searches only in the base group. Defaults to false.\n        includeSubgroups: true\n        # If true and includeSubgroups is also true, include Shared Projects, which is the gitlab API default.\n        # If false, only search Projects under the same path. Defaults to true.\n        includeSharedProjects: false\n        # Filter projects by topic. A single topic is supported by the Gitlab API. Defaults to \"\" (all topics).\n        topic: \"my-topic\"\n        # Reference to a Secret containing an access token. (optional)\n        tokenRef:\n          secretName: gitlab-token\n          key: token\n        # If true, skips validating the SCM provider's TLS certificate - useful for self-signed certificates.\n        insecure: false\n        # Reference to a ConfigMap containing trusted CA certs - useful for self-signed certificates. (optional)\n        caRef:\n          configMapName: argocd-tls-certs-cm\n          key: gitlab-ca\n  template:\n  # ...\n```\n\n* `group`: Required name of the base GitLab group to scan. If you have multiple base groups, use multiple generators.\n* `api`: If using self-hosted GitLab, the URL to access it.\n* `allBranches`: By default (false) the template will only be evaluated for the default branch of each repo. If this is true, every branch of every repository will be passed to the filters. If using this flag, you likely want to use a `branchMatch` filter.\n* `includeSubgroups`: By default (false) the controller will only search for repos directly in the base group. If this is true, it will recurse through all the subgroups searching for repos to scan.\n* `includeSharedProjects`: If true and includeSubgroups is also true, include Shared Projects, which is the gitlab API default. If false, only search Projects under the same path. In general, most users will want this set to false. Defaults to true.\n* `topic`: Filter projects by topic. A single topic is supported by the Gitlab API. Defaults to \"\" (all topics).\n* `tokenRef`: A `Secret` name and key containing the GitLab access token to use for requests. If not specified, will make anonymous requests which have a lower rate limit and can only see public repositories.\n* `insecure`: If true, skip checking the validity of the SCM's certificate - useful for self-signed TLS certificates. Defaults to false.\n* `caRef`: Optional `ConfigMap` name and key containing the GitLab certificates to trust - useful for self-signed TLS certificates. 
Possibly reference the ArgoCD CM holding the trusted certs.\n\nFor label filtering, the repository topics are used.\n\nAvailable clone protocols are `ssh` and `https`.\n\n### Self-signed TLS Certificates\n\nAs a preferable alternative to setting `insecure` to true, you can configure self-signed TLS certificates for Gitlab.\n\nIn order for a self-signed TLS certificate to be used by an ApplicationSet's SCM \/ PR Gitlab Generator, the certificate needs to be mounted on the applicationset-controller. The path of the mounted certificate must be explicitly set using the environment variable `ARGOCD_APPLICATIONSET_CONTROLLER_SCM_ROOT_CA_PATH` or alternatively using the parameter `--scm-root-ca-path`. The applicationset controller will read the mounted certificate to create the Gitlab client for SCM\/PR providers.\n\nThis can be achieved conveniently by setting `applicationsetcontroller.scm.root.ca.path` in the argocd-cmd-params-cm ConfigMap. Be sure to restart the ApplicationSet controller after setting this value.\n\n## Gitea\n\nThe Gitea mode uses the Gitea API to scan organizations in your instance.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  generators:\n  - scmProvider:\n      gitea:\n        # The Gitea owner to scan.\n        owner: myorg\n        # The Gitea instance URL.\n        api: https:\/\/gitea.mydomain.com\/\n        # If true, scan every branch of every repository. If false, scan only the default branch. Defaults to false.\n        allBranches: true\n        # Reference to a Secret containing an access token. (optional)\n        tokenRef:\n          secretName: gitea-token\n          key: token\n  template:\n  # ...\n```\n\n* `owner`: Required name of the Gitea organization to scan. If you have multiple organizations, use multiple generators.\n* `api`: The URL of the Gitea instance you are using.\n* `allBranches`: By default (false) the template will only be evaluated for the default branch of each repo. If this is true, every branch of every repository will be passed to the filters. If using this flag, you likely want to use a `branchMatch` filter.\n* `tokenRef`: A `Secret` name and key containing the Gitea access token to use for requests. If not specified, will make anonymous requests which have a lower rate limit and can only see public repositories.\n* `insecure`: Allow for self-signed TLS certificates.\n\nThis SCM provider does not yet support label filtering.\n\nAvailable clone protocols are `ssh` and `https`.\n\n## Bitbucket Server\n\nUse the Bitbucket Server API (1.0) to scan repos in a project. Note that Bitbucket Server is not the same as Bitbucket Cloud (API 2.0).\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  generators:\n  - scmProvider:\n      bitbucketServer:\n        project: myproject\n        # URL of the Bitbucket Server. Required.\n        api: https:\/\/mycompany.bitbucket.org\n        # If true, scan every branch of every repository. If false, scan only the default branch. Defaults to false.\n        allBranches: true\n        # Credentials for Basic authentication (App Password). Either basicAuth or bearerToken\n        # authentication is required to access private repositories\n        basicAuth:\n          # The username to authenticate with\n          username: myuser\n          # Reference to a Secret containing the password or personal access token.\n          passwordRef:\n            secretName: mypassword\n            key: password\n        # Credentials for Bearer Token (App Token) authentication. 
Either basicAuth or bearerToken\n        # authentication is required to access private repositories\n        bearerToken:\n          # Reference to a Secret containing the bearer token.\n          tokenRef:\n            secretName: repotoken\n            key: token\n        # If true, skips validating the SCM provider's TLS certificate - useful for self-signed certificates.\n        insecure: true\n        # Reference to a ConfigMap containing trusted CA certs - useful for self-signed certificates. (optional)\n        caRef:\n          configMapName: argocd-tls-certs-cm\n          key: bitbucket-ca\n        # Support for filtering by labels is TODO. Bitbucket server labels are not supported for PRs, but they are for repos\n  template:\n  # ...\n```\n\n* `project`: Required name of the Bitbucket project\n* `api`: Required URL to access the Bitbucket REST api.\n* `allBranches`: By default (false) the template will only be evaluated for the default branch of each repo. If this is true, every branch of every repository will be passed to the filters. If using this flag, you likely want to use a `branchMatch` filter.\n\nIf you want to access a private repository, you must also provide the credentials for Basic auth (this is the only auth supported currently):\n* `username`: The username to authenticate with. 
It only needs read access to the relevant repo.\n* `passwordRef`: A `Secret` name and key containing the password or personal access token to use for requests.\n\nIf you use a Bitbucket App Token, use the `bearerToken` section instead:\n* `tokenRef`: A `Secret` name and key containing the app token to use for requests.\n\nIn case of self-signed Bitbucket Server certificates, the following options can be useful:\n* `insecure`: Defaults to `false`. If `true`, skips checking the validity of the SCM's certificate - useful for self-signed TLS certificates.\n* `caRef`: Optional `ConfigMap` name and key containing the Bitbucket Server certificates to trust - useful for self-signed TLS certificates. This may reference the Argo CD ConfigMap that holds the trusted certs.\n\nAvailable clone protocols are `ssh` and `https`.\n\n## Azure DevOps\n\nUses the Azure DevOps API to look up eligible repositories based on a team project within an Azure DevOps organization.\nThe default Azure DevOps URL is `https:\/\/dev.azure.com`, but this can be overridden with the field `azureDevOps.api`.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  generators:\n  - scmProvider:\n      azureDevOps:\n        # The Azure DevOps organization.\n        organization: myorg\n        # URL to Azure DevOps. Optional. Defaults to https:\/\/dev.azure.com.\n        api: https:\/\/dev.azure.com\n        # If true, scan every branch of eligible repositories. If false, scan only the default branch of the eligible repositories. Defaults to false.\n        allBranches: true\n        # The team project within the specified Azure DevOps organization.\n        teamProject: myProject\n        # Reference to a Secret containing the Azure DevOps Personal Access Token (PAT) used for accessing Azure DevOps.\n        accessTokenRef:\n          secretName: azure-devops-scm\n          key: accesstoken\n  template:\n  # ...\n```\n\n* `organization`: Required. 
Name of the Azure DevOps organization.\n* `teamProject`: Required. The name of the team project within the specified `organization`.\n* `accessTokenRef`: Required. A `Secret` name and key containing the Azure DevOps Personal Access Token (PAT) to use for requests.\n* `api`: Optional. URL to Azure DevOps. If not set, `https:\/\/dev.azure.com` is used.\n* `allBranches`: Optional, default `false`. If `true`, scans every branch of eligible repositories. If `false`, checks only the default branch of the eligible repositories.\n\n## Bitbucket Cloud\n\nThe Bitbucket mode uses the Bitbucket API V2 to scan a workspace in bitbucket.org.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  generators:\n  - scmProvider:\n      bitbucket:\n        # The workspace id (slug).\n        owner: \"example-owner\"\n        # The user to use for basic authentication with an app password.\n        user: \"example-user\"\n        # If true, scan every branch of every repository. If false, scan only the main branch. Defaults to false.\n        allBranches: true\n        # Reference to a Secret containing an app password.\n        appPasswordRef:\n          secretName: appPassword\n          key: password\n  template:\n  # ...\n```\n\n* `owner`: The workspace ID (slug) to use when looking up repositories.\n* `user`: The user to use for authentication to the Bitbucket API V2 at bitbucket.org.\n* `allBranches`: By default (false) the template will only be evaluated for the main branch of each repo. If this is true, every branch of every repository will be passed to the filters. 
If using this flag, you likely want to use a `branchMatch` filter.\n* `appPasswordRef`: A `Secret` name and key containing the Bitbucket app password to use for requests.\n\nThis SCM provider does not yet support label filtering.\n\nAvailable clone protocols are `ssh` and `https`.\n\n## AWS CodeCommit (Alpha)\n\nUses AWS ResourceGroupsTagging and AWS CodeCommit APIs to scan repos across AWS accounts and regions.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  generators:\n    - scmProvider:\n        awsCodeCommit:\n          # AWS region to scan repos.\n          # Defaults to the region of the ApplicationSet controller's environment.\n          region: us-east-1\n          # AWS role to assume to scan repos.\n          # Defaults to the role of the ApplicationSet controller's environment.\n          role: arn:aws:iam::111111111111:role\/argocd-application-set-discovery\n          # If true, scan every branch of every repository. If false, scan only the main branch. Defaults to false.\n          allBranches: true\n          # AWS resource tags to filter repos with.\n          # See https:\/\/docs.aws.amazon.com\/resourcegroupstagging\/latest\/APIReference\/API_GetResources.html#resourcegrouptagging-GetResources-request-TagFilters for details.\n          # Defaults to no tagFilters, to include all repos in the region.\n          tagFilters:\n            - key: organization\n              value: platform-engineering\n            - key: argo-ready\n  template:\n  # ...\n```\n\n* `region`: (Optional) AWS region to scan repos. Defaults to the ApplicationSet controller's current region.\n* `role`: (Optional) AWS role to assume to scan repos. Defaults to the ApplicationSet controller's current role.\n* `allBranches`: (Optional) If `true`, scans every branch of eligible repositories. If `false`, checks only the default branch of the eligible repositories. 
Default `false`.\n* `tagFilters`: (Optional) A list of tagFilters to filter AWS CodeCommit repos with. See [AWS ResourceGroupsTagging API](https:\/\/docs.aws.amazon.com\/resourcegroupstagging\/latest\/APIReference\/API_GetResources.html#resourcegrouptagging-GetResources-request-TagFilters) for details. By default, no filter is included.\n\nThis SCM provider does not support the following features:\n\n* label filtering\n* `sha`, `short_sha` and `short_sha_7` template parameters\n\nAvailable clone protocols are `ssh`, `https` and `https-fips`.\n\n### AWS IAM Permission Considerations\n\nIn order to call AWS APIs to discover AWS CodeCommit repos, the ApplicationSet controller must be configured with a valid AWS environment, including the current AWS region and AWS credentials.\nAWS config can be provided via any of the standard options, such as the Instance Metadata Service (IMDS), a config file, environment variables, or IAM roles for service accounts (IRSA).\n\nDepending on whether `role` is provided in the `awsCodeCommit` property, the AWS IAM permission requirements differ.\n\n#### Discover AWS CodeCommit Repositories in the same AWS Account as ApplicationSet Controller\n\nWithout specifying `role`, the ApplicationSet controller will use its own AWS identity to scan AWS CodeCommit repos.\nThis is suitable for a simple setup in which all AWS CodeCommit repos reside in the same AWS account as your Argo CD.\n\nAs the ApplicationSet controller's AWS identity is used directly for repo discovery, it must be granted the following AWS permissions:\n\n* `tag:GetResources`\n* `codecommit:ListRepositories`\n* `codecommit:GetRepository`\n* `codecommit:GetFolder`\n* `codecommit:ListBranches`\n\n#### Discover AWS CodeCommit Repositories across AWS Accounts and Regions\n\nBy specifying `role`, the ApplicationSet controller will first assume the `role` and then use it for repo discovery.\nThis enables more complex use cases, such as discovering repos across different AWS accounts and regions.\n\nThe ApplicationSet controller 
AWS identity should be granted permission to assume the target AWS roles:\n\n* `sts:AssumeRole`\n\nAll target AWS roles must have the repo-discovery permissions:\n\n* `tag:GetResources`\n* `codecommit:ListRepositories`\n* `codecommit:GetRepository`\n* `codecommit:GetFolder`\n* `codecommit:ListBranches`\n\n## Filters\n\nFilters allow selecting which repositories to generate for. Each filter can declare one or more conditions, all of which must pass. If multiple filters are present, any one of them can match for a repository to be included. If no filters are specified, all repositories will be processed.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  generators:\n  - scmProvider:\n      filters:\n      # Include any repository starting with \"myapp\" AND including a Kustomize config AND labeled with \"deploy-ok\" ...\n      - repositoryMatch: ^myapp\n        pathsExist: [kubernetes\/kustomization.yaml]\n        labelMatch: deploy-ok\n      # ... OR include any repository starting with \"otherapp\" AND containing a Helm folder AND NOT containing the file disabledrepo.txt.\n      - repositoryMatch: ^otherapp\n        pathsExist: [helm]\n        pathsDoNotExist: [disabledrepo.txt]\n  template:\n  # ...\n```\n\n* `repositoryMatch`: A regexp matched against the repository name.\n* `pathsExist`: An array of paths within the repository that must exist. Can be a file or directory.\n* `pathsDoNotExist`: An array of paths within the repository that must not exist. Can be a file or directory.\n* `labelMatch`: A regexp matched against repository labels. 
If any label matches, the repository is included.\n* `branchMatch`: A regexp matched against branch names.\n\n## Template\n\nAs with all generators, several parameters are generated for use within the `ApplicationSet` resource template.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - scmProvider:\n    # ...\n  template:\n    metadata:\n      name: ''\n    spec:\n      source:\n        repoURL: ''\n        targetRevision: ''\n        path: kubernetes\/\n      project: default\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: default\n```\n\n* `organization`: The name of the organization the repository is in.\n* `repository`: The name of the repository.\n* `url`: The clone URL for the repository.\n* `branch`: The default branch of the repository.\n* `sha`: The Git commit SHA for the branch.\n* `short_sha`: The abbreviated Git commit SHA for the branch (8 chars, or the length of `sha` if it's shorter).\n* `short_sha_7`: The abbreviated Git commit SHA for the branch (7 chars, or the length of `sha` if it's shorter).\n* `labels`: A comma-separated list of repository labels (Gitea) or repository topics (GitLab and GitHub). Not supported by Bitbucket Cloud, Bitbucket Server, or Azure DevOps.\n* `branchNormalized`: The value of `branch` normalized to contain only lowercase alphanumeric characters, '-' or '.'.\n\n## Pass additional key-value pairs via `values` field\n\nYou may pass additional, arbitrary string key-value pairs via the `values` field of any SCM generator. Values added via the `values` field are added as `values.(field)`.\n\nIn this example, a `name` parameter value is passed. 
It is interpolated from `organization` and `repository` to generate a different template name.\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - scmProvider:\n      bitbucketServer:\n        project: myproject\n        api: https:\/\/mycompany.bitbucket.org\n        allBranches: true\n        basicAuth:\n          username: myuser\n          passwordRef:\n            secretName: mypassword\n            key: password\n      values:\n        name: \"-\"\n\n  template:\n    metadata:\n      name: ''\n    spec:\n      source:\n        repoURL: ''\n        targetRevision: ''\n        path: kubernetes\/\n      project: default\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: default\n```\n\n!!! note\n    The `values.` prefix is always prepended to values provided via `generators.scmProvider.values` field. Ensure you include this prefix in the parameter name within the `template` when using it.\n\nIn `values` we can also interpolate all fields set by the SCM generator as mentioned above.","site":"argocd","answers_cleaned":"  SCM Provider Generator  The SCM Provider generator uses the API of an SCMaaS provider  eg GitHub  to automatically discover repositories within an organization  This fits well with GitOps layout patterns that split microservices across many repositories      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    generators      scmProvider          Which protocol to clone using        cloneProtocol  ssh         See below for provider specific options        github                        cloneProtocol   Which protocol to use for the SCM URL  Default is provider specific but ssh if possible  Not all providers necessarily support all protocols  see provider documentation below for available options       note     Know the security implications 
of using SCM generators   Only admins may create ApplicationSets    Security md only admins may createupdatedelete applicationsets      to avoid leaking Secrets  and  only admins may create repos branches    Security md templated project field  if the      project  field of an ApplicationSet with an SCM generator is templated  to avoid granting management of     out of bounds resources      GitHub  The GitHub mode uses the GitHub API to scan an organization in either github com or GitHub Enterprise      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    generators      scmProvider        github            The GitHub organization to scan          organization  myorg           For GitHub Enterprise          api  https   git example com            If true  scan every branch of every repository  If false  scan only the default branch  Defaults to false          allBranches  true           Reference to a Secret containing an access token   optional          tokenRef            secretName  github token           key  token            optional  use a GitHub App to access the API instead of a PAT          appSecretName  gh app repo creds   template                  organization   Required name of the GitHub organization to scan  If you have multiple organizations  use multiple generators     api   If using GitHub Enterprise  the URL to access it     allBranches   By default  false  the template will only be evaluated for the default branch of each repo  If this is true  every branch of every repository will be passed to the filters  If using this flag  you likely want to use a  branchMatch  filter     tokenRef   A  Secret  name and key containing the GitHub access token to use for requests  If not specified  will make anonymous requests which have a lower rate limit and can only see public repositories     appSecretName   A  Secret  name containing a GitHub App secret in  repo creds format  repo creds     repo creds      declarative 
setup md repository credentials  For label filtering  the repository topics are used   Available clone protocols are  ssh  and  https       Gitlab  The GitLab mode uses the GitLab API to scan and organization in either gitlab com or self hosted GitLab      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    generators      scmProvider        gitlab            The base GitLab group to scan   You can either use the group id or the full namespaced path          group   8675309            For self hosted GitLab          api  https   gitlab example com            If true  scan every branch of every repository  If false  scan only the default branch  Defaults to false          allBranches  true           If true  recurses through subgroups  If false  it searches only in the base group  Defaults to false          includeSubgroups  true           If true and includeSubgroups is also true  include Shared Projects  which is gitlab API default            If false only search Projects under the same path  Defaults to true          includeSharedProjects  false           filter projects by topic  A single topic is supported by Gitlab API  Defaults to     all topics           topic   my topic            Reference to a Secret containing an access token   optional          tokenRef            secretName  gitlab token           key  token           If true  skips validating the SCM provider s TLS certificate   useful for self signed certificates          insecure  false           Reference to a ConfigMap containing trusted CA certs   useful for self signed certificates   optional          caRef            configMapName  argocd tls certs cm           key  gitlab ca   template                  group   Required name of the base GitLab group to scan  If you have multiple base groups  use multiple generators     api   If using self hosted GitLab  the URL to access it     allBranches   By default  false  the template will only be evaluated for the 
default branch of each repo  If this is true  every branch of every repository will be passed to the filters  If using this flag  you likely want to use a  branchMatch  filter     includeSubgroups   By default  false  the controller will only search for repos directly in the base group  If this is true  it will recurse through all the subgroups searching for repos to scan     includeSharedProjects   If true and includeSubgroups is also true  include Shared Projects  which is gitlab API default  If false only search Projects under the same path  In general most would want the behaviour when set to false  Defaults to true     topic   filter projects by topic  A single topic is supported by Gitlab API  Defaults to     all topics      tokenRef   A  Secret  name and key containing the GitLab access token to use for requests  If not specified  will make anonymous requests which have a lower rate limit and can only see public repositories     insecure   By default  false    Skip checking the validity of the SCM s certificate   useful for self signed TLS certificates     caRef   Optional  ConfigMap  name and key containing the GitLab certificates to trust   useful for self signed TLS certificates  Possibly reference the ArgoCD CM holding the trusted certs   For label filtering  the repository topics are used   Available clone protocols are  ssh  and  https        Self signed TLS Certificates  As a preferable alternative to setting  insecure  to true  you can configure self signed TLS certificates for Gitlab   In order for a self signed TLS certificate be used by an ApplicationSet s SCM   PR Gitlab Generator  the certificate needs to be mounted on the applicationset controller  The path of the mounted certificate must be explicitly set using the environment variable  ARGOCD APPLICATIONSET CONTROLLER SCM ROOT CA PATH  or alternatively using parameter    scm root ca path   The applicationset controller will read the mounted certificate to create the Gitlab client for SCM PR 
Providers  This can be achieved conveniently by setting  applicationsetcontroller scm root ca path  in the argocd cmd params cm ConfigMap  Be sure to restart the ApplicationSet controller after setting this value      Gitea  The Gitea mode uses the Gitea API to scan organizations in your instance     yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    generators      scmProvider        gitea            The Gitea owner to scan          owner  myorg           The Gitea instance url         api  https   gitea mydomain com            If true  scan every branch of every repository  If false  scan only the default branch  Defaults to false          allBranches  true           Reference to a Secret containing an access token   optional          tokenRef            secretName  gitea token           key  token   template                  owner   Required name of the Gitea organization to scan  If you have multiple organizations  use multiple generators     api   The URL of the Gitea instance you are using     allBranches   By default  false  the template will only be evaluated for the default branch of each repo  If this is true  every branch of every repository will be passed to the filters  If using this flag  you likely want to use a  branchMatch  filter     tokenRef   A  Secret  name and key containing the Gitea access token to use for requests  If not specified  will make anonymous requests which have a lower rate limit and can only see public repositories     insecure   Allow for self signed TLS certificates   This SCM provider does not yet support label filtering  Available clone protocols are  ssh  and  https       Bitbucket Server  Use the Bitbucket Server API  1 0  to scan repos in a project  Note that Bitbucket Server is not to same as Bitbucket Cloud  API 2 0      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    generators      scmProvider        bitbucketServer          project  
myproject           URL of the Bitbucket Server  Required          api  https   mycompany bitbucket org           If true  scan every branch of every repository  If false  scan only the default branch  Defaults to false          allBranches  true           Credentials for Basic authentication  App Password   Either basicAuth or bearerToken           authentication is required to access private repositories         basicAuth              The username to authenticate with           username  myuser             Reference to a Secret containing the password or personal access token            passwordRef              secretName  mypassword             key  password           Credentials for Bearer Token  App Token  authentication  Either basicAuth or bearerToken           authentication is required to access private repositories         bearerToken              Reference to a Secret containing the bearer token            tokenRef              secretName  repotoken             key  token           If true  skips validating the SCM provider s TLS certificate   useful for self signed certificates          insecure  true           Reference to a ConfigMap containing trusted CA certs   useful for self signed certificates   optional          caRef            configMapName  argocd tls certs cm           key  bitbucket ca           Support for filtering by labels is TODO  Bitbucket server labels are not supported for PRs  but they are for repos   template                  project   Required name of the Bitbucket project    api   Required URL to access the Bitbucket REST api     allBranches   By default  false  the template will only be evaluated for the default branch of each repo  If this is true  every branch of every repository will be passed to the filters  If using this flag  you likely want to use a  branchMatch  filter   If you want to access a private repository  you must also provide the credentials for Basic auth  this is the only auth supported currently      
username   The username to authenticate with  It only needs read access to the relevant repo     passwordRef   A  Secret  name and key containing the password or personal access token to use for requests   In case of Bitbucket App Token  go with  bearerToken  section     tokenRef   A  Secret  name and key containing the app token to use for requests   In case self signed BitBucket Server certificates  the following options can be usefully     insecure   By default  false    Skip checking the validity of the SCM s certificate   useful for self signed TLS certificates     caRef   Optional  ConfigMap  name and key containing the BitBucket server certificates to trust   useful for self signed TLS certificates  Possibly reference the ArgoCD CM holding the trusted certs   Available clone protocols are  ssh  and  https       Azure DevOps  Uses the Azure DevOps API to look up eligible repositories based on a team project within an Azure DevOps organization  The default Azure DevOps URL is  https   dev azure com   but this can be overridden with the field  azureDevOps api       yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    generators      scmProvider        azureDevOps            The Azure DevOps organization          organization  myorg           URL to Azure DevOps  Optional  Defaults to https   dev azure com          api  https   dev azure com           If true  scan every branch of eligible repositories  If false  check only the default branch of the eligible repositories  Defaults to false          allBranches  true           The team project within the specified Azure DevOps organization          teamProject  myProject           Reference to a Secret containing the Azure DevOps Personal Access Token  PAT  used for accessing Azure DevOps          accessTokenRef            secretName  azure devops scm           key  accesstoken   template                  organization   Required  Name of the Azure DevOps organization     
teamProject   Required  The name of the team project within the specified  organization      accessTokenRef   Required  A  Secret  name and key containing the Azure DevOps Personal Access Token  PAT  to use for requests     api   Optional  URL to Azure DevOps  If not set   https   dev azure com  is used     allBranches   Optional  default  false   If  true   scans every branch of eligible repositories  If  false   check only the default branch of the eligible repositories      Bitbucket Cloud  The Bitbucket mode uses the Bitbucket API V2 to scan a workspace in bitbucket org      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  myapps spec    generators      scmProvider        bitbucket            The workspace id  slug             owner   example owner            The user to use for basic authentication with an app password          user   example user            If true  scan every branch of every repository  If false  scan only the main branch  Defaults to false          allBranches  true           Reference to a Secret containing an app password          appPasswordRef            secretName  appPassword           key  password   template                  owner   The workspace ID  slug  to use when looking up repositories     user   The user to use for authentication to the Bitbucket API V2 at bitbucket org     allBranches   By default  false  the template will only be evaluated for the main branch of each repo  If this is true  every branch of every repository will be passed to the filters  If using this flag  you likely want to use a  branchMatch  filter     appPasswordRef   A  Secret  name and key containing the bitbucket app password to use for requests   This SCM provider does not yet support label filtering  Available clone protocols are  ssh  and  https       AWS CodeCommit  Alpha   Uses AWS ResourceGroupsTagging and AWS CodeCommit APIs to scan repos across AWS accounts and regions      yaml apiVersion  argoproj io v1alpha1 kind 
 ApplicationSet metadata    name  myapps spec    generators        scmProvider          awsCodeCommit              AWS region to scan repos              default to the environmental region from ApplicationSet controller            region  us east 1             AWS role to assume to scan repos              default to the environmental role from ApplicationSet controller            role  arn aws iam  111111111111 role argocd application set discovery             If true  scan every branch of every repository  If false  scan only the main branch  Defaults to false            allBranches  true             AWS resource tags to filter repos with              see https   docs aws amazon com resourcegroupstagging latest APIReference API GetResources html resourcegrouptagging GetResources request TagFilters for details             default to no tagFilters  to include all repos in the region            tagFilters                key  organization               value  platform engineering               key  argo ready   template                  region    Optional  AWS region to scan repos  By default  use ApplicationSet controller s current region     role    Optional  AWS role to assume to scan repos  By default  use ApplicationSet controller s current role     allBranches    Optional  If  true   scans every branch of eligible repositories  If  false   check only the default branch of the eligible repositories  Default  false      tagFilters    Optional  A list of tagFilters to filter AWS CodeCommit repos with  See  AWS ResourceGroupsTagging API  https   docs aws amazon com resourcegroupstagging latest APIReference API GetResources html resourcegrouptagging GetResources request TagFilters  for details  By default  no filter is included   This SCM provider does not support the following features    label filtering    sha    short sha  and  short sha 7  template parameters  Available clone protocols are  ssh    https  and  https fips        AWS IAM Permission Considerations  
In order to call AWS APIs to discover AWS CodeCommit repos  ApplicationSet controller must be configured with valid environmental AWS config  like current AWS region and AWS credentials  AWS config can be provided via all standard options  like Instance Metadata Service  IMDS   config file  environment variables  or IAM roles for service accounts  IRSA    Depending on whether  role  is provided in  awsCodeCommit  property  AWS IAM permission requirement is different        Discover AWS CodeCommit Repositories in the same AWS Account as ApplicationSet Controller  Without specifying  role   ApplicationSet controller will use its own AWS identity to scan AWS CodeCommit repos  This is suitable when you have a simple setup that all AWS CodeCommit repos reside in the same AWS account as your Argo CD   As the ApplicationSet controller AWS identity is used directly for repo discovery  it must be granted below AWS permissions      tag GetResources     codecommit ListRepositories     codecommit GetRepository     codecommit GetFolder     codecommit ListBranches        Discover AWS CodeCommit Repositories across AWS Accounts and Regions  By specifying  role   ApplicationSet controller will first assume the  role   and use it for repo discovery  This enables more complicated use cases to discover repos from different AWS accounts and regions   The ApplicationSet controller AWS identity should be granted permission to assume target AWS roles      sts AssumeRole   All AWS roles must have repo discovery related permissions      tag GetResources     codecommit ListRepositories     codecommit GetRepository     codecommit GetFolder     codecommit ListBranches      Filters  Filters allow selecting which repositories to generate for  Each filter can declare one or more conditions  all of which must pass  If multiple filters are present  any can match for a repository to be included  If no filters are specified  all repositories will be processed      yaml apiVersion  argoproj io 
v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  generators:\n  - scmProvider:\n      filters:\n      # Include any repository starting with \"myapp\" AND including a Kustomize config AND labeled with \"deploy-ok\" ...\n      - repositoryMatch: ^myapp\n        pathsExist: [kubernetes\/kustomization.yaml]\n        labelMatch: deploy-ok\n      # ... OR include any repository starting with \"otherapp\" AND a Helm folder and doesn't have file disabledrepo.txt\n      - repositoryMatch: ^otherapp\n        pathsExist: [helm]\n        pathsDoNotExist: [disabledrepo.txt]\n  template:\n  # ...\n```\n\n* `repositoryMatch`: A regexp matched against the repository name.\n* `pathsExist`: An array of paths within the repository that must exist. Can be a file or directory.\n* `pathsDoNotExist`: An array of paths within the repository that must not exist. Can be a file or directory.\n* `labelMatch`: A regexp matched against repository labels. If any label matches, the repository is included.\n* `branchMatch`: A regexp matched against branch names.\n\n## Template\n\nAs with all generators, several parameters are generated for use within the `ApplicationSet` resource template.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - scmProvider:\n    # ...\n  template:\n    metadata:\n      name: '{{ .repository }}'\n    spec:\n      source:\n        repoURL: '{{ .url }}'\n        targetRevision: '{{ .branch }}'\n        path: kubernetes\/\n      project: default\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: default\n```\n\n* `organization`: The name of the organization the repository is in.\n* `repository`: The name of the repository.\n* `url`: The clone URL for the repository.\n* `branch`: The default branch of the repository.\n* `sha`: The Git commit SHA for the branch.\n* `short_sha`: The abbreviated Git commit SHA for the branch (8 chars, or the length of `sha` if it's shorter).\n* `short_sha_7`: The abbreviated Git commit SHA for the branch (7 chars, or the length of `sha` if it's shorter).\n* `labels`: A comma-separated list of repository labels (in case of Gitea), repository topics (in case of Gitlab and Github). Not supported by Bitbucket Cloud, Bitbucket Server, or Azure DevOps.\n* `branchNormalized`: The value of `branch` normalized to contain only lowercase alphanumeric characters, '-' or '.'.\n\n## Pass additional key-value pairs via `values` field\n\nYou may pass additional, arbitrary string key-value pairs via the `values` field of any SCM generator. Values added via the `values` field are added as `values.(field)`.\n\nIn this example, a `name` parameter value is passed. It is interpolated from `organization` and `repository` to generate a different template name.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: myapps\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - scmProvider:\n      bitbucketServer:\n        project: myproject\n        api: https:\/\/mycompany.bitbucket.org\n        allBranches: true\n        basicAuth:\n          username: myuser\n          passwordRef:\n            secretName: mypassword\n            key: password\n      values:\n        name: '{{ .organization }}-{{ .repository }}'\n  template:\n    metadata:\n      name: '{{ .values.name }}'\n    spec:\n      source:\n        repoURL: '{{ .url }}'\n        targetRevision: '{{ .branch }}'\n        path: kubernetes\/\n      project: default\n      destination:\n        server: https:\/\/kubernetes.default.svc\n        namespace: default\n```\n\n!!! note\n    The `values.` prefix is always prepended to values provided via `generators.scmProvider.values` field. Ensure you include this prefix in the parameter name within the `template` when using it.\n\nIn `values` we can also interpolate all fields set by the SCM generator as mentioned above."}
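The `values.` prefix rule described above is easy to get wrong, so here is a minimal fragment isolating just the interpolation flow. This is an illustrative sketch, not a complete ApplicationSet: the organization name and the `github` provider block are assumptions for the example.

```yaml
# Sketch: how a `values` entry flows into the template (illustrative only).
# The generator-side key is written bare ("name"), but the template must
# read it back with the mandatory `values.` prefix.
generators:
- scmProvider:
    github:
      organization: myorg        # assumed provider/org for illustration
    values:
      name: '{{ .organization }}-{{ .repository }}'
template:
  metadata:
    name: '{{ .values.name }}'   # note the values. prefix
```

Omitting the `values.` prefix in the template is a common mistake; with `goTemplateOptions: ["missingkey=error"]` set, the render fails loudly instead of producing an empty name.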
{"questions":"argocd feature that allows you to control the order in which the ApplicationSet controller will create or update the Applications owned by an ApplicationSet resource It may be removed in future releases or modified in backwards incompatible ways Progressive Syncs warning Alpha Feature Since v2 6 0 The Progressive Syncs feature set is intended to be light and flexible The feature only interacts with the health of managed Applications It is not intended to support direct integrations with other Rollout controllers such as the native ReplicaSet controller or Argo Rollouts Use Cases This is an experimental","answers":"# Progressive Syncs\n\n!!! warning \"Alpha Feature (Since v2.6.0)\"\n    This is an experimental, [alpha-quality](https:\/\/github.com\/argoproj\/argoproj\/blob\/main\/community\/feature-status.md#alpha) \n    feature that allows you to control the order in which the ApplicationSet controller will create or update the Applications \n    owned by an ApplicationSet resource. It may be removed in future releases or modified in backwards-incompatible ways.\n\n## Use Cases\nThe Progressive Syncs feature set is intended to be light and flexible. The feature only interacts with the health of managed Applications. It is not intended to support direct integrations with other Rollout controllers (such as the native ReplicaSet controller or Argo Rollouts).\n\n* Progressive Syncs watch for the managed Application resources to become \"Healthy\" before proceeding to the next stage.\n* Deployments, DaemonSets, StatefulSets, and [Argo Rollouts](https:\/\/argoproj.github.io\/argo-rollouts\/) are all supported, because the Application enters a \"Progressing\" state while pods are being rolled out. In fact, any resource with a health check that can report a \"Progressing\" status is supported.\n* [Argo CD Resource Hooks](..\/..\/user-guide\/resource_hooks.md) are supported. 
We recommend this approach for users that need advanced functionality when an Argo Rollout cannot be used, such as smoke testing after a DaemonSet change.\n\n## Enabling Progressive Syncs\nAs an experimental feature, progressive syncs must be explicitly enabled, in one of these ways.\n\n1. Pass `--enable-progressive-syncs` to the ApplicationSet controller args.\n1. Set `ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS=true` in the ApplicationSet controller environment variables.\n1. Set `applicationsetcontroller.enable.progressive.syncs: true` in the Argo CD `argocd-cmd-params-cm` ConfigMap.\n\n## Strategies\n\n* AllAtOnce (default)\n* RollingSync\n\n### AllAtOnce\nThis default Application update behavior is unchanged from the original ApplicationSet implementation.\n\nAll Applications managed by the ApplicationSet resource are updated simultaneously when the ApplicationSet is updated.\n\n### RollingSync\nThis update strategy allows you to group Applications by labels present on the generated Application resources.\nWhen the ApplicationSet changes, the changes will be applied to each group of Application resources sequentially.\n\n* Application groups are selected using their labels and `matchExpressions`.\n* All `matchExpressions` must be true for an Application to be selected (multiple expressions match with AND behavior).\n* The `In` and `NotIn` operators must match at least one value to be considered true (OR behavior).\n* The `NotIn` operator has priority in the event that both a `NotIn` and `In` operator produce a match.\n* All Applications in each group must become Healthy before the ApplicationSet controller will proceed to update the next group of Applications.\n* The number of simultaneous Application updates in a group will not exceed its `maxUpdate` parameter (default is 100%, unbounded).\n* RollingSync will capture external changes outside the ApplicationSet resource, since it relies on watching the OutOfSync status of the managed 
Applications.\n* RollingSync will force all generated Applications to have autosync disabled. Warnings are printed in the applicationset-controller logs for any Application specs with an automated syncPolicy enabled.\n* Sync operations are triggered the same way as if they were triggered by the UI or CLI (by directly setting the `operation` status field on the Application resource). This means that a RollingSync will respect sync windows just as if a user had clicked the \"Sync\" button in the Argo UI.\n* When a sync is triggered, the sync is performed with the same syncPolicy configured for the Application. For example, this preserves the Application's retry settings.\n* If an Application is considered \"Pending\" for `applicationsetcontroller.default.application.progressing.timeout` seconds, the Application is automatically moved to Healthy status (default 300).\n* If an Application is not selected in any step, it will be excluded from the rolling sync and needs to be manually synced through the CLI or UI.\n\n#### Example\nThe following example illustrates how to stage a progressive sync over Applications with explicitly configured environment labels.\n\nOnce a change is pushed, the following will happen in order.\n\n* All `env-dev` Applications will be updated simultaneously.\n* The rollout will wait for all `env-qa` Applications to be manually synced via the `argocd` CLI or by clicking the Sync button in the UI.\n* 10% of all `env-prod` Applications will be updated at a time until all `env-prod` Applications have been updated.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: guestbook\nspec:\n  generators:\n  - list:\n      elements:\n      - cluster: engineering-dev\n        url: https:\/\/1.2.3.4\n        env: env-dev\n      - cluster: engineering-qa\n        url: https:\/\/2.4.6.8\n        env: env-qa\n      - cluster: engineering-prod\n        url: https:\/\/9.8.7.6\/\n        env: env-prod\n  strategy:\n    type: 
RollingSync\n    rollingSync:\n      steps:\n        - matchExpressions:\n            - key: envLabel\n              operator: In\n              values:\n                - env-dev\n          #maxUpdate: 100%  # if undefined, all applications matched are updated together (default is 100%)\n        - matchExpressions:\n            - key: envLabel\n              operator: In\n              values:\n                - env-qa\n          maxUpdate: 0      # if 0, no matched applications will be updated\n        - matchExpressions:\n            - key: envLabel\n              operator: In\n              values:\n                - env-prod\n          maxUpdate: 10%    # maxUpdate supports both integer and percentage string values (rounds down, but floored at 1 Application for >0%)\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  template:\n    metadata:\n      name: '{{.cluster}}-guestbook'\n      labels:\n        envLabel: '{{.env}}'\n    spec:\n      project: my-project\n      source:\n        repoURL: https:\/\/github.com\/infra-team\/cluster-deployments.git\n        targetRevision: HEAD\n        path: guestbook\/\n      destination:\n        server: '{{.url}}'\n        namespace: guestbook\n```","site":"argocd"}
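The `maxUpdate` comment in the example above states the rounding rule: percentage values round down, but any value above 0% updates at least one Application. That arithmetic can be sketched in Python; the function below is illustrative, not Argo CD's actual implementation.

```python
def max_update_count(max_update, total_apps: int) -> int:
    """Illustrative sketch of resolving a RollingSync maxUpdate value to an
    Application count (not Argo CD source code).

    Integers are used as-is, capped at the group size. Percentage strings
    round down, but any value > 0% is floored at 1 Application.
    """
    if isinstance(max_update, int):
        return min(max_update, total_apps)
    percent = int(max_update.rstrip("%"))
    if percent == 0:
        return 0
    return max(1, (total_apps * percent) // 100)

print(max_update_count("10%", 25))  # 2: two env-prod apps per wave
print(max_update_count("10%", 5))   # 1: rounds down to 0, floored at 1
print(max_update_count(0, 8))       # 0: matched apps are not auto-updated
```

So with `maxUpdate: 10%` and only a handful of `env-prod` Applications, the rollout still progresses one Application at a time rather than stalling at zero.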
{"questions":"argocd Use cases supported by the ApplicationSet controller Use case cluster add ons An initial design focus of the ApplicationSet controller was to allow an infrastructure team s Kubernetes cluster administrators the ability to automatically create a large diverse set of Argo CD Applications across a significant number of clusters and manage those Applications as a single unit One example of why this is needed is the cluster add on use case With the concept of generators the ApplicationSet controller provides a powerful set of tools to automate the templating and modification of Argo CD Applications Generators produce template parameter data from a variety of sources including Argo CD clusters and Git repositories supporting and enabling new use cases While these tools may be utilized for whichever purpose is desired here are some of the specific use cases that the ApplicationSet controller was designed to support","answers":"# Use cases supported by the ApplicationSet controller\n\nWith the concept of generators, the ApplicationSet controller provides a powerful set of tools to automate the templating and modification of Argo CD Applications. Generators produce template parameter data from a variety of sources, including Argo CD clusters and Git repositories, supporting and enabling new use cases.\n\nWhile these tools may be utilized for whichever purpose is desired, here are some of the specific use cases that the ApplicationSet controller was designed to support.\n\n## Use case: cluster add-ons\n\nAn initial design focus of the ApplicationSet controller was to allow an infrastructure team's Kubernetes cluster administrators the ability to automatically create a large, diverse set of Argo CD Applications, across a significant number of clusters, and manage those Applications as a single unit. 
One example of why this is needed is the *cluster add-on use case*.\n\nIn the *cluster add-on use case*, an administrator is responsible for provisioning cluster add-ons to one or more Kubernetes clusters: cluster-addons are operators such as the [Prometheus operator](https:\/\/github.com\/prometheus-operator\/prometheus-operator), or controllers such as the [argo-workflows controller](https:\/\/argoproj.github.io\/argo-workflows\/) (part of the [Argo ecosystem](https:\/\/argoproj.github.io\/)).\n\nTypically these add-ons are required by the applications of development teams (as tenants of a multi-tenant cluster, for instance, they may wish to provide metrics data to Prometheus or orchestrate workflows via Argo Workflows).\n\nSince installing these add-ons requires cluster-level permissions not held by individual development teams, installation is the responsibility of the infrastructure\/ops team of an organization, and within a large organization this team might be responsible for tens, hundreds, or thousands of Kubernetes clusters (with new clusters being added\/modified\/removed on a regular basis).\n\nThe need to scale across a large number of clusters, and automatically respond to the lifecycle of new clusters, necessarily mandates some form of automation. 
A further requirement would be allowing the targeting of add-ons to a subset of clusters using specific criteria (eg staging vs production).\n\n![Cluster add-on diagram](..\/..\/assets\/applicationset\/Use-Cases\/Cluster-Add-Ons.png)\n\nIn this example, the infrastructure team maintains a Git repository containing application manifests for the Argo Workflows controller, and Prometheus operator.\n\nThe infrastructure team would like to deploy both these add-ons to a large number of clusters, using Argo CD, and likewise wishes to easily manage the creation\/deletion of new clusters.\n\nIn this use case, we may use either the List, Cluster, or Git generators of the ApplicationSet controller to provide the required behaviour:\n\n- *List generator*: Administrators maintain two `ApplicationSet` resources, one for each application (Workflows and Prometheus), and include the list of clusters they wish to target within the List generator elements of each.\n    - With this generator, adding\/removing clusters requires manually updating the `ApplicationSet` resource's list elements.\n- *Cluster generator*: Administrators maintain two `ApplicationSet` resources, one for each application (Workflows and Prometheus), and ensure that all new clusters are defined within Argo CD.\n    - Since the Cluster generator automatically detects and targets the clusters defined within Argo CD, [adding\/removing a cluster from Argo CD](..\/..\/declarative-setup\/#clusters) will automatically cause Argo CD Application resources (for each application) to be created by the ApplicationSet controller.\n- *Git generator*: The Git generator is the most flexible\/powerful of the generators, and thus there are a number of different ways to tackle this use case. Here are a couple:\n    - Using the Git generator `files` field: A list of clusters is kept as a JSON file within a Git repository. 
Updates to the JSON file, through Git commits, cause new clusters to be added\/removed.\n    - Using the Git generator `directories` field: For each target cluster, a corresponding directory of that name exists in a Git repository. Adding\/modifying a directory, through Git commits, would trigger an update for the cluster that shares the directory name.\n\nSee the [generators section](Generators.md) for details on each of the generators.\n\n## Use case: monorepos\n\nIn the *monorepo use case*, Kubernetes cluster administrators manage the entire state of a single Kubernetes cluster from a single Git repository.\n\nManifest changes merged into the Git repository should automatically deploy to the cluster.\n\n![Monorepo diagram](..\/..\/assets\/applicationset\/Use-Cases\/Monorepos.png)\n\nIn this example, the infrastructure team maintains a Git repository containing application manifests for an Argo Workflows controller, and a Prometheus operator. Independent development teams have also added additional services they wish to deploy to the cluster.\n\nChanges made to the Git repository -- for example, updating the version of a deployed artifact -- should automatically cause that update to be applied to the corresponding Kubernetes cluster by Argo CD.\n\nThe Git generator may be used to support this use case:\n\n- The Git generator `directories` field may be used to specify particular subdirectories (using wildcards) containing the individual applications to deploy.\n- The Git generator `files` field may reference Git repository files containing JSON metadata, with that metadata describing the individual applications to deploy.\n- See the Git generator documentation for more details.\n\n## Use case: self-service of Argo CD Applications on multitenant clusters\n\nThe *self-service use case* seeks to allow developers (as the end users of a multitenant Kubernetes cluster) greater flexibility to:\n\n- Deploy multiple applications to a single cluster, in an automated 
fashion, using Argo CD\n- Deploy to multiple clusters, in an automated fashion, using Argo CD\n- But, in both cases, to empower those developers to be able to do so without needing to involve a cluster administrator (to create the necessary Argo CD Applications\/AppProject resources on their behalf)\n\nOne potential solution to this use case is for development teams to define Argo CD `Application` resources within a Git repository (containing the manifests they wish to deploy), in an [app-of-apps pattern](..\/..\/cluster-bootstrapping\/#app-of-apps-pattern), and for cluster administrators to then review\/accept changes to this repository via merge requests.\n\nWhile this might sound like an effective solution, a major disadvantage is that a high degree of trust\/scrutiny is needed to accept commits containing Argo CD `Application` spec changes. This is because there are many sensitive fields contained within the `Application` spec, including `project`, `cluster`, and `namespace`. An inadvertent merge might allow applications to access namespaces\/clusters where they did not belong.\n\nThus in the self-service use case, administrators desire to only allow some fields of the `Application` spec to be controlled by developers (eg the Git source repository) but not other fields (eg the target namespace, or target cluster, should be restricted).\n\nFortunately, the ApplicationSet controller presents an alternative solution to this use case: cluster administrators may safely create an `ApplicationSet` resource containing a Git generator that restricts deployment of application resources to fixed values with the `template` field, while allowing customization of 'safe' fields by developers, at will.\n\nThe `config.json` files contain information describing the app.\n\n```json\n{\n  (...)\n  \"app\": {\n    \"source\": \"https:\/\/github.com\/argoproj\/argo-cd\",\n    \"revision\": \"HEAD\",\n    \"path\": 
\"applicationset\/examples\/git-generator-files-discovery\/apps\/guestbook\"\n  }\n  (...)\n}\n```\n\n```yaml\nkind: ApplicationSet\n# (...)\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - git:\n      repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n      files:\n      - path: \"apps\/**\/config.json\"\n  template:\n    spec:\n      project: dev-team-one # project is restricted\n      source:\n        # developers may customize app details using JSON files from above repo URL\n        repoURL: \n        targetRevision: \n        path: \n      destination:\n        name: production-cluster # cluster is restricted\n        namespace: dev-team-one # namespace is restricted\n```\nSee the [Git generator](Generators-Git.md) for more details.","site":"argocd","answers_cleaned":"  Use cases supported by the ApplicationSet controller  With the concept of generators  the ApplicationSet controller provides a powerful set of tools to automate the templating and modification of Argo CD Applications  Generators produce template parameter data from a variety of sources  including Argo CD clusters and Git repositories  supporting and enabling new use cases   While these tools may be utilized for whichever purpose is desired  here are some of the specific use cases that the ApplicationSet controller was designed to support      Use case  cluster add ons  An initial design focus of the ApplicationSet controller was to allow an infrastructure team s Kubernetes cluster administrators the ability to automatically create a large  diverse set of Argo CD Applications  across a significant number of clusters  and manage those Applications as a single unit  One example of why this is needed is the  cluster add on use case    In the  cluster add on use case   an administrator is responsible for provisioning cluster add ons to one or more Kubernetes clusters  cluster addons are operators such as the  Prometheus operator  https   github com 
prometheus operator prometheus operator   or controllers such as the  argo workflows controller  https   argoproj github io argo workflows    part of the  Argo ecosystem  https   argoproj github io      Typically these add ons are required by the applications of development teams  as tenants of a multi tenant cluster  for instance  they may wish to provide metrics data to Prometheus or orchestrate workflows via Argo Workflows    Since installing these add ons requires cluster level permissions not held by individual development teams  installation is the responsibility of the infrastructure ops team of an organization  and within a large organization this team might be responsible for tens  hundreds  or thousands of Kubernetes clusters  with new clusters being added modified removed on a regular basis    The need to scale across a large number of clusters  and automatically respond to the lifecycle of new clusters  necessarily mandates some form of automation  A further requirement would be allowing the targeting of add ons to a subset of clusters using specific criteria  eg staging vs production      Cluster add on diagram        assets applicationset Use Cases Cluster Add Ons png   In this example  the infrastructure team maintains a Git repository containing application manifests for the Argo Workflows controller  and Prometheus operator   The infrastructure team would like to deploy both these add on to a large number of clusters  using Argo CD  and likewise wishes to easily manage the creation deletion of new clusters   In this use case  we may use either the List  Cluster  or Git generators of the ApplicationSet controller to provide the required behaviour      List generator   Administrators maintain two  ApplicationSet  resources  one for each application  Workflows and Prometheus   and include the list of clusters they wish to target within the List generator elements of each        With this generator  adding removing clusters requires manually updating 
the `ApplicationSet` resource's list elements.\n- **Cluster generator**: Administrators maintain two `ApplicationSet` resources, one for each application (Workflows and Prometheus), and ensure that all new clusters are defined within Argo CD. Since the Cluster generator automatically detects and targets the clusters defined within Argo CD, adding\/removing a cluster from Argo CD (via declarative setup of clusters) will automatically cause Argo CD Application resources, for each application, to be created by the ApplicationSet controller.\n- **Git generator**: The Git generator is the most flexible\/powerful of the generators, and thus there are a number of different ways to tackle this use case. Here are a couple:\n    - Using the Git generator `files` field: a list of clusters is kept as a JSON file within a Git repository. Updates to the JSON file, through Git commits, cause new clusters to be added\/removed.\n    - Using the Git generator `directories` field: for each target cluster, a corresponding directory of that name exists in a Git repository. Adding\/modifying a directory, through Git commits, would trigger an update for the cluster that shares the directory name.\n\nSee the [generators section](Generators.md) for details on each of the generators.\n\n## Use case: monorepos\n\nIn the monorepo use case, Kubernetes cluster administrators manage the entire state of a single Kubernetes cluster from a single Git repository. Manifest changes merged into the Git repository should automatically deploy to the cluster.\n\n![Monorepo diagram](assets\/applicationset\/Use-Cases\/Monorepos.png)\n\nIn this example, the infrastructure team maintains a Git repository containing application manifests for an Argo Workflows controller and a Prometheus operator. Independent development teams have also added additional services they wish to deploy to the cluster.\n\nChanges made to the Git repository (for example, updating the version of a deployed artifact) should automatically 
cause that update to be applied to the corresponding Kubernetes cluster by Argo CD.\n\nThe Git generator may be used to support this use case:\n- The Git generator `directories` field may be used to specify particular subdirectories (using wildcards) containing the individual applications to deploy.\n- The Git generator `files` field may reference Git repository files containing JSON metadata, with that metadata describing the individual applications to deploy.\n- See the Git generator documentation for more details.\n\n## Use case: self-service of Argo CD Applications on multitenant clusters\n\nThe self-service use case seeks to allow developers, as the end users of a multitenant Kubernetes cluster, greater flexibility to:\n- Deploy multiple applications to a single cluster, in an automated fashion, using Argo CD\n- Deploy to multiple clusters, in an automated fashion, using Argo CD\n\nBut, in both cases, to empower those developers to do so without needing to involve a cluster administrator to create the necessary Argo CD Application\/AppProject resources on their behalf.\n\nOne potential solution to this use case is for development teams to define Argo CD `Application` resources within a Git repository (containing the manifests they wish to deploy), in an \"app of apps\" pattern (see cluster bootstrapping, app of apps pattern), and for cluster administrators to then review\/accept changes to this repository via merge requests.\n\nWhile this might sound like an effective solution, a major disadvantage is that a high degree of trust\/scrutiny is needed to accept commits containing Argo CD `Application` spec changes. This is because there are many sensitive fields contained within the `Application` spec, including `project`, `cluster`, and `namespace`. An inadvertent merge might allow applications to access namespaces\/clusters where they do not belong.\n\nThus in the self-service use case, administrators desire to only allow some fields of the `Application` spec to 
be controlled by developers (e.g. the Git source repository), while other fields (e.g. the target namespace, or target cluster) should be restricted.\n\nFortunately, the ApplicationSet controller presents an alternative solution to this use case: cluster administrators may safely create an `ApplicationSet` resource containing a Git generator that restricts deployment of application resources to fixed values with the `template` field, while allowing customization of \"safe\" fields by developers, at will.\n\nThe `config.json` files contain information describing the app:\n\n```json\n{\n  \"app\": {\n    \"source\": \"https:\/\/github.com\/argoproj\/argo-cd\",\n    \"revision\": \"HEAD\",\n    \"path\": \"applicationset\/examples\/git-generator-files-discovery\/apps\/guestbook\"\n  }\n}\n```\n\n```yaml\nkind: ApplicationSet\n# (...)\nspec:\n  goTemplate: true\n  goTemplateOptions: [\"missingkey=error\"]\n  generators:\n  - git:\n      repoURL: https:\/\/github.com\/argoproj\/argo-cd.git\n      files:\n      - path: \"apps\/**\/config.json\"\n  template:\n    spec:\n      project: dev-team-one # project is restricted\n      source:\n        # developers may customize app details using JSON files from above repo URL\n        repoURL: {{.app.source}}\n        targetRevision: {{.app.revision}}\n        path: {{.app.path}}\n      destination:\n        name: production-cluster # cluster is restricted\n        namespace: dev-team-one # namespace is restricted\n```\n\nSee the [Git generator](Generators-Git.md) for more details."}
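The restricted-template flow described above can be sketched outside the controller. The following is a minimal illustrative Python approximation (not the ApplicationSet controller's actual implementation) of how a Git files generator turns a discovered `config.json` into go-template-style parameters and renders the administrator's template, leaving the restricted fields fixed:

```python
import json
import re

# A discovered config.json, as in the example above.
config_json = """
{
  "app": {
    "source": "https://github.com/argoproj/argo-cd",
    "revision": "HEAD",
    "path": "applicationset/examples/git-generator-files-discovery/apps/guestbook"
  }
}
"""

# Administrator-controlled template: fixed fields are restricted,
# placeholder fields may be customized by developers via config.json.
template = {
    "project": "dev-team-one",               # restricted
    "repoURL": "{{.app.source}}",            # developer-customizable
    "targetRevision": "{{.app.revision}}",   # developer-customizable
    "path": "{{.app.path}}",                 # developer-customizable
    "destinationName": "production-cluster", # restricted
}

def render(tmpl: dict, params: dict) -> dict:
    """Substitute {{.dotted.path}} placeholders with values from params."""
    def lookup(match: re.Match) -> str:
        value = params
        for key in match.group(1).split("."):
            value = value[key]  # KeyError on a missing key, like missingkey=error
        return value
    return {k: re.sub(r"\{\{\.([\w.]+)\}\}", lookup, v) for k, v in tmpl.items()}

spec = render(template, json.loads(config_json))
print(spec["repoURL"])   # https://github.com/argoproj/argo-cd
print(spec["project"])   # dev-team-one (unchanged: not a placeholder)
```

Whatever developers put in `config.json`, only the placeholder fields change; `project` and the destination stay at the values the administrator pinned in the template.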
{"questions":"argocd Command Reference The API server is a gRPC REST server which exposes the API consumed by the Web UI CLI and CI CD systems This command runs API server in the foreground It can be configured by following options Synopsis Run the ArgoCD API server argocd server","answers":"# `argocd-server` Command Reference\n\n## argocd-server\n\nRun the ArgoCD API server\n\n### Synopsis\n\nThe API server is a gRPC\/REST server which exposes the API consumed by the Web UI, CLI, and CI\/CD systems.  This command runs API server in the foreground.  It can be configured by following options.\n\n```\nargocd-server [flags]\n```\n\n### Examples\n\n```\n  # Start the Argo CD API server with default settings\n  $ argocd-server\n  \n  # Start the Argo CD API server on a custom port and enable tracing\n  $ argocd-server --port 8888 --otlp-address localhost:4317\n```\n\n### Options\n\n```\n      --address string                                  Listen on given address (default \"0.0.0.0\")\n      --api-content-types string                        Semicolon separated list of allowed content types for non GET api requests. Any content type is allowed if empty. (default \"application\/json\")\n      --app-state-cache-expiration duration             Cache expiration for app state (default 1h0m0s)\n      --application-namespaces strings                  List of additional namespaces where application resources can be managed in\n      --appset-allowed-scm-providers strings            The list of allowed custom SCM provider API URLs. This restriction does not apply to SCM or PR generators which do not accept a custom API URL. 
(Default: Empty = all)\n      --appset-enable-new-git-file-globbing             Enable new globbing in Git files generator.\n      --appset-enable-scm-providers                     Enable retrieving information from SCM providers, used by the SCM and PR generators (Default: true) (default true)\n      --appset-scm-root-ca-path string                  Provide Root CA Path for self-signed TLS Certificates\n      --as string                                       Username to impersonate for the operation\n      --as-group stringArray                            Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --as-uid string                                   UID to impersonate for the operation\n      --basehref string                                 Value for base href in index.html. Used if Argo CD is running behind reverse proxy under subpath different from \/ (default \"\/\")\n      --certificate-authority string                    Path to a cert file for the certificate authority\n      --client-certificate string                       Path to a client certificate file for TLS\n      --client-key string                               Path to a client key file for TLS\n      --cluster string                                  The name of the kubeconfig cluster to use\n      --connection-status-cache-expiration duration     Cache expiration for cluster\/repo connection status (default 1h0m0s)\n      --content-security-policy value                   Set Content-Security-Policy header in HTTP responses to value. To disable, set to \"\". 
(default \"frame-ancestors 'self';\")\n      --context string                                  The name of the kubeconfig context to use\n      --default-cache-expiration duration               Cache expiration default (default 24h0m0s)\n      --dex-server string                               Dex server address (default \"argocd-dex-server:5556\")\n      --dex-server-plaintext                            Use a plaintext client (non-TLS) to connect to dex server\n      --dex-server-strict-tls                           Perform strict validation of TLS certificates when connecting to dex server\n      --disable-auth                                    Disable client authentication\n      --disable-compression                             If true, opt-out of response compression for all requests to the server\n      --enable-gzip                                     Enable GZIP compression (default true)\n      --enable-k8s-event none                           Enable ArgoCD to use k8s event. For disabling all events, set the value as none. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated) (default [all])\n      --enable-proxy-extension                          Enable Proxy Extension feature\n      --gloglevel int                                   Set the glog logging level\n  -h, --help                                            help for argocd-server\n      --insecure                                        Run server without TLS\n      --insecure-skip-tls-verify                        If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kubeconfig string                               Path to a kube config. Only required if out-of-cluster\n      --logformat string                                Set the logging format. 
One of: text|json (default \"text\")\n      --login-attempts-expiration duration              Cache expiration for failed login attempts (default 24h0m0s)\n      --loglevel string                                 Set the logging level. One of: debug|info|warn|error (default \"info\")\n      --metrics-address string                          Listen for metrics on given address (default \"0.0.0.0\")\n      --metrics-port int                                Start metrics on given port (default 8083)\n  -n, --namespace string                                If present, the namespace scope for this CLI request\n      --oidc-cache-expiration duration                  Cache expiration for OIDC state (default 3m0s)\n      --otlp-address string                             OpenTelemetry collector address to send traces to\n      --otlp-attrs strings                              List of OpenTelemetry collector extra attrs when send traces, each attribute is separated by a colon(e.g. key:value)\n      --otlp-headers stringToString                     List of OpenTelemetry collector extra headers sent with traces, headers are comma-separated key-value pairs(e.g. key1=value1,key2=value2) (default [])\n      --otlp-insecure                                   OpenTelemetry collector insecure mode (default true)\n      --password string                                 Password for basic authentication to the API server\n      --port int                                        Listen on given port (default 8080)\n      --proxy-url string                                If provided, this URL will be used to connect via proxy\n      --redis string                                    Redis server hostname and port (e.g. argocd-redis:6379). \n      --redis-ca-certificate string                     Path to Redis server CA certificate (e.g. \/etc\/certs\/redis\/ca.crt). 
If not specified, system trusted CAs will be used for server certificate validation.\n      --redis-client-certificate string                 Path to Redis client certificate (e.g. \/etc\/certs\/redis\/client.crt).\n      --redis-client-key string                         Path to Redis client key (e.g. \/etc\/certs\/redis\/client.crt).\n      --redis-compress string                           Enable compression for data sent to Redis with the required compression algorithm. (possible values: gzip, none) (default \"gzip\")\n      --redis-insecure-skip-tls-verify                  Skip Redis server certificate validation.\n      --redis-use-tls                                   Use TLS when connecting to Redis. \n      --redisdb int                                     Redis database.\n      --repo-cache-expiration duration                  Cache expiration for repo state, incl. app lists, app details, manifest generation, revision meta-data (default 24h0m0s)\n      --repo-server string                              Repo server address (default \"argocd-repo-server:8081\")\n      --repo-server-default-cache-expiration duration   Cache expiration default (default 24h0m0s)\n      --repo-server-plaintext                           Use a plaintext client (non-TLS) to connect to repository server\n      --repo-server-redis string                        Redis server hostname and port (e.g. argocd-redis:6379). \n      --repo-server-redis-ca-certificate string         Path to Redis server CA certificate (e.g. \/etc\/certs\/redis\/ca.crt). If not specified, system trusted CAs will be used for server certificate validation.\n      --repo-server-redis-client-certificate string     Path to Redis client certificate (e.g. \/etc\/certs\/redis\/client.crt).\n      --repo-server-redis-client-key string             Path to Redis client key (e.g. 
\/etc\/certs\/redis\/client.crt).\n      --repo-server-redis-compress string               Enable compression for data sent to Redis with the required compression algorithm. (possible values: gzip, none) (default \"gzip\")\n      --repo-server-redis-insecure-skip-tls-verify      Skip Redis server certificate validation.\n      --repo-server-redis-use-tls                       Use TLS when connecting to Redis. \n      --repo-server-redisdb int                         Redis database.\n      --repo-server-sentinel stringArray                Redis sentinel hostname and port (e.g. argocd-redis-ha-announce-0:6379). \n      --repo-server-sentinelmaster string               Redis sentinel master group name. (default \"master\")\n      --repo-server-strict-tls                          Perform strict validation of TLS certificates when connecting to repo server\n      --repo-server-timeout-seconds int                 Repo server RPC call timeout seconds. (default 60)\n      --request-timeout string                          The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n      --revision-cache-expiration duration              Cache expiration for cached revision (default 3m0s)\n      --revision-cache-lock-timeout duration            Cache TTL for locks to prevent duplicate requests on revisions, set to 0 to disable (default 10s)\n      --rootpath string                                 Used if Argo CD is running behind reverse proxy under subpath different from \/\n      --sentinel stringArray                            Redis sentinel hostname and port (e.g. argocd-redis-ha-announce-0:6379). \n      --sentinelmaster string                           Redis sentinel master group name. 
(default \"master\")\n      --server string                                   The address and port of the Kubernetes API server\n      --staticassets string                             Directory path that contains additional static assets (default \"\/shared\/app\")\n      --tls-server-name string                          If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.\n      --tlsciphers string                               The list of acceptable ciphers to be used when establishing TLS connections. Use 'list' to list available ciphers. (default \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\")\n      --tlsmaxversion string                            The maximum SSL\/TLS version that is acceptable (one of: 1.0|1.1|1.2|1.3) (default \"1.3\")\n      --tlsminversion string                            The minimum SSL\/TLS version that is acceptable (one of: 1.0|1.1|1.2|1.3) (default \"1.2\")\n      --token string                                    Bearer token for authentication to the API server\n      --user string                                     The name of the kubeconfig user to use\n      --username string                                 Username for basic authentication to the API server\n      --webhook-parallelism-limit int                   Number of webhook requests processed concurrently (default 50)\n      --x-frame-options value                           Set X-Frame-Options header in HTTP responses to value. To disable, set to \"\". 
(default \"sameorigin\")\n```\n\n### SEE ALSO\n\n* [argocd-server version](argocd-server_version.md)\t - Print version information\n","site":"argocd"}
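Several of the flags above (`--otlp-headers`, for example) take `stringToString` values, documented as comma-separated `key=value` pairs such as `key1=value1,key2=value2`. A small illustrative Python parser for that documented syntax (a sketch, not Argo CD's actual flag-parsing code; the header names below are hypothetical):

```python
def parse_string_to_string(value: str) -> dict:
    """Parse a comma-separated key=value list, the format documented for
    stringToString flags such as --otlp-headers (e.g. key1=value1,key2=value2)."""
    if not value:
        return {}
    headers = {}
    for pair in value.split(","):
        key, sep, val = pair.partition("=")
        if not sep:
            raise ValueError(f"expected key=value, got {pair!r}")
        headers[key.strip()] = val
    return headers

# Hypothetical header names, for illustration only.
print(parse_string_to_string("Authorization=Bearer abc123,X-Scope-OrgID=tenant-a"))
```

Note that splitting on commas means values containing a literal comma cannot be expressed in this form.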
{"questions":"argocd Known Issues Argo CD 2 4 0 introduced a breaking API change renaming the filter to v2 4 to 2 5 Impact to API clients Broken filter before 2 5 15","answers":"# v2.4 to 2.5\n\n## Known Issues\n\n### Broken `project` filter before 2.5.15\n\nArgo CD 2.4.0 introduced a breaking API change, renaming the `project` filter to `projects`.\n\n#### Impact to API clients\n\nA similar issue applies to other API clients which communicate with the Argo CD API server via its REST API. If the\nclient uses the `project` field to filter projects, the filter will not be applied. **The failing project filter could\nhave detrimental consequences if, for example, you rely on it to list Applications to be deleted.**\n\n#### Impact to CLI clients\n\nCLI clients older than v2.4.0 rely on client-side filtering and are not impacted by this bug.\n\n#### How to fix the problem\n\nUpgrade to Argo CD >=2.4.27, >=2.5.15, or >=2.6.6. These versions of Argo CD will accept both `project` and `projects` as\nvalid filters.\n\n### Broken matrix-nested git files generator in 2.5.14\n\nArgo CD 2.5.14 introduced a bug in the matrix-nested git files generator. The bug only applies when the git files\ngenerator is the second generator nested under a matrix. For example:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: ApplicationSet\nmetadata:\n  name: guestbook\nspec:\n  generators:\n  - matrix:\n      generators:\n        - clusters: {}\n        - git:\n            repoURL: https:\/\/git.example.com\/org\/repo.git\n            revision: HEAD\n            files:\n              - path: \"defaults\/*.yaml\"\n  template:\n    # ...\n```\n\nThe nested git files generator will produce no parameters, causing the matrix generator to also produce no parameters.\nThis will cause the ApplicationSet to produce no Applications. 
If the ApplicationSet controller is\n[configured with the ability to delete applications](https:\/\/argo-cd.readthedocs.io\/en\/latest\/operator-manual\/applicationset\/Controlling-Resource-Modification\/),\nit will delete all Applications which were previously created by the ApplicationSet.\n\nTo avoid this issue, upgrade directly to >=2.5.15 or >= 2.6.6.\n\n## Configure RBAC to account for new `applicationsets` resource\n\n2.5 introduces a new `applicationsets` [RBAC resource](https:\/\/argo-cd.readthedocs.io\/en\/stable\/operator-manual\/rbac\/#rbac-resources-and-actions).\n\nWhen you upgrade to 2.5, RBAC policies with `*` in the resource field and `create`, `update`, `delete`, `get`, or `*` in the action field will automatically grant the `applicationsets` privilege.\n\nTo avoid granting the new privilege, replace the existing policy with a list of new policies explicitly listing the old resources.\n\n### Example\n\nOld:\n\n```csv\np, role:org-admin, *, create, *, allow\n```\n\nNew:\n\n```csv\np, role:org-admin, clusters,     create, *, allow\np, role:org-admin, projects,     create, *, allow\np, role:org-admin, applications, create, *, allow\np, role:org-admin, repositories, create, *, allow\np, role:org-admin, certificates, create, *, allow\np, role:org-admin, accounts,     create, *, allow\np, role:org-admin, gpgkeys,      create, *, allow\np, role:org-admin, exec,         create, *, allow\n```\n\n(Note that `applicationsets` is missing from the list, to preserve pre-2.5 permissions.)\n\n## argocd-cm plugins (CMPs) are deprecated\n\nStarting with Argo CD v2.5, installing config management plugins (CMPs) via the `argocd-cm` ConfigMap is deprecated.\nSupport will be removed in v2.7.\n\nYou can continue to use the plugins by [installing them as sidecars](https:\/\/argo-cd.readthedocs.io\/en\/stable\/user-guide\/config-management-plugins\/)\non the repo-server Deployment.\n\nSidecar plugins are significantly more secure. 
Plugin code runs in its own container with an almost completely-isolated\nfilesystem. If an attacker compromises a plugin, the attacker's ability to cause harm is significantly mitigated.\n\nTo determine whether argocd-cm plugins are still in use, scan your argocd-repo-server and argocd-server logs for the \nfollowing message:\n\n> argocd-cm plugins are deprecated, and support will be removed in v2.6. Upgrade your plugin to be installed via sidecar. https:\/\/argo-cd.readthedocs.io\/en\/stable\/user-guide\/config-management-plugins\/\n\n**NOTE:** removal of argocd-cm plugin support was delayed to v2.7. Update your logs scan to use `v2.7` instead of `v2.6`.\n\nIf you run `argocd app list` as admin, the list of Applications using deprecated plugins will be logged as a warning.\n\n## Dex server TLS configuration\n\nIn order to secure the communications between the dex server and the Argo CD API server, TLS is now enabled by default on the dex server.\n\nBy default, without configuration, the dex server will generate a self-signed certificate upon startup. However, we recommend that users\nconfigure their own TLS certificate using the `argocd-dex-server-tls` secret. Please refer to the [TLS configuration guide](..\/tls.md#configuring-tls-to-argocd-dex-server) for more information.\n\n## Invalid users.session.duration values now fall back to 24h\n\nBefore v2.5, an invalid `users.session.duration` value in argocd-cm would 1) log a warning and 2) result in user sessions having no duration limit.\n\nStarting with v2.5, invalid duration values will fall back to the default value of 24 hours with a warning.\n\n## Out-of-bounds symlinks now blocked at fetch\n\nThere have been several path traversal and identification vulnerabilities disclosed in the past related to symlinks. 
To help prevent any further vulnerabilities, we now scan all repositories and Helm charts for **out of bounds symlinks** at the time they are fetched and block further processing if they are found.\n\nAn out-of-bounds symlink is defined as any symlink that leaves the root of the Git repository or Helm chart, even if the final target is within the root.\n\nIf an out of bounds symlink is found, a warning will be printed to the repo server console and an error will be shown in the UI or CLI.\n\nBelow is an example directory structure showing valid symlinks and invalid symlinks.\n\n```\nchart\n\u251c\u2500\u2500 Chart.yaml\n\u251c\u2500\u2500 values\n\u2502   \u2514\u2500\u2500 values.yaml\n\u251c\u2500\u2500 bad-link.yaml   -> ..\/out-of-bounds.yaml       # Blocked\n\u251c\u2500\u2500 bad-link-2.yaml -> ..\/chart\/values\/values.yaml # Blocked because it leaves the root\n\u251c\u2500\u2500 bad-link-3.yaml -> \/absolute\/link.yaml         # Blocked\n\u2514\u2500\u2500 good-link.yaml  -> values\/values.yaml          # OK\n```\n\nIf you rely on out of bounds symlinks, this check can be disabled one of three ways:\n\n1. The `--allow-oob-symlinks` argument on the repo server.\n2. The `reposerver.allow.oob.symlinks` key if you are using `argocd-cmd-params-cm`\n3. Directly setting `ARGOCD_REPO_SERVER_ALLOW_OOB_SYMLINKS` environment variable on the repo server.\n\nIt is **strongly recommended** to leave this check enabled. Disabling the check will not allow _all_ out-of-bounds symlinks. Those will still be blocked for things like values files in Helm charts, but symlinks which are not explicitly blocked by other checks will be allowed.\n\n## Deprecated client-side manifest diffs\n\nWhen using `argocd app diff --local`, code from the repo server is run on the user's machine in order to locally generate manifests for comparing against the live manifests of an app. However, this requires that the necessary tools (Helm, Kustomize, etc) are installed with the correct versions. 
Even worse, it does not support Config Management Plugins (CMPs) whatsoever.\n\nIn order to support CMPs and reduce local requirements, we have implemented *server-side generation* of local manifests via the `--server-side-generate` argument. For example, `argocd app diff --local repoDir --server-side-generate` will upload the contents of `repoDir` to the repo server and run your manifest generation pipeline against it, the same as it would for a Git repo.\n\nIn v2.7, the `--server-side-generate` argument will become the default, and client-side generation will be supported as an alternative.\n\n!!! warning\n    The semantics of *where* Argo will start generating manifests within a repo have changed between client-side and server-side generation. With client-side generation, the application's path (`spec.source.path`) was ignored and the value of `--local-repo-root` was effectively used (by default `\/` relative to `--local`).\n    \n    For example, given an application that has an application path of `\/manifests`, you would have had to run `argocd app diff --local yourRepo\/manifests`. This behavior did not match the repo server's process of downloading the full repo\/chart and then beginning generation in the path specified in the application manifest.\n\n    When switching to server-side generation, `--local` should point to the root of your repo *without* including your `spec.source.path`. This is especially important to keep in mind when `--server-side-generate` becomes the default in v2.7. Existing scripts utilizing `diff --local` may break in v2.7 if `spec.source.path` was not `\/`.\n    \n## Upgraded Kustomize Version\n\nThe bundled Kustomize version has been upgraded from 4.4.1 to 4.5.7.\n\n## Upgraded Helm Version\n\nNote that the bundled Helm version has been upgraded from 3.9.0 to 3.10.1.\n\n## Upgraded HAProxy version\n\nThe HAProxy version in the HA manifests has been upgraded from 2.0.25 to 2.6.2. 
To read about the changes\/improvements,\nsee the HAProxy major release announcements ([2.1.0](https:\/\/www.mail-archive.com\/haproxy@formilux.org\/msg35491.html),\n[2.2.0](https:\/\/www.mail-archive.com\/haproxy@formilux.org\/msg37852.html),\n[2.3.0](https:\/\/www.mail-archive.com\/haproxy@formilux.org\/msg38812.html),\n[2.4.0](https:\/\/www.mail-archive.com\/haproxy@formilux.org\/msg40499.html),\n[2.5.0](https:\/\/www.mail-archive.com\/haproxy@formilux.org\/msg41508.html), and\n[2.6.0](https:\/\/www.mail-archive.com\/haproxy@formilux.org\/msg42371.html)).\n\n## Logs RBAC enforcement will remain opt-in\n\nThis note is just for clarity. No action is required.\n\nWe [expected](..\/upgrading\/2.3-2.4.md#enable-logs-rbac-enforcement) to enable logs RBAC enforcement by default in 2.5.\nWe have decided not to do that in the 2.x series due to disruption for users of [Project Roles](..\/..\/user-guide\/projects.md#project-roles).\n\n## `argocd app create` for old CLI versions fails with API version >=2.5.16\n\nStarting with Argo CD 2.5.16, the API returns `PermissionDenied` instead of `NotFound` for Application `GET` requests if\nthe Application does not exist.\n\nThe Argo CD CLI starting with version 2.5.0-rc1 and before versions 2.5.16 and 2.6.7 does a `GET` \nrequest before the `POST` request in `argocd app create`. The command does not gracefully handle the `PermissionDenied` \nresponse and will therefore fail to create\/update the Application.\n\nTo solve the issue, upgrade the CLI to at least 2.5.16 or 2.6.7.\n\nCLIs older than 2.5.0-rc1 are unaffected.\n\n## Golang upgrade in 2.5.20\n\nIn 2.5.20, we upgraded the Golang version used to build Argo CD from 1.18 to 1.19. 
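A module that imports Argo CD packages can bump its own Go toolchain requirement to match (a sketch using standard Go tooling, run from the module root):\n\n```shell\ngo mod edit -go=1.19\n```\n\n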
If you use Argo CD as a library, you\nmay need to upgrade your Go version.","site":"argocd","answers_cleaned":"  v2 4 to 2 5     Known Issues      Broken  project  filter before 2 5 15  Argo CD 2 4 0 introduced a breaking API change  renaming the  project  filter to  projects         Impact to API clients  A similar issue applies to other API clients which communicate with the Argo CD API server via its REST API  If the client uses the  project  field to filter projects  the filter will not be applied    The failing project filter could have detrimental consequences if  for example  you rely on it to list Applications to be deleted          Impact to CLI clients  CLI clients older that v2 4 0 rely on client side filtering and are not impacted by this bug        How to fix the problem  Upgrade to Argo CD   2 4 27    2 5 15  or   2 6 6  This version of Argo CD will accept both  project  and  projects  as valid filters       Broken matrix nested git files generator in 2 5 14  Argo CD 2 5 14 introduced a bug in the matrix nested git files generator  The bug only applies when the git files generator is the second generator nested under a matrix  For example      yaml apiVersion  argoproj io v1alpha1 kind  ApplicationSet metadata    name  guestbook spec    generators      matrix        generators            clusters               git              repoURL  https   git example com org repo git             revision  HEAD             files                  path   defaults   yaml    template                 The nested git files generator will produce no parameters  causing the matrix generator to also produce no parameters  This will cause the ApplicationSet to produce no Applications  If the ApplicationSet controller is  configured with the ability to delete applications  https   argo cd readthedocs io en latest operator manual applicationset Controlling Resource Modification    it will delete all Applications which were previously created by the ApplicationSet   To avoid 
this issue  upgrade directly to   2 5 15 or    2 6 6      Configure RBAC to account for new  applicationsets  resource  2 5 introduces a new  applicationsets   RBAC resource  https   argo cd readthedocs io en stable operator manual rbac  rbac resources and actions    When you upgrade to 2 5  RBAC policies with     in the resource field and  create    update    delete    get   or     in the action field will automatically grant the  applicationsets  privilege   To avoid granting the new privilege  replace the existing policy with a list of new policies explicitly listing the old resources       Example  Old      csv p  role org admin     create     allow      New      csv p  role org admin  clusters      create     allow p  role org admin  projects      create     allow p  role org admin  applications  create     allow p  role org admin  repositories  create     allow p  role org admin  certificates  create     allow p  role org admin  accounts      create     allow p  role org admin  gpgkeys       create     allow p  role org admin  exec          create     allow       Note that  applicationsets  is missing from the list  to preserve pre 2 5 permissions       argocd cm plugins  CMPs  are deprecated  Starting with Argo CD v2 5  installing config management plugins  CMPs  via the  argocd cm  ConfigMap is deprecated  Support will be removed in v2 7   You can continue to use the plugins by  installing them as sidecars  https   argo cd readthedocs io en stable user guide config management plugins   on the repo server Deployment   Sidecar plugins are significantly more secure  Plugin code runs in its own container with an almost completely isolated filesystem  If an attacker compromises a plugin  the attacker s ability to cause harm is significantly mitigated   To determine whether argocd cm plugins are still in use  scan your argocd repo server and argocd server logs for the  following message     argocd cm plugins are deprecated  and support will be removed in v2 6  
Upgrade your plugin to be installed via sidecar  https   argo cd readthedocs io en stable user guide config management plugins     NOTE    removal of argocd cm plugin support was delayed to v2 7  Update your logs scan to use  v2 7  instead of  v2 6    If you run  argocd app list  as admin  the list of Applications using deprecated plugins will be logged as a warning      Dex server TLS configuration  In order to secure the communications between the dex server and the Argo CD API server  TLS is now enabled by default on the dex server   By default  without configuration  the dex server will generate a self signed certificate upon startup  However  we recommend that users configure their own TLS certificate using the  argocd dex server tls  secret  Please refer to the  TLS configuration guide     tls md configuring tls to argocd dex server  for more information      Invalid users session duration values now fall back to 24h  Before v2 5  an invalid  users session duration  value in argocd cm would 1  log a warning and 2  result in user sessions having no duration limit   Starting with v2 5  invalid duration values will fall back to the default value of 24 hours with a warning      Out of bounds symlinks now blocked at fetch  There have been several path traversal and identification vulnerabilities disclosed in the past related to symlinks  To help prevent any further vulnerabilities  we now scan all repositories and Helm charts for   out of bounds symlinks   at the time they are fetched and block further processing if they are found   An out of bounds symlink is defined as any symlink that leaves the root of the Git repository or Helm chart  even if the final target is within the root   If an out of bounds symlink is found  a warning will be printed to the repo server console and an error will be shown in the UI or CLI   Below is an example directory structure showing valid symlinks and invalid symlinks       chart     Chart yaml     values         values yaml     
bad link yaml         out of bounds yaml         Blocked     bad link 2 yaml       chart values values yaml   Blocked because it leaves the root     bad link 3 yaml     absolute link yaml           Blocked     good link yaml     values values yaml            OK      If you rely on out of bounds symlinks  this check can be disabled one of three ways   1  The    allow oob symlinks  argument on the repo server  2  The  reposerver allow oob symlinks  key if you are using  argocd cmd params cm  3  Directly setting  ARGOCD REPO SERVER ALLOW OOB SYMLINKS  environment variable on the repo server   It is   strongly recommended   to leave this check enabled  Disabling the check will not allow  all  out of bounds symlinks  Those will still be blocked for things like values files in Helm charts  but symlinks which are not explicitly blocked by other checks will be allowed      Deprecated client side manifest diffs  When using  argocd app diff   local   code from the repo server is run on the user s machine in order to locally generate manifests for comparing against the live manifests of an app  However  this requires that the necessary tools  Helm  Kustomize  etc  are installed with the correct versions  Even worse  it does not support Config Management Plugins  CMPs  whatsoever   In order to support CMPs and reduce local requirements  we have implemented  server side generation  of local manifests via the    server side generate  argument  For example   argocd app diff   local repoDir   server side generate  will upload the contents of  repoDir  to the repo server and run your manifest generation pipeline against it  the same as it would for a Git repo   In v2 7  the    server side generate  argument will become the default  and client side generation will be supported as an alternative       warning     The semantics of  where  Argo will start generating manifests within a repo has changed between client side and server side generation  With client side generation  the 
application s path   spec source path   was ignored and the value of    local repo root  was effectively used  by default     relative to    local             For example  given an application that has an application path of   manifests   you would have had to run  argocd app diff   local yourRepo manifests   This behavior did not match the repo server s process of downloading the full repo chart and then beginning generation in the path specified in the application manifest       When switching to server side generation     local  should point to the root of your repo  without  including your  spec source path   This is especially important to keep in mind when    server side generate  becomes the default in v2 7  Existing scripts utilizing  diff   local  may break in v2 7 if  spec source path  was not              Upgraded Kustomize Version  The bundled Kustomize version has been upgraded from 4 4 1 to 4 5 7      Upgraded Helm Version  Note that bundled Helm version has been upgraded from 3 9 0 to 3 10 1      Upgraded HAProxy version  The HAProxy version in the HA manifests has been upgraded from 2 0 25 to 2 6 2  To read about the changes improvements  see the HAProxy major release announcements   2 1 0  https   www mail archive com haproxy formilux org msg35491 html    2 2 0  https   www mail archive com haproxy formilux org msg37852 html    2 3 0  https   www mail archive com haproxy formilux org msg38812 html    2 4 0  https   www mail archive com haproxy formilux org msg40499 html    2 5 0  https   www mail archive com haproxy formilux org msg41508 html   and  2 6 0  https   www mail archive com haproxy formilux org msg42371 html       Logs RBAC enforcement will remain opt in  This note is just for clarity  No action is required   We  expected     upgrading 2 3 2 4 md enable logs rbac enforcement  to enable logs RBAC enforcement by default in 2 5  We have decided not to do that in the 2 x series due to disruption for users of  Project Roles        user guide 
projects md project roles        argocd app create  for old CLI versions fails with API version   2 5 16  Starting with Argo CD 2 5 16  the API returns  PermissionDenied  instead of  NotFound  for Application  GET  requests if the Application does not exist   The Argo CD CLI before versions starting with version 2 5 0 rc1 and before versions 2 5 16 and 2 6 7 does a  GET   request before the  POST  request in  argocd app create   The command does not gracefully handle the  PermissionDenied   response and will therefore fail to create update the Application   To solve the issue  upgrade the CLI to at least 2 5 16  or 2 6 7   CLIs older than 2 5 0 rc1 are unaffected      Golang upgrade in 2 5 20  In 2 5 20  we upgrade the Golang version used to build Argo CD from 1 18 to 1 19  If you use Argo CD as a library  you may need to upgrade your Go version "}
{"questions":"argocd Argo CD Notifications and ApplicationSet Are Bundled into Argo CD remove references to https github com argoproj labs argocd notifications and https github com argoproj labs applicationset The Argo CD Notifications and ApplicationSet are part of Argo CD now You no longer need to install them separately v2 2 to 2 3 The bundled manifests are drop in replacements for the previous versions If you are using Kustomize to bundle the manifests together then just The Notifications and ApplicationSet components are bundled into default Argo CD installation manifests","answers":"# v2.2 to 2.3\n\n## Argo CD Notifications and ApplicationSet Are Bundled into Argo CD\n\nThe Argo CD Notifications and ApplicationSet are part of Argo CD now. You no longer need to install them separately.\nThe Notifications and ApplicationSet components are bundled into default Argo CD installation manifests.\n\nThe bundled manifests are drop-in replacements for the previous versions. If you are using Kustomize to bundle the manifests together then just\nremove references to https:\/\/github.com\/argoproj-labs\/argocd-notifications and https:\/\/github.com\/argoproj-labs\/applicationset.\n\nIf you are using [the argocd-notifications helm chart](https:\/\/github.com\/argoproj\/argo-helm\/tree\/argocd-notifications-1.8.1\/charts\/argocd-notifications), you can move the chart [values](https:\/\/github.com\/argoproj\/argo-helm\/blob\/argocd-notifications-1.8.1\/charts\/argocd-notifications\/values.yaml) to the `notifications` section of the argo-cd chart [values](https:\/\/github.com\/argoproj\/argo-helm\/blob\/main\/charts\/argo-cd\/values.yaml#L2152). 
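As a sketch of the move (the `argocdUrl` key is shown purely for illustration; check the chart for the keys you actually set), a value that lived at the top level of the old argocd-notifications chart values now goes under the `notifications` block of the argo-cd chart values:\n\n```yaml\nnotifications:\n  argocdUrl: https:\/\/argocd.example.com\n```\n\n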
Although most values remain as-is, please look up the details for the values that are relevant to you.\n\nNo action is required if you are using `kubectl apply`.\n\n## Configure Additional Argo CD Binaries\n\nWe have removed non-Linux Argo CD binaries (Darwin amd64 and Windows amd64) from the image ([#7668](https:\/\/github.com\/argoproj\/argo-cd\/pull\/7668)) and the associated download buttons in the help page in the UI.\n\nThose removed binaries will still be included in the release assets and we made them configurable in [#7755](https:\/\/github.com\/argoproj\/argo-cd\/pull\/7755). You can add download buttons for other OS architectures by adding the following to your `argocd-cm` ConfigMap:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/name: argocd-cm\n    app.kubernetes.io\/part-of: argocd\ndata:\n  help.download.linux-arm64: \"path-or-url-to-download\"\n  help.download.darwin-amd64: \"path-or-url-to-download\"\n  help.download.darwin-arm64: \"path-or-url-to-download\"\n  help.download.windows-amd64: \"path-or-url-to-download\"\n```\n\n## Removed Python from the base image\n\nIf you are using a [Config Management Plugin](..\/config-management-plugins.md) that relies on Python, you\nwill need to build a custom image on the Argo CD base to install Python.\n\n## Upgraded Kustomize Version\n\nNote that the bundled Kustomize version has been upgraded from 4.2.0 to 4.4.1.\n\n## Upgraded Helm Version\n\nNote that the bundled Helm version has been upgraded from 3.7.1 to 3.8.0.\n\n## Support for private repo SSH keys using the SHA-1 signature hash algorithm is removed in 2.3.7\n\nArgo CD 2.3.7 upgraded its base image from Ubuntu 21.04 to Ubuntu 22.04, which upgraded OpenSSH to 8.9. 
OpenSSH starting\nwith 8.8 [dropped support for the `ssh-rsa` SHA-1 key signature algorithm](https:\/\/www.openssh.com\/txt\/release-8.8).\n\nThe signature algorithm is _not_ the same as the algorithm used when generating the key. There is no need to update\nkeys.\n\nThe signature algorithm is negotiated with the SSH server when the connection is being set up. The client offers its\nlist of accepted signature algorithms, and if the server has a match, the connection proceeds. For most SSH servers on\nup-to-date git providers, acceptable algorithms other than `ssh-rsa` should be available.\n\nBefore upgrading to Argo CD 2.3.7, check whether your git provider(s) using SSH authentication support algorithms newer\nthan `ssh-rsa`.\n\n1. Make sure your version of SSH is >= 8.9 (the version used by Argo CD). If not, upgrade it before proceeding.\n\n   ```shell\n   ssh -V\n   ```\n\n   Example output: `OpenSSH_8.9p1 Ubuntu-3, OpenSSL 3.0.2 15 Mar 2022`\n\n2. Once you have a recent version of OpenSSH, follow the directions from the [OpenSSH 8.7 release notes](https:\/\/www.openssh.com\/txt\/release-8.7):\n\n   > To check whether a server is using the weak ssh-rsa public key\n   > algorithm, for host authentication, try to connect to it after\n   > removing the ssh-rsa algorithm from ssh(1)'s allowed list:\n   >\n   > ```shell\n   > ssh -oHostKeyAlgorithms=-ssh-rsa user@host\n   > ```\n   >\n   > If the host key verification fails and no other supported host key\n   > types are available, the server software on that host should be\n   > upgraded.\n\n   If the server does not support an acceptable version, you will get an error similar to this:\n\n   ```\n   $ ssh -oHostKeyAlgorithms=-ssh-rsa vs-ssh.visualstudio.com\n   Unable to negotiate with 20.42.134.1 port 22: no matching host key type found. 
Their offer: ssh-rsa\n   ```\n\n   This indicates that the server needs to update its supported key signature algorithms, and Argo CD will not connect\n   to it.\n\n### Workaround\n\nThe [OpenSSH 8.8 release notes](https:\/\/www.openssh.com\/txt\/release-8.8) describe a workaround if you cannot change the\nserver's key signature algorithms configuration.\n\n> Incompatibility is more likely when connecting to older SSH\n> implementations that have not been upgraded or have not closely tracked\n> improvements in the SSH protocol. For these cases, it may be necessary\n> to selectively re-enable RSA\/SHA1 to allow connection and\/or user\n> authentication via the HostkeyAlgorithms and PubkeyAcceptedAlgorithms\n> options. For example, the following stanza in ~\/.ssh\/config will enable\n> RSA\/SHA1 for host and user authentication for a single destination host:\n>\n> ```\n> Host old-host\n>     HostkeyAlgorithms +ssh-rsa\n>     PubkeyAcceptedAlgorithms +ssh-rsa\n> ```\n>\n> We recommend enabling RSA\/SHA1 only as a stopgap measure until legacy\n> implementations can be upgraded or reconfigured with another key type\n> (such as ECDSA or Ed25519).\n\nTo apply this to Argo CD, you could create a ConfigMap with the desired ssh config file and then mount it at\n`\/home\/argocd\/.ssh\/config`.\n","site":"argocd","answers_cleaned":"  v2 2 to 2 3     Argo CD Notifications and ApplicationSet Are Bundled into Argo CD  The Argo CD Notifications and ApplicationSet are part of Argo CD now  You no longer need to install them separately  The Notifications and ApplicationSet components are bundled into default Argo CD installation manifests   The bundled manifests are drop in replacements for the previous versions  If you are using Kustomize to bundle the manifests together then just remove references to https   github com argoproj labs argocd notifications and https   github com argoproj labs applicationset   If you are using  the argocd notifications helm chart  https   github com 
argoproj argo helm tree argocd notifications 1 8 1 charts argocd notifications   you can move the chart  values  https   github com argoproj argo helm blob argocd notifications 1 8 1 charts argocd notifications values yaml  to the  notifications  section of the argo cd chart  values  https   github com argoproj argo helm blob main charts argo cd values yaml L2152   Although most values remain as is  for details please look up the values that are relevant to you   No action is required if you are using  kubectl apply       Configure Additional Argo CD Binaries  We have removed non Linux Argo CD binaries  Darwin amd64 and Windows amd64  from the image    7668  https   github com argoproj argo cd pull 7668   and the associated download buttons in the help page in the UI   Those removed binaries will still be included in the release assets and we made those configurable in   7755  https   github com argoproj argo cd pull 7755   You can add download buttons for other OS architectures by adding the following to your  argocd cm  ConfigMap      yaml apiVersion  v1 kind  ConfigMap metadata    name  argocd cm   namespace  argocd   labels      app kubernetes io name  argocd cm     app kubernetes io part of  argocd data    help download linux arm64   path or url to download    help download darwin amd64   path or url to download    help download darwin arm64   path or url to download    help download windows amd64   path or url to download          Removed Python from the base image  If you are using a  Config Management Plugin     config management plugins md  that relies on Python  you will need to build a custom image on the Argo CD base to install Python      Upgraded Kustomize Version  Note that bundled Kustomize version has been upgraded from 4 2 0 to 4 4 1      Upgraded Helm Version  Note that bundled Helm version has been upgraded from 3 7 1 to 3 8 0      Support for private repo SSH keys using the SHA 1 signature hash algorithm is removed in 2 3 7  Argo CD 2 3 7 
upgraded its base image from Ubuntu 21 04 to Ubuntu 22 04  which upgraded OpenSSH to 8 9  OpenSSH starting with 8 8  dropped support for the  ssh rsa  SHA 1 key signature algorithm  https   www openssh com txt release 8 8    The signature algorithm is  not  the same as the algorithm used when generating the key  There is no need to update keys   The signature algorithm is negotiated with the SSH server when the connection is being set up  The client offers its list of accepted signature algorithms  and if the server has a match  the connection proceeds  For most SSH servers on up to date git providers  acceptable algorithms other than  ssh rsa  should be available   Before upgrading to Argo CD 2 3 7  check whether your git provider s  using SSH authentication support algorithms newer than  rsa ssh    1  Make sure your version of SSH    8 9  the version used by Argo CD   If not  upgrade it before proceeding         shell    ssh  V            Example output   OpenSSH 8 9p1 Ubuntu 3  OpenSSL 3 0 2 15 Mar 2022   2  Once you have a recent version of OpenSSH  follow the directions from the  OpenSSH 8 8 release notes  https   www openssh com txt release 8 7         To check whether a server is using the weak ssh rsa public key      algorithm  for host authentication  try to connect to it after      removing the ssh rsa algorithm from ssh 1  s allowed list               shell      ssh  oHostKeyAlgorithms  ssh rsa user host                    If the host key verification fails and no other supported host key      types are available  the server software on that host should be      upgraded      If the server does not support an acceptable version  you will get an error similar to this               ssh  oHostKeyAlgorithms  ssh rsa vs ssh visualstudio com    Unable to negotiate with 20 42 134 1 port 22  no matching host key type found  Their offer  ssh rsa            This indicates that the server needs to update its supported key signature algorithms  and Argo CD will not 
connect    to it       Workaround  The  OpenSSH 8 8 release notes  https   www openssh com txt release 8 8  describe a workaround if you cannot change the server s key signature algorithms configuration     Incompatibility is more likely when connecting to older SSH   implementations that have not been upgraded or have not closely tracked   improvements in the SSH protocol  For these cases  it may be necessary   to selectively re enable RSA SHA1 to allow connection and or user   authentication via the HostkeyAlgorithms and PubkeyAcceptedAlgorithms   options  For example  the following stanza in    ssh config will enable   RSA SHA1 for host and user authentication for a single destination host            Host old host       HostkeyAlgorithms  ssh rsa       PubkeyAcceptedAlgorithms  ssh rsa           We recommend enabling RSA SHA1 only as a stopgap measure until legacy   implementations can be upgraded or reconfigured with another key type    such as ECDSA or Ed25519    To apply this to Argo CD  you could create a ConfigMap with the desired ssh config file and then mount it at   home argocd  ssh config   "}
{"questions":"argocd Redis Upgraded to v6 2 1 However if you are running Argo CD in production with multiple users it is recommended to upgrade during off peak hours to avoid user visible failures The Redis itself should be able to upgrade with no downtime as well as Argo CD does not use it as a persistent store The bundled Redis version has been upgraded to v6 2 1 v1 8 to 2 0","answers":"# v1.8 to 2.0\n\n## Redis Upgraded to v6.2.1\n\nThe bundled Redis version has been upgraded to v6.2.1.\n\nRedis itself should be able to upgrade with no downtime, as Argo CD does not use it as a persistent store.\nHowever, if you are running Argo CD in production with multiple users, it is recommended to upgrade during off-peak\nhours to avoid user-visible failures.\n\n## Environment variables expansion\n\nArgo CD supports using [environment variables](..\/..\/..\/user-guide\/build-environment\/) in\nconfig management tool parameters. The expansion logic has been improved and now expands missing environment variables\ninto an empty string.\n\n## Docker image migrated to use Ubuntu as base\n\nThe official Docker image has been migrated to use `ubuntu:20.10` instead of\n`debian:10-slim` as the base image. While this should not affect user experience,\nyou might be affected if you use custom-built images and\/or include third-party\ntools in custom-built images.\n\nPlease make sure that your custom tools are still working with the update to\nv2.0 before deploying it to production.\n\n## Container registry switched to quay.io and sundown of Docker Hub repository\n\nDue to Docker Hub's new rate-limiting and retention policies, the Argo project\nhas decided to switch to the\n[quay.io](https:\/\/quay.io)\nregistry as a new home for all images published by its sub-projects.\n\nAs of Argo CD version 2.0, the installation manifests are configured to pull the\ncontainer images from `quay.io` and we announce the **sundown** of the existing\nDocker Hub repositories. 
For the 2.0 release, this means we will still push to\nboth registries, but we will stop pushing images to Docker Hub once Argo CD 2.1\nhas been released.\n\nPlease make sure that your clusters can pull from the `quay.io` registry.\nIf you aren't able to do so in time, you can change the container image slugs in\nthe installation manually to Docker Hub as a workaround to install Argo CD 2.0.\nThis workaround will not be possible anymore with 2.1, however.\n\n## Dex tool migrated from argocd-util to argocd-dex\n\nThe dex commands `rundex` and `gendexcfg` have been migrated from `argocd-util` to `argocd-dex`.\nThis means that you need to update the `argocd-dex-server` deployment's commands to install the `argocd-dex` \nbinary instead of `argocd-util` in the init container, and to run the dex command from `argocd-dex` instead of `argocd-util`:\n\n```yaml\ninitContainers:\n- command:\n  - cp\n  - -n\n  - \/usr\/local\/bin\/argocd\n  - \/shared\/argocd-dex\n```\n\n```yaml\ncontainers:\n- command:\n  - \/shared\/argocd-dex\n  - rundex\n```\nNote that starting from v2.0 the argocd binary behaviour has changed. \nIt will have all argocd binaries such as `argocd-dex`, `argocd-server`, `argocd-repo-server`, \n`argocd-application-controller`, `argocd-util`, `argocd` baked inside. \nThe binary will change behaviour based on its name. \n\n## Updated retry params type from String to Duration for app sync\n\nThe App Sync command exposes certain retry options, which allow users to parameterize the sync retries. \nTwo of those params, `retry-backoff-duration` and `retry-backoff-max-duration`, were declared as type `string` rather than `duration`. \nThis allowed users to provide the values to these flags without a time unit (seconds, minutes, hours ...) 
or any random string as well, \nbut since we have migrated from `string` to `duration`, it is now mandatory for users to provide a valid duration including a time unit.\n\n```bash\nEXAMPLE: \nargocd app sync <app-name> --retry-backoff-duration=10 -> invalid\nargocd app sync <app-name> --retry-backoff-duration=10s -> valid\n```\n\n## Switch to Golang 1.16\n\nThe official Argo CD binaries are now being built using Go 1.16, making a jump\nfrom the previous 1.14.x. Users should note that Go 1.15 introduced deprecation\nof validating server names against the `CommonName` property of a certificate\nwhen performing TLS connections.\n\nIf you have repository servers with an incompatible certificate, connections to\nthose servers might break. You will have to issue correct certificates to \nresolve such a situation.\n\n## Migration of CRDs from apiextensions\/v1beta1 to apiextensions\/v1\n\nOur CRDs (`Application` and `AppProject`) have been moved from the\ndeprecated `apiextensions\/v1beta1` to the `apiextensions\/v1` API group.\n\nThis does **not** affect the version of the CRDs themselves.\n\nWe do not expect that changes to existing CRs for `Application` and `AppProject`\nare required from users, or that this change otherwise requires action; this\nnote is just included for completeness.\n\n## Helm v3 is now the default when rendering Charts\n\nWith this release, we made Helm v3 the default version for rendering any\nHelm charts through Argo CD. We also disabled the Helm version auto-detection\ndepending on the `apiVersion` field of the `Chart.yaml`, so the charts will\nbe rendered using Helm v3 regardless of what's in the Chart's `apiVersion`\nfield.\n\nThis can result in minor out-of-sync conditions on your Applications that were\npreviously rendered using Helm v2 (e.g. a change in one of the annotations that\nHelm adds). 
You can fix this by syncing the Application.\n\nIf you have existing Charts that need to be rendered using Helm v2, you will\nneed to explicitly configure your Application to use Helm v2 for rendering the\nchart, as described \n[here](..\/..\/user-guide\/helm.md#helm-version).\n\nPlease also note that Helm v2 is now considered deprecated in Argo CD, as\nit will not receive any updates from the upstream Helm project anymore. We will\nstill ship the Helm v2 binary for the next two releases, but it will be subject\nto removal after that grace period.\n\nUsers are encouraged to upgrade any Charts that still require Helm v2 to be\ncompatible with Helm v3.\n\n## Kustomize version updated to v3.9.4\n\nArgo CD now ships with Kustomize v3.9.4 by default. Please make sure that your\nmanifests will render correctly with this Kustomize version.\n\nIf you need backwards compatibility with a previous version of Kustomize, please\nconsider setting up a custom Kustomize version and configuring your Applications\nto be rendered using that specific version.","site":"argocd","answers_cleaned":"  v1 8 to 2 0     Redis Upgraded to v6 2 1  The bundled Redis version has been upgraded to v6 2 1   The Redis itself should be able to upgrade with no downtime  as well as Argo CD does not use it as a persistent store  However  if you are running Argo CD in production with multiple users it is recommended to upgrade during off peak hours to avoid user visible failures      Environment variables expansion  Argo CD supports using  environment variables           user guide build environment   in config management tools parameters  The expansion logic has been improved and now expands missing environment variables into an empty string      Docker image migrated to use Ubuntu as base  The official Docker image has been migrated to use  ubuntu 20 10  instead of  debian 10 slim  as base image  While this should not affect user experience  you might be affected if you use custom built images and 
or include third party tools in custom built images   Please make sure that your custom tools are still working with the update to v2 0 before deploying it onto production      Container registry switched to quay io and sundown of Docker Hub repository  Due to Docker Hub s new rate limiting and retention policies  the Argo project has decided to switch to the  quay io  https   quay io  registry as a new home for all images published by its sub projects   As of Argo CD version 2 0  the installation manifests are configured to pull the container images from  quay io  and we announce the   sundown   of the existing Docker Hub repositories  For the 2 0 release this means  we will still push to both registries  but we will stop pushing images to Docker Hub once Argo CD 2 1 has been released   Please make sure that your clusters can pull from the  quay io  registry  If you aren t able to do so timely  you can change the container image slugs in the installation manually to Docker Hub as a workaround to install Argo CD 2 0  This workaround will not be possible anymore with 2 1  however      Dex tool migrated from argocd util to argocd dex  The dex commands  rundex  and  gendexcfg  have been migrated from  argocd util  to  argocd dex   It means that you need to update  argocd dex server  deployment s commands to install  argocd dex   binary instead of  argocd util  in init container and run dex command from  argocd dex  instead of  argocd util       bash initContainers    command      cp      n      usr local bin argocd      shared argocd dex         bash containers    command       shared argocd dex     rundex     Note that starting from v2 0 argocd binary behaviour has changed   It will have all argocd binaries such  argocd dex    argocd server    argocd repo server     argocd application controller    argocd util    argocd  baked inside   The binary will change behaviour based on its name       Updated retry params type from String to Duration for app sync  App Sync 
command exposes certain retry options  which allows the users to parameterize the sync retries   Two of those params   retry backoff duration  and  retry backoff max duration  were declared as type  string  rather than  duration    This allowed users to provide the values to these flags without time unit  seconds  minutes  hours      or any random string as well   but since we have migrated from  string  to  duration   it is now mandatory for users to provide a unit  valid duration       bash EXAMPLE   argocd app sync  app name    retry backoff duration 10    invalid argocd app sync  app name    retry backoff duration 10s    valid         Switch to Golang 1 16  The official Argo CD binaries are now being build using Go 1 16  making a jump from the previous 1 14 x  Users should note that Go 1 15 introduced deprecation of validating server names against the  CommonName  property of a certificate when performing TLS connections   If you have repository servers with an incompatible certificate  connections to those servers might break  You will have to issue correct certificates to  unbreak such a situation      Migration of CRDs from apiextensions v1beta1 to apiextensions v1  Our CRDs   Application  and  AppProject   have been moved from the deprecated  apiextensions v1beta1  to the  apiextensions v1  API group   This does   not   affect the version of the CRDs themselves   We do not expect that changes to existing CRs for  Application  and  AppProject  are required from users  or that this change requires otherwise actions and this note is just included for completeness      Helm v3 is now the default when rendering Charts  With this release  we made Helm v3 being the default version for rendering any Helm charts through Argo CD  We also disabled the Helm version auto detection depending on the  apiVersion  field of the  Chart yaml   so the charts will be rendered using Helm v3 regardless of what s in the Chart s  apiVersion  field   This can result in minor out of 
sync conditions on your Applications that were previously rendered using Helm v2  e g  a change in one of the annotations that Helm adds   You can fix this by syncing the Application   If you have existing Charts that require to be rendered using Helm v2  you will need to explicitly configure your Application to use Helm v2 for rendering the chart  as described   here        user guide helm md helm version    Please also note that Helm v2 is now being considered deprecated in Argo CD  as it will not receive any updates from the upstream Helm project anymore  We will still ship the Helm v2 binary for the next two releases  but it will be subject to removal after that grace period   Users are encouraged to upgrade any Charts that still require Helm v2 to be compatible with Helm v3      Kustomize version updated to v3 9 4  Argo CD now ships with Kustomize v3 9 4 by default  Please make sure that your manifests will render correctly with this Kustomize version   If you need backwards compatibility to a previous version of Kustomize  please consider setting up a custom Kustomize version and configure your Applications to be rendered using that specific version "}
{"questions":"argocd Known Issues v2 3 to 2 4 Argo CD 2 4 0 introduced a breaking API change renaming the filter to Impact to API clients Broken filter before 2 4 27","answers":"# v2.3 to 2.4\n\n## Known Issues\n\n### Broken `project` filter before 2.4.27\n\nArgo CD 2.4.0 introduced a breaking API change, renaming the `project` filter to `projects`.\n\n#### Impact to API clients\n\nA similar issue applies to other API clients which communicate with the Argo CD API server via its REST API. If the\nclient uses the `project` field to filter projects, the filter will not be applied. **The failing project filter could \nhave detrimental consequences if, for example, you rely on it to list Applications to be deleted.**\n\n#### Impact to CLI clients\n\nCLI clients older than v2.4.0 rely on client-side filtering and are not impacted by this bug.\n\n#### How to fix the problem\n\nUpgrade to Argo CD >=2.4.27, >=2.5.15, or >=2.6.6. These versions of Argo CD will accept both `project` and `projects` as\nvalid filters.\n\n## KSonnet support is removed\n\nKsonnet was deprecated in [2019](https:\/\/github.com\/ksonnet\/ksonnet\/pull\/914\/files) and is no longer maintained.\nThe time has come to remove it from Argo CD.\n\n## Helm 2 support is removed\n\nHelm 2 has not been officially supported since [Nov 2020](https:\/\/helm.sh\/blog\/helm-2-becomes-unsupported\/). In order to ensure a smooth transition,\nHelm 2 support was preserved in Argo CD. We feel that Helm 3 is stable, and it is time to drop Helm 2 support.\n\n## Support for private repo SSH keys using the SHA-1 signature hash algorithm is removed\n\nNote: this change was back-ported to 2.3.7 and 2.2.12.\n\nArgo CD 2.4 upgraded its base image from Ubuntu 20.04 to Ubuntu 22.04, which upgraded OpenSSH to 8.9. OpenSSH starting\nwith 8.8 [dropped support for the `ssh-rsa` SHA-1 key signature algorithm](https:\/\/www.openssh.com\/txt\/release-8.8).\n\nThe signature algorithm is _not_ the same as the algorithm used when generating the key. There is no need to update \nkeys.\n\nThe signature algorithm is negotiated with the SSH server when the connection is being set up. The client offers its \nlist of accepted signature algorithms, and if the server has a match, the connection proceeds. For most SSH servers on\nup-to-date git providers, acceptable algorithms other than `ssh-rsa` should be available.\n\nBefore upgrading to Argo CD 2.4, check whether the git provider(s) you use with SSH authentication support algorithms newer\nthan `ssh-rsa`.\n\n1. Make sure your version of SSH is >= 8.9 (the version used by Argo CD). If not, upgrade it before proceeding.\n\n   ```shell\n   ssh -V\n   ```\n   \n   Example output: `OpenSSH_8.9p1 Ubuntu-3, OpenSSL 3.0.2 15 Mar 2022`\n\n2. Once you have a recent version of OpenSSH, follow the directions from the [OpenSSH 8.8 release notes](https:\/\/www.openssh.com\/txt\/release-8.7):\n\n   > To check whether a server is using the weak ssh-rsa public key\n   > algorithm, for host authentication, try to connect to it after\n   > removing the ssh-rsa algorithm from ssh(1)'s allowed list:\n   >\n   > ```shell\n   > ssh -oHostKeyAlgorithms=-ssh-rsa user@host\n   > ```\n   >\n   > If the host key verification fails and no other supported host key\n   > types are available, the server software on that host should be\n   > upgraded.\n\n   If the server does not support an acceptable algorithm, you will get an error similar to this:\n\n   ```\n   $ ssh -oHostKeyAlgorithms=-ssh-rsa vs-ssh.visualstudio.com\n   Unable to negotiate with 20.42.134.1 port 22: no matching host key type found. 
Their offer: ssh-rsa\n   ```\n\n   This indicates that the server needs to update its supported key signature algorithms, and Argo CD will not connect\n   to it.\n\n### Workaround\n\nThe [OpenSSH 8.8 release notes](https:\/\/www.openssh.com\/txt\/release-8.8) describe a workaround if you cannot change the \nserver's key signature algorithms configuration.\n\n> Incompatibility is more likely when connecting to older SSH\n> implementations that have not been upgraded or have not closely tracked\n> improvements in the SSH protocol. For these cases, it may be necessary\n> to selectively re-enable RSA\/SHA1 to allow connection and\/or user\n> authentication via the HostkeyAlgorithms and PubkeyAcceptedAlgorithms\n> options. For example, the following stanza in ~\/.ssh\/config will enable\n> RSA\/SHA1 for host and user authentication for a single destination host:\n> \n> ```\n> Host old-host\n>     HostkeyAlgorithms +ssh-rsa\n>     PubkeyAcceptedAlgorithms +ssh-rsa\n> ```\n> \n> We recommend enabling RSA\/SHA1 only as a stopgap measure until legacy\n> implementations can be upgraded or reconfigured with another key type\n> (such as ECDSA or Ed25519).\n\nTo apply this to Argo CD, you could create a ConfigMap with the desired ssh config file and then mount it at \n`\/home\/argocd\/.ssh\/config`.\n\n## Configure RBAC to account for new `exec` resource\n\n2.4 introduces a new `exec` [RBAC resource](https:\/\/argo-cd.readthedocs.io\/en\/stable\/operator-manual\/rbac\/#rbac-resources-and-actions).\n\nWhen you upgrade to 2.4, RBAC policies with `*` in the resource field and `create` or `*` in the action field will automatically grant the `exec` privilege.\n\nTo avoid granting the new privilege, replace the existing policy with a list of new policies explicitly listing the old resources.\n\nThe exec feature is [disabled by default](https:\/\/argo-cd.readthedocs.io\/en\/stable\/operator-manual\/rbac\/#exec-resource), \nbut it is still a good idea to double-check your RBAC 
configuration to enforce the least necessary privileges.\n\n### Example\n\nOld:\n\n```csv\np, role:org-admin, *, create, my-proj\/*, allow\n```\n\nNew:\n\n```csv\np, role:org-admin, clusters, create, my-proj\/*, allow\np, role:org-admin, projects, create, my-proj\/*, allow\np, role:org-admin, applications, create, my-proj\/*, allow\np, role:org-admin, repositories, create, my-proj\/*, allow\np, role:org-admin, certificates, create, my-proj\/*, allow\np, role:org-admin, accounts, create, my-proj\/*, allow\np, role:org-admin, gpgkeys, create, my-proj\/*, allow\n```\n\n## Enable logs RBAC enforcement\n\n2.4 introduced `logs` as a new RBAC resource. In 2.3, users with `applications, get` access automatically get logs\naccess. <del>In 2.5, you will have to explicitly grant `logs, get` access. Logs RBAC enforcement can be enabled with a flag\nin 2.4. We recommend enabling the flag now for an easier upgrade experience in 2.5.<\/del> \n\n!!! important \n    Logs RBAC enforcement **will not** be enabled by default in 2.5. This decision \n    [was made](https:\/\/github.com\/argoproj\/argo-cd\/issues\/10551#issuecomment-1242303457) to avoid breaking logs access \n    under [Project Roles](..\/..\/user-guide\/projects.md#project-roles), which do not provide a mechanism to grant `logs`\n    resource access.\n\nTo enable logs RBAC enforcement, add this to your argocd-cm ConfigMap:\n\n```yaml\nserver.rbac.log.enforce.enable: \"true\"\n```\n\nIf you want to allow the same users to continue to have logs access, just find every line that grants \n`applications, get` access and also grant `logs, get`. \n\n### Example\n\nOld:\n\n```csv\np, role:staging-db-admins, applications, get, staging-db-admins\/*, allow\n\np, role:test-db-admins, applications, *, staging-db-admins\/*, allow\n```\n\nNew:\n\n```csv\np, role:staging-db-admins, applications, get, staging-db-admins\/*, allow\np, role:staging-db-admins, logs, get, staging-db-admins\/*, allow\n\np, role:test-db-admins, applications, *, staging-db-admins\/*, allow\np, role:test-db-admins, logs, get, staging-db-admins\/*, allow\n```\n\n### Pod Logs UI\n\nSince 2.4.9, the LOGS tab in the pod view is visible in the UI only to users with an explicit allow get logs policy.\n\n### Known pod logs UI issue prior to 2.4.9\n\nWhen users who don't have an explicit allow get logs policy press the \"LOGS\" tab in the pod view, a red \"unable to load data: Internal error\" message appears at the bottom of the screen, and \"Failed to load data, please try again\" is displayed.\n\n## Test repo-server with its new dedicated Service Account\n\nAs a security enhancement, the argocd-repo-server Deployment uses its own Service Account instead of `default`.\n\nIf you have a custom environment that might depend on repo-server using the `default` Service Account (such as a plugin\nthat uses the Service Account for auth), be sure to test before deploying the 2.4 upgrade to production.\n\n## Plugins\n\n### Remove the shared volume from any sidecar plugins\n\nAs a security enhancement, [sidecar plugins](..\/config-management-plugins.md#option-2-configure-plugin-via-sidecar)\nno longer share the \/tmp directory with the repo-server.\n\nIf you have one or more sidecar plugins enabled, replace the \/tmp volume mount for each sidecar with a volume specific \nto each plugin.\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: argocd-repo-server\nspec:\n  template:\n    spec:\n      containers:\n      - name: your-plugin-name\n        volumeMounts:\n        - mountPath: \/tmp\n          name: your-plugin-name-tmp\n      volumes:\n   
     # Add this volume.\n        - name: your-plugin-name-tmp\n          emptyDir: {}\n```\n\n### Update plugins to use newly-prefixed environment variables\n\nIf you use plugins that depend on user-supplied environment variables, then they must be updated to be compatible with\nArgo CD 2.4. Here is an example of user-supplied environment variables in the `plugin` section of an Application spec:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nspec:\n  source:\n    plugin:\n      env:\n        - name: FOO\n          value: bar\n```\n\nGoing forward, all user-supplied environment variables will be prefixed with `ARGOCD_ENV_` before being sent to the \nplugin's `init`, `generate`, or `discover` commands. This prevents users from setting potentially-sensitive environment\nvariables.\n\nIf you have written a custom plugin which handles user-provided environment variables, update it to handle the new\nprefix.\n\nIf you use a third-party plugin which does not explicitly advertise Argo CD 2.4 support, it might not handle the \nprefixed environment variables. Open an issue with the plugin's authors and confirm support before upgrading to Argo CD \n2.4.\n\n### Confirm sidecar plugins have all necessary environment variables\n\nA bug in < 2.4 caused `init` and `generate` commands to receive environment variables from the main repo-server\ncontainer, taking precedence over environment variables from the plugin's sidecar.\n\nStarting in 2.4, sidecar plugins will not receive environment variables from the main repo-server container. 
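If a sidecar plugin previously relied on a variable inherited from the repo-server container, declare it explicitly on the sidecar instead. A minimal sketch of the relevant Deployment fragment (the container name and the variable are illustrative, not taken from the stock manifests):

```yaml
containers:
- name: your-plugin-name
  env:
  # Illustrative: any variable the plugin used to inherit from the
  # repo-server container must now be set here explicitly.
  - name: SOME_PLUGIN_SETTING
    value: some-value
```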
Make sure\nthat any environment variables necessary for the sidecar plugin to function are set on the sidecar plugin.\n\nargocd-cm plugins will continue to receive environment variables from the main repo-server container.","site":"argocd","answers_cleaned":"  v2 3 to 2 4     Known Issues      Broken  project  filter before 2 4 27  Argo CD 2 4 0 introduced a breaking API change  renaming the  project  filter to  projects         Impact to API clients  A similar issue applies to other API clients which communicate with the Argo CD API server via its REST API  If the client uses the  project  field to filter projects  the filter will not be applied    The failing project filter could  have detrimental consequences if  for example  you rely on it to list Applications to be deleted          Impact to CLI clients  CLI clients older that v2 4 0 rely on client side filtering and are not impacted by this bug        How to fix the problem  Upgrade to Argo CD   2 4 27    2 5 15  or   2 6 6  This version of Argo CD will accept both  project  and  projects  as valid filters      KSonnet support is removed  Ksonnet was deprecated in  2019  https   github com ksonnet ksonnet pull 914 files  and is no longer maintained  The time has come to remove it from the Argo CD      Helm 2 support is removed  Helm 2 has not been officially supported since  Nov 2020  https   helm sh blog helm 2 becomes unsupported    In order to ensure a smooth transition  Helm 2 support was preserved in the Argo CD  We feel that Helm 3 is stable  and it is time to drop Helm 2 support      Support for private repo SSH keys using the SHA 1 signature hash algorithm is removed  Note  this change was back ported to 2 3 7 and 2 2 12   Argo CD 2 4 upgraded its base image from Ubuntu 20 04 to Ubuntu 22 04  which upgraded OpenSSH to 8 9  OpenSSH starting with 8 8  dropped support for the  ssh rsa  SHA 1 key signature algorithm  https   www openssh com txt release 8 8    The signature algorithm is  not  the same as 
the algorithm used when generating the key  There is no need to update  keys   The signature algorithm is negotiated with the SSH server when the connection is being set up  The client offers its  list of accepted signature algorithms  and if the server has a match  the connection proceeds  For most SSH servers on up to date git providers  acceptable algorithms other than  ssh rsa  should be available   Before upgrading to Argo CD 2 4  check whether your git provider s  using SSH authentication support algorithms newer than  rsa ssh    1  Make sure your version of SSH    8 9  the version used by Argo CD   If not  upgrade it before proceeding         shell    ssh  V               Example output   OpenSSH 8 9p1 Ubuntu 3  OpenSSL 3 0 2 15 Mar 2022   2  Once you have a recent version of OpenSSH  follow the directions from the  OpenSSH 8 8 release notes  https   www openssh com txt release 8 7         To check whether a server is using the weak ssh rsa public key      algorithm  for host authentication  try to connect to it after      removing the ssh rsa algorithm from ssh 1  s allowed list               shell      ssh  oHostKeyAlgorithms  ssh rsa user host                    If the host key verification fails and no other supported host key      types are available  the server software on that host should be      upgraded      If the server does not support an acceptable version  you will get an error similar to this               ssh  oHostKeyAlgorithms  ssh rsa vs ssh visualstudio com    Unable to negotiate with 20 42 134 1 port 22  no matching host key type found  Their offer  ssh rsa            This indicates that the server needs to update its supported key signature algorithms  and Argo CD will not connect    to it       Workaround  The  OpenSSH 8 8 release notes  https   www openssh com txt release 8 8  describe a workaround if you cannot change the  server s key signature algorithms configuration     Incompatibility is more likely when connecting to older SSH  
 implementations that have not been upgraded or have not closely tracked   improvements in the SSH protocol  For these cases  it may be necessary   to selectively re enable RSA SHA1 to allow connection and or user   authentication via the HostkeyAlgorithms and PubkeyAcceptedAlgorithms   options  For example  the following stanza in    ssh config will enable   RSA SHA1 for host and user authentication for a single destination host             Host old host       HostkeyAlgorithms  ssh rsa       PubkeyAcceptedAlgorithms  ssh rsa            We recommend enabling RSA SHA1 only as a stopgap measure until legacy   implementations can be upgraded or reconfigured with another key type    such as ECDSA or Ed25519    To apply this to Argo CD  you could create a ConfigMap with the desired ssh config file and then mount it at    home argocd  ssh config       Configure RBAC to account for new  exec  resource  2 4 introduces a new  exec   RBAC resource  https   argo cd readthedocs io en stable operator manual rbac  rbac resources and actions    When you upgrade to 2 4  RBAC policies with     in the resource field and  create  or     in the action field will automatically grant the  exec  privilege   To avoid granting the new privilege  replace the existing policy with a list of new policies explicitly listing the old resources   The exec feature is  disabled by default  https   argo cd readthedocs io en stable operator manual rbac  exec resource    but it is still a good idea to double check your RBAC configuration to enforce least necessary privileges       Example  Old      csv p  role org admin     create  my proj    allow      New      csv p  role org admin  clusters  create  my proj    allow p  role org admin  projects  create  my proj    allow p  role org admin  applications  create  my proj    allow p  role org admin  repositories  create  my proj    allow p  role org admin  certificates  create  my proj    allow p  role org admin  accounts  create  my proj    allow p  
role org admin  gpgkeys  create  my proj    allow         Enable logs RBAC enforcement  2 4 introduced  logs  as a new RBAC resource  In 2 3  users with  applications  get  access automatically get logs access   del In 2 5  you will have to explicitly grant  logs  get  access  Logs RBAC enforcement can be enabled with a flag in 2 4  We recommend enabling the flag now for an easier upgrade experience in 2 5   del        important      Logs RBAC enforcement   will not   be enabled by default in 2 5  This decision       was made  https   github com argoproj argo cd issues 10551 issuecomment 1242303457  to avoid breaking logs access      under  Project Roles        user guide projects md project roles   which do not provide a mechanism to grant  logs      resource access   To enabled logs RBAC enforcement  add this to your argocd cm ConfigMap      yaml server rbac log enforce enable   true       If you want to allow the same users to continue to have logs access  just find every line that grants   applications  get  access and also grant  logs  get         Example  Old      csv p  role staging db admins  applications  get  staging db admins    allow  p  role test db admins  applications     staging db admins    allow      New      csv p  role staging db admins  applications  get  staging db admins    allow p  role staging db admins  logs  get  staging db admins    allow  p  role test db admins  applications     staging db admins    allow p  role test db admins  logs  get  staging db admins    allow          Pod Logs UI  Since 2 4 9  the LOGS tab in pod view is visible in the UI only for users with explicit allow get logs policy       Known pod logs UI issue prior to 2 4 9  Upon pressing the  LOGS  tab in pod view by users who don t have an explicit allow get logs policy  the red  unable to load data  Internal error  is received in the bottom of the screen  and  Failed to load data  please try again  is displayed      Test repo server with its new dedicated Service 
Account  As a security enhancement  the argocd repo server Deployment uses its own Service Account instead of  default    If you have a custom environment that might depend on repo server using the  default  Service Account  such as a plugin that uses the Service Account for auth   be sure to test before deploying the 2 4 upgrade to production      Plugins      Remove the shared volume from any sidecar plugins  As a security enhancement   sidecar plugins     config management plugins md option 2 configure plugin via sidecar  no longer share the  tmp directory with the repo server   If you have one or more sidecar plugins enabled  replace the  tmp volume mount for each sidecar to use a volume specific  to each plugin      yaml apiVersion  apps v1 kind  Deployment metadata    name  argocd repo server spec    template      spec        containers          name  your plugin name         volumeMounts            mountPath   tmp           name  your plugin name tmp       volumes            Add this volume            name  your plugin name tmp           emptyDir              Update plugins to use newly prefixed environment variables  If you use plugins that depend on user supplied environment variables  then they must be updated to be compatible with Argo CD 2 4  Here is an example of user supplied environment variables in the  plugin  section of an Application spec      yaml apiVersion  argoproj io v1alpha1 kind  Application spec    source      plugin        env            name  FOO           value  bar      Going forward  all user supplied environment variables will be prefixed with  ARGOCD ENV   before being sent to the  plugin s  init    generate   or  discover  commands  This prevents users from setting potentially sensitive environment variables   If you have written a custom plugin which handles user provided environment variables  update it to handle the new prefix   If you use a third party plugin which does not explicitly advertise Argo CD 2 4 support  it might 
not handle the  prefixed environment variables  Open an issue with the plugin s authors and confirm support before upgrading to Argo CD  2 4       Confirm sidecar plugins have all necessary environment variables  A bug in   2 4 caused  init  and  generate  commands to receive environment variables from the main repo server container  taking precedence over environment variables from the plugin s sidecar   Starting in 2 4  sidecar plugins will not receive environment variables from the main repo server container  Make sure that any environment variables necessary for the sidecar plugin to function are set on the sidecar plugin   argocd cm plugins will continue to receive environment variables from the main repo server container "}
{"questions":"argocd v2 6 to 2 7 field and in the action field it will automatically grant the When you upgrade to 2 7 RBAC policies with in the resource Configure RBAC to account for new resource privilege RBAC resource 2 2 7 introduces the new Proxy Extensions 1 feature with a new","answers":"# v2.6 to 2.7\n\n## Configure RBAC to account for new `extensions` resource\n\n2.7 introduces the new [Proxy Extensions][1] feature with a new `extensions`\n[RBAC resource][2].\n\nWhen you upgrade to 2.7, RBAC policies with `*` in the *resource*\nfield and `*` in the *action* field will automatically grant the\n`extensions` privilege.\n\nThe Proxy Extension feature is disabled by default; however, it is\nrecommended to check your RBAC configurations to enforce the least\nnecessary privileges.\n\n### Example\n\nOld:\n\n```csv\np, role:org-admin, *, *, *, allow\n```\n\nNew:\n\n```csv\np, role:org-admin, clusters, create, my-proj\/*, allow\np, role:org-admin, projects, create, my-proj\/*, allow\np, role:org-admin, applications, create, my-proj\/*, allow\np, role:org-admin, repositories, create, my-proj\/*, allow\np, role:org-admin, certificates, create, my-proj\/*, allow\np, role:org-admin, accounts, create, my-proj\/*, allow\np, role:org-admin, gpgkeys, create, my-proj\/*, allow\n# If you don't want to grant the new permission, don't include the following line\np, role:org-admin, extensions, invoke, my-proj\/*, allow\n```\n\n## Upgraded Helm Version\n\nNote that the bundled Helm version has been upgraded from 3.10.3 to 3.11.2.\n\n## Upgraded Kustomize Version\n\nNote that the bundled Kustomize version has been upgraded from 4.5.7 to 5.0.1.\n\n## Notifications: `^` behavior change in Sprig's semver functions\n\nArgo CD 2.7 upgrades Sprig templating, specifically within Argo CD notifications, to v3. 
That upgrade includes an upgrade of [Masterminds\/semver](https:\/\/github.com\/Masterminds\/semver\/releases) to v3.\n\nMasterminds\/semver v3 changed the behavior of the `^` prefix in semantic version constraints. If you are using Sprig template functions in your notifications templates which include references to [Sprig's semver functions](https:\/\/masterminds.github.io\/sprig\/semver.html) and use the `^` prefix, read the [Masterminds\/semver changelog](https:\/\/github.com\/Masterminds\/semver\/releases\/tag\/v3.0.0) to understand how your notifications' behavior may change.\n\n## Tini as entrypoint\n\nThe manifests are now using [`tini` as entrypoint][3], instead of `entrypoint.sh`. Until 2.8, `entrypoint.sh` is retained for upgrade compatibility. This means that the deployment manifests have to be updated after upgrading to 2.7, and before upgrading to 2.8 later. If the manifests are not updated before moving to 2.8, the containers will not be able to start.\n\n[1]: ..\/..\/developer-guide\/extensions\/proxy-extensions.md\n[2]: https:\/\/argo-cd.readthedocs.io\/en\/stable\/operator-manual\/rbac\/#the-extensions-resource\n[3]: https:\/\/github.com\/argoproj\/argo-cd\/pull\/12707\n\n\n## Deep Links template updates\n\nDeep Links now allow you to access other values like `cluster`, `project`, `application` and `resource` in the url and condition templates for specific categories of links.\nThe templating syntax has also been updated to be prefixed with the type of resource you want to access. For example, previously if you had a `resource.links` config like:\n```yaml\n  resource.links: |\n    - url: https:\/\/mycompany.splunk.com?search=\n      title: Splunk\n      if: kind == \"Pod\" || kind == \"Deployment\"\n```\nThis would become:\n```yaml\n  resource.links: |\n    - url: https:\/\/mycompany.splunk.com?search=&env=\n      title: Splunk\n      if: resource.kind == \"Pod\" || resource.kind == \"Deployment\"\n```\n\nRead the full 
[documentation](..\/deep_links.md) to see all possible combinations of values accessible for each category of links.\n\n## Support of `helm.sh\/resource-policy` annotation\n\nArgo CD now supports the `helm.sh\/resource-policy` annotation to control the deletion of resources. The behavior is the same as that of the\n`argocd.argoproj.io\/sync-options: Delete=false` annotation: if the annotation is present and set to `keep`, the resource will not be deleted\nwhen the application is deleted.\n\n## Check your Kustomize patches for `--redis` changes\n\nStarting in Argo CD 2.7, the install manifests no longer pass the Redis server name via `--redis`. \n\nIf your environment uses Kustomize JSON patches to modify the Redis server name, the patch might break when you upgrade\nto the 2.7 manifests. If it does, you can remove the patch and instead set the Redis server name via the `redis.server` \nfield in the argocd-cmd-params-cm ConfigMap. That value will be passed to the necessary components via `valueFrom` \nenvironment variables.\n\n## `argocd applicationset` CLI incompatibilities for ApplicationSets with list generators\n\nIf you are running Argo CD v2.7.0-2.7.2 server-side, then CLI versions outside that range will incorrectly handle list\ngenerators. That is because the gRPC interface for those versions used the `elements` field number for the new\n`elementsYaml` field.\n\nIf you are running the Argo CD CLI versions v2.7.0-2.7.2 with a server-side version of v2.7.3 or later, then the CLI\nwill send the contents of the `elements` field to the server, which will interpret it as the `elementsYaml` field. 
This\nwill cause the ApplicationSet to fail at runtime with an error similar to this:\n\n```\nerror unmarshling decoded ElementsYaml error converting YAML to JSON: yaml: control characters are not allowed\n```\n\nBe sure to use CLI version v2.7.3 or later with server-side version v2.7.3 or later.","site":"argocd"}
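The RBAC advice above (check wildcard policies before upgrading to 2.7) can be mechanized. A minimal sketch, assuming the policy rows have been exported to a local `policy.csv`; the file name and its contents here are illustrative, not Argo CD defaults:

```shell
# Write an example policy file (illustrative contents).
cat > policy.csv <<'EOF'
p, role:org-admin, *, *, *, allow
p, role:dev, applications, get, my-proj/*, allow
EOF

# Rows with '*' in both the resource (3rd) and action (4th) columns
# are the ones that implicitly gain the `extensions` privilege in 2.7.
awk -F', *' '$1 == "p" && $3 == "*" && $4 == "*" { print $2 }' policy.csv
```

For the sample file above this prints only `role:org-admin`, the one role whose policy should be reviewed before the upgrade.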
{"questions":"argocd Keycloak Integrating Keycloak and ArgoCD Creating a new client in Keycloak to determine privileges in Argo You will create a client within Keycloak and configure ArgoCD to use Keycloak for authentication using groups set in Keycloak These instructions will take you through the entire process of getting your ArgoCD application authenticating with Keycloak","answers":"# Keycloak\n\n# Integrating Keycloak and ArgoCD\n\nThese instructions will take you through the entire process of getting your ArgoCD application authenticating with Keycloak. \nYou will create a client within Keycloak and configure ArgoCD to use Keycloak for authentication, using groups set in Keycloak\nto determine privileges in Argo.\n\n## Creating a new client in Keycloak\n\nFirst we need to set up a new client. Start by logging into your Keycloak server, select the realm you want to use (`master` by default)\nand then go to __Clients__ and click the __Create client__ button at the top.\n\n![Keycloak add client](..\/..\/assets\/keycloak-add-client.png \"Keycloak add client\")\n\nEnable __Client authentication__.\n\n![Keycloak add client Step 2](..\/..\/assets\/keycloak-add-client_2.png \"Keycloak add client Step 2\")\n\nConfigure the client by setting the __Root URL__, __Web origins__, and __Admin URL__ to the hostname (https:\/\/{hostname}).\n\nYou can also set __Home URL__ to your _\/applications_ path and __Valid Post logout redirect URIs__ to \"+\".\n\nThe Valid Redirect URIs should be set to https:\/\/{hostname}\/auth\/callback (you can also set the less secure https:\/\/{hostname}\/* for testing\/development purposes,\nbut it's not recommended in production).\n\n![Keycloak configure client](..\/..\/assets\/keycloak-configure-client.png \"Keycloak configure client\")\n\nMake sure to click __Save__. There should be a tab called __Credentials__. 
You can copy the Secret that we'll use in our ArgoCD \nconfiguration.\n\n![Keycloak client secret](..\/..\/assets\/keycloak-client-secret.png \"Keycloak client secret\")\n\n## Configuring the groups claim\n\nIn order for ArgoCD to provide the groups the user is in, we need to configure a groups claim that can be included in the authentication token.\nTo do this we'll start by creating a new __Client Scope__ called _groups_.\n\n![Keycloak add scope](..\/..\/assets\/keycloak-add-scope.png \"Keycloak add scope\")\n\nOnce you've created the client scope, you can add a Token Mapper which will add the groups claim to the token when the client requests\nthe groups scope. In the \"Mappers\" tab, click on \"Configure a new mapper\" and choose __Group Membership__.\nMake sure to set the __Name__ as well as the __Token Claim Name__ to _groups_. Also disable the \"Full group path\".\n\n![Keycloak groups mapper](..\/..\/assets\/keycloak-groups-mapper.png \"Keycloak groups mapper\")\n\nWe can now configure the client to provide the _groups_ scope. Go back to the client we've created earlier and go to the \"Client Scopes\" tab.\nClick on \"Add client scope\", choose the _groups_ scope and add it either to the __Default__ or to the __Optional__ Client Scope. If you put it in the Optional\ncategory, you will need to make sure that ArgoCD requests the scope in its OIDC configuration. Since we will always want group information, I recommend\nusing the Default category.\n\n![Keycloak client scope](..\/..\/assets\/keycloak-client-scope.png \"Keycloak client scope\")\n\nCreate a group called _ArgoCDAdmins_ and have your current user join the group.\n\n![Keycloak user group](..\/..\/assets\/keycloak-user-group.png \"Keycloak user group\")\n\n## Configuring ArgoCD OIDC\n\nLet's start by storing the client secret you generated earlier in the argocd secret _argocd-secret_.\n\n1. 
First you'll need to encode the client secret in base64: `$ echo -n '83083958-8ec6-47b0-a411-a8c55381fbd2' | base64`\n2. Then you can edit the secret and add the base64 value to a new key called _oidc.keycloak.clientSecret_ using `$ kubectl edit secret argocd-secret`.\n   \nYour Secret should look something like this:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: argocd-secret\ndata:\n  ...\n  oidc.keycloak.clientSecret: ODMwODM5NTgtOGVjNi00N2IwLWE0MTEtYThjNTUzODFmYmQy\n  ...\n```\n\nNow we can configure the config map and add the OIDC configuration to enable our Keycloak authentication.\nYou can use `$ kubectl edit configmap argocd-cm`.\n\nYour ConfigMap should look like this:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cm\ndata:\n  url: https:\/\/argocd.example.com\n  oidc.config: |\n    name: Keycloak\n    issuer: https:\/\/keycloak.example.com\/realms\/master\n    clientID: argocd\n    clientSecret: $oidc.keycloak.clientSecret\n    requestedScopes: [\"openid\", \"profile\", \"email\", \"groups\"]\n```\n\nMake sure that:\n\n- __issuer__ ends with the correct realm (in this example _master_)\n- on Keycloak releases older than version 17, the __issuer__ URL must include \/auth (in this example \/auth\/realms\/master)\n- __clientID__ is set to the Client ID you configured in Keycloak\n- __clientSecret__ points to the right key you created in the _argocd-secret_ Secret\n- __requestedScopes__ contains the _groups_ claim if you didn't add it to the Default scopes\n\n## Configuring ArgoCD Policy\n\nNow that we have an authentication that provides groups, we want to apply a policy to these groups.\nWe can modify the _argocd-rbac-cm_ ConfigMap using `$ kubectl edit configmap argocd-rbac-cm`.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-rbac-cm\ndata:\n  policy.csv: |\n    g, ArgoCDAdmins, role:admin\n```\n\nIn this example we give the role _role:admin_ to all users in the group 
_ArgoCDAdmins_.\n\n## Login\n\nYou can now log in using our new Keycloak OIDC authentication:\n\n![Keycloak ArgoCD login](..\/..\/assets\/keycloak-login.png \"Keycloak ArgoCD login\")\n\n## Troubleshoot\nIf ArgoCD auth returns a 401, or the login attempt results in a redirect loop, restart the argocd-server pod:\n```\nkubectl rollout restart deployment argocd-server -n argocd\n```","site":"argocd"}
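The client-secret encoding step in the Keycloak instructions above can be sanity-checked from any shell. A sketch using the same dummy secret as the example; `printf '%s'` (like `echo -n`) avoids the trailing newline that would otherwise corrupt the stored value:

```shell
# Encode a (dummy) Keycloak client secret for the argocd-secret Secret.
secret='83083958-8ec6-47b0-a411-a8c55381fbd2'
encoded=$(printf '%s' "$secret" | base64)
echo "$encoded"

# The value must round-trip: decoding yields the original secret.
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$secret" ] && echo "round-trip OK"
```

The encoded output matches the `oidc.keycloak.clientSecret` value shown in the example Secret above.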
{"questions":"argocd Integrating OneLogin and ArgoCD If you re using this IdP please consider to this document markdownlint enable MD033 note Are you using this Please contribute OneLogin div style text align center img src assets argo png div markdownlint disable MD033","answers":"# OneLogin\n\n!!! note \"Are you using this? Please contribute!\"\n    If you're using this IdP please consider [contributing](..\/..\/developer-guide\/docs-site.md) to this document.\n\n<!-- markdownlint-disable MD033 -->\n<div style=\"text-align:center\"><img src=\"..\/..\/..\/assets\/argo.png\" \/><\/div>\n<!-- markdownlint-enable MD033 -->\n\n# Integrating OneLogin and ArgoCD\n\nThese instructions will take you through the entire process of getting your ArgoCD application authenticating with OneLogin. You will create a custom OIDC application within OneLogin and configure ArgoCD to use OneLogin for authentication, using UserRoles set in OneLogin to determine privileges in Argo.\n\n## Creating and Configuring OneLogin App\n\nFor your ArgoCD application to communicate with OneLogin, you will first need to create and configure the OIDC application on the OneLogin side.\n\n### Create OIDC Application\n\nTo create the application, do the following:\n\n1. Navigate to your OneLogin portal, then Administration > Applications.\n2. Click \"Add App\".\n3. Search for \"OpenID Connect\" in the search field.\n4. Select the \"OpenId Connect (OIDC)\" app to create.\n5. Update the \"Display Name\" field (could be something like \"ArgoCD (Production)\").\n6. Click \"Save\".\n\n### Configuring OIDC Application Settings\n\nNow that the application is created, you can configure the settings of the app.\n\n#### Configuration Tab\n\nUpdate the \"Configuration\" settings as follows:\n\n1. Select the \"Configuration\" tab on the left.\n2. Set the \"Login Url\" field to https:\/\/argocd.myproject.com\/auth\/login, replacing the hostname with your own.\n3. 
Set the \"Redirect Url\" field to https:\/\/argocd.myproject.com\/auth\/callback, replacing the hostname with your own.\n4. Click \"Save\".\n\n!!! note \"OneLogin may not let you save any other fields until the above fields are set.\"\n\n#### Info Tab\n\nYou can update the \"Display Name\", \"Description\", \"Notes\", or the display images that appear in the OneLogin portal here.\n\n#### Parameters Tab\n\nThis tab controls what information is sent to Argo in the token. By default it will contain a Groups field and \"Credentials are\" is set to \"Configured by admin\". Leave \"Credentials are\" as the default.\n\nHow the Value of the Groups field is configured will vary based on your needs, but to use OneLogin User roles for ArgoCD privileges, configure the Value of the Groups field with the following:\n\n1. Click \"Groups\". A modal appears.\n2. Set the \"Default if no value selected\" field to \"User Roles\".\n3. Set the transform field (below it) to \"Semicolon Delimited Input\".\n4. Click \"Save\".\n\nWhen a user attempts to log in to Argo with OneLogin, the User roles in OneLogin, say, Manager, ProductTeam, and TestEngineering, will be included in the Groups field in the token. 
These are the values needed for Argo to assign permissions.\n\nThe groups field in the token will look similar to the following:\n\n```\n\"groups\": [\n    \"Manager\",\n    \"ProductTeam\",\n    \"TestEngineering\",\n  ],\n```\n\n#### Rules Tab\n\nTo get up and running, you do not need to make modifications to any settings here.\n\n#### SSO Tab\n\nThis tab contains much of the information needed to be placed into your ArgoCD configuration file (API endpoints, client ID, client secret).\n\nConfirm \"Application Type\" is set to \"Web\".\n\nConfirm \"Token Endpoint\" is set to \"Basic\".\n\n#### Access Tab\n\nThis tab controls who can see this application in the OneLogin portal.\n\nSelect the roles you wish to have access to this application and click \"Save\".\n\n#### Users Tab\n\nThis tab shows you the individual users that have access to this application (usually the ones that have roles specified in the Access Tab).\n\nTo get up and running, you do not need to make modifications to any settings here.\n\n#### Privileges Tab\n\nThis tab shows which OneLogin users can configure this app.\n\nTo get up and running, you do not need to make modifications to any settings here.\n\n## Updating OIDC configuration in ArgoCD\n\nNow that the OIDC application is configured in OneLogin, you can update Argo configuration to communicate with OneLogin, as well as control permissions for those users that authenticate via OneLogin.\n\n### Tell Argo where OneLogin is\n\nArgo needs to have its config map (argocd-cm) updated in order to communicate with OneLogin. 
Consider the following yaml:\n\n```\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/part-of: argocd\ndata:\n  url: https:\/\/<argocd.myproject.com>\n  oidc.config: |\n    name: OneLogin\n    issuer: https:\/\/<subdomain>.onelogin.com\/oidc\/2\n    clientID: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaaaaaaaa\n    clientSecret: abcdef123456\n\n    # Optional set of OIDC scopes to request. If omitted, defaults to: [\"openid\", \"profile\", \"email\", \"groups\"]\n    requestedScopes: [\"openid\", \"profile\", \"email\", \"groups\"]\n```\n\nThe \"url\" key should have a value of the hostname of your Argo project.\n\nThe \"clientID\" is taken from the SSO tab of the OneLogin application.\n\nThe \"issuer\" is taken from the SSO tab of the OneLogin application. It is one of the issuer api endpoints.\n\nThe \"clientSecret\" value is a client secret located in the SSO tab of the OneLogin application.\n\n!!! note \"If you get an `invalid_client` error when trying to authenticate with OneLogin, there is a possibility that your client secret is incorrect. Keep in mind that in previous versions the `clientSecret` value had to be base64-encoded, but that is no longer required.\"\n\n### Configure Permissions for OneLogin Auth'd Users\n\nPermissions in ArgoCD can be configured by using the OneLogin role names that are passed in the Groups field in the token. 
Consider the following yaml in argocd-rbac-cm.yaml:\n\n```\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-rbac-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/part-of: argocd\ndata:\n  policy.default: role:readonly\n  policy.csv: |\n    p, role:org-admin, applications, *, *\/*, allow\n    p, role:org-admin, clusters, get, *, allow\n    p, role:org-admin, repositories, get, *, allow\n    p, role:org-admin, repositories, create, *, allow\n    p, role:org-admin, repositories, update, *, allow\n    p, role:org-admin, repositories, delete, *, allow\n\n    g, TestEngineering, role:org-admin\n```\n\nIn OneLogin, a user with user role \"TestEngineering\" will receive ArgoCD admin privileges when they log in to Argo via OneLogin. All other users will receive the readonly role. The key takeaway here is that \"TestEngineering\" is passed via the Group field in the token (which is specified in the Parameters tab in OneLogin).","site":"argocd"}
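To illustrate how `g` lines like the one above map the token's groups claim to roles, here is a small sketch. The file name `rbac-bindings.csv` and the loop are illustrative only; the group values are the examples from this page, and Argo CD itself does the matching internally:

```shell
# Group-to-role bindings, as in the policy.csv example above.
cat > rbac-bindings.csv <<'EOF'
g, TestEngineering, role:org-admin
EOF

# Groups claim delivered by OneLogin in the token; only groups with a
# matching g-line resolve to a role, the rest fall back to policy.default.
for grp in Manager ProductTeam TestEngineering; do
  awk -F', *' -v g="$grp" '$1 == "g" && $2 == g { print g " -> " $3 }' rbac-bindings.csv
done
```

For this input only `TestEngineering -> role:org-admin` is printed; Manager and ProductTeam would receive the `role:readonly` default.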
{"questions":"argocd If you re using this IdP please consider to this document A working Single Sign On configuration using Okta via at least two methods was achieved using note Are you using this Please contribute Okta","answers":"# Okta\n\n!!! note \"Are you using this? Please contribute!\"\n    If you're using this IdP please consider [contributing](..\/..\/developer-guide\/docs-site.md) to this document.\n\nA working Single Sign-On configuration using Okta via at least two methods was achieved using:\n\n* [SAML (with Dex)](#saml-with-dex)\n* [OIDC (without Dex)](#oidc-without-dex)\n\n## SAML (with Dex)\n\n!!! note \"Okta app group assignment\"\n    The Okta app's **Group Attribute Statements** regex will be used later to map Okta groups to Argo CD RBAC roles.\n\n1. Create a new SAML application in the Okta UI.\n    * ![Okta SAML App 1](..\/..\/assets\/saml-1.png)\n        I've disabled `App Visibility` because Dex doesn't support Provider-initiated login flows.\n    * ![Okta SAML App 2](..\/..\/assets\/saml-2.png)\n1. Click `View setup instructions` after creating the application in Okta.\n    * ![Okta SAML App 3](..\/..\/assets\/saml-3.png)\n1. Copy the Argo CD URL into the `argocd-cm` ConfigMap's `data.url` field:\n\n<!-- markdownlint-disable MD046 -->\n```yaml\ndata:\n  url: https:\/\/argocd.example.com\n```\n<!-- markdownlint-enable MD046 -->\n\n1. Download the CA certificate to use in the `argocd-cm` configuration.\n    * If you are using this in the caData field, you will need to pass the entire certificate (including `-----BEGIN CERTIFICATE-----` and `-----END CERTIFICATE-----` stanzas) through base64 encoding, for example, `base64 my_cert.pem`.\n    * If you are using the ca field and storing the CA certificate separately as a secret, you will need to mount the secret to the `dex` container in the `argocd-dex-server` Deployment.\n    * ![Okta SAML App 4](..\/..\/assets\/saml-4.png)\n1. Edit the `argocd-cm` and configure the `data.dex.config` section:\n\n<!-- markdownlint-disable MD046 -->\n```yaml\ndex.config: |\n  logger:\n    level: debug\n    format: json\n  connectors:\n  - type: saml\n    id: okta\n    name: Okta\n    config:\n      ssoURL: https:\/\/yourorganization.oktapreview.com\/app\/yourorganizationsandbox_appnamesaml_2\/rghdr9s6hg98s9dse\/sso\/saml\n      # You need `caData` _OR_ `ca`, but not both.\n      caData: |\n        <CA cert passed through base64 encoding>\n      # You need `caData` _OR_ `ca`, but not both.\n      # Path to mount the secret to the dex container\n      ca: \/path\/to\/ca.pem\n      redirectURI: https:\/\/ui.argocd.yourorganization.net\/api\/dex\/callback\n      usernameAttr: email\n      emailAttr: email\n      groupsAttr: group\n```\n<!-- markdownlint-enable MD046 -->\n\n----\n\n### Private deployment\nIt is possible to set up Okta SSO with a private Argo CD installation, where the Okta callback URL is the only publicly exposed endpoint.\nThe settings are largely the same with a few changes in the Okta app configuration and the `data.dex.config` section of the `argocd-cm` ConfigMap.\n\nUsing this deployment model, the user connects to the private Argo CD UI and the Okta authentication flow seamlessly redirects back to the private UI URL.\n\nOften this public endpoint is exposed through an [Ingress object](..\/..\/ingress\/#private-argo-cd-ui-with-multiple-ingress-objects-and-byo-certificate).\n\n\n1. Update the URLs in the Okta app's General settings\n    * ![Okta SAML App Split](..\/..\/assets\/saml-split.png)\n        The `Single sign on URL` field points to the publicly exposed endpoint, and all other URL fields point to the internal endpoint.\n1. Update the `data.dex.config` section of the `argocd-cm` ConfigMap with the external endpoint reference.\n\n<!-- markdownlint-disable MD046 -->\n```yaml\ndex.config: |\n  logger:\n    level: debug\n  connectors:\n  - type: saml\n    id: okta\n    name: Okta\n    config:\n      ssoURL: https:\/\/yourorganization.oktapreview.com\/app\/yourorganizationsandbox_appnamesaml_2\/rghdr9s6hg98s9dse\/sso\/saml\n      # You need `caData` _OR_ `ca`, but not both.\n      caData: |\n        <CA cert passed through base64 encoding>\n      # You need `caData` _OR_ `ca`, but not both.\n      # Path to mount the secret to the dex container\n      ca: \/path\/to\/ca.pem\n      redirectURI: https:\/\/external.path.to.argocd.io\/api\/dex\/callback\n      usernameAttr: email\n      emailAttr: email\n      groupsAttr: group\n```\n<!-- markdownlint-enable MD046 -->\n\n### Connect Okta Groups to Argo CD Roles\nArgo CD is aware of user memberships of Okta groups that match the *Group Attribute Statements* regex.\nThe example above uses the `argocd-*` regex, so Argo CD would be aware of a group named `argocd-admins`.\n\nModify the `argocd-rbac-cm` ConfigMap to connect the `argocd-admins` Okta group to the built-in Argo CD `admin` role.\n<!-- markdownlint-disable MD046 -->\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-rbac-cm\ndata:\n  policy.csv: |\n    g, argocd-admins, role:admin\n  scopes: '[email,groups]'\n```\n<!-- markdownlint-enable MD046 -->\n\n## OIDC (without Dex)\n\n!!! warning \"Okta groups for RBAC\"\n    If you want `groups` scope returned from Okta, you will need to enable [API Access Management with Okta](https:\/\/developer.okta.com\/docs\/concepts\/api-access-management\/). This add-on is free and automatically enabled on Okta developer edition. 
However, it's an optional add-on for production environments, with an additional associated cost.\n\n    You may alternately add a \"groups\" scope and claim to the default authorization server, and then filter the claim in the Okta application configuration. It's not clear if this requires the Authorization Server add-on.\n\n    If this is not an option for you, use the [SAML (with Dex)](#saml-with-dex) option above instead.\n\n!!! note\n    These instructions and screenshots are of Okta version 2023.05.2 E. You can find the current version in the Okta website footer.\n\nFirst, create the OIDC integration:\n\n1. On the `Okta Admin` page, navigate to the Okta Applications at `Applications > Applications.`\n1. Choose `Create App Integration`, and choose `OIDC`, and then `Web Application` in the resulting dialogues.\n    ![Okta OIDC app dialogue](..\/..\/assets\/okta-create-oidc-app.png)\n1. Update the following:\n    1. `App Integration name` and `Logo` - set these to suit your needs; they'll be displayed in the Okta catalogue.\n    1. `Sign-in redirect URLs`: Add `https:\/\/argocd.example.com\/auth\/callback`; replacing `argocd.example.com` with your ArgoCD web interface URL.\n    1. `Sign-out redirect URIs`: Add `https:\/\/argocd.example.com`; substituting the correct domain name as above. \n    1. Either assign groups, or choose to skip this step for now.\n    1. Leave the rest of the options as-is, and save the integration.\n    ![Okta app settings](..\/..\/assets\/okta-app.png)\n1. Copy the `Client ID` and the `Client Secret` from the newly created app; you will need these later.\n\nNext, create a custom Authorization server:\n\n1. On the `Okta Admin` page, navigate to the Okta API Management at `Security > API`.\n1. Click `Add Authorization Server`, and assign it a name and a description. The `Audience` should match your ArgoCD URL - `https:\/\/argocd.example.com`\n1. Click `Scopes > Add Scope`:\n    1. Add a scope called `groups`. 
Leave the rest of the options as default.\n    ![Groups Scope](..\/..\/assets\/okta-groups-scope.png)\n1. Click `Claims > Add Claim`:\n    1. Add a claim called `groups`.\n    1. Adjust the `Include in token type` to `ID Token`, `Always`.\n    1. Adjust the `Value type` to `Groups`.\n    1. Add a filter that will match the Okta groups you want passed on to ArgoCD; for example `Regex: argocd-.*`.\n    1. Set `Include in` to `groups` (the scope you created above).\n    ![Groups Claim](..\/..\/assets\/okta-groups-claim.png)\n1. Click on `Access Policies` > `Add Policy.` This policy will restrict how this authorization server is used.\n    1. Add a name and description.\n    1. Assign the policy to the client (application integration) you created above. The field should auto-complete as you type.\n    1. Create the policy.\n    ![Auth Policy](..\/..\/assets\/okta-auth-policy.png)\n1. Add a rule to the policy:\n    1. Add a name; `default` is a reasonable name for this rule.\n    1. Fine-tune the settings to suit your organization's security posture. Some ideas:\n        1. uncheck all the grant types except the Authorization Code.\n        1. Adjust the token lifetime to govern how long a session can last.\n        1. Restrict refresh token lifetime, or completely disable it.\n    ![Default rule](..\/..\/assets\/okta-auth-rule.png)\n1. Finally, click `Back to Authorization Servers`, and copy the `Issuer URI`. You will need this later.\n\n### CLI login\n\nIn order to login with the CLI `argocd login https:\/\/argocd.example.com --sso`, Okta requires a separate dedicated App Integration:\n\n1. Create a new `Create App Integration`, and choose `OIDC`, and then `Single-Page Application`.\n1. Update the following:\n    1. `App Integration name` and `Logo` - set these to suit your needs; they'll be displayed in the Okta catalogue.\n    1. `Sign-in redirect URLs`: Add `http:\/\/localhost:8085\/auth\/callback`.\n    1. 
`Sign-out redirect URIs`: Add `http:\/\/localhost:8085`.\n    1. Either assign groups, or choose to skip this step for now.\n    1. Leave the rest of the options as-is, and save the integration.\n    1. Copy the `Client ID` from the newly created app; `cliClientID: <Client ID>` will be used in your `argocd-cm` ConfigMap.\n1. Edit your Authorization Server `Access Policies`:\n    1. Navigate to the Okta API Management at `Security > API`.\n    1. Choose your existing `Authorization Server` that was created previously.\n    1. Click `Access Policies` > `Edit Policy`.\n    1. Assign your newly created `App Integration` by filling in the text box and clicking `Update Policy`.\n    ![Edit Policy](..\/..\/assets\/okta-auth-policy-edit.png)\n\nIf you haven't yet created Okta groups, and assigned them to the application integration, you should do that now:\n\n1. Go to `Directory > Groups`\n1. For each group you wish to add:\n    1. Click `Add Group`, and choose a meaningful name. It should match the regex or pattern you added to your custom `group` claim.\n    1. Click on the group (refresh the page if the new group didn't show up in the list).\n    1. Assign Okta users to the group.\n    1. Click on `Applications` and assign the OIDC application integration you created to this group.\n    1. Repeat as needed.\n\nFinally, configure ArgoCD itself. 
Edit the `argocd-cm` configmap:\n\n<!-- markdownlint-disable MD046 -->\n```yaml\nurl: https:\/\/argocd.example.com\noidc.config: |\n  name: Okta\n  # this is the authorization server URI\n  issuer: https:\/\/example.okta.com\/oauth2\/aus9abcdefgABCDEFGd7\n  clientID: 0oa9abcdefgh123AB5d7\n  cliClientID: gfedcba0987654321GEFDCBA # Optional if using the CLI for SSO\n  clientSecret: ABCDEFG1234567890abcdefg\n  requestedScopes: [\"openid\", \"profile\", \"email\", \"groups\"]\n  requestedIDTokenClaims: {\"groups\": {\"essential\": true}}\n```\n\nYou may want to store the `clientSecret` in a Kubernetes secret; see [how to deal with SSO secrets](.\/index.md\/#sensitive-data-and-sso-client-secrets ) for more details.","site":"argocd"}
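The `caData` field in the Dex connector config above expects the entire certificate, `-----BEGIN CERTIFICATE-----` and `-----END CERTIFICATE-----` lines included, passed through base64 as a single line. A short sketch of preparing and sanity-checking that value (the PEM contents below are a placeholder; in practice `my_cert.pem` is the CA certificate downloaded from Okta):

```shell
# Placeholder PEM so the commands run end to end; in practice this file is
# the CA certificate downloaded from Okta.
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...placeholder...\n-----END CERTIFICATE-----\n' > my_cert.pem

# Produce the single-line base64 value to paste into `caData`.
# -w0 disables line wrapping (GNU coreutils); on macOS use `base64 -i my_cert.pem`.
base64 -w0 my_cert.pem > my_cert.b64

# Sanity check: decoding must reproduce the original PEM, BEGIN/END lines included.
base64 -d my_cert.b64 | diff - my_cert.pem && echo "caData round-trips"
```

Wrapped or partially pasted values are a common cause of connector TLS failures, so the round-trip check is worth the extra line.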
{"questions":"argocd Once installed Argo CD has one built in user that has full access to the system It is recommended to use user only The local users accounts feature serves two main use cases Local users accounts for initial configuration and then switch to local users or configure SSO integration Overview Auth tokens for Argo CD management automation It is possible to configure an API account with limited permissions and generate an authentication token","answers":"# Overview\n\nOnce installed, Argo CD has one built-in `admin` user that has full access to the system. It is recommended to use the `admin` user only\nfor initial configuration and then switch to local users or configure SSO integration.\n\n## Local users\/accounts\n\nThe local users\/accounts feature serves two main use-cases:\n\n* Auth tokens for Argo CD management automation. It is possible to configure an API account with limited permissions and generate an authentication token.\nSuch a token can be used to automatically create applications, projects etc.\n* Additional users for a very small team where use of SSO integration might be considered overkill. The local users don't provide advanced features such as groups,\nlogin history etc. So if you need such features it is strongly recommended to use SSO.\n\n!!! note\n    When you create local users, each of those users will need additional [RBAC rules](..\/rbac.md) set up, otherwise they will fall back to the default policy specified by the `policy.default` field of the `argocd-rbac-cm` ConfigMap.\n\nThe maximum length of a local account's username is 32 characters.\n\n### Create new user\n\nNew users should be defined in the `argocd-cm` ConfigMap:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/name: argocd-cm\n    app.kubernetes.io\/part-of: argocd\ndata:\n  # add an additional local user with apiKey and login capabilities\n  #   apiKey - allows generating API keys\n  #   login - allows logging in via the UI\n  accounts.alice: apiKey, login\n  # disables user. User is enabled by default\n  accounts.alice.enabled: \"false\"\n```\n\nEach user might have two capabilities:\n\n* apiKey - allows generating authentication tokens for API access\n* login - allows logging in via the UI\n\n### Delete user\n\nIn order to delete a user, you must remove the corresponding entry defined in the `argocd-cm` ConfigMap:\n\nExample:\n\n```bash\nkubectl patch -n argocd cm argocd-cm --type='json' -p='[{\"op\": \"remove\", \"path\": \"\/data\/accounts.alice\"}]'\n```\n\nIt is recommended to also remove the password entry in the `argocd-secret` Secret:\n\nExample:\n\n```bash\nkubectl patch -n argocd secrets argocd-secret --type='json' -p='[{\"op\": \"remove\", \"path\": \"\/data\/accounts.alice.password\"}]'\n```\n\n### Disable admin user\n\nAs soon as additional users are created it is recommended to disable the `admin` user:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/name: argocd-cm\n    app.kubernetes.io\/part-of: argocd\ndata:\n  admin.enabled: \"false\"\n```\n\n### Manage users\n\nThe Argo CD CLI provides a set of commands to set user passwords and generate tokens.\n\n* Get the full users list\n```bash\nargocd account list\n```\n\n* Get specific user details\n```bash\nargocd account get --account <username>\n```\n\n* Set user password\n```bash\n# if you are managing users as the admin user, <current-user-password> should be the current admin password.\nargocd account update-password \\\n  --account <name> \\\n  --current-password <current-user-password> \\\n  --new-password <new-user-password>\n```\n\n* Generate auth token\n```bash\n# if the --account flag is omitted, Argo CD generates a token for the current user\nargocd account generate-token --account <username>\n```\n\n### Failed logins rate limiting\n\nArgo CD rejects login attempts after too many failures in order to prevent password brute-forcing.\nThe following environment variables are available to control throttling settings:\n\n* `ARGOCD_SESSION_FAILURE_MAX_FAIL_COUNT`: Maximum number of failed logins before Argo CD starts\nrejecting login attempts. Default: 5.\n\n* `ARGOCD_SESSION_FAILURE_WINDOW_SECONDS`: Number of seconds for the failure window.\nDefault: 300 (5 minutes). If this is set to 0, the failure window is\ndisabled and login attempts get rejected after 10 consecutive login failures,\nregardless of the time frame in which they happened.\n\n* `ARGOCD_SESSION_MAX_CACHE_SIZE`: Maximum number of entries allowed in the\ncache. Default: 1000.\n\n* `ARGOCD_MAX_CONCURRENT_LOGIN_REQUESTS_COUNT`: Limits the maximum number of concurrent login requests.\nIf set to 0, the limit is disabled. Default: 50.\n\n## SSO\n\nThere are two ways that SSO can be configured:\n\n* [Bundled Dex OIDC provider](#dex) - use this option if your current provider does not support OIDC (e.g. SAML,\n  LDAP) or if you wish to leverage any of Dex's connector features (e.g. the ability to map GitHub\n  organizations and teams to OIDC groups claims). 
Dex also supports OIDC directly and can fetch user\n  information from the identity provider when the groups cannot be included in the IDToken.\n\n* [Existing OIDC provider](#existing-oidc-provider) - use this if you already have an OIDC provider which you are using (e.g.\n  [Okta](okta.md), [OneLogin](onelogin.md), [Auth0](auth0.md), [Microsoft](microsoft.md), [Keycloak](keycloak.md),\n  [Google (G Suite)](google.md)), where you manage your users, groups, and memberships.\n\n## Dex\n\nArgo CD embeds and bundles [Dex](https:\/\/github.com\/dexidp\/dex) as part of its installation, for the\npurpose of delegating authentication to an external identity provider. Multiple types of identity\nproviders are supported (OIDC, SAML, LDAP, GitHub, etc...). SSO configuration of Argo CD requires\nediting the `argocd-cm` ConfigMap with\n[Dex connector](https:\/\/dexidp.io\/docs\/connectors\/) settings.\n\nThis document describes how to configure Argo CD SSO using GitHub (OAuth2) as an example, but the\nsteps should be similar for other identity providers.\n\n### 1. Register the application in the identity provider\n\nIn GitHub, register a new application. The callback address should be the `\/api\/dex\/callback`\nendpoint of your Argo CD URL (e.g. `https:\/\/argocd.example.com\/api\/dex\/callback`).\n\n![Register OAuth App](..\/..\/assets\/register-app.png \"Register OAuth App\")\n\nAfter registering the app, you will receive an OAuth2 client ID and secret. These values will be\ninputted into the Argo CD configmap.\n\n![OAuth2 Client Config](..\/..\/assets\/oauth2-config.png \"OAuth2 Client Config\")\n\n### 2. Configure Argo CD for SSO\n\nEdit the argocd-cm configmap:\n\n```bash\nkubectl edit configmap argocd-cm -n argocd\n```\n\n* In the `url` key, input the base URL of Argo CD. 
In this example, it is `https:\/\/argocd.example.com`\n* (Optional): If Argo CD should be accessible via multiple base URLs you may\n  specify any additional base URLs via the `additionalUrls` key.\n* In the `dex.config` key, add the `github` connector to the `connectors` sub field. See Dex's\n  [GitHub connector](https:\/\/github.com\/dexidp\/website\/blob\/main\/content\/docs\/connectors\/github.md)\n  documentation for explanation of the fields. A minimal config should populate the clientID,\n  clientSecret generated in Step 1.\n* You will very likely want to restrict logins to one or more GitHub organization. In the\n  `connectors.config.orgs` list, add one or more GitHub organizations. Any member of the org will\n  then be able to login to Argo CD to perform management tasks.\n\n```yaml\ndata:\n  url: https:\/\/argocd.example.com\n\n  dex.config: |\n    connectors:\n      # GitHub example\n      - type: github\n        id: github\n        name: GitHub\n        config:\n          clientID: aabbccddeeff00112233\n          clientSecret: $dex.github.clientSecret # Alternatively $<some_K8S_secret>:dex.github.clientSecret\n          orgs:\n          - name: your-github-org\n\n      # GitHub enterprise example\n      - type: github\n        id: acme-github\n        name: Acme GitHub\n        config:\n          hostName: github.acme.example.com\n          clientID: abcdefghijklmnopqrst\n          clientSecret: $dex.acme.clientSecret  # Alternatively $<some_K8S_secret>:dex.acme.clientSecret\n          orgs:\n          - name: your-github-org\n```\n\nAfter saving, the changes should take affect automatically.\n\nNOTES:\n\n* There is no need to set `redirectURI` in the `connectors.config` as shown in the dex documentation.\n  Argo CD will automatically use the correct `redirectURI` for any OAuth2 connectors, to match the\n  correct external callback URL (e.g. 
`https:\/\/argocd.example.com\/api\/dex\/callback`)\n* When using a custom secret (e.g., `some_K8S_secret` above,) it *must* have the label `app.kubernetes.io\/part-of: argocd`.\n\n## OIDC Configuration with DEX\n\nDex can be used for OIDC authentication instead of ArgoCD directly. This provides a separate set of\nfeatures such as fetching information from the `UserInfo` endpoint and\n[federated tokens](https:\/\/dexidp.io\/docs\/custom-scopes-claims-clients\/#cross-client-trust-and-authorized-party)\n\n### Configuration:\n* In the `argocd-cm` ConfigMap add the `OIDC` connector to the `connectors` sub field inside `dex.config`.\nSee Dex's [OIDC connect documentation](https:\/\/dexidp.io\/docs\/connectors\/oidc\/) to see what other\nconfiguration options might be useful. We're going to be using a minimal configuration here.\n* The issuer URL should be where Dex talks to the OIDC provider. There would normally be a\n`.well-known\/openid-configuration` under this URL which has information about what the provider supports.\ne.g. https:\/\/accounts.google.com\/.well-known\/openid-configuration\n\n\n```yaml\ndata:\n  url: \"https:\/\/argocd.example.com\"\n  dex.config: |\n    connectors:\n      # OIDC\n      - type: oidc\n        id: oidc\n        name: OIDC\n        config:\n          issuer: https:\/\/example-OIDC-provider.example.com\n          clientID: aaaabbbbccccddddeee\n          clientSecret: $dex.oidc.clientSecret\n```\n\n### Requesting additional ID token claims\n\nBy default Dex only retrieves the profile and email scopes. In order to retrieve more claims you\ncan add them under the `scopes` entry in the Dex configuration. To enable group claims through Dex,\n`insecureEnableGroups` also needs to enabled. 
Group information is currently only refreshed at authentication\ntime; support for refreshing group information more dynamically is tracked in [dexidp\/dex#1065](https:\/\/github.com\/dexidp\/dex\/issues\/1065).\n\n```yaml\ndata:\n  url: \"https:\/\/argocd.example.com\"\n  dex.config: |\n    connectors:\n      # OIDC\n      - type: oidc\n        id: oidc\n        name: OIDC\n        config:\n          issuer: https:\/\/example-OIDC-provider.example.com\n          clientID: aaaabbbbccccddddeee\n          clientSecret: $dex.oidc.clientSecret\n          insecureEnableGroups: true\n          scopes:\n          - profile\n          - email\n          - groups\n```\n\n!!! warning\n    Because group information is only refreshed at authentication time, adding or removing an account from a group will not change a user's membership until they reauthenticate. Depending on your organization's needs, this could be a security risk and could be mitigated by shortening the authentication token's lifetime.\n\n### Retrieving claims that are not in the token\n\nWhen an IdP does not or cannot include certain claims in an IDToken, they can be retrieved separately using\nthe UserInfo endpoint. Dex supports this functionality via the `getUserInfo` setting. 
One of the most\ncommon claims that is not included in the IDToken is the `groups` claim; to retrieve it this way, both `getUserInfo` and `insecureEnableGroups`\nmust be set to true.\n\n```yaml\ndata:\n  url: \"https:\/\/argocd.example.com\"\n  dex.config: |\n    connectors:\n      # OIDC\n      - type: oidc\n        id: oidc\n        name: OIDC\n        config:\n          issuer: https:\/\/example-OIDC-provider.example.com\n          clientID: aaaabbbbccccddddeee\n          clientSecret: $dex.oidc.clientSecret\n          insecureEnableGroups: true\n          scopes:\n          - profile\n          - email\n          - groups\n          getUserInfo: true\n```\n\n## Existing OIDC Provider\n\nTo configure Argo CD to delegate authentication to your existing OIDC provider, add the OAuth2\nconfiguration to the `argocd-cm` ConfigMap under the `oidc.config` key:\n\n```yaml\ndata:\n  url: https:\/\/argocd.example.com\n\n  oidc.config: |\n    name: Okta\n    issuer: https:\/\/dev-123456.oktapreview.com\n    clientID: aaaabbbbccccddddeee\n    clientSecret: $oidc.okta.clientSecret\n\n    # Optional list of allowed aud claims. If omitted or empty, defaults to the clientID value above (and the\n    # cliClientID, if that is also specified). If you specify a list and want the clientID to be allowed, you must\n    # explicitly include it in the list.\n    # Token verification will pass if any of the token's audiences matches any of the audiences in this list.\n    allowedAudiences:\n    - aaaabbbbccccddddeee\n    - qqqqwwwweeeerrrrttt\n\n    # Optional. If false, tokens without an audience will always fail validation. If true, tokens without an audience\n    # will always pass validation.\n    # Defaults to true for Argo CD < 2.6.0. Defaults to false for Argo CD >= 2.6.0.\n    skipAudienceCheckWhenTokenHasNoAudience: true\n\n    # Optional set of OIDC scopes to request. 
If omitted, defaults to: [\"openid\", \"profile\", \"email\", \"groups\"]\n    requestedScopes: [\"openid\", \"profile\", \"email\", \"groups\"]\n\n    # Optional set of OIDC claims to request on the ID token.\n    requestedIDTokenClaims: {\"groups\": {\"essential\": true}}\n\n    # Some OIDC providers require a separate clientID for different callback URLs.\n    # For example, if configuring Argo CD with self-hosted Dex, you will need a separate client ID\n    # for the 'localhost' (CLI) client to Dex. This field is optional. If omitted, the CLI will\n    # use the same clientID as the Argo CD server.\n    cliClientID: vvvvwwwwxxxxyyyyzzzz\n\n    # PKCE authentication flow processes the authorization flow from the browser only (default: false).\n    # It uses the clientID. Make sure the Identity Provider (IdP) is public and doesn't need a clientSecret,\n    # and that the IdP has this redirect URI registered: https:\/\/argocd.example.com\/pkce\/verify\n    enablePKCEAuthentication: true\n```\n\n!!! note\n    The callback address should be the \/auth\/callback endpoint of your Argo CD URL\n    (e.g. https:\/\/argocd.example.com\/auth\/callback).\n\n### Requesting additional ID token claims\n\nNot all OIDC providers support a special `groups` scope. Okta, OneLogin, and Microsoft, for example, do support a special\n`groups` scope and will return group membership with the default `requestedScopes`.\n\nOther OIDC providers might be able to return a claim with group membership if explicitly requested to do so.\nIndividual claims can be requested with `requestedIDTokenClaims`; see\n[OpenID Connect Claims Parameter](https:\/\/connect2id.com\/products\/server\/docs\/guides\/requesting-openid-claims#claims-parameter)\nfor details. 
The Argo CD configuration for claims is as follows:\n\n```yaml\n  oidc.config: |\n    requestedIDTokenClaims:\n      email:\n        essential: true\n      groups:\n        essential: true\n        value: org:myorg\n      acr:\n        essential: true\n        values:\n        - urn:mace:incommon:iap:silver\n        - urn:mace:incommon:iap:bronze\n```\n\nFor a simple case this can be:\n\n```yaml\n  oidc.config: |\n    requestedIDTokenClaims: {\"groups\": {\"essential\": true}}\n```\n\n### Retrieving group claims when not in the token\n\nSome OIDC providers don't return the group information for a user in the ID token, even if explicitly requested using the `requestedIDTokenClaims` setting (Okta, for example). They instead provide the groups on the user info endpoint. With the following config, Argo CD queries the user info endpoint during login for a user's group information:\n\n```yaml\noidc.config: |\n    enableUserInfoGroups: true\n    userInfoPath: \/userinfo\n    userInfoCacheExpiration: \"5m\"\n```\n\n**Note: If you omit the `userInfoCacheExpiration` setting, or if it's greater than the expiration of the ID token, the argocd-server will cache group information as long as the ID token is valid!**\n\n### Configuring a custom logout URL for your OIDC provider\n\nOptionally, if your OIDC provider exposes a logout API and you wish to configure a custom logout URL for the purpose of invalidating\nany active session post logout, you can do so by specifying it as follows:\n\n```yaml\n  oidc.config: |\n    name: example-OIDC-provider\n    issuer: https:\/\/example-OIDC-provider.example.com\n    clientID: xxxxxxxxx\n    clientSecret: xxxxxxxxx\n    requestedScopes: [\"openid\", \"profile\", \"email\", \"groups\"]\n    requestedIDTokenClaims: {\"groups\": {\"essential\": true}}\n    logoutURL: https:\/\/example-OIDC-provider.example.com\/logout?id_token_hint=\n```\nBy default, this would take the user to their OIDC provider's login page after logout. 
If you also wish to redirect the user back to Argo CD after logout, you can specify the logout URL as follows:\n\n```yaml\n...\n    logoutURL: https:\/\/example-OIDC-provider.example.com\/logout?id_token_hint=&post_logout_redirect_uri=\n```\n\nYou are not required to specify a `logoutRedirectURL`, as this is automatically generated by Argo CD as your base Argo CD URL plus the root path.\n\n!!! note\n    The post logout redirect URI may need to be whitelisted in your OIDC provider's client settings for Argo CD.\n\n### Configuring a custom root CA certificate for communicating with the OIDC provider\n\nIf your OIDC provider is set up with a certificate which is not signed by one of the well known certificate authorities,\nyou can provide a custom certificate which will be used in verifying the OIDC provider's TLS certificate when\ncommunicating with it.\nAdd a `rootCA` to your `oidc.config` which contains the PEM-encoded root certificate:\n\n```yaml\n  oidc.config: |\n    ...\n    rootCA: |\n      -----BEGIN CERTIFICATE-----\n      ... encoded certificate data here ...\n      -----END CERTIFICATE-----\n```\n\n\n## SSO Further Reading\n\n### Sensitive Data and SSO Client Secrets\n\n`argocd-secret` can be used to store sensitive data which can be referenced by Argo CD. Values starting with `$` in configmaps are interpreted as follows:\n\n- If the value has the form `$<secret>:a.key.in.k8s.secret`, look for a k8s secret with the name `<secret>` (minus the `$`) and read the value of the given key.\n- Otherwise, look for a key in the k8s secret named `argocd-secret`. 
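The lookup convention above can be sketched as a small helper. This is an illustrative sketch only, not Argo CD's actual implementation; `resolve_secret_ref` and the in-memory `secrets` dict are hypothetical stand-ins for the Kubernetes Secret lookup:

```python
def resolve_secret_ref(value: str, secrets: dict) -> str:
    """Resolve a config value per the `$`-prefix convention described above.

    `secrets` maps a Secret name to its decoded data (key -> plaintext).
    Hypothetical sketch; Argo CD reads real Kubernetes Secrets instead.
    """
    if not value.startswith("$"):
        return value  # plain literal value, no secret lookup
    ref = value[1:]
    if ":" in ref:
        # Form $<secret>:<a.key.in.k8s.secret> -> look in the named secret
        secret_name, key = ref.split(":", 1)
    else:
        # Otherwise the whole reference is a key in argocd-secret
        secret_name, key = "argocd-secret", ref
    return secrets[secret_name][key]

secrets = {
    "argocd-secret": {"oidc.auth0.clientSecret": "hello-world"},
    "another-secret": {"oidc.auth0.clientSecret": "s3cret"},
}
print(resolve_secret_ref("$oidc.auth0.clientSecret", secrets))                 # hello-world
print(resolve_secret_ref("$another-secret:oidc.auth0.clientSecret", secrets))  # s3cret
```

Note that a reference such as `$oidc.auth0.clientSecret` contains dots but no colon, so it falls through to `argocd-secret`; only the colon selects another Secret.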
\n\n#### Example\n\nSSO `clientSecret` can thus be stored as a Kubernetes secret with the following manifests.\n\n`argocd-secret`:\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: argocd-secret\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/name: argocd-secret\n    app.kubernetes.io\/part-of: argocd\ntype: Opaque\ndata:\n  ...\n  # The secret value must be base64 encoded **once**.\n  # This value corresponds to: `printf \"hello-world\" | base64`\n  oidc.auth0.clientSecret: \"aGVsbG8td29ybGQ=\"\n  ...\n```\n\n`argocd-cm`:\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/name: argocd-cm\n    app.kubernetes.io\/part-of: argocd\ndata:\n  ...\n  oidc.config: |\n    name: Auth0\n    clientID: aabbccddeeff00112233\n\n    # Reference key in argocd-secret\n    clientSecret: $oidc.auth0.clientSecret\n  ...\n```\n\n#### Alternative\n\nIf you want to store sensitive data in **another** Kubernetes `Secret` instead of `argocd-secret`, you can. Whenever a value in a configmap or secret starts with `$` followed by a Kubernetes `Secret` name and a `:` (colon), Argo CD reads the corresponding key from that `Secret`'s `data`.\n\nSyntax: `$<k8s_secret_name>:<a_key_in_that_k8s_secret>`\n\n> NOTE: The secret must have the label `app.kubernetes.io\/part-of: argocd`.\n\n##### Example\n\n`another-secret`:\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: another-secret\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/part-of: argocd\ntype: Opaque\ndata:\n  ...\n  # Store the client secret like below.\n  # Ensure the secret is base64 encoded\n  oidc.auth0.clientSecret: <client-secret-base64-encoded>\n  ...\n```\n\n`argocd-cm`:\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/name: argocd-cm\n    app.kubernetes.io\/part-of: argocd\ndata:\n  ...\n  oidc.config: |\n    name: Auth0\n    clientID: aabbccddeeff00112233\n    # Reference key in another-secret (and not argocd-secret)\n    clientSecret: $another-secret:oidc.auth0.clientSecret  # Mind the ':'\n  ...\n```\n\n### Skipping certificate verification on OIDC provider connections\n\nBy default, all connections made by the API server to OIDC providers (either external providers or the bundled Dex\ninstance) must pass certificate validation. 
These connections occur when getting the OIDC provider's well-known\nconfiguration, when getting the OIDC provider's keys, and  when exchanging an authorization code or verifying an ID \ntoken as part of an OIDC login flow.\n\nDisabling certificate verification might make sense if:\n* You are using the bundled Dex instance **and** your Argo CD instance has TLS configured with a self-signed certificate\n  **and** you understand and accept the risks of skipping OIDC provider cert verification.\n* You are using an external OIDC provider **and** that provider uses an invalid certificate **and** you cannot solve\n  the problem by setting `oidcConfig.rootCA` **and** you understand and accept the risks of skipping OIDC provider cert \n  verification.\n\nIf either of those two applies, then you can disable OIDC provider certificate verification by setting\n`oidc.tls.insecure.skip.verify` to `\"true\"` in the `argocd-cm` ConfigMap.","site":"argocd","answers_cleaned":"  Overview  Once installed Argo CD has one built in  admin  user that has full access to the system  It is recommended to use  admin  user only for initial configuration and then switch to local users or configure SSO integration      Local users accounts  The local users accounts feature serves two main use cases     Auth tokens for Argo CD management automation  It is possible to configure an API account with limited permissions and generate an authentication token  Such token can be used to automatically create applications  projects etc    Additional users for a very small team where use of SSO integration might be considered an overkill  The local users don t provide advanced features such as groups  login history etc  So if you need such features it is strongly recommended to use SSO       note     When you create local users  each of those users will need additional  RBAC rules     rbac md  set up  otherwise they will fall back to the default policy specified by  policy default  field of the  argocd rbac 
cm  ConfigMap   The maximum length of a local account s username is 32       Create new user  New users should be defined in  argocd cm  ConfigMap      yaml apiVersion  v1 kind  ConfigMap metadata    name  argocd cm   namespace  argocd   labels      app kubernetes io name  argocd cm     app kubernetes io part of  argocd data      add an additional local user with apiKey and login capabilities       apiKey   allows generating API keys       login   allows to login using UI   accounts alice  apiKey  login     disables user  User is enabled by default   accounts alice enabled   false       Each user might have two capabilities     apiKey   allows generating authentication tokens for API access   login   allows to login using UI      Delete user  In order to delete a user  you must remove the corresponding entry defined in the  argocd cm  ConfigMap   Example      bash kubectl patch  n argocd cm argocd cm   type  json   p     op    remove    path     data accounts alice          It is recommended to also remove the password entry in the  argocd secret  Secret   Example      bash kubectl patch  n argocd secrets argocd secret   type  json   p     op    remove    path     data accounts alice password              Disable admin user  As soon as additional users are created it is recommended to disable  admin  user      yaml apiVersion  v1 kind  ConfigMap metadata    name  argocd cm   namespace  argocd   labels      app kubernetes io name  argocd cm     app kubernetes io part of  argocd data    admin enabled   false           Manage users  The Argo CD CLI provides set of commands to set user password and generate tokens     Get full users list    bash argocd account list        Get specific user details    bash argocd account get   account  username         Set user password    bash   if you are managing users as the admin user   current user password  should be the current admin password  argocd account update password       account  name        current password  current 
user password        new password  new user password         Generate auth token    bash   if flag   account is omitted then Argo CD generates token for current user argocd account generate token   account  username           Failed logins rate limiting  Argo CD rejects login attempts after too many failed in order to prevent password brute forcing  The following environments variables are available to control throttling settings      ARGOCD SESSION FAILURE MAX FAIL COUNT   Maximum number of failed logins before Argo CD starts rejecting login attempts  Default  5      ARGOCD SESSION FAILURE WINDOW SECONDS   Number of seconds for the failure window  Default  300  5 minutes   If this is set to 0  the failure window is disabled and the login attempts gets rejected after 10 consecutive logon failures  regardless of the time frame they happened      ARGOCD SESSION MAX CACHE SIZE   Maximum number of entries allowed in the cache  Default  1000     ARGOCD MAX CONCURRENT LOGIN REQUESTS COUNT   Limits max number of concurrent login requests  If set to 0 then limit is disabled  Default  50      SSO  There are two ways that SSO can be configured      Bundled Dex OIDC provider   dex    use this option if your current provider does not support OIDC  e g  SAML    LDAP  or if you wish to leverage any of Dex s connector features  e g  the ability to map GitHub   organizations and teams to OIDC groups claims   Dex also supports OIDC directly and can fetch user   information from the identity provider when the groups cannot be included in the IDToken      Existing OIDC provider   existing oidc provider    use this if you already have an OIDC provider which you are using  e g     Okta  okta md    OneLogin  onelogin md    Auth0  auth0 md    Microsoft  microsoft md    Keycloak  keycloak md      Google  G Suite   google md    where you manage your users  groups  and memberships      Dex  Argo CD embeds and bundles  Dex  https   github com dexidp dex  as part of its installation  for the 
purpose of delegating authentication to an external identity provider  Multiple types of identity providers are supported  OIDC  SAML  LDAP  GitHub  etc      SSO configuration of Argo CD requires editing the  argocd cm  ConfigMap with  Dex connector  https   dexidp io docs connectors   settings   This document describes how to configure Argo CD SSO using GitHub  OAuth2  as an example  but the steps should be similar for other identity providers       1  Register the application in the identity provider  In GitHub  register a new application  The callback address should be the   api dex callback  endpoint of your Argo CD URL  e g   https   argocd example com api dex callback       Register OAuth App        assets register app png  Register OAuth App    After registering the app  you will receive an OAuth2 client ID and secret  These values will be inputted into the Argo CD configmap     OAuth2 Client Config        assets oauth2 config png  OAuth2 Client Config        2  Configure Argo CD for SSO  Edit the argocd cm configmap      bash kubectl edit configmap argocd cm  n argocd        In the  url  key  input the base URL of Argo CD  In this example  it is  https   argocd example com     Optional   If Argo CD should be accessible via multiple base URLs you may   specify any additional base URLs via the  additionalUrls  key    In the  dex config  key  add the  github  connector to the  connectors  sub field  See Dex s    GitHub connector  https   github com dexidp website blob main content docs connectors github md    documentation for explanation of the fields  A minimal config should populate the clientID    clientSecret generated in Step 1    You will very likely want to restrict logins to one or more GitHub organization  In the    connectors config orgs  list  add one or more GitHub organizations  Any member of the org will   then be able to login to Argo CD to perform management tasks      yaml data    url  https   argocd example com    dex config        
connectors          GitHub example         type  github         id  github         name  GitHub         config            clientID  aabbccddeeff00112233           clientSecret   dex github clientSecret   Alternatively   some K8S secret  dex github clientSecret           orgs              name  your github org          GitHub enterprise example         type  github         id  acme github         name  Acme GitHub         config            hostName  github acme example com           clientID  abcdefghijklmnopqrst           clientSecret   dex acme clientSecret    Alternatively   some K8S secret  dex acme clientSecret           orgs              name  your github org      After saving  the changes should take affect automatically   NOTES     There is no need to set  redirectURI  in the  connectors config  as shown in the dex documentation    Argo CD will automatically use the correct  redirectURI  for any OAuth2 connectors  to match the   correct external callback URL  e g   https   argocd example com api dex callback     When using a custom secret  e g    some K8S secret  above   it  must  have the label  app kubernetes io part of  argocd       OIDC Configuration with DEX  Dex can be used for OIDC authentication instead of ArgoCD directly  This provides a separate set of features such as fetching information from the  UserInfo  endpoint and  federated tokens  https   dexidp io docs custom scopes claims clients  cross client trust and authorized party       Configuration    In the  argocd cm  ConfigMap add the  OIDC  connector to the  connectors  sub field inside  dex config   See Dex s  OIDC connect documentation  https   dexidp io docs connectors oidc   to see what other configuration options might be useful  We re going to be using a minimal configuration here    The issuer URL should be where Dex talks to the OIDC provider  There would normally be a   well known openid configuration  under this URL which has information about what the provider supports  e g  https 
  accounts google com  well known openid configuration      yaml data    url   https   argocd example com    dex config        connectors          OIDC         type  oidc         id  oidc         name  OIDC         config            issuer  https   example OIDC provider example com           clientID  aaaabbbbccccddddeee           clientSecret   dex oidc clientSecret          Requesting additional ID token claims  By default Dex only retrieves the profile and email scopes  In order to retrieve more claims you can add them under the  scopes  entry in the Dex configuration  To enable group claims through Dex   insecureEnableGroups  also needs to enabled  Group information is currently only refreshed at authentication time and support to refresh group information more dynamically can be tracked here   dexidp dex 1065  https   github com dexidp dex issues 1065       yaml data    url   https   argocd example com    dex config        connectors          OIDC         type  oidc         id  oidc         name  OIDC         config            issuer  https   example OIDC provider example com           clientID  aaaabbbbccccddddeee           clientSecret   dex oidc clientSecret           insecureEnableGroups  true           scopes              profile             email             groups          warning     Because group information is only refreshed at authentication time just adding or removing an account from a group will not change a user s membership until they reauthenticate  Depending on your organization s needs this could be a security risk and could be mitigated by changing the authentication token s lifetime       Retrieving claims that are not in the token  When an Idp does not or cannot support certain claims in an IDToken they can be retrieved separately using the UserInfo endpoint  Dex supports this functionality using the  getUserInfo  endpoint  One of the most common claims that is not supported in the IDToken is the  groups  claim and both  getUserInfo  and  
insecureEnableGroups  must be set to true      yaml data    url   https   argocd example com    dex config        connectors          OIDC         type  oidc         id  oidc         name  OIDC         config            issuer  https   example OIDC provider example com           clientID  aaaabbbbccccddddeee           clientSecret   dex oidc clientSecret           insecureEnableGroups  true           scopes              profile             email             groups           getUserInfo  true         Existing OIDC Provider  To configure Argo CD to delegate authentication to your existing OIDC provider  add the OAuth2 configuration to the  argocd cm  ConfigMap under the  oidc config  key      yaml data    url  https   argocd example com    oidc config        name  Okta     issuer  https   dev 123456 oktapreview com     clientID  aaaabbbbccccddddeee     clientSecret   oidc okta clientSecret            Optional list of allowed aud claims  If omitted or empty  defaults to the clientID value above  and the        cliClientID  if that is also specified   If you specify a list and want the clientID to be allowed  you must        explicitly include it in the list        Token verification will pass if any of the token s audiences matches any of the audiences in this list      allowedAudiences        aaaabbbbccccddddeee       qqqqwwwweeeerrrrttt        Optional  If false  tokens without an audience will always fail validation  If true  tokens without an audience        will always pass validation        Defaults to true for Argo CD   2 6 0  Defaults to false for Argo CD    2 6 0      skipAudienceCheckWhenTokenHasNoAudience  true        Optional set of OIDC scopes to request  If omitted  defaults to    openid    profile    email    groups       requestedScopes    openid    profile    email    groups          Optional set of OIDC claims to request on the ID token      requestedIDTokenClaims    groups     essential   true          Some OIDC providers require a separate clientID 
for different callback URLs        For example  if configuring Argo CD with self hosted Dex  you will need a separate client ID       for the  localhost   CLI  client to Dex  This field is optional  If omitted  the CLI will       use the same clientID as the Argo CD server     cliClientID  vvvvwwwwxxxxyyyyzzzz        PKCE authentication flow processes authorization flow from browser only   default false       uses the clientID       make sure the Identity Provider  IdP  is public and doesn t need clientSecret       make sure the Identity Provider  IdP  has this redirect URI registered  https   argocd example com pkce verify     enablePKCEAuthentication  true          note     The callback address should be the  auth callback endpoint of your Argo CD URL      e g  https   argocd example com auth callback        Requesting additional ID token claims  Not all OIDC providers support a special  groups  scope  E g  Okta  OneLogin and Microsoft do support a special  groups  scope and will return group membership with the default  requestedScopes    Other OIDC providers might be able to return a claim with group membership if explicitly requested to do so  Individual claims can be requested with  requestedIDTokenClaims   see  OpenID Connect Claims Parameter  https   connect2id com products server docs guides requesting openid claims claims parameter  for details  The Argo CD configuration for claims is as follows      yaml   oidc config        requestedIDTokenClaims        email          essential  true       groups          essential  true         value  org myorg       acr          essential  true         values            urn mace incommon iap silver           urn mace incommon iap bronze      For a simple case this can be      yaml   oidc config        requestedIDTokenClaims    groups     essential   true            Retrieving group claims when not in the token  Some OIDC providers don t return the group information for a user in the ID token  even if explicitly 
requested using the  requestedIDTokenClaims  setting  Okta for example   They instead provide the groups on the user info endpoint  With the following config  Argo CD queries the user info endpoint during login for groups information of a user      yaml oidc config        enableUserInfoGroups  true     userInfoPath   userinfo     userInfoCacheExpiration   5m         Note  If you omit the  userInfoCacheExpiration  setting or if it s greater than the expiration of the ID token  the argocd server will cache group information as long as the ID token is valid         Configuring a custom logout URL for your OIDC provider  Optionally  if your OIDC provider exposes a logout API and you wish to configure a custom logout URL for the purposes of invalidating  any active session post logout  you can do so by specifying it as follows      yaml   oidc config        name  example OIDC provider     issuer  https   example OIDC provider example com     clientID  xxxxxxxxx     clientSecret  xxxxxxxxx     requestedScopes    openid    profile    email    groups       requestedIDTokenClaims    groups     essential   true       logoutURL  https   example OIDC provider example com logout id token hint      By default  this would take the user to their OIDC provider s login page after logout  If you also wish to redirect the user back to Argo CD after logout  you can specify the logout URL as follows      yaml         logoutURL  https   example OIDC provider example com logout id token hint  post logout redirect uri       You are not required to specify a logoutRedirectURL as this is automatically generated by ArgoCD as your base ArgoCD url   Rootpath      note    The post logout redirect URI may need to be whitelisted against your OIDC provider s client settings for ArgoCD       Configuring a custom root CA certificate for communicating with the OIDC provider  If your OIDC provider is setup with a certificate which is not signed by one of the well known certificate authorities you can 
provide a custom certificate which will be used in verifying the OIDC provider's TLS certificate when communicating with it. Add a `rootCA` to your `oidc.config` which contains the PEM encoded root certificate:

```yaml
oidc.config: |
  ...
  rootCA: |
    -----BEGIN CERTIFICATE-----
    ... encoded certificate data here ...
    -----END CERTIFICATE-----
```

## SSO Further Reading

### Sensitive Data and SSO Client Secrets

`argocd-secret` can be used to store sensitive data which can be referenced by ArgoCD. Values starting with `$` in configmaps are interpreted as follows:

- If the value has the form `$<secret>:a.key.in.k8s.secret`, look for a k8s secret with the name `<secret>` (minus the `$`) and read its value.
- Otherwise, look for a key in the k8s secret named `argocd-secret`.

#### Example

An SSO `clientSecret` can thus be stored as a Kubernetes secret with the following manifests.

`argocd-secret`:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-secret
    app.kubernetes.io/part-of: argocd
type: Opaque
data:
  ...
  # The secret value must be base64 encoded **once**.
  # This value corresponds to: `printf "hello-world" | base64`
  oidc.auth0.clientSecret: aGVsbG8td29ybGQ=
  ...
```

`argocd-cm`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  ...
  oidc.config: |
    name: Auth0
    clientID: aabbccddeeff00112233
    # Reference key in argocd-secret
    clientSecret: $oidc.auth0.clientSecret
  ...
```

#### Alternative

If you want to store sensitive data in **another** Kubernetes `Secret` instead of `argocd-secret`, ArgoCD knows to check the keys under `data` in your Kubernetes `Secret` for a corresponding key whenever a value in a configmap or secret starts with `$`, followed by your Kubernetes `Secret` name and `:` (colon).

Syntax: `$<k8s_secret_name>:<a_key_in_that_k8s_secret>`

> NOTE: The secret must have the label `app.kubernetes.io/part-of: argocd`.

##### Example

`another-secret`:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: another-secret
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
type: Opaque
data:
  ...
  # Store the client secret like below.
  # Ensure the secret is base64 encoded.
  oidc.auth0.clientSecret: <client-secret-base64-encoded>
  ...
```

`argocd-cm`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  ...
  oidc.config: |
    name: Auth0
    clientID: aabbccddeeff00112233
    # Reference key in another-secret (and not argocd-secret). Mind the ':'!
    clientSecret: $another-secret:oidc.auth0.clientSecret
  ...
```

### Skipping certificate verification on OIDC provider connections

By default, all connections made by the API server to OIDC providers (either external providers or the bundled Dex instance) must pass certificate validation. These connections occur when getting the OIDC provider's well-known configuration, when getting the OIDC provider's keys, and when exchanging an authorization code or verifying an ID token as part of an OIDC login flow.

Disabling certificate verification might make sense if:

* You are using the bundled Dex instance **and** your Argo CD instance has TLS configured with a self-signed certificate **and** you understand and accept the risks of skipping OIDC provider cert verification.
* You are using an external OIDC provider **and** that provider uses an invalid certificate **and** you cannot solve the problem by setting `oidcConfig.rootCA` **and** you understand and accept the risks of skipping OIDC provider cert verification.

If either of those two applies, then you can disable OIDC provider certificate verification by setting `oidc.tls.insecure.skip.verify` to `"true"` in the `argocd-cm` ConfigMap.
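The encode-once rule for `argocd-secret` values above is easy to get wrong, either by double-encoding or by letting `echo` append a trailing newline. A quick sanity check in the shell, using the `hello-world` example value from the manifest above (assumes GNU coreutils `base64`):

```shell
# Encode the raw client secret exactly once; printf (unlike echo) adds no
# trailing newline to the value being encoded.
printf 'hello-world' | base64
# -> aGVsbG8td29ybGQ=

# Round-trip check: decoding must return exactly the raw secret.
printf 'hello-world' | base64 | base64 -d
# -> hello-world
```

Values under `data:` in a Secret manifest are applied verbatim, so a double-encoded or newline-tainted string only fails later at login time; the round trip above catches it early.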
# Microsoft

!!! note ""
    Entra ID was formerly known as Azure AD.

* [Entra ID SAML Enterprise App Auth using Dex](#entra-id-saml-enterprise-app-auth-using-dex)
* [Entra ID App Registration Auth using OIDC](#entra-id-app-registration-auth-using-oidc)
* [Entra ID App Registration Auth using Dex](#entra-id-app-registration-auth-using-dex)

## Entra ID SAML Enterprise App Auth using Dex
### Configure a new Entra ID Enterprise App

1. From the `Microsoft Entra ID` > `Enterprise applications` menu, choose `+ New application`
2. Select `Non-gallery application`
3. Enter a `Name` for the application (e.g. `Argo CD`), then choose `Add`
4. Once the application is created, open it from the `Enterprise applications` menu.
5. From the `Users and groups` menu of the app, add any users or groups requiring access to the service.
   ![Azure Enterprise SAML Users](../../assets/azure-enterprise-users.png "Azure Enterprise SAML Users")
6. From the `Single sign-on` menu, edit the `Basic SAML Configuration` section as follows (replacing `my-argo-cd-url` with your Argo URL):
      - **Identifier (Entity ID):** https://`<my-argo-cd-url>`/api/dex/callback
      - **Reply URL (Assertion Consumer Service URL):** https://`<my-argo-cd-url>`/api/dex/callback
      - **Sign on URL:** https://`<my-argo-cd-url>`/auth/login
      - **Relay State:** `<empty>`
      - **Logout Url:** `<empty>`
      ![Azure Enterprise SAML URLs](../../assets/azure-enterprise-saml-urls.png "Azure Enterprise SAML URLs")
7. From the `Single sign-on` menu, edit the `User Attributes & Claims` section to create the following claims:
      - `+ Add new claim` | **Name:** email | **Source:** Attribute | **Source attribute:** user.mail
      - `+ Add group claim` | **Which groups:** All groups | **Source attribute:** Group ID | **Customize:** True | **Name:** Group | **Namespace:** `<empty>` | **Emit groups as role claims:** False
      - *Note: The `Unique User Identifier` required claim can be left as the default `user.userprincipalname`*
      ![Azure Enterprise SAML Claims](../../assets/azure-enterprise-claims.png "Azure Enterprise SAML Claims")
8. From the `Single sign-on` menu, download the SAML Signing Certificate (Base64)
      - Base64 encode the contents of the downloaded certificate file, for example:
      - `$ cat ArgoCD.cer | base64`
      - *Keep a copy of the encoded output to be used in the next section.*
9. From the `Single sign-on` menu, copy the `Login URL` parameter, to be used in the next section.

### Configure Argo to use the new Entra ID Enterprise App

1. Edit `argocd-cm` and add the following `dex.config` to the data section, replacing `caData`, `my-argo-cd-url` and `my-login-url` with your values from the Entra ID App:

            data:
              url: https://my-argo-cd-url
              dex.config: |
                logger:
                  level: debug
                  format: json
                connectors:
                - type: saml
                  id: saml
                  name: saml
                  config:
                    entityIssuer: https://my-argo-cd-url/api/dex/callback
                    ssoURL: https://my-login-url (e.g. https://login.microsoftonline.com/xxxxx/a/saml2)
                    caData: |
                       MY-BASE64-ENCODED-CERTIFICATE-DATA
                    redirectURI: https://my-argo-cd-url/api/dex/callback
                    usernameAttr: email
                    emailAttr: email
                    groupsAttr: Group

2. Edit `argocd-rbac-cm` to configure permissions, similar to the example below.
      - Use Entra ID `Group IDs` for assigning roles.
      - See [RBAC Configurations](../rbac.md) for more detailed scenarios.

            # example policy
            policy.default: role:readonly
            policy.csv: |
               p, role:org-admin, applications, *, */*, allow
               p, role:org-admin, clusters, get, *, allow
               p, role:org-admin, repositories, get, *, allow
               p, role:org-admin, repositories, create, *, allow
               p, role:org-admin, repositories, update, *, allow
               p, role:org-admin, repositories, delete, *, allow
               g, "84ce98d1-e359-4f3b-85af-985b458de3c6", role:org-admin # (azure group assigned to role)

## Entra ID App Registration Auth using OIDC
### Configure a new Entra ID App registration
#### Add a new Entra ID App registration

1. From the `Microsoft Entra ID` > `App registrations` menu, choose `+ New registration`
2. Enter a `Name` for the application (e.g. `Argo CD`).
3. Specify who can use the application (e.g. `Accounts in this organizational directory only`).
4. Enter the Redirect URI (optional) as follows (replacing `my-argo-cd-url` with your Argo URL), then choose `Add`.
      - **Platform:** `Web`
      - **Redirect URI:** https://`<my-argo-cd-url>`/auth/callback
5. When registration finishes, the Azure portal displays the app registration's Overview pane, where you can see the Application (client) ID.
      ![Azure App registration's Overview](../../assets/azure-app-registration-overview.png "Azure App registration's Overview")

#### Configure additional platform settings for the ArgoCD CLI

1. In the Azure portal, in App registrations, select your application.
2. Under Manage, select Authentication.
3. Under Platform configurations, select Add a platform.
4. Under Configure platforms, select the "Mobile and desktop applications" tile. Use the value below; you shouldn't change it.
      - **Redirect URI:** `http://localhost:8085/auth/callback`
      ![Azure App registration's Authentication](../../assets/azure-app-registration-authentication.png "Azure App registration's Authentication")

#### Add credentials to the new Entra ID App registration

1. From the `Certificates & secrets` menu, choose `+ New client secret`
2. Enter a `Name` for the secret (e.g. `ArgoCD-SSO`).
      - Make sure to copy and save the generated value. This is the value for the `client_secret`.
      ![Azure App registration's Secret](../../assets/azure-app-registration-secret.png "Azure App registration's Secret")

#### Setup permissions for Entra ID Application

1. From the `API permissions` menu, choose `+ Add a permission`
2. Find the `User.Read` permission (under `Microsoft Graph`) and grant it to the created application:
   ![Entra ID API permissions](../../assets/azure-api-permissions.png "Entra ID API permissions")
3. From the `Token configuration` menu, choose `+ Add groups claim`
   ![Entra ID token configuration](../../assets/azure-token-configuration.png "Entra ID token configuration")

### Associate an Entra ID group to your Entra ID App registration

1. From the `Microsoft Entra ID` > `Enterprise applications` menu, search for the App that you created (e.g. `Argo CD`).
      - An Enterprise application with the same name as the Entra ID App registration is created when you add a new Entra ID App registration.
2. From the `Users and groups` menu of the app, add any users or groups requiring access to the service.
   ![Azure Enterprise SAML Users](../../assets/azure-enterprise-users.png "Azure Enterprise SAML Users")

### Configure Argo to use the new Entra ID App registration

1. Edit `argocd-cm` and configure the `data.oidc.config` and `data.url` sections:

            ConfigMap -> argocd-cm

            data:
               url: https://argocd.example.com/ # Replace with the external base URL of your Argo CD
               oidc.config: |
                     name: Azure
                     issuer: https://login.microsoftonline.com/{directory_tenant_id}/v2.0
                     clientID: {azure_ad_application_client_id}
                     clientSecret: $oidc.azure.clientSecret
                     requestedIDTokenClaims:
                        groups:
                           essential: true
                           value: "SecurityGroup"
                     requestedScopes:
                        - openid
                        - profile
                        - email

2. Edit `argocd-secret` and configure the `data.oidc.azure.clientSecret` section:

            Secret -> argocd-secret

            data:
               oidc.azure.clientSecret: {client_secret | base64_encoded}

3. Edit `argocd-rbac-cm` to configure permissions, using group IDs from Azure for assigning roles.
      See [RBAC Configurations](../rbac.md).

            ConfigMap -> argocd-rbac-cm

            policy.default: role:readonly
            policy.csv: |
               p, role:org-admin, applications, *, */*, allow
               p, role:org-admin, clusters, get, *, allow
               p, role:org-admin, repositories, get, *, allow
               p, role:org-admin, repositories, create, *, allow
               p, role:org-admin, repositories, update, *, allow
               p, role:org-admin, repositories, delete, *, allow
               g, "84ce98d1-e359-4f3b-85af-985b458de3c6", role:org-admin

4. Mapping roles from the JWT token to Argo.
   If you want to map the roles from the JWT token to match the default roles (readonly and admin), then you must change the `scopes` variable in the RBAC configmap.

            policy.default: role:readonly
            policy.csv: |
               p, role:org-admin, applications, *, */*, allow
               p, role:org-admin, clusters, get, *, allow
               p, role:org-admin, repositories, get, *, allow
               p, role:org-admin, repositories, create, *, allow
               p, role:org-admin, repositories, update, *, allow
               p, role:org-admin, repositories, delete, *, allow
               g, "84ce98d1-e359-4f3b-85af-985b458de3c6", role:org-admin
            scopes: '[groups, email]'

   Refer to [operator-manual/argocd-rbac-cm.yaml](https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/argocd-rbac-cm.yaml) for all of the available variables.

## Entra ID App Registration Auth using Dex

Configure a new Entra ID App Registration, as above.
Then, add the `dex.config` to `argocd-cm`:
```yaml
ConfigMap -> argocd-cm

data:
    dex.config: |
      connectors:
      - type: microsoft
        id: microsoft
        name: Your Company GmbH
        config:
          clientID: $MICROSOFT_APPLICATION_ID
          clientSecret: $MICROSOFT_CLIENT_SECRET
          redirectURI: http://localhost:8080/api/dex/callback
          tenant: ffffffff-ffff-ffff-ffff-ffffffffffff
          groups:
            - DevOps
```

## Validation
### Log in to ArgoCD UI using SSO

1. Open a new browser tab and enter your ArgoCD URI: https://`<my-argo-cd-url>`
   ![Azure SSO Web Log In](../../assets/azure-sso-web-log-in-via-azure.png "Azure SSO Web Log In")
2. Click the `LOGIN VIA AZURE` button to log in with your Microsoft Entra ID account. You'll see the ArgoCD applications screen.
   ![Azure SSO Web Application](../../assets/azure-sso-web-application.png "Azure SSO Web Application")
3. Navigate to User Info and verify the Group ID. Groups will have your group's Object ID that you added in the `Setup permissions for Entra ID Application` step.
   ![Azure SSO Web User Info](../../assets/azure-sso-web-user-info.png "Azure SSO Web User Info")

### Log in to ArgoCD using the CLI

1. Open a terminal and execute the below command.

            argocd login <my-argo-cd-url> --grpc-web-root-path / --sso

2. You will see the below message after entering your credentials from the browser.
   ![Azure SSO CLI Log In](../../assets/azure-sso-cli-log-in-success.png "Azure SSO CLI Log In")
3. Your terminal output will be similar to the below.

            WARNING: server certificate had error: x509: certificate is valid for ingress.local, not my-argo-cd-url. Proceed insecurely (y/n)? y
            Opening browser for authentication
            INFO[0003] RequestedClaims: map[groups:essential:true ]
            Performing authorization_code flow login: https://login.microsoftonline.com/XXXXXXXXXXXXX/oauth2/v2.0/authorize?access_type=offline&claims=%7B%22id_token%22%3A%7B%22groups%22%3A%7B%22essential%22%3Atrue%7D%7D%7D&client_id=XXXXXXXXXXXXX&code_challenge=XXXXXXXXXXXXX&code_challenge_method=S256&redirect_uri=http%3A%2F%2Flocalhost%3A8085%2Fauth%2Fcallback&response_type=code&scope=openid+profile+email+offline_access&state=XXXXXXXX
            Authentication successful
            'yourid@example.com' logged in successfully
            Context 'my-argo-cd-url' updated

   You may get a warning if you are not using correctly signed certs. Refer to [Why Am I Getting x509: certificate signed by unknown authority When Using The CLI?](https://argo-cd.readthedocs.io/en/stable/faq/#why-am-i-getting-x509-certificate-signed-by-unknown-authority-when-using-the-cli).
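When an expected group is missing from the User Info page, it helps to inspect the raw `groups` claim in the ID token that Argo CD received. A minimal shell sketch of decoding a JWT payload without verifying it; the token here is a fabricated, unsigned example built in-place (substitute your real ID token for `token`, and never paste production tokens into shared terminals):

```shell
# Build a fake unsigned token just for demonstration.
claims='{"groups":["84ce98d1-e359-4f3b-85af-985b458de3c6"],"email":"user@example.com"}'
header=$(printf '{"alg":"none"}' | base64 | tr -d '=\n' | tr '+/' '-_')
payload=$(printf '%s' "$claims" | base64 | tr -d '=\n' | tr '+/' '-_')
token="$header.$payload."

# Extract the 2nd (payload) segment, convert base64url back to base64,
# restore the stripped '=' padding, then decode to JSON.
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="$seg="; done
printf '%s' "$seg" | base64 -d
```

The decoded JSON shows exactly which Object IDs Entra ID emitted, which is what the `g, "<group-id>", role:...` lines in `argocd-rbac-cm` must match.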
{"questions":"argocd Google Dex Also you won t get Google Groups membership information through this method This is the recommended login method if you don t need information about the groups the user s belongs to Google doesn t expose the claim via oidc so you won t be able to use Google Groups membership information for RBAC There are three different ways to integrate Argo CD login with your Google Workspace users Generally the OpenID Connect oidc method would be the recommended way of doing this integration and easier as well but depending on your needs you may choose a different option This is the recommended method if you need to use Google Groups membership in your RBAC configuration","answers":"# Google\n\nThere are three different ways to integrate Argo CD login with your Google Workspace users. Generally the OpenID Connect (_oidc_) method would be the recommended way of doing this integration (and easier, as well...), but depending on your needs, you may choose a different option.\n\n* [OpenID Connect using Dex](#openid-connect-using-dex)  \n  This is the recommended login method if you don't need information about the groups the user's belongs to. Google doesn't expose the `groups` claim via _oidc_, so you won't be able to use Google Groups membership information for RBAC. \n* [SAML App Auth using Dex](#saml-app-auth-using-dex)  \n  Dex [recommends avoiding this method](https:\/\/dexidp.io\/docs\/connectors\/saml\/#warning). Also, you won't get Google Groups membership information through this method.\n* [OpenID Connect plus Google Groups using Dex](#openid-connect-plus-google-groups-using-dex)  \n  This is the recommended method if you need to use Google Groups membership in your RBAC configuration.\n\nOnce you've set up one of the above integrations, be sure to edit `argo-rbac-cm` to configure permissions (as in the example below). 
See [RBAC Configurations](..\/rbac.md) for more detailed scenarios.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-rbac-cm\n  namespace: argocd\ndata:\n  policy.default: role:readonly\n```\n\n## OpenID Connect using Dex\n\n### Configure your OAuth consent screen\n\nIf you've never configured this, you'll be redirected straight to this if you try to create an OAuth Client ID\n\n1. Go to your [OAuth Consent](https:\/\/console.cloud.google.com\/apis\/credentials\/consent) configuration. If you still haven't created one, select `Internal` or `External` and click `Create` \n2. Go and [edit your OAuth consent screen](https:\/\/console.cloud.google.com\/apis\/credentials\/consent\/edit) Verify you're in the correct project!\n3. Configure a name for your login app and a user support email address\n4. The app logo and filling the information links is not mandatory, but it's a nice touch for the login page\n5. In \"Authorized domains\" add the domains who are allowed to log in to ArgoCD (e.g. if you add `example.com`, all Google Workspace users with an `@example.com` address will be able to log in)\n6. Save to continue to the \"Scopes\" section\n7. Click on \"Add or remove scopes\" and add the `...\/auth\/userinfo.profile` and the `openid` scopes\n8. Save, review the summary of your changes and finish\n\n### Configure a new OAuth Client ID\n\n1. Go to your [Google API Credentials](https:\/\/console.cloud.google.com\/apis\/credentials) console, and make sure you're in the correct project.\n2. Click on \"+Create Credentials\"\/\"OAuth Client ID\"\n3. Select \"Web Application\" in the Application Type drop down menu, and enter an identifying name for your app (e.g. `Argo CD`)\n4. Fill \"Authorized JavaScript origins\" with your Argo CD URL, e.g. `https:\/\/argocd.example.com`\n5. Fill \"Authorized redirect URIs\" with your Argo CD URL plus `\/api\/dex\/callback`, e.g. 
`https:\/\/argocd.example.com\/api\/dex\/callback`\n\n    ![](..\/..\/assets\/google-admin-oidc-uris.png)\n\n6. Click \"Create\" and save your \"Client ID\" and your \"Client Secret\" for later\n\n### Configure Argo to use OpenID Connect\n\nEdit `argocd-cm` and add the following `dex.config` to the data section, replacing `clientID` and `clientSecret` with the values you saved before:\n\n```yaml\ndata:\n  url: https:\/\/argocd.example.com\n  dex.config: |\n    connectors:\n    - config:\n        issuer: https:\/\/accounts.google.com\n        clientID: XXXXXXXXXXXXX.apps.googleusercontent.com\n        clientSecret: XXXXXXXXXXXXX\n      type: oidc\n      id: google\n      name: Google\n```\n\n### References\n\n- [Dex oidc connector docs](https:\/\/dexidp.io\/docs\/connectors\/oidc\/)\n\n## SAML App Auth using Dex\n\n### Configure a new SAML App\n\n---\n!!! warning \"Deprecation Warning\"\n\n    Note that, according to [Dex documentation](https:\/\/dexidp.io\/docs\/connectors\/saml\/#warning), SAML is considered unsafe and they are planning to deprecate that module.\n\n---\n\n1. In the [Google admin console](https:\/\/admin.google.com), open the left-side menu and select `Apps` > `SAML Apps`\n\n    ![Google Admin Apps Menu](..\/..\/assets\/google-admin-saml-apps-menu.png \"Google Admin menu with the Apps \/ SAML Apps path selected\")\n\n2. Under `Add App` select `Add custom SAML app`\n\n    ![Google Admin Add Custom SAML App](..\/..\/assets\/google-admin-saml-add-app-menu.png \"Add apps menu with add custom SAML app highlighted\")\n\n3. Enter a `Name` for the application (e.g. `Argo CD`), then choose `Continue`\n\n    ![Google Admin Apps Menu](..\/..\/assets\/google-admin-saml-app-details.png \"Add apps menu with add custom SAML app highlighted\")\n\n4. Download the metadata or copy the `SSO URL`, `Certificate`, and optionally `Entity ID` from the identity provider details for use in the next section. 
Choose `continue`.\n    - Base64 encode the contents of the certificate file, for example:\n    - `$ cat ArgoCD.cer | base64`\n    - *Keep a copy of the encoded output to be used in the next section.*\n    - *Ensure that the certificate is in PEM format before base64 encoding*\n\n    ![Google Admin IdP Metadata](..\/..\/assets\/google-admin-idp-metadata.png \"A screenshot of the Google IdP metadata\")\n\n5. For both the `ACS URL` and `Entity ID`, use your Argo Dex Callback URL, for example: `https:\/\/argocd.example.com\/api\/dex\/callback`\n\n    ![Google Admin Service Provider Details](..\/..\/assets\/google-admin-service-provider-details.png \"A screenshot of the Google Service Provider Details\")\n\n6. Add SAML Attribute Mapping, Map `Primary email` to `name` and `Primary Email` to `email`. and click `ADD MAPPING` button.\n\n    ![Google Admin SAML Attribute Mapping Details](..\/..\/assets\/google-admin-saml-attribute-mapping-details.png \"A screenshot of the Google Admin SAML Attribute Mapping Details\")\n\n7. Finish creating the application.\n\n### Configure Argo to use the new Google SAML App\n\nEdit `argocd-cm` and add the following `dex.config` to the data section, replacing the `caData`, `argocd.example.com`, `sso-url`, and optionally `google-entity-id` with your values from the Google SAML App:\n\n```yaml\ndata:\n  url: https:\/\/argocd.example.com\n  dex.config: |\n    connectors:\n    - type: saml\n      id: saml\n      name: saml\n      config:\n        ssoURL: https:\/\/sso-url (e.g. https:\/\/accounts.google.com\/o\/saml2\/idp?idpid=Abcde0)\n        entityIssuer: https:\/\/argocd.example.com\/api\/dex\/callback\n        caData: |\n          BASE64-ENCODED-CERTIFICATE-DATA\n        redirectURI: https:\/\/argocd.example.com\/api\/dex\/callback\n        usernameAttr: name\n        emailAttr: email\n        # optional\n        ssoIssuer: https:\/\/google-entity-id (e.g. 
https:\/\/accounts.google.com\/o\/saml2?idpid=Abcde0)\n```\n\n### References\n\n- [Dex SAML connector docs](https:\/\/dexidp.io\/docs\/connectors\/saml\/)\n- [Google's SAML error messages](https:\/\/support.google.com\/a\/answer\/6301076?hl=en)\n\n## OpenID Connect plus Google Groups using Dex\n\nWe're going to use Dex's `google` connector to get additional Google Groups information from your users, allowing you to use group membership in your RBAC rules, e.g., giving the `admin` role to the whole `sysadmins@yourcompany.com` group.\n\nThis connector uses two different credentials:\n\n- An OIDC client ID and secret  \n  Same as when you're configuring an [OpenID connection](#openid-connect-using-dex), this authenticates your users\n- A Google service account  \n  This is used to connect to the Google Directory API and pull information about your users' group membership\n\nAlso, you'll need the email address for an admin user on this domain. Dex will impersonate that user identity to fetch user information from the API.\n\n### Configure OpenID Connect\n\nGo through the same steps as in [OpenID Connect using Dex](#openid-connect-using-dex), except for configuring `argocd-cm`. We'll do that later.\n\n### Set up Directory API access \n\n1. Follow [Google instructions to create a service account with Domain-Wide Delegation](https:\/\/developers.google.com\/admin-sdk\/directory\/v1\/guides\/delegation)\n    - When assigning API scopes to the service account, assign **only** the `https:\/\/www.googleapis.com\/auth\/admin.directory.group.readonly` scope and nothing else. If you assign any other scopes, you won't be able to fetch information from the API\n    - Create the credentials in JSON format and store them in a safe place; we'll need them later  \n2. Enable the [Admin SDK](https:\/\/console.developers.google.com\/apis\/library\/admin.googleapis.com\/)\n\n### Configure Dex\n\n1. 
Create a secret with the contents of the previous JSON file encoded in base64, like this:\n\n        apiVersion: v1\n        kind: Secret\n        metadata:\n          name: argocd-google-groups-json\n          namespace: argocd\n        data:\n          googleAuth.json: JSON_FILE_BASE64_ENCODED\n\n2. Edit your `argocd-dex-server` deployment to mount that secret as a file  \n    - Add a volume mount in `\/spec\/template\/spec\/containers\/0\/volumeMounts\/` like this. Be sure to edit the running container and not the init container!\n\n            volumeMounts:\n            - mountPath: \/shared\n              name: static-files\n            - mountPath: \/tmp\n              name: dexconfig\n            - mountPath: \/tmp\/oidc\n              name: google-json\n              readOnly: true\n\n    - Add a volume in `\/spec\/template\/spec\/volumes\/` like this:\n\n            volumes:\n            - emptyDir: {}\n              name: static-files\n            - emptyDir: {}\n              name: dexconfig\n            - name: google-json\n              secret:\n                defaultMode: 420\n                secretName: argocd-google-groups-json\n\n3. Edit `argocd-cm` and add the following `dex.config` to the data section, replacing `clientID` and `clientSecret` with the values you saved before, `adminEmail` with the address for the admin user you're going to impersonate, and `redirectURI` with your Argo CD domain (note that the `type` is now `google` instead of `oidc`):\n\n        dex.config: |\n          connectors:\n          - config:\n              redirectURI: https:\/\/argocd.example.com\/api\/dex\/callback\n              clientID: XXXXXXXXXXXXX.apps.googleusercontent.com\n              clientSecret: XXXXXXXXXXXXX\n              serviceAccountFilePath: \/tmp\/oidc\/googleAuth.json\n              adminEmail: admin-email@example.com\n            type: google\n            id: google\n            name: Google\n\n4. 
Restart your `argocd-dex-server` deployment to be sure it's using the latest configuration\n5. Log in to Argo CD and go to the \"User info\" section, where you should see the groups you're a member of  \n  ![User info](..\/..\/assets\/google-groups-membership.png)\n6. Now you can use group email addresses to grant RBAC permissions\n7. Dex (> v2.31.0) can also be configured to fetch transitive group membership as follows:\n\n        dex.config: |\n          connectors:\n          - config:\n              redirectURI: https:\/\/argocd.example.com\/api\/dex\/callback\n              clientID: XXXXXXXXXXXXX.apps.googleusercontent.com\n              clientSecret: XXXXXXXXXXXXX\n              serviceAccountFilePath: \/tmp\/oidc\/googleAuth.json\n              adminEmail: admin-email@example.com\n              fetchTransitiveGroupMembership: True\n            type: google\n            id: google\n            name: Google\n\n### References\n\n- [Dex Google connector docs](https:\/\/dexidp.io\/docs\/connectors\/google\/)","site":"argocd"}
{"questions":"argocd The following steps are required to integrate ArgoCD with Zitadel 4 Set up an action in Zitadel These instructions will take you through the entire process of getting your ArgoCD application authenticating and authorizing with Zitadel You will create an application within Zitadel and configure ArgoCD to use Zitadel for authentication using roles set in Zitadel to determine privileges in ArgoCD 3 Set up roles in Zitadel Zitadel Please also consult the Integrating Zitadel and ArgoCD 1 Create a new project and a new application in Zitadel 2 Configure the application in Zitadel","answers":"# Zitadel\nPlease also consult the [Zitadel Documentation](https:\/\/zitadel.com\/docs).\n## Integrating Zitadel and ArgoCD\nThese instructions will take you through the entire process of getting your ArgoCD application authenticating and authorizing with Zitadel. You will create an application within Zitadel and configure ArgoCD to use Zitadel for authentication using roles set in Zitadel to determine privileges in ArgoCD.\n\nThe following steps are required to integrate ArgoCD with Zitadel:\n1. Create a new project and a new application in Zitadel\n2. Configure the application in Zitadel\n3. Set up roles in Zitadel\n4. Set up an action in Zitadel\n5. Configure ArgoCD configmaps\n6. Test the setup\n\nThe following values will be used in this example:\n- Zitadel FQDN: `auth.example.com`\n- Zitadel Project: `argocd-project`\n- Zitadel Application: `argocd-application`\n- Zitadel Action: `groupsClaim`\n- ArgoCD FQDN: `argocd.example.com`\n- ArgoCD Administrator Role: `argocd_administrators`\n- ArgoCD User Role: `argocd_users`\n\nYou may choose different values in your setup; these are used to keep the guide consistent.\n\n## Setting up your project and application in Zitadel\nFirst, we will create a new project within Zitadel. Go to **Projects** and select **Create New Project**.  \nYou should now see the following screen.  
\n\n![Zitadel Project](..\/..\/assets\/zitadel-project.png \"Zitadel Project\")\n\nCheck the following options:\n- Assert Roles on Authentication\n- Check authorization on Authentication\n\n![Zitadel Project Settings](..\/..\/assets\/zitadel-project-settings.png \"Zitadel Project Settings\")\n\n### Roles\n\nGo to **Roles** and click **New**. Create the following two roles, using the values specified below for both the **Key** and **Group** fields.\n- `argocd_administrators`\n- `argocd_users`\n\nYour roles should now look like this:\n\n![Zitadel Project Roles](..\/..\/assets\/zitadel-project-roles.png \"Zitadel Project Roles\")\n\n### Authorizations\n\nNext, go to **Authorizations** and assign your user the role `argocd_administrators`.\nClick **New**, enter the name of your user and click **Continue**. Select the role `argocd_administrators` and click **Save**.\n\nYour authorizations should now look like this:\n\n![Zitadel Project Authorizations](..\/..\/assets\/zitadel-project-authorizations.png \"Zitadel Project Authorizations\")\n\n### Creating an application\n\nGo to **General** and create a new application. Name the application `argocd-application`.\n\nFor the application type, select **WEB** and click **Continue**.\n\n![Zitadel Application Setup Step 1](..\/..\/assets\/zitadel-application-1.png \"Zitadel Application Setup Step 1\")\n\nSelect **CODE** and continue.\n\n![Zitadel Application Setup Step 2](..\/..\/assets\/zitadel-application-2.png \"Zitadel Application Setup Step 2\")\n\nNext, we will set up the redirect and post-logout URIs. Set the following values:\n- Redirect URI: `https:\/\/argocd.example.com\/auth\/callback`\n- Post Logout URI: `https:\/\/argocd.example.com`\n\nThe post logout URI is optional. 
In the example setup users will be taken back to the ArgoCD login page after logging out.\n\n![Zitadel Application Setup Step 3](..\/..\/assets\/zitadel-application-3.png \"Zitadel Application Setup Step 3\")\n\nVerify your configuration on the next screen and click **Create** to create the application.\n\n![Zitadel Application Setup Step 4](..\/..\/assets\/zitadel-application-4.png \"Zitadel Application Setup Step 4\")\n\nAfter clicking **Create** you will be shown the `ClientId` and the `ClientSecret` for your application. Make sure to copy the ClientSecret as you will not be able to retrieve it after closing this window.  \nFor our example, the following values are used:\n- ClientId: `227060711795262483@argocd-project`\n- ClientSecret: `UGvTjXVFAQ8EkMv2x4GbPcrEwrJGWZ0sR2KbwHRNfYxeLsDurCiVEpa5bkgW0pl0`\n\n![Zitadel Application Secrets](..\/..\/assets\/zitadel-application-secrets.png \"Zitadel Application Secrets\")\n\nOnce you have saved the ClientSecret in a safe place, click **Close** to complete creating the application.\n\nGo to **Token Settings** and enable the following options:  \n- User roles inside ID Token\n- User Info inside ID Token\n\n![Zitadel Application Settings](..\/..\/assets\/zitadel-application-settings.png \"Zitadel Application Settings\")\n\n## Setting up an action in Zitadel\n\nTo include the role of the user in the token issued by Zitadel, we will need to set up a Zitadel Action. The authorization in ArgoCD will be determined by the role contained within the auth token.  
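\n\nTo illustrate what this means (a hypothetical decoded ID token using the values from this guide; the exact claim set depends on your Zitadel configuration), the token presented to ArgoCD will carry the roles in a `groups` claim such as:\n\n```json\n{\n  \"iss\": \"https:\/\/auth.example.com\",\n  \"aud\": [\"227060711795262483@argocd-project\"],\n  \"email\": \"user@example.com\",\n  \"groups\": [\"argocd_administrators\"]\n}\n```\n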
\nGo to **Actions**, click **New**, and choose `groupsClaim` as the name of your action.\n\nPaste the following code into the action:\n\n```javascript\n\/**\n * Sets the user's roles as an additional \"groups\" claim in the token.\n *\n * The role claims of the token look like the following:\n *\n * \/\/ added by the code below\n * \"groups\": [\"{roleName}\", \"{roleName}\", ...],\n *\n * Flow: Complement token, Triggers: Pre Userinfo creation, Pre access token creation\n *\n * @param ctx\n * @param api\n *\/\nfunction groupsClaim(ctx, api) {\n  if (ctx.v1.user.grants === undefined || ctx.v1.user.grants.count == 0) {\n    return;\n  }\n\n  let grants = [];\n  ctx.v1.user.grants.grants.forEach((claim) => {\n    claim.roles.forEach((role) => {\n      grants.push(role);\n    });\n  });\n\n  api.v1.claims.setClaim(\"groups\", grants);\n}\n```\n\nCheck **Allowed To Fail** and click **Add** to add your action.  \n\n*Note: If **Allowed To Fail** is not checked and a user does not have a role assigned, the user may no longer be able to log in to Zitadel, as the login flow fails when the action fails.*\n\nNext, add your action to the **Complement Token** flow. Select the **Complement Token** flow from the dropdown and click **Add trigger**.  
\nAdd your action to both triggers **Pre Userinfo creation** and **Pre access token creation**.\n\nYour Actions page should now look like the following screenshot:\n\n![Zitadel Actions](..\/..\/assets\/zitadel-actions.png \"Zitadel Actions\")\n\n\n## Configuring the ArgoCD configmaps\n\nNext, we will configure two ArgoCD configmaps:\n- [argocd-cm.yaml](https:\/\/github.com\/argoproj\/argo-cd\/blob\/master\/docs\/operator-manual\/argocd-cm.yaml)\n- [argocd-rbac-cm.yaml](https:\/\/github.com\/argoproj\/argo-cd\/blob\/master\/docs\/operator-manual\/argocd-rbac-cm.yaml)\n\nConfigure your configmaps as follows while making sure to replace the relevant values such as `url`, `issuer`, `clientID`, `clientSecret` and `logoutURL` with ones matching your setup.\n\n### argocd-cm.yaml\n```yaml\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/part-of: argocd\ndata:\n  admin.enabled: \"false\"\n  url: https:\/\/argocd.example.com\n  oidc.config: |\n    name: Zitadel\n    issuer: https:\/\/auth.example.com\n    clientID: 227060711795262483@argocd-project\n    clientSecret: UGvTjXVFAQ8EkMv2x4GbPcrEwrJGWZ0sR2KbwHRNfYxeLsDurCiVEpa5bkgW0pl0\n    requestedScopes:\n      - openid\n      - profile\n      - email\n      - groups\n    logoutURL: https:\/\/auth.example.com\/oidc\/v1\/end_session\n```\n\n### argocd-rbac-cm.yaml\n```yaml\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-rbac-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/part-of: argocd\ndata:\n  scopes: '[groups]'\n  policy.csv: |\n    g, argocd_administrators, role:admin\n    g, argocd_users, role:readonly\n  policy.default: ''\n```\n\nThe roles specified under `policy.csv` must match the roles configured in Zitadel.  \nThe Zitadel role `argocd_administrators` will be assigned the ArgoCD role `admin` granting admin access to ArgoCD.  
\nThe Zitadel role `argocd_users` will be assigned the ArgoCD role `readonly` granting read-only access to ArgoCD.\n\nDeploy your ArgoCD configmaps. ArgoCD and Zitadel should now be set up correctly to allow users to log in to ArgoCD using Zitadel.\n\n## Testing the setup\n\nGo to your ArgoCD instance. You should now see the **LOG IN WITH ZITADEL** button above the usual username\/password login.\n\n![Zitadel ArgoCD Login](..\/..\/assets\/zitadel-argocd-login.png \"Zitadel ArgoCD Login\")\n\nAfter logging in with your Zitadel user go to **User Info**. If everything is set up correctly you should now see the group `argocd_administrators` as shown below.\n\n![Zitadel ArgoCD User Info](..\/..\/assets\/zitadel-argocd-user-info.png \"Zitadel ArgoCD User Info\")","site":"argocd","answers_cleaned":"  Zitadel Please also consult the  Zitadel Documentation  https   zitadel com docs      Integrating Zitadel and ArgoCD These instructions will take you through the entire process of getting your ArgoCD application authenticating and authorizing with Zitadel  You will create an application within Zitadel and configure ArgoCD to use Zitadel for authentication using roles set in Zitadel to determine privileges in ArgoCD   The following steps are required to integrate ArgoCD with Zitadel  1  Create a new project and a new application in Zitadel 2  Configure the application in Zitadel 3  Set up roles in Zitadel 4  Set up an action in Zitadel 5  Configure ArgoCD configmaps 6  Test the setup  The following values will be used in this example    Zitadel FQDN   auth example com    Zitadel Project   argocd project    Zitadel Application   argocd application    Zitadel Action   groupsClaim    ArgoCD FQDN   argocd example com    ArgoCD Administrator Role   argocd administrators    ArgoCD User Role   argocd users   You may choose different values in your setup  these are used to keep the guide consistent      Setting up your project and application in Zitadel First  we will create a new 
project within Zitadel  Go to   Projects   and select   Create New Project      You should now see the following screen       Zitadel Project        assets zitadel project png  Zitadel Project    Check the following options    Assert Roles on Authentication   Check authorization on Authentication    Zitadel Project Settings        assets zitadel project settings png  Zitadel Project Settings        Roles  Go to   Roles   and click   New    Create the following two roles  Use the specified values below for both fields   Key   and   Group       argocd administrators     argocd users   Your roles should now look like this     Zitadel Project Roles        assets zitadel project roles png  Zitadel Project Roles        Authorizations  Next  go to   Authorizations   and assign your user the role  argocd administrators   Click   New    enter the name of your user and click   Continue    Select the role  argocd administrators  and click   Save     Your authorizations should now look like this     Zitadel Project Authorizations        assets zitadel project authorizations png  Zitadel Project Authorizations        Creating an application  Go to   General   and create a new application  Name the application  argocd application    For type of the application  select   WEB   and click continue     Zitadel Application Setup Step 1        assets zitadel application 1 png  Zitadel Application Setup Step 1    Select   CODE   and continue     Zitadel Application Setup Step 2        assets zitadel application 2 png  Zitadel Application Setup Step 2    Next  we will set up the redirect and post logout URIs  Set the following values    Redirect URI   https   argocd example com auth callback    Post Logout URI   https   argocd example com   The post logout URI is optional  In the example setup users will be taken back to the ArgoCD login page after logging out     Zitadel Application Setup Step 3        assets zitadel application 3 png  Zitadel Application Setup Step 3    Verify your 
configuration on the next screen and click   Create   to create the application     Zitadel Application Setup Step 4        assets zitadel application 4 png  Zitadel Application Setup Step 4    After clicking   Create   you will be shown the  ClientId  and the  ClientSecret  for your application  Make sure to copy the ClientSecret as you will not be able to retrieve it after closing this window    For our example  the following values are used    ClientId   227060711795262483 argocd project    ClientSecret   UGvTjXVFAQ8EkMv2x4GbPcrEwrJGWZ0sR2KbwHRNfYxeLsDurCiVEpa5bkgW0pl0     Zitadel Application Secrets        assets zitadel application secrets png  Zitadel Application Secrets    Once you have saved the ClientSecret in a safe place  click   Close   to complete creating the application   Go to   Token Settings   and enable the following options      User roles inside ID Token   User Info inside ID Token    Zitadel Application Settings        assets zitadel application settings png  Zitadel Application Settings       Setting up an action in Zitadel  To include the role of the user in the token issued by Zitadel  we will need to set up a Zitadel Action  The authorization in ArgoCD will be determined by the role contained within the auth token    Go to   Actions    click   New   and choose  groupsClaim  as the name of your action   Paste the following code into the action      javascript        sets the roles an additional claim in the token with roles as value an project as key       The role claims of the token look like the following           added by the code below     groups      roleName      roleName                Flow  Complement token  Triggers  Pre Userinfo creation  Pre access token creation        param ctx     param api     function groupsClaim ctx  api      if  ctx v1 user grants     undefined    ctx v1 user grants count    0        return         let grants         ctx v1 user grants grants forEach  claim           claim roles forEach  role             
grants.push(role)\n    })\n  })\n  api.v1.claims.setClaim('groups', grants)\n}\n\nCheck \"Allowed To Fail\" and click \"Add\" to add your action.\n\nNote: If \"Allowed To Fail\" is not checked and a user does not have a role assigned, it may be possible that the user is no longer able to log in to Zitadel, as the login flow fails when the action fails.\n\nNext, add your action to the \"Complement Token\" flow. Select the \"Complement Token\" flow from the dropdown and click \"Add trigger\". Add your action to both triggers, \"Pre Userinfo creation\" and \"Pre access token creation\". Your Actions page should now look like the following screenshot:\n\n![Zitadel Actions](..\/assets\/zitadel-actions.png \"Zitadel Actions\")\n\n## Configuring the ArgoCD configmaps\n\nNext, we will configure two ArgoCD configmaps:\n\n- [argocd-cm.yaml](https:\/\/github.com\/argoproj\/argo-cd\/blob\/master\/docs\/operator-manual\/argocd-cm.yaml)\n- [argocd-rbac-cm.yaml](https:\/\/github.com\/argoproj\/argo-cd\/blob\/master\/docs\/operator-manual\/argocd-rbac-cm.yaml)\n\nConfigure your configmaps as follows, making sure to replace the relevant values such as `url`, `issuer`, `clientID`, `clientSecret` and `logoutURL` with ones matching your setup.\n\n### argocd-cm.yaml\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/part-of: argocd\ndata:\n  admin.enabled: \"false\"\n  url: https:\/\/argocd.example.com\n  oidc.config: |\n    name: Zitadel\n    issuer: https:\/\/auth.example.com\n    clientID: 227060711795262483@argocd_project\n    clientSecret: UGvTjXVFAQ8EkMv2x4GbPcrEwrJGWZ0sR2KbwHRNfYxeLsDurCiVEpa5bkgW0pl0\n    requestedScopes:\n      - openid\n      - profile\n      - email\n      - groups\n    logoutURL: https:\/\/auth.example.com\/oidc\/v1\/end_session\n```\n\n### argocd-rbac-cm.yaml\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-rbac-cm\n  namespace: argocd\n  labels:\n    app.kubernetes.io\/part-of: argocd\ndata:\n  scopes: '[groups]'\n  policy.csv: |\n    g, argocd_administrators, role:admin\n    g, argocd_users, role:readonly\n  policy.default: ''\n```\n\nThe roles specified under `policy.csv` must match the roles configured in Zitadel:\n\n- The Zitadel role `argocd_administrators` will be assigned the ArgoCD role `admin`, granting admin access to ArgoCD.\n- The Zitadel role `argocd_users` will be assigned the ArgoCD role `readonly`, granting read-only access to ArgoCD.\n\nDeploy your ArgoCD configmaps. ArgoCD and Zitadel should now be set up correctly to allow users to log in to ArgoCD using Zitadel.\n\n## Testing the setup\n\nGo to your ArgoCD instance. You should now see the \"LOG IN WITH ZITADEL\" button above the usual username\/password login.\n\n![Zitadel ArgoCD Login](..\/assets\/zitadel-argocd-login.png \"Zitadel ArgoCD Login\")\n\nAfter logging in with your Zitadel user, go to \"User Info\". If everything is set up correctly, you should now see the group `argocd_administrators` as shown below:\n\n![Zitadel ArgoCD User Info](..\/assets\/zitadel-argocd-user-info.png \"Zitadel ArgoCD User Info\")"}
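The `policy.csv` group-to-role mapping described above can be sketched in a few lines. This is a hypothetical illustration of how `g, <group>, role:<role>` lines associate a user's OIDC `groups` claims with ArgoCD roles; the `roles_for` helper is invented for this example, and real ArgoCD enforces these rules through Casbin rather than code like this.

```python
# Minimal sketch (NOT ArgoCD's implementation) of how `g, <group>, role:<role>`
# lines in argocd-rbac-cm's policy.csv map OIDC group claims to ArgoCD roles.

POLICY_CSV = """
g, argocd_administrators, role:admin
g, argocd_users, role:readonly
"""

def roles_for(groups):
    """Return the ArgoCD roles granted to a user carrying the given group claims."""
    mapping = {}
    for line in POLICY_CSV.strip().splitlines():
        kind, group, role = (part.strip() for part in line.split(","))
        if kind == "g":  # "g" lines bind a subject (here: a group) to a role
            mapping[group] = role
    return sorted({mapping[g] for g in groups if g in mapping})

print(roles_for(["argocd_administrators"]))  # prints ['role:admin']
print(roles_for(["some-unrelated-group"]))   # prints [] -> policy.default applies
```

A user in neither Zitadel role matches no `g` line, which is why `policy.default: ''` leaves such users with no access.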
{"questions":"argocd apiVersion v1 kind ConfigMap The notification template is used to generate the notification content and is configured in the ConfigMap The template is leveraging the golang package and allows customization of the notification message metadata Templates are meant to be reusable and can be referenced by multiple triggers yaml The following template is used to notify the user about application sync status","answers":"The notification template is used to generate the notification content and is configured in the `argocd-notifications-cm` ConfigMap. The template is leveraging\nthe [html\/template](https:\/\/golang.org\/pkg\/html\/template\/) golang package and allows customization of the notification message.\nTemplates are meant to be reusable and can be referenced by multiple triggers.\n\nThe following template is used to notify the user about application sync status.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  template.my-custom-template-slack-template: |\n    message: |\n      Application {{.app.metadata.name}} sync is {{.app.status.sync.status}}.\n      Application details: {{.context.argocdUrl}}\/applications\/{{.app.metadata.name}}.\n```\n\nEach template has access to the following fields:\n\n- `app` holds the application object.\n- `context` is a user-defined string map and might include any string keys and values.\n- `secrets` provides access to sensitive data stored in `argocd-notifications-secret`\n- `serviceType` holds the notification service type name (such as \"slack\" or \"email\"). 
The field can be used to conditionally\nrender service-specific fields.\n- `recipient` holds the recipient name.\n\n## Defining user-defined `context`\n\nIt is possible to define some shared context between all notification templates by setting a top-level\nYAML document of key-value pairs, which can then be used within templates, like so:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  context: |\n    region: east\n    environmentName: staging\n\n  template.a-slack-template-with-context: |\n    message: \"Something happened in {{ .app.metadata.name }} in the {{ .context.region }} data center!\"\n```\n\n## Defining and using secrets within notification templates\n\nSome notification service use cases will require the use of secrets within templates. This can be achieved with the use of\nthe `secrets` data variable available within the templates.\n\nGiven that we have the following `argocd-notifications-secret`:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: argocd-notifications-secret\nstringData:\n  sampleWebhookToken: secret-token\ntype: Opaque\n```\n\nWe can use the defined `sampleWebhookToken` in a template as such:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  template.trigger-webhook: |\n      webhook:\n        sample-webhook:\n          method: POST\n          path: 'webhook\/endpoint\/with\/auth'\n          body: 'token={{ .secrets.sampleWebhookToken }}&variables[APP_SOURCE_PATH]={{ .app.spec.source.path }}'\n```\n\n## Notification Service Specific Fields\n\nThe `message` field of the template definition allows creating a basic notification for any notification service. You can leverage notification service-specific\nfields to create complex notifications. 
For example, using service-specific fields you can add blocks and attachments for Slack, a subject for Email, or a URL path and body for Webhook.\nSee the corresponding service [documentation](services\/overview.md) for more information.\n\n## Change the timezone\n\nYou can change the timezone to show in notifications as follows.\n\n1. Call time functions.\n\n    ```\n    \n    ```\n\n2. Set the `TZ` environment variable on the argocd-notifications-controller container.\n\n    ```yaml\n    apiVersion: apps\/v1\n    kind: Deployment\n    metadata:\n      name: argocd-notifications-controller\n    spec:\n      template:\n        spec:\n          containers:\n          - name: argocd-notifications-controller\n            env:\n            - name: TZ\n              value: Asia\/Tokyo\n    ```\n\n## Functions\n\nTemplates have access to the set of built-in functions:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  template.my-custom-template-slack-template: |\n    message: \"Author: {{(call .repo.GetCommitMetadata .app.status.sync.revision).Author}}\"\n```\n\n{!docs\/operator-manual\/notifications\/functions.md!}","site":"argocd"}
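To make the field resolution above concrete, here is a hedged Python stand-in for the Go `html/template` lookup: a toy `render` helper (invented for this illustration, not part of ArgoCD) that resolves `{{ .app... }}`-style dotted paths against the kind of `app`/`context` data a notification template receives.

```python
# Toy illustration (NOT ArgoCD's renderer) of how {{ .dotted.path }} fields
# in a notification template resolve against the app/context data.
import re

def render(template: str, data: dict) -> str:
    """Resolve {{ .dotted.path }} placeholders by walking nested dicts."""
    def resolve(match: re.Match) -> str:
        value = data
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{\s*\.([\w.]+)\s*\}\}", resolve, template)

# Shape of the data is an assumption based on the fields listed in the docs.
data = {
    "app": {"metadata": {"name": "guestbook"},
            "status": {"sync": {"status": "Succeeded"}}},
    "context": {"argocdUrl": "https://argocd.example.com"},
}

message = render(
    "Application {{ .app.metadata.name }} sync is {{ .app.status.sync.status }}.",
    data,
)
print(message)  # prints: Application guestbook sync is Succeeded.
```

The real template engine additionally supports conditionals, pipelines, and the built-in functions described in the Functions section; this sketch only shows why `app` and `context` appear as dotted roots in templates.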
{"questions":"argocd Prints information about configured templates argocd admin notifications template get Examples argocd admin notifications template get flags","answers":"## argocd admin notifications template get\n\nPrints information about configured templates\n\n```\nargocd admin notifications template get [flags]\n```\n\n### Examples\n\n```\n\n# prints all templates\nargocd admin notifications template get\n# print YAML formatted app-sync-succeeded template definition\nargocd admin notifications template get app-sync-succeeded -o=yaml\n\n```\n\n### Options\n\n```\n  -h, --help            help for get\n  -o, --output string   Output format. One of:json|yaml|wide|name (default \"wide\")\n```\n\n### Options inherited from parent commands\n\n```\n      --argocd-repo-server string       Argo CD repo server address (default \"argocd-repo-server:8081\")\n      --argocd-repo-server-plaintext    Use a plaintext client (non-TLS) to connect to repository server\n      --argocd-repo-server-strict-tls   Perform strict validation of TLS certificates when connecting to repo server\n      --as string                       Username to impersonate for the operation\n      --as-group stringArray            Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --as-uid string                   UID to impersonate for the operation\n      --certificate-authority string    Path to a cert file for the certificate authority\n      --client-certificate string       Path to a client certificate file for TLS\n      --client-key string               Path to a client key file for TLS\n      --cluster string                  The name of the kubeconfig cluster to use\n      --config-map string               argocd-notifications-cm.yaml file path\n      --context string                  The name of the kubeconfig context to use\n      --disable-compression             If true, opt-out of response compression for all requests to the server\n     
 --insecure-skip-tls-verify        If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kubeconfig string               Path to a kube config. Only required if out-of-cluster\n  -n, --namespace string                If present, the namespace scope for this CLI request\n      --password string                 Password for basic authentication to the API server\n      --proxy-url string                If provided, this URL will be used to connect via proxy\n      --request-timeout string          The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n      --secret string                   argocd-notifications-secret.yaml file path. Use empty secret if provided value is ':empty'\n      --server string                   The address and port of the Kubernetes API server\n      --tls-server-name string          If provided, this name will be used to validate server certificate. 
If this is not provided, hostname used to contact the server is used.\n      --token string                    Bearer token for authentication to the API server\n      --user string                     The name of the kubeconfig user to use\n      --username string                 Username for basic authentication to the API server\n```\n\n## argocd admin notifications template notify\n\nGenerates notification using the specified template and sends it to specified recipients\n\n```\nargocd admin notifications template notify NAME RESOURCE_NAME [flags]\n```\n\n### Examples\n\n```\n\n# Trigger notification using in-cluster config map and secret\nargocd admin notifications template notify app-sync-succeeded guestbook --recipient slack:my-slack-channel\n\n# Render generated notification in console\nargocd admin notifications template notify app-sync-succeeded guestbook\n\n```\n\n### Options\n\n```\n  -h, --help                    help for notify\n      --recipient stringArray   List of recipients (default [console:stdout])\n```\n\n### Options inherited from parent commands\n\n```\n      --argocd-repo-server string       Argo CD repo server address (default \"argocd-repo-server:8081\")\n      --argocd-repo-server-plaintext    Use a plaintext client (non-TLS) to connect to repository server\n      --argocd-repo-server-strict-tls   Perform strict validation of TLS certificates when connecting to repo server\n      --as string                       Username to impersonate for the operation\n      --as-group stringArray            Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --as-uid string                   UID to impersonate for the operation\n      --certificate-authority string    Path to a cert file for the certificate authority\n      --client-certificate string       Path to a client certificate file for TLS\n      --client-key string               Path to a client key file for TLS\n      
--cluster string                  The name of the kubeconfig cluster to use\n      --config-map string               argocd-notifications-cm.yaml file path\n      --context string                  The name of the kubeconfig context to use\n      --disable-compression             If true, opt-out of response compression for all requests to the server\n      --insecure-skip-tls-verify        If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kubeconfig string               Path to a kube config. Only required if out-of-cluster\n  -n, --namespace string                If present, the namespace scope for this CLI request\n      --password string                 Password for basic authentication to the API server\n      --proxy-url string                If provided, this URL will be used to connect via proxy\n      --request-timeout string          The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n      --secret string                   argocd-notifications-secret.yaml file path. Use empty secret if provided value is ':empty'\n      --server string                   The address and port of the Kubernetes API server\n      --tls-server-name string          If provided, this name will be used to validate server certificate. 
If this is not provided, hostname used to contact the server is used.\n      --token string                    Bearer token for authentication to the API server\n      --user string                     The name of the kubeconfig user to use\n      --username string                 Username for basic authentication to the API server\n```\n\n## argocd admin notifications trigger get\n\nPrints information about configured triggers\n\n```\nargocd admin notifications trigger get [flags]\n```\n\n### Examples\n\n```\n\n# prints all triggers\nargocd admin notifications trigger get\n# print YAML formatted on-sync-failed trigger definition\nargocd admin notifications trigger get on-sync-failed -o=yaml\n\n```\n\n### Options\n\n```\n  -h, --help            help for get\n  -o, --output string   Output format. One of:json|yaml|wide|name (default \"wide\")\n```\n\n### Options inherited from parent commands\n\n```\n      --argocd-repo-server string       Argo CD repo server address (default \"argocd-repo-server:8081\")\n      --argocd-repo-server-plaintext    Use a plaintext client (non-TLS) to connect to repository server\n      --argocd-repo-server-strict-tls   Perform strict validation of TLS certificates when connecting to repo server\n      --as string                       Username to impersonate for the operation\n      --as-group stringArray            Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --as-uid string                   UID to impersonate for the operation\n      --certificate-authority string    Path to a cert file for the certificate authority\n      --client-certificate string       Path to a client certificate file for TLS\n      --client-key string               Path to a client key file for TLS\n      --cluster string                  The name of the kubeconfig cluster to use\n      --config-map string               argocd-notifications-cm.yaml file path\n      --context string                  The 
name of the kubeconfig context to use\n      --disable-compression             If true, opt-out of response compression for all requests to the server\n      --insecure-skip-tls-verify        If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kubeconfig string               Path to a kube config. Only required if out-of-cluster\n  -n, --namespace string                If present, the namespace scope for this CLI request\n      --password string                 Password for basic authentication to the API server\n      --proxy-url string                If provided, this URL will be used to connect via proxy\n      --request-timeout string          The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n      --secret string                   argocd-notifications-secret.yaml file path. Use empty secret if provided value is ':empty'\n      --server string                   The address and port of the Kubernetes API server\n      --tls-server-name string          If provided, this name will be used to validate server certificate. 
If this is not provided, hostname used to contact the server is used.\n      --token string                    Bearer token for authentication to the API server\n      --user string                     The name of the kubeconfig user to use\n      --username string                 Username for basic authentication to the API server\n```\n\n## argocd admin notifications trigger run\n\nEvaluates specified trigger condition and prints the result\n\n```\nargocd admin notifications trigger run NAME RESOURCE_NAME [flags]\n```\n\n### Examples\n\n```\n\n# Execute trigger configured in 'argocd-notifications-cm' ConfigMap\nargocd admin notifications trigger run on-sync-status-unknown .\/sample-app.yaml\n\n# Execute trigger using my-config-map.yaml instead of 'argocd-notifications-cm' ConfigMap\nargocd admin notifications trigger run on-sync-status-unknown .\/sample-app.yaml \\\n    --config-map .\/my-config-map.yaml\n```\n\n### Options\n\n```\n  -h, --help   help for run\n```\n\n### Options inherited from parent commands\n\n```\n      --argocd-repo-server string       Argo CD repo server address (default \"argocd-repo-server:8081\")\n      --argocd-repo-server-plaintext    Use a plaintext client (non-TLS) to connect to repository server\n      --argocd-repo-server-strict-tls   Perform strict validation of TLS certificates when connecting to repo server\n      --as string                       Username to impersonate for the operation\n      --as-group stringArray            Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --as-uid string                   UID to impersonate for the operation\n      --certificate-authority string    Path to a cert file for the certificate authority\n      --client-certificate string       Path to a client certificate file for TLS\n      --client-key string               Path to a client key file for TLS\n      --cluster string                  The name of the kubeconfig cluster to use\n      
--config-map string               argocd-notifications-cm.yaml file path\n      --context string                  The name of the kubeconfig context to use\n      --disable-compression             If true, opt-out of response compression for all requests to the server\n      --insecure-skip-tls-verify        If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kubeconfig string               Path to a kube config. Only required if out-of-cluster\n  -n, --namespace string                If present, the namespace scope for this CLI request\n      --password string                 Password for basic authentication to the API server\n      --proxy-url string                If provided, this URL will be used to connect via proxy\n      --request-timeout string          The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n      --secret string                   argocd-notifications-secret.yaml file path. Use empty secret if provided value is ':empty'\n      --server string                   The address and port of the Kubernetes API server\n      --tls-server-name string          If provided, this name will be used to validate server certificate. 
If this is not provided, hostname used to contact the server is used.\n      --token string                    Bearer token for authentication to the API server\n      --user string                     The name of the kubeconfig user to use\n      --username string                 Username for basic authentication to the API server\n```\n","site":"argocd"}
 disable compression             If true  opt out of response compression for all requests to the server         insecure skip tls verify        If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure         kubeconfig string               Path to a kube config  Only required if out of cluster    n    namespace string                If present  the namespace scope for this CLI request         password string                 Password for basic authentication to the API server         proxy url string                If provided  this URL will be used to connect via proxy         request timeout string          The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   default  0           secret string                   argocd notifications secret yaml file path  Use empty secret if provided value is   empty          server string                   The address and port of the Kubernetes API server         tls server name string          If provided  this name will be used to validate server certificate  If this is not provided  hostname used to contact the server is used          token string                    Bearer token for authentication to the API server         user string                     The name of the kubeconfig user to use         username string                 Username for basic authentication to the API server     "}
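The `trigger run` examples above pass a local file via `--config-map` instead of reading the in-cluster `argocd-notifications-cm` ConfigMap. As a minimal sketch of what such a file might contain (the trigger body below is a hypothetical example reusing the `on-sync-status-unknown` name from the command examples):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  # hypothetical trigger that `argocd admin notifications trigger run` would evaluate
  trigger.on-sync-status-unknown: |
    - when: app.status.sync.status == 'Unknown'
      send: [app-sync-status-unknown]
```

With a file like this saved as `my-config-map.yaml`, the second example above evaluates the trigger condition against the application manifest in `sample-app.yaml`.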
{"questions":"argocd apiVersion v1 and notification templates reference The condition is a predicate expression that returns true if the notification should be sent The trigger condition evaluation is powered by The condition language syntax is described at The trigger is configured in the ConfigMap For example the following trigger sends a notification when application sync status changes to using the template The trigger defines the condition when the notification should be sent The definition includes name condition yaml","answers":"The trigger defines the condition when the notification should be sent. The definition includes name, condition\nand notification templates reference. The condition is a predicate expression that returns true if the notification\nshould be sent. The trigger condition evaluation is powered by [antonmedv\/expr](https:\/\/github.com\/antonmedv\/expr).\nThe condition language syntax is described at [language-definition.md](https:\/\/github.com\/antonmedv\/expr\/blob\/master\/docs\/language-definition.md).\n\nThe trigger is configured in the `argocd-notifications-cm` ConfigMap. For example the following trigger sends a notification\nwhen application sync status changes to `Unknown` using the `app-sync-status` template:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  trigger.on-sync-status-unknown: |\n    - when: app.status.sync.status == 'Unknown'     # trigger condition\n      send: [app-sync-status, github-commit-status] # template names\n```\n\nEach condition might use several templates. 
Typically, each template is responsible for generating a service-specific notification part.\nIn the example above, the `app-sync-status` template \"knows\" how to create email and Slack notifications, and `github-commit-status` knows how to\ngenerate the payload for the GitHub webhook.\n\n## Conditions Bundles\n\nTriggers are typically managed by administrators and encapsulate information about when and which notification should be sent.\nThe end users just need to subscribe to the trigger and specify the notification destination. In order to improve user experience,\ntriggers might include multiple conditions with a different set of templates for each condition. For example, the following trigger\ncovers all stages of a sync operation and uses a different template for each case:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  trigger.sync-operation-change: |\n    - when: app.status.operationState.phase in ['Succeeded']\n      send: [github-commit-status]\n    - when: app.status.operationState.phase in ['Running']\n      send: [github-commit-status]\n    - when: app.status.operationState.phase in ['Error', 'Failed']\n      send: [app-sync-failed, github-commit-status]\n```\n\n## Avoid Sending Same Notification Too Often\n\nIn some cases, the trigger condition might be \"flapping\". The example below illustrates the problem.\nThe trigger is supposed to generate a notification once when an Argo CD application is successfully synchronized and healthy.\nHowever, the application health status might intermittently switch to `Progressing` and then back to `Healthy`, so the trigger might unnecessarily generate\nmultiple notifications. 
The `oncePer` field configures triggers to generate the notification only when the corresponding application field changes.\nThe `on-deployed` trigger from the example below sends the notification only once per observed Git revision of the deployment repository.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  # Optional 'oncePer' property ensures that the notification is sent only once per specified field value\n  # E.g. the following is triggered once per sync revision\n  trigger.on-deployed: |\n    when: app.status.operationState.phase in ['Succeeded'] and app.status.health.status == 'Healthy'\n    oncePer: app.status.sync.revision\n    send: [app-sync-succeeded]\n```\n\n**Mono Repo Usage**\n\nWhen one repo is used to sync multiple applications, the `oncePer: app.status.sync.revision` field will trigger a notification for each commit. For mono repos, the better approach is to use the `oncePer: app.status.operationState.syncResult.revision` statement. This way a notification will be sent only for a particular Application's revision.\n\n### oncePer\n\nThe `oncePer` field is supported as follows.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  annotations:\n    example.com\/version: v0.1\n```\n\n```yaml\noncePer: app.metadata.annotations[\"example.com\/version\"]\n```\n\n## Default Triggers\n\nYou can use the `defaultTriggers` field instead of specifying individual triggers in the annotations.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  # Holds list of triggers that are used by default if trigger is not specified explicitly in the subscription\n  defaultTriggers: |\n    - on-sync-status-unknown\n\n  defaultTriggers.mattermost: |\n    - on-sync-running\n    - on-sync-succeeded\n```\n\nSpecify the annotations as follows to use `defaultTriggers`. 
In this example, `slack` sends when `on-sync-status-unknown`, and `mattermost` sends when `on-sync-running` and `on-sync-succeeded`.\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  annotations:\n    notifications.argoproj.io\/subscribe.slack: my-channel\n    notifications.argoproj.io\/subscribe.mattermost: my-mattermost-channel\n```\n\n## Functions\n\nTriggers have access to the set of built-in functions.\n\nExample:\n\n```yaml\nwhen: time.Now().Sub(time.Parse(app.status.operationState.startedAt)).Minutes() >= 5\n```\n\n{!docs\/operator-manual\/notifications\/functions.md!}","site":"argocd"}
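The built-in function example above shows only the `when` expression. A hedged sketch of how such an expression might sit inside a complete trigger definition (the trigger name `on-sync-long-running` is hypothetical; `app-sync-running` is a template name from the catalog):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  # hypothetical trigger: notify when a sync has been running for 5+ minutes
  trigger.on-sync-long-running: |
    - when: app.status.operationState.phase in ['Running'] and time.Now().Sub(time.Parse(app.status.operationState.startedAt)).Minutes() >= 5
      send: [app-sync-running]
```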
{"questions":"argocd and you can configure when the notification should be sent as users about important changes in the application state Using a flexible mechanism of well as notification content Argo CD Notifications includes the of useful triggers and templates Getting Started So you can just use them instead of reinventing new ones Argo CD Notifications continuously monitors Argo CD applications and provides a flexible way to notify Notifications Overview","answers":"# Notifications Overview\n\nArgo CD Notifications continuously monitors Argo CD applications and provides a flexible way to notify\nusers about important changes in the application state. Using a flexible mechanism of\n[triggers](triggers.md) and [templates](templates.md) you can configure when the notification should be sent as\nwell as notification content. Argo CD Notifications includes the [catalog](catalog.md) of useful triggers and templates.\nSo you can just use them instead of reinventing new ones.\n\n## Getting Started\n\n* Install Triggers and Templates from the catalog\n\n    ```bash\n    kubectl apply -n argocd -f https:\/\/raw.githubusercontent.com\/argoproj\/argo-cd\/stable\/notifications_catalog\/install.yaml\n    ```\n\n* Add Email username and password token to `argocd-notifications-secret` secret\n\n    ```bash\n    EMAIL_USER=<your-username>\n    PASSWORD=<your-password>\n    \n    kubectl apply -n argocd -f - << EOF\n    apiVersion: v1\n    kind: Secret\n    metadata:\n      name: argocd-notifications-secret\n    stringData:\n      email-username: $EMAIL_USER\n      email-password: $PASSWORD\n    type: Opaque\n    EOF\n    ```\n\n* Register Email notification service\n\n    ```bash\n    kubectl patch cm argocd-notifications-cm -n argocd --type merge -p '{\"data\": {\"service.email.gmail\": \"{ username: $email-username, password: $email-password, host: smtp.gmail.com, port: 465, from: $email-username }\" }}'\n    ```\n\n* Subscribe to notifications by adding the 
`notifications.argoproj.io\/subscribe.on-sync-succeeded.slack` annotation to the Argo CD application or project:\n\n    ```bash\n    kubectl patch app <my-app> -n argocd -p '{\"metadata\": {\"annotations\": {\"notifications.argoproj.io\/subscribe.on-sync-succeeded.slack\":\"<my-channel>\"}}}' --type merge\n    ```\n\nTry syncing an application to get notified when the sync is completed.\n\n## Namespace based configuration\n\nA common installation method for Argo CD Notifications is to install it in a dedicated namespace to manage a whole cluster. In this case, the administrator is generally the only\nperson who can configure notifications in that namespace. However, in some cases, end users need to be able to configure notifications\nfor their own Argo CD applications. For example, an end user can configure notifications for their Argo CD application in a namespace they have access to and where their Argo CD application is running.\n\nThis feature is based on applications in any namespace. See the [applications in any namespace](..\/app-any-namespace.md) page for more information.\n\nIn order to enable this feature, the Argo CD administrator must reconfigure the argocd-notifications-controller workloads to add the `--application-namespaces` and `--self-service-notification-enabled` parameters to the container's startup command.\n`--application-namespaces` controls the list of namespaces that Argo CD applications are in. `--self-service-notification-enabled` turns on this feature.\n\nThe startup parameters for both can also be conveniently set up and kept in sync by specifying\n`application.namespaces` and `notificationscontroller.selfservice.enabled` in the argocd-cmd-params-cm ConfigMap instead of changing the manifests for the respective workloads. 
For example:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-cmd-params-cm\ndata:\n  application.namespaces: app-team-one, app-team-two\n  notificationscontroller.selfservice.enabled: \"true\"\n```\n\nTo use this feature, deploy a ConfigMap named `argocd-notifications-cm`, and optionally a Secret named `argocd-notifications-secret`, in the namespace where the Argo CD application lives.\n\nWhen configured this way, the controller sends notifications using both the controller-level configuration (the ConfigMap located in the same namespace as the controller) and\nthe configuration located in the namespace where the Argo CD application is.\n\nExample: an application team wants to receive notifications using PagerDutyV2, while the controller-level configuration only supports Slack.\n\nThe following two resources are deployed in the namespace where the Argo CD application lives.\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.pagerdutyv2: |\n    serviceKeys:\n      my-service: $pagerduty-key-my-service\n...\n```\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: argocd-notifications-secret\ntype: Opaque\ndata:\n  pagerduty-key-my-service: <pd-integration-key>\n```\n\nWhen an Argo CD application has the following subscription, the user receives an application sync failure message from PagerDuty.\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  annotations:\n    notifications.argoproj.io\/subscribe.on-sync-failed.pagerdutyv2: \"<serviceID for Pagerduty>\"\n```\n\n!!! 
note\n    When the same notification service and trigger are defined in controller level configuration and application level configuration,\n    both notifications will be sent according to its own configuration.\n\n[Defining and using secrets within notification templates](templates.md\/#defining-and-using-secrets-within-notification-templates) function is not available when flag `--self-service-notification-enable` is on.","site":"argocd"}
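Because both the controller-level and the application-namespace configuration are applied, a single Application can subscribe to services defined at either level. A sketch (the channel name and service ID are placeholders), assuming a `slack` service is configured at the controller level and `pagerdutyv2` in the application's namespace as described above:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    # service defined in the controller-level argocd-notifications-cm
    notifications.argoproj.io/subscribe.on-sync-failed.slack: my-channel
    # service defined in the application-namespace argocd-notifications-cm
    notifications.argoproj.io/subscribe.on-sync-failed.pagerdutyv2: "<serviceID for Pagerduty>"
```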
{"questions":"argocd hr Golang Time related functions Executes function built in Golang function Returns an instance of time","answers":"### **time**\nTime related functions.\n\n<hr>\n**`time.Now() Time`**\n\nExecutes the built-in Golang [time.Now](https:\/\/golang.org\/pkg\/time\/#Now) function. Returns an instance of\nGolang [Time](https:\/\/golang.org\/pkg\/time\/#Time).\n\n<hr>\n**`time.Parse(val string) Time`**\n\nParses the specified string using the RFC3339 layout. Returns an instance of Golang [Time](https:\/\/golang.org\/pkg\/time\/#Time).\n\n<hr>\nTime related constants.\n\n**Durations**\n\n```\n\ttime.Nanosecond   = 1\n\ttime.Microsecond  = 1000 * Nanosecond\n\ttime.Millisecond  = 1000 * Microsecond\n\ttime.Second       = 1000 * Millisecond\n\ttime.Minute       = 60 * Second\n\ttime.Hour         = 60 * Minute\n```\n\n**Timestamps**\n\nUsed when formatting time instances as strings (e.g. `time.Now().Format(time.RFC3339)`).\n\n```\n\ttime.Layout      = \"01\/02 03:04:05PM '06 -0700\" \/\/ The reference time, in numerical order.\n\ttime.ANSIC       = \"Mon Jan _2 15:04:05 2006\"\n\ttime.UnixDate    = \"Mon Jan _2 15:04:05 MST 2006\"\n\ttime.RubyDate    = \"Mon Jan 02 15:04:05 -0700 2006\"\n\ttime.RFC822      = \"02 Jan 06 15:04 MST\"\n\ttime.RFC822Z     = \"02 Jan 06 15:04 -0700\" \/\/ RFC822 with numeric zone\n\ttime.RFC850      = \"Monday, 02-Jan-06 15:04:05 MST\"\n\ttime.RFC1123     = \"Mon, 02 Jan 2006 15:04:05 MST\"\n\ttime.RFC1123Z    = \"Mon, 02 Jan 2006 15:04:05 -0700\" \/\/ RFC1123 with numeric zone\n\ttime.RFC3339     = \"2006-01-02T15:04:05Z07:00\"\n\ttime.RFC3339Nano = \"2006-01-02T15:04:05.999999999Z07:00\"\n\ttime.Kitchen     = \"3:04PM\"\n\t\/\/ Handy time stamps.\n\ttime.Stamp      = \"Jan _2 15:04:05\"\n\ttime.StampMilli = \"Jan _2 15:04:05.000\"\n\ttime.StampMicro = \"Jan _2 15:04:05.000000\"\n\ttime.StampNano  = \"Jan _2 15:04:05.000000000\"\n```\n\n### **strings**\nString related functions.\n\n<hr>\n**`strings.ReplaceAll() 
string`**\n\nExecutes the built-in Golang [strings.ReplaceAll](https:\/\/pkg.go.dev\/strings#ReplaceAll) function.\n\n<hr>\n**`strings.ToUpper() string`**\n\nExecutes the built-in Golang [strings.ToUpper](https:\/\/pkg.go.dev\/strings#ToUpper) function.\n\n<hr>\n**`strings.ToLower() string`**\n\nExecutes the built-in Golang [strings.ToLower](https:\/\/pkg.go.dev\/strings#ToLower) function.\n\n### **sync**\n\n<hr>\n**`sync.GetInfoItem(app map, name string) string`**\n\nReturns the `info` item value by given name stored in the Argo CD App sync operation.\n\n### **repo**\nFunctions that provide additional information about the Application source repository.\n<hr>\n**`repo.RepoURLToHTTPS(url string) string`**\n\nTransforms the given Git URL into HTTPS format.\n\n<hr>\n**`repo.FullNameByRepoURL(url string) string`**\n\nReturns the repository URL full name `(<owner>\/<repoName>)`. Currently supports only GitHub, GitLab and Bitbucket.\n\n<hr>\n**`repo.QueryEscape(s string) string`**\n\nQueryEscape escapes the string so it can be safely placed inside a URL.\n\nExample:\n```\n\/projects\/\/merge_requests\n```\n\n<hr>\n**`repo.GetCommitMetadata(sha string) CommitMetadata`**\n\nReturns commit metadata. The commit must belong to the application source repository. `CommitMetadata` fields:\n\n* `Message string` - commit message\n* `Author string` - commit author\n* `Date time.Time` - commit creation date\n* `Tags []string` - associated tags\n\n<hr>\n**`repo.GetAppDetails() AppDetail`**\n\nReturns application details. 
`AppDetail` fields:\n\n* `Type string` - AppDetail type\n* `Helm HelmAppSpec` - Helm details\n  * Fields :\n    * `Name string`\n    * `ValueFiles []string`\n    * `Parameters []*v1alpha1.HelmParameter`\n    * `Values string`\n    * `FileParameters []*v1alpha1.HelmFileParameter`\n  * Methods :\n    * `GetParameterValueByName(Name string)` Retrieve value by name in Parameters field\n    * `GetFileParameterPathByName(Name string)` Retrieve path by name in FileParameters field\n* `Kustomize *apiclient.KustomizeAppSpec` - Kustomize details\n* `Directory *apiclient.DirectoryAppSpec` - Directory details","site":"argocd"}
{"questions":"argocd Triggers and Templates Catalog bash Getting Started kubectl apply n argocd f https raw githubusercontent com argoproj argo cd stable notificationscatalog install yaml Triggers NAME DESCRIPTION TEMPLATE on created Application is created Install Triggers and Templates from the catalog","answers":"# Triggers and Templates Catalog\n## Getting Started\n* Install Triggers and Templates from the catalog\n  ```bash\n  kubectl apply -n argocd -f https:\/\/raw.githubusercontent.com\/argoproj\/argo-cd\/stable\/notifications_catalog\/install.yaml\n  ```\n## Triggers\n|          NAME          |                          DESCRIPTION                          |                      TEMPLATE                       |\n|------------------------|---------------------------------------------------------------|-----------------------------------------------------|\n| on-created             | Application is created.                                       | [app-created](#app-created)                         |\n| on-deleted             | Application is deleted.                                       | [app-deleted](#app-deleted)                         |\n| on-deployed            | Application is synced and healthy. Triggered once per commit. 
| [app-deployed](#app-deployed)                       |\n| on-health-degraded     | Application has degraded                                      | [app-health-degraded](#app-health-degraded)         |\n| on-sync-failed         | Application syncing has failed                                | [app-sync-failed](#app-sync-failed)                 |\n| on-sync-running        | Application is being synced                                   | [app-sync-running](#app-sync-running)               |\n| on-sync-status-unknown | Application status is 'Unknown'                               | [app-sync-status-unknown](#app-sync-status-unknown) |\n| on-sync-succeeded      | Application syncing has succeeded                             | [app-sync-succeeded](#app-sync-succeeded)           |\n\n## Templates\n### app-created\n**definition**:\n```yaml\nemail:\n  subject: Application  has been created.\nmessage: Application  has been created.\nteams:\n  title: Application  has been created.\n\n```\n### app-deleted\n**definition**:\n```yaml\nemail:\n  subject: Application  has been deleted.\nmessage: Application  has been deleted.\nteams:\n  title: Application  has been deleted.\n\n```\n### app-deployed\n**definition**:\n```yaml\nemail:\n  subject: New version of an application  is up and running.\nmessage: |\n  :white_check_mark: Application  is now running new version of deployments manifests.\nslack:\n  attachments: |\n    [{\n      \"title\": \"\",\n      \"title_link\":\"\/applications\/\",\n      \"color\": \"#18be52\",\n      \"fields\": [\n      {\n        \"title\": \"Sync Status\",\n        \"value\": \"\",\n        \"short\": true\n      },\n      {\n        \"title\":  \"Repository\"  \"Repositories\" ,\n        \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n        \"short\": true\n      },\n      {\n        \"title\": \"Revision\",\n        \"value\": \"\",\n        \"short\": true\n      }\n      \n      ,\n      {\n        \"title\": \"\",\n        
\"value\": \"\",\n        \"short\": true\n      }\n      \n      ]\n    }]\n  deliveryPolicy: Post\n  groupingKey: \"\"\n  notifyBroadcast: false\nteams:\n  facts: |\n    [{\n      \"name\": \"Sync Status\",\n      \"value\": \"\"\n    },\n    {\n      \"name\":  \"Repository\"  \"Repositories\" ,\n      \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n    },\n    {\n      \"name\": \"Revision\",\n      \"value\": \"\"\n    }\n    \n      ,\n      {\n        \"name\": \"\",\n        \"value\": \"\"\n      }\n    \n    ]\n  potentialAction: |\n    [{\n      \"@type\":\"OpenUri\",\n      \"name\":\"Operation Application\",\n      \"targets\":[{\n        \"os\":\"default\",\n        \"uri\":\"\/applications\/\"\n      }]\n    },\n    {\n      \"@type\":\"OpenUri\",\n      \"name\":\"Open Repository\",\n      \"targets\":[{\n        \"os\":\"default\",\n        \"uri\": \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n      }]\n    }]\n  themeColor: '#000080'\n  title: New version of an application  is up and running.\n\n```\n### app-health-degraded\n**definition**:\n```yaml\nemail:\n  subject: Application  has degraded.\nmessage: |\n  :exclamation: Application  has degraded.\n  Application details: \/applications\/.\nslack:\n  attachments: |\n    [{\n      \"title\": \"\",\n      \"title_link\": \"\/applications\/\",\n      \"color\": \"#f4c030\",\n      \"fields\": [\n      {\n        \"title\": \"Health Status\",\n        \"value\": \"\",\n        \"short\": true\n      },\n      {\n        \"title\":  \"Repository\"  \"Repositories\" ,\n        \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n        \"short\": true\n      }\n      \n      ,\n      {\n        \"title\": \"\",\n        \"value\": \"\",\n        \"short\": true\n      }\n      \n      ]\n    }]\n  deliveryPolicy: Post\n  groupingKey: \"\"\n  notifyBroadcast: false\nteams:\n  facts: |\n    [{\n      \"name\": \"Health Status\",\n      \"value\": \"\"\n    
},\n    {\n      \"name\":  \"Repository\"  \"Repositories\" ,\n      \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n    }\n    \n      ,\n      {\n        \"name\": \"\",\n        \"value\": \"\"\n      }\n    \n    ]\n  potentialAction: |\n    [{\n      \"@type\":\"OpenUri\",\n      \"name\":\"Open Application\",\n      \"targets\":[{\n        \"os\":\"default\",\n        \"uri\":\"\/applications\/\"\n      }]\n    },\n    {\n      \"@type\":\"OpenUri\",\n      \"name\":\"Open Repository\",\n      \"targets\":[{\n        \"os\":\"default\",\n        \"uri\": \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n      }]\n    }]\n  themeColor: '#FF0000'\n  title: Application  has degraded.\n\n```\n### app-sync-failed\n**definition**:\n```yaml\nemail:\n  subject: Failed to sync application .\nmessage: |\n  :exclamation:  The sync operation of application  has failed at  with the following error: \n  Sync operation details are available at: \/applications\/?operation=true .\nslack:\n  attachments: |\n    [{\n      \"title\": \"\",\n      \"title_link\":\"\/applications\/\",\n      \"color\": \"#E96D76\",\n      \"fields\": [\n      {\n        \"title\": \"Sync Status\",\n        \"value\": \"\",\n        \"short\": true\n      },\n      {\n        \"title\":  \"Repository\"  \"Repositories\" ,\n        \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n        \"short\": true\n      }\n      \n      ,\n      {\n        \"title\": \"\",\n        \"value\": \"\",\n        \"short\": true\n      }\n      \n      ]\n    }]\n  deliveryPolicy: Post\n  groupingKey: \"\"\n  notifyBroadcast: false\nteams:\n  facts: |\n    [{\n      \"name\": \"Sync Status\",\n      \"value\": \"\"\n    },\n    {\n      \"name\": \"Failed at\",\n      \"value\": \"\"\n    },\n    {\n      \"name\":  \"Repository\"  \"Repositories\" ,\n      \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n    }\n    \n      ,\n      {\n        \"name\": 
\"\",\n        \"value\": \"\"\n      }\n    \n    ]\n  potentialAction: |\n    [{\n      \"@type\":\"OpenUri\",\n      \"name\":\"Open Operation\",\n      \"targets\":[{\n        \"os\":\"default\",\n        \"uri\":\"\/applications\/?operation=true\"\n      }]\n    },\n    {\n      \"@type\":\"OpenUri\",\n      \"name\":\"Open Repository\",\n      \"targets\":[{\n        \"os\":\"default\",\n        \"uri\": \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n      }]\n    }]\n  themeColor: '#FF0000'\n  title: Failed to sync application .\n\n```\n### app-sync-running\n**definition**:\n```yaml\nemail:\n  subject: Start syncing application .\nmessage: |\n  The sync operation of application  has started at .\n  Sync operation details are available at: \/applications\/?operation=true .\nslack:\n  attachments: |\n    [{\n      \"title\": \"\",\n      \"title_link\":\"\/applications\/\",\n      \"color\": \"#0DADEA\",\n      \"fields\": [\n      {\n        \"title\": \"Sync Status\",\n        \"value\": \"\",\n        \"short\": true\n      },\n      {\n        \"title\":  \"Repository\"  \"Repositories\" ,\n        \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n        \"short\": true\n      }\n      \n      ,\n      {\n        \"title\": \"\",\n        \"value\": \"\",\n        \"short\": true\n      }\n      \n      ]\n    }]\n  deliveryPolicy: Post\n  groupingKey: \"\"\n  notifyBroadcast: false\nteams:\n  facts: |\n    [{\n      \"name\": \"Sync Status\",\n      \"value\": \"\"\n    },\n    {\n      \"name\": \"Started at\",\n      \"value\": \"\"\n    },\n    {\n      \"name\":  \"Repository\"  \"Repositories\" ,\n      \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n    }\n    \n      ,\n      {\n        \"name\": \"\",\n        \"value\": \"\"\n      }\n    \n    ]\n  potentialAction: |\n    [{\n      \"@type\":\"OpenUri\",\n      \"name\":\"Open Operation\",\n      \"targets\":[{\n        \"os\":\"default\",\n        
\"uri\":\"\/applications\/?operation=true\"\n      }]\n    },\n    {\n      \"@type\":\"OpenUri\",\n      \"name\":\"Open Repository\",\n      \"targets\":[{\n        \"os\":\"default\",\n        \"uri\": \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n      }]\n    }]\n  title: Start syncing application .\n\n```\n### app-sync-status-unknown\n**definition**:\n```yaml\nemail:\n  subject: Application  sync status is 'Unknown'\nmessage: |\n  :exclamation: Application  sync is 'Unknown'.\n  Application details: \/applications\/.\n  \n  \n      * \n  \n  \nslack:\n  attachments: |\n    [{\n      \"title\": \"\",\n      \"title_link\":\"\/applications\/\",\n      \"color\": \"#E96D76\",\n      \"fields\": [\n      {\n        \"title\": \"Sync Status\",\n        \"value\": \"\",\n        \"short\": true\n      },\n      {\n        \"title\":  \"Repository\"  \"Repositories\" ,\n        \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n        \"short\": true\n      }\n      \n      ,\n      {\n        \"title\": \"\",\n        \"value\": \"\",\n        \"short\": true\n      }\n      \n      ]\n    }]\n  deliveryPolicy: Post\n  groupingKey: \"\"\n  notifyBroadcast: false\nteams:\n  facts: |\n    [{\n      \"name\": \"Sync Status\",\n      \"value\": \"\"\n    },\n    {\n      \"name\":  \"Repository\"  \"Repositories\" ,\n      \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n    }\n    \n      ,\n      {\n        \"name\": \"\",\n        \"value\": \"\"\n      }\n    \n    ]\n  potentialAction: |\n    [{\n      \"@type\":\"OpenUri\",\n      \"name\":\"Open Application\",\n      \"targets\":[{\n        \"os\":\"default\",\n        \"uri\":\"\/applications\/\"\n      }]\n    },\n    {\n      \"@type\":\"OpenUri\",\n      \"name\":\"Open Repository\",\n      \"targets\":[{\n        \"os\":\"default\",\n        \"uri\": \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n      }]\n    }]\n  title: Application  sync status is 
'Unknown'\n\n```\n### app-sync-succeeded\n**definition**:\n```yaml\nemail:\n  subject: Application  has been successfully synced.\nmessage: |\n  :white_check_mark: Application  has been successfully synced at .\n  Sync operation details are available at: \/applications\/?operation=true .\nslack:\n  attachments: |\n    [{\n      \"title\": \"\",\n      \"title_link\":\"\/applications\/\",\n      \"color\": \"#18be52\",\n      \"fields\": [\n      {\n        \"title\": \"Sync Status\",\n        \"value\": \"\",\n        \"short\": true\n      },\n      {\n        \"title\":  \"Repository\"  \"Repositories\" ,\n        \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n        \"short\": true\n      }\n      \n      ,\n      {\n        \"title\": \"\",\n        \"value\": \"\",\n        \"short\": true\n      }\n      \n      ]\n    }]\n  deliveryPolicy: Post\n  groupingKey: \"\"\n  notifyBroadcast: false\nteams:\n  facts: |\n    [{\n      \"name\": \"Sync Status\",\n      \"value\": \"\"\n    },\n    {\n      \"name\": \"Synced at\",\n      \"value\": \"\"\n    },\n    {\n      \"name\":  \"Repository\"  \"Repositories\" ,\n      \"value\":  \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n    }\n    \n      ,\n      {\n        \"name\": \"\",\n        \"value\": \"\"\n      }\n    \n    ]\n  potentialAction: |\n    [{\n      \"@type\":\"OpenUri\",\n      \"name\":\"Operation Details\",\n      \"targets\":[{\n        \"os\":\"default\",\n        \"uri\":\"\/applications\/?operation=true\"\n      }]\n    },\n    {\n      \"@type\":\"OpenUri\",\n      \"name\":\"Open Repository\",\n      \"targets\":[{\n        \"os\":\"default\",\n        \"uri\": \":arrow_heading_up: \"  \"\\n:arrow_heading_up: \" ,\n      }]\n    }]\n  themeColor: '#000080'\n  title: Application  has been successfully synced\n\n```","site":"argocd","answers_cleaned":"  Triggers and Templates Catalog    Getting Started   Install Triggers and Templates from the catalog      bash  
 kubectl apply  n argocd  f https   raw githubusercontent com argoproj argo cd stable notifications catalog install yaml          Triggers            NAME                                     DESCRIPTION                                                 TEMPLATE                                                                                                                                                                            on created               Application is created                                           app created   app created                              on deleted               Application is deleted                                           app deleted   app deleted                              on deployed              Application is synced and healthy  Triggered once per commit     app deployed   app deployed                            on health degraded       Application has degraded                                         app health degraded   app health degraded              on sync failed           Application syncing has failed                                   app sync failed   app sync failed                      on sync running          Application is being synced                                      app sync running   app sync running                    on sync status unknown   Application status is  Unknown                                   app sync status unknown   app sync status unknown      on sync succeeded        Application syncing has succeeded                                app sync succeeded   app sync succeeded                  Templates     app created   definition       yaml email    subject  Application  has been created  message  Application  has been created  teams    title  Application  has been created           app deleted   definition       yaml email    subject  Application  has been deleted  message  Application  has been deleted  teams    title  Application  has been deleted           app deployed   definition      
 yaml email    subject  New version of an application  is up and running  message       white check mark  Application  is now running new version of deployments manifests  slack    attachments                  title              title link    applications           color     18be52          fields                      title    Sync Status            value                short   true                           title     Repository    Repositories             value      arrow heading up       n arrow heading up               short   true                           title    Revision            value                short   true                                         title                value                short   true                                 deliveryPolicy  Post   groupingKey       notifyBroadcast  false teams    facts                  name    Sync Status          value                          name     Repository    Repositories           value      arrow heading up       n arrow heading up                          name    Revision          value                                          name                value                           potentialAction                   type   OpenUri          name   Operation Application          targets              os   default            uri    applications                                 type   OpenUri          name   Open Repository          targets              os   default            uri     arrow heading up       n arrow heading up                        themeColor    000080    title  New version of an application  is up and running           app health degraded   definition       yaml email    subject  Application  has degraded  message       exclamation  Application  has degraded    Application details   applications   slack    attachments                  title              title link     applications           color     f4c030          fields                      title    Health Status            value           
     short   true                           title     Repository    Repositories             value      arrow heading up       n arrow heading up               short   true                                         title                value                short   true                                 deliveryPolicy  Post   groupingKey       notifyBroadcast  false teams    facts                  name    Health Status          value                          name     Repository    Repositories           value      arrow heading up       n arrow heading up                                          name                value                           potentialAction                   type   OpenUri          name   Open Application          targets              os   default            uri    applications                                 type   OpenUri          name   Open Repository          targets              os   default            uri     arrow heading up       n arrow heading up                        themeColor    FF0000    title  Application  has degraded           app sync failed   definition       yaml email    subject  Failed to sync application   message       exclamation   The sync operation of application  has failed at  with the following error     Sync operation details are available at   applications  operation true   slack    attachments                  title              title link    applications           color     E96D76          fields                      title    Sync Status            value                short   true                           title     Repository    Repositories             value      arrow heading up       n arrow heading up               short   true                                         title                value                short   true                                 deliveryPolicy  Post   groupingKey       notifyBroadcast  false teams    facts                  name    Sync Status          value                          
name    Failed at          value                          name     Repository    Repositories           value      arrow heading up       n arrow heading up                                          name                value                           potentialAction                   type   OpenUri          name   Open Operation          targets              os   default            uri    applications  operation true                                type   OpenUri          name   Open Repository          targets              os   default            uri     arrow heading up       n arrow heading up                        themeColor    FF0000    title  Failed to sync application            app sync running   definition       yaml email    subject  Start syncing application   message      The sync operation of application  has started at     Sync operation details are available at   applications  operation true   slack    attachments                  title              title link    applications           color     0DADEA          fields                      title    Sync Status            value                short   true                           title     Repository    Repositories             value      arrow heading up       n arrow heading up               short   true                                         title                value                short   true                                 deliveryPolicy  Post   groupingKey       notifyBroadcast  false teams    facts                  name    Sync Status          value                          name    Started at          value                          name     Repository    Repositories           value      arrow heading up       n arrow heading up                                          name                value                           potentialAction                   type   OpenUri          name   Open Operation          targets              os   default            uri    applications  operation true       
                         type   OpenUri          name   Open Repository          targets              os   default            uri     arrow heading up       n arrow heading up                        title  Start syncing application            app sync status unknown   definition       yaml email    subject  Application  sync status is  Unknown  message       exclamation  Application  sync is  Unknown     Application details   applications                        slack    attachments                  title              title link    applications           color     E96D76          fields                      title    Sync Status            value                short   true                           title     Repository    Repositories             value      arrow heading up       n arrow heading up               short   true                                         title                value                short   true                                 deliveryPolicy  Post   groupingKey       notifyBroadcast  false teams    facts                  name    Sync Status          value                          name     Repository    Repositories           value      arrow heading up       n arrow heading up                                          name                value                           potentialAction                   type   OpenUri          name   Open Application          targets              os   default            uri    applications                                 type   OpenUri          name   Open Repository          targets              os   default            uri     arrow heading up       n arrow heading up                        title  Application  sync status is  Unknown           app sync succeeded   definition       yaml email    subject  Application  has been successfully synced  message       white check mark  Application  has been successfully synced at     Sync operation details are available at   applications  operation true   slack    
attachments                  title              title link    applications           color     18be52          fields                      title    Sync Status            value                short   true                           title     Repository    Repositories             value      arrow heading up       n arrow heading up               short   true                                         title                value                short   true                                 deliveryPolicy  Post   groupingKey       notifyBroadcast  false teams    facts                  name    Sync Status          value                          name    Synced at          value                          name     Repository    Repositories           value      arrow heading up       n arrow heading up                                          name                value                           potentialAction                   type   OpenUri          name   Operation Details          targets              os   default            uri    applications  operation true                                type   OpenUri          name   Open Repository          targets              os   default            uri     arrow heading up       n arrow heading up                        themeColor    000080    title  Application  has been successfully synced     "}
{"questions":"argocd apiVersion v1 incorrect Failed to parse new settings YAML syntax is incorrect error converting YAML to JSON yaml","answers":"## Failed to parse new settings\n\n### error converting YAML to JSON\n\nYAML syntax is incorrect.\n\n**incorrect:**\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.slack: |\n    token: $slack-token\n    icon: :rocket:\n```\n\n**correct:**\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.slack: |\n    token: $slack-token\n    icon: \":rocket:\" # <- diff here\n```\n\n### service type 'xxxx' is not supported\n\nCheck the `argocd-notifications` controller version. For example, the Teams integration support started in `v1.1.0`.\n\n## Failed to notify recipient\n\n### notification service 'xxxx' is not supported\n\nYou have not defined `xxxx` in `argocd-notifications-cm` or parsing failed.\n\n### GitHub.repoURL (\\u003cno value\\u003e) does not have a \/ using the configuration\n\nLikely caused by an Application with [multiple sources](https:\/\/argo-cd.readthedocs.io\/en\/stable\/user-guide\/multiple_sources\/):\n\n```yaml\nspec:\n  sources:  # <- multiple sources\n  - repoURL: https:\/\/github.com\/exampleOrg\/first.git\n    path: sources\/example\n  - repoURL: https:\/\/github.com\/exampleOrg\/second.git\n    targetRevision: \"\"\n```\n\nThe standard notification template only supports a single source (``). Use an index to specify the source in the array:\n\n```yaml\ntemplate.example: |\n  github:\n    repoURLPath: \"\"\n```\n\n### Error message `POST https:\/\/api.github.com\/repos\/xxxx\/yyyy\/statuses\/: 404 Not Found`\n\nThis case is similar to the previous one, you have multiple sources in the Application manifest. 
\nDefault `revisionPath` template `` is for an Application with single source.\n\nMulti-source applications report application statuses in an array:\n\n```yaml\nstatus:\n  operationState:\n    syncResult:\n      revisions:\n        - 38cfa22edf9148caabfecb288bfb47dc4352dfc6\n        - 38cfa22edf9148caabfecb288bfb47dc4352dfc6\nQuick fix for this is to use `index` function to get the first revision:\n```yaml\ntemplate.example: |\n  github:\n    revisionPath: \"\"\n```\n\n## config referenced xxx, but key does not exist in secret\n\n- If you are using a custom secret, check that the secret is in the same namespace\n- You have added the label: `app.kubernetes.io\/part-of: argocd` to the secret\n- You have tried restarting `argocd-notifications` controller\n\n### Example:\nSecret:\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: argocd-slackbot\n  namespace: <the namespace where argocd is installed>\n  labels:\n    app.kubernetes.io\/part-of: argocd\ntype: Opaque\ndata:\n  slack-token: <base64encryptedtoken>\n```\nConfigMap\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.slack: |\n    token: $argocd-slackbot:slack-token\n```","site":"argocd","answers_cleaned":"   Failed to parse new settings      error converting YAML to JSON  YAML syntax is incorrect     incorrect        yaml apiVersion  v1 kind  ConfigMap metadata    name  argocd notifications cm data    service slack        token   slack token     icon   rocket         correct        yaml apiVersion  v1 kind  ConfigMap metadata    name  argocd notifications cm data    service slack        token   slack token     icon    rocket        diff here          service type  xxxx  is not supported  Check the  argocd notifications  controller version  For example  the Teams integration support started in  v1 1 0       Failed to notify recipient      notification service  xxxx  is not supported  You have not defined  xxxx  in  argocd notifications cm  or parsing 
failed       GitHub repoURL   u003cno value u003e  does not have a   using the configuration  Likely caused by an Application with  multiple sources  https   argo cd readthedocs io en stable user guide multiple sources        yaml spec    sources        multiple sources     repoURL  https   github com exampleOrg first git     path  sources example     repoURL  https   github com exampleOrg second git     targetRevision          The standard notification template only supports a single source       Use an index to specify the source in the array      yaml template example      github      repoURLPath              Error message  POST https   api github com repos xxxx yyyy statuses   404 Not Found   This case is similar to the previous one  you have multiple sources in the Application manifest   Default  revisionPath  template    is for an Application with single source   Multi source applications report application statuses in an array      yaml status    operationState      syncResult        revisions            38cfa22edf9148caabfecb288bfb47dc4352dfc6           38cfa22edf9148caabfecb288bfb47dc4352dfc6 Quick fix for this is to use  index  function to get the first revision     yaml template example      github      revisionPath             config referenced xxx  but key does not exist in secret    If you are using a custom secret  check that the secret is in the same namespace   You have added the label   app kubernetes io part of  argocd  to the secret   You have tried restarting  argocd notifications  controller      Example  Secret     yaml apiVersion  v1 kind  Secret metadata    name  argocd slackbot   namespace   the namespace where argocd is installed    labels      app kubernetes io part of  argocd type  Opaque data    slack token   base64encryptedtoken      ConfigMap    yaml apiVersion  v1 kind  ConfigMap metadata    name  argocd notifications cm data    service slack        token   argocd slackbot slack token    "}
{"questions":"argocd Parameters Option Required Type Description Example The Slack notification service configuration includes following settings If you want to send message using incoming webhook you can use Slack","answers":"# Slack\n\nIf you want to send message using incoming webhook, you can use [webhook](.\/webhook.md#send-slack).\n\n## Parameters\n\nThe Slack notification service configuration includes following settings:\n\n| **Option**           | **Required** | **Type**       | **Description** | **Example** |\n| -------------------- | ------------ | -------------- | --------------- | ----------- |\n| `apiURL`             | False        | `string`       | The server URL. | `https:\/\/example.com\/api` |\n| `channels`           | False        | `list[string]` |                 | `[\"my-channel-1\", \"my-channel-2\"]` |\n| `icon`               | False        | `string`       | The app icon.   | `:robot_face:` or `https:\/\/example.com\/image.png` |\n| `insecureSkipVerify` | False        | `bool`         |                 | `true` |\n| `signingSecret`       | False        | `string`       |                 | `8f742231b10e8888abcd99yyyzzz85a5` |\n| `token`              | **True**     | `string`       | The app's OAuth access token. | `xoxb-1234567890-1234567890123-5n38u5ed63fgzqlvuyxvxcx6` |\n| `username`           | False        | `string`       | The app username. | `argocd` |\n| `disableUnfurl`      | False        | `bool`         | Disable slack unfurling links in messages | `true` |\n\n## Configuration\n\n1. Create Slack Application using https:\/\/api.slack.com\/apps?new_app=1\n![1](https:\/\/user-images.githubusercontent.com\/426437\/73604308-4cb0c500-4543-11ea-9092-6ca6bae21cbb.png)\n1. Once application is created navigate to `OAuth & Permissions`\n![2](https:\/\/user-images.githubusercontent.com\/426437\/73604309-4d495b80-4543-11ea-9908-4dea403d3399.png)\n1. Go to `Scopes` > `Bot Token Scopes` > `Add an OAuth Scope`. Add `chat:write` scope. 
To use the optional username and icon overrides in the Slack notification service, also add the `chat:write.customize` scope.\n![3](https:\/\/user-images.githubusercontent.com\/426437\/73604310-4d495b80-4543-11ea-8576-09cd91aea0e5.png)\n1. `OAuth & Permissions` > `OAuth Tokens for Your Workspace` > `Install to Workspace`\n![4](https:\/\/user-images.githubusercontent.com\/426437\/73604311-4d495b80-4543-11ea-9155-9d216b20ec86.png)\n1. Once the installation is completed, copy the OAuth token.\n![5](https:\/\/user-images.githubusercontent.com\/426437\/73604312-4d495b80-4543-11ea-832b-a9d9d5e4bc29.png)\n\n1. Create a public or private channel, for this example `my_channel`\n1. Invite your Slack bot to this channel, **otherwise the bot won't be able to deliver notifications to this channel**\n1. Store the OAuth access token in the `argocd-notifications-secret` secret\n\n    ```yaml\n      apiVersion: v1\n      kind: Secret\n      metadata:\n          name: <secret-name>\n      stringData:\n          slack-token: <Oauth-access-token>\n    ```\n\n1. Define service type slack in the data section of the `argocd-notifications-cm` configmap:\n\n    ```yaml\n      apiVersion: v1\n      kind: ConfigMap\n      metadata:\n        name: argocd-notifications-cm\n      data:\n        service.slack: |\n          token: $slack-token\n    ```\n\n1. Add an annotation to the application yaml file to enable notifications for a specific argocd app. The following example uses the [on-sync-succeeded trigger](..\/catalog.md#triggers):\n\n    ```yaml\n      apiVersion: argoproj.io\/v1alpha1\n      kind: Application\n      metadata:\n        annotations:\n          notifications.argoproj.io\/subscribe.on-sync-succeeded.slack: my_channel\n    ```\n\n1. 
Annotation with more than one [trigger](..\/catalog.md#triggers), with multiple destinations and recipients\n\n    ```yaml\n      apiVersion: argoproj.io\/v1alpha1\n      kind: Application\n      metadata:\n        annotations:\n          notifications.argoproj.io\/subscriptions: |\n            - trigger: [on-scaling-replica-set, on-rollout-updated, on-rollout-step-completed]\n              destinations:\n                - service: slack\n                  recipients: [my-channel-1, my-channel-2]\n                - service: email\n                  recipients: [recipient-1, recipient-2, recipient-3]\n            - trigger: [on-rollout-aborted, on-analysis-run-failed, on-analysis-run-error]\n              destinations:\n                - service: slack\n                  recipients: [my-channel-21, my-channel-22]\n    ```\n\n## Templates\n\n[Notification templates](..\/templates.md) can be customized to leverage the Slack message blocks and attachments\n[feature](https:\/\/api.slack.com\/messaging\/composing\/layouts).\n\n![](https:\/\/user-images.githubusercontent.com\/426437\/72776856-6dcef880-3bc8-11ea-8e3b-c72df16ee8e6.png)\n\nThe message blocks and attachments can be specified in the `blocks` and `attachments` string fields under the `slack` field:\n\n```yaml\ntemplate.app-sync-status: |\n  message: |\n    Application  sync is .\n    Application details: \/applications\/.\n  slack:\n    attachments: |\n      [{\n        \"title\": \"\",\n        \"title_link\": \"\/applications\/\",\n        \"color\": \"#18be52\",\n        \"fields\": [{\n          \"title\": \"Sync Status\",\n          \"value\": \"\",\n          \"short\": true\n        }, {\n          \"title\": \"Repository\",\n          \"value\": \"\",\n          \"short\": true\n        }]\n      }]\n```\n\nIf you want to specify an icon and username for each message, you can specify values for `username` and `icon` in the `slack` field.\nFor the icon you can specify either an emoji or an image URL, just like in the service 
definition.\nIf you set `username` and `icon` in a template, the template values take precedence over those in the service definition.\n\n```yaml\ntemplate.app-sync-status: |\n  message: |\n    Application  sync is .\n    Application details: \/applications\/.\n  slack:\n    username: \"testbot\"\n    icon: https:\/\/example.com\/image.png\n    attachments: |\n      [{\n        \"title\": \"\",\n        \"title_link\": \"\/applications\/\",\n        \"color\": \"#18be52\",\n        \"fields\": [{\n          \"title\": \"Sync Status\",\n          \"value\": \"\",\n          \"short\": true\n        }, {\n          \"title\": \"Repository\",\n          \"value\": \"\",\n          \"short\": true\n        }]\n      }]\n```\n\nMessages can be aggregated into Slack threads by a grouping key, specified in the `groupingKey` string field under the `slack` field.\n`groupingKey` is shared across templates and works independently for each Slack channel.\nWhen multiple applications are updated at the same time or frequently, grouping messages by git commit hash, application name, etc. makes the Slack channel easier to read.\nFurthermore, messages can be broadcast to the channel for a specific template via the `notifyBroadcast` field.\n\n```yaml\ntemplate.app-sync-status: |\n  message: |\n    Application  sync is .\n    Application details: \/applications\/.\n  slack:\n    attachments: |\n      [{\n        \"title\": \"\",\n        \"title_link\": \"\/applications\/\",\n        \"color\": \"#18be52\",\n        \"fields\": [{\n          \"title\": \"Sync Status\",\n          \"value\": \"\",\n          \"short\": true\n        }, {\n          \"title\": \"Repository\",\n          \"value\": \"\",\n          \"short\": true\n        }]\n      }]\n    # Aggregate the messages to the thread by git commit hash\n    groupingKey: \"\"\n    notifyBroadcast: false\ntemplate.app-sync-failed: |\n  message: |\n    Application  sync is .\n    
Application details: \/applications\/.\n  slack:\n    attachments: |\n      [{\n        \"title\": \"\",\n        \"title_link\": \"\/applications\/\",\n        \"color\": \"#ff0000\",\n        \"fields\": [{\n          \"title\": \"Sync Status\",\n          \"value\": \"\",\n          \"short\": true\n        }, {\n          \"title\": \"Repository\",\n          \"value\": \"\",\n          \"short\": true\n        }]\n      }]\n    # Aggregate the messages to the thread by git commit hash\n    groupingKey: \"\"\n    notifyBroadcast: true\n```\n\nThe message is sent according to the `deliveryPolicy` string field under the `slack` field. The available modes are `Post` (default), `PostAndUpdate`, and `Update`. The `PostAndUpdate` and `Update` settings require `groupingKey` to be set.","site":"argocd"}
{"questions":"argocd Parameters The webhook notification service allows sending a generic HTTP request using the templatized request body and URL Webhook the url to send the webhook to Using Webhook you might trigger a Jenkins job update GitHub commit status The Webhook notification service configuration includes following settings","answers":"# Webhook\n\nThe webhook notification service allows sending a generic HTTP request using a templatized request body and URL.\nUsing a webhook you might, for example, trigger a Jenkins job or update a GitHub commit status.\n\n## Parameters\n\nThe Webhook notification service configuration includes the following settings:\n\n- `url` - the URL to send the webhook to\n- `headers` - optional, the headers to pass along with the webhook\n- `basicAuth` - optional, the basic authentication to pass along with the webhook\n- `insecureSkipVerify` - optional bool, true or false\n- `retryWaitMin` - optional, the minimum wait time between retries. Default value: 1s.\n- `retryWaitMax` - optional, the maximum wait time between retries. Default value: 5s.\n- `retryMax` - optional, the maximum number of retries. Default value: 3.\n\n## Retry Behavior\n\nThe webhook service will automatically retry the request if it fails due to network errors or if the server returns a 5xx status code. The number of retries and the wait time between retries can be configured using the `retryMax`, `retryWaitMin`, and `retryWaitMax` parameters.\n\nThe wait time between retries is between `retryWaitMin` and `retryWaitMax`. 
If all retries fail, the `Send` method will return an error.\n\n## Configuration\n\nUse the following steps to configure a webhook:\n\n1. Register the webhook in the `argocd-notifications-cm` ConfigMap:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.webhook.<webhook-name>: |\n    url: https:\/\/<hostname>\/<optional-path>\n    headers: #optional headers\n    - name: <header-name>\n      value: <header-value>\n    basicAuth: #optional username password\n      username: <username>\n      password: <api-key>\n    insecureSkipVerify: true #optional bool\n```\n\n2. Define a template that customizes the webhook request method, path, and body:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  template.github-commit-status: |\n    webhook:\n      <webhook-name>:\n        method: POST # one of: GET, POST, PUT, PATCH. Default value: GET\n        path: <optional-path-template>\n        body: |\n          <optional-body-template>\n  trigger.<trigger-name>: |\n    - when: app.status.operationState.phase in ['Succeeded']\n      send: [github-commit-status]\n```\n\n3. Create a subscription for the webhook integration:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  annotations:\n    notifications.argoproj.io\/subscribe.<trigger-name>.<webhook-name>: \"\"\n```\n\n## Examples\n\n### Set GitHub commit status\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.webhook.github: |\n    url: https:\/\/api.github.com\n    headers: #optional headers\n    - name: Authorization\n      value: token $github-token\n```\n\nDefine a template that customizes the webhook request method, path, and body:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.webhook.github: |\n    url: https:\/\/api.github.com\n    headers: #optional headers\n    - name: Authorization\n      value: token 
$github-token\n\n  template.github-commit-status: |\n    webhook:\n      github:\n        method: POST\n        path: \/repos\/\/statuses\/\n        body: |\n          {\n             \"state\": \"pending\"\n             \"state\": \"success\"\n             \"state\": \"error\"\n             \"state\": \"error\",\n            \"description\": \"ArgoCD\",\n            \"target_url\": \"\/applications\/\",\n            \"context\": \"continuous-delivery\/\"\n          }\n```\n\n### Start Jenkins Job\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.webhook.jenkins: |\n    url: http:\/\/<jenkins-host>\/job\/<job-name>\/build?token=<job-secret>\n    basicAuth:\n      username: <username>\n      password: <api-key>\n```\n\n### Send form-data\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.webhook.form: |\n    url: https:\/\/form.example.com\n    headers:\n    - name: Content-Type\n      value: application\/x-www-form-urlencoded\n\n  template.form-data: |\n    webhook:\n      form:\n        method: POST\n        body: key1=value1&key2=value2\n```\n\n### Send Slack\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.webhook.slack_webhook: |\n    url: https:\/\/hooks.slack.com\/services\/xxxxx\n    headers:\n    - name: Content-Type\n      value: application\/json\n\n  template.send-slack: |\n    webhook:\n      slack_webhook:\n        method: POST\n        body: |\n          {\n            \"attachments\": [{\n              \"title\": \"\",\n              \"title_link\": \"\/applications\/\",\n              \"color\": \"#18be52\",\n              \"fields\": [{\n                \"title\": \"Sync Status\",\n                \"value\": \"\",\n                \"short\": true\n              }, {\n                \"title\": \"Repository\",\n                \"value\": \"\",\n                
\"short\": true\n              }]\n            }]\n          }\n```","site":"argocd"}
{"questions":"argocd Parameters AWS SQS name of the queue you are intending to send messages to Can be overridden with target destination annotation optional aws access key must be either referenced from a secret via variable or via env variable AWSACCESSKEYID region of the sqs queue can be provided via env variable AWSDEFAULTREGION optional aws access secret must be either referenced from a secret via variable or via env variable AWSSECRETACCESSKEY This notification service is capable of sending simple messages to AWS SQS queue","answers":"# AWS SQS\n\n## Parameters\n\nThis notification service is capable of sending simple messages to an AWS SQS queue.\n\n* `queue` - name of the queue you are intending to send messages to. Can be overridden with the target destination annotation.\n* `region` - region of the SQS queue; can be provided via env variable AWS_DEFAULT_REGION\n* `key` - optional, AWS access key; must be either referenced from a secret via variable or provided via env variable AWS_ACCESS_KEY_ID\n* `secret` - optional, AWS access secret; must be either referenced from a secret via variable or provided via env variable AWS_SECRET_ACCESS_KEY\n* `account` - optional, external accountId of the queue\n* `endpointUrl` - optional, useful for development with localstack\n\n## Example\n\n### Using Secret for credential retrieval:\n\nResource Annotation:\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  annotations:\n    notifications.argoproj.io\/subscribe.on-deployment-ready.awssqs: \"overwrite-myqueue\"\n```\n\n* ConfigMap\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.awssqs: |\n    region: \"us-east-2\"\n    queue: \"myqueue\"\n    account: \"1234567\"\n    key: \"$awsaccess_key\"\n    secret: \"$awsaccess_secret\"\n\n  template.deployment-ready: |\n    message: |\n      Deployment  is ready!\n\n  trigger.on-deployment-ready: |\n    - when: any(obj.status.conditions, {.type == 'Available' && 
.status == 'True'})\n      send: [deployment-ready]\n    - oncePer: obj.metadata.annotations[\"generation\"]\n\n```\n* Secret\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: <secret-name>\nstringData:\n  awsaccess_key: test\n  awsaccess_secret: test\n```\n\n\n### Minimal configuration using AWS Env variables\n\nEnsure the following environment variables are injected via OIDC or another method, and that SQS is local to the account.\nYou may skip usage of a secret for sensitive data and omit other parameters. (Setting parameters via ConfigMap takes precedence.)\n\nVariables:\n\n```bash\nexport AWS_ACCESS_KEY_ID=\"test\"\nexport AWS_SECRET_ACCESS_KEY=\"test\"\nexport AWS_DEFAULT_REGION=\"us-east-1\"\n```\n\nResource Annotation:\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  annotations:\n    notifications.argoproj.io\/subscribe.on-deployment-ready.awssqs: \"\"\n```\n\n* ConfigMap\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.awssqs: |\n    queue: \"myqueue\"\n\n  template.deployment-ready: |\n    message: |\n      Deployment  is ready!\n\n  trigger.on-deployment-ready: |\n    - when: any(obj.status.conditions, {.type == 'Available' && .status == 'True'})\n      send: [deployment-ready]\n    - oncePer: obj.metadata.annotations[\"generation\"]\n\n```\n\n## FIFO SQS Queues\n\nFIFO queues require a [MessageGroupId](https:\/\/docs.aws.amazon.com\/AWSSimpleQueueService\/latest\/APIReference\/API_SendMessage.html#SQS-SendMessage-request-MessageGroupId) to be sent along with every message; messages with a matching MessageGroupId are processed one by one, in order.\n\nTo send to a FIFO SQS Queue you must include a `messageGroupId` in the template, such as in the example below:\n\n```yaml\ntemplate.deployment-ready: |\n  message: |\n    Deployment  is ready!\n  messageGroupId: -deployment\n```","site":"argocd"}
{"questions":"argocd Parameters optional default is api v2 alerts optional default is false when scheme is https whether to skip the verification of ca Alertmanager the alertmanager service address array type The notification service is used to push events to and the following settings need to be specified optional default is http e g http or https","answers":"# Alertmanager\n\n## Parameters\n\nThe notification service is used to push events to [Alertmanager](https:\/\/github.com\/prometheus\/alertmanager), and the following settings need to be specified:\n\n* `targets` - the alertmanager service address, array type\n* `scheme` - optional, default is \"http\", e.g. http or https\n* `apiPath` - optional, default is \"\/api\/v2\/alerts\"\n* `insecureSkipVerify` - optional, default is \"false\", when scheme is https whether to skip the verification of ca\n* `basicAuth` - optional, server auth\n* `bearerToken` - optional, server auth\n* `timeout` - optional, the timeout in seconds used when sending alerts, default is \"3 seconds\"\n\n`basicAuth` or `bearerToken` is used for authentication, you can choose one. 
If the two are set at the same time, `basicAuth` takes precedence over `bearerToken`.\n\n## Example\n\n### Prometheus Alertmanager config\n\n```yaml\nglobal:\n  resolve_timeout: 5m\n\nroute:\n  group_by: ['alertname']\n  group_wait: 10s\n  group_interval: 10s\n  repeat_interval: 1h\n  receiver: 'default'\nreceivers:\n- name: 'default'\n  webhook_configs:\n  - send_resolved: false\n    url: 'http:\/\/10.5.39.39:10080\/api\/alerts\/webhook'\n```\n\nYou should turn off \"send_resolved\" or you will receive unnecessary recovery notifications after \"resolve_timeout\".\n\n### Send one alertmanager without auth\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.alertmanager: |\n    targets:\n    - 10.5.39.39:9093\n```\n\n### Send alertmanager cluster with custom api path\n\nIf your alertmanager has changed the default api, you can customize \"apiPath\".\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.alertmanager: |\n    targets:\n    - 10.5.39.39:443\n    scheme: https\n    apiPath: \/api\/events\n    insecureSkipVerify: true\n```\n\n### Send high availability alertmanager with auth\n\nStore the auth token in the `argocd-notifications-secret` Secret and configure it in the `argocd-notifications-cm` ConfigMap.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: <secret-name>\nstringData:\n  alertmanager-username: <username>\n  alertmanager-password: <password>\n  alertmanager-bearer-token: <token>\n```\n\n- with basicAuth\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.alertmanager: |\n    targets:\n    - 10.5.39.39:19093\n    - 10.5.39.39:29093\n    - 10.5.39.39:39093\n    scheme: https\n    apiPath: \/api\/v2\/alerts\n    insecureSkipVerify: true\n    basicAuth:\n      username: $alertmanager-username\n      password: $alertmanager-password   \n```\n\n- with bearerToken\n\n```yaml\napiVersion: v1\nkind: 
ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.alertmanager: |\n    targets:\n    - 10.5.39.39:19093\n    - 10.5.39.39:29093\n    - 10.5.39.39:39093\n    scheme: https\n    apiPath: \/api\/v2\/alerts\n    insecureSkipVerify: true\n    bearerToken: $alertmanager-bearer-token\n```\n\n## Templates\n\n* `labels` - at least one label pair required, implement different notification strategies according to alertmanager routing\n* `annotations` - optional, specifies a set of information labels, which can be used to store longer additional information, but only for display\n* `generatorURL` - optional, default is '', backlink used to identify the entity that caused this alert in the client\n\nThe `labels`, `annotations` and `generatorURL` values can be templated.\n\n```yaml\ncontext: |\n  argocdUrl: https:\/\/example.com\/argocd\n\ntemplate.app-deployed: |\n  message: Application  has been healthy.\n  alertmanager:\n    labels:\n      fault_priority: \"P5\"\n      event_bucket: \"deploy\"\n      event_status: \"succeed\"\n      recipient: \"\"\n    annotations:\n      application: '<a href=\"\/applications\/\"><\/a>'\n      author: \"\"\n      message: \"\"\n```\n\nYou can do targeted push on [Alertmanager](https:\/\/github.com\/prometheus\/alertmanager) according to labels.\n\n```yaml\ntemplate.app-deployed: |\n  message: Application  has been healthy.\n  alertmanager:\n    labels:\n      alertname: app-deployed\n      fault_priority: \"P5\"\n      event_bucket: \"deploy\"\n```\n\nThere is a special label `alertname`. 
If you don\u2019t set its value, it will be equal to the template name by default","site":"argocd"}
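The auth precedence described for the Alertmanager service (`basicAuth` wins when both `basicAuth` and `bearerToken` are set) can be sketched as follows. This is a minimal illustrative Python sketch, not the Argo CD implementation; the function name is an assumption.

```python
# Illustrative sketch (not the Argo CD implementation): pick the
# Authorization header for Alertmanager requests with the precedence
# described above -- when both basicAuth and bearerToken are configured,
# basicAuth takes precedence.
import base64

def auth_header(username=None, password=None, bearer_token=None):
    if username and password:  # basicAuth takes precedence over bearerToken
        creds = base64.b64encode(f"{username}:{password}".encode()).decode()
        return {"Authorization": "Basic " + creds}
    if bearer_token:
        return {"Authorization": "Bearer " + bearer_token}
    return {}  # no auth configured
```

The returned dict would be attached as request headers on the POST to `<scheme>://<target><apiPath>` (by default `/api/v2/alerts`).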
{"questions":"argocd Parameters the webhook url map e g The Teams notification service send message notifications using Teams bot and requires specifying the following settings Configuration Teams","answers":"# Teams\n\n## Parameters\n\nThe Teams notification service sends message notifications using a Teams bot and requires specifying the following settings:\n\n* `recipientUrls` - the webhook url map, e.g. `channelName: https:\/\/example.com`\n\n## Configuration\n\n1. Open `Teams` and go to `Apps`\n2. Find `Incoming Webhook` microsoft app and click on it\n3. Press `Add to a team` -> select team and channel -> press `Set up a connector`\n4. Enter webhook name and upload image (optional)\n5. Press `Create` then copy webhook url and store it in `argocd-notifications-secret` and define it in `argocd-notifications-cm`\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: argocd-notifications-cm\ndata:\n  service.teams: |\n    recipientUrls:\n      channelName: $channel-teams-url\n```\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: <secret-name>\nstringData:\n  channel-teams-url: https:\/\/example.com\n```\n\n6. 
Create subscription for your Teams integration:\n\n```yaml\napiVersion: argoproj.io\/v1alpha1\nkind: Application\nmetadata:\n  annotations:\n    notifications.argoproj.io\/subscribe.on-sync-succeeded.teams: channelName\n```\n\n## Templates\n\n![](https:\/\/user-images.githubusercontent.com\/18019529\/114271500-9d2b8880-9a4c-11eb-85c1-f6935f0431d5.png)\n\n[Notification templates](..\/templates.md) can be customized to leverage teams message sections, facts, themeColor, summary and potentialAction [feature](https:\/\/docs.microsoft.com\/en-us\/microsoftteams\/platform\/webhooks-and-connectors\/how-to\/connectors-using).\n\n```yaml\ntemplate.app-sync-succeeded: |\n  teams:\n    themeColor: \"#000080\"\n    sections: |\n      [{\n        \"facts\": [\n          {\n            \"name\": \"Sync Status\",\n            \"value\": \"\"\n          },\n          {\n            \"name\": \"Repository\",\n            \"value\": \"\"\n          }\n        ]\n      }]\n    potentialAction: |-\n      [{\n        \"@type\":\"OpenUri\",\n        \"name\":\"Operation Details\",\n        \"targets\":[{\n          \"os\":\"default\",\n          \"uri\":\"\/applications\/?operation=true\"\n        }]\n      }]\n    title: Application  has been successfully synced\n    text: Application  has been successfully synced at .\n    summary: \" sync succeeded\"\n```\n\n### facts field\n\nYou can use `facts` field instead of `sections` field.\n\n```yaml\ntemplate.app-sync-succeeded: |\n  teams:\n    facts: |\n      [{\n        \"name\": \"Sync Status\",\n        \"value\": \"\"\n      },\n      {\n        \"name\": \"Repository\",\n        \"value\": \"\"\n      }]\n```\n\n### theme color field\n\nYou can set theme color as hex string for the message.\n\n![](https:\/\/user-images.githubusercontent.com\/1164159\/114864810-0718a900-9e24-11eb-8127-8d95da9544c1.png)\n\n```yaml\ntemplate.app-sync-succeeded: |\n  teams:\n    themeColor: \"#000080\"\n```\n\n### summary field\n\nYou can set a summary of 
the message that will be shown on Notification & Activity Feed \n\n![](https:\/\/user-images.githubusercontent.com\/6957724\/116587921-84c4d480-a94d-11eb-9da4-f365151a12e7.jpg)\n\n![](https:\/\/user-images.githubusercontent.com\/6957724\/116588002-99a16800-a94d-11eb-807f-8626eb53b980.jpg)\n\n```yaml\ntemplate.app-sync-succeeded: |\n  teams:\n    summary: \"Sync Succeeded\"\n```","site":"argocd"}
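The Teams template fields above (`themeColor`, `summary`, `facts`, `sections`, `potentialAction`) map onto the legacy Office 365 connector "MessageCard" JSON envelope. Below is a hypothetical Python helper, an assumption for illustration rather than Argo CD source, assembling such a payload.

```python
# Illustrative sketch (hypothetical helper, not Argo CD source): build the
# Office 365 connector "MessageCard" JSON that the teams template fields
# above map onto. "facts" is treated as shorthand for a single section
# carrying the fact list, matching the doc's "facts field" description.
def build_message_card(title, text, theme_color=None, summary=None,
                       facts=None, sections=None, potential_action=None):
    card = {
        "@type": "MessageCard",
        "@context": "http://schema.org/extensions",
        "title": title,
        "text": text,
    }
    if theme_color:
        card["themeColor"] = theme_color  # hex string, e.g. "#000080"
    if summary:
        card["summary"] = summary  # shown in Notification & Activity Feed
    if facts and not sections:
        sections = [{"facts": facts}]
    if sections:
        card["sections"] = sections
    if potential_action:
        card["potentialAction"] = potential_action
    return card
```

The dict would then be serialized to JSON and POSTed to the webhook URL resolved from `recipientUrls`.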
{"questions":"cilium imminent and atomic if the deletion request is valid and Flag Name Description provided properties DeleteEndpointID Deletes the endpoint specified by the ID Deletion is I flags are compatible with the flag DeleteEndpoint Deletes a list of endpoints that have endpoints matching the","answers":".. <!-- This file was autogenerated via api-flaggen, do not edit manually-->\n\nCilium Agent API\n================\n\nThe following API flags are compatible with the ``cilium-agent`` flag\n``enable-cilium-api-server-access``.\n\n===================== ====================\nFlag Name             Description\n===================== ====================\nDeleteEndpoint        Deletes a list of endpoints that have endpoints matching the\n                      provided properties\nDeleteEndpointID      Deletes the endpoint specified by the ID. Deletion is\n                      imminent and atomic, if the deletion request is valid and\n                      the endpoint exists, deletion will occur even if errors are\n                      encountered in the process. If errors have been encountered,\n                      the code 202 will be returned, otherwise 200 on success. All\n                      resources associated with the endpoint will be freed and the\n                      workload represented by the endpoint will be disconnected. It\n                      will no longer be able to initiate or receive communications\n                      of any sort.\nDeleteFqdnCache       Deletes matching DNS lookups from the cache, optionally\n                      restricted by DNS name. The removed IP data will no longer\n                      be used in generated policies.\nDeleteIPAMIP          -\nDeletePolicy          -\nDeletePrefilter       -\nDeleteRecorderID      -\nDeleteServiceID       -\nGetBGPPeers           Retrieves current operational state of BGP peers created by\n                      Cilium BGP virtual router. 
This includes session state,\n                      uptime, information per address family, etc.\nGetBGPRoutePolicies   Retrieves route policies from BGP Control Plane.\nGetBGPRoutes          Retrieves routes from BGP Control Plane RIB filtered by\n                      parameters you specify\nGetCgroupDumpMetadata -\nGetClusterNodes       -\nGetConfig             Returns the configuration of the Cilium daemon.\nGetDebuginfo          -\nGetEndpoint           Retrieves a list of endpoints that have metadata matching\n                      the provided parameters, or all endpoints if no parameters\n                      provided.\nGetEndpointID         Returns endpoint information\nGetEndpointIDConfig   Retrieves the configuration of the specified endpoint.\nGetEndpointIDHealthz  -\nGetEndpointIDLabels   -\nGetEndpointIDLog      -\nGetFqdnCache          Retrieves the list of DNS lookups intercepted from\n                      endpoints, optionally filtered by DNS name, CIDR IP range or\n                      source.\nGetFqdnCacheID        Retrieves the list of DNS lookups intercepted from the\n                      specific endpoint, optionally filtered by endpoint id, DNS\n                      name, CIDR IP range or source.\nGetFqdnNames          Retrieves the list of DNS-related fields (names to poll,\n                      selectors and their corresponding regexes).\nGetHealthz            Returns health and status information of the Cilium daemon\n                      and related components such as the local container runtime,\n                      connected datastore, Kubernetes integration and Hubble.\nGetIP                 Retrieves a list of IPs with known associated information\n                      such as their identities, host addresses, Kubernetes pod\n                      names, etc. 
The list can optionally be filtered by a CIDR IP\n                      range.\nGetIdentity           Retrieves a list of identities that have metadata matching\n                      the provided parameters, or all identities if no parameters\n                      are provided.\nGetIdentityEndpoints  -\nGetIdentityID         -\nGetLRP                -\nGetMap                -\nGetMapName            -\nGetMapNameEvents      -\nGetNodeIds            Retrieves a list of node IDs allocated by the agent and\n                      their associated node IP addresses.\nGetPolicy             Returns the entire policy tree with all children.\nGetPolicySelectors    -\nGetPrefilter          -\nGetRecorder           -\nGetRecorderID         -\nGetRecorderMasks      -\nGetService            -\nGetServiceID          -\nPatchConfig           Updates the daemon configuration by applying the provided\n                      ConfigurationMap and regenerates & recompiles all required\n                      datapath components.\nPatchEndpointID       Applies the endpoint change request to an existing endpoint\nPatchEndpointIDConfig Update the configuration of an existing endpoint and\n                      regenerates & recompiles the corresponding programs\n                      automatically.\nPatchEndpointIDLabels Sets labels associated with an endpoint. 
These can be user\n                      provided or derived from the orchestration system.\nPatchPrefilter        -\nPostIPAM              -\nPostIPAMIP            -\nPutEndpointID         Creates a new endpoint\nPutPolicy             -\nPutRecorderID         -\nPutServiceID          -\n===================== ====================\n\nCilium Agent Clusterwide Health API\n===================================\n\nThe following API flags are compatible with the ``cilium-agent`` flag\n``enable-cilium-health-api-server-access``.\n\n===================== ====================\nFlag Name             Description\n===================== ====================\nGetHealthz            Returns health and status information of the local node\n                      including load and uptime, as well as the status of related\n                      components including the Cilium daemon.\nGetStatus             Returns the connectivity status to all other cilium-health\n                      instances using interval-based probing.\nPutStatusProbe        Runs a synchronous probe to all other cilium-health\n                      instances and returns the connectivity status.\n===================== ====================\n\nCilium Operator API\n===================\n\nThe following API flags are compatible with the ``cilium-operator`` flag\n``enable-cilium-operator-server-access``.\n\n===================== ====================\nFlag Name             Description\n===================== ====================\nGetCluster            Returns the list of remote clusters and their status.\nGetHealthz            Returns the status of cilium operator instance.\nGetMetrics            Returns the metrics exposed by the Cilium operator.\n===================== ====================","site":"cilium"}
{"questions":"cilium There have been reports from users hitting issues with Argo CD This documentation page outlines some of the known issues and their solutions docs cilium io argocdissues Troubleshooting Cilium deployed with Argo CD","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _argocd_issues:\n\n********************************************\nTroubleshooting Cilium deployed with Argo CD\n********************************************\n\nThere have been reports from users hitting issues with Argo CD. This documentation \npage outlines some of the known issues and their solutions.\n\nArgo CD deletes CustomResourceDefinitions\n=========================================\n\nWhen deploying Cilium with Argo CD, some users have reported that Cilium-generated custom resources disappear,\ncausing one or more of the following issues:\n\n- ``ciliumid`` not found (:gh-issue:`17614`)\n- Argo CD Out-of-sync issues for hubble-generate-certs (:gh-issue:`14550`)\n- Out-of-sync issues for Cilium using Argo CD (:gh-issue:`18298`)\n\nSolution\n--------\n\nTo prevent these issues, declare resource exclusions in the Argo CD ``ConfigMap`` by following `these instructions <https:\/\/argo-cd.readthedocs.io\/en\/stable\/operator-manual\/declarative-setup\/#resource-exclusioninclusion>`__.\n\nHere is an example snippet:\n\n.. code-block:: yaml\n\n    resource.exclusions: |\n     - apiGroups:\n         - cilium.io\n       kinds:\n         - CiliumIdentity\n       clusters:\n         - \"*\"\n\n\nAlso, it has been reported that the problem may affect all workloads you deploy with Argo CD in a cluster running Cilium, not just Cilium itself.\nIf so, you will need the following exclusions in your Argo CD application definition to avoid getting \u201cout of sync\u201d when Hubble rotates its certificates.\n\n.. 
code-block:: yaml\n\n    ignoreDifferences:\n      - group: \"\"\n        kind: ConfigMap\n        name: hubble-ca-cert\n        jsonPointers:\n        - \/data\/ca.crt\n      - group: \"\"\n        kind: Secret\n        name: hubble-relay-client-certs\n        jsonPointers:\n        - \/data\/ca.crt\n        - \/data\/tls.crt\n        - \/data\/tls.key\n      - group: \"\"\n        kind: Secret\n        name: hubble-server-certs\n        jsonPointers:\n        - \/data\/ca.crt\n        - \/data\/tls.crt\n        - \/data\/tls.key\n\n\n.. note::\n    After applying the above configurations, for the settings to take effect, you will need to restart the Argo CD deployments.\n\nHelm template with serviceMonitor enabled fails\n===============================================\n\nSome users have reported that when they install Cilium using Argo CD and run ``helm template`` with ``serviceMonitor`` enabled, it fails.\nIt fails because Argo CD CLI doesn't pass the ``--api-versions`` flag to Helm upon deployment.\n\nSolution\n--------\n\nThis `pull request <https:\/\/github.com\/argoproj\/argo-cd\/pull\/8371>`__ fixed this issue in Argo CD's `v2.3.0 release <https:\/\/github.com\/argoproj\/argo-cd\/releases\/tag\/v2.3.0>`__.\nUpgrade your Argo CD and check if ``helm template`` with ``serviceMonitor`` enabled still fails.\n\n.. note::\n\n    When using ``helm template``, it is highly recommended you set\n    ``--kube-version`` and ``--api-versions`` with the values matching your\n    target Kubernetes cluster. 
Helm charts such as Cilium's often conditionally\n    enable certain Kubernetes features based on their availability (beta vs\n    stable) on the target cluster.\n\n    By specifying ``--api-versions=monitoring.coreos.com\/v1``, you should be\n    able to pass validation with ``helm template``.\n\nIf you have an issue with Argo CD that's not outlined above, check this `list\nof Argo CD related issues on GitHub\n<https:\/\/github.com\/cilium\/cilium\/issues?q=is%3Aissue+argocd>`__.\nIf you can't find an issue that relates to yours, create one and\/or seek help\non `Cilium Slack`_.\n\nApplication chart for Cilium deployed to Talos Linux fails with: field not declared in schema\n=============================================================================================\n\nWhen deploying Cilium to Talos Linux with Argo CD, some users have reported\nissues due to the Talos Linux security configuration. Argo CD may fail to deploy the\napplication with the message::\n\n    Failed to compare desired state to live state: failed to calculate diff:\n    error calculating structured merge diff: error building typed value from live\n    resource: .spec.template.spec.securityContext.appArmorProfile: field not\n    declared in schema\n\nSolution\n--------\n\nAdd the option ``ServerSideApply=true`` to the ``syncPolicy.syncOptions`` list of the Application.\n\n.. 
code-block:: yaml\n\n    apiVersion: argoproj.io\/v1alpha1\n    kind: Application\n    spec:\n      syncPolicy:\n        syncOptions:\n        - ServerSideApply=true\n\nVisit the `ArgoCD documentation <https:\/\/argo-cd.readthedocs.io\/en\/stable\/user-guide\/sync-options\/#server-side-apply>`__ for further details.","site":"cilium"}
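A hedged sketch tying the Argo CD fixes above together in one Application manifest — the repository URL, chart version, application name, and namespaces are illustrative assumptions, and only one of the three ``ignoreDifferences`` entries is repeated here:

```yaml
# Sketch: an Argo CD Application for Cilium combining the fixes above.
# repoURL, targetRevision, names, and namespaces are assumptions.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cilium           # assumed application name
  namespace: argocd      # assumed Argo CD namespace
spec:
  project: default
  source:
    repoURL: https://helm.cilium.io/
    chart: cilium
    targetRevision: 1.15.6   # assumed chart version
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
  syncPolicy:
    syncOptions:
    - ServerSideApply=true   # works around the Talos "field not declared in schema" error
  ignoreDifferences:         # avoids "out of sync" when Hubble rotates certificates
  - group: ""
    kind: Secret
    name: hubble-server-certs
    jsonPointers:
    - /data/ca.crt
    - /data/tls.crt
    - /data/tls.key
```

The ``hubble-ca-cert`` and ``hubble-relay-client-certs`` entries shown above would be added to ``ignoreDifferences`` in the same way.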
{"questions":"cilium on a per node basis This allows overriding docs cilium io The Cilium agent process a k a DaemonSet supports setting configuration per node configuration Per node configuration","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _per-node-configuration:\n\n**********************\nPer-node configuration\n**********************\n\nThe Cilium agent process (a.k.a. DaemonSet) supports setting configuration\non a per-node basis. This allows overriding :ref:`cilium-config-configmap`\nfor a node or set of nodes. It is managed by CiliumNodeConfig objects.\n\nThis feature is useful for:\n\n- Gradually rolling out changes.\n- Selectively enabling features that require specific hardware:\n\n    * :ref:`XDP acceleration`\n    * :ref:`ipv6_big_tcp`\n\nCiliumNodeConfig objects\n------------------------\n\nA CiliumNodeConfig object allows for overriding ConfigMap \/ Agent arguments.\nIt consists of a set of fields and a label selector. The label selector\ndefines to which nodes the configuration applies. As is the standard with\nKubernetes, an empty LabelSelector (e.g. ``{}``) selects all nodes.\n\n.. note::\n    Creating or modifying a CiliumNodeConfig will not cause changes to take effect\n    until pods are deleted and re-created (or their node is restarted).\n\n\nExample: selective XDP enablement\n---------------------------------\n\nTo enable :ref:`XDP acceleration` only on nodes with necessary\nhardware, one would label the relevant nodes and override their configuration.\n\n.. 
code-block:: yaml\n\n    apiVersion: cilium.io\/v2\n    kind: CiliumNodeConfig\n    metadata:\n      namespace: kube-system\n      name: enable-xdp\n    spec:\n      nodeSelector:\n        matchLabels:\n          io.cilium.xdp-offload: \"true\"\n      defaults:\n        bpf-lb-acceleration: native\n\nExample: KubeProxyReplacement Rollout\n-------------------------------------\n\nTo roll out :ref:`kube-proxy replacement <kubeproxy-free>` in a gradual manner,\nyou may also wish to use the CiliumNodeConfig feature. The procedure below labels each migrated\nnode with ``io.cilium.migration\/kube-proxy-replacement: \"true\"``.\n\n.. warning::\n\n    You must have installed Cilium with the Helm values ``k8sServiceHost`` and\n    ``k8sServicePort``. Otherwise, Cilium will not be able to reach the Kubernetes\n    APIServer after kube-proxy is uninstalled.\n\n    You can apply these two values to a running cluster via ``helm upgrade``.\n\n#. Patch kube-proxy to only run on unmigrated nodes:\n\n    .. code-block:: shell-session\n\n        kubectl -n kube-system patch daemonset kube-proxy --patch '{\"spec\": {\"template\": {\"spec\": {\"affinity\": {\"nodeAffinity\": {\"requiredDuringSchedulingIgnoredDuringExecution\": {\"nodeSelectorTerms\": [{\"matchExpressions\": [{\"key\": \"io.cilium.migration\/kube-proxy-replacement\", \"operator\": \"NotIn\", \"values\": [\"true\"]}]}]}}}}}}}'\n\n#. Configure Cilium to use kube-proxy replacement on migrated nodes:\n\n    .. code-block:: shell-session\n\n        cat <<EOF | kubectl apply --server-side -f -\n        apiVersion: cilium.io\/v2\n        kind: CiliumNodeConfig\n        metadata:\n          namespace: kube-system\n          name: kube-proxy-replacement\n        spec:\n          nodeSelector:\n            matchLabels:\n              io.cilium.migration\/kube-proxy-replacement: \"true\"\n          defaults:\n            kube-proxy-replacement: \"true\"\n            kube-proxy-replacement-healthz-bind-address: \"0.0.0.0:10256\"\n\n        EOF\n\n#. 
Select a node to migrate. Optionally, cordon and drain that node:\n\n    .. code-block:: shell-session\n\n        export NODE=kind-worker\n        kubectl label node $NODE --overwrite 'io.cilium.migration\/kube-proxy-replacement=true'\n        kubectl cordon $NODE\n\n#. Delete the Cilium pod on that node to reload its configuration:\n\n    .. code-block:: shell-session\n\n        kubectl -n kube-system delete pod -l k8s-app=cilium --field-selector spec.nodeName=$NODE\n\n#. Ensure Cilium has the correct configuration:\n\n    .. code-block:: shell-session\n\n        kubectl -n kube-system exec $(kubectl -n kube-system get pod -l k8s-app=cilium --field-selector spec.nodeName=$NODE -o name) -c cilium-agent -- \\\n            cilium config get kube-proxy-replacement\n        true\n\n#. Uncordon the node:\n\n    .. code-block:: shell-session\n\n        kubectl uncordon $NODE\n\n#. Cleanup: make kube-proxy replacement the cluster-wide default:\n\n    .. code-block:: shell-session\n\n        cilium config set --restart=false kube-proxy-replacement true\n        cilium config set --restart=false kube-proxy-replacement-healthz-bind-address \"0.0.0.0:10256\"\n        kubectl -n kube-system delete ciliumnodeconfig kube-proxy-replacement\n\n#. Cleanup: delete the kube-proxy DaemonSet and unlabel the nodes:\n\n    .. 
code-block:: shell-session\n\n        kubectl -n kube-system delete daemonset kube-proxy\n        kubectl label node --all --overwrite 'io.cilium.migration\/kube-proxy-replacement-'","site":"cilium"}
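Besides XDP acceleration, the per-node configuration page above lists IPv6 BIG TCP as a feature worth enabling only on capable nodes. A minimal sketch under stated assumptions: the node label key is made up for illustration, and ``enable-ipv6-big-tcp`` is assumed to be the agent option to override.

```yaml
# Sketch: enable IPv6 BIG TCP only on hand-labeled nodes.
# The label key below is an illustrative assumption.
apiVersion: cilium.io/v2
kind: CiliumNodeConfig
metadata:
  namespace: kube-system
  name: enable-big-tcp
spec:
  nodeSelector:
    matchLabels:
      example.io/big-tcp-capable: "true"   # hypothetical label
  defaults:
    enable-ipv6-big-tcp: "true"            # assumed agent option name
```

As the page notes, existing Cilium pods on matching nodes must be deleted and recreated before such an override takes effect.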
{"questions":"cilium k8sinstallquick k8sinstallstandard docs cilium io Cilium Quick Installation k8squickinstall","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _k8s_install_quick:\n.. _k8s_quick_install:\n.. _k8s_install_standard:\n\n*************************\nCilium Quick Installation\n*************************\n\nThis guide will walk you through the quick default installation. It will\nautomatically detect and use the best configuration possible for the Kubernetes\ndistribution you are using. All state is stored using Kubernetes custom resource definitions (CRDs).\n\nThis is the best installation method for most use cases.  For large\nenvironments (> 500 nodes) or if you want to run specific datapath modes, refer\nto the :ref:`getting_started` guide.\n\nShould you encounter any issues during the installation, please refer to the\n:ref:`troubleshooting_k8s` section and\/or seek help on `Cilium Slack`_.\n\n.. _create_cluster:\n\nCreate the Cluster\n===================\n\nIf you don't have a Kubernetes Cluster yet, you can use the instructions below\nto create a Kubernetes cluster locally or using a managed Kubernetes service:\n\n.. tabs::\n\n    .. group-tab:: GKE\n\n       The following commands create a Kubernetes cluster using `Google\n       Kubernetes Engine <https:\/\/cloud.google.com\/kubernetes-engine>`_.  See\n       `Installing Google Cloud SDK <https:\/\/cloud.google.com\/sdk\/install>`_\n       for instructions on how to install ``gcloud`` and prepare your\n       account.\n\n       .. 
code-block:: bash\n\n           export NAME=\"$(whoami)-$RANDOM\"\n           # Create the node pool with the following taint to guarantee that\n           # Pods are only scheduled\/executed in the node when Cilium is ready.\n           # Alternatively, see the note below.\n           gcloud container clusters create \"${NAME}\" \\\n            --node-taints node.cilium.io\/agent-not-ready=true:NoExecute \\\n            --zone us-west2-a\n           gcloud container clusters get-credentials \"${NAME}\" --zone us-west2-a\n\n       .. note::\n\n          Please make sure to read and understand the documentation page on :ref:`taint effects and unmanaged pods<taint_effects>`.\n\n    .. group-tab:: AKS\n\n       The following commands create a Kubernetes cluster using `Azure\n       Kubernetes Service <https:\/\/docs.microsoft.com\/en-us\/azure\/aks\/>`_ with\n       no CNI plugin pre-installed (BYOCNI). See `Azure Cloud CLI\n       <https:\/\/docs.microsoft.com\/en-us\/cli\/azure\/install-azure-cli?view=azure-cli-latest>`_\n       for instructions on how to install ``az`` and prepare your account, and\n       the `Bring your own CNI documentation\n       <https:\/\/docs.microsoft.com\/en-us\/azure\/aks\/use-byo-cni?tabs=azure-cli>`_\n       for more details about BYOCNI prerequisites \/ implications.\n\n       .. code-block:: bash\n\n           export NAME=\"$(whoami)-$RANDOM\"\n           export AZURE_RESOURCE_GROUP=\"${NAME}-group\"\n           az group create --name \"${AZURE_RESOURCE_GROUP}\" -l westus2\n\n           # Create AKS cluster\n           az aks create \\\n             --resource-group \"${AZURE_RESOURCE_GROUP}\" \\\n             --name \"${NAME}\" \\\n             --network-plugin none \\\n             --generate-ssh-keys\n\n           # Get the credentials to access the cluster with kubectl\n           az aks get-credentials --resource-group \"${AZURE_RESOURCE_GROUP}\" --name \"${NAME}\"\n\n    .. 
group-tab:: EKS\n\n       The following commands create a Kubernetes cluster with ``eksctl``\n       using `Amazon Elastic Kubernetes Service\n       <https:\/\/aws.amazon.com\/eks\/>`_.  See `eksctl Installation\n       <https:\/\/github.com\/weaveworks\/eksctl>`_ for instructions on how to\n       install ``eksctl`` and prepare your account.\n\n       .. code-block:: none\n\n           export NAME=\"$(whoami)-$RANDOM\"\n           cat <<EOF >eks-config.yaml\n           apiVersion: eksctl.io\/v1alpha5\n           kind: ClusterConfig\n\n           metadata:\n             name: ${NAME}\n             region: eu-west-1\n\n           managedNodeGroups:\n           - name: ng-1\n             desiredCapacity: 2\n             privateNetworking: true\n             # taint nodes so that application pods are\n             # not scheduled\/executed until Cilium is deployed.\n             # Alternatively, see the note below.\n             taints:\n              - key: \"node.cilium.io\/agent-not-ready\"\n                value: \"true\"\n                effect: \"NoExecute\"\n           EOF\n           eksctl create cluster -f .\/eks-config.yaml\n\n       .. note::\n\n          Please make sure to read and understand the documentation page on :ref:`taint effects and unmanaged pods<taint_effects>`.\n\n    .. group-tab:: kind\n\n       Install ``kind`` >= v0.7.0 per kind documentation:\n       `Installation and Usage <https:\/\/kind.sigs.k8s.io\/#installation-and-usage>`_\n\n       .. parsed-literal::\n\n          curl -LO \\ |SCM_WEB|\\\/Documentation\/installation\/kind-config.yaml\n          kind create cluster --config=kind-config.yaml\n\n       .. note::\n\n         Cilium may fail to deploy due to too many open files in one or more\n         of the agent pods. 
If you notice this error, you can increase the\n         ``inotify`` resource limits on your host machine (see\n         `Pod errors due to \"too many open files\" <https:\/\/kind.sigs.k8s.io\/docs\/user\/known-issues\/#pod-errors-due-to-too-many-open-files>`__).\n\n    .. group-tab:: minikube\n\n       Install minikube \u2265 v1.28.0 as per minikube documentation:\n       `Install Minikube <https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-minikube\/>`_.\n       The following command will bring up a single-node minikube cluster prepared for installing Cilium.\n\n       .. code-block:: shell-session\n\n          minikube start --cni=cilium\n\n       .. note::\n\n          - This may not install the latest version of Cilium.\n          - It might be necessary to add ``--host-dns-resolver=false`` if using the VirtualBox provider;\n            otherwise, DNS resolution may not work after Cilium installation.\n\n    .. group-tab:: Rancher Desktop\n\n       Install Rancher Desktop >= v1.1.0 as per Rancher Desktop documentation:\n       `Install Rancher Desktop <https:\/\/docs.rancherdesktop.io\/getting-started\/installation>`_.\n\n       Next, you need to configure Rancher Desktop to disable the built-in CNI so you can install Cilium.\n\n       .. include:: ..\/installation\/rancher-desktop-configure.rst\n\n    .. group-tab:: Alibaba ACK\n\n        .. include:: ..\/beta.rst\n\n        .. note::\n\n            The AlibabaCloud ENI integration with Cilium is subject to the following limitations:\n\n            - It is currently only enabled for IPv4.\n            - It only works with instances supporting ENI. Refer to `Instance families <https:\/\/www.alibabacloud.com\/help\/doc-detail\/25378.htm>`_ for details.\n\n        Set up a Kubernetes cluster on AlibabaCloud. 
You can use any method you prefer.\n        The quickest way is to create an ACK (Alibaba Cloud Container Service for\n        Kubernetes) cluster and to replace the CNI plugin with Cilium.\n        For more details on how to set up an ACK cluster please follow\n        the `official documentation <https:\/\/www.alibabacloud.com\/help\/doc-detail\/86745.htm>`_.\n\n.. _install_cilium_cli:\n\nInstall the Cilium CLI\n======================\n\n.. include:: ..\/installation\/cli-download.rst\n\n.. admonition:: Video\n  :class: attention\n\n  To learn more about the Cilium CLI, check out `eCHO episode 8: Exploring the Cilium CLI <https:\/\/www.youtube.com\/watch?v=ndjmaM1i0WQ&t=1136s>`__.\n\nInstall Cilium\n==============\n\nYou can install Cilium on any Kubernetes cluster. Pick one of the options below:\n\n.. tabs::\n\n    .. group-tab:: Generic\n\n       These are the generic instructions on how to install Cilium into any\n       Kubernetes cluster. The installer will attempt to automatically pick the\n       best configuration options for you. Please see the other tabs for\n       distribution\/platform specific instructions which also list the ideal\n       default configuration for particular platforms.\n\n       .. include:: ..\/installation\/requirements-generic.rst\n\n       **Install Cilium**\n\n       Install Cilium into the Kubernetes cluster pointed to by your current kubectl context:\n\n       .. parsed-literal::\n\n          cilium install |CHART_VERSION|\n\n    .. group-tab:: GKE\n\n       .. include:: ..\/installation\/requirements-gke.rst\n\n       **Install Cilium:**\n\n       Install Cilium into the GKE cluster:\n\n       .. parsed-literal::\n\n           cilium install |CHART_VERSION|\n\n    .. group-tab:: AKS\n       \n       .. include:: ..\/installation\/requirements-aks.rst\n   \n       **Install Cilium:**\n\n       Install Cilium into the AKS cluster:\n\n       .. 
parsed-literal::\n\n           cilium install |CHART_VERSION| --set azure.resourceGroup=\"${AZURE_RESOURCE_GROUP}\"\n           \n    .. group-tab:: EKS\n\n       .. include:: ..\/installation\/requirements-eks.rst\n\n       **Install Cilium:**\n\n       Install Cilium into the EKS cluster:\n\n       .. parsed-literal::\n\n           cilium install |CHART_VERSION|\n           cilium status --wait\n\n       .. note::\n\n           If you have to uninstall Cilium and later install it again, you may run into\n           connectivity issues due to the ``aws-node`` DaemonSet flushing Linux routing tables.\n           The issues can be fixed by restarting all pods; alternatively, you can delete the\n           ``aws-node`` DaemonSet prior to installing Cilium to avoid such issues.\n\n    .. group-tab:: OpenShift\n\n       .. include:: ..\/installation\/requirements-openshift.rst\n\n       **Install Cilium:**\n\n       Cilium is a `Certified OpenShift CNI Plugin <https:\/\/access.redhat.com\/articles\/5436171>`_\n       and is best installed when an OpenShift cluster is created using the OpenShift\n       installer. Please refer to :ref:`k8s_install_openshift_okd` for more information.\n\n    .. group-tab:: RKE\n\n       .. include:: ..\/installation\/requirements-rke.rst\n\n       **Install Cilium:**\n\n       Install Cilium into your newly created RKE cluster:\n\n       .. parsed-literal::\n\n           cilium install |CHART_VERSION|\n\n    .. group-tab:: k3s\n\n       .. include:: ..\/installation\/requirements-k3s.rst\n\n       **Install Cilium:**\n\n       Install Cilium into your newly created Kubernetes cluster:\n\n       .. parsed-literal::\n\n           cilium install |CHART_VERSION|\n\n    .. 
group-tab:: Alibaba ACK\n\n       You can install Cilium on Alibaba ACK using Helm; refer to :ref:`k8s_install_helm` for details.\n\n\nIf the installation fails for some reason, run ``cilium status`` to retrieve\nthe overall status of the Cilium deployment and inspect the logs of any\npods that are failing to deploy.\n\n.. tip::\n\n   You may see ``cilium install`` print something like this:\n\n   .. code-block:: shell-session\n\n       \u267b\ufe0f  Restarted unmanaged pod kube-system\/event-exporter-gke-564fb97f9-rv8hg\n       \u267b\ufe0f  Restarted unmanaged pod kube-system\/kube-dns-6465f78586-hlcrz\n       \u267b\ufe0f  Restarted unmanaged pod kube-system\/kube-dns-autoscaler-7f89fb6b79-fsmsg\n       \u267b\ufe0f  Restarted unmanaged pod kube-system\/l7-default-backend-7fd66b8b88-qqhh5\n       \u267b\ufe0f  Restarted unmanaged pod kube-system\/metrics-server-v0.3.6-7b5cdbcbb8-kjl65\n       \u267b\ufe0f  Restarted unmanaged pod kube-system\/stackdriver-metadata-agent-cluster-level-6cc964cddf-8n2rt\n\n   This indicates that your cluster was already running some pods before Cilium\n   was deployed and the installer has automatically restarted them to ensure\n   all pods get networking provided by Cilium.\n\nValidate the Installation\n=========================\n\n.. include:: ..\/installation\/cli-status.rst\n.. include:: ..\/installation\/cli-connectivity-test.rst\n\n.. 
include:: ..\/installation\/next-steps.rst","site":"cilium"}
not work after Cilium installation          group tab   Rancher Desktop         Install Rancher Desktop    v1 1 0 as per Rancher Desktop documentation          Install Rancher Desktop  https   docs rancherdesktop io getting started installation             Next you need to configure Rancher Desktop to disable the built in CNI so you can install Cilium             include      installation rancher desktop configure rst         group tab   Alibaba ACK             include      beta rst             note                The AlibabaCloud ENI integration with Cilium is subject to the following limitations                 It is currently only enabled for IPv4                It only works with instances supporting ENI  Refer to  Instance families  https   www alibabacloud com help doc detail 25378 htm    for details           Setup a Kubernetes on AlibabaCloud  You can use any method you prefer          The quickest way is to create an ACK  Alibaba Cloud Container Service for         Kubernetes  cluster and to replace the CNI plugin with Cilium          For more details on how to set up an ACK cluster please follow         the  official documentation  https   www alibabacloud com help doc detail 86745 htm          install cilium cli   Install the Cilium CLI                            include      installation cli download rst     admonition   Video    class  attention    To learn more about the Cilium CLI  check out  eCHO episode 8  Exploring the Cilium CLI  https   www youtube com watch v ndjmaM1i0WQ t 1136s       Install Cilium                 You can install Cilium on any Kubernetes cluster  Pick one of the options below      tabs           group tab   Generic         These are the generic instructions on how to install Cilium into any        Kubernetes cluster  The installer will attempt to automatically pick the        best configuration options for you  Please see the other tabs for        distribution platform specific instructions which also list the ideal        
default configuration for particular platforms             include      installation requirements generic rst           Install Cilium           Install Cilium into the Kubernetes cluster pointed to by your current kubectl context             parsed literal              cilium install  CHART VERSION          group tab   GKE            include      installation requirements gke rst           Install Cilium            Install Cilium into the GKE cluster             parsed literal               cilium install  CHART VERSION          group tab   AKS                   include      installation requirements aks rst              Install Cilium            Install Cilium into the AKS cluster             parsed literal               cilium install  CHART VERSION    set azure resourceGroup    AZURE RESOURCE GROUP                      group tab   EKS            include      installation requirements eks rst           Install Cilium            Install Cilium into the EKS cluster             parsed literal               cilium install  CHART VERSION             cilium status   wait            note               If you have to uninstall Cilium and later install it again  that could cause            connectivity issues due to   aws node   DaemonSet flushing Linux routing tables             The issues can be fixed by restarting all pods  alternatively to avoid such issues            you can delete   aws node   DaemonSet prior to installing Cilium          group tab   OpenShift            include      installation requirements openshift rst           Install Cilium            Cilium is a  Certified OpenShift CNI Plugin  https   access redhat com articles 5436171           and is best installed when an OpenShift cluster is created using the OpenShift        installer  Please refer to  ref  k8s install openshift okd  for more information          group tab   RKE            include      installation requirements rke rst           Install Cilium            Install Cilium into your newly 
created RKE cluster             parsed literal               cilium install  CHART VERSION          group tab   k3s            include      installation requirements k3s rst           Install Cilium            Install Cilium into your newly created Kubernetes cluster             parsed literal               cilium install  CHART VERSION          group tab   Alibaba ACK         You can install Cilium using Helm on Alibaba ACK  refer to  k8s install helm  for details    If the installation fails for some reason  run   cilium status   to retrieve the overall status of the Cilium deployment and inspect the logs of whatever pods are failing to be deployed      tip       You may be seeing   cilium install   print something like this         code block   shell session             Restarted unmanaged pod kube system event exporter gke 564fb97f9 rv8hg            Restarted unmanaged pod kube system kube dns 6465f78586 hlcrz            Restarted unmanaged pod kube system kube dns autoscaler 7f89fb6b79 fsmsg            Restarted unmanaged pod kube system l7 default backend 7fd66b8b88 qqhh5            Restarted unmanaged pod kube system metrics server v0 3 6 7b5cdbcbb8 kjl65            Restarted unmanaged pod kube system stackdriver metadata agent cluster level 6cc964cddf 8n2rt     This indicates that your cluster was already running some pods before Cilium    was deployed and the installer has automatically restarted them to ensure    all pods get networking provided by Cilium   Validate the Installation                               include      installation cli status rst    include      installation cli connectivity test rst     include      installation next steps rst"}
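The ``cilium install`` command drives the official Cilium Helm chart under the hood, so the ``--set`` flags shown in the tabs above correspond to Helm chart values. As a sketch only (the mapping is assumed here, and the resource group name is a placeholder), the AKS invocation above is roughly equivalent to installing with a values file like this:

```yaml
# values.yaml -- hypothetical Helm values mirroring
#   cilium install --set azure.resourceGroup="${AZURE_RESOURCE_GROUP}"
# The group name below is a placeholder, not a value from this guide.
azure:
  resourceGroup: "my-aks-resource-group"
```

Keeping such settings in a values file rather than ad-hoc ``--set`` flags makes the installation reproducible across upgrades.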
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _starwars_demo:

#######################################
Getting Started with the Star Wars Demo
#######################################

.. include:: /security/gsg_sw_demo.rst

Check Current Access
====================

From the perspective of the *deathstar* service, only the ships with label ``org=empire`` are allowed to connect and request landing. Since we have no rules enforced, both *xwing* and *tiefighter* will be able to request landing. To test this, use the commands below.

.. code-block:: shell-session

    $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed
    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

Apply an L3/L4 Policy
=====================

When using Cilium, endpoint IP addresses are irrelevant when defining security
policies. Instead, you can use the labels assigned to the pods to define
security policies. The policies will be applied to the right pods based on the labels, irrespective of where or when they are running within the cluster.

We'll start with a basic policy restricting deathstar landing requests to only the ships that have the label ``org=empire``. This will not allow any ships that don't have the ``org=empire`` label to even connect with the *deathstar* service.
This is a simple policy that filters only on IP protocol (network layer 3) and TCP protocol (network layer 4), so it is often referred to as an L3/L4 network security policy.

Note: Cilium performs stateful *connection tracking*, meaning that if policy allows
the frontend to reach backend, it will automatically allow all required reply
packets that are part of backend replying to frontend within the context of the
same TCP/UDP connection.

**L4 Policy with Cilium and Kubernetes**

.. image:: images/cilium_http_l3_l4_gsg.png
   :scale: 30 %

We can achieve that with the following CiliumNetworkPolicy:

.. literalinclude:: ../../examples/minikube/sw_l3_l4_policy.yaml

CiliumNetworkPolicies match on pod labels using an ``endpointSelector`` to identify the sources and destinations to which the policy applies.
The above policy whitelists traffic sent from any pods with label ``org=empire`` to *deathstar* pods with label ``org=empire, class=deathstar`` on TCP port 80.

To apply this L3/L4 policy, run:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\ /examples/minikube/sw_l3_l4_policy.yaml
    ciliumnetworkpolicy.cilium.io/rule1 created

Now if we run the landing requests again, only the *tiefighter* pods with the label ``org=empire`` will succeed. The *xwing* pods will be blocked!

.. code-block:: shell-session

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

This works as expected. Now the same request run from an *xwing* pod will fail:

.. code-block:: shell-session

    $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

This request will hang, so press Control-C to kill the curl request, or wait for it to time out.

Inspecting the Policy
=====================

If we run ``cilium-dbg endpoint list`` again we will see that the pods with the labels ``org=empire`` and ``class=deathstar`` now have ingress policy enforcement enabled as per the policy above.

.. code-block:: shell-session

    $ kubectl -n kube-system exec cilium-1c2cz -- cilium-dbg endpoint list
    ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6   IPv4         STATUS
               ENFORCEMENT        ENFORCEMENT
    232        Enabled            Disabled          16530      k8s:class=deathstar                                      10.0.0.147   ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    726        Disabled           Disabled          1          reserved:host                                                         ready
    883        Disabled           Disabled          4          reserved:health                                          10.0.0.244   ready
    1634       Disabled           Disabled          51373      k8s:io.cilium.k8s.policy.cluster=default                 10.0.0.118   ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    1673       Disabled           Disabled          31028      k8s:class=tiefighter                                     10.0.0.112   ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    2811       Disabled           Disabled          51373      k8s:io.cilium.k8s.policy.cluster=default                 10.0.0.47    ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    2843       Enabled            Disabled          16530      k8s:class=deathstar                                      10.0.0.89    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    3184       Disabled           Disabled          22654      k8s:class=xwing                                          10.0.0.30    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=alliance

You can also inspect the policy details via ``kubectl``:

.. code-block:: shell-session

    $ kubectl get cnp
    NAME    AGE
    rule1   2m

    $ kubectl describe cnp rule1
    Name:         rule1
    Namespace:    default
    Labels:       <none>
    Annotations:  <none>
    API Version:  cilium.io/v2
    Description:  L3-L4 policy to restrict deathstar access to empire ships only
    Kind:         CiliumNetworkPolicy
    Metadata:
      Creation Timestamp:  2020-06-15T14:06:48Z
      Generation:          1
      Managed Fields:
        API Version:  cilium.io/v2
        Fields Type:  FieldsV1
        fieldsV1:
          f:description:
          f:spec:
            .:
            f:endpointSelector:
              .:
              f:matchLabels:
                .:
                f:class:
                f:org:
            f:ingress:
        Manager:         kubectl
        Operation:       Update
        Time:            2020-06-15T14:06:48Z
      Resource Version:  2914
      Self Link:         /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/rule1
      UID:               eb3a688b-b3aa-495c-b20a-d4f79e7c088d
    Spec:
      Endpoint Selector:
        Match Labels:
          Class:  deathstar
          Org:    empire
      Ingress:
        From Endpoints:
          Match Labels:
            Org:  empire
        To Ports:
          Ports:
            Port:      80
            Protocol:  TCP
    Events:            <none>

Apply and Test HTTP-aware L7 Policy
===================================

In the simple scenario above, it was sufficient to either give *tiefighter* /
*xwing* full access to *deathstar's* API or no access at all. But to
provide the strongest security (i.e., enforce least-privilege isolation)
between microservices, each service that calls *deathstar's* API should be
limited to making only the set of HTTP requests it requires for legitimate
operation.

For example, consider that the *deathstar* service exposes some maintenance APIs which should not be called by random empire ships. To see this, run:

.. code-block:: shell-session

    $ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
    Panic: deathstar exploded

    goroutine 1 [running]:
    main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
            /code/src/github.com/empire/deathstar/
            temp/main.go:9 +0x64
    main.main()
            /code/src/github.com/empire/deathstar/
            temp/main.go:5 +0x85

While this is an illustrative example, unauthorized access such as the above can have adverse security repercussions.

**L7 Policy with Cilium and Kubernetes**

.. image:: images/cilium_http_l3_l4_l7_gsg.png
   :scale: 30 %

Cilium is capable of enforcing HTTP-layer (i.e., L7) policies to limit what
URLs the *tiefighter* is allowed to reach. Here is an example policy file that
extends our original policy by limiting *tiefighter* to making only a POST /v1/request-landing
API call, but disallowing all other calls (including PUT /v1/exhaust-port).

.. literalinclude:: ../../examples/minikube/sw_l3_l4_l7_policy.yaml

Update the existing rule to apply L7-aware policy to protect *deathstar* using:

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\ /examples/minikube/sw_l3_l4_l7_policy.yaml
    ciliumnetworkpolicy.cilium.io/rule1 configured

We can now re-run the same test as above, but we will see a different outcome:

.. code-block:: shell-session

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

and

.. code-block:: shell-session

    $ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
    Access denied

As this rule builds on the identity-aware rule, traffic from pods without the label
``org=empire`` will continue to be dropped, causing the connection to time out:

.. code-block:: shell-session

    $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

As you can see, with Cilium L7 security policies, we are able to permit
*tiefighter* to access only the required API resources on *deathstar*, thereby
implementing a "least privilege" security approach for communication between
microservices. Note that ``path`` matches the exact URL; if, for example, you want
to allow anything under ``/v1/``, you need to use a regular expression:

.. code-block:: yaml

    path: "/v1/.*"

You can observe the L7 policy via ``kubectl``:

.. code-block:: shell-session

    $ kubectl describe ciliumnetworkpolicies
    Name:         rule1
    Namespace:    default
    Labels:       <none>
    Annotations:  API Version:  cilium.io/v2
    Description:  L7 policy to restrict access to specific HTTP call
    Kind:         CiliumNetworkPolicy
    Metadata:
      Creation Timestamp:  2020-06-15T14:06:48Z
      Generation:          2
      Managed Fields:
        API Version:  cilium.io/v2
        Fields Type:  FieldsV1
        fieldsV1:
          f:description:
          f:metadata:
            f:annotations:
              .:
              f:kubectl.kubernetes.io/last-applied-configuration:
          f:spec:
            .:
            f:endpointSelector:
              .:
              f:matchLabels:
                .:
                f:class:
                f:org:
            f:ingress:
        Manager:         kubectl
        Operation:       Update
        Time:            2020-06-15T14:10:46Z
      Resource Version:  3445
      Self Link:         /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/rule1
      UID:               eb3a688b-b3aa-495c-b20a-d4f79e7c088d
    Spec:
      Endpoint Selector:
        Match Labels:
          Class:  deathstar
          Org:    empire
      Ingress:
        From Endpoints:
          Match Labels:
            Org:  empire
        To Ports:
          Ports:
            Port:      80
            Protocol:  TCP
          Rules:
            Http:
              Method:  POST
              Path:    /v1/request-landing
    Events:            <none>

and the ``cilium`` CLI:

.. code-block:: shell-session

    $ kubectl -n kube-system exec cilium-qh5l2 -- cilium-dbg policy get
    [
      {
        "endpointSelector": {
          "matchLabels": {
            "any:class": "deathstar",
            "any:org": "empire",
            "k8s:io.kubernetes.pod.namespace": "default"
          }
        },
        "ingress": [
          {
            "fromEndpoints": [
              {
                "matchLabels": {
                  "any:org": "empire",
                  "k8s:io.kubernetes.pod.namespace": "default"
                }
              }
            ],
            "toPorts": [
              {
                "ports": [
                  {
                    "port": "80",
                    "protocol": "TCP"
                  }
                ],
                "rules": {
                  "http": [
                    {
                      "path": "/v1/request-landing",
                      "method": "POST"
                    }
                  ]
                }
              }
            ]
          }
        ],
        "labels": [
          {
            "key": "io.cilium.k8s.policy.derived-from",
            "value": "CiliumNetworkPolicy",
            "source": "k8s"
          },
          {
            "key": "io.cilium.k8s.policy.name",
            "value": "rule1",
            "source": "k8s"
          },
          {
            "key": "io.cilium.k8s.policy.namespace",
            "value": "default",
            "source": "k8s"
          },
          {
            "key": "io.cilium.k8s.policy.uid",
            "value": "eb3a688b-b3aa-495c-b20a-d4f79e7c088d",
            "source": "k8s"
          }
        ]
      }
    ]
    Revision: 11

It is also possible to monitor the HTTP requests live by using ``cilium-dbg monitor``:

.. code-block:: shell-session

    $ kubectl exec -it -n kube-system cilium-kzgdx -- cilium-dbg monitor -v --type l7
    <- Response http to 0 ([k8s:class=tiefighter k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire]) from 2756 ([k8s:io.cilium.k8s.policy.cluster=default k8s:class=deathstar k8s:org=empire k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 8876->43854, verdict Forwarded POST http://deathstar.default.svc.cluster.local/v1/request-landing => 200
    <- Request http from 0 ([k8s:class=tiefighter k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire]) to 2756 ([k8s:io.cilium.k8s.policy.cluster=default k8s:class=deathstar k8s:org=empire k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 8876->43854, verdict Denied PUT http://deathstar.default.svc.cluster.local/v1/request-landing => 403

The above output demonstrates a successful response to a POST request followed by a PUT request that is denied by the L7 policy.

We hope you enjoyed the tutorial. Feel free to play more with the setup, read
the rest of the documentation, and reach out to us on the `Cilium Slack`_ with
any questions!

Clean-up
========

.. parsed-literal::

   $ kubectl delete -f \ |SCM_WEB|\ /examples/minikube/http-sw-app.yaml
   $ kubectl delete cnp rule1

.. include:: ../installation/next-steps.rst
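For reference, the final L7-aware rule applied in this guide can be reconstructed from the ``kubectl describe cnp rule1`` and ``cilium-dbg policy get`` output shown above. A sketch of the equivalent manifest follows; every selector and rule value is taken from that output, but the field layout is an assumption and the actual ``sw_l3_l4_l7_policy.yaml`` file referenced by the guide may differ cosmetically:

```yaml
# Sketch of the L7-aware "rule1" policy, reconstructed from the
# describe/policy-get output above -- not the verbatim contents of
# examples/minikube/sw_l3_l4_l7_policy.yaml.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"
```

Dropping the ``rules.http`` block from this manifest yields the earlier L3/L4-only version of the policy.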
i e   L7  policies to limit what URLs the  tiefighter  is allowed to reach   Here is an example policy file that extends our original policy by limiting  tiefighter  to making only a POST  v1 request landing API call  but disallowing all other calls  including PUT  v1 exhaust port       literalinclude         examples minikube sw l3 l4 l7 policy yaml  Update the existing rule to apply L7 aware policy to protect  deathstar  using      parsed literal          kubectl apply  f    SCM WEB   examples minikube sw l3 l4 l7 policy yaml     ciliumnetworkpolicy cilium io rule1 configured   We can now re run the same test as above  but we will see a different outcome      code block   shell session        kubectl exec tiefighter    curl  s  XPOST deathstar default svc cluster local v1 request landing     Ship landed   and     code block   shell session        kubectl exec tiefighter    curl  s  XPUT deathstar default svc cluster local v1 exhaust port     Access denied  As this rule builds on the identity aware rule  traffic from pods without the label   org empire   will continue to be dropped causing the connection to time out      code block   shell session        kubectl exec xwing    curl  s  XPOST deathstar default svc cluster local v1 request landing   As you can see  with Cilium L7 security policies  we are able to permit  tiefighter  to access only the required API resources on  deathstar   thereby implementing a  least privilege  security approach for communication between microservices  Note that   path   matches the exact url  if for example you want to allow anything under  v1   you need to use a regular expression      code block   yaml      path    v1      You can observe the L7 policy via   kubectl        code block   shell session        kubectl describe ciliumnetworkpolicies     Name          rule1     Namespace     default     Labels         none      Annotations   API Version   cilium io v2     Description   L7 policy to restrict access to specific HTTP 
call     Kind          CiliumNetworkPolicy     Metadata        Creation Timestamp   2020 06 15T14 06 48Z       Generation           2       Managed Fields          API Version   cilium io v2         Fields Type   FieldsV1         fieldsV1            f description            f metadata              f annotations                                 f kubectl kubernetes io last applied configuration            f spec                             f endpointSelector                                 f matchLabels                                     f class                  f org              f ingress          Manager          kubectl         Operation        Update         Time             2020 06 15T14 10 46Z       Resource Version   3445       Self Link           apis cilium io v2 namespaces default ciliumnetworkpolicies rule1       UID                eb3a688b b3aa 495c b20a d4f79e7c088d     Spec        Endpoint Selector          Match Labels            Class   deathstar           Org     empire       Ingress          From Endpoints            Match Labels              Org   empire         To Ports            Ports              Port       80             Protocol   TCP           Rules              Http                Method   POST               Path      v1 request landing     Events              none   and   cilium   CLI      code block   shell session        kubectl  n kube system exec cilium qh5l2    cilium dbg policy get                        endpointSelector                matchLabels                  any class    deathstar                any org    empire                k8s io kubernetes pod namespace    default                                  ingress                              fromEndpoints                                      matchLabels                        any org    empire                      k8s io kubernetes pod namespace    default                                                                toPorts                                      ports            
                                  port    80                        protocol    TCP                                                          rules                        http                                                  path     v1 request landing                          method    POST                                                                                                                            labels                              key    io cilium k8s policy derived from                value    CiliumNetworkPolicy                source    k8s                                        key    io cilium k8s policy name                value    rule1                source    k8s                                        key    io cilium k8s policy namespace                value    default                source    k8s                                        key    io cilium k8s policy uid                value    eb3a688b b3aa 495c b20a d4f79e7c088d                source    k8s                                          Revision  11  It is also possible to monitor the HTTP requests live by using   cilium dbg monitor        code block   shell session        kubectl exec  it  n kube system cilium kzgdx    cilium dbg monitor  v   type l7        Response http to 0   k8s class tiefighter k8s io cilium k8s policy cluster default k8s io cilium k8s policy serviceaccount default k8s io kubernetes pod namespace default k8s org empire   from 2756   k8s io cilium k8s policy cluster default k8s class deathstar k8s org empire k8s io kubernetes pod namespace default k8s io cilium k8s policy serviceaccount default    identity 8876  43854  verdict Forwarded POST http   deathstar default svc cluster local v1 request landing    200        Request http from 0   k8s class tiefighter k8s io cilium k8s policy cluster default k8s io cilium k8s policy serviceaccount default k8s io kubernetes pod namespace default k8s org empire   to 2756   k8s io cilium k8s policy cluster default k8s class 
deathstar k8s org empire k8s io kubernetes pod namespace default k8s io cilium k8s policy serviceaccount default    identity 8876  43854  verdict Denied PUT http   deathstar default svc cluster local v1 request landing    403  The above output demonstrates a successful response to a POST request followed by a PUT request that is denied by the L7 policy   We hope you enjoyed the tutorial   Feel free to play more with the setup  read the rest of the documentation  and reach out to us on the  Cilium Slack   with any questions   Clean up              parsed literal         kubectl delete  f    SCM WEB   examples minikube http sw app yaml      kubectl delete cnp rule1      include      installation next steps rst"}
{"questions":"cilium docs cilium io Terminology labels label","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n***********\nTerminology\n***********\n\n\n.. _label:\n.. _labels:\n\nLabels\n======\n\nLabels are a generic, flexible and highly scalable way of addressing a large\nset of resources as they allow for arbitrary grouping and creation of sets.\nWhenever something needs to be described, addressed or selected, it is done\nbased on labels:\n\n- `Endpoints` are assigned labels as derived from the container runtime,\n  orchestration system, or other sources.\n- `Network policies` select pairs of `endpoints` which are allowed to\n  communicate based on labels. The policies themselves are identified by labels\n  as well.\n\nWhat is a Label?\n----------------\n\nA label is a pair of strings consisting of a ``key`` and ``value``. A label can\nbe formatted as a single string with the format ``key=value``. The key portion\nis mandatory and must be unique. This is typically achieved by using the\nreverse domain name notion, e.g. ``io.cilium.mykey=myvalue``. The value portion\nis optional and can be omitted, e.g. ``io.cilium.mykey``.\n\nKey names should typically consist of the character set ``[a-z0-9-.]``.\n\nWhen using labels to select resources, both the key and the value must match,\ne.g. when a policy should be applied to all endpoints with the label\n``my.corp.foo`` then the label ``my.corp.foo=bar`` will not match the\nselector.\n\nLabel Source\n------------\n\nA label can be derived from various sources. For example, an `endpoint`_ will\nderive the labels associated to the container by the local container runtime as\nwell as the labels associated with the pod as provided by Kubernetes. 
As these\ntwo label namespaces are not aware of each other, this may result in\nconflicting label keys.\n\nTo resolve this potential conflict, Cilium prefixes all label keys with\n``source:`` to indicate the source of the label when importing labels, e.g.\n``k8s:role=frontend``, ``container:user=joe``, ``k8s:role=backend``. This means\nthat when you run a Docker container using ``docker run [...] -l foo=bar``, the\nlabel ``container:foo=bar`` will appear on the Cilium endpoint representing the\ncontainer. Similarly, a Kubernetes pod started with the label ``foo: bar``\nwill be represented with a Cilium endpoint associated with the label\n``k8s:foo=bar``. A unique name is allocated for each potential source. The\nfollowing label sources are currently supported:\n\n- ``container:`` for labels derived from the local container runtime\n- ``k8s:`` for labels derived from Kubernetes\n- ``reserved:`` for special reserved labels, see :ref:`reserved_labels`.\n- ``unspec:`` for labels with unspecified source\n\nWhen using labels to identify other resources, the source can be included to\nlimit matching of labels to a particular type. If no source is provided, the\nlabel source defaults to ``any:`` which will match all labels regardless of\ntheir source. If a source is provided, the source of the selecting and matching\nlabels needs to match.\n\n.. _endpoint:\n.. _endpoints:\n\nEndpoint\n========\n\nCilium makes application containers available on the network by assigning them\nIP addresses. Multiple application containers can share the same IP address; a\ntypical example for this model is a Kubernetes :term:`Pod`. All application containers\nwhich share a common address are grouped together in what Cilium refers to as\nan endpoint.\n\nAllocating individual IP addresses enables the use of the entire Layer 4 port\nrange by each endpoint. 
This essentially allows multiple application containers\nrunning on the same cluster node to all bind to well known ports such as ``80``\nwithout causing any conflicts.\n\nThe default behavior of Cilium is to assign both an IPv6 and IPv4 address to\nevery endpoint. However, this behavior can be configured to only allocate an\nIPv6 address with the ``--enable-ipv4=false`` option. If both an IPv6 and IPv4\naddress are assigned, either address can be used to reach the endpoint. The\nsame behavior will apply with regard to policy rules, load-balancing, etc. See\n:ref:`address_management` for more details.\n\nIdentification\n--------------\n\nFor identification purposes, Cilium assigns an internal endpoint id to all\nendpoints on a cluster node. The endpoint id is unique within the context of\nan individual cluster node.\n\n.. _endpoint id:\n\nEndpoint Metadata\n-----------------\n\nAn endpoint automatically derives metadata from the application containers\nassociated with the endpoint. The metadata can then be used to identify the\nendpoint for security\/policy, load-balancing and routing purposes.\n\nThe source of the metadata will depend on the orchestration system and\ncontainer runtime in use. 
The following metadata retrieval mechanisms are\ncurrently supported:\n\n+---------------------+---------------------------------------------------+\n| System              | Description                                       |\n+=====================+===================================================+\n| Kubernetes          | Pod labels (via k8s API)                          |\n+---------------------+---------------------------------------------------+\n| containerd (Docker) | Container labels (via Docker API)                 |\n+---------------------+---------------------------------------------------+\n\nMetadata is attached to endpoints in the form of `labels`.\n\nThe following example launches a container with the label ``app=benchmark``\nwhich is then associated with the endpoint. The label is prefixed with\n``container:`` to indicate that the label was derived from the container\nruntime.\n\n.. code-block:: shell-session\n\n    $ docker run --net cilium -d -l app=benchmark tgraf\/netperf\n    aaff7190f47d071325e7af06577f672beff64ccc91d2b53c42262635c063cf1c\n    $ cilium-dbg endpoint list\n    ENDPOINT   POLICY        IDENTITY   LABELS (source:key[=value])   IPv6                   IPv4            STATUS\n               ENFORCEMENT\n    62006      Disabled      257        container:app=benchmark       f00d::a00:20f:0:f236   10.15.116.202   ready\n\n\nAn endpoint can have metadata associated from multiple sources. A typical\nexample is a Kubernetes cluster which uses containerd as the container runtime.\nEndpoints will derive Kubernetes pod labels (prefixed with the ``k8s:`` source\nprefix) and containerd labels (prefixed with ``container:`` source prefix).\n\n.. _identity:\n\nIdentity\n========\n\nAll `endpoints` are assigned an identity. The identity is what is used to enforce\nbasic connectivity between endpoints. 
In traditional networking terminology,\nthis would be equivalent to Layer 3 enforcement.\n\nAn identity is identified by `labels` and is given a cluster-wide unique\nidentifier. The endpoint is assigned the identity which matches the endpoint's\n`security relevant labels`, i.e. all endpoints which share the same set of\n`security relevant labels` will share the same identity. This concept makes it possible to\nscale policy enforcement to a massive number of endpoints, as many individual\nendpoints will typically share the same set of security `labels` as applications\nare scaled.\n\nWhat is an Identity?\n--------------------\n\nThe identity of an endpoint is derived based on the `labels` associated with\nthe pod or container which are carried over to the `endpoint`_. When a pod or\ncontainer is started, Cilium will create an `endpoint`_ based on the event\nreceived by the container runtime to represent the pod or container on the\nnetwork. As a next step, Cilium will resolve the identity of the `endpoint`_\ncreated. Whenever the `labels` of the pod or container change, the identity is\nreconfirmed and automatically modified as required.\n\n.. _security relevant labels:\n\nSecurity Relevant Labels\n------------------------\n\nNot all `labels` associated with a container or pod are meaningful when\nderiving the `identity`. Labels may be used to store metadata such as the\ntimestamp when a container was launched. Cilium needs to know which labels\nare meaningful and should be considered when deriving the identity.\nFor this purpose, the user is required to specify a list of string prefixes of\nmeaningful labels. The standard behavior is to include all labels which start\nwith the prefix ``id.``, e.g. ``id.service1``, ``id.service2``,\n``id.groupA.service44``. The list of meaningful label prefixes can be specified\nwhen starting the agent.\n\n.. 
_reserved_labels:\n\nSpecial Identities\n------------------\n\nAll endpoints which are managed by Cilium will be assigned an identity. In\norder to allow communication to network endpoints which are not managed by\nCilium, special identities exist to represent those. Special reserved\nidentities are prefixed with the string ``reserved:``.\n\n+-----------------------------+------------+---------------------------------------------------+\n| Identity                    | Numeric ID | Description                                       |\n+=============================+============+===================================================+\n| ``reserved:unknown``        | 0          | The identity could not be derived.                |\n+-----------------------------+------------+---------------------------------------------------+\n| ``reserved:host``           | 1          | The local host. Any traffic that originates from  |\n|                             |            | or is designated to one of the local host IPs.    |\n+-----------------------------+------------+---------------------------------------------------+\n| ``reserved:world``          | 2          | Any network endpoint outside of the cluster       |\n+-----------------------------+------------+---------------------------------------------------+\n| ``reserved:unmanaged``      | 3          | An endpoint that is not managed by Cilium, e.g.   |\n|                             |            | a Kubernetes pod that was launched before Cilium  |\n|                             |            | was installed.                                    |\n+-----------------------------+------------+---------------------------------------------------+\n| ``reserved:health``         | 4          | This is health checking traffic generated by      |\n|                             |            | Cilium agents.                                    
|\n+-----------------------------+------------+---------------------------------------------------+\n| ``reserved:init``           | 5          | An endpoint for which the identity has not yet    |\n|                             |            | been resolved is assigned the init identity.      |\n|                             |            | This represents the phase of an endpoint in which |\n|                             |            | some of the metadata required to derive the       |\n|                             |            | security identity is still missing. This is       |\n|                             |            | typically the case in the bootstrapping phase.    |\n|                             |            |                                                   |\n|                             |            | The init identity is only allocated if the labels |\n|                             |            | of the endpoint are not known at creation time.   |\n|                             |            | This can be the case for the Docker plugin.       |\n+-----------------------------+------------+---------------------------------------------------+\n| ``reserved:remote-node``    | 6          | The collection of all remote cluster hosts.       |\n|                             |            | Any traffic that originates from or is designated |\n|                             |            | to one of the IPs of any host in any connected    |\n|                             |            | cluster other than the local node.                |\n+-----------------------------+------------+---------------------------------------------------+\n| ``reserved:kube-apiserver`` | 7          | Remote node(s) which have backend(s) serving the  |\n|                             |            | kube-apiserver running.                           
|\n+-----------------------------+------------+---------------------------------------------------+\n| ``reserved:ingress``        | 8          | Given to the IPs used as the source address for   |\n|                             |            | connections from Ingress proxies.                 |\n+-----------------------------+------------+---------------------------------------------------+\n\nWell-known Identities\n---------------------\n\nThe following is a list of well-known identities which Cilium is aware of\nautomatically and will hand out a security identity without requiring to\ncontact any external dependencies such as the kvstore. The purpose of this is\nto allow bootstrapping Cilium and enable network connectivity with policy\nenforcement in the cluster for essential services without depending on any\ndependencies.\n\n======================== =================== ==================== ================= =========== ============================================================================\nDeployment               Namespace           ServiceAccount       Cluster Name      Numeric ID  Labels\n======================== =================== ==================== ================= =========== ============================================================================\nkube-dns                 kube-system         kube-dns             <cilium-cluster>  102         ``k8s-app=kube-dns``\nkube-dns (EKS)           kube-system         kube-dns             <cilium-cluster>  103         ``k8s-app=kube-dns``, ``eks.amazonaws.com\/component=kube-dns``\ncore-dns                 kube-system         coredns              <cilium-cluster>  104         ``k8s-app=kube-dns``\ncore-dns (EKS)           kube-system         coredns              <cilium-cluster>  106         ``k8s-app=kube-dns``, ``eks.amazonaws.com\/component=coredns``\ncilium-operator          <cilium-namespace>  cilium-operator      <cilium-cluster>  105         ``name=cilium-operator``, 
``io.cilium\/app=operator``\n======================== =================== ==================== ================= =========== ============================================================================\n\n*Note*: if ``cilium-cluster`` is not defined with the ``cluster-name`` option,\nthe default value will be set to \"``default``\".\n\nIdentity Management in the Cluster\n----------------------------------\n\nIdentities are valid in the entire cluster which means that if several pods or\ncontainers are started on several cluster nodes, all of them will resolve and\nshare a single identity if they share the identity relevant labels. This\nrequires coordination between cluster nodes.\n\n.. image:: ..\/images\/identity_store.png\n    :align: center\n\nThe operation to resolve an endpoint identity is performed with the help of the\ndistributed key-value store which allows to perform atomic operations in the\nform *generate a new unique identifier if the following value has not been seen\nbefore*. This allows each cluster node to create the identity relevant subset\nof labels and then query the key-value store to derive the identity. Depending\non whether the set of labels has been queried before, either a new identity\nwill be created, or the identity of the initial query will be returned.\n\n.. _node:\n\nNode\n====\n\nCilium refers to a node as an individual member of a cluster. Each node must be\nrunning the ``cilium-agent`` and will operate in a mostly autonomous manner.\nSynchronization of state between Cilium agents running on different nodes is\nkept to a minimum for simplicity and scale. It occurs exclusively via the\nKey-Value store or with packet metadata.\n\nNode Address\n------------\n\nCilium will automatically detect the node's IPv4 and IPv6 address. 
The detected\nnode address is printed out when the ``cilium-agent`` starts:\n\n::\n\n    Local node-name: worker0\n    Node-IPv6: f00d::ac10:14:0:1\n    External-Node IPv4: 172.16.0.20\n    Internal-Node IPv4: 10.200.28.238\n","site":"cilium","answers_cleaned":"   only   not  epub or latex or html       WARNING  You are looking at unreleased Cilium documentation      Please use the official rendered version released here      https   docs cilium io              Terminology                   label      labels   Labels         Labels are a generic  flexible and highly scalable way of addressing a large set of resources as they allow for arbitrary grouping and creation of sets  Whenever something needs to be described  addressed or selected  it is done based on labels      Endpoints  are assigned labels as derived from the container runtime    orchestration system  or other sources     Network policies  select pairs of  endpoints  which are allowed to   communicate based on labels  The policies themselves are identified by labels   as well   What is a Label                    A label is a pair of strings consisting of a   key   and   value    A label can be formatted as a single string with the format   key value    The key portion is mandatory and must be unique  This is typically achieved by using the reverse domain name notion  e g    io cilium mykey myvalue    The value portion is optional and can be omitted  e g    io cilium mykey     Key names should typically consist of the character set    a z0 9        When using labels to select resources  both the key and the value must match  e g  when a policy should be applied to all endpoints with the label   my corp foo   then the label   my corp foo bar   will not match the selector   Label Source               A label can be derived from various sources  For example  an  endpoint   will derive the labels associated to the container by the local container runtime as well as the labels associated with the pod as 
provided by Kubernetes  As these two label namespaces are not aware of each other  this may result in conflicting label keys   To resolve this potential conflict  Cilium prefixes all label keys with   source    to indicate the source of the label when importing labels  e g    k8s role frontend      container user joe      k8s role backend    This means that when you run a Docker container using   docker run        l foo bar    the label   container foo bar   will appear on the Cilium endpoint representing the container  Similarly  a Kubernetes pod started with the label   foo  bar   will be represented with a Cilium endpoint associated with the label   k8s foo bar    A unique name is allocated for each potential source  The following label sources are currently supported       container    for labels derived from the local container runtime     k8s    for labels derived from Kubernetes     reserved    for special reserved labels  see  ref  reserved labels       unspec    for labels with unspecified source  When using labels to identify other resources  the source can be included to limit matching of labels to a particular type  If no source is provided  the label source defaults to   any    which will match all labels regardless of their source  If a source is provided  the source of the selecting and matching labels need to match       endpoint      endpoints   Endpoint            Cilium makes application containers available on the network by assigning them IP addresses  Multiple application containers can share the same IP address  a typical example for this model is a Kubernetes  term  Pod   All application containers which share a common address are grouped together in what Cilium refers to as an endpoint   Allocating individual IP addresses enables the use of the entire Layer 4 port range by each endpoint  This essentially allows multiple application containers running on the same cluster node to all bind to well known ports such as   80   without causing any 
conflicts. The default behavior of Cilium is to assign both an IPv6 and IPv4
address to every endpoint. However, this behavior can be configured to only
allocate an IPv6 address with the ``--enable-ipv4=false`` option. If both an
IPv6 and IPv4 address are assigned, either address can be used to reach the
endpoint. The same behavior will apply with regard to policy rules, load
balancing, etc. See :ref:`address_management` for more details.

Identification
--------------

For identification purposes, Cilium assigns an internal endpoint id to all
endpoints on a cluster node. The endpoint id is unique within the context of
an individual cluster node.

.. _endpoint id:

Endpoint Metadata
-----------------

An endpoint automatically derives metadata from the application containers
associated with the endpoint. The metadata can then be used to identify the
endpoint for security policy, load balancing and routing purposes.

The source of the metadata will depend on the orchestration system and
container runtime in use. The following metadata retrieval mechanisms are
currently supported:

===================== ==================================
System                Description
===================== ==================================
Kubernetes            Pod labels (via k8s API)
containerd (Docker)   Container labels (via Docker API)
===================== ==================================

Metadata is attached to endpoints in the form of :ref:`labels`.

The following example launches a container with the label ``app=benchmark``
which is then associated with the endpoint. The label is prefixed with
``container:`` to indicate that the label was derived from the container
runtime.

.. code-block:: shell-session

    $ docker run --net cilium -d -l app=benchmark tgraf/netperf
    aaff7190f47d071325e7af06577f672beff64ccc91d2b53c42262635c063cf1c
    $ cilium-dbg endpoint list
    ENDPOINT   POLICY        IDENTITY   LABELS (source:key[=value])   IPv6                   IPv4            STATUS
               ENFORCEMENT
    62006      Disabled      257        container:app=benchmark       f00d::a00:20f:0:f236   10.15.116.202   ready

An endpoint can have metadata associated from multiple sources. A typical
example is a Kubernetes cluster which uses containerd as the container
runtime. Endpoints will derive Kubernetes pod labels (prefixed with the
``k8s:`` source prefix) and containerd labels (prefixed with the
``container:`` source prefix).

.. _identity:

Identity
========

All endpoints are assigned an identity. The identity is what is used to
enforce basic connectivity between endpoints. In traditional networking
terminology, this would be equivalent to Layer 3 enforcement.

An identity is identified by labels and is given a cluster-wide unique
identifier. The endpoint is assigned the identity which matches the
endpoint's security-relevant labels, i.e. all endpoints which share the same
set of security-relevant labels will share the same identity. This concept
allows to scale policy enforcement to a massive number of endpoints as many
individual endpoints will typically share the same set of security labels as
applications are scaled.

What Is an Identity?
--------------------

The identity of an endpoint is derived based on the labels associated with
the pod or container which are derived to the endpoint. When a pod or
container is started, Cilium will create an endpoint based on the event
received by the container runtime to represent the pod or container on the
network. As a next step, Cilium will resolve the identity of the endpoint
created. Whenever the labels of the pod or container change, the identity is
reconfirmed and automatically modified as required.

.. _security relevant labels:

Security-Relevant Labels
------------------------

Not all labels associated with a container or pod are meaningful when
deriving the identity. Labels may be used to store metadata such as the
timestamp when a container was launched. Cilium requires to know which labels
are meaningful and are subject to being considered when deriving the
identity. For this purpose, the user is required to specify a list of string
prefixes of meaningful labels. The standard behavior is to include all labels
which start with the prefix ``id.``, e.g. ``id.service1``, ``id.service2``,
``id.groupA.service44``. The list of meaningful label prefixes can be
specified when starting the agent.

.. _reserved labels:

Special Identities
------------------

All endpoints which are managed by Cilium will be assigned an identity. In
order to allow communication to network endpoints which are not managed by
Cilium, special identities exist to represent those. Special reserved
identities are prefixed with the string ``reserved:``.

=========================== =========== ==================================================
Identity                    Numeric ID  Description
=========================== =========== ==================================================
``reserved:unknown``        0           The identity could not be derived.
``reserved:host``           1           The local host. Any traffic that originates from
                                        or is designated to one of the local host IPs.
``reserved:world``          2           Any network endpoint outside of the cluster.
``reserved:unmanaged``      3           An endpoint that is not managed by Cilium, e.g.
                                        a Kubernetes pod that was launched before Cilium
                                        was installed.
``reserved:health``         4           This is health checking traffic generated by
                                        Cilium agents.
``reserved:init``           5           An endpoint for which the identity has not yet
                                        been resolved is assigned the init identity.
                                        This represents the phase of an endpoint in which
                                        some of the metadata required to derive the
                                        security identity is still missing. This is
                                        typically the case in the bootstrapping phase.
                                        The init identity is only allocated if the labels
                                        of the endpoint are not known at creation time.
                                        This can be the case for the Docker plugin.
``reserved:remote-node``    6           The collection of all remote cluster hosts.
                                        Any traffic that originates from or is designated
                                        to one of the IPs of any host in any connected
                                        cluster other than the local node.
``reserved:kube-apiserver`` 7           Remote node(s) which have backend(s) serving the
                                        kube-apiserver running.
``reserved:ingress``        8           Given to the IPs used as the source address for
                                        connections from Ingress proxies.
=========================== =========== ==================================================

Well-known Identities
---------------------

The following is a list of well-known identities which Cilium is aware of
automatically and will hand out a security identity without requiring to
contact any external dependencies such as the kvstore. The purpose of this is
to allow bootstrapping Cilium and enable network connectivity with policy
enforcement in the cluster for essential services without depending on any
dependencies.

================ ==================== ================ ================== ========== ==========================================
Deployment       Namespace            ServiceAccount   Cluster Name       Numeric ID Labels
================ ==================== ================ ================== ========== ==========================================
kube-dns         kube-system          kube-dns         <cilium-cluster>   102        ``k8s-app=kube-dns``
kube-dns (EKS)   kube-system          kube-dns         <cilium-cluster>   103        ``k8s-app=kube-dns``,
                                                                                     ``eks.amazonaws.com/component=kube-dns``
core-dns         kube-system          coredns          <cilium-cluster>   104        ``k8s-app=kube-dns``
core-dns (EKS)   kube-system          coredns          <cilium-cluster>   106        ``k8s-app=kube-dns``,
                                                                                     ``eks.amazonaws.com/component=coredns``
cilium-operator  <cilium-namespace>   cilium-operator  <cilium-cluster>   105        ``name=cilium-operator``,
                                                                                     ``io.cilium/app=operator``
================ ==================== ================ ================== ========== ==========================================

.. note:: If ``<cilium-cluster>`` is not defined with the ``cluster-name``
   option, the default value will be set to ``default``.

Identity Management in the Cluster
----------------------------------

Identities are valid in the entire cluster which means that if several pods
or containers are started on several cluster nodes, all of them will resolve
and share a single identity if they share the identity relevant labels. This
requires coordination between cluster nodes.

.. image:: images/identity_store.png
    :align: center

The operation to resolve an endpoint identity is performed with the help of
the distributed key-value store which allows to perform atomic operations in
the form *generate a new unique identifier if the following value has not
been seen before*. This allows each cluster node to create the identity
relevant subset of labels and then query the key-value store to derive the
identity. Depending on whether the set of labels has been queried before,
either a new identity will be created, or the identity of the initial query
will be returned.

.. _node:

Node
====

Cilium refers to a node as an individual member of a cluster. Each node must
be running the ``cilium-agent`` and will operate in a mostly autonomous
manner. Synchronization of state between Cilium agents running on different
nodes is kept to a minimum for simplicity and scale. It occurs exclusively
via the Key-Value store or with packet metadata.

Node Address
------------

Cilium will automatically detect the node's IPv4 and IPv6 address. The
detected node address is printed out when the ``cilium-agent`` starts:

::

    Local node-name: worker0
    Node-IPv6: f00d::ac10:14:0:1
    External-Node IPv4: 172.16.0.20
    Internal-Node IPv4: 10.200.28.238
"}
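The cluster-wide identity resolution described above ("generate a new unique identifier if the following value has not been seen before") can be sketched in a few lines. This is a toy Python model for illustration only: the class name, the starting numeric ID, and the plain dict standing in for the distributed key-value store are all assumptions, not Cilium's actual implementation, which performs the allocation as an atomic kvstore transaction.

```python
import itertools


class IdentityAllocator:
    """Toy model of cluster-wide identity resolution: the first query for a
    new set of security-relevant labels mints a fresh numeric identity;
    every later query with the same label set returns the same identity,
    regardless of which node asks."""

    # Reserved identities (reserved:unknown=0 .. reserved:ingress=8) occupy
    # the low numeric range, so dynamic identities start above them
    # (256 is an illustrative choice, not a documented constant).
    FIRST_DYNAMIC_ID = 256

    def __init__(self):
        self._store = {}  # frozenset of labels -> numeric identity
        self._next_id = itertools.count(self.FIRST_DYNAMIC_ID)

    def resolve(self, labels):
        # "Generate a new unique identifier if this value has not been seen
        # before" -- a plain dict stands in for the atomic kvstore operation.
        key = frozenset(labels)
        if key not in self._store:
            self._store[key] = next(self._next_id)
        return self._store[key]


alloc = IdentityAllocator()
a = alloc.resolve({"k8s:app=benchmark", "k8s:io.kubernetes.pod.namespace=default"})
b = alloc.resolve({"k8s:io.kubernetes.pod.namespace=default", "k8s:app=benchmark"})
c = alloc.resolve({"k8s:app=frontend"})
```

Note that label order does not matter: endpoints sharing the same set of security-relevant labels resolve to the same identity, while a different label set yields a new one.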
{"questions":"cilium the list below has to be deleted to prevent conflicts Cilium will manage ENIs instead of the ACK CNI so any running DaemonSet from docs cilium io","answers":"To install Cilium on `ACK (Alibaba Cloud Container Service for Kubernetes) <https:\/\/www.alibabacloud.com\/help\/doc-detail\/86745.htm>`_, perform the following steps:\n\n**Disable ACK CNI (ACK Only):**\n\nIf you are running an ACK cluster, you should delete the ACK CNI.\n\n.. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\nCilium will manage ENIs instead of the ACK CNI, so any running DaemonSet from\nthe list below has to be deleted to prevent conflicts.\n\n- ``kube-flannel-ds``\n- ``terway``\n- ``terway-eni``\n- ``terway-eniip``\n\n.. note::\n\n    If you are using ACK with Flannel (DaemonSet ``kube-flannel-ds``),\n    the Cloud Controller Manager (CCM) will create a route (Pod CIDR) in VPC.\n    If your cluster is a Managed Kubernetes, you cannot disable this behavior.\n    Please consider creating a new cluster.\n\n.. code-block:: shell-session\n\n   kubectl -n kube-system delete daemonset <terway>\n\nThe next step is to remove the CRDs below, created by the ``terway*`` CNI:\n\n.. 
code-block:: shell-session\n\n    kubectl delete crd \\\n        ciliumclusterwidenetworkpolicies.cilium.io \\\n        ciliumendpoints.cilium.io \\\n        ciliumidentities.cilium.io \\\n        ciliumnetworkpolicies.cilium.io \\\n        ciliumnodes.cilium.io \\\n        bgpconfigurations.crd.projectcalico.org \\\n        clusterinformations.crd.projectcalico.org \\\n        felixconfigurations.crd.projectcalico.org \\\n        globalnetworkpolicies.crd.projectcalico.org \\\n        globalnetworksets.crd.projectcalico.org \\\n        hostendpoints.crd.projectcalico.org \\\n        ippools.crd.projectcalico.org \\\n        networkpolicies.crd.projectcalico.org\n\n\n**Create AlibabaCloud Secrets:**\n\nBefore installing Cilium, a new Kubernetes Secret with the AlibabaCloud Tokens needs to\nbe added to your Kubernetes cluster. This Secret will allow Cilium to gather\ninformation from the AlibabaCloud API which is needed to implement ToGroups policies.\n\n**AlibabaCloud Access Keys:**\n\nTo create a new access token the `following guide can be used\n<https:\/\/www.alibabacloud.com\/help\/doc-detail\/93691.htm>`_.\nThese keys need to have certain `RAM Permissions\n<https:\/\/ram.console.aliyun.com\/overview>`_:\n\n.. 
code-block:: json\n\n    {\n      \"Version\": \"1\",\n      \"Statement\": [{\n          \"Action\": [\n            \"ecs:CreateNetworkInterface\",\n            \"ecs:DescribeNetworkInterfaces\",\n            \"ecs:AttachNetworkInterface\",\n            \"ecs:DetachNetworkInterface\",\n            \"ecs:DeleteNetworkInterface\",\n            \"ecs:DescribeInstanceAttribute\",\n            \"ecs:DescribeInstanceTypes\",\n            \"ecs:AssignPrivateIpAddresses\",\n            \"ecs:UnassignPrivateIpAddresses\",\n            \"ecs:DescribeInstances\",\n            \"ecs:DescribeSecurityGroups\",\n            \"ecs:ListTagResources\"\n          ],\n          \"Resource\": [\n            \"*\"\n          ],\n          \"Effect\": \"Allow\"\n        },\n        {\n          \"Action\": [\n            \"vpc:DescribeVSwitches\",\n            \"vpc:ListTagResources\",\n            \"vpc:DescribeVpcs\"\n          ],\n          \"Resource\": [\n            \"*\"\n          ],\n          \"Effect\": \"Allow\"\n        }\n      ]\n    }\n\n\nAs soon as you have the access tokens, the following secret needs to be added,\nwith each empty string replaced by the associated value as a base64-encoded string:\n\n.. code-block:: yaml\n\n    apiVersion: v1\n    kind: Secret\n    metadata:\n      name: cilium-alibabacloud\n      namespace: kube-system\n    type: Opaque\n    data:\n      ALIBABA_CLOUD_ACCESS_KEY_ID: \"\"\n      ALIBABA_CLOUD_ACCESS_KEY_SECRET: \"\"\n\n\nThe base64 command line utility can be used to generate each value, for example:\n\n.. code-block:: shell-session\n\n    $ echo -n \"access_key\" | base64\n    YWNjZXNzX2tleQ==\n\nThis secret stores the AlibabaCloud credentials, which will be used to\nconnect to the AlibabaCloud API.\n\n.. code-block:: shell-session\n\n    $ kubectl create -f cilium-secret.yaml\n\n\n**Install Cilium:**\n\nInstall Cilium release via Helm:\n\n.. 
parsed-literal::\n\n   helm install cilium |CHART_RELEASE| \\\\\n     --namespace kube-system \\\\\n     --set alibabacloud.enabled=true \\\\\n     --set ipam.mode=alibabacloud \\\\\n     --set enableIPv4Masquerade=false \\\\\n     --set routingMode=native\n\n.. note::\n\n   You must ensure that the security groups associated with the ENIs (``eth1``,\n   ``eth2``, ...) allow for egress traffic to go outside of the VPC. By default,\n   the security groups for pod ENIs are derived from the primary ENI\n   (``eth0``).\n\n","site":"cilium"}
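The base64 encoding step shown above (``echo -n "access_key" | base64``) can also be scripted when generating the Secret manifest. A minimal Python sketch, assuming placeholder credential values (the function name is invented for illustration; only the Secret key names come from the manifest above):

```python
import base64


def secret_data(access_key_id: str, access_key_secret: str) -> dict:
    """Produce the ``data`` section of the cilium-alibabacloud Secret.

    Mirrors ``echo -n "<value>" | base64``: the ``-n`` flag ensures no
    trailing newline is included in the encoded input. The values passed
    in here are placeholders, not real credentials.
    """
    def encode(value: str) -> str:
        return base64.b64encode(value.encode("utf-8")).decode("ascii")

    return {
        "ALIBABA_CLOUD_ACCESS_KEY_ID": encode(access_key_id),
        "ALIBABA_CLOUD_ACCESS_KEY_SECRET": encode(access_key_secret),
    }


data = secret_data("access_key", "access_secret")
```

For the sample value ``access_key``, this yields the same ``YWNjZXNzX2tleQ==`` shown in the shell example above.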
{"questions":"cilium chainingazure Azure CNI Legacy docs cilium io","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _chaining_azure:\n\n******************\nAzure CNI (Legacy)\n******************\n\n.. note::\n\n   For most users, the best way to run Cilium on AKS is either\n   AKS BYO CNI as described in :ref:`k8s_install_quick`\n   or `Azure CNI Powered by Cilium <https:\/\/aka.ms\/aks\/cilium-dataplane>`__.\n   This guide provides alternative instructions to run Cilium with Azure CNI\n   in a chaining configuration. This is the legacy way of running Azure CNI with\n   cilium as Azure IPAM is legacy, for more information see :ref:`ipam_azure`.\n\n.. include:: cni-chaining-limitations.rst\n\n.. admonition:: Video\n :class: attention\n\n  If you'd like a video explanation of the Azure CNI Powered by Cilium, check out `eCHO episode 70: Azure CNI Powered by Cilium <https:\/\/www.youtube.com\/watch?v=8it8Hm2F_GM>`__.\n\nThis guide explains how to set up Cilium in combination with Azure CNI in a\nchaining configuration. In this hybrid mode, the Azure CNI plugin is\nresponsible for setting up the virtual network devices as well as address\nallocation (IPAM). After the initial networking is setup, the Cilium CNI plugin\nis called to attach eBPF programs to the network devices set up by Azure CNI to\nenforce network policies, perform load-balancing, and encryption.\n\n\nCreate an AKS + Cilium CNI configuration\n========================================\n\nCreate a ``chaining.yaml`` file based on the following template to specify the\ndesired CNI chaining configuration. This :term:`ConfigMap` will be installed as the CNI\nconfiguration file on all nodes and defines the chaining configuration. In the\nexample below, the Azure CNI, portmap, and Cilium are chained together.\n\n.. 
code-block:: yaml\n\n    apiVersion: v1\n    kind: ConfigMap\n    metadata:\n      name: cni-configuration\n      namespace: kube-system\n    data:\n      cni-config: |-\n        {\n          \"cniVersion\": \"0.3.0\",\n          \"name\": \"azure\",\n          \"plugins\": [\n            {\n              \"type\": \"azure-vnet\",\n              \"mode\": \"transparent\",\n              \"ipam\": {\n                 \"type\": \"azure-vnet-ipam\"\n               }\n            },\n            {\n              \"type\": \"portmap\",\n              \"capabilities\": {\"portMappings\": true},\n              \"snat\": true\n            },\n            {\n               \"name\": \"cilium\",\n               \"type\": \"cilium-cni\"\n            }\n          ]\n        }\n\nDeploy the :term:`ConfigMap`:\n\n.. code-block:: shell-session\n\n   kubectl apply -f chaining.yaml\n\n\nDeploy Cilium\n=============\n\n.. include:: k8s-install-download-release.rst\n\nDeploy Cilium release via Helm:\n\n.. parsed-literal::\n\n   helm install cilium |CHART_RELEASE| \\\\\n     --namespace kube-system \\\\\n     --set cni.chainingMode=generic-veth \\\\\n     --set cni.customConf=true \\\\\n     --set cni.exclusive=false \\\\\n     --set nodeinit.enabled=true \\\\\n     --set cni.configMap=cni-configuration \\\\\n     --set routingMode=native \\\\\n     --set enableIPv4Masquerade=false \\\\\n     --set endpointRoutes.enabled=true\n\nThis will create both the main cilium daemonset, as well as the cilium-node-init daemonset, which handles tasks like mounting the eBPF filesystem and updating the\nexisting Azure CNI plugin to run in 'transparent' mode.\n\n.. include:: k8s-install-restart-pods.rst\n\n.. include:: k8s-install-validate.rst\n\n.. 
include:: next-steps.rst","site":"cilium"}
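In the chaining ConfigMap above, plugin order is the key point: azure-vnet runs first (device setup and IPAM), portmap next (hostPort support), and cilium-cni last so it can attach eBPF programs to devices the earlier plugins created. A small Python sketch that assembles that conflist and makes the ordering invariant explicit (the function name is invented; the plugin entries mirror the ConfigMap):

```python
import json


def chained_cni_config() -> str:
    """Assemble the CNI chaining conflist from the ConfigMap above.

    azure-vnet must precede cilium-cni, since Cilium attaches eBPF
    programs to network devices that Azure CNI sets up.
    """
    conf = {
        "cniVersion": "0.3.0",
        "name": "azure",
        "plugins": [
            {
                "type": "azure-vnet",
                "mode": "transparent",
                "ipam": {"type": "azure-vnet-ipam"},
            },
            {
                "type": "portmap",
                "capabilities": {"portMappings": True},
                "snat": True,
            },
            {"name": "cilium", "type": "cilium-cni"},
        ],
    }
    # cilium-cni must come after the plugin doing device setup / IPAM.
    assert conf["plugins"][-1]["type"] == "cilium-cni"
    return json.dumps(conf, indent=2)


cfg = json.loads(chained_cni_config())
```

This is only a sanity-check sketch of the configuration data, not a CNI implementation; in practice the JSON is deployed verbatim via the ConfigMap.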
{"questions":"cilium This guide will show you how to install Cilium using Helm https helm sh This involves a couple of additional steps compared to docs cilium io Installation using Helm k8sinstallhelm","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _k8s_install_helm:\n\n***********************\nInstallation using Helm\n***********************\n\nThis guide will show you how to install Cilium using `Helm\n<https:\/\/helm.sh\/>`_. This involves a couple of additional steps compared to\nthe :ref:`k8s_quick_install` and requires you to manually select the best\ndatapath and IPAM mode for your particular environment.\n\nInstall Cilium\n==============\n\n.. include:: k8s-install-download-release.rst\n\n.. tabs::\n\n    .. group-tab:: Generic\n\n       These are the generic instructions on how to install Cilium into any\n       Kubernetes cluster using the default configuration options below. Please\n       see the other tabs for distribution\/platform specific instructions which\n       also list the ideal default configuration for particular platforms.\n\n       **Default Configuration:**\n\n       =============== =============== ==============\n       Datapath        IPAM            Datastore\n       =============== =============== ==============\n       Encapsulation   Cluster Pool    Kubernetes CRD\n       =============== =============== ==============\n\n       .. include:: requirements-generic.rst\n\n       **Install Cilium:**\n\n       Deploy Cilium release via Helm:\n\n       .. parsed-literal::\n\n          helm install cilium |CHART_RELEASE| \\\\\n            --namespace kube-system\n\n    .. group-tab:: GKE\n\n       .. include:: requirements-gke.rst\n\n       **Install Cilium:**\n\n       Extract the Cluster CIDR to enable native-routing:\n\n       .. 
code-block:: shell-session\n\n          NATIVE_CIDR=\"$(gcloud container clusters describe \"${NAME}\" --zone \"${ZONE}\" --format 'value(clusterIpv4Cidr)')\"\n          echo $NATIVE_CIDR\n\n       Deploy Cilium release via Helm:\n\n       .. parsed-literal::\n\n          helm install cilium |CHART_RELEASE| \\\\\n            --namespace kube-system \\\\\n            --set nodeinit.enabled=true \\\\\n            --set nodeinit.reconfigureKubelet=true \\\\\n            --set nodeinit.removeCbrBridge=true \\\\\n            --set cni.binPath=\/home\/kubernetes\/bin \\\\\n            --set gke.enabled=true \\\\\n            --set ipam.mode=kubernetes \\\\\n            --set ipv4NativeRoutingCIDR=$NATIVE_CIDR\n\n       The NodeInit DaemonSet is required to prepare the GKE nodes as nodes are added\n       to the cluster. The NodeInit DaemonSet will perform the following actions:\n\n       * Reconfigure kubelet to run in CNI mode\n       * Mount the eBPF filesystem\n\n    .. group-tab:: AKS\n\n       .. include:: ..\/installation\/requirements-aks.rst\n\n       **Install Cilium:**\n\n       Deploy Cilium release via Helm:\n\n       .. parsed-literal::\n\n          helm install cilium |CHART_RELEASE| \\\\\n            --namespace kube-system \\\\\n            --set aksbyocni.enabled=true \\\\\n            --set nodeinit.enabled=true\n\n       .. note::\n\n          Installing Cilium via helm is supported only for AKS BYOCNI cluster and\n          not for Azure CNI Powered by Cilium clusters.\n\n    .. group-tab:: EKS\n\n       .. include:: requirements-eks.rst\n\n       **Patch VPC CNI (aws-node DaemonSet)**\n\n       Cilium will manage ENIs instead of VPC CNI, so the ``aws-node``\n       DaemonSet has to be patched to prevent conflict behavior.\n\n       .. 
code-block:: shell-session\n\n          kubectl -n kube-system patch daemonset aws-node --type='strategic' -p='{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"io.cilium\/aws-node-enabled\":\"true\"}}}}}'\n\n       **Install Cilium:**\n\n       Deploy Cilium release via Helm:\n\n       .. parsed-literal::\n\n          helm install cilium |CHART_RELEASE| \\\\\n            --namespace kube-system \\\\\n            --set eni.enabled=true \\\\\n            --set ipam.mode=eni \\\\\n            --set egressMasqueradeInterfaces=eth+ \\\\\n            --set routingMode=native\n\n       .. note::\n\n          This helm command sets ``eni.enabled=true`` and ``routingMode=native``,\n          meaning that Cilium will allocate a fully-routable AWS ENI IP address\n          for each pod, similar to the behavior of the `Amazon VPC CNI plugin\n          <https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/pod-networking.html>`_.\n\n          This mode depends on a set of :ref:`ec2privileges` from the EC2 API.\n\n          Cilium can alternatively run in EKS using an overlay mode that gives\n          pods non-VPC-routable IPs.  This allows running more pods per\n          Kubernetes worker node than the ENI limit but includes the following caveats:\n\n            1. Pod connectivity to resources outside the cluster (e.g., VMs in the VPC\n               or AWS managed services) is masqueraded (i.e., SNAT) by Cilium to use the\n               VPC IP address of the Kubernetes worker node.\n            2. The EKS API Server is unable to route packets to the overlay network. This\n               implies that any `webhook <https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/webhook\/>`_\n               which needs to be accessed must be host networked or exposed through a service\n               or ingress.\n\n          To set up Cilium overlay mode, follow the steps below:\n\n            1. 
Excluding the lines for ``eni.enabled=true``, ``ipam.mode=eni`` and \n               ``routingMode=native`` from the helm command will configure Cilium to use\n               overlay routing mode (which is the helm default).\n            2. Flush iptables rules added by VPC CNI\n\n               .. code-block:: shell-session\n               \n                  iptables -t nat -F AWS-SNAT-CHAIN-0 \\\\\n                     && iptables -t nat -F AWS-SNAT-CHAIN-1 \\\\\n                     && iptables -t nat -F AWS-CONNMARK-CHAIN-0 \\\\\n                     && iptables -t nat -F AWS-CONNMARK-CHAIN-1\n\n         Some Linux distributions use a different interface naming convention.\n         If you use masquerading with the option ``egressMasqueradeInterfaces=eth+``,\n         remember to replace ``eth+`` with the proper interface name. For\n         reference, Amazon Linux 2 uses ``eth+``, whereas Amazon Linux 2023 uses\n         ``ens+``. Mixed node clusters are not supported currently.\n\n    .. group-tab:: OpenShift\n\n       .. include:: requirements-openshift.rst\n\n       **Install Cilium:**\n\n       Cilium is a `Certified OpenShift CNI Plugin <https:\/\/access.redhat.com\/articles\/5436171>`_\n       and is best installed when an OpenShift cluster is created using the OpenShift\n       installer. Please refer to :ref:`k8s_install_openshift_okd` for more information.\n\n    .. group-tab:: RKE\n\n       .. include:: requirements-rke.rst\n\n    .. group-tab:: k3s\n\n       .. include:: requirements-k3s.rst\n\n       **Install Cilium:**\n\n       .. parsed-literal::\n\n          helm install cilium |CHART_RELEASE| \\\\\n             --namespace $CILIUM_NAMESPACE \\\\\n             --set operator.replicas=1\n\n    .. group-tab:: Rancher Desktop\n\n       **Configure Rancher Desktop:**\n\n       To install Cilium on `Rancher Desktop <https:\/\/rancherdesktop.io>`_,\n       perform the following steps:\n\n       .. 
include:: rancher-desktop-configure.rst\n\n       **Install Cilium:**\n\n       .. parsed-literal::\n\n          helm install cilium |CHART_RELEASE| \\\\\n             --namespace $CILIUM_NAMESPACE \\\\\n             --set operator.replicas=1 \\\\\n             --set cni.binPath=\/usr\/libexec\/cni\n\n    .. group-tab:: Talos Linux\n\n       To install Cilium on `Talos Linux <https:\/\/www.talos.dev\/>`_,\n       perform the following steps.\n\n       .. include:: k8s-install-talos-linux.rst\n\n    .. group-tab:: Alibaba ACK\n\n        .. include:: ..\/installation\/alibabacloud-eni.rst\n\n.. admonition:: Video\n  :class: attention\n\n  If you'd like to learn more about Cilium Helm values, check out `eCHO episode 117: A Tour of the Cilium Helm Values <https:\/\/www.youtube.com\/watch?v=ni0Uw4WLHYo>`__.\n\n.. include:: k8s-install-restart-pods.rst\n\n.. include:: k8s-install-validate.rst\n\n.. include:: next-steps.rst","site":"cilium"}
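The EKS tab in the record above switches between ENI mode and the default overlay mode purely by which Helm values are passed. As a sketch (the `MODE` variable and the `cilium-values.yaml` file name are illustrative, not part of the guide), the same choice can be captured in a values file instead of a list of `--set` flags:

```shell
# Sketch only: MODE and the cilium-values.yaml file name are illustrative.
# MODE=eni writes the ENI-mode values shown in the EKS tab; any other value
# writes an empty file, leaving the chart's overlay defaults in place.
MODE="${MODE:-eni}"
if [ "$MODE" = "eni" ]; then
  cat > cilium-values.yaml <<'EOF'
eni:
  enabled: true
ipam:
  mode: eni
egressMasqueradeInterfaces: eth+
routingMode: native
EOF
else
  : > cilium-values.yaml   # overlay mode: rely on the Helm chart defaults
fi
echo "values written for mode $MODE"
# Then: helm install cilium cilium/cilium --namespace kube-system -f cilium-values.yaml
```

Passing `-f cilium-values.yaml` to `helm install` is equivalent to the `--set` flags shown in the EKS tab, and keeps the mode choice in one reviewable file.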
{"questions":"cilium Installation using Kops docs cilium io k8sinstallkops As of kops 1 9 release Cilium can be plugged into kops deployed kopsguide","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _kops_guide:\n.. _k8s_install_kops:\n\n***********************\nInstallation using Kops\n***********************\n\nAs of the kops 1.9 release, Cilium can be plugged into kops-deployed\nclusters as the CNI plugin. This guide provides steps to create a Kubernetes\ncluster on AWS using kops and Cilium as the CNI plugin. Note that the kops\ndeployment automates several deployment features in AWS by default,\nincluding AutoScaling, Volumes, VPCs, etc.\n\nKops offers several out-of-the-box configurations of Cilium including :ref:`kubeproxy-free`,\n:ref:`ipam_eni`, and a dedicated etcd cluster for Cilium. This guide only goes through a basic setup.\n\n\nPrerequisites\n=============\n\n* `aws cli <https:\/\/aws.amazon.com\/cli\/>`_\n* `kubectl <https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl\/>`_\n* an AWS account with the following permissions:\n\n  * AmazonEC2FullAccess\n  * AmazonRoute53FullAccess\n  * AmazonS3FullAccess\n  * IAMFullAccess\n  * AmazonVPCFullAccess\n\n\nInstalling kops\n===============\n\n.. tabs::\n  .. group-tab:: Linux\n\n    .. code-block:: shell-session\n\n        curl -LO https:\/\/github.com\/kubernetes\/kops\/releases\/download\/$(curl -s https:\/\/api.github.com\/repos\/kubernetes\/kops\/releases\/latest | grep tag_name | cut -d '\"' -f 4)\/kops-linux-amd64\n        chmod +x kops-linux-amd64\n        sudo mv kops-linux-amd64 \/usr\/local\/bin\/kops\n\n  .. group-tab:: MacOS\n\n    .. 
code-block:: shell-session\n\n        brew update && brew install kops\n\n\nSetting up IAM Group and User\n=============================\n\nAssuming you have all the prerequisites, run the following commands to create\nthe kops user and group:\n\n.. code-block:: shell-session\n\n        $ # Create IAM group named kops and grant access\n        $ aws iam create-group --group-name kops\n        $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy\/AmazonEC2FullAccess --group-name kops\n        $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy\/AmazonRoute53FullAccess --group-name kops\n        $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy\/AmazonS3FullAccess --group-name kops\n        $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy\/IAMFullAccess --group-name kops\n        $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy\/AmazonVPCFullAccess --group-name kops\n        $ aws iam create-user --user-name kops\n        $ aws iam add-user-to-group --user-name kops --group-name kops\n        $ aws iam create-access-key --user-name kops\n\n\nkops requires a dedicated S3 bucket to store the state and representation of\nthe cluster. Replace the bucket name below with your own unique name (for\nexample, a reversed FQDN with a short description of the cluster appended).\nAlso make sure to use the region where you will be deploying the cluster.\n\n.. code-block:: shell-session\n\n        $ aws s3api create-bucket --bucket prefix-example-com-state-store --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2\n        $ export KOPS_STATE_STORE=s3:\/\/prefix-example-com-state-store\n\nThe above steps are sufficient for getting a working cluster installed. 
Please\nconsult the `kops aws documentation\n<https:\/\/kops.sigs.k8s.io\/getting_started\/install\/>`_ for more\ndetailed setup instructions.\n\n\nCilium Prerequisites\n====================\n\n* Ensure the :ref:`admin_system_reqs` are met, particularly the Linux kernel\n  and key-value store versions.\n\nThe default AMI satisfies the minimum kernel version required by Cilium, which is\nwhat we will use in this guide.\n\n\nCreating a Cluster\n==================\n\n* Note that you will need to specify the ``--master-zones`` and ``--zones`` for\n  creating the master and worker nodes. The number of master zones should be\n  odd (1, 3, ...) for HA. For simplicity, you can just use one region.\n* To keep things simple when following this guide, we will use a gossip-based cluster.\n  This means you do not have to create a hosted zone upfront. The cluster ``NAME`` variable\n  must end with ``k8s.local`` to use the gossip protocol. If creating multiple clusters\n  using the same kops user, make the cluster name unique by adding a prefix such as\n  ``com-company-emailid-``.\n\n\n.. code-block:: shell-session\n\n        $ export NAME=com-company-emailid-cilium.k8s.local\n        $ kops create cluster --state=${KOPS_STATE_STORE} --node-count 3 --topology private --master-zones us-west-2a,us-west-2b,us-west-2c --zones us-west-2a,us-west-2b,us-west-2c --networking cilium --cloud-labels \"Team=Dev,Owner=Admin\" ${NAME} --yes\n\n\nYou may be prompted to create an SSH public\/private key pair.\n\n.. code-block:: shell-session\n\n        $ ssh-keygen\n\n\n(Please see :ref:`appendix_kops`)\n\n.. include:: k8s-install-validate.rst\n\n.. _appendix_kops:\n\n\nDeleting a Cluster\n==================\n\nTo undo the dependencies and other deployment features that kops created in\nAWS, use kops to destroy the cluster *immediately* with the\nparameter ``--yes``:\n\n.. 
code-block:: shell-session\n\n        $ kops delete cluster ${NAME} --yes\n\n\nFurther reading on using Cilium with Kops\n=========================================\n* See the `kops networking documentation <https:\/\/kops.sigs.k8s.io\/networking\/cilium\/>`_ for more information on the\n  configuration options kops offers.\n* See the `kops cluster spec documentation <https:\/\/pkg.go.dev\/k8s.io\/kops\/pkg\/apis\/kops?tab=doc#CiliumNetworkingSpec>`_ for a comprehensive list of all the options.\n\n\nAppendix: Details of kops flags used in cluster creation\n========================================================\n\nThe following section explains all the flags used in the create cluster command.\n\n* ``--state=${KOPS_STATE_STORE}`` : kops uses an S3 bucket to store the state and representation of your cluster\n* ``--node-count 3`` : Number of worker nodes in the Kubernetes cluster.\n* ``--topology private`` : The cluster will be created with a private topology, meaning all masters\/nodes will be launched in a private subnet in the VPC\n* ``--master-zones us-west-2a,us-west-2b,us-west-2c`` : The three zones ensure HA of the master nodes, each in a different Availability Zone.\n* ``--zones us-west-2a,us-west-2b,us-west-2c`` : Zones where the worker nodes will be deployed\n* ``--networking cilium`` : The CNI plugin to use - cilium. You can also use ``cilium-etcd``, which will use a dedicated etcd cluster as the key\/value store instead of CRDs.\n* ``--cloud-labels \"Team=Dev,Owner=Admin\"`` : Labels for your cluster that will be applied to your instances\n* ``${NAME}`` : Name of the cluster. 
Make sure the name ends with ``k8s.local`` for a gossip-based cluster.","site":"cilium"}
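The gossip naming rule from the kops guide above (the cluster name must end in `k8s.local`) is easy to get wrong, and a bad name only fails later at cluster creation. A minimal pre-flight sketch, assuming the guide's example name; the `check_gossip_name` helper is illustrative, not a kops command:

```shell
# Sketch only: check_gossip_name is an illustrative helper; the default
# NAME mirrors the guide's example. kops gossip DNS requires the cluster
# name to end in ".k8s.local".
check_gossip_name() {
  case "$1" in
    *.k8s.local) return 0 ;;
    *) return 1 ;;
  esac
}

NAME="${NAME:-com-company-emailid-cilium.k8s.local}"
if check_gossip_name "$NAME"; then
  echo "ok: $NAME can use gossip DNS"
else
  echo "error: $NAME must end with .k8s.local" >&2
fi
# Only then run: kops create cluster ... ${NAME} --yes
```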
{"questions":"cilium Cilium as the CNI The guide uses The guide is to use Kubespray for creating an AWS Kubernetes cluster running docs cilium io Installation using Kubespray k8sinstallkubespray","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _k8s_install_kubespray:\n\n****************************\nInstallation using Kubespray\n****************************\n\nThis guide uses Kubespray to create an AWS Kubernetes cluster running\nCilium as the CNI. The guide uses:\n\n  - Kubespray v2.6.0\n  - The latest `Cilium released version`_ (instructions for using it are mentioned below)\n\nPlease consult the `Kubespray Prerequisites <https:\/\/github.com\/kubernetes-sigs\/kubespray#requirements>`__ and the Cilium :ref:`admin_system_reqs`.\n\n.. _Cilium released version: `latest released Cilium version`_\n\nInstalling Kubespray\n====================\n\n.. code-block:: shell-session\n\n  $ git clone --branch v2.6.0 https:\/\/github.com\/kubernetes-sigs\/kubespray\n\nInstall the dependencies from ``requirements.txt``:\n\n.. code-block:: shell-session\n\n  $ cd kubespray\n  $ sudo pip install -r requirements.txt\n\n\nInfrastructure Provisioning\n===========================\n\nWe will use Terraform for provisioning AWS infrastructure.\n\nConfigure AWS credentials\n-------------------------\n\nExport the variables for your AWS credentials:\n\n.. code-block:: shell-session\n\n  export AWS_ACCESS_KEY_ID=\"www\"\n  export AWS_SECRET_ACCESS_KEY=\"xxx\"\n  export AWS_SSH_KEY_NAME=\"yyy\"\n  export AWS_DEFAULT_REGION=\"zzz\"\n\nConfigure Terraform Variables\n-----------------------------\n\nWe will start by specifying the infrastructure needed for the Kubernetes cluster.\n\n.. 
code-block:: shell-session\n\n  $ cd contrib\/terraform\/aws\n  $ cp terraform.tfvars.example terraform.tfvars\n\nOpen the file and change any defaults, particularly the number of master, etcd, and worker nodes.\nYou can change the master and etcd number to 1 for deployments that don't need high availability.\nBy default, this tutorial will create:\n\n  - A VPC with 2 public and 2 private subnets\n  - Bastion Hosts and NAT Gateways in the Public Subnet\n  - Three each of master, etcd, and worker nodes in the Private Subnet\n  - An AWS ELB in the Public Subnet for accessing the Kubernetes API from\n    the internet\n  - Terraform scripts using ``CoreOS`` as the base image.\n\nExample ``terraform.tfvars`` file:\n\n.. code-block:: bash\n\n  #Global Vars\n  aws_cluster_name = \"kubespray\"\n\n  #VPC Vars\n  aws_vpc_cidr_block = \"XXX.XXX.192.0\/18\"\n  aws_cidr_subnets_private = [\"XXX.XXX.192.0\/20\",\"XXX.XXX.208.0\/20\"]\n  aws_cidr_subnets_public = [\"XXX.XXX.224.0\/20\",\"XXX.XXX.240.0\/20\"]\n\n  #Bastion Host\n  aws_bastion_size = \"t2.medium\"\n\n\n  #Kubernetes Cluster\n\n  aws_kube_master_num = 3\n  aws_kube_master_size = \"t2.medium\"\n\n  aws_etcd_num = 3\n  aws_etcd_size = \"t2.medium\"\n\n  aws_kube_worker_num = 3\n  aws_kube_worker_size = \"t2.medium\"\n\n  #Settings AWS ELB\n\n  aws_elb_api_port = 6443\n  k8s_secure_api_port = 6443\n  kube_insecure_apiserver_address = \"0.0.0.0\"\n\n\nApply the configuration\n-----------------------\n\nRun ``terraform init`` to initialize the following modules:\n\n  - ``module.aws-vpc``\n  - ``module.aws-elb``\n  - ``module.aws-iam``\n\n.. code-block:: shell-session\n\n  $ terraform init\n\nOnce initialized, execute:\n\n.. code-block:: shell-session\n\n  $ terraform plan -out=aws_kubespray_plan\n\nThis will generate a file, ``aws_kubespray_plan``, depicting an execution\nplan of the infrastructure that will be created on AWS. To apply, execute:\n\n.. 
code-block:: shell-session\n\n  $ terraform apply \"aws_kubespray_plan\"\n\nTerraform automatically creates an Ansible Inventory file at ``inventory\/hosts``.\n\nInstalling Kubernetes cluster with Cilium as CNI\n================================================\n\nKubespray uses Ansible as its substrate for provisioning and orchestration. Once the infrastructure is created, you can run the Ansible playbook to install Kubernetes and all the required dependencies. Execute the command below from the cloned kubespray repository, providing the correct path of the AWS EC2 SSH private key in ``ansible_ssh_private_key_file=<path to EC2 SSH private key file>``.\n\nWe recommend using the `latest released Cilium version`_ by passing the variable when running the ``ansible-playbook`` command.\nFor example, you could add the following flag to the command below: ``-e cilium_version=v1.11.0``.\n\n.. code-block:: shell-session\n\n  $ ansible-playbook -i .\/inventory\/hosts .\/cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=<path to EC2 SSH private key file>\n\n.. _latest released Cilium version: https:\/\/github.com\/cilium\/cilium\/releases\n\nIf you want to configure your Kubernetes cluster setup further, consider copying the sample inventory. Then, you can edit the variables in the relevant file in the ``group_vars`` directory.\n\n.. 
code-block:: shell-session\n\n  $ cp -r inventory\/sample inventory\/my-inventory\n  $ cp .\/inventory\/hosts .\/inventory\/my-inventory\/hosts\n  $ echo 'cilium_version: \"v1.11.0\"' >> .\/inventory\/my-inventory\/group_vars\/k8s_cluster\/k8s-net-cilium.yml\n  $ ansible-playbook -i .\/inventory\/my-inventory\/hosts .\/cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=<path to EC2 SSH private key file>\n\nValidate Cluster\n================\n\nTo check whether the cluster was created successfully, SSH into the bastion host as the user ``core``.\n\n.. code-block:: shell-session\n\n  $ # Get information about the bastion host\n  $ cat ssh-bastion.conf\n  $ ssh -i ~\/path\/to\/ec2-key-file.pem core@public_ip_of_bastion_host\n\nExecute the commands below from the bastion host. If ``kubectl`` isn't installed on the bastion host, you can log in to the master node to run them. You may need to copy the private key to the bastion host to access the master node.\n\n.. include:: k8s-install-validate.rst\n\nDelete Cluster\n==============\n\n.. 
code-block:: shell-session\n\n  $ cd contrib\/terraform\/aws\n  $ terraform destroy\n","site":"cilium"}
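The Kubespray record above exports `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SSH_KEY_NAME`, and `AWS_DEFAULT_REGION` before any Terraform command is run. As a minimal illustrative sketch (not part of the guide; the helper name `check_aws_env` is made up), a pre-flight check can fail fast when any of them is still unset:

```shell
# Pre-flight sketch: report which of the guide's required AWS variables
# are unset before invoking `terraform plan` / `terraform apply`.
check_aws_env() {
  missing=""
  for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SSH_KEY_NAME AWS_DEFAULT_REGION; do
    # Indirect lookup of the variable named in $v (POSIX sh has no ${!v}).
    eval "val=\${$v}"
    [ -n "$val" ] || missing="$missing $v"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}
```

Calling `check_aws_env` at the top of a `set -e` provisioning script stops it before Terraform runs against the wrong (or no) credentials.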
{"questions":"cilium AWS VPC CNI plugin docs cilium io plugin In this hybrid mode the AWS VPC CNI plugin is responsible for setting This guide explains how to set up Cilium in combination with the AWS VPC CNI chainingawscni","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _chaining_aws_cni:\n\n******************\nAWS VPC CNI plugin\n******************\n\nThis guide explains how to set up Cilium in combination with the AWS VPC CNI\nplugin. In this hybrid mode, the AWS VPC CNI plugin is responsible for setting\nup the virtual network devices as well as for IP address management (IPAM) via\nENIs. After the initial networking is setup for a given pod, the Cilium CNI\nplugin is called to attach eBPF programs to the network devices set up by the\nAWS VPC CNI plugin in order to enforce network policies, perform load-balancing\nand provide encryption.\n\n.. image:: aws-cilium-architecture.png\n\n.. include:: cni-chaining-limitations.rst\n\n.. admonition:: Video\n  :class: attention\n\n  If you require advanced features of Cilium, consider migrating fully to Cilium.\n  To help you with the process, you can watch two Principal Engineers at Meltwater talk about `how they migrated\n  Meltwater's production Kubernetes clusters - from the AWS VPC CNI plugin to Cilium <https:\/\/www.youtube.com\/watch?v=w6S6baRHHu8&list=PLDg_GiBbAx-kDXqDYimwytMLh2kAHyMPd&t=182s>`__.\n\n.. important::\n\n   Please ensure that you are running version `1.11.2 <https:\/\/github.com\/aws\/amazon-vpc-cni-k8s\/releases\/tag\/v1.11.2>`_\n   or newer of the AWS VPC CNI plugin to guarantee compatibility with Cilium.\n\n   .. 
code-block:: shell-session\n\n      $ kubectl -n kube-system get ds\/aws-node -o json | jq -r '.spec.template.spec.containers[0].image'\n      602401143452.dkr.ecr.us-west-2.amazonaws.com\/amazon-k8s-cni:v1.11.2\n\n   If you are running an older version, as in the above example, you can upgrade it with:\n\n   .. code-block:: shell-session\n\n      $ kubectl apply -f https:\/\/raw.githubusercontent.com\/aws\/amazon-vpc-cni-k8s\/release-1.11\/config\/master\/aws-k8s-cni.yaml\n\n.. image:: aws-cni-architecture.png\n\n\nSetting up a cluster on AWS\n===========================\n\nFollow the instructions in the :ref:`k8s_install_quick` guide to set up an EKS\ncluster, or use any other method of your preference to set up a Kubernetes\ncluster on AWS.\n\nEnsure that the `aws-vpc-cni-k8s <https:\/\/github.com\/aws\/amazon-vpc-cni-k8s>`_\nplugin is installed \u2014 which will already be the case if you have created an EKS\ncluster. Also, ensure the version of the plugin is up-to-date as per the above.\n\n.. include:: k8s-install-download-release.rst\n\nDeploy Cilium via Helm:\n\n.. parsed-literal::\n\n   helm install cilium |CHART_RELEASE| \\\\\n     --namespace kube-system \\\\\n     --set cni.chainingMode=aws-cni \\\\\n     --set cni.exclusive=false \\\\\n     --set enableIPv4Masquerade=false \\\\\n     --set routingMode=native \\\\\n     --set endpointRoutes.enabled=true\n\nThis will enable chaining with the AWS VPC CNI plugin. It will also disable\ntunneling, as it's not required since ENI IP addresses can be directly routed\nin the VPC. For the same reason, masquerading can be disabled as well.\n\nRestart existing pods\n=====================\n\nThe new CNI chaining configuration *will not* apply to any pod that is already\nrunning in the cluster. Existing pods will be reachable, and Cilium will\nload-balance *to* them, but not *from* them. Policy enforcement will also not\nbe applied. 
For these reasons, you must restart these pods so that the chaining\nconfiguration can be applied to them.\n\nThe following command can be used to check which pods need to be restarted:\n\n.. code-block:: bash\n\n   for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do\n        ceps=$(kubectl -n \"${ns}\" get cep \\\n            -o jsonpath='{.items[*].metadata.name}')\n        pods=$(kubectl -n \"${ns}\" get pod \\\n            -o custom-columns=NAME:.metadata.name,NETWORK:.spec.hostNetwork \\\n            | grep -E '\\s(<none>|false)' | awk '{print $1}' | tr '\\n' ' ')\n        ncep=$(echo \"${pods} ${ceps}\" | tr ' ' '\\n' | sort | uniq -u | paste -s -d ' ' -)\n        for pod in $(echo $ncep); do\n          echo \"${ns}\/${pod}\";\n        done\n   done\n\n.. include:: k8s-install-validate.rst\n\nAdvanced\n========\n\nEnabling security groups for pods (EKS)\n---------------------------------------\n\nCilium can be used alongside the `security groups for pods <https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/security-groups-for-pods.html>`_\nfeature of EKS in supported clusters when running in chaining mode. Follow the\ninstructions below to enable this feature:\n\n.. important::\n\n   The following guide requires `jq <https:\/\/stedolan.github.io\/jq\/>`_ and the\n   `AWS CLI <https:\/\/aws.amazon.com\/cli\/>`_ to be installed and configured.\n\nMake sure that the ``AmazonEKSVPCResourceController`` managed policy is attached\nto the IAM role associated with the EKS cluster:\n\n.. 
code-block:: shell-session\n\n   export EKS_CLUSTER_NAME=\"my-eks-cluster\" # Change accordingly\n   export EKS_CLUSTER_ROLE_NAME=$(aws eks describe-cluster \\\n        --name \"${EKS_CLUSTER_NAME}\" \\\n        | jq -r '.cluster.roleArn' | awk -F\/ '{print $NF}')\n   aws iam attach-role-policy \\\n        --policy-arn arn:aws:iam::aws:policy\/AmazonEKSVPCResourceController \\\n        --role-name \"${EKS_CLUSTER_ROLE_NAME}\"\n\nThen, as mentioned above, make sure that the version of the AWS VPC CNI\nplugin running in the cluster is up-to-date:\n\n.. code-block:: shell-session\n\n   kubectl -n kube-system get ds\/aws-node \\\n     -o jsonpath='{.spec.template.spec.containers[0].image}'\n   602401143452.dkr.ecr.us-west-2.amazonaws.com\/amazon-k8s-cni:v1.7.10\n\nNext, patch the ``kube-system\/aws-node`` DaemonSet in order to enable security\ngroups for pods:\n\n.. code-block:: shell-session\n\n   kubectl -n kube-system patch ds aws-node \\\n     -p '{\"spec\":{\"template\":{\"spec\":{\"initContainers\":[{\"env\":[{\"name\":\"DISABLE_TCP_EARLY_DEMUX\",\"value\":\"true\"}],\"name\":\"aws-vpc-cni-init\"}],\"containers\":[{\"env\":[{\"name\":\"ENABLE_POD_ENI\",\"value\":\"true\"}],\"name\":\"aws-node\"}]}}}}'\n   kubectl -n kube-system rollout status ds aws-node\n\nAfter the rollout is complete, all nodes in the cluster should have the ``vpc.amazonaws.com\/has-trunk-attached`` label set to ``true``:\n\n.. code-block:: shell-session\n\n   kubectl get nodes -L vpc.amazonaws.com\/has-trunk-attached\n   NAME                                            STATUS   ROLES    AGE   VERSION              HAS-TRUNK-ATTACHED\n   ip-192-168-111-169.eu-west-2.compute.internal   Ready    <none>   22m   v1.19.6-eks-49a6c0   true\n   ip-192-168-129-175.eu-west-2.compute.internal   Ready    <none>   22m   v1.19.6-eks-49a6c0   true\n\nFrom this moment everything should be in place. 
For details on how to actually\nassociate security groups to pods, please refer to the `official documentation <https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/security-groups-for-pods.html>`_.\n\n.. include:: next-steps.rst","site":"cilium"}
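In the EKS security-groups section above, the cluster role name is derived from the role ARN with `awk -F/ '{print $NF}'` (the last `/`-separated field). The same parsing can be sketched with POSIX parameter expansion; the helper name and the sample ARN below are made up for illustration:

```shell
# Extract the role name from an IAM role ARN by stripping everything up to
# the final '/', mirroring the guide's `awk -F/ '{print $NF}'` step.
role_name_from_arn() {
  printf '%s\n' "${1##*/}"
}

# Fabricated example ARN, not a real account:
role_name_from_arn "arn:aws:iam::111122223333:role/my-eks-cluster-role"
# prints "my-eks-cluster-role"
```

`${1##*/}` removes the longest prefix matching `*/`, so it also handles role ARNs that contain an IAM path (e.g. `role/service-role/<name>`), just as the `awk` field split does.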
{"questions":"cilium Migrating a cluster to Cilium docs cilium io be migrated on a node by node basis without disrupting existing traffic cnimigration Cilium can be used to migrate from another cni Running clusters can","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _cni_migration:\n\n*************************************\nMigrating a cluster to Cilium\n*************************************\n\nCilium can be used to migrate from another CNI. Running clusters can\nbe migrated on a node-by-node basis, without disrupting existing traffic\nor requiring a complete cluster outage or rebuild, depending on the complexity of the migration case.\n\nThis document outlines how migrations with Cilium work. You will have a good\nunderstanding of the basic requirements, as well as see an example migration\nwhich you can practice using :ref:`Kind <gs_kind>`.\n\n\nBackground\n==========\n\nWhen the kubelet creates a Pod's Sandbox, the installed CNI, as configured in ``\/etc\/cni\/net.d\/``,\nis called. The CNI will handle the networking for a pod - including allocating\nan IP address, creating & configuring a network interface, and (potentially)\nestablishing an overlay network. The Pod's network configuration shares the\nsame life cycle as the PodSandbox.\n\nIn the case of migration, we typically reconfigure ``\/etc\/cni\/net.d\/`` to point\nto Cilium. However, any existing pods will still have been configured by the old\nnetwork plugin and any new pods will be configured by the new CNI. 
To complete\nthe migration, all Pods on the cluster that are configured by the old CNI must be\nrecycled in order to be members of the new CNI.\n\nA naive approach to migrating a CNI would be to reconfigure all nodes with a new\nCNI and then gradually restart each node in the cluster, thus replacing the CNI\nwhen the node is brought back up and ensuring that all pods are part of the new CNI.\n\nThis simple migration, while effective, comes at the cost of disrupting cluster\nconnectivity during the rollout. Unmigrated and migrated nodes would be split\ninto two \"islands\" of connectivity, and pods would be randomly unable to reach one another\nuntil the migration is complete.\n\nMigration via dual overlays\n---------------------------\n\nInstead, Cilium supports a *hybrid* mode, where two separate overlays are established\nacross the cluster. While pods on a given node can only be attached to one network,\nthey have access to both Cilium and non-Cilium pods while the migration is\ntaking place. As long as Cilium and the existing networking provider use a separate\nIP range, the Linux routing table takes care of separating traffic.\n\nIn this document we will discuss a model for live migrating between two deployed\nCNI implementations. This will have the benefit of reducing downtime of nodes\nand workloads and ensuring that workloads on both configured CNIs can communicate\nduring migration.\n\nFor live migration to work, Cilium will be installed with a\nCIDR range and encapsulation port distinct from those of the currently installed CNI. 
As\nlong as Cilium and the existing CNI use a separate IP range, the Linux \nrouting table takes care of separating traffic.\n\n\n\nRequirements\n============\n\nLive migration requires the following:\n\n- A new, distinct Cluster CIDR for Cilium to use\n- Use of the :ref:`Cluster Pool IPAM mode<ipam_crd_cluster_pool>`\n- A distinct overlay, either protocol or port\n- An existing network plugin that uses the Linux routing stack, such as Flannel, Calico, or AWS-CNI\n\nLimitations\n===========\n\nCurrently, Cilium migration has not been tested with:\n\n- BGP-based routing\n- Changing IP families (e.g. from IPv4 to IPv6)\n- Migrating from Cilium in chained mode\n- An existing NetworkPolicy provider\n\nDuring migration, Cilium's  NetworkPolicy and CiliumNetworkPolicy enforcement \nwill be disabled. Otherwise, traffic from non-Cilium pods may be incorrectly\ndropped. Once the migration process is complete, policy enforcement can\nbe re-enabled. If there is an existing NetworkPolicy provider, you may wish to\ntemporarily delete all NetworkPolicies before proceeding.\n\nIt is strongly recommended to install Cilium using the :ref:`cluster-pool <ipam_crd_cluster_pool>`\nIPAM allocator. This provides the strongest assurance that there will\nbe no IP collisions.\n\n.. warning::\n  Migration is highly dependent on the exact configuration of existing\n  clusters. It is, thus, strongly recommended to perform a trial migration\n  on a test or lab cluster.\n\nOverview\n========\n\nThe migration process utilizes the :ref:`per-node configuration<per-node-configuration>`\nfeature to selectively enable Cilium CNI. This allows for a controlled rollout\nof Cilium without disrupting existing workloads.\n\nCilium will be installed, first, in a mode where it establishes an overlay\nbut does not provide CNI networking for any pods. Then, individual nodes will\nbe migrated.\n\nIn summary, the process looks like:\n\n1. Install cilium in \"secondary\" mode\n2. 
Cordon, drain, migrate, and reboot each node\n3. Remove the existing network provider\n4. (Optional) Reboot each node again\n\n\nMigration procedure\n===================\n\nPreparation\n-----------\n\n- Optional: Create a :ref:`Kind <gs_kind>` cluster and install `Flannel <https:\/\/github.com\/flannel-io\/flannel>`_ on it.\n\n    .. parsed-literal::\n\n      $ cat <<EOF > kind-config.yaml\n      apiVersion: kind.x-k8s.io\/v1alpha4\n      kind: Cluster\n      nodes:\n      - role: control-plane\n      - role: worker\n      - role: worker\n      networking:\n        disableDefaultCNI: true\n      EOF\n      $ kind create cluster --config=kind-config.yaml\n      $ kubectl apply -n kube-system --server-side -f \\ |SCM_WEB|\\\/examples\/misc\/migration\/install-reference-cni-plugins.yaml\n      $ kubectl apply --server-side -f https:\/\/github.com\/flannel-io\/flannel\/releases\/latest\/download\/kube-flannel.yml\n      $ kubectl wait --for=condition=Ready nodes --all\n\n- Optional: Monitor connectivity.\n\n  You may wish to install a tool such as `goldpinger <https:\/\/github.com\/bloomberg\/goldpinger>`_\n  to detect any possible connectivity issues.\n\n1.  Select a **new** CIDR for pods. It must be distinct from all other CIDRs in use.\n\n    For Kind clusters, the default is ``10.244.0.0\/16``. So, for this example, we will\n    use ``10.245.0.0\/16``.\n\n2.  Select a **distinct** encapsulation port. For example, if the existing cluster\n    is using VXLAN, then you should either use GENEVE or configure Cilium to use VXLAN\n    with a different port.\n\n    For this example, we will use VXLAN with a non-default port of 8473.\n\n3.  Create a helm ``values-migration.yaml`` file based on the following example. Be sure to fill\n    in the CIDR you selected in step 1.\n\n    .. 
code-block:: yaml\n\n        operator:\n          unmanagedPodWatcher:\n            restart: false # Migration: Don't restart unmigrated pods\n        routingMode: tunnel # Migration: Optional: default is tunneling, configure as needed\n        tunnelProtocol: vxlan # Migration: Optional: default is VXLAN, configure as needed\n        tunnelPort: 8473 # Migration: Optional, change only if both networks use the same port by default\n        cni:\n          customConf: true # Migration: Don't install a CNI configuration file\n          uninstall: false # Migration: Don't remove CNI configuration on shutdown\n        ipam:\n          mode: \"cluster-pool\"\n          operator:\n            clusterPoolIPv4PodCIDRList: [\"10.245.0.0\/16\"] # Migration: Ensure this is distinct and unused\n        policyEnforcementMode: \"never\" # Migration: Disable policy enforcement\n        bpf:\n          hostLegacyRouting: true # Migration: Allow for routing between Cilium and the existing overlay\n\n4.  Configure any additional Cilium Helm values.\n\n    Cilium supports a number of :ref:`Helm configuration options<helm_reference>`. You may choose to\n    auto-detect typical ones using the :ref:`cilium-cli <install_cilium_cli>`.\n    This will consume the template and auto-detect any other relevant Helm values.\n    Review these values for your particular installation.\n\n    .. parsed-literal::\n\n        $ cilium install |CHART_VERSION| --values values-migration.yaml --dry-run-helm-values > values-initial.yaml\n        $ cat values-initial.yaml\n\n\n5.  Install cilium using :ref:`helm <k8s_install_helm>`.\n\n    .. code-block:: shell-session\n\n      $ helm repo add cilium https:\/\/helm.cilium.io\/\n      $ helm install cilium cilium\/cilium --namespace kube-system --values values-initial.yaml\n\n\n    At this point, you should have a cluster with Cilium installed and an overlay established, but no\n    pods managed by Cilium itself. 
You can verify this with the ``cilium`` command.\n\n    .. code-block:: shell-session\n\n      $ cilium status --wait\n      ...\n      Cluster Pods:     0\/3 managed by Cilium\n\n\n6.  Create a :ref:`per-node config<per-node-configuration>` that will instruct Cilium to \"take over\" CNI networking\n    on the node. Initially, this will apply to no nodes; you will roll it out gradually via\n    the migration process.\n\n    .. code-block:: shell-session\n\n        cat <<EOF | kubectl apply --server-side -f -\n        apiVersion: cilium.io\/v2\n        kind: CiliumNodeConfig\n        metadata:\n          namespace: kube-system\n          name: cilium-default\n        spec:\n          nodeSelector:\n            matchLabels:\n              io.cilium.migration\/cilium-default: \"true\"\n          defaults:\n            write-cni-conf-when-ready: \/host\/etc\/cni\/net.d\/05-cilium.conflist\n            custom-cni-conf: \"false\"\n            cni-chaining-mode: \"none\"\n            cni-exclusive: \"true\"\n        EOF\n\nMigration\n---------\n\nAt this point, you are ready to begin the migration process. The basic flow is:\n\nSelect a node to be migrated. It is not recommended to start with a control-plane node.\n\n.. code-block:: shell-session\n\n  $ NODE=\"kind-worker\" # for the Kind example\n\n1.  Cordon and, optionally, drain the node in question.\n\n    .. code-block:: shell-session\n\n      $ kubectl cordon $NODE\n      $ kubectl drain --ignore-daemonsets $NODE\n\n    Draining is not strictly required, but it is recommended. Otherwise pods will encounter\n    a brief interruption while the node is rebooted.\n\n2.  Label the node. This causes the ``CiliumNodeConfig`` to apply to this node.\n\n    .. code-block:: shell-session\n\n      $ kubectl label node $NODE --overwrite \"io.cilium.migration\/cilium-default=true\"\n\n3.  Restart Cilium. This will cause it to write its CNI configuration file.\n\n    .. 
code-block:: shell-session\n\n      $ kubectl -n kube-system delete pod --field-selector spec.nodeName=$NODE -l k8s-app=cilium\n      $ kubectl -n kube-system rollout status ds\/cilium -w\n\n4.  Reboot the node.\n\n    If using kind, do so with docker:\n\n    .. code-block:: shell-session\n    \n      docker restart $NODE\n\n5.  Validate that the node has been successfully migrated.\n\n    .. code-block:: shell-session\n\n      $ cilium status --wait\n      $ kubectl get -o wide node $NODE\n      $ kubectl -n kube-system run --attach --rm --restart=Never verify-network \\\n        --overrides='{\"spec\": {\"nodeName\": \"'$NODE'\", \"tolerations\": [{\"operator\": \"Exists\"}]}}' \\\n        --image ghcr.io\/nicolaka\/netshoot:v0.8 -- \/bin\/bash -c 'ip -br addr && curl -s -k https:\/\/$KUBERNETES_SERVICE_HOST\/healthz && echo'\n\n    Ensure the IP address of the pod is in the Cilium CIDR(s) supplied above and that the apiserver\n    is reachable.\n\n6.  Uncordon the node.\n\n    .. code-block:: shell-session\n\n      $ kubectl uncordon $NODE\n\n\nOnce you are satisfied everything has been migrated successfully, select another unmigrated node in the cluster\nand repeat these steps.\n\nPost-migration\n--------------\n\nPerform these steps once the cluster is fully migrated.\n\n1.  Ensure Cilium is healthy and that all pods have been migrated:\n\n    .. code-block:: shell-session\n\n      $ cilium status\n\n2.  Update the Cilium configuration:\n\n    - Cilium should be the primary CNI\n    - NetworkPolicy should be enforced\n    - The Operator can restart unmanaged pods\n    - **Optional**: use :ref:`eBPF_Host_Routing`. Enabling this will cause a short connectivity \n      interruption on each node as the daemon restarts, but improves networking performance.\n\n    You can do this manually, or via the ``cilium`` tool (this will not apply changes to the cluster):\n\n    .. 
parsed-literal::\n\n      $ cilium install |CHART_VERSION| --values values-initial.yaml --dry-run-helm-values \\\n        --set operator.unmanagedPodWatcher.restart=true --set cni.customConf=false \\\n        --set policyEnforcementMode=default \\\n        --set bpf.hostLegacyRouting=false > values-final.yaml # optional, can cause brief interruptions\n      $ diff values-initial.yaml values-final.yaml\n\n    Then, apply the changes to the cluster:\n\n    .. code-block:: shell-session\n\n      $ helm upgrade --namespace kube-system cilium cilium\/cilium --values values-final.yaml\n      $ kubectl -n kube-system rollout restart daemonset cilium\n      $ cilium status --wait\n\n3.  Delete the per-node configuration:\n\n    .. code-block:: shell-session\n\n      $ kubectl delete -n kube-system ciliumnodeconfig cilium-default\n\n4.  Delete the previous network plugin.\n\n    At this point, all pods should be using Cilium for networking. You can easily verify this with ``cilium status``.\n    It is now safe to delete the previous network plugin from the cluster.\n\n\n    Most network plugins leave behind some resources, e.g. iptables rules and interfaces. These will be\n    cleaned up when the node next reboots. 
If desired, you may perform a rolling reboot again.","site":"cilium","answers_cleaned":"   only   not  epub or latex or html       WARNING  You are looking at unreleased Cilium documentation      Please use the official rendered version released here      https   docs cilium io      cni migration                                         Migrating a cluster to Cilium                                        Cilium can be used to migrate from another cni  Running clusters can be migrated on a node by node basis  without disrupting existing traffic or requiring a complete cluster outage or rebuild depending on the complexity of the migration case   This document outlines how migrations with Cilium work  You will have a good understanding of the basic requirements  as well as see an example migration which you can practice using  ref  Kind  gs kind      Background             When the kubelet creates a Pod s Sandbox  the installed CNI  as configured in    etc cni net d     is called  The cni will handle the networking for a pod   including allocating  an ip address  creating   configuring a network interface  and  potentially  establishing an overlay network  The Pod s network configuration shares the same life cycle as the PodSandbox   In the case of migration  we typically reconfigure    etc cni net d    to point to Cilium  However  any existing pods will still have been configured by the old network plugin and any new pods will be configured by the newer CNI  To complete the migration all Pods on the cluster that are configured by the old cni must be recycled in order to be a member of the new CNI   A naive approach to migrating a CNI would be to reconfigure all nodes with a new CNI and then gradually restart each node in the cluster  thus replacing the CNI when the node is brought back up and ensuring that all pods are part of the new CNI   This simple migration  while effective  comes at the cost of disrupting cluster connectivity during the rollout  Unmigrated and 
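Once the migration is complete, a quick way to sanity-check the result is to confirm that every pod IP falls inside the new Cilium cluster-pool CIDR (``10.245.0.0/16`` in this example). The following is a minimal pure-bash sketch of that containment check; ``ip_in_cidr`` and ``ip_to_int`` are illustrative helpers, not Cilium commands, and in practice you would feed them addresses taken from ``kubectl get pods -A -o wide``:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Usage: ip_in_cidr <ip> <network> <prefix-len>
# Succeeds when <ip> is inside <network>/<prefix-len>.
ip_in_cidr() {
  local ip net mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# A pod still on the old Flannel range (10.244.0.0/16) would report "unmigrated".
ip_in_cidr 10.245.12.7 10.245.0.0 16 && echo migrated || echo unmigrated
```

Any pod reporting an address outside the Cilium CIDR has not yet been recycled and still belongs to the old network plugin.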
{"questions":"cilium docs cilium io ranchermanagedrkeclusters Introduction Installation using Rancher","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _rancher_managed_rke_clusters:\n\n**************************\nInstallation using Rancher\n**************************\n\nIntroduction\n============\n\nIf you're not using the Rancher Management Console\/UI to install your clusters, head\nover to the :ref:`installation guides for standalone RKE clusters <rke_install>`.\n\nRancher comes with `official support for Cilium <https:\/\/ranchermanager.docs.rancher.com\/faq\/container-network-interface-providers>`__.\nFor most Rancher users, that's the recommended way to use Cilium on Rancher-managed\nclusters.\n\nHowever, as Rancher is using a custom\n``rke2-cilium`` `Helm chart <https:\/\/github.com\/rancher\/rke2-charts\/tree\/main-source\/packages\/rke2-cilium>`__\nwith independent release cycles, Cilium power-users might want to use an\nout-of-band Cilium installation instead, based on the official\n`Cilium Helm chart <https:\/\/github.com\/cilium\/charts>`__,\non top of their Rancher-managed RKE1\/RKE2 downstream clusters.\nThis guide explains how to achieve this.\n\n.. note::\n\n    This guide only shows a step-by-step guide for Rancher-managed (**non-standalone**)\n    **RKE2** clusters.\n\n    However, for a legacy RKE1 cluster, it's even easier. You also need to edit\n    the cluster YAML and change ``network.cni`` to ``none`` as described in the\n    :ref:`RKE 1 standalone guide<rke1_cni_none>`, but there's no need to copy over\n    a Control Plane node local KubeConfig manually. Luckily, Rancher allows access\n    to RKE1 clusters in ``Updating`` state, which are not ready yet. 
Hence, there's\n    no chicken-egg issue to resolve.\n\nPrerequisites\n=============\n\n* Fully functioning `Rancher Version 2.x <https:\/\/ranchermanager.docs.rancher.com\/>`__ instance\n* At least one empty Linux VM, to be used as initial downstream \"Custom Cluster\" (Control Plane) node\n* DNS record pointing to the Kubernetes API of the downstream \"Custom Cluster\" Control Plane node(s) or L4 load-balancer\n\nCreate a New Cluster\n====================\n\nIn Rancher UI, navigate to the Cluster Management page. In the top right, click on the\n``Create`` button to create a new cluster.\n\n.. image:: images\/rancher_add_cluster.png\n\nOn the Cluster creation page select to create a new ``Custom`` cluster:\n\n.. image:: images\/rancher_existing_nodes.png\n\nWhen the ``Create Custom`` page opens, provide at least a name for the cluster.\nGo through the other configuration options and configure the ones that are\nrelevant for your setup.\n\nNext to the ``Cluster Options`` section click the box to ``Edit as YAML``.\nThe configuration for the cluster will open up in an editor in the window.\n\n.. image:: images\/rancher_edit_as_yaml.png\n\nWithin the ``Cluster`` CustomResource (``provisioning.cattle.io\/v1``), the relevant\nparts to change are ``spec.rkeConfig.machineGlobalConfig.cni``,\n``spec.rkeConfig.machineGlobalConfig.tls-san``, and optionally\n``spec.rkeConfig.chartValues.rke2-calico`` and\n``spec.rkeConfig.machineGlobalConfig.disable-kube-proxy``:\n\n.. image:: images\/rancher_delete_network_plugin.png\n\nIt's required to add a DNS record, pointing to the Control Plane node IP(s)\nor an L4 load-balancer in front of them, under\n``spec.rkeConfig.machineGlobalConfig.tls-san``, as that's required to resolve\na chicken-egg issue further down the line.\n\nEnsure that ``spec.rkeConfig.machineGlobalConfig.cni`` is set to ``none`` and\n``spec.rkeConfig.machineGlobalConfig.tls-san`` lists the mentioned DNS record:\n\n.. 
image:: images\/rancher_network_plugin_none.png\n\nOptionally, if ``spec.rkeConfig.chartValues.rke2-calico`` is not empty, remove the\nfull object as you won't deploy Rancher's default CNI. At the same time, change\n``spec.rkeConfig.machineGlobalConfig.disable-kube-proxy`` to ``true`` in case you\nwant to run :ref:`Cilium without Kube-Proxy<kubeproxy-free>`.\n\nMake any additional changes to the configuration that are appropriate for your\nenvironment. When you are ready, click ``Create`` and Rancher will create the\ncluster.\n\n.. image:: images\/rancher_cluster_state_provisioning.png\n\nThe cluster will stay in ``Updating`` state until you add nodes. Click on the cluster.\nIn the ``Registration`` tab you should see the generated ``Registation command`` you\nneed to run on the downstream cluster nodes.\n\nDo not forget to select the correct node roles. Rancher comes with the default to\ndeploy all three roles (``etcd``, ``Control Plane``, and ``Worker``), which is often\nnot what you want for multi-node clusters.\n\n.. image:: images\/rancher_add_nodes.png\n\nA few seconds after you added at least a single node, you should see the new node(s)\nin the ``Machines`` tab. The machine will be stuck in ``Reconciling`` state and\nwon't become ``Active``:\n\n.. image:: images\/rancher_node_not_ready.png\n\nThat's expected as there's no CNI running on this cluster yet. Unfortunately, this also\nmeans critical pods like ``rke2-coredns-rke2-coredns-*`` and ``cattle-cluster-agent-*`` \nare stuck in ``PENDING`` state. Hence, the downstream cluster is not yet able\nto register itself on Rancher.\n\nAs a next step, you need to resolve this chicken-egg issue by directly accessing\nthe downstream cluster's Kubernetes API, without going via Rancher. Rancher will not allow\naccess to this downstream cluster, as it's still in ``Updating`` state. 
That's why you\ncan't use the downstream cluster's KubeConfig provided by the Rancher management console\/UI.\n\nCopy ``\/etc\/rancher\/rke2\/rke2.yaml`` from the first downstream cluster Control Plane\nnode to your jump\/bastion host where you have ``helm`` installed and can access the\nCilium Helm charts.\n\n.. code-block:: shell-session\n\n    scp root@<cp-node-1-ip>:\/etc\/rancher\/rke2\/rke2.yaml .\n\nSearch and replace ``127.0.0.1`` (``clusters[0].cluster.server``) with the\nalready mentioned DNS record pointing to the Control Plane \/ L4 load-balancer IP(s).\n\n.. code-block:: yaml\n\n    apiVersion: v1\n    clusters:\n    - cluster:\n        certificate-authority-data: LS0...S0K\n        server: https:\/\/127.0.0.1:6443\n    name: default\n    contexts: {}\n\nCheck if you can access the Kubernetes API:\n\n.. code-block:: shell-session\n\n    export KUBECONFIG=$(pwd)\/my-cluster-kubeconfig.yaml\n    kubectl get nodes\n    NAME                    STATUS     ROLES                       AGE   VERSION\n    rancher-demo-node       NotReady   control-plane,etcd,master   44m   v1.27.8+rke2r1\n\nIf successful, you can now install Cilium via Helm CLI:\n\n.. parsed-literal::\n\n    helm install cilium |CHART_RELEASE| \\\\\n      --namespace kube-system \\\\\n      -f my-cluster-cilium-values.yaml\n\nAfter a few minutes, you should see that the node changed to the ``Ready`` status:\n\n.. code-block:: shell-session\n\n    kubectl get nodes\n    NAME                    STATUS   ROLES                       AGE   VERSION\n    rancher-demo-node       Ready    control-plane,etcd,master   48m   v1.27.8+rke2r1\n\nBack in the Rancher UI, you should see that the cluster changed to the healthy\n``Active`` status:\n\n.. image:: images\/rancher_my_cluster_active.png\n\nThat's it. You can now normally work with this cluster as if you\ninstalled the CNI the default Rancher way. 
Additional nodes can now be added\nstraightaway and the \"local Control Plane RKE2 KubeConfig\" workaround\nis not required anymore.\n\nOptional: Add Cilium to Rancher Registries\n==========================================\n\nOne small, optional convenience item would be to add the Cilium Helm repository\nto Rancher so that, in the future, Cilium can easily be upgraded via Rancher UI.\n\nYou have two options available:\n\n**Option 1**: Navigate to ``Cluster Management`` -> ``Advanced`` -> ``Repositories`` and\nclick the ``Create`` button:\n\n.. image:: images\/rancher_add_repository.png\n\n**Option 2**: Alternatively, you can also just add the Cilium Helm repository\non a single cluster by navigating to ``<your-cluster>`` -> ``Apps`` -> ``Repositories``:\n\n.. image:: images\/rancher_add_repository_cluster.png\n\nFor either option, in the window that opens, add the official Cilium Helm chart\nrepository (``https:\/\/helm.cilium.io``) to the Rancher repository list:\n\n.. image:: images\/rancher_add_cilium_repository.png\n\nOnce added, you should see the Cilium repository in the repositories list:\n\n.. image:: images\/rancher_repositories_list_success.png\n\nIf you now head to ``<your-cluster>`` -> ``Apps`` -> ``Installed Apps``, you\nshould see the ``cilium`` app. Ensure ``All Namespaces`` or\n``Project: System -> kube-system`` is selected at the top of the page.\n\n.. image:: images\/rancher_cluster_cilium_app.png\n\nSince you added the Cilium repository, you will now see a small hint on this app entry\nwhen there's a new Cilium version released. You can then upgrade directly via Rancher UI.\n\n.. image:: images\/rancher_cluster_cilium_app_upgrade.png\n\n.. 
image:: images\/rancher_cluster_cilium_app_upgrade_version.pn","site":"cilium","answers_cleaned":"   only   not  epub or latex or html       WARNING  You are looking at unreleased Cilium documentation      Please use the official rendered version released here      https   docs cilium io      rancher managed rke clusters                              Installation using Rancher                             Introduction               If you re not using the Rancher Management Console UI to install your clusters  head over to the  ref  installation guides for standalone RKE clusters  rke install     Rancher comes with  official support for Cilium  https   ranchermanager docs rancher com faq container network interface providers      For most Rancher users  that s the recommended way to use Cilium on Rancher managed clusters   However  as Rancher is using a custom   rke2 cilium    Helm chart  https   github com rancher rke2 charts tree main source packages rke2 cilium     with independent release cycles  Cilium power users might want to use an out of band Cilium installation instead  based on the official  Cilium Helm chart  https   github com cilium charts      on top of their Rancher managed RKE1 RKE2 downstream clusters  This guide explains how to achieve this      note        This guide only shows a step by step guide for Rancher managed    non standalone          RKE2   clusters       However  for a legacy RKE1 cluster  it s even easier  You also need to edit     the cluster YAML and change   network cni   to   none   as described in the      ref  RKE 1 standalone guide rke1 cni none    but there s no need to copy over     a Control Plane node local KubeConfig manually  Luckily  Rancher allows access     to RKE1 clusters in   Updating   state  which are not ready yet  Hence  there s     no chicken egg issue to resolve   Prerequisites                  Fully functioning  Rancher Version 2 x  https   ranchermanager docs rancher com      instance   At least one empty 
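The ``127.0.0.1`` replacement in the copied kubeconfig can also be scripted. A minimal sketch, assuming the copied file is named ``my-cluster-kubeconfig.yaml`` and ``rke2.example.internal`` is the DNS record from your ``tls-san`` list (both names are placeholders for this example):

```shell
# Create a sample of the copied rke2.yaml; in reality this file comes from
# scp'ing /etc/rancher/rke2/rke2.yaml off the first Control Plane node.
cat > my-cluster-kubeconfig.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0...S0K
    server: https://127.0.0.1:6443
  name: default
contexts: {}
EOF

# Point clusters[0].cluster.server at the DNS record instead of the
# node-local address. rke2.example.internal is a placeholder.
sed -i 's#https://127\.0\.0\.1:6443#https://rke2.example.internal:6443#' \
  my-cluster-kubeconfig.yaml

grep 'server:' my-cluster-kubeconfig.yaml
```

After the rewrite, ``export KUBECONFIG=$(pwd)/my-cluster-kubeconfig.yaml`` should let ``kubectl`` reach the API via the DNS record, as shown in the guide.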
{"questions":"cilium Hubble internals docs cilium io hubbleinternals This documentation section is targeted at developers who are interested in contributing to Hubble For this purpose it describes","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _hubble_internals:\n\n****************\nHubble internals\n****************\n\n.. note:: This documentation section is targeted at developers who are\n          interested in contributing to Hubble. For this purpose, it describes\n          Hubble internals.\n\n.. note:: This documentation covers the Hubble server (sometimes referred as\n          \"Hubble embedded\") and Hubble Relay components but does not cover the\n          Hubble UI and CLI.\n\nHubble builds on top of Cilium and eBPF to enable deep visibility into the\ncommunication and behavior of services as well as the networking infrastructure\nin a completely transparent manner. One of the design goals of Hubble is to\nachieve all of this at large scale.\n\nHubble's server component is embedded into the Cilium agent in order to achieve\nhigh performance with low-overhead. The gRPC services offered by Hubble server\nmay be consumed locally via a Unix domain socket or, more typically, through\nHubble Relay. Hubble Relay is a standalone component which is aware of all\nHubble instances and offers full cluster visibility by connecting to their\nrespective gRPC APIs. This capability is usually referred to as multi-node.\nHubble Relay's main goal is to offer a rich API that can be safely exposed and\nconsumed by the Hubble UI and CLI.\n\nHubble Architecture\n===================\n\nHubble exposes gRPC services from the Cilium process that allows clients to\nreceive flows and other type of data.\n\nHubble server\n-------------\n\nThe Hubble server component implements two gRPC services. 
The **Observer\nservice** which may optionally be exposed via a TCP socket in addition to a\nlocal Unix domain socket and the **Peer service**, which is served on both\nas well as being exposed as a Kubernetes Service when enabled via TCP.\n\nThe Observer service\n^^^^^^^^^^^^^^^^^^^^\n\nThe Observer service is the principal service. It provides four RPC endpoints:\n``GetFlows``, ``GetNodes``, ``GetNamespaces``  and ``ServerStatus``.\n\n* ``GetNodes`` returns a list of metrics and other information related to each Hubble instance\n* ``ServerStatus`` returns a summary the information in ``GetNodes``\n* ``GetNamespaces`` returns a list of namespaces that had network flows within the last one hour\n* ``GetFlows`` returns a stream of flow related events\n\nUsing ``GetFlows``, callers get a stream of payloads. Request parameters allow\ncallers to specify filters in the form of allow lists and deny lists to allow\nfor fine-grained filtering of data.\n\nIn order to answer ``GetFlows`` requests, Hubble stores monitoring events from\nCilium's event monitor into a user-space ring buffer structure. Monitoring\nevents are obtained by registering a new listener on Cilium monitor. The\nring buffer is capable of storing a configurable amount of events in memory.\nEvents are continuously consumed, overriding older ones once the ring buffer is\nfull.\n\nAdditionally, the Observer service also provides the ``GetAgentEvents`` and\n``GetDebugEvents`` RPC endpoints to expose data about the Cilium agent events\nand Cilium datapath debug events, respectively. Both are similar to ``GetFlows``\nexcept they do not implement filtering capabilities.\n\n.. image:: .\/..\/images\/hubble_getflows.png\n\nFor efficiency, the internal buffer length is a bit mask of ones + 1. The most\nsignificant bit of this bit mask is the same position of the most significant\nbit position of 'n'. In other terms, the internal buffer size is always a power\nof 2 with 1 slot reserved for the writer. 
In effect, from a user perspective,\nthe ring buffer capacity is one less than a power of 2. As the ring buffer is a\nhot code path, it has been designed to not employ any locking mechanisms and\nuses atomic operations instead. While this approach has performance benefits,\nit also has the downside of added complexity.\n\nDue to its complex nature, the ring buffer is typically accessed via a ring\nreader that abstracts the complexity of this data structure for reading. The\nring reader allows reading one event at a time with 'previous' and 'next'\nmethods but also implements a follow mode where events are continuously read as\nthey are written to the ring buffer.\n\nThe Peer service\n^^^^^^^^^^^^^^^^\n\nThe Peer service sends information about Hubble peers in the cluster in a\nstream. When the ``Notify`` method is called, it reports information about all\nthe peers in the cluster and subsequently sends information about peers that\nare updated, added, or removed from the cluster. Thus, it allows the caller to\nkeep track of all Hubble instances and query their respective gRPC services.\n\nThis service is exposed as a Kubernetes Service and is primarily used by Hubble\nRelay in order to have a cluster-wide view of all Hubble instances.\n\nThe Peer service obtains peer change notifications by subscribing to Cilium's\nnode manager. To this end, it internally defines a handler that implements\nCilium's datapath node handler interface.\n\n.. _hubble_relay:\n\nHubble Relay\n------------\n\nHubble Relay is the Hubble component that brings multi-node support. 
It\nleverages the Peer service to obtain information about Hubble instances and\nconsumes their gRPC API in order to provide a richer API that covers events\nfrom across the entire cluster (or even multiple clusters in a ClusterMesh\nscenario).\n\nHubble Relay was first introduced as a technology preview with the release of\nCilium v1.8 and was declared stable with the release of Cilium v1.9.\n\nHubble Relay implements the Observer service for multi-node. To that end, it\nuses a peer manager to maintain a persistent connection with every Hubble peer\nin a cluster. This component provides callers with the list of peers. Callers\nmay report when a peer is unreachable, in which case the peer manager will\nattempt to reconnect.\n\nAs Hubble Relay connects to every node in a cluster, the Hubble server\ninstances must make their API available (by default on port 4244). By default,\nHubble server endpoints are secured using mutual TLS (mTLS) when exposed on a\nTCP port in order to limit access to Hubble Relay only.","site":"cilium"}
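As a hedged illustration of the services described above: with grpcurl installed on a node, the embedded Hubble server's gRPC services can be listed over its local Unix domain socket. The socket path below is an assumption (the common default); adjust it for your deployment.

```shell
# List the gRPC services the embedded Hubble server exposes locally
# (socket path is an assumed default; verify on your node).
grpcurl -plaintext -unix /var/run/cilium/hubble.sock list

# Describe the Observer service's RPC endpoints (GetFlows, GetNodes, ...).
grpcurl -plaintext -unix /var/run/cilium/hubble.sock describe observer.Observer
```

Both commands require server reflection, which the Hubble server enables for its local socket in typical setups; if reflection is unavailable, pass the proto files explicitly.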
{"questions":"cilium ciliumoperatorinternals This document provides a technical overview of the Cilium Operator and describes Cilium Operator docs cilium io the cluster wide operations it is responsible for","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _cilium_operator_internals:\n\nCilium Operator\n===============\n\nThis document provides a technical overview of the Cilium Operator and describes\nthe cluster-wide operations it is responsible for.\n\nHighly Available Cilium Operator\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe Cilium Operator uses the Kubernetes leader election library in conjunction with\nlease locks to provide HA functionality. The capability is supported on Kubernetes\nversions 1.14 and above. It has been Cilium's default behavior since the 1.9 release.\n\nThe number of replicas for the HA deployment can be configured using the\nHelm option ``operator.replicas``.\n\n.. parsed-literal::\n\n    helm install cilium |CHART_RELEASE| \\\\\n      --namespace kube-system \\\\\n      --set operator.replicas=3\n\n.. code-block:: shell-session\n\n    $ kubectl get deployment cilium-operator -n kube-system\n    NAME              READY   UP-TO-DATE   AVAILABLE   AGE\n    cilium-operator   3\/3     3            3           46s\n\nThe operator is an integral part of Cilium installations in Kubernetes\nenvironments and is tasked with performing the following operations:\n\nCRD Registration\n~~~~~~~~~~~~~~~~\n\nThe default behavior of the Cilium Operator is to register the CRDs used by\nCilium. The following custom resources are registered by the Cilium Operator:\n\n.. 
include:: ..\/crdlist.rst\n\nIPAM\n~~~~\n\nCilium Operator is responsible for IP address management when running in\nthe following modes:\n\n-  :ref:`ipam_azure`\n-  :ref:`ipam_eni`\n-  :ref:`ipam_crd_cluster_pool`\n\nWhen running in IPAM mode :ref:`k8s_hostscope`, the allocation CIDRs used by\n``cilium-agent`` are derived from the fields ``podCIDR`` and ``podCIDRs``\npopulated by Kubernetes in the Kubernetes ``Node`` resource.\n\nFor the :ref:`concepts_ipam_crd` IPAM allocation mode, it is the job of the Cloud-specific\noperator to populate the required information about CIDRs in the\n``CiliumNode`` resource.\n\nCilium currently has native support for the following Cloud providers in CRD IPAM\nmode:\n\n- Azure - ``cilium-operator-azure``\n- AWS - ``cilium-operator-aws``\n\nFor more information on IPAM visit :ref:`address_management`.\n\nLoad Balancer IP Address Management\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWhen :ref:`lb_ipam` is used, Cilium Operator manages IP addresses\nfor ``type: LoadBalancer`` services.\n\nKVStore operations\n~~~~~~~~~~~~~~~~~~\n\nThese operations are performed only when KVStore is enabled for the\nCilium Operator. In addition, KVStore operations are only required when\n``cilium-operator`` is running with any of the below options:\n\n-  ``--synchronize-k8s-services``\n-  ``--synchronize-k8s-nodes``\n-  ``--identity-allocation-mode=kvstore``\n\nK8s Services synchronization\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nCilium Operator performs the job of synchronizing Kubernetes services to the\nexternal KVStore configured for the Cilium Operator if running with the\n``--synchronize-k8s-services`` flag.\n\nThe Cilium Operator performs this operation only for shared services (services\nthat have the ``service.cilium.io\/shared`` annotation set to true). 
This is\nmeaningful when running Cilium to set up a ClusterMesh.\n\nK8s Nodes synchronization\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\nSimilar to K8s services, Cilium Operator also synchronizes Kubernetes node\ninformation to the shared KVStore.\n\nWhen a ``Node`` object is deleted, it is not possible to reliably clean up\nthe corresponding ``CiliumNode`` object from the Agent itself. The Cilium Operator\nholds the responsibility to garbage collect orphaned ``CiliumNodes``.\n\nHeartbeat update\n^^^^^^^^^^^^^^^^\n\nThe Cilium Operator periodically updates Cilium's heartbeat path key\nwith the current time. The default key for this heartbeat is\n``cilium\/.heartbeat`` in the KVStore. It is used by Cilium Agents to validate\nthat KVStore updates can be received.\n\nIdentity garbage collection\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nEach workload in Kubernetes is assigned a security identity that is used\nfor policy decision making. This identity is based on common workload\nmarkers like labels. Cilium supports two identity allocation mechanisms:\n\n-  CRD Identity allocation\n-  KVStore Identity allocation\n\nBoth identity allocation mechanisms require the Cilium\nOperator to perform the garbage collection of stale\nidentities. This garbage collection is necessary because a 16-bit\nunsigned integer represents the security identity, and thus we can only\nhave a maximum of 65536 identities in the cluster.\n\nCRD Identity garbage collection\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nCRD identity allocation uses the Kubernetes custom resource\n``CiliumIdentity`` to represent a security identity. This is the default\nbehavior of Cilium and works out of the box in any K8s environment\nwithout any external dependency.\n\nThe Cilium Operator maintains a local cache for CiliumIdentities with\nthe last time they were seen active. 
A controller runs periodically in the\nbackground, scanning this local cache and deleting identities that\nhave not had their heartbeat life sign updated since\n``identity-heartbeat-timeout``.\n\nOne thing to note here is that an Identity is always assumed to be live\nif it has an endpoint associated with it.\n\nKVStore Identity garbage collection\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nWhile the CRD allocation mode for identities is more common, it is\nlimited in terms of scale. When running in a very large environment, a\nbetter choice is to use the KVStore allocation mode. This mode stores\nthe identities in an external store like etcd.\n\nFor more information on Cilium's scalability visit :ref:`scalability_guide`.\n\nThe garbage collection mechanism involves scanning all the identities in\nthe KVStore. For each identity, the Cilium Operator searches the KVStore\nfor any active users of that identity. The entry is deleted from the\nKVStore if there are no active users.\n\nCiliumEndpoint garbage collection\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nA ``CiliumEndpoint`` object is created by the ``cilium-agent`` for each ``Pod``\nin the cluster. The Cilium Operator manages a controller to handle the\ngarbage collection of orphaned ``CiliumEndpoint`` objects. An orphaned\n``CiliumEndpoint`` object means that the owner of the endpoint object is\nno longer active in the cluster. 
CiliumEndpoints are also considered\norphaned if the owner is an existing Pod in ``PodFailed`` or ``PodSucceeded``\nstate.\nThis controller is run periodically if the ``endpoint-gc-interval`` option\nis specified and only once during startup if the option is unspecified.\n\nDerivative network policy creation\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWhen using Cloud-provider-specific constructs like ``toGroups`` in the\nnetwork policy spec, the Cilium Operator performs the job of converting these\nconstructs to derivative CNP\/CCNP objects without these fields.\n\nFor more information, see how Cilium network policies incorporate the\nuse of ``toGroups`` to :ref:`lock down external access using AWS security groups<aws_metadata_with_policy>`.\n\nIngress and Gateway API Support\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWhen Ingress or Gateway API support is enabled, the Cilium Operator performs the\ntask of parsing Ingress or Gateway API objects and converting them into\n``CiliumEnvoyConfig`` objects used for configuring the per-node Envoy proxy.\n\nAdditionally, Secrets used by Ingress or Gateway API objects will be synced to\na Cilium-managed namespace that the Cilium Agent is then granted access to. This\nreduces the permissions required of the Cilium Agent.\n\nMutual Authentication Support\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nWhen Cilium's Mutual Authentication Support is enabled, the Cilium Operator is\nresponsible for ensuring that each Cilium Identity has an associated identity\nin the certificate management system. 
It will create and delete identity\nregistrations in the configured certificate management system as required.\nThe Cilium Operator does not, however, have access to the key material in the\nidentities.\n\nThat information is only shared with the Cilium Agent via other channels.","site":"cilium"}
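A quick way to observe the leader election described in this record is to inspect the operator's lease object. The lease name ``cilium-operator-resource-lock`` is an assumption here; list the leases in ``kube-system`` first to confirm the name used by your Cilium version.

```shell
# Find the operator's lease object among the kube-system leases.
kubectl -n kube-system get lease

# Show which cilium-operator replica currently holds the leader lease
# (lease name is an assumed example, not a guaranteed constant).
kubectl -n kube-system get lease cilium-operator-resource-lock \
  -o jsonpath='{.spec.holderIdentity}'
```

Deleting the leader pod should cause one of the remaining replicas to acquire the lease, which you can verify by re-running the second command.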
{"questions":"cilium Cilium and Hubble can both be configured to serve Prometheus docs cilium io Monitoring Metrics metrics https prometheus io metrics Prometheus is a pluggable metrics collection","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _metrics:\n\n********************\nMonitoring & Metrics\n********************\n\nCilium and Hubble can both be configured to serve `Prometheus\n<https:\/\/prometheus.io>`_ metrics. Prometheus is a pluggable metrics collection\nand storage system and can act as a data source for `Grafana\n<https:\/\/grafana.com\/>`_, a metrics visualization frontend. Unlike some metrics\ncollectors like statsd, Prometheus requires the collectors to pull metrics from\neach source.\n\nCilium and Hubble metrics can be enabled independently of each other.\n\nCilium Metrics\n==============\n\nCilium metrics provide insights into the state of Cilium itself, namely\nof the ``cilium-agent``, ``cilium-envoy``, and ``cilium-operator`` processes.\nTo run Cilium with Prometheus metrics enabled, deploy it with the\n``prometheus.enabled=true`` Helm value set.\n\nCilium metrics are exported under the ``cilium_`` Prometheus namespace. Envoy\nmetrics are exported under the ``envoy_`` Prometheus namespace, of which the\nCilium-defined metrics are exported under the ``envoy_cilium_`` namespace.\nWhen running and collecting in Kubernetes they will be tagged with a pod name\nand namespace.\n\nInstallation\n------------\n\nYou can enable metrics for ``cilium-agent`` (including Envoy) with the Helm value\n``prometheus.enabled=true``. ``cilium-operator`` metrics are enabled by default,\nif you want to disable them, set Helm value ``operator.prometheus.enabled=false``.\n\n.. 
parsed-literal::\n\n   helm install cilium |CHART_RELEASE| \\\\\n     --namespace kube-system \\\\\n     --set prometheus.enabled=true \\\\\n     --set operator.prometheus.enabled=true\n\nThe ports can be configured via ``prometheus.port``,\n``envoy.prometheus.port``, or ``operator.prometheus.port`` respectively.\n\nWhen metrics are enabled, all Cilium components will have the following\nannotations. They can be used to signal Prometheus whether to scrape metrics:\n\n.. code-block:: yaml\n\n        prometheus.io\/scrape: true\n        prometheus.io\/port: 9962\n\nTo collect Envoy metrics, the Cilium chart will create a Kubernetes headless\nservice named ``cilium-agent`` with the ``prometheus.io\/scrape:'true'`` annotation set:\n\n.. code-block:: yaml\n\n        prometheus.io\/scrape: true\n        prometheus.io\/port: 9964\n\nThis additional headless service is needed because each component can only\nhave one Prometheus scrape and port annotation.\n\nPrometheus will pick up the Cilium and Envoy metrics automatically if the following\noption is set in the ``scrape_configs`` section:\n\n.. code-block:: yaml\n\n    scrape_configs:\n    - job_name: 'kubernetes-pods'\n      kubernetes_sd_configs:\n      - role: pod\n      relabel_configs:\n        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]\n          action: keep\n          regex: true\n        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]\n          action: replace\n          regex: ([^:]+)(?::\\d+)?;(\\d+)\n          replacement: ${1}:${2}\n          target_label: __address__\n\n.. 
_hubble_metrics:\n\nHubble Metrics\n==============\n\nWhile Cilium metrics allow you to monitor the state of Cilium itself,\nHubble metrics allow you to monitor the network behavior\nof your Cilium-managed Kubernetes pods with respect to connectivity and security.\n\nInstallation\n------------\n\nTo deploy Cilium with Hubble metrics enabled, you need to enable Hubble with\n``hubble.enabled=true`` and provide a set of Hubble metrics you want to\nenable via ``hubble.metrics.enabled``.\n\nSome of the metrics can also be configured with additional options.\nSee the :ref:`Hubble exported metrics<hubble_exported_metrics>`\nsection for the full list of available metrics and their options.\n\n.. parsed-literal::\n\n   helm install cilium |CHART_RELEASE| \\\\\n     --namespace kube-system \\\\\n     --set prometheus.enabled=true \\\\\n     --set operator.prometheus.enabled=true \\\\\n     --set hubble.enabled=true \\\\\n     --set hubble.metrics.enableOpenMetrics=true \\\\\n     --set hubble.metrics.enabled=\"{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\\\\,source_namespace\\\\,source_workload\\\\,destination_ip\\\\,destination_namespace\\\\,destination_workload\\\\,traffic_direction}\"\n\nThe port of the Hubble metrics can be configured with the\n``hubble.metrics.port`` Helm value.\n\nFor details on enabling Hubble metrics with TLS see the\n:ref:`hubble_configure_metrics_tls` section of the documentation.\n\n.. Note::\n\n    L7 metrics, such as HTTP metrics, are only emitted for pods that enable\n    :ref:`Layer 7 Protocol Visibility <proxy_visibility>`.\n\nWhen deployed with a non-empty ``hubble.metrics.enabled`` Helm value, the\nCilium chart will create a Kubernetes headless service named ``hubble-metrics``\nwith the ``prometheus.io\/scrape:'true'`` annotation set:\n\n.. 
code-block:: yaml\n\n        prometheus.io\/scrape: true\n        prometheus.io\/port: 9965\n\nSet the following options in the ``scrape_configs`` section of Prometheus to\nhave it scrape all Hubble metrics from the endpoints automatically:\n\n.. code-block:: yaml\n\n    scrape_configs:\n      - job_name: 'kubernetes-endpoints'\n        scrape_interval: 30s\n        kubernetes_sd_configs:\n          - role: endpoints\n        relabel_configs:\n          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]\n            action: keep\n            regex: true\n          - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]\n            action: replace\n            target_label: __address__\n            regex: (.+)(?::\\d+);(\\d+)\n            replacement: $1:$2\n\n.. _hubble_open_metrics:\n\nOpenMetrics\n-----------\n\nAdditionally, you can opt-in to `OpenMetrics <https:\/\/openmetrics.io>`_ by\nsetting ``hubble.metrics.enableOpenMetrics=true``.\nEnabling OpenMetrics configures the Hubble metrics endpoint to support exporting\nmetrics in OpenMetrics format when explicitly requested by clients.\n\nUsing OpenMetrics supports additional functionality such as Exemplars, which\nenables associating metrics with traces by embedding trace IDs into the\nexported metrics.\n\nPrometheus needs to be configured to take advantage of OpenMetrics and will\nonly scrape exemplars when the `exemplars storage feature is enabled\n<https:\/\/prometheus.io\/docs\/prometheus\/latest\/feature_flags\/#exemplars-storage>`_.\n\nOpenMetrics imposes a few additional requirements on metrics names and labels,\nso this functionality is currently opt-in, though we believe all of the Hubble\nmetrics conform to the OpenMetrics requirements.\n\n\n.. 
_clustermesh_apiserver_metrics:\n\nCluster Mesh API Server Metrics\n===============================\n\nCluster Mesh API Server metrics provide insights into the state of the\n``clustermesh-apiserver`` process, the ``kvstoremesh`` process (if enabled),\nand the sidecar etcd instance.\nCluster Mesh API Server metrics are exported under the ``cilium_clustermesh_apiserver_``\nPrometheus namespace. KVStoreMesh metrics are exported under the ``cilium_kvstoremesh_``\nPrometheus namespace. Etcd metrics are exported under the ``etcd_`` Prometheus namespace.\n\n\nInstallation\n------------\n\nYou can enable the metrics for different Cluster Mesh API Server components by\nsetting the following values:\n\n* clustermesh-apiserver: ``clustermesh.apiserver.metrics.enabled=true``\n* kvstoremesh: ``clustermesh.apiserver.metrics.kvstoremesh.enabled=true``\n* sidecar etcd instance: ``clustermesh.apiserver.metrics.etcd.enabled=true``\n\n.. parsed-literal::\n\n   helm install cilium |CHART_RELEASE| \\\\\n     --namespace kube-system \\\\\n     --set clustermesh.useAPIServer=true \\\\\n     --set clustermesh.apiserver.metrics.enabled=true \\\\\n     --set clustermesh.apiserver.metrics.kvstoremesh.enabled=true \\\\\n     --set clustermesh.apiserver.metrics.etcd.enabled=true\n\nYou can configure the ports via ``clustermesh.apiserver.metrics.port``,\n``clustermesh.apiserver.metrics.kvstoremesh.port`` and\n``clustermesh.apiserver.metrics.etcd.port`` respectively.\n\nYou can automatically create a\n`Prometheus Operator <https:\/\/github.com\/prometheus-operator\/prometheus-operator>`_\n``ServiceMonitor`` by setting ``clustermesh.apiserver.metrics.serviceMonitor.enabled=true``.\n\nExample Prometheus & Grafana Deployment\n=======================================\n\nIf you don't have an existing Prometheus and Grafana stack running, you can\ndeploy a stack with:\n\n.. 
parsed-literal::\n\n    kubectl apply -f \\ |SCM_WEB|\\\/examples\/kubernetes\/addons\/prometheus\/monitoring-example.yaml\n\nIt will run Prometheus and Grafana in the ``cilium-monitoring`` namespace. If\nyou have enabled either Cilium or Hubble metrics, they will automatically\nbe scraped by Prometheus. You can then expose Grafana to access it via your browser.\n\n.. code-block:: shell-session\n\n    kubectl -n cilium-monitoring port-forward service\/grafana --address 0.0.0.0 --address :: 3000:3000\n\nOpen your browser and access http:\/\/localhost:3000\/\n\nMetrics Reference\n=================\n\ncilium-agent\n------------\n\nConfiguration\n^^^^^^^^^^^^^\n\nTo expose any metrics, invoke ``cilium-agent`` with the\n``--prometheus-serve-addr`` option. This option takes an ``IP:Port`` pair but\npassing an empty IP (e.g. ``:9962``) will bind the server to all available\ninterfaces (there is usually only one in a container).\n\nTo customize ``cilium-agent`` metrics, configure the ``--metrics`` option with\n``\"+metric_a -metric_b -metric_c\"``, where ``+\/-`` means to enable\/disable\nthe metric. 
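As an illustration (a sketch only; the metric names are taken from the tables\nlater in this section), the following invocation serves metrics on all\ninterfaces, enables the default-disabled ``cilium_bpf_syscall_duration_seconds``\nmetric, and disables the default-enabled ``cilium_endpoint_state`` metric:\n\n.. code-block:: shell-session\n\n    cilium-agent --prometheus-serve-addr=\":9962\" --metrics=\"+cilium_bpf_syscall_duration_seconds -cilium_endpoint_state\"\n\n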
For example, for really large clusters, users may consider disabling\nthe following two metrics as they generate too much data:\n\n- ``cilium_node_connectivity_status``\n- ``cilium_node_connectivity_latency_seconds``\n\nYou can then configure the agent with ``--metrics=\"-cilium_node_connectivity_status -cilium_node_connectivity_latency_seconds\"``.\n\nExported Metrics\n^^^^^^^^^^^^^^^^\n\nEndpoint\n~~~~~~~~\n\n============================================ ================================================== ========== ========================================================\nName                                         Labels                                             Default    Description\n============================================ ================================================== ========== ========================================================\n``endpoint``                                                                                    Enabled    Number of endpoints managed by this agent\n``endpoint_max_ifindex``                                                                        Disabled   Maximum interface index observed for existing endpoints\n``endpoint_regenerations_total``             ``outcome``                                        Enabled    Count of all endpoint regenerations that have completed\n``endpoint_regeneration_time_stats_seconds`` ``scope``                                          Enabled    Endpoint regeneration time stats\n``endpoint_state``                           ``state``                                          Enabled    Count of all endpoints\n============================================ ================================================== ========== ========================================================\n\nThe default enabled status of ``endpoint_max_ifindex`` is dynamic. 
On earlier\nkernels (typically with a version lower than 5.10), Cilium must store the\ninterface index for each endpoint in the conntrack map, which reserves 16 bits\nfor this field. If Cilium is running on such a kernel, this metric will be\nenabled by default. It can be used to implement an alert if the ifindex is\napproaching the limit of 65535. This may be the case in instances of\nsignificant Endpoint churn.\n\nServices\n~~~~~~~~\n\n========================================== ================================================== ========== ========================================================\nName                                       Labels                                             Default    Description\n========================================== ================================================== ========== ========================================================\n``services_events_total``                                                                     Enabled    Number of services events labeled by action type\n``service_implementation_delay``           ``action``                                         Enabled    Duration in seconds to propagate the data plane programming of a service, its network and endpoints from the time the service or the service pod was changed excluding the event queue latency\n========================================== ================================================== ========== ========================================================\n\nCluster health\n~~~~~~~~~~~~~~\n\n========================================== ================================================== ========== ========================================================\nName                                       Labels                                             Default    Description\n========================================== ================================================== ========== 
========================================================\n``unreachable_nodes``                                                                         Enabled    Number of nodes that cannot be reached\n``unreachable_health_endpoints``                                                              Enabled    Number of health endpoints that cannot be reached\n========================================== ================================================== ========== ========================================================\n\nNode Connectivity\n~~~~~~~~~~~~~~~~~\n\n============================================= ====================================================================================================================================================================== ========== ==================================================================================================================================================================================================================\nName                                          Labels                                                                                                                                                                 Default    Description\n============================================= ====================================================================================================================================================================== ========== ==================================================================================================================================================================================================================\n``node_connectivity_status``                  ``source_cluster``, ``source_node_name``, ``target_cluster``, ``target_node_name``, ``target_node_type``, ``type``                                                     Enabled    Deprecated, will be removed in Cilium 1.18 - use ``node_health_connectivity_status`` instead. 
The last observed status of both ICMP and HTTP connectivity between the current Cilium agent and other Cilium nodes\n``node_connectivity_latency_seconds``         ``address_type``, ``protocol``, ``source_cluster``, ``source_node_name``, ``target_cluster``, ``target_node_ip``, ``target_node_name``, ``target_node_type``, ``type`` Enabled    Deprecated, will be removed in Cilium 1.18 - use ``node_health_connectivity_latency_seconds`` instead. The last observed latency between the current Cilium agent and other Cilium nodes in seconds\n``node_health_connectivity_status``           ``source_cluster``, ``source_node_name``, ``type``, ``status``                                                                                                         Enabled    Number of endpoints with last observed status of both ICMP and HTTP connectivity between the current Cilium agent and other Cilium nodes\n``node_health_connectivity_latency_seconds``  ``source_cluster``, ``source_node_name``, ``type``, ``address_type``, ``protocol``                                                                                     Enabled    Histogram of the last observed latency between the current Cilium agent and other Cilium nodes in seconds\n============================================= ====================================================================================================================================================================== ========== ==================================================================================================================================================================================================================\n\nClustermesh\n~~~~~~~~~~~\n\n=============================================== ============================================================ ========== =================================================================\nName                                            Labels                                                       
Default    Description\n=============================================== ============================================================ ========== =================================================================\n``clustermesh_global_services``                 ``source_cluster``, ``source_node_name``                     Enabled    The total number of global services in the cluster mesh\n``clustermesh_remote_clusters``                 ``source_cluster``, ``source_node_name``                     Enabled    The total number of remote clusters meshed with the local cluster\n``clustermesh_remote_cluster_failures``         ``source_cluster``, ``source_node_name``, ``target_cluster`` Enabled    The total number of failures related to the remote cluster\n``clustermesh_remote_cluster_nodes``            ``source_cluster``, ``source_node_name``, ``target_cluster`` Enabled    The total number of nodes in the remote cluster\n``clustermesh_remote_cluster_last_failure_ts``  ``source_cluster``, ``source_node_name``, ``target_cluster`` Enabled    The timestamp of the last failure of the remote cluster\n``clustermesh_remote_cluster_readiness_status`` ``source_cluster``, ``source_node_name``, ``target_cluster`` Enabled    The readiness status of the remote cluster\n=============================================== ============================================================ ========== =================================================================\n\nDatapath\n~~~~~~~~\n\n============================================= ================================================== ========== ========================================================\nName                                          Labels                                             Default    Description\n============================================= ================================================== ========== ========================================================\n``datapath_conntrack_dump_resets_total``      ``area``, ``name``, 
``family``                     Enabled    Number of conntrack dump resets. Happens when a BPF entry gets removed while a dump of the map is in progress.\n``datapath_conntrack_gc_runs_total``          ``status``                                         Enabled    Number of times that the conntrack garbage collector process was run\n``datapath_conntrack_gc_key_fallbacks_total``                                                    Enabled    Number of times a key fallback was needed when garbage collecting the conntrack map\n``datapath_conntrack_gc_entries``             ``family``                                         Enabled    The number of alive and deleted conntrack entries at the end of a garbage collector run\n``datapath_conntrack_gc_duration_seconds``    ``status``                                         Enabled    Duration in seconds of the garbage collector process\n============================================= ================================================== ========== ========================================================\n\nIPsec\n~~~~~\n\n============================================= ================================================== ========== ===========================================================\nName                                          Labels                                             Default    Description\n============================================= ================================================== ========== ===========================================================\n``ipsec_xfrm_error``                          ``error``, ``type``                                Enabled    Total number of xfrm errors\n``ipsec_keys``                                                                                   Enabled    Number of keys in use\n``ipsec_xfrm_states``                         ``direction``                                      Enabled    Number of XFRM states\n``ipsec_xfrm_policies``                      
 ``direction``                                      Enabled    Number of XFRM policies\n============================================= ================================================== ========== ===========================================================\n\neBPF\n~~~~\n\n========================================== ===================================================================== ========== ========================================================\nName                                       Labels                                                                Default    Description\n========================================== ===================================================================== ========== ========================================================\n``bpf_syscall_duration_seconds``           ``operation``, ``outcome``                                            Disabled   Duration of eBPF system call performed\n``bpf_map_ops_total``                      ``mapName`` (deprecated), ``map_name``, ``operation``, ``outcome``    Enabled    Number of eBPF map operations performed. ``mapName`` is deprecated and will be removed in 1.10. Use ``map_name`` instead.\n``bpf_map_pressure``                       ``map_name``                                                          Enabled    Map pressure is defined as a ratio of the required map size compared to its configured size. Values < 1.0 indicate the map's utilization, while values >= 1.0 indicate that the map is full. Policy map metrics are only reported when the ratio is over 0.1, i.e. 10% full.\n``bpf_map_capacity``                       ``map_group``                                                         Enabled    Maximum size of eBPF maps by group of maps (types of maps that have the same max capacity size). 
Map types with size of 65536 are not emitted; missing map types can be assumed to be 65536.\n``bpf_maps_virtual_memory_max_bytes``                                                                            Enabled    Max memory used by eBPF maps installed in the system\n``bpf_progs_virtual_memory_max_bytes``                                                                           Enabled    Max memory used by eBPF programs installed in the system\n``bpf_ratelimit_dropped_total``            ``usage``                                                             Enabled    Total drops resulting from BPF ratelimiter, tagged by source of drop\n========================================== ===================================================================== ========== ========================================================\n\nBoth ``bpf_maps_virtual_memory_max_bytes`` and ``bpf_progs_virtual_memory_max_bytes``\ncurrently report the system-wide memory usage of eBPF, including memory that is\nnot directly managed by Cilium. This might change in the future to only\nreport the eBPF memory usage directly managed by Cilium.\n\nDrops\/Forwards (L3\/L4)\n~~~~~~~~~~~~~~~~~~~~~~\n\n========================================== ================================================== ========== ========================================================\nName                                       Labels                                             Default    Description\n========================================== ================================================== ========== ========================================================\n``drop_count_total``                       ``reason``, ``direction``                          Enabled    Total dropped packets\n``drop_bytes_total``                       ``reason``, ``direction``                          Enabled    Total dropped bytes\n``forward_count_total``                    ``direction``                                      Enabled    Total forwarded packets\n``forward_bytes_total``                    ``direction``                                      Enabled    Total forwarded bytes\n========================================== ================================================== ========== ========================================================\n\nPolicy\n~~~~~~\n\n========================================== ================================================== ========== ========================================================\nName                                       Labels                                             Default    Description\n========================================== ================================================== ========== ========================================================\n``policy``                                                                                    Enabled    Number of policies currently loaded\n``policy_regeneration_total``                                                                 Enabled    Deprecated, 
will be removed in Cilium 1.17 - use ``endpoint_regenerations_total`` instead. Total number of policies regenerated successfully\n``policy_regeneration_time_stats_seconds`` ``scope``                                          Enabled    Deprecated, will be removed in Cilium 1.17 - use ``endpoint_regeneration_time_stats_seconds`` instead. Policy regeneration time stats labeled by the scope\n``policy_max_revision``                                                                       Enabled    Highest policy revision number in the agent\n``policy_change_total``                                                                       Enabled    Number of policy changes by outcome\n``policy_endpoint_enforcement_status``                                                        Enabled    Number of endpoints labeled by policy enforcement status\n``policy_implementation_delay``            ``source``                                         Enabled    Time in seconds between a policy change and it being fully deployed into the datapath, labeled by the policy's source\n``policy_selector_match_count_max``        ``class``                                          Enabled    The maximum number of identities selected by a network policy selector\n========================================== ================================================== ========== ========================================================\n\nPolicy L7 (HTTP\/Kafka\/FQDN)\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n======================================== ================================================== ========== ========================================================\nName                                     Labels                                             Default    Description\n======================================== ================================================== ========== ========================================================\n``proxy_redirects``                      ``protocol``                                   
    Enabled    Number of redirects installed for endpoints\n``proxy_upstream_reply_seconds``         ``error``, ``protocol_l7``, ``scope``              Enabled    Seconds waited for upstream server to reply to a request\n``proxy_datapath_update_timeout_total``                                                     Disabled   Number of total datapath update timeouts due to FQDN IP updates\n``policy_l7_total``                      ``rule``, ``proxy_type``                           Enabled    Number of total L7 requests\/responses\n======================================== ================================================== ========== ========================================================\n\nIdentity\n~~~~~~~~\n\n======================================== ================================================== ========== ========================================================\nName                                     Labels                                             Default    Description\n======================================== ================================================== ========== ========================================================\n``identity``                             ``type``                                           Enabled    Number of identities currently allocated\n``identity_label_sources``               ``source``                                         Enabled    Number of identities which contain at least one label from the given label source\n``identity_gc_entries``                  ``identity_type``                                  Enabled    Number of alive and deleted identities at the end of a garbage collector run\n``identity_gc_runs``                     ``outcome``, ``identity_type``                     Enabled    Number of times identity garbage collector has run\n``identity_gc_latency``                  ``outcome``, ``identity_type``                     Enabled    Duration of the last successful identity GC run\n``ipcache_errors_total``  
               ``type``, ``error``                                Enabled    Number of errors interacting with the ipcache\n``ipcache_events_total``                 ``type``                                           Enabled    Number of events interacting with the ipcache\n``identity_cache_timer_duration``        ``name``                                           Enabled    Seconds required to execute periodic policy processes. ``name=\"id-alloc-update-policy-maps\"`` is the time taken to apply incremental updates to the BPF policy maps.\n``identity_cache_timer_trigger_latency`` ``name``                                           Enabled    Seconds spent waiting for a previous process to finish before starting the next round. ``name=\"id-alloc-update-policy-maps\"`` is the time waiting before applying incremental updates to the BPF policy maps.\n``identity_cache_timer_trigger_folds``   ``name``                                           Enabled    Number of timer triggers that were coalesced into one execution. 
``name=\"id-alloc-update-policy-maps\"`` applies the incremental updates to the BPF policy maps.\n======================================== ================================================== ========== ========================================================\n\nEvents external to Cilium\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\n======================================== ================================================== ========== ========================================================\nName                                     Labels                                             Default    Description\n======================================== ================================================== ========== ========================================================\n``event_ts``                             ``source``                                         Enabled    Last timestamp when Cilium received an event from a control plane source, per resource and per action\n``k8s_event_lag_seconds``                ``source``                                         Disabled   Lag for Kubernetes events - computed value between receiving a CNI ADD event from kubelet and a Pod event received from kube-api-server\n======================================== ================================================== ========== ========================================================\n\nControllers\n~~~~~~~~~~~\n\n======================================== ================================================== ========== ========================================================\nName                                     Labels                                             Default    Description\n======================================== ================================================== ========== ========================================================\n``controllers_runs_total``               ``status``                                         Enabled    Number of times that a controller process was 
run\n``controllers_runs_duration_seconds``    ``status``                                         Enabled    Duration in seconds of the controller process\n``controllers_group_runs_total``         ``status``, ``group_name``                         Enabled    Number of times that a controller process was run, labeled by controller group name\n``controllers_failing``                                                                     Enabled    Number of failing controllers\n======================================== ================================================== ========== ========================================================\n\nThe ``controllers_group_runs_total`` metric reports the success and failure\ncount of each controller within the system, labeled by controller group name\nand completion status. Due to the large number of controllers, this metric is\nenabled on a per-controller basis. This is configured using an allow-list\nwhich is passed as the ``controller-group-metrics`` configuration flag,\nor the ``prometheus.controllerGroupMetrics`` Helm value. The current\nrecommended default set of group names can be found in the values file of\nthe Cilium Helm chart. 
The special names \"all\" and \"none\" are supported.\n\nSubProcess\n~~~~~~~~~~\n\n======================================== ================================================== ========== ========================================================\nName                                     Labels                                             Default    Description\n======================================== ================================================== ========== ========================================================\n``subprocess_start_total``               ``subsystem``                                      Enabled    Number of times that Cilium has started a subprocess\n======================================== ================================================== ========== ========================================================\n\nKubernetes\n~~~~~~~~~~\n\n=========================================== ================================================== ========== ========================================================\nName                                        Labels                                             Default    Description\n=========================================== ================================================== ========== ========================================================\n``kubernetes_events_received_total``        ``scope``, ``action``, ``validity``, ``equal``     Enabled    Number of Kubernetes events received\n``kubernetes_events_total``                 ``scope``, ``action``, ``outcome``                 Enabled    Number of Kubernetes events processed\n``k8s_cnp_status_completion_seconds``       ``attempts``, ``outcome``                          Enabled    Duration in seconds to complete a CNP status update\n``k8s_terminating_endpoints_events_total``                                                     Enabled    Number of terminating endpoint events received from Kubernetes\n=========================================== 
================================================== ========== ========================================================\n\nKubernetes Rest Client\n~~~~~~~~~~~~~~~~~~~~~~\n\n============================================= ============================================= ========== ===========================================================\nName                                          Labels                                        Default    Description\n============================================= ============================================= ========== ===========================================================\n``k8s_client_api_latency_time_seconds``       ``path``, ``method``                          Enabled    Duration of processed API calls labeled by path and method\n``k8s_client_rate_limiter_duration_seconds``  ``path``, ``method``                          Enabled    Kubernetes client rate limiter latency in seconds. Broken down by path and method\n``k8s_client_api_calls_total``                ``host``, ``method``, ``return_code``         Enabled    Number of API calls made to kube-apiserver labeled by host, method and return code\n============================================= ============================================= ========== ===========================================================\n\nKubernetes workqueue\n~~~~~~~~~~~~~~~~~~~~\n\n==================================================== ============================================= ========== ===========================================================\nName                                                 Labels                                        Default    Description\n==================================================== ============================================= ========== ===========================================================\n``k8s_workqueue_depth``                              ``name``                                      Enabled    Current depth of workqueue\n``k8s_workqueue_adds_total``     
                    ``name``                                      Enabled    Total number of adds handled by workqueue\n``k8s_workqueue_queue_duration_seconds``             ``name``                                      Enabled    Duration in seconds an item stays in workqueue prior to request\n``k8s_workqueue_work_duration_seconds``              ``name``                                      Enabled    Duration in seconds to process an item from workqueue\n``k8s_workqueue_unfinished_work_seconds``            ``name``                                      Enabled    Duration in seconds of work in progress that hasn't been observed by work_duration. Large values indicate stuck threads. You can deduce the number of stuck threads by observing the rate at which this value increases.\n``k8s_workqueue_longest_running_processor_seconds``  ``name``                                      Enabled    Duration in seconds of the longest running processor for workqueue\n``k8s_workqueue_retries_total``                      ``name``                                      Enabled    Total number of retries handled by workqueue\n==================================================== ============================================= ========== ===========================================================\n\nIPAM\n~~~~\n\n======================================== ============================================ ========== ========================================================\nName                                     Labels                                       Default    Description\n======================================== ============================================ ========== ========================================================\n``ipam_capacity``                        ``family``                                   Enabled    Total number of IPs in the IPAM pool labeled by family\n``ipam_events_total``                                                                 Enabled    Number of IPAM 
events received labeled by action and datapath family type\n``ip_addresses``                         ``family``                                   Enabled    Number of allocated IP addresses\n======================================== ============================================ ========== ========================================================\n\nKVstore\n~~~~~~~\n\n======================================== ============================================ ========== ========================================================\nName                                     Labels                                       Default    Description\n======================================== ============================================ ========== ========================================================\n``kvstore_operations_duration_seconds``  ``action``, ``kind``, ``outcome``, ``scope`` Enabled    Duration of kvstore operation\n``kvstore_events_queue_seconds``         ``action``, ``scope``                        Enabled    Seconds waited before a received event was queued\n``kvstore_quorum_errors_total``          ``error``                                    Enabled    Number of quorum errors\n``kvstore_sync_errors_total``            ``scope``, ``source_cluster``                Enabled    Number of times synchronization to the kvstore failed\n``kvstore_sync_queue_size``              ``scope``, ``source_cluster``                Enabled    Number of elements queued for synchronization in the kvstore\n``kvstore_initial_sync_completed``       ``scope``, ``source_cluster``, ``action``    Enabled    Whether the initial synchronization from\/to the kvstore has completed\n======================================== ============================================ ========== ========================================================\n\nAgent\n~~~~~\n\n================================ ================================ ========== ========================================================\nName                      
       Labels                           Default    Description
================================ ================================ ========== ========================================================
``agent_bootstrap_seconds``      ``scope``, ``outcome``           Enabled    Duration of various bootstrap phases
``api_process_time_seconds``                                      Enabled    Processing time of all the API calls made to the cilium-agent, labeled by API method, API path and returned HTTP code.
================================ ================================ ========== ========================================================

FQDN
~~~~

================================== ================================ ============ ========================================================
Name                               Labels                           Default      Description
================================== ================================ ============ ========================================================
``fqdn_gc_deletions_total``                                         Enabled      Number of FQDNs cleaned by the FQDN garbage collector job
``fqdn_active_names``              ``endpoint``                     Disabled     Number of domains inside the DNS cache that have not expired (by TTL), per endpoint
``fqdn_active_ips``                ``endpoint``                     Disabled     Number of IPs inside the DNS cache associated with a domain that has not expired (by TTL), per endpoint
``fqdn_alive_zombie_connections``  ``endpoint``                     Disabled     Number of IPs associated with domains that have expired (by TTL) yet are still associated with an active connection (aka zombie), per endpoint
``fqdn_selectors``                                                  Enabled      Number of registered ToFQDN selectors
================================== ================================ ============ 
========================================================

Jobs
~~~~

================================== ================================ ============ ========================================================
Name                               Labels                           Default      Description
================================== ================================ ============ ========================================================
``jobs_errors_total``              ``job``                          Enabled      Number of job runs that returned an error
``jobs_one_shot_run_seconds``      ``job``                          Enabled      Histogram of one-shot job run duration
``jobs_timer_run_seconds``         ``job``                          Enabled      Histogram of timer job run duration
``jobs_observer_run_seconds``      ``job``                          Enabled      Histogram of observer job run duration
================================== ================================ ============ ========================================================

CIDRGroups
~~~~~~~~~~

=================================================== ===================== ========== =============================
Name                                                Labels                Default    Description
=================================================== ===================== ========== =============================
``cidrgroups_referenced``                                                 Enabled    Number of CNPs and CCNPs referencing at least one CiliumCIDRGroup. CNPs with empty or non-existing CIDRGroupRefs are not considered
``cidrgroup_translation_time_stats_seconds``                              Disabled   CIDRGroup translation time stats
=================================================== ===================== ========== =============================

.. 
_metrics_api_rate_limiting:

API Rate Limiting
~~~~~~~~~~~~~~~~~

============================================== ========================================== ========== ========================================================
Name                                           Labels                                     Default    Description
============================================== ========================================== ========== ========================================================
``api_limiter_adjustment_factor``              ``api_call``                               Enabled    Most recent adjustment factor for automatic adjustment
``api_limiter_processed_requests_total``       ``api_call``, ``outcome``, ``return_code`` Enabled    Total number of API requests processed
``api_limiter_processing_duration_seconds``    ``api_call``, ``value``                    Enabled    Mean and estimated processing duration in seconds
``api_limiter_rate_limit``                     ``api_call``, ``value``                    Enabled    Current rate limiting configuration (limit and burst)
``api_limiter_requests_in_flight``             ``api_call``, ``value``                    Enabled    Current and maximum allowed number of requests in flight
``api_limiter_wait_duration_seconds``          ``api_call``, ``value``                    Enabled    Mean, min, and max wait duration
``api_limiter_wait_history_duration_seconds``  ``api_call``                               Disabled   Histogram of wait duration per API call processed
============================================== ========================================== ========== ========================================================

.. 
_metrics_bgp_control_plane:

BGP Control Plane
~~~~~~~~~~~~~~~~~

====================== =============================================================== ======== ===================================================================
Name                   Labels                                                          Default  Description
====================== =============================================================== ======== ===================================================================
``session_state``      ``vrouter``, ``neighbor``, ``neighbor_asn``                     Enabled  Current state of the BGP session with the peer, Up = 1 or Down = 0
``advertised_routes``  ``vrouter``, ``neighbor``, ``neighbor_asn``, ``afi``, ``safi``  Enabled  Number of routes advertised to the peer
``received_routes``    ``vrouter``, ``neighbor``, ``neighbor_asn``, ``afi``, ``safi``  Enabled  Number of routes received from the peer
====================== =============================================================== ======== ===================================================================

All metrics are enabled only when the BGP Control Plane is enabled.

cilium-operator
---------------

Configuration
^^^^^^^^^^^^^

``cilium-operator`` can be configured to serve metrics by running with the
option ``--enable-metrics``. By default, the operator exposes metrics on
port 9963; the port can be changed with the option
``--operator-prometheus-serve-addr``.

Exported Metrics
^^^^^^^^^^^^^^^^

All metrics are exported under the ``cilium_operator_`` Prometheus namespace.

.. _ipam_metrics:

IPAM
~~~~

.. 
Note::\n\n    IPAM metrics are all ``Enabled`` only if using the AWS, Alibabacloud or Azure IPAM plugins.\n\n======================================== ================================================================= ========== ========================================================\nName                                     Labels                                                            Default    Description\n======================================== ================================================================= ========== ========================================================\n``ipam_ips``                             ``type``                                                          Enabled    Number of IPs allocated\n``ipam_ip_allocation_ops``               ``subnet_id``                                                     Enabled    Number of IP allocation operations.\n``ipam_ip_release_ops``                  ``subnet_id``                                                     Enabled    Number of IP release operations.\n``ipam_interface_creation_ops``          ``subnet_id``                                                     Enabled    Number of interfaces creation operations.\n``ipam_release_duration_seconds``        ``type``, ``status``, ``subnet_id``                               Enabled    Release ip or interface latency in seconds\n``ipam_allocation_duration_seconds``     ``type``, ``status``, ``subnet_id``                               Enabled    Allocation ip or interface latency in seconds\n``ipam_available_interfaces``                                                                              Enabled    Number of interfaces with addresses available\n``ipam_nodes``                           ``category``                                                      Enabled    Number of nodes by category { total | in-deficit | at-capacity }\n``ipam_resync_total``                                                                                      Enabled    
Number of synchronization operations with external IPAM API\n``ipam_api_duration_seconds``            ``operation``, ``response_code``                                  Enabled    Duration of interactions with external IPAM API.\n``ipam_api_rate_limit_duration_seconds`` ``operation``                                                     Enabled    Duration of rate limiting while accessing external IPAM API\n``ipam_available_ips``                   ``target_node``                                                   Enabled    Number of available IPs on a node (taking into account plugin specific NIC\/Address limits).\n``ipam_used_ips``                        ``target_node``                                                   Enabled    Number of currently used IPs on a node.\n``ipam_needed_ips``                      ``target_node``                                                   Enabled    Number of IPs needed to satisfy allocation on a node.\n======================================== ================================================================= ========== ========================================================\n\nLB-IPAM\n~~~~~~~\n\n======================================== ================================================================= ========== ========================================================\nName                                     Labels                                                            Default    Description\n======================================== ================================================================= ========== ========================================================\n``lbipam_conflicting_pools_total``                                                                         Enabled    Number of conflicting pools\n``lbipam_ips_available_total``           ``pool``                                                          Enabled    Number of available IPs per pool\n``lbipam_ips_used_total``                ``pool``                   
                                       Enabled    Number of used IPs per pool\n``lbipam_services_matching_total``                                                                         Enabled    Number of matching services\n``lbipam_services_unsatisfied_total``                                                                      Enabled    Number of services which did not get requested IPs\n======================================== ================================================================= ========== ========================================================\n\nControllers\n~~~~~~~~~~~\n\n======================================== ================================================== ========== ========================================================\nName                                     Labels                                             Default    Description\n======================================== ================================================== ========== ========================================================\n``controllers_group_runs_total``         ``status``, ``group_name``                         Enabled    Number of times that a controller process was run, labeled by controller group name\n======================================== ================================================== ========== ========================================================\n\nThe ``controllers_group_runs_total`` metric reports the success and failure\ncount of each controller within the system, labeled by controller group name\nand completion status. Due to the large number of controllers, enabling this\nmetric is on a per-controller basis. This is configured using an allow-list\nwhich is passed as the ``controller-group-metrics`` configuration flag,\nor the ``prometheus.controllerGroupMetrics`` helm value. The current\nrecommended default set of group names can be found in the values file of\nthe Cilium Helm chart. 
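As a rough illustration, the sketch below models how such an allow-list can behave (semantics assumed from the description above, not Cilium's actual implementation; the controller group names used here are hypothetical):

```python
# Illustrative sketch only: an allow-list for controller-group metrics with
# the special names "all" and "none". The precedence of "none" over "all"
# is an assumption for this example.

def group_metrics_enabled(allow_list, group_name):
    """Return True if metrics should be recorded for this controller group."""
    allowed = set(allow_list)
    if "none" in allowed:   # "none" disables the metric for every group
        return False
    if "all" in allowed:    # "all" enables the metric for every group
        return True
    return group_name in allowed  # otherwise, require an exact name match

# Only the listed (hypothetical) groups are reported:
allow = ["endpoint-gc", "resolve-labels"]
print(group_metrics_enabled(allow, "endpoint-gc"))    # True
print(group_metrics_enabled(allow, "ipcache-bpf-gc"))  # False
print(group_metrics_enabled(["all"], "any-group"))     # True
```

Groups that the allow-list filters out simply produce no series for this metric.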
The special names \"all\" and \"none\" are supported.\n\n.. _ces_metrics:\n\nCiliumEndpointSlices (CES)\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n============================================== ================================ ========================================================\nName                                           Labels                           Description\n============================================== ================================ ========================================================\n``number_of_ceps_per_ces``                                                      The number of CEPs batched in a CES\n``number_of_cep_changes_per_ces``              ``opcode``                       The number of changed CEPs in each CES update\n``ces_sync_total``                             ``outcome``                      The number of completed CES syncs by outcome\n``ces_queueing_delay_seconds``                                                  CiliumEndpointSlice queueing delay in seconds\n============================================== ================================ ========================================================\n\nUnmanaged Pods\n~~~~~~~~~~~~~~\n\n============================================ ======= ========== ====================================================================\nName                                         Labels  Default    Description\n============================================ ======= ========== ====================================================================\n``unmanaged_pods``                                   Enabled    The total number of pods observed to be unmanaged by Cilium operator\n============================================ ======= ========== ====================================================================\n\n\"Double Write\" Identity Allocation Mode\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nWhen the \":ref:`Double Write <double_write_migration>`\" identity allocation mode is\nenabled, the following metrics are 
available:\n\n============================================ ======= ========== ============================================================\nName                                         Labels  Default    Description\n============================================ ======= ========== ============================================================\n``doublewrite_identity_crd_total_count``             Enabled    The total number of CRD identities\n``doublewrite_identity_kvstore_total_count``         Enabled    The total number of identities in the KVStore\n``doublewrite_identity_crd_only_count``              Enabled    The number of CRD identities not present in the KVStore\n``doublewrite_identity_kvstore_only_count``          Enabled    The number of identities in the KVStore not present as a CRD\n============================================ ======= ========== ============================================================\n\n\nHubble\n------\n\nConfiguration\n^^^^^^^^^^^^^\n\nHubble metrics are served by a Hubble instance running inside ``cilium-agent``.\nThe command-line options to configure them are ``--enable-hubble``,\n``--hubble-metrics-server``, and ``--hubble-metrics``.\n``--hubble-metrics-server`` takes an ``IP:Port`` pair, but\npassing an empty IP (e.g. ``:9965``) will bind the server to all available\ninterfaces. ``--hubble-metrics`` takes a comma-separated list of metrics.\nIt's also possible to configure Hubble metrics to listen with TLS and\noptionally use mTLS for authentication. For details see :ref:`hubble_configure_metrics_tls`.\n\nSome metrics can take additional semicolon-separated options per metric, e.g.\n``--hubble-metrics=\"dns:query;ignoreAAAA,http:destinationContext=workload-name\"``\nwill enable the ``dns`` metric with the ``query`` and ``ignoreAAAA`` options,\nand the ``http`` metric with the ``destinationContext=workload-name`` option.\n\n.. 
_hubble_context_options:\n\nContext Options\n^^^^^^^^^^^^^^^\n\nHubble metrics support configuration via context options.\nSupported context options for all metrics:\n\n- ``sourceContext`` - Configures the ``source`` label on metrics for both egress and ingress traffic.\n- ``sourceEgressContext`` - Configures the ``source`` label on metrics for egress traffic (takes precedence over ``sourceContext``).\n- ``sourceIngressContext`` - Configures the ``source`` label on metrics for ingress traffic (takes precedence over ``sourceContext``).\n- ``destinationContext`` - Configures the ``destination`` label on metrics for both egress and ingress traffic.\n- ``destinationEgressContext`` - Configures the ``destination`` label on metrics for egress traffic (takes precedence over ``destinationContext``).\n- ``destinationIngressContext`` - Configures the ``destination`` label on metrics for ingress traffic (takes precedence over ``destinationContext``).\n- ``labelsContext`` - Configures a list of labels to be enabled on metrics.\n\nThere are also some context options that are specific to certain metrics.\nSee the documentation for the individual metrics to see what options are available for each.\n\nSee below for details on each of the different context options.\n\nMost Hubble metrics can be configured to add the source and\/or destination\ncontext as a label using the ``sourceContext`` and ``destinationContext``\noptions. 
The possible values are:\n\n===================== ===================================================================================\nOption Value          Description\n===================== ===================================================================================\n``identity``          All Cilium security identity labels\n``namespace``         Kubernetes namespace name\n``pod``               Kubernetes pod name and namespace name in the form of ``namespace\/pod``.\n``pod-name``          Kubernetes pod name.\n``dns``               All known DNS names of the source or destination (comma-separated)\n``ip``                The IPv4 or IPv6 address\n``reserved-identity`` Reserved identity label.\n``workload``          Kubernetes pod's workload name and namespace in the form of ``namespace\/workload-name``.\n``workload-name``     Kubernetes pod's workload name (workloads are: Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift), etc).\n``app``               Kubernetes pod's app name, derived from pod labels (``app.kubernetes.io\/name``, ``k8s-app``, or ``app``).\n===================== ===================================================================================\n\nWhen specifying the source and\/or destination context, multiple contexts can be\nspecified by separating them via the ``|`` symbol.\nWhen multiple are specified, then the first non-empty value is added to the\nmetric as a label. For example, a metric configuration of\n``flow:destinationContext=dns|ip`` will first try to use the DNS name of the\ntarget for the label. If no DNS name is known for the target, it will fall back\nand use the IP address of the target instead.\n\n.. note::\n\n   There are 3 cases in which the identity label list contains multiple reserved labels:\n\n   1. ``reserved:kube-apiserver`` and ``reserved:host``\n   2. ``reserved:kube-apiserver`` and ``reserved:remote-node``\n   3. 
``reserved:kube-apiserver`` and ``reserved:world``\n\n   In all of these 3 cases, ``reserved-identity`` context returns ``reserved:kube-apiserver``.\n\nHubble metrics can also be configured with a ``labelsContext`` which allows providing a list of labels\nthat should be added to the metric. Unlike ``sourceContext`` and ``destinationContext``, instead\nof different values being put into the same metric label, the ``labelsContext`` puts them into different label values.\n\n============================== ===============================================================================\nOption Value                   Description\n============================== ===============================================================================\n``source_ip``                  The source IP of the flow.\n``source_namespace``           The namespace of the pod if the flow source is from a Kubernetes pod.\n``source_pod``                 The pod name if the flow source is from a Kubernetes pod.\n``source_workload``            The name of the source pod's workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)).\n``source_workload_kind``       The kind of the source pod's workload, for example, Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift).\n``source_app``                 The app name of the source pod, derived from pod labels (``app.kubernetes.io\/name``, ``k8s-app``, or ``app``).\n``destination_ip``             The destination IP of the flow.\n``destination_namespace``      The namespace of the pod if the flow destination is from a Kubernetes pod.\n``destination_pod``            The pod name if the flow destination is from a Kubernetes pod.\n``destination_workload``       The name of the destination pod's workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)).\n``destination_workload_kind``  The kind of the 
destination pod's workload, for example, Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift).
``destination_app``            The app name of the destination pod, derived from pod labels (``app.kubernetes.io/name``, ``k8s-app``, or ``app``).
``traffic_direction``          Identifies the traffic direction of the flow. Possible values are ``ingress``, ``egress`` and ``unknown``.
============================== ===============================================================================

When specifying the flow context, multiple values can be specified by separating them via the ``,`` symbol.
All labels listed are included in the metric, even if empty. For example, a metric configuration of
``http:labelsContext=source_namespace,source_pod`` will add the ``source_namespace`` and ``source_pod``
labels to all Hubble HTTP metrics.

.. note::

    To limit metrics cardinality, Hubble removes data series bound to a specific pod one minute after the pod is deleted.
    A metric series is considered bound to a specific pod when at least one of the following conditions is met:

    * ``sourceContext`` is set to ``pod`` and the metric series has a ``source`` label matching ``<pod_namespace>/<pod_name>``
    * ``destinationContext`` is set to ``pod`` and the metric series has a ``destination`` label matching ``<pod_namespace>/<pod_name>``
    * ``labelsContext`` contains both ``source_namespace`` and ``source_pod`` and the metric series labels match the namespace and name of the deleted pod
    * ``labelsContext`` contains both ``destination_namespace`` and ``destination_pod`` and the metric series labels match the namespace and name of the deleted pod

.. _hubble_exported_metrics:

Exported Metrics
^^^^^^^^^^^^^^^^

Hubble metrics are exported under the ``hubble_`` Prometheus namespace.

lost events
~~~~~~~~~~~

Unlike the other metrics, this metric is not directly tied to network flows. 
It's enabled if any of the other metrics is enabled.\n\n================================ ======================================== ========== ==================================================\nName                             Labels                                   Default    Description\n================================ ======================================== ========== ==================================================\n``lost_events_total``            ``source``                               Enabled    Number of lost events\n================================ ======================================== ========== ==================================================\n\nLabels\n\"\"\"\"\"\"\n\n- ``source`` identifies the source of lost events, one of:\n   - ``perf_event_ring_buffer``\n   - ``observer_events_queue``\n   - ``hubble_ring_buffer``\n\n\n``dns``\n~~~~~~~\n\n================================ ======================================== ========== ===================================\nName                             Labels                                   Default    Description\n================================ ======================================== ========== ===================================\n``dns_queries_total``            ``rcode``, ``qtypes``, ``ips_returned``  Disabled   Number of DNS queries observed\n``dns_responses_total``          ``rcode``, ``qtypes``, ``ips_returned``  Disabled   Number of DNS responses observed\n``dns_response_types_total``     ``type``, ``qtypes``                     Disabled   Number of DNS response types\n================================ ======================================== ========== ===================================\n\nOptions\n\"\"\"\"\"\"\"\n\n============== ============= ====================================================================================\nOption Key     Option Value  Description\n============== ============= 
====================================================================================\n``query``      N\/A           Include the query as label \"query\"\n``ignoreAAAA`` N\/A           Ignore any AAAA requests\/responses\n============== ============= ====================================================================================\n\nThis metric supports :ref:`Context Options<hubble_context_options>`.\n\n\n``drop``\n~~~~~~~~\n\n================================ ======================================== ========== ===================================\nName                             Labels                                   Default    Description\n================================ ======================================== ========== ===================================\n``drop_total``                   ``reason``, ``protocol``                 Disabled   Number of drops\n================================ ======================================== ========== ===================================\n\nOptions\n\"\"\"\"\"\"\"\n\nThis metric supports :ref:`Context Options<hubble_context_options>`.\n\n``flow``\n~~~~~~~~\n\n================================ ======================================== ========== ===================================\nName                             Labels                                   Default    Description\n================================ ======================================== ========== ===================================\n``flows_processed_total``        ``type``, ``subtype``, ``verdict``       Disabled   Total number of flows processed\n================================ ======================================== ========== ===================================\n\nOptions\n\"\"\"\"\"\"\"\n\nThis metric supports :ref:`Context Options<hubble_context_options>`.\n\n``flows-to-world``\n~~~~~~~~~~~~~~~~~~\n\nThis metric counts all non-reply flows containing the ``reserved:world`` label in their\ndestination identity. 
By default, dropped flows are counted if and only if the drop reason
is ``Policy denied``. Set the ``any-drop`` option to count all dropped flows.

================================ ======================================== ========== ============================================
Name                             Labels                                   Default    Description
================================ ======================================== ========== ============================================
``flows_to_world_total``         ``protocol``, ``verdict``                Disabled   Total number of flows to ``reserved:world``.
================================ ======================================== ========== ============================================

Options
"""""""

============== ============= ======================================================
Option Key     Option Value  Description
============== ============= ======================================================
``any-drop``   N/A           Count any dropped flows regardless of the drop reason.
``port``       N/A           Include the destination port as label ``port``.
``syn-only``   N/A           Only count non-reply SYNs for TCP flows.
============== ============= ======================================================


This metric supports :ref:`Context Options<hubble_context_options>`.

``http``
~~~~~~~~

Deprecated, use ``httpV2`` instead.
These metrics cannot be enabled at the same time as ``httpV2``.

================================= ======================================= ========== ==============================================
Name                              Labels                                  Default    Description
================================= ======================================= ========== ==============================================
``http_requests_total``           ``method``, ``protocol``, ``reporter``  Disabled   Count of HTTP 
requests
``http_responses_total``          ``method``, ``status``, ``reporter``    Disabled   Count of HTTP responses
``http_request_duration_seconds`` ``method``, ``reporter``                Disabled   Histogram of HTTP request duration in seconds
================================= ======================================= ========== ==============================================

Labels
""""""

- ``method`` is the HTTP method of the request/response.
- ``protocol`` is the HTTP protocol of the request (for example, ``HTTP/1.1`` or ``HTTP/2``).
- ``status`` is the HTTP status code of the response.
- ``reporter`` identifies the origin of the request/response. It is set to ``client`` if it originated from the client, ``server`` if it originated from the server, or ``unknown`` if its origin is unknown.

Options
"""""""

This metric supports :ref:`Context Options<hubble_context_options>`.

``httpV2``
~~~~~~~~~~

``httpV2`` is an updated version of the existing ``http`` metrics.
These metrics cannot be enabled at the same time as ``http``.

The main difference is that ``http_requests_total`` and
``http_responses_total`` have been consolidated and now use the response flow
data.

Additionally, the source/destination-related labels of the
``http_request_duration_seconds`` metric are now taken from the perspective of the request. 
In the ``http``
metrics, source and destination were effectively swapped, because the metric
is derived from the response flow, where source and destination are
reversed; ``httpV2`` accounts for this.

================================= =================================================== ========== ==============================================
Name                              Labels                                              Default    Description
================================= =================================================== ========== ==============================================
``http_requests_total``           ``method``, ``protocol``, ``status``, ``reporter``  Disabled   Count of HTTP requests
``http_request_duration_seconds`` ``method``, ``reporter``                            Disabled   Histogram of HTTP request duration in seconds
================================= =================================================== ========== ==============================================

Labels
""""""

- ``method`` is the HTTP method of the request/response.
- ``protocol`` is the HTTP protocol of the request (for example, ``HTTP/1.1`` or ``HTTP/2``).
- ``status`` is the HTTP status code of the response.
- ``reporter`` identifies the origin of the request/response. It is set to ``client`` if it originated from the client, ``server`` if it originated from the server, or ``unknown`` if its origin is unknown.

Options
"""""""

============== ============== =============================================================================================================
Option Key     Option Value   Description
============== ============== =============================================================================================================
``exemplars``  ``true``       Include extracted trace IDs in HTTP metrics. 
Requires :ref:`OpenMetrics to be enabled<hubble_open_metrics>`.\n============== ============== =============================================================================================================\n\nThis metric supports :ref:`Context Options<hubble_context_options>`.\n\n``icmp``\n~~~~~~~~\n\n================================ ======================================== ========== ===================================\nName                             Labels                                   Default    Description\n================================ ======================================== ========== ===================================\n``icmp_total``                   ``family``, ``type``                     Disabled   Number of ICMP messages\n================================ ======================================== ========== ===================================\n\nOptions\n\"\"\"\"\"\"\"\n\nThis metric supports :ref:`Context Options<hubble_context_options>`.\n\n``kafka``\n~~~~~~~~~\n\n=================================== ===================================================== ========== ==============================================\nName                                Labels                                                Default    Description\n=================================== ===================================================== ========== ==============================================\n``kafka_requests_total``            ``topic``, ``api_key``, ``error_code``, ``reporter``  Disabled   Count of Kafka requests by topic\n``kafka_request_duration_seconds``  ``topic``, ``api_key``, ``reporter``                  Disabled   Histogram of Kafka request duration by topic\n=================================== ===================================================== ========== ==============================================\n\nOptions\n\"\"\"\"\"\"\"\n\nThis metric supports :ref:`Context 
Options<hubble_context_options>`.

``port-distribution``
~~~~~~~~~~~~~~~~~~~~~

================================ ======================================== ========== ==================================================
Name                             Labels                                   Default    Description
================================ ======================================== ========== ==================================================
``port_distribution_total``      ``protocol``, ``port``                   Disabled   Number of packets distributed by destination port
================================ ======================================== ========== ==================================================

Options
"""""""

This metric supports :ref:`Context Options<hubble_context_options>`.

``tcp``
~~~~~~~

================================ ======================================== ========== ==================================================
Name                             Labels                                   Default    Description
================================ ======================================== ========== ==================================================
``tcp_flags_total``              ``flag``, ``family``                     Disabled   TCP flag occurrences
================================ ======================================== ========== ==================================================

Options
"""""""

This metric supports :ref:`Context Options<hubble_context_options>`.

dynamic_exporter_exporters_total
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is a dynamic Hubble exporter metric.

==================================== ======================================== ========== ==================================================
Name                                 Labels                                   Default    Description
==================================== 
======================================== ========== ==================================================
``dynamic_exporter_exporters_total`` ``status``                               Enabled    Number of configured Hubble exporters
==================================== ======================================== ========== ==================================================

Labels
""""""

- ``status`` identifies the status of an exporter; it can be one of:
   - ``active``
   - ``inactive``

dynamic_exporter_up
~~~~~~~~~~~~~~~~~~~

This is a dynamic Hubble exporter metric.

==================================== ======================================== ========== ==================================================
Name                                 Labels                                   Default    Description
==================================== ======================================== ========== ==================================================
``dynamic_exporter_up``              ``name``                                 Enabled    Status of the exporter (1 - active, 0 - inactive)
==================================== ======================================== ========== ==================================================

Labels
""""""

- ``name`` identifies the exporter name.

dynamic_exporter_reconfigurations_total
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is a dynamic Hubble exporter metric.

=========================================== ======================================== ========== ==================================================
Name                                        Labels                                   Default    Description
=========================================== ======================================== ========== ==================================================
``dynamic_exporter_reconfigurations_total`` ``op``                                   Enabled    Number of dynamic exporter reconfigurations
=========================================== ======================================== ========== ==================================================

Labels
""""""

- ``op`` identifies the reconfiguration operation type; it can be one of:
   - ``add``
   - ``update``
   - ``remove``

dynamic_exporter_config_hash
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is a dynamic Hubble exporter metric.

==================================== ======================================== ========== ==================================================
Name                                 Labels                                   Default    Description
==================================== ======================================== ========== ==================================================
``dynamic_exporter_config_hash``                                              Enabled    Hash of last applied config
==================================== ======================================== ========== ==================================================

dynamic_exporter_config_last_applied
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is a dynamic Hubble exporter metric.

======================================== ======================================== ========== ==================================================
Name                                     Labels                                   Default    Description
======================================== ======================================== ========== ==================================================
``dynamic_exporter_config_last_applied``                                          Enabled    Timestamp of last applied config
======================================== ======================================== ========== ==================================================



.. 
_clustermesh_apiserver_metrics_reference:

clustermesh-apiserver
---------------------

Configuration
^^^^^^^^^^^^^

To expose any metrics, invoke ``clustermesh-apiserver`` with the
``--prometheus-serve-addr`` option. This option takes an ``IP:Port`` pair, but
passing an empty IP (e.g. ``:9962``) will bind the server to all available
interfaces (there is usually only one in a container).

Exported Metrics
^^^^^^^^^^^^^^^^

All metrics are exported under the ``cilium_clustermesh_apiserver_``
Prometheus namespace.

Bootstrap
~~~~~~~~~

======================================== ============================================ ========================================================
Name                                     Labels                                       Description
======================================== ============================================ ========================================================
``bootstrap_seconds``                    ``source_cluster``                           Duration in seconds to complete bootstrap
======================================== ============================================ ========================================================

KVstore
~~~~~~~

======================================== ============================================ ========================================================
Name                                     Labels                                       Description
======================================== ============================================ ========================================================
``kvstore_operations_duration_seconds``  ``action``, ``kind``, ``outcome``, ``scope`` Duration of kvstore operation
``kvstore_events_queue_seconds``         ``action``, ``scope``                        Seconds waited before a received event was queued
``kvstore_quorum_errors_total``          ``error``                                    Number of quorum 
errors
``kvstore_sync_errors_total``            ``scope``, ``source_cluster``                Number of times synchronization to the kvstore failed
``kvstore_sync_queue_size``              ``scope``, ``source_cluster``                Number of elements queued for synchronization in the kvstore
``kvstore_initial_sync_completed``       ``scope``, ``source_cluster``, ``action``    Whether the initial synchronization from/to the kvstore has completed
======================================== ============================================ ========================================================

API Rate Limiting
~~~~~~~~~~~~~~~~~

============================================== ========================================== ========================================================
Name                                           Labels                                     Description
============================================== ========================================== ========================================================
``api_limiter_processed_requests_total``       ``api_call``, ``outcome``, ``return_code`` Total number of API requests processed
``api_limiter_processing_duration_seconds``    ``api_call``, ``value``                    Mean and estimated processing duration in seconds
``api_limiter_rate_limit``                     ``api_call``, ``value``                    Current rate limiting configuration (limit and burst)
``api_limiter_requests_in_flight``             ``api_call``, ``value``                    Current and maximum allowed number of requests in flight
``api_limiter_wait_duration_seconds``          ``api_call``, ``value``                    Mean, min, and max wait duration
============================================== ========================================== ========================================================

Controllers
~~~~~~~~~~~

======================================== ================================================== 
========== ========================================================
Name                                     Labels                                             Default    Description
======================================== ================================================== ========== ========================================================
``controllers_group_runs_total``         ``status``, ``group_name``                         Enabled    Number of times that a controller process was run, labeled by controller group name
======================================== ================================================== ========== ========================================================

The ``controllers_group_runs_total`` metric reports the success
and failure count of each controller within the system, labeled by
controller group name and completion status. Enabling this metric is
on a per-controller basis. This is configured using an allow-list which
is passed as the ``controller-group-metrics`` configuration flag.
The current default set for ``clustermesh-apiserver`` found in the
Cilium Helm chart is the special name "all", which enables the metric
for all controller groups. The special name "none" is also supported.

.. _kvstoremesh_metrics_reference:

kvstoremesh
-----------

Configuration
^^^^^^^^^^^^^

To expose any metrics, invoke ``kvstoremesh`` with the
``--prometheus-serve-addr`` option. This option takes an ``IP:Port`` pair, but
passing an empty IP (e.g. 
``:9964``) binds the server to all available
interfaces (there is usually only one interface in a container).

Exported Metrics
^^^^^^^^^^^^^^^^

All metrics are exported under the ``cilium_kvstoremesh_`` Prometheus namespace.

Bootstrap
~~~~~~~~~

======================================== ============================================ ========================================================
Name                                     Labels                                       Description
======================================== ============================================ ========================================================
``bootstrap_seconds``                    ``source_cluster``                           Duration in seconds to complete bootstrap
======================================== ============================================ ========================================================

Remote clusters
~~~~~~~~~~~~~~~

==================================== ======================================= =================================================================
Name                                 Labels                                  Description
==================================== ======================================= =================================================================
``remote_clusters``                  ``source_cluster``                      The total number of remote clusters meshed with the local cluster
``remote_cluster_failures``          ``source_cluster``, ``target_cluster``  The total number of failures related to the remote cluster
``remote_cluster_last_failure_ts``   ``source_cluster``, ``target_cluster``  The timestamp of the last failure of the remote cluster
``remote_cluster_readiness_status``  ``source_cluster``, ``target_cluster``  The readiness status of the remote cluster
==================================== ======================================= 
=================================================================\n\nKVstore\n~~~~~~~\n\n======================================== ============================================ ========================================================\nName                                     Labels                                       Description\n======================================== ============================================ ========================================================\n``kvstore_operations_duration_seconds``  ``action``, ``kind``, ``outcome``, ``scope`` Duration of kvstore operation\n``kvstore_events_queue_seconds``         ``action``, ``scope``                        Seconds waited before a received event was queued\n``kvstore_quorum_errors_total``          ``error``                                    Number of quorum errors\n``kvstore_sync_errors_total``            ``scope``, ``source_cluster``                Number of times synchronization to the kvstore failed\n``kvstore_sync_queue_size``              ``scope``, ``source_cluster``                Number of elements queued for synchronization in the kvstore\n``kvstore_initial_sync_completed``       ``scope``, ``source_cluster``, ``action``    Whether the initial synchronization from\/to the kvstore has completed\n======================================== ============================================ ========================================================\n\nAPI Rate Limiting\n~~~~~~~~~~~~~~~~~\n\n============================================== ========================================== ========================================================\nName                                           Labels                                     Description\n============================================== ========================================== ========================================================\n``api_limiter_processed_requests_total``       ``api_call``, ``outcome``, ``return_code`` Total number of API requests 
processed
``api_limiter_processing_duration_seconds``    ``api_call``, ``value``                    Mean and estimated processing duration in seconds
``api_limiter_rate_limit``                     ``api_call``, ``value``                    Current rate limiting configuration (limit and burst)
``api_limiter_requests_in_flight``             ``api_call``, ``value``                    Current and maximum allowed number of requests in flight
``api_limiter_wait_duration_seconds``          ``api_call``, ``value``                    Mean, min, and max wait duration
============================================== ========================================== ========================================================

Controllers
~~~~~~~~~~~

======================================== ================================================== ========== ========================================================
Name                                     Labels                                             Default    Description
======================================== ================================================== ========== ========================================================
``controllers_group_runs_total``         ``status``, ``group_name``                         Enabled    Number of times that a controller process was run, labeled by controller group name
======================================== ================================================== ========== ========================================================

The ``controllers_group_runs_total`` metric reports the success
and failure count of each controller within the system, labeled by
controller group name and completion status. Enabling this metric is
on a per-controller basis. This is configured using an allow-list
which is passed as the ``controller-group-metrics`` configuration
flag. 
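
For example, building on the flag just described, a ``kvstoremesh`` invocation
that serves metrics and restricts the controller metric to an explicit
allow-list might look as follows (the group name is an illustrative
placeholder, not a recommended set):

.. code-block:: shell-session

   kvstoremesh \
       --prometheus-serve-addr=":9964" \
       --controller-group-metrics=<controller-group-name>
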
The current default set for ``kvstoremesh`` found in the\nCilium Helm chart is the special name \"all\", which enables the metric\nfor all controller groups. The special name \"none\" is also supported.\n\nNAT\n~~~\n\n.. _nat_metrics:\n\n======================================== ================================================== ========== ========================================================\nName                                     Labels                                             Default    Description\n======================================== ================================================== ========== ========================================================\n``nat_endpoint_max_connection``          ``family``                                         Enabled    Saturation of the most saturated distinct NAT mapped connection, in terms of egress-IP and remote endpoint address.\n======================================== ================================================== ========== ========================================================\n\nThese metrics are for monitoring Cilium's NAT mapping functionality. NAT is used by features such as Egress Gateway and BPF masquerading.\n\nThe NAT map holds mappings for masqueraded connections. 
Connections held in the NAT table that are masqueraded with the
same egress-IP and destined to the same remote endpoint IP and port all require a unique source port for the mapping.
This means that any Node masquerading connections to a distinct external endpoint is limited by the number of available ephemeral source ports.

Given a Node forwarding one or more such egress-IP and remote endpoint tuples, the ``nat_endpoint_max_connection`` metric reports the most saturated such connection as a percentage of the possible source ports available.
This metric is especially useful with the egress gateway feature, where it is possible to overload a Node if many connections all go to the same endpoint.
This metric should normally be fairly low.
A high value may indicate that a Node is reaching its limit for connections to one or more external endpoints.
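
As a usage sketch, this saturation metric can feed a simple PromQL alert.
Assuming the agent's ``cilium_`` metric namespace and the percentage semantics
described above, an expression such as the following flags Nodes nearing
source-port exhaustion (the 90 threshold is illustrative):

.. code-block:: none

   # Most saturated NAT tuple on the node exceeds 90% of the
   # available ephemeral source ports (threshold is illustrative).
   cilium_nat_endpoint_max_connection > 90
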
Envoy metrics are exported under the   envoy    Prometheus namespace  of which the Cilium defined metrics are exported under the   envoy cilium    namespace  When running and collecting in Kubernetes they will be tagged with a pod name and namespace   Installation               You can enable metrics for   cilium agent    including Envoy  with the Helm value   prometheus enabled true      cilium operator   metrics are enabled by default  if you want to disable them  set Helm value   operator prometheus enabled false        parsed literal       helm install cilium  CHART RELEASE            namespace kube system           set prometheus enabled true           set operator prometheus enabled true  The ports can be configured via   prometheus port      envoy prometheus port    or   operator prometheus port   respectively   When metrics are enabled  all Cilium components will have the following annotations  They can be used to signal Prometheus whether to scrape metrics      code block   yaml          prometheus io scrape  true         prometheus io port  9962  To collect Envoy metrics the Cilium chart will create a Kubernetes headless service named   cilium agent   with the   prometheus io scrape  true    annotation set      code block   yaml          prometheus io scrape  true         prometheus io port  9964  This additional headless service in addition to the other Cilium components is needed as each component can only have one Prometheus scrape and port annotation   Prometheus will pick up the Cilium and Envoy metrics automatically if the following option is set in the   scrape configs   section      code block   yaml      scrape configs        job name   kubernetes pods        kubernetes sd configs          role  pod       relabel configs            source labels     meta kubernetes pod annotation prometheus io scrape            action  keep           regex  true           source labels     address      meta kubernetes pod annotation prometheus io port            
action  replace           regex              d      d             replacement    1    2            target label    address        hubble metrics   Hubble Metrics                 While Cilium metrics allow you to monitor the state Cilium itself  Hubble metrics on the other hand allow you to monitor the network behavior of your Cilium managed Kubernetes pods with respect to connectivity and security   Installation               To deploy Cilium with Hubble metrics enabled  you need to enable Hubble with   hubble enabled true   and provide a set of Hubble metrics you want to enable via   hubble metrics enabled     Some of the metrics can also be configured with additional options  See the  ref  Hubble exported metrics hubble exported metrics   section for the full list of available metrics and their options      parsed literal       helm install cilium  CHART RELEASE            namespace kube system           set prometheus enabled true           set operator prometheus enabled true           set hubble enabled true           set hubble metrics enableOpenMetrics true           set hubble metrics enabled   dns drop tcp flow port distribution icmp httpV2 exemplars true labelsContext source ip   source namespace   source workload   destination ip   destination namespace   destination workload   traffic direction    The port of the Hubble metrics can be configured with the   hubble metrics port   Helm value   For details on enabling Hubble metrics with TLS see the  ref  hubble configure metrics tls  section of the documentation      Note        L7 metrics such as HTTP  are only emitted for pods that enable      ref  Layer 7 Protocol Visibility  proxy visibility     When deployed with a non empty   hubble metrics enabled   Helm value  the Cilium chart will create a Kubernetes headless service named   hubble metrics   with the   prometheus io scrape  true    annotation set      code block   yaml          prometheus io scrape  true         prometheus io port  9965  Set the 
following options in the   scrape configs   section of Prometheus to have it scrape all Hubble metrics from the endpoints automatically      code block   yaml      scrape configs          job name   kubernetes endpoints          scrape interval  30s         kubernetes sd configs              role  endpoints         relabel configs              source labels     meta kubernetes service annotation prometheus io scrape              action  keep             regex  true             source labels     address      meta kubernetes service annotation prometheus io port              action  replace             target label    address               regex           d     d               replacement   1  2      hubble open metrics   OpenMetrics              Additionally  you can opt in to  OpenMetrics  https   openmetrics io    by setting   hubble metrics enableOpenMetrics true    Enabling OpenMetrics configures the Hubble metrics endpoint to support exporting metrics in OpenMetrics format when explicitly requested by clients   Using OpenMetrics supports additional functionality such as Exemplars  which enables associating metrics with traces by embedding trace IDs into the exported metrics   Prometheus needs to be configured to take advantage of OpenMetrics and will only scrape exemplars when the  exemplars storage feature is enabled  https   prometheus io docs prometheus latest feature flags  exemplars storage      OpenMetrics imposes a few additional requirements on metrics names and labels  so this functionality is currently opt in  though we believe all of the Hubble metrics conform to the OpenMetrics requirements        clustermesh apiserver metrics   Cluster Mesh API Server Metrics                                  Cluster Mesh API Server metrics provide insights into the state of the   clustermesh apiserver   process  the   kvstoremesh   process  if enabled   and the sidecar etcd instance  Cluster Mesh API Server metrics are exported under the   cilium clustermesh 
apiserver    Prometheus namespace  KVStoreMesh metrics are exported under the   cilium kvstoremesh    Prometheus namespace  Etcd metrics are exported under the   etcd    Prometheus namespace    Installation               You can enable the metrics for different Cluster Mesh API Server components by setting the following values     clustermesh apiserver    clustermesh apiserver metrics enabled true     kvstoremesh    clustermesh apiserver metrics kvstoremesh enabled true     sidecar etcd instance    clustermesh apiserver metrics etcd enabled true       parsed literal       helm install cilium  CHART RELEASE            namespace kube system           set clustermesh useAPIServer true           set clustermesh apiserver metrics enabled true           set clustermesh apiserver metrics kvstoremesh enabled true           set clustermesh apiserver metrics etcd enabled true  You can figure the ports by way of   clustermesh apiserver metrics port      clustermesh apiserver metrics kvstoremesh port   and   clustermesh apiserver metrics etcd port   respectively   You can automatically create a  Prometheus Operator  https   github com prometheus operator prometheus operator      ServiceMonitor   by setting   clustermesh apiserver metrics serviceMonitor enabled true     Example Prometheus   Grafana Deployment                                          If you don t have an existing Prometheus and Grafana stack running  you can deploy a stack with      parsed literal        kubectl apply  f    SCM WEB   examples kubernetes addons prometheus monitoring example yaml  It will run Prometheus and Grafana in the   cilium monitoring   namespace  If you have either enabled Cilium or Hubble metrics  they will automatically be scraped by Prometheus  You can then expose Grafana to access it via your browser      code block   shell session      kubectl  n cilium monitoring port forward service grafana   address 0 0 0 0   address    3000 3000  Open your browser and access http   localhost 3000  
## Metrics Reference

### cilium-agent

#### Configuration

To expose any metrics, invoke `cilium-agent` with the `--prometheus-serve-addr` option. This option takes an `IP:Port` pair, but passing an empty IP (e.g. `:9962`) binds the server to all available interfaces (there is usually only one in a container).

To customize `cilium-agent` metrics, configure the `--metrics` option with `"+metric_a -metric_b -metric_c"`, where `+`/`-` means to enable/disable the metric. For example, for really large clusters, users may consider disabling the following two metrics, as they generate too much data:

- `cilium_node_connectivity_status`
- `cilium_node_connectivity_latency_seconds`

You can then configure the agent with `--metrics="-cilium_node_connectivity_status -cilium_node_connectivity_latency_seconds"`.

#### Exported Metrics

##### Endpoint

| Name                                       | Labels    | Default  | Description                                              |
|--------------------------------------------|-----------|----------|----------------------------------------------------------|
| `endpoint`                                 |           | Enabled  | Number of endpoints managed by this agent                |
| `endpoint_max_ifindex`                     |           | Disabled | Maximum interface index observed for existing endpoints  |
| `endpoint_regenerations_total`             | `outcome` | Enabled  | Count of all endpoint regenerations that have completed  |
| `endpoint_regeneration_time_stats_seconds` | `scope`   | Enabled  | Endpoint regeneration time stats                         |
| `endpoint_state`                           | `state`   | Enabled  | Count of all endpoints                                   |

The default enabled status of `endpoint_max_ifindex` is dynamic. On earlier kernels (typically with version lower than 5.10), Cilium must store the interface index for each endpoint in the conntrack map, which reserves 16 bits for this field. If Cilium is running on such a kernel, this metric will be enabled by default. It can be used to implement an alert if the ifindex is approaching the limit of 65535. This may be the case in instances of significant Endpoint churn.

##### Services

| Name                           | Labels   | Default | Description                           |
|--------------------------------|----------|---------|---------------------------------------|
| `services_events_total`        |          | Enabled | Number of services events labeled by action type |
| `service_implementation_delay` | `action` | Enabled | Duration in seconds to propagate the data plane programming of a service, its network and endpoints from the time the service or the service pod was changed, excluding the event queue latency |

##### Cluster health
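The ifindex alert suggested above could be sketched as a Prometheus rule. The `cilium_` metric prefix, threshold, and duration are assumptions to adapt to your deployment:

```yaml
groups:
  - name: cilium-endpoints
    rules:
      - alert: CiliumIfindexApproachingLimit
        # endpoint_max_ifindex is assumed to be exported as
        # cilium_endpoint_max_ifindex; the 16-bit conntrack-map
        # field caps out at 65535.
        expr: cilium_endpoint_max_ifindex > 60000
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Endpoint ifindex is approaching the 65535 limit"
```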
| Name                           | Labels | Default | Description                                       |
|--------------------------------|--------|---------|---------------------------------------------------|
| `unreachable_nodes`            |        | Enabled | Number of nodes that cannot be reached            |
| `unreachable_health_endpoints` |        | Enabled | Number of health endpoints that cannot be reached |

##### Node Connectivity

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `node_connectivity_status` | `source_cluster`, `source_node_name`, `target_cluster`, `target_node_name`, `target_node_type`, `type` | Enabled | Deprecated, will be removed in Cilium 1.18; use `node_health_connectivity_status` instead. The last observed status of both ICMP and HTTP connectivity between the current Cilium agent and other Cilium nodes |
| `node_connectivity_latency_seconds` | `address_type`, `protocol`, `source_cluster`, `source_node_name`, `target_cluster`, `target_node_ip`, `target_node_name`, `target_node_type`, `type` | Enabled | Deprecated, will be removed in Cilium 1.18; use `node_health_connectivity_latency_seconds` instead. The last observed latency between the current Cilium agent and other Cilium nodes in seconds |
| `node_health_connectivity_status` | `source_cluster`, `source_node_name`, `type`, `status` | Enabled | Number of endpoints with last observed status of both ICMP and HTTP connectivity between the current Cilium agent and other Cilium nodes |
| `node_health_connectivity_latency_seconds` | `source_cluster`, `source_node_name`, `type`, `address_type`, `protocol` | Enabled | Histogram of the last observed latency between the current Cilium agent and other Cilium nodes in seconds |

##### Clustermesh
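Because `node_health_connectivity_latency_seconds` is a histogram, percentiles can be derived with `histogram_quantile`. A recording-rule sketch, assuming the conventional `cilium_` metric prefix and the usual `_bucket` series:

```yaml
groups:
  - name: cilium-connectivity
    rules:
      # p95 node-to-node latency per protocol over the last 5 minutes
      - record: cilium:node_health_connectivity_latency_seconds:p95
        expr: |
          histogram_quantile(0.95,
            sum by (le, protocol) (
              rate(cilium_node_health_connectivity_latency_seconds_bucket[5m])
            )
          )
```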
| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `clustermesh_global_services` | `source_cluster`, `source_node_name` | Enabled | The total number of global services in the cluster mesh |
| `clustermesh_remote_clusters` | `source_cluster`, `source_node_name` | Enabled | The total number of remote clusters meshed with the local cluster |
| `clustermesh_remote_cluster_failures` | `source_cluster`, `source_node_name`, `target_cluster` | Enabled | The total number of failures related to the remote cluster |
| `clustermesh_remote_cluster_nodes` | `source_cluster`, `source_node_name`, `target_cluster` | Enabled | The total number of nodes in the remote cluster |
| `clustermesh_remote_cluster_last_failure_ts` | `source_cluster`, `source_node_name`, `target_cluster` | Enabled | The timestamp of the last failure of the remote cluster |
| `clustermesh_remote_cluster_readiness_status` | `source_cluster`, `source_node_name`, `target_cluster` | Enabled | The readiness status of the remote cluster |

##### Datapath

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `datapath_conntrack_dump_resets_total` | `area`, `name`, `family` | Enabled | Number of conntrack dump resets. Happens when a BPF entry gets removed while dumping the map is in progress |
| `datapath_conntrack_gc_runs_total` | `status` | Enabled | Number of times that the conntrack garbage collector process was run |
| `datapath_conntrack_gc_key_fallbacks_total` | | Enabled | The number of alive and deleted conntrack entries at the end of a garbage collector run labeled by datapath family |
| `datapath_conntrack_gc_entries` | `family` | Enabled | The number of alive and deleted conntrack entries at the end of a garbage collector run |
| `datapath_conntrack_gc_duration_seconds` | `status` | Enabled | Duration in seconds of the garbage collector process |

##### IPsec

| Name                  | Labels            | Default | Description                  |
|-----------------------|-------------------|---------|------------------------------|
| `ipsec_xfrm_error`    | `error`, `type`   | Enabled | Total number of xfrm errors  |
| `ipsec_keys`          |                   | Enabled | Number of keys in use        |
| `ipsec_xfrm_states`   | `direction`       | Enabled | Number of XFRM states        |
| `ipsec_xfrm_policies` | `direction`       | Enabled | Number of XFRM policies      |

##### eBPF

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `bpf_syscall_duration_seconds` | `operation`, `outcome` | Disabled | Duration of eBPF system call performed |
| `bpf_map_ops_total` | `mapName` (deprecated), `map_name`, `operation`, `outcome` | Enabled | Number of eBPF map operations performed. `mapName` is deprecated and will be removed in 1.10. Use `map_name` instead |
| `bpf_map_pressure` | `map_name` | Enabled | Map pressure is defined as a ratio of the required map size compared to its configured size. Values < 1.0 indicate the map's utilization, while values >= 1.0 indicate that the map is full. Policy map metrics are only reported when the ratio is over 0.1, i.e. 10% full |
| `bpf_map_capacity` | `map_group` | Enabled | Maximum size of eBPF maps by group of maps (type of map that have the same max capacity size). Map types with size of 65536 are not emitted; missing map types can be assumed to be 65536 |
| `bpf_maps_virtual_memory_max_bytes` | | Enabled | Max memory used by eBPF maps installed in the system |
| `bpf_progs_virtual_memory_max_bytes` | | Enabled | Max memory used by eBPF programs installed in the system |
| `bpf_ratelimit_dropped_total` | `usage` | Enabled | Total drops resulting from BPF ratelimiter, tagged by source of drop |

Both `bpf_maps_virtual_memory_max_bytes` and `bpf_progs_virtual_memory_max_bytes` are currently reporting the system-wide memory usage of eBPF that is directly and not directly managed by Cilium. This might change in the future and only report the eBPF memory usage directly managed by Cilium.

##### Drops/Forwards (L3/L4)

| Name                  | Labels                | Default | Description             |
|-----------------------|-----------------------|---------|-------------------------|
| `drop_count_total`    | `reason`, `direction` | Enabled | Total dropped packets   |
| `drop_bytes_total`    | `reason`, `direction` | Enabled | Total dropped bytes     |
| `forward_count_total` | `direction`           | Enabled | Total forwarded packets |
| `forward_bytes_total` | `direction`           | Enabled | Total forwarded bytes   |

##### Policy

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `policy` | | Enabled | Number of policies currently loaded |
| `policy_regeneration_total` | | Enabled | Deprecated, will be removed in Cilium 1.17; use `endpoint_regenerations_total` instead. Total number of policies regenerated successfully |
| `policy_regeneration_time_stats_seconds` | `scope` | Enabled | Deprecated, will be removed in Cilium 1.17; use `endpoint_regeneration_time_stats_seconds` instead. Policy regeneration time stats labeled by the scope |
| `policy_max_revision` | | Enabled | Highest policy revision number in the agent |
| `policy_change_total` | | Enabled | Number of policy changes by outcome |
| `policy_endpoint_enforcement_status` | | Enabled | Number of endpoints labeled by policy enforcement status |
| `policy_implementation_delay` | `source` | Enabled | Time in seconds between a policy change and it being fully deployed into the datapath, labeled by the policy's source |
| `policy_selector_match_count_max` | `class` | Enabled | The maximum number of identities selected by a network policy selector |

##### Policy L7 (HTTP/Kafka/FQDN)

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `proxy_redirects` | `protocol` | Enabled | Number of redirects installed for endpoints |
| `proxy_upstream_reply_seconds` | `error`, `protocol_l7`, `scope` | Enabled | Seconds waited for upstream server to reply to a request |
| `proxy_datapath_update_timeout_total` | | Disabled | Number of total datapath update timeouts due to FQDN IP updates |
| `policy_l7_total` | `rule`, `proxy_type` | Enabled | Number of total L7 requests/responses |

##### Identity
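An alert on the `bpf_map_pressure` metric from the eBPF table above might be sketched as follows; the `cilium_` prefix and the 0.9 threshold are assumptions:

```yaml
groups:
  - name: cilium-bpf
    rules:
      - alert: CiliumBPFMapAlmostFull
        # bpf_map_pressure >= 1.0 means the map is full; warn at 90%.
        expr: cilium_bpf_map_pressure > 0.9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "eBPF map {{ $labels.map_name }} is over 90% utilized"
```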
| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `identity` | `type` | Enabled | Number of identities currently allocated |
| `identity_label_sources` | `source` | Enabled | Number of identities which contain at least one label from the given label source |
| `identity_gc_entries` | `identity_type` | Enabled | Number of alive and deleted identities at the end of a garbage collector run |
| `identity_gc_runs` | `outcome`, `identity_type` | Enabled | Number of times identity garbage collector has run |
| `identity_gc_latency` | `outcome`, `identity_type` | Enabled | Duration of the last successful identity GC run |
| `ipcache_errors_total` | `type`, `error` | Enabled | Number of errors interacting with the ipcache |
| `ipcache_events_total` | `type` | Enabled | Number of events interacting with the ipcache |
| `identity_cache_timer_duration` | `name` | Enabled | Seconds required to execute periodic policy processes. `name=id-alloc-update-policy-maps` is the time taken to apply incremental updates to the BPF policy maps |
| `identity_cache_timer_trigger_latency` | `name` | Enabled | Seconds spent waiting for a previous process to finish before starting the next round. `name=id-alloc-update-policy-maps` is the time waiting before applying incremental updates to the BPF policy maps |
| `identity_cache_timer_trigger_folds` | `name` | Enabled | Number of timer triggers that were coalesced into one execution. `name=id-alloc-update-policy-maps` applies the incremental updates to the BPF policy maps |

##### Events external to Cilium

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `event_ts` | `source` | Enabled | Last timestamp when Cilium received an event from a control plane source, per resource and per action |
| `k8s_event_lag_seconds` | `source` | Disabled | Lag for Kubernetes events: computed value between receiving a CNI ADD event from kubelet and a Pod event received from kube-api-server |

##### Controllers

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `controllers_runs_total` | `status` | Enabled | Number of times that a controller process was run |
| `controllers_runs_duration_seconds` | `status` | Enabled | Duration in seconds of the controller process |
| `controllers_group_runs_total` | `status`, `group_name` | Enabled | Number of times that a controller process was run, labeled by controller group name |
| `controllers_failing` | | Enabled | Number of failing controllers |

The `controllers_group_runs_total` metric reports the success and failure count of each controller within the system, labeled by controller group name and completion status. Due to the large number of controllers, enabling this metric is on a per-controller basis. This is configured using an allow-list which is passed as the `controller-group-metrics` configuration flag, or the `prometheus.controllerGroupMetrics` helm value. The current recommended default set of group names can be found in the values file of the Cilium Helm chart. The special names "all" and "none" are supported.

##### SubProcess

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `subprocess_start_total` | `subsystem` | Enabled | Number of times that Cilium has started a subprocess |

##### Kubernetes
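As a sketch of the `prometheus.controllerGroupMetrics` allow-list described above, the following Helm values fragment enables per-group controller metrics for every group using the documented special name `all`; consult the chart's values file for the recommended default list:

```yaml
prometheus:
  enabled: true
  # Allow-list of controller group names reported by
  # controllers_group_runs_total. The special names "all"
  # and "none" are supported.
  controllerGroupMetrics:
    - all
```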
| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `kubernetes_events_received_total` | `scope`, `action`, `validity`, `equal` | Enabled | Number of Kubernetes events received |
| `kubernetes_events_total` | `scope`, `action`, `outcome` | Enabled | Number of Kubernetes events processed |
| `k8s_cnp_status_completion_seconds` | `attempts`, `outcome` | Enabled | Duration in seconds in how long it took to complete a CNP status update |
| `k8s_terminating_endpoints_events_total` | | Enabled | Number of terminating endpoint events received from Kubernetes |

##### Kubernetes Rest Client

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `k8s_client_api_latency_time_seconds` | `path`, `method` | Enabled | Duration of processed API calls labeled by path and method |
| `k8s_client_rate_limiter_duration_seconds` | `path`, `method` | Enabled | Kubernetes client rate limiter latency in seconds. Broken down by path and method |
| `k8s_client_api_calls_total` | `host`, `method`, `return_code` | Enabled | Number of API calls made to kube-apiserver labeled by host, method and return code |

##### Kubernetes workqueue

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `k8s_workqueue_depth` | `name` | Enabled | Current depth of workqueue |
| `k8s_workqueue_adds_total` | `name` | Enabled | Total number of adds handled by workqueue |
| `k8s_workqueue_queue_duration_seconds` | `name` | Enabled | Duration in seconds an item stays in workqueue prior to request |
| `k8s_workqueue_work_duration_seconds` | `name` | Enabled | Duration in seconds to process an item from workqueue |
| `k8s_workqueue_unfinished_work_seconds` | `name` | Enabled | Duration in seconds of work in progress that hasn't been observed by work_duration. Large values indicate stuck threads. You can deduce the number of stuck threads by observing the rate at which this value increases |
| `k8s_workqueue_longest_running_processor_seconds` | `name` | Enabled | Duration in seconds of the longest running processor for workqueue |
| `k8s_workqueue_retries_total` | `name` | Enabled | Total number of retries handled by workqueue |

##### IPAM

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `ipam_capacity` | `family` | Enabled | Total number of IPs in the IPAM pool labeled by family |
| `ipam_events_total` | | Enabled | Number of IPAM events received labeled by action and datapath family type |
| `ip_addresses` | `family` | Enabled | Number of allocated IP addresses |

##### KVstore

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `kvstore_operations_duration_seconds` | `action`, `kind`, `outcome`, `scope` | Enabled | Duration of kvstore operation |
| `kvstore_events_queue_seconds` | `action`, `scope` | Enabled | Seconds waited before a received event was queued |
| `kvstore_quorum_errors_total` | `error` | Enabled | Number of quorum errors |
| `kvstore_sync_errors_total` | `scope`, `source_cluster` | Enabled | Number of times synchronization to the kvstore failed |
| `kvstore_sync_queue_size` | `scope`, `source_cluster` | Enabled | Number of elements queued for synchronization in the kvstore |
| `kvstore_initial_sync_completed` | `scope`, `source_cluster`, `action` | Enabled | Whether the initial synchronization from/to the kvstore has completed |

##### Agent

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `agent_bootstrap_seconds` | `scope`, `outcome` | Enabled | Duration of various bootstrap phases |
| `api_process_time_seconds` | | Enabled | Processing time of all the API calls made to the cilium-agent, labeled by API method, API path and returned HTTP code |

##### FQDN

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `fqdn_gc_deletions_total` | | Enabled | Number of FQDNs that have been cleaned on FQDN garbage collector job |
| `fqdn_active_names` | `endpoint` | Disabled | Number of domains inside the DNS cache that have not expired (by TTL), per endpoint |
| `fqdn_active_ips` | `endpoint` | Disabled | Number of IPs inside the DNS cache associated with a domain that has not expired (by TTL), per endpoint |
| `fqdn_alive_zombie_connections` | `endpoint` | Disabled | Number of IPs associated with domains that have expired (by TTL) yet still associated with an active connection (aka zombie), per endpoint |
| `fqdn_selectors` | | Enabled | Number of registered ToFQDN selectors |

##### Jobs

| Name | Labels | Default | Description |
|------|--------|---------|-------------|
| `jobs_errors_total` | `job` | Enabled | Number of jobs runs that returned an error |
| `jobs_one_shot_run_seconds` | `job` | Enabled | Histogram of one shot job run duration |
| `jobs_timer_run_seconds` | `job` | Enabled | Histogram of timer job run duration |
| `jobs_observer_run_seconds` | `job` | Enabled | Histogram of observer job run duration |
### CIDRGroups

| Name | Labels | Default | Description |
|---|---|---|---|
| `cidrgroups_referenced` | | Enabled | Number of CNPs and CCNPs referencing at least one CiliumCIDRGroup. CNPs with empty or non-existing CIDRGroupRefs are not considered |
| `cidrgroup_translation_time_stats_seconds` | | Disabled | CIDRGroup translation time stats |

### API Rate Limiting

| Name | Labels | Default | Description |
|---|---|---|---|
| `api_limiter_adjustment_factor` | `api_call` | Enabled | Most recent adjustment factor for automatic adjustment |
| `api_limiter_processed_requests_total` | `api_call`, `outcome`, `return_code` | Enabled | Total number of API requests processed |
| `api_limiter_processing_duration_seconds` | `api_call`, `value` | Enabled | Mean and estimated processing duration in seconds |
| `api_limiter_rate_limit` | `api_call`, `value` | Enabled | Current rate limiting configuration (limit and burst) |
| `api_limiter_requests_in_flight` | `api_call`, `value` | Enabled | Current and maximum allowed number of requests in flight |
| `api_limiter_wait_duration_seconds` | `api_call`, `value` | Enabled | Mean, min, and max wait duration |
| `api_limiter_wait_history_duration_seconds` | `api_call` | Disabled | Histogram of wait duration per API call processed |

### BGP Control Plane

| Name | Labels | Default | Description |
|---|---|---|---|
| `session_state` | `vrouter`, `neighbor`, `neighbor_asn` | Enabled | Current state of the BGP session with the peer, Up = 1 or Down = 0 |
| `advertised_routes` | `vrouter`, `neighbor`, `neighbor_asn`, `afi`, `safi` | Enabled | Number of routes advertised to the peer |
| `received_routes` | `vrouter`, `neighbor`, `neighbor_asn`, `afi`, `safi` | Enabled | Number of routes received from the peer |

All metrics are enabled only when the BGP Control Plane is enabled.

## cilium-operator

### Configuration

`cilium-operator` can be configured to serve metrics by running with the option `--enable-metrics`. By default, the operator will expose metrics on port 9963; the port can be changed with the option `--operator-prometheus-serve-addr`.

### Exported Metrics

All metrics are exported under the `cilium_operator_` Prometheus namespace.

### IPAM

**Note:** IPAM metrics are all `Enabled` only if using the AWS, Alibabacloud or Azure IPAM plugins.

| Name | Labels | Default | Description |
|---|---|---|---|
| `ipam_ips` | `type` | Enabled | Number of IPs allocated |
| `ipam_ip_allocation_ops` | `subnet_id` | Enabled | Number of IP allocation operations |
| `ipam_ip_release_ops` | `subnet_id` | Enabled | Number of IP release operations |
| `ipam_interface_creation_ops` | `subnet_id` | Enabled | Number of interface creation operations |
| `ipam_release_duration_seconds` | `type`, `status`, `subnet_id` | Enabled | Release IP or interface latency in seconds |
| `ipam_allocation_duration_seconds` | `type`, `status`, `subnet_id` | Enabled | Allocation IP or interface latency in seconds |
| `ipam_available_interfaces` | | Enabled | Number of interfaces with addresses available |
| `ipam_nodes` | `category` | Enabled | Number of nodes by category (`total`, `in-deficit`, `at-capacity`) |
| `ipam_resync_total` | | Enabled | Number of synchronization operations with external IPAM API |
| `ipam_api_duration_seconds` | `operation`, `response_code` | Enabled | Duration of interactions with external IPAM API |
| `ipam_api_rate_limit_duration_seconds` | `operation` | Enabled | Duration of rate limiting while accessing external IPAM API |
| `ipam_available_ips` | `target_node` | Enabled | Number of available IPs on a node (taking into account plugin-specific NIC address limits) |
| `ipam_used_ips` | `target_node` | Enabled | Number of currently used IPs on a node |
| `ipam_needed_ips` | `target_node` | Enabled | Number of IPs needed to satisfy allocation on a node |

### LB IPAM

| Name | Labels | Default | Description |
|---|---|---|---|
| `lbipam_conflicting_pools_total` | | Enabled | Number of conflicting pools |
| `lbipam_ips_available_total` | `pool` | Enabled | Number of available IPs per pool |
| `lbipam_ips_used_total` | `pool` | Enabled | Number of used IPs per pool |
| `lbipam_services_matching_total` | | Enabled | Number of matching services |
| `lbipam_services_unsatisfied_total` | | Enabled | Number of services which did not get requested IPs |

### Controllers

| Name | Labels | Default | Description |
|---|---|---|---|
| `controllers_group_runs_total` | `status`, `group_name` | Enabled | Number of times that a controller process was run, labeled by controller group name |

The `controllers_group_runs_total` metric reports the success and failure count of each controller within the system, labeled by controller group name and completion status. Due to the large number of controllers, enabling this metric is done on a per-controller basis. This is configured using an allow-list which is passed as the `controller-group-metrics` configuration flag, or the `prometheus.controllerGroupMetrics` Helm value. The current recommended default set of group names can be found in the values file of the Cilium Helm chart. The special names `all` and `none` are supported.

### CiliumEndpointSlices (CES)

| Name | Labels | Description |
|---|---|---|
| `number_of_ceps_per_ces` | | The number of CEPs batched in a CES |
| `number_of_cep_changes_per_ces` | `opcode` | The number of changed CEPs in each CES update |
| `ces_sync_total` | `outcome` | The number of completed CES syncs by outcome |
| `ces_queueing_delay_seconds` | | CiliumEndpointSlice queueing delay in seconds |

### Unmanaged Pods

| Name | Labels | Default | Description |
|---|---|---|---|
| `unmanaged_pods` | | Enabled | The total number of pods observed to be unmanaged by Cilium operator |

### Double Write (Identity Allocation Mode)

When the Double Write identity allocation mode is enabled, the following metrics are available:
| Name | Labels | Default | Description |
|---|---|---|---|
| `doublewrite_identity_crd_total_count` | | Enabled | The total number of CRD identities |
| `doublewrite_identity_kvstore_total_count` | | Enabled | The total number of identities in the KVStore |
| `doublewrite_identity_crd_only_count` | | Enabled | The number of CRD identities not present in the KVStore |
| `doublewrite_identity_kvstore_only_count` | | Enabled | The number of identities in the KVStore not present as a CRD |

## Hubble

### Configuration

Hubble metrics are served by a Hubble instance running inside `cilium-agent`. The command-line options to configure them are `--enable-hubble`, `--hubble-metrics-server`, and `--hubble-metrics`. `--hubble-metrics-server` takes an `IP:Port` pair, but passing an empty IP (e.g. `:9965`) will bind the server to all available interfaces. `--hubble-metrics` takes a comma-separated list of metrics. It is also possible to configure Hubble metrics to listen with TLS and optionally use mTLS for authentication; see the Hubble metrics TLS configuration documentation for details.

Some metrics can take additional semicolon-separated options per metric, e.g. `--hubble-metrics="dns:query;ignoreAAAA,http:destinationContext=workload-name"` will enable the `dns` metric with the `query` and `ignoreAAAA` options, and the `http` metric with the `destinationContext=workload-name` option.

### Context Options

Hubble metrics support configuration via context options. Supported context options for all metrics:

- `sourceContext` - Configures the `source` label on metrics for both egress and ingress traffic.
- `sourceEgressContext` - Configures the `source` label on metrics for egress traffic (takes precedence over `sourceContext`).
- `sourceIngressContext` - Configures the `source` label on metrics for ingress traffic (takes precedence over `sourceContext`).
- `destinationContext` - Configures the `destination` label on metrics for both egress and ingress traffic.
- `destinationEgressContext` - Configures the `destination` label on metrics for egress traffic (takes precedence over `destinationContext`).
- `destinationIngressContext` - Configures the `destination` label on metrics for ingress traffic (takes precedence over `destinationContext`).
- `labelsContext` - Configures a list of labels to be enabled on metrics.

There are also some context options that are specific to certain metrics; see the documentation for the individual metrics for the options available for each. See below for details on each of the different context options.

Most Hubble metrics can be configured to add the source and/or destination context as a label using the `sourceContext` and `destinationContext` options. The possible values are:

| Option Value | Description |
|---|---|
| `identity` | All Cilium security identity labels |
| `namespace` | Kubernetes namespace name |
| `pod` | Kubernetes pod name and namespace name in the form of `namespace/pod` |
| `pod-name` | Kubernetes pod name |
| `dns` | All known DNS names of the source or destination (comma-separated) |
| `ip` | The IPv4 or IPv6 address |
| `reserved-identity` | Reserved identity label |
| `workload` | Kubernetes pod's workload name and namespace in the form of `namespace/workload-name` |
| `workload-name` | Kubernetes pod's workload name (workloads are: Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift), etc.) |
| `app` | Kubernetes pod's app name, derived from pod labels (`app.kubernetes.io/name`, `k8s-app`, or `app`) |

When specifying the source and/or destination context, multiple contexts can be specified by separating them via the `|` symbol. When multiple are specified, the first non-empty value is added to the metric as a label. For example, a metric configuration of `flow:destinationContext=dns|ip` will first try to use the DNS name of the target for the label. If no DNS name is known for the target, it will fall back and use the IP address of the target instead.

**Note:** There are 3 cases in which the identity label list contains multiple reserved labels:

1. `reserved:kube-apiserver` and `reserved:host`
2. `reserved:kube-apiserver` and `reserved:remote-node`
3. `reserved:kube-apiserver` and `reserved:world`

In all of these 3 cases, the `reserved-identity` context returns `reserved:kube-apiserver`.

Hubble metrics can also be configured with a `labelsContext`, which allows providing a list of labels that should be added to the metric. Unlike `sourceContext` and `destinationContext`, instead of different values being put into the same metric label, the `labelsContext` puts them into different label values.

| Option Value | Description |
|---|---|
| `source_ip` | The source IP of the flow |
| `source_namespace` | The namespace of the pod if the flow source is from a Kubernetes pod |
| `source_pod` | The pod name if the flow source is from a Kubernetes pod |
| `source_workload` | The name of the source pod's workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)) |
| `source_workload_kind` | The kind of the source pod's workload, for example Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift) |
| `source_app` | The app name of the source pod, derived from pod labels (`app.kubernetes.io/name`, `k8s-app`, or `app`) |
| `destination_ip` | The destination IP of the flow |
| `destination_namespace` | The namespace of the pod if the flow destination is from a Kubernetes pod |
| `destination_pod` | The pod name if the flow destination is from a Kubernetes pod |
| `destination_workload` | The name of the destination pod's workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)) |
| `destination_workload_kind` | The kind of the destination pod's workload, for example Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift) |
| `destination_app` | The app name of the destination pod, derived from pod labels (`app.kubernetes.io/name`, `k8s-app`, or `app`) |
| `traffic_direction` | Identifies the traffic direction of the flow. Possible values are `ingress`, `egress` and `unknown` |

When specifying the flow context, multiple values can be specified by separating them via the `|` symbol. All labels listed are included in the metric, even if empty. For example, a metric configuration of `http:labelsContext=source_namespace|source_pod` will add the `source_namespace` and `source_pod` labels to all Hubble HTTP metrics.
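Putting the pieces together, a sketch of an agent invocation combining context options (the flag syntax and the `flow`/`http` option values follow the examples in this section; the particular combination chosen here is illustrative, not a recommended default):

```shell
# Sketch: enable Hubble metrics with per-metric context options.
# "flow" labels source/destination by DNS name, falling back to IP;
# "http" adds source_namespace and source_pod as separate labels.
cilium-agent \
  --enable-hubble \
  --hubble-metrics-server=":9965" \
  --hubble-metrics="flow:destinationContext=dns|ip,http:labelsContext=source_namespace|source_pod"
```

Note that commas separate metrics while `|` separates values within a single context option, so the whole flag value should be quoted to protect the `|` from the shell.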
**Note:** To limit metrics cardinality, Hubble will remove data series bound to a specific pod one minute after that pod is deleted. A metric series is considered to be bound to a specific pod when at least one of the following conditions is met:

- `sourceContext` is set to `pod` and the metric series has a `source` label matching `<pod_namespace>/<pod_name>`
- `destinationContext` is set to `pod` and the metric series has a `destination` label matching `<pod_namespace>/<pod_name>`
- `labelsContext` contains both `source_namespace` and `source_pod`, and the metric series labels match the namespace and name of the deleted pod
- `labelsContext` contains both `destination_namespace` and `destination_pod`, and the metric series labels match the namespace and name of the deleted pod

### Exported Metrics

Hubble metrics are exported under the `hubble_` Prometheus namespace.

#### lost events

This metric, unlike the others, is not directly tied to network flows. It is enabled if any of the other metrics is enabled.

| Name | Labels | Default | Description |
|---|---|---|---|
| `lost_events_total` | `source` | Enabled | Number of lost events |

**Labels:**

- `source` identifies the source of lost events, one of:
  - `perf_event_ring_buffer`
  - `observer_events_queue`
  - `hubble_ring_buffer`

#### dns

| Name | Labels | Default | Description |
|---|---|---|---|
| `dns_queries_total` | `rcode`, `qtypes`, `ips_returned` | Disabled | Number of DNS queries observed |
| `dns_responses_total` | `rcode`, `qtypes`, `ips_returned` | Disabled | Number of DNS responses observed |
| `dns_response_types_total` | `type`, `qtypes` | Disabled | Number of DNS response types |

**Options:**

| Option Key | Option Value | Description |
|---|---|---|
| `query` | N/A | Include the query as label `query` |
| `ignoreAAAA` | N/A | Ignore any AAAA requests/responses |

This metric supports Context Options.

#### drop

| Name | Labels | Default | Description |
|---|---|---|---|
| `drop_total` | `reason`, `protocol` | Disabled | Number of drops |

**Options:** This metric supports Context Options.

#### flow

| Name | Labels | Default | Description |
|---|---|---|---|
| `flows_processed_total` | `type`, `subtype`, `verdict` | Disabled | Total number of flows processed |

**Options:** This metric supports Context Options.

#### flows-to-world

This metric counts all non-reply flows containing the `reserved:world` label in their destination identity. By default, dropped flows are counted if and only if the drop reason is `Policy denied`. Set the `any-drop` option to count all dropped flows.

| Name | Labels | Default | Description |
|---|---|---|---|
| `flows_to_world_total` | `protocol`, `verdict` | Disabled | Total number of flows to `reserved:world` |

**Options:**

| Option Key | Option Value | Description |
|---|---|---|
| `any-drop` | N/A | Count any dropped flows regardless of the drop reason |
| `port` | N/A | Include the destination port as label `port` |
| `syn-only` | N/A | Only count non-reply SYNs for TCP flows |

This metric supports Context Options.

#### http

Deprecated, use `httpV2` instead. These metrics can not be enabled at the same time as `httpV2`.

| Name | Labels | Default | Description |
|---|---|---|---|
| `http_requests_total` | `method`, `protocol`, `reporter` | Disabled | Count of HTTP requests |
| `http_responses_total` | `method`, `status`, `reporter` | Disabled | Count of HTTP responses |
| `http_request_duration_seconds` | `method`, `reporter` | Disabled | Histogram of HTTP request duration in seconds |

**Labels:**

- `method` is the HTTP method of the request/response.
- `protocol` is the HTTP protocol of the request, for example `HTTP/1.1` or `HTTP/2`.
- `status` is the HTTP status code of the response.
- `reporter` identifies the origin of the request/response. It is set to `client` if it originated from the client, `server` if it originated from the server, or `unknown` if its origin is unknown.

**Options:** This metric supports Context Options.

#### httpV2

`httpV2` is an updated version of the existing `http` metrics. These metrics can not be enabled at the same time as `http`.

The main difference is that `http_requests_total` and `http_responses_total` have been consolidated and use the response flow data. Additionally, the `http_request_duration_seconds` metric's source/destination-related labels are now from the perspective of the request. In the `http` metrics, the source/destination were swapped, because the metric uses the response flow data, where the source/destination are swapped, but in `httpV2` we correctly account for this.

| Name | Labels | Default | Description |
|---|---|---|---|
| `http_requests_total` | `method`, `protocol`, `status`, `reporter` | Disabled | Count of HTTP requests |
| `http_request_duration_seconds` | `method`, `reporter` | Disabled | Histogram of HTTP request duration in seconds |

**Labels:**

- `method` is the HTTP method of the request/response.
- `protocol` is the HTTP protocol of the request, for example `HTTP/1.1` or `HTTP/2`.
- `status` is the HTTP status code of the response.
- `reporter` identifies the origin of the request/response. It is set to `client` if it originated from the client, `server` if it originated from the server, or `unknown` if its origin is unknown.

**Options:**

| Option Key | Option Value | Description |
|---|---|---|
| `exemplars` | `true` | Include extracted trace IDs in HTTP metrics. Requires OpenMetrics to be enabled |
This metric supports Context Options.

#### icmp

| Name | Labels | Default | Description |
|---|---|---|---|
| `icmp_total` | `family`, `type` | Disabled | Number of ICMP messages |

**Options:** This metric supports Context Options.

#### kafka

| Name | Labels | Default | Description |
|---|---|---|---|
| `kafka_requests_total` | `topic`, `api_key`, `error_code`, `reporter` | Disabled | Count of Kafka requests by topic |
| `kafka_request_duration_seconds` | `topic`, `api_key`, `reporter` | Disabled | Histogram of Kafka request duration by topic |

**Options:** This metric supports Context Options.

#### port-distribution

| Name | Labels | Default | Description |
|---|---|---|---|
| `port_distribution_total` | `protocol`, `port` | Disabled | Number of packets distributed by destination port |

**Options:** This metric supports Context Options.

#### tcp

| Name | Labels | Default | Description |
|---|---|---|---|
| `tcp_flags_total` | `flag`, `family` | Disabled | TCP flag occurrences |

**Options:** This metric supports Context Options.

#### dynamic_exporter_exporters_total

This metric relates to the dynamic Hubble exporter.

| Name | Labels | Default | Description |
|---|---|---|---|
| `dynamic_exporter_exporters_total` | `source` | Enabled | Number of configured Hubble exporters |
                                                                      Labels             status   identifies status of exporters  can be one of         active          inactive    dynamic exporter up                      This is dynamic hubble exporter metric                                                                                                                                               Name                                 Labels                                   Default    Description                                                                                                                                               dynamic exporter up                  source                                 Enabled    Status of exporter  1   active  0   inactive                                                                                                                                               Labels             name   identifies exporter name  dynamic exporter reconfigurations total                                          This is dynamic hubble exporter metric                                                                                                                                                      Name                                        Labels                                   Default    Description                                                                                                                                                      dynamic exporter reconfigurations total     op                                     Enabled    Number of dynamic exporters reconfigurations                                                                                                                                                     Labels             op   identifies reconfiguration operation type  can be one of         add          update          remove    dynamic exporter config hash                               This is dynamic hubble 
exporter metric                                                                                                                                               Name                                 Labels                                   Default    Description                                                                                                                                               dynamic exporter config hash                                                Enabled    Hash of last applied config                                                                                                                                              dynamic exporter config last applied                                       This is dynamic hubble exporter metric                                                                                                                                                   Name                                     Labels                                   Default    Description                                                                                                                                                   dynamic exporter config last applied                                            Enabled    Timestamp of last applied config                                                                                                                                                         clustermesh apiserver metrics reference   clustermesh apiserver                        Configuration                To expose any metrics  invoke   clustermesh apiserver   with the     prometheus serve addr   option  This option takes a   IP Port   pair but passing an empty IP  e g     9962    will bind the server to all available interfaces  there is usually only one in a container    Exported Metrics                   All metrics are exported under the   cilium clustermesh apiserver    Prometheus namespace   Bootstrap                               
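Samples belonging to this namespace can be picked out of a plain scrape of the
metrics endpoint. Below is a minimal, illustrative Python sketch of filtering
the Prometheus text exposition format by namespace prefix (the ``SAMPLE``
lines and their values are invented for the example; this is not an official
client):

```python
# Minimal sketch: filter Prometheus text-format lines by namespace prefix.
# SAMPLE is an invented, illustrative scrape payload.
SAMPLE = """\
# HELP cilium_clustermesh_apiserver_bootstrap_seconds Duration in seconds to complete bootstrap
# TYPE cilium_clustermesh_apiserver_bootstrap_seconds gauge
cilium_clustermesh_apiserver_bootstrap_seconds{source_cluster="cluster1"} 2.5
go_goroutines 42
"""

NAMESPACE = "cilium_clustermesh_apiserver_"

def metrics_in_namespace(text: str, prefix: str) -> dict[str, float]:
    """Return {metric_sample: value} for samples whose name starts with prefix."""
    out = {}
    for line in text.splitlines():
        # Skip HELP/TYPE comments and metrics outside the namespace.
        if line.startswith("#") or not line.startswith(prefix):
            continue
        name_and_labels, _, value = line.rpartition(" ")
        out[name_and_labels] = float(value)
    return out

print(metrics_in_namespace(SAMPLE, NAMESPACE))
```

The same filter applies unchanged to the ``cilium_kvstoremesh_`` namespace
described below, by swapping the prefix.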
Bootstrap
~~~~~~~~~

.. list-table::
   :header-rows: 1

   * - Name
     - Labels
     - Description
   * - ``bootstrap_seconds``
     - ``source_cluster``
     - Duration in seconds to complete bootstrap

KVstore
~~~~~~~

.. list-table::
   :header-rows: 1

   * - Name
     - Labels
     - Description
   * - ``kvstore_operations_duration_seconds``
     - ``action``, ``kind``, ``outcome``, ``scope``
     - Duration of kvstore operation
   * - ``kvstore_events_queue_seconds``
     - ``action``, ``scope``
     - Seconds waited before a received event was queued
   * - ``kvstore_quorum_errors_total``
     - ``error``
     - Number of quorum errors
   * - ``kvstore_sync_errors_total``
     - ``scope``, ``source_cluster``
     - Number of times synchronization to the kvstore failed
   * - ``kvstore_sync_queue_size``
     - ``scope``, ``source_cluster``
     - Number of elements queued for synchronization in the kvstore
   * - ``kvstore_initial_sync_completed``
     - ``scope``, ``source_cluster``, ``action``
     - Whether the initial synchronization from/to the kvstore has completed

API Rate Limiting
~~~~~~~~~~~~~~~~~

.. list-table::
   :header-rows: 1

   * - Name
     - Labels
     - Description
   * - ``api_limiter_processed_requests_total``
     - ``api_call``, ``outcome``, ``return_code``
     - Total number of API requests processed
   * - ``api_limiter_processing_duration_seconds``
     - ``api_call``, ``value``
     - Mean and estimated processing duration in seconds
   * - ``api_limiter_rate_limit``
     - ``api_call``, ``value``
     - Current rate limiting configuration (limit and burst)
   * - ``api_limiter_requests_in_flight``
     - ``api_call``, ``value``
     - Current and maximum allowed number of requests in flight
   * - ``api_limiter_wait_duration_seconds``
     - ``api_call``, ``value``
     - Mean, min, and max wait duration

Controllers
~~~~~~~~~~~

.. list-table::
   :header-rows: 1

   * - Name
     - Labels
     - Default
     - Description
   * - ``controllers_group_runs_total``
     - ``status``, ``group_name``
     - Enabled
     - Number of times that a controller process was run, labeled by controller group name
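As a hedged illustration of how the ``status`` label of
``controllers_group_runs_total`` can be consumed, here is a short Python
sketch that turns hypothetical per-group run counters into a failure ratio
(the sample counts and group names are invented; in practice this would be a
PromQL aggregation over the live metric):

```python
from collections import defaultdict

# Hypothetical controllers_group_runs_total samples:
# (group_name, status) -> cumulative run count.
samples = {
    ("kvstore-sync", "success"): 980,
    ("kvstore-sync", "failure"): 20,
    ("heartbeat", "success"): 500,
}

def failure_ratio(samples: dict) -> dict:
    """Per-group failure ratio: failed runs / total runs."""
    totals, failures = defaultdict(int), defaultdict(int)
    for (group, status), count in samples.items():
        totals[group] += count
        if status == "failure":
            failures[group] += count
    return {g: failures[g] / totals[g] for g in totals}

print(failure_ratio(samples))
```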
The ``controllers_group_runs_total`` metric reports the success and failure
count of each controller within the system, labeled by controller group name
and completion status. Enabling this metric is done on a per-controller basis,
configured via an allow list passed in the ``controller-group-metrics``
configuration flag. The current default set for ``clustermesh-apiserver``
found in the Cilium Helm chart is the special name "all", which enables the
metric for all controller groups. The special name "none" is also supported.

.. _kvstoremesh_metrics_reference:

kvstoremesh
===========

Configuration
-------------

To expose any metrics, invoke ``kvstoremesh`` with the
``--prometheus-serve-addr`` option. This option takes an ``IP:Port`` pair, but
passing an empty IP (e.g. ``:9964``) binds the server to all available
interfaces (there is usually only one interface in a container).

Exported Metrics
----------------

All metrics are exported under the ``cilium_kvstoremesh_`` Prometheus
namespace.

Bootstrap
~~~~~~~~~

.. list-table::
   :header-rows: 1

   * - Name
     - Labels
     - Description
   * - ``bootstrap_seconds``
     - ``source_cluster``
     - Duration in seconds to complete bootstrap

Remote clusters
~~~~~~~~~~~~~~~

.. list-table::
   :header-rows: 1

   * - Name
     - Labels
     - Description
   * - ``remote_clusters``
     - ``source_cluster``
     - The total number of remote clusters meshed with the local cluster
   * - ``remote_cluster_failures``
     - ``source_cluster``, ``target_cluster``
     - The total number of failures related to the remote cluster
   * - ``remote_cluster_last_failure_ts``
     - ``source_cluster``, ``target_cluster``
     - The timestamp of the last failure of the remote cluster
   * - ``remote_cluster_readiness_status``
     - ``source_cluster``, ``target_cluster``
     - The readiness status of the remote cluster

KVstore
~~~~~~~

.. list-table::
   :header-rows: 1

   * - Name
     - Labels
     - Description
   * - ``kvstore_operations_duration_seconds``
     - ``action``, ``kind``, ``outcome``, ``scope``
     - Duration of kvstore operation
   * - ``kvstore_events_queue_seconds``
     - ``action``, ``scope``
     - Seconds waited before a received event was queued
   * - ``kvstore_quorum_errors_total``
     - ``error``
     - Number of quorum errors
   * - ``kvstore_sync_errors_total``
     - ``scope``, ``source_cluster``
     - Number of times synchronization to the kvstore failed
   * - ``kvstore_sync_queue_size``
     - ``scope``, ``source_cluster``
     - Number of elements queued for synchronization in the kvstore
   * - ``kvstore_initial_sync_completed``
     - ``scope``, ``source_cluster``, ``action``
     - Whether the initial synchronization from/to the kvstore has completed

API Rate Limiting
~~~~~~~~~~~~~~~~~

.. list-table::
   :header-rows: 1

   * - Name
     - Labels
     - Description
   * - ``api_limiter_processed_requests_total``
     - ``api_call``, ``outcome``, ``return_code``
     - Total number of API requests processed
   * - ``api_limiter_processing_duration_seconds``
     - ``api_call``, ``value``
     - Mean and estimated processing duration in seconds
   * - ``api_limiter_rate_limit``
     - ``api_call``, ``value``
     - Current rate limiting configuration (limit and burst)
   * - ``api_limiter_requests_in_flight``
     - ``api_call``, ``value``
     - Current and maximum allowed number of requests in flight
   * - ``api_limiter_wait_duration_seconds``
     - ``api_call``, ``value``
     - Mean, min, and max wait duration

Controllers
~~~~~~~~~~~

.. list-table::
   :header-rows: 1

   * - Name
     - Labels
     - Default
     - Description
   * - ``controllers_group_runs_total``
     - ``status``, ``group_name``
     - Enabled
     - Number of times that a controller process was run, labeled by controller group name

The ``controllers_group_runs_total`` metric reports the success and failure
count of each controller within the system, labeled by controller group name
and completion status. Enabling this metric is done on a per-controller basis,
configured via an allow list passed in the ``controller-group-metrics``
configuration flag. The current default set for ``kvstoremesh`` found in the
Cilium Helm chart is the special name "all", which enables the metric for all
controller groups. The special name "none" is also supported.

.. _nat_metrics:

NAT
===

.. list-table::
   :header-rows: 1

   * - Name
     - Labels
     - Default
     - Description
   * - ``nat_endpoint_max_connection``
     - ``family``
     - Enabled
     - Saturation of the most saturated distinct NAT-mapped connection, in terms of egress IP and remote endpoint address

These metrics monitor Cilium's NAT mapping functionality. NAT is used by
features such as Egress Gateway and BPF masquerading. The NAT map holds
mappings for masqueraded connections. Connections held in the NAT table that
are masqueraded with the same egress IP and are going to the same remote
endpoint's IP and port all require a unique source port for the mapping. This
means that any Node masquerading connections to a distinct external endpoint
is limited by the number of possible ephemeral source ports.

Given a Node forwarding one or more such egress IP and remote endpoint tuples,
the ``nat_endpoint_max_connection`` metric reports the most saturated such
connection, as a percentage of the possible source ports available. This
metric is especially useful with the egress gateway feature, where it is
possible to overload a Node if many connections all go to the same endpoint.
In general, this metric should normally be fairly low. A high value may
indicate that a Node is reaching its limit for connections to one or more
external endpoints.
"}
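The saturation arithmetic behind ``nat_endpoint_max_connection`` can be
sketched in a few lines of Python. This is an illustration of the idea only,
not Cilium's implementation: it assumes a usable ephemeral source-port range
of 32768-65535, and the connection counts in ``connections`` are hypothetical.

```python
# Illustrative sketch (not Cilium's implementation) of the metric's meaning:
# each masqueraded connection for a given (egress IP, remote endpoint) tuple
# consumes one unique source port, so saturation is connections / usable ports.
EPHEMERAL_PORTS = 65535 - 32768 + 1  # assumed usable source-port range

# Hypothetical NAT-map contents: (egress IP, remote endpoint) -> connections.
connections = {
    ("192.0.2.10", "203.0.113.5:443"): 16384,
    ("192.0.2.10", "203.0.113.9:53"): 100,
}

def max_saturation_percent(conns: dict, ports: int = EPHEMERAL_PORTS) -> float:
    """Percent saturation of the most saturated tuple."""
    return max(100.0 * n / ports for n in conns.values())

print(round(max_saturation_percent(connections), 1))
```

With these invented numbers the busiest tuple uses half of the assumed port
range, which is the kind of value that would warrant spreading egress traffic
across more gateway Nodes.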
{"questions":"cilium Install Prometheus Grafana docs cilium io Running Prometheus Grafana installmetrics","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _install_metrics:\n\n****************************\nRunning Prometheus & Grafana\n****************************\n\nInstall Prometheus & Grafana\n============================\n\nThis is an example deployment that includes Prometheus and Grafana in a single\ndeployment.\n\n.. admonition:: Video\n :class: attention\n\n  You can also see Cilium and Grafana in action on `eCHO episode 68: Cilium & Grafana <https:\/\/www.youtube.com\/watch?v=DdWksYq5Pv4>`__.\n\nThe default installation contains:\n\n- **Grafana**: A visualization dashboard with Cilium Dashboard pre-loaded.\n- **Prometheus**: a time series database and monitoring system.\n\n .. parsed-literal::\n\n    $ kubectl apply -f \\ |SCM_WEB|\\\/examples\/kubernetes\/addons\/prometheus\/monitoring-example.yaml\n    namespace\/cilium-monitoring created\n    serviceaccount\/prometheus-k8s created\n    configmap\/grafana-config created\n    configmap\/grafana-cilium-dashboard created\n    configmap\/grafana-cilium-operator-dashboard created\n    configmap\/grafana-hubble-dashboard created\n    configmap\/prometheus created\n    clusterrole.rbac.authorization.k8s.io\/prometheus unchanged\n    clusterrolebinding.rbac.authorization.k8s.io\/prometheus unchanged\n    service\/grafana created\n    service\/prometheus created\n    deployment.apps\/grafana created\n    deployment.apps\/prometheus created\n\nThis example deployment of Prometheus and Grafana will automatically scrape the\nCilium and Hubble metrics. 
See the :ref:`metrics` configuration guide on how to\nconfigure a custom Prometheus instance.\n\nDeploy Cilium and Hubble with metrics enabled\n=============================================\n\n*Cilium*, *Hubble*, and *Cilium Operator* do not expose metrics by\ndefault. Enabling metrics for these services will open ports ``9962``, ``9965``,\nand ``9963`` respectively on all nodes of your cluster where these components\nare running.\n\nThe metrics for Cilium, Hubble, and Cilium Operator can all be enabled\nindependently of each other with the following Helm values:\n\n - ``prometheus.enabled=true``: Enables metrics for ``cilium-agent``.\n - ``operator.prometheus.enabled=true``: Enables metrics for ``cilium-operator``.\n - ``hubble.metrics.enabled``: Enables the provided list of Hubble metrics.\n   For Hubble metrics to work, Hubble itself needs to be enabled with\n   ``hubble.enabled=true``. See\n   :ref:`Hubble exported metrics<hubble_exported_metrics>` for the list of\n   available Hubble metrics.\n\nRefer to :ref:`metrics` for more details about the individual metrics.\n\n.. include:: ..\/installation\/k8s-install-download-release.rst\n\nDeploy Cilium via Helm as follows to enable all metrics:\n\n.. parsed-literal::\n\n   helm install cilium |CHART_RELEASE| \\\\\n      --namespace kube-system \\\\\n      --set prometheus.enabled=true \\\\\n      --set operator.prometheus.enabled=true \\\\\n      --set hubble.enabled=true \\\\\n      --set hubble.metrics.enableOpenMetrics=true \\\\\n      --set hubble.metrics.enabled=\"{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\\\\,source_namespace\\\\,source_workload\\\\,destination_ip\\\\,destination_namespace\\\\,destination_workload\\\\,traffic_direction}\"\n\n.. note::\n\n   You can combine the above Helm options with any of the other installation\n   guides.\n\n\nHow to access Grafana\n=====================\n\nExpose the port on your local machine\n\n.. 
code-block:: shell-session\n\n    kubectl -n cilium-monitoring port-forward service\/grafana --address 0.0.0.0 --address :: 3000:3000\n\nAccess it via your browser: http:\/\/localhost:3000\n\nHow to access Prometheus\n========================\n\nExpose the port on your local machine\n\n.. code-block:: shell-session\n\n    kubectl -n cilium-monitoring port-forward service\/prometheus --address 0.0.0.0 --address :: 9090:9090\n\nAccess it via your browser: http:\/\/localhost:9090\n\nExamples\n========\n\nGeneric\n-------\n\n.. image:: images\/grafana_generic.png\n\nNetwork\n-------\n\n.. image:: images\/grafana_network.png\n\nPolicy\n-------\n\n.. image:: images\/grafana_policy.png\n.. image:: images\/grafana_policy2.png\n\nEndpoints\n---------\n\n.. image:: images\/grafana_endpoints.png\n\nControllers\n-----------\n\n.. image:: images\/grafana_controllers.png\n\nKubernetes\n----------\n\n.. image:: images\/grafana_k8s.png\n\nHubble General Processing\n-------------------------\n\n.. image:: images\/grafana_hubble_general_processing.png\n\nHubble Networking\n-----------------\n.. note::\n\n   The ``port-distribution`` metric is disabled by default.\n   Refer to :ref:`metrics` for more details about the individual metrics.\n\n.. image:: images\/grafana_hubble_network.png\n.. image:: images\/grafana_hubble_tcp.png\n.. image:: images\/grafana_hubble_icmp.png\n\nHubble DNS\n----------\n\n.. image:: images\/grafana_hubble_dns.png\n\nHubble HTTP\n-----------\n\n.. image:: images\/grafana_hubble_http.png\n\nHubble Network Policy\n---------------------\n\n.. 
image:: images\/grafana_hubble_network_policy.png","site":"cilium"}
{"questions":"cilium Generic are healthy and ready tion cli download rst shell session Validate that the as well as the pods","answers":"Service Mesh Troubleshooting\n============================\n\n\nInstall the Cilium CLI\n----------------------\n\n.. include:: \/installation\/cli-download.rst\n\nGeneric\n-------\n\n #. Validate that the ``ds\/cilium`` as well as the ``deployment\/cilium-operator`` pods\n    are healthy and ready.\n\n    .. code-block:: shell-session\n\n       $ cilium status\n\nManual Verification of Setup\n----------------------------\n\n #. Validate that ``nodePort.enabled`` is true.\n\n    .. code-block:: shell-session\n\n        $ kubectl exec -n kube-system ds\/cilium -- cilium-dbg status --verbose\n        ...\n        KubeProxyReplacement Details:\n        ...\n          Services:\n          - ClusterIP:      Enabled\n          - NodePort:       Enabled (Range: 30000-32767)\n        ...\n\n #. Validate that the runtime values of ``enable-envoy-config`` and ``enable-ingress-controller``\n    are true. The ingress controller flag is optional if you only use ``CiliumEnvoyConfig`` or\n    ``CiliumClusterwideEnvoyConfig`` CRDs.\n\n    .. code-block:: shell-session\n\n        $ kubectl -n kube-system get cm cilium-config -o json | egrep \"enable-ingress-controller|enable-envoy-config\"\n                \"enable-envoy-config\": \"true\",\n                \"enable-ingress-controller\": \"true\",\n\nIngress Troubleshooting\n-----------------------\n\nInternally, the Cilium Ingress controller will create one Load Balancer service, one\n``CiliumEnvoyConfig`` and one dummy Endpoint resource for each Ingress resource.\n\n\n    .. 
code-block:: shell-session\n\n        $ kubectl get ingress\n        NAME            CLASS    HOSTS   ADDRESS        PORTS   AGE\n        basic-ingress   cilium   *       10.97.60.117   80      16m\n\n        # For dedicated Load Balancer mode\n        $ kubectl get service cilium-ingress-basic-ingress\n        NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE\n        cilium-ingress-basic-ingress   LoadBalancer   10.97.60.117   10.97.60.117   80:31911\/TCP   17m\n\n        # For dedicated Load Balancer mode\n        $ kubectl get cec cilium-ingress-default-basic-ingress\n        NAME                                   AGE\n        cilium-ingress-default-basic-ingress   18m\n\n        # For shared Load Balancer mode\n        $ kubectl get services -n kube-system cilium-ingress\n        NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE\n        cilium-ingress   LoadBalancer   10.111.109.99   10.111.109.99   80:32690\/TCP,443:31566\/TCP   38m\n\n        # For shared Load Balancer mode\n        $ kubectl get cec -n kube-system cilium-ingress\n        NAME             AGE\n        cilium-ingress   15m\n\n #. Validate that the Load Balancer service has either an external IP or FQDN assigned.\n    If it's not available after a long time, please check the Load Balancer related\n    documentation from your respective cloud provider.\n\n #. Check if there is any warning or error message while Cilium is trying to provision\n    the ``CiliumEnvoyConfig`` resource. This is unlikely to happen for CEC resources\n    originating from the Cilium Ingress controller.\n\n    .. 
include:: \/network\/servicemesh\/warning.rst\n\n\nConnectivity Troubleshooting\n----------------------------\n\nThis section is for troubleshooting connectivity issues mainly for Ingress resources, but\nthe same steps can be applied to manually configured ``CiliumEnvoyConfig`` resources as well.\n\nIt's best to have ``debug`` and ``debug-verbose`` enabled with the values below. Note\nthat any change to Cilium flags requires a restart of the Cilium agent and operator.\n\n    .. code-block:: shell-session\n\n        $ kubectl get -n kube-system cm cilium-config -o json | grep \"debug\"\n                \"debug\": \"true\",\n                \"debug-verbose\": \"flow\",\n\n.. note::\n\n    The originating source IP is used for enforcing ingress traffic.\n\nThe request normally traverses from the LoadBalancer service to a pre-assigned port on your\nnode, then gets forwarded to the Cilium Envoy proxy, and finally gets proxied to the actual\nbackend service.\n\n #. The first step, from the cloud Load Balancer to the node port, is out of Cilium's scope. Please\n    check the related documentation from your respective cloud provider to make sure your\n    clusters are configured properly.\n\n #. The second step can be checked by connecting via SSH to your underlying host, and\n    sending a similar request to localhost on the relevant port:\n\n    .. 
code-block:: shell-session\n\n        $ kubectl get service cilium-ingress-basic-ingress\n        NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE\n        cilium-ingress-basic-ingress   LoadBalancer   10.97.60.117   10.97.60.117   80:31911\/TCP   17m\n\n        # After ssh to any of k8s node\n        $ curl -v http:\/\/localhost:31911\/\n        *   Trying 127.0.0.1:31911...\n        * TCP_NODELAY set\n        * Connected to localhost (127.0.0.1) port 31911 (#0)\n        > GET \/ HTTP\/1.1\n        > Host: localhost:31911\n        > User-Agent: curl\/7.68.0\n        > Accept: *\/*\n        >\n        * Mark bundle as not supporting multiuse\n        < HTTP\/1.1 503 Service Unavailable\n        < content-length: 19\n        < content-type: text\/plain\n        < date: Thu, 07 Jul 2022 12:25:56 GMT\n        < server: envoy\n        <\n        * Connection #0 to host localhost left intact\n\n        # Flows for world identity\n        $ kubectl -n kube-system exec ds\/cilium -- hubble observe -f --identity 2\n        Jul  7 12:28:27.970: 127.0.0.1:54704 <- 127.0.0.1:13681 http-response FORWARDED (HTTP\/1.1 503 0ms (GET http:\/\/localhost:31911\/))\n\n    Alternatively, you can also send a request directly to the Envoy proxy port. For\n    Ingress, the proxy port is randomly assigned by the Cilium Ingress controller. For\n    manually configured ``CiliumEnvoyConfig`` resources, the proxy port is retrieved\n    directly from the spec.\n\n    .. 
code-block:: shell-session\n\n        $  kubectl logs -f -n kube-system ds\/cilium --timestamps | egrep \"envoy|proxy\"\n        ...\n        2022-07-08T08:05:13.986649816Z level=info msg=\"Adding new proxy port rules for cilium-ingress-default-basic-ingress:19672\" proxy port name=cilium-ingress-default-basic-ingress subsys=proxy\n\n        # After ssh to any of k8s node, send request to Envoy proxy port directly\n        $ curl -v  http:\/\/localhost:19672\n        *   Trying 127.0.0.1:19672...\n        * TCP_NODELAY set\n        * Connected to localhost (127.0.0.1) port 19672 (#0)\n        > GET \/ HTTP\/1.1\n        > Host: localhost:19672\n        > User-Agent: curl\/7.68.0\n        > Accept: *\/*\n        >\n        * Mark bundle as not supporting multiuse\n        < HTTP\/1.1 503 Service Unavailable\n        < content-length: 19\n        < content-type: text\/plain\n        < date: Fri, 08 Jul 2022 08:12:35 GMT\n        < server: envoy\n\n    If you see a response similar to the above, it means that the request is being\n    redirected to the proxy successfully. The HTTP response will carry the special header\n    ``server: envoy`` accordingly. The same can be observed with the ``hubble observe``\n    command (see :ref:`hubble_troubleshooting`).\n\n    The most common root cause is either that the Cilium Envoy proxy is not running\n    on the node, or there is some other issue with CEC resource provisioning.\n\n    .. code-block:: shell-session\n\n        $ kubectl exec -n kube-system ds\/cilium -- cilium-dbg status\n        ...\n        Controller Status:       49\/49 healthy\n        Proxy Status:            OK, ip 10.0.0.25, 6 redirects active on ports 10000-20000\n        Global Identity Range:   min 256, max 65535\n\n #. Assuming that the above steps are done successfully, you can proceed to send a request via\n    an external IP or FQDN.\n\n    Double-check whether your backend service is up and healthy. 
The Endpoint Discovery Service\n    (EDS) cluster has a name that follows the convention ``<namespace>\/<service-name>:<port>``.\n\n    .. code-block:: shell-session\n\n        $ LB_IP=$(kubectl get ingress basic-ingress -o json | jq '.status.loadBalancer.ingress[0].ip' | jq -r .)\n        $ curl -s http:\/\/$LB_IP\/details\/1\n        no healthy upstream\n\n        $ kubectl get cec cilium-ingress-default-basic-ingress -o json | jq '.spec.resources[] | select(.type==\"EDS\")'\n        {\n          \"@type\": \"type.googleapis.com\/envoy.config.cluster.v3.Cluster\",\n          \"connectTimeout\": \"5s\",\n          \"name\": \"default\/details:9080\",\n          \"outlierDetection\": {\n            \"consecutiveLocalOriginFailure\": 2,\n            \"splitExternalLocalOriginErrors\": true\n          },\n          \"type\": \"EDS\",\n          \"typedExtensionProtocolOptions\": {\n            \"envoy.extensions.upstreams.http.v3.HttpProtocolOptions\": {\n              \"@type\": \"type.googleapis.com\/envoy.extensions.upstreams.http.v3.HttpProtocolOptions\",\n              \"useDownstreamProtocolConfig\": {\n                \"http2ProtocolOptions\": {}\n              }\n            }\n          }\n        }\n        {\n          \"@type\": \"type.googleapis.com\/envoy.config.cluster.v3.Cluster\",\n          \"connectTimeout\": \"5s\",\n          \"name\": \"default\/productpage:9080\",\n          \"outlierDetection\": {\n            \"consecutiveLocalOriginFailure\": 2,\n            \"splitExternalLocalOriginErrors\": true\n          },\n          \"type\": \"EDS\",\n          \"typedExtensionProtocolOptions\": {\n            \"envoy.extensions.upstreams.http.v3.HttpProtocolOptions\": {\n              \"@type\": \"type.googleapis.com\/envoy.extensions.upstreams.http.v3.HttpProtocolOptions\",\n              \"useDownstreamProtocolConfig\": {\n                \"http2ProtocolOptions\": {}\n              }\n            }\n          }\n        }\n\n    If everything is configured 
correctly, you will be able to see the flows from ``world`` (identity 2),\n    ``ingress`` (identity 8) and your backend pod as per below.\n\n    .. code-block:: shell-session\n\n        # Flows for world identity\n        $ kubectl exec -n kube-system ds\/cilium -- hubble observe --identity 2 -f\n        Defaulted container \"cilium-agent\" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init)\n        Jul  7 13:07:46.726: 192.168.49.1:59608 -> default\/details-v1-5498c86cf5-cnt9q:9080 http-request FORWARDED (HTTP\/1.1 GET http:\/\/10.97.60.117\/details\/1)\n        Jul  7 13:07:46.727: 192.168.49.1:59608 <- default\/details-v1-5498c86cf5-cnt9q:9080 http-response FORWARDED (HTTP\/1.1 200 1ms (GET http:\/\/10.97.60.117\/details\/1))\n\n        # Flows for Ingress identity (e.g. envoy proxy)\n        $ kubectl exec -n kube-system ds\/cilium -- hubble observe --identity 8 -f\n        Defaulted container \"cilium-agent\" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init)\n        Jul  7 13:07:46.726: 10.0.0.95:42509 -> default\/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: SYN)\n        Jul  7 13:07:46.726: 10.0.0.95:42509 <- default\/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: SYN, ACK)\n        Jul  7 13:07:46.726: 10.0.0.95:42509 -> default\/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK)\n        Jul  7 13:07:46.726: 10.0.0.95:42509 -> default\/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)\n        Jul  7 13:07:46.727: 10.0.0.95:42509 <- default\/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, PSH)\n\n        # Flows for backend pod, the identity can be retrieved via cilium identity list command\n        $ kubectl exec -n kube-system ds\/cilium -- hubble observe --identity 48847 -f\n        Defaulted container 
\"cilium-agent\" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init)\n        Jul  7 13:07:46.726: 10.0.0.95:42509 -> default\/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: SYN)\n        Jul  7 13:07:46.726: 10.0.0.95:42509 <- default\/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: SYN, ACK)\n        Jul  7 13:07:46.726: 10.0.0.95:42509 -> default\/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK)\n        Jul  7 13:07:46.726: 10.0.0.95:42509 -> default\/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)\n        Jul  7 13:07:46.726: 192.168.49.1:59608 -> default\/details-v1-5498c86cf5-cnt9q:9080 http-request FORWARDED (HTTP\/1.1 GET http:\/\/10.97.60.117\/details\/1)\n        Jul  7 13:07:46.727: 10.0.0.95:42509 <- default\/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, PSH)\n        Jul  7 13:07:46.727: 192.168.49.1:59608 <- default\/details-v1-5498c86cf5-cnt9q:9080 http-response FORWARDED (HTTP\/1.1 200 1ms (GET http:\/\/10.97.60.117\/details\/1))\n        Jul  7 13:08:16.757: 10.0.0.95:42509 <- default\/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, FIN)\n        Jul  7 13:08:16.757: 10.0.0.95:42509 -> default\/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)\n\n        # Sample output of cilium-dbg monitor\n        $ ksysex ds\/cilium -- cilium-dbg monitor\n        level=info msg=\"Initializing dissection cache...\" subsys=monitor\n        -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state new ifindex lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp SYN\n        -> stack flow 0x2481d648 , identity 61131->ingress state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.164:9080 -> 10.0.0.192:34219 tcp SYN, ACK\n        -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state established ifindex 
lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp ACK\n        -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state established ifindex lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp ACK\n        -> Request http from 0 ([reserved:world]) to 212 ([k8s:io.cilium.k8s.namespace.labels.kubernetes.io\/metadata.name=default k8s:io.cilium.k8s.policy.cluster=minikube k8s:io.cilium.k8s.policy.serviceaccount=bookinfo-details k8s:io.kubernetes.pod.namespace=default k8s:version=v1 k8s:app=details]), identity 2->61131, verdict Forwarded GET http:\/\/10.99.74.157\/details\/1 => 0\n        -> stack flow 0x2481d648 , identity 61131->ingress state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.164:9080 -> 10.0.0.192:34219 tcp ACK\n        -> Response http to 0 ([reserved:world]) from 212 ([k8s:io.kubernetes.pod.namespace=default k8s:version=v1 k8s:app=details k8s:io.cilium.k8s.namespace.labels.kubernetes.io\/metadata.name=default k8s:io.cilium.k8s.policy.cluster=minikube k8s:io.cilium.k8s.policy.serviceaccount=bookinfo-details]), identity 61131->2, verdict Forwarded GET http:\/\/10.99.74.157\/details\/1 => 200","site":"cilium"}
{"questions":"cilium Upgrade Guide docs cilium io adminupgrade upgradegeneral","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _admin_upgrade:\n\n*************\nUpgrade Guide\n*************\n\n.. _upgrade_general:\n\nThis upgrade guide is intended for Cilium running on Kubernetes. If you have\nquestions, feel free to ping us on `Cilium Slack`_.\n\n.. include:: upgrade-warning.rst\n\n.. _pre_flight:\n\nRunning pre-flight check (Required)\n===================================\n\nWhen rolling out an upgrade with Kubernetes, Kubernetes will first terminate the\npod, then pull the new image version, and finally spin up a pod running the new\nimage. In order to reduce the downtime of the agent and to prevent ``ErrImagePull``\nerrors during upgrade, the pre-flight check pre-pulls the new image version.\nIf you are running in :ref:`kubeproxy-free`\nmode you must also pass the Kubernetes API Server IP and\/or\nthe Kubernetes API Server Port when generating the ``cilium-preflight.yaml``\nfile.\n\n.. tabs::\n  .. group-tab:: kubectl\n\n    .. parsed-literal::\n\n      helm template |CHART_RELEASE| \\\\\n        --namespace=kube-system \\\\\n        --set preflight.enabled=true \\\\\n        --set agent=false \\\\\n        --set operator.enabled=false \\\\\n        > cilium-preflight.yaml\n      kubectl create -f cilium-preflight.yaml\n\n  .. group-tab:: Helm\n\n    .. parsed-literal::\n\n      helm install cilium-preflight |CHART_RELEASE| \\\\\n        --namespace=kube-system \\\\\n        --set preflight.enabled=true \\\\\n        --set agent=false \\\\\n        --set operator.enabled=false\n\n  .. group-tab:: kubectl (kubeproxy-free)\n\n    .. 
parsed-literal::\n\n      helm template |CHART_RELEASE| \\\\\n        --namespace=kube-system \\\\\n        --set preflight.enabled=true \\\\\n        --set agent=false \\\\\n        --set operator.enabled=false \\\\\n        --set k8sServiceHost=API_SERVER_IP \\\\\n        --set k8sServicePort=API_SERVER_PORT \\\\\n        > cilium-preflight.yaml\n      kubectl create -f cilium-preflight.yaml\n\n  .. group-tab:: Helm (kubeproxy-free)\n\n    .. parsed-literal::\n\n      helm install cilium-preflight |CHART_RELEASE| \\\\\n        --namespace=kube-system \\\\\n        --set preflight.enabled=true \\\\\n        --set agent=false \\\\\n        --set operator.enabled=false \\\\\n        --set k8sServiceHost=API_SERVER_IP \\\\\n        --set k8sServicePort=API_SERVER_PORT\n\nAfter applying the ``cilium-preflight.yaml``, ensure that the number of READY\npods is the same as the number of Cilium pods running.\n\n.. code-block:: shell-session\n\n    $ kubectl get daemonset -n kube-system | sed -n '1p;\/cilium\/p'\n    NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE\n    cilium                    2         2         2       2            2           <none>          1h20m\n    cilium-pre-flight-check   2         2         2       2            2           <none>          7m15s\n\nOnce the numbers of READY pods are equal, make sure the Cilium pre-flight\ndeployment is also marked as READY 1\/1. If it shows READY 0\/1, consult the\n:ref:`cnp_validation` section and resolve issues with the deployment before\ncontinuing with the upgrade.\n\n.. code-block:: shell-session\n\n    $ kubectl get deployment -n kube-system cilium-pre-flight-check -w\n    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE\n    cilium-pre-flight-check   1\/1     1            0           12s\n\n.. 
_cleanup_preflight_check:\n\nClean up pre-flight check\n-------------------------\n\nOnce the number of READY pods for the preflight :term:`DaemonSet` is the same as the number\nof Cilium pods running, and the preflight ``Deployment`` is marked as READY ``1\/1``,\nyou can delete the cilium-preflight resources and proceed with the upgrade.\n\n.. tabs::\n  .. group-tab:: kubectl\n\n    .. code-block:: shell-session\n\n      kubectl delete -f cilium-preflight.yaml\n\n  .. group-tab:: Helm\n\n    .. code-block:: shell-session\n\n      helm delete cilium-preflight --namespace=kube-system\n\n.. _upgrade_minor:\n\nUpgrading Cilium\n================\n\nDuring normal cluster operations, all Cilium components should run the same\nversion. Upgrading just one of them (e.g., upgrading the agent without\nupgrading the operator) could result in unexpected cluster behavior.\nThe following steps will describe how to upgrade all of the components from\none stable release to a later stable release.\n\n.. include:: upgrade-warning.rst\n\nStep 1: Upgrade to latest patch version\n---------------------------------------\n\nWhen upgrading from one minor release to another minor release, for example\n1.x to 1.y, it is recommended to upgrade to the `latest patch release\n<https:\/\/github.com\/cilium\/cilium#stable-releases>`__ for a Cilium release series first.\nUpgrading to the latest patch release ensures the most seamless experience if a\nrollback is required following the minor release upgrade. 
The upgrade guides\nfor previous versions can be found for each minor version at the bottom left\ncorner.\n\nStep 2: Use Helm to Upgrade your Cilium deployment\n--------------------------------------------------\n\n:term:`Helm` can be used to either upgrade Cilium directly or to generate a new set of\nYAML files that can be used to upgrade an existing deployment via ``kubectl``.\nBy default, Helm will generate the new templates using the default values files\npackaged with each new release. You still need to ensure that you are\nspecifying the equivalent options as used for the initial deployment, either by\nspecifying them at the command line or by committing the values to a YAML\nfile.\n\n.. include:: ..\/installation\/k8s-install-download-release.rst\n\nTo minimize datapath disruption during the upgrade, the\n``upgradeCompatibility`` option should be set to the initial Cilium\nversion that was installed in this cluster.\n\n.. tabs::\n  .. group-tab:: kubectl\n\n    Generate the required YAML file and deploy it:\n\n    .. parsed-literal::\n\n      helm template |CHART_RELEASE| \\\\\n        --set upgradeCompatibility=1.X \\\\\n        --namespace kube-system \\\\\n        > cilium.yaml\n      kubectl apply -f cilium.yaml\n\n  .. group-tab:: Helm\n\n    Deploy Cilium release via Helm:\n\n    .. parsed-literal::\n\n      helm upgrade cilium |CHART_RELEASE| \\\\\n        --namespace=kube-system \\\\\n        --set upgradeCompatibility=1.X\n\n.. note::\n\n   Instead of using ``--set``, you can also save the values relevant to your\n   deployment in a YAML file and use it to regenerate the YAML for the latest\n   Cilium version. Running any of the previous commands will overwrite\n   the existing cluster's :term:`ConfigMap` so it is critical to preserve any existing\n   options, either by setting them at the command line or storing them in a\n   YAML file, similar to:\n\n   .. 
code-block:: yaml\n\n      agent: true\n      upgradeCompatibility: \"1.8\"\n      ipam:\n        mode: \"kubernetes\"\n      k8sServiceHost: \"API_SERVER_IP\"\n      k8sServicePort: \"API_SERVER_PORT\"\n      kubeProxyReplacement: \"true\"\n\n   You can then upgrade using this values file by running:\n\n   .. parsed-literal::\n\n      helm upgrade cilium |CHART_RELEASE| \\\\\n        --namespace=kube-system \\\\\n        -f my-values.yaml\n\nWhen upgrading from one minor release to another minor release using\n``helm upgrade``, do *not* use Helm's ``--reuse-values`` flag.\nThe ``--reuse-values`` flag ignores any newly introduced values present in\nthe new release and thus may cause the Helm template to render incorrectly.\nInstead, if you want to reuse the values from your existing installation,\nsave the old values in a values file, check the file for any renamed or\ndeprecated values, and then pass it to the ``helm upgrade`` command as\ndescribed above. You can retrieve and save the values from an existing\ninstallation with the following command:\n\n.. code-block:: shell-session\n\n  helm get values cilium --namespace=kube-system -o yaml > old-values.yaml\n\nThe ``--reuse-values`` flag may only be safely used if the Cilium chart version\nremains unchanged, for example when ``helm upgrade`` is used to apply\nconfiguration changes without upgrading Cilium.\n\nStep 3: Rolling Back\n--------------------\n\nOccasionally, it may be necessary to undo the rollout because a step was missed\nor something went wrong during upgrade. To undo the rollout run:\n\n.. tabs::\n  .. group-tab:: kubectl\n\n    .. code-block:: shell-session\n\n      kubectl rollout undo daemonset\/cilium -n kube-system\n\n  .. group-tab:: Helm\n\n    .. 
code-block:: shell-session\n\n      helm history cilium --namespace=kube-system\n      helm rollback cilium [REVISION] --namespace=kube-system\n\nThis will revert the latest changes to the Cilium ``DaemonSet`` and return\nCilium to the state it was in prior to the upgrade.\n\n.. note::\n\n    When rolling back after new features of the new minor version have already\n    been consumed, consult the :ref:`version_notes` to check and prepare for\n    incompatible feature use before downgrading\/rolling back. This step is only\n    required after new functionality introduced in the new minor version has\n    already been explicitly used by creating new resources or by opting into\n    new features via the :term:`ConfigMap`.\n\n.. _version_notes:\n.. _upgrade_version_specifics:\n\nVersion Specific Notes\n======================\n\nThis section details the upgrade notes specific to |CURRENT_RELEASE|. Read them\ncarefully and take the suggested actions before upgrading Cilium to |CURRENT_RELEASE|.\nFor upgrades to earlier releases, see the\n:prev-docs:`upgrade notes to the previous version <operations\/upgrade\/#upgrade-notes>`.\n\nThe only tested upgrade and rollback path is between consecutive minor releases.\nAlways perform upgrades and rollbacks between one minor release at a time.\nAdditionally, always update to the latest patch release of your current version\nbefore attempting an upgrade.\n\nTested upgrades are expected to have minimal to no impact on new and existing\nconnections matched by either no Network Policies, or L3\/L4 Network Policies only.\nAny traffic flowing via user space proxies (for example, because an L7 policy is\nin place, or using Ingress\/Gateway API) will be disrupted during upgrade. Endpoints\ncommunicating via the proxy must reconnect to re-establish connections.\n\n.. _current_release_required_changes:\n\n.. 
_1.17_upgrade_notes:\n\n1.17 Upgrade Notes\n------------------\n\n* Operating Cilium in ``--datapath-mode=lb-only`` for plain Docker mode now requires\n  adding an additional ``--enable-k8s=false`` flag to the command line; otherwise it is assumed\n  that Kubernetes is present.\n* The Kubernetes clients used by Cilium Agent and Cilium Operator now have separately configurable\n  rate limits. The default rate limit for Cilium Operator K8s clients has been increased to\n  100 QPS\/200 Burst. To configure the rate limit for Cilium Operator, use the\n  ``--operator-k8s-client-qps`` and ``--operator-k8s-client-burst`` flags or the corresponding\n  Helm values.\n* Support for Consul, deprecated since v1.12, has been removed.\n* Cilium now supports services protocol differentiation, which allows the agent to distinguish two\n  services on the same port with different protocols (e.g. TCP and UDP).\n  This feature, enabled by default, can be controlled with the ``--bpf-lb-proto-diff`` flag.\n  After the upgrade, existing services without a protocol set will be preserved as such, to avoid any\n  connection disruptions, and will need to be deleted and recreated in order for their protocol to\n  be taken into account by the agent.\n  In case of downgrades to a version that doesn't support services protocol differentiation,\n  existing services with the protocol set will be deleted and recreated, without the protocol, by\n  the agent, causing connection disruptions for such services.\n* MTU auto-detection is now continuous during the agent's lifetime; changing a device's MTU no longer requires\n  restarting the agent to pick up the new MTU.\n* MTU auto-detection will now use the lowest MTU of all external interfaces. Before, only the primary\n  interface was considered. One exception to this is in ENI mode where the secondary interfaces are not\n  considered for MTU auto-detection. 
MTU can still be configured manually via the ``MTU`` helm option,\n  ``--mtu`` agent flag or ``mtu`` option in CNI configuration.\n* Support for L7 protocol visibility using Pod annotations (``policy.cilium.io\/proxy-visibility``),\n  deprecated since v1.15, has been removed.\n* The Cilium cluster name validation can no longer be bypassed, both for the local and\n  remote clusters. The cluster name is strictly enforced to consist of at most 32 lower-case\n  alphanumeric characters and '-', and must start and end with an alphanumeric character.\n* Cilium could previously be run in a configuration where the Etcd instances\n  that distribute Cilium state between nodes would be managed in pod network by\n  Cilium itself. This support, which had previously been deprecated as complicated\n  and error-prone, has now been removed. Refer to :ref:`k8s_install_etcd` for\n  alternatives for running Cilium with Etcd.\n* For IPsec, support for a single key has been removed. Per-tunnel keys will\n  now be used regardless of the presence of the ``+`` sign in the secret.\n* The option to run a synchronous probe using ``cilium-health status --probe`` is no longer supported,\n  and is now a hidden option that returns the results of the most recent cached probe. It will be\n  removed in a future release.\n* The Cilium status API now reports the KVStore subsystem with ``Disabled`` state when disabled,\n  instead of ``OK`` state and ``Disabled`` message.\n* Support for ``metallb-bgp``, deprecated since 1.14, has been removed.\n\nRemoved Options\n~~~~~~~~~~~~~~~\n\n* The previously deprecated ``clustermesh-ip-identities-sync-timeout`` flag has\n  been removed in favor of ``clustermesh-sync-timeout``.\n* The previously deprecated built-in WireGuard userspace-mode fallback (Helm ``wireguard.userspaceFallback``)\n  has been removed. 
Users of WireGuard transparent encryption are required to use a Linux kernel with\n  WireGuard support.\n* The previously deprecated ``metallb-bgp`` flags ``bgp-config-path``, ``bgp-announce-lb-ip``\n  and ``bgp-announce-pod-cidr`` have been removed. Users are now required to use Cilium BGP\n  control plane for BGP advertisements.\n\nDeprecated Options\n~~~~~~~~~~~~~~~~~~\n\n* The high-scale mode for ipcache has been deprecated and will be removed in v1.18.\n* The hubble-relay flag ``--dial-timeout`` has been deprecated (now a no-op)\n  and will be removed in Cilium 1.18.\n\nHelm Options\n~~~~~~~~~~~~\n\n* The Helm options ``hubble.tls.server.cert``, ``hubble.tls.server.key``,\n  ``hubble.relay.tls.client.cert``, ``hubble.relay.tls.client.key``,\n  ``hubble.relay.tls.server.cert``, ``hubble.relay.tls.server.key``,\n  ``hubble.ui.tls.client.cert``, and ``hubble.ui.tls.client.key`` have been\n  deprecated in favor of the associated ``existingSecret`` options and will be\n  removed in a future release.\n* The default value of ``hubble.tls.auto.certValidityDuration`` has been\n  lowered from 1095 days to 365 days because recent versions of MacOS will fail\n  to validate certificates with expirations longer than 825 days.\n* The Helm option ``hubble.relay.dialTimeout`` has been deprecated (now a no-op)\n  and will be removed in Cilium 1.18.\n* The ``metallb-bgp`` integration Helm options ``bgp.enabled``, ``bgp.announce.podCIDR``, and\n  ``bgp.announce.loadbalancerIP`` have been removed. 
Users are now required to use Cilium BGP\n  control plane options available under ``bgpControlPlane`` for BGP announcements.\n* The default value of ``dnsProxy.endpointMaxIpPerHostname`` and its\n  corresponding agent option has been increased from 50 to 1000 to reflect\n  improved scaling of toFQDNs policies and to better handle domains which return\n  a large number of IPs with short TTLs.\n\nAgent Options\n~~~~~~~~~~~~~\n\n* The ``CONNTRACK_LOCAL`` option has been deprecated and will be removed in a\n  future release.\n\nBugtool Options\n~~~~~~~~~~~~~~~\n\n* The ``k8s-mode`` flag (and the related ``cilium-agent-container-name``, ``k8s-namespace`` and ``k8s-label`` flags)\n  has been deprecated and will be removed in Cilium 1.18. The Cilium CLI should be used to gather a sysdump from a K8s cluster.\n\nAdded Metrics\n~~~~~~~~~~~~~\n* ``cilium_node_health_connectivity_status``\n* ``cilium_node_health_connectivity_latency_seconds``\n* ``cilium_operator_unmanaged_pods``\n* ``cilium_policy_selector_match_count_max``\n* ``cilium_identity_cache_timer_duration``\n* ``cilium_identity_cache_timer_trigger_latency``\n* ``cilium_identity_cache_timer_trigger_folds``\n\nRemoved Metrics\n~~~~~~~~~~~~~~~\n* ``cilium_cidrgroup_translation_time_stats_seconds`` has been removed, as the measured code path no longer exists.\n* ``cilium_triggers_policy_update_total`` has been removed.\n* ``cilium_triggers_policy_update_folds`` has been removed.\n* ``cilium_triggers_policy_update_call_duration`` has been removed.\n\nChanged Metrics\n~~~~~~~~~~~~~~~\n\nDeprecated Metrics\n~~~~~~~~~~~~~~~~~~\n* ``cilium_node_connectivity_status`` is now deprecated. Please use ``cilium_node_health_connectivity_status`` instead.\n* ``cilium_node_connectivity_latency_seconds`` is now deprecated. 
Please use ``cilium_node_health_connectivity_latency_seconds`` instead.\n\nHubble CLI\n~~~~~~~~~~\n\n* The ``--cluster`` flag's behavior changed to show flows emitted from nodes outside of\n  the provided cluster name (either coming from or going to the target cluster).\n  This change brings consistency between the ``--cluster`` and ``--namespace``\n  flags and removes the incompatibility between the ``--cluster`` and\n  ``--node-name`` flags. The previous behavior of ``--cluster foo`` can be\n  reproduced with ``--node-name foo\/`` (shows all flows emitted from a node in\n  cluster ``foo``).\n\nAdvanced\n========\n\nUpgrade Impact\n--------------\n\nUpgrades are designed to have minimal impact on your running deployment.\nNetworking connectivity, policy enforcement and load balancing will remain\nfunctional in general. The following is a list of operations that will not be\navailable during the upgrade:\n\n* API-aware policy rules are enforced in user space proxies running as part of\n  the Cilium pod. Upgrading Cilium causes the proxy to restart, which results\n  in a connectivity outage and causes existing proxied connections to reset.\n\n* Existing policy will remain effective but implementation of new policy rules\n  will be postponed until the upgrade has been completed on a particular\n  node.\n\n* Monitoring components such as ``cilium-dbg monitor`` will experience a brief\n  outage while the Cilium pod is restarting. Events are queued up and read\n  after the upgrade. If the number of events exceeds the event buffer size,\n  events will be lost.\n\n\n.. 
_upgrade_configmap:\n\nRebasing a ConfigMap\n--------------------\n\nThis section describes the procedure to rebase an existing :term:`ConfigMap` to the\ntemplate of another version.\n\nExport the current ConfigMap\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n::\n\n        $ kubectl get configmap -n kube-system cilium-config -o yaml --export > cilium-cm-old.yaml\n        $ cat .\/cilium-cm-old.yaml\n        apiVersion: v1\n        data:\n          clean-cilium-state: \"false\"\n          debug: \"true\"\n          disable-ipv4: \"false\"\n          etcd-config: |-\n            ---\n            endpoints:\n            - https:\/\/192.168.60.11:2379\n            #\n            # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line\n            # and create a kubernetes secret by following the tutorial in\n            # https:\/\/cilium.link\/etcd-config\n            trusted-ca-file: '\/var\/lib\/etcd-secrets\/etcd-client-ca.crt'\n            #\n            # In case you want client to server authentication, uncomment the following\n            # lines and add the certificate and key in cilium-etcd-secrets below\n            key-file: '\/var\/lib\/etcd-secrets\/etcd-client.key'\n            cert-file: '\/var\/lib\/etcd-secrets\/etcd-client.crt'\n        kind: ConfigMap\n        metadata:\n          creationTimestamp: null\n          name: cilium-config\n          selfLink: \/api\/v1\/namespaces\/kube-system\/configmaps\/cilium-config\n\n\nIn the :term:`ConfigMap` above, we can verify that Cilium is running with ``debug`` set to\n``true``, that it has an etcd endpoint secured with `TLS <https:\/\/etcd.io\/docs\/latest\/op-guide\/security\/>`_,\nand that etcd is set up for `client to server authentication <https:\/\/etcd.io\/docs\/latest\/op-guide\/security\/#example-2-client-to-server-authentication-with-https-client-certificates>`_.\n\nGenerate the latest ConfigMap\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n.. 
code-block:: shell-session\n\n    helm template cilium \\\n      --namespace=kube-system \\\n      --set agent=false \\\n      --set config.enabled=true \\\n      --set operator.enabled=false \\\n      > cilium-configmap.yaml\n\nAdd new options\n~~~~~~~~~~~~~~~\n\nAdd the new options manually to your old :term:`ConfigMap`, and make the necessary\nchanges.\n\nIn this example, the ``debug`` option is meant to be kept with ``true``, the\n``etcd-config`` is kept unchanged, and ``monitor-aggregation`` is a new\noption, but after reading the :ref:`version_notes` the value was kept unchanged\nfrom the default value.\n\nAfter making the necessary changes, the old :term:`ConfigMap` was migrated with the\nnew options while keeping the configuration that we wanted:\n\n::\n\n        $ cat .\/cilium-cm-old.yaml\n        apiVersion: v1\n        data:\n          debug: \"true\"\n          disable-ipv4: \"false\"\n          # If you want to clean cilium state; change this value to true\n          clean-cilium-state: \"false\"\n          monitor-aggregation: \"medium\"\n          etcd-config: |-\n            ---\n            endpoints:\n            - https:\/\/192.168.60.11:2379\n            #\n            # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line\n            # and create a kubernetes secret by following the tutorial in\n            # https:\/\/cilium.link\/etcd-config\n            trusted-ca-file: '\/var\/lib\/etcd-secrets\/etcd-client-ca.crt'\n            #\n            # In case you want client to server authentication, uncomment the following\n            # lines and add the certificate and key in cilium-etcd-secrets below\n            key-file: '\/var\/lib\/etcd-secrets\/etcd-client.key'\n            cert-file: '\/var\/lib\/etcd-secrets\/etcd-client.crt'\n        kind: ConfigMap\n        metadata:\n          creationTimestamp: null\n          name: cilium-config\n          selfLink: 
\/api\/v1\/namespaces\/kube-system\/configmaps\/cilium-config\n\nApply new ConfigMap\n~~~~~~~~~~~~~~~~~~~\n\nAfter adding the options, manually save the file with your changes and install\nthe :term:`ConfigMap` in the ``kube-system`` namespace of your cluster.\n\n.. code-block:: shell-session\n\n        $ kubectl apply -n kube-system -f .\/cilium-cm-old.yaml\n\nOnce the :term:`ConfigMap` is successfully upgraded, we can start upgrading the Cilium\n``DaemonSet`` and ``RBAC``, which will pick up the latest configuration from the\n:term:`ConfigMap`.\n\n\nMigrating from kvstore-backed identities to Kubernetes CRD-backed identities\n----------------------------------------------------------------------------\n\nBeginning with Cilium 1.6, Kubernetes CRD-backed security identities can be\nused for smaller clusters. Along with other changes in 1.6, this allows\nkvstore-free operation if desired. It is possible to migrate identities from an\nexisting kvstore deployment to CRD-backed identities. This minimizes\ndisruptions to traffic as the update rolls out through the cluster.\n\nMigration\n~~~~~~~~~\n\nWhen identities change, existing connections can be disrupted while Cilium\ninitializes and synchronizes with the shared identity store. The disruption\noccurs when some Cilium instances use new numeric identities for existing pods\nwhile other instances still use the old ones. When converting to CRD-backed\nidentities, it is possible to pre-allocate CRD identities so that the numeric\nidentities match those in the kvstore. 
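The pre-allocation logic can be sketched as follows. This is a minimal illustration in Python, not Cilium's actual implementation (which operates on kvstore keys and ``CiliumIdentity`` CRDs): each kvstore identity is copied into the CRD store under the same numeric ID, and a fresh ID is allocated only when that number is already bound to a different label set, mirroring the three outcomes (migrated, already migrated, conflict) visible in the example migration output below.

```python
# Illustrative sketch only: identities map a numeric ID to a label set.
def migrate_identities(kvstore, crd, next_id=100000):
    """Copy kvstore identities into the CRD store, preserving numeric IDs
    where possible and allocating a new ID when the number is already
    bound to a different label set. Returns {old_id: new_id}."""
    results = {}
    for ident, labels in kvstore.items():
        existing = crd.get(ident)
        if existing is None:
            crd[ident] = labels           # migrated: same numeric identity
            results[ident] = ident
        elif existing == labels:
            results[ident] = ident        # already migrated: nothing to do
        else:
            new_id = next_id              # conflict: ID bound to other labels
            while new_id in crd:
                new_id += 1
            crd[new_id] = labels          # new ID allocated for this key
            results[ident] = new_id
            next_id = new_id + 1
    return results
```

Because numeric values are preserved wherever possible, old and new instances resolve the same label sets to the same identities.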
This allows new and old Cilium instances in the rollout\nto agree.\n\nThere are two ways to achieve this: you can either run a one-off ``cilium preflight migrate-identity`` script\nwhich will perform a point-in-time copy of all identities from the kvstore to CRDs (added in Cilium 1.6), or use the \"Double Write\" identity\nallocation mode which will have Cilium manage identities in both the kvstore and CRD at the same time for a seamless migration (added in Cilium 1.17).\n\nMigration with the ``cilium preflight migrate-identity`` script\n###############################################################\n\nThe ``cilium preflight migrate-identity`` script is a one-off tool that can be used to copy identities from the kvstore into CRDs.\nIt has a couple of limitations:\n\n* If an identity is created in the kvstore after the one-off migration has been completed, it will not be copied into a CRD.\n  This means that you need to perform the migration on a cluster with no identity churn.\n* There is no easy way to revert back to ``--identity-allocation-mode=kvstore`` if something goes wrong after\n  Cilium has been migrated to ``--identity-allocation-mode=crd``\n\nIf these limitations are not acceptable, it is recommended to use the \":ref:`Double Write <double_write_migration>`\" identity allocation mode instead.\n\nThe following steps show an example of performing the migration using the ``cilium preflight migrate-identity`` script.\nIt is safe to re-run the command if desired. It will identify already allocated identities or ones that\ncannot be migrated. Note that identity ``34815`` is migrated, ``17003`` is\nalready migrated, and ``11730`` has a conflict and a new ID allocated for those\nlabels.\n\nThe steps below assume a stable cluster with no new identities created during\nthe rollout. 
Once Cilium using CRD-backed identities is running, it may begin\nallocating identities in a way that conflicts with older ones in the kvstore.\n\nThe cilium preflight manifest requires etcd support and can be built with:\n\n.. code-block:: shell-session\n\n    helm template cilium \\\n      --namespace=kube-system \\\n      --set preflight.enabled=true \\\n      --set agent=false \\\n      --set config.enabled=false \\\n      --set operator.enabled=false \\\n      --set etcd.enabled=true \\\n      --set etcd.ssl=true \\\n      > cilium-preflight.yaml\n    kubectl create -f cilium-preflight.yaml\n\n\nExample migration\n~~~~~~~~~~~~~~~~~\n\n.. code-block:: shell-session\n\n      $ kubectl exec -n kube-system cilium-pre-flight-check-1234 -- cilium-dbg preflight migrate-identity\n      INFO[0000] Setting up kvstore client\n      INFO[0000] Connecting to etcd server...                  config=\/var\/lib\/cilium\/etcd-config.yml endpoints=\"[https:\/\/192.168.60.11:2379]\" subsys=kvstore\n      INFO[0000] Setting up kubernetes client\n      INFO[0000] Establishing connection to apiserver          host=\"https:\/\/192.168.60.11:6443\" subsys=k8s\n      INFO[0000] Connected to apiserver                        subsys=k8s\n      INFO[0000] Got lease ID 29c66c67db8870c8                 subsys=kvstore\n      INFO[0000] Got lock lease ID 29c66c67db8870ca            subsys=kvstore\n      INFO[0000] Successfully verified version of etcd endpoint  config=\/var\/lib\/cilium\/etcd-config.yml endpoints=\"[https:\/\/192.168.60.11:2379]\" etcdEndpoint=\"https:\/\/192.168.60.11:2379\" subsys=kvstore version=3.3.13\n      INFO[0000] CRD (CustomResourceDefinition) is installed and up-to-date  name=CiliumNetworkPolicy\/v2 subsys=k8s\n      INFO[0000] Updating CRD (CustomResourceDefinition)...    
name=v2.CiliumEndpoint subsys=k8s\n      INFO[0001] CRD (CustomResourceDefinition) is installed and up-to-date  name=v2.CiliumEndpoint subsys=k8s\n      INFO[0001] Updating CRD (CustomResourceDefinition)...    name=v2.CiliumNode subsys=k8s\n      INFO[0002] CRD (CustomResourceDefinition) is installed and up-to-date  name=v2.CiliumNode subsys=k8s\n      INFO[0002] Updating CRD (CustomResourceDefinition)...    name=v2.CiliumIdentity subsys=k8s\n      INFO[0003] CRD (CustomResourceDefinition) is installed and up-to-date  name=v2.CiliumIdentity subsys=k8s\n      INFO[0003] Listing identities in kvstore\n      INFO[0003] Migrating identities to CRD\n      INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination  labels=\"map[]\" subsys=crd-allocator\n      INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination  labels=\"map[]\" subsys=crd-allocator\n      INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination  labels=\"map[]\" subsys=crd-allocator\n      INFO[0003] Migrated identity                             identity=34815 identityLabels=\"k8s:class=tiefighter;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;\"\n      WARN[0003] ID is allocated to a different key in CRD. 
A new ID will be allocated for the this key  identityLabels=\"k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;\" oldIdentity=11730\n      INFO[0003] Reusing existing global key                   key=\"k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;\" subsys=allocator\n      INFO[0003] New ID allocated for key in CRD               identity=17281 identityLabels=\"k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;\" oldIdentity=11730\n      INFO[0003] ID was already allocated to this key. It is already migrated  identity=17003 identityLabels=\"k8s:class=xwing;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=alliance;\"\n\n.. note::\n\n    It is also possible to use the ``--k8s-kubeconfig-path`` and ``--kvstore-opt``\n    ``cilium`` CLI options with the preflight command. The default is to derive the\n    configuration as cilium-agent does.\n\n    .. code-block:: shell-session\n\n        cilium preflight migrate-identity --k8s-kubeconfig-path \/var\/lib\/cilium\/cilium.kubeconfig --kvstore etcd --kvstore-opt etcd.config=\/var\/lib\/cilium\/etcd-config.yml\n\nOnce the migration is complete, confirm the endpoint identities match by listing the endpoints stored in CRDs and in etcd:\n\n.. code-block:: shell-session\n\n      $ kubectl get ciliumendpoints -A # new CRD-backed endpoints\n      $ kubectl exec -n kube-system cilium-1234 -- cilium-dbg endpoint list # existing etcd-backed endpoints\n\nClearing CRD identities\n~~~~~~~~~~~~~~~~~~~~~~~\n\nIf a migration has gone wrong, it is possible to start with a clean slate. 
Ensure that no Cilium instances are running with ``--identity-allocation-mode=crd`` and execute:\n\n.. code-block:: shell-session\n\n      $ kubectl delete ciliumid --all\n\n.. _double_write_migration:\n\nMigration with the \"Double Write\" identity allocation mode\n##########################################################\n\n.. include:: ..\/beta.rst\n\nThe \"Double Write\" Identity Allocation Mode allows Cilium to allocate identities as KVStore values *and* as CRDs at the\nsame time. This mode also has two versions: one where the source of truth comes from the kvstore (``--identity-allocation-mode=doublewrite-readkvstore``),\nand one where the source of truth comes from CRDs (``--identity-allocation-mode=doublewrite-readcrd``).\n\n.. note::\n\n    \"Double Write\" mode is not compatible with Consul as the KVStore\n\nThe high-level migration plan looks as follows:\n\n#. Starting state: Cilium is running in KVStore mode.\n#. Switch Cilium to \u201cDouble Write\u201d mode with all reads happening from the KVStore. This is almost the same as the\n   pure KVStore mode with the only difference being that all identities are duplicated as CRDs but are not used.\n#. Switch Cilium to \u201cDouble Write\u201d mode with all reads happening from CRDs. This is equivalent to Cilium running in\n   pure CRD mode but identities will still be updated in the KVStore to allow for the possibility of a fast rollback.\n#. Switch Cilium to CRD mode. The KVStore will no longer be used and will be ready for decommission.\n\nThis will allow you to perform a gradual and seamless migration with the possibility of a fast rollback at steps two or three.\n\nFurthermore, when the \"Double Write\" mode is enabled, the Operator will emit additional metrics to help monitor the\nmigration progress. These metrics can be used for alerting about identity inconsistencies between the KVStore and CRDs.\n\nNote that you can also use this to migrate from CRD to KVStore mode. 
All operations simply need to be repeated in reverse order.\n\nRollout Instructions\n~~~~~~~~~~~~~~~~~~~~\n\n#. Re-deploy first the Operator and then the Agents with ``--identity-allocation-mode=doublewrite-readkvstore``.\n#. Monitor the Operator metrics and logs to ensure that all identities have converged between the KVStore and CRDs. The relevant metrics emitted by the Operator are:\n\n   * ``cilium_operator_identity_crd_total_count`` and ``cilium_operator_identity_kvstore_total_count`` report the total number of identities in CRDs and KVStore respectively.\n   * ``cilium_operator_identity_crd_only_count`` and ``cilium_operator_identity_kvstore_only_count`` report the number of\n     identities that are only in CRDs or only in the KVStore respectively, to help detect inconsistencies.\n\n   In case further investigation is needed, the Operator logs will contain detailed information about the discrepancies between KVStore and CRD identities.\n   Note that Garbage Collection for KVStore identities and CRD identities happens at slightly different times, so it is possible to see discrepancies in the metrics\n   for certain periods of time, depending on ``--identity-gc-interval`` and ``--identity-heartbeat-timeout`` settings.\n#. Once all identities have converged, re-deploy the Operator and the Agents with ``--identity-allocation-mode=doublewrite-readcrd``.\n   This will cause Cilium to read identities only from CRDs, but continue to write them to the KVStore.\n#. Once you are ready to decommission the KVStore, re-deploy first the Agents and then the Operator with ``--identity-allocation-mode=crd``.\n   This will make Cilium read and write identities only to CRDs.\n#. You can now decommission the KVStore.\n\n.. _cnp_validation:\n\nCNP Validation\n--------------\n\nRunning the CNP Validator will make sure the policies deployed in the cluster\nare valid. 
It is important to run this validation before an upgrade to\nmake sure Cilium behaves correctly after the upgrade. Skipping this\nvalidation might prevent Cilium from updating its ``NodeStatus`` in those invalid\nNetwork Policies and, in the worst case, might give a false\nsense of security to the user if a policy is badly formatted and Cilium is not\nenforcing that policy due to a bad validation schema. This CNP Validator is\nautomatically executed as part of the pre-flight check :ref:`pre_flight`.\n\nStart by deploying the ``cilium-pre-flight-check`` and check whether the\n``Deployment`` shows READY 1\/1; if it does not, check the pod logs.\n\n.. code-block:: shell-session\n\n      $ kubectl get deployment -n kube-system cilium-pre-flight-check -w\n      NAME                      READY   UP-TO-DATE   AVAILABLE   AGE\n      cilium-pre-flight-check   0\/1     1            0           12s\n\n      $ kubectl logs -n kube-system deployment\/cilium-pre-flight-check -c cnp-validator --previous\n      level=info msg=\"Setting up kubernetes client\"\n      level=info msg=\"Establishing connection to apiserver\" host=\"https:\/\/172.20.0.1:443\" subsys=k8s\n      level=info msg=\"Connected to apiserver\" subsys=k8s\n      level=info msg=\"Validating CiliumNetworkPolicy 'default\/cidr-rule': OK!\n      level=error msg=\"Validating CiliumNetworkPolicy 'default\/cnp-update': unexpected validation error: spec.labels: Invalid value: \\\"string\\\": spec.labels in body must be of type object: \\\"string\\\"\"\n      level=error msg=\"Found invalid CiliumNetworkPolicy\"\n\nIn this example, we can see that the ``CiliumNetworkPolicy`` named ``cnp-update`` in the\n``default`` namespace is not valid for the Cilium version we\nare trying to upgrade to. To fix this policy we need to edit it; we can\ndo this by saving the policy locally and modifying it. 
In this example,\n``.spec.labels`` is set to an array of strings, which is not valid according to\nthe official schema.\n\n.. code-block:: shell-session\n\n      $ kubectl get cnp -n default cnp-update -o yaml > cnp-bad.yaml\n      $ cat cnp-bad.yaml\n        apiVersion: cilium.io\/v2\n        kind: CiliumNetworkPolicy\n        [...]\n        spec:\n          endpointSelector:\n            matchLabels:\n              id: app1\n          ingress:\n          - fromEndpoints:\n            - matchLabels:\n                id: app2\n            toPorts:\n            - ports:\n              - port: \"80\"\n                protocol: TCP\n          labels:\n          - custom=true\n        [...]\n\nTo fix this policy, we need to set ``.spec.labels`` to the correct format and\napply these changes in Kubernetes.\n\n.. code-block:: shell-session\n\n      $ cat cnp-bad.yaml\n        apiVersion: cilium.io\/v2\n        kind: CiliumNetworkPolicy\n        [...]\n        spec:\n          endpointSelector:\n            matchLabels:\n              id: app1\n          ingress:\n          - fromEndpoints:\n            - matchLabels:\n                id: app2\n            toPorts:\n            - ports:\n              - port: \"80\"\n                protocol: TCP\n          labels:\n          - key: \"custom\"\n            value: \"true\"\n        [...]\n      $\n      $ kubectl apply -f .\/cnp-bad.yaml\n\nAfter applying the fixed policy, we can delete the pod that was validating the\npolicies so that Kubernetes immediately creates a new pod to verify whether the fixed\npolicies are now valid.\n\n.. 
code-block:: shell-session\n\n      $ kubectl delete pod -n kube-system -l k8s-app=cilium-pre-flight-check-deployment\n      pod \"cilium-pre-flight-check-86dfb69668-ngbql\" deleted\n      $ kubectl get deployment -n kube-system cilium-pre-flight-check\n      NAME                      READY   UP-TO-DATE   AVAILABLE   AGE\n      cilium-pre-flight-check   1\/1     1            1           55m\n      $ kubectl logs -n kube-system deployment\/cilium-pre-flight-check -c cnp-validator\n      level=info msg=\"Setting up kubernetes client\"\n      level=info msg=\"Establishing connection to apiserver\" host=\"https:\/\/172.20.0.1:443\" subsys=k8s\n      level=info msg=\"Connected to apiserver\" subsys=k8s\n      level=info msg=\"Validating CiliumNetworkPolicy 'default\/cidr-rule': OK!\n      level=info msg=\"Validating CiliumNetworkPolicy 'default\/cnp-update': OK!\n      level=info msg=\"All CCNPs and CNPs valid!\"\n\nOnce they are valid you can continue with the upgrade process. :ref:`cleanup_preflight_check`","site":"cilium","answers_cleaned":"   only   not  epub or latex or html       WARNING  You are looking at unreleased Cilium documentation      Please use the official rendered version released here      https   docs cilium io      admin upgrade                 Upgrade Guide                    upgrade general   This upgrade guide is intended for Cilium running on Kubernetes  If you have questions  feel free to ping us on  Cilium Slack        include   upgrade warning rst      pre flight   Running pre flight check  Required                                       When rolling out an upgrade with Kubernetes  Kubernetes will first terminate the pod followed by pulling the new image version and then finally spin up the new image  In order to reduce the downtime of the agent and to prevent   ErrImagePull   errors during upgrade  the pre flight check pre pulls the new image version  If you are running in  ref  kubeproxy free  mode you must also pass on the Kubernetes API 
.. tabs::

    .. group-tab:: kubectl

        .. parsed-literal::

            helm template |CHART_RELEASE| \\
              --namespace=kube-system \\
              --set preflight.enabled=true \\
              --set agent=false \\
              --set operator.enabled=false \\
              > cilium-preflight.yaml
            kubectl create -f cilium-preflight.yaml

    .. group-tab:: Helm

        .. parsed-literal::

            helm install cilium-preflight |CHART_RELEASE| \\
              --namespace=kube-system \\
              --set preflight.enabled=true \\
              --set agent=false \\
              --set operator.enabled=false

    .. group-tab:: kubectl (kubeproxy-free)

        .. parsed-literal::

            helm template |CHART_RELEASE| \\
              --namespace=kube-system \\
              --set preflight.enabled=true \\
              --set agent=false \\
              --set operator.enabled=false \\
              --set k8sServiceHost=API_SERVER_IP \\
              --set k8sServicePort=API_SERVER_PORT \\
              > cilium-preflight.yaml
            kubectl create -f cilium-preflight.yaml

    .. group-tab:: Helm (kubeproxy-free)

        .. parsed-literal::

            helm install cilium-preflight |CHART_RELEASE| \\
              --namespace=kube-system \\
              --set preflight.enabled=true \\
              --set agent=false \\
              --set operator.enabled=false \\
              --set k8sServiceHost=API_SERVER_IP \\
              --set k8sServicePort=API_SERVER_PORT

After applying the ``cilium-preflight.yaml``, ensure that the number of READY
pods is the same as the number of Cilium pods running.

.. code-block:: shell-session

    $ kubectl get daemonset -n kube-system | sed -n '1p;/cilium/p'
    NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    cilium                    2         2         2       2            2           <none>          1h20m
    cilium-pre-flight-check   2         2         2       2            2           <none>          7m15s
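The READY comparison can also be scripted. A minimal sketch, assuming the
column layout of the listing above; the ``sample`` variable stands in for live
``kubectl get daemonset`` output:

```shell
# Compare READY counts of the cilium and pre-flight DaemonSets.
# "sample" stands in for the output of:
#   kubectl get daemonset -n kube-system | sed -n '1p;/cilium/p'
sample='NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
cilium                    2         2         2       2            2           <none>          1h20m
cilium-pre-flight-check   2         2         2       2            2           <none>          7m15s'

# READY is the 4th column in this listing.
cilium_ready=$(printf '%s\n' "$sample" | awk '$1 == "cilium" {print $4}')
preflight_ready=$(printf '%s\n' "$sample" | awk '$1 == "cilium-pre-flight-check" {print $4}')

if [ "$cilium_ready" = "$preflight_ready" ]; then
    echo "pre-flight ready on all $cilium_ready nodes"
else
    echo "pre-flight not ready yet ($preflight_ready/$cilium_ready)"
fi
```

Feeding it real cluster output instead of the sample is a matter of replacing
the here-string with the actual ``kubectl`` invocation.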
Once the number of READY pods is equal, make sure the Cilium pre-flight
deployment is also marked as READY 1/1. If it shows READY 0/1, consult the
:ref:`cnp_validation` section and resolve issues with the deployment before
continuing with the upgrade.

.. code-block:: shell-session

    $ kubectl get deployment -n kube-system cilium-pre-flight-check -w
    NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
    cilium-pre-flight-check   1/1     1            0           12s

.. _cleanup_preflight_check:

Clean up pre-flight check
=========================

Once the number of READY for the preflight :term:`DaemonSet` is the same as
the number of cilium pods running and the preflight ``Deployment`` is marked
as READY ``1/1``, you can delete the cilium-preflight and proceed with the
upgrade.

.. tabs::

    .. group-tab:: kubectl

        .. code-block:: shell-session

            kubectl delete -f cilium-preflight.yaml

    .. group-tab:: Helm

        .. code-block:: shell-session

            helm delete cilium-preflight --namespace=kube-system

.. _upgrade_minor:

Upgrading Cilium
================

During normal cluster operations, all Cilium components should run the same
version. Upgrading just one of them (e.g., upgrading the agent without
upgrading the operator) could result in unexpected cluster behavior. The
following steps will describe how to upgrade all of the components from one
stable release to a later stable release.

.. include:: upgrade-warning.rst

Step 1: Upgrade to latest patch version
---------------------------------------

When upgrading from one minor release to another minor release, for example
1.x to 1.y, it is recommended to upgrade to the `latest patch release
<https://github.com/cilium/cilium#stable-releases>`_ for a Cilium release
series first. Upgrading to the latest patch release ensures the most seamless
experience if a rollback is required following the minor release upgrade. The
upgrade guides for previous versions can be found for each minor version at
the bottom left corner.
Step 2: Use Helm to Upgrade your Cilium deployment
--------------------------------------------------

:term:`Helm` can be used to either upgrade Cilium directly or to generate a
new set of YAML files that can be used to upgrade an existing deployment via
``kubectl``. By default, Helm will generate the new templates using the
default values files packaged with each new release. You still need to ensure
that you are specifying the equivalent options as used for the initial
deployment, either by specifying them at the command line or by committing the
values to a YAML file.

.. include:: ../installation/k8s-install-download-release.rst

To minimize datapath disruption during the upgrade, the
``upgradeCompatibility`` option should be set to the initial Cilium version
which was installed in this cluster.

.. tabs::

    .. group-tab:: kubectl

        Generate the required YAML file and deploy it:

        .. parsed-literal::

            helm template |CHART_RELEASE| \\
              --set upgradeCompatibility=1.X \\
              --namespace=kube-system \\
              > cilium.yaml
            kubectl apply -f cilium.yaml

    .. group-tab:: Helm

        Deploy Cilium release via Helm:

        .. parsed-literal::

            helm upgrade cilium |CHART_RELEASE| \\
              --namespace=kube-system \\
              --set upgradeCompatibility=1.X

.. note::

   Instead of using ``--set``, you can also save the values relative to your
   deployment in a YAML file and use it to regenerate the YAML for the latest
   Cilium version. Running any of the previous commands will overwrite the
   existing cluster's :term:`ConfigMap`, so it is critical to preserve any
   existing options, either by setting them at the command line or by storing
   them in a YAML file, similar to:

   .. code-block:: yaml

      agent: true
      upgradeCompatibility: "1.8"
      ipam:
        mode: "kubernetes"
      k8sServiceHost: "API_SERVER_IP"
      k8sServicePort: "API_SERVER_PORT"
      kubeProxyReplacement: true

   You can then upgrade using this values file by running:

   .. parsed-literal::

      helm upgrade cilium |CHART_RELEASE| \\
        --namespace=kube-system \\
        -f my-values.yaml

When upgrading from one minor release to another minor release using
``helm upgrade``, do **not** use Helm's ``--reuse-values`` flag. The
``--reuse-values`` flag ignores any newly introduced values present in the new
release and thus may cause the Helm template to render incorrectly. Instead,
if you want to reuse the values from your existing installation, save the old
values in a values file, check the file for any renamed or deprecated values,
and then pass it to the ``helm upgrade`` command as described above. You can
retrieve and save the values from an existing installation with the following
command:

.. code-block:: shell-session

   helm get values cilium --namespace=kube-system -o yaml > old-values.yaml

The ``--reuse-values`` flag may only be safely used if the Cilium chart
version remains unchanged, for example when ``helm upgrade`` is used to apply
configuration changes without upgrading Cilium.

Step 3: Rolling Back
--------------------

Occasionally, it may be necessary to undo the rollout because a step was
missed or something went wrong during upgrade. To undo the rollout run:

.. tabs::

    .. group-tab:: kubectl

        .. code-block:: shell-session

            kubectl rollout undo daemonset/cilium -n kube-system

    .. group-tab:: Helm

        .. code-block:: shell-session

            helm history cilium --namespace=kube-system
            helm rollback cilium [REVISION] --namespace=kube-system

This will revert the latest changes to the Cilium ``DaemonSet`` and return
Cilium to the state it was in prior to the upgrade.

.. note::

   When rolling back after new features of the new minor version have already
   been consumed, consult the :ref:`version_notes` to check and prepare for
   incompatible feature use before downgrading/rolling back. This step is only
   required after new functionality introduced in the new minor version has
   already been explicitly used by creating new resources or by opting into
   new features via the :term:`ConfigMap`.
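When reusing values from an existing installation (as described in Step 2),
the saved values file can be scanned for options removed in the target release
before upgrading. A minimal sketch; both the file content and the key list are
illustrative, not exhaustive (``wireguard.userspaceFallback`` is one option
removed in 1.17, per the version-specific notes):

```shell
# Scan a saved Helm values file for options known to be removed in the
# target release. The file content and key list below are illustrative.
cat > old-values.yaml <<'EOF'
agent: true
wireguard:
  userspaceFallback: true
EOF

removed_keys='userspaceFallback'
for key in $removed_keys; do
    if grep -qn "$key" old-values.yaml; then
        echo "review: removed option '$key' still present in old-values.yaml"
    fi
done
```

In practice, the file would come from
``helm get values cilium --namespace=kube-system -o yaml``.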
.. _version_notes:
.. _upgrade_version_specifics:

Version Specific Notes
======================

This section details the upgrade notes specific to |CURRENT_RELEASE|. Read
them carefully and take the suggested actions before upgrading Cilium to
|CURRENT_RELEASE|. For upgrades to earlier releases, see the upgrade notes of
the previous version.

The only tested upgrade and rollback path is between consecutive minor
releases. Always perform upgrades and rollbacks between one minor release at a
time. Additionally, always update to the latest patch release of your current
version before attempting an upgrade.

Tested upgrades are expected to have minimal to no impact on new and existing
connections matched by either no Network Policies, or L3/L4 Network Policies
only. Any traffic flowing via user space proxies (for example, because an L7
policy is in place, or using Ingress/Gateway API) will be disrupted during
upgrade. Endpoints communicating via the proxy must reconnect to re-establish
connections.

.. _current_release_required_changes:
.. _1.17_upgrade_notes:

1.17 Upgrade Notes
------------------

* Operating Cilium in ``--datapath-mode=lb-only`` for plain Docker mode now
  requires adding an additional ``--enable-k8s=false`` to the command line,
  otherwise it is assumed that Kubernetes is present.
* The Kubernetes clients used by Cilium Agent and Cilium Operator now have
  separately configurable rate limits. The default rate limit for Cilium
  Operator K8s clients has been increased to 100 QPS / 200 Burst. To configure
  the rate limit for Cilium Operator, use the ``--operator-k8s-client-qps``
  and ``--operator-k8s-client-burst`` flags or the corresponding Helm values.
* Support for Consul (deprecated since v1.12) has been removed.
* Cilium now supports services protocol differentiation, which allows the
  agent to distinguish two services on the same port with different protocols
  (e.g. TCP and UDP). This feature, enabled by default, can be controlled with
  the ``--bpf-lb-proto-diff`` flag. After the upgrade, existing services
  without a protocol set will be preserved as such, to avoid any connection
  disruptions, and will need to be deleted and recreated in order for their
  protocol to be taken into account by the agent. In case of downgrades to a
  version that doesn't support services protocol differentiation, existing
  services with the protocol set will be deleted and recreated (without the
  protocol) by the agent, causing connection disruptions for such services.
* MTU auto-detection is now continuous during agent lifetime; changing device
  MTU no longer requires restarting the agent to pick up the new MTU.
* MTU auto-detection will now use the lowest MTU of all external interfaces.
  Before, only the primary interface was considered. One exception to this is
  in ENI mode, where the secondary interfaces are not considered for MTU
  auto-detection. MTU can still be configured manually via the ``MTU`` Helm
  option, the ``--mtu`` agent flag, or the ``mtu`` option in the CNI
  configuration.
* Support for L7 protocol visibility using Pod annotations
  (``policy.cilium.io/proxy-visibility``), deprecated since v1.15, has been
  removed.
* The Cilium cluster name validation cannot be bypassed anymore, both for the
  local and remote clusters. The cluster name is strictly enforced to consist
  of at most 32 lower case alphanumeric characters and ``-``, and to start and
  end with an alphanumeric character.
* Cilium could previously be run in a configuration where the Etcd instances
  that distribute Cilium state between nodes would be managed in pod network
  by Cilium itself. This support, which had been previously deprecated as
  complicated and error-prone, has now been removed. Refer to
  :ref:`k8s_install_etcd` for alternatives for running Cilium with Etcd.
* For IPsec, support for a single key has been removed. Per-tunnel keys will
  now be used regardless of the presence of the ``+`` sign in the secret.
* The option to run a synchronous probe using ``cilium-health status --probe``
  is no longer supported, and it is now a hidden option that returns the
  results of the most recent cached probe. It will be removed in a future
  release.
* The Cilium status API now reports the KVStore subsystem with ``Disabled``
  state when disabled, instead of ``OK`` state and ``Disabled`` message.
* Support for ``metallb-bgp`` (deprecated since 1.14) has been removed.

Removed Options
~~~~~~~~~~~~~~~

* The previously deprecated ``clustermesh-ip-identities-sync-timeout`` flag
  has been removed in favor of ``clustermesh-sync-timeout``.
* The previously deprecated built-in WireGuard userspace-mode fallback (Helm
  ``wireguard.userspaceFallback``) has been removed. Users of WireGuard
  transparent encryption are required to use a Linux kernel with WireGuard
  support.
* The previously deprecated ``metallb-bgp`` flags ``bgp-config-path``,
  ``bgp-announce-lb-ip``, and ``bgp-announce-pod-cidr`` have been removed.
  Users are now required to use the Cilium BGP control plane for BGP
  advertisements.

Deprecated Options
~~~~~~~~~~~~~~~~~~

* The high-scale mode for ipcache has been deprecated and will be removed in
  v1.18.
* The hubble-relay flag ``--dial-timeout`` has been deprecated (now a no-op)
  and will be removed in Cilium 1.18.

Helm Options
~~~~~~~~~~~~

* The Helm options ``hubble.tls.server.cert``, ``hubble.tls.server.key``,
  ``hubble.relay.tls.client.cert``, ``hubble.relay.tls.client.key``,
  ``hubble.relay.tls.server.cert``, ``hubble.relay.tls.server.key``,
  ``hubble.ui.tls.client.cert``, and ``hubble.ui.tls.client.key`` have been
  deprecated in favor of the associated ``existingSecret`` options and will be
  removed in a future release.
* The default value of ``hubble.tls.auto.certValidityDuration`` has been
  lowered from 1095 days to 365 days, because recent versions of macOS will
  fail to validate certificates with expirations longer than 825 days.
* The Helm option ``hubble.relay.dialTimeout`` has been deprecated (now a
  no-op) and will be removed in Cilium 1.18.
* The ``metallb-bgp`` integration Helm options ``bgp.enabled``,
  ``bgp.announce.podCIDR``, and ``bgp.announce.loadbalancerIP`` have been
  removed. Users are now required to use the Cilium BGP control plane options
  available under ``bgpControlPlane`` for BGP announcements.
* The default value of ``dnsProxy.endpointMaxIpPerHostname`` and its
  corresponding agent option has been increased from 50 to 1000, to reflect
  improved scaling of toFQDNs policies and to better handle domains which
  return a large number of IPs with short TTLs.

Agent Options
~~~~~~~~~~~~~

* The ``CONNTRACK_LOCAL`` option has been deprecated and will be removed in a
  future release.

Bugtool Options
~~~~~~~~~~~~~~~

* The flag ``--k8s-mode`` and the related flags
  ``--cilium-agent-container-name``, ``--k8s-namespace``, and ``--k8s-label``
  have been deprecated and will be removed in Cilium 1.18. Cilium CLI should
  be used to gather a sysdump from a K8s cluster.

Added Metrics
~~~~~~~~~~~~~

* ``cilium_node_health_connectivity_status``
* ``cilium_node_health_connectivity_latency_seconds``
* ``cilium_operator_unmanaged_pods``
* ``cilium_policy_selector_match_count_max``
* ``cilium_identity_cache_timer_duration``
* ``cilium_identity_cache_timer_trigger_latency``
* ``cilium_identity_cache_timer_trigger_folds``

Removed Metrics
~~~~~~~~~~~~~~~

* ``cilium_cidrgroup_translation_time_stats_seconds`` has been removed, as
  the measured code path no longer exists.
* ``cilium_triggers_policy_update_total`` has been removed.
* ``cilium_triggers_policy_update_folds`` has been removed.
* ``cilium_triggers_policy_update_call_duration`` has been removed.

Deprecated Metrics
~~~~~~~~~~~~~~~~~~

* ``cilium_node_connectivity_status`` is now deprecated. Please use
  ``cilium_node_health_connectivity_status`` instead.
* ``cilium_node_connectivity_latency_seconds`` is now deprecated. Please use
  ``cilium_node_health_connectivity_latency_seconds`` instead.

Hubble CLI
~~~~~~~~~~

* The ``--cluster`` behavior changed to show flows emitted from nodes outside
  of the provided cluster name (either coming from or going to the target
  cluster). This change brings consistency between the ``--cluster`` and
  ``--namespace`` flags and removes the incompatibility between the
  ``--cluster`` and ``--node-name`` flags. The previous behavior of
  ``--cluster foo`` can be reproduced with ``--node-name foo/`` (shows all
  flows emitted from a node in cluster ``foo``).

Advanced
========

Upgrade Impact
--------------

Upgrades are designed to have minimal impact on your running deployment.
Networking connectivity, policy enforcement and load balancing will remain
functional in general. The following is a list of operations that will not be
available during the upgrade:

* API-aware policy rules are enforced in user space proxies and are running
  as part of the Cilium pod. Upgrading Cilium causes the proxy to restart,
  which results in a connectivity outage and causes the connection to reset.
* Existing policy will remain effective, but implementation of new policy
  rules will be postponed to after the upgrade has been completed on a
  particular node.
* Monitoring components such as ``cilium-dbg monitor`` will experience a
  brief outage while the Cilium pod is restarting. Events are queued up and
  read after the upgrade. If the number of events exceeds the event buffer
  size, events will be lost.

.. _upgrade_configmap:

Rebasing a ConfigMap
--------------------

This section describes the procedure to rebase an existing :term:`ConfigMap`
to the template of another version.

Export the current ConfigMap
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: shell-session

    $ kubectl get configmap -n kube-system cilium-config -o yaml --export > cilium-cm-old.yaml
    $ cat ./cilium-cm-old.yaml
    apiVersion: v1
    data:
      clean-cilium-state: "false"
      debug: "true"
      disable-ipv4: "false"
      etcd-config: |-
        ---
        endpoints:
        - https://192.168.60.11:2379
        #
        # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line
        # and create a kubernetes secret by following the tutorial in
        # https://cilium.link/etcd-config
        trusted-ca-file: '/var/lib/etcd-secrets/etcd-client-ca.crt'
        #
        # In case you want client to server authentication, uncomment the following
        # lines and add the certificate and key in cilium-etcd-secrets below
        key-file: '/var/lib/etcd-secrets/etcd-client.key'
        cert-file: '/var/lib/etcd-secrets/etcd-client.crt'
    kind: ConfigMap
    metadata:
      creationTimestamp: null
      name: cilium-config
      selfLink: /api/v1/namespaces/kube-system/configmaps/cilium-config

In the :term:`ConfigMap` above, we can verify that Cilium is using ``debug``
set to ``true``, it has an etcd endpoint running with `TLS
<https://etcd.io/docs/latest/op-guide/security/>`_, and the etcd is set up to
have `client-to-server authentication
<https://etcd.io/docs/latest/op-guide/security/#example-2-client-to-server-authentication-with-https-client-certificates>`_.

Generate the latest ConfigMap
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: shell-session

    helm template cilium \
      --namespace=kube-system \
      --set agent=false \
      --set config.enabled=true \
      --set operator.enabled=false \
      > cilium-configmap.yaml

Add new options
~~~~~~~~~~~~~~~

Add the new options manually to your old :term:`ConfigMap`, and make the
necessary changes.
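Spotting which keys are new in the freshly generated template can be scripted.
A minimal sketch; the two key lists are stand-ins for keys extracted from
``cilium-configmap.yaml`` and ``cilium-cm-old.yaml``:

```shell
# List config keys present in the new template but absent from the old
# ConfigMap, so each new option can be reviewed by hand. The key lists
# below are stand-ins for keys extracted from the real YAML files.
cat > new-keys.txt <<'EOF'
debug
monitor-aggregation
clean-cilium-state
EOF
cat > old-keys.txt <<'EOF'
clean-cilium-state
debug
EOF

sort new-keys.txt > new-keys.sorted
sort old-keys.txt > old-keys.sorted
# comm -23 prints lines unique to the first (sorted) file.
comm -23 new-keys.sorted old-keys.sorted
# → monitor-aggregation
```

Each key this prints is a new option to look up in the version-specific notes
before deciding on a value.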
In this example, the ``debug`` option is meant to be kept at ``"true"``, the
``etcd-config`` is kept unchanged, and ``monitor-aggregation`` is a new
option, but after reading the :ref:`version_notes` the value was kept
unchanged from the default value.

After making the necessary changes, the old :term:`ConfigMap` was migrated
with the new options while keeping the configuration that we wanted:

.. code-block:: shell-session

    $ cat ./cilium-cm-old.yaml
    apiVersion: v1
    data:
      debug: "true"
      disable-ipv4: "false"
      # If you want to clean cilium state; change this value to true
      clean-cilium-state: "false"
      monitor-aggregation: "medium"
      etcd-config: |-
        ---
        endpoints:
        - https://192.168.60.11:2379
        #
        # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line
        # and create a kubernetes secret by following the tutorial in
        # https://cilium.link/etcd-config
        trusted-ca-file: '/var/lib/etcd-secrets/etcd-client-ca.crt'
        #
        # In case you want client to server authentication, uncomment the following
        # lines and add the certificate and key in cilium-etcd-secrets below
        key-file: '/var/lib/etcd-secrets/etcd-client.key'
        cert-file: '/var/lib/etcd-secrets/etcd-client.crt'
    kind: ConfigMap
    metadata:
      creationTimestamp: null
      name: cilium-config
      selfLink: /api/v1/namespaces/kube-system/configmaps/cilium-config

Apply new ConfigMap
~~~~~~~~~~~~~~~~~~~

After adding the options, manually save the file with your changes and install
the :term:`ConfigMap` in the ``kube-system`` namespace of your cluster.

.. code-block:: shell-session

    $ kubectl apply -n kube-system -f ./cilium-cm-old.yaml

As the :term:`ConfigMap` is successfully upgraded, we can start upgrading the
Cilium ``DaemonSet`` and ``RBAC``, which will pick up the latest configuration
from the :term:`ConfigMap`.

Migrating from kvstore-backed identities to Kubernetes CRD-backed identities
----------------------------------------------------------------------------
Beginning with Cilium 1.6, Kubernetes CRD-backed security identities can be
used for smaller clusters. Along with other changes in 1.6, this allows
kvstore-free operation if desired. It is possible to migrate identities from
an existing kvstore deployment to CRD-backed identities. This minimizes
disruptions to traffic as the update rolls out through the cluster.

Migration
~~~~~~~~~

When identities change, existing connections can be disrupted while Cilium
initializes and synchronizes with the shared identity store. The disruption
occurs when new numeric identities are used for existing pods on some
instances and others are used on others. When converting to CRD-backed
identities, it is possible to pre-allocate CRD identities so that the numeric
identities match those in the kvstore. This allows new and old Cilium
instances in the rollout to agree.

There are two ways to achieve this: you can either run a one-off
``cilium preflight migrate-identity`` script which will perform a
point-in-time copy of all identities from the kvstore to CRDs (added in
Cilium 1.6), or use the "Double Write" identity allocation mode which will
have Cilium manage identities in both the kvstore and CRD at the same time
for a seamless migration (added in Cilium 1.17).

Migration with the ``cilium preflight migrate-identity`` script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``cilium preflight migrate-identity`` script is a one-off tool that can be
used to copy identities from the kvstore into CRDs. It has a couple of
limitations:

* If an identity is created in the kvstore after the one-off migration has
  been completed, it will not be copied into a CRD. This means that you need
  to perform the migration on a cluster with no identity churn.
* There is no easy way to revert back to
  ``--identity-allocation-mode=kvstore`` if something goes wrong after Cilium
  has been migrated to ``--identity-allocation-mode=crd``.

If these limitations are not acceptable, it is recommended to use the
:ref:`Double Write <double_write_migration>` identity allocation mode instead.

The following steps show an example of performing the migration using the
``cilium preflight migrate-identity`` script. It is safe to re-run the command
if desired. It will identify already-allocated identities or ones that cannot
be migrated. Note that identity ``34815`` is migrated, ``17003`` is already
migrated, and ``11730`` has a conflict and a new ID is allocated for those
labels.

The steps below assume a stable cluster with no new identities created during
the rollout. Once Cilium using CRD-backed identities is running, it may begin
allocating identities in a way that conflicts with older ones in the kvstore.

The cilium preflight manifest requires etcd support and can be built with:

.. code-block:: shell-session

    helm template cilium \
      --namespace=kube-system \
      --set preflight.enabled=true \
      --set agent=false \
      --set config.enabled=false \
      --set operator.enabled=false \
      --set etcd.enabled=true \
      --set etcd.ssl=true \
      > cilium-preflight.yaml
    kubectl create -f cilium-preflight.yaml

Example migration
^^^^^^^^^^^^^^^^^

.. code-block:: shell-session

      $ kubectl exec -n kube-system cilium-pre-flight-check-1234 -- cilium-dbg preflight migrate-identity
      INFO[0000] Setting up kvstore client
      INFO[0000] Connecting to etcd server...                  config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" subsys=kvstore
      INFO[0000] Setting up kubernetes client
      INFO[0000] Establishing connection to apiserver          host="https://192.168.60.11:6443" subsys=k8s
      INFO[0000] Connected to apiserver                        subsys=k8s
      INFO[0000] Got lease ID 29c66c67db8870c8                 subsys=kvstore
      INFO[0000] Got lock lease ID 29c66c67db8870ca            subsys=kvstore
      INFO[0000] Successfully verified version of etcd endpoint  config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" etcdEndpoint="https://192.168.60.11:2379" subsys=kvstore version=3.3.13
      INFO[0000] CRD (CustomResourceDefinition) is installed and up-to-date  name=CiliumNetworkPolicy/v2 subsys=k8s
      INFO[0000] Updating CRD (CustomResourceDefinition)...    name=v2.CiliumEndpoint subsys=k8s
      INFO[0001] CRD (CustomResourceDefinition) is installed and up-to-date  name=v2.CiliumEndpoint subsys=k8s
      INFO[0001] Updating CRD (CustomResourceDefinition)...    name=v2.CiliumNode subsys=k8s
      INFO[0002] CRD (CustomResourceDefinition) is installed and up-to-date  name=v2.CiliumNode subsys=k8s
      INFO[0002] Updating CRD (CustomResourceDefinition)...    name=v2.CiliumIdentity subsys=k8s
      INFO[0003] CRD (CustomResourceDefinition) is installed and up-to-date  name=v2.CiliumIdentity subsys=k8s
      INFO[0003] Listing identities in kvstore
      INFO[0003] Migrating identities to CRD
      INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination  labels="map[]" subsys=crd-allocator
      INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination  labels="map[]" subsys=crd-allocator
      INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination  labels="map[]" subsys=crd-allocator
      INFO[0003] Migrated identity                             identity=34815 identityLabels="k8s:class=tiefighter;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;"
      WARN[0003] ID is allocated to a different key in CRD. A new ID will be allocated for the this key  identityLabels="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" oldIdentity=11730
      INFO[0003] Reusing existing global key                   key="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" subsys=allocator
      INFO[0003] New ID allocated for key in CRD               identity=17281 identityLabels="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" oldIdentity=11730
      INFO[0003] ID was already allocated to this key. It is already migrated  identity=17003 identityLabels="k8s:class=xwing;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=alliance;"

.. note::

    It is also possible to use the ``--k8s-kubeconfig-path`` and
    ``--kvstore-opt`` ``cilium`` CLI options with the preflight command. The
    default is to derive the configuration as cilium-agent does.

    .. code-block:: shell-session

        cilium preflight migrate-identity --k8s-kubeconfig-path /var/lib/cilium/cilium.kubeconfig --kvstore etcd --kvstore-opt etcd.config=/var/lib/cilium/etcd-config.yml

Once the migration is complete, confirm the endpoint identities match by
listing the endpoints stored in CRDs and in etcd.

.. code-block:: shell-session

      $ kubectl get ciliumendpoints -A  # new CRD-backed endpoints
      $ kubectl exec -n kube-system cilium-1234 -- cilium-dbg endpoint list  # existing etcd-backed endpoints

Clearing CRD identities
~~~~~~~~~~~~~~~~~~~~~~~

If a migration has gone wrong, it is possible to start with a clean slate.
Ensure that no Cilium instances are running with
``--identity-allocation-mode=crd`` and execute:

.. code-block:: shell-session

      $ kubectl delete ciliumid --all
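The cross-check between the CRD-backed and etcd-backed listings described
above can be scripted. A minimal sketch; the two ID lists are stand-ins for
identity columns extracted from the outputs of ``kubectl get ciliumendpoints
-A`` and ``cilium-dbg endpoint list``:

```shell
# Compare identity IDs from the CRD-backed listing against the etcd-backed
# one. The two files are stand-ins for IDs extracted from the real commands.
cat > crd-ids.txt <<'EOF'
17003
17281
34815
EOF
cat > kvstore-ids.txt <<'EOF'
17003
17281
34815
EOF

if diff crd-ids.txt kvstore-ids.txt > /dev/null; then
    echo "identity sets match"
else
    echo "identity mismatch; investigate before switching allocation mode"
fi
```

Sorting each extracted column (``sort -n``) before diffing keeps the
comparison order-independent.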
.. _double_write_migration:

Migration with the "Double Write" identity allocation mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. include:: ../beta.rst

The "Double Write" identity allocation mode allows Cilium to allocate
identities as KVStore values *and* as CRDs at the same time. This mode has
two versions: one where the source of truth comes from the kvstore
(``--identity-allocation-mode=doublewrite-readkvstore``) and one where the
source of truth comes from CRDs
(``--identity-allocation-mode=doublewrite-readcrd``).

.. note::

    "Double Write" mode is not compatible with Consul as the KVStore.

The high-level migration plan looks as follows:

1. Starting state: Cilium is running in KVStore mode.
2. Switch Cilium to "Double Write" mode with all reads happening from the
   KVStore. This is almost the same as the pure KVStore mode, with the only
   difference being that all identities are duplicated as CRDs but are not
   used.
3. Switch Cilium to "Double Write" mode with all reads happening from CRDs.
   This is equivalent to Cilium running in pure CRD mode, but identities will
   still be updated in the KVStore to allow for the possibility of a fast
   rollback.
4. Switch Cilium to CRD mode. The KVStore will no longer be used and will be
   ready for decommission.

This will allow you to perform a gradual and seamless migration with the
possibility of a fast rollback at steps two or three. Furthermore, when the
"Double Write" mode is enabled, the Operator will emit additional metrics to
help monitor the migration progress. These metrics can be used for alerting
about identity inconsistencies between the KVStore and CRDs.

Note that you can also use this to migrate from CRD to KVStore mode: all
operations simply need to be repeated in reverse order.

Rollout Instructions
^^^^^^^^^^^^^^^^^^^^

1. Re-deploy first the Operator and then the Agents with
   ``--identity-allocation-mode=doublewrite-readkvstore``.
2. Monitor the Operator metrics and logs to ensure that all identities have
   converged between the KVStore and CRDs. The relevant metrics emitted by
   the Operator are:

   * ``cilium_operator_identity_crd_total_count`` and
     ``cilium_operator_identity_kvstore_total_count`` report the total number
     of identities in CRDs and the KVStore respectively.
   * ``cilium_operator_identity_crd_only_count`` and
     ``cilium_operator_identity_kvstore_only_count`` report the number of
     identities that are only in CRDs or only in the KVStore respectively, to
     help detect inconsistencies.

   In case further investigation is needed, the Operator logs will contain
   detailed information about the discrepancies between KVStore and CRD
   identities. Note that Garbage Collection for KVStore identities and CRD
   identities happens at slightly different times, so it is possible to see
   discrepancies in the metrics for certain periods of time, depending on the
   ``--identity-gc-interval`` and ``--identity-heartbeat-timeout`` settings.
3. Once all identities have converged, re-deploy the Operator and the Agents
   with ``--identity-allocation-mode=doublewrite-readcrd``. This will cause
   Cilium to read identities only from CRDs, but continue to write them to
   the KVStore.
4. Once you are ready to decommission the KVStore, re-deploy first the Agents
   and then the Operator with ``--identity-allocation-mode=crd``. This will
   make Cilium read and write identities only to CRDs.
5. You can now decommission the KVStore.
that policy due a bad validation schema  This CNP Validator is automatically executed as part of the pre flight check  ref  pre flight    Start by deployment the   cilium pre flight check   and check if the   Deployment   shows READY 1 1  if it does not check the pod logs      code block   shell session          kubectl get deployment  n kube system cilium pre flight check  w       NAME                      READY   UP TO DATE   AVAILABLE   AGE       cilium pre flight check   0 1     1            0           12s          kubectl logs  n kube system deployment cilium pre flight check  c cnp validator   previous       level info msg  Setting up kubernetes client        level info msg  Establishing connection to apiserver  host  https   172 20 0 1 443  subsys k8s       level info msg  Connected to apiserver  subsys k8s       level info msg  Validating CiliumNetworkPolicy  default cidr rule   OK        level error msg  Validating CiliumNetworkPolicy  default cnp update   unexpected validation error  spec labels  Invalid value    string    spec labels in body must be of type object    string          level error msg  Found invalid CiliumNetworkPolicy   In this example  we can see the   CiliumNetworkPolicy   in the   default   namespace with the name   cnp update   is not valid for the Cilium version we are trying to upgrade  In order to fix this policy we need to edit it  we can do this by saving the policy locally and modify it  For this example it seems the    spec labels   has set an array of strings which is not correct as per the official schema      code block   shell session          kubectl get cnp  n default cnp update  o yaml   cnp bad yaml         cat cnp bad yaml         apiVersion  cilium io v2         kind  CiliumNetworkPolicy                       spec            endpointSelector              matchLabels                id  app1           ingress              fromEndpoints                matchLabels                  id  app2             toPorts              
  ports                  port   80                  protocol  TCP           labels              custom true                To fix this policy we need to set the    spec labels   with the right format and commit these changes into Kubernetes      code block   shell session          cat cnp bad yaml         apiVersion  cilium io v2         kind  CiliumNetworkPolicy                       spec            endpointSelector              matchLabels                id  app1           ingress              fromEndpoints                matchLabels                  id  app2             toPorts                ports                  port   80                  protocol  TCP           labels              key   custom              value   true                                kubectl apply  f   cnp bad yaml  After applying the fixed policy we can delete the pod that was validating the policies so that Kubernetes creates a new pod immediately to verify if the fixed policies are now valid      code block   shell session          kubectl delete pod  n kube system  l k8s app cilium pre flight check deployment       pod  cilium pre flight check 86dfb69668 ngbql  deleted         kubectl get deployment  n kube system cilium pre flight check       NAME                      READY   UP TO DATE   AVAILABLE   AGE       cilium pre flight check   1 1     1            1           55m         kubectl logs  n kube system deployment cilium pre flight check  c cnp validator       level info msg  Setting up kubernetes client        level info msg  Establishing connection to apiserver  host  https   172 20 0 1 443  subsys k8s       level info msg  Connected to apiserver  subsys k8s       level info msg  Validating CiliumNetworkPolicy  default cidr rule   OK        level info msg  Validating CiliumNetworkPolicy  default cnp update   OK        level info msg  All CCNPs and CNPs valid    Once they are valid you can continue with the upgrade process   ref  cleanup preflight check "}
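During step one of the rollout, convergence can be checked mechanically once the Operator metrics are scraped. A minimal sketch, assuming the four ``cilium_operator_identity_*`` metric names mentioned above in Prometheus exposition form, and substituting a canned snapshot for a live scrape of the Operator's metrics endpoint:

```shell
# Sketch only: the snapshot below is sample data standing in for a scrape
# of the Cilium Operator's Prometheus metrics endpoint.
snapshot='cilium_operator_identity_crd_total_count 42
cilium_operator_identity_kvstore_total_count 42
cilium_operator_identity_crd_only_count 0
cilium_operator_identity_kvstore_only_count 0'

# Identities that exist only on one side indicate the stores have not converged.
crd_only=$(printf '%s\n' "$snapshot" | awk '/_crd_only_count/ {print $2}')
kvstore_only=$(printf '%s\n' "$snapshot" | awk '/_kvstore_only_count/ {print $2}')

if [ "$crd_only" = "0" ] && [ "$kvstore_only" = "0" ]; then
    echo "identities converged"
else
    echo "not converged: $crd_only CRD-only, $kvstore_only KVStore-only"
fi
```

In a live cluster the snapshot would come from the Operator's metrics endpoint (port and path depend on your deployment); the decision logic is the part the docs prescribe: both ``*_only_count`` metrics should reach and stay at zero before switching to ``doublewrite-readcrd``.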
{"questions":"cilium docs cilium io This document describes how to troubleshoot Cilium in different deployment adminguide modes It focuses on a full deployment of Cilium within a datacenter or public Troubleshooting","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _admin_guide:\n\n###############\nTroubleshooting\n###############\n\nThis document describes how to troubleshoot Cilium in different deployment\nmodes. It focuses on a full deployment of Cilium within a datacenter or public\ncloud. If you are just looking for a simple way to experiment, we highly\nrecommend trying out the :ref:`getting_started` guide instead.\n\nThis guide assumes that you have read the :ref:`network_root` and `security_root` which explain all\nthe components and concepts.\n\nWe use GitHub issues to maintain a list of `Cilium Frequently Asked Questions\n(FAQ)`_. You can also check there to see if your question(s) is already\naddressed.\n\nComponent & Cluster Health\n==========================\n\nKubernetes\n----------\n\nAn initial overview of Cilium can be retrieved by listing all pods to verify\nwhether all pods have the status ``Running``:\n\n.. 
code-block:: shell-session\n\n   $ kubectl -n kube-system get pods -l k8s-app=cilium\n   NAME           READY     STATUS    RESTARTS   AGE\n   cilium-2hq5z   1\/1       Running   0          4d\n   cilium-6kbtz   1\/1       Running   0          4d\n   cilium-klj4b   1\/1       Running   0          4d\n   cilium-zmjj9   1\/1       Running   0          4d\n\nIf Cilium encounters a problem that it cannot recover from, it will\nautomatically report the failure state via ``cilium-dbg status`` which is regularly\nqueried by the Kubernetes liveness probe to automatically restart Cilium pods.\nIf a Cilium pod is in state ``CrashLoopBackoff`` then this indicates a\npermanent failure scenario.\n\nDetailed Status\n~~~~~~~~~~~~~~~\n\nIf a particular Cilium pod is not in running state, the status and health of\nthe agent on that node can be retrieved by running ``cilium-dbg status`` in the\ncontext of that pod:\n\n.. code-block:: shell-session\n\n   $ kubectl -n kube-system exec cilium-2hq5z -- cilium-dbg status\n   KVStore:                Ok   etcd: 1\/1 connected: http:\/\/demo-etcd-lab--a.etcd.tgraf.test1.lab.corp.isovalent.link:2379 - 3.2.5 (Leader)\n   ContainerRuntime:       Ok   docker daemon: OK\n   Kubernetes:             Ok   OK\n   Kubernetes APIs:        [\"cilium\/v2::CiliumNetworkPolicy\", \"networking.k8s.io\/v1::NetworkPolicy\", \"core\/v1::Service\", \"core\/v1::Endpoint\", \"core\/v1::Node\", \"CustomResourceDefinition\"]\n   Cilium:                 Ok   OK\n   NodeMonitor:            Disabled\n   Cilium health daemon:   Ok\n   Controller Status:      14\/14 healthy\n   Proxy Status:           OK, ip 10.2.0.172, port-range 10000-20000\n   Cluster health:   4\/4 reachable   (2018-06-16T09:49:58Z)\n\nAlternatively, the ``k8s-cilium-exec.sh`` script can be used to run ``cilium-dbg\nstatus`` on all nodes. This will provide detailed status and health information\nof all nodes in the cluster:\n\n.. 
code-block:: shell-session\n\n   curl -sLO https:\/\/raw.githubusercontent.com\/cilium\/cilium\/main\/contrib\/k8s\/k8s-cilium-exec.sh\n   chmod +x .\/k8s-cilium-exec.sh\n\n... and run ``cilium-dbg status`` on all nodes:\n\n.. code-block:: shell-session\n\n   $ .\/k8s-cilium-exec.sh cilium-dbg status\n   KVStore:                Ok   Etcd: http:\/\/127.0.0.1:2379 - (Leader) 3.1.10\n   ContainerRuntime:       Ok\n   Kubernetes:             Ok   OK\n   Kubernetes APIs:        [\"networking.k8s.io\/v1beta1::Ingress\", \"core\/v1::Node\", \"CustomResourceDefinition\", \"cilium\/v2::CiliumNetworkPolicy\", \"networking.k8s.io\/v1::NetworkPolicy\", \"core\/v1::Service\", \"core\/v1::Endpoint\"]\n   Cilium:                 Ok   OK\n   NodeMonitor:            Listening for events on 2 CPUs with 64x4096 of shared memory\n   Cilium health daemon:   Ok\n   Controller Status:      7\/7 healthy\n   Proxy Status:           OK, ip 10.15.28.238, 0 redirects, port-range 10000-20000\n   Cluster health:   1\/1 reachable   (2018-02-27T00:24:34Z)\n\nDetailed information about the status of Cilium can be inspected with the\n``cilium-dbg status --verbose`` command. Verbose output includes detailed IPAM state\n(allocated addresses), Cilium controller status, and details of the Proxy\nstatus.\n\n.. _ts_agent_logs:\n\nLogs\n~~~~\n\nTo retrieve log files of a cilium pod, run (replace ``cilium-1234`` with a pod\nname returned by ``kubectl -n kube-system get pods -l k8s-app=cilium``)\n\n.. code-block:: shell-session\n\n   kubectl -n kube-system logs --timestamps cilium-1234\n\nIf the cilium pod was already restarted due to the liveness problem after\nencountering an issue, it can be useful to retrieve the logs of the pod before\nthe last restart:\n\n.. code-block:: shell-session\n\n   kubectl -n kube-system logs --timestamps -p cilium-1234\n\nGeneric\n-------\n\nWhen logged in a host running Cilium, the cilium CLI can be invoked directly,\ne.g.:\n\n.. 
code-block:: shell-session\n\n   $ cilium-dbg status\n   KVStore:                Ok   etcd: 1\/1 connected: https:\/\/192.168.60.11:2379 - 3.2.7 (Leader)\n   ContainerRuntime:       Ok\n   Kubernetes:             Ok   OK\n   Kubernetes APIs:        [\"core\/v1::Endpoint\", \"networking.k8s.io\/v1beta1::Ingress\", \"core\/v1::Node\", \"CustomResourceDefinition\", \"cilium\/v2::CiliumNetworkPolicy\", \"networking.k8s.io\/v1::NetworkPolicy\", \"core\/v1::Service\"]\n   Cilium:                 Ok   OK\n   NodeMonitor:            Listening for events on 2 CPUs with 64x4096 of shared memory\n   Cilium health daemon:   Ok\n   IPv4 address pool:      261\/65535 allocated\n   IPv6 address pool:      4\/4294967295 allocated\n   Controller Status:      20\/20 healthy\n   Proxy Status:           OK, ip 10.0.28.238, port-range 10000-20000\n   Hubble:                 Ok      Current\/Max Flows: 2542\/4096 (62.06%), Flows\/s: 164.21      Metrics: Disabled\n   Cluster health:         2\/2 reachable   (2018-04-11T15:41:01Z)\n\n.. _hubble_troubleshooting:\n\nObserving Flows with Hubble\n===========================\n\nHubble is a built-in observability tool which allows you to inspect recent flow\nevents on all endpoints managed by Cilium.\n\nEnsure Hubble is running correctly\n----------------------------------\n\nTo ensure the Hubble client can connect to the Hubble server running inside\nCilium, you may use the ``hubble status`` command from within a Cilium pod:\n\n.. code-block:: shell-session\n\n   $ hubble status\n   Healthcheck (via unix:\/\/\/var\/run\/cilium\/hubble.sock): Ok\n   Current\/Max Flows: 4095\/4095 (100.00%)\n   Flows\/s: 164.21\n\n``cilium-agent`` must be running with the ``--enable-hubble`` option (default) in order\nfor the Hubble server to be enabled. When deploying Cilium with Helm, make sure\nto set the ``hubble.enabled=true`` value.\n\nTo check if Hubble is enabled in your deployment, you may look for the\nfollowing output in ``cilium-dbg status``:\n\n.. 
code-block:: shell-session\n\n   $ cilium status\n   ...\n   Hubble:   Ok   Current\/Max Flows: 4095\/4095 (100.00%), Flows\/s: 164.21   Metrics: Disabled\n   ...\n\n.. note::\n   Pods need to be managed by Cilium in order to be observable by Hubble.\n   See how to :ref:`ensure a pod is managed by Cilium<ensure_managed_pod>`\n   for more details.\n\nObserving flows of a specific pod\n---------------------------------\n\nIn order to observe the traffic of a specific pod, you will first have to\n:ref:`retrieve the name of the cilium instance managing it<retrieve_cilium_pod>`.\nThe Hubble CLI is part of the Cilium container image and can be accessed via\n``kubectl exec``. The following query for example will show all events related\nto flows which either originated or terminated in the ``default\/tiefighter`` pod\nin the last three minutes:\n\n.. code-block:: shell-session\n\n   $ kubectl exec -n kube-system cilium-77lk6 -- hubble observe --since 3m --pod default\/tiefighter\n   May  4 12:47:08.811: default\/tiefighter:53875 -> kube-system\/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP)\n   May  4 12:47:08.811: default\/tiefighter:53875 -> kube-system\/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP)\n   May  4 12:47:08.811: default\/tiefighter:53875 <- kube-system\/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP)\n   May  4 12:47:08.811: default\/tiefighter:53875 <- kube-system\/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP)\n   May  4 12:47:08.811: default\/tiefighter:50214 <> default\/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: SYN)\n   May  4 12:47:08.812: default\/tiefighter:50214 <- default\/deathstar-c74d84667-cx5kp:80 to-endpoint FORWARDED (TCP Flags: SYN, ACK)\n   May  4 12:47:08.812: default\/tiefighter:50214 <> default\/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: ACK)\n   May  4 12:47:08.812: default\/tiefighter:50214 <> default\/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: 
ACK, PSH)\n   May  4 12:47:08.812: default\/tiefighter:50214 <- default\/deathstar-c74d84667-cx5kp:80 to-endpoint FORWARDED (TCP Flags: ACK, PSH)\n   May  4 12:47:08.812: default\/tiefighter:50214 <> default\/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: ACK, FIN)\n   May  4 12:47:08.812: default\/tiefighter:50214 <- default\/deathstar-c74d84667-cx5kp:80 to-endpoint FORWARDED (TCP Flags: ACK, FIN)\n   May  4 12:47:08.812: default\/tiefighter:50214 <> default\/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: ACK)\n\nYou may also use ``-o json`` to obtain more detailed information about each\nflow event.\n\n.. note::\n   **Hubble Relay** allows you to query multiple Hubble instances\n   simultaneously without having to first manually target a specific node. See\n   `Observing flows with Hubble Relay`_ for more information.\n\nObserving flows with Hubble Relay\n=================================\n\nHubble Relay is a service which allows you to query multiple Hubble instances\nsimultaneously and aggregate the results. See :ref:`hubble_setup` to enable\nHubble Relay if it is not yet enabled and install the Hubble CLI on your local\nmachine.\n\nYou may access the Hubble Relay service by port-forwarding it locally:\n\n.. code-block:: shell-session\n\n   kubectl -n kube-system port-forward service\/hubble-relay --address 0.0.0.0 --address :: 4245:80\n\nThis will forward the Hubble Relay service port (``80``) to your local machine\non port ``4245`` on all of its IP addresses.\n\nYou can verify that Hubble Relay can be reached by using the Hubble CLI and\nrunning the following command from your local machine:\n\n.. 
code-block:: shell-session\n\n   hubble status\n\nThis command should return an output similar to the following:\n\n::\n\n   Healthcheck (via localhost:4245): Ok\n   Current\/Max Flows: 16380\/16380 (100.00%)\n   Flows\/s: 46.19\n   Connected Nodes: 4\/4\n\nYou may see details about nodes that Hubble Relay is connected to by running\nthe following command:\n\n.. code-block:: shell-session\n\n   hubble list nodes\n\nAs Hubble Relay shares the same API as individual Hubble instances, you may\nfollow the `Observing flows with Hubble`_ section keeping in mind that\nlimitations with regards to what can be seen from individual Hubble instances no\nlonger apply.\n\nConnectivity Problems\n=====================\n\nCilium connectivity tests\n-------------------------\n\nThe Cilium connectivity test deploys a series of services, deployments, and\nCiliumNetworkPolicy which will use various connectivity paths to connect to\neach other. Connectivity paths include with and without service load-balancing\nand various network policy combinations.\n\n.. note::\n   The connectivity tests will only work in a namespace with no other pods\n   or network policies applied. If there is a Cilium Clusterwide Network Policy\n   enabled, that may also break this connectivity check.\n\nTo run the connectivity tests, create an isolated test namespace called\n``cilium-test`` to deploy the tests with.\n\n.. parsed-literal::\n\n   kubectl create ns cilium-test\n   kubectl apply --namespace=cilium-test -f \\ |SCM_WEB|\\\/examples\/kubernetes\/connectivity-check\/connectivity-check.yaml\n\nThe tests cover various functionality of the system. Below we call out each test\ntype. 
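Since each connectivity-check pod reports success or failure through its readiness gate, a saved pod listing can be screened mechanically. A minimal sketch (the pod names, hash suffixes, and counts below are sample data, not output from a real cluster):

```shell
# Heuristic sketch: flag connectivity-check pods whose READY column is not 1/1.
# The listing stands in for: kubectl get pods -n cilium-test --no-headers
listing='echo-a-6788c799fd-42qxx                         1/1   Running   0   69s
pod-to-b-intra-node-hostport-5bb9cc99b4-x77gm   0/1   Running   2   69s'

# Field 2 of `kubectl get pods` output is the READY column.
printf '%s\n' "$listing" | awk '$2 != "1/1" {print $1, "is failing its check"}'
```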
If tests pass, it suggests functionality of the referenced subsystem.\n\n+----------------------------+-----------------------------+-------------------------------+-----------------------------+----------------------------------------+\n| Pod-to-pod (intra-host)    | Pod-to-pod (inter-host)     | Pod-to-service (intra-host)   | Pod-to-service (inter-host) | Pod-to-external resource               |\n+============================+=============================+===============================+=============================+========================================+\n| eBPF routing is functional | Data plane, routing, network| eBPF service map lookup       | VXLAN overlay port if used  | Egress, CiliumNetworkPolicy, masquerade|\n+----------------------------+-----------------------------+-------------------------------+-----------------------------+----------------------------------------+\n\nThe pod name indicates the connectivity\nvariant and the readiness and liveness gate indicates success or failure of the\ntest:\n\n.. 
code-block:: shell-session\n\n   $ kubectl get pods -n cilium-test\n   NAME                                                    READY   STATUS    RESTARTS   AGE\n   echo-a-6788c799fd-42qxx                                 1\/1     Running   0          69s\n   echo-b-59757679d4-pjtdl                                 1\/1     Running   0          69s\n   echo-b-host-f86bd784d-wnh4v                             1\/1     Running   0          68s\n   host-to-b-multi-node-clusterip-585db65b4d-x74nz         1\/1     Running   0          68s\n   host-to-b-multi-node-headless-77c64bc7d8-kgf8p          1\/1     Running   0          67s\n   pod-to-a-allowed-cnp-87b5895c8-bfw4x                    1\/1     Running   0          68s\n   pod-to-a-b76ddb6b4-2v4kb                                1\/1     Running   0          68s\n   pod-to-a-denied-cnp-677d9f567b-kkjp4                    1\/1     Running   0          68s\n   pod-to-b-intra-node-nodeport-8484fb6d89-bwj8q           1\/1     Running   0          68s\n   pod-to-b-multi-node-clusterip-f7655dbc8-h5bwk           1\/1     Running   0          68s\n   pod-to-b-multi-node-headless-5fd98b9648-5bjj8           1\/1     Running   0          68s\n   pod-to-b-multi-node-nodeport-74bd8d7bd5-kmfmm           1\/1     Running   0          68s\n   pod-to-external-1111-7489c7c46d-jhtkr                   1\/1     Running   0          68s\n   pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-97p75   1\/1     Running   0          68s\n\nInformation about test failures can be determined by describing a failed test\npod\n\n.. code-block:: shell-session\n\n   $ kubectl describe pod pod-to-b-intra-node-hostport\n     Warning  Unhealthy  6s (x6 over 56s)   kubelet, agent1    Readiness probe failed: curl: (7) Failed to connect to echo-b-host-headless port 40000: Connection refused\n     Warning  Unhealthy  2s (x3 over 52s)   kubelet, agent1    Liveness probe failed: curl: (7) Failed to connect to echo-b-host-headless port 40000: Connection refused\n\n.. 
_cluster_connectivity_health:\n\nChecking cluster connectivity health\n------------------------------------\n\nCilium can rule out network fabric related issues when troubleshooting\nconnectivity issues by providing reliable health and latency probes between all\ncluster nodes and a simulated workload running on each node.\n\nBy default when Cilium is run, it launches instances of ``cilium-health`` in\nthe background to determine the overall connectivity status of the cluster. This\ntool periodically runs bidirectional traffic across multiple paths through the\ncluster and through each node using different protocols to determine the health\nstatus of each path and protocol. At any point in time, cilium-health may be\nqueried for the connectivity status of the last probe.\n\n.. code-block:: shell-session\n\n   $ kubectl -n kube-system exec -ti cilium-2hq5z -- cilium-health status\n   Probe time:   2018-06-16T09:51:58Z\n   Nodes:\n     ip-172-0-52-116.us-west-2.compute.internal (localhost):\n       Host connectivity to 172.0.52.116:\n         ICMP to stack: OK, RTT=315.254\u00b5s\n         HTTP to agent: OK, RTT=368.579\u00b5s\n       Endpoint connectivity to 10.2.0.183:\n         ICMP to stack: OK, RTT=190.658\u00b5s\n         HTTP to agent: OK, RTT=536.665\u00b5s\n     ip-172-0-117-198.us-west-2.compute.internal:\n       Host connectivity to 172.0.117.198:\n         ICMP to stack: OK, RTT=1.009679ms\n         HTTP to agent: OK, RTT=1.808628ms\n       Endpoint connectivity to 10.2.1.234:\n         ICMP to stack: OK, RTT=1.016365ms\n         HTTP to agent: OK, RTT=2.29877ms\n\nFor each node, the connectivity will be displayed for each protocol and path,\nboth to the node itself and to an endpoint on that node. The latency specified\nis a snapshot at the last time a probe was run, which is typically once per\nminute. 
The ICMP connectivity row represents Layer 3 connectivity to the\nnetworking stack, while the HTTP connectivity row represents connection to an\ninstance of the ``cilium-health`` agent running on the host or as an endpoint.\n\n.. _monitor:\n\nMonitoring Datapath State\n-------------------------\n\nSometimes you may experience broken connectivity, which may be due to a\nnumber of different causes. A main cause can be unwanted packet drops on\nthe networking level. The tool\n``cilium-dbg monitor`` allows you to quickly inspect and see if and where packet\ndrops happen. Following is an example output (use ``kubectl exec`` as in\nprevious examples if running with Kubernetes):\n\n.. code-block:: shell-session\n\n   $ kubectl -n kube-system exec -ti cilium-2hq5z -- cilium-dbg monitor --type drop\n   Listening for events on 2 CPUs with 64x4096 of shared memory\n   Press Ctrl-C to quit\n   xx drop (Policy denied) to endpoint 25729, identity 261->264: fd02::c0a8:210b:0:bf00 -> fd02::c0a8:210b:0:6481 EchoRequest\n   xx drop (Policy denied) to endpoint 25729, identity 261->264: fd02::c0a8:210b:0:bf00 -> fd02::c0a8:210b:0:6481 EchoRequest\n   xx drop (Policy denied) to endpoint 25729, identity 261->264: 10.11.13.37 -> 10.11.101.61 EchoRequest\n   xx drop (Policy denied) to endpoint 25729, identity 261->264: 10.11.13.37 -> 10.11.101.61 EchoRequest\n   xx drop (Invalid destination mac) to endpoint 0, identity 0->0: fe80::5c25:ddff:fe8e:78d8 -> ff02::2 RouterSolicitation\n\nThe above indicates that a packet to endpoint ID ``25729`` has been dropped due\nto violation of the Layer 3 policy.\n\nHandling drop (CT: Map insertion failed)\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIf connectivity fails and ``cilium-dbg monitor --type drop`` shows ``xx drop (CT:\nMap insertion failed)``, then it is likely that the connection tracking table\nis filling up and the automatic adjustment of the garbage collector interval is\ninsufficient.\n\nSetting ``--conntrack-gc-interval`` to an 
interval lower than the current value\nmay help. This controls the time interval between two garbage collection runs.\n\nBy default ``--conntrack-gc-interval`` is set to 0 which translates to\nusing a dynamic interval. In that case, the interval is updated after each\ngarbage collection run depending on how many entries were garbage collected.\nIf very few or no entries were garbage collected, the interval will increase;\nif many entries were garbage collected, it will decrease. The current interval\nvalue is reported in the Cilium agent logs.\n\nAlternatively, the value for ``bpf-ct-global-any-max`` and\n``bpf-ct-global-tcp-max`` can be increased. Setting both of these options will\nbe a trade-off of CPU for ``conntrack-gc-interval``, and for\n``bpf-ct-global-any-max`` and ``bpf-ct-global-tcp-max`` the amount of memory\nconsumed. You can track conntrack garbage collection related metrics such as\n``datapath_conntrack_gc_runs_total`` and ``datapath_conntrack_gc_entries`` to\nget visibility into garbage collection runs. Refer to :ref:`metrics` for more\ndetails.\n\nEnabling datapath debug messages\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nBy default, datapath debug messages are disabled, and therefore not shown in\n``cilium-dbg monitor -v`` output. To enable them, add ``\"datapath\"`` to\nthe ``debug-verbose`` option.\n\nPolicy Troubleshooting\n======================\n\n.. _ensure_managed_pod:\n\nEnsure pod is managed by Cilium\n-------------------------------\n\nA potential cause for policy enforcement not functioning as expected is that\nthe networking of the pod selected by the policy is not being managed by\nCilium. The following situations result in unmanaged pods:\n\n* The pod is running in host networking and will use the host's IP address\n  directly. Such pods have full network connectivity but Cilium will not\n  provide security policy enforcement for such pods by default. 
To enforce\n  policy against these pods, either set ``hostNetwork`` to false or use\n  :ref:`HostPolicies`.\n\n* The pod was started before Cilium was deployed. Cilium only manages pods\n  that have been deployed after Cilium itself was started. Cilium will not\n  provide security policy enforcement for such pods. These pods should be\n  restarted in order to ensure that Cilium can provide security policy\n  enforcement.\n\nIf pod networking is not managed by Cilium, ingress and egress policy rules\nselecting the respective pods will not be applied. See the section\n:ref:`network_policy` for more details.\n\nFor a quick assessment of whether any pods are not managed by Cilium, the\n`Cilium CLI <https:\/\/github.com\/cilium\/cilium-cli>`_ will print the number\nof managed pods. If this prints that all of the pods are managed by Cilium,\nthen there is no problem:\n\n.. code-block:: shell-session\n\n   $ cilium status\n       \/\u00af\u00af\\\n    \/\u00af\u00af\\__\/\u00af\u00af\\    Cilium:         OK\n    \\__\/\u00af\u00af\\__\/    Operator:       OK\n    \/\u00af\u00af\\__\/\u00af\u00af\\    Hubble:         OK\n    \\__\/\u00af\u00af\\__\/    ClusterMesh:    disabled\n       \\__\/\n\n   Deployment        cilium-operator    Desired: 2, Ready: 2\/2, Available: 2\/2\n   Deployment        hubble-relay       Desired: 1, Ready: 1\/1, Available: 1\/1\n   Deployment        hubble-ui          Desired: 1, Ready: 1\/1, Available: 1\/1\n   DaemonSet         cilium             Desired: 2, Ready: 2\/2, Available: 2\/2\n   Containers:       cilium-operator    Running: 2\n                     hubble-relay       Running: 1\n                     hubble-ui          Running: 1\n                     cilium             Running: 2\n   Cluster Pods:     5\/5 managed by Cilium\n   ...\n\nYou can run the following script to list the pods which are *not* managed by\nCilium:\n\n.. 
code-block:: shell-session\n\n   $ curl -sLO https:\/\/raw.githubusercontent.com\/cilium\/cilium\/main\/contrib\/k8s\/k8s-unmanaged.sh\n   $ chmod +x k8s-unmanaged.sh\n   $ .\/k8s-unmanaged.sh\n   kube-system\/cilium-hqpk7\n   kube-system\/kube-addon-manager-minikube\n   kube-system\/kube-dns-54cccfbdf8-zmv2c\n   kube-system\/kubernetes-dashboard-77d8b98585-g52k5\n   kube-system\/storage-provisioner\n\nUnderstand the rendering of your policy\n---------------------------------------\n\nThere are always multiple ways to approach a problem. Cilium can provide the\nrendering of the aggregate policy provided to it, leaving you to simply compare\nwith what you expect the policy to actually be rather than search (and\npotentially overlook) every policy. At the expense of reading a very large dump\nof an endpoint, this is often a faster path to discovering errant policy\nrequests in the Kubernetes API.\n\nStart by finding the endpoint you are debugging from the following list. There\nare several cross references for you to use in this list, including the IP\naddress and pod labels:\n\n.. code-block:: shell-session\n\n    kubectl -n kube-system exec -ti cilium-q8wvt -- cilium-dbg endpoint list\n\nWhen you find the correct endpoint, the first column of every row is the\nendpoint ID. Use that to dump the full endpoint information:\n\n.. code-block:: shell-session\n\n    kubectl -n kube-system exec -ti cilium-q8wvt -- cilium-dbg endpoint get 59084\n\n.. image:: images\/troubleshooting_policy.png\n    :align: center\n\nImporting this dump into a JSON-friendly editor can help browse and navigate the\ninformation here. At the top level of the dump, there are two nodes of note:\n\n* ``spec``: The desired state of the endpoint\n* ``status``: The current state of the endpoint\n\nThis is the standard Kubernetes control loop pattern. 
Cilium is the controller\nhere, and it is iteratively working to bring the ``status`` in line with the\n``spec``.\n\nOpening the ``status``, we can drill down through ``policy.realized.l4``. Do\nyour ``ingress`` and ``egress`` rules match what you expect? If not, the\nreference to the errant rules can be found in the ``derived-from-rules`` node.\n\nPolicymap pressure and overflow\n-------------------------------\n\nThe most important step in debugging policymap pressure is finding out which\nnode(s) are impacted.\n\nThe ``cilium_bpf_map_pressure{map_name=\"cilium_policy_v2_*\"}`` metric monitors the\nendpoint's BPF policymap pressure. This metric exposes the maximum BPF map\npressure on the node, meaning the policymap experiencing the most pressure on a\nparticular node.\n\nOnce the node is known, the troubleshooting steps are as follows:\n\n1. Find the Cilium pod on the node experiencing the problematic policymap\n   pressure and obtain a shell via ``kubectl exec``.\n2. Use ``cilium policy selectors`` to get an overview of which selectors are\n   selecting many identities. The output of this command as of Cilium v1.15\n   additionally displays the namespace and name of the policy resource of each\n   selector.\n3. The type of selector tells you what sort of policy rule could be having an\n   impact. The three existing types of selectors are explained below, each with\n   specific steps depending on the selector. See the steps below corresponding\n   to the type of selector.\n4. Consider bumping the policymap size as a last resort. 
However, keep in mind
   the following implications:

   * Increased memory consumption for each policymap.
   * Generally, the more identities there are in the cluster, the more work
     Cilium performs.
   * At a broader level, if the policy posture is such that all or nearly all
     identities are selected, this suggests that the posture is too permissive.

+---------------+------------------------------------------------------------------------------------------------------------+
| Selector type | Form in ``cilium policy selectors`` output                                                                 |
+===============+============================================================================================================+
| CIDR          | ``&LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.1/32: ,}``                                       |
+---------------+------------------------------------------------------------------------------------------------------------+
| FQDN          | ``MatchName: , MatchPattern: *``                                                                           |
+---------------+------------------------------------------------------------------------------------------------------------+
| Label         | ``&LabelSelector{MatchLabels:map[string]string{any.name: curl,k8s.io.kubernetes.pod.namespace: default,}`` |
+---------------+------------------------------------------------------------------------------------------------------------+

An example output of ``cilium policy selectors``:

.. 
code-block:: shell-session\n\n    root@kind-worker:\/home\/cilium# cilium policy selectors\n    SELECTOR                                                                                                                                                            LABELS                          USERS   IDENTITIES\n    &LabelSelector{MatchLabels:map[string]string{k8s.io.kubernetes.pod.namespace: kube-system,k8s.k8s-app: kube-dns,},MatchExpressions:[]LabelSelectorRequirement{},}   default\/tofqdn-dns-visibility   1       16500\n    &LabelSelector{MatchLabels:map[string]string{reserved.none: ,},MatchExpressions:[]LabelSelectorRequirement{},}                                                      default\/tofqdn-dns-visibility   1\n    MatchName: , MatchPattern: *                                                                                                                                        default\/tofqdn-dns-visibility   1       16777231\n                                                                                                                                                                                                                16777232\n                                                                                                                                                                                                                16777233\n                                                                                                                                                                                                                16860295\n                                                                                                                                                                                                                16860322\n                                                                                                                                                                                 
                               16860323\n                                                                                                                                                                                                                16860324\n                                                                                                                                                                                                                16860325\n                                                                                                                                                                                                                16860326\n                                                                                                                                                                                                                16860327\n                                                                                                                                                                                                                16860328\n    &LabelSelector{MatchLabels:map[string]string{any.name: netperf,k8s.io.kubernetes.pod.namespace: default,},MatchExpressions:[]LabelSelectorRequirement{},}           default\/tofqdn-dns-visibility   1\n    &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.1\/32: ,},MatchExpressions:[]LabelSelectorRequirement{},}                                                    default\/tofqdn-dns-visibility   1       16860329\n    &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.2\/32: ,},MatchExpressions:[]LabelSelectorRequirement{},}                                                    default\/tofqdn-dns-visibility   1       16860330\n    &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.3\/32: ,},MatchExpressions:[]LabelSelectorRequirement{},}                                                    default\/tofqdn-dns-visibility   1       
16860331

From the output above, we see that all three selectors are in use. The
significant action here is to determine which selector is selecting the most
identities, because the policy containing that selector is the likely cause of
the policymap pressure.

Label
~~~~~

See section on :ref:`identity-relevant labels <identity-relevant-labels>`.

Another aspect to consider is the permissiveness of the policies and whether it
could be reduced.

CIDR
~~~~

One way to reduce the number of identities selected by a CIDR selector is to
broaden the range of the CIDR, if possible. For example, in the above example
output, the policy contains a ``/32`` rule for each CIDR, rather than a wider
range such as ``/30``. Updating the policy to use the wider range creates a
single identity that represents all IPs within the ``/30``, and therefore the
selector only needs to select one identity.

FQDN
~~~~

See section on :ref:`isolating the source of toFQDNs issues regarding
identities and policy <isolating-source-toFQDNs-issues-identities-policy>`.

etcd (kvstore)
==============

Introduction
------------

Cilium can be operated in CRD mode or in kvstore/etcd mode. When Cilium is
running in kvstore/etcd mode, the kvstore becomes a vital component of overall
cluster health, as it must be available for several operations.

Operations for which the kvstore is strictly required when running in etcd
mode:

Scheduling of new workloads:
  As part of scheduling workloads/endpoints, agents will perform security
  identity allocation which requires interaction with the kvstore. 
Even if a
  workload can be scheduled because it re-uses a known security identity, the
  propagation of the endpoint details to other nodes still depends on the
  kvstore, so packet drops due to policy enforcement may be observed because
  other nodes in the cluster are not yet aware of the new workload.

Multi cluster:
  All state propagation between clusters depends on the kvstore.

Node discovery:
  New nodes must register themselves in the kvstore.

Agent bootstrap:
  The Cilium agent will eventually fail if it can't connect to the kvstore at
  bootstrap time; however, the agent will still perform all possible operations
  while waiting for the kvstore to become available.

Operations which *do not* require kvstore availability:

All datapath operations:
  All datapath forwarding, policy enforcement, and visibility functions for
  existing workloads/endpoints do not depend on the kvstore. Packets will
  continue to be forwarded and network policy rules will continue to be
  enforced.

  However, if the agent needs to restart as part of the
  :ref:`etcd_recovery_behavior`, this can cause:

  * delays in processing of flow events and metrics
  * short unavailability of layer 7 proxies

NetworkPolicy updates:
  Network policy updates will continue to be processed and applied.

Service updates:
  All updates to services will be processed and applied.

Understanding etcd status
-------------------------

The etcd status is reported when running ``cilium-dbg status``. The following
line represents the status of etcd::

   KVStore:  Ok  etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=true: https://192.168.60.11:2379 - 3.4.9 (Leader)

OK:
  The overall status. 
Either ``OK`` or ``Failure``.

1/1 connected:
  Total number of etcd endpoints and how many of them are reachable.

lease-ID:
  UUID of the lease used for all keys owned by this agent.

lock lease-ID:
  UUID of the lease used for locks acquired by this agent.

has-quorum:
  Status of etcd quorum. Either ``true`` or set to an error.

consecutive-errors:
  Number of consecutive quorum errors. Only printed if errors are present.

https://192.168.60.11:2379 - 3.4.9 (Leader):
  List of all etcd endpoints stating the etcd version and whether the
  particular endpoint is currently the elected leader. If an etcd endpoint
  cannot be reached, the error is shown.

.. _etcd_recovery_behavior:

Recovery behavior
-----------------

In the event of an etcd endpoint becoming unhealthy, etcd should automatically
resolve this by electing a new leader and failing over to a healthy etcd
endpoint. As long as quorum is preserved, the etcd cluster will remain
functional.

In addition, Cilium performs a background check at a regular interval to
determine etcd health and potentially take action. The interval depends on the
overall cluster size. The larger the cluster, the longer the `interval
<https://pkg.go.dev/github.com/cilium/cilium/pkg/kvstore?tab=doc#ExtraOptions.StatusCheckInterval>`_:

 * If no etcd endpoints can be reached, Cilium will report failure in
   ``cilium-dbg status``. This will cause the liveness and readiness probes of
   Kubernetes to fail and Cilium will be restarted.

 * A lock is acquired and released to test a write operation which requires
   quorum. If this operation fails, loss of quorum is reported. If quorum fails
   for three or more intervals in a row, Cilium is declared unhealthy.

 * The Cilium operator will constantly write to a heartbeat key
   (``cilium/.heartbeat``). All Cilium agents will watch for updates to this
   heartbeat key. 
This validates the ability of an agent to receive key
   updates from etcd. If the heartbeat key is not updated in time, the quorum
   check is considered to have failed, and Cilium is declared unhealthy after
   three or more consecutive failures.

Example of a status with a quorum failure which has not yet reached the
threshold::

    KVStore: Ok   etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=2m2.778966915s since last heartbeat update has been received, consecutive-errors=1: https://192.168.60.11:2379 - 3.4.9 (Leader)

Example of a status with the number of quorum failures exceeding the threshold::

    KVStore: Failure   Err: quorum check failed 8 times in a row: 4m28.446600949s since last heartbeat update has been received

.. _troubleshooting_clustermesh:

.. include:: ./troubleshooting_clustermesh.rst

.. _troubleshooting_servicemesh:

.. include:: troubleshooting_servicemesh.rst

Symptom Library
===============

Node to node traffic is being dropped
-------------------------------------

Symptom
~~~~~~~

Endpoint to endpoint communication on a single node succeeds but communication
fails between endpoints across multiple nodes.

Troubleshooting steps:
~~~~~~~~~~~~~~~~~~~~~~

#. Run ``cilium-health status`` on the node of the source and destination
   endpoint. It should describe the connectivity from that node to other
   nodes in the cluster, and to a simulated endpoint on each other node.
   Identify points in the cluster that cannot talk to each other. If the
   command does not describe the status of the other node, there may be an
   issue with the KV-Store.

#. Run ``cilium-dbg monitor`` on the node of the source and destination endpoint.
   Look for packet drops.

   When running in :ref:`arch_overlay` mode:

#. Run ``cilium-dbg bpf tunnel list`` and verify that each Cilium node is aware of
   the other nodes in the cluster. 
If not, check the logfile for errors.

#. If nodes are being populated correctly, run ``tcpdump -n -i cilium_vxlan`` on
   each node to verify whether cross-node traffic is being forwarded correctly
   between nodes.

   If packets are being dropped,

   * verify that the node IPs listed in ``cilium-dbg bpf tunnel list`` can
     reach each other.
   * verify that the firewall on each node allows UDP port 8472.

   When running in :ref:`arch_direct_routing` mode:

#. Run ``ip route`` or check your cloud provider router and verify that you have
   routes installed to route the endpoint prefix between all nodes.

#. Verify that the firewall on each node permits routing of the endpoint IPs.


Useful Scripts
==============

.. _retrieve_cilium_pod:

Retrieve Cilium pod managing a particular pod
---------------------------------------------

Identifies the Cilium pod that is managing a particular pod in a namespace:

.. code-block:: shell-session

    k8s-get-cilium-pod.sh <pod> <namespace>

**Example:**

.. code-block:: shell-session

    $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-get-cilium-pod.sh
    $ chmod +x k8s-get-cilium-pod.sh
    $ ./k8s-get-cilium-pod.sh luke-pod default
    cilium-zmjj9
    cilium-node-init-v7r9p
    cilium-operator-f576f7977-s5gpq

Execute a command in all Kubernetes Cilium pods
-----------------------------------------------

Runs a command within all Cilium pods of a cluster:

.. code-block:: shell-session

    k8s-cilium-exec.sh <command>

**Example:**

.. 
code-block:: shell-session

    $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-cilium-exec.sh
    $ chmod +x k8s-cilium-exec.sh
    $ ./k8s-cilium-exec.sh uptime
     10:15:16 up 6 days,  7:37,  0 users,  load average: 0.00, 0.02, 0.00
     10:15:16 up 6 days,  7:32,  0 users,  load average: 0.00, 0.03, 0.04
     10:15:16 up 6 days,  7:30,  0 users,  load average: 0.75, 0.27, 0.15
     10:15:16 up 6 days,  7:28,  0 users,  load average: 0.14, 0.04, 0.01

List unmanaged Kubernetes pods
------------------------------

Lists all Kubernetes pods in the cluster for which Cilium does *not* provide
networking. This includes pods running in host-networking mode and pods that
were started before Cilium was deployed.

.. code-block:: shell-session

   k8s-unmanaged.sh

**Example:**

.. code-block:: shell-session

   $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-unmanaged.sh
   $ chmod +x k8s-unmanaged.sh
   $ ./k8s-unmanaged.sh
   kube-system/cilium-hqpk7
   kube-system/kube-addon-manager-minikube
   kube-system/kube-dns-54cccfbdf8-zmv2c
   kube-system/kubernetes-dashboard-77d8b98585-g52k5
   kube-system/storage-provisioner

Reporting a problem
===================

Before you report a problem, make sure to retrieve the necessary information
from your cluster before the failure state is lost.

.. _sysdump:

Automatic log & state collection
--------------------------------

.. include:: ../installation/cli-download.rst

Then, execute the ``cilium sysdump`` command to collect troubleshooting
information from your Kubernetes cluster:

.. code-block:: shell-session

   cilium sysdump

Note that by default ``cilium sysdump`` will attempt to collect as many logs as
possible, for all the nodes in the cluster. If your cluster size is above 20
nodes, consider setting the following options to limit the size of the sysdump.
This is not required, but useful for those who have a constraint on bandwidth or
upload size.

* Set the ``--node-list`` option to pick only a few nodes in case the cluster
  has many of them.
* Set the ``--logs-since-time`` option to go back in time to when the issues
  started.
* Set the ``--logs-limit-bytes`` option to limit the size of the log files
  (note: passed onto ``kubectl logs``; does not apply to entire collection
  archive).

Ideally, prefer a sysdump that has a full history of select nodes (by using
``--node-list``) over a brief history of all the nodes. The second recommended
approach is to use ``--logs-since-time`` if you are able to narrow down when
the issues started. Lastly, if the Cilium agent and Operator logs are too
large, consider ``--logs-limit-bytes``.

Use ``--help`` to see more options:

.. code-block:: shell-session

   cilium sysdump --help

Single Node Bugtool
~~~~~~~~~~~~~~~~~~~

If you are not running Kubernetes, it is also possible to run the bug
collection tool manually with the scope of a single node.

The ``cilium-bugtool`` command captures potentially useful information about
your environment for debugging. The tool is meant to be used for debugging a
single Cilium agent node. In the Kubernetes case, if you have multiple Cilium
pods, the tool can retrieve debugging information from all of them. The tool
works by archiving a collection of command output and files from several
places. By default, it writes to the ``tmp`` directory.

Note that the command needs to be run from inside the Cilium pod/container.

.. code-block:: shell-session

   cilium-bugtool

When running it with no option as shown above, it will try to copy various
files and execute some commands. If ``kubectl`` is detected, it will search for
Cilium pods. 
The default label is ``k8s-app=cilium``, but the label and the
namespace can be changed via the ``k8s-label`` and ``k8s-namespace`` options
respectively.

If you want to capture the archive from a Kubernetes pod, the process is
slightly different:

.. code-block:: shell-session

   $ # First we need to get the Cilium pod
   $ kubectl get pods --namespace kube-system
   NAME                          READY     STATUS    RESTARTS   AGE
   cilium-kg8lv                  1/1       Running   0          13m
   kube-addon-manager-minikube   1/1       Running   0          1h
   kube-dns-6fc954457d-sf2nk     3/3       Running   0          1h
   kubernetes-dashboard-6xvc7    1/1       Running   0          1h

   $ # Run the bugtool from this pod
   $ kubectl -n kube-system exec cilium-kg8lv -- cilium-bugtool
   [...]

   $ # Copy the archive from the pod
   $ kubectl cp kube-system/cilium-kg8lv:/tmp/cilium-bugtool-20180411-155146.166+0000-UTC-266836983.tar /tmp/cilium-bugtool-20180411-155146.166+0000-UTC-266836983.tar
   [...]

.. note::

   Please check the archive for sensitive information and strip it
   away before sharing it with us.

Below is an approximate list of the kind of information in the archive.

* Cilium status
* Cilium version
* Kernel configuration
* Resolve configuration
* Cilium endpoint state
* Cilium logs
* Docker logs
* ``dmesg``
* ``ethtool``
* ``ip a``
* ``ip link``
* ``ip r``
* ``iptables-save``
* ``kubectl -n kube-system get pods``
* ``kubectl get pods,svc for all namespaces``
* ``uname``
* ``uptime``
* ``cilium-dbg bpf * list``
* ``cilium-dbg endpoint get for each endpoint``
* ``cilium-dbg endpoint list``
* ``hostname``
* ``cilium-dbg policy get``
* ``cilium-dbg service list``


Debugging information
~~~~~~~~~~~~~~~~~~~~~

If you are not running Kubernetes, you can use the ``cilium-dbg debuginfo``
command to retrieve useful debugging information. 
If you are running Kubernetes, this
command is automatically run as part of the system dump.

``cilium-dbg debuginfo`` can print useful output from the Cilium API. The
output is in Markdown format, so it can be used when reporting a bug on the
`issue tracker`_. Running it without arguments will print to standard output,
but you can also redirect the output to a file:

.. code-block:: shell-session

   cilium-dbg debuginfo -f debuginfo.md

.. note::

   Please check the debuginfo file for sensitive information and strip it
   away before sharing it with us.


Slack assistance
----------------

The `Cilium Slack`_ community is a helpful first point of assistance to get
help troubleshooting a problem or to discuss options on how to address it. The
community is open to anyone.

Report an issue via GitHub
--------------------------

If you believe you have found an issue in Cilium, please report a
`GitHub issue`_ and make sure to attach a system dump as described above to
ensure that developers have the best chance to reproduce the issue.

.. _NodeSelector: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
.. _RBAC: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
.. _CNI: https://github.com/containernetworking/cni
.. _Volumes: https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/

.. _Cilium Frequently Asked Questions (FAQ): https://github.com/cilium/cilium/issues?utf8=%E2%9C%93&q=label%3Akind%2Fquestion%20

.. _issue tracker: https://github.com/cilium/cilium/issues
.. 
_GitHub issue: `issue tracker`_
f7655dbc8 h5bwk           1 1     Running   0          68s    pod to b multi node headless 5fd98b9648 5bjj8           1 1     Running   0          68s    pod to b multi node nodeport 74bd8d7bd5 kmfmm           1 1     Running   0          68s    pod to external 1111 7489c7c46d jhtkr                   1 1     Running   0          68s    pod to external fqdn allow google cnp b7b6bcdcb 97p75   1 1     Running   0          68s  Information about test failures can be determined by describing a failed test pod     code block   shell session       kubectl describe pod pod to b intra node hostport      Warning  Unhealthy  6s  x6 over 56s    kubelet  agent1    Readiness probe failed  curl   7  Failed to connect to echo b host headless port 40000  Connection refused      Warning  Unhealthy  2s  x3 over 52s    kubelet  agent1    Liveness probe failed  curl   7  Failed to connect to echo b host headless port 40000  Connection refused      cluster connectivity health   Checking cluster connectivity health                                       Cilium can rule out network fabric related issues when troubleshooting connectivity issues by providing reliable health and latency probes between all cluster nodes and a simulated workload running on each node   By default when Cilium is run  it launches instances of   cilium health   in the background to determine the overall connectivity status of the cluster  This tool periodically runs bidirectional traffic across multiple paths through the cluster and through each node using different protocols to determine the health status of each path and protocol  At any point in time  cilium health may be queried for the connectivity status of the last probe      code block   shell session       kubectl  n kube system exec  ti cilium 2hq5z    cilium health status    Probe time    2018 06 16T09 51 58Z    Nodes       ip 172 0 52 116 us west 2 compute internal  localhost          Host connectivity to 172 0 52 116           ICMP to stack  OK  RTT 
315 254 s          HTTP to agent  OK  RTT 368 579 s        Endpoint connectivity to 10 2 0 183           ICMP to stack  OK  RTT 190 658 s          HTTP to agent  OK  RTT 536 665 s      ip 172 0 117 198 us west 2 compute internal         Host connectivity to 172 0 117 198           ICMP to stack  OK  RTT 1 009679ms          HTTP to agent  OK  RTT 1 808628ms        Endpoint connectivity to 10 2 1 234           ICMP to stack  OK  RTT 1 016365ms          HTTP to agent  OK  RTT 2 29877ms  For each node  the connectivity will be displayed for each protocol and path  both to the node itself and to an endpoint on that node  The latency specified is a snapshot at the last time a probe was run  which is typically once per minute  The ICMP connectivity row represents Layer 3 connectivity to the networking stack  while the HTTP connectivity row represents connection to an instance of the   cilium health   agent running on the host or as an endpoint       monitor   Monitoring Datapath State                            Sometimes you may experience broken connectivity  which may be due to a number of different causes  A main cause can be unwanted packet drops on the networking level  The tool   cilium dbg monitor   allows you to quickly inspect and see if and where packet drops happen  Following is an example output  use   kubectl exec   as in previous examples if running with Kubernetes       code block   shell session       kubectl  n kube system exec  ti cilium 2hq5z    cilium dbg monitor   type drop    Listening for events on 2 CPUs with 64x4096 of shared memory    Press Ctrl C to quit    xx drop  Policy denied  to endpoint 25729  identity 261  264  fd02  c0a8 210b 0 bf00    fd02  c0a8 210b 0 6481 EchoRequest    xx drop  Policy denied  to endpoint 25729  identity 261  264  fd02  c0a8 210b 0 bf00    fd02  c0a8 210b 0 6481 EchoRequest    xx drop  Policy denied  to endpoint 25729  identity 261  264  10 11 13 37    10 11 101 61 EchoRequest    xx drop  Policy denied  to endpoint 
25729  identity 261  264  10 11 13 37    10 11 101 61 EchoRequest    xx drop  Invalid destination mac  to endpoint 0  identity 0  0  fe80  5c25 ddff fe8e 78d8    ff02  2 RouterSolicitation  The above indicates that a packet to endpoint ID   25729   has been dropped due to violation of the Layer 3 policy   Handling drop  CT  Map insertion failed                                            If connectivity fails and   cilium dbg monitor   type drop   shows   xx drop  CT  Map insertion failed     then it is likely that the connection tracking table is filling up and the automatic adjustment of the garbage collector interval is insufficient   Setting     conntrack gc interval   to an interval lower than the current value may help  This controls the time interval between two garbage collection runs   By default     conntrack gc interval   is set to 0 which translates to using a dynamic interval  In that case  the interval is updated after each garbage collection run depending on how many entries were garbage collected  If very few or no entries were garbage collected  the interval will increase  if many entries were garbage collected  it will decrease  The current interval value is reported in the Cilium agent logs   Alternatively  the value for   bpf ct global any max   and   bpf ct global tcp max   can be increased  Setting both of these options will be a trade off of CPU for   conntrack gc interval    and for   bpf ct global any max   and   bpf ct global tcp max   the amount of memory consumed  You can track conntrack garbage collection related metrics such as   datapath conntrack gc runs total   and   datapath conntrack gc entries   to get visibility into garbage collection runs  Refer to  ref  metrics  for more details   Enabling datapath debug messages                                   By default  datapath debug messages are disabled  and therefore not shown in   cilium dbg monitor  v   output  To enable them  add    datapath    to the   debug verbose   option   
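As a sketch of how the options above are typically set when Cilium is
configured through its ConfigMap (the ``cilium-config`` name and the
``kube-system`` namespace are the common defaults; the values are illustrative
placeholders, not recommendations):

```yaml
# Illustrative excerpt of the cilium-config ConfigMap.
# Key names correspond to the agent options discussed above;
# verify names and tune values for your own cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config       # default name; confirm in your cluster
  namespace: kube-system
data:
  conntrack-gc-interval: "2m0s"      # lower interval = more frequent CT garbage collection
  bpf-ct-global-tcp-max: "1000000"   # larger TCP CT table, at the cost of memory
  bpf-ct-global-any-max: "500000"    # larger non-TCP CT table, at the cost of memory
  debug-verbose: "datapath"          # show datapath debug messages in 'cilium-dbg monitor -v'
```

Agents read their configuration at startup, so the Cilium DaemonSet pods must
be restarted for changes like these to take effect.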
Policy Troubleshooting
======================

.. _ensure_managed_pod:

Ensure pod is managed by Cilium
-------------------------------

A potential cause for policy enforcement not functioning as expected is that
the networking of the pod selected by the policy is not being managed by
Cilium. The following situations result in unmanaged pods:

* The pod is running in host networking and will use the host's IP address
  directly. Such pods have full network connectivity but Cilium will not
  provide security policy enforcement for such pods by default. To enforce
  policy against these pods, either set ``hostNetwork`` to false or use
  :ref:`HostPolicies`.

* The pod was started before Cilium was deployed. Cilium only manages pods
  that have been deployed after Cilium itself was started. Cilium will not
  provide security policy enforcement for such pods. These pods should be
  restarted in order to ensure that Cilium can provide security policy
  enforcement.

If pod networking is not managed by Cilium, ingress and egress policy rules
selecting the respective pods will not be applied. See the section
:ref:`network_policy` for more details.

For a quick assessment of whether any pods are not managed by Cilium, the
`Cilium CLI <https://github.com/cilium/cilium-cli>`_ will print the number of
managed pods. If this prints that all of the pods are managed by Cilium, then
there is no problem:

.. code-block:: shell-session

    $ cilium status
        Cilium:         OK
        Operator:       OK
        Hubble:         OK
        ClusterMesh:    disabled

    Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
    Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
    Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
    DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
    Containers:       cilium-operator    Running: 2
                      hubble-relay       Running: 1
                      hubble-ui          Running: 1
                      cilium             Running: 2
    Cluster Pods:     5/5 managed by Cilium

You can run the following script to list the pods which are *not* managed by
Cilium:

.. code-block:: shell-session

    $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-unmanaged.sh
    $ chmod +x k8s-unmanaged.sh
    $ ./k8s-unmanaged.sh
    kube-system/cilium-hqpk7
    kube-system/kube-addon-manager-minikube
    kube-system/kube-dns-54cccfbdf8-zmv2c
    kube-system/kubernetes-dashboard-77d8b98585-g52k5
    kube-system/storage-provisioner

Understand the rendering of your policy
---------------------------------------

There are always multiple ways to approach a problem. Cilium can provide the
rendering of the aggregate policy provided to it, leaving you to simply
compare with what you expect the policy to actually be rather than search (and
potentially overlook) every policy. At the expense of reading a very large
dump of an endpoint, this is often a faster path to discovering errant policy
requests in the Kubernetes API.

Start by finding the endpoint you are debugging from the following list. There
are several cross references for you to use in this list, including the IP
address and pod labels:

.. code-block:: shell-session

    kubectl -n kube-system exec -ti cilium-q8wvt -- cilium-dbg endpoint list

When you find the correct endpoint, the first column of every row is the
endpoint ID. Use that to dump the full endpoint information:

.. code-block:: shell-session

    kubectl -n kube-system exec -ti cilium-q8wvt -- cilium-dbg endpoint get 59084

.. image:: images/troubleshooting_policy.png
    :align: center

Importing this dump into a JSON-friendly editor can help browse and navigate
the information here. At the top level of the dump, there are two nodes of
note:

* ``spec``: The desired state of the endpoint
* ``status``: The current state of the endpoint

This is the standard Kubernetes control loop pattern. Cilium is the controller
here, and it is iteratively working to bring the ``status`` in line with the
``spec``.

Opening the ``status``, we can drill down through ``policy.realized.l4``. Do
your ``ingress`` and ``egress`` rules match what you expect? If not, the
reference to the errant rules can be found in the ``derived-from-rules`` node.

Policymap pressure and overflow
-------------------------------

The most important step in debugging policymap pressure is finding out which
node(s) are impacted.

The ``cilium_bpf_map_pressure{map_name="cilium_policy_v2_*"}`` metric monitors
the endpoint's BPF policymap pressure. This metric exposes the maximum BPF map
pressure on the node, meaning the policymap experiencing the most pressure on
a particular node.

Once the node is known, the troubleshooting steps are as follows:

1. Find the Cilium pod on the node experiencing the problematic policymap
   pressure and obtain a shell via ``kubectl exec``.
2. Use ``cilium policy selectors`` to get an overview of which selectors are
   selecting many identities. The output of this command as of Cilium v1.15
   additionally displays the namespace and name of the policy resource of each
   selector.
3. The type of selector tells you what sort of policy rule could be having an
   impact. The three existing types of selectors are explained below, each
   with specific steps depending on the selector. See the steps below
   corresponding to the type of selector.
4. Consider bumping the policymap size as a last resort. However, keep in mind
   the following implications:

   * Increased memory consumption for each policymap.
   * Generally, as identities increase in the cluster, the more work Cilium
     performs.
   * At a broader level, if the policy posture is such that all or nearly all
     identities are selected, this suggests that the posture is too
     permissive.
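When the ``cilium_bpf_map_pressure`` metric is scraped by Prometheus, step 1
(finding the impacted node) can be sketched as queries along the following
lines. The aggregation label (``pod`` here) depends on your scrape
configuration, so treat this as a starting point rather than a drop-in query:

```
# Per-agent maximum policymap pressure (a ratio of used to maximum entries):
max by (pod) (cilium_bpf_map_pressure{map_name=~"cilium_policy_v2_.*"})

# Only show agents whose policymaps are above 90% utilization:
max by (pod) (cilium_bpf_map_pressure{map_name=~"cilium_policy_v2_.*"}) > 0.9
```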
=============  =========================================================================================================================
Selector type  Form in ``cilium policy selectors`` output
=============  =========================================================================================================================
CIDR           ``&LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.1/32: ,},...}``
FQDN           ``MatchName: , MatchPattern: *``
Label          ``&LabelSelector{MatchLabels:map[string]string{any.name: curl,k8s.io.kubernetes.pod.namespace: default,},...}``
=============  =========================================================================================================================

An example output of ``cilium policy selectors``:

.. code-block:: shell-session

    root@kind-worker:/home/cilium# cilium policy selectors
    SELECTOR                                                                                                                    LABELS                          USERS   IDENTITIES
    &LabelSelector{MatchLabels:map[string]string{k8s.io.kubernetes.pod.namespace: kube-system,k8s.k8s-app: kube-dns,},MatchExpressions:[]LabelSelectorRequirement{},}   default/tofqdn-dns-visibility   1       16500
    &LabelSelector{MatchLabels:map[string]string{reserved.none: ,},MatchExpressions:[]LabelSelectorRequirement{},}               default/tofqdn-dns-visibility   1
    MatchName: , MatchPattern: *                                                                                                default/tofqdn-dns-visibility   1       16777231
                                                                                                                                                                        16777232
                                                                                                                                                                        16777233
                                                                                                                                                                        16860295
                                                                                                                                                                        16860322
                                                                                                                                                                        16860323
                                                                                                                                                                        16860324
                                                                                                                                                                        16860325
                                                                                                                                                                        16860326
                                                                                                                                                                        16860327
                                                                                                                                                                        16860328
    &LabelSelector{MatchLabels:map[string]string{any.name: netperf,k8s.io.kubernetes.pod.namespace: default,},MatchExpressions:[]LabelSelectorRequirement{},}           default/tofqdn-dns-visibility   1
    &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.1/32: ,},MatchExpressions:[]LabelSelectorRequirement{},}             default/tofqdn-dns-visibility   1       16860329
    &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.2/32: ,},MatchExpressions:[]LabelSelectorRequirement{},}             default/tofqdn-dns-visibility   1       16860330
    &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.3/32: ,},MatchExpressions:[]LabelSelectorRequirement{},}             default/tofqdn-dns-visibility   1       16860331

From the output above, we see that all three selectors are in use. The
significant action here is to determine which selector is selecting the most
identities, because the policy containing that selector is the likely cause
for the policymap pressure.

Label
~~~~~

See section on :ref:`identity-relevant labels <identity-relevant-labels>`.
Another aspect to consider is the permissiveness of the policies and whether
it could be reduced.

CIDR
~~~~

One way to reduce the number of identities selected by a CIDR selector is to
broaden the range of the CIDR, if possible. For example, in the above example
output, the policy contains a ``/32`` rule for each CIDR, rather than using a
wider range like ``/30`` instead. Updating the policy with this rule creates
an identity that represents all IPs within the ``/30`` and therefore only
requires the selector to select a single identity.
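To illustrate the CIDR-broadening idea, here is a hypothetical
``CiliumNetworkPolicy`` egress fragment (the policy name and workload label
are made up for this example) that replaces three ``/32`` entries with one
``/30``, so that only a single identity is created and selected:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-egress-to-range    # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      app: netperf               # hypothetical workload label
  egress:
  - toCIDR:
    # 1.1.1.0/30 covers 1.1.1.1, 1.1.1.2 and 1.1.1.3 with one identity,
    # instead of three separate /32 identities:
    - 1.1.1.0/30
```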
FQDN
~~~~

See section on :ref:`isolating the source of toFQDNs issues regarding
identities and policy <isolating-source-toFQDNs-issues-identities-policy>`.

etcd (kvstore)
==============

Introduction
------------

Cilium can be operated in CRD mode and kvstore/etcd mode. When Cilium is
running in kvstore/etcd mode, the kvstore becomes a vital component of the
overall cluster health as it is required to be available for several
operations.

Operations for which the kvstore is strictly required when running in etcd
mode:

Scheduling of new workloads:
  As part of scheduling workloads/endpoints, agents will perform security
  identity allocation which requires interaction with the kvstore. If a
  workload can be scheduled due to re-using a known security identity, then
  state propagation of the endpoint details to other nodes will still depend
  on the kvstore and thus packet drops due to policy enforcement may be
  observed as other nodes in the cluster will not be aware of the new
  workload.

Multi-cluster:
  All state propagation between clusters depends on the kvstore.

Node discovery:
  New nodes require to register themselves in the kvstore.

Agent bootstrap:
  The Cilium agent will eventually fail if it can't connect to the kvstore at
  bootstrap time, however, the agent will still perform all possible
  operations while waiting for the kvstore to appear.

Operations which do *not* require kvstore availability:

All datapath operations:
  All datapath forwarding, policy enforcement and visibility functions for
  existing workloads/endpoints do not depend on the kvstore. Packets will
  continue to be forwarded and network policy rules will continue to be
  enforced.

  However, if the agent requires to restart as part of the
  :ref:`etcd_recovery_behavior`, there can be delays in:

  * processing of flow events and metrics
  * short unavailability of layer 7 proxies

NetworkPolicy updates:
  Network policy updates will continue to be processed and applied.

Services updates:
  All updates to services will be processed and applied.

Understanding etcd status
-------------------------

The etcd status is reported when running ``cilium-dbg status``. The following
line represents the status of etcd::

   KVStore: Ok  etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=true: https://192.168.60.11:2379 - 3.4.9 (Leader)

OK:
  The overall status. Either ``OK`` or ``Failure``.

1/1 connected:
  Number of total etcd endpoints and how many of them are reachable.

lease-ID:
  UUID of the lease used for all keys owned by this agent.

lock lease-ID:
  UUID of the lease used for locks acquired by this agent.

has-quorum:
  Status of etcd quorum. Either ``true`` or set to an error.

consecutive-errors:
  Number of consecutive quorum errors. Only printed if errors are present.

https://192.168.60.11:2379 - 3.4.9 (Leader):
  List of all etcd endpoints stating the etcd version and whether the
  particular endpoint is currently the elected leader. If an etcd endpoint
  cannot be reached, the error is shown.

.. _etcd_recovery_behavior:

Recovery behavior
-----------------

In the event of an etcd endpoint becoming unhealthy, etcd should automatically
resolve this by electing a new leader and by failing over to a healthy etcd
endpoint. As long as quorum is preserved, the etcd cluster will remain
functional.

In addition, Cilium performs a background check in an interval to determine
etcd health and potentially take action. The interval depends on the overall
cluster size. The larger the cluster, the longer the `interval
<https://pkg.go.dev/github.com/cilium/cilium/pkg/kvstore?tab=doc#ExtraOptions.StatusCheckInterval>`_:

* If no etcd endpoints can be reached, Cilium will report failure in
  ``cilium-dbg status``. This will cause the liveness and readiness probe of
  Kubernetes to fail and Cilium will be restarted.

* A lock is acquired and released to test a write operation which requires
  quorum. If this operation fails, loss of quorum is reported. If quorum fails
  for three or more intervals in a row, Cilium is declared unhealthy.

* The Cilium operator will constantly write to a heartbeat key
  (``cilium/.heartbeat``). All Cilium agents will watch for updates to this
  heartbeat key. This validates the ability for an agent to receive key
  updates from etcd. If the heartbeat key is not updated in time, the quorum
  check is declared to have failed and Cilium is declared unhealthy after 3 or
  more consecutive failures.

Example of a status with a quorum failure which has not yet reached the
threshold::

   KVStore: Ok  etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=2m2.778966915s since last heartbeat update has been received, consecutive-errors=1: https://192.168.60.11:2379 - 3.4.9 (Leader)

Example of a status with the number of quorum failures exceeding the
threshold::

   KVStore: Failure  Err: quorum check failed 8 times in a row: 4m28.446600949s since last heartbeat update has been received

.. _troubleshooting_clustermesh:

.. include:: troubleshooting_clustermesh.rst

.. _troubleshooting_servicemesh:

.. include:: troubleshooting_servicemesh.rst

Symptom Library
===============

Node to node traffic is being dropped
-------------------------------------

Symptom
~~~~~~~

Endpoint to endpoint communication on a single node succeeds but communication
fails between endpoints across multiple nodes.

Troubleshooting steps
~~~~~~~~~~~~~~~~~~~~~

1. Run ``cilium-health status`` on the node of the source and destination
   endpoint. It should describe the connectivity from that node to other
   nodes in the cluster, and to a simulated endpoint on each other node.
   Identify points in the cluster that cannot talk to each other. If the
   command does not describe the status of the other node, there may be an
   issue with the KV-Store.
2. Run ``cilium-dbg monitor`` on the node of the source and destination
   endpoint. Look for packet drops.

When running in :ref:`arch_overlay` mode:

3. Run ``cilium-dbg bpf tunnel list`` and verify that each Cilium node is
   aware of the other nodes in the cluster. If not, check the logfile for
   errors.

4. If nodes are being populated correctly, run ``tcpdump -n -i cilium_vxlan``
   on each node to verify whether cross node traffic is being forwarded
   correctly between nodes.

   If packets are being dropped:

   * verify that the node IPs listed in ``cilium-dbg bpf tunnel list`` can
     reach each other.
   * verify that the firewall on each node allows UDP port 8472.

When running in :ref:`arch_direct_routing` mode:

3. Run ``ip route`` or check your cloud provider router and verify that you
   have routes installed to route the endpoint prefix between all nodes.

4. Verify that the firewall on each node permits to route the endpoint IPs.

Useful Scripts
==============

.. _retrieve_cilium_pod:

Retrieve Cilium pod managing a particular pod
---------------------------------------------

Identifies the Cilium pod that is managing a particular pod in a namespace:

.. code-block:: shell-session

    k8s-get-cilium-pod.sh <pod> <namespace>

Example:

.. code-block:: shell-session

    $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-get-cilium-pod.sh
    $ chmod +x k8s-get-cilium-pod.sh
    $ ./k8s-get-cilium-pod.sh luke-pod default
    cilium-zmjj9
    cilium-node-init-v7r9p
    cilium-operator-f576f7977-s5gpq

Execute a command in all Kubernetes Cilium pods
-----------------------------------------------

Run a command within all Cilium pods of a cluster:

.. code-block:: shell-session

    k8s-cilium-exec.sh <command>

Example:

.. code-block:: shell-session

    $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-cilium-exec.sh
    $ chmod +x k8s-cilium-exec.sh
    $ ./k8s-cilium-exec.sh uptime
     10:15:16 up 6 days,  7:37,  0 users,  load average: 0.00, 0.02, 0.00
     10:15:16 up 6 days,  7:32,  0 users,  load average: 0.00, 0.03, 0.04
     10:15:16 up 6 days,  7:30,  0 users,  load average: 0.75, 0.27, 0.15
     10:15:16 up 6 days,  7:28,  0 users,  load average: 0.14, 0.04, 0.01

List unmanaged Kubernetes pods
------------------------------

Lists all Kubernetes pods in the cluster for which Cilium does *not* provide
networking. This includes pods running in host networking mode and pods that
were started before Cilium was deployed.

.. code-block:: shell-session

    k8s-unmanaged.sh

Example:

.. code-block:: shell-session

    $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-unmanaged.sh
    $ chmod +x k8s-unmanaged.sh
    $ ./k8s-unmanaged.sh
    kube-system/cilium-hqpk7
    kube-system/kube-addon-manager-minikube
    kube-system/kube-dns-54cccfbdf8-zmv2c
    kube-system/kubernetes-dashboard-77d8b98585-g52k5
    kube-system/storage-provisioner

Reporting a problem
===================

Before you report a problem, make sure to retrieve the necessary information
from your cluster before the failure state is lost.

.. _sysdump:

Automatic log & state collection
--------------------------------

.. include:: ../installation/cli-download.rst

Then, execute ``cilium sysdump`` command to collect troubleshooting
information from your Kubernetes cluster:

.. code-block:: shell-session

    cilium sysdump

Note that by default ``cilium sysdump`` will attempt to collect as many logs
as possible and for all the nodes in the cluster. If your cluster size is
above 20 nodes, consider setting the following options to limit the size of
the sysdump. This is not required, but useful for those who have a constraint
on bandwidth or upload size:

* set the ``--node-list`` option to pick only a few nodes in case the cluster
  has many of them.
* set the ``--logs-since-time`` option to go back in time to when the issues
  started.
* set the ``--logs-limit-bytes`` option to limit the size of the log files
  (note: passed onto ``kubectl logs``; does not apply to entire collection
  archive).

Ideally, a sysdump that has a full history of select nodes, rather than a
brief history of all the nodes, would be preferred (by using ``--node-list``).
The second recommended way would be to use ``--logs-since-time`` if you are
able to narrow down when the issues started. Lastly, if the Cilium agent and
Operator logs are too large, consider ``--logs-limit-bytes``.

Use ``--help`` to see more options:

.. code-block:: shell-session

    cilium sysdump --help

Single Node Bugtool
-------------------

If you are not running Kubernetes, it is also possible to run the bug
collection tool manually with the scope of a single node.

The ``cilium-bugtool`` captures potentially useful information about your
environment for debugging. The tool is meant to be used for debugging a single
Cilium agent node. In the Kubernetes case, if you have multiple Cilium pods,
the tool can retrieve debugging information from all of them. The tool works
by archiving a collection of command output and files from several places. By
default, it writes to the ``/tmp`` directory.

Note that the command needs to be run from inside the Cilium pod/container.

.. code-block:: shell-session

    cilium-bugtool

When running it with no option as shown above, it will try to copy various
files and execute some commands. If ``kubectl`` is detected, it will search
for Cilium pods. The default label being ``k8s-app=cilium``, but this and the
namespace can be changed via ``k8s-namespace`` and ``k8s-label`` respectively.

If you want to capture the archive from a Kubernetes pod, then the process is
a bit different:

.. code-block:: shell-session

    # First we need to get the Cilium pod
    $ kubectl get pods --namespace kube-system
    NAME                          READY     STATUS    RESTARTS   AGE
    cilium-kg8lv                  1/1       Running   0          13m
    kube-addon-manager-minikube   1/1       Running   0          1h
    kube-dns-6fc954457d-sf2nk     3/3       Running   0          1h
    kubernetes-dashboard-6xvc7    1/1       Running   0          1h

    # Run the bugtool from this pod
    $ kubectl -n kube-system exec cilium-kg8lv -- cilium-bugtool
    [...]

    # Copy the archive from the pod
    $ kubectl cp kube-system/cilium-kg8lv:/tmp/cilium-bugtool-20180411-155146.166+0000-UTC-266836983.tar /tmp/cilium-bugtool-20180411-155146.166+0000-UTC-266836983.tar
    [...]

.. note::

    Please check the archive for sensitive information and strip it
    away before sharing it with us.

Below is an approximate list of the kind of information in the archive.

* Cilium status
* Cilium version
* Kernel configuration
* Resolve configuration
* Cilium endpoint state
* Cilium logs
* Docker logs
* ``dmesg``
* ``ethtool``
* ``ip a``
* ``ip link``
* ``ip r``
* ``iptables-save``
* ``kubectl -n kube-system get pods``
* ``kubectl get pods,svc`` for all namespaces
* ``uname``
* ``uptime``
* ``cilium-dbg bpf * list``
* ``cilium-dbg endpoint get`` for each endpoint
* ``cilium-dbg endpoint list``
* ``hostname``
* ``cilium-dbg policy get``
* ``cilium-dbg service list``

Debugging information
---------------------

If you are not running Kubernetes, you can use the ``cilium-dbg debuginfo``
command to retrieve useful debugging information. If you are running
Kubernetes, this command is automatically run as part of the system dump.

``cilium-dbg debuginfo`` can print useful output from the Cilium API. The
output format is in Markdown format so this can be used when reporting a bug
on the `issue tracker`_. Running without arguments will print to standard
output, but you can also redirect to a file like:

.. code-block:: shell-session

    cilium-dbg debuginfo -f debuginfo.md

.. note::

    Please check the debuginfo file for sensitive information and strip it
    away before sharing it with us.

Slack assistance
================

The `Cilium Slack`_ community is a helpful first point of assistance to get
help troubleshooting a problem or to discuss options on how to address a
problem. The community is open to anyone.

Report an issue via GitHub
==========================

If you believe to have found an issue in Cilium, please report a
`GitHub issue`_ and make sure to attach a system dump as described above to
ensure that developers have the best chance to reproduce the issue.

.. _NodeSelector: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
.. _RBAC: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
.. _CNI: https://github.com/containernetworking/cni
.. _Volumes: https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/

.. _Cilium Frequently Asked Questions (FAQ): https://github.com/cilium/cilium/issues?utf8=%E2%9C%93&q=label%3Akind%2Fquestion%20

.. _issue tracker: https://github.com/cilium/cilium/issues
.. _GitHub issue: `issue tracker`_
"}
{"questions":"cilium requirements below Most modern Linux distributions already do adminsystemreqs System Requirements docs cilium io Before installing Cilium please ensure that your system meets the minimum","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _admin_system_reqs:\n\n*******************\nSystem Requirements\n*******************\n\nBefore installing Cilium, please ensure that your system meets the minimum\nrequirements below. Most modern Linux distributions already do.\n\nSummary\n=======\n\nWhen running Cilium using the container image ``cilium\/cilium``, the host\nsystem must meet these requirements:\n\n- Hosts with either AMD64 or AArch64 architecture\n- `Linux kernel`_ >= 5.4 or equivalent (e.g., 4.18 on RHEL 8.6)\n\nWhen running Cilium as a native process on your host (i.e. **not** running the\n``cilium\/cilium`` container image) these additional requirements must be met:\n\n- `clang+LLVM`_ >= 10.0\n\n.. 
_`clang+LLVM`: https:\/\/llvm.org\n\nWhen running Cilium without Kubernetes these additional requirements\nmust be met:\n\n- :ref:`req_kvstore` etcd >= 3.1.0\n\n======================== ============================== ===================\nRequirement              Minimum Version                In cilium container\n======================== ============================== ===================\n`Linux kernel`_          >= 5.4 or >= 4.18 on RHEL 8.6  no\nKey-Value store (etcd)   >= 3.1.0                       no\nclang+LLVM               >= 10.0                        yes\n======================== ============================== ===================\n\nArchitecture Support\n====================\n\nCilium images are built for the following platforms:\n\n- AMD64\n- AArch64\n\nLinux Distribution Compatibility & Considerations\n=================================================\n\nThe following table lists Linux distributions that are known to work\nwell with Cilium. Some distributions require a few initial tweaks. Please make\nsure to read each distribution's specific notes below before attempting to\nrun Cilium.\n\n========================== ====================\nDistribution               Minimum Version\n========================== ====================\n`Amazon Linux 2`_          all\n`Bottlerocket OS`_         all\n`CentOS`_                  >= 8.6\n`Container-Optimized OS`_  >= 85\nDebian_                    >= 10 Buster\n`Fedora CoreOS`_           >= 31.20200108.3.0\nFlatcar_                   all\nLinuxKit_                  all\nOpensuse_                  Tumbleweed, >=Leap 15.4\n`RedHat Enterprise Linux`_ >= 8.6\n`RedHat CoreOS`_           >= 4.12\n`Talos Linux`_             >= 1.5.0\nUbuntu_                    >= 20.04\n========================== ====================\n\n.. _Amazon Linux 2: https:\/\/docs.aws.amazon.com\/AL2\/latest\/relnotes\/relnotes-al2.html\n.. _CentOS: https:\/\/centos.org\n.. 
_Container-Optimized OS: https:\/\/cloud.google.com\/container-optimized-os\/docs\n.. _Fedora CoreOS: https:\/\/fedoraproject.org\/coreos\/release-notes\n.. _Debian: https:\/\/www.debian.org\/releases\/\n.. _Flatcar: https:\/\/www.flatcar.org\/releases\n.. _LinuxKit: https:\/\/github.com\/linuxkit\/linuxkit\/tree\/master\/kernel\n.. _RedHat Enterprise Linux: https:\/\/www.redhat.com\/en\/technologies\/linux-platforms\/enterprise-linux\n.. _RedHat CoreOS: https:\/\/access.redhat.com\/articles\/6907891\n.. _Ubuntu: https:\/\/www.releases.ubuntu.com\/\n.. _Opensuse: https:\/\/en.opensuse.org\/openSUSE:Roadmap\n.. _Bottlerocket OS: https:\/\/github.com\/bottlerocket-os\/bottlerocket\n.. _Talos Linux: https:\/\/www.talos.dev\/\n\n.. note:: The above list is based on feedback by users. If you find an unlisted\n          Linux distribution that works well, please let us know by opening a\n          GitHub issue or by creating a pull request that updates this guide.\n\n\nFlatcar on AWS EKS in ENI mode\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFlatcar is known to manipulate network interfaces created and managed by\nCilium. When running the official Flatcar image for AWS EKS nodes in ENI\nmode, this may cause connectivity issues and potentially prevent the Cilium\nagent from booting. To avoid this, disable DHCP on the ENI interfaces and mark\nthem as unmanaged by adding\n\n.. code-block:: text\n\n        [Match]\n        Name=eth[1-9]*\n\n        [Network]\n        DHCP=no\n\n        [Link]\n        Unmanaged=yes\n\nto ``\/etc\/systemd\/network\/01-no-dhcp.network`` and then\n\n.. code-block:: shell-session\n\n        systemctl daemon-reload\n        systemctl restart systemd-networkd\n\nUbuntu 22.04 on Raspberry Pi\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nBefore running Cilium on Ubuntu 22.04 on a Raspberry Pi, please make sure to install the following package:\n\n.. code-block:: shell-session\n\n        sudo apt install linux-modules-extra-raspi\n\n.. 
_admin_kernel_version:\n\nLinux Kernel\n============\n\nBase Requirements\n~~~~~~~~~~~~~~~~~\n\nCilium leverages and builds on the kernel eBPF functionality as well as various\nsubsystems which integrate with eBPF. Therefore, host systems are required to\nrun a recent Linux kernel to run a Cilium agent. More recent kernels may\nprovide additional eBPF functionality that Cilium will automatically detect and\nuse on agent start. For this version of Cilium, it is recommended to use kernel\n5.4 or later (or equivalent such as 4.18 on RHEL8). For a list of features\nthat require newer kernels, see :ref:`advanced_features`.\n\nIn order for the eBPF feature to be enabled properly, the following kernel\nconfiguration options must be enabled. This is typically the case with\ndistribution kernels. When an option can be built as a module or statically\nlinked, either choice is valid.\n\n::\n\n        CONFIG_BPF=y\n        CONFIG_BPF_SYSCALL=y\n        CONFIG_NET_CLS_BPF=y\n        CONFIG_BPF_JIT=y\n        CONFIG_NET_CLS_ACT=y\n        CONFIG_NET_SCH_INGRESS=y\n        CONFIG_CRYPTO_SHA1=y\n        CONFIG_CRYPTO_USER_API_HASH=y\n        CONFIG_CGROUPS=y\n        CONFIG_CGROUP_BPF=y\n        CONFIG_PERF_EVENTS=y\n        CONFIG_SCHEDSTATS=y\n\n\nRequirements for Iptables-based Masquerading\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIf you are not using BPF for masquerading (``enable-bpf-masquerade=false``, the\ndefault value), then you will need the following kernel configuration options.\n\n::\n\n        CONFIG_NETFILTER_XT_SET=m\n        CONFIG_IP_SET=m\n        CONFIG_IP_SET_HASH_IP=m\n\nRequirements for L7 and FQDN Policies\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nL7 proxy redirection currently uses ``TPROXY`` iptables actions as well\nas ``socket`` matches. 
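Whether the ``CONFIG_*`` options listed in this document are enabled on a given node can be checked by grepping the kernel configuration. A minimal sketch, assuming the config is available under ``/boot`` or as ``/proc/config.gz`` (locations vary by distribution, and ``/proc/config.gz`` only exists with ``CONFIG_IKCONFIG_PROC``):

```shell
#!/bin/sh
# Sketch: report whether required kernel options are built in (=y) or as modules (=m).
# CFG may be overridden; the default is the running kernel's config file.
CFG="${CFG:-/boot/config-$(uname -r)}"
[ -r "$CFG" ] || CFG=/proc/config.gz

check_opt() {
    # zgrep transparently handles both plain and gzip-compressed config files
    if zgrep -qE "^${1}=(y|m)" "$CFG"; then
        echo "$1: ok"
    else
        echo "$1: MISSING"
    fi
}

for opt in CONFIG_BPF CONFIG_BPF_SYSCALL CONFIG_NET_CLS_BPF CONFIG_BPF_JIT \
           CONFIG_CGROUPS CONFIG_CGROUP_BPF CONFIG_PERF_EVENTS; do
    check_opt "$opt"
done
```

The same loop can be pointed at the module lists in the following sections by swapping in the relevant option names.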
For L7 redirection to work as intended, the kernel\nconfiguration must include the following modules:\n\n::\n\n        CONFIG_NETFILTER_XT_TARGET_TPROXY=m\n        CONFIG_NETFILTER_XT_TARGET_MARK=m\n        CONFIG_NETFILTER_XT_TARGET_CT=m\n        CONFIG_NETFILTER_XT_MATCH_MARK=m\n        CONFIG_NETFILTER_XT_MATCH_SOCKET=m\n\nWhen the ``xt_socket`` kernel module is missing, the forwarding of\nredirected L7 traffic does not work in non-tunneled datapath\nmodes. Since some notable kernels (e.g., COS) ship without the\n``xt_socket`` module, Cilium implements a fallback compatibility mode\nto allow L7 policies and visibility to be used with those\nkernels. Currently this fallback disables the ``ip_early_demux`` kernel\nfeature in non-tunneled datapath modes, which may decrease system\nnetworking performance. This guarantees that HTTP and Kafka redirection\nwork as intended. However, if HTTP or Kafka enforcement policies are\nnever used, this behavior can be turned off by adding the following to\nthe helm configuration command line:\n\n.. parsed-literal::\n\n   helm install cilium |CHART_RELEASE| \\\\\n     ...\n     --set enableXTSocketFallback=false\n\n.. _features_kernel_matrix:\n\nRequirements for IPsec\n~~~~~~~~~~~~~~~~~~~~~~\n\nThe :ref:`encryption_ipsec` feature requires a number of kernel configuration\noptions, most of which are needed to enable the actual encryption. Note that the\nspecific options required depend on the algorithm. 
The list below\ncorresponds to requirements for GCM-128-AES.\n\n::\n\n        CONFIG_XFRM=y\n        CONFIG_XFRM_OFFLOAD=y\n        CONFIG_XFRM_STATISTICS=y\n        CONFIG_XFRM_ALGO=m\n        CONFIG_XFRM_USER=m\n        CONFIG_INET{,6}_ESP=m\n        CONFIG_INET{,6}_IPCOMP=m\n        CONFIG_INET{,6}_XFRM_TUNNEL=m\n        CONFIG_INET{,6}_TUNNEL=m\n        CONFIG_INET_XFRM_MODE_TUNNEL=m\n        CONFIG_CRYPTO_AEAD=m\n        CONFIG_CRYPTO_AEAD2=m\n        CONFIG_CRYPTO_GCM=m\n        CONFIG_CRYPTO_SEQIV=m\n        CONFIG_CRYPTO_CBC=m\n        CONFIG_CRYPTO_HMAC=m\n        CONFIG_CRYPTO_SHA256=m\n        CONFIG_CRYPTO_AES=m\n\nRequirements for the Bandwidth Manager\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe :ref:`bandwidth-manager` requires the following kernel configuration option\nto change the packet scheduling algorithm.\n\n::\n\n        CONFIG_NET_SCH_FQ=m\n\nRequirements for Netkit Device Mode\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe :ref:`netkit` requires the following kernel configuration option\nto create netkit devices.\n\n::\n\n        CONFIG_NETKIT=y\n\n.. _advanced_features:\n\nRequired Kernel Versions for Advanced Features\n==============================================\n\nAdditional kernel features continue to progress in the Linux community. 
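A quick way to check the running kernel against a given feature's minimum is a ``sort -V`` comparison. A small sketch (``5.10``, the minimum for BPF-based host routing, is used here only as an example threshold):

```shell
#!/bin/sh
# Sketch: compare the running kernel version against a feature's minimum
# using sort -V (version sort). Succeeds when current >= minimum.
kernel_at_least() {
    min="$1"
    cur="${2:-$(uname -r | cut -d- -f1)}"   # strip the distro suffix, e.g. "-generic"
    # sort -V prints the smaller version first; if that is $min, cur >= min
    [ "$(printf '%s\n%s\n' "$min" "$cur" | sort -V | head -n1)" = "$min" ]
}

if kernel_at_least 5.10; then
    echo "BPF-based host routing: kernel ok"
else
    echo "BPF-based host routing: kernel too old"
fi
```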
Some\nof Cilium's features are dependent on newer kernel versions and are thus\nenabled by upgrading to more recent kernel versions as detailed below.\n\n====================================================== ===============================\nCilium Feature                                         Minimum Kernel Version\n====================================================== ===============================\n:ref:`encryption_wg`                                   >= 5.6\nFull support for :ref:`session-affinity`               >= 5.7\nBPF-based proxy redirection                            >= 5.7\nSocket-level LB bypass in pod netns                    >= 5.7\nL3 devices                                             >= 5.8\nBPF-based host routing                                 >= 5.10\n:ref:`enable_multicast` (AMD64)                        >= 5.10\nIPv6 BIG TCP support                                   >= 5.19\n:ref:`enable_multicast` (AArch64)                      >= 6.0\nIPv4 BIG TCP support                                   >= 6.3\n====================================================== ===============================\n\n.. _req_kvstore:\n\nKey-Value store\n===============\n\nCilium optionally uses a distributed Key-Value store to manage,\nsynchronize and distribute security identities across all cluster\nnodes. The following Key-Value stores are currently supported:\n\n- etcd >= 3.1.0\n\nCilium can be used without a Key-Value store when CRD-based state\nmanagement is used with Kubernetes. This is the default for new Cilium\ninstallations. Larger clusters will perform better with a Key-Value\nstore backed identity management instead, see :ref:`k8s_quick_install`\nfor more details.\n\nSee :ref:`install_kvstore` for details on how to configure the\n``cilium-agent`` to use a Key-Value store.\n\nclang+LLVM\n==========\n\n\n.. 
note:: This requirement is only needed if you run ``cilium-agent`` natively.\n          If you are using the Cilium container image ``cilium\/cilium``,\n          clang+LLVM is included in the container image.\n\nLLVM is the compiler suite that Cilium uses to generate eBPF bytecode programs\nto be loaded into the Linux kernel. The minimum supported version of LLVM\navailable to ``cilium-agent`` should be >=10.0. The version of clang installed\nmust be compiled with the eBPF backend enabled.\n\nSee https:\/\/releases.llvm.org\/ for information on how to download and install\nLLVM.\n\n.. _firewall_requirements:\n\nFirewall Rules\n==============\n\nIf you are running Cilium in an environment that requires firewall rules to\nenable connectivity, you will have to add the following rules to ensure Cilium\nworks properly.\n\nIt is recommended, but optional, that all nodes running Cilium in a given\ncluster be able to ping each other so ``cilium-health`` can report and monitor\nconnectivity among nodes. This requires ICMP Type 0\/8, Code 0 open among all\nnodes. TCP 4240 should also be open among all nodes for ``cilium-health``\nmonitoring. Note that it is also an option to only use one of these two methods\nto enable health monitoring. If the firewall does not permit either of these\nmethods, Cilium will still operate fine but will not be able to provide health\ninformation.\n\nFor IPsec-enabled Cilium deployments, you need to ensure that the firewall\nallows ESP traffic through. For example, AWS Security Groups don't allow ESP\ntraffic by default.\n\nIf you are using WireGuard, you must allow UDP port 51871.\n\nIf you are using VXLAN overlay network mode, Cilium uses Linux's default VXLAN\nport 8472 over UDP, unless Linux has been configured otherwise. In this case,\nUDP 8472 must be open among all nodes to enable VXLAN overlay mode. 
The same\napplies to Geneve overlay network mode, except the port is UDP 6081.\n\nIf you are running in direct routing mode, your network must allow routing of\npod IPs.\n\nAs an example, if you are running on AWS with VXLAN overlay networking, here is\na minimum set of AWS Security Group (SG) rules. It assumes a separation between\nthe SG on the master nodes, ``master-sg``, and the worker nodes, ``worker-sg``.\nIt also assumes ``etcd`` is running on the master nodes.\n\nMaster Nodes (``master-sg``) Rules:\n\n======================== =============== ==================== ===============\nPort Range \/ Protocol    Ingress\/Egress  Source\/Destination   Description\n======================== =============== ==================== ===============\n2379-2380\/tcp            ingress         ``worker-sg``        etcd access\n8472\/udp                 ingress         ``master-sg`` (self) VXLAN overlay\n8472\/udp                 ingress         ``worker-sg``        VXLAN overlay\n4240\/tcp                 ingress         ``master-sg`` (self) health checks\n4240\/tcp                 ingress         ``worker-sg``        health checks\nICMP 8\/0                 ingress         ``master-sg`` (self) health checks\nICMP 8\/0                 ingress         ``worker-sg``        health checks\n8472\/udp                 egress          ``master-sg`` (self) VXLAN overlay\n8472\/udp                 egress          ``worker-sg``        VXLAN overlay\n4240\/tcp                 egress          ``master-sg`` (self) health checks\n4240\/tcp                 egress          ``worker-sg``        health checks\nICMP 8\/0                 egress          ``master-sg`` (self) health checks\nICMP 8\/0                 egress          ``worker-sg``        health checks\n======================== =============== ==================== ===============\n\nWorker Nodes (``worker-sg``):\n\n======================== =============== ==================== ===============\nPort Range \/ Protocol    Ingress\/Egress  
Source\/Destination   Description\n======================== =============== ==================== ===============\n8472\/udp                 ingress         ``master-sg``        VXLAN overlay\n8472\/udp                 ingress         ``worker-sg`` (self) VXLAN overlay\n4240\/tcp                 ingress         ``master-sg``        health checks\n4240\/tcp                 ingress         ``worker-sg`` (self) health checks\nICMP 8\/0                 ingress         ``master-sg``        health checks\nICMP 8\/0                 ingress         ``worker-sg`` (self) health checks\n8472\/udp                 egress          ``master-sg``        VXLAN overlay\n8472\/udp                 egress          ``worker-sg`` (self) VXLAN overlay\n4240\/tcp                 egress          ``master-sg``        health checks\n4240\/tcp                 egress          ``worker-sg`` (self) health checks\nICMP 8\/0                 egress          ``master-sg``        health checks\nICMP 8\/0                 egress          ``worker-sg`` (self) health checks\n2379-2380\/tcp            egress          ``master-sg``        etcd access\n======================== =============== ==================== ===============\n\n.. note:: If you use a shared SG for the masters and workers, you can condense\n          these rules into ingress\/egress to self. 
If you are using Direct\n          Routing mode, you can condense all rules into ingress\/egress ANY\n          port\/protocol to\/from self.\n\nThe following ports should also be available on each node:\n\n======================== ==================================================================\nPort Range \/ Protocol    Description\n======================== ==================================================================\n4240\/tcp                 cluster health checks (``cilium-health``)\n4244\/tcp                 Hubble server\n4245\/tcp                 Hubble Relay\n4250\/tcp                 Mutual Authentication port\n4251\/tcp                 Spire Agent health check port (listening on 127.0.0.1 or ::1)\n6060\/tcp                 cilium-agent pprof server (listening on 127.0.0.1)\n6061\/tcp                 cilium-operator pprof server (listening on 127.0.0.1)\n6062\/tcp                 Hubble Relay pprof server (listening on 127.0.0.1)\n9878\/tcp                 cilium-envoy health listener (listening on 127.0.0.1)\n9879\/tcp                 cilium-agent health status API (listening on 127.0.0.1 and\/or ::1)\n9890\/tcp                 cilium-agent gops server (listening on 127.0.0.1)\n9891\/tcp                 operator gops server (listening on 127.0.0.1)\n9893\/tcp                 Hubble Relay gops server (listening on 127.0.0.1)\n9901\/tcp                 cilium-envoy Admin API (listening on 127.0.0.1)\n9962\/tcp                 cilium-agent Prometheus metrics\n9963\/tcp                 cilium-operator Prometheus metrics\n9964\/tcp                 cilium-envoy Prometheus metrics\n51871\/udp                WireGuard encryption tunnel endpoint\n======================== ==================================================================\n\n.. _admin_mount_bpffs:\n\nMounted eBPF filesystem\n=======================\n\n.. Note::\n\n        Some distributions mount the bpf filesystem automatically. 
Check if the\n        bpf filesystem is mounted by running the following command:\n\n        .. code-block:: shell-session\n\n            # mount | grep \/sys\/fs\/bpf\n\n        If it is mounted, this prints a line such as\n        ``none on \/sys\/fs\/bpf type bpf``.\n\nIf the eBPF filesystem is not mounted in the host filesystem, Cilium will\nautomatically mount the filesystem.\n\nMounting this BPF filesystem allows the ``cilium-agent`` to persist eBPF\nresources across restarts of the agent so that the datapath can continue to\noperate while the agent is subsequently restarted or upgraded.\n\nOptionally, it is also possible to mount the eBPF filesystem before Cilium is\ndeployed in the cluster. The following command must be run in the host mount\nnamespace, and only once during the boot process of the machine.\n\n   .. code-block:: shell-session\n\n\t# mount bpffs \/sys\/fs\/bpf -t bpf\n\nA portable way to achieve this with persistence is to add the following line to\n``\/etc\/fstab`` and then run ``mount \/sys\/fs\/bpf``. This will cause the\nfilesystem to be automatically mounted when the node boots.\n\n::\n\n     bpffs\t\t\t\/sys\/fs\/bpf\t\tbpf\tdefaults 0 0\n\nIf you are using systemd to manage the kubelet, see the section\n:ref:`bpffs_systemd`.\n\nRouting Tables\n==============\n\nWhen running in :ref:`ipam_eni` IPAM mode, Cilium will install per-ENI routing\ntables for each ENI that is used by Cilium for pod IP allocation.\nThese routing tables are added to the host network namespace and must not be\notherwise used by the system.\nThe index of those per-ENI routing tables is computed as\n``10 + <eni-interface-index>``. The base offset of 10 is chosen as it is highly\nunlikely to collide with the main routing table, which lies in the 253-255\nrange.\n\nPrivileges\n==========\n\nThe following privileges are required to run Cilium. 
When running the standard\nKubernetes :term:`DaemonSet`, the privileges are automatically granted to Cilium.\n\n* Cilium interacts with the Linux kernel to install eBPF programs which will then\n  perform networking tasks and implement security rules. In order to install\n  eBPF programs system-wide, ``CAP_SYS_ADMIN`` privileges are required. These\n  privileges must be granted to ``cilium-agent``.\n\n  The quickest way to meet the requirement is to run ``cilium-agent`` as root\n  and\/or as a privileged container.\n\n* Cilium requires access to the host networking namespace. For this purpose,\n  the Cilium pod is scheduled to run in the host networking namespace directly.","site":"cilium","answers_cleaned":"
                              2379 2380 tcp            ingress           worker sg          etcd access 8472 udp                 ingress           master sg    self  VXLAN overlay 8472 udp                 ingress           worker sg          VXLAN overlay 4240 tcp                 ingress           master sg    self  health checks 4240 tcp                 ingress           worker sg          health checks ICMP 8 0                 ingress           master sg    self  health checks ICMP 8 0                 ingress           worker sg          health checks 8472 udp                 egress            master sg    self  VXLAN overlay 8472 udp                 egress            worker sg          VXLAN overlay 4240 tcp                 egress            master sg    self  health checks 4240 tcp                 egress            worker sg          health checks ICMP 8 0                 egress            master sg    self  health checks ICMP 8 0                 egress            worker sg          health checks                                                                                Worker Nodes    worker sg                                                                                    Port Range   Protocol    Ingress Egress  Source Destination   Description                                                                               8472 udp                 ingress           master sg          VXLAN overlay 8472 udp                 ingress           worker sg    self  VXLAN overlay 4240 tcp                 ingress           master sg          health checks 4240 tcp                 ingress           worker sg    self  health checks ICMP 8 0                 ingress           master sg          health checks ICMP 8 0                 ingress           worker sg    self  health checks 8472 udp                 egress            master sg          VXLAN overlay 8472 udp                 egress            worker sg    self  VXLAN overlay 4240 tcp                 egress     
       master sg          health checks 4240 tcp                 egress            worker sg    self  health checks ICMP 8 0                 egress            master sg          health checks ICMP 8 0                 egress            worker sg    self  health checks 2379 2380 tcp            egress            master sg          etcd access                                                                                   note   If you use a shared SG for the masters and workers  you can condense           these rules into ingress egress to self  If you are using Direct           Routing mode  you can condense all rules into ingress egress ANY           port protocol to from self   The following ports should also be available on each node                                                                                               Port Range   Protocol    Description                                                                                             4240 tcp                 cluster health checks    cilium health    4244 tcp                 Hubble server 4245 tcp                 Hubble Relay 4250 tcp                 Mutual Authentication port 4251 tcp                 Spire Agent health check port  listening on 127 0 0 1 or   1  6060 tcp                 cilium agent pprof server  listening on 127 0 0 1  6061 tcp                 cilium operator pprof server  listening on 127 0 0 1  6062 tcp                 Hubble Relay pprof server  listening on 127 0 0 1  9878 tcp                 cilium envoy health listener  listening on 127 0 0 1  9879 tcp                 cilium agent health status API  listening on 127 0 0 1 and or   1  9890 tcp                 cilium agent gops server  listening on 127 0 0 1  9891 tcp                 operator gops server  listening on 127 0 0 1  9893 tcp                 Hubble Relay gops server  listening on 127 0 0 1  9901 tcp                 cilium envoy Admin API  listening on 127 0 0 1  9962 tcp                 cilium agent Prometheus 
metrics 9963 tcp                 cilium operator Prometheus metrics 9964 tcp                 cilium envoy Prometheus metrics 51871 udp                WireGuard encryption tunnel endpoint                                                                                                  admin mount bpffs   Mounted eBPF filesystem                             Note            Some distributions mount the bpf filesystem automatically  Check if the         bpf filesystem is mounted by running the command              code block   shell session                mount   grep  sys fs bpf                 if present should output  e g   none on  sys fs bpf type bpf      If the eBPF filesystem is not mounted in the host filesystem  Cilium will automatically mount the filesystem   Mounting this BPF filesystem allows the   cilium agent   to persist eBPF resources across restarts of the agent so that the datapath can continue to operate while the agent is subsequently restarted or upgraded   Optionally it is also possible to mount the eBPF filesystem before Cilium is deployed in the cluster  the following command must be run in the host mount namespace  The command must only be run once during the boot process of the machine         code block   shell session     mount bpffs  sys fs bpf  t bpf  A portable way to achieve this with persistence is to add the following line to    etc fstab   and then run   mount  sys fs bpf    This will cause the filesystem to be automatically mounted when the node boots            bpffs    sys fs bpf  bpf defaults 0 0  If you are using systemd to manage the kubelet  see the section  ref  bpffs systemd    Routing Tables                 When running in  ref  ipam eni  IPAM mode  Cilium will install per ENI routing tables for each ENI that is used by Cilium for pod IP allocation  These routing tables are added to the host network namespace and must not be otherwise used by the system  The index of those per ENI routing tables is computed as   10    eni 
interface index     The base offset of 10 is chosen as it is highly unlikely to collide with the main routing table which is between 253 255   Privileges             The following privileges are required to run Cilium  When running the standard Kubernetes  term  DaemonSet   the privileges are automatically granted to Cilium     Cilium interacts with the Linux kernel to install eBPF program which will then   perform networking tasks and implement security rules  In order to install   eBPF programs system wide    CAP SYS ADMIN   privileges are required  These   privileges must be granted to   cilium agent       The quickest way to meet the requirement is to run   cilium agent   as root   and or as privileged container     Cilium requires access to the host networking namespace  For this purpose    the Cilium pod is scheduled to run in the host networking namespace directly "}
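The bpffs check-then-mount steps described above can be condensed into a small idempotent script. This is a sketch, not part of the Cilium tree: the ``bpffs_mounted`` helper and its mounts-table parameter are illustrative, while the ``mount bpffs /sys/fs/bpf -t bpf`` command is the one shown in the docs.

```shell
# Sketch (illustrative, assuming the commands from the "Mounted eBPF
# filesystem" section): mount bpffs only if it is not already mounted.

bpffs_mounted() {
    # $1: path to a mounts table (normally /proc/mounts).
    # A bpffs entry looks like: "none /sys/fs/bpf bpf rw,relatime 0 0"
    grep -q ' /sys/fs/bpf bpf ' "$1"
}

ensure_bpffs() {
    if bpffs_mounted /proc/mounts; then
        echo "bpffs already mounted"
    else
        # Must be run as root, in the host mount namespace, once per boot.
        mount bpffs /sys/fs/bpf -t bpf
    fi
}
```

For persistence across reboots, the ``/etc/fstab`` entry shown above achieves the same result at boot time.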
{"questions":"cilium docs cilium io This document serves as an introduction for using Cilium to enforce DNS based Locking Down External Access with DNS Based Policies security policies for Kubernetes pods gsdns","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _gs_dns:\n\n****************************************************\nLocking Down External Access with DNS-Based Policies\n****************************************************\n\nThis document serves as an introduction for using Cilium to enforce DNS-based\nsecurity policies for Kubernetes pods.\n\n.. include:: gsg_requirements.rst\n\nDeploy the Demo Application\n===========================\n\nDNS-based policies are very useful for controlling access to services running outside the Kubernetes cluster. DNS acts as a persistent service identifier for both external services provided by AWS, Google, Twilio, Stripe, etc., and internal services such as database clusters running in private subnets outside Kubernetes. CIDR or IP-based policies are cumbersome and hard to maintain as the IPs associated with external services can change frequently. The Cilium DNS-based policies provide an easy mechanism to specify access control while Cilium manages the harder aspects of tracking DNS to IP mapping.\n\nIn this guide we will learn about:\n\n- Controlling egress access to services outside the cluster using DNS-based policies\n- Using patterns (or wildcards) to whitelist a subset of DNS domains\n- Combining DNS, port and L7 rules for restricting access to external services\n\nIn line with our Star Wars theme examples, we will use a simple scenario where\nthe Empire's ``mediabot`` pods need access to GitHub for managing the Empire's\ngit repositories. The pods shouldn't have access to any other external service.\n\n.. 
parsed-literal::\n\n   $ kubectl create -f \\ |SCM_WEB|\\\/examples\/kubernetes-dns\/dns-sw-app.yaml\n   $ kubectl wait pod\/mediabot --for=condition=Ready\n   $ kubectl get pods\n   NAME                             READY   STATUS    RESTARTS   AGE\n   pod\/mediabot                     1\/1     Running   0          14s\n\n\nApply DNS Egress Policy\n=======================\n\nThe following Cilium network policy allows ``mediabot`` pods to only access ``api.github.com``.\n\n.. tabs::\n\n   .. group-tab:: Generic\n\n      .. literalinclude:: ..\/..\/examples\/kubernetes-dns\/dns-matchname.yaml\n\n   .. group-tab:: OpenShift\n\n      .. literalinclude:: ..\/..\/examples\/kubernetes-dns\/dns-matchname-openshift.yaml\n\n.. note::\n\n   OpenShift users will need to modify the policies to match the namespace\n   ``openshift-dns`` (instead of ``kube-system``), remove the match on the\n   ``k8s:k8s-app=kube-dns`` label, and change the port to 5353.\n\nLet's take a closer look at the policy:\n\n* The first egress section uses ``toFQDNs: matchName`` specification to allow\n  egress to ``api.github.com``. The destination DNS should match exactly the\n  name specified in the rule. The ``endpointSelector`` allows only pods with\n  labels ``class: mediabot, org:empire`` to have the egress access.\n* The second egress section (``toEndpoints``) allows ``mediabot`` pods to access\n  ``kube-dns`` service. Note that ``rules: dns`` instructs Cilium to inspect and\n  allow DNS lookups matching specified patterns. In this case, inspect and allow\n  all DNS queries.\n\nNote that with this policy the ``mediabot`` doesn't have access to any internal\ncluster service other than ``kube-dns``. Refer to :ref:`Network Policy` to learn\nmore about policies for controlling access to internal cluster services.\n\nLet's apply the policy:\n\n.. 
parsed-literal::\n\n  kubectl apply -f \\ |SCM_WEB|\\\/examples\/kubernetes-dns\/dns-matchname.yaml\n\nTesting the policy, we see that ``mediabot`` has access to ``api.github.com``\nbut doesn't have access to any other external service, e.g.,\n``support.github.com``.\n\n.. code-block:: shell-session\n\n   $ kubectl exec mediabot -- curl -I -s https:\/\/api.github.com | head -1\n   HTTP\/2 200\n\n   $ kubectl exec mediabot -- curl -I -s --max-time 5 https:\/\/support.github.com | head -1\n   curl: (28) Connection timed out after 5000 milliseconds\n   command terminated with exit code 28\n\nDNS Policies Using Patterns\n===========================\n\nThe above policy controlled DNS access based on exact match of the DNS domain\nname. Often, it is required to allow access to a subset of domains. Let's say,\nin the above example, ``mediabot`` pods need access to any GitHub sub-domain,\ne.g., the pattern ``*.github.com``. We can achieve this easily by changing the\n``toFQDN`` rule to use ``matchPattern`` instead of ``matchName``.\n\n.. literalinclude:: ..\/..\/examples\/kubernetes-dns\/dns-pattern.yaml\n\n.. parsed-literal::\n\n   kubectl apply -f \\ |SCM_WEB|\\\/examples\/kubernetes-dns\/dns-pattern.yaml\n\nTest that ``mediabot`` has access to multiple GitHub services for which the DNS\nmatches the pattern ``*.github.com``. It is important to note and test that this\ndoesn't allow access to ``github.com`` because the ``*.`` in the pattern\nrequires one subdomain to be present in the DNS name. You can simply add more\n``matchName`` and ``matchPattern`` clauses to extend the access. (See :ref:`DNS based`\npolicies to learn more about specifying DNS rules using patterns and names.)\n\n.. 
code-block:: shell-session\n\n   $ kubectl exec mediabot -- curl -I -s https:\/\/support.github.com | head -1\n   HTTP\/1.1 200 OK\n\n   $ kubectl exec mediabot -- curl -I -s https:\/\/gist.github.com | head -1\n   HTTP\/1.1 302 Found\n\n   $ kubectl exec mediabot -- curl -I -s --max-time 5 https:\/\/github.com | head -1\n   curl: (28) Connection timed out after 5000 milliseconds\n   command terminated with exit code 28\n\nCombining DNS, Port and L7 Rules\n================================\n\nThe DNS-based policies can be combined with port (L4) and API (L7) rules to\nfurther restrict the access. In our example, we will restrict ``mediabot`` pods\nto access GitHub services only on port ``443``. The ``toPorts`` section in the\npolicy below achieves the port-based restrictions along with the DNS-based\npolicies.\n\n.. literalinclude:: ..\/..\/examples\/kubernetes-dns\/dns-port.yaml\n\n.. parsed-literal::\n\n  kubectl apply -f \\ |SCM_WEB|\\\/examples\/kubernetes-dns\/dns-port.yaml\n\nWhen testing, access to ``https:\/\/support.github.com`` on port ``443`` will\nsucceed while access to ``http:\/\/support.github.com`` on port ``80`` will be\ndenied.\n\n.. code-block:: shell-session\n\n   $ kubectl exec mediabot -- curl -I -s https:\/\/support.github.com | head -1\n   HTTP\/1.1 200 OK\n\n   $ kubectl exec mediabot -- curl -I -s --max-time 5 http:\/\/support.github.com | head -1\n   curl: (28) Connection timed out after 5001 milliseconds\n   command terminated with exit code 28\n\nRefer to :ref:`l4_policy` and :ref:`l7_policy` to learn more about Cilium L4 and\nL7 network policies.\n\nClean-up\n========\n\n.. 
parsed-literal::\n\n   kubectl delete -f \\ |SCM_WEB|\\\/examples\/kubernetes-dns\/dns-sw-app.yaml\n   kubectl delete cnp fqdn","site":"cilium"}
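As a concrete illustration of the ``matchPattern`` variant discussed in the guide above, the snippet below writes out a policy modeled on the repository's ``dns-pattern.yaml`` (the exact upstream file may differ; the output path and policy name here are illustrative) and shows how it would be applied.

```shell
# Illustrative sketch: a CiliumNetworkPolicy allowing egress to any
# *.github.com subdomain plus DNS lookups via kube-dns, modeled on the
# guide's dns-pattern.yaml. The file path /tmp/dns-pattern-example.yaml
# is an assumption for this example.
cat > /tmp/dns-pattern-example.yaml <<'EOF'
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: fqdn
spec:
  endpointSelector:
    matchLabels:
      class: mediabot
      org: empire
  egress:
  # Allow any GitHub subdomain; "*." requires at least one subdomain,
  # so bare github.com stays blocked.
  - toFQDNs:
    - matchPattern: "*.github.com"
  # Allow DNS lookups through kube-dns, with L7 DNS visibility.
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
EOF
# Apply with: kubectl apply -f /tmp/dns-pattern-example.yaml
```

After applying, ``mediabot`` can reach ``*.github.com`` subdomains such as ``gist.github.com`` but not ``github.com`` itself, matching the test output shown in the guide.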
{"questions":"cilium Host Firewall docs cilium io hostfirewall security policies for Kubernetes nodes This document serves as an introduction to Cilium s host firewall to enforce","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _host_firewall:\n\n*************\nHost Firewall\n*************\n\nThis document serves as an introduction to Cilium's host firewall, to enforce\nsecurity policies for Kubernetes nodes.\n\n.. admonition:: Video\n  :class: attention\n\n  You can also watch a video of Cilium's host firewall in action on\n  `eCHO Episode 40: Cilium Host Firewall <https:\/\/www.youtube.com\/watch?v=GLLLcz398K0&t=288s>`__.\n\nEnable the Host Firewall in Cilium\n==================================\n\n.. include:: \/installation\/k8s-install-download-release.rst\n\nDeploy Cilium release via Helm:\n\n.. parsed-literal::\n\n    helm install cilium |CHART_RELEASE|        \\\\\n      --namespace kube-system                  \\\\\n      --set hostFirewall.enabled=true          \\\\\n      --set devices='{ethX,ethY}'\n\nThe ``devices`` flag refers to the network devices Cilium is configured on,\nsuch as ``eth0``. If you omit this option, Cilium auto-detects what interfaces\nthe host firewall applies to. The resulting interfaces are shown in the\noutput of the ``cilium-dbg status`` command:\n\n.. code-block:: shell-session\n\n    $ kubectl exec -n kube-system ds\/cilium -- \\\n         cilium-dbg status | grep 'Host firewall'\n\nAt this point, the Cilium-managed nodes are ready to enforce network policies.\n\n\nAttach a Label to the Node\n==========================\n\nIn this guide, host policies only apply to nodes with the label\n``node-access=ssh``. Therefore, you first need to attach this label to a node\nin the cluster:\n\n.. 
code-block:: shell-session\n\n    $ export NODE_NAME=k8s1\n    $ kubectl label node $NODE_NAME node-access=ssh\n    node\/k8s1 labeled\n\n\nEnable Policy Audit Mode for the Host Endpoint\n==============================================\n\n`HostPolicies` enforce access control over connectivity to and from nodes.\nParticular care must be taken to ensure that when host policies are imported,\nCilium does not block access to the nodes or break the cluster's normal\nbehavior (for example by blocking communication with ``kube-apiserver``).\n\nTo avoid such issues, switch the host firewall to audit mode and validate the\nimpact of host policies before enforcing them.\n\n.. warning::\n\n   When Policy Audit Mode is enabled, no network policy is enforced, so this\n   setting is not recommended for production deployments.\n\nEnable and check status for the Policy Audit Mode on the host endpoint for a\ngiven node with the following commands:\n\n.. code-block:: shell-session\n\n    $ CILIUM_NAMESPACE=kube-system\n    $ CILIUM_POD_NAME=$(kubectl -n $CILIUM_NAMESPACE get pods -l \"k8s-app=cilium\" -o jsonpath=\"{.items[?(@.spec.nodeName=='$NODE_NAME')].metadata.name}\")\n    $ alias kexec=\"kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME --\"\n    $ HOST_EP_ID=$(kexec cilium-dbg endpoint list -o jsonpath='{[?(@.status.identity.id==1)].id}')\n    $ kexec cilium-dbg status | grep 'Host firewall'\n    Host firewall:           Enabled   [eth0]\n    $ kexec cilium-dbg endpoint config $HOST_EP_ID PolicyAuditMode=Enabled\n    Endpoint 3353 configuration updated successfully\n    $ kexec cilium-dbg endpoint config $HOST_EP_ID | grep PolicyAuditMode\n    PolicyAuditMode        : Enabled\n\n\nApply a Host Network Policy\n===========================\n\n:ref:`HostPolicies` match on node labels using a :ref:`NodeSelector` to\nidentify the nodes to which the policies apply. They apply only to the host\nnamespace, including host-networking pods. 
They don't apply to communications\nbetween pods or between pods and the outside of the cluster, except if those\npods are host-networking pods.\n\nThe following policy applies to all nodes with the ``node-access=ssh`` label.\nIt allows communications from outside the cluster only for TCP\/22 and for ICMP\n(ping) echo requests. All communications from the cluster to the hosts are\nallowed.\n\n.. literalinclude:: ..\/..\/examples\/policies\/host\/demo-host-policy.yaml\n\nTo apply this policy, run:\n\n.. parsed-literal::\n\n    $ kubectl create -f \\ |SCM_WEB|\\\/examples\/policies\/host\/demo-host-policy.yaml\n    ciliumclusterwidenetworkpolicy.cilium.io\/demo-host-policy created\n\nThe host is represented as a special endpoint, with label ``reserved:host``, in\nthe output of command ``cilium-dbg endpoint list``. Use this command to inspect\nthe status of host policies:\n\n.. code-block:: shell-session\n\n    $ kexec cilium-dbg endpoint list\n    ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6                 IPv4           STATUS\n               ENFORCEMENT        ENFORCEMENT\n    266        Disabled           Disabled          104        k8s:io.cilium.k8s.policy.cluster=default          f00d::a0b:0:0:ef4e   10.16.172.63   ready\n                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns\n                                                               k8s:io.kubernetes.pod.namespace=kube-system\n                                                               k8s:k8s-app=kube-dns\n    1687       Disabled (Audit)   Disabled          1          k8s:node-access=ssh                                                                   ready\n                                                               reserved:host\n    3362       Disabled           Disabled          4          reserved:health                                   f00d::a0b:0:0:49cf   
10.16.87.66    ready\n\nIn this example, one can observe that policy enforcement on the host endpoint\nis in audit mode for ingress traffic, and disabled for egress traffic.\n\n\nAdjust the Host Policy to Your Environment\n==========================================\n\nAs long as the host endpoint runs in audit mode, communications disallowed by\nthe policy are not dropped. Nevertheless, they are reported by ``cilium-dbg\nmonitor``, as ``action audit``. With these reports, the audit mode allows you\nto adjust the host policy to your environment in order to avoid unexpected\nconnection breakages.\n\n.. code-block:: shell-session\n\n    $ kexec cilium-dbg monitor -t policy-verdict --related-to $HOST_EP_ID\n    Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 1, ingress, action allow, match L3-Only, 192.168.60.12 -> 192.168.60.11 EchoRequest\n    Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 6, ingress, action allow, match L3-Only, 192.168.60.12:37278 -> 192.168.60.11:2379 tcp SYN\n    Policy verdict log: flow 0x0 local EP ID 1687, remote ID 2, proto 6, ingress, action audit, match none, 10.0.2.2:47500 -> 10.0.2.15:6443 tcp SYN\n\nFor details on deriving the network policies from the output of ``cilium\nmonitor``, refer to `observe_policy_verdicts` and `create_network_policy` in\nthe `policy_verdicts` guide.\n\nNote that `Entities based` rules are convenient when combined with host\npolicies, for example to allow communication to entire classes of destinations,\nsuch as all remote nodes (``remote-node``) or the entire cluster\n(``cluster``).\n\n.. warning::\n\n    Make sure that none of the communications required to access the cluster or\n    for the cluster to work properly are denied. Ensure they all appear as\n    ``action allow`` before disabling the audit mode.\n\n.. 
_disable_policy_audit_mode:\n\nDisable Policy Audit Mode\n=========================\n\nOnce you are confident all required communications to the host from outside the\ncluster are allowed, disable the policy audit mode to enforce the host policy:\n\n.. code-block:: shell-session\n\n    $ kexec cilium-dbg endpoint config $HOST_EP_ID PolicyAuditMode=Disabled\n    Endpoint 3353 configuration updated successfully\n\nIngress host policies should now appear as enforced:\n\n.. code-block:: shell-session\n\n    $ kexec cilium-dbg endpoint list\n    ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6                 IPv4           STATUS\n               ENFORCEMENT        ENFORCEMENT\n    266        Disabled           Disabled          104        k8s:io.cilium.k8s.policy.cluster=default          f00d::a0b:0:0:ef4e   10.16.172.63   ready\n                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns\n                                                               k8s:io.kubernetes.pod.namespace=kube-system\n                                                               k8s:k8s-app=kube-dns\n    1687       Enabled            Disabled          1          k8s:node-access=ssh                                                                   ready\n                                                               reserved:host\n    3362       Disabled           Disabled          4          reserved:health                                   f00d::a0b:0:0:49cf   10.16.87.66    ready\n\n\nCommunications that are not explicitly allowed by the host policy are now\ndropped:\n\n.. code-block:: shell-session\n\n    $ kexec cilium-dbg monitor -t policy-verdict --related-to $HOST_EP_ID\n    Policy verdict log: flow 0x0 local EP ID 1687, remote ID 2, proto 6, ingress, action deny, match none, 10.0.2.2:49038 -> 10.0.2.15:21 tcp SYN\n\n\nClean up\n========\n\n.. 
code-block:: shell-session\n\n   $ kubectl delete ccnp demo-host-policy\n   $ kubectl label node $NODE_NAME node-access-\n\nFurther Reading\n===============\n\nRead the documentation on :ref:`HostPolicies` for additional details on how to\nuse the policies. In particular, refer to the :ref:`Troubleshooting Host\nPolicies <troubleshooting_host_policies>` subsection to understand how to debug\nissues with Host Policies, or to the section on :ref:`Host Policies known\nissues <host_policies_known_issues>` to understand the current limitations of\nthe feature.","site":"cilium"}
{"questions":"cilium Cilium environment running on your machine It is designed to take 15 30 docs cilium io security policies It is a detailed walk through of getting a single node This document serves as an introduction for using Cilium to enforce Elasticsearch aware minutes Securing Elasticsearch","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n**********************\nSecuring Elasticsearch\n**********************\n\nThis document serves as an introduction for using Cilium to enforce Elasticsearch-aware\nsecurity policies. It is a detailed walk-through of getting a single-node\nCilium environment running on your machine. It is designed to take 15-30\nminutes.\n\n.. include:: gsg_requirements.rst\n\nDeploy the Demo Application\n===========================\n\nFollowing the Cilium tradition, we will use a Star Wars-inspired example. The Empire has a large-scale Elasticsearch cluster which is used for storing a variety of data including:\n\n* ``index: troop_logs``: Stormtrooper performance logs collected from every outpost, used to identify and eliminate weak performers!\n* ``index: spaceship_diagnostics``: Spaceship diagnostics data collected from every spaceship, used for R&D and improvement of the spaceships.\n\nEvery outpost has an Elasticsearch client service to upload the Stormtroopers' logs, and every spaceship has a service to upload diagnostics. Similarly, the Empire headquarters has a service to search and analyze the troop logs and spaceship diagnostics data. 
Before we look into the security concerns, let's first create this application scenario in minikube.\n\nDeploy the app using the command below, which will create:\n\n* An ``elasticsearch`` service with the selector label ``component:elasticsearch`` and a pod running Elasticsearch.\n* Three Elasticsearch clients, one each for ``empire-hq``, ``outpost`` and ``spaceship``.\n\n.. parsed-literal::\n\n    $ kubectl create -f \\ |SCM_WEB|\\\/examples\/kubernetes-es\/es-sw-app.yaml\n    serviceaccount \"elasticsearch\" created\n    service \"elasticsearch\" created\n    replicationcontroller \"es\" created\n    role \"elasticsearch\" created\n    rolebinding \"elasticsearch\" created\n    pod \"outpost\" created\n    pod \"empire-hq\" created\n    pod \"spaceship\" created\n\n.. code-block:: shell-session\n\n    $ kubectl get svc,pods\n    NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                           AGE\n    svc\/elasticsearch   NodePort    10.111.238.254   <none>        9200:30130\/TCP,9300:31721\/TCP     2d\n    svc\/etcd-cilium     NodePort    10.98.67.60      <none>        32379:31079\/TCP,32380:31080\/TCP   9d\n    svc\/kubernetes      ClusterIP   10.96.0.1        <none>        443\/TCP                           9d\n\n    NAME               READY     STATUS    RESTARTS   AGE\n    po\/empire-hq       1\/1       Running   0          2d\n    po\/es-g9qk2        1\/1       Running   0          2d\n    po\/etcd-cilium-0   1\/1       Running   0          9d\n    po\/outpost         1\/1       Running   0          2d\n    po\/spaceship       1\/1       Running   0          2d\n\n\nSecurity Risks for Elasticsearch Access\n=======================================\n\nFor Elasticsearch clusters the **least privilege security** challenge is to give clients access only to particular indices, and to limit the operations each client is allowed to perform on each index. 
In this example, the ``outpost`` Elasticsearch clients only need access to upload troop logs, and the ``empire-hq`` client only needs search access to both the indices. From the security perspective, the outposts are weak spots and susceptible to being captured by the rebels. Once compromised, the clients can be used to search and manipulate the critical data in Elasticsearch. We can simulate this attack, but first let's run the commands for legitimate behavior for all the client services.\n\n``outpost`` client uploading troop logs\n\n.. code-block:: shell-session\n\n    $ kubectl exec outpost -- python upload_logs.py \n    Uploading Stormtroopers Performance Logs\n    created :  {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_version': 1, 'result': 'created', '_shards': {'total': 2, 'successful': 1, 'failed': 0}, 'created': True}\n\n``spaceship`` uploading diagnostics\n\n.. code-block:: shell-session\n\n    $ kubectl exec spaceship -- python upload_diagnostics.py \n    Uploading Spaceship Diagnostics\n    created :  {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_version': 1, 'result': 'created', '_shards': {'total': 2, 'successful': 1, 'failed': 0}, 'created': True}\n\n``empire-hq`` running search queries for logs and diagnostics\n\n.. 
code-block:: shell-session\n\n    $ kubectl exec empire-hq -- python search.py \n    Searching for Spaceship Diagnostics\n    Got 1 Hits:\n    {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \\\n     '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \\\n                 'stats': '[CRITICAL] [ENGINE BURN @SPEED 5000 km\/s] [CHANCE 80%]'}}\n    Searching for Stormtroopers Performance Logs\n    Got 1 Hits:\n    {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_score': 1.0, \\\n     '_source': {'outpost': 'Endor', 'datetime': '33 ABY 4AM DST', 'title': 'Endor Corps 1: Morning Drill', \\\n                 'notes': '5100 PRESENT; 15 ABSENT; 130 CODE-RED BELOW PAR PERFORMANCE'}}\n\n\nNow imagine an outpost captured by the rebels. In the commands below, the rebels first search all the indices and then manipulate the diagnostics data from a compromised outpost. \n\n.. code-block:: shell-session\n\n    $ kubectl exec outpost -- python search.py \n    Searching for Spaceship Diagnostics\n    Got 1 Hits:\n    {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \\\n     '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \\\n                 'stats': '[CRITICAL] [ENGINE BURN @SPEED 5000 km\/s] [CHANCE 80%]'}}\n    Searching for Stormtroopers Performance Logs\n    Got 1 Hits:\n    {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_score': 1.0, \\\n     '_source': {'outpost': 'Endor', 'datetime': '33 ABY 4AM DST', 'title': 'Endor Corps 1: Morning Drill', \\\n                 'notes': '5100 PRESENT; 15 ABSENT; 130 CODE-RED BELOW PAR PERFORMANCE'}}\n\nRebels manipulate spaceship diagnostics data so that the spaceship defects are not known to the empire-hq! (Hint: Rebels have changed the ``stats`` for the tiefighter spaceship, a change hard to detect but with adverse impact!)\n\n\n.. 
code-block:: shell-session\n\n    $ kubectl exec outpost -- python update.py \n    Uploading Spaceship Diagnostics\n    {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \\\n     '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \\\n                 'stats': '[OK] [ENGINE OK @SPEED 5000 km\/s]'}}\n\n\nSecuring Elasticsearch Using Cilium\n====================================\n\n\n.. image:: images\/cilium_es_gsg_topology.png\n   :scale: 40 %\n\nFollowing the least privilege security principle, we want to allow the following legitimate actions and nothing more:\n\n* ``outpost`` service only has upload access to ``index: troop_logs``\n* ``spaceship`` service only has upload access to ``index: spaceship_diagnostics``\n* ``empire-hq`` service only has search access for both the indices\n\nFortunately, the Empire DevOps team is using Cilium for their Kubernetes cluster. Cilium provides L7 visibility and security policies to control Elasticsearch API access. Cilium follows the **white-list, least privilege model** for security. That is to say, a *CiliumNetworkPolicy* contains a list of rules that define **allowed requests** and any request that does not match the rules is denied. \n\nIn this example, the policy rules are defined for inbound traffic (i.e., \"ingress\") connections to the *elasticsearch* service. Note that endpoints selected as backend pods for the service are defined by the *selector* labels. *Selector* labels use the same concept as Kubernetes to define a service. 
In this example, label ``component: elasticsearch`` defines the pods that are part of the *elasticsearch* service in Kubernetes.\n\nIn the policy file below, you will see the following rules for controlling the indices access and actions performed:\n\n* ``fromEndpoints`` with labels ``app:spaceship``: only ``HTTP`` ``PUT`` is allowed on paths matching regex ``^\/spaceship_diagnostics\/stats\/.*$``\n* ``fromEndpoints`` with labels ``app:outpost``: only ``HTTP`` ``PUT`` is allowed on paths matching regex ``^\/troop_logs\/log\/.*$``\n* ``fromEndpoints`` with labels ``app:empire``: only ``HTTP`` ``GET`` is allowed on paths matching regex ``^\/spaceship_diagnostics\/_search\/??.*$`` and ``^\/troop_logs\/_search\/??.*$``\n\n.. literalinclude:: ..\/..\/examples\/kubernetes-es\/es-sw-policy.yaml\n\nApply this Elasticsearch-aware network security policy using ``kubectl``:\n\n.. parsed-literal::\n\n    $ kubectl create -f \\ |SCM_WEB|\\\/examples\/kubernetes-es\/es-sw-policy.yaml\n    ciliumnetworkpolicy \"secure-empire-elasticsearch\" created\n\nLet's test the security policies. Firstly, the search access is blocked for both outpost and spaceship. So from a compromised outpost, rebels will not be able to search and obtain knowledge about troops and spaceship diagnostics. Secondly, the outpost clients don't have access to create or update the ``index: spaceship_diagnostics``.\n\n.. code-block:: shell-session\n\n    $ kubectl exec outpost -- python search.py \n    GET http:\/\/elasticsearch:9200\/spaceship_diagnostics\/_search [status:403 request:0.008s]\n    ...\n    ...\n    elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\\r\\n')\n    command terminated with exit code 1\n\n.. 
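As an offline sanity check of the 403 above, the demo requests can be replayed against the path regexes used by the policy rules. This is an illustrative sketch, not the policy engine itself; the ``ALLOWED`` table is an assumption mirroring the rules listed earlier:

```python
import re

# Client label and HTTP method mapped to allowed path regexes,
# assumed to mirror the demo policy rules (hypothetical helper).
ALLOWED = {
    ('app:outpost', 'PUT'): [r'^/troop_logs/log/.*$'],
    ('app:spaceship', 'PUT'): [r'^/spaceship_diagnostics/stats/.*$'],
    ('app:empire', 'GET'): [r'^/spaceship_diagnostics/_search/??.*$',
                            r'^/troop_logs/_search/??.*$'],
}

def is_allowed(client, method, path):
    # A request is allowed only if some regex for this (client, method)
    # pair matches the request path; everything else is denied.
    return any(re.match(p, path) for p in ALLOWED.get((client, method), ()))
```

For instance, ``is_allowed('app:outpost', 'GET', '/spaceship_diagnostics/_search')`` is ``False``, matching the 403 returned for the compromised outpost's search above.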
code-block:: shell-session\n\n    $ kubectl exec outpost -- python update.py \n    PUT http:\/\/elasticsearch:9200\/spaceship_diagnostics\/stats\/1 [status:403 request:0.006s]\n    ...\n    ...\n    elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\\r\\n')\n    command terminated with exit code 1\n\nWe can re-run any of the below commands to show that the security policy still allows all legitimate requests (i.e., no 403 errors are returned).\n\n.. code-block:: shell-session\n\n    $ kubectl exec outpost -- python upload_logs.py \n    ...\n    $ kubectl exec spaceship -- python upload_diagnostics.py \n    ...\n    $ kubectl exec empire-hq -- python search.py \n    ...\n\n\nClean Up\n========\n\nYou have now installed Cilium, deployed a demo app, and finally deployed & tested Elasticsearch-aware network security policies. To clean up, run:\n\n.. parsed-literal::\n\n   $ kubectl delete -f \\ |SCM_WEB|\\\/examples\/kubernetes-es\/es-sw-app.yaml\n   $ kubectl delete cnp secure-empire-elasticsearch","site":"cilium"}
{"questions":"cilium This document serves as an introduction to using Cilium to enforce gRPC aware Cilium environment running on your machine It is designed to take 15 30 docs cilium io security policies It is a detailed walk through of getting a single node Securing gRPC minutes","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n*************\nSecuring gRPC\n*************\n\nThis document serves as an introduction to using Cilium to enforce gRPC-aware\nsecurity policies.  It is a detailed walk-through of getting a single-node\nCilium environment running on your machine. It is designed to take 15-30\nminutes.\n\n.. include:: gsg_requirements.rst\n\nIt is important for this demo that ``kube-dns`` is working correctly. To know the\nstatus of ``kube-dns`` you can run the following command:\n\n.. code-block:: shell-session\n\n    $ kubectl get deployment kube-dns -n kube-system\n    NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\n    kube-dns   1         1         1            1           13h\n\nWhere at least one pod should be available.\n\nDeploy the Demo Application\n===========================\n\nNow that we have Cilium deployed and ``kube-dns`` operating correctly we can\ndeploy our demo gRPC application.  Since our first demo of Cilium + HTTP-aware security\npolicies was Star Wars-themed, we decided to do the same for gRPC. 
While the\n`HTTP-aware Cilium Star Wars demo <https:\/\/cilium.io\/blog\/2017\/5\/4\/demo-may-the-force-be-with-you\/>`_\nshowed how the Galactic Empire used HTTP-aware security policies to protect the Death Star from the\nRebel Alliance, this gRPC demo shows how the lack of gRPC-aware security policies allowed Leia, Chewbacca, Lando, C-3PO, and R2-D2 to escape from Cloud City, which had been overtaken by\nempire forces.\n\n`gRPC <https:\/\/grpc.io\/>`_ is a high-performance RPC framework built on top of the `protobuf <https:\/\/developers.google.com\/protocol-buffers\/>`_\nserialization\/deserialization library popularized by Google.  There are gRPC bindings\nfor many programming languages, and the efficiency of the protobuf parsing as well as\nadvantages from leveraging HTTP\/2 as a transport make it a popular RPC framework for\nthose building new microservices from scratch.\n\nFor those unfamiliar with the details of the movie, Leia and the other rebels are\nfleeing storm troopers and trying to reach the spaceport platform where the Millennium Falcon\nis parked, so they can fly out of Cloud City. However, the door to the platform is closed,\nand the access code has been changed. Fortunately, R2-D2 is able to access the Cloud City\ncomputer system via a public terminal and disable this security, opening the door and\nletting the Rebels reach the Millennium Falcon just in time to escape.\n\n.. image:: images\/cilium_grpc_gsg_r2d2_terminal.png\n\nIn our example, Cloud City's internal computer system is built as a set of gRPC-based\nmicroservices (who knew that gRPC was actually invented a long time ago, in a galaxy\nfar, far away?).\n\nWith gRPC, each service is defined using a language-independent protocol buffer definition.\nHere is the definition for the system used to manage doors within Cloud City:\n\n.. 
code-block:: java\n\n  package cloudcity;\n\n  \/\/ The door manager service definition.\n  service DoorManager {\n\n    \/\/ Get human readable name of door.\n    rpc GetName(DoorRequest) returns (DoorNameReply) {}\n\n    \/\/ Find the location of this door.\n    rpc GetLocation(DoorRequest) returns (DoorLocationReply) {}\n\n    \/\/ Find out whether door is open or closed.\n    rpc GetStatus(DoorRequest) returns (DoorStatusReply) {}\n\n    \/\/ Request maintenance on the door.\n    rpc RequestMaintenance(DoorMaintRequest) returns (DoorActionReply) {}\n\n    \/\/ Set Access Code to Open \/ Lock the door.\n    rpc SetAccessCode(DoorAccessCodeRequest) returns (DoorActionReply) {}\n\n  }\n\nTo keep the setup small, we will just launch two pods to represent this setup:\n\n- **cc-door-mgr**: A single pod running the gRPC door manager service, with label ``app=cc-door-mgr``.\n- **terminal-87**: One of the public network access terminals scattered across Cloud City, with label ``app=public-terminal``. R2-D2 plugs into terminal-87 as the rebels are desperately trying to escape. This terminal uses the gRPC client code to communicate with the door management service.\n\n\n.. image:: images\/cilium_grpc_gsg_topology.png\n\nThe file ``cc-door-app.yaml`` contains a Kubernetes Deployment for the door manager\nservice, a Kubernetes Pod representing ``terminal-87``, and a Kubernetes Service for\nthe door manager service. To deploy this example app, run:\n\n.. parsed-literal::\n\n    $ kubectl create -f \\ |SCM_WEB|\\\/examples\/kubernetes-grpc\/cc-door-app.yaml\n    deployment \"cc-door-mgr\" created\n    service \"cc-door-server\" created\n    pod \"terminal-87\" created\n\nKubernetes will deploy the pods and service in the background. Running\n``kubectl get svc,pods`` will inform you about the progress of the operation.\nEach pod will go through several states until it reaches ``Running``, at which\npoint the setup is ready.\n\n.. 
code-block:: shell-session\n\n    $ kubectl get pods,svc\n    NAME                                 READY     STATUS    RESTARTS   AGE\n    po\/cc-door-mgr-3590146619-cv4jn      1\/1       Running   0          1m\n    po\/terminal-87                       1\/1       Running   0          1m\n\n    NAME                 CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE\n    svc\/cc-door-server   10.0.0.72    <none>        50051\/TCP   1m\n    svc\/kubernetes       10.0.0.1     <none>        443\/TCP     6m\n\nTest Access Between gRPC Client and Server\n==========================================\n\nFirst, let's confirm that the public terminal can properly act as a client to the\ndoor service.  We can test this by running a Python gRPC client for the door service that\nexists in the *terminal-87* container.\n\nWe'll invoke ``cc_door_client`` with the name of the gRPC method to call, and any\nparameters (in this case, the door-id):\n\n.. code-block:: shell-session\n\n    $ kubectl exec terminal-87 -- python3 \/cloudcity\/cc_door_client.py GetName 1\n    Door name is: Spaceport Door #1\n\n    $ kubectl exec terminal-87 -- python3 \/cloudcity\/cc_door_client.py GetLocation 1\n    Door location is lat = 10.222200393676758 long = 68.87879943847656\n\nExposing this information to public terminals seems quite useful, as it helps travelers new\nto Cloud City identify and locate different doors. But recall that the door service also\nexposes several other methods, including ``SetAccessCode``. If access to the door manager\nservice is protected only using traditional IP and port-based firewalling, the TCP port of\nthe service (50051 in this example) will be wide open to allow legitimate calls like\n``GetName`` and ``GetLocation``, which leaves more sensitive calls like ``SetAccessCode`` exposed as\nwell. 
It is this mismatch between the coarse granularity of traditional firewalls and\nthe fine-grained nature of gRPC calls that R2-D2 exploited to override the security\nand help the rebels escape.\n\nTo see this, run:\n\n.. code-block:: shell-session\n\n    $ kubectl exec terminal-87 -- python3 \/cloudcity\/cc_door_client.py SetAccessCode 1 999\n    Successfully set AccessCode to 999\n\n\nSecuring Access to a gRPC Service with Cilium\n=============================================\n\nOnce the legitimate owners of Cloud City recover the city from the empire, how can they\nuse Cilium to plug this key security hole and block requests to ``SetAccessCode`` and ``GetStatus``\nwhile still allowing ``GetName``, ``GetLocation``, and ``RequestMaintenance``?\n\n.. image:: images\/cilium_grpc_gsg_policy.png\n\nSince gRPC builds on top of HTTP, this can be achieved easily by understanding how a\ngRPC call is mapped to an HTTP URL, and then applying a Cilium HTTP-aware filter to\nallow public terminals to invoke only a subset of the gRPC methods available\non the door service.\n\nEach gRPC method is mapped to an HTTP POST call to a URL of the form\n``\/cloudcity.DoorManager\/<method-name>``.\n\nAs a result, the following *CiliumNetworkPolicy* rule limits access of pods with label\n``app=public-terminal`` to only invoke ``GetName``, ``GetLocation``, and ``RequestMaintenance``\non the door service, identified by label ``app=cc-door-mgr``:\n\n.. literalinclude:: ..\/..\/examples\/kubernetes-grpc\/cc-door-ingress-security.yaml\n   :language: yaml\n   :emphasize-lines: 9,13,21\n\nA *CiliumNetworkPolicy* contains a list of rules that define allowed requests,\nmeaning that requests that do not match any rules (e.g., ``SetAccessCode``) are denied as invalid.\n\nThe above rule applies to inbound (i.e., \"ingress\") connections to ``cc-door-mgr`` pods (as\nindicated by ``app: cc-door-mgr``\nin the \"endpointSelector\" section). 
The rule will apply to connections from pods with label\n``app: public-terminal`` as indicated by the \"fromEndpoints\" section.\nThe rule explicitly matches\ngRPC connections destined to TCP 50051, and white-lists specifically the permitted URLs.\n\nApply this gRPC-aware network security policy using ``kubectl`` in the main window:\n\n.. parsed-literal::\n\n    $ kubectl create -f \\ |SCM_WEB|\\\/examples\/kubernetes-grpc\/cc-door-ingress-security.yaml\n\nAfter this security policy is in place, access to the innocuous calls like ``GetLocation``\nstill works as intended:\n\n.. code-block:: shell-session\n\n    $ kubectl exec terminal-87 -- python3 \/cloudcity\/cc_door_client.py GetLocation 1\n    Door location is lat = 10.222200393676758 long = 68.87879943847656\n\n\nHowever, if we then again try to invoke ``SetAccessCode``, it is denied:\n\n.. code-block:: shell-session\n\n    $ kubectl exec terminal-87 -- python3 \/cloudcity\/cc_door_client.py SetAccessCode 1 999\n\n    Traceback (most recent call last):\n      File \"\/cloudcity\/cc_door_client.py\", line 71, in <module>\n        run()\n      File \"\/cloudcity\/cc_door_client.py\", line 53, in run\n        door_id=int(arg2), access_code=int(arg3)))\n      File \"\/usr\/local\/lib\/python3.4\/dist-packages\/grpc\/_channel.py\", line 492, in __call__\n        return _end_unary_response_blocking(state, call, False, deadline)\n      File \"\/usr\/local\/lib\/python3.4\/dist-packages\/grpc\/_channel.py\", line 440, in _end_unary_response_blocking\n        raise _Rendezvous(state, None, None, deadline)\n    grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.CANCELLED, Received http2 header with status: 403)>\n\n\nThis is now blocked, thanks to the Cilium network policy. 
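The mapping and the whitelist behavior just described can be sketched in plain Python. This is a toy model of the rule above showing why an HTTP-aware filter can distinguish ``GetLocation`` from ``SetAccessCode`` even though both travel over TCP port 50051; it is illustrative only, since Cilium enforces the policy in its datapath proxy, not in application code.

```python
# Toy model: every gRPC call is carried as an HTTP/2 POST to
# /<package>.<Service>/<Method>, so a per-path whitelist is per-method.
def grpc_path(service: str, method: str) -> str:
    return f"/{service}/{method}"

# The three methods permitted by the CiliumNetworkPolicy above.
ALLOWED = {grpc_path("cloudcity.DoorManager", m)
           for m in ("GetName", "GetLocation", "RequestMaintenance")}

def http_status(method: str) -> int:
    """Status a terminal-87 client would observe under the policy."""
    path = grpc_path("cloudcity.DoorManager", method)
    return 200 if path in ALLOWED else 403

print(http_status("GetLocation"))   # 200
print(http_status("SetAccessCode")) # 403
```

A port-based firewall sees only "TCP 50051" for both calls; the per-path check is what makes the fine-grained distinction possible.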
And notice that unlike\na traditional firewall, which would just drop packets in a way indistinguishable\nfrom a network failure, Cilium operates at the API layer and can\nexplicitly reply with a custom HTTP 403 Forbidden error, indicating that the\nrequest was intentionally denied for security reasons.\n\nThank goodness that the empire IT staff hadn't had time to deploy Cilium on\nCloud City's internal network prior to the escape attempt, or things might have\nturned out quite differently for Leia and the other Rebels!\n\nClean-Up\n========\n\nYou have now installed Cilium, deployed a demo app, and tested\nL7 gRPC-aware network security policies. To clean up, run:\n\n.. parsed-literal::\n\n   $ kubectl delete -f \\ |SCM_WEB|\\\/examples\/kubernetes-grpc\/cc-door-app.yaml\n   $ kubectl delete cnp rule1\n\nAfter this, you can re-run the tutorial from Step 1.","site":"cilium","answers_cleaned":""}
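To make the distinction between silent packet drops and API-layer denial concrete, here is a small self-contained Python sketch of a server that answers 403 for paths outside an allow-list, which is the observable behavior the ``terminal-87`` client saw in the gRPC guide above. It is a toy stand-in, not Cilium's proxy, and the allowed path is an assumption taken from the example policy.

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Path allowed by the toy policy (mirrors the gRPC method->path mapping).
ALLOWED_PATHS = {"/cloudcity.DoorManager/GetName"}

class PolicyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # API-aware denial: reply 403 instead of silently dropping.
        status = 200 if self.path in ALLOWED_PATHS else 403
        self.send_response(status)
        self.end_headers()
        self.wfile.write(b"ok" if status == 200 else b"Access denied")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PolicyHandler)  # ephemeral port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

def call(method: str) -> int:
    req = urllib.request.Request(f"{base}/cloudcity.DoorManager/{method}", data=b"")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

status_allowed = call("GetName")
status_denied = call("SetAccessCode")
print(status_allowed, status_denied)  # 200 403
server.shutdown()
```

The explicit 403 tells the client (and its logs) that the request was denied by policy, rather than leaving it to time out as a packet filter would.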
{"questions":"cilium TLS encrypted connections This TLS aware inspection allows Cilium API aware visibility and policy to function docs cilium io Inspecting TLS Encrypted Connections with Cilium gstlsinspection This document serves as an introduction for how network security teams can use Cilium to transparently inspect","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _gs_tls_inspection:\n\n************************************************\nInspecting TLS Encrypted Connections with Cilium\n************************************************\n\nThis document serves as an introduction for how network security teams can use Cilium to transparently inspect\nTLS-encrypted connections.  This TLS-aware inspection allows Cilium API-aware visibility and policy to function\neven for connections where client-to-server\ncommunication is protected by TLS, such as when a client accesses the API service via HTTPS.  This capability is similar to\nwhat is possible with traditional hardware firewalls, but is implemented entirely in software on the Kubernetes worker node,\nand is policy driven, allowing inspection to target only selected network connectivity.\n\nThis type of visibility is\nextremely valuable for monitoring how external API services are being used,\nfor example, understanding which S3 buckets are being accessed by a given application.\n\n.. include:: gsg_requirements.rst\n\n\nTo ensure that the Cilium agent has the correct permissions to perform TLS interception, please set the following values\nin your Helm chart settings:\n\n.. 
code-block:: YAML\n\n    tls:\n      secretsBackend: k8s\n      secretSync:\n        enabled: true\n\n\nThis configures Cilium so that the Cilium Operator will synchronize any secrets referenced in CiliumNetworkPolicy (or\nCiliumClusterwideNetworkPolicy) to a ``cilium-secrets`` namespace, and grant the Cilium agent read access to Secrets for\nthat namespace only.\n\n\nDeploy the Demo Application\n===========================\n\nTo demonstrate TLS-interception we will use the same ``mediabot`` application that we used for the DNS-aware policy example.\nThis application will access the Star Wars API service using HTTPS, which would normally mean that network-layer mechanisms\nlike Cilium would not be able to see the HTTP-layer details of the communication, since all application data is encrypted\nusing TLS before that data is sent on the network.\n\nIn this guide we will learn about:\n\n- Creating an internal Certificate Authority (CA) and associated certificates signed by that CA to enable TLS interception.\n- Using Cilium network policy to select the traffic to intercept using DNS-based policy rules.\n- Inspecting the details of the HTTP request using cilium monitor (accessing this visibility data via Hubble, and applying\n  Cilium network policies to filter\/modify the HTTP request is also possible, but is beyond the scope of this simple Getting Started Guide)\n\n\nFirst off, we will create a single pod ``mediabot`` application:\n\n.. 
parsed-literal::\n\n   $ kubectl create -f \\ |SCM_WEB|\\\/examples\/kubernetes-dns\/dns-sw-app.yaml\n   $ kubectl wait pod\/mediabot --for=condition=Ready\n   $ kubectl get pods\n   NAME                             READY   STATUS    RESTARTS   AGE\n   pod\/mediabot                     1\/1     Running   0          14s\n\n\nA Brief Overview of the TLS Certificate Model\n=============================================\n\nTLS is a protocol that \"wraps\" other protocols like HTTP and ensures that communication between client and\nserver has confidentiality (no one can read the data except the intended recipient), integrity (recipient\ncan confirm that the data has not been modified in transit), and authentication (sender can confirm that\nit is talking with the intended destination, not an impostor).  We will provide a highly simplified overview\nof TLS in this document, but for full details, please see\n`<https:\/\/en.wikipedia.org\/wiki\/Transport_Layer_Security>`_ .\n\nFrom an authentication perspective, the TLS model relies on a \"Certificate Authority\" (CA), which is an\nentity that is trusted to create proof that a given network service (e.g., www.cilium.io)\nis who they say they are.  The goal is to prevent a malicious party in the network between the client\nand the server from intercepting the traffic and pretending to be the destination server.\n\nIn the case of \"friendly interception\" for network security monitoring, Cilium uses a model similar to\ntraditional firewalls with TLS inspection capabilities:  the network security team creates their own \"internal\ncertificate authority\"\nthat can be used to create alternative certificates for external destinations.  This model requires each\nclient workload to also trust this new certificate, otherwise the client's TLS library will reject\nthe connection as invalid.  
In this model, the network firewall uses the certificate signed by the internal\nCA to act like the destination service and terminate the TLS connection.  This allows the firewall to\ninspect and even modify the application layer data, and then initiate another TLS connection to the actual\ndestination service.\n\nThe CA model within TLS is based on cryptographic keys and certificates.  Realizing the above model\nrequires five primary steps:\n\n1) Create an internal certificate authority by generating a CA private key and CA certificate.\n\n2) For any destination where TLS inspection is desired (e.g., httpbin.org in the example below),\n   generate a private key and certificate signing request with a common name that matches the destination DNS\n   name.\n\n3) Use the CA private key to create a signed certificate.\n\n4) Ensure that all clients where TLS inspection is desired have the CA certificate installed so that they will\n   trust all certificates signed by that CA.\n\n5) Given that Cilium will be terminating the initial TLS connection from the\n   client and creating a new TLS connection to the destination, Cilium must be told the set of CAs that it\n   should trust when validating the new TLS connection to the destination service.\n\n.. note::\n\n    In a non-demo environment it is EXTREMELY important that you keep the above private keys safe, as anyone\n    with access to this private key will be able to inspect TLS-encrypted traffic (certificates on the other\n    hand are public information, and are not at all sensitive).  
In the guide below, the\n    CA private key does not need to be provided to Cilium at all (it is used only to create certificates, which\n    can be done offline) and private keys for individual destination services are stored as Kubernetes secrets.\n    These secrets should be stored in a namespace where they can be accessed by Cilium, but not general purpose\n    workloads.\n\nGenerating and Installing TLS Keys and Certificates\n===================================================\n\nNow that we have explained the high-level certificate model used by TLS, we will walk through the\nconcrete steps to generate the appropriate keys and certificates using the ``openssl`` utility.\n\nThe following image describes the different files containing cryptographic data that are generated\nor copied, and what components in the system need access to those files:\n\n.. image:: images\/cilium_tls_visibility_gsg.png\n\nYou can use openssl on your local system if it is already installed, but if not, a simple\nshortcut is to use ``kubectl exec`` to execute ``\/bin\/bash`` within any of the cilium pods, and\nthen run the resulting ``openssl`` commands.  Use ``kubectl cp`` to copy the resulting files out\nof the cilium pod when it is time to use them to create Kubernetes secrets or copy them to the\n``mediabot`` pod.\n\nCreate an Internal Certificate Authority (CA)\n---------------------------------------------\n\nGenerate a CA private key named 'myCA.key':\n\n.. code-block:: shell-session\n\n    $ openssl genrsa -des3 -out myCA.key 2048\n\nEnter any password, just remember it for some of the later steps.\n\nGenerate the CA certificate from the private key:\n\n.. 
code-block:: shell-session\n\n    $ openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.crt\n\nThe values you enter for each prompt do not need to be any specific value, and do not need to be\naccurate.\n\nCreate Private Key and Certificate Signing Request for a Given DNS Name\n-----------------------------------------------------------------------\n\nGenerate an internal private key and certificate signing request with a common name that matches the DNS name\nof the destination service to be intercepted for inspection (in this example, use ``httpbin.org``).\n\nFirst create the private key:\n\n.. code-block:: shell-session\n\n    $ openssl genrsa -out internal-httpbin.key 2048\n\nNext, create a certificate signing request, specifying the DNS name of the destination service\nfor the common name field when prompted.  All other prompts can be filled with any value.\n\n.. code-block:: shell-session\n\n    $ openssl req -new -key internal-httpbin.key -out internal-httpbin.csr\n\nThe only field that must have a specific value is ``Common Name``, which must be the exact DNS\ndestination ``httpbin.org`` that will be provided to the client.\n\n\nThis example workflow will work for any DNS\nname as long as the toFQDNs rule in the policy YAML (below) is also updated to match the DNS name in the certificate.\n\n\nUse CA to Generate a Signed Certificate for the DNS Name\n--------------------------------------------------------\n\nUse the internal CA private key to create a signed certificate for httpbin.org named ``internal-httpbin.crt``.\n\n.. code-block:: shell-session\n\n    $ openssl x509 -req -days 360 -in internal-httpbin.csr -CA myCA.crt -CAkey myCA.key -CAcreateserial -out internal-httpbin.crt -sha256\n\nNext we create a Kubernetes secret that includes both the private key and signed certificate for the destination service:\n\n.. 
code-block:: shell-session\n\n    $ kubectl create secret tls httpbin-tls-data -n kube-system --cert=internal-httpbin.crt --key=internal-httpbin.key\n\nAdd the Internal CA as a Trusted CA Inside the Client Pod\n---------------------------------------------------------\n\nOnce the CA certificate is inside the client pod, we still must make sure that the CA file is picked up by the TLS library used by your\napplication.  Most Linux applications automatically use a set of trusted CA certificates that are bundled along with the Linux distro.\nIn this guide, we are using an Ubuntu container as the client, and so will update it with Ubuntu-specific instructions.  Other Linux distros\nwill have different mechanisms.  Also, individual applications may leverage their own certificate stores rather than use the OS certificate\nstore.  Java applications and the aws-cli are two common examples.  Please refer to the application or application runtime documentation\nfor more details.\n\nFor Ubuntu, we first copy the additional CA certificate to the client pod filesystem:\n\n.. code-block:: shell-session\n\n    $ kubectl cp myCA.crt default\/mediabot:\/usr\/local\/share\/ca-certificates\/myCA.crt\n\nThen run the Ubuntu-specific utility that adds this certificate to the global set of trusted certificate authorities in ``\/etc\/ssl\/certs\/ca-certificates.crt``:\n\n.. code-block:: shell-session\n\n    $ kubectl exec mediabot -- update-ca-certificates\n\nThis command will issue a WARNING, which can be ignored.\n\nProvide Cilium with List of Trusted CAs\n---------------------------------------\n\nNext, we will provide Cilium with the set of CAs that it should trust when originating the secondary TLS connections.\nThis list should correspond to the standard set of global CAs that your organization trusts.\n
A logical option for this is the standard CAs\nthat are trusted by your operating system, since this is the set of CAs that were being used prior to introducing TLS inspection.\n\nTo keep things simple, in this example we will simply copy this list out of the Ubuntu filesystem of the mediabot pod, though it is\nimportant to understand that this list of trusted CAs is not specific to a particular TLS client or server, and so this step need only\nbe performed once regardless of how many TLS clients or servers are involved in TLS inspection.\n\n.. code-block:: shell-session\n\n    $ kubectl cp default\/mediabot:\/etc\/ssl\/certs\/ca-certificates.crt ca-certificates.crt\n\nWe will then create a Kubernetes secret from this certificate bundle so that Cilium can read it\nand use it to validate outgoing TLS connections.\n\n.. code-block:: shell-session\n\n    $ kubectl create secret generic tls-orig-data -n kube-system --from-file=ca.crt=.\/ca-certificates.crt\n\nApply DNS and TLS-aware Egress Policy\n=====================================\n\nUp to this point, we have created keys and certificates to enable TLS inspection, but we\nhave not told Cilium which traffic we want to intercept and inspect.  This is done using\nthe same Cilium Network Policy constructs that are used for other Cilium Network Policies.\n\nThe following Cilium network policy indicates that Cilium should perform HTTP-aware inspection\nof communication from the ``mediabot`` pod to ``httpbin.org``:\n\n.. 
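note::\n\n   For orientation before reading the included policy file: the TLS-related portion of the rule ties the two secrets created earlier to the egress rule for port 443. The fragment below is a rough sketch only (field layout assumed from the CiliumNetworkPolicy schema, not a complete policy):\n\n   .. code-block:: yaml\n\n      toPorts:\n      - ports:\n        - port: '443'\n          protocol: TCP\n        terminatingTLS:\n          secret:\n            namespace: kube-system\n            name: httpbin-tls-data\n        originatingTLS:\n          secret:\n            namespace: kube-system\n            name: tls-orig-data\n        rules:\n          http:\n          - {}\n\n.. 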
literalinclude:: ..\/..\/examples\/kubernetes-tls-inspection\/l7-visibility-tls.yaml\n\nLet's take a closer look at the policy:\n\n* The ``endpointSelector`` means that this policy applies only to pods with the labels ``class: mediabot, org: empire``.\n* The first egress section uses the ``toFQDNs: matchName`` specification to allow TCP port 443 egress to ``httpbin.org``.\n* The ``http`` section below the toFQDNs rule\n  indicates that such connections should be parsed as HTTP, with a policy of ``{}``, which will allow all requests.\n* The ``terminatingTLS`` and ``originatingTLS`` sections indicate that TLS interception should be used to terminate the initial TLS connection\n  from mediabot and initiate a new outbound TLS connection to ``httpbin.org``.\n* The second egress section allows ``mediabot`` pods to access the ``kube-dns`` service. Note that\n  ``rules: dns`` instructs Cilium to inspect and allow DNS lookups matching specified patterns.\n  In this case, inspect and allow all DNS queries.\n\nNote that with this policy the ``mediabot`` pod doesn't have access to any internal cluster service other than ``kube-dns``\nand will have no access to any other external destinations either. Refer to :ref:`Network Policy`\nto learn more about policies for controlling access to internal cluster services.\n\nLet's apply the policy:\n\n.. 
parsed-literal::\n\n  $ kubectl create -f \\ |SCM_WEB|\\\/examples\/kubernetes-tls-inspection\/l7-visibility-tls.yaml\n\nDemonstrating TLS Inspection\n============================\n\nRecall that the policy we pushed will allow all HTTPS requests from ``mediabot`` to ``httpbin.org``, but will parse all data at\nthe HTTP layer, meaning that cilium monitor will report each HTTP request and response.\n\nTo see this, open a new window and identify the name of the\ncilium pod (e.g., cilium-97s78) that is running on the same Kubernetes worker node as the ``mediabot`` pod\n(for example, with ``kubectl get pods -n kube-system -o wide``).\n\nThen start running cilium-dbg monitor in "L7 mode" to monitor for HTTP requests being reported by Cilium:\n\n.. code-block:: shell-session\n\n    $ kubectl exec -it -n kube-system cilium-d5x8v -- cilium-dbg monitor -t l7\n\nNext, in the original window, from the ``mediabot`` pod we can access ``httpbin.org`` via HTTPS:\n\n.. code-block:: shell-session\n\n    $ kubectl exec -it mediabot -- curl -sL 'https:\/\/httpbin.org\/anything'\n    ...\n    ...\n\n    $ kubectl exec -it mediabot -- curl -sL 'https:\/\/httpbin.org\/headers'\n    ...\n    ...\n\nLooking back at the cilium-dbg monitor window, you will see each individual HTTP request and response.\n
For example::\n\n    -> Request http from 2585 ([k8s:class=mediabot k8s:org=empire k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.cilium.k8s.policy.cluster=default]) to 0 ([reserved:world]), identity 24948->2, verdict Forwarded GET https:\/\/httpbin.org\/anything => 0\n    -> Response http to 2585 ([k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.cilium.k8s.policy.cluster=default k8s:class=mediabot k8s:org=empire]) from 0 ([reserved:world]), identity 24948->2, verdict Forwarded GET https:\/\/httpbin.org\/anything => 200\n\nRefer to :ref:`l4_policy` and :ref:`l7_policy` to learn more about Cilium L4 and L7 network policies.\n\nClean-up\n========\n\n.. parsed-literal::\n\n   $ kubectl delete -f \\ |SCM_WEB|\\\/examples\/kubernetes-dns\/dns-sw-app.yaml\n   $ kubectl delete cnp l7-visibility-tls\n   $ kubectl delete secret -n kube-system tls-orig-data\n   $ kubectl delete secret -n kube-system httpbin-tls-data","site":"cilium"}
{"questions":"cilium This document serves as an introduction to using Cilium to enforce policies docs cilium io based on AWS metadata It provides a detailed walk through of running a single node awsmetadatawithpolicy Locking Down External Access Using AWS Metadata","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _aws_metadata_with_policy:\n\n***********************************************\nLocking Down External Access Using AWS Metadata\n***********************************************\n\nThis document serves as an introduction to using Cilium to enforce policies\nbased on AWS metadata. It provides a detailed walk-through of running a single-node\nCilium environment on your machine. It is designed to take 15-30 minutes\nfor users with some experience running Kubernetes.\n\n\nSetup Cilium\n============\n\nThis guide will work with any approach to installing Cilium, including minikube,\nas long as the cilium-operator pod in the deployment can reach the AWS API server.\nHowever, since the most common use of this mechanism is for Kubernetes clusters\nrunning in AWS, we recommend trying it out along with the guide: :ref:`k8s_install_quick`.\n\nCreate AWS secrets\n==================\n\nBefore installing Cilium, a new Kubernetes Secret with the AWS Tokens needs to\nbe added to your Kubernetes cluster. This Secret will allow Cilium to gather\ninformation from the AWS API which is needed to implement ToGroups policies.\n\nAWS Access keys and IAM role\n------------------------------\n\nTo create a new access token, the `following guide can be used\n<https:\/\/docs.aws.amazon.com\/cli\/latest\/userguide\/cli-configure-files.html>`_.\nThese keys need to have certain permissions set:\n\n.. 
code-block:: javascript\n\n    {\n        \"Version\": \"2012-10-17\",\n        \"Statement\": [\n            {\n                \"Effect\": \"Allow\",\n                \"Action\": \"ec2:Describe*\",\n                \"Resource\": \"*\"\n            }\n        ]\n    }\n\nAs soon as you have the access tokens, the following secret needs to be added,\nwith each empty string replaced by the associated value as a base64-encoded string:\n\n.. code-block:: yaml\n    :name: cilium-secret.yaml\n\n    apiVersion: v1\n    kind: Secret\n    metadata:\n      name: cilium-aws\n      namespace: kube-system\n    type: Opaque\n    data:\n      AWS_ACCESS_KEY_ID: \"\"\n      AWS_SECRET_ACCESS_KEY: \"\"\n      AWS_DEFAULT_REGION: \"\"\n\nThe base64 command line utility can be used to generate each value, for example:\n\n.. code-block:: shell-session\n\n    $ echo -n \"eu-west-1\" | base64\n    ZXUtd2VzdC0x\n\nThis secret stores the AWS credentials, which will be used to connect to the AWS\nAPI.\n\n.. code-block:: shell-session\n\n    $ kubectl create -f cilium-secret.yaml\n\nTo validate that the credentials are correct, the following pod can be created\nfor debugging purposes:\n\n.. 
code-block:: yaml\n\n    apiVersion: v1\n    kind: Pod\n    metadata:\n      name: testing-aws-pod\n      namespace: kube-system\n    spec:\n      containers:\n      - name: aws-cli\n        image: mesosphere\/aws-cli\n        command: ['sh', '-c', 'sleep 3600']\n        env:\n          - name: AWS_ACCESS_KEY_ID\n            valueFrom:\n              secretKeyRef:\n                name: cilium-aws\n                key: AWS_ACCESS_KEY_ID\n                optional: true\n          - name: AWS_SECRET_ACCESS_KEY\n            valueFrom:\n              secretKeyRef:\n                name: cilium-aws\n                key: AWS_SECRET_ACCESS_KEY\n                optional: true\n          - name: AWS_DEFAULT_REGION\n            valueFrom:\n              secretKeyRef:\n                name: cilium-aws\n                key: AWS_DEFAULT_REGION\n                optional: true\n\nTo list all of the available AWS instances, the following command can be used:\n\n.. code-block:: shell-session\n\n   $ kubectl -n kube-system exec -ti testing-aws-pod -- aws ec2 describe-instances\n\nOnce the secret has been created and validated, the cilium-operator pod must be\nrestarted in order to pick up the credentials in the secret.\nTo do this, identify and delete the existing cilium-operator pod, which will be\nrecreated automatically:\n\n.. code-block:: shell-session\n\n    $ kubectl get pods -l name=cilium-operator -n kube-system\n    NAME                              READY   STATUS    RESTARTS   AGE\n    cilium-operator-7c9d69f7c-97vqx   1\/1     Running   0          36h\n\n    $ kubectl delete pod cilium-operator-7c9d69f7c-97vqx\n\nIt is important for this demo that ``coredns`` is working correctly. To check the\nstatus of ``coredns``, run the following command:\n\n.. 
code-block:: shell-session\n\n    $ kubectl get deployment coredns -n kube-system\n    NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\n    coredns    2         2         2            2           13h\n\nAt least one pod should be available.\n\nConfigure AWS Security Groups\n=============================\n\nCilium's AWS Metadata filtering capability enables explicit whitelisting\nof communication between a subset of pods (identified by Kubernetes labels)\nand a set of destination EC2 ENIs (identified by membership in an AWS security group).\n\nIn this example, the destination EC2 elastic network interfaces are attached to\nEC2 instances that are members of a single AWS security group ('sg-0f2146100a88d03c3').\nPods with the label ``class=xwing`` should only be able to make connections outside the\ncluster to the destination network interfaces in that security group.\n\nTo enable this, the VMs acting as Kubernetes worker nodes must be able to\nsend traffic to the destination VMs that are being accessed by pods.  One approach\nfor achieving this is to put all Kubernetes worker VMs in a single 'k8s-worker'\nsecurity group, and then ensure that any security group that is referenced in a\nCilium toGroups policy has an allow-all ingress rule (all ports) for connections from the\n'k8s-worker' security group.  Cilium filtering will then ensure that only the pods allowed\nby policy can reach the destination VMs.\n\nCreate a sample policy\n======================\n\nDeploy a demo application:\n----------------------------\n\nIn this case we're going to use a demo application that is used in other guides.\nThese manifests will create three microservice applications: *deathstar*,\n*tiefighter*, and *xwing*. In this case, we are only going to use our *xwing*\nmicroservice to secure communications to existing AWS instances.\n\n.. 
parsed-literal::\n\n    $ kubectl create -f \\ |SCM_WEB|\\\/examples\/minikube\/http-sw-app.yaml\n    service \"deathstar\" created\n    deployment \"deathstar\" created\n    deployment \"tiefighter\" created\n    deployment \"xwing\" created\n\n\nKubernetes will deploy the pods and service in the background. Running ``kubectl\nget pods,svc`` will inform you about the progress of the operation.  Each pod\nwill go through several states until it reaches ``Running`` at which point the\npod is ready.\n\n.. code-block:: shell-session\n\n    $ kubectl get pods,svc\n    NAME                             READY     STATUS    RESTARTS   AGE\n    po\/deathstar-76995f4687-2mxb2    1\/1       Running   0          1m\n    po\/deathstar-76995f4687-xbgnl    1\/1       Running   0          1m\n    po\/tiefighter                    1\/1       Running   0          1m\n    po\/xwing                         1\/1       Running   0          1m\n\n    NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE\n    svc\/deathstar    ClusterIP   10.109.254.198   <none>        80\/TCP    3h\n    svc\/kubernetes   ClusterIP   10.96.0.1        <none>        443\/TCP   3h\n\nPolicy Language:\n-----------------\n\n**ToGroups** rules can be used to define policy in relation to cloud providers, like AWS.\n\n.. 
code-block:: yaml\n\n    ---\n    kind: CiliumNetworkPolicy\n    apiVersion: cilium.io\/v2\n    metadata:\n      name: to-groups-sample\n      namespace: default\n    spec:\n      endpointSelector:\n        matchLabels:\n          org: alliance\n          class: xwing\n      egress:\n      - toPorts:\n        - ports:\n          - port: '80'\n            protocol: TCP\n        toGroups:\n        - aws:\n            securityGroupsIds:\n            - 'sg-0f2146100a88d03c3'\n\nThis policy allows traffic from the *xwing* pod to any AWS elastic network interface\nin the security group with ID ``sg-0f2146100a88d03c3``.\n\nValidate that derived policy is in place\n----------------------------------------\n\nEvery time a new policy with ToGroups rules is added, an equivalent policy\n(also called a \"derivative policy\") will be created. This policy will contain the\nset of CIDRs that correspond to the specification in ToGroups, e.g., the IPs of\nall network interfaces that are part of a specified security group. The list of\nIPs is updated periodically.\n\n.. code-block:: shell-session\n\n    $ kubectl get cnp\n    NAME                                                             AGE\n    to-groups-sample                                                 11s\n    to-groups-sample-togroups-044ba7d1-f491-11e8-ad2e-080027d2d952   10s\n\nEventually, the derivative policy will contain IPs in the ToCIDR section:\n\n.. code-block:: shell-session\n\n   $ kubectl get cnp to-groups-sample-togroups-044ba7d1-f491-11e8-ad2e-080027d2d952\n\n\n.. 
code-block:: yaml\n\n    apiVersion: cilium.io\/v2\n    kind: CiliumNetworkPolicy\n    metadata:\n      creationTimestamp: 2018-11-30T11:13:52Z\n      generation: 1\n      labels:\n        io.cilium.network.policy.kind: derivative\n        io.cilium.network.policy.parent.uuid: 044ba7d1-f491-11e8-ad2e-080027d2d952\n      name: to-groups-sample-togroups-044ba7d1-f491-11e8-ad2e-080027d2d952\n      namespace: default\n      ownerReferences:\n      - apiVersion: cilium.io\/v2\n        blockOwnerDeletion: true\n        kind: CiliumNetworkPolicy\n        name: to-groups-sample\n        uid: 044ba7d1-f491-11e8-ad2e-080027d2d952\n      resourceVersion: \"34853\"\n      selfLink: \/apis\/cilium.io\/v2\/namespaces\/default\/ciliumnetworkpolicies\/to-groups-sample-togroups-044ba7d1-f491-11e8-ad2e-080027d2d952\n      uid: 04b289ba-f491-11e8-ad2e-080027d2d952\n    specs:\n    - egress:\n      - toCIDRSet:\n        - cidr: 34.254.113.42\/32\n        - cidr: 172.31.44.160\/32\n        toPorts:\n        - ports:\n          - port: \"80\"\n            protocol: TCP\n      endpointSelector:\n        matchLabels:\n          any:class: xwing\n          any:org: alliance\n          k8s:io.kubernetes.pod.namespace: default\n      labels:\n      - key: io.cilium.k8s.policy.name\n        source: k8s\n        value: to-groups-sample\n      - key: io.cilium.k8s.policy.uid\n        source: k8s\n        value: 044ba7d1-f491-11e8-ad2e-080027d2d952\n      - key: io.cilium.k8s.policy.namespace\n        source: k8s\n        value: default\n      - key: io.cilium.k8s.policy.derived-from\n        source: k8s\n        value: CiliumNetworkPolicy\n    status:\n      nodes:\n        k8s1:\n          enforcing: true\n          lastUpdated: 2018-11-30T11:28:03.907678888Z\n          localPolicyRevision: 28\n          ok: true\n\nThe derivative rule should contain the following information:\n\n- *metadata.OwnerReferences*: that contains the information about the ToGroups\n  policy.\n\n- 
*specs.Egress.ToCIDRSet*:  the list of private and public IPs of the instances\n  that correspond to the spec of the parent policy.\n\n- *status*: whether or not the policy is enforced yet, and when the policy was\n  last updated.\n\nThe endpoint status for the *xwing* should have policy enforcement\nenabled only for egress connectivity:\n\n.. code-block:: shell-session\n\n    $ kubectl exec -q -it -n kube-system cilium-85vtg -- cilium-dbg endpoint get 23453 -o jsonpath='{$[0].status.policy.realized.policy-enabled}'\n    egress\n\nIn this example, *xwing* pod can only connect to ``34.254.113.42\/32`` and\n``172.31.44.160\/32`` and connectivity to other IP will be denied.","site":"cilium","answers_cleaned":"   only   not  epub or latex or html       WARNING  You are looking at unreleased Cilium documentation      Please use the official rendered version released here      https   docs cilium io      aws metadata with policy                                                   Locking Down External Access Using AWS Metadata                                                  This document serves as an introduction to using Cilium to enforce policies based on AWS metadata  It provides a detailed walk through of running a single node Cilium environment on your machine  It is designed to take 15 30 minutes for users with some experience running Kubernetes    Setup Cilium               This guide will work with any approach to installing Cilium  including minikube  as long as the cilium operator pod in the deployment can reach the AWS API server However  since the most common use of this mechanism is for Kubernetes clusters running in AWS  we recommend trying it out along with the guide   ref  k8s install quick     Create AWS secrets                     Before installing Cilium  a new Kubernetes Secret with the AWS Tokens needs to be added to your Kubernetes cluster  This Secret will allow Cilium to gather information from the AWS API which is needed to implement ToGroups 
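Conceptually, the operator's job here is a simple mapping: resolve each security group referenced in ``toGroups`` to the IPs of its member network interfaces, and rewrite the rule as a ``toCIDRSet``. A minimal Python sketch of that transformation follows; ``resolve_group_ips`` and ``derive_to_cidr_set`` are hypothetical names for illustration, not Cilium's actual implementation (which lives in cilium-operator and calls the AWS API):

```python
# Sketch: expand a ToGroups egress rule into the derivative toCIDRSet form.
# resolve_group_ips is a stand-in for the AWS network-interface lookup that
# cilium-operator performs with the credentials from the cilium-aws Secret.

def derive_to_cidr_set(egress_rule, resolve_group_ips):
    """Replace 'toGroups' in an egress rule with an equivalent 'toCIDRSet'."""
    derived = {k: v for k, v in egress_rule.items() if k != "toGroups"}
    cidrs = []
    for group in egress_rule.get("toGroups", []):
        for sg_id in group.get("aws", {}).get("securityGroupsIds", []):
            # Each member interface IP becomes a /32 CIDR entry.
            cidrs.extend({"cidr": f"{ip}/32"} for ip in resolve_group_ips(sg_id))
    derived["toCIDRSet"] = cidrs
    return derived

# Example using the IPs shown in the derivative policy above.
rule = {
    "toPorts": [{"ports": [{"port": "80", "protocol": "TCP"}]}],
    "toGroups": [{"aws": {"securityGroupsIds": ["sg-0f2146100a88d03c3"]}}],
}
lookup = lambda sg: ["34.254.113.42", "172.31.44.160"]
print(derive_to_cidr_set(rule, lookup)["toCIDRSet"])
```

Because the list of IPs behind a security group changes over time, the real operator re-resolves the group periodically and updates the derivative policy, which is why the derivative CNP is owned by (and garbage-collected with) its parent.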
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _policy_verdicts:

*******************************
Creating Policies from Verdicts
*******************************

Policy Audit Mode configures Cilium to allow all traffic while logging all
connections that would otherwise be dropped by network policies. Policy Audit
Mode may be configured for the entire daemon using ``--policy-audit-mode=true``
or for individual Cilium Endpoints. When Policy Audit Mode is enabled, no
network policy is enforced, so this setting is **not recommended for production
deployment**. Policy Audit Mode supports auditing network policies implemented
at network layers 3 and 4. This guide walks through the process of creating
policies using Policy Audit Mode.

.. include:: gsg_requirements.rst
.. include:: gsg_sw_demo.rst

Scale down the deathstar Deployment
===================================

In this guide we're going to scale down the deathstar Deployment in order to
simplify the next steps:

.. code-block:: shell-session

   $ kubectl scale --replicas=1 deployment deathstar
   deployment.apps/deathstar scaled

Enable Policy Audit Mode (Entire Daemon)
========================================

To observe policy audit messages for all endpoints managed by this DaemonSet,
modify the Cilium ConfigMap and restart all daemons:

   .. tabs::

      .. group-tab:: Configure via kubectl

         .. code-block:: shell-session

            $ kubectl patch -n $CILIUM_NAMESPACE configmap cilium-config --type merge --patch '{"data":{"policy-audit-mode":"true"}}'
            configmap/cilium-config patched
            $ kubectl -n $CILIUM_NAMESPACE rollout restart ds/cilium
            daemonset.apps/cilium restarted
            $ kubectl -n $CILIUM_NAMESPACE rollout status ds/cilium
            Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
            daemon set "cilium" successfully rolled out

      .. group-tab:: Helm Upgrade

         If you installed Cilium via ``helm install``, then you can use ``helm
         upgrade`` to enable Policy Audit Mode:

         .. parsed-literal::

            $ helm upgrade cilium |CHART_RELEASE| \\
                --namespace $CILIUM_NAMESPACE \\
                --reuse-values \\
                --set policyAuditMode=true


Enable Policy Audit Mode (Specific Endpoint)
============================================

Cilium can enable Policy Audit Mode for a specific endpoint. This may be helpful
when enabling Policy Audit Mode for the entire daemon is too broad. Enabling it
per endpoint ensures that other endpoints managed by the same daemon are not
impacted.

This approach is meant to be temporary. **Restarting the Cilium pod will reset
the Policy Audit Mode to match the daemon's configuration.**

Policy Audit Mode is enabled for a given endpoint by modifying the endpoint
configuration via the ``cilium`` tool on the endpoint's Kubernetes node. The
steps include:

#. Determine the endpoint id on which Policy Audit Mode will be enabled.
#. Identify the Cilium pod running on the same Kubernetes node corresponding to the endpoint.
#. Using the Cilium pod above, modify the endpoint configuration by setting ``PolicyAuditMode=Enabled``.

The following shell commands perform these steps:

.. code-block:: shell-session

   $ PODNAME=$(kubectl get pods -l app.kubernetes.io/name=deathstar -o jsonpath='{.items[*].metadata.name}')
   $ NODENAME=$(kubectl get pod -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].spec.nodeName}")
   $ ENDPOINT=$(kubectl get cep -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].status.id}")
   $ CILIUM_POD=$(kubectl -n "$CILIUM_NAMESPACE" get pod --all-namespaces --field-selector spec.nodeName="$NODENAME" -lk8s-app=cilium -o jsonpath='{.items[*].metadata.name}')
   $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
       cilium-dbg endpoint config "$ENDPOINT" PolicyAuditMode=Enabled
   Endpoint 232 configuration updated successfully

We can check that Policy Audit Mode is enabled for this endpoint with:

.. code-block:: shell-session

   $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
       cilium-dbg endpoint get "$ENDPOINT" -o jsonpath='{[*].spec.options.PolicyAuditMode}'
   Enabled

.. _observe_policy_verdicts:

Observe policy verdicts
=======================

In this example, we are tasked with applying security policy for the deathstar.
First, from the Cilium pod we need to monitor the notifications for policy
verdicts using the Hubble CLI. We'll be monitoring for inbound traffic towards
the deathstar to identify it and determine whether to extend the network policy
to allow that traffic.

Apply a default-deny policy:

.. literalinclude:: ../../examples/minikube/sw_deny_policy.yaml

CiliumNetworkPolicies match on pod labels using an ``endpointSelector`` to
identify the sources and destinations to which the policy applies. The above
policy denies traffic sent to any pods with the label ``org=empire``.
Due to the Policy Audit Mode enabled above (either for the entire daemon, or
for just the ``deathstar`` endpoint), the traffic will not actually be denied
but will instead trigger policy verdict notifications.

To apply this policy, run:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/minikube/sw_deny_policy.yaml
    ciliumnetworkpolicy.cilium.io/empire-default-deny created

With the above policy, we will enable a default-deny posture on ingress to pods
with the label ``org=empire`` and enable the policy verdict notifications for
those pods. The same principle applies on egress as well.

Now let's send some traffic from the tiefighter to the deathstar:

.. code-block:: shell-session

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

We can check the policy verdict from the Cilium Pod:

.. code-block:: shell-session

   $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
       hubble observe flows -t policy-verdict --last 1
   Feb  7 12:53:39.168: default/tiefighter:54134 (ID:31028) -> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:none AUDITED (TCP Flags: SYN)

In the above example, we can see that the Pod ``deathstar-6fb5694d48-5hmds`` has
received traffic from the ``tiefighter`` Pod which doesn't match the policy
(``policy-verdict:none AUDITED``).

.. _create_network_policy:

Create the Network Policy
=========================

We can get more information about the flow with:

.. code-block:: shell-session

   $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
       hubble observe flows -t policy-verdict -o json --last 1

Given the above information, we now know the labels of the source and
destination Pods, the traffic direction, and the destination port. In this
case, we can see clearly that the source (i.e. the tiefighter Pod) is an empire
aircraft (as it has the ``org=empire`` label), so once we've determined that we
expect this traffic to arrive at the deathstar, we can form a policy to match
the traffic:

.. literalinclude:: ../../examples/minikube/sw_l3_l4_policy.yaml

To apply this L3/L4 policy, run:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/minikube/sw_l3_l4_policy.yaml
    ciliumnetworkpolicy.cilium.io/rule1 created

Now if we run the landing requests again,

.. code-block:: shell-session

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

we can then observe that the traffic which was previously audited to be dropped
by the policy is reported as allowed:

.. code-block:: shell-session

   $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
       hubble observe flows -t policy-verdict --last 1
   ...
   Feb  7 13:06:45.130: default/tiefighter:59824 (ID:31028) -> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:L3-L4 ALLOWED (TCP Flags: SYN)

Now the policy verdict states that the traffic would be allowed:
``policy-verdict:L3-L4 ALLOWED``. Success!

Disable Policy Audit Mode (Entire Daemon)
=========================================

These steps should be repeated for each connection in the cluster to ensure
that the network policy allows all of the expected traffic. The final step
after deploying the policy is to disable Policy Audit Mode again:

   .. tabs::

      .. group-tab:: Configure via kubectl

         .. code-block:: shell-session

            $ kubectl patch -n $CILIUM_NAMESPACE configmap cilium-config --type merge --patch '{"data":{"policy-audit-mode":"false"}}'
            configmap/cilium-config patched
            $ kubectl -n $CILIUM_NAMESPACE rollout restart ds/cilium
            daemonset.apps/cilium restarted
            $ kubectl -n $CILIUM_NAMESPACE rollout status ds/cilium
            Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
            daemon set "cilium" successfully rolled out

      .. group-tab:: Helm Upgrade

         .. parsed-literal::

            $ helm upgrade cilium |CHART_RELEASE| \\
                --namespace $CILIUM_NAMESPACE \\
                --reuse-values \\
                --set policyAuditMode=false


Disable Policy Audit Mode (Specific Endpoint)
=============================================

These steps are nearly identical to enabling Policy Audit Mode.

.. code-block:: shell-session

   $ PODNAME=$(kubectl get pods -l app.kubernetes.io/name=deathstar -o jsonpath='{.items[*].metadata.name}')
   $ NODENAME=$(kubectl get pod -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].spec.nodeName}")
   $ ENDPOINT=$(kubectl get cep -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].status.id}")
   $ CILIUM_POD=$(kubectl -n "$CILIUM_NAMESPACE" get pod --all-namespaces --field-selector spec.nodeName="$NODENAME" -lk8s-app=cilium -o jsonpath='{.items[*].metadata.name}')
   $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
       cilium-dbg endpoint config "$ENDPOINT" PolicyAuditMode=Disabled
   Endpoint 232 configuration updated successfully

Alternatively, **restarting the Cilium pod** will set the endpoint Policy Audit
Mode back to the DaemonSet configuration.


Verify Policy Audit Mode is Disabled
====================================

.. code-block:: shell-session

   $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
       cilium-dbg endpoint get "$ENDPOINT" -o jsonpath='{[*].spec.options.PolicyAuditMode}'
   Disabled

Now if we run the landing requests again, only the *tiefighter* pods with the
label ``org=empire`` should succeed:

.. code-block:: shell-session

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

And we can observe that the traffic was allowed by the policy:

.. code-block:: shell-session

    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
        hubble observe flows -t policy-verdict --from-pod tiefighter --last 1
    Feb  7 13:34:26.112: default/tiefighter:37314 (ID:31028) -> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:L3-L4 ALLOWED (TCP Flags: SYN)


This works as expected. Now the same request from an *xwing* Pod should fail:

.. code-block:: shell-session

    $ kubectl exec xwing -- curl --connect-timeout 3 -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    command terminated with exit code 28

This curl request should time out after three seconds; we can observe the
policy verdict with:

.. code-block:: shell-session

    $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
        hubble observe flows -t policy-verdict --from-pod xwing --last 1
    Feb  7 13:43:46.791: default/xwing:54842 (ID:22654) <> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:none DENIED (TCP Flags: SYN)


We hope you enjoyed the tutorial. Feel free to play more with the setup, follow
the `gs_http` guide, and reach out to us on `Cilium Slack`_ with any questions!

Clean-up
========

.. parsed-literal::

   $ kubectl delete -f \ |SCM_WEB|\/examples/minikube/http-sw-app.yaml
   $ kubectl delete cnp empire-default-deny rule1
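The workflow above, observing ``AUDITED`` verdicts and turning them into allow rules, can also be mechanized: collect the flows that only passed because audit mode was on, and use them as candidates for new policy rules. A small Python sketch follows; the regular expression is an assumption based on the one-line ``hubble observe`` output shown in this guide, not a stable interface (use ``hubble observe -o json`` for reliable parsing):

```python
import re

# Matches the single-line policy-verdict format shown in this guide, e.g.
# "default/tiefighter:54134 (ID:31028) -> default/deathstar-...:80 (ID:16530)
#  policy-verdict:none AUDITED (TCP Flags: SYN)"
VERDICT_RE = re.compile(
    r"(?P<src>\S+):(?P<sport>\d+) \(ID:\d+\) [-<]> "
    r"(?P<dst>\S+):(?P<dport>\d+) \(ID:\d+\) "
    r"policy-verdict:\S+ (?P<verdict>AUDITED|ALLOWED|DENIED)"
)

def audited_flows(lines):
    """Collect (src pod, dst pod, dst port) for flows that were only
    permitted because Policy Audit Mode is enabled."""
    flows = set()
    for line in lines:
        m = VERDICT_RE.search(line)
        if m and m.group("verdict") == "AUDITED":
            flows.add((m.group("src"), m.group("dst"), int(m.group("dport"))))
    return flows

log = [
    "Feb  7 12:53:39.168: default/tiefighter:54134 (ID:31028) -> "
    "default/deathstar-6fb5694d48-5hmds:80 (ID:16530) "
    "policy-verdict:none AUDITED (TCP Flags: SYN)",
]
print(audited_flows(log))
```

Each tuple that this yields is a connection you must either cover with an explicit allow rule (as done with ``sw_l3_l4_policy.yaml`` above) or accept will be dropped once Policy Audit Mode is disabled.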
  shell session       kubectl  n   CILIUM NAMESPACE  exec   CILIUM POD   c cilium agent             hubble observe flows  t policy verdict   last 1           Feb  7 13 06 45 130  default tiefighter 59824  ID 31028     default deathstar 6fb5694d48 5hmds 80  ID 16530  policy verdict L3 L4 ALLOWED  TCP Flags  SYN   Now the policy verdict states that the traffic would be allowed    policy verdict L3 L4 ALLOWED    Success   Disable Policy Audit Mode  Entire Daemon                                             These steps should be repeated for each connection in the cluster to ensure that the network policy allows all of the expected traffic  The final step after deploying the policy is to disable Policy Audit Mode again         tabs             group tab   Configure via kubectl              code block   shell session                kubectl patch  n  CILIUM NAMESPACE configmap cilium config   type merge   patch    data    policy audit mode   false                 configmap cilium config patched               kubectl  n  CILIUM NAMESPACE rollout restart ds cilium             daemonset apps cilium restarted               kubectl  n kube system rollout status ds cilium             Waiting for daemon set  cilium  rollout to finish  0 of 1 updated pods are available                daemon set  cilium  successfully rolled out           group tab   Helm Upgrade              parsed literal                  helm upgrade cilium  CHART RELEASE                       namespace  CILIUM NAMESPACE                      reuse values                      set policyAuditMode false   Disable Policy Audit Mode  Specific Endpoint                                                 These steps are nearly identical to enabling Policy Audit Mode      code block   shell session       PODNAME   kubectl get pods  l app kubernetes io name deathstar  o jsonpath    items    metadata name         NODENAME   kubectl get pod  o jsonpath    items     metadata name    PODNAME    spec nodeName         ENDPOINT   
kubectl get cep  o jsonpath    items     metadata name    PODNAME    status id         CILIUM POD   kubectl  n   CILIUM NAMESPACE  get pod   all namespaces   field selector spec nodeName   NODENAME   lk8s app cilium  o jsonpath    items    metadata name         kubectl  n   CILIUM NAMESPACE  exec   CILIUM POD   c cilium agent             cilium dbg endpoint config   ENDPOINT  PolicyAuditMode Disabled     Endpoint 232 configuration updated successfully  Alternatively    restarting the Cilium pod   will set the endpoint Policy Audit Mode to the daemon set configuration    Verify Policy Audit Mode is Disabled                                          code block   shell session       kubectl  n   CILIUM NAMESPACE  exec   CILIUM POD   c cilium agent             cilium dbg endpoint get   ENDPOINT   o jsonpath       spec options PolicyAuditMode      Disabled  Now if we run the landing requests again  only the  tiefighter  pods with the label   org empire   should succeed      code block   shell session        kubectl exec tiefighter    curl  s  XPOST deathstar default svc cluster local v1 request landing     Ship landed  And we can observe that the traffic was allowed by the policy      code block   shell session        kubectl  n   CILIUM NAMESPACE  exec   CILIUM POD   c cilium agent              hubble observe flows  t policy verdict   from pod tiefighter   last 1     Feb  7 13 34 26 112  default tiefighter 37314  ID 31028     default deathstar 6fb5694d48 5hmds 80  ID 16530  policy verdict L3 L4 ALLOWED  TCP Flags  SYN    This works as expected  Now the same request from an  xwing  Pod should fail      code block   shell session        kubectl exec xwing    curl   connect timeout 3  s  XPOST deathstar default svc cluster local v1 request landing     command terminated with exit code 28  This curl request should timeout after three seconds  we can observe the policy verdict with      code block   shell session        kubectl  n   CILIUM NAMESPACE  exec   CILIUM POD   c 
cilium agent              hubble observe flows  t policy verdict   from pod xwing   last 1     Feb  7 13 43 46 791  default xwing 54842  ID 22654     default deathstar 6fb5694d48 5hmds 80  ID 16530  policy verdict none DENIED  TCP Flags  SYN    We hope you enjoyed the tutorial   Feel free to play more with the setup  follow the  gs http  guide  and reach out to us on  Cilium Slack   with any questions   Clean up              parsed literal         kubectl delete  f    SCM WEB   examples minikube http sw app yaml      kubectl delete cnp empire default deny rule1"}
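The policy-creation loop in the guide above (watch ``hubble observe ... -o json`` for audited verdicts, read off the source/destination labels and the destination port, then write a matching rule) can be sketched in code. The flow layout below is a trimmed illustration for this sketch, not the full Hubble JSON schema:

```python
import json

# One audited policy-verdict flow, shaped roughly like `hubble observe -o json`
# output; the exact field layout here is a simplified assumption.
sample = json.loads("""
{"flow": {"verdict": "AUDIT",
          "source": {"namespace": "default",
                     "labels": ["k8s:class=tiefighter", "k8s:org=empire"]},
          "destination": {"namespace": "default",
                          "labels": ["k8s:class=deathstar", "k8s:org=empire"]},
          "l4": {"TCP": {"destination_port": 80}}}}
""")

def policy_from_flow(flow):
    """Turn one audited flow into a minimal L3/L4 allow-rule skeleton."""
    def labels(endpoint):
        # "k8s:class=deathstar" -> {"class": "deathstar"}
        return dict(l.removeprefix("k8s:").split("=", 1) for l in endpoint["labels"])
    return {
        "endpointSelector": {"matchLabels": labels(flow["destination"])},
        "ingress": [{
            "fromEndpoints": [{"matchLabels": labels(flow["source"])}],
            "toPorts": [{"ports": [{"port": str(flow["l4"]["TCP"]["destination_port"]),
                                    "protocol": "TCP"}]}],
        }],
    }

rule = policy_from_flow(sample["flow"])
```

The resulting dict mirrors the ``endpointSelector``/``fromEndpoints``/``toPorts`` structure of the L3/L4 policy applied in the tutorial; in practice you would review it and serialize it to YAML rather than apply it blindly.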
{"questions":"cilium Cilium environment running on your machine It is designed to take 15 30 docs cilium io This document serves as an introduction to using Cilium to enforce memcached aware security policies It walks through a single node Securing Memcached minutes","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n******************\nSecuring Memcached\n******************\n\nThis document serves as an introduction to using Cilium to enforce memcached-aware\nsecurity policies. It walks through a single-node\nCilium environment running on your machine. It is designed to take 15-30\nminutes.\n\n**NOTE:** memcached-aware policy support is still in beta.  It is not yet ready for\nproduction use. Additionally, the memcached-specific policy language is highly likely to\nchange in a future Cilium version.\n\n`Memcached <https:\/\/memcached.org\/>`_ is a high performance, distributed memory object caching system. It's simple yet powerful, and used by dynamic web applications to alleviate database load. Memcached is designed to work efficiently for a very large number of open connections. Thus, clients are encouraged to cache their connections rather\nthan incur the overhead of reopening TCP connections every time they need to store or retrieve data. Multiple clients can share the performance benefits of this distributed cache.\n\nThere are two kinds of data sent in the memcache protocol: text lines\nand unstructured (binary) data.  We will demonstrate clients using both types of protocols to communicate with a memcached server.\n\n.. include:: gsg_requirements.rst\n\nStep 2: Deploy the Demo Application\n===================================\n\nNow that we have Cilium deployed and ``kube-dns`` operating correctly we can\ndeploy our demo memcached application.  
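Before deploying, it helps to see what the text flavor of the protocol looks like on the wire. A minimal sketch of the *text protocol* framing, the same byte strings the demo later pipes through ``nc`` as ``$SETXC``/``$GETXC`` (key names, flags, and TTL values here are illustrative):

```python
# Build memcached text-protocol request frames by hand.
def set_frame(key: str, value: str, ttl: int = 1200, flags: int = 0) -> bytes:
    data = value.encode()
    # "set <key> <flags> <exptime> <bytes>\r\n<data>\r\n"
    return f"set {key} {flags} {ttl} {len(data)}\r\n".encode() + data + b"\r\n"

def get_frame(key: str) -> bytes:
    # "get <key>\r\n"
    return f"get {key}\r\n".encode()

frame = set_frame("xwing-coord", "8893.34,234.3290")
# Note: the demo's $SETXC additionally appends "quit\r\n" so the nc session closes.
```

The binary protocol used by the Python clients below wraps the same operations in fixed-width binary headers instead of text lines.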
Since our first\n`HTTP-aware Cilium demo <https:\/\/cilium.io\/blog\/2017\/5\/4\/demo-may-the-force-be-with-you\/>`_ was based on Star Wars, we continue with the theme for the memcached demo as well.\n\nEver wonder how the Alliance Fleet manages the changing positions of their ships? The Alliance Fleet uses memcached to store the coordinates of their ships. The Alliance Fleet leverages the memcached-svc service implemented as a memcached server. Each ship in the fleet constantly updates its coordinates and has the ability to get the coordinates of other ships in the Alliance Fleet.\n\nIn this simple example, the Alliance Fleet uses a memcached server for their starfighters to store their own supergalactic coordinates and get those of other starfighters.\n\nIn order to avoid collisions and protect against compromised starfighters, memcached commands are limited to gets for any starfighter coordinates and sets only to a key specific to the starfighter. Thus the following operations are allowed:\n\n- **A-wing**: can set coordinates to key \"awing-coord\" and get the key coordinates.\n- **X-wing**: can set coordinates to key \"xwing-coord\" and get the key coordinates.\n- **Alliance-Tracker**: can get any coordinates but not set any.\n\nTo keep the setup small, we will launch a small number of pods to represent a larger environment:\n\n- **memcached-server** : A Kubernetes service represented by a single pod running a memcached server (label app=memcd-server).\n- **a-wing** memcached binary client : A pod representing an A-wing starfighter, which can update its coordinates and read them via the binary memcached protocol (label app=a-wing).\n- **x-wing** memcached text-based client : A pod representing an X-wing starfighter, which can update its coordinates and read them via the text-based memcached protocol (label app=x-wing).\n- **alliance-tracker** memcached binary client : A pod representing the Alliance Fleet Tracker, able to read the coordinates of all starfighters 
(label name=fleet-tracker).\n\n\nMemcached clients access the *memcached-server* on TCP port 11211 and send memcached protocol messages to it.\n\n.. image:: images\/cilium_memcd_gsg_topology.png\n\n\nThe file ``memcd-sw-app.yaml`` contains a Kubernetes Deployment for each of the pods described\nabove, as well as a Kubernetes Service *memcached-server* for the Memcached server.\n\n.. parsed-literal::\n\n    $ kubectl create -f \\ |SCM_WEB|\\\/examples\/kubernetes-memcached\/memcd-sw-app.yaml\n    deployment.apps\/memcached-server created\n    service\/memcached-server created\n    deployment.apps\/a-wing created\n    deployment.apps\/x-wing created\n    deployment.apps\/alliance-tracker created\n\nKubernetes will deploy the pods and service in the background.\nRunning ``kubectl get svc,pods`` will inform you about the progress of the operation.\nEach pod will go through several states until it reaches ``Running`` at which\npoint the setup is ready.\n\n.. code-block:: shell-session\n\n    $ kubectl get svc,pods\n    NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE\n    service\/kubernetes         ClusterIP   10.96.0.1    <none>        443\/TCP     31m\n    service\/memcached-server   ClusterIP   None         <none>        11211\/TCP   14m\n\n    NAME                                    READY   STATUS    RESTARTS   AGE\n    pod\/a-wing-67db8d5fcc-dpwl4             1\/1     Running   0          14m\n    pod\/alliance-tracker-6b6447bd69-sz5hz   1\/1     Running   0          14m\n    pod\/memcached-server-bdbfb87cd-8tdh7    1\/1     Running   0          14m\n    pod\/x-wing-fd5dfb9d9-wrtwn              1\/1     Running   0          14m\n\nWe suggest having a main terminal window to execute *kubectl* commands and two additional terminal windows dedicated to accessing the **A-Wing** and **Alliance-Tracker**, which use a python library to communicate to the memcached server using the binary protocol.\n\nIn **all three** terminal windows, set 
some handy environment variables for the demo with the following script:\n\n.. parsed-literal::\n\n    $ curl -s \\ |SCM_WEB|\\\/examples\/kubernetes-memcached\/memcd-env.sh > memcd-env.sh\n    $ source memcd-env.sh\n\n\nIn the terminal window dedicated to the A-wing pod, exec in, use python to import the binary memcached library and set the client connection to the memcached server:\n\n.. code-block:: shell-session\n\n    $ kubectl exec -ti $AWING_POD -- sh\n    # python\n    Python 3.7.0 (default, Sep  5 2018, 03:25:31)\n    [GCC 6.3.0 20170516] on linux\n    Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n    >>> import bmemcached\n    >>> client = bmemcached.Client((\"memcached-server:11211\", ))\n\nIn the terminal window dedicated to the Alliance-Tracker, exec in, use python to import the binary memcached library and set the client connection to the memcached server:\n\n.. code-block:: shell-session\n\n    $ kubectl exec -ti $TRACKER_POD -- sh\n    # python\n    Python 3.7.0 (default, Sep  5 2018, 03:25:31)\n    [GCC 6.3.0 20170516] on linux\n    Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n    >>> import bmemcached\n    >>> client = bmemcached.Client((\"memcached-server:11211\", ))\n\n\n\nStep 3: Test Basic Memcached Access\n===================================\n\nLet's show that each client is able to access the memcached server. Execute the following to have the A-wing and X-wing starfighters update the Alliance Fleet memcached-server with their respective supergalactic coordinates:\n\nA-wing will access the memcached-server using the *binary protocol*. In your terminal window for A-Wing, set A-wing's coordinates:\n\n.. 
code-block:: python\n\n    >>> client.set(\"awing-coord\",\"4309.432,918.980\",time=2400)\n    True\n    >>> client.get(\"awing-coord\")\n    '4309.432,918.980'\n\n\nIn your main terminal window, have the X-wing starfighter set its coordinates using the text-based protocol to the memcached server.\n\n.. code-block:: shell-session\n\n    $ kubectl exec $XWING_POD -- sh -c \"echo -en \\\\\"$SETXC\\\\\" | nc memcached-server 11211\"\n    STORED\n    $ kubectl exec $XWING_POD -- sh -c \"echo -en \\\\\"$GETXC\\\\\" | nc memcached-server 11211\"\n    VALUE xwing-coord 0 16\n    8893.34,234.3290\n    END\n\nCheck that the Alliance Fleet Tracker is able to get all starfighters' coordinates in your terminal window for the Alliance-Tracker:\n\n.. code-block:: python\n\n    >>> client.get(\"awing-coord\")\n    '4309.432,918.980'\n    >>> client.get(\"xwing-coord\")\n    '8893.34,234.3290'\n\n\nStep 4:  The Danger of a Compromised Memcached Client\n=====================================================\n\nImagine if a starfighter ship is captured. Should the starfighter be able to set the coordinates of other ships, or get the coordinates of all other ships? Or if the Alliance-Tracker is compromised, can it modify the coordinates of any starfighter ship?\nIf every client has access to the Memcached API on port 11211, all have over-privileged access until further locked down.\n\nWith L4 port access to the memcached server, all starfighters could write to any key\/ship and read all ship coordinates. In your main terminal, execute:\n\n.. code-block:: shell-session\n\n    $ kubectl exec $XWING_POD -- sh -c \"echo -en \\\\\"$GETAC\\\\\" | nc memcached-server 11211\"\n    VALUE awing-coord 0 16\n    4309.432,918.980\n    END\n\nIn your A-Wing terminal window, confirm the over-privileged access:\n\n.. 
code-block:: python\n\n    >>> client.get(\"xwing-coord\")\n    '8893.34,234.3290'\n    >>> client.set(\"xwing-coord\",\"0.0,0.0\",time=2400)\n    True\n    >>> client.get(\"xwing-coord\")\n    '0.0,0.0'\n\nFrom A-Wing, set the X-Wing coordinates back to their proper position:\n\n.. code-block:: python\n\n    >>> client.set(\"xwing-coord\",\"8893.34,234.3290\",time=2400)\n    True\n\nThus, the Alliance Fleet Tracking System could be left vulnerable if a single starfighter ship is compromised.\n\nStep 5: Securing Access to Memcached with Cilium\n================================================\n\nCilium helps lock down Memcached servers to ensure clients have secure access to them. Beyond just providing access to port 11211,\nCilium can enforce specific key-value access by understanding both the text-based and the unstructured (binary) memcached protocol.\n\nWe'll create a policy that limits the scope of what a starfighter can access and write. Thus, only the intended memcached protocol calls to the memcached-server can be made.\n\nIn this example, we'll only allow A-Wing to get and set the key \"awing-coord\", only allow X-Wing to get and set key \"xwing-coord\", and allow Alliance-Tracker to only get coordinates.\n\n\n.. image:: images\/cilium_memcd_gsg_attack.png\n\nHere is the *CiliumNetworkPolicy* rule that limits the access of starfighters to their own key and allows Alliance Tracker to get any coordinate:\n\n.. literalinclude:: ..\/..\/examples\/kubernetes-memcached\/memcd-sw-security-policy.yaml\n\nA *CiliumNetworkPolicy* contains a list of rules that define allowed memcached commands, and requests\nthat do not match any rules are denied. The rules explicitly match connections destined to the Memcached Service on TCP 11211.\n\nThe rules apply to inbound (i.e., \"ingress\") connections bound for memcached-server pods (as indicated by ``app:memcached-server``\nin the \"endpointSelector\" section).  
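Taken together, the intended access matrix is small enough to model directly. Below is a plain-Python sketch of the semantics the policy is meant to enforce; it is a model for reasoning about who may do what, not Cilium's actual enforcement path:

```python
# The per-client access matrix described in the text: which memcached command
# each client may issue, and on which keys. None means "any key".
RULES = {
    "a-wing":           {"get": {"awing-coord"}, "set": {"awing-coord"}},
    "x-wing":           {"get": {"xwing-coord"}, "set": {"xwing-coord"}},
    "alliance-tracker": {"get": None,            "set": set()},
}

def allowed(client: str, command: str, key: str) -> bool:
    """Return True if the modeled policy permits this command on this key."""
    keys = RULES.get(client, {}).get(command, set())
    return keys is None or key in keys
```

Walking a planned change through such a model is a quick sanity check before editing the policy YAML, e.g. ``allowed("alliance-tracker", "set", "awing-coord")`` is ``False``.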
The rules apply differently depending on the\nclient pod: ``app:a-wing``, ``app:x-wing``, or ``name:fleet-tracker`` as indicated by the \"fromEndpoints\" section.\n\nWith the policy in place, A-wings can only get and set the key \"awing-coord\"; similarly the X-Wing can only get and set \"xwing-coord\". The Alliance Tracker can only get coordinates - not set them.\n\nApply this Memcached-aware network security policy using ``kubectl`` in your main terminal window:\n\n.. parsed-literal::\n\n    $ kubectl create -f \\ |SCM_WEB|\\\/examples\/kubernetes-memcached\/memcd-sw-security-policy.yaml\n\nIf we then try to perform the attacks from the *X-wing* pod in the main terminal window, we'll see that they are denied:\n\n.. code-block:: shell-session\n\n    $ kubectl exec $XWING_POD -- sh -c \"echo -en \\\\\"$GETAC\\\\\" | nc memcached-server 11211\"\n    CLIENT_ERROR access denied\n\nFrom the A-Wing terminal window, we can confirm that *A-wing* is denied if it goes outside the bounds of its allowed calls. You may need to run the ``client.get`` command twice for the python call:\n\n.. code-block:: python\n\n    >>> client.get(\"awing-coord\")\n    '4309.432,918.980'\n    >>> client.get(\"xwing-coord\")\n    Traceback (most recent call last):\n      File \"<stdin>\", line 1, in <module>\n      File \"\/usr\/local\/lib\/python3.7\/site-packages\/bmemcached\/client\/replicating.py\", line 42, in get\n        value, cas = server.get(key)\n      File \"\/usr\/local\/lib\/python3.7\/site-packages\/bmemcached\/protocol.py\", line 440, in get\n        raise MemcachedException('Code: %d Message: %s' % (status, extra_content), status)\n    bmemcached.exceptions.MemcachedException: (\"Code: 8 Message: b'access denied'\", 8)\n\nSimilarly, the Alliance-Tracker cannot set any coordinates, which you can attempt from the Alliance-Tracker terminal window:\n\n.. 
code-block:: python\n\n    >>> client.get(\"xwing-coord\")\n    '8893.34,234.3290'\n    >>> client.set(\"awing-coord\",\"0.0,0.0\",time=1200)\n    Traceback (most recent call last):\n      File \"<stdin>\", line 1, in <module>\n      File \"\/usr\/local\/lib\/python3.7\/site-packages\/bmemcached\/client\/replicating.py\", line 112, in set\n        returns.append(server.set(key, value, time, compress_level=compress_level))\n      File \"\/usr\/local\/lib\/python3.7\/site-packages\/bmemcached\/protocol.py\", line 604, in set\n        return self._set_add_replace('set', key, value, time, compress_level=compress_level)\n      File \"\/usr\/local\/lib\/python3.7\/site-packages\/bmemcached\/protocol.py\", line 583, in _set_add_replace\n        raise MemcachedException('Code: %d Message: %s' % (status, extra_content), status)\n    bmemcached.exceptions.MemcachedException: (\"Code: 8 Message: b'access denied'\", 8)\n\nThe policy is working as expected.\n\nWith the CiliumNetworkPolicy in place, the allowed Memcached calls are still allowed from the respective pods.\n\nIn the main terminal window, execute:\n\n.. code-block:: shell-session\n\n  $ kubectl exec $XWING_POD -- sh -c \"echo -en \\\\\"$GETXC\\\\\" | nc memcached-server 11211\"\n  VALUE xwing-coord 0 16\n  8893.34,234.3290\n  END\n  $ SETXC=\"set xwing-coord 0 1200 16\\\\r\\\\n9854.34,926.9187\\\\r\\\\nquit\\\\r\\\\n\"\n  $ kubectl exec $XWING_POD -- sh -c \"echo -en \\\\\"$SETXC\\\\\" | nc memcached-server 11211\"\n  STORED\n  $ kubectl exec $XWING_POD -- sh -c \"echo -en \\\\\"$GETXC\\\\\" | nc memcached-server 11211\"\n  VALUE xwing-coord 0 16\n  9854.34,926.9187\n  END\n\nIn the A-Wing terminal window, execute:\n\n.. code-block:: python\n\n  >>> client.set(\"awing-coord\",\"9852.542,892.1318\",time=1200)\n  True\n  >>> client.get(\"awing-coord\")\n  '9852.542,892.1318'\n  >>> exit()\n  # exit\n\nIn the Alliance-Tracker terminal window, execute:\n\n.. 
code-block:: python\n\n  >>> client.get(\"awing-coord\")\n  '9852.542,892.1318'\n  >>> client.get(\"xwing-coord\")\n  '9854.34,926.9187'\n  >>> exit()\n  # exit\n\n\nStep 6: Clean Up\n================\n\nYou have now installed Cilium, deployed a demo app, and tested\nL7 memcached-aware network security policies.  To clean up, in your main terminal window, run:\n\n.. parsed-literal::\n\n   $ kubectl delete -f \\ |SCM_WEB|\\\/examples\/kubernetes-memcached\/memcd-sw-app.yaml\n   $ kubectl delete cnp secure-fleet\n\nFor some handy memcached references, see below:\n\n* https:\/\/memcached.org\/\n* https:\/\/github.com\/memcached\/memcached\/blob\/master\/doc\/protocol.txt\n* https:\/\/python-binary-memcached.readthedocs.io\/en\/latest\/intro\/","site":"cilium","answers_cleaned":"   only   not  epub or latex or html       WARNING  You are looking at unreleased Cilium documentation      Please use the official rendered version released here      https   docs cilium io                     Securing Memcached                     This document serves as an introduction to using Cilium to enforce memcached aware security policies  It walks through a single node Cilium environment running on your machine  It is designed to take 15 30 minutes     NOTE    memcached aware policy support is still in beta   It is not yet ready for production use  Additionally  the memcached specific policy language is highly likely to change in a future Cilium version    Memcached  https   memcached org     is a high performance  distributed memory object caching system  It s simple yet powerful  and used by dynamic web applications to alleviate database load  Memcached is designed to work efficiently for a very large number of open connections  Thus  clients are encouraged to cache their connections rather than the overhead of reopening TCP connections every time they need to store or retrieve data  Multiple clients can benefit from this distributed cache s performance benefits   There are two 
kinds of data sent in the memcache protocol  text lines and unstructured  binary  data   We will demonstrate clients using both types of protocols to communicate with a memcached server      include   gsg requirements rst  Step 2  Deploy the Demo Application                                      Now that we have Cilium deployed and   kube dns   operating correctly we can deploy our demo memcached application   Since our first  HTTP aware Cilium demo  https   cilium io blog 2017 5 4 demo may the force be with you     was based on Star Wars  we continue with the theme for the memcached demo as well   Ever wonder how the Alliance Fleet manages the changing positions of their ships  The Alliance Fleet uses memcached to store the coordinates of their ships  The Alliance Fleet leverages the memcached svc service implemented as a memcached server  Each ship in the fleet constantly updates its coordinates and has the ability to get the coordinates of other ships in the Alliance Fleet   In this simple example  the Alliance Fleet uses a memcached server for their starfighters to store their own supergalatic coordinates and get those of other starfighters   In order to avoid collisions and protect against compromised starfighters  memcached commands are limited to gets for any starfighter coordinates and sets only to a key specific to the starfighter  Thus the following operations are allowed       A wing    can set coordinates to key  awing coord  and get the key coordinates      X wing    can set coordinates to key  xwing coord  and get the key coordinates      Alliance Tracker    can get any coordinates but not set any   To keep the setup small  we will launch a small number of pods to represent a larger environment       memcached server     A Kubernetes service represented by a single pod running a memcached server  label app memcd server       a wing   memcached binary client   A pod representing an A wing starfighter  which can update its coordinates and read it via the 
binary memcached protocol  label app a wing       x wing   memcached text based client   A pod representing an X wing starfighter  which can update its coordinates and read it via the text based memcached protocol  label app x wing       alliance tracker   memcached binary client   A pod representing the Alliance Fleet Tracker  able to read the coordinates of all starfighters  label name fleet tracker     Memcached clients access the  memcached server  on TCP port 11211 and send memcached protocol messages to it      image   images cilium memcd gsg topology png   The file   memcd sw app yaml   contains a Kubernetes Deployment for each of the pods described above  as well as a Kubernetes Service  memcached server  for the Memcached server      parsed literal          kubectl create  f    SCM WEB   examples kubernetes memcached memcd sw app yaml     deployment apps memcached server created     service memcached server created     deployment apps a wing created     deployment apps x wing created     deployment apps alliance tracker created  Kubernetes will deploy the pods and service in the background  Running   kubectl get svc pods   will inform you about the progress of the operation  Each pod will go through several states until it reaches   Running   at which point the setup is ready      code block   shell session        kubectl get svc pods     NAME                       TYPE        CLUSTER IP   EXTERNAL IP   PORT S      AGE     service kubernetes         ClusterIP   10 96 0 1     none         443 TCP     31m     service memcached server   ClusterIP   None          none         11211 TCP   14m      NAME                                    READY   STATUS    RESTARTS   AGE     pod a wing 67db8d5fcc dpwl4             1 1     Running   0          14m     pod alliance tracker 6b6447bd69 sz5hz   1 1     Running   0          14m     pod memcached server bdbfb87cd 8tdh7    1 1     Running   0          14m     pod x wing fd5dfb9d9 wrtwn              1 1     Running   0    
      14m  We suggest having a main terminal window to execute  kubectl  commands and two additional terminal windows dedicated to accessing the   A Wing   and   Alliance Tracker    which use a python library to communicate to the memcached server using the binary protocol   In   all three   terminal windows  set some handy environment variables for the demo with the following script      parsed literal          curl  s    SCM WEB   examples kubernetes memcached memcd env sh   memcd env sh       source memcd env sh   In the terminal window dedicated for the A wing pod  exec in  use python to import the binary memcached library and set the client connection to the memcached server      code block   shell session        kubectl exec  ti  AWING POD    sh       python     Python 3 7 0  default  Sep  5 2018  03 25 31       GCC 6 3 0 20170516  on linux     Type  help    copyright    credits  or  license  for more information          import bmemcached         client   bmemcached Client   memcached server 11211       In the terminal window dedicated for the Alliance Tracker  exec in  use python to import the binary memcached library and set the client connection to the memcached server      code block   shell session        kubectl exec  ti  TRACKER POD    sh       python     Python 3 7 0  default  Sep  5 2018  03 25 31       GCC 6 3 0 20170516  on linux     Type  help    copyright    credits  or  license  for more information          import bmemcached         client   bmemcached Client   memcached server 11211         Step 3  Test Basic Memcached Access                                      Let s show that each client is able to access the memcached server  Execute the following to have the A wing and X wing starfighters update the Alliance Fleet memcached server with their respective supergalatic coordinates   A wing will access the memcached server using the  binary protocol   In your terminal window for A Wing  set A wing s coordinates      code block   python         
.. code-block:: python

    >>> client.set("awing-coord", "4309.432,918.980", time=2400)
    True
    >>> client.get("awing-coord")
    '4309.432,918.980'

In your main terminal window, have the X-Wing starfighter set its coordinates
using the text-based protocol to the memcached server:

.. code-block:: shell-session

    $ kubectl exec $XWING_POD -- sh -c "echo -en \"$SETXC\" | nc memcached-server 11211"
    STORED
    $ kubectl exec $XWING_POD -- sh -c "echo -en \"$GETXC\" | nc memcached-server 11211"
    VALUE xwing-coord 0 16
    8893.34,234.3290
    END

Check that the Alliance Fleet Tracker is able to get all starfighters'
coordinates in your terminal window for the Alliance Tracker:

.. code-block:: python

    >>> client.get("awing-coord")
    '4309.432,918.980'
    >>> client.get("xwing-coord")
    '8893.34,234.3290'

Step 4: The Danger of a Compromised Memcached Client
====================================================

Imagine if a starfighter ship is captured. Should the starfighter be able to
set the coordinates of other ships, or get the coordinates of all other ships?
Or if the Alliance Tracker is compromised, can it modify the coordinates of any
starfighter ship? If every client has access to the Memcached API on port
11211, all have over-privileged access until further locked down. With L4 port
access to the memcached server, all starfighters could write to any ship's key
and read all ship coordinates. In your main terminal, execute:

.. code-block:: shell-session

    $ kubectl exec $XWING_POD -- sh -c "echo -en \"$GETAC\" | nc memcached-server 11211"
    VALUE awing-coord 0 16
    4309.432,918.980
    END

In your A-Wing terminal window, confirm the over-privileged access:

.. code-block:: python

    >>> client.get("xwing-coord")
    '8893.34,234.3290'
    >>> client.set("xwing-coord", "0,0,0,0", time=2400)
    True
    >>> client.get("xwing-coord")
    '0,0,0,0'

From A-Wing, set the X-Wing coordinates back to their proper position:

.. code-block:: python

    >>> client.set("xwing-coord", "8893.34,234.3290", time=2400)
    True

Thus, the Alliance Fleet Tracking System could be made weak if a single
starfighter ship is compromised.

Step 5: Securing Access to Memcached with Cilium
================================================

Cilium helps lock down Memcached servers to ensure clients have secure access
to them. Beyond simply providing access to port 11211, Cilium can enforce
specific key/value access by understanding both the text-based and the
unstructured (binary) memcached protocols.

We'll create a policy that limits the scope of what a starfighter can access
and write, so that only the intended memcached protocol calls to the memcached
server can be made. In this example, we'll only allow A-Wing to get and set the
key "awing-coord", only allow X-Wing to get and set the key "xwing-coord", and
allow the Alliance Tracker only to get coordinates.

.. image:: images/cilium_memcd_gsg_attack.png

Here is the *CiliumNetworkPolicy* rule that limits the access of starfighters
to their own key and allows the Alliance Tracker to get any coordinate:

.. literalinclude:: ../../examples/kubernetes-memcached/memcd-sw-security-policy.yaml

A *CiliumNetworkPolicy* contains a list of rules that define allowed memcached
commands; requests that do not match any rule are denied. The rules explicitly
match connections destined to the Memcached Service on TCP 11211. The rules
apply to inbound (i.e., "ingress") connections bound for memcached-server pods,
as indicated by "app: memcached-server" in the "endpointSelector" section. The
rules apply differently depending on the client pod: "app: a-wing",
"app: x-wing", or "name: fleet-tracker", as indicated by the "fromEndpoints"
section. With the policy in place, A-Wings can only get and set the key
"awing-coord"; similarly, the X-Wing can only get and set "xwing-coord". The
Alliance Tracker can only get coordinates, not set them.

Apply this Memcached-aware network security policy using ``kubectl`` in
your main terminal window:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-memcached/memcd-sw-security-policy.yaml

If we then try to perform the attacks from the *xwing* pod in the main terminal
window, we'll see that they are denied:

.. code-block:: shell-session

    $ kubectl exec $XWING_POD -- sh -c "echo -en \"$GETAC\" | nc memcached-server 11211"
    CLIENT_ERROR access denied

From the A-Wing terminal window, we can confirm that *a-wing* is also denied if
it goes outside the bounds of its allowed calls. (You may need to run the
``client.get`` command twice for the Python call.)

.. code-block:: python

    >>> client.get("awing-coord")
    '4309.432,918.980'
    >>> client.get("xwing-coord")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.7/site-packages/bmemcached/client/replicating.py", line 42, in get
        value, cas = server.get(key)
      File "/usr/local/lib/python3.7/site-packages/bmemcached/protocol.py", line 440, in get
        raise MemcachedException('Code: %d Message: %s' % (status, extra_content), status)
    bmemcached.exceptions.MemcachedException: ("Code: 8 Message: b'access denied'", 8)

Similarly, the Alliance Tracker cannot set any coordinates, which you can
attempt from the Alliance Tracker terminal window:

.. code-block:: python

    >>> client.get("xwing-coord")
    '8893.34,234.3290'
    >>> client.set("awing-coord", "0,0,0,0", time=1200)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.7/site-packages/bmemcached/client/replicating.py", line 112, in set
        returns.append(server.set(key, value, time, compress_level=compress_level))
      File "/usr/local/lib/python3.7/site-packages/bmemcached/protocol.py", line 604, in set
        return self._set_add_replace('set', key, value, time, compress_level=compress_level)
      File "/usr/local/lib/python3.7/site-packages/bmemcached/protocol.py", line 583, in _set_add_replace
        raise MemcachedException('Code: %d Message: %s' % (status, extra_content), status)
    bmemcached.exceptions.MemcachedException: ("Code: 8 Message: b'access denied'", 8)

The policy is working as expected: with the CiliumNetworkPolicy in place, the
allowed Memcached calls are still permitted from the respective pods.

In the main terminal window, execute:

.. code-block:: shell-session

    $ kubectl exec $XWING_POD -- sh -c "echo -en \"$GETXC\" | nc memcached-server 11211"
    VALUE xwing-coord 0 16
    8893.34,234.3290
    END
    $ SETXC="set xwing-coord 0 1200 16\r\n9854.34,926.9187\r\nquit\r\n"
    $ kubectl exec $XWING_POD -- sh -c "echo -en \"$SETXC\" | nc memcached-server 11211"
    STORED
    $ kubectl exec $XWING_POD -- sh -c "echo -en \"$GETXC\" | nc memcached-server 11211"
    VALUE xwing-coord 0 16
    9854.34,926.9187
    END

In the A-Wing terminal window, execute:

.. code-block:: python

    >>> client.set("awing-coord", "9852.542,892.1318", time=1200)
    True
    >>> client.get("awing-coord")
    '9852.542,892.1318'
    >>> exit()
    $ exit

In the Alliance Tracker terminal window, execute:

.. code-block:: python

    >>> client.get("awing-coord")
    '9852.542,892.1318'
    >>> client.get("xwing-coord")
    '9854.34,926.9187'
    >>> exit()
    $ exit

Step 6: Clean Up
================

You have now installed Cilium, deployed a demo app, and tested L7
memcached-aware network security policies. To clean up, run the following in
your main terminal window:

.. parsed-literal::

    $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-memcached/memcd-sw-app.yaml
    $ kubectl delete cnp secure-fleet

For some handy memcached references, see below:

- https://memcached.org
- https://github.com/memcached/memcached/blob/master/doc/protocol.txt
- https://python-binary-memcached.readthedocs.io/en/latest/intro/
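The ``$SETXC`` and ``$GETXC`` environment variables used throughout this guide hold raw memcached text-protocol commands that are piped to the server via ``nc``. As an aside (not part of the original guide), the framing of those commands can be illustrated with a short sketch; ``build_set`` and ``build_get`` are hypothetical helper names, and the wire format follows the memcached protocol reference linked above:

```python
def build_set(key: str, value: str, exptime: int, flags: int = 0) -> str:
    """Frame a memcached text-protocol 'set' command.

    Wire format: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
    """
    data = value.encode("utf-8")
    return f"set {key} {flags} {exptime} {len(data)}\r\n{value}\r\n"


def build_get(key: str) -> str:
    """Frame a 'get' command: get <key>\r\n"""
    return f"get {key}\r\n"


# Reproduces the payload stored in $SETXC above: 16 data bytes,
# flags 0, 1200-second expiry.
print(repr(build_set("xwing-coord", "9854.34,926.9187", 1200)))
print(repr(build_get("xwing-coord")))
```

Piping the resulting string to port 11211 (as the ``kubectl exec … | nc`` commands do) yields the ``STORED`` and ``VALUE``/``END`` responses shown in the transcripts.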
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

*****************************
Securing a Cassandra Database
*****************************

This document serves as an introduction to using Cilium to enforce Cassandra-aware
security policies. It is a detailed walk-through of getting a single-node
Cilium environment running on your machine. It is designed to take 15-30
minutes.

**NOTE:** Cassandra-aware policy support is still in beta phase. It is not yet ready for
production use. Additionally, the Cassandra-specific policy language is highly likely to
change in a future Cilium version.

.. include:: gsg_requirements.rst

Deploy the Demo Application
===========================

Now that we have Cilium deployed and ``kube-dns`` operating correctly, we can
deploy our demo Cassandra application. Since our first
`HTTP-aware Cilium Star Wars demo <https://cilium.io/blog/2017/5/4/demo-may-the-force-be-with-you/>`_
showed how the Galactic Empire used HTTP-aware security policies to protect the Death Star from the
Rebel Alliance, this Cassandra demo is Star Wars-themed as well.

`Apache Cassandra <http://cassandra.apache.org>`_ is a popular NoSQL database focused on
delivering high-performance transactions (especially on writes) without sacrificing availability or scale.
Cassandra operates as a cluster of servers, and Cassandra clients query these servers via
the `native Cassandra protocol <https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v4.spec>`_.
Cilium understands the Cassandra protocol, and thus is able to provide deep visibility and control over
which clients are able to access particular tables inside a Cassandra cluster, and which actions
(e.g., "select", "insert", "update", "delete") can be performed on those tables.

With Cassandra, each table belongs to a "keyspace", allowing multiple groups to use a single cluster without conflicting.
Cassandra queries specify the full table name qualified by the keyspace using the syntax "<keyspace>.<table>".

In our simple example, the Empire uses a Cassandra cluster to store two different types of information:

- **Employee Attendance Records**: Used to store daily attendance data (attendance.daily_records).
- **Deathstar Scrum Reports**: Daily scrum reports from the teams working on the Deathstar (deathstar.scrum_reports).

To keep the setup small, we will just launch a small number of pods to represent this setup:

- **cass-server**: A single pod running the Cassandra service, representing a Cassandra cluster
  (label app=cass-server).
- **empire-hq**: A pod representing the Empire's Headquarters, which is the only pod that should
  be able to read all attendance data, or read/write the Deathstar scrum notes (label app=empire-hq).
- **empire-outpost**: A random outpost in the empire. It should be able to insert employee attendance
  records, but not read records for other empire facilities. It also should not have any access to the
  deathstar keyspace (label app=empire-outpost).

All pods other than *cass-server* are Cassandra clients, which need access to the *cass-server*
container on TCP port 9042 in order to send Cassandra protocol messages.

.. image:: images/cilium_cass_gsg_topology.png

The file ``cass-sw-app.yaml`` contains a Kubernetes Deployment for each of the pods described
above, as well as a Kubernetes Service *cassandra-svc* for the Cassandra cluster.

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-sw-app.yaml
    deployment.apps/cass-server created
    service/cassandra-svc created
    deployment.apps/empire-hq created
    deployment.apps/empire-outpost created

Kubernetes will deploy the pods and service in the background.
Running ``kubectl get svc,pods`` will inform you about the progress of the operation.
Each pod will go through several states until it reaches ``Running``, at which
point the setup is ready.

.. code-block:: shell-session

    $ kubectl get svc,pods
    NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    service/cassandra-svc   ClusterIP   None         <none>        9042/TCP   1m
    service/kubernetes      ClusterIP   10.96.0.1    <none>        443/TCP    15h

    NAME                                  READY     STATUS    RESTARTS   AGE
    pod/cass-server-5674d5b946-x8v4j      1/1       Running   0          1m
    pod/empire-hq-c494c664d-xmvdl         1/1       Running   0          1m
    pod/empire-outpost-68bf76858d-flczn   1/1       Running   0          1m


Step 3: Test Basic Cassandra Access
===================================

First, we'll create the keyspaces and tables mentioned above, and populate them with some initial data:

.. parsed-literal::

   $ curl -s \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-populate-tables.sh | bash

Next, create two environment variables that refer to the *empire-hq* and *empire-outpost* pods:

.. code-block:: shell-session

   $ HQ_POD=$(kubectl get pods -l app=empire-hq -o jsonpath='{.items[0].metadata.name}')
   $ OUTPOST_POD=$(kubectl get pods -l app=empire-outpost -o jsonpath='{.items[0].metadata.name}')

Now we will run the ``cqlsh`` Cassandra client in the *empire-outpost* pod, telling it to access
the Cassandra cluster identified by the ``cassandra-svc`` DNS name:

.. code-block:: shell-session

    $ kubectl exec -it $OUTPOST_POD -- cqlsh cassandra-svc
    Connected to Test Cluster at cassandra-svc:9042.
    [cqlsh 5.0.1 | Cassandra 3.11.3 | CQL spec 3.4.4 | Native protocol v4]
    Use HELP for help.
    cqlsh>

Next, using the cqlsh prompt, we'll show that the outpost can add records to the "daily_records" table
in the "attendance" keyspace:

.. code-block:: shell-session

    cqlsh> INSERT INTO attendance.daily_records (creation, loc_id, present, empire_member_id) values (now(), 074AD3B9-A47D-4EBC-83D3-CAD75B1911CE, true, 6AD3139F-EBFC-4E0C-9F79-8F997BA01D90);

We have confirmed that outposts are able to report daily attendance records as intended. We're off to a good start!

The Danger of a Compromised Cassandra Client
============================================

But what if a rebel spy gains access to any of the remote outposts that act as a Cassandra client?
Since every client has access to the Cassandra API on port 9042, it can do some bad stuff.
For starters, the outpost container can not only add entries to the attendance.daily_records table,
but it could read all entries as well.

To see this, we can run the following command:

.. code-block:: shell-session

  cqlsh> SELECT * FROM attendance.daily_records;

   loc_id                               | creation                             | empire_member_id                     | present
  --------------------------------------+--------------------------------------+--------------------------------------+---------
   a855e745-69d8-4159-b8b6-e2bafed8387a | c692ce90-bf57-11e8-98e6-f1a9f45fc4d8 | cee6d956-dbeb-4b09-ad21-1dd93290fa6c |    True
   5b9a7990-657e-442d-a3f7-94484f06696e | c8493120-bf57-11e8-98e6-f1a9f45fc4d8 | e74a0300-94f3-4b3d-aee4-fea85eca5af7 |    True
   53ed94d0-ddac-4b14-8c2f-ba6f83a8218c | c641a150-bf57-11e8-98e6-f1a9f45fc4d8 | 104ddbb6-f2f7-4cd0-8683-cc18cccc1326 |    True
   074ad3b9-a47d-4ebc-83d3-cad75b1911ce | 9674ed40-bf59-11e8-98e6-f1a9f45fc4d8 | 6ad3139f-ebfc-4e0c-9f79-8f997ba01d90 |    True
   fe72cc39-dffb-45dc-8e5f-86c674a58951 | c5e79a70-bf57-11e8-98e6-f1a9f45fc4d8 | 6782689c-0488-4ecb-b582-a2ccd282405e |    True
   461f4176-eb4c-4bcc-a08a-46787ca01af3 | c6fefde0-bf57-11e8-98e6-f1a9f45fc4d8 | 01009199-3d6b-4041-9c43-b1ca9aef021c |    True
   64dbf608-6947-4a23-98e9-63339c413136 | c8096900-bf57-11e8-98e6-f1a9f45fc4d8 | 6ffe024e-beff-4370-a1b5-dcf6330ec82b |    True
   13cefcac-5652-4c69-a3c2-1484671f2467 | c53f4c80-bf57-11e8-98e6-f1a9f45fc4d8 | 55218adc-2f3d-4f84-a693-87a2c238bb26 |    True
   eabf5185-376b-4d4a-a5b5-99f912d98279 | c593fc30-bf57-11e8-98e6-f1a9f45fc4d8 | 5e22159b-f3a9-4f8a-9944-97375df570e9 |    True
   3c0ae2d1-c836-4aa4-8fe2-5db6cc1f92fc | c7af1400-bf57-11e8-98e6-f1a9f45fc4d8 | 0ccb3df7-78d0-4434-8a7f-4bfa8d714275 |    True
   31a292e0-2e28-4a7d-8c84-8d4cf0c57483 | c4e0d8d0-bf57-11e8-98e6-f1a9f45fc4d8 | 8fe7625c-f482-4eb6-b33e-271440777403 |    True

  (11 rows)

Uh oh! The rebels now have strategic information about empire troop strengths at each location in the galaxy.

Even nastier from a security perspective is that the outpost container can also access information in any keyspace,
including the deathstar keyspace. For example, run:

.. code-block:: shell-session

  cqlsh> SELECT * FROM deathstar.scrum_notes;

   empire_member_id                     | content                                                                                                        | creation
  --------------------------------------+----------------------------------------------------------------------------------------------------------------+--------------------------------------
   34e564c2-781b-477e-acd0-b357d67f94f2 | Designed protective shield for deathstar.  Could be based on nearby moon.  Feature punted to v2.  Not blocked. | c3c8b210-bf57-11e8-98e6-f1a9f45fc4d8
   dfa974ea-88cd-4e9b-85e3-542b9d00e2df |   I think the exhaust port could be vulnerable to a direct hit.  Hope no one finds out about it.  Not blocked. | c37f4d00-bf57-11e8-98e6-f1a9f45fc4d8
   ee12306a-7b44-46a4-ad68-42e86f0f111e |        Trying to figure out if we should paint it medium grey, light grey, or medium-light grey.  Not blocked. | c32daa90-bf57-11e8-98e6-f1a9f45fc4d8

  (3 rows)

We see that any outpost can actually access the deathstar scrum notes, which mention a pretty serious issue with the exhaust port.

Securing Access to Cassandra with Cilium
========================================

Obviously, it would be much more secure to limit each pod's access to the Cassandra server to
least privilege (i.e., only what is needed for the app to operate correctly and nothing more).

We can do that with the following Cilium security policy. As with Cilium HTTP policies, we can write
policies that identify pods by labels, and then limit the traffic in/out of those pods. In
this case, we'll create a policy that identifies the tables that each client should be able to access,
the actions that are allowed on those tables, and denies the rest.

As an example, a policy could limit containers with label *app=empire-outpost* to only be able to
insert entries into the table "attendance.daily_records", but would block any attempt by a compromised outpost
to read all attendance information or access other keyspaces.

.. image:: images/cilium_cass_gsg_attack.png

Here is the *CiliumNetworkPolicy* rule that limits access of pods with label *app=empire-outpost* to
only insert records into "attendance.daily_records":

.. literalinclude:: ../../examples/kubernetes-cassandra/cass-sw-security-policy.yaml

A *CiliumNetworkPolicy* contains a list of rules that define allowed requests, meaning that requests
that do not match any rule are denied as invalid.

The rule explicitly matches Cassandra connections destined to TCP 9042 on cass-server pods, and allows
query actions like select/insert/update/delete only on a specified set of tables.
The above rule applies to inbound (i.e., "ingress") connections to cass-server pods (as indicated by "app: cass-server"
in the "endpointSelector" section).
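To make the default-deny semantics concrete, the following is a small illustrative model (a sketch, not Cilium's actual data structures or code) of how an allow-list of per-client ``(query_action, query_table)`` rules yields a verdict. The rule set mirrors the policy described in this guide: the outpost may insert into attendance.daily_records plus perform the system reads cqlsh needs, while headquarters may do anything.

```python
import fnmatch

# Illustrative allow-list, keyed by client label. Anything not matched
# by a rule is denied -- this is the "default deny" behavior.
RULES = {
    "app=empire-outpost": [
        ("select", "system.*"),         # required by cqlsh on startup
        ("select", "system_schema.*"),  # required by cqlsh on startup
        ("insert", "attendance.daily_records"),
    ],
    "app=empire-hq": [
        ("*", "*"),  # full access
    ],
}


def verdict(client: str, action: str, table: str) -> str:
    """Return 'Forwarded' if any rule matches, else 'Denied'."""
    for allowed_action, table_pattern in RULES.get(client, []):
        if allowed_action in ("*", action) and fnmatch.fnmatch(table, table_pattern):
            return "Forwarded"
    return "Denied"


print(verdict("app=empire-outpost", "insert", "attendance.daily_records"))  # Forwarded
print(verdict("app=empire-outpost", "select", "attendance.daily_records"))  # Denied
print(verdict("app=empire-outpost", "select", "deathstar.scrum_notes"))     # Denied
print(verdict("app=empire-hq", "select", "deathstar.scrum_notes"))          # Forwarded
```

These verdicts match the allowed and denied requests demonstrated in the cqlsh sessions in this guide.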
The rule applies different rules based on whether the
client pod has labels "app: empire-outpost" or "app: empire-hq", as indicated by the "fromEndpoints" section.

The policy limits the *empire-outpost* pod to performing "select" queries on the "system" and "system_schema"
keyspaces (required by cqlsh on startup) and "insert" queries to the "attendance.daily_records" table.

The full policy adds another rule that allows all queries from the *empire-hq* pod.

Apply this Cassandra-aware network security policy using ``kubectl`` in a new window:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-sw-security-policy.yaml

If we then again try to perform the attacks from the *empire-outpost* pod, we'll see that they are denied:

.. code-block:: shell-session

  cqlsh> SELECT * FROM attendance.daily_records;
  Unauthorized: Error from server: code=2100 [Unauthorized] message="Request Unauthorized"

This is because the policy only permits pods with label app: empire-outpost to insert into attendance.daily_records; it does
not permit select on that table, or any action on other tables (with the exception of the system.* and system_schema.*
keyspaces). It's worth noting that we don't simply drop the message (which
could easily be confused with a network error), but rather respond with the Cassandra Unauthorized error message
(similar to how HTTP would return a 403 Forbidden error code).

Likewise, if the outpost pod ever tries to access a table in another keyspace, like deathstar, that request will also be
denied:

.. code-block:: shell-session

  cqlsh> SELECT * FROM deathstar.scrum_notes;
  Unauthorized: Error from server: code=2100 [Unauthorized] message="Request Unauthorized"

This is blocked as well, thanks to the Cilium network policy.

Use another window to confirm that the *empire-hq* pod still has full access to the Cassandra cluster:

.. code-block:: shell-session

    $ kubectl exec -it $HQ_POD -- cqlsh cassandra-svc
    Connected to Test Cluster at cassandra-svc:9042.
    [cqlsh 5.0.1 | Cassandra 3.11.3 | CQL spec 3.4.4 | Native protocol v4]
    Use HELP for help.
    cqlsh>

The power of Cilium's identity-based security allows *empire-hq* to retain full access
to both tables:

.. code-block:: shell-session

  cqlsh> SELECT * FROM attendance.daily_records;

   loc_id                               | creation                             | empire_member_id                     | present
  --------------------------------------+--------------------------------------+--------------------------------------+---------
   a855e745-69d8-4159-b8b6-e2bafed8387a | c692ce90-bf57-11e8-98e6-f1a9f45fc4d8 | cee6d956-dbeb-4b09-ad21-1dd93290fa6c |    True

  <snip>

  (12 rows)

Similarly, *empire-hq* can still access the deathstar scrum notes:

.. code-block:: shell-session

  cqlsh> SELECT * FROM deathstar.scrum_notes;

    <snip>

  (3 rows)

Cassandra-Aware Visibility (Bonus)
==================================

As a bonus, you can re-run the above queries with the policy enforced and see how Cilium provides Cassandra-aware visibility, including
whether requests are forwarded or denied. First, use ``kubectl exec`` to access the cilium pod:

.. code-block:: shell-session

  $ CILIUM_POD=$(kubectl get pods -n kube-system -l k8s-app=cilium -o jsonpath='{.items[0].metadata.name}')
  $ kubectl exec -it -n kube-system $CILIUM_POD -- /bin/bash
  root@minikube:~#

Next, start Cilium monitor, and limit the output to only "l7" type messages using the "-t" flag:

::

  root@minikube:~# cilium-dbg monitor -t l7
  Listening for events on 2 CPUs with 64x4096 of shared memory
  Press Ctrl-C to quit

In the other windows, re-run the above queries, and you will see that Cilium provides full visibility at the level of
each Cassandra request, indicating:

- The Kubernetes label-based identity of both the sending and receiving pod.
- The details of the Cassandra request, including the 'query_action' (e.g., 'select', 'insert')
  and 'query_table' (e.g., 'system.local', 'attendance.daily_records').
- The 'verdict' indicating whether the request was allowed by policy ('Forwarded' or 'Denied').

Example output is below. All requests are from *empire-outpost* to *cass-server*. The first two requests are
allowed: a 'select' on 'system.local' and an 'insert' into 'attendance.daily_records'.
The second two requests are denied: a 'select' on 'attendance.daily_records' and a 'select' on 'deathstar.scrum_notes':

::

  <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Forwarded query_table:system.local query_action:select
  <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Forwarded query_action:insert query_table:attendance.daily_records
  <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Denied query_action:select query_table:attendance.daily_records
  <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Denied query_table:deathstar.scrum_notes query_action:select

Clean Up
========

You have now installed Cilium, deployed a demo app, and tested
L7 Cassandra-aware network security policies. To clean up, run:

.. parsed-literal::

   $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-sw-app.yaml
   $ kubectl delete cnp secure-empire-cassandra

After this, you can re-run the tutorial from Step 1.
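As a closing aside (not part of the original guide), the ``cilium-dbg monitor -t l7`` lines shown in the visibility section are plain text and can be post-processed with a short script. The sketch below is a hypothetical helper that assumes the ``verdict <V> … query_action:<a> … query_table:<t>`` layout printed in the example output; since the field order varies between lines, each field is matched independently:

```python
import re

# Regexes for the three fields of interest in an L7 monitor line.
FIELD_RES = {
    "verdict": re.compile(r"verdict (\w+)"),
    "action": re.compile(r"query_action:([\w.]+)"),
    "table": re.compile(r"query_table:([\w.]+)"),
}


def parse_l7_line(line: str) -> dict:
    """Extract verdict/action/table from one monitor line (None if absent)."""
    out = {}
    for field, rx in FIELD_RES.items():
        m = rx.search(line)
        out[field] = m.group(1) if m else None
    return out


sample = ("<- Request cassandra from 0 (...) to 64503 (...), identity 12443->16168, "
          "verdict Denied query_action:select query_table:deathstar.scrum_notes")
print(parse_l7_line(sample))
```

Such a helper could, for example, feed a quick tally of denied queries per table while exercising a new policy.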
clients are able to access particular tables inside a Cassandra cluster  and which actions  e g    select    insert    update    delete   can be performed on tables   With Cassandra  each table belongs to a  keyspace   allowing multiple groups to use a single cluster without conflicting  Cassandra queries specify the full table name qualified by the keyspace using the syntax   keyspace   table     In our simple example  the Empire uses a Cassandra cluster to store two different types of information       Employee Attendance Records     Use to store daily attendance data  attendance daily records       Deathstar Scrum Reports     Daily scrum reports from the teams working on the Deathstar  deathstar scrum reports    To keep the setup small  we will just launch a small number of pods to represent this setup       cass server     A single pod running the Cassandra service  representing a Cassandra cluster    label app cass server       empire hq     A pod representing the Empire s Headquarters  which is the only pod that should   be able to read all attendance data  or read write the Deathstar scrum notes  label app empire hq       empire outpost     A random outpost in the empire   It should be able to insert employee attendance   records  but not read records for other empire facilities    It also should not have any access to the   deathstar keyspace  label app empire outpost    All pods other than  cass server  are Cassandra clients  which need access to the  cass server  container on TCP port 9042 in order to send Cassandra protocol messages      image   images cilium cass gsg topology png  The file   cass sw app yaml   contains a Kubernetes Deployment for each of the pods described above  as well as a Kubernetes Service  cassandra svc  for the Cassandra cluster      parsed literal          kubectl create  f    SCM WEB   examples kubernetes cassandra cass sw app yaml     deployment apps cass server created     service cassandra svc created     deployment apps 
empire hq created     deployment apps empire outpost created  Kubernetes will deploy the pods and service in the background  Running   kubectl get svc pods   will inform you about the progress of the operation  Each pod will go through several states until it reaches   Running   at which point the setup is ready      code block   shell session        kubectl get svc pods     NAME                    TYPE        CLUSTER IP   EXTERNAL IP   PORT S     AGE     service cassandra svc   ClusterIP   None          none         9042 TCP   1m     service kubernetes      ClusterIP   10 96 0 1     none         443 TCP    15h      NAME                                  READY     STATUS    RESTARTS   AGE     pod cass server 5674d5b946 x8v4j      1 1       Running   0          1m     pod empire hq c494c664d xmvdl         1 1       Running   0          1m     pod empire outpost 68bf76858d flczn   1 1       Running   0          1m   Step 3  Test Basic Cassandra Access                                      First  we ll create the keyspaces and tables mentioned above  and populate them with some initial data      parsed literal          curl  s    SCM WEB   examples kubernetes cassandra cass populate tables sh   bash  Next  create two environment variables that refer to the  empire hq  and  empire outpost  pods      code block   shell session       HQ POD   kubectl get pods  l app empire hq  o jsonpath    items 0  metadata name         OUTPOST POD   kubectl get pods  l app empire outpost  o jsonpath    items 0  metadata name      Now we will run the  cqlsh  Cassandra client in the  empire outpost  pod  telling it to access the Cassandra cluster identified by the  cassandra svc  DNS name      code block   shell session        kubectl exec  it  OUTPOST POD    cqlsh cassandra svc     Connected to Test Cluster at cassandra svc 9042       cqlsh 5 0 1   Cassandra 3 11 3   CQL spec 3 4 4   Native protocol v4      Use HELP for help      cqlsh   Next  using the cqlsh prompt  we ll show that the 
outpost can add records to the  daily records  table in the  attendance  keyspace      code block   shell session      cqlsh  INSERT INTO attendance daily records  creation  loc id  present  empire member id  values  now    074AD3B9 A47D 4EBC 83D3 CAD75B1911CE  true  6AD3139F EBFC 4E0C 9F79 8F997BA01D90    We have confirmed that outposts are able to report daily attendance records as intended  We re off to a good start   The Danger of a Compromised Cassandra Client                                               But what if a rebel spy gains access to any of the remote outposts that act as a Cassandra client  Since every client has access to the Cassandra API on port 9042  it can do some bad stuff  For starters  the outpost container can not only add entries to the attendance daily reports table  but it could read all entries as well   To see this  we can run the following command      code block   shell session      cqlsh  SELECT   FROM attendance daily records       loc id                                 creation                               empire member id                       present                                                                                                                                      a855e745 69d8 4159 b8b6 e2bafed8387a   c692ce90 bf57 11e8 98e6 f1a9f45fc4d8   cee6d956 dbeb 4b09 ad21 1dd93290fa6c      True    5b9a7990 657e 442d a3f7 94484f06696e   c8493120 bf57 11e8 98e6 f1a9f45fc4d8   e74a0300 94f3 4b3d aee4 fea85eca5af7      True    53ed94d0 ddac 4b14 8c2f ba6f83a8218c   c641a150 bf57 11e8 98e6 f1a9f45fc4d8   104ddbb6 f2f7 4cd0 8683 cc18cccc1326      True    074ad3b9 a47d 4ebc 83d3 cad75b1911ce   9674ed40 bf59 11e8 98e6 f1a9f45fc4d8   6ad3139f ebfc 4e0c 9f79 8f997ba01d90      True    fe72cc39 dffb 45dc 8e5f 86c674a58951   c5e79a70 bf57 11e8 98e6 f1a9f45fc4d8   6782689c 0488 4ecb b582 a2ccd282405e      True    461f4176 eb4c 4bcc a08a 46787ca01af3   c6fefde0 bf57 11e8 98e6 f1a9f45fc4d8   01009199 3d6b 4041 9c43 b1ca9aef021c      
True    64dbf608 6947 4a23 98e9 63339c413136   c8096900 bf57 11e8 98e6 f1a9f45fc4d8   6ffe024e beff 4370 a1b5 dcf6330ec82b      True    13cefcac 5652 4c69 a3c2 1484671f2467   c53f4c80 bf57 11e8 98e6 f1a9f45fc4d8   55218adc 2f3d 4f84 a693 87a2c238bb26      True    eabf5185 376b 4d4a a5b5 99f912d98279   c593fc30 bf57 11e8 98e6 f1a9f45fc4d8   5e22159b f3a9 4f8a 9944 97375df570e9      True    3c0ae2d1 c836 4aa4 8fe2 5db6cc1f92fc   c7af1400 bf57 11e8 98e6 f1a9f45fc4d8   0ccb3df7 78d0 4434 8a7f 4bfa8d714275      True    31a292e0 2e28 4a7d 8c84 8d4cf0c57483   c4e0d8d0 bf57 11e8 98e6 f1a9f45fc4d8   8fe7625c f482 4eb6 b33e 271440777403      True     11 rows    Uh oh   The rebels now has strategic information about empire troop strengths at each location in the galaxy   But even more nasty from a security perspective is that the outpost container can also access information in any keyspace  including the deathstar keyspace   For example  run      code block   shell session     cqlsh  SELECT   FROM deathstar scrum notes     empire member id                       content                                                                                                          creation                                                                                                                                                                                                  34e564c2 781b 477e acd0 b357d67f94f2   Designed protective shield for deathstar   Could be based on nearby moon   Feature punted to v2   Not blocked    c3c8b210 bf57 11e8 98e6 f1a9f45fc4d8  dfa974ea 88cd 4e9b 85e3 542b9d00e2df     I think the exhaust port could be vulnerable to a direct hit   Hope no one finds out about it   Not blocked    c37f4d00 bf57 11e8 98e6 f1a9f45fc4d8  ee12306a 7b44 46a4 ad68 42e86f0f111e          Trying to figure out if we should paint it medium grey  light grey  or medium light grey   Not blocked    c32daa90 bf57 11e8 98e6 f1a9f45fc4d8    3 rows   We see that any outpost can 
actually access the deathstar scrum notes  which mentions a pretty serious issue with the exhaust port   Securing Access to Cassandra with Cilium                                           Obviously  it would be much more secure to limit each pod s access to the Cassandra server to be least privilege  i e   only what is needed for the app to operate correctly and nothing more    We can do that with the following Cilium security policy    As with Cilium HTTP policies  we can write policies that identify pods by labels  and then limit the traffic in out of this pod   In this case  we ll create a policy that identifies the tables that each client should be able to access  the actions that are allowed on those tables  and deny the rest   As an example  a policy could limit containers with label  app empire outpost  to only be able to insert entries into the table  attendance daily records   but would block any attempt by a compromised outpost to read all attendance information or access other keyspaces      image   images cilium cass gsg attack png  Here is the  CiliumNetworkPolicy  rule that limits access of pods with label  app empire outpost  to only insert records into  attendance daily records       literalinclude         examples kubernetes cassandra cass sw security policy yaml  A  CiliumNetworkPolicy  contains a list of rules that define allowed requests  meaning that requests that do not match any rules are denied as invalid   The rule explicitly matches Cassandra connections destined to TCP 9042 on cass server pods  and allows query actions like select insert update delete only on a specified set of tables  The above rule applies to inbound  i e    ingress   connections to cass server pods  as indicated by  app cass server  in the  endpointSelector  section    The rule applies different rules based on whether the client pod has labels  app  empire outpost  or  app  empire hq  as indicated by the  fromEndpoints  section   The policy limits the  empire outpost 
pod to performing  select  queries on the  system  and  system schema  keyspaces  required by cqlsh on startup  and  insert  queries to the  attendance daily records  table   The full policy adds another rule that allows all queries from the  empire hq  pod   Apply this Cassandra aware network security policy using   kubectl   in a new window      parsed literal          kubectl create  f    SCM WEB   examples kubernetes cassandra cass sw security policy yaml  If we then again try to perform the attacks from the  empire outpost  pod  we ll see that they are denied      code block   shell session      cqlsh  SELECT   FROM attendance daily records    Unauthorized  Error from server  code 2100  Unauthorized  message  Request Unauthorized   This is because the policy only permits pods with labels app  empire outpost to insert into attendance daily records  it does not permit select on that table  or any action on other tables  with the exception of the system   and system schema   keyspaces    It s worth noting that we don t simply drop the message  which could easily be confused with a network error   but rather we respond with the Cassandra Unauthorized error message   similar to how HTTP would return an error code of 403 unauthorized    Likewise  if the outpost pod ever tries to access a table in another keyspace  like deathstar  this request will also be denied      code block   shell session      cqlsh  SELECT   FROM deathstar scrum notes    Unauthorized  Error from server  code 2100  Unauthorized  message  Request Unauthorized   This is blocked as well  thanks to the Cilium network policy   Use another window to confirm that the  empire hq  pod still has full access to the cassandra cluster      code block   shell session        kubectl exec  it  HQ POD    cqlsh cassandra svc     Connected to Test Cluster at cassandra svc 9042       cqlsh 5 0 1   Cassandra 3 11 3   CQL spec 3 4 4   Native protocol v4      Use HELP for help      cqlsh   The power of Cilium s 
identity based security allows  empire hq  to still have full access to both tables      code block   shell session       cqlsh  SELECT   FROM attendance daily records     loc id                                 creation                               empire member id                       present                                                                                                                                    a855e745 69d8 4159 b8b6 e2bafed8387a   c692ce90 bf57 11e8 98e6 f1a9f45fc4d8   cee6d956 dbeb 4b09 ad21 1dd93290fa6c      True     snip      12 rows    Similarly  the deathstar can still access the scrum notes      code block   shell session      cqlsh  SELECT   FROM deathstar scrum notes        snip      3 rows   Cassandra Aware Visibility  Bonus                                      As a bonus  you can re run the above queries with policy enforced and view how Cilium provides Cassandra aware visibility  including whether requests are forwarded or denied    First  use  kubectl exec  to access the cilium pod      code block   shell session      CILIUM POD   kubectl get pods  n kube system  l k8s app cilium  o jsonpath    items 0  metadata name        kubectl exec  it  n kube system  CILIUM POD     bin bash   root minikube     Next  start Cilium monitor  and limit the output to only  l7  type messages using the   t  flag         root minikube    cilium dbg monitor  t l7   Listening for events on 2 CPUs with 64x4096 of shared memory   Press Ctrl C to quit  In the other windows  re run the above queries  and you will see that Cilium provides full visibility at the level of each Cassandra request  indicating     The Kubernetes label based identity of both the sending and receiving pod    The details of the Cassandra request  including the  query action   e g    select    insert     and  query table   e g    system local    attendance daily records     The  verdict  indicating whether the request was allowed by policy   Forwarded  or  Denied     Example 
output is below    All requests are from  empire outpost  to  cass server     The first two requests are allowed  a  select  into  system local  and an  insert  into  attendance daily records   The last two requests are denied  a  select  into  attendance daily records  and a select into  deathstar scrum notes              Request cassandra from 0   k8s io cilium k8s policy serviceaccount default k8s io kubernetes pod namespace default k8s app empire outpost   to 64503   k8s app cass server k8s io kubernetes pod namespace default k8s io cilium k8s policy serviceaccount default    identity 12443  16168  verdict Forwarded query table system local query action select      Request cassandra from 0   k8s io cilium k8s policy serviceaccount default k8s io kubernetes pod namespace default k8s app empire outpost   to 64503   k8s app cass server k8s io kubernetes pod namespace default k8s io cilium k8s policy serviceaccount default    identity 12443  16168  verdict Forwarded query action insert query table attendance daily records      Request cassandra from 0   k8s io cilium k8s policy serviceaccount default k8s io kubernetes pod namespace default k8s app empire outpost   to 64503   k8s app cass server k8s io kubernetes pod namespace default k8s io cilium k8s policy serviceaccount default    identity 12443  16168  verdict Denied query action select query table attendance daily records      Request cassandra from 0   k8s io cilium k8s policy serviceaccount default k8s io kubernetes pod namespace default k8s app empire outpost   to 64503   k8s app cass server k8s io kubernetes pod namespace default k8s io cilium k8s policy serviceaccount default    identity 12443  16168  verdict Denied query table deathstar scrum notes query action select  Clean Up           You have now installed Cilium  deployed a demo app  and tested L7 Cassandra aware network security policies   To clean up  run      parsed literal         kubectl delete  f    SCM WEB   examples kubernetes cassandra 
cass sw app yaml      kubectl delete cnp secure empire cassandra  After this  you can re run the tutorial from Step 1 "}
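To make the Cassandra rules described above concrete, here is a rough sketch of what such a CiliumNetworkPolicy could look like. This is a hypothetical reconstruction based purely on the behavior described in this record (outposts limited to selects on the system keyspaces plus inserts into attendance.daily_records, headquarters unrestricted), not the actual examples/kubernetes-cassandra/cass-sw-security-policy.yaml; the l7proto/query_action/query_table field names follow Cilium's Cassandra L7 rule format and should be verified against that file:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: secure-empire-cassandra
specs:
  # Rule 1: outposts may only run the selects cqlsh needs at startup
  # (system / system_schema keyspaces) and insert attendance records;
  # any other query action or table is denied with a Cassandra
  # Unauthorized error rather than a silent drop.
  - endpointSelector:
      matchLabels:
        app: cass-server
    ingress:
      - fromEndpoints:
          - matchLabels:
              app: empire-outpost
        toPorts:
          - ports:
              - port: "9042"
                protocol: TCP
            rules:
              l7proto: cassandra
              l7:
                - query_action: "select"
                  query_table: "system\\..*"
                - query_action: "select"
                  query_table: "system_schema\\..*"
                - query_action: "insert"
                  query_table: "attendance.daily_records"
  # Rule 2: empire-hq keeps full access to the Cassandra cluster.
  - endpointSelector:
      matchLabels:
        app: cass-server
    ingress:
      - fromEndpoints:
          - matchLabels:
              app: empire-hq
        toPorts:
          - ports:
              - port: "9042"
                protocol: TCP
```

Because the `empire-hq` rule names no L7 restrictions, matching connections are allowed at the protocol level, which matches the behavior shown in the walkthrough where headquarters can still query both keyspaces.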
{"questions":"cilium This document serves as an introduction to using Cilium to enforce Kafka aware Securing a Kafka Cluster docs cilium io security policies It is a detailed walk through of getting a single node gskafka","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\n.. _gs_kafka:\n\n************************\nSecuring a Kafka Cluster\n************************\n\nThis document serves as an introduction to using Cilium to enforce Kafka-aware\nsecurity policies.  It is a detailed walk-through of getting a single-node\nCilium environment running on your machine. It is designed to take 15-30\nminutes.\n\n.. include:: gsg_requirements.rst\n\nDeploy the Demo Application\n===========================\n\nNow that we have Cilium deployed and ``kube-dns`` operating correctly we can\ndeploy our demo Kafka application.  Since our first demo of Cilium + HTTP-aware security\npolicies was Star Wars-themed we decided to do the same for Kafka.  While the\n`HTTP-aware Cilium  Star Wars demo <https:\/\/cilium.io\/blog\/2017\/5\/4\/demo-may-the-force-be-with-you\/>`_\nshowed how the Galactic Empire used HTTP-aware security policies to protect the Death Star from the\nRebel Alliance, this Kafka demo shows how the lack of Kafka-aware security policies allowed the\nRebels to steal the Death Star plans in the first place.\n\nKafka is a powerful platform for passing datastreams between different components of an application.\nA cluster of \"Kafka brokers\" connect nodes that \"produce\" data into a data stream, or \"consume\" data\nfrom a datastream.   
Kafka refers to each datastream as a \"topic\".\nBecause scalable and highly-available Kafka clusters are non-trivial to run, the same cluster of\nKafka brokers often handles many different topics at once (read this `Introduction to Kafka\n<https:\/\/kafka.apache.org\/intro>`_ for more background).\n\nIn our simple example, the Empire uses a Kafka cluster to handle two different topics:\n\n- *empire-announce* : Used to broadcast announcements to sites spread across the galaxy\n- *deathstar-plans* : Used by a small group of sites coordinating on building the ultimate battlestation.\n\nTo keep the setup small, we will just launch a small number of pods to represent this setup:\n\n- *kafka-broker* : A single pod running Kafka and Zookeeper representing the Kafka cluster\n  (label app=kafka).\n- *empire-hq* : A pod representing the Empire's Headquarters, which is the only pod that should\n  produce messages to *empire-announce* or *deathstar-plans* (label app=empire-hq).\n- *empire-backup* : A secure backup facility located in `Scarif <https:\/\/starwars.fandom.com\/wiki\/Scarif_vault>`_ ,\n  which is allowed to \"consume\" from the secret *deathstar-plans* topic (label app=empire-backup).\n- *empire-outpost-8888* : A random outpost in the empire.  It needs to \"consume\" messages from\n  the *empire-announce* topic (label app=empire-outpost).\n- *empire-outpost-9999* : Another random outpost in the empire that \"consumes\" messages from\n  the *empire-announce* topic (label app=empire-outpost).\n\nAll pods other than *kafka-broker* are Kafka clients, which need access to the *kafka-broker*\ncontainer on TCP port 9092 in order to send Kafka protocol messages.\n\n.. image:: images\/cilium_kafka_gsg_topology.png\n\nThe file ``kafka-sw-app.yaml`` contains a Kubernetes Deployment for each of the pods described\nabove, as well as a Kubernetes Service for both Kafka and Zookeeper.\n\n.. 
parsed-literal::\n\n    $ kubectl create -f \\ |SCM_WEB|\\\/examples\/kubernetes-kafka\/kafka-sw-app.yaml\n    deployment \"kafka-broker\" created\n    deployment \"zookeeper\" created\n    service \"zook\" created\n    service \"kafka-service\" created\n    deployment \"empire-hq\" created\n    deployment \"empire-outpost-8888\" created\n    deployment \"empire-outpost-9999\" created\n    deployment \"empire-backup\" created\n\nKubernetes will deploy the pods and service  in the background.\nRunning ``kubectl get svc,pods`` will inform you about the progress of the operation.\nEach pod will go through several states until it reaches ``Running`` at which\npoint the setup is ready.\n\n.. code-block:: shell-session\n\n    $ kubectl get svc,pods\n    NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE\n    kafka-service   ClusterIP   None            <none>        9092\/TCP   2m\n    kubernetes      ClusterIP   10.96.0.1       <none>        443\/TCP    10m\n    zook            ClusterIP   10.97.250.131   <none>        2181\/TCP   2m\n\n    NAME                                   READY     STATUS    RESTARTS   AGE\n    empire-backup-6f4567d5fd-gcrvg         1\/1       Running   0          2m\n    empire-hq-59475b4b64-mrdww             1\/1       Running   0          2m\n    empire-outpost-8888-78dffd49fb-tnnhf   1\/1       Running   0          2m\n    empire-outpost-9999-7dd9fc5f5b-xp6jw   1\/1       Running   0          2m\n    kafka-broker-b874c78fd-jdwqf           1\/1       Running   0          2m\n    zookeeper-85f64b8cd4-nprck             1\/1       Running   0          2m\n\nSetup Client Terminals\n======================\n\nFirst we will open a set of windows to represent the different Kafka clients discussed above.\nFor consistency, we recommend opening them in the pattern shown in the image below, but this is optional.\n\n.. 
image:: images\/cilium_kafka_gsg_terminal_layout.png\n\nIn each window, use copy-paste to have each terminal provide a shell inside each pod.\n\nempire-hq terminal:\n\n.. code-block:: shell-session\n\n   $ HQ_POD=$(kubectl get pods -l app=empire-hq -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $HQ_POD -- sh -c \"PS1=\\\"empire-hq $\\\" \/bin\/bash\"\n\nempire-backup terminal:\n\n.. code-block:: shell-session\n\n   $ BACKUP_POD=$(kubectl get pods -l app=empire-backup -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $BACKUP_POD -- sh -c \"PS1=\\\"empire-backup $\\\" \/bin\/bash\"\n\noutpost-8888 terminal:\n\n.. code-block:: shell-session\n\n   $ OUTPOST_8888_POD=$(kubectl get pods -l outpostid=8888 -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $OUTPOST_8888_POD -- sh -c \"PS1=\\\"outpost-8888 $\\\" \/bin\/bash\"\n\noutpost-9999 terminal:\n\n.. code-block:: shell-session\n\n   $ OUTPOST_9999_POD=$(kubectl get pods -l outpostid=9999 -o jsonpath='{.items[0].metadata.name}') && kubectl exec -it $OUTPOST_9999_POD -- sh -c \"PS1=\\\"outpost-9999 $\\\" \/bin\/bash\"\n\n\nTest Basic Kafka Produce & Consume\n==================================\n\nFirst, let's start the consumer clients listening to their respective Kafka topics.  All of the consumer\ncommands below will hang intentionally, waiting to print data they consume from the Kafka topic:\n\nIn the *empire-backup* window, start listening on the top-secret *deathstar-plans* topic:\n\n.. code-block:: shell-session\n\n    $ .\/kafka-consume.sh --topic deathstar-plans\n\nIn the *outpost-8888* window, start listening to *empire-announce*:\n\n.. code-block:: shell-session\n\n    $ .\/kafka-consume.sh --topic empire-announce\n\nDo the same in the *outpost-9999* window:\n\n.. code-block:: shell-session\n\n    $ .\/kafka-consume.sh --topic empire-announce\n\nNow from the *empire-hq*, first produce a message to the *empire-announce* topic:\n\n.. 
code-block:: shell-session\n\n   $ echo \"Happy 40th Birthday to General Tagge\" | .\/kafka-produce.sh --topic empire-announce\n\nThis message will be posted to the *empire-announce* topic, and shows up in both the *outpost-8888* and\n*outpost-9999* windows, which consume that topic.   It will not show up in *empire-backup*.\n\n*empire-hq* can also post a version of the top-secret deathstar plans to the *deathstar-plans* topic:\n\n.. code-block:: shell-session\n\n   $ echo \"deathstar reactor design v3\" | .\/kafka-produce.sh --topic deathstar-plans\n\nThis message shows up in the *empire-backup* window, but not for the outposts.\n\nCongratulations, Kafka is working as expected :)\n\nThe Danger of a Compromised Kafka Client\n========================================\n\nBut what if a rebel spy gains access to any of the remote outposts that act as Kafka clients?\nSince every client has access to the Kafka broker on port 9092, it can do some bad stuff.\nFor starters, the outpost container can actually switch roles from a consumer to a producer,\nsending \"malicious\" data to all other consumers on the topic.\n\nTo prove this, kill the existing ``kafka-consume.sh`` command in the outpost-9999 window\nby typing control-C and instead run:\n\n.. code-block:: shell-session\n\n  $ echo \"Vader Booed at Empire Karaoke Party\" | .\/kafka-produce.sh --topic empire-announce\n\nUh oh!  Outpost-8888 and all of the other outposts in the empire have now received this fake announcement.\n\nBut even more nasty from a security perspective is that the outpost container can access any topic\non the kafka-broker.\n\nIn the outpost-9999 container, run:\n\n.. code-block:: shell-session\n\n  $ .\/kafka-consume.sh --topic deathstar-plans\n  \"deathstar reactor design v3\"\n\nWe see that any outpost can actually access the secret deathstar plans.  
Now we know how the rebels got\naccess to them!\n\nSecuring Access to Kafka with Cilium\n====================================\n\nObviously, it would be much more secure to limit each pod's access to the Kafka broker to be\nleast privilege (i.e., only what is needed for the app to operate correctly and nothing more).\n\nWe can do that with the following Cilium security policy.   As with Cilium HTTP policies, we can write\npolicies that identify pods by labels, and then limit the traffic in\/out of this pod.  In\nthis case, we'll create a policy that identifies the exact traffic that should be allowed to reach the\nKafka broker, and deny the rest.\n\nAs an example, a policy could limit containers with label *app=empire-outpost* to only be able to consume\ntopic *empire-announce*, but would block any attempt by a compromised container (e.g., empire-outpost-9999)\nfrom producing to *empire-announce* or consuming from *deathstar-plans*.\n\n.. image:: images\/cilium_kafka_gsg_attack.png\n\nHere is the *CiliumNetworkPolicy* rule that limits access of pods with label *app=empire-outpost* to\nonly consume on topic *empire-announce*:\n\n.. literalinclude:: ..\/..\/examples\/policies\/getting-started\/kafka.yaml\n\nA *CiliumNetworkPolicy* contains a list of rules that define allowed requests, meaning that requests\nthat do not match any rules are denied as invalid.\n\nThe above rule applies to inbound (i.e., \"ingress\") connections to kafka-broker pods (as\nindicated by \"app: kafka\"\nin the \"endpointSelector\" section).  The rule will apply to connections from pods with label\n\"app: empire-outpost\" as indicated by the \"fromEndpoints\" section.   
The rule explicitly matches\nKafka connections destined to TCP 9092, and allows consume\/produce actions on various topics of interest.\nFor example we are allowing *consume* from topic *empire-announce* in this case.\n\nThe full policy adds two additional rules that permit the legitimate \"produce\"\n(topic *empire-announce* and topic *deathstar-plans*) from *empire-hq* and the\nlegitimate consume  (topic = \"deathstar-plans\") from *empire-backup*.  The full policy\ncan be reviewed by opening the URL in the command below in a browser.\n\nApply this Kafka-aware network security policy using ``kubectl`` in the main window:\n\n.. parsed-literal::\n\n    $ kubectl create -f \\ |SCM_WEB|\\\/examples\/kubernetes-kafka\/kafka-sw-security-policy.yaml\n\nIf we then again try to produce a message from outpost-9999 to *empire-announce*, it is denied.\nType control-c and then run:\n\n.. code-block:: shell-session\n\n  $ echo \"Vader Trips on His Own Cape\" | .\/kafka-produce.sh --topic empire-announce\n  >>[2018-04-10 23:50:34,638] ERROR Error when sending message to topic empire-announce with key: null, value: 27 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)\n  org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [empire-announce]\n\nThis is because the policy does not allow messages with role = \"produce\" for topic \"empire-announce\" from\ncontainers with label app = empire-outpost.  It's worth noting that we don't simply drop the message (which\ncould easily be confused with a network error), but rather we respond with the Kafka access denied error\n(similar to how HTTP would return an error code of 403 unauthorized).\n\nLikewise, if the outpost container ever tries to consume from topic *deathstar-plans*, it is denied, as\nrole = consume is only allowed for topic *empire-announce*.\n\nTo test, from the outpost-9999 terminal, run:\n\n.. 
code-block:: shell-session\n\n  $ .\/kafka-consume.sh --topic deathstar-plans\n  [2018-04-10 23:51:12,956] WARN Error while fetching metadata with correlation id 2 : {deathstar-plans=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)\n\nThis is blocked as well, thanks to the Cilium network policy. Imagine how different things would have been if the empire had been using\nCilium from the beginning!\n\nClean Up\n========\n\nYou have now installed Cilium, deployed a demo app, and tested\nL7 Kafka-aware network security policies.  To clean up, run:\n\n.. parsed-literal::\n\n   $ kubectl delete -f \\ |SCM_WEB|\\\/examples\/kubernetes-kafka\/kafka-sw-app.yaml\n   $ kubectl delete cnp secure-empire-kafka\n\n\nAfter this, you can re-run the tutorial from Step 1.","site":"cilium","answers_cleaned":"   only   not  epub or latex or html       WARNING  You are looking at unreleased Cilium documentation      Please use the official rendered version released here      https   docs cilium io      gs kafka                            Securing a Kafka Cluster                           This document serves as an introduction to using Cilium to enforce Kafka aware security policies   It is a detailed walk through of getting a single node Cilium environment running on your machine  It is designed to take 15 30 minutes      include   gsg requirements rst  Deploy the Demo Application                              Now that we have Cilium deployed and   kube dns   operating correctly we can deploy our demo Kafka application   Since our first demo of Cilium   HTTP aware security policies was Star Wars themed we decided to do the same for Kafka   While the  HTTP aware Cilium  Star Wars demo  https   cilium io blog 2017 5 4 demo may the force be with you     showed how the Galactic Empire used HTTP aware security policies to protect the Death Star from the Rebel Alliance  this Kafka demo shows how the lack of Kafka aware security policies allowed the Rebels to steal the 
Death Star plans in the first place   Kafka is a powerful platform for passing datastreams between different components of an application  A cluster of  Kafka brokers  connect nodes that  produce  data into a data stream  or  consume  data from a datastream    Kafka refers to each datastream as a  topic   Because scalable and highly available Kafka clusters are non trivial to run  the same cluster of Kafka brokers often handles many different topics at once  read this  Introduction to Kafka  https   kafka apache org intro    for more background    In our simple example  the Empire uses a Kafka cluster to handle two different topics      empire announce    Used to broadcast announcements to sites spread across the galaxy    deathstar plans    Used by a small group of sites coordinating on building the ultimate battlestation   To keep the setup small  we will just launch a small number of pods to represent this setup      kafka broker    A single pod running Kafka and Zookeeper representing the Kafka cluster    label app kafka      empire hq    A pod representing the Empire s Headquarters  which is the only pod that should   produce messages to  empire announce  or  deathstar plans   label app empire hq      empire backup    A secure backup facility located in  Scarif  https   starwars fandom com wiki Scarif vault        which is allowed to  consume  from the secret  deathstar plans  topic  label app empire backup      empire outpost 8888    A random outpost in the empire   It needs to  consume  messages from   the  empire announce  topic  label app empire outpost      empire outpost 9999    Another random outpost in the empire that  consumes  messages from   the  empire announce  topic  label app empire outpost    All pods other than  kafka broker  are Kafka clients  which need access to the  kafka broker  container on TCP port 9092 in order to send Kafka protocol messages      image   images cilium kafka gsg topology png  The file   kafka sw app yaml   contains a 
Kubernetes Deployment for each of the pods described above  as well as a Kubernetes Service for both Kafka and Zookeeper      parsed literal          kubectl create  f    SCM WEB   examples kubernetes kafka kafka sw app yaml     deployment  kafka broker  created     deployment  zookeeper  created     service  zook  created     service  kafka service  created     deployment  empire hq  created     deployment  empire outpost 8888  created     deployment  empire outpost 9999  created     deployment  empire backup  created  Kubernetes will deploy the pods and service  in the background  Running   kubectl get svc pods   will inform you about the progress of the operation  Each pod will go through several states until it reaches   Running   at which point the setup is ready      code block   shell session        kubectl get svc pods     NAME            TYPE        CLUSTER IP      EXTERNAL IP   PORT S     AGE     kafka service   ClusterIP   None             none         9092 TCP   2m     kubernetes      ClusterIP   10 96 0 1        none         443 TCP    10m     zook            ClusterIP   10 97 250 131    none         2181 TCP   2m      NAME                                   READY     STATUS    RESTARTS   AGE     empire backup 6f4567d5fd gcrvg         1 1       Running   0          2m     empire hq 59475b4b64 mrdww             1 1       Running   0          2m     empire outpost 8888 78dffd49fb tnnhf   1 1       Running   0          2m     empire outpost 9999 7dd9fc5f5b xp6jw   1 1       Running   0          2m     kafka broker b874c78fd jdwqf           1 1       Running   0          2m     zookeeper 85f64b8cd4 nprck             1 1       Running   0          2m  Setup Client Terminals                         First we will open a set of windows to represent the different Kafka clients discussed above  For consistency  we recommend opening them in the pattern shown in the image below  but this is optional      image   images cilium kafka gsg terminal layout png  In each 
window  use copy paste to have each terminal provide a shell inside each pod   empire hq terminal      code block   shell session       HQ POD   kubectl get pods  l app empire hq  o jsonpath    items 0  metadata name       kubectl exec  it  HQ POD    sh  c  PS1   empire hq      bin bash   empire backup terminal      code block   shell session       BACKUP POD   kubectl get pods  l app empire backup  o jsonpath    items 0  metadata name       kubectl exec  it  BACKUP POD    sh  c  PS1   empire backup      bin bash   outpost 8888 terminal      code block   shell session       OUTPOST 8888 POD   kubectl get pods  l outpostid 8888  o jsonpath    items 0  metadata name       kubectl exec  it  OUTPOST 8888 POD    sh  c  PS1   outpost 8888      bin bash   outpost 9999 terminal      code block   shell session       OUTPOST 9999 POD   kubectl get pods  l outpostid 9999  o jsonpath    items 0  metadata name       kubectl exec  it  OUTPOST 9999 POD    sh  c  PS1   outpost 9999      bin bash    Test Basic Kafka Produce   Consume                                     First  let s start the consumer clients listening to their respective Kafka topics   All of the consumer commands below will hang intentionally  waiting to print data they consume from the Kafka topic   In the  empire backup  window  start listening on the top secret  deathstar plans  topic      code block   shell session          kafka consume sh   topic deathstar plans  In the  outpost 8888  window  start listening to  empire announce       code block   shell session          kafka consume sh   topic empire announce  Do the same in the  outpost 9999  window      code block   shell session          kafka consume sh   topic empire announce  Now from the  empire hq   first produce a message to the  empire announce  topic      code block   shell session       echo  Happy 40th Birthday to General Tagge      kafka produce sh   topic empire announce  This message will be posted to the  empire announce  topic  and 
shows up in both the ``outpost-8888`` and ``outpost-9999`` windows, which consume that topic. It will not show up in ``empire-backup``. ``empire-hq`` can also post a version of the top-secret deathstar plans to the ``deathstar-plans`` topic:\n\n.. code-block:: shell-session\n\n    $ echo \"deathstar reactor design v3\" | .\/kafka-produce.sh --topic deathstar-plans\n\nThis message shows up in the ``empire-backup`` window, but not for the outposts. Congratulations, Kafka is working as expected!\n\nThe Danger of a Compromised Kafka Client\n----------------------------------------\n\nBut what if a rebel spy gains access to any of the remote outposts that act as Kafka clients? Since every client has access to the Kafka broker on port 9092, it can do some bad stuff. For starters, the outpost container can actually switch roles from a consumer to a producer, sending \"malicious\" data to all other consumers on the topic.\n\nTo prove this, kill the existing ``kafka-consume.sh`` command in the outpost-9999 window by typing control-C and instead run:\n\n.. code-block:: shell-session\n\n    $ echo \"Vader Booed at Empire Karaoke Party\" | .\/kafka-produce.sh --topic empire-announce\n\nUh oh! Outpost 8888 and all of the other outposts in the empire have now received this fake announcement.\n\nEven nastier from a security perspective: the outpost container can access any topic on the kafka-broker. In the outpost-9999 container, run:\n\n.. code-block:: shell-session\n\n    $ .\/kafka-consume.sh --topic deathstar-plans\n    \"deathstar reactor design v3\"\n\nWe see that any outpost can access the secret deathstar plans. Now we know how the rebels got access to them!\n\nSecuring Access to Kafka with Cilium\n------------------------------------\n\nObviously, it would be much more secure to limit each pod's access to the Kafka broker to least privilege (i.e., only what is needed for the app to operate correctly and nothing more).\n\nWe can do that with the following Cilium security policy. As with Cilium HTTP policies, we can write policies that identify pods by labels, and then limit the traffic in and out of those pods. In this case, we'll create a policy that identifies the exact traffic that should be allowed to reach the Kafka broker, and denies the rest.\n\nAs an example, a policy could limit containers with label ``app=empire-outpost`` to only be able to consume topic ``empire-announce``, but would block any attempt by a compromised container (e.g., ``empire-outpost-9999``) to produce to ``empire-announce`` or consume from ``deathstar-plans``.\n\n.. image:: images\/cilium_kafka_gsg_attack.png\n\nHere is the ``CiliumNetworkPolicy`` rule that limits access of pods with label ``app=empire-outpost`` to only consume on topic ``empire-announce``:\n\n.. literalinclude:: examples\/policies\/getting-started\/kafka.yaml\n\nA ``CiliumNetworkPolicy`` contains a list of rules that define allowed requests; requests that do not match any rule are denied as invalid.\n\nThe above rule applies to inbound (i.e., ingress) connections to kafka-broker pods, as indicated by ``app: kafka`` in the ``endpointSelector`` section. The rule applies to connections from pods with label ``app: empire-outpost``, as indicated by the ``fromEndpoints`` section. The rule explicitly matches Kafka connections destined to TCP 9092, and allows consume\/produce actions on various topics of interest; for example, we are allowing ``consume`` from topic ``empire-announce`` in this case.\n\nThe full policy adds two additional rules that permit the legitimate ``produce`` (topics ``empire-announce`` and ``deathstar-plans``) from ``empire-hq``, and the legitimate ``consume`` (topic ``deathstar-plans``) from ``empire-backup``. The full policy can be reviewed by opening the URL in the command below in a browser.\n\nApply this Kafka-aware network security policy using ``kubectl`` in the main window:\n\n.. parsed-literal::\n\n    $ kubectl create -f |SCM_WEB|\/examples\/kubernetes-kafka\/kafka-sw-security-policy.yaml\n\nIf we then again try to produce a message from outpost-9999 to ``empire-announce``, it is denied. Type control-C and then run:\n\n.. code-block:: shell-session\n\n    $ echo \"Vader Trips on His Own Cape\" | .\/kafka-produce.sh --topic empire-announce\n    [2018-04-10 23:50:34,638] ERROR Error when sending message to topic empire-announce with key: null, value: 27 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)\n    org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [empire-announce]\n\nThis is because the policy does not allow messages with role ``produce`` for topic ``empire-announce`` from containers with label ``app=empire-outpost``. It's worth noting that we don't simply drop the message (which could easily be confused with a network error), but rather respond with a Kafka access-denied error, similar to how HTTP would return a 403 Forbidden error code.\n\nLikewise, if the outpost container ever tries to consume from topic ``deathstar-plans``, it is denied, as role ``consume`` is only allowed for topic ``empire-announce``. To test, from the outpost-9999 terminal, run:\n\n.. code-block:: shell-session\n\n    $ .\/kafka-consume.sh --topic deathstar-plans\n    [2018-04-10 23:51:12,956] WARN Error while fetching metadata with correlation id 2 : {deathstar-plans=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)\n\nThis is blocked as well, thanks to the Cilium network policy. Imagine how different things would have been if the empire had been using Cilium from the beginning!\n\nClean Up\n--------\n\nYou have now installed Cilium, deployed a demo app, and tested L7 Kafka-aware network security policies. To clean up, run:\n\n.. parsed-literal::\n\n    $ kubectl delete -f |SCM_WEB|\/examples\/kubernetes-kafka\/kafka-sw-app.yaml\n    $ kubectl delete cnp secure-empire-kafka\n\nAfter this, you can re-run the tutorial from Step 1."}
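For readers who want to see the shape of the consume-only rule discussed above without opening the referenced ``kafka.yaml``, here is a minimal sketch of a Kafka-aware ``CiliumNetworkPolicy``. It is illustrative only: the policy name ``secure-empire-kafka`` is taken from the clean-up command in the guide, and the labels, port, and topic come from the guide's text, but the exact contents of the shipped YAML file may differ.

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "secure-empire-kafka"
spec:
  # The policy applies to the Kafka broker pods (ingress direction).
  endpointSelector:
    matchLabels:
      app: kafka
  ingress:
  # Outposts may only consume from the empire-announce topic.
  - fromEndpoints:
    - matchLabels:
        app: empire-outpost
    toPorts:
    - ports:
      - port: "9092"
        protocol: TCP
      rules:
        kafka:
        - role: "consume"
          topic: "empire-announce"
```

The ``rules.kafka`` section is what makes this an L7 policy: Cilium's proxy parses each Kafka request and only allows ``consume`` requests for the named topic, so produce attempts, and requests for any other topic, fail with a Kafka authorization error instead of a silently dropped connection.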
{"questions":"cilium Threat Model docs cilium io This section presents a threat model for Cilium This threat model controls that are in place to secure data flowing through Cilium s various components allows interested parties to understand security specific implications of Cilium s architecture","answers":".. only:: not (epub or latex or html)\n\n    WARNING: You are looking at unreleased Cilium documentation.\n    Please use the official rendered version released here:\n    https:\/\/docs.cilium.io\n\nThreat Model\n============\n\nThis section presents a threat model for Cilium. This threat model\nallows interested parties to understand:\n\n-  security-specific implications of Cilium's architecture\n-  controls that are in place to secure data flowing through Cilium's various components\n-  recommended controls for running Cilium in a production environment\n\nScope and Prerequisites\n-----------------------\n\nThis threat model considers the possible attacks that could affect an\nup-to-date version of Cilium running in a production environment; it\nwill be refreshed when there are significant changes to Cilium's\narchitecture or security posture.\n\nThis model does not consider supply-chain attacks, such as attacks where\na malicious contributor is able to intentionally inject vulnerable code\ninto Cilium. For users who are concerned about supply-chain attacks,\nCilium's `security audit`_ assessed Cilium's supply chain controls\nagainst `the SLSA framework`_.\n\nIn order to understand the following threat model, readers will need\nfamiliarity with basic Kubernetes concepts, as well as a high-level\nunderstanding of Cilium's :ref:`architecture and components<component_overview>`.\n\n.. _security audit: https:\/\/github.com\/cilium\/cilium.io\/blob\/main\/Security-Reports\/CiliumSecurityAudit2022.pdf\n.. 
_the SLSA framework:  https:\/\/slsa.dev\/\n\nMethodology\n-----------\n\nThis threat model considers eight different types of threat\nactors, placed at different parts of a typical deployment stack. We will\nprimarily use Kubernetes as an example, but the threat model remains\naccurate if Cilium is deployed with other orchestration systems, or when running\nCilium outside of Kubernetes. The attackers will have different levels\nof initial privileges, giving us a broad overview of the security\nguarantees that Cilium can provide depending on the nature of the threat\nand the extent of a previous compromise.\n\nFor each threat actor, this guide uses `the STRIDE methodology`_ to\nassess likely attacks. Where one attack type in the STRIDE set can lead to others\n(for example, tampering leading to denial of service), we have described the\nattack path under the most impactful attack type. For the potential attacks\nthat we identify, we recommend controls that can be used to reduce the\nrisk of the identified attacks compromising a cluster. Applying the\nrecommended controls is strongly advised in order to run Cilium securely\nin production.\n\n.. _the STRIDE methodology: https:\/\/en.wikipedia.org\/wiki\/STRIDE_(security)\n\nReference Architecture\n----------------------\n\nFor ease of understanding, consider a single Kubernetes\ncluster running Cilium, as illustrated below:\n\n.. image:: images\/cilium_threat_model_reference_architecture.png\n\nThe Threat Surface\n~~~~~~~~~~~~~~~~~~\n\nIn the above scenario, the aim of Cilium's security controls is to\nensure that all the components of the Cilium platform are operating\ncorrectly, to the extent possible given the abilities of the threat\nactor that Cilium is faced with. 
The key components that need to be\nprotected are:\n\n-  the Cilium agent running on a node, either as a Kubernetes pod, a host process, or as an entire virtual machine\n-  Cilium state (either stored via CRDs or via an external key-value store like etcd)\n-  eBPF programs loaded by Cilium into the kernel\n-  network packets managed by Cilium\n-  observability data collected by Cilium and stored by Hubble\n\nThe Threat Model\n----------------\n\nFor each type of attacker, we consider the plausible types of attacks\navailable to them, how Cilium can be used to protect against these\nattacks, as well as the security controls that Cilium provides. For\nattacks which might arise as a consequence of the high level of\nprivileges required by Cilium, we also suggest mitigations that users\nshould apply to secure their environments.\n\n.. _kubernetes-workload-attacker:\n\nKubernetes Workload Attacker\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFor the first scenario, consider an attacker who has been able to\ngain access to a Kubernetes pod, and is now able to run arbitrary code\ninside a container. This could occur, for example, if a vulnerable\nservice is exposed externally to a network. In this case, let us also\nassume that the compromised pod does not have any elevated privileges\n(in Kubernetes or on the host) or direct access to host files.\n\n.. image:: images\/cilium_threat_model_workload.png\n\nIn this scenario, there is no potential for compromise of the Cilium\nstack; in fact, Cilium provides several features that would allow users\nto limit the scope of such an attack:\n\n.. 
rst-class:: wrapped-table\n\n+-----------------+---------------------+--------------------------------+\n| Threat surface  | Identified STRIDE   | Cilium security benefits       |\n|                 | threats             |                                |\n+=================+=====================+================================+\n| Cilium agent    | Potential denial of | Cilium can enforce             |\n|                 | service if the      | `bandwidth limitations`_       |\n|                 | compromised         | on pods to limit the network   | \n|                 |                     | resource utilization.          |\n|                 | Kubernetes workload |                                |\n|                 | does not have       |                                |\n|                 | defined resource    |                                |\n|                 | limits.             |                                |\n+-----------------+---------------------+--------------------------------+\n| Cilium          | None                |                                |\n| configuration   |                     |                                |\n+-----------------+---------------------+--------------------------------+\n| Cilium eBPF     | None                |                                |\n| programs        |                     |                                |\n+-----------------+---------------------+--------------------------------+\n| Network data    | None                | - Cilium's network policy can  |\n|                 |                     |   be used to provide           |\n|                 |                     |   least-privilege isolation    |\n|                 |                     |   between Kubernetes           |\n|                 |                     |   workloads, and between       |\n|                 |                     |   Kubernetes workloads and     |\n|                 |                     |   \"external\" endpoints 
running |\n|                 |                     |   outside the Kubernetes       |\n|                 |                     |   cluster, or running on the   |\n|                 |                     |   Kubernetes worker nodes.     |\n|                 |                     |   Users should ideally define  |\n|                 |                     |   specific allow rules that    |\n|                 |                     |   only permit expected         |\n|                 |                     |   communication between        |\n|                 |                     |   services.                    |\n|                 |                     | - Cilium's network             |\n|                 |                     |   connectivity will prevent an |\n|                 |                     |   attacker from observing the  |\n|                 |                     |   traffic intended for other   |\n|                 |                     |   workloads, or sending        |\n|                 |                     |   traffic that \"spoofs\" the    |\n|                 |                     |   identity of another pod,     |\n|                 |                     |   even if transparent          |\n|                 |                     |   encryption is not in use.    |\n|                 |                     |   Pods cannot send traffic     |\n|                 |                     |   that \"spoofs\" other pods due |\n|                 |                     |   to limits on the use of      |\n|                 |                     |   source IPs and limits on     |\n|                 |                     |   sending tunneled traffic.    
|\n+-----------------+---------------------+--------------------------------+\n| Observability   | None                | Cilium's Hubble flow-event     |\n| data            |                     | observability can be used to   |\n|                 |                     | provide reliable audit of      |\n|                 |                     | the attacker's L3\/L4 and L7    |\n|                 |                     | network connectivity.          |\n+-----------------+---------------------+--------------------------------+\n\n.. _bandwidth limitations: https:\/\/docs.cilium.io\/en\/stable\/network\/kubernetes\/bandwidth-manager\/\n\nRecommended Controls\n^^^^^^^^^^^^^^^^^^^^\n\n-  Kubernetes workloads should have `defined resource limits`_.\n   This will help in ensuring that Cilium is not starved of resources due to a misbehaving deployment in a cluster.\n-  Cilium can be given prioritized access to system resources either via\n   Kubernetes, cgroups, or other controls.\n-  Runtime security solutions such as `Tetragon`_ should be deployed to \n   ensure that container compromises can be detected as they occur.\n\n.. _defined resource limits: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n.. _Tetragon: https:\/\/github.com\/cilium\/tetragon\n\n.. _limited-privilege-host-attacker:\n\nLimited-privilege Host Attacker\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIn this scenario, the attacker is someone with the ability to run\narbitrary code with direct access to the host PID or network namespace\n(or both), but without \"root\" privileges that would allow them to\ndisable Cilium components or undermine the eBPF and other kernel state\nCilium relies on.\n\nThis level of access could exist for a variety of reasons, including:\n\n-  Pods or other containers running in the host PID or network\n   namespace, but not with \"root\" privileges. 
This includes\n   ``hostNetwork: true`` and ``hostPID: true`` containers.\n-  Non-\"root\" SSH or other console access to a node.\n-  A containerized workload that has \"escaped\" the container namespace\n   but as a non-privileged user.\n\n.. image:: images\/cilium_threat_model_non_privileged.png\n\nIn this case, an attacker would be able to bypass some of Cilium's\nnetwork controls, as described below:\n\n.. rst-class:: wrapped-table\n\n+-----------------+-------------------------+----------------------------+\n| **Threat        | **Identified STRIDE     | **Cilium security          |\n| surface**       | threats**               | benefits**                 |\n+=================+=========================+============================+\n| Cilium agent    | - If the non-privileged |                            |\n|                 |   attacker is able to   |                            |\n|                 |   access the container  |                            |\n|                 |   runtime and Cilium is |                            |\n|                 |   running as a          |                            |\n|                 |   container, the        |                            |\n|                 |   attacker will be able |                            |\n|                 |   to tamper with the    |                            |\n|                 |   Cilium agent running  |                            |\n|                 |   on the node.          |                            |\n|                 | - Denial of service is  |                            |\n|                 |   also possible via     |                            |\n|                 |   spawning workloads    |                            |\n|                 |   directly on the host. |                            |\n+-----------------+-------------------------+----------------------------+\n| Cilium          | Same as for the Cilium  |                            |\n| configuration   | agent.           
       |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n+-----------------+-------------------------+----------------------------+\n| Cilium eBPF     | Same as for the Cilium  |                            |\n| programs        | agent.                  |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n|                 |                         |                            |\n+-----------------+-------------------------+----------------------------+\n| Network data    | Elevation of            | Cilium's network           |\n|                 | privilege: traffic      | connectivity will prevent  |\n|                 | sent by the attacker    | an attacker from observing |\n|                 | will no longer be       | the traffic intended for   |\n|                 | subject to Kubernetes   | other workloads, or        |\n|                 | or                      | sending traffic that       |\n|                 | container-networked     | spoofs the identity of     |\n|                 | Cilium network          | another pod, 
even if       |\n|                 | policies.               | transparent encryption is  |\n|                 | :ref:`Host-networked    | not in use.                |\n|                 | Cilium                  |                            |\n|                 | policies                |                            |\n|                 | <host_firewall>`        |                            |\n|                 | will continue to        |                            |\n|                 | apply. Other traffic    |                            |\n|                 | within the cluster      |                            |\n|                 | remains unaffected.     |                            |\n+-----------------+-------------------------+----------------------------+\n| Observability   | None                    | Cilium's Hubble flow-event |\n| data            |                         | observability can be used  |\n|                 |                         | to provide reliable audit  |\n|                 |                         | of the attacker's L3\/L4    |\n|                 |                         | and L7 network             |\n|                 |                         | connectivity. Traffic sent |\n|                 |                         | by the attacker will be    |\n|                 |                         | attributed to the worker   |\n|                 |                         | node, and not to a         |\n|                 |                         | specific Kubernetes        |\n|                 |                         | workload.                  
|\n+-----------------+-------------------------+----------------------------+\n\nRecommended Controls\n^^^^^^^^^^^^^^^^^^^^\n\nIn addition to the recommended controls against the :ref:`kubernetes-workload-attacker`:\n\n-  Container images should be regularly patched to reduce the chance of\n   compromise.\n-  Minimal container images should be used where possible.\n-  Host-level privileges should be avoided where possible.\n-  Ensure that the container users do not have access to the underlying\n   container runtime.\n\n.. _root-equivalent-host-attacker:\n\nRoot-equivalent Host Attacker\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nA \"root\" privilege host attacker has full privileges to do everything on\nthe local host. This access could exist for several reasons, including:\n\n-  Root SSH or other console access to the Kubernetes worker node.\n-  A containerized workload that has escaped the container namespace as\n   a privileged user.\n-  Pods running with ``privileged: true`` or other significant\n   capabilities like ``CAP_SYS_ADMIN`` or ``CAP_BPF``.\n\n.. image:: images\/cilium_threat_model_root.png\n\n.. rst-class:: wrapped-table\n\n+-------------------+--------------------------------------------------+\n| **Threat          | **Identified STRIDE threats**                    |\n| surface**         |                                                  |\n+===================+==================================================+\n| Cilium agent      | In this situation, all potential attacks covered |\n|                   | by STRIDE are possible. Of note:                 |\n|                   |                                                  |\n|                   | -  The attacker would be able to disable eBPF on |\n|                   |    the node, disabling Cilium's network and      |\n|                   |    runtime visibility and enforcement. 
All       |\n|                   |    further operations by the attacker will be    |\n|                   |    unlimited and unaudited.                      |\n|                   | -  The attacker would be able to observe network |\n|                   |    connectivity across all workloads on the      |\n|                   |    host.                                         |\n|                   | -  The attacker can spoof traffic from the node  |\n|                   |    such that it appears to come from pods        |\n|                   |    with any identity.                            |\n|                   | -  If the physical network allows ARP poisoning, |\n|                   |    or if any other attack allows a               |\n|                   |    compromised node to \"attract\" traffic         |\n|                   |    destined to other nodes, the attacker can     |\n|                   |    potentially intercept all traffic in the      |\n|                   |    cluster, even if this traffic is encrypted    |\n|                   |    using IPsec, since we use a cluster-wide      |\n|                   |    pre-shared key.                               |\n|                   | -  The attacker can also use Cilium's            |\n|                   |    credentials to :ref:`attack the Kubernetes    |\n|                   |    API server <kubernetes-api-server-attacker>`, |\n|                   |    as well as Cilium's :ref:`etcd key-value      |\n|                   |    store <kv-store-attacker>` (if in use).       |\n|                   | -  If the compromised node is running the        |\n|                   |    ``cilium-operator`` pod, the attacker         |\n|                   |    would be able to carry out denial of          |\n|                   |    service attacks against other nodes using     |\n|                   |    the ``cilium-operator`` service account       |\n|                   |    credentials found on the node.      
          |\n+-------------------+                                                  |\n| Cilium            |                                                  |\n| configuration     |                                                  |\n+-------------------+                                                  |\n| Cilium eBPF       |                                                  |\n| programs          |                                                  |\n+-------------------+                                                  |\n| Network data      |                                                  |\n+-------------------+                                                  |\n| Observability     |                                                  |\n| data              |                                                  |\n+-------------------+--------------------------------------------------+\n\nThis attack scenario emphasizes the importance of securing Kubernetes\nnodes, minimizing the permissions available to container workloads, and\nmonitoring for suspicious activity on the node, container, and API\nserver levels.\n\nRecommended Controls\n^^^^^^^^^^^^^^^^^^^^\n\nIn addition to the controls against a :ref:`limited-privilege-host-attacker`:\n\n-  Workloads with privileged access should be reviewed; privileged access should\n   only be provided to deployments if essential.\n-  Network policies should be configured to limit connectivity to workloads with\n   privileged access.\n-  Kubernetes audit logging should be enabled, with audit logs being sent to a\n   centralized external location for automated review.\n-  Detections should be configured to alert on suspicious activity.\n-  ``cilium-operator`` pods should not be scheduled on nodes that run regular\n   workloads, and should instead be configured to run on control plane nodes.\n\n.. 
_mitm-attacker:\n\nMan-in-the-middle Attacker\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIn this scenario, our attacker has access to the underlying network\nbetween Kubernetes worker nodes, but not the Kubernetes worker nodes\nthemselves. This attacker may inspect, modify, or inject malicious\nnetwork traffic.\n\n.. image:: images\/cilium_threat_model_mitm.png\n\nThe threat matrix for such an attacker is as follows:\n\n.. rst-class:: wrapped-table\n\n+------------------+---------------------------------------------------+\n| **Threat         | **Identified STRIDE threats**                     |\n| surface**        |                                                   |\n+==================+===================================================+\n| Cilium agent     | None                                              |\n+------------------+---------------------------------------------------+\n| Cilium           | None                                              |\n| configuration    |                                                   |\n+------------------+---------------------------------------------------+\n| Cilium eBPF      | None                                              |\n| programs         |                                                   |\n+------------------+---------------------------------------------------+\n| Network data     | - Without transparent encryption, an attacker     |\n|                  |   could inspect traffic between workloads in both |\n|                  |   overlay and native routing modes.               |\n|                  | - An attacker with knowledge of pod network       |\n|                  |   configuration (including pod IP addresses and   |\n|                  |   ports) could inject traffic into a cluster by   |\n|                  |   forging packets.                                |\n|                  | - Denial of service could occur depending on the  |\n|                  |   behavior of the attacker.                       
|\n+------------------+---------------------------------------------------+\n| Observability    | - TLS is required for all connectivity between    |\n| data             |   Cilium components, as well as for exporting     |\n|                  |   data to other destinations, removing the        |\n|                  |   scope for spoofing or tampering.                |\n|                  | - Without transparent encryption, the attacker    |\n|                  |   could re-create the observability data as       |\n|                  |   available on the network level.                 |\n|                  | - Information leakage could occur via an attacker |\n|                  |   scraping Hubble Prometheus metrics. These       |\n|                  |   metrics are disabled by default, and            |\n|                  |   can contain sensitive information on network    |\n|                  |   flows.                                          |\n|                  | - Denial of service could occur depending on the  |\n|                  |   behavior of the attacker.                       |\n+------------------+---------------------------------------------------+\n\nRecommended Controls\n^^^^^^^^^^^^^^^^^^^^\n\n- :ref:`gsg_encryption` should be configured to ensure the confidentiality of\n  communication between workloads.\n- TLS should be configured for communication between the Prometheus\n  metrics endpoints and the Prometheus server.\n- Network policies should be configured such that only the Prometheus\n  server is allowed to scrape :ref:`Hubble metrics <metrics>` in particular.\n\n.. _network-attacker:\n\nNetwork Attacker\n~~~~~~~~~~~~~~~~\n\nIn our threat model, a generic network attacker has access to the same\nunderlying IP network as Kubernetes worker nodes, but is not inline\nbetween the nodes. The assumption is that this attacker is still able to\nsend IP layer traffic that reaches a Kubernetes worker node. 
This is a\nweaker variant of the man-in-the-middle attack described above, as the\nattacker can only inject traffic to worker nodes, but not see the\nreplies.\n\n.. image:: images\/cilium_threat_model_network_attacker.png\n\nFor such an attacker, the threat matrix is as follows:\n\n.. rst-class:: wrapped-table\n\n+------------------+---------------------------------------------------+\n| **Threat         | **Identified STRIDE threats**                     |\n| surface**        |                                                   |\n+==================+===================================================+\n| Cilium agent     | None                                              |\n+------------------+---------------------------------------------------+\n| Cilium           | None                                              |\n| configuration    |                                                   |\n+------------------+---------------------------------------------------+\n| Cilium eBPF      | None                                              |\n| programs         |                                                   |\n+------------------+---------------------------------------------------+\n| Network data     | - An attacker with knowledge of pod network       |\n|                  |   configuration (including pod IP addresses and   |\n|                  |   ports) could inject traffic into a cluster by   |\n|                  |   forging packets.                                |\n|                  | - Denial of service could occur depending on the  |\n|                  |   behavior of the attacker.                       |\n+------------------+---------------------------------------------------+\n| Observability    | - Denial of service could occur depending on the  |\n| data             |   behavior of the attacker.                       
|
|                  | - Information leakage could occur via an attacker |
|                  |   scraping Cilium or Hubble Prometheus metrics,   |
|                  |   depending on the specific metrics enabled.      |
+------------------+---------------------------------------------------+

Recommended Controls
^^^^^^^^^^^^^^^^^^^^

- :ref:`gsg_encryption` should be configured to ensure the confidentiality of
  communication between workloads.

.. _kubernetes-api-server-attacker:

Kubernetes API Server Attacker
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This type of attack could be carried out by any user or code with
network access to the Kubernetes API server and credentials that allow
Kubernetes API requests. Such permissions would allow the user to read
or manipulate the API server state (for example by changing CRDs).

This section is intended to cover any attack that might be exposed via
Kubernetes API server access, regardless of whether the access is full or
limited.

.. image:: images/cilium_threat_model_api_server_attacker.png

For such an attacker, our threat matrix is as follows:

.. rst-class:: wrapped-table

+------------------+---------------------------------------------------+
| **Threat         | **Identified STRIDE threats**                     |
| surface**        |                                                   |
+==================+===================================================+
| Cilium agent     | - A Kubernetes API user with ``kubectl exec``     |
|                  |   access to the pod running Cilium effectively    |
|                  |   becomes a :ref:`root-equivalent host            |
|                  |   attacker <root-equivalent-host-attacker>`,      |
|                  |   since Cilium runs as a privileged pod.          |
|                  | - An attacker with permissions to configure       |
|                  |   workload settings effectively becomes a         |
|                  |   :ref:`kubernetes-workload-attacker`.            |
+------------------+---------------------------------------------------+
| Cilium           | The ability to modify the ``Cilium*``             |
| configuration    | CustomResourceDefinitions, as well as any         |
|                  | CustomResource from Cilium, in the cluster could  |
|                  | have the following effects:                       |
|                  |                                                   |
|                  | -  The ability to create or modify CiliumIdentity |
|                  |    and CiliumEndpoint or CiliumEndpointSlice      |
|                  |    resources would allow an attacker to tamper    |
|                  |    with the identities of pods.                   |
|                  | -  The ability to delete Kubernetes or Cilium     |
|                  |    NetworkPolicies would remove policy            |
|                  |    enforcement.                                   |
|                  | -  Creating a large number of CiliumIdentity      |
|                  |    resources could result in denial of service.   |
|                  | -  Workloads external to the cluster could be     |
|                  |    added to the network.                          |
|                  | -  Traffic routing settings between workloads     |
|                  |    could be modified.                             |
|                  |                                                   |
|                  | The cumulative effect of such actions could       |
|                  | result in the escalation of a single-node         |
|                  | compromise into a multi-node compromise.          |
+------------------+---------------------------------------------------+
| Cilium eBPF      | An attacker with ``kubectl exec`` access to the   |
| programs         | Cilium agent pod will be able to modify eBPF      |
|                  | programs.                                         |
+------------------+---------------------------------------------------+
| Network data     | Privileged Kubernetes API server access (``exec`` |
|                  | access to Cilium pods or access to view           |
|                  | Kubernetes secrets) could allow an attacker to    |
|                  | access the pre-shared key used for IPsec. When    |
|                  | used by a :ref:`man-in-the-middle                 |
|                  | attacker <mitm-attacker>`, this                   |
|                  | could undermine the confidentiality and integrity |
|                  | of workload communication.                        |
|                  | |br| |br|                                         |
|                  | Depending on the attacker's level of access, the  |
|                  | ability to spoof identities or tamper with policy |
|                  | enforcement could also allow them to view network |
|                  | data.                                             |
+------------------+---------------------------------------------------+
| Observability    | Users with permissions to configure workload      |
| data             | settings could cause denial of service.           |
+------------------+---------------------------------------------------+

Recommended Controls
^^^^^^^^^^^^^^^^^^^^

- `Kubernetes RBAC`_ should be configured to only grant necessary permissions
  to users and service accounts. Access to resources in the ``kube-system``
  and ``cilium`` namespaces in particular should be highly limited.
- Kubernetes audit logs should be used to automatically review requests
  made to the API server, and detections should be configured to
  alert on suspicious activity.

.. _Kubernetes RBAC: https://kubernetes.io/docs/reference/access-authn-authz/rbac/

.. _kv-store-attacker:

Cilium Key-value Store Attacker
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Cilium can use :ref:`an external key-value store <k8s_install_etcd>`
such as etcd to store state. In this scenario, we consider a user with
network access to the Cilium etcd endpoints and credentials to access
those etcd endpoints. The credentials to the etcd endpoints are stored
as Kubernetes secrets; any attacker would first have to compromise these
secrets before gaining access to the key-value store.

.. image:: images/cilium_threat_model_etcd_attacker.png

.. rst-class:: wrapped-table

+------------------+---------------------------------------------------+
| **Threat         | **Identified STRIDE threats**                     |
| surface**        |                                                   |
+==================+===================================================+
| Cilium agent     | None                                              |
+------------------+---------------------------------------------------+
| Cilium           | The ability to create or modify Identities or     |
| configuration    | Endpoints in etcd would allow an attacker to      |
|                  | "give" any pod any identity. The ability to spoof |
|                  | identities in this manner might be used to        |
|                  | escalate a single node compromise to a multi-node |
|                  | compromise, for example by spoofing identities to |
|                  | undermine ingress segmentation rules that would   |
|                  | be applied on remote nodes.                       |
+------------------+---------------------------------------------------+
| Cilium eBPF      | None                                              |
| programs         |                                                   |
+------------------+---------------------------------------------------+
| Network data     | An attacker would be able to modify the routing   |
|                  | of traffic within a cluster, and as a consequence |
|                  | gain the privileges of a :ref:`mitm-attacker`.    |
|                  |                                                   |
+------------------+---------------------------------------------------+
| Observability    | None                                              |
| data             |                                                   |
+------------------+---------------------------------------------------+

Recommended Controls
^^^^^^^^^^^^^^^^^^^^

-  The ``etcd`` instance deployed to store Cilium configuration should be independent
   of the instance that is typically deployed as part of configuring a Kubernetes
   cluster. This separation reduces the risk of a Cilium ``etcd`` compromise
   leading to further cluster-wide impact.
-  Kubernetes RBAC controls should be applied to restrict access to Kubernetes
   secrets.
-  Kubernetes audit logs should be used to detect access to secret data and
   alert if such access is suspicious.

Hubble Data Attacker
~~~~~~~~~~~~~~~~~~~~

This is an attacker with network reachability to Kubernetes worker
nodes, or other systems that store or expose Hubble data, with the goal
of gaining access to potentially sensitive Hubble flow or process data.

.. image:: images/cilium_threat_model_hubble_attacker.png

.. rst-class:: wrapped-table

+------------------+---------------------------------------------------+
| **Threat         | **Identified STRIDE threats**                     |
| surface**        |                                                   |
+==================+===================================================+
| Cilium pods      | None                                              |
+------------------+---------------------------------------------------+
| Cilium           | None                                              |
| configuration    |                                                   |
+------------------+---------------------------------------------------+
| Cilium eBPF      | None                                              |
| programs         |                                                   |
+------------------+---------------------------------------------------+
| Network data     | None                                              |
+------------------+---------------------------------------------------+
| Observability    | None, assuming correct configuration of the       |
| data             | following:                                        |
|                  |                                                   |
|                  | -  Network policy to limit access to              |
|                  |    ``hubble-relay`` or ``hubble-ui`` services     |
|                  | -  Limited access to ``cilium``,                  |
|                  |    ``hubble-relay``, or ``hubble-ui`` pods        |
|                  | -  TLS for external data export                   |
|                  | -  Security controls at the destination of any    |
|                  |    exported data                                  |
+------------------+---------------------------------------------------+

Recommended Controls
^^^^^^^^^^^^^^^^^^^^

-  Network policies should limit access to the ``hubble-relay`` and
   ``hubble-ui`` services
-  Kubernetes RBAC should be used to limit access to any ``cilium-*``
   or ``hubble-*`` pods
-  TLS should be configured for access to the Hubble Relay API and Hubble UI
-  TLS should be correctly configured for any data export
-  The destination data stores for exported data should be secured (for
   example, by applying encryption at rest and cloud provider specific RBAC
   controls)

Overall Recommendations
-----------------------

To summarize the recommended controls to be used when configuring a
production Kubernetes cluster with Cilium:

#. Ensure that Kubernetes roles are scoped correctly to the requirements of your
   users, and that service account permissions for pods are tightly scoped to
   the needs of the workloads. In particular, access to sensitive namespaces,
   ``exec`` actions, and Kubernetes secrets should all be highly controlled.
#. Use resource limits for workloads where possible to reduce the chance of
   denial of service attacks.
#. Ensure that workload privileges and capabilities are only granted when
   essential to the functionality of the workload, and ensure that specific
   controls to limit and monitor the behavior of the workload are in place.
#. Use :ref:`network policies <network_policy>` to ensure that network traffic in Kubernetes is segregated.
#. Use :ref:`gsg_encryption` in Cilium to ensure that communication between
   workloads is secured.
#. Enable Kubernetes audit logging, forward the audit logs to a centralized
   monitoring platform, and define alerting for suspicious activity.
#. Enable TLS for access to any externally-facing services, such as Hubble Relay
   and Hubble UI.
#. Use `Tetragon`_ as a runtime security solution to rapidly detect unexpected
   behavior within your Kubernetes cluster.

If you have questions, suggestions, or would like to help improve Cilium's security
posture, reach out to security@cilium.io.

.. |br| raw:: html

      <br>
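As a concrete illustration of the network policy recommendations above, the
following is a minimal sketch of a ``CiliumNetworkPolicy`` that restricts
ingress to the Hubble Relay pods so that only the Hubble UI backend can reach
them. The labels and port shown are assumptions based on a default Helm
deployment and should be checked against your own cluster:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: limit-hubble-relay-access
      namespace: kube-system
    spec:
      # Select the Hubble Relay pods (label assumed from a default install).
      endpointSelector:
        matchLabels:
          k8s-app: hubble-relay
      ingress:
        # Allow only the Hubble UI backend to connect, and only on the
        # Hubble Relay API port (4245 by default).
        - fromEndpoints:
            - matchLabels:
                k8s-app: hubble-ui
          toPorts:
            - ports:
                - port: "4245"
                  protocol: TCP

Because Cilium policies are allow-based, selecting the ``hubble-relay``
endpoints with any rule places them in default-deny for ingress, so all other
clients are rejected without a separate deny rule.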
                                                       Cilium             None                                                  configuration                                                                                                                                     Cilium eBPF        None                                                  programs                                                                                                                                          Network data         An attacker with knowledge of pod network                                configuration  including pod IP addresses and                            ports  could inject traffic into a cluster by                            forging packets                                                          Denial of service could occur depending on the                           behavior of the attacker                                                                                                     Observability        Denial of service could occur depending on the      data                 behavior of the attacker                                                 Information leakage could occur via an attacker                          scraping Cilium or Hubble Prometheus metrics                             depending on the specific metrics enabled                                                                                   Recommended Controls                          ref  gsg encryption  should be configured to ensure the confidentiality of   communication between workloads       kubernetes api server attacker   Kubernetes API Server Attacker                                 This type of attack could be carried out by any user or code with network access to the Kubernetes API server and credentials that allow Kubernetes API requests  Such permissions would allow the user to read or manipulate the API server state  for example by changing CRDs    This section is intended 
to cover any attack that might be exposed via Kubernetes API server access  regardless of whether the access is full or limited       image   images cilium threat model api server attacker png  For such an attacker  our threat matrix is as follows      rst class   wrapped table                                                                               Threat             Identified STRIDE threats                           surface                                                                                                                                           Cilium agent         A Kubernetes API user with   kubectl exec                                access to the pod running Cilium effectively                             becomes a  ref  root equivalent host                                     attacker  root equivalent host attacker                                  since Cilium runs as a privileged pod                                    An attacker with permissions to configure                                workload settings effectively becomes a                                   ref  kubernetes workload attacker                                                                                           Cilium             The ability to modify the   Cilium                    configuration      CustomResourceDefinitions  as well as any                                CustomResource from Cilium  in the cluster could                         have the following effects                                                                                                                           The ability to create or modify CiliumIdentity                           and CiliumEndpoint or CiliumEndpointSlice                                resources would allow an attacker to tamper                              with the identities of pods                                              The ability to delete Kubernetes or Cilium                               NetworkPolicies 
would remove policy                                      enforcement                                                              Creating a large number of CiliumIdentity                                resources could result in denial of service                              Workloads external to the cluster could be                               added to the network                                                     Traffic routing settings between workloads                               could be modified                                                                                                                              The cumulative effect of such actions could                              result in the escalation of a single node                                compromise into a multi node compromise                                                                                        Cilium eBPF        An attacker with   kubectl exec   access to the       programs           Cilium agent pod will be able to modify eBPF                             programs                                                                                                                       Network data       Privileged Kubernetes API server access    exec                          access to Cilium pods or access to view                                  Kubernetes secrets  could allow an attacker to                           access the pre shared key used for IPsec  When                           used by a  ref  man in the middle                                        attacker  mitm attacker    this                                          could undermine the confidentiality and integrity                        of workload communication                                                 br   br                                                                 Depending on the attacker s level of access  the                         ability to spoof identities or tamper with 
policy                        enforcement could also allow them to view network                        data                                                                                                                           Observability      Users with permissions to configure workload          data               settings could cause denial of service                                                                                        Recommended Controls                          Kubernetes RBAC   should be configured to only grant necessary permissions   to users and service accounts  Access to resources in the   kube system     and   cilium   namespaces in particular should be highly limited    Kubernetes audit logs should be used to automatically review requests   made to the API server  and detections should be configured to   alert on suspicious activity       Kubernetes RBAC  https   kubernetes io docs reference access authn authz rbac       kv store attacker   Cilium Key value Store Attacker                                  Cilium can use  ref  an external key value store  k8s install etcd   such as etcd to store state  In this scenario  we consider a user with network access to the Cilium etcd endpoints and credentials to access those etcd endpoints  The credentials to the etcd endpoints are stored as Kubernetes secrets  any attacker would first have to compromise these secrets before gaining access to the key value store      image   images cilium threat model etcd attacker png     rst class   wrapped table                                                                               Threat             Identified STRIDE threats                           surface                                                                                                                                           Cilium agent       None                                                                                                                           Cilium  
           The ability to create or modify Identities or         configuration      Endpoints in etcd would allow an attacker to                              give  any pod any identity  The ability to spoof                        identities in this manner might be used to                               escalate a single node compromise to a multi node                        compromise  for example by spoofing identities to                        undermine ingress segmentation rules that would                          be applied on remote nodes                                                                                                     Cilium eBPF        None                                                  programs                                                                                                                                          Network data       An attacker would be able to modify the routing                          of traffic within a cluster  and as a consequence                        gain the privileges of a  ref  mitm attacker                                                                                                                                                            Observability      None                                                  data                                                                                                                                             Recommended Controls                          The   etcd   instance deployed to store Cilium configuration should be independent    of the instance that is typically deployed as part of configuring a Kubernetes    cluster  This separation reduces the risk of a Cilium   etcd   compromise    leading to further cluster wide impact     Kubernetes RBAC controls should be applied to restrict access to Kubernetes    secrets     Kubernetes audit logs should be used to detect access to secret data and    alert if such access is suspicious   Hubble 
Data Attacker                       This is an attacker with network reachability to Kubernetes worker nodes  or other systems that store or expose Hubble data  with the goal of gaining access to potentially sensitive Hubble flow or process data      image   images cilium threat model hubble attacker png     rst class   wrapped table                                                                               Threat             Identified STRIDE threats                           surface                                                                                                                                           Cilium pods        None                                                                                                                           Cilium             None                                                  configuration                                                                                                                                     Cilium eBPF        None                                                  programs                                                                                                                                          Network data       None                                                                                                                           Observability      None  assuming correct configuration of the           data               following                                                                                                                                            Network policy to limit access to                                          hubble relay   or   hubble ui   services                               Limited access to   cilium                                                 hubble relay    or   hubble ui   pods                                  TLS for external data export                                             Security controls at the 
destination of any                              exported data                                                                                                              Recommended Controls                          Network policies should limit access to the   hubble relay   and      hubble ui   services    Kubernetes RBAC should be used to limit access to any   cilium        or   hubble      pods    TLS should be configured for access to the Hubble Relay API and Hubble UI    TLS should be correctly configured for any data export    The destination data stores for exported data should be secured  such    as by applying encryption at rest and cloud provider specific RBAC    controls  for example   Overall Recommendations                          To summarize the recommended controls to be used when configuring a production Kubernetes cluster with Cilium      Ensure that Kubernetes roles are scoped correctly to the requirements of your    users  and that service account permissions for pods are tightly scoped to    the needs of the workloads  In particular  access to sensitive namespaces       exec   actions  and Kubernetes secrets should all be highly controlled     Use resource limits for workloads where possible to reduce the chance of    denial of service attacks     Ensure that workload privileges and capabilities are only granted when    essential to the functionality of the workload  and ensure that specific    controls to limit and monitor the behavior of the workload are in place     Use  ref  network policies  network policy   to ensure that network traffic in Kubernetes is segregated     Use  ref  gsg encryption  in Cilium to ensure that communication between    workloads is secured     Enable Kubernetes audit logging  forward the audit logs to a centralized    monitoring platform  and define alerting for suspicious activity     Enable TLS for access to any externally facing services  such as Hubble Relay    and Hubble UI     Use  Tetragon   as a runtime 
security solution to rapidly detect unexpected    behavior within your Kubernetes cluster   If you have questions  suggestions  or would like to help improve Cilium s security posture  reach out to security cilium io       br  raw   html         br "}
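The recommendation above to limit which peers can reach the Hubble services can be expressed as a Cilium network policy. The sketch below is a minimal illustration, not taken from the Cilium documentation: it assumes a default-style install where Hubble Relay pods carry the label `k8s-app: hubble-relay`, Hubble UI pods carry `k8s-app: hubble-ui`, and Relay listens on its usual port 4245 — verify all three against your own deployment before applying.

```yaml
# Sketch: allow only Hubble UI pods to reach Hubble Relay; all other
# ingress to hubble-relay is dropped once this policy selects it.
# Assumed labels (k8s-app: hubble-relay / hubble-ui) and port 4245
# are typical defaults, not guaranteed for every install.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: restrict-hubble-relay
  namespace: kube-system
spec:
  endpointSelector:
    matchLabels:
      k8s-app: hubble-relay
  ingress:
  - fromEndpoints:
    - matchLabels:
        k8s-app: hubble-ui
    toPorts:
    - ports:
      - port: "4245"
        protocol: TCP
```

A similar policy scoped to the Prometheus server's labels can restrict who may scrape metrics endpoints.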
{"questions":"ckad excersises Create busybox pod with two containers each one will have the image busybox and will run the sleep 3600 command Make both containers mount an emptyDir at etc foo Connect to the second busybox write the first column of etc passwd file to etc foo passwd Connect to the first busybox and write etc foo passwd file to standard output Delete pod kubernetes io Documentation Tasks Configure Pods and Containers State Persistence 8 Define volumes","answers":"![](https:\/\/gaforgithub.azurewebsites.net\/api?repo=CKAD-exercises\/state&empty)\n# State Persistence (8%)\n\nkubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure a Pod to Use a Volume for Storage](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-volume-storage\/)\n\nkubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure a Pod to Use a PersistentVolume for Storage](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-persistent-volume-storage\/)\n\n## Define volumes \n\n### Create busybox pod with two containers, each one will have the image busybox and will run the 'sleep 3600' command. Make both containers mount an emptyDir at '\/etc\/foo'. Connect to the second busybox, write the first column of '\/etc\/passwd' file to '\/etc\/foo\/passwd'. Connect to the first busybox and write '\/etc\/foo\/passwd' file to standard output. 
Delete pod.\n\n<details><summary>show<\/summary>\n<p>\n\n*This question is probably a better fit for the 'Multi-container-pods' section but I'm keeping it here as it will help you get acquainted with state*\n\nEasiest way to do this is to create a template pod with:\n\n```bash\nkubectl run busybox --image=busybox --restart=Never -o yaml --dry-run=client -- \/bin\/sh -c 'sleep 3600' > pod.yaml\nvi pod.yaml\n```\nCopy paste the container definition and type the lines that have a comment in the end:\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: busybox\n  name: busybox\nspec:\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\n  containers:\n  - args:\n    - \/bin\/sh\n    - -c\n    - sleep 3600\n    image: busybox\n    imagePullPolicy: IfNotPresent\n    name: busybox\n    resources: {}\n    volumeMounts: #\n    - name: myvolume #\n      mountPath: \/etc\/foo #\n  - args:\n    - \/bin\/sh\n    - -c\n    - sleep 3600\n    image: busybox\n    name: busybox2 # don't forget to change the name during copy paste, must be different from the first container's name!\n    volumeMounts: #\n    - name: myvolume #\n      mountPath: \/etc\/foo #\n  volumes: #\n  - name: myvolume #\n    emptyDir: {} #\n```\nIn case you forget to add ```bash -- \/bin\/sh -c 'sleep 3600'``` in template pod create command, you can include command field in config file\n\n```YAML\nspec:\n  containers:\n  - image: busybox\n    name: busybox\n    command: [\"\/bin\/sh\", \"-c\", \"sleep 3600\"]\n```\n\nConnect to the second container:\n\n```bash\nkubectl exec -it busybox -c busybox2 -- \/bin\/sh\ncat \/etc\/passwd | cut -f 1 -d ':' > \/etc\/foo\/passwd # instead of cut command you can use awk -F \":\" '{print $1}'\ncat \/etc\/foo\/passwd # confirm that stuff has been written successfully\nexit\n```\n\nConnect to the first container:\n\n```bash\nkubectl exec -it busybox -c busybox -- \/bin\/sh\nmount | grep foo # confirm the mounting\ncat 
\/etc\/foo\/passwd\nexit\nkubectl delete po busybox\n```\n\n<\/p>\n<\/details>\n\n\n### Create a PersistentVolume of 10Gi, called 'myvolume'. Make it have accessMode of 'ReadWriteOnce' and 'ReadWriteMany', storageClassName 'normal', mounted on hostPath '\/etc\/foo'. Save it on pv.yaml, add it to the cluster. Show the PersistentVolumes that exist on the cluster\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nvi pv.yaml\n```\n\n```YAML\nkind: PersistentVolume\napiVersion: v1\nmetadata:\n  name: myvolume\nspec:\n  storageClassName: normal\n  capacity:\n    storage: 10Gi\n  accessModes:\n    - ReadWriteOnce\n    - ReadWriteMany\n  hostPath:\n    path: \/etc\/foo\n```\n\nShow the PersistentVolumes:\n\n```bash\nkubectl create -f pv.yaml\n# will have status 'Available'\nkubectl get pv\n```\n\n<\/p>\n<\/details>\n\n### Create a PersistentVolumeClaim for this storage class, called 'mypvc', a request of 4Gi and an accessMode of ReadWriteOnce, with the storageClassName of normal, and save it on pvc.yaml. Create it on the cluster. Show the PersistentVolumeClaims of the cluster. Show the PersistentVolumes of the cluster\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nvi pvc.yaml\n```\n\n```YAML\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: mypvc\nspec:\n  storageClassName: normal\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 4Gi\n```\n\nCreate it on the cluster:\n\n```bash\nkubectl create -f pvc.yaml\n```\n\nShow the PersistentVolumeClaims and PersistentVolumes:\n\n```bash\nkubectl get pvc # will show as 'Bound'\nkubectl get pv # will show as 'Bound' as well\n```\n\n<\/p>\n<\/details>\n\n### Create a busybox pod with command 'sleep 3600', save it on pod.yaml. Mount the PersistentVolumeClaim to '\/etc\/foo'. 
Connect to the 'busybox' pod, and copy the '\/etc\/passwd' file to '\/etc\/foo\/passwd'\n\n<details><summary>show<\/summary>\n<p>\n\nCreate a skeleton pod:\n\n```bash\nkubectl run busybox --image=busybox --restart=Never -o yaml --dry-run=client -- \/bin\/sh -c 'sleep 3600' > pod.yaml\nvi pod.yaml\n```\n\nAdd the lines that finish with a comment:\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: busybox\n  name: busybox\nspec:\n  containers:\n  - args:\n    - \/bin\/sh\n    - -c\n    - sleep 3600\n    image: busybox\n    imagePullPolicy: IfNotPresent\n    name: busybox\n    resources: {}\n    volumeMounts: #\n    - name: myvolume #\n      mountPath: \/etc\/foo #\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\n  volumes: #\n  - name: myvolume #\n    persistentVolumeClaim: #\n      claimName: mypvc #\nstatus: {}\n```\n\nCreate the pod:\n\n```bash\nkubectl create -f pod.yaml\n```\n\nConnect to the pod and copy '\/etc\/passwd' to '\/etc\/foo\/passwd':\n\n```bash\nkubectl exec busybox -it -- cp \/etc\/passwd \/etc\/foo\/passwd\n```\n\n<\/p>\n<\/details>\n\n### Create a second pod which is identical with the one you just created (you can easily do it by changing the 'name' property on pod.yaml). Connect to it and verify that '\/etc\/foo' contains the 'passwd' file. Delete pods to cleanup. Note: If you can't see the file from the second pod, can you figure out why? 
What would you do to fix that?\n\n\n\n<details><summary>show<\/summary>\n<p>\n\nCreate the second pod, called busybox2:\n\n```bash\nvim pod.yaml\n# change 'metadata.name: busybox' to 'metadata.name: busybox2'\nkubectl create -f pod.yaml\nkubectl exec busybox2 -- ls \/etc\/foo # will show 'passwd'\n# cleanup\nkubectl delete po busybox busybox2\nkubectl delete pvc mypvc\nkubectl delete pv myvolume\n```\n\nIf the file doesn't show on the second pod but it shows on the first, it has most likely been scheduled on a different node.\n\n```bash\n# check which nodes the pods are on\nkubectl get po busybox -o wide\nkubectl get po busybox2 -o wide\n```\n\nIf they are on different nodes, you won't see the file, because we used the `hostPath` volume type.\nIf you need to access the same files in a multi-node cluster, you need a volume type that is independent of a specific node.\nThere are lots of different types per cloud provider [(see here)](https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes\/#types-of-persistent-volumes), a general solution could be to use NFS.\n\n<\/p>\n<\/details>\n\n### Create a busybox pod with 'sleep 3600' as arguments. 
Copy '\/etc\/passwd' from the pod to your local folder\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run busybox --image=busybox --restart=Never -- sleep 3600\nkubectl cp busybox:\/etc\/passwd .\/passwd # kubectl cp command\n# previous command might report an error, feel free to ignore it since copy command works\ncat passwd\n```\n\n<\/p>\n<\/details>","site":"ckad excersises","answers_cleaned":"    https   gaforgithub azurewebsites net api repo CKAD exercises state empty    State Persistence  8    kubernetes io   Documentation   Tasks   Configure Pods and Containers    Configure a Pod to Use a Volume for Storage  https   kubernetes io docs tasks configure pod container configure volume storage    kubernetes io   Documentation   Tasks   Configure Pods and Containers    Configure a Pod to Use a PersistentVolume for Storage  https   kubernetes io docs tasks configure pod container configure persistent volume storage       Define volumes       Create busybox pod with two containers  each one will have the image busybox and will run the  sleep 3600  command  Make both containers mount an emptyDir at   etc foo   Connect to the second busybox  write the first column of   etc passwd  file to   etc foo passwd   Connect to the first busybox and write   etc foo passwd  file to standard output  Delete pod    details  summary show  summary   p    This question is probably a better fit for the  Multi container pods  section but I m keeping it here as it will help you get acquainted with state   Easiest way to do this is to create a template pod with      bash kubectl run busybox   image busybox   restart Never  o yaml   dry run client     bin sh  c  sleep 3600    pod yaml vi pod yaml     Copy paste the container definition and type the lines that have a comment in the end      YAML apiVersion  v1 kind  Pod metadata    creationTimestamp  null   labels      run  busybox   name  busybox spec    dnsPolicy  ClusterFirst   restartPolicy  Never   containers      args   
      bin sh        c       sleep 3600     image  busybox     imagePullPolicy  IfNotPresent     name  busybox     resources         volumeMounts          name  myvolume         mountPath   etc foo       args         bin sh        c       sleep 3600     image  busybox     name  busybox2   don t forget to change the name during copy paste  must be different from the first container s name      volumeMounts          name  myvolume         mountPath   etc foo     volumes        name  myvolume       emptyDir           In case you forget to add    bash     bin sh  c  sleep 3600     in template pod create command  you can include command field in config file     YAML spec    containers      image  busybox     name  busybox     command     bin sh     c    sleep 3600        Connect to the second container      bash kubectl exec  it busybox  c busybox2     bin sh cat  etc passwd   cut  f 1  d        etc foo passwd   instead of cut command you can use awk  F       print  1   cat  etc foo passwd   confirm that stuff has been written successfully exit      Connect to the first container      bash kubectl exec  it busybox  c busybox     bin sh mount   grep foo   confirm the mounting cat  etc foo passwd exit kubectl delete po busybox        p    details        Create a PersistentVolume of 10Gi  called  myvolume   Make it have accessMode of  ReadWriteOnce  and  ReadWriteMany   storageClassName  normal   mounted on hostPath   etc foo   Save it on pv yaml  add it to the cluster  Show the PersistentVolumes that exist on the cluster   details  summary show  summary   p      bash vi pv yaml         YAML kind  PersistentVolume apiVersion  v1 metadata    name  myvolume spec    storageClassName  normal   capacity      storage  10Gi   accessModes        ReadWriteOnce       ReadWriteMany   hostPath      path   etc foo      Show the PersistentVolumes      bash kubectl create  f pv yaml   will have status  Available  kubectl get pv        p    details       Create a PersistentVolumeClaim for 
this storage class  called  mypvc   a request of 4Gi and an accessMode of ReadWriteOnce  with the storageClassName of normal  and save it on pvc yaml  Create it on the cluster  Show the PersistentVolumeClaims of the cluster  Show the PersistentVolumes of the cluster   details  summary show  summary   p      bash vi pvc yaml         YAML kind  PersistentVolumeClaim apiVersion  v1 metadata    name  mypvc spec    storageClassName  normal   accessModes        ReadWriteOnce   resources      requests        storage  4Gi      Create it on the cluster      bash kubectl create  f pvc yaml      Show the PersistentVolumeClaims and PersistentVolumes      bash kubectl get pvc   will show as  Bound  kubectl get pv   will show as  Bound  as well        p    details       Create a busybox pod with command  sleep 3600   save it on pod yaml  Mount the PersistentVolumeClaim to   etc foo   Connect to the  busybox  pod  and copy the   etc passwd  file to   etc foo passwd    details  summary show  summary   p   Create a skeleton pod      bash kubectl run busybox   image busybox   restart Never  o yaml   dry run client     bin sh  c  sleep 3600    pod yaml vi pod yaml      Add the lines that finish with a comment      YAML apiVersion  v1 kind  Pod metadata    creationTimestamp  null   labels      run  busybox   name  busybox spec    containers      args         bin sh        c       sleep 3600     image  busybox     imagePullPolicy  IfNotPresent     name  busybox     resources         volumeMounts          name  myvolume         mountPath   etc foo     dnsPolicy  ClusterFirst   restartPolicy  Never   volumes        name  myvolume       persistentVolumeClaim          claimName  mypvc   status          Create the pod      bash kubectl create  f pod yaml      Connect to the pod and copy   etc passwd  to   etc foo passwd       bash kubectl exec busybox  it    cp  etc passwd  etc foo passwd        p    details       Create a second pod which is identical with the one you just created  you can 
easily do it by changing the  name  property on pod yaml   Connect to it and verify that   etc foo  contains the  passwd  file  Delete pods to cleanup  Note  If you can t see the file from the second pod  can you figure out why  What would you do to fix that      details  summary show  summary   p   Create the second pod  called busybox2      bash vim pod yaml   change  metadata name  busybox  to  metadata name  busybox2  kubectl create  f pod yaml kubectl exec busybox2    ls  etc foo   will show  passwd    cleanup kubectl delete po busybox busybox2 kubectl delete pvc mypvc kubectl delete pv myvolume      If the file doesn t show on the second pod but it shows on the first  it has most likely been scheduled on a different node      bash   check which nodes the pods are on kubectl get po busybox  o wide kubectl get po busybox2  o wide      If they are on different nodes  you won t see the file  because we used the  hostPath  volume type  If you need to access the same files in a multi node cluster  you need a volume type that is independent of a specific node  There are lots of different types per cloud provider   see here   https   kubernetes io docs concepts storage persistent volumes  types of persistent volumes   a general solution could be to use NFS     p    details       Create a busybox pod with  sleep 3600  as arguments  Copy   etc passwd  from the pod to your local folder   details  summary show  summary   p      bash kubectl run busybox   image busybox   restart Never    sleep 3600 kubectl cp busybox  etc passwd   passwd   kubectl cp command   previous command might report an error  feel free to ignore it since copy command works cat passwd        p    details "}
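The answer above notes that `hostPath` volumes are node-local, so a pod scheduled on a different node sees an empty directory, and suggests NFS as a node-independent alternative. A minimal sketch of an NFS-backed PersistentVolume follows; the server address and export path are placeholders, and an NFS server reachable from the nodes must already exist.

```yaml
# Sketch: node-independent PV backed by NFS instead of hostPath.
# nfs.example.com and /exports/foo are illustrative placeholders.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: myvolume-nfs
spec:
  storageClassName: normal
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany   # NFS supports concurrent mounts from many nodes
  nfs:
    server: nfs.example.com
    path: /exports/foo
```

With this PV bound to the same claim, both pods would see the same files regardless of which node they land on.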
{"questions":"ckad excersises Pod design 20","answers":"![](https:\/\/gaforgithub.azurewebsites.net\/api?repo=CKAD-exercises\/pod_design&empty)\n# Pod design (20%)\n\n[Labels And Annotations](#labels-and-annotations)\n\n[Deployments](#deployments)\n\n[Jobs](#jobs)\n\n[Cron Jobs](#cron-jobs)\n\n## Labels and Annotations\nkubernetes.io > Documentation > Concepts > Overview > Working with Kubernetes Objects > [Labels and Selectors](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/#label-selectors)\n\n### Create 3 pods with names nginx1,nginx2,nginx3. All of them should have the label app=v1\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run nginx1 --image=nginx --restart=Never --labels=app=v1\nkubectl run nginx2 --image=nginx --restart=Never --labels=app=v1\nkubectl run nginx3 --image=nginx --restart=Never --labels=app=v1\n# or\nfor i in `seq 1 3`; do kubectl run nginx$i --image=nginx -l app=v1 ; done\n```\n\n<\/p>\n<\/details>\n\n### Show all labels of the pods\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get po --show-labels\n```\n\n<\/p>\n<\/details>\n\n### Change the labels of pod 'nginx2' to be app=v2\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl label po nginx2 app=v2 --overwrite\n```\n\n<\/p>\n<\/details>\n\n### Get the label 'app' for the pods (show a column with APP labels)\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get po -L app\n# or\nkubectl get po --label-columns=app\n```\n\n<\/p>\n<\/details>\n\n### Get only the 'app=v2' pods\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get po -l app=v2\n# or\nkubectl get po -l 'app in (v2)'\n# or\nkubectl get po --selector=app=v2\n```\n\n<\/p>\n<\/details>\n\n### Add a new label tier=web to all pods having 'app=v2' or 'app=v1' labels\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl label po -l \"app in(v1,v2)\" tier=web\n```\n<\/p>\n<\/details>\n\n\n### Add an annotation 'owner: marketing' to 
all pods having 'app=v2' label\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl annotate po -l \"app=v2\" owner=marketing\n```\n<\/p>\n<\/details>\n\n### Remove the 'app' label from the pods we created before\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl label po nginx1 nginx2 nginx3 app-\n# or\nkubectl label po nginx{1..3} app-\n# or\nkubectl label po -l app app-\n```\n\n<\/p>\n<\/details>\n\n### Annotate pods nginx1, nginx2, nginx3 with \"description='my description'\" value\n\n<details><summary>show<\/summary>\n<p>\n\n\n```bash\nkubectl annotate po nginx1 nginx2 nginx3 description='my description'\n\n#or\n\nkubectl annotate po nginx{1..3} description='my description'\n```\n\n<\/p>\n<\/details>\n\n### Check the annotations for pod nginx1\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl annotate pod nginx1 --list\n\n# or\n\nkubectl describe po nginx1 | grep -i 'annotations'\n\n# or\n\nkubectl get po nginx1 -o custom-columns=Name:metadata.name,ANNOTATIONS:metadata.annotations.description\n```\n\nAs an alternative to using `| grep` you can use jsonPath like `kubectl get po nginx1 -o jsonpath='{.metadata.annotations}{\"\\n\"}'`\n\n<\/p>\n<\/details>\n\n### Remove the annotations for these three pods\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl annotate po nginx{1..3} description- owner-\n```\n\n<\/p>\n<\/details>\n\n### Remove these pods to have a clean state in your cluster\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl delete po nginx{1..3}\n```\n\n<\/p>\n<\/details>\n\n## Pod Placement\n\n### Create a pod that will be deployed to a Node that has the label 'accelerator=nvidia-tesla-p100'\n\n<details><summary>show<\/summary>\n<p>\n\nAdd the label to a node:\n\n```bash\nkubectl label nodes <your-node-name> accelerator=nvidia-tesla-p100\nkubectl get nodes --show-labels\n```\n\nWe can use the 'nodeSelector' property on the Pod YAML:\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  name: 
cuda-test\nspec:\n  containers:\n    - name: cuda-test\n      image: \"k8s.gcr.io\/cuda-vector-add:v0.1\"\n  nodeSelector: # add this\n    accelerator: nvidia-tesla-p100 # the selection label\n```\n\nYou can easily find out where in the YAML it should be placed by:\n\n```bash\nkubectl explain po.spec\n```\n\nOR:\nUse node affinity (https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/assign-pods-nodes-using-node-affinity\/#schedule-a-pod-using-required-node-affinity)\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  name: affinity-pod\nspec:\n  affinity:\n    nodeAffinity:\n      requiredDuringSchedulingIgnoredDuringExecution:\n        nodeSelectorTerms:\n        - matchExpressions:\n          - key: accelerator\n            operator: In\n            values:\n            - nvidia-tesla-p100\n  containers:\n    ...\n```\n\n<\/p>\n<\/details>\n\n### Taint a node with key `tier` and value `frontend` with the effect `NoSchedule`. Then, create a pod that tolerates this taint.\n\n<details><summary>show<\/summary>\n<p>\n\nTaint a node:\n\n```bash\nkubectl taint node node1 tier=frontend:NoSchedule # key=value:Effect\nkubectl describe node node1 # view the taints on a node\n```\n\nAnd to tolerate the taint:\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: frontend\nspec:\n  containers:\n  - name: nginx\n    image: nginx\n  tolerations:\n  - key: \"tier\"\n    operator: \"Equal\"\n    value: \"frontend\"\n    effect: \"NoSchedule\"\n```\n\n<\/p>\n<\/details>\n\n### Create a pod that will be placed on node `controlplane`. 
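Worth noting for both of these toleration exercises: `operator: Equal` matches a taint only when key and value both match, while `operator: Exists` matches any taint with that key. A minimal sketch contrasting the two forms (illustrative only, not tied to a specific cluster):

```yaml
tolerations:
- key: "tier"
  operator: "Equal"      # taint key and value must both match
  value: "frontend"
  effect: "NoSchedule"
- key: "node-role.kubernetes.io\/control-plane"
  operator: "Exists"     # matches the taint regardless of its value
  effect: "NoSchedule"
```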
Use nodeSelector and tolerations.\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nvi pod.yaml\n```\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: frontend\nspec:\n  containers:\n  - name: nginx\n    image: nginx\n  nodeSelector:\n    kubernetes.io\/hostname: controlplane\n  tolerations:\n  - key: \"node-role.kubernetes.io\/control-plane\"\n    operator: \"Exists\"\n    effect: \"NoSchedule\"\n```\n\n```bash\nkubectl create -f pod.yaml\n```\n\n<\/p>\n<\/details>\n\n## Deployments\n\nkubernetes.io > Documentation > Concepts > Workloads > Workload Resources > [Deployments](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment)\n\n### Create a deployment with image nginx:1.18.0, called nginx, having 2 replicas, defining port 80 as the port that this container exposes (don't create a service for this deployment)\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create deployment nginx  --image=nginx:1.18.0  --dry-run=client -o yaml > deploy.yaml\nvi deploy.yaml\n# change the replicas field from 1 to 2\n# add this section to the container spec and save the deploy.yaml file\n# ports:\n#   - containerPort: 80\nkubectl apply -f deploy.yaml\n```\n\nor, do something like:\n\n```bash\nkubectl create deployment nginx  --image=nginx:1.18.0  --dry-run=client -o yaml | sed 's\/replicas: 1\/replicas: 2\/g'  | sed 's\/image: nginx:1.18.0\/image: nginx:1.18.0\\n        ports:\\n        - containerPort: 80\/g' | kubectl apply -f -\n```\n\nor,\n```bash\nkubectl create deploy nginx --image=nginx:1.18.0 --replicas=2 --port=80\n```\n\n<\/p>\n<\/details>\n\n### View the YAML of this deployment\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get deploy nginx -o yaml\n```\n\n<\/p>\n<\/details>\n\n### View the YAML of the replica set that was created by this deployment\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl describe deploy nginx # you'll see the name of the replica set on the Events section and in the 
'NewReplicaSet' property\n# OR you can find rs directly by:\nkubectl get rs -l run=nginx # if you created deployment by 'run' command\nkubectl get rs -l app=nginx # if you created deployment by 'create' command\n# you could also just do kubectl get rs\nkubectl get rs nginx-7bf7478b77 -o yaml\n```\n\n<\/p>\n<\/details>\n\n### Get the YAML for one of the pods\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get po # get all the pods\n# OR you can find pods directly by:\nkubectl get po -l run=nginx # if you created deployment by 'run' command\nkubectl get po -l app=nginx # if you created deployment by 'create' command\nkubectl get po nginx-7bf7478b77-gjzp8 -o yaml\n```\n\n<\/p>\n<\/details>\n\n### Check how the deployment rollout is going\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl rollout status deploy nginx\n```\n\n<\/p>\n<\/details>\n\n### Update the nginx image to nginx:1.19.8\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl set image deploy nginx nginx=nginx:1.19.8\n# alternatively...\nkubectl edit deploy nginx # change the .spec.template.spec.containers[0].image\n```\n\nThe syntax of the 'kubectl set image' command is `kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... 
CONTAINER_NAME_N=CONTAINER_IMAGE_N [options]`\n\n<\/p>\n<\/details>\n\n### Check the rollout history and confirm that the replicas are OK\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl rollout history deploy nginx\nkubectl get deploy nginx\nkubectl get rs # check that a new replica set has been created\nkubectl get po\n```\n\n<\/p>\n<\/details>\n\n### Undo the latest rollout and verify that new pods have the old image (nginx:1.18.0)\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl rollout undo deploy nginx\n# wait a bit\nkubectl get po # select one 'Running' Pod\nkubectl describe po nginx-5ff4457d65-nslcl | grep -i image # should be nginx:1.18.0\n```\n\n<\/p>\n<\/details>\n\n### Deliberately update the deployment with the wrong image nginx:1.91\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl set image deploy nginx nginx=nginx:1.91\n# or\nkubectl edit deploy nginx\n# change the image to nginx:1.91\n# vim tip: type (without quotes) '\/image' and Enter, to navigate quickly\n```\n\n<\/p>\n<\/details>\n\n### Verify that something's wrong with the rollout\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl rollout status deploy nginx\n# or\nkubectl get po # you'll see 'ErrImagePull' or 'ImagePullBackOff'\n```\n\n<\/p>\n<\/details>\n\n\n### Return the deployment to the second revision (number 2) and verify the image is nginx:1.19.8\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl rollout undo deploy nginx --to-revision=2\nkubectl describe deploy nginx | grep Image:\nkubectl rollout status deploy nginx # Everything should be OK\n```\n\n<\/p>\n<\/details>\n\n### Check the details of the fourth revision (number 4)\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl rollout history deploy nginx --revision=4 # You'll also see the wrong image displayed here\n```\n\n<\/p>\n<\/details>\n\n### Scale the deployment to 5 replicas\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl scale deploy 
nginx --replicas=5\nkubectl get po\nkubectl describe deploy nginx\n```\n\n<\/p>\n<\/details>\n\n### Autoscale the deployment, pods between 5 and 10, targeting CPU utilization at 80%\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl autoscale deploy nginx --min=5 --max=10 --cpu-percent=80\n# view the horizontalpodautoscalers.autoscaling for nginx\nkubectl get hpa nginx\n```\n\n<\/p>\n<\/details>\n\n### Pause the rollout of the deployment\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl rollout pause deploy nginx\n```\n\n<\/p>\n<\/details>\n\n### Update the image to nginx:1.19.9 and check that there's nothing going on, since we paused the rollout\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl set image deploy nginx nginx=nginx:1.19.9\n# or\nkubectl edit deploy nginx\n# change the image to nginx:1.19.9\nkubectl rollout history deploy nginx # no new revision\n```\n\n<\/p>\n<\/details>\n\n### Resume the rollout and check that the nginx:1.19.9 image has been applied\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl rollout resume deploy nginx\nkubectl rollout history deploy nginx\nkubectl rollout history deploy nginx --revision=6 # insert the number of your latest revision\n```\n\n<\/p>\n<\/details>\n\n### Delete the deployment and the horizontal pod autoscaler you created\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl delete deploy nginx\nkubectl delete hpa nginx\n\n# Or\nkubectl delete deploy\/nginx hpa\/nginx\n```\n<\/p>\n<\/details>\n\n### Implement a canary deployment by running two instances of nginx marked as version=v1 and version=v2 so that the load is balanced at 75%-25% ratio\n\n<details><summary>show<\/summary>\n<p>\n\nDeploy 3 replicas of v1:\n```YAML\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: my-app-v1\n  labels:\n    app: my-app\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: my-app\n      version: v1\n  template:\n    metadata:\n      labels:\n        app: 
my-app\n        version: v1\n    spec:\n      containers:\n      - name: nginx\n        image: nginx\n        ports:\n        - containerPort: 80\n        volumeMounts:\n        - name: workdir\n          mountPath: \/usr\/share\/nginx\/html\n      initContainers:\n      - name: install\n        image: busybox:1.28\n        command:\n        - \/bin\/sh\n        - -c\n        - \"echo version-1 > \/work-dir\/index.html\"\n        volumeMounts:\n        - name: workdir\n          mountPath: \"\/work-dir\"\n      volumes:\n      - name: workdir\n        emptyDir: {}\n```\n\nCreate the service:\n```YAML\napiVersion: v1\nkind: Service\nmetadata:\n  name: my-app-svc\n  labels:\n    app: my-app\nspec:\n  type: ClusterIP\n  ports:\n  - name: http\n    port: 80\n    targetPort: 80\n  selector:\n    app: my-app\n```\n\nTest if the deployment was successful from within a Pod:\n```\n# run a wget to the Service my-app-svc\nkubectl run -it --rm --restart=Never busybox --image=gcr.io\/google-containers\/busybox --command -- wget -qO- my-app-svc\n\nversion-1\n```\n\nDeploy 1 replica of v2:\n```YAML\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: my-app-v2\n  labels:\n    app: my-app\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: my-app\n      version: v2\n  template:\n    metadata:\n      labels:\n        app: my-app\n        version: v2\n    spec:\n      containers:\n      - name: nginx\n        image: nginx\n        ports:\n        - containerPort: 80\n        volumeMounts:\n        - name: workdir\n          mountPath: \/usr\/share\/nginx\/html\n      initContainers:\n      - name: install\n        image: busybox:1.28\n        command:\n        - \/bin\/sh\n        - -c\n        - \"echo version-2 > \/work-dir\/index.html\"\n        volumeMounts:\n        - name: workdir\n          mountPath: \"\/work-dir\"\n      volumes:\n      - name: workdir\n        emptyDir: {}\n```\n\nObserve that, when calling the IP exposed by the service, the requests are load balanced 
across the two versions:\n```\n# run a busybox pod that will make a wget call to the service my-app-svc and print out the version of the pod it reached.\nkubectl run -it --rm --restart=Never busybox --image=gcr.io\/google-containers\/busybox -- \/bin\/sh -c 'while sleep 1; do wget -qO- my-app-svc; done'\n\nversion-1\nversion-1\nversion-1\nversion-2\nversion-2\nversion-1\n```\n\nIf v2 is stable, scale it up to 4 replicas and shut down v1:\n```\nkubectl scale --replicas=4 deploy my-app-v2\nkubectl delete deploy my-app-v1\nwhile sleep 0.1; do curl $(kubectl get svc my-app-svc -o jsonpath=\"{.spec.clusterIP}\"); done\nversion-2\nversion-2\nversion-2\nversion-2\nversion-2\nversion-2\n```\n\n<\/p>\n<\/details>\n\n## Jobs\n\n### Create a job named pi with image perl:5.34 that runs the command with arguments \"perl -Mbignum=bpi -wle 'print bpi(2000)'\"\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create job pi  --image=perl:5.34 -- perl -Mbignum=bpi -wle 'print bpi(2000)'\n```\n\n<\/p>\n<\/details>\n\n### Wait till it's done, get the output\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get jobs -w # wait till 'SUCCESSFUL' is 1 (will take some time, perl image might be big)\nkubectl get po # get the pod name\nkubectl logs pi-**** # get the pi numbers\nkubectl delete job pi\n```\nOR\n\n```bash\nkubectl get jobs -w # wait till 'SUCCESSFUL' is 1 (will take some time, perl image might be big)\nkubectl logs job\/pi\nkubectl delete job pi\n```\nOR\n\n```bash\nkubectl wait --for=condition=complete --timeout=300s job pi\nkubectl logs job\/pi\nkubectl delete job pi\n```\n\n<\/p>\n<\/details>\n\n### Create a job with the image busybox that executes the command 'echo hello;sleep 30;echo world'\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create job busybox --image=busybox -- \/bin\/sh -c 'echo hello;sleep 30;echo world'\n```\n\n<\/p>\n<\/details>\n\n### Follow the logs for the pod (you'll wait for 30 
seconds)\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get po # find the job pod\nkubectl logs busybox-ptx58 -f # follow the logs\n```\n\n<\/p>\n<\/details>\n\n### See the status of the job, describe it and see the logs\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get jobs\nkubectl describe jobs busybox\nkubectl logs job\/busybox\n```\n\n<\/p>\n<\/details>\n\n### Delete the job\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl delete job busybox\n```\n\n<\/p>\n<\/details>\n\n### Create a job but ensure that it will be automatically terminated by Kubernetes if it takes more than 30 seconds to execute\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create job busybox --image=busybox --dry-run=client -o yaml -- \/bin\/sh -c 'while true; do echo hello; sleep 10;done' > job.yaml\nvi job.yaml\n```\n\nAdd job.spec.activeDeadlineSeconds=30\n\n```YAML\napiVersion: batch\/v1\nkind: Job\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: busybox\n  name: busybox\nspec:\n  activeDeadlineSeconds: 30 # add this line\n  template:\n    metadata:\n      creationTimestamp: null\n      labels:\n        run: busybox\n    spec:\n      containers:\n      - args:\n        - \/bin\/sh\n        - -c\n        - while true; do echo hello; sleep 10;done\n        image: busybox\n        name: busybox\n        resources: {}\n      restartPolicy: OnFailure\nstatus: {}\n```\n<\/p>\n<\/details>\n\n### Create the same job, make it run 5 times, one after the other. 
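Hint before opening the answer: the Job spec separates `completions` (total number of successful runs required) from `parallelism` (how many pods may run at once); sequential execution means leaving `parallelism` at its default of 1. A minimal sketch of the relevant fields:

```yaml
spec:
  completions: 5    # require 5 successful runs in total
  parallelism: 1    # default: one pod at a time
```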
Verify its status and delete it\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create job busybox --image=busybox --dry-run=client -o yaml -- \/bin\/sh -c 'echo hello;sleep 30;echo world' > job.yaml\nvi job.yaml\n```\n\nAdd job.spec.completions=5\n\n```YAML\napiVersion: batch\/v1\nkind: Job\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: busybox\n  name: busybox\nspec:\n  completions: 5 # add this line\n  template:\n    metadata:\n      creationTimestamp: null\n      labels:\n        run: busybox\n    spec:\n      containers:\n      - args:\n        - \/bin\/sh\n        - -c\n        - echo hello;sleep 30;echo world\n        image: busybox\n        name: busybox\n        resources: {}\n      restartPolicy: OnFailure\nstatus: {}\n```\n\n```bash\nkubectl create -f job.yaml\n```\n\nVerify that it has been completed:\n\n```bash\nkubectl get job busybox -w # will take two and a half minutes\nkubectl delete jobs busybox\n```\n\n<\/p>\n<\/details>\n\n### Create the same job, but make it run 5 parallel times\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nvi job.yaml\n```\n\nAdd job.spec.parallelism=5\n\n```YAML\napiVersion: batch\/v1\nkind: Job\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: busybox\n  name: busybox\nspec:\n  parallelism: 5 # add this line\n  template:\n    metadata:\n      creationTimestamp: null\n      labels:\n        run: busybox\n    spec:\n      containers:\n      - args:\n        - \/bin\/sh\n        - -c\n        - echo hello;sleep 30;echo world\n        image: busybox\n        name: busybox\n        resources: {}\n      restartPolicy: OnFailure\nstatus: {}\n```\n\n```bash\nkubectl create -f job.yaml\nkubectl get jobs\n```\n\nIt will take some time for the parallel jobs to finish (>= 30 seconds)\n\n```bash\nkubectl delete job busybox\n```\n\n<\/p>\n<\/details>\n\n## Cron jobs\n\nkubernetes.io > Documentation > Tasks > Run Jobs > [Running Automated Tasks with a 
CronJob](https:\/\/kubernetes.io\/docs\/tasks\/job\/automated-tasks-with-cron-jobs\/)\n\n### Create a cron job with image busybox that runs on a schedule of \"*\/1 * * * *\" and writes 'date; echo Hello from the Kubernetes cluster' to standard output\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create cronjob busybox --image=busybox --schedule=\"*\/1 * * * *\" -- \/bin\/sh -c 'date; echo Hello from the Kubernetes cluster'\n```\n\n<\/p>\n<\/details>\n\n### See its logs and delete it\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get po # copy the ID of the pod whose container was just created\nkubectl logs <busybox-***> # you will see the date and message\nkubectl delete cj busybox # cj stands for cronjob\n```\n\n<\/p>\n<\/details>\n\n### Create the same cron job again, and watch the status. Once it has run, check which job was created by the cron job. Check the log, and delete the cron job\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get cj\nkubectl get jobs --watch\nkubectl get po --show-labels # observe that the pods have a label that mentions their 'parent' job\nkubectl logs busybox-1529745840-m867r\n# Bear in mind that Kubernetes creates a new job\/pod for each scheduled run of the cron job\nkubectl delete cj busybox\n```\n\n<\/p>\n<\/details>\n\n### Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it takes more than 17 seconds to start execution after its scheduled time (i.e. 
the job missed its scheduled time).\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create cronjob time-limited-job --image=busybox --restart=Never --dry-run=client --schedule=\"* * * * *\" -o yaml -- \/bin\/sh -c 'date; echo Hello from the Kubernetes cluster' > time-limited-job.yaml\nvi time-limited-job.yaml\n```\nAdd cronjob.spec.startingDeadlineSeconds=17\n\n```YAML\napiVersion: batch\/v1\nkind: CronJob\nmetadata:\n  creationTimestamp: null\n  name: time-limited-job\nspec:\n  startingDeadlineSeconds: 17 # add this line\n  jobTemplate:\n    metadata:\n      creationTimestamp: null\n      name: time-limited-job\n    spec:\n      template:\n        metadata:\n          creationTimestamp: null\n        spec:\n          containers:\n          - args:\n            - \/bin\/sh\n            - -c\n            - date; echo Hello from the Kubernetes cluster\n            image: busybox\n            name: time-limited-job\n            resources: {}\n          restartPolicy: Never\n  schedule: '* * * * *'\nstatus: {}\n```\n\n<\/p>\n<\/details>\n\n### Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. 
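Note the contrast between the two deadline fields used in these cron job exercises: `startingDeadlineSeconds` lives on the CronJob spec and bounds how late a scheduled run may start, while `activeDeadlineSeconds` lives on the Job template spec and bounds how long a started run may execute. A minimal sketch of where each field sits:

```yaml
spec:                            # CronJob spec
  startingDeadlineSeconds: 17    # latest allowed start delay after the scheduled time
  jobTemplate:
    spec:                        # Job spec
      activeDeadlineSeconds: 12  # maximum runtime once a run has started
```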
The cron job should be terminated if it successfully starts but takes more than 12 seconds to complete execution.\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create cronjob time-limited-job --image=busybox --restart=Never --dry-run=client --schedule=\"* * * * *\" -o yaml -- \/bin\/sh -c 'date; echo Hello from the Kubernetes cluster' > time-limited-job.yaml\nvi time-limited-job.yaml\n```\nAdd cronjob.spec.jobTemplate.spec.activeDeadlineSeconds=12\n\n```YAML\napiVersion: batch\/v1\nkind: CronJob\nmetadata:\n  creationTimestamp: null\n  name: time-limited-job\nspec:\n  jobTemplate:\n    metadata:\n      creationTimestamp: null\n      name: time-limited-job\n    spec:\n      activeDeadlineSeconds: 12 # add this line\n      template:\n        metadata:\n          creationTimestamp: null\n        spec:\n          containers:\n          - args:\n            - \/bin\/sh\n            - -c\n            - date; echo Hello from the Kubernetes cluster\n            image: busybox\n            name: time-limited-job\n            resources: {}\n          restartPolicy: Never\n  schedule: '* * * * *'\nstatus: {}\n```\n\n<\/p>\n<\/details>\n\n### Create a job from cronjob.\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create job --from=cronjob\/sample-cron-job sample-job\n```\n<\/p>\n<\/details>","site":"ckad excersises"}
with arguments  perl  Mbignum bpi  wle  print bpi 2000      details  summary show  summary   p      bash kubectl create job pi    image perl 5 34    perl  Mbignum bpi  wle  print bpi 2000          p    details       Wait till it s done  get the output   details  summary show  summary   p      bash kubectl get jobs  w   wait till  SUCCESSFUL  is 1  will take some time  perl image might be big  kubectl get po   get the pod name kubectl logs pi        get the pi numbers kubectl delete job pi     OR     bash kubectl get jobs  w   wait till  SUCCESSFUL  is 1  will take some time  perl image might be big  kubectl logs job pi kubectl delete job pi     OR     bash kubectl wait   for condition complete   timeout 300s job pi kubectl logs job pi kubectl delete job pi        p    details       Create a job with the image busybox that executes the command  echo hello sleep 30 echo world    details  summary show  summary   p      bash kubectl create job busybox   image busybox     bin sh  c  echo hello sleep 30 echo world         p    details       Follow the logs for the pod  you ll wait for 30 seconds    details  summary show  summary   p      bash kubectl get po   find the job pod kubectl logs busybox ptx58  f   follow the logs        p    details       See the status of the job  describe it and see the logs   details  summary show  summary   p      bash kubectl get jobs kubectl describe jobs busybox kubectl logs job busybox        p    details       Delete the job   details  summary show  summary   p      bash kubectl delete job busybox        p    details       Create a job but ensure that it will be automatically terminated by kubernetes if it takes more than 30 seconds to execute   details  summary show  summary   p      bash kubectl create job busybox   image busybox   dry run client  o yaml     bin sh  c  while true  do echo hello  sleep 10 done    job yaml vi job yaml      Add job spec activeDeadlineSeconds 30     bash apiVersion  batch v1 kind  Job metadata    
creationTimestamp  null   labels      run  busybox   name  busybox spec    activeDeadlineSeconds  30   add this line   template      metadata        creationTimestamp  null       labels          run  busybox     spec        containers          args             bin sh            c           while true  do echo hello  sleep 10 done         image  busybox         name  busybox         resources           restartPolicy  OnFailure status           p    details       Create the same job  make it run 5 times  one after the other  Verify its status and delete it   details  summary show  summary   p      bash kubectl create job busybox   image busybox   dry run client  o yaml     bin sh  c  echo hello sleep 30 echo world    job yaml vi job yaml      Add job spec completions 5     YAML apiVersion  batch v1 kind  Job metadata    creationTimestamp  null   labels      run  busybox   name  busybox spec    completions  5   add this line   template      metadata        creationTimestamp  null       labels          run  busybox     spec        containers          args             bin sh            c           echo hello sleep 30 echo world         image  busybox         name  busybox         resources           restartPolicy  OnFailure status             bash kubectl create  f job yaml      Verify that it has been completed      bash kubectl get job busybox  w   will take two and a half minutes kubectl delete jobs busybox        p    details       Create the same job  but make it run 5 parallel times   details  summary show  summary   p      bash vi job yaml      Add job spec parallelism 5     YAML apiVersion  batch v1 kind  Job metadata    creationTimestamp  null   labels      run  busybox   name  busybox spec    parallelism  5   add this line   template      metadata        creationTimestamp  null       labels          run  busybox     spec        containers          args             bin sh            c           echo hello sleep 30 echo world         image  busybox         name  
busybox         resources           restartPolicy  OnFailure status             bash kubectl create  f job yaml kubectl get jobs      It will take some time for the parallel jobs to finish     30 seconds      bash kubectl delete job busybox        p    details      Cron jobs  kubernetes io   Documentation   Tasks   Run Jobs    Running Automated Tasks with a CronJob  https   kubernetes io docs tasks job automated tasks with cron jobs        Create a cron job with image busybox that runs on a schedule of    1          and writes  date  echo Hello from the Kubernetes cluster  to standard output   details  summary show  summary   p      bash kubectl create cronjob busybox   image busybox   schedule    1              bin sh  c  date  echo Hello from the Kubernetes cluster         p    details       See its logs and delete it   details  summary show  summary   p      bash kubectl get po   copy the ID of the pod whose container was just created kubectl logs  busybox        you will see the date and message  kubectl delete cj busybox   cj stands for cronjob        p    details       Create the same cron job again  and watch the status  Once it ran  check which job ran by the created cron job  Check the log  and delete the cron job   details  summary show  summary   p      bash kubectl get cj kubectl get jobs   watch kubectl get po   show labels   observe that the pods have a label that mentions their  parent  job kubectl logs busybox 1529745840 m867r   Bear in mind that Kubernetes will run a new job pod for each new cron job kubectl delete cj busybox        p    details       Create a cron job with image busybox that runs every minute and writes  date  echo Hello from the Kubernetes cluster  to standard output  The cron job should be terminated if it takes more than 17 seconds to start execution after its scheduled time  i e  the job missed its scheduled time     details  summary show  summary   p      bash kubectl create cronjob time limited job   image busybox   restart 
Never   dry run client   schedule              o yaml     bin sh  c  date  echo Hello from the Kubernetes cluster    time limited job yaml vi time limited job yaml     Add cronjob spec startingDeadlineSeconds 17     bash apiVersion  batch v1 kind  CronJob metadata    creationTimestamp  null   name  time limited job spec    startingDeadlineSeconds  17   add this line   jobTemplate      metadata        creationTimestamp  null       name  time limited job     spec        template          metadata            creationTimestamp  null         spec            containers              args                 bin sh                c               date  echo Hello from the Kubernetes cluster             image  busybox             name  time limited job             resources               restartPolicy  Never   schedule              status            p    details       Create a cron job with image busybox that runs every minute and writes  date  echo Hello from the Kubernetes cluster  to standard output  The cron job should be terminated if it successfully starts but takes more than 12 seconds to complete execution    details  summary show  summary   p      bash kubectl create cronjob time limited job   image busybox   restart Never   dry run client   schedule              o yaml     bin sh  c  date  echo Hello from the Kubernetes cluster    time limited job yaml vi time limited job yaml     Add cronjob spec jobTemplate spec activeDeadlineSeconds 12     bash apiVersion  batch v1 kind  CronJob metadata    creationTimestamp  null   name  time limited job spec    jobTemplate      metadata        creationTimestamp  null       name  time limited job     spec        activeDeadlineSeconds  12   add this line       template          metadata            creationTimestamp  null         spec            containers              args                 bin sh                c               date  echo Hello from the Kubernetes cluster             image  busybox             name  time limited job   
          resources               restartPolicy  Never   schedule              status            p    details       Create a job from cronjob    details  summary show  summary   p      bash kubectl create job   from cronjob sample cron job sample job       p    details "}
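The two CronJob deadlines above solve different problems: `startingDeadlineSeconds` bounds how late a scheduled run may *start*, while `activeDeadlineSeconds` (on the Job template) bounds how long a started run may *execute*. Nothing stops you from combining them in one manifest — a minimal sketch, assuming the same busybox image and a hypothetical name `time-limited-job` (not one of the exercise answers):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: time-limited-job # hypothetical name for this sketch
spec:
  schedule: '* * * * *'
  startingDeadlineSeconds: 17   # skip a run that cannot start within 17s of its slot
  jobTemplate:
    spec:
      activeDeadlineSeconds: 12 # terminate a started Job after 12s
      template:
        spec:
          containers:
          - name: time-limited-job
            image: busybox
            command: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
          restartPolicy: Never
```

Note the asymmetry in where each field lives: the starting deadline belongs to the CronJob spec itself, the active deadline to the Job spec inside `jobTemplate`.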
{"questions":"ckad excersises Services and Networking 13 kubectl run nginx image nginx restart Never port 80 expose bash Create a pod with image nginx called nginx and expose its port 80 details summary show summary p","answers":"![](https:\/\/gaforgithub.azurewebsites.net\/api?repo=CKAD-exercises\/services&empty)\n# Services and Networking (13%)\n\n### Create a pod with image nginx called nginx and expose its port 80\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run nginx --image=nginx --restart=Never --port=80 --expose\n# observe that a pod as well as a service are created\n```\n\n<\/p>\n<\/details>\n\n\n### Confirm that ClusterIP has been created. Also check endpoints\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get svc nginx # services\nkubectl get ep # endpoints\n```\n\n<\/p>\n<\/details>\n\n### Get service's ClusterIP, create a temp busybox pod and 'hit' that IP with wget\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get svc nginx # get the IP (something like 10.108.93.130)\nkubectl run busybox --rm --image=busybox -it --restart=Never --\nwget -O- [PUT THE SERVICE'S CLUSTER IP HERE]:80\nexit\n```\n\n<\/p>\nor\n<p>\n\n```bash\nIP=$(kubectl get svc nginx --template='{{.spec.clusterIP}}') # get the IP (something like 10.108.93.130)\nkubectl run busybox --rm --image=busybox -it --restart=Never --env=\"IP=$IP\" -- wget -O- $IP:80 --timeout 2\n# Tip: --timeout is optional, but it helps to get an answer more quickly when the connection fails (seconds vs minutes)\n```\n\n<\/p>\n<\/details>\n\n### Convert the ClusterIP to NodePort for the same service and find the NodePort port. Hit service using Node's IP. 
Delete the service and the pod at the end.\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl edit svc nginx\n```\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  creationTimestamp: 2018-06-25T07:55:16Z\n  name: nginx\n  namespace: default\n  resourceVersion: \"93442\"\n  selfLink: \/api\/v1\/namespaces\/default\/services\/nginx\n  uid: 191e3dac-784d-11e8-86b1-00155d9f663c\nspec:\n  clusterIP: 10.97.242.220\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 80\n  selector:\n    run: nginx\n  sessionAffinity: None\n  type: NodePort # change cluster IP to nodeport\nstatus:\n  loadBalancer: {}\n```\n\nor\n\n```bash\nkubectl patch svc nginx -p '{\"spec\":{\"type\":\"NodePort\"}}' \n```\n\n```bash\nkubectl get svc\n```\n\n```\n# result:\nNAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE\nkubernetes   ClusterIP   10.96.0.1        <none>        443\/TCP        1d\nnginx        NodePort    10.107.253.138   <none>        80:31931\/TCP   3m\n```\n\n```bash\nwget -O- NODE_IP:31931 # if you're using Kubernetes with Docker for Windows\/Mac, try 127.0.0.1\n#if you're using minikube, try minikube ip, then get the node ip such as 192.168.99.117\n```\n\n```bash\nkubectl delete svc nginx # Deletes the service\nkubectl delete pod nginx # Deletes the pod\n```\n<\/p>\n<\/details>\n\n### Create a deployment called foo using image 'dgkanatsios\/simpleapp' (a simple server that returns hostname) and 3 replicas. Label it as 'app=foo'. Declare that containers in this pod will accept traffic on port 8080 (do NOT create a service yet)\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create deploy foo --image=dgkanatsios\/simpleapp --port=8080 --replicas=3\nkubectl label deployment foo --overwrite app=foo #This is optional since kubectl create deploy foo will create label app=foo by default\n```\n<\/p>\n<\/details>\n\n### Get the pod IPs. 
Create a temp busybox pod and try hitting them on port 8080\n\n<details><summary>show<\/summary>\n<p>\n\n\n```bash\nkubectl get pods -l app=foo -o wide # 'wide' will show pod IPs\nkubectl run busybox --image=busybox --restart=Never -it --rm -- sh\nwget -O- <POD_IP>:8080 # do not try with pod name, will not work\n# try hitting all IPs generated after running 1st command to confirm that hostname is different\nexit\n# or\nkubectl get po -o wide -l app=foo | awk '{print $6}' | grep -v IP | xargs -L1 -I '{}' kubectl run --rm -ti tmp --restart=Never --image=busybox -- wget -O- http:\/\/\\{\\}:8080\n# or\nkubectl get po -l app=foo -o jsonpath='{range .items[*]}{.status.podIP}{\"\\n\"}{end}' | xargs -L1 -I '{}' kubectl run --rm -ti tmp --restart=Never --image=busybox -- wget -O- http:\/\/\\{\\}:8080\n```\n\n<\/p>\n<\/details>\n\n### Create a service that exposes the deployment on port 6262. Verify its existence, check the endpoints\n\n<details><summary>show<\/summary>\n<p>\n\n\n```bash\nkubectl expose deploy foo --port=6262 --target-port=8080\nkubectl get service foo # you will see ClusterIP as well as port 6262\nkubectl get endpoints foo # you will see the IPs of the three replica pods, listening on port 8080\n```\n\n<\/p>\n<\/details>\n\n### Create a temp busybox pod and connect via wget to foo service. Verify that each time there's a different hostname returned. Delete deployment and services to cleanup the cluster\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get svc # get the foo service ClusterIP\nkubectl run busybox --image=busybox -it --rm --restart=Never -- sh\nwget -O- foo:6262 # DNS works! 
run it many times, you'll see different pods responding\nwget -O- <SERVICE_CLUSTER_IP>:6262 # ClusterIP works as well\n# you can also kubectl logs on deployment pods to see the container logs\nkubectl delete svc foo\nkubectl delete deploy foo\n```\n\n<\/p>\n<\/details>\n\n### Create an nginx deployment of 2 replicas, expose it via a ClusterIP service on port 80. Create a NetworkPolicy so that only pods with labels 'access: granted' can access the deployment and apply it\n\nkubernetes.io > Documentation > Concepts > Services, Load Balancing, and Networking > [Network Policies](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/network-policies\/)\n\n> Note that network policies may not be enforced by default, depending on your k8s implementation. E.g. Azure AKS does not enforce policies by default; the cluster must be created with explicit support for `netpol` https:\/\/docs.microsoft.com\/en-us\/azure\/aks\/use-network-policies#overview-of-network-policy  \n  \n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create deployment nginx --image=nginx --replicas=2\nkubectl expose deployment nginx --port=80\n\nkubectl describe svc nginx # see the 'app=nginx' selector for the pods\n# or\nkubectl get svc nginx -o yaml\n\nvi policy.yaml\n```\n\n```YAML\nkind: NetworkPolicy\napiVersion: networking.k8s.io\/v1\nmetadata:\n  name: access-nginx # pick a name\nspec:\n  podSelector:\n    matchLabels:\n      app: nginx # selector for the pods\n  ingress: # allow ingress traffic\n  - from:\n    - podSelector: # from pods\n        matchLabels: # with this label\n          access: granted\n```\n\n```bash\n# Create the NetworkPolicy\nkubectl create -f policy.yaml\n\n# Check if the Network Policy has been created correctly\n# make sure that your cluster's network provider supports Network Policy (https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/declare-network-policy\/#before-you-begin)\nkubectl run busybox --image=busybox --rm -it --restart=Never -- 
wget -O- http:\/\/nginx:80 --timeout 2                          # This should not work. --timeout is optional here. But it helps to get answer more quickly (in seconds vs minutes)\nkubectl run busybox --image=busybox --rm -it --restart=Never --labels=access=granted -- wget -O- http:\/\/nginx:80 --timeout 2  # This should be fine\n```\n\n<\/p>\n<\/details>","site":"ckad excersises"}
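A caveat worth making explicit: on a cluster whose network plugin enforces policies, the allow-rule above only governs pods it selects; pods not matched by any NetworkPolicy remain open to all traffic. A common companion — a sketch drawn from the standard Kubernetes NetworkPolicy documentation, not part of the original exercise — is a namespace-wide default-deny for ingress:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-ingress # applies to the namespace it is created in
spec:
  podSelector: {}  # empty selector = every pod in the namespace
  policyTypes:
  - Ingress        # Ingress listed with no rules = all inbound traffic denied
```

With this in place, only traffic explicitly allowed by other policies (such as the `access-nginx` policy above) gets through.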
{"questions":"ckad excersises Note The topic is part of the new CKAD syllabus Here are a few examples of using podman to manage the life cycle of container images The use of docker had been the industry standard for many years but now large companies like are moving to a new suite of open source tools podman skopeo and buildah Also Kubernetes has moved in this In particular is meant to be the replacement of the command so it makes sense to get familiar with it although they are quite interchangeable considering that they use the same syntax Podman basics details summary show summary Create a Dockerfile to deploy an Apache HTTP Server which hosts a custom main page p Define build and modify container images","answers":"# Define, build and modify container images\n\n- Note: The topic is part of the new CKAD syllabus. Here are a few examples of using **podman** to manage the life cycle of container images. The use of **docker** had been the industry standard for many years, but now large companies like [Red Hat](https:\/\/www.redhat.com\/en\/blog\/say-hello-buildah-podman-and-skopeo) are moving to a new suite of open source tools: podman, skopeo and buildah. Also Kubernetes has moved in this [direction](https:\/\/kubernetes.io\/blog\/2022\/02\/17\/dockershim-faq\/). 
In particular, `podman` is meant to be the replacement of the `docker` command: so it makes sense to get familiar with it, although they are quite interchangeable considering that they use the same syntax.\n\n## Podman basics\n\n### Create a Dockerfile to deploy an Apache HTTP Server which hosts a custom main page\n\n<details><summary>show<\/summary>\n<p>\n\n```Dockerfile\nFROM docker.io\/httpd:2.4\nRUN echo \"Hello, World!\" > \/usr\/local\/apache2\/htdocs\/index.html\n```\n\n<\/p>\n<\/details>\n\n### Build and see how many layers the image consists of\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\n:~$ podman build -t simpleapp .\nSTEP 1\/2: FROM httpd:2.4\nSTEP 2\/2: RUN echo \"Hello, World!\" > \/usr\/local\/apache2\/htdocs\/index.html\nCOMMIT simpleapp\n--> ef4b14a72d0\nSuccessfully tagged localhost\/simpleapp:latest\nef4b14a72d02ae0577eb0632d084c057777725c279e12ccf5b0c6e4ff5fd598b\n\n:~$ podman images\nREPOSITORY               TAG         IMAGE ID      CREATED        SIZE\nlocalhost\/simpleapp      latest      ef4b14a72d02  8 seconds ago  148 MB\ndocker.io\/library\/httpd  2.4         98f93cd0ec3b  7 days ago     148 MB\n\n:~$ podman image tree localhost\/simpleapp:latest\nImage ID: ef4b14a72d02\nTags:     [localhost\/simpleapp:latest]\nSize:     147.8MB\nImage Layers\n\u251c\u2500\u2500 ID: ad6562704f37 Size:  83.9MB\n\u251c\u2500\u2500 ID: c234616e1912 Size: 3.072kB\n\u251c\u2500\u2500 ID: c23a797b2d04 Size: 2.721MB\n\u251c\u2500\u2500 ID: ede2e092faf0 Size: 61.11MB\n\u251c\u2500\u2500 ID: 971c2cdf3872 Size: 3.584kB Top Layer of: [docker.io\/library\/httpd:2.4]\n\u2514\u2500\u2500 ID: 61644e82ef1f Size: 6.144kB Top Layer of: [localhost\/simpleapp:latest]\n```\n\n<\/p>\n<\/details>\n\n### Run the image locally, inspect its status and logs, finally test that it responds as expected\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\n:~$ podman run -d --name test -p 8080:80 
localhost\/simpleapp\n2f3d7d613ea6ba19703811d30704d4025123c7302ff6fa295affc9bd30e532f8\n\n:~$ podman ps\nCONTAINER ID  IMAGE                       COMMAND           CREATED        STATUS            PORTS                 NAMES\n2f3d7d613ea6  localhost\/simpleapp:latest  httpd-foreground  5 seconds ago  Up 6 seconds ago  0.0.0.0:8080->80\/tcp  test\n\n:~$ podman logs test\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.0.2.100. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.0.2.100. Set the 'ServerName' directive globally to suppress this message\n[Sat Jun 04 16:15:38.071377 2022] [mpm_event:notice] [pid 1:tid 139756978220352] AH00489: Apache\/2.4.53 (Unix) configured -- resuming normal operations\n[Sat Jun 04 16:15:38.073570 2022] [core:notice] [pid 1:tid 139756978220352] AH00094: Command line: 'httpd -D FOREGROUND'\n\n:~$ curl 0.0.0.0:8080\nHello, World!\n```\n\n<\/p>\n<\/details>\n\n### Run a command inside the pod to print out the index.html file\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\n:~$ podman exec -it test cat \/usr\/local\/apache2\/htdocs\/index.html\nHello, World!\n```\n<\/p>\n<\/details>\n\n### Tag the image with ip and port of a private local registry and then push the image to this registry\n\n<details><summary>show<\/summary>\n<p>\n\n> Note: Some small distributions of Kubernetes (such as [microk8s](https:\/\/microk8s.io\/docs\/registry-built-in)) have a built-in registry you can use for this exercise. 
If this is not your case, you'll have to set it up on your own.\n\n```bash\n:~$ podman tag localhost\/simpleapp $registry_ip:5000\/simpleapp\n\n:~$ podman push $registry_ip:5000\/simpleapp\n```\n\n<\/p>\n<\/details>\n\n### Verify that the registry contains the pushed image and that you can pull it\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\n:~$ curl http:\/\/$registry_ip:5000\/v2\/_catalog\n{\"repositories\":[\"simpleapp\"]}\n\n# remove the image already present\n:~$ podman rmi $registry_ip:5000\/simpleapp\n\n:~$ podman pull $registry_ip:5000\/simpleapp\nTrying to pull 10.152.183.13:5000\/simpleapp:latest...\nGetting image source signatures\nCopying blob 643ea8c2c185 skipped: already exists\nCopying blob 972107ece720 skipped: already exists\nCopying blob 9857eeea6120 skipped: already exists\nCopying blob 93859aa62dbd skipped: already exists\nCopying blob 8e47efbf2b7e skipped: already exists\nCopying blob 42e0f5a91e40 skipped: already exists\nCopying config ef4b14a72d done\nWriting manifest to image destination\nStoring signatures\nef4b14a72d02ae0577eb0632d084c057777725c279e12ccf5b0c6e4ff5fd598b\n```\n\n<\/p>\n<\/details>\n\n### Run a pod with the image pushed to the registry\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\n:~$ kubectl run simpleapp --image=$registry_ip:5000\/simpleapp --port=80\n\n:~$ curl $(kubectl get pods simpleapp -o jsonpath='{.status.podIP}')\nHello, World!\n```\n\n<\/p>\n<\/details>\n\n### Log into a remote registry server and then read the credentials from the default file\n\n\n<details><summary>show<\/summary>\n<p>\n\n> Note: The two most used container registry servers with a free plan are [DockerHub](https:\/\/hub.docker.com\/) and [Quay.io](https:\/\/quay.io\/).\n\n```bash\n:~$ podman login --username $YOUR_USER --password $YOUR_PWD docker.io\n:~$ cat ${XDG_RUNTIME_DIR}\/containers\/auth.json\n{\n        \"auths\": {\n                \"docker.io\": {\n                        \"auth\": 
\"Z2l1bGl0JLSGtvbkxCcX1xb617251xh0x3zaUd4QW45Q3JuV3RDOTc=\"\n                }\n        }\n}\n```\n\n<\/p>\n<\/details>\n\n### Create a secret both from existing login credentials and from the CLI\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\n:~$ kubectl create secret generic mysecret --from-file=.dockerconfigjson=${XDG_RUNTIME_DIR}\/containers\/auth.json --type=kubernetes.io\/dockerconfigjson\nsecret\/mysecret created\n:~$ kubectl create secret docker-registry mysecret2 --docker-server=https:\/\/index.docker.io\/v1\/ --docker-username=$YOUR_USR --docker-password=$YOUR_PWD\nsecret\/mysecret2 created\n```\n\n<\/p>\n<\/details>\n\n### Create the manifest for a Pod that uses one of the two secrets just created to pull an image hosted on the corresponding private remote registry\n\n<details><summary>show<\/summary>\n<p>\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: private-reg\nspec:\n  containers:\n  - name: private-reg-container\n    image: $YOUR_PRIVATE_IMAGE\n  imagePullSecrets:\n  - name: mysecret\n```\n\n<\/p>\n<\/details>\n\n### Clean up all the images and containers\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\n:~$ podman rm --all --force\n:~$ podman rmi --all\n:~$ kubectl delete pod simpleapp\n```\n\n<\/p>\n<\/details>","site":"ckad excersises"}
{"questions":"ckad excersises Configuration 18","answers":"![](https:\/\/gaforgithub.azurewebsites.net\/api?repo=CKAD-exercises\/configuration&empty)\n# Configuration (18%)\n\n[ConfigMaps](#configmaps)\n\n[SecurityContext](#securitycontext)\n\n[Requests and Limits](#resource-requests-and-limits)\n\n[Limit Ranges](#limit-ranges)\n\n[Resource Quotas](#resource-quotas)\n\n[Secrets](#secrets)\n\n[Service Accounts](#serviceaccounts)\n\nTip: export common options to shell variables:<br>\nexport ns=\"-n secret-ops\"<br>\nexport do=\"--dry-run=client -oyaml\"\n\n## ConfigMaps\n\nkubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure a Pod to Use a ConfigMap](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-pod-configmap\/)\n\n### Create a configmap named config with values foo=lala,foo2=lolo\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create configmap config --from-literal=foo=lala --from-literal=foo2=lolo\n```\n\n<\/p>\n<\/details>\n\n### Display its values\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get cm config -o yaml\n# or\nkubectl describe cm config\n```\n\n<\/p>\n<\/details>\n\n### Create and display a configmap from a file\n\nCreate the file with\n\n```bash\necho -e \"foo3=lili\\nfoo4=lele\" > config.txt\n```\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create cm configmap2 --from-file=config.txt\nkubectl get cm configmap2 -o yaml\n```\n\n<\/p>\n<\/details>\n\n### Create and display a configmap from a .env file\n\nCreate the file with the command\n\n```bash\necho -e \"var1=val1\\n# this is a comment\\n\\nvar2=val2\\n#anothercomment\" > config.env\n```\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create cm configmap3 --from-env-file=config.env\nkubectl get cm configmap3 -o yaml\n```\n\n<\/p>\n<\/details>\n\n### Create and display a configmap from a file, giving the key 'special'\n\nCreate the file with\n\n```bash\necho -e \"var3=val3\\nvar4=val4\" > config4.txt\n```\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl 
create cm configmap4 --from-file=special=config4.txt\nkubectl describe cm configmap4\nkubectl get cm configmap4 -o yaml\n```\n\n<\/p>\n<\/details>\n\n### Create a configMap called 'options' with the value var5=val5. Create a new nginx pod that loads the value from variable 'var5' in an env variable called 'option'\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create cm options --from-literal=var5=val5\nkubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n    env:\n    - name: option # name of the env variable\n      valueFrom:\n        configMapKeyRef:\n          name: options # name of config map\n          key: var5 # name of the entity in config map\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n```bash\nkubectl create -f pod.yaml\nkubectl exec -it nginx -- env | grep option # will show 'option=val5'\n```\n\n<\/p>\n<\/details>\n\n### Create a configMap 'anotherone' with values 'var6=val6', 'var7=val7'. 
Load this configMap as env variables into a new nginx pod\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create configmap anotherone --from-literal=var6=val6 --from-literal=var7=val7\nkubectl run --restart=Never nginx --image=nginx -o yaml --dry-run=client > pod.yaml\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n    envFrom: # different than previous one, that was 'env'\n    - configMapRef: # different from the previous one, was 'configMapKeyRef'\n        name: anotherone # the name of the config map\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n```bash\nkubectl create -f pod.yaml\nkubectl exec -it nginx -- env \n```\n\n<\/p>\n<\/details>\n\n### Create a configMap 'cmvolume' with values 'var8=val8', 'var9=val9'. Load this as a volume inside an nginx pod on path '\/etc\/lala'. 
Create the pod and 'ls' into the '\/etc\/lala' directory.\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create configmap cmvolume --from-literal=var8=val8 --from-literal=var9=val9\nkubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  volumes: # add a volumes list\n  - name: myvolume # just a name, you'll reference this in the pods\n    configMap:\n      name: cmvolume # name of your configmap\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n    volumeMounts: # your volume mounts are listed here\n    - name: myvolume # the name that you specified in pod.spec.volumes.name\n      mountPath: \/etc\/lala # the path inside your container\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n```bash\nkubectl create -f pod.yaml\nkubectl exec -it nginx -- \/bin\/sh\ncd \/etc\/lala\nls # will show var8 var9\ncat var8 # will show val8\n```\n\n<\/p>\n<\/details>\n\n## SecurityContext\n\nkubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure a Security Context for a Pod or Container](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/)\n\n### Create the YAML for an nginx pod that runs with the user ID 101. 
No need to create the pod\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  securityContext: # insert this line\n    runAsUser: 101 # UID for the user\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n<\/p>\n<\/details>\n\n\n### Create the YAML for an nginx pod that has the capabilities \"NET_ADMIN\", \"SYS_TIME\" added to its single container\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    securityContext: # insert this line\n      capabilities: # and this\n        add: [\"NET_ADMIN\", \"SYS_TIME\"] # this as well\n    resources: {}\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n<\/p>\n<\/details>\n\n## Resource requests and limits\n\nkubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Assign CPU Resources to Containers and Pods](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/assign-cpu-resource\/)\n\n### Create an nginx pod with requests cpu=100m,memory=256Mi and limits cpu=200m,memory=512Mi\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  containers:\n  - image: nginx\n    name: nginx\n    resources:\n      requests:\n 
       memory: \"256Mi\"\n        cpu: \"100m\"\n      limits:\n        memory: \"512Mi\"\n        cpu: \"200m\"\n  dnsPolicy: ClusterFirst\n  restartPolicy: Always\nstatus: {}\n```\n\n<\/p>\n<\/details>\n\n## Limit Ranges\n\nkubernetes.io > Documentation > Concepts > Policies > [Limit Ranges](https:\/\/kubernetes.io\/docs\/concepts\/policy\/limit-range\/)\n\n### Create a namespace named limitrange with a LimitRange that limits pod memory to a max of 500Mi and min of 100Mi\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create ns limitrange\n```\n\nvi 1.yaml\n```YAML\napiVersion: v1\nkind: LimitRange\nmetadata:\n  name: ns-memory-limit\n  namespace: limitrange\nspec:\n  limits:\n  - max: # max and min define the limit range\n      memory: \"500Mi\"\n    min:\n      memory: \"100Mi\"\n    type: Container\n```\n\n```bash\nkubectl apply -f 1.yaml\n```\n<\/p>\n<\/details>\n\n### Describe the LimitRange in the limitrange namespace\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl describe limitrange ns-memory-limit -n limitrange\n```\n<\/p>\n<\/details>\n\n### Create an nginx pod that requests 250Mi of memory in the limitrange namespace\n\n<details><summary>show<\/summary>\n<p>\n\nvi 2.yaml\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\n  namespace: limitrange\nspec:\n  containers:\n  - image: nginx\n    name: nginx\n    resources:\n      requests:\n        memory: \"250Mi\"\n      limits:\n        memory: \"500Mi\" # limit has to be specified and be <= the LimitRange max\n  dnsPolicy: ClusterFirst\n  restartPolicy: Always\nstatus: {}\n```\n\n```bash\nkubectl apply -f 2.yaml\n```\n<\/p>\n<\/details>\n\n## Resource Quotas\n\nkubernetes.io > Documentation > Concepts > Policies > [Resource Quotas](https:\/\/kubernetes.io\/docs\/concepts\/policy\/resource-quotas\/)\n\n### Create ResourceQuota in namespace `one` with hard requests `cpu=1`, `memory=1Gi` and hard limits `cpu=2`, 
`memory=2Gi`.\n\n<details><summary>show<\/summary>\n<p>\n\nCreate the namespace:\n```bash\nkubectl create ns one\n```\n\nCreate the ResourceQuota\n```bash\nvi rq-one.yaml\n```\n\n```YAML\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: my-rq\n  namespace: one\nspec:\n  hard:\n    requests.cpu: \"1\"\n    requests.memory: 1Gi\n    limits.cpu: \"2\"\n    limits.memory: 2Gi\n```\n\n```bash\nkubectl apply -f rq-one.yaml\n```\n\nor\n```bash\nkubectl create quota my-rq --namespace=one --hard=requests.cpu=1,requests.memory=1Gi,limits.cpu=2,limits.memory=2Gi\n```\n<\/p>\n<\/details>\n\n### Attempt to create a pod with resource requests `cpu=2`, `memory=3Gi` and limits `cpu=3`, `memory=4Gi` in namespace `one`\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\n  namespace: one\nspec:\n  containers:\n  - image: nginx\n    name: nginx\n    resources:\n      requests:\n        memory: \"3Gi\"\n        cpu: \"2\"\n      limits:\n        memory: \"4Gi\"\n        cpu: \"3\"\n  dnsPolicy: ClusterFirst\n  restartPolicy: Always\nstatus: {}\n```\n\n```bash\nkubectl create -f pod.yaml\n```\n\nExpected error message:\n```bash\nError from server (Forbidden): error when creating \"pod.yaml\": pods \"nginx\" is forbidden: exceeded quota: my-rq, requested: limits.cpu=3,limits.memory=4Gi,requests.cpu=2,requests.memory=3Gi, used: limits.cpu=0,limits.memory=0,requests.cpu=0,requests.memory=0, limited: limits.cpu=2,limits.memory=2Gi,requests.cpu=1,requests.memory=1Gi\n```\n<\/p>\n<\/details>\n\n### Create a pod with resource requests `cpu=0.5`, `memory=1Gi` and limits `cpu=1`, `memory=2Gi` in namespace `one`\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nvi pod2.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\n  namespace: one\nspec:\n  containers:\n  - image: 
 nginx\n    name: nginx\n    resources:\n      requests:\n        memory: \"1Gi\"\n        cpu: \"0.5\"\n      limits:\n        memory: \"2Gi\"\n        cpu: \"1\"\n  dnsPolicy: ClusterFirst\n  restartPolicy: Always\nstatus: {}\n```\n\n```bash\nkubectl create -f pod2.yaml\n```\n\nShow the ResourceQuota usage in namespace `one`:\n```bash\nkubectl get resourcequota -n one\n```\n\n```\nNAME    AGE   REQUEST                                          LIMIT\nmy-rq   10m   requests.cpu: 500m\/1, requests.memory: 1Gi\/1Gi   limits.cpu: 1\/2, limits.memory: 2Gi\/2Gi\n```\n<\/p>\n<\/details>\n\n## Secrets\n\nkubernetes.io > Documentation > Concepts > Configuration > [Secrets](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/)\n\nkubernetes.io > Documentation > Tasks > Inject Data Into Applications > [Distribute Credentials Securely Using Secrets](https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/distribute-credentials-secure\/)\n\n### Create a secret called mysecret with the value password=mypass\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create secret generic mysecret --from-literal=password=mypass\n```\n\n<\/p>\n<\/details>\n\n### Create a secret called mysecret2 that gets key\/value from a file\n\nCreate a file called username with the value admin:\n\n```bash\necho -n admin > username\n```\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create secret generic mysecret2 --from-file=username\n```\n\n<\/p>\n<\/details>\n\n### Get the value of mysecret2\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get secret mysecret2 -o yaml\necho -n YWRtaW4= | base64 -d # on MAC it is -D, which decodes the value and shows 'admin'\n```\n\nAlternative using `-o jsonpath`:\n\n```bash\nkubectl get secret mysecret2 -o jsonpath='{.data.username}' | base64 -d  # on MAC it is -D\n```\n\nAlternative using `--template`:\n\n```bash\nkubectl get secret mysecret2 --template '{{.data.username}}' | base64 -d  # on MAC it is -D\n```\n\nAlternative 
using `jq`:\n\n```bash\nkubectl get secret mysecret2 -o json | jq -r .data.username | base64 -d  # on MAC it is -D\n```\n\n<\/p>\n<\/details>\n\n### Create an nginx pod that mounts the secret mysecret2 in a volume on path \/etc\/foo\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  volumes: # specify the volumes\n  - name: foo # this name will be used for reference inside the container\n    secret: # we want a secret\n      secretName: mysecret2 # name of the secret - this must already exist on pod creation\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n    volumeMounts: # our volume mounts\n    - name: foo # name on pod.spec.volumes\n      mountPath: \/etc\/foo #our mount path\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n```bash\nkubectl create -f pod.yaml\nkubectl exec -it nginx -- \/bin\/bash\nls \/etc\/foo  # shows username\ncat \/etc\/foo\/username # shows admin\n```\n\n<\/p>\n<\/details>\n\n### Delete the pod you just created and mount the variable 'username' from secret mysecret2 onto a new nginx pod in env variable called 'USERNAME'\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl delete po nginx\nkubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n    env: # our env variables\n    - name: USERNAME # asked name\n      valueFrom:\n        secretKeyRef: # secret reference\n          name: mysecret2 # our secret's name\n          key: username # the key of the 
data in the secret\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n```bash\nkubectl create -f pod.yaml\nkubectl exec -it nginx -- env | grep USERNAME | cut -d '=' -f 2 # will show 'admin'\n```\n\n<\/p>\n<\/details>\n\n### Create a Secret named 'ext-service-secret' in the namespace 'secret-ops'. Then, provide the key-value pair API_KEY=LmLHbYhsgWZwNifiqaRorH8T as literal.\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nexport ns=\"-n secret-ops\"\nexport do=\"--dry-run=client -oyaml\"\nk create secret generic ext-service-secret --from-literal=API_KEY=LmLHbYhsgWZwNifiqaRorH8T $ns $do > sc.yaml\nk apply -f sc.yaml\n```\n\n<\/p>\n<\/details>\n\n### Consuming the Secret. Create a Pod named 'consumer' with the image 'nginx' in the namespace 'secret-ops' and consume the Secret as an environment variable. Then, open an interactive shell to the Pod, and print all environment variables.\n<details><summary>show<\/summary>\n<p>\n\n```bash\nexport ns=\"-n secret-ops\"\nexport do=\"--dry-run=client -oyaml\"\nk run consumer --image=nginx $ns $do > nginx.yaml\nvi nginx.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: consumer\n  name: consumer\n  namespace: secret-ops\nspec:\n  containers:\n  - image: nginx\n    name: consumer\n    resources: {}\n    env:\n    - name: API_KEY\n      valueFrom:\n        secretKeyRef:\n          name: ext-service-secret\n          key: API_KEY\n  dnsPolicy: ClusterFirst\n  restartPolicy: Always\nstatus: {}\n```\n\n```bash\nk exec -it $ns consumer -- \/bin\/sh\n#env\n```\n<\/p>\n<\/details>\n\n### Create a Secret named 'my-secret' of type 'kubernetes.io\/ssh-auth' in the namespace 'secret-ops'. 
Define a single key named 'ssh-privatekey', and point it to the file 'id_rsa' in this directory.\n<details><summary>show<\/summary>\n<p>\n\n```bash\n# Tip: export common options to variables\nexport do=\"--dry-run=client -oyaml\"\nexport ns=\"-n secret-ops\"\n\n# generate one if the id_rsa file doesn't exist\nssh-keygen\n\nk create secret generic my-secret $ns --type=\"kubernetes.io\/ssh-auth\" --from-file=ssh-privatekey=id_rsa $do > sc.yaml\nk apply -f sc.yaml\n```\n<\/p>\n<\/details>\n\n### Create a Pod named 'consumer' with the image 'nginx' in the namespace 'secret-ops', and consume the Secret as a Volume. Mount the Secret as a Volume at the path \/var\/app with read-only access. Open an interactive shell to the Pod, and render the contents of the file.\n<details><summary>show<\/summary>\n<p>\n\n```bash\n# Tip: export common options to variables\nexport ns=\"-n secret-ops\"\nexport do=\"--dry-run=client -oyaml\"\nk run consumer --image=nginx $ns $do > nginx.yaml\nvi nginx.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: consumer\n  name: consumer\n  namespace: secret-ops\nspec:\n  containers:\n    - image: nginx\n      name: consumer\n      resources: {}\n      volumeMounts:\n        - name: foo\n          mountPath: \"\/var\/app\"\n          readOnly: true\n  volumes:\n    - name: foo\n      secret:\n        secretName: my-secret\n        optional: true\n  dnsPolicy: ClusterFirst\n  restartPolicy: Always\nstatus: {}\n```\n\n```bash\nk exec -it $ns consumer -- \/bin\/sh\n# cat \/var\/app\/ssh-privatekey\n# exit\n```\n<\/p>\n<\/details>\n\n## ServiceAccounts\n\nkubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure Service Accounts for Pods](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/)\n\n### See all the service accounts of the cluster in all namespaces\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get sa --all-namespaces\n```\nAlternatively:\n\n```bash\nkubectl get sa 
-A\n```\n\n<\/p>\n<\/details>\n\n### Create a new serviceaccount called 'myuser'\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create sa myuser\n```\n\nAlternatively:\n\n```bash\n# let's get a template easily\nkubectl get sa default -o yaml > sa.yaml\nvim sa.yaml\n```\n\n```YAML\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: myuser\n```\n\n```bash\nkubectl create -f sa.yaml\n```\n\n<\/p>\n<\/details>\n\n### Create an nginx pod that uses 'myuser' as a service account\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  serviceAccountName: myuser # we use pod.spec.serviceAccountName\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\nor\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  serviceAccount: myuser # we use pod.spec.serviceAccount\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n```bash\nkubectl create -f pod.yaml\nkubectl describe pod nginx # will see that a new secret called myuser-token-***** has been mounted\n```\n\n<\/p>\n<\/details>\n\n### Generate an API token for the service account 'myuser'\n\n<details><summary>show<\/summary>\n<p>\n  \n```bash\nkubectl create token myuser\n```\n\n<\/p>\n<\/details>","site":"ckad excersises","answers_cleaned":"    https   gaforgithub azurewebsites net api repo CKAD exercises configuration empty    Configuration  18     ConfigMaps   configmaps    SecurityContext   securitycontext    Requests and Limits   requests and limits    Secrets   
secrets    Service Accounts   serviceaccounts    br  Tips  export to variable br   br export ns   n secret ops   br   br export do    dry run client  oyaml   br     ConfigMaps  kubernetes io   Documentation   Tasks   Configure Pods and Containers    Configure a Pod to Use a ConfigMap  https   kubernetes io docs tasks configure pod container configure pod configmap        Create a configmap named config with values foo lala foo2 lolo   details  summary show  summary   p      bash kubectl create configmap config   from literal foo lala   from literal foo2 lolo        p    details       Display its values   details  summary show  summary   p      bash kubectl get cm config  o yaml   or kubectl describe cm config        p    details       Create and display a configmap from a file  Create the file with     bash echo  e  foo3 lili nfoo4 lele    config txt       details  summary show  summary   p      bash kubectl create cm configmap2   from file config txt kubectl get cm configmap2  o yaml        p    details       Create and display a configmap from a  env file  Create the file with the command     bash echo  e  var1 val1 n  this is a comment n nvar2 val2 n anothercomment    config env       details  summary show  summary   p      bash kubectl create cm configmap3   from env file config env kubectl get cm configmap3  o yaml        p    details       Create and display a configmap from a file  giving the key  special   Create the file with     bash echo  e  var3 val3 nvar4 val4    config4 txt       details  summary show  summary   p      bash kubectl create cm configmap4   from file special config4 txt kubectl describe cm configmap4 kubectl get cm configmap4  o yaml        p    details       Create a configMap called  options  with the value var5 val5  Create a new nginx pod that loads the value from variable  var5  in an env variable called  option    details  summary show  summary   p      bash kubectl create cm options   from literal var5 val5 kubectl run nginx   
--image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    env:
    - name: option # name of the env variable
      valueFrom:
        configMapKeyRef:
          name: options # name of config map
          key: var5 # name of the entity in config map
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

```bash
kubectl create -f pod.yaml
kubectl exec -it nginx -- env | grep option # will show 'option=val5'
```

</p>
</details>

### Create a configMap 'anotherone' with values 'var6=val6', 'var7=val7'. Load this configMap as env variables into a new nginx pod

<details><summary>show</summary>
<p>

```bash
kubectl create configmap anotherone --from-literal=var6=val6 --from-literal=var7=val7
kubectl run --restart=Never nginx --image=nginx -o yaml --dry-run=client > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    envFrom: # different than previous one, that was 'env'
    - configMapRef: # different from the previous one, was 'configMapKeyRef'
        name: anotherone # the name of the config map
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

```bash
kubectl create -f pod.yaml
kubectl exec -it nginx -- env
```

</p>
</details>

### Create a configMap 'cmvolume' with values 'var8=val8', 'var9=val9'. Load this as a volume inside an nginx pod on path '/etc/lala'. Create the pod and 'ls' into the '/etc/lala' directory

<details><summary>show</summary>
<p>

```bash
kubectl create configmap cmvolume --from-literal=var8=val8 --from-literal=var9=val9
kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  volumes: # add a volumes list
  - name: myvolume # just a name, you'll reference this in the pods
    configMap:
      name: cmvolume # name of your configmap
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    volumeMounts: # your volume mounts are listed here
    - name: myvolume # the name that you specified in pod.spec.volumes.name
      mountPath: /etc/lala # the path inside your container
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

```bash
kubectl create -f pod.yaml
kubectl exec -it nginx -- /bin/sh
cd /etc/lala
ls # will show var8 var9
cat var8 # will show val8
```

</p>
</details>

## SecurityContext

kubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure a Security Context for a Pod or Container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)

### Create the YAML for an nginx pod that runs with the user ID 101. No need to create the pod

<details><summary>show</summary>
<p>

```bash
kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  securityContext: # insert this line
    runAsUser: 101 # UID for the user
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

</p>
</details>

### Create the YAML for an nginx pod that has the capabilities "NET_ADMIN", "SYS_TIME" added to its single container

<details><summary>show</summary>
<p>

```bash
kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    securityContext: # insert this line
      capabilities: # and this
        add: ["NET_ADMIN", "SYS_TIME"] # this as well
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

</p>
</details>

## Resource requests and limits

kubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Assign CPU Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/)

### Create an nginx pod with requests cpu=100m, memory=256Mi and limits cpu=200m, memory=512Mi

<details><summary>show</summary>
<p>

```bash
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
      limits:
        memory: "512Mi"
        cpu: "200m"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

</p>
</details>

## Limit Ranges

kubernetes.io > Documentation > Concepts > Policies > [Limit Ranges](https://kubernetes.io/docs/concepts/policy/limit-range/)

### Create a namespace named limitrange with a LimitRange that limits pod memory to a max of 500Mi and min of 100Mi

<details><summary>show</summary>
<p>

```bash
kubectl create ns limitrange
vi 1.yaml
```

```YAML
apiVersion: v1
kind: LimitRange
metadata:
  name: ns-memory-limit
  namespace: limitrange
spec:
  limits:
  - max: # max and min define the limit range
      memory: "500Mi"
    min:
      memory: "100Mi"
    type: Container
```

```bash
kubectl apply -f 1.yaml
```

</p>
</details>

### Describe the namespace limitrange

<details><summary>show</summary>
<p>

```bash
kubectl describe limitrange ns-memory-limit -n limitrange
```

</p>
</details>

### Create an nginx pod that requests 250Mi of memory in the limitrange namespace

<details><summary>show</summary>
<p>

```bash
vi 2.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
  namespace: limitrange
spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        memory: "250Mi"
      limits:
        memory: "500Mi" # limit has to be specified and be <= the limitrange max
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```bash
kubectl apply -f 2.yaml
```

</p>
</details>

## Resource Quotas

kubernetes.io > Documentation > Concepts > Policies > [Resource Quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/)

### Create a ResourceQuota in namespace 'one' with hard requests cpu=1, memory=1Gi and hard limits cpu=2, memory=2Gi

<details><summary>show</summary>
<p>

Create the namespace:

```bash
kubectl create ns one
```

Create the ResourceQuota:

```bash
vi rq-one.yaml
```

```YAML
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-rq
  namespace: one
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```

```bash
kubectl apply -f rq-one.yaml
```

or

```bash
kubectl create quota my-rq --namespace=one --hard=requests.cpu=1,requests.memory=1Gi,limits.cpu=2,limits.memory=2Gi
```

</p>
</details>

### Attempt to create a pod with resource requests cpu=2, memory=3Gi and limits cpu=3, memory=4Gi in namespace 'one'

<details><summary>show</summary>
<p>

```bash
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
  namespace: one
spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        memory: "3Gi"
        cpu: "2"
      limits:
        memory: "4Gi"
        cpu: "3"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```bash
kubectl create -f pod.yaml
```

Expected error message:

```bash
Error from server (Forbidden): error when creating "pod.yaml": pods "nginx" is forbidden: exceeded quota: my-rq, requested: limits.cpu=3,limits.memory=4Gi,requests.cpu=2,requests.memory=3Gi, used: limits.cpu=0,limits.memory=0,requests.cpu=0,requests.memory=0, limited: limits.cpu=2,limits.memory=2Gi,requests.cpu=1,requests.memory=1Gi
```

</p>
</details>

### Create a pod with resource requests cpu=0.5, memory=1Gi and limits cpu=1, memory=2Gi in namespace 'one'

<details><summary>show</summary>
<p>

```bash
vi pod2.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
  namespace: one
spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        memory: "1Gi"
        cpu: "0.5"
      limits:
        memory: "2Gi"
        cpu: "1"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```bash
kubectl create -f pod2.yaml
```

Show the ResourceQuota usage in namespace 'one':

```bash
kubectl get resourcequota -n one
```

```
NAME    AGE   REQUEST                                          LIMIT
my-rq   10m   requests.cpu: 500m/1, requests.memory: 1Gi/1Gi   limits.cpu: 1/2, limits.memory: 2Gi/2Gi
```

</p>
</details>

## Secrets

kubernetes.io > Documentation > Concepts > Configuration > [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/)

kubernetes.io > Documentation > Tasks > Inject Data Into Applications > [Distribute Credentials Securely Using Secrets](https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/)

### Create a secret called mysecret with the values password=mypass

<details><summary>show</summary>
<p>

```bash
kubectl create secret generic mysecret --from-literal=password=mypass
```

</p>
</details>

### Create a secret called mysecret2 that gets key/value from a file

Create a file called username with the value admin:

```bash
echo -n admin > username
```

<details><summary>show</summary>
<p>

```bash
kubectl create secret generic mysecret2 --from-file=username
```

</p>
</details>

### Get the value of mysecret2

<details><summary>show</summary>
<p>

```bash
kubectl get secret mysecret2 -o yaml
echo -n YWRtaW4= | base64 -d # on MAC it is -D, which decodes the value and shows 'admin'
```

Alternative using `--jsonpath`:

```bash
kubectl get secret mysecret2 -o jsonpath='{.data.username}' | base64 -d # on MAC it is -D
```

Alternative using `--template`:

```bash
kubectl get secret mysecret2 --template '{{.data.username}}' | base64 -d # on MAC it is -D
```

Alternative using `jq`:

```bash
kubectl get secret mysecret2 -o json | jq -r .data.username | base64 -d # on MAC it is -D
```

</p>
</details>

### Create an nginx pod that mounts the secret mysecret2 in a volume on path /etc/foo

<details><summary>show</summary>
<p>

```bash
kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  volumes: # specify the volumes
  - name: foo # this name will be used for reference inside the container
    secret: # we want a secret
      secretName: mysecret2 # name of the secret - this must already exist on pod creation
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    volumeMounts: # our volume mounts
    - name: foo # name on pod.spec.volumes
      mountPath: /etc/foo # our mount path
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

```bash
kubectl create -f pod.yaml
kubectl exec -it nginx -- /bin/bash
ls /etc/foo # shows username
cat /etc/foo/username # shows admin
```

</p>
</details>

### Delete the pod you just created and mount the variable 'username' from secret mysecret2 onto a new nginx pod in env variable called 'USERNAME'

<details><summary>show</summary>
<p>

```bash
kubectl delete po nginx
kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    env: # our env variables
    - name: USERNAME # asked name
      valueFrom:
        secretKeyRef: # secret reference
          name: mysecret2 # our secret's name
          key: username # the key of the data in the secret
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

```bash
kubectl create -f pod.yaml
kubectl exec -it nginx -- env | grep USERNAME | cut -d '=' -f 2 # will show 'admin'
```

</p>
</details>

### Create a Secret named 'ext-service-secret' in the namespace 'secret-ops'. Then, provide the key-value pair API_KEY=LmLHbYhsgWZwNifiqaRorH8T as literal

<details><summary>show</summary>
<p>

```bash
export ns="-n secret-ops"
export do="--dry-run=client -oyaml"
k create secret generic ext-service-secret --from-literal=API_KEY=LmLHbYhsgWZwNifiqaRorH8T $ns $do > sc.yaml
k apply -f sc.yaml
```

</p>
</details>

### Consuming the Secret. Create a Pod named 'consumer' with the image 'nginx' in the namespace 'secret-ops', and consume the Secret as an environment variable. Then, open an interactive shell to the Pod, and print all environment variables

<details><summary>show</summary>
<p>

```bash
export ns="-n secret-ops"
export do="--dry-run=client -oyaml"
k run consumer --image=nginx $ns $do > nginx.yaml
vi nginx.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: consumer
  name: consumer
  namespace: secret-ops
spec:
  containers:
  - image: nginx
    name: consumer
    resources: {}
    env:
    - name: API_KEY
      valueFrom:
        secretKeyRef:
          name: ext-service-secret
          key: API_KEY
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```bash
k exec -it $ns consumer -- /bin/sh
env
```

</p>
</details>
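Secret values like the ones above are only base64-encoded, not encrypted, which is why piping through `base64 -d` recovers them. A minimal plain-shell sketch of the round trip (no cluster or kubectl needed; `mypass` is the literal used in the mysecret exercise):

```shell
# Encode a value the way Kubernetes stores it under .data in a Secret
encoded=$(printf '%s' mypass | base64)
echo "$encoded"                      # bXlwYXNz

# Decode it back (on older macOS the flag is -D instead of -d)
printf '%s' "$encoded" | base64 -d   # mypass
```

Note the `-n`/`printf '%s'` discipline: a stray trailing newline would be encoded into the stored value.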
### Create a Secret named 'my-secret' of type 'kubernetes.io/ssh-auth' in the namespace 'secret-ops'. Define a single key named 'ssh-privatekey', and point it to the file 'id_rsa' in this directory

<details><summary>show</summary>
<p>

```bash
# Tips: export to variable
export do="--dry-run=client -oyaml"
export ns="-n secret-ops"
# if the id_rsa file didn't exist: ssh-keygen
k create secret generic my-secret $ns --type="kubernetes.io/ssh-auth" --from-file=ssh-privatekey=id_rsa $do > sc.yaml
k apply -f sc.yaml
```

</p>
</details>

### Create a Pod named 'consumer' with the image 'nginx' in the namespace 'secret-ops', and consume the Secret as Volume. Mount the Secret as Volume to the path '/var/app' with read-only access. Open an interactive shell to the Pod, and render the contents of the file

<details><summary>show</summary>
<p>

```bash
# Tips: export to variable
export ns="-n secret-ops"
export do="--dry-run=client -oyaml"
k run consumer --image=nginx $ns $do > nginx.yaml
vi nginx.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: consumer
  name: consumer
  namespace: secret-ops
spec:
  containers:
  - image: nginx
    name: consumer
    resources: {}
    volumeMounts:
    - name: foo
      mountPath: /var/app
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: my-secret
      optional: true
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

```bash
k exec -it $ns consumer -- /bin/sh
cat /var/app/ssh-privatekey
exit
```

</p>
</details>

## ServiceAccounts

kubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)

### See all the service accounts of the cluster in all namespaces

<details><summary>show</summary>
<p>

```bash
kubectl get sa --all-namespaces
```

Alternatively:

```bash
kubectl get sa -A
```

</p>
</details>

### Create a new serviceaccount called 'myuser'

<details><summary>show</summary>
<p>

```bash
kubectl create sa myuser
```

Alternatively:

```bash
# let's get a template easily
kubectl get sa default -o yaml > sa.yaml
vim sa.yaml
```

```YAML
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myuser
```

```bash
kubectl create -f sa.yaml
```

</p>
</details>

### Create an nginx pod that uses 'myuser' as a service account

<details><summary>show</summary>
<p>

```bash
kubectl run nginx --image=nginx --restart=Never -o yaml --dry-run=client > pod.yaml
vi pod.yaml
```

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  serviceAccountName: myuser # we use pod.spec.serviceAccountName
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

or

```YAML
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  serviceAccount: myuser # we use pod.spec.serviceAccount
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
```

```bash
kubectl create -f pod.yaml
kubectl describe pod nginx # will see that a new secret called myuser-token-... has been mounted
```

</p>
</details>

### Generate an API token for the service account 'myuser'

<details><summary>show</summary>
<p>

```bash
kubectl create token myuser
```

</p>
</details>
"}
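A token produced by `kubectl create token` is a JWT: three base64url-encoded parts separated by dots (`header.payload.signature`). A hedged sketch of inspecting the payload with nothing but shell tools — the `TOKEN` value here is a toy stand-in, not a real service account token:

```shell
# Toy stand-in for the output of `kubectl create token myuser`
TOKEN='aGVhZGVy.eyJzdWIiOiJzYSJ9.c2ln'

# Take the 2nd dot-separated field and map base64url chars to base64
payload=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# base64url drops '=' padding; restore it before decoding
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 -d   # {"sub":"sa"}
```

On a real token the decoded payload shows the audience, expiry, and the bound service account.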
{"questions":"ckad excersises Observability 18 details summary show summary Create an nginx pod with a liveness probe that just runs the command ls Save its YAML in pod yaml Run it check its probe status delete it Liveness readiness and startup probes kubernetes io Documentation Tasks Configure Pods and Containers","answers":"![](https:\/\/gaforgithub.azurewebsites.net\/api?repo=CKAD-exercises\/observability&empty)\n# Observability (18%)\n\n## Liveness, readiness and startup probes\n\nkubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure Liveness, Readiness and Startup Probes](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-liveness-readiness-startup-probes\/)\n\n### Create an nginx pod with a liveness probe that just runs the command 'ls'. Save its YAML in pod.yaml. Run it, check its probe status, delete it.\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n    livenessProbe: # our probe\n      exec: # add this line\n        command: # command definition\n        - ls # ls command\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n```bash\nkubectl create -f pod.yaml\nkubectl describe pod nginx | grep -i liveness # run this to see that liveness probe works\nkubectl delete -f pod.yaml\n```\n\n<\/p>\n<\/details>\n\n### Modify the pod.yaml file so that liveness probe starts kicking in after 5 seconds whereas the interval between probes would be 5 seconds. 
Run it, check the probe, delete it.\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl explain pod.spec.containers.livenessProbe # get the exact names\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n    livenessProbe:\n      initialDelaySeconds: 5 # add this line\n      periodSeconds: 5 # add this line as well\n      exec:\n        command:\n        - ls\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n```bash\nkubectl create -f pod.yaml\nkubectl describe po nginx | grep -i liveness\nkubectl delete -f pod.yaml\n```\n\n<\/p>\n<\/details>\n\n### Create an nginx pod (that includes port 80) with an HTTP readinessProbe on path '\/' on port 80. Again, run it, check the readinessProbe, delete it.\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run nginx --image=nginx --dry-run=client -o yaml --restart=Never --port=80 > pod.yaml\nvi pod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\nspec:\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n    ports:\n    - containerPort: 80 # Note: Readiness probes run on the container during its whole lifecycle. Since nginx exposes 80, containerPort: 80 is not required for readiness to work.\n    readinessProbe: # declare the readiness probe\n      httpGet: # add this line\n        path: \/ #\n        port: 80 #\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n```bash\nkubectl create -f pod.yaml\nkubectl describe pod nginx | grep -i readiness # to see the pod readiness details\nkubectl delete -f pod.yaml\n```\n\n<\/p>\n<\/details>\n\n### Lots of pods are running in `qa`,`alan`,`test`,`production` namespaces.  
All of these pods are configured with a liveness probe.  Please list all pods whose liveness probes have failed, in the format of `<namespace>\/<pod name>` per line.\n\n<details><summary>show<\/summary>\n<p>\n\nA typical liveness probe failure event\n```\nLAST SEEN   TYPE      REASON      OBJECT              MESSAGE\n22m         Warning   Unhealthy   pod\/liveness-exec   Liveness probe failed: cat: can't open '\/tmp\/healthy': No such file or directory\n```\n\nCollect failed pods, namespace by namespace:\n\n```sh\nkubectl get events -o json | jq -r '.items[] | select(.message | contains(\"Liveness probe failed\")).involvedObject | .namespace + \"\/\" + .name'\n```\n\n<\/p>\n<\/details>\n\n## Logging\n\n### Create a busybox pod that runs `i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done`. Check its logs\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run busybox --image=busybox --restart=Never -- \/bin\/sh -c 'i=0; while true; do echo \"$i: $(date)\"; i=$((i+1)); sleep 1; done'\nkubectl logs busybox -f # follow the logs\n```\n\n<\/p>\n<\/details>\n\n## Debugging\n\n### Create a busybox pod that runs 'ls \/notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run busybox --restart=Never --image=busybox -- \/bin\/sh -c 'ls \/notexist'\n# show that there's an error\nkubectl logs busybox\nkubectl describe po busybox\nkubectl delete po busybox\n```\n\n<\/p>\n<\/details>\n\n### Create a busybox pod that runs 'notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod forcefully with a 0 grace period\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run busybox --restart=Never --image=busybox -- notexist\nkubectl logs busybox # will bring nothing! 
container never started\nkubectl describe po busybox # in the events section, you'll see the error\n# also...\nkubectl get events | grep -i error # you'll see the error here as well\nkubectl delete po busybox --force --grace-period=0\n```\n\n<\/p>\n<\/details>\n\n\n### Get CPU\/memory utilization for nodes ([metrics-server](https:\/\/github.com\/kubernetes-incubator\/metrics-server) must be running)\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl top nodes\n```\n\n<\/p>\n<\/details>","site":"ckad excersises"}
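If `jq` is unavailable, the `<namespace>/<pod name>` list from the liveness-probe exercise above can also be scraped from the tabular `kubectl get events -A` output with awk. A sketch against canned lines (the column order assumed here — namespace first, `pod/<name>` fifth — should be checked against your kubectl version):

```shell
# Canned lines mimicking `kubectl get events -A` output (assumed layout)
events='qa 22m Warning Unhealthy pod/liveness-exec Liveness probe failed: ...
test 10m Normal Pulled pod/web Successfully pulled image'

printf '%s\n' "$events" |
  awk '$4 == "Unhealthy" && /Liveness probe failed/ {
         split($5, obj, "/"); print $1 "/" obj[2]
       }'
# prints: qa/liveness-exec
```

The jq variant is more robust since it matches on structured fields rather than column positions.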
{"questions":"ckad excersises Helm in K8s Creating a basic Helm chart Note Helm is part of the new CKAD syllabus Here are a few examples of using Helm to manage Kubernetes details summary show summary p Managing Kubernetes with Helm","answers":"# Managing Kubernetes with Helm\n\n- Note: Helm is part of the new CKAD syllabus. Here are a few examples of using Helm to manage Kubernetes.\n\n## Helm in K8s\n\n### Creating a basic Helm chart\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nhelm create chart-test ## this would create a Helm chart\n```\n\n<\/p>\n<\/details>\n\n### Running a Helm chart\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nhelm install -f myvalues.yaml myredis .\/redis\n```\n\n<\/p>\n<\/details>\n\n### Find pending Helm deployments on all namespaces\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nhelm list --pending -A\n```\n\n<\/p>\n<\/details>\n\n### Uninstall a Helm release\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nhelm uninstall -n namespace release_name\n```\n\n<\/p>\n<\/details>\n\n### Upgrading a Helm chart\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nhelm upgrade -f myvalues.yaml -f override.yaml redis .\/redis\n```\n\n<\/p>\n<\/details>\n\n### Using Helm repo\n\n<details><summary>show<\/summary>\n<p>\n\nAdd, list, remove, update and index chart repos\n\n```bash\nhelm repo add [NAME] [URL]  [flags]\n\nhelm repo list \/ helm repo ls\n\nhelm repo remove [REPO1] [flags]\n\nhelm repo update \/ helm repo up\n\nhelm repo update [REPO1] [flags]\n\nhelm repo index [DIR] [flags]\n```\n\n<\/p>\n<\/details>\n\n### Download a Helm chart from a repository \n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nhelm pull [chart URL | repo\/chartname] [...] 
[flags] ## this would download a Helm chart, not install it\nhelm pull --untar [repo\/chartname] # untar the chart after downloading it\n```\n\n<\/p>\n<\/details>\n\n### Add the Bitnami repo at https:\/\/charts.bitnami.com\/bitnami to Helm\n<details><summary>show<\/summary>\n<p>\n    \n```bash\nhelm repo add bitnami https:\/\/charts.bitnami.com\/bitnami\n```\n  \n<\/p>\n<\/details>\n\n### Write the contents of the values.yaml file of the `bitnami\/node` chart to standard output\n<details><summary>show<\/summary>\n<p>\n    \n```bash\nhelm show values bitnami\/node\n```\n  \n<\/p>\n<\/details>\n\n### Install the `bitnami\/node` chart setting the number of replicas to 5\n<details><summary>show<\/summary>\n<p>\n\nTo achieve this, we need two key pieces of information:\n- The name of the attribute in values.yaml which controls replica count\n- A simple way to set the value of this attribute during installation\n\nTo identify the name of the attribute in the values.yaml file, we could get all the values, as in the previous task, and then grep to find attributes matching the pattern `replica`\n```bash\nhelm show values bitnami\/node | grep -i replica\n```\nwhich returns\n```bash\n## @param replicaCount Specify the number of replicas for the application\nreplicaCount: 1\n```\n \nWe can use the `--set` argument during installation to override attribute values. 
Hence, to set the replica count to 5, we need to run\n```bash\nhelm install mynode bitnami\/node --set replicaCount=5\n```\n\n<\/p>\n<\/details>\n\n","site":"ckad excersises"}
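Trying the `--set` override above needs a live cluster and the Bitnami repo. As a minimal offline sketch (the file names and the sed-based override are illustrative only, not what Helm does internally), this mimics how `--set replicaCount=5` takes precedence over the chart's default in `values.yaml`:

```shell
# Illustrative stand-in for the chart's default values.yaml
cat > values.yaml <<'EOF'
## @param replicaCount Specify the number of replicas for the application
replicaCount: 1
EOF

# `helm install ... --set replicaCount=5` wins over the file default;
# here we apply the same override locally so the result can be inspected.
sed 's/^replicaCount: .*/replicaCount: 5/' values.yaml > effective-values.yaml

grep '^replicaCount:' effective-values.yaml   # replicaCount: 5
```

With a real cluster, `helm get values mynode --all` would show the merged values the release actually uses.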
{"questions":"ckad excersises kubernetes io Documentation Tasks Access Applications in a Cluster using API kubernetes io Documentation Tasks Access Applications in a Cluster kubernetes io Documentation Reference kubectl CLI Core Concepts 13 kubernetes io Documentation Tasks Monitoring Logging and Debugging","answers":"![](https:\/\/gaforgithub.azurewebsites.net\/api?repo=CKAD-exercises\/core_concepts&empty)\n# Core Concepts (13%)\n\nkubernetes.io > Documentation > Reference > kubectl CLI > [kubectl Cheat Sheet](https:\/\/kubernetes.io\/docs\/reference\/kubectl\/cheatsheet\/)\n\nkubernetes.io > Documentation > Tasks > Monitoring, Logging, and Debugging > [Get a Shell to a Running Container](https:\/\/kubernetes.io\/docs\/tasks\/debug-application-cluster\/get-shell-running-container\/)\n\nkubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Configure Access to Multiple Clusters](https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/configure-access-multiple-clusters\/)\n\nkubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Accessing Clusters](https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/access-cluster\/) using API\n\nkubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Use Port Forwarding to Access Applications in a Cluster](https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/port-forward-access-application-cluster\/)\n\n### Create a namespace called 'mynamespace' and a pod with image nginx called nginx on this namespace\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create namespace mynamespace\nkubectl run nginx --image=nginx --restart=Never -n mynamespace\n```\n\n<\/p>\n<\/details>\n\n### Create the pod that was just described using YAML\n\n<details><summary>show<\/summary>\n<p>\n\nEasily generate YAML with:\n\n```bash\nkubectl run nginx --image=nginx --restart=Never --dry-run=client -n mynamespace -o yaml > 
pod.yaml\n```\n\n```bash\ncat pod.yaml\n```\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: nginx\n  name: nginx\n  namespace: mynamespace\nspec:\n  containers:\n  - image: nginx\n    imagePullPolicy: IfNotPresent\n    name: nginx\n    resources: {}\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n```bash\nkubectl create -f pod.yaml\n```\n\nAlternatively, you can run in one line\n\n```bash\nkubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml | kubectl create -n mynamespace -f -\n```\n\n<\/p>\n<\/details>\n\n### Create a busybox pod (using kubectl command) that runs the command \"env\". Run it and see the output\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run busybox --image=busybox --command --restart=Never -it --rm -- env # -it will help in seeing the output, --rm will immediately delete the pod after it exits\n# or, just run it without -it\nkubectl run busybox --image=busybox --command --restart=Never -- env\n# and then, check its logs\nkubectl logs busybox\n```\n\n<\/p>\n<\/details>\n\n### Create a busybox pod (using YAML) that runs the command \"env\". 
Run it and see the output\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\n# create a  YAML template with this command\nkubectl run busybox --image=busybox --restart=Never --dry-run=client -o yaml --command -- env > envpod.yaml\n# see it\ncat envpod.yaml\n```\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  creationTimestamp: null\n  labels:\n    run: busybox\n  name: busybox\nspec:\n  containers:\n  - command:\n    - env\n    image: busybox\n    name: busybox\n    resources: {}\n  dnsPolicy: ClusterFirst\n  restartPolicy: Never\nstatus: {}\n```\n\n```bash\n# apply it and then see the logs\nkubectl apply -f envpod.yaml\nkubectl logs busybox\n```\n\n<\/p>\n<\/details>\n\n### Get the YAML for a new namespace called 'myns' without creating it\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create namespace myns -o yaml --dry-run=client\n```\n\n<\/p>\n<\/details>\n\n### Create the YAML for a new ResourceQuota called 'myrq' with hard limits of 1 CPU, 1G memory and 2 pods without creating it\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl create quota myrq --hard=cpu=1,memory=1G,pods=2 --dry-run=client -o yaml\n```\n\n<\/p>\n<\/details>\n\n### Get pods on all namespaces\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get po --all-namespaces\n```\nAlternatively \n\n```bash\nkubectl get po -A\n```\n<\/p>\n<\/details>\n\n### Create a pod with image nginx called nginx and expose traffic on port 80\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run nginx --image=nginx --restart=Never --port=80\n```\n\n<\/p>\n<\/details>\n\n### Change pod's image to nginx:1.24.0. 
Observe that the container will be restarted as soon as the image gets pulled\n\n<details><summary>show<\/summary>\n<p>\n\n*Note*: The `RESTARTS` column should contain 0 initially (ideally - it could be any number)\n\n```bash\n# kubectl set image POD\/POD_NAME CONTAINER_NAME=IMAGE_NAME:TAG\nkubectl set image pod\/nginx nginx=nginx:1.24.0\nkubectl describe po nginx # you will see an event 'Container will be killed and recreated'\nkubectl get po nginx -w # watch it\n```\n\n*Note*: some time after changing the image, you should see that the value in the `RESTARTS` column has been increased by 1, because the container has been restarted, as stated in the events shown at the bottom of the `kubectl describe pod` command:\n\n```\nEvents:\n  Type    Reason     Age                  From               Message\n  ----    ------     ----                 ----               -------\n[...]\n  Normal  Killing    100s                 kubelet, node3     Container pod1 definition changed, will be restarted\n  Normal  Pulling    100s                 kubelet, node3     Pulling image \"nginx:1.24.0\"\n  Normal  Pulled     41s                  kubelet, node3     Successfully pulled image \"nginx:1.24.0\"\n  Normal  Created    36s (x2 over 9m43s)  kubelet, node3     Created container pod1\n  Normal  Started    36s (x2 over 9m43s)  kubelet, node3     Started container pod1\n```\n\n*Note*: you can check pod's image by running\n\n```bash\nkubectl get po nginx -o jsonpath='{.spec.containers[].image}{\"\\n\"}'\n```\n\n<\/p>\n<\/details>\n\n### Get nginx pod's ip created in previous step, use a temp busybox image to wget its '\/'\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get po -o wide # get the IP, will be something like '10.1.1.131'\n# create a temp busybox pod\nkubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- 10.1.1.131:80\n```\n\nAlternatively you can also try a more advanced option:\n\n```bash\n# Get IP of the nginx pod\nNGINX_IP=$(kubectl get 
pod nginx -o jsonpath='{.status.podIP}')\n# create a temp busybox pod\nkubectl run busybox --image=busybox --env=\"NGINX_IP=$NGINX_IP\" --rm -it --restart=Never -- sh -c 'wget -O- $NGINX_IP:80'\n``` \n\nOr just in one line:\n\n```bash\nkubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- $(kubectl get pod nginx -o jsonpath='{.status.podIP}:{.spec.containers[0].ports[0].containerPort}')\n```\n\n<\/p>\n<\/details>\n\n### Get pod's YAML\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl get po nginx -o yaml\n# or\nkubectl get po nginx -oyaml\n# or\nkubectl get po nginx --output yaml\n# or\nkubectl get po nginx --output=yaml\n```\n\n<\/p>\n<\/details>\n\n### Get information about the pod, including details about potential issues (e.g. pod hasn't started)\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl describe po nginx\n```\n\n<\/p>\n<\/details>\n\n### Get pod logs\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl logs nginx\n```\n\n<\/p>\n<\/details>\n\n### If pod crashed and restarted, get logs about the previous instance\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl logs nginx -p\n# or\nkubectl logs nginx --previous\n```\n\n<\/p>\n<\/details>\n\n### Execute a simple shell on the nginx pod\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl exec -it nginx -- \/bin\/sh\n```\n\n<\/p>\n<\/details>\n\n### Create a busybox pod that echoes 'hello world' and then exits\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run busybox --image=busybox -it --restart=Never -- echo 'hello world'\n# or\nkubectl run busybox --image=busybox -it --restart=Never -- \/bin\/sh -c 'echo hello world'\n```\n\n<\/p>\n<\/details>\n\n### Do the same, but have the pod deleted automatically when it's completed\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run busybox --image=busybox -it --rm --restart=Never -- \/bin\/sh -c 'echo hello world'\nkubectl get po # nowhere to be found 
:)\n```\n\n<\/p>\n<\/details>\n\n### Create an nginx pod and set an env value as 'var1=val1'. Check the env value existence within the pod\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl run nginx --image=nginx --restart=Never --env=var1=val1\n# then\nkubectl exec -it nginx -- env\n# or\nkubectl exec -it nginx -- sh -c 'echo $var1'\n# or\nkubectl describe po nginx | grep val1\n# or\nkubectl run nginx --restart=Never --image=nginx --env=var1=val1 -it --rm -- env\n# or\nkubectl run nginx --image nginx --restart=Never --env=var1=val1 -it --rm -- sh -c 'echo $var1'\n```\n\n<\/p>\n<\/details>","site":"ckad excersises"}
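Several answers above lean on `kubectl run ... --dry-run=client -o yaml` against a live cluster. As an offline sketch, the heredoc below reproduces the generated pod manifest shown in the answer so its key fields can be inspected without kubectl (the grep checks are illustrative):

```shell
# Mirrors the output of:
#   kubectl run nginx --image=nginx --restart=Never -n mynamespace \
#     --dry-run=client -o yaml
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: mynamespace
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
EOF

# The fields the questions above care about:
grep 'restartPolicy' pod.yaml
grep 'namespace' pod.yaml
```

With a cluster available, `kubectl create -f pod.yaml` submits exactly this manifest.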
{"questions":"ckad excersises Create a CustomResourceDefinition manifest file for an Operator with the following specifications Name Note CRD is part of the new CKAD syllabus Here are a few examples of installing custom resource into the Kubernetes API by creating a CRD Group Schema CRD in K8s Extend the Kubernetes API with CRD CustomResourceDefinition","answers":"# Extend the Kubernetes API with CRD (CustomResourceDefinition)\n\n- Note: CRD is part of the new CKAD syllabus. Here are a few examples of installing custom resource into the Kubernetes API by creating a CRD.\n\n## CRD in K8s\n\n### Create a CustomResourceDefinition manifest file for an Operator with the following specifications :\n* *Name* : `operators.stable.example.com`\n* *Group* : `stable.example.com`\n* *Schema*: `<email: string><name: string><age: integer>`\n* *Scope*: `Namespaced`\n* *Names*: `<plural: operators><singular: operator><shortNames: op>`\n* *Kind*: `Operator`\n\n<details><summary>show<\/summary>\n<p>\n\n```yaml\napiVersion: apiextensions.k8s.io\/v1\nkind: CustomResourceDefinition\nmetadata:\n  # name must match the spec fields below, and be in the form: <plural>.<group>\n  name: operators.stable.example.com\nspec:\n  group: stable.example.com\n  versions:\n    - name: v1\n      served: true\n      # One and only one version must be marked as the storage version.\n      storage: true\n      schema:\n        openAPIV3Schema:\n          type: object\n          properties:\n            spec:\n              type: object\n              properties:\n                email:\n                  type: string\n                name:\n                  type: string\n                age:\n                  type: integer\n  scope: Namespaced\n  names:\n    plural: operators\n    singular: operator\n    # kind is normally the CamelCased singular type. 
Your resource manifests use this.\n    kind: Operator\n    shortNames:\n    - op\n```\n\n<\/p>\n<\/details>\n\n### Create the CRD resource in the K8S API\n\n<details><summary>show<\/summary>\n<p>\n\n```bash\nkubectl apply -f operator-crd.yml\n```\n\n<\/p>\n<\/details>\n\n### Create custom object from the CRD\n\n* *Name* : `operator-sample`\n* *Kind*: `Operator`\n* Spec:\n  * email: `operator-sample@stable.example.com`\n  * name: `operator sample`\n  * age: `30`\n\n<details><summary>show<\/summary>\n<p>\n\n```yaml\napiVersion: stable.example.com\/v1\nkind: Operator\nmetadata:\n  name: operator-sample\nspec:\n  email: operator-sample@stable.example.com\n  name: \"operator sample\"\n  age: 30\n```\n\n```bash\nkubectl apply -f operator.yml\n```\n\n<\/p>\n<\/details>\n\n### Listing operator\n\n<details><summary>show<\/summary>\n<p>\n\nUse singular, plural and short forms\n\n```bash\nkubectl get operators\n# or\nkubectl get operator\n# or\nkubectl get op\n```\n\n<\/p>\n<\/details>","site":"ckad excersises"}
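A detail worth checking in the CRD answer is the naming rule: `metadata.name` must be exactly `<plural>.<group>`. The sketch below verifies that rule with plain shell on a manifest trimmed to the fields involved (no cluster or kubectl needed; the awk field extraction is a rough convenience, not a YAML parser):

```shell
# CRD manifest reduced to the naming-related fields from the answer above
cat > operator-crd.yml <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: operators.stable.example.com
spec:
  group: stable.example.com
  names:
    plural: operators
    singular: operator
    kind: Operator
    shortNames:
    - op
  scope: Namespaced
EOF

# Pull out the three fields and check name == <plural>.<group>
name=$(awk '/^  name:/ {print $2}' operator-crd.yml)
group=$(awk '/^  group:/ {print $2}' operator-crd.yml)
plural=$(awk '/plural:/ {print $2}' operator-crd.yml)
[ "$name" = "$plural.$group" ] && echo "CRD name is consistent"
```

If the rule is violated, `kubectl apply` would reject the CRD with a validation error.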
{"questions":"ckad excersises Multi container Pods 10 Create a Pod with two containers both with image busybox and command echo hello sleep 3600 Connect to the second container and run ls details summary show summary p Easiest way to do it is create a pod with a single container and save its definition in a YAML file","answers":"![](https:\/\/gaforgithub.azurewebsites.net\/api?repo=CKAD-exercises\/multi_container&empty)\n# Multi-container Pods (10%)\n\n### Create a Pod with two containers, both with image busybox and command \"echo hello; sleep 3600\". Connect to the second container and run 'ls'\n\n<details><summary>show<\/summary>\n<p>\n\nEasiest way to do it is create a pod with a single container and save its definition in a YAML file:\n\n```bash\nkubectl run busybox --image=busybox --restart=Never -o yaml --dry-run=client -- \/bin\/sh -c 'echo hello;sleep 3600' > pod.yaml\nvi pod.yaml\n```\n\nCopy\/paste the container related values, so your final YAML should contain the following two containers (make sure those containers have a different name):\n\n```YAML\ncontainers:\n  - args:\n    - \/bin\/sh\n    - -c\n    - echo hello;sleep 3600\n    image: busybox\n    imagePullPolicy: IfNotPresent\n    name: busybox\n    resources: {}\n  - args:\n    - \/bin\/sh\n    - -c\n    - echo hello;sleep 3600\n    image: busybox\n    name: busybox2\n```\n\n```bash\nkubectl create -f pod.yaml\n# Connect to the busybox2 container within the pod\nkubectl exec -it busybox -c busybox2 -- \/bin\/sh\nls\nexit\n\n# or you can do the above with just an one-liner\nkubectl exec -it busybox -c busybox2 -- ls\n\n# you can do some cleanup\nkubectl delete po busybox\n```\n\n<\/p>\n<\/details>\n\n### Create a pod with an nginx container exposed on port 80. Add a busybox init container which downloads a page using \"wget -O \/work-dir\/index.html http:\/\/neverssl.com\/online\". Make a volume of type emptyDir and mount it in both containers. 
For the nginx container, mount it on \"\/usr\/share\/nginx\/html\" and for the initcontainer, mount it on \"\/work-dir\". When done, get the IP of the created pod and create a busybox pod and run \"wget -O- IP\"\n\n<details><summary>show<\/summary>\n<p>\n\nEasiest way to do it is create a pod with a single container and save its definition in a YAML file:\n\n```bash\nkubectl run box --image=nginx --restart=Never --port=80 --dry-run=client -o yaml > pod-init.yaml\n```\n\nCopy\/paste the container related values, so your final YAML should contain the volume and the initContainer:\n\nVolume:\n\n```YAML\ncontainers:\n- image: nginx\n...\n  volumeMounts:\n  - name: vol\n    mountPath: \/usr\/share\/nginx\/html\nvolumes:\n- name: vol\n  emptyDir: {}\n```\n\ninitContainer:\n\n```YAML\n...\ninitContainers:\n- args:\n  - \/bin\/sh\n  - -c\n  - \"wget -O \/work-dir\/index.html http:\/\/neverssl.com\/online\"\n  image: busybox\n  name: box\n  volumeMounts:\n  - name: vol\n    mountPath: \/work-dir\n```\n\nIn total you get:\n\n```YAML\napiVersion: v1\nkind: Pod\nmetadata:\n  labels:\n    run: box\n  name: box\nspec:\n  initContainers:\n  - args:\n    - \/bin\/sh\n    - -c\n    - \"wget -O \/work-dir\/index.html http:\/\/neverssl.com\/online\"\n    image: busybox\n    name: box\n    volumeMounts:\n    - name: vol\n      mountPath: \/work-dir\n  containers:\n  - image: nginx\n    name: nginx\n    ports:\n    - containerPort: 80\n    volumeMounts:\n    - name: vol\n      mountPath: \/usr\/share\/nginx\/html\n  volumes:\n  - name: vol\n    emptyDir: {}\n```\n\n```bash\n# Apply pod\nkubectl apply -f pod-init.yaml\n\n# Get IP\nkubectl get po -o wide\n\n# Execute wget\nkubectl run box-test --image=busybox --restart=Never -it --rm -- \/bin\/sh -c \"wget -O- $(kubectl get pod box -o jsonpath='{.status.podIP}')\"\n\n# you can do some cleanup\nkubectl delete po box\n```\n\n<\/p>\n<\/details>\n","site":"ckad excersises"}
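The first multi-container answer stresses that the two copied containers must get distinct names. A quick offline check of that on the final manifest (condensed to flow-style args for brevity; assumes no cluster is available, so only the file is inspected):

```shell
# Two-container pod from the answer above, args in flow style
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - args: ["/bin/sh", "-c", "echo hello;sleep 3600"]
    image: busybox
    name: busybox
  - args: ["/bin/sh", "-c", "echo hello;sleep 3600"]
    image: busybox
    name: busybox2
  restartPolicy: Never
EOF

# Container names sit at 4-space indent; they must all be unique,
# otherwise the API server rejects the pod.
total=$(grep -c '    name:' pod.yaml)
unique=$(grep '    name:' pod.yaml | sort -u | grep -c 'name:')
[ "$total" -eq "$unique" ] && echo "container names are unique"
```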
{"questions":"docker on GitHub GitLab doing exactly this now your project repo it s now version controlled and easily enable someone else to contribute to your project is a tool that was developed to help define and Someone would only need to clone your repo and start the compose app In fact you might see quite a few projects The big advantage of using Compose is you can define your application stack in a file keep it at the root of and with a single command can spin everything up or tear it all down share multi container applications With Compose we can create a YAML file to define the services","answers":"\n[Docker Compose](https:\/\/docs.docker.com\/compose\/) is a tool that was developed to help define and\nshare multi-container applications. With Compose, we can create a YAML file to define the services\nand with a single command, can spin everything up or tear it all down. \n\nThe _big_ advantage of using Compose is you can define your application stack in a file, keep it at the root of\nyour project repo (it's now version controlled), and easily enable someone else to contribute to your project. \nSomeone would only need to clone your repo and start the compose app. In fact, you might see quite a few projects\non GitHub\/GitLab doing exactly this now.\n\nSo, how do we get started?\n\n## Installing Docker Compose\n\nIf you installed Docker Desktop for Windows, Mac, or Linux you already have Docker Compose!\nPlay-with-Docker instances already have Docker Compose installed as well. If you are on\nanother system, you can install Docker Compose using [the instructions here](https:\/\/docs.docker.com\/compose\/install\/). \n\n\n## Creating our Compose File\n\n1. Inside of the app folder, create a file named `docker-compose.yml` (next to the `Dockerfile` and `package.json` files).\n\n1. 
In the compose file, we'll start off by defining a list of services (or containers) we want to run as part of our application.\n\n    ```yaml\n    services:\n    ```\n\nAnd now, we'll start migrating a service at a time into the compose file.\n\n\n## Defining the App Service\n\nAs a reminder, this was the command we were using to define our app container.\n\n```bash\ndocker run -dp 3000:3000 \\\n  -w \/app -v \"$(pwd):\/app\" \\\n  --network todo-app \\\n  -e MYSQL_HOST=mysql \\\n  -e MYSQL_USER=root \\\n  -e MYSQL_PASSWORD=secret \\\n  -e MYSQL_DB=todos \\\n  node:18-alpine \\\n  sh -c \"yarn install && yarn run dev\"\n```\n\n1. First, let's define the service entry and the image for the container. We can pick any name for the service. \n   The name will automatically become a network alias, which will be useful when defining our MySQL service.\n\n    ```yaml hl_lines=\"2 3\"\n    services:\n      app:\n        image: node:18-alpine\n    ```\n\n1. Typically, you will see the command close to the `image` definition, although there is no requirement on ordering.\n   So, let's go ahead and move that into our file.\n\n    ```yaml hl_lines=\"4\"\n    services:\n      app:\n        image: node:18-alpine\n        command: sh -c \"yarn install && yarn run dev\"\n    ```\n\n\n1. Let's migrate the `-p 3000:3000` part of the command by defining the `ports` for the service. We will use the\n   [short syntax](https:\/\/docs.docker.com\/compose\/compose-file\/#short-syntax-2) here, but there is also a more verbose \n   [long syntax](https:\/\/docs.docker.com\/compose\/compose-file\/#long-syntax-2) available as well.\n\n    ```yaml hl_lines=\"5 6\"\n    services:\n      app:\n        image: node:18-alpine\n        command: sh -c \"yarn install && yarn run dev\"\n        ports:\n          - 3000:3000\n    ```\n\n1. Next, we'll migrate both the working directory (`-w \/app`) and the volume mapping (`-v \"$(pwd):\/app\"`) by using\n   the `working_dir` and `volumes` definitions. 
Volumes also have a [short](https:\/\/docs.docker.com\/compose\/compose-file\/#short-syntax-4) and [long](https:\/\/docs.docker.com\/compose\/compose-file\/#long-syntax-4) syntax.\n\n    One advantage of Docker Compose volume definitions is we can use relative paths from the current directory.\n\n    ```yaml hl_lines=\"7 8 9\"\n    services:\n      app:\n        image: node:18-alpine\n        command: sh -c \"yarn install && yarn run dev\"\n        ports:\n          - 3000:3000\n        working_dir: \/app\n        volumes:\n          - .\/:\/app\n    ```\n\n1. Finally, we need to migrate the environment variable definitions using the `environment` key.\n\n    ```yaml hl_lines=\"10 11 12 13 14\"\n    services:\n      app:\n        image: node:18-alpine\n        command: sh -c \"yarn install && yarn run dev\"\n        ports:\n          - 3000:3000\n        working_dir: \/app\n        volumes:\n          - .\/:\/app\n        environment:\n          MYSQL_HOST: mysql\n          MYSQL_USER: root\n          MYSQL_PASSWORD: secret\n          MYSQL_DB: todos\n    ```\n\n  \n### Defining the MySQL Service\n\nNow, it's time to define the MySQL service. The command that we used for that container was the following:\n\n```bash\ndocker run -d \\\n  --network todo-app --network-alias mysql \\\n  -v todo-mysql-data:\/var\/lib\/mysql \\\n  -e MYSQL_ROOT_PASSWORD=secret \\\n  -e MYSQL_DATABASE=todos \\\n  mysql:8.0\n```\n\n1. We will first define the new service and name it `mysql` so it automatically gets the network alias. We'll\n   go ahead and specify the image to use as well.\n\n    ```yaml hl_lines=\"4 5\"\n    services:\n      app:\n        # The app service definition\n      mysql:\n        image: mysql:8.0\n    ```\n\n1. Next, we'll define the volume mapping. When we ran the container with `docker run`, the named volume was created\n   automatically. However, that doesn't happen when running with Compose. 
We need to define the volume in the top-level\n   `volumes:` section and then specify the mountpoint in the service config. By simply providing only the volume name,\n   the default options are used. There are [many more options available](https:\/\/docs.docker.com\/compose\/compose-file\/#volumes-top-level-element) though.\n\n    ```yaml hl_lines=\"6 7 8 9 10\"\n    services:\n      app:\n        # The app service definition\n      mysql:\n        image: mysql:8.0\n        volumes:\n          - todo-mysql-data:\/var\/lib\/mysql\n    \n    volumes:\n      todo-mysql-data:\n    ```\n\n1. Finally, we only need to specify the environment variables.\n\n    ```yaml hl_lines=\"8 9 10\"\n    services:\n      app:\n        # The app service definition\n      mysql:\n        image: mysql:8.0\n        volumes:\n          - todo-mysql-data:\/var\/lib\/mysql\n        environment: \n          MYSQL_ROOT_PASSWORD: secret\n          MYSQL_DATABASE: todos\n    \n    volumes:\n      todo-mysql-data:\n    ```\n\nAt this point, our complete `docker-compose.yml` should look like this:\n\n\n```yaml\nservices:\n  app:\n    image: node:18-alpine\n    command: sh -c \"yarn install && yarn run dev\"\n    ports:\n      - 3000:3000\n    working_dir: \/app\n    volumes:\n      - .\/:\/app\n    environment:\n      MYSQL_HOST: mysql\n      MYSQL_USER: root\n      MYSQL_PASSWORD: secret\n      MYSQL_DB: todos\n\n  mysql:\n    image: mysql:8.0\n    volumes:\n      - todo-mysql-data:\/var\/lib\/mysql\n    environment: \n      MYSQL_ROOT_PASSWORD: secret\n      MYSQL_DATABASE: todos\n\nvolumes:\n  todo-mysql-data:\n```\n\n\n## Running our Application Stack\n\nNow that we have our `docker-compose.yml` file, we can start it up!\n\n1. Make sure no other copies of the app\/db are running first (`docker ps` and `docker rm -f <ids>`).\n\n1. Start up the application stack using the `docker compose up` command. 
We'll add the `-d` flag to run everything in the\n   background.\n\n    ```bash\n    docker compose up -d\n    ```\n\n    When we run this, we should see output like this:\n\n    ```plaintext\n    [+] Running 3\/3\n    \u283f Network app_default    Created                                0.0s\n    \u283f Container app-mysql-1  Started                                0.4s\n    \u283f Container app-app-1    Started                                0.4s\n    ```\n\n    You'll notice that the volume was created as well as a network! By default, Docker Compose automatically creates a \n    network specifically for the application stack (which is why we didn't define one in the compose file).\n\n1. Let's look at the logs using the `docker compose logs -f` command. You'll see the logs from each of the services interleaved\n    into a single stream. This is incredibly useful when you want to watch for timing-related issues. The `-f` flag \"follows\" the\n    log, so it will give you live output as it's generated.\n\n    After a moment, you'll see output that looks like this...\n\n    ```plaintext\n    mysql_1  | 2022-11-23T04:01:20.185015Z 0 [System] [MY-010931] [Server] \/usr\/sbin\/mysqld: ready for connections. Version: '8.0.31'  socket: '\/var\/run\/mysqld\/mysqld.sock'  port: 3306  MySQL Community Server - GPL.\n    app_1    | Connected to mysql db at host mysql\n    app_1    | Listening on port 3000\n    ```\n\n    The service name is displayed at the beginning of the line (often colored) to help distinguish messages. If you want to\n    view the logs for a specific service, you can add the service name to the end of the logs command (for example,\n    `docker compose logs -f app`).\n\n    !!! 
info \"Pro tip - Waiting for the DB before starting the app\"\n        When the app is starting up, it actually sits and waits for MySQL to be up and ready before trying to connect to it.\n        Docker doesn't have any built-in support to wait for another container to be fully up, running, and ready\n        before starting another container. For Node-based projects, you can use the \n        [wait-port](https:\/\/github.com\/dwmkerr\/wait-port) dependency. Similar projects exist for other languages\/frameworks.\n\n1. At this point, you should be able to open your app and see it running. And hey! We're down to a single command!\n\n## Seeing our App Stack in Docker Dashboard\n\nIf we look at the Docker Dashboard, we'll see that there is a group named **app**. This is the \"project name\" from Docker\nCompose and used to group the containers together. By default, the project name is simply the name of the directory that the\n`docker-compose.yml` was located in.\n\n![Docker Dashboard with app project](dashboard-app-project-collapsed.png)\n\nIf you twirl down the app, you will see the two containers we defined in the compose file. The names are also a little\nmore descriptive, as they follow the pattern of `<project-name>_<service-name>_<replica-number>`. So, it's very easy to\nquickly see what container is our app and which container is the mysql database.\n\n![Docker Dashboard with app project expanded](dashboard-app-project-expanded.png)\n\n\n## Tearing it All Down\n\nWhen you're ready to tear it all down, simply run `docker compose down` or hit the trash can on the Docker Dashboard \nfor the entire app. The containers will stop and the network will be removed.\n\n!!! warning \"Removing Volumes\"\n    By default, named volumes in your compose file are NOT removed when running `docker compose down`. 
If you want to\n    remove the volumes, you will need to add the `--volumes` flag.\n\n    The Docker Dashboard does _not_ remove volumes when you delete the app stack.\n\nOnce torn down, you can switch to another project, run `docker compose up` and be ready to contribute to that project! It really\ndoesn't get much simpler than that!\n\n\n## Recap\n\nIn this section, we learned about Docker Compose and how it helps us dramatically simplify the defining and\nsharing of multi-service applications. We created a Compose file by translating the commands we were\nusing into the appropriate compose format.\n\nAt this point, we're starting to wrap up the tutorial. However, there are a few best practices about\nimage building we want to cover, as there is a big issue with the Dockerfile we've been using. So,\nlet's take a look!","site":"docker"}
{"questions":"docker Updating our Source Code You have no todo items yet Add one above Pretty simple right Let s make the change change the empty text when we don t have any todo list items They As a small feature request we ve been asked by the product team to would like to transition it to the following","answers":"\nAs a small feature request, we've been asked by the product team to\nchange the \"empty text\" when we don't have any todo list items. They\nwould like to transition it to the following:\n\n> You have no todo items yet! Add one above!\n\nPretty simple, right? Let's make the change.\n\n## Updating our Source Code\n\n1. In the `src\/static\/js\/app.js` file, update line 56 to use the new empty text.\n\n    ```diff\n    -                <p className=\"text-center\">No items yet! Add one above!<\/p>\n    +                <p className=\"text-center\">You have no todo items yet! Add one above!<\/p>\n    ```\n\n1. Let's build our updated version of the image, using the same command we used before.\n\n    ```bash\n    docker build -t getting-started .\n    ```\n\n1. Let's start a new container using the updated code.\n\n    ```bash\n    docker run -dp 3000:3000 getting-started\n    ```\n\n**Uh oh!** You probably saw an error like this (the IDs will be different):\n\n```bash\ndocker: Error response from daemon: driver failed programming external connectivity on endpoint laughing_burnell \n(bb242b2ca4d67eba76e79474fb36bb5125708ebdabd7f45c8eaf16caaabde9dd): Bind for 0.0.0.0:3000 failed: port is already allocated.\n```\n\nSo, what happened? We aren't able to start the new container because our old container is still\nrunning. The reason this is a problem is because that container is using the host's port 3000 and\nonly one process on the machine (containers included) can listen to a specific port. To fix this, \nwe need to remove the old container.\n\n\n## Replacing our Old Container\n\nTo remove a container, it first needs to be stopped. 
Once it has stopped, it can be removed. We have two\nways that we can remove the old container. Feel free to choose the path that you're most comfortable with.\n\n\n### Removing a container using the CLI\n\n1. Get the ID of the container by using the `docker ps` command.\n\n    ```bash\n    docker ps\n    ```\n\n1. Use the `docker stop` command to stop the container.\n\n    ```bash\n    # Swap out <the-container-id> with the ID from docker ps\n    docker stop <the-container-id>\n    ```\n\n1. Once the container has stopped, you can remove it by using the `docker rm` command.\n\n    ```bash\n    docker rm <the-container-id>\n    ```\n\n!!! info \"Pro tip\"\n    You can stop and remove a container in a single command by adding the \"force\" flag\n    to the `docker rm` command. For example: `docker rm -f <the-container-id>`\n\n### Removing a container using the Docker Dashboard\n\nIf you open the Docker dashboard, you can remove a container with two clicks! It's certainly\nmuch easier than having to look up the container ID and remove it.\n\n1. With the dashboard opened, hover over the app container and you'll see a collection of action\n    buttons appear on the right.\n\n1. Click on the trash can icon to delete the container. \n\n1. Confirm the removal and you're done!\n\n![Docker Dashboard - removing a container](dashboard-removing-container.png)\n\n\n### Starting our updated app container\n\n1. Now, start your updated app.\n\n    ```bash\n    docker run -dp 3000:3000 getting-started\n    ```\n\n1. Refresh your browser on [http:\/\/localhost:3000](http:\/\/localhost:3000) and you should see your updated help text!\n\n![Updated application with updated empty text](todo-list-updated-empty-text.png){: style=\"width:55%\" }\n{: .text-center }\n\n\n\n## Recap\n\nWhile we were able to build an update, there were two things you might have noticed:\n\n- All of the existing items in our todo list are gone! That's not a very good app! 
We'll talk about that\nshortly.\n- There were _a lot_ of steps involved for such a small change. In an upcoming section, we'll talk about \nhow to see code updates without needing to rebuild and start a new container every time we make a change.\n\nBefore talking about persistence, we'll quickly see how to share these images with others.","site":"docker"}
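The "port is already allocated" error above comes down to a plain OS rule: only one process (containers included) can listen on a given host port. A minimal Python sketch (illustrative only, not a Docker tool) that demonstrates the rule and could be used to check a port before running `docker run -p`:

```python
# Minimal sketch: check whether a host port is already taken before
# publishing it with `docker run -p`. Illustrates why starting a second
# container on port 3000 fails while the old one is still running.
import socket

def port_is_free(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))               # succeeds only if nothing listens here
            return True
        except OSError:
            return False                       # "Bind ... failed: port is already allocated"

if __name__ == "__main__":
    # Simulate the old container by holding a port open ourselves.
    blocker = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    blocker.bind(("127.0.0.1", 0))             # let the OS pick a free port
    blocker.listen(1)
    port = blocker.getsockname()[1]
    print(port_is_free(port))                  # False while the listener is alive
    blocker.close()
```

Removing the old container (as the tutorial does with `docker rm -f`) is exactly what releases the port again.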
{"questions":"docker used to provide additional data into containers When working on an application we can use a bind mount to is stored In the previous chapter we talked about and used a named volume to persist the data in our database mount our source code into the container to let it see code changes respond and let us see the changes right Named volumes are great if we simply want to store data as we don t have to worry about where the data away With bind mounts we control the exact mountpoint on the host We can use this to persist data but is often","answers":"\nIn the previous chapter, we talked about and used a **named volume** to persist the data in our database.\nNamed volumes are great if we simply want to store data, as we don't have to worry about _where_ the data\nis stored.\n\nWith **bind mounts**, we control the exact mountpoint on the host. We can use this to persist data, but is often\nused to provide additional data into containers. When working on an application, we can use a bind mount to\nmount our source code into the container to let it see code changes, respond, and let us see the changes right\naway.\n\nFor Node-based applications, [nodemon](https:\/\/npmjs.com\/package\/nodemon) is a great tool to watch for file\nchanges and then restart the application. There are equivalent tools in most other languages and frameworks.\n\n## Quick Volume Type Comparisons\n\nBind mounts and named volumes are the two main types of volumes that come with the Docker engine. 
However, additional\nvolume drivers are available to support other use cases ([SFTP](https:\/\/github.com\/vieux\/docker-volume-sshfs), [Ceph](https:\/\/ceph.com\/geen-categorie\/getting-started-with-the-docker-rbd-volume-plugin\/), [NetApp](https:\/\/netappdvp.readthedocs.io\/en\/stable\/), [S3](https:\/\/github.com\/elementar\/docker-s3-volume), and more).\n\n|   | Named Volumes | Bind Mounts |\n| - | ------------- | ----------- |\n| Host Location | Docker chooses | You control |\n| Mount Example (using `-v`) | my-volume:\/usr\/local\/data | \/path\/to\/data:\/usr\/local\/data |\n| Populates new volume with container contents | Yes | No |\n| Supports Volume Drivers | Yes | No |\n\n\n## Starting a Dev-Mode Container\n\nTo run our container to support a development workflow, we will do the following:\n\n- Mount our source code into the container\n- Install all dependencies, including the \"dev\" dependencies\n- Start nodemon to watch for filesystem changes\n\nSo, let's do it!\n\n1. Make sure you don't have any of your own `getting-started` containers running (only the tutorial itself should be running).\n\n1. Also make sure you are in app source code directory, i.e. `\/path\/to\/getting-started\/app`. If you aren't, you can `cd` into it, .e.g:\n\n    ```bash\n    cd \/path\/to\/getting-started\/app\n    ```\n\n1. Now that you are in the `getting-started\/app` directory, run the following command. We'll explain what's going on afterwards:\n\n    ```bash\n    docker run -dp 3000:3000 \\\n        -w \/app -v \"$(pwd):\/app\" \\\n        node:18-alpine \\\n        sh -c \"yarn install && yarn run dev\"\n    ```\n\n    If you are using PowerShell then use this command.\n\n    ```powershell\n    docker run -dp 3000:3000 `\n        -w \/app -v \"$(pwd):\/app\" `\n        node:18-alpine `\n        sh -c \"yarn install && yarn run dev\"\n    ```\n\n    - `-dp 3000:3000` - same as before. 
Run in detached (background) mode and create a port mapping\n    - `-w \/app` - sets the container's present working directory where the command will run from\n    - `-v \"$(pwd):\/app\"` - bind mount (link) the host's current `getting-started\/app` directory to the container's `\/app` directory. Note: Docker requires absolute paths for bind mounts, so in this example we use `pwd` for printing the absolute path of the working directory, i.e. the `app` directory, instead of typing it manually\n    - `node:18-alpine` - the image to use. Note that this is the base image for our app from the Dockerfile\n    - `sh -c \"yarn install && yarn run dev\"` - the command. We're starting a shell using `sh` (alpine doesn't have `bash`) and\n      running `yarn install` to install _all_ dependencies and then running `yarn run dev`. If we look in the `package.json`,\n      we'll see that the `dev` script is starting `nodemon`.\n\n1. You can watch the logs using `docker logs -f <container-id>`. You'll know you're ready to go when you see this...\n\n    ```bash\n    docker logs -f <container-id>\n    $ nodemon src\/index.js\n    [nodemon] 2.0.20\n    [nodemon] to restart at any time, enter `rs`\n    [nodemon] watching path(s): *.*\n    [nodemon] watching extensions: js,mjs,json\n    [nodemon] starting `node src\/index.js`\n    Using sqlite database at \/etc\/todos\/todo.db\n    Listening on port 3000\n    ```\n\n    When you're done watching the logs, exit out by hitting `Ctrl`+`C`.\n\n1. Now, let's make a change to the app. In the `src\/static\/js\/app.js` file, let's change the \"Add Item\" button to simply say\n   \"Add\". This change will be on line 109 - remember to save the file.\n\n    ```diff\n    -                         {submitting ? 'Adding...' : 'Add Item'}\n    +                         {submitting ? 'Adding...' : 'Add'}\n    ```\n\n1. Simply refresh the page (or open it) and you should see the change reflected in the browser almost immediately. 
It might\n   take a few seconds for the Node server to restart, so if you get an error, just try refreshing after a few seconds.\n\n    ![Screenshot of updated label for Add button](updated-add-button.png){: style=\"width:75%;\"}\n    {: .text-center }\n\n1. Feel free to make any other changes you'd like to make. When you're done, stop the container and build your new image\n   using `docker build -t getting-started .`.\n\n\nUsing bind mounts is _very_ common for local development setups. The advantage is that the dev machine doesn't need to have\nall of the build tools and environments installed. With a single `docker run` command, the dev environment is pulled and ready\nto go. We'll talk about Docker Compose in a future step, as this will help simplify our commands (we're already getting a lot\nof flags).\n\n## Recap\n\nAt this point, we can persist our database and respond rapidly to the needs and demands of our investors and founders. Hooray!\nBut, guess what? We received great news!\n\n**Your project has been selected for future development!** \n\nIn order to prepare for production, we need to migrate our database from SQLite to something that can scale a\nlittle better. For simplicity, we'll stick with a relational database and switch our application to use MySQL. But, how \nshould we run MySQL? How do we allow the containers to talk to each other? 
We'll talk about that next!","site":"docker"}
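One detail from the dev-mode `docker run` above is worth making concrete: Docker requires an absolute host path for bind mounts, which is exactly why the command builds the `-v` argument from `$(pwd)`. A minimal shell check (a sketch; the variable names are my own, not from the tutorial) confirms the expansion is always absolute:

```shell
# "$(pwd)" always expands to an absolute path, so a -v argument built
# from it satisfies Docker's bind-mount requirement no matter where the
# repository is checked out.
HOST_DIR="$(pwd)"
MOUNT_ARG="${HOST_DIR}:/app"

case "$HOST_DIR" in
  /*) echo "ok: $MOUNT_ARG" ;;                 # absolute - safe to pass to -v
  *)  echo "error: not absolute" >&2; exit 1 ;;
esac
```

With that guarantee, `docker run -v "$MOUNT_ARG" …` mounts the current source tree wherever you happen to run it, whereas typing a relative path by hand would be rejected.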
{"questions":"docker bash docker scan getting started When you have built an image it is good practice to scan it for security vulnerabilities using the command For example to scan the image you created earlier in the tutorial you can just type Docker has partnered with to provide the vulnerability scanning service Security Scanning","answers":"## Security Scanning\n\nWhen you have built an image, it is good practice to scan it for security vulnerabilities using the `docker scan` command.\nDocker has partnered with [Snyk](http:\/\/snyk.io) to provide the vulnerability scanning service.\n\nFor example, to scan the `getting-started` image you created earlier in the tutorial, you can just type\n\n```bash\ndocker scan getting-started\n```\n\nThe scan uses a constantly updated database of vulnerabilities, so the output you see will vary as new\nvulnerabilities are discovered, but it might look something like this:\n\n```plaintext\n\u2717 Low severity vulnerability found in freetype\/freetype\n  Description: CVE-2020-15999\n  Info: https:\/\/snyk.io\/vuln\/SNYK-ALPINE310-FREETYPE-1019641\n  Introduced through: freetype\/freetype@2.10.0-r0, gd\/libgd@2.2.5-r2\n  From: freetype\/freetype@2.10.0-r0\n  From: gd\/libgd@2.2.5-r2 > freetype\/freetype@2.10.0-r0\n  Fixed in: 2.10.0-r1\n\n\u2717 Medium severity vulnerability found in libxml2\/libxml2\n  Description: Out-of-bounds Read\n  Info: https:\/\/snyk.io\/vuln\/SNYK-ALPINE310-LIBXML2-674791\n  Introduced through: libxml2\/libxml2@2.9.9-r3, libxslt\/libxslt@1.1.33-r3, nginx-module-xslt\/nginx-module-xslt@1.17.9-r1\n  From: libxml2\/libxml2@2.9.9-r3\n  From: libxslt\/libxslt@1.1.33-r3 > libxml2\/libxml2@2.9.9-r3\n  From: nginx-module-xslt\/nginx-module-xslt@1.17.9-r1 > libxml2\/libxml2@2.9.9-r3\n  Fixed in: 2.9.9-r4\n```\n\nThe output lists the type of vulnerability, a URL to learn more, and importantly which version of the relevant library\nfixes the vulnerability.\n\nThere are several other options, which you can read about 
in the [docker scan documentation](https:\/\/docs.docker.com\/engine\/scan\/).\n\nAs well as scanning your newly built image on the command line, you can also [configure Docker Hub](https:\/\/docs.docker.com\/docker-hub\/vulnerability-scanning\/)\nto scan all newly pushed images automatically, and you can then see the results in both Docker Hub and Docker Desktop.\n\n![Hub vulnerability scanning](hvs.png){: style=width:75% }\n{: .text-center }\n\n## Image Layering\n\nDid you know that you can look at how an image is composed? Using the `docker image history`\ncommand, you can see the command that was used to create each layer within an image.\n\n1. Use the `docker image history` command to see the layers in the `getting-started` image you\n   created earlier in the tutorial.\n\n    ```bash\n    docker image history getting-started\n    ```\n\n    You should get output that looks something like this (dates\/IDs may be different).\n\n    ```plaintext\n    IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT\n    05bd8640b718   53 minutes ago   CMD [\"node\" \"src\/index.js\"]                     0B        buildkit.dockerfile.v0\n    <missing>      53 minutes ago   RUN \/bin\/sh -c yarn install --production # b\u2026   83.3MB    buildkit.dockerfile.v0\n    <missing>      53 minutes ago   COPY . . 
# buildkit                             4.59MB    buildkit.dockerfile.v0\n    <missing>      55 minutes ago   WORKDIR \/app                                    0B        buildkit.dockerfile.v0\n    <missing>      10 days ago      \/bin\/sh -c #(nop)  CMD [\"node\"]                 0B        \n    <missing>      10 days ago      \/bin\/sh -c #(nop)  ENTRYPOINT [\"docker-entry\u2026   0B        \n    <missing>      10 days ago      \/bin\/sh -c #(nop) COPY file:4d192565a7220e13\u2026   388B      \n    <missing>      10 days ago      \/bin\/sh -c apk add --no-cache --virtual .bui\u2026   7.85MB    \n    <missing>      10 days ago      \/bin\/sh -c #(nop)  ENV YARN_VERSION=1.22.19     0B        \n    <missing>      10 days ago      \/bin\/sh -c addgroup -g 1000 node     && addu\u2026   152MB     \n    <missing>      10 days ago      \/bin\/sh -c #(nop)  ENV NODE_VERSION=18.12.1     0B        \n    <missing>      11 days ago      \/bin\/sh -c #(nop)  CMD [\"\/bin\/sh\"]              0B        \n    <missing>      11 days ago      \/bin\/sh -c #(nop) ADD file:57d621536158358b1\u2026   5.29MB \n    ```\n\n    Each line represents a layer in the image. The display here shows the base at the bottom with\n    the newest layer at the top. Using this you can also quickly see the size of each layer, helping to\n    diagnose large images.\n\n1. You'll notice that several of the lines are truncated. If you add the `--no-trunc` flag, you'll get the\n   full output (yes... funny how you use a truncated flag to get untruncated output, huh?)\n\n    ```bash\n    docker image history --no-trunc getting-started\n    ```\n\n\n## Layer Caching\n\nNow that you've seen the layering in action, there's an important lesson to learn to help decrease build\ntimes for your container images.\n\n> Once a layer changes, all downstream layers have to be recreated as well\n\nLet's look at the Dockerfile we were using one more time...\n\n```dockerfile\nFROM node:18-alpine\nWORKDIR \/app\nCOPY . 
.\nRUN yarn install --production\nCMD [\"node\", \"src\/index.js\"]\n```\n\nGoing back to the image history output, we see that each command in the Dockerfile becomes a new layer in the image.\nYou might remember that when we made a change to the image, the yarn dependencies had to be reinstalled. Is there a\nway to fix this? It doesn't make much sense to ship around the same dependencies every time we build, right?\n\nTo fix this, we need to restructure our Dockerfile to help support the caching of the dependencies. For Node-based\napplications, those dependencies are defined in the `package.json` file. So what if we start by copying only that file in first,\ninstall the dependencies, and _then_ copy in everything else? Then, we only recreate the yarn dependencies if there was\na change to the `package.json`. Make sense?\n\n1. Update the Dockerfile to copy in the `package.json` first, install dependencies, and then copy everything else in.\n\n    ```dockerfile hl_lines=\"3 4 5\"\n    FROM node:18-alpine\n    WORKDIR \/app\n    COPY package.json yarn.lock .\/\n    RUN yarn install --production\n    COPY . .\n    CMD [\"node\", \"src\/index.js\"]\n    ```\n\n1. Create a file named `.dockerignore` in the same folder as the Dockerfile with the following contents.\n\n    ```ignore\n    node_modules\n    ```\n\n    `.dockerignore` files are an easy way to selectively copy only image relevant files.\n    You can read more about this\n    [here](https:\/\/docs.docker.com\/engine\/reference\/builder\/#dockerignore-file).\n    In this case, the `node_modules` folder should be omitted in the second `COPY` step because otherwise\n    it would possibly overwrite files which were created by the command in the `RUN` step.\n    For further details on why this is recommended for Node.js applications as well as further best practices,\n    have a look at their guide on\n    [Dockerizing a Node.js web app](https:\/\/nodejs.org\/en\/docs\/guides\/nodejs-docker-webapp\/).\n\n1. 
Build a new image using `docker build`.\n\n    ```bash\n    docker build -t getting-started .\n    ```\n\n    You should see output like this...\n\n    ```plaintext\n    [+] Building 16.1s (10\/10) FINISHED\n    => [internal] load build definition from Dockerfile                                               0.0s\n    => => transferring dockerfile: 175B                                                               0.0s\n    => [internal] load .dockerignore                                                                  0.0s\n    => => transferring context: 2B                                                                    0.0s\n    => [internal] load metadata for docker.io\/library\/node:18-alpine                                  0.0s\n    => [internal] load build context                                                                  0.8s\n    => => transferring context: 53.37MB                                                               0.8s\n    => [1\/5] FROM docker.io\/library\/node:18-alpine                                                    0.0s\n    => CACHED [2\/5] WORKDIR \/app                                                                      0.0s\n    => [3\/5] COPY package.json yarn.lock .\/                                                           0.2s\n    => [4\/5] RUN yarn install --production                                                           14.0s\n    => [5\/5] COPY . .                                                                                 
0.5s \n    => exporting to image                                                                             0.6s \n    => => exporting layers                                                                            0.6s \n    => => writing image sha256:d6f819013566c54c50124ed94d5e66c452325327217f4f04399b45f94e37d25        0.0s \n    => => naming to docker.io\/library\/getting-started                                                 0.0s\n    ```\n\n    You'll see that all layers were rebuilt. Perfectly fine since we changed the Dockerfile quite a bit.\n\n1. Now, make a change to the `src\/static\/index.html` file (like change the `<title>` to say \"The Awesome Todo App\").\n\n1. Build the Docker image now using `docker build -t getting-started .` again. This time, your output should look a little different.\n\n    ```plaintext hl_lines=\"10 11 12\"\n    [+] Building 1.2s (10\/10) FINISHED\n    => [internal] load build definition from Dockerfile                                               0.0s\n    => => transferring dockerfile: 37B                                                                0.0s\n    => [internal] load .dockerignore                                                                  0.0s\n    => => transferring context: 2B                                                                    0.0s\n    => [internal] load metadata for docker.io\/library\/node:18-alpine                                  0.0s\n    => [internal] load build context                                                                  0.2s\n    => => transferring context: 450.43kB                                                              0.2s\n    => [1\/5] FROM docker.io\/library\/node:18-alpine                                                    0.0s\n    => CACHED [2\/5] WORKDIR \/app                                                                      0.0s\n    => CACHED [3\/5] COPY package.json yarn.lock .\/                                                    0.0s\n    
=> CACHED [4\/5] RUN yarn install --production                                                     0.0s\n    => [5\/5] COPY . .                                                                                 0.5s\n    => exporting to image                                                                             0.3s\n    => => exporting layers                                                                            0.3s\n    => => writing image sha256:91790c87bcb096a83c2bd4eb512bc8b134c757cda0bdee4038187f98148e2eda       0.0s\n    => => naming to docker.io\/library\/getting-started                                                 0.0s\n    ```\n\n    First off, you should notice that the build was MUCH faster! You'll see that several steps are using\n    previously cached layers. So, hooray! We're using the build cache. Pushing and pulling this image and updates to it\n    will be much faster as well. Hooray!\n\n\n## Multi-Stage Builds\n\nWhile we're not going to dive into it too much in this tutorial, multi-stage builds are an incredibly powerful\ntool which help us by using multiple stages to create an image. They offer several advantages including:\n\n- Separate build-time dependencies from runtime dependencies\n- Reduce overall image size by shipping _only_ what your app needs to run\n\n### Maven\/Tomcat Example\n\nWhen building Java-based applications, a JDK is needed to compile the source code to Java bytecode. However,\nthat JDK isn't needed in production. You might also be using tools such as Maven or Gradle to help build the app.\nThose also aren't needed in our final image. Multi-stage builds help.\n\n```dockerfile\nFROM maven AS build\nWORKDIR \/app\nCOPY . .\nRUN mvn package\n\nFROM tomcat\nCOPY --from=build \/app\/target\/file.war \/usr\/local\/tomcat\/webapps \n```\n\nIn this example, we use one stage (called `build`) to perform the actual Java build with Maven. 
In the second\nstage (starting at `FROM tomcat`), we copy in files from the `build` stage. The final image is only the last stage\nbeing created (which can be overridden using the `--target` flag).\n\n\n### React Example\n\nWhen building React applications, we need a Node environment to compile the JS code (typically JSX), SASS stylesheets,\nand more into static HTML, JS, and CSS. But if we aren't performing server-side rendering, we don't even need a Node environment\nfor our production build. Why not ship the static resources in a static nginx container?\n\n```dockerfile\nFROM node:18 AS build\nWORKDIR \/app\nCOPY package* yarn.lock .\/\nRUN yarn install\nCOPY public .\/public\nCOPY src .\/src\nRUN yarn run build\n\nFROM nginx:alpine\nCOPY --from=build \/app\/build \/usr\/share\/nginx\/html\n```\n\nHere, we are using a `node:18` image to perform the build (maximizing layer caching) and then copying the output\ninto an nginx container. Cool, huh?\n\n\n## Recap\n\nBy understanding a little bit about how images are structured, we can build images faster and ship fewer changes.\nScanning images gives us confidence that the containers we are running and distributing are secure.\nMulti-stage builds also help us reduce overall image size and increase final container security by separating\nbuild-time dependencies from runtime dependencies.","site":"docker","answers_cleaned":"   Security Scanning  When you have built an image  it is good practice to scan it for security vulnerabilities using the  docker scan  command  Docker has partnered with  Snyk  http   snyk io  to provide the vulnerability scanning service   For example  to scan the  getting started  image you created earlier in the tutorial  you can just type     bash docker scan getting started      The scan uses a constantly updated database of vulnerabilities  so the output you see will vary as new vulnerabilities are discovered  but it might look something like this      plaintext   Low severity 
      0 3s           writing image sha256 91790c87bcb096a83c2bd4eb512bc8b134c757cda0bdee4038187f98148e2eda       0 0s           naming to docker io library getting started                                                 0 0s              First off  you should notice that the build was MUCH faster  You ll see that several steps are using     previously cached layers  So  hooray  We re using the build cache  Pushing and pulling this image and updates to it     will be much faster as well  Hooray       Multi Stage Builds  While we re not going to dive into it too much in this tutorial  multi stage builds are an incredibly powerful tool which help us by using multiple stages to create an image  They offer several advantages including     Separate build time dependencies from runtime dependencies   Reduce overall image size by shipping  only  what your app needs to run      Maven Tomcat Example  When building Java based applications  a JDK is needed to compile the source code to Java bytecode  However  that JDK isn t needed in production  You might also be using tools such as Maven or Gradle to help build the app  Those also aren t needed in our final image  Multi stage builds help      dockerfile FROM maven AS build WORKDIR  app COPY     RUN mvn package  FROM tomcat COPY   from build  app target file war  usr local tomcat webapps       In this example  we use one stage  called  build   to perform the actual Java build with Maven  In the second stage  starting at  FROM tomcat    we copy in files from the  build  stage  The final image is only the last stage being created  which can be overridden using the    target  flag         React Example  When building React applications  we need a Node environment to compile the JS code  typically JSX   SASS stylesheets  and more into static HTML  JS  and CSS  Although if we aren t performing server side rendering  we don t even need a Node environment for our production build  Why not ship the static resources in a static nginx 
container      dockerfile FROM node 18 AS build WORKDIR  app COPY package  yarn lock    RUN yarn install COPY public   public COPY src   src RUN yarn run build  FROM nginx alpine COPY   from build  app build  usr share nginx html      Here  we are using a  node 18  image to perform the build  maximizing layer caching  and then copying the output into an nginx container  Cool  huh       Recap  By understanding a little bit about how images are structured  we can build images faster and ship fewer changes  Scanning images gives us confidence that the containers we are running and distributing are secure  Multi stage builds also help us reduce overall image size and increase final container security by separating build time dependencies from runtime dependencies "}
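The two ideas in the record above (cache-friendly layer ordering and multi-stage builds) can also be combined for the Node todo app used in this tutorial. The sketch below is illustrative only and not part of the tutorial; the stage name `deps` and the `src/` layout are assumptions based on the app shown earlier.

```dockerfile
# Illustrative sketch only -- combines layer caching with a multi-stage build
# for the tutorial's Node todo app. The stage name "deps" is an assumption.
FROM node:18-alpine AS deps
WORKDIR /app
# Copy only the dependency manifests first so the expensive yarn install
# layer is reused from cache whenever only application source changes.
COPY package.json yarn.lock ./
RUN yarn install --production

FROM node:18-alpine
WORKDIR /app
# Copy the installed dependencies and the app source into a clean final stage.
COPY --from=deps /app/node_modules ./node_modules
COPY src ./src
CMD ["node", "./src/index.js"]
```

Building with `docker build -t getting-started .` works the same as before; `docker build --target deps -t deps-only .` would stop after the first stage.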
{"questions":"docker The Container s Filesystem When a container runs it uses the various layers from an image for its filesystem changes won t be seen in another container even if they are using the same image we launch the container Why is this Let s dive into how the container is working In case you didn t notice our todo list is being wiped clean every single time Each container also gets its own scratch space to create update remove files Any","answers":"\nIn case you didn't notice, our todo list is being wiped clean every single time\nwe launch the container. Why is this? Let's dive into how the container is working.\n\n## The Container's Filesystem\n\nWhen a container runs, it uses the various layers from an image for its filesystem.\nEach container also gets its own \"scratch space\" to create\/update\/remove files. Any\nchanges won't be seen in another container, _even if_ they are using the same image.\n\n### Seeing this in Practice\n\nTo see this in action, we're going to start two containers and create a file in each.\nWhat you'll see is that the files created in one container aren't available in another.\n\n1. Start a `ubuntu` container that will create a file named `\/data.txt` with a random number\n   between 1 and 10000.\n\n    ```bash\n    docker run -d ubuntu bash -c \"shuf -i 1-10000 -n 1 -o \/data.txt && tail -f \/dev\/null\"\n    ```\n\n    In case you're curious about the command, we're starting a bash shell and invoking two\n    commands (why we have the `&&`). The first portion picks a single random number and writes\n    it to `\/data.txt`. The second command is simply watching a file to keep the container running.\n\n1. Validate we can see the output by `exec`'ing into the container. 
To do so, open the Dashboard, find your Ubuntu container, click on the \"triple dot\" menu to get additional actions, and click on the \"Open in terminal\" menu item.\n\n    ![Dashboard open CLI into ubuntu container](dashboard-open-cli-ubuntu.png){: style=\"width:75%;\" }\n    {: .text-center }\n\n    You will see a terminal that is running a shell in the ubuntu container. Run the following command to see the content of the `\/data.txt` file. Close this terminal again when you're done.\n\n    ```bash\n    cat \/data.txt\n    ```\n\n    If you prefer the command line, you can use the `docker exec` command to do the same. You need the\n   container's ID (use `docker ps` to find it) and can then read the content with the following command.\n\n    ```bash\n    docker exec <container-id> cat \/data.txt\n    ```\n\n    You should see a random number!\n\n1. Now, let's start another `ubuntu` container (the same image) and we'll see we don't have the same\n   file.\n\n    ```bash\n    docker run -it ubuntu ls \/\n    ```\n\n    And look! There's no `data.txt` file there! That's because it was written to the scratch space for\n    only the first container.\n\n1. Go ahead and remove the first container using the `docker rm -f <container-id>` command.\n\n    ```bash\n    docker rm -f <container-id>\n    ```\n\n## Container Volumes\n\nWith the previous experiment, we saw that each container starts from the image definition each time it starts. \nWhile containers can create, update, and delete files, those changes are lost when the container is removed \nand all changes are isolated to that container. With volumes, we can change all of this.\n\n[Volumes](https:\/\/docs.docker.com\/storage\/volumes\/) provide the ability to connect specific filesystem paths of \nthe container back to the host machine. If a directory in the container is mounted, changes in that\ndirectory are also seen on the host machine.
If we mount that same directory across container restarts, we'd see\nthe same files.\n\nThere are two main types of volumes. We will eventually use both, but we will start with **named volumes**.\n\n## Persisting our Todo Data\n\nBy default, the todo app stores its data in a [SQLite Database](https:\/\/www.sqlite.org\/index.html) at\n`\/etc\/todos\/todo.db`. If you're not familiar with SQLite, no worries! It's simply a relational database in \nwhich all of the data is stored in a single file. While this isn't the best for large-scale applications,\nit works for small demos. We'll talk about switching this to a different database engine later.\n\nWith the database being a single file, if we can persist that file on the host and make it available to the\nnext container, it should be able to pick up where the last one left off. By creating a volume and attaching\n(often called \"mounting\") it to the directory the data is stored in, we can persist the data. As our container \nwrites to the `todo.db` file, it will be persisted to the host in the volume.\n\nAs mentioned, we are going to use a **named volume**. Think of a named volume as simply a bucket of data. \nDocker maintains the physical location on the disk and you only need to remember the name of the volume. \nEvery time you use the volume, Docker will make sure the correct data is provided.\n\n1. Create a volume by using the `docker volume create` command.\n\n    ```bash\n    docker volume create todo-db\n    ```\n\n1. Stop the todo app container once again in the Dashboard (or with `docker rm -f <container-id>`), as it is still running without using the persistent volume.\n\n1. Start the todo app container, but add the `-v` flag to specify a volume mount. We will use the named volume and mount\n   it to `\/etc\/todos`, which will capture all files created at the path.\n\n    ```bash\n    docker run -dp 3000:3000 -v todo-db:\/etc\/todos getting-started\n    ```\n\n1. 
Once the container starts up, open the app and add a few items to your todo list.\n\n    ![Items added to todo list](items-added.png){: style=\"width: 55%; \" }\n    {: .text-center }\n\n1. Remove the container for the todo app. Use the Dashboard or `docker ps` to get the ID and then `docker rm -f <container-id>` to remove it.\n\n1. Start a new container using the same command from above.\n\n1. Open the app. You should see your items still in your list!\n\n1. Go ahead and remove the container when you're done checking out your list.\n\nHooray! You've now learned how to persist data!\n\n!!! info \"Pro-tip\"\n    While named volumes and bind mounts (which we'll talk about in a minute) are the two main types of volumes supported\n    by a default Docker engine installation, there are many volume driver plugins available to support NFS, SFTP, NetApp, \n    and more! This will be especially important once you start running containers on multiple hosts in a clustered\n    environment with Swarm, Kubernetes, etc.\n\n## Diving into our Volume\n\nA lot of people frequently ask \"Where is Docker _actually_ storing my data when I use a named volume?\" If you want to know, \nyou can use the `docker volume inspect` command.\n\n```bash\ndocker volume inspect todo-db\n[\n    {\n        \"CreatedAt\": \"2019-09-26T02:18:36Z\",\n        \"Driver\": \"local\",\n        \"Labels\": {},\n        \"Mountpoint\": \"\/var\/lib\/docker\/volumes\/todo-db\/_data\",\n        \"Name\": \"todo-db\",\n        \"Options\": {},\n        \"Scope\": \"local\"\n    }\n]\n```\n\nThe `Mountpoint` is the actual location on the disk where the data is stored. Note that on most machines, you will\nneed to have root access to access this directory from the host. But, that's where it is!\n\n!!! 
info \"Accessing Volume data directly on Docker Desktop\"\n    While running in Docker Desktop, the Docker commands are actually running inside a small VM on your machine.\n    If you wanted to look at the actual contents of the Mountpoint directory, you would need to first get inside\n    of the VM.\n\n## Recap\n\nAt this point, we have a functioning application that can survive restarts! We can show it off to our investors and\nhope they can catch our vision!\n\nHowever, we saw earlier that rebuilding images for every change takes quite a bit of time. There's got to be a better\nway to make changes, right? With bind mounts (which we hinted at earlier), there is a better way! Let's take a look at that now!","site":"docker"}
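Besides supplying a mount with `-v` at run time, an image can declare a mount point itself with the Dockerfile `VOLUME` instruction: if a container is started without an explicit mount for that path, Docker creates an anonymous volume there automatically. A minimal sketch for the todo app follows (illustrative only; the tutorial itself keeps the mount on the `docker run` command line):

```dockerfile
# Illustrative sketch -- same image as the tutorial's, plus a VOLUME declaration.
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
# Declare /etc/todos as a volume mount point. Data written there goes into a
# volume: the named one if you pass -v todo-db:/etc/todos, otherwise an
# anonymous volume that Docker creates for you at container start.
VOLUME /etc/todos
CMD ["node", "./src/index.js"]
```

Anonymous volumes persist a single container's data but are awkward to share between containers, which is why the named volume used above remains the better fit here.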
{"questions":"docker For the rest of this tutorial we will be working with a simple todo building an app to prove out your MVP minimum viable product You want to show how it works and what it s capable of doing without needing to list manager that is running in Node js If you re not familiar with Node js At this point your development team is quite small and you re simply think about how it will work for a large team multiple developers etc don t worry No real JavaScript experience is needed","answers":"\nFor the rest of this tutorial, we will be working with a simple todo\nlist manager that is running in Node.js. If you're not familiar with Node.js,\ndon't worry! No real JavaScript experience is needed!\n\nAt this point, your development team is quite small and you're simply\nbuilding an app to prove out your MVP (minimum viable product). You want\nto show how it works and what it's capable of doing without needing to\nthink about how it will work for a large team, multiple developers, etc.\n\n![Todo List Manager Screenshot](todo-list-sample.png){: style=\"width:50%;\" }\n{: .text-center }\n\n## Getting our App\n\nBefore we can run the application, we need to get the application source code onto \nour machine. For real projects, you will typically clone the repo. But, for this tutorial,\nwe have created a ZIP file containing the application.\n\n1. [Download the ZIP](\/assets\/app.zip). Open the ZIP file and make sure you extract the\n    contents.\n\n1. Once extracted, use your favorite code editor to open the project. If you're in need of\n    an editor, you can use [Visual Studio Code](https:\/\/code.visualstudio.com\/). You should\n    see the `package.json` and two subdirectories (`src` and `spec`).\n\n    ![Screenshot of Visual Studio Code opened with the app loaded](ide-screenshot.png){: style=\"width:650px;margin-top:20px;\"}\n    {: .text-center }\n\n## Building the App's Container Image\n\nIn order to build the application, we need to use a `Dockerfile`.
A\nDockerfile is simply a text-based script of instructions that is used to\ncreate a container image. If you've created Dockerfiles before, you might\nsee a few flaws in the Dockerfile below. But, don't worry! We'll go over them.\n\n1. Create a file named `Dockerfile` in the same folder as the file `package.json` with the following contents.\n\n    ```dockerfile\n    FROM node:18-alpine\n    WORKDIR \/app\n    COPY . .\n    RUN yarn install --production\n    CMD [\"node\", \".\/src\/index.js\"]\n    ```\n\n    Please check that the file `Dockerfile` has no file extension such as `.txt`. Some editors may append this extension automatically, which would result in an error in the next step.\n\n1. If you haven't already done so, open a terminal and go to the `app` directory with the `Dockerfile`. Now build the container image using the `docker build` command.\n\n    ```bash\n    docker build -t getting-started .\n    ```\n\n    This command used the Dockerfile to build a new container image. You might\n    have noticed that a lot of \"layers\" were downloaded. This is because we instructed\n    the builder that we wanted to start from the `node:18-alpine` image. But, since we\n    didn't have that on our machine, that image needed to be downloaded.\n\n    After the image was downloaded, we copied in our application and used `yarn` to \n    install our application's dependencies. The `CMD` directive specifies the default \n    command to run when starting a container from this image.\n\n    Finally, the `-t` flag tags our image. Think of this simply as a human-readable name\n    for the final image. Since we named the image `getting-started`, we can refer to that\n    image when we run a container.\n\n    The `.` at the end of the `docker build` command tells Docker to look for the `Dockerfile` in the current directory.\n\n## Starting an App Container\n\nNow that we have an image, let's run the application!
To do so, we will use the `docker run`\ncommand (remember that from earlier?).\n\n1. Start your container using the `docker run` command and specify the name of the image we \n    just created:\n\n    ```bash\n    docker run -dp 3000:3000 getting-started\n    ```\n\n    Remember the `-d` and `-p` flags? We're running the new container in \"detached\" mode (in the \n    background) and creating a mapping from the host's port 3000 to the container's port 3000.\n    Without the port mapping, we wouldn't be able to access the application.\n\n1. After a few seconds, open your web browser to [http:\/\/localhost:3000](http:\/\/localhost:3000).\n    You should see our app!\n\n    ![Empty Todo List](todo-list-empty.png){: style=\"width:450px;margin-top:20px;\"}\n    {: .text-center }\n\n1. Go ahead and add an item or two and see that it works as you expect. You can mark items as\n   complete and remove items. Your frontend is successfully storing items in the backend!\n   Pretty quick and easy, huh?\n\n\nAt this point, you should have a running todo list manager with a few items, all built by you!\nNow, let's make a few changes and learn about managing our containers.\n\nIf you take a quick look at the Docker Dashboard, you should see your two containers running now \n(this tutorial and your freshly launched app container)!\n\n![Docker Dashboard with tutorial and app containers running](dashboard-two-containers.png)\n\n\n## Recap\n\nIn this short section, we learned the very basics about building a container image and created a\nDockerfile to do so. Once we built an image, we started the container and saw the running app!\n\nNext, we're going to make a modification to our app and learn how to update our running application\nwith a new image.
Along the way, we'll learn a few other useful commands.","site":"docker"}
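One of the flaws hinted at in the Dockerfile above: `COPY . .` sends everything in the project folder to the builder, including a local `node_modules` directory if you ever ran `yarn install` on the host. A `.dockerignore` file next to the `Dockerfile` keeps such paths out of the build context; the entries below are a typical sketch for a Node project, not something this tutorial mandates.

```plaintext
# .dockerignore -- placed in the same folder as the Dockerfile
node_modules
npm-debug.log
```

A smaller build context also makes the `transferring context` step of `docker build` noticeably faster.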
{"questions":"docker for the database in production You don t want to ship your database engine with your app then There s a good chance you d have to scale APIs and front ends differently than databases While you may use a container for the database locally you may want to use a managed service Up to this point we have been working with single container apps But we now want to add MySQL to the application stack The following question often arises Where will MySQL run Install it in the same Separate containers let you version and update versions in isolation reasons container or run it separately In general each container should do one thing and do it well A few","answers":"\nUp to this point, we have been working with single container apps. But, we now want to add MySQL to the\napplication stack. The following question often arises - \"Where will MySQL run? Install it in the same\ncontainer or run it separately?\" In general, **each container should do one thing and do it well.** A few\nreasons:\n\n- There's a good chance you'd have to scale APIs and front-ends differently than databases.\n- Separate containers let you version and update versions in isolation.\n- While you may use a container for the database locally, you may want to use a managed service\n  for the database in production. You don't want to ship your database engine with your app then.\n- Running multiple processes will require a process manager (the container only starts one process),\n  which adds complexity to container startup\/shutdown.\n\nAnd there are more reasons. So, we will update our application to work like this:\n\n![Todo App connected to MySQL container](multi-app-architecture.png)\n{: .text-center }\n\n\n## Container Networking\n\nRemember that containers, by default, run in isolation and don't know anything about other processes\nor containers on the same machine. So, how do we allow one container to talk to another? The answer is\n**networking**. 
Now, you don't have to be a network engineer (hooray!). Simply remember this rule...

> If two containers are on the same network, they can talk to each other. If they aren't, they can't.


## Starting MySQL

There are two ways to put a container on a network: 1) Assign it at start or 2) connect an existing container. For now, we will create the network first and attach the MySQL container at startup.

1. Create the network.

    ```bash
    docker network create todo-app
    ```

1. Start a MySQL container and attach it to the network. We're also going to define a few environment variables that the database will use to initialize the database (see the "Environment Variables" section in the [MySQL Docker Hub listing](https://hub.docker.com/_/mysql/)).

    ```bash
    docker run -d \
        --network todo-app --network-alias mysql \
        -v todo-mysql-data:/var/lib/mysql \
        -e MYSQL_ROOT_PASSWORD=secret \
        -e MYSQL_DATABASE=todos \
        mysql:8.0
    ```

    If you are using PowerShell then use this command.

    ```powershell
    docker run -d `
        --network todo-app --network-alias mysql `
        -v todo-mysql-data:/var/lib/mysql `
        -e MYSQL_ROOT_PASSWORD=secret `
        -e MYSQL_DATABASE=todos `
        mysql:8.0
    ```

    You'll also see we specified the `--network-alias` flag. We'll come back to that in just a moment.

    !!! info "Pro-tip"
        You'll notice we're using a volume named `todo-mysql-data` here and mounting it at `/var/lib/mysql`, which is where MySQL stores its data. However, we never ran a `docker volume create` command. Docker recognizes we want to use a named volume and creates one automatically for us.

1. 
To confirm we have the database up and running, connect to the database and verify that it connects.

    ```bash
    docker exec -it <mysql-container-id> mysql -p
    ```

    When the password prompt comes up, type in **secret**. In the MySQL shell, list the databases and verify you see the `todos` database.

    ```cli
    mysql> SHOW DATABASES;
    ```

    You should see output that looks like this:

    ```plaintext
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | sys                |
    | todos              |
    +--------------------+
    5 rows in set (0.00 sec)
    ```

    Hooray! We have our `todos` database and it's ready for us to use!

    To exit the SQL terminal, type `exit` in the terminal.


## Connecting to MySQL

Now that we know MySQL is up and running, let's use it! But, the question is... how? If we run another container on the same network, how do we find the container (remember each container has its own IP address)?

To figure it out, we're going to make use of the [nicolaka/netshoot](https://github.com/nicolaka/netshoot) container, which ships with a _lot_ of tools that are useful for troubleshooting or debugging networking issues.

1. Start a new container using the nicolaka/netshoot image. Make sure to connect it to the same network.

    ```bash
    docker run -it --network todo-app nicolaka/netshoot
    ```

1. Inside the container, we're going to use the `dig` command, which is a useful DNS tool. 
We're going to look up the IP address for the hostname `mysql`.

    ```bash
    dig mysql
    ```

    And you'll get an output like this...

    ```text
    ; <<>> DiG 9.18.8 <<>> mysql
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32162
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

    ;; QUESTION SECTION:
    ;mysql.				IN	A

    ;; ANSWER SECTION:
    mysql.			600	IN	A	172.23.0.2

    ;; Query time: 0 msec
    ;; SERVER: 127.0.0.11#53(127.0.0.11)
    ;; WHEN: Tue Oct 01 23:47:24 UTC 2019
    ;; MSG SIZE  rcvd: 44
    ```

    In the "ANSWER SECTION", you will see an `A` record for `mysql` that resolves to `172.23.0.2` (your IP address will most likely have a different value). While `mysql` isn't normally a valid hostname, Docker was able to resolve it to the IP address of the container that had that network alias (remember the `--network-alias` flag we used earlier?).

    What this means is... our app simply needs to connect to a host named `mysql` and it'll talk to the database! It doesn't get much simpler than that!

    When you're done, run `exit` to close out of the container.


## Running our App with MySQL

The todo app supports the setting of a few environment variables to specify MySQL connection settings. They are:

- `MYSQL_HOST` - the hostname for the running MySQL server
- `MYSQL_USER` - the username to use for the connection
- `MYSQL_PASSWORD` - the password to use for the connection
- `MYSQL_DB` - the database to use once connected

!!! warning "Setting Connection Settings via Env Vars"
    While using env vars to set connection settings is generally OK for development, it is **HIGHLY DISCOURAGED** when running applications in production. 
Diogo Monica, a former lead of security at Docker, [wrote a fantastic blog post](https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/) explaining why.

    A more secure mechanism is to use the secret support provided by your container orchestration framework. In most cases, these secrets are mounted as files in the running container. You'll see many apps (including the MySQL image and the todo app) also support env vars with a `_FILE` suffix to point to a file containing the variable.

    As an example, setting the `MYSQL_PASSWORD_FILE` var will cause the app to use the contents of the referenced file as the connection password. Docker doesn't do anything to support these env vars. Your app will need to know to look for the variable and get the file contents.


With all of that explained, let's start our dev-ready container!

1. We'll specify each of the environment variables above, as well as connect the container to our app network.

    ```bash hl_lines="3 4 5 6 7"
    docker run -dp 3000:3000 \
      -w /app -v "$(pwd):/app" \
      --network todo-app \
      -e MYSQL_HOST=mysql \
      -e MYSQL_USER=root \
      -e MYSQL_PASSWORD=secret \
      -e MYSQL_DB=todos \
      node:18-alpine \
      sh -c "yarn install && yarn run dev"
    ```

    If you are using PowerShell then use this command.

    ```powershell hl_lines="3 4 5 6 7"
    docker run -dp 3000:3000 `
      -w /app -v "$(pwd):/app" `
      --network todo-app `
      -e MYSQL_HOST=mysql `
      -e MYSQL_USER=root `
      -e MYSQL_PASSWORD=secret `
      -e MYSQL_DB=todos `
      node:18-alpine `
      sh -c "yarn install && yarn run dev"
    ```

1. 
If we look at the logs for the container (`docker logs <container-id>`), we should see a message indicating it's using the mysql database.

    ```plaintext hl_lines="7"
    # Previous log messages omitted
    $ nodemon src/index.js
    [nodemon] 2.0.20
    [nodemon] to restart at any time, enter `rs`
    [nodemon] watching path(s): *.*
    [nodemon] watching extensions: js,mjs,json
    [nodemon] starting `node src/index.js`
    Connected to mysql db at host mysql
    Listening on port 3000
    ```

1. Open the app in your browser and add a few items to your todo list.

1. Connect to the mysql database and prove that the items are being written to the database. Remember, the password is **secret**.

    ```bash
    docker exec -it <mysql-container-id> mysql -p todos
    ```

    And in the mysql shell, run the following:

    ```plaintext
    mysql> select * from todo_items;
    +--------------------------------------+--------------------+-----------+
    | id                                   | name               | completed |
    +--------------------------------------+--------------------+-----------+
    | c906ff08-60e6-44e6-8f49-ed56a0853e85 | Do amazing things! |         0 |
    | 2912a79e-8486-4bc3-a4c5-460793a575ab | Be awesome!        |         0 |
    +--------------------------------------+--------------------+-----------+
    ```

    Obviously, your table will look different because it has your items. But, you should see them stored there!

If you take a quick look at the Docker Dashboard, you'll see that we have two app containers running. But, there's no real indication that they are grouped together in a single app. We'll see how to make that better shortly!

![Docker Dashboard showing two ungrouped app containers](dashboard-multi-container-app.png)

## Recap

At this point, we have an application that now stores its data in an external database running in a separate container. 
We learned a little bit about container networking and saw how service discovery can be performed using DNS.

But, there's a good chance you are starting to feel a little overwhelmed with everything you need to do to start up this application. We have to create a network, start containers, specify all of the environment variables, expose ports, and more! That's a lot to remember and it's certainly making things harder to pass along to someone else.

In the next section, we'll talk about Docker Compose. With Docker Compose, we can share our application stacks in a much easier way and let others spin them up with a single (and simple) command!
---
search:
  exclude: true
---


# Karpenter Best Practices

## Karpenter

Karpenter is an open-source cluster autoscaler that automatically provisions new nodes in response to unschedulable pods. Karpenter evaluates the aggregate resource requirements of pending pods and chooses the optimal instance type to run them. It automatically scales in or terminates instances that don't have non-daemonset pods, to reduce waste. It also supports a consolidation feature that actively moves pods and deletes or replaces nodes with cheaper instance types to reduce cluster cost.

**Reasons to use Karpenter**

Before the launch of Karpenter, Kubernetes users relied primarily on [Amazon EC2 Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html) and the [Kubernetes Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) (CA) to dynamically adjust the compute capacity of their clusters. 
With Karpenter, you don't need to create dozens of node groups to achieve flexibility and diversity. Moreover, because Karpenter is not tightly coupled to Kubernetes versions (as CA is), it doesn't require you to jump back and forth between the AWS and Kubernetes APIs.

Karpenter consolidates instance orchestration responsibilities within a single system, which is simpler, more stable, and more cluster-aware. Karpenter was designed to overcome some of the challenges presented by Cluster Autoscaler by providing simplified ways to:

* Provision nodes based on workload requirements.
* Create diverse node configurations by instance type, using flexible workload provisioner options. 
Instead of managing many specific custom node groups, Karpenter lets you manage diverse workload capacity with a single, flexible provisioner.
* Improve pod scheduling at scale by quickly launching nodes and scheduling pods.

For information and documentation on using Karpenter, visit the [karpenter.sh](https://karpenter.sh/) site.

## Recommendations

Best practices are divided into sections on Karpenter itself, provisioners, and pod scheduling.

## Karpenter best practices

The following best practices cover topics related to Karpenter itself.

### Use Karpenter for workloads with changing capacity needs

Karpenter brings scaling management closer to Kubernetes-native APIs than [Auto Scaling Groups](https://aws.amazon.com/blogs/containers/amazon-eks-cluster-multi-zone-auto-scaling-groups/) (ASGs) and [Managed Node Groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) (MNGs) do. 
ASGs and MNGs are AWS-native abstractions where scaling is triggered based on AWS-level metrics, such as EC2 CPU load. [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html#cluster-autoscaler) bridges the Kubernetes abstractions into the AWS abstractions, but loses some flexibility as a result, such as scheduling for a specific availability zone.

Karpenter removes a layer of AWS abstraction to bring some of that flexibility directly into Kubernetes. Karpenter is best used for clusters with workloads that face periods of high, spiky demand or that have diverse compute requirements. MNGs and ASGs are a good fit for clusters running workloads that tend to be more static and consistent. You can use a mix of dynamically and statically managed nodes, depending on your requirements.

### Consider other autoscaling projects when...

You need features that are still being developed in Karpenter. 
Because Karpenter is a relatively new project, consider other autoscaling projects for the time being if you need features that are not yet part of it.

### Run the Karpenter controller on EKS Fargate or on a worker node that belongs to a node group

Karpenter is installed using a [Helm chart](https://karpenter.sh/docs/getting-started/). The Helm chart installs the Karpenter controller and a webhook pod as a Deployment that needs to run before the controller can be used to scale your cluster. We recommend a minimum of one small node group with at least one worker node. As an alternative, you can run these pods on EKS Fargate by creating a Fargate profile for the `karpenter` namespace. Doing so will cause all pods deployed into this namespace to run on EKS Fargate. 
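As a concrete illustration of the Fargate alternative above, a hedged `eksctl` ClusterConfig fragment might look like this (the cluster name and region are placeholders, not values from this guide):

```yaml
# Illustrative eksctl ClusterConfig sketch: a Fargate profile covering the
# `karpenter` namespace so the controller's own pods run on EKS Fargate.
# Cluster name and region below are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
fargateProfiles:
  - name: karpenter
    selectors:
      - namespace: karpenter
```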
Do not run Karpenter on a node that is managed by Karpenter itself.

### Do not use custom launch templates with Karpenter

Karpenter strongly recommends against using custom launch templates. Using custom launch templates prevents multi-architecture support, the ability to automatically upgrade nodes, and security group discovery. Using launch templates may also cause confusion because certain fields are duplicated within Karpenter's provisioners while others are ignored by Karpenter, such as subnets and instance types.

You can often avoid using launch templates by using custom user data (EC2 User Data) and/or directly specifying custom AMIs in the AWS node template. 
More information on how to do this is available at [Node Templates](https://karpenter.sh/docs/concepts/node-templates).


### Exclude instance types that do not fit your workload

Consider excluding specific instance types with the [node.kubernetes.io/instance-type](http://node.kubernetes.io/instance-type) key if they are not required by workloads running in your cluster.

The following example shows how to avoid provisioning large Graviton instances.

```yaml
- key: node.kubernetes.io/instance-type
  operator: NotIn
  values:
  - m6g.16xlarge
  - m6gd.16xlarge
  - r6g.16xlarge
  - r6gd.16xlarge
  - c6g.16xlarge
```

### Enable Interruption Handling when using Spot

Karpenter supports [native interruption handling](https://karpenter.sh/docs/concepts/deprovisioning/#interruption), enabled through the `aws.interruptionQueue` value in its [settings](https://karpenter.sh/docs/concepts/settings/#configmap). 
Interruption handling watches for upcoming involuntary interruption events that could disrupt your workloads, such as:

* Spot interruption warnings
* Scheduled change health events (maintenance events)
* Instance terminating events
* Instance stopping events

When Karpenter detects that one of these events will occur for a node, it automatically cordons, drains, and terminates the node ahead of the interruption event, giving workloads the maximum amount of time to clean up before the interruption. 
As explained in [this FAQ entry](https://karpenter.sh/docs/faq/#interruption-handling), using the AWS Node Termination Handler alongside Karpenter is not recommended.

Pods that require checkpointing or other forms of graceful draining, and that need the full 2 minutes before shutdown, should run in clusters where Karpenter interruption handling is enabled.

### **Amazon EKS private clusters without outbound internet access**

When provisioning an EKS cluster into a VPC with no route to the internet, you must make sure you have configured your environment according to the private cluster [requirements](https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html#private-cluster-requirements) described in the EKS documentation. You also need to make sure you have created an STS VPC regional endpoint in your VPC. 
If you have not, you will see an error similar to the one below.

```console
ERROR controller.controller.metrics Reconciler error {"commit": "5047f3c", "reconciler group": "karpenter.sh", "reconciler kind": "Provisioner", "name": "default", "namespace": "", "error": "fetching instance types using ec2.DescribeInstanceTypes, WebIdentityErr: failed to retrieve credentials\ncaused by: RequestError: send request failed\ncaused by: Post \"https://sts.<region>.amazonaws.com/\": dial tcp x.x.x.x:443: i/o timeout"}
```

These changes are needed in a private cluster because the Karpenter controller uses IAM Roles for Service Accounts (IRSA). Pods configured with IRSA acquire credentials by calling the AWS Security Token Service (AWS STS) API. If there is no outbound internet access, you must create and use an ***AWS STS VPC endpoint in your VPC***.

Private clusters also require you to create a ***VPC endpoint for SSM***. When Karpenter tries to provision a new node, it queries the launch template configurations and an SSM parameter. 
If there is no SSM VPC endpoint in your VPC, you will see an error like the following:

```console
INFO    controller.provisioning Waiting for unschedulable pods  {"commit": "5047f3c", "provisioner": "default"}
INFO    controller.provisioning Batched 3 pods in 1.000572709s  {"commit": "5047f3c", "provisioner": "default"}
INFO    controller.provisioning Computed packing of 1 node(s) for 3 pod(s) with instance type option(s) [c4.xlarge c6i.xlarge c5.xlarge c5d.xlarge c5a.xlarge c5n.xlarge m6i.xlarge m4.xlarge m6a.xlarge m5ad.xlarge m5d.xlarge t3.xlarge m5a.xlarge t3a.xlarge m5.xlarge r4.xlarge r3.xlarge r5ad.xlarge r6i.xlarge r5a.xlarge]        {"commit": "5047f3c", "provisioner": "default"}
ERROR   controller.provisioning Could not launch node, launching instances, getting launch template configs, getting launch templates, getting ssm parameter, RequestError: send request failed
caused by: Post "https://ssm.<region>.amazonaws.com/": dial tcp x.x.x.x:443: i/o timeout  {"commit": "5047f3c", "provisioner": "default"}
```

There is no ***VPC endpoint for the [Price List Query API](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/using-pelong.html)***.
As a result, the pricing data will go stale over time. 
Karpenter works around this by including on-demand pricing data in its binary, but only updates that data when Karpenter is upgraded.
Failed requests for pricing data will produce the following error message:

```console
ERROR   controller.aws.pricing  updating on-demand pricing, RequestError: send request failed
caused by: Post "https://api.pricing.us-east-1.amazonaws.com/": dial tcp 52.94.231.236:443: i/o timeout; RequestError: send request failed
caused by: Post "https://api.pricing.us-east-1.amazonaws.com/": dial tcp 52.94.231.236:443: i/o timeout, using existing pricing data from 2022-08-17T00:19:52Z  {"commit": "4b5f953"}
```

In summary, to use Karpenter in a completely private EKS cluster, you need to create the following VPC endpoints:

```console
com.amazonaws.<region>.ec2
com.amazonaws.<region>.ecr.api
com.amazonaws.<region>.ecr.dkr
com.amazonaws.<region>.s3 – For pulling container images
com.amazonaws.<region>.sts – For IAM roles for service accounts
com.amazonaws.<region>.ssm - If using Karpenter
```

!!! note
    The Karpenter (controller and webhook deployment) container images must be in, or copied to, Amazon ECR private or another private registry accessible from inside the VPC. The reason is that the Karpenter controller and webhook pods currently use public ECR images. If those images are not available from within the VPC, or from a network peered with the VPC, you will get image pull errors when Kubernetes tries to pull the images from ECR Public.

See [issue 988](https://github.com/aws/karpenter/issues/988) and [issue 1157](https://github.com/aws/karpenter/issues/1157) for more details.

## Creating provisioners

The following best practices cover topics related to creating provisioners.

### You can create multiple provisioners in situations such as the following

Create multiple provisioners when different teams share a cluster and need to run their workloads on different worker nodes, or when they have different OS or instance type requirements. For example, one team may want to use Bottlerocket while another wants to use Amazon Linux. Likewise, one team might have access to expensive GPU hardware that another team does not need. Using multiple provisioners lets each team use the assets that suit it best.

### Create provisioners that are mutually exclusive or weighted

It is recommended to create provisioners that are either mutually exclusive or weighted to provide consistent scheduling behavior. If they are not, and multiple provisioners match a pod, Karpenter will randomly choose which provisioner to use, causing unexpected results. 
A useful example of creating multiple provisioners follows.

A provisioner that provisions GPU instances and taints them, allowing only special workloads to run on these (expensive) nodes:

```yaml
# Provisioner for GPU Instances with Taints
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: gpu
spec:
  requirements:
  - key: node.kubernetes.io/instance-type
    operator: In
    values:
    - p3.8xlarge
    - p3.16xlarge
  taints:
  - effect: NoSchedule
    key: nvidia.com/gpu
    value: "true"
  ttlSecondsAfterEmpty: 60
```

A deployment with a toleration for the taint:

```yaml
# Deployment of GPU Workload will have tolerations defined
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate-gpu
spec:
  ...
    spec:
      tolerations:
      - key: "nvidia.com/gpu"
        operator: "Exists"
        effect: "NoSchedule"
```

For a general deployment for another team, the provisioner spec could include nodeAffinity. 
A deployment could then use node selector terms to match `billing-team`:

```yaml
# Provisioner for regular EC2 instances
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: generalcompute
spec:
  labels:
    billing-team: my-team
  requirements:
  - key: node.kubernetes.io/instance-type
    operator: In
    values:
    - m5.large
    - m5.xlarge
    - m5.2xlarge
    - c5.large
    - c5.xlarge
    - c5a.large
    - c5a.xlarge
    - r5.large
    - r5.xlarge
```

A deployment using node affinity:

```yaml
# Deployment will have spec.affinity.nodeAffinity defined
kind: Deployment
metadata:
  name: workload-my-team
spec:
  replicas: 200
  ...
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                - key: "billing-team"
                  operator: "In"
                  values: ["my-team"]
```

### Use timers (TTL) to automatically delete nodes from the cluster

You can use timers on provisioned nodes to set when to delete nodes that are devoid of workload pods or have reached an expiration time. Node expiry can be used as a means of upgrading, so that nodes are retired and replaced with updated versions. 
**`ttlSecondsUntilExpired`** \ubc0f **`ttlSecondsAfterEmpty`**\ub97c \uc0ac\uc6a9\ud558\uc5ec \ub178\ub4dc\ub97c \ud504\ub85c\ube44\uc800\ub2dd \ud574\uc81c\ud558\ub294 \ubc29\ubc95\uc5d0 \ub300\ud55c \uc790\uc138\ud55c \ub0b4\uc6a9\uc740 Karpenter \uc124\uba85\uc11c\uc758 [Karpenter \ub178\ub4dc \ub514\ud504\ub85c\ube44\uc800\ub2dd \ubc29\ubc95](https:\/\/karpenter.sh\/docs\/concepts\/deprovisioning)\uc744 \ucc38\uc870\ud558\uc2ed\uc2dc\uc624.\n\n### \ud2b9\ud788 \uc2a4\ud31f\uc744 \uc0ac\uc6a9\ud560 \ub54c\ub294 Karpenter\uac00 \ud504\ub85c\ube44\uc800\ub2dd\ud560 \uc218 \uc788\ub294 \uc778\uc2a4\ud134\uc2a4 \uc720\ud615\uc744 \uc9c0\ub098\uce58\uac8c \uc81c\ud55c\ud558\uc9c0 \ub9c8\uc2ed\uc2dc\uc624.\n\n\uc2a4\ud31f\uc744 \uc0ac\uc6a9\ud560 \ub54c Karpenter\ub294 [\uac00\uaca9 \ubc0f \uc6a9\ub7c9 \ucd5c\uc801\ud654](https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/ec2-fleet-allocation-strategy.html) \ud560\ub2f9 \uc804\ub7b5\uc744 \uc0ac\uc6a9\ud558\uc5ec EC2 \uc778\uc2a4\ud134\uc2a4\ub97c \ud504\ub85c\ube44\uc800\ub2dd\ud569\ub2c8\ub2e4. \uc774 \uc804\ub7b5\uc740 EC2\uac00 \uc2dc\uc791 \uc911\uc778 \uc778\uc2a4\ud134\uc2a4 \uc218\ub9cc\ud07c \uac00\uc7a5 \uae4a\uc740 \ud480\uc758 \uc778\uc2a4\ud134\uc2a4\ub97c \ud504\ub85c\ube44\uc800\ub2dd\ud558\uace0 \uc911\ub2e8 \uc704\ud5d8\uc774 \uac00\uc7a5 \uc801\uc740 \uc778\uc2a4\ud134\uc2a4 \uc218\uc5d0 \ub9de\uac8c \uc778\uc2a4\ud134\uc2a4\ub97c \ud504\ub85c\ube44\uc800\ub2dd\ud558\ub3c4\ub85d \uc9c0\uc2dc\ud569\ub2c8\ub2e4. \uadf8\ub7f0 \ub2e4\uc74c EC2 \ud50c\ub9bf\uc740 \uc774\ub7f0 \ud480 \uc911 \uac00\uc7a5 \uc800\ub834\ud55c \uac00\uaca9\uc758 \uc2a4\ud31f \uc778\uc2a4\ud134\uc2a4\ub97c \uc694\uccad\ud569\ub2c8\ub2e4. Karpenter\uc5d0 \uc0ac\uc6a9\ud560 \uc218 \uc788\ub294 \uc778\uc2a4\ud134\uc2a4 \uc720\ud615\uc774 \ub9ce\uc744\uc218\ub85d EC2\ub294 \uc2a4\ud31f \uc778\uc2a4\ud134\uc2a4\uc758 \ub7f0\ud0c0\uc784\uc744 \ub354 \uc798 \ucd5c\uc801\ud654\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. 
By default, Karpenter uses all of the instance types that EC2 offers in the region and availability zones your cluster is deployed in. Karpenter intelligently chooses from the set of all instance types based on pending pods, so that pods are scheduled onto instances that are appropriately sized and equipped. For example, if your pods do not require a GPU, Karpenter will not schedule them onto EC2 instance types that support GPUs. If you are unsure which instance types to use, you can run the Amazon [ec2-instance-selector](https://github.com/aws/amazon-ec2-instance-selector) to generate a list of instance types that match your compute requirements. 
For example, the CLI takes memory, vCPUs, architecture, and region as input parameters and provides you with a list of EC2 instances that satisfy those constraints.

```console
$ ec2-instance-selector --memory 4 --vcpus 2 --cpu-architecture x86_64 -r ap-southeast-1
c5.large
c5a.large
c5ad.large
c5d.large
c6i.large
t2.medium
t3.medium
t3a.medium
```

You should not place too many constraints on Karpenter when using Spot instances, because doing so can affect the availability of your applications. Say, for example, that all of the instances of a particular type are reclaimed and there are no suitable alternatives to replace them with. Your pods will remain in a pending state until the Spot capacity for the configured instance types is replenished. You can reduce the risk of insufficient-capacity errors by spreading your instances across multiple availability zones, because Spot pools differ per AZ. 
That said, the general best practice is to allow Karpenter to use a diverse set of instance types when using Spot.

## Scheduling pods

The following best practices relate to deploying pods in a cluster that uses Karpenter for node provisioning.

### Follow EKS best practices for high availability

If you need to run highly available applications, follow the general EKS best practice [recommendations](https://aws.github.io/aws-eks-best-practices/reliability/docs/application/#recommendations). See [Topology spread](https://karpenter.sh/docs/concepts/scheduling/#topology-spread) in the Karpenter documentation for details on how to spread pods across nodes and zones. 
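As a sketch of the topology-spread recommendation, a deployment like the following (the name and image are illustrative) asks the scheduler, and therefore Karpenter, to spread replicas evenly across availability zones:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                       # illustrative
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                # at most 1 replica difference between zones
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: web
      containers:
      - name: web
        image: public.ecr.aws/nginx/nginx:latest   # illustrative image
```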
Use [Disruption Budgets](https://karpenter.sh/docs/troubleshooting/#disruption-budgets) to set the minimum number of available pods that must be maintained in case there are attempts to evict or delete pods.

### Use layered constraints to constrain the compute features available from your cloud provider

Karpenter's model of layered constraints allows you to create a complex set of provisioner and pod deployment constraints to get the best possible matches for pod scheduling. Examples of constraints that a pod spec can request include the following:

* Needing to run in availability zones where only particular applications are available. Say, for example, you have a pod that has to communicate with another application that runs on an EC2 instance residing in a particular availability zone. If your aim is to reduce cross-AZ traffic in your VPC, you may want to co-locate the pods in the AZ where the EC2 instance is located. 
This kind of targeting is usually accomplished using node selectors. See the Kubernetes documentation for additional information on [node selectors](https://karpenter.sh/docs/concepts/scheduling/#selecting-nodes).
* Requiring particular kinds of processors or other hardware. See the [Accelerators](https://karpenter.sh/docs/concepts/scheduling/#acceleratorsgpu-resources) section of the Karpenter documentation for a podspec example that requires a pod to run on a GPU.

### Create billing alarms to monitor your compute spend

When you configure your cluster to scale automatically, you should create billing alarms that warn you when your spend has exceeded a threshold, and add resource limits to your Karpenter configuration. Setting resource limits with Karpenter is similar to setting the maximum capacity of an AWS Auto Scaling group, in that it represents the maximum amount of compute resources that a Karpenter provisioner can instantiate.

!!! note
    You cannot set a global limit for the whole cluster. Limits apply to specific provisioners.

The snippet below tells Karpenter to provision a maximum of 1000 CPU cores and 1000Gi of memory. Karpenter stops adding capacity only when the limit is met or exceeded. When a limit is exceeded, the Karpenter controller writes a message like `memory resource usage of 1001 exceeds limit of 1000`, or something similar, to the controller logs. 
If you route container logs to CloudWatch Logs, you can create a [metrics filter](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html) to look for specific patterns or terms in the logs, and then create a [CloudWatch alarm](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) to alert you when a configured metric threshold is breached.

See [Setting resource limits](https://karpenter.sh/docs/concepts/provisioners/#speclimitsresources) in the Karpenter documentation for more information on using limits with Karpenter.

```yaml
spec:
  limits:
    resources:
      cpu: 1000
      memory: 1000Gi
```

If you do not set limits or constrain the instance types that Karpenter can provision, Karpenter will keep adding compute power to your cluster as needed. Configuring Karpenter this way lets your cluster scale freely, but it can also have a significant cost impact. This is why we recommend configuring billing alarms. 
Billing alarms let you be alerted, and proactively notified, when the estimated charges computed in your account exceed a defined threshold. See [Setting up an Amazon CloudWatch billing alarm to proactively monitor estimated charges](https://aws.amazon.com/blogs/mt/setting-up-an-amazon-cloudwatch-billing-alarm-to-proactively-monitor-estimated-charges/) for more information.

You may also want to enable Cost Anomaly Detection, an AWS Cost Management feature that uses machine learning to continuously monitor your cost and usage in order to detect unusual spend. More information can be found in the [Getting started with AWS Cost Anomaly Detection](https://docs.aws.amazon.com/cost-management/latest/userguide/getting-started-ad.html) guide. If you have created a budget in AWS Budgets, you can also configure an action to notify you when a specific threshold is breached. With budget actions you can send an email, post a message to an SNS topic, or send a message to a chatbot like Slack. 
See [Configuring AWS Budgets actions](https://docs.aws.amazon.com/cost-management/latest/userguide/budgets-controls.html) for more information.

### Use the do-not-evict annotation to prevent Karpenter from deprovisioning a node

If you are running a critical application on a Karpenter-provisioned node, such as a *long-running* batch job or stateful application, and the node's TTL has expired, the application will be interrupted when the instance is terminated. By adding the `karpenter.sh/do-not-evict` annotation to a pod, you instruct Karpenter to preserve the node until the pod is terminated or the `do-not-evict` annotation is removed. 
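For example, a batch pod carrying the annotation might look like this (the pod name, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: long-running-job           # illustrative
  annotations:
    karpenter.sh/do-not-evict: "true"
spec:
  restartPolicy: Never
  containers:
  - name: worker
    image: public.ecr.aws/docker/library/busybox:stable   # illustrative image
    command: ["sh", "-c", "run-long-work"]                 # placeholder command
```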
See the [Deprovisioning](https://karpenter.sh/docs/concepts/deprovisioning/#disabling-deprovisioning) documentation for more information.

If the only non-daemonset pods left on a node are those associated with jobs, Karpenter can target and terminate the node as long as the job status is succeeded or failed.

### Configure requests=limits for all non-CPU resources when using consolidation

Consolidation and scheduling in general work by comparing a pod's resource requests with the amount of allocatable resources on a node. Resource limits are not considered. As an example, a pod whose memory limit is larger than its memory request can burst above the request. 
If several pods on the same node burst at the same time, some of them may be terminated due to an out-of-memory (OOM) condition. Consolidation can make this more likely to occur, because it packs pods onto nodes considering only their requests.

### Use LimitRanges to configure defaults for resource requests and limits

Because Kubernetes does not set default requests or limits, a container's consumption of the underlying host's CPU and memory resources is unbounded. The Kubernetes scheduler looks at a pod's total requests (the higher of the total requests of the pod's containers or the total resources of the pod's init containers) to determine which worker node to schedule the pod onto. Similarly, Karpenter considers a pod's requests to determine which instance type to provision.
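A minimal LimitRange applying namespace-wide defaults might look like the following sketch (the names and values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-requests-limits   # illustrative name
  namespace: my-namespace         # illustrative namespace
spec:
  limits:
    - type: Container
      # Applied when a container omits resources.requests
      defaultRequest:
        cpu: 250m
        memory: 256Mi
      # Applied when a container omits resources.limits
      default:
        cpu: 500m
        memory: 256Mi
```

Setting the default memory limit equal to the default memory request also follows the requests=limits guidance above for non-CPU resources.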
If resource requests are not specified on some pods, you can use limit ranges to apply sensible defaults to a namespace.

See [Configure Default Memory Requests and Limits for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/).

### Apply accurate resource requests to all workloads

Karpenter can launch the nodes that best fit your workloads when its information about the workloads' requirements is accurate. This is particularly important when using Karpenter's consolidation feature.

See [Configure and size resource requests/limits for all workloads](https://aws.github.io/aws-eks-best-practices/reliability/docs/dataplane/#configure-and-size-resource-requestslimits-for-all-workloads).

## Additional resources
* [Karpenter/Spot Workshop](https://ec2spotworkshops.com/karpenter.html)
* [Karpenter Node Provisioner](https://youtu.be/_FXRIKWJWUk)
* [TGIK Karpenter](https://youtu.be/zXqrNJaTCrU)
* [Karpenter vs.
Cluster Autoscaler](https://youtu.be/3QsVRHVdOnM)
* [Groupless Autoscaling with Karpenter](https://www.youtube.com/watch?v=43g8uPohTgc)
---
search:
  exclude: true
---

# Image security

Container images should be considered your first line of defense against an attack. An insecure, poorly constructed image can allow an attacker to escape the bounds of the container and gain access to the host. Once on the host, an attacker can access sensitive information or move laterally within the cluster or within your AWS account. The following best practices help mitigate the risk of this happening.

## Recommendations

### Create minimal images

Start by removing all extraneous binaries from the container image. If you are using an unverified image from Dockerhub, inspect the image using an application like [Dive](https://github.com/wagoodman/dive), which can show you the contents of each of the container's layers.
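Dive can be run as a local binary or, as sketched below, from its own container image; the target image tag is illustrative:

```bash
# Inspect each layer of an image interactively; requires access to the Docker socket
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  wagoodman/dive:latest my-registry/my-app:latest
```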
Remove all binaries with the SETUID and SETGID bits, which can be used to escalate privilege, and consider removing all shells and utilities like nc or curl that can be used for nefarious purposes. You can find the files with the SETUID and SETGID bits with the following command:

```bash
find / -perm /6000 -type f -exec ls -ld {} \;
```

To remove the special permissions from these files, add the following directive to your container image:

```docker
RUN find / -xdev -perm /6000 -type f -exec chmod a-s {} \; || true
```

### Use multi-stage builds

Using multi-stage builds is a way to create minimal images. Multi-stage builds are often used to automate parts of the continuous integration cycle. For example, they can be used to lint your source code or perform static code analysis, giving developers near-immediate feedback instead of waiting for a pipeline to execute.
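A minimal multi-stage build, sketched here for a hypothetical Go application, compiles in one stage and copies only the resulting binary into a small final image:

```docker
# Build stage: contains the toolchain, never shipped
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: only the compiled binary, no shells or build tools
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```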
Multi-stage builds are attractive from a security standpoint because they allow you to minimize the size of the final image pushed to your container registry. Container images devoid of build tools and other extraneous binaries reduce the attack surface of the image and improve your security posture. For additional information about multi-stage builds, see [Docker's documentation](https://docs.docker.com/develop/develop-images/multistage-build/).

### Create a Software Bill of Materials (SBOM) for your container image

An SBOM is a nested inventory of the software artifacts that make up your container image.
SBOMs are a key building block in software security and software supply chain risk management. [Generating SBOMs, storing them in a central repository, and scanning them for vulnerabilities](https://anchore.com/SBOM/) helps address the following concerns:

- **Visibility**: understand what components make up your container image.
Storing SBOMs in a central repository allows you to audit and scan them at any time, even after deployment, to detect and respond to new vulnerabilities such as zero-day vulnerabilities.
- **Provenance verification**: assurance that existing assumptions about where and how an artifact originated are true, and that the artifact or its accompanying metadata have not been tampered with during the build or delivery processes.
- **Trustworthiness**: assurance that a given artifact and its contents can be trusted to do what they purport to do, i.e. are fit for purpose. This involves judging whether the code is safe to execute and making informed decisions about the risks associated with executing it.
Trustworthiness is assured by creating an attested pipeline execution report along with an attested SBOM and an attested CVE scan report, assuring consumers of the image that it was in fact created through secure means (a pipeline) with secure components.
- **Dependency trust verification**: recursive checking of an artifact's dependency tree for the trustworthiness and provenance of the artifacts it uses. Drift in SBOMs can help detect malicious activity, including unauthorized or untrusted dependencies and infiltration attempts.

The following tools can be used to generate SBOMs:

- [Amazon Inspector](https://docs.aws.amazon.com/inspector) can be used to [create and export SBOMs](https://docs.aws.amazon.com/inspector/latest/user/SBOM-export.html).
- [Syft from Anchore](https://github.com/anchore/syft) can also be used for SBOM generation. For quicker vulnerability scans, the SBOM generated for a container image can be used as an input for scanning.
The SBOM and scan report can then be [attested and attached](https://github.com/sigstore/cosign/blob/main/doc/cosign_attach_attestation.md) to the image before pushing it to a central OCI repository such as Amazon ECR for review and audit purposes.

Learn more about securing your software supply chain by reviewing the [CNCF Software Supply Chain Best Practices guide](https://project.linuxfoundation.org/hubfs/CNCF_SSCP_v1.pdf).

### Scan images for vulnerabilities regularly

Like their virtual machine counterparts, container images can contain binaries and application libraries with vulnerabilities, or develop vulnerabilities over time. The best way to safeguard against exploits is by regularly scanning your images with an image scanner. Images stored in Amazon ECR can be scanned on push or on demand.
On-demand scans can be run once during a 24-hour period. ECR currently supports [two types of scanning: basic and enhanced](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html). Basic scanning leverages [Clair](https://github.com/quay/clair), an open source image scanning solution, for free. [Enhanced scanning](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-enhanced.html) uses Amazon Inspector to provide automatic, continuous scans for an [additional cost](https://aws.amazon.com/inspector/pricing/). After an image is scanned, the results are logged to the event stream for ECR in EventBridge. You can also see the results of a scan from within the ECR console. Images with a HIGH or CRITICAL vulnerability should be deleted or rebuilt. If an image that has been deployed develops a vulnerability, it should be replaced as soon as possible.

Knowing where images with vulnerabilities have been deployed is essential for keeping your environment secure.
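For reference, a basic on-demand scan can be triggered and its findings read back with the AWS CLI; the repository name and tag below are illustrative:

```bash
# Trigger an on-demand basic scan of a specific image tag
aws ecr start-image-scan \
  --repository-name my-app \
  --image-id imageTag=latest

# Retrieve the scan findings once the scan completes
aws ecr describe-image-scan-findings \
  --repository-name my-app \
  --image-id imageTag=latest
```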
While you could build your own image tracking solution, there are already several commercial offerings that provide this and other advanced capabilities out of the box, including:

- [Grype](https://github.com/anchore/grype)
- [Palo Alto - Prisma Cloud (twistcli)](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/tools/twistcli_scan_images)
- [Aqua](https://www.aquasec.com/)
- [Kubei](https://github.com/Portshift/kubei)
- [Trivy](https://github.com/aquasecurity/trivy)
- [Snyk](https://support.snyk.io/hc/en-us/articles/360003946917-Test-images-with-the-Snyk-Container-CLI)

A Kubernetes validating webhook could also be used to verify that images are free of critical vulnerabilities. Validating webhooks are invoked prior to the Kubernetes API. They are typically used to reject requests that don't comply with the validation criteria defined in the webhook. [This blog](https://aws.amazon.com/blogs/containers/building-serverless-admission-webhooks-for-kubernetes-with-aws-sam/) describes a serverless webhook that calls the ECR `DescribeImageScanFindings` API to determine whether a pod is pulling an image with critical vulnerabilities.
If vulnerabilities are found, the pod is rejected and a message with the list of CVEs is returned as an Event.

### Use attestations to validate artifact integrity

An attestation is a cryptographically signed "statement" that claims something - a "predicate", e.g. a pipeline run, an SBOM, or a vulnerability scan report - is true about another thing - a "subject", i.e. the container image.

Attestations help users to validate that an artifact comes from a trusted source in the software supply chain. As an example, we may use a container image without knowing all of the software components or dependencies included in that image. However, if we trust whatever the producer of the container image says about what software is present, we can use the producer's attestation to rely on that artifact.
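As an illustrative sketch (the image reference, identity regexp, and `sbom.spdx.json` predicate file are placeholders), cosign can create such an attestation and later verify it:

```shell
# Create and attach a signed SPDX SBOM attestation to an image
# (image reference and predicate file are placeholders).
cosign attest --type spdx --predicate sbom.spdx.json \
    <account_id>.dkr.ecr.<region>.amazonaws.com/my-app:1.0.0

# Verify the attestation later, e.g. in CI or an admission check
# (identity regexp and issuer are placeholders for your pipeline).
cosign verify-attestation --type spdx \
    --certificate-identity-regexp 'https://github.com/my-org/.*' \
    --certificate-oidc-issuer https://token.actions.githubusercontent.com \
    <account_id>.dkr.ecr.<region>.amazonaws.com/my-app:1.0.0
```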
That means we can safely use the artifact in our workflows in place of doing the analysis ourselves.

- Attestations can be created using [AWS Signer](https://docs.aws.amazon.com/signer/latest/developerguide/Welcome.html) or [Sigstore cosign](https://github.com/sigstore/cosign/blob/main/doc/cosign_attest.md).
- Kubernetes admission controllers like [Kyverno](https://kyverno.io/) can be used to [verify attestations](https://kyverno.io/docs/writing-policies/verify-images/sigstore/).
- Refer to this [workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/49343bb7-2cc5-4001-9d3b-f6a33b3c4442/en-US/0-introduction) to learn more about software supply chain management best practices on AWS using open source tools, including topics such as creating and attaching attestations to a container image.

### Create IAM policies for ECR repositories

Nowadays, it is not uncommon for an organization to have multiple development teams operating independently within a shared AWS account.
If these teams don't need to share assets, you may want to create a set of IAM policies that restrict access to the repositories each team can interact with. A good way to implement this is by using ECR [namespaces](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Repositories.html#repository-concepts). Namespaces are a way to group similar repositories together. For example, all of the repositories for team A can be prefaced with the team-a/ prefix while those for team B can use the team-b/ prefix. The policy that restricts access might look like the following:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPushPull",
      "Effect": "Allow",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ],
      "Resource": [
        "arn:aws:ecr:<region>:<account_id>:repository/team-a/*"
      ]
    }
  ]
}
```

### Consider using ECR private endpoints

The ECR API has a public endpoint.
Consequently, ECR registries can be accessed from the Internet so long as the request has been authenticated and authorized by IAM. For those who need to operate in a sandboxed environment where the cluster VPC lacks an Internet Gateway (IGW), you can configure a private endpoint for ECR. Creating a private endpoint enables you to privately access the ECR API through a private IP address instead of routing traffic across the Internet. For additional information on this topic, see [Amazon ECR interface VPC endpoints](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html).

### Implement endpoint policies for ECR

The default endpoint policy allows access to all ECR repositories within a region. This might allow an attacker or insider to exfiltrate data by packaging it as a container image and pushing it to a registry in another AWS account.
Mitigating this risk involves creating an endpoint policy that limits API access to ECR repositories. For example, the following policy allows all AWS principals in your account to perform all actions against your, and only your, ECR repositories:

```json
{
  "Statement": [
    {
      "Sid": "LimitECRAccess",
      "Principal": "*",
      "Action": "*",
      "Effect": "Allow",
      "Resource": "arn:aws:ecr:<region>:<account_id>:repository/*"
    }
  ]
}
```

You can enhance this further by setting a condition that uses the new `PrincipalOrgID` attribute, which will prevent the pushing/pulling of images by an IAM principal that is not part of your AWS Organization. See [aws:PrincipalOrgID](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-principalorgid) for additional details.
We recommend applying the same policy to both the `com.amazonaws.<region>.ecr.dkr` and the `com.amazonaws.<region>.ecr.api` endpoints.
Since EKS pulls the images for kube-proxy, coredns, and aws-node from ECR, you will need to add the account ID of the registry, e.g. `602401143452.dkr.ecr.us-west-2.amazonaws.com/*`, to the list of resources in the endpoint policy, or alter the policy to allow pulls from "*" and restrict pushes to your account ID. The table below shows the mapping between the AWS accounts where EKS images are vended from and the cluster region.

|Account Number |Region |
|--- |--- |
|602401143452 |All commercial regions except for those listed below |
|800184023465 |ap-east-1 - Asia Pacific (Hong Kong) |
|558608220178 |me-south-1 - Middle East (Bahrain) |
|918309763551 |cn-north-1 - China (Beijing) |
|961992271922 |cn-northwest-1 - China (Ningxia) |

For further information about using endpoint policies, see [Using VPC endpoint policies to control Amazon ECR access](https://aws.amazon.com/blogs/containers/using-vpc-endpoint-policies-to-control-amazon-ecr-access/).

### Implement lifecycle policies for ECR

The [NIST Application Container Security Guide](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-190.pdf) warns about the risk of "stale images in registries", noting that over time old images with vulnerable, out-of-date software packages should be removed to prevent accidental deployment and exposure.
Each ECR repository can have a lifecycle policy that sets rules for when images expire. The [AWS official documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/LifecyclePolicies.html) describes how to set up test rules, evaluate them, and then apply them. There are several [lifecycle policy examples](https://docs.aws.amazon.com/AmazonECR/latest/userguide/lifecycle_policy_examples.html) in the official documentation that show different ways of filtering the images in a repository:

- Filtering by image age or count
- Filtering by tagged or untagged images
- Filtering by image tags, either in multiple rules or a single rule

???+ warning
    If the image for a long-running application is purged from ECR, it can cause image pull errors when the application is redeployed or scaled horizontally. When using image lifecycle policies, make sure you have good CI/CD practices in place to keep deployments, and the images that they reference, up to date, and always create image expiry rules that account for how often you do releases/deployments.

### Create a set of curated images

Rather than allowing developers to create their own images, consider creating a set of vetted images for the different application stacks in your organization. By doing so, developers can forego learning how to compose Dockerfiles and concentrate on writing code. As changes are merged into Master, a CI/CD pipeline can automatically compile the asset, store it in an artifact repository, and copy the artifact into the appropriate image before pushing it to a Docker registry like ECR. At the very least you should create a set of base images from which developers can create their own Dockerfiles. Ideally, you want to avoid pulling images from Dockerhub altogether.
This is because a) you don't always know what is in those images, and b) about [a fifth](https://www.kennasecurity.com/blog/one-fifth-of-the-most-used-docker-containers-have-at-least-one-critical-vulnerability/) of the top 1000 images have vulnerabilities. A list of those images and their vulnerabilities can be found at [this site](https://vulnerablecontainers.org/).

### Add the USER directive to your Dockerfiles to run as a non-root user

As was mentioned in the pod security section, you should avoid running containers as root. While you can configure this as part of the PodSpec, it is a good habit to use the `USER` directive in your Dockerfiles. The `USER` directive sets the UID to use when running the `RUN`, `ENTRYPOINT`, or `CMD` instructions that appear after the USER directive.

### Lint your Dockerfiles

Linting can be used to verify that your Dockerfiles adhere to a set of predefined guidelines, e.g. the inclusion of the `USER` directive or the requirement that all images be tagged.
[dockerfile_lint](https://github.com/projectatomic/dockerfile_lint) is an open source project from RedHat that verifies common best practices and includes a rule engine that you can use to build your own rules for linting Dockerfiles. It can be incorporated into a CI pipeline, in that builds with Dockerfiles that violate a rule will automatically fail.

### Build images from scratch

Reducing the attack surface of your container images should be a primary aim when building images. The ideal way to do this is by creating minimal images that are devoid of binaries that could be used to exploit vulnerabilities. Fortunately, Docker has a mechanism to create images from [`scratch`](https://docs.docker.com/develop/develop-images/baseimages/#create-a-simple-parent-image-using-scratch).
With languages like Go, you can create a static linked binary and reference it in your Dockerfile, as in this example:

```docker
############################
# STEP 1 build executable binary
############################
FROM golang:alpine AS builder
# Install git.
# Git is required for fetching the dependencies.
RUN apk update && apk add --no-cache git
WORKDIR $GOPATH/src/mypackage/myapp/
COPY . .
# Fetch dependencies.
# Using go get.
RUN go get -d -v
# Build the binary.
RUN go build -o /go/bin/hello

############################
# STEP 2 build a small image
############################
FROM scratch
# Copy our static executable.
COPY --from=builder /go/bin/hello /go/bin/hello
# Run the hello binary.
ENTRYPOINT ["/go/bin/hello"]
```

This creates a container image that consists of your application and nothing else, making it extremely secure.

### Use immutable tags with ECR

[Immutable tags](https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecr-now-supports-immutable-image-tags/) force you to update the image tag on each push to the image repository. This can thwart an attacker from overwriting an image with a malicious version without also changing the image's tags.
Additionally, it gives you a way to easily and uniquely identify an image.

### Sign your images, SBOMs, pipeline runs and vulnerability reports

When Docker was first introduced, there was no cryptographic model for verifying container images. With v2, Docker added digests to the image manifest. This allowed an image's configuration to be hashed, and the hash to be used to generate an ID for the image. When image signing is enabled, the Docker engine verifies the manifest's signature, ensuring that the content was produced from a trusted source and that no tampering has occurred. After each layer is downloaded, the engine verifies the digest of the layer, ensuring that the content matches the content specified in the manifest.
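Today this signing model is most commonly implemented with Sigstore Cosign. A minimal "keyless" sketch might look like the following (the image reference, signer identity, and OIDC issuer are placeholders):

```shell
# Keyless sign: cosign obtains a short-lived certificate from Fulcio
# and records the signature in the Rekor transparency log
# (image reference is a placeholder).
cosign sign <account_id>.dkr.ecr.<region>.amazonaws.com/my-app:1.0.0

# Verify the signature against the expected signer identity
# (identity and issuer are placeholders).
cosign verify \
    --certificate-identity user@example.com \
    --certificate-oidc-issuer https://accounts.google.com \
    <account_id>.dkr.ecr.<region>.amazonaws.com/my-app:1.0.0
```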
Image signing effectively allows you to create a secure supply chain through the verification of the digital signatures associated with an image.

You can use [AWS Signer](https://docs.aws.amazon.com/signer/latest/developerguide/Welcome.html) or [Sigstore Cosign](https://github.com/sigstore/cosign) to sign container images and to create attestations for SBOMs, vulnerability scan reports, and pipeline run reports. These attestations assure the trustworthiness and integrity of an image: that it was in fact created by the trusted pipeline without any interference or tampering, and that it contains only the software components documented in the SBOM that are verified and trusted by the image publisher.
These attestations can be attached to the container image and pushed to the repository.

In the next section we will see how to use the attested artifacts for audits and admission controller verification.

### Image integrity verification using a Kubernetes admission controller

Using a [dynamic admission controller](https://kubernetes.io/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/), we can verify image signatures and attested artifacts in an automated way before deploying images to the target Kubernetes cluster, and admit deployments only if the security metadata of the artifacts complies with the admission controller policies.

For example, we can write a policy that cryptographically verifies an image's signature, an attested SBOM, an attested pipeline run report, or an attested CVE scan report. We can write conditions in the policy that check the data in the report.
For example, a CVE scan should not contain any critical CVEs. Deployment is allowed only for images that satisfy these conditions; all other deployments will be rejected by the admission controller.

Examples of admission controllers include:

- [Kyverno](https://kyverno.io/)
- [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper)
- [Portieris](https://github.com/IBM/portieris)
- [Ratify](https://github.com/deislabs/ratify)
- [Kritis](https://github.com/grafeas/kritis)
- [Grafeas tutorial](https://github.com/kelseyhightower/grafeas-tutorial)
- [Voucher](https://github.com/Shopify/voucher)

### Update the packages in your container images

You should include `apt-get update && apt-get upgrade` in your Dockerfiles to upgrade the packages in your images. Although upgrading requires you to run as root, this occurs during the image build phase. The application doesn't need to run as root. You can install the updates and then switch to a different user with the USER directive.
If your base image runs as a non-root user, switch to root and back. Don't rely solely on the maintainers of the base image to install the latest security updates.

Run `apt-get clean` to delete the installer files from `/var/cache/apt/archives/`. You can also run `rm -rf /var/lib/apt/lists/*` after installing packages. This removes the index files, i.e. the lists of packages that are available to install. Be aware that these commands may be different for each package manager.
For example:

```docker
RUN apt-get update && apt-get install -y \
    curl \
    git \
    libsqlite3-dev \
    && apt-get clean && rm -rf /var/lib/apt/lists/*
```

## Tools and resources

- [docker-slim](https://github.com/docker-slim/docker-slim) builds secure, minimal images.
- [dockle](https://github.com/goodwithtech/dockle) verifies that your Dockerfile aligns with best practices for creating secure images.
- [dockerfile-lint](https://github.com/projectatomic/dockerfile_lint) is a rule-based linter for Dockerfiles.
- [hadolint](https://github.com/hadolint/hadolint) is a rule-based linter for Dockerfiles.
- [Gatekeeper and OPA](https://github.com/open-policy-agent/gatekeeper) is a policy-based admission controller.
- [Kyverno](https://kyverno.io/) is a Kubernetes-native policy engine.
- [in-toto](https://in-toto.io/) allows a user to verify whether a step in the supply chain was intended to be performed, and whether the step was performed by the right actor.
- [Notary](https://github.com/theupdateframework/notary) is a project for signing container images.
- [Notary v2](https://github.com/notaryproject/nv2)
- [Grafeas](https://grafeas.io/) is an open artifact metadata API to audit and govern your software supply chain.
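As a rough illustration of the kind of rule these Dockerfile linters automate, the shell sketch below (the file path and Dockerfile contents are made up for the example) warns when a Dockerfile lacks a `USER` instruction and would therefore run as root:

```shell
# write a sample Dockerfile to check (illustrative only)
cat > /tmp/Dockerfile.sample <<'EOF'
FROM alpine:3.19
RUN apk add --no-cache curl
EOF

# warn if no USER instruction is present (the image would run as root)
if grep -q '^USER ' /tmp/Dockerfile.sample; then
  echo "OK: non-root USER set"
else
  echo "WARN: no USER instruction; container will run as root"
fi
```

Tools such as hadolint and dockle perform this check, and many more, natively.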
---
search:
  exclude: true
---


# Network Security

Network security has several facets. The first involves the application of rules that restrict the flow of network traffic between services. The second involves the encryption of traffic while it is in transit. The mechanisms to implement these security measures on EKS are varied but often include the following items:

## Traffic control

- Network Policies
- Security Groups

## Network encryption

- Service Mesh
- Container Network Interfaces (CNIs)
- Ingress Controllers and Load Balancers
- Nitro Instances
- cert-manager with ACM Private CA

## Network policies

Within a Kubernetes cluster, all pod-to-pod communication is allowed by default. While this flexibility may help promote experimentation, it is not considered secure.
Kubernetes network policies give you a mechanism to restrict network traffic between pods (often referred to as East/West traffic) as well as between pods and external services. Kubernetes network policies operate at layers 3 and 4 of the OSI model. Network policies use pod and namespace selectors and labels to identify source and destination pods, but can also include IP addresses, port numbers, protocols, or a combination of these. Network policies can be applied to both inbound and outbound connections to a pod, often referred to as ingress and egress rules.

With the native network policy support of the Amazon VPC CNI plugin, you can implement network policies to secure the network traffic in your Kubernetes clusters. This integrates with the upstream Kubernetes Network Policy API, ensuring compatibility and adherence to Kubernetes standards.
You can define policies using the different [identifiers](https://kubernetes.io/docs/concepts/services-networking/network-policies/) supported by the upstream API. By default, all ingress and egress traffic is allowed to a pod. When a network policy with a policyType of Ingress is specified, the only connections allowed into the pod are those from the pod's node and those allowed by the ingress rules. The same applies to egress rules. If multiple rules are defined, the union of all the rules is considered when making the decision; the order of evaluation therefore does not affect the policy result.

!!! attention
    When you first provision an EKS cluster, the VPC CNI network policy functionality is not enabled by default. Ensure that you have deployed a supported VPC CNI add-on version, and enable the feature by setting the `ENABLE_NETWORK_POLICY` flag to `true` on the vpc-cni add-on.
Refer to the [Amazon EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html) for detailed instructions.

## Recommendations

### Getting started with network policies - follow the principle of least privilege

### Create a default deny policy

As with RBAC policies, it is recommended to follow least-privileged access principles with network policies. Start by creating a deny-all policy that restricts all inbound and outbound traffic within a namespace.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```

![default-deny](./images/default-deny.jpg)

!!! tip
    The image above was created with the network policy viewer from [Tufin](https://orca.tufin.io/netpol/).

### Create a rule to allow DNS queries

Once you have the default deny-all rule in place, you can begin layering on additional rules, such as a global rule that allows pods to query CoreDNS for name resolution.
Start by labeling the namespace.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-access
  namespace: default
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
```

![allow-dns-access](./images/allow-dns-access.jpg)

#### Incrementally add rules to selectively allow the flow of traffic between namespaces/pods

Understand the application requirements and create fine-grained ingress and egress rules as needed. The example below shows how to restrict ingress traffic on port 80 from `client-one` to `app-one`.
This helps minimize the attack surface and reduce the risk of unauthorized access.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-app-one
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: app-one
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          k8s-app: client-one
    ports:
    - protocol: TCP
      port: 80
```

![allow-ingress-app-one](./images/allow-ingress-app-one.png)

### Monitor network policy enforcement

- **Use a network policy editor**
  - The [network policy editor](https://networkpolicy.io/) helps with visualizations, security scoring, and auto-generation from network flow logs.
  - Build network policies interactively.
- **Audit logs**
  - Regularly review the audit logs of your EKS cluster.
  - Audit logs provide a wealth of information about which operations were performed on the cluster, including changes to network policies.
  - Use this information to track changes to your network policies over time and to detect any unauthorized or unexpected changes.
- **Automate testing**
  - Implement automated testing by creating a test environment that mirrors your production environment and periodically deploying workloads that attempt to violate your network policies.
- **Monitor metrics**
  - Configure your observability agents to scrape the Prometheus metrics from the VPC CNI node agents, which allows you to monitor agent health and SDK errors.
- **Audit network policies regularly**
  - Periodically audit your network policies to ensure that they still meet the current application requirements. As applications evolve, an audit gives you the opportunity to remove redundant ingress and egress rules and to make sure that applications do not have excessive permissions.
- **Ensure network policies exist using Open Policy Agent (OPA)**
  - Use an OPA policy such as the one below to ensure that a network policy always exists before application pods are onboarded. This policy denies the onboarding of k8s pods with the label `k8s-app: sample-app` if a corresponding network policy does not exist.

```javascript
package kubernetes.admission
import data.kubernetes.networkpolicies

deny[msg] {
    input.request.kind.kind == "Pod"
    pod_label_value := {v["k8s-app"] | v := input.request.object.metadata.labels}
    contains_label(pod_label_value, "sample-app")
    np_label_value := {v["k8s-app"] | v := networkpolicies[_].spec.podSelector.matchLabels}
    not contains_label(np_label_value, "sample-app")
    msg:= sprintf("The Pod %v could not be created because it is missing an associated Network Policy.", [input.request.object.metadata.name])
}
contains_label(arr, val) {
    arr[_] == val
}
```

### Troubleshooting

#### Monitor vpc-network-policy-controller and node-agent logs

Enable the controller manager logs of the EKS control plane to diagnose the network policy functionality. You can stream the control plane logs to a CloudWatch log group and use [CloudWatch Log Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) to perform advanced queries.
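For instance, a minimal Log Insights sketch that surfaces recent control plane entries mentioning network policies; the filter pattern and result limit are illustrative assumptions, not the exact log format:

```
fields @timestamp, @message
| filter @message like /networkpolicy/
| sort @timestamp desc
| limit 20
```

Run such a query against the log group that receives your cluster's control plane logs, adjusting the filter to match the events you are debugging.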
From the logs you can see which pod endpoint objects are resolved to a network policy and the reconciliation status of the policies, and you can debug whether the policies are working as expected.

In addition, Amazon VPC CNI lets you collect policy enforcement logs from the EKS worker nodes and export them to [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/). Once enabled, you can leverage [CloudWatch Container Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights.html) to gain insight into usage related to your network policies.

Amazon VPC CNI also ships an SDK that provides an interface for interacting with the eBPF programs on the node. The SDK is installed when `aws-node` is deployed onto the nodes. You can find the SDK binary installed under the `/opt/cni/bin` directory on the node.
At launch, the SDK supports fundamental functionality such as inspecting eBPF programs and maps.

```shell
sudo /opt/cni/bin/aws-eks-na-cli ebpf progs
```

#### Log network traffic metadata

[AWS VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) captures metadata about the traffic flowing through a VPC, such as the source and destination IP addresses and ports, along with accepted/dropped packets. This information can be analyzed to look for suspicious or unusual activity between resources within the VPC, including pods. However, since the IP addresses of pods change frequently as they are replaced, flow logs may not be sufficient on their own. Calico Enterprise extends flow logs with pod labels and other metadata, making it easier to decipher the traffic flows between pods.

## Security groups

EKS uses [AWS VPC security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) (SGs) to control the traffic between the Kubernetes control plane and the cluster's worker nodes.
Security groups are also used to control the traffic between worker nodes, other VPC resources, and external IP addresses. When you provision an EKS cluster (with Kubernetes version 1.14-eks.3 or greater), a cluster security group is automatically created for you. This security group allows unfettered communication between the EKS control plane and the nodes from managed node groups. For simplicity, it is recommended that you add the cluster SG to all node groups, including unmanaged node groups.

Prior to Kubernetes version 1.14 and EKS version eks.3, separate security groups were configured for the EKS control plane and node groups. The minimum and suggested rules for the control plane and node group security groups can be found in the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html). The minimum rules for the _control plane security group_ allow port 443 inbound from the worker node SG.
This rule is what allows the kubelets to communicate with the Kubernetes API server. It also includes port 10250 for outbound traffic to the worker node SG; 10250 is the port that the kubelet listens on. Similarly, the minimum _node group_ rules allow port 10250 inbound from the control plane SG and port 443 outbound to the control plane SG. Finally, there is a rule that allows unfettered communication between nodes within a node group.

If you need to control communication between services that run within the cluster and services that run outside the cluster, such as an RDS database, consider [security groups for pods](https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html). With security groups for pods, you can assign an **existing** security group to a collection of pods.

!!! warning
    If you reference a security group that does not exist prior to the creation of the pods, the pods will not get scheduled.

You can control which pods are assigned to a security group by creating a `SecurityGroupPolicy` object and specifying a `PodSelector` or a `ServiceAccountSelector`. Setting the selector to `{}` will assign the SGs referenced in the `SecurityGroupPolicy` to all pods in a namespace or to all service accounts in a namespace. Be sure you have familiarized yourself with all the [considerations](https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html#security-groups-pods-considerations) before implementing security groups for pods.

!!! important
    If you use SGs for pods, you **must** create SGs that allow port 53 outbound to the cluster security group. Similarly, you **must** update the cluster security group to accept port 53 inbound traffic from the pod security group.

!!! important
    The [limits for security groups](https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html#vpc-limits-security-groups) still apply when using security groups for pods, so use them judiciously.

!!! important
    You **must** create rules for inbound traffic from the cluster security group (kubelet) for all of the probes configured for the pod.

!!! important
    Security groups for pods rely on a feature known as [ENI trunking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-instance-eni.html) in order to assign multiple ENIs to an EC2 instance. When a pod is assigned to an SG, a VPC controller associates a branch ENI from the node group with the pod. If there aren't enough branch ENIs available in a node group at the time the pod is scheduled, the pod will stay in the pending state. The number of branch ENIs an instance can support varies by instance type/family.
    See the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html#supported-instance-types) for further details.

While security groups for pods offer an AWS-native way to control network traffic within and outside of your cluster without the overhead of a policy daemon, other options are available. For example, the Cilium policy engine allows you to reference a DNS name in a network policy. Calico Enterprise includes an option for mapping network policies to AWS security groups. If you've implemented a service mesh like Istio, you can use an egress gateway to restrict network egress to specific, vetted domains or IP addresses.
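To make the `SecurityGroupPolicy` mechanism described earlier concrete, here is a minimal sketch; the namespace, pod label, and security group ID are hypothetical placeholders:

```yaml
# Assigns an existing security group to all pods labeled role: backend
# in the "demo" namespace. The security group ID is a placeholder.
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: backend-sg-policy
  namespace: demo
spec:
  podSelector:
    matchLabels:
      role: backend
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0
```

Pods matching the selector that are created after this policy exists are attached to a branch ENI carrying the referenced security group; as the warning above notes, the security group must exist before the pods are created.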
For further information about this option, read the three part series on [egress traffic control in Istio](https://istio.io/blog/2019/egress-traffic-control-in-istio-part-1/).

## When should you use network policy or security groups for pods?

### When to use Kubernetes network policy

- **Controlling pod-to-pod traffic**
  - Suited for controlling network traffic between pods inside a cluster (east-west traffic)
- **Controlling traffic at the IP address or port level (OSI layer 3 or 4)**

### When to use AWS security groups for pods (SGP)

- **Leverage existing AWS configurations**
  - If you already have a complex set of EC2 security groups that manage access to AWS services and you are migrating applications from EC2 instances to EKS, SGPs can be a very good choice, allowing you to reuse security group resources and apply them to your pods.
- **Control access to AWS services**
  - If your applications running within an EKS cluster need to communicate with other AWS services (an RDS database, for example), use SGPs as an efficient mechanism to control the traffic from the pods to AWS services.
- **Isolation of pod and node traffic**
  - If you want to completely separate pod traffic from the rest of the node traffic, use SGPs in `POD_SECURITY_GROUP_ENFORCING_MODE=strict` mode.

### Best practices for using `security groups for pods` and `network policy`

- **Layered security**
  - Use a combination of SGPs and Kubernetes network policy for a layered security approach.
  - Use SGPs to limit network-level access to AWS services that are not part of the cluster, while Kubernetes network policies can restrict network traffic between pods within the cluster.
- **Principle of least privilege**
  - Only allow the traffic that is required between pods or namespaces.
- **Segment your applications**
  - Wherever possible, segment applications by network policy to reduce the blast radius if an application is compromised.
- **Keep policies simple and clear**
  - Kubernetes network policies can be quite granular and complex, so it is best to keep them as simple as possible to reduce the risk of misconfiguration and to ease the management overhead.
- **Reduce the attack surface**
  - Minimize the attack surface by limiting the exposure of your applications.

!!! caution
    Security groups for pods provide two enforcement modes: `strict` and `standard`. You must use `standard` mode when using both the network policy and security groups for pods features in an EKS cluster.

When it comes to network security, a layered approach is often the most effective solution.
Using Kubernetes network policy and SGPs in combination can provide a robust defense-in-depth strategy for applications running in EKS.

## Service mesh policy enforcement or Kubernetes network policy

A "service mesh" is a dedicated infrastructure layer that you can add to your applications. It allows you to transparently add capabilities like observability, traffic management, and security, without adding them to your own code.

A service mesh enforces policies at layer 7 (application) of the OSI model, whereas Kubernetes network policies operate at layer 3 (network) and layer 4 (transport). There are many offerings in this space, such as AWS App Mesh, Istio, and Linkerd.

### When to use a service mesh for policy enforcement

- You already have a service mesh in place
- You need more advanced capabilities such as traffic management, observability, and security
  - Traffic control, load balancing, circuit breaking, rate limiting, timeouts, etc.
  - Detailed insights into how well your services are performing (latency, error rates, requests per second, request volumes, etc.)
  - You want to implement and leverage a service mesh for security features such as mTLS
    Use network policies to provide a basic level of security and isolation between your pods, and then use a service mesh to add additional capabilities such as traffic management, observability, and security.

## Third party network policy engines

Consider a third party network policy engine when you have advanced policy requirements such as global network policies, support for DNS hostname based rules, layer 7 rules, service account based rules, and explicit deny/log actions. [Calico](https://docs.projectcalico.org/introduction/) is an open source policy engine from [Tigera](https://tigera.io) that works well with EKS. In addition to implementing the full set of Kubernetes network policy features, Calico supports extended network policies with a richer set of features, including support for layer 7 rules, e.g. HTTP, when integrated with Istio. Calico policies can be scoped to namespaces, pods, service accounts, or globally.
When you scope a policy to a service account, it associates a set of ingress/egress rules with that service account. With the proper RBAC rules in place, you can prevent teams from overriding these rules, allowing IT security professionals to safely delegate administration of namespaces. Likewise, the maintainers of [Cilium](https://cilium.readthedocs.io/en/stable/intro/) have extended network policies to include partial support for layer 7 rules, e.g. HTTP. Cilium also has support for DNS hostnames, which can be useful for restricting traffic between Kubernetes services/pods and resources that run within or outside of your VPC. By contrast, Calico Enterprise includes a feature that allows you to map a Kubernetes network policy to an AWS security group, as well as to DNS hostnames.

You can find a list of common Kubernetes network policies at [this GitHub project](https://github.com/ahmetb/kubernetes-network-policy-recipes).
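As a sketch of what such a recipe looks like, the following standard Kubernetes NetworkPolicy admits ingress to backend pods only from frontend pods; the namespace, labels, and port are hypothetical:

```yaml
# Allow only frontend pods to reach backend pods on TCP 8080;
# all other ingress to the selected pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Because the policy lists only `Ingress` in `policyTypes`, egress from the backend pods remains unrestricted, while any inbound connection not matched by the `from` clause is dropped.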
A similar set of rules for Calico is available in the [Calico documentation](https://docs.projectcalico.org/security/calico-network-policy).

### Migrating to the Amazon VPC CNI network policy engine

To maintain consistency and avoid unexpected pod communication behavior, it is recommended to deploy only one network policy engine in your cluster. If you want to migrate from a third party engine to the VPC CNI network policy engine, we recommend converting your existing third party NetworkPolicy CRDs into Kubernetes NetworkPolicy resources before enabling VPC CNI network policy support. And test the migrated policies in a separate test cluster before applying them to your production environment.
This allows you to identify and address any potential issues or inconsistencies in pod communication behavior.

#### Migration tool

To assist with the migration process, we have developed the [K8s Network Policy Migrator](https://github.com/awslabs/k8s-network-policy-migrator), a tool that converts your existing Calico/Cilium network policy CRDs into Kubernetes native network policies. After conversion, you can directly test the converted network policies on a new cluster running the VPC CNI network policy controller. The tool is designed to streamline the migration process and ensure a smooth transition.

!!! important
    The migration tool only converts third party policies that are compatible with the native Kubernetes network policy API.
    If you use advanced network policy features offered by third party plugins, the migration tool will skip them and report them.

Please note that the migration tool is currently not supported by the AWS VPC CNI network policy engineering team; it is made available to customers on a best-effort basis. We encourage you to utilize this tool to facilitate a smooth migration process. Should you encounter any issues or bugs with the tool, please create a [GitHub issue](https://github.com/awslabs/k8s-network-policy-migrator/issues).
Your feedback is valuable to us and helps us continually improve our services.

### Additional resources

- [Kubernetes & Tigera: Network Policies, Security, and Audit](https://youtu.be/lEY2WnRHYpg)
- [Calico Enterprise](https://www.tigera.io/tigera-products/calico-enterprise/)
- [Cilium](https://cilium.readthedocs.io/en/stable/intro/)
- [NetworkPolicy Editor](https://cilium.io/blog/2021/02/10/network-policy-editor) an interactive policy editor from Cilium
- [Inspektor Gadget advise network-policy gadget](https://www.inspektor-gadget.io/docs/latest/gadgets/advise/network-policy/) suggests network policies based on an analysis of the network traffic

## Encryption in transit

Applications that need to conform to PCI, HIPAA, or other regulations may need to encrypt data while it is in transit. Nowadays TLS is the de facto choice for encrypting traffic on the wire. TLS, like its predecessor SSL, provides secure communications over a network using cryptographic protocols.
TLS uses symmetric encryption, where the keys to encrypt the data are generated based on a shared secret negotiated at the beginning of the session. The following are a few ways that you can encrypt data in a Kubernetes environment.

### Nitro instances

Traffic exchanged between the following Nitro instance types (e.g. C5n, G4, I3en, M5dn, M5n, P3dn, R5dn, R5n) is automatically encrypted by default. When there's an intermediate hop, like a Transit Gateway or a load balancer, the traffic is not encrypted. See the [encryption in transit documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/data-protection.html#encryption-transit) for further details about encryption in transit as well as the complete list of instance types that support network encryption by default.

### Container Network Interfaces (CNIs)

[WeaveNet](https://www.weave.works/oss/net/) can be configured to automatically encrypt all traffic, using NaCl encryption for sleeve traffic (a slower packet forwarding approach) and IPsec ESP for fast datapath traffic.

### Service Mesh

Encryption in transit can also be implemented with a service mesh like App Mesh, Linkerd v2, or Istio. App Mesh supports [mTLS](https://docs.aws.amazon.com/app-mesh/latest/userguide/mutual-tls.html) with X.509 certificates or Envoy's Secret Discovery Service (SDS). Linkerd and Istio both support mTLS.

The [aws-app-mesh-examples](https://github.com/aws/aws-app-mesh-examples) GitHub repository provides walkthroughs for configuring mTLS using X.509 certificates and SPIRE as the SDS provider with your Envoy container:

- [Configuring mTLS using X.509 certificates](https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-mtls-file-based)
- [Configuring TLS using SPIRE (SDS)](https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-mtls-sds-based)

App Mesh also supports [TLS encryption](https://docs.aws.amazon.com/app-mesh/latest/userguide/virtual-node-tls.html) with a private certificate issued by [AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) (ACM) or with a certificate stored on the local file system of the virtual node.

The [aws-app-mesh-examples](https://github.com/aws/aws-app-mesh-examples) GitHub repository provides walkthroughs for configuring TLS using certificates issued by ACM and certificates packaged with your Envoy container:

- [Configuring TLS with file provided TLS certificates](https://github.com/aws/aws-app-mesh-examples/tree/master/walkthroughs/howto-tls-file-provided)
- [Configuring TLS with AWS Certificate Manager](https://github.com/aws/aws-app-mesh-examples/tree/master/walkthroughs/tls-with-acm)

### Ingress controllers and load balancers

An ingress controller is a way for you to intelligently route HTTP/S traffic that emanates from outside the cluster to services running inside the cluster. Oftentimes, these ingresses are fronted by a layer 4 load balancer, like the Classic Load Balancer (CLB) or the Network Load Balancer (NLB). Encrypted traffic can be terminated at different places within the network, e.g. at the load balancer, at the ingress resource, or at the pod. How and where you terminate your SSL connection will ultimately be dictated by your organization's network security policy.
For example, if you have a policy that requires end-to-end encryption, you will have to decrypt the traffic at the Pod. This places additional burden on the Pod, which must spend cycles establishing the initial handshake. Overall, SSL/TLS processing is very CPU intensive. Consequently, if you have the flexibility, try performing the SSL offload at the ingress or the load balancer.

#### Use encryption with AWS Elastic Load Balancers

The [AWS Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) (ALB) and [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) (NLB) both support transport encryption (SSL and TLS). The `alb.ingress.kubernetes.io/certificate-arn` annotation for the ALB lets you specify which certificates to add to the ALB.
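As an illustrative sketch (the hostname, service name, and certificate ARN are placeholders), an `Ingress` handled by the AWS Load Balancer Controller can attach an ACM certificate and terminate TLS on port 443 like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: default
  annotations:
    # Provision an internet-facing ALB and terminate TLS on port 443
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    # ACM certificate to attach to the HTTPS listener (placeholder ARN)
    alb.ingress.kubernetes.io/certificate-arn: <certificate ARN>
spec:
  ingressClassName: alb
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-app
            port:
              number: 80
```

Traffic from the ALB to the backend service in this sketch is plain HTTP; for end-to-end encryption the backend would also need to serve TLS.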
If you omit the annotation, the controller will attempt to add certificates to listeners that require them by matching the available [AWS Certificate Manager (ACM)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) certificates against the host field. Starting with EKS v1.15, you can use the `service.beta.kubernetes.io/aws-load-balancer-ssl-cert` annotation with the NLB, as shown in the example below.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app
  namespace: default
  labels:
    app: demo-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<certificate ARN>"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 80
    protocol: TCP
  selector:
    app: demo-app
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: default
  labels:
    app: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 443
              protocol: TCP
            - containerPort: 80
              protocol: TCP
```

The following are additional examples of SSL/TLS termination:

- [Securing EKS Ingress With Contour And Let's Encrypt The GitOps Way](https://aws.amazon.com/blogs/containers/securing-eks-ingress-contour-lets-encrypt-gitops/)
- [How do I terminate HTTPS traffic on Amazon EKS workloads with ACM?](https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/)

!!! caution
    Some ingresses, such as the AWS LB controller, implement SSL/TLS using annotations rather than as part of the Ingress spec.

### ACM Private CA with cert-manager

You can enable TLS and mTLS to secure your EKS application workloads at the ingress, on the pod, and between pods using ACM Private Certificate Authority (CA) and [cert-manager](https://cert-manager.io/), a popular Kubernetes add-on that distributes, renews, and revokes certificates. ACM Private CA is a highly available, secure, managed CA without the upfront and maintenance costs of running your own CA. If you are using the default Kubernetes certificate authority, ACM Private CA gives you an opportunity to improve your security posture and meet compliance requirements. ACM Private CA secures private keys in FIPS 140-2 Level 3 hardware security modules (very secure).
Compare that with the default CA, which stores keys encoded in memory (less secure). A centralized CA also gives you more control over, and improved auditability of, private certificates both inside and outside a Kubernetes environment.

#### Short-Lived CA Mode for Mutual TLS Between Workloads

When using ACM Private CA for mTLS in EKS, it is recommended that you use short-lived certificates with _short-lived CA mode_. Although it is possible to issue short-lived certificates in general-purpose CA mode, short-lived CA mode is more cost effective for use cases where new certificates need to be issued frequently (up to 75% cheaper than general-purpose mode).
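The usage mode is chosen when the CA is created. As a sketch (assuming `ca-config.json` holds your CA's key algorithm, signing algorithm, and subject), the AWS CLI selects short-lived CA mode with the `--usage-mode` flag:

```bash
# Create a root private CA in short-lived certificate mode;
# certificates it issues have a maximum validity of 7 days.
aws acm-pca create-certificate-authority \
  --certificate-authority-configuration file://ca-config.json \
  --certificate-authority-type "ROOT" \
  --usage-mode SHORT_LIVED_CERTIFICATE
```

The mode cannot be changed after creation, so decide between `GENERAL_PURPOSE` and `SHORT_LIVED_CERTIFICATE` up front.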
In addition, try to align the validity period of your private certificates with the lifetime of the pods in your EKS cluster. [Learn more about ACM Private CA and its benefits here](https://aws.amazon.com/certificate-manager/private-certificate-authority/).

#### ACM Setup Instructions

Start by creating a Private CA, following the procedures in the [ACM Private CA tech docs](https://docs.aws.amazon.com/acm-pca/latest/userguide/create-CA.html). Once you have a Private CA, install cert-manager using the [regular installation instructions](https://cert-manager.io/docs/installation/). After installing cert-manager, install the Private CA Kubernetes cert-manager plugin following the [setup instructions on GitHub](https://github.com/cert-manager/aws-privateca-issuer#setup). The plugin lets cert-manager request private certificates from ACM Private CA.

Now that you have a Private CA and an EKS cluster with cert-manager and the plugin installed, it's time to set permissions and create the issuer. Update the IAM permissions of the EKS node role to allow access to ACM Private CA.
Replace `<CA_ARN>` with the value from your Private CA.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "awspcaissuer",
            "Action": [
                "acm-pca:DescribeCertificateAuthority",
                "acm-pca:GetCertificate",
                "acm-pca:IssueCertificate"
            ],
            "Effect": "Allow",
            "Resource": "<CA_ARN>"
        }
    ]
}
```

[IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) can be used as well. Please see the Additional Resources section below for complete examples.

Create an Issuer in Amazon EKS by creating a Custom Resource Definition file named `cluster-issuer.yaml` with the text below, replacing `<CA_ARN>` and `<Region>` with your Private CA information.

```yaml
apiVersion: awspca.cert-manager.io/v1beta1
kind: AWSPCAClusterIssuer
metadata:
  name: demo-test-root-ca
spec:
  arn: <CA_ARN>
  region: <Region>
```

Deploy the Issuer you created.

```bash
kubectl apply -f cluster-issuer.yaml
```

Your EKS cluster is now configured to request certificates from your Private CA. You can use cert-manager's `Certificate` resource to issue certificates by setting the `issuerRef` field to the Private CA Issuer you created above.
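For illustration, a minimal `Certificate` whose `issuerRef` points at the `AWSPCAClusterIssuer` defined above might look like the following (the name, DNS name, and durations are placeholder values):

```yaml
kind: Certificate
apiVersion: cert-manager.io/v1
metadata:
  name: demo-cert
  namespace: default
spec:
  commonName: demo.example.com
  dnsNames:
    - demo.example.com
  duration: 24h0m0s
  renewBefore: 8h0m0s
  # cert-manager stores the issued certificate and key in this Secret
  secretName: demo-cert-tls
  issuerRef:
    group: awspca.cert-manager.io
    kind: AWSPCAClusterIssuer
    name: demo-test-root-ca
```

Once applied, cert-manager requests the certificate from ACM Private CA through the issuer plugin and renews it before `renewBefore` elapses.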
For more details on how to specify and request Certificate resources, please see cert-manager's [Certificate Resources guide](https://cert-manager.io/docs/usage/certificate/) and [these examples](https://github.com/cert-manager/aws-privateca-issuer/tree/main/config/samples/).

### ACM Private CA with Istio and cert-manager

If you are running Istio in your EKS cluster, you can disable the Istio control plane (specifically `istiod`) from functioning as the root Certificate Authority (CA) and configure ACM Private CA as the root CA for mTLS between your workloads. If you take this approach, consider using the _short-lived CA mode_ in ACM Private CA. Refer to the [previous section](#short-lived-ca-mode-for-mutual-tls-between-workloads) and this [blog post](https://aws.amazon.com/blogs/security/how-to-use-aws-private-certificate-authority-short-lived-certificate-mode) for more details.

#### How Certificate Signing Works in Istio (Default)

Workloads in Kubernetes are identified using service accounts.
If you don't specify a service account, Kubernetes automatically assigns one to your workload. The service account also automatically mounts an associated token, which the workload uses to authenticate against the Kubernetes API. The service account may be sufficient as an identity for Kubernetes, but Istio has its own identity management system and CA. When a workload starts up with its Envoy sidecar proxy, it needs an identity assigned from Istio in order to be deemed trustworthy and allowed to communicate with the other services in the mesh.

To get this identity from Istio, the `istio-agent` sends a request known as a certificate signing request (CSR) to the Istio control plane. This CSR contains the service account token so that the workload's identity can be verified before it is processed.
This verification process is handled by `istiod`, which acts as both the Registration Authority (RA) and the CA. The RA serves as a gatekeeper that ensures only verified CSRs make it through to the CA. Once the CSR is verified, it is forwarded to the CA, which then issues a certificate containing a [SPIFFE](https://spiffe.io/) identity tied to the service account. This certificate is called a SPIFFE Verifiable Identity Document (SVID). The SVID is assigned to the requesting service for identification purposes and to encrypt the traffic in transit between the communicating services.

![Default flow for Istio Certificate Signing Requests](./images/default-istio-csr-flow.png)

#### How Certificate Signing Works in Istio with ACM Private CA

You can use a cert-manager add-on called the Istio Certificate Signing Request agent ([istio-csr](https://cert-manager.io/docs/projects/istio-csr/)) to integrate Istio with ACM Private CA. This agent allows Istio workloads and control plane components to be secured with a cert-manager issuer, in this case ACM Private CA.
The _istio-csr_ agent provides the same service that _istiod_ provides in the default configuration: validating incoming CSRs. Except, after verification, it converts the requests into resources that cert-manager supports (i.e., integrations with external CA issuers).

Whenever there is a CSR from a workload, it is forwarded to _istio-csr_, which requests certificates from ACM Private CA. This communication between _istio-csr_ and ACM Private CA is enabled by the [AWS Private CA issuer plugin](https://github.com/cert-manager/aws-privateca-issuer). cert-manager uses this plugin to request TLS certificates from ACM Private CA. The issuer plugin communicates with the ACM Private CA service to request a signed certificate for the workload.
Once the certificate has been signed, it is returned to _istio-csr_, which reads the signed request and returns it to the workload that initiated the CSR.

![Flow for Istio Certificate Signing Requests with istio-csr](./images/istio-csr-with-acm-private-ca.png)

#### Istio with Private CA Setup Instructions

1. Start by following the [setup instructions in this section](#acm-private-ca-with-istio-and-cert-manager) to complete the following:
    - Create a Private CA
    - Install cert-manager
    - Install the issuer plugin
    - Set permissions and create an issuer. The issuer represents the CA and is used to sign `istiod` and mesh workload certificates. It communicates with ACM Private CA.
2. Create an `istio-system` namespace. This is where `istiod` certificates and other Istio resources will be deployed.
3. Install Istio CSR configured with the AWS Private CA issuer plugin. You can preserve the certificate signing requests for workloads to verify that they get approved and signed (`preserveCertificateRequests=true`).

    ```bash
    helm install -n cert-manager cert-manager-istio-csr jetstack/cert-manager-istio-csr \
      --set "app.certmanager.issuer.group=awspca.cert-manager.io" \
      --set "app.certmanager.issuer.kind=AWSPCAClusterIssuer" \
      --set "app.certmanager.issuer.name=<the-name-of-the-issuer-you-created>" \
      --set "app.certmanager.preserveCertificateRequests=true" \
      --set "app.server.maxCertificateDuration=48h" \
      --set "app.tls.certificateDuration=24h" \
      --set "app.tls.istiodCertificateDuration=24h" \
      --set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \
      --set "volumeMounts[0].name=root-ca" \
      --set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \
      --set "volumes[0].name=root-ca" \
      --set "volumes[0].secret.secretName=istio-root-ca"
    ```

4. Install Istio with custom configurations that replace `istiod` with `cert-manager istio-csr` as the certificate provider for the mesh.
    This process can be carried out using the [Istio Operator](https://tetrate.io/blog/what-is-istio-operator/).

    ```yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: istio
      namespace: istio-system
    spec:
      profile: "demo"
      hub: gcr.io/istio-release
      values:
        global:
          # Change certificate provider to cert-manager istio agent for istio agent
          caAddress: cert-manager-istio-csr.cert-manager.svc:443
      components:
        pilot:
          k8s:
            env:
              # Disable istiod CA Server functionality
            - name: ENABLE_CA_SERVER
              value: "false"
            overlays:
            - apiVersion: apps/v1
              kind: Deployment
              name: istiod
              patches:
                # Mount istiod serving and webhook certificate from Secret mount
              - path: spec.template.spec.containers.[name:discovery].args[7]
                value: "--tlsCertFile=/etc/cert-manager/tls/tls.crt"
              - path: spec.template.spec.containers.[name:discovery].args[8]
                value: "--tlsKeyFile=/etc/cert-manager/tls/tls.key"
              - path: spec.template.spec.containers.[name:discovery].args[9]
                value: "--caCertFile=/etc/cert-manager/ca/root-cert.pem"

              - path: spec.template.spec.containers.[name:discovery].volumeMounts[6]
                value:
                  name: cert-manager
                  mountPath: "/etc/cert-manager/tls"
                  readOnly: true
              - path: spec.template.spec.containers.[name:discovery].volumeMounts[7]
                value:
                  name: ca-root-cert
                  mountPath: "/etc/cert-manager/ca"
                  readOnly: true

              - path: spec.template.spec.volumes[6]
                value:
                  name: cert-manager
                  secret:
                    secretName: istiod-tls
              - path: spec.template.spec.volumes[7]
                value:
                  name: ca-root-cert
                  configMap:
                    defaultMode: 420
                    name: istio-ca-root-cert
    ```

5. Deploy the custom resource you created above.

    ```bash
    istioctl operator init
    kubectl apply -f istio-custom-config.yaml
    ```

6. You can now deploy a workload to the mesh in your EKS cluster and [enforce mTLS](https://istio.io/latest/docs/reference/config/security/peer_authentication/).

![Istio Certificate Signing Requests](./images/istio-csr-requests.png)

## Tools and Resources

- [Amazon EKS Security Immersion Workshop - Network Security](https://catalog.workshops.aws/eks-security-immersionday/en-US/6-network-security)
- [How to implement cert-manager and the ACM Private CA plugin to enable TLS in EKS](https://aws.amazon.com/blogs/security/tls-enabled-kubernetes-clusters-with-acm-private-ca-and-amazon-eks-2/)
- [Setting up end-to-end TLS encryption on Amazon EKS with the new AWS Load Balancer Controller and ACM Private CA](https://aws.amazon.com/blogs/containers/setting-up-end-to-end-tls-encryption-on-amazon-eks-with-the-new-aws-load-balancer-controller/)
- [Private CA Kubernetes cert-manager plugin on GitHub](https://github.com/cert-manager/aws-privateca-issuer)
- [Private CA Kubernetes cert-manager plugin user guide](https://docs.aws.amazon.com/acm-pca/latest/userguide/PcaKubernetes.html)
- [How to use AWS Private Certificate Authority short-lived certificate mode](https://aws.amazon.com/blogs/security/how-to-use-aws-private-certificate-authority-short-lived-certificate-mode)
- [egress-operator](https://github.com/monzo/egress-operator) An operator and DNS plugin to control egress traffic from your cluster without protocol inspection
- [NeuVector by SUSE](https://www.suse.com/neuvector/) open source, zero-trust container security platform, provides policy network rules, data loss prevention (DLP), web application firewall (WAF), and network threat signatures
    ingress         egress               Amazon VPC CNI                                                                                               API                                     API                  https   kubernetes io docs concepts services networking network policies                                                          PolicyType Ingress                                                                                                                                                                           attention     EKS                   VPC CNI                                    VPC CNI                    vpc cni        ENABLE NETWORK POLICY        true                            Amazon EKS          https   docs aws amazon com eks latest userguide managing vpc cni html                                                                deny          RBAC                                                                                                               yaml apiVersion  networking k8s io v1 kind  NetworkPolicy metadata    name  default deny   namespace  default spec    podSelector       policyTypes      Ingress     Egress        default deny     images default deny jpg        tip             Tufin  https   orca tufin io netpol                              DNS                                                     CoreDNS                                                                            yaml apiVersion  networking k8s io v1 kind  NetworkPolicy metadata    name  allow dns access   namespace  default spec    podSelector      matchLabels       policyTypes      Egress   egress      to        namespaceSelector          matchLabels            kubernetes io metadata name  kube system       podSelector          matchLabels            k8s app  kube dns     ports        protocol  UDP       port  53        allow dns access    images allow dns access jpg                                                                                              
                   80             client one     app one                                                                          yaml apiVersion  networking k8s io v1 kind  NetworkPolicy metadata    name  allow ingress app one   namespace  default spec    podSelector      matchLabels        k8s app  app one   policyTypes      Ingress   ingress      from        podSelector          matchLabels            k8s app  client one     ports        protocol  TCP       port  80        allow ingress app one    images allow ingress app one png                                                               https   networkpolicy io                                                                                     EKS                                                                                                                                                                                                                                                                                         VPC CNI                                                         SDK                                                                                                                                                                                       Open Policy Agent OPA                                       OPA                                                                                 k8s app  sample app             k8s                     javascript package kubernetes admission import data kubernetes networkpolicies  deny msg        input request kind kind     Pod      pod label value     v  k8s app     v    input request object metadata labels      contains label pod label value   sample app       np label value     v  k8s app     v    networkpolicies    spec podSelector matchLabels      not contains label np label value   sample app       msg   sprintf  The Pod  v could not be created because it is missing an associated Network Policy     input request object metadata name     
contains label arr  val        arr       val                                vpc network policy controller   node agent          EKS                                                            CloudWatch                 CloudWatch Log Insights  https   docs aws amazon com AmazonCloudWatch latest logs AnalyzingLogData html                                                                                                           Amazon VPC CNI       EKS                         Amazon Cloudwatch  https   aws amazon com cloudwatch                        CloudWatch Container Insights  https   docs aws amazon com AmazonCloudWatch latest monitoring ContainerInsights html                                                 Amazon VPC CNI       eBPF                               SDK         SDK   aws node                          opt cni bin             SDK                       SDK  eBPF                                   shell sudo  opt cni bin aws eks na cli ebpf progs                               AWS VPC Flow Logs  https   docs aws amazon com vpc latest userguide flow logs html   VPC                                IP                                          VPC                                                        IP                                             Calico Enterprise                                                                                   EKS   AWS VPC        https   docs aws amazon com vpc latest userguide VPC SecurityGroups html  SG                                                                    VPC          IP                            EKS                1 14 eks 3                                               EKS                                                                                  SG                          1 14   EKS    eks 3      EKS                                                                                AWS     https   docs aws amazon com eks latest userguide sec group reqs html                                                 
At a minimum, the cluster security group must allow inbound traffic on port 443 so the nodes can reach the Kubernetes API, and inbound traffic from the control plane to the kubelets on port 10250, so that the API server can reach the kubelet for logs, exec, and port-forwarding.

If your pods need to communicate with other VPC resources, such as an RDS database, you can use [security groups for pods](https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html). With security groups for pods, you assign an existing security group to a set of pods through the `SecurityGroupPolicy` custom resource, selecting pods either by `PodSelector` or by `ServiceAccountSelector`.

!!! warning
    If you do not specify either selector in a `SecurityGroupPolicy`, the policy applies to all pods in its namespace. Review the [considerations](https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html#security-groups-pods-considerations) before using the feature.

!!! important
    The security group assigned to the pods must allow outbound traffic on port 53 to the cluster security group so that the pods can resolve DNS.

!!! important
    Security groups for pods count against the [VPC quotas for security groups](https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html#vpc-limits-security-groups).

!!! important
    The security group must allow inbound traffic from the kubelet so that liveness and readiness probes continue to work.

!!! important
    Security groups for pods rely on [ENI trunking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-instance-eni.html), an EC2 feature that attaches branch network interfaces to a trunk ENI, increasing the number of ENIs available per instance. Pods that use security groups must therefore run on [supported instance types](https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html#supported-instance-types).

For controlling egress to AWS services by name rather than by IP address, Cilium supports DNS-based policies and Calico Enterprise can map policies to AWS services; an [Istio egress gateway](https://istio.io/blog/2019/egress-traffic-control-in-istio-part-1/) offers similar control for meshes. Kubernetes network policies operate at OSI layers 3 and 4, matching IP addresses and ports, whereas security groups for pods (SGPs) let you reference AWS-native constructs directly: a pod that needs access to an Amazon RDS database can be assigned a security group that the RDS instance's security group already trusts, without having to run the pod on a dedicated EC2 node group.

When `POD_SECURITY_GROUP_ENFORCING_MODE=strict`, traffic to and from a pod that uses an SGP is enforced exclusively by the assigned security groups.

!!! caution
    Before switching between the `strict` and `standard` enforcing modes, understand how each mode interacts with the rest of your EKS networking configuration; in `standard` mode some traffic is still evaluated by the node's rules.

Security groups and network policies operate at layers 3 and 4. For controls at OSI layer 7, consider a service mesh such as AWS App Mesh, Istio, or Linkerd.
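To make the `SecurityGroupPolicy` resource concrete, here is a minimal sketch that assigns a security group to pods selected by label. The namespace, labels, and security group ID are placeholders for illustration, not values from this guide:

```yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: db-client-sgp
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db-client
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0
```

Pods in `default` labeled `app: db-client` that are created after this policy is applied receive the referenced security group.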
Service meshes additionally provide mutual TLS (mTLS) between workloads, giving you encryption in transit as well as identity-based authentication.

!!! tip
    If you need DNS-aware or layer 7 controls, evaluate the policy engines below rather than relying on security groups or native network policies alone.

- [Calico](https://docs.projectcalico.org/introduction/) is an open source policy engine from [Tigera](https://tigera.io) that works well with EKS. Calico implements the full set of Kubernetes network policy features and extends them with a richer model, including layer 7 rules (e.g. HTTP) when used with Istio. With the proper RBAC rules in place, Calico policies scoped to service accounts cannot be overridden by application teams, allowing IT security professionals to safely delegate administration of namespaces.
- [Cilium](https://cilium.readthedocs.io/en/stable/intro/) is an eBPF-based engine whose policies can match HTTP and other layer 7 protocols. Cilium also supports DNS-based rules, which is useful for restricting traffic to services that run outside the VPC; Calico Enterprise offers comparable AWS-service and DNS-aware policies.
- The [kubernetes-network-policy-recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) repository on GitHub collects common policy patterns, most of which also work with Calico; the [Calico network policy documentation](https://docs.projectcalico.org/security/calico-network-policy) covers its extended model.

If you currently run a third-party (3P) policy engine and want to move to the Amazon VPC CNI's built-in network policy support, you will need to migrate your existing 3P custom resources (e.g. Calico or Cilium CRDs) to native Kubernetes `NetworkPolicy` resources.
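A useful starting pattern from these recipe collections is a namespace-wide default deny. The sketch below (the namespace name is an assumption for illustration) blocks all ingress to pods in `default` until more specific policies allow it:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

The empty `podSelector` matches every pod in the namespace; because the policy declares `Ingress` but lists no `ingress` rules, all inbound traffic is denied.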
The [K8s Network Policy Migrator](https://github.com/awslabs/k8s-network-policy-migrator) converts existing Calico and Cilium custom resources into native Kubernetes `NetworkPolicy` resources that the VPC CNI can enforce.

!!! important
    The migrator only converts policies that are compatible with the native Kubernetes network policy API. Review the converted policies carefully before applying them, and report any problems on the project's [GitHub issues](https://github.com/awslabs/k8s-network-policy-migrator/issues).

### Additional resources

- [Kubernetes & Tigera: Network Policies, Security, and Audit](https://youtu.be/lEY2WnRHYpg)
- [Calico Enterprise](https://www.tigera.io/tigera-products/calico-enterprise/)
- [Cilium](https://cilium.readthedocs.io/en/stable/intro/)
- [NetworkPolicy Editor](https://cilium.io/blog/2021/02/10/network-policy-editor/), an interactive policy editor from Cilium
- [Inspektor Gadget advise network-policy gadget](https://www.inspektor-gadget.io/docs/latest/gadgets/advise_network_policy), which suggests network policies based on observed traffic

## Encryption in transit

Applications that must conform to compliance programs such as PCI or HIPAA may need to encrypt data while it is in transit. TLS (the successor to SSL) encrypts traffic on the wire. On EKS you have three options:

- **Nitro instances.** Traffic exchanged between certain [Nitro instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/data-protection.html#encryption-transit), e.g. C5n, G4, I3en, M5dn, M5n, P3dn, R5dn, and R5n, is automatically encrypted, including when it transits a Transit Gateway.
- **Container network interfaces (CNIs).** [WeaveNet](https://www.weave.works/oss/net/) can encrypt all traffic, using NaCl for sleeve traffic and IPsec ESP for fast datapath traffic.
- **Service meshes**, covered next.
### Service meshes

A service mesh such as AWS App Mesh, Linkerd v2.x, or Istio can provide mTLS between pods. App Mesh supports [mTLS](https://docs.aws.amazon.com/app-mesh/latest/userguide/mutual-tls.html) with X.509 certificates or with Envoy's Secret Discovery Service (SDS); Linkerd and Istio support mTLS as well. The [aws-app-mesh-examples](https://github.com/aws/aws-app-mesh-examples) repository on GitHub offers walkthroughs for both approaches:

- [Configuring mTLS using X.509 certificates provided as files](https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-mtls-file-based)
- [Configuring mTLS using SPIRE as the Envoy SDS source](https://github.com/aws/aws-app-mesh-examples/tree/main/walkthroughs/howto-k8s-mtls-sds-based)

App Mesh also supports [TLS encryption](https://docs.aws.amazon.com/app-mesh/latest/userguide/virtual-node-tls.html) with certificates issued by [AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) (ACM) or stored on the virtual node's local file system. Again, aws-app-mesh-examples contains walkthroughs:

- [Configuring TLS with file-provided certificates](https://github.com/aws/aws-app-mesh-examples/tree/master/walkthroughs/howto-tls-file-provided)
- [Configuring TLS with AWS Certificate Manager](https://github.com/aws/aws-app-mesh-examples/tree/master/walkthroughs/tls-with-acm)

### Terminating TLS at the load balancer

You can also terminate HTTP(S) traffic at a CLB (Classic Load Balancer) or NLB (Network Load Balancer) operating at layer 4. Offloading SSL/TLS processing to the load balancer rather than to the pod avoids spending pod CPU on SSL handshakes. AWS Elastic Load Balancing offers the [AWS Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) (ALB) and the [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) (NLB), both of which can terminate SSL/TLS. For an ALB, the `alb.ingress.kubernetes.io/certificate-arn` annotation points the listener at a certificate, typically one managed by [AWS Certificate Manager (ACM)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html). Since EKS v1.15, an NLB can terminate TLS via the `service.beta.kubernetes.io/aws-load-balancer-ssl-cert` annotation, as in the following example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app
  namespace: default
  labels:
    app: demo-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<certificate ARN>"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 80
    protocol: TCP
  selector:
    app: demo-app
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
  namespace: default
  labels:
    app: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 443
          protocol: TCP
        - containerPort: 80
          protocol: TCP
```

Further examples of SSL/TLS termination:

- [Securing EKS Ingress With Contour And Let's Encrypt The GitOps Way](https://aws.amazon.com/blogs/containers/securing-eks-ingress-contour-lets-encrypt-gitops/)
- [How do I terminate HTTPS traffic on Amazon EKS workloads with ACM?](https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/)

!!! caution
    When TLS is terminated at the AWS load balancer, traffic between the load balancer and the pods is unencrypted unless you also configure SSL/TLS on the backend.

### ACM Private CA with cert-manager

You can enable TLS and mTLS to secure your EKS application workloads with ACM Private Certificate Authority (CA) and [cert-manager](https://cert-manager.io/), a popular Kubernetes add-on that distributes, renews, and revokes certificates. ACM Private CA is a highly available, managed private CA backed by FIPS 140-2 Level 3 hardware security modules, so you avoid the upfront and maintenance cost of operating your own CA. Its short-lived certificate mode, well suited to the frequently rotated certificates used for TLS and mTLS inside an EKS cluster, is offered at roughly 75% lower cost than general-purpose certificates. See the [ACM Private CA product page](https://aws.amazon.com/certificate-manager/private-certificate-authority/) for details.

To set it up:

1. Create a Private CA by following the [ACM Private CA documentation](https://docs.aws.amazon.com/acm-pca/latest/userguide/create-CA.html).
2. Install cert-manager following its [installation instructions](https://cert-manager.io/docs/installation/).
3. Install the cert-manager plugin for Private CA [from GitHub](https://github.com/cert-manager/aws-privateca-issuer#setup).
4. Create an IAM policy that allows the plugin to call your Private CA, substituting your CA's ARN:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "awspcaissuer",
      "Action": [
        "acm-pca:DescribeCertificateAuthority",
        "acm-pca:GetCertificate",
        "acm-pca:IssueCertificate"
      ],
      "Effect": "Allow",
      "Resource": "<CA ARN>"
    }
  ]
}
```

Attach the policy to the plugin's service account using [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html).
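With IRSA, the IAM policy is attached to a role that the plugin's Kubernetes service account assumes. A minimal sketch of the annotated service account follows; the account ID, role name, and namespace are assumptions for illustration:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-privateca-issuer
  namespace: aws-privateca-issuer
  annotations:
    # Hypothetical role that carries the acm-pca policy shown earlier
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/pca-issuer-role
```

The EKS pod identity webhook injects temporary credentials for this role into pods that use the service account.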
Next, create a `cluster-issuer.yaml` custom resource, substituting your CA's ARN and AWS Region, to make the Private CA available to your Amazon EKS cluster:

```yaml
apiVersion: awspca.cert-manager.io/v1beta1
kind: AWSPCAClusterIssuer
metadata:
  name: demo-test-root-ca
spec:
  arn: <CA ARN>
  region: <Region>
```

Deploy the Issuer you created:

```bash
kubectl apply -f cluster-issuer.yaml
```

Your EKS cluster can now request certificates from the Private CA: create cert-manager `Certificate` resources whose `IssuerRef` points at the issuer above, and cert-manager will issue and manage certificates signed by the CA. See the [cert-manager Certificate documentation](https://cert-manager.io/docs/usage/certificate/) and the [plugin samples](https://github.com/cert-manager/aws-privateca-issuer/tree/main/config/samples) for examples.

### ACM Private CA with Istio and cert-manager

If you run Istio on EKS, `istiod` by default acts as the certificate authority (CA) that signs workload certificates for mTLS. Instead, you can use ACM Private CA as the root of trust, issuing the workloads' `tls` and `ca` material as short-lived certificates as described in [How to use AWS Private Certificate Authority short-lived certificate mode](https://aws.amazon.com/blogs/security/how-to-use-aws-private-certificate-authority-short-lived-certificate-mode/).

In Istio's default flow, every workload in the mesh is assigned an identity that its Envoy proxy presents to peers. The istio-agent running alongside Envoy generates a certificate signing request (CSR) for the workload's identity and sends it to istiod. Acting as a Registration Authority (RA), istiod validates the CSR and forwards it to the configured CA, which signs it and returns the certificate. Identities follow the [SPIFFE](https://spiffe.io/) specification, and the signed document is a SPIFFE Verifiable Identity Document (SVID), which the workload presents for mutual authentication.
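To request a certificate from the issuer, a cert-manager `Certificate` references it via `issuerRef`. This is a sketch assuming the issuer name `demo-test-root-ca` from above; the DNS name and secret name are placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert
  namespace: default
spec:
  commonName: example.com
  dnsNames:
    - example.com
  duration: 24h0m0s
  renewBefore: 8h0m0s
  secretName: example-cert-tls
  issuerRef:
    group: awspca.cert-manager.io
    kind: AWSPCAClusterIssuer
    name: demo-test-root-ca
```

cert-manager stores the issued key pair in the `example-cert-tls` Secret and renews it 8 hours before the 24-hour certificate expires.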
The default Istio issuance flow is illustrated in `images/default-istio-csr-flow.png`.

To use ACM Private CA as the CA for Istio workload certificates instead, install [istio-csr](https://cert-manager.io/docs/projects/istio-csr/), cert-manager's agent for Istio. istio-csr replaces istiod as the signer of workload CSRs: it validates incoming CSRs from the istio-agents and turns them into cert-manager `CertificateRequest` resources, which the [AWS Private CA issuer plugin](https://github.com/cert-manager/aws-privateca-issuer) fulfills against ACM Private CA. istio-csr also serves its own TLS endpoint so the agents can reach it securely. The resulting flow is illustrated in `images/istio-csr-with-acm-private-ca.png`.

The high-level steps are: reuse the ACM Private CA, cert-manager, and issuer setup from the previous section before installing Istio; have cert-manager issue the certificate that istiod will serve, signed by the ACM Private CA and stored in the `istio-system` namespace; then install istio-csr backed by the AWS Private CA issuer, setting `preserveCertificateRequests` to `true` so the CertificateRequests remain visible for inspection:

```bash
helm install -n cert-manager cert-manager-istio-csr jetstack/cert-manager-istio-csr \
  --set "app.certmanager.issuer.group=awspca.cert-manager.io" \
  --set "app.certmanager.issuer.kind=AWSPCAClusterIssuer" \
  --set "app.certmanager.issuer.name=<the name of the issuer you created>" \
  --set "app.certmanager.preserveCertificateRequests=true" \
  --set "app.server.maxCertificateDuration=48h" \
  --set "app.tls.certificateDuration=24h" \
  --set "app.tls.istiodCertificateDuration=24h" \
  --set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \
  --set "volumeMounts[0].name=root-ca" \
  --set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \
  --set "volumes[0].name=root-ca" \
  --set "volumes[0].secret.secretName=istio-root-ca"
```

Once istio-csr is running, install Istio with a custom configuration that replaces istiod's built-in CA with cert-manager-istio-csr. This can be done with the [Istio Operator](https://tetrate.io/blog/what-is-istio-operator/):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio
  namespace: istio-system
spec:
  profile: "demo"
  hub: gcr.io/istio-release
  values:
    global:
      # Change certificate provider to cert-manager istio agent for istio agent
      caAddress: cert-manager-istio-csr.cert-manager.svc:443
  components:
    pilot:
      k8s:
        env:
          # Disable istiod CA Server functionality
          - name: ENABLE_CA_SERVER
            value: "false"
        overlays:
          - apiVersion: apps/v1
            kind: Deployment
            name: istiod
            patches:
              # Mount istiod serving and webhook certificate from Secret mount
              - path: spec.template.spec.containers.[name:discovery].args[7]
                value: "--tlsCertFile=/etc/cert-manager/tls/tls.crt"
              - path: spec.template.spec.containers.[name:discovery].args[8]
                value: "--tlsKeyFile=/etc/cert-manager/tls/tls.key"
              - path: spec.template.spec.containers.[name:discovery].args[9]
                value: "--caCertFile=/etc/cert-manager/ca/root-cert.pem"
              - path: spec.template.spec.containers.[name:discovery].volumeMounts[6]
                value:
                  name: cert-manager
                  mountPath: "/etc/cert-manager/tls"
                  readOnly: true
              - path: spec.template.spec.containers.[name:discovery].volumeMounts[7]
                value:
                  name: ca-root-cert
                  mountPath: "/etc/cert-manager/ca"
                  readOnly: true
              - path: spec.template.spec.volumes[6]
                value:
                  name: cert-manager
                  secret:
                    secretName: istiod-tls
              - path: spec.template.spec.volumes[7]
                value:
                  name: ca-root-cert
                  configMap:
                    defaultMode: 420
                    name: istio-ca-root-cert
```

Initialize the operator and apply the configuration:

```bash
istioctl operator init
kubectl apply -f istio-custom-config.yaml
```

Finally, verify from a workload in the EKS cluster that certificates are issued by the Private CA, and enforce [mTLS with an Istio PeerAuthentication policy](https://istio.io/latest/docs/reference/config/security/peer_authentication/). The resulting certificate request flow is illustrated in `images/istio-csr-requests.png`.

### Tools and resources

- [Amazon EKS Security Immersion Workshop - Network Security](https://catalog.workshops.aws/eks-security-immersionday/en-US/6-network-security)
- [TLS-enabled Kubernetes clusters with ACM Private CA and Amazon EKS](https://aws.amazon.com/blogs/security/tls-enabled-kubernetes-clusters-with-acm-private-ca-and-amazon-eks-2/)
- [Setting up end-to-end TLS encryption on Amazon EKS with the new AWS Load Balancer Controller](https://aws.amazon.com/blogs/containers/setting-up-end-to-end-tls-encryption-on-amazon-eks-with-the-new-aws-load-balancer-controller/)
- [Private CA issuer plugin for cert-manager on GitHub](https://github.com/cert-manager/aws-privateca-issuer)
- [Securing Kubernetes with ACM Private CA](https://docs.aws.amazon.com/acm-pca/latest/userguide/PcaKubernetes.html)
- [How to use AWS Private Certificate Authority short-lived certificate mode](https://aws.amazon.com/blogs/security/how-to-use-aws-private-certificate-authority-short-lived-certificate-mode/)
- [egress-operator](https://github.com/monzo/egress-operator), an operator and DNS plugin to control egress traffic from the cluster without protocol inspection
- [NeuVector by SUSE](https://www.suse.com/neuvector/), an open source, zero-trust container security platform that provides policy network rules, data loss prevention (DLP), web application firewall (WAF), and network threat signatures
"}
{"questions":"eks exclude true search AWS AWS AWS AWS EC2 AWS","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# \uc778\uc99d \ubc0f \uc811\uadfc \uad00\ub9ac\n\n[AWS IAM(Identity and Access Management)](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/introduction.html)\uc740 \uc778\uc99d \ubc0f \uad8c\ud55c \ubd80\uc5ec\ub77c\ub294 \ub450 \uac00\uc9c0 \ud544\uc218 \uae30\ub2a5\uc744 \uc218\ud589\ud558\ub294 AWS \uc11c\ube44\uc2a4\uc785\ub2c8\ub2e4. \uc778\uc99d\uc5d0\ub294 \uc790\uaca9 \uc99d\uba85 \ud655\uc778\uc774 \ud3ec\ud568\ub418\ub294 \ubc18\uba74 \uad8c\ud55c \ubd80\uc5ec\ub294 AWS \ub9ac\uc18c\uc2a4\uc5d0\uc11c \uc218\ud589\ud560 \uc218 \uc788\ub294 \uc791\uc5c5\uc744 \uad00\ub9ac\ud569\ub2c8\ub2e4. AWS \ub0b4\uc5d0\uc11c \ub9ac\uc18c\uc2a4\ub294 \ub2e4\ub978 AWS \uc11c\ube44\uc2a4(\uc608: EC2) \ub610\ub294 [IAM \uc0ac\uc6a9\uc790](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id.html#id_iam-users) \ub610\ub294 [IAM \uc5ed\ud560](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id.html#id_iam-roles)\uacfc \uac19\uc740 AWS [\ubcf4\uc548 \uc8fc\uccb4](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/intro-structure.html#intro-structure-principal)\uc77c \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ub9ac\uc18c\uc2a4\uac00 \uc218\ud589\ud560 \uc218 \uc788\ub294 \uc791\uc5c5\uc744 \uad00\ub9ac\ud558\ub294 \uaddc\uce59\uc740 [IAM \uc815\ucc45]( https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/access_policies.html)\uc73c\ub85c \ud45c\ud604\ub429\ub2c8\ub2e4.\n\n## EKS \ud074\ub7ec\uc2a4\ud130\uc5d0 \ub300\ud55c \uc811\uadfc \uc81c\uc5b4\n\n\ucfe0\ubc84\ub124\ud2f0\uc2a4 \ud504\ub85c\uc81d\ud2b8\ub294 \ubca0\uc5b4\ub7ec(Bearer) \ud1a0\ud070, X.509 \uc778\uc99d\uc11c, OIDC \ub4f1 kube-apiserver \uc11c\ube44\uc2a4\uc5d0 \ub300\ud55c \uc694\uccad\uc744 \uc778\uc99d\ud558\uae30 \uc704\ud55c \ub2e4\uc591\ud55c \ubc29\uc2dd\uc744 \uc9c0\uc6d0\ud569\ub2c8\ub2e4. 
EKS\ub294 \ud604\uc7ac [\uc6f9\ud6c5(Webhook) \ud1a0\ud070 \uc778\uc99d](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/authentication\/#webhook-token-authentication), [\uc11c\ube44\uc2a4 \uc5b4\uce74\uc6b4\ud2b8 \ud1a0\ud070]( https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/authentication\/#service-account-tokens) \ubc0f 2021\ub144 2\uc6d4 21\uc77c\ubd80\ud130 OIDC \uc778\uc99d\uc744 \uae30\ubcf8\uc801\uc73c\ub85c \uc9c0\uc6d0\ud569\ub2c8\ub2e4.  \n\n\uc6f9\ud6c5 \uc778\uc99d \ubc29\uc2dd\uc740 \ubca0\uc5b4\ub7ec \ud1a0\ud070\uc744 \ud655\uc778\ud558\ub294 \uc6f9\ud6c5\uc744 \ud638\ucd9c\ud569\ub2c8\ub2e4. EKS\uc5d0\uc11c \uc774\ub7f0 \ubca0\uc5b4\ub7ec \ud1a0\ud070\uc740 `kubectl` \uba85\ub839 \uc2e4\ud589 \uc2dc AWS CLI \ub610\ub294 [aws-iam-authenticator](https:\/\/github.com\/kubernetes-sigs\/aws-iam-authenticator) \ud074\ub77c\uc774\uc5b8\ud2b8\uc5d0 \uc758\ud574 \uc0dd\uc131\ub429\ub2c8\ub2e4. \uba85\ub839\uc744 \uc2e4\ud589\ud558\uba74 \ud1a0\ud070\uc740 kube-apiserver\ub85c \uc804\ub2ec\ub418\uace0 \ub2e4\uc2dc \uc6f9\ud6c5\uc73c\ub85c \ud3ec\uc6cc\ub529\ub429\ub2c8\ub2e4. \uc694\uccad\uc774 \uc62c\ubc14\ub978 \ud615\uc2dd\uc774\uba74 \uc6f9\ud6c5\uc740 \ud1a0\ud070 \ubcf8\ubb38\uc5d0 \ud3ec\ud568\ub41c \ubbf8\ub9ac \uc11c\uba85\ub41c URL\uc744 \ud638\ucd9c\ud569\ub2c8\ub2e4. 
\uc774 URL\uc740 \uc694\uccad \uc11c\uba85\uc758 \uc720\ud6a8\uc131\uc744 \uac80\uc0ac\ud558\uace0 \uc0ac\uc6a9\uc790 \uc815\ubcf4(\uc0ac\uc6a9\uc790 \uc5b4\uce74\uc6b4\ud2b8, ARN \ubc0f \uc0ac\uc6a9\uc790 ID \ub4f1)\ub97c kube-apiserver\uc5d0 \ubc18\ud658\ud569\ub2c8\ub2e4.\n\n\uc778\uc99d \ud1a0\ud070\uc744 \uc218\ub3d9\uc73c\ub85c \uc0dd\uc131\ud558\ub824\uba74 \ud130\ubbf8\ub110 \ucc3d\uc5d0 \ub2e4\uc74c \uba85\ub839\uc744 \uc785\ub825\ud569\ub2c8\ub2e4.\n\n```bash\naws eks get-token --cluster-name <\ud074\ub7ec\uc2a4\ud130_\uc774\ub984>\n```\n\n\ud504\ub85c\uadf8\ub798\ubc0d \ubc29\uc2dd\uc73c\ub85c \ud1a0\ud070\uc744 \uc5bb\uc744 \uc218\ub3c4 \uc788\uc2b5\ub2c8\ub2e4. \ub2e4\uc74c\uc740 Go \uc5b8\uc5b4\ub85c \uc791\uc131\ub41c \uc608\uc785\ub2c8\ub2e4.\n\n```golang\npackage main\n\nimport (\n \"fmt\"\n \"log\"\n \"sigs.k8s.io\/aws-iam-authenticator\/pkg\/token\"\n)\n\nfunc main()  {\n g, _ := token.NewGenerator(false, false)\n tk, err := g.Get(\"<cluster_name>\")\n if err != nil {\n  log.Fatal(err)\n }\n fmt.Println(tk)\n}\n```\n\n\ucd9c\ub825 \uc751\ub2f5\uc740 \ub2e4\uc74c\uacfc \ud615\ud0dc\ub97c \uac00\uc9d1\ub2c8\ub2e4.\n\n```json\n{\n  \"kind\": \"ExecCredential\", \n  \"apiVersion\": \"client.authentication.k8s.io\/v1alpha1\", \n  \"spec\": {}, \n  \"status\": {\n    \"expirationTimestamp\": \"2020-02-19T16:08:27Z\", \n    \"token\": \"k8s-aws-v1.aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8_QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNSZYLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFKTkdSSUxLTlNSQzJXNVFBJTJGMjAyMDAyMTklMkZ1cy1lYXN0LTElMkZzdHMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDIwMDIxOVQxNTU0MjdaJlgtQW16LUV4cGlyZXM9NjAmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JTNCeC1rOHMtYXdzLWlkJlgtQW16LVNpZ25hdHVyZT0yMjBmOGYzNTg1ZTMyMGRkYjVlNjgzYTVjOWE0MDUzMDFhZDc2NTQ2ZjI0ZjI4MTExZmRhZDA5Y2Y2NDhhMzkz\"\n  }\n}\n```\n\n\uac01 \ud1a0\ud070\uc740 `k8s-aws-v1.`\uc73c\ub85c \uc2dc\uc791\ud558\uace0 base64\ub85c \uc778\ucf54\ub529\ub41c 
\ubb38\uc790\uc5f4\uc774 \ub4a4\ub530\ub985\ub2c8\ub2e4. \ubb38\uc790\uc5f4\uc740 \ub514\ucf54\ub529\ud558\uba74 \ub2e4\uc74c\uacfc \uac19\uc740 \ud615\ud0dc\ub97c \uac00\uc9d1\ub2c8\ub2e4.\n\n```bash\nhttps:\/\/sts.amazonaws.com\/?Action=GetCallerIdentity&Version=2011-06-15&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=0KIAJPFRILKNSRC2W5QA%2F20200219%2F$REGION%2Fsts%2Faws4_request&X-Amz-Date=20200219T155427Z&X-Amz-Expires=60&X-Amz-SignedHeaders=host%3Bx-k8s-aws-id&X-Amz-Signature=BB0f8f3285e320ddb5e683a5c9a405301ad76546f24f28111fdad09cf648a393\n```\n\n\ud1a0\ud070\uc740 Amazon \uc790\uaca9 \uc99d\uba85 \ud06c\ub9ac\ub374\uc15c \ubc0f \uc11c\uba85\uc774 \ud3ec\ud568\ub41c \ubbf8\ub9ac \uc11c\uba85\ub41c URL\ub85c \uad6c\uc131\ub429\ub2c8\ub2e4. \uc790\uc138\ud55c \ub0b4\uc6a9\uc740 [GetCallerIdentity API \ubb38\uc11c](https:\/\/docs.aws.amazon.com\/STS\/latest\/APIReference\/API_GetCallerIdentity.html)\ub97c \ucc38\uc870\ud569\ub2c8\ub2e4.\n\n\ud1a0\ud070\uc740 15\ubd84\uc758 TTL \uc218\uba85\uc774 \uc788\uace0, \uc218\uba85 \uc885\ub958 \ud6c4\uc5d0\ub294 \uc0c8 \ud1a0\ud070\uc744 \uc0dd\uc131\ud574\uc57c \ud569\ub2c8\ub2e4. \uc774\ub294 `kubectl`\uacfc \uac19\uc740 \ud074\ub77c\uc774\uc5b8\ud2b8\ub97c \uc0ac\uc6a9\ud560 \ub54c \uc790\ub3d9\uc73c\ub85c \ucc98\ub9ac \ub418\uc9c0\ub9cc \ucfe0\ubc84\ub124\ud2f0\uc2a4 \ub300\uc2dc\ubcf4\ub4dc\ub97c \uc0ac\uc6a9\ud558\ub294 \uacbd\uc6b0 \ud1a0\ud070\uc774 \ub9cc\ub8cc\ub420 \ub54c\ub9c8\ub2e4 \uc0c8 \ud1a0\ud070\uc744 \uc0dd\uc131\ud558\uace0 \ub2e4\uc2dc \uc778\uc99d\ud574\uc57c \ud569\ub2c8\ub2e4.\n\n\uc0ac\uc6a9\uc790 ID\uac00 AWS IAM \uc11c\ube44\uc2a4\uc5d0 \uc758\ud574 \uc778\uc99d\ub418\uba74 kube-apiserver \ub294 'kube-system' \ub124\uc784\uc2a4\ud398\uc774\uc2a4\uc5d0\uc11c 'aws-auth' \ucee8\ud53c\uadf8\ub9f5\uc744 \uc77d\uc5b4 \uc0ac\uc6a9\uc790\uc640 \uc5f0\uacb0\ud560 RBAC \uadf8\ub8f9\uc744 \uacb0\uc815\ud569\ub2c8\ub2e4. 
`aws-auth` \ucee8\ud53c\uadf8\ub9f5 \uc740 IAM \ubcf4\uc548 \uc8fc\uccb4(\uc608: IAM \uc0ac\uc6a9\uc790 \ubc0f \uc5ed\ud560)\uc640 \ucfe0\ubc84\ub124\ud2f0\uc2a4 RBAC \uadf8\ub8f9 \uac04\uc758 \uc815\uc801 \ub9e4\ud551\uc744 \uc0dd\uc131\ud558\ub294 \ub370 \uc0ac\uc6a9\ub429\ub2c8\ub2e4. RBAC \uadf8\ub8f9\uc740 \ucfe0\ubc84\ub124\ud2f0\uc2a4 \ub864\ubc14\uc778\ub529 \ub610\ub294 \ud074\ub7ec\uc2a4\ud130\ub864\ubc14\uc778\ub529\uc5d0\uc11c \ucc38\uc870\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ucfe0\ubc84\ub124\ud2f0\uc2a4 \ub9ac\uc18c\uc2a4(\uac1d\uccb4) \ubaa8\uc74c\uc5d0 \ub300\ud574 \uc218\ud589\ud560 \uc218 \uc788\ub294 \uc77c\ub828\uc758 \uc791\uc5c5(\ub3d9\uc0ac)\uc744 \uc815\uc758\ud55c\ub2e4\ub294 \uc810\uc5d0\uc11c IAM \uc5ed\ud560\uacfc \uc720\uc0ac\ud569\ub2c8\ub2e4.\n\n## \uad8c\uc7a5 \uc0ac\ud56d\n\n### \uc778\uc99d\uc5d0 \uc11c\ube44\uc2a4 \uc5b4\uce74\uc6b4\ud2b8 \ud1a0\ud070\uc744 \uc0ac\uc6a9\ud558\uc9c0 \ub9c8\uc138\uc694\n\n\uc11c\ube44\uc2a4 \uc5b4\uce74\uc6b4\ud2b8 \ud1a0\ud070\uc740 \uc218\uba85\uc774 \uae34 \uc815\uc801 \uc0ac\uc6a9\uc790 \uc778\uc99d \uc815\ubcf4\uc785\ub2c8\ub2e4. \uc190\uc0c1, \ubd84\uc2e4 \ub610\ub294 \ub3c4\ub09c\ub41c \uacbd\uc6b0 \uacf5\uaca9\uc790\ub294 \uc11c\ube44\uc2a4 \uc5b4\uce74\uc6b4\ud2b8\uc774 \uc0ad\uc81c\ub420 \ub54c\uae4c\uc9c0 \ud574\ub2f9 \ud1a0\ud070\uacfc \uad00\ub828\ub41c \ubaa8\ub4e0 \uc791\uc5c5\uc744 \uc218\ud589\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uacbd\uc6b0\uc5d0 \ub530\ub77c \ud074\ub7ec\uc2a4\ud130 \uc678\ubd80\uc5d0\uc11c \ucfe0\ubc84\ub124\ud2f0\uc2a4 API\ub97c \uc0ac\uc6a9\ud574\uc57c \ud558\ub294 \uc560\ud50c\ub9ac\ucf00\uc774\uc158(\uc608: CI\/CD \ud30c\uc774\ud504\ub77c\uc778 \uc560\ud50c\ub9ac\ucf00\uc774\uc158)\uc5d0 \ub300\ud55c \uc608\uc678\ub97c \ubd80\uc5ec\ud574\uc57c \ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. 
\uc774\ub7f0 \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc774 EC2 \uc778\uc2a4\ud134\uc2a4\uc640 \uac19\uc740 AWS \uc778\ud504\ub77c\uc5d0\uc11c \uc2e4\ud589\ub418\ub294 \uacbd\uc6b0 \ub300\uc2e0 \uc778\uc2a4\ud134\uc2a4 \ud504\ub85c\ud30c\uc77c\uc744 \uc0ac\uc6a9\ud558\uace0 \uc774\ub97c 'aws-auth' \ucee8\ud53c\uadf8\ub9f5\uc758 \ucfe0\ubc84\ub124\ud2f0\uc2a4 RBAC \uc5ed\ud560\uc5d0 \ub9e4\ud551\ud558\ub294 \uac83\uc774 \uc88b\uc2b5\ub2c8\ub2e4.\n\n### AWS \ub9ac\uc18c\uc2a4\uc5d0 \ub300\ud55c \ucd5c\uc18c \uad8c\ud55c \uc561\uc138\uc2a4 \uc0ac\uc6a9\n\n\ucfe0\ubc84\ub124\ud2f0\uc2a4 API\uc5d0 \uc561\uc138\uc2a4\ud558\uae30 \uc704\ud574 IAM \uc0ac\uc6a9\uc790\uc5d0\uac8c AWS \ub9ac\uc18c\uc2a4\uc5d0 \ub300\ud55c \uad8c\ud55c\uc744 \ud560\ub2f9\ud560 \ud544\uc694\uac00 \uc5c6\uc2b5\ub2c8\ub2e4. IAM \uc0ac\uc6a9\uc790\uc5d0\uac8c EKS \ud074\ub7ec\uc2a4\ud130\uc5d0 \ub300\ud55c \uc561\uc138\uc2a4 \uad8c\ud55c\uc744 \ubd80\uc5ec\ud574\uc57c \ud558\ub294 \uacbd\uc6b0 \ud2b9\uc815 \ucfe0\ubc84\ub124\ud2f0\uc2a4 RBAC \uadf8\ub8f9\uc5d0 \ub9e4\ud551\ub418\ub294 \ud574\ub2f9 \uc0ac\uc6a9\uc790 \uc758 'aws-auth' \ucee8\ud53c\uadf8\ub9f5\uc5d0 \ud56d\ubaa9\uc744 \uc0dd\uc131\ud569\ub2c8\ub2e4.\n\n### \uc5ec\ub7ec \uc0ac\uc6a9\uc790\uac00 \ud074\ub7ec\uc2a4\ud130\uc5d0 \ub300\ud574 \ub3d9\uc77c\ud55c \uc561\uc138\uc2a4 \uad8c\ud55c\uc774 \ud544\uc694\ud55c \uacbd\uc6b0 IAM \uc5ed\ud560 \uc0ac\uc6a9\n\n'aws-auth' \ucee8\ud53c\uadf8\ub9f5 \uc5d0\uc11c \uac01 \uac1c\ubcc4 IAM \uc0ac\uc6a9\uc790\uc5d0 \ub300\ud55c \ud56d\ubaa9\uc744 \uc0dd\uc131\ud558\ub294 \ub300\uc2e0 \ud574\ub2f9 \uc0ac\uc6a9\uc790\uac00 IAM \uc5ed\ud560\uc744 \uc218\uc784\ud558\uace0 \ud574\ub2f9 \uc5ed\ud560\uc744 \ucfe0\ubc84\ub124\ud2f0\uc2a4 RBAC \uadf8\ub8f9\uc5d0 \ub9e4\ud551\ud558\ub3c4\ub85d \ud5c8\uc6a9\ud569\ub2c8\ub2e4. 
This will be easier to maintain, especially as the number of users that require access grows.

!!! attention
    When accessing the EKS cluster with an IAM principal mapped by the aws-auth ConfigMap, the username described in the aws-auth ConfigMap is recorded in the user field of the Kubernetes audit log. If you're using an IAM role, the actual users who assume that role aren't recorded and can't be audited.

When using `mapRoles` in the aws-auth ConfigMap to assign K8s RBAC permissions to an IAM role, you should include `{{SessionName}}` in the username.
That way, the audit log will record the session name, so you can track the actual user that assumed this role together with your CloudTrail logs.

```yaml
- rolearn: arn:aws:iam::XXXXXXXXXXXX:role/testRole
  username: testRole:{{SessionName}}
  groups:
    - system:masters
```

In Kubernetes 1.20 and later this change is no longer required, as `User.Extra.sessionName.0` was added to the Kubernetes audit log.

### Employ least privileged access when creating RoleBindings and ClusterRoleBindings

Like the earlier point about granting access to AWS resources, RoleBindings and ClusterRoleBindings should only include the set of permissions necessary to perform a specific function. Avoid using `["*"]` in your Roles and ClusterRoles unless absolutely necessary.
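For example, a Role scoped to read-only access to pods in a single namespace might look like the following sketch (the namespace and name are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```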
If you're unsure what permissions to assign, consider using a tool like [audit2rbac](https://github.com/liggitt/audit2rbac) to automatically generate Roles and bindings based on the API calls observed in the Kubernetes audit log.

### Make the EKS cluster endpoint private

By default, when you provision an EKS cluster, the API cluster endpoint is set to public, i.e., it can be accessed from the Internet. Despite being accessible from the Internet, the endpoint is still considered secure because all API requests must be authenticated by IAM and authorized by Kubernetes RBAC. That said, if your corporate security policy mandates that you restrict access to the API from the Internet or prevent you from routing traffic outside the cluster VPC, you can:

- Configure the EKS cluster endpoint to be private.
For further information on this topic, see [Modifying cluster endpoint access](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html).
- Leave the cluster endpoint public and specify which CIDR blocks can communicate with the cluster endpoint. These blocks are effectively a whitelisted set of public IP addresses that are allowed to access the cluster endpoint.
- Configure public access with a set of whitelisted CIDR blocks and enable private endpoint access.
This will allow public access from a select range of public IPs, while forcing all network traffic between the kubelets and the Kubernetes API through the cross-account ENIs that get provisioned into the cluster's VPC when the control plane is provisioned.

### Create the cluster with a dedicated IAM role

When you create an Amazon EKS cluster, the IAM entity (user or role) that creates the cluster, such as a federated user, is automatically granted `system:masters` permissions in the cluster's RBAC configuration. This access cannot be removed and is not managed through the `aws-auth` ConfigMap. Therefore, it is a good idea to create the cluster with a dedicated IAM role and regularly audit who can assume this role.
This role should not be used to perform routine actions on the cluster; instead, additional users should be granted access to the cluster through the `aws-auth` ConfigMap for this purpose. After the `aws-auth` ConfigMap is configured, the role can be deleted and only recreated in an emergency / break-glass scenario where the `aws-auth` ConfigMap is corrupted and the cluster is otherwise inaccessible. This can be particularly useful in production clusters where direct user access is not usually configured.

### Use tools to make changes to the aws-auth ConfigMap

An improperly formatted aws-auth ConfigMap may cause you to lose access to the cluster.
If you need to make changes to the ConfigMap, use a tool.

#### eksctl

The `eksctl` CLI includes a command for adding identity mappings to the aws-auth ConfigMap.

View CLI Help:

```bash
eksctl create iamidentitymapping --help
```

Make an IAM role a cluster admin:

```bash
eksctl create iamidentitymapping --cluster <clusterName> --region=<region> --arn arn:aws:iam::123456:role/testing --group system:masters --username admin
```

For more information, see the [`eksctl` docs](https://eksctl.io/usage/iam-identity-mappings/).

**[aws-auth](https://github.com/keikoproj/aws-auth) by keikoproj**

keikoproj's `aws-auth` includes both a CLI and a Go library.

Download and view CLI help:

```bash
go get github.com/keikoproj/aws-auth
aws-auth help
```

Alternatively, install `aws-auth` with the [krew plugin manager](https://krew.sigs.k8s.io) for kubectl:

```bash
kubectl krew install aws-auth
kubectl aws-auth
```

Review the [aws-auth docs on GitHub](https://github.com/keikoproj/aws-auth/blob/master/README.md) for more information, including the Go library.

**[AWS IAM Authenticator CLI](https://github.com/kubernetes-sigs/aws-iam-authenticator/tree/master/cmd/aws-iam-authenticator)**

The `aws-iam-authenticator` project includes a CLI for updating the ConfigMap.

[Download a release](https://github.com/kubernetes-sigs/aws-iam-authenticator/releases) from GitHub.

Add cluster permissions to an IAM role:

```bash
./aws-iam-authenticator add role --rolearn arn:aws:iam::185309785115:role/lil-dev-role-cluster --username lil-dev-user --groups system:masters --kubeconfig ~/.kube/config
```

### Regularly audit access to the cluster

Who requires access is likely to change over time. Plan on periodically auditing the `aws-auth` ConfigMap to see who has been granted access and the rights they've been assigned. You can also use open source tooling like [kubectl-who-can](https://github.com/aquasecurity/kubectl-who-can) or [rbac-lookup](https://github.com/FairwindsOps/) to examine the roles bound to a particular service account, user, or group. We'll explore this topic further in the [auditing](detective.md) section.
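Beyond interactive tools, such a check can also be scripted. The sketch below flags entries that grant the all-powerful `system:masters` group, assuming the `mapRoles`/`mapUsers` entries have already been fetched (e.g., with `kubectl -n kube-system get configmap aws-auth`) and parsed into dictionaries; the ARNs and names are illustrative:

```python
def overly_privileged(mappings):
    """Return the ARNs of aws-auth entries mapped to system:masters."""
    return [m.get("rolearn") or m.get("userarn")
            for m in mappings
            if "system:masters" in m.get("groups", [])]

# Illustrative, already-parsed mapRoles entries:
map_roles = [
    {"rolearn": "arn:aws:iam::111122223333:role/eks-admin",
     "username": "eks-admin:{{SessionName}}",
     "groups": ["system:masters"]},
    {"rolearn": "arn:aws:iam::111122223333:role/eks-developers",
     "username": "dev:{{SessionName}}",
     "groups": ["eks-developers"]},
]
print(overly_privileged(map_roles))  # ['arn:aws:iam::111122223333:role/eks-admin']
```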
Additional ideas can be found in this [article](https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2019/august/tools-and-methods-for-auditing-kubernetes-rbac-policies/?mkt_tok=eyJpIjoiWWpGa056SXlNV1E0WWpRNSIsInQiOiJBT1hyUTRHYkg1TGxBV0hTZnRibDAyRUZ0VzBxbndnRzNGbTAxZzI0WmFHckJJbWlKdE5WWDdUQlBrYVZpMnNuTFJ1R3hacVYrRCsxYWQ2RTRcL2pMN1BtRVA1ZFZcL0NtaEtIUDdZV3pENzNLcE1zWGVwUndEXC9Pb2tmSERcL1pUaGUifQ%3D%3D) from NCC Group.

### Alternative approaches to authentication and access management

While IAM is the preferred way to authenticate users who need access to an EKS cluster, it is possible to use an OIDC identity provider such as GitHub, using an authentication proxy and Kubernetes [impersonation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation). Posts for two such solutions have been published on the AWS Open Source blog:

- [Authenticating to EKS using GitHub credentials with Teleport](https://aws.amazon.com/blogs/opensource/authenticating-eks-github-credentials-teleport/)
- [Consistent OIDC authentication across multiple EKS clusters using kube-oidc-proxy](https://aws.amazon.com/blogs/opensource/consistent-oidc-authentication-across-multiple-eks-clusters-using-kube-oidc-proxy/)

!!! attention
    EKS natively supports OIDC authentication without using a proxy. For further information, see the blog post [Introducing OIDC identity provider authentication for Amazon EKS](https://aws.amazon.com/blogs/containers/introducing-oidc-identity-provider-authentication-amazon-eks/). For an example showing how to configure EKS with Dex, a popular open source OIDC provider with connectors for a variety of different authentication methods, see the blog post [Using Dex & dex-k8s-authenticator to authenticate to Amazon EKS](https://aws.amazon.com/blogs/containers/using-dex-dex-k8s-authenticator-to-authenticate-to-amazon-eks/). As described in the blogs, the username/group of users authenticated by the OIDC provider will appear in the Kubernetes audit log.

You can also use [AWS SSO](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) to federate AWS with an external identity provider, such as Azure AD.
If you decide to use this, the AWS CLI v2.0 includes an option to create a named profile that makes it easy to associate an SSO session with your current CLI session and assume an IAM role. You have to assume the role *before* running `kubectl`, as the IAM role is used to determine the user's Kubernetes RBAC group.

### Additional resources

[rbac.dev](https://github.com/mhausenblas/rbac.dev) A list of additional resources, including blogs and tools, for Kubernetes RBAC

## Pod identity

Certain applications that run within a Kubernetes cluster need permission to call the Kubernetes API to function properly. For example, the [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller) needs to be able to list a Service's endpoints. The controller also needs to be able to invoke AWS APIs to provision and configure an ALB.
In this section, we explore best practices for assigning rights and privileges to pods.

### Kubernetes service accounts

A service account is a special type of object that allows you to assign a Kubernetes RBAC role to a pod. A default service account is created automatically for each namespace within a cluster. When you deploy a pod into a namespace without referencing a specific service account, the default service account for that namespace is automatically assigned to the pod, and the secret, i.e., the service account (JWT) token for that service account, is mounted into the pod as a volume at `/var/run/secrets/kubernetes.io/serviceaccount`.
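Since the mounted token is a JWT, its payload can be inspected with nothing but the standard library; a minimal sketch (signature verification is deliberately skipped, so treat the output as informational only):

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying its signature."""
    payload_b64 = token.split(".")[1]
    # JWTs strip base64url padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Inside a pod, the token lives at the mount path:
# token = open("/var/run/secrets/kubernetes.io/serviceaccount/token").read()
```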
Decoding the service account token in that directory will reveal the following metadata:

```json
{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "default",
  "kubernetes.io/serviceaccount/secret.name": "default-token-5pv4z",
  "kubernetes.io/serviceaccount/service-account.name": "default",
  "kubernetes.io/serviceaccount/service-account.uid": "3b36ddb5-438c-11ea-9438-063a49b60fba",
  "sub": "system:serviceaccount:default:default"
}
```

The default service account has the following permissions to the Kubernetes API:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2020-01-30T18:13:25Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:discovery
  resourceVersion: "43"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/system%3Adiscovery
  uid: 350d2ab8-438c-11ea-9438-063a49b60fba
rules:
- nonResourceURLs:
  - /api
  - /api/*
  - /apis
  - /apis/*
  - /healthz
  - /openapi
  - /openapi/*
  - /version
  - /version/
  verbs:
  - get
```

This role authorizes unauthenticated and authenticated users to read API information and is deemed safe to be publicly accessible.

When an application running within a pod calls the Kubernetes APIs, the pod needs to be assigned a service account that explicitly grants it permission to invoke those APIs. Similar to the guidelines for user access, the Role or ClusterRole bound to a service account should be restricted to the API resources and methods that the application needs to function, and nothing else. To use a non-default service account, simply set the `spec.serviceAccountName` field of a pod to the name of the service account you wish to use. For additional information about creating service accounts, see [the documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions).

!!! note
    Prior to Kubernetes 1.24, Kubernetes automatically created a secret for each service account. This secret was mounted into the pod at /var/run/secrets/kubernetes.io/serviceaccount and would be used by the pod to authenticate to the Kubernetes API server.
    In Kubernetes 1.24, a service account token is dynamically generated when the pod runs and is only valid for an hour by default. A secret for the service account is no longer created. If you have an application that runs outside the cluster and needs to authenticate to the Kubernetes API, such as Jenkins, you will need to create a secret of type `kubernetes.io/service-account-token` along with an annotation that references the service account, such as `metadata.annotations.kubernetes.io/service-account.name: <SERVICE_ACCOUNT_NAME>`. Secrets created in this way do not expire.

### IAM roles for service accounts (IRSA)

IRSA is a feature that allows you to assign an IAM role to a Kubernetes service account. It works by leveraging a Kubernetes feature known as [Service Account Token Volume Projection](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection).
When a pod is configured with a service account that references an IAM role, the Kubernetes API server calls the public OIDC discovery endpoint for the cluster on startup. The endpoint cryptographically signs the OIDC token issued by Kubernetes, and the resulting token is mounted as a volume. This signed token allows the pod to call the AWS APIs associated with the IAM role. When an AWS API is invoked, the AWS SDK calls `sts:AssumeRoleWithWebIdentity`. After validating the token's signature, IAM exchanges the Kubernetes-issued token for a temporary AWS role credential.

Decoding the (JWT) token for IRSA will produce output similar to the example shown below:

```json
{
  "aud": [
    "sts.amazonaws.com"
  ],
  "exp": 1582306514,
  "iat": 1582220114,
  "iss": "https://oidc.eks.us-west-2.amazonaws.com/id/D43CF17C27A865933144EA99A26FB128",
  "kubernetes.io": {
    "namespace": "default",
    "pod": {
      "name": "alpine-57b5664646-rf966",
      "uid": "5a20f883-5407-11ea-a85c-0e62b7a4a436"
    },
    "serviceaccount": {
      "name": "s3-read-only",
      "uid": "a720ba5c-5406-11ea-9438-063a49b60fba"
    }
  },
  "nbf": 1582220114,
  "sub": "system:serviceaccount:default:s3-read-only"
}
```

This particular token grants the pod view-only privileges to S3. When the application attempts to read from S3, the token is exchanged for a temporary set of IAM credentials that resembles this:

```json
{
    "AssumedRoleUser": {
        "AssumedRoleId": "AROA36C6WWEJULFUYMPB6:abc",
        "Arn": "arn:aws:sts::123456789012:assumed-role/eksctl-winterfell-addon-iamserviceaccount-de-Role1-1D61LT75JH3MB/abc"
    },
    "Audience": "sts.amazonaws.com",
    "Provider": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/D43CF17C27A865933144EA99A26FB128",
    "SubjectFromWebIdentityToken": "system:serviceaccount:default:s3-read-only",
    "Credentials": {
        "SecretAccessKey": "ORJ+8Adk+wW+nU8FETq7+mOqeA8Z6jlPihnV8hX1",
        "SessionToken": "FwoGZXIvYXdzEGMaDMLxAZkuLpmSwYXShiL9A1S0X87VBC1mHCrRe/pB2oes+l1eXxUYnPJyC9ayOoXMvqXQsomq0xs6OqZ3vaa5Iw1HIyA4Cv1suLaOCoU3hNvOIJ6C94H1vU0siQYk7DIq9Av5RZe+uE2FnOctNBvYLd3i0IZo1ajjc00yRK3v24VRq9nQpoPLuqyH2jzlhCEjXuPScPbi5KEVs9fNcOTtgzbVf7IG2gNiwNs5aCpN4Bv/Zv2A6zp5xGz9cWj2f0aD9v66vX4bexOs5t/YYhwuwAvkkJPSIGvxja0xRThnceHyFHKtj0H+bi/PWAtlI8YJcDX69cM30JAHDdQH+ltm/4scFptW1hlvMaP+WReCAaCrsHrAT+yka7ttw5YlUyvZ8EPog+j6fwHlxmrXM9h1BqdikomyJU00gm1++FJelfP+1zAwcyrxCnbRl3ARFrAt8hIlrT6Vyu8WvWtLxcI8KcLcJQb/LgkW+sCTGlYcY8z3zkigJMbYn07ewTL5Ss7LazTJJa758I7PZan/v3xQHd5DEc5WBneiV3iOznDFgup0VAMkIviVjVCkszaPSVEdK2NU7jtrh6Jfm7bU/3P6ZG+CkyDLIa8MBn9KPXeJd/y+jTk5Ii+fIwO/+mDpGNUribg6TPxhzZ8b/XdZO1kS1gVgqjXyVC+M+BRBh6C4H21w/eMzjCtDIpoxt5rGKL6Nu/IFMipoC4fgx6LIIHwtGYMG7SWQi7OsMAkiwZRg0n68/RqWgLzBt/4pfjSRYuk=",
        "Expiration": "2020-02-20T18:49:50Z",
        "AccessKeyId": "0SIA12CFWWEJUMHACL7Z"
    }
}
```

A mutating webhook that runs as part of the EKS control plane injects the AWS Role ARN and the path to the web identity token file into the pod as environment variables. These values can also be supplied manually.

```bash
AWS_ROLE_ARN=arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
```

The kubelet will automatically rotate the projected token when it is older than 80% of its total TTL, or after 24 hours. The AWS SDKs are responsible for reloading the token when it rotates. For further information about IRSA, see the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-technical-overview.html).

## Pod identity recommendations

### Update the aws-node daemonset to use IRSA

At present, the aws-node daemonset is configured to use a role assigned to the EC2 instances to assign IPs to pods.
This role includes several AWS managed policies, e.g., AmazonEKS_CNI_Policy and EC2ContainerRegistryReadOnly, that effectively allow **all** pods running on a node to attach/detach ENIs, assign/unassign IP addresses, or pull images from ECR. Since this presents a risk to your cluster, it is recommended that you update the aws-node daemonset to use IRSA. A script for doing this can be found in the [repository](https://github.com/aws/aws-eks-best-practices/tree/master/projects/enable-irsa/src) for this guide.

### Restrict access to the instance profile assigned to the worker node

When you use IRSA, it updates the credential chain of the pod to use the IRSA token first; however, the pod _can still inherit the rights of the instance profile assigned to the worker node_.
When using IRSA, it is **strongly** recommended that you block access to [instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html) to minimize the scope of permissions that IRSA does not grant.

!!! caution
    Blocking access to instance metadata will prevent pods that do not use IRSA from inheriting the role assigned to the worker node.

You can block access to instance metadata by requiring the instance to use IMDSv2 only and updating the hop limit to 1 as in the example below. You can also include these settings in the node group's launch template. Do **not** disable instance metadata.
Doing so would prevent components like the node termination handler, and other things that rely on instance metadata, from working properly.

```bash
aws ec2 modify-instance-metadata-options --instance-id <value> --http-tokens required --http-put-response-hop-limit 1
```

If you are using Terraform to create launch templates for use with managed node groups, add the metadata block to configure the hop count as shown in this code snippet:

```tf hl_lines="7"
resource "aws_launch_template" "foo" {
  name = "foo"
  ...
  metadata_options {
    http_endpoint               = "enabled"
    http_tokens                 = "required"
    http_put_response_hop_limit = 1
    instance_metadata_tags      = "enabled"
  }
  ...
```

You can also block a pod's access to EC2 metadata by manipulating iptables on the node.
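As a sketch of the iptables approach, a DROP rule on the host can stop traffic from pod interfaces to the metadata address. The `eni+` pattern below assumes the AWS VPC CNI's default host-side veth interface naming; verify the interface prefix used by your CNI before applying anything like this:

```bash
# Sketch, run on the node as root: drop forwarded traffic from pod veth
# interfaces (the VPC CNI names them eni<...>) to the IMDS address.
iptables --insert FORWARD 1 --in-interface eni+ \
  --destination 169.254.169.254/32 --jump DROP
```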
For further information about this method, see [Limiting access to the instance metadata service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html#instance-metadata-limiting-access).

If you have an application that is using an older version of the AWS SDK that doesn't support IRSA, you should update the SDK version.

### Scope the IAM role trust policy for IRSA to the service account name

The trust policy can be scoped to a namespace or to a specific service account within a namespace. When using IRSA, it's best to make the role trust policy as explicit as possible by including the service account name. This will effectively prevent other pods within the same namespace from assuming the role. The CLI `eksctl` will do this automatically when you use it to create service accounts/IAM roles.
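For illustration, a trust policy scoped to a single service account might look like the following. The account ID, region, OIDC provider ID, namespace, and service account name are all placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E:sub": "system:serviceaccount:my-namespace:my-service-account"
        }
      }
    }
  ]
}
```

The `sub` condition on the last segment (`system:serviceaccount:<namespace>:<service-account>`) is what restricts the role to a single service account rather than the whole namespace.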
See the [eksctl docs](https://eksctl.io/usage/iamserviceaccounts/) for further information.

### If your application needs access to IMDS, use IMDSv2 and increase the hop limit on EC2 instances to 2

[IMDSv2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html) requires you to use a PUT request to obtain a session token. The initial PUT request has to include a TTL for the session token. Newer versions of the AWS SDKs handle this, and the renewal of that token, automatically. It's also important to be aware that the default hop limit on EC2 instances is intentionally set to 1 to prevent IP forwarding. As a consequence, pods that request a session token and run on EC2 instances may eventually time out and fall back to using the IMDSv1 data flow.
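Mirroring the earlier `modify-instance-metadata-options` example, the hop limit can be raised to 2 so that the extra network hop introduced by the pod network can still reach IMDSv2:

```bash
aws ec2 modify-instance-metadata-options --instance-id <value> --http-tokens required --http-put-response-hop-limit 2
```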
EKS adds support for IMDSv2 by _enabling_ both v1 and v2 and by changing the hop limit to 2 on nodes provisioned by eksctl or with the official CloudFormation templates.

### Disable auto-mounting of service account tokens

If your application doesn't need to call the Kubernetes API, set the `automountServiceAccountToken` attribute to `false` in the application's PodSpec, or patch the default service account in each namespace so that the token is no longer mounted to pods automatically. For example:

```bash
kubectl patch serviceaccount default -p $'automountServiceAccountToken: false'
```

### Use dedicated service accounts for each application

Each application should have its own dedicated service account. This applies to service accounts for the Kubernetes API as well as IRSA.

!!! 
attention
    If you employ a blue/green approach to cluster upgrades instead of performing an in-place cluster upgrade, you will need to update the trust policy of each IRSA IAM role with the OIDC endpoint of the new cluster. A blue/green cluster upgrade is where you create a cluster running a newer version of Kubernetes alongside the old cluster and use a load balancer or a service mesh to seamlessly shift traffic from services running on the old cluster to the new cluster.

### Run the application as a non-root user

Containers run as root by default. While this allows them to read the web identity token file, running a container as root is not considered a best practice. As an alternative, consider adding the `spec.securityContext.runAsUser` attribute to the PodSpec.
The value of `runAsUser` is an arbitrary value.

In the following example, all processes within the pod will run under the user ID specified in the `runAsUser` field.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
```

When you run a container as a non-root user, it cannot read the IRSA service account token because the token is assigned 0600 [Root] permissions by default. If you update the securityContext for your container to include fsGroup=65534 [Nobody], it will allow the container to read the token.

```yaml
spec:
  securityContext:
    fsGroup: 65534
```

In Kubernetes 1.19 and later, this change is no longer required.

### Grant least privileged access to applications

[Action Hero](https://github.com/princespaghetti/actionhero) is a utility that you can run alongside your application to identify the AWS API calls and corresponding IAM permissions your application needs in order to function properly.
It is similar to [IAM Access Advisor](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_access-advisor.html) in that it helps you gradually limit the scope of IAM roles assigned to applications. Consult the documentation on granting [least privileged access](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) to AWS resources for further information.

### Review and revoke unnecessary anonymous access

Ideally, anonymous access should be disabled for all API actions. Anonymous access is granted by creating a RoleBinding or ClusterRoleBinding for the Kubernetes built-in user system:anonymous.
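For illustration, a binding that grants such access could look like the following. The binding name and the `example-viewer` ClusterRole are hypothetical; bindings of this shape (other than system:public-info-viewer) are what the audit commands below surface:

```yaml
# Hypothetical example: this binding would give every anonymous request the
# permissions of the "example-viewer" ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-viewer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:anonymous
```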
You can use the [rbac-lookup](https://github.com/FairwindsOps/rbac-lookup) tool to identify the permissions that the system:anonymous user has on your cluster:

```bash
./rbac-lookup | grep -P 'system:(anonymous)|(unauthenticated)'
system:anonymous               cluster-wide        ClusterRole/system:discovery
system:unauthenticated         cluster-wide        ClusterRole/system:discovery
system:unauthenticated         cluster-wide        ClusterRole/system:public-info-viewer
```

No roles or ClusterRoles other than system:public-info-viewer should be bound to the system:anonymous user or the system:unauthenticated group.

There may be legitimate reasons to enable anonymous access on specific APIs. If that is the case for your cluster, ensure that only those specific APIs are accessible by anonymous users and that exposing them without authentication doesn't make your cluster vulnerable.

Prior to Kubernetes/EKS version 1.14, the system:unauthenticated group was associated with the system:discovery and system:basic-user ClusterRoles by default.
Even if you have updated your cluster to version 1.14 or higher, these permissions may still be enabled on your cluster, since cluster updates do not revoke them.
To check which ClusterRoles have "system:unauthenticated" except system:public-info-viewer, you can run the following command (it requires the jq utility):

```bash
kubectl get ClusterRoleBinding -o json | jq -r '.items[] | select(.subjects[]?.name =="system:unauthenticated") | select(.metadata.name != "system:public-info-viewer") | .metadata.name'
```

And "system:unauthenticated" can be removed from all the roles except "system:public-info-viewer" using the following command:

```bash
kubectl get ClusterRoleBinding -o json | jq -r '.items[] | select(.subjects[]?.name =="system:unauthenticated") | select(.metadata.name != "system:public-info-viewer") | del(.subjects[] | select(.name =="system:unauthenticated"))' | kubectl apply -f -
```

Alternatively, you can check and remove the bindings manually with kubectl describe and kubectl edit.
To check whether the system:unauthenticated group has the system:discovery permissions on your cluster, run the following command:

```bash
kubectl describe clusterrolebindings system:discovery

Name:         system:discovery
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
Role:
  Kind:  ClusterRole
  Name:  system:discovery
Subjects:
  Kind   Name                    Namespace
  ----   ----                    ---------
  Group  system:authenticated
  Group  system:unauthenticated
```

To check whether the system:unauthenticated group has the system:basic-user permissions on your cluster, run the following command:

```bash
kubectl describe clusterrolebindings system:basic-user

Name:         system:basic-user
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
Role:
  Kind:  ClusterRole
  Name:  system:basic-user
Subjects:
  Kind   Name                    Namespace
  ----   ----                    ---------
  Group  system:authenticated
  Group  system:unauthenticated
```

If the system:unauthenticated group is bound to the system:discovery and/or system:basic-user ClusterRoles on your cluster, you should disassociate these roles from the system:unauthenticated group.
Edit the system:discovery ClusterRoleBinding using the following command:

```bash
kubectl edit clusterrolebindings system:discovery
```

The above command will open the current definition of the system:discovery ClusterRoleBinding in an editor, as shown below:

```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2021-06-17T20:50:49Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:discovery
  resourceVersion: "24502985"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system%3Adiscovery
  uid: b7936268-5043-431a-a0e1-171a423abeb6
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
```

Delete the entry for the system:unauthenticated group from the "Subjects" section of the editor screen above.

Repeat the same steps for the system:basic-user ClusterRoleBinding.

### Alternative approaches

While IRSA is the _preferred way_ to assign an AWS "identity" to a pod, it requires that you include recent versions of the AWS SDKs in your application.
For a complete list of the SDKs that currently support IRSA, see the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html). If you have an application that you can't immediately update with an IRSA-compatible SDK, there are several community-built solutions available for assigning IAM roles to Kubernetes pods, including [kube2iam](https://github.com/jtblin/kube2iam) and [kiam](https://github.com/uswitch/kiam). Although AWS doesn't endorse, condone, or support the use of these solutions, they are frequently used by the community at large to achieve results similar to IRSA.
            SecretAccessKey    ORJ 8Adk wW nU8FETq7 mOqeA8Z6jlPihnV8hX1             SessionToken    FwoGZXIvYXdzEGMaDMLxAZkuLpmSwYXShiL9A1S0X87VBC1mHCrRe pB2oes l1eXxUYnPJyC9ayOoXMvqXQsomq0xs6OqZ3vaa5Iw1HIyA4Cv1suLaOCoU3hNvOIJ6C94H1vU0siQYk7DIq9Av5RZe uE2FnOctNBvYLd3i0IZo1ajjc00yRK3v24VRq9nQpoPLuqyH2jzlhCEjXuPScPbi5KEVs9fNcOTtgzbVf7IG2gNiwNs5aCpN4Bv Zv2A6zp5xGz9cWj2f0aD9v66vX4bexOs5t YYhwuwAvkkJPSIGvxja0xRThnceHyFHKtj0H bi PWAtlI8YJcDX69cM30JAHDdQH ltm 4scFptW1hlvMaP WReCAaCrsHrAT yka7ttw5YlUyvZ8EPog j6fwHlxmrXM9h1BqdikomyJU00gm1  FJelfP 1zAwcyrxCnbRl3ARFrAt8hIlrT6Vyu8WvWtLxcI8KcLcJQb LgkW sCTGlYcY8z3zkigJMbYn07ewTL5Ss7LazTJJa758I7PZan v3xQHd5DEc5WBneiV3iOznDFgup0VAMkIviVjVCkszaPSVEdK2NU7jtrh6Jfm7bU 3P6ZG CkyDLIa8MBn9KPXeJd y jTk5Ii fIwO  mDpGNUribg6TPxhzZ8b XdZO1kS1gVgqjXyVC M BRBh6C4H21w eMzjCtDIpoxt5rGKL6Nu IFMipoC4fgx6LIIHwtGYMG7SWQi7OsMAkiwZRg0n68 RqWgLzBt 4pfjSRYuk              Expiration    2020 02 20T18 49 50Z             AccessKeyId    0SIA12CFWWEJUMHACL7Z                 EKS                   Mutating     AWS    ARN                                                                   bash AWS ROLE ARN arn aws iam  AWS ACCOUNT ID role IAM ROLE NAME AWS WEB IDENTITY TOKEN FILE  var run secrets eks amazonaws com serviceaccount token      kubelet    TTL  80          24                              AWS SDK                                 IRSA              AWS     https   docs aws amazon com eks latest userguide iam roles for service accounts technical overview html                 ID            IRSA        aws node              aws node      EC2                        IP                          AmazonEKS CNI Policy   EC2ContainerRegistryReadOnly                           ENI            IP                  ECR                              AWS                                   IRSA        aws node                                                         https   github com aws aws eks best practices tree master projects enable irsa src                                
                    IRSA       IRSA                                                                                IRSA                                            https   docs aws amazon com AWSEC2 latest UserGuide configuring instance metadata service html                                    caution                              IRSA                                                        IMDSv2                 1                                                                                                                                                                               bash aws ec2 modify instance metadata options   instance id  value    http tokens required   http put response hop limit 1      Terraform                                                                                          tf hl lines  7  resource  aws launch template   foo      name    foo            metadata options       http endpoint                  enabled      http tokens                    required      http put response hop limit   1     instance metadata tags         enabled                      iptables       EC2                                                                              https   docs aws amazon com AWSEC2 latest UserGuide instancedata data retrieval html instance metadata limiting access              IRSA                 AWS SDK                     SDK                      IRSA     IAM                                                                                            IRSA                                                                                                                          CLI  eksctl             IAM                                             eksctl     https   eksctl io usage iamserviceaccounts                        IMDS              IMDSv2       EC2             2         IMDSv2  https   docs aws amazon com AWSEC2 latest UserGuide configuring instance metadata service html     PUT                              PUT               
 TTL                    AWS SDK                                    IP             EC2                      1                              EC2                                        IMDSv1                           EKS  v1  v2             eksctl       CloudFormation                         2          IMDSv2                                               Kubernetes API                         PodSpec    automountServiceAccountToken       false                                                                         bash kubectl patch serviceaccount default  p   automountServiceAccountToken  false                                                                                   API   IRSA                        attention                                                               IRSA IAM                    OIDC                                                                                                                                                                                                                                                                                 PodSpec   spec securityContext runAsUser                     runAsUser                                           RunAsUser              ID             yaml apiVersion  v1 kind  Pod metadata    name  security context demo spec    securityContext      runAsUser  1000     runAsGroup  3000   containers      name  sec ctx demo     image  busybox     command     sh     c    sleep 1h                                          0600  Root                     IRSA                         fsgroup 65534  Nobody               securityContext                                  yaml spec    securityContext      fsGroup  65534      Kubernetes 1 19                                                                Action Hero  https   github com princespaghetti actionhero                          AWS API         IAM                                                      IAM                                   IAM Access 
Advisor  https   docs aws amazon com IAM latest UserGuide access policies access advisor html                  AWS                 https   docs aws amazon com IAM latest UserGuide best practices html grant least privilege                                                         API                                             system anonymous     RoleBinding    ClusterRoleBinding                          rbac lookup  https   github com FairwindsOps rbac lookup           system anonymous                                      bash   rbac lookup   grep  P  system  anonymous   unauthenticated   system anonymous               cluster wide        ClusterRole system discovery system unauthenticated         cluster wide        ClusterRole system discovery system unauthenticated         cluster wide        ClusterRole system public info viewer      system public info viewer   ClusterRole           system anonymous        system unauthenticated                        API                                                           API                         API                                Kubernetes EKS    1 14      system unauthenticated           system discovery   system basic user                            1 14                                                                           system public info viewer          ClusterRole   system unauthenticated                               jq                   bash kubectl get ClusterRoleBinding  o json   jq  r   items     select  subjects    name    system unauthenticated     select  metadata name     system public info viewer      metadata name            system unauthenticated                system public info viewer                               bash kubectl get ClusterRoleBinding  o json   jq  r   items     select  subjects    name    system unauthenticated     select  metadata name     system public info viewer     del  subjects     select  name    system unauthenticated       kubectl apply  f           kubectl describe   
kubectl edit                           system unauthenticated              system discovery                                  bash kubectl describe clusterrolebindings system discovery  Name          system discovery Labels        kubernetes io bootstrapping rbac defaults Annotations   rbac authorization kubernetes io autoupdate  true Role    Kind   ClusterRole   Name   system discovery Subjects    Kind   Name                    Namespace                                              Group  system authenticated   Group  system unauthenticated      system unauthenticated              system basic user                                 bash kubectl describe clusterrolebindings system basic user  Name          system basic user Labels        kubernetes io bootstrapping rbac defaults Annotations   rbac authorization kubernetes io autoupdate  true Role    Kind   ClusterRole   Name   system basic user Subjects    Kind   Name                    Namespace                                              Group  system authenticated   Group  system unauthenticated      system unauthenticated           system discovery      system basic user ClusterRoles                 system unauthenticated                            system discovery ClusterRoleBinding             bash kubectl edit clusterrolebindings system discovery                         system discovery ClusterRoleBinding                  yaml   Please edit the object below  Lines beginning with a     will be ignored    and an empty file will abort the edit  If an error occurs while saving this file will be   reopened with the relevant failures    apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    annotations      rbac authorization kubernetes io autoupdate   true    creationTimestamp   2021 06 17T20 50 49Z    labels      kubernetes io bootstrapping  rbac defaults   name  system discovery   resourceVersion   24502985    selfLink   apis rbac authorization k8s io v1 clusterrolebindings system 
3Adiscovery   uid  b7936268 5043 431a a0e1 171a423abeb6 roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  system discovery subjects    apiGroup  rbac authorization k8s io   kind  Group   name  system authenticated   apiGroup  rbac authorization k8s io   kind  Group   name  system unauthenticated                 Subjects       system unauthenticated                system basic user ClusterRoleBinding                                   IRSA      AWS  ID                                     AWS SDK               IRSA       SDK          AWS     https   docs aws amazon com eks latest userguide iam roles for service accounts minimum sdk html          IRSA    SDK                                kube2iam  https   github com jtblin kube2iam     kiam  https   github com uswitch kiam                   IAM                                             AWS                             IRSA                                "}
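The tokens discussed in this section are JWTs; as a quick illustration (not part of the original guidance), the payload segment of a token can be decoded in the shell. A sample payload stands in here; in a real pod you would read `/var/run/secrets/kubernetes.io/serviceaccount/token` instead:

```shell
# Decode the payload (second dot-separated segment) of a JWT.
# Real service account tokens are base64url-encoded without padding, so the
# '-_' alphabet and '=' padding may need restoring before decoding.
sample='{"sub":"system:serviceaccount:default:default"}'
token="hdr.$(printf '%s' "$sample" | base64 -w0).sig"
printf '%s' "$token" | cut -d. -f2 | base64 -d
```

Piping the output through `jq .` pretty-prints the claims.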
{"questions":"eks AWS AWS exclude true search","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# Multi Account Strategy\n\nAWS recommends using a [multi account strategy](https:\/\/docs.aws.amazon.com\/whitepapers\/latest\/organizing-your-aws-environment\/organizing-your-aws-environment.html) and AWS organizations to help isolate and manage your business applications and data. There are [many benefits](https:\/\/docs.aws.amazon.com\/whitepapers\/latest\/organizing-your-aws-environment\/benefits-of-using-multiple-aws-accounts.html) to using a multi account strategy:\n\n+ Increased AWS API service quotas. Quotas are applied to AWS accounts, so using multiple accounts for your workloads increases the overall quota available to them.\n+ Simpler Identity and Access Management (IAM) policies. Granting workloads and the operators that support them access to only their own AWS accounts means less time spent crafting fine-grained IAM policies to achieve the principle of least privilege.\n+ Improved isolation of AWS resources. 
By design, all resources provisioned within an account are logically isolated from resources provisioned in other accounts. This isolation boundary gives you a way to limit the risk of an application-related issue, misconfiguration, or malicious action. If an issue occurs within one account, impacts to workloads contained in other accounts can be reduced or eliminated. \n+ Additional benefits are described in the [AWS multi account strategy whitepaper](https:\/\/docs.aws.amazon.com\/whitepapers\/latest\/organizing-your-aws-environment\/benefits-of-using-multiple-aws-accounts.html#group-workloads-based-on-business-purpose-and-ownership).\n\nThe following sections explain how to implement a multi account strategy for your EKS workloads using either a centralized or decentralized EKS cluster approach.\n\n## Planning for a Multi Workload Account Strategy for Multi Tenant Clusters\n\nIn a multi account AWS strategy, resources that belong to a given workload, such as S3 buckets, ElastiCache clusters, and DynamoDB tables, are all created in an AWS account that contains all the resources for that workload. This is referred to as the workload account, and the EKS cluster is deployed into an account referred to as the cluster account. Cluster accounts are explored in the next section. Deploying resources into a dedicated workload account is similar to deploying kubernetes resources into a dedicated namespace. \n\nWorkload accounts can then be further broken down by software development lifecycle or other requirements as needed. For example, a given workload may have a production account, a development account, or accounts for hosting instances of that workload in specific regions. 
More information is available in [this AWS whitepaper](https:\/\/docs.aws.amazon.com\/whitepapers\/latest\/organizing-your-aws-environment\/organizing-workload-oriented-ous.html).\n\nYou can adopt the following approaches when implementing an EKS multi account strategy:\n\n## Centralized EKS Cluster\n\nIn this approach, your EKS cluster is deployed into a single AWS account called the `cluster account`. Using [IAM roles for Service Accounts (IRSA)](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts.html) or [EKS Pod Identities](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/pod-identities.html) to deliver temporary AWS credentials, and [AWS Resource Access Manager (RAM)](https:\/\/aws.amazon.com\/ram\/) to simplify network access, you can adopt a multi account strategy for your multi tenant EKS cluster. 
The cluster account contains the VPC, subnets, the EKS cluster, EC2\/Fargate compute resources (worker nodes), and any additional networking configuration needed to run your EKS cluster.\n\nIn a multi workload account strategy for multi tenant clusters, AWS accounts typically align with [kubernetes namespaces](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/namespaces\/) as a mechanism for isolating groups of resources. [Best practices for tenant isolation](\/security\/docs\/multitenancy\/) within an EKS cluster should still be followed when implementing a multi account strategy for multi tenant EKS clusters.\n\nIt is possible to have multiple `cluster accounts` in your AWS organization, and it is a best practice to have multiple `cluster accounts` that align with your software development lifecycle needs. 
For workloads operating at a very large scale, you may require multiple `cluster accounts` to ensure that the kubernetes and AWS service quotas are sufficient for all your workloads.\n\n| ![](.\/images\/multi-account-eks.jpg) |\n|:--:|\n| In the diagram above, AWS RAM is used to share subnets from a cluster account into a workload account. Workloads running in EKS pods then use IRSA or EKS Pod Identities with role chaining to assume a role in their workload account and access their AWS resources. |\n\n### Implementing a Multi Workload Account Strategy for Multi Tenant Clusters\n\n#### Sharing Subnets With AWS Resource Access Manager\n\n[AWS Resource Access Manager](https:\/\/aws.amazon.com\/ram\/) (RAM) allows you to share resources across AWS accounts. \n\n[If RAM is enabled for your AWS Organization](https:\/\/docs.aws.amazon.com\/ram\/latest\/userguide\/getting-started-sharing.html#getting-started-sharing-orgs), you can share the VPC subnets from the cluster account with your workload accounts. 
This allows AWS resources owned by your workload accounts, such as [Amazon ElastiCache](https:\/\/aws.amazon.com\/elasticache\/) clusters or [Amazon Relational Database Service (RDS)](https:\/\/aws.amazon.com\/rds\/) databases, to be deployed into the same VPC as your EKS cluster and consumed by the workloads running on it.\n\nTo share a resource via RAM, open RAM in the AWS console of the cluster account and select \"Resource Shares\" and \"Create Resource Share\". Name the resource share and select the subnets you want to share. Select Next, enter the 12 digit account IDs of the workload accounts you wish to share the subnets with, select Next again, and click Create resource share to finish. After this step, the workload account can deploy resources into those subnets.\n\nRAM shares can also be created programmatically or with infrastructure as code. 
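\n\nAs a sketch of the infrastructure-as-code route (illustrative only; the share name, subnet ARN, and 12 digit workload account ID below are placeholders, not values from this guide), a RAM subnet share in Terraform could look like this:\n\n```\n# Share a cluster-account subnet with a workload account via AWS RAM\nresource \"aws_ram_resource_share\" \"cluster_subnets\" {\n  name                      = \"cluster-subnet-share\"\n  allow_external_principals = false # keep sharing within the AWS Organization\n}\n\nresource \"aws_ram_resource_association\" \"subnet\" {\n  resource_arn       = \"arn:aws:ec2:us-west-2:111122223333:subnet\/subnet-0123456789abcdef0\"\n  resource_share_arn = aws_ram_resource_share.cluster_subnets.arn\n}\n\nresource \"aws_ram_principal_association\" \"workload_account\" {\n  principal          = \"444455556666\"\n  resource_share_arn = aws_ram_resource_share.cluster_subnets.arn\n}\n```\n\nWith `allow_external_principals = false`, the share only succeeds for principals inside the same AWS Organization, matching the RAM-enabled Organization setup described above. 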
\n\n#### Choosing Between EKS Pod Identities and IRSA\n\nAt re:Invent 2023, AWS launched EKS Pod Identities, a simpler way of delivering temporary AWS credentials to your pods on EKS. Both IRSA and EKS Pod Identities are valid methods for delivering temporary AWS credentials to your EKS pods and will continue to be supported. You should consider which delivery method best meets your requirements.\n\nWhen working with an EKS cluster and multiple AWS accounts, IRSA can directly assume roles in AWS accounts other than the one the EKS cluster is hosted in, while EKS Pod Identities require you to configure role chaining. Refer to the [EKS documentation](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/service-accounts.html#service-accounts-iam) for an in-depth comparison.\n\n##### Accessing AWS API Resources with IAM Roles for Service Accounts (IRSA)\n\n[IAM Roles for Service Accounts (IRSA)](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts.html) allows you to deliver temporary AWS credentials to workloads running on EKS. 
IRSA can be used to obtain temporary credentials for IAM roles in the workload accounts from the cluster account. This allows workloads running on the EKS cluster in the cluster account to seamlessly consume AWS API resources, such as S3 buckets hosted in a workload account, and to use IAM authentication for resources such as Amazon RDS databases or Amazon EFS file systems. \n\nAWS API resources and other resources in a workload account that use IAM authentication can only be accessed by credentials for IAM roles in that same workload account, except where cross account access is possible and has been explicitly enabled.\n\n##### Enabling IRSA for cross account access\n\nTo enable IRSA for workloads in your cluster account to access resources in your workload accounts, you must first create an IAM OIDC identity provider in the workload account. This can be done with the same procedure used to set up IRSA, 
except that the identity provider is created in the workload account: https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/enable-iam-roles-for-service-accounts.html\n\nThen, when configuring IRSA for your workloads on EKS, you can [follow the same steps as the documentation](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/associate-service-account-role.html), but use the [12 digit account id of the workload account](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/cross-account-access.html) as described in the section \"Example: Create identity provider from another account's cluster\".\n\nAfter this is configured, applications running in EKS can directly use their service account to assume a role in the workload account and use resources within it.\n\n##### Accessing AWS API Resources with EKS Pod Identities\n\n[EKS Pod Identities](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/pod-identities.html) is a new way of delivering AWS credentials to workloads running on EKS. 
EKS Pod Identities simplifies the configuration of AWS resources, because you no longer need to manage OIDC configurations to deliver AWS credentials to your pods on EKS.\n\n##### Enabling EKS Pod Identities for cross account access\n\nUnlike IRSA, EKS Pod Identities can only be used to directly grant access to a role in the same account as the EKS cluster. To access a role in another AWS account, pods that use EKS Pod Identities must perform [role chaining](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_roles_terms-and-concepts.html#iam-term-role-chaining).\n\nRole chaining can be configured in an application's profile in its aws configuration file by using the [Process Credentials Provider](https:\/\/docs.aws.amazon.com\/sdkref\/latest\/guide\/feature-process-credentials.html) available in various AWS SDKs. 
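\n\nFor the chained assume to succeed, the role in the second account (account B) must trust the role the pod first receives (account A). A sketch of such a trust policy (the account ID and role name are placeholders, not values from this guide):\n\n```\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Principal\": {\n        \"AWS\": \"arn:aws:iam::111122223333:role\/account-a-role\"\n      },\n      \"Action\": \"sts:AssumeRole\"\n    }\n  ]\n}\n```\n\n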
`credential_process` can be used as the credential source when configuring a profile, as shown below:

```
# Content of the AWS Config file
[profile account_b_role]
source_profile = account_a_role
role_arn = arn:aws:iam::444455556666:role/account-b-role

[profile account_a_role]
credential_process = /eks-credential-processrole.sh
```

Source of the script invoked by credential_process:

```
#!/bin/bash
# Content of the eks-credential-processrole.sh
# This will retrieve the credential from the pod identities agent,
# and return it to the AWS SDK when referenced in a profile
curl -H "Authorization: $(cat $AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE)" $AWS_CONTAINER_CREDENTIALS_FULL_URI | jq -c '{AccessKeyId: .AccessKeyId, SecretAccessKey: .SecretAccessKey, SessionToken: .Token, Expiration: .Expiration, Version: 1}'
```

You can create an AWS config file containing both the account A and account B roles and specify the AWS_CONFIG_FILE and AWS_PROFILE environment variables in the pod spec. If these environment variables already exist in the pod spec, the EKS Pod Identity webhook does not override them.

```yaml
# Snippet of the PodSpec
containers:
  - name: container-name
    image: container-image:version
    env:
    - name: AWS_CONFIG_FILE
      value: path-to-customer-provided-aws-config-file
    - name: AWS_PROFILE
      value: account_b_role
```

When configuring role trust policies for role chaining with EKS Pod Identities, you can reference [EKS-specific attributes](https://docs.aws.amazon.com/eks/latest/userguide/pod-id-abac.html) as session tags and use attribute-based access control (ABAC) to limit access to an IAM role to specific EKS Pod Identity sessions only, such as the Kubernetes service account the pod belongs to.

Note that some of these attributes may not be universally unique. For example, two EKS clusters can have identical namespaces, and one cluster can have identically named service accounts across namespaces.
Therefore, when granting access via EKS Pod Identities and ABAC, it is a best practice to always consider the cluster ARN and namespace along with the service account when granting access.

##### ABAC and EKS Pod Identities for cross-account access

When using EKS Pod Identities to assume roles in other accounts (role chaining) as part of a multi-account strategy, you can either assign a unique IAM role to each service account that needs to access another account, or use a common IAM role across multiple service accounts and control which accounts they can access with ABAC.

To use ABAC to control which service accounts can assume a role into another account with role chaining, create a role trust policy statement that allows a role session to assume the role only when the expected values are present.
The following role trust policy allows a role from the EKS cluster account (account ID 111122223333) to assume the role only if the `kubernetes-service-account`, `eks-cluster-arn`, and `kubernetes-namespace` tags all have the expected values:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalTag/kubernetes-service-account": "PayrollApplication",
                    "aws:PrincipalTag/eks-cluster-arn": "arn:aws:eks:us-east-1:111122223333:cluster/ProductionCluster",
                    "aws:PrincipalTag/kubernetes-namespace": "PayrollNamespace"
                }
            }
        }
    ]
}
```

When using this strategy, it is a best practice that the common IAM role has only the `sts:AssumeRole` permission and no other AWS access.

When using ABAC, it is important to control who has the ability to tag IAM roles and users, limiting it to only those who strictly need it.
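To make the earlier recommendation concrete — the common role carrying only `sts:AssumeRole` — a minimal permissions policy could be attached as follows (a sketch: the role and policy names are hypothetical; the target ARN matches the config-file example above):

```shell
# Inline policy for the common role in the cluster account:
# it may assume the role in account B, and nothing else.
aws iam put-role-policy \
  --role-name eks-common-role \
  --policy-name allow-assume-account-b-role \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::444455556666:role/account-b-role"
    }]
  }'
```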
Someone who can tag an IAM role or user would be able to set the same tags on roles or users that EKS Pod Identities sets, and could thereby escalate their privileges. You can use an IAM policy or service control policy (SCP) to restrict who has access to set the `kubernetes-` and `eks-` tags on IAM roles and users.

## De-centralized EKS clusters

In this approach, EKS clusters are deployed into the respective workload AWS accounts and live alongside other AWS resources like Amazon S3 buckets, VPCs, Amazon DynamoDB tables, and so on. Each workload account is independent, self-sufficient, and operated by the respective business unit or application team. This model allows creating reusable blueprints for various cluster capabilities (AI/ML clusters, batch processing, general purpose, and so on) and vending clusters according to application team requirements.
Both application and platform teams work from their respective [GitOps](https://www.weave.works/technologies/gitops/) repositories to manage deployments to the workload clusters.

|![De-centralized EKS Cluster Architecture](./images/multi-account-eks-decentralized.png)|
|:--:|
| In the diagram above, Amazon EKS clusters and other AWS resources are deployed into the respective workload accounts. Workloads running in EKS pods then use IRSA or EKS Pod Identities to access AWS resources. |

GitOps is a way of managing application and infrastructure deployment so that the whole system is described declaratively in a Git repository. It is an operational model that lets you manage the state of multiple Kubernetes clusters using the best practices of version control, immutable artifacts, and automation.
In this multi-cluster model, each workload cluster is bootstrapped with multiple Git repositories, allowing each team (application, platform, security, and so on) to deploy its respective changes to the cluster.

In each account, you can use [IAM roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) or [EKS Pod Identities](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html) to allow EKS workloads to obtain temporary AWS credentials for securely accessing other AWS resources. IAM roles are created in the respective workload AWS account and mapped to Kubernetes service accounts to provide temporary IAM access. Therefore, no cross-account access is required in this approach.
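As a sketch of that mapping (not from the original text; the names are hypothetical), IRSA needs only a single annotation on the service account in each workload account:

```shell
# Point the service account at the IAM role created in this
# workload account; the EKS mutating webhook injects the identity
# token and role ARN into pods that use this service account.
kubectl annotate serviceaccount my-sa \
  --namespace my-app \
  eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/my-app-role
```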
Refer to the [IAM roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) documentation on how to set up IRSA in each workload account, and the [EKS Pod Identities](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html) documentation on how to set up EKS Pod Identities in each account.

### Centralized networking

You can also use AWS RAM to share VPC subnets with the workload accounts and launch Amazon EKS clusters and other AWS resources in them. This enables centralized network management and administration, simplified network connectivity, and de-centralized EKS clusters. Refer to this [AWS blog](https://aws.amazon.com/blogs/containers/use-shared-vpcs-in-amazon-eks/) for a detailed walkthrough and considerations of this approach.

|![De-centralized EKS Cluster Architecture using VPC Shared Subnets](./images/multi-account-eks-shared-subnets.png)|
|:--:|
| In the diagram above, AWS RAM is used to share subnets from a central networking account to the workload accounts. EKS clusters and other AWS resources are then launched in those subnets in the respective workload accounts. EKS pods use IRSA or EKS Pod Identities to access AWS resources. |

## Centralized vs. de-centralized EKS clusters

Whether to go with centralized or de-centralized depends on your requirements. This table highlights the key differences between the two strategies.

|# |Centralized EKS cluster | De-centralized EKS clusters |
|:--|:--|:--|
|Cluster management: |Managing a single EKS cluster is easier than managing multiple clusters. | Efficient cluster management automation is necessary to reduce the operational overhead of managing multiple EKS clusters. |
|Cost efficiency: | Allows reuse of the EKS cluster and network resources, which improves cost efficiency. | Requires networking and cluster setups per workload, which requires additional resources. |
|Resilience: | Multiple workloads on the centralized cluster may be impacted if the cluster gets impaired. | If a cluster gets impaired, the damage is limited to the workloads that run on that cluster. All other workloads are unaffected. |
|Isolation & security: |Isolation/soft multi-tenancy is implemented using Kubernetes native constructs such as `Namespaces`. Workloads may share underlying resources such as CPU and memory. AWS resources are isolated into their own workload accounts, which by default are not accessible from other AWS accounts.|Stronger isolation of compute resources, as the workloads run in individual clusters and nodes that do not share resources. AWS resources are isolated into their own workload accounts, which by default are not accessible from other AWS accounts.|
|Performance & scalability: |As workloads grow to very large scale, you may encounter Kubernetes and AWS service quotas in the cluster account. You can scale further by deploying additional cluster accounts.|As more clusters and VPCs are present, each workload has more available Kubernetes and AWS service quota.|
|Networking: | A single VPC is used per cluster, allowing for simpler connectivity for applications on that cluster. | Routing has to be set up between the de-centralized EKS cluster VPCs. |
|Kubernetes access management: |You need to maintain many different roles and users in the cluster to provide access to all workload teams and to ensure Kubernetes resources are properly segregated.| Access management is simplified, since each cluster is dedicated to a workload/team. |
|AWS access management: |AWS resources are deployed into their own account, which by default can only be accessed with IAM roles in the workload account. IAM roles in the workload accounts are assumed cross-account with IRSA or EKS Pod Identities.|AWS resources are deployed into their own account, which by default can only be accessed with IAM roles in the workload account. IAM roles in the workload accounts are given directly to pods with IRSA or EKS Pod Identities.|
{"questions":"eks RCA EKS Amazon Cloudwatch EKS search exclude true Audit","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# Auditing and Logging\n\nCollecting and analyzing \\[audit\\] logs is useful for a variety of reasons. Logs can help with root cause analysis (RCA) and attribution, i.e. ascribing a change to a particular user. When enough logs have been collected, they can be used to detect anomalous behaviors too. On EKS, the audit logs are sent to Amazon Cloudwatch Logs. The audit policy for EKS is as follows:\n\n```yaml\napiVersion: audit.k8s.io\/v1beta1\nkind: Policy\nrules:\n  # Log aws-auth configmap changes\n  - level: RequestResponse\n    namespaces: [\"kube-system\"]\n    verbs: [\"update\", \"patch\", \"delete\"]\n    resources:\n      - group: \"\" # core\n        resources: [\"configmaps\"]\n        resourceNames: [\"aws-auth\"]\n    omitStages:\n      - \"RequestReceived\"\n  - level: None\n    users: [\"system:kube-proxy\"]\n    verbs: [\"watch\"]\n    resources:\n      - group: \"\" # core\n        resources: [\"endpoints\", \"services\", \"services\/status\"]\n  - level: None\n    users: [\"kubelet\"] # legacy kubelet identity\n    verbs: [\"get\"]\n    resources:\n      - group: \"\" # core\n        resources: [\"nodes\", \"nodes\/status\"]\n  - level: None\n    userGroups: [\"system:nodes\"]\n    verbs: [\"get\"]\n    resources:\n      - group: \"\" # core\n        resources: [\"nodes\", \"nodes\/status\"]\n  - level: None\n    users:\n      - system:kube-controller-manager\n   
   - system:kube-scheduler\n      - system:serviceaccount:kube-system:endpoint-controller\n    verbs: [\"get\", \"update\"]\n    namespaces: [\"kube-system\"]\n    resources:\n      - group: \"\" # core\n        resources: [\"endpoints\"]\n  - level: None\n    users: [\"system:apiserver\"]\n    verbs: [\"get\"]\n    resources:\n      - group: \"\" # core\n        resources: [\"namespaces\", \"namespaces\/status\", \"namespaces\/finalize\"]\n  - level: None\n    users:\n      - system:kube-controller-manager\n    verbs: [\"get\", \"list\"]\n    resources:\n      - group: \"metrics.k8s.io\"\n  - level: None\n    nonResourceURLs:\n      - \/healthz*\n      - \/version\n      - \/swagger*\n  - level: None\n    resources:\n      - group: \"\" # core\n        resources: [\"events\"]\n  - level: Request\n    users: [\"kubelet\", \"system:node-problem-detector\", \"system:serviceaccount:kube-system:node-problem-detector\"]\n    verbs: [\"update\",\"patch\"]\n    resources:\n      - group: \"\" # core\n        resources: [\"nodes\/status\", \"pods\/status\"]\n    omitStages:\n      - \"RequestReceived\"\n  - level: Request\n    userGroups: [\"system:nodes\"]\n    verbs: [\"update\",\"patch\"]\n    resources:\n      - group: \"\" # core\n        resources: [\"nodes\/status\", \"pods\/status\"]\n    omitStages:\n      - \"RequestReceived\"\n  - level: Request\n    users: [\"system:serviceaccount:kube-system:namespace-controller\"]\n    verbs: [\"deletecollection\"]\n    omitStages:\n      - \"RequestReceived\"\n  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,\n  # so only log at the Metadata level.\n  - level: Metadata\n    resources:\n      - group: \"\" # core\n        resources: [\"secrets\", \"configmaps\"]\n      - group: authentication.k8s.io\n        resources: [\"tokenreviews\"]\n    omitStages:\n      - \"RequestReceived\"\n  - level: Request\n    resources:\n      - group: \"\"\n        resources: [\"serviceaccounts\/token\"]\n  - 
level: Request\n    verbs: [\"get\", \"list\", \"watch\"]\n    resources: \n      - group: \"\" # core\n      - group: \"admissionregistration.k8s.io\"\n      - group: \"apiextensions.k8s.io\"\n      - group: \"apiregistration.k8s.io\"\n      - group: \"apps\"\n      - group: \"authentication.k8s.io\"\n      - group: \"authorization.k8s.io\"\n      - group: \"autoscaling\"\n      - group: \"batch\"\n      - group: \"certificates.k8s.io\"\n      - group: \"extensions\"\n      - group: \"metrics.k8s.io\"\n      - group: \"networking.k8s.io\"\n      - group: \"policy\"\n      - group: \"rbac.authorization.k8s.io\"\n      - group: \"scheduling.k8s.io\"\n      - group: \"settings.k8s.io\"\n      - group: \"storage.k8s.io\"\n    omitStages:\n      - \"RequestReceived\"\n  # Default level for known APIs\n  - level: RequestResponse\n    resources: \n      - group: \"\" # core\n      - group: \"admissionregistration.k8s.io\"\n      - group: \"apiextensions.k8s.io\"\n      - group: \"apiregistration.k8s.io\"\n      - group: \"apps\"\n      - group: \"authentication.k8s.io\"\n      - group: \"authorization.k8s.io\"\n      - group: \"autoscaling\"\n      - group: \"batch\"\n      - group: \"certificates.k8s.io\"\n      - group: \"extensions\"\n      - group: \"metrics.k8s.io\"\n      - group: \"networking.k8s.io\"\n      - group: \"policy\"\n      - group: \"rbac.authorization.k8s.io\"\n      - group: \"scheduling.k8s.io\"\n      - group: \"settings.k8s.io\"\n      - group: \"storage.k8s.io\"\n    omitStages:\n      - \"RequestReceived\"\n  # Default level for all other requests.\n  - level: Metadata\n    omitStages:\n      - \"RequestReceived\"\n```\n\n## Recommendations\n\n### Enable audit logs\n\nThe audit logs are part of the EKS managed Kubernetes control plane logs that are managed by EKS. 
Instructions for enabling\/disabling the control plane logs, which include logs for the Kubernetes API server, the controller manager, and the scheduler, along with the audit log, can be found in the [AWS documentation](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/control-plane-logs.html#enabling-control-plane-log-export).\n\n!!! info\n    When you enable control plane logging, you will incur [costs](https:\/\/aws.amazon.com\/cloudwatch\/pricing\/) for storing the logs in CloudWatch. This raises a broader issue about the ongoing cost of security. Ultimately you will have to weigh those costs against the cost of a security breach, e.g. financial loss, damage to your reputation, etc. You may find that you can adequately secure your environment by implementing only some of the recommendations in this guide.\n\n!!! warning\n    The maximum size for a CloudWatch Logs entry is [256KB](https:\/\/docs.aws.amazon.com\/AmazonCloudWatch\/latest\/logs\/cloudwatch_limits_cwl.html), whereas the maximum Kubernetes API request size is 1.5MiB. 
Log entries greater than 256KB will either be truncated or only include the request metadata.\n\n### Utilize audit metadata\n\nKubernetes audit logs include two annotations that indicate whether or not a request was authorized, `authorization.k8s.io\/decision`, and the reason for the decision, `authorization.k8s.io\/reason`. Use these attributes to ascertain why a particular API call was allowed.\n\n### Create alarms for suspicious events\n\nCreate an alarm to automatically alert you when there is an increase in 403 Forbidden and 401 Unauthorized responses, and then use attributes like `host`, `sourceIPs`, and `k8s_user.username` to find out where those requests are coming from.\n\n### Analyze logs with Log Insights\n\nUse CloudWatch Log Insights to monitor changes to RBAC objects, e.g. Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings. 
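\n\nAs an illustration, the audit metadata annotations described above can also be queried with Log Insights to surface denied requests along with the reason for each denial. This is a sketch: the flattened field path below is an assumption about how Logs Insights discovers the nested `annotations` map, so adjust it to match the field names actually discovered in your log group:\n\n```bash\nfields @timestamp, user.username, verb, requestURI, `annotations.authorization.k8s.io\/reason`\n| filter @logStream like \"kube-apiserver-audit\"\n| filter `annotations.authorization.k8s.io\/decision` = \"forbid\"\n| sort @timestamp desc\n| limit 100\n```\n\n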
A few sample queries appear below.\n\nLists updates to the `aws-auth` ConfigMap:\n\n```bash\nfields @timestamp, @message\n| filter @logStream like \"kube-apiserver-audit\"\n| filter verb in [\"update\", \"patch\"]\n| filter objectRef.resource = \"configmaps\" and objectRef.name = \"aws-auth\" and objectRef.namespace = \"kube-system\"\n| sort @timestamp desc\n```\n\nLists creation of new or changes to validation webhooks:\n\n```bash\nfields @timestamp, @message\n| filter @logStream like \"kube-apiserver-audit\"\n| filter verb in [\"create\", \"update\", \"patch\"] and responseStatus.code = 201\n| filter objectRef.resource = \"validatingwebhookconfigurations\"\n| sort @timestamp desc\n```\n\nLists create, update, and delete operations to Roles:\n\n```bash\nfields @timestamp, @message\n| sort @timestamp desc\n| limit 100\n| filter objectRef.resource=\"roles\" and verb in [\"create\", \"update\", \"patch\", \"delete\"]\n```\n\nLists create, update, and delete operations to RoleBindings:\n\n```bash\nfields @timestamp, @message\n| sort @timestamp desc\n| limit 100\n| filter objectRef.resource=\"rolebindings\" and verb in [\"create\", \"update\", \"patch\", \"delete\"]\n```\n\nLists create, update, and delete operations to ClusterRoles:\n\n```bash\nfields @timestamp, @message\n| sort @timestamp desc\n| limit 100\n| filter objectRef.resource=\"clusterroles\" and verb in [\"create\", \"update\", \"patch\", \"delete\"]\n```\n\nLists create, update, and delete operations to ClusterRoleBindings:\n\n```bash\nfields @timestamp, @message\n| sort @timestamp desc\n| limit 100\n| filter objectRef.resource=\"clusterrolebindings\" and verb in [\"create\", \"update\", \"patch\", \"delete\"]\n```\n\nPlots unauthorized read operations against secrets:\n\n```bash\nfields @timestamp, @message\n| sort @timestamp desc\n| limit 100\n| filter objectRef.resource=\"secrets\" and verb in [\"get\", \"watch\", \"list\"] and responseStatus.code=\"401\"\n| stats count() by bin(1m)\n```\n\nList of failed anonymous requests:\n\n```bash\nfields @timestamp, @message, sourceIPs.0\n| sort @timestamp desc\n| limit 100\n| filter user.username=\"system:anonymous\" and responseStatus.code in [\"401\", \"403\"]\n```\n\n### Audit your CloudTrail logs\n\nAWS APIs called by pods that utilize IAM Roles for Service Accounts (IRSA) are automatically logged to CloudTrail along with the name of the service account. If the name of a service account that wasn't explicitly authorized to call an API appears in the log, it may be an indication that the IAM role's trust policy was misconfigured. 
Generally speaking, Cloudtrail is a great way to ascribe AWS API calls to specific IAM principals.\n\n### Use CloudTrail Insights to unearth suspicious activity\n\nCloudTrail Insights automatically analyzes write management events from CloudTrail trails and alerts you of unusual activity. This can help you identify when there's an increase in call volume on write APIs in your AWS account, including from pods that assume IAM roles using the IRSA feature. See [Announcing CloudTrail Insights: Identify and Respond to Unusual API Activity](https:\/\/aws.amazon.com\/blogs\/aws\/announcing-cloudtrail-insights-identify-and-respond-to-unusual-api-activity\/) for further information.\n\n### Additional resources\n\nAs the volume of logs increases, parsing and filtering them with Log Insights or another log analysis tool may become ineffective. As an alternative, you might want to consider running [Sysdig Falco](https:\/\/github.com\/falcosecurity\/falco) and [ekscloudwatch](https:\/\/github.com\/sysdiglabs\/ekscloudwatch). 
Falco analyzes the audit logs and flags anomalies or abuses over an extended period of time. The ekscloudwatch project forwards audit log events from CloudWatch to Falco for analysis. Falco provides a set of [default audit rules](https:\/\/github.com\/falcosecurity\/plugins\/blob\/master\/plugins\/k8saudit\/rules\/k8s_audit_rules.yaml) along with the ability to add your own.\n\nYet another option might be to store the audit logs in S3 and use the SageMaker [Random Cut Forest](https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/randomcutforest.html) algorithm to detect anomalous behaviors that warrant further investigation.\n\n## Tools and resources\n\nThe following commercial and open source projects can be used to assess your cluster's alignment with established best practices:\n\n- [kubeaudit](https:\/\/github.com\/Shopify\/kubeaudit)\n- [kube-scan](https:\/\/github.com\/octarinesec\/kube-scan) Assigns a risk score to the workloads running in your cluster in accordance with the Kubernetes Common Configuration Scoring System framework\n- [kubesec.io](https:\/\/kubesec.io\/)\n- 
[polaris](https:\/\/github.com\/FairwindsOps\/polaris)\n- [Starboard](https:\/\/github.com\/aquasecurity\/starboard)\n- [Snyk](https:\/\/support.snyk.io\/hc\/en-us\/articles\/360003916138-Kubernetes-integration-overview)\n- [Kubescape](https:\/\/github.com\/kubescape\/kubescape) Kubescape is an open source kubernetes security tool that scans clusters, YAML files, and Helm charts. It detects misconfigurations according to multiple frameworks, including [NSA-CISA](https:\/\/www.armosec.io\/blog\/kubernetes-hardening-guidance-summary-by-armo\/?utm_source=github&utm_medium=repository) and [MITRE ATT&CK\u00ae](https:\/\/www.microsoft.com\/security\/blog\/2021\/03\/23\/secure-containerized-environments-with-updated-threat-matrix-for-kubernetes\/).","site":"eks"}
{"questions":"eks exclude true CPU search","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# Cost Optimization - Compute and Autoscaling\n\nAs a developer, you will make initial estimates of your application's resource requirements, e.g. CPU and memory, but if you do not continually adjust them, they may become outdated, increasing your costs and degrading performance and reliability. Continually adjusting an application's resource requirements is more important than getting them right the first time.\n\nThe best practices mentioned below will help you build and operate cost-aware workloads that achieve business outcomes while minimizing costs and allowing your organization to maximize its return on investment. A high-level order of importance for optimizing your cluster compute costs is:\n\n1. Right-size workloads\n2. Reduce unused capacity\n3. 
Optimize compute capacity types (e.g. Spot) and accelerators (e.g. GPUs)\n\n## Right-size your workloads\n\nIn most EKS clusters, the bulk of cost comes from the EC2 instances that are used to run containerized workloads. You will not be able to right-size your compute resources without understanding your workload requirements, which is why you should use appropriate requests and limits and adjust those settings as necessary. In addition, dependencies such as instance size and storage selection may affect workload performance, which can have a variety of unintended consequences on costs and reliability.\n\n*Requests* should align with the actual utilization. If a container's requests are too high, the unused capacity becomes a large factor in your total cluster cost. 
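\n\nAs a minimal sketch (the pod name, image, and resource values below are hypothetical placeholders, not recommendations), requests and limits are declared per container:\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: sample-app          # hypothetical\nspec:\n  containers:\n    - name: app\n      image: public.ecr.aws\/docker\/library\/nginx:latest  # hypothetical\n      resources:\n        requests:           # keep aligned with observed utilization\n          cpu: \"250m\"\n          memory: \"256Mi\"\n        limits:\n          memory: \"256Mi\"\n```\n\n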
Each container in a pod (e.g. application and sidecars) should have its own requests and limits set so that the aggregate pod limits are as accurate as possible.\n\nUtilize tools such as [Goldilocks](https:\/\/www.youtube.com\/watch?v=DfmQWYiwFDk), [KRR](https:\/\/www.youtube.com\/watch?v=uITOzpf82RY), and [Kubecost](https:\/\/aws.amazon.com\/blogs\/containers\/aws-and-kubecost-collaborate-to-deliver-cost-monitoring-for-eks-customers\/), which estimate resource requests and limits for your containers. Depending on the nature of the application, its performance\/cost requirements, and its complexity, you need to evaluate which metrics are best to scale on, at what point your application performance degrades (the saturation point), and how to tweak requests and limits accordingly. 
Please refer to [Application right sizing](https://aws.github.io/aws-eks-best-practices/scalability/docs/node_efficiency/#application-right-sizing) for more detailed guidance on this topic.

We recommend using the Horizontal Pod Autoscaler (HPA) to control how many replicas of your application should run, the Vertical Pod Autoscaler (VPA) to adjust the requests and limits your application needs per replica, and a node autoscaler such as [Karpenter](http://karpenter.sh/) or the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler) to continually adjust the total number of nodes in your cluster. Cost-optimization techniques using Karpenter and the Cluster Autoscaler are documented later in this document.

The VPA can adjust the requests and limits assigned to containers so workloads run optimally. You should run the VPA in auditing mode so it does not automatically apply changes and restart your pods. It will suggest changes based on observed metrics.
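Running the VPA in this recommendation-only mode corresponds to setting `updateMode: "Off"` in the VPA object (the target Deployment name below is illustrative):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # illustrative workload name
  updatePolicy:
    updateMode: "Off"       # only suggest values; never evict or restart pods
```

The suggested values appear in the VPA object's status, where you can review them before applying them to your manifests.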
Because any changes that impact production workloads can affect your application's reliability and performance, you should review and test those changes in a non-production environment first.

## Reduce consumption

The best way to save money is to provision fewer resources. One way to do that is to adjust workloads based on their current requirements. You should start any cost-optimization effort by making sure your workloads define their requirements and scale dynamically.
This requires getting metrics from your applications and setting configurations such as [`PodDisruptionBudgets`](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) and [Pod Readiness Gates](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.5/deploy/pod_readiness_gate/) to make sure your application can safely scale up and down dynamically.

The Horizontal Pod Autoscaler (HPA) is a flexible workload autoscaler that can adjust how many replicas are needed to meet your application's performance and reliability requirements. It offers a flexible model for defining when to scale up and down based on a variety of metrics such as CPU, memory, or custom metrics (e.g. queue depth, number of connections to a pod, etc.).

The Kubernetes Metrics Server enables scaling in response to built-in metrics such as CPU and memory usage, but if you want to scale based on other metrics, such as Amazon CloudWatch metrics or SQS queue depth, you should consider an event-driven autoscaling project such as [KEDA](https://keda.sh/). Please refer to [this blog post](https://aws.amazon.com/blogs/mt/proactive-autoscaling-of-kubernetes-workloads-with-keda-using-metrics-ingested-into-amazon-cloudwatch/) on how to use KEDA with CloudWatch metrics. If you are unsure which metrics to monitor and scale on, check out the [best practices on monitoring metrics that matter](https://aws-observability.github.io/observability-best-practices/guides/#monitor-what-matters).

Reducing workload consumption creates excess capacity in a cluster, and with proper autoscaling configuration nodes are scaled down automatically, reducing your total spend. We recommend that you do not try to optimize compute capacity manually; the Kubernetes scheduler and node autoscalers were designed to handle this process for you.

## Reduce unused capacity

After you have right-sized your applications and reduced excess requests, you can begin to reduce the provisioned compute capacity.
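The metric-driven HPA model described above can be sketched with a standard `autoscaling/v2` HorizontalPodAutoscaler scaling on CPU utilization (the workload name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app               # illustrative workload name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas above 70% average CPU
```

The same `metrics` list accepts `Pods` and `External` metric types, which is how custom metrics such as queue depth are wired in.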
If you have taken the time to correctly size your workloads per the sections above, you should be able to do this dynamically. There are two primary node autoscalers used with Kubernetes on AWS.

### Karpenter and Cluster Autoscaler

Both Karpenter and the Kubernetes Cluster Autoscaler scale the number of nodes in a cluster as pods are created or removed and compute requirements change. The primary goal of both is the same, but Karpenter takes a different approach to node-management provisioning and de-provisioning, which can help reduce costs and optimize cluster-wide usage.

As clusters grow larger and workloads become more diverse, it becomes increasingly difficult to pre-configure node groups and instances. Just as with workload requests, it's important to set an initial baseline and continually adjust it as needed.

### Cluster Autoscaler Priority expander

The Kubernetes Cluster Autoscaler works by scaling node groups up and down. If your workloads do not scale dynamically, the Cluster Autoscaler will not help you save money. The Cluster Autoscaler requires a cluster admin to create node groups ahead of time for workloads to consume. The node groups need to be configured to use instances with the same "profile", i.e. roughly the same amount of CPU and memory.

You can have multiple node groups, and the Cluster Autoscaler can be configured to set priority scaling levels, with each node group containing different sized nodes. Node groups can have different capacity types, and the priority expander can be used to scale the less expensive groups first.

Below is an example of a cluster configuration snippet that uses a `ConfigMap` to prioritize reserved capacity before using on-demand instances. You can use the same technique to prioritize Graviton or Spot Instances over other types.

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
managedNodeGroups:
  - name: managed-ondemand
    minSize: 1
    maxSize: 7
    instanceType: m5.xlarge
  - name: managed-reserved
    minSize: 2
    maxSize: 10
    instanceType: c5.2xlarge
```

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |-
    10:
      - .*ondemand.*
    50:
      - .*reserved.*
```

Using node groups can help the underlying compute resources do the expected thing by default, e.g. spreading nodes across AZs, but not all workloads have the same requirements or expectations, so it's better to let applications declare their requirements explicitly.
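One way for a workload to declare its placement requirements explicitly, rather than relying on node group defaults, is a `nodeSelector` on well-known node labels (the labels below are standard Kubernetes labels; the values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: arch-pinned-app
spec:
  nodeSelector:
    kubernetes.io/arch: arm64              # e.g. require Graviton (ARM) nodes
    topology.kubernetes.io/zone: us-west-2a  # e.g. pin to a specific AZ
  containers:
  - name: nginx
    image: nginx
```

For softer or more expressive constraints, node affinity (`preferredDuringSchedulingIgnoredDuringExecution`) and topology spread constraints serve the same purpose.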
For more information on the Cluster Autoscaler, see the [best practices section](https://aws.github.io/aws-eks-best-practices/cluster-autoscaling/).

### Descheduler

The Cluster Autoscaler can add and remove node capacity from a cluster based on new pods needing to be scheduled or on nodes being underutilized. It does not take a holistic view of pod placement after pods have been scheduled to a node. If you are using the Cluster Autoscaler, you should also look at the [Kubernetes descheduler](https://github.com/kubernetes-sigs/descheduler) to avoid wasting capacity in your cluster.

If you have 10 nodes in a cluster and each node is 60% utilized, you are not using 40% of the provisioned capacity in the cluster.
With the Cluster Autoscaler, you can set the per-node utilization threshold to 60%, but that would only attempt to scale down a single node after its utilization dropped below 60%.

The descheduler, by contrast, can look at cluster capacity and utilization after pods have been scheduled or nodes have been added to the cluster. It attempts to keep the total capacity of the cluster above a specified threshold. It can also remove pods based on node taints or on new nodes joining the cluster, to make sure pods run in their optimal compute environment. Note that the descheduler does not schedule replacements for evicted pods; it relies on the default scheduler for that.

### Karpenter consolidation

Karpenter takes a "groupless" approach to node management. This approach is more flexible for different workload types and requires less upfront configuration by cluster administrators.
Instead of pre-defining groups and tuning each group to workload needs, Karpenter uses provisioners and node templates to broadly define what types of EC2 instances can be created and how those instances are configured at creation time.

Bin packing is the practice of utilizing more of an instance's resources by packing more workloads onto fewer, optimally sized instances. While this helps reduce compute costs by provisioning only the resources your workloads use, it has a trade-off: starting new workloads can take longer because capacity must first be added to the cluster, especially during large scaling events. Consider the balance between cost optimization, performance, and availability when setting up bin packing.
Karpenter can continuously monitor and bin-pack workloads to improve instance resource utilization and lower compute costs. Karpenter can also select more cost-efficient worker nodes for your workloads. This is achieved by setting the "consolidation" flag to true in the provisioner (see the sample code snippet below). The example below shows a provisioner that enables consolidation. At the time of writing this guide, Karpenter won't replace a running Spot Instance with a cheaper Spot Instance. For further details on Karpenter consolidation, refer to [this blog](https://aws.amazon.com/blogs/containers/optimizing-your-kubernetes-compute-costs-with-karpenter-consolidation/).
```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: enable-binpacking
spec:
  consolidation:
    enabled: true
```

For workloads that might not be interruptible, e.g. long-running batch jobs without checkpointing, consider annotating pods with the `do-not-evict` annotation. By opting pods out of eviction, you are telling Karpenter that it should not voluntarily remove a node containing this pod. However, if a `do-not-evict` pod is added to a node while that node is draining, the remaining pods will still be evicted, but that pod will block termination until it is removed. In either case, the node is cordoned to prevent additional work from being scheduled on it.
Below is an example showing how to set the annotation:

```yaml hl_lines="8"
apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  labels:
    environment: production
  annotations:  
    "karpenter.sh/do-not-evict": "true"
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```

### Remove under-utilized nodes by adjusting Cluster Autoscaler parameters

Node utilization is defined as the sum of requested resources divided by capacity. By default, `scale-down-utilization-threshold` is set to 50%. This parameter can be used together with `scale-down-unneeded-time`, which determines how long a node should be unneeded before it is eligible for scale down; the default is 10 minutes. Pods still running on a node that is scaled down will be scheduled onto other nodes by the kube-scheduler.
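These parameters are passed as command-line flags to the cluster-autoscaler container. A sketch of the relevant fragment of its Deployment spec (the image tag and flag values shown are illustrative, not recommendations):

```yaml
# Fragment of the cluster-autoscaler Deployment container spec (illustrative values)
containers:
- name: cluster-autoscaler
  image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
  command:
  - ./cluster-autoscaler
  - --scale-down-utilization-threshold=0.6   # default is 0.5 (50%)
  - --scale-down-unneeded-time=5m            # default is 10m
```

Raising the threshold or shortening the unneeded time makes scale-down more aggressive, which is exactly why the next paragraph's advice to test these values first matters.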
Adjusting these settings can help remove under-utilized nodes, but it's important that you test these values first so you don't force the cluster to scale down prematurely.

You can prevent scale down from happening by ensuring that pods which are expensive to evict are protected by an annotation recognized by the Cluster Autoscaler. To do that, annotate such pods with `cluster-autoscaler.kubernetes.io/safe-to-evict=false`. Below is an example yaml setting the annotation:

```yaml hl_lines="8"
apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  labels:
    environment: production
  annotations:  
    "cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```

### Tag nodes for Cluster Autoscaler and Karpenter

AWS [tags](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) are used to organize your resources and to track AWS costs at a detailed level.
Tags do not have a direct correlation with Kubernetes labels for cost tracking. It's recommended to start with Kubernetes resource labeling and utilize tools like [Kubecost](https://aws.amazon.com/blogs/containers/aws-and-kubecost-collaborate-to-deliver-cost-monitoring-for-eks-customers/) to report on infrastructure cost based on the Kubernetes labels of pods, namespaces, etc.

Worker nodes need to have tags in order to show billing information in AWS Cost Explorer. With the Cluster Autoscaler, tag your worker nodes inside a managed node group using [launch templates](https://docs.aws.amazon.com/eks/latest/userguide/launch-templates.html). For self-managed node groups, tag your instances using the [EC2 Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-tagging.html).
For instances provisioned by Karpenter, tag them using [spec.tags in the node template](https://karpenter.sh/v0.29/concepts/node-templates/#spectags).

### Multi-tenant clusters

If you are working on a cluster that is shared by different teams, you may not have visibility into other workloads running on the same nodes. While resource requests can help isolate some "noisy neighbor" concerns, such as CPU sharing, they may not isolate all resource boundaries, such as disk I/O contention. Not every consumable resource can be isolated or limited per workload. Workloads that consume shared resources at a higher rate than other workloads should be isolated through node [taints and tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
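Isolating a resource-hungry workload with taints and tolerations, as suggested above, can be sketched as follows (the taint key and value are illustrative):

```yaml
# First taint a dedicated node, e.g.:
#   kubectl taint nodes <node-name> workload-class=io-heavy:NoSchedule
# Then let only the heavy workload tolerate that taint:
apiVersion: v1
kind: Pod
metadata:
  name: io-heavy-job
spec:
  tolerations:
  - key: "workload-class"     # illustrative taint key
    operator: "Equal"
    value: "io-heavy"
    effect: "NoSchedule"
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "echo working; sleep 3600"]
```

Pods without the toleration cannot be scheduled onto the tainted node, giving the heavy workload its own pool of nodes.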
Another advanced technique for such workloads is [CPU pinning](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy), which guarantees a container dedicated CPUs instead of shared CPU time.

Isolating workloads at the node level can be more expensive, but you may be able to schedule [BestEffort](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#besteffort) jobs on that capacity or take advantage of additional savings by using [Reserved Instances](https://aws.amazon.com/ec2/pricing/reserved-instances/), [Graviton processors](https://aws.amazon.com/ec2/graviton/), or [Spot Instances](https://aws.amazon.com/ec2/spot/).

Shared clusters may also run into cluster-level resource constraints such as IP exhaustion, Kubernetes service limits, or API scaling requests. You should review the [scalability best practices guide](https://aws.github.io/aws-eks-best-practices/scalability/docs/control-plane/) to make sure your cluster avoids these limits.

You can isolate resources at the namespace or Karpenter provisioner level.
[Resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) provide a way to limit how many resources the workloads in a namespace can consume. This can be a useful initial guardrail, but it should be continuously evaluated so it doesn't artificially restrict workloads from scaling.

Karpenter provisioners can [set limits on some of the consumable resources in a cluster (e.g. CPU, GPU)](https://karpenter.sh/docs/concepts/provisioners/#speclimitsresources), but you need to configure tenant applications to use the appropriate provisioner. This can prevent a single provisioner from creating too many nodes in a cluster, but the limits should be continuously evaluated to make sure they are not set so low that they prevent workloads from scaling.

### Scheduled autoscaling

You may need to scale your clusters down on weekends and holidays.
This is especially relevant for test and non-production clusters that you want to scale to zero when they are not in use. Solutions like [cluster-turndown](https://github.com/kubecost/cluster-turndown) can scale replicas to zero based on a cron schedule. You can also achieve the same with Karpenter, as described in the following [AWS blog](https://aws.amazon.com/blogs/containers/manage-scale-to-zero-scenarios-with-karpenter-and-serverless/).

## Optimizing compute capacity types

After optimizing the total compute capacity in your cluster and utilizing bin packing, you should look at what types of compute you have provisioned in your clusters and how you pay for those resources.
AWS offers [Compute Savings Plans](https://aws.amazon.com/savingsplans/compute-pricing/) that can reduce compute cost, and we will group compute into the following capacity types:

* Spot
* Savings Plans
* On-Demand
* Fargate

Each capacity type has different trade-offs in management overhead, availability, and long-term commitments, so you need to decide which is right for your environment. No environment should rely on a single capacity type, and you can mix multiple run types in a single cluster to optimize for specific workload requirements and cost.

### Spot Instances

The [Spot](https://aws.amazon.com/ec2/spot/) capacity type provisions EC2 instances from spare capacity in an Availability Zone. Spot offers discounts of up to 90%, but those instances can be interrupted when they are needed elsewhere.
There is also not always capacity available to provision new Spot instances, and existing Spot instances can be reclaimed with a [2-minute interruption notice](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html). Spot instances may not be the best option if your application takes a long time to start or shut down.

Spot compute should use a wide variety of instance types to reduce the likelihood that no Spot capacity is available. To terminate nodes safely you need to handle instance interruptions. Nodes provisioned by Karpenter or as part of a managed node group automatically support [instance interruption notifications](https://aws.github.io/aws-eks-best-practices/karpenter/#enable-interruption-handling-when-using-spot).
If you use self-managed nodes, you need to run the [node termination handler](https://github.com/aws/aws-node-termination-handler) separately to terminate Spot instances gracefully.

It is possible to balance Spot and On-Demand instances in a single cluster. With Karpenter you can create [weighted provisioners](https://karpenter.sh/docs/concepts/scheduling/#on-demandspot-ratio-split) to balance the capacity types. With Cluster Autoscaler you can create [mixed node groups with Spot and On-Demand or Reserved Instances](https://aws.amazon.com/blogs/containers/amazon-eks-now-supports-provisioning-and-managing-ec2-spot-instances-in-managed-node-groups/).

The following is an example of using Karpenter to prioritize Spot instances over On-Demand instances. When creating a provisioner you can specify Spot, On-Demand, or both (shown below).
If you specify both, and the pod does not explicitly state whether it should use Spot or On-Demand, Karpenter prioritizes Spot when provisioning nodes with the [price-capacity-optimized allocation strategy](https://aws.amazon.com/blogs/compute/introducing-price-capacity-optimized-allocation-strategy-for-ec2-spot-instances/).

```yaml hl_lines="9"
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: spot-prioritized
spec:
  requirements:
    - key: "karpenter.sh/capacity-type"
      operator: In
      values: ["spot", "on-demand"]
```

### Savings Plans, Reserved Instances, and the AWS Enterprise Discount Program (EDP)

You can reduce your compute spend with a [Compute Savings Plan](https://aws.amazon.com/savingsplans/compute-pricing/). Savings Plans offer discounted pricing in exchange for a 1 or 3 year commitment of compute usage. The usage can apply to EC2 instances in an EKS cluster, but it also applies to any compute usage such as Lambda and Fargate.
With Savings Plans you can reduce cost and still pick any EC2 instance type during the commitment period.

A Compute Savings Plan can reduce your EC2 cost by up to 66% without requiring a commitment to specific instance types, families, or regions. Savings are automatically applied to instances as you use them.

An EC2 Instance Savings Plan reduces compute cost by up to 72% with a commitment to usage in a specific region and EC2 family (e.g. the C family). You can shift usage to any AZ within the region, use any generation of the instance family such as c5 or c6, and use any size of instance within the family.
The discount is automatically applied to any instance in your account that matches the Savings Plan criteria.

[Reserved Instances](https://aws.amazon.com/ec2/pricing/reserved-instances/) are similar to an EC2 Instance Savings Plan, but they also guarantee capacity in an Availability Zone or region, and they reduce cost by up to 72% compared to On-Demand instances. Once you have calculated how much reserved capacity you need, you can select a reservation term (1 or 3 years). The discounts are automatically applied as you run those EC2 instances in your account.

Customers can also enter into an enterprise agreement with AWS. Enterprise agreements give customers the option to tailor the agreement that best suits their requirements. Customers can receive pricing discounts based on the AWS Enterprise Discount Program (EDP).
For additional information on enterprise agreements, please contact your AWS sales representative.

### On-Demand

On-Demand EC2 instances have the benefit of availability without interruptions (compared to Spot) and of no long-term commitments (compared to Savings Plans). To reduce cost in a cluster, you should reduce your usage of On-Demand EC2 instances.

After optimizing your workload requirements, you should calculate a minimum and maximum capacity for your clusters. These numbers may change over time, but they rarely go down. It is a good idea to use a Savings Plan for everything under the minimum, and to reserve capacity that will not affect the availability of your applications.
Anything else that is not used consistently, or that needs to stay available, can run On-Demand.

As mentioned in this section, the best way to reduce usage is to consume fewer resources and make the most of the resources you do provision. With Cluster Autoscaler you can remove under-utilized nodes with the `scale-down-utilization-threshold` setting. With Karpenter, it is best to enable consolidation.

To manually identify the EC2 instance types that can be used for your workloads, you can use [ec2-instance-selector](https://github.com/aws/amazon-ec2-instance-selector). It can show the instances available in each region as well as the instances compatible with EKS.
The following example shows the instances available in the us-east-1 region with an x86 processor architecture, 4 GB of memory, and 2 vCPUs.

```bash
ec2-instance-selector --memory 4 --vcpus 2 --cpu-architecture x86_64 \
  -r us-east-1 --service eks
c5.large
c5a.large
c5ad.large
c5d.large
c6a.large
c6i.large
t2.medium
t3.medium
t3a.medium
```

For non-production environments you can automatically scale the cluster down during unused hours, such as nights and weekends. The kubecost project [cluster-turndown](https://github.com/kubecost/cluster-turndown) is an example of a controller that can automatically scale clusters down on a set schedule.

### Fargate Compute

Fargate compute is a fully managed compute option for EKS clusters. It provides pod isolation by scheduling one pod per node in a Kubernetes cluster.
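Which pods land on Fargate is controlled by a Fargate profile. A minimal sketch with eksctl (the cluster name, region, and namespace are hypothetical):

```yaml
# eksctl ClusterConfig fragment: every pod created in the
# "batch" namespace is scheduled onto Fargate.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # hypothetical cluster name
  region: us-east-1
fargateProfiles:
  - name: fp-batch
    selectors:
      - namespace: batch
```

Profiles can also match on pod labels, so a single namespace can split its workloads between Fargate and EC2 nodes.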
This lets you size the compute node to the CPU and memory requirements of the workload, giving you tight control over workload usage in the cluster.

Fargate can scale workloads from a minimum of 0.25 vCPU and 0.5 GB of memory up to a maximum of 16 vCPU and 120 GB of memory. There are limits on the available [pod size variations](https://docs.aws.amazon.com/eks/latest/userguide/fargate-pod-configuration.html), so you should verify that a Fargate configuration fits your workload. For example, if your workload requires 1 vCPU and 0.5 GB of memory, the smallest Fargate pod for it will be 1 vCPU and 2 GB of memory.

Fargate has many benefits, such as removing EC2 instance and operating system management, but it may require more compute power than traditional EC2 instances because every deployed pod is isolated on its own node in the cluster.
This requires more duplication of components such as the kubelet, logging agents, and any daemonsets you would normally deploy to a node. Daemonsets are not supported in Fargate, so they must be converted into pod "sidecars" and run alongside the application.

Because Fargate isolates each workload on its own node, workloads cannot burst and capacity cannot be shared, so you lose the benefits of bin packing and CPU over-provisioning. Fargate saves you the EC2 instance management time, which itself has a cost, but the CPU and memory cost can be more expensive than other EC2 capacity types. Fargate pods can take advantage of a Compute Savings Plan to reduce their On-Demand cost.

## Optimizing Compute Usage

Another way to save money on compute infrastructure is to use more efficient compute for the workload.
That can come from more performant general-purpose compute such as [Graviton processors](https://aws.amazon.com/ec2/graviton/), which are up to 20% cheaper and 60% more energy efficient than x86, or from workload-specific accelerators such as GPUs and [FPGAs](https://aws.amazon.com/ec2/instance-types/f1/). You need to build containers that can [run on the ARM architecture](https://aws.amazon.com/blogs/containers/how-to-build-your-containers-for-arm-and-save-with-graviton-and-spot-instances-on-amazon-ecs/) and [set up nodes with the right accelerators](https://aws.amazon.com/blogs/compute/running-gpu-accelerated-kubernetes-workloads-on-p3-and-p2-ec2-instances-with-amazon-eks/) for your workloads.

EKS can run clusters with a mix of architectures (e.g. amd64 and arm64), and if your containers are compiled for multiple architectures, you can take advantage of Graviton processors with Karpenter by allowing both architectures in the provisioner.
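On the workload side, a pod that is only built for one architecture can pin itself with the standard `kubernetes.io/arch` node label. A sketch (the pod name and image are placeholders):

```yaml
# Pod that will only be scheduled onto arm64 (e.g. Graviton) nodes.
apiVersion: v1
kind: Pod
metadata:
  name: arm64-only        # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/arch: arm64   # well-known label reported by every node
  containers:
  - name: app
    image: nginx          # placeholder; the image must be built for arm64
```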
To keep performance consistent, however, it is recommended that each workload stay on a single compute architecture, and only use a different architecture when no additional capacity is available.

A provisioner can be configured with multiple architectures, and workloads can also request a specific architecture in their workload spec.

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
  - key: "kubernetes.io/arch"
    operator: In
    values: ["arm64", "amd64"]
```

With Cluster Autoscaler you need to create a node group for Graviton instances and set [node tolerations on your workloads](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) to take advantage of the new capacity.

GPUs and FPGAs can greatly improve the performance of a workload, but the workload needs to be optimized to use the accelerator.
Many workload types for machine learning and artificial intelligence can use GPUs for compute, and you can add those instances to a cluster and mount the accelerator into workloads using resource requests.

```yaml
spec:
  template:
    spec:
      containers:
      - # ...
        resources:
          limits:
            nvidia.com/gpu: "1"
```

Some GPU hardware can be shared across multiple workloads, so a single GPU can be provisioned and shared. To see how to configure GPU sharing for workloads, see the [virtual GPU device plugin](https://aws.amazon.com/blogs/opensource/virtual-gpu-device-plugin-for-inference-workload-in-kubernetes/) for details. You can also refer to the following blogs:
* [GPU sharing on Amazon EKS with NVIDIA time-slicing and accelerated EC2 instances](https://aws.amazon.com/blogs/containers/gpu-sharing-on-amazon-eks-with-nvidia-time-slicing-and-accelerated-ec2-instances/)
* [Maximizing GPU utilization with NVIDIA's Multi-Instance GPU (MIG) on Amazon EKS: Running more pods per GPU for enhanced performance](https://aws.amazon.com/blogs/containers/maximizing-gpu-utilization-with-nvidias-multi-instance-gpu-mig-on-amazon-eks-running-more-pods-per-gpu-for-enhanced-performance/)
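As a taste of what the time-slicing approach in the first blog configures, the NVIDIA device plugin accepts a sharing configuration that advertises one physical GPU as several schedulable `nvidia.com/gpu` replicas. A sketch (the replica count is illustrative, and how the file is wired in, e.g. via a ConfigMap, depends on how the plugin is deployed):

```yaml
# NVIDIA k8s-device-plugin config: expose each physical GPU as
# 4 nvidia.com/gpu resources via time-slicing. Time-slicing shares
# compute, not memory, so co-located pods must fit in GPU memory together.
version: v1
sharing:
  timeSlicing:
    resources:
    - name: nvidia.com/gpu
      replicas: 4
```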
            Karpenter   price capacity optimization        https   aws amazon com blogs compute introducing price capacity optimized allocation strategy for ec2 spot instances                                        yaml hl lines  9  apiVersion  karpenter sh v1alpha5 kind  Provisioner metadata    name  spot prioritized spec    requirements        key   karpenter sh capacity type         operator  In         values    spot    on demand            Savings Plans            AWS                EDP    Compute Savings Plan  https   aws amazon com savingsplans compute pricing                            Savings Plan  1     3                                  EKS       EC2                 Lambda   Fargate                        Savings Plan                              EC2                       Compute Savings Plan                                          EC2        66                                                  EC2      Savings Plan          EC2        C                                 72                 AZ             c5    c6                                                                  Savings Plan                                               https   aws amazon com ec2 pricing reserved instances    EC2      Savings Plan                                                  72                                1     3                          EC2                                    AWS                                                                         AWS                EDP  Enterprise Discount Program                                           AWS                                 EC2                                Savings Plan                                                 EC2                                                                                                                                       Savings Plan                                                                                                                                                        
                         Cluster Autoscaler        scale down utilization threshold                              Karpenter                                        EC2                      ec2 instance selector  https   github com aws amazon ec2 instance selector                                          EKS                               x86            4Gb      vCPU 2       us east 1                              bash ec2 instance selector   memory 4   vcpus 2   cpu architecture x86 64      r us east 1   service eks c5 large c5a large c5ad large c5d large c6a large c6i large t2 medium t3 medium t3a medium                                                                   kubecost       cluster turndown  https   github com kubecost cluster turndown                                                   Fargate      Fargate      EKS                                                                                    CPU   RAM                                                                 Fargate     0 25vCPU  0 5GB            16vCPU  120GB                                             https   docs aws amazon com eks latest userguide fargate pod configuration html            Fargate                                      vCPU 1   0 5GB                         Fargate     vCPU 1   2GB          Fargate  EC2                                                                               EC2                                         Kubelet                                                         Fargate                                                   Fargate                                                      CPU                         Fargate              EC2                       CPU              EC2                    Fargate         Savings Plan                                                                                                      x86      20               60        Graviton       https   aws amazon com ec2 graviton                          GPU    FPGA  https   aws 
amazon com ec2 instance types f1                                           ARM            https   aws amazon com blogs containers how to build your containers for arm and save with graviton and spot instances on amazon ecs                       https   aws amazon com blogs compute running gpu accelerated kubernetes workloads on p3 and p2 ec2 instances with amazon eks                          EKS             amd64   arm64                                                                    Karpenter     Graviton                                                                                                                                                                         yaml apiVersion  karpenter sh v1alpha5 kind  Provisioner metadata    name  default spec    requirements      key   kubernetes io arch      operator  In     values    arm64    amd64        Cluster Autoscaler       Graviton                                                   https   kubernetes io docs concepts scheduling eviction taint and toleration               GPU  FPGA                                                                                        GPU                                                                  yaml spec    template      spec        containers                  resources            limits              nvidia com gpu   1          GPU                               GPU                           GPU                              GPU          https   aws amazon com blogs opensource virtual gpu device plugin for inference workload in kubernetes                                      NVIDIA              EC2            Amazon EKS    GPU     https   aws amazon com blogs containers gpu sharing on amazon eks with nvidia time slicing and accelerated ec2 instances      Amazon EKS   NVIDIA          GPU MIG        GPU          GPU                        https   aws amazon com blogs containers maximizing gpu utilization with nvidias multi instance gpu mig on amazon eks running more pods 
per gpu for enhanced performance "}
{"questions":"eks exclude true HA AWS AZ Amazon EKS EKS VPC ELB ECR search","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# Cost Optimization - Networking\n\nArchitecting systems for high availability (HA) is a best practice for achieving resilience and fault tolerance. In practice, this means spreading your workloads and the underlying infrastructure across multiple Availability Zones (AZs) in a given AWS Region. Ensuring these characteristics are in place for your Amazon EKS environment improves the overall reliability of the system. In addition, your EKS environment is likely composed of a variety of constructs (i.e. VPCs), components (i.e. ELBs), and integrations (i.e. ECR and other container registries). \n\nThe combination of highly available systems and other use-case-specific components can play a significant role in how data is transferred and processed. This in turn also affects the costs incurred by data transfer and processing. 
\n\nThe practices detailed below will help you design and optimize your EKS environments to achieve cost effectiveness across different domains and use cases.\n\n\n## Pod to Pod Communication\n\nDepending on your setup, network communication and data transfer between Pods can have a significant impact on the overall cost of running Amazon EKS workloads. This section covers different concepts and approaches for reducing the costs tied to inter-pod communication while taking into account highly available (HA) architectures, application performance, and resilience. \n\n### Restricting Traffic to an Availability Zone\n\nFrequent egress cross-zone traffic (traffic distributed between AZs) can have a major impact on your network-related costs. Below are some strategies for controlling the amount of cross-zone traffic between Pods in your EKS cluster. 
\n\n_If you want fine-grained visibility into the amount of cross-zone traffic between Pods in your cluster (such as the amount of data transferred in bytes), [refer to this post](https:\/\/aws.amazon.com\/blogs\/containers\/getting-visibility-into-your-amazon-eks-cross-az-pod-to-pod-network-bytes\/)._\n\n**Using Topology Aware Routing (formerly known as Topology Aware Hints)**\n\n![Topology aware routing](..\/images\/topo_aware_routing.png)\n\nWhen using Topology Aware Routing, it's important to understand how Services, EndpointSlices, and `kube-proxy` work together when routing traffic. As the diagram above shows, a Service is the stable network abstraction layer that receives traffic destined for your Pods. When a Service is created, multiple EndpointSlices are created. Each EndpointSlice holds a list of endpoints containing a subset of Pod addresses, the nodes they're running on, and any additional topology information. 
`kube-proxy` is a daemonset that runs on every node in the cluster and also fulfills the role of internal routing, but it does so based on what it consumes from the created EndpointSlices.\n\nWhen [*Topology aware routing*](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/topology-aware-routing\/) is enabled and implemented on a Kubernetes Service, the EndpointSlices controller will proportionally allocate endpoints to the different zones your cluster is spread across. For each of those endpoints, the EndpointSlices controller will also set a _hint_ for the zone. _Hints_ describe which zone an endpoint should serve traffic for. `kube-proxy` will then route traffic from a zone to an endpoint based on the applied _hints_. \n\nThe diagram below shows how EndpointSlices with hints are organized so that `kube-proxy` can know which destination they should go to based on the zonal origin. 
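\n\nAs an illustrative sketch (the Service name, namespace, addresses, and zones below are hypothetical), an EndpointSlice with a hint applied looks roughly like this; `kube-proxy` on nodes in `eu-west-1a` would then prefer this endpoint for traffic originating in that zone:\n\n```yaml\napiVersion: discovery.k8s.io\/v1\nkind: EndpointSlice\nmetadata:\n  name: orders-service-abc12\n  namespace: ecommerce\n  labels:\n    kubernetes.io\/service-name: orders-service\naddressType: IPv4\nendpoints:\n- addresses:\n  - \"10.0.1.12\"\n  conditions:\n    ready: true\n  zone: eu-west-1a\n  # the hint set by the EndpointSlices controller\n  hints:\n    forZones:\n    - name: eu-west-1a\nports:\n- port: 3003\n  protocol: TCP\n```\n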
Without hints, there is no such allocation or organization, and traffic will be proxied to different zonal destinations regardless of where it's coming from. \n\n![Endpoint Slice](..\/images\/endpoint_slice.png)\n\nIn some cases, the EndpointSlice controller may apply a _hint_ for a different zone, meaning the endpoint could end up serving traffic originating from a different zone. The reason for this is to try and maintain an even distribution of traffic between endpoints in different zones.\n\nBelow is a code snippet on how to enable _topology aware routing_ for a Service. \n\n```yaml hl_lines=\"6-7\"\napiVersion: v1\nkind: Service\nmetadata:\n  name: orders-service\n  namespace: ecommerce\n  annotations:\n    service.kubernetes.io\/topology-mode: Auto\nspec:\n  selector:\n    app: orders\n  type: ClusterIP\n  ports:\n  - protocol: TCP\n    port: 3003\n    targetPort: 3003\n```\n\nThe screenshot below shows the result of the EndpointSlices controller having successfully applied a hint to an endpoint for a Pod replica running in the `eu-west-1a` Availability Zone. \n\n![Slice shell](..\/images\/slice_shell.png)\n\n!!! 
note\n    Bear in mind that Topology aware routing is still in **beta**. In addition, this feature is more predictable when workloads are widely and evenly spread across the cluster topology. Therefore, it's highly recommended to use it alongside scheduling constraints that increase the availability of an application, such as [pod topology spread constraints](https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/topology-spread-constraints\/).\n\n**Using Autoscalers: Provisioning Nodes to a Specific AZ**\n\nWe _strongly recommend_ running your workloads in highly available environments across multiple AZs. This improves the reliability of your applications, especially in the event of an incident in an AZ. If you're willing to sacrifice reliability in order to reduce your network-related costs, you can restrict your nodes to a single AZ. 
\n\nTo run all your Pods in the same AZ, either provision worker nodes in the same AZ or schedule the Pods onto worker nodes running in the same AZ. To provision nodes within a single AZ, define a node group with subnets belonging to the same AZ using [Cluster Autoscaler (CA)](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/cluster-autoscaler). For [Karpenter](https:\/\/karpenter.sh\/), use the `topology.kubernetes.io\/zone` key and specify the AZ where the worker nodes should be created. 
For example, the Karpenter provisioner snippet below provisions nodes in the us-west-2a AZ.\n\n**Karpenter**\n\n```yaml hl_lines=\"5-9\"\napiVersion: karpenter.sh\/v1alpha5\nkind: Provisioner\nmetadata:\n  name: single-az\nspec:\n  requirements:\n  - key: \"topology.kubernetes.io\/zone\"\n    operator: In\n    values: [\"us-west-2a\"]\n```\n\n**Cluster Autoscaler (CA)**\n\n```yaml hl_lines=\"7-8\"\napiVersion: eksctl.io\/v1alpha5\nkind: ClusterConfig\nmetadata:\n  name: my-ca-cluster\n  region: us-east-1\n  version: \"1.21\"\navailabilityZones:\n- us-east-1a\nmanagedNodeGroups:\n- name: managed-nodes\n  labels:\n    role: managed-nodes\n  instanceType: t3.medium\n  minSize: 1\n  maxSize: 10\n  desiredCapacity: 1\n...\n```\n\n**Using Pod Assignment and Node Affinity**\n\nAlternatively, if you have worker nodes running in multiple AZs, each node will carry the `topology.kubernetes.io\/zone` label with the value of its AZ (such as us-west-2a or us-west-2b). You can utilize `nodeSelector` or `nodeAffinity` to schedule Pods onto nodes in a single AZ. 
For example, the following manifest schedules the Pod onto a node running in the us-west-2a AZ.\n\n```yaml hl_lines=\"7-9\"\napiVersion: v1\nkind: Pod\nmetadata:\n  name: nginx\n  labels:\n    env: test\nspec:\n  nodeSelector:\n    topology.kubernetes.io\/zone: us-west-2a\n  containers:\n  - name: nginx\n    image: nginx \n    imagePullPolicy: IfNotPresent\n```\n\n### Restricting Traffic to a Node\n\nThere are cases where restricting traffic at a zonal level isn't enough. Apart from reducing costs, you may have the added requirement of reducing network latency between certain applications that communicate frequently with each other. In order to achieve optimal network performance and reduce costs, you need a way to restrict traffic to a specific node. For example, Microservice A should always talk to Microservice B on Node 1, even in highly available (HA) setups. 
Having Microservice A on Node 1 talk to Microservice B on Node 2 may have a negative impact on the desired performance for applications of this nature, especially if Node 2 is in a separate AZ altogether. \n\n**Using the Service Internal Traffic Policy**\n\nTo restrict Pod network traffic to a node, you can make use of the _[Service internal traffic policy](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service-traffic-policy\/)_. By default, traffic sent to a workload's Service will be randomly distributed across the different endpoints that have been created. So in an HA architecture, that means traffic from Microservice A could go to any replica of Microservice B on any given node across the different AZs. However, with the Service's internal traffic policy set to `local`, traffic will be restricted to endpoints on the node that the traffic originated from. 
This policy dictates the exclusive use of node-local endpoints. Implicitly, your network traffic-related costs for that workload will be lower than if it were distributed across the cluster. In addition, latency will be lower, improving your application's performance. \n\n!!! note\n    It's important to note that this feature cannot be combined with topology aware routing in Kubernetes.\n\n![Local internal traffic](..\/images\/local_traffic.png)\n\nBelow is a code snippet on how to set the _internal traffic policy_ for a Service. 
\n\n\n```yaml hl_lines=\"14\"\napiVersion: v1\nkind: Service\nmetadata:\n  name: orders-service\n  namespace: ecommerce\nspec:\n  selector:\n    app: orders\n  type: ClusterIP\n  ports:\n  - protocol: TCP\n    port: 3003\n    targetPort: 3003\n  internalTrafficPolicy: Local\n```\n\nTo avoid unexpected application behavior caused by drops in traffic, you should consider the following approaches:\n\n* Run enough replicas of each communicating Pod\n* Have a relatively even spread of Pods using [topology spread constraints](https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/topology-spread-constraints\/)\n* Make use of [pod affinity rules](https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#inter-pod-affinity-and-anti-affinity) to co-locate communicating Pods on the same node\n\nIn this example, you have 2 replicas of Microservice A and 3 replicas of Microservice B. If the replicas of Microservice A are spread across Nodes 1 and 2, and Microservice B has all 3 of its replicas on Node 3, then they won't be able to communicate because of the `local` internal traffic policy. 
When there are no available node-local endpoints, the traffic will be dropped. \n\n![node-local_no_peer](..\/images\/no_node_local_1.png)\n\nIf Microservice B has 2 of its 3 replicas on Nodes 1 and 2, then there will be communication between the peer applications. But you would still have an isolated replica of Microservice B without any peer replica to communicate with. \n\n![node-local_with_peer](..\/images\/no_node_local_2.png)\n\n!!! note\n    In some scenarios, an isolated replica like the one depicted in the diagram above may not be a cause for concern if it still serves a purpose (such as serving requests from external inbound traffic).\n\n**Using the Service Internal Traffic Policy with Topology Spread Constraints**\n\nUsing the _internal traffic policy_ together with _topology spread constraints_ can be useful for making sure you have the right number of replicas of communicating microservices on different nodes. 
\n\n\n```yaml hl_lines=\"16-22\"\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: express-test\nspec:\n  replicas: 6\n  selector:\n    matchLabels:\n      app: express-test\n  template:\n    metadata:\n      labels:\n        app: express-test\n        tier: backend\n    spec:\n      topologySpreadConstraints:\n      - maxSkew: 1\n        topologyKey: \"topology.kubernetes.io\/zone\"\n        whenUnsatisfiable: ScheduleAnyway\n        labelSelector:\n          matchLabels:\n            app: express-test\n```\n\n**Using the Service Internal Traffic Policy with Pod Affinity Rules**\n\nAnother approach is to make use of pod affinity rules when using the Service internal traffic policy. With pod affinity, you can influence the scheduler to co-locate certain Pods because of their frequent communication. 
By applying strict scheduling constraints (`requiredDuringSchedulingIgnoredDuringExecution`) to certain Pods, you can get better results for Pod co-location when the scheduler places Pods onto nodes.

```yaml hl_lines="11-20"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graphql
  namespace: ecommerce
  labels:
    app.kubernetes.io/version: "0.1.6"
    ...
    spec:
      serviceAccountName: graphql-service-account
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - orders
            topologyKey: "kubernetes.io/hostname"
```

## Load Balancer and Pod Communication

EKS workloads are typically fronted by a load balancer that distributes traffic to the relevant Pods in the EKS cluster. Your architecture may comprise internal and/or external load balancers. 
Depending on your architecture and network traffic configuration, communication between load balancers and Pods can contribute significantly to data transfer charges.

You can use the [AWS Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller) to automatically manage the creation of ELB resources (ALBs and NLBs). The data transfer charges you incur in such setups depend on the path taken by the network traffic. The AWS Load Balancer Controller supports two network traffic modes, _instance mode_ and _IP mode_.

With _instance mode_, a NodePort is opened on each node of your EKS cluster. The load balancer then proxies traffic evenly across the nodes. If a target Pod is running on the node that receives the traffic, no data transfer costs are incurred. 
However, if the target Pod is on a separate node in a different AZ from the NodePort receiving the traffic, there is an extra network hop from kube-proxy to the target Pod. In such a scenario, cross-AZ data transfer charges apply. Because traffic is evenly distributed across the nodes, it is highly likely that additional data transfer charges will be incurred for the cross-zone network traffic hops from kube-proxy to the relevant target Pods.

The diagram below depicts the network path of traffic flowing from the load balancer to a NodePort, and subsequently from `kube-proxy` to a target Pod on a separate node in a different AZ. This is an example of an _instance mode_ setup. 

![LB to Pod](../images/lb_2_pod.png)

With _IP mode_, network traffic is proxied directly from the load balancer to the target Pods. Therefore, _no data transfer charges are incurred_ with this approach. 

!!! tip
    To reduce data transfer charges, it is advisable to set your load balancer to _IP traffic mode_. With this setup, it is also important to verify that your load balancer is deployed across all subnets in your VPC. 

The diagram below depicts the network path of traffic flowing from the load balancer to Pods in network _IP mode_. 

![IP mode](../images/ip_mode.png)

## Data Transfer from Container Registries

### Amazon ECR

Data transfer into Amazon ECR private registries is free. _In-region data transfer incurs no cost_, but data transfer out to the internet and between Regions incurs internet data transfer charges on both sides of the transfer. 

You should utilize ECR's built-in [image replication feature](https://docs.aws.amazon.com/AmazonECR/latest/userguide/replication.html) to replicate the relevant container images into the same Region as your workloads. 
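Cross-Region replication is configured once per registry. A minimal sketch of a replication configuration document, which can be applied with `aws ecr put-replication-configuration` (the destination Region and the account ID `111122223333` are placeholders):

```json
{
  "rules": [
    {
      "destinations": [
        {
          "region": "eu-west-1",
          "registryId": "111122223333"
        }
      ]
    }
  ]
}
```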
That way, you pay the replication charge only once and all same-Region (in-region) image pulls are free.

You can further reduce the data transfer costs associated with pulling images from ECR (data transfer out) by _connecting to your in-region ECR repositories over [Interface VPC Endpoints](https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html)_. The alternative approach of connecting to ECR's public AWS endpoints through a NAT gateway and an internet gateway incurs higher data processing and transfer costs. The next section covers reducing data transfer costs between your workloads and AWS services in greater detail. 

If you are running workloads with particularly large images, you can build your own custom Amazon Machine Images (AMIs) with pre-cached container images. 
This can reduce the initial image pull time and the potential data transfer costs from the container registry to your EKS worker nodes. 


## Data Transfer to the Internet and AWS Services

Integrating Kubernetes workloads with other AWS services or third-party tools and platforms over the internet is a common practice. The underlying network infrastructure used to route traffic to and from the relevant destinations can influence the costs incurred in the data transfer process.

### Using NAT Gateways

NAT gateways are network components that perform network address translation (NAT). The diagram below depicts Pods in an EKS cluster communicating with other AWS services (Amazon ECR, DynamoDB, and S3) and third-party platforms. In this example, the Pods run in private subnets in separate Availability Zones. 
To send and receive traffic from the internet, a NAT gateway is deployed in the public subnet of one Availability Zone, allowing all resources with private IP addresses to access the internet by sharing a single public IP address. The NAT gateway in turn communicates with the internet gateway component, allowing packets to be sent on to their final destination.

![NAT Gateway](../images/nat_gw.png)

When using NAT gateways for such use cases, _you can minimize data transfer costs by deploying a NAT gateway in each AZ._ That way, traffic routed to the internet passes through the NAT gateway in the same AZ, avoiding inter-AZ data transfer. However, even though you save on inter-AZ data transfer costs, the implication of this setup is that you incur the cost of the additional NAT gateways in your architecture. 
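In infrastructure-as-code terms, the per-AZ pattern means one NAT gateway plus one private route table per zone. A hedged CloudFormation sketch for a single zone follows (the subnet and route table IDs are placeholders; repeat the trio once per AZ):

```yaml
# One NAT gateway per AZ: the private route table of each zone points at
# the NAT gateway living in that zone's public subnet.
NatEipA:
  Type: AWS::EC2::EIP
  Properties:
    Domain: vpc
NatGatewayA:
  Type: AWS::EC2::NatGateway
  Properties:
    AllocationId: !GetAtt NatEipA.AllocationId
    SubnetId: subnet-public-a    # placeholder: public subnet in AZ a
PrivateRouteA:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: rtb-private-a  # placeholder: private route table in AZ a
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGatewayA
```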


This recommended approach is depicted in the diagram below.

![Recommended approach](../images/recommended_approach.png)

### Using VPC Endpoints

To cut costs further in such architectures, _you should use [VPC Endpoints](https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/what-are-vpc-endpoints.html) to establish connectivity between your workloads and AWS services._ VPC endpoints allow you to access AWS services from within a VPC without data/network packets traversing the internet. All traffic is internal and stays within the AWS network. 
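As a concrete sketch, keeping S3-bound traffic on the AWS network takes a single gateway endpoint plus a route table association. A hedged CloudFormation fragment (the Region, VPC ID, and route table IDs are placeholders):

```yaml
# Gateway VPC endpoint for S3: adds routes so S3-bound traffic from the
# associated route tables stays on the AWS network instead of the internet.
S3GatewayEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcEndpointType: Gateway
    ServiceName: com.amazonaws.eu-west-1.s3  # placeholder Region
    VpcId: vpc-0example                      # placeholder VPC ID
    RouteTableIds:
      - rtb-private-a                        # placeholder route tables
      - rtb-private-b
```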
There are two types of VPC endpoints: Interface VPC endpoints ([supported by many AWS services](https://docs.aws.amazon.com/vpc/latest/privatelink/aws-services-privatelink-support.html)) and Gateway VPC endpoints (supported only by S3 and DynamoDB).

**Gateway VPC Endpoints**

_There are no hourly or data transfer costs associated with Gateway VPC endpoints._ When using Gateway VPC endpoints, keep in mind that they cannot be extended beyond the VPC boundary; they cannot be used through VPC peering, VPN networking, or Direct Connect.

**Interface VPC Endpoints**

Interface VPC endpoints incur an [hourly charge](https://aws.amazon.com/privatelink/pricing/) and, depending on the AWS service, may or may not incur additional charges for data processing through the underlying ENIs. To reduce the inter-AZ data transfer costs associated with Interface VPC endpoints, you can create a VPC endpoint in each Availability Zone. 
You can create multiple VPC endpoints in the same VPC, even if they point to the same AWS service.

The diagram below depicts Pods communicating with AWS services via VPC endpoints.

![VPC Endpoints](../images/vpc_endpoints.png)

## Data Transfer between VPCs

In some cases, you may have workloads in different VPCs (within the same AWS Region) that need to communicate with each other. This can be accomplished by allowing the traffic to traverse the public internet through internet gateways attached to the respective VPCs. Such communication can be enabled by deploying infrastructure components like EC2 instances, NAT gateways, or NAT instances in public subnets. However, a setup that includes these components incurs charges for processing/transferring data into and out of the VPCs. If the traffic to and from the respective VPCs also moves across AZs, there is an additional charge for the data transfer. 
The diagram below depicts a setup that uses NAT gateways and internet gateways to establish communication between workloads in different VPCs. 

![Between VPCs](../images/between_vpcs.png)

### VPC Peering Connections 

To cut costs in such use cases, you can make use of [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html). With a VPC peering connection, there are no data transfer charges for network traffic that stays within the same AZ; charges apply once the traffic crosses AZs. The VPC peering approach is nonetheless recommended for cost-efficient communication between workloads in separate VPCs within the same AWS Region. However, it is important to note that VPC peering is primarily effective for 1:1 VPC connectivity because it does not allow transitive networking. 

The diagram below is a high-level representation of workload communication through a VPC peering connection. 


![Peering](../images/peering.png)

### Transitive Networking Connections

As noted in the previous section, VPC peering connections do not allow transitive networking connectivity. If you need to connect 3 or more VPCs with transitive networking requirements, you should use a [Transit Gateway (TGW)](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html). This lets you overcome the limitations of VPC peering and the operational overhead associated with multiple VPC peering connections between numerous VPCs. You are billed [on an hourly basis](https://aws.amazon.com/transit-gateway/pricing/) and for data sent to the TGW. 
_There is no destination charge associated with inter-AZ traffic that travels through the TGW._

The diagram below depicts inter-AZ traffic flowing through a TGW between workloads in different VPCs within the same AWS Region.

![Transitive](../images/transititive.png)

## Using a Service Mesh

Service meshes offer powerful networking capabilities that can be used to reduce network-related costs in your EKS cluster environments. However, if you adopt a service mesh, you should carefully weigh the operational tasks and complexity that it will introduce into your environment. 

### Restricting Traffic to Availability Zones

**Using Istio's locality weighted distribution**

Istio enables you to apply network policies to traffic _after_ routing has occurred. 
This is done using [Destination Rules](https://istio.io/latest/docs/reference/config/networking/destination-rule/) such as [locality weighted distribution](https://istio.io/latest/docs/tasks/traffic-management/locality-load-balancing/distribute/). This capability lets you control the weight (expressed as a percentage) of traffic that can go to a certain destination based on its origin. The source of this traffic can be an external (or public) load balancer or a Pod within the cluster itself. When all Pod endpoints are available, a locality is selected based on a weighted round-robin load balancing algorithm. If certain endpoints are unhealthy or unavailable, the [locality weights are automatically adjusted](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/locality_weight.html) to reflect those changes in the available endpoints. 

!!! note
    Before implementing locality weighted distribution, you should first understand your network traffic patterns and the implications the Destination Rule policy may have on your application's behavior. As such, it is important to have distributed tracing mechanisms in place with tools like [AWS X-Ray](https://aws.amazon.com/xray/) or [Jaeger](https://www.jaegertracing.io/). 

The Istio Destination Rules described above can also be applied to manage traffic flowing from a load balancer to Pods in an EKS cluster. Locality weighted distribution rules can be applied to Services that receive traffic from a highly available load balancer (specifically an Ingress gateway). These rules allow you to control where traffic goes based on its zonal origin (in this case, the load balancer). 
If configured correctly, this produces less egress cross-zone traffic than a load balancer that distributes traffic evenly or randomly to Pod replicas across multiple AZs. 

Below is an example code block of a Destination Rule resource in Istio. As seen below, this resource specifies weighted configurations for traffic coming from three different Availability Zones in the `eu-west-1` Region. These configurations declare that the majority of traffic (70% in this case) coming from a given Availability Zone should be proxied to destinations in the same Availability Zone it originated from. 


```yaml hl_lines="7-11"
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: express-test-dr
spec:
  host: express-test.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        distribute:
        - from: eu-west-1/eu-west-1a/*
          to:
            "eu-west-1/eu-west-1a/*": 70
            "eu-west-1/eu-west-1b/*": 20
            "eu-west-1/eu-west-1c/*": 10
        - from: eu-west-1/eu-west-1b/*
          to:
            "eu-west-1/eu-west-1a/*": 20
            "eu-west-1/eu-west-1b/*": 70
            "eu-west-1/eu-west-1c/*": 10
        - from: eu-west-1/eu-west-1c/*
          to:
            "eu-west-1/eu-west-1a/*": 20
            "eu-west-1/eu-west-1b/*": 10
            "eu-west-1/eu-west-1c/*": 70
    connectionPool:
      http:
        http2MaxRequests: 10
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutiveGatewayErrors: 1
      interval: 1m
      baseEjectionTime: 30s
```

!!! note
    The minimum weight that can be distributed to a destination is 1%. This is in order to maintain failover Regions and Availability Zones in case the endpoints of the main destination become unhealthy or unavailable.

The diagram below depicts a scenario in which there is a highly available load balancer in the _eu-west-1_ Region and locality weighted distribution is applied. 
The Destination Rule policy for this diagram is configured to send 60% of the traffic coming from _eu-west-1a_ to Pods in the same AZ, whereas 40% of the traffic from _eu-west-1a_ should go to Pods in eu-west-1b. 

![Istio Traffic Control](../images/istio-traffic-control.png)

### Restricting Traffic to Availability Zones and Nodes

**Using internal traffic policy with Istio**

To reduce the network costs associated with _external_ incoming traffic and _internal_ traffic between Pods, you can combine Istio's Destination Rules with the Kubernetes Service _internal traffic policy_. 
How you combine Istio Destination Rules with the service internal traffic policy largely depends on three things:

* The role of the microservices
* The network traffic patterns across the microservices
* How the microservices are deployed across the Kubernetes cluster topology

The diagram below shows what the network flow looks like in the case of a nested request, and how the aforementioned policies control the traffic.

![External and Internal traffic policy](../images/external-and-internal-traffic-policy.png)

1. The end user sends a request to **App A**, which in turn sends a nested request to **App C**. The request is first sent to a highly available load balancer, which has instances in AZ 1 and AZ 2 as seen in the diagram above.
2. The external incoming request is then routed to the correct destination by the Istio Virtual Service.
3. After the request is routed, the Istio Destination Rule controls how much traffic goes to each AZ based on where the traffic originated (AZ 1 or AZ 2). 
4. The traffic then goes to the Service for **App A** and is proxied on to the respective Pod endpoints. As shown in the diagram, 80% of the incoming traffic is sent to Pod endpoints in AZ 1 and 20% of the incoming traffic is sent to AZ 2.
5. **App A** then makes an internal request to **App C**. **App C**'s Service has the internal traffic policy enabled (`internalTrafficPolicy: Local`). 
6. The internal request from **App A** (on *Node 1*) to **App C** succeeds because there are node-local endpoints available for **App C**. 
7. The internal request from **App A** (on *Node 3*) to **App C** fails because there are no _node-local endpoints_ available for **App C**. 
\ub2e4\uc774\uc5b4\uadf8\ub7a8\uc5d0\uc11c \ubcfc \uc218 \uc788\ub4ef\uc774 \uc571 C\uc758 \ub178\ub4dc 3\uc5d0\ub294 \ubcf5\uc81c\ubcf8\uc774 \uc5c6\uc2b5\ub2c8\ub2e4.**** \n\n\uc544\ub798 \uc2a4\ud06c\ub9b0\uc0f7\uc740 \uc774 \uc811\uadfc\ubc95\uc758 \uc2e4\uc81c \uc608\uc5d0\uc11c \ucea1\ucc98\ud55c \uac83\uc785\ub2c8\ub2e4. \uccab \ubc88\uc9f8 \uc2a4\ud06c\ub9b0\uc0f7 \uc138\ud2b8\ub294 'graphql'\uc5d0 \ub300\ud55c \uc131\uacf5\uc801\uc778 \uc678\ubd80 \uc694\uccad\uacfc 'graphql'\uc5d0\uc11c \ub178\ub4dc `ip-10-0-151.af-south-1.compute.internal` \ub178\ub4dc\uc5d0 \uac19\uc740 \uc704\uce58\uc5d0 \uc788\ub294 `orders` \ubcf5\uc81c\ubcf8\uc73c\ub85c\uc758 \uc131\uacf5\uc801\uc778 \uc911\ucca9 \uc694\uccad\uc744 \ubcf4\uc5ec\uc90d\ub2c8\ub2e4. \n\n![Before](..\/images\/before.png)\n![Before results](..\/images\/before-results.png)\n\nIstio\ub97c \uc0ac\uc6a9\ud558\uba74 \ud504\ub85d\uc2dc\uac00 \uc778\uc2dd\ud558\ub294 \ubaa8\ub4e0 [\uc5c5\uc2a4\ud2b8\ub9bc \ud074\ub7ec\uc2a4\ud130](https:\/\/www.envoyproxy.io\/docs\/envoy\/latest\/intro\/arch_overview\/intro\/terminology) \ubc0f \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \ud1b5\uacc4\ub97c \ud655\uc778\ud558\uace0 \ub0b4\ubcf4\ub0bc \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc774\ub97c \ud1b5\ud574 \ub124\ud2b8\uc6cc\ud06c \ud750\ub984\uacfc \uc6cc\ud06c\ub85c\ub4dc \uc11c\ube44\uc2a4 \uac04\uc758 \ubd84\uc0b0 \uc810\uc720\uc728\uc744 \ud30c\uc545\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. 
Continuing with the same example, you can use the following command to fetch the `orders` endpoints that the `graphql` proxy is aware of:

```bash
kubectl exec -it deploy/graphql -n ecommerce -c istio-proxy -- curl localhost:15000/clusters | grep orders
```

```bash
...
orders-service.ecommerce.svc.cluster.local::10.0.1.33:3003::**rq_error::0**
orders-service.ecommerce.svc.cluster.local::10.0.1.33:3003::**rq_success::119**
orders-service.ecommerce.svc.cluster.local::10.0.1.33:3003::**rq_timeout::0**
orders-service.ecommerce.svc.cluster.local::10.0.1.33:3003::**rq_total::119**
orders-service.ecommerce.svc.cluster.local::10.0.1.33:3003::**health_flags::healthy**
orders-service.ecommerce.svc.cluster.local::10.0.1.33:3003::**region::af-south-1**
orders-service.ecommerce.svc.cluster.local::10.0.1.33:3003::**zone::af-south-1b**
...
```

In this case, the `graphql` proxy is only aware of the `orders` endpoint of the replica it shares a node with. If you remove the `internalTrafficPolicy: Local` setting from the orders Service and rerun the command above, the results return all the endpoints of the replicas spread across the different nodes. Furthermore, examining the `rq_total` of each endpoint shows a relatively even share of the network distribution.
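That distribution share can also be computed mechanically from the `::rq_total::` lines of the cluster stats output. A small illustrative sketch, using made-up endpoint addresses and request counts rather than real proxy output:

```bash
# Hypothetical sample of the rq_total lines returned by the Envoy admin
# endpoint (localhost:15000/clusters); addresses and counts are invented.
stats='orders-service.ecommerce.svc.cluster.local::10.0.1.33:3003::rq_total::119
orders-service.ecommerce.svc.cluster.local::10.0.2.18:3003::rq_total::121'

# Split on "::" so $2 is the endpoint address and $4 the request count,
# then print each endpoint's share of the total request count.
echo "$stats" | awk -F'::' '
  { total += $4; rq[$2] = $4 }
  END { for (ep in rq) printf "%s %.0f%%\n", ep, 100 * rq[ep] / total }' | sort
```

With these sample counts both endpoints come out at roughly 50%; a single surviving endpoint with 100% of the requests would instead indicate that the node-local policy is in effect.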
Hence, when the endpoints are associated with upstream services running in different Availability Zones, this network distribution across zones incurs more cost.

As mentioned in the previous section above, you can take advantage of pod affinity to co-locate pods that communicate with each other frequently:

```yaml hl_lines="11-20"
...
spec:
...
  template:
    metadata:
      labels:
        app: graphql
        role: api
        workload: ecommerce
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - orders
            topologyKey: "kubernetes.io/hostname"
      nodeSelector:
        managedBy: karpenter
        billing-team: ecommerce
...
```

When the `graphql` and `orders` replicas do not co-exist on the same node (`ip-10-0-0-151.af-south-1.compute.internal`), the first request to `graphql` succeeds, as indicated by the `200 response code` in the Postman screenshots below, while the second nested request from `graphql` to `orders` fails with a `503 response code`.
![After](../images/after.png)
![After results](../images/after-results.png)

## Additional Resources

* [Addressing latency and data transfer costs on EKS using Istio](https://aws.amazon.com/blogs/containers/addressing-latency-and-data-transfer-costs-on-eks-using-istio/)
* [Exploring the effect of Topology Aware Hints on network traffic in Amazon Elastic Kubernetes Service](https://aws.amazon.com/blogs/containers/exploring-the-effect-of-topology-aware-hints-on-network-traffic-in-amazon-elastic-kubernetes-service/)
* [Getting visibility into your Amazon EKS cross-AZ pod to pod network bytes](https://aws.amazon.com/blogs/containers/getting-visibility-into-your-amazon-eks-cross-az-pod-to-pod-network-bytes/)
* [Optimize AZ traffic with Istio](https://youtu.be/EkpdKVm9kQY)
* [Optimize AZ traffic with Topology Aware Routing](https://youtu.be/KFgE_lNVfz4)
* [Optimize Kubernetes cost and performance with the service internal traffic policy](https://youtu.be/-uiF_zixEro)
* [Optimize Kubernetes cost and performance with Istio and the service internal traffic policy](https://youtu.be/edSgEe7Rihc)
* [Overview of Data Transfer Costs for Common Architectures](https://aws.amazon.com/blogs/architecture/overview-of-data-transfer-costs-for-common-architectures/)
* [Understanding data transfer costs for AWS container services](https://aws.amazon.com/blogs/containers/understanding-data-transfer-costs-for-aws-container-services/)
---
search:
  exclude: true
---

# Optimizing IP Address Utilization

Application modernization is driving rapid growth in the scale of containerized environments, which means more and more worker nodes and pods are being deployed.

The [Amazon VPC CNI](../vpc-cni/) plugin assigns every pod an IP address from the VPC's CIDR. This approach gives full visibility into pod addresses with tools such as VPC flow logs and other monitoring solutions. Depending on your workload type, this can cause pods to consume a substantial number of IP addresses.

When designing your AWS networking architecture, it is important to optimize Amazon EKS IP consumption at the VPC and at the node level.
This will help you mitigate IP exhaustion issues and increase the pod density per node.

In this section, we will discuss techniques that can help you achieve these goals.

## Optimize node-level IP consumption

[Prefix delegation](https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html) is a feature of Amazon Virtual Private Cloud (Amazon VPC) that allows you to assign IPv4 or IPv6 prefixes to your Amazon Elastic Compute Cloud (Amazon EC2) instances. It increases the number of IP addresses per network interface (ENI), which increases the pod density per node and improves your compute efficiency. Prefix delegation is also supported with custom networking.
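To see why prefix delegation raises pod density, it helps to work through the standard max-pods arithmetic, `ENIs × (IPs per ENI − 1) + 2`. A quick sketch, assuming the published m5.large limits of 3 ENIs and 10 IPv4 addresses per ENI:

```bash
# Max-pods arithmetic for an m5.large (assumed limits: 3 ENIs, 10 IPv4 addresses per ENI).
enis=3
ips_per_eni=10

# Without prefix delegation, each address slot holds a single IPv4 address.
without_pd=$(( enis * (ips_per_eni - 1) + 2 ))
echo "max pods without prefix delegation: $without_pd"

# With prefix delegation, each slot holds a /28 prefix (16 addresses).
with_pd=$(( enis * (ips_per_eni - 1) * 16 + 2 ))
echo "max pods with prefix delegation: $with_pd"
```

Under these assumptions the node goes from 29 to 434 assignable addresses; in practice the recommended max-pods value is capped well below that for smaller instance types, but the density gain is what makes prefix delegation attractive.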
\n\nFor details, see the [Prefix Mode for Linux](..\/prefix-mode\/index_linux\/) and [Prefix Mode for Windows](..\/prefix-mode\/index_windows\/) sections.\n\n## Mitigate IP exhaustion\n\nTo prevent your cluster from consuming all available IP addresses, size your VPC and subnets with growth in mind.\n\nAdopting [IPv6](..\/ipv6\/) is a great way to avoid these problems from the start. However, organizations whose scalability needs have outgrown their initial planning and that cannot yet adopt IPv6 should improve their VPC design to cope with IP address exhaustion. The approach most commonly used by Amazon EKS customers is to add a non-routable secondary CIDR to the VPC and configure the VPC CNI to use this additional IP space when assigning IP addresses to Pods. This is commonly referred to as [custom networking](..\/custom-networking\/). 
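A minimal sketch of the two moving parts of that approach follows; the VPC ID and CIDR are placeholders, and the ENIConfig objects themselves (one per availability zone) still have to be created separately:

```bash
# 1) Attach a secondary, non-routable CIDR block to the VPC (placeholder IDs).
aws ec2 associate-vpc-cidr-block \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 100.64.0.0/16

# 2) Tell the VPC CNI to place Pod ENIs in the subnets defined by ENIConfig
#    resources, selecting the ENIConfig by the node's availability zone.
kubectl set env daemonset aws-node -n kube-system \
  AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
kubectl set env daemonset aws-node -n kube-system \
  ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
```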
\n\nWe will cover the Amazon VPC CNI variables you can use to optimize the warm pool of IPs assigned to your nodes. We will close this section by describing some other architecture patterns that are not intrinsic to Amazon EKS but can help mitigate IP exhaustion.\n\n\n### Use IPv6 (recommended)\n\nAdopting IPv6 is the easiest way to work around the RFC1918 limitations. We recommend considering IPv6 as your first option when choosing a network architecture. Because IPv6 provides a significantly larger total IP address space, cluster administrators can focus on migrating and scaling applications without spending effort working around IPv4 limits.\n\nAmazon EKS clusters support both IPv4 and IPv6. By default, EKS clusters use the IPv4 address space. Specifying an IPv6-based address space at cluster creation time enables the use of IPv6. 
In IPv6 EKS clusters, Pods and services receive IPv6 addresses, while **maintaining the ability for legacy IPv4 endpoints to connect to services running on IPv6 clusters, and vice versa**. All Pod-to-Pod communication within the cluster always occurs over IPv6. Within a VPC (\/56), the IPv6 CIDR block size for IPv6 subnets is fixed at \/64. This provides 2^64 (approximately 18 quintillion) IPv6 addresses, allowing you to scale your deployments on EKS.\n\nFor details, see the [Running IPv6 EKS Clusters](..\/ipv6\/) section, and for hands-on experience, see the [Understanding IPv6 on Amazon EKS](https:\/\/catalog.workshops.aws\/ipv6-on-aws\/en-US\/lab-6) section of the [IPv6 Workshop](https:\/\/catalog.workshops.aws\/ipv6-on-aws\/en-US).\n\n![EKS Cluster in IPv6 Mode, traffic flow](.\/ipv6.gif)\n\n\n### Optimizing IP consumption in IPv4 clusters\n\nThis section is aimed at customers who are running existing applications or are not yet ready to migrate to IPv6. 
While we encourage every organization to migrate to IPv6 as soon as possible, we recognize that some organizations may still need to look at alternative approaches to scale their container workloads on IPv4. For this reason, we also introduce architecture patterns for optimizing IPv4 (RFC1918) address space consumption with Amazon EKS clusters.\n\n#### Plan for growth\n\nAs a first line of defense against IP exhaustion, size your IPv4 VPC and subnets with growth in mind so that your cluster does not consume all available IP addresses. You will not be able to create new Pods or nodes if the subnets do not have enough available IP addresses. 
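When sizing subnets, it can help to sanity-check usable capacity per netmask. This sketch assumes the usual 5 addresses that AWS reserves in every subnet (network, VPC router, DNS, future use, broadcast):

```bash
# Usable IPv4 addresses in a /N subnet: 2^(32-N) minus the 5 AWS-reserved
# addresses per subnet.
for mask in 19 24 28; do
  echo "/$mask -> usable addresses: $(( (1 << (32 - mask)) - 5 ))"
done
```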
\n\nBefore building your VPC and subnets, it is a good idea to work backwards from the required workload scale. For example, when clusters are built with [eksctl](https:\/\/eksctl.io\/) (a simple CLI tool for creating and managing clusters on EKS), subnets with a \/19 netmask are created by default. A \/19 netmask can accommodate more than 8,000 addresses and is suitable for most workload types.\n\n!!! attention\n    When sizing your VPC and subnets, keep in mind that there may be several other elements that consume IP addresses besides Pods and nodes, such as load balancers, RDS databases, and other in-VPC services.\nAlso, Amazon EKS can create up to 4 elastic network interfaces (X-ENIs) that are required to allow communication with the control plane (see [this document](..\/subnets\/) for details). During a cluster upgrade, Amazon EKS creates new X-ENIs and deletes the old ones once the upgrade succeeds. 
Therefore, we recommend using a netmask of at least \/28 (16 IP addresses) for the subnets associated with your EKS cluster.\n\nYou can use the [sample EKS subnet calculator](..\/subnet-calc\/subnet-calc.xlsx) spreadsheet to plan your network. The spreadsheet calculates IP usage based on your workload and VPC ENI configuration. Compare the IP usage against your IPv4 subnets to check whether the configuration and subnet sizes are sufficient for your workload. If the subnets in your VPC run out of available IP addresses, we recommend [creating new subnets](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/working-with-subnets.html#create-subnets) using the VPC's original CIDR blocks. 
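A hypothetical sketch of carving such a subnet with the AWS CLI; the VPC ID, availability zone, and CIDR range below are all placeholders to adapt to your own addressing plan:

```bash
# Carve an additional /24 subnet out of the VPC's existing CIDR in one AZ
# (all identifiers are placeholders).
aws ec2 create-subnet \
  --vpc-id vpc-0123456789abcdef0 \
  --availability-zone us-west-2a \
  --cidr-block 10.0.32.0/24
```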
Note that you can now [modify cluster subnets and security groups in Amazon EKS](https:\/\/aws.amazon.com\/about-aws\/whats-new\/2023\/10\/amazon-eks-modification-cluster-subnets-security\/).\n\n#### Expand the IP space\n\nIf you have nearly exhausted the RFC1918 IP space, you can use the [custom networking](..\/custom-networking\/) pattern to conserve routable IPs by scheduling Pods inside dedicated additional subnets.\nWhile custom networking accepts any valid VPC range for the additional CIDR, we recommend using CIDRs from the CG-NAT (Carrier-Grade NAT, RFC 6598) space (e.g. `100.64.0.0\/10` or `198.19.0.0\/16`), because these are less likely to be used in a corporate environment than RFC1918 ranges.\n\nFor details, see the [custom networking](..\/custom-networking\/) documentation.\n\n![Custom Networking, traffic flow](.\/custom-networking.gif)\n\n#### Optimize the IP warm pool\n\nWith the default configuration, the VPC CNI keeps entire ENIs (and their associated IPs) in a warm pool. 
This can consume a large number of IPs, especially on larger instance types.\n\nIf the number of available IP addresses in your cluster subnets is limited, take a closer look at these VPC CNI configuration environment variables:\n\n* `WARM_IP_TARGET`\n* `MINIMUM_IP_TARGET`\n* `WARM_ENI_TARGET`\n\nYou can configure the `MINIMUM_IP_TARGET` value to closely match the number of Pods you expect to run on a node. Doing so ensures that as Pods are created, the CNI can assign IP addresses from the warm pool without calling the EC2 API.\n\nPlease note that setting the `WARM_IP_TARGET` value too low will cause additional calls to the EC2 API, which can lead to throttling of those requests. For large clusters, use it together with `MINIMUM_IP_TARGET` to avoid request throttling.\n\nTo configure these options, you can download the `aws-k8s-cni.yaml` manifest and set the environment variables. 
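As an illustration of how these two targets interact, here is a sketch of the sizing rule described in the project's `eni-and-ip-target.md` docs: the CNI aims to keep roughly `max(MINIMUM_IP_TARGET, assigned + WARM_IP_TARGET)` addresses attached to the node. The values below are illustrative only:

```bash
# Illustrative targets: keep at least 10 IPs attached, and 2 spare beyond
# whatever is already assigned to Pods.
WARM_IP_TARGET=2
MINIMUM_IP_TARGET=10

for assigned in 0 8 20; do
  want=$(( assigned + WARM_IP_TARGET ))
  if [ "$want" -lt "$MINIMUM_IP_TARGET" ]; then want=$MINIMUM_IP_TARGET; fi
  echo "assigned Pods=$assigned -> attached IPs=$want"
done
```

On a cluster these would be set as environment variables on the `aws-node` DaemonSet, either in the manifest or with `kubectl set env`.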
At the time of this writing, the latest release is available at [this link](https:\/\/github.com\/aws\/amazon-vpc-cni-k8s\/blob\/master\/config\/master\/aws-k8s-cni.yaml). Check that the version of the configuration values matches the installed VPC CNI version.\n\n!!! warning\n    These settings are reset to defaults when you update the CNI. Take a backup of the CNI configuration before you update it, and review the settings to determine whether you need to reapply them after the update succeeds.\n\nYou can adjust the CNI parameters on the fly, without downtime for your existing applications, but you should choose values that support your scalability needs. For example, if you work with batch workloads, we recommend updating the default `WARM_ENI_TARGET` to match your Pod scale needs. Setting `WARM_ENI_TARGET` to a high value always maintains the warm IP pool needed to run large batch workloads, thereby avoiding data processing delays.\n\n!!! 
warning\n    Improving your VPC design is the recommended response to IP address exhaustion. Consider solutions such as IPv6 and secondary CIDRs. Adjusting these values to minimize the number of warm IPs should be a temporary solution after other options have been ruled out. Configuring these values incorrectly may interfere with cluster operation.\n\n    **Before making any changes to a production system**, be sure to review the considerations on [this page](https:\/\/github.com\/aws\/amazon-vpc-cni-k8s\/blob\/master\/docs\/eni-and-ip-target.md).\n\n#### Monitor IP address inventory\n\nIn addition to the solutions described above, it is also important to have visibility into IP utilization. You can monitor the IP address inventory of your subnets using the [CNI Metrics Helper](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/cni-metrics-helper.html). 
Some of the available metrics include:\n\n* maximum number of ENIs the cluster can support\n* number of ENIs already allocated\n* number of IP addresses currently assigned to Pods\n* total and maximum number of IP addresses available\n\nYou can also set up [CloudWatch alarms](https:\/\/docs.aws.amazon.com\/AmazonCloudWatch\/latest\/monitoring\/AlarmThatSendsEmail.html) to get notified when a subnet is running out of IP addresses. See the EKS user guide for the installation instructions for the [CNI Metrics Helper](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/cni-metrics-helper.html).\n\n!!! warning\n    Make sure the `DISABLE_METRICS` variable of the VPC CNI is set to false.\n\n#### Additional considerations\n\nOther architecture patterns that are not intrinsic to Amazon EKS can also help with IP exhaustion. For example, you can [optimize communication across VPCs](..\/subnets\/#communication-across-vpcs) or [share a VPC across multiple accounts](..\/subnets\/#sharing-vpc-across-multiple-accounts) to limit IPv4 address allocation. 
\n\nLearn more about these patterns here:\n\n* [Designing hyperscale Amazon VPC networks](https:\/\/aws.amazon.com\/blogs\/networking-and-content-delivery\/designing-hyperscale-amazon-vpc-networks\/),\n* [Build secure multi-account multi-VPC connectivity for your applications with Amazon VPC Lattice](https:\/\/aws.amazon.com\/blogs\/networking-and-content-delivery\/build-secure-multi-account-multi-vpc-connectivity-for-your-applications-with-amazon-vpc-lattice\/).","site":"eks"}
{"questions":"eks Elastic Load Balancer Kubernetes search exclude true Kubernetes AWS","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# Preventing errors and timeouts with Kubernetes applications and AWS load balancers\n\nAfter the necessary Kubernetes resources (Services, Deployments, Ingresses, etc.) are created, Pods should be able to receive traffic from clients through an Elastic Load Balancer. However, you may see errors, timeouts, or connection resets when changes are made to the application or to the Kubernetes environment. Such changes can be triggered by application deployments or by scaling actions (manual or automatic).\n\nUnfortunately, these errors can occur even when the application is not logging any problems. This is because the Kubernetes systems that control the resources in your cluster can run faster than the AWS systems that control the load balancer's target registration and health. 
Your Pods may also start receiving traffic before the application is ready to receive requests.\n\nLet's review the process of a Pod becoming Ready and how traffic is routed to Pods.\n\n## Pod Readiness\n\nThis diagram, taken from a [2019 Kubecon talk](https:\/\/www.youtube.com\/watch?v=Vw9GmSeomFg), shows the steps a Pod goes through to become ready and receive traffic for a 'load balancer' service:\n*[Ready? A Deep Dive into Pod Readiness Gates for Service Health... - Minhan Xia & Ping Zou](https:\/\/www.youtube.com\/watch?v=Vw9GmSeomFg)*  \nWhen a Pod that is a member of a NodePort service is created, Kubernetes goes through the following steps:\n\n1. The Pod is created in the Kubernetes control plane (i.e. from a `kubectl` command or a scaling action).\n2. The Pod is scheduled by `kube-scheduler` and assigned to a node in the cluster.\n3. 
The kubelet running on the assigned node receives the update (via a 'watch') and communicates with the local container runtime to start the containers specified for the Pod.\n    1. When the containers are running (and, optionally, passing their `ReadinessProbes`), the kubelet sends an update to the `kube-apiserver` to set the Pod status to `Ready`.\n4. The Endpoint Controller receives the update (via a 'watch') that there is a new `Ready` Pod to add to the service's endpoints list, and adds the Pod IP\/port tuple to the appropriate endpoints array.\n5. `kube-proxy` receives the update (via a `watch`) that there is a new IP\/port to add to the iptables rules for the service.\n    1. The local iptables rules on the worker nodes are updated with the additional target Pod for the NodePort service.\n\n!!! 
\ucc38\uc870\n    \uc778\uadf8\ub808\uc2a4 \ub9ac\uc18c\uc2a4\uc640 \uc778\uadf8\ub808\uc2a4 \ucee8\ud2b8\ub864\ub7ec (\uc608: AWS \ub85c\ub4dc\ubc38\ub7f0\uc11c \ucee8\ud2b8\ub864\ub7ec) \ub97c \uc0ac\uc6a9\ud558\ub294 \uacbd\uc6b0 5\ub2e8\uacc4\ub294 `kube-proxy` \ub300\uc2e0 \uad00\ub828 \ucee8\ud2b8\ub864\ub7ec\uc5d0\uc11c \ucc98\ub9ac\ub429\ub2c8\ub2e4.\uadf8\ub7ec\uba74 \ucee8\ud2b8\ub864\ub7ec\ub294 \ud544\uc694\ud55c \uad6c\uc131 \ub2e8\uacc4 (\uc608: \ub85c\ub4dc\ubc38\ub7f0\uc11c\uc5d0 \ub300\uc0c1 \ub4f1\ub85d\/\ub4f1\ub85d \ucde8\uc18c) \ub97c \uc218\ud589\ud558\uc5ec \ud2b8\ub798\ud53d\uc774 \uc608\uc0c1\ub300\ub85c \ud750\ub974\ub3c4\ub85d \ud569\ub2c8\ub2e4.\n\n[\ud30c\ub4dc\uac00 \uc885\ub8cc\ub418\uac70\ub098](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle\/#pod-termination) \uc900\ube44\ub418\uc9c0 \uc54a\uc740 \uc0c1\ud0dc\ub85c \ubcc0\uacbd\ub418\ub294 \uacbd\uc6b0\uc5d0\ub3c4 \uc720\uc0ac\ud55c \ud504\ub85c\uc138\uc2a4\uac00 \ubc1c\uc0dd\ud569\ub2c8\ub2e4. API \uc11c\ubc84\ub294 \ucee8\ud2b8\ub864\ub7ec, kubelet \ub610\ub294 kubectl \ud074\ub77c\uc774\uc5b8\ud2b8\ub85c\ubd80\ud130 \uc5c5\ub370\uc774\ud2b8\ub97c \uc218\uc2e0\ud558\uc5ec \ud30c\ub4dc\ub97c \uc885\ub8cc\ud569\ub2c8\ub2e4. 3~5\ub2e8\uacc4\ub294 \uac70\uae30\uc11c\ubd80\ud130 \uacc4\uc18d\ub418\uc9c0\ub9cc, \uc5d4\ub4dc\ud3ec\uc778\ud2b8 \ubaa9\ub85d\uacfc iptables \uaddc\uce59\uc5d0\uc11c \ud30c\ub4dc IP\/\ud29c\ud50c\uc744 \uc0bd\uc785\ud558\ub294 \ub300\uc2e0 \uc81c\uac70\ud569\ub2c8\ub2e4.\n\n### \ubc30\ud3ec\uc5d0 \ubbf8\uce58\ub294 \uc601\ud5a5\n\n\ub2e4\uc74c\uc740 \uc560\ud50c\ub9ac\ucf00\uc774\uc158 \ubc30\ud3ec\ub85c \uc778\ud574 \ud30c\ub4dc \uad50\uccb4\uac00 \ud2b8\ub9ac\uac70\ub420 \ub54c \ucde8\ud574\uc9c4 \ub2e8\uacc4\ub97c \ubcf4\uc5ec\uc8fc\ub294 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc785\ub2c8\ub2e4:\n*[Ready? A Deep Dive into Pod Readiness Gates for Service Health... 
- Minhan Xia & Ping Zou](https:\/\/www.youtube.com\/watch?v=Vw9GmSeomFg)*  \n\uc774 \ub2e4\uc774\uc5b4\uadf8\ub7a8\uc5d0\uc11c \uc8fc\ubaa9\ud560 \uc810\uc740 \uccab \ubc88\uc9f8 \ud30c\ub4dc\uac00 \"Ready\" \uc0c1\ud0dc\uc5d0 \ub3c4\ub2ec\ud560 \ub54c\uae4c\uc9c0 \ub450 \ubc88\uc9f8 \ud30c\ub4dc\uac00 \ubc30\ud3ec\ub418\uc9c0 \uc54a\ub294\ub2e4\ub294 \uac83\uc785\ub2c8\ub2e4. \uc774\uc804 \uc139\uc158\uc758 4\ub2e8\uacc4\uc640 5\ub2e8\uacc4\ub3c4 \uc704\uc758 \ubc30\ud3ec \uc791\uc5c5\uacfc \ubcd1\ud589\ud558\uc5ec \uc218\ud589\ub429\ub2c8\ub2e4.\n\n\uc989, \ub514\ud50c\ub85c\uc774\uba3c\ud2b8 \ucee8\ud2b8\ub864\ub7ec\uac00 \ub2e4\uc74c \ud30c\ub4dc\ub85c \ub118\uc5b4\uac08 \ub54c \uc0c8 \ud30c\ub4dc \uc0c1\ud0dc\ub97c \uc804\ud30c\ud558\ub294 \uc561\uc158\uc774 \uc5ec\uc804\ud788 \uc9c4\ud589 \uc911\uc77c \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc774 \ud504\ub85c\uc138\uc2a4\ub294 \uc774\uc804 \ubc84\uc804\uc758 \ud30c\ub4dc\ub3c4 \uc885\ub8cc\ud558\ubbc0\ub85c, \ud30c\ub4dc\uac00 Ready \uc0c1\ud0dc\uc5d0 \ub3c4\ub2ec\ud588\uc9c0\ub9cc \ubcc0\uacbd \uc0ac\ud56d\uc774 \uacc4\uc18d \uc804\ud30c\ub418\uace0 \uc774\uc804 \ubc84\uc804\uc758 \ud30c\ub4dc\uac00 \uc885\ub8cc\ub418\ub294 \uc0c1\ud669\uc774 \ubc1c\uc0dd\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\uc704\uc5d0\uc11c \uc124\uba85\ud55c Kubernetes \uc2dc\uc2a4\ud15c\uc740 \uae30\ubcf8\uc801\uc73c\ub85c \ub85c\ub4dc\ubc38\ub7f0\uc11c\uc758 \ub4f1\ub85d \uc2dc\uac04\uc774\ub098 \uc0c1\ud0dc \ud655\uc778\uc744 \uace0\ub824\ud558\uc9c0 \uc54a\uae30 \ub54c\ubb38\uc5d0 AWS\uc640 \uac19\uc740 \ud074\ub77c\uc6b0\ub4dc \uacf5\uae09\uc790\uc758 \ub85c\ub4dc\ubc38\ub7f0\uc11c\ub97c \uc0ac\uc6a9\ud558\uba74 \uc774 \ubb38\uc81c\uac00 \ub354\uc6b1 \uc545\ud654\ub429\ub2c8\ub2e4. 
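The propagation race described above can also be tempered at the Deployment level. The sketch below (the `webapp` name and the specific values are illustrative, not from this guide) uses `maxUnavailable: 0` so an old replica is only removed after its surge replacement is Ready, and `minReadySeconds` so the controller waits after each pod turns Ready before moving on, buying time for endpoint and load balancer propagation:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  minReadySeconds: 30      # treat a pod as available only 30s after it reports Ready
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # roll one extra replica at a time
      maxUnavailable: 0    # never remove an old replica before its replacement is Ready
  [...]
```

Note that `minReadySeconds` only adds a fixed delay; it does not confirm that the load balancer has actually registered the new target.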
**This means a deployment update could cycle completely through the pods while the load balancer has not yet finished performing health checks on, or registering, the new pods, which can cause an outage.**

A similar problem occurs when pods are terminated. Depending on the load balancer configuration, deregistering a pod and stopping it from receiving new requests can take a minute or two. **Kubernetes does not delay rolling deployments for this deregistration, which can leave the load balancer sending traffic to the IP/port of a target pod that has already been terminated.**

To avoid these problems, we can add configuration so that the Kubernetes system takes actions that better align with AWS load balancer behavior.

## Recommendations

### Use IP target-type load balancers

When creating a service of type `LoadBalancer`, registering with the **instance target type** sends traffic from the load balancer to *every node in the cluster*. Each node then redirects traffic arriving on the `NodePort` to a pod IP/port tuple in the service's endpoints array. That target can be running on a separate worker node.

!!! note
    Remember that the array should only contain "Ready" pods.

This adds a hop to each request and complicates the load balancer configuration. For example, if the load balancer above is configured with session affinity, the affinity may only be maintained between the load balancer and the backend node (depending on the affinity configuration).

Because the load balancer does not communicate directly with the backend pods, it is harder to control traffic flow and timing through the Kubernetes system.

When using the [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller), you can instead use the **IP target type** to register the pod IP/port tuples directly with the load balancer.

This simplifies the traffic path from the load balancer to the target pod. It means that when a new target is registered you can be sure the target is a "Ready" pod IP and port, the load balancer's health checks reach the pods directly, and traffic between the load balancer and the pods is easy to trace when reviewing VPC flow logs or monitoring utilities.

IP registration also lets you control the timing and configuration of traffic to the backend pods directly, rather than managing connections through `NodePort` rules.

### Utilize pod readiness gates

A [pod readiness gate](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate) is an additional requirement that must be met before a pod can reach the "Ready" state.

>[[...] the AWS Load Balancer controller can set the readiness condition on the pods that constitute your ingress or service backend. The condition status on a pod will be set to `True` only when the corresponding target in the ALB/NLB target group shows a health state of "Healthy". This prevents the rolling update of a deployment from terminating old pods until the newly created pods are "Healthy" in the ALB/NLB target group and ready to take traffic.](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/deploy/pod_readiness_gate/)

Readiness gates keep Kubernetes from moving "too fast" when creating new replicas during a deployment, and prevent the situation where Kubernetes has finished the deployment but the new pods have not yet finished registering.

To enable this you need to:

1. Deploy a recent version of the [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller) (**[*see the documentation if you are upgrading from an older version*](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/deploy/upgrade/migrate_v1_v2/)**)
2. [Label the namespace where the target pods run](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/deploy/pod_readiness_gate/) with `elbv2.k8s.aws/pod-readiness-gate-inject: enabled` to automatically inject the pod readiness gates.
3. To ensure that every pod in the namespace gets the readiness gate configuration, label the namespace ***before*** creating the Ingress or Service and creating the pods.

### Ensure pods are deregistered from load balancers *before* termination

When a pod is terminated, steps 4 and 5 from the pod readiness section occur at the same time as the container processes receive the termination signals. This means that if your container is able to shut down quickly, it may shut down faster than the load balancer is able to deregister the target. To avoid this situation, adjust the pod spec to:

1. Add a `preStop` lifecycle hook to allow the application to deregister and gracefully close connections.
This hook is called immediately before the container is terminated due to an API request or a management event (e.g., a failed liveness/startup probe, preemption, resource contention, and others). Importantly, [this hook is called, and allowed to complete, **before** the termination signal is sent](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination), provided the grace period is long enough to accommodate its execution.

```
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "sleep 180"]
```
A simple sleep command like the one above can be used to introduce a short delay between the moment the pod is marked `Terminating` (which is when load balancer deregistration begins) and the moment the termination signal is sent to the container process. If needed, this hook can also be used for more advanced application shutdown procedures.

2. Extend `terminationGracePeriodSeconds` to accommodate the full `preStop` execution time plus the time your application needs to respond gracefully to the termination signal. In the example below, the grace period is extended to 200 seconds, which allows the full `sleep 180` command to complete and leaves an extra 20 seconds for the app to shut down gracefully.

```
    spec:
      terminationGracePeriodSeconds: 200
      containers:
      - name: webapp
        image: webapp-st:v1.3
        [...]
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "sleep 180"]
```

### Ensure pods have readiness probes

When creating a pod in Kubernetes, the default readiness state is "Ready", but most applications take a minute or two to instantiate and become ready to receive requests.
[You can define a readiness probe in the pod spec](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/); it uses an exec command or a network request to verify that the application has finished starting up and is ready to handle traffic.

Pods defined with a readiness probe start in the "NotReady" state and change to "Ready" only when the readiness probe succeeds. This ensures the application is not put "in service" until it has finished starting up.

Liveness probes are useful for restarting an application that has entered a broken state (such as a deadlock), but because a liveness failure triggers a restart of the application, they should be used with caution for stateful applications.
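For applications with a long warm-up, a startup probe can hold liveness and readiness checks off until the application has actually started, avoiding restart loops during boot. A minimal sketch (the port and thresholds are illustrative, not from this guide); while the startup probe has not yet succeeded, the other probes are not run:

```
        startupProbe:
          httpGet:
            path: /
            port: 80
          failureThreshold: 30   # allow up to 30 * 10s = 300s of startup time
          periodSeconds: 10
```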
A [startup probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes) can also be used for applications that are slow to start.

The probes below use an HTTP probe against port 80 to check when the web application is ready (the same probe configuration is also used for the liveness probe):

```
        [...]
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          failureThreshold: 1
          periodSeconds: 10
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 80
          periodSeconds: 5
        [...]
```

### Set a Pod Disruption Budget (PDB)

A [Pod Disruption Budget (PDB)](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) limits the number of pods of a replicated application that are down simultaneously from [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions). For example, a quorum-based application may want to ensure that the number of running replicas never drops below the number needed for a quorum.
A web front end might want to ensure that the number of replicas serving load never falls below a certain percentage of the total.

PDBs protect the application from actions such as node draining or application deployments, keeping a minimum number or percentage of pods available while those actions are taken.

!!! caution
    PDBs do not protect the application from involuntary disruptions such as a host OS failure or loss of network connectivity.

The example below ensures that at least one pod with the label `app: echoserver` is always available.
[You can configure the correct replica count for your application, or use a percentage](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#think-about-how-your-application-reacts-to-disruptions):

```
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: echoserver-pdb
  namespace: echoserver
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: echoserver
```

### Gracefully handle termination signals

When a pod is terminated, the application running inside its containers receives two [signals](https://www.gnu.org/software/libc/manual/html_node/Standard-Signals.html). The first is the [`SIGTERM` signal](https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html), a "polite" request for the process to stop running. Because this signal can be blocked, or the application can simply ignore it, the application receives the [`SIGKILL` signal](https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html) once `terminationGracePeriodSeconds` has elapsed.
`SIGKILL` is used to forcibly stop a process; it [cannot be blocked, handled, or ignored](https://man7.org/linux/man-pages/man7/signal.7.html) and is therefore always fatal.

These signals are how the container runtime triggers application shutdown. Note also that the `SIGTERM` signal is only sent after the `preStop` hook has run. With the configuration above, the `preStop` hook ensures that the pod has been deregistered from the load balancer, so the application can gracefully close any remaining open connections when the `SIGTERM` signal is received.

!!! note
    [Signal handling in containerized environments can be tricky when a "wrapper script" is used as the application's entrypoint.](https://petermalmgren.com/signal-handling-docker/) The script runs as PID 1 and may not forward signals to your application.

### Be careful with the deregistration delay

Elastic Load Balancing stops sending requests to targets that are deregistering. By default, Elastic Load Balancing waits 300 seconds before completing the deregistration process, which can help in-flight requests to the target complete. To change how long Elastic Load Balancing waits, update the deregistration delay value.

The initial state of a deregistering target is `draining`. After the deregistration delay elapses, the deregistration process completes and the target's state becomes `unused`.
If the target is part of an Auto Scaling group, it can then be terminated and replaced.

If a deregistering target has no in-flight requests and no active connections, Elastic Load Balancing completes the deregistration process immediately, without waiting for the deregistration delay to elapse.

!!! caution
    Even though target deregistration is complete, the target's status is displayed as `draining` until the deregistration delay timeout expires. After the timeout expires, the target transitions to the `unused` state.

[If a deregistering target terminates the connection before the deregistration delay elapses, the client receives a 500-level error response.](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#deregistration-delay)

This can be configured on the Ingress resource using the [`alb.ingress.kubernetes.io/target-group-attributes` annotation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/#target-group-attributes).
Here is an example:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver-ip
  namespace: echoserver
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/load-balancer-name: echoserver-ip
    alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30
spec:
  ingressClassName: alb
  rules:
    - host: echoserver.example.com
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: echoserver-service
                port:
                  number: 8080
```
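The same target group attribute can also be set when a Service is exposed directly through an NLB managed by the AWS Load Balancer Controller, using a Service-side annotation instead of the Ingress one. A sketch, assuming the controller's `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes` annotation and the `echoserver` names used in the examples above:

```
apiVersion: v1
kind: Service
metadata:
  name: echoserver-service
  namespace: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=30
spec:
  type: LoadBalancer
  selector:
    app: echoserver
  ports:
    - port: 8080
      targetPort: 8080
```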
                         80     HTTP                                                                                           ports            containerPort  80         livenessProbe            httpGet              path                port  80           failureThreshold  1           periodSeconds  10           initialDelaySeconds  5         readinessProbe            httpGet              path                port  80           periodSeconds  5                                  PDB                 PDB   https   kubernetes io docs concepts workloads pods disruptions  pod disruption budgets                         https   kubernetes io docs concepts workloads pods disruptions  voluntary and involuntary disruptions                                                                                                                                                                           PDB                                                PDB                                                        caution     PDB      OS                                                              app  echoserver                 1                                                                     https   kubernetes io docs tasks run application configure pdb  think about how your application reacts to disruptions        apiVersion  policy v1beta1 kind  PodDisruptionBudget metadata    name  echoserver pdb   namespace  echoserver spec    minAvailable  1   selector      matchLabels        app  echoserver                                                                Signal  https   www gnu org software libc manual html node Standard Signals html                  SIGTERM      https   www gnu org software libc manual html node Termination Signals html                                                                                    terminationGracePeriodSeconds                  SIGKILL      https   www gnu org software libc manual html node Termination Signals html            SIGKILL                
                         https   man7 org linux man pages man7 signal 7 html                                                                      SIGTERM       preStop                           preStop                                           SIGTERM                                                                                                                       https   petermalmgren com signal handling docker         PID 1                                                     Elastic Load Balancing                                     Elastic Load Balancing                      300                                                 Elastic Load Balancing                                                         draining                                               unused            Auto Scaling                                                                      Elastic Load Balancing                                                       caution                                                                                                                                                      500                 https   docs aws amazon com elasticloadbalancing latest application load balancer target groups html deregistration delay         alb ingress kubernetes io target group attributes         https   kubernetes sigs github io aws load balancer controller v2 4 guide ingress annotations  target group attributes                                                          apiVersion  networking k8s io v1 kind  Ingress metadata    name  echoserver ip   namespace  echoserver   annotations      alb ingress kubernetes io scheme  internet facing     alb ingress kubernetes io target type  ip     alb ingress kubernetes io load balancer name  echoserver ip     alb ingress kubernetes io target group attributes  deregistration delay timeout seconds 30 spec    ingressClassName  alb   rules        host  echoserver example com       http          paths              path              
  pathType  Exact             backend                service                  name  echoserver service                 port                    number  8080    "}
{"questions":"eks exclude true search Amazon VPC CNI IP ENI CIDR","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# Custom Networking\n\nBy default, Amazon VPC CNI assigns pods an IP address selected from the primary subnet. The primary subnet is the subnet CIDR that the primary ENI is attached to, usually the subnet of the node\/host.\n\nIf the subnet CIDR is too small, the CNI may not be able to acquire enough secondary IP addresses to assign to pods. This is a common challenge for EKS IPv4 clusters.\n\nCustom networking is one solution to this problem.\n\nCustom networking addresses the IP exhaustion issue by assigning node and pod IPs from secondary VPC address spaces (CIDRs). Custom networking support requires the ENIConfig custom resource. The ENIConfig includes an alternate subnet CIDR range (carved from a secondary VPC CIDR), along with the security groups the pods will belong to. 
When custom networking is enabled, the VPC CNI creates secondary ENIs in the subnet defined in the ENIConfig. The CNI assigns pods IP addresses from the CIDR range defined in the ENIConfig CRD.\n\nSince the primary ENI is not used in custom networking, the maximum number of pods you can run on a node is lower. Host-network pods continue to use the IP address assigned to the primary ENI. The primary ENI is also used to handle source network translation and to route pod traffic out of the node.\n\n## Example Configuration\n\nWhile custom networking accepts any valid VPC range for the secondary CIDR, we recommend you use CIDRs (\/16) from the CG-NAT space (e.g., 100.64.0.0\/10 or 198.19.0.0\/16), as those are less likely to be used in a corporate setting than other RFC1918 ranges. 
For more information about the permitted and restricted CIDR block associations you can use with your VPC, see [IPv4 CIDR block association restrictions](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/configure-your-vpc.html#add-cidr-block-restrictions) in the VPC and subnet sizing section of the VPC documentation.\n\nAs you can see in the diagram below, the primary elastic network interface ([ENI](https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/using-eni.html)) of the worker node still uses the primary VPC CIDR range (in this case 10.0.0.0\/16), but the secondary ENIs use the secondary VPC CIDR range (in this case 100.64.0.0\/16). Now, to have pods use the 100.64.0.0\/16 CIDR range, you must configure the CNI plugin to use custom networking. 
You can follow the steps documented [here](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/cni-custom-network.html).\n\n![illustration of pods on secondary subnet](.\/image.png)\n\nTo have the CNI use custom networking, set the `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG` environment variable to `true`.\n\n```\nkubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true\n```\n\n\nWhen `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true`, the CNI assigns pod IP addresses from the subnet defined in an `ENIConfig`. The `ENIConfig` custom resource is used to define the subnet in which pods will be scheduled.\n\n```\napiVersion: crd.k8s.amazonaws.com\/v1alpha1\nkind: ENIConfig\nmetadata:\n  name: us-west-2a\nspec:\n  securityGroups:\n    - sg-0dff111a1d11c1c11\n  subnet: subnet-011b111c1f11fdf11\n```\n\nUpon creating the `ENIConfig` custom resources, you will need to create new worker nodes and drain the existing nodes. The existing worker nodes and pods will remain unaffected. \n\n\n## Recommendations\n\n### Use Custom Networking When\n\nWe recommend you consider custom networking if you are dealing with IPv4 exhaustion and cannot use IPv6 yet. 
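A cluster that spans multiple Availability Zones needs one `ENIConfig` per AZ, each pointing at a subnet in that zone. As a sketch, a second resource for `us-west-2b` might look like the following; the security group and subnet IDs here are placeholders:\n\n```\napiVersion: crd.k8s.amazonaws.com\/v1alpha1\nkind: ENIConfig\nmetadata:\n  name: us-west-2b\nspec:\n  securityGroups:\n    - sg-0dff111a1d11c1c11\n  subnet: subnet-0c4678ec01ce68b24\n```\n\nNaming each resource after its Availability Zone also lets the zone-label-based automatic configuration described later pick the right `ENIConfig` for each node. 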
Amazon EKS support for the [RFC6598](https:\/\/datatracker.ietf.org\/doc\/html\/rfc6598) space gives you the ability to scale pods beyond [RFC1918](https:\/\/datatracker.ietf.org\/doc\/html\/rfc1918) address exhaustion challenges. Consider using prefix delegation together with custom networking to increase the pod density of your nodes.\n\nYou can consider custom networking if you have a security requirement to run pods on a different network with different security group requirements. When custom networking is enabled, the pods use a different subnet or security groups from the node's primary network interface, as defined in the ENIConfig.\n\nCustom networking is an ideal option for deploying multiple EKS clusters and applications to connect on-premises datacenter services. 
You can increase the number of private addresses (RFC1918) accessible to EKS in your VPC for services such as Amazon Elastic Load Balancing and NAT-GW, while using non-routable CG-NAT space for pods across multiple clusters. Custom networking with a [transit gateway](https:\/\/aws.amazon.com\/transit-gateway\/) and a shared services VPC (including NAT gateways spanning multiple Availability Zones for high availability) lets you deliver scalable and predictable traffic flows. This [blog post](https:\/\/aws.amazon.com\/blogs\/containers\/eks-vpc-routable-ip-address-conservation\/) describes an architectural pattern that is one of the most recommended ways to connect EKS pods to a datacenter network using custom networking.\n\n### Avoid Custom Networking When\n\n#### You Are Ready to Implement IPv6\n\nCustom networking can mitigate IP exhaustion issues, but it requires additional operational overhead. 
If you are currently deploying a dual-stack (IPv4\/IPv6) VPC, or if your plan includes IPv6 support, we recommend implementing IPv6 clusters instead. You can set up IPv6 EKS clusters and migrate your apps. In an IPv6 EKS cluster, both Kubernetes and pods get IPv6 addresses and can send and receive traffic to and from both IPv4 and IPv6 endpoints. Please review the best practices for [Running IPv6 EKS Clusters](..\/ipv6\/index.md).\n\n#### Exhausted CG-NAT Space\n\nIf you are already using CIDRs from the CG-NAT space, or if you are unable to associate a secondary CIDR with your cluster VPC, you may need to explore other options, such as using an alternative CNI. We strongly recommend that you either obtain commercial support or possess the in-house knowledge to debug and submit patches to the open source CNI plugin project. 
Refer to the [Alternate CNI Plugins](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/alternate-cni-plugins.html) user guide for more details.\n\n#### Use a Private NAT Gateway\n\nAmazon VPC now offers [private NAT gateway](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/vpc-nat-gateway.html) capabilities. Amazon's private NAT gateway enables instances in private subnets to connect to other VPCs and on-premises networks with overlapping CIDRs. Consider utilizing the method described in this [blog post](https:\/\/aws.amazon.com\/blogs\/containers\/addressing-ipv4-address-exhaustion-in-amazon-eks-clusters-using-private-nat-gateways\/) to employ a private NAT gateway to resolve communication issues for EKS workloads caused by overlapping CIDRs, a significant complaint raised by customers. 
Custom networking alone cannot address overlapping CIDR difficulties, and it adds to configuration challenges.\n\nThe network architecture used in the blog post's implementation follows the recommendations under [Enable communication between overlapping networks](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/nat-gateway-scenarios.html#private-nat-overlapping-networks) in the Amazon VPC documentation. As demonstrated in the blog post, you can extend the use of private NAT gateways in conjunction with RFC6598 addresses to manage your private IP exhaustion issues. The EKS cluster and worker nodes are deployed in the non-routable 100.64.0.0\/16 secondary VPC CIDR range, while the private NAT gateway and NAT gateway are deployed in routable RFC1918 CIDR ranges. The blog explains how a transit gateway is used to connect VPCs in order to facilitate communication across VPCs with overlapping non-routable CIDR ranges. 
For use cases in which EKS resources in the non-routable address range of a VPC need to communicate with other VPCs that do not have overlapping address ranges, you also have the option of using VPC peering to interconnect those VPCs. This method can provide cost savings, as all data transfer within an Availability Zone via a VPC peering connection is now free.\n\n![illustration of network traffic using private NAT gateway](.\/image-3.png)\n\n#### Unique Networks for Nodes and Pods\n\nIf you need to isolate your nodes and pods in a specific network for security reasons, we recommend that you deploy nodes and pods to a subnet from a larger secondary CIDR block (e.g., 100.64.0.0\/8). After installing the new CIDR in your VPC, you can deploy another node group using the secondary CIDR and drain the original nodes to automatically redeploy the pods to the new worker nodes. 
For more details on how to implement this, see this [blog](https:\/\/aws.amazon.com\/blogs\/containers\/optimize-ip-addresses-usage-by-pods-in-your-amazon-eks-cluster\/) post.\n\nCustom networking is not used in the setup shown in the diagram below. Rather, the Kubernetes worker nodes are deployed on subnets from your VPC's secondary VPC CIDR range (e.g., 100.64.0.0\/10). You can keep the EKS cluster running (the control plane remains on the original subnets), but the nodes and pods are moved to the secondary subnets. 
This is yet another, albeit unconventional, technique to mitigate the risk of IP exhaustion in a VPC. We suggest draining the old nodes before the pods are redeployed to the new worker nodes.\n\n![illustration of worker nodes on secondary subnet](.\/image-2.png)\n\n### Automate Configuration with Availability Zone Labels\n\nYou can enable Kubernetes to automatically apply the ENIConfig corresponding to the worker node's Availability Zone (AZ).\n\nKubernetes automatically adds the label `topology.kubernetes.io\/zone` to your worker nodes. Amazon EKS recommends using the Availability Zone as your ENIConfig name when you have only one secondary subnet (alternate CIDR) per AZ. Note that the label `failure-domain.beta.kubernetes.io\/zone` is deprecated and has been replaced by `topology.kubernetes.io\/zone`.\n\n1. Set the `name` field to the Availability Zone of your VPC.\n2. 
Enable automatic configuration with the following command:\n\n```\nkubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io\/zone\n```\n\nIf you have multiple secondary subnets per Availability Zone, you need to set a specific `ENI_CONFIG_LABEL_DEF`. You might consider configuring `ENI_CONFIG_LABEL_DEF` as `k8s.amazonaws.com\/eniConfig` and labeling nodes with custom ENIConfig names such as `k8s.amazonaws.com\/eniConfig=us-west-2a-subnet-1` and `k8s.amazonaws.com\/eniConfig=us-west-2a-subnet-2`.\n\n### Replace Pods when Configuring Secondary Networking\n\nEnabling custom networking does not modify existing nodes. Custom networking is a disruptive action. 
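For the multiple-subnets-per-AZ case, the two steps (pointing the CNI at a custom label, then labeling each node) might look like the following sketch; the node name below is a placeholder:\n\n```\nkubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=k8s.amazonaws.com\/eniConfig\nkubectl label node ip-10-0-1-23.us-west-2.compute.internal k8s.amazonaws.com\/eniConfig=us-west-2a-subnet-1\n```\n\nIn practice you would typically apply such labels through the node group's kubelet `--node-labels` configuration rather than labeling nodes by hand. 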
Rather than doing a rolling replacement of all the worker nodes in your cluster after enabling custom networking, we suggest updating the AWS CloudFormation template in the [EKS Getting Started Guide](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/getting-started.html) with a custom resource that calls a Lambda function to update the `aws-node` daemonset with the environment variable before the worker nodes are provisioned.\n\nIf there were any nodes in your cluster with running pods before you switched to the custom CNI networking feature, you should cordon and [drain](https:\/\/aws.amazon.com\/premiumsupport\/knowledge-center\/eks-worker-node-actions\/) the nodes to gracefully shut down the pods, and then terminate the nodes. 
Only new nodes matching the ENIConfig label or annotation use custom networking, so only pods scheduled on these new nodes can be assigned an IP from the secondary CIDR.\n\n### Calculate Max Pods per Node\n\nSince the node's primary ENI is no longer used to assign pod IP addresses, there is a decrease in the number of pods you can run on a given EC2 instance type. To work around this limitation, you can use prefix assignment together with custom networking. 
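The max-pods arithmetic under custom networking can be captured in a small helper. This is a sketch of the calculation only; the per-instance figures (for example, 3 ENIs with 10 IPs each for m5.large) come from the EC2 instance type's networking limits:

```shell
# Max pods with custom networking: the primary ENI is not used for pods,
# and one address on each remaining ENI is the ENI's own primary IP.
# "+ 2" accounts for host-network pods such as aws-node and kube-proxy.
max_pods_custom() {
  enis=$1
  ips_per_eni=$2
  echo $(( (enis - 1) * (ips_per_eni - 1) + 2 ))
}

# With prefix assignment, each secondary IP slot holds a /28 prefix,
# i.e. 16 addresses instead of 1.
max_pods_custom_prefix() {
  enis=$1
  ips_per_eni=$2
  echo $(( (enis - 1) * (ips_per_eni - 1) * 16 + 2 ))
}

# m5.large: 3 ENIs, 10 IPs per ENI
max_pods_custom 3 10           # prints 20
max_pods_custom_prefix 3 10    # prints 290
```

The same caveat as in the text applies: the computed ceiling can exceed what the instance's CPU and memory can actually support, so the kubelet's max-pods setting should be capped accordingly.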
With prefix assignment, each secondary IP is replaced with a \/28 prefix on the secondary ENIs.\n\nConsider the maximum number of pods for an m5.large instance with custom networking.\n\nThe maximum number of pods you can run without prefix assignment is 20:\n\n* ((3 ENIs - 1) * (10 secondary IPs per ENI - 1)) + 2 = 20\n\nEnabling prefix attachments increases the number of pods to 290:\n\n* ((3 ENIs - 1) * ((10 secondary IPs per ENI - 1) * 16)) + 2 = 290\n\nHowever, we suggest setting max-pods to 110 rather than 290, because the instance has a fairly small number of virtual CPUs. On larger instances, EKS recommends a max-pods value of 250. When using prefix attachments with smaller instance types (e.g., m5.large), it is possible that you will exhaust the instance's CPU and memory resources well before its IP addresses.\n\n!!! info\n    When the CNI allocates \/28 prefixes to an ENI, they have to be contiguous blocks of IP addresses. If the subnet that the prefixes are generated from is highly fragmented, the prefix attachment may fail. 
You can avoid this issue by creating a new dedicated VPC for the cluster, or by reserving a set of CIDRs in the subnet exclusively for prefix attachments. See [Subnet CIDR reservations](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/subnet-cidr-reservation.html) for more details on this topic.\n\n### Identify Existing Usage of CG-NAT Space\n\nCustom networking allows you to mitigate IP exhaustion issues, but it cannot solve all the challenges. If you are already using CG-NAT space for your cluster, or simply cannot associate a secondary CIDR with your cluster VPC, we suggest you explore other options, such as using an alternative CNI or moving to IPv6 clusters.","site":"eks","answers_cleaned":"
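The max-pods arithmetic from the prefix delegation discussion above can be sanity-checked with a quick shell sketch; the ENI and IP counts for m5.large are taken from the text, and the `(enis - 1)` term reflects the primary ENI not being used for pods under custom networking:

```shell
# Sketch only: verify the m5.large max-pods math from the text.
# +2 accounts for the host-network pods on each node.
enis=3            # maximum ENIs on m5.large
ips_per_eni=10    # IPs per ENI (one is the ENI's own primary address)

without_prefix=$(( (enis - 1) * (ips_per_eni - 1) + 2 ))
with_prefix=$(( (enis - 1) * (ips_per_eni - 1) * 16 + 2 ))   # each /28 prefix holds 16 IPs

echo "without prefixes: $without_prefix, with prefixes: $with_prefix"
# -> without prefixes: 20, with prefixes: 290
```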
---
search:
  exclude: true
---

# Security Groups for Pods (SGP)

An AWS security group acts as a virtual firewall for EC2 instances, controlling inbound and outbound traffic. By default, the Amazon VPC CNI uses the security groups associated with the node's ENI. As a result, every pod shares the same security groups as the node it runs on.

As seen in the image below, all application pods running on a worker node can access the RDS database service (assuming RDS inbound allows the node's security group). Because the security groups apply to all pods running on the node, security rules are applied at a coarse, group-wide level.
The security groups for pods feature lets you define security rules per pod, which helps you build a fine-grained security strategy.

![illustration of node with security group connecting to RDS](./image.png)

With security groups for pods, you can improve compute efficiency by running applications with varying network security requirements on shared compute resources. Multiple types of security rules, such as pod-to-pod and pod-to-external AWS services, can be defined in a single place with EC2 security groups and applied to workloads using Kubernetes-native APIs. The image below shows security groups applied at the pod level and how they simplify your application deployment and node architecture.
The pod can now access the Amazon RDS database.

![illustration of pod and node with different security groups connecting to RDS](./image-2.png)

You can enable security groups for pods by setting `ENABLE_POD_ENI=true` for the VPC CNI. Once enabled, the "[VPC Resource Controller](https://github.com/aws/amazon-vpc-resource-controller-k8s)" running on the EKS control plane creates and attaches a trunk interface called "aws-k8s-trunk-eni" to the node. The trunk interface acts as a standard network interface attached to the instance. To manage trunk interfaces, you must add the `AmazonEKSVPCResourceController` managed policy to the cluster role that goes with your Amazon EKS cluster.

The controller also creates branch interfaces named "aws-k8s-branch-eni" and associates them with the trunk interface.
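For reference, enabling the feature on the VPC CNI is a one-line change; this is the command shape documented in the EKS user guide, to be run against your own cluster:

```
kubectl set env daemonset aws-node -n kube-system ENABLE_POD_ENI=true
```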
Pods are assigned a security group using the [SecurityGroupPolicy](https://github.com/aws/amazon-vpc-resource-controller-k8s/blob/master/config/crd/bases/vpcresources.k8s.aws_securitygrouppolicies.yaml) custom resource and are associated with a branch interface. Since security groups are specified per network interface, you can now schedule pods that require specific security groups on these additional network interfaces. You can find more details, including deployment prerequisites, in the [EKS User Guide section on security groups for pods](https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html).

![illustration of worker subnet with security groups associated with ENIs](./image-3.png)

Branch interface capacity is *additive* to the existing instance type limits for secondary IP addresses.
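A minimal SecurityGroupPolicy might look like the following sketch; the name, namespace, labels, and security group ID are placeholders:

```yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: my-security-group-policy   # placeholder name
  namespace: my-namespace          # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: my-app                  # pods matching this label get the branch ENI
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0       # placeholder security group ID
```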
Pods that use security groups are not accounted for in the max-pods formula. When you use security groups for pods, you should either consider raising the max-pods value or be comfortable running fewer pods than the node can actually support.

An m5.large can have up to 9 branch network interfaces and up to 27 secondary IP addresses assigned to its standard network interfaces. As shown in the example below, the default max-pods for an m5.large is 29, and EKS counts pods that use security groups toward that maximum. See the [EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html) for instructions on how to change a node's max-pods.

When security groups for pods are used in combination with [custom networking](https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html), the security group defined in the security group for pods is used rather than the one specified in the ENIConfig.
As a result, when custom networking is enabled, carefully review the security group ordering while using per-pod security groups.

## Recommendations

### Disable TCP Early Demux for liveness probes

If you are using liveness or readiness probes, you must disable TCP early demux so that the kubelet can reach pods on branch network interfaces via TCP. This is only required in strict mode. To do this, run the following command:

```
kubectl edit daemonset aws-node -n kube-system
```

In the `initContainer` section, change the value of `DISABLE_TCP_EARLY_DEMUX` to `true`.

### Use security groups for pods to leverage existing AWS configuration

Security groups make it easier to restrict network access to VPC resources, such as RDS databases or EC2 instances.
One clear advantage of security groups for pods is the ability to reuse existing AWS security group resources.
If you are using security groups as a network firewall to limit access to your AWS services, we suggest applying security groups to pods using branch ENIs. Consider security groups for pods if you are moving applications from EC2 instances to EKS and currently restrict access to other AWS services with security groups.

### Configure the security groups for pods enforcing mode

Amazon VPC CNI plugin version 1.11 added a new setting named `POD_SECURITY_GROUP_ENFORCING_MODE` ("enforcing mode"). The enforcing mode controls both which security groups apply to the pod and whether source NAT is enabled. You may specify the enforcing mode as strict or standard. Strict is the default and reflects the previous behavior of the VPC CNI with `ENABLE_POD_ENI` set to `true`.

In strict mode, only the branch ENI security groups are enforced.
Source NAT is also disabled.

In standard mode, the security groups associated with both the primary ENI and the branch ENI (associated with the pod) are applied. Network traffic must comply with both security groups.

!!! warning
    Any mode change only affects newly launched pods. Existing pods use the mode that was configured when the pod was created. Customers must recycle existing pods with security groups if they want to change the traffic behavior.

### Enforcing mode: use strict mode to isolate pod and node traffic

By default, security groups for pods is set to "strict mode". Use this setting when you need to completely separate pod traffic from the rest of the node's traffic. In strict mode, source NAT is turned off so that the branch ENI's outbound security groups can be used.

!!! warning
    Any mode change only affects newly launched pods.
Existing pods use the mode that was configured when the pod was created. Customers must recycle existing pods with security groups if they want to change the traffic behavior.

### Enforcing mode: use standard mode in the following situations

**Client source IP visible to the containers in the pod**

If you need to keep the client source IP visible to the containers in the pod, consider setting `POD_SECURITY_GROUP_ENFORCING_MODE` to `standard`. Kubernetes services support `externalTrafficPolicy=Local` to preserve the client source IP (the default type is Cluster). In standard mode, you can run Kubernetes services of type NodePort and LoadBalancer using instance targets with externalTrafficPolicy set to Local.
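As a sketch, preserving the client source IP in standard mode comes down to setting `externalTrafficPolicy: Local` on the Service; the name, selector, and ports below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                     # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # keep the client source IP; route only to local endpoints
  selector:
    app: my-app                    # placeholder label
  ports:
    - port: 80
      targetPort: 8080
```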
`Local` preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services.

**Deploying NodeLocal DNSCache**

When using security groups for pods, configure standard mode to support pods that use [NodeLocal DNSCache](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/). NodeLocal DNSCache improves cluster DNS performance by running a DNS caching agent on cluster nodes as a DaemonSet. This helps the pods that have the highest DNS QPS requirements query a local kube-dns/CoreDNS with a local cache, improving latency.

NodeLocal DNSCache is not supported in strict mode, as all network traffic to the node enters the VPC.

**Kubernetes network policy support**

We recommend using standard enforcing mode when using network policies with pods that have associated security groups.

We strongly recommend leveraging security groups for pods to restrict network-level access to AWS services that are not part of the cluster. Consider network policies to restrict network traffic between pods inside the cluster (often referred to as East/West traffic).

### Identify incompatibilities with security groups per pod

Windows-based and non-Nitro instances do not support security groups for pods. To use security groups with pods, the instance must be tagged with isTrunkingEnabled. Use network policies rather than security groups to manage access between pods when your pods do not depend on AWS services within or outside of your VPC.

### Use security groups per pod to efficiently control traffic to AWS services

If an application running within the EKS cluster has to communicate with another resource within the VPC, e.g. an RDS database, then consider using SGs for pods.
There are policy engines that let you specify CIDRs or DNS names, but they are a less optimal choice when communicating with AWS services that have endpoints inside the VPC.

In contrast, Kubernetes [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) provide a mechanism for controlling both ingress and egress traffic, within and outside the cluster. Consider Kubernetes network policies if your application has limited dependencies on other AWS services. As opposed to AWS-native semantics like SGs, you can configure network policies that specify egress rules based on CIDR ranges to limit access to AWS services. You can use Kubernetes network policies to control network traffic between pods (often referred to as East/West traffic) and between pods and external services.
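For instance, an egress rule scoped to a CIDR range can be expressed as the following sketch; the policy name, label, and CIDR are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-vpc-cidr   # placeholder name
spec:
  podSelector:
    matchLabels:
      app: my-app                  # placeholder label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/16      # placeholder: CIDR range of the target service
```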
Kubernetes network policies operate at OSI layers 3 and 4.

With Amazon EKS, you can use network policy engines such as [Calico](https://projectcalico.docs.tigera.io/getting-started/kubernetes/managed-public-cloud/eks) and [Cilium](https://docs.cilium.io/en/stable/intro/). By default, no network policy engine is installed; please check the respective installation guide for instructions on how to set one up. For more information on how to use network policies, see the [EKS security best practices](https://aws.github.io/aws-eks-best-practices/security/docs/network/#network-policy). The DNS hostname feature is available in the enterprise versions of network policy engines, which can be useful for controlling traffic between Kubernetes services/pods and resources that run outside of AWS.
You can also consider DNS hostname support for AWS services that do not support security groups by default.

### Tag a single security group for use with the AWS Load Balancer Controller

When many security groups are assigned to a pod, Amazon EKS recommends tagging a single security group with `kubernetes.io/cluster/$name` shared or owned. The tag allows the AWS Load Balancer Controller to update the security group's rules to route traffic to the pods. If only a single security group is given to the pod, the tag assignment is optional. Permissions set in a security group are additive, so tagging a single security group is sufficient for the load balancer controller to find and reconcile the rules.
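As a sketch, the tag can be applied with the AWS CLI; the security group ID and cluster name below are placeholders:

```
aws ec2 create-tags \
  --resources sg-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared
```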
\ub610\ud55c \ubcf4\uc548 \uadf8\ub8f9\uc5d0\uc11c \uc815\uc758\ud55c [\uae30\ubcf8 \ud560\ub2f9\ub7c9](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/amazon-vpc-limits.html#vpc-limits-security-groups)\uc744 \uc900\uc218\ud558\ub294 \ub370 \ub3c4\uc6c0\uc774 \ub429\ub2c8\ub2e4.\n\n### \uc544\uc6c3\ubc14\uc6b4\ub4dc \ud2b8\ub798\ud53d\uc5d0 \ub300\ud55c NAT \uad6c\uc131\n\n\uc18c\uc2a4 NAT\ub294 \ubcf4\uc548 \uadf8\ub8f9\uc774 \ud560\ub2f9\ub41c \ud30c\ub4dc\uc758 \uc544\uc6c3\ubc14\uc6b4\ub4dc \ud2b8\ub798\ud53d\uc5d0 \ub300\ud574 \ube44\ud65c\uc131\ud654\ub429\ub2c8\ub2e4. \uc778\ud130\ub137 \uc561\uc138\uc2a4\uac00 \ud544\uc694\ud55c \ubcf4\uc548 \uadf8\ub8f9\uc744 \uc0ac\uc6a9\ud558\ub294 \ud30c\ub4dc\uc758 \uacbd\uc6b0 NAT \uac8c\uc774\ud2b8\uc6e8\uc774 \ub610\ub294 \uc778\uc2a4\ud134\uc2a4\ub85c \uad6c\uc131\ub41c \ud504\ub77c\uc774\ube57 \uc11c\ube0c\ub137\uc5d0\uc11c \uc6cc\ucee4 \ub178\ub4dc\ub97c \uc2dc\uc791\ud558\uace0 CNI\uc5d0\uc11c [\uc678\ubd80 SNAT](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/external-snat.html)\ub97c \ud65c\uc131\ud654\ud569\ub2c8\ub2e4.\n\n```\nkubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=true\n```\n\n### \ubcf4\uc548 \uadf8\ub8f9\uc774 \uc788\ub294 \ud30c\ub4dc\ub97c \ud504\ub77c\uc774\ube57 \uc11c\ube0c\ub137\uc5d0 \ubc30\ud3ec\n\n\ubcf4\uc548 \uadf8\ub8f9\uc774 \ud560\ub2f9\ub41c \ud30c\ub4dc\ub294 \ud504\ub77c\uc774\ube57 \uc11c\ube0c\ub137\uc5d0 \ubc30\ud3ec\ub41c \ub178\ub4dc\uc5d0\uc11c \uc2e4\ud589\ub418\uc5b4\uc57c \ud569\ub2c8\ub2e4. 
\ub2e8 \ud37c\ube14\ub9ad \uc11c\ube0c\ub137\uc5d0 \ubc30\ud3ec\ub41c \ubcf4\uc548 \uadf8\ub8f9\uc774 \ud560\ub2f9\ub41c \ud30c\ub4dc\ub294 \uc778\ud130\ub137\uc5d0 \uc561\uc138\uc2a4\ud560 \uc218 \uc5c6\uc2b5\ub2c8\ub2e4.\n\n### \ud30c\ub4dc \uc2a4\ud399\uc5d0\uc11c *terminationGracePeriodSeconds* \ubd80\ubd84 \ud655\uc778\n\n\ud30c\ub4dc \uc0ac\uc591 \ud30c\uc77c\uc5d0\uc11c 'terminationGracePeriodSeconds'\uac00 0\uc774 \uc544\ub2cc\uc9c0 \ud655\uc778\ud558\uc138\uc694. (\uae30\ubcf8\uac12 30\ucd08) \uc774\ub294 Amazon VPC CNI\uac00 \uc6cc\ucee4 \ub178\ub4dc\uc5d0\uc11c \ud30c\ub4dc \ub124\ud2b8\uc6cc\ud06c\ub97c \uc0ad\uc81c\ud558\ub294 \ub370 \ud544\uc218\uc801\uc785\ub2c8\ub2e4. 0\uc73c\ub85c \uc124\uc815\ud558\uba74 CNI \ud50c\ub7ec\uadf8\uc778\uc774 \ud638\uc2a4\ud2b8\uc5d0\uc11c \ud30c\ub4dc \ub124\ud2b8\uc6cc\ud06c\ub97c \uc81c\uac70\ud558\uc9c0 \uc54a\uc73c\uba70 \ubd84\uae30 ENI\uac00 \ud6a8\uacfc\uc801\uc73c\ub85c \uc815\ub9ac\ub418\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\n\n### Fargate\ub97c \uc774\uc6a9\ud558\ub294 \ud30c\ub4dc\uc6a9 \ubcf4\uc548 \uadf8\ub8f9 \uc0ac\uc6a9\n\nFargate\uc5d0\uc11c \uc2e4\ud589\ub418\ub294 \ud30c\ub4dc\uc758 \ubcf4\uc548 \uadf8\ub8f9\uc740 EC2 \uc6cc\ucee4 \ub178\ub4dc\uc5d0\uc11c \uc2e4\ud589\ub418\ub294 \ud30c\ub4dc\uc640 \ub9e4\uc6b0 \uc720\uc0ac\ud558\uac8c \uc791\ub3d9\ud55c\ub2e4. 
\uc608\ub97c \ub4e4\uc5b4 Fargate \ud30c\ub4dc\uc5d0 \uc5f0\uacb0\ud558\ub294 \ubcf4\uc548 \uadf8\ub8f9 \uc815\ucc45\uc5d0\uc11c \ubcf4\uc548 \uadf8\ub8f9\uc744 \ucc38\uc870\ud558\uae30 \uc804\uc5d0 \uba3c\uc800 \ubcf4\uc548 \uadf8\ub8f9\uc744 \uc0dd\uc131\ud574\uc57c \ud569\ub2c8\ub2e4.\uae30\ubcf8\uc801\uc73c\ub85c \ubcf4\uc548 \uadf8\ub8f9 \uc815\ucc45\uc744 Fargate \ud30c\ub4dc\uc5d0 \uba85\uc2dc\uc801\uc73c\ub85c \ud560\ub2f9\ud558\uc9c0 \uc54a\uc73c\uba74 [\ud074\ub7ec\uc2a4\ud130 \ubcf4\uc548 \uadf8\ub8f9](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/sec-group-reqs.html)\uc774 \ubaa8\ub4e0 Fargate \ud30c\ub4dc\uc5d0 \ud560\ub2f9\ub429\ub2c8\ub2e4. \ub2e8\uc21c\ud654\ub97c \uc704\ud574 Fagate Pod\uc758 SecurityGroupPolicy\uc5d0 \ud074\ub7ec\uc2a4\ud130 \ubcf4\uc548 \uadf8\ub8f9\uc744 \ucd94\uac00\ud560 \uc218\ub3c4 \uc788\uc2b5\ub2c8\ub2e4. \uadf8\ub807\uc9c0 \uc54a\uc73c\uba74 \ubcf4\uc548 \uadf8\ub8f9\uc5d0 \ucd5c\uc18c \ubcf4\uc548 \uadf8\ub8f9 \uaddc\uce59\uc744 \ucd94\uac00\ud574\uc57c \ud569\ub2c8\ub2e4. \uc124\uba85 \ud074\ub7ec\uc2a4\ud130 API\ub97c \uc0ac\uc6a9\ud558\uc5ec \ud074\ub7ec\uc2a4\ud130 \ubcf4\uc548 \uadf8\ub8f9\uc744 \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n```bash\n aws eks describe-cluster --name CLUSTER_NAME --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId'\n```\n\n```bash\ncat >my-fargate-sg-policy.yaml <<EOF\napiVersion: vpcresources.k8s.aws\/v1beta1\nkind: SecurityGroupPolicy\nmetadata:\n  name: my-fargate-sg-policy\n  namespace: my-fargate-namespace\nspec:\n  podSelector: \n    matchLabels:\n      role: my-fargate-role\n  securityGroups:\n    groupIds:\n      - cluster_security_group_id\n      - my_fargate_pod_security_group_id\nEOF\n```\n\n\ucd5c\uc18c \ubcf4\uc548 \uadf8\ub8f9 \uaddc\uce59\uc740 [\uc5ec\uae30](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/sec-group-reqs.html)\uc5d0 \ub098\uc640 \uc788\uc2b5\ub2c8\ub2e4. 
\uc774\ub7f0 \uaddc\uce59\uc744 \ud1b5\ud574 Fargate \ud30c\ub4dc\ub294 kube-apiserver, kubelet, CoreDNS\uc640 \uac19\uc740 \ud074\ub7ec\uc2a4\ud130 \ub0b4 \uc11c\ube44\uc2a4\uc640 \ud1b5\uc2e0\ud560 \uc218 \uc788\ub2e4. \ub610\ud55c Fargate \ud30c\ub4dc\uc640\uc758 \uc778\ubc14\uc6b4\ub4dc \ubc0f \uc544\uc6c3\ubc14\uc6b4\ub4dc \uc5f0\uacb0\uc744 \ud5c8\uc6a9\ud558\ub294 \uaddc\uce59\uc744 \ucd94\uac00\ud574\uc57c \ud569\ub2c8\ub2e4. \uc774\ub807\uac8c \ud558\uba74 \ud30c\ub4dc\uac00 VPC\uc758 \ub2e4\ub978 \ud30c\ub4dc\ub098 \ub9ac\uc18c\uc2a4\uc640 \ud1b5\uc2e0\ud560 \uc218 \uc788\uac8c \ub41c\ub2e4. \ub610\ud55c Fargate\uac00 Amazon ECR \ub610\ub294 DockerHub\uc640 \uac19\uc740 \ub2e4\ub978 \ucee8\ud14c\uc774\ub108 \ub808\uc9c0\uc2a4\ud2b8\ub9ac\uc5d0\uc11c \ucee8\ud14c\uc774\ub108 \uc774\ubbf8\uc9c0\ub97c \uac00\uc838\uc624\ub3c4\ub85d \ud558\ub294 \uaddc\uce59\uc744 \ud3ec\ud568\ud574\uc57c \ud569\ub2c8\ub2e4. \uc790\uc138\ud55c \ub0b4\uc6a9\uc740 [AWS \uc77c\ubc18 \ucc38\uc870](https:\/\/docs.aws.amazon.com\/general\/latest\/gr\/aws-ip-ranges.html)\uc758 AWS IP \uc8fc\uc18c \ubc94\uc704\ub97c \ucc38\uc870\ud558\uc2ed\uc2dc\uc624.\n\n\uc544\ub798 \uba85\ub839\uc744 \uc0ac\uc6a9\ud558\uc5ec Fargate Pod\uc5d0 \uc801\uc6a9\ub41c \ubcf4\uc548 \uadf8\ub8f9\uc744 \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n```bash\nkubectl get pod FARGATE_POD -o jsonpath='{.metadata.annotations.vpc\\.amazonaws\\.com\/pod-eni}{\"\\n\"}'\n```\n\n\uc704 \uba85\ub839\uc758 ENI ID\ub97c \uc801\uc5b4 \ub461\ub2c8\ub2e4. \n\n```bash\naws ec2 describe-network-interfaces --network-interface-ids ENI_ID --query 'NetworkInterfaces[*].Groups[*]'\n```\n\n\uc0c8 \ubcf4\uc548 \uadf8\ub8f9\uc744 \uc801\uc6a9\ud558\ub824\uba74 \uae30\uc874 Fargate \ud30c\ub4dc\ub97c \uc0ad\uc81c\ud558\uace0 \ub2e4\uc2dc \ub9cc\ub4e4\uc5b4\uc57c \ud569\ub2c8\ub2e4. \uc608\ub97c \ub4e4\uc5b4 \ub2e4\uc74c \uba85\ub839\uc740 example-app \ubc30\ud3ec\ub97c \uc2dc\uc791\ud569\ub2c8\ub2e4. 
To update a specific deployment, change the namespace and deployment name in the command below:

```bash
kubectl rollout restart -n example-ns deployment example-pod
```
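For reference, the non-zero `terminationGracePeriodSeconds` recommendation from earlier in this guide can be illustrated with a minimal pod spec (the name and image here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sg-pod-example          # hypothetical
spec:
  # Must be non-zero (default: 30) so the VPC CNI can remove the pod
  # network and clean up the branch ENI when the pod is deleted.
  terminationGracePeriodSeconds: 30
  containers:
    - name: app
      image: public.ecr.aws/docker/library/nginx:latest   # illustrative image
```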
---
search:
  exclude: true
---

# Monitoring EKS workloads for network performance issues

## Monitoring CoreDNS traffic for DNS throttling issues

Running DNS-intensive workloads can sometimes cause intermittent CoreDNS failures due to DNS throttling, which can impact applications with occasional, unexplained HostException errors.

The CoreDNS deployment has an anti-affinity policy that instructs the Kubernetes scheduler to run CoreDNS instances on separate worker nodes in the cluster, i.e., it should avoid co-locating replicas on the same worker node. This effectively reduces the number of DNS queries per network interface, because each replica's traffic is routed through a different ENI.
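The anti-affinity policy referred to above looks roughly like the following fragment of the CoreDNS deployment (a sketch; the manifest EKS ships may differ in detail):

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: k8s-app
                operator: In
                values: ["kube-dns"]
          # Spread replicas across nodes so each one uses a different ENI
          topologyKey: kubernetes.io/hostname
```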
If you notice DNS queries being throttled because of the 1024 packets-per-second limit, you can 1) increase the number of CoreDNS replicas or 2) implement [NodeLocal DNSCache](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/). For more information, see [Monitor CoreDNS metrics](https://aws.github.io/aws-eks-best-practices/reliability/docs/networkmanagement/#monitor-coredns-metrics).

### Challenges
* Packet drops happen within a span of seconds, and it can be tricky to monitor these patterns properly to determine whether DNS throttling is actually occurring.
* DNS queries are throttled at the elastic network interface level, so throttled queries do not appear in query logging.
* Flow logs do not capture all IP traffic. For example, traffic generated when instances contact the Amazon DNS server is not captured; if you use your own DNS server, all traffic to that DNS server is logged.

### Solution
An easy way to identify DNS throttling issues on worker nodes is to capture the `linklocal_allowance_exceeded` metric. [linklocal_allowance_exceeded](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/metrics-collected-by-CloudWatch-agent.html#linux-metrics-enabled-by-CloudWatch-agent) is the number of packets dropped because the PPS of traffic to local proxy services exceeded the maximum for the network interface. This affects traffic to the DNS service, the instance metadata service, and the Amazon Time Sync service.
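To spot-check a single node by hand, the counter can be parsed out of `ethtool -S <interface>` output; a minimal sketch (the helper name and sample values are illustrative):

```python
import re

def read_allowance_metric(ethtool_output: str, metric: str = "linklocal_allowance_exceeded"):
    """Extract a counter from `ethtool -S <iface>` output; None if the NIC doesn't expose it."""
    m = re.search(rf"^\s*{re.escape(metric)}:\s*(\d+)\s*$", ethtool_output, re.MULTILINE)
    return int(m.group(1)) if m else None

# Example with output captured from `ethtool -S eth0` (illustrative values):
sample = """NIC statistics:
     bw_in_allowance_exceeded: 0
     bw_out_allowance_exceeded: 0
     pps_allowance_exceeded: 12
     linklocal_allowance_exceeded: 3
"""
print(read_allowance_metric(sample))                            # 3
print(read_allowance_metric(sample, "pps_allowance_exceeded"))  # 12
```

A persistently growing `linklocal_allowance_exceeded` counter is the signal that link-local traffic (DNS included) is being throttled on that node.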
Rather than tracking this event in real time, you can also stream this metric to [Amazon Managed Service for Prometheus](https://aws.amazon.com/prometheus/) and visualize it in [Amazon Managed Grafana](https://aws.amazon.com/grafana/).

## Monitoring DNS query delays with conntrack metrics

Another pair of metrics that can help you monitor CoreDNS throttling/query delays is `conntrack_allowance_available` and `conntrack_allowance_exceeded`.
Connection failures caused by exceeding the connection-tracking allowance can have a larger impact than failures caused by exceeding other allowances. When relying on TCP to transfer data, packets that are queued or dropped because an EC2 instance network allowance such as bandwidth or PPS was exceeded are typically handled gracefully thanks to TCP's congestion-control capabilities: impacted flows slow down, and lost packets are retransmitted. However, when an instance exceeds its tracked-connections allowance, it cannot establish any new connections until some existing ones are closed to make room for them.

`conntrack_allowance_available` and `conntrack_allowance_exceeded` help customers monitor the connection-tracking allowance, which varies by instance. These network performance metrics give customers visibility into the number of packets queued or dropped when an instance's networking allowances are exceeded, such as network bandwidth, packets per second (PPS), tracked connections, and link-local service access (Amazon DNS, instance metadata service, Amazon Time Sync).

`conntrack_allowance_available` is the number of tracked connections that the instance can still establish before reaching the tracked-connections allowance of its instance type (supported only on Nitro-based instances).
`conntrack_allowance_exceeded` is the number of packets dropped because connection tracking on the instance exceeded its maximum and new connections could not be established.

## Other important network performance metrics

Other important network performance metrics include the following.

`bw_in_allowance_exceeded` (the ideal value of this metric is 0) is the number of packets queued and/or dropped because the inbound aggregate bandwidth exceeded the maximum for the instance.

`bw_out_allowance_exceeded` (the ideal value of this metric is 0) is the number of packets queued and/or dropped because the outbound aggregate bandwidth exceeded the maximum for the instance.

`pps_allowance_exceeded` (the ideal value of this metric is 0) is the number of packets queued and/or dropped because the bidirectional PPS exceeded the maximum for the instance.

## Capturing metrics to monitor workloads for network performance issues

The Elastic Network Adapter (ENA) driver publishes the network performance metrics described above from instances where they are enabled. All of these metrics can be published to CloudWatch using the CloudWatch agent. See the [blog post](https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-ec2-instance-level-network-performance-metrics-uncover-new-insights/) for more information.

Let's now capture the metrics described above, store them in Amazon Managed Service for Prometheus, and visualize them using Amazon Managed Grafana.

### Prerequisites
* ethtool - verify that ethtool is installed on the worker nodes.
* An AMP workspace configured in your AWS account. For instructions, see [Create a workspace](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-onboard-create-workspace.html) in the AMP User Guide.
* An Amazon Managed Grafana workspace.

### Deploying the Prometheus ethtool exporter

The deployment contains a Python script that pulls information from ethtool and publishes it in Prometheus format:

```
kubectl apply -f https://raw.githubusercontent.com/Showmax/prometheus-ethtool-exporter/master/deploy/k8s-daemonset.yaml
```

### Deploy the ADOT collector to scrape the ethtool metrics and store them in an Amazon Managed Service for Prometheus workspace

Each cluster where you install the AWS Distro for OpenTelemetry (ADOT) must have an IAM role that grants the service account permission to store metrics in Amazon Managed Service for Prometheus. Follow these steps to create an IAM role with IRSA and attach it to your Amazon EKS service account:

```
eksctl create iamserviceaccount --name adot-collector --namespace default --cluster <CLUSTER_NAME> --attach-policy-arn arn:aws:iam::aws:policy/AmazonPrometheusRemoteWriteAccess --attach-policy-arn arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy --region <REGION> --approve --override-existing-serviceaccounts
```

Let's deploy the ADOT collector to scrape the metrics from the Prometheus ethtool exporter and store them in Amazon Managed Service for Prometheus.

The following procedure uses an example YAML file with `deployment` as the mode value. This is the default mode and deploys the ADOT Collector similarly to a standalone application.
This configuration receives OTLP metrics from the sample application and Amazon Managed Service for Prometheus metrics scraped from pods on the cluster.

```
curl -o collector-config-amp.yaml https://raw.githubusercontent.com/aws-observability/aws-otel-community/master/sample-configs/operator/collector-config-amp.yaml
```

In collector-config-amp.yaml, replace the following with your own values:
* mode: deployment
* serviceAccount: adot-collector
* endpoint: "<YOUR_REMOTE_WRITE_ENDPOINT>"
* region: "<YOUR_AWS_REGION>"
* name: adot-collector

```
kubectl apply -f collector-config-amp.yaml
```

Once the ADOT collector is deployed, the metrics will be stored successfully in Amazon Managed Service for Prometheus.

### Configure alert manager in Amazon Managed Service for Prometheus to send notifications

Let's configure recording rules and alerting rules to check the metrics described so far.
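The placeholder substitution described above can also be scripted rather than edited by hand; a minimal `sed` sketch on a stand-in fragment (the endpoint, workspace ID, and file name are illustrative, not real):

```shell
# Create a stand-in fragment with the same placeholders as the sample config.
cat > collector-config-demo.yaml <<'EOF'
mode: deployment
serviceAccount: adot-collector
endpoint: "<YOUR_REMOTE_WRITE_ENDPOINT>"
region: "<YOUR_AWS_REGION>"
EOF
# Substitute the values non-interactively.
sed -i \
  -e 's|<YOUR_REMOTE_WRITE_ENDPOINT>|https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write|' \
  -e 's|<YOUR_AWS_REGION>|us-east-1|' \
  collector-config-demo.yaml
```

The same two `sed` expressions can be pointed at the downloaded collector-config-amp.yaml once you have your real remote-write endpoint and region.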
We will use the [ACK controller for Amazon Managed Service for Prometheus](https://github.com/aws-controllers-k8s/prometheusservice-controller) to provision the alerting and recording rules.

Let's deploy the ACK controller for the Amazon Managed Service for Prometheus service:

```
export SERVICE=prometheusservice
export RELEASE_VERSION=`curl -sL https://api.github.com/repos/aws-controllers-k8s/$SERVICE-controller/releases/latest | grep '"tag_name":' | cut -d'"' -f4`
export ACK_SYSTEM_NAMESPACE=ack-system
export AWS_REGION=us-east-1
aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws
helm install --create-namespace -n $ACK_SYSTEM_NAMESPACE ack-$SERVICE-controller \
oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart --version=$RELEASE_VERSION --set=aws.region=$AWS_REGION
```

After you run the command, you should see the following message within a few minutes:

```
You are now able to create Amazon Managed Service for Prometheus (AMP) resources!

The controller is running in "cluster" mode.

The controller is configured to manage AWS resources in region: "us-east-1"

The ACK controller has been successfully installed and ACK can now be used to provision an Amazon Managed Service for Prometheus workspace.
```

Now let's create the YAML files for provisioning the alert manager definition and the rule groups. Save the file below as `rulegroup.yaml`:

```
apiVersion: prometheusservice.services.k8s.aws/v1alpha1
kind: RuleGroupsNamespace
metadata:
   name: default-rule
spec:
   workspaceID: <Your WORKSPACE-ID>
   name: default-rule
   configuration: |
     groups:
     - name: ppsallowance
       rules:
       - record: metric:pps_allowance_exceeded
         expr: rate(node_net_ethtool{device="eth0",type="pps_allowance_exceeded"}[30s])
       - alert: PPSAllowanceExceeded
         expr: rate(node_net_ethtool{device="eth0",type="pps_allowance_exceeded"}[30s]) > 0
         labels:
           severity: critical
         annotations:
           summary: Connections dropped due to total allowance exceeding for the (instance )
           description: "PPSAllowanceExceeded is greater than 0"
     - name: bw_in
       rules:
       - record: metric:bw_in_allowance_exceeded
         expr: rate(node_net_ethtool{device="eth0",type="bw_in_allowance_exceeded"}[30s])
       - alert: BWINAllowanceExceeded
         expr: rate(node_net_ethtool{device="eth0",type="bw_in_allowance_exceeded"}[30s]) > 0
         labels:
           severity: critical
         annotations:
           summary: Connections dropped due to total allowance exceeding for the (instance )
           description: "BWInAllowanceExceeded is greater than 0"
     - name: bw_out
       rules:
       - record: metric:bw_out_allowance_exceeded
         expr: rate(node_net_ethtool{device="eth0",type="bw_out_allowance_exceeded"}[30s])
       - alert: BWOutAllowanceExceeded
         expr: rate(node_net_ethtool{device="eth0",type="bw_out_allowance_exceeded"}[30s]) > 0
         labels:
           severity: critical
         annotations:
           summary: Connections dropped due to total allowance exceeding for the (instance )
           description: "BWOutAllowanceExceeded is greater than 0"
     - name: conntrack
       rules:
       - record: metric:conntrack_allowance_exceeded
         expr: rate(node_net_ethtool{device="eth0",type="conntrack_allowance_exceeded"}[30s])
       - alert: ConntrackAllowanceExceeded
         expr: rate(node_net_ethtool{device="eth0",type="conntrack_allowance_exceeded"}[30s]) > 0
         labels:
           severity: critical
         annotations:
           summary: Connections dropped due to total allowance exceeding for the (instance )
           description: "ConntrackAllowanceExceeded is greater than 0"
     - name: linklocal
       rules:
       - record: metric:linklocal_allowance_exceeded
         expr: rate(node_net_ethtool{device="eth0",type="linklocal_allowance_exceeded"}[30s])
       - alert: LinkLocalAllowanceExceeded
         expr: rate(node_net_ethtool{device="eth0",type="linklocal_allowance_exceeded"}[30s]) > 0
         labels:
           severity: critical
         annotations:
           summary: Packets dropped due to PPS rate allowance exceeded for local services (instance )
           description: "LinkLocalAllowanceExceeded is greater than 0"
```

Replace WORKSPACE-ID with the workspace ID of the workspace you are using.
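The placeholder substitution can also be scripted before applying the manifest. A minimal sketch, assuming the manifest above was saved as `rulegroup.yaml`; `ws-0123456789` is a made-up example ID (yours is shown in the Amazon Managed Service for Prometheus console, or by `aws amp list-workspaces`):

```
WORKSPACE_ID="ws-0123456789"
# Apply the substitution to the manifest and create the resource, e.g.:
#   sed "s/<Your WORKSPACE-ID>/${WORKSPACE_ID}/" rulegroup.yaml | kubectl apply -f -
# Shown here on just the line that changes:
printf '   workspaceID: <Your WORKSPACE-ID>\n' | sed "s/<Your WORKSPACE-ID>/${WORKSPACE_ID}/"
```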
Now let's configure the alert manager definition. Save the file below as `alertmanager.yaml`:

```
apiVersion: prometheusservice.services.k8s.aws/v1alpha1
kind: AlertManagerDefinition
metadata:
  name: alert-manager
spec:
  workspaceID: <Your WORKSPACE-ID>
  configuration: |
    alertmanager_config: |
      route:
        receiver: default_receiver
      receivers:
      - name: default_receiver
        sns_configs:
        - topic_arn: TOPIC-ARN
          sigv4:
            region: REGION
          message: |
            alert_type:
            event_type:
```

Replace WORKSPACE-ID with the workspace ID of your new workspace, TOPIC-ARN with the ARN of the [Amazon Simple Notification Service](https://aws.amazon.com/sns/) topic you want to send alerts to, and REGION with the current region of the workload.
Make sure your workspace has permissions to send messages to Amazon SNS.

### Visualize ethtool metrics in Amazon Managed Grafana

Let's visualize the metrics and build a dashboard within Amazon Managed Grafana. Configure Amazon Managed Service for Prometheus as a data source within the Amazon Managed Grafana console. For instructions, see [Add Amazon Prometheus as a data source](https://docs.aws.amazon.com/grafana/latest/userguide/AMP-adding-AWS-config.html).

Now let's explore the metrics in Amazon Managed Grafana: click the explore button and search for ethtool.

![Node_ethtool metrics](./explore_metrics.png)

Let's build a dashboard for the linklocal_allowance_exceeded metric by using the query `rate(node_net_ethtool{device="eth0",type="linklocal_allowance_exceeded"}[30s])`. It will result in the dashboard below.

![linklocal_allowance_exceeded dashboard](./linklocal.png)

We can clearly see that there were no packets dropped, as the value is zero.

Let's build a dashboard for the conntrack_allowance_exceeded metric by using the query `rate(node_net_ethtool{device="eth0",type="conntrack_allowance_exceeded"}[30s])`. It will result in the dashboard below.

![conntrack_allowance_exceeded dashboard](./conntrack.png)

If you run the CloudWatch agent as described [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-network-performance.html), you can also visualize the `conntrack_allowance_exceeded` metric in CloudWatch. The resulting dashboard in CloudWatch looks like this:

![CW_NW_Performance](./cw_metrics.png)

We can clearly see that there were no packets dropped, as the value is zero. If you are using Nitro-based instances, you can create a similar dashboard for `conntrack_allowance_available` and proactively monitor the connections in your EC2 instances. You can further extend this by configuring alerts in Amazon Managed Grafana to send notifications to Slack, SNS, PagerDuty, and so on.
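As an aside on the queries used above: PromQL's `rate(counter[30s])` is roughly the increase of the counter over the window divided by the window length (the real implementation also extrapolates to the window boundaries and handles counter resets). A back-of-the-envelope sketch with made-up sample values:

```
# Counter went from 120 to 210 dropped packets over a 30-second window:
awk 'BEGIN { print (210 - 120) / 30 }'   # 3 packets/second
```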
---
search:
  exclude: true
---

# Running IPv6 EKS Clusters

<iframe width="560" height="315" src="https://www.youtube.com/embed/zdXpTT0bZXo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

EKS in IPv6 mode solves the IPv4 exhaustion challenge often manifested in large-scale EKS clusters. EKS support for IPv6 is focused on resolving the IPv4 exhaustion problem, which stems from the limited size of the IPv4 address space.
This is a significant concern raised by a number of customers, and it is distinct from Kubernetes' "[IPv4/IPv6 dual-stack](https://kubernetes.io/docs/concepts/services-networking/dual-stack/)" feature.

EKS/IPv6 also provides the flexibility to interconnect network boundaries with IPv6 CIDRs, minimizing the chances of CIDR overlap and thereby solving a two-fold problem (in-cluster and cross-cluster).

In an IPv6 EKS cluster, pods and services receive IPv6 addresses while maintaining compatibility with legacy IPv4 endpoints. This includes the ability of external IPv4 endpoints to access in-cluster services, and the ability of pods to access external IPv4 endpoints.

Amazon EKS IPv6 support leverages the native VPC IPv6 capabilities. Each VPC is allocated an IPv4 address prefix (CIDR block sizes can be from /16 to /28) and a unique, fixed /56 IPv6 address prefix within Amazon's GUA (Global Unicast Address) space; you can assign a /64 address prefix to each subnet in your VPC.
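The /56-per-VPC and /64-per-subnet split leaves 64 - 56 = 8 bits of subnet numbering, which bounds how many IPv6 subnets a single VPC can carve out; a quick sanity check:

```
# Number of /64 subnets that fit in a VPC's /56 IPv6 prefix: 2^(64-56)
echo $((1 << (64 - 56)))   # 256
```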
IPv4 features such as route tables, network access control lists, peering, and DNS resolution work the same way in an IPv6-enabled VPC. Such a VPC is referred to as a dual-stack VPC. Following the dual-stack subnets, the diagram below depicts the IPv4&IPv6 VPC foundation pattern that supports EKS/IPv6-based clusters:

![Dual Stack VPC, mandatory foundation for EKS cluster in IPv6 mode](./eks-ipv6-foundation.png)

In the IPv6 world, every address is internet-routable. By default, the VPC allocates its IPv6 CIDR from the public GUA range. VPCs do not support assigning a private IPv6 address from the [Unique Local Address (ULA)](https://en.wikipedia.org/wiki/Unique_local_address) range defined in RFC 4193 (fd00::/8 or fc00::/8). This is true even when you would like to assign an IPv6 CIDR that you own.
Egressing from a private subnet to the internet is supported by implementing an egress-only internet gateway (EIGW) in the VPC, which allows outbound traffic while blocking all incoming traffic.
The following diagram depicts a pod IPv6 internet egress flow inside an EKS/IPv6 cluster:

![Dual Stack VPC, EKS Cluster in IPv6 Mode, Pods in private subnets egressing to Internet IPv6 endpoints](./eks-egress-ipv6.png)

Best practices for implementing IPv6 subnets can be found in the [VPC user guide](https://docs.aws.amazon.com/whitepapers/latest/ipv6-on-aws/IPv6-on-AWS.html).

In an IPv6 EKS cluster, nodes and pods get public IPv6 addresses. EKS assigns IPv6 addresses to services based on Unique Local IPv6 Unicast Addresses (ULA). The ULA service CIDR for an IPv6 cluster is automatically assigned during the cluster creation stage and, unlike IPv4, cannot be specified.
The following diagram depicts the EKS/IPv6-based cluster control plane and data plane foundation pattern:

![Dual Stack VPC, EKS Cluster in IPv6 Mode, control plane ULA, data plane IPv6 GUA for EC2 & Pods](./eks-cluster-ipv6-foundation.png)

## Overview

EKS/IPv6 is only supported in prefix mode (the VPC-CNI plugin ENI IP assignment mode). Learn more about [prefix mode](https://aws.github.io/aws-eks-best-practices/networking/prefix-mode/index_linux/).

> Prefix assignment only works on Nitro-based EC2 instances, hence EKS/IPv6 is only supported when the cluster data plane uses EC2 Nitro-based instances.
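For orientation, the IP family is chosen at cluster creation and cannot be changed afterwards. A sketch of the relevant part of an eksctl cluster config, assuming eksctl's `ClusterConfig` schema (the cluster name and region are placeholders; IPv6 clusters in eksctl also expect OIDC and the managed core add-ons to be declared):

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-ipv6-cluster   # placeholder
  region: us-east-1       # placeholder
kubernetesNetworkConfig:
  ipFamily: IPv6          # requests an EKS/IPv6 cluster
addons:
  - name: vpc-cni
  - name: coredns
  - name: kube-proxy
iam:
  withOIDC: true
```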
In simple terms, an IPv6 prefix of /80 (per worker node) yields on the order of 10^14 IPv6 addresses; the limiting factor is no longer the number of IPs but pod density (resource-wise).

IPv6 prefix assignment occurs only at EKS worker node bootstrap.
This behavior is known to mitigate scenarios in which pod scheduling in high pod-churn EKS/IPv4 clusters is often delayed by throttled API calls generated by the VPC CNI plugin (ipamd), which aims to allocate private IPv4 addresses in a timely fashion. It is also known to make tuning the VPC-CNI plugin advanced knobs [WARM_IP/ENI*, MINIMUM_IP*](https://github.com/aws/amazon-vpc-cni-k8s#warm_ip_target) unnecessary.

The following diagram zooms into an IPv6 worker node ENI (Elastic Network Interface):

![illustration of worker subnet, including primary ENI with multiple IPv6 Addresses](./image-2.png)

Every EKS worker node is assigned IPv4 and IPv6 addresses, along with corresponding DNS entries.
For a given worker node, only a single IPv4 address from the dual-stack subnet is consumed. EKS support for IPv6 enables communication with IPv4 endpoints (AWS, on-premises, internet) through a unique egress-only IPv4 model. EKS implements a host-local CNI plugin, secondary to the VPC CNI plugin, that allocates and configures IPv4 addresses for pods. The CNI plugin assigns pods a host-specific, non-routable IPv4 address from the 169.254.172.0/22 range. The IPv4 address assigned to a pod is *unique to the worker node* and is *not advertised beyond the worker node*.
169.254.172.0/22 provides up to 1024 unique IPv4 addresses, enough to support even the largest instance types.

The following diagram shows the flow of an IPv6 pod connecting to an IPv4 endpoint outside the cluster boundary (non-internet):

![EKS/IPv6, IPv4 egress-only flow](./eks-ipv4-snat-cni.png)

In the diagram above, the pod performs a DNS lookup for the endpoint and, upon receiving an IPv4 "A" answer, the pod's node-only unique IPv4 address is translated through source network address translation (SNAT) to the private IPv4 (VPC) address of the primary network interface attached to the EC2 worker node.

EKS/IPv6 pods also need to connect to IPv4 endpoints over the internet using public IPv4 addresses, and a similar flow exists for that.
The following diagram shows the flow of an IPv6 pod connecting to an IPv4 endpoint outside the cluster boundary (internet-routable):

![EKS/IPv6, IPv4 Internet egress-only flow](./eks-ipv4-snat-cni-internet.png)

In the diagram above, the pod performs a DNS lookup for the endpoint and, upon receiving an IPv4 "A" answer, the pod's node-only unique IPv4 address is translated through source network address translation (SNAT) to the private IPv4 (VPC) address of the primary network interface attached to the EC2 worker node. The pod's IPv4 address (source IPv4: EC2 primary IP) is then routed to the IPv4 NAT gateway, where the EC2 primary IP is translated to a valid, internet-routable public IPv4 address (the public IP assigned to the NAT gateway).

All pod-to-pod communication across nodes always uses IPv6 addresses. The VPC CNI configures iptables to handle IPv6 while blocking any IPv4 connections.

Kubernetes services are assigned only IPv6 addresses (ClusterIPs) from a [unique local IPv6 unicast address (ULA)](https://datatracker.ietf.org/doc/html/rfc4193) range. The ULA service CIDR for an IPv6 cluster is assigned automatically during the EKS cluster creation stage and cannot be modified.
The following diagram depicts the pod-to-Kubernetes-service flow:

![EKS/IPv6, IPv6 Pod to IPv6 k8s service (ClusterIP ULA) flow](./Pod-to-service-ipv6.png)

Services are exposed to the internet using an AWS load balancer. The load balancer receives public IPv4 and IPv6 addresses, i.e., it is a dual-stack load balancer. For IPv4 clients accessing the IPv6 cluster's Kubernetes service, the load balancer performs IPv4-to-IPv6 translation.

Amazon EKS recommends running worker nodes and pods in private subnets. You can create public load balancers in public subnets that load balance traffic to pods running on nodes in private subnets.
The following diagram shows an internet IPv4 user accessing an EKS/IPv6 ingress-based service:

![internet IPv4 user to an EKS/IPv6 ingress service](./ipv4-internet-to-eks-ipv6.png)

> Note: The pattern above requires deploying the [most recent version](https://kubernetes-sigs.github.io/aws-load-balancer-controller) of the AWS load balancer controller.

### EKS control plane <-> data plane communication

EKS provisions cross-account ENIs (X-ENIs) in dual-stack mode (IPv4/IPv6). Kubernetes node components such as kubelet and kube-proxy are configured to support dual stack. Kubelet and kube-proxy run in hostNetwork mode and bind to both the IPv4 and IPv6 addresses attached to the node's primary network interface.
The Kubernetes API server communicates with pods and node components over the IPv6-based X-ENI. Pods communicate with the API server over the X-ENI, and pod-to-API-server communication always uses IPv6.

![illustration of cluster including X-ENIs](./image-5.png)

## Recommendations

### Maintain access to the IPv4 EKS APIs

The EKS APIs are accessible over IPv4 only. This also includes the cluster API endpoint. You cannot access the cluster endpoint or the APIs from an IPv6-only network. Your network must support (1) an IPv6 transition mechanism such as NAT64/DNS64 that facilitates communication between IPv6 and IPv4 hosts, and (2) a DNS service that supports translation of IPv4 endpoints.

### Schedule based on compute resources

A single IPv6 prefix is sufficient to run many pods on a single node. This also effectively removes the ENI- and IP-based limits on the maximum number of pods per node. Although IPv6 removes the direct dependency on the max-pods limit, with prefix attachments on smaller instance types such as the m5.large you are likely to exhaust the instance's CPU and memory resources long before you exhaust its IP addresses. If you use self-managed node groups, or managed node groups with a custom AMI ID, you must set the EKS-recommended maximum pod value yourself.

You can use the following formula to determine the maximum number of pods you can deploy on a node in an IPv6 EKS cluster:

* ((Number of network interfaces for the instance type × (number of prefixes per network interface − 1)) × 16) + 2

* ((3 ENIs) × ((10 secondary IPs per ENI − 1) × 16)) + 2 = 434

Managed node groups automatically calculate the maximum number of pods for you.
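As a quick worked example of the formula above, here is the same computation in shell arithmetic, using the m5.large figures quoted in this section (3 ENIs, 10 IP slots per ENI, one /80 prefix per secondary IP slot):

```shell
# EKS/IPv6 recommended max pods:
# ((ENIs * (prefixes-per-ENI - 1)) * 16) + 2
enis=3
prefixes_per_eni=10
max_pods=$(( enis * (prefixes_per_eni - 1) * 16 + 2 ))
echo "$max_pods"   # 434
```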
To prevent pod scheduling failures due to resource limits, do not change the EKS-recommended value for maximum pods.

### Evaluate the purpose of existing custom networking

If [custom networking](https://aws.github.io/aws-eks-best-practices/networking/custom-networking/) is currently enabled, Amazon EKS recommends re-evaluating that networking requirement with IPv6. If you chose custom networking to work around IPv4 exhaustion, there is no longer any need for it with IPv6. If you are using custom networking to satisfy a security requirement, such as separate networks for nodes and pods, you are encouraged to submit an [EKS roadmap request](https://github.com/aws/containers-roadmap/issues).

### Fargate pods in EKS/IPv6 clusters

EKS supports IPv6 for pods running on Fargate. Pods running on Fargate consume an IPv6 address and a VPC-routable private IPv4 address carved from the VPC CIDR ranges (IPv4 and IPv6). Simply put, the cluster-wide density of EKS/Fargate pods is limited by the available IPv4 and IPv6 addresses. It is recommended to size your dual-stack subnets/VPC CIDRs for future growth. You will not be able to schedule new Fargate pods if the underlying subnet has no available IPv4 addresses, regardless of how many IPv6 addresses remain available.

### Deploying the AWS Load Balancer Controller (LBC)

**The upstream, in-tree Kubernetes service controller does not support IPv6**.
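As an illustration of the LBC annotations discussed in this section, a minimal Ingress sketch requesting a dual-stack ALB with IP targets could look like the following (the resource and Service names are hypothetical placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app                                      # hypothetical name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ip-address-type: dualstack # public IPv4 + IPv6 ALB
    alb.ingress.kubernetes.io/target-type: ip            # register pod IPs directly
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app                        # hypothetical Service
                port:
                  number: 80
```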
It is recommended to use the [most recent version](https://kubernetes-sigs.github.io/aws-load-balancer-controller) of the AWS Load Balancer Controller add-on. The LBC deploys a dual-stack NLB or a dual-stack ALB only when the corresponding Kubernetes service/ingress definition is annotated with `"alb.ingress.kubernetes.io/ip-address-type: dualstack"` and `"alb.ingress.kubernetes.io/target-type: ip"`.

AWS Network Load Balancers do not support the dual-stack UDP protocol address type. If you have strong requirements for low-latency, real-time streaming, online gaming, or IoT, it is recommended to run IPv4 clusters. To learn more about managing health checks for UDP services, refer to ["How to route UDP traffic into Kubernetes"](https://aws.amazon.com/blogs/containers/how-to-route-udp-traffic-into-kubernetes/).

### Dependency on IMDSv2

EKS in IPv6 mode does not yet support the IMDSv2 endpoint.
Please open a support ticket if IMDSv2 is blocking your migration to EKS/IPv6.
# Running kube-proxy in IPVS mode

Running EKS in IPVS (IP Virtual Server) mode solves the [network latency issue](https://aws.github.io/aws-eks-best-practices/reliability/docs/controlplane/#running-large-clusters) commonly seen when running large clusters with more than 1,000 services with `kube-proxy` in legacy iptables mode. This performance issue is the result of sequentially processing the iptables packet filtering rules for every packet. The latency issue has been addressed in nftables, the successor to iptables. However, as of the time of this writing, [kube-proxy is still under development](https://kubernetes.io/docs/reference/networking/virtual-ips/#proxy-mode-nftables) to make use of nftables. To get around this issue, you can configure your cluster to run `kube-proxy` in IPVS mode instead.

## Overview

IPVS, which has been GA since [Kubernetes version 1.11](https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/), processes packets using hash tables rather than linear search, which makes it efficient for clusters with thousands of nodes and services. IPVS was designed for load balancing, making it a well-suited solution for this Kubernetes networking performance issue.

IPVS offers several options for distributing traffic to backend pods. Detailed information on each option can be found in the [official Kubernetes documentation](https://kubernetes.io/docs/reference/networking/virtual-ips/#proxy-mode-ipvs); a short list follows. Round Robin and Least Connections are among the most commonly used IPVS load balancing options in Kubernetes.
```
- rr (Round Robin)
- wrr (Weighted Round Robin)
- lc (Least Connections)
- wlc (Weighted Least Connections)
- lblc (Locality Based Least Connections)
- lblcr (Locality Based Least Connections with Replication)
- sh (Source Hashing)
- dh (Destination Hashing)
- sed (Shortest Expected Delay)
- nq (Never Queue)
```

### Implementation

Only a few steps are required to enable IPVS in an EKS cluster. The first thing to do is make sure the Linux Virtual Server administration package `ipvsadm` is installed on your EKS worker node image. To install this package on a Fedora-based image such as Amazon Linux 2023, run the following command on the worker node instance:
```bash
sudo dnf install -y ipvsadm
```
On a Debian-based image such as Ubuntu, the installation command looks like this:
```bash
sudo apt-get install ipvsadm
```

Next, load the kernel modules for the IPVS configuration options listed above. It is recommended to write these modules to a file inside the `/etc/modules-load.d/` directory so that they persist across reboots.
```bash
sudo sh -c 'cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_lc
ip_vs_wlc
ip_vs_lblc
ip_vs_lblcr
ip_vs_sh
ip_vs_dh
ip_vs_sed
ip_vs_nq
nf_conntrack
EOF'
```
You can load these modules on a system that is already running with the following commands:
```bash
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_lc
sudo modprobe ip_vs_wlc
sudo modprobe ip_vs_lblc
sudo modprobe ip_vs_lblcr
sudo modprobe ip_vs_sh
sudo modprobe ip_vs_dh
sudo modprobe ip_vs_sed
sudo modprobe ip_vs_nq
sudo modprobe nf_conntrack
```
!!! note
    It is recommended to run these worker node steps as part of the worker node's bootstrapping process via a [user data script](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html), or in any build scripts executed to build a custom worker node AMI.

Next, configure the cluster's `kube-proxy` DaemonSet to run in IPVS mode. This is done by setting the `kube-proxy` `mode` to `ipvs` and the `ipvs scheduler` to one of the load balancing options listed above, for example `rr` for Round Robin.
!!! warning
    This is a disruptive change and should be performed during off-hours. To minimize impact, it is recommended to make these changes during initial EKS cluster creation.

You can enable IPVS by issuing an AWS CLI command to update the `kube-proxy` EKS add-on:
```bash
aws eks update-addon --cluster-name $CLUSTER_NAME --addon-name kube-proxy \
  --configuration-values '{"ipvs": {"scheduler": "rr"}, "mode": "ipvs"}' \
  --resolve-conflicts OVERWRITE
```
Or you can do it by modifying the `kube-proxy-config` ConfigMap in your cluster:
```bash
kubectl -n kube-system edit cm kube-proxy-config
```
Find the `scheduler` setting under `ipvs` and set its value to one of the IPVS load balancing options listed above, for example `rr` for Round Robin.
Find the `mode` setting, which defaults to `iptables`, and change its value to `ipvs`.
The result of either option should look similar to the configuration below.
```yaml hl_lines="9 13"
  iptables:
    masqueradeAll: false
    masqueradeBit: 14
    minSyncPeriod: 0s
    syncPeriod: 30s
  ipvs:
    excludeCIDRs: null
    minSyncPeriod: 0s
    scheduler: "rr"
    syncPeriod: 30s
  kind: KubeProxyConfiguration
  metricsBindAddress: 0.0.0.0:10249
  mode: "ipvs"
  nodePortAddresses: null
  oomScoreAdj: -998
  portRange: ""
  udpIdleTimeout: 250ms
```

If your worker nodes joined the cluster before you made these changes, you will need to restart the kube-proxy DaemonSet:
```bash
kubectl -n kube-system rollout restart ds kube-proxy
```

### Validation

You can validate that the cluster and its worker nodes are running in IPVS mode by issuing the following command on one of the worker nodes:
```bash
sudo ipvsadm -L
```

At a minimum, you should see a result similar to the one below, with an entry for the Kubernetes API server service at `10.100.0.1` and an entry for the CoreDNS service at `10.100.0.10`.
```hl_lines="4 7 10"
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  ip-10-100-0-1.us-east-1. rr
  -> ip-192-168-113-81.us-eas Masq        1      0          0
  -> ip-192-168-162-166.us-ea Masq        1      1          0
TCP  ip-10-100-0-10.us-east-1 rr
  -> ip-192-168-104-215.us-ea Masq        1      0          0
  -> ip-192-168-123-227.us-ea Masq        1      0          0
UDP  ip-10-100-0-10.us-east-1 rr
  -> ip-192-168-104-215.us-ea Masq        1      0          0
  -> ip-192-168-123-227.us-ea Masq        1      0          0
```
!!! note
    This example output comes from an EKS cluster with a service IP address range of `10.100.0.0/16`.
conf ip vs ip vs rr ip vs wrr ip vs lc ip vs wlc ip vs lblc ip vs lblcr ip vs sh ip vs dh ip vs sed ip vs nq nf conntrack EOF                                                        bash sudo modprobe ip vs  sudo modprobe ip vs rr sudo modprobe ip vs wrr sudo modprobe ip vs lc sudo modprobe ip vs wlc sudo modprobe ip vs lblc sudo modprobe ip vs lblcr sudo modprobe ip vs sh sudo modprobe ip vs dh sudo modprobe ip vs sed sudo modprobe ip vs nq sudo modprobe nf conntrack         note                                   https   docs aws amazon com AWSEC2 latest UserGuide user data html                                                   AMI                                             IPVS                   kube proxy  DaemonSet             kube proxy   mode    ipvs         ipvs scheduler                                                  rr        Warning                                                             EKS                                    kube proxy  EKS             AWS CLI          IPVS                  bash aws eks update addon   cluster name  CLUSTER NAME   addon name kube proxy       configuration values    ipvs     scheduler    rr     mode    ipvs          resolve conflicts OVERWRITE                kube proxy config                                  bash kubectl  n kube system edit cm kube proxy config      ipvs     scheduler                   IPVS                                      rr         iptables    mode             ipvs                                             yaml hl lines  9 13    iptables      masqueradeAll  false     masqueradeBit  14     minSyncPeriod  0s     syncPeriod  30s   ipvs      excludeCIDRs  null     minSyncPeriod  0s     scheduler   rr      syncPeriod  30s   kind  KubeProxyConfiguration   metricsBindAddress  0 0 0 0 10249   mode   ipvs    nodePortAddresses  null   oomScoreAdj   998   portRange       udpIdleTimeout  250ms                                           kube proxy                      bash kubectl  n kube system rollout 
restart ds kube proxy                                                           IPVS                              bash sudo ipvsadm  L                API                 10 100 0 1    CoreDNS              10 100 0 10                              hl lines  4 7 10  IP Virtual Server version 1 2 1  size 4096  Prot LocalAddress Port Scheduler Flags      RemoteAddress Port           Forward Weight ActiveConn InActConn TCP  ip 10 100 0 1 us east 1  rr      ip 192 168 113 81 us eas Masq        1      0          0      ip 192 168 162 166 us ea Masq        1      1          0 TCP  ip 10 100 0 10 us east 1 rr      ip 192 168 104 215 us ea Masq        1      0          0      ip 192 168 123 227 us ea Masq        1      0          0 UDP  ip 10 100 0 10 us east 1 rr      ip 192 168 104 215 us ea Masq        1      0          0      ip 192 168 123 227 us ea Masq        1      0          0         note                  IP         10 100 0 0 16   EKS                "}
{"questions":"eks search iframe width 560 height 315 src https www youtube com embed RBE3yk2UlYA title YouTube video player frameborder 0 allow accelerometer autoplay clipboard write encrypted media gyroscope picture in picture web share allowfullscreen iframe exclude true Amazon VPC CNI","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# Amazon VPC CNI\n\n<iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/RBE3yk2UlYA\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen><\/iframe>\n\nAmazon EKS implements cluster networking through the [Amazon VPC Container Network Interface (VPC CNI)](https:\/\/github.com\/aws\/amazon-vpc-cni-k8s) plugin. The CNI plugin allows Kubernetes Pods to have the same IP address as they do on the VPC network. Specifically, all containers inside a Pod share a network namespace and can communicate with each other using local ports.\n\nThe Amazon VPC CNI has two components:\n\n* The CNI binary, which sets up the Pod network to enable Pod-to-Pod communication. 
The CNI binary runs on a node's root file system and is invoked by the kubelet when a new Pod gets added to, or an existing Pod is removed from, the node.\n* ipamd, a long-running node-local IP address management (IPAM) daemon, which is responsible for:\n  * managing ENIs on the node, and\n  * maintaining a warm pool of available IP addresses or prefixes\n\nWhen an instance is created, EC2 creates and attaches a primary ENI associated with a primary subnet. The primary subnet may be public or private. Pods that run in hostNetwork mode use the primary IP address assigned to the node's primary ENI and share the same network namespace as the host.\n\nThe CNI plugin manages [Elastic Network Interfaces (ENI)](https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/using-eni.html) on the node. When a node is provisioned, the CNI plugin automatically allocates a pool of slots (IPs or prefixes) from the node's subnet to the primary ENI. This pool is known as the *warm pool*, and its size is determined by the node's instance type. Depending on the CNI settings, a slot may be an IP address or a prefix. When a slot on an ENI has been assigned, the CNI may attach additional ENIs, each with its own warm pool of slots, to the node. These additional ENIs are called secondary ENIs. Each ENI can only support a certain number of slots, based on the instance type. The CNI attaches more ENIs to the instance based on the number of slots needed, which usually corresponds to the number of Pods. This process continues until the node can no longer support additional ENIs. The CNI also pre-allocates \u201cwarm\u201d ENIs and slots for faster Pod startup. Note that each instance type has a maximum number of ENIs that can be attached. 
This is one constraint on Pod density (the number of Pods per node), in addition to compute resources.\n\n![flow chart illustrating procedure when new ENI delegated prefix is needed](.\/image.png)\n\nThe maximum number of network interfaces, and the maximum number of slots you can use, varies by EC2 instance type. Since each Pod consumes an IP address on a slot, the number of Pods you can run on a particular EC2 instance depends on how many ENIs can be attached to it and how many slots each ENI supports. We suggest setting the maximum Pods per the EKS user guide to avoid exhausting the instance's CPU and memory resources. Pods using `hostNetwork` are excluded from this calculation. You can use a script called [max-pod-calculator.sh](https:\/\/github.com\/awslabs\/amazon-eks-ami\/blob\/master\/files\/max-pods-calculator.sh) to calculate EKS's recommended maximum Pods for a given instance type.\n\n## Overview\n\nSecondary IP mode is the default mode for the VPC CNI. This guide provides a generic overview of VPC CNI behavior when secondary IP mode is enabled. The functionality of ipamd (the allocation of IP addresses) may vary depending on the VPC CNI configuration settings, such as [Prefix Mode](..\/prefix-mode\/index_linux.md), [Security Groups Per Pod](..\/sgpp\/index.md), and [Custom Networking](..\/custom-networking\/index.md).\n\nThe Amazon VPC CNI is deployed as a Kubernetes DaemonSet named aws-node on worker nodes. When a worker node is provisioned, it has a default ENI, called the primary ENI, attached to it. The CNI allocates a warm pool of ENIs and secondary IP addresses from the subnet attached to the node's primary ENI. By default, ipamd attempts to allocate an additional ENI to the node. ipamd allocates the additional ENI once a single Pod is scheduled and assigned a secondary IP address from the primary ENI. This \u201cwarm\u201d ENI enables faster Pod networking. 
As the pool of secondary IP addresses runs out, the CNI attaches another ENI to assign more addresses.\n\nThe number of ENIs and IP addresses in the pool is configured through environment variables called [WARM_ENI_TARGET, WARM_IP_TARGET, MINIMUM_IP_TARGET](https:\/\/github.com\/aws\/amazon-vpc-cni-k8s\/blob\/master\/docs\/eni-and-ip-target.md). The `aws-node` DaemonSet periodically checks that a sufficient number of ENIs are attached. A sufficient number of ENIs are attached when the `WARM_ENI_TARGET`, or the `WARM_IP_TARGET` and `MINIMUM_IP_TARGET`, conditions are met. If there are insufficient ENIs attached, the CNI makes an API call to EC2 to attach more until the `MAX_ENI` limit is reached.\n\n* `WARM_ENI_TARGET` - Integer; values >0 indicate the requirement is enabled.\n  * The number of warm ENIs to be maintained. An ENI is \u201cwarm\u201d when it is attached as a secondary ENI to a node but is not in use by any Pod. More specifically, no IP addresses of the ENI have been associated with a Pod.\n  * Example: Consider an instance with 2 ENIs, each ENI supporting 5 IP addresses. WARM_ENI_TARGET is set to 1. If exactly 5 IP addresses are associated with the instance, the CNI maintains 2 ENIs attached to the instance. The first ENI is in use, and all 5 of its IP addresses are used. The second ENI is \u201cwarm\u201d, with all 5 IP addresses in its pool. If another Pod is launched on the instance, a 6th IP address is needed. The CNI assigns this 6th Pod an IP address from the second ENI's pool of 5 IPs. The second ENI is now in use and no longer in a \u201cwarm\u201d status. The CNI allocates a 3rd ENI to maintain at least 1 warm ENI.\n\n!!! Note\n    Warm ENIs still consume IP addresses from the CIDR of your VPC. IP addresses are \u201cunused\u201d or \u201cwarm\u201d until they are associated with a workload, such as a Pod.\n\n* `WARM_IP_TARGET` - Integer; values >0 indicate the requirement is enabled.\n  * The number of warm IP addresses to be maintained. A warm IP is available on an actively attached ENI but has not been assigned to a Pod. In other words, the number of warm IPs available is the number of IPs that can be assigned to a Pod without requiring an additional ENI.\n  * Example: Consider an instance with 1 ENI, each ENI supporting 20 IP addresses. WARM_IP_TARGET is set to 5. WARM_ENI_TARGET is set to 0. Only 1 ENI is attached until a 16th IP address is needed. Then the CNI attaches a second ENI, consuming 20 possible addresses from the subnet CIDR.\n* `MINIMUM_IP_TARGET` - Integer; values >0 indicate the requirement is enabled.\n  * The minimum number of IP addresses to be allocated at any time. This is commonly used to front-load the assignment of multiple ENIs at instance launch.\n  * Example: Consider a newly launched instance. It has 1 ENI, and each ENI supports 10 IP addresses. MINIMUM_IP_TARGET is set to 100. The CNI immediately attaches 9 more ENIs for a total of 100 addresses. This happens regardless of any WARM_IP_TARGET or WARM_ENI_TARGET values.\n\nThis project includes a [Subnet Calculator Excel Document](..\/subnet-calc\/subnet-calc.xlsx). This document simulates the IP address consumption of a specified workload under different ENI configuration options, such as `WARM_IP_TARGET` and `WARM_ENI_TARGET`.\n\n![illustration of components involved in assigning an IP address to a pod](.\/image-2.png)\n\nWhen the kubelet receives an add-Pod request, the CNI binary queries ipamd for an available IP address, which ipamd then provides to the Pod. 
The CNI binary wires up the host and Pod network.\n\nPods deployed on a node are, by default, assigned to the same security groups as the primary ENI. Alternatively, Pods can be configured with different security groups.\n\n![second illustration of components involved in assigning an IP address to a pod](.\/image-3.png)\n\nAs the pool of IP addresses is depleted, the plugin automatically attaches another Elastic Network Interface to the instance and allocates another set of secondary IP addresses to that interface. This process continues until the node can no longer support additional Elastic Network Interfaces.\n\n![third illustration of components involved in assigning an IP address to a pod](.\/image-4.png)\n\nWhen a Pod is deleted, the VPC CNI places the Pod's IP address in a 30-second cool-down cache. IPs in the cool-down cache are not assigned to new Pods. When the cool-down period is over, the VPC CNI moves the Pod IP back to the warm pool. The cool-down period prevents Pod IP addresses from being recycled prematurely and allows kube-proxy on all cluster nodes to finish updating its iptables rules. When the number of IPs or ENIs exceeds the warm pool settings, the ipamd plugin returns the IPs and ENIs to the VPC.\n\nAs described above for secondary IP mode, each Pod receives one secondary private IP address from one of the ENIs attached to the instance. Since each Pod consumes an IP address, the number of Pods you can run on a particular EC2 instance depends on how many ENIs can be attached to it and how many IP addresses each supports. 
The VPC CNI checks the [limits](https:\/\/github.com\/aws\/amazon-vpc-resource-controller-k8s\/blob\/master\/pkg\/aws\/vpc\/limits.go) file to find out how many ENIs and IP addresses are allowed for each type of instance.\n\nYou can use the following formula to determine the maximum number of Pods you can deploy on a node.\n\n`(Number of network interfaces for the instance type \u00d7 (the number of IP addresses per network interface - 1)) + 2`\n\nThe +2 indicates Kubernetes Pods that use host networking, such as kube-proxy and the VPC CNI. Amazon EKS requires kube-proxy and the VPC CNI to be running on every node, and these requirements are factored into the max-pods value. If you plan to run additional host networking Pods, consider updating the max-pods value. You can specify `--kubelet-extra-args \"--max-pods=110\"` as user data in the launch template.\n\nAs an example, on a cluster with 3 c5.large nodes (3 ENIs and a maximum of 10 IPs per ENI), when the cluster starts up and has 2 CoreDNS Pods, the CNI will consume 49 IP addresses and hold them in the warm pool. The warm pool enables faster Pod launches when applications are deployed.\n\nNode 1 (with a CoreDNS Pod): 2 ENIs, 20 IPs assigned\n\nNode 2 (with a CoreDNS Pod): 2 ENIs, 20 IPs assigned\n\nNode 3 (no Pods): 1 ENI, 10 IPs assigned\n\nKeep in mind that infrastructure Pods, often run as DaemonSets, each count toward the max-pod limit. These can include:\n\n* CoreDNS\n* Amazon Elastic LoadBalancer\n* Operational Pods for metrics-server\n\nWe suggest that you plan your infrastructure by combining the capacities of these Pods. 
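The max-pods formula and the c5.large example above can be sketched as a short script (Python here purely for illustration; the 3-ENI, 10-IP figures are the ones quoted above, not fetched from EC2):

```python
# Maximum Pods per node under the VPC CNI secondary IP mode:
#   (number of ENIs * (IP addresses per ENI - 1)) + 2
# The "+2" accounts for the host-networking Pods (kube-proxy and the
# VPC CNI) that run on every EKS node.
def max_pods(enis: int, ips_per_eni: int) -> int:
    return enis * (ips_per_eni - 1) + 2

# c5.large figures quoted above: 3 ENIs, up to 10 IPs per ENI.
print(max_pods(3, 10))  # 29
```

The first IP of each ENI is reserved for the interface itself, which is why one address per ENI is subtracted before adding the two host-networking Pods.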
\uac01 \uc778\uc2a4\ud134\uc2a4 \uc720\ud615\uc5d0\uc11c \uc9c0\uc6d0\ud558\ub294 \ucd5c\ub300 \ud30c\ub4dc \uc218 \ubaa9\ub85d\uc740 GitHub\uc758 [eni-max-Pods.txt](https:\/\/github.com\/awslabs\/amazon-eks-ami\/blob\/master\/files\/eni-max-pods.txt)\ub97c \ucc38\uc870\ud569\ub2c8\ub2e4.\n\n![illustration of multiple ENIs attached to a node](.\/image-5.png)\n\n## \uad8c\uc7a5 \uc0ac\ud56d\n\n### VPC CNI \uad00\ub9ac\ud615 \uc560\ub4dc\uc628 \ubc30\ud3ec\n\n\ud074\ub7ec\uc2a4\ud130\ub97c \ud504\ub85c\ube44\uc800\ub2dd\ud558\uba74 Amazon EKS\uac00 \uc790\ub3d9\uc73c\ub85c VPC CNI\ub97c \uc124\uce58\ud569\ub2c8\ub2e4. \uadf8\ub7fc\uc5d0\ub3c4 \ubd88\uad6c\ud558\uace0 Amazon EKS\ub294 \ud074\ub7ec\uc2a4\ud130\uac00 \ucef4\ud4e8\ud305, \uc2a4\ud1a0\ub9ac\uc9c0, \ub124\ud2b8\uc6cc\ud0b9\uacfc \uac19\uc740 \uae30\ubcf8 AWS \ub9ac\uc18c\uc2a4\uc640 \uc0c1\ud638 \uc791\uc6a9\ud560 \uc218 \uc788\ub3c4\ub85d \ud558\ub294 \uad00\ub9ac\ud615 \uc560\ub4dc\uc628\uc744 \uc9c0\uc6d0\ud569\ub2c8\ub2e4. VPC CNI\ub97c \ube44\ub86f\ud55c \uad00\ub9ac\ud615 \uc560\ub4dc\uc628\uc744 \uc0ac\uc6a9\ud558\uc5ec \ud074\ub7ec\uc2a4\ud130\ub97c \ubc30\ud3ec\ud558\ub294 \uac83\uc774 \uc88b\uc2b5\ub2c8\ub2e4.\n\nAmazon EKS \uad00\ub9ac\ud615 \uc560\ub4dc\uc628\uc740 Amazon EKS \ud074\ub7ec\uc2a4\ud130\ub97c \uc704\ud55c VPC CNI \uc124\uce58 \ubc0f \uad00\ub9ac\ub97c \uc81c\uacf5\ud569\ub2c8\ub2e4. Amazon EKS \uc560\ub4dc\uc628\uc5d0\ub294 \ucd5c\uc2e0 \ubcf4\uc548 \ud328\uce58, \ubc84\uadf8 \uc218\uc815\uc774 \ud3ec\ud568\ub418\uc5b4 \uc788\uc73c\uba70, Amazon EKS\uc640 \ud638\ud658\ub418\ub294\uc9c0 AWS\uc758 \uac80\uc99d\uc744 \uac70\ucce4\uc2b5\ub2c8\ub2e4. 
VPC CNI \uc560\ub4dc\uc628\uc744 \uc0ac\uc6a9\ud558\uba74 Amazon EKS \ud074\ub7ec\uc2a4\ud130\uc758 \ubcf4\uc548 \ubc0f \uc548\uc815\uc131\uc744 \uc9c0\uc18d\uc801\uc73c\ub85c \ubcf4\uc7a5\ud558\uace0 \ucd94\uac00 \uae30\ub2a5\uc744 \uc124\uce58, \uad6c\uc131 \ubc0f \uc5c5\ub370\uc774\ud2b8\ud558\ub294 \ub370 \ud544\uc694\ud55c \ub178\ub825\uc744 \uc904\uc77c \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ub610\ud55c \uad00\ub9ac\ud615 \uc560\ub4dc\uc628\uc740 Amazon EKS API, AWS \uad00\ub9ac \ucf58\uc194, AWS CLI \ubc0f eksctl\uc744 \ud1b5\ud574 \ucd94\uac00, \uc5c5\ub370\uc774\ud2b8 \ub610\ub294 \uc0ad\uc81c\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n`kubectl get` \uba85\ub839\uc5b4\uc640 \ud568\uaed8 `--show-managed-fields` \ud50c\ub798\uadf8\ub97c \uc0ac\uc6a9\ud558\uc5ec VPC CNI\uc758 \uad00\ub9ac\ud615 \ud544\ub4dc\ub97c \ucc3e\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n```\nkubectl get daemonset aws-node --show-managed-fields -n kube-system -o yaml\n```\n\n\uad00\ub9ac\ud615 \uc560\ub4dc\uc628\uc740 15\ubd84\ub9c8\ub2e4 \uad6c\uc131\uc744 \uc790\ub3d9\uc73c\ub85c \ub36e\uc5b4\uc4f0\ub294 \uac83\uc73c\ub85c \uad6c\uc131 \ubcc0\ub3d9\uc744 \ubc29\uc9c0\ud569\ub2c8\ub2e4. \uc989, \uc560\ub4dc\uc628 \uc0dd\uc131 \ud6c4 Kubernetes API\ub97c \ud1b5\ud574 \ubcc0\uacbd\ud55c \uad00\ub9ac\ud615 \uc560\ub4dc\uc628\uc740 \uc790\ub3d9\ud654\ub41c \ub4dc\ub9ac\ud504\ud2b8 \ubc29\uc9c0 \ud504\ub85c\uc138\uc2a4\uc5d0 \uc758\ud574 \ub36e\uc5b4\uc4f0\uc5ec\uc9c0\uace0 \uc560\ub4dc\uc628 \uc5c5\ub370\uc774\ud2b8 \ud504\ub85c\uc138\uc2a4 \uc911\uc5d0\ub3c4 \uae30\ubcf8\uac12\uc73c\ub85c \uc124\uc815\ub429\ub2c8\ub2e4.\n\nEKS\uc5d0\uc11c \uad00\ub9ac\ud558\ub294 \ud544\ub4dc\ub294 managedFields \ud558\uc704\uc5d0 \ub098\uc5f4\ub418\uba70 \uc774 \ud544\ub4dc\uc5d0 \ub300\ud55c \uad00\ub9ac\uc790\ub294 EKS\uc785\ub2c8\ub2e4. 
The fields managed by EKS include the service account, image, image URL, liveness probe, readiness probe, labels, volumes, and volume mounts.

!!! Info
    Frequently used fields such as WARM_ENI_TARGET, WARM_IP_TARGET, and MINIMUM_IP_TARGET are not managed and will not be reconciled. Changes to these fields are preserved when the add-on is updated.

Before updating a production cluster, we suggest testing the add-on behavior on a non-production cluster with the same configuration. Also follow the steps in the EKS user guide for [add-on](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html) configuration.

#### Migrate to the Managed Add-On

With a self-managed VPC CNI, you are responsible for managing version compatibility and applying security patches. To update a self-managed add-on, you must use the Kubernetes APIs and follow the instructions described in the [EKS user guide](https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html#updating-vpc-cni-add-on).
For existing EKS clusters, we recommend migrating to the managed add-on, and we recommend backing up your current CNI settings before migrating. You can use the Amazon EKS API, the AWS Management Console, or the AWS Command Line Interface to configure the managed add-on.

```
kubectl apply view-last-applied daemonset aws-node -n kube-system > aws-k8s-cni-old.yaml
```

Amazon EKS will replace the CNI configuration settings if fields are listed as managed with default settings. Take care not to modify the managed fields. The add-on does not reconcile configuration fields such as the *warm* environment variables and CNI modes.
Pods and applications keep running while you migrate to the managed CNI.

#### Back Up CNI Settings Before Updating

Because the VPC CNI runs on the customer data plane (the nodes), Amazon EKS does not automatically update the add-on (managed or self-managed) when a new version is released or after you [update your cluster](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html) to a new Kubernetes minor version. To update the add-on on an existing cluster, you must trigger the update by calling the update-addon API or by choosing the Update Now link for the add-on in the EKS console. If you deployed a self-managed add-on, follow the steps in [updating the self-managed VPC CNI add-on](https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html#updating-vpc-cni-add-on).

We recommend updating one minor version at a time.
For example, if your current minor version is `1.9` and you want to update to `1.11`, first update to the latest patch version of `1.10`, then update to the latest patch version of `1.11`.

Perform an inspection of the aws-node daemonset before updating the Amazon VPC CNI, and back up your existing settings. If you use the managed add-on, verify that you have not modified any settings that Amazon EKS may override. We recommend adding a post-update hook to your automation workflow, or a manual apply step after the add-on update.

```
kubectl apply view-last-applied daemonset aws-node -n kube-system > aws-k8s-cni-old.yaml
```

For a self-managed add-on, compare your backup against `releases` on GitHub to see the available versions, and familiarize yourself with the changes in the version you want to update to. We recommend using Helm to manage self-managed add-ons and applying your settings through a values file.
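As a sketch of the values-file approach, the self-managed CNI can be installed from the aws-vpc-cni chart in the eks-charts repository; the `env` keys shown below are assumptions based on that chart's conventions, so verify them against the chart's own values.yaml:

```shell
# Pin CNI settings in a values file so every upgrade reapplies them.
cat > vpc-cni-values.yaml <<'EOF'
env:
  WARM_ENI_TARGET: "1"
  AWS_VPC_K8S_CNI_LOGLEVEL: "DEBUG"
EOF

# Apply the release from the eks-charts repository (run against your cluster):
#   helm repo add eks https://aws.github.io/eks-charts
#   helm upgrade --install aws-vpc-cni eks/aws-vpc-cni \
#     -n kube-system -f vpc-cni-values.yaml
```

Keeping the values file in version control also gives you the backup-and-diff workflow described above for free.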
Avoid any update operation that involves deleting the daemonset, because it leads to application downtime.

### Understand the Security Context

To manage the VPC CNI efficiently, it is helpful to understand its configured security context. The Amazon VPC CNI has two components: the CNI binary and the ipamd (aws-node) daemonset. The CNI runs as a binary on the node, has access to the node's root file system, and has privileged access because it manipulates iptables at the node level. The CNI binary is invoked by the kubelet when pods are added or removed.

The aws-node daemonset is a long-running process responsible for IP address management at the node level. aws-node runs in `hostNetwork` mode, which allows access to the loopback device and the network activity of other pods on the same node.
The init container of aws-node runs in privileged mode and mounts the CRI socket so that the daemonset can monitor the IP usage of pods running on the node. Amazon EKS is working to remove the privileged requirement for the aws-node init container. aws-node also needs to run with the NET_ADMIN capability because it updates NAT entries and loads iptables modules.

Amazon EKS recommends deploying the security policies as defined in the aws-node manifest for IP management of pods and networking settings. Consider updating to the latest version of the VPC CNI. If you have specific security requirements, also check the [GitHub issues](https://github.com/aws/amazon-vpc-cni-k8s/issues).

### Use a Separate IAM Role for the CNI

The AWS VPC CNI requires AWS Identity and Access Management (IAM) permissions. Before an IAM role can be used, the CNI policy needs to be set up.
For IPv4 clusters, you can use the AWS managed policy [`AmazonEKS_CNI_Policy`](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy%24jsonEditor). The Amazon EKS CNI managed policy only has permissions for IPv4 clusters. You must create a separate IAM policy for IPv6 clusters with the permissions shown in [this link](https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html#cni-iam-role-create-ipv6-policy).

By default, the VPC CNI inherits the [Amazon EKS node IAM role](https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html) (for both managed and self-managed node groups).

We **strongly** recommend configuring a separate IAM role with the relevant policies for the Amazon VPC CNI. Otherwise, the Amazon VPC CNI pods gain the permissions assigned to the node IAM role and can access the instance profile assigned to the node.

The VPC CNI plugin creates and configures a service account called aws-node.
By default, the service account binds to the Amazon EKS node IAM role with the Amazon EKS CNI policy attached. To use a separate IAM role, we recommend [creating a new service account](https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html#cni-iam-role-create-role) with the Amazon EKS CNI policy attached. To use the new service account, you must [redeploy the CNI pods](https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html#cni-iam-role-redeploy-pods). When creating a new cluster, consider specifying `--service-account-role-arn` for the VPC CNI managed add-on.
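As a sketch, binding the managed add-on to a dedicated role at creation time can look like the following; the cluster name and role ARN are placeholders, and the role is assumed to already trust the cluster's OIDC provider and carry the CNI policy:

```shell
# Placeholder values for illustration only.
CLUSTER=my-cluster
CNI_ROLE_ARN=arn:aws:iam::111122223333:role/AmazonEKSVPCCNIRole

# Remove the leading `echo` to actually create the add-on.
echo aws eks create-addon \
  --cluster-name "$CLUSTER" \
  --addon-name vpc-cni \
  --service-account-role-arn "$CNI_ROLE_ARN"
```

With `--service-account-role-arn` set, EKS annotates the aws-node service account with the role, so the CNI pods no longer need permissions from the node role.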
Make sure that you remove the Amazon EKS CNI policies (for both IPv4 and IPv6) from the Amazon EKS node IAM role.

To minimize the blast radius of a security breach, we recommend [blocking access to instance metadata](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).

### Handle Liveness/Readiness Probe Failures

We recommend increasing the liveness and readiness probe timeout values (the default is `timeoutSeconds: 10`) on EKS 1.20 and later clusters to prevent probe failures from leaving your application's pods stuck in a container-creating state. This issue has been seen on data-intensive batch-processing clusters. High CPU usage causes aws-node probe health failures, which can lead to unfulfilled pod CPU resource requests. In addition to modifying the probe timeouts, make sure that the CPU resource request for aws-node (the default is `CPU: 25m`) is configured correctly.
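For the managed add-on, per-field tweaks such as the CPU request are applied through the add-on's configuration values rather than by editing the daemonset directly, which drift prevention would revert. A sketch follows; the `resources` key is an assumption about the vpc-cni configuration schema, which you can check with `aws eks describe-addon-configuration --addon-name vpc-cni --addon-version <version>`:

```shell
# JSON passed to --configuration-values; the schema key is an assumption to verify.
CONFIG='{"resources":{"requests":{"cpu":"50m"}}}'
echo "$CONFIG"

# Apply against your cluster (names/versions are placeholders):
#   aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni \
#     --configuration-values "$CONFIG" --resolve-conflicts PRESERVE
```

`--resolve-conflicts PRESERVE` keeps your existing field values where they differ from the add-on defaults.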
We recommend not updating these settings unless your nodes are having problems.

While engaging Amazon EKS support, we recommend running `sudo bash /opt/cni/bin/aws-cni-support.sh` on the node. The script helps evaluate the kubelet logs and memory utilization on the node. Consider installing the SSM agent on your Amazon EKS worker nodes to run the script.

### Configure the IPTables Forward Policy on Non-EKS-Optimized AMI Instances

If you use a custom AMI, you must set the iptables forward policy to ACCEPT in [kubelet.service](https://github.com/awslabs/amazon-eks-ami/blob/master/files/kubelet.service#L8). Many systems set the iptables forward policy to DROP. You can build a custom AMI using [HashiCorp Packer](https://packer.io/intro/why.html) with a build specification based on the resources and configuration scripts in the [Amazon EKS AMI repository on AWS GitHub](https://github.com/awslabs/amazon-eks-ami).
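In the EKS-optimized AMI this is handled by a unit directive in kubelet.service; a custom AMI can carry the same line. The excerpt below is a sketch reconstructed from the linked file, so verify it against the repository:

```
[Service]
# Set the FORWARD policy to ACCEPT before kubelet starts so that
# forwarded pod traffic is not dropped by the system default.
ExecStartPre=/sbin/iptables -P FORWARD ACCEPT
```

Running the directive as `ExecStartPre` means the policy is reasserted on every kubelet restart, not just at boot.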
Update [kubelet.service](https://github.com/awslabs/amazon-eks-ami/blob/master/files/kubelet.service#L8) and follow the instructions described in [this link](https://aws.amazon.com/premiumsupport/knowledge-center/eks-custom-linux-ami/) to create the custom AMI.

### Upgrade the CNI Version Regularly

The VPC CNI is backward compatible, and the latest version works with all Amazon EKS supported Kubernetes versions. The VPC CNI is also available as an EKS add-on (see "Deploy VPC CNI as a Managed Add-On" above). While EKS add-ons orchestrate add-on upgrades, add-ons like the CNI run on the data plane and are therefore not upgraded automatically.
You are responsible for upgrading the VPC CNI add-on after managed and self-managed worker node upgrades.
{"questions":"eks search EKS AWS VPC exclude true VPC","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# VPC and Subnet Considerations\n\nOperating an EKS cluster requires knowledge of AWS VPC networking, in addition to Kubernetes networking.\n\nWe recommend you understand the EKS control plane communication mechanisms before you start designing your VPC or deploying clusters into existing VPCs.\n\nRefer to [Cluster VPC considerations](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/network_reqs.html) and [Amazon EKS security group considerations](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/sec-group-reqs.html) when architecting a VPC and subnets to be used with EKS.\n\n## Overview\n\n### EKS Cluster Architecture\n\nAn EKS cluster consists of two VPCs:\n\n* An AWS-managed VPC that hosts the Kubernetes control plane. This VPC does not appear in the customer account.\n* A customer-managed VPC that hosts the Kubernetes nodes. This is where containers run, as well as other customer-managed AWS infrastructure such as load balancers used by the cluster. 
This VPC appears in the customer account. You need to create the customer-managed VPC before you create a cluster. eksctl creates a VPC if you do not provide one.\n\nThe nodes in the customer VPC must be able to connect to the managed API server endpoint in the AWS VPC. This allows the nodes to register with the Kubernetes control plane and receive requests to run application Pods.\n\nNodes connect to the EKS control plane through (a) the EKS public endpoint or (b) cross-account [Elastic Network Interfaces](https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/using-eni.html) (X-ENIs) managed by EKS. You must specify at least two VPC subnets when you create a cluster. EKS places an X-ENI in each subnet specified at cluster creation (also called cluster subnets). The Kubernetes API server uses these cross-account ENIs to communicate with nodes deployed in the customer-managed cluster VPC subnets. 
\n\n![general illustration of cluster networking, including load balancer, nodes, and pods.](.\/image.png)\n\nAs a node starts, the EKS bootstrap script runs and the Kubernetes node configuration files are installed. As part of the boot process on each instance, the container runtime agent, kubelet, and Kubernetes node agents are started.\n\nTo register the node, the kubelet connects to the Kubernetes cluster endpoint, establishing a connection with either the public endpoint outside the VPC or the private endpoint within the VPC. The kubelet receives API commands and regularly sends status updates and heartbeats to the endpoint.\n\n### EKS Control Plane Communication\n\nEKS has two ways to control access to the [cluster endpoint](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/cluster-endpoint.html). 
Endpoint access control lets you choose whether the endpoint can be reached from the public internet or only through your VPC. You can enable the public endpoint (the default), the private endpoint, or both at once.\n\nThe configuration of the cluster API endpoint determines the path that nodes take to communicate with the control plane. Note that these endpoint settings can be changed at any time through the EKS console or API.\n\n#### Public Endpoint\n\nThis is the default behavior for new Amazon EKS clusters. When only the public endpoint of the cluster is enabled, Kubernetes API requests that originate from within your cluster's VPC (such as worker node to control plane communication) leave the VPC, but not Amazon's network. 
For nodes to connect to the control plane, they must have a public IP address and a route to an internet gateway, or a route to a NAT gateway where they can use the public IP address of the NAT gateway.\n\n#### Public and Private Endpoints\n\nWhen both the public and private endpoints are enabled, Kubernetes API requests from within the VPC communicate with the control plane via the X-ENIs within your VPC. The cluster API server is still accessible from the internet.\n\n#### Private Endpoint\n\nWhen only the private endpoint is enabled, there is no public access to the API server from the internet. All traffic to the cluster API server must come from within your cluster's VPC or a connected network. The nodes communicate with the API server via X-ENIs within your VPC. Note that cluster management tools must have access to the private endpoint. 
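The three endpoint modes above can be summarized in a small sketch. This is plain illustrative Python, not an EKS API; the return strings are assumptions made up for this example:

```python
def api_server_path(public_enabled: bool, private_enabled: bool) -> str:
    """Return how in-VPC nodes reach the EKS API server for a given
    combination of endpoint settings, per the modes described above."""
    if public_enabled and private_enabled:
        # Node traffic stays inside the VPC via the X-ENIs; the API
        # server is additionally reachable from the internet.
        return "X-ENI (internet access also enabled)"
    if public_enabled:
        # Traffic leaves the VPC (but not Amazon's network) and needs a
        # public IP or a route to a NAT gateway.
        return "public endpoint via internet gateway or NAT"
    if private_enabled:
        # All traffic must originate in the VPC or a connected network.
        return "X-ENI only"
    raise ValueError("at least one endpoint type must be enabled")

print(api_server_path(public_enabled=True, private_enabled=False))
```

Either endpoint setting can be toggled later through the EKS console or API, which changes the path returned here without redeploying nodes.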
Learn more about [how to connect to a private Amazon EKS cluster endpoint from outside the Amazon VPC](https:\/\/aws.amazon.com\/premiumsupport\/knowledge-center\/eks-private-cluster-endpoint-vpc\/).\n\nNote that the cluster's API server endpoint is resolved by public DNS servers to a private IP address in the VPC. In the past, the endpoint could only be resolved from within the VPC.\n\n### VPC Configurations\n\nAmazon VPC supports both IPv4 and IPv6 addressing. Amazon EKS supports IPv4 by default. A VPC must have an IPv4 CIDR block associated with it. You can optionally associate multiple IPv4 [Classless Inter-Domain Routing](http:\/\/en.wikipedia.org\/wiki\/CIDR_notation) (CIDR) blocks and multiple IPv6 CIDR blocks with your VPC. When you create a VPC, you must specify an IPv4 CIDR block for the VPC from the private IPv4 address ranges specified in [RFC 1918](http:\/\/www.faqs.org\/rfcs\/rfc1918.html). The allowed block size is between a `\/16` prefix (65,536 IP addresses) and a `\/28` prefix (16 IP addresses). 
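The prefix-to-size arithmetic can be sanity-checked with Python's standard `ipaddress` module (a quick sketch; the example networks are arbitrary RFC 1918 ranges). Note that AWS also reserves five addresses in every subnet, which matters most at small prefix sizes:

```python
import ipaddress

# Largest and smallest IPv4 CIDR blocks a VPC allows.
largest = ipaddress.ip_network("10.0.0.0/16")
smallest = ipaddress.ip_network("192.168.0.0/28")

print(largest.num_addresses)   # 65536
print(smallest.num_addresses)  # 16

# AWS reserves the first four addresses and the last address in each
# subnet, so a /28 subnet leaves 16 - 5 = 11 usable IPs.
usable_in_28 = smallest.num_addresses - 5
print(usable_in_28)  # 11
```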
\n\nYou can associate one IPv6 CIDR block when you create a new VPC, and up to five when you change an existing VPC. The prefix length of a VPC's IPv6 CIDR block is fixed at \/64, which holds [over a trillion IP addresses](https:\/\/www.ripe.net\/about-us\/press-centre\/understanding-ip-addressing#:~:text=IPv6%20Relative%20Network%20Sizes%20%20%20%2F128%20,Minimum%20IPv6%20allocation%20%201%20more%20rows%20). You can request an IPv6 CIDR block from Amazon's pool of managed IPv6 addresses.\n\nAmazon EKS clusters support both IPv4 and IPv6. By default, EKS clusters use IPv4. Specifying IPv6 at cluster creation time enables an IPv6 cluster. IPv6 clusters require a dual-stack VPC and subnets.\n\nAmazon EKS recommends you use at least two subnets that are in different Availability Zones during cluster creation. The subnets you pass in during cluster creation are known as cluster subnets. 
When you create a cluster, Amazon EKS creates up to 4 cross-account (x-account or x-ENI) ENIs in the subnets that you specify. The x-ENIs are always deployed and are used for cluster administration traffic such as log delivery, exec, and proxy. Refer to the EKS user guide for complete [VPC and subnet requirement](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/network_reqs.html#network-requirements-subnets) details.\n\nKubernetes worker nodes can run in the cluster subnets, but it is not recommended. During [cluster upgrades](https:\/\/aws.github.io\/aws-eks-best-practices\/upgrades\/#verify-available-ip-addresses), Amazon EKS provisions additional ENIs in the cluster subnets. When your cluster scales out, worker nodes and Pods may consume the available IPs in the cluster subnets. 
Hence, in order to make sure there are enough available IPs, you might consider using dedicated cluster subnets with a \/28 netmask.\n\nKubernetes worker nodes can run in either public or private subnets. Whether a subnet is public or private refers to whether traffic within the subnet is routed through an [internet gateway](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/VPC_Internet_Gateway.html). Public subnets have a route table entry that routes to the internet through an internet gateway, while private subnets do not.\n\nTraffic that originates somewhere else and arrives at a node is called *ingress*. Traffic that originates from a node and leaves the network is called *egress*. Nodes with public or Elastic IP addresses (EIPs) within a subnet configured with an internet gateway allow ingress from outside the VPC. 
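The public-versus-private distinction boils down to whether a subnet's route table sends the default route through an internet gateway. A minimal sketch; the route dictionaries loosely mirror EC2 `DescribeRouteTables` fields, and the gateway IDs are made up for illustration:

```python
def is_public_subnet(routes: list[dict]) -> bool:
    """A subnet is public when its route table sends the default route
    (0.0.0.0/0) to an internet gateway (gateway IDs prefixed 'igw-')."""
    return any(
        r.get("DestinationCidrBlock") == "0.0.0.0/0"
        and r.get("GatewayId", "").startswith("igw-")
        for r in routes
    )

public_routes = [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc123"},
]
private_routes = [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    # Default route via a NAT gateway instead: egress only, no ingress.
    {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0def456"},
]

print(is_public_subnet(public_routes))   # True
print(is_public_subnet(private_routes))  # False
```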
Private subnets usually include a [NAT gateway](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/vpc-nat-gateway.html), which only allows ingress traffic to the nodes from within the VPC while still allowing traffic from the nodes to leave the VPC (*egress*).\n\nIn the IPv6 world, every address is internet routable. The IPv6 addresses associated with the nodes and Pods are public. Private subnets are supported by configuring an [egress-only internet gateway (EIGW)](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/egress-only-internet-gateway.html) in a VPC, allowing outbound traffic while blocking all incoming traffic. 
Best practices for implementing IPv6 subnets can be found in the [VPC user guide](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/VPC_Scenario2.html).\n\n\n\n### You can configure VPC and subnets in three different ways:\n\n#### Using only public subnets\n\nNodes and ingress resources (such as load balancers) are all created in the same public subnets. Tag the public subnets with [`kubernetes.io\/role\/elb`](http:\/\/kubernetes.io\/role\/elb) to construct internet-facing load balancers. In this configuration, the cluster endpoint can be configured to be public, private, or both (public and private).\n\n#### Using private and public subnets\n\nNodes are created in private subnets, while ingress resources are instantiated in public subnets. You can enable public, private, or both (public and private) access to the cluster endpoint. 
Depending on the configuration of the cluster endpoint, node traffic will enter via the NAT gateway or the ENIs.\n\n#### Using only private subnets\n\nBoth nodes and ingress are created in private subnets. Use the [`kubernetes.io\/role\/internal-elb`](http:\/\/kubernetes.io\/role\/internal-elb) subnet tag to construct internal load balancers. Accessing your cluster's endpoint will require a VPN connection. You must activate [AWS PrivateLink](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/endpoint-service.html) for EC2 and all Amazon ECR and S3 repositories. Only the private endpoint of the cluster should be enabled. We suggest going through the [EKS private cluster requirements](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/private-clusters.html) before provisioning private clusters.\n\n\n### Communication across VPCs\n\nThere are many scenarios when you require multiple VPCs and separate EKS clusters deployed to these VPCs. 
\n\nYou can use [Amazon VPC Lattice](https:\/\/aws.amazon.com\/vpc\/lattice\/) to consistently and securely connect services across multiple VPCs and accounts (without requiring additional connectivity to be provided by services like VPC peering, AWS PrivateLink, or AWS Transit Gateway). Learn more [here](https:\/\/aws.amazon.com\/blogs\/networking-and-content-delivery\/build-secure-multi-account-multi-vpc-connectivity-for-your-applications-with-amazon-vpc-lattice\/).\n\n![Amazon VPC Lattice, traffic flow](.\/vpc-lattice.gif)\n\n\nAmazon VPC Lattice operates in the link-local address space of IPv4 and IPv6, providing connectivity between services that may have overlapping IPv4 addresses. For operational efficiency, we recommend deploying your EKS clusters and nodes to IP ranges that do not overlap. In case your infrastructure includes VPCs with overlapping IP ranges, you need to architect your network accordingly. 
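Whether two VPC ranges overlap is easy to check ahead of time with Python's standard `ipaddress` module (a quick sketch with arbitrary example CIDRs):

```python
import ipaddress

vpc_a = ipaddress.ip_network("10.0.0.0/16")
vpc_b = ipaddress.ip_network("10.0.128.0/17")  # sits inside vpc_a
vpc_c = ipaddress.ip_network("172.16.0.0/16")  # disjoint RFC 1918 range

# Overlapping ranges cannot be connected by plain routing; this is the
# case VPC Lattice, private NAT gateways, or custom networking address.
print(vpc_a.overlaps(vpc_b))  # True
# Disjoint ranges can be routed via peering or a transit gateway.
print(vpc_a.overlaps(vpc_c))  # False
```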
We suggest using [private NAT gateways](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/vpc-nat-gateway.html#nat-gateway-basics), or the VPC CNI in [custom networking](..\/custom-networking\/index.md) mode in conjunction with a [transit gateway](https:\/\/docs.aws.amazon.com\/whitepapers\/latest\/aws-vpc-connectivity-options\/aws-transit-gateway.html), to integrate workloads on EKS in order to solve overlapping CIDR challenges while preserving routable RFC1918 IP addresses.\n\n![Private Nat Gateway with Custom Networking, traffic flow](.\/private-nat-gw.gif)\n\nConsider utilizing [AWS PrivateLink](https:\/\/docs.aws.amazon.com\/vpc\/latest\/privatelink\/privatelink-share-your-services.html), also known as an endpoint service, if you are a service provider and want to share your Kubernetes service and ingress (ALB or NLB) with customer VPCs in separate accounts.\n\n### Sharing a VPC across multiple accounts\n\nMany enterprises adopt shared Amazon VPCs as a means to streamline network administration, reduce costs, and improve security across multiple AWS accounts in an AWS Organization. 
They utilize AWS Resource Access Manager (RAM) to securely share supported [AWS resources](https:\/\/docs.aws.amazon.com\/ram\/latest\/userguide\/shareable.html) with individual AWS accounts, organizational units (OUs), or an entire AWS Organization.\n\nUsing AWS RAM, you can deploy Amazon EKS clusters, managed node groups, and other supporting AWS resources (such as load balancers, security groups, and endpoints) into shared VPC subnets from another AWS account. The figure below depicts an example high-level architecture. This allows a central networking team to control networking constructs such as VPCs and subnets, while application or platform teams deploy Amazon EKS clusters in their respective AWS accounts. 
A complete walkthrough of this scenario is available in this [github repository](https:\/\/github.com\/aws-samples\/eks-shared-subnets).\n\n![Deploying Amazon EKS in VPC Shared Subnets across AWS Accounts.](.\/eks-shared-subnets.png)\n\n#### Considerations when using shared subnets\n\n* Amazon EKS clusters and worker nodes can both be created within shared subnets that are in the same VPC. Amazon EKS does not support creating clusters across multiple VPCs. \n\n* Amazon EKS uses AWS VPC security groups (SGs) to control the traffic between the Kubernetes control plane and the cluster's worker nodes. Security groups are also used to control the traffic between worker nodes, other VPC resources, and external IP addresses. You must create these security groups in the application\/participant account. Ensure that the security groups you intend to use for your Pods are also located in the participant account. 
You can configure the inbound and outbound rules within your security groups to permit the required traffic to and from security groups located in the central VPC account.\n\n* Create the IAM roles and associated policies within the participant account where your Amazon EKS cluster resides. These IAM roles and policies are essential for granting the necessary permissions to the Kubernetes clusters managed by Amazon EKS, as well as to the nodes and Pods running on Fargate. The permissions enable Amazon EKS to make calls to other AWS services on your behalf.\n\n* You can follow the approaches below to allow cross-account access to AWS resources such as Amazon S3 buckets and DynamoDB tables from Kubernetes Pods:\n    * **Resource-based policy approach**: If the AWS service supports resource policies, you can add an appropriate resource-based policy to allow cross-account access for the IAM roles assigned to the Kubernetes Pods. 
In this scenario, the OIDC provider, IAM roles, and permission policies exist in the application account. To find AWS services that support resource-based policies, refer to [AWS services that work with IAM](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/reference_aws-services-that-work-with-iam.html) and look for the services that have \"Yes\" in the resource-based column.\n\n    * **OIDC provider approach**: IAM resources such as the OIDC provider, IAM roles, and permission and trust policies are created in the other participant AWS account where the resources reside. These roles are assigned to the Kubernetes Pods in the application account so that they can access cross-account resources. 
Refer to the [cross account IAM roles for Kubernetes service accounts](https:\/\/aws.amazon.com\/blogs\/containers\/cross-account-iam-roles-for-kubernetes-service-accounts\/) blog for a complete walkthrough of this approach.\n\n* You can deploy Amazon Elastic Load Balancer (ELB) resources (ALB or NLB) to route traffic to Kubernetes Pods either in the application account or in the central networking account. Refer to the [Expose Amazon EKS Pods Through Cross-Account Load Balancer](https:\/\/aws.amazon.com\/blogs\/containers\/expose-amazon-eks-pods-through-cross-account-load-balancer\/) walkthrough for detailed instructions on deploying ELB resources in the central networking account. This option offers enhanced flexibility, as it grants the central networking account full control over the security configuration of the load balancer resources.\n\n* When using the `custom networking feature` of the Amazon VPC CNI, you need to use the Availability Zone (AZ) ID mappings listed in the central networking account to create each `ENIConfig`. 
This is necessary because physical AZs are randomly mapped to the AZ names in each AWS account.

### Security Groups

[*Security groups*](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) control the traffic allowed to reach and leave the resources they are associated with. Amazon EKS uses security groups to manage the communication between the [control plane and nodes](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html). When you create a cluster, Amazon EKS creates a security group named `eks-cluster-sg-my-cluster-uniqueID`. EKS associates this security group with the managed ENIs and the nodes. The default rules allow all traffic to flow freely between the cluster and nodes, and allow all outbound traffic to any destination.

When you create a cluster, you can specify your own security groups. If you do, see the [recommended security groups](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html).
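If you manage clusters with eksctl, an additional control-plane security group can be supplied at creation time. A minimal sketch, assuming a pre-existing security group (the cluster name, region, and ID are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster      # placeholder cluster name
  region: us-west-2
vpc:
  # Additional security group attached to the control plane ENIs
  securityGroup: sg-0def5678example
```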
## Recommendations

### Consider Multi-AZ Deployment

AWS Regions provide multiple physically separated and isolated Availability Zones (AZs), which are connected through low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications that automatically fail over between zones without interruption. Amazon EKS strongly recommends deploying EKS clusters to multiple Availability Zones. Consider specifying subnets in at least two Availability Zones when you create your cluster.

The Kubelet running on a node automatically adds labels such as `topology.kubernetes.io/region=us-west-2` and `topology.kubernetes.io/zone=us-west-2d` to the node object. We recommend using node labels in conjunction with [Pod topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) to control how pods are spread across zones.
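For example, a Deployment could spread its replicas across zones with a constraint like the following (the names and image are illustrative placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                                # keep zone imbalance to at most one pod
          topologyKey: topology.kubernetes.io/zone  # spread across the AZ label set by the Kubelet
          whenUnsatisfiable: ScheduleAnyway         # prefer, but do not block, scheduling
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: public.ecr.aws/nginx/nginx:stable  # placeholder image
```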
These hints enable the Kubernetes [scheduler](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/) to place pods for better expected availability, reducing the risk that a correlated failure affects your whole workload. For examples of node selectors and AZ spread constraints, see [Assigning Pods to Nodes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector).

You can define the subnets or Availability Zones when you create the nodes. If no subnets are configured, the nodes are placed in the cluster subnets. EKS support for managed node groups automatically spreads nodes across multiple Availability Zones based on available capacity. If a workload defines topology spread constraints, [Karpenter](https://karpenter.sh/) honors the AZ spread placement by scaling nodes into the specified AZs.

AWS Elastic Load Balancers are managed by the AWS Load Balancer Controller for the Kubernetes cluster.
It provisions Application Load Balancers (ALBs) for Kubernetes ingress resources and Network Load Balancers (NLBs) for Kubernetes services of type LoadBalancer. The load balancer controller uses [tags](https://aws.amazon.com/premiumsupport/knowledge-center/eks-vpc-subnet-discovery/) to discover the subnets. The controller requires a minimum of two Availability Zones (AZs) to provision ingress resources successfully. To benefit from the safety and resilience of geographic redundancy, we recommend setting up subnets in at least two AZs.

### Deploy Nodes to Private Subnets

A VPC including both private and public subnets is the ideal method for deploying Kubernetes workloads on EKS. Consider setting up a minimum of two public subnets and two private subnets in two different Availability Zones.
The route table of a public subnet contains a route to an internet gateway. Pods can interact with the internet via a NAT gateway. In IPv6 environments, private subnets are supported by [egress-only internet gateways](https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html) (EIGW).

Instantiating nodes in private subnets maximizes control over the traffic to the nodes and is suitable for the vast majority of Kubernetes applications. Ingress resources (such as load balancers) are instantiated in public subnets and route traffic to the pods operating in private subnets.

If you demand strict security and network isolation, consider private-only mode. In this configuration, three private subnets are deployed in distinct Availability Zones in a VPC within an AWS Region.
The resources deployed to the subnets cannot access the internet, nor can the internet access the resources in the subnets. For your Kubernetes applications to access other AWS services, you must configure PrivateLink interface and/or gateway endpoints. You can set up internal load balancers to redirect traffic to pods using the AWS Load Balancer Controller. For the controller to provision load balancers, the private subnets must be tagged with `kubernetes.io/role/internal-elb: 1`. For nodes to register with the cluster, the cluster endpoint must be set to private mode.
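As an illustration, an internal NLB can be requested from the AWS Load Balancer Controller with Service annotations like the following (the service name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-svc      # placeholder name
  annotations:
    # Ask the AWS Load Balancer Controller (not the legacy in-tree controller) to manage this NLB;
    # an internal-facing scheme places it in subnets tagged kubernetes.io/role/internal-elb: 1
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
spec:
  type: LoadBalancer
  selector:
    app: web              # placeholder selector
  ports:
    - port: 80
      targetPort: 8080
```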
Please visit the [private cluster guide](https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html) for the complete requirements and considerations.

### Consider Public and Private Mode for the Cluster Endpoint

Amazon EKS offers public-only, public-and-private, and private-only cluster endpoint modes. The default mode is public-only; however, we recommend configuring the cluster endpoint in public-and-private mode. This option allows Kubernetes API calls within your cluster's VPC (such as node-to-control-plane communication) to use the private VPC endpoint and keeps that traffic within your cluster's VPC. Meanwhile, the cluster API server can still be reached from the internet. However, we strongly recommend limiting the CIDR blocks that can use the public endpoint.
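With eksctl, for instance, public-and-private endpoint access with a restricted public CIDR range could be sketched as follows (the cluster name and CIDR block are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder cluster name
  region: us-west-2
vpc:
  clusterEndpoints:
    publicAccess: true    # API server reachable from the internet...
    privateAccess: true   # ...while in-VPC traffic uses the private endpoint
  publicAccessCIDRs:
    - "203.0.113.0/24"    # placeholder: restrict who may use the public endpoint
```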
[Learn how to configure public and private endpoint access, including limiting CIDR blocks.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#modify-endpoint-access)

When you demand security and network isolation, we recommend using a private-only endpoint. We recommend using one of the options listed in the [EKS user guide](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access) to connect to the API server privately.


### Configure Security Groups Carefully

Amazon EKS supports using custom security groups. Any custom security groups must allow communication between the nodes and the Kubernetes control plane. If your organization doesn't allow open communication, check the [port requirements](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) and configure the rules manually.

EKS applies the custom security groups that you provide during cluster creation to the managed interfaces (X-ENIs).
However, the custom security groups are not immediately associated with the nodes. When creating node groups, we recommend manually [associating the custom security groups](https://eksctl.io/usage/schema/#nodeGroups-securityGroups). To enable Karpenter node template discovery of the custom security groups during node autoscaling, consider enabling [securityGroupSelectorTerms](https://karpenter.sh/docs/concepts/nodeclasses/#specsecuritygroupselectorterms).

We strongly recommend creating a security group that allows all inter-node communication traffic. During the bootstrap process, nodes require outbound internet connectivity to access the cluster endpoint. Evaluate external access requirements, such as on-premises connectivity and container registry access, and set the rules appropriately.
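A sketch of attaching a custom security group to an eksctl node group (the cluster name, node group name, and ID are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster      # placeholder cluster name
  region: us-west-2
nodeGroups:
  - name: ng-1          # placeholder node group name
    instanceType: m5.large
    securityGroups:
      # Attach an existing custom security group in addition to the defaults
      attachIDs:
        - sg-0def5678example
```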
Before pushing your changes to production, we recommend carefully checking network connectivity in your development environment.

### Deploy NAT Gateways in Each Availability Zone

If you deploy nodes in private subnets (IPv4 and IPv6), consider creating a NAT gateway in each Availability Zone (AZ) to ensure a zone-independent architecture and reduce cross-AZ expenditures. Each NAT gateway in an AZ is implemented with redundancy.

### Use Cloud9 to Access Private Clusters

AWS Cloud9 is a web-based IDE that can run securely in private subnets with no ingress access, using AWS Systems Manager. Egress can also be disabled on the Cloud9 instance.
[Learn more about using Cloud9 to access private clusters and subnets.](https://aws.amazon.com/blogs/security/isolating-network-access-to-your-aws-cloud9-environments/)

![illustration of AWS Cloud9 console connecting to no-ingress EC2 instance.](./image-2.jpg)
{"questions":"eks search iframe width 560 height 315 src https www youtube com embed FIBc8GkjFU0 title YouTube video player frameborder 0 allow accelerometer autoplay clipboard write encrypted media gyroscope picture in picture web share allowfullscreen iframe exclude true Cluster Autoscaler","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# \ucfe0\ubc84\ub124\ud2f0\uc2a4 Cluster Autoscaler\n\n<iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/FIBc8GkjFU0\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen><\/iframe>\n\n## \uac1c\uc694\n\n[\ucfe0\ubc84\ub124\ud2f0\uc2a4 Cluster Autoscaler](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/cluster-autoscaler)\ub294 [SIG \uc624\ud1a0\uc2a4\ucf00\uc77c\ub9c1](https:\/\/github.com\/kubernetes\/community\/tree\/master\/sig-autoscaling)\uc5d0\uc11c \uc720\uc9c0 \uad00\ub9ac\ud558\ub294 \uc778\uae30 \uc788\ub294 \ud074\ub7ec\uc2a4\ud130 \uc624\ud1a0\uc2a4\ucf00\uc77c\ub9c1 \uc194\ub8e8\uc158\uc785\ub2c8\ub2e4. \uc774\ub294 \ud074\ub7ec\uc2a4\ud130\uc5d0 \ub9ac\uc18c\uc2a4\ub97c \ub0ad\ube44\ud558\uc9c0 \uc54a\uace0 \ud30c\ub4dc\ub97c \uc2a4\ucf00\uc904\ub9c1\ud560 \uc218 \uc788\ub294 \ucda9\ubd84\ud55c \ub178\ub4dc\uac00 \uc788\ub294\uc9c0 \ud655\uc778\ud558\ub294 \uc5ed\ud560\uc744 \ud569\ub2c8\ub2e4. Cluster Autoscaler\ub294 \uc2a4\ucf00\uc904\ub9c1\uc5d0 \uc2e4\ud328\ud55c \ud30c\ub4dc\uc640 \ud65c\uc6a9\ub3c4\uac00 \ub0ae\uc740 \ub178\ub4dc\ub97c \uac10\uc2dc\ud569\ub2c8\ub2e4. \uadf8\ub7f0 \ub2e4\uc74c \ud074\ub7ec\uc2a4\ud130\uc5d0 \ubcc0\uacbd \uc0ac\ud56d\uc744 \uc801\uc6a9\ud558\uae30 \uc804\uc5d0 \ub178\ub4dc \ucd94\uac00 \ub610\ub294 \uc81c\uac70\ub97c \uc2dc\ubbac\ub808\uc774\uc158\ud569\ub2c8\ub2e4. 
The AWS Cloud Provider implementation within Cluster Autoscaler controls the `.DesiredReplicas` field of your EC2 Auto Scaling Groups.\n\n![](.\/architecture.png)\n\nThis guide will provide a mental model for configuring the Cluster Autoscaler and choosing the best set of tradeoffs to meet your organization's requirements. While there is no single best configuration, there is a set of configuration options that enable you to trade off performance, scalability, cost, and availability. Additionally, this guide provides tips and best practices for optimizing your configuration for AWS.\n\n### Glossary\n\nThe following terminology will be used frequently throughout this document. These terms can have broad meaning, but are limited to the definitions below for the purposes of this document.\n\n**Scalability** refers to how well the Cluster Autoscaler performs as your Kubernetes cluster increases in number of pods and nodes. As scalability limits are reached, the Cluster Autoscaler's performance and functionality degrades. 
Once the Cluster Autoscaler exceeds its scalability limits, it can no longer add or remove nodes in your cluster.\n\n**Performance** refers to how quickly the Cluster Autoscaler is able to make and execute scaling decisions. A perfectly performing Cluster Autoscaler would instantly make a decision and trigger a scaling action in response to stimuli, such as a pod becoming unschedulable.\n\n**Availability** means that pods can be scheduled quickly and without disruption. This includes when newly created pods need to be scheduled and when a scaled-down node terminates any remaining pods scheduled to it.\n\n**Cost** is determined by the decisions behind scale-out and scale-in events. Resources are wasted if an existing node is underutilized or a new node is added that is too large for incoming pods. 
Depending on the use case, there can be costs associated with prematurely terminating pods due to an aggressive scale-down decision.\n\n**Node Groups** are an abstract Kubernetes concept for a group of nodes within a cluster. They are not a true Kubernetes resource, but exist as an abstraction in the Cluster Autoscaler, the Cluster API, and other components. Nodes within a node group share properties like labels and taints, but may consist of multiple Availability Zones or instance types.\n\n**EC2 Auto Scaling Groups** can be used as an implementation of node groups on EC2. EC2 Auto Scaling Groups are configured to launch instances that automatically join their Kubernetes cluster and apply labels and taints to their corresponding node resource in the Kubernetes API.\n\n**EC2 Managed Node Groups** are another implementation of node groups on EC2. 
They remove the complexity of manually configuring EC2 Auto Scaling Groups and provide additional management features such as node version upgrades and graceful node termination.\n\n### Operating the Cluster Autoscaler\n\nThe Cluster Autoscaler is typically installed in your cluster as a [Deployment](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/cluster-autoscaler\/cloudprovider\/aws\/examples). It uses [leader election](https:\/\/en.wikipedia.org\/wiki\/Leader_election) to ensure high availability, but work is done by a single replica at a time. It is not horizontally scalable. For basic setups, the defaults should work out of the box using the provided [installation instructions](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/cluster-autoscaler.html), but there are a few things to keep in mind.\n\nEnsure that:\n\n* The version of the Cluster Autoscaler matches the version of the cluster. 
Cross-version compatibility beyond the officially compatible version for each Kubernetes release is [not tested or supported](https:\/\/github.com\/kubernetes\/autoscaler\/blob\/master\/cluster-autoscaler\/README.md#releases).\n* [Auto Discovery](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/cluster-autoscaler\/cloudprovider\/aws#auto-discovery-setup) is enabled, unless you have specific advanced use cases that prevent use of this mode.\n\n### Employing least privileged access to the IAM role\n\nWhen [Auto Discovery](https:\/\/github.com\/kubernetes\/autoscaler\/blob\/master\/cluster-autoscaler\/cloudprovider\/aws\/README.md#Auto-discovery-setup) is used, we strongly recommend that you employ least privileged access by limiting the `autoscaling:SetDesiredCapacity` and `autoscaling:TerminateInstanceInAutoScalingGroup` actions to the Auto Scaling groups that are scoped to the current cluster.\n\nThis will prevent a Cluster Autoscaler running in one cluster from modifying node groups in a different cluster, even if the `--node-group-auto-discovery` argument was not scoped down to the node groups of the cluster using tags (for example `k8s.io\/cluster-autoscaler\/<cluster-name>`).\n\n```json\n{\n    \"Version\": 
\"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"autoscaling:SetDesiredCapacity\",\n                \"autoscaling:TerminateInstanceInAutoScalingGroup\"\n            ],\n            \"Resource\": \"*\",\n            \"Condition\": {\n                \"StringEquals\": {\n                    \"aws:ResourceTag\/k8s.io\/cluster-autoscaler\/enabled\": \"true\",\n                    \"aws:ResourceTag\/k8s.io\/cluster-autoscaler\/<my-cluster>\": \"owned\"\n                }\n            }\n        },\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"autoscaling:DescribeAutoScalingGroups\",\n                \"autoscaling:DescribeAutoScalingInstances\",\n                \"autoscaling:DescribeLaunchConfigurations\",\n                \"autoscaling:DescribeScalingActivities\",\n                \"autoscaling:DescribeTags\",\n                \"ec2:DescribeImages\",\n                \"ec2:DescribeInstanceTypes\",\n                \"ec2:DescribeLaunchTemplateVersions\",\n                \"ec2:GetInstanceTypesFromInstanceRequirements\",\n                \"eks:DescribeNodegroup\"\n            ],\n            \"Resource\": \"*\"\n        }\n    ]\n}\n```\n\n### \ub178\ub4dc \uadf8\ub8f9 \uad6c\uc131\n\n\ud6a8\uacfc\uc801\uc778 \uc624\ud1a0\uc2a4\ucf00\uc77c\ub9c1\uc740 \ud074\ub7ec\uc2a4\ud130\uc758 \ub178\ub4dc \uadf8\ub8f9 \uc138\ud2b8\ub97c \uc62c\ubc14\ub974\uac8c \uad6c\uc131\ud558\ub294 \uac83\uc5d0\uc11c \uc2dc\uc791\ub429\ub2c8\ub2e4. \uc6cc\ud06c\ub85c\ub4dc \uc804\ubc18\uc5d0\uc11c \uac00\uc6a9\uc131\uc744 \uadf9\ub300\ud654\ud558\uace0 \ube44\uc6a9\uc744 \uc808\uac10\ud558\ub824\uba74 \uc62c\ubc14\ub978 \ub178\ub4dc \uadf8\ub8f9 \uc138\ud2b8\ub97c \uc120\ud0dd\ud558\ub294 \uac83\uc774 \uc911\uc694\ud569\ub2c8\ub2e4. 
AWS implements node groups using EC2 Auto Scaling Groups, which are flexible to a large number of use cases. However, the Cluster Autoscaler makes some assumptions about your node groups. Keeping your EC2 Auto Scaling Group configurations consistent with these assumptions will minimize undesired behavior.\n\nEnsure that:\n\n* Each node in a node group has identical scheduling properties, such as labels, taints, and resources.\n * For MixedInstancePolicies, the instance types must be of the same shape for CPU, memory, and GPU.\n * The first instance type specified in the policy will be used to simulate scheduling.\n * If your policy has additional instance types with more resources, resources may be wasted after scale out.\n * If your policy has additional instance types with fewer resources, pods may fail to schedule on those instances.\n* Node groups with many nodes are preferred over many node groups with fewer nodes. This has the biggest impact on scalability.\n* Wherever possible, prefer EC2 features when both systems provide support (e.g. Regions and MixedInstancePolicies).\n\n*Note: We recommend using [EKS Managed Node Groups](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/managed-node-groups.html). Managed node groups come with powerful management features, including Cluster Autoscaler features such as automatic EC2 Auto Scaling Group discovery and graceful node termination.*\n\n## Optimizing for Performance and Scalability\n\nUnderstanding the runtime complexity of the autoscaling algorithm will help you tune the Cluster Autoscaler to continue operating smoothly in large clusters with greater than [1,000 nodes](https:\/\/github.com\/kubernetes\/autoscaler\/blob\/master\/cluster-autoscaler\/proposals\/scalability_tests.md).\n\nThe primary knobs for tuning the Cluster Autoscaler's scalability are the resources provided to the process, the scan interval of the algorithm, and the number of node groups in the cluster. 
There are other factors involved in the true runtime complexity of this algorithm, such as scheduling plugin complexity and the number of pods. These are considered unconfigurable parameters, as they are natural to the cluster's workload and cannot easily be tuned.\n\nThe Cluster Autoscaler loads the entire cluster's state into memory, including pods, nodes, and node groups. On each scan interval, the algorithm identifies unschedulable pods and simulates scheduling for each node group. Tuning these factors comes with different tradeoffs, which should be carefully considered for your use case.\n\n### Vertically Autoscaling the Cluster Autoscaler\n\nThe simplest way to scale the Cluster Autoscaler to larger clusters is to increase the resource requests for its deployment. 
Both memory and CPU should be increased for large clusters, though this varies significantly with cluster size. The autoscaling algorithm stores all pods and nodes in memory, which can result in a memory footprint larger than a gigabyte in some cases. Increasing resources is typically done manually. If you find that constant resource tuning is creating an operational burden, consider using the [Addon Resizer](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/addon-resizer) or [Vertical Pod Autoscaler](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/vertical-pod-autoscaler).\n\n### Reducing the number of Node Groups\n\nMinimizing the number of node groups is one way to ensure that the Cluster Autoscaler will continue to perform well on large clusters. This may be challenging for organizations that structure their node groups per team or per application. While this is fully supported by the Kubernetes API, it is considered a Cluster Autoscaler anti-pattern with repercussions for scalability. 
There are many reasons to use multiple node groups (e.g. Spot or GPUs), but in many cases there are alternative designs that achieve the same effect while using a small number of groups.\n\nEnsure that:\n\n* Pod isolation is done using namespaces rather than node groups.\n * This may not be possible in low-trust multi-tenant clusters.\n * Pod resource requests and limits are set correctly to prevent resource contention.\n * Larger instance types result in more optimal bin packing and reduced system pod overhead.\n* NodeTaints or NodeSelectors are used to schedule pods as the exception, not the rule.\n* Regional resources are defined as a single EC2 Auto Scaling Group spanning multiple Availability Zones.\n\n### Reducing the Scan Interval\n\nA low scan interval (e.g. 10 seconds) will ensure that the Cluster Autoscaler responds as quickly as possible when pods become unschedulable. However, each scan results in many API calls to the Kubernetes API and the EC2 Auto Scaling Group or EKS Managed Node Group APIs. These API calls can result in rate limiting or even service unavailability for your Kubernetes control plane.\n\nThe default scan interval is 10 seconds, but on AWS, launching a new node takes significantly longer, due to steps such as launching a new instance. This means it's possible to increase the interval without significantly increasing overall scale-up time. For example, if it takes 2 minutes to launch a node, changing the interval to 1 minute will result in a tradeoff of 6x fewer API calls for 38% slower scale-ups.\n\n### Sharding Across Node Groups\n\nThe Cluster Autoscaler can be configured to operate on a specific set of node groups. 
Using this functionality, it's possible to deploy multiple instances of the Cluster Autoscaler, each configured to operate on a different set of node groups. This strategy enables you to use an arbitrarily large number of node groups, trading cost for scalability. We only recommend using this approach as a last resort for improving performance.\n\nThe Cluster Autoscaler was not originally designed for this configuration, so there are some side effects. Because the shards (independently operated sets of node groups) do not communicate with each other, multiple autoscalers may attempt to schedule the same unschedulable pod at the same time. This can result in unnecessary scale-out of multiple node groups. 
These extra nodes will scale back in after the `scale-down-delay`.\n\n```\nmetadata:\n  name: cluster-autoscaler\n  namespace: cluster-autoscaler-1\n\n...\n\n--nodes=1:10:k8s-worker-asg-1\n--nodes=1:10:k8s-worker-asg-2\n\n---\n\nmetadata:\n  name: cluster-autoscaler\n  namespace: cluster-autoscaler-2\n\n...\n\n--nodes=1:10:k8s-worker-asg-3\n--nodes=1:10:k8s-worker-asg-4\n```\n\nEnsure that:\n\n* Each shard is configured to point to a unique set of EC2 Auto Scaling Groups.\n* Each shard is deployed to a separate namespace to avoid leader election conflicts.\n\n## Optimizing for Cost and Availability\n\n### Spot Instances\n\nYou can use Spot Instances in your node groups and save up to 90% off the on-demand price. The tradeoff is that Spot Instances can be interrupted at any time when EC2 needs the capacity back. Insufficient Capacity Errors will occur when your EC2 Auto Scaling Group cannot scale up due to a lack of available capacity. 
Maximizing diversity by selecting many instance families increases your chance of achieving your desired scale by tapping into many Spot capacity pools, and decreases the impact of Spot Instance interruptions on cluster availability. Mixed Instance Policies with Spot Instances are a great way to increase diversity without increasing the number of node groups. Keep in mind that if you need guaranteed resources, you should use On-Demand Instances instead of Spot Instances.\n\nWhen configuring Mixed Instance Policies, it's critical that all instance types have similar resource capacity. The autoscaler's scheduling simulator uses the first instance type in the Mixed Instance Policy. If subsequent instance types are larger, resources may be wasted after a scale-up. If they are smaller, your pods may fail to schedule on the new instances due to insufficient capacity. 
For example, M4, M5, M5a, and M5n instances all have similar amounts of CPU and memory and are great candidates for a Mixed Instance Policy. The [EC2 Instance Selector](https:\/\/github.com\/aws\/amazon-ec2-instance-selector) tool can help you identify similar instance types.\n\n![](.\/spot_mix_instance_policy.jpg)\n\nIt's recommended to isolate On-Demand and Spot capacity into separate EC2 Auto Scaling Groups. This is preferred over using a [base capacity strategy](https:\/\/docs.aws.amazon.com\/autoscaling\/ec2\/userguide\/asg-purchase-options.html#asg-instances-distribution) because the scheduling properties are fundamentally different. Since Spot Instances can be interrupted at any time (when EC2 needs the capacity back), users will often taint their preemptible nodes to make the preemption behavior explicit, which requires an explicit pod toleration. 
These taints result in different scheduling properties for the nodes, so they should be separated into multiple EC2 Auto Scaling Groups.\n\nThe Cluster Autoscaler has a concept of [Expanders](https:\/\/github.com\/kubernetes\/autoscaler\/blob\/master\/cluster-autoscaler\/FAQ.md#what-are-expanders), which provide different strategies for selecting which node group to scale. The `--expander=least-waste` strategy is a good general-purpose default, and if you're going to use multiple node groups for Spot Instance diversification (as described in the image above), it can further cost-optimize the node groups by scaling the group that will be best utilized after the scaling activity.\n\n### Prioritizing a node group \/ ASG\n\nYou can also configure priority-based autoscaling by using the Priority expander. `--expander=priority` enables your cluster to prioritize node groups \/ ASGs, and if a group is unable to scale for any reason, the autoscaler will choose the next node group in the prioritized list. 
This is useful, for example, when you want to use P3 instance types because their GPUs deliver optimal performance for your workload, but can fall back to P2 instance types as a second option.

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |-
    10:
      - .*p2-node-group.*
    50:
      - .*p3-node-group.*
```

The Cluster Autoscaler will first try to scale up an EC2 Auto Scaling group matching the name *p3-node-group*. If that does not succeed within `--max-node-provision-time`, it will then attempt to scale an EC2 Auto Scaling group matching the name *p2-node-group*.
This value defaults to 15 minutes and can be reduced for more responsive node group selection, though too low a value can trigger unnecessary scale-outs.

### Overprovisioning

The Cluster Autoscaler minimizes cost by adding nodes to the cluster only when they are needed and removing them when they are unused.
This comes at a significant cost to deployment latency, because many pods must wait for a node to scale up before they can be scheduled. Nodes can take several minutes to become available, which can increase pod scheduling latency by an order of magnitude.

This scheduling latency can be mitigated, at increased cost, using [overprovisioning](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler). Overprovisioning is implemented with temporary pods of low (usually negative) priority that occupy space in the cluster. When newly created pods are unschedulable and have a higher priority, the temporary pods are preempted to make room. The temporary pods then become unschedulable, which triggers the Cluster Autoscaler to scale out new nodes.

Overprovisioning has other, less obvious benefits as well.
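The temporary-pod mechanism described above is commonly set up with a negative-priority `PriorityClass` and a placeholder Deployment of pause containers. A minimal sketch, where the names, replica count, and resource sizes are illustrative and should be tuned to the headroom you want:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning        # hypothetical name
value: -1                       # negative so any normal pod can preempt these
globalDefault: false
description: "Priority class for placeholder (headroom) pods."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning        # hypothetical name
  namespace: kube-system
spec:
  replicas: 1                   # how many "spare node equivalents" to hold
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: pause             # does nothing; only reserves capacity
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "1"
            memory: 1Gi
```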
Without overprovisioning, in highly utilized clusters pods may make less optimal scheduling decisions because of `preferredDuringSchedulingIgnoredDuringExecution` rules in pod or node affinity. A common use case for these rules is to spread the pods of a highly available application across availability zones using AntiAffinity. Overprovisioning can significantly increase the chance that a node in the desired zone is available.

How much capacity to overprovision is a business decision that should be made carefully within your organization. At its core, it is a tradeoff between performance and cost. One way to make this decision is to determine how often you scale out on average and divide that by the time it takes to provision a new node.
For example, if on average you need a new node every 30 seconds and EC2 takes 30 seconds to provision one, overprovisioning a single node ensures there is always an extra node available, reducing scheduling latency by 30 seconds at the cost of one additional EC2 instance. To improve zonal scheduling decisions, overprovision the same number of nodes as the number of availability zones in your EC2 Auto Scaling group, so that the scheduler can pick the best zone for incoming pods.

### Prevent Scale Down Eviction

Some workloads are expensive to evict. Big data analysis, machine learning jobs, and test runners will eventually complete, but must be restarted if interrupted. The Cluster Autoscaler will attempt to scale down any node below the scale-down-utilization-threshold, which can interrupt any pods remaining on the node.
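Such pods can be protected with an annotation that the Cluster Autoscaler honors when choosing nodes to drain. A minimal sketch (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: long-running-batch-job    # hypothetical name
  annotations:
    # Cluster Autoscaler will not evict this pod, so its node
    # is not scaled down while the pod is running.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
  - name: worker
    image: public.ecr.aws/docker/library/busybox:latest
    command: ["sleep", "3600"]
```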
This can be prevented by ensuring that pods that are expensive to evict are protected by a label the Cluster Autoscaler recognizes.

Ensure that:

* Pods that are expensive to evict carry the annotation `cluster-autoscaler.kubernetes.io/safe-to-evict=false`.

## Advanced Use Cases

### EBS Volumes

Persistent storage is critical for building stateful applications such as databases or distributed caches. [EBS volumes](https://aws.amazon.com/premiumsupport/knowledge-center/eks-persistent-storage/) enable this use case on Kubernetes, but are limited to a specific zone. Sharding such applications across multiple AZs, with a separate EBS volume per AZ, can make them highly available.
The Cluster Autoscaler can then balance the scaling of the EC2 Auto Scaling groups.

Ensure that:

* Node group balancing is enabled by setting `balance-similar-node-groups=true`.
* Node groups are configured with identical settings except for different availability zones and EBS volumes.

### Co-Scheduling

Machine learning distributed training jobs benefit significantly from the minimized latency of same-zone node configurations. These workloads deploy multiple pods into a specific zone. This can be achieved by setting pod affinity for all co-scheduled pods, or node affinity using `topologyKey: failure-domain.beta.kubernetes.io/zone`. The Cluster Autoscaler will then scale out that specific zone to match demand.
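The zone co-location described above can be sketched with pod affinity over the zone topology key (the `job` label and pod name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: training-worker-0          # hypothetical name
  labels:
    job: distributed-training      # illustrative label shared by the group
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            job: distributed-training
        # all pods carrying this label land in the same availability zone
        topologyKey: failure-domain.beta.kubernetes.io/zone
  containers:
  - name: worker
    image: public.ecr.aws/docker/library/busybox:latest
    command: ["sleep", "3600"]
```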
You may wish to allocate multiple EC2 Auto Scaling groups, one per availability zone, to enable failover for the entire co-scheduled workload.

Ensure that:

* Node group balancing is configured by setting `balance-similar-node-groups=false`.
* [Node affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) and/or [pod preemption](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) are used when the cluster contains both regional (multi-AZ) node groups and single-AZ node groups.
 * Use [node affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) to prevent multi-AZ pods from being scheduled onto single-AZ node groups, and vice versa.
 * If pods that should be deployed to a single AZ are scheduled onto multi-AZ node groups, this results in imbalanced capacity for the multi-AZ pods.
 * If your single-AZ workloads can tolerate disruption and relocation, configure [pod preemption](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) to enable regionally scaled pods to preempt and reschedule onto a less contested zone.

### Accelerated Hardware

Some clusters take advantage of specialized hardware accelerators such as GPUs. When scaling out, it can take several minutes for the accelerator device plugin to advertise the resource to the cluster. The Cluster Autoscaler has simulated that the node will have the accelerator, but until the accelerator becomes ready and the node's available resources are updated, pending pods cannot be scheduled on the node. This can result in [repeated unnecessary scale out](https://github.com/kubernetes/kubernetes/issues/54959).

Additionally, nodes that have accelerators and high CPU or memory utilization will not be considered for scale down, even if the accelerator is unused.
This behavior can be expensive due to the relative cost of accelerators. Instead, the Cluster Autoscaler can apply special rules to consider nodes for scale down if they have unoccupied accelerators.

To ensure correct behavior in these cases, configure the kubelet on your accelerator nodes to label the node before it joins the cluster. The Cluster Autoscaler uses this label to trigger its accelerator-optimized behavior.

Ensure that:

* The kubelet for GPU nodes is configured with `--node-labels k8s.amazonaws.com/accelerator=$ACCELERATOR_TYPE`.
* Nodes with accelerators adhere to the identical scheduling properties rule noted above.

### Scaling From 0

The Cluster Autoscaler (CA) can scale node groups to and from zero, which can yield significant cost savings. The CA detects the CPU, memory, and GPU resources of an Auto Scaling group (ASG) by inspecting the instance type specified in its LaunchConfiguration or LaunchTemplate.
Some pods require additional resources such as `WindowsENI` or `PrivateIPv4Address`, or specific NodeSelectors or taints, which cannot be discovered from the LaunchConfiguration. The Cluster Autoscaler can account for these factors by discovering them from tags on the EC2 ASG. For example:

```
Key: k8s.io/cluster-autoscaler/node-template/resources/$RESOURCE_NAME
Value: 5
Key: k8s.io/cluster-autoscaler/node-template/label/$LABEL_KEY
Value: $LABEL_VALUE
Key: k8s.io/cluster-autoscaler/node-template/taint/$TAINT_KEY
Value: NoSchedule
```

*Note: when scaling to zero, your capacity is returned to EC2 and may be unavailable in the future.*

## Additional Parameters

There are many configuration options that can be used to tune the behavior and performance of the Cluster Autoscaler.
A complete list of parameters is available on [GitHub](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca).

| Parameter | Description | Default |
|-|-|-|
| scan-interval | How often the cluster is reevaluated for scale up or down | 10 seconds |
| max-empty-bulk-delete | Maximum number of empty nodes that can be deleted at the same time | 10 |
| scale-down-delay-after-add | How long after scale up before scale down evaluation resumes | 10 minutes |
| scale-down-delay-after-delete | How long after node deletion before scale down evaluation resumes; defaults to scan-interval | scan-interval |
| scale-down-delay-after-failure | How long after a scale down failure before scale down evaluation resumes | 3 minutes |
| scale-down-unneeded-time | How long a node should be unneeded before it is eligible for scale down | 10 minutes |
| scale-down-unready-time | How long an unready node should be unneeded before it is eligible for scale down | 20 minutes |
| scale-down-utilization-threshold | Node utilization level, defined as the sum of requested resources divided by capacity, below which a node can be considered for scale down | 0.5 |
| scale-down-non-empty-candidates-count | Maximum number of non-empty nodes considered in one iteration as candidates for scale down with drain. A lower value means better CA responsiveness but possibly slower scale down latency. A higher value can affect CA performance in large clusters (hundreds of nodes). Set to a non-positive value to turn this heuristic off — the CA will not limit the number of nodes it considers. | 30 |
| scale-down-candidates-pool-ratio | Ratio of nodes considered as additional non-empty candidates for scale down when some candidates from the previous iteration are no longer valid. A lower value means better CA responsiveness but possibly slower scale down latency. A higher value can affect CA performance in large clusters (hundreds of nodes). Set to 1.0 to turn this heuristic off — the CA will take all nodes as additional candidates. | 0.1 |
| scale-down-candidates-pool-min-count | Minimum number of nodes considered as additional non-empty candidates for scale down when some candidates from the previous iteration are no longer valid. The pool size of additional candidates is computed as `max(#nodes * scale-down-candidates-pool-ratio, scale-down-candidates-pool-min-count)`. | 50 |

## Additional Resources

This page contains a list of Cluster Autoscaler presentations and demos. If you'd like to add a presentation or demo here, please send a pull request.

| Presentation/Demo | Presenters |
| ------------ | ------- |
| [Autoscaling and Cost Optimization on Kubernetes: From 0 to 100](https://sched.co/Zemi) | Guy Templeton, Skyscanner & Jiaxin Shan, Amazon |
| [SIG-Autoscaling Deep Dive](https://youtu.be/odxPyW_rZNQ) | Maciek Pytel & Marcin Wielgus |

## References

* [https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md)
* [https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md)
* [https://github.com/aws/amazon-ec2-instance-selector](https://github.com/aws/amazon-ec2-instance-selector)
* [https://github.com/aws/aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler)
waste                                                                                                                                                       ASG           Priority expander                                       expander priority                    ASG                                                                              GPU                          P3                                    P2                            apiVersion  v1 kind  ConfigMap metadata    name  cluster autoscaler priority expander   namespace  kube system data    priorities         10            p2 node group       50            p3 node group        Cluster Autoscaler   p3 node group              EC2 Auto Scaling                           max node provision time               p2 node group              EC2 Auto Scaling                             15                                                                                             Cluster Autoscaler                                                                                                                                                                                                                                      https   github com kubernetes autoscaler blob master cluster autoscaler FAQ md how can i configure overprovisioning with cluster autoscaler                                                                                                     unschedulable                                                        unschedulable        Cluster Autoscaler                                                                                                     preferredDuringSchedulingIgnoredDuringExecution                                                                      AntiAffinity                                                                                                                                                                                                                                      
                                          30               EC2                   30                                                 EC2                                   30                                 EC2 Auto Scaling                                                                                                                                                                                  Cluster Autoscaler  scale down utilization threshold                                                                            Cluster Autoscaler                                                                              cluster autoscaler kubernetes io safe to evict false                                      EBS                                        stateful                            EBS     https   aws amazon com premiumsupport knowledge center eks persistent storage                                             AZ       EBS             AZ                                     Cluster Autoscaler  EC2                                                            balance similar node groups true                               EBS                                                                                                                                                                                          topologyKey  failure domain beta kubernetes io zone                                      Cluster Autoscaler                                    EC2 Auto Scaling                                                                 balance similar node groups false                                                                                            https   kubernetes io docs concepts scheduling eviction assign pod node  affinity and anti affinity            Preemption   https   kubernetes io docs concepts configuration pod priority preemption                            https   kubernetes io docs concepts scheduling eviction assign pod node  affinity and anti affinity      
                                                                                                                                                                                       Pod Preemption  https   kubernetes io docs concepts configuration pod priority preemption                                                                                      GPU                                                                                 Cluster Autoscaler                                                                                                                   https   github com kubernetes kubernetes issues 54959                           CPU                                                                                            Cluster Autoscaler                                                                                                kubelet                        Cluster Autoscaler                                               GPU     Kubelet     node labels k8s amazonaws com accelerator  ACCELERATOR TYPE                                                                    0         Cluster Autoscaler CA          0      0                               CA             ASG   LaunchConfiguration    LaunchTemplate                    ASG  CPU        GPU                    LaunchConfiguration             WindowsENI    PrivateIPv4Address   NodeSelector                           Cluster Autoscaler  EC2 ASG                                                           Key  k8s io cluster autoscaler node template resources  RESOURCE NAME Value  5 Key  k8s io cluster autoscaler node template label  LABEL KEY Value   LABEL VALUE Key  k8s io cluster autoscaler node template taint  TAINT KEY Value  NoSchedule           0              EC2                                             Cluster Autoscaler                                                        Github  https   github com kubernetes autoscaler blob master cluster autoscaler FAQ md what are the parameters to 
ca                                                  Default     scan interval                              10       max empty bulk delete                             10     scale down delay after add                                10       scale down delay after delete                                    scan interval   scan interval     scale down delay after failure                                    3       scale down unneeded time                                       10       scale down unready time                                             20      scale down utilization threshold                                                                     0 5     scale down non empty candidates count                                                                 CA                                                                    CA                                                  CA                          30     scale down candidates pool ratio                                                                                CA                                                                   CA                               1 0          CA                         0 1     scale down candidates pool min count                                                                                                        scale down candidates pool ratio  scale down candidates pool min count               50                       Cluster Autoscaler                                                                                                                   Autoscaling and Cost Optimization on Kubernetes  From 0 to 100  https   sched co Zemi    Guy Templeton  Skyscanner   Jiaxin Shan  Amazon      SIG Autoscaling Deep Dive  https   youtu be odxPyW rZNQ    Maciek Pytel   Marcin Wielgus                 https   github com kubernetes autoscaler blob master cluster autoscaler FAQ md  https   github com kubernetes autoscaler blob master cluster autoscaler FAQ md     https   github 
com kubernetes autoscaler blob master cluster autoscaler cloudprovider aws README md  https   github com kubernetes autoscaler blob master cluster autoscaler cloudprovider aws README md     https   github com aws amazon ec2 instance selector  https   github com aws amazon ec2 instance selector     https   github com aws aws node termination handler  https   github com aws aws node termination handler "}
{"questions":"eks exclude true API search","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# Workloads\n\nWorkloads have an impact on how large your cluster can scale. Workloads that use the Kubernetes API heavily will limit the total amount of workloads you can have in a single cluster, but there are some defaults you can change to help reduce the load.\n\nWorkloads in a Kubernetes cluster have access to features that integrate with the Kubernetes API (e.g. Secrets and ServiceAccounts), but these features are not always needed and should be disabled if they are not being used. Limiting workload access and dependence on the Kubernetes control plane will increase the number of workloads you can run in the cluster and improve the security of your clusters by removing unneeded access from workloads and implementing least privilege practices. 
Please read the [security best practices](https:\/\/aws.github.io\/aws-eks-best-practices\/security\/docs\/) for more information.\n\n## Use IPv6 for pod networking\n\nA VPC cannot be transitioned from IPv4 to IPv6, so enabling IPv6 before provisioning a cluster is important. Enabling IPv6 in a VPC does not mean you have to use it, and if your pods and services use IPv6 you can still route traffic to and from IPv4 addresses. Please see the [EKS networking best practices](https:\/\/aws.github.io\/aws-eks-best-practices\/networking\/index\/) for more information.\n\nUsing the [IPv6 in your cluster tutorial](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/cni-ipv6.html) will avoid the most common cluster and workload scaling limits. IPv6 avoids IP address exhaustion, where pods and nodes cannot be created because no IP address is available. 
It also has per-node performance improvements because pods receive IP addresses faster by reducing the number of ENI attachments per node. You can achieve similar node performance by using [IPv4 prefix mode with the VPC CNI](https:\/\/aws.github.io\/aws-eks-best-practices\/networking\/prefix-mode\/), but you still need to make sure you have enough IP addresses available in the VPC.\n\n## Limit the number of services per namespace\n\nThe [maximum number of services in a namespace is 5,000 and the maximum number of services in a cluster is 10,000](https:\/\/github.com\/kubernetes\/community\/blob\/master\/sig-scalability\/configs-and-limits\/thresholds.md). 
To help organize workloads and services, increase performance, and avoid cascading impact on namespace-scoped resources, we recommend limiting the number of services per namespace to 500.\n\nThe number of IP table rules that kube-proxy creates per node grows with the total number of services in the cluster. Generating thousands of IP table rules and routing packets through those rules degrades node performance and adds network latency.\n\nCreate Kubernetes namespaces that encompass a single application environment so long as the number of services per namespace is under 500. This keeps service discovery small enough to avoid service discovery limits and also helps you avoid service naming collisions. 
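As an illustration of the 500-services-per-namespace guideline, a small helper can flag namespaces approaching the limit. This is a hypothetical sketch: it assumes you pipe in `namespace name` pairs, for example from `kubectl get svc -A` with a custom-columns output.

```shell
# Count services per namespace and flag any namespace over the recommended
# limit of 500. Reads "namespace name" pairs on stdin, e.g. from:
#   kubectl get svc -A --no-headers -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name
count_services_per_namespace() {
  awk '{ count[$1]++ }
       END { for (ns in count)
               printf "%s %d%s\n", ns, count[ns], (count[ns] > 500 ? " OVER-LIMIT" : "") }' | sort
}
```

Usage: `kubectl get svc -A --no-headers -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name | count_services_per_namespace`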
Application environments (e.g. dev, test, prod) should use separate EKS clusters instead of namespaces.\n\n## Understand Elastic Load Balancer quotas\n\nWhen creating your services, consider what type of load balancing you will use (e.g. Network Load Balancer (NLB) or Application Load Balancer (ALB)). Each load balancer type provides different functionality and has different [quotas](https:\/\/docs.aws.amazon.com\/elasticloadbalancing\/latest\/application\/load-balancer-limits.html). Some of the default quotas can be adjusted, but some quota maximums cannot be changed. To view your account quotas and usage, see the [Service Quotas dashboard](http:\/\/console.aws.amazon.com\/servicequotas) in the AWS console.\n\nFor example, the default ALB target limit is 1000. If you have a service with more than 1,000 endpoints you will need to increase the quota, split the service across multiple ALBs, or use Kubernetes Ingress. The default NLB target limit is 3000, but is limited to 500 targets per AZ. 
If your cluster runs more than 500 pods for an NLB service you will need to use multiple AZs or request a quota limit increase.\n\nInstead of using a load balancer attached to a service you can use an [ingress controller](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress-controllers\/). The AWS Load Balancer Controller can create ALBs for ingress resources, but you may also consider running a dedicated controller in your cluster. An in-cluster ingress controller lets you expose multiple Kubernetes services from a single load balancer by running a reverse proxy inside the cluster. 
Controllers have different features, such as support for the [Gateway API](https:\/\/gateway-api.sigs.k8s.io\/), which may have benefits depending on the number and size of your workloads.\n\n## Use Route 53, Global Accelerator, or CloudFront\n\nTo make a service using multiple load balancers available as a single endpoint, you need to use [Amazon CloudFront](https:\/\/aws.amazon.com\/cloudfront\/), [AWS Global Accelerator](https:\/\/aws.amazon.com\/global-accelerator\/), or [Amazon Route 53](https:\/\/aws.amazon.com\/route53\/) to expose all of the load balancers as a single, customer-facing endpoint. Each option has different benefits and they can be used separately or together depending on your needs.\n\nRoute 53 can expose multiple load balancers under a common name and can send traffic to each of them based on assigned weights. 
You can read more about [DNS weights](https:\/\/docs.aws.amazon.com\/Route53\/latest\/DeveloperGuide\/resource-record-sets-values-weighted.html#rrsets-values-weighted-weight) in the documentation, and you can read how to implement this with the [Kubernetes external DNS controller](https:\/\/github.com\/kubernetes-sigs\/external-dns) in the [AWS Load Balancer Controller documentation](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/v2.4\/guide\/integrations\/external_dns\/#usage).\n\nGlobal Accelerator can route workloads to the nearest region based on the request's IP address. This can be useful for workloads deployed into multiple regions, but it does not improve routing to a single cluster in a single region. Using Route 53 in combination with Global Accelerator has additional benefits, such as health checks and automatic failover if an AZ is not available. 
You can see an example of using Global Accelerator with Route 53 in [this blog post](https:\/\/aws.amazon.com\/blogs\/containers\/operating-a-multi-regional-stateless-application-using-amazon-eks\/).\n\nCloudFront can be used with Route 53 and Global Accelerator, or by itself, to route traffic to multiple destinations. CloudFront caches assets served from the origin sources, which may reduce bandwidth requirements depending on what you are serving.\n\n## Use EndpointSlices instead of Endpoints\n\nWhen discovering pods that match a service label, you should use [EndpointSlices](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/endpoint-slices\/) instead of Endpoints. Endpoints were a simple way to expose services at small scale, but large services that automatically scale or are updated frequently cause a lot of traffic on the Kubernetes control plane. 
EndpointSlices have automatic grouping, which enables features like topology-aware hints.\n\nNot all controllers use EndpointSlices by default. You should verify your controller settings and enable EndpointSlices if needed. For the [AWS Load Balancer Controller](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/v2.4\/deploy\/configurations\/#controller-command-line-flags), you should enable the `--enable-endpoint-slices` optional flag to use EndpointSlices.\n\n## Use immutable and external secrets if possible\n\nThe kubelet keeps a cache of the current keys and values for the secrets that are used in volumes for pods on that node. The kubelet sets a watch on secrets to detect changes. 
As the cluster scales, the growing number of watches can negatively impact API server performance.\n\nThere are two strategies to reduce the number of watches on secrets:\n\n* For applications that don't need access to Kubernetes resources, you can disable auto-mounting service account secrets by setting automountServiceAccountToken: false\n* If your application's secrets are static and will not be modified in the future, [mark the secret as immutable](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/#secret-immutable). The kubelet does not maintain an API watch for immutable secrets.\n\nTo disable automatically mounting a service account to pods, you can use the following setting in your workloads. 
You can override this setting if a specific workload needs a service account.\n\n```\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: app\nautomountServiceAccountToken: false\n```\n\nMonitor the number of secrets in the cluster before it exceeds the limit of 10,000. You can see the total count of secrets in a cluster with the following command. You should monitor this limit through your cluster monitoring tooling.\n\n```\nkubectl get secrets -A | wc -l\n```\n\nYou should set up monitoring to alert a cluster admin before this limit is reached. Consider using external secret management options such as [AWS Key Management Service (AWS KMS)](https:\/\/aws.amazon.com\/kms\/) or [Hashicorp Vault](https:\/\/www.vaultproject.io\/) with the [Secrets Store CSI driver](https:\/\/secrets-store-csi-driver.sigs.k8s.io\/).\n\n## Limit Deployment history\n\nPods can be slow to create, update, or delete because old objects are still tracked in the cluster. 
You can reduce the `revisionHistoryLimit` of [Deployments](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/#clean-up-policy) to clean up old ReplicaSets, which lowers the total amount of objects tracked by the Kubernetes Controller Manager. The default history limit for Deployments is 10.\n\nIf your cluster creates a lot of Job objects through CronJobs or other mechanisms, you should use the [`ttlSecondsAfterFinished` setting](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/ttlafterfinished\/) to automatically clean up old pods in the cluster. This removes successfully completed Jobs from the Job history after a specified amount of time.\n\n## Disable enableServiceLinks by default\n\nWhen a pod runs on a node, the kubelet adds a set of environment variables for each active service. Linux processes have a maximum size for their environment, which can be reached if there are too many services in a namespace. 
The number of services per namespace should not exceed 5,000. Beyond that, the number of service environment variables outgrows the shell limit, causing pods to crash on startup.

There are other reasons pods should not use service environment variables for service discovery: environment variable name clashes, leaking service names, and total environment size among them. Use CoreDNS to discover service endpoints instead.

## Limit dynamic admission webhooks per resource

[Dynamic Admission Webhooks](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) include validating webhooks and mutating webhooks. They are API endpoints outside of the Kubernetes control plane that are called in sequence when a resource is sent to the Kubernetes API.
Each webhook has a default timeout of 10 seconds and can increase the amount of time an API request takes if you have multiple webhooks or any of them time out.

Make sure your webhooks are highly available, especially during an AZ incident, and that the [FailurePolicy](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#failure-policy) is set properly to reject resources or ignore failures. Do not call webhooks when not needed by allowing `--dry-run` kubectl commands to bypass the webhook.

```
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
request:
  dryRun: False
```

Mutating webhooks can modify resources in frequent succession. If you have 5 mutating webhooks and deploy 50 resources, etcd stores every version of each resource until compaction runs, every 5 minutes, to remove old versions of the modified resources.
In this scenario, when etcd removes superseded resources, 200 resource versions are removed from etcd, and depending on the size of the resources, considerable space may be used on the etcd host until defragmentation runs, every 15 minutes.

This defragmentation could cause pauses in etcd, which could have other effects on the Kubernetes API and controllers. You should avoid frequent modification of large resources or modifying hundreds of resources in quick succession.
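The webhook behaviors discussed above (failure policy, timeout, dry-run handling) are set on the webhook registration itself. A minimal sketch of such a registration, with illustrative resource names, that fails open and caps its timeout below the 10-second default:

```
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy            # illustrative name
webhooks:
  - name: validate.example.com    # illustrative name
    failurePolicy: Ignore         # fail open so an unavailable webhook does not block API requests
    timeoutSeconds: 5             # below the 10s default discussed above
    sideEffects: None             # no out-of-band side effects, safe for dry-run requests
    admissionReviewVersions: ["v1"]
    clientConfig:
      service:
        name: example-webhook
        namespace: default
        path: /validate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
```

Scoping the `rules` narrowly also keeps the webhook out of the request path for resources it does not need to inspect.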
---
search:
  exclude: true
---

# Kubernetes Upstream SLOs

Amazon EKS runs the same code as the upstream Kubernetes releases and ensures that EKS clusters operate within the SLOs defined by the Kubernetes community. The Kubernetes [Scalability SIG](https://github.com/kubernetes/community/tree/master/sig-scalability) defines the scalability goals and investigates bottlenecks in performance through SLIs (Service Level Indicators) and SLOs (Service Level Objectives).

SLIs are how we measure a system, like metrics or measures that can be used to determine how "well" the system is running, e.g. request latency or count. SLOs define the values that are expected when the system is running "well", e.g. request latency remains less than 3 seconds.
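The SLI/SLO relationship can be made concrete with a few lines of Python. The sample latencies and the nearest-rank percentile here are illustrative, not EKS measurements or the SIG's exact methodology:

```python
# Illustrative sketch: an SLI is the measurement (here, the 99th-percentile
# request latency); the SLO is the target value it is compared against.

def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of latency samples (seconds)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies = [0.2, 0.3, 0.25, 0.4, 2.8, 0.35, 0.3, 0.28, 0.45, 0.5]
sli = percentile(latencies, 99)   # SLI: the measured p99 latency (2.8s here)
slo = 3.0                         # SLO: p99 latency must stay under 3 seconds
meets_slo = sli <= slo            # True for this sample
```

The same comparison underlies the PromQL queries later in this page, where `histogram_quantile(0.99, ...)` produces the SLI value.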
The Kubernetes SLOs and SLIs focus on the performance of the Kubernetes components and are completely independent from the Amazon EKS Service SLAs, which focus on availability of the EKS cluster endpoint.

Kubernetes has many features that allow users to extend the system with custom add-ons or drivers, like CSI drivers, admission webhooks, and auto-scalers. These extensions can drastically impact the performance of a Kubernetes cluster in different ways. For example, an admission webhook with `FailurePolicy=Ignore` can add latency to K8s API requests if the webhook target is unavailable.
The Kubernetes Scalability SIG defines scalability using the ["you promise, we promise" framework](https://github.com/kubernetes/community/blob/master/sig-scalability/slos/slos.md#how-we-define-scalability):

> If you promise to (`You promise`):
>
> - configure your cluster correctly
> - use extensibility features "reasonably"
> - keep the load in the cluster within [recommended limits](https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md)
>
> then we promise that your cluster will scale (`We promise`):
>
> - all the SLOs are satisfied.

# Kubernetes SLOs

The Kubernetes SLOs don't account for all of the plugins and external limitations that could impact a cluster, such as worker node scaling or admission webhooks. These SLOs focus on [Kubernetes components](https://kubernetes.io/docs/concepts/overview/components/) and ensure that Kubernetes actions and resources operate within expectations.
The SLOs help Kubernetes developers ensure that changes to the Kubernetes code do not degrade performance for the entire system.

[The Kubernetes Scalability SIG defines the following official SLO/SLIs](https://github.com/kubernetes/community/blob/master/sig-scalability/slos/slos.md), and the Amazon EKS team regularly runs scalability tests on EKS clusters for these SLOs/SLIs to monitor for performance degradation as changes are made and new versions are released.

|Objective |Definition |SLO |
|--- |--- |--- |
|API request latency (mutating) |Latency of processing mutating API calls for single objects for every (resource, verb) pair, measured as 99th percentile over last 5 minutes |In default Kubernetes installation, for every (resource, verb) pair, excluding virtual and aggregated resources and Custom Resource Definitions, 99th percentile per cluster-day <= 1s |
|API request latency (read-only) |Latency of processing non-streaming read-only API calls for every (resource, scope) pair, measured as 99th percentile over last 5 minutes |In default Kubernetes installation, for every (resource, scope) pair, excluding virtual and aggregated resources and Custom Resource Definitions, 99th percentile per cluster-day: (a) <= 1s if `scope=resource` (b) <= 30s otherwise (if `scope=namespace` or `scope=cluster`) |
|Pod startup latency |Startup latency of schedulable stateless pods, excluding time to pull images and run init containers, measured from pod creation timestamp to when all its containers are reported as started and observed via watch, measured as 99th percentile over last 5 minutes |In default Kubernetes installation, 99th percentile per cluster-day <= 5s |

### API request latency

The `kube-apiserver` has `--request-timeout` defined as `1m0s` by default, meaning a request can run for up to one minute (60 seconds) before being cancelled for exceeding the timeout. The SLOs defined for latency are broken out by the type of request that is being made, which can be mutating or read-only.

#### Mutating

Mutating requests in Kubernetes make changes to a resource, such as creations, deletions, or updates. These requests are expensive because the changes must be written to the [etcd backend](https://kubernetes.io/docs/concepts/overview/components/#etcd) before the updated object is returned.
[Etcd](https://etcd.io/) is a distributed key-value store that is used for all Kubernetes cluster data.

This latency is measured as the 99th percentile over 5 minutes for (resource, verb) pairs of Kubernetes resources. For example, this would measure the latency of Create Pod requests and Update Node requests. The request latency must be <= 1 second to satisfy the SLO.

#### Read-only

Read-only requests retrieve a single resource (e.g. Get Pod X) or a collection (e.g. "Get all Pods from Namespace X"). The `kube-apiserver` maintains a cache of objects, so the requested resources may be returned from the cache, or they may need to be retrieved from etcd first.
These latencies are also measured as the 99th percentile over 5 minutes; however, read-only requests can have separate scopes. The SLO defines two different objectives:

* For requests made for a *single* resource (e.g. `kubectl get pod -n mynamespace my-controller-xxx`), the request latency should remain <= 1 second.
* For requests that are made for multiple resources in a namespace or a cluster (e.g. `kubectl get pods -A`), the latency should remain <= 30 seconds.

The SLO has different target values for different request scopes because requests made for a list of Kubernetes resources expect the details of all objects in the request to be returned within the SLO. On large clusters, or large collections of resources, this can result in large response sizes which can take some time to return.
For example, in a cluster running tens of thousands of Pods, with each Pod being roughly 1 KiB when encoded in JSON, returning all Pods in the cluster would amount to 10 MB or more. Kubernetes clients can help reduce this response size [using APIListChunking to retrieve large collections of resources](https://kubernetes.io/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks).

### Pod startup latency

This SLO is primarily concerned with the time it takes from pod creation to when the containers in that pod actually begin execution. To measure this, the difference is calculated between the creation timestamp recorded on the pod and when a [pod WATCH request](https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes) reports the containers have started (excluding time for container image pulls and init container execution). To satisfy the SLO, the 99th percentile per cluster-day of this pod startup latency must remain <= 5 seconds.
Note that this SLO assumes that worker nodes already exist in the cluster in a ready state for the pod to be scheduled on. It does not account for image pulls or init container executions, and also limits the test to "stateless pods" which don't leverage persistent storage plugins.

## Kubernetes SLI metrics

Kubernetes is also improving the observability around the SLIs by adding [Prometheus metrics](https://prometheus.io/docs/concepts/data_model/) to Kubernetes components that track these SLIs over time. Using [Prometheus Query Language (PromQL)](https://prometheus.io/docs/prometheus/latest/querying/basics/) we can build queries that display the SLI performance over time in tools like Prometheus or Grafana dashboards.
Below are some examples for the SLOs above.

### API server request latency

|Metric |Definition |
|--- |--- |
|apiserver_request_sli_duration_seconds | Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subresource, scope, and component |
|apiserver_request_duration_seconds | Response latency distribution in seconds for each verb, dry run value, group, version, resource, subresource, scope, and component |

*Note: The `apiserver_request_sli_duration_seconds` metric is available starting in Kubernetes 1.27.*

You can use these metrics to investigate the API server response times and whether there are bottlenecks in the Kubernetes components or other plugins/components. The queries below are based on the [community SLO dashboard](https://github.com/kubernetes/perf-tests/tree/master/clusterloader2/pkg/prometheus/manifests/dashboards).

**API request latency SLI (mutating)** - this time does *not* include webhook execution or time waiting in queue.
`histogram_quantile(0.99, sum(rate(apiserver_request_sli_duration_seconds_bucket{verb=~"CREATE|DELETE|PATCH|POST|PUT", subresource!~"proxy|attach|log|exec|portforward"}[5m])) by (resource, subresource, verb, scope, le)) > 0`

**API request latency total (mutating)** - this is the total time the request took on the API server. This time may be longer than the SLI time because it includes webhook execution as well as API Priority and Fairness wait times.
`histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{verb=~"CREATE|DELETE|PATCH|POST|PUT", subresource!~"proxy|attach|log|exec|portforward"}[5m])) by (resource, subresource, verb, scope, le)) > 0`

In these queries we are excluding the streaming API requests that do not return immediately, such as `kubectl port-forward` or `kubectl exec` requests (`subresource!~"proxy|attach|log|exec|portforward"`), and we are filtering for only the Kubernetes verbs that modify objects (`verb=~"CREATE|DELETE|PATCH|POST|PUT"`). We then calculate the 99th percentile of that latency over the last 5 minutes.

We can use a similar query for the read-only API requests; we simply modify the verbs we're filtering for to include the read-only actions `LIST` and `GET`.
There are also different SLO thresholds depending on the scope of the request, e.g. getting a single resource or listing a number of resources.\n\n**API Request latency SLI (read-only)** - this time also does *not* include webhook execution or time spent waiting in queues.\nFor a single resource (scope=resource, threshold=1s)  \n`histogram_quantile(0.99, sum(rate(apiserver_request_sli_duration_seconds_bucket{verb=~\"GET\", scope=~\"resource\"}[5m])) by (resource, subresource, verb, scope, le))`\n\nFor a collection of resources in the same namespace (scope=namespace, threshold=5s)  \n`histogram_quantile(0.99, sum(rate(apiserver_request_sli_duration_seconds_bucket{verb=~\"LIST\", scope=~\"namespace\"}[5m])) by (resource, subresource, verb, scope, le))`\n\nFor a collection of resources across the entire cluster (scope=cluster, threshold=30s)  \n`histogram_quantile(0.99, sum(rate(apiserver_request_sli_duration_seconds_bucket{verb=~\"LIST\", scope=~\"cluster\"}[5m])) by (resource, subresource, verb, scope, le))`\n\n**API Request latency Total (read-only)** - this is the total time requests took on the API server. 
It can be longer than the SLI time because it includes webhook execution and wait times.\nFor a single resource (scope=resource, threshold=1s)  \n`histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{verb=~\"GET\", scope=~\"resource\"}[5m])) by (resource, subresource, verb, scope, le))`\n\nFor a collection of resources in the same namespace (scope=namespace, threshold=5s)  \n`histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{verb=~\"LIST\", scope=~\"namespace\"}[5m])) by (resource, subresource, verb, scope, le))`\n\nFor a collection of resources across the entire cluster (scope=cluster, threshold=30s)  \n`histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{verb=~\"LIST\", scope=~\"cluster\"}[5m])) by (resource, subresource, verb, scope, le))`\n\nThe SLI metrics provide insight into how the Kubernetes components are performing by excluding the time requests spend waiting in API priority and fairness queues, working through admission webhooks, or passing through other Kubernetes extensions. The total metrics provide a more holistic view, as they reflect how long your applications wait for a response from the API server. 
Comparing these metrics can give insight into where delays in request processing are being introduced. \n\n### Pod Startup Latency\n\n|Metric\t| Definition\t|\n|---\t|---\t|\n|kubelet_pod_start_sli_duration_seconds\t|Duration in seconds to start a pod, excluding time to pull images and run init containers, measured from the pod creation timestamp to when all of its containers are reported as started and observed via watch\t|\n|kubelet_pod_start_duration_seconds\t|Duration in seconds from the kubelet seeing a pod for the first time to the pod starting to run. This does not include the time to schedule the pod or scale out worker node capacity.\t|\n\n*Note: `kubelet_pod_start_sli_duration_seconds` is available starting in Kubernetes 1.27.*\n\nSimilar to the queries above, you can use these metrics to gain insight into how long node scaling, image pulls, and init containers are delaying pod launches compared to the kubelet's own actions. 
\n\n**Pod startup latency SLI -** this is the time from the pod being created to when the application containers are reported as running. It includes the time it takes for worker node capacity to become available and for the pod to be scheduled, but does not include the time it takes to pull images or run init containers.  \n`histogram_quantile(0.99, sum(rate(kubelet_pod_start_sli_duration_seconds_bucket[5m])) by (le))`\n\n**Pod startup latency Total -** this is the time it takes the kubelet to start the pod for the first time. It is measured from when the kubelet receives the pod via WATCH, so it does not include the time for worker node scaling or scheduling. It does include the time to pull images and run init containers.  
\n`histogram_quantile(0.99, sum(rate(kubelet_pod_start_duration_seconds_bucket[5m])) by (le))`\n\n\n\n## SLOs on your cluster\n\nIf you are collecting Prometheus metrics from the Kubernetes resources in your EKS cluster, you can gain deeper insight into the performance of the Kubernetes control plane components. \n\nThe [perf-tests repo](https:\/\/github.com\/kubernetes\/perf-tests\/) includes Grafana dashboards that display the latency and critical performance metrics for the cluster during tests. The perf-tests configuration leverages [kube-prometheus-stack](https:\/\/github.com\/prometheus-community\/helm-charts\/tree\/main\/charts\/kube-prometheus-stack), an open source project that comes configured to collect Kubernetes metrics, but you can also [use Amazon Managed Prometheus and Amazon Managed Grafana](https:\/\/aws-observability.github.io\/terraform-aws-observability-accelerator\/eks\/).\n\nIf you use `kube-prometheus-stack` or a similar Prometheus solution, you can install the same dashboards to observe the SLOs on your cluster in real time. \n\n1. 
You will first need to install the Prometheus rules used in the dashboards with `kubectl apply -f prometheus-rules.yaml`. You can download a copy of the rules here: https:\/\/github.com\/kubernetes\/perf-tests\/blob\/master\/clusterloader2\/pkg\/prometheus\/manifests\/prometheus-rules.yaml\n    1. Make sure the namespace in the file matches your environment.\n    2. Verify that the labels match the `Prometheus.PrometheusSpec.RuleSelector` helm value if you are using `kube-prometheus-stack`.\n2. You can then install the dashboards in Grafana. The JSON dashboards and the python scripts that generate them are available here: https:\/\/github.com\/kubernetes\/perf-tests\/tree\/master\/clusterloader2\/pkg\/prometheus\/manifests\/dashboards\n    1. 
The [`slo.json` dashboard](https:\/\/github.com\/kubernetes\/perf-tests\/blob\/master\/clusterloader2\/pkg\/prometheus\/manifests\/dashboards\/slo.json) displays the performance of the cluster in relation to the Kubernetes SLOs.\n\nThe SLOs are focused on the performance of the Kubernetes components in your cluster, but there are additional metrics you can review that provide different perspectives or insights into your cluster. Kubernetes community projects like [kube-state-metrics](https:\/\/github.com\/kubernetes\/kube-state-metrics\/tree\/main) can help you quickly analyze trends in your cluster. Most popular plugins and drivers from the Kubernetes community also emit Prometheus metrics, allowing you to investigate things like autoscalers or custom schedulers. \n\nThe [Observability Best Practices Guide](https:\/\/aws-observability.github.io\/observability-best-practices\/guides\/containers\/oss\/eks\/best-practices-metrics-collection\/#control-plane-metrics) has examples of other Kubernetes metrics you can use to gain further insight. 
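The queries in this section lean on `histogram_quantile`, which estimates a percentile by linear interpolation inside cumulative histogram buckets. Below is a rough, simplified Python sketch of that estimate, using made-up bucket counts; Prometheus's real implementation handles more edge cases:

```python
def histogram_quantile(q, buckets):
    """Estimate the q-quantile from cumulative buckets: a sorted list of
    (le_upper_bound, cumulative_count) pairs ending with float('inf')."""
    total = buckets[-1][1]
    rank = q * total
    prev_le, prev_count = 0.0, 0
    for le, count in buckets:
        if count >= rank:
            if le == float("inf"):
                return prev_le  # fall back to the highest finite bound
            # linear interpolation inside the bucket that holds the rank
            return prev_le + (le - prev_le) * (rank - prev_count) / (count - prev_count)
        prev_le, prev_count = le, count

# Made-up cumulative request counts per latency bound (seconds)
buckets = [(0.1, 50), (0.5, 90), (1.0, 99), (float("inf"), 100)]
print(histogram_quantile(0.5, buckets))   # 0.1
print(histogram_quantile(0.99, buckets))  # 1.0
```

This is why bucket boundaries matter: a p99 reported as 1.0s only says the 99th request fell in the 0.5s-1.0s bucket.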
\n\n\n\n\n\n","site":"eks","answers_cleaned":"    search    exclude  true                    SLO  Amazon EKS                               EKS                        SLO                           SIG  https   github com kubernetes community tree master sig scalability                     SLI Service Level Indicator   SLO Service Level Objective                         SLI                                                                                          SLO                                                 3                    SLO  SLI                           EKS                         Amazon EKS     SLA                            CSI                                                                                                                                FailurePolicy Ignore                                   K8s API                                SIG    you promise  we promise         https   github com kubernetes community blob master sig scalability slos slos md how we define scalability                                              You promise                                                                                        https   github com kubernetes community blob master sig scalability configs and limits thresholds md                                          We promise                 SLO                      SLO       SLO                                                                            SLO               https   kubernetes io docs concepts overview components                                                SLO                                                                           SIG            SLO SLI         https   github com kubernetes community blob master sig scalability slos slos md   Amazon EKS        SLO SLI     EKS                                                                Objective  Definition  SLO                     API request latency  mutating                                API              5         99         
 Kubernetes                                                        99    1     API request latency  read only                                  API                 5         99          Kubernetes                                                                     99   a      scope resource      1   b       30    scope namespace      scope cluster          Pod startup latency                                                                                                                                  5         99                               99    5           Objective  Definition  SLO                     API request latency  mutating   Latency of processing mutating  API calls for single objects for every  resource  verb  pair  measured as 99th percentile over last 5 minutes  In default Kubernetes installation  for every  resource  verb  pair  excluding virtual and aggregated resources and Custom Resource Definitions  99th percentile per cluster day    1s    API request latency  read only   Latency of processing non streaming read only API calls for every  resource  scope  pair  measured as 99th percentile over last 5 minutes  In default Kubernetes installation  for every  resource  scope  pair  excluding virtual and aggregated resources and Custom Resource Definitions  99th percentile per cluster day   a     1s if  scope resource   b     30s otherwise  if  scope namespace  or  scope cluster      Pod startup latency  Startup latency of schedulable stateless pods  excluding time to pull images and run init containers  measured from pod creation timestamp to when all its containers are reported as started and observed via watch  measured as 99th percentile over last 5 minutes  In default Kubernetes installation  99th percentile per cluster day    5s            API             kube apiserver             request timeout    1m0s                        1   60                                                  SLO                                                       
Mutating                                                                                       etcd      https   kubernetes io docs concepts overview components  etcd                           Etcd  https   etcd io                                                                  resource  verb        5      99                                                                   SLO                  1                          read only                        Pod X                             X                            kube apiserver                                              etcd                             5      99                                             SLO                                     kubectl get pod  n mynamespace my controller xxx                        1                                                           kubectl get pods  A          30                   Kubernetes                                        SLO                     SLO                                                                                                                     JSON                   1KiB                         10MB                                                   APIListChunking                        https   kubernetes io docs reference using api api concepts  retrieving large results sets in chunks                     SLO                                                                                           WATCH    https   kubernetes io docs reference using api api concepts  efficient detection of changes                                                                    SLO                               99         5                         SLO                                                           SLO                                                               stateless                              SLI                                 SLI                                https   prometheus io docs concepts data model         SLI                   
                    PromQL   https   prometheus io docs prometheus latest querying basics         Prometheus    Grafana                         SLI                                 SLO                     API              Metric  Definition                apiserver request sli duration seconds     verb                                                                                          apiserver request duration seconds     verb                                                                           apiserver request sli duration seconds             1 27                                API           Kubernetes                                                             SLO       https   github com kubernetes perf tests tree master clusterloader2 pkg prometheus manifests dashboards                 API         SLI  mutating                                                  histogram quantile 0 99  sum rate apiserver request sli duration seconds bucket verb   CREATE DELETE PATCH POST PUT   subresource   proxy attach log exec portforward   5m    by  resource  subresource  verb  scope  le     0     API                mutating      API                                    API                          SLI                   histogram quantile 0 99  sum rate apiserver request duration seconds bucket verb   CREATE DELETE PATCH POST PUT   subresource   proxy attach log exec portforward   5m    by  resource  subresource  verb  scope  le     0            kubectl port forward      kubectl exec                         API              subresource   proxy attach log exec portforward                        verb                    verb   Create Delete Patch Post put             5                99                       API                                 verb            LIST    GET                                                                      SLO               API            SLI                                                                           1s     histogram 
quantile 0 99  sum rate apiserver request sli duration seconds bucket verb   GET   scope   resource   5m    by  resource  subresource  verb  scope  le                                                5s     histogram quantile 0 99  sum rate apiserver request sli duration seconds bucket verb   LIST   scope   namespace   5m    by  resource  subresource  verb  scope  le                                        30      histogram quantile 0 99  sum rate apiserver request sli duration seconds bucket verb   LIST   scope   cluster   5m    by  resource  subresource  verb  scope  le       API                            API                                                  SLI                                        1      histogram quantile 0 99  sum rate apiserver request duration seconds bucket verb   GET   scope   resource   5m    by  resource  subresource  verb  scope  le                                                5s     histogram quantile 0 99  sum rate apiserver request duration seconds bucket verb   LIST   scope   namespace   5m    by  resource  subresource  verb  scope  le                                       30      histogram quantile 0 99  sum rate apiserver request duration seconds bucket verb   LIST   scope   cluster   5m    by  resource  subresource  verb  scope  le     SLI          API Priority   Fairness                                                                                                            API                                                                                                          Metric   Definition                kubelet pod start sli duration seconds                                                                                                                           kubelet pod start duration seconds  kubelet                                                                                                kubelet pod start sli duration seconds         1 27                                                                    
 Kubelet                                                          SLI                                                                                                                                                               histogram quantile 0 99  sum rate kubelet pod start sli duration seconds bucket 5m    by  le                            kubelet                                kubelet  WATCH                                                                                                                 histogram quantile 0 99  sum rate kubelet pod start duration seconds bucket 5m    by  le                SLO  EKS                   Prometheus                                                               perf test repo  https   github com kubernetes perf tests                                         Grafana                  perf                                           kube prometheus stack  https   github com prometheus community helm charts tree main charts kube prometheus stack          Amazon Managed Prometheus   Amazon Managed Grafana     https   aws observability github io terraform aws observability accelerator eks             kube prometheus stack                                                  SLO                      1      kubectl apply  f prometheus rules yaml                                                             https   github com kubernetes perf tests blob master clusterloader2 pkg prometheus manifests prometheus rules yaml     1                                       2   kube prometheus stack                 Prometheus PrometheusSpec RuleSelector                     2        Grafana                            json                                  https   github com kubernetes perf tests tree master clusterloader2 pkg prometheus manifests dashboards     1    slo json        https   github com kubernetes perf tests blob master clusterloader2 pkg prometheus manifests dashboards slo json         SLO                        SLO        Kubernetes 
                                                                       Kube State Metrics  https   github com kubernetes kube state metrics tree main                                                                                              Prometheus                                                                           https   aws observability github io observability best practices guides containers oss eks best practices metrics collection  control plane metrics                                                          "}
{"questions":"eks exclude true EKS NTP syslog kube system search","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# Cluster Services\n\nCluster services run inside an EKS cluster, but they are not user workloads. If you have a Linux server you often need to run services like NTP, syslog, and a container runtime to support your workloads. Cluster services are similar: they support services that help you automate and operate the cluster. In Kubernetes these usually run in the kube-system namespace, and some run as [DaemonSets](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/daemonset\/).\n\nCluster services are expected to have high uptime and are often critical during outages and for troubleshooting. If a core cluster service is unavailable, you may lose access to data that can help you recover from or prevent an outage (e.g. high disk utilization). 
They should run on dedicated compute instances such as a separate node group or AWS Fargate. This ensures the cluster services are not impacted on shared instances by workloads that may be scaling up or using more resources.\n\n## Scale CoreDNS\n\nScaling CoreDNS has two primary mechanisms: reducing the number of calls to the CoreDNS service and increasing the number of replicas.\n\n### Reduce external queries by lowering ndots\n\nThe ndots setting specifies how many periods (a.k.a. \"dots\") in a domain name are considered enough to avoid querying DNS. If your application has an ndots setting of 5 (the default) and you request resources from an external domain such as api.example.com (2 dots), then CoreDNS will be queried for each search domain defined in \/etc\/resolv.conf as it looks for a more specific domain. 
By default, the following domains will be searched before making the external request:\n\n```\napi.example.<namespace>.svc.cluster.local\napi.example.svc.cluster.local\napi.example.cluster.local\napi.example.<region>.compute.internal\n```\n\nThe `namespace` and `region` values will be replaced with your workload's namespace and your compute region. You may have additional search domains depending on your cluster settings.\n\nYou can reduce the number of requests to CoreDNS by [lowering the ndots option](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/dns-pod-service\/#pod-dns-config) for your workload or by fully qualifying your domain requests with a trailing dot (e.g. `api.example.com.`). If your workloads connect to external services via DNS, we recommend setting ndots to 2 so workloads do not make unnecessary cluster DNS queries inside the cluster. 
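The search-list behavior described above can be sketched as follows; the namespace and region in the search list are made-up examples, and real resolvers (see `resolv.conf`) have additional nuance:

```python
# Hypothetical search list; the <namespace> and <region> values are examples.
SEARCH = [
    "mynamespace.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
    "us-west-2.compute.internal",
]

def lookup_order(name, ndots=5, search=SEARCH):
    """Order of DNS queries a glibc-style resolver attempts.
    A trailing dot makes the name fully qualified (single lookup);
    otherwise names with fewer than ndots dots try the search list first."""
    if name.endswith("."):
        return [name]
    absolute_first = name.count(".") >= ndots
    expanded = [f"{name}.{domain}" for domain in search]
    return [name] + expanded if absolute_first else expanded + [name]

for candidate in lookup_order("api.example.com"):    # 2 dots < ndots=5
    print(candidate)                                 # 4 cluster lookups first
print(lookup_order("api.example.com", ndots=2)[0])   # absolute name tried first
```

With ndots lowered to 2, the external name is tried first, so the four cluster-internal queries are skipped whenever it resolves.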
You can set a different DNS server and search domain if the workload does not require access to services inside the cluster.\n\n```\nspec:\n  dnsPolicy: \"None\"\n  dnsConfig:\n    options:\n      - name: ndots\n        value: \"2\"\n      - name: edns0\n```\n\nIf you lower ndots to too low a value, or the domains you are connecting to do not include enough specificity (including a trailing dot), then DNS lookups may fail. Make sure you test how this setting impacts your workloads.\n\n### Scale CoreDNS Horizontally\n\nCoreDNS instances can scale by adding additional replicas to the deployment. It is recommended that you use [NodeLocal DNS](https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/nodelocaldns\/) or the [cluster proportional autoscaler](https:\/\/github.com\/kubernetes-sigs\/cluster-proportional-autoscaler) to scale CoreDNS.\n\nNodeLocal DNS requires running one instance per node as a DaemonSet, which requires more compute resources in the cluster, but it will avoid failed DNS requests and decrease the response time for DNS queries in the cluster. 
The cluster proportional autoscaler scales CoreDNS based on the number of nodes or cores in the cluster. This does not correlate directly with query volume, but it can be useful depending on your workloads and cluster size. The default proportional scale is to add an additional replica for every 256 cores or 16 nodes in the cluster, whichever happens first.

## Scaling the Kubernetes Metrics Server vertically

The Kubernetes Metrics Server supports horizontal and vertical scaling. Scaling the Metrics Server horizontally raises its availability, but it will not scale horizontally to handle more cluster metrics. You should scale the Metrics Server vertically, following [the recommendations](https://kubernetes-sigs.github.io/metrics-server/#scaling), as nodes and collected metrics are added to the cluster.

The Metrics Server keeps the data it collects, aggregates, and serves in memory. As the cluster grows, so does the amount of data the Metrics Server stores.
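The 256-core / 16-node defaults above correspond to the `linear` controller parameters that cluster-proportional-autoscaler reads from a ConfigMap. A minimal sketch, assuming the autoscaler runs in `kube-system` and watches a ConfigMap named `dns-autoscaler` (both names follow the upstream examples and are assumptions, not requirements):

```yaml
# Hypothetical ConfigMap for cluster-proportional-autoscaler's linear mode;
# coresPerReplica/nodesPerReplica mirror the defaults described above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-autoscaler
  namespace: kube-system
data:
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "min": 2,
      "preventSinglePointFailure": true
    }
```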
In large clusters, the Metrics Server needs more compute resources than the memory and CPU reservations specified in the default installation. You can use the [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) (VPA) or the [Addon Resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer) to scale the Metrics Server. The Addon Resizer scales it vertically in proportion to the number of worker nodes, and the VPA adjusts it based on CPU and memory usage.

## CoreDNS lameduck duration

Pods use the `kube-dns` service for name resolution. Kubernetes uses Destination NAT (DNAT) to redirect `kube-dns` traffic from nodes to the CoreDNS backend pods. When you scale the CoreDNS Deployment, `kube-proxy` updates the iptables rules and chains on each node to redirect DNS traffic to the CoreDNS pods. Propagating new endpoints when scaling up and deleting rules when scaling down CoreDNS can take between 1 and 10 seconds depending on the size of the cluster.
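As one illustration of the VPA approach, a VPA object targeting the Metrics Server Deployment might look like the sketch below (the `metrics-server` name and `kube-system` namespace are assumptions based on a standard install, and the VPA CRDs must already be present in the cluster):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: metrics-server-vpa
  namespace: kube-system
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: metrics-server
  updatePolicy:
    updateMode: "Auto"   # let VPA apply its CPU/memory recommendations
```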
This propagation delay can cause DNS lookup errors when a CoreDNS pod is terminated but the node's iptables rules have not yet been updated. In that scenario, the node may keep sending DNS queries to the terminated CoreDNS pod.

You can reduce failed DNS lookups by setting a [lameduck](https://coredns.io/plugins/health/) duration on the CoreDNS pods. While in lameduck mode, CoreDNS continues to respond to in-flight requests. Setting a lameduck duration delays the CoreDNS shutdown process, giving nodes the time they need to update their iptables rules and chains.

We recommend setting the CoreDNS lameduck duration to 30 seconds.

## CoreDNS readiness probe

We recommend using `/ready` instead of `/health` for CoreDNS's readiness probe.

In line with the earlier recommendation to set the lameduck duration to 30 seconds, this gives ample time for the nodes' iptables rules to be updated before the pod is terminated.
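In the CoreDNS Corefile, the lameduck duration is an option of the `health` plugin. A minimal sketch of the relevant server block (all other plugins are elided and shown only as a comment):

```
.:53 {
    health {
        lameduck 30s   # keep answering in-flight queries for 30s after SIGTERM
    }
    ready
    # ... remaining plugins (kubernetes, forward, cache, etc.)
}
```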
Using `/ready` instead of `/health` for the CoreDNS readiness probe ensures that at startup a CoreDNS pod is fully prepared to respond to DNS requests immediately.

```yaml
readinessProbe:
  httpGet:
    path: /ready
    port: 8181
    scheme: HTTP
```

For more information about the CoreDNS ready plugin, see [https://coredns.io/plugins/ready/](https://coredns.io/plugins/ready/).

## Logging and monitoring agents

Logging and monitoring agents can add significant load to the cluster control plane because they query the API server to enrich logs and metrics with workload metadata. The agent on a node only has access to local node resources to see things like container and process names. Querying the API server lets it add details such as Kubernetes Deployment names and labels.
This can be extremely helpful for troubleshooting, but detrimental to scaling.

Because there are so many different options for logging and monitoring, we cannot show examples for every provider. With [fluentbit](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes), we recommend enabling Use_Kubelet to fetch metadata from the local kubelet instead of the Kubernetes API server, and setting `Kube_Meta_Cache_TTL` to a number that reduces repeated calls when the data can be cached (e.g. 60).

There are two general options for scaling monitoring and logging:

* Disable integrations
* Sampling and filtering

Disabling integrations is often not an option, because log metadata is lost. This eliminates the API scaling problem, but it introduces other problems by not having the required metadata when it is needed.

Sampling and filtering reduce the number of metrics and logs that are collected.
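The Fluent Bit recommendations above (Use_Kubelet, a nonzero `Kube_Meta_Cache_TTL`) plus a simple drop filter for agent-side filtering can be sketched in classic configuration syntax (the match pattern, TTL, and exclude regex are illustrative values, not defaults):

```
[FILTER]
    Name                kubernetes
    Match               kube.*
    # Fetch pod metadata from the local kubelet instead of the API server
    Use_Kubelet         On
    # Cache metadata for 60s to cut repeated lookups
    Kube_Meta_Cache_TTL 60

[FILTER]
    # Example of filtering at the agent: drop noisy health-check lines
    Name                grep
    Match               kube.*
    Exclude             log /healthz/
```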
This lowers the volume of requests to the Kubernetes API and reduces the amount of storage needed for the metrics and logs that are collected. Lowering the storage cost lowers the cost of the overall system as well.

The ability to configure sampling depends on the agent software and can be implemented at various points of ingestion. It is important to add sampling as close to the agent as possible, because that is likely where the API server calls happen. Contact your provider to learn more about sampling support.

If you use CloudWatch and CloudWatch Logs, you can add agent filtering using the patterns [described in the documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html).

To avoid losing logs and metrics, you should send your data to a system that can buffer data in case of an outage on the receiving endpoint.
With Fluentbit, you can use [Amazon Kinesis Data Firehose](https://docs.fluentbit.io/manual/pipeline/outputs/firehose) to temporarily keep data, which reduces the chance of overloading your final data storage destination.
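A sketch of a Fluent Bit output section that buffers records through Firehose (the region and delivery stream name are placeholders; the Firehose stream itself must already exist):

```
[OUTPUT]
    Name            kinesis_firehose
    Match           *
    region          us-west-2
    delivery_stream my-log-delivery-stream
```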
---
search:
  exclude: true
---

# Control plane monitoring

## API Server
When looking at our API server, it is important to remember that one of its functions is to throttle inbound requests to prevent overloading the control plane. What can seem like a bottleneck at the API server level might actually be protecting it from more serious problems. You need to weigh the pros and cons of increasing the volume of requests moving through the system. A small sampling of the things to keep in mind when deciding whether the API server values should be increased:

1. What is the latency of requests moving through the system?
2. Is that latency the API server itself, or something "downstream" like etcd?
3. Is the API server queue depth a factor in this latency?
4. Are the API Priority and Fairness (APF) queues set up correctly for the API call patterns we want?

## Where is the problem?
To start, we can use the API latency metric to understand how long the API server is taking to service requests. Let's use the PromQL and Grafana heatmap below to display this data.

```
max(increase(apiserver_request_duration_seconds_bucket{subresource!="status",subresource!="token",subresource!="scale",subresource!="/healthz",subresource!="binding",subresource!="proxy",verb!="WATCH"}[$__rate_interval])) by (le)
```

!!! tip
    For an in-depth look at how to monitor the API server with the API dashboard used in this article, see the following [blog](https://aws.amazon.com/blogs/containers/troubleshooting-amazon-eks-api-servers-with-prometheus/).

![API request duration heatmap](../images/api-request-duration.png)

These requests are all under one second, which is a good indication that the control plane is handling requests in a timely fashion.
But what if that were not the case?

The format we are using in the API request duration above is a heatmap. What's nice about the heatmap format is that it tells us the timeout value for the API (60 seconds) by default. However, what we really need to know is at what threshold this value should be of concern before we reach the timeout threshold. For a rough guideline of acceptable thresholds, we can use the upstream Kubernetes SLO, which can be found [here](https://github.com/kubernetes/community/blob/master/sig-scalability/slos/slos.md#steady-state-slisslos).

!!! tip
    Notice the max function on this statement. When using metrics that aggregate multiple servers (by default two API servers on EKS), it is important not to average those servers together.

### Asymmetrical traffic patterns
What if one API server [pod] were lightly loaded and the other overloaded? Averaging those two numbers together could lead us to misinterpret what is happening.
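One way to see such skew is to break latency out per instance rather than aggregating. A sketch of such a query (illustrative, not part of the referenced dashboard):

```
histogram_quantile(0.99,
  sum by (le, instance) (
    rate(apiserver_request_duration_seconds_bucket{verb!="WATCH"}[5m])))
```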
For example, here we have three API servers, but all of the load is on one of them. As a rule, anything that has multiple servers, such as etcd and the API servers, should be broken out when investigating scale and performance problems.

![Total inflight requests](../images/inflight-requests.png)

With the move to API Priority and Fairness (APF), the total number of requests on the system is only one factor in checking whether the API server is oversubscribed. Since the system now works off a series of queues, we must also check whether any of these queues are full and whether the traffic for those queues is being dropped.

Let's look at these queues with the following query:

```
max without(instance)(apiserver_flowcontrol_request_concurrency_limit{})
```

!!! note
    For an in-depth explanation of how APF works, see the following [best practices guide](https://aws.github.io/aws-eks-best-practices/scalability/docs/control-plane/#api-priority-and-fairness).

Here we can see the seven different priority groups that come with the cluster by default.

![Shared concurrency](../images/shared-concurrency.png)

Next, to understand whether a particular priority level is saturated, we want to see what percentage of a given priority group is being used. Throttling requests at the workload-low level may be desirable, but at the leader-election level it would not be.

The API Priority and Fairness (APF) system has a number of complex options, and some of those options can have unintended consequences. A common problem seen with workloads is increasing the queue depth to the point where it starts adding unnecessary latency.
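A sketch of a utilization query for that percentage view, dividing in-use concurrency by the limit per priority level (metric names as exposed by upstream Kubernetes; exact names can vary by version):

```
sum by (priority_level) (apiserver_flowcontrol_request_concurrency_in_use{})
  /
sum by (priority_level) (apiserver_flowcontrol_request_concurrency_limit{})
```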
We can monitor this problem using the `apiserver_flowcontrol_current_inqueue_requests` metric, and we can check for drops with `apiserver_flowcontrol_rejected_requests_total`. These metrics will be non-zero if any bucket exceeds its concurrency.

![Requests in use](../images/requests-in-use.png)

Increasing the queue depth can make the API server a significant source of latency and should be done with care. We recommend being judicious with the number of queues created. For example, the number of shares on an EKS system is 600. Creating too many queues can reduce the shares available to important queues that need the throughput, such as the leader-election queue or the system queues.
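For instance, an alerting-style query that surfaces drops per priority level might look like this sketch:

```
sum by (priority_level) (
  rate(apiserver_flowcontrol_rejected_requests_total[5m])) > 0
```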
Creating too many extra queues can also make it harder to size those queues correctly.

To focus on a simple and impactful change you can make in APF, we simply take shares from underutilized buckets and increase the size of the buckets that are at their maximum usage. By intelligently redistributing shares among these buckets, you can reduce the likelihood of drops.

For more information, see [API Priority and Fairness settings](https://aws.github.io/aws-eks-best-practices/scalability/docs/control-plane/#api-priority-and-fairness) in the EKS Best Practices Guide.

### API vs. etcd latency
How can we use the API server's metrics/logs to determine whether there is a problem with the API server, a problem upstream/downstream of the API server, or a combination of both?
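Share redistribution is done through `PriorityLevelConfiguration` objects. A hedged sketch, assuming the `flowcontrol.apiserver.k8s.io/v1beta3` API (earlier API versions named the shares field `assuredConcurrencyShares`, and all values below are illustrative, not the cluster defaults):

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: PriorityLevelConfiguration
metadata:
  name: workload-low
spec:
  type: Limited
  limited:
    # Raise this level's slice of the total shares on the system
    nominalConcurrencyShares: 150
    limitResponse:
      type: Queue
      queuing:
        queues: 64
        queueLengthLimit: 50
        handSize: 6
```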
To better understand this, let's look at how the API server and etcd are related, and how easy it can be to end up troubleshooting the wrong system.

In the chart below we can see API server latency, but we can also see that much of this latency is correlated with the etcd server, given the bars in the graph showing most of the latency at the etcd level. If there are 15 seconds of etcd latency and 20 seconds of API server latency at the same time, then the majority of the latency is actually at the etcd level.

By looking at the whole flow, we see that it is wise not to focus solely on the API server, but also to look for signals that indicate etcd is under duress (e.g. slow apply counters increasing).

!!! tip
    The dashboard in this section can be found at https://github.com/RiskyAdventure/Troubleshooting-Dashboards/blob/main/api-troubleshooter.json

![ETCD duress](../images/etcd-duress.png)

### Control plane vs. client-side problems
In this chart we are looking for the API calls that took the most time to complete during the period. In this case we can see that a custom resource (CRD) is calling an APPLY function that is the slowest call during the 05:40 time frame.

![Slowest requests](../images/slowest-requests.png)

Armed with this data, we can use an Ad-Hoc PromQL or CloudWatch Insights query to pull the LIST requests from the audit log during that time frame to see which application it might be.

### Finding the source with CloudWatch
Metrics are best used to find the problem area we want to look at and to narrow both the timeframe and the search parameters of the problem. Once we have this data, we want to transition to logs for more detailed times and errors.
To do this we turn our logs into metrics using [CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html).

For example, to investigate the issue above, we can use the following CloudWatch Logs Insights query to pull the userAgent and requestURI so that we can pin down exactly which application is causing this latency.

!!! tip
    An appropriate count is needed so as not to pull normal List/Resync behavior on a Watch.

```
fields @timestamp, @message
| filter @logStream like "kube-apiserver-audit"
| filter ispresent(requestURI)
| filter verb = "list"
| parse requestReceivedTimestamp /\d+-\d+-(?<StartDay>\d+)T(?<StartHour>\d+):(?<StartMinute>\d+):(?<StartSec>\d+).(?<StartMsec>\d+)Z/
| parse stageTimestamp /\d+-\d+-(?<EndDay>\d+)T(?<EndHour>\d+):(?<EndMinute>\d+):(?<EndSec>\d+).(?<EndMsec>\d+)Z/
| fields (StartHour * 3600 + StartMinute * 60 + StartSec + StartMsec / 1000000) as StartTime, (EndHour * 3600 + EndMinute * 60 + EndSec + EndMsec / 1000000) as EndTime, (EndTime - StartTime) as DeltaTime
| stats avg(DeltaTime) as AverageDeltaTime, count(*) as CountTime by requestURI, userAgent
| filter CountTime >=50
| sort AverageDeltaTime desc
```

Using this query we found two different agents (Splunk and the CloudWatch Agent) running a large number of high-latency list operations. Armed with this data, we could decide to remove or update these controllers, or replace them with another project.

![Query results](../images/query-results.png)

!!! tip
    For more details on this subject, see the following [blog](https://aws.amazon.com/blogs/containers/troubleshooting-amazon-eks-api-servers-with-prometheus/).

## Scheduler
Since the EKS control plane instances are run in a separate AWS account, we cannot scrape those components for metrics (the API Server being the exception). However, since we do have access to the audit logs for these components, we can turn those logs into metrics to see if any of the subsystems are causing a scaling bottleneck.
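As a rough offline illustration of turning exported audit-log events into metrics, the sketch below reimplements the list-latency aggregation from the earlier Logs Insights query in Python. The field names (`requestReceivedTimestamp`, `stageTimestamp`, `verb`, `requestURI`, `userAgent`) follow the Kubernetes audit event format; the sample is illustrative, not a replacement for querying CloudWatch directly.

```python
"""Offline sketch of the list-latency query: average LIST latency
grouped by (requestURI, userAgent) over exported audit records."""
from collections import defaultdict
from datetime import datetime

def latency_seconds(record):
    # Audit events carry RFC3339 microsecond timestamps for when the
    # request was received and when it reached its final stage.
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
    start = datetime.strptime(record["requestReceivedTimestamp"], fmt)
    end = datetime.strptime(record["stageTimestamp"], fmt)
    return (end - start).total_seconds()

def slow_list_callers(records, min_count=50):
    # Mirror the filter/stats/sort stages: keep LIST calls with a
    # requestURI, then average latency per (requestURI, userAgent).
    groups = defaultdict(list)
    for r in records:
        if r.get("verb") == "list" and "requestURI" in r:
            groups[(r["requestURI"], r["userAgent"])].append(latency_seconds(r))
    rows = [(uri, agent, sum(v) / len(v), len(v))
            for (uri, agent), v in groups.items() if len(v) >= min_count]
    return sorted(rows, key=lambda row: row[2], reverse=True)
```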
Let's use CloudWatch Logs Insights to see how many pods sat unscheduled in the scheduler queue.

### Unscheduled pods in the scheduler logs
If we had access to scrape the scheduler metrics directly on a self-managed Kubernetes cluster (such as Kops), we would use the following PromQL to understand the scheduler backlog.

```
max without(instance)(scheduler_pending_pods)
```

Since we do not have access to that metric in EKS, we will use the CloudWatch Logs Insights query below to see the backlog by checking the number of pods that could not be scheduled during a particular timeframe. We could then dig further into the messages at peak time to understand the nature of the bottleneck, for example nodes not spinning up fast enough, or the rate limiter in the scheduler itself.
```
fields timestamp, pod, err, @message
| filter @logStream like "scheduler"
| filter @message like "Unable to schedule pod"
| parse @message  /^.(?<date>\d{4})\s+(?<timestamp>\d+:\d+:\d+\.\d+)\s+\S*\s+\S+\]\s\"(.*?)\"\s+pod=(?<pod>\"(.*?)\")\s+err=(?<err>\"(.*?)\")/
| stats count(*) as count by pod, err
| sort count desc
```

Here we see an error from the scheduler saying a pod would not deploy because a storage PVC was unavailable.

![CloudWatch Logs query](../images/cwl-query.png)

!!! note
    Audit logging must be turned on in the control plane to enable this function. It is also a best practice to limit the log retention so costs do not increase unnecessarily over time. Below is an example of turning on all logging functions using the EKSCTL tool.

```yaml
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
    logRetentionInDays: 10
```

## Kube Controller Manager
Like all other controllers, the Kube Controller Manager has limits on how many operations it can perform at once.
Let's review what some of these flags are by looking at a KOPS configuration where these parameters can be set.

```yaml
  kubeControllerManager:
    concurrentEndpointSyncs: 5
    concurrentReplicasetSyncs: 5
    concurrentNamespaceSyncs: 10
    concurrentServiceaccountTokenSyncs: 5
    concurrentServiceSyncs: 5
    concurrentResourceQuotaSyncs: 5
    concurrentGcSyncs: 20
    kubeAPIBurst: 20
    kubeAPIQPS: "30"
```

These controllers have queues that fill up during times of high churn on a cluster. In this case, we can see the replicaset controller has a large backlog in its queue.

![Queues](../images/queues.png)

There are two different ways of addressing such a situation. If running self-managed, we could simply increase the concurrent goroutines, however this would have an impact on etcd by the KCM processing more data.
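The `kubeAPIQPS`/`kubeAPIBurst` flags above bound how fast the KCM itself may call the API server. A minimal token-bucket sketch of that kind of client-side limiting is shown below; the real limiter lives in client-go, so this is illustrative only.

```python
"""Illustrative token bucket: refill at `qps` tokens/second up to
`burst`, spend one token per allowed request."""
class TokenBucket:
    def __init__(self, qps, burst):
        self.qps, self.burst = qps, burst
        self.tokens = float(burst)  # a full bucket allows an initial burst
        self.last = 0.0

    def allow(self, now):
        # Refill based on elapsed time, capped at burst, then spend one token.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With `qps=30, burst=20` (matching the KOPS values above), at most 20 requests can fire back-to-back before the controller is held to roughly 30 requests per second.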
The other option would be to use `.spec.revisionHistoryLimit` on the deployment to reduce the number of replicaset objects kept around for rollbacks, thus reducing the burden on this controller.

```yaml
spec:
  revisionHistoryLimit: 2
```

Other Kubernetes features can be tuned or turned off to reduce pressure in high-churn systems. For example, if the application in our pods does not need to speak to the k8s API directly, turning off the service account token mounted into those pods would reduce the load on the ServiceAccountTokenSyncs. This is the more desirable way to address such issues where possible.

```yaml
kind: Pod
spec:
  automountServiceAccountToken: false
```

In systems where we cannot get access to the metrics, we can again look at the logs to detect contention. To see the number of requests being processed on a per-controller or an aggregate level, we can use the following CloudWatch Logs Insights query.
### Total Volume Processed by the KCM

```
# Query to count API qps coming from kube-controller-manager, split by controller type.
# If you're seeing values close to 20/sec for any particular controller, it's most likely seeing client-side API throttling.
fields @timestamp, @logStream, @message
| filter @logStream like /kube-apiserver-audit/
| filter userAgent like /kube-controller-manager/
# Exclude lease-related calls (not counted under kcm qps)
| filter requestURI not like "apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager"
# Exclude API discovery calls (not counted under kcm qps)
| filter requestURI not like "?timeout=32s"
# Exclude watch calls (not counted under kcm qps)
| filter verb != "watch"
# If you want to get counts of API calls coming from a specific controller, uncomment the appropriate line below:
# | filter user.username like "system:serviceaccount:kube-system:job-controller"
# | filter user.username like "system:serviceaccount:kube-system:cronjob-controller"
# | filter user.username like "system:serviceaccount:kube-system:deployment-controller"
# | filter user.username like "system:serviceaccount:kube-system:replicaset-controller"
# | filter user.username like "system:serviceaccount:kube-system:horizontal-pod-autoscaler"
# | filter user.username like "system:serviceaccount:kube-system:persistent-volume-binder"
# | filter user.username like "system:serviceaccount:kube-system:endpointslice-controller"
# | filter user.username like "system:serviceaccount:kube-system:endpoint-controller"
# | filter user.username like "system:serviceaccount:kube-system:generic-garbage-collector"
| stats count(*) as count by user.username
| sort count desc
```

The key takeaway here is to look at all the steps in the path (API, scheduler, KCM, etc.) when investigating scalability issues before moving to the detailed troubleshooting phase. Often in production you will need to tune more than one part of Kubernetes to allow the system to work at its best performance. It is easy to inadvertently troubleshoot what is just a symptom (such as a node timeout) of a much larger bottleneck.

## ETCD
etcd uses a memory-mapped file to store key value pairs efficiently. There is a protection mechanism to set the size of this memory space available, commonly set at the 2, 4, and 8GB limits. Fewer objects in the database means less cleanup etcd needs to do when objects are updated and older versions need to be cleaned out. This process of cleaning out old versions of an object is called compaction.
After a number of compaction operations, there is a subsequent process that recovers usable space called defragging, which happens either above a certain threshold or on a fixed schedule of time.

There are a number of user-related items we can do to limit the number of objects in Kubernetes and thus reduce the impact of both the compaction and defragmentation processes. For example, Helm keeps a high `revisionHistoryLimit`. This keeps older objects such as ReplicaSets on the system so rollbacks can be performed. By setting the history limit to 2 we can reduce the number of objects (like ReplicaSets) from 10 down to 2, and as a result put less load on the system.

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  revisionHistoryLimit: 2
```

From a monitoring point of view, if system latency spikes occur in a set pattern separated by hours, it can be helpful to check whether this defragmentation process is the source.
We can see this by using CloudWatch Logs.

If you want to see the start/end times of defrag, use the following query.

```
fields @timestamp, @message
| filter @logStream like /etcd-manager/
| filter @message like /defraging|defraged/
| sort @timestamp asc
```

![Defrag query](../images/defrag.png)
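To correlate these events with latency spikes, the query results can be post-processed offline. The sketch below pairs each `defraging` start with the next `defraged` finish to measure how long each defragmentation window lasted; the log wording follows the etcd-manager output matched by the query above, and the event shape is an assumption about how you export the results.

```python
"""Pair defrag start/finish log events and compute window durations."""
from datetime import datetime

def defrag_windows(events):
    # events: list of (iso_timestamp, message) tuples sorted by time,
    # as exported from the Logs Insights query above.
    windows, start = [], None
    for ts, msg in events:
        t = datetime.fromisoformat(ts)
        if "defraging" in msg:
            start = t
        elif "defraged" in msg and start is not None:
            windows.append((start, (t - start).total_seconds()))
            start = None
    return windows
```

Windows that line up with the hourly latency pattern point at defragmentation as the source.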
---
search:
  exclude: true
---

# Kubernetes Control Plane

The Kubernetes control plane consists of the Kubernetes API Server, Kubernetes Controller Manager, Scheduler, and other components that are required for Kubernetes to function. Scalability limits of these components differ depending on what you are running in the cluster, but the areas with the biggest impact on scaling include the Kubernetes version, utilization, and individual node scaling.

## Use EKS 1.24 or above

EKS 1.24 introduced a number of changes and switched the container runtime to [containerd](https://containerd.io/) instead of docker. Containerd helps clusters scale by increasing individual node performance, limiting container runtime features to closely align with Kubernetes' needs.
Containerd is available in all supported EKS versions, and if you would like to switch to containerd in versions prior to 1.24, use the [`--container-runtime` bootstrap flag](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html#containerd-bootstrap).

## Limit workload and node bursting

!!! Attention
    To avoid reaching the API limits on the control plane, you should limit scaling spikes that increase cluster size in double-digit percentages at a time (e.g. from 1000 nodes to 1100 nodes, or from 4000 to 4500 pods, at once).

The EKS control plane will automatically scale as your cluster grows, but there are limits on how fast it will scale. When you first create an EKS cluster, the control plane will not immediately be able to scale to hundreds of nodes or thousands of pods.
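One way to act on the bursting guidance above is to cap each scale-up step programmatically. The helper below is hypothetical (its name and the 10% default are illustrative, not part of any EKS API): it limits growth to a single-digit-percentage step per iteration.

```python
"""Hypothetical helper: cap each scale-up step below a double-digit
percentage of current cluster size, per the guidance above."""
def next_scale_target(current_nodes, desired_nodes, max_step_pct=10):
    # Grow by at most max_step_pct percent per step, and at least one node.
    step = max(1, current_nodes * max_step_pct // 100)
    return min(desired_nodes, current_nodes + step)
```

For example, growing from 1000 nodes toward 1500 would proceed via 1100 on the first step rather than jumping directly.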
To read more about how EKS scaling has been improved, see [this blog post](https://aws.amazon.com/blogs/containers/amazon-eks-control-plane-auto-scaling-enhancements-improve-speed-by-4x/).

Scaling large applications requires you to adapt your infrastructure so it is fully ready (e.g. warming load balancers). To control the speed of scaling, make sure you are scaling based on metrics that are appropriate for your application. CPU and memory scaling may not accurately predict your application constraints, and using custom metrics (e.g. requests per second) in the Kubernetes Horizontal Pod Autoscaler (HPA) may be a better scaling option.

To use a custom metric, see the examples in the [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics).
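As a sketch of that custom-metrics approach, an HPA object scaling on a requests-per-second pods metric might look as follows. The metric name, target value, and workload names are illustrative, and a metrics adapter must actually be serving the metric for this to work.

```yaml
# Illustrative HPA scaling a deployment on a pods metric (requests per second).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 50
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "100"
```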
If you need advanced scaling, or need to scale based on external sources (e.g., an AWS SQS queue), use [KEDA](https://keda.sh) for event-based workload scaling.

## Scale nodes and pods down safely

### Replace long-running instances

Replacing nodes regularly keeps your cluster healthy by avoiding problems that only surface after extended uptime and configuration drift (e.g., slow memory leaks). Automated replacement will give you good processes and practice for node upgrades and security patching. If every node in your cluster is replaced regularly, less effort is required to maintain a separate process for ongoing maintenance.

Karpenter's [Time To Live (TTL)](https://aws.github.io/aws-eks-best-practices/karpenter/#use-timers-ttl-to-automatically-delete-nodes-from-the-cluster) settings can replace instances after they have been running for a specified amount of time.
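As a sketch of the TTL settings just mentioned, under Karpenter's `v1alpha5` Provisioner API the relevant fields look like the following (field names are version-dependent assumptions; newer Karpenter releases moved node expiry into the NodePool `disruption` block):

```
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Expire (and replace) nodes after 30 days of runtime
  ttlSecondsUntilExpired: 2592000
  # Remove nodes that have been empty for 30 seconds
  ttlSecondsAfterEmpty: 30
```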
Self-managed node groups can use the `max-instance-lifetime` setting to cycle nodes automatically. Managed node groups do not currently have this feature, but you can track the request [on GitHub](https://github.com/aws/containers-roadmap/issues/1190).

### Remove underutilized nodes

You can remove nodes when they have no running workloads using the scale-down threshold of the Kubernetes Cluster Autoscaler via [`--scale-down-utilization-threshold`](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work), or in Karpenter you can use the `ttlSecondsAfterEmpty` provisioner setting.

### Use Pod Disruption Budgets and safe node shutdown

Removing pods and nodes from a Kubernetes cluster requires controllers to update multiple resources (e.g., EndpointSlices). Doing this frequently or too quickly can cause API Server throttling and application outages as changes propagate to controllers.
[Pod Disruption Budgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) are a best practice to protect workload availability by slowing the rate of change as nodes are removed or rescheduled in a cluster.

## Use client-side cache when running Kubectl

Using kubectl commands inefficiently can add additional load to the Kubernetes API Server. Avoid running scripts or automation that use kubectl repeatedly (e.g., in a for loop), or running commands without a local cache.

`kubectl` has a client-side cache that caches discovery information from the cluster to reduce the number of API calls required. The cache is enabled by default and is refreshed every 10 minutes.

If you run kubectl from a container, or without a client-side cache, you may run into API throttling issues.
It is recommended to retain your cluster cache by mounting the `--cache-dir` to avoid making unnecessary API calls.

## Disable kubectl Compression

Disabling kubectl compression in your kubeconfig file can reduce API and client CPU usage. By default, the server compresses data sent to the client to optimize network bandwidth. This adds CPU load on the client and server for every request, and disabling compression can reduce the overhead and latency if you have adequate bandwidth. To disable compression, you can use the `--disable-compression=true` flag or set `disable-compression: true` in your kubeconfig file.

```
apiVersion: v1
clusters:
- cluster:
    server: serverURL
    disable-compression: true
  name: cluster
```

## Shard Cluster Autoscaler

The [Kubernetes Cluster Autoscaler has been tested](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/proposals/scalability_tests.md) to scale up to 1,000 nodes.
On a large cluster with more than 1,000 nodes, it is recommended to run multiple instances of the Cluster Autoscaler in shard mode. Each Cluster Autoscaler instance is configured to scale a set of node groups. The following example shows two cluster autoscaling configurations that are each configured to scale 4 node groups.

ClusterAutoscaler-1

```
autoscalingGroups:
- name: eks-core-node-grp-20220823190924690000000011-80c1660e-030d-476d-cb0d-d04d585a8fcb
  maxSize: 50
  minSize: 2
- name: eks-data_m1-20220824130553925600000011-5ec167fa-ca93-8ca4-53a5-003e1ed8d306
  maxSize: 450
  minSize: 2
- name: eks-data_m2-20220824130733258600000015-aac167fb-8bf7-429d-d032-e195af4e25f5
  maxSize: 450
  minSize: 2
- name: eks-data_m3-20220824130553914900000003-18c167fa-ca7f-23c9-0fea-f9edefbda002
  maxSize: 450
  minSize: 2
```

ClusterAutoscaler-2

```
autoscalingGroups:
- name: eks-data_m4-2022082413055392550000000f-5ec167fa-ca86-6b83-ae9d-1e07ade3e7c4
  maxSize: 450
  minSize: 2
- name: eks-data_m5-20220824130744542100000017-02c167fb-a1f7-3d9e-a583-43b4975c050c
  maxSize: 450
  minSize: 2
- name: eks-data_m6-2022082413055392430000000d-9cc167fa-ca94-132a-04ad-e43166cef41f
  maxSize: 450
  minSize: 2
- name: eks-data_m7-20220824130553921000000009-96c167fa-ca91-d767-0427-91c879ddf5af
  maxSize: 450
  minSize: 2
```

## API Priority and Fairness

![](../images/APF.jpg)

### Overview

<iframe width="560" height="315" src="https://www.youtube.com/embed/YnPPHBawhE0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

To protect itself from being overloaded during periods of increased requests, the API Server limits the number of inflight requests it can have outstanding at a given time. Once this limit is exceeded, the API Server will start rejecting requests and return a 429 HTTP response code for "Too Many Requests" to clients. Having the server drop requests and let clients try again later is preferable to having no server-side limit on the number of requests and overloading the control plane, which could result in degraded performance or availability.

The mechanism Kubernetes uses to configure how these inflight requests are divided among different request types is called [API Priority and Fairness](https://kubernetes.io/docs/concepts/cluster-administration/flow-control/).
The API Server configures the total number of inflight requests it can accept by summing the values specified by the `--max-requests-inflight` and `--max-mutating-requests-inflight` flags. EKS uses the default values of 400 and 200 requests for these flags, so it can dispatch a total of 600 requests at a given time. APF specifies how these 600 requests are divided among different request types. Note that EKS control planes are highly available, with at least 2 API Servers registered for each cluster. This brings the total number of inflight requests across the cluster to 1200.

Two kinds of Kubernetes objects, called PriorityLevelConfigurations and FlowSchemas, configure how the total number of requests is divided among different request types. These objects are maintained by the API Server automatically, and EKS uses the default configuration of these objects for the given Kubernetes minor version. PriorityLevelConfigurations represent a fraction of the total allowed number of requests.
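Each priority level's concurrency limit is derived from its `assuredConcurrencyShares`: the server's total inflight budget is divided among levels in proportion to their shares, rounding up. A minimal sketch of that arithmetic follows; the share values used here are assumed from the upstream Kubernetes defaults, not read from a live cluster:

```python
import math

def concurrency_limits(total_inflight, shares):
    """Divide the server's inflight budget among priority levels in
    proportion to each level's assuredConcurrencyShares, rounding up."""
    total_shares = sum(shares.values())
    return {level: math.ceil(total_inflight * s / total_shares)
            for level, s in shares.items()}

# assuredConcurrencyShares assumed from upstream Kubernetes defaults
shares = {
    "catch-all": 5, "global-default": 20, "leader-election": 10,
    "node-high": 40, "system": 30, "workload-high": 40, "workload-low": 100,
}
limits = concurrency_limits(600, shares)
# e.g. workload-low gets 245 of the 600 requests, catch-all gets 13
```

With a 600-request total, this reproduces the per-level limits reported by the `apiserver_flowcontrol_request_concurrency_limit` metric on EKS 1.24.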
For example, the workload-high PriorityLevelConfiguration is allocated 98 out of the total 600 requests. The sum of the requests allocated across all PriorityLevelConfigurations will equal 600 (or slightly above 600, since the API Server rounds up when a given level is granted a fraction of a request). To check the PriorityLevelConfigurations in your cluster and the number of requests allocated to each, you can run the following command. These are the defaults on EKS 1.24:

```
$ kubectl get --raw /metrics | grep apiserver_flowcontrol_request_concurrency_limit
apiserver_flowcontrol_request_concurrency_limit{priority_level="catch-all"} 13
apiserver_flowcontrol_request_concurrency_limit{priority_level="global-default"} 49
apiserver_flowcontrol_request_concurrency_limit{priority_level="leader-election"} 25
apiserver_flowcontrol_request_concurrency_limit{priority_level="node-high"} 98
apiserver_flowcontrol_request_concurrency_limit{priority_level="system"} 74
apiserver_flowcontrol_request_concurrency_limit{priority_level="workload-high"} 98
apiserver_flowcontrol_request_concurrency_limit{priority_level="workload-low"} 245
```

The second type of object is FlowSchemas. API Server requests with a given set of attributes are classified into the same FlowSchema.
These attributes include either the authenticated user or attributes of the request, such as the API group, namespace, or resource. A FlowSchema also specifies which PriorityLevelConfiguration this type of request should map to. Together, the two objects say, "I want this type of request to count toward this share of inflight requests." When a request hits the API Server, it will check each FlowSchema until one matches all of the required attributes. If multiple FlowSchemas match a request, the API Server will pick the FlowSchema with the smallest matching precedence, which is specified as a property of the object.

The mapping of FlowSchemas to PriorityLevelConfigurations can be viewed using this command:

```
$ kubectl get flowschemas
NAME                           PRIORITYLEVEL     MATCHINGPRECEDENCE   DISTINGUISHERMETHOD   AGE     MISSINGPL
exempt                         exempt            1                    <none>                7h19m   False
eks-exempt                     exempt            2                    <none>                7h19m   False
probes                         exempt            2                    <none>                7h19m   False
system-leader-election         leader-election   100                  ByUser                7h19m   False
endpoint-controller            workload-high     150                  ByUser                7h19m   False
workload-leader-election       leader-election   200                  ByUser                7h19m   False
system-node-high               node-high         400                  ByUser                7h19m   False
system-nodes                   system            500                  ByUser                7h19m   False
kube-controller-manager        workload-high     800                  ByNamespace           7h19m   False
kube-scheduler                 workload-high     800                  ByNamespace           7h19m   False
kube-system-service-accounts   workload-high     900                  ByNamespace           7h19m   False
eks-workload-high              workload-high     1000                 ByUser                7h14m   False
service-accounts               workload-low      9000                 ByUser                7h19m   False
global-default                 global-default    9900                 ByUser                7h19m   False
catch-all                      catch-all         10000                ByUser                7h19m   False
```

PriorityLevelConfigurations can have a type of Queue, Reject, or Exempt. For the Queue and Reject types, a limit is enforced on the maximum number of inflight requests for that priority level, but the behavior differs when that limit is reached.
For example, the workload-high PriorityLevelConfiguration uses the Queue type and has 98 requests available for use by the controller-manager, the endpoint controller, the scheduler, and pods running in the kube-system namespace. Since the Queue type is used, the API Server will attempt to keep requests in memory, hoping that the number of inflight requests drops below 98 before these requests time out. If a given request times out in the queue, or if too many requests are already queued, the API Server has no choice but to drop the request and return a 429 to the client. Note that queuing may prevent a request from receiving a 429, but it comes with the trade-off of increased end-to-end latency for the request.

Now consider the catch-all FlowSchema, which maps to the catch-all PriorityLevelConfiguration with the Reject type.
If clients reach the limit of 13 inflight requests, the API Server will not employ queuing and will drop the requests immediately with a 429 response code. Finally, requests mapped to a PriorityLevelConfiguration with the Exempt type will never receive a 429 and will always be dispatched immediately. This is used for high-priority requests such as healthz requests or requests coming from the system:masters group.

### Monitoring APF and dropped requests

To check whether any requests are being dropped due to APF, the API Server metric `apiserver_flowcontrol_rejected_requests_total` can be monitored to identify the impacted FlowSchemas and PriorityLevelConfigurations.
For example, this metric shows that 100 requests from the service-accounts FlowSchema were dropped due to requests timing out in the workload-low queue:

```
% kubectl get --raw /metrics | grep apiserver_flowcontrol_rejected_requests_total
apiserver_flowcontrol_rejected_requests_total{flow_schema="service-accounts",priority_level="workload-low",reason="time-out"} 100
```

To check how close a given PriorityLevelConfiguration is to receiving 429s or experiencing increased latency due to queuing, you can compare the difference between the concurrency limit and the concurrency in use. In this example, there is a buffer of 100 requests.

```
% kubectl get --raw /metrics | grep 'apiserver_flowcontrol_request_concurrency_limit.*workload-low'
apiserver_flowcontrol_request_concurrency_limit{priority_level="workload-low"} 245

% kubectl get --raw /metrics | grep 'apiserver_flowcontrol_request_concurrency_in_use.*workload-low'
apiserver_flowcontrol_request_concurrency_in_use{flow_schema="service-accounts",priority_level="workload-low"} 145
```

To check whether a given PriorityLevelConfiguration is experiencing queuing (but not necessarily dropped requests), the metric `apiserver_flowcontrol_current_inqueue_requests` can be referenced:

```
% kubectl get --raw /metrics | grep 'apiserver_flowcontrol_current_inqueue_requests.*workload-low'
apiserver_flowcontrol_current_inqueue_requests{flow_schema="service-accounts",priority_level="workload-low"} 10
```

Other useful Prometheus metrics include:

- apiserver_flowcontrol_dispatched_requests_total
- apiserver_flowcontrol_request_execution_seconds
- apiserver_flowcontrol_request_wait_duration_seconds

See the upstream documentation for a complete list of [APF metrics](https://kubernetes.io/docs/concepts/cluster-administration/flow-control/#observability).

### Preventing dropped requests

#### Prevent 429s by changing your workload

When APF is dropping requests because a given PriorityLevelConfiguration exceeds its maximum allowed number of inflight requests, clients in the affected FlowSchemas can reduce the number of requests executing at a given time. This can be accomplished by reducing the total number of requests made during the period when 429s occur. Note that long-running requests, such as expensive list calls, are especially problematic because they count as an inflight request for the entire duration they are executing.
Reducing the number of these expensive requests or optimizing the latency of these list calls (e.g., by reducing the number of objects fetched per request, or by switching to using watch requests) can help reduce the total concurrency required by the given workload.

#### Prevent 429s by changing your APF settings

!!! warning
    Only change the default APF settings if you know what you are doing. Misconfigured APF settings can result in dropped API Server requests and significant workload disruptions.

One other approach for preventing dropped requests is changing the default FlowSchemas or PriorityLevelConfigurations installed on EKS clusters. EKS installs the upstream default settings for FlowSchemas and PriorityLevelConfigurations for the given Kubernetes minor version.
The API Server will automatically reconcile these objects back to their defaults unless the following annotation on the object is set to false:

```
  metadata:
    annotations:
      apf.kubernetes.io/autoupdate-spec: "false"
```

At a high level, APF settings can be modified to either:

- Allocate more inflight capacity to the requests you care about.
- Isolate non-essential or expensive requests that would starve capacity for other request types.

This can be accomplished by changing the default FlowSchemas and PriorityLevelConfigurations or by creating new objects of these kinds. Operators can increase the `assuredConcurrencyShares` value on the relevant PriorityLevelConfigurations objects to increase the fraction of inflight requests they are allocated.
In addition, the number of requests that can be queued at a given time can also be increased if the application can handle the additional latency caused by requests being queued before they are dispatched.

Alternatively, new FlowSchema and PriorityLevelConfigurations objects can be created to match the customer's workload. Be aware that allocating more `assuredConcurrencyShares` to either existing or new PriorityLevelConfigurations will reduce the number of requests that can be handled by other buckets, as the overall limit remains at 600 inflight requests per API Server.

When making changes to the APF defaults, these metrics should be monitored on a non-production cluster to ensure that the settings changes do not cause unintended 429s:

1. The metric `apiserver_flowcontrol_rejected_requests_total` should be monitored for all FlowSchemas to ensure that no buckets start dropping requests.
2. The values of `apiserver_flowcontrol_request_concurrency_limit` and `apiserver_flowcontrol_request_concurrency_in_use` should be compared to ensure that the concurrency in use is not at risk of breaching the limit for that priority level.

One common use case for defining a new FlowSchema and PriorityLevelConfiguration is isolation. Suppose we want to isolate long-running list event calls from pods to their own share of requests. This will prevent important requests from pods using the existing service-accounts FlowSchema from receiving 429s and being starved of request capacity.
Recall that the total number of inflight requests is finite; however, this example demonstrates that the APF settings can be modified to better divide the request capacity for a given workload.

An example FlowSchema object to isolate list event requests:

```
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: list-events-default-service-accounts
spec:
  distinguisherMethod:
    type: ByUser
  matchingPrecedence: 8000
  priorityLevelConfiguration:
    name: catch-all
  rules:
  - resourceRules:
    - apiGroups:
      - '*'
      namespaces:
      - default
      resources:
      - events
      verbs:
      - list
    subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: default
        namespace: default
```

- This FlowSchema captures all list event calls made by service accounts in the default namespace.
- The matching precedence 8000 is lower than the value of 9000 used by the existing service-accounts FlowSchema, so these list event calls will match list-events-default-service-accounts rather than service-accounts.
- We are using the catch-all PriorityLevelConfiguration to isolate these requests.
This bucket allows only 13 inflight requests for the long-running list event calls. Pods that try to issue more than 13 concurrent requests will immediately begin receiving 429 responses.

## Retrieving resources from the API Server

Fetching information from the API Server is expected behavior for clusters of any size. As you scale the number of resources in a cluster, the frequency of requests and the volume of data can quickly become a bottleneck for the control plane, leading to API throttling and slowdowns. Depending on the severity of the latency, this can cause unexpected downtime if you are not careful.

The first step in avoiding these kinds of problems is being aware of what you are requesting and how often. Below is guidance on limiting query volume, following scaling best practices.
The suggestions in this section are presented in order, starting with the options known to scale best.

### Use Shared Informers

When building controllers and automation that integrate with the Kubernetes API, you will often need to fetch information from Kubernetes resources. Polling these resources on a regular basis can place significant load on the API Server.

Using an [informer](https://pkg.go.dev/k8s.io/client-go/informers) from the client-go library gives you the benefit of watching for changes to resources based on events, rather than polling for them. Shared informers reduce load even further by using a shared cache for events and changes, so multiple controllers watching the same resources do not add extra load.

Controllers should avoid polling cluster-wide resources without label and field selectors, especially in large clusters. Each unfiltered poll requires a large amount of unnecessary data to be sent from etcd through the API Server, only to be filtered out by the client. Filtering on labels and namespaces reduces the amount of work the API Server must perform to fulfill the request and the amount of data sent to the client.

### Optimize your Kubernetes API usage

When calling the Kubernetes API from custom controllers or automation, it is important to limit the calls to only the resources you need. Without limits, you can place unneeded load on the API Server and etcd.

It is recommended that you use the watch argument whenever possible. With no arguments, the default behavior is to list objects. To use a watch instead of a list, append `?watch=true` to the end of your API request.
For example, to get all pods in the default namespace with a watch, use:

```
/api/v1/namespaces/default/pods?watch=true
```

If you are listing objects, you should limit the scope of what you list and the amount of data returned. You can limit the data returned by adding the `limit=500` argument to a request. The `fieldSelector` argument and the `/namespace/` path are useful for narrowing the scope of a list as needed. For example, to list only the running pods in the default namespace, use the following API path and arguments:

```
/api/v1/namespaces/default/pods?fieldSelector=status.phase=Running&limit=500
```

Or list all running pods with:

```
/api/v1/pods?fieldSelector=status.phase=Running&limit=500
```

Another option for restricting what a list call returns is to use [`ResourceVersions` (see the Kubernetes documentation)](https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions).
Without a `ResourceVersion` argument, you will receive the most recent version available, which requires the most expensive and slowest quorum read from the etcd database. The resourceVersion depends on the resource you are trying to query and can be found in the `metadata.resourceVersion` field.

There is a special `resourceVersion=0` that returns results from the API Server cache. This can reduce etcd load, but it does not support pagination.

```
/api/v1/namespaces/default/pods?resourceVersion=0
```

Calling the API with no arguments is the most resource intensive option for both the API Server and etcd. This call fetches all pods in all namespaces without pagination or scope limits, and requires a quorum read from etcd:

```
/api/v1/pods
```
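Label filtering composes with `limit` in the same way as the field selector examples above; for instance, listing only pods carrying a hypothetical `app=my-app` label in the default namespace:

```
/api/v1/namespaces/default/pods?labelSelector=app=my-app&limit=500
```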
---
search:
  exclude: true
---

# Kubernetes Data Plane

The Kubernetes data plane includes the EC2 instances, load balancers, storage, and other APIs used by the Kubernetes control plane. For organization, [cluster services](./cluster-services.md) are grouped on a separate page, and load balancer scaling can be found in the [workloads section](./workloads.md). This section focuses on scaling compute resources.

In a cluster with multiple workloads, selecting EC2 instance types can be one of the hardest decisions customers face. There is no single solution that fits every situation. Here are some tips to help you avoid common pitfalls with scaling compute.

## Automatic node autoscaling

We recommend using node autoscaling that reduces toil and integrates tightly with Kubernetes. [Managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) and [Karpenter](https://karpenter.sh/) are recommended for large clusters.

Managed node groups provide the flexibility of Amazon EC2 Auto Scaling groups with added benefits for managed upgrades and configuration. They can be scaled with the [Kubernetes Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) and are a common option for clusters with a variety of compute needs.

Karpenter is an open source, workload-native node autoscaler created by AWS. It scales nodes in a cluster based on workload requirements for resources (e.g. GPUs) and for taints and tolerations (e.g. zone spread) without managing node groups. Because nodes are created directly from EC2, this avoids the default node group quota (450 nodes per group) and provides greater instance selection flexibility with less operational overhead.
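That workload-driven selection is configured declaratively. As a minimal sketch, assuming the `karpenter.sh/v1beta1` NodePool API (the category values here are illustrative, not a recommendation):

```yaml
# Hypothetical NodePool: lets Karpenter choose among compute-optimized (c),
# general-purpose (m), and memory-optimized (r) families rather than
# pinning the cluster to a single instance type.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
```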
\uace0\uac1d\uc740 \uac00\ub2a5\ud558\uba74 Karpenter\ub97c \uc0ac\uc6a9\ud558\ub294 \uac83\uc774 \uc88b\uc2b5\ub2c8\ub2e4.\n\n## \ub2e4\uc591\ud55c EC2 \uc778\uc2a4\ud134\uc2a4 \uc720\ud615 \uc0ac\uc6a9\n\n\uac01 AWS \ub9ac\uc804\uc5d0\ub294 \uc778\uc2a4\ud134\uc2a4 \uc720\ud615\ubcc4\ub85c \uc0ac\uc6a9 \u200b\u200b\uac00\ub2a5\ud55c \uc778\uc2a4\ud134\uc2a4 \uc218\uac00 \uc81c\ud55c\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4. \ud558\ub098\uc758 \uc778\uc2a4\ud134\uc2a4 \uc720\ud615\ub9cc \uc0ac\uc6a9\ud558\ub294 \ud074\ub7ec\uc2a4\ud130\ub97c \uc0dd\uc131\ud558\uace0 \ub9ac\uc804\uc758 \uc6a9\ub7c9\uc744 \ucd08\uacfc\ud558\uc5ec \ub178\ub4dc \uc218\ub97c \ud655\uc7a5\ud558\uba74 \uc0ac\uc6a9 \uac00\ub2a5\ud55c \uc778\uc2a4\ud134\uc2a4\uac00 \uc5c6\ub2e4\ub294 \uc624\ub958\uac00 \ubc1c\uc0dd\ud569\ub2c8\ub2e4. \uc774 \ubb38\uc81c\ub97c \ubc29\uc9c0\ud558\ub824\uba74 \ud074\ub7ec\uc2a4\ud130\uc5d0\uc11c \uc0ac\uc6a9\ud560 \uc218 \uc788\ub294 \uc778\uc2a4\ud134\uc2a4 \uc720\ud615\uc744 \uc784\uc758\ub85c \uc81c\ud55c\ud574\uc11c\ub294 \uc548 \ub429\ub2c8\ub2e4.\n\nKarpenter\ub294 \uae30\ubcf8\uc801\uc73c\ub85c \ud638\ud658\ub418\ub294 \ub2e4\uc591\ud55c \uc778\uc2a4\ud134\uc2a4 \uc720\ud615\uc744 \uc0ac\uc6a9\ud558\uba70 \ubcf4\ub958 \uc911\uc778 \uc6cc\ud06c\ub85c\ub4dc \uc694\uad6c \uc0ac\ud56d, \uac00\uc6a9\uc131 \ubc0f \ube44\uc6a9\uc744 \uae30\ubc18\uc73c\ub85c \ud504\ub85c\ube44\uc800\ub2dd \uc2dc \uc778\uc2a4\ud134\uc2a4\ub97c \uc120\ud0dd\ud569\ub2c8\ub2e4. 
You can define the list of instance types used with the `karpenter.k8s.aws/instance-category` key in [NodePools](https://karpenter.sh/docs/concepts/nodepools/#instance-types).

The Kubernetes Cluster Autoscaler requires node groups to use similarly sized instance types so that they scale consistently. You should create multiple groups based on CPU and memory sizes and scale them independently. Use [ec2-instance-selector](https://github.com/aws/amazon-ec2-instance-selector) to identify instances that are similarly sized for your node groups.

```
ec2-instance-selector --service eks --vcpus-min 8 --memory-min 16
a1.2xlarge
a1.4xlarge
a1.metal
c4.4xlarge
c4.8xlarge
c5.12xlarge
c5.18xlarge
c5.24xlarge
c5.2xlarge
c5.4xlarge
c5.9xlarge
c5.metal
```

## Prefer larger nodes to reduce API server load

When deciding what instance types to use, fewer, larger nodes put less load on the Kubernetes control plane because fewer kubelets and DaemonSets are running. However, large nodes may not be fully utilized the way smaller nodes are.
Node size should be evaluated based on your workload's availability and scalability requirements.

A cluster with 3 u-24tb1.metal instances (24 TB of memory and 448 cores) has 3 kubelets and is limited to 110 pods per node by default. If your pods use 4 cores each, this may be expected (4 cores x 110 = 440 cores per node). With a 3-node cluster, your ability to handle an instance incident is low because a single instance outage can impact 1/3 of the cluster. You should specify node requirements and pod spread in your workloads so the Kubernetes scheduler can place workloads properly.

Workloads should define the resources they need and the availability required via taints, tolerations, and [PodTopologySpread](https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/).
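For example (a hedged sketch; the deployment name, label, and taint key are hypothetical and not part of this guide), a workload can state its resource needs and tolerate a taint on dedicated nodes like this:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment          # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      dev: my-deployment
  template:
    metadata:
      labels:
        dev: my-deployment
    spec:
      tolerations:
        - key: "dedicated"     # hypothetical taint key applied to the target nodes
          operator: "Equal"
          value: "batch"
          effect: "NoSchedule"
      containers:
        - name: app
          image: public.ecr.aws/docker/library/nginx
          resources:
            requests:
              cpu: 500m        # resource requests let the scheduler place pods correctly
              memory: 1Gi
```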
You should prefer the largest node size that can be fully utilized while still meeting your availability goals, since it reduces control plane load, lowers operational effort, and reduces cost.

The Kubernetes scheduler automatically tries to spread workloads across Availability Zones and hosts if resources are available. If no capacity is available, the Kubernetes Cluster Autoscaler attempts to add nodes in each Availability Zone evenly. Karpenter attempts to add nodes as quickly and cheaply as possible unless the workload specifies other requirements.

To force workloads to spread with the scheduler and new nodes to be created across Availability Zones, you should use topologySpreadConstraints:

```
spec:
  topologySpreadConstraints:
    - maxSkew: 3
      topologyKey: "topology.kubernetes.io/zone"
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          dev: my-deployment
    - maxSkew: 2
      topologyKey: "kubernetes.io/hostname"
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          dev: my-deployment
```

## Use similar node sizes for consistent workload performance

Workloads should define what size nodes they need to run on to allow consistent performance and predictable scaling. A workload requesting 500m CPU will perform differently on an instance with 4 cores than on one with 16 cores. Avoid instance types that use burstable CPUs, such as T series instances.

To get consistent performance, a workload can use the [supported Karpenter labels](https://karpenter.sh/docs/concepts/scheduling/#labels) to target specific instance sizes:

```
kind: Deployment
...
spec:
  template:
    spec:
      nodeSelector:
        karpenter.k8s.aws/instance-size: 8xlarge
      containers:
      ...
```

Workloads being scheduled in a cluster that uses the Kubernetes Cluster Autoscaler should match a node selector to node groups based on label matching:

```
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: eks.amazonaws.com/nodegroup
            operator: In
            values:
            - 8-core-node-group    # match your node group name
```

## Use compute resources efficiently

Compute resources include EC2 instances and Availability Zones. Using compute resources effectively increases your scalability, availability, and performance, and reduces your total cost. Efficient resource usage is extremely difficult to predict in an autoscaling environment with multiple applications. [Karpenter](https://karpenter.sh/) was created to provision instances on demand based on workload requirements, maximizing utilization and flexibility.

With Karpenter you can declare the types of compute resources your workloads need without first creating node groups or configuring label tainting for specific nodes. See the [Karpenter best practices](https://aws.github.io/aws-eks-best-practices/karpenter/) for more information.
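To illustrate declaring compute requirements directly in the workload (a sketch; it assumes a NodePool that allows GPU instance types and the NVIDIA device plugin exposing the `nvidia.com/gpu` resource, neither of which is covered in this guide), a pod that requests a GPU causes Karpenter to launch a GPU-capable instance for it:

```
apiVersion: v1
kind: Pod
metadata:
  name: gpu-app                # hypothetical name
spec:
  containers:
    - name: app
      image: public.ecr.aws/docker/library/nginx   # placeholder image
      resources:
        requests:
          nvidia.com/gpu: 1    # Karpenter provisions a GPU instance type to satisfy this
        limits:
          nvidia.com/gpu: 1
```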
Try enabling [consolidation](https://aws.github.io/aws-eks-best-practices/karpenter/#configure-requestslimits-for-all-non-cpu-resources-when-using-consolidation) in your Karpenter provisioner to replace under-utilized nodes.

## Automate Amazon Machine Image (AMI) updates

Keeping worker node components up to date ensures you have the latest security patches and features that are compatible with the Kubernetes API. Updating the kubelet is the most important component for Kubernetes functionality, but automating OS, kernel, and locally installed application patches reduces maintenance effort as you scale.

It is recommended that you use the latest [Amazon EKS optimized Amazon Linux 2](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) or [Amazon EKS optimized Bottlerocket AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami-bottlerocket.html) for your node images. Karpenter automatically uses the [latest available AMI](https://karpenter.sh/docs/concepts/nodepools/#instance-types) to provision new nodes in the cluster.
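The consolidation setting mentioned above can be enabled in the NodePool disruption block; a minimal sketch, assuming the Karpenter `v1beta1` API (the name `default` is a placeholder):

```
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default                # placeholder name
spec:
  disruption:
    consolidationPolicy: WhenUnderutilized   # replace or remove under-utilized nodes
```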
Managed node groups update AMIs during a [node group update](https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html), but they do not update the AMI ID at node provisioning time.

For managed node groups you need to update the Auto Scaling Group (ASG) launch template with the new AMI ID when a patch release becomes available. AMI minor versions (e.g. 1.23.5 to 1.24.3) are available in the EKS console and API as [node group upgrades](https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html). Patch release versions (e.g. 1.23.5 to 1.23.6) are not presented as upgrades for node groups.
If you want to keep your node groups up to date with AMI patch releases, you need to create a new launch template version and have the node group replace its instances with the new AMI release.

You can find the latest available AMI on [this page](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) or use the AWS CLI:

```
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.24/amazon-linux-2/recommended/image_id \
  --query "Parameter.Value" \
  --output text
```

## Use multiple EBS volumes for containers

EBS volumes have an input/output (I/O) quota based on the type of volume (e.g. gp3) and the size of the disk.
If applications share a single EBS root volume with the host, the disk quota for the entire host can be exhausted, leaving other applications waiting for available capacity. Applications write files to disk when they write to the overlay partition, when they mount local volumes from the host, and, depending on the logging agent used, when they log to standard out (STDOUT).

To avoid exhausting disk I/O, you should mount a second volume to the container state folder (e.g. /run/containerd), use separate EBS volumes for workload storage, and disable unnecessary local logging.

To mount a second volume to your EC2 instances using [eksctl](https://eksctl.io/), you can use a node group with this configuration:

```
managedNodeGroups:
  - name: al2-workers
    amiFamily: AmazonLinux2
    desiredCapacity: 2
    volumeSize: 80
    additionalVolumes:
      - volumeName: '/dev/sdz'
        volumeSize: 100
    preBootstrapCommands:
    - |
      systemctl stop containerd
      mkfs -t ext4 /dev/nvme1n1
      rm -rf /var/lib/containerd/*
      mount /dev/nvme1n1 /var/lib/containerd/
      systemctl start containerd
```

If you use Terraform to provision your node groups, see the examples in [EKS Blueprints for terraform](https://aws-ia.github.io/terraform-aws-eks-blueprints/patterns/stateful/#eks-managed-nodegroup-w-multiple-volumes). If you use Karpenter to provision nodes, you can use [`blockDeviceMappings`](https://karpenter.sh/docs/concepts/nodeclasses/#specblockdevicemappings) together with node user data to add additional volumes.

To mount an EBS volume directly to your pod, you should use the [AWS EBS CSI driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver) and consume a volume with a storage class:

```
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: public.ecr.aws/docker/library/nginx
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
```

## Avoid instances with low EBS attach limits if workloads use EBS volumes

EBS is one of the easiest ways for workloads to have persistent storage, but it also comes with scalability limitations. Each instance type has a maximum number of [EBS volumes that can be attached](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html). Workloads need to declare what instance types they should run on and limit the number of replicas on a single instance with Kubernetes taints.

## Disable unnecessary logging to disk

Avoid unnecessary local logging by not running your applications with debug logging in production and by disabling logging that reads from and writes to disk frequently. Journald is a local logging service that keeps a log buffer in memory and flushes it to disk periodically. Journald is preferred over syslog, which immediately writes every line to disk.
Disabling syslog also lowers the total amount of storage you need and avoids the need for complicated log rotation rules. To disable syslog, you can include the following snippet in your cloud-init configuration:

```
runcmd:
  - [ systemctl, disable, --now, syslog.service ]
```

## Patch instances in place when OS update speed is a necessity

!!! Attention
    Patching instances should only be done when required. Amazon recommends treating infrastructure as immutable and thoroughly testing updates that are promoted through lower environments, the same way applications are. This section applies when that is not possible.

It takes only seconds to install a package on an existing Linux host without disrupting containerized workloads.
The package can be installed and validated without cordoning, draining, or replacing the instance.

To replace an instance, you first need to create, validate, and deploy a new AMI. The instance needs a replacement created, and the old instance needs to be cordoned and drained. Then workloads need to be created on the new instance, verified, and the process repeated for every instance that needs to be patched. It takes hours, days, or weeks to replace instances safely without disrupting workloads.

Amazon recommends using immutable infrastructure that is built, tested, and promoted from an automated, declarative system, but if you have a requirement to patch systems quickly, you should patch systems in place and replace them as new AMIs are made available.
Because of the large time difference between patching and replacing systems, we recommend using [AWS Systems Manager Patch Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html) to automate node patching when it is required.

Patching nodes allows you to quickly roll out security updates and replace instances on a regular schedule after the AMI has been updated. If you are using an operating system with a read-only root file system such as [Flatcar Container Linux](https://flatcar-linux.org/) or [Bottlerocket OS](https://github.com/bottlerocket-os/bottlerocket), we recommend using the update operators that work with those operating systems.
The [Flatcar Linux update operator](https://github.com/flatcar/flatcar-linux-update-operator) and the [Bottlerocket update operator](https://github.com/bottlerocket-os/bottlerocket-update-operator) keep nodes up to date automatically by rebooting instances.
                                AMI                                                                                                                                                                                           Amazon                                                                                                 AMI                                               AWS Systems Manager Patch Manager  https   docs aws amazon com systems manager latest userguide systems manager patch html                                                                       AMI                                         Flatcar Container Linux  https   platcar linux org       Bottlerocket OS  https   github com bottlerocket os bottlerocket                                                                                  Flatcar Linux update operator  https   github com platcar platcar linux update operator     Bottlerocket update operator  https   github com bottlerocket os bottlerocket update operator                                     "}
{"questions":"eks exclude true search","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# Running highly available applications\n\nYour customers expect your application to be always available, including when you're making changes and especially during spikes in traffic. A scalable and resilient architecture keeps your applications and services running without disruptions, which keeps your users happy. A scalable infrastructure grows and shrinks based on the needs of the business. Eliminating single points of failure is a critical step towards improving an application's availability and making it resilient.\n\nWith Kubernetes, you can operate your applications and run them in a highly available and resilient fashion. 
Its declarative management means that once you have set up the application, Kubernetes will continuously try to [match the current state with the desired state](https:\/\/kubernetes.io\/docs\/concepts\/architecture\/controller\/#desired-vs-current).\n\n## Recommendations\n\n### Avoid running singleton Pods\n\nIf your entire application runs in a single Pod, your application will be unavailable if that Pod gets terminated. Instead of deploying applications using individual Pods, create [Deployments](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/). If a Pod created by a Deployment fails or gets terminated, the Deployment [controller](https:\/\/kubernetes.io\/docs\/concepts\/architecture\/controller\/) will start a new Pod to ensure the specified number of replica Pods is always running. 
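The Deployment recommendation above can be sketched as a minimal manifest; the `web` name and image are placeholders, not taken from the guide:

```yaml
# Minimal Deployment sketch: three replicas of a hypothetical web app.
# If any Pod fails or is terminated, the Deployment controller replaces it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.16-alpine
```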
\n\n### Run multiple replicas\n\nRunning multiple replica Pods of an app using a Deployment helps it run in a highly available manner. If one replica fails, the remaining replicas will still function, albeit at reduced capacity, until Kubernetes creates another Pod to make up for the loss. Furthermore, you can use the [Horizontal Pod Autoscaler](https:\/\/kubernetes.io\/docs\/tasks\/run-application\/horizontal-pod-autoscale\/) to scale replicas automatically based on workload demand. \n\n### Schedule replicas across nodes\n\nRunning multiple replicas won't be very useful if all the replicas are running on the same node and the node becomes unavailable. Consider using pod anti-affinity or pod topology spread constraints to spread replicas of a Deployment across multiple worker nodes. 
\n\nYou can further improve a typical application's reliability by running it across multiple AZs. \n\n#### Use pod anti-affinity rules\n\nThe manifest below tells the Kubernetes scheduler to *prefer* to place Pods on separate nodes and AZs. It doesn't *require* distinct nodes or AZs, because if it did, Kubernetes would not be able to schedule any more Pods once there is a Pod running in each AZ. If your application requires just three replicas, you can use `requiredDuringSchedulingIgnoredDuringExecution` for `topologyKey: topology.kubernetes.io\/zone`, and the Kubernetes scheduler will not schedule two Pods in the same AZ.\n\n```\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: spread-host-az\n  labels:\n    app: web-server\nspec:\n  replicas: 4\n  selector:\n    matchLabels:\n      app: web-server\n  template:\n    metadata:\n      labels:\n        app: web-server\n    spec:\n      affinity:\n        podAntiAffinity:\n          preferredDuringSchedulingIgnoredDuringExecution:\n          - podAffinityTerm:\n              labelSelector:\n                matchExpressions:\n                - key: app\n                  operator: In\n                  values:\n                  - web-server\n              topologyKey: topology.kubernetes.io\/zone\n            weight: 100\n          - podAffinityTerm:\n              labelSelector:\n                matchExpressions:\n                - key: app\n                  operator: In\n                  values:\n                  - web-server\n              topologyKey: kubernetes.io\/hostname\n            weight: 99\n      containers:\n      - name: web-app\n        image: nginx:1.16-alpine\n```\n\n#### Use pod topology spread constraints\n\nLike pod anti-affinity rules, pod topology spread constraints let you make your application available across different failure (or topology) domains, such as hosts or AZs. This approach works very well when you are trying to ensure fault tolerance and availability by having multiple replicas in each of the different topology domains. Pod anti-affinity rules, by contrast, can easily produce a result where you have a single replica in a topology domain, because Pods with anti-affinity toward each other have a repelling effect. In such cases, a single replica on a dedicated node is neither ideal for fault tolerance nor a good use of resources. 
With topology spread constraints, you have more control over the spread or distribution that the scheduler should try to apply across topology domains. Here are a few important properties you can use with this approach:\n1. `MaxSkew` is used to control or determine the maximum degree to which Pods can be spread unevenly across the topology domains. For example, if an application has 10 replicas and is deployed across 3 AZs, you can't get an even spread, but you can influence how uneven the distribution will be. In this case, `MaxSkew` can be anything between 1 and 10. A value of 1 means you can potentially end up with a spread like `4,3,3`, `3,4,3`, or `3,3,4` across the 3 AZs. In contrast, a value of 10 means you can potentially end up with a spread like `10,0,0`, `0,10,0`, or `0,0,10` across the 3 AZs.\n2. `TopologyKey` is a key for one of the node labels and defines the type of topology domain that should be used for the Pod distribution. 
For example, a zonal distribution would have the following key-value pair:\n```\ntopologyKey: \"topology.kubernetes.io\/zone\"\n```\n3. The `WhenUnsatisfiable` property is used to determine how you want the scheduler to respond if the desired constraints can't be satisfied.\n4. The `LabelSelector` is used to find matching Pods so that the scheduler can be aware of them when deciding where to place Pods in accordance with the constraints you specify.\n\nBeyond the fields above, there are other fields that you can read about in the [Kubernetes documentation](https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/topology-spread-constraints\/).\n\n![Pod topology spread constraints across 3 AZs](.\/images\/pod-topology-spread-constraints.jpg)\n\n```\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: spread-host-az\n  labels:\n    app: web-server\nspec:\n  replicas: 10\n  selector:\n    matchLabels:\n      app: web-server\n  template:\n    metadata:\n      labels:\n        app: web-server\n    spec:\n      topologySpreadConstraints:\n      - maxSkew: 1\n        topologyKey: \"topology.kubernetes.io\/zone\"\n        whenUnsatisfiable: ScheduleAnyway\n        labelSelector:\n          matchLabels:\n            app: web-server\n      containers:\n      - name: web-app\n        image: nginx:1.16-alpine\n```\n\n### Run the Kubernetes Metrics Server\n\nInstall the Kubernetes [Metrics Server](https:\/\/github.com\/kubernetes-sigs\/metrics-server) to help scale your applications. Kubernetes autoscaler add-ons like the [HPA](https:\/\/kubernetes.io\/docs\/tasks\/run-application\/horizontal-pod-autoscale\/) and the [VPA](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/vertical-pod-autoscaler) track an application's metrics to scale it. The Metrics Server collects resource metrics that can be used to make scaling decisions. The metrics are collected from kubelets and served in the [Metrics API format](https:\/\/github.com\/kubernetes\/metrics).\n\nThe Metrics Server doesn't retain any data, and it's not a monitoring solution. Its purpose is to expose CPU and memory usage metrics to other systems. If you want to track your application's state over time, you need a monitoring tool like Prometheus or Amazon CloudWatch. 
\n\nFollow the [EKS documentation](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/metrics-server.html) to install the Metrics Server in your EKS cluster. \n\n## Horizontal Pod Autoscaler (HPA)\n\nThe HPA can automatically scale your application in response to demand and help you avoid impacting your customers during peak traffic. It is implemented as a control loop in Kubernetes that periodically queries metrics from APIs that provide resource metrics.\n\nThe HPA can retrieve metrics from the following APIs:\n1. `metrics.k8s.io`, also known as the Resource Metrics API \u2014 provides CPU and memory usage for Pods\n2. `custom.metrics.k8s.io` \u2014 provides metrics from other metric collectors like Prometheus; these metrics are __internal__ to your Kubernetes cluster. \n3. `external.metrics.k8s.io` \u2014 provides metrics that are __external__ to your Kubernetes cluster (e.g., SQS queue length, ELB latency).\n\nYou must use one of these three APIs to provide the metrics to scale your application. \n\n### Scaling applications based on custom or external metrics\n\nYou can use custom or external metrics to scale your application on metrics other than CPU or memory utilization. A [Custom Metrics](https:\/\/github.com\/kubernetes\/community\/blob\/master\/contributors\/design-proposals\/instrumentation\/custom-metrics-api.md) API server provides the `custom-metrics.k8s.io` API that the HPA can use to autoscale applications. \n\nYou can use the [Prometheus Adapter for Kubernetes Metrics APIs](https:\/\/github.com\/directxman12\/k8s-prometheus-adapter) to collect metrics from Prometheus and use them with the HPA. 
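Whichever API supplies the metric, the HPA object itself is declared the same way; a minimal sketch against the resource metrics API, targeting a hypothetical `web` Deployment:

```yaml
# HPA sketch: keep average CPU utilization near 50%, scaling the
# hypothetical "web" Deployment between 3 and 10 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```

A custom or external metric would replace the `Resource` entry in `metrics` with a `Pods` or `External` entry served by one of the other two APIs.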
The Prometheus adapter exposes Prometheus metrics in the [Metrics API format](https:\/\/github.com\/kubernetes\/metrics\/blob\/master\/pkg\/apis\/metrics\/v1alpha1\/types.go). A list of all custom metrics implementations can be found in the [Kubernetes documentation](https:\/\/github.com\/kubernetes\/metrics\/blob\/master\/IMPLEMENTATIONS.md#custom-metrics-api). \n\nOnce the Prometheus adapter is deployed, you can query custom metrics using kubectl:\n`kubectl get --raw \/apis\/custom.metrics.k8s.io\/v1beta1\/`\n\n[External metrics](https:\/\/github.com\/kubernetes\/community\/blob\/master\/contributors\/design-proposals\/instrumentation\/external-metrics-api.md), as the name suggests, give the Horizontal Pod Autoscaler the ability to scale deployments using metrics that are external to the Kubernetes cluster. 
For example, in batch processing workloads, it is common to scale the number of replicas based on the number of jobs in flight in an SQS queue.\n\nTo autoscale Kubernetes workloads, you can use KEDA (Kubernetes Event-driven Autoscaling), an open-source project that can drive container scaling based on a number of custom events. This [AWS blog](https:\/\/aws.amazon.com\/blogs\/mt\/autoscaling-kubernetes-workloads-with-keda-using-amazon-managed-service-for-prometheus-metrics\/) outlines how to use Amazon Managed Service for Prometheus to autoscale Kubernetes workloads.\n\n## Vertical Pod Autoscaler (VPA)\n\nThe VPA automatically adjusts the CPU and memory reservation for your Pods to help you \u201cright-size\u201d your applications. 
For applications that need to be scaled vertically by increasing resource allocation, you can use the [VPA](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/vertical-pod-autoscaler) to automatically scale Pod replicas or provide scaling recommendations.\n\nYour application may become temporarily unavailable if the VPA needs to scale it, because the VPA's current implementation does not perform in-place adjustments to Pods; instead, it recreates the Pod that needs to be scaled. \n\nThe [EKS documentation](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/vertical-pod-autoscaler.html) includes a walkthrough for setting up the VPA. \n\nThe [Fairwinds Goldilocks](https:\/\/github.com\/FairwindsOps\/goldilocks\/) project provides a dashboard to visualize VPA recommendations for CPU and memory requests and limits. Its VPA update mode allows you to auto-scale Pods based on the VPA recommendations. 
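A VPA object can be sketched as follows, assuming the VPA CRDs are installed and targeting a hypothetical `web` Deployment; `updateMode: "Off"` only surfaces recommendations and never recreates Pods:

```yaml
# VPA sketch: recommend (but don't apply) CPU/memory requests
# for the hypothetical "web" Deployment.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  updatePolicy:
    updateMode: "Off"
```

Switching `updateMode` to `Auto` lets the VPA evict and recreate Pods with the recommended requests, with the temporary-unavailability caveat described above.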
\n\n## Updating applications\n\nModern applications require rapid innovation with a high degree of stability and availability. Kubernetes gives you the tools to update your applications continuously without disrupting your customers. \n\nLet's look at some of the best practices that make it possible to quickly deploy changes without sacrificing availability.\n\n### Have a mechanism to perform rollbacks\n\nHaving an undo button can avert disasters. It is a best practice to test deployments in a separate lower environment (a test or development environment) before updating the production cluster. Using a CI\/CD pipeline can help you automate and test deployments. With a continuous deployment pipeline, you can quickly revert to an older version if the upgrade turns out to be defective. 
\n\nYou can use Deployments to update a running application. This is typically done by updating the container image. You can use `kubectl` to update a Deployment like this:\n\n```bash\nkubectl set image deployment.apps\/nginx-deployment nginx=nginx:1.16.1 --record\n```\n\nThe `--record` argument records the changes to the Deployment and helps you if you need to perform a rollback. `kubectl rollout history deployment` shows you the recorded changes to Deployments in your cluster. You can roll back a change with `kubectl rollout undo deployment <DEPLOYMENT_NAME>`.\n\nBy default, when you update a Deployment in a way that requires recreating Pods, the Deployment performs a [rolling update](https:\/\/kubernetes.io\/docs\/tutorials\/kubernetes-basics\/update\/update-intro\/). In other words, Kubernetes updates only a portion of the running Pods in a Deployment, not all Pods at once. 
The `RollingUpdateStrategy` property lets you control how Kubernetes performs rolling updates. \n\nWhen performing a *rolling update* of a Deployment, you can use the [`Max Unavailable`](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/#max-unavailable) property to specify the maximum number of Pods that can be unavailable during the update. The `Max Surge` property of a Deployment lets you set the maximum number of Pods that can be created over the desired number of Pods.\n\nConsider adjusting `max unavailable` to ensure that a rollout doesn't disrupt your customers. For example, Kubernetes sets a `max unavailable` of 25% by default. This means that if you have 100 Pods, only 75 Pods may be actively working during a rollout. If your application needs a minimum of 80 Pods, this rollout can be disruptive. 
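The rollout budget lives in the Deployment spec; a sketch that caps unavailability at 20% for the 100-Pod example above (the percentages are illustrative):

```yaml
# Rolling update sketch: never drop below 80 ready Pods (100 - 20%),
# and create at most 10 extra Pods during the rollout.
spec:
  replicas: 100
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 20%
      maxSurge: 10%
```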
Instead, you can set `max unavailable` to 20% to ensure that there are at least 80 working Pods throughout the rollout. \n\n### Use blue\/green deployments\n\nChanges are inherently risky, but changes that can't be undone can be potentially catastrophic. Change procedures that allow you to effectively turn back time through a *rollback* make enhancements and experimentation safer. Blue\/green deployments give you a method to retract changes quickly if things go wrong. In this deployment strategy, you create an environment for the new version. This environment is identical to the current version of the application being updated. Once the new environment is provisioned, traffic is routed to it. If the new version produces the desired results without generating errors, the old environment is terminated. Otherwise, traffic is restored to the old version. 
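In Kubernetes, the traffic switch in a blue/green rollout is typically just a Service selector change; a sketch assuming the two Deployments label their Pods with a hypothetical `version` label:

```yaml
# Service sketch: traffic goes to Pods labeled version=blue.
# To cut over, change "blue" to "green" and re-apply; to roll
# back, change it back.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
    version: blue
  ports:
  - port: 80
    targetPort: 8080
```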
\n\n\uae30\uc874 \ubc84\uc804\uc758 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\uc640 \ub3d9\uc77c\ud55c \uc0c8 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\ub97c \uc0dd\uc131\ud558\uc5ec \ucfe0\ubc84\ub124\ud2f0\uc2a4\uc5d0\uc11c \ube14\ub8e8\/\uadf8\ub9b0 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\ub97c \uc218\ud589\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc0c8 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\uc758 \ud30c\ub4dc\uac00 \uc624\ub958 \uc5c6\uc774 \uc2e4\ud589\ub418\uace0 \uc788\ub294\uc9c0 \ud655\uc778\ud588\uc73c\uba74, \ud2b8\ub798\ud53d\uc744 \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc758 \ud30c\ub4dc\ub85c \ub77c\uc6b0\ud305\ud558\ub294 \uc11c\ube44\uc2a4\uc758 `selector` \uc2a4\ud399\uc744 \ubcc0\uacbd\ud558\uc5ec \uc0c8 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\ub85c \ud2b8\ub798\ud53d\uc744 \ubcf4\ub0b4\uae30 \uc2dc\uc791\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n[Flux](https:\/\/fluxcd.io), [Jenkins](https:\/\/www.jenkins.io), [Spinnaker](https:\/\/spinnaker.io)\uc640 \uac19\uc740 \ub9ce\uc740 \uc9c0\uc18d\uc801 \ud1b5\ud569(CI) \ub3c4\uad6c\ub97c \uc0ac\uc6a9\ud558\uba74 \ube14\ub8e8\/\uadf8\ub9b0 \ubc30\ud3ec\ub97c \uc790\ub3d9\ud654\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ucfe0\ubc84\ub124\ud2f0\uc2a4 \ube14\ub85c\uadf8\uc5d0\ub294 Jenkins\ub97c \uc0ac\uc6a9\ud55c \ub2e8\uacc4\ubcc4 \uc124\uba85\uc774 \ud3ec\ud568\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4: [Jenkins\ub97c \uc0ac\uc6a9\ud55c \ucfe0\ubc84\ub124\ud2f0\uc2a4\uc758 \uc81c\ub85c \ub2e4\uc6b4\ud0c0\uc784 \ubc30\ud3ec](https:\/\/kubernetes.io\/blog\/2018\/04\/30\/zero-downtime-deployment-kubernetes-jenkins\/)\n\n### Canary \ub514\ud50c\ub85c\uc774\uba3c\ud2b8 \uc0ac\uc6a9\ud558\uae30\n\nCanary \ubc30\ud3ec\ub294 \ube14\ub8e8\/\uadf8\ub9b0 \ubc30\ud3ec\uc758 \ubcc0\ud615\uc73c\ub85c, \ubcc0\uacbd\uc73c\ub85c \uc778\ud55c \uc704\ud5d8\uc744 \ud06c\uac8c \uc81c\uac70\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. 
\uc774 \ubc30\ud3ec \uc804\ub7b5\uc5d0\uc11c\ub294 \uae30\uc874 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\uc640 \ud568\uaed8 \ub354 \uc801\uc740 \uc218\uc758 \ud30c\ub4dc\uac00 \ud3ec\ud568\ub41c \uc0c8 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\ub97c \uc0dd\uc131\ud558\uace0 \uc18c\ub7c9\uc758 \ud2b8\ub798\ud53d\uc744 \uc0c8 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\ub85c \uc804\ud658\ud558\ub294 \uac83\uc785\ub2c8\ub2e4. \uc9c0\ud45c\uc5d0\uc11c \uc0c8 \ubc84\uc804\uc774 \uae30\uc874 \ubc84\uc804\uacfc \uac19\uac70\ub098 \ub354 \ub098\uc740 \uc131\ub2a5\uc744 \ubcf4\uc778\ub2e4\uba74, \uc0c8 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\ub85c \ud5a5\ud558\ub294 \ud2b8\ub798\ud53d\uc744 \uc810\uc9c4\uc801\uc73c\ub85c \ub298\ub9ac\uba74\uc11c \ubaa8\ub4e0 \ud2b8\ub798\ud53d\uc774 \uc0c8 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\ub85c \uc804\ud658\ub420 \ub54c\uae4c\uc9c0 \uaddc\ubaa8\ub97c \ub298\ub9bd\ub2c8\ub2e4. \ub9cc\uc57d \ubb38\uc81c\uac00 \ubc1c\uc0dd\ud558\uba74 \ubaa8\ub4e0 \ud2b8\ub798\ud53d\uc744 \uc774\uc804 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\ub85c \ub77c\uc6b0\ud305\ud558\uace0 \uc0c8 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\ub85c\uc758 \ud2b8\ub798\ud53d \uc804\uc1a1\uc744 \uc911\ub2e8\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\ucfe0\ubc84\ub124\ud2f0\uc2a4\ub294 canary \ubc30\ud3ec\ub97c \uc218\ud589\ud558\ub294 \uae30\ubcf8 \ubc29\ubc95\uc744 \uc81c\uacf5\ud558\uc9c0 \uc54a\uc9c0\ub9cc, [Flagger](https:\/\/github.com\/weaveworks\/flagger)\uc640 \uac19\uc740 \ub3c4\uad6c\ub97c [Istio](https:\/\/docs.flagger.app\/tutorials\/istio-progressive-delivery) \ub610\ub294 [App Mesh](https:\/\/docs.flagger.app\/install\/flagger-install-on-eks-appmesh)\uc640 \ud568\uaed8 \uc0ac\uc6a9\ud560 \uc218 \uc788\ub2e4.\n\n\n## \uc0c1\ud0dc \uc810\uac80 \ubc0f \uc790\uac00 \ubcf5\uad6c\n\n\ubc84\uadf8\uac00 \uc5c6\ub294 \uc18c\ud504\ud2b8\uc6e8\uc5b4\ub294 \uc5c6\uc9c0\ub9cc \ucfe0\ubc84\ub124\ud2f0\uc2a4\ub97c \uc0ac\uc6a9\ud558\uba74 \uc18c\ud504\ud2b8\uc6e8\uc5b4 \uc624\ub958\uc758 
\uc601\ud5a5\uc744 \ucd5c\uc18c\ud654\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uacfc\uac70\uc5d0\ub294 \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc774 \ucda9\ub3cc\ud558\uba74 \ub204\uad70\uac00 \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc744 \uc218\ub3d9\uc73c\ub85c \ub2e4\uc2dc \uc2dc\uc791\ud558\uc5ec \uc0c1\ud669\uc744 \ud574\uacb0\ud574\uc57c \ud588\uc2b5\ub2c8\ub2e4. \ucfe0\ubc84\ub124\ud2f0\uc2a4\ub97c \uc0ac\uc6a9\ud558\uba74 \ud30c\ub4dc\uc758 \uc18c\ud504\ud2b8\uc6e8\uc5b4 \uc7a5\uc560\ub97c \uac10\uc9c0\ud558\uace0 \uc790\ub3d9\uc73c\ub85c \uc0c8 \ubcf5\uc81c\ubcf8\uc73c\ub85c \uad50\uccb4\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ucfe0\ubc84\ub124\ud2f0\uc2a4\ub97c \uc0ac\uc6a9\ud558\uba74 \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc758 \uc0c1\ud0dc\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\uace0 \ube44\uc815\uc0c1 \uc778\uc2a4\ud134\uc2a4\ub97c \uc790\ub3d9\uc73c\ub85c \uad50\uccb4\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \n\n\ucfe0\ubc84\ub124\ud2f0\uc2a4\ub294 \uc138 \uac00\uc9c0 \uc720\ud615\uc758 [\uc0c1\ud0dc \uac80\uc0ac](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-liveness-readiness-startup-probes\/)\ub97c \uc9c0\uc6d0\ud569\ub2c8\ub2e4.\n\n1. Liveness probe\n2. Startup probe (\ucfe0\ubc84\ub124\ud2f0\uc2a4 \ubc84\uc804 1.16 \uc774\uc0c1\uc5d0\uc11c \uc9c0\uc6d0)\n3. Readiness probe\n\n\ucfe0\ubc84\ub124\ud2f0\uc2a4 \uc5d0\uc774\uc804\ud2b8\uc778 [Kubelet](https:\/\/kubernetes.io\/docs\/reference\/command-line-tools-reference\/kubelet\/)\uc740 \uc704\uc5d0\uc11c \uc5b8\uae09\ud55c \ubaa8\ub4e0 \uac80\uc0ac\ub97c \uc2e4\ud589\ud560 \ucc45\uc784\uc774 \uc788\uc2b5\ub2c8\ub2e4. Kubelet\uc740 \uc138 \uac00\uc9c0 \ubc29\ubc95\uc73c\ub85c \ud30c\ub4dc\uc758 \uc0c1\ud0dc\ub97c \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. 
the kubelet can either execute a shell command inside the Pod's container, send an HTTP GET request to the container, or open a TCP socket on a specified port.

If you choose an `exec`-based probe, which runs a shell script inside a container, ensure that the shell command exits *before* the `timeoutSeconds` value expires. Otherwise, your node will accumulate `<defunct>` processes, leading to node failure.

## Recommendations
### Use Liveness Probes to remove unhealthy Pods

The Liveness probe can detect *deadlock* conditions where the process continues to run but the application becomes unresponsive. For example, if you are running a web service that listens on port 80, you can configure a Liveness probe to send an HTTP GET request on the Pod's port 80. The Kubelet will periodically send a GET request to the Pod and wait for a response.
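For the port-80 web service described above, such a probe might be declared under the container's spec like this (the `/healthz` path is a hypothetical health endpoint):

```
livenessProbe:
  httpGet:
    path: /healthz        # hypothetical health endpoint
    port: 80
  initialDelaySeconds: 10 # delay before the first probe
  periodSeconds: 10       # how often the kubelet sends the GET request
  timeoutSeconds: 1
  failureThreshold: 3     # consecutive failures before the Pod is restarted
```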
\ud30c\ub4dc\uac00 200-399 \uc0ac\uc774\uc5d0\uc11c \uc751\ub2f5\ud558\uba74 kubelet\uc740 \ud30c\ub4dc\uac00 \uc815\uc0c1\uc774\ub77c\uace0 \uac04\uc8fc\ud558\uace0, \uadf8\ub807\uc9c0 \uc54a\uc73c\uba74 \ud30c\ub4dc\ub294 \ube44\uc815\uc0c1\uc73c\ub85c \ud45c\uc2dc\ub429\ub2c8\ub2e4. \ud30c\ub4dc\uac00 \uc0c1\ud0dc \uccb4\ud06c\uc5d0 \uacc4\uc18d \uc2e4\ud328\ud558\uba74 kubelet\uc740 \ud30c\ub4dc\ub97c \uc885\ub8cc\ud569\ub098\ub2e4.\n\n`initialDelaySeconds`\ub97c \uc0ac\uc6a9\ud558\uc5ec \uccab \ubc88\uc9f8 \ud504\ub85c\ube0c\ub97c \uc9c0\uc5f0\uc2dc\ud0ac \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\nLiveness Probe\ub97c \uc0ac\uc6a9\ud560 \ub54c\ub294 \ubaa8\ub4e0 \ud30c\ub4dc\uac00 \ub3d9\uc2dc\uc5d0 Liveness Probe\uc5d0 \uc2e4\ud328\ud558\ub294 \uc0c1\ud669\uc774 \ubc1c\uc0dd\ud558\uc9c0 \uc54a\ub3c4\ub85d \ud574\uc57c \ud569\ub2c8\ub2e4. \ucfe0\ubc84\ub124\ud2f0\uc2a4\ub294 \ubaa8\ub4e0 \ud30c\ub4dc\ub97c \uad50\uccb4\ud558\ub824\uace0 \uc2dc\ub3c4\ud558\uc5ec \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc744 \uc624\ud504\ub77c\uc778\uc73c\ub85c \uc804\ud658\ud558\uae30 \ub54c\ubb38\uc785\ub2c8\ub2e4. \uac8c\ub2e4\uac00 \ucfe0\ubc84\ub124\ud2f0\uc2a4\ub294 \uacc4\uc18d\ud574\uc11c \uc0c8\ub85c\uc6b4 \ud30c\ub4dc\ub97c \ub9cc\ub4e4\uc9c0\ub9cc Liveness Probe\ub3c4 \uc2e4\ud328\ud560 \uac83\uc774\uae30 \ub54c\ubb38\uc5d0 \ucee8\ud2b8\ub864 \ud50c\ub808\uc778\uc5d0 \ubd88\ud544\uc694\ud55c \ubd80\ub2f4\uc744 \uc90d\ub2c8\ub2e4. \ud30c\ub4dc \uc678\ubd80 \uc694\uc18c(\uc608: \uc678\ubd80 \ub370\uc774\ud130\ubca0\uc774\uc2a4)\uc5d0 \uc758\uc874\ud558\ub3c4\ub85d Liveness Probe\ub97c \uad6c\uc131\ud558\uc9c0 \ub9c8\uc2ed\uc2dc\uc624. 
\ub2e4\uc2dc \ub9d0\ud574, \ud30c\ub4dc \uc678\ubd80 \ub370\uc774\ud130\ubca0\uc774\uc2a4\uac00 \uc751\ub2f5\ud558\uc9c0 \uc54a\ub294\ub2e4\uace0 \ud574\uc11c \ud30c\ub4dc\uac00 Liveness Probe\uc5d0 \uc2e4\ud328\ud558\ub294 \uc77c\uc774 \uc788\uc5b4\uc11c\ub294 \uc548 \ub429\ub2c8\ub2e4.\n\nSandor Sz\u00fccs\uc758 \uac8c\uc2dc\ubb3c [\ud65c\uc131 \ud504\ub85c\ube0c\ub294 \uc704\ud5d8\ud558\ub2e4](https:\/\/srcco.de\/posts\/kubernetes-liveness-probes-are-dangerous.html)\uc5d0\uc11c\ub294 \uc798\ubabb \uad6c\uc131\ub41c \ud504\ub85c\ube0c\ub85c \uc778\ud574 \ubc1c\uc0dd\ud560 \uc218 \uc788\ub294 \ubb38\uc81c\ub97c \uc124\uba85\ud569\ub2c8\ub2e4.\n\n### \uc2dc\uc791\ud558\ub294 \ub370 \uc2dc\uac04\uc774 \uc624\ub798 \uac78\ub9ac\ub294 \uc5b4\ud50c\ub9ac\ucf00\uc774\uc158\uc5d0\ub294 Startup Probe\ub97c \uc0ac\uc6a9\ud558\uc2ed\uc2dc\uc624.\n\n\uc571\uc744 \uc2dc\uc791\ud558\ub294 \ub370 \ucd94\uac00 \uc2dc\uac04\uc774 \ud544\uc694\ud55c \uacbd\uc6b0 Startup Probe\ub97c \uc0ac\uc6a9\ud558\uc5ec Liveness \ubc0f Readniness Probe\ub97c \uc9c0\uc5f0\uc2dc\ud0ac \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc608\ub97c \ub4e4\uc5b4 \ub370\uc774\ud130\ubca0\uc774\uc2a4\ub85c \ubd80\ud130 \ub370\uc774\ud130\ub97c \uce90\uc2f1\ud574\uc57c \ud558\ub294 Java \uc571\uc774 \uc81c\ub300\ub85c \uc791\ub3d9\ud558\ub824\uba74 \ucd5c\ub300 2\ubd84\uc774 \uac78\ub9b4 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc644\uc804\ud788 \uc791\ub3d9\ud558\uae30 \uc804\uae4c\uc9c0\ub294 \ubaa8\ub4e0 Liveness \ub610\ub294 Readniness Probe\uac00 \uc2e4\ud328\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. Startup Probe\ub97c \uad6c\uc131\ud558\uba74 Liveness \ub610\ub294 Readniness Probe\ub97c \uc2e4\ud589\ud558\uae30 \uc804\uc5d0 Java \uc571\uc744 *\uc815\uc0c1*\uc0c1\ud0dc\ub85c \ub9cc\ub4e4 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \n\nStartup Probe\uac00 \uc131\uacf5\ud560 \ub54c\uae4c\uc9c0 \ub2e4\ub978 \ubaa8\ub4e0 \ud504\ub85c\ube0c\ub294 \ube44\ud65c\uc131\ud654\ub429\ub2c8\ub2e4. 
\ucfe0\ubc84\ub124\ud2f0\uc2a4\uac00 \uc560\ud50c\ub9ac\ucf00\uc774\uc158 \uc2dc\uc791\uc744 \uc704\ud574 \ub300\uae30\ud574\uc57c \ud558\ub294 \ucd5c\ub300 \uc2dc\uac04\uc744 \uc815\uc758\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ucd5c\ub300 \uad6c\uc131 \uc2dc\uac04\uc774 \uc9c0\ub09c \ud6c4\uc5d0\ub3c4 \ud30c\ub4dc\uac00 \uc5ec\uc804\ud788 \uc2a4\ud0c0\ud2b8\uc5c5 \ud504\ub85c\ube0c\uc5d0 \uc2e4\ud328\ud558\uba74 \ud30c\ub4dc\ub294 \uc885\ub8cc\ub418\uace0 \uc0c8 \ud30c\ub4dc\uac00 \uc0dd\uc131\ub429\ub2c8\ub2e4. \n\nStartup Probe\ub294 Liveness Probe\uc640 \ube44\uc2b7\ud569\ub2c8\ub2e4. \uc989, \uc2e4\ud328\ud558\uba74 \ud30c\ub4dc\uac00 \ub2e4\uc2dc \uc0dd\uc131\ub429\ub2c8\ub2e4. Ricardo A.\uac00 \uc790\uc2e0\uc758 \uae00 [\ud658\uc0c1\uc801\uc778 \ud504\ub85c\ube0c \ubc0f \uad6c\uc131 \ubc29\ubc95](https:\/\/medium.com\/swlh\/fantastic-probes-and-how-to-configure-them-fef7e030bd2f)\uc5d0\uc11c \uc124\uba85\ud588\ub4ef\uc774, \uc560\ud50c\ub9ac\ucf00\uc774\uc158 \uc2dc\uc791 \uc2dc\uac04\uc744 \uc608\uce21\ud560 \uc218 \uc5c6\ub294 \uacbd\uc6b0\uc5d0\ub294 Startup Probe\ub97c \uc0ac\uc6a9\ud574\uc57c \ud569\ub2c8\ub2e4. \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc744 \uc2dc\uc791\ud558\ub294 \ub370 10\ucd08\uac00 \uac78\ub9b0\ub2e4\ub294 \uac83\uc744 \uc54c\uace0 \uc788\ub2e4\uba74 \ub300\uc2e0 `initialDelaySeconds`\uc640 \ud568\uaed8 Liveness\/Readiness Probe\ub97c \uc0ac\uc6a9\ud574\uc57c \ud569\ub2c8\ub2e4.\n\n### Readiness Probe\ub97c \uc0ac\uc6a9\ud558\uc5ec \ubd80\ubd84\uc801\uc73c\ub85c \uc0ac\uc6a9\ud560 \uc218 \uc5c6\ub294 \uc0c1\ud0dc\ub97c \uac10\uc9c0\ud558\uc138\uc694\n\nLiveness probe\ub294 \ud30c\ub4dc \uc885\ub8cc(\uc989, \uc571 \uc7ac\uc2dc\uc791)\ub97c \ud1b5\ud574 \ud574\uacb0\ub418\ub294 \uc571 \uc7a5\uc560\ub97c \uac10\uc9c0\ud558\ub294 \ubc18\uba74, Readiness Probe\ub294 \uc571\uc744 _temporarily_ \uc0ac\uc6a9\ud560 \uc218 \uc5c6\ub294 \uc0c1\ud0dc\ub97c \uac10\uc9c0\ud569\ub2c8\ub2e4. 
\uc774\ub7ec\ud55c \uc0c1\ud669\uc5d0\uc11c\ub294 \uc571\uc774 \uc77c\uc2dc\uc801\uc73c\ub85c \uc751\ub2f5\ud558\uc9c0 \uc54a\uc744 \uc218 \uc788\uc9c0\ub9cc \uc774 \uc791\uc5c5\uc774 \uc644\ub8cc\ub418\uba74 \ub2e4\uc2dc \uc815\uc0c1\uc774 \ub420 \uac83\uc73c\ub85c \uc608\uc0c1\ub429\ub2c8\ub2e4. \n\n\uc608\ub97c \ub4e4\uc5b4, \uc9d1\uc911\uc801\uc778 \ub514\uc2a4\ud06c I\/O \uc791\uc5c5 \uc911\uc5d0\ub294 \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc774 \uc77c\uc2dc\uc801\uc73c\ub85c \uc694\uccad\uc744 \ucc98\ub9ac\ud560 \uc218 \uc5c6\uc744 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc5ec\uae30\uc11c \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc758 \ud30c\ub4dc\ub97c \uc885\ub8cc\ud558\ub294 \uac83\uc740 \ud574\uacb0\ucc45\uc774 \uc544\ub2c8\uba70, \ub3d9\uc2dc\uc5d0 \ud30c\ub4dc\ub85c \uc804\uc1a1\ub41c \ucd94\uac00 \uc694\uccad\uc774 \uc2e4\ud328\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\nReadiness Probe\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc571\uc758 \uc77c\uc2dc\uc801\uc778 \uac00\uc6a9\uc131 \uc911\ub2e8\uc744 \uac10\uc9c0\ud558\uace0 \ub2e4\uc2dc \uc791\ub3d9\ud560 \ub54c\uae4c\uc9c0 \ud574\ub2f9 \ud30c\ub4dc\uc5d0 \ub300\ud55c \uc694\uccad \uc804\uc1a1\uc744 \uc911\ub2e8\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. *\uc2e4\ud328\ub85c \uc778\ud574 \ud30c\ub4dc\uac00 \uc7ac\uc0dd\uc131\ub418\ub294 Liveness Probe\uc640 \ub2ec\ub9ac, Readiness Probe\uac00 \uc2e4\ud328\ud558\uba74 \ud30c\ub4dc\ub294 \ucfe0\ubc84\ub124\ud2f0\uc2a4 \uc11c\ube44\uc2a4\ub85c\ubd80\ud130 \uc5b4\ub5a0\ud55c \ud2b8\ub798\ud53d\ub3c4 \uc218\uc2e0\ud558\uc9c0 \uc54a\uac8c \ub429\ub2c8\ub2e4*. Readiness Probe\uac00 \uc131\uacf5\ud558\uba74 \ud30c\ub4dc\ub294 \uc11c\ube44\uc2a4\ub85c\ubd80\ud130 \ud2b8\ub798\ud53d\uc744 \ub2e4\uc2dc \uc218\uc2e0\ud569\ub2c8\ub2e4. \n\nLiveness Probe\uc640 \ub9c8\ucc2c\uac00\uc9c0\ub85c \ud30c\ub4dc \uc678\ubd80\uc758 \ub9ac\uc18c\uc2a4(\uc608: \ub370\uc774\ud130\ubca0\uc774\uc2a4)\uc5d0 \uc758\uc874\ud558\ub294 Readiness Probe\ub97c \uad6c\uc131\ud558\uc9c0 \ub9c8\uc2ed\uc2dc\uc624. 
\ub2e4\uc74c\uc740 \uc798\ubabb \uad6c\uc131\ub41c Readiness\ub85c \uc778\ud574 \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc774 \uc791\ub3d9\ud558\uc9c0 \uc54a\uc744 \uc218 \uc788\ub294 \uc2dc\ub098\ub9ac\uc624\uc785\ub2c8\ub2e4. \uc571\uc758 \ub370\uc774\ud130\ubca0\uc774\uc2a4\uc5d0 \uc5f0\uacb0\ud560 \uc218 \uc5c6\uc744 \ub54c \ud30c\ub4dc\uc758 Readiness Probe\uc5d0 \uc7a5\uc560\uac00 \ubc1c\uc0dd\ud558\uba74 \ub2e4\ub978 \ud30c\ub4dc \ubcf5\uc81c\ubcf8\ub3c4 \ub3d9\uc77c\ud55c \uc0c1\ud0dc \uc810\uac80 \uae30\uc900\uc744 \uacf5\uc720\ud558\ubbc0\ub85c \ub3d9\uc2dc\uc5d0 \uc2e4\ud328\ud569\ub2c8\ub2e4. \uc774\ub7ec\ud55c \ubc29\uc2dd\uc73c\ub85c \ud504\ub85c\ube0c\ub97c \uc124\uc815\ud558\uba74 \ub370\uc774\ud130\ubca0\uc774\uc2a4\ub97c \uc0ac\uc6a9\ud560 \uc218 \uc5c6\uc744 \ub54c\ub9c8\ub2e4 \ud30c\ub4dc\uc758 Readiness Probe\uac00 \uc2e4\ud328\ud558\uace0 \ucfe0\ubc84\ub124\ud2f0\uc2a4\uac00 *all* \ud30c\ub4dc\ub85c \ud2b8\ub798\ud53d \uc804\uc1a1\uc744 \uc911\uc9c0\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \n\nReadiness Probes \uc0ac\uc6a9\uc758 \ubd80\uc791\uc6a9\uc740 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\ub97c \uc5c5\ub370\uc774\ud2b8\ud558\ub294 \ub370 \uac78\ub9ac\ub294 \uc2dc\uac04\uc744 \ub298\ub9b4 \uc218 \uc788\ub2e4\ub294 \uac83\uc785\ub2c8\ub2e4. Readiness Probe\uac00 \uc131\uacf5\ud558\uc9c0 \uc54a\ub294 \ud55c \uc0c8 \ubcf5\uc81c\ubcf8\uc740 \ud2b8\ub798\ud53d\uc744 \uc218\uc2e0\ud558\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4. \uadf8\ub54c\uae4c\uc9c0\ub294 \uae30\uc874 \ubcf5\uc81c\ubcf8\uc774 \uacc4\uc18d\ud574\uc11c \ud2b8\ub798\ud53d\uc744 \uc218\uc2e0\ud558\uac8c \ub429\ub2c8\ub2e4. \n\n---\n\n## \uc7a5\uc560 \ucc98\ub9ac\n\n\ud30c\ub4dc\uc758 \uc218\uba85\uc740 \uc720\ud55c\ud569\ub2c8\ub2e4. - \ud30c\ub4dc\ub97c \uc624\ub798 \uc2e4\ud589\ud558\ub354\ub77c\ub3c4 \ub54c\uac00 \ub418\uba74 \ud30c\ub4dc\uac00 \uc62c\ubc14\ub974\uac8c \uc885\ub8cc\ub418\ub3c4\ub85d \ud558\ub294 \uac83\uc774 \ud604\uba85\ud569\ub2c8\ub2e4. 
\uc5c5\uadf8\ub808\uc774\ub4dc \uc804\ub7b5\uc5d0 \ub530\ub77c \ucfe0\ubc84\ub124\ud2f0\uc2a4 \ud074\ub7ec\uc2a4\ud130\ub97c \uc5c5\uadf8\ub808\uc774\ub4dc\ud558\ub824\uba74 \uc0c8 \uc6cc\ucee4 \ub178\ub4dc\ub97c \uc0dd\uc131\ud574\uc57c \ud560 \uc218 \uc788\uc73c\uba70, \uc774 \uacbd\uc6b0 \ubaa8\ub4e0 \ud30c\ub4dc\ub97c \uc0c8 \ub178\ub4dc\uc5d0\uc11c \ub2e4\uc2dc \uc0dd\uc131\ud574\uc57c \ud569\ub2c8\ub2e4. \uc801\uc808\ud55c \uc885\ub8cc \ucc98\ub9ac \ubc0f \ud30c\ub4dc \uc911\ub2e8 \uc608\uc0b0\uc744 \ub9c8\ub828\ud558\uba74 \ud30c\ub4dc\uac00 \uc774\uc804 \ub178\ub4dc\uc5d0\uc11c \uc81c\uac70\ub418\uace0 \uc0c8 \ub178\ub4dc\uc5d0\uc11c \uc7ac\uc0dd\uc131\ub420 \ub54c \uc11c\ube44\uc2a4 \uc911\ub2e8\uc744 \ud53c\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \n\n\uc6cc\ucee4 \ub178\ub4dc\ub97c \uc5c5\uadf8\ub808\uc774\ub4dc\ud558\ub294 \uac00\uc7a5 \uc88b\uc740 \ubc29\ubc95\uc740 \uc0c8 \uc6cc\ucee4 \ub178\ub4dc\ub97c \ub9cc\ub4e4\uace0 \uae30\uc874 \uc6cc\ucee4 \ub178\ub4dc\ub97c \uc885\ub8cc\ud558\ub294 \uac83\uc785\ub2c8\ub2e4. \uc6cc\ucee4 \ub178\ub4dc\ub97c \uc885\ub8cc\ud558\uae30 \uc804\uc5d0 \uba3c\uc800 \uc6cc\ucee4 \ub178\ub4dc\ub97c `drain` \ud574\uc57c \ud569\ub2c8\ub2e4. \uc6cc\ucee4 \ub178\ub4dc\uac00 \ube44\uc6cc\uc9c0\uba74 \ud574\ub2f9 \ub178\ub4dc\uc758 \ubaa8\ub4e0 \ud30c\ub4dc\uac00 *\uc548\uc804\ud558\uac8c* \uc81c\uac70\ub429\ub2c8\ub2e4. \uc5ec\uae30\uc11c \uac00\uc7a5 \uc911\uc694\ud55c \ub2e8\uc5b4\ub294 \uc548\uc804\uc785\ub2c8\ub2e4. \uc6cc\ucee4 \ub178\ub4dc\uc5d0\uc11c \ud30c\ub4dc\uac00 \uc81c\uac70\ub418\uba74 \ub2e8\uc21c\ud788 `SIGKILL` \uc2dc\uadf8\ub110\uc774 \uc804\uc1a1\ub418\ub294 \uac83\uc774 \uc544\ub2d9\ub2c8\ub2e4. \ub300\uc2e0, `SIGTERM` \uc2e0\ud638\uac00 \uc81c\uac70\ub418\ub294 \ud30c\ub4dc\uc5d0 \uc788\ub294 \uac01 \ucee8\ud14c\uc774\ub108\uc758 \uba54\uc778 \ud504\ub85c\uc138\uc2a4(PID 1)\ub85c \ubcf4\ub0b4\uc9c4\ub2e4. 
After the `SIGTERM` signal is sent, Kubernetes gives the process some time (the grace period) before a `SIGKILL` signal is sent. This grace period is 30 seconds by default; you can override the default by using the `grace-period` flag in kubectl, or declare `terminationGracePeriodSeconds` in your Podspec.

`kubectl delete pod <pod name> --grace-period=<seconds>`

It is common to have containers where the main process doesn't have PID 1. Consider this Python-based sample container:

```
$ kubectl exec python-app -it ps
 PID USER TIME COMMAND
 1   root 0:00 {script.sh} /bin/sh ./script.sh
 5   root 0:00 python app.py
```

In this example, the shell script receives the `SIGTERM`, while the main process, which happens to be a Python application in this example, doesn't receive the `SIGTERM` signal. When the Pod is terminated, the Python application is killed abruptly. This can be remediated by changing the container's [`ENTRYPOINT`](https://docs.docker.com/engine/reference/builder/#entrypoint) to launch the Python application.
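In Kubernetes, you can get a similar effect without rebuilding the image by overriding the entrypoint in the Pod spec, so that the Python process runs as PID 1 and receives the `SIGTERM` itself (names and image are hypothetical):

```
apiVersion: v1
kind: Pod
metadata:
  name: python-app
spec:
  containers:
  - name: app
    image: python-app:latest       # hypothetical image
    command: ["python", "app.py"]  # overrides ENTRYPOINT; python becomes PID 1
```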
\ub610\ub294 [dumb-init](https:\/\/github.com\/Yelp\/dumb-init)\uacfc \uac19\uc740 \ub3c4\uad6c\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc774 \uc2e0\ud638\ub97c \ucc98\ub9ac\ud560 \uc218 \uc788\ub3c4\ub85d \ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \n\n[\ucee8\ud14c\uc774\ub108 \ud6c4\ud06c](https:\/\/kubernetes.io\/docs\/concepts\/containers\/container-lifecycle-hooks\/#container-hooks)\ub97c \uc0ac\uc6a9\ud558\uc5ec \ucee8\ud14c\uc774\ub108 \uc2dc\uc791 \ub610\ub294 \uc911\uc9c0 \uc2dc \uc2a4\ud06c\ub9bd\ud2b8 \ub610\ub294 HTTP \uc694\uccad\uc744 \uc2e4\ud589\ud560 \uc218\ub3c4 \uc788\uc2b5\ub2c8\ub2e4. `Prestop` \ud6c4\ud06c \uc561\uc158\uc740 \ucee8\ud14c\uc774\ub108\uac00 `SIGTERM` \uc2e0\ud638\ub97c \uc218\uc2e0\ud558\uae30 \uc804\uc5d0 \uc2e4\ud589\ub418\uba70 \uc774 \uc2e0\ud638\uac00 \uc804\uc1a1\ub418\uae30 \uc804\uc5d0 \uc644\ub8cc\ub418\uc5b4\uc57c \ud569\ub2c8\ub2e4. `terminationGracePeriodSeconds` \uac12\uc740 `SIGTERM` \uc2e0\ud638\uac00 \uc804\uc1a1\ub420 \ub54c\uac00 \uc544\ub2c8\ub77c `PreStop` \ud6c4\ud06c \uc561\uc158\uc774 \uc2e4\ud589\ub418\uae30 \uc2dc\uc791\ud560 \ub54c\ubd80\ud130 \uc801\uc6a9\ub429\ub2c8\ub2e4.\n\n## \uad8c\uc7a5 \uc0ac\ud56d\n\n### Pod Disruption Budget\uc73c\ub85c \uc911\uc694\ud55c \uc6cc\ud06c\ub85c\ub4dc\ub97c \ubcf4\ud638\ud558\uc138\uc694\n\nPod Disruption Budget \ub610\ub294 PDB\ub294 \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc758 \ubcf5\uc81c\ubcf8 \uc218\uac00 \uc120\uc5b8\ub41c \uc784\uacc4\uac12 \uc544\ub798\ub85c \ub5a8\uc5b4\uc9c0\uba74 \uc81c\uac70 \ud504\ub85c\uc138\uc2a4\ub97c \uc77c\uc2dc\uc801\uc73c\ub85c \uc911\ub2e8\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc0ac\uc6a9 \uac00\ub2a5\ud55c \ubcf5\uc81c\ubcf8 \uc218\uac00 \uc784\uacc4\uac12\uc744 \ucd08\uacfc\ud558\uba74 \uc81c\uac70 \ud504\ub85c\uc138\uc2a4\uac00 \uacc4\uc18d\ub429\ub2c8\ub2e4. 
PDB\ub97c \uc0ac\uc6a9\ud558\uc5ec \ubcf5\uc81c\ubcf8\uc758 `minAvailable` \ubc0f `maxUnavailable` \uc218\ub97c \uc120\uc5b8\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc608\ub97c \ub4e4\uc5b4 \uc571 \ubcf5\uc81c\ubcf8\uc744 3\uac1c \uc774\uc0c1 \uc0ac\uc6a9\ud560 \uc218 \uc788\uac8c \ud558\ub824\uba74 PDB\ub97c \ub9cc\ub4e4 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \n\n```\napiVersion: policy\/v1beta1\nkind: PodDisruptionBudget\nmetadata:\n  name: my-svc-pdb\nspec:\n  minAvailable: 3\n  selector:\n    matchLabels:\n      app: my-svc\n```\n\n\uc704\uc758 PDB \uc815\ucc45\uc740 \ucfe0\ubc84\ub124\ud2f0\uc2a4\uc5d0\uac8c 3\uac1c \uc774\uc0c1\uc758 \ubcf5\uc81c\ubcf8\uc744 \uc0ac\uc6a9\ud560 \uc218 \uc788\uc744 \ub54c\uae4c\uc9c0 \uc81c\uac70 \ud504\ub85c\uc138\uc2a4\ub97c \uc911\ub2e8\ud558\ub3c4\ub85d \uc9c0\uc2dc\ud569\ub2c8\ub2e4. \ub178\ub4dc \ub4dc\ub808\uc774\ub2dd\uc740 `PodDisruptionBudgets`\uc744 \uace0\ub824\ud569\ub2c8\ub2e4. EKS \uad00\ub9ac\ud615 \ub178\ub4dc \uadf8\ub8f9 \uc5c5\uadf8\ub808\uc774\ub4dc \uc911\uc5d0\ub294 [15\ubd84 \ud0c0\uc784\uc544\uc6c3\uc73c\ub85c \ub178\ub4dc\uac00 \uace0\uac08\ub429\ub2c8\ub2e4](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/managed-node-update-behavior.html). 15\ubd84 \ud6c4 \uc5c5\ub370\uc774\ud2b8\ub97c \uac15\uc81c \uc2e4\ud589\ud558\uc9c0 \uc54a\uc73c\uba74(EKS \ucf58\uc194\uc5d0\uc11c\ub294 \ub864\ub9c1 \uc5c5\ub370\uc774\ud2b8\ub77c\uace0 \ud568) \uc5c5\ub370\uc774\ud2b8\uac00 \uc2e4\ud328\ud569\ub2c8\ub2e4. \uc5c5\ub370\uc774\ud2b8\ub97c \uac15\uc81c\ub85c \uc801\uc6a9\ud558\uba74 \ud30c\ub4dc\uac00 \uc0ad\uc81c\ub429\ub2c8\ub2e4.\n\n\uc790\uccb4 \uad00\ub9ac\ud615 \ub178\ub4dc\uc758 \uacbd\uc6b0 [AWS Node Termination Handler](https:\/\/github.com\/aws\/aws-node-termination-handler)\uc640 \uac19\uc740 \ub3c4\uad6c\ub97c \uc0ac\uc6a9\ud560 \uc218\ub3c4 \uc788\uc2b5\ub2c8\ub2e4. 
\uc774 \ub3c4\uad6c\ub97c \uc0ac\uc6a9\ud558\uba74 Kubernetes \ucee8\ud2b8\ub864 \ud50c\ub808\uc778\uc774 [EC2 \uc720\uc9c0 \uad00\ub9ac](https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/monitoring-instances-status-check_sched.html) \uc774\ubca4\ud2b8 \ubc0f [EC2 \uc2a4\ud31f \uc911\ub2e8](https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/spot-interruptions.html) \ub4f1 EC2 \uc778\uc2a4\ud134\uc2a4\ub97c \uc0ac\uc6a9\ud560 \uc218 \uc5c6\uac8c \ub420 \uc218 \uc788\ub294 \uc774\ubca4\ud2b8\uc5d0 \uc801\uc808\ud558\uac8c \ub300\uc751\ud569\ub2c8\ub2e4. \ucfe0\ubc84\ub124\ud2f0\uc2a4 API\ub97c \uc0ac\uc6a9\ud558\uc5ec \ub178\ub4dc\ub97c \ube44\uc6b0\uace0 \uc0c8 \ud30c\ub4dc\uac00 \uc2a4\ucf00\uc904\ub418\uc9c0 \uc54a\ub3c4\ub85d \ud55c \ub2e4\uc74c, \ud30c\ub4dc\ub97c \ub4dc\ub808\uc774\ub2dd\ud558\uc5ec \uc2e4\ud589 \uc911\uc778 \ud30c\ub4dc\ub97c \uc885\ub8cc\ud55c\ub2e4.\n\n\ud30c\ub4dc anti-affinity\ub97c \uc0ac\uc6a9\ud574 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\uc758 \ud30c\ub4dc\ub97c \ub2e4\ub978 \ub178\ub4dc\uc5d0 \uc2a4\ucf00\uc904\ub9c1\ud558\uace0 \ub178\ub4dc \uc5c5\uadf8\ub808\uc774\ub4dc \uc911 PDB \uad00\ub828 \uc9c0\uc5f0\uc744 \ud53c\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. 
\n\n### \uce74\uc624\uc2a4 \uc5d4\uc9c0\ub2c8\uc5b4\ub9c1 \uc5f0\uc2b5 \n> \uce74\uc624\uc2a4 \uc5d4\uc9c0\ub2c8\uc5b4\ub9c1\uc740 \ud504\ub85c\ub355\uc158\uc5d0\uc11c\uc758 \uaca9\ub82c\ud55c \uc870\uac74\uc744 \uacac\ub51c \uc218 \uc788\ub294 \uc2dc\uc2a4\ud15c\uc758 \uc131\ub2a5\uc5d0 \ub300\ud55c \uc2e0\ub8b0\ub97c \uad6c\ucd95\ud558\uae30 \uc704\ud574 \ubd84\uc0b0 \uc2dc\uc2a4\ud15c\uc744 \uc2e4\ud5d8\ud558\ub294 \ubd84\uc57c\uc785\ub2c8\ub2e4.\n\nDominik Tornow\ub294 \uc790\uc2e0\uc758 \ube14\ub85c\uadf8 [\ucfe0\ubc84\ub124\ud2f0\uc2a4\ub294 \uc120\uc5b8\uc801 \uc2dc\uc2a4\ud15c](https:\/\/medium.com\/@dominik.tornow\/the-mechanics-of-kubernetes-ac8112eaa302)\uc5d0\uc11c \u201c*\uc0ac\uc6a9\uc790\uac00 \uc6d0\ud558\ub294 \uc2dc\uc2a4\ud15c \uc0c1\ud0dc\ub97c \uc2dc\uc2a4\ud15c\uc5d0 \ud45c\uc2dc\ud569\ub2c8\ub2e4. \uadf8\ub7f0 \ub2e4\uc74c \uc2dc\uc2a4\ud15c\uc740 \ud604\uc7ac \uc0c1\ud0dc\uc640 \uc6d0\ud558\ub294 \uc0c1\ud0dc\ub97c \uace0\ub824\ud558\uc5ec \ud604\uc7ac \uc0c1\ud0dc\uc5d0\uc11c \uc6d0\ud558\ub294 \uc0c1\ud0dc\ub85c \uc804\ud658\ud558\uae30 \uc704\ud55c \uba85\ub839 \uc21c\uc11c\ub97c \uacb0\uc815\ud569\ub2c8\ub2e4.*\u201d\ub77c\uace0 \uc124\uba85\ud569\ub2c8\ub2e4. \uc989, \ucfe0\ubc84\ub124\ud2f0\uc2a4\ub294 \ud56d\uc0c1 *\uc6d0\ud558\ub294 \uc0c1\ud0dc*\ub97c \uc800\uc7a5\ud558\uace0 \uc2dc\uc2a4\ud15c\uc774 \uc774\ub97c \ubc97\uc5b4\ub098\uba74 \ucfe0\ubc84\ub124\ud2f0\uc2a4\ub294 \uc0c1\ud0dc\ub97c \ubcf5\uc6d0\ud558\uae30 \uc704\ud55c \uc870\uce58\ub97c \ucde8\ud569\ub2c8\ub2e4. \uc608\ub97c \ub4e4\uc5b4 \uc6cc\ucee4 \ub178\ub4dc\ub97c \uc0ac\uc6a9\ud560 \uc218 \uc5c6\uac8c \ub418\uba74 \ucfe0\ubc84\ub124\ud2f0\uc2a4\ub294 \ud30c\ub4dc\ub97c \ub2e4\ub978 \uc6cc\ucee4 \ub178\ub4dc\ub85c \ub2e4\uc2dc \uc2a4\ucf00\uc904\ud569\ub2c8\ub2e4. 
\ub9c8\ucc2c\uac00\uc9c0\ub85c, `replica`\uac00 \ucda9\ub3cc\ud558\uba74 [\ub514\ud50c\ub85c\uc774\uba3c\ud2b8 \ucee8\ud2b8\ub864\ub7ec](https:\/\/kubernetes.io\/docs\/concepts\/architecture\/controller\/#design)\uac00 \uc0c8 `replica`\ub97c \uc0dd\uc131\ud569\ub2c8\ub2e4. \uc774\ub7f0 \ubc29\uc2dd\uc73c\ub85c \ucfe0\ubc84\ub124\ud2f0\uc2a4 \ucee8\ud2b8\ub864\ub7ec\ub294 \uc7a5\uc560\ub97c \uc790\ub3d9\uc73c\ub85c \uc218\uc815\ud569\ub2c8\ub2e4. \n\n[Gremlin](https:\/\/www.gremlin.com)\uacfc \uac19\uc740 \uce74\uc624\uc2a4 \uc5d4\uc9c0\ub2c8\uc5b4\ub9c1 \ub3c4\uad6c\ub97c \uc0ac\uc6a9\ud558\uba74 \ucfe0\ubc84\ub124\ud2f0\uc2a4 \ud074\ub7ec\uc2a4\ud130\uc758 \ubcf5\uc6d0\ub825\uc744 \ud14c\uc2a4\ud2b8\ud558\uace0 \ub2e8\uc77c \uc7a5\uc560 \uc9c0\uc810\uc744 \uc2dd\ubcc4\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ud074\ub7ec\uc2a4\ud130(\ubc0f \uadf8 \uc774\uc0c1)\uc5d0 \uc778\uc704\uc801\uc778 \ud63c\ub3c8\uc744 \uc720\ubc1c\ud558\ub294 \ub3c4\uad6c\ub97c \uc0ac\uc6a9\ud558\uba74 \uc2dc\uc2a4\ud15c \uc57d\uc810\uc744 \ubc1c\uacac\ud558\uace0 \ubcd1\ubaa9 \ud604\uc0c1\uacfc \uc798\ubabb\ub41c \uad6c\uc131\uc744 \uc2dd\ubcc4\ud558\uba70 \ud1b5\uc81c\ub41c \ud658\uacbd\uc5d0\uc11c \ubb38\uc81c\ub97c \uc218\uc815\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uce74\uc624\uc2a4 \uc5d4\uc9c0\ub2c8\uc5b4\ub9c1 \ucca0\ud559\uc740 \uc758\ub3c4\uc801\uc73c\ub85c \ubb38\uc81c\ub97c \ud574\uacb0\ud558\uace0 \uc778\ud504\ub77c\uc5d0 \uc2a4\ud2b8\ub808\uc2a4\ub97c \uc8fc\uc5b4 \uc608\uc0c1\uce58 \ubabb\ud55c \ub2e4\uc6b4\ud0c0\uc784\uc744 \ucd5c\uc18c\ud654\ud558\ub294 \uac83\uc744 \uad8c\uc7a5\ud569\ub2c8\ub2e4. \n\n### \uc11c\ube44\uc2a4 \uba54\uc2dc \uc0ac\uc6a9\n\n\uc11c\ube44\uc2a4 \uba54\uc2dc\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc758 \ubcf5\uc6d0\ub825\uc744 \uac1c\uc120\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. 
\uc11c\ube44\uc2a4 \uba54\uc2dc\ub294 \uc11c\ube44\uc2a4 \uac04 \ud1b5\uc2e0\uc744 \uac00\ub2a5\ud558\uac8c \ud558\uace0 \ub9c8\uc774\ud06c\ub85c\uc11c\ube44\uc2a4 \ub124\ud2b8\uc6cc\ud06c\uc758 \uac00\uc2dc\uc131\uc744 \ub192\uc785\ub2c8\ub2e4. \ub300\ubd80\ubd84\uc758 \uc11c\ube44\uc2a4 \uba54\uc2dc \uc81c\ud488\uc740 \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc758 \ub124\ud2b8\uc6cc\ud06c \ud2b8\ub798\ud53d\uc744 \uac00\ub85c\ucc44\uace0 \uac80\uc0ac\ud558\ub294 \uc18c\uaddc\ubaa8 \ub124\ud2b8\uc6cc\ud06c \ud504\ub85d\uc2dc\ub97c \uac01 \uc11c\ube44\uc2a4\uc640 \ud568\uaed8 \uc2e4\ud589\ud558\ub294 \ubc29\uc2dd\uc73c\ub85c \uc791\ub3d9\ud569\ub2c8\ub2e4. \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc744 \uc218\uc815\ud558\uc9c0 \uc54a\uace0\ub3c4 \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc744 \uba54\uc2dc\uc5d0 \ubc30\uce58\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc11c\ube44\uc2a4 \ud504\ub85d\uc2dc\uc5d0 \ub0b4\uc7a5\ub41c \uae30\ub2a5\uc744 \uc0ac\uc6a9\ud558\uc5ec \ub124\ud2b8\uc6cc\ud06c \ud1b5\uacc4\ub97c \uc0dd\uc131\ud558\uace0, \uc561\uc138\uc2a4 \ub85c\uadf8\ub97c \uc0dd\uc131\ud558\uace0, \ubd84\uc0b0 \ucd94\uc801\uc744 \uc704\ud55c \uc544\uc6c3\ubc14\uc6b4\ub4dc \uc694\uccad\uc5d0 HTTP \ud5e4\ub354\ub97c \ucd94\uac00\ud558\ub3c4\ub85d \ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\uc11c\ube44\uc2a4 \uba54\uc2dc\ub97c \uc0ac\uc6a9\ud558\uba74 \uc790\ub3d9 \uc694\uccad \uc7ac\uc2dc\ub3c4, \uc81c\ud55c \uc2dc\uac04, \ud68c\ub85c \ucc28\ub2e8, \uc18d\ub3c4 \uc81c\ud55c\uacfc \uac19\uc740 \uae30\ub2a5\uc744 \ud1b5\ud574 \ub9c8\uc774\ud06c\ub85c\uc11c\ube44\uc2a4\uc758 \ubcf5\uc6d0\ub825\uc744 \ub192\uc77c \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\uc5ec\ub7ec \ud074\ub7ec\uc2a4\ud130\ub97c \uc6b4\uc601\ud558\ub294 \uacbd\uc6b0 \uc11c\ube44\uc2a4 \uba54\uc2dc\ub97c \uc0ac\uc6a9\ud558\uc5ec \ud074\ub7ec\uc2a4\ud130 \uac04 \uc11c\ube44\uc2a4 \uac04 \ud1b5\uc2e0\uc744 \ud65c\uc131\ud654\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n### \uc11c\ube44\uc2a4 \uba54\uc2dc\n+ [AWS App 
Mesh](https://aws.amazon.com/app-mesh/)
+ [Istio](https://istio.io)
+ [LinkerD](http://linkerd.io)
+ [Consul](https://www.consul.io)

---

## Observability

Observability is an umbrella term that includes monitoring, logging, and tracing. Microservices-based applications are distributed by nature. Unlike monolithic applications, where monitoring a single system is sufficient, in a distributed application architecture you need to monitor the performance of each component. You can use cluster-level monitoring, logging, and distributed tracing systems to identify issues in your cluster before they disrupt your customers.

Kubernetes' built-in tools for troubleshooting and monitoring are limited. The metrics-server collects resource metrics and stores them in memory, but doesn't persist them. You can view a Pod's logs using kubectl, but Kubernetes doesn't automatically retain logs.
And distributed tracing is implemented either at the application-code level or with a service mesh.

This is where Kubernetes' extensibility shines. Kubernetes lets you bring your preferred centralized monitoring, logging, and tracing solution.

## Recommendations

### Monitor your applications

The number of metrics you need to monitor in modern applications keeps growing. It helps if you have an automated way to track your applications so you can focus on solving your customers' problems. Cluster-wide monitoring tools like [Prometheus](https://prometheus.io) or [CloudWatch Container Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights.html) can monitor your cluster and workloads and provide you signals when, or preferably, before things go wrong.

Monitoring tools allow you to create alerts that your operations team can subscribe to. Consider rules to activate alarms for events that can, when exacerbated, lead to an outage or impact application performance.

If you're unsure which metrics you should monitor, you can take inspiration from these methods:

- [RED method](https://www.weave.works/blog/a-practical-guide-from-instrumenting-code-to-specifying-alerts-with-the-red-method). Stands for requests, errors, and duration.
- [USE method](http://www.brendangregg.com/usemethod.html). Stands for utilization, saturation, and errors.

Sysdig's post [Best practices for alerting on Kubernetes](https://sysdig.com/blog/alerting-kubernetes/) includes a comprehensive list of components that can impact the availability of your applications.

### Use Prometheus client library to expose application metrics

In addition to monitoring the state of the application and aggregating standard metrics, you can also use the [Prometheus client library](https://prometheus.io/docs/instrumenting/clientlibs/) to expose application-specific custom metrics and improve the application's observability.

### Use centralized logging tools to collect and persist logs

Logging in EKS falls under two categories: control plane logs and application logs. EKS control plane logging provides audit and diagnostic logs directly from the control plane to CloudWatch Logs in your account. Application logs are logs produced by pods running inside your cluster. Application logs include logs produced by pods that run the business logic applications and by Kubernetes system components such as CoreDNS, Cluster Autoscaler, Prometheus, etc.

[EKS provides five types of control plane logs](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html):

1. Kubernetes API server component logs
2. Audit
3. Authenticator
4. Controller manager
5. Scheduler

The controller manager and scheduler logs can help diagnose control plane problems such as bottlenecks and errors. By default, EKS control plane logs are not sent to CloudWatch Logs. You can enable control plane logging and select the types of EKS control plane logs you'd like to capture for each cluster in your account.

Collecting application logs requires installing a log aggregator tool like [Fluent Bit](http://fluentbit.io), [Fluentd](https://www.fluentd.org), or [CloudWatch Container Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-EKS.html) in your cluster.

Kubernetes log aggregator tools run as DaemonSets and scrape container logs from the nodes. Application logs are then sent to a centralized destination for storage.
For example, CloudWatch Container Insights can use either Fluent Bit or Fluentd to collect logs and ship them to CloudWatch Logs for storage. Fluent Bit and Fluentd support a number of popular log analytics systems such as Elasticsearch and InfluxDB, giving you the ability to change the storage backend for your logs by modifying Fluent Bit's or Fluentd's log configuration.

### Use a distributed tracing system to identify bottlenecks

A typical modern application has components distributed over the network, and its reliability depends on the proper functioning of each of the components that make up the application. A distributed tracing solution lets you understand how requests flow and how systems communicate. Traces can tell you where bottlenecks exist in your application network and prevent problems that can lead to cascading failures.
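Whether the propagation is done by a shared library or by a mesh sidecar, the context that ties a request's spans together typically travels in HTTP headers, commonly the W3C Trace Context `traceparent` header. The sketch below (Python standard library only; the function names are illustrative, not from any tracing SDK) shows the basic mechanic: a service reuses the incoming trace ID but mints a fresh span ID for each outbound call.

```python
import re
import secrets

# W3C traceparent: version-traceid-spanid-flags, all lowercase hex.
TRACEPARENT_RE = re.compile(
    r"^(?P<version>[0-9a-f]{2})-(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<span_id>[0-9a-f]{16})-(?P<flags>[0-9a-f]{2})$"
)

def new_traceparent() -> str:
    """Start a new trace: random trace ID and span ID, sampled flag set."""
    return f"00-{secrets.token_hex(16)}-{secrets.token_hex(8)}-01"

def child_traceparent(parent: str) -> str:
    """Keep the caller's trace ID but mint a fresh span ID for an outbound call."""
    m = TRACEPARENT_RE.match(parent)
    if m is None:
        raise ValueError(f"malformed traceparent: {parent!r}")
    return f"00-{m['trace_id']}-{secrets.token_hex(8)}-{m['flags']}"

# An outbound HTTP request would then carry the propagated context, e.g.:
# headers = {"traceparent": child_traceparent(incoming_headers["traceparent"])}
```

Because every hop shares the trace ID, a tracing backend can stitch the spans into a single request timeline and show where time was spent.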

There are two ways to implement tracing in your applications: you can implement distributed tracing at the code level using shared libraries, or you can use a service mesh.

Implementing tracing at the code level can be disadvantageous. In this method, you have to make changes to your code. This is further complicated if you have polyglot applications. You're also responsible for maintaining yet another library across your services.

Service meshes like [LinkerD](http://linkerd.io), [Istio](http://istio.io), and [AWS App Mesh](https://aws.amazon.com/app-mesh/) can be used to implement distributed tracing in your application with minimal changes to application code. You can use a service mesh to standardize metrics generation, logging, and tracing.

Tracing tools like [AWS X-Ray](https://aws.amazon.com/xray/) and [Jaeger](https://www.jaegertracing.io) support both shared library and service mesh implementations.

Consider using a tracing tool like [AWS X-Ray](https://aws.amazon.com/xray/) or [Jaeger](https://www.jaegertracing.io) that supports both (shared library and service mesh) implementations, so that you won't have to switch tools if you later adopt a service mesh.
                                                        "}
{"questions":"eks Amazon Elastic Kubernetes Service EKS AWS EKS EC2 API Amazon EKS search exclude true EKS","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# EKS Control Plane\n\nAmazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that makes it easy to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or worker nodes. It runs upstream Kubernetes and is certified Kubernetes conformant. This conformance ensures that EKS supports the Kubernetes APIs, just like the open-source community version that you can install on EC2 or on-premises. 
Existing applications running on upstream Kubernetes are compatible with Amazon EKS.\n\nEKS automatically manages the availability and scalability of the Kubernetes control plane nodes, and it automatically replaces unhealthy control plane nodes.\n\n## EKS Architecture\n\nEKS architecture is designed to eliminate any single points of failure that may compromise the availability and durability of the Kubernetes control plane.\n\nThe Kubernetes control plane managed by EKS runs inside an EKS-managed VPC. The EKS control plane comprises the Kubernetes API server nodes and an etcd cluster. Kubernetes API server nodes that run components like the API server, scheduler, and `kube-controller-manager` run in an auto-scaling group. EKS runs a minimum of two API server nodes in distinct Availability Zones (AZs) within an AWS Region. 
Likewise, for durability, the etcd server nodes also run in an auto-scaling group that spans three AZs. EKS runs a NAT Gateway in each AZ, and API servers and etcd servers run in a private subnet. This architecture ensures that an event in a single AZ doesn't affect the EKS cluster's availability.\n\nWhen you create a new cluster, Amazon EKS creates a highly available endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using tools like `kubectl`). The managed endpoint uses an NLB to load balance the Kubernetes API servers. 
EKS also provisions two [ENIs](https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/using-eni.html) in different AZs to facilitate communication with your worker nodes.\n\n![EKS data plane network connectivity](.\/images\/eks-data-plane-connectivity.jpeg)\n\nYou can configure whether [your Kubernetes cluster's API server](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/cluster-endpoint.html) is reachable from the public internet (using the public endpoint), through your VPC (using the EKS-managed ENIs), or both.\n\nWhether users and worker nodes connect to the API server using the public endpoint or the EKS-managed ENIs, there are redundant paths for the connection.\n\n## Recommendations\n\n## Monitor Control Plane Metrics\n\nMonitoring Kubernetes API metrics can give you insight into control plane performance and help you identify issues. An unhealthy control plane can compromise the availability of the workloads running inside the cluster. 
For example, poorly written controllers can overload the API servers, affecting your application's availability.\n\nKubernetes exposes control plane metrics at the `\/metrics` endpoint.\n\nYou can view the exposed metrics using `kubectl`:\n\n```shell\nkubectl get --raw \/metrics\n```\n\nThese metrics are represented in the [Prometheus text format](https:\/\/github.com\/prometheus\/docs\/blob\/master\/content\/docs\/instrumenting\/exposition_formats.md).\n\nYou can use Prometheus to collect and store these metrics. In May 2020, CloudWatch added support for monitoring Prometheus metrics in CloudWatch Container Insights, so you can also use Amazon CloudWatch to monitor the EKS control plane. 
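The `\/metrics` output described above is plain Prometheus exposition text, so it can be post-processed with ordinary tooling. As a minimal sketch (the sample lines and label values below are made up for illustration), the following parses metric samples using only the Python standard library; it skips `HELP`\/`TYPE` comments and does not handle label values that themselves contain commas:

```python
# Hypothetical sample of Prometheus exposition text, as returned by
# `kubectl get --raw /metrics` (values here are illustrative only).
SAMPLE = """\
# HELP apiserver_request_total Counter of apiserver requests.
# TYPE apiserver_request_total counter
apiserver_request_total{code="200",resource="pods",verb="GET"} 12417
apiserver_request_total{code="404",resource="pods",verb="GET"} 31
"""

def parse_metrics(text):
    """Return (name, labels-dict, value) tuples, skipping comment lines."""
    samples = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        head, value = line.rsplit(" ", 1)          # metric part, sample value
        name, _, labelpart = head.partition("{")   # split off the label block
        labels = {}
        if labelpart:
            for pair in labelpart.rstrip("}").split(","):
                key, _, val = pair.partition("=")
                labels[key] = val.strip('"')
        samples.append((name, labels, float(value)))
    return samples

for name, labels, value in parse_metrics(SAMPLE):
    print(name, labels.get("code"), value)
```

A real pipeline would scrape the endpoint on a schedule (as Prometheus does) rather than parse one-off dumps, but the format itself is this simple.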
You can use the [Tutorial for Adding a New Prometheus Scrape Target: Prometheus API Server Metrics](https:\/\/docs.aws.amazon.com\/AmazonCloudWatch\/latest\/monitoring\/ContainerInsights-Prometheus-Setup-configure.html#ContainerInsights-Prometheus-Setup-new-exporters) to collect metrics and create a CloudWatch dashboard to monitor your cluster's control plane.\n\nKubernetes API server metrics can be found [here](https:\/\/github.com\/kubernetes\/apiserver\/blob\/master\/pkg\/endpoints\/metrics\/metrics.go). For example, `apiserver_request_duration_seconds` can indicate how long API requests are taking to run.\n\nConsider monitoring these control plane metrics:\n\n### API Server\n\n| Metric | Description |\n|:--|:--|\n| `apiserver_request_total` | Counter of apiserver requests, broken out for each verb, dry-run value, group, version, resource, scope, component, and HTTP response code. |\n| `apiserver_request_duration_seconds*` | Response latency distribution in seconds for each verb, dry-run value, group, version, resource, subresource, scope, and component. |\n| `apiserver_admission_controller_admission_duration_seconds` | 
Admission controller latency histogram in seconds, identified by name and broken out for each operation, API resource, and type (validate or admit). |\n| `apiserver_admission_webhook_rejection_count` | Count of admission webhook rejections. Identified by name, operation, rejection code, type (validating or admit), and error type (calling_webhook_error, apiserver_internal_error, no_error). |\n| `rest_client_request_duration_seconds` | Request latency in seconds, broken down by verb and URL. |\n| `rest_client_requests_total` | Number of HTTP requests, partitioned by status code, method, and host. |\n\n### etcd\n\n| Metric | Description |\n|:--|:--|\n| `etcd_request_duration_seconds` | Etcd request latency in seconds for each operation and object type. |\n| `etcd_db_total_size_in_bytes` or <br \/>`apiserver_storage_db_total_size_in_bytes` (starting with EKS v1.26) | Etcd database size |\n\nConsider using the [Kubernetes Monitoring Overview Dashboard](https:\/\/grafana.com\/grafana\/dashboards\/14623) to visualize and monitor Kubernetes API server requests, latency, and etcd latency metrics.\n\nThe following Prometheus query can be used to monitor the current size of etcd. 
The query assumes there is a job called `kube-apiserver` for scraping metrics from the API metrics endpoint, and that the EKS version is below v1.26.\n\n```text\nmax(etcd_db_total_size_in_bytes{job=\"kube-apiserver\"} \/ (8 * 1024 * 1024 * 1024))\n```\n\n## Cluster Authentication\n\nEKS currently supports two types of authentication: [bearer\/service account tokens](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/authentication\/#service-account-tokens) and IAM authentication, which uses [webhook token authentication](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/authentication\/#webhook-token-authentication). When users call the Kubernetes API, the webhook passes the authentication token included in the request to IAM. The token, a base64-encoded signed URL, is generated by the AWS Command Line Interface ([AWS CLI](https:\/\/aws.amazon.com\/cli\/)).\n\nThe IAM user or role that creates the EKS cluster automatically gets full access to the cluster. 
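As context for the token described above: the bearer token produced by `aws eks get-token` (or `aws-iam-authenticator`) is the literal prefix `k8s-aws-v1.` followed by a base64url-encoded, pre-signed STS `GetCallerIdentity` URL with the padding stripped. The sketch below round-trips that encoding; the URL is a stand-in for illustration, not a real pre-signed request:

```python
import base64

# Documented EKS bearer token shape: "k8s-aws-v1." + base64url(pre-signed URL),
# with base64 "=" padding removed. The URL below is a made-up stand-in.
PREFIX = "k8s-aws-v1."
presigned_url = ("https://sts.us-west-2.amazonaws.com/"
                 "?Action=GetCallerIdentity&Version=2011-06-15"
                 "&X-Amz-Signature=EXAMPLE")

def encode_token(url):
    """Encode a pre-signed URL into the EKS bearer token format."""
    raw = base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")
    return PREFIX + raw

def decode_token(token):
    """Recover the pre-signed URL carried inside an EKS bearer token."""
    assert token.startswith(PREFIX)
    body = token[len(PREFIX):]
    body += "=" * (-len(body) % 4)   # restore the stripped padding
    return base64.urlsafe_b64decode(body).decode()

token = encode_token(presigned_url)
print(decode_token(token))
```

The authentication webhook on the API server side decodes the token the same way and executes the pre-signed request against STS to establish the caller's IAM identity.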
You can manage access to the EKS cluster by editing the [`aws-auth` configmap](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/add-user-role.html).\n\nIf you misconfigure the `aws-auth` configmap and lose access to the cluster, you can still use the cluster creator's user or role to access your EKS cluster.\n\nIn the unlikely event that you cannot use the IAM service in the AWS Region, you can also use the Kubernetes service account's bearer token to manage the cluster.\n\nCreate a \u201csuper-admin\u201d account that is permitted to perform all actions in the cluster:\n\n```\nkubectl -n kube-system create serviceaccount super-admin\n```\n\nCreate a role binding that gives super-admin the cluster-admin role:\n\n```\nkubectl create clusterrolebinding super-admin-rb --clusterrole=cluster-admin --serviceaccount=kube-system:super-admin\n```\n\nGet the service account's secret:\n\n```\nSECRET_NAME=`kubectl -n kube-system get serviceaccount\/super-admin -o 
jsonpath='{.secrets[0].name}'`\n```\n\nGet the token associated with the secret:\n\n```\nTOKEN=`kubectl -n kube-system get secret $SECRET_NAME -o jsonpath='{.data.token}'| base64 --decode`\n```\n\nAdd the service account and token to `kubeconfig`:\n\n```\nkubectl config set-credentials super-admin --token=$TOKEN\n```\n\nSet the current-context in `kubeconfig` to use the super-admin account:\n\n```\nkubectl config set-context --current --user=super-admin\n```\n\nThe final `kubeconfig` should look like this:\n\n```\napiVersion: v1\nclusters:\n- cluster:\n    certificate-authority-data:<REDACTED>\n    server: https:\/\/<CLUSTER>.gr7.us-west-2.eks.amazonaws.com\n  name: arn:aws:eks:us-west-2:<account number>:cluster\/<cluster name>\ncontexts:\n- context:\n    cluster: arn:aws:eks:us-west-2:<account number>:cluster\/<cluster name>\n    user: super-admin\n  name: arn:aws:eks:us-west-2:<account number>:cluster\/<cluster name>\ncurrent-context: arn:aws:eks:us-west-2:<account number>:cluster\/<cluster name>\nkind: Config\npreferences: {}\nusers:\n#- name: arn:aws:eks:us-west-2:<account number>:cluster\/<cluster name>\n#  user:\n#    exec:\n#      apiVersion: client.authentication.k8s.io\/v1alpha1\n#      args:\n#      - --region\n#      - us-west-2\n#      - eks\n#      - get-token\n#      - --cluster-name\n#      - <<cluster name>>\n#      command: aws\n#      env: null\n- name: super-admin\n  user:\n    token: <<super-admin sa\u2019s secret>>\n```\n\n## Admission Webhooks\n\nKubernetes has two types of admission webhooks: [validating admission webhooks and mutating admission webhooks](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/extensible-admission-controllers). 
These allow a user to extend the Kubernetes API and validate or mutate objects before they are accepted by the API. Poor configurations of these webhooks can destabilize the EKS control plane by blocking cluster-critical operations.\n\nTo avoid impacting cluster-critical operations, avoid setting \u201ccatch-all\u201d webhooks like the following:\n\n```\n- name: \"pod-policy.example.com\"\n  rules:\n  - apiGroups:   [\"*\"]\n    apiVersions: [\"*\"]\n    operations:  [\"*\"]\n    resources:   [\"*\"]\n    scope: \"*\"\n```\n\nAlternatively, make sure the webhook has a fail-open policy with a timeout shorter than 30 seconds, so that an unavailable webhook does not impair cluster-critical workloads.\n\n### Block Pods with unsafe `sysctls`\n\n`Sysctl` is a Linux utility that allows users to modify kernel parameters during runtime. 
These kernel parameters control various aspects of the operating system's behavior, such as the network, file system, virtual memory, and process management.\n\nKubernetes allows assigning `sysctl` profiles to pods. Kubernetes categorizes `sysctls` as safe and unsafe. Safe `sysctls` are namespaced in the container or pod, and setting them doesn't impact other pods on the node or the node itself. In contrast, unsafe sysctls are disabled by default since they can potentially disrupt other pods or make the node unstable.\n\nAs unsafe `sysctls` are disabled by default, the kubelet will not create a pod with an unsafe `sysctl` profile. If you create such a pod, the scheduler will repeatedly assign it to nodes, while the node fails to launch it. 
This infinite loop ultimately strains the cluster control plane, making the cluster unstable.\n\nConsider using [OPA Gatekeeper](https:\/\/github.com\/open-policy-agent\/gatekeeper-library\/blob\/377cb915dba2db10702c25ef1ee374b4aa8d347a\/src\/pod-security-policy\/forbidden-sysctls\/constraint.tmpl) or [Kyverno](https:\/\/kyverno.io\/policies\/pod-security\/baseline\/restrict-sysctls\/restrict-sysctls\/) to reject pods with unsafe `sysctls`.\n\n\n\n## Handling Cluster Upgrades\n\nSince April 2021, the Kubernetes release cycle has changed from four releases a year (once a quarter) to three releases a year. A new minor version (like 1.**21** or 1.**22**) is released approximately every [fifteen weeks](https:\/\/kubernetes.io\/blog\/2021\/07\/20\/new-kubernetes-release-cadence\/#what-s-changing-and-when). Starting with Kubernetes 1.19, each minor version is supported for approximately 12 months after it is first released. 
Kubernetes supports compatibility between the control plane and worker nodes for at least two minor versions.\n\nIn line with the Kubernetes community's support for Kubernetes versions, EKS provides at least three production-ready versions of Kubernetes at any given time, with a fourth version in deprecation.\n\nEKS will announce the deprecation of a given Kubernetes minor version at least 60 days before the end-of-support date. On the end-of-support date, clusters running the deprecated version begin to be automatically updated to the next Kubernetes version supported by EKS.\n\nEKS performs in-place cluster upgrades for both [Kubernetes versions](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/kubernetes-versions.html) and [EKS platform versions](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/platform-versions.html). 
This simplifies cluster operations and lets you take advantage of the latest Kubernetes features and apply security patches without downtime.\n\nNew Kubernetes versions introduce significant changes, and you cannot downgrade a cluster once it has been upgraded. Having a well-documented process for handling cluster upgrades is necessary for a smooth transition to newer Kubernetes versions. Instead of performing an in-place cluster upgrade, you can consider migrating to a new cluster when upgrading to a newer Kubernetes version. 
Cluster backup and restore tools like [VMware's Velero](https:\/\/github.com\/vmware-tanzu\/velero) can help you migrate to a new cluster.\n\n- You should familiarize yourself with the [Kubernetes deprecation policy](https:\/\/kubernetes.io\/docs\/reference\/using-api\/deprecation-policy\/), because newer versions can deprecate APIs and features that existing applications rely on, breaking them.\n- Before upgrading your cluster, review the [Kubernetes changelog](https:\/\/github.com\/kubernetes\/kubernetes\/tree\/master\/CHANGELOG) and [Amazon EKS Kubernetes versions](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/kubernetes-versions.html) to understand any negative impact on your workloads.\n- Consider testing the cluster upgrade in a non-production environment and identifying any impact on current workloads and controllers. 
You can automate the testing by building a continuous integration workflow that tests the compatibility of your applications, controllers, and custom integrations before moving to the new Kubernetes version.\n- You may need to upgrade Kubernetes add-ons after upgrading your cluster. Review [Updating an Amazon EKS cluster Kubernetes version](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/update-cluster.html) to validate the compatibility of the cluster add-ons with the cluster version.\n- Consider turning on [control plane logging](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/control-plane-logs.html) and reviewing the logs for errors.\n- Consider using `eksctl` to manage your EKS cluster. You can use `eksctl` to [update the control plane, add-ons, and worker nodes](https:\/\/eksctl.io\/usage\/cluster-upgrade\/).\n- An EKS control plane upgrade doesn't include upgrading worker nodes. 
Updating EKS worker nodes is your responsibility. To automate the worker node upgrade process, consider using [EKS managed node groups](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/managed-node-groups.html) or [EKS on Fargate](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/fargate.html).\n- If necessary, you can use the [`kubectl convert`](https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl-linux\/#install-kubectl-convert-plugin) plugin to [convert Kubernetes manifest files between different API versions](https:\/\/kubernetes.io\/docs\/tasks\/tools\/included\/kubectl-convert-overview\/).\n\n## Running large clusters\n\nEKS actively monitors the load on control plane instances and automatically scales them to ensure high performance. 
However, you should account for potential performance issues and limits within Kubernetes and quotas in AWS services when running large clusters.\n\n- According to [tests performed by the ProjectCalico team](https:\/\/www.projectcalico.org\/comparing-kube-proxy-modes-iptables-or-ipvs\/), clusters with more than 1,000 services can experience network latency when running `kube-proxy` in `iptables` mode. The solution is to switch to [running `kube-proxy` in `ipvs` mode](..\/..\/networking\/ipvs\/index.ko.md).\n- You may also experience [EC2 API request throttling](https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/APIReference\/throttling.html) if the CNI needs to request IP addresses for pods or if you need to create new EC2 instances frequently. You can reduce calls to the EC2 API by configuring the CNI to cache IP addresses. 
You can also use larger EC2 instance types to reduce EC2 scaling events.\n\n## Know your limits and service quotas\n\nAWS sets service limits (an upper bound on the number of each resource your team can request) to protect you from accidentally over-provisioning resources. [Amazon EKS service quotas](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/service-quotas.html) lists the service limits. There are two types of limits: soft limits, which can be changed using [AWS Service Quotas](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/service-quotas.html), and hard limits, which cannot be changed. You should consider these values when designing your applications. 
It is a good practice to review these service limits periodically and incorporate them into your application design.\n\n- In addition to the limits of the orchestration engine, other AWS services such as Elastic Load Balancing (ELB) and Amazon VPC have limits that can affect application performance.\n- For more information about EC2 limits, see [EC2 service limits](https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/ec2-resource-limits.html).\n- Each EC2 instance limits the number of packets that can be sent to the [Amazon-provided DNS server](https:\/\/docs.aws.amazon.com\/vpc\/latest\/userguide\/vpc-dns.html#vpc-dns-limits) to a maximum of 1024 packets per second per network interface.\n- The etcd storage limit in an EKS environment is **8GB** as per [upstream guidance](https:\/\/etcd.io\/docs\/v3.5\/dev-guide\/limit\/#storage-size-limit). Monitor the `etcd_db_total_size_in_bytes` metric to track the size of the etcd database. 
To set up this monitoring, you can refer to the [alert rules](https:\/\/github.com\/etcd-io\/etcd\/blob\/main\/contrib\/mixin\/mixin.libsonnet#L213-L240) `etcdBackendQuotaLowSpace` and `etcdExcessiveDatabaseGrowth`.\n\n## Additional Resources:\n\n- [De-mystifying cluster networking for Amazon EKS worker nodes](https:\/\/aws.amazon.com\/blogs\/containers\/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes\/)\n- [Amazon EKS cluster endpoint access control](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/cluster-endpoint.html)\n- [AWS re:Invent 2019: Amazon EKS under the hood (CON421-R1)](https:\/\/www.youtube.com\/watch?v=7vxDWDD2YnM)","site":"eks"}
{"questions":"eks search exclude true 2 EKS","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# EKS Data Plane\n\nTo operate highly available and resilient applications, you need a highly available and resilient data plane. An elastic data plane ensures that Kubernetes can scale and heal your applications automatically. A resilient data plane consists of two or more worker nodes, can grow and shrink with the workload, and can automatically recover from failures.\n\nYou have two choices for worker nodes with EKS: [EC2 instances](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/worker.html) and [Fargate](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/fargate.html). If you choose EC2 instances, you can manage the worker nodes yourself or use [EKS managed node groups](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/managed-node-groups.html). You can have a cluster with a mix of managed and self-managed worker nodes and Fargate. 
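As an illustrative sketch of such a mixed data plane (the cluster name, region, sizes, and namespace below are hypothetical, not prescriptive), a managed node group and a Fargate profile can be declared together in a single `eksctl` config file:

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # illustrative name
  region: us-west-2       # illustrative region
managedNodeGroups:
  - name: managed-ng-1
    instanceType: m5.large
    minSize: 2            # resilient data plane: at least two worker nodes
    maxSize: 5
fargateProfiles:
  - name: fp-serverless
    selectors:
      - namespace: serverless   # pods in this namespace run on Fargate
```

Applying this with `eksctl create cluster -f <file>` would provision both node types; pods land on Fargate or EC2 depending on the profile selectors.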
\n\nEKS on Fargate offers the easiest path to a resilient data plane. Fargate runs each pod in an isolated compute environment. Each pod running on Fargate gets its own worker node. Fargate automatically scales the data plane as Kubernetes scales pods. You can scale both the data plane and your workload by using the [horizontal pod autoscaler](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/horizontal-pod-autoscaler.html).\n\nThe preferred way to scale EC2 worker nodes is by using the [Kubernetes Cluster Autoscaler](https:\/\/github.com\/kubernetes\/autoscaler\/blob\/master\/cluster-autoscaler\/cloudprovider\/aws\/README.md), [EC2 Auto Scaling groups](https:\/\/docs.aws.amazon.com\/autoscaling\/ec2\/userguide\/AutoScalingGroup.html), or community projects like [Atlassian's Escalator](https:\/\/github.com\/atlassian\/escalator).\n\n## Recommendations \n\n### Use EC2 Auto Scaling Groups to create worker nodes\n\nIt is a best practice to create worker nodes using EC2 Auto Scaling groups instead of creating individual EC2 instances and joining them to the cluster. Auto Scaling groups automatically replace terminated or failed nodes, so the cluster always has the capacity to run your workload. \n\n### Use the Kubernetes Cluster Autoscaler to scale nodes\n\nThe Cluster Autoscaler adjusts the size of the data plane when there are pods that cannot be run because the cluster has insufficient resources, and adding another worker node would help. The Cluster Autoscaler is a reactive process: it waits until pods are in the *pending* state because the cluster lacks sufficient capacity. When such an event occurs, it adds EC2 instances to the cluster. Whenever the cluster runs out of capacity, new replicas or new pods will be unavailable (in the *Pending* state) until worker nodes are added. 
This delay can affect the reliability of your applications if the data plane cannot scale fast enough to meet workload demands. If a worker node is consistently underutilized and all of its pods can be scheduled on other worker nodes, the Cluster Autoscaler terminates that worker node.\n\n### Configure over-provisioning with the Cluster Autoscaler\n\nThe Cluster Autoscaler triggers scaling of the data plane when pods in the cluster are already in the *pending* state. Hence, there may be a delay between the time your application needs more replicas and when it actually gets them. One way to avoid this delay is to add more replicas than required, inflating the application's replica count. 
\n\nAnother pattern recommended for the Cluster Autoscaler uses [*pause* pods and the priority preemption feature](https:\/\/github.com\/kubernetes\/autoscaler\/blob\/master\/cluster-autoscaler\/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler). A *pause pod* runs the [pause container](https:\/\/github.com\/kubernetes\/kubernetes\/tree\/master\/build\/pause), which, as the name suggests, does nothing except act as a placeholder for compute capacity that can be used by other pods in the cluster. Because it runs with a *very low assigned priority*, a pause pod is evicted from its node when another pod needs to be created and the cluster has no available capacity. The Kubernetes scheduler notices the eviction of the pause pod and tries to reschedule it. But since the cluster is running at full capacity, the pause pod remains in the *pending* state, to which the Cluster Autoscaler responds by adding nodes. 
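The pause-pod pattern above can be sketched roughly as follows (the class name, replica count, and CPU request here are illustrative — tune the headroom to your workload):

```
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning     # illustrative name
value: -1                    # lower than the default (0), so these pods are preempted first
globalDefault: false
description: "Placeholder priority for pause pods"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2                # amount of headroom to reserve
  selector:
    matchLabels:
      run: overprovisioning
  template:
    metadata:
      labels:
        run: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: reserve-resources
        image: registry.k8s.io/pause
        resources:
          requests:
            cpu: 1           # capacity each placeholder pod holds
```

When a real workload needs the capacity, these pause pods are preempted, go *pending*, and trigger the Cluster Autoscaler to add a node.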
\n\n### Using the Cluster Autoscaler with multiple Auto Scaling groups\n\nRun the Cluster Autoscaler with the `--node-group-auto-discovery` flag enabled. Doing so allows the Cluster Autoscaler to find all Auto Scaling groups that carry a specific, defined tag, so you don't need to define and maintain each Auto Scaling group in the manifest.\n\n### Using the Cluster Autoscaler with local storage\n\nBy default, the Cluster Autoscaler does not scale down nodes that have pods deployed with local storage attached. Set the `--skip-nodes-with-local-storage` flag to false to allow the Cluster Autoscaler to scale down these nodes.\n\n### Spread worker nodes and workloads across multiple AZs\n\nYou can protect your workloads from failures in an individual AZ by running worker nodes and pods in multiple AZs. 
You can control the AZs in which worker nodes are created using the subnets you create the nodes in.\n\nIf you are using Kubernetes 1.18+, the recommended approach for spreading pods across AZs is to use [topology spread constraints for pods](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-topology-spread-constraints\/#spread-constraints-for-pods).\n\nThe deployment below spreads pods across AZs if possible, and lets those pods run anyway if not:\n\n```\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: web-server\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: web-server\n  template:\n    metadata:\n      labels:\n        app: web-server\n    spec:\n      topologySpreadConstraints:\n        - maxSkew: 1\n          whenUnsatisfiable: ScheduleAnyway\n          topologyKey: topology.kubernetes.io\/zone\n          labelSelector:\n            matchLabels:\n              app: web-server\n      containers:\n      - name: web-app\n        image: nginx\n        resources:\n          requests:\n            cpu: 1\n```\n\n!!! note\n    `kube-scheduler` is only aware of topology domains via nodes that exist with those labels. 
If the above deployment is deployed to a cluster with nodes only in a single zone, all of the pods will be scheduled on those nodes, because `kube-scheduler` is not aware of the other zones. For this topology spread to work as expected with the scheduler, nodes must already exist in all zones. This issue will be resolved in Kubernetes 1.24 with the addition of the `MinDomainsInPodTopologySpread` [feature gate](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-topology-spread-constraints\/#api), which lets you specify a `minDomains` property to inform the scheduler of the number of eligible domains.\n\n!!! warning\n    Setting `whenUnsatisfiable` to `DoNotSchedule` will cause pods to be unschedulable if the topology spread constraint cannot be fulfilled. It should only be set if it is preferable for pods not to run rather than to violate the topology spread constraint.\n\nOn older versions of Kubernetes, you can use pod anti-affinity rules to schedule pods across multiple AZs. 
The manifest below informs the Kubernetes scheduler to *prefer* scheduling pods in distinct AZs.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  labels:
    app: web-server
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - web-server
              topologyKey: failure-domain.beta.kubernetes.io/zone
            weight: 100
      containers:
      - name: web-app
        image: nginx
```

!!! warning 
    Do not *require* that pods be scheduled across distinct AZs; otherwise, the number of pods in a deployment will never exceed the number of AZs. 

### Ensure capacity in each AZ when using EBS volumes

If you use [Amazon EBS to provide Persistent Volumes](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html), you should ensure that the pods and their associated EBS volumes are located in the same AZ. 
At the time of writing, EBS volumes are only available within a single AZ. A pod cannot access EBS-backed persistent volumes located in a different AZ. The Kubernetes [scheduler knows which AZ a worker node is located in](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone), and it will always schedule a pod that requires an EBS volume in the same AZ as the volume. However, if there are no worker nodes available in the AZ where the volume is located, the pod cannot be scheduled. 

Create an Auto Scaling Group for each AZ with enough capacity to ensure that the cluster always has the capacity to schedule pods in the same AZ as the EBS volumes they need. 
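As a sketch of that setup, a Cluster Autoscaler deployment can be pointed at one Auto Scaling Group per AZ via repeated `--nodes` flags (the image tag, ASG names, and min/max sizes below are placeholders, not values from this guide):

```
spec:
  containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.2  # illustrative tag
    command:
    - ./cluster-autoscaler
    - --cloud-provider=aws
    # One --nodes entry (min:max:ASG-name) per AZ-scoped Auto Scaling Group
    - --nodes=1:10:eks-nodegroup-us-west-2a
    - --nodes=1:10:eks-nodegroup-us-west-2b
    - --nodes=1:10:eks-nodegroup-us-west-2c
```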
In addition, you should enable the `--balance-similar-node-groups` feature in Cluster Autoscaler.

If you are running an application that uses EBS volumes but has no requirements to be highly available, you can restrict the deployment of the application to a single AZ. In EKS, worker nodes are automatically given the `failure-domain.beta.kubernetes.io/zone` label, which contains the name of the AZ. You can see the labels attached to your nodes by running `kubectl get nodes --show-labels`. More information about built-in node labels is available [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#built-in-node-labels). You can use node selectors to schedule pods in a specific AZ. 


In the example below, the pod will only be scheduled in the `us-west-2c` AZ:

```
apiVersion: v1
kind: Pod
metadata:
  name: single-az-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - us-west-2c
  containers:
  - name: single-az-container
    image: kubernetes/pause
```

Persistent volumes (backed by EBS) are also automatically labeled with the name of the AZ. You can see which AZ a persistent volume belongs to by running `kubectl get pv -L topology.ebs.csi.aws.com/zone`. When a pod is created and claims a volume, Kubernetes schedules the pod on a node in the same AZ as that volume. 

Consider this scenario: you have an EKS cluster with one node group. This node group has three worker nodes spread across three AZs. You have an application that uses an EBS-backed persistent volume. 
When you create this application and the corresponding volume, the pod is created in the first of the three AZs. Then, the worker node that runs this pod becomes unhealthy and subsequently unavailable. Cluster Autoscaler will replace the unhealthy node with a new worker node; however, because the Auto Scaling Group spans three AZs, the new worker node may get launched in the second or the third AZ, but not in the first AZ as the situation demands. Since the AZ-constrained EBS volume only exists in the first AZ, and there are no worker nodes available in that AZ, the pod cannot be scheduled. Therefore, you should create one node group in each AZ, so there is always enough capacity available to run pods that cannot be scheduled in other AZs. 
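With `eksctl`, pinning one node group to each AZ can be sketched roughly like this (the cluster name, group names, and capacities are illustrative placeholders):

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-west-2
nodeGroups:
  # One node group pinned to each AZ so EBS-bound pods always have capacity
  - name: ng-us-west-2a
    availabilityZones: ["us-west-2a"]
    desiredCapacity: 1
  - name: ng-us-west-2b
    availabilityZones: ["us-west-2b"]
    desiredCapacity: 1
  - name: ng-us-west-2c
    availabilityZones: ["us-west-2c"]
    desiredCapacity: 1
```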



Alternatively, [EFS](https://github.com/kubernetes-sigs/aws-efs-csi-driver) can simplify cluster autoscaling when running applications that need persistent storage. Clients can access EFS file systems concurrently from all the AZs in a region. Even if a pod that uses an EFS-backed persistent volume gets terminated and scheduled in a different AZ, it will be able to mount the volume.

### Run node-problem-detector

Failures in worker nodes can impact the availability of your applications. [node-problem-detector](https://github.com/kubernetes/node-problem-detector) is a Kubernetes add-on that you can install in your cluster to detect worker node issues. 
You can use an [npd remedy system](https://github.com/kubernetes/node-problem-detector#remedy-systems) to automatically drain and terminate the node.

### Reserving resources for system and Kubernetes daemons

You can improve the stability of worker nodes by [reserving compute capacity for the operating system and Kubernetes daemons](https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/). Pods, especially ones without `limits` declared, can saturate system resources, putting nodes in a situation where operating system processes and Kubernetes daemons (`kubelet`, container runtime, etc.) compete with pods for system resources. You can use the `kubelet` flags `--system-reserved` and `--kube-reserved` to reserve resources for system processes (`udev`, `sshd`, etc.) and Kubernetes daemons, respectively. 

If you use the [EKS-optimized Linux AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html), CPU, memory, and storage are reserved for the system and Kubernetes daemons by default. 
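As an illustration, the config-file equivalents of those two flags look roughly like the following `KubeletConfiguration` fragment, shown here in YAML form (the reservation values are placeholders, not EKS defaults):

```
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: "100m"              # placeholder values; size for your own daemons
  memory: "100Mi"
  ephemeral-storage: "1Gi"
kubeReserved:
  cpu: "100m"
  memory: "200Mi"
  ephemeral-storage: "1Gi"
```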
When a worker node based on this AMI launches, EC2 user data is configured to trigger the [`bootstrap.sh` script](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh). This script calculates CPU and memory reservations based on the number of CPU cores and the total memory available on the EC2 instance. The calculated values are written to the `KubeletConfiguration` file located at `/etc/kubernetes/kubelet/kubelet-config.json`. 

You may need to increase the system resource reservation if you run custom daemons on the node and the amount of CPU and memory reserved by default is not sufficient. 

`eksctl` offers the easiest way to customize the [resource reservation for system and Kubernetes daemons](https://eksctl.io/usage/customizing-the-kubelet/). 

### Implementing QoS

For critical applications, consider defining `requests`=`limits` for the containers in the pod. 
This ensures that the containers will not be killed if another pod requests resources.

It is a best practice to implement CPU and memory limits for all containers, as it prevents a container from inadvertently consuming system resources and impacting the availability of other co-located processes.

### Configure and size resource requests/limits for all workloads

Some general guidance can be applied to sizing resource requests and limits for workloads:

- Do not specify resource limits on CPU. In the absence of limits, the request acts as a weight on [how much relative CPU time containers get](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run). 
This allows your workloads to use the full CPU without an artificial limit or starvation.

- For non-CPU resources, configuring `requests`=`limits` provides the most predictable behavior. If `requests`!=`limits`, the container also has its [QoS](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes) reduced from Guaranteed to Burstable, making it more likely to be evicted in the event of [node pressure](https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/).

- For non-CPU resources, do not specify a limit that is much larger than the request. The larger `limits` are configured relative to `requests`, the more likely nodes will be overcommitted, leading to a higher chance of workload interruption.

- Correctly sized requests are particularly important when using a node auto-scaling solution like [Karpenter](https://aws.github.io/aws-eks-best-practices/karpenter/) or [Cluster AutoScaler](https://aws.github.io/aws-eks-best-practices/cluster-autoscaling/). 
These tools look at your workload requests to determine the number and size of nodes to provision. If your requests are too small with larger limits, you may find your workloads evicted or OOM-killed when they have been tightly packed on a node.

Determining resource requests can be difficult, but tools like the [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) can help you "right-size" the requests by observing container resource usage at runtime. Other tools that may be useful for determining request sizes include:

- [Goldilocks](https://github.com/FairwindsOps/goldilocks)
- [Parca](https://www.parca.dev/)
- [Prodfiler](https://prodfiler.com/)
- [rsg](https://mhausenblas.info/right-size-guide/)


### Configure resource quotas for namespaces

Namespaces are intended for use in environments with many users spread across multiple teams or projects. 
They provide a scope for names and are a way to divide cluster resources between multiple teams, projects, and workloads. You can limit the aggregate resource consumption in a namespace. The [`ResourceQuota`](https://kubernetes.io/docs/concepts/policy/resource-quotas/) object can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace. You can limit the total sum of storage and/or compute (CPU and memory) resources that can be requested in a given namespace.

> If resource quota is enabled in a namespace for compute resources like CPU and memory, users must specify requests or limits for each container in that namespace.

Consider configuring quotas for each namespace. Consider using `LimitRanges` to automatically apply preconfigured limits to containers within a namespace. 
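A minimal `ResourceQuota` sketch for a team namespace might look like this (the name, namespace, and amounts are illustrative, not recommendations):

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota        # illustrative name
  namespace: team-a       # illustrative namespace
spec:
  hard:
    requests.cpu: "4"     # aggregate requests across the namespace
    requests.memory: 8Gi
    limits.cpu: "8"       # aggregate limits across the namespace
    limits.memory: 16Gi
    pods: "20"            # cap on the number of pods
```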


### Limit container resource usage within a namespace

Resource quotas help limit the amount of resources a namespace can use. The [`LimitRange` object](https://kubernetes.io/docs/concepts/policy/limit-range/) can help you implement minimum and maximum resources a container can request. Using `LimitRange`, you can set a default request and limit for containers, which is helpful if setting compute resource limits is not a standard practice in your organization. As the name suggests, `LimitRange` can enforce minimum and maximum compute resource usage per pod or container in a namespace. It can also enforce minimum and maximum storage requests per PersistentVolumeClaim in a namespace.

Consider using `LimitRange` in conjunction with `ResourceQuota` to enforce limits at both the container and namespace level. 
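A `LimitRange` that supplies per-container defaults and bounds might look like this (the name, namespace, and values are illustrative):

```
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits   # illustrative name
  namespace: team-a        # illustrative namespace
spec:
  limits:
  - type: Container
    default:               # applied as limits when a container declares none
      cpu: 500m
      memory: 256Mi
    defaultRequest:        # applied as requests when a container declares none
      cpu: 250m
      memory: 128Mi
    max:                   # hard per-container ceiling
      cpu: "2"
      memory: 1Gi
```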
Setting these limits will ensure that a container or a namespace does not impinge on resources used by other tenants of the cluster. 

## CoreDNS

CoreDNS fulfills name resolution and service discovery functions in Kubernetes. It is installed by default on EKS clusters. For interoperability, the Kubernetes Service for CoreDNS is still named [kube-dns](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/). CoreDNS pods run as part of a Deployment in the `kube-system` namespace, and on EKS, by default, they run two replicas with declared requests and limits. DNS queries are sent to the `kube-dns` service that runs in the `kube-system` namespace.

## Recommendations
### Monitor CoreDNS metrics
CoreDNS has built-in support for [Prometheus](https://github.com/coredns/coredns/tree/master/plugin/metrics). 
You should especially consider monitoring CoreDNS latency (`coredns_dns_request_duration_seconds_sum`; before version [1.7.0](https://github.com/coredns/coredns/blob/master/notes/coredns-1.7.0.md) the metric was called `core_dns_response_rcode_count_total`), errors (`coredns_dns_responses_total`, NXDOMAIN, SERVFAIL, FormErr), and the memory consumption of the CoreDNS pods.

For troubleshooting purposes, you can use kubectl to view the CoreDNS logs:

```shell
for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
```

### Use NodeLocal DNSCache
You can improve cluster DNS performance by running [NodeLocal DNSCache](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/). This feature runs a DNS caching agent on cluster nodes as a DaemonSet. 
All pods use the DNS caching agent running on the node for name resolution instead of the `kube-dns` Service.

### Configure cluster-proportional-scaler for CoreDNS

Another method of improving cluster DNS performance is to [automatically horizontally scale the CoreDNS Deployment](https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/#enablng-dns-horizontal-autoscaling) based on the number of nodes and CPU cores in the cluster. The [horizontal cluster proportional autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler/blob/master/README.md) is a container that resizes the number of replicas of a Deployment based on the size of the schedulable data plane.

Nodes and the aggregate of CPU cores in the nodes are the two metrics by which you can scale CoreDNS, and you can use both metrics at the same time. If you use larger nodes, CoreDNS scaling is based on the number of CPU cores. Whereas, if you use smaller nodes, the number of CoreDNS replicas depends on the CPU cores in your data plane. 
The proportional autoscaler configuration looks like this:

```
linear: '{"coresPerReplica":256,"min":1,"nodesPerReplica":16}'
```

### Choosing an AMI for the node group
EKS provides optimized EC2 AMIs that customers use to create both self-managed and managed node groups. These AMIs are published in all regions for every supported Kubernetes version. EKS marks these AMIs as deprecated when any CVEs or bugs are discovered. Hence, the recommendation is not to use deprecated AMIs when choosing an AMI for a node group.

Deprecated AMIs can be filtered using the EC2 `describe-images` API with the command below:

```
aws ec2 describe-images --image-ids ami-0d551c4f633e7679c --no-include-deprecated
```

A deprecated AMI can also be identified by verifying whether the `describe-images` output contains a `DeprecationTime` field. 
For example:

```
aws ec2 describe-images --image-ids ami-xxx --no-include-deprecated
{
    "Images": [
        {
            "Architecture": "x86_64",
            "CreationDate": "2022-07-13T15:54:06.000Z",
            "ImageId": "ami-xxx",
            "ImageLocation": "123456789012/eks_xxx",
            "ImageType": "machine",
            "Public": false,
            "OwnerId": "123456789012",
            "PlatformDetails": "Linux/UNIX",
            "UsageOperation": "RunInstances",
            "State": "available",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/xvda",
                    "Ebs": {
                        "DeleteOnTermination": true,
                        "SnapshotId": "snap-0993a2fc4bbf4f7f4",
                        "VolumeSize": 20,
                        "VolumeType": "gp2",
                        "Encrypted": false
                    }
                }
            ],
            "Description": "EKS Kubernetes Worker AMI with AmazonLinux2 image, (k8s: 1.19.15, docker: 20.10.13-2.amzn2, containerd: 1.4.13-3.amzn2)",
            "EnaSupport": true,
            "Hypervisor": "xen",
            "Name": "aws_eks_optimized_xxx",
            "RootDeviceName": "/dev/xvda",
            "RootDeviceType": "ebs",
            "SriovNetSupport": "simple",
            "VirtualizationType": "hvm",
            "DeprecationTime": "2023-02-09T19:41:00.000Z"
        }
    ]
}
```
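A small shell sketch of the same check: given `describe-images` output saved to a file, flag the AMI as deprecated if a `DeprecationTime` field is present (the JSON below is a trimmed stand-in for real output):

```shell
# Trimmed stand-in for `aws ec2 describe-images` output
cat > /tmp/images.json <<'EOF'
{
    "Images": [
        {
            "ImageId": "ami-xxx",
            "DeprecationTime": "2023-02-09T19:41:00.000Z"
        }
    ]
}
EOF

# An AMI is deprecated if the output carries a DeprecationTime field
if grep -q '"DeprecationTime"' /tmp/images.json; then
    echo "deprecated"
fi
```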
## EKS Data Plane

You can run worker nodes on [Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html) or on EC2 instances, for example with [EKS managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html). With Fargate, each pod runs in its own isolated compute environment and AWS provisions the capacity, so you do not patch, scale, or secure worker nodes yourself. You can pair Fargate with the [horizontal pod autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html) to scale pods automatically.

If you run the data plane on EC2, you are responsible for scaling the nodes. Options include the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md), [EC2 Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html), and [Atlassian's Escalator](https://github.com/atlassian/escalator). The Cluster Autoscaler adds EC2 capacity when pods cannot be scheduled and sit in a `pending` state because the cluster lacks resources, and it removes nodes that are underutilized. Because it reacts only after pods become `Pending`, there is a delay before new capacity joins the cluster.

To reduce that delay, you can [configure overprovisioning with the Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler) using low-priority [pause](https://github.com/kubernetes/kubernetes/tree/master/build/pause) pods as placeholders. When real workloads need the capacity, the pause pods are evicted and become `pending`, which triggers the Cluster Autoscaler to add nodes while the workloads immediately use the freed capacity.

The Cluster Autoscaler can find node groups automatically with the `--node-group-auto-discovery` flag. Consider setting `--skip-nodes-with-local-storage=false` so the Cluster Autoscaler can also scale down nodes that run pods using local storage.

### Spread worker nodes and workloads across multiple AZs

Run worker nodes and pods in multiple AZs so that a failure in a single AZ does not take down your workload. Since Kubernetes 1.18, you can use [topology spread constraints for pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods) to spread pods across AZs:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        whenUnsatisfiable: ScheduleAnyway
        topologyKey: topology.kubernetes.io/zone
        labelSelector:
          matchLabels:
            app: web-server
      containers:
      - name: web-app
        image: nginx
        resources:
          requests:
            cpu: 1
```

!!! note
    `kube-scheduler` is only aware of topology domains through the nodes that currently exist with those labels.
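To make `maxSkew` concrete: the skew is the difference between the zone with the most matching pods and the zone with the fewest. A tiny sketch with made-up zone assignments (a simplification: zones that contain no matching pods are not counted here):

```shell
# Compute the topology skew for a set of pods: the difference between the
# most and least populated zones. Zone assignments are made-up sample data,
# standing in for the zones reported by the scheduler's node labels.
pod_zones="us-west-2a us-west-2a us-west-2b"
counts=$(printf '%s\n' $pod_zones | sort | uniq -c | awk '{print $1}')
max=$(printf '%s\n' "$counts" | sort -n | tail -1)
min=$(printf '%s\n' "$counts" | sort -n | head -1)
echo "skew=$((max - min))"
```

With the sample data above the skew is 1, so a constraint with `maxSkew: 1` is satisfied; a fourth pod in `us-west-2a` would push the skew to 2 and violate it.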
If a deployment is scaled up in a cluster whose nodes are all in a single zone, `kube-scheduler` does not know about the other zones and cannot spread the pods there. In Kubernetes 1.24, the `MinDomainsInPodToplogySpread` feature gate added a [`minDomains`](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#api) property, which lets you tell the scheduler how many eligible domains there should be even when no node for a zone exists yet.

!!! warning
    Setting `whenUnsatisfiable` to `DoNotSchedule` leaves pods unschedulable whenever the Topology Spread Constraints cannot be satisfied. Only use it when it is preferable for pods not to run at all rather than violate the constraint.

As an alternative to Topology Spread Constraints, you can use pod anti-affinity to prefer scheduling replicas in different AZs:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  labels:
    app: web-server
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - web-server
              topologyKey: failure-domain.beta.kubernetes.io/zone
            weight: 100
      containers:
      - name: web-app
        image: nginx
```

!!! warning
    Do not *require* that pods run in distinct AZs; otherwise the number of running pods can never exceed the number of AZs.

### Ensure capacity in each AZ when using EBS volumes

If you use [Amazon EBS to provide persistent volumes](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html), be aware that EBS volumes are only available within the AZ in which they were created. A pod that uses an EBS-backed persistent volume can only be scheduled in the [same AZ](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone) as its volume; if no node in that AZ has capacity, the pod remains unschedulable.
If the Cluster Autoscaler manages capacity for workloads that use EBS volumes, consider creating one node group per AZ (with the `--balance-similar-node-groups` feature enabled) so that there is always capacity in the AZ where each volume lives.

In an EKS cluster, you can see the AZ of each node through the `failure-domain.beta.kubernetes.io/zone` label with `kubectl get nodes --show-labels`. The [built-in node labels](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#built-in-node-labels) can be used with node affinity to pin a pod to a zone, for example `us-west-2c`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: single-az-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - us-west-2c
  containers:
  - name: single-az-container
    image: kubernetes/pause
```

You can see which AZ each EBS-backed persistent volume belongs to with `kubectl get pv -L topology.ebs.csi.aws.com/zone`.

If your workloads need storage that is reachable from every AZ rather than being tied to a single zone, consider [EFS](https://github.com/kubernetes-sigs/aws-efs-csi-driver), whose file systems can be mounted from all AZs in a region.
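As a sketch, a minimal StorageClass for the EFS CSI driver; the name and `fileSystemId` below are placeholders you would replace with your own values:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc                 # illustrative name
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap     # dynamic provisioning via EFS access points
  fileSystemId: fs-12345678    # placeholder: your EFS file system ID
  directoryPerms: "700"
```

Because EFS mount targets exist in each AZ, pods using volumes from this class are not constrained to a single zone the way EBS-backed pods are.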
### Run node-problem-detector

Failures in worker nodes can affect the availability of your pods. Run the [node-problem-detector](https://github.com/kubernetes/node-problem-detector) (npd) to surface node issues, and consider a [remedy system](https://github.com/kubernetes/node-problem-detector#remedy-systems) to act on the problems it detects.

### Reserve resources for system and Kubernetes daemons

You can improve node stability by [reserving compute capacity for system daemons](https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/). Pods, especially ones without `limits`, can saturate a node's resources and starve operating-system processes and the kubelet. The kubelet flags `system-reserved` and `kube-reserved` reserve resources for system daemons (such as `udev` and `sshd`) and for Kubernetes daemons, respectively.

The [EKS-optimized Linux AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) reserves CPU and memory for system and Kubernetes daemons out of the box. When instances based on this AMI launch, the [bootstrap.sh](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh) script computes the reservations from the EC2 instance type's CPU count and writes them to `/etc/kubernetes/kubelet/kubelet-config.json`, the kubelet's `KubeletConfiguration` file. You may need to increase the reservations if you run custom daemons on the node. [eksctl offers a convenient way to customize the kubelet configuration](https://eksctl.io/usage/customizing-the-kubelet/).

### Configure QoS for all pods

Kubernetes assigns every pod a QoS class based on its `requests` and `limits`. Running pods without resource requests and limits lets a single misbehaving container consume all of a node's CPU and memory. Declaring requests helps the scheduler place pods where resources are actually available, and [CPU limits determine how much CPU a container may use when there is contention](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run).
For critical pods, set `limits` equal to `requests` so the pod receives the [`Guaranteed` QoS class](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes); pods whose `requests` are lower than their `limits` are `Burstable` and are evicted earlier under [node-pressure eviction](https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/). Declaring CPU and memory for all containers also helps node autoscaling behave predictably, whether you use [Karpenter](https://aws.github.io/aws-eks-best-practices/karpenter/) or the [Cluster Autoscaler](https://aws.github.io/aws-eks-best-practices/cluster-autoscaling/).

Choosing request values is not trivial: setting them too low risks OOM kills, while setting them too high wastes capacity. The [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) can recommend or automatically apply right-sized requests, and tools such as [Goldilocks](https://github.com/FairwindsOps/goldilocks), [Parca](https://www.parca.dev/), [Prodfiler](https://prodfiler.com/), and the [right size guide (rsg)](https://mhausenblas.info/right-size-guide/) can help you analyze actual usage.

You can also enforce requests and limits per namespace. A [ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) caps the aggregate CPU and memory a namespace may consume, while a [LimitRange](https://kubernetes.io/docs/concepts/policy/limit-range/) constrains individual objects.
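As a sketch, a LimitRange with illustrative values that applies default requests and limits to containers that do not declare their own:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-resources   # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:    # applied when a container sets no requests
      cpu: 100m
      memory: 128Mi
    default:           # applied when a container sets no limits
      cpu: 500m
      memory: 256Mi
```

Applying such an object in a namespace ensures that even pods deployed without resource declarations end up with a QoS class other than `BestEffort`.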
A LimitRange applies default requests and limits to containers that declare none, and can also enforce minimum and maximum sizes for pods and containers in a namespace. Using LimitRange together with ResourceQuota keeps both individual workloads and whole namespaces within sensible bounds.

### Monitor CoreDNS

CoreDNS fulfills name resolution and service discovery in Kubernetes and is installed by default on EKS clusters. For interoperability, the Kubernetes Service for CoreDNS in the `kube-system` namespace is still named [kube-dns](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/), and the CoreDNS pods run there as a Deployment.

CoreDNS exposes [Prometheus metrics](https://github.com/coredns/coredns/tree/master/plugin/metrics). Key metrics to watch include `coredns_dns_request_duration_seconds_sum` and, since [CoreDNS 1.7.0](https://github.com/coredns/coredns/blob/master/notes/coredns-1.7.0.md) renamed `coredns_dns_response_rcode_count_total`, `coredns_dns_responses_total`, looking for error codes such as `NXDOMAIN`, `SERVFAIL`, and `FormErr`. You can also fetch CoreDNS logs with kubectl:

```shell
for p in $(kubectl get pods --namespace kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace kube-system $p; done
```

### NodeLocal DNSCache and CoreDNS scaling

You can improve cluster DNS performance by running [NodeLocal DNSCache](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/), which runs a DNS caching agent on each node as a DaemonSet so pods do not need to reach the `kube-dns` Service for every lookup.

You can also [scale CoreDNS horizontally](https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/#enablng-dns-horizontal-autoscaling) with the [cluster-proportional-autoscaler](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler/blob/master/README.md), which scales a workload based on the number of nodes and CPU cores in the cluster.
In `linear` mode, the cluster-proportional-autoscaler derives the CoreDNS replica count from the cluster's cores and nodes using a ConfigMap parameter such as:

```json
{"coresPerReplica": 256, "min": 1, "nodesPerReplica": 16}
```

### Check the deprecation time of your AMIs

EKS periodically publishes new EC2 AMIs with updated packages and CVE fixes and deprecates old ones. You can check whether an AMI is deprecated with the EC2 `describe-images` API:

```shell
aws ec2 describe-images --image-id ami-0d551c4f633e7679c --no-include-deprecated
```

A deprecated AMI includes a `DeprecationTime` field in the output of `aws ec2 describe-images --image-id ami-xxx --no-include-deprecated`:

```json
{
    "Images": [
        {
            "Architecture": "x86_64",
            "CreationDate": "2022-07-13T15:54:06.000Z",
            "ImageId": "ami-xxx",
            "ImageLocation": "123456789012/eks_xxx",
            "ImageType": "machine",
            "Public": false,
            "OwnerId": "123456789012",
            "PlatformDetails": "Linux/UNIX",
            "UsageOperation": "RunInstances",
            "State": "available",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/xvda",
                    "Ebs": {
                        "DeleteOnTermination": true,
                        "SnapshotId": "snap-0993a2fc4bbf4f7f4",
                        "VolumeSize": 20,
                        "VolumeType": "gp2",
                        "Encrypted": false
                    }
                }
            ],
            "Description": "EKS Kubernetes Worker AMI with AmazonLinux2 image, (k8s: 1.19.15, docker: 20.10.13-2.amzn2, containerd: 1.4.13-3.amzn2)",
            "EnaSupport": true,
            "Hypervisor": "xen",
            "Name": "aws_eks_optimized_xxx",
            "RootDeviceName": "/dev/xvda",
            "RootDeviceType": "ebs",
            "SriovNetSupport": "simple",
            "VirtualizationType": "hvm",
            "DeprecationTime": "2023-02-09T19:41:00.000Z"
        }
    ]
}
```
"}
{"questions":"eks exclude true Amazon EKS Karpenter Fargate EKS Anywhere AWS Outposts AWS Local Zone search","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# Best Practices for Cluster Upgrades\n\nThis guide shows cluster administrators how to plan and execute their Amazon EKS upgrade strategy. It also describes how to upgrade self-managed nodes, managed node groups, Karpenter nodes, and Fargate nodes. It does not include guidance on EKS Anywhere, self-managed Kubernetes, AWS Outposts, or AWS Local Zones.\n\n## Overview\n\nA Kubernetes version encompasses both the control plane and the data plane. To ensure smooth operation, the control plane and the data plane should both run the same [Kubernetes minor version, e.g., 1.24](https:\/\/kubernetes.io\/releases\/version-skew-policy\/#supported-versions). 
While AWS manages and upgrades the control plane, updating the worker nodes in the data plane is your responsibility.\n\n* **Control plane** \u2014 The version of the control plane is determined by the Kubernetes API server. In an EKS cluster, AWS manages this component. Upgrades of the control plane version are initiated via the AWS API.\n* **Data plane** \u2014 The data plane version refers to the kubelet version running on your nodes. Nodes in the same cluster can run different versions. Check the versions of all nodes with `kubectl get nodes`.\n\n## Before upgrading\n\nIf you plan to upgrade your Kubernetes version in Amazon EKS, there are a few important policies, tools, and procedures to put in place before starting an upgrade. 
\n\n* **Understand deprecation policies** \u2014 Learn in detail how the [Kubernetes deprecation policy](https:\/\/kubernetes.io\/docs\/reference\/using-api\/deprecation-policy\/) works. Be aware of upcoming changes that may affect your existing applications. Newer versions of Kubernetes often phase out certain APIs and features, which can cause problems for running applications.\n* **Review the Kubernetes change log** \u2014 Thoroughly review the [Kubernetes change logs](https:\/\/github.com\/kubernetes\/kubernetes\/tree\/master\/CHANGELOG) alongside the [Amazon EKS Kubernetes versions](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/kubernetes-versions.html) to understand possible impact on your cluster, such as breaking changes that may affect your workloads.\n* **Assess cluster add-on compatibility** \u2014 Amazon EKS does not automatically update add-ons when a new version is released or after you update your cluster to a new Kubernetes minor version. 
To understand the compatibility of your existing cluster add-ons with the cluster version you intend to upgrade to, review [Updating an add-on](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/managing-add-ons.html#updating-an-add-on).\n* **Enable control plane logging** \u2014 Enable [control plane logging](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/control-plane-logs.html) to capture logs, errors, or issues that can arise during the upgrade process, and consider reviewing these logs for anomalies. Test cluster upgrades in a non-production environment, or integrate automated tests into your continuous integration workflow to assess version compatibility with your applications, controllers, and custom integrations.\n* **Explore eksctl for cluster management** \u2014 Consider using [eksctl](https:\/\/eksctl.io\/) to manage your EKS cluster. Out of the box, it can [update the control plane, manage add-ons, and handle worker node updates](https:\/\/eksctl.io\/usage\/cluster-upgrade\/). 
\n* **Choose managed node groups or EKS Fargate** \u2014 Streamline and automate worker node upgrades by using [EKS managed node groups](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/managed-node-groups.html) or [EKS Fargate](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/fargate.html). These options simplify the process and reduce manual intervention.\n* **Utilize the kubectl convert plugin** \u2014 Leverage the [kubectl convert plugin](https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl-linux\/#install-kubectl-convert-plugin) to ease the [conversion of Kubernetes manifest files](https:\/\/kubernetes.io\/docs\/tasks\/tools\/included\/kubectl-convert-overview\/) between different API versions. 
This can help ensure your configurations remain compatible with the new Kubernetes version.\n\n## Keep your cluster up to date\n\nStaying current with Kubernetes updates is paramount for a secure and efficient EKS environment, reflecting the shared responsibility model in Amazon EKS. By integrating these strategies into your operational workflow, you position yourself to maintain up-to-date, secure clusters that take full advantage of the latest features and improvements. Tactics:\n\n* **Supported version policy** \u2014 Aligned with the Kubernetes community, Amazon EKS typically provides three active Kubernetes versions and deprecates a fourth each year. Deprecation notices are issued at least 60 days before a version reaches its end-of-support date. 
For more details, see the [EKS Version FAQs](https:\/\/aws.amazon.com\/eks\/eks-version-faq\/).\n* **Auto-upgrade policy** \u2014 It is best to stay in sync with Kubernetes updates in your EKS cluster. Kubernetes community support, including bug fixes and security patches, typically ceases for versions older than one year. Deprecated versions may also lack vulnerability reporting, posing a potential risk. Failing to upgrade proactively before a version's end of life triggers an automatic upgrade, which can disrupt your workloads and systems. For details, see the [EKS Version Support Policy](https:\/\/aws.amazon.com\/eks\/eks-version-support-policy\/).\n* **Create an upgrade runbook** \u2014 Establish a well-documented process for managing upgrades. As part of a proactive approach, develop runbooks and specialized tools tailored to your upgrade process. 
This not only improves your preparedness but also simplifies complex transitions. Make it a standard practice to upgrade your clusters at least once a year. Doing so keeps you aligned with ongoing technological advancements and strengthens the efficiency and security of your environment.\n\n## Review the EKS release calendar\n\n[Review the EKS Kubernetes release calendar](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/kubernetes-versions.html#kubernetes-release-calendar) to learn when new versions are released and when support for specific versions ends. Generally, EKS releases three minor versions of Kubernetes annually, and each minor version is supported for about 14 months. 
\n\nAdditionally, review the upstream [Kubernetes release information](https:\/\/kubernetes.io\/releases\/).\n\n## Understand how the shared responsibility model applies to cluster upgrades\n\nYou are responsible for initiating the upgrades of both the cluster control plane and the data plane. [Learn how to initiate an upgrade.](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/update-cluster.html) When you initiate a cluster upgrade, AWS manages upgrading the cluster control plane. 
Upgrading the data plane, including Fargate pods and [other add-ons](#upgrade-add-ons-and-components-using-the-kubernetes-api), is your responsibility. You must validate and plan upgrades for the workloads running on your cluster to ensure that their availability and operations are not impacted after a cluster upgrade.\n\n## Upgrade clusters in place\n\nEKS supports an in-place cluster upgrade strategy. This preserves cluster resources and keeps cluster configuration (e.g., API endpoint, OIDC, ENIs, load balancers) consistent. It is less disruptive for cluster users, and existing workloads and resources in the cluster can continue to be used without redeploying workloads or migrating external resources (e.g., DNS, storage).\n\nWhen performing an in-place cluster upgrade, it is important to note that you can execute only one minor version upgrade at a time 
(e.g., from 1.24 to 1.25).\n\nThis means that if you need to update across multiple versions, a series of sequential upgrades is required. Planning sequential upgrades is more complicated and carries a higher risk of downtime. In this situation, [evaluate a blue\/green cluster upgrade strategy.](#evaluate-bluegreen-clusters-as-an-alternative-to-in-place-cluster-upgrades)\n\n## Upgrade your control plane and data plane in sequence\n\nTo upgrade a cluster you will need to take the following actions:\n\n1. [Review the Kubernetes and EKS release notes.](#use-the-eks-documentation-to-create-an-upgrade-checklist)\n2. [Take a backup of the cluster. (optional)](#backup-the-cluster-before-upgrading)\n3. [Identify and remediate deprecated and removed API usage in your workloads.](#identify-and-remediate-removed-api-usage-before-upgrading-the-control-plane)\n4. 
[\uad00\ub9ac\ud615 \ub178\ub4dc \uadf8\ub8f9\uc744 \uc0ac\uc6a9\ud558\ub294 \uacbd\uc6b0 \ucee8\ud2b8\ub864 \ud50c\ub808\uc778\uacfc \ub3d9\uc77c\ud55c Kubernetes \ubc84\uc804\uc5d0 \uc788\ub294\uc9c0 \ud655\uc778\ud558\uc2ed\uc2dc\uc624.](#track-the-version-skew-of-nodes-ensure-managed-node-groups-are-on-the-same-version-as-the-control-plane-before-upgrading) EKS Fargate \ud504\ub85c\ud30c\uc77c\uc5d0\uc11c \uc0dd\uc131\ud55c EKS \uad00\ub9ac\ud615 \ub178\ub4dc \uadf8\ub8f9 \ubc0f \ub178\ub4dc\ub294 \ucee8\ud2b8\ub864 \ud50c\ub808\uc778\uacfc \ub370\uc774\ud130 \ud50c\ub808\uc778 \uac04\uc758 \ub9c8\uc774\ub108 \ubc84\uc804 \uc2a4\ud050\ub97c 1\uac1c\ub9cc \uc9c0\uc6d0\ud569\ub2c8\ub2e4.\n5. [AWS \ucf58\uc194 \ub610\ub294 CLI\ub97c \uc0ac\uc6a9\ud558\uc5ec \ud074\ub7ec\uc2a4\ud130 \ucee8\ud2b8\ub864 \ud50c\ub808\uc778\uc744 \uc5c5\uadf8\ub808\uc774\ub4dc\ud558\uc2ed\uc2dc\uc624.](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/update-cluster.html)\n6. [\uc560\ub4dc\uc628 \ud638\ud658\uc131\uc744 \uac80\ud1a0\ud558\uc138\uc694.](#upgrade-add-ons-and-components-using-the-kubernetes-api) \ud544\uc694\uc5d0 \ub530\ub77c \ucfe0\ubc84\ub124\ud2f0\uc2a4 \uc560\ub4dc\uc628\uacfc \ucee4\uc2a4\ud140 \ucee8\ud2b8\ub864\ub7ec\ub97c \uc5c5\uadf8\ub808\uc774\ub4dc\ud558\uc2ed\uc2dc\uc624. \n7. [kubectl \uc5c5\ub370\uc774\ud2b8\ud558\uae30.](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/install-kubectl.html)\n8. [\ud074\ub7ec\uc2a4\ud130 \ub370\uc774\ud130 \ud50c\ub808\uc778\uc744 \uc5c5\uadf8\ub808\uc774\ub4dc\ud569\ub2c8\ub2e4.](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/update-managed-node-group.html) \uc5c5\uadf8\ub808\uc774\ub4dc\ub41c \ud074\ub7ec\uc2a4\ud130\uc640 \ub3d9\uc77c\ud55c \ucfe0\ubc84\ub124\ud2f0\uc2a4 \ub9c8\uc774\ub108 \ubc84\uc804\uc73c\ub85c \ub178\ub4dc\ub97c \uc5c5\uadf8\ub808\uc774\ub4dc\ud558\uc2ed\uc2dc\uc624. 
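
The control plane upgrade in step 5 can be driven from the AWS CLI. A minimal sketch, assuming a hypothetical cluster named `my-cluster` being moved to Kubernetes `1.25` (substitute your own values):

```shell
# Hypothetical cluster name and target version -- adjust for your environment.
CLUSTER=my-cluster
TARGET_VERSION=1.25

# Kick off the control plane upgrade (one minor version at a time).
aws eks update-cluster-version --name "${CLUSTER}" --kubernetes-version "${TARGET_VERSION}"

# Block until the control plane is active again before upgrading the data plane.
aws eks wait cluster-active --name "${CLUSTER}"

# Confirm the control plane is now on the target version.
aws eks describe-cluster --name "${CLUSTER}" --query 'cluster.version' --output text
```

Waiting for the cluster to return to `ACTIVE` before touching node groups keeps the control plane/data plane version skew within the supported range.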

## Use the EKS Documentation to create an upgrade checklist

The EKS Kubernetes [version documentation](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html) includes a detailed list of changes for each version. Build a checklist for each upgrade.

For specific EKS version upgrade guidance, review the documentation for notable changes and considerations for each version:

* [EKS 1.27](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.27)
* [EKS 1.26](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.26)
* [EKS 1.25](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.25)
* [EKS 1.24](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.24)
* [EKS 1.23](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.23)
* [EKS 1.22](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.22)

## Upgrade add-ons and components using the Kubernetes API

Before you upgrade a cluster, you should understand which versions of Kubernetes components you are using. Inventory your cluster components and identify the ones that use the Kubernetes API directly. This includes critical cluster components such as monitoring and logging agents, cluster autoscalers, container storage drivers (e.g. [EBS CSI](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html), [EFS CSI](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html)), ingress controllers, and any other workloads or add-ons that use the Kubernetes API directly.

!!! tip
    Critical cluster components are often installed in a `*-system` namespace.
    
    ```
    kubectl get ns | grep '-system'
    ```

Once you have identified the components that use the Kubernetes API, check their documentation for version compatibility and upgrade requirements.
For example, see the [AWS Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/deploy/installation/) documentation for version compatibility. Some components may need to be upgraded, or their configuration changed, before proceeding with the cluster upgrade. Some critical components to check are [CoreDNS](https://github.com/coredns/coredns), [kube-proxy](https://kubernetes.io/docs/concepts/overview/components/#kube-proxy), the [VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s), and storage drivers.

Clusters often contain many workloads that use the Kubernetes API and are needed for workload capabilities such as ingress controllers, continuous delivery systems, and monitoring tools. When you upgrade an EKS cluster, you must also upgrade your add-ons and third-party tools to make sure they are compatible.

See the following examples of common add-ons and their relevant upgrade documentation:

* **Amazon VPC CNI:** For the recommended version of the Amazon VPC CNI add-on for each cluster version, see [Updating the Amazon VPC CNI plugin for Kubernetes self-managed add-on](https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html). **When installed as an Amazon EKS add-on, it can only be upgraded one minor version at a time.**
* **kube-proxy:** See [Updating the Kubernetes kube-proxy self-managed add-on](https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html).
* **CoreDNS:** See [Updating the CoreDNS self-managed add-on](https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html).
* **AWS Load Balancer Controller:** The AWS Load Balancer Controller needs to be compatible with the EKS version you have deployed. See the [installation guide](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) for more information.

* **Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver:** For installation and upgrade information, see [Managing the Amazon EBS CSI driver as an Amazon EKS add-on](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html).
* **Amazon Elastic File System (Amazon EFS) Container Storage Interface (CSI) driver:** For installation and upgrade information, see [Amazon EFS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html).
* **Kubernetes Metrics Server:** For more information, see [metrics-server](https://kubernetes-sigs.github.io/metrics-server/) on GitHub.
* **Kubernetes Cluster Autoscaler:** To upgrade the version of the Kubernetes Cluster Autoscaler, change the version of the image in the deployment. The Cluster Autoscaler is tightly coupled with the Kubernetes scheduler, so you will always need to upgrade it when you upgrade the cluster.
Review the [GitHub releases](https://github.com/kubernetes/autoscaler/releases) to find the address of the latest release corresponding to your Kubernetes minor version.
* **Karpenter:** For installation and upgrade information, see the [Karpenter documentation](https://karpenter.sh/v0.27.3/faq/#which-versions-of-kubernetes-does-karpenter-support).

## Verify basic EKS requirements before upgrading

AWS requires certain resources in your account to complete the upgrade process. If these resources aren't present, the cluster cannot be upgraded. A control plane upgrade requires the following resources:

1. Available IP addresses: Amazon EKS requires up to five available IP addresses from the subnets that you specified when you created your cluster in order to update the cluster.
2. EKS IAM role: the control plane IAM role is still present in the account with the necessary permissions.
3. If your cluster has secret encryption enabled, ensure that the cluster IAM role has permission to use the AWS Key Management Service (AWS KMS) key.

### Verify available IP addresses

To update the cluster, Amazon EKS requires up to five available IP addresses from the subnets that you specified when you created your cluster.

To verify that your subnets have enough IP addresses to upgrade the cluster, you can run the following command:

```
CLUSTER=<cluster name>
aws ec2 describe-subnets --subnet-ids \
  $(aws eks describe-cluster --name ${CLUSTER} \
  --query 'cluster.resourcesVpcConfig.subnetIds' \
  --output text) \
  --query 'Subnets[*].[SubnetId,AvailabilityZone,AvailableIpAddressCount]' \
  --output table

----------------------------------------------------
|                  DescribeSubnets                 |
+---------------------------+--------------+-------+
|  subnet-067fa8ee8476abbd6 |  us-east-1a  |  8184 |
|  subnet-0056f7403b17d2b43 |  us-east-1b  |  8153 |
|  subnet-09586f8fb3addbc8c |  us-east-1a  |  8120 |
|  subnet-047f3d276a22c6bce |  us-east-1b  |  8184 |
+---------------------------+--------------+-------+
```

You can use the [VPC CNI Metrics Helper](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/cni-metrics-helper/README.md) to create a CloudWatch dashboard of VPC metrics.

If you are running out of IP addresses in the subnets you initially specified during cluster creation, Amazon EKS recommends updating the cluster subnets using the "UpdateClusterConfiguration" API before starting the Kubernetes version upgrade. Verify that the new subnets you provide:

* belong to the same set of Availability Zones selected during cluster creation.
* belong to the same VPC provided during cluster creation.

Please consider associating additional CIDR blocks if the IP addresses in the existing VPC CIDR block run out. AWS enables the association of additional CIDR blocks with your existing cluster VPC, effectively expanding your IP address pool. This expansion can be accomplished by introducing additional private IP ranges (RFC 1918) or, if necessary, public IP ranges (non-RFC 1918). You must add the new VPC CIDR blocks and allow the VPC refresh to complete before Amazon EKS can use the new CIDR. After that, you can update the subnets in the VPC based on the newly configured CIDR blocks.


### Verify EKS IAM role

To verify that the IAM role is available and has the correct assume role policy in your account, you can run the following command:

```
CLUSTER=<cluster name>
ROLE_ARN=$(aws eks describe-cluster --name ${CLUSTER} \
  --query 'cluster.roleArn' --output text)
aws iam get-role --role-name ${ROLE_ARN##*/} \
  --query 'Role.AssumeRolePolicyDocument'
  
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "eks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

## Migrate to EKS Add-ons

Amazon EKS automatically installs add-ons such as the Amazon VPC CNI plugin for Kubernetes, `kube-proxy`, and CoreDNS for every cluster. Add-ons may be self-managed or installed as Amazon EKS Add-ons. Amazon EKS Add-ons are an alternate way to manage add-ons using the EKS API.

You can use Amazon EKS Add-ons to update versions with a single command. For example:

```
aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version version-number \
--service-account-role-arn arn:aws:iam::111122223333:role/role-name --configuration-values '{}' --resolve-conflicts PRESERVE
```

Check if you have any EKS Add-ons with:

```
aws eks list-addons --cluster-name <cluster name>
```

!!! warning
      
    EKS Add-ons are not automatically upgraded during a control plane upgrade. You must initiate the EKS add-on updates and select the desired version.

    * You are responsible for selecting a compatible version from all available versions. [Review the guidance on add-on version compatibility.](#upgrade-add-ons-and-components-using-the-kubernetes-api)
    * Amazon EKS Add-ons can only be upgraded one minor version at a time.
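
To help choose a compatible version, you can ask EKS which add-on versions it supports for a given Kubernetes version. A minimal sketch, using `vpc-cni` and Kubernetes `1.25` as placeholder values:

```shell
# List the vpc-cni add-on versions EKS reports for Kubernetes 1.25,
# flagging which one is the default for that cluster version.
# The add-on name and Kubernetes version here are placeholders -- substitute your own.
aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.25 \
  --query 'addons[].addonVersions[].{Version: addonVersion, Default: compatibilities[0].defaultVersion}' \
  --output table
```

Running this before `aws eks update-addon` removes the guesswork of picking an `--addon-version` that the target cluster version actually supports.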

[Learn more about which components are available as EKS Add-ons, and how to get started.](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html)

[Learn how to supply a custom configuration to an EKS Add-on.](https://aws.amazon.com/blogs/containers/amazon-eks-add-ons-advanced-configuration/)

## Identify and remediate removed API usage before upgrading the control plane

You should identify usage of removed APIs before upgrading your EKS control plane. To do that, we recommend using tools that can check a running cluster or static, rendered Kubernetes manifest files.

Running the check against static manifest files is generally more accurate. If run against live clusters, these tools may return false positives.

A deprecated Kubernetes API does not mean the API has been removed. You should check the [Kubernetes Deprecation Policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/) to understand how API removal affects your workloads.

### Cluster Insights
[Cluster Insights](https://docs.aws.amazon.com/eks/latest/userguide/cluster-insights.html) is a feature that provides findings on issues that may impact the ability to upgrade an EKS cluster to newer versions of Kubernetes. These findings are curated and managed by Amazon EKS and offer recommendations on how to remediate them.
By leveraging Cluster Insights, you can minimize the effort spent upgrading to newer Kubernetes versions.

To view the insights of an EKS cluster, you can run the following command:
```
aws eks list-insights --region <region-code> --cluster-name <my-cluster>

{
    "insights": [
        {
            "category": "UPGRADE_READINESS", 
            "name": "Deprecated APIs removed in Kubernetes v1.29", 
            "insightStatus": {
                "status": "PASSING", 
                "reason": "No deprecated API usage detected within the last 30 days."
            }, 
            "kubernetesVersion": "1.29", 
            "lastTransitionTime": 1698774710.0, 
            "lastRefreshTime": 1700157422.0, 
            "id": "123e4567-e89b-42d3-a456-579642341238", 
            "description": "Checks for usage of deprecated APIs that are scheduled for removal in Kubernetes v1.29. Upgrading your cluster before migrating to the updated APIs supported by v1.29 could cause application impact."
        }
    ]
}
```

To receive more detailed output about an insight, you can run the following command:
```
aws eks describe-insight --region <region-code> --id <insight-id> --cluster-name <my-cluster>
```

You also have the option of viewing insights in the [Amazon EKS Console](https://console.aws.amazon.com/eks/home#/clusters). After selecting your cluster from the cluster list, the insight findings are located under the ```Upgrade Insights``` tab.

If you find a cluster insight with `"status": ERROR`, you must address the issue prior to performing the cluster upgrade.
Running the `aws eks describe-insight` command will share the following remediation advice:

Resources affected:
```
"resources": [
      {
        "insightStatus": {
          "status": "ERROR"
        },
        "kubernetesResourceUri": "/apis/policy/v1beta1/podsecuritypolicies/null"
      }
]
```

Deprecated APIs:
```
"deprecationDetails": [
      {
        "usage": "/apis/flowcontrol.apiserver.k8s.io/v1beta2/flowschemas", 
        "replacedWith": "/apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas", 
        "stopServingVersion": "1.29", 
        "clientStats": [], 
        "startServingReplacementVersion": "1.26"
      }
]
```

Recommended action to take:
```
"recommendation": "Update manifests and API clients to use newer Kubernetes APIs if applicable before upgrading to Kubernetes v1.26."
```

Utilizing cluster insights through the EKS Console or CLI helps speed the process of successfully upgrading EKS cluster versions.
Learn more with the following resources:
* [Official EKS Docs](https://docs.aws.amazon.com/eks/latest/userguide/cluster-insights.html)
* [Cluster Insights launch blog](https://aws.amazon.com/blogs/containers/accelerate-the-testing-and-verification-of-amazon-eks-upgrades-with-upgrade-insights/).

### Kube-no-Trouble

[Kube-no-Trouble](https://github.com/doitintl/kube-no-trouble) is an open source command line utility with the command `kubent`. When you run `kubent` without any arguments, it will use your current KubeConfig context, scan the cluster, and print a report of the deprecated and removed APIs in use.

```
kubent

4:17PM INF >>> Kube No Trouble `kubent` <<<
4:17PM INF version 0.7.0 (git sha d1bb4e5fd6550b533b2013671aa8419d923ee042)
4:17PM INF Initializing collectors and retrieving data
4:17PM INF Target K8s version is 1.24.8-eks-ffeb93d
4:17PM INF Retrieved 93 resources from collector name=Cluster
4:17PM INF Retrieved 16 resources from collector name="Helm v3"
4:17PM INF Loaded ruleset name=custom.rego.tmpl
4:17PM INF Loaded ruleset name=deprecated-1-16.rego
4:17PM INF Loaded ruleset name=deprecated-1-22.rego
4:17PM INF Loaded ruleset name=deprecated-1-25.rego
4:17PM INF Loaded ruleset name=deprecated-1-26.rego
4:17PM INF Loaded ruleset name=deprecated-future.rego
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.25 <<<
------------------------------------------------------------------------------------------
KIND                NAMESPACE     NAME             API_VERSION      REPLACE_WITH (SINCE)
PodSecurityPolicy   <undefined>   eks.privileged   policy/v1beta1   <removed> (1.21.0)
```

It can also be used to scan static manifest files and helm packages. It is recommended to run `kubent` as part of a continuous integration (CI) process to identify issues before manifests are deployed. Scanning manifests is also more accurate than scanning live clusters.

Kube-no-Trouble provides a sample [Service Account and Role](https://github.com/doitintl/kube-no-trouble/blob/master/docs/k8s-sa-and-role-example.yaml) with the appropriate permissions for scanning a cluster.
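
A minimal CI sketch, assuming your rendered manifests live in a hypothetical `manifests/` directory; the `--exit-error` flag makes `kubent` exit non-zero when findings are detected, failing the pipeline:

```shell
# Scan only the static manifests: disable the live-cluster and Helm collectors,
# and fail the build if any deprecated/removed API usage is found.
# The manifests/ path is a placeholder -- point it at your rendered output.
kubent --cluster=false --helm3=false --exit-error -f manifests/*.yaml
```

Running the scan at CI time catches removed-API usage before it ever reaches the cluster, which is also where the results are most accurate.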


### Pluto

Another option is [Pluto](https://pluto.docs.fairwinds.com/), which is similar to `kubent` in that it supports scanning live clusters, manifest files, and Helm charts, and has a GitHub Action you can include in your CI process.

```
pluto detect-all-in-cluster

NAME             KIND                VERSION          REPLACEMENT   REMOVED   DEPRECATED   REPL AVAIL  
eks.privileged   PodSecurityPolicy   policy/v1beta1                 false     true         true
```

### Resources

To verify that your cluster does not use deprecated APIs before the upgrade, you should monitor:

* the `apiserver_requested_deprecated_apis` metric, available since Kubernetes v1.19:

```
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

apiserver_requested_deprecated_apis{group="policy",removed_release="1.25",resource="podsecuritypolicies",subresource="",version="v1beta1"} 1
```

* events in the [audit logs](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) with `k8s.io/deprecated` set to `true`:

```
CLUSTER="<cluster_name>"
# On macOS, use `date -v-30M "+%s"` for the start time below
QUERY_ID=$(aws logs start-query \
 --log-group-name /aws/eks/${CLUSTER}/cluster \
 --start-time $(date -u --date="-30 minutes" "+%s") \
 --end-time $(date "+%s") \
 --query-string 'fields @message | filter `annotations.k8s.io/deprecated`="true"' \
 --query queryId --output text)

echo "Query started (query id: $QUERY_ID), please hold ..." && sleep 5 # give it some time to query

aws logs get-query-results --query-id $QUERY_ID
```

If deprecated APIs are in use, the output will contain lines such as:

```
{
    "results": [
        [
            {
                "field": "@message",
                "value": "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"8f7883c6-b3d5-42d7-967a-1121c6f22f01\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/apis/policy/v1beta1/podsecuritypolicies?allowWatchBookmarks=true\\u0026resourceVersion=4131\\u0026timeout=9m19s\\u0026timeoutSeconds=559\\u0026watch=true\",\"verb\":\"watch\",\"user\":{\"username\":\"system:apiserver\",\"uid\":\"8aabfade-da52-47da-83b4-46b16cab30fa\",\"groups\":[\"system:masters\"]},\"sourceIPs\":[\"::1\"],\"userAgent\":\"kube-apiserver/v1.24.16 (linux/amd64) kubernetes/af930c1\",\"objectRef\":{\"resource\":\"podsecuritypolicies\",\"apiGroup\":\"policy\",\"apiVersion\":\"v1beta1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-10-04T12:36:11.849075Z\",\"stageTimestamp\":\"2023-10-04T12:45:30.850483Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\",\"k8s.io/deprecated\":\"true\",\"k8s.io/removed-release\":\"1.25\"}}"
            },
[...]
```

## Update Kubernetes workloads. Use kubectl-convert to update manifests

After identifying which workloads and manifests need to be updated, you may need to change the resource type in your manifest files (for example, from PodSecurityPolicy to Pod Security Standards). This requires updating the resource specification and additional research, depending on the resource being replaced.

If the resource type stays the same but the API version needs to be updated, you can use the `kubectl-convert` command to automatically convert your manifest files. An example is converting an older Deployment to `apps/v1`.
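As an illustrative sketch (the file name, image, and starting API version are hypothetical), a manifest like the following still targets a deprecated API version:

```yaml
# deployment.yaml: a Deployment pinned to a deprecated API version
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
```

Running `kubectl-convert -f deployment.yaml --output-version apps/v1` rewrites the `apiVersion` to `apps/v1`. Review the output before applying it, since fields that are required in the newer schema (such as `spec.selector` in `apps/v1`) may still need to be filled in manually.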
For more information, see [Install kubectl convert plugin](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-convert-plugin) on the Kubernetes website.

`kubectl-convert -f <file> --output-version <group>/<version>`

## Configure PodDisruptionBudgets and topologySpreadConstraints to ensure availability of your workloads while the data plane is upgraded

To ensure availability of your workloads while the data plane is upgraded, make sure your workloads have the appropriate [PodDisruptionBudgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) and [topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints). Not every workload requires the same level of availability, so you need to validate the scale and requirements of your workload.

Making sure workloads are spread across multiple Availability Zones and onto multiple hosts with topology spreads gives you a higher level of confidence that workloads will migrate to the new data plane automatically and without incident.

Here is an example of a workload that always keeps 80% of its replicas available and spreads its replicas across zones and hosts:

```
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp
spec:
  minAvailable: "80%"
  selector:
    matchLabels:
      app: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 10
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
        name: myapp
        resources:
          requests:
            cpu: "1"
            memory: 256M
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app: myapp
        maxSkew: 2
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchLabels:
            app: myapp
        maxSkew: 2
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
```

[AWS Resilience Hub](https://aws.amazon.com/resilience-hub/) has added Amazon EKS as a supported resource.
Resilience Hub provides a single place to define, validate, and track the resilience of your applications, so that you can avoid unnecessary downtime caused by software, infrastructure, or operational disruptions.

## Use Managed Node Groups or Karpenter to simplify data plane upgrades

Managed Node Groups and Karpenter both simplify node upgrades, but they take different approaches.

Managed node groups automate the provisioning and lifecycle management of nodes. This means that you can create, automatically update, or terminate nodes with a single operation.

In the default configuration, Karpenter automatically creates new nodes using the latest compatible EKS Optimized AMI.
As soon as EKS releases an updated EKS Optimized AMI or the cluster is upgraded, Karpenter automatically starts using those images. [Karpenter also implements node expiry to update nodes.](#enable-node-expiry-for-karpenter-managed-nodes)

[Karpenter can be configured to use custom AMIs.](https://karpenter.sh/docs/concepts/node-templates/) If you use custom AMIs with Karpenter, you are responsible for the kubelet version.

## Confirm version compatibility with existing nodes and the control plane

Before proceeding with a Kubernetes upgrade in Amazon EKS, it is important to ensure compatibility between your managed node groups, self-managed nodes, and the control plane. Compatibility is determined by the Kubernetes version you are using, and it varies based on different scenarios. A breakdown:

* **Kubernetes v1.28+**: Starting with Kubernetes version 1.28, there is a more lenient version policy for core components.
Specifically, the supported skew between the Kubernetes API server and the kubelet has been extended by one minor version, from n-2 to n-3. For example, if your EKS control plane version is 1.28, you can safely use kubelet versions as old as 1.25. This version skew is supported with [AWS Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html), [managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html), and [self-managed nodes](https://docs.aws.amazon.com/eks/latest/userguide/worker.html). We highly recommend keeping your [Amazon Machine Image (AMI)](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-amis.html) versions up to date for security reasons. Older kubelet versions can pose security risks due to potential Common Vulnerabilities and Exposures (CVEs), which could outweigh the benefits of running them.
* **Kubernetes < v1.28**: If you are using a version older than v1.28, the supported skew between the API server and the kubelet is n-2.
For example, if your EKS version is 1.27, the oldest kubelet version you can use is 1.25. This version skew applies to [AWS Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html), [managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html), and [self-managed nodes](https://docs.aws.amazon.com/eks/latest/userguide/worker.html).

## Enable node expiry for Karpenter managed nodes

One way Karpenter implements node upgrades is through the concept of node expiry, which reduces the planning required for node upgrades. Setting a value for **ttlSecondsUntilExpired** in your provisioner activates node expiry. Once nodes reach the defined age in seconds, they are safely drained and deleted. This is true even if they are in use, which lets you replace nodes with newly provisioned, upgraded instances. When a node is replaced, Karpenter uses the latest EKS Optimized AMI.
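A minimal sketch of a provisioner with node expiry enabled (the 30-day TTL and the requirement are illustrative; this uses the `karpenter.sh/v1alpha5` Provisioner API that matches the Karpenter docs linked in this section, so verify against your installed version):

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  # Nodes are drained and deleted once they are 30 days old,
  # picking up the latest EKS Optimized AMI on replacement.
  ttlSecondsUntilExpired: 2592000
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
```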
For more information, see [Deprovisioning](https://karpenter.sh/docs/concepts/deprovisioning/#methods) on the Karpenter website.

Karpenter does not automatically add jitter to this value. To prevent excessive workload disruption, define a [Pod Disruption Budget (PDB)](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) as described in the Kubernetes documentation.

If you set a value for **ttlSecondsUntilExpired** on a provisioner, it applies to the existing nodes associated with that provisioner.

## Use Drift feature for Karpenter managed nodes

[Karpenter's Drift feature](https://karpenter.sh/docs/concepts/deprovisioning/#drift) can automatically upgrade Karpenter-provisioned nodes to stay in sync with the EKS control plane. Karpenter Drift currently needs to be enabled using a [feature gate](https://karpenter.sh/docs/concepts/settings/#feature-gates).
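As a sketch of enabling the gate (the `karpenter-global-settings` ConfigMap and the `featureGates.driftEnabled` key follow the Karpenter settings documentation for the releases that gated Drift; verify the exact mechanism for your Karpenter version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: karpenter-global-settings
  namespace: karpenter
data:
  # Opt in to the Drift feature gate
  featureGates.driftEnabled: "true"
```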
Karpenter's default configuration uses the latest EKS Optimized AMI for the same major and minor version as the EKS cluster's control plane.

After an EKS cluster upgrade completes, Karpenter's Drift feature detects that the Karpenter-provisioned nodes are using EKS Optimized AMIs for the previous cluster version, and automatically cordons, drains, and replaces those nodes. To support pods moving to new nodes, follow Kubernetes best practices by setting appropriate pod [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) and using [Pod Disruption Budgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) (PDBs).
Karpenter's deprovisioning spins up replacement nodes in advance based on the pod resource requests, and respects PDBs when deprovisioning nodes.

## Use eksctl to automate upgrades for self-managed node groups

Self-managed node groups are EC2 instances that were deployed in your account and attached to the cluster outside of the EKS service. They are usually deployed and managed with some form of automation tooling. To upgrade self-managed node groups, refer to your tool's documentation.

For example, eksctl supports [deleting and draining self-managed nodes](https://eksctl.io/usage/managing-nodegroups/#deleting-and-draining).

Some common tools include:

* [eksctl](https://eksctl.io/usage/nodegroup-upgrade/)
* [kOps](https://kops.sigs.k8s.io/operations/updates_and_upgrades/)
* [EKS Blueprints](https://aws-ia.github.io/terraform-aws-eks-blueprints/node-groups/#self-managed-node-groups)

## Back up the cluster before upgrading

New versions of Kubernetes introduce significant changes to your Amazon EKS cluster.
After you upgrade a cluster, you cannot downgrade it.

[Velero](https://velero.io/) is a community-supported open-source tool that can be used to take backups of existing clusters and apply them to a new cluster.

Note that you can only create new clusters for Kubernetes versions currently supported by EKS. If the version your cluster is currently running is still supported and an upgrade fails, you can create a new cluster with the original version and restore the data plane. Note that AWS resources, including IAM, are not included in the Velero backup. These resources would need to be recreated.

## Restart Fargate deployments after upgrading the control plane

To upgrade Fargate data plane nodes, you need to redeploy the workloads.
You can identify which workloads are running on Fargate nodes by listing all pods with the `-o wide` option. Any workload whose node name begins with `fargate-` will need to be redeployed in the cluster.

## Evaluate Blue/Green clusters as an alternative to in-place cluster upgrades

Some customers prefer a blue/green upgrade strategy. This can have benefits, but it also has downsides that should be considered.

Benefits include:

* Multiple EKS versions can be changed at once (e.g. 1.23 to 1.25)
* It is possible to switch back to the old cluster
* It creates a new cluster, which may be managed with newer systems (e.g. Terraform)
* Workloads can be migrated individually

Some downsides include:

* The API endpoint and OIDC provider change, which requires updating consumers (e.g. kubectl and CI/CD)
* Two clusters must run in parallel during the migration, which can be expensive and can limit region capacity
* More coordination is needed if workloads depend on each other and must be migrated together
* Load balancers and external DNS cannot easily span multiple clusters

While this strategy is possible, it is more expensive than an in-place upgrade and requires more time for coordination and workload migration. It may be required in some situations and should be planned carefully.

With high degrees of automation and declarative systems such as GitOps, this may be easier to do. You will need to take additional precautions for stateful workloads so that data is backed up and migrated to the new cluster.

Review these blog posts for more information:

* [Kubernetes cluster upgrade: the blue-green deployment strategy](https://aws.amazon.com/blogs/containers/kubernetes-cluster-upgrade-the-blue-green-deployment-strategy/)
* [Blue/Green or Canary Amazon EKS clusters migration for stateless ArgoCD workloads](https://aws.amazon.com/blogs/containers/blue-green-or-canary-amazon-eks-clusters-migration-for-stateless-argocd-workloads/)

## Track planned major changes in the Kubernetes project - Think ahead

Do not look only at the next Kubernetes version. Review new versions of Kubernetes as they are released and identify major changes. For example, some applications directly used the Docker API, and support for the Container Runtime Interface (CRI) for Docker (also known as Dockershim) was removed in Kubernetes `1.24`. This kind of change requires more time to prepare for.

Review all documented changes for the version you are upgrading to, and note any required upgrade steps. Also note any requirements or procedures specific to Amazon EKS managed clusters.

* [Kubernetes Change Log](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG)

## Specific Guidance on Feature Removals

### Removal of Dockershim in 1.25 - Use Detector for Docker Socket (DDS)

The EKS Optimized AMI for 1.25 no longer includes support for Dockershim. If you have a dependency on Dockershim (for example, mounting the Docker socket), you need to remove those dependencies before upgrading your worker nodes to 1.25.

Find instances where you have a dependency on the Docker socket before upgrading to 1.25. We recommend using [Detector for Docker Socket (DDS)](https://github.com/aws-containers/kubectl-detector-for-docker-socket), a kubectl plugin.
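Alongside DDS, a plain `grep` over the manifests you keep in source control can surface Docker socket mounts. A self-contained sketch (the `manifests/` directory and the example pod are illustrative fixtures):

```shell
# Create an illustrative manifest that mounts the Docker socket,
# the kind of Dockershim dependency that breaks on 1.25.
mkdir -p manifests
cat > manifests/legacy-agent.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: legacy-agent
spec:
  containers:
  - name: agent
    image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
EOF

# List every manifest file that references the Docker socket.
grep -rl "docker.sock" manifests
```

In a real repository you would run the `grep` against your own manifest directory and then remove or rework each dependency it reports.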

### Removal of PodSecurityPolicy in 1.25 - Migrate to Pod Security Standards (PSS) or a policy-as-code (PaC) solution

`PodSecurityPolicy` was [deprecated in Kubernetes 1.21](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) and has been removed in Kubernetes 1.25. If you are using PodSecurityPolicies in your cluster, you must migrate to the built-in Kubernetes Pod Security Standards (PSS) or to a policy-as-code (PaC) solution before upgrading your cluster to version 1.25, to avoid interruptions to your workloads.

AWS published a [detailed FAQ in the EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/pod-security-policy-removal-faq.html).

Review the [Pod Security Standards (PSS) and Pod Security Admission (PSA)](https://aws.github.io/aws-eks-best-practices/security/docs/pods/#pod-security-standards-pss-and-pod-security-admission-psa) best practices.
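As a sketch of the admission side of that migration (the namespace name is illustrative), Pod Security Standards are applied per namespace via Pod Security Admission labels:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
  labels:
    # Reject pods that violate the "restricted" Pod Security Standard
    pod-security.kubernetes.io/enforce: restricted
    # Also surface warnings and audit annotations at the same level
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```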

Review the [PodSecurityPolicy Deprecation blog post](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) on the Kubernetes website.

### Removal of in-tree storage drivers in 1.23 - Migrate to Container Storage Interface (CSI) drivers

The Container Storage Interface (CSI) was designed to help Kubernetes replace its existing, in-tree storage driver mechanisms. The Amazon EBS container storage interface (CSI) migration feature is enabled by default in Amazon EKS `1.23` and later clusters. If you have pods running on a version `1.22` or earlier cluster, you must install the [Amazon EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html) before updating your cluster to version `1.23` to avoid service interruption.
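Once the driver is installed, new volumes can target it explicitly through its provisioner name; a minimal StorageClass sketch (the class name and the `gp3` parameter are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
# The EBS CSI driver's provisioner, replacing the in-tree kubernetes.io/aws-ebs
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
```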

Review the [Amazon EBS CSI migration frequently asked questions](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi-migration-faq.html).

## Additional Resources

### ClowdHaus EKS Upgrade Guidance

[ClowdHaus EKS Upgrade Guidance](https://clowdhaus.github.io/eksup/) is a CLI that helps with upgrading Amazon EKS clusters. It can analyze a cluster for potential issues to remediate prior to an upgrade.

### GoNoGo

[GoNoGo](https://github.com/FairwindsOps/GoNoGo) is an alpha-stage tool to determine the upgrade confidence of your cluster add-ons.
\n","site":"eks","answers_cleaned":"
---
search:
  exclude: true
---

# Best Practices for Cluster Upgrades

This guide covers planning and executing Amazon EKS cluster upgrades, including clusters that use Karpenter, Fargate, managed and self-managed nodes. EKS Anywhere, self-managed Kubernetes, AWS Outposts, and AWS Local Zones are out of scope.

Kubernetes defines a [version skew policy and supported versions](https://kubernetes.io/releases/version-skew-policy/#supported-versions). With EKS, AWS upgrades the control plane (API server, etcd, scheduler, controller manager) on your behalf; you remain responsible for the data plane: nodes (kubelet), add-ons, and workloads. Check node versions with `kubectl get nodes`.

Amazon EKS follows the upstream [deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/), so APIs removed upstream are removed in the matching EKS version. Before planning an upgrade, review:

* [Amazon EKS Kubernetes versions](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html)
* the upstream [Kubernetes CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG)
* [Updating an add-on](https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html#updating-an-add-on)
* [Control plane logs](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)

## Keep your cluster up to date with eksctl

[eksctl](https://eksctl.io/) can [upgrade an EKS cluster](https://eksctl.io/usage/cluster-upgrade/) end to end: the control plane, [managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html), and [EKS on Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html) profiles.

## Use the kubectl convert plugin

Install the [kubectl convert plugin](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-convert-plugin) to [convert manifests between API versions](https://kubernetes.io/docs/tasks/tools/included/kubectl-convert-overview/) when APIs you use are deprecated.

## Keep your cluster on a supported version

Amazon EKS releases new Kubernetes versions regularly; see the [EKS version FAQ](https://aws.amazon.com/eks/eks-version-faq/) and the [EKS version support policy](https://aws.amazon.com/eks/eks-version-support-policy/). Upgrade one minor version at a time, and plan ahead with the [Kubernetes release calendar](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-release-calendar).
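A rough sketch (pure shell, illustrative only) of the version skew rule mentioned above — kubelet may trail the API server by at most two minor versions under the classic "n-2" policy (newer releases extend this to n-3):

```shell
# skew_ok CONTROL_PLANE_VERSION NODE_VERSION
# Returns success when the node's minor version is 0-2 behind the control plane.
skew_ok() {
  cp_minor=${1#*.}     # "1.28" -> "28"
  node_minor=${2#*.}   # "1.26" -> "26"
  diff=$((cp_minor - node_minor))
  [ "$diff" -ge 0 ] && [ "$diff" -le 2 ]
}

skew_ok 1.28 1.26 && echo "1.26 nodes OK under a 1.28 control plane"
skew_ok 1.28 1.25 || echo "1.25 nodes are out of skew; upgrade nodes first"
```

This assumes simple `MAJOR.MINOR` version strings; real tooling should parse the full `kubectl get nodes` output instead.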
Upstream Kubernetes supports each minor version with patches for roughly 14 months (see [Kubernetes releases](https://kubernetes.io/releases/)), and Amazon EKS provides standard support for at least as long.

## How in-place cluster upgrades work

See [Updating an Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html). When you start an upgrade, AWS upgrades the control plane in place; Fargate pods pick up the new version only when recycled, because each Fargate pod runs on its own node. An upgrade touches things workloads depend on — the Kubernetes API, the cluster OIDC endpoint, ENIs, and DNS behavior — so validate each hop (for example 1.24 → 1.25) before continuing.

## Upgrade checklist

1. Use the EKS documentation to create an upgrade checklist.
2. Back up the cluster before upgrading.
3. Identify and remediate removed API usage before upgrading the control plane.
4. Track the version skew of nodes: ensure managed node groups and Fargate pods are on the same Kubernetes minor version as the control plane before upgrading, and never more than one minor version behind.
5. Upgrade the control plane with the console, AWS CLI, or IaC tooling ([update cluster](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html)).
6. Upgrade add-ons and components using the Kubernetes API.
7. Update [kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html).
8. [Update managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html).

Review the EKS documentation for the version you are moving to:

* [EKS 1.27](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.27)
* [EKS 1.26](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.26)
* [EKS 1.25](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.25)
* [EKS 1.24](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.24)
* [EKS 1.23](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.23)
* [EKS 1.22](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.22)

## Identify the components that talk to the Kubernetes API

Before upgrading, list the workloads and add-ons that depend on the Kubernetes API: cluster drivers such as the [EBS CSI](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html) and [EFS CSI](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html) drivers, controllers, and anything calling the API directly.

!!! tip
    Critical cluster components are often installed in `*-system` namespaces:

    ```
    kubectl get ns | grep "-system"
    ```

Confirm each component is compatible with the target version — for example, check the [AWS Load Balancer Controller installation docs](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/deploy/installation/). Common components include [CoreDNS](https://github.com/coredns/coredns), [kube-proxy](https://kubernetes.io/docs/concepts/overview/components/#kube-proxy), and the [VPC CNI](https://github.com/aws/amazon-vpc-cni-k8s). Before upgrading the control plane, verify minimum versions for:

* Amazon VPC CNI — see [managing the Amazon VPC CNI](https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html); upgrade it at most one minor version at a time
* kube-proxy — see [managing kube-proxy](https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html)
* CoreDNS — see [managing CoreDNS](https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html)
* AWS Load Balancer Controller — see [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html)
* Amazon Elastic Block Store (EBS) CSI driver — see [managing the Amazon EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html)
* Amazon Elastic File System (Amazon EFS) CSI driver — see [Amazon EFS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html)
* [metrics-server](https://kubernetes-sigs.github.io/metrics-server/) — see its GitHub page
* Cluster Autoscaler — match the Cluster Autoscaler release to the Kubernetes minor version ([releases](https://github.com/kubernetes/autoscaler/releases))
* Karpenter — see [which versions of Kubernetes Karpenter supports](https://karpenter.sh/v0.27.3/faq/#which-versions-of-kubernetes-does-karpenter-support)
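A minimal sketch of the version comparison implied by the list above, using `sort -V`; the version numbers here are made-up examples, not real minimums for any EKS release:

```shell
# version_at_least INSTALLED REQUIRED
# Succeeds when INSTALLED >= REQUIRED under version ordering.
version_at_least() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

installed="v1.12.6"   # e.g. from `kubectl describe daemonset aws-node`
required="v1.12.0"    # hypothetical minimum for the target cluster version
version_at_least "$installed" "$required" && echo "vpc-cni ${installed} meets minimum ${required}"
```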
## Verify control plane prerequisites before upgrading

Before upgrading, verify that:

1. The subnets used by the cluster each have at least five available IP addresses.
2. The cluster's EKS IAM role still exists and can be assumed by EKS.
3. If secrets encryption is enabled, the AWS KMS key still exists and is accessible.

### Check available IP addresses

Amazon EKS needs up to five available IP addresses in the cluster subnets to upgrade. Check the subnets:

```bash
CLUSTER=<cluster name>
aws ec2 describe-subnets --subnet-ids \
  $(aws eks describe-cluster --name ${CLUSTER} \
  --query 'cluster.resourcesVpcConfig.subnetIds' \
  --output text) \
  --query 'Subnets[*].[SubnetId,AvailabilityZone,AvailableIpAddressCount]' \
  --output table

----------------------------------------------------
|                  DescribeSubnets                 |
+---------------------------+--------------+-------+
|  subnet-067fa8ee8476abbd6 |  us-east-1a  | 8184  |
|  subnet-0056f7403b17d2b43 |  us-east-1b  | 8153  |
|  subnet-09586f8fb3addbc8c |  us-east-1a  | 8120  |
|  subnet-047f3d276a22c6bce |  us-east-1b  | 8184  |
+---------------------------+--------------+-------+
```

The [VPC CNI Metrics Helper](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/cni-metrics-helper/README.md) can publish VPC IP usage metrics to CloudWatch. If subnets are running out of IPs, use the `UpdateClusterConfiguration` API to move the cluster to new subnets, or associate an additional CIDR with the VPC: Amazon EKS accepts RFC 1918 and certain other CIDR ranges, and the new CIDR must be associated with the cluster's VPC before subnets in it can be used.

### Verify the EKS IAM role

Confirm the cluster IAM role is still present with the expected trust policy:

```bash
CLUSTER=<cluster name>
ROLE_ARN=$(aws eks describe-cluster --name ${CLUSTER} \
  --query 'cluster.roleArn' --output text)
aws iam get-role --role-name ${ROLE_ARN##*/} \
  --query 'Role.AssumeRolePolicyDocument'

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "eks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

## Migrate to EKS add-ons

Amazon EKS automatically installs add-ons such as the Amazon VPC CNI, kube-proxy, and CoreDNS. EKS add-ons let you manage their lifecycle through the EKS API, for example:

```bash
aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni \
  --addon-version <version-number> \
  --service-account-role-arn arn:aws:iam::111122223333:role/role-name \
  --configuration-values '{}' --resolve-conflicts PRESERVE
```

Check whether the cluster already uses EKS add-ons:

```bash
aws eks list-addons --cluster-name <cluster name>
```

!!! warning
    EKS add-ons are not upgraded automatically when you upgrade the control plane. You must start add-on updates yourself and validate the add-ons afterwards.

See [EKS add-ons](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html) and [EKS add-ons advanced configuration](https://aws.amazon.com/blogs/containers/amazon-eks-add-ons-advanced-configuration/).
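EKS requires at least five free IP addresses in the cluster subnets; a quick way to flag subnets below that threshold from a captured `describe-subnets` listing (the subnet IDs and counts here are mock data, not a live call):

```shell
# Columns: subnet-id  availability-zone  available-ip-count (mock data)
subnets='subnet-067fa8ee8476abbd6 us-east-1a 8184
subnet-0056f7403b17d2b43 us-east-1b 3'

# Print any subnet whose free-IP count is below the five-IP minimum.
low=$(echo "$subnets" | awk '$3 < 5 { print $1 }')
[ -n "$low" ] && echo "too few free IPs in: $low"
```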
## Identify removed API usage before upgrading the control plane

Identify API usage that the target version removes; the upstream [deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/) describes how APIs are retired. Several tools help.

### EKS cluster insights

[Cluster insights](https://docs.aws.amazon.com/eks/latest/userguide/cluster-insights.html) surface issues that may affect your ability to upgrade, including deprecated API usage observed by the EKS control plane:

```bash
aws eks list-insights --region <region-code> --cluster-name my-cluster

{
    "insights": [
        {
            "category": "UPGRADE_READINESS",
            "name": "Deprecated APIs removed in Kubernetes v1.29",
            "insightStatus": {
                "status": "PASSING",
                "reason": "No deprecated API usage detected within the last 30 days."
            },
            "kubernetesVersion": "1.29",
            "lastTransitionTime": 1698774710.0,
            "lastRefreshTime": 1700157422.0,
            "id": "123e4567-e89b-42d3-a456-579642341238",
            "description": "Checks for usage of deprecated APIs that are scheduled for removal in Kubernetes v1.29. Upgrading your cluster before migrating to the updated APIs supported by v1.29 could cause application impact."
        }
    ]
}
```

For details on a specific insight:

```bash
aws eks describe-insight --region <region-code> --id <insight-id> --cluster-name my-cluster
```

Insights are also shown in the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters). An insight with `"status": "ERROR"` lists the offending resources, for example:

```bash
"resources": [
    {
        "insightStatus": {
            "status": "ERROR"
        },
        "kubernetesResourceUri": "/apis/policy/v1beta1/podsecuritypolicies/null"
    }
],
"deprecationDetails": [
    {
        "usage": "/apis/flowcontrol.apiserver.k8s.io/v1beta2/flowschemas",
        "replacedWith": "/apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas",
        "stopServingVersion": "1.29",
        "clientStats": [],
        "startServingReplacementVersion": "1.26"
    }
],
"recommendation": "Update manifests and API clients to use newer Kubernetes APIs if applicable before upgrading to Kubernetes v1.26."
```

See the [cluster insights documentation](https://docs.aws.amazon.com/eks/latest/userguide/cluster-insights.html) and [Accelerate the testing and verification of Amazon EKS upgrades with upgrade insights](https://aws.amazon.com/blogs/containers/accelerate-the-testing-and-verification-of-amazon-eks-upgrades-with-upgrade-insights/).

### Kube-no-trouble

[Kube-no-trouble](https://github.com/doitintl/kube-no-trouble) (`kubent`) scans the cluster in your current KubeConfig context for deprecated and removed APIs:

```bash
kubent

4:17PM INF >>> Kube No Trouble `kubent` <<<
4:17PM INF version 0.7.0 (git sha d1bb4e5fd6550b533b2013671aa8419d923ee042)
4:17PM INF Initializing collectors and retrieving data
4:17PM INF Target K8s version is 1.24.8-eks-ffeb93d
4:17PM INF Retrieved 93 resources from collector name=Cluster
4:17PM INF Retrieved 16 resources from collector name="Helm v3"
4:17PM INF Loaded ruleset name=custom.rego.tmpl
4:17PM INF Loaded ruleset name=deprecated-1-16.rego
4:17PM INF Loaded ruleset name=deprecated-1-22.rego
4:17PM INF Loaded ruleset name=deprecated-1-25.rego
4:17PM INF Loaded ruleset name=deprecated-1-26.rego
4:17PM INF Loaded ruleset name=deprecated-future.rego

>>> Deprecated APIs removed in 1.25 <<<
KIND                NAMESPACE     NAME             API_VERSION      REPLACE_WITH   SINCE
PodSecurityPolicy   <undefined>   eks.privileged   policy/v1beta1   <removed>      1.21.0
```
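In automation it can help to gate the pipeline on the `UPGRADE_READINESS` insight status returned by `aws eks list-insights`. A minimal sketch over captured output (inline mock JSON rather than a live call; real pipelines would typically parse with `jq`):

```shell
# Mock list-insights output; a real pipeline would capture the AWS CLI response.
insights='{"insights":[{"category":"UPGRADE_READINESS","insightStatus":{"status":"PASSING"}}]}'

# Proceed only when the upgrade-readiness insight reports PASSING.
if echo "$insights" | grep -q '"status":"PASSING"'; then
  echo "upgrade readiness: PASSING"
else
  echo "upgrade readiness: needs attention"
fi
```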
`kubent` can also run in CI pipelines with restricted credentials; see the [example ServiceAccount and role](https://github.com/doitintl/kube-no-trouble/blob/master/docs/k8s-sa-and-role-example.yaml) in the Kube-no-trouble repository.

### Pluto

[Pluto](https://pluto.docs.fairwinds.com/) similarly detects deprecated APIs in live clusters, manifests, and Helm charts, and offers a GitHub Action for CI, much like `kubent`:

```bash
pluto detect-all-in-cluster

NAME             KIND                VERSION          REPLACEMENT   REMOVED   DEPRECATED   REPL AVAIL
eks.privileged   PodSecurityPolicy   policy/v1beta1                 false     true         true
```

### Check cluster metrics

Since v1.19 the API server exposes the `apiserver_requested_deprecated_apis` metric, which records deprecated APIs that have actually been requested:

```bash
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

apiserver_requested_deprecated_apis{group="policy",removed_release="1.25",resource="podsecuritypolicies",subresource="",version="v1beta1"} 1
```

### Check audit logs

Requests to deprecated APIs are annotated with `k8s.io/deprecated: true` in the [control plane audit logs](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html). Query them with CloudWatch Logs Insights:

```bash
CLUSTER="<cluster name>"
QUERY_ID=$(aws logs start-query \
 --log-group-name /aws/eks/${CLUSTER}/cluster \
 --start-time $(date -u --date="-30 minutes" "+%s") \
 --end-time $(date "+%s") \
 --query-string 'fields @message | filter `annotations.k8s.io/deprecated`="true"' \
 --query queryId --output text)
# on MacOS, use `date -v-30M "+%s"` for the start time

echo "Query started (query id: $QUERY_ID), please hold ..." && sleep 5 # give it some time to query

aws logs get-query-results --query-id $QUERY_ID
```

Matching audit events look like the following (trimmed for readability):

```bash
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Request",
  "auditID": "8f7883c6-b3d5-42d7-967a-1121c6f22f01",
  "stage": "ResponseComplete",
  "requestURI": "/apis/policy/v1beta1/podsecuritypolicies?allowWatchBookmarks=true&resourceVersion=4131&timeout=9m19s&timeoutSeconds=559&watch=true",
  "verb": "watch",
  "user": { "username": "system:apiserver", "uid": "8aabfade-da52-47da-83b4-46b16cab30fa", "groups": ["system:masters"] },
  "sourceIPs": ["::1"],
  "userAgent": "kube-apiserver/v1.24.16 (linux/amd64) kubernetes/af930c1",
  "objectRef": { "resource": "podsecuritypolicies", "apiGroup": "policy", "apiVersion": "v1beta1" },
  "responseStatus": { "metadata": {}, "code": 200 },
  "requestReceivedTimestamp": "2023-10-04T12:36:11.849075Z",
  "stageTimestamp": "2023-10-04T12:45:30.850483Z",
  "annotations": {
    "authorization.k8s.io/decision": "allow",
    "authorization.k8s.io/reason": "",
    "k8s.io/deprecated": "true",
    "k8s.io/removed-release": "1.25"
  }
}
```

### Update manifests with kubectl convert

Once you know which manifests to update, `kubectl convert` rewrites them to a newer API version (for example, converting an older Deployment to `apps/v1`). Install it with the [kubectl convert plugin](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-convert-plugin) instructions, then:

```bash
kubectl convert -f <file> --output-version <group>/<version>
```

## Configure PodDisruptionBudgets and topologySpreadConstraints

Use [PodDisruptionBudgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) and [topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) to keep workloads available while the data plane is upgraded. For example, to keep 80% of replicas available and spread them across hosts and zones:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp
spec:
  minAvailable: "80%"
  selector:
    matchLabels:
      app: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 10
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
        name: myapp
        resources:
          requests:
            cpu: "1"
            memory: 256M
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app: host-zone-spread
        maxSkew: 2
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
      - labelSelector:
          matchLabels:
            app: host-zone-spread
        maxSkew: 2
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
```

[AWS Resilience Hub](https://aws.amazon.com/resilience-hub/) can assess the resilience of Amazon EKS applications, including these settings.

## Use managed node groups or Karpenter for the data plane

Managed node groups and Karpenter both simplify node upgrades. Managed node groups automate provisioning and lifecycle management of nodes with the latest EKS optimized AMIs; Karpenter automatically launches new nodes with a compatible AMI, and with node expiry enabled for Karpenter managed nodes, expired nodes are replaced with nodes running up-to-date AMIs. See [Karpenter node templates](https://karpenter.sh/docs/concepts/node-templates/) for how Karpenter selects AMIs.
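Karpenter's node-expiry setting is specified in seconds, so lifetimes are usually derived from a target in days; a trivial sketch (30 days is just an example, not a recommendation):

```shell
# Convert a desired node lifetime in days to seconds for Karpenter's
# ttlSecondsUntilExpired setting.
days=30
ttl=$((days * 24 * 60 * 60))
echo "ttlSecondsUntilExpired: $ttl"
```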
## Confirm version compatibility with existing nodes

Before upgrading the control plane, check the kubelet [version skew](https://kubernetes.io/releases/version-skew-policy/). From Kubernetes v1.28 the supported skew between the API server and kubelet was extended from n-2 to n-3 minor versions: an EKS 1.28 control plane supports kubelets as old as 1.25. Before v1.28, the n-2 rule applies: an EKS 1.27 control plane supports kubelets back to 1.25. This applies to [AWS Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html), [managed node groups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html), and [self-managed nodes](https://docs.aws.amazon.com/eks/latest/userguide/worker.html). Keep nodes on current [Amazon EKS optimized AMIs](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-amis.html) to pick up kubelet and CVE patches.

## Enable node expiry for Karpenter managed nodes

Karpenter can bound node lifetime with `ttlSecondsUntilExpired`: nodes past the configured age are drained and replaced, so they are regularly recreated from up-to-date EKS optimized AMIs. See [Karpenter deprovisioning methods](https://karpenter.sh/docs/concepts/deprovisioning/#methods); Karpenter respects [Pod Disruption Budgets (PDB)](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) while deprovisioning, so workloads stay available. Without `ttlSecondsUntilExpired`, Karpenter nodes are not replaced proactively.

## Use Karpenter drift for Karpenter managed nodes

[Karpenter drift](https://karpenter.sh/docs/concepts/deprovisioning/#drift) can automatically replace nodes whose configuration no longer matches the desired state — for example, after a control plane upgrade makes a newer EKS optimized AMI available. Drift is controlled by a [feature gate](https://karpenter.sh/docs/concepts/settings/#feature-gates); if the AMI is not pinned, drifted nodes are recreated on the latest matching EKS optimized AMI. [Resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) and [Pod Disruption Budgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) help keep workloads available while nodes are replaced.

## Use eksctl to automate upgrades for self-managed node groups

For self-managed nodes on EC2, eksctl can [delete and drain](https://eksctl.io/usage/managing-nodegroups/#deleting-and-draining) old node groups and create replacements on the target version. See also [upgrading node groups with eksctl](https://eksctl.io/usage/nodegroup-upgrade/), [kOps](https://kops.sigs.k8s.io/operations/updates_and_upgrades/), and [EKS Blueprints self-managed node groups](https://aws-ia.github.io/terraform-aws-eks-blueprints/node-groups/#self-managed-node-groups).

## Back up the cluster before upgrading

Amazon EKS control plane upgrades cannot be rolled back. Tools such as [Velero](https://velero.io/) can back up cluster state and restore workloads into a new EKS cluster if an upgrade goes wrong; note that Velero needs its own IAM permissions to reach AWS storage.

## Restart Fargate deployments after upgrading the control plane

Fargate pods only pick up the new Kubernetes version when they are recreated, because each Fargate pod runs on its own node. Check node versions with `kubectl get nodes -o wide` — Fargate node names begin with `fargate-` — and restart affected deployments after the upgrade.
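A small sketch of the Fargate check just described, run against a simplified, captured node listing (the node names and versions are mock data; a real check would parse full `kubectl get nodes -o wide` output):

```shell
# Columns: node-name  status  kubelet-version (mock data)
nodes='fargate-ip-10-0-1-12 Ready v1.24.17-eks-abc123
ip-10-0-2-33 Ready v1.25.11-eks-def456'

# After upgrading the control plane to 1.25, list Fargate nodes still on an
# older kubelet; their deployments need a restart to be recycled.
stale=$(echo "$nodes" | awk '$1 ~ /^fargate-/ && $3 !~ /^v1\.25\./ { print $1 }')
[ -n "$stale" ] && echo "restart deployments for: $stale"
```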
## Evaluate blue/green clusters as an alternative to in-place upgrades

Some teams prefer a blue/green strategy: build a new cluster on the target version — possibly jumping several versions at once, for example 1.23 to 1.25 — migrate workloads, then shift traffic. It avoids in-place risk, at a cost: the new cluster has a new API endpoint and OIDC provider, so kubectl contexts and CI/CD must be updated; two clusters run in parallel, roughly doubling cost during the migration; and traffic must be shifted via DNS or load balancers. GitOps makes replicating workloads to the new cluster much easier, and Terraform or other IaC helps recreate the environment. See:

* [Kubernetes cluster upgrade: the blue-green deployment strategy](https://aws.amazon.com/blogs/containers/kubernetes-cluster-upgrade-the-blue-green-deployment-strategy/)
* [Blue/green or canary Amazon EKS clusters migration for stateless ArgoCD workloads](https://aws.amazon.com/blogs/containers/blue-green-or-canary-amazon-eks-clusters-migration-for-stateless-argocd-workloads/)

## Track planned major changes

### Dockershim removal in 1.25 — use the Detector for Docker Socket (DDS)

EKS 1.24 removed Dockershim as the container runtime (upstream deprecated the Docker CRI; see the [Kubernetes CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG)), and from 1.25 the EKS optimized AMIs no longer include Dockershim. Workloads that mount the Docker socket or otherwise depend on Docker must be updated before moving to 1.25. The [Detector for Docker Socket (DDS)](https://github.com/aws-containers/kubectl-detector-for-docker-socket) kubectl plugin finds such workloads.

### PodSecurityPolicy removal in 1.25 — migrate to Pod Security Standards or a policy-as-code solution

PodSecurityPolicy (PSP) was [deprecated in 1.21](https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/) and removed in 1.25. Migrate PSP usage to Pod Security Standards (PSS) or a policy-as-code (PaC) solution before upgrading to 1.25. See the [AWS EKS PSP removal FAQ](https://docs.aws.amazon.com/eks/latest/userguide/pod-security-policy-removal-faq.html) and the best practices on [Pod Security Standards (PSS) and Pod Security Admission (PSA)](https://aws.github.io/aws-eks-best-practices/security/docs/pods/#pod-security-standards-pss-and-pod-security-admission-psa).

### In-tree storage driver deprecation in 1.23 — migrate to the CSI driver

The in-tree Amazon EBS storage driver is replaced by the out-of-tree CSI driver. Amazon EKS 1.23 enables the CSI migration feature, so install the [Amazon EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html) before upgrading from 1.22 to 1.23, and review the [Amazon EBS CSI migration FAQ](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi-migration-faq.html).

## ClowdHaus EKS Upgrade Guidance

[ClowdHaus EKS Upgrade Guidance](https://clowdhaus.github.io/eksup/) (`eksup`) is a CLI for analyzing Amazon EKS clusters ahead of upgrades.

## GoNoGo

[GoNoGo](https://github.com/FairwindsOps/GoNoGo) helps determine whether cluster add-ons are safe to upgrade.
"}
{"questions":"eks In tree Out of tree exclude true search","answers":"---
search:
  exclude: true
---

# Persistent storage options

## In-tree vs. out-of-tree volume plugins

Before the Container Storage Interface (CSI) was introduced, all volume plugins were in-tree: they were built, linked, compiled, and shipped with the core Kubernetes binaries and extended the core Kubernetes API. This meant that adding a new storage system (volume plugin) to Kubernetes required checking code into the core Kubernetes code repository.

Out-of-tree volume plugins are developed independently of the Kubernetes code base and are deployed (installed) on Kubernetes clusters as extensions. This lets vendors update drivers out-of-band from the Kubernetes release cycle. It is possible because Kubernetes created a storage interface, CSI, that gives vendors a standard way of interfacing with k8s.

You can read more about Amazon Elastic Kubernetes Services (EKS) storage classes and CSI drivers in the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/storage.html).

## In-tree volume plugin for Windows

Kubernetes volumes let applications with data persistence requirements run on Kubernetes. Persistent volume management consists of provisioning/de-provisioning/resizing volumes, attaching/detaching volumes to and from Kubernetes nodes, and mounting/unmounting volumes in a pod's individual containers. The code that implements these volume management actions for a specific storage backend or protocol ships as a Kubernetes volume plugin **(in-tree volume plugin)**. On Amazon EKS, the following class of Kubernetes volume plugin is supported on Windows:

*In-tree volume plugin:* [awsElasticBlockStore](https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore)

To use the in-tree volume plugin on Windows nodes, create an additional StorageClass that uses NTFS as the fsType. On EKS, the default StorageClass uses ext4 as the default fsType.

A StorageClass provides a way for administrators to describe the \"classes\" of storage they offer. Different classes might map to quality-of-service levels, backup policies, or arbitrary policies determined by the cluster administrators. Kubernetes is unopinionated about what classes represent. This concept is sometimes called \"profiles\" in other storage systems.

You can check this by running the following command:

```bash
kubectl describe storageclass gp2
```

Output:

```bash
Name:            gp2
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"name\":\"gp2\"},\"parameters\":{\"fsType\":\"ext4\",\"type\":\"gp2\"},\"provisioner\":\"kubernetes.io/aws-ebs\",\"volumeBindingMode\":\"WaitForFirstConsumer\"},storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/aws-ebs
Parameters:            fsType=ext4,type=gp2
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
```

To create a new StorageClass that supports **NTFS**, use the following manifest:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-windows
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ntfs
volumeBindingMode: WaitForFirstConsumer
```

Create the StorageClass by running the following command:

```bash
kubectl apply -f NTFSStorageClass.yaml
```

The next step is to create a PersistentVolumeClaim (PVC).

A persistent volume (PV) is
\ud504\ub85c\ube44\uc800\ub2dd\ud588\uac70\ub098 PVC\ub97c \uc0ac\uc6a9\ud558\uc5ec \ub3d9\uc801\uc73c\ub85c \ud504\ub85c\ube44\uc800\ub2dd\ub41c \ud074\ub7ec\uc2a4\ud130\uc758 \uc2a4\ud1a0\ub9ac\uc9c0\uc785\ub2c8\ub2e4. \ub178\ub4dc\uac00 \ud074\ub7ec\uc2a4\ud130 \ub9ac\uc18c\uc2a4\uc778 \uac83\ucc98\ub7fc \ud074\ub7ec\uc2a4\ud130\uc758 \ub9ac\uc18c\uc2a4\uc785\ub2c8\ub2e4. \uc774 API \uac1d\uccb4\ub294 NFS, iSCSI \ub610\ub294 \ud074\ub77c\uc6b0\ub4dc \uacf5\uae09\uc790\ubcc4 \uc2a4\ud1a0\ub9ac\uc9c0 \uc2dc\uc2a4\ud15c \ub4f1 \uc2a4\ud1a0\ub9ac\uc9c0 \uad6c\ud604\uc758 \uc138\ubd80 \uc815\ubcf4\ub97c \ucea1\ucc98\ud569\ub2c8\ub2e4.\n\nPVC\ub294 \uc0ac\uc6a9\uc790\uc758 \uc2a4\ud1a0\ub9ac\uc9c0 \uc694\uccad\uc785\ub2c8\ub2e4. \ud074\ub808\uc784\uc740 \ud2b9\uc815 \ud06c\uae30 \ubc0f \uc561\uc138\uc2a4 \ubaa8\ub4dc\ub97c \uc694\uccad\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4(\uc608: ReadWriteOnce, ReadOnlyMany \ub610\ub294 ReadWriteMany\ub85c \ub9c8\uc6b4\ud2b8\ub420 \uc218 \uc788\uc74c).\n\n\uc0ac\uc6a9\uc790\uc5d0\uac8c\ub294 \ub2e4\uc591\ud55c \uc720\uc988\ucf00\uc774\uc2a4\ub97c \uc704\ud574 \uc131\ub2a5\uacfc \uac19\uc740 \ub2e4\uc591\ud55c \uc18d\uc131\uc744 \uac00\uc9c4 PV\uac00 \ud544\uc694\ud569\ub2c8\ub2e4. \ud074\ub7ec\uc2a4\ud130 \uad00\ub9ac\uc790\ub294 \uc0ac\uc6a9\uc790\uc5d0\uac8c \ud574\ub2f9 \ubcfc\ub968\uc774 \uad6c\ud604\ub418\ub294 \ubc29\ubc95\uc5d0 \ub300\ud55c \uc138\ubd80 \uc815\ubcf4\ub97c \ub178\ucd9c\uc2dc\ud0a4\uc9c0 \uc54a\uace0\ub3c4 \ud06c\uae30 \ubc0f \uc561\uc138\uc2a4 \ubaa8\ub4dc\ubcf4\ub2e4 \ub354 \ub2e4\uc591\ud55c \ubc29\uc2dd\uc73c\ub85c PV\ub97c \uc81c\uacf5\ud560 \uc218 \uc788\uc5b4\uc57c \ud569\ub2c8\ub2e4. \uc774\ub7f0 \uc694\uad6c \uc0ac\ud56d\uc744 \ucda9\uc871\ud558\uae30 \uc704\ud574 StorageClass \ub9ac\uc18c\uc2a4\uac00 \uc788\uc2b5\ub2c8\ub2e4. 
\n\n\uc544\ub798 \uc608\uc81c\uc5d0\uc11c \uc708\ub3c4\uc6b0 \ub124\uc784\uc2a4\ud398\uc774\uc2a4 \ub0b4\uc5d0 PVC\uac00 \uc0dd\uc131\ub418\uc5c8\uc2b5\ub2c8\ub2e4.\n\n```yaml \napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: ebs-windows-pv-claim\n  namespace: windows\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: gp2-windows\n  resources: \n    requests:\n      storage: 1Gi\n```\n\n\ub2e4\uc74c \uba85\ub839\uc744 \uc2e4\ud589\ud558\uc5ec PVC\ub97c \ub9cc\ub4ed\ub2c8\ub2e4:\n\n```bash \nkubectl apply -f persistent-volume-claim.yaml\n```\n\n\ub2e4\uc74c \ub9e4\ub2c8\ud398\uc2a4\ud2b8\uc5d0\uc11c\ub294 \uc708\ub3c4\uc6b0 \ud30c\ub4dc\ub97c \uc0dd\uc131\ud558\uace0 VolumeMount\ub97c `C:\\Data`\ub85c \uc124\uc815\ud558\uace0 PVC\ub97c `C:\\Data`\uc5d0 \uc5f0\uacb0\ub41c \uc2a4\ud1a0\ub9ac\uc9c0\ub85c \uc0ac\uc6a9\ud569\ub2c8\ub2e4.\n\n```yaml \napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: windows-server-ltsc2019\n  namespace: windows\nspec:\n  selector:\n    matchLabels:\n      app: windows-server-ltsc2019\n      tier: backend\n      track: stable\n  replicas: 1\n  template:\n    metadata:\n      labels:\n        app: windows-server-ltsc2019\n        tier: backend\n        track: stable\n    spec:\n      containers:\n      - name: windows-server-ltsc2019\n        image: mcr.microsoft.com\/windows\/servercore:ltsc2019\n        ports:\n        - name: http\n          containerPort: 80\n        imagePullPolicy: IfNotPresent\n        volumeMounts:\n        - mountPath: \"C:\\\\data\"\n          name: test-volume\n      volumes:\n        - name: test-volume\n          persistentVolumeClaim:\n            claimName: ebs-windows-pv-claim\n      nodeSelector:\n        kubernetes.io\/os: windows\n        node.kubernetes.io\/windows-build: '10.0.17763'\n```\n\nPowerShell\uc744 \ud1b5\ud574 \uc708\ub3c4\uc6b0 \ud30c\ub4dc\uc5d0 \uc561\uc138\uc2a4\ud558\uc5ec \uacb0\uacfc\ub97c \ud14c\uc2a4\ud2b8\ud569\ub2c8\ub2e4:\n\n```bash 
\nkubectl exec -it podname powershell -n windows\n```\n\n\uc708\ub3c4\uc6b0 \ud30c\ub4dc \ub0b4\uc5d0\uc11c `ls`\ub97c \uc218\ud589\ud569\ub2c8\ub2e4:\n\n\ucd9c\ub825:\n\n```bash \nPS C:\\> ls\n\n\n    Directory: C:\\\n\n\nMode                 LastWriteTime         Length Name\n----                 -------------         ------ ----\nd-----          3\/8\/2021   1:54 PM                data\nd-----          3\/8\/2021   3:37 PM                inetpub\nd-r---          1\/9\/2021   7:26 AM                Program Files\nd-----          1\/9\/2021   7:18 AM                Program Files (x86)\nd-r---          1\/9\/2021   7:28 AM                Users\nd-----          3\/8\/2021   3:36 PM                var\nd-----          3\/8\/2021   3:36 PM                Windows\n-a----         12\/7\/2019   4:20 AM           5510 License.txt\n```\n\n**data directory** \ub294 EBS \ubcfc\ub968\uc5d0 \uc758\ud574 \uc81c\uacf5\ub429\ub2c8\ub2e4.\n\n## \uc708\ub3c4\uc6b0 Out-of-tree\nCSI \ud50c\ub7ec\uadf8\uc778\uacfc \uc5f0\uacb0\ub41c \ucf54\ub4dc\ub294 \uc77c\ubc18\uc801\uc73c\ub85c \ucee8\ud14c\uc774\ub108 \uc774\ubbf8\uc9c0\ub85c \ubc30\ud3ec\ub418\uace0 \ub370\ubaac\uc14b(DaemonSet) \ubc0f \uc2a4\ud14c\uc774\ud2b8\ud480\uc14b(StatefulSet)\uacfc \uac19\uc740 \ud45c\uc900 \ucfe0\ubc84\ub124\ud2f0\uc2a4 \uad6c\uc131\uc744 \uc0ac\uc6a9\ud558\uc5ec \ubc30\ud3ec\ub418\ub294 out-of-tree \uc2a4\ud06c\ub9bd\ud2b8 \ubc0f \ubc14\uc774\ub108\ub9ac\ub85c \uc81c\uacf5\ub429\ub2c8\ub2e4. CSI \ud50c\ub7ec\uadf8\uc778\uc740 \ucfe0\ubc84\ub124\ud2f0\uc2a4\uc5d0\uc11c \uad11\ubc94\uc704\ud55c \ubcfc\ub968 \uad00\ub9ac \uc791\uc5c5\uc744 \ucc98\ub9ac\ud569\ub2c8\ub2e4. 
CSI \ud50c\ub7ec\uadf8\uc778\uc740 \uc77c\ubc18\uc801\uc73c\ub85c \ub178\ub4dc \ud50c\ub7ec\uadf8\uc778(\uac01 \ub178\ub4dc\uc5d0\uc11c \ub370\ubaac\uc14b\uc73c\ub85c \uc2e4\ud589\ub428)\uacfc \ucee8\ud2b8\ub864\ub7ec \ud50c\ub7ec\uadf8\uc778\uc73c\ub85c \uad6c\uc131\ub429\ub2c8\ub2e4.\n\nCSI \ub178\ub4dc \ud50c\ub7ec\uadf8\uc778 (\ud2b9\ud788 \ube14\ub85d \uc7a5\uce58 \ub610\ub294 \uacf5\uc720 \ud30c\uc77c \uc2dc\uc2a4\ud15c\uc744 \ud1b5\ud574 \ub178\ucd9c\ub418\ub294 \ud37c\uc2dc\uc2a4\ud134\ud2b8 \ubcfc\ub968\uacfc \uad00\ub828\ub41c \ud50c\ub7ec\uadf8\uc778)\uc740 \ub514\uc2a4\ud06c \uc7a5\uce58 \uc2a4\uce94, \ud30c\uc77c \uc2dc\uc2a4\ud15c \ud0d1\uc7ac \ub4f1\uacfc \uac19\uc740 \ub2e4\uc591\ud55c \uad8c\ud55c \uc791\uc5c5\uc744 \uc218\ud589\ud574\uc57c \ud569\ub2c8\ub2e4. \uc774\ub7f0 \uc791\uc5c5\uc740 \ud638\uc2a4\ud2b8 \uc6b4\uc601 \uccb4\uc81c\ub9c8\ub2e4 \ub2e4\ub985\ub2c8\ub2e4. \ub9ac\ub205\uc2a4 \uc6cc\ucee4 \ub178\ub4dc\uc758 \uacbd\uc6b0 \ucee8\ud14c\uc774\ub108\ud654\ub41c CSI \ub178\ub4dc \ud50c\ub7ec\uadf8\uc778\uc740 \uc77c\ubc18\uc801\uc73c\ub85c \uad8c\ud55c \uc788\ub294 \ucee8\ud14c\uc774\ub108\ub85c \ubc30\ud3ec\ub429\ub2c8\ub2e4. \uc708\ub3c4\uc6b0 \uc6cc\ucee4 \ub178\ub4dc\uc758 \uacbd\uc6b0 \uac01 \uc708\ub3c4\uc6b0 \ub178\ub4dc\uc5d0 \uc0ac\uc804 \uc124\uce58\ud574\uc57c \ud558\ub294 \ucee4\ubba4\ub2c8\ud2f0 \uad00\ub9ac \ub3c5\ub9bd \uc2e4\ud589\ud615 \ubc14\uc774\ub108\ub9ac\uc778 [csi-proxy](https:\/\/github.com\/kubernetes-csi\/csi-proxy) \ub97c \uc0ac\uc6a9\ud558\uc5ec \ucee8\ud14c\uc774\ub108\ud654\ub41c CSI \ub178\ub4dc \ud50c\ub7ec\uadf8\uc778\uc5d0 \ub300\ud55c \uad8c\ud55c \uc788\ub294 \uc791\uc5c5\uc774 \uc9c0\uc6d0\ub429\ub2c8\ub2e4. \n\n[Amazon EKS \ucd5c\uc801\ud654 \uc708\ub3c4\uc6b0 AMI](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/eks-optimized-windows-ami.html)\uc5d0\uc11c\ub294 2022\ub144 4\uc6d4\ubd80\ud130 CSI-proxy\uac00 \ud3ec\ud568\ub429\ub2c8\ub2e4. 
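\n\n\ucc38\uace0\ub85c, \ud074\ub7ec\uc2a4\ud130\uc758 \uac01 \ub178\ub4dc\uc5d0 \ub4f1\ub85d\ub41c CSI \ub4dc\ub77c\uc774\ubc84\ub294 \ub2e4\uc74c \uba85\ub839\uc744 \ud1b5\ud574 \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4:\n\n```bash\n# \ub178\ub4dc\ubcc4\ub85c \ub4f1\ub85d\ub41c CSI \ub4dc\ub77c\uc774\ubc84 \ubaa9\ub85d \ud655\uc778\nkubectl get csinodes\n```\n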
\uace0\uac1d\uc740 \uc708\ub3c4\uc6b0 \ub178\ub4dc\uc758 [SMB CSI \ub4dc\ub77c\uc774\ubc84](https:\/\/github.com\/kubernetes-csi\/csi-driver-smb)\ub97c \uc0ac\uc6a9\ud558\uc5ec [Amazon FSx for Windows File Server](https:\/\/aws.amazon.com\/fsx\/windows\/), [Amazon FSx for NetApp ONTAP SMB Shares](https:\/\/aws.amazon.com\/fsx\/netapp-ontap\/), \ubc0f\/\ub610\ub294 [AWS Storage Gateway \u2013 File Gateway](https:\/\/aws.amazon.com\/storagegateway\/file\/)\uc5d0 \uc561\uc138\uc2a4 \ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\ub2e4\uc74c [\ube14\ub85c\uadf8](https:\/\/aws.amazon.com\/blogs\/modernizing-with-aws\/using-smb-csi-driver-on-amazon-eks-windows-nodes\/)\uc5d0\uc11c\ub294 Amazon FSx for Windows File Server\ub97c \uc708\ub3c4\uc6b0 \ud30c\ub4dc\uc6a9 \uc601\uad6c \uc2a4\ud1a0\ub9ac\uc9c0\ub85c \uc0ac\uc6a9\ud558\ub3c4\ub85d SMB CSI \ub4dc\ub77c\uc774\ubc84\ub97c \uc124\uc815\ud558\ub294 \ubc29\ubc95\uc5d0 \ub300\ud55c \uc138\ubd80 \uc815\ubcf4\uac00 \ub098\uc640 \uc788\uc2b5\ub2c8\ub2e4.\n\n## Amazon FSx for Windows File Server\n\ud55c \uac00\uc9c0 \uc635\uc158\uc740 [SMB Global Mapping](https:\/\/docs.microsoft.com\/en-us\/virtualization\/windowscontainers\/manage-containers\/persistent-storage)\uc774\ub77c\ub294 SMB \uae30\ub2a5\uc744 \ud1b5\ud574 Windows \ud30c\uc77c \uc11c\ubc84\uc6a9 Amazon FSx\ub97c \uc0ac\uc6a9\ud558\ub294 \uac83\uc785\ub2c8\ub2e4. \uc774 \uae30\ub2a5\uc744 \uc0ac\uc6a9\ud558\uba74 \ud638\uc2a4\ud2b8\uc5d0 SMB \uacf5\uc720\ub97c \ub9c8\uc6b4\ud2b8\ud55c \ub2e4\uc74c \ud574\ub2f9 \uacf5\uc720\uc758 \ub514\ub809\ud130\ub9ac\ub97c \ucee8\ud14c\uc774\ub108\ub85c \uc804\ub2ec\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ucee8\ud14c\uc774\ub108\ub97c \ud2b9\uc815 \uc11c\ubc84, \uacf5\uc720, \uc0ac\uc6a9\uc790 \uc774\ub984 \ub610\ub294 \uc554\ud638\ub85c \uad6c\uc131\ud560 \ud544\uc694\uac00 \uc5c6\uc2b5\ub2c8\ub2e4. \ub300\uc2e0 \ud638\uc2a4\ud2b8\uc5d0\uc11c \ubaa8\ub450 \ucc98\ub9ac\ub429\ub2c8\ub2e4. 
\ucee8\ud14c\uc774\ub108\ub294 \ub85c\uceec \uc2a4\ud1a0\ub9ac\uc9c0\uac00 \uc788\ub294 \uac83\ucc98\ub7fc \uc791\ub3d9\ud569\ub2c8\ub2e4.\n\n> SMB Global Mapping\uc740 \uc624\ucf00\uc2a4\ud2b8\ub808\uc774\ud130\uc5d0\uac8c \ud22c\uba85\ud558\uac8c \uc804\ub2ec\ub418\uba70 HostPath\ub97c \ud1b5\ud574 \ub9c8\uc6b4\ud2b8\ub418\ubbc0\ub85c **\ubcf4\uc548 \ubb38\uc81c\uac00 \ubc1c\uc0dd\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4**.\n\n\uc544\ub798 \uc608\uc81c\uc5d0\uc11c, `G:\\Directory\\app-state` \uacbd\ub85c\ub294 \uc708\ub3c4\uc6b0 \ub178\ub4dc\uc758 SMB \uacf5\uc720\uc785\ub2c8\ub2e4.\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: test-fsx\nspec:\n  containers:\n  - name: test-fsx\n    image: mcr.microsoft.com\/windows\/servercore:ltsc2019\n    command:\n      - powershell.exe\n      - -command\n      - \"Add-WindowsFeature Web-Server; Invoke-WebRequest -UseBasicParsing -Uri 'https:\/\/dotnetbinaries.blob.core.windows.net\/servicemonitor\/2.0.1.6\/ServiceMonitor.exe' -OutFile 'C:\\\\ServiceMonitor.exe'; echo '<html><body><br\/><br\/><marquee><H1>Hello EKS!!!<H1><marquee><\/body><html>' > C:\\\\inetpub\\\\wwwroot\\\\default.html; C:\\\\ServiceMonitor.exe 'w3svc'; \"\n    volumeMounts:\n      - mountPath: C:\\dotnetapp\\app-state\n        name: test-mount\n  volumes:\n    - name: test-mount\n      hostPath: \n        path: G:\\Directory\\app-state\n        type: Directory\n  nodeSelector:\n      beta.kubernetes.io\/os: windows\n      beta.kubernetes.io\/arch: amd64\n```\n\n\ub2e4\uc74c [\ube14\ub85c\uadf8](https:\/\/aws.amazon.com\/blogs\/containers\/using-amazon-fsx-for-windows-file-server-on-eks-windows-containers\/) \uc5d0\uc11c\ub294 Amazon FSx for Windows File Server\ub97c \uc708\ub3c4\uc6b0 \ud30c\ub4dc\uc6a9 \uc601\uad6c \uc2a4\ud1a0\ub9ac\uc9c0\ub85c \uc124\uc815\ud558\ub294 \ubc29\ubc95\uc5d0 \ub300\ud55c \uc138\ubd80 \uc815\ubcf4\uac00 \ub098\uc640 \uc788\uc2b5\ub2c8\ub2e4.","site":"eks","answers_cleaned":"    search    exclude  true                 
       In tree  Out of tree                          CSI                        in tree                                                        API                                                                                                Out of tree                                                                                                                                 k8s                                    CSI                   Amazon Elastic Kubernetes Services  EKS             CSI                   AWS     https   docs aws amazon com eks latest userguide storage html                         In tree                                                                                                                                                                                                                                    In tree                        Amazon EKS                                              In tree            awsElasticBlockStore  https   kubernetes io docs concepts storage volumes  awselasticblockstore            In tree                    NTFS  fsType               StorageClass            EKS      StorageClass  ext4     fsType           StorageClass                                                  QoS                                                                                                                                                          bash kubectl describe storageclass gp2              bash Name             gp2 IsDefaultClass   Yes Annotations      kubectl kubernetes io last applied configuration   apiVersion   storage k8s io v1   kind   StorageClas    metadata    annotations    storageclass kubernetes io is default class   true    name   gp2    parameters    fsType   ext4   type   gp2    provisioner   kubernetes io aws ebs   volumeBindingMode   WaitForFirstConsumer    storageclass kubernetes io is default class true Provisioner            kubernetes io aws ebs Parameters             fsType ext4 type gp2 
AllowVolumeExpansion    unset  MountOptions            none  ReclaimPolicy          Delete VolumeBindingMode      WaitForFirstConsumer Events                  none         NTFS           StorageClass                              yaml kind  StorageClass apiVersion  storage k8s io v1 metadata    name  gp2 windows provisioner  kubernetes io aws ebs parameters    type  gp2   fsType  ntfs volumeBindingMode  WaitForFirstConsumer                  StorageClass             bash  kubectl apply  f NTFSStorageClass yaml                          PVC                        PV                 PVC                                                                    API     NFS  iSCSI                                                 PVC                                                      ReadWriteOnce  ReadOnlyMany    ReadWriteMan                                                      PV                                                                                      PV                                   StorageClass                                    PVC               yaml  apiVersion  v1 kind  PersistentVolumeClaim metadata    name  ebs windows pv claim   namespace  windows spec     accessModes        ReadWriteOnce   storageClassName  gp2 windows   resources       requests        storage  1Gi                  PVC            bash  kubectl apply  f persistent volume claim yaml                               VolumeMount   C  Data        PVC   C  Data                        yaml  apiVersion  apps v1 kind  Deployment metadata    name  windows server ltsc2019   namespace  windows spec    selector      matchLabels        app  windows server ltsc2019       tier  backend       track  stable   replicas  1   template      metadata        labels          app  windows server ltsc2019         tier  backend         track  stable     spec        containers          name  windows server ltsc2019         image  mcr microsoft com windows servercore ltsc2019         ports            name  http         
  containerPort  80         imagePullPolicy  IfNotPresent         volumeMounts            mountPath   C   data            name  test volume       volumes            name  test volume           persistentVolumeClaim              claimName  ebs windows pv claim       nodeSelector          kubernetes io os  windows         node kubernetes io windows build   10 0 17763       PowerShell                                   bash  kubectl exec  it podname powershell  n windows                  ls                   bash  PS C    ls       Directory  C     Mode                 LastWriteTime         Length Name                                                        d               3 8 2021   1 54 PM                data d               3 8 2021   3 37 PM                inetpub d r             1 9 2021   7 26 AM                Program Files d               1 9 2021   7 18 AM                Program Files  x86  d r             1 9 2021   7 28 AM                Users d               3 8 2021   3 36 PM                var d               3 8 2021   3 36 PM                Windows  a             12 7 2019   4 20 AM           5510 License txt        data directory     EBS                       Out of tree CSI                                        DaemonSet           StatefulSet                             out of tree                     CSI                                     CSI                                                            CSI                                                                                                                                                            CSI                                                                                            csi proxy  https   github com kubernetes csi csi proxy                CSI                                 Amazon EKS         AMI  https   docs aws amazon com eks latest userguide eks optimized windows ami html     2022  4    CSI proxy                      SMB CSI       https   github com kubernetes csi 
csi driver smb         Amazon FSx for Windows File Server  https   aws amazon com fsx windows     Amazon FSx for NetApp ONTAP SMB Shares  https   aws amazon com fsx netapp ontap          AWS Storage Gateway   File Gateway  https   aws amazon com storagegateway file                            https   aws amazon com blogs modernizing with aws using smb csi driver on amazon eks windows nodes      Amazon FSx for Windows File Server                         SMB CSI                                       Amazon FSx for Windows File Server           SMB Global Mapping  https   docs microsoft com en us virtualization windowscontainers manage containers persistent storage     SMB        Windows        Amazon FSx                             SMB                                                                                                                                            SMB Global Mapping                      HostPath                                              G  Directory app state              SMB            yaml apiVersion  v1 kind  Pod metadata    name  test fsx spec    containers      name  test fsx     image  mcr microsoft com windows servercore ltsc2019     command          powershell exe          command          Add WindowsFeature Web Server  Invoke WebRequest  UseBasicParsing  Uri  https   dotnetbinaries blob core windows net servicemonitor 2 0 1 6 ServiceMonitor exe   OutFile  C   ServiceMonitor exe   echo   html  body  br   br   marquee  H1 Hello EKS    H1  marquee   body  html     C   inetpub  wwwroot  default html  C   ServiceMonitor exe  w3svc         volumeMounts          mountPath  C  dotnetapp app state         name  test mount   volumes        name  test mount       hostPath           path  G  Directory app state         type  Directory   nodeSelector        beta kubernetes io os  windows       beta kubernetes io arch  amd64               https   aws amazon com blogs containers using amazon fsx for windows file server on eks windows containers   
    Amazon FSx for Windows File Server                                              "}
{"questions":"eks exclude true Heterogeneous search","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# \uc774\uae30\uc885 \uc6cc\ud06c\ub85c\ub4dc \uc2e4\ud589\n\n\ucfe0\ubc84\ub124\ud2f0\uc2a4\ub294 \ub3d9\uc77c\ud55c \ud074\ub7ec\uc2a4\ud130\uc5d0 \ub9ac\ub205\uc2a4 \ubc0f \uc708\ub3c4\uc6b0 \ub178\ub4dc\ub97c \ud63c\ud569\ud558\uc5ec \uc0ac\uc6a9\ud560 \uc218 \uc788\ub294 \uc774\uae30\uc885(Heterogeneous) \ud074\ub7ec\uc2a4\ud130\ub97c \uc9c0\uc6d0\ud569\ub2c8\ub2e4. \ud574\ub2f9 \ud074\ub7ec\uc2a4\ud130 \ub0b4\uc5d0\ub294 \ub9ac\ub205\uc2a4\uc5d0\uc11c \uc2e4\ud589\ub418\ub294 \ud30c\ub4dc\uc640 \uc708\ub3c4\uc6b0\uc5d0\uc11c \uc2e4\ud589\ub418\ub294 \ud30c\ub4dc\ub97c \ud63c\ud569\ud558\uc5ec \uc0ac\uc6a9\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ub3d9\uc77c\ud55c \ud074\ub7ec\uc2a4\ud130\uc5d0\uc11c \uc5ec\ub7ec \ubc84\uc804\uc758 \uc708\ub3c4\uc6b0\ub97c \uc2e4\ud589\ud560 \uc218\ub3c4 \uc788\uc2b5\ub2c8\ub2e4. \ud558\uc9c0\ub9cc \uc774 \uacb0\uc815\uc744 \ub0b4\ub9b4 \ub54c \uace0\ub824\ud574\uc57c \ud560 \uba87 \uac00\uc9c0 \uc694\uc18c(\uc544\ub798 \uc124\uba85 \ucc38\uc870)\uac00 \uc788\uc2b5\ub2c8\ub2e4.\n\n# \ub178\ub4dc \ub0b4 \ud30c\ub4dc \ud560\ub2f9 \ubaa8\ubc94 \uc0ac\ub840\n\n\ub9ac\ub205\uc2a4 \ubc0f \uc708\ub3c4\uc6b0 \uc6cc\ud06c\ub85c\ub4dc\ub97c \uac01\uac01\uc758 OS\ubcc4 \ub178\ub4dc\uc5d0 \uc720\uc9c0\ud558\ub824\uba74 \ub178\ub4dc \uc140\ub809\ud130(NodeSelector)\uc640 \ud14c\uc778\ud2b8(Taint)\/\ud1a8\ub7ec\ub808\uc774\uc158(Toleration)\uc744 \uc870\ud569\ud558\uc5ec \uc0ac\uc6a9\ud574\uc57c \ud569\ub2c8\ub2e4. 
\uc774\uae30\uc885 \ud658\uacbd\uc5d0\uc11c \uc6cc\ud06c\ub85c\ub4dc\ub97c \uc2a4\ucf00\uc904\ub9c1\ud558\ub294 \uc8fc\ub41c \ubaa9\uc801\uc740 \uae30\uc874 \ub9ac\ub205\uc2a4 \uc6cc\ud06c\ub85c\ub4dc\uc640\uc758 \ud638\ud658\uc131\uc774 \uae68\uc9c0\uc9c0 \uc54a\ub3c4\ub85d \ud558\ub294 \uac83\uc785\ub2c8\ub2e4.\n\n## \ud2b9\uc815 OS \uc6cc\ud06c\ub85c\ub4dc\uac00 \uc801\uc808\ud55c \ucee8\ud14c\uc774\ub108 \ub178\ub4dc\uc5d0\uc11c \uc2e4\ud589 \ubcf4\uc7a5\n\n\uc0ac\uc6a9\uc790\ub294 \ub178\ub4dc\uc140\ub809\ud130\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc708\ub3c4\uc6b0 \ucee8\ud14c\uc774\ub108\ub97c \uc801\uc808\ud55c \ud638\uc2a4\ud2b8\uc5d0\uc11c \uc2a4\ucf00\uc904\ub9c1 \ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ud604\uc7ac \ubaa8\ub4e0 \ucfe0\ubc84\ub124\ud2f0\uc2a4 \ub178\ub4dc\uc5d0\ub294 \ub2e4\uc74c\uacfc \uac19\uc740 default labels \uc774 \uc788\uc2b5\ub2c8\ub2e4:\n\n    kubernetes.io\/os = [windows|linux]\n    kubernetes.io\/arch = [amd64|arm64|...]\n\n\ud30c\ub4dc \uc2a4\ud399\uc5d0 ``\"kubernetes.io\/os\": windows`` \uc640 \uac19\uc740 \ub178\ub4dc\uc140\ub809\ud130\uac00 \ud3ec\ud568\ub418\uc9c0 \uc54a\ub294 \uacbd\uc6b0, \ud30c\ub4dc\ub294 \uc708\ub3c4\uc6b0 \ub610\ub294 \ub9ac\ub205\uc2a4 \uc5b4\ub290 \ud638\uc2a4\ud2b8\uc5d0\uc11c\ub098 \uc2a4\ucf00\uc904\ub9c1 \ub420 \uc218 \uc788\uc2b5\ub2c8\ub2e4. 
\uc708\ub3c4\uc6b0 \ucee8\ud14c\uc774\ub108\ub294 \uc708\ub3c4\uc6b0\uc5d0\uc11c\ub9cc \uc2e4\ud589\ud560 \uc218 \uc788\uace0 \ub9ac\ub205\uc2a4 \ucee8\ud14c\uc774\ub108\ub294 \ub9ac\ub205\uc2a4\uc5d0\uc11c\ub9cc \uc2e4\ud589\ud560 \uc218 \uc788\uae30 \ub54c\ubb38\uc5d0 \ubb38\uc81c\uac00 \ub420 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\uc5d4\ud130\ud504\ub77c\uc774\uc988 \ud658\uacbd\uc5d0\uc11c\ub294 \ub9ac\ub205\uc2a4 \ucee8\ud14c\uc774\ub108\uc5d0 \ub300\ud55c \uae30\uc874 \ubc30\ud3ec\uac00 \ub9ce\uc744 \ubfd0\ub9cc \uc544\ub2c8\ub77c \ud5ec\ub984 \ucc28\ud2b8\uc640 \uac19\uc740 \uae30\uc131 \uad6c\uc131 \uc5d0\ucf54\uc2dc\uc2a4\ud15c(off-the-shelf configurations)\uc744 \uac16\ub294 \uac83\uc774 \uc77c\ubc18\uc801\uc785\ub2c8\ub2e4. \uc774\ub7f0 \uc0c1\ud669\uc5d0\uc11c\ub294 \ub514\ud50c\ub85c\uc774\uba3c\ud2b8\uc758 \ub178\ub4dc\uc140\ub809\ud130\ub97c \ubcc0\uacbd\ud558\ub294 \uac83\uc744 \uc8fc\uc800\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. **\ub300\uc548\uc740 \ud14c\uc778\ud2b8\ub97c \uc0ac\uc6a9\ud558\ub294 \uac83**\uc785\ub2c8\ub2e4.\n\n\uc608\ub97c \ub4e4\uc5b4: `--register-with-taints='os=windows:NoSchedule'`\n\nEKS\ub97c \uc0ac\uc6a9\ud558\ub294 \uacbd\uc6b0, eksctl\uc740 clusterConfig\ub97c \ud1b5\ud574 \ud14c\uc778\ud2b8\ub97c \uc801\uc6a9\ud558\ub294 \ubc29\ubc95\uc744 \uc81c\uacf5\ud569\ub2c8\ub2e4:\n\n```yaml\nnodeGroups:\n  - name: windows-ng\n    amiFamily: WindowsServer2022FullContainer\n    ...\n    labels:\n      nodeclass: windows2022\n    taints:\n      os: \"windows:NoSchedule\"\n```\n\n\ubaa8\ub4e0 \uc708\ub3c4\uc6b0 \ub178\ub4dc\uc5d0 \ud14c\uc778\ud2b8\ub97c \ucd94\uac00\ud558\ub294 \uacbd\uc6b0, \uc2a4\ucf00\uc904\ub7ec\ub294 \ud14c\uc778\ud2b8\ub97c \ud5c8\uc6a9\ud558\uc9c0 \uc54a\ub294 \ud55c \ud574\ub2f9 \ub178\ub4dc\uc5d0\uc11c \ud30c\ub4dc\ub97c \uc2a4\ucf00\uc904\ub9c1\ud558\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4. 
\ub2e4\uc74c\uc740 \ud30c\ub4dc \ub9e4\ub2c8\ud398\uc2a4\ud2b8\uc758 \uc608\uc2dc\uc785\ub2c8\ub2e4:\n\n```yaml\nnodeSelector:\n    kubernetes.io\/os: windows\ntolerations:\n    - key: \"os\"\n      operator: \"Equal\"\n      value: \"windows\"\n      effect: \"NoSchedule\"\n```\n\n## \ub3d9\uc77c\ud55c \ud074\ub7ec\uc2a4\ud130\uc5d0\uc11c \uc5ec\ub7ec \uc708\ub3c4\uc6b0 \ube4c\ub4dc \ucc98\ub9ac\n\n\uac01 \ud30c\ub4dc\uc5d0\uc11c \uc0ac\uc6a9\ud558\ub294 \uc708\ub3c4\uc6b0 \ucee8\ud14c\uc774\ub108 \ubca0\uc774\uc2a4 \uc774\ubbf8\uc9c0\ub294 \ub178\ub4dc\uc640 \ub3d9\uc77c\ud55c \ucee4\ub110 \ube4c\ub4dc \ubc84\uc804\uacfc \uc77c\uce58\ud574\uc57c \ud569\ub2c8\ub2e4. \ub3d9\uc77c\ud55c \ud074\ub7ec\uc2a4\ud130\uc5d0\uc11c \uc5ec\ub7ec \uc708\ub3c4\uc6b0 Server \ube4c\ub4dc\ub97c \uc0ac\uc6a9\ud558\ub824\uba74 \ucd94\uac00 \ub178\ub4dc \ub808\uc774\ube14\uc778 \ub178\ub4dc\uc140\ub809\ud130\ub97c \uc124\uc815\ud558\uac70\ub098 **windows-build** \ub808\uc774\ube14\uc744 \ud65c\uc6a9\ud574\uc57c \ud569\ub2c8\ub2e4.\n\n\ucfe0\ubc84\ub124\ud2f0\uc2a4 1.17 \ubc84\uc804\uc5d0\uc11c\ub294 **node.kubernetes.io\/windows-build** \ub77c\ub294 \uc0c8\ub85c\uc6b4 \ub808\uc774\ube14\uc744 \uc790\ub3d9\uc73c\ub85c \ucd94\uac00\ud558\uc5ec \ub3d9\uc77c\ud55c \ud074\ub7ec\uc2a4\ud130\uc5d0\uc11c \uc5ec\ub7ec \uc708\ub3c4\uc6b0 \ube4c\ub4dc\uc758 \uad00\ub9ac\ub97c \ub2e8\uc21c\ud654 \ud569\ub2c8\ub2e4. \uc774\uc804 \ubc84\uc804\uc744 \uc2e4\ud589 \uc911\uc778 \uacbd\uc6b0 \uc774 \ub808\uc774\ube14\uc744 \uc708\ub3c4\uc6b0 \ub178\ub4dc\uc5d0 \uc218\ub3d9\uc73c\ub85c \ucd94\uac00\ud558\ub294 \uac83\uc774 \uc88b\uc2b5\ub2c8\ub2e4.\n\n\uc774 \ub808\uc774\ube14\uc5d0\ub294 \ud638\ud658\uc131\uc744 \uc704\ud574 \uc77c\uce58\ud574\uc57c \ud558\ub294 \uc708\ub3c4\uc6b0 \uba54\uc774\uc800, \ub9c8\uc774\ub108, \uadf8\ub9ac\uace0 \ube4c\ub4dc \ubc88\ud638\uac00 \ubc18\uc601\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4. 
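\n\n\uc608\ub97c \ub4e4\uc5b4, \ub2e4\uc74c \uba85\ub839\uc744 \ud1b5\ud574 \uac01 \uc708\ub3c4\uc6b0 \ub178\ub4dc\uc758 **windows-build** \ub808\uc774\ube14 \uac12\uc744 \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4:\n\n```bash\nkubectl get nodes -l kubernetes.io\/os=windows -L node.kubernetes.io\/windows-build\n```\n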
\ub2e4\uc74c\uc740 \ud604\uc7ac \uac01 \uc708\ub3c4\uc6b0 \uc11c\ubc84 \ubc84\uc804\uc5d0 \uc0ac\uc6a9\ub418\ub294 \uac12\uc785\ub2c8\ub2e4.\n\n\uc911\uc694\ud55c \uc810\uc740 \uc708\ub3c4\uc6b0 \uc11c\ubc84\uac00 \uc7a5\uae30 \uc11c\ube44\uc2a4 \ucc44\ub110(LTSC)\ub97c \uae30\ubcf8 \ub9b4\ub9ac\uc2a4 \ucc44\ub110\ub85c \uc774\ub3d9\ud558\uace0 \uc788\ub2e4\ub294 \uac83\uc785\ub2c8\ub2e4. \uc708\ub3c4\uc6b0 \uc11c\ubc84 \ubc18\uae30 \ucc44\ub110(SAC)\uc740 2022\ub144 8\uc6d4 9\uc77c\uc5d0 \uc0ac\uc6a9 \uc911\uc9c0\ub418\uc5c8\uc2b5\ub2c8\ub2e4. \uc708\ub3c4\uc6b0 \uc11c\ubc84\uc758 \ud5a5\ud6c4 SAC \ub9b4\ub9ac\uc2a4\ub294 \uc5c6\uc2b5\ub2c8\ub2e4.\n\n\n| Product Name | Build Number(s) |\n| -------- | -------- |\n| Server full 2022 LTSC    | 10.0.20348    |\n| Server core 2019 LTSC    | 10.0.17763    |\n\n\ub2e4\uc74c \uba85\ub839\uc744 \ud1b5\ud574 OS \ube4c\ub4dc \ubc84\uc804\uc744 \ud655\uc778\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4:\n\n```bash    \nkubectl get nodes -o wide\n```\n\nKERNEL-VERSION \ucd9c\ub825\uc740 \uc708\ub3c4\uc6b0 OS \ube4c\ub4dc \ubc84\uc804\uc744 \ub098\ud0c0\ub0c5\ub2c8\ub2e4.\n\n```bash \nNAME                          STATUS   ROLES    AGE   VERSION                INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                         KERNEL-VERSION                  CONTAINER-RUNTIME\nip-10-10-2-235.ec2.internal   Ready    <none>   23m   v1.24.7-eks-fb459a0    10.10.2.235   3.236.30.157    Windows Server 2022 Datacenter   10.0.20348.1607                 containerd:\/\/1.6.6\nip-10-10-31-27.ec2.internal   Ready    <none>   23m   v1.24.7-eks-fb459a0    10.10.31.27   44.204.218.24   Windows Server 2019 Datacenter   10.0.17763.4131                 containerd:\/\/1.6.6\nip-10-10-7-54.ec2.internal    Ready    <none>   31m   v1.24.11-eks-a59e1f0   10.10.7.54    3.227.8.172     Amazon Linux 2                   5.10.173-154.642.amzn2.x86_64   containerd:\/\/1.6.19\n```\n\n\uc544\ub798 \uc608\uc81c\uc5d0\uc11c\ub294 \ub2e4\uc591\ud55c \uc708\ub3c4\uc6b0 
\ub178\ub4dc \uadf8\ub8f9 OS \ubc84\uc804\uc744 \uc2e4\ud589\ud560 \ub54c \uc62c\ubc14\ub978 \uc708\ub3c4\uc6b0 \ube4c\ub4dc \ubc84\uc804\uc744 \uc77c\uce58\uc2dc\ud0a4\uae30 \uc704\ud574 \ucd94\uac00 \ub178\ub4dc\uc140\ub809\ud130\ub97c \ud30c\ub4dc \uc2a4\ud399\uc5d0 \uc801\uc6a9\ud569\ub2c8\ub2e4.\n\n```yaml\nnodeSelector:\n    kubernetes.io\/os: windows\n    node.kubernetes.io\/windows-build: '10.0.20348'\ntolerations:\n    - key: \"os\"\n      operator: \"Equal\"\n      value: \"windows\"\n      effect: \"NoSchedule\"\n```\n\n## RuntimeClass\ub97c \uc0ac\uc6a9\ud558\uc5ec \ud30c\ub4dc \ub9e4\ub2c8\ud398\uc2a4\ud2b8\uc758 \ub178\ub4dc\uc140\ub809\ud130\uc640 \ud1a8\ub7ec\ub808\uc774\uc158 \ub2e8\uc21c\ud654\n\nRuntimeClass\ub97c \uc0ac\uc6a9\ud558\uc5ec \ud14c\uc778\ud2b8\uc640 \ud1a8\ub7ec\ub808\uc774\uc158\uc744 \uc0ac\uc6a9\ud558\ub294 \ud504\ub85c\uc138\uc2a4\ub97c \uac04\uc18c\ud654\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \uc774\ub7f0 \ud14c\uc778\ud2b8\uc640 \ud1a8\ub7ec\ub808\uc774\uc158\uc744 \ucea1\uc290\ud654\ud558\ub294 RuntimeClass \uc624\ube0c\uc81d\ud2b8\ub97c \ub9cc\ub4e4\uc5b4 \uc774 \uc791\uc5c5\uc744 \uc218\ud589\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\ub2e4\uc74c \ub9e4\ub2c8\ud398\uc2a4\ud2b8\ub97c \ud1b5\ud574 RuntimeClass\ub97c \uc0dd\uc131\ud569\ub2c8\ub2e4:\n\n```yaml\napiVersion: node.k8s.io\/v1beta1\nkind: RuntimeClass\nmetadata:\n  name: windows-2022\nhandler: 'docker'\nscheduling:\n  nodeSelector:\n    kubernetes.io\/os: 'windows'\n    kubernetes.io\/arch: 'amd64'\n    node.kubernetes.io\/windows-build: '10.0.20348'\n  tolerations:\n  - effect: NoSchedule\n    key: os\n    operator: Equal\n    value: \"windows\"\n```\n\nRuntimeClass\uac00 \uc0dd\uc131\ub418\uba74, \ud30c\ub4dc \ub9e4\ub2c8\ud398\uc2a4\ud2b8\uc758 \uc2a4\ud399\uc744 \ud1b5\ud574 \uc0ac\uc6a9\ud569\ub2c8\ub2e4.\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: iis-2022\n  labels:\n    app: iis-2022\nspec:\n  replicas: 1\n  template:\n    metadata:\n    
  name: iis-2022\n      labels:\n        app: iis-2022\n    spec:\n      runtimeClassName: windows-2022\n      containers:\n      - name: iis\n```\n\n## \uad00\ub9ac\ud615 \ub178\ub4dc \uadf8\ub8f9 \uc9c0\uc6d0\n\uace0\uac1d\uc774 \uc708\ub3c4\uc6b0 \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc744 \ubcf4\ub2e4 \uac04\uc18c\ud654\ub41c \ubc29\uc2dd\uc73c\ub85c \uc2e4\ud589\ud560 \uc218 \uc788\ub3c4\ub85d AWS\uc5d0\uc11c\ub294 2022\ub144 12\uc6d4 15\uc77c\uc5d0 [\uc708\ub3c4\uc6b0 \ucee8\ud14c\uc774\ub108\uc5d0 \ub300\ud55c EKS \uad00\ub9ac\ud615 \ub178\ub4dc \uadf8\ub8f9 (MNG) \uc9c0\uc6d0](https:\/\/aws.amazon.com\/about-aws\/whats-new\/2022\/12\/amazon-eks-automated-provisioning-lifecycle-management-windows-containers\/)\uc744 \uc2dc\uc791 \ud588\uc2b5\ub2c8\ub2e4. [\uc708\ub3c4\uc6b0 \uad00\ub9ac\ud615 \ub178\ub4dc \uadf8\ub8f9](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/managed-node-groups.html)\uc740 [\ub9ac\ub205\uc2a4 \uad00\ub9ac\ud615 \ub178\ub4dc \uadf8\ub8f9](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/managed-node-groups.html)\uacfc \ub3d9\uc77c\ud55c \uc6cc\ud06c\ud50c\ub85c\uc6b0\uc640 \ub3c4\uad6c\ub97c \uc0ac\uc6a9\ud558\uc5ec \ud65c\uc131\ud654\ub429\ub2c8\ub2e4. 
Full and Core AMIs (Amazon Machine Images) from the Windows Server 2019 and 2022 families are supported.\n\nThe supported AMI families for Managed Node Groups (MNG) are:\n\n| AMI Family |\n| ---------   | \n| WINDOWS_CORE_2019_x86_64    | \n| WINDOWS_FULL_2019_x86_64    | \n| WINDOWS_CORE_2022_x86_64    | \n| WINDOWS_FULL_2022_x86_64    | \n\n## Additional documentation\n\nAWS official documentation:\nhttps:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/windows-support.html\n\nTo better understand how pod networking (CNI) works, check the following link: https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/pod-networking.html\n\nAWS blog on deploying Managed Node Groups for Windows on EKS:\nhttps:\/\/aws.amazon.com\/blogs\/containers\/deploying-amazon-eks-windows-managed-node-groups","site":"eks"}
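The managed node group described above can be provisioned with eksctl. A minimal sketch, assuming an existing cluster; the cluster name, region, and node group name are illustrative placeholders, while the AMI family, label, and taint mirror the values in this guide:

```yaml
# eksctl ClusterConfig: Windows managed node group using one of the
# supported AMI families from the table above.
# metadata.name / region / nodegroup name are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: windows-cluster
  region: us-east-1
managedNodeGroups:
  - name: windows-ng
    amiFamily: WindowsServer2022FullContainer
    minSize: 1
    maxSize: 3
    labels:
      nodeclass: windows2022
    taints:
      - key: os
        value: "windows"
        effect: NoSchedule
```

The `os=windows:NoSchedule` taint registered here is what the toleration in the pod spec earlier in this guide matches against.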
{"questions":"eks info We ve Moved to the AWS Docs Bookmarks and links will continue to work but we recommend updating them for faster access in the future on the AWS Docs Please visit our new site for the latest version This content has been updated and relocated to improve your experience","answers":"\n!!! info \"We've Moved to the AWS Docs! \ud83d\ude80\"\n    This content has been updated and relocated to improve your experience. \n    Please visit our new site for the latest version:\n    [AWS EKS Best Practices Guide](https:\/\/docs.aws.amazon.com\/eks\/latest\/best-practices\/windows-monitoring.html) on the AWS Docs\n\n    Bookmarks and links will continue to work, but we recommend updating them for faster access in the future.\n\n---\n\n# Monitoring\n\nPrometheus, a [graduated CNCF project](https:\/\/www.cncf.io\/projects\/), is by far the most popular monitoring system with native integration into Kubernetes. Prometheus collects metrics around containers, pods, nodes, and clusters. Additionally, Prometheus leverages Alertmanager, which lets you program alerts to warn you if something in your cluster is going wrong. Prometheus stores metric data as time series identified by metric name and key\/value pairs. Prometheus includes a way to query using a language called PromQL, which is short for Prometheus Query Language. \n\nThe high-level architecture of Prometheus metrics collection is shown below:\n\n\n![Prometheus Metrics collection](.\/images\/prom.png)\n\n\nPrometheus uses a pull mechanism and scrapes metrics from targets using exporters, and from the Kubernetes API using [kube-state-metrics](https:\/\/github.com\/kubernetes\/kube-state-metrics). This means applications and services must expose an HTTP(S) endpoint containing Prometheus-formatted metrics. Prometheus will then, as per its configuration, periodically pull metrics from these HTTP(S) endpoints.\n\nAn exporter lets you consume third-party metrics as Prometheus-formatted metrics. 
A Prometheus exporter is typically deployed on each node. For a complete list of exporters, please refer to the Prometheus [exporters](https:\/\/prometheus.io\/docs\/instrumenting\/exporters\/) documentation. While [node exporter](https:\/\/github.com\/prometheus\/node_exporter) is suited for exporting host hardware and OS metrics for Linux nodes, it won't work for Windows nodes. \n\nIn a **mixed-node EKS cluster with Windows nodes**, when you use the stable [Prometheus helm chart](https:\/\/github.com\/prometheus-community\/helm-charts), you will see failed pods on the Windows nodes, as this exporter is not intended for Windows. You will need to treat the Windows worker pool separately and instead install the [Windows exporter](https:\/\/github.com\/prometheus-community\/windows_exporter) on the Windows worker node group. \n\nIn order to set up Prometheus monitoring for Windows nodes, you need to download and install the WMI exporter on the Windows server itself and then set up the targets inside the scrape configuration of the Prometheus configuration file.\nThe [releases page](https:\/\/github.com\/prometheus-community\/windows_exporter\/releases) provides all available .msi installers, with respective feature sets and bug fixes. The installer will set up the windows_exporter as a Windows service, as well as create an exception in the Windows firewall. 
If the installer is run without any parameters, the exporter will run with default settings for enabled collectors, ports, etc.\n\nYou can check out the **scheduling best practices** section of this guide, which suggests the use of taints\/tolerations or RuntimeClass to selectively deploy the node exporter only to Linux nodes, while the Windows exporter is installed on Windows nodes as you bootstrap the node or using a configuration management tool of your choice (e.g., Chef, Ansible, SSM).\n\nNote that, unlike Linux nodes, where the node exporter is installed as a DaemonSet, on Windows nodes the WMI exporter is installed on the host itself. The exporter will export metrics such as CPU usage, memory, and disk I\/O usage, and can also be used to monitor IIS sites and applications, network interfaces, and services. \n\nThe windows_exporter will expose all metrics from enabled collectors by default. This is the recommended way to collect metrics to avoid errors. However, for advanced use, the windows_exporter can be passed an optional list of collectors to filter metrics. 
The collect[] parameter in the Prometheus configuration lets you do that.\n\nThe default install steps for Windows include downloading and starting the exporter as a service during the bootstrapping process with arguments, such as the collectors you want to filter.\n\n```powershell \n> Powershell Invoke-WebRequest https:\/\/github.com\/prometheus-community\/windows_exporter\/releases\/download\/v0.13.0\/windows_exporter-0.13.0-amd64.msi -OutFile <DOWNLOADPATH> \n\n> msiexec \/i <DOWNLOADPATH> ENABLED_COLLECTORS=\"cpu,cs,logical_disk,net,os,system,container,memory\"\n```\n\n\nBy default, the metrics can be scraped at the \/metrics endpoint on port 9182.\nAt this point, Prometheus can consume the metrics by adding the following scrape_config to the Prometheus configuration:\n\n```yaml \nscrape_configs:\n    - job_name: \"prometheus\"\n      static_configs: \n        - targets: ['localhost:9090']\n    ...\n    - job_name: \"wmi_exporter\"\n      scrape_interval: 10s\n      static_configs: \n        - targets: ['<windows-node1-ip>:9182', '<windows-node2-ip>:9182', ...]\n```\n\nThe Prometheus configuration is reloaded using:\n\n```bash \n\n> ps aux | grep prometheus\n> kill -HUP <PID> \n\n```\n\nA better and recommended way to add targets is to use a Custom Resource Definition called ServiceMonitor, which comes as part of the [Prometheus operator](https:\/\/github.com\/prometheus-operator\/kube-prometheus\/releases) that provides the definition for a ServiceMonitor object and a controller that will activate the ServiceMonitors we define and automatically build the required Prometheus configuration. \n\nThe ServiceMonitor, which declaratively specifies how groups of Kubernetes services should be monitored, is used to define an application you wish to scrape metrics from within Kubernetes. Within the ServiceMonitor, we specify the Kubernetes labels that the operator can use to identify the Kubernetes Service, which in turn identifies the Pods we wish to monitor. 
\n\nIn order to leverage the ServiceMonitor, create an Endpoints object pointing to specific Windows targets, a headless Service, and a ServiceMonitor for the Windows nodes.\n\n```yaml\napiVersion: v1\nkind: Endpoints\nmetadata:\n  labels:\n    k8s-app: wmiexporter\n  name: wmiexporter\n  namespace: kube-system\nsubsets:\n- addresses:\n  - ip: NODE-ONE-IP\n    targetRef:\n      kind: Node\n      name: NODE-ONE-NAME\n  - ip: NODE-TWO-IP\n    targetRef:\n      kind: Node\n      name: NODE-TWO-NAME\n  - ip: NODE-THREE-IP\n    targetRef:\n      kind: Node\n      name: NODE-THREE-NAME\n  ports:\n  - name: http-metrics\n    port: 9182\n    protocol: TCP\n\n---\napiVersion: v1\nkind: Service ##Headless Service\nmetadata:\n  labels:\n    k8s-app: wmiexporter\n  name: wmiexporter\n  namespace: kube-system\nspec:\n  clusterIP: None\n  ports:\n  - name: http-metrics\n    port: 9182\n    protocol: TCP\n    targetPort: 9182\n  sessionAffinity: None\n  type: ClusterIP\n  \n---\napiVersion: monitoring.coreos.com\/v1\nkind: ServiceMonitor ##Custom ServiceMonitor Object\nmetadata:\n  labels:\n    k8s-app: wmiexporter\n  name: wmiexporter\n  namespace: monitoring\nspec:\n  endpoints:\n  - interval: 30s\n    port: http-metrics\n  jobLabel: k8s-app\n  namespaceSelector:\n    matchNames:\n    - kube-system\n  selector:\n    matchLabels:\n      k8s-app: wmiexporter\n```\n\nFor more details on the operator and the usage of ServiceMonitor, check out the official [operator](https:\/\/github.com\/prometheus-operator\/kube-prometheus) documentation. 
Note that Prometheus does support dynamic target discovery using many [service discovery](https:\/\/prometheus.io\/blog\/2015\/06\/01\/advanced-service-discovery\/) options.\n","site":"eks"}
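Since windows_exporter serves plain Prometheus text exposition format on port 9182, the scraped data is easy to inspect by hand. A minimal sketch of parsing the simple `name{labels} value` line shape is shown below; this is illustrative only (real scraping is done by Prometheus itself or an official client library), and it ignores escaped label values and timestamps:

```python
# Illustrative parser for the simple "name{labels} value" lines that
# windows_exporter serves at http://<windows-node-ip>:9182/metrics.
# Not a full exposition-format parser: no escaped quotes or commas
# inside label values, and no timestamp support.

def parse_metric_line(line: str):
    """Return (name, labels, value) for one sample line, or None."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None  # skip blanks and # HELP / # TYPE comments
    metric, _, raw_value = line.rpartition(" ")
    labels = {}
    name = metric
    if "{" in metric:
        name, _, label_body = metric.partition("{")
        for pair in label_body.rstrip("}").split(","):
            key, _, val = pair.partition("=")
            labels[key] = val.strip('"')
    return name, labels, float(raw_value)

# Sample lines in the shape windows_exporter emits (values are made up).
sample = [
    "# HELP windows_cpu_time_total Time spent in different CPU modes.",
    'windows_cpu_time_total{core="0",mode="idle"} 1.2345e+06',
    "windows_os_physical_memory_free_bytes 4.294967296e+09",
]
metrics = [m for m in map(parse_metric_line, sample) if m]
```

Pointing such a loop at each Windows node's `/metrics` endpoint shows exactly what the `scrape_configs` targets above will collect.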
{"questions":"eks Prometheus AlertsManager AlertsManager PromQL exclude true search","answers":"---\nsearch:\n  exclude: true\n---\n\n\n# \ubaa8\ub2c8\ud130\ub9c1\n\n\ud504\ub85c\uba54\ud14c\uc6b0\uc2a4(Prometheus)\ub294, [CNCF \uc878\uc5c5 \ud504\ub85c\uc81d\ud2b8](https:\/\/www.cncf.io\/projects\/)\ub85c\uc11c \ucfe0\ubc84\ub124\ud2f0\uc2a4\uc5d0 \uae30\ubcf8\uc801\uc73c\ub85c \ud1b5\ud569\ub418\ub294 \uac00\uc7a5 \uc778\uae30 \uc788\ub294 \ubaa8\ub2c8\ud130\ub9c1 \uc2dc\uc2a4\ud15c\uc785\ub2c8\ub2e4. \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4\ub294 \ucee8\ud14c\uc774\ub108, \ud30c\ub4dc, \ub178\ub4dc \ubc0f \ud074\ub7ec\uc2a4\ud130\uc640 \uad00\ub828\ub41c \uba54\ud2b8\ub9ad\uc744 \uc218\uc9d1\ud569\ub2c8\ub2e4. \ub610\ud55c \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4\ub294 AlertsManager\ub97c \ud65c\uc6a9\ud569\ub2c8\ub2e4. AlertsManager\ub97c \uc0ac\uc6a9\ud558\uba74 \ud074\ub7ec\uc2a4\ud130\uc5d0\uc11c \ubb38\uc81c\uac00 \ubc1c\uc0dd\ud560 \uacbd\uc6b0 \uacbd\uace0\ub97c \ud504\ub85c\uadf8\ub798\ubc0d\ud558\uc5ec \uacbd\uace0\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4\ub294 \uc9c0\ud45c \ub370\uc774\ud130\ub97c \uc9c0\ud45c \uc774\ub984 \ubc0f \ud0a4\/\uac12 \uc30d\uc73c\ub85c \uc2dd\ubcc4\ub418\ub294 \uc2dc\uacc4\uc5f4 \ub370\uc774\ud130\ub85c \uc800\uc7a5\ud569\ub2c8\ub2e4. \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4\uc5d0\ub294 \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4 \ucffc\ub9ac \uc5b8\uc5b4\uc758 \uc904\uc784\ub9d0\uc778 PromQL\uc774\ub77c\ub294 \uc5b8\uc5b4\ub97c \uc0ac\uc6a9\ud558\uc5ec \ucffc\ub9ac\ud558\ub294 \ubc29\ubc95\uc774 \ud3ec\ud568\ub418\uc5b4 \uc788\uc2b5\ub2c8\ub2e4. 
\n\n\ud504\ub85c\uba54\ud14c\uc6b0\uc2a4 \uba54\ud2b8\ub9ad \uc218\uc9d1\uc758 \uc0c1\uc704 \uc218\uc900 \uc544\ud0a4\ud14d\ucc98\ub294 \ub2e4\uc74c\uacfc \uac19\uc2b5\ub2c8\ub2e4.\n\n\n![\ud504\ub85c\uba54\ud14c\uc6b0\uc2a4 \uba54\ud2b8\ub9ad \uceec\ub809\uc158](.\/images\/prom.png)\n\n\n\ud504\ub85c\uba54\ud14c\uc6b0\uc2a4\ub294 \ud480 \uba54\ucee4\ub2c8\uc998\uc744 \uc0ac\uc6a9\ud558\uace0 \uc775\uc2a4\ud3ec\ud130(Exporter)\ub97c \uc0ac\uc6a9\ud558\uc5ec \ud0c0\uac9f\uc5d0\uc11c \uba54\ud2b8\ub9ad\uc744 \uc2a4\ud06c\ub7a9\ud558\uace0 [kube state metrics](https:\/\/github.com\/kubernetes\/kube-state-metrics)\ub97c \uc0ac\uc6a9\ud558\uc5ec \ucfe0\ubc84\ub124\ud2f0\uc2a4 API\uc5d0\uc11c \uba54\ud2b8\ub9ad\uc744 \uc2a4\ud06c\ub7a9\ud569\ub2c8\ub2e4. \uc989, \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uacfc \uc11c\ube44\uc2a4\ub294 \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4 \ud615\uc2dd\uc758 \uba54\ud2b8\ub9ad\uc774 \ud3ec\ud568\ub41c HTTP(S) \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ub178\ucd9c\ud574\uc57c \ud569\ub2c8\ub2e4. \uadf8\ub7ec\uba74 \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4\ub294 \uad6c\uc131\uc5d0 \ub530\ub77c \uc8fc\uae30\uc801\uc73c\ub85c \uc774\ub7f0 HTTP(S) \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc5d0\uc11c \uba54\ud2b8\ub9ad\uc744 \uac00\uc838\uc635\ub2c8\ub2e4.\n\n\uc775\uc2a4\ud3ec\ud130\ub97c \uc0ac\uc6a9\ud558\uba74 \ud0c0\uc0ac \uc9c0\ud45c\ub97c \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4 \ud615\uc2dd\uc758 \uc9c0\ud45c\ub85c \uc0ac\uc6a9\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4 \uc775\uc2a4\ud3ec\ud130\ub294 \uc77c\ubc18\uc801\uc73c\ub85c \uac01 \ub178\ub4dc\uc5d0 \ubc30\ud3ec\ub429\ub2c8\ub2e4. \uc775\uc2a4\ud3ec\ud130 \uc804\uccb4 \ubaa9\ub85d\uc740 \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4 [\uc775\uc2a4\ud3ec\ud130 \ubb38\uc11c](https:\/\/prometheus.io\/docs\/instrumenting\/exporters\/)\ub97c \ucc38\uc870\ud558\uc2ed\uc2dc\uc624. 
[node exporter](https:\/\/github.com\/prometheus\/node_exporter)\ub294 Linux \ub178\ub4dc \uc6a9 \ud638\uc2a4\ud2b8 \ud558\ub4dc\uc6e8\uc5b4 \ubc0f OS \uba54\ud2b8\ub9ad\uc744 \ub0b4\ubcf4\ub0b4\ub294 \ub370 \uc801\ud569\ud558\uc9c0\ub9cc \uc708\ub3c4\uc6b0 \ub178\ub4dc\uc5d0\uc11c\ub294 \uc791\ub3d9\ud558\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\n\n**\uc708\ub3c4\uc6b0 \ub178\ub4dc\uac00 \uc788\ub294 \ud63c\ud569 \ub178\ub4dc EKS \ud074\ub7ec\uc2a4\ud130**\uc5d0\uc11c \uc548\uc815\uc801\uc778 [\ud504\ub85c\uba54\ud14c\uc6b0\uc2a4 \ud5ec\ub984 \ucc28\ud2b8](https:\/\/github.com\/prometheus-community\/helm-charts)\ub97c \uc0ac\uc6a9\ud558\uba74 \uc708\ub3c4\uc6b0 \ub178\ub4dc\uc5d0 \uc7a5\uc560\uac00 \ubc1c\uc0dd\ud55c \ud30c\ub4dc\uac00 \ud45c\uc2dc\ub429\ub2c8\ub2e4. \uc774 \uc775\uc2a4\ud3ec\ud130\ub294 \uc708\ub3c4\uc6b0\uc6a9\uc774 \uc544\ub2c8\uae30 \ub54c\ubb38\uc785\ub2c8\ub2e4. \uc708\ub3c4\uc6b0 \uc791\uc5c5\uc790 \ud480\uc744 \ubcc4\ub3c4\ub85c \ucc98\ub9ac\ud558\uace0 \ub300\uc2e0 \uc708\ub3c4\uc6b0 \uc6cc\ucee4 \ub178\ub4dc \uadf8\ub8f9\uc5d0 [\uc708\ub3c4\uc6b0 \uc775\uc2a4\ud3ec\ud130](https:\/\/github.com\/prometheus-community\/windows_exporter)\ub97c \uc124\uce58\ud574\uc57c \ud569\ub2c8\ub2e4.\n\n\uc708\ub3c4\uc6b0 \ub178\ub4dc\uc5d0 \ub300\ud574 \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4 \ubaa8\ub2c8\ud130\ub9c1\uc744 \uc124\uc815\ud558\ub824\uba74 \uc708\ub3c4\uc6b0 \uc11c\ubc84 \uc790\uccb4\uc5d0 WMI \uc775\uc2a4\ud3ec\ud130\ub97c \ub2e4\uc6b4\ub85c\ub4dc\ud558\uc5ec \uc124\uce58\ud55c \ub2e4\uc74c \ud504\ub85c\uba54\ud14c\uc6b0\uc2a4 \uad6c\uc131 \ud30c\uc77c\uc758 \uc2a4\ud06c\ub7a9 \uad6c\uc131 \ub0b4\uc5d0\uc11c \ub300\uc0c1\uc744 \uc124\uc815\ud574\uc57c \ud569\ub2c8\ub2e4.\n[\ub9b4\ub9ac\uc2a4 \ud398\uc774\uc9c0](https:\/\/github.com\/prometheus-community\/windows_exporter\/releases)\ub294 \uc0ac\uc6a9 \uac00\ub2a5\ud55c \ubaa8\ub4e0.msi \uc124\uce58 \ud504\ub85c\uadf8\ub7a8\uc744 \uac01 \uae30\ub2a5 \uc138\ud2b8 \ubc0f \ubc84\uadf8 \uc218\uc815\uacfc 
\ud568\uaed8 \uc81c\uacf5\ud569\ub2c8\ub2e4. \uc124\uce58 \ud504\ub85c\uadf8\ub7a8\uc740 windows_exporter\ub97c \uc708\ub3c4\uc6b0 \uc11c\ube44\uc2a4\ub85c \uc124\uc815\ud558\uace0 \uc708\ub3c4\uc6b0 \ubc29\ud654\ubcbd\uc5d0\uc11c \uc608\uc678\ub97c \uc0dd\uc131\ud569\ub2c8\ub2e4. \ud30c\ub77c\ubbf8\ud130 \uc5c6\uc774 \uc124\uce58 \ud504\ub85c\uadf8\ub7a8\uc744 \uc2e4\ud589\ud558\ub294 \uacbd\uc6b0 \uc775\uc2a4\ud3ec\ud130\ub294 \ud65c\uc131\ud654\ub41c \uceec\ub809\ud130, \ud3ec\ud2b8 \ub4f1\uc5d0 \ub300\ud55c \uae30\ubcf8 \uc124\uc815\uc73c\ub85c \uc2e4\ud589\ub429\ub2c8\ub2e4.\n\n\uc774 \uac00\uc774\ub4dc\uc758 **\uc2a4\ucf00\uc904\ub9c1 \ubaa8\ubc94 \uc0ac\ub840** \uc139\uc158\uc740 \ud14c\uc778\ud2b8\/\ud1a8\ub7ec\ub808\uc774\uc158 \ub610\ub294 RuntimeClass\ub97c \uc0ac\uc6a9\ud558\uc5ec \ub178\ub4dc \uc775\uc2a4\ud3ec\ud130\ub97c Linux \ub178\ub4dc\uc5d0\ub9cc \uc120\ud0dd\uc801\uc73c\ub85c \ubc30\ud3ec\ud558\ub294 \ubc29\ubc95\uc744 \uc81c\uc548\ud558\ub294 \ubc18\uba74, \uc708\ub3c4\uc6b0 \uc775\uc2a4\ud3ec\ud130\ub294 \ub178\ub4dc\ub97c \ubd80\ud2b8\uc2a4\ud2b8\ub7a9\ud558\uac70\ub098 \uc6d0\ud558\ub294 \uad6c\uc131 \uad00\ub9ac \ub3c4\uad6c(\uc608: chef, Ansible, SSM \ub4f1)\ub97c \uc0ac\uc6a9\ud558\uc5ec \ub178\ub4dc \uc775\uc2a4\ud3ec\ud130\ub97c \uc124\uce58\ud558\ub3c4\ub85d \uc81c\uc548\ud569\ub2c8\ub2e4.\n\n\ucc38\uace0\ub85c, \ub178\ub4dc \uc775\uc2a4\ud3ec\ud130\uac00 \ub370\ubaac\uc14b\uc73c\ub85c \uc124\uce58\ub418\ub294 Linux \ub178\ub4dc\uc640 \ub2ec\ub9ac \uc708\ub3c4\uc6b0 \ub178\ub4dc\uc5d0\uc11c\ub294 WMI \uc775\uc2a4\ud3ec\ud130\uac00 \ud638\uc2a4\ud2b8 \uc790\uccb4\uc5d0 \uc124\uce58\ub429\ub2c8\ub2e4. 
\uc775\uc2a4\ud3ec\ud130\ub294 CPU \uc0ac\uc6a9\ub7c9, \uba54\ubaa8\ub9ac \ubc0f \ub514\uc2a4\ud06c I\/O \uc0ac\uc6a9\ub7c9\uacfc \uac19\uc740 \uba54\ud2b8\ub9ad\uc744 \ub0b4\ubcf4\ub0b4\uace0 IIS \uc0ac\uc774\ud2b8 \ubc0f \uc751\uc6a9 \ud504\ub85c\uadf8\ub7a8, \ub124\ud2b8\uc6cc\ud06c \uc778\ud130\ud398\uc774\uc2a4 \ubc0f \uc11c\ube44\uc2a4\ub97c \ubaa8\ub2c8\ud130\ub9c1\ud558\ub294 \ub370\uc5d0\ub3c4 \uc0ac\uc6a9\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \n\nwindows_exporter\ub294 \uae30\ubcf8\uc801\uc73c\ub85c \ud65c\uc131\ud654\ub41c \uceec\ub809\ud130\uc758 \ubaa8\ub4e0 \uba54\ud2b8\ub9ad\uc744 \ub178\ucd9c\ud569\ub2c8\ub2e4. \uc624\ub958\ub97c \ubc29\uc9c0\ud558\uae30 \uc704\ud574 \uc9c0\ud45c\ub97c \uc218\uc9d1\ud558\ub294 \ub370 \uad8c\uc7a5\ub418\ub294 \ubc29\ubc95\uc785\ub2c8\ub2e4. \ud558\uc9c0\ub9cc \uace0\uae09 \uc0ac\uc6a9\uc744 \uc704\ud574 windows_exporter\uc5d0 \uc120\ud0dd\uc801 \uc218\uc9d1\uae30 \ubaa9\ub85d\uc744 \uc804\ub2ec\ud558\uc5ec \uba54\ud2b8\ub9ad\uc744 \ud544\ud130\ub9c1\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\ud504\ub85c\uba54\ud14c\uc6b0\uc2a4 \uad6c\uc131\uc758 collect[] \ud30c\ub77c\ubbf8\ud130\ub97c \uc0ac\uc6a9\ud558\uba74 \uc774 \uc791\uc5c5\uc744 \uc218\ud589\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n\uc708\ub3c4\uc6b0\uc758 \uae30\ubcf8 \uc124\uce58 \ub2e8\uacc4\uc5d0\ub294 \ubd80\ud2b8\uc2a4\ud2b8\ub7a9 \ud504\ub85c\uc138\uc2a4 \uc911\uc5d0 \ud544\ud130\ub9c1\ud558\ub824\ub294 \uceec\ub809\ud130\uc640 \uac19\uc740 \uc778\uc218\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc775\uc2a4\ud3ec\ud130\ub97c \uc11c\ube44\uc2a4\ub85c \ub2e4\uc6b4\ub85c\ub4dc\ud558\uace0 \uc2dc\uc791\ud558\ub294 \uc791\uc5c5\uc774 \ud3ec\ud568\ub429\ub2c8\ub2e4.\n\n```powershell \n> Powershell Invoke-WebRequest https:\/\/github.com\/prometheus-community\/windows_exporter\/releases\/download\/v0.13.0\/windows_exporter-0.13.0-amd64.msi -OutFile <DOWNLOADPATH> \n\n> msiexec \/i <DOWNLOADPATH> 
ENABLED_COLLECTORS="cpu,cs,logical_disk,net,os,system,container,memory"
```

By default, the metrics can be scraped from the /metrics endpoint on port 9182.
Prometheus can now consume the metrics once you add the following scrape_config to the Prometheus configuration.

```yaml 
scrape_configs:
    - job_name: "prometheus"
      static_configs: 
        - targets: ['localhost:9090']
    ...
    - job_name: "wmi_exporter"
      scrape_interval: 10s
      static_configs: 
        - targets: ['<windows-node1-ip>:9182', '<windows-node2-ip>:9182', ...]
```

Reload the Prometheus configuration as follows:

```bash 
> ps aux | grep prometheus
> kill -HUP <PID> 
```

A better, recommended way to add targets is to use a Custom Resource Definition called ServiceMonitor. It ships as part of the [Prometheus operator](https://github.com/prometheus-operator/kube-prometheus/releases), which provides the definition of the ServiceMonitor object and a controller that acts on the ServiceMonitors we define and automatically generates the required Prometheus configuration.
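As an aside, the collect[] parameter mentioned earlier can be supplied directly from Prometheus through the params block of a scrape_config, which is forwarded to the exporter as URL query parameters. A hypothetical sketch (the collector names are illustrative, not a recommendation):

```yaml 
scrape_configs:
    - job_name: "wmi_exporter"
      scrape_interval: 10s
      params:
        # collect[] limits the scrape response to the listed collectors
        collect[]:
          - cpu
          - memory
          - logical_disk
      static_configs: 
        - targets: ['<windows-node1-ip>:9182']
```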

A ServiceMonitor, which declaratively specifies how groups of Kubernetes services should be monitored, is used to define the applications you wish to scrape metrics from within Kubernetes. Within a ServiceMonitor we specify the Kubernetes labels that the operator can use to identify the Kubernetes service, and the service in turn identifies the pods that we wish to monitor.

To leverage a ServiceMonitor, you need to create an Endpoints object pointing to the specific Windows targets, a headless service for the Windows nodes, and the ServiceMonitor itself.

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    k8s-app: wmiexporter
  name: wmiexporter
  namespace: kube-system
subsets:
- addresses:
  - ip: NODE-ONE-IP
    targetRef:
      kind: Node
      name: NODE-ONE-NAME
  - ip: NODE-TWO-IP
    targetRef:
      kind: Node
      name: NODE-TWO-NAME
  - ip: NODE-THREE-IP
    targetRef:
      kind: Node
      name: NODE-THREE-NAME
  ports:
  - name: http-metrics
    port: 9182
    protocol: TCP

---
apiVersion: v1
kind: Service ## Headless Service
metadata:
  labels:
    k8s-app: wmiexporter
  name: wmiexporter
  namespace: kube-system
spec:
  clusterIP: None
  ports:
  - name: http-metrics
    port: 9182
    protocol: TCP
    targetPort: 9182
  sessionAffinity: None
  type: ClusterIP

---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor ## Custom ServiceMonitor Object
metadata:
  labels:
    k8s-app: wmiexporter
  name: wmiexporter
  namespace: monitoring
spec:
  endpoints:
  - interval: 30s
    port: http-metrics
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      k8s-app: wmiexporter
```

For detailed information on using the operator and ServiceMonitors, please refer to the official [operator](https://github.com/prometheus-operator/kube-prometheus) documentation.
Note that Prometheus also supports dynamic target discovery through various [service discovery](https://prometheus.io/blog/2015/06/01/advanced-service-discovery/) options.
","site":"eks"}
{"questions":"eks Definition Performance Efficiency Pillar The performance efficiency pillar focuses on the efficient use of computing resources to meet requirements and how to maintain that efficiency as demand changes and technologies evolve This section provides in depth best practices guidance for architecting for performance efficiency on AWS Performance efficiency for EKS containers is composed of three areas To ensure the efficient use of EKS container services you should gather data on all aspects of the architecture from the high level design to the selection of EKS resource types By reviewing your choices on a regular basis you ensure that you are taking advantage of the continually evolving Amazon EKS and Container services Monitoring will ensure that you are aware of any deviance from expected performance so you can take action on it","answers":"# Performance Efficiency Pillar\n\nThe performance efficiency pillar focuses on the efficient use of computing resources to meet requirements and how to maintain that efficiency as demand changes and technologies evolve. This section provides in-depth, best practices guidance for architecting for performance efficiency on AWS.\n\n## Definition\n\nTo ensure the efficient use of EKS container services, you should gather data on all aspects of the architecture, from the high-level design to the selection of EKS resource types. By reviewing your choices on a regular basis, you ensure that you are taking advantage of the continually evolving Amazon EKS and Container services. Monitoring will ensure that you are aware of any deviance from expected performance so you can take action on it.\n\nPerformance efficiency for EKS containers is composed of three areas:\n\n- Optimize your container\n\n- Resource Management\n\n- Scalability Management\n\n## Best Practices\n\n### Optimize your container\n\nYou can run most applications in a Docker container without too much hassle. 
There are a number of things that you need to do to ensure it's running effectively in a production environment, including streamlining the build process. The following best practices will help you to achieve that.

#### Recommendations

- **Make your container images stateless:** A container created with a Docker image should be ephemeral and immutable. In other words, the container should be disposable and independent, i.e. a new one can be built and put in place with absolutely no configuration changes. Design your containers to be stateless. If you would like to use persistent data, use [volumes](https://docs.docker.com/engine/admin/volumes/volumes/) instead. If you would like to store secrets or sensitive application data used by services, you can use solutions like AWS [Systems Manager](https://aws.amazon.com/systems-manager/) [Parameter Store](https://aws.amazon.com/ec2/systems-manager/parameter-store/) or third-party offerings or open source solutions, such as [HashiCorp Vault](https://www.vaultproject.io/) and [Consul](https://www.consul.io/), for runtime configurations.
- [**Minimal base image**](https://docs.docker.com/develop/develop-images/baseimages/)**:** Start with a small base image. Every other instruction in the Dockerfile builds on top of this image. The smaller the base image, the smaller the resulting image, and the more quickly it can be downloaded. For example, the [alpine:3.7](https://hub.docker.com/r/library/alpine/tags/) image is 71 MB smaller than the [centos:7](https://hub.docker.com/r/library/centos/tags/) image. You can even use the [scratch](https://hub.docker.com/r/library/scratch/) base image, which is an empty image on which you can build your own runtime environment.
- **Avoid unnecessary packages:** When building a container image, include only the dependencies that your application needs and avoid installing unnecessary packages.
For example, if your application does not need an SSH server, don't include one. This will reduce complexity, dependencies, file sizes, and build times. To exclude files not relevant to the build, use a .dockerignore file.
- [**Use multi-stage builds**](https://docs.docker.com/v17.09/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds)**:** Multi-stage builds allow you to build your application in a first "build" container and use the result in another container, while using the same Dockerfile. To expand a bit on that, in multi-stage builds you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image. This method drastically reduces the size of your final image, without struggling to reduce the number of intermediate layers and files.
- **Minimize the number of layers:** Each instruction in the Dockerfile adds an extra layer to the Docker image. The number of instructions and layers should be kept to a minimum, as this affects build performance and time. For example, the first two instructions below create two separate layers, whereas the chained form that follows, using &&, creates only a single layer. Chaining is the best way to reduce the number of layers created by your Dockerfile.

    ```
    # These two instructions create two layers:
    RUN apt-get -y update
    RUN apt-get install -y python
    # Chained with &&, they create a single layer:
    RUN apt-get -y update && apt-get install -y python
    ```

- **Properly tag your images:** When building images, always tag them with useful and meaningful tags. This is a good way to organize and document metadata describing an image, for example, by including a unique counter like the build id from a CI server (e.g.
CodeBuild or Jenkins) to help with identifying the correct image. The tag latest is used by default if you do not provide one in your Docker commands. We recommend not using the automatically assigned latest tag, because with this tag you'll automatically be running future major releases, which could include breaking changes for your application. The best practice is to avoid the latest tag and instead use the unique digest created by your CI server.
- **Use the** [**build cache**](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) **to improve build speed:** The cache allows you to take advantage of existing cached images, rather than building each image from scratch. For example, you should add the source code of your application as late as possible in your Dockerfile so that the base image and your application's dependencies get cached and aren't rebuilt on every build. To reuse already cached images, note that by default in Amazon EKS the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
- **Image security:** Using public images may be a great way to start working with containers and deploying them to Kubernetes. However, using them in production can come with a set of challenges, especially when it comes to security. Ensure that you follow the best practices for packaging and distributing containers/applications. For example, don't build your containers with passwords baked in; you might also need to control what's inside them. We recommend using a private repository such as [Amazon ECR](https://aws.amazon.com/ecr/) and leveraging the built-in [image scanning](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html) feature to identify software vulnerabilities in your container images.
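Tying back to the build-cache note above, the imagePullPolicy values can be sketched in a minimal Pod manifest; the image reference and names here are illustrative, not from the original text:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-demo            # illustrative name
spec:
  containers:
  - name: app
    image: myrepo/app:1.0.0   # illustrative image reference
    # IfNotPresent: pull only when the image is absent on the node,
    # so an image already cached locally is reused
    imagePullPolicy: IfNotPresent
```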

- **Right size your containers:** As you develop and run applications in containers, there are a few key areas to consider. How you size containers and manage your application deployments can negatively impact the end-user experience of the services that you provide. To help you succeed, the following best practices will help you right size your containers. After you determine the resources required for your application, you should set requests and limits in Kubernetes to ensure that your applications are running correctly.

    *(a) Perform testing of the application:* Gather vital statistics and other performance data. Based upon this data you can work out the optimal configuration, in terms of memory and CPU, for your container. Vital statistics such as: __*CPU, Latency, I/O, Memory usage, Network*__. Determine expected, mean, and peak container memory and CPU usage by doing a separate load test if necessary. Also consider all the processes that might potentially run in parallel in the container.

    We recommend using [CloudWatch Container Insights](https://aws.amazon.com/blogs/mt/introducing-container-insights-for-amazon-ecs/) or partner products, which will give you the right information to size containers and the worker nodes.

    *(b) Test services independently:* As many applications depend on each other in a true microservice architecture, you need to test them with a high degree of independence, meaning that the services are both able to properly function by themselves, as well as function as part of a cohesive system.

### Resource Management

One of the most common questions asked during the adoption of Kubernetes is "*What should I put in a Pod?*". For example, consider a three-tier LAMP application container. Should we keep this application in the same pod? Well, this works effectively as a single pod, but it is an example of an anti-pattern for Pod creation. There are two reasons for that:

***(a)*** If you have both containers in the same Pod, you are forced to use the same scaling strategy, which is not ideal for a production environment; you also can't effectively manage or constrain resources based on usage.
*E.g.:* you might need to scale just the frontend, not the frontend and backend (MySQL) as a unit; and if you would like to increase the resources dedicated just to the backend, you can't do that either.

***(b)*** If you have two separate pods, one for the frontend and the other for the backend, scaling is very easy and you get better reliability.

The above might not work in all use-cases. In the above example, the frontend and backend may land on different machines and will communicate with each other over the network, so you need to ask the question "***Will my application work correctly if they are placed and run on different machines?***" If the answer is "***no***", maybe because of the application design or for some other technical reason, then grouping the containers in a single pod makes sense. If the answer is "***yes***", then multiple Pods is the correct approach.

#### Recommendations

+ **Package a single application per container:**
A container works best when a single application runs inside it. This application should have a single parent process. For example, do not run PHP and MySQL in the same container: it's harder to debug, and you can't horizontally scale the PHP container alone. This separation allows you to better tie the lifecycle of the application to that of the container. Your containers should be both stateless and immutable. Stateless means that any state (persistent data of any kind) is stored outside of the container; for example, you can use different kinds of external storage like Persistent disk, Amazon EBS, and Amazon EFS if needed, or a managed database like Amazon RDS. Immutable means that a container will not be modified during its life: no updates, no patches, and no configuration changes.
To update the application code or apply a patch, you build a new image and deploy it.

+ **Use Labels on Kubernetes Objects:**
[Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#labels) allow Kubernetes objects to be queried and operated upon in bulk. They can also be used to identify and organize Kubernetes objects into groups. As such, defining labels should figure right at the top of any Kubernetes best practices list.

+ **Set resource requests and limits:**
Setting requests and limits is the mechanism used to control the amount of system resources that a container can consume, such as CPU and memory. Requests are what the container is guaranteed to get when it initially starts. If a container requests a resource, container orchestrators such as Kubernetes will only schedule it on a node that can provide that resource. Limits, on the other hand, make sure that a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.

    In the example Pod manifest below, we add a limit of 1.0 CPU and 256 MB of memory:

    ```
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-pod-webserver
      labels:
        name: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        resources:
          limits:
            memory: "256Mi"
            cpu: "1000m"
          requests:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
    ```

    It's a best practice to define these requests and limits in your pod definitions.
If you don't include these values, the scheduler doesn't understand what resources are needed. Without this information, the scheduler might schedule the pod on a node without sufficient resources to provide acceptable application performance.

+ **Limit the number of concurrent disruptions:**
Use a _PodDisruptionBudget_. This setting allows you to set a policy on the minimum available and maximum unavailable pods during voluntary eviction events. An example of an eviction would be performing maintenance on a node or draining a node.

    _Example:_ A web frontend might want to ensure that 8 Pods are available at any given time. In this scenario, an eviction can evict as many pods as it wants, as long as eight remain available.

    ```
    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: frontend-demo
    spec:
      minAvailable: 8
      selector:
        matchLabels:
          app: frontend
    ```

    **N.B:** You can also specify a pod disruption budget as a percentage by using the minAvailable or maxUnavailable parameter.

+ **Use Namespaces:**
Namespaces allow a physical cluster to be shared by multiple teams. A namespace allows you to partition created resources into a logically named group.
This allows you to set resource quotas per namespace, Role-Based Access Control (RBAC) per namespace, and also network policies per namespace. It gives you soft multi-tenancy features.

    For example, if you have three applications running on a single Amazon EKS cluster, accessed by three different teams that require different resource constraints and different levels of QoS, you could create a namespace per team and give each team a quota on the resources that it can utilize, such as CPU and memory.

    You can also specify default limits at the Kubernetes namespace level by enabling the [LimitRange](https://kubernetes.io/docs/concepts/policy/limit-range/) admission controller. These default limits will constrain the amount of CPU or memory a given Pod can use unless the defaults are explicitly overridden by the Pod's configuration.

+ **Manage Resource Quota:** 
Each namespace can be assigned a resource quota. Specifying a quota allows you to restrict how much of the cluster's resources can be consumed across all resources in a namespace.
Resource quota can be defined by a [ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) object. The presence of a ResourceQuota object in a namespace ensures that resource quotas are enforced.

+ **Configure Health Checks for Pods:**
Health checks are a simple way to let the system know whether an instance of your app is working or not. If an instance of your app is not working, then other services should not access it or send requests to it. Instead, requests should be sent to another instance of the app that is working. The system should also bring your app back to a healthy state. By default, all running pods have the restart policy set to Always, which means the kubelet running within a node will automatically restart a pod when the container encounters an error. Health checks extend this capability of the kubelet through the concept of [container probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes).

  Kubernetes provides two types of [health checks](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/): readiness and liveness probes. For example, consider one of your applications, which typically runs for long periods of time, transitioning to a non-running state from which it can only recover by being restarted. You can use liveness probes to detect and remedy such situations. Using health checks gives your applications better reliability and higher uptime.

+ **Advanced Scheduling Techniques:**
Generally, schedulers ensure that pods are placed only on nodes that have sufficient free resources, and across nodes they try to balance out the resource utilization across nodes, deployments, replicas, and so on. But sometimes you want to control how your pods are scheduled. For example, perhaps you want to ensure that certain pods are only scheduled on nodes with specialized hardware, such as requiring a GPU machine for an ML workload.
Or you want to colocate services that communicate frequently.

  Kubernetes offers many [advanced scheduling features](https://kubernetes.io/blog/2017/03/advanced-scheduling-in-kubernetes/) and multiple filters/constraints to schedule pods on the right node. For example, when using Amazon EKS, you can use [taints and tolerations](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-toleations-beta-feature) to restrict what workloads can run on specific nodes. You can also control pod scheduling using [node selectors](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) and [affinity and anti-affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) constructs, and you can even have your own custom scheduler built for this purpose.

### Scalability Management
  Containers are stateless. They are born, and when they die they are not resurrected. There are many techniques that you can leverage on Amazon EKS, not only to scale out your containerized applications but also to scale the Kubernetes worker nodes.
  
#### Recommendations

  + On Amazon EKS, you can configure the [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/), which automatically scales the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization (or on [custom metrics](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md) based on application-provided metrics).

  + You can use the [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler), which automatically adjusts the CPU and memory reservations for your pods to help "right size" your applications. This adjustment can improve cluster resource utilization and free up CPU and memory for other pods.
This is useful in scenarios where a workload, such as a production MongoDB database, does not scale the same way as a stateless application frontend; in that case you could use VPA to scale up the MongoDB pod.\n\n  + To enable VPA you need the Kubernetes Metrics Server, an aggregator of resource usage data in your cluster. It is not deployed by default in Amazon EKS clusters, so you need to install it before you [configure VPA](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/vertical-pod-autoscaler.html). Alternatively, you can use Prometheus to provide metrics for the Vertical Pod Autoscaler.\n\n  + While HPA and VPA scale deployments and pods, [Cluster Autoscaler](https:\/\/github.com\/kubernetes\/autoscaler) scales the pool of worker nodes out and in. It adjusts the size of a Kubernetes cluster based on current utilization: Cluster Autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current nodes due to insufficient resources, or when adding a new node would increase the overall availability of cluster resources. Please follow this [step-by-step](https:\/\/eksworkshop.com\/scaling\/deploy_ca\/) guide to set up Cluster Autoscaler. If you are using Amazon EKS on AWS Fargate, AWS manages the control plane for you. 
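\n\n  As a minimal sketch of the HPA recommendation above (the `frontend` Deployment name and the scaling thresholds here are illustrative assumptions, not values from this guide), a manifest could look like:\n\n```yaml\napiVersion: autoscaling\/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n  name: frontend-hpa\nspec:\n  scaleTargetRef:\n    apiVersion: apps\/v1\n    kind: Deployment\n    name: frontend            # hypothetical Deployment to scale\n  minReplicas: 2\n  maxReplicas: 10\n  metrics:\n    - type: Resource\n      resource:\n        name: cpu\n        target:\n          type: Utilization\n          averageUtilization: 50   # add pods when average CPU exceeds 50%\n```\n\n  Applying it with `kubectl apply -f hpa.yaml` lets the HPA controller adjust the replica count between the configured bounds. 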
\n\n     Please have a look at the reliability pillar for detailed information.\n     \n#### Monitoring \n#### Deployment Best Practices \n#### Trade-Offs","site":"eks"}
{"questions":"external secrets This is basically a zero configuration authentication method that inherits the credentials from the runtime environment using the Controller s Pod Identity Note If you are using Parameter Store replace with in all examples below AWS Authentication","answers":"## AWS Authentication\n\n### Controller's Pod Identity\n\n![Pod Identity Authentication](..\/pictures\/diagrams-provider-aws-auth-pod-identity.png)\n\nNote: If you are using Parameter Store, replace `service: SecretsManager` with `service: ParameterStore` in all examples below.\n\nThis is basically a zero-configuration authentication method that inherits the credentials from the runtime environment using the [aws sdk default credential chain](https:\/\/docs.aws.amazon.com\/sdk-for-java\/v1\/developer-guide\/credentials.html#credentials-default).\n\nYou can attach a role to the pod using [IRSA](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts.html), [kiam](https:\/\/github.com\/uswitch\/kiam) or [kube2iam](https:\/\/github.com\/jtblin\/kube2iam). When no other authentication method is configured in the `Kind=SecretStore`, this role is used to make all API calls against AWS Secrets Manager or SSM Parameter Store.\n\nBased on the Pod's identity you can do an `sts:assumeRole` before fetching the secrets to limit access to certain keys in your provider. 
This is optional.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: team-b-store\nspec:\n  provider:\n    aws:\n      service: SecretsManager\n      region: eu-central-1\n      # optional: do a sts:assumeRole before fetching secrets\n      role: team-b\n```\n\n### Access Key ID & Secret Access Key\n\n![SecretRef](..\/pictures\/diagrams-provider-aws-auth-secret-ref.png)\n\nYou can store an Access Key ID & Secret Access Key in a `Kind=Secret` and reference it from a SecretStore.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: team-b-store\nspec:\n  provider:\n    aws:\n      service: SecretsManager\n      region: eu-central-1\n      # optional: assume role before fetching secrets\n      role: team-b\n      auth:\n        secretRef:\n          accessKeyIDSecretRef:\n            name: awssm-secret\n            key: access-key\n          secretAccessKeySecretRef:\n            name: awssm-secret\n            key: secret-access-key\n```\n\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` in `accessKeyIDSecretRef` and `secretAccessKeySecretRef` with the namespaces where the secrets reside.\n\n### EKS Service Account credentials\n\n![Service Account](..\/pictures\/diagrams-provider-aws-auth-service-account.png)\n\nThis feature lets you use short-lived service account tokens to authenticate with AWS.\nYou must have [Service Account Volume Projection](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/#service-account-token-volume-projection) enabled; it is enabled by default on EKS. 
See the [EKS guide](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts-technical-overview.html) for how to set up IAM roles for service accounts.\n\nThe big advantage of this approach is that ESO runs without any credentials.\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  annotations:\n    eks.amazonaws.com\/role-arn: arn:aws:iam::123456789012:role\/team-a\n  name: my-serviceaccount\n  namespace: default\n```\n\nReference the service account from above in the Secret Store:\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: secretstore-sample\nspec:\n  provider:\n    aws:\n      service: SecretsManager\n      region: eu-central-1\n      auth:\n        jwt:\n          serviceAccountRef:\n            name: my-serviceaccount\n```\n\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` for `serviceAccountRef` with the namespace where the service account resides.\n\n## Custom Endpoints\n\nYou can define custom AWS endpoints if you want to use regional, VPC, or custom endpoints. See the list of endpoints for [Secrets Manager](https:\/\/docs.aws.amazon.com\/general\/latest\/gr\/asm.html), [Secure Systems Manager](https:\/\/docs.aws.amazon.com\/general\/latest\/gr\/ssm.html) and [Security Token Service](https:\/\/docs.aws.amazon.com\/general\/latest\/gr\/sts.html).\n\nUse the following environment variables to point the controller to your custom endpoints. 
Note: All resources managed by this controller are affected.\n\n| ENV VAR                     | DESCRIPTION                                                                                                                                                          |\n| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| AWS_SECRETSMANAGER_ENDPOINT | Endpoint for the Secrets Manager Service. The controller uses this endpoint to fetch secrets from AWS Secrets Manager.                                               |\n| AWS_SSM_ENDPOINT            | Endpoint for the AWS Secure Systems Manager. The controller uses this endpoint to fetch secrets from SSM Parameter Store.                                            |\n| AWS_STS_ENDPOINT            | Endpoint for the Security Token Service. The controller uses this endpoint when creating a session and when doing `assumeRole` or `assumeRoleWithWebIdentity` calls. 
|","site":"external secrets"}
html    Use the following environment variables to point the controller to your custom endpoints  Note  All resources managed by this controller are affected     ENV VAR                       DESCRIPTION                                                                                                                                                                                                                                                                                                                                                                     AWS SECRETSMANAGER ENDPOINT   Endpoint for the Secrets Manager Service  The controller uses this endpoint to fetch secrets from AWS Secrets Manager                                                    AWS SSM ENDPOINT              Endpoint for the AWS Secure Systems Manager  The controller uses this endpoint to fetch secrets from SSM Parameter Store                                                 AWS STS ENDPOINT              Endpoint for the Security Token Service  The controller uses this endpoint when creating a session and when doing  assumeRole  or  assumeRoleWithWebIdentity  calls   "}
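The custom-endpoint environment variables described in the record above can be sketched as a shell fragment. This is a minimal sketch: the endpoint URLs below are illustrative placeholders I am assuming, not values taken from the docs; substitute your own regional or VPC endpoints.

```shell
# Point the controller at custom endpoints instead of the AWS defaults.
# All three URLs are hypothetical placeholders -- replace them with your own.
export AWS_SECRETSMANAGER_ENDPOINT="https://secretsmanager.eu-central-1.amazonaws.com"
export AWS_SSM_ENDPOINT="https://ssm.eu-central-1.amazonaws.com"
export AWS_STS_ENDPOINT="https://sts.eu-central-1.amazonaws.com"
```

Note that, per the record above, these variables affect all resources managed by the controller, not a single SecretStore.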
{"questions":"external secrets We have a to track progress for our road towards GA should be opened in that repository Project Management The Code our TODOs and Documentation is maintained on Features bugs and any issues regarding the documentation should be filed as in Issues All Issues","answers":"## Project Management\nThe Code, our TODOs and Documentation is maintained on\n[GitHub](https:\/\/github.com\/external-secrets\/external-secrets). All Issues\nshould be opened in that repository.\nWe have a [Roadmap](roadmap.md) to track progress for our road towards GA.\n\n## Issues\n\nFeatures, bugs and any issues regarding the documentation should be filed as\n[GitHub Issue](https:\/\/github.com\/external-secrets\/external-secrets\/issues) in\nour repository. We use labels like `kind\/feature`, `kind\/bug`, `area\/aws` to\norganize the issues. Issues labeled `good first issue` and `help wanted` are\nespecially good for a first contribution. If you want to pick up an issue just\nleave a comment.\n\n## Submitting a Pull Request\n\nThis project uses the well-known pull request process from GitHub. To submit a\npull request, fork the repository and push any changes to a branch on the copy,\nfrom there a pull request can be made in the main repo. Merging a pull request\nrequires the following steps to be completed before the pull request will\nbe merged:\n\n* ideally, there is an issue that documents the problem or feature in depth.\n* code must have a reasonable amount of test coverage\n* tests must pass\n* PR needs be reviewed and approved\n\nOnce these steps are completed the PR will be merged by a code owner.\nWe're using the pull request `assignee` feature to track who is responsible\nfor the lifecycle of the PR: review, merging, ping on inactivity, close.\nWe close pull requests or issues if there is no response from the author for\na period of time. 
Feel free to reopen if you want to get back on it.\n\n### Triggering e2e tests\n\nWe have an extensive set of e2e tests that test the integration with *real* cloud provider APIs.\nMaintainers must trigger these kinds of tests manually for PRs that come from forked repositories. These tests run inside a `kind` cluster in the GitHub Actions runner:\n\n```\n\/ok-to-test sha=<full_commit_hash>\n```\nExample:\n```\n\/ok-to-test sha=b8ca0040200a7a05d57048d86a972fdf833b8c9b\n```\n\n#### Executing e2e tests locally\n\nYou have to prepare your shell environment with the necessary variables so the e2e test\nrunner knows what credentials to use. See `e2e\/run.sh` for the variables that are passed in.\nIf you want to test the AWS integration, for example, make sure to set all `AWS_*` variables mentioned\nin that file.\n\nUse [ginkgo labels](https:\/\/onsi.github.io\/ginkgo\/#spec-labels) to select the tests\nyou want to execute. You have to specify `!managed` to ensure that you do not\nrun managed tests.\n\n```\nmake test.e2e GINKGO_LABELS='gcp&&!managed'\n```\n\n#### Managed Kubernetes e2e tests\n\nThere's another suite of e2e tests that integrates with managed Kubernetes offerings.\nThey create real infrastructure at a cloud provider and deploy the controller\ninto that environment.\nThis is necessary to test the authentication integration\n([GCP Workload Identity](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/workload-identity),\n[EKS IRSA](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts.html)...).\n\nThese tests are time-intensive (~20-45 min) and must be triggered manually by\na maintainer when a particular provider or authentication mechanism was changed:\n\n```\n\/ok-to-test-managed sha=xxxxxx provider=aws\n# or\n\/ok-to-test-managed sha=xxxxxx provider=gcp\n# or\n\/ok-to-test-managed sha=xxxxxx provider=azure\n```\n\nThese tests can run in parallel. 
Once started they add a dynamic GitHub check `integration-managed-(gcp|aws|azure)` to the PR that triggered the test.\n\n\n### Executing Managed Kubernetes e2e tests locally\n\nYou have to prepare your shell environment with the necessary variables so the e2e\ntest runner knows what credentials to use. See `.github\/workflows\/e2e-managed.yml`\nfor the variables that are passed in. If you want to test the AWS integration, for example, make\nsure to set all variables containing `AWS_*` and `TF_VAR_AWS_*` mentioned in that file.\n\nThen execute `tf.apply.aws` or `tf.apply.gcp` to create the infrastructure.\n\n```\nmake tf.apply.aws\n```\n\nThen run the `managed` testsuite. You will need push permissions to the external-secrets ghcr repository. You can set `IMAGE_NAME` to control which image registry is used to store the controller and e2e test images in.\n\nYou also have to set up a proper kubeconfig so the e2e test pod gets deployed into the managed cluster.\n\n```\naws eks update-kubeconfig --name ${AWS_CLUSTER_NAME}\n# or\ngcloud container clusters get-credentials ${GCP_GKE_CLUSTER} --region europe-west1-b\n```\n\nUse [ginkgo labels](https:\/\/onsi.github.io\/ginkgo\/#spec-labels) to select the tests\nyou want to execute.\n\n```\n# you may have to set IMAGE_NAME=docker.io\/your-user\/external-secrets\nmake test.e2e.managed GINKGO_LABELS='gcp'\n```\n\n## Proposal Process\nBefore we introduce significant changes to the project we want to gather feedback\nfrom the community to ensure that we progress in the right direction before we\ndevelop and release big changes. Significant changes include, for example:\n\n* creating new custom resources\n* proposing breaking changes\n* changing the behavior of the controller significantly\n\nPlease create a document in the `design\/` directory based on the template `000-template.md`\nand fill in your proposal. Open a pull request in draft mode and request feedback. 
Once the proposal is accepted and the pull request is merged we can create work packages and proceed with the implementation.\n\n## Release Planning\n\nWe have a [GitHub Project Board](https:\/\/github.com\/orgs\/external-secrets\/projects\/2\/views\/1) where we organize issues on a high level. We group issues by milestone. Once all issues of a given milestone are closed we should prepare a new feature release. Issues of the next milestone have priority over other issues - but that does not mean that no one is allowed to start working on them.\n\nIssues must be _manually_ added to that board (at least for now, see [GH Roadmap](https:\/\/github.com\/github\/roadmap\/issues\/286)). Milestones must be assigned manually as well. If no milestone is assigned it is basically a backlog item. It is the responsibility of the maintainers to:\n\n1. assign new issues to the GH Project\n2. add a milestone if needed\n3. add appropriate labels\n\nIf you would like to raise the priority of an issue for whatever reason feel free to comment on the issue or ping a maintainer.\n\n## Support & Questions\n\nProviding support to end users is an important and difficult task.\nWe have three different channels through which support questions arise:\n\n1. Kubernetes Slack [#external-secrets](https:\/\/kubernetes.slack.com\/archives\/C017BF84G2Y)\n2. [GitHub Discussions](https:\/\/github.com\/external-secrets\/external-secrets\/discussions)\n3. GitHub Issues\n\nWe use labels to identify GitHub Issues. Specifically for managing support cases we use the following labels to identify the state a support case is in:\n\n* `triage\/needs-information`: Indicates an issue needs more information in order to work on it.\n* `triage\/not-reproducible`: Indicates an issue cannot be reproduced as described.\n* `triage\/support`: Indicates an issue that is a support question.\n\n\n## Cutting Releases\n\nThe external-secrets project is released on an as-needed basis. 
Feel free to open an issue to request a release. Details on how to cut a release can be found on the [release](release.md) page.","site":"external secrets"}
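The `/ok-to-test` trigger in the record above requires the full commit hash. As a minimal sketch in plain POSIX shell (reusing the example hash from the record; in a real PR you would take the hash from `git rev-parse HEAD`), the comment body can be assembled and length-checked like this:

```shell
# Compose the /ok-to-test comment from a full 40-character commit hash.
# The hash below is the example one from the docs above.
sha="b8ca0040200a7a05d57048d86a972fdf833b8c9b"
if [ "${#sha}" -eq 40 ]; then
  comment="/ok-to-test sha=$sha"
else
  comment="error: sha must be the full 40-character commit hash"
fi
echo "$comment"
```

The length check matters because the CI trigger rejects short hashes; abbreviated `git log --oneline` hashes will not work.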
{"questions":"external secrets Getting Started cd external secrets You must have a working and shell git clone https github com external secrets external secrets git then clone the repo","answers":"## Getting Started\n\nYou must have a working [Go environment](https:\/\/golang.org\/doc\/install) and\nthen clone the repo:\n\n```shell\ngit clone https:\/\/github.com\/external-secrets\/external-secrets.git\ncd external-secrets\n```\n\n_Note: many of the `make` commands use [yq](https:\/\/github.com\/mikefarah\/yq), version 4.2X.X or higher._\n\nOur helm chart is tested using `helm-unittest`. You will need it to run tests locally if you modify the helm chart. Install it with the following command:\n\n```\n$ helm plugin install https:\/\/github.com\/helm-unittest\/helm-unittest\n```\n\n## Building & Testing\n\nThe project uses the `make` build system. It'll run code generators, tests and\nstatic code analysis.\n\nBuilding the operator binary and docker image:\n\n```shell\nmake build\nmake docker.build IMAGE_NAME=external-secrets IMAGE_TAG=latest\n```\n\nRun tests and lint the code:\n```shell\nmake test\nmake lint # OR\ndocker run --rm -v $(pwd):\/app -w \/app golangci\/golangci-lint:v1.49.0 golangci-lint run\n```\n\nBuild the documentation:\n```shell\nmake docs\n```\n\n## Using Tilt\n\n[Tilt](https:\/\/tilt.dev) can be used to develop external-secrets. Tilt will hot-reload changes to the code and replace\nthe running binary in the container using a process manager of its own.\n\nTo run tilt, download the utility for your operating system and run `make tilt-up`. 
This will do three things:\n- download tilt for the current OS and ARCH under `bin\/tilt`\n- make manifest files of your current changes and place them under `.\/bin\/deploy\/manifests\/external-secrets.yaml`\n- run tilt with `tilt run`\n\nHit `space` and you can observe all the pods starting up and track their output in the tilt UI.\n\n## Installing\n\nTo install the External Secret Operator into a Kubernetes cluster run:\n\n```shell\nhelm repo add external-secrets https:\/\/charts.external-secrets.io\nhelm repo update\nhelm install external-secrets external-secrets\/external-secrets\n```\n\nYou can alternatively run the controller on your host system for development purposes:\n\n```shell\nmake crds.install\nmake run\n```\n\nTo remove the CRDs run:\n\n```shell\nmake crds.uninstall\n```\n\nIf you need to test some other k8s integrations and need the operator to be deployed to the actual cluster while developing, you can use the following workflow:\n\n```shell\n# Start a local K8S cluster with KinD\nkind create cluster --name external-secrets\n\nexport TAG=$(make docker.tag)\nexport IMAGE=$(make docker.imagename)\n\n# Build docker image\nmake docker.build\n\n# Load docker image into local kind cluster\nkind load docker-image $IMAGE:$TAG --name external-secrets\n\n# (Optional) Pull the image from GitHub Repo to copy into kind\n# docker pull ghcr.io\/external-secrets\/external-secrets:v0.8.2\n# kind load docker-image ghcr.io\/external-secrets\/external-secrets:v0.8.2 -n external-secrets\n# export TAG=v0.8.2\n\n# Update helm charts and install to KinD cluster\nmake helm.generate\nhelm upgrade --install external-secrets .\/deploy\/charts\/external-secrets\/ \\\n--set image.repository=$IMAGE --set image.tag=$TAG \\\n--set webhook.image.repository=$IMAGE --set webhook.image.tag=$TAG \\\n--set certController.image.repository=$IMAGE --set certController.image.tag=$TAG\n\n\n# Command to delete the cluster when done\n# kind delete cluster -n external-secrets\n```\n\n!!! 
note \"Contributing Flow\"\n    The HOW TO guide for contributing is at the [Contributing Process](process.md) page.\n\n\n## Documentation\n\nWe use [mkdocs material](https:\/\/squidfunk.github.io\/mkdocs-material\/) and [mike](https:\/\/github.com\/jimporter\/mike) to generate this\ndocumentation. See `\/docs` for the source code and `\/hack\/api-docs` for the build process.\n\nWhen writing documentation it is advised to run the mkdocs server with livereload:\n\n```shell\nmake docs.serve\n```\n\nRun the following command to run a complete build. The rendered assets are available under `\/site`.\n\n```shell\nmake docs\nmake docs.serve\n```\n\nOpen `http:\/\/localhost:8000` in your browser.\n\nSince mike uses a branch to create\/update documentation, any docs operation will create a diff on your local `gh-pages` branch.\n\nWhen finished writing\/reviewing the docs, clean up your local docs branch changes with `git branch -D gh-pages`","site":"external secrets","answers_cleaned":"   Getting Started  You must have a working  Go environment  https   golang org doc install  and then clone the repo      shell git clone https   github com external secrets external secrets git cd external secrets       Note  many of the  make  commands use  yq  https   github com mikefarah yq   version 4 2X X or higher    Our helm chart is tested using  helm unittest   You will need it to run tests locally if you modify the helm chart  Install it with the following command         helm plugin install https   github com helm unittest helm unittest         Building   Testing  The project uses the  make  build system  It ll run code generators  tests and static code analysis   Building the operator binary and docker image      shell make build make docker build IMAGE NAME external secrets IMAGE TAG latest      Run tests and lint the code     shell make test make lint   OR docker run   rm  v   pwd   app  w  app golangci golangci lint v1 49 0 golangci lint run      Build the documentation     
shell make docs         Using Tilt   Tilt  https   tilt dev  can be used to develop external secrets  Tilt will hot reload changes to the code and replace the running binary in the container using a process manager of its own   To run tilt  download the utility for your operating system and run  make tilt up   This will do two things    downloads tilt for the current OS and ARCH under  bin tilt    make manifest files of your current changes and place them under    bin deploy manifests external secrets yaml    run tilt with  tilt run   Hit  space  and you can observe all the pods starting up and track their output in the tilt UI      Installing  To install the External Secret Operator into a Kubernetes Cluster run      shell helm repo add external secrets https   charts external secrets io helm repo update helm install external secrets external secrets external secrets      You can alternatively run the controller on your host system for development purposes       shell make crds install make run      To remove the CRDs run      shell make crds uninstall      If you need to test some other k8s integrations and need the operator to be deployed to the actual cluster while developing  you can use the following workflow      shell   Start a local K8S cluster with KinD kind create cluster   name external secrets  export TAG   make docker tag  export IMAGE   make docker imagename     Build docker image make docker build    Load docker image into local kind cluster kind load docker image  IMAGE  TAG   name external secrets     Optional  Pull the image from GitHub Repo to copy into kind   docker pull ghcr io external secrets external secrets v0 8 2   kind load docker image ghcr io external secrets external secrets v0 8 2  n external secrets   export TAG v0 8 2    Update helm charts and install to KinD cluster make helm generate helm upgrade   install external secrets   deploy charts external secrets      set image repository  IMAGE   set image tag  TAG     set webhook image 
repository  IMAGE   set webhook image tag  TAG     set certController image repository  IMAGE   set certController image tag  TAG     Command to delete the cluster when done   kind delete cluster  n external secrets          note  Contributing Flow      The HOW TO guide for contributing is at the  Contributing Process  process md  page       Documentation  We use  mkdocs material  https   squidfunk github io mkdocs material   and  mike  https   github com jimporter mike  to generate this documentation  See   docs  for the source code and   hack api docs  for the build process   When writing documentation it is advised to run the mkdocs server with livereload      shell make docs serve      Run the following command to run a complete build  The rendered assets are available under   site       shell make docs make docs serve      Open  http   localhost 8000  in your browser   Since mike uses a branch to create update documentation  any docs operation will create a diff on your local  gh pages  branch   When finished writing reviewing the docs  clean up your local docs branch changes with  git branch  D gh pages "}
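The KinD workflow in the record above stitches `IMAGE` and `TAG` into a single image reference for `kind load docker-image` and the helm `--set` flags. A tiny sketch of that naming, with stand-in values (the tag and image name here are assumptions standing in for the output of `make docker.tag` and `make docker.imagename`, which are not shown in the docs):

```shell
# Assemble the image reference used by `kind load docker-image` above.
# IMAGE/TAG are hypothetical stand-ins for the Makefile outputs.
IMAGE="ghcr.io/external-secrets/external-secrets"
TAG="v0.8.2"
REF="$IMAGE:$TAG"
echo "$REF"
```

Keeping `IMAGE` and `TAG` in environment variables, as the workflow does, means the same values flow unchanged into the `kind load` call and all three helm `--set image...` flags.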
{"questions":"external secrets identity and expression level of experience education socio economic status size visible or invisible disability ethnicity sex characteristics gender Code of Conduct We as members contributors and leaders pledge to make participation in our nationality personal appearance race caste color religion or sexual identity Our Pledge community a harassment free experience for everyone regardless of age body","answers":"\n# Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participation in our\ncommunity a harassment-free experience for everyone, regardless of age, body\nsize, visible or invisible disability, ethnicity, sex characteristics, gender\nidentity and expression, level of experience, education, socio-economic status,\nnationality, personal appearance, race, caste, color, religion, or sexual identity\nand orientation.\n\nWe pledge to act and interact in ways that contribute to an open, welcoming,\ndiverse, inclusive, and healthy community.\n\n## Our Standards\n\nExamples of behavior that contributes to a positive environment for our\ncommunity include:\n\n* Demonstrating empathy and kindness toward other people\n* Being respectful of differing opinions, viewpoints, and experiences\n* Giving and gracefully accepting constructive feedback\n* Accepting responsibility and apologizing to those affected by our mistakes,\n  and learning from the experience\n* Focusing on what is best not just for us as individuals, but for the\n  overall community\n\nExamples of unacceptable behavior include:\n\n* The use of sexualized language or imagery, and sexual attention or\n  advances of any kind\n* Trolling, insulting or derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or email\n  address, without their explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\n  professional 
setting\n\n## Enforcement Responsibilities\n\nCommunity leaders are responsible for clarifying and enforcing our standards of\nacceptable behavior and will take appropriate and fair corrective action in\nresponse to any behavior that they deem inappropriate, threatening, offensive,\nor harmful.\n\nCommunity leaders have the right and responsibility to remove, edit, or reject\ncomments, commits, code, wiki edits, issues, and other contributions that are\nnot aligned to this Code of Conduct, and will communicate reasons for moderation\ndecisions when appropriate.\n\n## Scope\n\nThis Code of Conduct applies within all community spaces, and also applies when\nan individual is officially representing the community in public spaces.\nExamples of representing our community include using an official e-mail address,\nposting via an official social media account, or acting as an appointed\nrepresentative at an online or offline event.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported to the community leaders responsible for enforcement at cncf-ExternalSecretsOp-maintainers@lists.cncf.io.\nAll complaints will be reviewed and investigated promptly and fairly.\n\nAll community leaders are obligated to respect the privacy and security of the\nreporter of any incident.\n\n## Enforcement Guidelines\n\nCommunity leaders will follow these Community Impact Guidelines in determining\nthe consequences for any action they deem in violation of this Code of Conduct:\n\n### 1. Correction\n\n**Community Impact**: Use of inappropriate language or other behavior deemed\nunprofessional or unwelcome in the community.\n\n**Consequence**: A private, written warning from community leaders, providing\nclarity around the nature of the violation and an explanation of why the\nbehavior was inappropriate. A public apology may be requested.\n\n### 2. 
Warning\n\n**Community Impact**: A violation through a single incident or series\nof actions.\n\n**Consequence**: A warning with consequences for continued behavior. No\ninteraction with the people involved, including unsolicited interaction with\nthose enforcing the Code of Conduct, for a specified period of time. This\nincludes avoiding interactions in community spaces as well as external channels\nlike social media. Violating these terms may lead to a temporary or\npermanent ban.\n\n### 3. Temporary Ban\n\n**Community Impact**: A serious violation of community standards, including\nsustained inappropriate behavior.\n\n**Consequence**: A temporary ban from any sort of interaction or public\ncommunication with the community for a specified period of time. No public or\nprivate interaction with the people involved, including unsolicited interaction\nwith those enforcing the Code of Conduct, is allowed during this period.\nViolating these terms may lead to a permanent ban.\n\n### 4. Permanent Ban\n\n**Community Impact**: Demonstrating a pattern of violation of community\nstandards, including sustained inappropriate behavior,  harassment of an\nindividual, or aggression toward or disparagement of classes of individuals.\n\n**Consequence**: A permanent ban from any sort of public interaction within\nthe community.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage],\nversion 2.0, available at\n[https:\/\/www.contributor-covenant.org\/version\/2\/0\/code_of_conduct.html][v2.0].\n\nCommunity Impact Guidelines were inspired by \n[Mozilla's code of conduct enforcement ladder][Mozilla CoC].\n\nFor answers to common questions about this code of conduct, see the FAQ at\n[https:\/\/www.contributor-covenant.org\/faq][FAQ]. 
Translations are available \nat [https:\/\/www.contributor-covenant.org\/translations][translations].\n\n[homepage]: https:\/\/www.contributor-covenant.org\n[v2.0]: https:\/\/www.contributor-covenant.org\/version\/2\/0\/code_of_conduct.html\n[Mozilla CoC]: https:\/\/github.com\/mozilla\/diversity\n[FAQ]: https:\/\/www.contributor-covenant.org\/faq\n[translations]: https:\/\/www.contributor-covenant.org\/translations","site":"external secrets","answers_cleaned":""}
{"questions":"external secrets If the which define where secrets live and how to synchronize them The controller Architecture The External Secrets Operator extends Kubernetes with Custom fetches secrets from an external API and creates Kubernetes API Overview Resources https kubernetes io docs concepts extend kubernetes api extension custom resources","answers":"# API Overview\n\n## Architecture\n![high-level](..\/pictures\/diagrams-high-level-simple.png)\n\nThe External Secrets Operator extends Kubernetes with [Custom\nResources](https:\/\/kubernetes.io\/docs\/concepts\/extend-kubernetes\/api-extension\/custom-resources\/),\nwhich define where secrets live and how to synchronize them. The controller\nfetches secrets from an external API and creates Kubernetes\n[secrets](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/). If the\nsecret from the external API changes, the controller will reconcile the state in\nthe cluster and update the secrets accordingly.\n\n\n## Resource model\n\nTo understand the mechanics of the operator let's start with the data model. The\nSecretStore references a bucket of key\/value pairs. But because every external\nAPI is slightly different this bucket may be e.g. an instance of an Azure\nKeyVault or a AWS Secrets Manager in a certain AWS Account and region. Please\ntake a look at the provider documentation to see what the Bucket actually maps\nto.\n\n![Resource Mapping](..\/pictures\/diagrams-resource-mapping.png)\n\n### SecretStore\n\nThe idea behind the [SecretStore](..\/api\/secretstore.md) resource is to separate concerns of\nauthentication\/access and the actual Secret and configuration needed for\nworkloads. The ExternalSecret specifies what to fetch, the SecretStore specifies\nhow to access. 
This resource is namespaced.\n\n``` yaml\n{% include 'basic-secret-store.yaml' %}\n```\nThe `SecretStore` contains references to secrets which hold credentials to\naccess the external API.\n\n### ExternalSecret\nAn [ExternalSecret](..\/api\/externalsecret.md) declares what data to fetch. It has a reference to a\n`SecretStore` which knows how to access that data. The controller uses that\n`ExternalSecret` as a blueprint to create secrets.\n\n``` yaml\n{% include 'basic-external-secret.yaml' %}\n```\n\n### ClusterSecretStore\n\nThe [ClusterSecretStore](..\/api\/clustersecretstore.md) is a global, cluster-wide SecretStore that can be\nreferenced from all namespaces. You can use it to provide a central gateway to your secret provider.\n\n## Behavior\n\nThe External Secret Operator (ESO for brevity) reconciles `ExternalSecrets` in\nthe following manner:\n\n1. ESO uses `spec.secretStoreRef` to find an appropriate `SecretStore`. If it\n   doesn't exist or the `spec.controller` field doesn't match, it won't further\n   process this ExternalSecret.\n2. ESO instantiates an external API client using the specified credentials from\n   the `SecretStore` spec.\n3. ESO fetches the secrets as requested by the `ExternalSecret`; it will decode\n   the secrets if required.\n4. ESO creates a `Kind=Secret` based on the template provided by\n   `ExternalSecret.target.template`. The `Secret.data` can be templated using\n   the secret values from the external API.\n5. 
ESO ensures that the secret values stay in sync with the external API.\n\n## Roles and responsibilities\n\nThe External Secret Operator is designed to target the following personas:\n\n* **Cluster Operator**: The cluster operator is responsible for setting up the\n  External Secret Operator, managing access policies and creating\n  ClusterSecretStores.\n* **Application developer**: The application developer is responsible for\n  defining ExternalSecrets and the application configuration.\n\nEach persona will roughly map to a Kubernetes RBAC role. Depending on your\nenvironment, these roles can map to a single user. **Note:** There is no Secret\nOperator that handles the lifecycle of the secret; this is out of the scope of\nESO.\n\n## Access Control\n\nThe External Secrets Operator runs as a deployment in your cluster with elevated\nprivileges. It will create\/read\/update secrets in all namespaces and has access\nto secrets stored in some external API. Ensure that the credentials you provide\ngive ESO the least privilege necessary.\n\nDesign your `SecretStore`\/`ClusterSecretStore` carefully! Be sure to restrict\napplication developers' access so that they can read only certain\nkeys in a shared environment.\n\nYou should also consider using Kubernetes' admission control system (e.g.\n[OPA](https:\/\/www.openpolicyagent.org\/) or [Kyverno](https:\/\/kyverno.io\/)) for\nfine-grained access control.\n\n## Running Multiple Controllers\nYou can run multiple controllers within the cluster. One controller can be\nlimited to processing only `SecretStores` with a predefined `spec.controller`\nfield.\n\n!!! note \"Testers welcome\"\n    This is not widely tested. 
Please help us test the setup and\/or document use-cases.","site":"external secrets","answers_cleaned":""}
{"questions":"external secrets hide toc We want to provide security patches and critical bug fixes in a timely manner to our users Supported Versions This page lists the status timeline and policy for currently supported ESO releases and its providers Please also see our that describes API versioning deprecation and API surface","answers":"---\nhide:\n  - toc\n---\n\nThis page lists the status, timeline and policy for currently supported ESO releases and its providers. Please also see our [deprecation policy](deprecation-policy.md) that describes API versioning, deprecation and API surface.\n\n## Supported Versions\n\nWe want to provide security patches and critical bug fixes in a timely manner to our users.\nTo do so, we offer long-term support for our latest two (N, N-1) software releases.\nWe aim for a 2-3 month minor release cycle, i.e. a given release is supported for about 4-6 months.\n\nWe want to cover the following cases:\n\n- regular image rebuilds to update OS dependencies\n- regular go dependency updates\n- backport bug fixes on demand\n\n| ESO Version | Kubernetes Version | Release Date | End of Life     |\n| ----------- | ------------------ | ------------ | --------------- |\n| 0.10.x      | 1.19 \u2192 1.31        | Aug 3, 2024  | Release of 0.12 |\n| 0.9.x       | 1.19 \u2192 1.30        | Jun 22, 2023 | Release of 0.11 |\n| 0.8.x       | 1.19 \u2192 1.28        | Mar 16, 2023 | Aug 3, 2024     |\n| 0.7.x       | 1.19 \u2192 1.26        | Dec 11, 2022 | Jun 22, 2023    |\n| 0.6.x       | 1.19 \u2192 1.24        | Oct 9, 2022  | Mar 16, 2023    |\n| 0.5.x       | 1.19 \u2192 1.24        | Apr 6, 2022  | Dec 11, 2022    |\n| 0.4.x       | 1.16 \u2192 1.24        | Feb 2, 2022  | Oct 9, 2022     |\n| 0.3.x       | 1.16 \u2192 1.24        | Jul 25, 2021 | Apr 6, 2022     |\n\n## Provider Stability and Support Level\n\nThe following table describes the stability level of each provider and who's responsible.\n\n| Provider                                   
                                                                | Stability |                                                                                                                                                                              Maintainer |\n|------------------------------------------------------------------------------------------------------------|:---------:|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|\n| [AWS Secrets Manager](https:\/\/external-secrets.io\/latest\/provider\/aws-secrets-manager\/)                    |  stable   |                                                                                                                                 [external-secrets](https:\/\/github.com\/external-secrets) |\n| [AWS Parameter Store](https:\/\/external-secrets.io\/latest\/provider\/aws-parameter-store\/)                    |  stable   |                                                                                                                                 [external-secrets](https:\/\/github.com\/external-secrets) |\n| [Hashicorp Vault](https:\/\/external-secrets.io\/latest\/provider\/hashicorp-vault\/)                            |  stable   |                                                                                                                                 [external-secrets](https:\/\/github.com\/external-secrets) |\n| [GCP Secret Manager](https:\/\/external-secrets.io\/latest\/provider\/google-secrets-manager\/)                  |  stable   |                                                                                                                                 [external-secrets](https:\/\/github.com\/external-secrets) |\n| [Azure Keyvault](https:\/\/external-secrets.io\/latest\/provider\/azure-key-vault\/)                             |  stable   |                     
                                                                                                            [external-secrets](https:\/\/github.com\/external-secrets) |\n| [IBM Cloud Secrets Manager](https:\/\/external-secrets.io\/latest\/provider\/ibm-secrets-manager\/)              |  stable   | [@knelasevero](https:\/\/github.com\/knelasevero) [@sebagomez](https:\/\/github.com\/sebagomez) [@ricardoptcosta](https:\/\/github.com\/ricardoptcosta) [@IdanAdar](https:\/\/github.com\/IdanAdar) |\n| [Kubernetes](https:\/\/external-secrets.io\/latest\/provider\/kubernetes)                                       |   beta    |                                                                                                                                 [external-secrets](https:\/\/github.com\/external-secrets) |\n| [Yandex Lockbox](https:\/\/external-secrets.io\/latest\/provider\/yandex-lockbox\/)                              |   alpha   |                                                                                     [@AndreyZamyslov](https:\/\/github.com\/AndreyZamyslov) [@knelasevero](https:\/\/github.com\/knelasevero) |\n| [GitLab Variables](https:\/\/external-secrets.io\/latest\/provider\/gitlab-variables\/)                          |   alpha   |                                                                                                                                                  [@Jabray5](https:\/\/github.com\/Jabray5) |\n| Alibaba Cloud KMS                                                                                          |   alpha   |                                                                                                                                          [@ElsaChelala](https:\/\/github.com\/ElsaChelala) |\n| [Oracle Vault](https:\/\/external-secrets.io\/latest\/provider\/oracle-vault)                                   |   alpha   |                                                                                                 
[@KianTigger](https:\/\/github.com\/KianTigger) [@EladGabay](https:\/\/github.com\/EladGabay) |\n| [Akeyless](https:\/\/external-secrets.io\/latest\/provider\/akeyless)                                           |  stable   |                                                                                                                                 [external-secrets](https:\/\/github.com\/external-secrets) |\n| [1Password](https:\/\/external-secrets.io\/latest\/provider\/1password-automation)                              |   alpha   |                                                                                       [@SimSpaceCorp](https:\/\/github.com\/Simspace) [@snarlysodboxer](https:\/\/github.com\/snarlysodboxer) |\n| [Generic Webhook](https:\/\/external-secrets.io\/latest\/provider\/webhook)                                     |   alpha   |                                                                                                                                                  [@willemm](https:\/\/github.com\/willemm) |\n| [senhasegura DevOps Secrets Management (DSM)](https:\/\/external-secrets.io\/latest\/provider\/senhasegura-dsm) |   alpha   |                                                                                                                                                    [@lfraga](https:\/\/github.com\/lfraga) |\n| [Doppler SecretOps Platform](https:\/\/external-secrets.io\/latest\/provider\/doppler)                          |   alpha   |                                                                                         [@ryan-blunden](https:\/\/github.com\/ryan-blunden\/) [@nmanoogian](https:\/\/github.com\/nmanoogian\/) |\n| [Keeper Security](https:\/\/www.keepersecurity.com\/)                                                         |   alpha   |                                                                                                                                              
[@ppodevlab](https:\/\/github.com\/ppodevlab) |\n| [Scaleway](https:\/\/external-secrets.io\/latest\/provider\/scaleway)                                           |   alpha   |                                                                                                                                                   [@azert9](https:\/\/github.com\/azert9\/) |\n| [Conjur](https:\/\/external-secrets.io\/latest\/provider\/conjur)                                               |  stable   |                                                                                                  [@davidh-cyberark](https:\/\/github.com\/davidh-cyberark\/) [@szh](https:\/\/github.com\/szh) |\n| [Delinea](https:\/\/external-secrets.io\/latest\/provider\/delinea)                                             |   alpha   |                                                                                                                                     [@michaelsauter](https:\/\/github.com\/michaelsauter\/) |\n| [Beyondtrust](https:\/\/external-secrets.io\/latest\/provider\/beyondtrust)                                     |   alpha   |                                                                                                                                       [@btfhernandez](https:\/\/github.com\/btfhernandez\/) |\n| [SecretServer](https:\/\/external-secrets.io\/latest\/provider\/secretserver)                                   |   alpha   |                                                                                                                                        [@billhamilton](https:\/\/github.com\/pacificcode\/) |\n| [Pulumi ESC](https:\/\/external-secrets.io\/latest\/provider\/pulumi)                                           |   alpha   |                                                                                                                                                    [@dirien](https:\/\/github.com\/dirien) |\n| 
[Passbolt](https:\/\/external-secrets.io\/latest\/provider\/passbolt)                                           |   alpha   |  |\n| [Infisical](https:\/\/external-secrets.io\/latest\/provider\/infisical)                                         |   alpha   | [@akhilmhdh](https:\/\/github.com\/akhilmhdh) |\n| [Device42](https:\/\/external-secrets.io\/latest\/provider\/device42)                                           |   alpha   |  |\n| [Bitwarden Secrets Manager](https:\/\/external-secrets.io\/latest\/provider\/bitwarden-secrets-manager)         |   alpha   | [@skarlso](https:\/\/github.com\/Skarlso) |\n| [Previder](https:\/\/external-secrets.io\/latest\/provider\/previder)                                           |  stable   | [@previder](https:\/\/github.com\/previder) |\n\n## Provider Feature Support\n\nThe following table shows the support for features across different providers.\n\n| Provider                  | find by name | find by tags | metadataPolicy Fetch | referent authentication | store validation | push secret | DeletionPolicy Merge\/Delete |\n|---------------------------| :----------: | :----------: | :------------------: | :---------------------: | :--------------: 
|:-----------:|:---------------------------:|\n| AWS Secrets Manager       |      x       |      x       |          x           |            x            |        x         |      x      |              x              |\n| AWS Parameter Store       |      x       |      x       |          x           |            x            |        x         |      x      |              x              |\n| Hashicorp Vault           |      x       |      x       |          x           |            x            |        x         |      x      |              x              |\n| GCP Secret Manager        |      x       |      x       |          x           |            x            |        x         |      x      |              x              |\n| Azure Keyvault            |      x       |      x       |          x           |            x            |        x         |      x      |              x              |\n| Kubernetes                |      x       |      x       |          x           |            x            |        x         |      x      |              x              |\n| IBM Cloud Secrets Manager |      x       |              |          x           |                         |        x         |             |                             |\n| Yandex Lockbox            |              |              |                      |                         |        x         |             |                             |\n| GitLab Variables          |      x       |      x       |                      |                         |        x         |             |                             |\n| Alibaba Cloud KMS         |              |              |                      |                         |        x         |             |                             |\n| Oracle Vault              |              |              |                      |                         |        x         |             |                             |\n| Akeyless                  |      x       |      
x       |                      |            x            |        x         |      x      |              x              |\n| 1Password                 |      x       |              |                      |                         |        x         |      x      |              x              |\n| Generic Webhook           |              |              |                      |                         |                  |             |              x              |\n| senhasegura DSM           |              |              |                      |                         |        x         |             |                             |\n| Doppler                   |      x       |              |                      |                         |        x         |             |                             |\n| Keeper Security           |      x       |              |                      |                         |        x         |      x      |                             |\n| Scaleway                  |      x       |      x       |                      |                         |        x         |      x      |              x              |\n| Conjur                    |      x       |      x       |                      |                         |        x         |             |                             |\n| Delinea                   |      x       |              |                      |                         |        x         |             |                             |\n| Beyondtrust               |      x       |              |                      |                         |        x         |             |                             |\n| SecretServer              |      x       |              |                      |                         |        x         |             |                             |\n| Pulumi ESC                |      x       |              |                      |                         |        x         |             |      
                       |\n| Passbolt                  |      x       |              |                      |                         |        x         |             |                             |\n| Infisical                 |      x       |              |                      |            x            |        x         |             |                             |\n| Device42                  |              |              |                      |                         |        x         |             |                             |\n| Bitwarden Secrets Manager |      x       |              |                      |                         |        x         |      x      |              x              |\n| Previder                  |      x       |              |                      |                         |        x         |             |                             |\n\n## Support Policy\n\nWe provide technical support and security \/ bug fixes for the above listed versions.\n\n### Technical support\n\nWe provide assistance for deploying\/upgrading etc. on a best-effort basis. You can request support through the following channels:\n\n- [Kubernetes Slack\n  #external-secrets](https:\/\/kubernetes.slack.com\/messages\/external-secrets)\n- GitHub [Issues](https:\/\/github.com\/external-secrets\/external-secrets\/issues)\n- GitHub [Discussions](https:\/\/github.com\/external-secrets\/external-secrets\/discussions)\n\nEven though we have active maintainers and people assigned to this project, we kindly ask for patience when asking for support. 
We will try to get to priority issues as fast as possible, but there may be some delays.","site":"external secrets"}
{"questions":"external secrets to a supported version before installing external secrets Installing with Helm Getting started External secrets runs within your Kubernetes cluster as a deployment resource and manages Kubernetes secret resources with ExternalSecret resources It utilizes CustomResourceDefinitions to configure access to secret providers through SecretStore resources Note The minimum supported version of Kubernetes is Users still running Kubernetes v1 15 or below should upgrade","answers":"# Getting started\n\nExternal-secrets runs within your Kubernetes cluster as a deployment resource.\nIt utilizes CustomResourceDefinitions to configure access to secret providers through SecretStore resources\nand manages Kubernetes secret resources with ExternalSecret resources.\n\n> Note: The minimum supported version of Kubernetes is `1.16.0`. Users still running Kubernetes v1.15 or below should upgrade\n> to a supported version before installing external-secrets.\n\n## Installing with Helm\n\nThe default install options will automatically install and manage the CRDs as part of your helm release. If you do not want the CRDs to be automatically upgraded and managed, you must set the `installCRDs` option to `false`. (e.g. 
`--set installCRDs=false`)\n\nYou can install those CRDs outside of `helm` using:\n```bash\nkubectl apply -k \"https:\/\/raw.githubusercontent.com\/external-secrets\/external-secrets\/<replace_with_your_version>\/deploy\/crds\/bundle.yaml\"\n```\n\nUncomment the relevant line in the next steps to disable the automatic install of CRDs.\n\n### Option 1: Install from chart repository\n\n```bash\nhelm repo add external-secrets https:\/\/charts.external-secrets.io\n\nhelm install external-secrets \\\n   external-secrets\/external-secrets \\\n    -n external-secrets \\\n    --create-namespace \\\n  # --set installCRDs=false\n```\n\n### Option 2: Install chart from local build\n\nBuild and install the Helm chart locally after cloning the repository.\n\n```bash\nmake helm.build\n\nhelm install external-secrets \\\n    .\/bin\/chart\/external-secrets.tgz \\\n    -n external-secrets \\\n    --create-namespace \\\n  # --set installCRDs=false\n```\n\n### Create a secret containing your AWS credentials\n\n```shell\necho -n 'KEYID' > .\/access-key\necho -n 'SECRETKEY' > .\/secret-access-key\nkubectl create secret generic awssm-secret --from-file=.\/access-key --from-file=.\/secret-access-key\n```\n\n### Create your first SecretStore\n\nCreate a file 'basic-secret-store.yaml' with the following content.\n\n```yaml\n{% include 'basic-secret-store.yaml' %}\n```\n\nApply it to create a SecretStore resource.\n\n```\nkubectl apply -f \"basic-secret-store.yaml\"\n```\n\n### Create your first ExternalSecret\n\nCreate a file 'basic-external-secret.yaml' with the following content.\n\n```yaml\n{% include 'basic-external-secret.yaml' %}\n```\n\nApply it to create an External Secret resource.\n\n```\nkubectl apply -f \"basic-external-secret.yaml\"\n```\n\n```bash\nkubectl describe externalsecret example\n# [...]\nName:  example\nStatus:\n  Binding:\n    Name:                  secret-to-be-created\n  Conditions:\n    Last Transition Time:  2021-02-24T16:45:23Z\n    Message:               
Secret was synced\n    Reason:                SecretSynced\n    Status:                True\n    Type:                  Ready\n  Refresh Time:            2021-02-24T16:45:24Z\nEvents:                    <none>\n```\n\nFor more advanced examples, please read the other\n[guides](..\/guides\/introduction.md).\n\n## Installing with OLM\n\nExternal-secrets can be managed by [Operator Lifecycle Manager](https:\/\/olm.operatorframework.io\/) (OLM) via an installer operator. It is made available through [OperatorHub.io](https:\/\/operatorhub.io\/); this installation method is best suited for OpenShift. See installation instructions on the [external-secrets-operator](https:\/\/operatorhub.io\/operator\/external-secrets-operator) package.\n\n## Uninstalling\n\nBefore continuing, ensure that all external-secret resources that have been created by users have been deleted.\nYou can check for any existing resources with the following command:\n\n```bash\nkubectl get SecretStores,ClusterSecretStores,ExternalSecrets --all-namespaces\n```\n\nOnce all these resources have been deleted you are ready to uninstall external-secrets.\n\n### Uninstalling with Helm\n\nUninstall the helm release using the delete command.\n\n```bash\nhelm delete external-secrets --namespace external-secrets\n```","site":"external secrets"}
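The getting-started record above pulls `basic-secret-store.yaml` and `basic-external-secret.yaml` in via template includes that are not resolved here. A minimal sketch of what such manifests could contain, assuming the AWS Secrets Manager provider and the `awssm-secret` credentials secret created earlier (resource names, region, and the remote secret key are illustrative, not the actual included files):

```yaml
# Hypothetical SecretStore pointing at AWS Secrets Manager,
# authenticating with the awssm-secret created earlier.
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: secretstore-sample
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1            # illustrative region
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: awssm-secret
            key: access-key
          secretAccessKeySecretRef:
            name: awssm-secret
            key: secret-access-key
---
# Hypothetical ExternalSecret syncing one provider key into a
# Kubernetes Secret named secret-to-be-created.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: example
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: secretstore-sample
    kind: SecretStore
  target:
    name: secret-to-be-created
  data:
    - secretKey: dev-secret-test   # key in the resulting Secret
      remoteRef:
        key: my-aws-secret-name    # illustrative provider secret name
```

Applying both manifests with `kubectl apply -f` should produce the `SecretSynced` status shown in the `kubectl describe externalsecret example` output above.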
{"questions":"external secrets We support authentication with Microsoft Entra identities that can be used as Workload Identity or as well as with Service Principal credentials Authentication Azure Key vault External Secrets Operator integrates with for secrets certificates and Keys management","answers":"\n![aws sm](..\/pictures\/eso-az-kv-azure-kv.png)\n\n## Azure Key vault\n\nExternal Secrets Operator integrates with [Azure Key vault](https:\/\/azure.microsoft.com\/en-us\/services\/key-vault\/) for secrets, certificates and keys management.\n\n### Authentication\n\nWe support authentication with Microsoft Entra identities that can be used as Workload Identity or [AAD Pod Identity](https:\/\/azure.github.io\/aad-pod-identity\/docs\/) as well as with Service Principal credentials.\n\nSince [AAD Pod Identity](https:\/\/azure.github.io\/aad-pod-identity\/docs\/) is deprecated, it is recommended to use [Workload Identity](https:\/\/azure.github.io\/azure-workload-identity) authentication.\n\nWe support connecting to the different cloud flavours Azure supports: `PublicCloud`, `USGovernmentCloud`, `ChinaCloud` and `GermanCloud`. You have to specify the `environmentType` to point to the correct cloud flavour; this defaults to `PublicCloud`.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: azure-backend\nspec:\n  provider:\n    azurekv:\n      # PublicCloud, USGovernmentCloud, ChinaCloud, GermanCloud\n      environmentType: PublicCloud # default\n```\n\nThe minimum required permissions are `Get` on secrets and certificates. 
This can be done by adding a Key Vault access policy:\n\n```sh\nKUBELET_IDENTITY_OBJECT_ID=$(az aks show --resource-group <AKS_CLUSTER_RG_NAME> --name <AKS_CLUSTER_NAME> --query 'identityProfile.kubeletidentity.objectId' -o tsv)\naz keyvault set-policy --name kv-name-with-certs --object-id \"$KUBELET_IDENTITY_OBJECT_ID\" --certificate-permissions get --secret-permissions get\n```\n\n#### Service Principal key authentication\n\nA Service Principal client and secret are created, and the JSON keyfile is stored in a `Kind=Secret`. The `ClientID` and `ClientSecret` or `ClientCertificate` (in PEM format) should be configured for the secret. This Service Principal should have proper access rights to the keyvault to be managed by the operator.\n\n#### Managed Identity authentication\n\nA Managed Identity should be created in Azure, and that identity should have proper rights to the keyvault to be managed by the operator.\n\nUse [aad-pod-identity](https:\/\/azure.github.io\/aad-pod-identity\/docs\/) to assign the identity to the external-secrets operator. To add the selector to the external-secrets operator, use `podLabels` in your values.yaml in case of a Helm installation of external-secrets.\n\nIf there are multiple Managed Identities for different keyvaults, the operator should be assigned all identities via [aad-pod-identity](https:\/\/azure.github.io\/aad-pod-identity\/docs\/), and the SecretStore configuration should include the ID of the identity to be used via the `identityId` field.\n\n```yaml\n{% include 'azkv-secret-store-mi.yaml' %}\n```\n\n#### Workload Identity\n\nIn Microsoft Entra, a Workload Identity can be an Application, a user-assigned Managed Identity or a Service Principal.\n\nYou can use [Azure AD Workload Identity Federation](https:\/\/docs.microsoft.com\/en-us\/azure\/active-directory\/develop\/workload-identity-federation) to access Azure managed services like Key Vault **without needing to manage secrets**. 
You need to configure a trust relationship between your Kubernetes cluster and Azure AD. This can be done in various ways, for instance using `terraform`, the Azure Portal or the `az` cli. We found the [azwi](https:\/\/azure.github.io\/azure-workload-identity\/docs\/installation\/azwi.html) cli very helpful. The Azure [Workload Identity Quick Start Guide](https:\/\/azure.github.io\/azure-workload-identity\/docs\/quick-start.html) is also a good place to get started.\n\nThis is basically a two-step process:\n\n1. Create a Kubernetes Service Account ([guide](https:\/\/azure.github.io\/azure-workload-identity\/docs\/quick-start.html#5-create-a-kubernetes-service-account))\n\n```sh\nazwi serviceaccount create phase sa \\\n  --aad-application-name \"${APPLICATION_NAME}\" \\\n  --service-account-namespace \"${SERVICE_ACCOUNT_NAMESPACE}\" \\\n  --service-account-name \"${SERVICE_ACCOUNT_NAME}\"\n```\n2. Configure the trust relationship between Azure AD and Kubernetes ([guide](https:\/\/azure.github.io\/azure-workload-identity\/docs\/quick-start.html#6-establish-federated-identity-credential-between-the-aad-application-and-the-service-account-issuer--subject))\n\n```sh\nazwi serviceaccount create phase federated-identity \\\n  --aad-application-name \"${APPLICATION_NAME}\" \\\n  --service-account-namespace \"${SERVICE_ACCOUNT_NAMESPACE}\" \\\n  --service-account-name \"${SERVICE_ACCOUNT_NAME}\" \\\n  --service-account-issuer-url \"${SERVICE_ACCOUNT_ISSUER}\"\n```\n\nWith these prerequisites met you can configure `ESO` to use that Service Account. You have two options:\n\n##### Mounted Service Account\nYou run the controller and mount that particular service account into the pod by adding the label `azure.workload.identity\/use: \"true\"` to the pod. That grants _everyone_ who is able to create a secret store or reference a correctly configured one the ability to read secrets. **This approach is usually not recommended**. 
But it may make sense when you want to share an identity with multiple namespaces. Also see our [Multi-Tenancy Guide](..\/guides\/multi-tenancy.md) for design considerations.\n\n```yaml\n{% include 'azkv-workload-identity-mounted.yaml' %}\n```\n\n##### Referenced Service Account\nYou run the controller without a service account (effectively without Azure permissions). Now you have to configure the SecretStore and set the `serviceAccountRef` to point to the service account you have just created. **This is usually the recommended approach**. It makes sense for everyone who wants to run the controller without Azure permissions and delegate authentication via service accounts in particular namespaces. Also see our [Multi-Tenancy Guide](..\/guides\/multi-tenancy.md) for design considerations.\n\nIn case you don't have the clientId when deploying the SecretStore, such as when deploying a Helm chart that includes instructions for creating a [Managed Identity](https:\/\/github.com\/Azure\/azure-service-operator\/blob\/main\/v2\/samples\/managedidentity\/v1api20181130\/v1api20181130_userassignedidentity.yaml) using [Azure Service Operator](https:\/\/azure.github.io\/azure-service-operator\/) next to the SecretStore definition, you may encounter an interpolation problem: Helm lacks dependency management, so the clientId may only be known after everything is deployed. Although the Service Account can inject `clientId` and `tenantId` into a pod, it doesn't support secretKeyRef\/configMapKeyRef. 
Therefore, you can deliver the clientId and tenantId directly, bypassing the Service Account.\n\nThe following example demonstrates using the secretRef field to directly deliver the `clientId` and `tenantId` to the SecretStore while utilizing Workload Identity authentication.\n\n```yaml\n{% include 'azkv-workload-identity-secretref.yaml' %}\n```\n\n### Update secret store\nBe sure the `azurekv` provider is listed in the `Kind=SecretStore`.\n\n```yaml\n{% include 'azkv-secret-store.yaml' %}\n```\n**NOTE:** In case of a `ClusterSecretStore`, be sure to set `namespace` on the `clientId` and `clientSecret` references to the namespaces where those secrets reside.\n\nOr in case of Managed Identity authentication:\n\n```yaml\n{% include 'azkv-secret-store-mi.yaml' %}\n```\n\n### Object Types\n\nAzure Key Vault manages different [object types](https:\/\/docs.microsoft.com\/en-us\/azure\/key-vault\/general\/about-keys-secrets-certificates#object-types); we support `keys`, `secrets` and `certificates`. Simply prefix the key with `key`, `secret` or `cert` to retrieve the desired type (defaults to `secret`).\n\n| Object Type   | Return Value |\n| ------------- | ------------ |\n| `secret`      | The raw secret value. |\n| `key`         | A JWK which contains the public key. Azure Key Vault does **not** export the private key. 
You may want to use [template functions](..\/guides\/templating.md) to transform this JWK into PEM-encoded PKIX ASN.1 DER format. |\n| `certificate` | The raw CER contents of the x509 certificate. You may want to use [template functions](..\/guides\/templating.md) to transform this into your desired encoding. |\n\n### Creating external secret\n\nTo create a Kubernetes secret from the Azure Key vault secret, a `Kind=ExternalSecret` is needed.\n\nYou can manage keys\/secrets\/certificates saved inside the keyvault by setting a "\/"-prefixed type in the secret name; the default type is `secret`. Other supported values are `cert` and `key`.\n\n```yaml\n{% include 'azkv-external-secret.yaml' %}\n```\n\nThe operator will fetch the Azure Key vault secret and inject it as a `Kind=Secret`. Then the Kubernetes secret can be fetched by issuing:\n\n```sh\nkubectl get secret secret-to-be-created -n <namespace> -o jsonpath='{.data.dev-secret-test}' | base64 -d\n```\n\nTo select all secrets inside the key vault or all tags inside a secret, you can use the `dataFrom` directive:\n\n```yaml\n{% include 'azkv-datafrom-external-secret.yaml' %}\n```\n\nTo get a PKCS#12 certificate from Azure Key Vault and inject it as a `Kind=Secret` of type `kubernetes.io\/tls`:\n\n```yaml\n{% include 'azkv-pkcs12-cert-external-secret.yaml' %}\n```\n\n### Creating a PushSecret\nYou can push secrets to Azure Key Vault into the different `secret`, `key` and `certificate` APIs.\n\n#### Pushing to a Secret\nPushing to a Secret requires no previous setup. With the secret available in Kubernetes, you can simply reference it from a PushSecret object to have it created on Azure Key Vault:\n```yaml\n{% include 'azkv-pushsecret-secret.yaml' %}\n```\n!!! 
note\n      In order to create a PushSecret targeting secrets, `CreateSecret` and `DeleteSecret` actions must be granted to the Service Principal\/Identity configured on the SecretStore.\n\n#### Pushing to a Key\nThe first step is to generate a valid Private Key. Supported formats include `PRIVATE KEY`, `RSA PRIVATE KEY` and `EC PRIVATE KEY` (EC\/PKCS1\/PKCS8 types). After uploading your key to a Kubernetes Secret, the next step is to create a PushSecret manifest with the following configuration:\n\n```yaml\n{% include 'azkv-pushsecret-key.yaml' %}\n```\n\n!!! note\n      In order to create a PushSecret targeting keys, `ImportKey` and `DeleteKey` actions must be granted to the Service Principal\/Identity configured on the SecretStore.\n#### Pushing to a Certificate\nThe first step is to generate a valid P12 certificate. Currently, only PKCS1\/PKCS8 types and password-less P12 certificates are supported.\n\nAfter uploading your P12 certificate to a Kubernetes Secret, the next step is to create a PushSecret manifest with the following configuration:\n```yaml\n{% include 'azkv-pushsecret-certificate.yaml' %}\n```\n!!! 
note\n       In order to create a PushSecret targeting certificates, `ImportCertificate` and `DeleteCertificate` actions must be granted to the Service Principal\/Identity configured on the SecretStore.","site":"external secrets","answers_cleaned":""}
{"questions":"external secrets External Secret Spec A points to a specific namespace in the target Kubernetes Cluster You are able to retrieve all secrets from that particular namespace given you have the correct set of RBAC permissions This provider supports the use of the field With it you point to the key of the remote secret If you leave it empty it will json encode all key value pairs External Secrets Operator allows to retrieve secrets from a Kubernetes Cluster this can be either a remote cluster or the local one where the operator runs in The reconciler checks if you have read access for secrets in that namespace using See below on how to set that up properly","answers":"External Secrets Operator allows to retrieve secrets from a Kubernetes Cluster - this can be either a remote cluster or the local one where the operator runs in.\n\nA `SecretStore` points to a **specific namespace** in the target Kubernetes Cluster. You are able to retrieve all secrets from that particular namespace given you have the correct set of RBAC permissions.\n\nThe `SecretStore` reconciler checks if you have read access for secrets in that namespace using `SelfSubjectRulesReview`. See below on how to set that up properly.\n\n### External Secret Spec\n\nThis provider supports the use of the `Property` field. With it you point to the key of the remote secret. 
If you leave it empty it will JSON-encode all key\/value pairs.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: database-credentials\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    kind: SecretStore\n    name: k8s-store             # name of the SecretStore (or kind specified)\n  target:\n    name: database-credentials  # name of the k8s Secret to be created\n  data:\n  - secretKey: username\n    remoteRef:\n      key: database-credentials\n      property: username\n\n  - secretKey: password\n    remoteRef:\n      key: database-credentials\n      property: password\n\n  # metadataPolicy to fetch all the labels and annotations in JSON format\n  - secretKey: tags\n    remoteRef:\n      metadataPolicy: Fetch\n      key: database-credentials\n\n  # metadataPolicy to fetch all the labels in JSON format\n  - secretKey: labels\n    remoteRef:\n      metadataPolicy: Fetch\n      key: database-credentials\n      property: labels\n\n  # metadataPolicy to fetch a specific label (dev) from the source secret\n  - secretKey: developer\n    remoteRef:\n      metadataPolicy: Fetch\n      key: database-credentials\n      property: labels.dev\n\n```\n\n#### find by tag & name\n\nYou can fetch secrets based on labels or names matching a regexp:\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: fetch-tls-and-nginx\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    kind: SecretStore\n    name: k8s-store\n  target:\n    name: fetch-tls-and-nginx\n  dataFrom:\n  - find:\n      name:\n        # match secret name with regexp\n        regexp: "tls-.*"\n  - find:\n      tags:\n        # fetch secrets based on label combination\n        app: "nginx"\n```\n\n### Target API-Server Configuration\n\nThe server's `url` can be omitted and defaults to `kubernetes.default`. 
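\n\nFor illustration, a minimal sketch of a store that relies on this default (the store name `k8s-store-in-cluster` and the referenced `my-store` ServiceAccount are hypothetical; the CA and auth settings follow the examples below):\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: k8s-store-in-cluster\nspec:\n  provider:\n    kubernetes:\n      remoteNamespace: default\n      server:\n        # `url` is omitted on purpose: it defaults to `kubernetes.default`\n        caProvider:\n          type: ConfigMap\n          name: kube-root-ca.crt\n          key: ca.crt\n      auth:\n        serviceAccount:\n          name: my-store\n```\n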
You **have to** provide a CA certificate in order to connect to the API Server securely.\nFor your convenience, each namespace has a ConfigMap `kube-root-ca.crt` that contains the CA certificate of the internal API Server (see `RootCAConfigMap` [feature gate](https:\/\/kubernetes.io\/docs\/reference\/command-line-tools-reference\/feature-gates\/)).\nUse that if you want to connect to the same API server.\nIf you want to connect to a remote API Server you need to fetch its CA certificate and store it inside the cluster as a ConfigMap or Secret.\nYou may also define it inline as a base64-encoded value using the `caBundle` property.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: k8s-store-default-ns\nspec:\n  provider:\n    kubernetes:\n      # with this, the store is able to pull only from `default` namespace\n      remoteNamespace: default\n      server:\n        url: "https:\/\/myapiserver.tld"\n        caProvider:\n          type: ConfigMap\n          name: kube-root-ca.crt\n          key: ca.crt\n```\n\n### Authentication\n\nIt's possible to authenticate against the Kubernetes API using client certificates, a bearer token or a service account. The operator enforces that exactly one authentication method is used. You cannot use the service account that is mounted inside the operator; this is by design, to avoid reading secrets across namespaces.\n\n**NOTE:** The `SelfSubjectRulesReview` permission is required in order for validation to work properly. Please use the following role as reference:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: Role\nmetadata:\n  namespace: default\n  name: eso-store-role\nrules:\n- apiGroups: [""]\n  resources:\n  - secrets\n  verbs:\n  - get\n  - list\n  - watch\n- apiGroups:\n  - authorization.k8s.io\n  resources:\n  - selfsubjectrulesreviews\n  verbs:\n  - create\n```\n\n#### Authenticating with BearerToken\n\nCreate a Kubernetes secret with a client token. 
There are many ways to acquire such a token; please refer to the [Kubernetes Authentication docs](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/authentication\/#authentication-strategies).\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: my-token\ndata:\n  token: "...."\n```\n\nCreate a SecretStore: the `auth` section indicates that the type `token` will be used for authentication; it references the Secret key from which the token is fetched. Set `remoteNamespace` to the name of the namespace where your target secrets reside.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: k8s-store-token-auth\nspec:\n  provider:\n    kubernetes:\n      # with this, the store is able to pull only from `default` namespace\n      remoteNamespace: default\n      server:\n        # ...\n      auth:\n        token:\n          bearerToken:\n            name: my-token\n            key: token\n```\n\n#### Authenticating with ServiceAccount\n\nCreate a Kubernetes Service Account; please refer to the [Service Account Tokens Documentation](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/authentication\/#service-account-tokens) on how they work and how to create them.\n\n```\n$ kubectl create serviceaccount my-store\n```\n\nThis Service Account needs permissions to read `Secret` and create `SelfSubjectRulesReview` resources. 
Please see the above role.\n\n```\n$ kubectl create rolebinding my-store --role=eso-store-role --serviceaccount=default:my-store\n```\n\nCreate a SecretStore: the `auth` section indicates that the type `serviceAccount` will be used for authentication.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: k8s-store-sa-auth\nspec:\n  provider:\n    kubernetes:\n      # with this, the store is able to pull only from `default` namespace\n      remoteNamespace: default\n      server:\n        # ...\n      auth:\n        serviceAccount:\n          name: "my-store"\n```\n\n#### Authenticating with Client Certificates\n\nCreate a Kubernetes secret which contains the client key and certificate. See the [Generate Certificates Documentation](https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/certificates\/) on how to create them.\n\n```\n$ kubectl create secret tls tls-secret --cert=path\/to\/tls.cert --key=path\/to\/tls.key\n```\n\nReference the `tls-secret` in the SecretStore:\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: k8s-store-cert-auth\nspec:\n  provider:\n    kubernetes:\n      # with this, the store is able to pull only from `default` namespace\n      remoteNamespace: default\n      server:\n        # ...\n      auth:\n        cert:\n          clientCert:\n            name: "tls-secret"\n            key: "tls.crt"\n          clientKey:\n            name: "tls-secret"\n            key: "tls.key"\n```\n\n\n### PushSecret\n\nThe PushSecret functionality facilitates the replication of a Kubernetes Secret from one namespace or cluster to another. 
This feature proves useful in scenarios where you need to share sensitive information, such as credentials or configuration data, across different parts of your infrastructure.\n\nTo configure the PushSecret resource, you need to specify the following parameters:\n\n* **Selector**: Specify the selector that identifies the source Secret to be replicated. This selector allows you to target the specific Secret you want to share.\n\n* **SecretKey**: Set the SecretKey parameter to indicate the key within the source Secret that you want to replicate. This ensures that only the relevant information is shared.\n\n* **RemoteRef.Property**: In addition to the above parameters, the Kubernetes provider requires you to set the `remoteRef.property` field. This field specifies the key of the remote Secret resource where the replicated value should be stored.\n\n\nHere's an example:\n\n```yaml\napiVersion: external-secrets.io\/v1alpha1\nkind: PushSecret\nmetadata:\n  name: example\nspec:\n  refreshInterval: 1h\n  secretStoreRefs:\n    - name: k8s-store-remote-ns\n      kind: SecretStore\n  selector:\n    secret:\n      name: pokedex-credentials\n  data:\n    - match:\n        secretKey: best-pokemon\n        remoteRef:\n          remoteKey: remote-best-pokemon\n          property: best-pokemon\n```\n\nTo utilize the PushSecret feature effectively, the referenced `SecretStore` requires specific permissions on the target cluster. 
In particular, it requires `create`, `read`, `update` and `delete` permissions on the Secret resource:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: Role\nmetadata:\n  namespace: remote\n  name: eso-store-push-role\nrules:\n- apiGroups: [""]\n  resources:\n  - secrets\n  verbs:\n  - get\n  - list\n  - watch\n  - create\n  - update\n  - patch\n  - delete\n- apiGroups:\n  - authorization.k8s.io\n  resources:\n  - selfsubjectrulesreviews\n  verbs:\n  - create\n```\n\n#### PushSecret Metadata\n\nThe Kubernetes provider is able to manage both `metadata.labels` and `metadata.annotations` of the secret on the target cluster.\n\nUsers have different preferences on what metadata should be pushed. ESO by default pushes both labels and annotations to the target secret and merges them with the existing metadata.\n\nYou can specify the metadata in the `spec.template.metadata` section if you want to decouple it from the existing secret.\n\n```yaml\n{% raw %}\napiVersion: external-secrets.io\/v1alpha1\nkind: PushSecret\nmetadata:\n  name: example\nspec:\n  # ...\n  template:\n    metadata:\n      labels:\n        app.kubernetes.io\/part-of: argocd\n    data:\n      mysql_connection_string: "mysql:\/\/:3306\/"\n  data:\n  - match:\n      secretKey: mysql_connection_string\n      remoteRef:\n        remoteKey: backend_secrets\n        property: mysql_connection_string\n{% endraw %}\n```\n\nFurther, you can leverage the `.data[].metadata` section to fine-tune the behaviour of the metadata merge strategy. 
The metadata section is a versioned custom-resource-_like_ structure; the behaviour is detailed below.\n\n```yaml\napiVersion: external-secrets.io\/v1alpha1\nkind: PushSecret\nmetadata:\n  name: example\nspec:\n  # ...\n  data:\n  - match:\n      secretKey: example-1\n      remoteRef:\n        remoteKey: example-remote-secret\n        property: url\n\n    metadata:\n      apiVersion: kubernetes.external-secrets.io\/v1alpha1\n      kind: PushSecretMetadata\n      spec:\n        sourceMergePolicy: Merge # or Replace\n        targetMergePolicy: Merge # or Replace \/ Ignore\n        labels:\n          color: red\n        annotations:\n          yes: please\n```\n\n\n| Field | Type | Description |\n| ----------------- | ------------------------------------ | ----------- |\n| sourceMergePolicy | string: `Merge`, `Replace` | The sourceMergePolicy defines how the metadata of the source secret is merged. `Merge` will merge the metadata of the source secret with the metadata defined in `.data[].metadata`. With `Replace`, the metadata in `.data[].metadata` replaces the source metadata. 
|\n| targetMergePolicy | string: `Merge`, `Replace`, `Ignore` | The targetMergePolicy defines how ESO merges the metadata produced by the sourceMergePolicy with the target secret. With `Merge`, the source metadata is merged with the existing metadata from the target secret. `Replace` will replace the target metadata with the metadata defined in the source. `Ignore` leaves the target metadata as is. |\n| labels | `map[string]string` | The labels to apply. |\n| annotations | `map[string]string` | The annotations to apply. |\n\n#### Implementation Considerations\n\nWhen utilizing the PushSecret feature and configuring the permissions for the SecretStore, consider the following:\n\n* **RBAC Configuration**: Ensure that the Role-Based Access Control (RBAC) configuration for the SecretStore grants the appropriate permissions for creating, reading, updating, and deleting resources in the target cluster.\n\n* **Least Privilege Principle**: Adhere to the principle of least privilege when assigning permissions to the SecretStore. 
Only provide the minimum required permissions to accomplish the desired synchronization between Secrets.\n\n* **Namespace or Cluster Scope**: Depending on your specific requirements, configure the SecretStore to operate at the desired scope, whether it is limited to a specific namespace or encompasses the entire cluster. Consider the security and access control implications of your chosen scope.","site":"external secrets"}
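The Kubernetes-provider record above describes SecretStore auth and ExternalSecret data mapping in prose and fragments. As a hedged illustration only (resource names and the `default` remoteNamespace are assumptions; field names follow the `external-secrets.io/v1beta1` schema shown in the record), a minimal serviceAccount-authenticated SecretStore plus an ExternalSecret pulling one key might look like:

```yaml
# Sketch only: names and namespaces here are hypothetical examples.
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: k8s-store-sa-auth
spec:
  provider:
    kubernetes:
      remoteNamespace: default        # store pulls only from this namespace
      server:
        caProvider:                   # CA of the local API server, per the docs
          type: ConfigMap
          name: kube-root-ca.crt
          key: ca.crt
      auth:
        serviceAccount:
          name: my-store              # needs the eso-store-role permissions shown above
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: k8s-store-sa-auth
  target:
    name: database-credentials       # name of the k8s Secret to be created
  data:
    - secretKey: username
      remoteRef:
        key: database-credentials
        property: username
```

The serviceAccount here must hold both the `secrets` read verbs and the `selfsubjectrulesreviews` create verb, since the reconciler validates its own access before syncing.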
{"questions":"external secrets Important note about this documentation External Secrets Operator integrates with for secret management 1Password Secrets Automation The 1Password API calls the entries in vaults Items These docs use the same term Behavior How an Item is equated to an ExternalSecret is equated to an Item s Title","answers":"## 1Password Secrets Automation\n\nExternal Secrets Operator integrates with [1Password Secrets Automation](https:\/\/1password.com\/products\/secrets\/) for secret management.\n\n### Important note about this documentation\n_**The 1Password API calls the entries in vaults 'Items'. These docs use the same term.**_\n\n### Behavior\n* How an Item is equated to an ExternalSecret:\n    * `remoteRef.key` is equated to an Item's Title\n    * `remoteRef.property` is equated to:\n        * An Item's field's Label (Password type)\n        * An Item's file's Name (Document type)\n        * If empty, defaults to the first file name, or the field labeled `password`\n    * `remoteRef.version` is currently not supported.\n    * One Item in a vault can equate to one Kubernetes Secret to keep things easy to comprehend.\n* Support for 1Password secret types of `Password` and `Document`.\n    * The `Password` type can get data from multiple `fields` in the Item.\n    * The `Document` type can get data from files.\n    * See [creating 1Password Items compatible with ExternalSecrets](#creating-compatible-1password-items).\n* Ordered vaults\n    * Specify an ordered list of vaults in a SecretStore and the value will be sourced from the first vault with a matching Item.\n    * If no matching Item is found, an error is returned.\n    * This supports having a default or shared set of values that can also be overridden for specific environments.\n* `dataFrom`:\n    * `find.path` is equated to Item Title.\n    * `find.name.regexp` is equated to field Labels.\n    * `find.tags` are not supported at this time.\n\n### Prerequisites\n* 1Password requires running 
a 1Password Connect Server to which the API requests will be made.\n    * External Secrets does not run this server. See [Deploy a Connect Server](#deploy-a-connect-server).\n    * One Connect Server is needed per 1Password Automation Environment.\n    * Many Vaults can be added to an Automation Environment, and Tokens can be generated in that Environment with access to any set or subset of those Vaults.\n* 1Password Connect Server version 1.5.6 or higher.\n\n### Setup Authentication\n_Authentication requires a `1password-credentials.json` file provided to the Connect Server, and a related 'Access Token' for the client in this provider to authenticate to that Connect Server. Both of these are generated by 1Password._\n\n1. Set up an Automation Environment [at 1Password.com](https:\/\/support.1password.com\/secrets-automation\/), or [via the op CLI](https:\/\/github.com\/1Password\/connect\/blob\/a0a5f3d92e68497098d9314721335a7bb68a3b2d\/README.md#create-server-and-access-token).\n    * Note: don't be confused by the `op connect server create` syntax. This will create an Automation Environment in 1Password, and corresponding credentials for a Connect Server, nothing more.\n    * This will result in a `1password-credentials.json` file to provide to a Connect Server Deployment, and an Access Token to provide as a Secret referenced by a `SecretStore` or `ClusterSecretStore`.\n1. Create a Kubernetes secret with the Access Token\n```yaml\n{% include '1password-token-secret.yaml' %}\n```\n1. Reference the secret in a SecretStore or ClusterSecretStore\n```yaml\n{% include '1password-secret-store.yaml' %}\n```\n1. Create a Kubernetes secret with the Connect Server credentials\n```yaml\n{% include '1password-connect-server-secret.yaml' %}\n```\n1. 
Reference the secret in a Connect Server Deployment\n```yaml\n{% include '1password-connect-server-deployment.yaml' %}\n```\n\n### Deploy a Connect Server\n* Follow the remaining instructions in the [Quick Start guide](https:\/\/github.com\/1Password\/connect\/blob\/a0a5f3d92e68497098d9314721335a7bb68a3b2d\/README.md#quick-start).\n    * Deploy at minimum a Deployment and Service for a Connect Server, to go along with the Secret for the Server created in the [Setup Authentication section](#setup-authentication).\n* The Service's name will be referenced in SecretStores\/ClusterSecretStores.\n* Keep in mind the likely need for additional Connect Servers for other Automation Environments when naming objects. For example dev, staging, prod, etc.\n* Unencrypted secret values are passed over the connection between the Operator and the Connect Server. **Encrypting the connection is recommended.**\n\n### Creating Compatible 1Password Items\n_Also see [examples below](#examples) for matching SecretStore and ExternalSecret specs._\n#### Manually (Password type)\n1. Click the plus button to create a new Password type Item.\n1. Change the title to what you want `remoteRef.key` to be.\n1. Set what you want `remoteRef.property` to be in the field sections where it says 'label', and values where it says 'new field'.\n1. Click the 'Save' button.\n\n![create-password-screenshot](..\/pictures\/screenshot_1password_create_password.png)\n#### Manually (Document type)\n* Click the plus button to create a new Document type Item.\n* Choose the file to upload and upload it.\n* Change the title to match `remoteRef.key`.\n* Click the 'Add New File' button to add more files.\n* Click the 'Save' button.\n\n![create-document-screenshot](..\/pictures\/screenshot_1password_create_document.png)\n#### Scripting (Password type with op [CLI](https:\/\/developer.1password.com\/docs\/cli\/v1\/get-started\/))\n* Create `file.json` with the following contents, swapping in your keys and values. 
Note: `section.name`'s and `section.title`'s values are ignored by the Operator, but cannot be empty for the `op` CLI\n    ```json\n       {\n        \"title\": \"my-title\",\n        \"vault\": {\n          \"id\": \"vault-id\"\n        },\n        \"category\": \"LOGIN\",\n        \"fields\": [\n          {\n            \"id\": \"username\",\n            \"type\": \"STRING\",\n            \"purpose\": \"USERNAME\",\n            \"label\": \"username\",\n            \"value\": \"a-username\"\n          },\n          {\n            \"id\": \"password\",\n            \"type\": \"CONCEALED\",\n            \"purpose\": \"PASSWORD\",\n            \"label\": \"password\",\n            \"password_details\": {\n              \"strength\": \"TERRIBLE\"\n            },\n            \"value\": \"a-password\"\n          },\n          {\n            \"id\": \"notesPlain\",\n            \"type\": \"STRING\",\n            \"purpose\": \"NOTES\",\n            \"label\": \"notesPlain\",\n            \"value\": \"notesPlain\"\n          },\n          {\n            \"id\": \"customField\",\n            \"type\": \"CONCEALED\",\n            \"purpose\": \"custom\",\n            \"label\": \"custom\",\n            \"value\": \"custom-value\"\n          }\n        ]\n      }\n    ```\n* Run `op item create --template file.json`\n#### Scripting (Document type)\n* Unfortunately the `op` CLI doesn't seem to support uploading multiple files to the same Item, and the current Go lib has a [bug](https:\/\/github.com\/1Password\/connect-sdk-go\/issues\/45). 
`op` can be used to create a Document type Item with one file in it, but for now it's necessary to add multiple files to the same Document via the GUI.\n\n#### In-built field labeled `password` on Password type Items\n* TL;DR if you need a field labeled `password`, use the in-built one rather than the one in a fields Section.\n\n![password-field-example](..\/pictures\/screenshot_1password_password_field.png)\n\n* 1Password automatically adds a field labeled `password` on every Password type Item, whether it's created through a GUI or the API or `op` CLI.\n* There's no problem with using this field just like any other field, _just make sure you don't end up with two fields with the same label_. (For example, by automating the `op` CLI to create Items.)\n* The in-built `password` field is not otherwise special for the purposes of ExternalSecrets. It can be ignored when not in use.\n\n### Examples\nExamples of using the `my-env-config` and `my-cert` Items [seen above](#manually-password-type).\n\n* Note: with this configuration a 1Password Item titled `my-env-config` is correlated to an ExternalSecret named `my-env-config` that results in a Kubernetes secret named `my-env-config`, all with matching names for the key\/value pairs. This is a way to increase comprehensibility.\n```yaml\n{% include '1password-secret-store.yaml' %}\n```\n```yaml\n{% include '1password-external-secret-my-env-config.yaml' %}\n```\n```yaml\n{% include '1password-external-secret-my-cert.yaml' %}\n```\n\n### Additional Notes\n#### General\n* It's intuitive to use Document type Items for Kubernetes secrets mounted as files, and Password type Items for ones that will be mounted as environment variables, but either can be used for either. It comes down to what's more convenient.\n\n#### Why no version history\n* 1Password only supports version history on their in-built `password` field. 
Therefore, implementing version history in this provider would require one Item in 1Password per `remoteRef` in an ExternalSecret. Additionally `remoteRef.property` would be pointless\/unusable.\n* For example, a Kubernetes secret with 15 keys (say, used in `envFrom`) would require 15 Items in the 1Password vault, instead of 15 Fields in 1 Item. This would quickly get untenable for more than a few secrets, because:\n    * All Items would have to have unique names, which means `secretKey` couldn't match the Item name the `remoteRef` is targeting.\n    * Maintenance, particularly clean-up of no-longer-used secrets, would be significantly more work.\n    * A vault would often become a huge list of unorganized entries as opposed to a much smaller list organized by Kubernetes Secret.\n* To support new and old versions of a secret value at the same time, create a new Item in 1Password with the new value, and point a few ExternalSecrets at a time to the new Item.\n\n#### Keeping misconfiguration from working\n* One instance of the ExternalSecrets Operator _can_ work with many Connect Server instances, but it may not be the best approach.\n* With one Operator instance per Connect Server instance, namespaces and RBAC can be used to improve security posture, and perhaps just as importantly, it's harder to misconfigure something and have it work (supply env A's secret values to env B, for example).\n* You can run as many 1Password Connect Servers as you need security boundaries to help protect against accidental misconfiguration.\n\n#### Patching ExternalSecrets with Kustomize\n* An overlay can provide a SecretStore specific to that overlay, and then use JSON6902 to patch all the ExternalSecrets coming from base to point to that SecretStore. 
Here's an example `overlays\/staging\/kustomization.yaml`:\n    ```yaml\n    ---\n    apiVersion: kustomize.config.k8s.io\/v1beta1\n    kind: Kustomization\n\n    resources:\n    - ..\/..\/base\/something-with-external-secrets\n    - secretStore.staging.yaml\n\n    patchesJson6902:\n    - target:\n        kind: ExternalSecret\n        name: \".*\"\n      patch: |-\n        - op: replace\n          path: \/spec\/secretStoreRef\/name\n          value: staging\n    ```","site":"external secrets","answers_cleaned":"   1Password Secrets Automation  External Secrets Operator integrates with  1Password Secrets Automation  https   1password com products secrets   for secret management       Important note about this documentation    The 1Password API calls the entries in vaults  Items   These docs use the same term          Behavior   How an Item is equated to an ExternalSecret         remoteRef key  is equated to an Item s Title        remoteRef property  is equated to            An Item s field s Label  Password type            An Item s file s Name  Document type            If empty  defaults to the first file name  or the field labeled  password         remoteRef version  is currently not supported        One Item in a vault can equate to one Kubernetes Secret to keep things easy to comprehend    Support for 1Password secret types of  Password  and  Document         The  Password  type can get data from multiple  fields  in the Item        The  Document  type can get data from files        See  creating 1Password Items compatible with ExternalSecrets   creating compatible 1password items     Ordered vaults       Specify an ordered list of vaults in a SecretStore and the value will be sourced from the first vault with a matching Item        If no matching Item is found  an error is returned        This supports having a default or shared set of values that can also be overriden for specific environments     dataFrom          find path  is equated to Item Title         
find name regexp  is equated to field Labels         find tags  are not supported at this time       Prerequisites   1Password requires running a 1Password Connect Server to which the API requests will be made        External Secrets does not run this server  See  Deploy a Connect Server   deploy a connect server         One Connect Server is needed per 1Password Automation Environment        Many Vaults can be added to an Automation Environment  and Tokens can be generated in that Environment with access to any set or subset of those Vaults    1Password Connect Server version 1 5 6 or higher       Setup Authentication  Authentication requires a  1password credentials json  file provided to the Connect Server  and a related  Access Token  for the client in this provider to authenticate to that Connect Server  Both of these are generated by 1Password    1  Setup an Automation Environment  at 1Password com  https   support 1password com secrets automation    or  via the op CLI  https   github com 1Password connect blob a0a5f3d92e68497098d9314721335a7bb68a3b2d README md create server and access token         Note  don t be confused by the  op connect server create  syntax  This will create an Automation Environment in 1Password  and corresponding credentials for a Connect Server  nothing more        This will result in a  1password credentials json  file to provide to a Connect Server Deployment  and an Access Token to provide as a Secret referenced by a  SecretStore  or  ClusterSecretStore   1  Create a Kubernetes secret with the Access Token    yaml    include  1password token secret yaml         1  Reference the secret in a SecretStore or ClusterSecretStore    yaml    include  1password secret store yaml         1  Create a Kubernetes secret with the Connect Server credentials    yaml    include  1password connect server secret yaml         1  Reference the secret in a Connect Server Deployment    yaml    include  1password connect server deployment yaml            
  Deploy a Connect Server   Follow the remaining instructions in the  Quick Start guide  https   github com 1Password connect blob a0a5f3d92e68497098d9314721335a7bb68a3b2d README md quick start         Deploy at minimum a Deployment and Service for a Connect Server  to go along with the Secret for the Server created in the  Setup Authentication section   setup authentication     The Service s name will be referenced in SecretStores ClusterSecretStores    Keep in mind the likely need for additional Connect Servers for other Automation Environments when naming objects  For example dev  staging  prod  etc    Unencrypted secret values are passed over the connection between the Operator and the Connect Server    Encrypting the connection is recommended         Creating Compatible 1Password Items  Also see  examples below   examples  for matching SecretStore and ExternalSecret specs        Manually  Password type  1  Click the plus button to create a new Password type Item  1  Change the title to what you want  remoteRef key  to be  1  Set what you want  remoteRef property  to be in the field sections where is says  label   and values where it says  new field   1  Click the  Save  button     create password screenshot     pictures screenshot 1password create password png       Manually  Document type    Click the plus button to create a new Document type Item    Choose the file to upload and upload it    Change the title to match  remoteRef key    Click the  Add New File  button to add more files    Click the  Save  button     create document screenshot     pictures screenshot 1password create document png       Scripting  Password type with op  CLI  https   developer 1password com docs cli v1 get started      Create  file json  with the following contents  swapping in your keys and values  Note   section name  s and  section title  s values are ignored by the Operator  but cannot be empty for the  op  CLI        json                   title    my title            vault  
"}
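The Kustomize JSON6902 patching mentioned above can be sketched in plain Python. This is a minimal illustration of what a single `replace` operation does to an ExternalSecret manifest; the `json6902_replace` helper, the manifest, and the `staging` value are hypothetical, and real Kustomize implements full RFC 6902 semantics.

```python
def json6902_replace(doc, path, value):
    """Apply one RFC 6902 'replace' operation given a JSON pointer path."""
    parts = [p for p in path.split("/") if p]  # '/spec/x' -> ['spec', 'x']
    target = doc
    for part in parts[:-1]:
        # Integer segments index into lists, others into mappings
        target = target[int(part)] if isinstance(target, list) else target[part]
    target[parts[-1]] = value
    return doc

# Hypothetical ExternalSecret manifest, represented as a dict
external_secret = {
    "apiVersion": "external-secrets.io/v1beta1",
    "kind": "ExternalSecret",
    "metadata": {"name": "my-env-config"},
    "spec": {"secretStoreRef": {"name": "base-store", "kind": "SecretStore"}},
}

# The overlay's patch repoints the ExternalSecret at the staging SecretStore
patched = json6902_replace(external_secret, "/spec/secretStoreRef/name", "staging")
print(patched["spec"]["secretStoreRef"]["name"])  # staging
```

Kustomize applies the same patch to every resource matched by `target`, which is why a single overlay patch can repoint all ExternalSecrets from base at once.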
{"questions":"external secrets senhasegura DevOps Secrets Management DSM External Secrets Operator integrates with module to sync application secrets to secrets held on the Kubernetes cluster Authentication Authentication in senhasegura uses DevOps Secrets Management DSM application authorization schema","answers":"## senhasegura DevOps Secrets Management (DSM)\n\nExternal Secrets Operator integrates with the [senhasegura](https:\/\/senhasegura.com\/) [DevOps Secrets Management (DSM)](https:\/\/senhasegura.com\/devops) module to sync application secrets to secrets held on the Kubernetes cluster.\n\n---\n\n## Authentication\n\nAuthentication in senhasegura uses the DevOps Secrets Management (DSM) application authorization schema.\n\nYou need to create a Kubernetes Secret with the desired auth parameters, for example:\n\nInstructions to set up authorizations and secrets in senhasegura DSM can be found at the [senhasegura docs for DSM](https:\/\/helpcenter.senhasegura.io\/docs\/3.22\/dsm) and the [senhasegura YouTube channel](https:\/\/www.youtube.com\/channel\/UCpDms35l3tcrfb8kZSpeNYw\/search?query=DSM%2C%20en-US)\n\n```yaml\n{% include 'senhasegura-dsm-secret.yaml' %}\n```\n\n---\n\n## Examples\n\nTo sync secrets between senhasegura and Kubernetes with External Secrets, we need to define a SecretStore or ClusterSecretStore resource with the senhasegura provider, setting authentication in the DSM module with the Secret defined before.\n\n### SecretStore\n\n``` yaml\n{% include 'senhasegura-dsm-secretstore.yaml' %}\n```\n\n### ClusterSecretStore\n\n``` yaml\n{% include 'senhasegura-dsm-clustersecretstore.yaml' %}\n```\n\n---\n\n## Syncing secrets\n\nIn the examples below, consider that three secrets (api-settings, db-settings and hsm-settings) are defined in senhasegura DSM.\n\n---\n\n**Secret Identifier:** api-settings\n\n**Secret data:**\n\n```bash\nURL=https:\/\/example.com\/api\/example\nTOKEN=example-token-value\n```\n\n---\n\n**Secret Identifier:** db-settings\n\n**Secret data:**\n\n```bash\nDB_HOST='db.example'\nDB_PORT='5432'\nDB_USERNAME='example'\nDB_PASSWORD='example'\n```\n\n---\n\n**Secret Identifier:** hsm-settings\n\n**Secret data:**\n\n```bash\nHSM_ADDRESS='hsm.example'\nHSM_PORT='9223'\n```\n\n---\n\n### Sync DSM secrets using Secret Identifiers\n\nIf you leave `remoteRef.property` empty, you can fetch all key\/value pairs for a given secret identifier. This returns the JSON-encoded secret value for that path.\n\nIf you only need a specific key, you can select it using `remoteRef.property` as the key name.\n\nIn this method, you can overwrite the data name in the Kubernetes Secret object (e.g. API_SETTINGS and API_SETTINGS_TOKEN).\n\n``` yaml\n{% include 'senhasegura-dsm-external-secret-single.yaml' %}\n```\n\nThe Kubernetes Secret will be created with the following `.data.X`:\n\n```bash\nAPI_SETTINGS='[{\"TOKEN\":\"example-token-value\",\"URL\":\"https:\/\/example.com\/api\/example\"}]'\nAPI_SETTINGS_TOKEN='example-token-value'\n```\n\n---\n\n### Sync DSM secrets using Secret Identifiers with automatic name assignment\n\nIf your app requires multiple secrets, you do not need to create multiple ExternalSecret resources; you can aggregate secrets using a single ExternalSecret resource.\n\nIn this method, every secret data field in senhasegura creates a Kubernetes Secret `.data.X` field.\n\n``` yaml\n{% include 'senhasegura-dsm-external-secret-multiple.yaml' %}\n```\n\nThe Kubernetes Secret will be created with the following `.data.X`:\n\n```bash\nURL='https:\/\/example.com\/api\/example'\nTOKEN='example-token-value'\nDB_HOST='db.example'\nDB_PORT='5432'\nDB_USERNAME='example'\nDB_PASSWORD='example'\n```\n\n<!-- https:\/\/github.com\/external-secrets\/external-secrets\/pull\/830#discussion_r858657107 -->\n\n<!-- ### Sync all secrets from DSM authorization\n\nYou can sync all secrets that your authorization in DSM has using find; in a future release, you will be able to filter secrets by name, path or tags.\n\n``` yaml\n{% include 
'senhasegura-dsm-external-secret-all.yaml' %}\n```\n\nThe Kubernetes Secret will be created with the following `.data.X`:\n\n```bash\nURL='https:\/\/example.com\/api\/example'\nTOKEN='example-token-value'\nDB_HOST='db.example'\nDB_PORT='5432'\nDB_USERNAME='example'\nDB_PASSWORD='example'\nHSM_ADDRESS='hsm.example'\nHSM_PORT='9223'\n``` -->","site":"external secrets"}
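The `remoteRef.property` behaviour described above (empty means the whole secret, JSON-encoded; set means one key) can be modeled in a few lines. This is an illustrative sketch, not the senhasegura provider's actual code; the `resolve` helper and its exact output format are assumptions based on the API_SETTINGS example output shown above.

```python
import json

def resolve(secret_fields, prop=""):
    """Model of remoteRef.property resolution for a DSM secret identifier."""
    if not prop:
        # No property set: return the JSON-encoded secret value,
        # mirroring the API_SETTINGS example above
        return json.dumps([secret_fields], separators=(",", ":"), sort_keys=True)
    # Property set: return just that key's value
    return secret_fields[prop]

api_settings = {"TOKEN": "example-token-value", "URL": "https://example.com/api/example"}
print(resolve(api_settings))
# [{"TOKEN":"example-token-value","URL":"https://example.com/api/example"}]
print(resolve(api_settings, "TOKEN"))  # example-token-value
```

In practice this is why a single ExternalSecret entry without `property` lands in the Kubernetes Secret as one JSON blob, while entries with `property` land as individual plain values.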
{"questions":"external secrets Secrets Manager defined region You should define Roles that define fine grained access to way users of the can only access the secrets necessary A points to AWS Secrets Manager in a certain account within a individual secrets and pass them to ESO using This","answers":"\n![aws sm](..\/pictures\/eso-az-kv-aws-sm.png)\n\n## Secrets Manager\n\nA `SecretStore` points to AWS Secrets Manager in a certain account within a\ndefined region. You should define Roles that define fine-grained access to\nindividual secrets and pass them to ESO using `spec.provider.aws.role`. This\nway users of the `SecretStore` can only access the secrets necessary.\n\n``` yaml\n{% include 'aws-sm-store.yaml' %}\n```\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` in `accessKeyIDSecretRef` and `secretAccessKeySecretRef` with the namespaces where the secrets reside.\n\n### IAM Policy\n\nCreate an IAM Policy to pin down access to secrets matching `dev-*`.\n\n``` json\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"secretsmanager:GetResourcePolicy\",\n        \"secretsmanager:GetSecretValue\",\n        \"secretsmanager:DescribeSecret\",\n        \"secretsmanager:ListSecretVersionIds\"\n      ],\n      \"Resource\": [\n        \"arn:aws:secretsmanager:us-west-2:111122223333:secret:dev-*\"\n      ]\n    }\n  ]\n}\n```\n\n#### Permissions for PushSecret\n\nIf you're planning to use `PushSecret`, ensure you also have the following permissions in your IAM policy:\n\n``` json\n{\n  \"Effect\": \"Allow\",\n  \"Action\": [\n    \"secretsmanager:CreateSecret\",\n    \"secretsmanager:PutSecretValue\",\n    \"secretsmanager:TagResource\",\n    \"secretsmanager:DeleteSecret\"\n  ],\n  \"Resource\": [\n    \"arn:aws:secretsmanager:us-west-2:111122223333:secret:dev-*\"\n  ]\n}\n```\n\nHere's a more restrictive version of the IAM policy:\n\n``` json\n{\n  \"Version\": 
\"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"secretsmanager:CreateSecret\",\n        \"secretsmanager:PutSecretValue\",\n        \"secretsmanager:TagResource\"\n      ],\n      \"Resource\": [\n        \"arn:aws:secretsmanager:us-west-2:111122223333:secret:dev-*\"\n      ]\n    },\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"secretsmanager:DeleteSecret\"\n      ],\n      \"Resource\": [\n        \"arn:aws:secretsmanager:us-west-2:111122223333:secret:dev-*\"\n      ],\n      \"Condition\": {\n        \"StringEquals\": {\n          \"secretsmanager:ResourceTag\/managed-by\": \"external-secrets\"\n        }\n      }\n    }\n  ]\n}\n```\n\nIn this policy, the DeleteSecret action is restricted to secrets that have the specified tag, ensuring that deletion operations are more controlled and in line with the intended management of the secrets.\n\n#### Additional Settings for PushSecret\n\nAdditional settings can be set at the `SecretStore` level to control the behavior of `PushSecret` when interacting with AWS Secrets Manager.\n\n```yaml\n{% include 'aws-sm-store-secretsmanager-config.yaml' %}\n```\n\n#### Additional Metadata for PushSecret\n\nIt's possible to configure AWS Secrets Manager to either push secrets in `binary` format or as plain `string`.\n\nTo control this behaviour, set the following provider metadata:\n\n```yaml\n{% include 'aws-sm-push-secret-with-metadata.yaml' %}\n```\n\n`secretPushFormat` takes two options, `binary` and `string`, where `binary` is the _default_.\n\n### JSON Secret Values\n\nSecretsManager supports *simple* key\/value pairs that are stored as JSON. If you use the API, you can store more complex JSON objects. 
You can access nested values or arrays using [gjson syntax](https:\/\/github.com\/tidwall\/gjson\/blob\/master\/SYNTAX.md):\n\nConsider the following JSON object that is stored in the SecretsManager key `friendslist`:\n\n``` json\n{\n  \"name\": {\"first\": \"Tom\", \"last\": \"Anderson\"},\n  \"friends\": [\n    {\"first\": \"Dale\", \"last\": \"Murphy\"},\n    {\"first\": \"Roger\", \"last\": \"Craig\"},\n    {\"first\": \"Jane\", \"last\": \"Murphy\"}\n  ]\n}\n```\n\nThis is an example of how you would look up nested keys in the above JSON object:\n\n``` yaml\n{% include 'aws-sm-external-secret.yaml' %}\n```\n\n### Secret Versions\n\nSecretsManager creates a new version of a secret every time it is updated. The secret version can be referenced in two ways: the `VersionStage` and the `VersionId`. The `VersionId` is a unique UUID which is generated every time the secret changes. This ID is immutable and will always refer to the same secret data. The `VersionStage` is an alias to a `VersionId`, and can refer to different secret data as the secret is updated. 
By default, SecretsManager will add the version stages `AWSCURRENT` and `AWSPREVIOUS` to every secret, but other stages can be created via the [update-secret-version-stage](https:\/\/docs.aws.amazon.com\/cli\/latest\/reference\/secretsmanager\/update-secret-version-stage.html) API.\n\nThe `version` field on the `remoteRef` of the ExternalSecret will normally consider the version to be a `VersionStage`, but if the field is prefixed with `uuid\/`, then the version will be considered a `VersionId`.\n\nSo in this example, the operator will request the same secret with different versions: `AWSCURRENT` and `AWSPREVIOUS`:\n\n``` yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: versioned-api-key\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    name: aws-secretsmanager\n    kind: SecretStore\n  target:\n    name: versioned-api-key\n    creationPolicy: Owner\n  data:\n  - secretKey: previous-api-key\n    remoteRef:\n      key: \"production\/api-key\"\n      version: \"AWSPREVIOUS\"\n  - secretKey: current-api-key\n    remoteRef:\n      key: \"production\/api-key\"\n      version: \"AWSCURRENT\"\n```\n\nWhile in this example, the operator will request the secret with the `VersionId` `123e4567-e89b-12d3-a456-426614174000`:\n\n``` yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: versioned-api-key\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    name: aws-secretsmanager\n    kind: SecretStore\n  target:\n    name: versioned-api-key\n    creationPolicy: Owner\n  data:\n  - secretKey: api-key\n    remoteRef:\n      key: \"production\/api-key\"\n      version: \"uuid\/123e4567-e89b-12d3-a456-426614174000\"\n```\n\n--8<-- \"snippets\/provider-aws-access.md\"","site":"external secrets"}
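The gjson-style lookups used by `remoteRef.property` in the AWS section above can be approximated in a few lines. This is a rough Python stand-in, not ESO's implementation: the real gjson library supports much more syntax (wildcards, queries, modifiers) than the plain dot-paths handled here.

```python
import json

def gjson_get(obj, path):
    """Resolve a dot-separated gjson-style path; integer parts index arrays."""
    cur = obj
    for part in path.split("."):
        cur = cur[int(part)] if isinstance(cur, list) else cur[part]
    return cur

# The friendslist object from the JSON Secret Values example above
friendslist = json.loads("""
{
  "name": {"first": "Tom", "last": "Anderson"},
  "friends": [
    {"first": "Dale", "last": "Murphy"},
    {"first": "Roger", "last": "Craig"},
    {"first": "Jane", "last": "Murphy"}
  ]
}
""")

print(gjson_get(friendslist, "name.first"))      # Tom
print(gjson_get(friendslist, "friends.1.first")) # Roger
```

A `remoteRef` with `property: friends.1.first` against the `friendslist` key would therefore resolve to `Roger` in the resulting Kubernetes Secret.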
{"questions":"external secrets External Secrets Operator integrates with for secret management IBM Cloud Secret Manager Authentication We support API key and trusted profile container authentication for this provider API key secret","answers":"## IBM Cloud Secret Manager\n\nExternal Secrets Operator integrates with [IBM Cloud Secret Manager](https:\/\/www.ibm.com\/cloud\/secrets-manager) for secret management.\n\n### Authentication\n\nWe support API key and trusted profile container authentication for this provider.\n\n#### API key secret\n\nTo generate your key (for test purposes, we are going to generate it from your user), first go to your Access (IAM) page:\n\n![iam](..\/pictures\/screenshot_api_keys_iam.png)\n\nOn the left, click \"API Keys\", then click on \"Create\":\n\n![iam-left](..\/pictures\/screenshot_api_keys_iam_left.png)\n\nPick a name and description for your key:\n\n![iam-create-key](..\/pictures\/screenshot_api_keys_create.png)\n\nYou have created a key. Press the eyeball to show the key. 
Copy or save it because keys can't be displayed or downloaded twice.\n\n![iam-create-success](..\/pictures\/screenshot_api_keys_create_successful.png)\n\nCreate a secret containing your apiKey:\n\n```shell\nkubectl create secret generic ibm-secret --from-literal=apiKey='API_KEY_VALUE'\n```\n\n#### Trusted Profile Container Auth\n\nTo create the trusted profile, first go to your Access (IAM) page:\n\n![iam](..\/pictures\/screenshot_api_keys_iam.png)\n\nOn the left, click \"Access groups\":\n\n![iam-left](..\/pictures\/screenshot_container_auth_create_group.png)\n\nPick a name and description for your group:\n\n![iam-left](..\/pictures\/screenshot_container_auth_create_group_1.png)\n\nClick on \"Access\", and then on \"Assign\":\n\n![iam-left](..\/pictures\/screenshot_container_auth_create_group_2.png)\n\nClick on \"Assign Access\", select \"IAM services\", and pick \"Secrets Manager\" from the pick-list:\n\n![iam-left](..\/pictures\/screenshot_container_auth_create_group_3.png)\n\nScope to \"All resources\" or \"Resources based on selected attributes\":\n\n![iam-left](..\/pictures\/screenshot_container_auth_create_group_4.png)\n\nSelect the \"SecretsReader\" service access policy:\n\n![iam-left](..\/pictures\/screenshot_container_auth_create_group_5.png)\n\nClick \"Add\" and \"Assign\" to save the access group.\n\nNext, on the left, click \"Trusted profiles\":\n\n![iam-left](..\/pictures\/screenshot_container_auth_iam_left.png)\n\nPress \"Create\" and pick a name and description for your profile:\n\n![iam-create-key](..\/pictures\/screenshot_container_auth_create_1.png)\n\nScope the profile's access.\n\nThe compute service type will be \"Red Hat OpenShift on IBM Cloud\". 
Additional restrictions can be configured based on cloud or cluster metadata or, if \"Specific resources\" is selected, restricted to a specific cluster.\n\n![iam-create-key](..\/pictures\/screenshot_container_auth_create_2.png)\n\nClick \"Add\" next to the previously created access group and then \"Create\" to associate the necessary service permissions.\n\n![iam-create-key](..\/pictures\/screenshot_container_auth_create_3.png)\n\nTo use the container-based authentication, it is necessary to map the API server `serviceAccountToken` auth token to the \"external-secrets\" and \"external-secrets-webhook\" deployment descriptors. Example below:\n\n```yaml\n{% include 'ibm-container-auth-volume.yaml' %}\n```\n\n### Update secret store\n\nBe sure the `ibm` provider is listed in the `Kind=SecretStore`:\n\n```yaml\n{% include 'ibm-secret-store.yaml' %}\n```\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` in `secretApiKeySecretRef` with the namespace where the secret resides.\n\n**NOTE:** Only `secretApiKeySecretRef` or `containerAuth` should be specified, depending on the authentication method being used.\n\nTo find your `serviceURL`, under your Secrets Manager resource, go to \"Endpoints\" on the left.\n\nSee here for a list of [publicly available endpoints](https:\/\/cloud.ibm.com\/apidocs\/secrets-manager#getting-started-endpoints).\n\n![iam-create-success](..\/pictures\/screenshot_service_url.png)\n\n### Secret Types\n\nWe support the following secret types of [IBM Secrets Manager](https:\/\/cloud.ibm.com\/apidocs\/secrets-manager):\n\n* `arbitrary`\n* `username_password`\n* `iam_credentials`\n* `service_credentials`\n* `imported_cert`\n* `public_cert`\n* `private_cert`\n* `kv`\n\nTo define the type of secret you would like to sync, you need to prefix the secret id with the desired type. 
If the secret type is not specified, it defaults to `arbitrary`:\n\n```yaml\n{% include 'ibm-es-types.yaml' %}\n```\n\nThe behavior for the different secret types is as follows:\n\n#### arbitrary\n\n* `remoteRef` retrieves a string from secrets manager and sets it for the specified `secretKey`\n* `dataFrom` retrieves a string from secrets manager and tries to parse it as a JSON object, setting the key:value pairs in the resulting Kubernetes secret if successful\n\n#### username_password\n* `remoteRef` requires a `property` to be set for either `username` or `password` to retrieve the respective fields from the secrets manager secret and set them in the specified `secretKey`\n* `dataFrom` retrieves both `username` and `password` fields from the secrets manager secret and sets appropriate key:value pairs in the resulting Kubernetes secret\n\n#### iam_credentials\n* `remoteRef` retrieves an apikey from secrets manager and sets it for the specified `secretKey`\n* `dataFrom` retrieves an apikey from secrets manager and sets it for the `apikey` Kubernetes secret key\n\n#### service_credentials\n* `remoteRef` retrieves the credentials object from secrets manager and sets it for the specified `secretKey`\n* `dataFrom` retrieves the credential object as a map from secrets manager and sets appropriate key:value pairs in the resulting Kubernetes secret\n\n#### imported_cert, public_cert, and private_cert\n* `remoteRef` requires a `property` to be set for either `certificate`, `private_key` or `intermediate` to retrieve the respective fields from the secrets manager secret and set them in the specified `secretKey`\n* `dataFrom` retrieves all `certificate`, `private_key` and `intermediate` fields from the secrets manager secret and sets appropriate key:value pairs in the resulting Kubernetes secret\n\n#### kv\n* An optional `property` field can be set on `remoteRef` to select the requested key from the KV secret. 
If not set, the entire secret will be returned\n* `dataFrom` retrieves a string from secrets manager and tries to parse it as a JSON object, setting the key:value pairs in the resulting Kubernetes secret if successful. It can be used with either of the methods\n  * `Extract` to extract multiple key\/value pairs from one secret (with the optional `property` field being supported as well)\n  * `Find` to find secrets based on tags or regular expressions; it allows finding multiple external secrets and mapping them into a single Kubernetes secret\n\n```json\n{\n  \"key1\": \"val1\",\n  \"key2\": \"val2\",\n  \"key3\": {\n    \"keyA\": \"valA\",\n    \"keyB\": \"valB\"\n  },\n  \"special.key\": \"special-content\"\n}\n```\n\n```yaml\ndata:\n- secretKey: key3_keyB\n  remoteRef:\n    key: 'kv\/aaaaa-bbbb-cccc-dddd-eeeeee'\n    property: 'key3.keyB'\n- secretKey: special_key\n  remoteRef:\n    key: 'kv\/aaaaa-bbbb-cccc-dddd-eeeeee'\n    property: 'special.key'\n- secretKey: key_all\n  remoteRef:\n    key: 'kv\/aaaaa-bbbb-cccc-dddd-eeeeee'\n```\n\n```yaml\ndataFrom:\n  - extract:\n      key: 'kv\/fffff-gggg-iiii-dddd-eeeeee' #mandatory\n      decodingStrategy: Base64 #optional\n```\n\n```yaml\ndataFrom:\n  - find:\n      name:  #matches any secret name ending in foo-bar\n        regexp: \"key\" #assumption that secrets are stored like \/comp\/key1, key2\/trigger, and comp\/trigger\/keygen within the secret manager\n  - find:\n      tags: #matches any secrets with the following metadata labels\n        environment: \"dev\"\n        application: \"BFF\"\n```\n\nresults in\n\n```yaml\ndata:\n  # secrets from data\n  key3_keyB: ... #valB\n  special_key: ... #special-content\n  key_all: ... #{\"key1\":\"val1\",\"key2\":\"val2\", ...\"special.key\":\"special-content\"}\n\n  # secrets from dataFrom with extract method\n  keyA: ... #1st key-value pair from JSON object\n  keyB: ... #2nd key-value pair from JSON object\n  keyC: ... 
#3rd key-value pair from JSON object\n\n  # secrets from dataFrom with find regex method\n  _comp_key1: ... #secret value for \/comp\/key1\n  key2_trigger: ... #secret value for key2\/trigger\n  _comp_trigger_keygen: ... #secret value for comp\/trigger\/keygen\n\n  # secrets from dataFrom with find tags method\n  bffA: ...\n  bffB: ...\n  bffC: ...\n```\n\n### Creating external secret\n\nTo create a Kubernetes secret from IBM Secrets Manager, a `Kind=ExternalSecret` is needed.\nThe example below creates a Kubernetes secret based on the ID of the secret in Secrets Manager.\n\n```yaml\n{% include 'ibm-external-secret.yaml' %}\n```\n\nAlternatively, the secret name along with its secret group name can be specified instead of the secret ID to fetch the secret.\n\n```yaml\n{% include 'ibm-external-secret-by-name.yaml' %}\n```\n\n### Getting the Kubernetes secret\n\nThe operator will fetch the IBM Secret Manager secret and inject it as a `Kind=Secret`:\n```\nkubectl get secret secret-to-be-created -n <namespace> -o jsonpath='{.data.test}' | base64 -d\n```\n\n### Populating the Kubernetes secret with metadata from IBM Secrets Manager Provider\n\nESO can add metadata while creating or updating a Kubernetes secret, to be reflected in its labels or annotations. The metadata could be any of the fields that are supported and returned in the response by IBM Secrets Manager.\n\nIn order for the user to opt in to adding metadata to the secret, an existing optional field `spec.dataFrom.extract.metadataPolicy` can be set to `Fetch`, its default value being `None`. In addition to this, templating provided by ESO can be leveraged to specify the key-value pairs of the resultant secret's labels and annotations.\n\nIn order for the required metadata to be populated in the Kubernetes secret, a combination of the below should be provided in the External Secrets resource:\n1. The required metadata should be specified under `template.metadata.labels` or `template.metadata.annotations`.\n2. 
The required secret data should be specified under `template.data`.\n3. The spec.dataFrom.extract should be specified with details of the Secrets Manager secret with `spec.dataFrom.extract.metadataPolicy` set to `Fetch`.\nBelow is an example, where `secret_id` and `updated_at` are the metadata of a secret in IBM Secrets Manager:\n\n```yaml\n{% include 'ibm-external-secret-with-metadata.yaml' %}\n```\n\nWhile the secret is being reconciled, it will have the secret data along with the required annotations. Below is the example of the secret after reconciliation:\n\n```yaml\napiVersion: v1\ndata:\n  secret: OHE0MFV5MGhQb2FmRjZTOGVva3dPQjRMeVZXeXpWSDlrSWgyR1BiVDZTMyc=\nimmutable: false\nkind: Secret\nmetadata:\n  annotations:\n    reconcile.external-secrets.io\/data-hash: 02217008d13ed228e75cf6d26fe74324\n    creationTimestamp: \"2023-05-04T08:41:24Z\"\n    secret_id: \"1234\"\n    updated_at: 2023-05-04T08:57:19Z\n  name: database-credentials\n  namespace: external-secrets\n  ownerReferences:\n  - apiVersion: external-secrets.io\/v1beta1\n    blockOwnerDeletion: true\n    controller: true\n    kind: ExternalSecret\n    name: database-credentials\n    uid: c2a018e7-1ac3-421b-bd3b-d9497204f843\n  #resourceVersion: \"1803567\" #immutable for a user\n  #uid: f5dff604-611b-4d41-9d65-b860c61a0b8d #immutable for a user\ntype: Opaque\n```","site":"external secrets","answers_cleaned":"   IBM Cloud Secret Manager  External Secrets Operator integrates with  IBM Cloud Secret Manager  https   www ibm com cloud secrets manager  for secret management       Authentication  We support API key and trusted profile container authentication for this provider        API key secret  To generate your key  for test purposes we are going to generate from your user   first got to your  Access IAM  page     iam     pictures screenshot api keys iam png   On the left  click  API Keys   then click on  Create     iam left     pictures screenshot api keys iam left png   Pick a name and description 
for your key     iam create key     pictures screenshot api keys create png   You have created a key  Press the eyeball to show the key  Copy or save it because keys can t be displayed or downloaded twice     iam create success     pictures screenshot api keys create successful png   Create a secret containing your apiKey      shell kubectl create secret generic ibm secret   from literal apiKey  API KEY VALUE            Trusted Profile Container Auth  To create the trusted profile  first got to your  Access IAM  page     iam     pictures screenshot api keys iam png   On the left  click  Access groups      iam left     pictures screenshot container auth create group png   Pick a name and description for your group     iam left     pictures screenshot container auth create group 1 png   Click on  Access   and then on  Assign      iam left     pictures screenshot container auth create group 2 png   Click on  Assign Access   select  IAM services   and pick  Secrets Manager  from the pick list     iam left     pictures screenshot container auth create group 3 png   Scope to  All resources  or  Resources based on selected attributes      iam left     pictures screenshot container auth create group 4 png   Select the  SecretsReader  service access policy     iam left     pictures screenshot container auth create group 5 png   Click  Add  and  Assign  to save the access group   Next  on the left  click  Trusted profiles      iam left     pictures screenshot container auth iam left png   Press  Create  and pick a name and description for your profile     iam create key     pictures screenshot container auth create 1 png   Scope the profile s access   The compute service type will be  Red Hat OpenShift on IBM Cloud    Additional restriction can be configured based on cloud or cluster metadata  or if  Specific resources  is selected  restriction to a specific cluster     iam create key     pictures screenshot container auth create 2 png   Click  Add  next to the previously 
created access group and then  Create   to associate the necessary service permissions     iam create key     pictures screenshot container auth create 3 png   To use the container based authentication  it is necessary to map the API server  serviceAccountToken  auth token to the  external secrets  and  external secrets webhook  deployment descriptors  Example below      yaml    include  ibm container auth volume yaml              Update secret store Be sure the  ibm  provider is listed in the  Kind SecretStore      yaml    include  ibm secret store yaml           NOTE    In case of a  ClusterSecretStore   Be sure to provide  namespace  in  secretApiKeySecretRef  with the namespace where the secret resides     NOTE    Only  secretApiKeySecretRef  or  containerAuth  should be specified  depending on authentication method being used   To find your  serviceURL   under your Secrets Manager resource  go to  Endpoints  on the left   See here for a list of  publicly available endpoints  https   cloud ibm com apidocs secrets manager getting started endpoints      iam create success     pictures screenshot service url png       Secret Types We support the following secret types of  IBM Secrets Manager  https   cloud ibm com apidocs secrets manager       arbitrary     username password     iam credentials     service credentials     imported cert     public cert     private cert     kv   To define the type of secret you would like to sync you need to prefix the secret id with the desired type  If the secret type is not specified it is defaulted to  arbitrary       yaml    include  ibm es types yaml           The behavior for the different secret types is as following        arbitrary     remoteRef  retrieves a string from secrets manager and sets it for specified  secretKey     dataFrom  retrieves a string from secrets manager and tries to parse it as JSON object setting the key values pairs in resulting Kubernetes secret if successful       username password    remoteRef  
requires a  property  to be set for either  username  or  password  to retrieve respective fields from the secrets manager secret and set in specified  secretKey     dataFrom  retrieves both  username  and  password  fields from the secrets manager secret and sets appropriate key value pairs in the resulting Kubernetes secret       iam credentials    remoteRef  retrieves an apikey from secrets manager and sets it for specified  secretKey     dataFrom  retrieves an apikey from secrets manager and sets it for the  apikey  Kubernetes secret key       service credentials    remoteRef  retrieves the credentials object from secrets manager and sets it for specified  secretKey     dataFrom  retrieves the credential object as a map from secrets manager and sets appropriate key value pairs in the resulting Kubernetes secret       imported cert  public cert  and private cert    remoteRef  requires a  property  to be set for either  certificate    private key  or  intermediate  to retrieve respective fields from the secrets manager secret and set in specified  secretKey     dataFrom  retrieves all  certificate    private key  and  intermediate  fields from the secrets manager secret and sets appropriate key value pairs in the resulting Kubernetes secret       kv   An optional  property  field can be set to  remoteRef  to select requested key from the KV secret  If not set  the entire secret will be returned    dataFrom  retrieves a string from secrets manager and tries to parse it as JSON object setting the key values pairs in resulting Kubernetes secret if successful  It could be either used with the methods      Extract  to extract multiple key value pairs from one secret  with optional  property  field being supported as well       Find  to find secrets based on tags or regular expressions and allows finding multiple external secrets and map them into a single Kubernetes secret     json      key1    val1      key2    val2      key3          keyA    valA        keyB    valB 
         special key    special content            yaml data    secretKey  key3 keyB   remoteRef      key   kv aaaaa bbbb cccc dddd eeeeee      property   key3 keyB    secretKey  special key   remoteRef      key   kv aaaaa bbbb cccc dddd eeeeee      property   special key    secretKey  key all   remoteRef      key   kv aaaaa bbbb cccc dddd eeeeee          yaml dataFrom      extract      key   kv fffff gggg iiii dddd eeeeee   mandatory     decodingStrategy  Base64  optional          yaml dataFrom      find        name    matches any secret name ending in foo bar         regexp   key   assumption that secrets are stored like  comp key1  key2 trigger  and comp trigger keygen within the secret manager     find        tags   matches any secrets with the following metadata labels         environment   dev          application   BFF       results in     yaml data      secrets from data   key3 keyB       valB   special key       special content   key all         key1   val1   key2   val2       special key   special content        secrets from dataFrom with extract method   keyA       1st key value pair from JSON object   keyB       2nd key value pair from JSON object   keyC       3rd key value pair from JSON object      secrets from dataFrom with find regex method    comp key1       secret value for  comp key1   key2 trigger       secret value for key2 trigger    comp trigger keygen       secret value for comp trigger keygen      secrets from dataFrom with find tags method   bffA        bffB        bffC                 Creating external secret  To create a kubernetes secret from the IBM Secrets Manager  a  Kind ExternalSecret  is needed  Below example creates a kubernetes secret based on ID of the secret in Secrets Manager      yaml    include  ibm external secret yaml          Alternatively  the secret name along with its secret group name can be specified instead of secret ID to fetch the secret      yaml    include  ibm external secret by name yaml              Getting 
the Kubernetes secret The operator will fetch the IBM Secret Manager secret and inject it as a  Kind Secret      kubectl get secret secret to be created  n  namespace     o jsonpath    data test     base64  d          Populating the Kubernetes secret with metadata from IBM Secrets Manager Provider ESO can add metadata while creating or updating a Kubernetes secret to be reflected in its labels or annotations  The metadata could be any of the fields that are supported and returned in the response by IBM Secrets Manager   In order for the user to opt in to adding metadata to secret  an existing optional field  spec dataFrom extract metadataPolicy  can be set to  Fetch   its default value being  None   In addition to this  templating provided be ESO can be leveraged to specify the key value pairs of the resultant secrets  labels and annotation   In order for the required metadata to be populated in the Kubernetes secret  combination of below should be provided in the External Secrets resource  1  The required metadata should be specified under  template metadata labels  or  template metadata annotations   2  The required secret data should be specified under  template data   3  The spec dataFrom extract should be specified with details of the Secrets Manager secret with  spec dataFrom extract metadataPolicy  set to  Fetch   Below is an example  where  secret id  and  updated at  are the metadata of a secret in IBM Secrets Manager      yaml    include  ibm external secret with metadata yaml          While the secret is being reconciled  it will have the secret data along with the required annotations  Below is the example of the secret after reconciliation      yaml apiVersion  v1 data    secret  OHE0MFV5MGhQb2FmRjZTOGVva3dPQjRMeVZXeXpWSDlrSWgyR1BiVDZTMyc  immutable  false kind  Secret metadata    annotations      reconcile external secrets io data hash  02217008d13ed228e75cf6d26fe74324     creationTimestamp   2023 05 04T08 41 24Z      secret id   1234      updated at  
2023 05 04T08 57 19Z   name  database credentials   namespace  external secrets   ownerReferences      apiVersion  external secrets io v1beta1     blockOwnerDeletion  true     controller  true     kind  ExternalSecret     name  database credentials     uid  c2a018e7 1ac3 421b bd3b d9497204f843    resourceVersion   1803567   immutable for a user    uid  f5dff604 611b 4d41 9d65 b860c61a0b8d  immutable for a user type  Opaque    "}
{"questions":"external secrets External Secrets Operator integrates with Warning The External Secrets Operator secure usage involves taking several measures Please see for more information The BT provider supports retrieval of a secret from BeyondInsight Password Safe versions 23 1 or greater BeyondTrust Password Safe Prerequisites Warning If the BT provider secret is deleted it will still exist in the Kubernetes secrets","answers":"## BeyondTrust Password Safe\n\nExternal Secrets Operator integrates with [BeyondTrust Password Safe](https:\/\/www.beyondtrust.com\/docs\/beyondinsight-password-safe\/).\n\nWarning: The External Secrets Operator secure usage involves taking several measures. Please see [Security Best Practices](https:\/\/external-secrets.io\/latest\/guides\/security-best-practices\/) for more information.\n\nWarning: If the BT provider secret is deleted it will still exist in the Kubernetes secrets.\n\n### Prerequisites\nThe BT provider supports retrieval of a secret from BeyondInsight\/Password Safe versions 23.1 or greater.\n\nFor this provider to retrieve a secret the Password Safe\/Secrets Safe instance must be preconfigured with the secret in question and authorized to read it.\n\n### Authentication\n\nBeyondTrust [OAuth Authentication](https:\/\/www.beyondtrust.com\/docs\/beyondinsight-password-safe\/ps\/admin\/configure-api-registration.htm).\n\n1. Create an API access registration in BeyondInsight\n2. Create or use an existing Secrets Safe Group\n3. Create or use an existing Application User\n4. Add API registration to the Application user\n5. Add the user to the group\n6. 
Add the Secrets Safe Feature to the group\n\n> NOTE: The ClientID and ClientSecret must be stored in a Kubernetes secret in order for the SecretStore to read the configuration.\n\nIf you're using client credentials authentication:\n```sh\nkubectl create secret generic bt-secret --from-literal ClientSecret=\"<your secret>\"\nkubectl create secret generic bt-id --from-literal ClientId=\"<your ID>\"\n```\n\nIf you're using API Key authentication:\n```sh\nkubectl create secret generic bt-apikey --from-literal ApiKey=\"<your apikey>\"\n```\n\n### Client Certificate\n\nIf using `retrievalType: MANAGED_ACCOUNT`, you will also need to download the pfx certificate from Secrets Safe, extract that certificate and create two Kubernetes secrets.\n\n```sh\nopenssl pkcs12 -in client_certificate.pfx -nocerts -out ps_key.pem -nodes\nopenssl pkcs12 -in client_certificate.pfx -clcerts -nokeys -out ps_cert.pem\n\n# Copy the text from the ps_key.pem to a file.\n-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n\n# Copy the text from the ps_cert.pem to a file.\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n\nkubectl create secret generic bt-certificate --from-file=ClientCertificate=.\/ps_cert.pem\nkubectl create secret generic bt-certificatekey --from-file=ClientCertificateKey=.\/ps_key.pem\n```\n\n### Creating a SecretStore\n\nYou can follow the below example to create a `SecretStore` resource.\nYou can also use a `ClusterSecretStore` allowing you to reference secrets from all namespaces. 
See [ClusterSecretStore](https:\/\/external-secrets.io\/latest\/api\/clustersecretstore\/).\n\n```sh\nkubectl apply -f secret-store.yml\n```\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: secretstore-beyondtrust\nspec:\n  provider:\n    beyondtrust:\n      server:\n        apiUrl: https:\/\/example.com:443\/BeyondTrust\/api\/public\/v3\/\n        retrievalType: MANAGED_ACCOUNT  # or SECRET\n        verifyCA: true\n        clientTimeOutSeconds: 45\n      auth:\n        certificate: # omit certificates if retrievalType is SECRET\n          secretRef:\n            name: bt-certificate\n            key: ClientCertificate\n        certificateKey:\n          secretRef:\n            name: bt-certificatekey\n            key: ClientCertificateKey\n        clientSecret: # define this section if using client credentials authentication\n          secretRef:\n            name: bt-secret\n            key: ClientSecret\n        clientId: # define this section if using client credentials authentication\n          secretRef:\n            name: bt-id\n            key: ClientId\n        apiKey: # define this section if using API Key authentication\n          secretRef:\n            name: bt-apikey\n            key: ApiKey\n```\n\n### Creating an ExternalSecret\n\nYou can follow the below example to create an `ExternalSecret` resource. 
Secrets can be referenced by path.\nYou can also use a `ClusterExternalSecret` allowing you to reference secrets from all namespaces.\n\n```sh\nkubectl apply -f external-secret.yml\n```\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: beyondtrust-external-secret\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    kind: SecretStore\n    name: secretstore-beyondtrust\n  target:\n    name: my-beyondtrust-secret # name of secret to create in k8s secrets (etcd)\n    creationPolicy: Owner\n  data:\n    - secretKey: secretKey\n      remoteRef:\n        key: system01\/managed_account01\n```\n\n### Get the K8s secret\n\n```shell\n# WARNING: this command will reveal the stored secret in plain text\nkubectl get secret my-beyondtrust-secret -o jsonpath=\"{.data.secretKey}\" | base64 --decode && echo\n```","site":"external secrets","answers_cleaned":"   BeyondTrust Password Safe  External Secrets Operator integrates with  BeyondTrust Password Safe  https   www beyondtrust com docs beyondinsight password safe     Warning  The External Secrets Operator secure usage involves taking several measures  Please see  Security Best Practices  https   external secrets io latest guides security best practices   for more information   Warning  If the BT provider secret is deleted it will still exist in the Kubernetes secrets       Prerequisites The BT provider supports retrieval of a secret from BeyondInsight Password Safe versions 23 1 or greater   For this provider to retrieve a secret the Password Safe Secrets Safe instance must be preconfigured with the secret in question and authorized to read it       Authentication  BeyondTrust  OAuth Authentication  https   www beyondtrust com docs beyondinsight password safe ps admin configure api registration htm    1  Create an API access registration in BeyondInsight 2  Create or use an existing Secrets Safe Group 3  Create or use an existing Application User 4  Add API registration to the 
Application user 5  Add the user to the group 6  Add the Secrets Safe Feature to the group    NOTE  The ClientID and ClientSecret must be stored in a Kubernetes secret in order for the SecretStore to read the configuration   If you re using client credentials authentication     sh kubectl create secret generic bt secret   from literal ClientSecret   your secret   kubectl create secret generic bt id   from literal ClientId   your ID        If you re using API Key authentication     sh kubectl create secret generic bt apikey   from literal ApiKey   your apikey            Client Certificate  If using  retrievalType  MANAGED ACCOUNT   you will also need to download the pfx certificate from Secrets Safe  extract that certificate and create two Kubernetes secrets      sh openssl pkcs12  in client certificate pfx  nocerts  out ps key pem  nodes openssl pkcs12  in client certificate pfx  clcerts  nokeys  out ps cert pem    Copy the text from the ps key pem to a file       BEGIN PRIVATE KEY               END PRIVATE KEY         Copy the text from the ps cert pem to a file       BEGIN CERTIFICATE               END CERTIFICATE       kubectl create secret generic bt certificate   from file ClientCertificate   ps cert pem kubectl create secret generic bt certificatekey   from file ClientCertificateKey   ps key pem          Creating a SecretStore  You can follow the below example to create a  SecretStore  resource  You can also use a  ClusterSecretStore  allowing you to reference secrets from all namespaces   ClusterSecretStore  https   external secrets io latest api clustersecretstore       sh kubectl apply  f secret store yml         yaml apiVersion  external secrets io v1beta1 kind  SecretStore metadata    name  secretstore beyondtrust spec    provider      beyondtrust        server          apiUrl  https   example com 443 BeyondTrust api public v3          retrievalType  MANAGED ACCOUNT    or SECRET         verifyCA  true         clientTimeOutSeconds  45       auth           
certificate    omit certificates if retrievalType is SECRET           secretRef              name  bt certificate             key  ClientCertificate         certificateKey            secretRef              name  bt certificatekey             key  ClientCertificateKey         clientSecret    define this section if using client credentials authentication           secretRef              name  bt secret             key  ClientSecret         clientId    define this section if using client credentials authentication           secretRef              name  bt id             key  ClientId         apiKey    define this section if using Api Key authentication           secretRef              name  bt apikey             key  ApiKey          Creating an ExternalSecret  You can follow the below example to create a  ExternalSecret  resource  Secrets can be referenced by path  You can also use a  ClusterExternalSecret  allowing you to reference secrets from all namespaces      sh kubectl apply  f external secret yml         yaml apiVersion  external secrets io v1beta1 kind  ExternalSecret metadata    name  beyondtrust external secret spec    refreshInterval  1h   secretStoreRef      kind  SecretStore     name  secretstore beyondtrust   target      name  my beyondtrust secret   name of secret to create in k8s secrets  etcd      creationPolicy  Owner   data        secretKey  secretKey       remoteRef          key  system01 managed account01          Get the K8s secret     shell   WARNING  this command will reveal the stored secret in plain text kubectl get secret my beyondtrust secret  o jsonpath    data secretKey     base64   decode    echo   "}
{"questions":"external secrets Sync environments configs and secrets from to Kubernetes using the External Secrets Operator Authentication More information about setting up ESC can be found in the Pulumi ESC","answers":"## Pulumi ESC\n\nSync environments, configs and secrets from [Pulumi ESC](https:\/\/www.pulumi.com\/product\/esc\/) to Kubernetes using the External Secrets Operator.\n\n![Pulumi ESC](..\/pictures\/pulumi-esc.png)\n\nMore information about setting up [Pulumi](https:\/\/www.pulumi.com\/) ESC can be found in the [Pulumi ESC documentation](https:\/\/www.pulumi.com\/docs\/esc\/).\n\n### Authentication\n\nPulumi [Access Tokens](https:\/\/www.pulumi.com\/docs\/pulumi-cloud\/access-management\/access-tokens\/) are recommended to access Pulumi ESC.\n\n### Creating a SecretStore\n\nA Pulumi `SecretStore` can be created by specifying the `organization`, `project` and `environment` and referencing a Kubernetes secret containing the `accessToken`.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: secret-store\nspec:\n  provider:\n    pulumi:\n      organization: <NAME_OF_THE_ORGANIZATION>\n      project: <NAME_OF_THE_PROJECT>\n      environment: <NAME_OF_THE_ENVIRONMENT>\n      accessToken:\n        secretRef:\n          name: <NAME_OF_KUBE_SECRET>\n          key: <KEY_IN_KUBE_SECRET>\n```\n\nIf required, the API URL (`apiUrl`) can be customized as well. 
If not specified, the default value is `https:\/\/api.pulumi.com\/api\/esc`.\n\n### Creating a ClusterSecretStore\n\nSimilarly, a `ClusterSecretStore` can be created by specifying the `namespace` and referencing a Kubernetes secret containing the `accessToken`.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ClusterSecretStore\nmetadata:\n  name: secret-store\nspec:\n  provider:\n    pulumi:\n      organization: <NAME_OF_THE_ORGANIZATION>\n      project: <NAME_OF_THE_PROJECT>\n      environment: <NAME_OF_THE_ENVIRONMENT>\n      accessToken:\n        secretRef:\n          name: <NAME_OF_KUBE_SECRET>\n          key: <KEY_IN_KUBE_SECRET>\n          namespace: <NAMESPACE>\n```\n\n### Referencing Secrets\n\nSecrets can be referenced by defining the `key` containing the JSON path to the secret. Pulumi ESC secrets are internally organized as a JSON object.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: secret\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    kind: SecretStore\n    name: secret-store\n  data:\n  - secretKey: <KEY_IN_KUBE_SECRET>\n    remoteRef:\n      key: <PULUMI_PATH_SYNTAX>\n```\n\n**Note:** `key` is not following the JSON Path syntax, but rather the Pulumi path syntax.\n\n#### Examples\n\n* root\n* root.nested\n* root[\"nested\"]\n* root.double.nest\n* root[\"double\"].nest\n* root[\"double\"][\"nest\"]\n* root.array[0]\n* root.array[100]\n* root.array[0].nested\n* root.array[0][1].nested\n* root.nested.array[0].double[1]\n* root[\"key with \\\"escaped\\\" quotes\"]\n* root[\"key with a .\"]\n* [\"root key with \\\"escaped\\\" quotes\"].nested\n* [\"root key with a .\"][100]\n* root.array[*].field\n* root.array[\"*\"].field\n\nSee [Pulumi's documentation](https:\/\/www.pulumi.com\/docs\/concepts\/options\/ignorechanges\/) for more information.\n\n### PushSecrets\n\nWith the latest release of Pulumi ESC, secrets can be pushed to the Pulumi service. 
This can be done by creating a `PushSecrets` object.\n\nHere is a basic example of how to define a `PushSecret` object:\n\n```yaml\napiVersion: external-secrets.io\/v1alpha1\nkind: PushSecret\nmetadata:\n  name: push-secret-example\nspec:\n  refreshInterval: 1h\n  selector:\n    secret:\n      name: <NAME_OF_KUBE_SECRET>\n  secretStoreRefs:\n  - kind: ClusterSecretStore\n    name: secret-store\n  data:\n  - match:\n      secretKey: <KEY_IN_KUBE_SECRET>\n      remoteRef:\n        remoteKey: <PULUMI_PATH_SYNTAX>\n```\n\nThis will then push the secret to the Pulumi service. If the secret already exists, it will be updated.\n\n### Limitations\n\nCurrently, the Pulumi provider only supports nested objects up to a depth of 1. Any nested objects beyond this depth will be stored as a string with the JSON representation.\n\nThis Pulumi ESC example:\n\n```yaml\nvalues:\n  backstage:\n    my: test\n    test: hello\n    test22:\n      my: hello\n    test33:\n      world: true\n    x: true\n    num: 42\n```\n\nWill result in the following Kubernetes secret:\n\n```yaml\nmy: test\nnum: \"42\"\ntest: hello\ntest22: '{\"my\":{\"trace\":{\"def\":{\"begin\":{\"byte\":72,\"column\":11,\"line\":6},\"end\":{\"byte\":77,\"column\":16,\"line\":6},\"environment\":\"tgif-demo\"}},\"value\":\"hello\"}}'\ntest33: '{\"world\":{\"trace\":{\"def\":{\"begin\":{\"byte\":103,\"column\":14,\"line\":8},\"end\":{\"byte\":107,\"column\":18,\"line\":8},\"environment\":\"tgif-demo\"}},\"value\":true}}'\nx: \"true\"\n```","site":"external secrets","answers_cleaned":"   Pulumi ESC  Sync environments  configs and secrets from  Pulumi ESC  https   www pulumi com product esc   to Kubernetes using the External Secrets Operator     Pulumi ESC     pictures pulumi esc png   More information about setting up  Pulumi  https   www pulumi com   ESC can be found in the  Pulumi ESC documentation  https   www pulumi com docs esc         Authentication  Pulumi  Access Tokens  https   www pulumi com docs pulumi cloud 
access management access tokens   are recommended to access Pulumi ESC       Creating a SecretStore  A Pulumi  SecretStore  can be created by specifying the  organization    project  and  environment  and referencing a Kubernetes secret containing the  accessToken       yaml apiVersion  external secrets io v1beta1 kind  SecretStore metadata    name  secret store spec    provider      pulumi        organization   NAME OF THE ORGANIZATION        project   NAME OF THE PROJECT        environment   NAME OF THE ENVIRONMENT        accessToken          secretRef            name   NAME OF KUBE SECRET            key   KEY IN KUBE SECRET       If required  the API URL   apiUrl   can be customized as well  If not specified  the default value is  https   api pulumi com api esc        Creating a ClusterSecretStore  Similarly  a  ClusterSecretStore  can be created by specifying the  namespace  and referencing a Kubernetes secret containing the  accessToken       yaml apiVersion  external secrets io v1beta1 kind  ClusterSecretStore metadata    name  secret store spec    provider      pulumi        organization   NAME OF THE ORGANIZATION        project   NAME OF THE PROJECT        environment   NAME OF THE ENVIRONMENT        accessToken          secretRef            name   NAME OF KUBE SECRET            key   KEY IN KUBE SECRET            namespace   NAMESPACE           Referencing Secrets  Secrets can be referenced by defining the  key  containing the JSON path to the secret  Pulumi ESC secrets are internally organized as a JSON object      yaml apiVersion  external secrets io v1beta1 kind  ExternalSecret metadata    name  secret spec    refreshInterval  1h   secretStoreRef      kind  SecretStore     name  secret store   data      secretKey   KEY IN KUBE SECRET      remoteRef        key   PULUMI PATH SYNTAX         Note     key  is not following the JSON Path syntax  but rather the Pulumi path syntax        Examples    root   root nested   root  nested     root double nest   root  
double   nest   root  double    nest     root array 0    root array 100    root array 0  nested   root array 0  1  nested   root nested array 0  double 1    root  key with   escaped   quotes     root  key with a         root key with   escaped   quotes   nested     root key with a     100    root array    field   root array      field  See  Pulumi s documentation  https   www pulumi com docs concepts options ignorechanges   for more information       PushSecrets  With the latest release of Pulumi ESC  secrets can be pushed to the Pulumi service  This can be done by creating a  PushSecrets  object   Here is a basic example of how to define a  PushSecret  object      yaml apiVersion  external secrets io v1alpha1 kind  PushSecret metadata    name  push secret example spec    refreshInterval  1h   selector      secret        name   NAME OF KUBE SECRET    secretStoreRefs      kind  ClusterSecretStore     name  secret store   data      match        secretKey   KEY IN KUBE SECRET        remoteRef          remoteKey   PULUMI PATH SYNTAX       This will then push the secret to the Pulumi service  If the secret already exists  it will be updated       Limitations  Currently  the Pulumi provider only supports nested objects up to a depth of 1  Any nested objects beyond this depth will be stored as a string with the JSON representation   This Pulumi ESC example      yaml values    backstage      my  test     test  hello     test22        my  hello     test33        world  true     x  true     num  42      Will result in the following Kubernetes secret      yaml my  test num   42  test  hello test22     my    trace    def    begin    byte  72  column  11  line  6   end    byte  77  column  16  line  6   environment   tgif demo     value   hello     test33     world    trace    def    begin    byte  103  column  14  line  8   end    byte  107  column  18  line  8   environment   tgif demo     value  true    x   true     "}
{"questions":"external secrets will enable users to seamlessly integrate their Chef based secret management with Kubernetes through the existing External Secrets framework Authentication In many enterprises legacy applications and infrastructure are still tightly integrated with the Chef Chef Infra Server Chef Server Cluster for configuration and secrets management Teams often rely on to securely store sensitive information such as application secrets and infrastructure configurations These data bags serve as a centralized repository for managing and distributing sensitive data across the Chef ecosystem Chef NOTE is designed only to fetch data from the Chef data bags into Kubernetes secrets it won t update delete any item in the data bags","answers":"## Chef\n\n`Chef External Secrets provider` will enable users to seamlessly integrate their Chef-based secret management with Kubernetes through the existing External Secrets framework.\n\nIn many enterprises, legacy applications and infrastructure are still tightly integrated with the Chef\/Chef Infra Server\/Chef Server Cluster for configuration and secrets management. Teams often rely on [Chef data bags](https:\/\/docs.chef.io\/data_bags\/) to securely store sensitive information such as application secrets and infrastructure configurations. These data bags serve as a centralized repository for managing and distributing sensitive data across the Chef ecosystem.\n\n**NOTE:** `Chef External Secrets provider` is designed only to fetch data from the Chef data bags into Kubernetes secrets, it won't update\/delete any item in the data bags. \n\n### Authentication\n\nEvery request made to the Chef Infra server needs to be authenticated. [Authentication](https:\/\/docs.chef.io\/server\/auth\/) is done using the Private keys of the Chef Users.  
The User needs to have appropriate [Permissions](https:\/\/docs.chef.io\/server\/server_orgs\/#permissions) to the data bags containing the data that they want to fetch using the External Secrets Operator.\n\nThe following command can be used to create Chef Users:\n```sh\nchef-server-ctl user-create USER_NAME FIRST_NAME [MIDDLE_NAME] LAST_NAME EMAIL 'PASSWORD' (options)\n```\n\nMore details on the above command are available in [Chef User Create Options](https:\/\/docs.chef.io\/server\/server_users\/#user-create). The above command will return the default private key (PRIVATE_KEY_VALUE), which we will use for authentication. Additionally, for a Chef User with access to specific data bags, a private key pair with an expiration date can be created with the [knife user key](https:\/\/docs.chef.io\/server\/auth\/#knife-user-key) command.\n\n### Create a secret containing your private key\n\nWe need to store the above User's private key in a Kubernetes Secret resource.\nExample:\n```sh\nkubectl create secret generic chef-user-secret -n vivid --from-literal=user-private-key='PRIVATE_KEY_VALUE'\n```\n\n### Creating ClusterSecretStore\n\nThe Chef `ClusterSecretStore` is a cluster-scoped SecretStore that can be referenced by all Chef `ExternalSecrets` from all namespaces.
You can follow the below example to create a `ClusterSecretStore` resource.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ClusterSecretStore\nmetadata:\n  name: vivid-clustersecretstore # name of ClusterSecretStore\nspec:\n  provider:\n    chef:\n      username: user # Chef User name\n      serverUrl: https:\/\/manage.chef.io\/organizations\/testuser\/ # Chef server URL\n      auth:\n        secretRef:\n          privateKeySecretRef:\n            key: user-private-key # name of the key inside Secret resource\n            name: chef-user-secret # name of Kubernetes Secret resource containing the Chef User's private key\n            namespace: vivid # the namespace in which the above Secret resource resides\n```\n\n### Creating SecretStore\n\nChef `SecretStores` are bound to a namespace and cannot reference resources across namespaces. For cross-namespace SecretStores, you must use Chef `ClusterSecretStores`.\n\nYou can follow the below example to create a `SecretStore` resource.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: vivid-secretstore # name of SecretStore\n  namespace: vivid # required for kind: SecretStore\nspec:\n  provider:\n    chef:\n      username: user # Chef User name\n      serverUrl: https:\/\/manage.chef.io\/organizations\/testuser\/ # Chef server URL\n      auth:\n        secretRef:\n          privateKeySecretRef:\n            name: chef-user-secret # name of Kubernetes Secret resource containing the Chef User's private key\n            key: user-private-key # name of the key inside Secret resource\n            namespace: vivid # the namespace where the Kubernetes Secret containing the Chef User's private key resides\n```\n\n### Creating ExternalSecret\n\nThe Chef `ExternalSecret` describes what data should be fetched from Chef Data bags, and how the data should be transformed and saved as a Kind=Secret.\n\nYou can follow the below example to create an `ExternalSecret`
resource.\n```yaml\n{% include 'chef-external-secret.yaml' %}\n```\n\nWhen the above `ClusterSecretStore` and `ExternalSecret` resources are created, the `ExternalSecret` will connect to the Chef Server using the private key, fetch the data bag items, and store them in the `vivid-credentials` secret resource.\n\nTo get all data items inside the data bag, you can use the `dataFrom` directive:\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: vivid-external-secrets # name of ExternalSecret\n  namespace: vivid # namespace inside which the ExternalSecret will be created\n  annotations:\n    company\/contacts: user.a@company.com, user.b@company.com\n    company\/team: vivid-dev\n  labels:\n    app.kubernetes.io\/name: external-secrets\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    name: vivid-clustersecretstore # name of ClusterSecretStore\n    kind: ClusterSecretStore\n  dataFrom:\n  - extract:\n      key: vivid_global # only data bag name\n  target:\n    name: vivid_global_all_cred # name of Kubernetes Secret resource that will be created and will contain the obtained secrets\n    creationPolicy: Owner\n```\n\nFollow [this file](https:\/\/github.com\/external-secrets\/external-secrets\/blob\/main\/apis\/externalsecrets\/v1beta1\/secretstore_chef_types.go) for more info.","site":"external secrets"}
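The Chef record above stores the User's private key with `kubectl create secret ... --from-literal`; as with any Kubernetes Secret, that value ends up base64-encoded under `.data`. A quick round-trip illustration (using the same `PRIVATE_KEY_VALUE` placeholder as the docs, not a real key):

```python
import base64

# 'PRIVATE_KEY_VALUE' is the placeholder from the kubectl example above;
# kubectl base64-encodes --from-literal values into the Secret's .data field.
raw = "PRIVATE_KEY_VALUE"
encoded = base64.b64encode(raw.encode()).decode()

# What a consumer (such as the operator) reads back out of the Secret:
decoded = base64.b64decode(encoded).decode()
print(decoded)  # -> PRIVATE_KEY_VALUE
```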
{"questions":"external secrets Secrets Manager Configuration SMC Authentication External Secrets Operator integrates with for secret management by using Keeper Security KSM can authenticate using One Time Access Token or Secret Manager Configuration In order to work with External Secret Operator we need to configure a Secret Manager Configuration","answers":"## Keeper Security\n\nExternal Secrets Operator integrates with [Keeper Security](https:\/\/www.keepersecurity.com\/) for secret management by using [Keeper Secrets Manager](https:\/\/docs.keeper.io\/secrets-manager\/secrets-manager\/about).\n\n## Authentication\n\n### Secrets Manager Configuration (SMC)\n\nKSM can authenticate using *One Time Access Token* or *Secret Manager Configuration*. To work with the External Secrets Operator, we need to configure a Secret Manager Configuration.\n\n#### Creating Secrets Manager Configuration\n\nYou can find the documentation for Secret Manager Configuration creation [here](https:\/\/docs.keeper.io\/secrets-manager\/secrets-manager\/about\/secrets-manager-configuration). Make sure you add the proper permissions to your device in order to be able to read and write secrets.\n\nOnce you have created your SMC, you will get a config.json file or a base64-encoded JSON string containing the following keys:\n\n- `hostname`\n- `clientId`\n- `privateKey`\n- `serverPublicKeyId`\n- `appKey`\n- `appOwnerPublicKey`\n\nThis base64-encoded JSON string will be required to create your SecretStores.\n\n## Important note about this documentation\n_**Keeper Security calls the entries in vaults 'Records'. These docs use the same term.**_\n\n### Update secret store\nBe sure the `keepersecurity` provider is listed in the `Kind=SecretStore`:\n\n```yaml\n{% include 'keepersecurity-secret-store.yaml' %}\n```\n\n**NOTE 1:** `folderID` targets the folder ID where the secrets should be pushed to.
It requires write permissions within the folder.\n\n**NOTE 2:** In case of a `ClusterSecretStore`, be sure to provide `namespace` for `SecretAccessKeyRef` with the namespace of the secret that we just created.\n\n## External Secrets\n### Behavior\n* How a Record is equated to an ExternalSecret:\n    * `remoteRef.key` is equated to a Record's ID\n    * `remoteRef.property` is equated to one of the following options:\n        * Fields: [Record's field's Type](https:\/\/docs.keeper.io\/secrets-manager\/secrets-manager\/about\/field-record-types)\n        * CustomFields: Record's field's Label\n        * Files: Record's file's Name\n        * If empty, defaults to the complete Record in JSON format\n    * `remoteRef.version` is currently not supported.\n* `dataFrom`:\n    * `find.path` is currently not supported.\n    * `find.name.regexp` is equated to one of the following options:\n        * Fields: Record's field's Type\n        * CustomFields: Record's field's Label\n        * Files: Record's file's Name\n    * `find.tags` are not supported at this time.\n\n**NOTE:** For complex [types](https:\/\/docs.keeper.io\/secrets-manager\/secrets-manager\/about\/field-record-types), like name, phone, and bankAccount, which do not map to a single string value, external secrets will return the complete JSON string.
Use the JSON template functions to decode it.\n\n### Creating external secret\nTo create a Kubernetes secret from a Keeper Secrets Manager secret, a `Kind=ExternalSecret` is needed.\n\n```yaml\n{% include 'keepersecurity-external-secret.yaml' %}\n```\n\nThe operator will fetch the Keeper Secrets Manager secret and inject it as a `Kind=Secret`:\n```\nkubectl get secret secret-to-be-created -n <namespace> -o jsonpath='{.data.dev-secret-test}' | base64 -d\n```\n\n## Limitations\n\nThere are some limitations using this provider.\n\n* Keeper Secrets Manager does not work with `General` Record types nor legacy non-typed records\n* Using tags `find.tags` is not supported by KSM\n* Using path `find.path` is not supported at the moment\n\n## Push Secrets\n\nPush Secret will only work with a custom Keeper Security Record type `externalSecrets`.\n\n### Behavior\n* `selector`:\n  * `secret.name`: name of the Kubernetes secret to be pushed\n* `data.match`:\n  * `secretKey`: key on the selected secret to be pushed\n  * `remoteRef.remoteKey`: secret and key to be created on the remote provider\n    * Format: SecretName\/SecretKey\n\n### Creating push secret\nTo create a Keeper Security record from Kubernetes, a `Kind=PushSecret` is needed.\n\n```yaml\n{% include 'keepersecurity-push-secret.yaml' %}\n```\n\n### Limitations\n* Only possible to push one key per secret at the moment\n* If the record with the selected name exists but the key does not exist, the record cannot be updated.
See [Ability to add custom fields to existing secret #17](https:\/\/github.com\/Keeper-Security\/secrets-manager-go\/issues\/17).","site":"external secrets"}
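As the Keeper record above notes, complex field types such as `name` come back as a complete JSON string rather than a single value, and must be decoded (the docs suggest the JSON template functions). A small sketch of that decoding; the field structure shown here is illustrative only, not Keeper's exact schema:

```python
import json

# Hypothetical payload for a complex 'name' field returned as one JSON
# string by the provider (the structure is illustrative, not authoritative).
raw_field = '[{"first": "Ada", "middle": "", "last": "Lovelace"}]'

parsed = json.loads(raw_field)           # decode the JSON string
print(parsed[0]["first"], parsed[0]["last"])  # -> Ada Lovelace
```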
{"questions":"external secrets around the SDK that runs as a separate service providing ESO with a light REST API to pull secrets through size the bitwarden Rust SDK libraries are over 150MB in size it has been decided to create a soft wrapper Prerequisites Bitwarden Secrets Manager Provider This section describes how to set up the Bitwarden Secrets Manager provider for External Secrets Operator ESO The Bitwarden SDK is Rust based and requires CGO enabled In order to not restrict the capabilities of ESO and the image In order for the bitwarden provider to work we need a second service This service is the","answers":"## Bitwarden Secrets Manager Provider\n\nThis section describes how to set up the Bitwarden Secrets Manager provider for External Secrets Operator (ESO).\n\n### Prerequisites\n\nIn order for the Bitwarden provider to work, we need a second service. This service is the [Bitwarden SDK Server](https:\/\/github.com\/external-secrets\/bitwarden-sdk-server).\nThe Bitwarden SDK is Rust based and requires CGO to be enabled. To avoid restricting the capabilities of ESO and inflating the image size (the Bitwarden Rust SDK libraries are over 150MB), a soft wrapper was created around the SDK that runs as a separate service, providing ESO with a light REST API to pull secrets through.\n\n#### Bitwarden SDK server\n\nThe server itself can be installed together with ESO. The ESO Helm Chart packages this service as a dependency.\nThe Bitwarden SDK Server's full name is hardcoded to bitwarden-sdk-server so that the exposed service URL gets a determinable endpoint.\n\nTo install the service, install ESO with the following Helm directive:\n\n```\nhelm install external-secrets \\\n   external-secrets\/external-secrets \\\n    -n external-secrets \\\n    --create-namespace \\\n    --set bitwarden-sdk-server.enabled=true\n```\n\n##### Certificate\n\nThe Bitwarden SDK Server _NEEDS_ to run as an HTTPS service.
That means that any installation that wants to communicate with the Bitwarden\nprovider will need to generate a certificate. The best approach for that is to use cert-manager. It's easy to set up\nand can generate a certificate that the store can use to connect with the server.\n\nFor a sample setup, look at the Bitwarden SDK Server's test setup. It contains a self-signed certificate issuer for\ncert-manager.\n\n### External secret store\n\nWith that out of the way, let's take a look at what a secret store looks like.\n\n```yaml\n{% include 'bitwarden-secrets-manager-secret-store.yaml' %}\n```\n\nThe API URL and identity URL are optional. The secret should contain the token for the Bitwarden machine account.\n\n!!! note inline end\nMake sure that the machine account has Read-Write access to the Project that the secrets are in.\n\n!!! note inline end\nA secret store is organization\/project dependent, meaning 1 store == 1 organization\/project. This ensures\nthat no other project's secrets can be modified accidentally _or_ intentionally.\n\n### External Secrets\n\nThere are two ways to fetch secrets from the provider.\n\n#### Find by UUID\n\nTo fetch a secret by its UUID, simply provide it as the remote key in the external secret, like this:\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: bitwarden\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    # This name must match the metadata.name in the `SecretStore`\n    name: bitwarden-secretsmanager\n    kind: SecretStore\n  data:\n  - secretKey: test\n    remoteRef:\n      key: \"339062b8-a5a1-4303-bf1d-b1920146a622\"\n```\n\n#### Find by Name\n\nTo find a secret using its name, we need a bit more information.
Mainly, these are the rules to find a secret:\n\n- if the name is a UUID, we get the secret directly\n- if the name is NOT a UUID, a property defining the projectID to look in is mandatory\n- if name + projectID + organizationID matches, we return that secret\n- if more than one secret with the same name exists for the same projectID within the same organization, we error\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: bitwarden\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    # This name must match the metadata.name in the `SecretStore`\n    name: bitwarden-secretsmanager\n    kind: SecretStore\n  data:\n  - secretKey: test\n    remoteRef:\n      key: \"secret-name\"\n```\n\n### Push Secret\n\nPushing a secret is also implemented. Pushing a secret requires even more restrictions, because Bitwarden Secrets Manager\nallows creating the same secret with the same key multiple times. To avoid overwriting, or potentially returning\nthe wrong secret, we restrict push secret with the following rules:\n\n- if name, projectID, organizationID, value AND note are all equal, we won't push it again\n- if name, projectID and organizationID match but the value (including the note) differs, we update\n- if none of the above holds, we create the secret (this means it may create a secret in a separate project)\n\n```yaml\napiVersion: external-secrets.io\/v1alpha1\nkind: PushSecret\nmetadata:\n  name: pushsecret-bitwarden # Customisable\nspec:\n  refreshInterval: 1h # Refresh interval for which push secret will reconcile\n  secretStoreRefs: # A list of secret stores to push secrets to\n    - name: bitwarden-secretsmanager\n      kind: SecretStore\n  selector:\n    secret:\n      name: my-secret # Source Kubernetes secret to be pushed\n  data:\n    - match:\n        secretKey: key # Source Kubernetes secret key to be pushed\n        remoteRef:\n          remoteKey: remote-key-name # Remote reference (where the secret is going to be pushed)\n      metadata:\n        note: \"Note of the secret to add.\"\n```","site":"external secrets"}
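The Bitwarden find-by-name rules above hinge on whether the remote key parses as a UUID. Roughly, the decision looks like this (a sketch of the documented rules, not the provider's actual code; function names are hypothetical):

```python
import uuid
from typing import Optional

def is_uuid(key: str) -> bool:
    """Return True if key parses as a UUID (then it is fetched directly)."""
    try:
        uuid.UUID(key)
        return True
    except ValueError:
        return False

def lookup_mode(key: str, project_id: Optional[str] = None) -> str:
    # Mirrors the rules above: a UUID is fetched directly; otherwise a
    # projectID is mandatory so the name can be resolved within it.
    if is_uuid(key):
        return "by-id"
    if project_id is None:
        raise ValueError("projectID is mandatory when the key is not a UUID")
    return "by-name"

print(lookup_mode("339062b8-a5a1-4303-bf1d-b1920146a622"))  # -> by-id
print(lookup_mode("secret-name", project_id="proj-1"))      # -> by-name
```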
{"questions":"external secrets To use RRSA authentication you should follow to assign the RAM role to external secrets operator We support Access key and RRSA authentication Alibaba Cloud Secrets Manager External Secrets Operator integrates with for secrets and Keys management Authentication","answers":"\n## Alibaba Cloud Secrets Manager\n\nExternal Secrets Operator integrates with [Alibaba Cloud Key Management Service](https:\/\/www.alibabacloud.com\/help\/en\/key-management-service\/latest\/kms-what-is-key-management-service\/) for secrets and Keys management.\n\n### Authentication\n\nWe support Access key and RRSA authentication.\n\nTo use RRSA authentication, you should follow [Use RRSA to authorize pods to access different cloud services](https:\/\/www.alibabacloud.com\/help\/en\/container-service-for-kubernetes\/latest\/use-rrsa-to-enforce-access-control\/) to assign the RAM role to external-secrets operator.\n\n#### Access Key authentication\n\nTo use `accessKeyID` and `accessKeySecrets`, simply create them as a regular `Kind: Secret` beforehand and associate it with the `SecretStore`:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: secret-sample\ndata:\n  accessKeyID: bXlhd2Vzb21lYWNjZXNza2V5aWQ=\n  accessKeySecret: bXlhd2Vzb21lYWNjZXNza2V5c2VjcmV0\n```\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: secretstore-sample\nspec:\n  provider:\n    alibaba:\n      regionID: ap-southeast-1\n      auth:\n        secretRef:\n          accessKeyIDSecretRef:\n            name: secret-sample\n            key: accessKeyID\n          accessKeySecretSecretRef:\n            name: secret-sample\n            key: accessKeySecret\n```\n\n\n#### RRSA authentication\n\nWhen using RRSA authentication we manually project the OIDC token file to pod as volume\n\n```yaml\nextraVolumes:\n  - name: oidc-token\n    projected:\n      sources:\n      - serviceAccountToken:\n          path: oidc-token\n          expirationSeconds: 
7200    # The validity period of the OIDC token in seconds.\n          audience: \"sts.aliyuncs.com\"\n\nextraVolumeMounts:\n  - name: oidc-token\n    mountPath: \/var\/run\/secrets\/tokens\n```\n\nand provide the RAM role ARN and OIDC volume path to the secret store\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: secretstore-sample\nspec:\n  provider:\n    alibaba:\n      regionID: ap-southeast-1\n      auth:\n        rrsa:\n          oidcProviderArn: acs:ram::1234:oidc-provider\/ack-rrsa-ce123456\n          oidcTokenFilePath: \/var\/run\/secrets\/tokens\/oidc-token\n          roleArn: acs:ram::1234:role\/test-role\n          sessionName: secrets\n```\n\n### Creating external secret\n\nTo create a kubernetes secret from the Alibaba Cloud Key Management Service secret a `Kind=ExternalSecret` is needed.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: example\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    name: secretstore-sample\n    kind: SecretStore\n  target:\n    name: example-secret\n    creationPolicy: Owner\n  data:\n    - secretKey: secret-key\n      remoteRef:\n        key: ext-secret\n```","site":"external secrets"}
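The `data` values in the Access Key example above are ordinary base64-encoded strings, as Kubernetes `Kind: Secret` requires for the `data` field. A minimal sketch showing how the sample values are produced (the placeholder credentials `myawesomeaccesskeyid` / `myawesomeaccesskeysecret` are simply what the example's base64 strings decode to):

```python
import base64

def encode_secret_value(raw: str) -> str:
    # Kubernetes `Secret.data` values must be base64-encoded.
    return base64.b64encode(raw.encode()).decode()

# Reproduces the values shown in the `secret-sample` manifest above.
assert encode_secret_value("myawesomeaccesskeyid") == "bXlhd2Vzb21lYWNjZXNza2V5aWQ="
assert encode_secret_value("myawesomeaccesskeysecret") == "bXlhd2Vzb21lYWNjZXNza2V5c2VjcmV0"
```

Alternatively, `stringData` accepts plain values and lets the API server do the encoding.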
{"questions":"external secrets raw First create a SecretStore with a webhook backend We ll use a static user password External Secrets Operator can integrate with simple web apis by specifying the endpoint Generic Webhook Example yaml","answers":"## Generic Webhook\n\nExternal Secrets Operator can integrate with simple web apis by specifying the endpoint\n\n### Example\n\nFirst, create a SecretStore with a webhook backend.  We'll use a static user\/password `test`:\n\n```yaml\n{% raw %}\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: webhook-backend\nspec:\n  provider:\n    webhook:\n      url: \"http:\/\/httpbin.org\/get?parameter=\"\n      result:\n        jsonPath: \"$.args.parameter\"\n      headers:\n        Content-Type: application\/json\n        Authorization: Basic \n      secrets:\n      - name: auth\n        secretRef:\n          name: webhook-credentials\n{%- endraw %}\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: webhook-credentials\ndata:\n  username: dGVzdA== # \"test\"\n  password: dGVzdA== # \"test\"\n```\n\nNB: This is obviously not practical because it just returns the key as the result, but it shows how it works\n\n**NOTE:** In case of a `ClusterSecretStore`, Be sure to provide `namespace` in all `secrets` references with the namespaces where the secrets reside.\n\nNow create an ExternalSecret that uses the above SecretStore:\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: webhook-example\nspec:\n  refreshInterval: \"15s\"\n  secretStoreRef:\n    name: webhook-backend\n    kind: SecretStore\n  target:\n    name: example-sync\n  data:\n  - secretKey: foobar\n    remoteRef:\n      key: secret\n---\n# will create a secret with:\nkind: Secret\nmetadata:\n  name: example-sync\ndata:\n  foobar: c2VjcmV0\n```\n\n#### Limitations\n\nWebhook does not support authorization, other than what can be sent by generating http headers\n\n!!! 
note\n      If a webhook endpoint for a given `ExternalSecret` returns a 404 status code, the secret is considered to have been deleted.  This will trigger the `deletionPolicy` set on the `ExternalSecret`.\n\n### Templating\n\nGeneric WebHook provider uses the templating engine to generate the API call.  It can be used in the url, headers, body and result.jsonPath fields.\n\nThe provider inserts the secret to be retrieved in the object named `remoteRef`.\n\nIn addition, secrets can be added as named objects, for example to use in authorization headers.\nEach secret has a `name` property which determines the name of the object in the templating engine.\n\n### All Parameters\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ClusterSecretStore\nmetadata:\n  name: statervault\nspec:\n  provider:\n    webhook:\n      # Url to call.  Use templating engine to fill in the request parameters\n      url: <url>\n      # http method, defaults to GET\n      method: <method>\n      # Timeout in duration (1s, 1m, etc)\n      timeout: 1s\n      result:\n        # [jsonPath](https:\/\/jsonpath.com) syntax, which also can be templated\n        jsonPath: <jsonPath>\n      # Map of headers, can be templated\n      headers:\n        <Header-Name>: <header contents>\n      # Body to send as request, can be templated (optional)\n      body: <body>\n      # List of secrets to expose to the templating engine\n      secrets:\n      # Use this name to refer to this secret in templating, above\n      - name: <name>\n        secretRef:\n          namespace: <namespace> # Only used in ClusterSecretStores\n          name: <name>\n      # Add CAs here for the TLS handshake\n      caBundle: <base64 encoded cabundle>\n      caProvider:\n        type: Secret or ConfigMap\n        name: <name of secret or configmap>\n        namespace: <namespace> # Only used in ClusterSecretStores\n        key: <key inside secret>\n```\n\n### Webhook as generators\nYou can also leverage webhooks as 
generators, following the same syntax. The only difference is that the webhook generator needs its source secrets to be labeled, as opposed to webhook secretstores. Please see the [generator-webhook](..\/api\/generator\/webhook.md) documentation for more information.","site":"external secrets"}
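The httpbin example above works because `httpbin.org/get` echoes query parameters back under `args`, so the jsonPath `$.args.parameter` returns exactly the key that was appended to the URL. A rough sketch of that flow (not the operator's actual code; the HTTP call is stubbed with the JSON shape httpbin returns, so no network is needed):

```python
import base64
import json

def fake_httpbin_get(url: str) -> str:
    # httpbin.org/get echoes the query parameters back under "args".
    query = url.split("?", 1)[1]
    name, value = query.split("=", 1)
    return json.dumps({"args": {name: value}, "url": url})

def fetch_secret(url_template: str, remote_ref_key: str) -> str:
    url = url_template + remote_ref_key   # url from the SecretStore, key from remoteRef
    body = fake_httpbin_get(url)          # the real provider performs an HTTP GET here
    doc = json.loads(body)
    return doc["args"]["parameter"]       # what jsonPath "$.args.parameter" selects

assert fetch_secret("http://httpbin.org/get?parameter=", "secret") == "secret"
# The synced Secret stores the result base64-encoded, matching `foobar: c2VjcmV0`.
assert base64.b64encode(b"secret").decode() == "c2VjcmV0"
```

This is why the example is "not practical": the endpoint just returns the key itself, but it exercises the whole url/result/jsonPath machinery.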
{"questions":"external secrets Parameter Store defined region You should define Roles that define fine grained access to way users of the can only access the secrets necessary A points to AWS SSM Parameter Store in a certain account within a individual secrets and pass them to ESO using This","answers":"\n![aws sm](..\/pictures\/diagrams-provider-aws-ssm-parameter-store.png)\n\n## Parameter Store\n\nA `ParameterStore` points to AWS SSM Parameter Store in a certain account within a\ndefined region. You should define Roles that define fine-grained access to\nindividual secrets and pass them to ESO using `spec.provider.aws.role`. This\nway users of the `SecretStore` can only access the secrets necessary.\n\n``` yaml\n{% include 'aws-parameter-store.yaml' %}\n```\n\n**NOTE:** In case of a `ClusterSecretStore`, Be sure to provide `namespace` in `accessKeyIDSecretRef` and `secretAccessKeySecretRef`  with the namespaces where the secrets reside.\n\n!!! warning \"API Pricing & Throttling\"\n    The SSM Parameter Store API is charged by throughput and\n    is available in different tiers, [see pricing](https:\/\/aws.amazon.com\/systems-manager\/pricing\/#Parameter_Store).\n    Please estimate your costs before using ESO. Cost depends on the RefreshInterval of your ExternalSecrets.\n\n### IAM Policy\n\n#### Fetching Parameters\n\nThe example policy below shows the minimum required permissions for fetching SSM parameters. This policy permits pinning down access to secrets with a path matching `dev-*`. Other operations may require additional permission. For example, finding parameters based on tags will also require `ssm:DescribeParameters` and `tag:GetResources` permission with `\"Resource\": \"*\"`. 
Generally, the specific permission required will be logged as an error if an operation fails.\n\nFor further information see [AWS Documentation](https:\/\/docs.aws.amazon.com\/systems-manager\/latest\/userguide\/sysman-paramstore-access.html).\n\n``` json\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"ssm:GetParameter*\",\n      ],\n      \"Resource\": \"arn:aws:ssm:us-east-2:1234567889911:parameter\/dev-*\"\n    }\n  ]\n}\n```\n\n#### Pushing Parameters\n\nThe example policy below shows the minimum required permissions for pushing SSM parameters. Like with the fetching policy it restricts the path in which it can push secrets too.\n\n``` json\n{\n    \"Action\": [\n        \"ssm:GetParameter*\",\n        \"ssm:PutParameter*\",\n        \"ssm:AddTagsToResource\",\n        \"ssm:ListTagsForResource\"\n    ],\n    \"Effect\": \"Allow\",\n    \"Resource\": \"arn:aws:ssm:us-east-2:1234567889911:parameter\/dev-*\"\n}\n```\n\n### JSON Secret Values\n\nYou can store JSON objects in a parameter. 
You can access nested values or arrays using [gjson syntax](https:\/\/github.com\/tidwall\/gjson\/blob\/master\/SYNTAX.md):\n\nConsider the following JSON object that is stored in the Parameter Store key `friendslist`:\n\n``` json\n{\n  \"name\": {\"first\": \"Tom\", \"last\": \"Anderson\"},\n  \"friends\": [\n    {\"first\": \"Dale\", \"last\": \"Murphy\"},\n    {\"first\": \"Roger\", \"last\": \"Craig\"},\n    {\"first\": \"Jane\", \"last\": \"Murphy\"}\n  ]\n}\n```\n\nThis is an example on how you would look up nested keys in the above json object:\n\n``` yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: extract-data\nspec:\n  # [omitted for brevity]\n  data:\n  - secretKey: my_name\n    remoteRef:\n      key: friendslist\n      property: name.first # Tom\n  - secretKey: first_friend\n    remoteRef:\n      key: friendslist\n      property: friends.1.first # Roger\n\n  # metadataPolicy to fetch all the tags in JSON format\n  - secretKey: tags\n    remoteRef:\n      metadataPolicy: Fetch\n      key: database-credentials\n\n  # metadataPolicy to fetch a specific tag (dev) from the source secret\n  - secretKey: developer\n    remoteRef:\n      metadataPolicy: Fetch\n      key: database-credentials\n      property: dev\n```\n\n### Parameter Versions\n\nParameterStore creates a new version of a parameter every time it is updated with a new value. The parameter can be referenced via the `version` property\n\n## SetSecret\n\nThe SetSecret method for the Parameter Store allows the user to set the value stored within the Kubernetes cluster to the remote AWS Parameter Store.\n\n### Creating a Push Secret\n\n```yaml\n{% include \"full-pushsecret.yaml\" %}\n```\n\n#### Additional Metadata for PushSecret\n\nOptionally, it is possible to configure additional options for the parameter such as `Type` and encryption Key. 
To control this behaviour you can set the following provider's `metadata`:\n\n```yaml\n{% include 'aws-pm-push-secret-with-metadata.yaml' %}\n```\n\n`parameterStoreType` takes three options: `String`, `StringList`, and `SecureString`, where `String` is the _default_.\n\n`parameterStoreKeyID` takes a KMS Key `$ID` or `$ARN` (in case a key source is created in another account) as a string, where `alias\/aws\/ssm` is the _default_. This property is only used if `parameterStoreType` is set as `SecureString`.\n\n#### Check successful secret sync\n\nTo check that the secret has been successfully synced you can run the following command:\n\n```bash\nkubectl get pushsecret pushsecret-example\n```\n\nIf the secret has synced successfully it will show the status as \"Synced\".\n\n#### Test new secret using AWS CLI\n\nTo view your parameter on AWS Parameter Store using the AWS CLI, install and log in to the AWS CLI using the following guide: [AWS CLI](https:\/\/aws.amazon.com\/cli\/).\n\nRun the following command to get your synchronized parameter from AWS Parameter Store:\n\n```bash\naws ssm get-parameter --name=my-first-parameter --region=us-east-1\n```\n\nYou should see something similar to the following output:\n\n```json\n{\n    \"Parameter\": {\n        \"Name\": \"my-first-parameter\",\n        \"Type\": \"String\",\n        \"Value\": \"charmander\",\n        \"Version\": 4,\n        \"LastModifiedDate\": \"2022-09-15T13:04:31.098000-03:00\",\n        \"ARN\": \"arn:aws:ssm:us-east-1:1234567890123:parameter\/my-first-parameter\",\n        \"DataType\": \"text\"\n    }\n}\n```\n\n--8<-- \"snippets\/provider-aws-access.md\"","site":"external secrets"}
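To make the `property` lookups in the `friendslist` example concrete, here is a small sketch of how those dotted gjson-style paths resolve (this mimics only the simple object-key and array-index paths shown above, not the full [gjson syntax](https://github.com/tidwall/gjson/blob/master/SYNTAX.md)):

```python
import json

# The JSON object stored under the Parameter Store key `friendslist` above.
FRIENDSLIST = json.loads("""
{
  "name": {"first": "Tom", "last": "Anderson"},
  "friends": [
    {"first": "Dale", "last": "Murphy"},
    {"first": "Roger", "last": "Craig"},
    {"first": "Jane", "last": "Murphy"}
  ]
}
""")

def lookup(doc, path: str):
    # Numeric path segments index into arrays; everything else keys into objects.
    for part in path.split("."):
        doc = doc[int(part)] if part.isdigit() else doc[part]
    return doc

assert lookup(FRIENDSLIST, "name.first") == "Tom"        # property: name.first
assert lookup(FRIENDSLIST, "friends.1.first") == "Roger" # property: friends.1.first
```

Note that gjson array indices are zero-based, which is why `friends.1.first` resolves to the second friend, Roger.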
{"questions":"external secrets Doppler SecretOps Platform Authentication Sync secrets from the to Kubernetes using the External Secrets Operator Doppler are recommended as they restrict access to a single config","answers":"![Doppler External Secrets Provider](..\/pictures\/doppler-provider-header.jpg)\n\n## Doppler SecretOps Platform\n\nSync secrets from the [Doppler SecretOps Platform](https:\/\/www.doppler.com\/) to Kubernetes using the External Secrets Operator.\n\n## Authentication\n\nDoppler [Service Tokens](https:\/\/docs.doppler.com\/docs\/service-tokens) are recommended as they restrict access to a single config.\n\n![Doppler Service Token](..\/pictures\/doppler-service-tokens.png)\n\n> NOTE: Doppler Personal Tokens are also supported but require `project` and `config` to be set on the `SecretStore` or `ClusterSecretStore`.\n\nCreate the Doppler Token secret by opening the Doppler dashboard and navigating to the desired Project and Config, then create a new Service Token from the **Access** tab:\n\n![Create Doppler Service Token](..\/pictures\/doppler-create-service-token.jpg)\n\nCreate the Doppler Token Kubernetes secret with your Service Token value:\n\n```sh\nHISTIGNORE='*kubectl*' kubectl create secret generic \\\n    doppler-token-auth-api \\\n    --from-literal dopplerToken=\"dp.st.xxxx\"\n```\n\nThen to create a generic `SecretStore`:\n\n```yaml\n{% include 'doppler-generic-secret-store.yaml' %}\n```\n\n> **NOTE:** In case of a `ClusterSecretStore`, be sure to set `namespace` in `secretRef.dopplerToken`.\n\n\n## Use Cases\n\nThe Doppler provider allows for a wide range of use cases:\n\n1. [Fetch](#1-fetch)\n2. [Fetch all](#2-fetch-all)\n3. [Filter](#3-filter)\n4. [JSON secret](#4-json-secret)\n5. [Name transformer](#5-name-transformer)\n6. [Download](#6-download)\n\nLet's explore each use case using a fictional `auth-api` Doppler project.\n\n## 1. 
Fetch\n\nTo sync one or more individual secrets:\n\n``` yaml\n{% include 'doppler-fetch-secret.yaml' %}\n```\n\n![Doppler fetch](..\/pictures\/doppler-fetch.png)\n\n## 2. Fetch all\n\nTo sync every secret from a config:\n\n``` yaml\n{% include 'doppler-fetch-all-secrets.yaml' %}\n```\n\n![Doppler fetch all](..\/pictures\/doppler-fetch-all.png)\n\n## 3. Filter\n\nTo filter secrets by `path` (path prefix), `name` (regular expression) or a combination of both:\n\n``` yaml\n{% include 'doppler-filtered-secrets.yaml' %}\n```\n\n![Doppler filter](..\/pictures\/doppler-filter.png)\n\n## 4. JSON secret\n\nTo parse a JSON secret to its key-value pairs:\n\n``` yaml\n{% include 'doppler-parse-json-secret.yaml' %}\n```\n\n![Doppler JSON Secret](..\/pictures\/doppler-json.png)\n\n## 5. Name transformer\n\nName transformers format keys from Doppler's UPPER_SNAKE_CASE to one of the following alternatives:\n\n- upper-camel\n- camel\n- lower-snake\n- tf-var\n- dotnet-env\n- lower-kebab\n\nName transformers require a specifically configured `SecretStore`:\n\n```yaml\n{% include 'doppler-name-transformer-secret-store.yaml' %}\n```\n\nThen an `ExternalSecret` referencing the `SecretStore`:\n\n```yaml\n{% include 'doppler-name-transformer-external-secret.yaml' %}\n```\n\n![Doppler name transformer](..\/pictures\/doppler-name-transformer.png)\n\n## 6. 
Download\n\nA single `DOPPLER_SECRETS_FILE` key is set where the value is the secrets downloaded in one of the following formats:\n\n- json\n- dotnet-json\n- env\n- env-no-quotes\n- yaml\n\nDownloading secrets requires a specifically configured `SecretStore`:\n\n```yaml\n{% include 'doppler-secrets-download-secret-store.yaml' %}\n```\n\nThen an `ExternalSecret` referencing the `SecretStore`:\n\n```yaml\n{% include 'doppler-secrets-download-external-secret.yaml' %}\n```\n\n![Doppler download](..\/pictures\/doppler-download.png)","site":"external secrets"}
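To illustrate what the name-transformer styles listed above do to a Doppler UPPER_SNAKE_CASE key, here is a small sketch covering four of them (this is an illustration of the naming conventions, not Doppler's exact implementation; edge-case handling may differ, and the hypothetical key `AUTH_API_KEY` is just an example from the fictional `auth-api` project):

```python
def transform(key: str, style: str) -> str:
    # Doppler keys are UPPER_SNAKE_CASE, so words are split on underscores.
    words = key.lower().split("_")
    if style == "upper-camel":
        return "".join(w.capitalize() for w in words)
    if style == "camel":
        return words[0] + "".join(w.capitalize() for w in words[1:])
    if style == "lower-snake":
        return "_".join(words)
    if style == "lower-kebab":
        return "-".join(words)
    raise ValueError(f"unhandled style: {style}")

assert transform("AUTH_API_KEY", "upper-camel") == "AuthApiKey"
assert transform("AUTH_API_KEY", "camel") == "authApiKey"
assert transform("AUTH_API_KEY", "lower-snake") == "auth_api_key"
assert transform("AUTH_API_KEY", "lower-kebab") == "auth-api-key"
```

The `tf-var` and `dotnet-env` styles are omitted here; consult the Doppler documentation for their exact output.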
{"questions":"external secrets Google Cloud Secret Manager External Secrets Operator integrates with for secret management Workload Identity Authentication Your Google Kubernetes Engine GKE applications can consume GCP services like Secrets Manager without using static long lived authentication tokens This is our recommended approach of handling credentials in GCP ESO offers two options for integrating with GKE workload identity pod based workload identity and using service accounts directly Before using either way you need to create a service account this is covered below","answers":"## Google Cloud Secret Manager\n\nExternal Secrets Operator integrates with [GCP Secret Manager](https:\/\/cloud.google.com\/secret-manager) for secret management.\n\n## Authentication\n\n### Workload Identity\n\nYour Google Kubernetes Engine (GKE) applications can consume GCP services like Secrets Manager without using static, long-lived authentication tokens. This is our recommended approach of handling credentials in GCP. ESO offers two options for integrating with GKE workload identity: **pod-based workload identity** and **using service accounts directly**. Before using either way you need to create a service account - this is covered below.\n\n#### Creating Workload Identity Service Accounts\n\nYou can find the documentation for Workload Identity [here](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/workload-identity). We will walk you through how to navigate it here.\n\nSearch [the document](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/workload-identity) for this editable values and change them to your values:\n_Note: If you have installed ESO, a serviceaccount has already been created. 
You can either patch the existing `external-secrets` SA or create a new one that fits your needs._\n\n- `CLUSTER_NAME`: The name of your cluster\n- `PROJECT_ID`: Your project ID (not your Project number nor your Project name)\n- `K8S_NAMESPACE`: For us following these steps here it will be `es`, but this will be the namespace where you deployed the external-secrets operator\n- `KSA_NAME`: external-secrets (if you are not creating a new one to attach to the deployment)\n- `GSA_NAME`: external-secrets for simplicity, or something else if you have to follow different naming conventions for cloud resources\n- `ROLE_NAME`: should be `roles\/secretmanager.secretAccessor` - so the pod is only able to access secrets in Secret Manager\n\n#### Using Service Accounts directly\n\nLet's assume you have created a service account correctly and attached an appropriate workload identity. It should roughly look like this:\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-secrets\n  namespace: es\n  annotations:\n    iam.gke.io\/gcp-service-account: example-team-a@my-project.iam.gserviceaccount.com\n```\n\nYou can reference this particular ServiceAccount in a `SecretStore` or `ClusterSecretStore`. It's important that you also set the `projectID`, `clusterLocation` and `clusterName`. The Namespace on the `serviceAccountRef` is ignored when using a `SecretStore` resource. This is needed to isolate the namespaces properly.\n\n*When filling in the `clusterLocation` parameter, keep in mind whether it is a Regional or a Zonal cluster.*\n\n```yaml\n{% include 'gcpsm-wi-secret-store.yaml' %}\n```\n\n*You need to give the Google service account the `roles\/iam.serviceAccountTokenCreator` role so it can generate a service account token for you (not necessary in the Pod-based Workload Identity below)*\n\n#### Using Pod-based Workload Identity\n\nYou can attach a Workload Identity directly to the ESO pod. 
ESO then has access to all the APIs defined in the attached service account policy. You attach the workload identity by (1) creating a service account with an attached workload identity (described above) and (2) using this particular service account in the pod's `serviceAccountName` field.\n\nFor this example we will assume that you installed ESO using Helm, that you named the chart installation `external-secrets`, and that the namespace where it lives is `es`, like:\n\n```sh\nhelm install external-secrets external-secrets\/external-secrets --namespace es\n```\n\nThen most of the resources would have this name, the important one here being the k8s service account attached to the external-secrets operator deployment:\n\n```yaml\n# ...\n      containers:\n      - image: ghcr.io\/external-secrets\/external-secrets:vVERSION\n        name: external-secrets\n        ports:\n        - containerPort: 8080\n          protocol: TCP\n      restartPolicy: Always\n      schedulerName: default-scheduler\n      serviceAccount: external-secrets\n      serviceAccountName: external-secrets # <--- here\n```\n\nThe pod now has the identity. Now you need to configure the `SecretStore`.\nYou just need to set the `projectID`; all other fields can be omitted.\n\n```yaml\n{% include 'gcpsm-pod-wi-secret-store.yaml' %}\n```\n\n### GCP Service Account authentication\n\nYou can use a [GCP Service Account](https:\/\/cloud.google.com\/iam\/docs\/service-accounts) to authenticate with GCP. These are static, long-lived credentials. A GCP Service Account is a JSON file that needs to be stored in a `Kind=Secret`. ESO will use that Secret to authenticate with GCP. 
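To make that concrete, here is an illustrative Python sketch (not ESO code) of wrapping a GCP service-account key file in a `Kind=Secret` manifest; the key body and the names `gcpsm-secret` and `secret-access-credentials` are placeholders.

```python
import base64
import json

# Illustrative sketch: wrapping a GCP service-account key file in a
# Kubernetes Secret manifest. The key contents and the secret/key names
# are made-up placeholders, not the provider's required names.
sa_key = json.dumps({"type": "service_account", "project_id": "my-project"})

manifest = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "gcpsm-secret"},
    "type": "Opaque",
    "data": {
        # Kubernetes stores Secret values under .data base64-encoded
        "secret-access-credentials": base64.b64encode(sa_key.encode()).decode(),
    },
}

decoded = json.loads(base64.b64decode(manifest["data"]["secret-access-credentials"]))
print(decoded["project_id"])  # my-project
```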
See here how you [manage GCP Service Accounts](https:\/\/cloud.google.com\/iam\/docs\/creating-managing-service-accounts).\nAfter creating a GCP Service Account, go to the `IAM & Admin` web UI, click the `ADD ANOTHER ROLE` button, and add the `Secret Manager Secret Accessor` role to this service account.\nThe `Secret Manager Secret Accessor` role is required to access secrets.\n\n```yaml\n{% include 'gcpsm-credentials-secret.yaml' %}\n```\n\n#### Update secret store\nBe sure the `gcpsm` provider is listed in the `Kind=SecretStore`.\n\n```yaml\n{% include 'gcpsm-secret-store.yaml' %}\n```\n\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` for `SecretAccessKeyRef` with the namespace of the secret that we just created.\n\n#### Creating external secret\n\nTo create a Kubernetes secret from the GCP Secret Manager secret, a `Kind=ExternalSecret` is needed.\n\n```yaml\n{% include 'gcpsm-external-secret.yaml' %}\n```\n\nThe operator will fetch the GCP Secret Manager secret and inject it as a `Kind=Secret`:\n\n```sh\nkubectl get secret secret-to-be-created -n <namespace> -o jsonpath='{.data.dev-secret-test}' | base64 -d\n```","site":"external secrets"}
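The `kubectl ... | base64 -d` pipeline above can be mimicked in miniature; the value below is made up, and `dev-secret-test` mirrors the key name in the example.

```python
import base64

# What the `kubectl ... | base64 -d` pipeline does, in miniature: values
# under .data come back base64-encoded, and decoding restores the
# plaintext. The secret value here is a made-up placeholder.
secret_data = {"dev-secret-test": base64.b64encode(b"s3cr3t-value").decode()}

plaintext = base64.b64decode(secret_data["dev-secret-test"]).decode()
print(plaintext)  # s3cr3t-value
```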
{"questions":"external secrets External Secrets Operator integrates with for secret management Hashicorp Vault The is the only one supported by this provider For other secrets engines please refer to the","answers":"![HCP Vault](..\/pictures\/diagrams-provider-vault.png)\n\n## Hashicorp Vault\n\nExternal Secrets Operator integrates with [HashiCorp Vault](https:\/\/www.vaultproject.io\/) for secret management.\n\nThe [KV Secrets Engine](https:\/\/www.vaultproject.io\/docs\/secrets\/kv) is the only\none supported by this provider. For other secrets engines, please refer to the\n[Vault Generator](..\/api\/generator\/vault.md).\n\n### Example\n\nFirst, create a SecretStore with a vault backend. For the sake of simplicity we'll use a static token `root`:\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: vault-backend\nspec:\n  provider:\n    vault:\n      server: \"http:\/\/my.vault.server:8200\"\n      path: \"secret\"\n      # Version is the Vault KV secret engine version.\n      # This can be either \"v1\" or \"v2\", defaults to \"v2\"\n      version: \"v2\"\n      auth:\n        # points to a secret that contains a vault token\n        # https:\/\/www.vaultproject.io\/docs\/auth\/token\n        tokenSecretRef:\n          name: \"vault-token\"\n          key: \"token\"\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: vault-token\ndata:\n  token: cm9vdA== # \"root\"\n```\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` for `tokenSecretRef` with the namespace of the secret that we just created.\n\nThen create a simple k\/v pair at path `secret\/foo`:\n\n```sh\nvault kv put secret\/foo my-value=s3cr3t\n```\n\nYou can check the KV version with the following command and inspect the `Options` column; it should indicate [version:2]:\n\n```sh\nvault secrets list -detailed\n```\n\nIf you are using version 1, just remember to update your SecretStore manifest appropriately.\n\nNow create an ExternalSecret that uses the above 
SecretStore:\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: vault-example\nspec:\n  refreshInterval: \"15s\"\n  secretStoreRef:\n    name: vault-backend\n    kind: SecretStore\n  target:\n    name: example-sync\n  data:\n  - secretKey: foobar\n    remoteRef:\n      key: foo\n      property: my-value\n\n  # metadataPolicy to fetch all the labels in JSON format\n  - secretKey: tags\n    remoteRef:\n      metadataPolicy: Fetch\n      key: foo\n\n  # metadataPolicy to fetch a specific label (dev) from the source secret\n  - secretKey: developer\n    remoteRef:\n      metadataPolicy: Fetch\n      key: foo\n      property: dev\n\n---\n# That will automatically create a Kubernetes Secret with:\n# apiVersion: v1\n# kind: Secret\n# metadata:\n#  name: example-sync\n# data:\n#  foobar: czNjcjN0\n```\n\nKeep in mind that fetching the labels with `metadataPolicy: Fetch` only works with KV secrets engine version v2.\n\n#### Fetching Raw Values\n\nYou can fetch all key\/value pairs for a given path if you leave the `remoteRef.property` empty. This returns the json-encoded secret value for that path.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: vault-example\nspec:\n  # ...\n  data:\n  - secretKey: foobar\n    remoteRef:\n      key: \/dev\/package.json\n```\n\n#### Nested Values\n\nVault supports nested key\/value pairs. 
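A dotted-path lookup of this kind can be sketched in Python with plain dict traversal; this is a rough stand-in for the actual gjson library, not its implementation.

```python
import json

# Rough stand-in for a gjson lookup such as remoteRef.property:
# "foo.nested.bar". Plain dict traversal, not the real gjson library.
def lookup(doc: dict, path: str):
    cur = doc
    for part in path.split("."):
        cur = cur[part]
    return cur

secret = json.loads('{"foo": {"nested": {"bar": "mysecret"}}}')

print(lookup(secret, "foo.nested.bar"))  # mysecret
# Pointing at an intermediate object yields it json-encoded:
print(json.dumps(lookup(secret, "foo"), separators=(",", ":")))
```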
You can specify a [gjson](https:\/\/github.com\/tidwall\/gjson) expression at `remoteRef.property` to get a nested value.\n\nGiven the following secret - assume its path is `\/dev\/config`:\n```json\n{\n  \"foo\": {\n    \"nested\": {\n      \"bar\": \"mysecret\"\n    }\n  }\n}\n```\n\nYou can set the `remoteRef.property` to point to the nested key using a [gjson](https:\/\/github.com\/tidwall\/gjson) expression.\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: vault-example\nspec:\n  # ...\n  data:\n  - secretKey: foobar\n    remoteRef:\n      key: \/dev\/config\n      property: foo.nested.bar\n---\n# creates a secret with:\n# foobar=mysecret\n```\n\nIf you set the `remoteRef.property` to just `foo`, you get the json-encoded value of that property: `{\"nested\":{\"bar\":\"mysecret\"}}`.\n\n#### Multiple nested Values\n\nYou can extract multiple keys from a nested secret using `dataFrom`.\n\nGiven the following secret - assume its path is `\/dev\/config`:\n```json\n{\n  \"foo\": {\n    \"nested\": {\n      \"bar\": \"mysecret\",\n      \"baz\": \"bang\"\n    }\n  }\n}\n```\n\nYou can set the `remoteRef.property` to point to the nested key using a [gjson](https:\/\/github.com\/tidwall\/gjson) expression.\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: vault-example\nspec:\n  # ...\n  dataFrom:\n  - extract:\n      key: \/dev\/config\n      property: foo.nested\n```\n\nThat results in a secret with these values:\n```\nbar=mysecret\nbaz=bang\n```\n\n#### Getting multiple secrets\n\nYou can extract multiple secrets from HashiCorp Vault by using `dataFrom.Find`.\n\nCurrently, `dataFrom.Find` allows users to fetch secret names that match a given regexp pattern, or fetch secrets whose `custom_metadata` tags match a predefined set.\n\n\n!!! warning\n    The way HashiCorp Vault currently allows LIST operations is through the existence of secret metadata. 
If you delete the secret, you will also need to delete the secret's metadata or this will currently make Find operations fail.\n\nGiven the following secret - assume its path is `\/dev\/config`:\n```json\n{\n  \"foo\": {\n    \"nested\": {\n      \"bar\": \"mysecret\",\n      \"baz\": \"bang\"\n    }\n  }\n}\n```\n\nAlso assume this secret has the following `custom_metadata`:\n```json\n{\n  \"environment\": \"dev\",\n  \"component\": \"app-1\"\n}\n```\n\nIt is possible to find this secret in any of the following ways:\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: vault-example\nspec:\n  # ...\n  dataFrom:\n  - find: #will return every secret with 'dev' in it (including paths)\n      name:\n        regexp: dev\n  - find: #will return every secret matching environment:dev tags from dev\/ folder and beyond\n      tags:\n        environment: dev\n```\nThis will generate a secret with:\n```json\n{\n  \"dev_config\":\"{\\\"foo\\\":{\\\"nested\\\":{\\\"bar\\\":\\\"mysecret\\\",\\\"baz\\\":\\\"bang\\\"}}}\"\n}\n```\n\nCurrently, `Find` operations are recursive throughout a given Vault folder, starting at the `provider.Path` definition. It is recommended to narrow the search scope by setting a `find.path` variable. 
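The find behavior can be sketched as a filter over secret paths; the key-naming rules here (`/` replaced by `_`, and `find.path` stripped as a prefix) mirror the generated keys shown in the examples, but the helper itself is hypothetical, not ESO's code.

```python
import json
import re

# Illustrative sketch of dataFrom.find semantics: select secrets whose
# path matches a regexp, optionally scoped under find.path, and derive
# the resulting key name from the path ("dev/config" -> "dev_config",
# or just "config" once the path prefix is stripped).
store = {"dev/config": {"foo": {"nested": {"bar": "mysecret", "baz": "bang"}}}}

def find(store, regexp=".*", path=None):
    out = {}
    for p, value in store.items():
        if path is not None and not p.startswith(path + "/"):
            continue
        if not re.search(regexp, p):
            continue
        key = p[len(path) + 1:] if path else p.replace("/", "_")
        out[key] = json.dumps(value, separators=(",", ":"))
    return out

print(find(store, regexp="dev"))          # key: dev_config
print(find(store, regexp=".*", path="dev"))  # key: config
```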
This is also useful to automatically reduce the resulting secret key names:\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: vault-example\nspec:\n  # ...\n  dataFrom:\n  - find: #will return every secret from dev\/ folder\n      path: dev\n      name:\n        regexp: \".*\"\n  - find: #will return every secret matching environment:dev tags from dev\/ folder\n      path: dev\n      tags:\n        environment: dev\n```\nThis will generate a secret with:\n```json\n{\n  \"config\":\"{\\\"foo\\\": {\\\"nested\\\": {\\\"bar\\\": \\\"mysecret\\\",\\\"baz\\\": \\\"bang\\\"}}}\"\n}\n```\n\n### Authentication\n\nWe support the following modes for authentication:\n[token-based](https:\/\/www.vaultproject.io\/docs\/auth\/token),\n[appRole](https:\/\/www.vaultproject.io\/docs\/auth\/approle),\n[kubernetes-native](https:\/\/www.vaultproject.io\/docs\/auth\/kubernetes),\n[ldap](https:\/\/www.vaultproject.io\/docs\/auth\/ldap),\n[userPass](https:\/\/www.vaultproject.io\/docs\/auth\/userpass),\n[jwt\/oidc](https:\/\/www.vaultproject.io\/docs\/auth\/jwt),\n[awsAuth](https:\/\/developer.hashicorp.com\/vault\/docs\/auth\/aws) and\n[tlsCert](https:\/\/developer.hashicorp.com\/vault\/docs\/auth\/cert), each one comes with its own\ntrade-offs. 
Depending on the authentication method, you need to adapt your environment.\n\nIf you're using Vault namespaces, you can authenticate into one namespace and use the vault token against a different namespace, if desired.\n\n#### Token-based authentication\n\nA static token is stored in a `Kind=Secret` and is used to authenticate with vault.\n\n```yaml\n{% include 'vault-token-store.yaml' %}\n```\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` in `tokenSecretRef` with the namespace where the secret resides.\n\n#### AppRole authentication example\n\n[AppRole authentication](https:\/\/www.vaultproject.io\/docs\/auth\/approle) reads the secret ID from a\n`Kind=Secret` and uses the specified `roleId` to acquire a temporary token to fetch secrets.\n\n```yaml\n{% include 'vault-approle-store.yaml' %}\n```\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` in `secretRef` with the namespace where the secret resides.\n\n#### Kubernetes authentication\n\n[Kubernetes-native authentication](https:\/\/www.vaultproject.io\/docs\/auth\/kubernetes) has three\noptions for obtaining credentials for vault:\n\n1.  by using a service account jwt referenced in `serviceAccountRef`\n2.  by using the jwt from a `Kind=Secret` referenced by the `secretRef`\n3.  by using transient credentials from the mounted service account token within the\n    external-secrets operator\n\nVault validates the service account token by using the TokenReview API. \u26a0\ufe0f You have to bind the `system:auth-delegator` ClusterRole to the service account that is used for authentication. 
Please follow the [Vault documentation](https:\/\/developer.hashicorp.com\/vault\/docs\/auth\/kubernetes#configuring-kubernetes).\n\n```yaml\n{% include 'vault-kubernetes-store.yaml' %}\n```\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` in `serviceAccountRef` or in `secretRef`, if used.\n\n#### LDAP authentication\n\n[LDAP authentication](https:\/\/www.vaultproject.io\/docs\/auth\/ldap) uses a\nusername\/password pair to get an access token. The username is stored directly in\na `Kind=SecretStore` or `Kind=ClusterSecretStore` resource; the password is stored\nin a `Kind=Secret` referenced by the `secretRef`.\n\n```yaml\n{% include 'vault-ldap-store.yaml' %}\n```\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` in `secretRef` with the namespace where the secret resides.\n\n#### UserPass authentication\n\n[UserPass authentication](https:\/\/www.vaultproject.io\/docs\/auth\/userpass) uses a\nusername\/password pair to get an access token. The username is stored directly in\na `Kind=SecretStore` or `Kind=ClusterSecretStore` resource; the password is stored\nin a `Kind=Secret` referenced by the `secretRef`.\n\n```yaml\n{% include 'vault-userpass-store.yaml' %}\n```\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` in `secretRef` with the namespace where the secret resides.\n\n#### JWT\/OIDC authentication\n\n[JWT\/OIDC](https:\/\/www.vaultproject.io\/docs\/auth\/jwt) uses either a\n[JWT](https:\/\/jwt.io\/) token stored in a `Kind=Secret` and referenced by the\n`secretRef` or a temporary Kubernetes service account token retrieved via the `TokenRequest` API. 
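Whichever source the JWT comes from, it is a standard three-segment token; a quick way to inspect the claims Vault will validate is to base64url-decode the middle segment. The token below is fabricated for illustration, not a real service-account JWT.

```python
import base64
import json

# Decode the claims segment of a JWT. Tokens usually strip base64
# padding, so it is restored before decoding.
def jwt_claims(token: str) -> dict:
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a fake two-segment-plus-signature token for demonstration.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
claims = base64.urlsafe_b64encode(
    json.dumps({"sub": "system:serviceaccount:es:external-secrets"}).encode()
).rstrip(b"=").decode()

print(jwt_claims(f"{header}.{claims}.sig")["sub"])
```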
Optionally a `role` field can be defined in a `Kind=SecretStore`\nor `Kind=ClusterSecretStore` resource.\n\n```yaml\n{% include 'vault-jwt-store.yaml' %}\n```\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` in `secretRef` with the namespace where the secret resides.\n\n#### AWS IAM authentication\n\n[AWS IAM](https:\/\/developer.hashicorp.com\/vault\/docs\/auth\/aws) uses either a\nset of AWS Programmatic access credentials stored in a `Kind=Secret` and referenced by the\n`secretRef` or by getting the authentication token from an [IRSA](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts.html) enabled service account.\n\n#### TLS certificates authentication\n\n[TLS certificates auth method](https:\/\/developer.hashicorp.com\/vault\/docs\/auth\/cert) allows authentication using SSL\/TLS client certificates which are either signed by a CA or self-signed. SSL\/TLS client certificates are defined as having an ExtKeyUsage extension with the usage set to either ClientAuth or Any.\n\n### Mutual authentication (mTLS)\n\nUnder specific compliance requirements, the Vault server can be set up to enforce mutual authentication from clients across all APIs by configuring the server with `tls_require_and_verify_client_cert = true`. This configuration differs fundamentally from the [TLS certificates auth method](#tls-certificates-authentication). While the TLS certificates auth method allows the issuance of a Vault token through the `\/v1\/auth\/cert\/login` API, the mTLS configuration solely focuses on TLS transport layer authentication and lacks any authorization-related capabilities. 
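Under such a setup the client presents a certificate at the TLS layer while authorization still happens via a Vault token. A minimal client-side sketch follows; the file paths, URL, and token value are placeholders, and no real server is contacted.

```python
import ssl
import urllib.request

# Sketch of a client for a Vault server enforcing
# tls_require_and_verify_client_cert: the TLS context would present a
# client certificate, and the request still carries a token header.
def build_request(url: str, token: str) -> urllib.request.Request:
    return urllib.request.Request(url, headers={"X-Vault-Token": token})

def build_context(ca: str, cert: str, key: str) -> ssl.SSLContext:
    ctx = ssl.create_default_context(cafile=ca)
    ctx.load_cert_chain(certfile=cert, keyfile=key)  # client cert for mTLS
    return ctx

req = build_request("https://my.vault.server:8200/v1/secret/data/foo", "s.example")
print(req.get_header("X-vault-token"))  # urllib normalizes header names
# urllib.request.urlopen(req, context=build_context("ca.pem", "client.crt", "client.key"))
```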
It's important to note that the Vault token must still be included in the request, following any of the supported authentication methods mentioned earlier.\n\n```yaml\n{% include 'vault-mtls-store.yaml' %}\n```\n\n### Access Key ID & Secret Access Key\nYou can store Access Key ID & Secret Access Key in a `Kind=Secret` and reference it from a SecretStore.\n\n```yaml\n{% include 'vault-iam-store-static-creds.yaml' %}\n```\n\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` in `accessKeyIDSecretRef`, `secretAccessKeySecretRef` with the namespaces where the secrets reside.\n\n### EKS Service Account credentials\n\nThis feature lets you use short-lived service account tokens to authenticate with AWS.\nYou must have [Service Account Volume Projection](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/#service-account-token-volume-projection) enabled - it is by default on EKS. See the [EKS guide](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts-technical-overview.html) on how to set up IAM roles for service accounts.\n\nThe big advantage of this approach is that ESO runs without any credentials.\n\n```yaml\n{% include 'vault-iam-store-sa.yaml' %}\n```\n\nReference the service account from above in the Secret Store:\n\n```yaml\n{% include 'vault-iam-store.yaml' %}\n```\n\n### Controller's Pod Identity\n\nThis is basically a zero-configuration authentication approach that inherits the credentials from the controller's pod identity.\n\nThis approach assumes that appropriate IRSA setup is done for the controller's pod (i.e. 
an IRSA-enabled IAM role is created appropriately and the controller's service account is annotated appropriately with the annotation \"eks.amazonaws.com\/role-arn\" to enable IRSA)\n\n```yaml\n{% include 'vault-iam-store-controller-pod-identity.yaml' %}\n```\n\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` for `serviceAccountRef` with the namespace where the service account resides.\n\n```yaml\n{% include 'vault-jwt-store.yaml' %}\n```\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` in `secretRef` with the namespace where the secret resides.\n\n### PushSecret\n\nVault supports the PushSecret feature, which allows you to sync a given Kubernetes secret key into a HashiCorp Vault secret. To do so, it is expected that the secret key is a valid JSON object or that the `property` attribute has been specified under the `remoteRef`.\nTo use PushSecret, you need to give `create`, `read` and `update` permissions to the path where you want to push secrets for both `data` and `metadata` of the secret. Use it with care!\n\n!!! note\n     Since the Vault KV v1 API does not support storing secret metadata, PushSecret will add a `custom_metadata` map to each secret in Vault that it manages. This means pushing secret keys named `custom_metadata` is not supported with Vault KV v1.\n\n\nHere is an example of how to set up `PushSecret`:\n\n```yaml\n{% include 'vault-pushsecret.yaml' %}\n```\n\nNote that in this example, we are generating two secrets in the target vault with the same structure but using different input formats.\n\n### Vault Enterprise\n\n#### Eventual Consistency and Performance Standby Nodes\n\nWhen using Vault Enterprise with [performance standby nodes](https:\/\/www.vaultproject.io\/docs\/enterprise\/consistency#performance-standby-nodes),\nany follower can handle read requests immediately after the provider has\nauthenticated. 
Since Vault becomes eventually consistent in this mode, these\nrequests can fail if the login has not yet propagated to each server's local\nstate.\n\nBelow are two different solutions to this scenario. You'll need to review them\nand pick the best fit for your environment and Vault configuration.\n\n#### Vault Namespaces\n\n[Vault namespaces](https:\/\/www.vaultproject.io\/docs\/enterprise\/namespaces) are an enterprise feature that support multi-tenancy. You can specify a vault namespace using the `namespace` property when you define a SecretStore:\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: vault-backend\nspec:\n  provider:\n    vault:\n      server: \"http:\/\/my.vault.server:8200\"\n      # See https:\/\/www.vaultproject.io\/docs\/enterprise\/namespaces\n      namespace: \"ns1\"\n      path: \"secret\"\n      version: \"v2\"\n      auth:\n        # ...\n```\n\n##### Authenticating into a different namespace\n\nIn some situations your authentication backend may be in one namespace, and your secrets in another. You can authenticate into one namespace, and use that token against another, by setting `provider.vault.namespace` and `provider.vault.auth.namespace` to different values. If `provider.vault.auth.namespace` is unset but `provider.vault.namespace` is, it will default to the `provider.vault.namespace` value.\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: vault-backend\nspec:\n  provider:\n    vault:\n      server: \"http:\/\/my.vault.server:8200\"\n      # See https:\/\/www.vaultproject.io\/docs\/enterprise\/namespaces\n      namespace: \"app-team\"\n      path: \"secret\"\n      version: \"v2\"\n      auth:\n        namespace: \"kubernetes-team\"\n        # ...\n```\n\n#### Read Your Writes\n\nVault 1.10.0 and later encodes information in the token to detect the case\nwhen a server is behind. 
If a Vault server does not have information about\nthe provided token, [Vault returns a 412 error](https:\/\/www.vaultproject.io\/docs\/faq\/ssct#q-is-there-anything-else-i-need-to-consider-to-achieve-consistency-besides-upgrading-to-vault-1-10)\nso clients know to retry.\n\nA method supported in versions Vault 1.7 and later is to utilize the\n`X-Vault-Index` header returned on all write requests (including logins).\nPassing this header back on subsequent requests instructs the Vault client\nto retry the request until the server has an index greater than or equal\nto that returned with the last write. Obviously though, this has a performance\nhit because the read is blocked until the follower's local state has caught up.\n\n#### Forward Inconsistent\n\nVault also supports proxying inconsistent requests to the current cluster leader\nfor immediate read-after-write consistency.\n\nVault 1.10.0 and later [support a replication configuration](https:\/\/www.vaultproject.io\/docs\/faq\/ssct#q-is-there-a-new-configuration-that-this-feature-introduces) that detects when forwarding should occur and does it transparently to the client.\n\nIn Vault 1.7 forwarding can be achieved by setting the `X-Vault-Inconsistent`\nheader to `forward-active-node`. 
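The client side of the `X-Vault-Index` read-your-writes pattern amounts to a retry loop: replay the index from the last write on each read and retry on HTTP 412 until the standby catches up. Below is a self-contained sketch with a fake lagging standby; no real Vault is involved.

```python
# Client-side sketch of the X-Vault-Index pattern. The fake server stands
# in for a lagging performance standby: it answers 412 until it has "seen"
# the write, then serves the value.
def make_lagging_server(ready_after: int):
    calls = {"n": 0}
    def server(headers):
        calls["n"] += 1
        # A standby that has not yet caught up to the index answers 412.
        if headers.get("X-Vault-Index") and calls["n"] < ready_after:
            return 412, None
        return 200, "s3cr3t"
    return server

def read_with_retry(server, index, max_attempts=5):
    for _ in range(max_attempts):
        status, body = server({"X-Vault-Index": index})
        if status != 412:
            return body
    raise RuntimeError("standby never caught up")

print(read_with_retry(make_lagging_server(ready_after=3), index="abc"))  # s3cr3t
```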
By default, this behavior is disabled and must\nbe explicitly enabled in the server's [replication configuration](https:\/\/www.vaultproject.io\/docs\/configuration\/replication#allow_forwarding_via_header).","site":"external secrets"}
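The read-your-writes retry described above (echo the `X-Vault-Index` value from the last write and retry while a lagging node answers 412) can be sketched as a small client loop. This is an illustrative sketch against a simulated server; `read_with_index` and `LaggingFollower` are hypothetical names, not part of any Vault client library.

```python
def read_with_index(server, path, last_write_index, max_retries=5):
    """Retry a read until the node's state reflects our last write.

    `server` stands in for an HTTP client: any callable taking
    (path, headers) and returning (status, body).
    """
    headers = {"X-Vault-Index": last_write_index}
    for _ in range(max_retries):
        status, body = server(path, headers)
        if status != 412:  # 412 means the node has not caught up yet
            return body
    raise TimeoutError("follower never caught up with X-Vault-Index")


class LaggingFollower:
    """Simulated performance-standby node that lags for N requests."""

    def __init__(self, catch_up_after):
        self.calls = 0
        self.catch_up_after = catch_up_after

    def __call__(self, path, headers):
        self.calls += 1
        if self.calls <= self.catch_up_after:
            return 412, None  # local state not yet consistent with the index
        return 200, {"data": "s3cr3t"}


follower = LaggingFollower(catch_up_after=2)
print(read_with_index(follower, "secret/data/foo", "idx-123"))  # -> {'data': 's3cr3t'}
```

Forwarding (`X-Vault-Inconsistent: forward-active-node`) avoids this blocking retry by sending the read to the leader instead.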
{"questions":"external secrets External Secrets Operator integration with Creating a SecretStore Both username and password can be specified either directly in your yaml config or by referencing a kubernetes secret Delinea Secret Server You need a username password and a fully qualified Secret Server tenant URL to authenticate i e","answers":"# Delinea Secret Server\n\nExternal Secrets Operator integration with [Delinea Secret Server](https:\/\/docs.delinea.com\/online-help\/secret-server\/start.htm).\n\n### Creating a SecretStore\n\nYou need a username, password and a fully qualified Secret Server tenant URL to authenticate\ni.e. `https:\/\/yourTenantName.secretservercloud.com`.\n\nBoth username and password can be specified either directly in your `SecretStore` yaml config, or by referencing a kubernetes secret.\n\nTo acquire a username and password, refer to the  Secret Server [user management](https:\/\/docs.delinea.com\/online-help\/secret-server\/users\/creating-users\/index.htm) documentation.\n\nBoth `username` and `password` can either be specified directly via the `value` field (example below)\n>spec.provider.secretserver.username.value: \"yourusername\"<br \/>\nspec.provider.secretserver.password.value: \"yourpassword\" <br \/>\n\nOr you can reference a kubernetes secret (password example below).\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: SecretStore\nmetadata:\n  name: secret-server-store\nspec:\n  provider:\n    secretserver:\n      serverURL: \"https:\/\/yourtenantname.secretservercloud.com\"\n      username:\n        value: \"yourusername\"\n      password:\n        secretRef:\n          name: <NAME_OF_K8S_SECRET>\n          key: <KEY_IN_K8S_SECRET>\n```\n\n### Referencing Secrets\n\nSecrets may be referenced by secret ID or secret name.\n>Please note if using the secret name\nthe name field must not contain spaces or control characters.<br \/>\nIf multiple secrets are found, *`only the first found secret will be 
returned`*.\n\nPlease note: `Retrieving a specific version of a secret is not yet supported.`\n\nNote that because all Secret Server secrets are JSON objects, you must specify the `remoteRef.property`\nin your ExternalSecret configuration.<br \/>\nYou can access nested values or arrays using [gjson syntax](https:\/\/github.com\/tidwall\/gjson\/blob\/master\/SYNTAX.md).\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n    name: secret-server-external-secret\nspec:\n    refreshInterval: 1h\n    secretStoreRef:\n        kind: SecretStore\n        name: secret-server-store\n    data:\n      - secretKey: SecretServerValue #<SECRET_VALUE_RETURNED_HERE>\n        remoteRef:\n          key: \"52622\" #<SECRET_ID>\n          property: \"array.0.value\" #<GJSON_PROPERTY> * an empty property will return the entire secret\n```\n\n### Preparing your secret\nYou can either retrieve your entire secret or you can use a JSON formatted string\nstored in your secret located at Items[0].ItemValue to retrieve a specific value.<br \/>\nSee example JSON secret below.\n\n#### Examples\nUsing the json formatted secret below:\n\n- Lookup a single top level property using secret ID.\n\n>spec.data.remoteRef.key = 52622 (id of the secret)<br \/>\nspec.data.remoteRef.property = \"user\" (Items.0.ItemValue user attribute)<br \/>\nreturns: marktwain@hannibal.com\n\n- Lookup a nested property using secret name.\n\n>spec.data.remoteRef.key = \"external-secret-testing\" (name of the secret)<br \/>\nspec.data.remoteRef.property = \"books.1\" (Items.0.ItemValue books.1 attribute)<br \/>\nreturns: huckleberryFinn\n\n- Lookup by secret ID (*secret name will work as well*) and return the entire secret.\n\n>spec.data.remoteRef.key = \"52622\" (id of the secret)<br \/>\nspec.data.remoteRef.property = \"\" <br \/>\nreturns: The entire secret in JSON format as displayed below\n\n\n```JSON\n{\n  \"Name\": \"external-secret-testing\",\n  \"FolderID\": 73,\n  \"ID\": 
52622,\n  \"SiteID\": 1,\n  \"SecretTemplateID\": 6098,\n  \"SecretPolicyID\": -1,\n  \"PasswordTypeWebScriptID\": -1,\n  \"LauncherConnectAsSecretID\": -1,\n  \"CheckOutIntervalMinutes\": -1,\n  \"Active\": true,\n  \"CheckedOut\": false,\n  \"CheckOutEnabled\": false,\n  \"AutoChangeEnabled\": false,\n  \"CheckOutChangePasswordEnabled\": false,\n  \"DelayIndexing\": false,\n  \"EnableInheritPermissions\": true,\n  \"EnableInheritSecretPolicy\": true,\n  \"ProxyEnabled\": false,\n  \"RequiresComment\": false,\n  \"SessionRecordingEnabled\": false,\n  \"WebLauncherRequiresIncognitoMode\": false,\n  \"Items\": [\n    {\n      \"ItemID\": 280265,\n      \"FieldID\": 439,\n      \"FileAttachmentID\": 0,\n      \"FieldName\": \"Data\",\n      \"Slug\": \"data\",\n      \"FieldDescription\": \"json text field\",\n      \"Filename\": \"\",\n      \"ItemValue\": \"{ \\\"user\\\": \\\"marktwain@hannibal.com\\\", \\\"occupation\\\": \\\"author\\\",\\\"books\\\":[ \\\"tomSawyer\\\",\\\"huckleberryFinn\\\",\\\"Pudd'nhead Wilson\\\"] }\",\n      \"IsFile\": false,\n      \"IsNotes\": false,\n      \"IsPassword\": false\n    }\n  ]\n}\n```\n\n### Referencing Secrets in multiple Items secrets\n\nIf the secret contains more than one Item, their values (each Item.\*.ItemValue) can be retrieved by looking them up via Item.\*.FieldName or Item.\*.Slug, instead of the behaviour above, which applies gjson only to the first item's Items.0.ItemValue.\n\n#### Examples\n\nUsing the json formatted secret below:\n\n- Lookup a single top level property using secret ID.\n\n>spec.data.remoteRef.key = 4000 (id of the secret)<br \/>\nspec.data.remoteRef.property = \"Username\" (Items.0.FieldName)<br \/>\nreturns: usernamevalue\n\n- Lookup a nested property using secret name.\n\n>spec.data.remoteRef.key = \"Secretname\" (name of the secret)<br \/>\nspec.data.remoteRef.property = \"password\" (Items.1.slug)<br \/>\nreturns: passwordvalue\n\n- Lookup by secret ID (*secret name will work as well*) and return 
the entire secret.\n\n>spec.data.remoteRef.key = \"4000\" (id of the secret)<br \/>\nreturns: The entire secret in JSON format as displayed below\n\n\n```JSON\n{\n  \"Name\": \"Secretname\",\n  \"FolderID\": 0,\n  \"ID\": 4000,\n  \"SiteID\": 0,\n  \"SecretTemplateID\": 0,\n  \"LauncherConnectAsSecretID\": 0,\n  \"CheckOutIntervalMinutes\": 0,\n  \"Active\": false,\n  \"CheckedOut\": false,\n  \"CheckOutEnabled\": false,\n  \"AutoChangeEnabled\": false,\n  \"CheckOutChangePasswordEnabled\": false,\n  \"DelayIndexing\": false,\n  \"EnableInheritPermissions\": false,\n  \"EnableInheritSecretPolicy\": false,\n  \"ProxyEnabled\": false,\n  \"RequiresComment\": false,\n  \"SessionRecordingEnabled\": false,\n  \"WebLauncherRequiresIncognitoMode\": false,\n  \"Items\": [\n    {\n      \"ItemID\": 0,\n      \"FieldID\": 0,\n      \"FileAttachmentID\": 0,\n      \"FieldName\": \"Username\",\n      \"Slug\": \"username\",\n      \"FieldDescription\": \"\",\n      \"Filename\": \"\",\n      \"ItemValue\": \"usernamevalue\",\n      \"IsFile\": false,\n      \"IsNotes\": false,\n      \"IsPassword\": false\n    },\n    {\n      \"ItemID\": 0,\n      \"FieldID\": 0,\n      \"FileAttachmentID\": 0,\n      \"FieldName\": \"Password\",\n      \"Slug\": \"password\",\n      \"FieldDescription\": \"\",\n      \"Filename\": \"\",\n      \"ItemValue\": \"passwordvalue\",\n      \"IsFile\": false,\n      \"IsNotes\": false,\n      \"IsPassword\": false\n    }\n  ]\n}\n```","site":"external secrets"}
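The Delinea lookup rules above can be condensed into a sketch: match an Item by `FieldName` or `Slug` first, otherwise treat `Items[0].ItemValue` as JSON and walk a dotted, gjson-style path. `resolve` is a hypothetical helper (the provider itself uses the gjson library, which supports richer syntax), and combining both rules in one function with this precedence is an assumption made here for illustration.

```python
import json


def resolve(secret: dict, prop: str):
    """Hypothetical sketch of the lookup rules described above.

    1. If `prop` matches an Item's FieldName or Slug, return that ItemValue.
    2. Otherwise parse Items[0].ItemValue as JSON and walk a dotted path,
       where numeric segments index into arrays (gjson-style subset).
    """
    for item in secret.get("Items", []):
        if prop in (item.get("FieldName"), item.get("Slug")):
            return item["ItemValue"]
    cur = json.loads(secret["Items"][0]["ItemValue"])
    for part in prop.split("."):
        cur = cur[int(part)] if isinstance(cur, list) else cur[part]
    return cur


secret = {"Items": [
    {"FieldName": "Username", "Slug": "username", "ItemValue": "usernamevalue"},
    {"FieldName": "Password", "Slug": "password", "ItemValue": "passwordvalue"},
]}
print(resolve(secret, "password"))  # -> passwordvalue
```

With a single JSON-valued item, the same helper falls through to the dotted-path branch, mirroring the `books.1` example above.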
{"questions":"external secrets SecretStore resource specifies how to access Akeyless This resource is namespaced External Secrets Operator integrates with the Akeyless Secrets Management Platform NOTE Make sure the Akeyless provider is listed in the Kind SecretStore Create Secret Store If you use a customer fragment define the value of akeylessGWApiURL as the URL of your Akeyless Gateway in the following format https your akeyless gw 8080 v2","answers":"## Akeyless Secrets Management Platform\n\nExternal Secrets Operator integrates with the [Akeyless Secrets Management Platform](https:\/\/www.akeyless.io\/).\n\n### Create Secret Store\n\nThe SecretStore resource specifies how to access Akeyless. This resource is namespaced.\n\n**NOTE:** Make sure the Akeyless provider is listed in the Kind=SecretStore.\nIf you use a customer fragment, define the value of akeylessGWApiURL as the URL of your Akeyless Gateway in the following format: https:\/\/your.akeyless.gw:8080\/v2.\n\nAkeyless provides several Authentication Methods:\n\n### Authentication with Kubernetes\n\nOptions for obtaining Kubernetes credentials include:\n\n1. Using a service account jwt referenced in serviceAccountRef\n2. Using the jwt from a Kind=Secret referenced by the secretRef\n3. 
Using transient credentials from the mounted service account token within the external-secrets operator\n\n#### Create the Akeyless Secret Store Provider with Kubernetes Auth-Method\n\n```yaml\n{% include 'akeyless-secret-store-k8s-auth.yaml' %}\n```\n\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` for `serviceAccountRef` and `secretRef` according to the namespaces where the secrets reside.\n\n### Authentication With Cloud-Identity or Api-Access-Key\n\nTo set up your SecretStore with an Akeyless authentication method, provide an access-id, an access-type, and an access-type-param.\n\nThe supported auth-methods and their parameters are:\n\n| accessType | accessTypeParam |\n| ---------- | --------------- |\n| `aws_iam` | - |\n| `gcp` | The GCP audience |\n| `azure_ad` | Azure object id (optional) |\n| `api_key` | The access key |\n| `k8s` | The k8s configuration name |\n\nFor more information, see [Akeyless Authentication Methods](https:\/\/docs.akeyless.io\/docs\/access-and-authentication-methods).\n\n#### Creating an Akeyless Credentials Secret\n\nCreate a secret containing your credentials using the following example as a guide:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: akeyless-secret-creds\ntype: Opaque\nstringData:\n  accessId: \"p-XXXX\"\n  accessType:  # gcp\/azure_ad\/api_key\/k8s\/aws_iam\n  accessTypeParam:  # optional: can be one of the following: gcp-audience\/azure-obj-id\/access-key\/k8s-conf-name\n```\n\n#### Create the Akeyless Secret Store Provider with the Credentials Secret\n\n```yaml\n{% include 'akeyless-secret-store.yaml' %}\n```\n\n**NOTE:** In case of a `ClusterSecretStore`, be sure to provide `namespace` for `accessID`, `accessType` and `accessTypeParam` according to the namespaces where the secrets reside.\n\n#### Create the Akeyless Secret Store With CAs for TLS handshake\n\n```yaml\n....\nspec:\n  provider:\n    akeyless:\n      akeylessGWApiURL: \"https:\/\/your.akeyless.gw:8080\/v2\"\n\n      # Optional caBundle - PEM\/base64 encoded CA certificate\n      caBundle: \"<base64 encoded cabundle>\"\n      # Optional caProvider:\n      # Instead of caBundle you can also specify a caProvider\n      # this will retrieve the cert from a Secret or ConfigMap\n      caProvider:\n        type: \"Secret\/ConfigMap\" # Can be Secret or ConfigMap\n        name: \"<name of secret or configmap>\"\n        key: \"<key inside secret>\"\n        # namespace is mandatory for ClusterSecretStore and not relevant for SecretStore\n        namespace: \"my-cert-secret-namespace\"\n  ....\n```\n\n### Creating an external secret\n\nTo get a secret from Akeyless and create it as a secret on the Kubernetes cluster, a `Kind=ExternalSecret` is needed.\n\n```yaml\n{% include 'akeyless-external-secret.yaml' %}\n```\n\n#### Using DataFrom\n\nDataFrom can be used to 
get a secret as a JSON string and attempt to parse it.\n\n```yaml\n{% include 'akeyless-external-secret-json.yaml' %}\n```\n\n### Getting the Kubernetes Secret\n\nThe operator will fetch the secret and inject it as a `Kind=Secret`.\n\n```bash\nkubectl get secret database-credentials -o jsonpath='{.data.db-password}' | base64 -d\n```\n\n```bash\nkubectl get secret database-credentials-json -o jsonpath='{.data}'\n```\n\n### Pushing a secret\n\nTo push a secret from the Kubernetes cluster and create it as a secret in Akeyless, a `Kind=PushSecret` resource is needed.\n\n```yaml\n{% include 'akeyless-push-secret.yaml' %}\n```\n\nWhen you create a matching secret as follows:\n\n```bash\nkubectl create secret generic --from-literal=cache-pass=mypassword k8s-created-secret\n```\n\na secret `eso-created\/my-secret` will be created in Akeyless with the value `{\"cache-pass\":\"mypassword\"}`.","site":"external secrets"}
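The `base64 -d` decode step shown in the record above can be exercised without a cluster; this sketch simply round-trips the example PushSecret value to show what the `kubectl get secret ... | base64 -d` pipeline is undoing:

```shell
# Kubernetes stores Secret data base64-encoded; encode the example value, then
# decode it the same way the `kubectl get secret ... | base64 -d` pipeline does.
encoded=$(printf '%s' '{"cache-pass":"mypassword"}' | base64)
printf '%s' "$encoded" | base64 -d   # prints {"cache-pass":"mypassword"}
```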
{"questions":"external secrets A running Conjur Server Before installing the Conjur provider you need Conjur Provider Prerequisites This section describes how to set up the Conjur provider for External Secrets Operator ESO For a working example see the or","answers":"## Conjur Provider\n\nThis section describes how to set up the Conjur provider for External Secrets Operator (ESO). For a working example, see the [Accelerator-K8s-External-Secrets repo](https:\/\/github.com\/conjurdemos\/Accelerator-K8s-External-Secrets).\n\n### Prerequisites\n\nBefore installing the Conjur provider, you need:\n\n* A running Conjur Server ([OSS](https:\/\/github.com\/cyberark\/conjur),\n[Enterprise](https:\/\/www.cyberark.com\/products\/secrets-manager-enterprise\/), or\n[Cloud](https:\/\/www.cyberark.com\/products\/multi-cloud-secrets\/)), with:\n  * An accessible Conjur endpoint (for example: `https:\/\/myapi.example.com`).\n  * Your configured Conjur authentication info (such as `hostid`, `apikey`, or JWT service ID). For more information on configuring Conjur, see [Policy statement reference](https:\/\/docs.cyberark.com\/conjur-open-source\/Latest\/en\/Content\/Operations\/Policy\/policy-statement-ref.htm).\n  * Support for your authentication method (`apikey` is supported by default, `jwt` requires additional configuration).\n  * **Optional**: Conjur server certificate (see [below](#conjur-server-certificate)).\n* A Kubernetes cluster with ESO installed.\n\n### Conjur server certificate\n\nIf you set up your Conjur server with a self-signed certificate, we recommend that you populate the `caBundle` field with the Conjur self-signed certificate in the secret-store definition. The certificate CA must be referenced in the secret-store definition using either `caBundle` or `caProvider`:\n\n```yaml\n{% include 'conjur-ca-bundle.yaml' %}\n```\n\n### External secret store\n\nThe Conjur provider is configured as an external secret store in ESO. 
The Conjur provider supports these two methods to authenticate to Conjur:\n\n* [`apikey`](#option-1-external-secret-store-with-apikey-authentication): uses a Conjur `hostid` and `apikey` to authenticate with Conjur\n* [`jwt`](#option-2-external-secret-store-with-jwt-authentication): uses a JWT to authenticate with Conjur\n\n#### Option 1: External secret store with apiKey authentication\n\nThis method uses a Conjur `hostid` and `apikey` to authenticate with Conjur. It is the simplest method to set up and use because your Conjur instance requires no additional configuration.\n\n##### Step 1: Define an external secret store\n\n!!! Tip\n    Save the file as: `conjur-secret-store.yaml`\n\n```yaml\n{% include 'conjur-secret-store-apikey.yaml' %}\n```\n\n##### Step 2: Create Kubernetes secrets for Conjur credentials\n\nTo connect to the Conjur server, the **ESO Conjur provider** needs to retrieve the `apikey` credentials from K8s secrets.\n\n!!! Note\n    For more information about how to create K8s secrets, see [Creating a secret](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/#creating-a-secret).\n\nHere is an example of how to create K8s secrets using the `kubectl` command:\n\n```shell\n# This is all one line\nkubectl -n external-secrets create secret generic conjur-creds --from-literal=hostid=MYCONJURHOSTID --from-literal=apikey=MYAPIKEY\n\n# Example:\n# kubectl -n external-secrets create secret generic conjur-creds --from-literal=hostid=host\/data\/app1\/host001 --from-literal=apikey=321blahblah\n```\n\n!!! Note\n    `conjur-creds` is the `name` defined in the `userRef` and `apikeyRef` fields of the `conjur-secret-store.yaml` file.\n\n##### Step 3: Create the external secrets store\n\n!!! 
Important\n    Unless you are using a [ClusterSecretStore](..\/api\/clustersecretstore.md), credentials must reside in the same namespace as the SecretStore.\n\n```shell\n# WARNING: creates the store in the \"external-secrets\" namespace, update the value as needed\n#\nkubectl apply -n external-secrets -f conjur-secret-store.yaml\n\n# WARNING: running the delete command will delete the secret store configuration\n#\n# If there is a need to delete the external secretstore\n# kubectl delete secretstore -n external-secrets conjur\n```\n\n#### Option 2: External secret store with JWT authentication\n\nThis method uses JWT tokens to authenticate with Conjur. You can use the following methods to retrieve a JWT token for authentication:\n\n* JWT token from a referenced Kubernetes service account\n* JWT token stored in a Kubernetes secret\n\n##### Step 1: Define an external secret store\n\nWhen you use JWT authentication, the following must be specified in the `SecretStore`:\n\n* `account` -  The name of the Conjur account\n* `serviceId` - The ID of the JWT Authenticator `WebService` configured in Conjur that is used to authenticate the JWT token\n\nYou can retrieve the JWT token from either a referenced service account or a Kubernetes secret.\n\nFor example, to retrieve a JWT token from a referenced Kubernetes service account, the following secret store definition can be used:\n\n```yaml\n{% include 'conjur-secret-store-jwt-service-account-ref.yaml' %}\n```\n\n!!! Important\n    This method is only supported in Kubernetes 1.22 and above as it uses the [TokenRequest API](https:\/\/kubernetes.io\/docs\/reference\/kubernetes-api\/authentication-resources\/token-request-v1\/) to get the JWT token from the referenced service account. 
Audiences can be defined in the [Conjur JWT authenticator](https:\/\/docs.conjur.org\/Latest\/en\/Content\/Integrations\/k8s-ocp\/k8s-jwt-authn.htm).\n\nAlternatively, here is an example where a secret containing a valid JWT token is referenced:\n\n```yaml\n{% include 'conjur-secret-store-jwt-secret-ref.yaml' %}\n```\n\nThe JWT token must identify your Conjur host, be compatible with your configured Conjur JWT authenticator, and meet all the [Conjur JWT guidelines](https:\/\/docs.conjur.org\/Latest\/en\/Content\/Operations\/Services\/cjr-authn-jwt-guidelines.htm#Best).\n\nYou can use an external JWT issuer or the Kubernetes API server to create the token. For example, a Kubernetes service account token can be created with this command:\n\n```shell\nkubectl create token my-service-account --audience='https:\/\/conjur.company.com' --duration=3600s\n```\n\nSave the secret store file as `conjur-secret-store.yaml`.\n\n##### Step 2: Create the external secrets store\n\n```shell\n# WARNING: creates the store in the \"external-secrets\" namespace, update the value as needed\n#\nkubectl apply -n external-secrets -f conjur-secret-store.yaml\n\n# WARNING: running the delete command will delete the secret store configuration\n#\n# If there is a need to delete the external secretstore\n# kubectl delete secretstore -n external-secrets conjur\n```\n\n### Define an external secret\n\nAfter you have configured the Conjur provider secret store, you can fetch secrets from Conjur.\n\nHere is an example of how to fetch a single secret from Conjur:\n\n```yaml\n{% include 'conjur-external-secret.yaml' %}\n```\n\nSave the external secret file as `conjur-external-secret.yaml`.\n\n#### Find by Name and Find by Tag\n\nThe Conjur provider also supports the Find by Name and Find by Tag ESO features. 
This means that\nyou can use a regular expression or tags to dynamically fetch multiple secrets from Conjur.\n\n```yaml\n{% include 'conjur-external-secret-find.yaml' %}\n```\n\nIf you use these features, we strongly recommend that you limit the permissions of the Conjur host\nto only the secrets that it needs to access. This is more secure and it reduces the load on\nboth the Conjur server and ESO.\n\n### Create the external secret\n\n```shell\n# WARNING: creates the external-secret in the \"external-secrets\" namespace, update the value as needed\n#\nkubectl apply -n external-secrets -f conjur-external-secret.yaml\n\n# WARNING: running the delete command will delete the external-secrets configuration\n#\n# If there is a need to delete the external secret\n# kubectl delete externalsecret -n external-secrets conjur\n```\n\n### Get the K8s secret\n\n* Log in to your Conjur server and verify that your secret exists\n* Review the value of your Kubernetes secret to verify that it contains the same value as the Conjur server\n\n```shell\n# WARNING: this command will reveal the stored secret in plain text\n#\n# Assuming the secret name is \"secret00\", this will show the value\nkubectl get secret -n external-secrets conjur -o jsonpath=\"{.data.secret00}\"  | base64 --decode && echo\n```\n\n### See also\n\n* [Accelerator-K8s-External-Secrets repo](https:\/\/github.com\/conjurdemos\/Accelerator-K8s-External-Secrets)\n* [Configure Conjur JWT authentication](https:\/\/docs.cyberark.com\/conjur-open-source\/Latest\/en\/Content\/Operations\/Services\/cjr-authn-jwt-guidelines.htm)\n\n### License\n\nCopyright (c) 2023-2024 CyberArk Software Ltd. 
All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n<http:\/\/www.apache.org\/licenses\/LICENSE-2.0>\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.","site":"external secrets"}
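The Conjur JWT guidance above says the token must carry the right audience for the configured JWT authenticator. A JWT's claims live in its middle dot-separated segment, so the audience can be inspected with standard tools; the payload below is an illustrative sample, not a token issued by Kubernetes or Conjur:

```shell
# A JWT is three dot-separated, base64(url)-encoded segments; plain base64 is
# used here for simplicity. The payload is an illustrative sample claim set.
payload='{"aud":"https://conjur.company.com"}'
token="eyJhbGciOiJSUzI1NiJ9.$(printf '%s' "$payload" | base64).sig"

# Decode the claims segment to check the audience before configuring the authenticator.
printf '%s' "$token" | cut -d. -f2 | base64 -d
```

A real token (for example one minted with `kubectl create token ... --audience=...`) can be inspected the same way, modulo base64url padding.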
{"questions":"external secrets Summary hide toc The External Secrets Operator is a Kubernetes Operator that seamlessly incorporates external secret management systems into Kubernetes This Operator retrieves data from the external API and generates Kubernetes Secret resources using the corresponding secret values This process occurs continuously in the background through regular polling of the external API Consequently whenever a secret undergoes changes in the external API the corresponding Kubernetes Secret will also be updated accordingly Background","answers":"---\nhide:\n  - toc\n---\n\n## Background\n\nThe External Secrets Operator is a Kubernetes Operator that seamlessly incorporates external secret management systems into Kubernetes. This Operator retrieves data from the external API and generates Kubernetes Secret resources using the corresponding secret values. This process occurs continuously in the background through regular polling of the external API. Consequently, whenever a secret undergoes changes in the external API, the corresponding Kubernetes Secret will also be updated accordingly.\n\n### Summary\n\n| Purpose             | Description                  |\n| ------------------- | ---------------------------- |\n| Intended Usage      | Sync Secrets into Kubernetes |\n| Data Classification | Critical                     |\n| Highest Risk Impact | Organisation takeover        |\n\n### Components\n\nESO comprises three main components: `webhook`, `cert controller` and a `core controller`. 
For more detailed information, please refer to the documentation on [components](..\/api\/components.md).\n\n## Overview\n\nThis section provides an overview of the security aspects of the External Secrets Operator (ESO) and includes information on assets, threats, and controls involved in its operation.\n\nThe following diagram illustrates the security perspective of how ESO functions, highlighting the assets (items to protect), threats (potential risks), and controls (measures to mitigate threats).\n\n![Overview](..\/pictures\/eso-threat-model-overview.drawio.png)\n\n### Scope\n\nFor the purpose of this threat model, we assume an ESO installation using helm and default settings on a public cloud provider. It is important to note that the [Kubernetes SIG Security](https:\/\/github.com\/kubernetes\/community\/tree\/master\/sig-security) team has defined an [Admission Control Threat Model](https:\/\/github.com\/kubernetes\/sig-security\/blob\/main\/sig-security-docs\/papers\/admission-control\/kubernetes-admission-control-threat-model.md), which is recommended reading for a better understanding of the security aspects that partially apply to External Secrets Operator.\n\n\nESO utilizes the `ValidatingWebhookConfiguration` mechanism to validate `(Cluster)SecretStore` and `(Cluster)ExternalSecret` resources. However, it is essential to understand that this validation process does not serve as a security control mechanism. 
Instead, ESO performs validation by enforcing additional rules that go beyond the [CustomResourceDefinition OpenAPI v3 Validation schema](https:\/\/kubernetes.io\/docs\/tasks\/extend-kubernetes\/custom-resources\/custom-resource-definitions\/#validation).\n\n### Assets\n\n#### A01: Cluster-level access to secrets\n\nThe controller possesses privileged access to the `kube-apiserver` and is authorized to read and write secret resources across all namespaces within a cluster.\n\n#### A02: CRD and webhook write access\n\nThe cert-controller component has read\/write access to `ValidatingWebhookConfigurations` and `CustomResourceDefinitions` resources. This access is necessary to inject\/modify the caBundle property.\n\n#### A03: Secret provider access\n\nThe `core-controller` component accesses a secret provider using user-supplied credentials. These credentials can be derived from environment variables, mounted service account tokens, files within the controller container, or fetched from the Kubernetes API (e.g., `Kind=Secret`). The scope of these credentials may vary, potentially providing full access to a cloud provider.\n\n#### A04: Capability to modify resources\n\nThe webhook component validates and converts ExternalSecret and SecretStore resources. The conversion webhook is essential for migrating resources from the old version `v1alpha1` to the new version `v1beta1`. The webhook component possesses the ability to modify resources during runtime.\n\n### Threats\n\n#### T01: Tampering with resources through MITM\n\nAn adversary could launch a Man-in-the-Middle (MITM) attack to hijack the webhook pod, enabling them to manipulate the data of the conversion webhook. This could involve injecting malicious resources or causing a Denial-of-Service (DoS) attack. 
To mitigate this threat, a mutual authentication mechanism should be enforced for the connection between the Kubernetes API server and the webhook service to ensure that only authenticated endpoints can communicate.\n\n#### T02: Webhook DoS\n\nCurrently, ESO generates an X.509 certificate for webhook registration without authenticating the kube-apiserver. Consequently, if an attacker gains network access to the webhook Pod, they can overload the webhook server and initiate a DoS attack. As a result, modifications to ESO resources may fail, and the ESO core controller may be impacted due to the unavailability of the conversion webhook.\n\n#### T03: Unauthorized access to cluster secrets\n\nAn attacker can gain unauthorized access to secrets by utilizing the service account token of the ESO core controller Pod or exploiting software vulnerabilities. This unauthorized access allows the attacker to read secrets within the cluster, potentially leading to a cluster takeover.\n\n#### T04: Unauthorized access to secret provider credentials\n\nAn attacker can gain unauthorized access to credentials that provide access to external APIs storing secrets. If the credentials have overly broad permissions, this could result in an organization takeover.\n\n#### T05: Data exfiltration through malicious resources\n\nAn attacker can exfiltrate data from the cluster by utilizing maliciously crafted resources. Multiple attack vectors can be employed, e.g.:\n\n1. copying data from a namespace to an unauthorized namespace\n2. exfiltrating data to an unauthorized secret provider\n3. exfiltrating data through an authorized secret provider to a malicious provider account\n\nSuccessful data exfiltration can lead to intellectual property loss, information misuse, loss of customer trust, and damage to the brand or reputation.\n\n#### T06: Supply chain attacks\n\nAn attacker can infiltrate the ESO container through various attack vectors. 
The following are some potential entry points, although this is not an exhaustive list. For a comprehensive analysis, refer to [SLSA Threats and mitigations](https:\/\/slsa.dev\/spec\/v0.1\/threats) or [GCP software supply chain threats](https:\/\/cloud.google.com\/software-supply-chain-security\/docs\/attack-vectors).\n\n1. Source Threats: Unauthorized changes or inclusion of vulnerable code in ESO through code submissions.\n2. Build Threats: Creation and distribution of malicious builds of ESO, such as in container registries, Artifact Hub, or Operator Hub.\n3. Dependency Threats: Introduction of vulnerable code into ESO dependencies.\n4. Deployment and Runtime Threats: Injection of malicious code through compromised deployment processes.\n\n#### T07: Malicious workloads in ESO namespace\n\nAn attacker can deploy malicious workloads within the external-secrets namespace, taking advantage of the ESO service account with potentially cluster-wide privileges.\n\n\n### Controls\n\n#### C01: Network Security Policy\n\nImplement a NetworkPolicy to restrict traffic in both inbound and outbound directions on all networks. Employ a \"deny all\" \/ \"permit by exception\" approach for inbound and outbound network traffic. The specific network policies for the core-controller depend on the chosen provider. The webhook and cert-controller have well-defined sets of endpoints they communicate with. Refer to the [Security Best Practices](.\/security-best-practices.md) documentation for inbound and outbound network requirements.\n\nPlease note that ESO does not provide pre-packaged network policies, and it is the user's responsibility to implement the necessary security controls.\n\n#### C02: Least Privilege RBAC\n\nAdhere to the principle of least privilege by configuring Role-Based Access Control (RBAC) permissions not only for the ESO workload but also for all users interacting with it. 
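As one sketch of what least privilege can look like for users, a namespaced `Role` could grant read-only access to ESO resources (the role name `eso-viewer` and namespace `team-a` are illustrative assumptions, not ESO defaults):

```yaml
# Illustrative sketch: read-only access to ESO resources in a single namespace.
# "eso-viewer" and "team-a" are placeholder names.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: eso-viewer
  namespace: team-a
rules:
  - apiGroups: ["external-secrets.io"]
    resources: ["externalsecrets", "secretstores"]
    verbs: ["get", "list", "watch"]
```

Binding this role instead of an aggregated edit/admin role keeps write access to `(Cluster)SecretStore` and `(Cluster)ExternalSecret` limited to the few principals that actually need it.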
Ensure that RBAC permissions on the provider side are appropriate for your setup, for example by limiting which sensitive information a given credential can access. Ensure that Kubernetes RBAC is set up to grant access to ESO resources only where necessary. For example, write access to `ClusterSecretStore`\/`ExternalSecret` resources alone may be sufficient for a threat to become a reality.\n\n#### C03: Policy Enforcement\n\nImplement a Policy Engine such as Kyverno or OPA to enforce restrictions on changes to ESO resources. The specific policies to be enforced depend on the environment. Here are a few suggestions:\n\n1. (Cluster)SecretStore: Restrict the allowed secret providers, disallowing unused or undesired providers (e.g. Webhook).\n2. (Cluster)SecretStore: Restrict the permitted authentication mechanisms (e.g. prevent usage of `secretRef`).\n3. (Cluster)SecretStore: Enforce limitations on modifications to provider-specific fields relevant for security, such as `caBundle`, `caProvider`, `region`, `role`, `url`, `environmentType`, `identityId`, and others.\n4. ClusterSecretStore: Control the usage of `namespaceSelector`, such as forbidding or mandating the usage of the `kube-system` namespace.\n5. ClusterExternalSecret: Restrict the usage of `namespaceSelector`.\n\nPlease note that ESO does not provide pre-packaged policies, and it is the user's responsibility to implement the necessary security controls.\n\n#### C04: Provider Access Policy\n\nConfigure fine-grained access control on the HTTP endpoint of the secret provider to prevent data exfiltration across accounts or organizations. 
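On AWS, for instance, such a control might take the shape of a VPC endpoint policy for Secrets Manager. The following is a hedged sketch only: the organization ID is a placeholder, and the exact policy shape for your endpoint should be taken from your provider's documentation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:PrincipalOrgID": "o-example12345" }
      }
    }
  ]
}
```

A policy like this denies secret reads through the endpoint to any principal outside the stated organization, closing one cross-account exfiltration path.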
Consult the documentation of your specific provider (e.g.: [AWS Secrets Manager VPC Endpoint Policies](https:\/\/docs.aws.amazon.com\/secretsmanager\/latest\/userguide\/vpc-endpoint-overview.html), [GCP Private Service Connect](https:\/\/cloud.google.com\/vpc\/docs\/private-service-connect), or [Azure Private Link](https:\/\/learn.microsoft.com\/en-us\/azure\/key-vault\/general\/private-link-service)) for guidance on setting up access policies.\n\n#### C05: Entirely disable CRDs\n\nYou should disable unused CRDs to narrow down your attack surface. Not all users require the use of `PushSecret`, `ClusterSecretStore` or `ClusterExternalSecret` resources.","site":"external secrets"}
{"questions":"external secrets Security Best Practices 1 Namespace Isolation To maintain security boundaries ESO ensures that namespaced resources like and are limited to their respective namespaces The following rules apply The purpose of this document is to outline a set of best practices for securing the External Secrets Operator ESO These practices aim to mitigate the risk of successful attacks against ESO and the Kubernetes cluster it integrates with Security Functions and Features","answers":"# Security Best Practices\n\nThe purpose of this document is to outline a set of best practices for securing the External Secrets Operator (ESO). These practices aim to mitigate the risk of successful attacks against ESO and the Kubernetes cluster it integrates with.\n\n## Security Functions and Features\n\n### 1. Namespace Isolation\n\nTo maintain security boundaries, ESO ensures that namespaced resources like `SecretStore` and `ExternalSecret` are limited to their respective namespaces. The following rules apply:\n\n1. `ExternalSecret` resources must not have cross-namespace references of `Kind=SecretStore` or `Kind=Secret` resources\n2. `SecretStore` resources must not have cross-namespace references of `Kind=Secret` or others\n\nFor cluster-wide resources like `ClusterSecretStore` and `ClusterExternalSecret`, exercise caution since they have access to Secret resources across all namespaces. Minimize RBAC permissions for administrators and developers to the necessary minimum. If cluster-wide resources are not required, it is recommended to disable them.\n\n### 2. Configure ClusterSecretStore match conditions\n\nUtilize the ClusterSecretStore resource to define specific match conditions using `namespaceSelector` or an explicit namespaces list. This restricts the usage of the `ClusterSecretStore` to a predetermined list of namespaces or a namespace that matches a predefined label. 
Here's an example:\n\n```yaml\napiVersion: external-secrets.io\/v1beta1\nkind: ClusterSecretStore\nmetadata:\n  name: fake\nspec:\n  conditions:\n    - namespaceSelector:\n        matchLabels:\n          app: frontend\n```\n\n### 3. Selectively Disable Reconciliation of Cluster-Wide Resources\n\nESO allows you to selectively disable the reconciliation of cluster-wide resources such as `ClusterSecretStore`, `ClusterExternalSecret`, and `PushSecret`. You can disable the installation of CRDs in the Helm chart or disable reconciliation in the core-controller using the following options:\n\nTo disable CRD installation:\n\n```yaml\n# disable cluster-wide resources & push secret\ncrds:\n  createClusterExternalSecret: false\n  createClusterSecretStore: false\n  createPushSecret: false\n```\n\nTo disable reconciliation in the core-controller, set the corresponding flags to `false`:\n\n```\n--enable-cluster-external-secret-reconciler=false\n--enable-cluster-store-reconciler=false\n```\n\n### 4. Implement Namespace-Scoped Installation\n\nTo further enhance security, consider installing ESO into a specific namespace with restricted access to only that namespace's resources. This prevents access to cluster-wide secrets. Use the following Helm values to scope the controller to a specific namespace:\n\n```yaml\n# If set to true, create scoped RBAC roles under the scoped namespace\n# and implicitly disable cluster stores and cluster external secrets\nscopedRBAC: true\n\n# Specify the namespace where external secrets should be reconciled\nscopedNamespace: my-namespace\n```\n\n### 5. Restrict Webhook TLS Ciphers\n\nConsider restricting the set of TLS ciphers accepted by the ESO webhook. 
Use the following Helm values to restrict the webhook to specific TLS ciphers:\n\n```yaml\nwebhook:\n  extraArgs:\n    tls-ciphers: \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\"\n```\n\n## Pod Security\n\nThe Pods of the External Secrets Operator have been configured to meet the [Pod Security Standards](https:\/\/kubernetes.io\/docs\/concepts\/security\/pod-security-standards\/), specifically the restricted profile. This configuration ensures a strong security posture by implementing recommended best practices for hardening Pods, including those outlined in the [NSA Kubernetes Hardening Guide](https:\/\/media.defense.gov\/2022\/Aug\/29\/2003066362\/-1\/-1\/0\/CTR_KUBERNETES_HARDENING_GUIDANCE_1.2_20220829.PDF).\n\nBy adhering to these standards, the External Secrets Operator benefits from a secure and resilient operating environment. The restricted profile has been set as the default configuration since version `v0.8.2`, and it is recommended to maintain this setting to align with the principle of least privilege.\n\n## Role-Based Access Control (RBAC)\n\nThe External Secrets Operator operates with elevated privileges within your Kubernetes cluster, allowing it to read and write to all secrets across all namespaces. It is crucial to properly restrict access to ESO resources such as `ExternalSecret` and `SecretStore` where necessary. This is particularly important for cluster-scoped resources like `ClusterExternalSecret` and `ClusterSecretStore`. Unauthorized tampering with these resources by an attacker could lead to unauthorized access to secrets or potential data exfiltration from your system.\n\nIn most scenarios, the External Secrets Operator is deployed cluster-wide. 
However, if you prefer to run it on a per-namespace basis, you can scope it to a specific namespace using the `scopedRBAC` and `scopedNamespace` options in the helm chart.\n\nTo ensure a secure RBAC configuration, consider the following checklist:\n\n* Restrict access to execute shell commands (pods\/exec) within the External Secrets Operator Pod.\n* Restrict access to (Cluster)ExternalSecret and (Cluster)SecretStore resources.\n* Limit access to aggregated ClusterRoles (view\/edit\/admin) as needed.\n* If necessary, deploy ESO with scoped RBAC or within a specific namespace.\n\nBy carefully managing RBAC permissions and scoping the External Secrets Operator appropriately, you can enhance the security of your Kubernetes cluster.\n\n## Network Traffic and Security\n\nTo ensure a secure network environment, it is recommended to restrict network traffic to and from the External Secrets Operator using `NetworkPolicies` or similar mechanisms. By default, the External Secrets Operator does not include pre-defined Network Policies.\n\nTo implement network restrictions effectively, consider the following steps:\n\n* Define and apply appropriate NetworkPolicies to limit inbound and outbound traffic for the External Secrets Operator.\n* Specify a \"deny all\" policy by default and selectively permit necessary communication based on your specific requirements.\n* Restrict access to only the required endpoints and protocols for the External Secrets Operator, such as communication with the Kubernetes API server or external secret providers.\n* Regularly review and update the Network Policies to align with changes in your network infrastructure and security requirements.\n\nIt is the responsibility of the user to define and configure Network Policies tailored to their specific environment and security needs. By implementing proper network restrictions, you can enhance the overall security posture of the External Secrets Operator within your Kubernetes cluster.\n\n!!! 
danger \"Data Exfiltration Risk\"\n\n    If not configured properly, ESO may be used to exfiltrate data out of your cluster.\n    It is advised to create tight NetworkPolicies and use a policy engine such as Kyverno to prevent data exfiltration.\n\n\n### Outbound Traffic Restrictions\n\n#### Core Controller\n\nRestrict outbound traffic from the core controller component to the following destinations:\n\n* `kube-apiserver`: The Kubernetes API server.\n* Secret provider (e.g., AWS, GCP): Whenever possible, use private endpoints to establish secure and private communication.\n\n#### Webhook\n\n* Restrict outbound traffic from the webhook component to the `kube-apiserver`.\n\n#### Cert Controller\n\n* Restrict outbound traffic from the cert controller component to the `kube-apiserver`.\n\n\n### Inbound Traffic Restrictions\n\n#### Core Controller\n\n* Restrict inbound traffic to the core controller component by allowing communication on port `8080` from your monitoring agent.\n\n#### Cert Controller\n\n* Restrict inbound traffic to the cert controller component by allowing communication on port `8080` from your monitoring agent.\n* Additionally, permit inbound traffic on port `8081` from the kubelet for health check endpoints (healthz\/readyz).\n\n#### Webhook\n\nRestrict inbound traffic to the webhook component as follows:\n\n* Allow communication on port `10250` from the kube-apiserver.\n* Allow communication on port `8080` from your monitoring agent.\n* Permit inbound traffic on port `8081` from the kubelet for health check endpoints (healthz\/readyz).\n\n## Policy Engine Best Practices\n\nTo enhance the security and enforce specific policies for External Secrets Operator (ESO) resources such as SecretStore and ExternalSecret, it is recommended to utilize a policy engine like [Kyverno](http:\/\/kyverno.io\/) or [OPA Gatekeeper](https:\/\/github.com\/open-policy-agent\/gatekeeper). 
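As a minimal Kyverno sketch (the policy name and the `team-a/` key prefix are illustrative assumptions; check the pattern semantics against Kyverno's documentation before relying on it), a validation rule could require that every `remoteRef.key` in an `ExternalSecret` sits under an approved prefix:

```yaml
# Sketch: only allow ExternalSecrets that reference keys under "team-a/".
# All names and the prefix are placeholders, not ESO or Kyverno defaults.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-remote-ref-keys
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-key-prefix
      match:
        any:
          - resources:
              kinds:
                - ExternalSecret
      validate:
        message: "remoteRef.key must start with the team-a/ prefix."
        pattern:
          spec:
            data:
              - remoteRef:
                  key: "team-a/*"
```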
These policy engines provide a way to define and enforce custom policies that restrict changes made to ESO resources.\n\n!!! danger \"Data Exfiltration Risk\"\n\n    ESO could be used to exfiltrate data out of your cluster. You should disable all providers you don't need.\n    Further, you should implement `NetworkPolicies` to restrict network access to known entities (see above), to prevent data exfiltration.\n\nHere are some recommendations to consider when configuring your policies:\n\n1. **Explicitly Deny Unused Providers**: Create policies that explicitly deny the usage of secret providers that are not required in your environment. This prevents unauthorized access to unnecessary providers and reduces the attack surface.\n2. **Restrict Access to Secrets**: Implement policies that restrict access to secrets based on specific conditions. For example, you can define policies to allow access to secrets only if they have a particular prefix in the `.spec.data[].remoteRef.key` field. This helps ensure that only authorized entities can access sensitive information.\n3. **Restrict `ClusterSecretStore` References**: Define policies to restrict the usage of `ClusterSecretStore` references within `ExternalSecret` resources. This ensures that the resources are properly scoped and prevents potential unauthorized access to secrets across namespaces.\n\nBy leveraging a policy engine, you can implement these recommendations and enforce custom policies that align with your organization's security requirements. Please refer to the documentation of the chosen policy engine (e.g., Kyverno or OPA Gatekeeper) for detailed instructions on how to define and enforce policies for ESO resources.\n\n\n!!! 
note \"Provider Validation Example Policy\"\n\n    The following policy validates the usage of the `provider` field in the SecretStore manifest.\n\n    ```yaml\n    {% filter indent(width=4) %}\n{% include 'kyverno-policy-secretstore.yaml' %}\n    {% endfilter %}\n    ```\n\n## Regular Patches\n\nTo maintain a secure environment, it is crucial to regularly patch and update all software components of External Secrets Operator and the underlying cluster. By doing so, known vulnerabilities can be addressed, and the overall system's security can be improved. Here are some recommended practices for ensuring timely updates:\n\n1. **Automated Patching and Updating**: Utilize automated patching and updating tools to streamline the process of keeping software components up-to-date.\n2. **Regularly Update ESO**: Stay informed about the latest updates and releases provided for ESO. The development team regularly releases updates to improve stability, performance, and security. Please refer to the [Stability and Support](..\/introduction\/stability-support.md) documentation for more information on the available updates.\n3. **Cluster-wide Updates**: Apart from ESO, ensure that all other software components within your cluster, such as the operating system, container runtime, and Kubernetes itself, are regularly patched and updated.\n\nBy adhering to a regular patching and updating schedule, you can proactively mitigate security risks associated with known vulnerabilities and ensure the overall stability and security of your ESO deployment.\n\n## Verify Artefacts\n\n### Verify Container Images\n\nThe container images of External Secrets Operator are signed using Cosign and the keyless signing feature. 
To ensure the authenticity and integrity of the container image, you can follow the steps outlined below:\n\n\n```sh\n# Retrieve Image Signature\n$ crane digest ghcr.io\/external-secrets\/external-secrets:v0.8.1\nsha256:36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554\n\n# verify signature\n$ COSIGN_EXPERIMENTAL=1 cosign verify ghcr.io\/external-secrets\/external-secrets@sha256:36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554 | jq\n\n# ...\n[\n  {\n    \"critical\": {\n      \"identity\": {\n        \"docker-reference\": \"ghcr.io\/external-secrets\/external-secrets\"\n      },\n      \"image\": {\n        \"docker-manifest-digest\": \"sha256:36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554\"\n      },\n      \"type\": \"cosign container image signature\"\n    },\n    \"optional\": {\n      \"1.3.6.1.4.1.57264.1.1\": \"https:\/\/token.actions.githubusercontent.com\",\n      \"1.3.6.1.4.1.57264.1.2\": \"workflow_dispatch\",\n      \"1.3.6.1.4.1.57264.1.3\": \"a0d2aef2e35c259c9ee75d65f7587e6ed71ef2ad\",\n      \"1.3.6.1.4.1.57264.1.4\": \"Create Release\",\n      \"1.3.6.1.4.1.57264.1.5\": \"external-secrets\/external-secrets\",\n      \"1.3.6.1.4.1.57264.1.6\": \"refs\/heads\/main\",\n      \"Bundle\": {\n        # ...\n      },\n      \"GITHUB_ACTOR\": \"gusfcarvalho\",\n      \"Issuer\": \"https:\/\/token.actions.githubusercontent.com\",\n      \"Subject\": \"https:\/\/github.com\/external-secrets\/external-secrets\/.github\/workflows\/release.yml@refs\/heads\/main\",\n      \"githubWorkflowName\": \"Create Release\",\n      \"githubWorkflowRef\": \"refs\/heads\/main\",\n      \"githubWorkflowRepository\": \"external-secrets\/external-secrets\",\n      \"githubWorkflowSha\": \"a0d2aef2e35c259c9ee75d65f7587e6ed71ef2ad\",\n      \"githubWorkflowTrigger\": \"workflow_dispatch\"\n    }\n  }\n]\n```\n\nIn the output of the verification process, pay close attention to the `optional.Issuer` and `optional.Subject` fields. 
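In automation, this comparison can be scripted. The following sketch assumes `jq` is available and, for illustration, captures a saved verification payload in a variable; in practice you would pipe the `cosign verify` output shown above directly into the check:

```shell
# Sketch: check Issuer/Subject from cosign's JSON output. In practice, set
# payload="$(COSIGN_EXPERIMENTAL=1 cosign verify ghcr.io/external-secrets/external-secrets@<digest>)".
EXPECTED_ISSUER="https://token.actions.githubusercontent.com"
EXPECTED_SUBJECT="https://github.com/external-secrets/external-secrets/.github/workflows/release.yml@refs/heads/main"
payload='[{"optional":{"Issuer":"https://token.actions.githubusercontent.com","Subject":"https://github.com/external-secrets/external-secrets/.github/workflows/release.yml@refs/heads/main"}}]'
if echo "$payload" | jq -e --arg i "$EXPECTED_ISSUER" --arg s "$EXPECTED_SUBJECT" \
    '.[0].optional | .Issuer == $i and .Subject == $s' >/dev/null; then
  echo "image verified"
else
  echo "verification FAILED" >&2
fi
```

Because `jq -e` reflects the boolean result in its exit status, the check fails closed: any deviation in either field takes the `else` branch.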
These fields contain important information about the image's authenticity. Verify that the values of Issuer and Subject match the expected values for the ESO container image. If they do not match, it indicates that the image is not legitimate and should not be used.\n\nBy following these steps and confirming that the Issuer and Subject fields align with the expected values for the ESO container image, you can ensure that the image has not been tampered with and is safe to use.\n\n\n### Verifying Provenance\n\nThe External Secrets Operator employs the [SLSA](https:\/\/slsa.dev\/provenance\/v0.1) (Supply Chain Levels for Software Artifacts) standard to create and attest to the provenance of its builds. Provenance verification is essential to ensure the integrity and trustworthiness of the software supply chain. This section outlines the process of verifying the attested provenance of External Secrets Operator builds using the cosign tool.\n\n```sh\n$ COSIGN_EXPERIMENTAL=1 cosign verify-attestation --type slsaprovenance ghcr.io\/external-secrets\/external-secrets:v0.8.1 | jq .payload -r | base64 --decode | jq\n\nVerification for ghcr.io\/external-secrets\/external-secrets:v0.8.1 --\nThe following checks were performed on each of these signatures:\n  - The cosign claims were validated\n  - Existence of the claims in the transparency log was verified offline\n  - Any certificates were verified against the Fulcio roots.\nCertificate subject:  https:\/\/github.com\/external-secrets\/external-secrets\/.github\/workflows\/release.yml@refs\/heads\/main\nCertificate issuer URL:  https:\/\/token.actions.githubusercontent.com\nGitHub Workflow Trigger: workflow_dispatch\nGitHub Workflow SHA: a0d2aef2e35c259c9ee75d65f7587e6ed71ef2ad\nGitHub Workflow Name: Create Release\nGitHub Workflow Trigger external-secrets\/external-secrets\nGitHub Workflow Ref: refs\/heads\/main\n{\n  \"_type\": \"https:\/\/in-toto.io\/Statement\/v0.1\",\n  \"predicateType\": 
\"https:\/\/slsa.dev\/provenance\/v0.2\",\n  \"subject\": [\n    {\n      \"name\": \"ghcr.io\/external-secrets\/external-secrets\",\n      \"digest\": {\n        \"sha256\": \"36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554\"\n      }\n    }\n  ],\n  \"predicate\": {\n    \"builder\": {\n      \"id\": \"https:\/\/github.com\/external-secrets\/external-secrets\/Attestations\/GitHubHostedActions@v1\"\n    },\n    \"buildType\": \"https:\/\/github.com\/Attestations\/GitHubActionsWorkflow@v1\",\n    \"invocation\": {\n      \"configSource\": {\n        \"uri\": \"git+https:\/\/github.com\/external-secrets\/external-secrets\",\n        \"digest\": {\n          \"sha1\": \"a0d2aef2e35c259c9ee75d65f7587e6ed71ef2ad\"\n        },\n        \"entryPoint\": \"Create Release\"\n      },\n      \"parameters\": {\n        \"version\": \"v0.8.1\"\n      }\n    },\n    [...]\n  }\n}\n```\n\n### Fetching SBOM\n\nEvery External Secrets Operator image is accompanied by an SBOM (Software Bill of Materials) in SPDX JSON format. The SBOM provides detailed information about the software components and dependencies used in the image. 
This technical documentation explains the process of downloading and verifying the SBOM for a specific version of External Secrets Operator using the Cosign tool.\n\n```sh\n$ crane digest ghcr.io\/external-secrets\/external-secrets:v0.8.1\nsha256:36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554\n\n$ COSIGN_EXPERIMENTAL=1 cosign verify-attestation --type spdx ghcr.io\/external-secrets\/external-secrets@sha256:36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554 | jq '.payload |= @base64d | .payload | fromjson' | jq '.predicate.Data | fromjson'\n\n[...]\n{\n  \"SPDXID\": \"SPDXRef-DOCUMENT\",\n  \"name\": \"ghcr.io\/external-secrets\/external-secrets@sha256-36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554\",\n  \"spdxVersion\": \"SPDX-2.2\",\n  \"creationInfo\": {\n    \"created\": \"2023-03-17T23:17:01.568002344Z\",\n    \"creators\": [\n      \"Organization: Anchore, Inc\",\n      \"Tool: syft-0.40.1\"\n    ],\n    \"licenseListVersion\": \"3.16\"\n  },\n  \"dataLicense\": \"CC0-1.0\",\n  \"documentNamespace\": \"https:\/\/anchore.com\/syft\/image\/ghcr.io\/external-secrets\/external-secrets@sha256-36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554-83484ebb-b469-45fa-8fcc-9290c4ea4f6f\",\n  \"packages\": [\n    [...]\n    {\n      \"SPDXID\": \"SPDXRef-c809070b0beb099e\",\n      \"name\": \"tzdata\",\n      \"licenseConcluded\": \"NONE\",\n      \"downloadLocation\": \"NOASSERTION\",\n      \"externalRefs\": [\n        {\n          \"referenceCategory\": \"SECURITY\",\n          \"referenceLocator\": \"cpe:2.3:a:tzdata:tzdata:2021a-1\\\\+deb11u8:*:*:*:*:*:*:*\",\n          \"referenceType\": \"cpe23Type\"\n        },\n        {\n          \"referenceCategory\": \"PACKAGE_MANAGER\",\n          \"referenceLocator\": \"pkg:deb\/debian\/tzdata@2021a-1+deb11u8?arch=all&distro=debian-11\",\n          \"referenceType\": \"purl\"\n        }\n      ],\n      \"filesAnalyzed\": false,\n      \"licenseDeclared\": 
\"NONE\",\n      \"originator\": \"Person: GNU Libc Maintainers <debian-glibc@lists.debian.org>\",\n      \"sourceInfo\": \"acquired package info from DPKG DB: \/var\/lib\/dpkg\/status.d\/tzdata, \/usr\/share\/doc\/tzdata\/copyright\",\n      \"versionInfo\": \"2021a-1+deb11u8\"\n    }\n  ]\n}\n```","site":"external secrets","answers_cleaned":"  Security Best Practices  The purpose of this document is to outline a set of best practices for securing the External Secrets Operator  ESO   These practices aim to mitigate the risk of successful attacks against ESO and the Kubernetes cluster it integrates with      Security Functions and Features      1  Namespace Isolation  To maintain security boundaries  ESO ensures that namespaced resources like  SecretStore  and  ExternalSecret  are limited to their respective namespaces  The following rules apply   1   ExternalSecret  resources must not have cross namespace references of  Kind SecretStore  or  Kind Secret  resources 2   SecretStore  resources must not have cross namespace references of  Kind Secret  or others  For cluster wide resources like  ClusterSecretStore  and  ClusterExternalSecret   exercise caution since they have access to Secret resources across all namespaces  Minimize RBAC permissions for administrators and developers to the necessary minimum  If cluster wide resources are not required  it is recommended to disable them       2  Configure ClusterSecretStore match conditions  Utilize the ClusterSecretStore resource to define specific match conditions using  namespaceSelector  or an explicit namespaces list  This restricts the usage of the  ClusterSecretStore  to a predetermined list of namespaces or a namespace that matches a predefined label  Here s an example      yaml apiVersion  external secrets io v1beta1 kind  ClusterSecretStore metadata    name  fake spec    conditions        namespaceSelector          matchLabels            app  frontend          3  Selectively Disable Reconciliation of Cluster 
Wide Resources  ESO allows you to selectively disable the reconciliation of cluster wide resources such as  ClusterSecretStore    ClusterExternalSecret   and  PushSecret   You can disable the installation of CRDs in the Helm chart or disable reconciliation in the core controller using the following options   To disable CRD installation      yaml   disable cluster wide resources   push secret crds    createClusterExternalSecret  false   createClusterSecretStore  false   createPushSecret  false      To disable reconciliation in the core controller         enable cluster external secret reconciler   enable cluster store reconciler          4  Implement Namespace Scoped Installation  To further enhance security  consider installing ESO into a specific namespace with restricted access to only that namespace s resources  This prevents access to cluster wide secrets  Use the following Helm values to scope the controller to a specific namespace      yaml   If set to true  create scoped RBAC roles under the scoped namespace   and implicitly disable cluster stores and cluster external secrets scopedRBAC  true    Specify the namespace where external secrets should be reconciled scopedNamespace  my namespace          5  Restrict Webhook TLS Ciphers  Consider installing ESO restricting webhook ciphers  Use the following Helm values to scope webhook for specific TLS ciphers     yaml webhook    extraArgs      tls ciphers   TLS ECDHE ECDSA WITH CHACHA20 POLY1305 SHA256 TLS ECDHE RSA WITH CHACHA20 POLY1305 SHA256         Pod Security  The Pods of the External Secrets Operator have been configured to meet the  Pod Security Standards  https   kubernetes io docs concepts security pod security standards    specifically the restricted profile  This configuration ensures a strong security posture by implementing recommended best practices for hardening Pods  including those outlined in the  NSA Kubernetes Hardening Guide  https   media defense gov 2022 Aug 29 2003066362  1  1 0 CTR 
KUBERNETES HARDENING GUIDANCE 1 2 20220829 PDF    By adhering to these standards  the External Secrets Operator benefits from a secure and resilient operating environment  The restricted profile has been set as the default configuration since version  v0 8 2   and it is recommended to maintain this setting to align with the principle of least privilege      Role Based Access Control  RBAC   The External Secrets Operator operates with elevated privileges within your Kubernetes cluster  allowing it to read and write to all secrets across all namespaces  It is crucial to properly restrict access to ESO resources such as  ExternalSecret  and  SecretStore  where necessary  This is particularly important for cluster scoped resources like  ClusterExternalSecret  and  ClusterSecretStore   Unauthorized tampering with these resources by an attacker could lead to unauthorized access to secrets or potential data exfiltration from your system   In most scenarios  the External Secrets Operator is deployed cluster wide  However  if you prefer to run it on a per namespace basis  you can scope it to a specific namespace using the  scopedRBAC  and  scopedNamespace  options in the helm chart   To ensure a secure RBAC configuration  consider the following checklist     Restrict access to execute shell commands  pods exec  within the External Secrets Operator Pod    Restrict access to  Cluster ExternalSecret and  Cluster SecretStore resources    Limit access to aggregated ClusterRoles  view edit admin  as needed    If necessary  deploy ESO with scoped RBAC or within a specific namespace   By carefully managing RBAC permissions and scoping the External Secrets Operator appropriately  you can enhance the security of your Kubernetes cluster      Network Traffic and Security  To ensure a secure network environment  it is recommended to restrict network traffic to and from the External Secrets Operator using  NetworkPolicies  or similar mechanisms  By default  the External Secrets Operator 
does not include pre defined Network Policies   To implement network restrictions effectively  consider the following steps     Define and apply appropriate NetworkPolicies to limit inbound and outbound traffic for the External Secrets Operator    Specify a  deny all  policy by default and selectively permit necessary communication based on your specific requirements    Restrict access to only the required endpoints and protocols for the External Secrets Operator  such as communication with the Kubernetes API server or external secret providers    Regularly review and update the Network Policies to align with changes in your network infrastructure and security requirements   It is the responsibility of the user to define and configure Network Policies tailored to their specific environment and security needs  By implementing proper network restrictions  you can enhance the overall security posture of the External Secrets Operator within your Kubernetes cluster       danger  Data Exfiltration Risk       If not configured properly ESO may be used to exfiltrate data out of your cluster      It is advised to create tight NetworkPolicies and use a policy engine such as kyverno to prevent data exfiltration        Outbound Traffic Restrictions       Core Controller  Restrict outbound traffic from the core controller component to the following destinations      kube apiserver   The Kubernetes API server    Secret provider  e g   AWS  GCP   Whenever possible  use private endpoints to establish secure and private communication        Webhook    Restrict outbound traffic from the webhook component to the  kube apiserver         Cert Controller    Restrict outbound traffic from the cert controller component to the  kube apiserver         Inbound Traffic Restrictions       Core Controller    Restrict inbound traffic to the core controller component by allowing communication on port  8080  from your monitoring agent        Cert Controller    Restrict inbound traffic to the cert 
controller component by allowing communication on port  8080  from your monitoring agent    Additionally  permit inbound traffic on port  8081  from the kubelet for health check endpoints  healthz readyz         Webhook  Restrict inbound traffic to the webhook component as follows     Allow communication on port  10250  from the kube apiserver    Allow communication on port  8080  from your monitoring agent    Permit inbound traffic on port  8081  from the kubelet for health check endpoints  healthz readyz       Policy Engine Best Practices  To enhance the security and enforce specific policies for External Secrets Operator  ESO  resources such as SecretStore and ExternalSecret  it is recommended to utilize a policy engine like  Kyverno  http   kyverno io   or  OPA Gatekeeper  https   github com open policy agent gatekeeper   These policy engines provide a way to define and enforce custom policies that restrict changes made to ESO resources       danger  Data Exfiltration Risk       ESO could be used to exfiltrate data out of your cluster  You should disable all providers you don t need      Further  you should implement  NetworkPolicies  to restrict network access to known entities  see above   to prevent data exfiltration   Here are some recommendations to consider when configuring your policies   1    Explicitly Deny Unused Providers    Create policies that explicitly deny the usage of secret providers that are not required in your environment  This prevents unauthorized access to unnecessary providers and reduces the attack surface  2    Restrict Access to Secrets    Implement policies that restrict access to secrets based on specific conditions  For example  you can define policies to allow access to secrets only if they have a particular prefix in the   spec data   remoteRef key  field  This helps ensure that only authorized entities can access sensitive information  3    Restrict  ClusterSecretStore  References    Define policies to restrict the usage of 
ClusterSecretStore references within ExternalSecret resources  This ensures that the resources are properly scoped and prevent potential unauthorized access to secrets across namespaces   By leveraging a policy engine  you can implement these recommendations and enforce custom policies that align with your organization s security requirements  Please refer to the documentation of the chosen policy engine  e g   Kyverno or OPA Gatekeeper  for detailed instructions on how to define and enforce policies for ESO resources        note  Provider Validation Example Policy       The following policy validates the usage of the  provider  field in the SecretStore manifest          yaml        filter indent width 4        include  kyverno policy secretstore yaml            endfilter                Regular Patches  To maintain a secure environment  it is crucial to regularly patch and update all software components of External Secrets Operator and the underlying cluster  By doing so  known vulnerabilities can be addressed  and the overall system s security can be improved  Here are some recommended practices for ensuring timely updates   1    Automated Patching and Updating    Utilize automated patching and updating tools to streamline the process of keeping software components up to date 2    Regular Update ESO    Stay informed about the latest updates and releases provided for ESO  The development team regularly releases updates to improve stability  performance  and security  Please refer to the  Stability and Support     introduction stability support md  documentation for more information on the available updates 3    Cluster wide Updates    Apart from ESO  ensure that all other software components within your cluster  such as the operating system  container runtime  and Kubernetes itself  are regularly patched and updated   By adhering to a regular patching and updating schedule  you can proactively mitigate security risks associated with known vulnerabilities and ensure 
the overall stability and security of your ESO deployment      Verify Artefacts      Verify Container Images  The container images of External Secrets Operator are signed using Cosign and the keyless signing feature  To ensure the authenticity and integrity of the container image  you can follow the steps outlined below       sh   Retrieve Image Signature   crane digest ghcr io external secrets external secrets v0 8 1 sha256 36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554    verify signature   COSIGN EXPERIMENTAL 1 cosign verify ghcr io external secrets external secrets sha256 36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554   jq                   critical            identity              docker reference    ghcr io external secrets external secrets                  image              docker manifest digest    sha256 36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554                  type    cosign container image signature              optional            1 3 6 1 4 1 57264 1 1    https   token actions githubusercontent com          1 3 6 1 4 1 57264 1 2    workflow dispatch          1 3 6 1 4 1 57264 1 3    a0d2aef2e35c259c9ee75d65f7587e6ed71ef2ad          1 3 6 1 4 1 57264 1 4    Create Release          1 3 6 1 4 1 57264 1 5    external secrets external secrets          1 3 6 1 4 1 57264 1 6    refs heads main          Bundle                                   GITHUB ACTOR    gusfcarvalho          Issuer    https   token actions githubusercontent com          Subject    https   github com external secrets external secrets  github workflows release yml refs heads main          githubWorkflowName    Create Release          githubWorkflowRef    refs heads main          githubWorkflowRepository    external secrets external secrets          githubWorkflowSha    a0d2aef2e35c259c9ee75d65f7587e6ed71ef2ad          githubWorkflowTrigger    workflow dispatch                   In the output of the verification process  pay close 
attention to the  optional Issuer  and  optional Subject  fields  These fields contain important information about the image s authenticity  Verify that the values of Issuer and Subject match the expected values for the ESO container image  If they do not match  it indicates that the image is not legitimate and should not be used   By following these steps and confirming that the Issuer and Subject fields align with the expected values for the ESO container image  you can ensure that the image has not been tampered with and is safe to use        Verifying Provenance  The External Secrets Operator employs the  SLSA  https   slsa dev provenance v0 1   Supply Chain Levels for Software Artifacts  standard to create and attest to the provenance of its builds  Provenance verification is essential to ensure the integrity and trustworthiness of the software supply chain  This outlines the process of verifying the attested provenance of External Secrets Operator builds using the cosign tool      sh   COSIGN EXPERIMENTAL 1 cosign verify attestation   type slsaprovenance ghcr io external secrets external secrets v0 8 1   jq  payload  r   base64   decode   jq  Verification for ghcr io external secrets external secrets v0 8 1    The following checks were performed on each of these signatures      The cosign claims were validated     Existence of the claims in the transparency log was verified offline     Any certificates were verified against the Fulcio roots  Certificate subject   https   github com external secrets external secrets  github workflows release yml refs heads main Certificate issuer URL   https   token actions githubusercontent com GitHub Workflow Trigger  workflow dispatch GitHub Workflow SHA  a0d2aef2e35c259c9ee75d65f7587e6ed71ef2ad GitHub Workflow Name  Create Release GitHub Workflow Trigger external secrets external secrets GitHub Workflow Ref  refs heads main       type    https   in toto io Statement v0 1      predicateType    https   slsa dev provenance v0 
2      subject                  name    ghcr io external secrets external secrets          digest              sha256    36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554                        predicate          builder            id    https   github com external secrets external secrets Attestations GitHubHostedActions v1              buildType    https   github com Attestations GitHubActionsWorkflow v1        invocation            configSource              uri    git https   github com external secrets external secrets            digest                sha1    a0d2aef2e35c259c9ee75d65f7587e6ed71ef2ad                      entryPoint    Create Release                  parameters              version    v0 8 1                                          Fetching SBOM  Every External Secrets Operator image is accompanied by an SBOM  Software Bill of Materials  in SPDX JSON format  The SBOM provides detailed information about the software components and dependencies used in the image  This technical documentation explains the process of downloading and verifying the SBOM for a specific version of External Secrets Operator using the Cosign tool      sh   crane digest ghcr io external secrets external secrets v0 8 1 sha256 36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554    COSIGN EXPERIMENTAL 1 cosign verify attestation   type spdx ghcr io external secrets external secrets sha256 36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554   jq   payload     base64d    payload   fromjson    jq   predicate Data   fromjson              SPDXID    SPDXRef DOCUMENT      name    ghcr io external secrets external secrets sha256 36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554      spdxVersion    SPDX 2 2      creationInfo          created    2023 03 17T23 17 01 568002344Z        creators            Organization  Anchore  Inc          Tool  syft 0 40 1              licenseListVersion    3 16          dataLicense    CC0 1 0      
documentNamespace    https   anchore com syft image ghcr io external secrets external secrets sha256 36e606279dbebac51b4b9300b9fa85e8c08c1c673ba3ecc38af1402a0b035554 83484ebb b469 45fa 8fcc 9290c4ea4f6f      packages                            SPDXID    SPDXRef c809070b0beb099e          name    tzdata          licenseConcluded    NONE          downloadLocation    NOASSERTION          externalRefs                          referenceCategory    SECURITY              referenceLocator    cpe 2 3 a tzdata tzdata 2021a 1   deb11u8                            referenceType    cpe23Type                                  referenceCategory    PACKAGE MANAGER              referenceLocator    pkg deb debian tzdata 2021a 1 deb11u8 arch all distro debian 11              referenceType    purl                            filesAnalyzed   false         licenseDeclared    NONE          originator    Person  GNU Libc Maintainers  debian glibc lists debian org           sourceInfo    acquired package info from DPKG DB   var lib dpkg status d tzdata   usr share doc tzdata copyright          versionInfo    2021a 1 deb11u8                 "}
{"questions":"external secrets Dockerconfigjson example Please follow the authentication and SecretStore steps of the to setup access to your google cloud account first Here we will give some examples of how to work with a few common k8s secret types We will give this examples here with the gcp provider should work with other providers in the same way Please also check the guides on to understand the details First create a secret in Google Cloud Secrets Manager containing your docker config A few common k8s secret types examples","answers":"# A few common k8s secret types examples\n\nHere we will give some examples of how to work with a few common k8s secret types. We will give these examples here with the gcp provider (they should work with other providers in the same way). Please also check the guides on [Advanced Templating](templating.md) to understand the details.\n\nPlease follow the authentication and SecretStore steps of the [Google Cloud Secrets Manager guide](..\/provider\/google-secrets-manager.md) to set up access to your Google Cloud account first.\n\n\n## Dockerconfigjson example\n\nFirst create a secret in Google Cloud Secrets Manager containing your docker config:\n\n![iam](..\/pictures\/screenshot_docker_config_json_example.png)\n\nLet's call this secret docker-config-example on Google Cloud.\n\nThen create an ExternalSecret resource taking advantage of templating to populate the generated secret:\n\n```yaml\n{% include 'gcpsm-docker-config-externalsecret.yaml' %}\n```\n\nFor Helm users: since Helm interprets the template above, the ExternalSecret resource can be written this way:\n\n```yaml\n{% include 'gcpsm-docker-config-helm-externalsecret.yaml' %}\n```\n\nFor more information, please see [this issue](https:\/\/github.com\/helm\/helm\/issues\/2798).\n\nThis will generate a valid dockerconfigjson secret for you to use!\n\nYou can get the final value with:\n\n```bash\nkubectl get secret secret-to-be-created -n <namespace> -o 
jsonpath=\"{.data.\\.dockerconfigjson}\" | base64 -d\n```\n\nAlternatively, if you only have the container registry name and password value, you can take advantage of the advanced ExternalSecret templating functions to create the secret:\n\n```yaml\n{% raw %}\napiVersion: external-secrets.io\/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: dk-cfg-example\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    name: example\n    kind: SecretStore\n  target:\n    template:\n      type: kubernetes.io\/dockerconfigjson\n      data:\n        .dockerconfigjson: '{\"auths\":{\"{{ .registryHost }}\":{\"username\":\"{{ .registryName }}\",\"password\":\"{{ .password }}\",\"auth\":\"{{ printf \"%s:%s\" .registryName .password | b64enc }}\"}}}'\n  data:\n  - secretKey: registryName\n    remoteRef:\n      key: secret\/docker-registry-name # \"myRegistry\"\n  - secretKey: registryHost\n    remoteRef:\n      key: secret\/docker-registry-host # \"docker.io\"\n  - secretKey: password\n    remoteRef:\n      key: secret\/docker-registry-password\n{% endraw %}\n```\n\n## TLS Cert example\n\nWe are assuming here that you already have valid certificates, maybe generated with letsencrypt or any other CA. To keep things simple, you can use openssl to bundle your cert.pem and privkey.pem files into a single pkcs12 secret:\n\n```bash\nopenssl pkcs12 -export -out certificate.p12 -inkey privkey.pem -in cert.pem\n```\n\nWith a certificate.p12 you can upload it to Google Cloud Secrets Manager:\n\n![p12](..\/pictures\/screenshot_ssl_certificate_p12_example.png)\n\nAnd now you can create an ExternalSecret that gets it. 
You will end up with a k8s secret of type tls with pem values.\n\n```yaml\n{% include 'gcpsm-tls-externalsecret.yaml' %}\n```\n\nYou can get their values with:\n\n```bash\nkubectl get secret secret-to-be-created -n <namespace> -o jsonpath=\"{.data.tls\\.crt}\" | base64 -d\nkubectl get secret secret-to-be-created -n <namespace> -o jsonpath=\"{.data.tls\\.key}\" | base64 -d\n```\n\n\n## SSH Auth example\n\nAdd the ssh privkey to a new Google Cloud Secrets Manager secret:\n\n![ssh](..\/pictures\/screenshot_ssh_privkey_example.png)\n\nAnd now you can create an ExternalSecret that gets it. You will end up with a k8s secret of type ssh-auth with the privatekey value.\n\n```yaml\n{% include 'gcpsm-ssh-auth-externalsecret.yaml' %}\n```\n\nYou can get the privkey value with:\n\n```bash\nkubectl get secret secret-to-be-created -n <namespace> -o jsonpath=\"{.data.ssh-privatekey}\" | base64 -d\n```\n\n## More examples\n\n!!! note \"We need more examples here\"\n    Feel free to contribute with our docs and add more examples here!","site":"external secrets","answers_cleaned":"  A few common k8s secret types examples  Here we will give some examples of how to work with a few common k8s secret types  We will give this examples here with the gcp provider  should work with other providers in the same way   Please also check the guides on  Advanced Templating  templating md  to understand the details   Please follow the authentication and SecretStore steps of the  Google Cloud Secrets Manager guide     provider google secrets manager md  to setup access to your google cloud account first       Dockerconfigjson example  First create a secret in Google Cloud Secrets Manager containing your docker config     iam     pictures screenshot docker config json example png   Let s call this secret docker config example on Google Cloud   Then create a ExternalSecret resource taking advantage of templating to populate the generated secret      yaml    include  gcpsm docker config externalsecret 
yaml          For Helm users  since Helm interprets the template above  the ExternalSecret resource can be written this way      yaml    include  gcpsm docker config helm externalsecret yaml          For more information  please see  this issue  https   github com helm helm issues 2798   This will generate a valid dockerconfigjson secret for you to use   You can get the final value with      bash kubectl get secret secret to be created  n  namespace   o jsonpath    data   dockerconfigjson     base64  d      Alternately  if you only have the container registry name and password value  you can take advantage of the advanced ExternalSecret templating functions to create the secret      yaml    raw    apiVersion  external secrets io v1beta1 kind  ExternalSecret metadata    name  dk cfg example spec    refreshInterval  1h   secretStoreRef      name  example     kind  SecretStore   target      template        type  kubernetes io dockerconfigjson       data           dockerconfigjson     auths         username      password      auth           data      secretKey  registryName     remoteRef        key  secret docker registry name    myRegistry      secretKey  registryHost     remoteRef        key  secret docker registry host    docker io      secretKey  password     remoteRef        key  secret docker registry password    endraw            TLS Cert example  We are assuming here that you already have valid certificates  maybe generated with letsencrypt or any other CA  So to simplify you can use openssl to generate a single secret pkcs12 cert based on your cert pem and privkey pen files      bash openssl pkcs12  export  out certificate p12  inkey privkey pem  in cert pem      With a certificate p12 you can upload it to Google Cloud Secrets Manager     p12     pictures screenshot ssl certificate p12 example png   And now you can create an ExternalSecret that gets it  You will end up with a k8s secret of type tls with pem values      yaml    include  gcpsm tls externalsecret 
yaml          You can get their values with      bash kubectl get secret secret to be created  n  namespace   o jsonpath    data tls  crt     base64  d kubectl get secret secret to be created  n  namespace   o jsonpath    data tls  key     base64  d          SSH Auth example  Add the ssh privkey to a new Google Cloud Secrets Manager secret     ssh     pictures screenshot ssh privkey example png   And now you can create an ExternalSecret that gets it  You will end up with a k8s secret of type ssh auth with the privatekey value      yaml    include  gcpsm ssh auth externalsecret yaml          You can get the privkey value with      bash kubectl get secret secret to be created  n  namespace   o jsonpath    data ssh privatekey     base64  d         More examples      note  We need more examples here      Feel free to contribute with our docs and add more examples here "}
{"questions":"external secrets note Each data value is interpreted as a Please note that referencing a non existing key in the template will raise an error instead of being suppressed Advanced Templating v2 With External Secrets Operator you can transform the data from the external secret provider before it is stored as You can do this with the Consider using camelcase when defining spec data secretkey example serviceAccountToken","answers":"# Advanced Templating v2\n\nWith External Secrets Operator you can transform the data from the external secret provider before it is stored as `Kind=Secret`. You can do this with the `Spec.Target.Template`.\n\nEach data value is interpreted as a [Go template](https:\/\/golang.org\/pkg\/text\/template\/). Please note that referencing a non-existing key in the template will raise an error instead of being suppressed.\n\n!!! note\n\n    Consider using camelCase when defining **`spec.data.secretKey`**, for example: serviceAccountToken\n\n    If your secret keys contain **`-` (dashes)**, you will need to reference them using **`index`** <br\/>\n    Example: **`\\{\\{ index .data \"service-account-token\" \\}\\}`**\n\n## Helm\n\nWhen installing ExternalSecrets via `helm`, the template must be escaped so that `helm` will not try to render it. 
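To see why: in a chart template, an inner expression like the one below is evaluated by Helm itself during chart rendering, so it never reaches the operator intact (a hypothetical fragment; `accessToken` is an illustrative key, not part of the chart's values):

```yaml
target:
  template:
    data:
      # Helm tries to resolve .accessToken against the chart's own context
      # at render time, instead of passing the template through verbatim.
      token: "{{ .accessToken }}"
```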
The most straightforward way to accomplish this would be to use backticks ([raw string constants](https:\/\/pkg.go.dev\/text\/template#hdr-Examples)):\n\n```yaml\n{% include 'helm-template-v2-escape-sequence.yaml' %}\n```\n\n## Examples\n\nYou can use templates to inject your secrets into a configuration file that you mount into your pod:\n\n```yaml\n{% include 'multiline-template-v2-external-secret.yaml' %}\n```\n\nAnother example with two keys in the same secret:\n\n```yaml\n{% include 'multikey-template-v2-external-secret.yaml' %}\n```\n\n### MergePolicy\n\nBy default, the templating mechanism will not use any information available from the original `data` and `dataFrom` queries to the provider, and only keep the templated information. It is possible to change this behavior through the use of the `mergePolicy` field. `mergePolicy` currently accepts two values: `Replace` (the default) and `Merge`. When using `Merge`, `data` and `dataFrom` keys will also be embedded into the templated secret, having lower priority than the template outcome. See the example for more information:\n\n```yaml\n{% include 'merge-template-v2-external-secret.yaml' %}\n```\n\n### TemplateFrom\n\nYou do not have to define your templates inline in an ExternalSecret but you can pull `ConfigMaps` or other Secrets that contain a template. Consider the following example:\n\n```yaml\n{% include 'template-v2-from-secret.yaml' %}\n```\n\n`TemplateFrom` also gives you the ability to Target your template to the Secret's Annotations, Labels or the Data block. It also allows you to render the templated information as `Values` or as `KeysAndValues` through the `templateAs` configuration:\n\n```yaml\n{% include 'template-v2-scope-and-target.yaml' %}\n```\n\nLastly, `TemplateFrom` also supports adding `Literal` blocks for quick templating. 
These `Literal` blocks differ from `Template.Data` as they are rendered as a `key:value` pair (whereas with `Template.Data` you can only template the value).\n\nSee the following example of how to produce an `htpasswd` file that can be used by an ingress-controller (for example: https:\/\/kubernetes.github.io\/ingress-nginx\/examples\/auth\/basic\/) where the contents of the `htpasswd` file need to be presented via the `auth` key. We use the `htpasswd` function to create a `bcrypt` hash of the password.\n\nSuppose you have multiple key-value pairs within your provider secret like\n\n```json\n{\n  \"user1\": \"password1\",\n  \"user2\": \"password2\",\n  ...\n}\n```\n\n```yaml\n{% include 'template-v2-literal-example.yaml' %}\n```\n\n### Extract Keys and Certificates from PKCS#12 Archive\n\nYou can use pre-defined functions to extract data from your secrets. Here: extract keys and certificates from a PKCS#12 archive and store them as PEM.\n\n```yaml\n{% include 'pkcs12-template-v2-external-secret.yaml' %}\n```\n\n### Extract from JWK\n\nYou can extract the public or private key parts of a JWK and use them as a [PKCS#8](https:\/\/pkg.go.dev\/crypto\/x509#ParsePKCS8PrivateKey) private key or a PEM-encoded [PKIX](https:\/\/pkg.go.dev\/crypto\/x509#MarshalPKIXPublicKey) public key.\n\nA JWK looks similar to this:\n\n```json\n{\n  \"kty\": \"RSA\",\n  \"kid\": \"cc34c0a0-bd5a-4a3c-a50d-a2a7db7643df\",\n  \"use\": \"sig\",\n  \"n\": \"pjdss...\",\n  \"e\": \"AQAB\"\n  \/\/ ...\n}\n```\n\nAnd what you want may be a PEM-encoded public or private key portion of it. 
Take a look at this example on how to transform it into the desired format:\n\n```yaml\n{% include 'jwk-template-v2-external-secret.yaml' %}\n```\n\n### Filter PEM blocks\n\nConsider a secret that contains both a certificate and a private key encoded in PEM format, where your goal is to use only the certificate from that secret.\n\n```\n-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCvxGZOW4IXvGlh\n . . .\nm8JCpbJXDfSSVxKHgK1Siw4K6pnTsIA2e\/Z+Ha2fvtocERjq7VQMAJFaIZSTKo9Q\nJwwY+vj0yxWjyzHUzZB33tg=\n-----END PRIVATE KEY-----\n-----BEGIN CERTIFICATE-----\nMIIDMDCCAhigAwIBAgIQabPaXuZCQaCg+eQAVptGGDANBgkqhkiG9w0BAQsFADAV\n . . .\nNtFUGA95RGN9s+pl6XY0YARPHf5O76ErC1OZtDTR5RdyQfcM+94gYZsexsXl0aQO\n9YD3Wg==\n-----END CERTIFICATE-----\n```\n\nYou can achieve that by using the `filterPEM` function to extract a specific type of PEM block from that secret. If multiple blocks of that type (here: `CERTIFICATE`) exist, all of them are returned in the order specified. To extract a specific type of PEM block, pass the type as a string argument to the `filterPEM` function. Take a look at this example of how to transform a secret which contains a private key and a certificate into the desired format:\n\n```yaml\n{% include 'filterpem-template-v2-external-secret.yaml' %}\n```\n\n## Templating with PushSecret\n\n`PushSecret` templating is much like `ExternalSecrets` templating. In fact, under the hood it uses the same data structure.\nThis means anything described above is possible with PushSecret as well, resulting in a templated secret\ncreated at the provider.\n\n```yaml\n{% include 'template-v2-push-secret.yaml' %}\n```\n\n## Helper functions\n\n!!! info inline end\n\n    Note: we removed `env` and `expandenv` from sprig functions for security reasons.\n\nWe provide a couple of convenience functions that help you transform your secrets. 
This is useful when dealing with PKCS#12 archives or JSON Web Keys (JWK).\n\nIn addition, you can use over 200 [sprig functions](http:\/\/masterminds.github.io\/sprig\/). If you feel a function is missing or might be valuable, feel free to open an issue and submit a [pull request](..\/contributing\/process.md#submitting-a-pull-request).\n\n<br\/>\n\n| Function | Description |\n| --- | --- |\n| pkcs12key | Extracts all private keys from a PKCS#12 archive and encodes them in **PKCS#8 PEM** format. |\n| pkcs12keyPass | Same as `pkcs12key`. Uses the provided password to decrypt the PKCS#12 archive. |\n| pkcs12cert | Extracts all certificates from a PKCS#12 archive and orders them if possible. If disjunct or multiple leaf certs are provided they are returned as-is. <br\/> Sort order: `leaf \/ intermediate(s) \/ root`. |\n| pkcs12certPass | Same as `pkcs12cert`. Uses the provided password to decrypt the PKCS#12 archive. |\n| pemToPkcs12 | Takes a PEM-encoded certificate and key and creates a base64-encoded PKCS#12 archive. |\n| pemToPkcs12Pass | Same as `pemToPkcs12`. Uses the provided password to encrypt the PKCS#12 archive. |\n| fullPemToPkcs12 | Takes a PEM-encoded certificate chain and key and creates a base64-encoded PKCS#12 archive. |\n| fullPemToPkcs12Pass | Same as `fullPemToPkcs12`. Uses the provided password to encrypt the PKCS#12 archive. |\n| filterPEM | Filters PEM blocks with a specific type from a list of PEM blocks. |\n| jwkPublicKeyPem | Takes a JSON-serialized JWK and returns a PEM block of type `PUBLIC KEY` that contains the public key. [See here](https:\/\/golang.org\/pkg\/crypto\/x509\/#MarshalPKIXPublicKey) for details. |\n| jwkPrivateKeyPem | Takes a JSON-serialized JWK as `string` and returns a PEM block of type `PRIVATE KEY` that contains the private key in PKCS#8 format. [See here](https:\/\/golang.org\/pkg\/crypto\/x509\/#MarshalPKCS8PrivateKey) for details. |\n| toYaml | Takes an interface and marshals it to YAML. It returns a string, even on marshal error (empty string). |\n| fromYaml | Converts a YAML document into a `map[string]any`. |\n\n## Migrating from v1\n\nIf you are still using `v1alpha1`, you have to opt in to the new engine version by specifying `template.engineVersion=v2`:\n\n```yaml\napiVersion: external-secrets.io\/v1alpha1\nkind: ExternalSecret\nmetadata:\n  name: secret\nspec:\n  # ...\n  target:\n    template:\n      engineVersion: v2\n  # ...\n```\n\nThe biggest change is that basically all function parameter types were changed from accepting\/returning `[]byte` to `string`. This is relevant because you no longer need to append `toString` at the end of a template pipeline.\n\n```yaml\n{% raw %}\napiVersion: external-secrets.io\/v1alpha1\nkind: ExternalSecret\n# ...\nspec:\n  target:\n    template:\n      engineVersion: v2\n      data:\n        # this used to be \n        egg: \"new: \"\n{% endraw %}\n```\n\n##### Functions removed\/replaced\n\n- `base64encode` was renamed to `b64enc`.\n- `base64decode` was renamed to `b64dec`. Any errors that occur during decoding are silenced.\n- `fromJSON` was renamed to `fromJson`. Any errors that occur during unmarshalling are silenced.\n- `toJSON` was renamed to `toJson`. Any errors that occur during marshalling are silenced.\n- `pkcs12key` and `pkcs12keyPass` encode the PKCS#8 key directly into PEM format. There is no need to call `pemPrivateKey` anymore. Also, these functions now extract **all private keys** from the PKCS#12 archive, not just the first one.\n- `pkcs12cert` and `pkcs12certPass` encode the certs directly into PEM format. There is no need to call `pemCertificate` anymore. These functions now **extract all certificates** from the PKCS#12 archive, not just the first one.\n- The `toString` implementation was replaced by the `sprig` implementation and should be API-compatible.\n- `toBytes` was removed.\n- `pemPrivateKey` was removed. It's now implemented within the `pkcs12*` functions.\n- `pemCertificate` was removed. 
It's now implemented within the `pkcs12*` functions.","site":"external secrets"}
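Putting the v2 templating pieces above together, here is a minimal illustrative sketch (not one of the project's bundled examples): it renders a small config file from two fetched values using a sprig function. The store name `example-store` and the remote key `path/to/secret` are hypothetical, and field names may vary by API version:

```yaml
{% raw %}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: templated-example
spec:
  secretStoreRef:
    kind: SecretStore
    name: example-store          # hypothetical store
  target:
    name: rendered-secret
    template:
      engineVersion: v2
      data:
        # render a config file from the fetched values;
        # `quote` comes from sprig
        config.yaml: |
          username: {{ .username }}
          password: {{ .password | quote }}
  data:
    - secretKey: username
      remoteRef:
        key: path/to/secret      # hypothetical provider key
        property: username
    - secretKey: password
      remoteRef:
        key: path/to/secret
        property: password
{% endraw %}
```

The `{% raw %}` wrapper follows the convention used elsewhere on this page to keep the site renderer from evaluating the template expressions.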
{"questions":"external secrets In order to do so it is possible to define a set of rewrite operations using These operations can be stacked hence allowing complex manipulations of the secret keys When calling out an ExternalSecret with or it is possible that you end up with a kubernetes secret that has conflicts in the key names or that you simply want to remove a common path from the secret keys Rewriting Keys in DataFrom Methods Rewrite operations are all applied before is applied","answers":"# Rewriting Keys in DataFrom\n\nWhen declaring an ExternalSecret with `dataFrom.extract` or `dataFrom.find`, you may end up with a Kubernetes Secret that has conflicting key names, or you may simply want to remove a common path from the secret keys.\n\nIn order to do so, it is possible to define a set of rewrite operations using `dataFrom.rewrite`. These operations can be stacked, hence allowing complex manipulations of the secret keys.\n\nRewrite operations are all applied before `ConversionStrategy` is applied.\n\n## Methods\n\n### Regexp\nThis method implements rewriting through the use of regular expressions. It needs a `source` and a `target` field. The `source` field is where the matching regular expression goes, while the `target` field is where the replacement expression goes.\n\nSome considerations about the implementation of Regexp Rewrite:\n\n1. The input of a subsequent rewrite operation is the output of the previous rewrite.\n2. If a given set of keys does not match any rewrite operation, there will be no error; rather, the original keys will be used.\n3. 
If a `source` is not a compilable `regexp` expression, an error will be produced and the external secret goes into an error state.\n\n## Examples\n### Removing a common path from find operations\nThe following ExternalSecret:\n```yaml\n{% include 'datafrom-rewrite-remove-path.yaml' %}\n```\nwill get all the secrets matching `path\/to\/my\/secrets\/*` and then rewrite them by removing the common path.\n\nIn this example, if we had the following secrets available in the provider:\n```\npath\/to\/my\/secrets\/username\npath\/to\/my\/secrets\/password\n```\nthe output Kubernetes Secret would be:\n```yaml\napiVersion: v1\nkind: Secret\ntype: Opaque\ndata:\n    username: ...\n    password: ...\n```\n### Avoiding key collisions\nThe following ExternalSecret:\n```yaml\n{% include 'datafrom-rewrite-conflict.yaml' %}\n```\nwill allow two secrets with the same JSON keys to be imported into a Kubernetes Secret without any conflict.\nIn this example, if we had the following secrets available in the provider:\n```json\n{\n    \"my-secrets-dev\": {\n        \"password\": \"bar\"\n    },\n    \"my-secrets-prod\": {\n        \"password\": \"safebar\"\n    }\n}\n```\nthe output Kubernetes Secret would be:\n```yaml\napiVersion: v1\nkind: Secret\ntype: Opaque\ndata:\n    dev_password: YmFy #bar\n    prod_password: c2FmZWJhcg== #safebar\n```\n\n### Remove invalid characters\nThe following ExternalSecret:\n```yaml\n{% include 'datafrom-rewrite-invalid-characters.yaml' %}\n```\nwill remove invalid characters from the secret key.\nIn this example, if we had the following secrets available in the provider:\n```json\n{\n    \"development\": {\n        \"foo\/bar\": \"1111\",\n        \"foo$baz\": \"2222\"\n    }\n}\n```\nthe output Kubernetes Secret would be:\n```yaml\napiVersion: v1\nkind: Secret\ntype: Opaque\ndata:\n    foo_bar: MTExMQ== #1111\n    foo_baz: MjIyMg== #2222\n```\n\n## Limitations\n\nRegexp Rewrite is based on golang `regexp`, which in turn implements the `RE2` 
regexp language. There are a number of known limitations to this implementation, such as:\n\n* Lack of ability to do lookaheads or lookbehinds;\n* Lack of negation expressions;\n* Lack of support for conditional branches;\n* Lack of support for possessive repetitions.\n\nA list of compatibility notes and known limitations relative to other commonly used regexp frameworks (such as PCRE and PERL) is available [here](https:\/\/github.com\/google\/re2\/wiki\/Syntax).","site":"external secrets"}
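As an illustrative sketch of the stacking behavior described above (the store name and secret paths are hypothetical, not taken from the project's bundled examples), two `rewrite` operations can be chained, with the second operating on the output of the first:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: rewrite-example
spec:
  secretStoreRef:
    kind: SecretStore
    name: example-store                    # hypothetical store
  target:
    name: rewritten-secret
  dataFrom:
    - find:
        name:
          regexp: "path/to/my/secrets/.*"  # hypothetical path
      rewrite:
        # first operation: strip the common path
        - regexp:
            source: "path/to/my/secrets/(.*)"
            target: "$1"
        # second operation: runs on the output of the first,
        # replacing dashes with underscores
        - regexp:
            source: "-"
            target: "_"
```

With this sketch, a provider key such as `path/to/my/secrets/service-token` would end up as the Secret key `service_token`.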
{"questions":"external secrets Advantages created to get credentials with no manual intervention from the beginning FluxCD is a GitOps operator for Kubernetes It synchronizes the status of the cluster from manifests allocated in GitOps using FluxCD v2 This approach has several advantages as follows different repositories Git or Helm This approach fits perfectly with External Secrets on clusters which are dynamically","answers":"# GitOps using FluxCD (v2)\n\nFluxCD is a GitOps operator for Kubernetes. It synchronizes the status of the cluster from manifests stored in\ndifferent repositories (Git or Helm). This approach fits perfectly with External Secrets on clusters which are dynamically\ncreated, to get credentials with no manual intervention from the beginning.\n\n## Advantages\n\nThis approach has several advantages:\n\n* **Homogenize environments**, allowing developers to use the same toolset in Kind in the same way they do in cloud\n  provider distributions such as EKS or GKE. This accelerates development.\n* **Reduce security risks**, because credentials can be easily obtained, so the temptation to store them locally is reduced.\n* **Increase application compatibility**: applications are deployed in different ways, and sometimes they need to share\n  credentials. This can be done using External Secrets as a conduit for them in real time.\n* **Automation by default** (oh, come on!)\n\n## The approach\n\nFluxCD is composed of several controllers dedicated to managing different custom resources. The most important\nones are **Kustomization** (to clarify, the Flux one, not the Kubernetes one) and **HelmRelease**, which deploy using the approaches\nof the same names.\n\nExternal Secrets can be deployed using Helm [as explained here](..\/introduction\/getting-started.md). 
The deployment includes the\nCRDs if enabled in the `values.yaml`, but after this, you need to deploy some `SecretStore` to start\ngetting credentials from your secrets manager with External Secrets.\n\n> The idea of this guide is to use Flux to deploy the whole stack, so that developers do not need to worry about the credentials,\n> but only about the application and its code.\n\n## The problem\n\nThis may sound easy, but External Secrets is deployed using Helm, which is managed by the HelmController,\nand your custom resources, for example a `ClusterSecretStore` and the related `Secret`, are often deployed using a\n`kustomization.yaml`, which is deployed by the KustomizeController.\n\nBoth controllers manage the resources independently, at different moments, with no way to wait for each other.\nThis means we have a wonderful race condition where the CRs (`SecretStore`, `ClusterSecretStore`...) sometimes try\nto be deployed before the CRDs needed to recognize them.\n\n## The solution\n\nLet's see the conditions to start working on a solution:\n\n* The External Secrets operator is deployed with Helm, and allows disabling the CRD deployment\n* The race condition only affects the deployment of `CustomResourceDefinition` and the CRs needed later\n* CRDs can be deployed directly from the Git repository of the project using a Flux `Kustomization`\n* Required CRs can be deployed using a Flux `Kustomization` too, allowing a dependency between CRDs and CRs\n* All previous manifests can be applied with a Kubernetes `kustomization`\n\n## Create the main kustomization\n\nTo have a better view of things needed later, the first manifest to be created is the `kustomization.yaml`:\n\n```yaml\n{% include 'gitops\/kustomization.yaml' %}\n```\n\n## Create the secret\n\nTo access your secret manager, External Secrets needs some credentials. They are stored inside a Secret, which is intended\nto be deployed by automation as a good practice. 
This time, a placeholder called `secret-token.yaml` is shown as an example:\n\n```yaml\n# The namespace.yaml first\n{% include 'gitops\/namespace.yaml' %}\n```\n\n```yaml\n{% include 'gitops\/secret-token.yaml' %}\n```\n\n## Creating the references to repositories\n\nCreate a manifest called `repositories.yaml` to store the references to external repositories for Flux:\n\n```yaml\n{% include 'gitops\/repositories.yaml' %}\n```\n\n## Deploy the CRDs\n\nAs mentioned, CRDs can be deployed using the official Helm package, but to solve the race condition, they will be deployed\nfrom our Git repository using a Kustomization manifest called `deployment-crds.yaml`, as follows:\n\n```yaml\n{% include 'gitops\/deployment-crds.yaml' %}\n```\n\n## Deploy the operator\n\nThe operator is deployed using a HelmRelease manifest to deploy the Helm package, but due to the special race condition,\nthe CRDs deployment must be disabled in the `values` of the manifest called `deployment.yaml`, as follows:\n\n```yaml\n{% include 'gitops\/deployment.yaml' %}\n```\n\n## Deploy the CRs\n\nNow, be ready for the arcane magic. Create a Kustomization manifest called `deployment-crs.yaml` with the following content:\n\n```yaml\n{% include 'gitops\/deployment-crs.yaml' %}\n```\n\nThere are several interesting details to see here that finally solve the race condition:\n\n1. The first is the field `dependsOn`, which points to a previous Kustomization called `external-secrets-crds`. This\n   dependency forces this deployment to wait for the other to be ready before it starts being deployed.\n2. 
The reference to the place where the CRs are found:\n   ```yaml\n   path: .\/infrastructure\/external-secrets\/crs\n   sourceRef:\n    kind: GitRepository\n    name: flux-system\n   ```\n   Custom Resources will be searched for in the relative path `.\/infrastructure\/external-secrets\/crs` of the GitRepository\n   called `flux-system`, which is a reference to the same repository that FluxCD watches to synchronize the cluster.\n   In other words, a reference to itself, but pointing to another directory called `crs`\n\nOf course, place inside the mentioned path `.\/infrastructure\/external-secrets\/crs` all the desired CRs to be deployed,\nfor example, a manifest `clusterSecretStore.yaml` to reach your HashiCorp Vault, as follows:\n\n```yaml\n{% include 'gitops\/crs\/clusterSecretStore.yaml' %}\n```\n\n## Results\n\nIn the end, the required file tree is shown in the following picture:\n\n![FluxCD files tree](..\/pictures\/screenshot_gitops_final_directory_tree.png)","site":"external secrets"}
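The manifests referenced by the `{% include %}` placeholders are project-specific, but the two Flux Kustomizations the guide describes can be sketched as follows. This is a minimal sketch: the `external-secrets-crds` name, the `./infrastructure/external-secrets/crs` path and the `flux-system` GitRepository come from the text above, while the `crds` path, the namespace and the intervals are assumptions to adapt:

```yaml
# deployment-crds.yaml: deploy the CRDs straight from the Git repository.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: external-secrets-crds
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  path: ./infrastructure/external-secrets/crds  # assumed location of the CRD manifests
  sourceRef:
    kind: GitRepository
    name: flux-system
---
# deployment-crs.yaml: wait for the CRDs before applying the CRs.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: external-secrets-crs
  namespace: flux-system
spec:
  dependsOn:
    - name: external-secrets-crds  # forces ordering and breaks the race condition
  interval: 10m
  prune: true
  path: ./infrastructure/external-secrets/crs
  sourceRef:
    kind: GitRepository
    name: flux-system
```

The `dependsOn` field makes the kustomize-controller hold the second Kustomization until the first reports Ready, which is exactly the ordering guarantee the Helm and Kustomize controllers cannot give each other on their own.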
{"questions":"external secrets Bitwarden support using webhook provider How does it work External Secrets Operator 0 8 0 Multiple Cluster SecretStores using the webhook provider To make external secrets compatible with Bitwarden we need Bitwarden is an integrated open source password management solution for individuals teams and business organizations","answers":"# Bitwarden support using webhook provider\n\nBitwarden is an integrated open source password management solution for individuals, teams, and business organizations.\n\n## How does it work?\n\nTo make external-secrets compatible with Bitwarden, we need:\n\n* External Secrets Operator >= 0.8.0\n* Multiple (Cluster)SecretStores using the webhook provider\n* BitWarden CLI image running `bw serve`\n\nWhen you create a new external-secret object, the External Secrets webhook provider will query the Bitwarden CLI pod that is synced with the Bitwarden server.\n\n## Requirements\n\n* Bitwarden account (it also works with Vaultwarden!)\n* A Kubernetes secret which contains your Bitwarden credentials\n* A Docker image running the Bitwarden CLI. 
You could use `ghcr.io\/charlesthomas\/bitwarden-cli:2023.12.1` or build your own.\n\nHere is an example of a Dockerfile used to build the image:\n```dockerfile\nFROM debian:sid\n\nENV BW_CLI_VERSION=2023.12.1\n\nRUN apt update && \\\n    apt install -y wget unzip && \\\n    wget https:\/\/github.com\/bitwarden\/clients\/releases\/download\/cli-v${BW_CLI_VERSION}\/bw-linux-${BW_CLI_VERSION}.zip && \\\n    unzip bw-linux-${BW_CLI_VERSION}.zip && \\\n    chmod +x bw && \\\n    mv bw \/usr\/local\/bin\/bw && \\\n    rm -rfv *.zip\n\nCOPY entrypoint.sh \/\n\nCMD [\"\/entrypoint.sh\"]\n```\n\nAnd the content of `entrypoint.sh`:\n```bash\n#!\/bin\/bash\n\nset -e\n\nbw config server ${BW_HOST}\n\nexport BW_SESSION=$(bw login ${BW_USER} --passwordenv BW_PASSWORD --raw)\n\nbw unlock --check\n\necho 'Running `bw serve` on port 8087'\nbw serve --hostname 0.0.0.0 #--disable-origin-protection\n```\n\n## Deploy Bitwarden credentials\n\n```yaml\n{% include 'bitwarden-cli-secrets.yaml' %}\n```\n\n## Deploy Bitwarden CLI container\n\n```yaml\n{% include 'bitwarden-cli-deployment.yaml' %}\n```\n\n> NOTE: Deploying a network policy is recommended since there is no authentication to query the Bitwarden CLI, which means that your secrets are exposed.\n\n> NOTE: In this example the Liveness probe is querying \/sync to ensure that the Bitwarden CLI is able to connect to the server and is also synchronised. (The secret sync is only every 2 minutes in this example)\n\n## Deploy (Cluster)SecretStores\n\nThere are four possible (Cluster)SecretStores to deploy; each can access different types of fields from an item in the Bitwarden vault. 
It is not required to deploy them all.\n\n```yaml\n{% include 'bitwarden-secret-store.yaml' %}\n```\n\n## Usage\n\n(Cluster)SecretStores:\n\n* `bitwarden-login`: Use to get the `username` or `password` fields\n* `bitwarden-fields`: Use to get custom fields\n* `bitwarden-notes`: Use to get notes\n* `bitwarden-attachments`: Use to get attachments\n\nremoteRef:\n\n* `key`: ID of a secret, which can be found in the URL `itemId` parameter:\n  `https:\/\/myvault.com\/#\/vault?type=login&itemId=........-....-....-....-............`\n\n* `property`: Name of the field to access\n    * `username` for the username of a secret (`bitwarden-login` SecretStore)\n    * `password` for the password of a secret (`bitwarden-login` SecretStore)\n    * `name_of_the_custom_field` for any custom field (`bitwarden-fields` SecretStore)\n    * `id_or_name_of_the_attachment` for any attachment (`bitwarden-attachments` SecretStore)\n\n```yaml\n{% include 'bitwarden-secret.yaml' %}\n```","site":"external secrets"}
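The `{% include 'bitwarden-secret.yaml' %}` placeholder above hides the actual manifest, but given the usage rules just listed, an ExternalSecret consuming one of these stores could look roughly like this. A hedged sketch: the store name `bitwarden-login` and the `key`/`property` semantics come from the usage list, while the secret name, refresh interval and itemId are illustrative placeholders:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials        # hypothetical name
spec:
  refreshInterval: 2m               # matches the 2-minute sync cadence of the example
  secretStoreRef:
    kind: ClusterSecretStore
    name: bitwarden-login           # the store that exposes username/password fields
  target:
    name: database-credentials      # Kubernetes Secret to create
  data:
    - secretKey: password
      remoteRef:
        key: 00000000-0000-0000-0000-000000000000  # the Bitwarden itemId from the vault URL
        property: password          # 'username' would fetch the username instead
```

Swapping `secretStoreRef.name` for `bitwarden-fields`, `bitwarden-notes` or `bitwarden-attachments` changes which kind of item field `property` selects, as described in the usage list.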
{"questions":"external secrets Metrics hide toc If you are using a different monitoring tool that also needs a endpoint you can set the Helm flag to In addition you can also set and to scrape the other components The External Secrets Operator exposes its Prometheus metrics in the path To enable it set the Helm flag to","answers":"---\nhide:\n  - toc\n---\n\n# Metrics\n\nThe External Secrets Operator exposes its Prometheus metrics in the `\/metrics` path. To enable it, set the `serviceMonitor.enabled` Helm flag to `true`.\n\nIf you are using a different monitoring tool that also needs a `\/metrics` endpoint, you can set the `metrics.service.enabled` Helm flag to `true`. In addition you can also set `webhook.metrics.service.enabled` and `certController.metrics.service.enabled` to scrape the other components.\n\nThe Operator has [the controller-runtime metrics inherited from kubebuilder](https:\/\/book.kubebuilder.io\/reference\/metrics-reference.html) plus some custom metrics with a resource name prefix, such as `externalsecret_`.\n\n## Cluster External Secret Metrics\n| Name                                       | Type  | Description                                                |\n|--------------------------------------------|-------|------------------------------------------------------------|\n| `clusterexternalsecret_status_condition`   | Gauge | The status condition of a specific Cluster External Secret |\n| `clusterexternalsecret_reconcile_duration` | Gauge | The duration time to reconcile the Cluster External Secret |\n\n## External Secret Metrics\n| Name                                           | Type      | Description                                                                                                                                                                                                             
|\n|------------------------------------------------|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `externalsecret_provider_api_calls_count`      | Counter   | Number of API calls made to an upstream secret provider API. The metric provides a `provider`, `call` and `status` labels.                                                                                              |\n| `externalsecret_sync_calls_total`              | Counter   | Total number of the External Secret sync calls                                                                                                                                                                          |\n| `externalsecret_sync_calls_error`              | Counter   | Total number of the External Secret sync errors                                                                                                                                                                         |\n| `externalsecret_status_condition`              | Gauge     | The status condition of a specific External Secret                                                                                                                                                                      |\n| `externalsecret_reconcile_duration`            | Gauge     | The duration time to reconcile the External Secret                                                                                                                                                                      |\n\n## Cluster Secret Store Metrics\n| Name                                    | Type  | Description                                             |\n|-----------------------------------------|-------|---------------------------------------------------------|\n| `clustersecretstore_status_condition`   | 
Gauge | The status condition of a specific Cluster Secret Store |\n| `clustersecretstore_reconcile_duration` | Gauge | The duration time to reconcile the Cluster Secret Store |\n\n# Secret Store Metrics\n| Name                             | Type  | Description                                     |\n|----------------------------------|-------|-------------------------------------------------|\n| `secretstore_status_condition`   | Gauge | The status condition of a specific Secret Store |\n| `secretstore_reconcile_duration` | Gauge | The duration time to reconcile the Secret Store |\n\n## Controller Runtime Metrics\nSee [the kubebuilder documentation](https:\/\/book.kubebuilder.io\/reference\/metrics-reference.html) on the default exported metrics by controller-runtime.\n\n## Dashboard\n\nWe provide a [Grafana Dashboard](https:\/\/raw.githubusercontent.com\/external-secrets\/external-secrets\/main\/docs\/snippets\/dashboard.json) that gives you an overview of External Secrets Operator:\n\n![ESO Dashboard](..\/pictures\/eso-dashboard-1.png)\n![ESO Dashboard](..\/pictures\/eso-dashboard-2.png)\n\n\n## Service Level Indicators and Alerts\n\nWe find the following Service Level Indicators (SLIs) useful when operating ESO. 
They should give you a good starting point and hints to develop your own Service Level Objectives (SLOs).\n\n#### Webhook HTTP Status Codes\nThe webhook HTTP status code indicates whether an HTTP request was answered successfully.\nIf the Webhook pod is not able to serve the requests properly then that failure may cascade down to the controller or any other user of `kube-apiserver`.\n\nSLI Example: request error percentage.\n```\nsum(increase(controller_runtime_webhook_requests_total{service=~\"external-secrets.*\",code=\"500\"}[1m]))\n\/\nsum(increase(controller_runtime_webhook_requests_total{service=~\"external-secrets.*\"}[1m]))\n```\n\n#### Webhook HTTP Request Latency\nIf the webhook server is not able to respond in time then that may cause a timeout at the client.\nThis failure may cascade down to the controller or any other user of `kube-apiserver`.\n\nSLI Example: p99 across all webhook requests.\n```\nhistogram_quantile(0.99,\n  sum(rate(controller_runtime_webhook_latency_seconds_bucket{service=~\"external-secrets.*\"}[5m])) by (le)\n)\n```\n\n#### Controller Workqueue Depth\nIf the workqueue depth is > 0 for an extended period of time, this is an indicator that the controller is not able to reconcile resources in time, i.e. delivery of secret updates is delayed.\n\nNote: when a controller is restarted, then `queue length = total number of resources`. Make sure to measure the time it takes for the controller to fully reconcile all secrets after a restart. In large clusters this may take a while; make sure to define an acceptable timeframe to fully reconcile all resources.\n\n```\nsum(\n  workqueue_depth{service=~\"external-secrets.*\"}\n) by (name)\n```\n\n#### Controller Reconcile Latency\nThe controller should be able to reconcile resources within a reasonable timeframe. 
When latency is high, secret delivery may be impacted.\n\nSLI Example: p99 across all controllers.\n```\nhistogram_quantile(0.99,\n  sum(rate(controller_runtime_reconcile_time_seconds_bucket{service=~\"external-secrets.*\"}[5m])) by (le)\n)\n```\n\n#### Controller Reconcile Error\nThe controller should be able to reconcile resources without errors. When errors occur, secret delivery may be impacted, which could cascade down to the secret-consuming applications.\n\n```\nsum(increase(\n  controller_runtime_reconcile_total{service=~\"external-secrets.*\",controller=~\"$controller\",result=\"error\"}[1m])\n) by (result)\n```","site":"external secrets"}
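The SLIs above can be turned into alerts with a Prometheus rule group. A minimal sketch using the `externalsecret_sync_calls_error` counter from the metrics table; the `name`/`namespace` label set, the threshold and the `for` duration are assumptions to adapt to your environment:

```yaml
groups:
  - name: external-secrets
    rules:
      - alert: ExternalSecretSyncErrors
        # Fires when an ExternalSecret keeps reporting sync errors for 10 minutes.
        expr: sum(increase(externalsecret_sync_calls_error[5m])) by (name, namespace) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "ExternalSecret {{ $labels.namespace }}/{{ $labels.name }} is failing to sync"
```

A similar rule over the webhook error-percentage SLI shown earlier would cover the request path as well.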
{"questions":"external secrets a href external secrets io 2fv1beta1 external secrets io v1beta1 a p p Package v1beta1 contains resources for external secrets p p Packages p h2 id external secrets io v1beta1 external secrets io v1beta1 h2 ul li","answers":"<p>Packages:<\/p>\n<ul>\n<li>\n<a href=\"#external-secrets.io%2fv1beta1\">external-secrets.io\/v1beta1<\/a>\n<\/li>\n<\/ul>\n<h2 id=\"external-secrets.io\/v1beta1\">external-secrets.io\/v1beta1<\/h2>\n<p>\n<p>Package v1beta1 contains resources for external-secrets<\/p>\n<\/p>\nResource Types:\n<ul><\/ul>\n<h3 id=\"external-secrets.io\/v1beta1.AWSAuth\">AWSAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AWSProvider\">AWSProvider<\/a>)\n<\/p>\n<p>\n<p>AWSAuth tells the controller how to do authentication with aws.\nOnly one of secretRef or jwt can be specified.\nif none is specified the controller will load credentials using the aws sdk defaults.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AWSAuthSecretRef\">\nAWSAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>jwt<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AWSJWTAuth\">\nAWSJWTAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AWSAuthSecretRef\">AWSAuthSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AWSAuth\">AWSAuth<\/a>)\n<\/p>\n<p>\n<p>AWSAuthSecretRef holds secret references for AWS credentials\nboth AccessKeyID and SecretAccessKey must be defined in order to properly authenticate.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>accessKeyIDSecretRef<\/code><\/br>\n<em>\n<a 
href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The AccessKeyID is used for authentication<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretAccessKeySecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The SecretAccessKey is used for authentication<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sessionTokenSecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The SessionToken used for authentication\nThis must be defined if AccessKeyID and SecretAccessKey are temporary credentials\nsee: <a href=\"https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_credentials_temp_use-resources.html\">https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_credentials_temp_use-resources.html<\/a><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AWSJWTAuth\">AWSJWTAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AWSAuth\">AWSAuth<\/a>)\n<\/p>\n<p>\n<p>Authenticate against AWS using service account tokens.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>serviceAccountRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#ServiceAccountSelector\">\nExternal Secrets meta\/v1.ServiceAccountSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AWSProvider\">AWSProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>AWSProvider configures a store to sync secrets with AWS.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>service<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AWSServiceType\">\nAWSServiceType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Service defines which service should be used to fetch the secrets<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AWSAuth\">\nAWSAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Auth defines the information necessary to authenticate against AWS\nif not set aws sdk will infer credentials from your environment\nsee: <a href=\"https:\/\/docs.aws.amazon.com\/sdk-for-go\/v1\/developer-guide\/configuring-sdk.html#specifying-credentials\">https:\/\/docs.aws.amazon.com\/sdk-for-go\/v1\/developer-guide\/configuring-sdk.html#specifying-credentials<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>role<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Role is a Role ARN which the provider will assume<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>region<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>AWS Region to be used for the provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>additionalRoles<\/code><\/br>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>AdditionalRoles is a chained list of Role ARNs which the provider will sequentially assume before assuming the Role<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>externalID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>AWS External ID set on assumed IAM roles<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sessionTags<\/code><\/br>\n<em>\n<a 
href=\"#external-secrets.io\/v1beta1.*github.com\/external-secrets\/external-secrets\/apis\/externalsecrets\/v1beta1.Tag\">\n[]*github.com\/external-secrets\/external-secrets\/apis\/externalsecrets\/v1beta1.Tag\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>AWS STS assume role session tags<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretsManager<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretsManager\">\nSecretsManager\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecretsManager defines how the provider behaves when interacting with AWS SecretsManager<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>transitiveTagKeys<\/code><\/br>\n<em>\n[]*string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>AWS STS assume role transitive session tags. Required when multiple roles are used with the provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>prefix<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Prefix adds a prefix to all retrieved values.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AWSServiceType\">AWSServiceType\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AWSProvider\">AWSProvider<\/a>)\n<\/p>\n<p>\n<p>AWSServiceType is an enum that defines the service\/API that is used to fetch the secrets.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;ParameterStore&#34;<\/p><\/td>\n<td><p>AWSServiceParameterStore is the AWS SystemsManager ParameterStore service.\nsee: <a href=\"https:\/\/docs.aws.amazon.com\/systems-manager\/latest\/userguide\/systems-manager-parameter-store.html\">https:\/\/docs.aws.amazon.com\/systems-manager\/latest\/userguide\/systems-manager-parameter-store.html<\/a><\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;SecretsManager&#34;<\/p><\/td>\n<td><p>AWSServiceSecretsManager is the AWS SecretsManager service.\nsee: <a 
href=\"https:\/\/docs.aws.amazon.com\/secretsmanager\/latest\/userguide\/intro.html\">https:\/\/docs.aws.amazon.com\/secretsmanager\/latest\/userguide\/intro.html<\/a><\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AkeylessAuth\">AkeylessAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AkeylessProvider\">AkeylessProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AkeylessAuthSecretRef\">\nAkeylessAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Reference to a Secret that contains the details\nto authenticate with Akeyless.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kubernetesAuth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AkeylessKubernetesAuth\">\nAkeylessKubernetesAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Kubernetes authenticates with Akeyless by passing the ServiceAccount\ntoken stored in the named Secret resource.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AkeylessAuthSecretRef\">AkeylessAuthSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AkeylessAuth\">AkeylessAuth<\/a>)\n<\/p>\n<p>\n<p>AkeylessAuthSecretRef\nAKEYLESS_ACCESS_TYPE_PARAM: AZURE_OBJ_ID OR GCP_AUDIENCE OR ACCESS_KEY OR KUB_CONFIG_NAME.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>accessID<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The SecretAccessID is used for authentication<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>accessType<\/code><\/br>\n<em>\n<a 
href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>accessTypeParam<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AkeylessKubernetesAuth\">AkeylessKubernetesAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AkeylessAuth\">AkeylessAuth<\/a>)\n<\/p>\n<p>\n<p>Authenticate with a stored Kubernetes ServiceAccount token.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>accessID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>the Akeyless Kubernetes auth-method access-id<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>k8sConfName<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Kubernetes-auth configuration name in Akeyless-Gateway<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#ServiceAccountSelector\">\nExternal Secrets meta\/v1.ServiceAccountSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional service account field containing the name of a kubernetes ServiceAccount.\nIf the service account is specified, the service account secret token JWT will be used\nfor authenticating with Akeyless. 
If the service account selector is not supplied,\nthe secretRef will be used instead.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional secret field containing a Kubernetes ServiceAccount JWT used\nfor authenticating with Akeyless. If a name is specified without a key,\n<code>token<\/code> is the default. If one is not specified, the one bound to\nthe controller will be used.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AkeylessProvider\">AkeylessProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>AkeylessProvider configures a store to sync secrets using Akeyless KV.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>akeylessGWApiURL<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Akeyless GW API URL from which the secrets are to be fetched.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>authSecretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AkeylessAuth\">\nAkeylessAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth configures how the operator authenticates with Akeyless.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caBundle<\/code><\/br>\n<em>\n[]byte\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PEM\/base64 encoded CA bundle used to validate Akeyless Gateway certificate. Only used\nif the AkeylessGWApiURL URL is using HTTPS protocol. 
If not set the system root certificates\nare used to validate the TLS connection.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caProvider<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.CAProvider\">\nCAProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The provider for the CA bundle to use to validate Akeyless Gateway certificate.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AlibabaAuth\">AlibabaAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AlibabaProvider\">AlibabaProvider<\/a>)\n<\/p>\n<p>\n<p>AlibabaAuth contains a secretRef for credentials.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AlibabaAuthSecretRef\">\nAlibabaAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>rrsa<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AlibabaRRSAAuth\">\nAlibabaRRSAAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AlibabaAuthSecretRef\">AlibabaAuthSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AlibabaAuth\">AlibabaAuth<\/a>)\n<\/p>\n<p>\n<p>AlibabaAuthSecretRef holds secret references for Alibaba credentials.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>accessKeyIDSecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The AccessKeyID is used for authentication<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>accessKeySecretSecretRef<\/code><\/br>\n<em>\n<a 
href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The AccessKeySecret is used for authentication<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AlibabaProvider\">AlibabaProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>AlibabaProvider configures a store to sync secrets using the Alibaba Secret Manager provider.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AlibabaAuth\">\nAlibabaAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>regionID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Alibaba Region to be used for the provider<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AlibabaRRSAAuth\">AlibabaRRSAAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AlibabaAuth\">AlibabaAuth<\/a>)\n<\/p>\n<p>\n<p>Authenticate against Alibaba using RRSA.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>oidcProviderArn<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>oidcTokenFilePath<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>roleArn<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sessionName<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AzureAuthType\">AzureAuthType\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.AzureKVProvider\">AzureKVProvider<\/a>)\n<\/p>\n<p>\n<p>AuthType describes how to authenticate to the Azure Keyvault\nOnly one of the following auth types may be specified.\nIf none of the following auth type is specified, the default one\nis ServicePrincipal.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;ManagedIdentity&#34;<\/p><\/td>\n<td><p>Using Managed Identity to authenticate. Used with aad-pod-identity installed in the cluster.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;ServicePrincipal&#34;<\/p><\/td>\n<td><p>Using service principal to authenticate, which needs a tenantId, a clientId and a clientSecret.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;WorkloadIdentity&#34;<\/p><\/td>\n<td><p>Using Workload Identity service accounts to authenticate.<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AzureEnvironmentType\">AzureEnvironmentType\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AzureKVProvider\">AzureKVProvider<\/a>)\n<\/p>\n<p>\n<p>AzureEnvironmentType specifies the Azure cloud environment endpoints to use for\nconnecting and authenticating with Azure. 
By default it points to the public cloud AAD endpoint.\nThe following endpoints are available, also see here: <a href=\"https:\/\/github.com\/Azure\/go-autorest\/blob\/main\/autorest\/azure\/environments.go#L152\">https:\/\/github.com\/Azure\/go-autorest\/blob\/main\/autorest\/azure\/environments.go#L152<\/a>\nPublicCloud, USGovernmentCloud, ChinaCloud, GermanCloud<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;ChinaCloud&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;GermanCloud&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;PublicCloud&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;USGovernmentCloud&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AzureKVAuth\">AzureKVAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AzureKVProvider\">AzureKVProvider<\/a>)\n<\/p>\n<p>\n<p>Configuration used to authenticate with Azure.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>clientId<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The Azure clientId of the service principal or managed identity used for authentication.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tenantId<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The Azure tenantId of the managed identity used for authentication.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clientSecret<\/code><\/br>\n<em>\n<a 
href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The Azure ClientSecret of the service principal used for authentication.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clientCertificate<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The Azure ClientCertificate of the service principal used for authentication.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.AzureKVProvider\">AzureKVProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>Configures a store to sync secrets using Azure KV.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>authType<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AzureAuthType\">\nAzureAuthType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Auth type defines how to authenticate to the keyvault service.\nValid values are:\n- &ldquo;ServicePrincipal&rdquo; (default): Using a service principal (tenantId, clientId, clientSecret)\n- &ldquo;ManagedIdentity&rdquo;: Using Managed Identity assigned to the pod (see aad-pod-identity)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>vaultUrl<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Vault URL from which the secrets are to be fetched.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tenantId<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TenantID configures the Azure Tenant to send requests to. Required for ServicePrincipal auth type. 
Optional for WorkloadIdentity.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>environmentType<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AzureEnvironmentType\">\nAzureEnvironmentType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>EnvironmentType specifies the Azure cloud environment endpoints to use for\nconnecting and authenticating with Azure. By default it points to the public cloud AAD endpoint.\nThe following endpoints are available, also see here: <a href=\"https:\/\/github.com\/Azure\/go-autorest\/blob\/main\/autorest\/azure\/environments.go#L152\">https:\/\/github.com\/Azure\/go-autorest\/blob\/main\/autorest\/azure\/environments.go#L152<\/a>\nPublicCloud, USGovernmentCloud, ChinaCloud, GermanCloud<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>authSecretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AzureKVAuth\">\nAzureKVAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Auth configures how the operator authenticates with Azure. Required for ServicePrincipal auth type. 
Optional for WorkloadIdentity.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#ServiceAccountSelector\">\nExternal Secrets meta\/v1.ServiceAccountSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ServiceAccountRef specifies the service account\nthat should be used when authenticating with WorkloadIdentity.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>identityId<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>If multiple Managed Identities are assigned to the pod, you can select the one to be used<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.BeyondTrustProviderSecretRef\">BeyondTrustProviderSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.BeyondtrustAuth\">BeyondtrustAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>value<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Value can be specified directly to set a value without using a secret.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecretRef references a key in a secret that will be used as value.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.BeyondtrustAuth\">BeyondtrustAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.BeyondtrustProvider\">BeyondtrustProvider<\/a>)\n<\/p>\n<p>\n<p>Configures a store to sync secrets using BeyondTrust Password 
Safe.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiKey<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.BeyondTrustProviderSecretRef\">\nBeyondTrustProviderSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>APIKey If not provided then ClientID\/ClientSecret become required.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clientId<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.BeyondTrustProviderSecretRef\">\nBeyondTrustProviderSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>ClientID is the API OAuth Client ID.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clientSecret<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.BeyondTrustProviderSecretRef\">\nBeyondTrustProviderSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>ClientSecret is the API OAuth Client Secret.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>certificate<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.BeyondTrustProviderSecretRef\">\nBeyondTrustProviderSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Certificate (cert.pem) for use when authenticating with an OAuth client Id using a Client Certificate.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>certificateKey<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.BeyondTrustProviderSecretRef\">\nBeyondTrustProviderSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Certificate private key (key.pem). 
For use when authenticating with an OAuth client Id<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.BeyondtrustProvider\">BeyondtrustProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.BeyondtrustAuth\">\nBeyondtrustAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth configures how the operator authenticates with Beyondtrust.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>server<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.BeyondtrustServer\">\nBeyondtrustServer\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Server configures how the API server works.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.BeyondtrustServer\">BeyondtrustServer\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.BeyondtrustProvider\">BeyondtrustProvider<\/a>)\n<\/p>\n<p>\n<p>Configures a store to sync secrets using BeyondTrust Password Safe.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiUrl<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retrievalType<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The secret retrieval type. SECRET = Secrets Safe (credential, text, file). 
MANAGED_ACCOUNT = Password Safe account associated with a system.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>separator<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>A character that separates the folder names.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>verifyCA<\/code><\/br>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clientTimeOutSeconds<\/code><\/br>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<p>Timeout specifies a time limit for requests made by this Client. The timeout includes connection time, any redirects, and reading the response body. Defaults to 45 seconds.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.BitwardenSecretsManagerAuth\">BitwardenSecretsManagerAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.BitwardenSecretsManagerProvider\">BitwardenSecretsManagerProvider<\/a>)\n<\/p>\n<p>\n<p>BitwardenSecretsManagerAuth contains the ref to the secret that contains the machine account token.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.BitwardenSecretsManagerSecretRef\">\nBitwardenSecretsManagerSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.BitwardenSecretsManagerProvider\">BitwardenSecretsManagerProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>BitwardenSecretsManagerProvider configures a store to sync secrets with a Bitwarden Secrets Manager 
instance.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiURL<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>identityURL<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>bitwardenServerSDKURL<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caBundle<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Base64 encoded certificate for the bitwarden server sdk. The sdk MUST run with HTTPS to make sure no MITM attack\ncan be performed.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caProvider<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.CAProvider\">\nCAProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>see: <a href=\"https:\/\/external-secrets.io\/latest\/spec\/#external-secrets.io\/v1alpha1.CAProvider\">https:\/\/external-secrets.io\/latest\/spec\/#external-secrets.io\/v1alpha1.CAProvider<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>organizationID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>OrganizationID determines which organization this secret store manages.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>projectID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>ProjectID determines which project this secret store manages.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.BitwardenSecretsManagerAuth\">\nBitwardenSecretsManagerAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth configures how secret-manager authenticates with a bitwarden machine account instance.\nMake sure that the token being used has permissions on the given secret.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.BitwardenSecretsManagerSecretRef\">BitwardenSecretsManagerSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.BitwardenSecretsManagerAuth\">BitwardenSecretsManagerAuth<\/a>)\n<\/p>\n<p>\n<p>BitwardenSecretsManagerSecretRef contains the credential ref to the bitwarden instance.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>credentials<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>AccessToken used for the bitwarden instance.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.CAProvider\">CAProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AkeylessProvider\">AkeylessProvider<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.BitwardenSecretsManagerProvider\">BitwardenSecretsManagerProvider<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.ConjurProvider\">ConjurProvider<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.KubernetesServer\">KubernetesServer<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.VaultProvider\">VaultProvider<\/a>)\n<\/p>\n<p>\n<p>Used to provide custom certificate authority (CA) certificates\nfor a secret store. 
The CAProvider points to a Secret or ConfigMap resource\nthat contains a PEM-encoded certificate.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>type<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.CAProviderType\">\nCAProviderType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The type of provider to use such as &ldquo;Secret&rdquo;, or &ldquo;ConfigMap&rdquo;.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>name<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The name of the object located at the provider type.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>key<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The key where the CA certificate can be found in the Secret or ConfigMap.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>namespace<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The namespace the Provider type is in.\nCan only be defined when used in a ClusterSecretStore.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.CAProviderType\">CAProviderType\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.CAProvider\">CAProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;ConfigMap&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;Secret&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.CertAuth\">CertAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.KubernetesAuth\">KubernetesAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>clientCert<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets 
meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clientKey<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ChefAuth\">ChefAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ChefProvider\">ChefProvider<\/a>)\n<\/p>\n<p>\n<p>ChefAuth contains a secretRef for credentials.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ChefAuthSecretRef\">\nChefAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ChefAuthSecretRef\">ChefAuthSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ChefAuth\">ChefAuth<\/a>)\n<\/p>\n<p>\n<p>ChefAuthSecretRef holds secret references for chef server login credentials.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>privateKeySecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>SecretKey is the Signing Key in PEM format, used for authentication.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ChefProvider\">ChefProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>ChefProvider configures a store to sync secrets using basic chef server connection 
credentials.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ChefAuth\">\nChefAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth defines the information necessary to authenticate against the Chef server.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>username<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>UserName should be the user ID on the Chef server.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serverUrl<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>ServerURL is the Chef server URL to connect to. If using orgs, include your org in the URL and terminate it with a &ldquo;\/&rdquo;.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ClusterExternalSecret\">ClusterExternalSecret\n<\/h3>\n<p>\n<p>ClusterExternalSecret is the Schema for the clusterexternalsecrets API.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>metadata<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterExternalSecretSpec\">\nClusterExternalSecretSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>externalSecretSpec<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretSpec\">\nExternalSecretSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The spec for the ExternalSecrets to be created<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>externalSecretName<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The name of the external secrets to be
created.\nDefaults to the name of the ClusterExternalSecret<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>externalSecretMetadata<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretMetadata\">\nExternalSecretMetadata\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The metadata of the external secrets to be created<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>namespaceSelector<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#labelselector-v1-meta\">\nKubernetes meta\/v1.LabelSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The labels to select by to find the Namespaces to create the ExternalSecrets in.\nDeprecated: Use NamespaceSelectors instead.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>namespaceSelectors<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#*k8s.io\/apimachinery\/pkg\/apis\/meta\/v1.labelselector--\">\n[]*k8s.io\/apimachinery\/pkg\/apis\/meta\/v1.LabelSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>A list of labels to select by to find the Namespaces to create the ExternalSecrets in. The selectors are ORed.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>namespaces<\/code><\/br>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Choose namespaces by name. 
This field is ORed with anything that NamespaceSelectors ends up choosing.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>refreshTime<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The interval at which the controller reconciles its objects and rechecks namespaces for labels.<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterExternalSecretStatus\">\nClusterExternalSecretStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ClusterExternalSecretConditionType\">ClusterExternalSecretConditionType\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterExternalSecretStatusCondition\">ClusterExternalSecretStatusCondition<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;Ready&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ClusterExternalSecretNamespaceFailure\">ClusterExternalSecretNamespaceFailure\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterExternalSecretStatus\">ClusterExternalSecretStatus<\/a>)\n<\/p>\n<p>\n<p>ClusterExternalSecretNamespaceFailure represents a failed namespace deployment and its reason.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>namespace<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Namespace is the namespace that failed when trying to apply an ExternalSecret<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>reason<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Reason is why the ExternalSecret failed to apply to the
namespace<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ClusterExternalSecretSpec\">ClusterExternalSecretSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterExternalSecret\">ClusterExternalSecret<\/a>)\n<\/p>\n<p>\n<p>ClusterExternalSecretSpec defines the desired state of ClusterExternalSecret.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>externalSecretSpec<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretSpec\">\nExternalSecretSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The spec for the ExternalSecrets to be created<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>externalSecretName<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The name of the external secrets to be created.\nDefaults to the name of the ClusterExternalSecret<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>externalSecretMetadata<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretMetadata\">\nExternalSecretMetadata\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The metadata of the external secrets to be created<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>namespaceSelector<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#labelselector-v1-meta\">\nKubernetes meta\/v1.LabelSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The labels to select by to find the Namespaces to create the ExternalSecrets in.\nDeprecated: Use NamespaceSelectors instead.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>namespaceSelectors<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#*k8s.io\/apimachinery\/pkg\/apis\/meta\/v1.labelselector--\">\n[]*k8s.io\/apimachinery\/pkg\/apis\/meta\/v1.LabelSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>A list of labels to select by to find 
the Namespaces to create the ExternalSecrets in. The selectors are ORed.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>namespaces<\/code><\/br>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Choose namespaces by name. This field is ORed with anything that NamespaceSelectors ends up choosing.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>refreshTime<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The interval at which the controller reconciles its objects and rechecks namespaces for labels.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ClusterExternalSecretStatus\">ClusterExternalSecretStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterExternalSecret\">ClusterExternalSecret<\/a>)\n<\/p>\n<p>\n<p>ClusterExternalSecretStatus defines the observed state of ClusterExternalSecret.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>externalSecretName<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>ExternalSecretName is the name of the ExternalSecrets created by the ClusterExternalSecret<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>failedNamespaces<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterExternalSecretNamespaceFailure\">\n[]ClusterExternalSecretNamespaceFailure\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Failed namespaces are the namespaces that failed to apply an ExternalSecret<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>provisionedNamespaces<\/code><\/br>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ProvisionedNamespaces are the namespaces where the ClusterExternalSecret has secrets<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>conditions<\/code><\/br>\n<em>\n<a
href=\"#external-secrets.io\/v1beta1.ClusterExternalSecretStatusCondition\">\n[]ClusterExternalSecretStatusCondition\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ClusterExternalSecretStatusCondition\">ClusterExternalSecretStatusCondition\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterExternalSecretStatus\">ClusterExternalSecretStatus<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>type<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterExternalSecretConditionType\">\nClusterExternalSecretConditionType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#conditionstatus-v1-core\">\nKubernetes core\/v1.ConditionStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>message<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ClusterSecretStore\">ClusterSecretStore\n<\/h3>\n<p>\n<p>ClusterSecretStore represents a secure external location for storing secrets, which can be referenced as part of <code>storeRef<\/code> fields.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>metadata<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><\/br>\n<em>\n<a 
href=\"#external-secrets.io\/v1beta1.SecretStoreSpec\">\nSecretStoreSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>controller<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to select the correct ESO controller (think: ingress.ingressClassName).\nThe ESO controller is instantiated with a specific controller name and filters ES based on this property<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>provider<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">\nSecretStoreProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Used to configure the provider. Only one provider may be set<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retrySettings<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreRetrySettings\">\nSecretStoreRetrySettings\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to configure HTTP retries on failure<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>refreshInterval<\/code><\/br>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to configure store refresh interval in seconds. Empty or 0 will default to the controller config.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>conditions<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterSecretStoreCondition\">\n[]ClusterSecretStoreCondition\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to constrain a ClusterSecretStore to specific namespaces.
Relevant only to ClusterSecretStore<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreStatus\">\nSecretStoreStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ClusterSecretStoreCondition\">ClusterSecretStoreCondition\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreSpec\">SecretStoreSpec<\/a>)\n<\/p>\n<p>\n<p>ClusterSecretStoreCondition describes a condition by which to choose namespaces to process ExternalSecrets in\nfor a ClusterSecretStore instance.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>namespaceSelector<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#labelselector-v1-meta\">\nKubernetes meta\/v1.LabelSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Choose namespace using a labelSelector<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>namespaces<\/code><\/br>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Choose namespaces by name<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>namespaceRegexes<\/code><\/br>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Choose namespaces by using regex matching<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ConjurAPIKey\">ConjurAPIKey\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ConjurAuth\">ConjurAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>account<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>userRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets 
meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>apiKeyRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ConjurAuth\">ConjurAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ConjurProvider\">ConjurProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apikey<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ConjurAPIKey\">\nConjurAPIKey\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>jwt<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ConjurJWT\">\nConjurJWT\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ConjurJWT\">ConjurJWT\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ConjurAuth\">ConjurAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>account<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The conjur authn jwt webservice id<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>hostId<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional HostID for JWT authentication. 
This may be used depending\non how the Conjur JWT authenticator policy is configured.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional SecretRef that refers to a key in a Secret resource containing JWT token to\nauthenticate with Conjur using the JWT authentication method.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#ServiceAccountSelector\">\nExternal Secrets meta\/v1.ServiceAccountSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional ServiceAccountRef specifies the Kubernetes service account for which to request\na token for with the <code>TokenRequest<\/code> API.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ConjurProvider\">ConjurProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>url<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caBundle<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caProvider<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.CAProvider\">\nCAProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ConjurAuth\">\nConjurAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.DelineaProvider\">DelineaProvider\n<\/h3>\n<p>\n(<em>Appears 
on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>See <a href=\"https:\/\/github.com\/DelineaXPM\/dsv-sdk-go\/blob\/main\/vault\/vault.go\">https:\/\/github.com\/DelineaXPM\/dsv-sdk-go\/blob\/main\/vault\/vault.go<\/a>.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>clientId<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.DelineaProviderSecretRef\">\nDelineaProviderSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>ClientID is the non-secret part of the credential.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clientSecret<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.DelineaProviderSecretRef\">\nDelineaProviderSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>ClientSecret is the secret part of the credential.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tenant<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Tenant is the chosen hostname \/ site name.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>urlTemplate<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>URLTemplate\nIf unset, defaults to &ldquo;https:\/\/%s.secretsvaultcloud.%s\/v1\/%s%s&rdquo;.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tld<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TLD is based on the server location that was chosen during provisioning.\nIf unset, defaults to &ldquo;com&rdquo;.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.DelineaProviderSecretRef\">DelineaProviderSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.DelineaProvider\">DelineaProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>value<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Value can be specified directly to set a value without using a 
secret.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecretRef references a key in a secret that will be used as value.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.Device42Auth\">Device42Auth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.Device42Provider\">Device42Provider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.Device42SecretRef\">\nDevice42SecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.Device42Provider\">Device42Provider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>Device42Provider configures a store to sync secrets with a Device42 instance.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>host<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>URL configures the Device42 instance URL.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.Device42Auth\">\nDevice42Auth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth configures how secret-manager authenticates with a Device42 instance.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.Device42SecretRef\">Device42SecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.Device42Auth\">Device42Auth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>credentials<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Username \/ Password is used for authentication.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.DopplerAuth\">DopplerAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.DopplerProvider\">DopplerProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.DopplerAuthSecretRef\">\nDopplerAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.DopplerAuthSecretRef\">DopplerAuthSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.DopplerAuth\">DopplerAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>dopplerToken<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The DopplerToken is used for authentication.\nSee <a href=\"https:\/\/docs.doppler.com\/reference\/api#authentication\">https:\/\/docs.doppler.com\/reference\/api#authentication<\/a> for auth token types.\nThe Key attribute defaults to dopplerToken if not specified.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.DopplerProvider\">DopplerProvider\n<\/h3>\n<p>\n(<em>Appears 
on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>DopplerProvider configures a store to sync secrets using the Doppler provider.\nProject and Config are required if not using a Service Token.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.DopplerAuth\">\nDopplerAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth configures how the Operator authenticates with the Doppler API<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>project<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Doppler project (required if not using a Service Token)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>config<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Doppler config (required if not using a Service Token)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>nameTransformer<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Environment variable compatible name transforms that change secret names to a different format<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>format<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Format enables the downloading of secrets as a file (string)<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecret\">ExternalSecret\n<\/h3>\n<p>\n<p>ExternalSecret is the Schema for the external-secrets API.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>metadata<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> 
field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretSpec\">\nExternalSecretSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>secretStoreRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreRef\">\nSecretStoreRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>target<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretTarget\">\nExternalSecretTarget\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>refreshInterval<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>RefreshInterval is the amount of time before the values are read again from the SecretStore provider,\nspecified as Go duration strings.\nValid time units are &ldquo;ns&rdquo;, &ldquo;us&rdquo; (or &ldquo;\u00b5s&rdquo;), &ldquo;ms&rdquo;, &ldquo;s&rdquo;, &ldquo;m&rdquo;, &ldquo;h&rdquo;.\nExample values: &ldquo;1h&rdquo;, &ldquo;2h30m&rdquo;, &ldquo;10s&rdquo;.\nMay be set to zero to fetch and create it once.
Defaults to 1h.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>data<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretData\">\n[]ExternalSecretData\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Data defines the connection between the Kubernetes Secret keys and the Provider data<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>dataFrom<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDataFromRemoteRef\">\n[]ExternalSecretDataFromRemoteRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DataFrom is used to fetch all properties from a specific Provider data\nIf multiple entries are specified, the Secret keys are merged in the specified order<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretStatus\">\nExternalSecretStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretConditionType\">ExternalSecretConditionType\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretStatusCondition\">ExternalSecretStatusCondition<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;Deleted&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;Ready&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretConversionStrategy\">ExternalSecretConversionStrategy\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDataRemoteRef\">ExternalSecretDataRemoteRef<\/a>, \n<a 
href=\"#external-secrets.io\/v1beta1.ExternalSecretFind\">ExternalSecretFind<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;Default&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;Unicode&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretCreationPolicy\">ExternalSecretCreationPolicy\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretTarget\">ExternalSecretTarget<\/a>)\n<\/p>\n<p>\n<p>ExternalSecretCreationPolicy defines rules on how to create the resulting Secret.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;Merge&#34;<\/p><\/td>\n<td><p>Merge does not create the Secret, but merges the data fields to the Secret.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;None&#34;<\/p><\/td>\n<td><p>None does not create a Secret (future use with injector).<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Orphan&#34;<\/p><\/td>\n<td><p>Orphan creates the Secret and does not set the ownerReference.\nI.e. 
it will be orphaned after the deletion of the ExternalSecret.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Owner&#34;<\/p><\/td>\n<td><p>Owner creates the Secret and sets .metadata.ownerReferences to the ExternalSecret resource.<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretData\">ExternalSecretData\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretSpec\">ExternalSecretSpec<\/a>)\n<\/p>\n<p>\n<p>ExternalSecretData defines the connection between the Kubernetes Secret key (spec.data.<key>) and the Provider data.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretKey<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The key in the Kubernetes Secret to store the value.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>remoteRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDataRemoteRef\">\nExternalSecretDataRemoteRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>RemoteRef points to the remote secret and defines\nwhich secret (version\/property\/..) 
to fetch.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sourceRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.StoreSourceRef\">\nStoreSourceRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>SourceRef allows you to override the source\nfrom which the value will be pulled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretDataFromRemoteRef\">ExternalSecretDataFromRemoteRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretSpec\">ExternalSecretSpec<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>extract<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDataRemoteRef\">\nExternalSecretDataRemoteRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to extract multiple key\/value pairs from one secret\nNote: Extract does not support sourceRef.Generator or sourceRef.GeneratorRef.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>find<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretFind\">\nExternalSecretFind\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to find secrets based on tags or regular expressions\nNote: Find does not support sourceRef.Generator or sourceRef.GeneratorRef.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>rewrite<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretRewrite\">\n[]ExternalSecretRewrite\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to rewrite secret Keys after getting them from the secret Provider\nMultiple Rewrite operations can be provided. 
They are applied in a layered order (first to last).<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sourceRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.StoreGeneratorSourceRef\">\nStoreGeneratorSourceRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>SourceRef points to a store or generator\nwhich contains secret values ready to use.\nUse this in combination with Extract or Find to pull values out of\na specific SecretStore.\nWhen sourceRef points to a generator, Extract or Find is not supported.\nThe generator returns a static map of values.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretDataRemoteRef\">ExternalSecretDataRemoteRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretData\">ExternalSecretData<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDataFromRemoteRef\">ExternalSecretDataFromRemoteRef<\/a>)\n<\/p>\n<p>\n<p>ExternalSecretDataRemoteRef defines the Provider data location.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>key<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Key is the key used in the Provider; mandatory.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadataPolicy<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretMetadataPolicy\">\nExternalSecretMetadataPolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Policy for fetching tags\/labels from provider secrets, possible options are Fetch, None. 
Defaults to None<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>property<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to select a specific property of the Provider value (if a map), if supported<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>version<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to select a specific version of the Provider value, if supported<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>conversionStrategy<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretConversionStrategy\">\nExternalSecretConversionStrategy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to define a conversion Strategy<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>decodingStrategy<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDecodingStrategy\">\nExternalSecretDecodingStrategy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to define a decoding Strategy<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretDecodingStrategy\">ExternalSecretDecodingStrategy\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDataRemoteRef\">ExternalSecretDataRemoteRef<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretFind\">ExternalSecretFind<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;Auto&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;Base64&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;Base64URL&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;None&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretDeletionPolicy\">ExternalSecretDeletionPolicy\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.ExternalSecretTarget\">ExternalSecretTarget<\/a>)\n<\/p>\n<p>\n<p>ExternalSecretDeletionPolicy defines rules on how to delete the resulting Secret.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;Delete&#34;<\/p><\/td>\n<td><p>Delete deletes the secret if all provider secrets are deleted.\nIf a secret gets deleted on the provider side and is not accessible\nanymore this is not considered an error and the ExternalSecret\ndoes not go into SecretSyncedError status.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Merge&#34;<\/p><\/td>\n<td><p>Merge removes keys in the secret, but not the secret itself.\nIf a secret gets deleted on the provider side and is not accessible\nanymore this is not considered an error and the ExternalSecret\ndoes not go into SecretSyncedError status.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Retain&#34;<\/p><\/td>\n<td><p>Retain will retain the secret if all provider secrets have been deleted.\nIf a provider secret does not exist the ExternalSecret gets into the\nSecretSyncedError status.<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretFind\">ExternalSecretFind\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDataFromRemoteRef\">ExternalSecretDataFromRemoteRef<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>path<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>A root path to start the find operations.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>name<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.FindName\">\nFindName\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Finds secrets based on the name.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tags<\/code><\/br>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Find secrets 
based on tags.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>conversionStrategy<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretConversionStrategy\">\nExternalSecretConversionStrategy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to define a conversion Strategy<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>decodingStrategy<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDecodingStrategy\">\nExternalSecretDecodingStrategy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to define a decoding Strategy<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretMetadata\">ExternalSecretMetadata\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterExternalSecretSpec\">ClusterExternalSecretSpec<\/a>)\n<\/p>\n<p>\n<p>ExternalSecretMetadata defines metadata fields for the ExternalSecret generated by the ClusterExternalSecret.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>annotations<\/code><\/br>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>labels<\/code><\/br>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretMetadataPolicy\">ExternalSecretMetadataPolicy\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDataRemoteRef\">ExternalSecretDataRemoteRef<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;Fetch&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;None&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretRewrite\">ExternalSecretRewrite\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.ExternalSecretDataFromRemoteRef\">ExternalSecretDataFromRemoteRef<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>regexp<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretRewriteRegexp\">\nExternalSecretRewriteRegexp\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to rewrite with regular expressions.\nThe resulting key will be the output of a regexp.ReplaceAll operation.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>transform<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretRewriteTransform\">\nExternalSecretRewriteTransform\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to apply string transformation on the secrets.\nThe resulting key will be the output of the template applied by the operation.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretRewriteRegexp\">ExternalSecretRewriteRegexp\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretRewrite\">ExternalSecretRewrite<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>source<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Used to define the regular expression of a re.Compiler.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>target<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Used to define the target pattern of a ReplaceAll operation.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretRewriteTransform\">ExternalSecretRewriteTransform\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.ExternalSecretRewrite\">ExternalSecretRewrite<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>template<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Used to define the template to apply on the secret name.\n<code>.value<\/code> will specify the secret name in the template.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretSpec\">ExternalSecretSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterExternalSecretSpec\">ClusterExternalSecretSpec<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.ExternalSecret\">ExternalSecret<\/a>)\n<\/p>\n<p>\n<p>ExternalSecretSpec defines the desired state of ExternalSecret.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretStoreRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreRef\">\nSecretStoreRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>target<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretTarget\">\nExternalSecretTarget\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>refreshInterval<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>RefreshInterval is the amount of time before the values are read again from the SecretStore provider,\nspecified as Golang Duration strings.\nValid time units are &ldquo;ns&rdquo;, &ldquo;us&rdquo; (or &ldquo;\u00b5s&rdquo;), &ldquo;ms&rdquo;, &ldquo;s&rdquo;, &ldquo;m&rdquo;, &ldquo;h&rdquo;.\nExample values: &ldquo;1h&rdquo;, &ldquo;2h30m&rdquo;, &ldquo;10s&rdquo;.\nMay be set to zero to fetch and create it once. 
Defaults to 1h.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>data<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretData\">\n[]ExternalSecretData\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Data defines the connection between the Kubernetes Secret keys and the Provider data<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>dataFrom<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDataFromRemoteRef\">\n[]ExternalSecretDataFromRemoteRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DataFrom is used to fetch all properties from a specific Provider data\nIf multiple entries are specified, the Secret keys are merged in the specified order<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretStatus\">ExternalSecretStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecret\">ExternalSecret<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>refreshTime<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>refreshTime is the time and date the external secret was fetched and\nthe target secret updated<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>syncedResourceVersion<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>SyncedResourceVersion keeps track of the last synced version<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>conditions<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretStatusCondition\">\n[]ExternalSecretStatusCondition\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>binding<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#localobjectreference-v1-core\">\nKubernetes 
core\/v1.LocalObjectReference\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Binding represents a servicebinding.io Provisioned Service reference to the secret<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretStatusCondition\">ExternalSecretStatusCondition\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretStatus\">ExternalSecretStatus<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>type<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretConditionType\">\nExternalSecretConditionType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#conditionstatus-v1-core\">\nKubernetes core\/v1.ConditionStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>reason<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>message<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>lastTransitionTime<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretTarget\">ExternalSecretTarget\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretSpec\">ExternalSecretSpec<\/a>)\n<\/p>\n<p>\n<p>ExternalSecretTarget defines the Kubernetes Secret to be created\nThere can be only one target per 
ExternalSecret.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The name of the Secret resource to be managed.\nDefaults to the .metadata.name of the ExternalSecret resource.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>creationPolicy<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretCreationPolicy\">\nExternalSecretCreationPolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>CreationPolicy defines rules on how to create the resulting Secret.\nDefaults to &ldquo;Owner&rdquo;.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>deletionPolicy<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDeletionPolicy\">\nExternalSecretDeletionPolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DeletionPolicy defines rules on how to delete the resulting Secret.\nDefaults to &ldquo;Retain&rdquo;.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>template<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretTemplate\">\nExternalSecretTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Template defines a blueprint for the created Secret resource.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>immutable<\/code><\/br>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Immutable defines whether the final secret will be immutable.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretTemplate\">ExternalSecretTemplate\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretTarget\">ExternalSecretTarget<\/a>)\n<\/p>\n<p>\n<p>ExternalSecretTemplate defines a blueprint for the created Secret resource.\nWe cannot use native corev1.Secret, as it will have empty ObjectMeta values: <a 
href=\"https:\/\/github.com\/kubernetes-sigs\/controller-tools\/issues\/448\">https:\/\/github.com\/kubernetes-sigs\/controller-tools\/issues\/448<\/a><\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>type<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#secrettype-v1-core\">\nKubernetes core\/v1.SecretType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>engineVersion<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.TemplateEngineVersion\">\nTemplateEngineVersion\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>EngineVersion specifies the template engine version\nthat should be used to compile\/execute the\ntemplate specified in .data and .templateFrom[].<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretTemplateMetadata\">\nExternalSecretTemplateMetadata\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>mergePolicy<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.TemplateMergePolicy\">\nTemplateMergePolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>data<\/code><\/br>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>templateFrom<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.TemplateFrom\">\n[]TemplateFrom\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretTemplateMetadata\">ExternalSecretTemplateMetadata\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretTemplate\">ExternalSecretTemplate<\/a>)\n<\/p>\n<p>\n<p>ExternalSecretTemplateMetadata defines metadata fields for the Secret 
blueprint.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>annotations<\/code><\/br>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>labels<\/code><\/br>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ExternalSecretValidator\">ExternalSecretValidator\n<\/h3>\n<p>\n<\/p>\n<h3 id=\"external-secrets.io\/v1beta1.FakeProvider\">FakeProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>FakeProvider configures a fake provider that returns static values.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>data<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.FakeProviderData\">\n[]FakeProviderData\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.FakeProviderData\">FakeProviderData\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.FakeProvider\">FakeProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>key<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>valueMap<\/code><\/br>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<p>Deprecated: ValueMap is deprecated and is intended to be removed in the future, use the <code>value<\/code> field instead.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>version<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.FindName\">FindName\n<\/h3>\n<p>\n(<em>Appears 
on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretFind\">ExternalSecretFind<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>regexp<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Finds secrets based on the regular expression.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.FortanixProvider\">FortanixProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiUrl<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>APIURL is the URL of the SDKMS API. Defaults to <code>sdkms.fortanix.com<\/code>.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>apiKey<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.FortanixProviderSecretRef\">\nFortanixProviderSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>APIKey is the API token to access SDKMS Applications.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.FortanixProviderSecretRef\">FortanixProviderSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.FortanixProvider\">FortanixProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>SecretRef is a reference to a secret containing the SDKMS API Key.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.GCPSMAuth\">GCPSMAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.GCPSMProvider\">GCPSMProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.GCPSMAuthSecretRef\">\nGCPSMAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workloadIdentity<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.GCPWorkloadIdentity\">\nGCPWorkloadIdentity\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.GCPSMAuthSecretRef\">GCPSMAuthSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.GCPSMAuth\">GCPSMAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretAccessKeySecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The SecretAccessKey is used for authentication<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.GCPSMProvider\">GCPSMProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>GCPSMProvider Configures a store to sync secrets using the GCP Secret Manager provider.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.GCPSMAuth\">\nGCPSMAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Auth defines the information necessary to authenticate against 
GCP<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>projectID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>ProjectID project where secret is located<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>location<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Location optionally defines a location for a secret<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.GCPWorkloadIdentity\">GCPWorkloadIdentity\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.GCPSMAuth\">GCPSMAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>serviceAccountRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#ServiceAccountSelector\">\nExternal Secrets meta\/v1.ServiceAccountSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clusterLocation<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clusterName<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clusterProjectID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.GeneratorRef\">GeneratorRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.StoreGeneratorSourceRef\">StoreGeneratorSourceRef<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.StoreSourceRef\">StoreSourceRef<\/a>)\n<\/p>\n<p>\n<p>GeneratorRef points to a generator custom resource.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Specify the apiVersion of the generator resource<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Specify the Kind of the generator 
resource<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>name<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Specify the name of the generator resource<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.GenericStore\">GenericStore\n<\/h3>\n<p>\n<p>GenericStore is a common interface for interacting with ClusterSecretStore\nor a namespaced SecretStore.<\/p>\n<\/p>\n<h3 id=\"external-secrets.io\/v1beta1.GenericStoreValidator\">GenericStoreValidator\n<\/h3>\n<p>\n<\/p>\n<h3 id=\"external-secrets.io\/v1beta1.GitlabAuth\">GitlabAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.GitlabProvider\">GitlabProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>SecretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.GitlabSecretRef\">\nGitlabSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.GitlabProvider\">GitlabProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>Configures a store to sync secrets with a GitLab instance.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>url<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>URL configures the GitLab instance URL. 
Defaults to <a href=\"https:\/\/gitlab.com\/\">https:\/\/gitlab.com\/<\/a>.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.GitlabAuth\">\nGitlabAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth configures how secret-manager authenticates with a GitLab instance.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>projectID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>ProjectID specifies a project where secrets are located.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>inheritFromGroups<\/code><\/br>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>InheritFromGroups specifies whether parent groups should be discovered and checked for secrets.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>groupIDs<\/code><\/br>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<p>GroupIDs specifies which GitLab groups to pull secrets from. Group secrets are read from left to right, followed by the project variables.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>environment<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Environment is the environment_scope of GitLab CI\/CD variables (see <a href=\"https:\/\/docs.gitlab.com\/ee\/ci\/environments\/#create-a-static-environment\">https:\/\/docs.gitlab.com\/ee\/ci\/environments\/#create-a-static-environment<\/a> for how to create environments).<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.GitlabSecretRef\">GitlabSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.GitlabAuth\">GitlabAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>accessToken<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>AccessToken is used for authentication.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"external-secrets.io\/v1beta1.IBMAuth\">IBMAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.IBMProvider\">IBMProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.IBMAuthSecretRef\">\nIBMAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>containerAuth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.IBMAuthContainerAuth\">\nIBMAuthContainerAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.IBMAuthContainerAuth\">IBMAuthContainerAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.IBMAuth\">IBMAuth<\/a>)\n<\/p>\n<p>\n<p>IBM Container-based auth with IAM Trusted Profile.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>profile<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>the IBM Trusted Profile<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tokenLocation<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Location the token is mounted on the pod<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>iamEndpoint<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.IBMAuthSecretRef\">IBMAuthSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.IBMAuth\">IBMAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretApiKeySecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The SecretAccessKey is used for 
authentication<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.IBMProvider\">IBMProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>Configures a store to sync secrets using an IBM Cloud Secrets Manager\nbackend.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.IBMAuth\">\nIBMAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth configures how secret-manager authenticates with the IBM secrets manager.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceUrl<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>ServiceURL is the endpoint URL that is specific to the Secrets Manager service instance.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.InfisicalAuth\">InfisicalAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.InfisicalProvider\">InfisicalProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>universalAuthCredentials<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.UniversalAuthCredentials\">\nUniversalAuthCredentials\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.InfisicalProvider\">InfisicalProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>InfisicalProvider configures a store to sync secrets using the Infisical provider.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a 
href=\"#external-secrets.io\/v1beta1.InfisicalAuth\">\nInfisicalAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth configures how the Operator authenticates with the Infisical API<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretsScope<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.MachineIdentityScopeInWorkspace\">\nMachineIdentityScopeInWorkspace\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>hostAPI<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.KeeperSecurityProvider\">KeeperSecurityProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>KeeperSecurityProvider Configures a store to sync secrets using Keeper Security.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>authRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>folderID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.KubernetesAuth\">KubernetesAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.KubernetesProvider\">KubernetesProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>cert<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.CertAuth\">\nCertAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>has both clientCert and clientKey as secretKeySelector<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>token<\/code><\/br>\n<em>\n<a 
href=\"#external-secrets.io\/v1beta1.TokenAuth\">\nTokenAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>use static token to authenticate with<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccount<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#ServiceAccountSelector\">\nExternal Secrets meta\/v1.ServiceAccountSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>points to a service account that should be used for authentication<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.KubernetesProvider\">KubernetesProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>Configures a store to sync secrets with a Kubernetes instance.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>server<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.KubernetesServer\">\nKubernetesServer\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>configures the Kubernetes server Address.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.KubernetesAuth\">\nKubernetesAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Auth configures how secret-manager authenticates with a Kubernetes instance.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>authRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>A reference to a secret that contains the auth information.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>remoteNamespace<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Remote namespace to fetch the secrets 
from<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.KubernetesServer\">KubernetesServer\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.KubernetesProvider\">KubernetesProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>url<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>configures the Kubernetes server Address.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caBundle<\/code><\/br>\n<em>\n[]byte\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>CABundle is a base64-encoded CA certificate<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caProvider<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.CAProvider\">\nCAProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>see: <a href=\"https:\/\/external-secrets.io\/v0.4.1\/spec\/#external-secrets.io\/v1alpha1.CAProvider\">https:\/\/external-secrets.io\/v0.4.1\/spec\/#external-secrets.io\/v1alpha1.CAProvider<\/a><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.MachineIdentityScopeInWorkspace\">MachineIdentityScopeInWorkspace\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.InfisicalProvider\">InfisicalProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretsPath<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>recursive<\/code><\/br>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>environmentSlug<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>projectSlug<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.NoSecretError\">NoSecretError\n<\/h3>\n<p>\n<p>NoSecretError shall 
be returned when a GetSecret can not find the\ndesired secret. This is used for deletionPolicy.<\/p>\n<\/p>\n<h3 id=\"external-secrets.io\/v1beta1.NotModifiedError\">NotModifiedError\n<\/h3>\n<p>\n<p>NotModifiedError to signal that the webhook received no changes,\nand it should just return without doing anything.<\/p>\n<\/p>\n<h3 id=\"external-secrets.io\/v1beta1.OnboardbaseAuthSecretRef\">OnboardbaseAuthSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.OnboardbaseProvider\">OnboardbaseProvider<\/a>)\n<\/p>\n<p>\n<p>OnboardbaseAuthSecretRef holds secret references for onboardbase API Key credentials.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiKeyRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>OnboardbaseAPIKey is the APIKey generated by an admin account.\nIt is used to recognize and authorize access to a project and environment within onboardbase<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>passcodeRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>OnboardbasePasscode is the passcode attached to the API Key<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.OnboardbaseProvider\">OnboardbaseProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>OnboardbaseProvider configures a store to sync secrets using the Onboardbase provider.\nProject and Config are required if not using a Service 
Token.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.OnboardbaseAuthSecretRef\">\nOnboardbaseAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth configures how the Operator authenticates with the Onboardbase API<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>apiHost<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>APIHost configures the host URL of the API for a self-hosted installation; the default is <a href=\"https:\/\/public.onboardbase.com\/api\/v1\/\">https:\/\/public.onboardbase.com\/api\/v1\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>project<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Project is an onboardbase project that the secrets should be pulled from<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>environment<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Environment is the name of an environment within a project to pull the secrets from<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.OnePasswordAuth\">OnePasswordAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.OnePasswordProvider\">OnePasswordProvider<\/a>)\n<\/p>\n<p>\n<p>OnePasswordAuth contains a secretRef for credentials.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.OnePasswordAuthSecretRef\">\nOnePasswordAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.OnePasswordAuthSecretRef\">OnePasswordAuthSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.OnePasswordAuth\">OnePasswordAuth<\/a>)\n<\/p>\n<p>\n<p>OnePasswordAuthSecretRef holds secret references for 1Password 
credentials.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>connectTokenSecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The ConnectToken is used for authentication to a 1Password Connect Server.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.OnePasswordProvider\">OnePasswordProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>OnePasswordProvider configures a store to sync secrets using the 1Password Secret Manager provider.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.OnePasswordAuth\">\nOnePasswordAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth defines the information necessary to authenticate against OnePassword Connect Server<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>connectHost<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>ConnectHost defines the OnePassword Connect Server to connect to<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>vaults<\/code><\/br>\n<em>\nmap[string]int\n<\/em>\n<\/td>\n<td>\n<p>Vaults defines which OnePassword vaults to search in which order<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.OracleAuth\">OracleAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.OracleProvider\">OracleProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>tenancy<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Tenancy is the tenancy OCID where user is 
located.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>user<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>User is an access OCID specific to the account.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.OracleSecretRef\">\nOracleSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>SecretRef to pass through sensitive information.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.OraclePrincipalType\">OraclePrincipalType\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.OracleProvider\">OracleProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;InstancePrincipal&#34;<\/p><\/td>\n<td><p>InstancePrincipal represents an instance principal.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;UserPrincipal&#34;<\/p><\/td>\n<td><p>UserPrincipal represents a user principal.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Workload&#34;<\/p><\/td>\n<td><p>WorkloadPrincipal represents a workload principal.<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.OracleProvider\">OracleProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>Configures a store to sync secrets using an Oracle Vault\nbackend.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>region<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Region is the region where the vault is located.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>vault<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Vault is the OCID of the specific vault where the secret is located.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>compartment<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Compartment is the vault 
compartment OCID.\nRequired for PushSecret<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>encryptionKey<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>EncryptionKey is the OCID of the encryption key within the vault.\nRequired for PushSecret<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>principalType<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.OraclePrincipalType\">\nOraclePrincipalType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The type of principal to use for authentication. If left blank, the Auth struct will\ndetermine the principal type. This optional field must be specified if using\nworkload identity.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.OracleAuth\">\nOracleAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Auth configures how secret-manager authenticates with the Oracle Vault.\nIf empty, use the instance principal, otherwise the user credentials specified in Auth.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#ServiceAccountSelector\">\nExternal Secrets meta\/v1.ServiceAccountSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ServiceAccountRef specified the service account\nthat should be used when authenticating with WorkloadIdentity.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.OracleSecretRef\">OracleSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.OracleAuth\">OracleAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>privatekey<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets 
meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>PrivateKey is the user&rsquo;s API Signing Key in PEM format, used for authentication.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>fingerprint<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Fingerprint is the fingerprint of the API private key.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.PassboltAuth\">PassboltAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.PassboltProvider\">PassboltProvider<\/a>)\n<\/p>\n<p>\n<p>Passbolt contains a secretRef for the passbolt credentials.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>passwordSecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>privateKeySecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.PassboltProvider\">PassboltProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.PassboltAuth\">\nPassboltAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth defines the information necessary to authenticate against Passbolt 
Server<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>host<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Host defines the Passbolt Server to connect to<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.PasswordDepotAuth\">PasswordDepotAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.PasswordDepotProvider\">PasswordDepotProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.PasswordDepotSecretRef\">\nPasswordDepotSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.PasswordDepotProvider\">PasswordDepotProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>Configures a store to sync secrets with a Password Depot instance.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>host<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>URL configures the Password Depot instance URL.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>database<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Database to use as source<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.PasswordDepotAuth\">\nPasswordDepotAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth configures how secret-manager authenticates with a Password Depot instance.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.PasswordDepotSecretRef\">PasswordDepotSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.PasswordDepotAuth\">PasswordDepotAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>credentials<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Username \/ Password is used for authentication.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.PreviderAuth\">PreviderAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.PreviderProvider\">PreviderProvider<\/a>)\n<\/p>\n<p>\n<p>PreviderAuth contains a secretRef for credentials.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.PreviderAuthSecretRef\">\nPreviderAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.PreviderAuthSecretRef\">PreviderAuthSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.PreviderAuth\">PreviderAuth<\/a>)\n<\/p>\n<p>\n<p>PreviderAuthSecretRef holds secret references for Previder Vault credentials.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>accessToken<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The AccessToken is used for authentication<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.PreviderProvider\">PreviderProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>PreviderProvider configures a store to sync secrets using the Previder Secret Manager provider.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.PreviderAuth\">\nPreviderAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>baseUri<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.Provider\">Provider\n<\/h3>\n<p>\n<p>Provider is a common interface for interacting with secret backends.<\/p>\n<\/p>\n<h3 id=\"external-secrets.io\/v1beta1.PulumiProvider\">PulumiProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiUrl<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>APIURL is the URL of the Pulumi API.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>accessToken<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.PulumiProviderSecretRef\">\nPulumiProviderSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>AccessToken is the access tokens to sign in to the Pulumi Cloud Console.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>organization<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Organization are a space to collaborate on shared projects and stacks.\nTo create a new organization, visit <a href=\"https:\/\/app.pulumi.com\/\">https:\/\/app.pulumi.com\/<\/a> and click &ldquo;New Organization&rdquo;.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>project<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Project is the name of the Pulumi ESC project the environment belongs 
to.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>environment<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Environments are YAML documents composed of static key-value pairs, programmatic expressions,\ndynamically retrieved values from supported providers including all major clouds,\nand other Pulumi ESC environments.\nTo create a new environment, see <a href=\"https:\/\/www.pulumi.com\/docs\/esc\/environments\/\">https:\/\/www.pulumi.com\/docs\/esc\/environments\/<\/a> for more information.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.PulumiProviderSecretRef\">PulumiProviderSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.PulumiProvider\">PulumiProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>SecretRef is a reference to a secret containing the Pulumi API token.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.PushSecretData\">PushSecretData\n<\/h3>\n<p>\n<p>PushSecretData is an interface to allow using v1alpha1.PushSecretData content in Provider registered in v1beta1.<\/p>\n<\/p>\n<h3 id=\"external-secrets.io\/v1beta1.PushSecretRemoteRef\">PushSecretRemoteRef\n<\/h3>\n<p>\n<p>PushSecretRemoteRef is an interface to allow using v1alpha1.PushSecretRemoteRef in Provider registered in v1beta1.<\/p>\n<\/p>\n<h3 id=\"external-secrets.io\/v1beta1.ScalewayProvider\">ScalewayProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiUrl<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>APIURL is the url of the api to use. Defaults to <a href=\"https:\/\/api.scaleway.com\">https:\/\/api.scaleway.com<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>region<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Region where your secrets are located: <a href=\"https:\/\/developers.scaleway.com\/en\/quickstart\/#region-and-zone\">https:\/\/developers.scaleway.com\/en\/quickstart\/#region-and-zone<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>projectId<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>ProjectID is the id of your project, which you can find in the console: <a href=\"https:\/\/console.scaleway.com\/project\/settings\">https:\/\/console.scaleway.com\/project\/settings<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>accessKey<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ScalewayProviderSecretRef\">\nScalewayProviderSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>AccessKey is the non-secret part of the api key.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretKey<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ScalewayProviderSecretRef\">\nScalewayProviderSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>SecretKey is the non-secret part of the api key.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ScalewayProviderSecretRef\">ScalewayProviderSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ScalewayProvider\">ScalewayProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>value<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Value can be specified 
directly to set a value without using a secret.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecretRef references a key in a secret that will be used as value.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SecretServerProvider\">SecretServerProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>See <a href=\"https:\/\/github.com\/DelineaXPM\/tss-sdk-go\/blob\/main\/server\/server.go\">https:\/\/github.com\/DelineaXPM\/tss-sdk-go\/blob\/main\/server\/server.go<\/a>.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>username<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretServerProviderRef\">\nSecretServerProviderRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Username is the secret server account username.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>password<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretServerProviderRef\">\nSecretServerProviderRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Password is the secret server account password.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serverURL<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>ServerURL is the URL of your secret server installation<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SecretServerProviderRef\">SecretServerProviderRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.SecretServerProvider\">SecretServerProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>value<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Value can be specified directly to set a value without using a secret.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecretRef references a key in a secret that will be used as value.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SecretStore\">SecretStore\n<\/h3>\n<p>\n<p>SecretStore represents a secure external location for storing secrets, which can be referenced as part of <code>storeRef<\/code> fields.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>metadata<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreSpec\">\nSecretStoreSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>controller<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to select the correct ESO controller (think: ingress.ingressClassName)\nThe ESO controller is instantiated with a specific controller name and filters ES based on this property<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>provider<\/code><\/br>\n<em>\n<a 
href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">\nSecretStoreProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Used to configure the provider. Only one provider may be set<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retrySettings<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreRetrySettings\">\nSecretStoreRetrySettings\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to configure http retries if failed<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>refreshInterval<\/code><\/br>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to configure store refresh interval in seconds. Empty or 0 will default to the controller config.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>conditions<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterSecretStoreCondition\">\n[]ClusterSecretStoreCondition\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to constraint a ClusterSecretStore to specific namespaces. Relevant only to ClusterSecretStore<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreStatus\">\nSecretStoreStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SecretStoreCapabilities\">SecretStoreCapabilities\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreStatus\">SecretStoreStatus<\/a>)\n<\/p>\n<p>\n<p>SecretStoreCapabilities defines the possible operations a SecretStore can do.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;ReadOnly&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;ReadWrite&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;WriteOnly&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 
id=\"external-secrets.io\/v1beta1.SecretStoreConditionType\">SecretStoreConditionType\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreStatusCondition\">SecretStoreStatusCondition<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;Ready&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreSpec\">SecretStoreSpec<\/a>)\n<\/p>\n<p>\n<p>SecretStoreProvider contains the provider-specific configuration.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>aws<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AWSProvider\">\nAWSProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>AWS configures this store to sync secrets using AWS Secret Manager provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>azurekv<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AzureKVProvider\">\nAzureKVProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>AzureKV configures this store to sync secrets using Azure Key Vault provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>akeyless<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AkeylessProvider\">\nAkeylessProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Akeyless configures this store to sync secrets using Akeyless Vault provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>bitwardensecretsmanager<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.BitwardenSecretsManagerProvider\">\nBitwardenSecretsManagerProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>BitwardenSecretsManager configures this store to sync secrets using BitwardenSecretsManager 
provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>vault<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultProvider\">\nVaultProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Vault configures this store to sync secrets using Hashi provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>gcpsm<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.GCPSMProvider\">\nGCPSMProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>GCPSM configures this store to sync secrets using Google Cloud Platform Secret Manager provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>oracle<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.OracleProvider\">\nOracleProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Oracle configures this store to sync secrets using Oracle Vault provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ibm<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.IBMProvider\">\nIBMProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>IBM configures this store to sync secrets using IBM Cloud provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>yandexcertificatemanager<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.YandexCertificateManagerProvider\">\nYandexCertificateManagerProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>YandexCertificateManager configures this store to sync secrets using Yandex Certificate Manager provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>yandexlockbox<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.YandexLockboxProvider\">\nYandexLockboxProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>YandexLockbox configures this store to sync secrets using Yandex Lockbox provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>gitlab<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.GitlabProvider\">\nGitlabProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>GitLab configures this store to sync secrets using GitLab 
Variables provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>alibaba<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.AlibabaProvider\">\nAlibabaProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Alibaba configures this store to sync secrets using Alibaba Cloud provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>onepassword<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.OnePasswordProvider\">\nOnePasswordProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>OnePassword configures this store to sync secrets using the 1Password Cloud provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>webhook<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.WebhookProvider\">\nWebhookProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Webhook configures this store to sync secrets using a generic templated webhook<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kubernetes<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.KubernetesProvider\">\nKubernetesProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Kubernetes configures this store to sync secrets using a Kubernetes cluster provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>fake<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.FakeProvider\">\nFakeProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Fake configures a store with static key\/value pairs<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>senhasegura<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SenhaseguraProvider\">\nSenhaseguraProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Senhasegura configures this store to sync secrets using senhasegura provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>scaleway<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ScalewayProvider\">\nScalewayProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Scaleway<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>doppler<\/code><\/br>\n<em>\n<a 
href=\"#external-secrets.io\/v1beta1.DopplerProvider\">\nDopplerProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Doppler configures this store to sync secrets using the Doppler provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>previder<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.PreviderProvider\">\nPreviderProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Previder configures this store to sync secrets using the Previder provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>onboardbase<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.OnboardbaseProvider\">\nOnboardbaseProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Onboardbase configures this store to sync secrets using the Onboardbase provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>keepersecurity<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.KeeperSecurityProvider\">\nKeeperSecurityProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>KeeperSecurity configures this store to sync secrets using the KeeperSecurity provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>conjur<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ConjurProvider\">\nConjurProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Conjur configures this store to sync secrets using conjur provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>delinea<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.DelineaProvider\">\nDelineaProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Delinea DevOps Secrets Vault\n<a href=\"https:\/\/docs.delinea.com\/online-help\/products\/devops-secrets-vault\/current\">https:\/\/docs.delinea.com\/online-help\/products\/devops-secrets-vault\/current<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretserver<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretServerProvider\">\nSecretServerProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecretServer configures this 
store to sync secrets using SecretServer provider\n<a href=\"https:\/\/docs.delinea.com\/online-help\/secret-server\/start.htm\">https:\/\/docs.delinea.com\/online-help\/secret-server\/start.htm<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>chef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ChefProvider\">\nChefProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Chef configures this store to sync secrets with chef server<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pulumi<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.PulumiProvider\">\nPulumiProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Pulumi configures this store to sync secrets using the Pulumi provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>fortanix<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.FortanixProvider\">\nFortanixProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Fortanix configures this store to sync secrets using the Fortanix provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>passworddepot<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.PasswordDepotProvider\">\nPasswordDepotProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>passbolt<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.PassboltProvider\">\nPassboltProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>device42<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.Device42Provider\">\nDevice42Provider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Device42 configures this store to sync secrets using the Device42 provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>infisical<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.InfisicalProvider\">\nInfisicalProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Infisical configures this store to sync secrets using the Infisical 
provider<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>beyondtrust<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.BeyondtrustProvider\">\nBeyondtrustProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Beyondtrust configures this store to sync secrets using Password Safe provider.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SecretStoreRef\">SecretStoreRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretSpec\">ExternalSecretSpec<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.StoreGeneratorSourceRef\">StoreGeneratorSourceRef<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.StoreSourceRef\">StoreSourceRef<\/a>)\n<\/p>\n<p>\n<p>SecretStoreRef defines which SecretStore to fetch the ExternalSecret data from.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the SecretStore resource<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Kind of the SecretStore resource (SecretStore or ClusterSecretStore)\nDefaults to <code>SecretStore<\/code><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SecretStoreRetrySettings\">SecretStoreRetrySettings\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreSpec\">SecretStoreSpec<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>maxRetries<\/code><\/br>\n<em>\nint32\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retryInterval<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SecretStoreSpec\">SecretStoreSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.ClusterSecretStore\">ClusterSecretStore<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.SecretStore\">SecretStore<\/a>)\n<\/p>\n<p>\n<p>SecretStoreSpec defines the desired state of SecretStore.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>controller<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to select the correct ESO controller (think: ingress.ingressClassName)\nThe ESO controller is instantiated with a specific controller name and filters ES based on this property<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>provider<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">\nSecretStoreProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Used to configure the provider. Only one provider may be set<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retrySettings<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreRetrySettings\">\nSecretStoreRetrySettings\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to configure HTTP retries on failure<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>refreshInterval<\/code><\/br>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to configure store refresh interval in seconds. Empty or 0 will default to the controller config.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>conditions<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterSecretStoreCondition\">\n[]ClusterSecretStoreCondition\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used to constrain a ClusterSecretStore to specific namespaces. 
Relevant only to ClusterSecretStore<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SecretStoreStatus\">SecretStoreStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ClusterSecretStore\">ClusterSecretStore<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.SecretStore\">SecretStore<\/a>)\n<\/p>\n<p>\n<p>SecretStoreStatus defines the observed state of the SecretStore.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>conditions<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreStatusCondition\">\n[]SecretStoreStatusCondition\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>capabilities<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreCapabilities\">\nSecretStoreCapabilities\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SecretStoreStatusCondition\">SecretStoreStatusCondition\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreStatus\">SecretStoreStatus<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>type<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreConditionType\">\nSecretStoreConditionType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#conditionstatus-v1-core\">\nKubernetes 
core\/v1.ConditionStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>reason<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>message<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>lastTransitionTime<\/code><\/br>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.25\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SecretsClient\">SecretsClient\n<\/h3>\n<p>\n<p>SecretsClient provides access to secrets.<\/p>\n<\/p>\n<h3 id=\"external-secrets.io\/v1beta1.SecretsManager\">SecretsManager\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.AWSProvider\">AWSProvider<\/a>)\n<\/p>\n<p>\n<p>SecretsManager defines how the provider behaves when interacting with AWS\nSecretsManager. Some of these settings are only applicable to controlling how\nsecrets are deleted, and hence only apply to PushSecret (and only when\ndeletionPolicy is set to Delete).<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>forceDeleteWithoutRecovery<\/code><\/br>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specifies whether to delete the secret without any recovery window. 
You\ncan&rsquo;t use both this parameter and RecoveryWindowInDays in the same call.\nIf you don&rsquo;t use either, then by default Secrets Manager uses a 30 day\nrecovery window.\nsee: <a href=\"https:\/\/docs.aws.amazon.com\/secretsmanager\/latest\/apireference\/API_DeleteSecret.html#SecretsManager-DeleteSecret-request-ForceDeleteWithoutRecovery\">https:\/\/docs.aws.amazon.com\/secretsmanager\/latest\/apireference\/API_DeleteSecret.html#SecretsManager-DeleteSecret-request-ForceDeleteWithoutRecovery<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>recoveryWindowInDays<\/code><\/br>\n<em>\nint64\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The number of days from 7 to 30 that Secrets Manager waits before\npermanently deleting the secret. You can&rsquo;t use both this parameter and\nForceDeleteWithoutRecovery in the same call. If you don&rsquo;t use either,\nthen by default Secrets Manager uses a 30 day recovery window.\nsee: <a href=\"https:\/\/docs.aws.amazon.com\/secretsmanager\/latest\/apireference\/API_DeleteSecret.html#SecretsManager-DeleteSecret-request-RecoveryWindowInDays\">https:\/\/docs.aws.amazon.com\/secretsmanager\/latest\/apireference\/API_DeleteSecret.html#SecretsManager-DeleteSecret-request-RecoveryWindowInDays<\/a><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SenhaseguraAuth\">SenhaseguraAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SenhaseguraProvider\">SenhaseguraProvider<\/a>)\n<\/p>\n<p>\n<p>SenhaseguraAuth tells the controller how to do auth in senhasegura.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>clientId<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clientSecretSecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets 
meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SenhaseguraModuleType\">SenhaseguraModuleType\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SenhaseguraProvider\">SenhaseguraProvider<\/a>)\n<\/p>\n<p>\n<p>SenhaseguraModuleType enum defines senhasegura target module to fetch secrets<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;DSM&#34;<\/p><\/td>\n<td><pre><code>\tSenhaseguraModuleDSM is the senhasegura DevOps Secrets Management module\nsee: https:\/\/senhasegura.com\/devops\n<\/code><\/pre>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.SenhaseguraProvider\">SenhaseguraProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>SenhaseguraProvider setup a store to sync secrets with senhasegura.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>url<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>URL of senhasegura<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>module<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SenhaseguraModuleType\">\nSenhaseguraModuleType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Module defines which senhasegura module should be used to get secrets<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SenhaseguraAuth\">\nSenhaseguraAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth defines parameters to authenticate in senhasegura<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ignoreSslCertificate<\/code><\/br>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>IgnoreSslCertificate defines if SSL certificate must be ignored<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"external-secrets.io\/v1beta1.StoreGeneratorSourceRef\">StoreGeneratorSourceRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretDataFromRemoteRef\">ExternalSecretDataFromRemoteRef<\/a>)\n<\/p>\n<p>\n<p>StoreGeneratorSourceRef allows you to override the source\nfrom which the secret will be pulled.\nYou can define at most one property.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>storeRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreRef\">\nSecretStoreRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>generatorRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.GeneratorRef\">\nGeneratorRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>GeneratorRef points to a generator custom resource.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.StoreSourceRef\">StoreSourceRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretData\">ExternalSecretData<\/a>)\n<\/p>\n<p>\n<p>StoreSourceRef allows you to override the SecretStore source\nfrom which the secret will be pulled.\nYou can define at most one property.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>storeRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreRef\">\nSecretStoreRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>generatorRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.GeneratorRef\">\nGeneratorRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>GeneratorRef points to a generator custom resource.<\/p>\n<p>Deprecated: The generatorRef is not implemented in .data[].\nThis will be removed with v1.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"external-secrets.io\/v1beta1.Tag\">Tag\n<\/h3>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>key<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.TemplateEngineVersion\">TemplateEngineVersion\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretTemplate\">ExternalSecretTemplate<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;v1&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;v2&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.TemplateFrom\">TemplateFrom\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.ExternalSecretTemplate\">ExternalSecretTemplate<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>configMap<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.TemplateRef\">\nTemplateRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secret<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.TemplateRef\">\nTemplateRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>target<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.TemplateTarget\">\nTemplateTarget\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>literal<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.TemplateMergePolicy\">TemplateMergePolicy\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.ExternalSecretTemplate\">ExternalSecretTemplate<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;Merge&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;Replace&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.TemplateRef\">TemplateRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.TemplateFrom\">TemplateFrom<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The name of the ConfigMap\/Secret resource<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>items<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.TemplateRefItem\">\n[]TemplateRefItem\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>A list of keys in the ConfigMap\/Secret to use as templates for Secret data<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.TemplateRefItem\">TemplateRefItem\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.TemplateRef\">TemplateRef<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>key<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>A key in the ConfigMap\/Secret<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>templateAs<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.TemplateScope\">\nTemplateScope\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.TemplateScope\">TemplateScope\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.TemplateRefItem\">TemplateRefItem<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;KeysAndValues&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;Values&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.TemplateTarget\">TemplateTarget\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.TemplateFrom\">TemplateFrom<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;Annotations&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;Data&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;Labels&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.TokenAuth\">TokenAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.KubernetesAuth\">KubernetesAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>bearerToken<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.UniversalAuthCredentials\">UniversalAuthCredentials\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.InfisicalAuth\">InfisicalAuth<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>clientId<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets 
meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>clientSecret<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.ValidationResult\">ValidationResult\n(<code>byte<\/code> alias)<\/p><\/h3>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>2<\/p><\/td>\n<td><p>Error indicates that there is a misconfiguration.<\/p>\n<\/td>\n<\/tr><tr><td><p>0<\/p><\/td>\n<td><p>Ready indicates that the client is configured correctly\nand can be used.<\/p>\n<\/td>\n<\/tr><tr><td><p>1<\/p><\/td>\n<td><p>Unknown indicates that the client can be used\nbut information is missing and it can not be validated.<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultAppRole\">VaultAppRole\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAuth\">VaultAuth<\/a>)\n<\/p>\n<p>\n<p>VaultAppRole authenticates with Vault using the App Role auth mechanism,\nwith the role and secret stored in a Kubernetes Secret resource.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>path<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Path where the App Role authentication backend is mounted\nin Vault, e.g: &ldquo;approle&rdquo;<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>roleId<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>RoleID configured in the App Role authentication backend when setting\nup the authentication backend in Vault.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>roleRef<\/code><\/br>\n<em>\n<a 
href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Reference to a key in a Secret that contains the App Role ID used\nto authenticate with Vault.\nThe <code>key<\/code> field must be specified and denotes which entry within the Secret\nresource is used as the app role id.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Reference to a key in a Secret that contains the App Role secret used\nto authenticate with Vault.\nThe <code>key<\/code> field must be specified and denotes which entry within the Secret\nresource is used as the app role secret.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultAuth\">VaultAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultProvider\">VaultProvider<\/a>)\n<\/p>\n<p>\n<p>VaultAuth is the configuration used to authenticate with a Vault server.\nOnly one of <code>tokenSecretRef<\/code>, <code>appRole<\/code>,  <code>kubernetes<\/code>, <code>ldap<\/code>, <code>userPass<\/code>, <code>jwt<\/code> or <code>cert<\/code>\ncan be specified. A namespace to authenticate against can optionally be specified.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>namespace<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Name of the vault namespace to authenticate to. This can be different than the namespace your secret is in.\nNamespaces is a set of features within Vault Enterprise that allows\nVault environments to support Secure Multi-tenancy. 
e.g: &ldquo;ns1&rdquo;.\nMore about namespaces can be found here <a href=\"https:\/\/www.vaultproject.io\/docs\/enterprise\/namespaces\">https:\/\/www.vaultproject.io\/docs\/enterprise\/namespaces<\/a>\nThis will default to Vault.Namespace field if set, or empty otherwise<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tokenSecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TokenSecretRef authenticates with Vault by presenting a token.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>appRole<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAppRole\">\nVaultAppRole\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>AppRole authenticates with Vault using the App Role auth mechanism,\nwith the role and secret stored in a Kubernetes Secret resource.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kubernetes<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultKubernetesAuth\">\nVaultKubernetesAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Kubernetes authenticates with Vault by passing the ServiceAccount\ntoken stored in the named Secret resource to the Vault server.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ldap<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultLdapAuth\">\nVaultLdapAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Ldap authenticates with Vault by passing username\/password pair using\nthe LDAP authentication method<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>jwt<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultJwtAuth\">\nVaultJwtAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Jwt authenticates with Vault by passing role and JWT token using the\nJWT\/OIDC authentication method<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>cert<\/code><\/br>\n<em>\n<a 
href=\"#external-secrets.io\/v1beta1.VaultCertAuth\">\nVaultCertAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Cert authenticates with Vault using the TLS certificate authentication method,\nby passing a client certificate, private key and CA certificate.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>iam<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultIamAuth\">\nVaultIamAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Iam authenticates with Vault using the AWS IAM authentication method,\nby passing a special AWS request signed with AWS IAM credentials.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>userPass<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultUserPassAuth\">\nVaultUserPassAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>UserPass authenticates with Vault by passing a username\/password pair<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultAwsAuth\">VaultAwsAuth\n<\/h3>\n<p>\n<p>VaultAwsAuth tells the controller how to do authentication with AWS.\nOnly one of secretRef or jwt can be specified.\nIf none is specified, the controller will try to load credentials from its own service account, assuming it is IRSA enabled.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAwsAuthSecretRef\">\nVaultAwsAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>jwt<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAwsJWTAuth\">\nVaultAwsJWTAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultAwsAuthSecretRef\">VaultAwsAuthSecretRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAwsAuth\">VaultAwsAuth<\/a>, \n<a 
href=\"#external-secrets.io\/v1beta1.VaultIamAuth\">VaultIamAuth<\/a>)\n<\/p>\n<p>\n<p>VaultAWSAuthSecretRef holds secret references for AWS credentials\nboth AccessKeyID and SecretAccessKey must be defined in order to properly authenticate.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>accessKeyIDSecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The AccessKeyID is used for authentication<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretAccessKeySecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The SecretAccessKey is used for authentication<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sessionTokenSecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The SessionToken used for authentication\nThis must be defined if AccessKeyID and SecretAccessKey are temporary credentials\nsee: <a href=\"https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_credentials_temp_use-resources.html\">https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_credentials_temp_use-resources.html<\/a><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultAwsJWTAuth\">VaultAwsJWTAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAwsAuth\">VaultAwsAuth<\/a>, \n<a href=\"#external-secrets.io\/v1beta1.VaultIamAuth\">VaultIamAuth<\/a>)\n<\/p>\n<p>\n<p>Authenticate against AWS using service account 
tokens.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>serviceAccountRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#ServiceAccountSelector\">\nExternal Secrets meta\/v1.ServiceAccountSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultCertAuth\">VaultCertAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAuth\">VaultAuth<\/a>)\n<\/p>\n<p>\n<p>VaultCertAuth authenticates with Vault using the TLS certificate authentication\nmethod, with the client certificate and private key stored in a Kubernetes Secret resource.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>clientCert<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ClientCert is a certificate to authenticate using the Cert Vault\nauthentication method<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>SecretRef to a key in a Secret resource containing the client private key used to\nauthenticate with Vault using the Cert authentication method<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultClientTLS\">VaultClientTLS\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultProvider\">VaultProvider<\/a>)\n<\/p>\n<p>\n<p>VaultClientTLS is the configuration used for client side related TLS communication,\nwhen the Vault server requires mutual 
authentication.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>certSecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>CertSecretRef is a certificate added to the transport layer\nwhen communicating with the Vault server.\nIf no key for the Secret is specified, external-secret will default to &lsquo;tls.crt&rsquo;.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>keySecretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>KeySecretRef to a key in a Secret resource containing client private key\nadded to the transport layer when communicating with the Vault server.\nIf no key for the Secret is specified, external-secret will default to &lsquo;tls.key&rsquo;.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultIamAuth\">VaultIamAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAuth\">VaultAuth<\/a>)\n<\/p>\n<p>\n<p>VaultIamAuth authenticates with Vault using the Vault&rsquo;s AWS IAM authentication method. 
Refer: <a href=\"https:\/\/developer.hashicorp.com\/vault\/docs\/auth\/aws\">https:\/\/developer.hashicorp.com\/vault\/docs\/auth\/aws<\/a><\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>path<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Path where the AWS auth method is enabled in Vault, e.g: &ldquo;aws&rdquo;<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>region<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>AWS region<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>role<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>This is the AWS role to be assumed before talking to Vault<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>vaultRole<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Vault Role. In Vault, a role describes an identity with a set of permissions, groups, or policies you want to attach to a user of the secrets engine<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>externalID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>AWS External ID set on assumed IAM roles<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>vaultAwsIamServerID<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>X-Vault-AWS-IAM-Server-ID is an additional header used by the Vault IAM auth method to mitigate against different types of replay attacks. 
More details here: <a href=\"https:\/\/developer.hashicorp.com\/vault\/docs\/auth\/aws\">https:\/\/developer.hashicorp.com\/vault\/docs\/auth\/aws<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAwsAuthSecretRef\">\nVaultAwsAuthSecretRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specify credentials in a Secret object<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>jwt<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAwsJWTAuth\">\nVaultAwsJWTAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specify a service account with IRSA enabled<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultJwtAuth\">VaultJwtAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAuth\">VaultAuth<\/a>)\n<\/p>\n<p>\n<p>VaultJwtAuth authenticates with Vault using the JWT\/OIDC authentication\nmethod, with the role name and a token stored in a Kubernetes Secret resource or\na Kubernetes service account token retrieved via <code>TokenRequest<\/code>.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>path<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Path where the JWT authentication backend is mounted\nin Vault, e.g: &ldquo;jwt&rdquo;<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>role<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Role is a JWT role to authenticate using the JWT\/OIDC Vault\nauthentication method<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional SecretRef that refers to a key in a Secret resource containing JWT token to\nauthenticate with Vault using the 
JWT\/OIDC authentication method.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kubernetesServiceAccountToken<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultKubernetesServiceAccountTokenAuth\">\nVaultKubernetesServiceAccountTokenAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional ServiceAccountToken specifies the Kubernetes service account for which to request\na token for with the <code>TokenRequest<\/code> API.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultKVStoreVersion\">VaultKVStoreVersion\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultProvider\">VaultProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;v1&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;v2&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultKubernetesAuth\">VaultKubernetesAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAuth\">VaultAuth<\/a>)\n<\/p>\n<p>\n<p>Authenticate against Vault using a Kubernetes ServiceAccount token stored in\na Secret.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>mountPath<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Path where the Kubernetes authentication backend is mounted in Vault, e.g:\n&ldquo;kubernetes&rdquo;<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#ServiceAccountSelector\">\nExternal Secrets meta\/v1.ServiceAccountSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional service account field containing the name of a kubernetes ServiceAccount.\nIf the service account is specified, the service account secret token JWT 
will be used\nfor authenticating with Vault. If the service account selector is not supplied,\nthe secretRef will be used instead.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional secret field containing a Kubernetes ServiceAccount JWT used\nfor authenticating with Vault. If a name is specified without a key,\n<code>token<\/code> is the default. If one is not specified, the one bound to\nthe controller will be used.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>role<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>A required field containing the Vault Role to assume. A Role binds a\nKubernetes ServiceAccount with a set of Vault policies.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultKubernetesServiceAccountTokenAuth\">VaultKubernetesServiceAccountTokenAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultJwtAuth\">VaultJwtAuth<\/a>)\n<\/p>\n<p>\n<p>VaultKubernetesServiceAccountTokenAuth authenticates with Vault using a temporary\nKubernetes service account token retrieved by the <code>TokenRequest<\/code> API.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>serviceAccountRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#ServiceAccountSelector\">\nExternal Secrets meta\/v1.ServiceAccountSelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Service account field containing the name of a kubernetes ServiceAccount.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>audiences<\/code><\/br>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional audiences field that will be used to request a temporary Kubernetes 
service\naccount token for the service account referenced by <code>serviceAccountRef<\/code>.\nDefaults to a single audience <code>vault<\/code> if not specified.\nDeprecated: use serviceAccountRef.Audiences instead<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>expirationSeconds<\/code><\/br>\n<em>\nint64\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional expiration time in seconds that will be used to request a temporary\nKubernetes service account token for the service account referenced by\n<code>serviceAccountRef<\/code>.\nDeprecated: this will be removed in the future.\nDefaults to 10 minutes.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultLdapAuth\">VaultLdapAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAuth\">VaultAuth<\/a>)\n<\/p>\n<p>\n<p>VaultLdapAuth authenticates with Vault using the LDAP authentication method,\nwith the username and password stored in a Kubernetes Secret resource.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>path<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Path where the LDAP authentication backend is mounted\nin Vault, e.g: &ldquo;ldap&rdquo;<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>username<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Username is an LDAP user name used to authenticate using the LDAP Vault\nauthentication method<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>SecretRef to a key in a Secret resource containing the password for the LDAP\nuser used to authenticate with Vault using the LDAP authentication\nmethod<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"external-secrets.io\/v1beta1.VaultProvider\">VaultProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>Configures a store to sync secrets using a HashiCorp Vault\nKV backend.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>auth<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAuth\">\nVaultAuth\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Auth configures how secret-manager authenticates with the Vault server.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>server<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Server is the connection address for the Vault server, e.g: &ldquo;<a href=\"https:\/\/vault.example.com:8200&quot;\">https:\/\/vault.example.com:8200&rdquo;<\/a>.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>path<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Path is the mount path of the Vault KV backend endpoint, e.g:\n&ldquo;secret&rdquo;. The v2 KV secret engine version specific &ldquo;\/data&rdquo; path suffix\nfor fetching secrets from Vault is optional and will be appended\nif not present in the specified path.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>version<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultKVStoreVersion\">\nVaultKVStoreVersion\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Version is the Vault KV secret engine version. This can be either &ldquo;v1&rdquo; or\n&ldquo;v2&rdquo;. Version defaults to &ldquo;v2&rdquo;.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>namespace<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Name of the Vault namespace. Namespaces is a set of features within Vault Enterprise that allows\nVault environments to support Secure Multi-tenancy. 
e.g: &ldquo;ns1&rdquo;.\nMore about namespaces can be found here <a href=\"https:\/\/www.vaultproject.io\/docs\/enterprise\/namespaces\">https:\/\/www.vaultproject.io\/docs\/enterprise\/namespaces<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caBundle<\/code><\/br>\n<em>\n[]byte\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PEM encoded CA bundle used to validate Vault server certificate. Only used\nif the Server URL is using HTTPS protocol. This parameter is ignored for\nplain HTTP protocol connection. If not set the system root certificates\nare used to validate the TLS connection.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tls<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.VaultClientTLS\">\nVaultClientTLS\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The configuration used for client side related TLS communication, when the Vault server\nrequires mutual authentication. Only used if the Server URL is using HTTPS protocol.\nThis parameter is ignored for plain HTTP protocol connection.\nIt&rsquo;s worth noting this configuration is different from the &ldquo;TLS certificates auth method&rdquo;,\nwhich is available under the <code>auth.cert<\/code> section.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caProvider<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.CAProvider\">\nCAProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The provider for the CA bundle to use to validate Vault server certificate.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>readYourWrites<\/code><\/br>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ReadYourWrites ensures isolated read-after-write semantics by\nproviding discovered cluster replication states in each request.\nMore information about eventual consistency in Vault can be found here\n<a 
href=\"https:\/\/www.vaultproject.io\/docs\/enterprise\/consistency\">https:\/\/www.vaultproject.io\/docs\/enterprise\/consistency<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>forwardInconsistent<\/code><\/br>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ForwardInconsistent tells Vault to forward read-after-write requests to the Vault\nleader instead of simply retrying within a loop. This can increase performance if\nthe option is enabled serverside.\n<a href=\"https:\/\/www.vaultproject.io\/docs\/configuration\/replication#allow_forwarding_via_header\">https:\/\/www.vaultproject.io\/docs\/configuration\/replication#allow_forwarding_via_header<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>headers<\/code><\/br>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Headers to be added in Vault request<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.VaultUserPassAuth\">VaultUserPassAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.VaultAuth\">VaultAuth<\/a>)\n<\/p>\n<p>\n<p>VaultUserPassAuth authenticates with Vault using UserPass authentication method,\nwith the username and password stored in a Kubernetes Secret resource.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>path<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Path where the UserPassword authentication backend is mounted\nin Vault, e.g: &ldquo;user&rdquo;<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>username<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Username is a user name used to authenticate using the UserPass Vault\nauthentication method<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>SecretRef to a key 
in a Secret resource containing password for the\nuser used to authenticate with Vault using the UserPass authentication\nmethod<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.WebhookCAProvider\">WebhookCAProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.WebhookProvider\">WebhookProvider<\/a>)\n<\/p>\n<p>\n<p>Defines a location to fetch the cert for the webhook provider from.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>type<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.WebhookCAProviderType\">\nWebhookCAProviderType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The type of provider to use such as &ldquo;Secret&rdquo;, or &ldquo;ConfigMap&rdquo;.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>name<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The name of the object located at the provider type.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>key<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The key where the CA certificate can be found in the Secret or ConfigMap.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>namespace<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The namespace the Provider type is in.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.WebhookCAProviderType\">WebhookCAProviderType\n(<code>string<\/code> alias)<\/p><\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.WebhookCAProvider\">WebhookCAProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;ConfigMap&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;Secret&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.WebhookProvider\">WebhookProvider\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
href=\"#external-secrets.io\/v1beta1.SecretStoreProvider\">SecretStoreProvider<\/a>)\n<\/p>\n<p>\n<p>WebhookProvider configures a store to sync secrets by calling a generic webhook endpoint.<\/p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>method<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Webhook Method<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>url<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Webhook url to call<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>headers<\/code><\/br>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Headers<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>body<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Body<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Timeout<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>result<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.WebhookResult\">\nWebhookResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Result formatting<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secrets<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.WebhookSecret\">\n[]WebhookSecret\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Secrets to fill in templates.\nThese secrets will be passed to the templating function as key\/value pairs under the given name<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caBundle<\/code><\/br>\n<em>\n[]byte\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PEM encoded CA bundle used to validate the webhook server certificate. Only used\nif the Server URL is using HTTPS protocol. This parameter is ignored for\nplain HTTP protocol connection. 
If not set the system root certificates\nare used to validate the TLS connection.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>caProvider<\/code><\/br>\n<em>\n<a href=\"#external-secrets.io\/v1beta1.WebhookCAProvider\">\nWebhookCAProvider\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The provider for the CA bundle to use to validate webhook server certificate.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.WebhookResult\">WebhookResult\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.WebhookProvider\">WebhookProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>jsonPath<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Json path of return value<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.WebhookSecret\">WebhookSecret\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a href=\"#external-secrets.io\/v1beta1.WebhookProvider\">WebhookProvider<\/a>)\n<\/p>\n<p>\n<\/p>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><\/br>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of this secret in templates<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretRef<\/code><\/br>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/github.com\/external-secrets\/external-secrets\/apis\/meta\/v1#SecretKeySelector\">\nExternal Secrets meta\/v1.SecretKeySelector\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Secret ref to fill in credentials<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"external-secrets.io\/v1beta1.YandexCertificateManagerAuth\">YandexCertificateManagerAuth\n<\/h3>\n<p>\n(<em>Appears on:<\/em>\n<a 
*(Appears on: [YandexCertificateManagerProvider](#external-secrets.io/v1beta1.YandexCertificateManagerProvider))*

| Field | Type | Description |
|-------|------|-------------|
| `authorizedKeySecretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | *(Optional)* The authorized key used for authentication |

### YandexCertificateManagerCAProvider

*(Appears on: [YandexCertificateManagerProvider](#external-secrets.io/v1beta1.YandexCertificateManagerProvider))*

| Field | Type | Description |
|-------|------|-------------|
| `certSecretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | |

### YandexCertificateManagerProvider

*(Appears on: [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider))*

YandexCertificateManagerProvider configures a store to sync secrets using the Yandex Certificate Manager provider.

| Field | Type | Description |
|-------|------|-------------|
| `apiEndpoint` | string | *(Optional)* Yandex.Cloud API endpoint (e.g. `api.cloud.yandex.net:443`) |
| `auth` | [YandexCertificateManagerAuth](#external-secrets.io/v1beta1.YandexCertificateManagerAuth) | Auth defines the information necessary to authenticate against Yandex Certificate Manager |
| `caProvider` | [YandexCertificateManagerCAProvider](#external-secrets.io/v1beta1.YandexCertificateManagerCAProvider) | *(Optional)* The provider for the CA bundle to use to validate the Yandex.Cloud server certificate. |

### YandexLockboxAuth

*(Appears on: [YandexLockboxProvider](#external-secrets.io/v1beta1.YandexLockboxProvider))*

| Field | Type | Description |
|-------|------|-------------|
| `authorizedKeySecretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | *(Optional)* The authorized key used for authentication |

### YandexLockboxCAProvider

*(Appears on: [YandexLockboxProvider](#external-secrets.io/v1beta1.YandexLockboxProvider))*

| Field | Type | Description |
|-------|------|-------------|
| `certSecretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | |

### YandexLockboxProvider

*(Appears on: [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider))*

YandexLockboxProvider configures a store to sync secrets using the Yandex Lockbox provider.

| Field | Type | Description |
|-------|------|-------------|
| `apiEndpoint` | string | *(Optional)* Yandex.Cloud API endpoint (e.g. `api.cloud.yandex.net:443`) |
| `auth` | [YandexLockboxAuth](#external-secrets.io/v1beta1.YandexLockboxAuth) | Auth defines the information necessary to authenticate against Yandex Lockbox |
| `caProvider` | [YandexLockboxCAProvider](#external-secrets.io/v1beta1.YandexLockboxCAProvider) | *(Optional)* The provider for the CA bundle to use to validate the Yandex.Cloud server certificate. |

***

*Generated with `gen-crd-api-reference-docs`.*

***

Packages:

- [external-secrets.io/v1beta1](#external-secrets.io%2fv1beta1)

## external-secrets.io/v1beta1

Package v1beta1 contains resources for external secrets.

Resource Types:

### AWSAuth

*(Appears on: [AWSProvider](#external-secrets.io/v1beta1.AWSProvider))*

AWSAuth tells the controller how to do authentication with AWS. Only one of `secretRef` or `jwt` can be specified; if none is specified, the controller will load credentials using the AWS SDK defaults.
| Field | Type | Description |
|-------|------|-------------|
| `secretRef` | [AWSAuthSecretRef](#external-secrets.io/v1beta1.AWSAuthSecretRef) | *(Optional)* |
| `jwt` | [AWSJWTAuth](#external-secrets.io/v1beta1.AWSJWTAuth) | *(Optional)* |

### AWSAuthSecretRef

*(Appears on: [AWSAuth](#external-secrets.io/v1beta1.AWSAuth))*

AWSAuthSecretRef holds secret references for AWS credentials; both AccessKeyID and SecretAccessKey must be defined in order to properly authenticate.

| Field | Type | Description |
|-------|------|-------------|
| `accessKeyIDSecretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | The AccessKeyID is used for authentication |
| `secretAccessKeySecretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | The SecretAccessKey is used for authentication |
| `sessionTokenSecretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | The SessionToken used for authentication. This must be defined if AccessKeyID and SecretAccessKey are temporary credentials; see <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html> |

### AWSJWTAuth

*(Appears on: [AWSAuth](#external-secrets.io/v1beta1.AWSAuth))*

Authenticate against AWS using service account tokens.

| Field | Type | Description |
|-------|------|-------------|
| `serviceAccountRef` | [meta/v1.ServiceAccountSelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#ServiceAccountSelector) | |

### AWSProvider

*(Appears on: [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider))*

AWSProvider configures a store to sync secrets with AWS.

| Field | Type | Description |
|-------|------|-------------|
| `service` | [AWSServiceType](#external-secrets.io/v1beta1.AWSServiceType) | Service defines which service should be used to fetch the secrets |
| `auth` | [AWSAuth](#external-secrets.io/v1beta1.AWSAuth) | *(Optional)* Auth defines the information necessary to authenticate against AWS; if not set, the AWS SDK will infer credentials from your environment, see <https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials> |
| `role` | string | *(Optional)* Role is a Role ARN which the provider will assume |
| `region` | string | AWS Region to be used for the provider |
| `additionalRoles` | []string | *(Optional)* AdditionalRoles is a chained list of Role ARNs which the provider will sequentially assume before assuming the Role |
| `externalID` | string | AWS External ID set on assumed IAM roles |
| `sessionTags` | []\*Tag | *(Optional)* AWS STS assume role session tags |
| `secretsManager` | [SecretsManager](#external-secrets.io/v1beta1.SecretsManager) | *(Optional)* SecretsManager defines how the provider behaves when interacting with AWS SecretsManager |
| `transitiveTagKeys` | []\*string | *(Optional)* AWS STS assume role transitive session tags. Required when multiple rules are used with the provider |
| `prefix` | string | *(Optional)* Prefix adds a prefix to all retrieved values. |

### AWSServiceType (`string` alias)

*(Appears on: [AWSProvider](#external-secrets.io/v1beta1.AWSProvider))*

AWSServiceType is an enum that defines the service API that is used to fetch the secrets.

| Value | Description |
|-------|-------------|
| `"ParameterStore"` | AWSServiceParameterStore is the AWS SystemsManager ParameterStore service; see <https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html> |
| `"SecretsManager"` | AWSServiceSecretsManager is the AWS SecretsManager service; see <https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html> |

### AkeylessAuth

*(Appears on: [AkeylessProvider](#external-secrets.io/v1beta1.AkeylessProvider))*

| Field | Type | Description |
|-------|------|-------------|
| `secretRef` | [AkeylessAuthSecretRef](#external-secrets.io/v1beta1.AkeylessAuthSecretRef) | *(Optional)* Reference to a Secret that contains the details to authenticate with Akeyless. |
| `kubernetesAuth` | [AkeylessKubernetesAuth](#external-secrets.io/v1beta1.AkeylessKubernetesAuth) | *(Optional)* Kubernetes authenticates with Akeyless by passing the ServiceAccount token stored in the named Secret resource. |

### AkeylessAuthSecretRef

*(Appears on: [AkeylessAuth](#external-secrets.io/v1beta1.AkeylessAuth))*

AkeylessAuthSecretRef: AKEYLESS_ACCESS_TYPE_PARAM is AZURE_OBJ_ID or GCP_AUDIENCE or ACCESS_KEY or KUB_CONFIG_NAME.

| Field | Type | Description |
|-------|------|-------------|
| `accessID` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | The SecretAccessID is used for authentication |
| `accessType` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | |
| `accessTypeParam` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | |

### AkeylessKubernetesAuth

*(Appears on: [AkeylessAuth](#external-secrets.io/v1beta1.AkeylessAuth))*

Authenticate with a stored Kubernetes ServiceAccount token.

| Field | Type | Description |
|-------|------|-------------|
| `accessID` | string | The Akeyless Kubernetes auth-method access id |
| `k8sConfName` | string | Kubernetes-auth configuration name in the Akeyless Gateway |
| `serviceAccountRef` | [meta/v1.ServiceAccountSelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#ServiceAccountSelector) | *(Optional)* Optional service account field containing the name of a Kubernetes ServiceAccount. If the service account is specified, the service account secret token JWT will be used for authenticating with Akeyless. If the service account selector is not supplied, the secretRef will be used instead. |
| `secretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | *(Optional)* Optional secret field containing a Kubernetes ServiceAccount JWT used for authenticating with Akeyless. If a name is specified without a key, `token` is the default. If one is not specified, the one bound to the controller will be used. |
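To make the AkeylessKubernetesAuth fields concrete, here is a minimal sketch of the auth portion of an Akeyless provider spec. This is illustrative only: the access id, configuration name, and ServiceAccount name are placeholders, not values from this reference.

```yaml
# Sketch only — all names below are placeholders.
authSecretRef:
  kubernetesAuth:
    accessID: p-example123            # Akeyless Kubernetes auth-method access id (placeholder)
    k8sConfName: my-gateway-k8s-conf  # auth configuration name in the Akeyless Gateway (placeholder)
    serviceAccountRef:
      name: external-secrets-sa       # ServiceAccount whose token JWT is used to authenticate
```

If `serviceAccountRef` were omitted, the `secretRef` field of AkeylessKubernetesAuth would be consulted instead, as described in the table above.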
### AkeylessProvider

*(Appears on: [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider))*

AkeylessProvider configures a store to sync secrets using Akeyless KV.

| Field | Type | Description |
|-------|------|-------------|
| `akeylessGWApiURL` | string | Akeyless GW API URL from which the secrets are to be fetched. |
| `authSecretRef` | [AkeylessAuth](#external-secrets.io/v1beta1.AkeylessAuth) | Auth configures how the operator authenticates with Akeyless. |
| `caBundle` | []byte | *(Optional)* PEM/base64-encoded CA bundle used to validate the Akeyless Gateway certificate. Only used if the AkeylessGWApiURL URL is using the HTTPS protocol. If not set, the system root certificates are used to validate the TLS connection. |
| `caProvider` | [CAProvider](#external-secrets.io/v1beta1.CAProvider) | *(Optional)* The provider for the CA bundle to use to validate the Akeyless Gateway certificate. |

### AlibabaAuth

*(Appears on: [AlibabaProvider](#external-secrets.io/v1beta1.AlibabaProvider))*

AlibabaAuth contains a secretRef for credentials.

| Field | Type | Description |
|-------|------|-------------|
| `secretRef` | [AlibabaAuthSecretRef](#external-secrets.io/v1beta1.AlibabaAuthSecretRef) | *(Optional)* |
| `rrsa` | [AlibabaRRSAAuth](#external-secrets.io/v1beta1.AlibabaRRSAAuth) | *(Optional)* |

### AlibabaAuthSecretRef

*(Appears on: [AlibabaAuth](#external-secrets.io/v1beta1.AlibabaAuth))*

AlibabaAuthSecretRef holds secret references for Alibaba credentials.

| Field | Type | Description |
|-------|------|-------------|
| `accessKeyIDSecretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | The AccessKeyID is used for authentication |
| `accessKeySecretSecretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | The AccessKeySecret is used for authentication |

### AlibabaProvider

*(Appears on: [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider))*

AlibabaProvider configures a store to sync secrets using the Alibaba Secret Manager provider.

| Field | Type | Description |
|-------|------|-------------|
| `auth` | [AlibabaAuth](#external-secrets.io/v1beta1.AlibabaAuth) | |
| `regionID` | string | Alibaba Region to be used for the provider |

### AlibabaRRSAAuth

*(Appears on: [AlibabaAuth](#external-secrets.io/v1beta1.AlibabaAuth))*

Authenticate against Alibaba using RRSA.

| Field | Type | Description |
|-------|------|-------------|
| `oidcProviderArn` | string | |
| `oidcTokenFilePath` | string | |
| `roleArn` | string | |
| `sessionName` | string | |

### AzureAuthType (`string` alias)

*(Appears on: [AzureKVProvider](#external-secrets.io/v1beta1.AzureKVProvider))*

AuthType describes how to authenticate to the Azure Keyvault. Only one of the following auth types may be specified; if none is specified, the default is ServicePrincipal.

| Value | Description |
|-------|-------------|
| `"ManagedIdentity"` | Using Managed Identity to authenticate. Used with aad-pod-identity installed in the cluster. |
| `"ServicePrincipal"` | Using a service principal to authenticate, which needs a tenantId, a clientId and a clientSecret. |
| `"WorkloadIdentity"` | Using Workload Identity service accounts to authenticate. |

### AzureEnvironmentType (`string` alias)

*(Appears on: [AzureKVProvider](#external-secrets.io/v1beta1.AzureKVProvider))*

AzureEnvironmentType specifies the Azure cloud environment endpoints to use for connecting and authenticating with Azure. By default it points to the public cloud AAD endpoint. The following endpoints are available (also see <https://github.com/Azure/go-autorest/blob/main/autorest/azure/environments.go#L152>): PublicCloud, USGovernmentCloud, ChinaCloud, GermanCloud.
| Value | Description |
|-------|-------------|
| `"ChinaCloud"` | |
| `"GermanCloud"` | |
| `"PublicCloud"` | |
| `"USGovernmentCloud"` | |

### AzureKVAuth

*(Appears on: [AzureKVProvider](#external-secrets.io/v1beta1.AzureKVProvider))*

Configuration used to authenticate with Azure.

| Field | Type | Description |
|-------|------|-------------|
| `clientId` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | *(Optional)* The Azure clientId of the service principal or managed identity used for authentication. |
| `tenantId` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | *(Optional)* The Azure tenantId of the managed identity used for authentication. |
| `clientSecret` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | *(Optional)* The Azure ClientSecret of the service principal used for authentication. |
| `clientCertificate` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | *(Optional)* The Azure ClientCertificate of the service principal used for authentication. |

### AzureKVProvider

*(Appears on: [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider))*

Configures a store to sync secrets using Azure KV.

| Field | Type | Description |
|-------|------|-------------|
| `authType` | [AzureAuthType](#external-secrets.io/v1beta1.AzureAuthType) | *(Optional)* Auth type defines how to authenticate to the keyvault service. Valid values are: "ServicePrincipal" (default), using a service principal (tenantId, clientId, clientSecret); "ManagedIdentity", using Managed Identity assigned to the pod (see aad-pod-identity). |
| `vaultUrl` | string | Vault URL from which the secrets are to be fetched. |
| `tenantId` | string | *(Optional)* TenantID configures the Azure Tenant to send requests to. Required for ServicePrincipal auth type; optional for WorkloadIdentity. |
| `environmentType` | [AzureEnvironmentType](#external-secrets.io/v1beta1.AzureEnvironmentType) | EnvironmentType specifies the Azure cloud environment endpoints to use for connecting and authenticating with Azure. By default it points to the public cloud AAD endpoint. The following endpoints are available (also see <https://github.com/Azure/go-autorest/blob/main/autorest/azure/environments.go#L152>): PublicCloud, USGovernmentCloud, ChinaCloud, GermanCloud. |
| `authSecretRef` | [AzureKVAuth](#external-secrets.io/v1beta1.AzureKVAuth) | *(Optional)* Auth configures how the operator authenticates with Azure. Required for ServicePrincipal auth type; optional for WorkloadIdentity. |
| `serviceAccountRef` | [meta/v1.ServiceAccountSelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#ServiceAccountSelector) | *(Optional)* ServiceAccountRef specifies the service account that should be used when authenticating with WorkloadIdentity. |
| `identityId` | string | *(Optional)* If multiple Managed Identities are assigned to the pod, you can select the one to be used. |

### BeyondTrustProviderSecretRef

*(Appears on: [BeyondtrustAuth](#external-secrets.io/v1beta1.BeyondtrustAuth))*

| Field | Type | Description |
|-------|------|-------------|
| `value` | string | *(Optional)* Value can be specified directly to set a value without using a secret. |
| `secretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | *(Optional)* SecretRef references a key in a secret that will be used as value. |

### BeyondtrustAuth

*(Appears on: [BeyondtrustProvider](#external-secrets.io/v1beta1.BeyondtrustProvider))*

Configures a store to sync secrets using BeyondTrust Password Safe.

| Field | Type | Description |
|-------|------|-------------|
| `apiKey` | [BeyondTrustProviderSecretRef](#external-secrets.io/v1beta1.BeyondTrustProviderSecretRef) | APIKey. If not provided, ClientID/ClientSecret become required. |
| `clientId` | [BeyondTrustProviderSecretRef](#external-secrets.io/v1beta1.BeyondTrustProviderSecretRef) | ClientID is the API OAuth Client ID. |
| `clientSecret` | [BeyondTrustProviderSecretRef](#external-secrets.io/v1beta1.BeyondTrustProviderSecretRef) | ClientSecret is the API OAuth Client Secret. |
| `certificate` | [BeyondTrustProviderSecretRef](#external-secrets.io/v1beta1.BeyondTrustProviderSecretRef) | Certificate (cert.pem) for use when authenticating with an OAuth client Id using a Client Certificate. |
| `certificateKey` | [BeyondTrustProviderSecretRef](#external-secrets.io/v1beta1.BeyondTrustProviderSecretRef) | Certificate private key (key.pem), for use when authenticating with an OAuth client Id. |

### BeyondtrustProvider

*(Appears on: [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider))*

| Field | Type | Description |
|-------|------|-------------|
| `auth` | [BeyondtrustAuth](#external-secrets.io/v1beta1.BeyondtrustAuth) | Auth configures how the operator authenticates with Beyondtrust. |
| `server` | [BeyondtrustServer](#external-secrets.io/v1beta1.BeyondtrustServer) | Server configures how the API server works. |
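As a sketch of how the BeyondtrustProvider fields fit together with the BeyondtrustServer fields documented next, a SecretStore might look like the following. This is an assumption-laden illustration: the store name, API URL, and Secret names are placeholders.

```yaml
# Sketch only — "beyondtrust-store", the apiUrl, and "bt-credentials" are placeholders.
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: beyondtrust-store
spec:
  provider:
    beyondtrust:
      server:
        apiUrl: https://example.ps.beyondtrustcloud.com/BeyondTrust/api/public/v3/
        retrievalType: MANAGED_ACCOUNT  # SECRET (Secrets Safe) or MANAGED_ACCOUNT (Password Safe)
        separator: "/"                  # character separating folder names
        verifyCA: true
        clientTimeOutSeconds: 45        # request timeout; 45 s is the documented default
      auth:
        clientId:
          secretRef:
            name: bt-credentials        # placeholder Secret holding OAuth credentials
            key: clientId
        clientSecret:
          secretRef:
            name: bt-credentials
            key: clientSecret
```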
### BeyondtrustServer

*(Appears on: [BeyondtrustProvider](#external-secrets.io/v1beta1.BeyondtrustProvider))*

Configures a store to sync secrets using BeyondTrust Password Safe.

| Field | Type | Description |
|-------|------|-------------|
| `apiUrl` | string | |
| `retrievalType` | string | The secret retrieval type. SECRET = Secrets Safe (credential, text, file). MANAGED_ACCOUNT = Password Safe account associated with a system. |
| `separator` | string | A character that separates the folder names. |
| `verifyCA` | bool | |
| `clientTimeOutSeconds` | int | Timeout specifies a time limit for requests made by this Client. The timeout includes connection time, any redirects, and reading the response body. Defaults to 45 seconds. |

### BitwardenSecretsManagerAuth

*(Appears on: [BitwardenSecretsManagerProvider](#external-secrets.io/v1beta1.BitwardenSecretsManagerProvider))*

BitwardenSecretsManagerAuth contains the ref to the secret that contains the machine account token.

| Field | Type | Description |
|-------|------|-------------|
| `secretRef` | [BitwardenSecretsManagerSecretRef](#external-secrets.io/v1beta1.BitwardenSecretsManagerSecretRef) | |

### BitwardenSecretsManagerProvider

*(Appears on: [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider))*

BitwardenSecretsManagerProvider configures a store to sync secrets with a Bitwarden Secrets Manager instance.

| Field | Type | Description |
|-------|------|-------------|
| `apiURL` | string | |
| `identityURL` | string | |
| `bitwardenServerSDKURL` | string | |
| `caBundle` | string | *(Optional)* Base64-encoded certificate for the bitwarden server sdk. The sdk MUST run with HTTPS to make sure no MITM attack can be performed. |
| `caProvider` | [CAProvider](#external-secrets.io/v1beta1.CAProvider) | *(Optional)* See <https://external-secrets.io/latest/spec/#external-secrets.io/v1alpha1.CAProvider> |
| `organizationID` | string | OrganizationID determines which organization this secret store manages. |
| `projectID` | string | ProjectID determines which project this secret store manages. |
| `auth` | [BitwardenSecretsManagerAuth](#external-secrets.io/v1beta1.BitwardenSecretsManagerAuth) | Auth configures how secret-manager authenticates with a bitwarden machine account instance. Make sure that the token being used has permissions on the given secret. |

### BitwardenSecretsManagerSecretRef

*(Appears on: [BitwardenSecretsManagerAuth](#external-secrets.io/v1beta1.BitwardenSecretsManagerAuth))*

BitwardenSecretsManagerSecretRef contains the credential ref to the bitwarden instance.

| Field | Type | Description |
|-------|------|-------------|
| `credentials` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | AccessToken used for the bitwarden instance. |

### CAProvider

*(Appears on: [AkeylessProvider](#external-secrets.io/v1beta1.AkeylessProvider), [BitwardenSecretsManagerProvider](#external-secrets.io/v1beta1.BitwardenSecretsManagerProvider), [ConjurProvider](#external-secrets.io/v1beta1.ConjurProvider), [KubernetesServer](#external-secrets.io/v1beta1.KubernetesServer), [VaultProvider](#external-secrets.io/v1beta1.VaultProvider))*

Used to provide custom certificate authority (CA) certificates for a secret store. The CAProvider points to a Secret or ConfigMap resource that contains a PEM-encoded certificate.

| Field | Type | Description |
|-------|------|-------------|
| `type` | [CAProviderType](#external-secrets.io/v1beta1.CAProviderType) | The type of provider to use, such as `Secret` or `ConfigMap`. |
| `name` | string | The name of the object located at the provider type. |
| `key` | string | The key where the CA certificate can be found in the Secret or ConfigMap. |
| `namespace` | string | *(Optional)* The namespace the Provider type is in. Can only be defined when used in a ClusterSecretStore. |
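Instantiating the CAProvider fields above, a `caProvider` block inside a provider spec (for example under `akeyless` or `vault`) could look like this sketch; the ConfigMap name, key, and namespace are placeholders:

```yaml
# Sketch only — object name, key, and namespace are placeholders.
caProvider:
  type: ConfigMap          # CAProviderType: "Secret" or "ConfigMap"
  name: internal-ca        # name of the object holding the certificate
  key: ca.crt              # key containing the PEM-encoded CA certificate
  namespace: vault-system  # only permitted when used in a ClusterSecretStore
```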
### CAProviderType (`string` alias)

*(Appears on: [CAProvider](#external-secrets.io/v1beta1.CAProvider))*

| Value | Description |
|-------|-------------|
| `"ConfigMap"` | |
| `"Secret"` | |

### CertAuth

*(Appears on: [KubernetesAuth](#external-secrets.io/v1beta1.KubernetesAuth))*

| Field | Type | Description |
|-------|------|-------------|
| `clientCert` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | |
| `clientKey` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | |

### ChefAuth

*(Appears on: [ChefProvider](#external-secrets.io/v1beta1.ChefProvider))*

ChefAuth contains a secretRef for credentials.

| Field | Type | Description |
|-------|------|-------------|
| `secretRef` | [ChefAuthSecretRef](#external-secrets.io/v1beta1.ChefAuthSecretRef) | |

### ChefAuthSecretRef

*(Appears on: [ChefAuth](#external-secrets.io/v1beta1.ChefAuth))*

ChefAuthSecretRef holds secret references for chef server login credentials.

| Field | Type | Description |
|-------|------|-------------|
| `privateKeySecretRef` | [meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector) | SecretKey is the Signing Key in PEM format, used for authentication. |

### ChefProvider

*(Appears on: [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider))*

ChefProvider configures a store to sync secrets using basic chef server connection credentials.

| Field | Type | Description |
|-------|------|-------------|
| `auth` | [ChefAuth](#external-secrets.io/v1beta1.ChefAuth) | Auth defines the information necessary to authenticate against the chef Server |
| `username` | string | UserName should be the user ID on the chef server |
| `serverUrl` | string | ServerURL is the chef server URL used to connect to. If using orgs you should include your org in the URL and terminate the URL with a “/” |

### ClusterExternalSecret

ClusterExternalSecret is the Schema for the clusterexternalsecrets API.

| Field | Type | Description |
|-------|------|-------------|
| `metadata` | [Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#objectmeta-v1-meta) | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` | [ClusterExternalSecretSpec](#external-secrets.io/v1beta1.ClusterExternalSecretSpec) | |
| `spec.externalSecretSpec` | [ExternalSecretSpec](#external-secrets.io/v1beta1.ExternalSecretSpec) | The spec for the ExternalSecrets to be |
created  p    td    tr   tr   td   code externalSecretName  code   br   em  string   em    td   td   em  Optional   em   p The name of the external secrets to be created  Defaults to the name of the ClusterExternalSecret  p    td    tr   tr   td   code externalSecretMetadata  code   br   em   a href   external secrets io v1beta1 ExternalSecretMetadata   ExternalSecretMetadata   a    em    td   td   em  Optional   em   p The metadata of the external secrets to be created  p    td    tr   tr   td   code namespaceSelector  code   br   em   a href  https   kubernetes io docs reference generated kubernetes api v1 25  labelselector v1 meta   Kubernetes meta v1 LabelSelector   a    em    td   td   em  Optional   em   p The labels to select by to find the Namespaces to create the ExternalSecrets in  Deprecated  Use NamespaceSelectors instead   p    td    tr   tr   td   code namespaceSelectors  code   br   em   a href  https   kubernetes io docs reference generated kubernetes api v1 25   k8s io apimachinery pkg apis meta v1 labelselector        k8s io apimachinery pkg apis meta v1 LabelSelector   a    em    td   td   em  Optional   em   p A list of labels to select by to find the Namespaces to create the ExternalSecrets in  The selectors are ORed   p    td    tr   tr   td   code namespaces  code   br   em    string   em    td   td   em  Optional   em   p Choose namespaces by name  This field is ORed with anything that NamespaceSelectors ends up choosing   p    td    tr   tr   td   code refreshTime  code   br   em   a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   Kubernetes meta v1 Duration   a    em    td   td   p The time in which the controller should reconcile its objects and recheck namespaces for labels   p    td    tr    table    td    tr   tr   td   code status  code   br   em   a href   external secrets io v1beta1 ClusterExternalSecretStatus   ClusterExternalSecretStatus   a    em    td   td    td    tr    tbody    table   h3 id  external 
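As a sketch of how these fields combine, the following hypothetical ClusterExternalSecret creates an ExternalSecret in every namespace matching a label selector or listed by name (all names, labels, and the store reference are assumptions for illustration):

```yaml
# Hypothetical example: create an ExternalSecret named "db-credentials" in
# every namespace labeled team=payments, ORed with the namespace "staging".
apiVersion: external-secrets.io/v1beta1
kind: ClusterExternalSecret
metadata:
  name: db-credentials
spec:
  externalSecretName: db-credentials   # defaults to the ClusterExternalSecret name if omitted
  namespaceSelectors:
    - matchLabels:
        team: payments
  namespaces:
    - staging
  refreshTime: 1m                      # how often namespaces are rechecked for labels
  externalSecretSpec:                  # the spec for each created ExternalSecret
    secretStoreRef:
      name: my-store                   # assumed store name
      kind: ClusterSecretStore
    data:
      - secretKey: password
        remoteRef:
          key: prod/db/password        # assumed provider key
```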
### ClusterExternalSecretConditionType (`string` alias)

*Appears on:* ClusterExternalSecretStatusCondition

| Value | Description |
|-------|-------------|
| `"Ready"` | |

### ClusterExternalSecretNamespaceFailure

*Appears on:* ClusterExternalSecretStatus

ClusterExternalSecretNamespaceFailure represents a failed namespace deployment and its reason.

| Field | Description |
|-------|-------------|
| `namespace` <br> *string* | Namespace is the namespace that failed when trying to apply an ExternalSecret. |
| `reason` <br> *string* | *(Optional)* Reason is why the ExternalSecret failed to apply to the namespace. |

### ClusterExternalSecretSpec

*Appears on:* ClusterExternalSecret

ClusterExternalSecretSpec defines the desired state of ClusterExternalSecret.

| Field | Description |
|-------|-------------|
| `externalSecretSpec` <br> *ExternalSecretSpec* | The spec for the ExternalSecrets to be created. |
| `externalSecretName` <br> *string* | *(Optional)* The name of the external secrets to be created. Defaults to the name of the ClusterExternalSecret. |
| `externalSecretMetadata` <br> *ExternalSecretMetadata* | *(Optional)* The metadata of the external secrets to be created. |
| `namespaceSelector` <br> *Kubernetes meta/v1.LabelSelector* | *(Optional)* The labels to select by to find the Namespaces to create the ExternalSecrets in. Deprecated: use `namespaceSelectors` instead. |
| `namespaceSelectors` <br> *[]Kubernetes meta/v1.LabelSelector* | *(Optional)* A list of labels to select by to find the Namespaces to create the ExternalSecrets in. The selectors are ORed. |
| `namespaces` <br> *[]string* | *(Optional)* Choose namespaces by name. This field is ORed with anything that `namespaceSelectors` ends up choosing. |
| `refreshTime` <br> *Kubernetes meta/v1.Duration* | The time in which the controller should reconcile its objects and recheck namespaces for labels. |

### ClusterExternalSecretStatus

*Appears on:* ClusterExternalSecret

ClusterExternalSecretStatus defines the observed state of ClusterExternalSecret.

| Field | Description |
|-------|-------------|
| `externalSecretName` <br> *string* | ExternalSecretName is the name of the ExternalSecrets created by the ClusterExternalSecret. |
| `failedNamespaces` <br> *[]ClusterExternalSecretNamespaceFailure* | *(Optional)* Failed namespaces are the namespaces that failed to apply an ExternalSecret. |
| `provisionedNamespaces` <br> *[]string* | *(Optional)* ProvisionedNamespaces are the namespaces where the ClusterExternalSecret has secrets. |
| `conditions` <br> *[]ClusterExternalSecretStatusCondition* | *(Optional)* |

### ClusterExternalSecretStatusCondition

*Appears on:* ClusterExternalSecretStatus

| Field | Description |
|-------|-------------|
| `type` <br> *ClusterExternalSecretConditionType* | |
| `status` <br> *Kubernetes core/v1.ConditionStatus* | |
| `message` <br> *string* | *(Optional)* |

### ClusterSecretStore

ClusterSecretStore represents a secure external location for storing secrets, which can be referenced as part of `storeRef` fields.

| Field | Description |
|-------|-------------|
| `metadata` <br> *Kubernetes meta/v1.ObjectMeta* | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` <br> *SecretStoreSpec* | |
| `spec.controller` <br> *string* | *(Optional)* Used to select the correct ESO controller (think: ingress.ingressClassName). The ESO controller is instantiated with a specific controller name and filters ES based on this property. |
| `spec.provider` <br> *SecretStoreProvider* | Used to configure the provider. Only one provider may be set. |
| `spec.retrySettings` <br> *SecretStoreRetrySettings* | *(Optional)* Used to configure HTTP retries on failure. |
| `spec.refreshInterval` <br> *int* | *(Optional)* Used to configure the store refresh interval in seconds. Empty or 0 will default to the controller config. |
| `spec.conditions` <br> *[]ClusterSecretStoreCondition* | *(Optional)* Used to constrain a ClusterSecretStore to specific namespaces. Relevant only to ClusterSecretStore. |
| `status` <br> *SecretStoreStatus* | |
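A minimal sketch of a ClusterSecretStore tying these fields together; the provider block, server address, and secret names are assumptions for illustration, and the `conditions` list restricts which namespaces may reference the store:

```yaml
# Hypothetical ClusterSecretStore sketch. Exactly one provider may be set.
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: central-vault
spec:
  controller: dev          # only the ESO instance started with this controller name handles it
  refreshInterval: 60      # seconds; empty or 0 falls back to the controller config
  provider:
    vault:                 # assumed provider; any single provider block would do
      server: https://vault.example.com
      path: secret
      version: v2
      auth:
        tokenSecretRef:
          name: vault-token
          key: token
  conditions:              # namespaces allowed to reference this store (ORed)
    - namespaceSelector:
        matchLabels:
          vault-access: "true"
    - namespaces:
        - kube-system
```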
### ClusterSecretStoreCondition

*Appears on:* SecretStoreSpec

ClusterSecretStoreCondition describes a condition by which to choose namespaces to process ExternalSecrets in for a ClusterSecretStore instance.

| Field | Description |
|-------|-------------|
| `namespaceSelector` <br> *Kubernetes meta/v1.LabelSelector* | *(Optional)* Choose namespaces using a labelSelector. |
| `namespaces` <br> *[]string* | *(Optional)* Choose namespaces by name. |
| `namespaceRegexes` <br> *[]string* | *(Optional)* Choose namespaces by using regex matching. |

### ConjurAPIKey

*Appears on:* ConjurAuth

| Field | Description |
|-------|-------------|
| `account` <br> *string* | |
| `userRef` <br> *External Secrets meta/v1.SecretKeySelector* | |
| `apiKeyRef` <br> *External Secrets meta/v1.SecretKeySelector* | |

### ConjurAuth

*Appears on:* ConjurProvider

| Field | Description |
|-------|-------------|
| `apikey` <br> *ConjurAPIKey* | *(Optional)* |
| `jwt` <br> *ConjurJWT* | *(Optional)* |

### ConjurJWT

*Appears on:* ConjurAuth

| Field | Description |
|-------|-------------|
| `account` <br> *string* | |
| `serviceID` <br> *string* | The conjur authn-jwt webservice id. |
| `hostId` <br> *string* | *(Optional)* Optional HostID for JWT authentication. This may be used depending on how the Conjur JWT authenticator policy is configured. |
| `secretRef` <br> *External Secrets meta/v1.SecretKeySelector* | *(Optional)* Optional SecretRef that refers to a key in a Secret resource containing the JWT token to authenticate with Conjur using the JWT authentication method. |
| `serviceAccountRef` <br> *External Secrets meta/v1.ServiceAccountSelector* | *(Optional)* Optional ServiceAccountRef specifies the Kubernetes service account for which to request a token with the `TokenRequest` API. |
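A hypothetical auth fragment using the JWT method with a service-account token (account name, webservice id, service-account name, and audience are all assumptions):

```yaml
# Hypothetical ConjurAuth fragment: a token is requested for the named
# Kubernetes service account via the TokenRequest API and presented to
# the Conjur authn-jwt webservice.
auth:
  jwt:
    account: myConjurAccount         # assumed Conjur account
    serviceID: eso-authenticator     # the conjur authn-jwt webservice id
    serviceAccountRef:
      name: external-secrets-sa      # assumed service account
      audiences:
        - conjur
```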
### ConjurProvider

*Appears on:* SecretStoreProvider

| Field | Description |
|-------|-------------|
| `url` <br> *string* | |
| `caBundle` <br> *string* | *(Optional)* |
| `caProvider` <br> *CAProvider* | *(Optional)* |
| `auth` <br> *ConjurAuth* | |

### DelineaProvider

*Appears on:* SecretStoreProvider

See https://github.com/DelineaXPM/dsv-sdk-go/blob/main/vault/vault.go.

| Field | Description |
|-------|-------------|
| `clientId` <br> *DelineaProviderSecretRef* | ClientID is the non-secret part of the credential. |
| `clientSecret` <br> *DelineaProviderSecretRef* | ClientSecret is the secret part of the credential. |
| `tenant` <br> *string* | Tenant is the chosen hostname / site name. |
| `urlTemplate` <br> *string* | *(Optional)* URLTemplate. If unset, defaults to "https://%s.secretsvaultcloud.%s/v1/%s%s". |
| `tld` <br> *string* | *(Optional)* TLD is based on the server location that was chosen during provisioning. If unset, defaults to "com". |

### DelineaProviderSecretRef

*Appears on:* DelineaProvider

| Field | Description |
|-------|-------------|
| `value` <br> *string* | *(Optional)* Value can be specified directly to set a value without using a secret. |
| `secretRef` <br> *External Secrets meta/v1.SecretKeySelector* | *(Optional)* SecretRef references a key in a secret that will be used as the value. |

### Device42Auth

*Appears on:* Device42Provider

| Field | Description |
|-------|-------------|
| `secretRef` <br> *Device42SecretRef* | |

### Device42Provider

*Appears on:* SecretStoreProvider

Device42Provider configures a store to sync secrets with a Device42 instance.

| Field | Description |
|-------|-------------|
| `host` <br> *string* | URL configures the Device42 instance URL. |
| `auth` <br> *Device42Auth* | Auth configures how the secret manager authenticates with a Device42 instance. |

### Device42SecretRef

*Appears on:* Device42Auth

| Field | Description |
|-------|-------------|
| `credentials` <br> *External Secrets meta/v1.SecretKeySelector* | *(Optional)* Username / Password is used for authentication. |

### DopplerAuth

*Appears on:* DopplerProvider

| Field | Description |
|-------|-------------|
| `secretRef` <br> *DopplerAuthSecretRef* | |

### DopplerAuthSecretRef

*Appears on:* DopplerAuth

| Field | Description |
|-------|-------------|
| `dopplerToken` <br> *External Secrets meta/v1.SecretKeySelector* | The DopplerToken is used for authentication. See https://docs.doppler.com/reference/api#authentication for auth token types. The `key` attribute defaults to `dopplerToken` if not specified. |
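A minimal wiring sketch for DopplerAuth, assuming a Secret holding the token under the default key (the Secret name and token value are placeholders):

```yaml
# Hypothetical Secret holding a Doppler token under the default key.
apiVersion: v1
kind: Secret
metadata:
  name: doppler-token-auth-api
stringData:
  dopplerToken: dp.st.placeholder-token
---
# Corresponding auth fragment inside the store's Doppler provider section:
auth:
  secretRef:
    dopplerToken:
      name: doppler-token-auth-api
      # key omitted: defaults to "dopplerToken"
```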
### DopplerProvider

*Appears on:* SecretStoreProvider

DopplerProvider configures a store to sync secrets using the Doppler provider. Project and Config are required if not using a Service Token.

| Field | Description |
|-------|-------------|
| `auth` <br> *DopplerAuth* | Auth configures how the Operator authenticates with the Doppler API. |
| `project` <br> *string* | *(Optional)* Doppler project (required if not using a Service Token). |
| `config` <br> *string* | *(Optional)* Doppler config (required if not using a Service Token). |
| `nameTransformer` <br> *string* | *(Optional)* Environment-variable-compatible name transforms that change secret names to a different format. |
| `format` <br> *string* | *(Optional)* Format enables the downloading of secrets as a file (string). |

### ExternalSecret

ExternalSecret is the Schema for the external-secrets API.

| Field | Description |
|-------|-------------|
| `metadata` <br> *Kubernetes meta/v1.ObjectMeta* | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` <br> *ExternalSecretSpec* | |
| `spec.secretStoreRef` <br> *SecretStoreRef* | *(Optional)* |
| `spec.target` <br> *ExternalSecretTarget* | *(Optional)* |
| `spec.refreshInterval` <br> *Kubernetes meta/v1.Duration* | RefreshInterval is the amount of time before the values are read again from the SecretStore provider, specified as Golang Duration strings. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Example values: "1h", "2h30m", "5d", "10s". May be set to zero to fetch and create it once. Defaults to 1h. |
| `spec.data` <br> *[]ExternalSecretData* | *(Optional)* Data defines the connection between the Kubernetes Secret keys and the Provider data. |
| `spec.dataFrom` <br> *[]ExternalSecretDataFromRemoteRef* | *(Optional)* DataFrom is used to fetch all properties from a specific Provider data. If multiple entries are specified, the Secret keys are merged in the specified order. |
| `status` <br> *ExternalSecretStatus* | |
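A hypothetical ExternalSecret showing `refreshInterval`, `data`, and `dataFrom` together (the store, target, and remote key names are assumptions):

```yaml
# Hypothetical ExternalSecret sketch tying the spec fields together.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  refreshInterval: 1h          # Go duration string; zero fetches and creates only once
  secretStoreRef:
    name: my-store             # assumed store name
    kind: SecretStore
  target:
    name: app-secrets          # name of the resulting Kubernetes Secret
  data:
    - secretKey: api-key       # key in the Kubernetes Secret
      remoteRef:
        key: prod/app          # key in the provider
        property: apiKey       # select one property of a map value, if supported
  dataFrom:
    - extract:
        key: prod/app/all      # entries are merged in the specified order
```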
ExternalSecretStatusCondition  a     p   p    p   table   thead   tr   th Value  th   th Description  th    tr    thead   tbody  tr  td  p   34 Deleted  34   p   td   td   td    tr  tr  td  p   34 Ready  34   p   td   td   td    tr   tbody    table   h3 id  external secrets io v1beta1 ExternalSecretConversionStrategy  ExternalSecretConversionStrategy   code string  code  alias   p   h3   p    em Appears on   em   a href   external secrets io v1beta1 ExternalSecretDataRemoteRef  ExternalSecretDataRemoteRef  a     a href   external secrets io v1beta1 ExternalSecretFind  ExternalSecretFind  a     p   p    p   table   thead   tr   th Value  th   th Description  th    tr    thead   tbody  tr  td  p   34 Default  34   p   td   td   td    tr  tr  td  p   34 Unicode  34   p   td   td   td    tr   tbody    table   h3 id  external secrets io v1beta1 ExternalSecretCreationPolicy  ExternalSecretCreationPolicy   code string  code  alias   p   h3   p    em Appears on   em   a href   external secrets io v1beta1 ExternalSecretTarget  ExternalSecretTarget  a     p   p   p ExternalSecretCreationPolicy defines rules on how to create the resulting Secret   p    p   table   thead   tr   th Value  th   th Description  th    tr    thead   tbody  tr  td  p   34 Merge  34   p   td   td  p Merge does not create the Secret  but merges the data fields to the Secret   p    td    tr  tr  td  p   34 None  34   p   td   td  p None does not create a Secret  future use with injector    p    td    tr  tr  td  p   34 Orphan  34   p   td   td  p Orphan creates the Secret and does not set the ownerReference  I e  it will be orphaned after the deletion of the ExternalSecret   p    td    tr  tr  td  p   34 Owner  34   p   td   td  p Owner creates the Secret and sets  metadata ownerReferences to the ExternalSecret resource   p    td    tr   tbody    table   h3 id  external secrets io v1beta1 ExternalSecretData  ExternalSecretData   h3   p    em Appears on   em   a href   external secrets io v1beta1 
ExternalSecretSpec  ExternalSecretSpec  a     p   p   p ExternalSecretData defines the connection between the Kubernetes Secret key  spec data  key   and the Provider data   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code secretKey  code   br   em  string   em    td   td   p The key in the Kubernetes Secret to store the value   p    td    tr   tr   td   code remoteRef  code   br   em   a href   external secrets io v1beta1 ExternalSecretDataRemoteRef   ExternalSecretDataRemoteRef   a    em    td   td   p RemoteRef points to the remote secret and defines which secret  version property     to fetch   p    td    tr   tr   td   code sourceRef  code   br   em   a href   external secrets io v1beta1 StoreSourceRef   StoreSourceRef   a    em    td   td   p SourceRef allows you to override the source from which the value will be pulled   p    td    tr    tbody    table   h3 id  external secrets io v1beta1 ExternalSecretDataFromRemoteRef  ExternalSecretDataFromRemoteRef   h3   p    em Appears on   em   a href   external secrets io v1beta1 ExternalSecretSpec  ExternalSecretSpec  a     p   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code extract  code   br   em   a href   external secrets io v1beta1 ExternalSecretDataRemoteRef   ExternalSecretDataRemoteRef   a    em    td   td   em  Optional   em   p Used to extract multiple key value pairs from one secret Note  Extract does not support sourceRef Generator or sourceRef GeneratorRef   p    td    tr   tr   td   code find  code   br   em   a href   external secrets io v1beta1 ExternalSecretFind   ExternalSecretFind   a    em    td   td   em  Optional   em   p Used to find secrets based on tags or regular expressions Note  Find does not support sourceRef Generator or sourceRef GeneratorRef   p    td    tr   tr   td   code rewrite  code   br   em   a href   external secrets io v1beta1 ExternalSecretRewrite     
ExternalSecretRewrite   a    em    td   td   em  Optional   em   p Used to rewrite secret Keys after getting them from the secret Provider Multiple Rewrite operations can be provided  They are applied in a layered order  first to last   p    td    tr   tr   td   code sourceRef  code   br   em   a href   external secrets io v1beta1 StoreGeneratorSourceRef   StoreGeneratorSourceRef   a    em    td   td   p SourceRef points to a store or generator which contains secret values ready to use  Use this in combination with Extract or Find pull values out of a specific SecretStore  When sourceRef points to a generator Extract or Find is not supported  The generator returns a static map of values  p    td    tr    tbody    table   h3 id  external secrets io v1beta1 ExternalSecretDataRemoteRef  ExternalSecretDataRemoteRef   h3   p    em Appears on   em   a href   external secrets io v1beta1 ExternalSecretData  ExternalSecretData  a     a href   external secrets io v1beta1 ExternalSecretDataFromRemoteRef  ExternalSecretDataFromRemoteRef  a     p   p   p ExternalSecretDataRemoteRef defines Provider data location   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code key  code   br   em  string   em    td   td   p Key is the key used in the Provider  mandatory  p    td    tr   tr   td   code metadataPolicy  code   br   em   a href   external secrets io v1beta1 ExternalSecretMetadataPolicy   ExternalSecretMetadataPolicy   a    em    td   td   em  Optional   em   p Policy for fetching tags labels from provider secrets  possible options are Fetch  None  Defaults to None  p    td    tr   tr   td   code property  code   br   em  string   em    td   td   em  Optional   em   p Used to select a specific property of the Provider value  if a map   if supported  p    td    tr   tr   td   code version  code   br   em  string   em    td   td   em  Optional   em   p Used to select a specific version of the Provider value  if supported  p    
| `conversionStrategy`<br>*[ExternalSecretConversionStrategy](#external-secrets.io/v1beta1.ExternalSecretConversionStrategy)* | *(Optional)* Used to define a conversion Strategy. |
| `decodingStrategy`<br>*[ExternalSecretDecodingStrategy](#external-secrets.io/v1beta1.ExternalSecretDecodingStrategy)* | *(Optional)* Used to define a decoding Strategy. |

<h3 id="external-secrets.io/v1beta1.ExternalSecretDecodingStrategy">ExternalSecretDecodingStrategy (`string` alias)</h3>

*Appears on:* [ExternalSecretDataRemoteRef](#external-secrets.io/v1beta1.ExternalSecretDataRemoteRef), [ExternalSecretFind](#external-secrets.io/v1beta1.ExternalSecretFind)

| Value | Description |
| --- | --- |
| `"Auto"` | |
| `"Base64"` | |
| `"Base64URL"` | |
| `"None"` | |

<h3 id="external-secrets.io/v1beta1.ExternalSecretDeletionPolicy">ExternalSecretDeletionPolicy (`string` alias)</h3>

*Appears on:* [ExternalSecretTarget](#external-secrets.io/v1beta1.ExternalSecretTarget)

ExternalSecretDeletionPolicy defines rules on how to delete the resulting Secret.

| Value | Description |
| --- | --- |
| `"Delete"` | Delete deletes the secret if all provider secrets are deleted. If a secret gets deleted on the provider side and is not accessible anymore this is not considered an error and the ExternalSecret does not go into SecretSyncedError status. |
| `"Merge"` | Merge removes keys in the secret, but not the secret itself. If a secret gets deleted on the provider side and is not accessible anymore this is not considered an error and the ExternalSecret does not go into SecretSyncedError status. |
| `"Retain"` | Retain will retain the secret if all provider secrets have been deleted. If a provider secret does not exist the ExternalSecret gets into the SecretSyncedError status. |

<h3 id="external-secrets.io/v1beta1.ExternalSecretFind">ExternalSecretFind</h3>

*Appears on:* [ExternalSecretDataFromRemoteRef](#external-secrets.io/v1beta1.ExternalSecretDataFromRemoteRef)

| Field | Description |
| --- | --- |
| `path`<br>*string* | *(Optional)* A root path to start the find operations. |
| `name`<br>*[FindName](#external-secrets.io/v1beta1.FindName)* | *(Optional)* Finds secrets based on the name. |
| `tags`<br>*map[string]string* | *(Optional)* Find secrets based on tags. |
| `conversionStrategy`<br>*[ExternalSecretConversionStrategy](#external-secrets.io/v1beta1.ExternalSecretConversionStrategy)* | *(Optional)* Used to define a conversion Strategy. |
| `decodingStrategy`<br>*[ExternalSecretDecodingStrategy](#external-secrets.io/v1beta1.ExternalSecretDecodingStrategy)* | *(Optional)* Used to define a decoding Strategy. |
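As a worked illustration of the find fields above, the following sketch pulls every provider secret matching a name pattern or tag (the store name, path, and tag values are hypothetical):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: find-example          # hypothetical name
spec:
  secretStoreRef:
    name: my-store            # hypothetical SecretStore
    kind: SecretStore
  target:
    name: found-secrets
  dataFrom:
    - find:
        path: team-a/              # root path to start the find operation
        name:
          regexp: "db-.*"          # FindName.regexp: match provider secrets by name
        tags:                      # and/or match by provider tags
          env: production
        decodingStrategy: Base64   # one of Auto, Base64, Base64URL, None
```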
<h3 id="external-secrets.io/v1beta1.ExternalSecretMetadata">ExternalSecretMetadata</h3>

*Appears on:* [ClusterExternalSecretSpec](#external-secrets.io/v1beta1.ClusterExternalSecretSpec)

ExternalSecretMetadata defines metadata fields for the ExternalSecret generated by the ClusterExternalSecret.

| Field | Description |
| --- | --- |
| `annotations`<br>*map[string]string* | *(Optional)* |
| `labels`<br>*map[string]string* | *(Optional)* |

<h3 id="external-secrets.io/v1beta1.ExternalSecretMetadataPolicy">ExternalSecretMetadataPolicy (`string` alias)</h3>

*Appears on:* [ExternalSecretDataRemoteRef](#external-secrets.io/v1beta1.ExternalSecretDataRemoteRef)

| Value | Description |
| --- | --- |
| `"Fetch"` | |
| `"None"` | |

<h3 id="external-secrets.io/v1beta1.ExternalSecretRewrite">ExternalSecretRewrite</h3>

*Appears on:* [ExternalSecretDataFromRemoteRef](#external-secrets.io/v1beta1.ExternalSecretDataFromRemoteRef)

| Field | Description |
| --- | --- |
| `regexp`<br>*[ExternalSecretRewriteRegexp](#external-secrets.io/v1beta1.ExternalSecretRewriteRegexp)* | *(Optional)* Used to rewrite with regular expressions. The resulting key will be the output of a regexp.ReplaceAll operation. |
| `transform`<br>*[ExternalSecretRewriteTransform](#external-secrets.io/v1beta1.ExternalSecretRewriteTransform)* | *(Optional)* Used to apply string transformation on the secrets. The resulting key will be the output of the template applied by the operation. |
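Since rewrites are applied in layered order (first to last), they can be chained. A sketch that strips a `prod/` prefix from every found key and then lowercases it (the pattern and the `lower` template function are illustrative assumptions):

```yaml
dataFrom:
  - find:
      name:
        regexp: "prod/.*"
    rewrite:
      - regexp:
          source: "prod/(.*)"     # regular expression handed to the re compiler
          target: "$1"            # target pattern of the ReplaceAll operation
      - transform:
          template: "{{ .value | lower }}"   # .value is the current secret key name
```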
<h3 id="external-secrets.io/v1beta1.ExternalSecretRewriteRegexp">ExternalSecretRewriteRegexp</h3>

*Appears on:* [ExternalSecretRewrite](#external-secrets.io/v1beta1.ExternalSecretRewrite)

| Field | Description |
| --- | --- |
| `source`<br>*string* | Used to define the regular expression of a re.Compiler. |
| `target`<br>*string* | Used to define the target pattern of a ReplaceAll operation. |

<h3 id="external-secrets.io/v1beta1.ExternalSecretRewriteTransform">ExternalSecretRewriteTransform</h3>

*Appears on:* [ExternalSecretRewrite](#external-secrets.io/v1beta1.ExternalSecretRewrite)

| Field | Description |
| --- | --- |
| `template`<br>*string* | Used to define the template to apply on the secret name. `.value` will specify the secret name in the template. |

<h3 id="external-secrets.io/v1beta1.ExternalSecretSpec">ExternalSecretSpec</h3>

*Appears on:* [ClusterExternalSecretSpec](#external-secrets.io/v1beta1.ClusterExternalSecretSpec), [ExternalSecret](#external-secrets.io/v1beta1.ExternalSecret)

ExternalSecretSpec defines the desired state of ExternalSecret.

| Field | Description |
| --- | --- |
| `secretStoreRef`<br>*[SecretStoreRef](#external-secrets.io/v1beta1.SecretStoreRef)* | *(Optional)* |
| `target`<br>*[ExternalSecretTarget](#external-secrets.io/v1beta1.ExternalSecretTarget)* | *(Optional)* |
| `refreshInterval`<br>*[Kubernetes meta/v1.Duration](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration)* | RefreshInterval is the amount of time before the values are read again from the SecretStore provider, specified as Golang Duration strings. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Example values: "1h", "2h30m", "5d", "10s". May be set to zero to fetch and create it once. Defaults to 1h. |
| `data`<br>*[][ExternalSecretData](#external-secrets.io/v1beta1.ExternalSecretData)* | *(Optional)* Data defines the connection between the Kubernetes Secret keys and the Provider data. |
| `dataFrom`<br>*[][ExternalSecretDataFromRemoteRef](#external-secrets.io/v1beta1.ExternalSecretDataFromRemoteRef)* | *(Optional)* DataFrom is used to fetch all properties from a specific Provider data. If multiple entries are specified, the Secret keys are merged in the specified order. |

<h3 id="external-secrets.io/v1beta1.ExternalSecretStatus">ExternalSecretStatus</h3>

*Appears on:* [ExternalSecret](#external-secrets.io/v1beta1.ExternalSecret)

| Field | Description |
| --- | --- |
| `refreshTime`<br>*[Kubernetes meta/v1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#time-v1-meta)* | refreshTime is the time and date the external secret was fetched and the target secret updated. |
| `syncedResourceVersion`<br>*string* | SyncedResourceVersion keeps track of the last synced version. |
| `conditions`<br>*[][ExternalSecretStatusCondition](#external-secrets.io/v1beta1.ExternalSecretStatusCondition)* | *(Optional)* |
| `binding`<br>*[Kubernetes core/v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#localobjectreference-v1-core)* | Binding represents a servicebinding.io Provisioned Service reference to the secret. |

<h3 id="external-secrets.io/v1beta1.ExternalSecretStatusCondition">ExternalSecretStatusCondition</h3>

*Appears on:* [ExternalSecretStatus](#external-secrets.io/v1beta1.ExternalSecretStatus)

| Field | Description |
| --- | --- |
| `type`<br>*[ExternalSecretConditionType](#external-secrets.io/v1beta1.ExternalSecretConditionType)* | |
| `status`<br>*[Kubernetes core/v1.ConditionStatus](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#conditionstatus-v1-core)* | |
| `reason`<br>*string* | *(Optional)* |
| `message`<br>*string* | *(Optional)* |
| `lastTransitionTime`<br>*[Kubernetes meta/v1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#time-v1-meta)* | *(Optional)* |

<h3 id="external-secrets.io/v1beta1.ExternalSecretTarget">ExternalSecretTarget</h3>

*Appears on:* [ExternalSecretSpec](#external-secrets.io/v1beta1.ExternalSecretSpec)

ExternalSecretTarget defines the Kubernetes Secret to be created. There can be only one target per ExternalSecret.

| Field | Description |
| --- | --- |
| `name`<br>*string* | *(Optional)* The name of the Secret resource to be managed. Defaults to the `.metadata.name` of the ExternalSecret resource. |
| `creationPolicy`<br>*[ExternalSecretCreationPolicy](#external-secrets.io/v1beta1.ExternalSecretCreationPolicy)* | *(Optional)* CreationPolicy defines rules on how to create the resulting Secret. Defaults to "Owner". |
| `deletionPolicy`<br>*[ExternalSecretDeletionPolicy](#external-secrets.io/v1beta1.ExternalSecretDeletionPolicy)* | *(Optional)* DeletionPolicy defines rules on how to delete the resulting Secret. Defaults to "Retain". |
| `template`<br>*[ExternalSecretTemplate](#external-secrets.io/v1beta1.ExternalSecretTemplate)* | *(Optional)* Template defines a blueprint for the created Secret resource. |
| `immutable`<br>*bool* | *(Optional)* Immutable defines if the final secret will be immutable. |

<h3 id="external-secrets.io/v1beta1.ExternalSecretTemplate">ExternalSecretTemplate</h3>

*Appears on:* [ExternalSecretTarget](#external-secrets.io/v1beta1.ExternalSecretTarget)

ExternalSecretTemplate defines a blueprint for the created Secret resource. We can not use native corev1.Secret; it will have empty ObjectMeta values: <https://github.com/kubernetes-sigs/controller-tools/issues/448>

| Field | Description |
| --- | --- |
| `type`<br>*[Kubernetes core/v1.SecretType](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#secrettype-v1-core)* | *(Optional)* |
| `engineVersion`<br>*[TemplateEngineVersion](#external-secrets.io/v1beta1.TemplateEngineVersion)* | EngineVersion specifies the template engine version that should be used to compile/execute the template specified in `.data` and `.templateFrom[]`. |
| `metadata`<br>*[ExternalSecretTemplateMetadata](#external-secrets.io/v1beta1.ExternalSecretTemplateMetadata)* | *(Optional)* |
| `mergePolicy`<br>*[TemplateMergePolicy](#external-secrets.io/v1beta1.TemplateMergePolicy)* | |
| `data`<br>*map[string]string* | *(Optional)* |
| `templateFrom`<br>*[][TemplateFrom](#external-secrets.io/v1beta1.TemplateFrom)* | *(Optional)* |

<h3 id="external-secrets.io/v1beta1.ExternalSecretTemplateMetadata">ExternalSecretTemplateMetadata</h3>

*Appears on:* [ExternalSecretTemplate](#external-secrets.io/v1beta1.ExternalSecretTemplate)

ExternalSecretTemplateMetadata defines metadata fields for the Secret blueprint.

| Field | Description |
| --- | --- |
| `annotations`<br>*map[string]string* | *(Optional)* |
| `labels`<br>*map[string]string* | *(Optional)* |
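Pulling the spec, target, and template fields above together, a minimal ExternalSecret might look like the following sketch (names are hypothetical; the `secretKey`/`remoteRef` entries come from the ExternalSecretData type referenced above):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-credentials            # hypothetical
spec:
  refreshInterval: 1h              # Golang duration; 0 fetches and creates only once
  secretStoreRef:
    name: my-store                 # hypothetical SecretStore
    kind: SecretStore
  target:
    name: app-credentials          # defaults to .metadata.name if omitted
    creationPolicy: Owner          # default
    deletionPolicy: Retain         # default
    template:
      engineVersion: v2
      metadata:
        labels:
          app: my-app              # ExternalSecretTemplateMetadata
      data:
        config.yaml: |
          password: "{{ .password }}"
  data:
    - secretKey: password          # key in the resulting Kubernetes Secret
      remoteRef:                   # ExternalSecretDataRemoteRef
        key: prod/db               # key in the provider (mandatory)
        property: password         # sub-property, if the value is a map
        version: "2"               # provider-specific version, if supported
```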
<h3 id="external-secrets.io/v1beta1.ExternalSecretValidator">ExternalSecretValidator</h3>

<h3 id="external-secrets.io/v1beta1.FakeProvider">FakeProvider</h3>

*Appears on:* [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider)

FakeProvider configures a fake provider that returns static values.

| Field | Description |
| --- | --- |
| `data`<br>*[][FakeProviderData](#external-secrets.io/v1beta1.FakeProviderData)* | |

<h3 id="external-secrets.io/v1beta1.FakeProviderData">FakeProviderData</h3>

*Appears on:* [FakeProvider](#external-secrets.io/v1beta1.FakeProvider)

| Field | Description |
| --- | --- |
| `key`<br>*string* | |
| `value`<br>*string* | |
| `valueMap`<br>*map[string]string* | Deprecated: ValueMap is deprecated and is intended to be removed in the future; use the `value` field instead. |
| `version`<br>*string* | |

<h3 id="external-secrets.io/v1beta1.FindName">FindName</h3>

*Appears on:* [ExternalSecretFind](#external-secrets.io/v1beta1.ExternalSecretFind)

| Field | Description |
| --- | --- |
| `regexp`<br>*string* | *(Optional)* Finds secrets based on the regular expression. |

<h3 id="external-secrets.io/v1beta1.FortanixProvider">FortanixProvider</h3>

*Appears on:* [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider)

| Field | Description |
| --- | --- |
| `apiUrl`<br>*string* | APIURL is the URL of SDKMS API. Defaults to `sdkms.fortanix.com`. |
| `apiKey`<br>*[FortanixProviderSecretRef](#external-secrets.io/v1beta1.FortanixProviderSecretRef)* | APIKey is the API token to access SDKMS Applications. |
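The fake provider above is useful for testing sync behaviour without a real backend. A minimal sketch (the store name and data values are hypothetical):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: fake-store          # hypothetical
spec:
  provider:
    fake:
      data:
        - key: "/foo/bar"        # key requested by an ExternalSecret
          value: "static-value"  # value returned for that key
          version: "v1"
```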
<h3 id="external-secrets.io/v1beta1.FortanixProviderSecretRef">FortanixProviderSecretRef</h3>

*Appears on:* [FortanixProvider](#external-secrets.io/v1beta1.FortanixProvider)

| Field | Description |
| --- | --- |
| `secretRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | SecretRef is a reference to a secret containing the SDKMS API Key. |

<h3 id="external-secrets.io/v1beta1.GCPSMAuth">GCPSMAuth</h3>

*Appears on:* [GCPSMProvider](#external-secrets.io/v1beta1.GCPSMProvider)

| Field | Description |
| --- | --- |
| `secretRef`<br>*[GCPSMAuthSecretRef](#external-secrets.io/v1beta1.GCPSMAuthSecretRef)* | *(Optional)* |
| `workloadIdentity`<br>*[GCPWorkloadIdentity](#external-secrets.io/v1beta1.GCPWorkloadIdentity)* | *(Optional)* |

<h3 id="external-secrets.io/v1beta1.GCPSMAuthSecretRef">GCPSMAuthSecretRef</h3>

*Appears on:* [GCPSMAuth](#external-secrets.io/v1beta1.GCPSMAuth)

| Field | Description |
| --- | --- |
| `secretAccessKeySecretRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | *(Optional)* The SecretAccessKey is used for authentication. |
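The secretRef auth above references a Kubernetes Secret holding GCP credentials. A sketch of a GCP Secret Manager store using it (project, Secret name, and key are hypothetical):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: gcp-store              # hypothetical
spec:
  provider:
    gcpsm:
      projectID: my-project    # hypothetical GCP project
      auth:
        secretRef:
          secretAccessKeySecretRef:
            name: gcpsm-secret             # hypothetical Secret with a SA key
            key: secret-access-credentials # key within that Secret
```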
<h3 id="external-secrets.io/v1beta1.GCPSMProvider">GCPSMProvider</h3>

*Appears on:* [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider)

GCPSMProvider configures a store to sync secrets using the GCP Secret Manager provider.

| Field | Description |
| --- | --- |
| `auth`<br>*[GCPSMAuth](#external-secrets.io/v1beta1.GCPSMAuth)* | *(Optional)* Auth defines the information necessary to authenticate against GCP. |
| `projectID`<br>*string* | ProjectID project where secret is located. |
| `location`<br>*string* | Location optionally defines a location for a secret. |

<h3 id="external-secrets.io/v1beta1.GCPWorkloadIdentity">GCPWorkloadIdentity</h3>

*Appears on:* [GCPSMAuth](#external-secrets.io/v1beta1.GCPSMAuth)

| Field | Description |
| --- | --- |
| `serviceAccountRef`<br>*[External Secrets meta/v1.ServiceAccountSelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#ServiceAccountSelector)* | |
| `clusterLocation`<br>*string* | |
| `clusterName`<br>*string* | |
| `clusterProjectID`<br>*string* | |

<h3 id="external-secrets.io/v1beta1.GeneratorRef">GeneratorRef</h3>

*Appears on:* [StoreGeneratorSourceRef](#external-secrets.io/v1beta1.StoreGeneratorSourceRef), [StoreSourceRef](#external-secrets.io/v1beta1.StoreSourceRef)

GeneratorRef points to a generator custom resource.

| Field | Description |
| --- | --- |
| `apiVersion`<br>*string* | Specify the apiVersion of the generator resource. |
| `kind`<br>*string* | Specify the Kind of the generator resource. |
| `name`<br>*string* | Specify the name of the generator resource. |

<h3 id="external-secrets.io/v1beta1.GenericStore">GenericStore</h3>

GenericStore is a common interface for interacting with ClusterSecretStore or a namespaced SecretStore.

<h3 id="external-secrets.io/v1beta1.GenericStoreValidator">GenericStoreValidator</h3>

<h3 id="external-secrets.io/v1beta1.GitlabAuth">GitlabAuth</h3>

*Appears on:* [GitlabProvider](#external-secrets.io/v1beta1.GitlabProvider)

| Field | Description |
| --- | --- |
| `SecretRef`<br>*[GitlabSecretRef](#external-secrets.io/v1beta1.GitlabSecretRef)* | |

<h3 id="external-secrets.io/v1beta1.GitlabProvider">GitlabProvider</h3>

*Appears on:* [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider)

Configures a store to sync secrets with a GitLab instance.

| Field | Description |
| --- | --- |
| `url`<br>*string* | URL configures the GitLab instance URL. Defaults to <https://gitlab.com/>. |
| `auth`<br>*[GitlabAuth](#external-secrets.io/v1beta1.GitlabAuth)* | Auth configures how secret manager authenticates with a GitLab instance. |
| `projectID`<br>*string* | ProjectID specifies a project where secrets are located. |
| `inheritFromGroups`<br>*bool* | InheritFromGroups specifies whether parent groups should be discovered and checked for secrets. |
| `groupIDs`<br>*[]string* | GroupIDs specify which gitlab groups to pull secrets from. Group secrets are read from left to right followed by the project variables. |
| `environment`<br>*string* | Environment is the environment_scope of gitlab CI/CD variables. Please see [Create a static environment](https://docs.gitlab.com/ee/ci/environments/#create-a-static-environment) on how to create environments. |

<h3 id="external-secrets.io/v1beta1.GitlabSecretRef">GitlabSecretRef</h3>

*Appears on:* [GitlabAuth](#external-secrets.io/v1beta1.GitlabAuth)

| Field | Description |
| --- | --- |
| `accessToken`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | AccessToken is used for authentication. |

<h3 id="external-secrets.io/v1beta1.IBMAuth">IBMAuth</h3>

*Appears on:* [IBMProvider](#external-secrets.io/v1beta1.IBMProvider)

| Field | Description |
| --- | --- |
| `secretRef`<br>*[IBMAuthSecretRef](#external-secrets.io/v1beta1.IBMAuthSecretRef)* | |
| `containerAuth`<br>*[IBMAuthContainerAuth](#external-secrets.io/v1beta1.IBMAuthContainerAuth)* | |

<h3 id="external-secrets.io/v1beta1.IBMAuthContainerAuth">IBMAuthContainerAuth</h3>

*Appears on:* [IBMAuth](#external-secrets.io/v1beta1.IBMAuth)

IBM Container-based auth with IAM Trusted Profile.

| Field | Description |
| --- | --- |
| `profile`<br>*string* | The IBM Trusted Profile. |
| `tokenLocation`<br>*string* | Location the token is mounted on the pod. |
| `iamEndpoint`<br>*string* | |

<h3 id="external-secrets.io/v1beta1.IBMAuthSecretRef">IBMAuthSecretRef</h3>

*Appears on:* [IBMAuth](#external-secrets.io/v1beta1.IBMAuth)

| Field | Description |
| --- | --- |
| `secretApiKeySecretRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | The SecretAccessKey is used for authentication. |

<h3 id="external-secrets.io/v1beta1.IBMProvider">IBMProvider</h3>

*Appears on:* [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider)

Configures a store to sync secrets using an IBM Cloud Secrets Manager backend.

| Field | Description |
| --- | --- |
| `auth`<br>*[IBMAuth](#external-secrets.io/v1beta1.IBMAuth)* | Auth configures how secret manager authenticates with the IBM secrets manager. |
| `serviceUrl`<br>*string* | ServiceURL is the Endpoint URL that is specific to the Secrets Manager service instance. |

<h3 id="external-secrets.io/v1beta1.InfisicalAuth">InfisicalAuth</h3>

*Appears on:* [InfisicalProvider](#external-secrets.io/v1beta1.InfisicalProvider)

| Field | Description |
| --- | --- |
| `universalAuthCredentials`<br>*[UniversalAuthCredentials](#external-secrets.io/v1beta1.UniversalAuthCredentials)* | *(Optional)* |

<h3 id="external-secrets.io/v1beta1.InfisicalProvider">InfisicalProvider</h3>

*Appears on:* [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider)

InfisicalProvider configures a store to sync secrets using the Infisical provider.

| Field | Description |
| --- | --- |
| `auth`<br>*[InfisicalAuth](#external-secrets.io/v1beta1.InfisicalAuth)* | Auth configures how the Operator authenticates with the Infisical API. |
| `secretsScope`<br>*[MachineIdentityScopeInWorkspace](#external-secrets.io/v1beta1.MachineIdentityScopeInWorkspace)* | |
| `hostAPI`<br>*string* | *(Optional)* |

<h3 id="external-secrets.io/v1beta1.KeeperSecurityProvider">KeeperSecurityProvider</h3>

*Appears on:* [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider)

KeeperSecurityProvider configures a store to sync secrets using Keeper Security.

| Field | Description |
| --- | --- |
| `authRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | |
| `folderID`<br>*string* | |
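A sketch of a GitLab-backed SecretStore using the fields above (project ID, token Secret, and environment are hypothetical; note the capitalized `SecretRef` field name from the GitlabAuth table):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: gitlab-store               # hypothetical
spec:
  provider:
    gitlab:
      url: https://gitlab.com      # default instance URL
      projectID: "12345678"        # hypothetical project
      inheritFromGroups: true      # also look up parent group variables
      environment: production      # environment_scope filter for CI/CD variables
      auth:
        SecretRef:
          accessToken:
            name: gitlab-token     # hypothetical Secret
            key: token
```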
<h3 id="external-secrets.io/v1beta1.KubernetesAuth">KubernetesAuth</h3>

*Appears on:* [KubernetesProvider](#external-secrets.io/v1beta1.KubernetesProvider)

| Field | Description |
| --- | --- |
| `cert`<br>*[CertAuth](#external-secrets.io/v1beta1.CertAuth)* | *(Optional)* Has both clientCert and clientKey as secretKeySelector. |
| `token`<br>*[TokenAuth](#external-secrets.io/v1beta1.TokenAuth)* | *(Optional)* Use static token to authenticate with. |
| `serviceAccount`<br>*[External Secrets meta/v1.ServiceAccountSelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#ServiceAccountSelector)* | *(Optional)* Points to a service account that should be used for authentication. |

<h3 id="external-secrets.io/v1beta1.KubernetesProvider">KubernetesProvider</h3>

*Appears on:* [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider)

Configures a store to sync secrets with a Kubernetes instance.

| Field | Description |
| --- | --- |
| `server`<br>*[KubernetesServer](#external-secrets.io/v1beta1.KubernetesServer)* | *(Optional)* Configures the Kubernetes server Address. |
| `auth`<br>*[KubernetesAuth](#external-secrets.io/v1beta1.KubernetesAuth)* | *(Optional)* Auth configures how secret manager authenticates with a Kubernetes instance. |
| `authRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | *(Optional)* A reference to a secret that contains the auth information. |
| `remoteNamespace`<br>*string* | *(Optional)* Remote namespace to fetch the secrets from. |

<h3 id="external-secrets.io/v1beta1.KubernetesServer">KubernetesServer</h3>

*Appears on:* [KubernetesProvider](#external-secrets.io/v1beta1.KubernetesProvider)

| Field | Description |
| --- | --- |
| `url`<br>*string* | *(Optional)* Configures the Kubernetes server Address. |
| `caBundle`<br>*[]byte* | *(Optional)* CABundle is a base64 encoded CA certificate. |
| `caProvider`<br>*[CAProvider](#external-secrets.io/v1beta1.CAProvider)* | *(Optional)* See <https://external-secrets.io/v0.4.1/spec/#external-secrets.io/v1alpha1.CAProvider>. |

<h3 id="external-secrets.io/v1beta1.MachineIdentityScopeInWorkspace">MachineIdentityScopeInWorkspace</h3>

*Appears on:* [InfisicalProvider](#external-secrets.io/v1beta1.InfisicalProvider)

| Field | Description |
| --- | --- |
| `secretsPath`<br>*string* | *(Optional)* |
| `recursive`<br>*bool* | *(Optional)* |
| `environmentSlug`<br>*string* | |
| `projectSlug`<br>*string* | |

<h3 id="external-secrets.io/v1beta1.NoSecretError">NoSecretError</h3>

NoSecretError shall be returned when a GetSecret can not find the desired secret. This is used for deletionPolicy.

<h3 id="external-secrets.io/v1beta1.NotModifiedError">NotModifiedError</h3>

NotModifiedError to signal that the webhook received no changes, and it should just return without doing anything.

<h3 id="external-secrets.io/v1beta1.OnboardbaseAuthSecretRef">OnboardbaseAuthSecretRef</h3>

*Appears on:* [OnboardbaseProvider](#external-secrets.io/v1beta1.OnboardbaseProvider)

OnboardbaseAuthSecretRef holds secret references for onboardbase API Key credentials.

| Field | Description |
| --- | --- |
| `apiKeyRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | OnboardbaseAPIKey is the APIKey generated by an admin account. It is used to recognize and authorize access to a project and environment within onboardbase. |
| `passcodeRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | OnboardbasePasscode is the passcode attached to the API Key. |

<h3 id="external-secrets.io/v1beta1.OnboardbaseProvider">OnboardbaseProvider</h3>

*Appears on:* [SecretStoreProvider](#external-secrets.io/v1beta1.SecretStoreProvider)

OnboardbaseProvider configures a store to sync secrets using the Onboardbase provider. Project and Config are required if not using a Service Token.

| Field | Description |
| --- | --- |
| `auth`<br>*[OnboardbaseAuthSecretRef](#external-secrets.io/v1beta1.OnboardbaseAuthSecretRef)* | Auth configures how the Operator authenticates with the Onboardbase API. |
| `apiHost`<br>*string* | APIHost: use this to configure the host url for the API for a self-hosted installation; default is <https://public.onboardbase.com/api/v1/>. |
| `project`<br>*string* | Project is an onboardbase project that the secrets should be pulled from. |
| `environment`<br>*string* | Environment is the name of an environment within a project to pull the secrets from. |

<h3 id="external-secrets.io/v1beta1.OnePasswordAuth">OnePasswordAuth</h3>

*Appears on:* [OnePasswordProvider](#external-secrets.io/v1beta1.OnePasswordProvider)

OnePasswordAuth contains a secretRef for credentials.

| Field | Description |
| --- | --- |
| `secretRef`<br>*[OnePasswordAuthSecretRef](#external-secrets.io/v1beta1.OnePasswordAuthSecretRef)* | |

<h3 id="external-secrets.io/v1beta1.OnePasswordAuthSecretRef">OnePasswordAuthSecretRef</h3>

*Appears on:* [OnePasswordAuth](#external-secrets.io/v1beta1.OnePasswordAuth)

OnePasswordAuthSecretRef holds secret references for 1Password credentials.

| Field | Description |
| --- | --- |
| `connectTokenSecretRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | The ConnectToken is used for authentication to a 1Password Connect Server. |

<h3 id="external-secrets.io/v1beta1.OnePasswordProvider">
### OnePasswordProvider

*Appears on:* [SecretStoreProvider](#secretstoreprovider)

OnePasswordProvider configures a store to sync secrets using the 1Password Secret Manager provider.

| Field | Description |
| --- | --- |
| `auth`<br>*[OnePasswordAuth](#onepasswordauth)* | Auth defines the information necessary to authenticate against the OnePassword Connect Server. |
| `connectHost`<br>*string* | ConnectHost defines the OnePassword Connect Server to connect to. |
| `vaults`<br>*map[string]int* | Vaults defines which OnePassword vaults to search, and in which order. |

### OracleAuth

*Appears on:* [OracleProvider](#oracleprovider)

| Field | Description |
| --- | --- |
| `tenancy`<br>*string* | Tenancy is the tenancy OCID where the user is located. |
| `user`<br>*string* | User is an access OCID specific to the account. |
| `secretRef`<br>*[OracleSecretRef](#oraclesecretref)* | SecretRef to pass through sensitive information. |
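To make the field layout of these provider blocks concrete, here is a minimal sketch of a `SecretStore` wired up with the 1Password provider. The resource name, Connect host, vault name, and the referenced `Secret` are illustrative assumptions, not fixed values:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: onepassword-store          # illustrative name
spec:
  provider:
    onepassword:
      connectHost: http://onepassword-connect:8080   # assumed in-cluster Connect service
      vaults:
        app-secrets: 1             # vault name -> search order
      auth:
        secretRef:
          connectTokenSecretRef:
            name: onepassword-connect-token          # assumed Secret holding the Connect token
            key: token
```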
### OraclePrincipalType (`string` alias)

*Appears on:* [OracleProvider](#oracleprovider)

| Value | Description |
| --- | --- |
| `"InstancePrincipal"` | InstancePrincipal represents an instance principal. |
| `"UserPrincipal"` | UserPrincipal represents a user principal. |
| `"Workload"` | WorkloadPrincipal represents a workload principal. |

### OracleProvider

*Appears on:* [SecretStoreProvider](#secretstoreprovider)

Configures a store to sync secrets using an Oracle Vault backend.

| Field | Description |
| --- | --- |
| `region`<br>*string* | Region is the region where the vault is located. |
| `vault`<br>*string* | Vault is the OCID of the specific vault where the secret is located. |
| `compartment`<br>*string* | *Optional*<br>Compartment is the vault compartment OCID. Required for PushSecret. |
| `encryptionKey`<br>*string* | *Optional*<br>EncryptionKey is the OCID of the encryption key within the vault. Required for PushSecret. |
| `principalType`<br>*[OraclePrincipalType](#oracleprincipaltype-string-alias)* | *Optional*<br>The type of principal to use for authentication. If left blank, the Auth struct will determine the principal type. This optional field must be specified if using workload identity. |
| `auth`<br>*[OracleAuth](#oracleauth)* | *Optional*<br>Auth configures how the secret manager authenticates with the Oracle Vault. If empty, the instance principal is used; otherwise, the user credentials specified in Auth. |
| `serviceAccountRef`<br>*[External Secrets meta/v1.ServiceAccountSelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#ServiceAccountSelector)* | *Optional*<br>ServiceAccountRef specifies the service account that should be used when authenticating with WorkloadIdentity. |
### OracleSecretRef

*Appears on:* [OracleAuth](#oracleauth)

| Field | Description |
| --- | --- |
| `privatekey`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | PrivateKey is the user's API Signing Key in PEM format, used for authentication. |
| `fingerprint`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | Fingerprint is the fingerprint of the API private key. |

### PassboltAuth

*Appears on:* [PassboltProvider](#passboltprovider)

PassboltAuth contains a secretRef for the Passbolt credentials.

| Field | Description |
| --- | --- |
| `passwordSecretRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* |  |
| `privateKeySecretRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* |  |

### PassboltProvider

*Appears on:* [SecretStoreProvider](#secretstoreprovider)

| Field | Description |
| --- | --- |
| `auth`<br>*[PassboltAuth](#passboltauth)* | Auth defines the information necessary to authenticate against the Passbolt server. |
| `host`<br>*string* | Host defines the Passbolt server to connect to. |

### PasswordDepotAuth

*Appears on:* [PasswordDepotProvider](#passworddepotprovider)

| Field | Description |
| --- | --- |
| `secretRef`<br>*[PasswordDepotSecretRef](#passworddepotsecretref)* |  |

### PasswordDepotProvider

*Appears on:* [SecretStoreProvider](#secretstoreprovider)

Configures a store to sync secrets with a Password Depot instance.

| Field | Description |
| --- | --- |
| `host`<br>*string* | URL configures the Password Depot instance URL. |
| `database`<br>*string* | Database to use as source. |
| `auth`<br>*[PasswordDepotAuth](#passworddepotauth)* | Auth configures how the secret manager authenticates with a Password Depot instance. |
### PasswordDepotSecretRef

*Appears on:* [PasswordDepotAuth](#passworddepotauth)

| Field | Description |
| --- | --- |
| `credentials`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | *Optional*<br>Username / Password is used for authentication. |

### PreviderAuth

*Appears on:* [PreviderProvider](#previderprovider)

PreviderAuth contains a secretRef for credentials.

| Field | Description |
| --- | --- |
| `secretRef`<br>*[PreviderAuthSecretRef](#previderauthsecretref)* | *Optional* |

### PreviderAuthSecretRef

*Appears on:* [PreviderAuth](#previderauth)

PreviderAuthSecretRef holds secret references for Previder Vault credentials.

| Field | Description |
| --- | --- |
| `accessToken`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | The AccessToken is used for authentication. |
### PreviderProvider

*Appears on:* [SecretStoreProvider](#secretstoreprovider)

PreviderProvider configures a store to sync secrets using the Previder Secret Manager provider.

| Field | Description |
| --- | --- |
| `auth`<br>*[PreviderAuth](#previderauth)* |  |
| `baseUri`<br>*string* | *Optional* |

### Provider

Provider is a common interface for interacting with secret backends.

### PulumiProvider

*Appears on:* [SecretStoreProvider](#secretstoreprovider)

| Field | Description |
| --- | --- |
| `apiUrl`<br>*string* | APIURL is the URL of the Pulumi API. |
| `accessToken`<br>*[PulumiProviderSecretRef](#pulumiprovidersecretref)* | AccessToken is the access token used to sign in to the Pulumi Cloud Console. |
| `organization`<br>*string* | An organization is a space to collaborate on shared projects and stacks. To create a new organization, visit [https://app.pulumi.com](https://app.pulumi.com) and click "New Organization". |
| `project`<br>*string* | Project is the name of the Pulumi ESC project the environment belongs to. |
| `environment`<br>*string* | Environments are YAML documents composed of static key-value pairs, programmatic expressions, dynamically retrieved values from supported providers (including all major clouds), and other Pulumi ESC environments. See [https://www.pulumi.com/docs/esc/environments/](https://www.pulumi.com/docs/esc/environments/) for more information on creating a new environment. |
### PulumiProviderSecretRef

*Appears on:* [PulumiProvider](#pulumiprovider)

| Field | Description |
| --- | --- |
| `secretRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | SecretRef is a reference to a secret containing the Pulumi API token. |

### PushSecretData

PushSecretData is an interface to allow using v1alpha1.PushSecretData content in a Provider registered in v1beta1.

### PushSecretRemoteRef

PushSecretRemoteRef is an interface to allow using v1alpha1.PushSecretRemoteRef in a Provider registered in v1beta1.

### ScalewayProvider

*Appears on:* [SecretStoreProvider](#secretstoreprovider)

| Field | Description |
| --- | --- |
| `apiUrl`<br>*string* | *Optional*<br>APIURL is the URL of the API to use. Defaults to [https://api.scaleway.com](https://api.scaleway.com). |
| `region`<br>*string* | Region where your secrets are located: [region and zone](https://developers.scaleway.com/en/quickstart/#region-and-zone). |
| `projectId`<br>*string* | ProjectID is the ID of your project, which you can find in the console: [https://console.scaleway.com/project/settings](https://console.scaleway.com/project/settings). |
| `accessKey`<br>*[ScalewayProviderSecretRef](#scalewayprovidersecretref)* | AccessKey is the non-secret part of the API key. |
| `secretKey`<br>*[ScalewayProviderSecretRef](#scalewayprovidersecretref)* | SecretKey is the secret part of the API key. |

### ScalewayProviderSecretRef

*Appears on:* [ScalewayProvider](#scalewayprovider)

| Field | Description |
| --- | --- |
| `value`<br>*string* | *Optional*<br>Value can be specified directly to set a value without using a secret. |
| `secretRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | *Optional*<br>SecretRef references a key in a secret that will be used as the value. |
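Because `ScalewayProviderSecretRef` accepts either an inline `value` or a `secretRef`, a store can mix both forms: inline for the non-secret access key, and a secret reference for the secret key. A sketch (the project ID, key values, and secret names are illustrative assumptions):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: scaleway-store               # illustrative name
spec:
  provider:
    scaleway:
      region: fr-par
      projectId: 00000000-0000-0000-0000-000000000000   # placeholder project ID
      accessKey:
        value: SCWXXXXXXXXXXXXXXXXX  # non-secret part, set inline
      secretKey:
        secretRef:                   # secret part, pulled from a Kubernetes Secret
          name: scaleway-credentials
          key: secret-key
```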
### SecretServerProvider

*Appears on:* [SecretStoreProvider](#secretstoreprovider)

See [https://github.com/DelineaXPM/tss-sdk-go/blob/main/server/server.go](https://github.com/DelineaXPM/tss-sdk-go/blob/main/server/server.go).

| Field | Description |
| --- | --- |
| `username`<br>*[SecretServerProviderRef](#secretserverproviderref)* | Username is the Secret Server account username. |
| `password`<br>*[SecretServerProviderRef](#secretserverproviderref)* | Password is the Secret Server account password. |
| `serverURL`<br>*string* | ServerURL is the URL of your Secret Server installation. |

### SecretServerProviderRef

*Appears on:* [SecretServerProvider](#secretserverprovider)

| Field | Description |
| --- | --- |
| `value`<br>*string* | *Optional*<br>Value can be specified directly to set a value without using a secret. |
| `secretRef`<br>*[External Secrets meta/v1.SecretKeySelector](https://pkg.go.dev/github.com/external-secrets/external-secrets/apis/meta/v1#SecretKeySelector)* | *Optional*<br>SecretRef references a key in a secret that will be used as the value. |

### SecretStore

SecretStore represents a secure external location for storing secrets, which can be referenced as part of `storeRef` fields.

| Field | Description |
| --- | --- |
| `metadata`<br>*[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#objectmeta-v1-meta)* | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec`<br>*[SecretStoreSpec](#secretstorespec)* | `controller` (*string*, optional): used to select the correct ESO controller (think: `ingress.ingressClassName`); the ESO controller is instantiated with a specific controller name and filters ES based on this property.<br>`provider` (*[SecretStoreProvider](#secretstoreprovider)*): used to configure the provider; only one provider may be set.<br>`retrySettings` (*[SecretStoreRetrySettings](#secretstoreretrysettings)*, optional): used to configure HTTP retries on failure.<br>`refreshInterval` (*int*, optional): store refresh interval in seconds; empty or 0 will default to the controller config.<br>`conditions` (*[][ClusterSecretStoreCondition](#clustersecretstorecondition)*, optional): used to constrain a ClusterSecretStore to specific namespaces; relevant only to ClusterSecretStore. |
| `status`<br>*[SecretStoreStatus](#secretstorestatus)* |  |
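A minimal sketch of the resource shape described above, using the static `fake` provider so it is self-contained; the name, interval, and retry values are illustrative assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: example-store                # illustrative name
spec:
  refreshInterval: 3600              # seconds; empty or 0 falls back to the controller config
  retrySettings:
    maxRetries: 5
    retryInterval: 10s
  provider:                          # exactly one provider may be set
    fake:                            # static key-value provider, useful for testing
      data:
        - key: /example
          value: hello
```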
### SecretStoreCapabilities (`string` alias)

*Appears on:* [SecretStoreStatus](#secretstorestatus)

SecretStoreCapabilities defines the possible operations a SecretStore can do.

| Value | Description |
| --- | --- |
| `"ReadOnly"` |  |
| `"ReadWrite"` |  |
| `"WriteOnly"` |  |

### SecretStoreConditionType (`string` alias)

*Appears on:* [SecretStoreStatusCondition](#secretstorestatuscondition)

| Value | Description |
| --- | --- |
| `"Ready"` |  |

### SecretStoreProvider

*Appears on:* [SecretStoreSpec](#secretstorespec)

SecretStoreProvider contains the provider-specific configuration. Only one provider may be set.

| Field | Description |
| --- | --- |
| `aws`<br>*[AWSProvider](#awsprovider)* | *Optional*<br>AWS configures this store to sync secrets using the AWS Secrets Manager provider. |
| `azurekv`<br>*[AzureKVProvider](#azurekvprovider)* | *Optional*<br>AzureKV configures this store to sync secrets using the Azure Key Vault provider. |
| `akeyless`<br>*[AkeylessProvider](#akeylessprovider)* | *Optional*<br>Akeyless configures this store to sync secrets using the Akeyless Vault provider. |
| `bitwardensecretsmanager`<br>*[BitwardenSecretsManagerProvider](#bitwardensecretsmanagerprovider)* | *Optional*<br>BitwardenSecretsManager configures this store to sync secrets using the BitwardenSecretsManager provider. |
| `vault`<br>*[VaultProvider](#vaultprovider)* | *Optional*<br>Vault configures this store to sync secrets using the HashiCorp Vault provider. |
| `gcpsm`<br>*[GCPSMProvider](#gcpsmprovider)* | *Optional*<br>GCPSM configures this store to sync secrets using the Google Cloud Platform Secret Manager provider. |
| `oracle`<br>*[OracleProvider](#oracleprovider)* | *Optional*<br>Oracle configures this store to sync secrets using the Oracle Vault provider. |
| `ibm`<br>*[IBMProvider](#ibmprovider)* | *Optional*<br>IBM configures this store to sync secrets using the IBM Cloud provider. |
| `yandexcertificatemanager`<br>*[YandexCertificateManagerProvider](#yandexcertificatemanagerprovider)* | *Optional*<br>YandexCertificateManager configures this store to sync secrets using the Yandex Certificate Manager provider. |
| `yandexlockbox`<br>*[YandexLockboxProvider](#yandexlockboxprovider)* | *Optional*<br>YandexLockbox configures this store to sync secrets using the Yandex Lockbox provider. |
| `gitlab`<br>*[GitlabProvider](#gitlabprovider)* | *Optional*<br>GitLab configures this store to sync secrets using the GitLab Variables provider. |
| `alibaba`<br>*[AlibabaProvider](#alibabaprovider)* | *Optional*<br>Alibaba configures this store to sync secrets using the Alibaba Cloud provider. |
| `onepassword`<br>*[OnePasswordProvider](#onepasswordprovider)* | *Optional*<br>OnePassword configures this store to sync secrets using the 1Password Cloud provider. |
| `webhook`<br>*[WebhookProvider](#webhookprovider)* | *Optional*<br>Webhook configures this store to sync secrets using a generic templated webhook. |
| `kubernetes`<br>*[KubernetesProvider](#kubernetesprovider)* | *Optional*<br>Kubernetes configures this store to sync secrets using a Kubernetes cluster provider. |
| `fake`<br>*[FakeProvider](#fakeprovider)* | *Optional*<br>Fake configures a store with static key-value pairs. |
| `senhasegura`<br>*[SenhaseguraProvider](#senhaseguraprovider)* | *Optional*<br>Senhasegura configures this store to sync secrets using the senhasegura provider. |
| `scaleway`<br>*[ScalewayProvider](#scalewayprovider)* | *Optional*<br>Scaleway configures this store to sync secrets using the Scaleway provider. |
| `doppler`<br>*[DopplerProvider](#dopplerprovider)* | *Optional*<br>Doppler configures this store to sync secrets using the Doppler provider. |
| `previder`<br>*[PreviderProvider](#previderprovider)* | *Optional*<br>Previder configures this store to sync secrets using the Previder provider. |
| `onboardbase`<br>*[OnboardbaseProvider](#onboardbaseprovider)* | *Optional*<br>Onboardbase configures this store to sync secrets using the Onboardbase provider. |
| `keepersecurity`<br>*[KeeperSecurityProvider](#keepersecurityprovider)* | *Optional*<br>KeeperSecurity configures this store to sync secrets using the KeeperSecurity provider. |
| `conjur`<br>*[ConjurProvider](#conjurprovider)* | *Optional*<br>Conjur configures this store to sync secrets using the Conjur provider. |
| `delinea`<br>*[DelineaProvider](#delineaprovider)* | *Optional*<br>Delinea DevOps Secrets Vault: [https://docs.delinea.com/online-help/products/devops-secrets-vault/current](https://docs.delinea.com/online-help/products/devops-secrets-vault/current). |
| `secretserver`<br>*[SecretServerProvider](#secretserverprovider)* | *Optional*<br>SecretServer configures this store to sync secrets using the SecretServer provider: [https://docs.delinea.com/online-help/secret-server/start.htm](https://docs.delinea.com/online-help/secret-server/start.htm). |
| `chef`<br>*[ChefProvider](#chefprovider)* | *Optional*<br>Chef configures this store to sync secrets with a Chef server. |
| `pulumi`<br>*[PulumiProvider](#pulumiprovider)* | *Optional*<br>Pulumi configures this store to sync secrets using the Pulumi provider. |
| `fortanix`<br>*[FortanixProvider](#fortanixprovider)* | *Optional*<br>Fortanix configures this store to sync secrets using the Fortanix provider. |
| `passworddepot`<br>*[PasswordDepotProvider](#passworddepotprovider)* | *Optional* |
| `passbolt`<br>*[PassboltProvider](#passboltprovider)* | *Optional* |
| `device42`<br>*[Device42Provider](#device42provider)* | *Optional*<br>Device42 configures this store to sync secrets using the Device42 provider. |
| `infisical`<br>*[InfisicalProvider](#infisicalprovider)* | *Optional*<br>Infisical configures this store to sync secrets using the Infisical provider. |
| `beyondtrust`<br>*[BeyondtrustProvider](#beyondtrustprovider)* | *Optional*<br>Beyondtrust configures this store to sync secrets using the Password Safe provider. |
### SecretStoreRef

*Appears on:* [ExternalSecretSpec](#externalsecretspec), [StoreGeneratorSourceRef](#storegeneratorsourceref), [StoreSourceRef](#storesourceref)

SecretStoreRef defines which SecretStore to fetch the ExternalSecret data from.

| Field | Description |
| --- | --- |
| `name`<br>*string* | Name of the SecretStore resource. |
| `kind`<br>*string* | *Optional*<br>Kind of the SecretStore resource (SecretStore or ClusterSecretStore). Defaults to `SecretStore`. |

### SecretStoreRetrySettings

*Appears on:* [SecretStoreSpec](#secretstorespec)

| Field | Description |
| --- | --- |
| `maxRetries`<br>*int32* |  |
| `retryInterval`<br>*string* |  |

### SecretStoreSpec

*Appears on:* [ClusterSecretStore](#clustersecretstore), [SecretStore](#secretstore)

SecretStoreSpec defines the desired state of SecretStore.

| Field | Description |
| --- | --- |
| `controller`<br>*string* | *Optional*<br>Used to select the correct ESO controller (think: `ingress.ingressClassName`). The ESO controller is instantiated with a specific controller name and filters ES based on this property. |
| `provider`<br>*[SecretStoreProvider](#secretstoreprovider)* | Used to configure the provider. Only one provider may be set. |
| `retrySettings`<br>*[SecretStoreRetrySettings](#secretstoreretrysettings)* | *Optional*<br>Used to configure HTTP retries on failure. |
| `refreshInterval`<br>*int* | *Optional*<br>Used to configure the store refresh interval in seconds. Empty or 0 will default to the controller config. |
| `conditions`<br>*[][ClusterSecretStoreCondition](#clustersecretstorecondition)* | *Optional*<br>Used to constrain a ClusterSecretStore to specific namespaces. Relevant only to ClusterSecretStore. |
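To show how a store is consumed, here is a sketch of an ExternalSecret pointing at a ClusterSecretStore through `secretStoreRef`; the resource names and remote key path are illustrative assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials               # illustrative name
spec:
  secretStoreRef:
    name: central-store              # assumed ClusterSecretStore name
    kind: ClusterSecretStore         # defaults to SecretStore when omitted
  target:
    name: db-credentials             # Kubernetes Secret to create
  data:
    - secretKey: password
      remoteRef:
        key: prod/db/password        # provider-specific key path
```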
configure http retries if failed  p    td    tr   tr   td   code refreshInterval  code   br   em  int   em    td   td   em  Optional   em   p Used to configure store refresh interval in seconds  Empty or 0 will default to the controller config   p    td    tr   tr   td   code conditions  code   br   em   a href   external secrets io v1beta1 ClusterSecretStoreCondition     ClusterSecretStoreCondition   a    em    td   td   em  Optional   em   p Used to constraint a ClusterSecretStore to specific namespaces  Relevant only to ClusterSecretStore  p    td    tr    tbody    table   h3 id  external secrets io v1beta1 SecretStoreStatus  SecretStoreStatus   h3   p    em Appears on   em   a href   external secrets io v1beta1 ClusterSecretStore  ClusterSecretStore  a     a href   external secrets io v1beta1 SecretStore  SecretStore  a     p   p   p SecretStoreStatus defines the observed state of the SecretStore   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code conditions  code   br   em   a href   external secrets io v1beta1 SecretStoreStatusCondition     SecretStoreStatusCondition   a    em    td   td   em  Optional   em    td    tr   tr   td   code capabilities  code   br   em   a href   external secrets io v1beta1 SecretStoreCapabilities   SecretStoreCapabilities   a    em    td   td   em  Optional   em    td    tr    tbody    table   h3 id  external secrets io v1beta1 SecretStoreStatusCondition  SecretStoreStatusCondition   h3   p    em Appears on   em   a href   external secrets io v1beta1 SecretStoreStatus  SecretStoreStatus  a     p   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code type  code   br   em   a href   external secrets io v1beta1 SecretStoreConditionType   SecretStoreConditionType   a    em    td   td    td    tr   tr   td   code status  code   br   em   a href  https   kubernetes io docs reference generated kubernetes api v1 25  conditionstatus v1 
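To make the SecretStoreSpec fields concrete, here is a minimal sketch of a SecretStore manifest. The store name and the use of the fake provider are illustrative assumptions, not part of the API reference above:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: example-store        # hypothetical name
  namespace: default
spec:
  controller: dev            # optional: only the ESO instance started with this controller name reconciles the store
  refreshInterval: 60        # optional: store refresh interval in seconds (empty or 0 falls back to the controller config)
  retrySettings:             # optional HTTP retry behavior
    maxRetries: 5
    retryInterval: 10s
  provider:                  # exactly one provider may be set
    fake:
      data:
        - key: /example/key
          value: example-value
```

The `conditions` field is omitted here because it is only relevant to ClusterSecretStore.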
### SecretStoreStatusCondition

*Appears on:* SecretStoreStatus

| Field | Type | Description |
|---|---|---|
| `type` | SecretStoreConditionType | |
| `status` | Kubernetes core/v1.ConditionStatus | |
| `reason` | string | *(Optional)* |
| `message` | string | *(Optional)* |
| `lastTransitionTime` | Kubernetes meta/v1.Time | *(Optional)* |

### SecretsClient

SecretsClient provides access to secrets.

### SecretsManager

*Appears on:* AWSProvider

SecretsManager defines how the provider behaves when interacting with AWS Secrets Manager. Some of these settings only control how secrets are deleted, and hence apply only to PushSecret (and only when `deletionPolicy` is set to `Delete`).

| Field | Type | Description |
|---|---|---|
| `forceDeleteWithoutRecovery` | bool | *(Optional)* Specifies whether to delete the secret without any recovery window. You can't use both this parameter and `recoveryWindowInDays` in the same call. If you use neither, Secrets Manager defaults to a 30-day recovery window. See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_DeleteSecret.html |
| `recoveryWindowInDays` | int64 | *(Optional)* The number of days (from 7 to 30) that Secrets Manager waits before permanently deleting the secret. You can't use both this parameter and `forceDeleteWithoutRecovery` in the same call. If you use neither, Secrets Manager defaults to a 30-day recovery window. See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_DeleteSecret.html |

### SenhaseguraAuth

*Appears on:* SenhaseguraProvider

SenhaseguraAuth tells the controller how to authenticate with senhasegura.

| Field | Type | Description |
|---|---|---|
| `clientId` | string | |
| `clientSecretSecretRef` | External Secrets meta/v1.SecretKeySelector | |

### SenhaseguraModuleType (`string` alias)

*Appears on:* SenhaseguraProvider

SenhaseguraModuleType enumerates the senhasegura target modules from which secrets can be fetched.

| Value | Description |
|---|---|
| `"DSM"` | SenhaseguraModuleDSM is the senhasegura DevOps Secrets Management module. See https://senhasegura.com/devops |

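As an illustration of the AWS `SecretsManager` deletion settings described earlier, an AWS SecretStore might carry them like this; the store name and region are assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-store            # hypothetical name
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-central-1   # hypothetical region
      secretsManager:
        # Only relevant for PushSecret with deletionPolicy: Delete.
        # Set either recoveryWindowInDays or forceDeleteWithoutRecovery, never both.
        recoveryWindowInDays: 7
```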
### SenhaseguraProvider

*Appears on:* SecretStoreProvider

SenhaseguraProvider sets up a store to sync secrets with senhasegura.

| Field | Type | Description |
|---|---|---|
| `url` | string | URL of senhasegura. |
| `module` | SenhaseguraModuleType | Module defines which senhasegura module should be used to get secrets. |
| `auth` | SenhaseguraAuth | Auth defines parameters to authenticate with senhasegura. |
| `ignoreSslCertificate` | bool | IgnoreSslCertificate defines whether the SSL certificate must be ignored. |

### StoreGeneratorSourceRef

*Appears on:* ExternalSecretDataFromRemoteRef

StoreGeneratorSourceRef allows you to override the source from which the secret will be pulled. You can define at most one property.

| Field | Type | Description |
|---|---|---|
| `storeRef` | SecretStoreRef | *(Optional)* |
| `generatorRef` | GeneratorRef | *(Optional)* GeneratorRef points to a generator custom resource. |

### StoreSourceRef

*Appears on:* ExternalSecretData
StoreSourceRef allows you to override the SecretStore source from which the secret will be pulled. You can define at most one property.

| Field | Type | Description |
|---|---|---|
| `storeRef` | SecretStoreRef | *(Optional)* |
| `generatorRef` | GeneratorRef | GeneratorRef points to a generator custom resource. Deprecated: the generatorRef is not implemented in `.data[]`; this will be removed with v1. |

### Tag

| Field | Type | Description |
|---|---|---|
| `key` | string | |
| `value` | string | |

### TemplateEngineVersion (`string` alias)

*Appears on:* ExternalSecretTemplate

| Value | Description |
|---|---|
| `"v1"` | |
| `"v2"` | |

### TemplateFrom

*Appears on:* ExternalSecretTemplate

| Field | Type | Description |
|---|---|---|
| `configMap` | TemplateRef | |
| `secret` | TemplateRef | |
| `target` | TemplateTarget | *(Optional)* |
| `literal` | string | *(Optional)* |

### TemplateMergePolicy (`string` alias)

*Appears on:* ExternalSecretTemplate

| Value | Description |
|---|---|
| `"Merge"` | |
| `"Replace"` | |

### TemplateRef

*Appears on:* TemplateFrom

| Field | Type | Description |
|---|---|---|
| `name` | string | The name of the ConfigMap/Secret resource. |
| `items` | []TemplateRefItem | A list of keys in the ConfigMap/Secret to use as templates for Secret data. |

### TemplateRefItem

*Appears on:* TemplateRef

| Field | Type | Description |
|---|---|---|
| `key` | string | A key in the ConfigMap/Secret. |
| `templateAs` | TemplateScope | |

### TemplateScope (`string` alias)

*Appears on:* TemplateRefItem

| Value | Description |
|---|---|
| `"KeysAndValues"` | |
| `"Values"` | |

### TemplateTarget (`string` alias)

*Appears on:* TemplateFrom

| Value | Description |
|---|---|
| `"Annotations"` | |
| `"Data"` | |
| `"Labels"` | |

### TokenAuth

*Appears on:* KubernetesAuth

| Field | Type | Description |
|---|---|---|
| `bearerToken` | External Secrets meta/v1.SecretKeySelector | |

### UniversalAuthCredentials

*Appears on:* InfisicalAuth

| Field | Type | Description |
|---|---|---|
| `clientId` | External Secrets meta/v1.SecretKeySelector | |
| `clientSecret` | External Secrets meta/v1.SecretKeySelector | |

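To show how TemplateFrom, TemplateRef, TemplateTarget, and TemplateScope fit together, here is a sketch of an ExternalSecret that renders a template pulled from a ConfigMap. All resource names and the template key are illustrative assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: templated-secret     # hypothetical name
spec:
  secretStoreRef:
    name: example-store      # hypothetical SecretStore
    kind: SecretStore
  target:
    name: rendered-secret
    template:
      engineVersion: v2
      mergePolicy: Replace
      templateFrom:
        - target: Data               # one of Annotations | Data | Labels
          configMap:
            name: my-template        # hypothetical ConfigMap
            items:
              - key: config.tpl      # key in the ConfigMap used as template
                templateAs: Values   # or KeysAndValues
  data:
    - secretKey: password
      remoteRef:
        key: /example/password
```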
### ValidationResult (`byte` alias)

| Value | Description |
|---|---|
| 0 | Ready indicates that the client is configured correctly and can be used. |
| 1 | Unknown indicates that the client can be used but information is missing and it cannot be validated. |
| 2 | Error indicates that there is a misconfiguration. |

### VaultAppRole

*Appears on:* VaultAuth

VaultAppRole authenticates with Vault using the AppRole auth mechanism, with the role and secret stored in a Kubernetes Secret resource.

| Field | Type | Description |
|---|---|---|
| `path` | string | Path where the AppRole authentication backend is mounted in Vault, e.g. "approle". |
| `roleId` | string | *(Optional)* RoleID configured in the AppRole authentication backend when setting up the authentication backend in Vault. |
| `roleRef` | External Secrets meta/v1.SecretKeySelector | *(Optional)* Reference to a key in a Secret that contains the AppRole ID used to authenticate with Vault. The `key` field must be specified and denotes which entry within the Secret resource is used as the AppRole ID. |
| `secretRef` | External Secrets meta/v1.SecretKeySelector | Reference to a key in a Secret that contains the AppRole secret used to authenticate with Vault. The `key` field must be specified and denotes which entry within the Secret resource is used as the AppRole secret. |

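A minimal sketch of the AppRole method wired into a Vault SecretStore; the store name, Secret name, and keys are assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-approle        # hypothetical name
spec:
  provider:
    vault:
      server: https://vault.example.com:8200
      path: secret
      version: v2
      auth:
        appRole:
          path: approle                # mount path of the AppRole backend
          roleRef:                     # or set the plain roleId field instead
            name: vault-approle-creds  # hypothetical Secret
            key: role-id
          secretRef:
            name: vault-approle-creds
            key: secret-id
```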
### VaultAuth

*Appears on:* VaultProvider

VaultAuth is the configuration used to authenticate with a Vault server. Only one of `tokenSecretRef`, `appRole`, `kubernetes`, `ldap`, `userPass`, `jwt` or `cert` can be specified. A namespace to authenticate against can optionally be specified.

| Field | Type | Description |
|---|---|---|
| `namespace` | string | *(Optional)* Name of the Vault namespace to authenticate to. This can be different from the namespace your secret is in. Namespaces is a set of features within Vault Enterprise that allows Vault environments to support Secure Multi-tenancy, e.g. "ns1". More about namespaces: https://www.vaultproject.io/docs/enterprise/namespaces. This defaults to the Vault provider's `namespace` field if set, or empty otherwise. |
| `tokenSecretRef` | External Secrets meta/v1.SecretKeySelector | *(Optional)* TokenSecretRef authenticates with Vault by presenting a token. |
| `appRole` | VaultAppRole | *(Optional)* AppRole authenticates with Vault using the AppRole auth mechanism, with the role and secret stored in a Kubernetes Secret resource. |
| `kubernetes` | VaultKubernetesAuth | *(Optional)* Kubernetes authenticates with Vault by passing the ServiceAccount token stored in the named Secret resource to the Vault server. |
| `ldap` | VaultLdapAuth | *(Optional)* Ldap authenticates with Vault by passing a username/password pair using the LDAP authentication method. |
| `jwt` | VaultJwtAuth | *(Optional)* Jwt authenticates with Vault by passing a role and JWT token using the JWT/OIDC authentication method. |
| `cert` | VaultCertAuth | *(Optional)* Cert authenticates with TLS certificates by passing a client certificate, private key and CA certificate (Cert authentication method). |
| `iam` | VaultIamAuth | *(Optional)* Iam authenticates with Vault by passing a special AWS request signed with AWS IAM credentials (AWS IAM authentication method). |
| `userPass` | VaultUserPassAuth | *(Optional)* UserPass authenticates with Vault by passing a username/password pair. |

### VaultAwsAuth

VaultAwsAuth tells the controller how to authenticate with AWS. Only one of `secretRef` or `jwt` can be specified; if neither is specified, the controller will try to load credentials from its own service account, assuming it is IRSA enabled.

| Field | Type | Description |
|---|---|---|
| `secretRef` | VaultAwsAuthSecretRef | *(Optional)* |
| `jwt` | VaultAwsJWTAuth | *(Optional)* |

### VaultAwsAuthSecretRef

*Appears on:* VaultAwsAuth, VaultIamAuth

VaultAwsAuthSecretRef holds secret references for AWS credentials; both AccessKeyID and SecretAccessKey must be defined in order to properly authenticate.

| Field | Type | Description |
|---|---|---|
| `accessKeyIDSecretRef` | External Secrets meta/v1.SecretKeySelector | The AccessKeyID is used for authentication. |
| `secretAccessKeySecretRef` | External Secrets meta/v1.SecretKeySelector | The SecretAccessKey is used for authentication. |
| `sessionTokenSecretRef` | External Secrets meta/v1.SecretKeySelector | The SessionToken used for authentication. This must be defined if AccessKeyID and SecretAccessKey are temporary credentials. See https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use_resources.html |

### VaultAwsJWTAuth

*Appears on:* VaultAwsAuth, VaultIamAuth

Authenticate against AWS using service account tokens.

| Field | Type | Description |
|---|---|---|
| `serviceAccountRef` | External Secrets meta/v1.ServiceAccountSelector | |

### VaultCertAuth

*Appears on:* VaultAuth

VaultCertAuth authenticates with Vault using the TLS certificate (Cert) authentication method, with the client certificate and private key stored in Kubernetes Secret resources.

| Field | Type | Description |
|---|---|---|
| `clientCert` | External Secrets meta/v1.SecretKeySelector | *(Optional)* ClientCert is a certificate to authenticate using the Cert Vault authentication method. |
| `secretRef` | External Secrets meta/v1.SecretKeySelector | SecretRef to a key in a Secret resource containing the client private key to authenticate with Vault using the Cert authentication method. |

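A sketch of the Cert method on a Vault SecretStore; the store and Secret names are assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-cert           # hypothetical name
spec:
  provider:
    vault:
      server: https://vault.example.com:8200
      path: secret
      version: v2
      auth:
        cert:
          clientCert:              # certificate presented to Vault
            name: vault-client-tls # hypothetical Secret
            key: tls.crt
          secretRef:               # matching client private key
            name: vault-client-tls
            key: tls.key
```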
### VaultClientTLS

*Appears on:* VaultProvider

VaultClientTLS is the configuration used for client-side TLS communication when the Vault server requires mutual authentication.

| Field | Type | Description |
|---|---|---|
| `certSecretRef` | External Secrets meta/v1.SecretKeySelector | CertSecretRef is a certificate added to the transport layer when communicating with the Vault server. If no key for the Secret is specified, external-secrets will default to 'tls.crt'. |
| `keySecretRef` | External Secrets meta/v1.SecretKeySelector | KeySecretRef points to a key in a Secret resource containing the client private key added to the transport layer when communicating with the Vault server. If no key for the Secret is specified, external-secrets will default to 'tls.key'. |

### VaultIamAuth

*Appears on:* VaultAuth

VaultIamAuth authenticates with Vault using Vault's AWS IAM authentication method. See https://developer.hashicorp.com/vault/docs/auth/aws

| Field | Type | Description |
|---|---|---|
| `path` | string | Path where the AWS auth method is enabled in Vault, e.g. "aws". |
| `region` | string | AWS region. |
| `role` | string | This is the AWS role to be assumed before talking to Vault. |
| `vaultRole` | string | Vault role. In Vault, a role describes an identity with a set of permissions, groups, or policies you want to attach to a user of the secrets engine. |
| `externalID` | string | AWS External ID set on assumed IAM roles. |
| `vaultAwsIamServerID` | string | X-Vault-AWS-IAM-Server-ID is an additional header used by the Vault IAM auth method to mitigate different types of replay attacks. More details: https://developer.hashicorp.com/vault/docs/auth/aws |
| `secretRef` | VaultAwsAuthSecretRef | *(Optional)* Specify credentials in a Secret object. |
| `jwt` | VaultAwsJWTAuth | *(Optional)* Specify a service account with IRSA enabled. |

### VaultJwtAuth

*Appears on:* VaultAuth

VaultJwtAuth authenticates with Vault using the JWT/OIDC authentication method, with the role name and a token stored in a Kubernetes Secret resource or a Kubernetes service account token retrieved via `TokenRequest`.

| Field | Type | Description |
|---|---|---|
| `path` | string | Path where the JWT authentication backend is mounted in Vault, e.g. "jwt". |
| `role` | string | *(Optional)* Role is a JWT role to authenticate using the JWT/OIDC Vault authentication method. |
| `secretRef` | External Secrets meta/v1.SecretKeySelector | *(Optional)* Optional SecretRef that refers to a key in a Secret resource containing the JWT token to authenticate with Vault using the JWT/OIDC authentication method. |
| `kubernetesServiceAccountToken` | VaultKubernetesServiceAccountTokenAuth | *(Optional)* Optional ServiceAccountToken specifies the Kubernetes service account for which to request a token with the `TokenRequest` API. |

### VaultKVStoreVersion (`string` alias)

*Appears on:* VaultProvider

| Value | Description |
|---|---|
| `"v1"` | |
| `"v2"` | |

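A sketch of the JWT/OIDC method using a service account token requested via `TokenRequest`; the store name, Vault role, and ServiceAccount name are assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-jwt            # hypothetical name
spec:
  provider:
    vault:
      server: https://vault.example.com:8200
      path: secret
      version: v2
      auth:
        jwt:
          path: jwt                  # mount path of the JWT backend
          role: my-jwt-role          # hypothetical Vault JWT role
          kubernetesServiceAccountToken:
            serviceAccountRef:
              name: my-app           # hypothetical ServiceAccount
```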
### VaultKubernetesAuth

*Appears on:* VaultAuth

Authenticate against Vault using a Kubernetes ServiceAccount token stored in a Secret.

| Field | Type | Description |
|---|---|---|
| `mountPath` | string | Path where the Kubernetes authentication backend is mounted in Vault, e.g. "kubernetes". |
| `serviceAccountRef` | External Secrets meta/v1.ServiceAccountSelector | *(Optional)* Optional service account field containing the name of a Kubernetes ServiceAccount. If the service account is specified, the service account secret token JWT will be used for authenticating with Vault. If the service account selector is not supplied, the `secretRef` will be used instead. |
| `secretRef` | External Secrets meta/v1.SecretKeySelector | *(Optional)* Optional secret field containing a Kubernetes ServiceAccount JWT used for authenticating with Vault. If a name is specified without a key, `token` is the default. If one is not specified, the one bound to the controller will be used. |
| `role` | string | A required field containing the Vault role to assume. A role binds a Kubernetes ServiceAccount with a set of Vault policies. |

### VaultKubernetesServiceAccountTokenAuth

*Appears on:* VaultJwtAuth

VaultKubernetesServiceAccountTokenAuth authenticates with Vault using a temporary Kubernetes service account token retrieved by the `TokenRequest` API.

| Field | Type | Description |
|---|---|---|
| `serviceAccountRef` | External Secrets meta/v1.ServiceAccountSelector | Service account field containing the name of a Kubernetes ServiceAccount. |
| `audiences` | []string | *(Optional)* Optional audiences field that will be used to request a temporary Kubernetes service account token for the service account referenced by `serviceAccountRef`. Defaults to a single audience `vault` if not specified. Deprecated: use `serviceAccountRef.audiences` instead. |
| `expirationSeconds` | int64 | *(Optional)* Optional expiration time in seconds that will be used to request a temporary Kubernetes service account token for the service account referenced by `serviceAccountRef`. Defaults to 10 minutes. Deprecated: this will be removed in the future. |

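The Kubernetes auth method above might be configured like this; the store name, Vault role, and ServiceAccount name are assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-k8s            # hypothetical name
spec:
  provider:
    vault:
      server: https://vault.example.com:8200
      path: secret
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes    # mount path of the Kubernetes auth backend
          role: eso-role           # hypothetical Vault role bound to the ServiceAccount
          serviceAccountRef:
            name: external-secrets # hypothetical ServiceAccount
```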
SecretRef to a key in a Secret resource containing password for the LDAP user used to authenticate with Vault using the LDAP authentication method  p    td    tr    tbody    table   h3 id  external secrets io v1beta1 VaultProvider  VaultProvider   h3   p    em Appears on   em   a href   external secrets io v1beta1 SecretStoreProvider  SecretStoreProvider  a     p   p   p Configures an store to sync secrets using a HashiCorp Vault KV backend   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code auth  code   br   em   a href   external secrets io v1beta1 VaultAuth   VaultAuth   a    em    td   td   p Auth configures how secret manager authenticates with the Vault server   p    td    tr   tr   td   code server  code   br   em  string   em    td   td   p Server is the connection address for the Vault server  e g   ldquo  a href  https   vault example com 8200 quot   https   vault example com 8200 rdquo   a    p    td    tr   tr   td   code path  code   br   em  string   em    td   td   em  Optional   em   p Path is the mount path of the Vault KV backend endpoint  e g   ldquo secret rdquo   The v2 KV secret engine version specific  ldquo  data rdquo  path suffix for fetching secrets from Vault is optional and will be appended if not present in specified path   p    td    tr   tr   td   code version  code   br   em   a href   external secrets io v1beta1 VaultKVStoreVersion   VaultKVStoreVersion   a    em    td   td   p Version is the Vault KV secret engine version  This can be either  ldquo v1 rdquo  or  ldquo v2 rdquo   Version defaults to  ldquo v2 rdquo    p    td    tr   tr   td   code namespace  code   br   em  string   em    td   td   em  Optional   em   p Name of the vault namespace  Namespaces is a set of features within Vault Enterprise that allows Vault environments to support Secure Multi tenancy  e g   ldquo ns1 rdquo   More about namespaces can be found here  a href  https   www vaultproject io docs 
enterprise namespaces  https   www vaultproject io docs enterprise namespaces  a   p    td    tr   tr   td   code caBundle  code   br   em    byte   em    td   td   em  Optional   em   p PEM encoded CA bundle used to validate Vault server certificate  Only used if the Server URL is using HTTPS protocol  This parameter is ignored for plain HTTP protocol connection  If not set the system root certificates are used to validate the TLS connection   p    td    tr   tr   td   code tls  code   br   em   a href   external secrets io v1beta1 VaultClientTLS   VaultClientTLS   a    em    td   td   em  Optional   em   p The configuration used for client side related TLS communication  when the Vault server requires mutual authentication  Only used if the Server URL is using HTTPS protocol  This parameter is ignored for plain HTTP protocol connection  It rsquo s worth noting this configuration is different from the  ldquo TLS certificates auth method rdquo   which is available under the  code auth cert  code  section   p    td    tr   tr   td   code caProvider  code   br   em   a href   external secrets io v1beta1 CAProvider   CAProvider   a    em    td   td   em  Optional   em   p The provider for the CA bundle to use to validate Vault server certificate   p    td    tr   tr   td   code readYourWrites  code   br   em  bool   em    td   td   em  Optional   em   p ReadYourWrites ensures isolated read after write semantics by providing discovered cluster replication states in each request  More information about eventual consistency in Vault can be found here  a href  https   www vaultproject io docs enterprise consistency  https   www vaultproject io docs enterprise consistency  a   p    td    tr   tr   td   code forwardInconsistent  code   br   em  bool   em    td   td   em  Optional   em   p ForwardInconsistent tells Vault to forward read after write requests to the Vault leader instead of simply retrying within a loop  This can increase performance if the option is enabled 
serverside   a href  https   www vaultproject io docs configuration replication allow forwarding via header  https   www vaultproject io docs configuration replication allow forwarding via header  a   p    td    tr   tr   td   code headers  code   br   em  map string string   em    td   td   em  Optional   em   p Headers to be added in Vault request  p    td    tr    tbody    table   h3 id  external secrets io v1beta1 VaultUserPassAuth  VaultUserPassAuth   h3   p    em Appears on   em   a href   external secrets io v1beta1 VaultAuth  VaultAuth  a     p   p   p VaultUserPassAuth authenticates with Vault using UserPass authentication method  with the username and password stored in a Kubernetes Secret resource   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code path  code   br   em  string   em    td   td   p Path where the UserPassword authentication backend is mounted in Vault  e g   ldquo user rdquo   p    td    tr   tr   td   code username  code   br   em  string   em    td   td   p Username is a user name used to authenticate using the UserPass Vault authentication method  p    td    tr   tr   td   code secretRef  code   br   em   a href  https   pkg go dev github com external secrets external secrets apis meta v1 SecretKeySelector   External Secrets meta v1 SecretKeySelector   a    em    td   td   p SecretRef to a key in a Secret resource containing password for the user used to authenticate with Vault using the UserPass authentication method  p    td    tr    tbody    table   h3 id  external secrets io v1beta1 WebhookCAProvider  WebhookCAProvider   h3   p    em Appears on   em   a href   external secrets io v1beta1 WebhookProvider  WebhookProvider  a     p   p   p Defines a location to fetch the cert for the webhook provider from   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code type  code   br   em   a href   external secrets io v1beta1 
WebhookCAProviderType   WebhookCAProviderType   a    em    td   td   p The type of provider to use such as  ldquo Secret rdquo   or  ldquo ConfigMap rdquo    p    td    tr   tr   td   code name  code   br   em  string   em    td   td   p The name of the object located at the provider type   p    td    tr   tr   td   code key  code   br   em  string   em    td   td   p The key where the CA certificate can be found in the Secret or ConfigMap   p    td    tr   tr   td   code namespace  code   br   em  string   em    td   td   em  Optional   em   p The namespace the Provider type is in   p    td    tr    tbody    table   h3 id  external secrets io v1beta1 WebhookCAProviderType  WebhookCAProviderType   code string  code  alias   p   h3   p    em Appears on   em   a href   external secrets io v1beta1 WebhookCAProvider  WebhookCAProvider  a     p   p    p   table   thead   tr   th Value  th   th Description  th    tr    thead   tbody  tr  td  p   34 ConfigMap  34   p   td   td   td    tr  tr  td  p   34 Secret  34   p   td   td   td    tr   tbody    table   h3 id  external secrets io v1beta1 WebhookProvider  WebhookProvider   h3   p    em Appears on   em   a href   external secrets io v1beta1 SecretStoreProvider  SecretStoreProvider  a     p   p   p AkeylessProvider Configures an store to sync secrets using Akeyless KV   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code method  code   br   em  string   em    td   td   p Webhook Method  p    td    tr   tr   td   code url  code   br   em  string   em    td   td   p Webhook url to call  p    td    tr   tr   td   code headers  code   br   em  map string string   em    td   td   em  Optional   em   p Headers  p    td    tr   tr   td   code body  code   br   em  string   em    td   td   em  Optional   em   p Body  p    td    tr   tr   td   code timeout  code   br   em   a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   Kubernetes meta v1 Duration   a 
   em    td   td   em  Optional   em   p Timeout  p    td    tr   tr   td   code result  code   br   em   a href   external secrets io v1beta1 WebhookResult   WebhookResult   a    em    td   td   p Result formatting  p    td    tr   tr   td   code secrets  code   br   em   a href   external secrets io v1beta1 WebhookSecret     WebhookSecret   a    em    td   td   em  Optional   em   p Secrets to fill in templates These secrets will be passed to the templating function as key value pairs under the given name  p    td    tr   tr   td   code caBundle  code   br   em    byte   em    td   td   em  Optional   em   p PEM encoded CA bundle used to validate webhook server certificate  Only used if the Server URL is using HTTPS protocol  This parameter is ignored for plain HTTP protocol connection  If not set the system root certificates are used to validate the TLS connection   p    td    tr   tr   td   code caProvider  code   br   em   a href   external secrets io v1beta1 WebhookCAProvider   WebhookCAProvider   a    em    td   td   em  Optional   em   p The provider for the CA bundle to use to validate webhook server certificate   p    td    tr    tbody    table   h3 id  external secrets io v1beta1 WebhookResult  WebhookResult   h3   p    em Appears on   em   a href   external secrets io v1beta1 WebhookProvider  WebhookProvider  a     p   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code jsonPath  code   br   em  string   em    td   td   em  Optional   em   p Json path of return value  p    td    tr    tbody    table   h3 id  external secrets io v1beta1 WebhookSecret  WebhookSecret   h3   p    em Appears on   em   a href   external secrets io v1beta1 WebhookProvider  WebhookProvider  a     p   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code   br   em  string   em    td   td   p Name of this secret in templates  p    td    tr   tr   td   code secretRef  
code   br   em   a href  https   pkg go dev github com external secrets external secrets apis meta v1 SecretKeySelector   External Secrets meta v1 SecretKeySelector   a    em    td   td   p Secret ref to fill in credentials  p    td    tr    tbody    table   h3 id  external secrets io v1beta1 YandexCertificateManagerAuth  YandexCertificateManagerAuth   h3   p    em Appears on   em   a href   external secrets io v1beta1 YandexCertificateManagerProvider  YandexCertificateManagerProvider  a     p   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code authorizedKeySecretRef  code   br   em   a href  https   pkg go dev github com external secrets external secrets apis meta v1 SecretKeySelector   External Secrets meta v1 SecretKeySelector   a    em    td   td   em  Optional   em   p The authorized key used for authentication  p    td    tr    tbody    table   h3 id  external secrets io v1beta1 YandexCertificateManagerCAProvider  YandexCertificateManagerCAProvider   h3   p    em Appears on   em   a href   external secrets io v1beta1 YandexCertificateManagerProvider  YandexCertificateManagerProvider  a     p   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code certSecretRef  code   br   em   a href  https   pkg go dev github com external secrets external secrets apis meta v1 SecretKeySelector   External Secrets meta v1 SecretKeySelector   a    em    td   td    td    tr    tbody    table   h3 id  external secrets io v1beta1 YandexCertificateManagerProvider  YandexCertificateManagerProvider   h3   p    em Appears on   em   a href   external secrets io v1beta1 SecretStoreProvider  SecretStoreProvider  a     p   p   p YandexCertificateManagerProvider Configures a store to sync secrets using the Yandex Certificate Manager provider   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code apiEndpoint  code   br   em  string   em  
  td   td   em  Optional   em   p Yandex Cloud API endpoint  e g   lsquo api cloud yandex net 443 rsquo    p    td    tr   tr   td   code auth  code   br   em   a href   external secrets io v1beta1 YandexCertificateManagerAuth   YandexCertificateManagerAuth   a    em    td   td   p Auth defines the information necessary to authenticate against Yandex Certificate Manager  p    td    tr   tr   td   code caProvider  code   br   em   a href   external secrets io v1beta1 YandexCertificateManagerCAProvider   YandexCertificateManagerCAProvider   a    em    td   td   em  Optional   em   p The provider for the CA bundle to use to validate Yandex Cloud server certificate   p    td    tr    tbody    table   h3 id  external secrets io v1beta1 YandexLockboxAuth  YandexLockboxAuth   h3   p    em Appears on   em   a href   external secrets io v1beta1 YandexLockboxProvider  YandexLockboxProvider  a     p   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code authorizedKeySecretRef  code   br   em   a href  https   pkg go dev github com external secrets external secrets apis meta v1 SecretKeySelector   External Secrets meta v1 SecretKeySelector   a    em    td   td   em  Optional   em   p The authorized key used for authentication  p    td    tr    tbody    table   h3 id  external secrets io v1beta1 YandexLockboxCAProvider  YandexLockboxCAProvider   h3   p    em Appears on   em   a href   external secrets io v1beta1 YandexLockboxProvider  YandexLockboxProvider  a     p   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code certSecretRef  code   br   em   a href  https   pkg go dev github com external secrets external secrets apis meta v1 SecretKeySelector   External Secrets meta v1 SecretKeySelector   a    em    td   td    td    tr    tbody    table   h3 id  external secrets io v1beta1 YandexLockboxProvider  YandexLockboxProvider   h3   p    em Appears on   em   a href   external 
secrets io v1beta1 SecretStoreProvider  SecretStoreProvider  a     p   p   p YandexLockboxProvider Configures a store to sync secrets using the Yandex Lockbox provider   p    p   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code apiEndpoint  code   br   em  string   em    td   td   em  Optional   em   p Yandex Cloud API endpoint  e g   lsquo api cloud yandex net 443 rsquo    p    td    tr   tr   td   code auth  code   br   em   a href   external secrets io v1beta1 YandexLockboxAuth   YandexLockboxAuth   a    em    td   td   p Auth defines the information necessary to authenticate against Yandex Lockbox  p    td    tr   tr   td   code caProvider  code   br   em   a href   external secrets io v1beta1 YandexLockboxCAProvider   YandexLockboxCAProvider   a    em    td   td   em  Optional   em   p The provider for the CA bundle to use to validate Yandex Cloud server certificate   p    td    tr    tbody    table   hr    p  em  Generated with  code gen crd api reference docs  code     em   p "}
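The `VaultProvider` fields documented above (`server`, `path`, `version`, and the `auth` block with its Kubernetes method) combine into a `SecretStore` manifest. Below is a minimal hypothetical sketch, assuming a Vault server at `https://vault.example.com:8200`, a KV v2 engine mounted at `secret`, the Kubernetes auth backend mounted at `kubernetes`, and a Vault role and ServiceAccount both named `external-secrets`:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      # Connection address for the Vault server (assumed value)
      server: "https://vault.example.com:8200"
      # Mount path of the KV backend; for v2 the "/data" suffix is appended if absent
      path: "secret"
      # KV secret engine version, "v1" or "v2" (defaults to "v2")
      version: "v2"
      auth:
        kubernetes:
          # Path where the Kubernetes auth backend is mounted in Vault
          mountPath: "kubernetes"
          # Vault role binding a ServiceAccount to a set of Vault policies
          role: "external-secrets"
          # If omitted, the secretRef JWT is used for authentication instead
          serviceAccountRef:
            name: "external-secrets"
```

Optional fields from the reference above, such as `namespace`, `caBundle`, or `caProvider`, can sit alongside `server` when talking to Vault Enterprise or a server with a private CA.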
{"questions":"external-dns NOTE Your Pi hole must be running Pi hole Pi hole has an internal list it checks last when resolving requests This list can contain any number of arbitrary A AAAA or CNAME records Deploy ExternalDNS This tutorial describes how to setup ExternalDNS to sync records with Pi hole s Custom DNS There is a pseudo API exposed that ExternalDNS is able to use to manage these records","answers":"# Pi-hole\n\nThis tutorial describes how to set up ExternalDNS to sync records with Pi-hole's Custom DNS.\nPi-hole has an internal list it checks last when resolving requests. This list can contain any number of arbitrary A, AAAA or CNAME records.\nThere is a pseudo-API exposed that ExternalDNS is able to use to manage these records.\n\n__NOTE:__ Your Pi-hole must be running [version 5.9 or newer](https:\/\/pi-hole.net\/blog\/2022\/02\/12\/pi-hole-ftl-v5-14-web-v5-11-and-core-v5-9-released).\n\n\n## Deploy ExternalDNS\n\nYou can skip to the [manifest](#externaldns-manifest) if authentication is disabled on your Pi-hole instance or you don't want to use secrets.\n\nIf your Pi-hole server's admin dashboard is protected by a password, you'll likely want to create a secret first containing its value. \nThis is optional since you _do_ retain the option to pass it as a flag with `--pihole-password`.\n\nYou can create the secret with:\n\n```bash\nkubectl create secret generic pihole-password \\\n    --from-literal EXTERNAL_DNS_PIHOLE_PASSWORD=supersecret\n```\n\nReplacing **\"supersecret\"** with the actual password to your Pi-hole server.\n\n### ExternalDNS Manifest\n\nApply the following manifest to deploy ExternalDNS, editing values for your environment accordingly. 
\nBe sure to change the namespace in the `ClusterRoleBinding` if you are using a namespace other than **default**.\n\n```yaml\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\",\"watch\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        # If authentication is disabled and\/or you didn't create\n        # a secret, you can remove this block.\n        envFrom:\n        - secretRef:\n            # Change this if you gave the secret a different name\n            name: pihole-password\n        args:\n        - --source=service\n        - --source=ingress\n        # Pihole only supports A\/AAAA\/CNAME records so there is no mechanism to track ownership.\n        # You don't need to set this flag, but if you leave it unset, you will receive warning\n        # logs when ExternalDNS attempts to create TXT records.\n        - --registry=noop\n        # IMPORTANT: If you have records that you manage manually 
in Pi-hole, set\n        # the policy to upsert-only so they do not get deleted.\n        - --policy=upsert-only\n        - --provider=pihole\n        # Change this to the actual address of your Pi-hole web server\n        - --pihole-server=http:\/\/pihole-web.pihole.svc.cluster.local\n      securityContext:\n        fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes token files\n```\n\n### Arguments\n\n - `--pihole-server (env: EXTERNAL_DNS_PIHOLE_SERVER)` - The address of the Pi-hole web server\n - `--pihole-password (env: EXTERNAL_DNS_PIHOLE_PASSWORD)` - The password to the Pi-hole web server (if enabled)\n - `--pihole-tls-skip-verify (env: EXTERNAL_DNS_PIHOLE_TLS_SKIP_VERIFY)` - Skip verification of any TLS certificates served by the Pi-hole web server.\n\n## Verify ExternalDNS Works\n\n### Ingress Example\n\nCreate an Ingress resource. ExternalDNS will use the hostname specified in the Ingress object.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: foo\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: foo.bar.com\n    http:\n      paths:\n      - path: \/\n        pathType: Prefix\n        backend:\n          service:\n            name: foo\n            port:\n              number: 80\n```\n\n### Service Example\n\nThe below sample application can be used to verify Services work.\nFor services ExternalDNS will look for the annotation `external-dns.alpha.kubernetes.io\/hostname` on the service and use the corresponding value.\n\n```yaml\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: nginx.external-dns-test.homelab.com\nspec:\n  type: LoadBalancer\n  ports:\n  - port: 80\n    name: http\n    targetPort: 80\n  selector:\n    app: nginx\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n  
    containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n          name: http\n```\n\nYou can then query your Pi-hole to see if the record was created.\n\n_Change `@192.168.100.2` to the actual address of your DNS server_\n\n```bash\n$ dig +short @192.168.100.2  nginx.external-dns-test.homelab.com\n\n192.168.100.129\n```","site":"external-dns","answers_cleaned":"  Pi hole  This tutorial describes how to setup ExternalDNS to sync records with Pi hole s Custom DNS  Pi hole has an internal list it checks last when resolving requests  This list can contain any number of arbitrary A  AAAA or CNAME records  There is a pseudo API exposed that ExternalDNS is able to use to manage these records     NOTE    Your Pi hole must be running  version 5 9 or newer  https   pi hole net blog 2022 02 12 pi hole ftl v5 14 web v5 11 and core v5 9 released        Deploy ExternalDNS  You can skip to the  manifest   externaldns manifest  if authentication is disabled on your Pi hole instance or you don t want to use secrets   If your Pi hole server s admin dashboard is protected by a password  you ll likely want to create a secret first containing its value   This is optional since you  do  retain the option to pass it as a flag with    pihole password    You can create the secret with      bash kubectl create secret generic pihole password         from literal EXTERNAL DNS PIHOLE PASSWORD supersecret      Replacing    supersecret    with the actual password to your Pi hole server       ExternalDNS Manifest  Apply the following manifest to deploy ExternalDNS  editing values for your environment accordingly   Be sure to change the namespace in the  ClusterRoleBinding  if you are using a namespace other than   default        yaml     apiVersion  v1 kind  ServiceAccount metadata    name  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules    apiGroups         resources    services   
endpoints   pods     verbs    get   watch   list     apiGroups    extensions   networking k8s io     resources    ingresses     verbs    get   watch   list     apiGroups         resources    nodes     verbs    list   watch       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects    kind  ServiceAccount   name  external dns   namespace  default     apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        serviceAccountName  external dns       containers          name  external dns         image  registry k8s io external dns external dns v0 15 0           If authentication is disabled and or you didn t create           a secret  you can remove this block          envFrom            secretRef                Change this if you gave the secret a different name             name  pihole password         args              source service             source ingress           Pihole only supports A AAAA CNAME records so there is no mechanism to track ownership            You don t need to set this flag  but if you leave it unset  you will receive warning           logs when ExternalDNS attempts to create TXT records              registry noop           IMPORTANT  If you have records that you manage manually in Pi hole  set           the policy to upsert only so they do not get deleted              policy upsert only             provider pihole           Change this to the actual address of your Pi hole web server             pihole server http   pihole web pihole svc cluster local       securityContext          fsGroup  65534   For ExternalDNS to be able to read Kubernetes token files          Arguments        pihole server  env  EXTERNAL DNS PIHOLE 
SERVER     The address of the Pi hole web server       pihole password  env  EXTERNAL DNS PIHOLE PASSWORD     The password to the Pi hole web server  if enabled        pihole tls skip verify  env  EXTERNAL DNS PIHOLE TLS SKIP VERIFY     Skip verification of any TLS certificates served by the Pi hole web server      Verify ExternalDNS Works      Ingress Example  Create an Ingress resource  ExternalDNS will use the hostname specified in the Ingress object      yaml apiVersion  networking k8s io v1 kind  Ingress metadata    name  foo spec    ingressClassName  nginx   rules      host  foo bar com     http        paths          path            pathType  Prefix         backend            service              name  foo             port                number  80          Service Example  The below sample application can be used to verify Services work  For services ExternalDNS will look for the annotation  external dns alpha kubernetes io hostname  on the service and use the corresponding value      yaml     apiVersion  v1 kind  Service metadata    name  nginx   annotations      external dns alpha kubernetes io hostname  nginx external dns test homelab com spec    type  LoadBalancer   ports      port  80     name  http     targetPort  80   selector      app  nginx     apiVersion  apps v1 kind  Deployment metadata    name  nginx spec    selector      matchLabels        app  nginx   template      metadata        labels          app  nginx     spec        containers          image  nginx         name  nginx         ports            containerPort  80           name  http      You can then query your Pi hole to see if the record was created    Change   192 168 100 2  to the actual address of your DNS server      bash   dig  short  192 168 100 2  nginx external dns test homelab com  192 168 100 129    "}
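The `kubectl create secret` command shown in the Pi-hole tutorial above can also be written declaratively. This is a sketch assuming the same `pihole-password` name and the placeholder value `supersecret`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pihole-password
type: Opaque
stringData:
  # stringData accepts the plain-text value; the API server stores it base64-encoded
  EXTERNAL_DNS_PIHOLE_PASSWORD: supersecret
```

Apply it with `kubectl apply -f`; the `envFrom` block in the ExternalDNS Deployment then picks the password up unchanged.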
{"questions":"external-dns If you prefer to try out ExternalDNS in one of the existing environments you can skip this step GKE with default controller This tutorial describes how to setup ExternalDNS for usage within a cluster Make sure to use 0 11 0 version of ExternalDNS for this tutorial The following instructions use to provide ExternalDNS with the permissions it needs to manage DNS records within a single the organizing entity to allocate resources Single project test scenario using access scopes","answers":"# GKE with default controller\n\nThis tutorial describes how to set up ExternalDNS for usage within a [GKE](https:\/\/cloud.google.com\/kubernetes-engine) ([Google Kubernetes Engine](https:\/\/cloud.google.com\/kubernetes-engine)) cluster. Make sure to use a **>=0.11.0** version of ExternalDNS for this tutorial.\n\n## Single project test scenario using access scopes\n\n*If you prefer to try out ExternalDNS in one of the existing environments you can skip this step*\n\nThe following instructions use [access scopes](https:\/\/cloud.google.com\/compute\/docs\/access\/service-accounts#accesscopesiam) to provide ExternalDNS with the permissions it needs to manage DNS records within a single [project](https:\/\/cloud.google.com\/docs\/overview#projects), the organizing entity used to allocate resources.\n\nNote that since these permissions are associated with the instance, all pods in the cluster will also have these permissions. As such, this approach is not suitable for anything but testing environments.\n\nThis solution will only work when both CloudDNS and GKE are provisioned in the same project. If the CloudDNS zone is in a different project, this solution will not work.\n\n### Configure Project Environment\n\nSet up your environment to work with Google Cloud Platform. Fill in your variables as needed, e.g. 
target project.\n\n```bash\n# set variables to the appropriate desired values\nPROJECT_ID=\"my-external-dns-test\"\nREGION=\"europe-west1\"\nZONE=\"europe-west1-d\"\nGKE_CLUSTER_NAME=\"my-external-dns-cluster\"\nCLOUD_BILLING_ACCOUNT=\"<my-cloud-billing-account>\"\n# set default settings for project\ngcloud config set project $PROJECT_ID\ngcloud config set compute\/region $REGION\ngcloud config set compute\/zone $ZONE\n# enable billing and APIs if not done already\ngcloud beta billing projects link $PROJECT_ID \\\n  --billing-account $CLOUD_BILLING_ACCOUNT\ngcloud services enable \"dns.googleapis.com\"\ngcloud services enable \"container.googleapis.com\"\n```\n\n### Create GKE Cluster\n\n```bash\ngcloud container clusters create $GKE_CLUSTER_NAME \\\n  --num-nodes 1 \\\n  --scopes \"https:\/\/www.googleapis.com\/auth\/ndev.clouddns.readwrite\"\n```\n\n**WARNING**: Note that this cluster will use the default [compute engine GSA](https:\/\/cloud.google.com\/compute\/docs\/access\/service-accounts#default_service_account) that contains the overly permissive project editor (`roles\/editor`) role. So essentially, anything on the cluster could potentially gain escalated privileges. Also, as mentioned earlier, the access scope `ndev.clouddns.readwrite` will allow anything running on the cluster to have read\/write permissions on all Cloud DNS zones within the same project.\n\n### Cloud DNS Zone\n\nCreate a DNS zone which will contain the managed DNS records. If using your own domain that was registered with a third-party domain registrar, you should point your domain's name servers to the values under the `nameServers` key. Please consult your registrar's documentation on how to do that.  
This tutorial will use the example domain `example.com`.\n\n```bash\ngcloud dns managed-zones create \"example-com\" --dns-name \"example.com.\" \\\n  --description \"Automatically managed zone by kubernetes.io\/external-dns\"\n```\n\nMake a note of the nameservers that were assigned to your new zone.\n\n```bash\ngcloud dns record-sets list \\\n    --zone \"example-com\" --name \"example.com.\" --type NS\n```\n\nOutputs:\n\n```\nNAME          TYPE  TTL    DATA\nexample.com.  NS    21600  ns-cloud-e1.googledomains.com.,ns-cloud-e2.googledomains.com.,ns-cloud-e3.googledomains.com.,ns-cloud-e4.googledomains.com.\n```\n\nIn this case it's `ns-cloud-{e1-e4}.googledomains.com.` but yours could differ slightly, e.g. `{a1-a4}`, `{b1-b4}` etc.\n\n## Cross project access scenario using Google Service Account\n\nMore often, following security and operational best practices, Cloud DNS zones will be managed in a separate project from the Kubernetes cluster. This section shows how to set up ExternalDNS to access Cloud DNS from a different project. These steps also work for single-project scenarios.\n\nExternalDNS will need permissions to make changes to the Cloud DNS zone. There are three ways to configure the access needed:\n\n* [Worker Node Service Account](#worker-node-service-account-method)\n* [Static Credentials](#static-credentials)\n* [Workload Identity](#workload-identity)\n\n### Setup Cloud DNS and GKE\n\nBelow are examples of how you can configure Cloud DNS and GKE in separate projects, and then use one of the three methods to grant access to ExternalDNS.  
Replace the environment variables with values that make sense in your environment.\n\n#### Configure Projects\n\nFor this process, create projects with the appropriate APIs enabled.\n\n```bash\n# set variables to appropriate desired values\nGKE_PROJECT_ID=\"my-workload-project\"\nDNS_PROJECT_ID=\"my-cloud-dns-project\"\nCLOUD_BILLING_ACCOUNT=\"<my-cloud-billing-account>\"\n# enable billing and APIs for DNS project if not done already\ngcloud config set project $DNS_PROJECT_ID\ngcloud beta billing projects link $DNS_PROJECT_ID \\\n  --billing-account $CLOUD_BILLING_ACCOUNT\ngcloud services enable \"dns.googleapis.com\"\n# enable billing and APIs for GKE project if not done already\ngcloud config set project $GKE_PROJECT_ID\ngcloud beta billing projects link $GKE_PROJECT_ID \\\n  --billing-account $CLOUD_BILLING_ACCOUNT\ngcloud services enable \"container.googleapis.com\"\n```\n\n#### Provisioning Cloud DNS\n\nCreate a Cloud DNS zone in the designated DNS project.\n\n```bash\ngcloud dns managed-zones create \"example-com\" --project $DNS_PROJECT_ID \\\n  --description \"example.com\" --dns-name=\"example.com.\" --visibility=public\n```\n\nIf using your own domain that was registered with a third-party domain registrar, you should point your domain's name servers to the values under the `nameServers` key. Please consult your registrar's documentation on how to do that. 
The example domain of `example.com` will be used for this tutorial.

#### Provisioning a GKE cluster for cross project access

Create a GSA (Google Service Account) and grant it the [minimal set of privileges required](https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#use_least_privilege_sa) for GKE nodes:

```bash
GKE_CLUSTER_NAME="my-external-dns-cluster"
GKE_REGION="us-central1"
GKE_SA_NAME="worker-nodes-sa"
GKE_SA_EMAIL="$GKE_SA_NAME@${GKE_PROJECT_ID}.iam.gserviceaccount.com"

ROLES=(
  roles/logging.logWriter
  roles/monitoring.metricWriter
  roles/monitoring.viewer
  roles/stackdriver.resourceMetadata.writer
)

gcloud iam service-accounts create $GKE_SA_NAME \
  --display-name $GKE_SA_NAME --project $GKE_PROJECT_ID

# assign google service account to roles in GKE project
for ROLE in ${ROLES[*]}; do
  gcloud projects add-iam-policy-binding $GKE_PROJECT_ID \
    --member "serviceAccount:$GKE_SA_EMAIL" \
    --role $ROLE
done
```

Create a cluster using this service account and enable [workload identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity):

```bash
gcloud container clusters create $GKE_CLUSTER_NAME \
  --project $GKE_PROJECT_ID --region $GKE_REGION --num-nodes 1 \
  --service-account "$GKE_SA_EMAIL" \
  --workload-pool "$GKE_PROJECT_ID.svc.id.goog"
```

### Workload Identity

[Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) allows workloads in your GKE cluster to [authenticate directly to GCP](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity#credential-flow) using Kubernetes Service Accounts.

You can choose between using the gcloud CLI or Terraform.

=== "gcloud CLI"

    The below instructions assume you are using the default Kubernetes Service account name of `external-dns` in the namespace `external-dns`.

    Grant the Kubernetes service account the `roles/dns.admin` role at the project level:

    ```shell
    gcloud projects add-iam-policy-binding projects/DNS_PROJECT_ID \
        --role=roles/dns.admin \
        --member=principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/external-dns/sa/external-dns \
        --condition=None
    ```

    Replace the following:

    * `DNS_PROJECT_ID`: Project ID of your DNS project. If DNS is in the same project as your GKE cluster, use your GKE project.
    * `PROJECT_ID`: your Google Cloud project ID of your GKE cluster
    * `PROJECT_NUMBER`: your numerical Google Cloud project number of your GKE cluster

    If you wish to change the namespace, replace

    * `ns/external-dns` with `ns/<your namespace>`
    * `sa/external-dns` with `sa/<your ksa>`

=== "Terraform"

    The below instructions assume you are using the default Kubernetes Service account name of `external-dns` in the namespace `external-dns`.

    Create a file called `main.tf` and place the below in it. _Note: If you're an experienced Terraform user, feel free to split these out into separate files._

    ```hcl
    variable "gke-project" {
      type        = string
      description = "Name of the project that the GKE cluster exists in"
      default     = "GKE-PROJECT"
    }

    variable "ksa_name" {
      type        = string
      description = "Name of the Kubernetes service account that will be accessing the DNS Zones"
      default     = "external-dns"
    }

    variable "kns_name" {
      type        = string
      description = "Name of the Kubernetes Namespace"
      default     = "external-dns"
    }

    data "google_project" "project" {
      project_id = var.gke-project
    }

    locals {
      member = "principal://iam.googleapis.com/projects/${data.google_project.project.number}/locations/global/workloadIdentityPools/${var.gke-project}.svc.id.goog/subject/ns/${var.kns_name}/sa/${var.ksa_name}"
    }

    resource "google_project_iam_member" "external_dns" {
      member  = local.member
      project = "DNS-PROJECT"
      role    = "roles/dns.reader"
    }

    resource "google_dns_managed_zone_iam_member" "member" {
      project      = "DNS-PROJECT"
      managed_zone = "ZONE-NAME"
      role         = "roles/dns.admin"
      member       = local.member
    }
    ```

    Replace the following:

    * `GKE-PROJECT`: Project that contains your GKE cluster
    * `DNS-PROJECT`: Project that holds your DNS zones

    You can also change the below if you plan to use a different service account name and namespace:

    * `variable "ksa_name"`: Name of the Kubernetes service account external-dns will use
    * `variable "kns_name"`: Name of the Kubernetes Namespace that external-dns will be installed in

### Worker Node Service Account method

In this method, the GSA (Google Service Account) that is associated with GKE worker nodes will be configured to have access to Cloud DNS.

**WARNING**: This will grant access to modify the Cloud DNS zone records for all containers running on the cluster, not just ExternalDNS, so use this option with caution. This is not recommended for production environments.

```bash
GKE_SA_EMAIL="$GKE_SA_NAME@${GKE_PROJECT_ID}.iam.gserviceaccount.com"

# assign google service account to dns.admin role in the cloud dns project
gcloud projects add-iam-policy-binding $DNS_PROJECT_ID \
  --member serviceAccount:$GKE_SA_EMAIL \
  --role roles/dns.admin
```

After this, follow the steps in [Deploy ExternalDNS](#deploy-externaldns). Make sure to set the `--google-project` flag to match the Cloud DNS project name.

### Static Credentials

In this scenario, a new GSA (Google Service Account) is created that has access to the Cloud DNS zone. The credentials for this GSA are saved and installed as a Kubernetes secret that will be used by ExternalDNS.
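ExternalDNS's Google provider finds this key file through the standard `GOOGLE_APPLICATION_CREDENTIALS` environment variable, which the Deployment manifest in this tutorial sets via the secret mount. As a minimal sketch of that mechanism (the key file here is a placeholder with fake field values — a real one comes from `gcloud iam service-accounts keys create` and also carries a private key):

```shell
# Placeholder key file with fake values; do NOT use in a real cluster.
mkdir -p /tmp/secrets/service-account
cat > /tmp/secrets/service-account/credentials.json <<'EOF'
{
  "type": "service_account",
  "project_id": "my-cloud-dns-project",
  "client_email": "external-dns-sa@my-cloud-dns-project.iam.gserviceaccount.com"
}
EOF

# In the Deployment, the secret is mounted at /etc/secrets/service-account/ and this
# variable points the Google client libraries at the key file inside the mount.
export GOOGLE_APPLICATION_CREDENTIALS=/tmp/secrets/service-account/credentials.json

# Sanity check: the file the client libraries will read must be valid JSON.
python3 -m json.tool "$GOOGLE_APPLICATION_CREDENTIALS" >/dev/null \
  && echo "credentials file parses"
```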
This allows only containers that have access to the secret, such as ExternalDNS, to update records on the Cloud DNS zone.

#### Create GSA for use with static credentials

```bash
DNS_SA_NAME="external-dns-sa"
DNS_SA_EMAIL="$DNS_SA_NAME@${GKE_PROJECT_ID}.iam.gserviceaccount.com"

# create GSA used to access the Cloud DNS zone
gcloud iam service-accounts create $DNS_SA_NAME --display-name $DNS_SA_NAME

# assign google service account to dns.admin role in cloud-dns project
gcloud projects add-iam-policy-binding $DNS_PROJECT_ID \
  --member serviceAccount:$DNS_SA_EMAIL --role "roles/dns.admin"
```

#### Create Kubernetes secret using static credentials

Generate static credentials from the ExternalDNS GSA.

```bash
# download static credentials
gcloud iam service-accounts keys create /local/path/to/credentials.json \
  --iam-account $DNS_SA_EMAIL
```

Create a Kubernetes secret with the credentials in the same namespace as ExternalDNS.

```bash
kubectl create secret generic "external-dns" --namespace ${EXTERNALDNS_NS:-"default"} \
  --from-file /local/path/to/credentials.json
```

After this, follow the steps in [Deploy ExternalDNS](#deploy-externaldns). Make sure to set the `--google-project` flag to match the Cloud DNS project name, and make sure to uncomment the section that mounts the secret to the ExternalDNS pods.

#### Deploy External DNS

Deploy ExternalDNS using the steps documented under [Deploy ExternalDNS](#deploy-externaldns). Set the `--google-project` flag to the Cloud DNS project name.

#### Update ExternalDNS pods

!!! note "Only required if not enabled on all nodes"
    If you have GKE Workload Identity enabled on all nodes in your cluster, the below step is not necessary.

Update the Pod spec to schedule the workloads on nodes that use Workload Identity and to use the annotated Kubernetes service account.

```bash
kubectl patch deployment "external-dns" \
  --namespace ${EXTERNALDNS_NS:-"default"} \
  --patch \
 '{"spec": {"template": {"spec": {"nodeSelector": {"iam.gke.io/gke-metadata-server-enabled": "true"}}}}}'
```

After all of these steps you may see several messages with `googleapi: Error 403: Forbidden, forbidden`. After several minutes, when the token is refreshed, these error messages will go away and you should see info messages such as `All records are already up to date`.

## Deploy ExternalDNS

Apply the following manifest file to deploy ExternalDNS.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","pods","nodes"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  labels:
    app.kubernetes.io/name: external-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: default # change if namespace is not 'default'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: external-dns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.15.0
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=example.com # makes ExternalDNS see only the hosted zones matching the provided domain; omit to process all available hosted zones
            - --provider=google
            - --log-format=json # google cloud logs parses severity of the "text" log format incorrectly
    #        - --google-project=my-cloud-dns-project # Use this to specify a project different from the one external-dns is running inside
            - --google-zone-visibility=public # Use this to filter to only zones with this visibility. Set to either 'public' or 'private'. Omitting will match public and private zones
            - --policy=upsert-only # prevents ExternalDNS from deleting any records; omit to enable full synchronization
            - --registry=txt
            - --txt-owner-id=my-identifier
      #     # uncomment below if static credentials are used
      #     env:
      #       - name: GOOGLE_APPLICATION_CREDENTIALS
      #         value: /etc/secrets/service-account/credentials.json
      #     volumeMounts:
      #       - name: google-service-account
      #         mountPath: /etc/secrets/service-account/
      # volumes:
      #   - name: google-service-account
      #     secret:
      #       secretName: external-dns
```

Create the deployment for ExternalDNS:

```bash
kubectl create --namespace "default" --filename externaldns.yaml
```

## Verify ExternalDNS works

The following will deploy a small nginx server that will be used to demonstrate that ExternalDNS is working.

### Verify using an external load balancer

Create the following sample application to test that ExternalDNS works. This example will provision an L4 load balancer.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # change nginx.example.com to match an appropriate value
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
```

Create the deployment and service objects:

```bash
kubectl create --namespace "default" --filename nginx.yaml
```

After roughly two minutes, check that a corresponding DNS record for your service was created.

```bash
gcloud dns record-sets list --zone "example-com" --name "nginx.example.com."
```

Example output:

```
NAME                TYPE  TTL  DATA
nginx.example.com.  A     300  104.155.60.49
nginx.example.com.  TXT   300  "heritage=external-dns,external-dns/owner=my-identifier"
```

Note the `TXT` record created alongside the `A` record. The `TXT` record signifies that the corresponding `A` record is managed by ExternalDNS. This makes ExternalDNS safe for running in environments where there are other records managed via other means.

Let's check that we can resolve this DNS name. We'll ask the nameservers assigned to your zone first.

```bash
dig +short @ns-cloud-e1.googledomains.com. nginx.example.com.
104.155.60.49
```

Given you hooked up your DNS zone with its parent zone, you can use `curl` to access your site.

```bash
curl nginx.example.com
```

### Verify using an ingress

Let's check that Ingress works as well. Create the following Ingress.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
    - host: server.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
```

Create the ingress objects with:

```bash
kubectl create --namespace "default" --filename ingress.yaml
```

Note that this ingress object will use the default ingress controller that comes with GKE to create an L7 load balancer, in addition to the L4 load balancer previously created with the service object. To use only the L7 load balancer, update the service manifest to change the Service type to `NodePort` and remove the ExternalDNS annotation.

After roughly two minutes, check that a corresponding DNS record for your Ingress was created.

```bash
gcloud dns record-sets list \
    --zone "example-com" \
    --name "server.example.com."
```

Output:

```
NAME                 TYPE  TTL  DATA
server.example.com.  A     300  130.211.46.224
server.example.com.  TXT   300  "heritage=external-dns,external-dns/owner=my-identifier"
```

Let's check that we can resolve this DNS name as well.

```bash
dig +short @ns-cloud-e1.googledomains.com. server.example.com.
130.211.46.224
```

Try with `curl` as well.

```bash
curl server.example.com
```

### Clean up

Make sure to delete all Service and Ingress objects before terminating the cluster so all load balancers get cleaned up correctly.

```bash
kubectl delete service nginx
kubectl delete ingress nginx
```

Give ExternalDNS some time to clean up the DNS records for you.
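There is no built-in way to block until the records are gone, so a small retry loop can stand in for "some time". In this sketch, `zone_is_clean` is a stub that succeeds on its third call; against a real zone it would instead wrap `gcloud dns record-sets list --zone "example-com"` and check that the ExternalDNS-managed records have disappeared.

```shell
# Stub check: succeeds on the 3rd call, simulating records that disappear after a while.
# Real version: list the zone's record sets and verify the nginx/server A and TXT
# records are no longer present.
zone_is_clean() {
  CALLS=$((CALLS + 1))
  [ "$CALLS" -ge 3 ]
}

# Poll the check until it passes or we run out of attempts.
wait_for_cleanup() {
  attempts=0
  until zone_is_clean; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 10 ]; then
      echo "records still present, giving up" >&2
      return 1
    fi
    sleep 1   # use a longer interval (e.g. 30s) against a real zone
  done
  echo "zone is clean"
}

CALLS=0
wait_for_cleanup
```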
Then delete the managed zone and cluster.

```bash
gcloud dns managed-zones delete "example-com"
gcloud container clusters delete "external-dns"
```
{"questions":"external-dns Designate DNS from OpenStack URL of the OpenStack identity service which is responsible for user authentication and also served as a registry for other All OpenStack CLIs require authentication parameters to be provided These parameters include Authenticating with OpenStack We are going to use OpenStack CLI utility which is an umbrella application for most of OpenStack clients including This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using OpenStack Designate DNS","answers":"# Designate DNS from OpenStack\n\nThis tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster using OpenStack Designate DNS.\n\n## Authenticating with OpenStack\n\nWe are going to use the OpenStack CLI, the `openstack` utility, which is an umbrella application for most OpenStack clients, including `designate`.\n\nAll OpenStack CLIs require authentication parameters to be provided. These parameters include:\n* URL of the OpenStack identity service (`keystone`), which is responsible for user authentication and also serves as a registry for other\n  OpenStack services. Designate endpoints must be registered in `keystone` so that ExternalDNS and the OpenStack CLI are able to find them.\n* OpenStack region name\n* User login name.\n* User project (tenant) name.\n* User domain (only when using keystone API v3)\n\nAlthough these parameters can be passed explicitly through CLI flags, traditionally it is done by sourcing an `openrc` file (`source ~\/openrc`), a\nshell snippet that sets environment variables that all OpenStack CLIs understand by convention.\n\nRecent versions of OpenStack Dashboard have a nice UI to download an `openrc` file for both v2 and v3 auth protocols. 
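For illustration, a minimal keystone v3 `openrc` might look like the following sketch; the values mirror the placeholder credentials used in the deployment manifest later in this tutorial and must be replaced with your own:

```bash
# Example openrc (keystone v3) -- placeholder values, replace with your own
export OS_AUTH_URL=https://controller/identity/v3
export OS_REGION_NAME=RegionOne
export OS_USERNAME=admin
export OS_PASSWORD=p@ssw0rd
export OS_PROJECT_NAME=demo
export OS_USER_DOMAIN_NAME=Default
```

After `source ~/openrc`, both the `openstack` CLI and ExternalDNS (when given the same values as container environment variables) can authenticate against keystone.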
Both protocols can be used with ExternalDNS.\nv3 is generally preferred over v2, but might not be available in some OpenStack installations.\n\n## Installing OpenStack Designate\n\nPlease refer to the Designate deployment [tutorial](https:\/\/docs.openstack.org\/project-install-guide\/dns\/ocata\/install.html) for instructions on how\nto install and test Designate with the BIND backend. You will need admin rights in an existing OpenStack installation to do this. One convenient\nway to get yourself an OpenStack installation to play with is to use [DevStack](https:\/\/docs.openstack.org\/devstack\/latest\/).\n\n## Creating DNS zones\n\nAll domain names that ExternalDNS is going to create must belong to one of the DNS zones created in advance. Here is an example of how to create the `example.com` DNS zone:\n```console\n$ openstack zone create --email dnsmaster@example.com example.com.\n```\n\nIt is important to manually create all the zones that are going to be used for Kubernetes entities (ExternalDNS sources) before starting ExternalDNS.\n\n## Deploy ExternalDNS\n\nCreate a deployment file called `externaldns.yaml` with the following contents:\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=designate\n        env: # values from openrc file\n        - name: OS_AUTH_URL\n          value: https:\/\/controller\/identity\/v3\n        - name: OS_REGION_NAME\n          value: 
RegionOne\n        - name: OS_USERNAME\n          value: admin\n        - name: OS_PASSWORD\n          value: p@ssw0rd\n        - name: OS_PROJECT_NAME\n          value: demo\n        - name: OS_USER_DOMAIN_NAME\n          value: Default\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"] \n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"watch\",\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  selector:\n    matchLabels:\n      app: external-dns\n  strategy:\n    type: Recreate\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=designate\n        env: # values from openrc file\n        - name: OS_AUTH_URL\n          value: https:\/\/controller\/identity\/v3\n        - name: OS_REGION_NAME\n          value: RegionOne\n        - 
name: OS_USERNAME\n          value: admin\n        - name: OS_PASSWORD\n          value: p@ssw0rd\n        - name: OS_PROJECT_NAME\n          value: demo\n        - name: OS_USER_DOMAIN_NAME\n          value: Default\n```\n\nCreate the deployment for ExternalDNS:\n\n```console\n$ kubectl create -f externaldns.yaml\n```\n\n### Optional: Trust self-signed certificates\nIf your OpenStack installation is configured with a self-signed certificate, you can extend the `pod.spec` with the following secret mount:\n```yaml\n        volumeMounts:\n        - mountPath: \/etc\/ssl\/certs\/\n          name: cacerts\n      volumes:\n      - name: cacerts\n        secret:\n          defaultMode: 420\n          secretName: self-sign-certs\n```\n\nThe content of the secret `self-sign-certs` must be the certificate\/chain in PEM format.\n\n## Deploying an Nginx Service\n\nCreate a service file called `nginx.yaml` with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: my-app.example.com\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nNote the annotation on the service; use a hostname within the DNS zone created above.\n\nExternalDNS uses this annotation to determine what services should be registered with DNS. 
Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.\n\nCreate the deployment and service:\n\n```console\n$ kubectl create -f nginx.yaml\n```\n\nOnce the service has an external IP assigned, ExternalDNS will notice the new service IP address and notify Designate,\nwhich in turn synchronizes DNS records with the underlying DNS server backend.\n\n## Verifying DNS records\n\nTo verify that the DNS record was indeed created, you can use the following command:\n\n```console\n$ openstack recordset list example.com.\n```\n\nThere should be a record for `my-app.example.com` with `ACTIVE` status. And of course, the ultimate way to verify is to issue a DNS query:\n\n```console\n$ dig my-app.example.com @controller\n```\n\n## Cleanup\n\nNow that we have verified that ExternalDNS created all DNS records, we can delete the tutorial's example:\n\n```console\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```","site":"external-dns"}
{"questions":"external-dns Plural This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using Plural DNS Make sure to use 0 12 3 version of ExternalDNS for this tutorial Creating Plural Credentials A secret containing the a Plural access token is needed for this provider You can get a token for your user","answers":"# Plural\n\nThis tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster using Plural DNS.\n\nMake sure to use a **>=0.12.3** version of ExternalDNS for this tutorial.\n\n## Creating Plural Credentials\n\nA secret containing a Plural access token is needed for this provider. You can get a token for your user [here](https:\/\/app.plural.sh\/profile\/tokens).\n\nTo create the secret you can run `kubectl create secret generic plural-env --from-literal=PLURAL_ACCESS_TOKEN=<replace-with-your-access-token>`.\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\nThen apply one of the following manifest files to deploy ExternalDNS.\n\n## Using Helm\n\nCreate a `values.yaml` file to configure ExternalDNS to use Plural DNS as the DNS provider. 
This file should include the necessary environment variables:\n\n```yaml\nprovider:\n  name: plural\nextraArgs:\n  - --plural-cluster=example-plural-cluster\n  - --plural-provider=aws # gcp, azure, equinix and kind are also possible\nenv:\n  - name: PLURAL_ACCESS_TOKEN\n    valueFrom:\n      secretKeyRef:\n        name: plural-env\n        key: PLURAL_ACCESS_TOKEN\n  - name: PLURAL_ENDPOINT\n    value: https:\/\/app.plural.sh\n```\n\nFinally, install the ExternalDNS chart with Helm using the configuration specified in your `values.yaml` file:\n\n```shell\nhelm upgrade --install external-dns external-dns\/external-dns --values values.yaml\n```\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=plural\n        - --plural-cluster=example-plural-cluster\n        - --plural-provider=aws # gcp, azure, equinix and kind are also possible\n        env:\n        - name: PLURAL_ACCESS_TOKEN\n          valueFrom:\n            secretKeyRef:\n              key: PLURAL_ACCESS_TOKEN\n              name: plural-env\n        - name: PLURAL_ENDPOINT # (optional) use an alternative endpoint for Plural; defaults to https:\/\/app.plural.sh\n          value: https:\/\/app.plural.sh\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: 
external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"] \n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\", \"watch\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=plural\n        - --plural-cluster=example-plural-cluster\n        - --plural-provider=aws # gcp, azure, equinix and kind are also possible\n        env:\n        - name: PLURAL_ACCESS_TOKEN\n          valueFrom:\n            secretKeyRef:\n              key: PLURAL_ACCESS_TOKEN\n              name: plural-env\n        - name: PLURAL_ENDPOINT # (optional) use an alternative endpoint for Plural; defaults to https:\/\/app.plural.sh\n          value: https:\/\/app.plural.sh\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    
spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: example.com\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nNote the annotation on the service; use the same hostname as the Plural DNS zone created above. The annotation may also be a subdomain\nof the DNS zone (e.g. 'www.example.com').\n\nIf you set the TTL annotation on the service, you must pass a valid TTL, which must be 120 or above.\nThis annotation is optional; if you don't set it, the TTL will be 1 (automatic), which is 300.\n\nExternalDNS uses this annotation to determine what services should be registered with DNS. Removing the annotation\nwill cause ExternalDNS to remove the corresponding DNS records.\n\nCreate the deployment and service:\n\n```\n$ kubectl create -f nginx.yaml\n```\n\nDepending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.\n\nOnce the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize\nthe Plural DNS records.\n\n## Verifying Plural DNS records\n\nCheck your [Plural domain overview](https:\/\/app.plural.sh\/account\/domains) to view the domains associated with your Plural account. 
There you can view the records for each domain.\n\nThe records should show the external IP address of the service as the A record for your domain.\n\n## Cleanup\n\nNow that we have verified that ExternalDNS will automatically manage Plural DNS records, we can delete the tutorial's example:\n\n```\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```","site":"external-dns"}
{"questions":"external-dns External DNS manages service endpoints in existing DNS zones The Akamai provider does not add remove or configure new zones The or and can create and manage Edge DNS zones Akamai Edge DNS Zones External DNS v0 8 0 or greater Prerequisites","answers":"# Akamai Edge DNS\n\n## Prerequisites\n\nExternal-DNS v0.8.0 or greater.\n\n### Zones\n\nExternal-DNS manages service endpoints in existing DNS zones. The Akamai provider does not add, remove or configure new zones. The [Akamai Control Center](https:\/\/control.akamai.com) or [Akamai DevOps Tools](https:\/\/developer.akamai.com\/devops), [Akamai CLI](https:\/\/developer.akamai.com\/cli) and [Akamai Terraform Provider](https:\/\/developer.akamai.com\/tools\/integrations\/terraform) can create and manage Edge DNS zones. \n\n### Akamai Edge DNS Authentication\n\nThe Akamai Edge DNS provider requires valid Akamai Edgegrid API authentication credentials to access zones and manage  DNS records. \n\nEither directly by key or indirectly via a file can set credentials for the provider. 
The Akamai credential keys and their mappings to the provider's different presentation methods are:\n\n| Edgegrid Auth Key | External-DNS Cmd Line Key | Environment\/ConfigMap Key | Description |\n| ----------------- | ------------------------- | ------------------------- | ----------- |\n| host | akamai-serviceconsumerdomain | EXTERNAL_DNS_AKAMAI_SERVICECONSUMERDOMAIN | Akamai Edgegrid API server |\n| access_token | akamai-access-token | EXTERNAL_DNS_AKAMAI_ACCESS_TOKEN | Akamai Edgegrid API access token |\n| client_token | akamai-client-token | EXTERNAL_DNS_AKAMAI_CLIENT_TOKEN | Akamai Edgegrid API client token |\n| client_secret | akamai-client-secret | EXTERNAL_DNS_AKAMAI_CLIENT_SECRET | Akamai Edgegrid API client secret |\n\nIn addition to specifying auth credentials individually, credentials can be set using the Akamai Edgegrid .edgerc file convention.\n\n| External-DNS Cmd Line | Environment\/ConfigMap | Description |\n| --------------------- | --------------------- | ----------- |\n| akamai-edgerc-path | EXTERNAL_DNS_AKAMAI_EDGERC_PATH | Accessible path to Edgegrid credentials file, e.g. \/home\/test\/.edgerc |\n| akamai-edgerc-section | EXTERNAL_DNS_AKAMAI_EDGERC_SECTION | Section in Edgegrid credentials file containing credentials |\n\n[Akamai API Authentication](https:\/\/developer.akamai.com\/getting-started\/edgegrid) provides an overview and further information about authorization credentials for API-based applications and tools.\n\n## Deploy External-DNS\n\nAn operational External-DNS deployment consists of an External-DNS container and service. The following sections demonstrate the Kubernetes objects that make up an example working External-DNS configuration, using NGINX as the service.\n\nConnect your `kubectl` client to the External-DNS cluster.\n\nBegin by creating a Kubernetes secret to securely store your Akamai Edge DNS access tokens. 
This key will enable ExternalDNS to authenticate with Akamai Edge DNS:\n\n```shell\nkubectl create secret generic AKAMAI-DNS --from-literal=EXTERNAL_DNS_AKAMAI_SERVICECONSUMERDOMAIN=YOUR_SERVICECONSUMERDOMAIN --from-literal=EXTERNAL_DNS_AKAMAI_CLIENT_TOKEN=YOUR_CLIENT_TOKEN --from-literal=EXTERNAL_DNS_AKAMAI_CLIENT_SECRET=YOUR_CLIENT_SECRET --from-literal=EXTERNAL_DNS_AKAMAI_ACCESS_TOKEN=YOUR_ACCESS_TOKEN\n```\n\nBe sure to replace YOUR_SERVICECONSUMERDOMAIN, YOUR_CLIENT_TOKEN, YOUR_CLIENT_SECRET, and YOUR_ACCESS_TOKEN with your actual Akamai Edge DNS API keys.\n\nThen apply one of the following manifest files to deploy ExternalDNS.\n\n### Using Helm\n\nCreate a values.yaml file to configure ExternalDNS to use Akamai Edge DNS as the DNS provider. This file should include the necessary environment variables:\n\n```yaml\nprovider:\n  name: akamai\nenv:\n  - name: EXTERNAL_DNS_AKAMAI_SERVICECONSUMERDOMAIN\n    valueFrom:\n      secretKeyRef:\n        name: AKAMAI-DNS\n        key: EXTERNAL_DNS_AKAMAI_SERVICECONSUMERDOMAIN\n  - name: EXTERNAL_DNS_AKAMAI_CLIENT_TOKEN\n    valueFrom:\n      secretKeyRef:\n        name: AKAMAI-DNS\n        key: EXTERNAL_DNS_AKAMAI_CLIENT_TOKEN\n  - name: EXTERNAL_DNS_AKAMAI_CLIENT_SECRET\n    valueFrom:\n      secretKeyRef:\n        name: AKAMAI-DNS\n        key: EXTERNAL_DNS_AKAMAI_CLIENT_SECRET\n  - name: EXTERNAL_DNS_AKAMAI_ACCESS_TOKEN\n    valueFrom:\n      secretKeyRef:\n        name: AKAMAI-DNS\n        key: EXTERNAL_DNS_AKAMAI_ACCESS_TOKEN\n```\n\nFinally, install the ExternalDNS chart with Helm using the configuration specified in your values.yaml file:\n\n```shell\nhelm upgrade --install external-dns external-dns\/external-dns --values values.yaml\n```\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service  # or ingress or both\n        - --provider=akamai\n        - --domain-filter=example.com\n        # zone-id-filter may be specified as well to filter on contract ID\n        - --registry=txt\n        - --txt-owner-id=\n        - --txt-prefix=.\n        env:\n        - name: EXTERNAL_DNS_AKAMAI_SERVICECONSUMERDOMAIN\n          valueFrom:\n            secretKeyRef:\n              name: AKAMAI-DNS\n              key: EXTERNAL_DNS_AKAMAI_SERVICECONSUMERDOMAIN\n        - name: EXTERNAL_DNS_AKAMAI_CLIENT_TOKEN\n          valueFrom:\n            secretKeyRef:\n              name: AKAMAI-DNS\n              key: EXTERNAL_DNS_AKAMAI_CLIENT_TOKEN\n        - name: EXTERNAL_DNS_AKAMAI_CLIENT_SECRET\n          valueFrom:\n            secretKeyRef:\n              name: AKAMAI-DNS\n              key: EXTERNAL_DNS_AKAMAI_CLIENT_SECRET\n        - name: EXTERNAL_DNS_AKAMAI_ACCESS_TOKEN\n          valueFrom:\n            secretKeyRef:\n              name: AKAMAI-DNS\n              key: EXTERNAL_DNS_AKAMAI_ACCESS_TOKEN\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"watch\", \"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service  # or ingress or both\n        - --provider=akamai\n        - --domain-filter=example.com\n        # zone-id-filter may be specified as well to filter on contract ID\n        - --registry=txt\n        - --txt-owner-id=\n        - --txt-prefix=.\n        env:\n        - name: EXTERNAL_DNS_AKAMAI_SERVICECONSUMERDOMAIN\n          valueFrom:\n            secretKeyRef:\n              name: AKAMAI-DNS\n              key: EXTERNAL_DNS_AKAMAI_SERVICECONSUMERDOMAIN\n        - name: EXTERNAL_DNS_AKAMAI_CLIENT_TOKEN\n          valueFrom:\n            secretKeyRef:\n              name: AKAMAI-DNS\n              key: EXTERNAL_DNS_AKAMAI_CLIENT_TOKEN\n        - name: EXTERNAL_DNS_AKAMAI_CLIENT_SECRET\n          valueFrom:\n            secretKeyRef:\n              name: AKAMAI-DNS\n              key: EXTERNAL_DNS_AKAMAI_CLIENT_SECRET\n        - name: EXTERNAL_DNS_AKAMAI_ACCESS_TOKEN\n          valueFrom:\n            secretKeyRef:\n              name: AKAMAI-DNS\n              key: EXTERNAL_DNS_AKAMAI_ACCESS_TOKEN\n```\n\nCreate the deployment for External-DNS:\n\n```\n$ kubectl apply -f externaldns.yaml\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: nginx.example.com\n    external-dns.alpha.kubernetes.io\/ttl: \"600\" #optional\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nCreate the deployment and service object:\n\n```\n$ kubectl apply -f nginx.yaml\n```\n\n## Verify Akamai Edge DNS Records\n\nWait 3-5 minutes before validating the records to allow the record changes to propagate to all the Akamai name servers.\n\nValidate records using the [Akamai Control Center](http:\/\/control.akamai.com) or by executing a dig, nslookup or similar DNS command.\n\n## Cleanup\n\nOnce you successfully configure and verify record management via External-DNS, you can delete the tutorial's examples:\n\n```\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```\n\n## Additional Information\n\n* The Akamai provider allows the administrative user to filter zones by both name (`domain-filter`) and contract ID (`zone-id-filter`). 
The Edge DNS API will return a '500 Internal Error' for invalid contract IDs.\n* The provider will substitute quotes in TXT records with a `` ` `` (back tick) when writing records with the API.","site":"external-dns"}
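The quote-to-backtick substitution noted under "Additional Information" in the Akamai record above is easy to illustrate. The helper below is a sketch of the documented replace rule, not the provider's actual code; the sample value is a typical ExternalDNS ownership TXT payload, shown only as an example.

```python
def akamai_txt_value(value: str) -> str:
    # Documented behavior: quotes in TXT record data are written
    # by the Akamai provider as back ticks (`).
    return value.replace('"', '`')

# A typical ExternalDNS ownership TXT payload, quoted as DNS presents it:
print(akamai_txt_value('"heritage=external-dns,external-dns/owner=my-cluster"'))
# -> `heritage=external-dns,external-dns/owner=my-cluster`
```

Keep this in mind when comparing TXT data returned by `dig` against what other tooling wrote: the back ticks are expected, not corruption.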
{"questions":"external-dns If you are new to GoDaddy we recommend you first read the following This tutorial describes how to set up ExternalDNS for use within a Kubernetes cluster using GoDaddy DNS GoDaddy Creating a zone with GoDaddy DNS Make sure to use 0 6 version of ExternalDNS for this tutorial","answers":"# GoDaddy\n\nThis tutorial describes how to set up ExternalDNS for use within a\nKubernetes cluster using GoDaddy DNS.\n\nMake sure to use **>=0.6** version of ExternalDNS for this tutorial.\n\n## Creating a zone with GoDaddy DNS\n\nIf you are new to GoDaddy, we recommend you first read the following\ninstructions for creating a zone.\n\n[Creating a zone using the GoDaddy web console](https:\/\/www.godaddy.com\/)\n\n[Creating a zone using the GoDaddy API](https:\/\/developer.godaddy.com\/)\n\n## Creating GoDaddy API key\n\nYou first need to create an API Key.\n\nUsing the [GoDaddy documentation](https:\/\/developer.godaddy.com\/getstarted) you will have your `API key` and `API secret`\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster with which you want to test ExternalDNS, and then apply one of the following manifest files for deployment:\n\n## Using Helm\n\nCreate a values.yaml file to configure ExternalDNS to use GoDaddy as the DNS provider. 
This file should include the necessary environment variables:\n\n```yaml\nprovider:\n  name: godaddy\nextraArgs:\n  - --godaddy-api-key=YOUR_API_KEY\n  - --godaddy-api-secret=YOUR_API_SECRET\n```\n\nBe sure to replace YOUR_API_KEY and YOUR_API_SECRET with your actual GoDaddy API key and GoDaddy API secret.\n\nFinally, install the ExternalDNS chart with Helm using the configuration specified in your values.yaml file:\n\n```shell\nhelm upgrade --install external-dns external-dns\/external-dns --values values.yaml\n```\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=godaddy\n        - --txt-prefix=external-dns. # In case of multiple k8s cluster\n        - --txt-owner-id=owner-id # In case of multiple k8s cluster\n        - --godaddy-api-key=<Your API Key>\n        - --godaddy-api-secret=<Your API secret>\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\",\"watch\"]\n- apiGroups: [\"\"]\n  resources: [\"endpoints\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=godaddy\n        - --txt-prefix=external-dns. # In case of multiple k8s cluster\n        - --txt-owner-id=owner-id # In case of multiple k8s cluster\n        - --godaddy-api-key=<Your API Key>\n        - --godaddy-api-secret=<Your API secret>\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: example.com\n    external-dns.alpha.kubernetes.io\/ttl: \"120\" #optional\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\n**A note about annotations**\n\nVerify that the annotation on the service uses the same hostname as the GoDaddy DNS zone created above. The annotation may also be a subdomain of the DNS zone (e.g. 'www.example.com').\n\nThe TTL annotation can be used to configure the TTL on DNS records managed by ExternalDNS and is optional. If this annotation is not set, the TTL on records managed by ExternalDNS will default to 10.\n\nExternalDNS uses the hostname annotation to determine which services should be registered with DNS. Removing the hostname annotation will cause ExternalDNS to remove the corresponding DNS records.\n\n### Create the deployment and service\n\n```\n$ kubectl create -f nginx.yaml\n```\n\nDepending on where you run your service, it may take some time for your cloud provider to create an external IP for the service. 
Once an external IP is assigned, ExternalDNS detects the new service IP address and synchronizes the GoDaddy DNS records.\n\n## Verifying GoDaddy DNS records\n\nUse the GoDaddy web console or API to verify that the A record for your domain shows the external IP address of the services.\n\n## Cleanup\n\nOnce you successfully configure and verify record management via ExternalDNS, you can delete the tutorial's example:\n\n```\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```","site":"external-dns"}
{"questions":"external-dns Tencent Cloud Tencent Cloud DNSPod Service is the domain name resolution and management service for public access Set up PrivateDns or DNSPod Tencent Cloud PrivateDNS Service is the domain name resolution and management service for VPC internal access Make sure to use 0 13 1 version of ExternalDNS for this tutorial External Dns Version","answers":"# Tencent Cloud\n\n## External Dns Version\n\n* Make sure to use version **>=0.13.1** of ExternalDNS for this tutorial.\n\n## Set up PrivateDns or DNSPod\n\nTencent Cloud DNSPod Service is the domain name resolution and management service for public access.\nTencent Cloud PrivateDNS Service is the domain name resolution and management service for VPC internal access.\n\n* If you want to use the internal DNS service in Tencent Cloud:\n1. Set the arg `--tencent-cloud-zone-type=private`\n2. Create a DNS domain in the PrivateDNS console. This domain will contain the managed DNS records.\n\n* If you want to use the public DNS service in Tencent Cloud:\n1. Set the arg `--tencent-cloud-zone-type=public`\n2. Create a domain in the DNSPod console. This domain will contain the managed DNS records.\n\n## Set up CAM for API Key\n\nIn the Tencent CAM Console, you can get a secretId and secretKey pair. 
Make sure the key pair has the following policies.\n\n```json\n{\n    \"version\": \"2.0\",\n    \"statement\": [\n        {\n            \"effect\": \"allow\",\n            \"action\": [\n                \"dnspod:ModifyRecord\",\n                \"dnspod:DeleteRecord\",\n                \"dnspod:CreateRecord\",\n                \"dnspod:DescribeRecordList\",\n                \"dnspod:DescribeDomainList\"\n            ],\n            \"resource\": [\n                \"*\"\n            ]\n        },\n        {\n            \"effect\": \"allow\",\n            \"action\": [\n                \"privatedns:DescribePrivateZoneList\",\n                \"privatedns:DescribePrivateZoneRecordList\",\n                \"privatedns:CreatePrivateZoneRecord\",\n                \"privatedns:DeletePrivateZoneRecord\",\n                \"privatedns:ModifyPrivateZoneRecord\"\n            ],\n            \"resource\": [\n                \"*\"\n            ]\n        }\n    ]\n}\n```\n\n# Deploy ExternalDNS\n\n## Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: external-dns\ndata:\n  tencent-cloud.json: |\n    {\n      \"regionId\": \"ap-shanghai\",\n      \"secretId\": \"******\",\n  
    \"secretKey\": \"******\",\n      \"vpcId\": \"vpc-******\",\n      \"internetEndpoint\": false  # Default: false. Access the Tencent API through the intranet. If you need to deploy on the public network, you need to change to true\n    }\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=external-dns-test.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones\n        - --provider=tencentcloud\n        - --policy=sync # set `upsert-only` would prevent ExternalDNS from deleting any records\n        - --tencent-cloud-zone-type=private # only look at private hosted zones. set `public` to use the public dns service.\n        - --tencent-cloud-config-file=\/etc\/kubernetes\/tencent-cloud.json\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        imagePullPolicy: Always\n        name: external-dns\n        resources: {}\n        terminationMessagePath: \/dev\/termination-log\n        terminationMessagePolicy: File\n        volumeMounts:\n        - mountPath: \/etc\/kubernetes\n          name: config-volume\n          readOnly: true\n      dnsPolicy: ClusterFirst\n      hostAliases:\n      - hostnames:\n        - privatedns.internal.tencentcloudapi.com\n        - dnspod.internal.tencentcloudapi.com\n        ip: 169.254.0.95\n      restartPolicy: Always\n      schedulerName: default-scheduler\n      securityContext: {}\n      serviceAccount: external-dns\n      serviceAccountName: external-dns\n      terminationGracePeriodSeconds: 30\n      volumes:\n      - configMap:\n          defaultMode: 420\n          items:\n          - key: tencent-cloud.json\n            
path: tencent-cloud.json\n          name: external-dns\n        name: config-volume\n```\n\n# Example\n\n## Service\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: nginx.external-dns-test.com\n    external-dns.alpha.kubernetes.io\/internal-hostname: nginx-internal.external-dns-test.com\n    external-dns.alpha.kubernetes.io\/ttl: \"600\"\nspec:\n  type: LoadBalancer\n  ports:\n  - port: 80\n    name: http\n    targetPort: 80\n  selector:\n    app: nginx\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n          name: http\n```\n\n`nginx.external-dns-test.com` will resolve to the LoadBalancer VIP.\n`nginx-internal.external-dns-test.com` will resolve to the ClusterIP.\nAll of the DNS records will have a TTL of 600.\n\n# Attention\n\nExternalDNS keeps track of the records it manages. This makes ExternalDNS safe for running in environments where there are other records managed via other means.\n","site":"external-dns","answers_cleaned":"  Tencent Cloud     External Dns Version    Make sure to use     0 13 1   version of ExternalDNS for this tutorial     Set up PrivateDns or DNSPod  Tencent Cloud DNSPod Service is the domain name resolution and management service for public access  Tencent Cloud PrivateDNS Service is the domain name resolution and management service for VPC internal access     If you want to use internal dns service in Tencent Cloud  1  Set up the args    tencent cloud zone type private  2  Create a DNS domain in PrivateDNS console  DNS domain which will contain the managed DNS records     If you want to use public dns service in Tencent Cloud  1  Set up the args    tencent cloud zone type public  2  Create a Domain in DnsPod console  DNS domain which will contain the managed 
DNS records      Set up CAM for API Key  In Tencent CAM Console  you may get the secretId and secretKey pair  make sure the key pair has those Policy      json        version    2 0        statement                            effect    allow                action                      dnspod ModifyRecord                    dnspod DeleteRecord                    dnspod CreateRecord                    dnspod DescribeRecordList                    dnspod DescribeDomainList                              resource                                                                         effect    allow                action                      privatedns DescribePrivateZoneList                    privatedns DescribePrivateZoneRecordList                    privatedns CreatePrivateZoneRecord                    privatedns DeletePrivateZoneRecord                    privatedns ModifyPrivateZoneRecord                              resource                                                                Deploy ExternalDNS     Manifest  for clusters with RBAC enabled      yaml apiVersion  v1 kind  ServiceAccount metadata    name  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules    apiGroups         resources    services   endpoints   pods     verbs    get   watch   list     apiGroups    extensions   networking k8s io     resources    ingresses     verbs    get   watch   list     apiGroups         resources    nodes     verbs    list       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects    kind  ServiceAccount   name  external dns   namespace  default     apiVersion  v1 kind  ConfigMap metadata    name  external dns data    tencent cloud json                 regionId    ap shanghai          secretId                    secretKey                    vpcId    vpc               
  internetEndpoint   false    Default  false  Access the Tencent API through the intranet  If you need to deploy on the public network  you need to change to true           apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        containers          args              source service             source ingress             domain filter external dns test com   will make ExternalDNS see only the hosted zones matching provided domain  omit to process all available hosted zones             provider tencentcloud             policy sync   set  upsert only  would prevent ExternalDNS from deleting any records             tencent cloud zone type private   only look at private hosted zones  set  public  to use the public dns service              tencent cloud config file  etc kubernetes tencent cloud json         image  registry k8s io external dns external dns v0 15 0         imagePullPolicy  Always         name  external dns         resources             terminationMessagePath   dev termination log         terminationMessagePolicy  File         volumeMounts            mountPath   etc kubernetes           name  config volume           readOnly  true       dnsPolicy  ClusterFirst       hostAliases          hostnames            privatedns internal tencentcloudapi com           dnspod internal tencentcloudapi com         ip  169 254 0 95       restartPolicy  Always       schedulerName  default scheduler       securityContext           serviceAccount  external dns       serviceAccountName  external dns       terminationGracePeriodSeconds  30       volumes          configMap            defaultMode  420           items              key  tencent cloud json             path  tencent cloud json           name  external dns         name  config volume        Example     Service     yaml apiVersion  v1 kind 
 Service metadata    name  nginx   annotations      external dns alpha kubernetes io hostname  nginx external dns test com     external dns alpha kubernetes io internal hostname  nginx internal external dns test com     external dns alpha kubernetes io ttl   600  spec    type  LoadBalancer   ports      port  80     name  http     targetPort  80   selector      app  nginx     apiVersion  apps v1 kind  Deployment metadata    name  nginx spec    selector      matchLabels        app  nginx   template      metadata        labels          app  nginx     spec        containers          image  nginx         name  nginx         ports            containerPort  80           name  http       nginx external dns test com  will record to the Loadbalancer VIP   nginx internal external dns test com  will record to the ClusterIP  all of the DNS Record ttl will be 600     Attention  This makes ExternalDNS safe for running in environments where there are other records managed via other means  "}
{"questions":"external-dns A DNSimple API access token can be acquired by following the Create a DNSimple API Access Token This tutorial describes how to setup ExternalDNS for usage with DNSimple Make sure to use 0 4 6 version of ExternalDNS for this tutorial DNSimple","answers":"# DNSimple\n\n\nThis tutorial describes how to set up ExternalDNS for usage with DNSimple.\n\nMake sure to use version **>=0.4.6** of ExternalDNS for this tutorial.\n\n## Create a DNSimple API Access Token\n\nA DNSimple API access token can be acquired by following the [provided documentation from DNSimple](https:\/\/support.dnsimple.com\/articles\/api-access-token\/).\n\nThe environment variable `DNSIMPLE_OAUTH` must be set to the generated API token to run ExternalDNS with DNSimple.\n\nWhen the generated DNSimple API access token is a _User token_, as opposed to an _Account token_, the following environment variables must also be set:\n  - `DNSIMPLE_ACCOUNT_ID`: Set this to the account ID to which the domains managed by ExternalDNS belong (e.g. `1001234`).\n  - `DNSIMPLE_ZONES`: Set this to a comma-separated list of DNS zones to be managed by ExternalDNS (e.g. 
`mydomain.com,example.com`).\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\nThen apply one of the following manifest files to deploy ExternalDNS.\n\n### Manifest (for clusters without RBAC enabled)\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone you create in DNSimple.\n        - --provider=dnsimple\n        - --registry=txt\n        env:\n        - name: DNSIMPLE_OAUTH\n          value: \"YOUR_DNSIMPLE_API_KEY\"\n        - name: DNSIMPLE_ACCOUNT_ID\n          value: \"SET THIS IF USING A DNSIMPLE USER ACCESS TOKEN\"\n        - name: DNSIMPLE_ZONES\n          value: \"SET THIS IF USING A DNSIMPLE USER ACCESS TOKEN\"\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: 
default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone you create in DNSimple.\n        - --provider=dnsimple\n        - --registry=txt\n        env:\n        - name: DNSIMPLE_OAUTH\n          value: \"YOUR_DNSIMPLE_API_KEY\"\n        - name: DNSIMPLE_ACCOUNT_ID\n          value: \"SET THIS IF USING A DNSIMPLE USER ACCESS TOKEN\"\n        - name: DNSIMPLE_ZONES\n          value: \"SET THIS IF USING A DNSIMPLE USER ACCESS TOKEN\"\n```\n\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: validate-external-dns.example.com\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nNote the annotation on the service; use the same hostname as the DNSimple DNS zone created above. The annotation may also be a subdomain\nof the DNS zone (e.g. 'www.example.com').\n\nExternalDNS uses this annotation to determine what services should be registered with DNS.  
Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.\n\nCreate the deployment and service:\n\n```sh\n$ kubectl create -f nginx.yaml\n```\n\nDepending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service. Check the status by running\n`kubectl get services nginx`.  If the `EXTERNAL-IP` field shows an address, the service is ready to be accessed externally.\n\nOnce the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize\nthe DNSimple DNS records.\n\n## Verifying DNSimple DNS records\n\n### Getting your DNSimple Account ID\n\nIf you do not know your DNSimple account ID, it can be acquired using the [whoami](https:\/\/developer.dnsimple.com\/v2\/identity\/#whoami) endpoint from the DNSimple Identity API.\n\n```sh\ncurl -H \"Authorization: Bearer $DNSIMPLE_ACCOUNT_TOKEN\" \\\n    -H 'Accept: application\/json' \\\n    https:\/\/api.dnsimple.com\/v2\/whoami\n{\n  \"data\": {\n    \"user\": null,\n    \"account\": {\n      \"id\": 1,\n      \"email\": \"example-account@example.com\",\n      \"plan_identifier\": \"dnsimple-professional\",\n      \"created_at\": \"2015-09-18T23:04:37Z\",\n      \"updated_at\": \"2016-06-09T20:03:39Z\"\n    }\n  }\n}\n```\n\n### Looking at the DNSimple Dashboard\n\nYou can view your DNSimple Record Editor at https:\/\/dnsimple.com\/a\/YOUR_ACCOUNT_ID\/domains\/example.com\/records. Ensure you substitute the value `YOUR_ACCOUNT_ID` with the ID of your DNSimple account and `example.com` with the correct domain that you used during validation.\n\n### Using the DNSimple Zone Records API\n\nThis approach allows you to use the DNSimple [List records for a zone](https:\/\/developer.dnsimple.com\/v2\/zones\/records\/#listZoneRecords) endpoint to verify the creation of the A and TXT records. 
Ensure you substitute the value `YOUR_ACCOUNT_ID` with the ID of your DNSimple account and `example.com` with the correct domain that you used during validation.\n\n```sh\ncurl -H \"Authorization: Bearer $DNSIMPLE_ACCOUNT_TOKEN\" \\\n    -H 'Accept: application\/json' \\\n    'https:\/\/api.dnsimple.com\/v2\/YOUR_ACCOUNT_ID\/zones\/example.com\/records?name=validate-external-dns'\n```\n\n## Clean up\n\nNow that we have verified that ExternalDNS will automatically manage DNSimple DNS records, we can delete the tutorial's example:\n\n```sh\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```\n\n### Deleting Created Records\n\nThe created records can be deleted using the record IDs from the verification step and the [Delete a zone record](https:\/\/developer.dnsimple.com\/v2\/zones\/records\/#deleteZoneRecord) endpoint.","site":"external-dns","answers_cleaned":"  DNSimple   This tutorial describes how to setup ExternalDNS for usage with DNSimple   Make sure to use     0 4 6   version of ExternalDNS for this tutorial      Create a DNSimple API Access Token  A DNSimple API access token can be acquired by following the  provided documentation from DNSimple  https   support dnsimple com articles api access token    The environment variable  DNSIMPLE OAUTH  must be set to the generated API token to run ExternalDNS with DNSimple   When the generated DNSimple API access token is a  User token   as opposed to an  Account token   the following environment variables must also be set       DNSIMPLE ACCOUNT ID   Set this to the account ID which the domains to be managed by ExternalDNS belong to  eg   1001234         DNSIMPLE ZONES   Set this to a comma separated list of DNS zones to be managed by ExternalDNS  eg   mydomain com example com        Deploy ExternalDNS  Connect your  kubectl  client to the cluster you want to test ExternalDNS with  Then apply one of the following manifests file to deploy ExternalDNS       Manifest  for clusters without RBAC 
enabled     yaml apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              source service             domain filter example com    optional  limit to only example com domains  change to match the zone you create in DNSimple              provider dnsimple             registry txt         env            name  DNSIMPLE OAUTH           value   YOUR DNSIMPLE API KEY            name  DNSIMPLE ACCOUNT ID           value   SET THIS IF USING A DNSIMPLE USER ACCESS TOKEN            name  DNSIMPLE ZONES           value   SET THIS IF USING A DNSIMPLE USER ACCESS TOKEN           Manifest  for clusters with RBAC enabled      yaml apiVersion  v1 kind  ServiceAccount metadata    name  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules    apiGroups         resources    services   endpoints   pods     verbs    get   watch   list     apiGroups    extensions   networking k8s io     resources    ingresses     verbs    get   watch   list     apiGroups         resources    nodes     verbs    list       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects    kind  ServiceAccount   name  external dns   namespace  default     apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        serviceAccountName  external dns       containers          name  external dns         image  registry k8s io external 
dns external dns v0 15 0         args              source service             domain filter example com    optional  limit to only example com domains  change to match the zone you create in DNSimple              provider dnsimple             registry txt         env            name  DNSIMPLE OAUTH           value   YOUR DNSIMPLE API KEY            name  DNSIMPLE ACCOUNT ID           value   SET THIS IF USING A DNSIMPLE USER ACCESS TOKEN            name  DNSIMPLE ZONES           value   SET THIS IF USING A DNSIMPLE USER ACCESS TOKEN           Deploying an Nginx Service  Create a service file called  nginx yaml  with the following contents      yaml apiVersion  apps v1 kind  Deployment metadata    name  nginx spec    selector      matchLabels        app  nginx   template      metadata        labels          app  nginx     spec        containers          image  nginx         name  nginx         ports            containerPort  80     apiVersion  v1 kind  Service metadata    name  nginx   annotations      external dns alpha kubernetes io hostname  validate external dns example com spec    selector      app  nginx   type  LoadBalancer   ports        protocol  TCP       port  80       targetPort  80      Note the annotation on the service  use the same hostname as the DNSimple DNS zone created above  The annotation may also be a subdomain of the DNS zone  e g   www example com     ExternalDNS uses this annotation to determine what services should be registered with DNS   Removing the annotation will cause ExternalDNS to remove the corresponding DNS records   Create the deployment and service      sh   kubectl create  f nginx yaml      Depending where you run your service it can take a little while for your cloud provider to create an external IP for the service  Check the status by running  kubectl get services nginx    If the  EXTERNAL IP  field shows an address  the service is ready to be accessed externally   Once the service has an external IP assigned  ExternalDNS 
will notice the new service IP address and synchronize the DNSimple DNS records      Verifying DNSimple DNS records      Getting your DNSimple Account ID  If you do not know your DNSimple account ID it can be acquired using the  whoami  https   developer dnsimple com v2 identity  whoami  endpoint from the DNSimple Identity API     sh curl  H  Authorization  Bearer  DNSIMPLE ACCOUNT TOKEN         H  Accept  application json        https   api dnsimple com v2 whoami      data          user   null       account            id   1         email    example account example com          plan identifier    dnsimple professional          created at    2015 09 18T23 04 37Z          updated at    2016 06 09T20 03 39Z                       Looking at the DNSimple Dashboard  You can view your DNSimple Record Editor at https   dnsimple com a YOUR ACCOUNT ID domains example com records  Ensure you substitute the value  YOUR ACCOUNT ID  with the ID of your DNSimple account and  example com  with the correct domain that you used during validation       Using the DNSimple Zone Records API  This approach allows for you to use the DNSimple  List records for a zone  https   developer dnsimple com v2 zones records  listZoneRecords  endpoint to verify the creation of the A and TXT record  Ensure you substitute the value  YOUR ACCOUNT ID  with the ID of your DNSimple account and  example com  with the correct domain that you used during validation      sh curl  H  Authorization  Bearer  DNSIMPLE ACCOUNT TOKEN         H  Accept  application json         https   api dnsimple com v2 YOUR ACCOUNT ID zones example com records name validate external dns          Clean up  Now that we have verified that ExternalDNS will automatically manage DNSimple DNS records  we can delete the tutorial s example      sh   kubectl delete  f nginx yaml   kubectl delete  f externaldns yaml          Deleting Created Records  The created records can be deleted using the record IDs from the verification step and the 
 Delete a zone record  https   developer dnsimple com v2 zones records  deleteZoneRecord  endpoint "}
{"questions":"external-dns Alibaba Cloud json RAM Permissions This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster on Alibaba Cloud Make sure to use 0 5 6 version of ExternalDNS for this tutorial Statement Version 1","answers":"# Alibaba Cloud\n\nThis tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster on Alibaba Cloud. Make sure to use version **>=0.5.6** of ExternalDNS for this tutorial.\n\n## RAM Permissions\n\n```json\n{\n  \"Version\": \"1\",\n  \"Statement\": [\n    {\n      \"Action\": \"alidns:AddDomainRecord\",\n      \"Resource\": \"*\",\n      \"Effect\": \"Allow\"\n    },\n    {\n      \"Action\": \"alidns:DeleteDomainRecord\",\n      \"Resource\": \"*\",\n      \"Effect\": \"Allow\"\n    },\n    {\n      \"Action\": \"alidns:UpdateDomainRecord\",\n      \"Resource\": \"*\",\n      \"Effect\": \"Allow\"\n    },\n    {\n      \"Action\": \"alidns:DescribeDomainRecords\",\n      \"Resource\": \"*\",\n      \"Effect\": \"Allow\"\n    },\n    {\n      \"Action\": \"alidns:DescribeDomains\",\n      \"Resource\": \"*\",\n      \"Effect\": \"Allow\"\n    },\n    {\n      \"Action\": \"pvtz:AddZoneRecord\",\n      \"Resource\": \"*\",\n      \"Effect\": \"Allow\"\n    },\n    {\n      \"Action\": \"pvtz:DeleteZoneRecord\",\n      \"Resource\": \"*\",\n      \"Effect\": \"Allow\"\n    },\n    {\n      \"Action\": \"pvtz:UpdateZoneRecord\",\n      \"Resource\": \"*\",\n      \"Effect\": \"Allow\"\n    },\n    {\n      \"Action\": \"pvtz:DescribeZoneRecords\",\n      \"Resource\": \"*\",\n      \"Effect\": \"Allow\"\n    },\n    {\n      \"Action\": \"pvtz:DescribeZones\",\n      \"Resource\": \"*\",\n      \"Effect\": \"Allow\"\n    },\n    {\n      \"Action\": \"pvtz:DescribeZoneInfo\",\n      \"Resource\": \"*\",\n      \"Effect\": \"Allow\"\n    }\n  ]\n}\n```\n\nWhen running on Alibaba Cloud, you need to make sure that your nodes (on which External DNS runs) have the RAM instance profile with the 
above RAM role assigned.\n\n## Set up an Alibaba Cloud DNS service or Private Zone service\n\nAlibaba Cloud DNS Service is the domain name resolution and management service for public access. It routes access from end-users to the designated web app.\nAlibaba Cloud Private Zone is the domain name resolution and management service for VPC internal access.\n\n*If you prefer to try out ExternalDNS in an existing domain or zone, you can skip this step.*\n\nCreate a DNS domain which will contain the managed DNS records. For the public DNS service, the domain name should be valid and owned by you.\n\n```console\n$ aliyun alidns AddDomain --DomainName \"external-dns-test.com\"\n```\n\n\nMake a note of the ID of the hosted zone you just created.\n\n```console\n$ aliyun alidns DescribeDomains --KeyWord=\"external-dns-test.com\" | jq -r '.Domains.Domain[0].DomainId'\n```\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\nThen apply one of the following manifest files to deploy ExternalDNS.\n\n### Manifest (for clusters without RBAC enabled)\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=external-dns-test.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones\n        - --provider=alibabacloud\n        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization\n        - --alibaba-cloud-zone-type=public # only look at public hosted zones (valid values are public, private or 
no value for both)\n        - --registry=txt\n        - --txt-owner-id=my-identifier\n        volumeMounts:\n        - mountPath: \/usr\/share\/zoneinfo\n          name: hostpath\n      volumes:\n      - name: hostpath\n        hostPath:\n          path: \/usr\/share\/zoneinfo\n          type: Directory\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"] \n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=external-dns-test.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones\n        - --provider=alibabacloud\n        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization\n        - --alibaba-cloud-zone-type=public # only look 
at public hosted zones (valid values are public, private or no value for both)\n        - --registry=txt\n        - --txt-owner-id=my-identifier\n        - --alibaba-cloud-config-file= # enable sts token \n        volumeMounts:\n        - mountPath: \/usr\/share\/zoneinfo\n          name: hostpath\n      volumes:\n      - name: hostpath\n        hostPath:\n          path: \/usr\/share\/zoneinfo\n          type: Directory\n```\n\n## Arguments\n\nThis is not the full list, just a few selected arguments.\n\n### alibaba-cloud-zone-type\n\n`alibaba-cloud-zone-type` allows filtering for private and public zones:\n\n* If the value is `public`, it will sync with records in the Alibaba Cloud DNS Service\n* If the value is `private`, it will sync with records in the Alibaba Cloud Private Zone Service\n\n## Verify ExternalDNS works (Ingress example)\n\nCreate an ingress resource manifest file.\n\n> For ingress objects ExternalDNS will create a DNS record based on the host specified for the ingress object.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: foo\nspec:\n  ingressClassName: nginx # use the one that corresponds to your ingress controller.\n  rules:\n  - host: foo.external-dns-test.com\n    http:\n      paths:\n      - path: \/\n        pathType: Prefix\n        backend:\n          service:\n            name: foo\n            port:\n              number: 80\n```\n\n## Verify ExternalDNS works (Service example)\n\nCreate the following sample application to test that ExternalDNS works.\n\n> For services ExternalDNS will look for the annotation `external-dns.alpha.kubernetes.io\/hostname` on the service and use the corresponding value.\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: nginx.external-dns-test.com.\nspec:\n  type: LoadBalancer\n  ports:\n  - port: 80\n    name: http\n    targetPort: 80\n  selector:\n    app: nginx\n\n---\n\napiVersion: apps\/v1\nkind: 
Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n          name: http\n```\n\nAfter roughly two minutes, check that a corresponding DNS record for your service was created.\n\n```console\n$ aliyun alidns DescribeDomainRecords --DomainName=external-dns-test.com\n{\n  \"PageNumber\": 1,\n  \"TotalCount\": 2,\n  \"PageSize\": 20,\n  \"RequestId\": \"1DBEF426-F771-46C7-9802-4989E9C94EE8\",\n  \"DomainRecords\": {\n    \"Record\": [\n      {\n        \"RR\": \"nginx\",\n        \"Status\": \"ENABLE\",\n        \"Value\": \"1.2.3.4\",\n        \"Weight\": 1,\n        \"RecordId\": \"3994015629411328\",\n        \"Type\": \"A\",\n        \"DomainName\": \"external-dns-test.com\",\n        \"Locked\": false,\n        \"Line\": \"default\",\n        \"TTL\": 600\n      },\n      {\n        \"RR\": \"nginx\",\n        \"Status\": \"ENABLE\",\n        \"Value\": \"heritage=external-dns;external-dns\/owner=my-identifier\",\n        \"Weight\": 1,\n        \"RecordId\": \"3994015629411329\",\n        \"Type\": \"TXT\",\n        \"DomainName\": \"external-dns-test.com\",\n        \"Locked\": false,\n        \"Line\": \"default\",\n        \"TTL\": 600\n      }\n    ]\n  }\n}\n```\n\nNote the TXT record created alongside the ALIAS record. The TXT record signifies that the corresponding ALIAS record is managed by ExternalDNS. This makes ExternalDNS safe to run in environments where other records are managed via other means.\n\nLet's check that we can resolve this DNS name. 
We'll ask the nameservers assigned to your zone first.\n\n```console\n$ dig nginx.external-dns-test.com.\n```\n\nIf you hooked up your DNS zone with its parent zone correctly you can use `curl` to access your site.\n\n```console\n$ curl nginx.external-dns-test.com.\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!<\/title>\n...\n<\/head>\n<body>\n...\n<\/body>\n<\/html>\n```\n\n## Custom TTL\n\nThe default DNS record TTL (Time-To-Live) is 300 seconds. You can customize this value by setting the annotation `external-dns.alpha.kubernetes.io\/ttl`.\ne.g., modify the service manifest YAML file above:\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: nginx.external-dns-test.com\n    external-dns.alpha.kubernetes.io\/ttl: 60\nspec:\n    ...\n```\n\nThis will set the DNS record's TTL to 60 seconds.\n\n## Clean up\n\nMake sure to delete all Service objects before terminating the cluster so all load balancers get cleaned up correctly.\n\n```console\n$ kubectl delete service nginx\n```\n\nGive ExternalDNS some time to clean up the DNS records for you. 
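You can confirm that the records were removed by listing them again, reusing the same query from earlier in this tutorial (assuming the same test domain):\n\n```console\n$ aliyun alidns DescribeDomainRecords --DomainName=external-dns-test.com\n```\n\n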
Then delete the hosted zone if you created one for testing purposes.\n\n```console\n$ aliyun alidns DeleteDomain --DomainName external-dns-test.com\n```\n\nFor more information about Alibaba Cloud External DNS, please refer to this [documentation](https:\/\/yq.aliyun.com\/articles\/633412)","site":"external-dns"}
{"questions":"external-dns Using the resource with External DNS requires Contour version 1 5 or greater Without RBAC This tutorial describes how to configure External DNS to use the Contour source yaml apiVersion apps v1 Example manifests for External DNS Contour HTTPProxy","answers":"# Contour HTTPProxy\n\nThis tutorial describes how to configure External DNS to use the Contour `HTTPProxy` source.\nUsing the `HTTPProxy` resource with External DNS requires Contour version 1.5 or greater.\n\n### Example manifests for External DNS\n#### Without RBAC\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --source=contour-httpproxy\n        - --domain-filter=external-dns-test.my-org.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones\n        - --provider=aws\n        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization\n        - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)\n        - --registry=txt\n        - --txt-owner-id=my-identifier\n```\n\n#### With RBAC\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"] \n  verbs: 
[\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n- apiGroups: [\"projectcontour.io\"]\n  resources: [\"httpproxies\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --source=contour-httpproxy\n        - --domain-filter=external-dns-test.my-org.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones\n        - --provider=aws\n        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization\n        - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)\n        - --registry=txt\n        - --txt-owner-id=my-identifier\n```\n\n### Verify External DNS works\nThe following instructions are based on the \n[Contour example workload](https:\/\/github.com\/projectcontour\/contour\/tree\/master\/examples\/example-workload\/httpproxy).\n\n#### Install a sample service\n```bash\n$ kubectl apply -f - <<EOF\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  labels:\n    app: kuard\n  name: kuard\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: kuard\n  template:\n    
metadata:\n      labels:\n        app: kuard\n    spec:\n      containers:\n      - image: gcr.io\/kuar-demo\/kuard-amd64:1\n        name: kuard\n---\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: kuard\n  name: kuard\nspec:\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 8080\n  selector:\n    app: kuard\n  sessionAffinity: None\n  type: ClusterIP\nEOF\n```\n\nThen create an `HTTPProxy`:\n\n```bash\n$ kubectl apply -f - <<EOF\napiVersion: projectcontour.io\/v1\nkind: HTTPProxy\nmetadata:\n  labels:\n    app: kuard\n  name: kuard\n  namespace: default\nspec:\n  virtualhost:\n    fqdn: kuard.example.com\n  routes:\n    - conditions:\n      - prefix: \/\n      services:\n        - name: kuard\n          port: 80\nEOF\n```\n\n#### Access the sample service using `curl`\n\n```bash\n$ curl -i http:\/\/kuard.example.com\/healthy\nHTTP\/1.1 200 OK\nContent-Type: text\/plain\nDate: Thu, 27 Jun 2019 19:42:26 GMT\nContent-Length: 2\n\nok\n```","site":"external-dns"}
{"questions":"external-dns CoreDNS with minikube warning This tutorial is out of date You need to This tutorial describes how to setup ExternalDNS for usage within a cluster that makes use of and informationsource PRs to update it are welcome","answers":"# CoreDNS with minikube\n\n:warning: This tutorial is out of date.\n\n:information_source: PRs to update it are welcome !\n\nThis tutorial describes how to setup ExternalDNS for usage within a [minikube](https:\/\/github.com\/kubernetes\/minikube) cluster that makes use of [CoreDNS](https:\/\/github.com\/coredns\/coredns) and [nginx ingress controller](https:\/\/github.com\/kubernetes\/ingress-nginx).\n\nYou need to:\n\n* install CoreDNS with [etcd](https:\/\/github.com\/etcd-io\/etcd) enabled\n* install external-dns with coredns as a provider\n* enable ingress controller for the minikube cluster\n\n## Creating a cluster\n\n```shell\nminikube start\n```\n\n## Installing CoreDNS with etcd enabled\n\nHelm chart is used to install etcd and CoreDNS.\n\n### Initializing helm chart\n\n```shell\nhelm init\n```\n\n### Installing etcd\n\n[etcd operator](https:\/\/github.com\/coreos\/etcd-operator) is used to manage etcd clusters.\n```\nhelm install stable\/etcd-operator --name my-etcd-op\n```\n\netcd cluster is installed with example yaml from etcd operator website.\n\n```shell\nkubectl apply -f https:\/\/raw.githubusercontent.com\/coreos\/etcd-operator\/HEAD\/example\/example-etcd-cluster.yaml\n```\n\n### Installing CoreDNS\n\nIn order to make CoreDNS work with etcd backend, values.yaml of the chart should be changed with corresponding configurations.\n\n```\nwget https:\/\/raw.githubusercontent.com\/helm\/charts\/HEAD\/stable\/coredns\/values.yaml\n```\n\nYou need to edit\/patch the file with below diff\n\n```diff\ndiff --git a\/values.yaml b\/values.yaml\nindex 964e72b..e2fa934 100644\n--- a\/values.yaml\n+++ b\/values.yaml\n@@ -27,12 +27,12 @@ service:\n\n rbac:\n   # If true, create & use RBAC resources\n-  create: 
false\n+  create: true\n   # Ignored if rbac.create is true\n   serviceAccountName: default\n\n # isClusterService specifies whether chart should be deployed as cluster-service or normal k8s app.\n-isClusterService: true\n+isClusterService: false\n\n servers:\n - zones:\n@@ -51,6 +51,12 @@ servers:\n     parameters: 0.0.0.0:9153\n   - name: proxy\n     parameters: . \/etc\/resolv.conf\n+  - name: etcd\n+    parameters: example.org\n+    configBlock: |-\n+      stubzones\n+      path \/skydns\n+      endpoint http:\/\/10.105.68.165:2379\n\n # Complete example with all the options:\n # - zones:                 # the `zones` block can be left out entirely, defaults to \".\"\n```\n\n**Note**:\n\n* The IP address of the etcd endpoint should be obtained from the etcd client service, which is \"example-etcd-cluster-client\" in this example. This IP address is used throughout this document for the etcd endpoint configuration.\n\n```shell\n$ kubectl get svc example-etcd-cluster-client\nNAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE\nexample-etcd-cluster-client   ClusterIP   10.105.68.165   <none>        2379\/TCP   16m\n```\n\n* The `parameters` value should be set to your own domain; \"example.org\" is used in this example.\n\nOnce the configuration in values.yaml is done, you can install the CoreDNS chart.\n\n```shell\nhelm install --name my-coredns --values values.yaml stable\/coredns\n```\n\n## Installing ExternalDNS\n\n### Install ExternalDNS\n\nETCD_URLS is configured to the etcd client service address.\nOptionally, you can configure ETCD_USERNAME and ETCD_PASSWORD for authenticating to etcd. 
It is also possible to connect to the etcd cluster via HTTPS using the following environment variables: ETCD_CA_FILE, ETCD_CERT_FILE, ETCD_KEY_FILE, ETCD_TLS_SERVER_NAME, ETCD_TLS_INSECURE.\n\n#### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\n  namespace: kube-system\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=ingress\n        - --provider=coredns\n        - --log-level=debug # debug only\n        env:\n        - name: ETCD_URLS\n          value: http:\/\/10.105.68.165:2379\n```\n\n#### Manifest (for clusters with RBAC enabled)\n\n```yaml\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: kube-system\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n  namespace: kube-system\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\n  namespace: kube-system\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n    
  serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=ingress\n        - --provider=coredns\n        - --log-level=debug # debug only\n        env:\n        - name: ETCD_URLS\n          value: http:\/\/10.105.68.165:2379\n```\n\n## Enable the ingress controller\n\nYou can use the ingress controller in the minikube cluster. You need to enable the ingress addon in the cluster.\n\n```shell\nminikube addons enable ingress\n```\n\n## Testing ingress example\n\n```shell\n$ cat ingress.yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: nginx\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: nginx.example.org\n    http:\n      paths:\n      - path: \/\n        pathType: Prefix\n        backend:\n          service:\n            name: nginx\n            port:\n              number: 80\n\n$ kubectl apply -f ingress.yaml\ningress.networking.k8s.io\/nginx created\n```\n\nWait a moment until DNS has the ingress IP. The DNS service IP is from the CoreDNS service. 
It is \"my-coredns-coredns\" in this example.\n\n```shell\n$ kubectl get svc my-coredns-coredns\nNAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE\nmy-coredns-coredns   ClusterIP   10.100.4.143   <none>        53\/UDP    12m\n\n$ kubectl get ingress\nNAME      HOSTS               ADDRESS     PORTS     AGE\nnginx     nginx.example.org   10.0.2.15   80        2m\n\n$ kubectl run -it --rm --restart=Never --image=infoblox\/dnstools:latest dnstools\nIf you don't see a command prompt, try pressing enter.\ndnstools# dig @10.100.4.143 nginx.example.org +short\n10.0.2.15\ndnstools#\n```","site":"external-dns"}
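Behind the dig query above, the coredns provider persists each record into etcd in SkyDNS message format, which the CoreDNS `etcd` plugin then serves: the key is the domain name reversed into path segments under the `/skydns` prefix, and the value is a small JSON document. A minimal Python sketch of that encoding (the helper names are illustrative, not part of ExternalDNS, and ExternalDNS may append an extra label per target):

```python
import json

# Default key prefix used by the CoreDNS etcd plugin.
ETCD_PREFIX = "/skydns"

def skydns_key(domain: str) -> str:
    """Reverse the domain labels into an etcd key path:
    nginx.example.org -> /skydns/org/example/nginx"""
    return ETCD_PREFIX + "/" + "/".join(reversed(domain.split(".")))

def skydns_value(target: str, ttl: int = 300) -> str:
    """SkyDNS-style message that the etcd plugin resolves from."""
    return json.dumps({"host": target, "ttl": ttl})

print(skydns_key("nginx.example.org"))  # /skydns/org/example/nginx
print(skydns_value("10.0.2.15"))
```

If you want to inspect what was actually written, `etcdctl get --prefix /skydns` against the etcd client endpoint (assuming an etcdctl v3 client) dumps the stored messages.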
{"questions":"external-dns Create a DNS zone which will contain the managed DNS records Let s use Make sure to use the latest version of ExternalDNS for this tutorial Oracle Cloud Infrastructure Creating an OCI DNS Zone as a reference here Make note of the OCID of the compartment This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using OCI DNS","answers":"# Oracle Cloud Infrastructure\n\nThis tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster using OCI DNS.\n\nMake sure to use the latest version of ExternalDNS for this tutorial.\n\n## Creating an OCI DNS Zone\n\nCreate a DNS zone which will contain the managed DNS records. Let's use\n`example.com` as a reference here.  Make note of the OCID of the compartment\nin which you created the zone; you'll need to provide that later.\n\nFor more information about OCI DNS, see the documentation [here][1].\n\n## Using Private OCI DNS Zones\n\nBy default, the ExternalDNS OCI provider is configured to use Global OCI\nDNS Zones. If you want to use Private OCI DNS Zones, add the following\nargument to the ExternalDNS controller:\n\n```\n--oci-zone-scope=PRIVATE\n```\n\nTo use both Global and Private OCI DNS Zones, set the OCI Zone Scope to be\nempty:\n\n```\n--oci-zone-scope=\n```\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\nThe OCI provider supports three authentication options: key-based, instance\nprincipal, and workload identity.\n\n### Key-based\n\nWe first need to create a config file containing the information needed to connect with the OCI API.\n\nCreate a new file (oci.yaml) and modify the contents to match the example\nbelow. 
Be sure to adjust the values to match your own credentials, and the OCID\nof the compartment containing the zone:\n\n```yaml\nauth:\n  region: us-phoenix-1\n  tenancy: ocid1.tenancy.oc1...\n  user: ocid1.user.oc1...\n  key: |\n    -----BEGIN RSA PRIVATE KEY-----\n    -----END RSA PRIVATE KEY-----\n  fingerprint: af:81:71:8e...\n  # Omit if there is not a password for the key\n  passphrase: Tx1jRk...\ncompartment: ocid1.compartment.oc1...\n```\n\nCreate a secret using the config file above:\n\n```shell\n$ kubectl create secret generic external-dns-config --from-file=oci.yaml\n```\n\n### OCI IAM Instance Principal\n\nIf you're running ExternalDNS within OCI, you can use OCI IAM instance\nprincipals to authenticate with OCI.  This obviates the need to create the\nsecret with your credentials.  You'll need to ensure an OCI IAM policy exists\nwith a statement granting the `manage dns` permission on zones and records in\nthe target compartment to the dynamic group covering your instance running\nExternalDNS.\nE.g.:\n\n```\nAllow dynamic-group <dynamic-group-name> to manage dns in compartment id <target-compartment-OCID>\n```\n\nYou'll also need to add the `--oci-auth-instance-principal` flag to enable\nthis type of authentication. Finally, you'll need to add the\n`--oci-compartment-ocid=ocid1.compartment.oc1...` flag to provide the OCID of\nthe compartment containing the zone to be managed.\n\nFor more information about OCI IAM instance principals, see the documentation [here][2].\nFor more information about OCI IAM policy details for the DNS service, see the documentation [here][3].\n\n### OCI IAM Workload Identity\n\nIf you're running ExternalDNS within an OCI Container Engine for Kubernetes (OKE) cluster,\nyou can use OCI IAM Workload Identity to authenticate with OCI. 
You'll need to ensure an\nOCI IAM policy exists with a statement granting the `manage dns` permission on zones and\nrecords in the target compartment covering your OKE cluster running ExternalDNS.\nE.g.:\n\n```\nAllow any-user to manage dns in compartment <compartment-name> where all {request.principal.type='workload',request.principal.cluster_id='<cluster-ocid>',request.principal.service_account='external-dns'}\n```\n\nYou'll also need to create a new file (oci.yaml) and modify the contents to match the example\nbelow. Be sure to adjust the values to match your region and the OCID\nof the compartment containing the zone:\n\n```yaml\nauth:\n  region: us-phoenix-1\n  useWorkloadIdentity: true\ncompartment: ocid1.compartment.oc1...\n```\n\nCreate a secret using the config file above:\n\n```shell\n$ kubectl create secret generic external-dns-config --from-file=oci.yaml\n```\n\n## Manifest (for clusters with RBAC enabled)\n\nApply the following manifest to deploy ExternalDNS.\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: 
external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --provider=oci\n        - --policy=upsert-only # prevent ExternalDNS from deleting any records, omit to enable full synchronization\n        - --txt-owner-id=my-identifier\n        # Specifies the OCI DNS Zone scope, defaults to GLOBAL.\n        # May be GLOBAL, PRIVATE, or an empty value to specify both GLOBAL and PRIVATE OCI DNS Zones\n        # - --oci-zone-scope=GLOBAL\n        # Specifies the zone cache duration, defaults to 0s. If set to 0s, the zone cache is disabled.\n        # Use of zone caching is recommended to reduce the amount of requests sent to OCI DNS.\n        # - --oci-zones-cache-duration=0s\n        volumeMounts:\n          - name: config\n            mountPath: \/etc\/kubernetes\/\n      volumes:\n      - name: config\n        secret:\n          secretName: external-dns-config\n```\n\n## Verify ExternalDNS works (Service example)\n\nCreate the following sample application to test that ExternalDNS works.\n\n> For services ExternalDNS will look for the annotation `external-dns.alpha.kubernetes.io\/hostname` on the service and use the corresponding value.\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: example.com\nspec:\n  type: LoadBalancer\n  ports:\n  - port: 80\n    name: http\n    targetPort: 80\n  selector:\n    app: nginx\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n          name: http\n```\n\nApply the manifest above and wait roughly two 
minutes and check that a corresponding DNS record for your service was created.\n\n```\n$ kubectl apply -f nginx.yaml\n```\n\n[1]: https:\/\/docs.cloud.oracle.com\/iaas\/Content\/DNS\/Concepts\/dnszonemanagement.htm\n[2]: https:\/\/docs.cloud.oracle.com\/iaas\/Content\/Identity\/Tasks\/callingservicesfrominstances.htm\n[3]: https:\/\/docs.cloud.oracle.com\/iaas\/Content\/Identity\/Reference\/dnspolicyreference.htm\n","site":"external-dns"}
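The two oci.yaml shapes in the tutorial above (key-based vs. workload identity) differ only in their `auth` block. As an illustrative sketch (our own helper, not ExternalDNS code), a config can be classified by which of those documented fields are present:

```python
# Field names taken from the oci.yaml examples in the tutorial;
# the validator itself is a hypothetical illustration.
KEY_BASED_FIELDS = {"region", "tenancy", "user", "key", "fingerprint"}

def oci_auth_mode(cfg: dict) -> str:
    """Return which documented auth mode a parsed oci.yaml selects."""
    auth = cfg.get("auth", {})
    if auth.get("useWorkloadIdentity"):
        return "workload-identity"
    missing = KEY_BASED_FIELDS - auth.keys()
    if missing:
        raise ValueError(f"key-based auth is missing fields: {sorted(missing)}")
    return "key-based"

workload = {"auth": {"region": "us-phoenix-1", "useWorkloadIdentity": True},
            "compartment": "ocid1.compartment.oc1..."}
print(oci_auth_mode(workload))  # workload-identity
```

Note that `passphrase` is intentionally not required here, matching the comment in the example that it may be omitted when the key has no password.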
{"questions":"external-dns AWS Cloud Map API is an alternative approach to managing DNS records directly using the Route53 API It is more suitable for a dynamic environment where service endpoints change frequently It abstracts away technical details of the DNS protocol and offers a simplified model AWS Cloud Map consists of three main API calls CreatePublicDnsNamespace automatically creates a DNS hosted zone CreateService creates a new named service inside the specified namespace This tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster with RegisterInstance DeregisterInstance can be called multiple times to create a DNS record for the specified Service AWS Cloud Map API","answers":"# AWS Cloud Map API\n\nThis tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster with [AWS Cloud Map API](https:\/\/docs.aws.amazon.com\/cloud-map\/).\n\n**AWS Cloud Map** API is an alternative approach to managing DNS records directly using the Route53 API. It is more suitable for a dynamic environment where service endpoints change frequently. It abstracts away technical details of the DNS protocol and offers a simplified model. AWS Cloud Map consists of three main API calls:\n\n* CreatePublicDnsNamespace \u2013 automatically creates a DNS hosted zone\n* CreateService \u2013 creates a new named service inside the specified namespace\n* RegisterInstance\/DeregisterInstance \u2013 can be called multiple times to create a DNS record for the specified *Service*\n\nLearn more about the API in the [AWS Cloud Map API Reference](https:\/\/docs.aws.amazon.com\/cloud-map\/latest\/api\/API_Operations.html).\n\n## IAM Permissions\n\nTo use the AWS Cloud Map API, a user must have permissions to create the DNS namespace. 
You need to make sure that your nodes (on which ExternalDNS runs) have an IAM instance profile with the `AWSCloudMapFullAccess` managed policy attached, which provides the following permissions:\n\n```\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"route53:GetHostedZone\",\n        \"route53:ListHostedZonesByName\",\n        \"route53:CreateHostedZone\",\n        \"route53:DeleteHostedZone\",\n        \"route53:ChangeResourceRecordSets\",\n        \"route53:CreateHealthCheck\",\n        \"route53:GetHealthCheck\",\n        \"route53:DeleteHealthCheck\",\n        \"route53:UpdateHealthCheck\",\n        \"ec2:DescribeVpcs\",\n        \"ec2:DescribeRegions\",\n        \"servicediscovery:*\"\n      ],\n      \"Resource\": [\n        \"*\"\n      ]\n    }\n  ]\n}\n```\n\n### IAM Permissions with ABAC\nYou can use Attribute-based access control (ABAC) for advanced deployments.  \n\nYou can define AWS tags that are applied to services created by the controller. By doing so, you can have precise control over your IAM policy to limit the scope of the permissions to services managed by the controller, rather than having to grant full permissions on your entire AWS account.  
\nTo pass tags to service creation, use either CLI flags or environment variables:  \n\n*cli:* `--aws-sd-create-tag=key1=value1 --aws-sd-create-tag=key2=value2`\n\n*environment:* `EXTERNAL_DNS_AWS_SD_CREATE_TAG=key1=value1\\nkey2=value2`\n\nUsing tags, your `servicediscovery` policy can become:\n\n```\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"servicediscovery:ListNamespaces\",\n        \"servicediscovery:ListServices\"\n      ],\n      \"Resource\": [\n        \"*\"\n      ]\n    },\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"servicediscovery:CreateService\",\n        \"servicediscovery:TagResource\"\n      ],\n      \"Resource\": [\n        \"*\"\n      ],\n      \"Condition\": {\n        \"StringEquals\": {\n          \"aws:RequestTag\/YOUR_TAG_KEY\": \"YOUR_TAG_VALUE\"\n        }\n      }\n    },\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"servicediscovery:DiscoverInstances\"\n      ],\n      \"Resource\": [\n        \"*\"\n      ],\n      \"Condition\": {\n        \"StringEquals\": {\n          \"servicediscovery:NamespaceName\": \"YOUR_NAMESPACE_NAME\"\n        }\n      }\n    },\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"servicediscovery:RegisterInstance\",\n        \"servicediscovery:DeregisterInstance\",\n        \"servicediscovery:DeleteService\",\n        \"servicediscovery:UpdateService\"\n      ],\n      \"Resource\": [\n        \"*\"\n      ],\n      \"Condition\": {\n        \"StringEquals\": {\n          \"aws:ResourceTag\/YOUR_TAG_KEY\": \"YOUR_TAG_VALUE\"\n        }\n      }\n    }\n  ]\n}\n```\n\n## Set up a namespace\n\nCreate a DNS namespace using the AWS Cloud Map API:\n\n```console\n$ aws servicediscovery create-public-dns-namespace --name \"external-dns-test.my-org.com\"\n```\n\nVerify that the namespace was truly created\n\n```console\n$ aws servicediscovery list-namespaces\n```\n\n## 
Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster that you want to test ExternalDNS with.\nThen apply the following manifest file to deploy ExternalDNS.\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        env:\n          - name: AWS_REGION\n            value: us-east-1 # put your CloudMap NameSpace region\n        args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=external-dns-test.my-org.com # Makes ExternalDNS see only the namespaces that match the specified domain. Omit the filter if you want to process all available namespaces.\n        - --provider=aws-sd\n        - --aws-zone-type=public # Only look at public namespaces. 
Valid values are public, private, or no value for both\n        - --txt-owner-id=my-identifier\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\",\"watch\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        env:\n          - name: AWS_REGION\n            value: us-east-1 # put your CloudMap NameSpace region\n        args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=external-dns-test.my-org.com # Makes ExternalDNS see only the namespaces that match the specified domain. Omit the filter if you want to process all available namespaces.\n        - --provider=aws-sd\n        - --aws-zone-type=public # Only look at public namespaces. 
Valid values are public, private, or no value for both\n        - --txt-owner-id=my-identifier\n```\n\n## Verify that ExternalDNS works (Service example)\n\nCreate the following sample application to test that ExternalDNS works.\n\n> For services ExternalDNS will look for the annotation `external-dns.alpha.kubernetes.io\/hostname` on the service and use the corresponding value.\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: nginx.external-dns-test.my-org.com\nspec:\n  type: LoadBalancer\n  ports:\n  - port: 80\n    name: http\n    targetPort: 80\n  selector:\n    app: nginx\n\n---\n\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n          name: http\n```\n\nAfter one minute, check that a corresponding DNS record for your service was created in your hosted zone. We recommend that you use the [Amazon Route53 console](https:\/\/console.aws.amazon.com\/route53) for that purpose.\n\n\n## Custom TTL\n\nThe default DNS record TTL (time to live) is 300 seconds. 
You can customize this value by setting the annotation `external-dns.alpha.kubernetes.io\/ttl`.\nFor example, modify the service manifest YAML file above:\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: nginx.external-dns-test.my-org.com\n    external-dns.alpha.kubernetes.io\/ttl: \"60\"\nspec:\n    ...\n```\n\nThis will set the TTL for the DNS record to 60 seconds.\n\n## IPv6 Support\n\nIf your Kubernetes cluster is configured with IPv6 support, such as an [EKS cluster with IPv6 support](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/deploy-ipv6-cluster.html), ExternalDNS can\nalso create AAAA DNS records.\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: nginx.external-dns-test.my-org.com\n    external-dns.alpha.kubernetes.io\/ttl: \"60\"\nspec:\n  ipFamilies:\n    - \"IPv6\"\n  type: NodePort\n  ports:\n    - port: 80\n      name: http\n      targetPort: 80\n  selector:\n    app: nginx\n```\n\n:information_source: The AWS-SD provider does not currently support dualstack load balancers and will only create A records for these at this time. See the AWS provider and the [AWS Load Balancer Controller Tutorial](.\/aws-load-balancer-controller.md) for dualstack load balancer support.\n\n## Clean up\n\nDelete all service objects before terminating the cluster so all load balancers get cleaned up correctly.\n\n```console\n$ kubectl delete service nginx\n```\n\nGive ExternalDNS some time to clean up the DNS records for you. 
Then delete the remaining service and namespace.\n\n```console\n$ aws servicediscovery list-services\n\n{\n    \"Services\": [\n        {\n            \"Id\": \"srv-6dygt5ywvyzvi3an\",\n            \"Arn\": \"arn:aws:servicediscovery:us-west-2:861574988794:service\/srv-6dygt5ywvyzvi3an\",\n            \"Name\": \"nginx\"\n        }\n    ]\n}\n```\n\n```console\n$ aws servicediscovery delete-service --id srv-6dygt5ywvyzvi3an\n```\n\n```console\n$ aws servicediscovery list-namespaces\n{\n    \"Namespaces\": [\n        {\n            \"Type\": \"DNS_PUBLIC\",\n            \"Id\": \"ns-durf2oxu4gxcgo6z\",\n            \"Arn\": \"arn:aws:servicediscovery:us-west-2:861574988794:namespace\/ns-durf2oxu4gxcgo6z\",\n            \"Name\": \"external-dns-test.my-org.com\"\n        }\n    ]\n}\n```\n\n```console\n$ aws servicediscovery delete-namespace --id ns-durf2oxu4gxcgo6z\n```","site":"external-dns"}
public  private  or no value for both              txt owner id my identifier          Manifest  for clusters with RBAC enabled      yaml apiVersion  v1 kind  ServiceAccount metadata    name  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules    apiGroups         resources    services   endpoints   pods     verbs    get   watch   list     apiGroups    extensions   networking k8s io     resources    ingresses     verbs    get   watch   list     apiGroups         resources    nodes     verbs    list   watch       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects    kind  ServiceAccount   name  external dns   namespace  default     apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        serviceAccountName  external dns       containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         env              name  AWS REGION             value  us east 1   put your CloudMap NameSpace region         args              source service             source ingress             domain filter external dns test my org com   Makes ExternalDNS see only the namespaces that match the specified domain  Omit the filter if you want to process all available namespaces              provider aws sd             aws zone type public   Only look at public namespaces  Valid values are public  private  or no value for both              txt owner id my identifier         Verify that ExternalDNS works  Service example   Create the following sample application to test that ExternalDNS works     For services ExternalDNS will look for the annotation  external dns 
alpha kubernetes io hostname  on the service and use the corresponding value      yaml apiVersion  v1 kind  Service metadata    name  nginx   annotations      external dns alpha kubernetes io hostname  nginx external dns test my org com spec    type  LoadBalancer   ports      port  80     name  http     targetPort  80   selector      app  nginx       apiVersion  apps v1 kind  Deployment metadata    name  nginx spec    selector      matchLabels        app  nginx   template      metadata        labels          app  nginx     spec        containers          image  nginx         name  nginx         ports            containerPort  80           name  http      After one minute check that a corresponding DNS record for your service was created in your hosted zone  We recommended that you use the  Amazon Route53 console  https   console aws amazon com route53  for that purpose       Custom TTL  The default DNS record TTL  time to live  is 300 seconds  You can customize this value by setting the annotation  external dns alpha kubernetes io ttl   For example  modify the service manifest YAML file above      yaml apiVersion  v1 kind  Service metadata    name  nginx   annotations      external dns alpha kubernetes io hostname  nginx external dns test my org com     external dns alpha kubernetes io ttl   60  spec               This will set the TTL for the DNS record to 60 seconds      IPv6 Support  If your Kubernetes cluster is configured with IPv6 support  such as an  EKS cluster with IPv6 support  https   docs aws amazon com eks latest userguide deploy ipv6 cluster html   ExternalDNS can also create AAAA DNS records      yaml apiVersion  v1 kind  Service metadata    name  nginx   annotations      external dns alpha kubernetes io hostname  nginx external dns test my org com     external dns alpha kubernetes io ttl   60  spec    ipFamilies         IPv6    type  NodePort   ports        port  80       name  http       targetPort  80   selector      app  nginx       information 
source  The AWS SD provider does not currently support dualstack load balancers and will only create A records for these at this time  See the AWS provider and the  AWS Load Balancer Controller Tutorial    aws load balancer controller md  for dualstack load balancer support      Clean up  Delete all service objects before terminating the cluster so all load balancers get cleaned up correctly      console   kubectl delete service nginx      Give ExternalDNS some time to clean up the DNS records for you  Then delete the remaining service and namespace      console   aws servicediscovery list services         Services                            Id    srv 6dygt5ywvyzvi3an                Arn    arn aws servicediscovery us west 2 861574988794 service srv 6dygt5ywvyzvi3an                Name    nginx                            console   aws servicediscovery delete service   id srv 6dygt5ywvyzvi3an         console   aws servicediscovery list namespaces        Namespaces                            Type    DNS PUBLIC                Id    ns durf2oxu4gxcgo6z                Arn    arn aws servicediscovery us west 2 861574988794 namespace ns durf2oxu4gxcgo6z                Name    external dns test my org com                            console   aws servicediscovery delete namespace   id ns durf2oxu4gxcgo6z    "}
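The cleanup steps in the record above copy the `Id` values out of the `aws servicediscovery list-services` / `list-namespaces` JSON by hand. As a rough sketch (not part of the original tutorial, and assuming only the CLI output shape shown above), a small helper can extract the IDs to delete programmatically; the sample JSON is copied from the tutorial:

```python
import json

def ids_to_delete(list_output: str, key: str) -> list[str]:
    """Collect resource IDs from `aws servicediscovery list-*` JSON output.

    `key` is the top-level array name, e.g. "Services" or "Namespaces".
    """
    doc = json.loads(list_output)
    return [item["Id"] for item in doc.get(key, [])]

# Sample `list-services` output copied from the tutorial above.
sample = """
{
    "Services": [
        {
            "Id": "srv-6dygt5ywvyzvi3an",
            "Arn": "arn:aws:servicediscovery:us-west-2:861574988794:service/srv-6dygt5ywvyzvi3an",
            "Name": "nginx"
        }
    ]
}
"""

for service_id in ids_to_delete(sample, "Services"):
    # A real script would shell out to:
    #   aws servicediscovery delete-service --id <service_id>
    print(f"aws servicediscovery delete-service --id {service_id}")
```

The same helper works for the `list-namespaces` output by passing `"Namespaces"` as the key.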
{"questions":"external-dns This tutorial describes how to use ExternalDNS with the aws load balancer controller 1 Follow the to setup ExternalDNS for use in Kubernetes clusters AWS Load Balancer Controller Setting up ExternalDNS and aws load balancer controller 1 https kubernetes sigs github io aws load balancer controller running in AWS Specify the argument so that ExternalDNS will look","answers":"# AWS Load Balancer Controller\n\nThis tutorial describes how to use ExternalDNS with the [aws-load-balancer-controller][1].\n\n[1]: https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\n\n## Setting up ExternalDNS and aws-load-balancer-controller\n\nFollow the [AWS tutorial](aws.md) to setup ExternalDNS for use in Kubernetes clusters\nrunning in AWS. Specify the `source=ingress` argument so that ExternalDNS will look\nfor hostnames in Ingress objects. In addition, you may wish to limit which Ingress\nobjects are used as an ExternalDNS source via the `ingress-class` argument, but\nthis is not required.\n\nFor help setting up the AWS Load Balancer Controller, follow the [Setup Guide][2].\n\n[2]: https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/latest\/deploy\/installation\/\n\nNote that the AWS Load Balancer Controller uses the same tags for [subnet auto-discovery][3]\nas Kubernetes does with the AWS cloud provider.\n\n[3]: https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/latest\/deploy\/subnet_discovery\/\n\nIn the examples that follow, it is assumed that you configured the ALB Ingress\nController with the `ingress-class=alb` argument (not to be confused with the\nsame argument to ExternalDNS) so that the controller will only respect Ingress\nobjects with the `ingressClassName` field set to \"alb\".\n\n## Deploy an example application\n\nCreate the following sample \"echoserver\" application to demonstrate how\nExternalDNS works with ALB ingress objects.\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  
name: echoserver\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: echoserver\n  template:\n    metadata:\n      labels:\n        app: echoserver\n    spec:\n      containers:\n      - image: gcr.io\/google_containers\/echoserver:1.4\n        imagePullPolicy: Always\n        name: echoserver\n        ports:\n        - containerPort: 8080\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: echoserver\nspec:\n  ports:\n    - port: 80\n      targetPort: 8080\n      protocol: TCP\n  type: NodePort\n  selector:\n    app: echoserver\n```\n\nNote that the Service object is of type `NodePort`. We don't need a Service of\ntype `LoadBalancer` here, since we will be using an Ingress to create an ALB.\n\n## Ingress examples\n\nCreate the following Ingress to expose the echoserver application to the Internet.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    alb.ingress.kubernetes.io\/scheme: internet-facing\n  name: echoserver\nspec:\n  ingressClassName: alb\n  rules:\n  - host: echoserver.mycluster.example.org\n    http: &echoserver_root\n      paths:\n      - path: \/\n        backend:\n          service:\n            name: echoserver\n            port:\n              number: 80\n        pathType: Prefix\n  - host: echoserver.example.org\n    http: *echoserver_root\n```\n\nThe above should result in the creation of an (ipv4) ALB in AWS which will forward\ntraffic to the echoserver application.\n\nIf the `source=ingress` argument is specified, then ExternalDNS will create DNS\nrecords based on the hosts specified in ingress objects. The above example would\nresult in two alias records being created, `echoserver.mycluster.example.org` and\n`echoserver.example.org`, which both alias the ALB that is associated with the\nIngress object.\n\nNote that the above example makes use of the YAML anchor feature to avoid having\nto repeat the http section for multiple hosts that use the exact same paths. 
If\nthis Ingress object will only be fronting one backend Service, we might instead\ncreate the following:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    alb.ingress.kubernetes.io\/scheme: internet-facing\n    external-dns.alpha.kubernetes.io\/hostname: echoserver.mycluster.example.org, echoserver.example.org\n  name: echoserver\nspec:\n  ingressClassName: alb\n  rules:\n  - http:\n      paths:\n      - path: \/\n        backend:\n          service:\n            name: echoserver\n            port:\n              number: 80\n        pathType: Prefix\n```\n\nIn the above example we create a default path that works for any hostname, and\nmake use of the `external-dns.alpha.kubernetes.io\/hostname` annotation to create\nmultiple aliases for the resulting ALB.\n\n## Dualstack ALBs\n\nAWS [supports][4] both IPv4 and \"dualstack\" (both IPv4 and IPv6) interfaces for ALBs.\nThe AWS Load Balancer Controller uses the `alb.ingress.kubernetes.io\/ip-address-type`\nannotation (which defaults to `ipv4`) to determine this. If this annotation is\nset to `dualstack` then ExternalDNS will create two alias records (one A record\nand one AAAA record) for each hostname associated with the Ingress object.\n\n[4]: https:\/\/docs.aws.amazon.com\/elasticloadbalancing\/latest\/application\/application-load-balancers.html#ip-address-type\n\nExample:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    alb.ingress.kubernetes.io\/scheme: internet-facing\n    alb.ingress.kubernetes.io\/ip-address-type: dualstack\n  name: echoserver\nspec:\n  ingressClassName: alb\n  rules:\n  - host: echoserver.example.org\n    http:\n      paths:\n      - path: \/\n        backend:\n          service:\n            name: echoserver\n            port:\n              number: 80\n        pathType: Prefix\n```\n\nThe above Ingress object will result in the creation of an ALB with a dualstack\ninterface. 
ExternalDNS will create both an A `echoserver.example.org` record and\nan AAAA record of the same name, that each are aliases for the same ALB.","site":"external-dns"}
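The dualstack rule described in the record above (A record only for `ipv4`, A plus AAAA for `dualstack`) can be sketched as a small helper. This is only an illustration of the documented behavior, not ExternalDNS's actual implementation:

```python
def alias_record_types(annotations: dict[str, str]) -> list[str]:
    """Record types ExternalDNS creates for an ALB Ingress, based on the
    alb.ingress.kubernetes.io/ip-address-type annotation (defaults to ipv4)."""
    ip_type = annotations.get("alb.ingress.kubernetes.io/ip-address-type", "ipv4")
    return ["A", "AAAA"] if ip_type == "dualstack" else ["A"]

# An ALB with no annotation is ipv4-only, so only an A alias record is created.
print(alias_record_types({}))
# A dualstack ALB gets both an A and an AAAA alias record per hostname.
print(alias_record_types({"alb.ingress.kubernetes.io/ip-address-type": "dualstack"}))
```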
{"questions":"external-dns For this tutorial please make sure that you are using a version 0 7 2 of ExternalDNS This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using UltraDNS Managing DNS with UltraDNS UltraDNS If you would like to read up on the UltraDNS service you can find additional details here","answers":"# UltraDNS\n\nThis tutorial describes how to set up ExternalDNS for use within a Kubernetes cluster using UltraDNS.\n\nFor this tutorial, please make sure that you are using a version **> 0.7.2** of ExternalDNS.\n\n## Managing DNS with UltraDNS\n\nIf you would like to read up on the UltraDNS service, you can find additional details here: [Introduction to UltraDNS](https:\/\/docs.ultradns.com\/)\n\nBefore proceeding, please create a new DNS zone that will hold the records for this tutorial. For the examples below, we will be using `example.com` as our zone.\n\n## Setting Up UltraDNS Credentials\n\nThe following environment variables are needed to run ExternalDNS with UltraDNS:\n`ULTRADNS_USERNAME`, `ULTRADNS_PASSWORD`, and `ULTRADNS_BASEURL`, plus the optional `ULTRADNS_ACCOUNTNAME`.\n\n## Deploying ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\nThen, apply one of the following manifest files to deploy ExternalDNS.\n\n- Note: We are assuming the zone is already present within UltraDNS.\n- Note: When creating CNAMEs as target endpoints, the `--txt-prefix` option is mandatory.\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n   
     - --source=ingress # ingress is also possible\n        - --domain-filter=example.com # (Recommended) Using this filter minimizes the time to propagate changes, since ExternalDNS has fewer zones to examine.\n        - --provider=ultradns\n        - --txt-prefix=txt-\n        env:\n        - name: ULTRADNS_USERNAME\n          value: \"\"\n        - name: ULTRADNS_PASSWORD  # The password must be base64-encoded.\n          value: \"\"\n        - name: ULTRADNS_BASEURL\n          value: \"https:\/\/api.ultradns.com\/\"\n        - name: ULTRADNS_ACCOUNTNAME\n          value: \"\"\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\",\"watch\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=example.com # (Recommended) Using this filter minimizes the time to propagate changes, since ExternalDNS has fewer zones to examine.\n        - --provider=ultradns\n        - --txt-prefix=txt-\n        env:\n        - name: ULTRADNS_USERNAME\n          value: \"\"\n        - name: ULTRADNS_PASSWORD # The password must be base64-encoded.\n          value: \"\"\n        - name: ULTRADNS_BASEURL\n          value: \"https:\/\/api.ultradns.com\/\"\n        - name: ULTRADNS_ACCOUNTNAME\n          value: \"\"\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: my-app.example.com.\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nPlease note the annotation on the service. Use the same hostname as the UltraDNS zone created above.\n\nExternalDNS uses this annotation to determine what services should be registered with DNS. 
Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.\n\n## Creating the Deployment and Service\n\n```console\n$ kubectl create -f nginx.yaml\n$ kubectl create -f external-dns.yaml\n```\n\nDepending on where you run your service from, it can take a few minutes for your cloud provider to create an external IP for the service.\n\nOnce the service has an external IP assigned, ExternalDNS will notice the new service IP address and will synchronize the UltraDNS records.\n\n## Verifying UltraDNS Records\n\nPlease verify on the [UltraDNS UI](https:\/\/portal.ultradns.com\/login) that the records are created under the zone \"example.com\".\n\nFor more information on the UltraDNS UI, refer to (https:\/\/docs.ultradns.com\/Content\/MSP_User_Guide\/Content\/User%20Guides\/MSP_User_Guide\/Navigation\/Moving%20Around%20the%20UI.htm#_Toc2780722).\n\nSelect the zone that was created above (or the appropriate zone, if a different one was used).\n\nThe external IP address will be displayed as a CNAME record for your zone.\n\n## Cleaning Up the Deployment and Service\n\nNow that we have verified that ExternalDNS will automatically manage your UltraDNS records, you can delete the example resources that you created in this tutorial:\n\n```console\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f external-dns.yaml\n```\n\n## Examples to Manage your Records\n### Creating Multiple A Record Targets\n- First, you want to create a service file called 'apple-banana-echo.yaml'\n```yaml\n---\nkind: Pod\napiVersion: v1\nmetadata:\n  name: example-app\n  labels:\n    app: apple\nspec:\n  containers:\n    - name: example-app\n      image: hashicorp\/http-echo\n      args:\n        - \"-text=apple\"\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: example-service\nspec:\n  selector:\n    app: apple\n  ports:\n    - port: 5678 # Default port for image\n```\n- Then, create a service file called 'expose-apple-banana-app.yaml' to expose the services. 
For more information on deploying an ingress controller, refer to (https:\/\/kubernetes.github.io\/ingress-nginx\/deploy\/)\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: example-ingress\n  annotations:\n    ingress.kubernetes.io\/rewrite-target: \/\n    ingress.kubernetes.io\/scheme: internet-facing\n    external-dns.alpha.kubernetes.io\/hostname: apple.example.com.\n    external-dns.alpha.kubernetes.io\/target: 10.10.10.1,10.10.10.23\nspec:\n  rules:\n  - http:\n      paths:\n        - path: \/apple\n          pathType: Prefix\n          backend:\n            service:\n              name: example-service\n              port:\n                number: 5678\n```\n- Then, create the deployment and service:\n```console\n$ kubectl create -f apple-banana-echo.yaml\n$ kubectl create -f expose-apple-banana-app.yaml\n$ kubectl create -f external-dns.yaml\n```\n- Depending on where you run your service from, it can take a few minutes for your cloud provider to create an external IP for the service.\n- Please verify on the [UltraDNS UI](https:\/\/portal.ultradns.com\/login) that the records have been created under the zone \"example.com\".\n- Finally, you will need to clean up the deployment and service. Please verify on the UI afterwards that the records have been deleted from the zone \"example.com\":\n```console\n$ kubectl delete -f apple-banana-echo.yaml\n$ kubectl delete -f expose-apple-banana-app.yaml\n$ kubectl delete -f external-dns.yaml\n```\n### Creating a CNAME Record\n- Please note that prior to deploying the external-dns service, you will need to add the option `--txt-prefix=txt-` to external-dns.yaml. If this is not provided, your records will not be created.\n- First, create a service file called 'apple-banana-echo.yaml'\n    - _Config File Example \u2013 Kubernetes cluster is on-premise, not on cloud_\n    ```yaml\n    ---\n    kind: Pod\n    apiVersion: v1\n    metadata:\n      name: example-app\n      labels:\n        app: apple\n    spec:\n      containers:\n        - name: example-app\n          image: hashicorp\/http-echo\n          args:\n            - \"-text=apple\"\n    ---\n    kind: Service\n    apiVersion: v1\n    metadata:\n      name: example-service\n    spec:\n      selector:\n        app: apple\n      ports:\n        - port: 5678 # Default port for image\n    ---\n    apiVersion: networking.k8s.io\/v1\n    kind: Ingress\n    metadata:\n      name: example-ingress\n      annotations:\n        ingress.kubernetes.io\/rewrite-target: \/\n        ingress.kubernetes.io\/scheme: internet-facing\n        external-dns.alpha.kubernetes.io\/hostname: apple.example.com.\n        external-dns.alpha.kubernetes.io\/target: apple.cname.com.\n    spec:\n      rules:\n      - http:\n          paths:\n            - path: \/apple\n              pathType: Prefix\n              backend:\n                service:\n                  name: example-service\n                  port:\n                    number: 5678\n    ```\n    - _Config File Example \u2013 Kubernetes cluster service from different cloud vendors_\n    ```yaml\n    ---\n    kind: Pod\n    apiVersion: v1\n    metadata:\n      name: example-app\n      labels:\n        app: apple\n    spec:\n      containers:\n        - name: example-app\n          image: hashicorp\/http-echo\n          args:\n            - \"-text=apple\"\n    ---\n    kind: Service\n    apiVersion: v1\n    metadata:\n      name: example-service\n      annotations:\n        external-dns.alpha.kubernetes.io\/hostname: my-app.example.com.\n    spec:\n      selector:\n        app: apple\n      type: LoadBalancer\n      ports:\n        - protocol: TCP\n          port: 5678\n   
       targetPort: 5678\n    ```\n- Then, create the deployment and service:\n```console\n$ kubectl create -f apple-banana-echo.yaml\n$ kubectl create -f external-dns.yaml\n```\n- Depending on where you run your service from, it can take a few minutes for your cloud provider to create an external IP for the service.\n- Please verify on the [UltraDNS UI](https:\/\/portal.ultradns.com\/login) that the records have been created under the zone \"example.com\".\n- Finally, you will need to clean up the deployment and service. Please verify on the UI afterwards that the records have been deleted from the zone \"example.com\":\n```console\n$ kubectl delete -f apple-banana-echo.yaml\n$ kubectl delete -f external-dns.yaml\n```\n### Creating Multiple Types Of Records\n- Please note that prior to deploying the external-dns service, you will need to add the option `--txt-prefix=txt-` to external-dns.yaml. Since you will also be creating a CNAME record, your records will not be created if this option is not provided.\n- First, create a service file called 'apple-banana-echo.yaml'\n    - _Config File Example \u2013 Kubernetes cluster is on-premise, not on cloud_\n    ```yaml\n    ---\n    kind: Pod\n    apiVersion: v1\n    metadata:\n      name: example-app\n      labels:\n        app: apple\n    spec:\n      containers:\n        - name: example-app\n          image: hashicorp\/http-echo\n          args:\n            - \"-text=apple\"\n    ---\n    kind: Service\n    apiVersion: v1\n    metadata:\n      name: example-service\n    spec:\n      selector:\n        app: apple\n      ports:\n        - port: 5678 # Default port for image\n    ---\n    kind: Pod\n    apiVersion: v1\n    metadata:\n      name: example-app1\n      labels:\n        app: apple1\n    spec:\n      containers:\n        - name: example-app1\n          image: hashicorp\/http-echo\n          args:\n            - \"-text=apple\"\n    ---\n    kind: Service\n    apiVersion: v1\n    metadata:\n      name: 
example-service1\n    spec:\n      selector:\n        app: apple1\n      ports:\n        - port: 5679 # Default port for image\n    ---\n    kind: Pod\n    apiVersion: v1\n    metadata:\n      name: example-app2\n      labels:\n        app: apple2\n    spec:\n      containers:\n        - name: example-app2\n          image: hashicorp\/http-echo\n          args:\n            - \"-text=apple\"\n    ---\n    kind: Service\n    apiVersion: v1\n    metadata:\n      name: example-service2\n    spec:\n      selector:\n        app: apple2\n      ports:\n        - port: 5680 # Default port for image\n    ---\n    apiVersion: networking.k8s.io\/v1\n    kind: Ingress\n    metadata:\n      name: example-ingress\n      annotations:\n        ingress.kubernetes.io\/rewrite-target: \/\n        ingress.kubernetes.io\/scheme: internet-facing\n        external-dns.alpha.kubernetes.io\/hostname: apple.example.com.\n        external-dns.alpha.kubernetes.io\/target: apple.cname.com.\n    spec:\n      rules:\n      - http:\n          paths:\n            - path: \/apple\n              backend:\n                service:\n                  name: example-service\n                  port:\n                    number: 5678\n    ---\n    apiVersion: networking.k8s.io\/v1\n    kind: Ingress\n    metadata:\n      name: example-ingress1\n      annotations:\n        ingress.kubernetes.io\/rewrite-target: \/\n        ingress.kubernetes.io\/scheme: internet-facing\n        external-dns.alpha.kubernetes.io\/hostname: apple-banana.example.com.\n        external-dns.alpha.kubernetes.io\/target: 10.10.10.3\n    spec:\n      rules:\n      - http:\n          paths:\n            - path: \/apple\n              backend:\n                service:\n                  name: example-service1\n                  port:\n                    number: 5679\n    ---\n    apiVersion: networking.k8s.io\/v1\n    kind: Ingress\n    metadata:\n      name: example-ingress2\n      annotations:\n        
ingress.kubernetes.io\/rewrite-target: \/\n        ingress.kubernetes.io\/scheme: internet-facing\n        external-dns.alpha.kubernetes.io\/hostname: banana.example.com.\n        external-dns.alpha.kubernetes.io\/target: 10.10.10.3,10.10.10.20\n    spec:\n      rules:\n      - http:\n          paths:\n            - path: \/apple\n              backend:\n                service:\n                  name: example-service2\n                  port:\n                    number: 5680\n    ```\n    - _Config File Example \u2013 Kubernetes cluster service from different cloud vendors_\n    ```yaml\n    ---\n    apiVersion: apps\/v1\n    kind: Deployment\n    metadata:\n      name: nginx\n    spec:\n      selector:\n        matchLabels:\n          app: nginx\n      template:\n        metadata:\n          labels:\n            app: nginx\n        spec:\n          containers:\n          - image: nginx\n            name: nginx\n            ports:\n            - containerPort: 80\n    ---\n    apiVersion: v1\n    kind: Service\n    metadata:\n      name: nginx\n      annotations:\n        external-dns.alpha.kubernetes.io\/hostname: my-app.example.com.\n    spec:\n      selector:\n        app: nginx\n      type: LoadBalancer\n      ports:\n        - protocol: TCP\n          port: 80\n          targetPort: 80\n    ---\n    kind: Pod\n    apiVersion: v1\n    metadata:\n      name: example-app\n      labels:\n        app: apple\n    spec:\n      containers:\n        - name: example-app\n          image: hashicorp\/http-echo\n          args:\n            - \"-text=apple\"\n    ---\n    kind: Service\n    apiVersion: v1\n    metadata:\n      name: example-service\n    spec:\n      selector:\n        app: apple\n      ports:\n        - port: 5678 # Default port for image\n    ---\n    kind: Pod\n    apiVersion: v1\n    metadata:\n      name: example-app1\n      labels:\n        app: apple1\n    spec:\n      containers:\n        - name: example-app1\n          image: 
hashicorp\/http-echo\n          args:\n            - \"-text=apple\"\n    ---\n    kind: Service\n    apiVersion: v1\n    metadata:\n      name: example-service1\n    spec:\n      selector:\n        app: apple1\n      ports:\n        - port: 5679 # Default port for image\n    ---\n    apiVersion: networking.k8s.io\/v1\n    kind: Ingress\n    metadata:\n      name: example-ingress\n      annotations:\n        ingress.kubernetes.io\/rewrite-target: \/\n        ingress.kubernetes.io\/scheme: internet-facing\n        external-dns.alpha.kubernetes.io\/hostname: apple.example.com.\n        external-dns.alpha.kubernetes.io\/target: 10.10.10.3,10.10.10.25\n    spec:\n      rules:\n      - http:\n          paths:\n            - path: \/apple\n              backend:\n                service:\n                  name: example-service\n                  port:\n                    number: 5678\n    ---\n    apiVersion: networking.k8s.io\/v1\n    kind: Ingress\n    metadata:\n      name: example-ingress1\n      annotations:\n        ingress.kubernetes.io\/rewrite-target: \/\n        ingress.kubernetes.io\/scheme: internet-facing\n        external-dns.alpha.kubernetes.io\/hostname: apple-banana.example.com.\n        external-dns.alpha.kubernetes.io\/target: 10.10.10.3\n    spec:\n      rules:\n      - http:\n          paths:\n            - path: \/apple\n              backend:\n                service:\n                  name: example-service1\n                  port:\n                    number: 5679\n    ```\n- Then, create the deployment and service:\n```console\n$ kubectl create -f apple-banana-echo.yaml\n$ kubectl create -f external-dns.yaml\n```\n- Depending on where you run your service from, it can take a few minutes for your cloud provider to create an external IP for the service.\n- Please verify on the [UltraDNS UI](https:\/\/portal.ultradns.com\/login) that the records have been created under the zone \"example.com\".\n- Finally, 
you will need to clean up the deployment and service. Please verify on the UI afterwards that the records have been deleted from the zone \"example.com\":\n```console\n$ kubectl delete -f apple-banana-echo.yaml\n$ kubectl delete -f external-dns.yaml\n```","site":"external-dns","answers_cleaned":"  UltraDNS  This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using UltraDNS   For this tutorial  please make sure that you are using a version     0 7 2   of ExternalDNS      Managing DNS with UltraDNS  If you would like to read up on the UltraDNS service  you can find additional details here   Introduction to UltraDNS  https   docs ultradns com    Before proceeding  please create a new DNS Zone that you will create your records in for this tutorial process  For the examples in this tutorial  we will be using  example com  as our Zone      Setting Up UltraDNS Credentials  The following environment variables will be needed to run ExternalDNS with UltraDNS    ULTRADNS USERNAME   ULTRADNS PASSWORD     ULTRADNS BASEURL   ULTRADNS ACCOUNTNAME  optional variable       Deploying ExternalDNS  Connect your  kubectl  client to the cluster you want to test ExternalDNS with  Then  apply one of the following manifests file to deploy ExternalDNS     Note  We are assuming the zone is already present within UltraDNS    Note  While creating CNAMES as target endpoints  the    txt prefix  option is mandatory      Manifest  for clusters without RBAC enabled      yaml apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              source service              source ingress   ingress is also possible             domain filter example com    Recommended  We 
recommend to use this filter as it minimize the time to propagate changes  as there are less number of zones to look into               provider ultradns             txt prefix txt          env            name  ULTRADNS USERNAME           value               name  ULTRADNS PASSWORD    The password is required to be BASE64 encrypted            value               name  ULTRADNS BASEURL           value   https   api ultradns com             name  ULTRADNS ACCOUNTNAME           value              Manifest  for clusters with RBAC enabled      yaml apiVersion  v1 kind  ServiceAccount metadata    name  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules    apiGroups         resources    services   endpoints   pods     verbs    get   watch   list     apiGroups    extensions     resources    ingresses     verbs    get   watch   list     apiGroups         resources    nodes     verbs    list   watch       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects    kind  ServiceAccount   name  external dns   namespace  default     apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        serviceAccountName  external dns       containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              source service              source ingress             domain filter example com   Recommended  We recommend to use this filter as it minimize the time to propagate changes  as there are less number of zones to look into               provider ultradns             txt prefix txt          env            name  ULTRADNS USERNAME           value      
         name  ULTRADNS PASSWORD   The password is required to be BASE64 encrypted            value               name  ULTRADNS BASEURL           value   https   api ultradns com             name  ULTRADNS ACCOUNTNAME           value             Deploying an Nginx Service  Create a service file called  nginx yaml  with the following contents      yaml apiVersion  apps v1 kind  Deployment metadata    name  nginx spec    selector      matchLabels        app  nginx   template      metadata        labels          app  nginx     spec        containers          image  nginx         name  nginx         ports            containerPort  80     apiVersion  v1 kind  Service metadata    name  nginx   annotations      external dns alpha kubernetes io hostname  my app example com  spec    selector      app  nginx   type  LoadBalancer   ports        protocol  TCP       port  80       targetPort  80      Please note the annotation on the service  Use the same hostname as the UltraDNS zone created above   ExternalDNS uses this annotation to determine what services should be registered with DNS  Removing the annotation will cause ExternalDNS to remove the corresponding DNS records      Creating the Deployment and Service      console   kubectl create  f nginx yaml   kubectl create  f external dns yaml      Depending on where you run your service from  it can take a few minutes for your cloud provider to create an external IP for the service   Once the service has an external IP assigned  ExternalDNS will notice the new service IP address and will synchronize the UltraDNS records      Verifying UltraDNS Records  Please verify on the  UltraDNS UI  https   portal ultradns com login  that the records are created under the zone  example com    For more information on UltraDNS UI  refer to  https   docs ultradns com Content MSP User Guide Content User 20Guides MSP User Guide Navigation Moving 20Around 20the 20UI htm  Toc2780722    Select the zone that was created above  or select the 
appropriate zone if a different zone was used    The external IP address will be displayed as a CNAME record for your zone      Cleaning Up the Deployment and Service  Now that we have verified that ExternalDNS will automatically manage your UltraDNS records  you can delete example zones that you created in this tutorial         kubectl delete service  f nginx yaml   kubectl delete service  f externaldns yaml        Examples to Manage your Records     Creating Multiple A Records Target   First  you want to create a service file called  apple banana echo yaml      yaml     kind  Pod apiVersion  v1 metadata    name  example app   labels      app  apple spec    containers        name  example app       image  hashicorp http echo       args              text apple      kind  Service apiVersion  v1 metadata    name  example service spec    selector      app  apple   ports        port  5678   Default port for image       Then  create service file called  expose apple banana app yaml  to expose the services  For more information to deploy ingress controller  refer to  https   kubernetes github io ingress nginx deploy      yaml apiVersion  networking k8s io v1 kind  Ingress metadata    name  example ingress   annotations      ingress kubernetes io rewrite target        ingress kubernetes io scheme  internet facing     external dns alpha kubernetes io hostname  apple example com      external dns alpha kubernetes io target  10 10 10 1 10 10 10 23 spec    rules      http        paths            path   apple           pathType  Prefix           backend              service                name  example service               port                  number  5678       Then  create the deployment and service     console   kubectl create  f apple banana echo yaml   kubectl create  f expose apple banana app yaml   kubectl create  f external dns yaml       Depending on where you run your service from  it can take a few minutes for your cloud provider to create an external IP for the 
service    Please verify on the  UltraDNS UI  https   portal ultradns com login  that the records have been created under the zone  example com     Finally  you will need to clean up the deployment and service  Please verify on the UI afterwards that the records have been deleted from the zone  example com      console   kubectl delete  f apple banana echo yaml   kubectl delete  f expose apple banana app yaml   kubectl delete  f external dns yaml         Creating CNAME Record   Please note  that prior to deploying the external dns service  you will need to add the option  txt prefix txt  into external dns yaml  If this not provided  your records will not be created     First  create a service file called  apple banana echo yaml         Config File Example   kubernetes cluster is on premise not on cloud         yaml             kind  Pod     apiVersion  v1     metadata        name  example app       labels          app  apple     spec        containers            name  example app           image  hashicorp http echo           args                  text apple              kind  Service     apiVersion  v1     metadata        name  example service     spec        selector          app  apple       ports            port  5678   Default port for image             apiVersion  networking k8s io v1     kind  Ingress     metadata        name  example ingress       annotations          ingress kubernetes io rewrite target            ingress kubernetes io scheme  internet facing         external dns alpha kubernetes io hostname  apple example com          external dns alpha kubernetes io target  apple cname com      spec        rules          http            paths                path   apple               backend                  service                    name  example service                   port                      number  5678                Config File Example   Kubernetes cluster service from different cloud vendors         yaml             kind  Pod     apiVersion  
v1     metadata        name  example app       labels          app  apple     spec        containers            name  example app           image  hashicorp http echo           args                  text apple              kind  Service     apiVersion  v1     metadata        name  example service       annotations          external dns alpha kubernetes io hostname  my app example com      spec        selector          app  apple       type  LoadBalancer       ports            protocol  TCP           port  5678           targetPort  5678           Then  create the deployment and service     console   kubectl create  f apple banana echo yaml   kubectl create  f external dns yaml       Depending on where you run your service from  it can take a few minutes for your cloud provider to create an external IP for the service    Please verify on the  UltraDNS UI  https   portal ultradns com login   that the records have been created under the zone  example com     Finally  you will need to clean up the deployment and service  Please verify on the UI afterwards that the records have been deleted from the zone  example com      console   kubectl delete  f apple banana echo yaml   kubectl delete  f external dns yaml         Creating Multiple Types Of Records   Please note  that prior to deploying the external dns service  you will need to add the option  txt prefix txt  into external dns yaml  Since you will also be created a CNAME record  If this not provided  your records will not be created     First  create a service file called  apple banana echo yaml         Config File Example   kubernetes cluster is on premise not on cloud         yaml             kind  Pod     apiVersion  v1     metadata        name  example app       labels          app  apple     spec        containers            name  example app           image  hashicorp http echo           args                  text apple              kind  Service     apiVersion  v1     metadata        name  example service     
spec        selector          app  apple       ports            port  5678   Default port for image             kind  Pod     apiVersion  v1     metadata        name  example app1       labels          app  apple1     spec        containers            name  example app1           image  hashicorp http echo           args                  text apple              kind  Service     apiVersion  v1     metadata        name  example service1     spec        selector          app  apple1       ports            port  5679   Default port for image             kind  Pod     apiVersion  v1     metadata        name  example app2       labels          app  apple2     spec        containers            name  example app2           image  hashicorp http echo           args                  text apple              kind  Service     apiVersion  v1     metadata        name  example service2     spec        selector          app  apple2       ports            port  5680   Default port for image             apiVersion  networking k8s io v1     kind  Ingress     metadata        name  example ingress       annotations          ingress kubernetes io rewrite target            ingress kubernetes io scheme  internet facing         external dns alpha kubernetes io hostname  apple example com          external dns alpha kubernetes io target  apple cname com      spec        rules          http            paths                path   apple               backend                  service                    name  example service                   port                      number  5678             apiVersion  networking k8s io v1     kind  Ingress     metadata        name  example ingress1       annotations          ingress kubernetes io rewrite target            ingress kubernetes io scheme  internet facing         external dns alpha kubernetes io hostname  apple banana example com          external dns alpha kubernetes io target  10 10 10 3     spec        rules          http            paths      
          path   apple               backend                  service                    name  example service1                   port                      number  5679             apiVersion  networking k8s io v1     kind  Ingress     metadata        name  example ingress2       annotations          ingress kubernetes io rewrite target            ingress kubernetes io scheme  internet facing         external dns alpha kubernetes io hostname  banana example com          external dns alpha kubernetes io target  10 10 10 3 10 10 10 20     spec        rules          http            paths                path   apple               backend                  service                    name  example service2                   port                      number  5680                Config File Example   Kubernetes cluster service from different cloud vendors         yaml             apiVersion  apps v1     kind  Deployment     metadata        name  nginx     spec        selector          matchLabels            app  nginx       template          metadata            labels              app  nginx         spec            containers              image  nginx             name  nginx             ports                containerPort  80             apiVersion  v1     kind  Service     metadata        name  nginx       annotations          external dns alpha kubernetes io hostname  my app example com      spec        selector          app  nginx       type  LoadBalancer       ports            protocol  TCP           port  80           targetPort  80             kind  Pod     apiVersion  v1     metadata        name  example app       labels          app  apple     spec        containers            name  example app           image  hashicorp http echo           args                  text apple              kind  Service     apiVersion  v1     metadata        name  example service     spec        selector          app  apple       ports            port  5678   Default port for image       
      kind  Pod     apiVersion  v1     metadata        name  example app1       labels          app  apple1     spec        containers            name  example app1           image  hashicorp http echo           args                  text apple              apiVersion  extensions v1beta1     kind  Service     apiVersion  v1     metadata        name  example service1     spec        selector          app  apple1       ports            port  5679   Default port for image             apiVersion  networking k8s io v1     kind  Ingress     metadata        name  example ingress       annotations          ingress kubernetes io rewrite target            ingress kubernetes io scheme  internet facing         external dns alpha kubernetes io hostname  apple example com          external dns alpha kubernetes io target  10 10 10 3 10 10 10 25     spec        rules          http            paths                path   apple               backend                  service                    name  example service                   port                      number  5678             apiVersion  networking k8s io v1     kind  Ingress     metadata        name  example ingress1       annotations          ingress kubernetes io rewrite target            ingress kubernetes io scheme  internet facing         external dns alpha kubernetes io hostname  apple banana example com          external dns alpha kubernetes io target  10 10 10 3     spec        rules          http            paths                path   apple               backend                  service                    name  example service1                   port                      number  5679           Then  create the deployment and service     console   kubectl create  f apple banana echo yaml   kubectl create  f external dns yaml       Depending on where you run your service from  it can take a few minutes for your cloud provider to create an external IP for the service    Please verify on the  UltraDNS UI  https   portal 
ultradns com login   that the records have been created under the zone  example com     Finally  you will need to clean up the deployment and service  Please verify on the UI afterwards that the records have been deleted from the zone  example com      console    kubectl delete  f apple banana echo yaml   kubectl delete  f external dns yaml   "}
{"questions":"external-dns Headless Services We will go through a small example of deploying a simple Kafka with use of a headless service Use cases The main use cases that inspired this feature is the necessity for fixed addressable hostnames with services such as Kafka when trying to access them from outside the cluster In this scenario quite often only the Node IP addresses are actually routable and as in systems like Kafka more direct connections are preferable This tutorial describes how to setup ExternalDNS for usage in conjunction with a Headless service Setup","answers":"# Headless Services\n\nThis tutorial describes how to set up ExternalDNS for usage in conjunction with a Headless service.\n\n## Use cases\nThe main use case that inspired this feature is the necessity for fixed addressable hostnames with services such as Kafka when trying to access them from outside the cluster. In this scenario, quite often, only the Node IP addresses are actually routable, and in systems like Kafka more direct connections are preferable.\n\n## Setup\n\nWe will go through a small example of deploying a simple Kafka with the use of a headless service.\n\n### External DNS\n\nA simple deploy could look like this:\n\n### Manifest (for clusters without RBAC enabled)\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --log-level=debug\n        - --source=service\n        - --source=ingress\n        - --namespace=dev\n        - --domain-filter=example.org. 
\n        - --provider=aws\n        - --registry=txt\n        - --txt-owner-id=dev.example.org\n```\n\n### Manifest (for clusters with RBAC enabled)\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"] \n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --log-level=debug\n        - --source=service\n        - --source=ingress\n        - --namespace=dev\n        - --domain-filter=example.org. 
\n        - --provider=aws\n        - --registry=txt\n        - --txt-owner-id=dev.example.org\n```\n\n\n### Kafka Stateful Set\n\nFirst let's deploy a Kafka StatefulSet, a simplified example (a lot of detail is missing) with a headless service called `ksvc`:\n\n```yaml\napiVersion: apps\/v1\nkind: StatefulSet\nmetadata:\n  name: kafka\nspec:\n  serviceName: ksvc\n  replicas: 3\n  selector:\n    matchLabels:\n      component: kafka\n  template:\n    metadata:\n      labels:\n        component: kafka\n    spec:\n      containers:\n      - name: kafka\n        image: confluent\/kafka\n        ports:\n        - containerPort: 9092\n          hostPort: 9092\n          name: external\n        command:\n        - bash\n        - -c\n        - \" export DOMAIN=$(hostname -d) && \\\n            export KAFKA_BROKER_ID=$(echo $HOSTNAME|rev|cut -d '-' -f 1|rev) && \\\n            export KAFKA_ZOOKEEPER_CONNECT=$ZK_CSVC_SERVICE_HOST:$ZK_CSVC_SERVICE_PORT && \\\n            export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT:\/\/$HOSTNAME.example.org:9092 && \\\n            \/etc\/confluent\/docker\/run\"\n        volumeMounts:\n        - name: datadir\n          mountPath: \/var\/lib\/kafka\n  volumeClaimTemplates:\n  - metadata:\n      name: datadir\n      annotations:\n          volume.beta.kubernetes.io\/storage-class: st1\n    spec:\n      accessModes: [ \"ReadWriteOnce\" ]\n      resources:\n        requests:\n          storage: 500Gi\n```\nIt is very important here to set the `hostPort` (this only works if the PodSecurityPolicy allows it). In case your app requires an actual hostname inside the container (unlike Kafka, which can advertise on another address), you have to set the hostname yourself.\n\n### Headless Service\n\nNow we need to define a headless service to expose the Kafka pods. There are generally two approaches to expose the nodeport of a Headless service:\n\n1. Add `--fqdn-template=.example.org`\n2. 
Use a full annotation\n\nIf you go with #1, you just need to define the headless service; here is an example of case #2:\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: ksvc\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: example.org\nspec:\n  ports:\n  - port: 9092\n    name: external\n  clusterIP: None\n  selector:\n    component: kafka\n```\nThis will create 3 DNS records:\n```\nkafka-0.example.org\nkafka-1.example.org\nkafka-2.example.org\n```\n\nIf you set `--fqdn-template=.example.org` you can omit the annotation.\nGenerally it is a better approach to use `--fqdn-template=.example.org`, because then\nyou would get the service name inside the generated A records:\n\n```\nkafka-0.ksvc.example.org\nkafka-1.ksvc.example.org\nkafka-2.ksvc.example.org\n```\n\n#### Using pods' HostIPs as targets\n\nAdd the following annotation to your `Service`:\n\n```yaml\nexternal-dns.alpha.kubernetes.io\/endpoints-type: HostIP\n```\n\nexternal-dns will now publish the value of the `.status.hostIP` field of the pods backing your `Service`.\n\n#### Using node external IPs as targets\n\nAdd the following annotation to your `Service`:\n\n```yaml\nexternal-dns.alpha.kubernetes.io\/endpoints-type: NodeExternalIP\n```\n\nexternal-dns will now publish the node external IP (`.status.addresses` entries with `type: NodeExternalIP`) of the nodes on which the pods backing your `Service` are running.\n\n#### Using pod annotations to specify target IPs\n\nAdd the following annotation to the **pods** backing your `Service`:\n\n```yaml\nexternal-dns.alpha.kubernetes.io\/target: \"1.2.3.4\"\n```\n\nexternal-dns will publish the IP specified in the annotation of each pod instead of using the podIP advertised by Kubernetes.\n\nThis can be useful e.g. 
if you are NATing public IPs onto your pod IPs and want to publish these in DNS.","site":"external-dns","answers_cleaned":"  Headless Services  This tutorial describes how to setup ExternalDNS for usage in conjunction with a Headless service      Use cases The main use cases that inspired this feature is the necessity for fixed addressable hostnames with services  such as Kafka when trying to access them from outside the cluster  In this scenario  quite often  only the Node IP addresses are actually routable and as in systems like Kafka more direct connections are preferable      Setup  We will go through a small example of deploying a simple Kafka with use of a headless service       External DNS  A simple deploy could look like this      Manifest  for clusters without RBAC enabled     yaml apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              log level debug             source service             source ingress             namespace dev             domain filter example org               provider aws             registry txt             txt owner id dev example org          Manifest  for clusters with RBAC enabled     yaml apiVersion  v1 kind  ServiceAccount metadata    name  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules    apiGroups         resources    services   endpoints   pods     verbs    get   watch   list     apiGroups    extensions   networking k8s io     resources    ingresses      verbs    get   watch   list     apiGroups         resources    nodes     verbs    list       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer 
roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects    kind  ServiceAccount   name  external dns   namespace  default     apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        serviceAccountName  external dns       containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              log level debug             source service             source ingress             namespace dev             domain filter example org               provider aws             registry txt             txt owner id dev example org           Kafka Stateful Set  First lets deploy a Kafka Stateful set  a simple example a lot of stuff is missing  with a headless service called  ksvc      yaml apiVersion  apps v1 kind  StatefulSet metadata    name  kafka spec    serviceName  ksvc   replicas  3   template      metadata        labels          component  kafka     spec        containers          name   kafka                 image  confluent kafka         ports            containerPort  9092           hostPort  9092           name  external         command            bash            c             export DOMAIN   hostname  d                   export KAFKA BROKER ID   echo  HOSTNAME rev cut  d      f 1 rev                   export KAFKA ZOOKEEPER CONNECT  ZK CSVC SERVICE HOST  ZK CSVC SERVICE PORT                  export KAFKA ADVERTISED LISTENERS PLAINTEXT    HOSTNAME example org 9092                   etc confluent docker run          volumeMounts            name  datadir           mountPath   var lib kafka   volumeClaimTemplates      metadata        name  datadir       annotations            volume beta kubernetes io storage class  st1     spec        accessModes     ReadWriteOnce          resources          
requests            storage   500Gi     Very important here  is to set the  hostPort  only works if the PodSecurityPolicy allows it   and in case your app requires an actual hostname inside the container  unlike Kafka  which can advertise on another address  you have to set the hostname yourself       Headless Service  Now we need to define a headless service to use to expose the Kafka pods  There are generally two approaches to use expose the nodeport of a Headless service   1  Add    fqdn template  example org  2  Use a full annotation   If you go with  1  you just need to define the headless service  here is an example of the case  2      yaml apiVersion  v1 kind  Service metadata    name  ksvc   annotations      external dns alpha kubernetes io hostname   example org spec    ports      port  9092     name  external   clusterIP  None   selector      component  kafka     This will create 3 dns records      kafka 0 example org kafka 1 example org kafka 2 example org      If you set    fqdn template  example org  you can omit the annotation  Generally it is a better approach to use     fqdn template  example org   because then you would get the service name inside the generated A records       kafka 0 ksvc example org kafka 1 ksvc example org kafka 2 ksvc example org           Using pods  HostIPs as targets  Add the following annotation to your  Service       yaml external dns alpha kubernetes io endpoints type  HostIP      external dns will now publish the value of the   status hostIP  field of the pods backing your  Service         Using node external IPs as targets  Add the following annotation to your  Service       yaml external dns alpha kubernetes io endpoints type  NodeExternalIP      external dns will now publish the node external IP    status addresses  entries of with  type  NodeExternalIP   of the nodes on which the pods backing your  Service  are running        Using pod annotations to specify target IPs  Add the following annotation to the   pods   
backing your  Service       yaml external dns alpha kubernetes io target   1 2 3 4       external dns will publish the IP specified in the annotation of each pod instead of using the podIP advertised by Kubernetes   This can be useful e g  if you are NATing public IPs onto your pod IPs and want to publish these in DNS "}
{"questions":"external-dns Exoscale provider support was added via thus you need to use external dns v0 5 5 Exoscale and are configured correctly It does not add remove or configure new zones in anyway The Exoscale provider expects that your Exoscale zones you wish to add records to already exists To do this please refer to the Prerequisites","answers":"# Exoscale\n\n## Prerequisites\n\nExoscale provider support was added via [this PR](https:\/\/github.com\/kubernetes-sigs\/external-dns\/pull\/625), thus you need to use external-dns v0.5.5 or newer.\n\nThe Exoscale provider expects that the Exoscale zones you wish to add records to already exist\nand are configured correctly. It does not add, remove, or configure new zones in any way.\n\nTo do this, please refer to the [Exoscale DNS documentation](https:\/\/community.exoscale.com\/documentation\/dns\/).\n\nAdditionally you will have to provide the Exoscale...:\n\n* API Key\n* API Secret\n* Elastic IP address, to access the workers\n\n## Deployment\n\nDeploying external DNS for Exoscale is actually nearly identical to deploying\nit for other providers. 
This is what a sample `deployment.yaml` looks like:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      # Only use if you're also using RBAC\n      # serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=ingress # or service or both\n        - --provider=exoscale\n        - --domain-filter=\n        - --policy=sync # if you want DNS entries to get deleted as well\n        - --txt-owner-id=\n        - --exoscale-apikey=\n        - --exoscale-apisecret=\n        # - --exoscale-apizone=\n        # - --exoscale-apienv=\n```\n\nOptional arguments `--exoscale-apizone` and `--exoscale-apienv` define [Exoscale API Zone](https:\/\/community.exoscale.com\/documentation\/platform\/exoscale-datacenter-zones\/)\n(default `ch-gva-2`) and Exoscale API environment (default `api`, can be used to target non-production API server) respectively.\n\n## RBAC\n\nIf your cluster is RBAC enabled, you also need to setup the following, before you can run external-dns:\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n  namespace: default\n\n---\n\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n\n---\n\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: 
ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n```\n\n## Testing and Verification\n\n**Important!**: Remember to replace `example.com` with your own domain throughout the following text.\n\nSpin up a simple nginx HTTP server with the following spec (`kubectl apply -f`):\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/target: \nspec:\n  ingressClassName: nginx\n  rules:\n  - host: via-ingress.example.com\n    http:\n      paths:\n      - backend:\n          service:\n            name: \"nginx\"\n            port:\n              number: 80\n        path: \/\n        pathType: Prefix\n\n---\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\nspec:\n  ports:\n  - port: 80\n    targetPort: 80\n  selector:\n    app: nginx\n\n---\n\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n```\n\n**Important!**: Don't run dig, nslookup or similar immediately (until you've\nconfirmed the record exists). You'll get hit by [negative DNS caching](https:\/\/tools.ietf.org\/html\/rfc2308), which is hard to flush.\n\nWait about 30s-1m (interval for external-dns to kick in), then check Exoscale's [portal](https:\/\/portal.exoscale.com\/dns\/example.com)... 
via-ingress.example.com should appear as a A and TXT record with your Elastic-IP-address.","site":"external-dns","answers_cleaned":"  Exoscale     Prerequisites  Exoscale provider support was added via  this PR  https   github com kubernetes sigs external dns pull 625   thus you need to use external dns v0 5 5   The Exoscale provider expects that your Exoscale zones  you wish to add records to  already exists and are configured correctly  It does not add  remove or configure new zones in anyway   To do this please refer to the  Exoscale DNS documentation  https   community exoscale com documentation dns     Additionally you will have to provide the Exoscale        API Key   API Secret   Elastic IP address  to access the workers     Deployment  Deploying external DNS for Exoscale is actually nearly identical to deploying it for other providers  This is what a sample  deployment yaml  looks like      yaml apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec          Only use if you re also using RBAC         serviceAccountName  external dns       containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              source ingress   or service or both             provider exoscale             domain filter              policy sync   if you want DNS entries to get deleted as well             txt owner id              exoscale apikey              exoscale apisecret                exoscale apizone                exoscale apienv       Optional arguments    exoscale apizone  and    exoscale apienv  define  Exoscale API Zone  https   community exoscale com documentation platform exoscale datacenter zones    default  ch gva 2   and Exoscale API environment  default  api   can be used to target non production API server  respectively      
RBAC  If your cluster is RBAC enabled  you also need to setup the following  before you can run external dns      yaml apiVersion  v1 kind  ServiceAccount metadata    name  external dns   namespace  default       apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules    apiGroups         resources    services   endpoints   pods     verbs    get   watch   list     apiGroups    extensions   networking k8s io     resources    ingresses     verbs    get   watch   list     apiGroups         resources    nodes     verbs    list         apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects    kind  ServiceAccount   name  external dns   namespace  default         Testing and Verification    Important     Remember to change  example com  with your own domain throughout the following text   Spin up a simple nginx HTTP server with the following spec   kubectl apply  f        yaml apiVersion  networking k8s io v1 kind  Ingress metadata    name  nginx   annotations      external dns alpha kubernetes io target   spec    ingressClassName  nginx   rules      host  via ingress example com     http        paths          backend            service              name   nginx              port                number  80         path            pathType  Prefix       apiVersion  v1 kind  Service metadata    name  nginx spec    ports      port  80     targetPort  80   selector      app  nginx       apiVersion  apps v1 kind  Deployment metadata    name  nginx spec    selector      matchLabels        app  nginx   template      metadata        labels          app  nginx     spec        containers          image  nginx         name  nginx         ports            containerPort  80        Important     Don t run dig  nslookup or similar immediately  until you ve confirmed the record exists   You ll get hit by  
negative DNS caching  https   tools ietf org html rfc2308   which is hard to flush   Wait about 30s 1m  interval for external dns to kick in   then check Exoscales  portal  https   portal exoscale com dns example com     via ingress example com should appear as a A and TXT record with your Elastic IP address "}
{"questions":"external-dns IBM Cloud commands and assumes that the Kubernetes cluster was created via IBM Cloud Kubernetes Service and commands are being run on an orchestration node This tutorial uses for all This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using IBMCloud DNS Creating a IBMCloud DNS zone IBMCloud The IBMCloud provider for ExternalDNS will find suitable zones for domains it manages it will","answers":"# IBMCloud\n\nThis tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster using IBMCloud DNS.\n\nThis tutorial uses the [IBMCloud CLI](https:\/\/cloud.ibm.com\/docs\/cli?topic=cli-getting-started) for all\nIBM Cloud commands and assumes that the Kubernetes cluster was created via IBM Cloud Kubernetes Service and `kubectl` commands\nare being run on an orchestration node.\n\n## Creating an IBMCloud DNS zone\nThe IBMCloud provider for ExternalDNS will find suitable zones for domains it manages; it will\nnot automatically create zones.\nFor public zones, this tutorial assumes that [IBMCloud Internet Services](https:\/\/cloud.ibm.com\/catalog\/services\/internet-services) has been provisioned and the [cis cli plugin](https:\/\/cloud.ibm.com\/docs\/cis?topic=cis-cli-plugin-cis-cli) has been installed with the IBMCloud CLI.\nFor private zones, this tutorial assumes that [IBMCloud DNS Services](https:\/\/cloud.ibm.com\/catalog\/services\/dns-services) has been provisioned and the [dns cli plugin](https:\/\/cloud.ibm.com\/docs\/dns-svcs?topic=dns-svcs-cli-plugin-dns-services-cli-commands) has been installed with the IBMCloud CLI.\n\n### Public Zone\nFor this tutorial, we create a public zone named `example.com` on the IBMCloud Internet Services instance `external-dns-public`:\n```\n$ ibmcloud cis domain-add example.com -i external-dns-public\n```\nFollow [this step](https:\/\/cloud.ibm.com\/docs\/cis?topic=cis-getting-started#configure-your-name-servers-with-the-registrar-or-existing-dns-provider) to activate your zone.\n\n### Private Zone\n
For this tutorial, we create a private zone named `example.com` on the IBMCloud DNS Services instance `external-dns-private`:\n```\n$ ibmcloud dns zone-create example.com -i external-dns-private\n```\n\n## Creating the configuration file\n\nThe preferred way to inject the configuration file is by using a Kubernetes secret. The secret should contain an object named ibmcloud.json with content similar to this:\n\n```\n{\n  \"apiKey\": \"1234567890abcdefghijklmnopqrstuvwxyz\",\n  \"instanceCrn\": \"crn:v1:bluemix:public:internet-svcs:global:a\/bcf1865e99742d38d2d5fc3fb80a5496:b950da8a-5be6-4691-810e-36388c77b0a3::\"\n}\n```\n\nYou can create or find the `apiKey` in your ibmcloud IAM --> [API Keys page](https:\/\/cloud.ibm.com\/iam\/apikeys).\n\nYou can find the `instanceCrn` in your service instance details.\n\nNow you can create a file named 'ibmcloud.json' with the values gathered above and with the structure of the example above. Use this file to create a Kubernetes secret:\n```\n$ kubectl create secret generic ibmcloud-config-file --from-file=\/local\/path\/to\/ibmcloud.json\n```\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\nThen apply one of the following manifest files to deploy ExternalDNS.\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=ibmcloud\n        - --ibmcloud-proxied # (optional) enable the proxy feature of IBMCloud\n   
     volumeMounts:\n        - name: ibmcloud-config-file\n          mountPath: \/etc\/kubernetes\n          readOnly: true\n      volumes:\n      - name: ibmcloud-config-file\n        secret:\n          secretName: ibmcloud-config-file\n          items:\n          - key: externaldns-config.json\n            path: ibmcloud.json\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"] \n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\", \"watch\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=ibmcloud\n        - --ibmcloud-proxied # (optional) enable the proxy feature of IBMCloud public zone\n        volumeMounts:\n        - name: ibmcloud-config-file\n          mountPath: 
\/etc\/kubernetes\n          readOnly: true\n      volumes:\n      - name: ibmcloud-config-file\n        secret:\n          secretName: ibmcloud-config-file\n          items:\n          - key: externaldns-config.json\n            path: ibmcloud.json\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called `nginx.yaml` with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: www.example.com\n    external-dns.alpha.kubernetes.io\/ttl: \"120\" #optional\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nNote the annotation on the service; use a hostname that belongs to the IBMCloud DNS zone created above. The annotation may also be a subdomain\nof the DNS zone (e.g. 'www.example.com').\n\nIf you set the TTL annotation on the service, you have to pass a valid TTL, which must be 120 or above.\nThis annotation is optional; if you don't set it, the TTL will be 1 (automatic), which is 300 seconds.\n\nExternalDNS uses this annotation to determine what services should be registered with DNS.  
Removing the annotation\nwill cause ExternalDNS to remove the corresponding DNS records.\n\nCreate the deployment and service:\n\n```\n$ kubectl create -f nginx.yaml\n```\n\nDepending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.\n\nOnce the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize\nthe IBMCloud DNS records.\n\n## Verifying IBMCloud DNS records\nRun the following commands to view the A records:\n\n### Public Zone\n```\n# Get the domain ID with the command below on IBMCloud Internet Services instance `external-dns-public`\n$ ibmcloud cis domains -i external-dns-public\n# Get the records with the domain ID\n$ ibmcloud cis dns-records DOMAIN_ID  -i external-dns-public\n```\n\n### Private Zone\n```\n# Get the zone ID with the command below on IBMCloud DNS Services instance `external-dns-private`\n$ ibmcloud dns zones -i external-dns-private\n# Get the records with the zone ID\n$ ibmcloud dns resource-records ZONE_ID  -i external-dns-private\n```\nThis should show the external IP address of the service as the A record for your domain.\n\n## Cleanup\n\nNow that we have verified that ExternalDNS will automatically manage IBMCloud DNS records, we can delete the tutorial's example:\n\n```\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```\n\n## Setting proxied records on public zone\n\nUsing the `external-dns.alpha.kubernetes.io\/ibmcloud-proxied: \"true\"` annotation on your ingress or service, you can specify if the proxy feature of IBMCloud public DNS should be enabled for that record. 
This setting will override the global `--ibmcloud-proxied` setting.\n\n## Activating a private zone with a VPC allocated\n\nBy default, IBMCloud DNS Services does not activate a newly added private zone. With ExternalDNS, you can use the `external-dns.alpha.kubernetes.io\/ibmcloud-vpc: \"crn:v1:bluemix:public:is:us-south:a\/bcf1865e99742d38d2d5fc3fb80a5496::vpc:r006-74353823-a60d-42e4-97c5-5e2551278435\"` annotation on your ingress or service to activate your private zone within the specific VPC in which the record is created. This setting has no effect if the private zone is already active.\n\nNote: the annotation value is the VPC CRN; every IBM Cloud service has a valid CRN.","site":"external-dns","answers_cleaned":"  IBMCloud  This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using IBMCloud DNS   This tutorial uses  IBMCloud CLI  https   cloud ibm com docs cli topic cli getting started  for all IBM Cloud commands and assumes that the Kubernetes cluster was created via IBM Cloud Kubernetes Service and  kubectl  commands are being run on an orchestration node      Creating a IBMCloud DNS zone The IBMCloud provider for ExternalDNS will find suitable zones for domains it manages  it will not automatically create zones  For public zone  This tutorial assume that the  IBMCloud Internet Services  https   cloud ibm com catalog services internet services  was provisioned and the  cis cli plugin  https   cloud ibm com docs cis topic cis cli plugin cis cli  was installed with IBMCloud CLI For private zone  This tutorial assume that the  IBMCloud DNS Services  https   cloud ibm com catalog services dns services  was provisioned and the  dns cli plugin  https   cloud ibm com docs dns svcs topic dns svcs cli plugin dns services cli commands  was installed with IBMCloud CLI      Public Zone For this tutorial  we create public zone named  example com  on IBMCloud Internet Services instance  external dns public        ibmcloud cis domain add example
com  i external dns public     Follow  step  https   cloud ibm com docs cis topic cis getting started configure your name servers with the registrar or existing dns provider  to active your zone      Private Zone For this tutorial  we create private zone named  example com  on IBMCloud DNS Services instance  external dns private        ibmcloud dns zone create example com  i external dns private         Creating configuration file  The preferred way to inject the configuration file is by using a Kubernetes secret  The secret should contain an object named azure json with content similar to this            apiKey    1234567890abcdefghijklmnopqrstuvwxyz      instanceCrn    crn v1 bluemix public internet svcs global a bcf1865e99742d38d2d5fc3fb80a5496 b950da8a 5be6 4691 810e 36388c77b0a3           You can create or find the  apiKey  in your ibmcloud IAM      API Keys page  https   cloud ibm com iam apikeys   You can find the  instanceCrn  in your service instance details  Now you can create a file named  ibmcloud json  with values gathered above and with the structure of the example above  Use this file to create a Kubernetes secret        kubectl create secret generic ibmcloud config file   from file  local path to ibmcloud json        Deploy ExternalDNS  Connect your  kubectl  client to the cluster you want to test ExternalDNS with  Then apply one of the following manifests file to deploy ExternalDNS       Manifest  for clusters without RBAC enabled      yaml apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              source service   ingress is also possible             domain filter example com    optional  limit to only example com domains  change to match the 
zone created above              provider ibmcloud             ibmcloud proxied    optional  enable the proxy feature of IBMCloud         volumeMounts            name  ibmcloud config file           mountPath   etc kubernetes           readOnly  true       volumes          name  ibmcloud config file         secret            secretName  ibmcloud config file           items              key  externaldns config json             path  ibmcloud json          Manifest  for clusters with RBAC enabled      yaml apiVersion  v1 kind  ServiceAccount metadata    name  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules    apiGroups         resources    services   endpoints   pods     verbs    get   watch   list     apiGroups    extensions   networking k8s io     resources    ingresses      verbs    get   watch   list     apiGroups         resources    nodes     verbs    list    watch       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects    kind  ServiceAccount   name  external dns   namespace  default     apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        serviceAccountName  external dns       containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              source service   ingress is also possible             domain filter example com    optional  limit to only example com domains  change to match the zone created above              provider ibmcloud             ibmcloud proxied    optional  enable the proxy feature of IBMCloud public zone         volumeMounts            name  ibmcloud config file           mountPath 
  etc kubernetes           readOnly  true       volumes          name  ibmcloud config file         secret            secretName  ibmcloud config file           items              key  externaldns config json             path  ibmcloud json         Deploying an Nginx Service  Create a service file called  nginx yaml  with the following contents      yaml apiVersion  apps v1 kind  Deployment metadata    name  nginx spec    selector      matchLabels        app  nginx   template      metadata        labels          app  nginx     spec        containers          image  nginx         name  nginx         ports            containerPort  80     apiVersion  v1 kind  Service metadata    name  nginx   annotations      external dns alpha kubernetes io hostname  www example com     external dns alpha kubernetes io ttl   120   optional spec    selector      app  nginx   type  LoadBalancer   ports        protocol  TCP       port  80       targetPort  80      Note the annotation on the service  use the hostname as the IBMCloud DNS zone created above  The annotation may also be a subdomain of the DNS zone  e g   www example com     By setting the TTL annotation on the service  you have to pass a valid TTL  which must be 120 or above  This annotation is optional  if you won t set it  it will be 1  automatic  which is 300   ExternalDNS uses this annotation to determine what services should be registered with DNS   Removing the annotation will cause ExternalDNS to remove the corresponding DNS records   Create the deployment and service         kubectl create  f nginx yaml      Depending where you run your service it can take a little while for your cloud provider to create an external IP for the service   Once the service has an external IP assigned  ExternalDNS will notice the new service IP address and synchronize the IBMCloud DNS records      Verifying IBMCloud DNS records Run the following command to view the A records       Public Zone       Get the domain ID with below command 
on IBMCloud Internet Services instance  external dns public    ibmcloud cis domains  i external dns public   Get the records with domain ID   ibmcloud cis dns records DOMAIN ID   i external dns public          Private Zone       Get the domain ID with below command on IBMCloud DNS Services instance  external dns private    ibmcloud dns zones  i external dns private   Get the records with domain ID   ibmcloud dns resource records ZONE ID   i external dns public     This should show the external IP address of the service as the A record for your domain      Cleanup  Now that we have verified that ExternalDNS will automatically manage IBMCloud DNS records  we can delete the tutorial s example         kubectl delete  f nginx yaml   kubectl delete  f externaldns yaml         Setting proxied records on public zone  Using the  external dns alpha kubernetes io ibmcloud proxied   true   annotation on your ingress or service  you can specify if the proxy feature of IBMCloud public DNS should be enabled for that record  This setting will override the global    ibmcloud proxied  setting      Active priviate zone with VPC allocated  By default  IBMCloud DNS Services don t active your private zone with new zone added  with externale DNS  you can use  external dns alpha kubernetes io ibmcloud vpc   crn v1 bluemix public is us south a bcf1865e99742d38d2d5fc3fb80a5496  vpc r006 74353823 a60d 42e4 97c5 5e2551278435   annotation on your ingress or service  it will active your private zone with in specific VPC for that record created in  this setting won t work if the private zone was active already   Note  the annotaion value is the VPC CRN  every IBM Cloud service have a valid CRN "}
{"questions":"external-dns GKE with nginx ingress controller This tutorial describes how to setup ExternalDNS for usage within a GKE cluster that doesn t make use of Google s but rather uses for that task gcloud config set project zalando external dns test Set up your environment console Setup your environment to work with Google Cloud Platform Fill in your values as needed e g target project","answers":"# GKE with nginx-ingress-controller\n\nThis tutorial describes how to setup ExternalDNS for usage within a GKE cluster that doesn't make use of Google's [default ingress controller](https:\/\/github.com\/kubernetes\/ingress-gce) but rather uses [nginx-ingress-controller](https:\/\/github.com\/kubernetes\/ingress-nginx) for that task.\n\n## Set up your environment\n\nSetup your environment to work with Google Cloud Platform. Fill in your values as needed, e.g. target project.\n\n```console\n$ gcloud config set project \"zalando-external-dns-test\"\n$ gcloud config set compute\/region \"europe-west1\"\n$ gcloud config set compute\/zone \"europe-west1-d\"\n```\n\n## GKE Node Scopes\n\nThe following instructions use instance scopes to provide ExternalDNS with the\npermissions it needs to manage DNS records. Note that since these permissions\nare associated with the instance, all pods in the cluster will also have these\npermissions. 
As such, this approach is not suitable for anything but testing\nenvironments.\n\nCreate a GKE cluster without using the default ingress controller.\n\n```console\n$ gcloud container clusters create \"external-dns\" \\\n    --num-nodes 1 \\\n    --scopes \"https:\/\/www.googleapis.com\/auth\/ndev.clouddns.readwrite\"\n```\n\nCreate a DNS zone which will contain the managed DNS records.\n\n```console\n$ gcloud dns managed-zones create \"external-dns-test-gcp-zalan-do\" \\\n    --dns-name \"external-dns-test.gcp.zalan.do.\" \\\n    --description \"Automatically managed zone by ExternalDNS\"\n```\n\nMake a note of the nameservers that were assigned to your new zone.\n\n```console\n$ gcloud dns record-sets list \\\n    --zone \"external-dns-test-gcp-zalan-do\" \\\n    --name \"external-dns-test.gcp.zalan.do.\" \\\n    --type NS\nNAME                             TYPE  TTL    DATA\nexternal-dns-test.gcp.zalan.do.  NS    21600  ns-cloud-e1.googledomains.com.,ns-cloud-e2.googledomains.com.,ns-cloud-e3.googledomains.com.,ns-cloud-e4.googledomains.com.\n```\n\nIn this case it's `ns-cloud-{e1-e4}.googledomains.com.` but yours could differ slightly, e.g. `{a1-a4}`, `{b1-b4}` etc.\n\nTell the parent zone where to find the DNS records for this zone by adding the corresponding NS records there. Assuming the parent zone is \"gcp-zalan-do\" and the domain is \"gcp.zalan.do\" and that it's also hosted at Google we would do the following.\n\n```console\n$ gcloud dns record-sets transaction start --zone \"gcp-zalan-do\"\n$ gcloud dns record-sets transaction add ns-cloud-e{1..4}.googledomains.com. 
\\\n    --name \"external-dns-test.gcp.zalan.do.\" --ttl 300 --type NS --zone \"gcp-zalan-do\"\n$ gcloud dns record-sets transaction execute --zone \"gcp-zalan-do\"\n```\n\nConnect your `kubectl` client to the cluster you just created and bind your GCP\nuser to the cluster admin role in Kubernetes.\n\n```console\n$ gcloud container clusters get-credentials \"external-dns\"\n$ kubectl create clusterrolebinding cluster-admin-me \\\n    --clusterrole=cluster-admin --user=\"$(gcloud config get-value account)\"\n```\n\n### Deploy the nginx ingress controller\n\nFirst, you need to deploy the nginx-based ingress controller. It can be deployed in at least two modes: Leveraging a Layer 4 load balancer in front of the nginx proxies or directly targeting pods with hostPorts on your worker nodes. ExternalDNS doesn't really care and supports both modes.\n\n#### Default Backend\n\nThe nginx controller uses a default backend that it serves when no Ingress rule matches. This is a separate Service that can be picked by you. We'll use the default backend that's used by other ingress controllers for that matter. Apply the following manifests to your cluster to deploy the default backend.\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: default-http-backend\nspec:\n  ports:\n  - port: 80\n    targetPort: 8080\n  selector:\n    app: default-http-backend\n\n---\n\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: default-http-backend\nspec:\n  selector:\n    matchLabels:\n      app: default-http-backend\n  template:\n    metadata:\n      labels:\n        app: default-http-backend\n    spec:\n      containers:\n      - name: default-http-backend\n        image: gcr.io\/google_containers\/defaultbackend:1.3\n```\n\n#### Without a separate TCP load balancer\n\nBy default, the controller will update your Ingress objects with the public IPs of the nodes running your nginx controller instances. You should run multiple instances in case of pod or node failure. 
The controller will do leader election and will put multiple IPs as targets in your Ingress objects in that case. It could also make sense to run it as a DaemonSet. However, we'll just run a single replica. You have to open the respective ports on all of your worker nodes to allow nginx to receive traffic.\n\n```console\n$ gcloud compute firewall-rules create \"allow-http\" --allow tcp:80 --source-ranges \"0.0.0.0\/0\" --target-tags \"gke-external-dns-9488ba14-node\"\n$ gcloud compute firewall-rules create \"allow-https\" --allow tcp:443 --source-ranges \"0.0.0.0\/0\" --target-tags \"gke-external-dns-9488ba14-node\"\n```\n\nChange `--target-tags` to the corresponding tags of your nodes. You can find them by describing your instances or by looking at the default firewall rules created by GKE for your cluster.\n\nApply the following manifests to your cluster to deploy the nginx-based ingress controller. Note, how it receives a reference to the default backend's Service and that it listens on hostPorts. 
(You may have to use `hostNetwork: true` as well.)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx-ingress-controller\nspec:\n  selector:\n    matchLabels:\n      app: nginx-ingress-controller\n  template:\n    metadata:\n      labels:\n        app: nginx-ingress-controller\n    spec:\n      containers:\n      - name: nginx-ingress-controller\n        image: gcr.io\/google_containers\/nginx-ingress-controller:0.9.0-beta.3\n        args:\n        - \/nginx-ingress-controller\n        - --default-backend-service=default\/default-http-backend\n        env:\n          - name: POD_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.name\n          - name: POD_NAMESPACE\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.namespace\n        ports:\n        - containerPort: 80\n          hostPort: 80\n        - containerPort: 443\n          hostPort: 443\n```\n\n#### With a separate TCP load balancer\n\nHowever, you can also have the ingress controller proxied by a Kubernetes Service. This will instruct the controller to populate this Service's external IP as the external IP of the Ingress. This exposes the nginx proxies via a Layer 4 load balancer (`type=LoadBalancer`) which is more reliable than the other method. With that approach, you can run as many nginx proxy instances on your cluster as you like or have them autoscaled. This is the preferred way of running the nginx controller.\n\nApply the following manifests to your cluster. 
Note, how the controller is receiving an additional flag telling it which Service it should treat as its public endpoint and how it doesn't need hostPorts anymore.\n\nApply the following manifests to run the controller in this mode.\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-ingress-controller\nspec:\n  type: LoadBalancer\n  ports:\n  - name: http\n    port: 80\n    targetPort: 80\n  - name: https\n    port: 443\n    targetPort: 443\n  selector:\n    app: nginx-ingress-controller\n\n---\n\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx-ingress-controller\nspec:\n  selector:\n    matchLabels:\n      app: nginx-ingress-controller\n  template:\n    metadata:\n      labels:\n        app: nginx-ingress-controller\n    spec:\n      containers:\n      - name: nginx-ingress-controller\n        image: gcr.io\/google_containers\/nginx-ingress-controller:0.9.0-beta.3\n        args:\n        - \/nginx-ingress-controller\n        - --default-backend-service=default\/default-http-backend\n        - --publish-service=default\/nginx-ingress-controller\n        env:\n          - name: POD_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.name\n          - name: POD_NAMESPACE\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.namespace\n        ports:\n        - containerPort: 80\n        - containerPort: 443\n```\n\n### Deploy ExternalDNS\n\nApply the following manifest file to deploy ExternalDNS.\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"] \n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  
verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=ingress\n        - --domain-filter=external-dns-test.gcp.zalan.do\n        - --provider=google\n        - --google-project=zalando-external-dns-test\n        - --registry=txt\n        - --txt-owner-id=my-identifier\n```\n\nUse `--dry-run` if you want to be extra careful on the first run. Note, that you will not see any records created when you are running in dry-run mode. 
You can, however, inspect the logs and watch what would have been done.\n\n### Deploy a sample application\n\nCreate the following sample application to test that ExternalDNS works.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: nginx\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: via-ingress.external-dns-test.gcp.zalan.do\n    http:\n      paths:\n      - path: \/\n        backend:\n          service:\n            name: nginx\n            port:\n              number: 80\n        pathType: Prefix\n\n---\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\nspec:\n  ports:\n  - port: 80\n    targetPort: 80\n  selector:\n    app: nginx\n\n---\n\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n```\n\nAfter roughly two minutes check that a corresponding DNS record for your Ingress was created.\n\n```console\n$ gcloud dns record-sets list \\\n    --zone \"external-dns-test-gcp-zalan-do\" \\\n    --name \"via-ingress.external-dns-test.gcp.zalan.do.\" \\\n    --type A\nNAME                                         TYPE  TTL  DATA\nvia-ingress.external-dns-test.gcp.zalan.do.  A     300  35.187.1.246\n```\n\nLet's check that we can resolve this DNS name as well.\n\n```console\ndig +short @ns-cloud-e1.googledomains.com. 
via-ingress.external-dns-test.gcp.zalan.do.\n35.187.1.246\n```\n\nTry with `curl` as well.\n\n```console\n$ curl via-ingress.external-dns-test.gcp.zalan.do\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!<\/title>\n...\n<\/head>\n<body>\n...\n<\/body>\n<\/html>\n```\n\n### Clean up\n\nMake sure to delete all Service and Ingress objects before terminating the cluster so all load balancers and DNS entries get cleaned up correctly.\n\n```console\n$ kubectl delete service nginx-ingress-controller\n$ kubectl delete ingress nginx\n```\n\nGive ExternalDNS some time to clean up the DNS records for you. Then delete the managed zone and cluster.\n\n```console\n$ gcloud dns managed-zones delete \"external-dns-test-gcp-zalan-do\"\n$ gcloud container clusters delete \"external-dns\"\n```\n\nAlso delete the NS records for your removed zone from the parent zone.\n\n```console\n$ gcloud dns record-sets transaction start --zone \"gcp-zalan-do\"\n$ gcloud dns record-sets transaction remove ns-cloud-e{1..4}.googledomains.com. 
\\\n    --name \"external-dns-test.gcp.zalan.do.\" --ttl 300 --type NS --zone \"gcp-zalan-do\"\n$ gcloud dns record-sets transaction execute --zone \"gcp-zalan-do\"\n```\n\n## GKE with Workload Identity\n\nThe following instructions use [GKE workload\nidentity](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/workload-identity)\nto provide ExternalDNS with the permissions it needs to manage DNS records.\nWorkload identity is the Google-recommended way to provide GKE workloads access\nto GCP APIs.\n\nCreate a GKE cluster with workload identity enabled and without the\nHttpLoadBalancing add-on.\n\n```console\n$ gcloud container clusters create external-dns \\\n    --workload-metadata-from-node=GKE_METADATA_SERVER \\\n    --identity-namespace=zalando-external-dns-test.svc.id.goog \\\n    --addons=HorizontalPodAutoscaling\n```\n\nCreate a GCP service account (GSA) for ExternalDNS and save its email address.\n\n```console\n$ sa_name=\"Kubernetes external-dns\"\n$ gcloud iam service-accounts create sa-edns --display-name=\"$sa_name\"\n$ sa_email=$(gcloud iam service-accounts list --format='value(email)' \\\n    --filter=\"displayName:$sa_name\")\n```\n\nBind the ExternalDNS GSA to the DNS admin role.\n\n```console\n$ gcloud projects add-iam-policy-binding zalando-external-dns-test \\\n    --member=\"serviceAccount:$sa_email\" --role=roles\/dns.admin\n```\n\nLink the ExternalDNS GSA to the Kubernetes service account (KSA) that\nexternal-dns will run under, i.e., the external-dns KSA in the external-dns\nnamespaces.\n\n```console\n$ gcloud iam service-accounts add-iam-policy-binding \"$sa_email\" \\\n    --member=\"serviceAccount:zalando-external-dns-test.svc.id.goog[external-dns\/external-dns]\" \\\n    --role=roles\/iam.workloadIdentityUser\n```\n\nCreate a DNS zone which will contain the managed DNS records.\n\n```console\n$ gcloud dns managed-zones create external-dns-test-gcp-zalan-do \\\n    --dns-name=external-dns-test.gcp.zalan.do. 
\\\n    --description=\"Automatically managed zone by ExternalDNS\"\n```\n\nMake a note of the nameservers that were assigned to your new zone.\n\n```console\n$ gcloud dns record-sets list \\\n    --zone=external-dns-test-gcp-zalan-do \\\n    --name=external-dns-test.gcp.zalan.do. \\\n    --type NS\nNAME                             TYPE  TTL    DATA\nexternal-dns-test.gcp.zalan.do.  NS    21600  ns-cloud-e1.googledomains.com.,ns-cloud-e2.googledomains.com.,ns-cloud-e3.googledomains.com.,ns-cloud-e4.googledomains.com.\n```\n\nIn this case it's `ns-cloud-{e1-e4}.googledomains.com.` but yours could\ndiffer slightly, e.g. `{a1-a4}`, `{b1-b4}` etc.\n\nTell the parent zone where to find the DNS records for this zone by adding the\ncorresponding NS records there. Assuming the parent zone is \"gcp-zalan-do\" and\nthe domain is \"gcp.zalan.do\" and that it's also hosted at Google we would do the\nfollowing.\n\n```console\n$ gcloud dns record-sets transaction start --zone=gcp-zalan-do\n$ gcloud dns record-sets transaction add ns-cloud-e{1..4}.googledomains.com. \\\n    --name=external-dns-test.gcp.zalan.do. 
--ttl 300 --type NS --zone=gcp-zalan-do\n$ gcloud dns record-sets transaction execute --zone=gcp-zalan-do\n```\n\nConnect your `kubectl` client to the cluster you just created and bind your GCP\nuser to the cluster admin role in Kubernetes.\n\n```console\n$ gcloud container clusters get-credentials external-dns\n$ kubectl create clusterrolebinding cluster-admin-me \\\n    --clusterrole=cluster-admin --user=\"$(gcloud config get-value account)\"\n```\n\n### Deploy ingress-nginx\n\nFollow the [ingress-nginx GKE installation\ninstructions](https:\/\/kubernetes.github.io\/ingress-nginx\/deploy\/#gce-gke) to\ndeploy it to the cluster.\n\n```console\n$ kubectl apply -f \\\n    https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/controller-v0.35.0\/deploy\/static\/provider\/cloud\/deploy.yaml\n```\n\n### Deploy ExternalDNS\n\nApply the following manifest file to deploy external-dns.\n\n```yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: external-dns\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n  namespace: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n  - apiGroups: [\"\"]\n    resources: [\"services\", \"endpoints\", \"pods\"]\n    verbs: [\"get\", \"watch\", \"list\"]\n  - apiGroups: [\"extensions\", \"networking.k8s.io\"]\n    resources: [\"ingresses\"]\n    verbs: [\"get\", \"watch\", \"list\"]\n  - apiGroups: [\"\"]\n    resources: [\"nodes\"]\n    verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n  - kind: ServiceAccount\n    name: external-dns\n    namespace: external-dns\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\n  namespace: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: 
external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n        - args:\n            - --source=ingress\n            - --domain-filter=external-dns-test.gcp.zalan.do\n            - --provider=google\n            - --google-project=zalando-external-dns-test\n            - --registry=txt\n            - --txt-owner-id=my-identifier\n          image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n          name: external-dns\n      securityContext:\n        fsGroup: 65534\n        runAsUser: 65534\n      serviceAccountName: external-dns\n```\n\nThen add the proper workload identity annotation to the external-dns service\naccount.\n\n```bash\n$ kubectl annotate serviceaccount --namespace=external-dns external-dns \\\n    \"iam.gke.io\/gcp-service-account=$sa_email\"\n```\n\n### Deploy a sample application\n\nCreate the following sample application to test that ExternalDNS works.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: nginx\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: via-ingress.external-dns-test.gcp.zalan.do\n    http:\n      paths:\n      - path: \/\n        backend:\n          service:\n            name: nginx\n            port:\n              number: 80\n        pathType: Prefix\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\nspec:\n  ports:\n  - port: 80\n    targetPort: 80\n  selector:\n    app: nginx\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n```\n\nAfter roughly two minutes check that a corresponding DNS record for your ingress\nwas created.\n\n```console\n$ gcloud dns record-sets list \\\n    --zone \"external-dns-test-gcp-zalan-do\" \\\n    --name 
\"via-ingress.external-dns-test.gcp.zalan.do.\" \\\n    --type A\nNAME                                         TYPE  TTL  DATA\nvia-ingress.external-dns-test.gcp.zalan.do.  A     300  35.187.1.246\n```\n\nLet's check that we can resolve this DNS name as well.\n\n```console\n$ dig +short @ns-cloud-e1.googledomains.com. via-ingress.external-dns-test.gcp.zalan.do.\n35.187.1.246\n```\n\nTry with `curl` as well.\n\n```console\n$ curl via-ingress.external-dns-test.gcp.zalan.do\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!<\/title>\n...\n<\/head>\n<body>\n...\n<\/body>\n<\/html>\n```\n\n### Clean up\n\nMake sure to delete all service and ingress objects before terminating the\ncluster so all load balancers and DNS entries get cleaned up correctly.\n\n```console\n$ kubectl delete service --namespace=ingress-nginx ingress-nginx-controller\n$ kubectl delete ingress nginx\n```\n\nGive ExternalDNS some time to clean up the DNS records for you. Then delete the\nmanaged zone and cluster.\n\n```console\n$ gcloud dns managed-zones delete external-dns-test-gcp-zalan-do\n$ gcloud container clusters delete external-dns\n```\n\nAlso delete the NS records for your removed zone from the parent zone.\n\n```console\n$ gcloud dns record-sets transaction start --zone gcp-zalan-do\n$ gcloud dns record-sets transaction remove ns-cloud-e{1..4}.googledomains.com. \\\n    --name=external-dns-test.gcp.zalan.do. --ttl 300 --type NS --zone=gcp-zalan-do\n$ gcloud dns record-sets transaction execute --zone=gcp-zalan-do\n```\n\n## User Demo How-To Blogs and Examples\n\n* Run external-dns on GKE with workload identity. 
See [Kubernetes, ingress-nginx, cert-manager & external-dns](https:\/\/blog.atomist.com\/kubernetes-ingress-nginx-cert-manager-external-dns\/)","site":"external-dns"}
deploy  gce gke  to deploy it to the cluster      console   kubectl apply  f       https   raw githubusercontent com kubernetes ingress nginx controller v0 35 0 deploy static provider cloud deploy yaml          Deploy ExternalDNS  Apply the following manifest file to deploy external dns      yaml apiVersion  v1 kind  Namespace metadata    name  external dns     apiVersion  v1 kind  ServiceAccount metadata    name  external dns   namespace  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules      apiGroups           resources    services    endpoints    pods       verbs    get    watch    list       apiGroups    extensions    networking k8s io       resources    ingresses       verbs    get    watch    list       apiGroups           resources    nodes       verbs    list       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects      kind  ServiceAccount     name  external dns     namespace  external dns     apiVersion  apps v1 kind  Deployment metadata    name  external dns   namespace  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        containers            args                  source ingress                 domain filter external dns test gcp zalan do                 provider google                 google project zalando external dns test                 registry txt                 txt owner id my identifier           image  registry k8s io external dns external dns v0 15 0           name  external dns       securityContext          fsGroup  65534         runAsUser  65534       serviceAccountName  external dns      Then add the proper workload identity annotation to the cert manager service account      bash   kubectl 
annotate serviceaccount   namespace external dns external dns        iam gke io gcp service account  sa email           Deploy a sample application  Create the following sample application to test that ExternalDNS works      yaml apiVersion  networking k8s io v1 kind  Ingress metadata    name  nginx spec    ingressClassName  nginx   rules      host  via ingress external dns test gcp zalan do     http        paths          path            backend            service              name  nginx             port                number  80         pathType  Prefix     apiVersion  v1 kind  Service metadata    name  nginx spec    ports      port  80     targetPort  80   selector      app  nginx     apiVersion  apps v1 kind  Deployment metadata    name  nginx spec    selector      matchLabels        app  nginx   template      metadata        labels          app  nginx     spec        containers          image  nginx         name  nginx         ports            containerPort  80      After roughly two minutes check that a corresponding DNS record for your ingress was created      console   gcloud dns record sets list         zone  external dns test gcp zalan do          name  via ingress external dns test gcp zalan do           type A NAME                                         TYPE  TTL  DATA via ingress external dns test gcp zalan do   A     300  35 187 1 246      Let s check that we can resolve this DNS name as well      console   dig  short  ns cloud e1 googledomains com  via ingress external dns test gcp zalan do  35 187 1 246      Try with  curl  as well      console   curl via ingress external dns test gcp zalan do   DOCTYPE html   html   head   title Welcome to nginx   title        head   body        body    html           Clean up  Make sure to delete all service and ingress objects before terminating the cluster so all load balancers and DNS entries get cleaned up correctly      console   kubectl delete service   namespace ingress nginx ingress nginx controller   
kubectl delete ingress nginx      Give ExternalDNS some time to clean up the DNS records for you  Then delete the managed zone and cluster      console   gcloud dns managed zones delete external dns test gcp zalan do   gcloud container clusters delete external dns      Also delete the NS records for your removed zone from the parent zone      console   gcloud dns record sets transaction start   zone gcp zalan do   gcloud dns record sets transaction remove ns cloud e 1  4  googledomains com          name external dns test gcp zalan do    ttl 300   type NS   zone gcp zalan do   gcloud dns record sets transaction execute   zone gcp zalan do         User Demo How To Blogs and Examples    Run external dns on GKE with workload identity  See  Kubernetes  ingress nginx  cert manager   external dns  https   blog atomist com kubernetes ingress nginx cert manager external dns  "}
{"questions":"external-dns You may be interested with for installation guidelines AWS Route53 with same domain for public and private zones Specify in nginx ingress controller container args Deploy public nginx ingress controller This tutorial describes how to setup ExternalDNS using the same domain for public and private Route53 zones and It also outlines how to use to automatically issue SSL certificates from for both public and private records","answers":"# AWS Route53 with same domain for public and private zones\n\nThis tutorial describes how to setup ExternalDNS using the same domain for public and private Route53 zones and [nginx-ingress-controller](https:\/\/github.com\/kubernetes\/ingress-nginx). It also outlines how to use [cert-manager](https:\/\/github.com\/jetstack\/cert-manager) to automatically issue SSL certificates from [Let's Encrypt](https:\/\/letsencrypt.org\/) for both public and private records.\n\n## Deploy public nginx-ingress-controller\n\nYou may be interested with [GKE with nginx ingress](gke-nginx.md) for installation guidelines.\n\nSpecify `ingress-class` in nginx-ingress-controller container args:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  labels:\n    app: external-ingress\n  name: external-ingress-controller\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: external-ingress\n  template:\n    metadata:\n      labels:\n        app: external-ingress\n    spec:\n      containers:\n      - args:\n        - \/nginx-ingress-controller\n        - --default-backend-service=$(POD_NAMESPACE)\/default-http-backend\n        - --configmap=$(POD_NAMESPACE)\/external-ingress-configuration\n        - --tcp-services-configmap=$(POD_NAMESPACE)\/external-tcp-services\n        - --udp-services-configmap=$(POD_NAMESPACE)\/external-udp-services\n        - --annotations-prefix=nginx.ingress.kubernetes.io\n        - --ingress-class=external-ingress\n        - --publish-service=$(POD_NAMESPACE)\/external-ingress\n        
env:\n        - name: POD_NAME\n          valueFrom:\n            fieldRef:\n              apiVersion: v1\n              fieldPath: metadata.name\n        - name: POD_NAMESPACE\n          valueFrom:\n            fieldRef:\n              apiVersion: v1\n              fieldPath: metadata.namespace\n        image: quay.io\/kubernetes-ingress-controller\/nginx-ingress-controller:0.11.0\n        livenessProbe:\n          failureThreshold: 3\n          httpGet:\n            path: \/healthz\n            port: 10254\n            scheme: HTTP\n          initialDelaySeconds: 10\n          periodSeconds: 10\n          successThreshold: 1\n          timeoutSeconds: 1\n        name: external-ingress-controller\n        ports:\n        - containerPort: 80\n          name: http\n          protocol: TCP\n        - containerPort: 443\n          name: https\n          protocol: TCP\n        readinessProbe:\n          failureThreshold: 3\n          httpGet:\n            path: \/healthz\n            port: 10254\n            scheme: HTTP\n          periodSeconds: 10\n          successThreshold: 1\n          timeoutSeconds: 1\n```\n\nSet `type: LoadBalancer` in your public nginx-ingress-controller Service definition.\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  annotations:\n    service.beta.kubernetes.io\/aws-load-balancer-connection-idle-timeout: \"3600\"\n    service.beta.kubernetes.io\/aws-load-balancer-proxy-protocol: '*'\n  labels:\n    app: external-ingress\n  name: external-ingress\nspec:\n  externalTrafficPolicy: Cluster\n  ports:\n  - name: http\n    port: 80\n    protocol: TCP\n    targetPort: http\n  - name: https\n    port: 443\n    protocol: TCP\n    targetPort: https\n  selector:\n    app: external-ingress\n  sessionAffinity: None\n  type: LoadBalancer\n```\n\n## Deploy private nginx-ingress-controller\n\nMake sure to specify `ingress-class` in nginx-ingress-controller container args:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  labels:\n    
app: internal-ingress\n  name: internal-ingress-controller\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: internal-ingress\n  template:\n    metadata:\n      labels:\n        app: internal-ingress\n    spec:\n      containers:\n      - args:\n        - \/nginx-ingress-controller\n        - --default-backend-service=$(POD_NAMESPACE)\/default-http-backend\n        - --configmap=$(POD_NAMESPACE)\/internal-ingress-configuration\n        - --tcp-services-configmap=$(POD_NAMESPACE)\/internal-tcp-services\n        - --udp-services-configmap=$(POD_NAMESPACE)\/internal-udp-services\n        - --annotations-prefix=nginx.ingress.kubernetes.io\n        - --ingress-class=internal-ingress\n        - --publish-service=$(POD_NAMESPACE)\/internal-ingress\n        env:\n        - name: POD_NAME\n          valueFrom:\n            fieldRef:\n              apiVersion: v1\n              fieldPath: metadata.name\n        - name: POD_NAMESPACE\n          valueFrom:\n            fieldRef:\n              apiVersion: v1\n              fieldPath: metadata.namespace\n        image: quay.io\/kubernetes-ingress-controller\/nginx-ingress-controller:0.11.0\n        livenessProbe:\n          failureThreshold: 3\n          httpGet:\n            path: \/healthz\n            port: 10254\n            scheme: HTTP\n          initialDelaySeconds: 10\n          periodSeconds: 10\n          successThreshold: 1\n          timeoutSeconds: 1\n        name: internal-ingress-controller\n        ports:\n        - containerPort: 80\n          name: http\n          protocol: TCP\n        - containerPort: 443\n          name: https\n          protocol: TCP\n        readinessProbe:\n          failureThreshold: 3\n          httpGet:\n            path: \/healthz\n            port: 10254\n            scheme: HTTP\n          periodSeconds: 10\n          successThreshold: 1\n          timeoutSeconds: 1\n```\n\nSet additional annotations in your private nginx-ingress-controller Service definition to 
create an internal load balancer.\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  annotations:\n    service.beta.kubernetes.io\/aws-load-balancer-connection-idle-timeout: \"3600\"\n    service.beta.kubernetes.io\/aws-load-balancer-internal: 0.0.0.0\/0\n    service.beta.kubernetes.io\/aws-load-balancer-proxy-protocol: '*'\n  labels:\n    app: internal-ingress\n  name: internal-ingress\nspec:\n  externalTrafficPolicy: Cluster\n  ports:\n  - name: http\n    port: 80\n    protocol: TCP\n    targetPort: http\n  - name: https\n    port: 443\n    protocol: TCP\n    targetPort: https\n  selector:\n    app: internal-ingress\n  sessionAffinity: None\n  type: LoadBalancer\n```\n\n## Deploy the public zone ExternalDNS\n\nConsult [AWS ExternalDNS setup docs](aws.md) for installation guidelines.\n\nIn ExternalDNS containers args, make sure to specify `aws-zone-type` and `ingress-class`:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  labels:\n    app: external-dns-public\n  name: external-dns-public\n  namespace: kube-system\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: external-dns-public\n  strategy:\n    type: Recreate\n  template:\n    metadata:\n      labels:\n        app: external-dns-public\n    spec:\n      containers:\n      - args:\n        - --source=ingress\n        - --provider=aws\n        - --registry=txt\n        - --txt-owner-id=external-dns\n        - --ingress-class=external-ingress\n        - --aws-zone-type=public\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        name: external-dns-public\n```\n\n## Deploy the private zone ExternalDNS\n\nConsult [AWS ExternalDNS setup docs](aws.md) for installation guidelines.\n\nIn ExternalDNS containers args, make sure to specify `aws-zone-type` and `ingress-class`:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  labels:\n    app: external-dns-private\n  name: external-dns-private\n  namespace: kube-system\nspec:\n  replicas: 1\n  
selector:\n    matchLabels:\n      app: external-dns-private\n  strategy:\n    type: Recreate\n  template:\n    metadata:\n      labels:\n        app: external-dns-private\n    spec:\n      containers:\n      - args:\n        - --source=ingress\n        - --provider=aws\n        - --registry=txt\n        - --txt-owner-id=dev.k8s.nexus\n        - --ingress-class=internal-ingress\n        - --aws-zone-type=private\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        name: external-dns-private\n```\n\n## Create application Service definitions\n\nFor this setup to work, you need to create two Ingress definitions for your application.\n\nAt first, create a public Ingress definition:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  labels:\n    app: app\n  name: app-public\nspec:\n  ingressClassName: external-ingress\n  rules:\n  - host: app.domain.com\n    http:\n      paths:\n      - backend:\n          service:\n            name: app\n            port:\n              number: 80\n        pathType: Prefix\n```\n\nThen create a private Ingress definition:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  labels:\n    app: app\n  name: app-private\nspec:\n  ingressClassName: internal-ingress\n  rules:\n  - host: app.domain.com\n    http:\n      paths:\n      - backend:\n          service:\n            name: app\n            port:\n              number: 80\n        pathType: Prefix\n```\n\nAdditionally, you may leverage [cert-manager](https:\/\/github.com\/jetstack\/cert-manager) to automatically issue SSL certificates from [Let's Encrypt](https:\/\/letsencrypt.org\/). 
To do that, request a certificate in public service definition:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    certmanager.k8s.io\/acme-challenge-type: \"dns01\"\n    certmanager.k8s.io\/acme-dns01-provider: \"route53\"\n    certmanager.k8s.io\/cluster-issuer: \"letsencrypt-production\"\n    kubernetes.io\/tls-acme: \"true\"\n  labels:\n    app: app\n  name: app-public\nspec:\n  ingressClassName: \"external-ingress\"\n  rules:\n  - host: app.domain.com\n    http:\n      paths:\n      - backend:\n          service:\n            name: app\n            port:\n              number: 80\n        pathType: Prefix\n  tls:\n  - hosts:\n    - app.domain.com\n    secretName: app-tls\n```\n\nAnd reuse the requested certificate in private Service definition:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  labels:\n    app: app\n  name: app-private\nspec:\n  ingressClassName: \"internal-ingress\"\n  rules:\n  - host: app.domain.com\n    http:\n      paths:\n      - backend:\n          service:\n            name: app\n            port:\n              number: 80\n        pathType: Prefix\n  tls:\n  - hosts:\n    - app.domain.com\n    secretName: app-tls\n```","site":"external-dns","answers_cleaned":"  AWS Route53 with same domain for public and private zones  This tutorial describes how to setup ExternalDNS using the same domain for public and private Route53 zones and  nginx ingress controller  https   github com kubernetes ingress nginx   It also outlines how to use  cert manager  https   github com jetstack cert manager  to automatically issue SSL certificates from  Let s Encrypt  https   letsencrypt org   for both public and private records      Deploy public nginx ingress controller  You may be interested with  GKE with nginx ingress  gke nginx md  for installation guidelines   Specify  ingress class  in nginx ingress controller container args      yaml apiVersion  apps v1 kind  Deployment metadata    
labels      app  external ingress   name  external ingress controller spec    replicas  1   selector      matchLabels        app  external ingress   template      metadata        labels          app  external ingress     spec        containers          args             nginx ingress controller             default backend service   POD NAMESPACE  default http backend             configmap   POD NAMESPACE  external ingress configuration             tcp services configmap   POD NAMESPACE  external tcp services             udp services configmap   POD NAMESPACE  external udp services             annotations prefix nginx ingress kubernetes io             ingress class external ingress             publish service   POD NAMESPACE  external ingress         env            name  POD NAME           valueFrom              fieldRef                apiVersion  v1               fieldPath  metadata name           name  POD NAMESPACE           valueFrom              fieldRef                apiVersion  v1               fieldPath  metadata namespace         image  quay io kubernetes ingress controller nginx ingress controller 0 11 0         livenessProbe            failureThreshold  3           httpGet              path   healthz             port  10254             scheme  HTTP           initialDelaySeconds  10           periodSeconds  10           successThreshold  1           timeoutSeconds  1         name  external ingress controller         ports            containerPort  80           name  http           protocol  TCP           containerPort  443           name  https           protocol  TCP         readinessProbe            failureThreshold  3           httpGet              path   healthz             port  10254             scheme  HTTP           periodSeconds  10           successThreshold  1           timeoutSeconds  1      Set  type  LoadBalancer  in your public nginx ingress controller Service definition      yaml apiVersion  v1 kind  Service metadata    annotations      
service beta kubernetes io aws load balancer connection idle timeout   3600      service beta kubernetes io aws load balancer proxy protocol        labels      app  external ingress   name  external ingress spec    externalTrafficPolicy  Cluster   ports      name  http     port  80     protocol  TCP     targetPort  http     name  https     port  443     protocol  TCP     targetPort  https   selector      app  external ingress   sessionAffinity  None   type  LoadBalancer         Deploy private nginx ingress controller  Make sure to specify  ingress class  in nginx ingress controller container args      yaml apiVersion  apps v1 kind  Deployment metadata    labels      app  internal ingress   name  internal ingress controller spec    replicas  1   selector      matchLabels        app  internal ingress   template      metadata        labels          app  internal ingress     spec        containers          args             nginx ingress controller             default backend service   POD NAMESPACE  default http backend             configmap   POD NAMESPACE  internal ingress configuration             tcp services configmap   POD NAMESPACE  internal tcp services             udp services configmap   POD NAMESPACE  internal udp services             annotations prefix nginx ingress kubernetes io             ingress class internal ingress             publish service   POD NAMESPACE  internal ingress         env            name  POD NAME           valueFrom              fieldRef                apiVersion  v1               fieldPath  metadata name           name  POD NAMESPACE           valueFrom              fieldRef                apiVersion  v1               fieldPath  metadata namespace         image  quay io kubernetes ingress controller nginx ingress controller 0 11 0         livenessProbe            failureThreshold  3           httpGet              path   healthz             port  10254             scheme  HTTP           initialDelaySeconds  10           periodSeconds 
 10           successThreshold  1           timeoutSeconds  1         name  internal ingress controller         ports            containerPort  80           name  http           protocol  TCP           containerPort  443           name  https           protocol  TCP         readinessProbe            failureThreshold  3           httpGet              path   healthz             port  10254             scheme  HTTP           periodSeconds  10           successThreshold  1           timeoutSeconds  1      Set additional annotations in your private nginx ingress controller Service definition to create an internal load balancer      yaml apiVersion  v1 kind  Service metadata    annotations      service beta kubernetes io aws load balancer connection idle timeout   3600      service beta kubernetes io aws load balancer internal  0 0 0 0 0     service beta kubernetes io aws load balancer proxy protocol        labels      app  internal ingress   name  internal ingress spec    externalTrafficPolicy  Cluster   ports      name  http     port  80     protocol  TCP     targetPort  http     name  https     port  443     protocol  TCP     targetPort  https   selector      app  internal ingress   sessionAffinity  None   type  LoadBalancer         Deploy the public zone ExternalDNS  Consult  AWS ExternalDNS setup docs  aws md  for installation guidelines   In ExternalDNS containers args  make sure to specify  aws zone type  and  ingress class       yaml apiVersion  apps v1 kind  Deployment metadata    labels      app  external dns public   name  external dns public   namespace  kube system spec    replicas  1   selector      matchLabels        app  external dns public   strategy      type  Recreate   template      metadata        labels          app  external dns public     spec        containers          args              source ingress             provider aws             registry txt             txt owner id external dns             ingress class external ingress             aws 
zone type public         image  registry k8s io external dns external dns v0 15 0         name  external dns public         Deploy the private zone ExternalDNS  Consult  AWS ExternalDNS setup docs  aws md  for installation guidelines   In ExternalDNS containers args  make sure to specify  aws zone type  and  ingress class       yaml apiVersion  apps v1 kind  Deployment metadata    labels      app  external dns private   name  external dns private   namespace  kube system spec    replicas  1   selector      matchLabels        app  external dns private   strategy      type  Recreate   template      metadata        labels          app  external dns private     spec        containers          args              source ingress             provider aws             registry txt             txt owner id dev k8s nexus             ingress class internal ingress             aws zone type private         image  registry k8s io external dns external dns v0 15 0         name  external dns private         Create application Service definitions  For this setup to work  you need to create two Ingress definitions for your application   At first  create a public Ingress definition      yaml apiVersion  networking k8s io v1 kind  Ingress metadata    labels      app  app   name  app public spec    ingressClassName  external ingress   rules      host  app domain com     http        paths          backend            service              name  app             port                number  80         pathType  Prefix      Then create a private Ingress definition      yaml apiVersion  networking k8s io v1 kind  Ingress metadata    labels      app  app   name  app private spec    ingressClassName  internal ingress   rules      host  app domain com     http        paths          backend            service              name  app             port                number  80         pathType  Prefix      Additionally  you may leverage  cert manager  https   github com jetstack cert manager  to 
automatically issue SSL certificates from  Let s Encrypt  https   letsencrypt org    To do that  request a certificate in public service definition      yaml apiVersion  networking k8s io v1 kind  Ingress metadata    annotations      certmanager k8s io acme challenge type   dns01      certmanager k8s io acme dns01 provider   route53      certmanager k8s io cluster issuer   letsencrypt production      kubernetes io tls acme   true    labels      app  app   name  app public spec    ingressClassName   external ingress    rules      host  app domain com     http        paths          backend            service              name  app             port                number  80         pathType  Prefix   tls      hosts        app domain com     secretName  app tls      And reuse the requested certificate in private Service definition      yaml apiVersion  networking k8s io v1 kind  Ingress metadata    labels      app  app   name  app private spec    ingressClassName   internal ingress    rules      host  app domain com     http        paths          backend            service              name  app             port                number  80         pathType  Prefix   tls      hosts        app domain com     secretName  app tls    "}
{"questions":"external-dns This tutorial describes how to setup ExternalDNS for use within a NS1 Make sure to use 0 5 version of ExternalDNS for this tutorial If you are new to NS1 we recommend you first read the following Kubernetes cluster using NS1 DNS Creating a zone with NS1 DNS","answers":"# NS1\n\nThis tutorial describes how to setup ExternalDNS for use within a\nKubernetes cluster using NS1 DNS.\n\nMake sure to use **>=0.5** version of ExternalDNS for this tutorial.\n\n## Creating a zone with NS1 DNS\n\nIf you are new to NS1, we recommend you first read the following\ninstructions for creating a zone.\n\n[Creating a zone using the NS1\nportal](https:\/\/ns1.com\/knowledgebase\/creating-a-zone)\n\n[Creating a zone using the NS1\nAPI](https:\/\/ns1.com\/api#put-create-a-new-dns-zone)\n\n## Creating NS1 Credentials\n\nAll NS1 products are API-first, meaning everything that can be done on\nthe portal---including managing zones and records, data sources and\nfeeds, and account settings and users---can be done via API.\n\nThe NS1 API is a standard REST API with JSON responses. The environment\nvar `NS1_APIKEY` will be needed to run ExternalDNS with NS1.\n\n### To add or delete an API key\n\n1.  Log into the NS1 portal at [my.nsone.net](http:\/\/my.nsone.net).\n\n2.  Click your username in the upper-right corner, and navigate to **Account Settings** \\> **Users & Teams**.\n\n3.  Navigate to the _API Keys_ tab, and click **Add Key**.\n\n4.  Enter the name of the application and modify permissions and settings as desired. Once complete, click **Create Key**. The new API key appears in the list.\n\n    Note: Set the permissions for your API keys just as you would for a user or team associated with your organization's NS1 account. 
For more information, refer to the article [Creating and Managing API Keys](https:\/\/help.ns1.com\/hc\/en-us\/articles\/360026140094-Creating-managing-users) in the NS1 Knowledge Base.\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster with which you want to test ExternalDNS.\n\nBegin by creating a Kubernetes secret to securely store your NS1 API key. This key will enable ExternalDNS to authenticate with NS1:\n\n```shell\nkubectl create secret generic ns1-api-key --from-literal=NS1_API_KEY=YOUR_NS1_API_KEY\n```\n\nBe sure to replace YOUR_NS1_API_KEY with your actual NS1 API key. (Kubernetes secret names must be lowercase DNS-1123 subdomains, so the secret is named `ns1-api-key` while the key inside it stays `NS1_API_KEY`.)\n\nThen deploy ExternalDNS, either with Helm or by applying one of the following manifest files.\n\n## Using Helm\n\nCreate a values.yaml file to configure ExternalDNS to use NS1 as the DNS provider. This file should include the necessary environment variables:\n\n```yaml\nprovider:\n  name: ns1\nenv:\n  - name: NS1_APIKEY\n    valueFrom:\n      secretKeyRef:\n        name: ns1-api-key\n        key: NS1_API_KEY\n```\n\nFinally, install the ExternalDNS chart with Helm using the configuration specified in your values.yaml file:\n\n```shell\nhelm upgrade --install external-dns external-dns\/external-dns --values values.yaml\n```\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=ns1\n        env:\n        - name: NS1_APIKEY\n          valueFrom:\n            secretKeyRef:\n              name: ns1-api-key\n              key: NS1_API_KEY\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=ns1\n        env:\n        - name: NS1_APIKEY\n          valueFrom:\n            secretKeyRef:\n              name: ns1-api-key\n              key: NS1_API_KEY\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n     
 labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: example.com\n    external-dns.alpha.kubernetes.io\/ttl: \"120\" #optional\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\n**A note about annotations**\n\nVerify that the annotation on the service uses the same hostname as the NS1 DNS zone created above. The annotation may also be a subdomain of the DNS zone (e.g. 'www.example.com').\n\nThe TTL annotation can be used to configure the TTL on DNS records managed by ExternalDNS and is optional. If this annotation is not set, the TTL on records managed by ExternalDNS will default to 10.\n\nExternalDNS uses the hostname annotation to determine which services should be registered with DNS. Removing the hostname annotation will cause ExternalDNS to remove the corresponding DNS records.\n\n### Create the deployment and service\n\n```\n$ kubectl create -f nginx.yaml\n```\n\nDepending on where you run your service, it may take some time for your cloud provider to create an external IP for the service. 
Once an external IP is assigned, ExternalDNS detects the new service IP address and synchronizes the NS1 DNS records.\n\n## Verifying NS1 DNS records\n\nUse the NS1 portal or API to verify that the A record for your domain shows the external IP address of the service.\n\n## Cleanup\n\nOnce you successfully configure and verify record management via ExternalDNS, you can delete the tutorial's example:\n\n```\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```","site":"external-dns"}
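One easy pitfall in the NS1 setup above: `NS1_APIKEY` is fine as an environment-variable name, but a Kubernetes Secret object's *name* must be a lowercase DNS-1123 subdomain, so an env-var style name is rejected by the API server. A small shell sketch of the rule (the regex is a close approximation of the apiserver's validation, not the exact implementation):

```shell
# Kubernetes object names must be DNS-1123 subdomains: lowercase alphanumerics,
# '-' and '.', starting and ending with an alphanumeric character.
# 'ns1-api-key' passes; an env-var style name like NS1_APIKEY does not.
for name in ns1-api-key NS1_APIKEY; do
  if echo "$name" | grep -Eq '^[a-z0-9]([-a-z0-9.]*[a-z0-9])?$'; then
    echo "$name: valid secret name"
  else
    echo "$name: invalid secret name"
  fi
done
```

This is why it is safest to give the secret a lowercase, hyphenated name and keep the uppercase name only for the key and the container's environment variable.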
{"questions":"external-dns Make sure to use 0 5 14 version of ExternalDNS for this tutorial have at least 1 domain registered at TransIP and enabled the API To use the TransIP API you need an account at TransIP and enable API usage as described in the With the private key generated by the API we create a kubernetes secret This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using TransIP Enable TransIP API and prepare your API key TransIP","answers":"# TransIP\n\nThis tutorial describes how to set up ExternalDNS for use within a Kubernetes cluster using TransIP.\n\nMake sure to use version **>=0.5.14** of ExternalDNS for this tutorial, have at least one domain registered at TransIP, and enable the API.\n\n## Enable TransIP API and prepare your API key\n\nTo use the TransIP API you need an account at TransIP and enable API usage as described in the [knowledge base](https:\/\/www.transip.eu\/knowledgebase\/entry\/77-want-use-the-transip-api\/). With the private key generated by the API, we create a Kubernetes secret:\n\n```console\n$ kubectl create secret generic transip-api-key --from-file=transip-api-key=\/path\/to\/private.key\n```\n\n## Deploy ExternalDNS\n\nBelow are example manifests for clusters with and without RBAC enabled. Don't forget to replace `YOUR_TRANSIP_ACCOUNT_NAME` with your TransIP account name. In these examples, an example domain-filter is defined. Such a filter can be used to prevent ExternalDNS from touching any domain not listed in the filter. 
Refer to the docs for any other command-line parameters you might want to use.\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains\n        - --provider=transip\n        - --transip-account=YOUR_TRANSIP_ACCOUNT_NAME\n        - --transip-keyfile=\/transip\/transip-api-key\n        volumeMounts:\n        - mountPath: \/transip\n          name: transip-api-key\n          readOnly: true\n      volumes:\n      - name: transip-api-key\n        secret:\n          secretName: transip-api-key\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"watch\", \"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    
matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains\n        - --provider=transip\n        - --transip-account=YOUR_TRANSIP_ACCOUNT_NAME\n        - --transip-keyfile=\/transip\/transip-api-key\n        volumeMounts:\n        - mountPath: \/transip\n          name: transip-api-key\n          readOnly: true\n      volumes:\n      - name: transip-api-key\n        secret:\n          secretName: transip-api-key\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: my-app.example.com\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nNote the annotation on the service; this is the name ExternalDNS will create and manage DNS records for.\n\nExternalDNS uses this annotation to determine what services should be registered with DNS. 
Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.\n\nCreate the deployment and service:\n\n```console\n$ kubectl create -f nginx.yaml\n```\n\nDepending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.\n\nOnce the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the TransIP DNS records.\n\n## Verifying TransIP DNS records\n\nCheck your [TransIP Control Panel](https:\/\/transip.eu\/cp) to view the records for your TransIP DNS zone.\n\nClick on the zone created above, or on the zone for the domain you used if it was different.\n\nThis should show the external IP address of the service as the A record for your domain.","site":"external-dns"}
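The TransIP secret above wraps the raw private key file. Before creating it, a quick sanity check that the file is actually a PEM-encoded private key can save a debugging round-trip; this is a sketch where the sample file content is a stand-in for the real key downloaded from the TransIP control panel:

```shell
# Write a stand-in PEM file (the real one comes from the TransIP API setup),
# then confirm the first line carries the PEM private-key header.
printf -- '-----BEGIN PRIVATE KEY-----\nMIIB...\n-----END PRIVATE KEY-----\n' > /tmp/private.key
head -n 1 /tmp/private.key | grep -c 'BEGIN PRIVATE KEY'
```

If the count is 0, the file is probably not the key itself (for example an archive, or a copy-paste that picked up extra characters), and ExternalDNS will likely fail to authenticate against the TransIP API.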
{"questions":"external-dns Follow the to setup ExternalDNS for use in Kubernetes clusters This tutorial describes how to use ExternalDNS with the kube ingress aws controller 1 1 https github com zalando incubator kube ingress aws controller kube ingress aws controller Setting up ExternalDNS and kube ingress aws controller running in AWS Specify the argument so that ExternalDNS will look","answers":"# kube-ingress-aws-controller\n\nThis tutorial describes how to use ExternalDNS with the [kube-ingress-aws-controller][1].\n\n[1]: https:\/\/github.com\/zalando-incubator\/kube-ingress-aws-controller\n\n## Setting up ExternalDNS and kube-ingress-aws-controller\n\nFollow the [AWS tutorial](aws.md) to set up ExternalDNS for use in Kubernetes clusters\nrunning in AWS. Specify the `source=ingress` argument so that ExternalDNS will look\nfor hostnames in Ingress objects. In addition, you may wish to limit which Ingress\nobjects are used as an ExternalDNS source via the `ingress-class` argument, but\nthis is not required.\n\nFor help setting up the Kubernetes Ingress AWS Controller, which can\ncreate ALBs and NLBs, follow the [Setup Guide][2].\n\n[2]: https:\/\/github.com\/zalando-incubator\/kube-ingress-aws-controller\/tree\/HEAD\/deploy\n\n\n### Optional RouteGroup\n\n[RouteGroup][3] is a CRD that enables you to do complex routing with\n[Skipper][4].\n\nFirst, you have to apply the RouteGroup CRD to your cluster (note the raw URL; `kubectl apply` needs the raw YAML, not the GitHub HTML page):\n\n```\nkubectl apply -f https:\/\/raw.githubusercontent.com\/zalando\/skipper\/HEAD\/dataclients\/kubernetes\/deploy\/apply\/routegroups_crd.yaml\n```\n\nYou have to grant all three controllers ([Skipper][4],\n[kube-ingress-aws-controller][1] and external-dns) permission to read the routegroup resource, and\nkube-ingress-aws-controller permission to update the status field of a routegroup.\nHow depends on your RBAC policies; in case you use RBAC, you can use\nthis for all 3 controllers:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: 
kube-ingress-aws-controller\nrules:\n- apiGroups:\n  - extensions\n  - networking.k8s.io\n  resources:\n  - ingresses\n  verbs:\n  - get\n  - list\n  - watch\n- apiGroups:\n  - extensions\n  - networking.k8s.io\n  resources:\n  - ingresses\/status\n  verbs:\n  - patch\n  - update\n- apiGroups:\n  - zalando.org\n  resources:\n  - routegroups\n  verbs:\n  - get\n  - list\n  - watch\n- apiGroups:\n  - zalando.org\n  resources:\n  - routegroups\/status\n  verbs:\n  - patch\n  - update\n```\n\nSee also current RBAC yaml files:\n- [kube-ingress-aws-controller](https:\/\/github.com\/zalando-incubator\/kubernetes-on-aws\/blob\/dev\/cluster\/manifests\/ingress-controller\/01-rbac.yaml)\n- [skipper](https:\/\/github.com\/zalando-incubator\/kubernetes-on-aws\/blob\/dev\/cluster\/manifests\/skipper\/rbac.yaml)\n- [external-dns](https:\/\/github.com\/zalando-incubator\/kubernetes-on-aws\/blob\/dev\/cluster\/manifests\/external-dns\/01-rbac.yaml)\n\n[3]: https:\/\/opensource.zalando.com\/skipper\/kubernetes\/routegroups\/#routegroups\n[4]: https:\/\/opensource.zalando.com\/skipper\n\n\n## Deploy an example application\n\nCreate the following sample \"echoserver\" application to demonstrate how\nExternalDNS works with ingress objects, that were created by [kube-ingress-aws-controller][1].\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: echoserver\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: echoserver\n  template:\n    metadata:\n      labels:\n        app: echoserver\n    spec:\n      containers:\n      - image: gcr.io\/google_containers\/echoserver:1.4\n        imagePullPolicy: Always\n        name: echoserver\n        ports:\n        - containerPort: 8080\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: echoserver\nspec:\n  ports:\n    - port: 80\n      targetPort: 8080\n      protocol: TCP\n  type: ClusterIP\n  selector:\n    app: echoserver\n```\n\nNote that the Service object is of type `ClusterIP`, because we will\ntarget 
[Skipper][4] and do the HTTP routing in Skipper. We don't need\na Service of type `LoadBalancer` here, since we will be using a shared\nskipper-ingress for all Ingresses. Skipper uses `hostNetwork` to be able\nto receive traffic from the AWS load balancers on the EC2 network. ALBs or NLBs will\nbe created as needed and will be shared across all Ingresses by\ndefault.\n\n## Ingress examples\n\nCreate the following Ingress to expose the echoserver application to the Internet.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: echoserver\nspec:\n  ingressClassName: skipper\n  rules:\n  - host: echoserver.mycluster.example.org\n    http: &echoserver_root\n      paths:\n      - path: \/\n        backend:\n          service:\n            name: echoserver\n            port:\n              number: 80\n        pathType: Prefix\n  - host: echoserver.example.org\n    http: *echoserver_root\n```\n\nThe above should result in the creation of an (ipv4) ALB in AWS which will forward\ntraffic to skipper which will forward to the echoserver application.\n\nIf the `--source=ingress` argument is specified, then ExternalDNS will create DNS\nrecords based on the hosts specified in ingress objects. The above example would\nresult in two alias records being created, `echoserver.mycluster.example.org` and\n`echoserver.example.org`, which both alias the ALB that is associated with the\nIngress object.\n\nNote that the above example makes use of the YAML anchor feature to avoid having\nto repeat the http section for multiple hosts that use the exact same paths. 
If\nthis Ingress object will only be fronting one backend Service, we might instead\ncreate the following:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: echoserver.mycluster.example.org, echoserver.example.org\n  name: echoserver\nspec:\n  ingressClassName: skipper\n  rules:\n  - http:\n      paths:\n      - path: \/\n        backend:\n          service:\n            name: echoserver\n            port:\n              number: 80\n        pathType: Prefix\n```\n\nIn the above example we create a default path that works for any hostname, and\nmake use of the `external-dns.alpha.kubernetes.io\/hostname` annotation to create\nmultiple aliases for the resulting ALB.\n\n## Dualstack ALBs\n\nAWS [supports](https:\/\/docs.aws.amazon.com\/elasticloadbalancing\/latest\/application\/application-load-balancers.html#ip-address-type) both IPv4 and \"dualstack\" (both IPv4 and IPv6) interfaces for ALBs.\nThe Kubernetes Ingress AWS controller supports the `alb.ingress.kubernetes.io\/ip-address-type`\nannotation (which defaults to `ipv4`) to determine this. If this annotation is\nset to `dualstack` then ExternalDNS will create two alias records (one A record\nand one AAAA record) for each hostname associated with the Ingress object.\n\n\nExample:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    alb.ingress.kubernetes.io\/ip-address-type: dualstack\n  name: echoserver\nspec:\n  ingressClassName: skipper\n  rules:\n  - host: echoserver.example.org\n    http:\n      paths:\n      - path: \/\n        backend:\n          service:\n            name: echoserver\n            port:\n              number: 80\n        pathType: Prefix\n```\n\nThe above Ingress object will result in the creation of an ALB with a dualstack\ninterface. 
ExternalDNS will create both an A `echoserver.example.org` record and\nan AAAA record of the same name, each of which is an alias for the same ALB.\n\n## NLBs\n\nAWS has\n[NLBs](https:\/\/docs.aws.amazon.com\/elasticloadbalancing\/latest\/network\/introduction.html)\nand [kube-ingress-aws-controller][1] is able to create NLBs instead of ALBs.\nThe Kubernetes Ingress AWS controller supports the `zalando.org\/aws-load-balancer-type`\nannotation (which defaults to `alb`) to determine this. If this annotation is\nset to `nlb` then kube-ingress-aws-controller will create an NLB instead of an ALB.\n\nExample:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    zalando.org\/aws-load-balancer-type: nlb\n  name: echoserver\nspec:\n  ingressClassName: skipper\n  rules:\n  - host: echoserver.example.org\n    http:\n      paths:\n      - path: \/\n        backend:\n          service:\n            name: echoserver\n            port:\n              number: 80\n        pathType: Prefix\n```\n\nThe above Ingress object will result in the creation of an NLB. You can\nobserve a successful create in the Ingress `status` field, which is\nwritten by [kube-ingress-aws-controller][1]:\n\n```yaml\nstatus:\n  loadBalancer:\n    ingress:\n    - hostname: kube-ing-lb-atedkrlml7iu-1681027139.$region.elb.amazonaws.com\n```\n\nExternalDNS will create an A record `echoserver.example.org` that\nuses an AWS ALIAS record to automatically track the IP addresses of the NLB.\n\n## RouteGroup (optional)\n\n[Kube-ingress-aws-controller][1], [Skipper][4] and external-dns\nsupport [RouteGroups][3]. 
External-dns needs to be started with the\n`--source=skipper-routegroup` parameter in order to work on RouteGroup objects.\n\nHere we cannot show [all RouteGroup\ncapabilities](https:\/\/opensource.zalando.com\/skipper\/kubernetes\/routegroups\/),\nbut we show one simple example with an application and a custom https\nredirect. Note that `defaultBackends` and the per-route `backends` refer to\nthe backend *names* declared under `spec.backends`:\n\n```yaml\napiVersion: zalando.org\/v1\nkind: RouteGroup\nmetadata:\n  name: my-route-group\nspec:\n  backends:\n  - name: my-backend\n    type: service\n    serviceName: my-service\n    servicePort: 80\n  - name: redirectShunt\n    type: shunt\n  defaultBackends:\n  - backendName: my-backend\n  routes:\n  - pathSubtree: \/\n  - pathSubtree: \/\n    predicates:\n    - Header(\"X-Forwarded-Proto\", \"http\")\n    filters:\n    - redirectTo(302, \"https:\")\n    backends:\n    - backendName: redirectShunt\n```","site":"external-dns"}
io hostname  echoserver mycluster example org  echoserver example org   name  echoserver spec    ingressClassName  skipper   rules      http        paths          path            backend            service              name  echoserver             port                number  80         pathType  Prefix      In the above example we create a default path that works for any hostname  and make use of the  external dns alpha kubernetes io hostname  annotation to create multiple aliases for the resulting ALB      Dualstack ALBs  AWS  supports  https   docs aws amazon com elasticloadbalancing latest application application load balancers html ip address type  both IPv4 and  dualstack   both IPv4 and IPv6  interfaces for ALBs  The Kubernetes Ingress AWS controller supports the  alb ingress kubernetes io ip address type  annotation  which defaults to  ipv4   to determine this  If this annotation is set to  dualstack  then ExternalDNS will create two alias records  one A record and one AAAA record  for each hostname associated with the Ingress object    Example      yaml apiVersion  networking k8s io v1 kind  Ingress metadata    annotations      alb ingress kubernetes io ip address type  dualstack   name  echoserver spec    ingressClassName  skipper   rules      host  echoserver example org     http        paths          path            backend            service              name  echoserver             port                number  80         pathType  Prefix      The above Ingress object will result in the creation of an ALB with a dualstack interface  ExternalDNS will create both an A  echoserver example org  record and an AAAA record of the same name  that each are aliases for the same ALB      NLBs  AWS has  NLBs  https   docs aws amazon com elasticloadbalancing latest network introduction html  and  kube ingress aws controller  1  is able to create NLBs instead of ALBs  The Kubernetes Ingress AWS controller supports the  zalando org aws load balancer type  annotation  
which defaults to  alb   to determine this  If this annotation is set to  nlb  then ExternalDNS will create an NLB instead of an ALB   Example      yaml apiVersion  networking k8s io v1 kind  Ingress metadata    annotations      zalando org aws load balancer type  nlb   name  echoserver spec    ingressClassName  skipper   rules      host  echoserver example org     http        paths          path            backend            service              name  echoserver             port                number  80         pathType  Prefix      The above Ingress object will result in the creation of an NLB  A successful create  you can observe in the ingress  status  field  that is written by  kube ingress aws controller  1       yaml status    loadBalancer      ingress        hostname  kube ing lb atedkrlml7iu 1681027139  region elb amazonaws com      ExternalDNS will create a A records  echoserver example org   that use AWS ALIAS record to automatically maintain IP addresses of the NLB      RouteGroup  optional    Kube ingress aws controller  1    Skipper  4  and external dns support  RouteGroups  3   External dns needs to be started with    source skipper routegroup  parameter in order to work on RouteGroup objects   Here we can not show  all RouteGroup capabilities  https   opensource zalando com skipper kubernetes routegroups    but we show one simple example with an application and a custom https redirect      yaml apiVersion  zalando org v1 kind  RouteGroup metadata    name  my route group spec    backends      name  my backend     type  service     serviceName  my service     servicePort  80     name  redirectShunt     type  shunt   defaultBackends      backendName  my service   routes      pathSubtree        pathSubtree        predicates        Header  X Forwarded Proto    http       filters        redirectTo 302   https        backends        redirectShunt    "}
{"questions":"external-dns Creating a Cloudflare DNS zone Make sure to use 0 4 2 version of ExternalDNS for this tutorial We highly recommend to read this tutorial if you haven t used Cloudflare before Cloudflare DNS This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using Cloudflare DNS","answers":"# Cloudflare DNS\n\nThis tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using Cloudflare DNS.\n\nMake sure to use **>=0.4.2** version of ExternalDNS for this tutorial.\n\n## Creating a Cloudflare DNS zone\n\nWe highly recommend to read this tutorial if you haven't used Cloudflare before:\n\n[Create a Cloudflare account and add a website](https:\/\/support.cloudflare.com\/hc\/en-us\/articles\/201720164-Step-2-Create-a-Cloudflare-account-and-add-a-website)\n\n## Creating Cloudflare Credentials\n\nSnippet from [Cloudflare - Getting Started](https:\/\/api.cloudflare.com\/#getting-started-endpoints):\n\n>Cloudflare's API exposes the entire Cloudflare infrastructure via a standardized programmatic interface. Using Cloudflare's API, you can do just about anything you can do on cloudflare.com via the customer dashboard.\n\n>The Cloudflare API is a RESTful API based on HTTPS requests and JSON responses. If you are registered with Cloudflare, you can obtain your API key from the bottom of the \"My Account\" page, found here: [Go to My account](https:\/\/dash.cloudflare.com\/profile).\n\nAPI Token will be preferred for authentication if `CF_API_TOKEN` environment variable is set.\nOtherwise `CF_API_KEY` and `CF_API_EMAIL` should be set to run ExternalDNS with Cloudflare.\nYou may provide the Cloudflare API token through a file by setting the\n`CF_API_TOKEN=\"file:\/path\/to\/token\"`.\n\nNote. 
The `CF_API_KEY` and `CF_API_EMAIL` variables should not be set if you are using a `CF_API_TOKEN`.\n\nWhen using API Token authentication, the token should be granted Zone `Read` and DNS `Edit` privileges, and access to `All zones`.\n\nIf you would like to further restrict the API permissions to a specific zone (or zones), you also need to use the `--zone-id-filter` flag so that the underlying API requests only access the zones that you explicitly specify, as opposed to accessing all zones.\n\n## Throttling\n\nThe Cloudflare API has a [global rate limit of 1,200 requests per five minutes](https:\/\/developers.cloudflare.com\/fundamentals\/api\/reference\/limits\/). Running several fast-polling ExternalDNS instances in a given account can easily hit that limit. The AWS provider [docs](.\/aws.md#throttling) have some recommendations that can be followed here too, but in particular, consider passing `--cloudflare-dns-records-per-page` with a high value (the maximum is 5,000).\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\n\nBegin by creating a Kubernetes secret to securely store your Cloudflare API key. This key will enable ExternalDNS to authenticate with Cloudflare:\n\n```shell\nkubectl create secret generic cloudflare-api-key --from-literal=apiKey=YOUR_API_KEY --from-literal=email=YOUR_CLOUDFLARE_EMAIL\n```\n\nFor an API Token, it should look like:\n\n```shell\nkubectl create secret generic cloudflare-api-key --from-literal=apiKey=YOUR_API_TOKEN\n```\n\nBe sure to replace YOUR_API_KEY with your actual Cloudflare API key and YOUR_CLOUDFLARE_EMAIL with the email associated with your Cloudflare account.\n\nThen apply one of the following manifest files to deploy ExternalDNS.\n\n### Using Helm\n\nCreate a values.yaml file to configure ExternalDNS to use Cloudflare as the DNS provider. 
This file should include the necessary environment variables:\n\n```yaml\nprovider:\n  name: cloudflare\nenv:\n  - name: CF_API_KEY\n    valueFrom:\n      secretKeyRef:\n        name: cloudflare-api-key\n        key: apiKey\n  - name: CF_API_EMAIL\n    valueFrom:\n      secretKeyRef:\n        name: cloudflare-api-key\n        key: email\n```\n\nUse this in your values.yaml if you are using an API Token:\n\n```yaml\nprovider:\n  name: cloudflare\nenv:\n  - name: CF_API_TOKEN\n    valueFrom:\n      secretKeyRef:\n        name: cloudflare-api-key\n        key: apiKey\n```\n\nFinally, install the ExternalDNS chart with Helm using the configuration specified in your values.yaml file:\n\n```shell\nhelm repo add external-dns https:\/\/kubernetes-sigs.github.io\/external-dns\/\n```\n\n```shell\nhelm repo update\n```\n\n```shell\nhelm upgrade --install external-dns external-dns\/external-dns --values values.yaml\n```\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n        - name: external-dns\n          image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n          args:\n            - --source=service # ingress is also possible\n            - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n            - --zone-id-filter=023e105f4ecef8ad9ca31a8372d0c353 # (optional) limit to a specific zone.\n            - --provider=cloudflare\n            - --cloudflare-proxied # (optional) enable the proxy feature of Cloudflare (DDoS protection, CDN...)\n            - --cloudflare-dns-records-per-page=5000 # (optional) configure how many DNS records to fetch per request\n            - --cloudflare-region-key=\"eu\" # (optional) configure which region can decrypt HTTPS requests\n          env:\n            - name: CF_API_KEY\n              valueFrom:\n                secretKeyRef:\n                  name: cloudflare-api-key\n                  key: apiKey\n            - name: CF_API_EMAIL\n              valueFrom:\n                secretKeyRef:\n                  name: cloudflare-api-key\n                  key: email\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n  - apiGroups: [\"\"]\n    resources: [\"services\",\"endpoints\",\"pods\"]\n    verbs: [\"get\",\"watch\",\"list\"]\n  - apiGroups: [\"extensions\",\"networking.k8s.io\"]\n    resources: [\"ingresses\"]\n    verbs: [\"get\",\"watch\",\"list\"]\n  - apiGroups: [\"\"]\n    resources: [\"nodes\"]\n    verbs: [\"list\", \"watch\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n        - name: external-dns\n          image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n          args:\n            - --source=service # ingress is also possible\n            - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n            - --zone-id-filter=023e105f4ecef8ad9ca31a8372d0c353 # (optional) limit to a specific zone.\n            - --provider=cloudflare\n            - --cloudflare-proxied # (optional) enable the proxy feature of Cloudflare (DDoS protection, CDN...)\n            - --cloudflare-dns-records-per-page=5000 # (optional) configure how many DNS records to fetch per request\n            - --cloudflare-region-key=\"eu\" # (optional) configure which region can decrypt HTTPS requests\n          env:\n            - name: CF_API_KEY\n              valueFrom:\n                secretKeyRef:\n                  name: cloudflare-api-key\n                  key: apiKey\n            - name: CF_API_EMAIL\n              valueFrom:\n                secretKeyRef:\n                  name: cloudflare-api-key\n                  key: email\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: example.com\n    external-dns.alpha.kubernetes.io\/ttl: \"120\" #optional\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nNote the annotation on the service; use the same hostname as the Cloudflare DNS zone created above. The annotation may also be a subdomain\nof the DNS zone (e.g. 'www.example.com').\n\nIf you set the TTL annotation on the service, you have to pass a valid TTL, which must be 120 or above.\nThis annotation is optional; if you do not set it, it will be 1 (automatic), which corresponds to 300 seconds.\nFor Cloudflare proxied entries, set the TTL annotation to 1 (automatic), or do not set it.\n\nExternalDNS uses this annotation to determine what services should be registered with DNS. Removing the annotation\nwill cause ExternalDNS to remove the corresponding DNS records.\n\nCreate the deployment and service:\n\n```shell\n$ kubectl create -f nginx.yaml\n```\n\nDepending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.\n\nOnce the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize\nthe Cloudflare DNS records.\n\n## Verifying Cloudflare DNS records\n\nCheck your [Cloudflare dashboard](https:\/\/www.cloudflare.com\/a\/dns\/example.com) to view the records for your Cloudflare DNS zone.\n\nSubstitute the zone for the one created above if a different domain was used.\n\nThis should show the external IP address of the service as the A record for your domain.\n\n## Cleanup\n\nNow that we have verified that ExternalDNS will automatically manage Cloudflare DNS records, we can delete the tutorial's example:\n\n```shell\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```\n\n## Setting cloudflare-proxied on a per-ingress basis\n\nUsing the `external-dns.alpha.kubernetes.io\/cloudflare-proxied: \"true\"` annotation on your ingress, you can specify whether the proxy feature of Cloudflare should be enabled for that record. This setting will override the global `--cloudflare-proxied` setting.\n\n## Setting cloudflare-region-key to configure regional services\n\nUsing the `external-dns.alpha.kubernetes.io\/cloudflare-region-key` annotation on your ingress, you can restrict which data centers can decrypt and serve HTTPS traffic. 
A list of available options can be seen [here](https:\/\/developers.cloudflare.com\/data-localization\/regional-services\/get-started\/).\n\nIf not set the value will default to `global`.","site":"external-dns"}
{"questions":"external-dns Creating a DigitalOcean DNS zone Make sure to use 0 4 2 version of ExternalDNS for this tutorial If you want to learn about how to use DigitalOcean s DNS service read the following tutorial series This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using DigitalOcean DNS DigitalOcean DNS","answers":"# DigitalOcean DNS\n\nThis tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using DigitalOcean DNS.\n\nMake sure to use **>=0.4.2** version of ExternalDNS for this tutorial.\n\n## Creating a DigitalOcean DNS zone\n\nIf you want to learn about how to use DigitalOcean's DNS service read the following tutorial series:\n\n[An Introduction to Managing DNS](https:\/\/www.digitalocean.com\/community\/tutorial_series\/an-introduction-to-managing-dns), and specifically [How To Set Up a Host Name with DigitalOcean DNS](https:\/\/www.digitalocean.com\/community\/tutorials\/how-to-set-up-a-host-name-with-digitalocean)\n\nCreate a new DNS zone where you want to create your records in. Let's use `example.com` as an example here.\n\n## Creating DigitalOcean Credentials\n\nGenerate a new personal token by going to [the API settings](https:\/\/cloud.digitalocean.com\/settings\/api\/tokens) or follow [How To Use the DigitalOcean API v2](https:\/\/www.digitalocean.com\/community\/tutorials\/how-to-use-the-digitalocean-api-v2) if you need more information. Give the token a name and choose read and write access. The token needs to be passed to ExternalDNS so make a note of it for later use.\n\nThe environment variable `DO_TOKEN` will be needed to run ExternalDNS with DigitalOcean.\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\n\nBegin by creating a Kubernetes secret to securely store your DigitalOcean API key. 
This key will enable ExternalDNS to authenticate with DigitalOcean:\n\n```shell\nkubectl create secret generic do-token --from-literal=DO_TOKEN=YOUR_DIGITALOCEAN_API_KEY\n```\n\nBe sure to replace YOUR_DIGITALOCEAN_API_KEY with your actual DigitalOcean API key. (Kubernetes object names must be lowercase DNS-1123 names, so the secret is named `do-token`; the key inside it remains `DO_TOKEN`.)\n\nThen apply one of the following manifest files to deploy ExternalDNS.\n\n## Using Helm\n\nCreate a values.yaml file to configure ExternalDNS to use DigitalOcean as the DNS provider. This file should include the necessary environment variables:\n\n```yaml\nprovider:\n  name: digitalocean\nenv:\n  - name: DO_TOKEN\n    valueFrom:\n      secretKeyRef:\n        name: do-token\n        key: DO_TOKEN\n```\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: external-dns\n  strategy:\n    type: Recreate\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=digitalocean\n        env:\n        - name: DO_TOKEN\n          valueFrom:\n            secretKeyRef:\n              name: do-token\n              key: DO_TOKEN\n```\n\n### Manifest (for clusters with RBAC enabled)\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: external-dns\n  strategy:\n    type: Recreate\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=digitalocean\n        env:\n        - name: DO_TOKEN\n          valueFrom:\n            secretKeyRef:\n              name: do-token\n              key: DO_TOKEN\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: my-app.example.com\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nNote the annotation on the service; use a hostname that belongs to the DigitalOcean DNS zone created above.\n\nExternalDNS uses this annotation to 
determine what services should be registered with DNS. Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.\n\nCreate the deployment and service:\n\n```console\n$ kubectl create -f nginx.yaml\n```\n\nDepending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.\n\nOnce the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the DigitalOcean DNS records.\n\n## Verifying DigitalOcean DNS records\n\nCheck your [DigitalOcean UI](https:\/\/cloud.digitalocean.com\/networking\/domains) to view the records for your DigitalOcean DNS zone.\n\nClick on the zone created above (or on the zone for the domain you used instead).\n\nThis should show the external IP address of the service as the A record for your domain.\n\n## Cleanup\n\nNow that we have verified that ExternalDNS will automatically manage DigitalOcean DNS records, we can delete the tutorial's example:\n\n```console\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```\n\n## Advanced Usage\n\n### API Page Size\n\nIf you have a large number of domains and\/or records within a domain, you may encounter API\nrate limiting because of the number of API calls that external-dns must make to the DigitalOcean API to retrieve\nthe current DNS configuration during every reconciliation loop. 
If this is the case, use the \n`--digitalocean-api-page-size` option to increase the size of the pages used when querying the DigitalOcean API.\n(Note: external-dns uses a default of 50.)","site":"external-dns"}
{"questions":"external-dns AWS The following IAM Policy document allows ExternalDNS to update Route53 Resource Record Sets and Hosted Zones You ll want to create this Policy in IAM first In our example we ll call the policy but you can call This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster on AWS Make sure to use 0 15 0 version of ExternalDNS for this tutorial IAM Policy it whatever you prefer","answers":"# AWS\n\nThis tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster on AWS. Make sure to use a **>=0.15.0** version of ExternalDNS for this tutorial.\n\n## IAM Policy\n\nThe following IAM Policy document allows ExternalDNS to update Route53 Resource\nRecord Sets and Hosted Zones. You'll want to create this Policy in IAM first. In\nour example, we'll call the policy `AllowExternalDNSUpdates` (but you can call\nit whatever you prefer).\n\nIf you prefer, you may fine-tune the policy to permit updates only to explicit\nHosted Zone IDs.\n\n```json\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"route53:ChangeResourceRecordSets\"\n      ],\n      \"Resource\": [\n        \"arn:aws:route53:::hostedzone\/*\"\n      ]\n    },\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"route53:ListHostedZones\",\n        \"route53:ListResourceRecordSets\",\n        \"route53:ListTagsForResource\"\n      ],\n      \"Resource\": [\n        \"*\"\n      ]\n    }\n  ]\n}\n```\n\nIf you are using the AWS CLI, you can run the following to install the above policy (saved as `policy.json`).  
This can be used in subsequent steps to allow ExternalDNS to access Route53 zones.\n\n```bash\naws iam create-policy --policy-name \"AllowExternalDNSUpdates\" --policy-document file:\/\/policy.json\n\n# example: arn:aws:iam::XXXXXXXXXXXX:policy\/AllowExternalDNSUpdates\nexport POLICY_ARN=$(aws iam list-policies \\\n --query 'Policies[?PolicyName==`AllowExternalDNSUpdates`].Arn' --output text)\n```\n\n## Provisioning a Kubernetes cluster\n\nYou can use [eksctl](https:\/\/eksctl.io) to easily provision an [Amazon Elastic Kubernetes Service](https:\/\/aws.amazon.com\/eks) ([EKS](https:\/\/aws.amazon.com\/eks)) cluster that is suitable for this tutorial.  See [Getting started with Amazon EKS \u2013 eksctl](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/getting-started-eksctl.html).\n\n```bash\nexport EKS_CLUSTER_NAME=\"my-externaldns-cluster\"\nexport EKS_CLUSTER_REGION=\"us-east-2\"\nexport KUBECONFIG=\"$HOME\/.kube\/${EKS_CLUSTER_NAME}-${EKS_CLUSTER_REGION}.yaml\"\n\neksctl create cluster --name $EKS_CLUSTER_NAME --region $EKS_CLUSTER_REGION\n```\n\nFeel free to use other provisioning tools or an existing cluster.  If [Terraform](https:\/\/www.terraform.io\/) is used, [vpc](https:\/\/registry.terraform.io\/modules\/terraform-aws-modules\/vpc\/aws\/) and [eks](https:\/\/registry.terraform.io\/modules\/terraform-aws-modules\/eks\/aws\/) modules are recommended for standing up an EKS cluster.  Amazon has a workshop called [Amazon EKS Terraform Workshop](https:\/\/catalog.us-east-1.prod.workshops.aws\/workshops\/afee4679-89af-408b-8108-44f5b1065cc7\/) that may be useful for this process.\n\n## Permissions to modify DNS zone\n\nYou will need to use the above policy (represented by the `POLICY_ARN` environment variable) to allow ExternalDNS to update records in Route53 DNS zones. 
Here are three common ways this can be accomplished:\n\n* [Node IAM Role](#node-iam-role)\n* [Static credentials](#static-credentials)\n* [IAM Roles for Service Accounts](#iam-roles-for-service-accounts)\n\nFor this tutorial, ExternalDNS will use the environment variable `EXTERNALDNS_NS` to represent the namespace, defaulted to `default`.  Feel free to change this to something else, such as `externaldns` or `kube-addons`.  Make sure to edit the `subjects[0].namespace` for the `ClusterRoleBinding` resource when deploying ExternalDNS with RBAC enabled.  See [Manifest (for clusters with RBAC enabled)](#manifest-for-clusters-with-rbac-enabled) for more information.\n\nAdditionally, throughout this tutorial, the example domain of `example.com` is used.  Change this to an appropriate domain under your control.  See the [Set up a hosted zone](#set-up-a-hosted-zone) section.\n\n### Node IAM Role\n\nIn this method, you can attach a policy to the Node IAM Role.  This will allow nodes in the Kubernetes cluster to access Route53 zones, which allows ExternalDNS to update DNS records.  Given that this allows all containers running on the node, not just ExternalDNS, to access Route53, this method is not recommended, and is only suitable for limited test environments.\n\nIf you are using eksctl to provision a new cluster, you can add the policy at creation time with:\n\n```bash\neksctl create cluster --external-dns-access \\\n  --name $EKS_CLUSTER_NAME --region $EKS_CLUSTER_REGION\n```\n\n:warning: **WARNING**: This will allow read-write access for all nodes in the cluster, not just ExternalDNS.  For this reason, this method is only suitable for limited test environments.\n\nIf you already provisioned a cluster or use other provisioning tools like Terraform, you can use the AWS CLI to attach the policy to the Node IAM Role.\n\n#### Get the Node IAM role name\n\nThe role name of the role associated with the node(s) where ExternalDNS will run is needed.  
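The command-line approaches in the next two subsections both finish by extracting the role name from a role ARN with shell parameter expansion. As a standalone illustration of that one step (the ARN below is a made-up example):

```shell
# ${var##*/} strips everything up to and including the last '/',
# leaving only the final path segment of the ARN (the role name).
ROLE_ARN="arn:aws:iam::123456789012:role/eksctl-demo-NodeInstanceRole-ABC123"
ROLE_NAME=${ROLE_ARN##*/}
echo "$ROLE_NAME"   # eksctl-demo-NodeInstanceRole-ABC123
```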
An easy way to get the role name is to use the AWS web console (https:\/\/console.aws.amazon.com\/eks\/): find any instance in the target node group and copy the role name associated with that instance.\n\n##### Get role name with a single managed nodegroup\n\nFrom the command line, if you have a single managed node group, the default with `eksctl create cluster`, you can find the role name with the following:\n\n```bash\n# get managed node group name (assuming there's only one node group)\nGROUP_NAME=$(aws eks list-nodegroups --cluster-name $EKS_CLUSTER_NAME \\\n  --query nodegroups --out text)\n# fetch role arn given node group name\nNODE_ROLE_ARN=$(aws eks describe-nodegroup --cluster-name $EKS_CLUSTER_NAME \\\n  --nodegroup-name $GROUP_NAME --query nodegroup.nodeRole --out text)\n# extract just the name part of role arn\nNODE_ROLE_NAME=${NODE_ROLE_ARN##*\/}\n```\n\n##### Get role name with other configurations\n\nIf you have multiple node groups or any unmanaged node groups, the process gets more complex.  
The first step is to get the instance host name of the desired node where ExternalDNS will be deployed or is already deployed:\n\n```bash\n# node instance name of one of the external dns pods currently running\nINSTANCE_NAME=$(kubectl get pods --all-namespaces \\\n  --selector app.kubernetes.io\/instance=external-dns \\\n  --output jsonpath='{.items[0].spec.nodeName}')\n\n# alternatively: instance name of one of the nodes (change if node group is different)\nINSTANCE_NAME=$(kubectl get nodes --output name | cut -d'\/' -f2 | tail -1)\n```\n\nWith the instance host name, you can then get the instance id:\n\n```bash\nget_instance_id() {\n  INSTANCE_NAME=$1 # example: ip-192-168-74-34.us-east-2.compute.internal\n\n  # get list of nodes\n  # ip-192-168-74-34.us-east-2.compute.internal\taws:\/\/\/us-east-2a\/i-xxxxxxxxxxxxxxxxx\n  # ip-192-168-86-105.us-east-2.compute.internal\taws:\/\/\/us-east-2a\/i-xxxxxxxxxxxxxxxxx\n  NODES=$(kubectl get nodes \\\n   --output jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.spec.providerID}{\"\\n\"}{end}')\n\n  # print instance id from matching node\n  grep $INSTANCE_NAME <<< \"$NODES\" | cut -d'\/' -f5\n}\n\nINSTANCE_ID=$(get_instance_id $INSTANCE_NAME)\n```\n\nWith the instance id, you can get the associated role name:\n\n```bash\nfindRoleName() {\n  INSTANCE_ID=$1\n\n  # get all of the roles\n  ROLES=($(aws iam list-roles --query Roles[*].RoleName --out text))\n  for ROLE in ${ROLES[*]}; do\n    # get instance profile arn\n    PROFILE_ARN=$(aws iam list-instance-profiles-for-role \\\n      --role-name $ROLE --query InstanceProfiles[0].Arn --output text)\n    # if there is an instance profile\n    if [[ \"$PROFILE_ARN\" != \"None\" ]]; then\n      # get all the instances with this associated instance profile\n      INSTANCES=$(aws ec2 describe-instances \\\n        --filters Name=iam-instance-profile.arn,Values=$PROFILE_ARN \\\n        --query Reservations[*].Instances[0].InstanceId --out text)\n      # find instances that match the 
instance profile\n      for INSTANCE in ${INSTANCES[*]}; do\n        # set role name value if there is a match\n        if [[ \"$INSTANCE_ID\" == \"$INSTANCE\" ]]; then ROLE_NAME=$ROLE; fi\n      done\n    fi\n  done\n\n  echo $ROLE_NAME\n}\n\nNODE_ROLE_NAME=$(findRoleName $INSTANCE_ID)\n```\n\nUsing the role name, you can associate the policy that was created earlier:\n\n```bash\n# attach policy arn created earlier to node IAM role\naws iam attach-role-policy --role-name $NODE_ROLE_NAME --policy-arn $POLICY_ARN\n```\n\n:warning: **WARNING**: This will allow read-write access for all pods running on the same node pool, not just the ExternalDNS pod(s).\n\n#### Deploy ExternalDNS with attached policy to Node IAM Role\n\nIf ExternalDNS is not yet deployed, follow the steps under [Deploy ExternalDNS](#deploy-externaldns) using either RBAC or non-RBAC.\n\n**NOTE**: Before deleting the cluster, be sure to run `aws iam detach-role-policy`.  Otherwise, there can be errors as the provisioning system, such as `eksctl` or `terraform`, will not be able to delete the roles with the attached policy.\n\n### Static credentials\n\nIn this method, the policy is attached to an IAM user, and the credentials secrets for the IAM user are then made available using a Kubernetes secret.\n\nThis method is not the preferred method as the secrets in the credential file could be copied and used by an unauthorized threat actor.  However, if the Kubernetes cluster is not hosted on AWS, it may be the only method available.  Given this situation, it is important to limit the associated privileges to just minimal required privileges, i.e. 
read-write access to Route53, and not to use a credentials file that has extra privileges beyond what is required.\n\n#### Create IAM user and attach the policy\n\n```bash\n# create IAM user\naws iam create-user --user-name \"externaldns\"\n\n# attach policy arn created earlier to IAM user\naws iam attach-user-policy --user-name \"externaldns\" --policy-arn $POLICY_ARN\n```\n\n#### Create the static credentials\n\n```bash\nACCESS_KEY_JSON=$(aws iam create-access-key --user-name \"externaldns\")\nACCESS_KEY_ID=$(echo $ACCESS_KEY_JSON | jq -r '.AccessKey.AccessKeyId')\n\ncat <<-EOF > credentials\n[default]\naws_access_key_id = ${ACCESS_KEY_ID}\naws_secret_access_key = $(echo $ACCESS_KEY_JSON | jq -r '.AccessKey.SecretAccessKey')\nEOF\n```\n\n#### Create Kubernetes secret from credentials\n\n```bash\nkubectl create secret generic external-dns \\\n  --namespace ${EXTERNALDNS_NS:-\"default\"} --from-file \/local\/path\/to\/credentials\n```\n\n#### Deploy ExternalDNS using static credentials\n\nFollow the steps under [Deploy ExternalDNS](#deploy-externaldns) using either RBAC or non-RBAC.  Make sure to uncomment the section that mounts volumes, so that the credentials can be mounted.\n\n> [!TIP]\n> By default ExternalDNS takes the profile named `default` from the credentials file. If you want to use a different \n> profile, you can set the environment variable `EXTERNAL_DNS_AWS_PROFILE` to the desired profile name or use the \n> `--aws-profile` command line argument. It is even possible to use more than one profile at once, separated by spaces in\n> the environment variable `EXTERNAL_DNS_AWS_PROFILE` or by using `--aws-profile` multiple times. 
In this case \n> ExternalDNS looks for the hosted zones in all profiles and keeps maintaining a mapping table between zone and profile \n> in order to be able to modify the zones in the correct profile.\n\n### IAM Roles for Service Accounts\n\n[IRSA](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts.html) ([IAM Roles for Service Accounts](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts.html)) allows cluster operators to map AWS IAM Roles to Kubernetes Service Accounts.  This essentially allows only ExternalDNS pods to access Route53 without exposing any static credentials.\n\nThis is the preferred method as it implements [PoLP](https:\/\/csrc.nist.gov\/glossary\/term\/principle_of_least_privilege) ([Principle of Least Privilege](https:\/\/csrc.nist.gov\/glossary\/term\/principle_of_least_privilege)).\n\n**IMPORTANT**: This method requires using a KSA (Kubernetes service account) and RBAC.\n\nThis method requires deploying with RBAC.  See [Manifest (for clusters with RBAC enabled)](#manifest-for-clusters-with-rbac-enabled) when ready to deploy ExternalDNS.\n\n**NOTE**: Similar methods to IRSA on AWS are [kiam](https:\/\/github.com\/uswitch\/kiam), which is in maintenance mode, and has [instructions](https:\/\/github.com\/uswitch\/kiam\/blob\/HEAD\/docs\/IAM.md) for creating an IAM role, and also [kube2iam](https:\/\/github.com\/jtblin\/kube2iam).  
IRSA is the officially supported method for EKS clusters, and so for non-EKS clusters on AWS, these other tools could be an option.\n\n#### Verify OIDC is supported\n\n```bash\naws eks describe-cluster --name $EKS_CLUSTER_NAME \\\n  --query \"cluster.identity.oidc.issuer\" --output text\n```\n\n#### Associate OIDC to cluster\n\nConfigure the cluster with an OIDC provider and add support for [IRSA](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts.html) ([IAM Roles for Service Accounts](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts.html)).\n\nIf you used `eksctl` to provision the EKS cluster, you can update it with the following command:\n\n```bash\neksctl utils associate-iam-oidc-provider \\\n  --cluster $EKS_CLUSTER_NAME --approve\n```\n\nIf the cluster was provisioned with Terraform, you can use the `iam_openid_connect_provider` resource ([ref](https:\/\/registry.terraform.io\/providers\/hashicorp\/aws\/latest\/docs\/resources\/iam_openid_connect_provider)) to associate to the OIDC provider.\n\n#### Create an IAM role bound to a service account\n\nFor the next steps in this process, we will need to associate the `external-dns` service account with a role that grants access to Route53.  This requires the following steps:\n\n1. Create a role with a trust relationship to the cluster's OIDC provider\n2. Attach the `AllowExternalDNSUpdates` policy to the role\n3. Create the `external-dns` service account\n4. 
Add an annotation to the service account with the role arn\n\n##### Use eksctl with an eksctl-created EKS cluster\n\nIf `eksctl` was used to provision the EKS cluster, you can perform all of these steps with the following command:\n\n```bash\neksctl create iamserviceaccount \\\n  --cluster $EKS_CLUSTER_NAME \\\n  --name \"external-dns\" \\\n  --namespace ${EXTERNALDNS_NS:-\"default\"} \\\n  --attach-policy-arn $POLICY_ARN \\\n  --approve\n```\n\n##### Use aws cli with any EKS cluster\n\nOtherwise, we can do the following steps using `aws` commands (also see [Creating an IAM role and policy for your service account](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/create-service-account-iam-policy-and-role.html)):\n\n```bash\nACCOUNT_ID=$(aws sts get-caller-identity \\\n  --query \"Account\" --output text)\nOIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME \\\n  --query \"cluster.identity.oidc.issuer\" --output text | sed -e 's|^https:\/\/||')\n\ncat <<-EOF > trust.json\n{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Principal\": {\n                \"Federated\": \"arn:aws:iam::$ACCOUNT_ID:oidc-provider\/$OIDC_PROVIDER\"\n            },\n            \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n            \"Condition\": {\n                \"StringEquals\": {\n                    \"$OIDC_PROVIDER:sub\": \"system:serviceaccount:${EXTERNALDNS_NS:-\"default\"}:external-dns\",\n                    \"$OIDC_PROVIDER:aud\": \"sts.amazonaws.com\"\n                }\n            }\n        }\n    ]\n}\nEOF\n\nIRSA_ROLE=\"external-dns-irsa-role\"\naws iam create-role --role-name $IRSA_ROLE --assume-role-policy-document file:\/\/trust.json\naws iam attach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN\n\nROLE_ARN=$(aws iam get-role --role-name $IRSA_ROLE --query Role.Arn --output text)\n\n# Create service account (skip if already created)\nkubectl create serviceaccount 
\"external-dns\" --namespace ${EXTERNALDNS_NS:-\"default\"}\n\n# Add annotation referencing IRSA role\nkubectl patch serviceaccount \"external-dns\" --namespace ${EXTERNALDNS_NS:-\"default\"} --patch \\\n \"{\\\"metadata\\\": { \\\"annotations\\\": { \\\"eks.amazonaws.com\/role-arn\\\": \\\"$ROLE_ARN\\\" }}}\"\n```\n\nIf any part of this step is misconfigured, such as the role having an incorrect namespace configured in the trust relationship, or the annotation pointing to the wrong role, you will see errors like `WebIdentityErr: failed to retrieve credentials`. Check the configuration and make corrections.\n\nWhen the service account annotations are updated, the currently running pods will have to be terminated, so that new pod(s) with the proper configuration (environment variables) are created automatically.\n\nWhen the annotation is added to the service account, newly scheduled ExternalDNS pod(s) will have the `AWS_ROLE_ARN`, `AWS_STS_REGIONAL_ENDPOINTS`, and `AWS_WEB_IDENTITY_TOKEN_FILE` environment variables injected automatically.\n\n#### Deploy ExternalDNS using IRSA\n\nFollow the steps under [Manifest (for clusters with RBAC enabled)](#manifest-for-clusters-with-rbac-enabled).  Make sure to comment out the service account section if it has been created already.\n\nIf you deployed ExternalDNS before adding the service account annotation and the corresponding role, you will likely see an error like `failed to list hosted zones: AccessDenied: User`.  You can delete the currently running ExternalDNS pod(s) after updating the annotation, so that newly scheduled pods will have the appropriate configuration to access Route53.\n\n## Set up a hosted zone\n\n*If you prefer to try out ExternalDNS in one of the existing hosted zones, you can skip this step.*\n\nCreate a DNS zone which will contain the managed DNS records.  
This tutorial will use the fictional domain of `example.com`.\n\n```bash\naws route53 create-hosted-zone --name \"example.com.\" \\\n  --caller-reference \"external-dns-test-$(date +%s)\"\n```\n\nMake a note of the nameservers that were assigned to your new zone.\n\n```bash\nZONE_ID=$(aws route53 list-hosted-zones-by-name --output json \\\n  --dns-name \"example.com.\" --query HostedZones[0].Id --out text)\n\naws route53 list-resource-record-sets --output text \\\n --hosted-zone-id $ZONE_ID --query \\\n \"ResourceRecordSets[?Type == 'NS'].ResourceRecords[*].Value | []\" | tr '\\t' '\\n'\n```\n\nThis should yield something similar to this:\n\n```\nns-695.awsdns-22.net.\nns-1313.awsdns-36.org.\nns-350.awsdns-43.com.\nns-1805.awsdns-33.co.uk.\n```\n\nIf using your own domain that was registered with a third-party domain registrar, you should point your domain's name servers to the values from the list above.  Please consult your registrar's documentation on how to do that.\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\nThen apply one of the following manifest files to deploy ExternalDNS. You can check if your cluster has RBAC by `kubectl api-versions | grep rbac.authorization.k8s.io`.\n\nFor clusters with RBAC enabled, be sure to choose the correct `namespace`.  For this tutorial, the environment variable `EXTERNALDNS_NS` will refer to the namespace.  
You can set this to a value of your choice:\n\n```bash\nexport EXTERNALDNS_NS=\"default\" # externaldns, kube-addons, etc\n\n# create namespace if it does not yet exist\nkubectl get namespaces | grep -q $EXTERNALDNS_NS || \\\n  kubectl create namespace $EXTERNALDNS_NS\n```\n\n## Using Helm (with OIDC)\n\nCreate a values.yaml file to configure ExternalDNS:\n\n```yaml\nprovider:\n  name: aws\nenv:\n  - name: AWS_DEFAULT_REGION\n    value: us-east-1 # change to region where EKS is installed\n```\n\nFinally, install the ExternalDNS chart with Helm using the configuration specified in your values.yaml file:\n\n```shell\nhelm upgrade --install external-dns external-dns\/external-dns --values values.yaml\n```\n\n### When using clusters without RBAC enabled\n\nSave the following as `externaldns-no-rbac.yaml`.\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\n  labels:\n    app.kubernetes.io\/name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app.kubernetes.io\/name: external-dns\n  template:\n    metadata:\n      labels:\n        app.kubernetes.io\/name: external-dns\n    spec:\n      containers:\n        - name: external-dns\n          image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n          args:\n            - --source=service\n            - --source=ingress\n            - --domain-filter=example.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones\n            - --provider=aws\n            - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization\n            - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)\n            - --registry=txt\n            - --txt-owner-id=my-hostedzone-identifier\n          env:\n            - name: AWS_DEFAULT_REGION\n              value: us-east-1 # 
change to region where EKS is installed\n      # # Uncomment below if using static credentials\n      #       - name: AWS_SHARED_CREDENTIALS_FILE\n      #        value: \/.aws\/credentials\n      #     volumeMounts:\n      #       - name: aws-credentials\n      #         mountPath: \/.aws\n      #         readOnly: true\n      # volumes:\n      #   - name: aws-credentials\n      #     secret:\n      #       secretName: external-dns\n```\n\nWhen ready, you can deploy:\n\n```bash\nkubectl create --filename externaldns-no-rbac.yaml \\\n  --namespace ${EXTERNALDNS_NS:-\"default\"}\n```\n\n### When using clusters with RBAC enabled\n\nIf you're using EKS, you can update the `values.yaml` file you created earlier to include the annotations to link the Role ARN you created before.\n\n```yaml\nprovider:\n  name: aws\nserviceAccount:\n  annotations:\n    eks.amazonaws.com\/role-arn: arn:aws:iam::${ACCOUNT_ID}:role\/${EXTERNALDNS_ROLE_NAME:-\"external-dns\"}\n```\n\nIf you need to provide credentials directly using a secret (i.e. 
you're not using EKS), you can change the `values.yaml` file to include volumes and volume mounts.\n\n```yaml\nprovider:\n  name: aws\nenv:\n  - name: AWS_SHARED_CREDENTIALS_FILE\n    value: \/etc\/aws\/credentials\/my_credentials\nextraVolumes:\n  - name: aws-credentials\n    secret:\n      secretName: external-dns # In this example, the secret will have the data stored in a key named `my_credentials`\nextraVolumeMounts:\n  - name: aws-credentials\n    mountPath: \/etc\/aws\/credentials\n    readOnly: true\n```\n\nWhen ready, update your Helm installation:\n\n```shell\nhelm upgrade --install external-dns external-dns\/external-dns --values values.yaml\n```\n\n## Arguments\n\nThis is not the full list, but a selection of notable arguments.\n\n### aws-zone-type\n\n`aws-zone-type` allows filtering for private and public zones.\n\n## Annotations\n\nAnnotations which are specific to AWS.\n\n### alias\n\nIf `external-dns.alpha.kubernetes.io\/alias` is set to `true` on an ingress, it will create an ALIAS record when the target is an ALIAS as well. To make the target an alias, the ingress needs to be configured correctly as described in [the docs](.\/gke-nginx.md#with-a-separate-tcp-load-balancer). In particular, the argument `--publish-service=default\/nginx-ingress-controller` has to be set on the `nginx-ingress-controller` container. If one uses the `nginx-ingress` Helm chart, this flag can be set with the `controller.publishService.enabled` configuration option.\n\n### target-hosted-zone\n\n`external-dns.alpha.kubernetes.io\/aws-target-hosted-zone` can optionally be set to the ID of a Route53 hosted zone. 
This will force external-dns to use the specified hosted zone when creating an ALIAS target.\n\n### aws-zone-match-parent\n`aws-zone-match-parent` allows supporting subdomains within the same zone by using their parent domain, e.g. `--domain-filter=x.example.com` would create a DNS entry for x.example.com (and subdomains thereof).\n\n```yaml\n## hosted zone domain: example.com\n--domain-filter=x.example.com,example.com\n--aws-zone-match-parent\n```\n\n## Verify ExternalDNS works (Service example)\n\nCreate the following sample application to test that ExternalDNS works.\n\n> For services ExternalDNS will look for the annotation `external-dns.alpha.kubernetes.io\/hostname` on the service and use the corresponding value.\n\n> If you want to give multiple names to a service, you can set them in `external-dns.alpha.kubernetes.io\/hostname` with a comma `,` separator.\n\nFor this verification phase, you can use the default or another namespace for the nginx demo, for example:\n\n```bash\nNGINXDEMO_NS=\"nginx\"\nkubectl get namespaces | grep -q $NGINXDEMO_NS || kubectl create namespace $NGINXDEMO_NS\n```\n\nSave the following manifest as `nginx.yaml`:\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: nginx.example.com\nspec:\n  type: LoadBalancer\n  ports:\n  - port: 80\n    name: http\n    targetPort: 80\n  selector:\n    app: nginx\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n          name: http\n```\n\nDeploy the nginx deployment and service with:\n\n```bash\nkubectl create --filename nginx.yaml --namespace ${NGINXDEMO_NS:-\"default\"}\n```\n\nVerify that the load balancer was allocated with:\n\n```bash\nkubectl get service nginx 
--namespace ${NGINXDEMO_NS:-\"default\"}\n```\n\nThis should show something like:\n\n```bash\nNAME    TYPE           CLUSTER-IP     EXTERNAL-IP                                                                   PORT(S)        AGE\nnginx   LoadBalancer   10.100.47.41   ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.   80:32749\/TCP   12m\n```\n\nAfter roughly two minutes, check that a corresponding DNS record for your service was created.\n\n```bash\naws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \\\n  --query \"ResourceRecordSets[?Name == 'nginx.example.com.']|[?Type == 'A']\"\n```\n\nThis should show something like:\n\n```json\n[\n    {\n        \"Name\": \"nginx.example.com.\",\n        \"Type\": \"A\",\n        \"AliasTarget\": {\n            \"HostedZoneId\": \"ZEWFWZ4R16P7IB\",\n            \"DNSName\": \"ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.\",\n            \"EvaluateTargetHealth\": true\n        }\n    }\n]\n```\n\nYou can also fetch the corresponding TXT records:\n\n```bash\naws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \\\n  --query \"ResourceRecordSets[?Name == 'nginx.example.com.']|[?Type == 'TXT']\"\n```\n\nThis will show something like:\n\n```json\n[\n    {\n        \"Name\": \"nginx.example.com.\",\n        \"Type\": \"TXT\",\n        \"TTL\": 300,\n        \"ResourceRecords\": [\n            {\n                \"Value\": \"\\\"heritage=external-dns,external-dns\/owner=external-dns,external-dns\/resource=service\/default\/nginx\\\"\"\n            }\n        ]\n    }\n]\n```\n\nNote the TXT record created alongside the ALIAS record. The TXT record signifies that the corresponding ALIAS record is managed by ExternalDNS. 
This makes ExternalDNS safe for running in environments where there are other records managed via other means.\n\nFor more information about ALIAS records, see [Choosing between alias and non-alias records](https:\/\/docs.aws.amazon.com\/Route53\/latest\/DeveloperGuide\/resource-record-sets-choosing-alias-non-alias.html).\n\nLet's check that we can resolve this DNS name. We'll ask the nameservers assigned to your zone first.\n\n```bash\ndig +short @ns-5514.awsdns-53.org. nginx.example.com.\n```\n\nThis should return one or more IP addresses that correspond to the ELB FQDN, i.e. `ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.`.\n\nNext, try the public nameservers configured by the DNS client on your system:\n\n```bash\ndig +short nginx.example.com.\n```\n\nIf you hooked up your DNS zone with its parent zone correctly, you can use `curl` to access your site.\n\n```bash\ncurl nginx.example.com.\n```\n\nThis should show something like:\n\n```html\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!<\/title>\n...\n<\/head>\n<body>\n<h1>Welcome to nginx!<\/h1>\n...\n<\/body>\n<\/html>\n```\n\n## Verify ExternalDNS works (Ingress example)\n\nWith the previous `deployment` and `service` objects deployed, we can add an `ingress` object and configure a FQDN value for the `host` key.  The ingress controller will match incoming HTTP traffic, and route it to the appropriate backend service based on the `host` key.\n\n> For ingress objects ExternalDNS will create a DNS record based on the host specified for the ingress object.\n\nFor this tutorial, we have two endpoints, the service with `LoadBalancer` type and an ingress.  For practical purposes, if an ingress is used, the service type can be changed to `ClusterIP`, as two endpoints are unnecessary in this scenario.\n\n**IMPORTANT**: This requires that an ingress controller has been installed in your Kubernetes cluster.  EKS does not come with an ingress controller by default.  
A popular ingress controller is [ingress-nginx](https:\/\/github.com\/kubernetes\/ingress-nginx\/), which can be installed by a [helm chart](https:\/\/artifacthub.io\/packages\/helm\/ingress-nginx\/ingress-nginx) or by [manifests](https:\/\/kubernetes.github.io\/ingress-nginx\/deploy\/#aws).\n\nCreate an ingress resource manifest file named `ingress.yaml` with the contents below:\n\n```yaml\n---\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: nginx\nspec:\n  ingressClassName: nginx\n  rules:\n    - host: server.example.com\n      http:\n        paths:\n          - backend:\n              service:\n                name: nginx\n                port:\n                  number: 80\n            path: \/\n            pathType: Prefix\n```\n\nWhen ready, you can deploy this with:\n\n```bash\nkubectl create --filename ingress.yaml --namespace ${NGINXDEMO_NS:-\"default\"}\n```\n\nWatch the status of the ingress until the ADDRESS field is populated.\n\n```bash\nkubectl get ingress --watch --namespace ${NGINXDEMO_NS:-\"default\"}\n```\n\nYou should see something like this:\n\n```\nNAME    CLASS    HOSTS                ADDRESS   PORTS   AGE\nnginx   <none>   server.example.com             80      47s\nnginx   <none>   server.example.com   ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.   80      54s\n```\n\nFor the ingress test, run through similar checks, but using the domain name used for the ingress:\n\n```bash\n# check records on route53\naws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \\\n  --query \"ResourceRecordSets[?Name == 'server.example.com.']\"\n\n# query using a route53 name server\ndig +short @ns-5514.awsdns-53.org. 
server.example.com.\n# query using the default name server\ndig +short server.example.com.\n\n# connect to the nginx web server through the ingress\ncurl server.example.com.\n```\n\n## More service annotation options\n\n### Custom TTL\n\nThe default DNS record TTL (Time-To-Live) is 300 seconds. You can customize this value by setting the annotation `external-dns.alpha.kubernetes.io\/ttl`.\ne.g., modify the service manifest YAML file above:\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: nginx.example.com\n    external-dns.alpha.kubernetes.io\/ttl: \"60\"\nspec:\n    ...\n```\n\nThis will set the DNS record's TTL to 60 seconds.\n\n### Routing policies\n\nRoute53 offers [different routing policies](https:\/\/docs.aws.amazon.com\/Route53\/latest\/DeveloperGuide\/routing-policy.html). The routing policy for a record can be controlled with the following annotations:\n\n* `external-dns.alpha.kubernetes.io\/set-identifier`: this **needs** to be set to use any of the following routing policies\n\nFor any given DNS name, only **one** of the following routing policies can be used:\n\n* Weighted records: `external-dns.alpha.kubernetes.io\/aws-weight`\n* Latency-based routing: `external-dns.alpha.kubernetes.io\/aws-region`\n* Failover: `external-dns.alpha.kubernetes.io\/aws-failover`\n* Geolocation-based routing:\n  * `external-dns.alpha.kubernetes.io\/aws-geolocation-continent-code`\n  * `external-dns.alpha.kubernetes.io\/aws-geolocation-country-code`\n  * `external-dns.alpha.kubernetes.io\/aws-geolocation-subdivision-code`\n* Multi-value answer: `external-dns.alpha.kubernetes.io\/aws-multi-value-answer`\n\n### Associating DNS records with healthchecks\n\nYou can configure Route53 to associate DNS records with healthchecks for automated DNS failover using the\n`external-dns.alpha.kubernetes.io\/aws-health-check-id: <health-check-id>` annotation.\n\nNote: ExternalDNS does not support creating 
healthchecks, and assumes that `<health-check-id>` already exists.\n\n## Canonical Hosted Zones\n\nWhen creating ALIAS type records in Route53, it is required that external-dns be aware of the canonical hosted zone in which\nthe specified hostname is created. External-dns is able to automatically identify the canonical hosted zone for many\nhostnames based upon known hostname suffixes which are defined in [aws.go](https:\/\/github.com\/kubernetes-sigs\/external-dns\/blob\/master\/provider\/aws\/aws.go#L65). If a hostname\ndoes not have a known suffix, then the suffix can be added into `aws.go` or the [target-hosted-zone annotation](#target-hosted-zone)\ncan be used to manually define the ID of the canonical hosted zone.\n\n## GovCloud caveats\n\nDue to the special nature of how Route53 runs in GovCloud, there are a few tweaks in the deployment settings.\n\n* An environment variable named `AWS_REGION` set to either `us-gov-west-1` or `us-gov-east-1` is required. Otherwise, it tries to look up a region that does not exist in GovCloud and errors out.\n\n```yaml\nenv:\n- name: AWS_REGION\n  value: us-gov-west-1\n```\n\n* Route53 in GovCloud does not allow aliases. Therefore, container args must be set so that it uses CNAMEs, and a txt-prefix must be set to something. Otherwise, it will try to create a TXT record with the same value as the CNAME itself, which is not allowed.\n\n```yaml\nargs:\n- --aws-prefer-cname\n- --txt-prefix=\n```\n\n* The first two changes are needed if you use Route53 in GovCloud, which only supports private zones. There is also no cross-account IAM whatsoever between GovCloud and commercial AWS accounts. 
If services and ingresses need to make Route 53 entries in a public zone in a commercial account, you will have to set the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` with a key and secret for the commercial account that has sufficient rights.\n\n```yaml\nenv:\n- name: AWS_ACCESS_KEY_ID\n  value: XXXXXXXXX\n- name: AWS_SECRET_ACCESS_KEY\n  valueFrom:\n    secretKeyRef:\n      name: \n      key: \n```\n\n## DynamoDB Registry\n\nThe DynamoDB Registry can be used to store DNS record metadata. See the [DynamoDB Registry Tutorial](..\/registry\/dynamodb.md) for more information.\n\n## Clean up\n\nMake sure to delete all Service objects before terminating the cluster so all load balancers get cleaned up correctly.\n\n```bash\nkubectl delete service nginx\n```\n\n**IMPORTANT** If you attached a policy to the Node IAM Role, then you will want to detach this before deleting the EKS cluster.  Otherwise, the role resource will be locked, and the cluster cannot be deleted, especially if it was provisioned by automation like `terraform` or `eksctl`.\n\n```bash\naws iam detach-role-policy --role-name $NODE_ROLE_NAME --policy-arn $POLICY_ARN\n```\n\nIf the cluster was provisioned using `eksctl`, you can delete the cluster with:\n\n```bash\neksctl delete cluster --name $EKS_CLUSTER_NAME --region $EKS_CLUSTER_REGION\n```\n\nGive ExternalDNS some time to clean up the DNS records for you. 
Then delete the hosted zone if you created one for testing purposes.\n\n```bash\naws route53 delete-hosted-zone --id $ZONE_ID # e.g \/hostedzone\/ZEWFWZ4R16P7IB\n```\n\nIf IAM user credentials were used, you can remove the user with:\n\n```bash\naws iam detach-user-policy --user-name \"externaldns\" --policy-arn $POLICY_ARN\n\n# If static credentials were used\naws iam delete-access-key --user-name \"externaldns\" --access-key-id $ACCESS_KEY_ID\n\naws iam delete-user --user-name \"externaldns\"\n```\n\nIf IRSA was used, you can remove the IRSA role with:\n\n```bash\naws iam detach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN\naws iam delete-role --role-name $IRSA_ROLE\n```\n\nDelete any unneeded policies:\n\n```bash\naws iam delete-policy --policy-arn $POLICY_ARN\n```\n\n## Throttling\n\nRoute53 has a [5 API requests per second per account hard quota](https:\/\/docs.aws.amazon.com\/Route53\/latest\/DeveloperGuide\/DNSLimitations.html#limits-api-requests-route-53).\nRunning several fast-polling ExternalDNS instances in a given account can easily hit that limit. Some ways to reduce the request rate include:\n* Increase the polling loop's synchronization interval, at the possible cost of slower change propagation (but see `--events` below to reduce the impact).\n  * `--interval=5m` (default `1m`)\n* Enable a cache to store the zone records list. 
It comes with a cost: slower propagation when the zone gets modified from other sources such as the AWS console, Terraform, CloudFormation, or anything similar.\n  * `--provider-cache-time=15m` (default `0m`)\n* Trigger the polling loop on changes to K8s objects, rather than only at `interval`, and ensure a minimum amount of time between events, to have responsive updates with long poll intervals\n  * `--events`\n  * `--min-event-sync-interval=5m` (default `5s`)\n* Limit the [sources watched](https:\/\/github.com\/kubernetes-sigs\/external-dns\/blob\/master\/pkg\/apis\/externaldns\/types.go#L364) when the `--events` flag is specified to specific types, namespaces, labels, or annotations\n  * `--source=ingress --source=service` - specify multiple times for multiple sources\n  * `--namespace=my-app`\n  * `--label-filter=app in (my-app)`\n  * `--ingress-class=nginx-external`\n* Limit services watched by type (not applicable to ingress or other types)\n  * `--service-type-filter=LoadBalancer` (default `all`)\n* Limit the hosted zones considered\n  * `--zone-id-filter=ABCDEF12345678` - specify multiple times if needed\n  * `--domain-filter=example.com` by domain suffix - specify multiple times if needed\n  * `--regex-domain-filter=example*` by domain suffix but as a regex - overrides `domain-filter`\n  * `--exclude-domains=ignore.this.example.com` to exclude a domain or subdomain\n  * `--regex-domain-exclusion=ignore*` subtracts its matches from `regex-domain-filter`'s matches\n  * `--aws-zone-type=public` only sync zones of this type `[public|private]`\n  * `--aws-zone-tags=owner=k8s` only sync zones with this tag\n* If the list of zones managed by ExternalDNS doesn't change frequently, cache it by setting a TTL.\n  * `--aws-zones-cache-duration=3h` (default `0` - disabled)\n* Increase the number of changes applied to Route53 in each batch\n  * `--aws-batch-change-size=4000` (default `1000`)\n* Increase the interval between changes\n  * `--aws-batch-change-interval=10s` (default 
`1s`)\n* Introduce some jitter at pod initialization, so that when multiple instances of ExternalDNS are updated at the same time they do not make their requests in the same second.\n\nA simple way to implement randomised startup is with an init container:\n\n```yaml\n...\n    spec:\n      initContainers:\n      - name: init-jitter\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        command:\n        - \/bin\/sh\n        - -c\n        - 'FOR=$((RANDOM % 10))s;echo \"Sleeping for $FOR\";sleep $FOR'\n      containers:\n...\n```\n\n### EKS\n\nAn effective starting point for EKS with an ingress controller might look like:\n\n```bash\n--interval=5m\n--events\n--source=ingress\n--domain-filter=example.com\n--aws-zones-cache-duration=1h\n```\n\n### Batch size options\n\nAfter external-dns generates all changes, it groups those changes into batches. Each change is validated against the batch-change-size limits. If at least one of those limits is exceeded, the change will be moved to a separate batch. If the change can't fit into any batch - *it will be skipped.*<br>\nThere are 3 options to control batch size for the AWS provider:\n* Maximum amount of changes added to one batch\n  * `--aws-batch-change-size` (default `1000`)\n* Maximum size of changes in bytes added to one batch\n  * `--aws-batch-change-size-bytes` (default `32000`)\n* Maximum value count of changes added to one batch\n  * `--aws-batch-change-size-values` (default `1000`)\n\n`--aws-batch-change-size` can be very useful for throttling purposes and can be set to any value.\n\nDefault values for flags `--aws-batch-change-size-bytes` and `--aws-batch-change-size-values` are taken from [AWS documentation](https:\/\/docs.aws.amazon.com\/Route53\/latest\/DeveloperGuide\/DNSLimitations.html#limits-api-requests) for the Route53 API. 
**You should not change those values unless you really have to.** <br>\nBecause those limits are in place, `--aws-batch-change-size` can be set to any value: even if your batch size is `4000` records, your changes will be split into separate batches due to the bytes\/values size limits, and the apply request will finish without issues.
Kubernetes Service  https   aws amazon com eks    EKS  https   aws amazon com eks   cluster that is suitable for this tutorial   See  Getting started with Amazon EKS   eksctl  https   docs aws amazon com eks latest userguide getting started eksctl html        bash export EKS CLUSTER NAME  my externaldns cluster  export EKS CLUSTER REGION  us east 2  export KUBECONFIG   HOME  kube   EKS CLUSTER NAME    EKS CLUSTER REGION  yaml   eksctl create cluster   name  EKS CLUSTER NAME   region  EKS CLUSTER REGION      Feel free to use other provisioning tools or an existing cluster   If  Terraform  https   www terraform io   is used   vpc  https   registry terraform io modules terraform aws modules vpc aws   and  eks  https   registry terraform io modules terraform aws modules eks aws   modules are recommended for standing up an EKS cluster   Amazon has a workshop called  Amazon EKS Terraform Workshop  https   catalog us east 1 prod workshops aws workshops afee4679 89af 408b 8108 44f5b1065cc7   that may be useful for this process      Permissions to modify DNS zone  You will need to use the above policy  represented by the  POLICY ARN  environment variable  to allow ExternalDNS to update records in Route53 DNS zones  Here are three common ways this can be accomplished      Node IAM Role   node iam role     Static credentials   static credentials     IAM Roles for Service Accounts   iam roles for service accounts   For this tutorial  ExternalDNS will use the environment variable  EXTERNALDNS NS  to represent the namespace  defaulted to  default    Feel free to change this to something else  such  externaldns  or  kube addons    Make sure to edit the  subjects 0  namespace  for the  ClusterRoleBinding  resource when deploying ExternalDNS with RBAC enabled   See  Manifest  for clusters with RBAC enabled    manifest for clusteres with rbac enabled   for more information   Additionally  throughout this tutorial  the example domain of  example com  is used   Change this to 
appropriate domain under your control   See  Set up a hosted zone   set up a hosted zone  section       Node IAM Role  In this method  you can attach a policy to the Node IAM Role   This will allow nodes in the Kubernetes cluster to access Route53 zones  which allows ExternalDNS to update DNS records   Given that this allows all containers to access Route53  not just ExternalDNS  running on the node with these privileges  this method is not recommended  and is only suitable for limited test environments   If you are using eksctl to provision a new cluster  you add the policy at creation time with      bash eksctl create cluster   external dns access       name  EKS CLUSTER NAME   region  EKS CLUSTER REGION         warning    WARNING    This will assign allow read write access to all nodes in the cluster  not just ExternalDNS   For this reason  this method is only suitable for limited test environments   If you already provisioned a cluster or use other provisioning tools like Terraform  you can use AWS CLI to attach the policy to the Node IAM Role        Get the Node IAM role name  The role name of the role associated with the node s  where ExternalDNS will run is needed   An easy way to get the role name is to use the AWS web console  https   console aws amazon com eks    and find any instance in the target node group and copy the role name associated with that instance         Get role name with a single managed nodegroup  From the command line  if you have a single managed node group  the default with  eksctl create cluster   you can find the role name with the following      bash   get managed node group name  assuming there s only one node group  GROUP NAME   aws eks list nodegroups   cluster name  EKS CLUSTER NAME       query nodegroups   out text    fetch role arn given node group name ROLE ARN   aws eks describe nodegroup   cluster name  EKS CLUSTER NAME       nodegroup name  GROUP NAME   query nodegroup nodeRole   out text    extract just the name part of 
role arn ROLE NAME   NODE ROLE ARN                 Get role name with other configurations  If you have multiple node groups or any unmanaged node groups  the process gets more complex   The first step is to get the instance host name of the desired node to where ExternalDNS will be deployed or is already deployed      bash   node instance name of one of the external dns pods currently running INSTANCE NAME   kubectl get pods   all namespaces       selector app kubernetes io instance external dns       output jsonpath    items 0  spec nodeName       instance name of one of the nodes  change if node group is different  INSTANCE NAME   kubectl get nodes   output name   cut  d     f2   tail  1       With the instance host name  you can then get the instance id      bash get instance id       INSTANCE NAME  1   example  ip 192 168 74 34 us east 2 compute internal      get list of nodes     ip 192 168 74 34 us east 2 compute internal aws    us east 2a i xxxxxxxxxxxxxxxxx     ip 192 168 86 105 us east 2 compute internal aws    us east 2a i xxxxxxxxxxxxxxxxx   NODES   kubectl get nodes        output jsonpath   range  items      metadata name    t    spec providerID    n   end         print instance id from matching node   grep  INSTANCE NAME       NODES    cut  d     f5    INSTANCE ID   get instance id  INSTANCE NAME       With the instance id  you can get the associated role name      bash findRoleName       INSTANCE ID  1      get all of the roles   ROLES    aws iam list roles   query Roles    RoleName   out text     for ROLE in   ROLES      do       get instance profile arn     PROFILE ARN   aws iam list instance profiles for role           role name  ROLE   query InstanceProfiles 0  Arn   output text        if there is an instance profile     if      PROFILE ARN      None      then         get all the instances with this associated instance profile       INSTANCES   aws ec2 describe instances             filters Name iam instance profile arn Values  PROFILE ARN        
     query Reservations    Instances 0  InstanceId   out text          find instances that match the instant profile       for INSTANCE in   INSTANCES      do           set role name value if there is a match         if      INSTANCE ID       INSTANCE      then ROLE NAME  ROLE  fi       done     fi   done    echo  ROLE NAME    NODE ROLE NAME   findRoleName  INSTANCE ID       Using the role name  you can associate the policy that was created earlier      bash   attach policy arn created earlier to node IAM role aws iam attach role policy   role name  NODE ROLE NAME   policy arn  POLICY ARN       warning    WARNING    This will assign allow read write access to all pods running on the same node pool  not just the ExternalDNS pod s         Deploy ExternalDNS with attached policy to Node IAM Role  If ExternalDNS is not yet deployed  follow the steps under  Deploy ExternalDNS   deploy externaldns  using either RBAC or non RBAC     NOTE    Before deleting the cluster during  be sure to run  aws iam detach role policy    Otherwise  there can be errors as the provisioning system  such as  eksctl  or  terraform   will not be able to delete the roles with the attached policy       Static credentials  In this method  the policy is attached to an IAM user  and the credentials secrets for the IAM user are then made available using a Kubernetes secret   This method is not the preferred method as the secrets in the credential file could be copied and used by an unauthorized threat actor   However  if the Kubernetes cluster is not hosted on AWS  it may be the only method available   Given this situation  it is important to limit the associated privileges to just minimal required privileges  i e  read write access to Route53  and not used a credentials file that has extra privileges beyond what is required        Create IAM user and attach the policy     bash   create IAM user aws iam create user   user name  externaldns     attach policy arn created earlier to IAM user aws iam 
attach user policy   user name  externaldns    policy arn  POLICY ARN           Create the static credentials     bash SECRET ACCESS KEY   aws iam create access key   user name  externaldns   ACCESS KEY ID   echo  SECRET ACCESS KEY   jq  r   AccessKey AccessKeyId    cat    EOF   credentials   default  aws access key id     echo  ACCESS KEY ID  aws secret access key     echo  SECRET ACCESS KEY   jq  r   AccessKey SecretAccessKey   EOF           Create Kubernetes secret from credentials     bash kubectl create secret generic external dns       namespace   EXTERNALDNS NS   default     from file  local path to credentials           Deploy ExternalDNS using static credentials  Follow the steps under  Deploy ExternalDNS   deploy externaldns  using either RBAC or non RBAC   Make sure to uncomment the section that mounts volumes  so that the credentials can be mounted       TIP    By default ExternalDNS takes the profile named  default  from the credentials file  If you want to use a different    profile  you can set the environment variable  EXTERNAL DNS AWS PROFILE  to the desired profile name or use the       aws profile  command line argument  It is even possible to use more than one profile at ones  separated by space in   the environment variable  EXTERNAL DNS AWS PROFILE  or by using    aws profile  multiple times  In this case    ExternalDNS looks for the hosted zones in all profiles and keeps maintaining a mapping table between zone and profile    in order to be able to modify the zones in the correct profile       IAM Roles for Service Accounts   IRSA  https   docs aws amazon com eks latest userguide iam roles for service accounts html    IAM roles for Service Accounts  https   docs aws amazon com eks latest userguide iam roles for service accounts html   allows cluster operators to map AWS IAM Roles to Kubernetes Service Accounts   This essentially allows only ExternalDNS pods to access Route53 without exposing any static credentials   This is the preferred 
### IAM Roles for Service Accounts (IRSA)

[IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) allows cluster operators to map AWS IAM Roles to Kubernetes Service Accounts. This essentially allows only ExternalDNS pods to access Route53 without exposing any static credentials. This is the preferred method, as it implements [PoLP (Principle of Least Privilege)](https://csrc.nist.gov/glossary/term/principle_of_least_privilege).

!!! important
    This method requires using a KSA (Kubernetes service account) and RBAC. See [Manifest (for clusters with RBAC enabled)](#manifest-for-clusters-with-rbac-enabled) when ready to deploy ExternalDNS.

!!! note
    Similar methods to IRSA on AWS are [kiam](https://github.com/uswitch/kiam), which is in maintenance mode and has [instructions](https://github.com/uswitch/kiam/blob/HEAD/docs/IAM.md) for creating an IAM role, and also [kube2iam](https://github.com/jtblin/kube2iam). IRSA is the officially supported method for EKS clusters, so for non-EKS clusters on AWS these other tools could be an option.

#### Verify OIDC is supported

```bash
aws eks describe-cluster --name $EKS_CLUSTER_NAME \
  --query "cluster.identity.oidc.issuer" --output text
```

#### Associate OIDC to cluster

Configure the cluster with an OIDC provider and add support for [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html).

If you used `eksctl` to provision the EKS cluster, you can update it with the following command:

```bash
eksctl utils associate-iam-oidc-provider \
  --cluster $EKS_CLUSTER_NAME --approve
```

If the cluster was provisioned with Terraform, you can use the [`iam_openid_connect_provider` resource](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_openid_connect_provider) to associate to the OIDC provider.

#### Create an IAM role bound to a service account

For the next steps in this process, we will need to associate the `external-dns` service account and a role used to grant access to Route53. This requires the following steps:
1. Create a role with a trust relationship to the cluster's OIDC provider
2. Attach the `AllowExternalDNSUpdates` policy to the role
3. Create the `external-dns` service account
4. Add an annotation to the service account with the role arn

##### Use eksctl with eksctl created EKS cluster

If `eksctl` was used to provision the EKS cluster, you can perform all of these steps with the following command:

```bash
eksctl create iamserviceaccount \
  --cluster $EKS_CLUSTER_NAME \
  --name "external-dns" \
  --namespace ${EXTERNALDNS_NS:-"default"} \
  --attach-policy-arn $POLICY_ARN \
  --approve
```

##### Use aws cli with any EKS cluster

Otherwise, we can do the following steps using `aws` commands (also see [Creating an IAM role and policy for your service account](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html)):

```bash
ACCOUNT_ID=$(aws sts get-caller-identity \
  --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME \
  --query "cluster.identity.oidc.issuer" --output text | sed -e 's|^https://||')

cat <<-EOF > trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_PROVIDER"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$OIDC_PROVIDER:sub": "system:serviceaccount:${EXTERNALDNS_NS:-"default"}:external-dns",
          "$OIDC_PROVIDER:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF

IRSA_ROLE="external-dns-irsa-role"
aws iam create-role --role-name $IRSA_ROLE --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN

ROLE_ARN=$(aws iam get-role --role-name $IRSA_ROLE --query Role.Arn --output text)
```
```bash
# Create service account (skip if already created)
kubectl create serviceaccount "external-dns" --namespace ${EXTERNALDNS_NS:-"default"}

# Add annotation referencing IRSA role
kubectl patch serviceaccount "external-dns" --namespace ${EXTERNALDNS_NS:-"default"} \
  --patch "{\"metadata\":{\"annotations\":{\"eks.amazonaws.com/role-arn\":\"$ROLE_ARN\"}}}"
```

If any part of this step is misconfigured, such as the role having an incorrect namespace configured in the trust relationship, the annotation pointing to the wrong role, etc., you will see errors like `WebIdentityErr: failed to retrieve credentials`. Check the configuration and make corrections.

When the service account annotations are updated, the currently running pods will have to be terminated, so that new pod(s) with the proper configuration (environment variables) will be created automatically.

When the annotation is added to the service account, the ExternalDNS pod(s) scheduled will have the `AWS_ROLE_ARN`, `AWS_STS_REGIONAL_ENDPOINTS`, and `AWS_WEB_IDENTITY_TOKEN_FILE` environment variables injected automatically.

#### Deploy ExternalDNS using IRSA

Follow the steps under [Manifest (for clusters with RBAC enabled)](#manifest-for-clusters-with-rbac-enabled). Make sure to comment out the service account section if this has been created already.

If you deployed ExternalDNS before adding the service account annotation and the corresponding role, you will likely see an error with `failed to list hosted zones: AccessDenied: User:`. You can delete the currently running ExternalDNS pod(s) after updating the annotation, so that new pods scheduled will have the appropriate configuration to access Route53.
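The annotated service account created imperatively above could equivalently be declared as a manifest (a sketch; the account ID and role name in the ARN are placeholders for your own values):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: default  # or the value of $EXTERNALDNS_NS
  annotations:
    # placeholder ARN; substitute your account ID and IRSA role name
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/external-dns-irsa-role
```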
## Set up a hosted zone

*If you prefer to try out ExternalDNS in one of the existing hosted zones you can skip this step.*

Create a DNS zone which will contain the managed DNS records. This tutorial will use the fictional domain of `example.com`.

```bash
aws route53 create-hosted-zone --name "example.com." \
  --caller-reference "external-dns-test-$(date +%s)"
```

Make a note of the nameservers that were assigned to your new zone.

```bash
ZONE_ID=$(aws route53 list-hosted-zones-by-name --output json \
  --dns-name "example.com." --query HostedZones[0].Id --out text)

aws route53 list-resource-record-sets --output text \
  --hosted-zone-id $ZONE_ID --query \
  "ResourceRecordSets[?Type == 'NS'].ResourceRecords[*].Value | []" | tr '\t' '\n'
```

This should yield something similar to this:

```
ns-695.awsdns-22.net.
ns-1313.awsdns-36.org.
ns-350.awsdns-43.com.
ns-1805.awsdns-33.co.uk.
```

If using your own domain that was registered with a third-party domain registrar, you should point your domain's name servers to the values in the list above. Please consult your registrar's documentation on how to do that.

## Deploy ExternalDNS

Connect your `kubectl` client to the cluster you want to test ExternalDNS with. Then apply one of the following manifest files to deploy ExternalDNS. You can check if your cluster has RBAC with `kubectl api-versions | grep rbac.authorization.k8s.io`.

For clusters with RBAC enabled, be sure to choose the correct `namespace`. For this tutorial, the environment variable `EXTERNALDNS_NS` will refer to the namespace. You can set this to a value of your choice:

```bash
export EXTERNALDNS_NS="default" # externaldns, kube-addons, etc

# create namespace if it does not yet exist
kubectl get namespaces | grep -q $EXTERNALDNS_NS || \
  kubectl create namespace $EXTERNALDNS_NS
```

### Using Helm (with OIDC)

Create a `values.yaml` file to configure ExternalDNS:

```yaml
provider:
  name: aws
env:
  - name: AWS_DEFAULT_REGION
    value: us-east-1 # change to region where EKS is installed
```

Finally, install the ExternalDNS chart with Helm using the configuration specified in your `values.yaml` file:

```shell
helm upgrade --install external-dns external-dns/external-dns --values values.yaml
```
### When using clusters without RBAC enabled

Save the following below as `externaldns-no-rbac.yaml`.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: external-dns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-dns
    spec:
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.15.0
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=example.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
            - --provider=aws
            - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
            - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
            - --registry=txt
            - --txt-owner-id=my-hostedzone-identifier
          env:
            - name: AWS_DEFAULT_REGION
              value: us-east-1 # change to region where EKS is installed
          # Uncomment below if using static credentials
          #   - name: AWS_SHARED_CREDENTIALS_FILE
          #     value: /.aws/credentials
          # volumeMounts:
          #   - name: aws-credentials
          #     mountPath: /.aws
          #     readOnly: true
      # volumes:
      #   - name: aws-credentials
      #     secret:
      #       secretName: external-dns
```

When ready you can deploy:

```bash
kubectl create --filename externaldns-no-rbac.yaml \
  --namespace ${EXTERNALDNS_NS:-"default"}
```

### When using clusters with RBAC enabled

If you're using EKS, you can update the `values.yaml` file you created earlier to include the annotations to link the Role ARN you created before:
```yaml
provider:
  name: aws
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::${ACCOUNT_ID}:role/${EXTERNALDNS_ROLE_NAME} # e.g. external-dns
```

If you need to provide credentials directly using a secret (i.e. you're not using EKS), you can change the `values.yaml` file to include volumes and volume mounts:

```yaml
provider:
  name: aws
env:
  - name: AWS_SHARED_CREDENTIALS_FILE
    value: /etc/aws/credentials/my_credentials
extraVolumes:
  - name: aws-credentials
    secret:
      secretName: external-dns # In this example, the secret will have the data stored in a key named `my_credentials`
extraVolumeMounts:
  - name: aws-credentials
    mountPath: /etc/aws/credentials
    readOnly: true
```

When ready, update your Helm installation:

```shell
helm upgrade --install external-dns external-dns/external-dns --values values.yaml
```

## Arguments

This list is not the full list, but a few arguments that were chosen.

### aws-zone-type

`aws-zone-type` allows filtering for private and public zones.

## Annotations

Annotations which are specific to AWS.

### alias

`external-dns.alpha.kubernetes.io/alias` if set to `true` on an ingress, it will create an ALIAS record when the target is an ALIAS as well. To make the target an alias, the ingress needs to be configured correctly as described in [the docs](./gke-nginx.md#with-a-separate-tcp-load-balancer). In particular, the argument `--publish-service=default/nginx-ingress-controller` has to be set on the `nginx-ingress-controller` container. If one uses the `nginx-ingress` Helm chart, this flag can be set with the `controller.publishService.enabled` configuration option.

### target-hosted-zone

`external-dns.alpha.kubernetes.io/aws-target-hosted-zone` can optionally be set to the ID of a Route53 hosted zone. This will force external-dns to use the specified hosted zone when creating an ALIAS target.
### aws-zone-match-parent

`aws-zone-match-parent` allows supporting subdomains within the same zone by using their parent domain, i.e. `--domain-filter=x.example.com` would create a DNS entry for `x.example.com` (and subdomains thereof).

```yaml
# hosted zone domain: example.com
args:
  - --domain-filter=x.example.com,example.com
  - --aws-zone-match-parent
```

## Verify ExternalDNS works (Service example)

Create the following sample application to test that ExternalDNS works.

For services ExternalDNS will look for the annotation `external-dns.alpha.kubernetes.io/hostname` on the service and use the corresponding value. If you want to give multiple names to the service, you can set `external-dns.alpha.kubernetes.io/hostname` to a list of names with a comma separator.

For this verification phase, you can use the default or another namespace for the nginx demo, for example:

```bash
NGINXDEMO_NS="nginx"
kubectl get namespaces | grep -q $NGINXDEMO_NS || kubectl create namespace $NGINXDEMO_NS
```

Save the following manifest below as `nginx.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
              name: http
```

Deploy the nginx deployment and service with:

```bash
kubectl create --filename nginx.yaml --namespace ${NGINXDEMO_NS:-"default"}
```

Verify that the load balancer was allocated with:

```bash
kubectl get service nginx --namespace ${NGINXDEMO_NS:-"default"}
```

This should show something like:

```bash
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP                                                                   PORT(S)        AGE
nginx   LoadBalancer   10.100.47.41   ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com    80:32749/TCP   12m
```
After roughly two minutes, check that a corresponding DNS record for your service was created:

```bash
aws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \
  --query "ResourceRecordSets[?Name == 'nginx.example.com.']|[?Type == 'A']"
```

This should show something like:

```json
[
    {
        "Name": "nginx.example.com.",
        "Type": "A",
        "AliasTarget": {
            "HostedZoneId": "ZEWFWZ4R16P7IB",
            "DNSName": "ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.",
            "EvaluateTargetHealth": true
        }
    }
]
```

You can also fetch the corresponding text records:

```bash
aws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \
  --query "ResourceRecordSets[?Name == 'nginx.example.com.']|[?Type == 'TXT']"
```

This will show something like:

```json
[
    {
        "Name": "nginx.example.com.",
        "Type": "TXT",
        "TTL": 300,
        "ResourceRecords": [
            {
                "Value": "\"heritage=external-dns,external-dns/owner=external-dns,external-dns/resource=service/default/nginx\""
            }
        ]
    }
]
```

Note the created TXT record alongside the ALIAS record. The TXT record signifies that the corresponding ALIAS record is managed by ExternalDNS. This makes ExternalDNS safe for running in environments where there are other records managed via other means.

For more information about ALIAS records, see [Choosing between alias and non-alias records](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html).

Let's check that we can resolve this DNS name. We'll ask the nameservers assigned to your zone first:

```bash
dig +short @ns-5514.awsdns-53.org. nginx.example.com.
```

This should return 1+ IP addresses that correspond to the ELB FQDN, i.e. `ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.`.

Next try the public nameservers configured by the DNS client on your system:
```bash
dig +short nginx.example.com.
```

If you hooked up your DNS zone with its parent zone correctly, you can use `curl` to access your site:

```bash
curl nginx.example.com
```

This should show something like:

```html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</head>
<body>
<h1>Welcome to nginx!</h1>
...
</body>
</html>
```

## Verify ExternalDNS works (Ingress example)

With the previous `deployment` and `service` objects deployed, we can add an `ingress` object and configure an FQDN value for the `host` key. The ingress controller will match incoming HTTP traffic and route it to the appropriate backend service based on the `host` key.

For ingress objects ExternalDNS will create a DNS record based on the host specified for the ingress object.

For this tutorial, we have two endpoints: the service with `LoadBalancer` type and an ingress. For practical purposes, if an ingress is used, the service type can be changed to `ClusterIP`, as two endpoints are unnecessary in this scenario.

!!! important
    This requires that an ingress controller has been installed in your Kubernetes cluster. EKS does not come with an ingress controller by default. A popular ingress controller is [ingress-nginx](https://github.com/kubernetes/ingress-nginx), which can be installed by a [helm chart](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx) or by [manifests](https://kubernetes.github.io/ingress-nginx/deploy/#aws).

Create an ingress resource manifest file named `ingress.yaml` with the contents below:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx
  rules:
    - host: server.example.com
      http:
        paths:
          - backend:
              service:
                name: nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
```

When ready, you can deploy this with:
```bash
kubectl create --filename ingress.yaml --namespace ${NGINXDEMO_NS:-"default"}
```

Watch the status of the ingress until the ADDRESS field is populated:

```bash
kubectl get ingress --watch --namespace ${NGINXDEMO_NS:-"default"}
```

You should see something like this:

```
NAME    CLASS    HOSTS                ADDRESS                                                                       PORTS   AGE
nginx   <none>   server.example.com                                                                                 80      47s
nginx   <none>   server.example.com   ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com    80      54s
```

For the ingress test, run through similar checks, but using the domain name used for the ingress:

```bash
# check records on route53
aws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \
  --query "ResourceRecordSets[?Name == 'server.example.com.']"

# query using a route53 name server
dig +short @ns-5514.awsdns-53.org. server.example.com.
# query using the default name server
dig +short server.example.com.

# connect to the nginx web server through the ingress
curl server.example.com
```

## More service annotation options

### Custom TTL

The default DNS record TTL (Time-To-Live) is 300 seconds. You can customize this value by setting the annotation `external-dns.alpha.kubernetes.io/ttl`, e.g., modify the service manifest YAML file above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
    external-dns.alpha.kubernetes.io/ttl: "60"
spec:
  ...
```

This will set the DNS record's TTL to 60 seconds.

### Routing policies

Route53 offers [different routing policies](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html). The routing policy for a record can be controlled with the following annotations:

- `external-dns.alpha.kubernetes.io/set-identifier`: this **needs** to be set to use any of the following routing policies

For any given DNS name, only **one** of the following routing policies can be used:

- Weighted records: `external-dns.alpha.kubernetes.io/aws-weight`
- Latency-based routing: `external-dns.alpha.kubernetes.io/aws-region`
- Failover: `external-dns.alpha.kubernetes.io/aws-failover`
- Geolocation-based routing:
  - `external-dns.alpha.kubernetes.io/aws-geolocation-continent-code`
  - `external-dns.alpha.kubernetes.io/aws-geolocation-country-code`
  - `external-dns.alpha.kubernetes.io/aws-geolocation-subdivision-code`
- Multi-value answer: `external-dns.alpha.kubernetes.io/aws-multi-value-answer`
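As an illustration (a sketch, not from the original tutorial), a weighted record for the nginx service above might combine the set-identifier and weight annotations like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
    # "blue" is an illustrative set identifier; the weight value is arbitrary
    external-dns.alpha.kubernetes.io/set-identifier: blue
    external-dns.alpha.kubernetes.io/aws-weight: "100"
spec:
  ...
```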
### Associating DNS records with healthchecks

You can configure Route53 to associate DNS records with healthchecks for automated DNS failover using the `external-dns.alpha.kubernetes.io/aws-health-check-id: <health-check-id>` annotation.

Note: ExternalDNS does not support creating healthchecks, and assumes that `<health-check-id>` already exists.

## Canonical Hosted Zones

When creating ALIAS type records in Route53 it is required that external-dns be aware of the canonical hosted zone in which the specified hostname is created. External-dns is able to automatically identify the canonical hosted zone for many hostnames based upon known hostname suffixes, which are defined in [aws.go](https://github.com/kubernetes-sigs/external-dns/blob/master/provider/aws/aws.go#L65). If a hostname does not have a known suffix, then the suffix can be added into `aws.go` or the [target-hosted-zone annotation](#target-hosted-zone) can be used to manually define the ID of the canonical hosted zone.

## Govcloud caveats

Due to the special nature of how Route53 runs in Govcloud, there are a few tweaks in the deployment settings.

- An environment variable with the name `AWS_REGION` set to either `us-gov-west-1` or `us-gov-east-1` is required. Otherwise it tries to look up a region that does not exist in Govcloud and it errors out.

```yaml
env:
  - name: AWS_REGION
    value: us-gov-west-1
```

- Route53 in Govcloud does not allow aliases. Therefore, container args must be set so that it uses CNAMEs, and a txt-prefix must be set to something. Otherwise, it will try to create a TXT record with the same value as the CNAME itself, which is not allowed.
```yaml
args:
  - --aws-prefer-cname
  - --txt-prefix=<prefix>
```

- The first two changes are needed if you use Route53 in Govcloud, which only supports private zones. There are also no cross-account IAM capabilities whatsoever between Govcloud and commercial AWS accounts. If services and ingresses need to make Route53 entries to a public zone in a commercial account, you will have to set the env variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` with a key and secret to the commercial account that has sufficient rights.

```yaml
env:
  - name: AWS_ACCESS_KEY_ID
    value: XXXXXXXXX
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: # name of the secret
        key:  # key within the secret
```

## DynamoDB Registry

The DynamoDB Registry can be used to store DNS record metadata. See the [DynamoDB Registry Tutorial](../registry/dynamodb.md) for more information.

## Clean up

Make sure to delete all Service objects before terminating the cluster so all load balancers get cleaned up correctly.

```bash
kubectl delete service nginx
```

!!! important
    If you attached a policy to the Node IAM Role, then you will want to detach this before deleting the EKS cluster. Otherwise, the role resource will be locked, and the cluster cannot be deleted, especially if it was provisioned by automation like `terraform` or `eksctl`.

```bash
aws iam detach-role-policy --role-name $NODE_ROLE_NAME --policy-arn $POLICY_ARN
```

If the cluster was provisioned using `eksctl`, you can delete the cluster with:

```bash
eksctl delete cluster --name $EKS_CLUSTER_NAME --region $EKS_CLUSTER_REGION
```

Give ExternalDNS some time to clean up the DNS records for you. Then delete the hosted zone if you created one for testing purposes:

```bash
aws route53 delete-hosted-zone --id $ZONE_ID # e.g. /hostedzone/ZEWFWZ4R16P7IB
```
If IAM user credentials were used, you can remove the user with:

```bash
aws iam detach-user-policy --user-name "externaldns" --policy-arn $POLICY_ARN

# If static credentials were used
aws iam delete-access-key --user-name "externaldns" --access-key-id $ACCESS_KEY_ID
aws iam delete-user --user-name "externaldns"
```

If IRSA was used, you can remove the IRSA role with:

```bash
aws iam detach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN
aws iam delete-role --role-name $IRSA_ROLE
```

Delete any unneeded policies:

```bash
aws iam delete-policy --policy-arn $POLICY_ARN
```

## Throttling

Route53 has a [5 API requests per second per account hard quota](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests-route-53). Running several fast-polling ExternalDNS instances in a given account can easily hit that limit. Some ways to reduce the request rate include:

- Reduce the polling loop's synchronization interval at the possible cost of slower change propagation (but see `--events` below to reduce the impact).
  - `--interval=5m` (default: `1m`)
- Enable a cache to store the zone records list. It comes with a cost: slower propagation when the zone gets modified from other sources such as the AWS console, terraform, cloudformation or anything similar.
  - `--provider-cache-time=15m` (default: `0m`)
- Trigger the polling loop on changes to K8s objects, rather than only at `interval`, and ensure a minimum of time between events, to have responsive updates with long poll intervals.
  - `--events`
  - `--min-event-sync-interval=5m` (default: `5s`)
- Limit the [sources watched](https://github.com/kubernetes-sigs/external-dns/blob/master/pkg/apis/externaldns/types.go#L364) when the `--events` flag is specified to specific types, namespaces, labels, or annotations.
  - `--source=ingress --source=service` — specify multiple times for multiple sources
  - `--namespace=my-app`
  - `--label-filter=app in (my-app)`
  - `--ingress-class=nginx-external`
- Limit the services watched by type (not applicable to ingress or other types).
  - `--service-type-filter=LoadBalancer` (default: all)
- Limit the hosted zones considered.
  - `--zone-id-filter=ABCDEF12345678` — specify multiple times if needed
  - `--domain-filter=example.com` by domain suffix — specify multiple times if needed
  - `--regex-domain-filter=example` by domain suffix but as a regex — overrides `domain-filter`
  - `--exclude-domains=ignore.this.example.com` to exclude a domain or subdomain
  - `--regex-domain-exclusion=ignore` subtracts its matches from `regex-domain-filter`'s matches
  - `--aws-zone-type=public` only sync zones of this type (`public`|`private`)
  - `--aws-zone-tags=owner=k8s` only sync zones with this tag
- If the list of zones managed by ExternalDNS doesn't change frequently, cache it by setting a TTL.
  - `--aws-zones-cache-duration=3h` (default: `0` — disabled)
- Increase the number of changes applied to Route53 in each batch.
  - `--aws-batch-change-size=4000` (default: `1000`)
- Increase the interval between changes.
  - `--aws-batch-change-interval=10s` (default: `1s`)
- Introduce some jitter to the pod initialization, so that when multiple instances of ExternalDNS are updated at the same time they do not make their requests on the same second.

A simple way to implement randomised startup is with an init container:

```yaml
...
    spec:
      initContainers:
      - name: init-jitter
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        command:
        - /bin/sh
        - -c
        - 'FOR=$((RANDOM % 10))s;echo "Sleeping for $FOR";sleep $FOR'
      containers:
...
```

### EKS

An effective starting point for EKS with an ingress controller might look like:

```bash
--interval=5m
--events
--source=ingress
--domain-filter=example.com
--aws-zones-cache-duration=1h
```

### Batch size options

After external-dns generates all changes, it will perform a task to group those changes into batches. Each change will be validated against the batch-change-size limits. If at least one of those parameters is out of range, the change will be moved to a separate batch. If the change can't fit into any batch, it will be skipped.
There are 3 options to control batch size for the AWS provider:

- Maximum amount of changes added to one batch
  - `--aws-batch-change-size` (default: `1000`)
- Maximum size of changes in bytes added to one batch
  - `--aws-batch-change-size-bytes` (default: `32000`)
- Maximum value count of changes added to one batch
  - `--aws-batch-change-size-values` (default: `1000`)

`aws-batch-change-size` can be very useful for throttling purposes and can be set to any value.

Default values for the flags `aws-batch-change-size-bytes` and `aws-batch-change-size-values` are taken from the [AWS documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests) for the Route53 API. You should not change those values unless you really have to.

Because those limits are in place, `aws-batch-change-size` can be set to any value: even if your batch size is `4000` records, your change will be split into separate batches due to the byte and value size limits, and the apply request will finish without issues.
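For example, throttling via a smaller batch size while keeping the documented byte/value limits could be expressed as container args (a sketch; `50` is an arbitrary illustrative value):

```yaml
args:
  - --aws-batch-change-size=50            # arbitrary small batch for throttling
  - --aws-batch-change-size-bytes=32000   # default, per the Route53 API limits
  - --aws-batch-change-size-values=1000   # default, per the Route53 API limits
```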
{"questions":"external-dns RFC2136 provider This tutorial describes how to use the RFC2136 with either BIND or Windows DNS deployment of external dns To use external dns with BIND generate procure a key configure DNS and add a Using with BIND Server credentials","answers":"# RFC2136 provider\n\nThis tutorial describes how to use the RFC2136 with either BIND or Windows DNS.\n\n## Using with BIND\n\nTo use external-dns with BIND: generate\/procure a key, configure DNS and add a\ndeployment of external-dns.\n\n### Server credentials:\n\n- RFC2136 was developed for and tested with [BIND](https:\/\/www.isc.org\/downloads\/bind\/) DNS server.\nThis documentation assumes that you already have a configured and working server. If you don't,\nplease check BIND documents or tutorials.\n- If your DNS is provided for you, ask for a TSIG key authorized to update and\ntransfer the zone you wish to update. The key will look something like below.\nSkip the next steps wrt BIND setup.\n\n```text\nkey \"externaldns-key\" {\n\talgorithm hmac-sha256;\n\tsecret \"96Ah\/a2g0\/nLeFGK+d\/0tzQcccf9hCEIy34PoXX2Qg8=\";\n};\n```\n- If you are your own DNS administrator create a TSIG key. Use\n`tsig-keygen -a hmac-sha256 externaldns` or on older distributions\n`dnssec-keygen -a HMAC-SHA256 -b 256 -n HOST externaldns`. You will end up with\na key printed to standard out like above (or in the case of dnssec-keygen in a\nfile called `Kexternaldns......key`).\n\n### BIND Configuration:\n\nIf you do not administer your own DNS, skip to RFC provider configuration\n\n- Edit your named.conf file (or appropriate included file) and add\/change the\nfollowing.\n  - Make sure You are listening on the right interfaces. At least whatever\n  interface external-dns will be communicating over and the interface that\n  faces the internet.\n  - Add the key that you generated\/was given to you above. 
Copy paste the four\n  lines that you got (not the same as the example key) into your file.\n  - Create a zone for kubernetes. If you already have a zone, skip to the next\n  step. (I put the zone in it's own subdirectory because named,\n  which shouldn't be running as root, needs to create a journal file and the\n  default zone directory isn't writeable by named).\n  ```text\n  zone \"k8s.example.org\" {\n      type master;\n      file \"\/etc\/bind\/pri\/k8s\/k8s.zone\";\n  };\n  ```\n  - Add your key to both transfer and update. For instance with our previous\n  zone.\n  ```text\n  zone \"k8s.example.org\" {\n      type master;\n      file \"\/etc\/bind\/pri\/k8s\/k8s.zone\";\n      allow-transfer {\n          key \"externaldns-key\";\n      };\n      update-policy {\n          grant externaldns-key zonesub ANY;\n      };\n  };\n  ```\n  - Create a zone file (k8s.zone):\n  ```text\n  $TTL 60 ; 1 minute\n  k8s.example.org         IN SOA  k8s.example.org. root.k8s.example.org. (\n                                  16         ; serial\n                                  60         ; refresh (1 minute)\n                                  60         ; retry (1 minute)\n                                  60         ; expire (1 minute)\n                                  60         ; minimum (1 minute)\n                                  )\n                          NS      ns.k8s.example.org.\n  ns                      A       123.456.789.012\n  ```\n  - Reload (or restart) named\n\n\n### Using external-dns\n\nTo use external-dns add an ingress or a LoadBalancer service with a host that\nis part of the domain-filter. 
For example, both of the following would produce\nA records.\n\n```text\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: svc.example.org\nspec:\n  type: LoadBalancer\n  ports:\n  - port: 80\n    targetPort: 80\n  selector:\n    app: nginx\n---\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: my-ingress\nspec:\n  rules:\n  - host: ingress.example.org\n    http:\n      paths:\n      - path: \/\n        pathType: Prefix\n        backend:\n          service:\n            name: my-service\n            port:\n              number: 8000\n```\n\n### Custom TTL\n\nThe default DNS record TTL (Time-To-Live) is 0 seconds. You can customize this value by setting the annotation `external-dns.alpha.kubernetes.io\/ttl`. e.g., modify the service manifest YAML file above:\n\n```\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: nginx.external-dns-test.my-org.com\n    external-dns.alpha.kubernetes.io\/ttl: \"60\"\nspec:\n    ...\n```\n\nThis will set the DNS record's TTL to 60 seconds.\n\nA default TTL for all records can be set using the `--rfc2136-min-ttl` flag with a time in seconds, minutes or hours, such as `--rfc2136-min-ttl=60s`.\n\nThere are other annotations that can affect the generation of DNS records, but these are beyond the scope of this\ntutorial and are covered in the main documentation.\n\n### Generate reverse DNS records\n\nIf you want to generate reverse DNS records for your services, you have to enable the functionality using the `--rfc2136-create-ptr`\nflag. 
You also have to add the zone to the list of zones managed by ExternalDNS via the `--rfc2136-zone` and `--domain-filter` flags.\nAn example of a valid configuration is the following:\n\n```--domain-filter=157.168.192.in-addr.arpa --rfc2136-zone=157.168.192.in-addr.arpa```\n\nPTR record tracking is managed via the A\/AAAA record, so you can't create PTR records for already-generated A\/AAAA records.\n\n### Test with external-dns installed on a local machine (optional)\nYou may install external-dns and test on a local machine by running:\n\n```\nexternal-dns --txt-owner-id k8s --provider rfc2136 --rfc2136-host=192.168.0.1 --rfc2136-port=53 --rfc2136-zone=k8s.example.org --rfc2136-tsig-secret=96Ah\/a2g0\/nLeFGK+d\/0tzQcccf9hCEIy34PoXX2Qg8= --rfc2136-tsig-secret-alg=hmac-sha256 --rfc2136-tsig-keyname=externaldns-key --rfc2136-tsig-axfr --source ingress --once --domain-filter=k8s.example.org --dry-run\n```\n\n- host should be the IP of your master DNS server.\n- tsig-secret should be changed to match your secret.\n- tsig-keyname needs to match the keyname you used (if you changed it).\n- domain-filter can be used as shown to filter the domains you wish to update.\n\n### RFC2136 provider configuration:\nIn order to use external-dns with your cluster, you need to add a deployment\nwith access to your ingress and service resources. 
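\n\nThe example manifests below pass the TSIG secret directly as a command-line flag. If you would rather keep it out of the pod spec, external-dns flags can generally also be supplied through environment variables prefixed with `EXTERNAL_DNS_` (an assumption to verify against your external-dns version), so the secret can live in a Kubernetes Secret. A sketch, with an illustrative Secret name:\n\n```text\napiVersion: v1\nkind: Secret\nmetadata:\n  name: externaldns-tsig\n  namespace: external-dns\nstringData:\n  EXTERNAL_DNS_RFC2136_TSIG_SECRET: 96Ah\/a2g0\/nLeFGK+d\/0tzQcccf9hCEIy34PoXX2Qg8=\n```\n\nThe container spec would then drop the `--rfc2136-tsig-secret` flag and add:\n\n```text\n        envFrom:\n        - secretRef:\n            name: externaldns-tsig\n```\n\n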
The following are two\nexample manifests, with and without RBAC respectively.\n\n- With RBAC:\n```text\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: external-dns\n  labels:\n    name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups:\n  - \"\"\n  resources:\n  - services\n  - endpoints\n  - pods\n  - nodes\n  verbs:\n  - get\n  - watch\n  - list\n- apiGroups:\n  - extensions\n  - networking.k8s.io\n  resources:\n  - ingresses\n  verbs:\n  - get\n  - list\n  - watch\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n  namespace: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: external-dns\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\n  namespace: external-dns\nspec:\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --registry=txt\n        - --txt-prefix=external-dns-\n        - --txt-owner-id=k8s\n        - --provider=rfc2136\n        - --rfc2136-host=192.168.0.1\n        - --rfc2136-port=53\n        - --rfc2136-zone=k8s.example.org\n        - --rfc2136-zone=k8s.your-zone.org\n        - --rfc2136-tsig-secret=96Ah\/a2g0\/nLeFGK+d\/0tzQcccf9hCEIy34PoXX2Qg8=\n        - --rfc2136-tsig-secret-alg=hmac-sha256\n        - --rfc2136-tsig-keyname=externaldns-key\n        - --rfc2136-tsig-axfr\n        - --source=ingress\n        - --domain-filter=k8s.example.org\n```\n\n- 
Without RBAC:\n```text\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: external-dns\n  labels:\n    name: external-dns\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\n  namespace: external-dns\nspec:\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --registry=txt\n        - --txt-prefix=external-dns-\n        - --txt-owner-id=k8s\n        - --provider=rfc2136\n        - --rfc2136-host=192.168.0.1\n        - --rfc2136-port=53\n        - --rfc2136-zone=k8s.example.org\n        - --rfc2136-zone=k8s.your-zone.org\n        - --rfc2136-tsig-secret=96Ah\/a2g0\/nLeFGK+d\/0tzQcccf9hCEIy34PoXX2Qg8=\n        - --rfc2136-tsig-secret-alg=hmac-sha256\n        - --rfc2136-tsig-keyname=externaldns-key\n        - --rfc2136-tsig-axfr\n        - --source=ingress\n        - --domain-filter=k8s.example.org\n```\n\n## Microsoft DNS (Insecure Updates)\n\nWhile `external-dns` was not developed or tested against Microsoft DNS, it can be configured to work against it. YMMV.\n\n### Insecure Updates\n\n#### DNS-side configuration\n\n1. Create a DNS zone\n2. Enable insecure dynamic updates for the zone\n3. Enable Zone Transfers to all servers\n\n#### `external-dns` configuration\n\nYou'll want to configure `external-dns` similarly to the following:\n\n```text\n...\n        - --provider=rfc2136\n        - --rfc2136-host=192.168.0.1\n        - --rfc2136-port=53\n        - --rfc2136-zone=k8s.example.org\n        - --rfc2136-zone=k8s.your-zone.org\n        - --rfc2136-insecure\n        - --rfc2136-tsig-axfr # needed to enable zone transfers, which is required for deletion of records.\n...\n```\n\n### Secure Updates Using RFC3645 (GSS-TSIG)\n\n#### DNS-side configuration\n\n1. Create a DNS zone\n2. 
Enable secure dynamic updates for the zone\n3. Enable Zone Transfers to all servers\n\nIf you see any error messages which indicate that `external-dns` was somehow not able to fetch\nexisting DNS records from your DNS server, this could mean that you forgot about step 3.\n\n##### Kerberos Configuration\n\nDNS with secure updates relies upon a valid Kerberos configuration running within the `external-dns` container.  At this time, you will need to create a ConfigMap for the `external-dns` container to use and mount it in your deployment.  Below is an example of a working Kerberos configuration inside a ConfigMap definition.  This may be different depending on many factors in your environment:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: krb5.conf\ndata:\n  krb5.conf: |\n    [logging]\n    default = FILE:\/var\/log\/krb5libs.log\n    kdc = FILE:\/var\/log\/krb5kdc.log\n    admin_server = FILE:\/var\/log\/kadmind.log\n\n    [libdefaults]\n    dns_lookup_realm = false\n    ticket_lifetime = 24h\n    renew_lifetime = 7d\n    forwardable = true\n    rdns = false\n    pkinit_anchors = \/etc\/pki\/tls\/certs\/ca-bundle.crt\n    default_ccache_name = KEYRING:persistent:%{uid}\n\n    default_realm = YOUR-REALM.COM\n\n    [realms]\n    YOUR-REALM.COM = {\n      kdc = dc1.yourdomain.com\n      admin_server = dc1.yourdomain.com\n    }\n\n    [domain_realm]\n    yourdomain.com = YOUR-REALM.COM\n    .yourdomain.com = YOUR-REALM.COM\n```\nIn most cases, the realm name will probably be the same as the domain name, so you can simply replace `YOUR-REALM.COM` with something like `YOURDOMAIN.COM`.\n\nOnce the ConfigMap is created, the `external-dns` container needs to be told to mount that ConfigMap as a volume at the default Kerberos configuration location.  
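\n\nOnce the volume mount is in place and the pod is running, you can confirm that the file actually landed at the default location (assuming the deployment is named `external-dns` in the `external-dns` namespace):\n\n```text\nkubectl exec -n external-dns deploy\/external-dns -- cat \/etc\/krb5.conf\n```\n\n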
The pod spec should include a similar configuration to the following:\n\n```yaml\n...\n    volumeMounts:\n    - mountPath: \/etc\/krb5.conf\n      name: kerberos-config-volume\n      subPath: krb5.conf\n...\n  volumes:\n  - configMap:\n      defaultMode: 420\n      name: krb5.conf\n    name: kerberos-config-volume\n...\n```\n\n##### `external-dns` configuration\n\nYou'll want to configure `external-dns` similarly to the following:\n\n```text\n...\n        - --provider=rfc2136\n        - --rfc2136-gss-tsig\n        - --rfc2136-host=dns-host.yourdomain.com\n        - --rfc2136-port=53\n        - --rfc2136-zone=your-zone.com\n        - --rfc2136-zone=your-secondary-zone.com\n        - --rfc2136-kerberos-username=your-domain-account\n        - --rfc2136-kerberos-password=your-domain-password\n        - --rfc2136-kerberos-realm=your-domain.com\n        - --rfc2136-tsig-axfr # needed to enable zone transfers, which is required for deletion of records.\n...\n```\n\nThe `--rfc2136-kerberos-realm` flag is optional and won't be necessary in many cases.\nMost likely, you will only need it if you see errors similar to this: `KRB Error: (68) KDC_ERR_WRONG_REALM Reserved for future use`.\n\nThe flag `--rfc2136-host` can be set to the host's domain name or IP address.\nHowever, it also determines the name of the Kerberos principal which is used during authentication.\nThis means that Active Directory might only work if this is set to a specific domain name, possibly leading to errors like this:\n`KDC_ERR_S_PRINCIPAL_UNKNOWN Server not found in Kerberos database`.\nTo fix this, try setting `--rfc2136-host` to the \"actual\" hostname of your DNS server.\n\n## DNS Over TLS (RFCs 7858 and 9103)\n\nIf your DNS server does zone transfers over TLS, you can instruct `external-dns` to connect over TLS with the following flags:\n\n * `--rfc2136-use-tls` Enables TLS for both zone transfers and updates.\n * `--tls-ca=<cert-file>` The path to a file containing 
certificate(s) that can be used to verify the DNS server.\n * `--tls-client-cert=<client-cert-file>` and\n * `--tls-client-cert-key=<client-key-file>` Set the client certificate and key for mutual verification.\n * `--rfc2136-skip-tls-verify` Disables verification of the certificate supplied by the DNS server.\n\nIt is currently not supported to do only zone transfers over TLS, but not the updates. They are enabled and disabled together.","site":"external-dns"}
{"questions":"external-dns If you want to learn about how to use Civo DNS Manager read the following tutorials Make sure to use 0 13 5 version of ExternalDNS for this tutorial This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using Civo DNS Manager Managing DNS with Civo Civo DNS","answers":"# Civo DNS\n\nThis tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster using Civo DNS Manager.\n\nMake sure to use version **>0.13.5** of ExternalDNS for this tutorial.\n\n## Managing DNS with Civo\n\nIf you want to learn about how to use Civo DNS Manager, read the following tutorial:\n\n[An Introduction to Managing DNS](https:\/\/www.civo.com\/learn\/configure-dns)\n\n## Get Civo Token\n\nCopy the token from the settings for your account.\nThe environment variable `CIVO_TOKEN` will be needed to run ExternalDNS with Civo.\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\nThen apply one of the following manifest files to deploy ExternalDNS.\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=civo\n        env:\n        - name: CIVO_TOKEN\n          value: \"YOUR_CIVO_API_TOKEN\"\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: 
ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=civo\n        env:\n        - name: CIVO_TOKEN\n          value: \"YOUR_CIVO_API_TOKEN\"\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: my-app.example.com\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n 
   - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nNote the annotation on the service; use the same hostname as the Civo DNS zone created above.\n\nExternalDNS uses this annotation to determine what services should be registered with DNS. Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.\n\nCreate the deployment and service:\n\n```console\n$ kubectl create -f nginx.yaml\n```\n\nDepending where you run your service it can take a little while for your cloud provider to create an external IP for the service.\n\nOnce the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the Civo DNS records.\n\n## Verifying Civo DNS records\n\nCheck your [Civo UI](https:\/\/www.civo.com\/account\/dns) to view the records for your Civo DNS zone.\n\nClick on the zone for the one created above if a different domain was used.\n\nThis should show the external IP address of the service as the A record for your domain.\n\n## Cleanup\n\nNow that we have verified that ExternalDNS will automatically manage Civo DNS records, we can delete the tutorial's example:\n\n```\n$ kubectl delete service -f nginx.yaml\n$ kubectl delete service -f externaldns.yaml\n```","site":"external-dns","answers_cleaned":"  Civo DNS  This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using Civo DNS Manager   Make sure to use    0 13 5   version of ExternalDNS for this tutorial      Managing DNS with Civo  If you want to learn about how to use Civo DNS Manager read the following tutorials    An Introduction to Managing DNS  https   www civo com learn configure dns      Get Civo Token  Copy the token in the settings for your account The environment variable  CIVO TOKEN  will be needed to run ExternalDNS with Civo      Deploy ExternalDNS  Connect your  kubectl  client to the cluster you want to test ExternalDNS with  Then apply one of the following manifests file to deploy 
ExternalDNS       Manifest  for clusters without RBAC enabled      yaml apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              source service   ingress is also possible             domain filter example com    optional  limit to only example com domains  change to match the zone created above              provider civo         env            name  CIVO TOKEN           value   YOUR CIVO API TOKEN           Manifest  for clusters with RBAC enabled      yaml apiVersion  v1 kind  ServiceAccount metadata    name  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules    apiGroups         resources    services   endpoints   pods     verbs    get   watch   list     apiGroups    extensions   networking k8s io     resources    ingresses     verbs    get   watch   list     apiGroups         resources    nodes     verbs    list       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects    kind  ServiceAccount   name  external dns   namespace  default     apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        serviceAccountName  external dns       containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              source service   ingress is also possible             domain filter example com    optional  limit to only 
example com domains  change to match the zone created above              provider civo         env            name  CIVO TOKEN           value   YOUR CIVO API TOKEN          Deploying an Nginx Service  Create a service file called  nginx yaml  with the following contents      yaml apiVersion  apps v1 kind  Deployment metadata    name  nginx spec    selector      matchLabels        app  nginx   template      metadata        labels          app  nginx     spec        containers          image  nginx         name  nginx         ports            containerPort  80     apiVersion  v1 kind  Service metadata    name  nginx   annotations      external dns alpha kubernetes io hostname  my app example com spec    selector      app  nginx   type  LoadBalancer   ports        protocol  TCP       port  80       targetPort  80      Note the annotation on the service  use the same hostname as the Civo DNS zone created above   ExternalDNS uses this annotation to determine what services should be registered with DNS  Removing the annotation will cause ExternalDNS to remove the corresponding DNS records   Create the deployment and service      console   kubectl create  f nginx yaml      Depending where you run your service it can take a little while for your cloud provider to create an external IP for the service   Once the service has an external IP assigned  ExternalDNS will notice the new service IP address and synchronize the Civo DNS records      Verifying Civo DNS records  Check your  Civo UI  https   www civo com account dns  to view the records for your Civo DNS zone   Click on the zone for the one created above if a different domain was used   This should show the external IP address of the service as the A record for your domain      Cleanup  Now that we have verified that ExternalDNS will automatically manage Civo DNS records  we can delete the tutorial s example         kubectl delete service  f nginx yaml   kubectl delete service  f externaldns yaml    "}
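The Civo note above hinges on the hostname annotation living inside the zone that ExternalDNS manages (here via `--domain-filter`). A minimal shell sketch of that sanity check before applying a manifest; the zone and annotation values below are illustrative placeholders, not taken from a real cluster:

```shell
# Sketch: confirm a hostname annotation falls under the managed DNS zone.
# ZONE and ANNOTATION are placeholder values for illustration only.
ZONE="example.com"
ANNOTATION="external-dns.alpha.kubernetes.io/hostname: my-app.example.com"
HOST="${ANNOTATION##*: }"                # keep only the hostname value
case "$HOST" in
  "$ZONE"|*."$ZONE") echo "ok: $HOST is inside zone $ZONE" ;;
  *)                 echo "warning: $HOST is outside zone $ZONE" ;;
esac
```

If the hostname falls outside the filtered zone, ExternalDNS will simply never create a record for it, which is a common source of "nothing happens" confusion.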
{"questions":"external-dns This tutorial describes how to setup ExternalDNS for use within a If you are new to OVH we recommend you first read the following Creating a zone with OVH DNS OVHcloud Kubernetes cluster using OVH DNS Make sure to use 0 6 version of ExternalDNS for this tutorial","answers":"# OVHcloud\n\nThis tutorial describes how to set up ExternalDNS for use within a\nKubernetes cluster using OVH DNS.\n\nMake sure to use **>=0.6** version of ExternalDNS for this tutorial.\n\n## Creating a zone with OVH DNS\n\nIf you are new to OVH, we recommend you first read the following\ninstructions for creating a zone.\n\n[Creating a zone using the OVH manager](https:\/\/docs.ovh.com\/gb\/en\/domains\/create_a_dns_zone_for_a_domain_which_is_not_registered_at_ovh\/)\n\n[Creating a zone using the OVH API](https:\/\/api.ovh.com\/console\/)\n\n## Creating OVH Credentials\n\nYou first need to create an OVH application.\n\nUsing the [OVH documentation](https:\/\/docs.ovh.com\/gb\/en\/api\/first-steps-with-ovh-api\/#advanced-usage-pair-ovhcloud-apis-with-an-application_2) you will obtain your `Application key` and `Application secret`.\n\nYou will also need to generate your consumer key; these are the permissions needed:\n- GET on `\/domain\/zone`\n- GET on `\/domain\/zone\/*\/record`\n- GET on `\/domain\/zone\/*\/record\/*`\n- POST on `\/domain\/zone\/*\/record`\n- DELETE on `\/domain\/zone\/*\/record\/*`\n- GET on `\/domain\/zone\/*\/soa`\n- POST on `\/domain\/zone\/*\/refresh`\n\nYou can use the following `curl` request to generate & validate your `Consumer key`:\n\n```bash\ncurl -XPOST -H \"X-Ovh-Application: <ApplicationKey>\" -H \"Content-type: application\/json\" https:\/\/eu.api.ovh.com\/1.0\/auth\/credential -d '{\n  \"accessRules\": [\n    {\n      \"method\": \"GET\",\n      \"path\": \"\/domain\/zone\"\n    },\n    {\n      \"method\": \"GET\",\n      \"path\": \"\/domain\/zone\/*\/soa\"\n    },\n    {\n      \"method\": \"GET\",\n      \"path\": 
\"\/domain\/zone\/*\/record\"\n    },\n    {\n      \"method\": \"GET\",\n      \"path\": \"\/domain\/zone\/*\/record\/*\"\n    },\n    {\n      \"method\": \"POST\",\n      \"path\": \"\/domain\/zone\/*\/record\"\n    },\n    {\n      \"method\": \"DELETE\",\n      \"path\": \"\/domain\/zone\/*\/record\/*\"\n    },\n    {\n      \"method\": \"POST\",\n      \"path\": \"\/domain\/zone\/*\/refresh\"\n    }\n  ],\n  \"redirection\":\"https:\/\/github.com\/kubernetes-sigs\/external-dns\/blob\/HEAD\/docs\/tutorials\/ovh.md#creating-ovh-credentials\"\n}'\n```\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster with which you want to test ExternalDNS, and then apply one of the following manifest files for deployment:\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=ovh\n        env:\n        - name: OVH_APPLICATION_KEY\n          value: \"YOUR_OVH_APPLICATION_KEY\"\n        - name: OVH_APPLICATION_SECRET\n          value: \"YOUR_OVH_APPLICATION_SECRET\"\n        - name: OVH_CONSUMER_KEY\n          value: \"YOUR_OVH_CONSUMER_KEY_AFTER_VALIDATED_LINK\"\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\"]\n  verbs: 
[\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"endpoints\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=ovh\n        env:\n        - name: OVH_APPLICATION_KEY\n          value: \"YOUR_OVH_APPLICATION_KEY\"\n        - name: OVH_APPLICATION_SECRET\n          value: \"YOUR_OVH_APPLICATION_SECRET\"\n        - name: OVH_CONSUMER_KEY\n          value: \"YOUR_OVH_CONSUMER_KEY_AFTER_VALIDATED_LINK\"\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n   
     - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: example.com\n    external-dns.alpha.kubernetes.io\/ttl: \"120\" #optional\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\n**A note about annotations**\n\nVerify that the annotation on the service uses the same hostname as the OVH DNS zone created above. The annotation may also be a subdomain of the DNS zone (e.g. 'www.example.com').\n\nThe TTL annotation can be used to configure the TTL on DNS records managed by ExternalDNS and is optional. If this annotation is not set, the TTL on records managed by ExternalDNS will default to 10.\n\nExternalDNS uses the hostname annotation to determine which services should be registered with DNS. Removing the hostname annotation will cause ExternalDNS to remove the corresponding DNS records.\n\n### Create the deployment and service\n\n```\n$ kubectl create -f nginx.yaml\n```\n\nDepending on where you run your service, it may take some time for your cloud provider to create an external IP for the service. 
Once an external IP is assigned, ExternalDNS detects the new service IP address and synchronizes the OVH DNS records.\n\n## Verifying OVH DNS records\n\nUse the OVH manager or API to verify that the A record for your domain shows the external IP address of the service.\n\n## Cleanup\n\nOnce you successfully configure and verify record management via ExternalDNS, you can delete the tutorial's example:\n\n```\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```","site":"external-dns","answers_cleaned":"  OVHcloud  This tutorial describes how to setup ExternalDNS for use within a Kubernetes cluster using OVH DNS   Make sure to use     0 6   version of ExternalDNS for this tutorial      Creating a zone with OVH DNS  If you are new to OVH  we recommend you first read the following instructions for creating a zone    Creating a zone using the OVH manager  https   docs ovh com gb en domains create a dns zone for a domain which is not registered at ovh     Creating a zone using the OVH API  https   api ovh com console       Creating OVH Credentials  You first need to create an OVH application   Using the  OVH documentation  https   docs ovh com gb en api first steps with ovh api  advanced usage pair ovhcloud apis with an application 2  you will have your  Application key  and  Application secret   And you will need to generate your consumer key  here the permissions needed     GET on   domain zone    GET on   domain zone   record    GET on   domain zone   record      POST on   domain zone   record    DELETE on   domain zone   record      GET on   domain zone   soa    POST on   domain zone   refresh   You can use the following  curl  request to generate   validated your  Consumer key      bash curl  XPOST  H  X Ovh Application   ApplicationKey    H  Content type  application json  https   eu api ovh com 1 0 auth credential  d       accessRules                  method    GET          path     domain zone                      method    GET          path 
    domain zone   soa                      method    GET          path     domain zone   record                      method    GET          path     domain zone   record                        method    POST          path     domain zone   record                      method    DELETE          path     domain zone   record                        method    POST          path     domain zone   refresh                redirection   https   github com kubernetes sigs external dns blob HEAD docs tutorials ovh md creating ovh credentials             Deploy ExternalDNS  Connect your  kubectl  client to the cluster with which you want to test ExternalDNS  and then apply one of the following manifest files for deployment       Manifest  for clusters without RBAC enabled      yaml apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              source service   ingress is also possible             domain filter example com    optional  limit to only example com domains  change to match the zone created above              provider ovh         env            name  OVH APPLICATION KEY           value   YOUR OVH APPLICATION KEY            name  OVH APPLICATION SECRET           value   YOUR OVH APPLICATION SECRET            name  OVH CONSUMER KEY           value   YOUR OVH CONSUMER KEY AFTER VALIDATED LINK           Manifest  for clusters with RBAC enabled      yaml apiVersion  v1 kind  ServiceAccount metadata    name  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules    apiGroups         resources    services     verbs    get   watch   list     apiGroups         resources    pods     verbs    get   watch   list     
apiGroups    extensions   networking k8s io     resources    ingresses     verbs    get   watch   list     apiGroups         resources    nodes     verbs    list     apiGroups         resources    endpoints     verbs    get   watch   list       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects    kind  ServiceAccount   name  external dns   namespace  default     apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        serviceAccountName  external dns       containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              source service   ingress is also possible             domain filter example com    optional  limit to only example com domains  change to match the zone created above              provider ovh         env            name  OVH APPLICATION KEY           value   YOUR OVH APPLICATION KEY            name  OVH APPLICATION SECRET           value   YOUR OVH APPLICATION SECRET            name  OVH CONSUMER KEY           value   YOUR OVH CONSUMER KEY AFTER VALIDATED LINK          Deploying an Nginx Service  Create a service file called  nginx yaml  with the following contents      yaml apiVersion  apps v1 kind  Deployment metadata    name  nginx spec    selector      matchLabels        app  nginx   template      metadata        labels          app  nginx     spec        containers          image  nginx         name  nginx         ports            containerPort  80     apiVersion  v1 kind  Service metadata    name  nginx   annotations      external dns alpha kubernetes io hostname  example com     external dns alpha kubernetes io ttl   120   optional spec    selector   
   app  nginx   type  LoadBalancer   ports        protocol  TCP       port  80       targetPort  80        A note about annotations    Verify that the annotation on the service uses the same hostname as the OVH DNS zone created above  The annotation may also be a subdomain of the DNS zone  e g   www example com     The TTL annotation can be used to configure the TTL on DNS records managed by ExternalDNS and is optional  If this annotation is not set  the TTL on records managed by ExternalDNS will default to 10   ExternalDNS uses the hostname annotation to determine which services should be registered with DNS  Removing the hostname annotation will cause ExternalDNS to remove the corresponding DNS records       Create the deployment and service        kubectl create  f nginx yaml      Depending on where you run your service  it may take some time for your cloud provider to create an external IP for the service  Once an external IP is assigned  ExternalDNS detects the new service IP address and synchronizes the OVH DNS records      Verifying OVH DNS records  Use the OVH manager or API to verify that the A record for your domain shows the external IP address of the services      Cleanup  Once you successfully configure and verify record management via ExternalDNS  you can delete the tutorial s example         kubectl delete  f nginx yaml   kubectl delete  f externaldns yaml    "}
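Once the consumer key is validated, every authenticated OVH API call (the ones ExternalDNS makes with these credentials) carries a request signature, per OVH's first-steps guide: the SHA-1 of the application secret, consumer key, method, URL, body, and timestamp joined with `+`, prefixed with `$1$`. A worked sketch with placeholder credentials and a fixed timestamp:

```shell
# Illustrative OVH request signing (placeholder credentials, not real ones).
# sig = "$1$" + SHA1_HEX(AppSecret + "+" + ConsumerKey + "+" + METHOD + "+" + URL + "+" + BODY + "+" + TS)
AS="application-secret"
CK="consumer-key"
METHOD="GET"
URL="https://eu.api.ovh.com/1.0/domain/zone"
BODY=""
TS="1366560945"                       # normally $(date +%s); fixed here for reproducibility
SIG="\$1\$$(printf '%s+%s+%s+%s+%s+%s' "$AS" "$CK" "$METHOD" "$URL" "$BODY" "$TS" | sha1sum | cut -d' ' -f1)"
echo "$SIG"                           # 43-character value for the X-Ovh-Signature header
```

This is only a demonstration of the scheme; the OVH provider in ExternalDNS computes the signature for you from the three environment variables in the manifests above.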
{"questions":"external-dns Azure Private DNS 3 Deploy ExternalDNS 4 Expose an NGINX service with a LoadBalancer and annotate it with the desired DNS name 5 Install NGINX Ingress Controller Optional This tutorial describes how to set up ExternalDNS for managing records in Azure Private DNS 1 Provision Azure Private DNS It comprises of the following steps 2 Configure service principal for managing the zone","answers":"# Azure Private DNS\n\nThis tutorial describes how to set up ExternalDNS for managing records in Azure Private DNS.\n\nIt comprises the following steps:\n1) Provision Azure Private DNS\n2) Configure service principal for managing the zone\n3) Deploy ExternalDNS\n4) Expose an NGINX service with a LoadBalancer and annotate it with the desired DNS name\n5) Install NGINX Ingress Controller (Optional)\n6) Expose an nginx service with an ingress (Optional)\n7) Verify the DNS records\n\nEverything will be deployed on Kubernetes.\nTherefore, please see the subsequent prerequisites.\n\n## Prerequisites\n- Azure Kubernetes Service is deployed and ready\n- [Azure CLI 2.0](https:\/\/docs.microsoft.com\/en-us\/cli\/azure\/install-azure-cli) and `kubectl` installed on the box to execute the subsequent steps\n\n## Provision Azure Private DNS\n\nThe provider will find suitable zones for domains it manages. 
It will\nnot automatically create zones.\n\nFor this tutorial, we will create an Azure resource group named 'externaldns' that can easily be deleted later.\n\n```\n$ az group create -n externaldns -l westeurope\n```\n\nSubstitute a more suitable location for the resource group if desired.\n\nA prerequisite for Azure Private DNS to resolve records is to define links with VNETs.\nThus, first create a VNET.\n\n```\n$ az network vnet create \\\n  --name myvnet \\\n  --resource-group externaldns \\\n  --location westeurope \\\n  --address-prefix 10.2.0.0\/16 \\\n  --subnet-name mysubnet \\\n  --subnet-prefixes 10.2.0.0\/24\n```\n\nNext, create an Azure Private DNS zone for \"example.com\":\n\n```\n$ az network private-dns zone create -g externaldns -n example.com\n```\n\nSubstitute a domain you own for \"example.com\" if desired.\n\nFinally, create the mentioned link with the VNET.\n\n```\n$ az network private-dns link vnet create -g externaldns -n mylink \\\n   -z example.com -v myvnet --registration-enabled false\n```\n\n## Configure service principal for managing the zone\nExternalDNS needs permissions to make changes in Azure Private DNS.\nThese permissions are roles assigned to the service principal used by ExternalDNS.\n\nA service principal with a minimum access level of `Private DNS Zone Contributor` to the Private DNS zone(s) and `Reader` to the resource group containing the Azure Private DNS zone(s) is necessary.\nMore powerful role-assignments like `Owner` or assignments on subscription-level work too.\n\nStart off by **creating the service principal** without role-assignments.\n```\n$ az ad sp create-for-rbac --skip-assignment -n http:\/\/externaldns-sp\n{\n  \"appId\": \"appId GUID\",  <-- aadClientId value\n  ...\n  \"password\": \"password\",  <-- aadClientSecret value\n  \"tenant\": \"AzureAD Tenant Id\"  <-- tenantId value\n}\n```\n> Note: Alternatively, you can issue `az account show --query \"tenantId\"` to retrieve the ID of your AAD Tenant 
too.\n\nNext, assign the roles to the service principal.\nBut first **retrieve the IDs** of the objects to assign roles on.\n\n```\n# find out the resource ids of the resource group where the dns zone is deployed, and the dns zone itself\n$ az group show --name externaldns --query id -o tsv\n\/subscriptions\/id\/resourceGroups\/externaldns\n\n$ az network private-dns zone show --name example.com -g externaldns --query id -o tsv\n\/subscriptions\/...\/resourceGroups\/externaldns\/providers\/Microsoft.Network\/privateDnsZones\/example.com\n```\nNow, **create role assignments**.\n```\n# 1. as a reader to the resource group\n$ az role assignment create --role \"Reader\" --assignee <appId GUID> --scope <resource group resource id>\n\n# 2. as a contributor to DNS Zone itself\n$ az role assignment create --role \"Private DNS Zone Contributor\" --assignee <appId GUID> --scope <dns zone resource id>\n```\n\n## Throttling\n\nWhen the ExternalDNS managed zones list doesn't change frequently, one can set `--azure-zones-cache-duration` (zones list cache time-to-live). The zones list cache is disabled by default, with a value of 0s.\n\n## Deploy ExternalDNS\nConfigure `kubectl` to be able to communicate and authenticate with your cluster.\nBy default, this is done through the file `~\/.kube\/config`.\n\nFor general background information on this see [kubernetes-docs](https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/access-cluster\/).\nThe Azure CLI can maintain this file automatically for AKS clusters. 
See [Azure-Docs](https:\/\/docs.microsoft.com\/de-de\/cli\/azure\/aks?view=azure-cli-latest#az-aks-get-credentials).\n\nFollow the steps for [azure-dns provider](.\/azure.md#creating-configuration-file) to create a configuration file.\n\nThen apply one of the following manifests depending on whether you use RBAC or not.\n\nThe credentials of the service principal are provided to ExternalDNS as environment-variables.\n\n### Manifest (for clusters without RBAC enabled)\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: externaldns\nspec:\n  selector:\n    matchLabels:\n      app: externaldns\n  strategy:\n    type: Recreate\n  template:\n    metadata:\n      labels:\n        app: externaldns\n    spec:\n      containers:\n      - name: externaldns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=example.com\n        - --provider=azure-private-dns\n        - --azure-resource-group=externaldns\n        - --azure-subscription-id=<use the id of your subscription>\n        volumeMounts:\n        - name: azure-config-file\n          mountPath: \/etc\/kubernetes\n          readOnly: true\n      volumes:\n      - name: azure-config-file\n        secret:\n          secretName: azure-config-file\n```\n\n### Manifest (for clusters with RBAC enabled, cluster access)\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: externaldns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: externaldns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"get\", \"watch\", \"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: 
ClusterRoleBinding\nmetadata:\n  name: externaldns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: externaldns\nsubjects:\n- kind: ServiceAccount\n  name: externaldns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: externaldns\nspec:\n  selector:\n    matchLabels:\n      app: externaldns\n  strategy:\n    type: Recreate\n  template:\n    metadata:\n      labels:\n        app: externaldns\n    spec:\n      serviceAccountName: externaldns\n      containers:\n      - name: externaldns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=example.com\n        - --provider=azure-private-dns\n        - --azure-resource-group=externaldns\n        - --azure-subscription-id=<use the id of your subscription>\n        volumeMounts:\n        - name: azure-config-file\n          mountPath: \/etc\/kubernetes\n          readOnly: true\n      volumes:\n      - name: azure-config-file\n        secret:\n          secretName: azure-config-file\n```\n\n### Manifest (for clusters with RBAC enabled, namespace access)\nThis configuration is the same as above, except it only requires privileges for the current namespace, not for the whole cluster.\nHowever, access to [nodes](https:\/\/kubernetes.io\/docs\/concepts\/architecture\/nodes\/) requires cluster access, so when using this manifest,\nservices with type `NodePort` will be skipped!\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: externaldns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: Role\nmetadata:\n  name: externaldns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: 
RoleBinding\nmetadata:\n  name: externaldns\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: externaldns\nsubjects:\n- kind: ServiceAccount\n  name: externaldns\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: externaldns\nspec:\n  selector:\n    matchLabels:\n      app: externaldns\n  strategy:\n    type: Recreate\n  template:\n    metadata:\n      labels:\n        app: externaldns\n    spec:\n      serviceAccountName: externaldns\n      containers:\n      - name: externaldns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=example.com\n        - --provider=azure-private-dns\n        - --azure-resource-group=externaldns\n        - --azure-subscription-id=<use the id of your subscription>\n        volumeMounts:\n        - name: azure-config-file\n          mountPath: \/etc\/kubernetes\n          readOnly: true\n      volumes:\n      - name: azure-config-file\n        secret:\n          secretName: azure-config-file\n```\n\nCreate the deployment for ExternalDNS:\n\n```\n$ kubectl create -f externaldns.yaml\n```\n\n## Create an nginx deployment\n\nThis step creates a demo workload in your cluster. Apply the following manifest to create a deployment that we are going to expose later in this tutorial in multiple ways:\n\n```yaml\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n        - image: nginx\n          name: nginx\n          ports:\n          - containerPort: 80\n```\n\n## Expose the nginx deployment with a load balancer\n\nApply the following manifest to create a service of type `LoadBalancer`. 
This will create a load balancer in Azure that will forward traffic to the nginx pods.\n\n```yaml\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-svc\n  annotations:\n    service.beta.kubernetes.io\/azure-load-balancer-internal: \"true\"\n    external-dns.alpha.kubernetes.io\/hostname: server.example.com\n    external-dns.alpha.kubernetes.io\/internal-hostname: server-clusterip.example.com\nspec:\n  ports:\n    - port: 80\n      protocol: TCP\n      targetPort: 80\n  selector:\n    app: nginx\n  type: LoadBalancer\n```\n\nIn the service we used multiple annotations. The annotation `service.beta.kubernetes.io\/azure-load-balancer-internal` is used to create an internal load balancer. The annotation `external-dns.alpha.kubernetes.io\/hostname` is used to create a DNS record for the load balancer that will point to the internal IP address in the VNET allocated by the internal load balancer. The annotation `external-dns.alpha.kubernetes.io\/internal-hostname` is used to create a private DNS record for the load balancer that will point to the cluster IP.\n\n## Install NGINX Ingress Controller (Optional)\n\nHelm is used to deploy the ingress controller.\n\nWe employ the popular chart [ingress-nginx](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/charts\/ingress-nginx).\n\n```\n$ helm repo add ingress-nginx https:\/\/kubernetes.github.io\/ingress-nginx\n$ helm repo update\n$ helm install [RELEASE_NAME] ingress-nginx\/ingress-nginx \\\n     --set controller.publishService.enabled=true\n```\n\nThe parameter `controller.publishService.enabled` needs to be set to `true`.\n\nIt makes the ingress controller update the address of ingress resources to contain the external IP of the load balancer serving the ingress controller.\nThis is crucial, as ExternalDNS reads exactly those addresses when creating DNS records from ingress resources.\nWe will make use of this in the subsequent steps. 
If you don't want to work with ingress resources later on, you can leave the parameter out.\n\nVerify the correct propagation of the load balancer's IP by listing the ingresses.\n\n```\n$ kubectl get ingress\n```\n\nThe address column should contain the IP for each ingress. ExternalDNS will pick up exactly this piece of information.\n\n```\nNAME     HOSTS             ADDRESS          PORTS   AGE\nnginx1   sample1.aks.com   52.167.195.110   80      6d22h\nnginx2   sample2.aks.com   52.167.195.110   80      6d21h\n```\n\nIf you do not want to deploy the ingress controller with Helm, make sure to pass the following command-line flags to it through the mechanism of your choice:\n\n```\nflags:\n--publish-service=<namespace of ingress-controller>\/<svcname of ingress-controller>\n--update-status=true (default-value)\n\nexample:\n.\/nginx-ingress-controller --publish-service=default\/nginx-ingress-controller\n```\n\n## Expose the nginx deployment with the ingress (Optional)\n\nApply the following manifest to create an ingress resource that will expose the nginx deployment. 
The ingress resource backend points to a `ClusterIP` service that is needed to select the pods that will receive the traffic.\n\n```yaml\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-svc-clusterip\nspec:\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 80\n  selector:\n    app: nginx\n  type: ClusterIP\n\n---\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: nginx\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: server.example.com\n    http:\n      paths:\n      - backend:\n          service:\n            name: nginx-svc-clusterip\n            port:\n              number: 80\n        path: \/\n        pathType: Prefix\n```\n\nWhen you use ExternalDNS with Ingress resources, it automatically creates DNS records based on the hostnames listed in those Ingress objects.\nThose hostnames must match the filters that you defined (if any):\n\n- By default, `--domain-filter` filters the Azure Private DNS zone name.\n- If you use `--domain-filter` together with `--zone-name-filter`, the behavior changes: `--domain-filter` then filters Ingress domains, not the Azure Private DNS zone name.\n\nWhen those hostnames are removed or renamed, the corresponding DNS records are also altered.\n\nCreate the deployment, service and ingress object:\n\n```\n$ kubectl create -f nginx.yaml\n```\n\nSince your external IP would have already been assigned to the nginx-ingress service, the DNS records pointing to the IP of the nginx-ingress service should be created within a minute.\n\n## Verify created records\n\nRun the following command to view the A records for your Azure Private DNS zone:\n\n```\n$ az network private-dns record-set a list -g externaldns -z example.com\n```\n\nSubstitute the zone for the one created above if a different domain was used.\n\nThis should show the external IP address of the service as the A record for your domain ('@' indicates the record is for the zone itself).","site":"external-dns","answers_cleaned":"  Azure Private DNS  This tutorial 
describes how to set up ExternalDNS for managing records in Azure Private DNS   It comprises of the following steps  1  Provision Azure Private DNS 2  Configure service principal for managing the zone 3  Deploy ExternalDNS 4  Expose an NGINX service with a LoadBalancer and annotate it with the desired DNS name 5  Install NGINX Ingress Controller  Optional  6  Expose an nginx service with an ingress  Optional  7  Verify the DNS records  Everything will be deployed on Kubernetes  Therefore  please see the subsequent prerequisites      Prerequisites   Azure Kubernetes Service is deployed and ready    Azure CLI 2 0  https   docs microsoft com en us cli azure install azure cli  and  kubectl  installed on the box to execute the subsequent steps     Provision Azure Private DNS  The provider will find suitable zones for domains it manages  It will not automatically create zones   For this tutorial  we will create a Azure resource group named  externaldns  that can easily be deleted later         az group create  n externaldns  l westeurope      Substitute a more suitable location for the resource group if desired   As a prerequisite for Azure Private DNS to resolve records is to define links with VNETs  Thus  first create a VNET         az network vnet create       name myvnet       resource group externaldns       location westeurope       address prefix 10 2 0 0 16       subnet name mysubnet       subnet prefixes 10 2 0 0 24      Next  create a Azure Private DNS zone for  example com          az network private dns zone create  g externaldns  n example com      Substitute a domain you own for  example com  if desired   Finally  create the mentioned link with the VNET         az network private dns link vnet create  g externaldns  n mylink       z example com  v myvnet   registration enabled false         Configure service principal for managing the zone ExternalDNS needs permissions to make changes in Azure Private DNS  These permissions are roles assigned to the service 
principal used by ExternalDNS   A service principal with a minimum access level of  Private DNS Zone Contributor  to the Private DNS zone s  and  Reader  to the resource group containing the Azure Private DNS zone s  is necessary  More powerful role assignments like  Owner  or assignments on subscription level work too   Start off by   creating the service principal   without role assignments        az ad sp create for rbac   skip assignment  n http   externaldns sp      appId    appId GUID        aadClientId value          password    password        aadClientSecret value    tenant    AzureAD Tenant Id       tenantId value         Note  Alternatively  you can issue  az account show   query  tenantId   to retrieve the id of your AAD Tenant too   Next  assign the roles to the service principal  But first   retrieve the ID s   of the objects to assign roles on         find out the resource ids of the resource group where the dns zone is deployed  and the dns zone itself   az group show   name externaldns   query id  o tsv  subscriptions id resourceGroups externaldns    az network private dns zone show   name example com  g externaldns   query id  o tsv  subscriptions     resourceGroups externaldns providers Microsoft Network privateDnsZones example com     Now    create role assignments          1  as a reader to the resource group   az role assignment create   role  Reader    assignee  appId GUID    scope  resource group resource id     2  as a contributor to DNS Zone itself   az role assignment create   role  Private DNS Zone Contributor    assignee  appId GUID    scope  dns zone resource id          Throttling  When the ExternalDNS managed zones list doesn t change frequently  one can set    azure zones cache duration   zones list cache time to live   The zones list cache is disabled by default  with a value of 0s      Deploy ExternalDNS Configure  kubectl  to be able to communicate and authenticate with your cluster  This is per default done through the file     
kube config    For general background information on this see  kubernetes docs  https   kubernetes io docs tasks access application cluster access cluster    Azure CLI features functionality for automatically maintaining this file for AKS Clusters  See  Azure Docs  https   docs microsoft com de de cli azure aks view azure cli latest az aks get credentials    Follow the steps for  azure dns provider    azure md creating configuration file  to create a configuration file   Then apply one of the following manifests depending on whether you use RBAC or not   The credentials of the service principal are provided to ExternalDNS as environment variables       Manifest  for clusters without RBAC enabled     yaml apiVersion  apps v1 kind  Deployment metadata    name  externaldns spec    selector      matchLabels        app  externaldns   strategy      type  Recreate   template      metadata        labels          app  externaldns     spec        containers          name  externaldns         image  registry k8s io external dns external dns v0 15 0         args              source service             source ingress             domain filter example com             provider azure private dns             azure resource group externaldns             azure subscription id  use the id of your subscription          volumeMounts            name  azure config file           mountPath   etc kubernetes           readOnly  true       volumes          name  azure config file         secret            secretName  azure config file          Manifest  for clusters with RBAC enabled  cluster access     yaml apiVersion  v1 kind  ServiceAccount metadata    name  externaldns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  externaldns rules    apiGroups         resources    services   endpoints   pods     verbs    get   watch   list     apiGroups    extensions   networking k8s io     resources    ingresses     verbs    get   watch   list     apiGroups         
resources    nodes     verbs    get    watch    list       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  externaldns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  externaldns subjects    kind  ServiceAccount   name  externaldns   namespace  default     apiVersion  apps v1 kind  Deployment metadata    name  externaldns spec    selector      matchLabels        app  externaldns   strategy      type  Recreate   template      metadata        labels          app  externaldns     spec        serviceAccountName  externaldns       containers          name  externaldns         image  registry k8s io external dns external dns v0 15 0         args              source service             source ingress             domain filter example com             provider azure private dns             azure resource group externaldns             azure subscription id  use the id of your subscription          volumeMounts            name  azure config file           mountPath   etc kubernetes           readOnly  true       volumes          name  azure config file         secret            secretName  azure config file          Manifest  for clusters with RBAC enabled  namespace access  This configuration is the same as above  except it only requires privileges for the current namespace  not for the whole cluster  However  access to  nodes  https   kubernetes io docs concepts architecture nodes   requires cluster access  so when using this manifest  services with type  NodePort  will be skipped      yaml apiVersion  v1 kind  ServiceAccount metadata    name  externaldns     apiVersion  rbac authorization k8s io v1 kind  Role metadata    name  externaldns rules    apiGroups         resources    services   endpoints   pods     verbs    get   watch   list     apiGroups    extensions   networking k8s io     resources    ingresses     verbs    get   watch   list       apiVersion  rbac authorization k8s io v1 kind  
RoleBinding metadata    name  externaldns roleRef    apiGroup  rbac authorization k8s io   kind  Role   name  externaldns subjects    kind  ServiceAccount   name  externaldns     apiVersion  apps v1 kind  Deployment metadata    name  externaldns spec    selector      matchLabels        app  externaldns   strategy      type  Recreate   template      metadata        labels          app  externaldns     spec        serviceAccountName  externaldns       containers          name  externaldns         image  registry k8s io external dns external dns v0 15 0         args              source service             source ingress             domain filter example com             provider azure private dns             azure resource group externaldns             azure subscription id  use the id of your subscription          volumeMounts            name  azure config file           mountPath   etc kubernetes           readOnly  true       volumes          name  azure config file         secret            secretName  azure config file      Create the deployment for ExternalDNS         kubectl create  f externaldns yaml         Create an nginx deployment  This step creates a demo workload in your cluster  Apply the following manifest to create a deployment that we are going to expose later in this tutorial in multiple ways      yaml     apiVersion  apps v1 kind  Deployment metadata    name  nginx spec    selector      matchLabels        app  nginx   template      metadata        labels          app  nginx     spec        containers            image  nginx           name  nginx           ports              containerPort  80         Expose the nginx deployment with a load balancer  Apply the following manifest to create a service of type  LoadBalancer   This will create a public load balancer in Azure that will forward traffic to the nginx pods      yaml     apiVersion  v1 kind  Service metadata    name  nginx svc   annotations      service beta kubernetes io azure load balancer 
internal   true      external dns alpha kubernetes io hostname  server example com     external dns alpha kubernetes io internal hostname  server clusterip example com spec    ports        port  80       protocol  TCP       targetPort  80   selector      app  nginx   type  LoadBalancer      In the service we used multiple annptations  The annotation  service beta kubernetes io azure load balancer internal  is used to create an internal load balancer  The annotation  external dns alpha kubernetes io hostname  is used to create a DNS record for the load balancer that will point to the internal IP address in the VNET allocated by the internal load balancer  The annotation  external dns alpha kubernetes io internal hostname  is used to create a private DNS record for the load balancer that will point to the cluster IP      Install NGINX Ingress Controller  Optional   Helm is used to deploy the ingress controller   We employ the popular chart  ingress nginx  https   github com kubernetes ingress nginx tree main charts ingress nginx          helm repo add ingress nginx https   kubernetes github io ingress nginx   helm repo update   helm install  RELEASE NAME  ingress nginx ingress nginx        set controller publishService enabled true      The parameter  controller publishService enabled  needs to be set to  true    It will make the ingress controller update the endpoint records of ingress resources to contain the external ip of the loadbalancer serving the ingress controller  This is crucial as ExternalDNS reads those endpoints records when creating DNS Records from ingress resources  In the subsequent parameter we will make use of this  If you don t want to work with ingress resources in your later use  you can leave the parameter out   Verify the correct propagation of the loadbalancer s ip by listing the ingresses         kubectl get ingress      The address column should contain the ip for each ingress  ExternalDNS will pick up exactly this piece of information     
  NAME     HOSTS             ADDRESS          PORTS   AGE nginx1   sample1 aks com   52 167 195 110   80      6d22h nginx2   sample2 aks com   52 167 195 110   80      6d21h      If you do not want to deploy the ingress controller with Helm  ensure to pass the following cmdline flags to it through the mechanism of your choice       flags    publish service  namespace of ingress controller    svcname of ingress controller    update status true  default value   example    nginx ingress controller   publish service default nginx ingress controller         Expose the nginx deployment with the ingress  Optional   Apply the following manifest to create an ingress resource that will expose the nginx deployment  The ingress resource backend points to a  ClusterIP  service that is needed to select the pods that will receive the traffic      yaml     apiVersion  v1 kind  Service metadata    name  nginx svc clusterip spec    ports      port  80     protocol  TCP     targetPort  80   selector      app  nginx   type  ClusterIP      apiVersion  networking k8s io v1 kind  Ingress metadata    name  nginx spec    ingressClassName  nginx   rules      host  server example com     http        paths          backend            service              name  nginx svc clusterip             port                number  80         pathType  Prefix      When you use ExternalDNS with Ingress resources  it automatically creates DNS records based on the hostnames listed in those Ingress objects  Those hostnames must match the filters that you defined  if any      By default     domain filter  filters Azure Private DNS zone    If you use    domain filter  together with    zone name filter   the behavior changes     domain filter  then filters Ingress domains  not the Azure Private DNS zone name   When those hostnames are removed or renamed the corresponding DNS records are also altered   Create the deployment  service and ingress object         kubectl create  f nginx yaml      Since your external 
IP would have already been assigned to the nginx ingress service  the DNS records pointing to the IP of the nginx ingress service should be created within a minute      Verify created records  Run the following command to view the A records for your Azure Private DNS zone         az network private dns record set a list  g externaldns  z example com      Substitute the zone for the one created above if a different domain was used   This should show the external IP address of the service as the A record for your domain      indicates the record is for the zone itself  "}
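The two filtering modes described above can be sketched as container args. This snippet is illustrative only (the hostnames and resource group are placeholders, not part of the original tutorial):

```yaml
# Illustrative args sketch: once --zone-name-filter is set,
# --domain-filter restricts Ingress hostnames rather than zone names.
args:
- --source=ingress
- --provider=azure-private-dns
- --azure-resource-group=externaldns
- --zone-name-filter=example.com       # selects the Azure Private DNS zone by name
- --domain-filter=server.example.com   # now filters Ingress domains, not the zone
```

Without `--zone-name-filter`, the last line would instead be matched against the zone name itself.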
# Scaleway

This tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster using Scaleway DNS.

Make sure to use **>=0.7.4** version of ExternalDNS for this tutorial.

**Warning**: Scaleway DNS is currently in Public Beta and may not be suited for production usage.

## Importing a Domain into Scaleway DNS

In order to use your domain, you need to import it into Scaleway DNS. If it's not already done, you can follow [this documentation](https://www.scaleway.com/en/docs/scaleway-dns/)

Once the domain is imported you can either use the root zone, or create a subzone to use.

In this example we will use `example.com` as an example.

## Creating Scaleway Credentials

To use ExternalDNS with Scaleway DNS, you need to create an API token (composed of the Access Key and the Secret Key).
You can either use existing ones or you can create a new token, as explained in [How to generate an API token](https://www.scaleway.com/en/docs/generate-an-api-token/) or directly by going to the [credentials page](https://console.scaleway.com/account/organization/credentials).

The Scaleway provider supports configuring credentials using profiles or supplying them directly with environment variables.

### Configuration using a config file

You can supply the credentials through a config file:

1. Create the config file. Check out [Scaleway docs](https://github.com/scaleway/scaleway-sdk-go/blob/master/scw/README.md#scaleway-config) for instructions
2. Mount it as a Secret into the Pod
3. Configure environment variable `SCW_PROFILE` to match the profile name in the config file
4. Configure environment variable `SCW_CONFIG_PATH` to match the location of the mounted config file

### Configuration using environment variables

Two environment variables are needed to run ExternalDNS with Scaleway DNS:

- `SCW_ACCESS_KEY` which is the Access Key.
- `SCW_SECRET_KEY` which is the Secret Key.

## Deploy ExternalDNS

Connect your `kubectl` client to the cluster you want to test ExternalDNS with.
Then apply one of the following manifest files to deploy ExternalDNS.

The following examples are suited for development. For production usage, prefer Secrets over environment variables, and use a [tagged release](https://github.com/kubernetes-sigs/external-dns/releases).

### Manifest (for clusters without RBAC enabled)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        args:
        - --source=service # ingress is also possible
        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
        - --provider=scaleway
        env:
        - name: SCW_ACCESS_KEY
          value: "<your access key>"
        - name: SCW_SECRET_KEY
          value: "<your secret key>"
        ### Set if configuring using a config file. Make sure to create the Secret first.
        # - name: SCW_PROFILE
        #   value: "<profile name>"
        # - name: SCW_CONFIG_PATH
        #   value: /etc/scw/config.yaml
    #     volumeMounts:
    #     - name: scw-config
    #       mountPath: /etc/scw/config.yaml
    #       readOnly: true
    # volumes:
    # - name: scw-config
    #   secret:
    #     secretName: scw-config
    ###
```

### Manifest (for clusters with RBAC enabled)

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        args:
        - --source=service # ingress is also possible
        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
        - --provider=scaleway
        env:
        - name: SCW_ACCESS_KEY
          value: "<your access key>"
        - name: SCW_SECRET_KEY
          value: "<your secret key>"
        ### Set if configuring using a config file. Make sure to create the Secret first.
        # - name: SCW_PROFILE
        #   value: "<profile name>"
        # - name: SCW_CONFIG_PATH
        #   value: /etc/scw/config.yaml
    #     volumeMounts:
    #     - name: scw-config
    #       mountPath: /etc/scw/config.yaml
    #       readOnly: true
    # volumes:
    # - name: scw-config
    #   secret:
    #     secretName: scw-config
    ###
```

## Deploying an Nginx Service

Create a service file called 'nginx.yaml' with the following contents:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```

Note the annotation on the service; use the same hostname as the Scaleway DNS zone created above.

ExternalDNS uses this annotation to determine what services should be registered with DNS. Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.

Create the deployment and service:

```console
$ kubectl create -f nginx.yaml
```

Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.

Once the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the Scaleway DNS records.

## Verifying Scaleway DNS records

Check your [Scaleway DNS UI](https://console.scaleway.com/domains/external) to view the records for your Scaleway DNS zone.

Click on the zone created above if a different domain was used.

This should show the external IP address of the service as the A record for your domain.

## Cleanup

Now that we have verified that ExternalDNS will automatically manage Scaleway DNS records, we can delete the tutorial's example:

```
$ kubectl delete -f nginx.yaml
$ kubectl delete -f externaldns.yaml
```
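For the config-file option above, a minimal sketch of the mounted `config.yaml` might look like this. The profile name and values are placeholders; the format follows the scaleway-sdk-go config documentation linked earlier:

```yaml
# /etc/scw/config.yaml, mounted from the scw-config Secret.
# The profile name must match the SCW_PROFILE environment variable.
profiles:
  externaldns:
    access_key: <your access key>
    secret_key: <your secret key>
```

The Secret referenced by `secretName: scw-config` in the manifests could then be created with, for example, `kubectl create secret generic scw-config --from-file=config.yaml`.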
{"questions":"external-dns Make sure to use 0 5 5 version of ExternalDNS for this tutorial If you want to learn about how to use Linode DNS Manager read the following tutorials Managing DNS with Linode This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using Linode DNS Manager Linode","answers":"# Linode\n\nThis tutorial describes how to set up ExternalDNS for use within a Kubernetes cluster with Linode DNS Manager.\n\nMake sure to use version **>=0.5.5** of ExternalDNS for this tutorial.\n\n## Managing DNS with Linode\n\nIf you want to learn how to use Linode DNS Manager, read the following tutorials:\n\n[An Introduction to Managing DNS](https:\/\/www.linode.com\/docs\/platform\/manager\/dns-manager\/), and the [general documentation](https:\/\/www.linode.com\/docs\/networking\/dns\/)\n\n## Creating Linode Credentials\n\nGenerate a new OAuth token by following the instructions at [Access-and-Authentication](https:\/\/developers.linode.com\/api\/v4#section\/Access-and-Authentication)\n\nThe environment variable `LINODE_TOKEN` will be needed to run ExternalDNS with Linode.\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\nThen apply one of the following manifest files to deploy ExternalDNS.\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=linode\n        env:\n        - name: LINODE_TOKEN\n          value: \"YOUR_LINODE_API_KEY\"\n```\n\n### Manifest (for clusters with RBAC enabled)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=linode\n        env:\n        - name: LINODE_TOKEN\n          value: \"YOUR_LINODE_API_KEY\"\n```\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: my-app.example.com\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nNote the annotation on the service; use the same hostname as the Linode DNS zone created above.\n\nExternalDNS uses this annotation to determine what services should be registered with DNS. Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.\n\nCreate the deployment and service:\n\n```console\n$ kubectl create -f nginx.yaml\n```\n\nDepending on where you run your service, it can take a little while for your cloud provider to create an external IP for it.\n\nOnce the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the Linode DNS records.\n\n## Verifying Linode DNS records\n\nCheck your [Linode UI](https:\/\/cloud.linode.com\/domains) to view the records for your Linode DNS zone.\n\nClick on the zone for the one created above if a different domain was used.\n\nThis should show the external IP address of the service as the A record for your domain.\n\n## Cleanup\n\nNow that we have verified that ExternalDNS will automatically manage Linode DNS records, we can delete the tutorial's example:\n\n```\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```","site":"external-dns"}
{"questions":"external-dns functional It expects that zones you wish to add records to already exist PowerDNS Prerequisites The provider has been written for and tested against v4 1 x and thus requires PowerDNS Auth Server 4 1 x The PDNS provider expects that your PowerDNS instance is already setup and PowerDNS provider support was added via thus you need to use external dns version v0 5","answers":"# PowerDNS\n\n## Prerequisites\n\nThe provider has been written for and tested against [PowerDNS](https:\/\/github.com\/PowerDNS\/pdns) v4.1.x and thus requires **PowerDNS Auth Server >= 4.1.x**\n\nPowerDNS provider support was added via [this PR](https:\/\/github.com\/kubernetes-sigs\/external-dns\/pull\/373), thus you need to use external-dns version >= v0.5\n\nThe PDNS provider expects that your PowerDNS instance is already set up and\nfunctional. It expects that the zones you wish to add records to already exist\nand are configured correctly. It does not add, remove, or configure zones in\nany way.\n\n## Feature Support\n\nThe PDNS provider currently does not support:\n\n* Dry-running a configuration\n\n## Deployment\n\nDeploying external-dns for PowerDNS is nearly identical to deploying\nit for other providers. This is what a sample `deployment.yaml` looks like:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      # Only use if you're also using RBAC\n      # serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # or ingress or both\n        - --provider=pdns\n        - --pdns-server=\n        - --pdns-server-id=\n        - --pdns-api-key=\n        - --txt-owner-id=\n        - --domain-filter=external-dns-test.my-org.com # will make ExternalDNS see only the zones matching provided domain; omit to process all available zones in PowerDNS\n        - --log-level=debug\n        - --interval=30s\n```\n\n### Domain Filter (`--domain-filter`)\nWhen the `--domain-filter` argument is specified, external-dns will only create DNS records for host names (specified in ingress objects and services with the external-dns annotation) related to zones that match the `--domain-filter` argument in the external-dns deployment manifest.\n\ne.g. `--domain-filter=example.org` will allow for zone `example.org` and any zones in PowerDNS that end in `.example.org`, including `an.example.org`, i.e. the subdomains of example.org.\n\ne.g. `--domain-filter=.example.org` will allow *only* zones that end in `.example.org`, i.e. the subdomains of example.org but not the `example.org` zone itself.\n\nThe filter can also match parent zones. For example `--domain-filter=a.example.com` will allow for zone `example.com`. If you want to match parent zones, you cannot prepend your filter with a \".\", e.g. `--domain-filter=.example.com` will not attempt to match parent zones.\n\n### Regex Domain Filter (`--regex-domain-filter`)\n`--regex-domain-filter` limits possible domains and target zones with a regex. It overrides domain filters and can be specified only once.\n\n## RBAC\n\nIf your cluster is RBAC enabled, you also need to set up the following before you can run external-dns:\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n```\n\n## Testing and Verification\n\n**Important!**: Remember to replace `example.com` with your own domain throughout the following text.\n\nSpin up a simple \"Hello World\" HTTP server with the following spec (`kubectl apply -f`):\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: echo\nspec:\n  selector:\n    matchLabels:\n      app: echo\n  template:\n    metadata:\n      labels:\n        app: echo\n    spec:\n      containers:\n      - image: hashicorp\/http-echo\n        name: echo\n        ports:\n        - containerPort: 5678\n        args:\n          - -text=\"Hello World\"\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: echo\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: echo.example.com\nspec:\n  selector:\n    app: echo\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 5678\n```\n**Important!**: Don't run dig, nslookup or similar immediately (until you've\nconfirmed the record exists). You'll get hit by [negative DNS caching](https:\/\/tools.ietf.org\/html\/rfc2308), which is hard to flush.\n\nRun the following to make sure everything is in order:\n\n```bash\n$ kubectl get services echo\n$ kubectl get endpoints echo\n```\n\nMake sure everything looks correct, i.e. the service is defined and receives a\npublic IP, and that the endpoint also has a pod IP.\n\nOnce that's done, wait about 30s-1m (the interval for external-dns to kick in), then do:\n```bash\n$ curl -H \"X-API-Key: ${PDNS_API_KEY}\" ${PDNS_API_URL}\/api\/v1\/servers\/localhost\/zones\/example.com. | jq '.rrsets[] | select(.name | contains(\"echo\"))'\n```\n\nOnce the API shows the record correctly, you can double-check your record using:\n```bash\n$ dig @${PDNS_FQDN} echo.example.com.\n```\n\n## Using CRD source to manage DNS records in PowerDNS\n\n[CRD source](https:\/\/github.com\/kubernetes-sigs\/external-dns\/blob\/master\/docs\/contributing\/crd-source.md) provides a generic mechanism and declarative way to manage DNS records in PowerDNS using external-dns.\n\n```bash\nexternal-dns --source=crd --provider=pdns \\\n  --pdns-server= \\\n  --pdns-api-key= \\\n  --domain-filter=example.com \\\n  --managed-record-types=A \\\n  --managed-record-types=CNAME \\\n  --managed-record-types=TXT \\\n  --managed-record-types=MX \\\n  --managed-record-types=SRV\n```\n\nNot all record types are enabled by default, so we can enable the required record types using `--managed-record-types`.\n\n* Example for record type `A`\n\n```yaml\napiVersion: externaldns.k8s.io\/v1alpha1\nkind: DNSEndpoint\nmetadata:\n  name: examplearecord\nspec:\n  endpoints:\n  - dnsName: example.com\n    recordTTL: 60\n    recordType: A\n    targets:\n    - 10.0.0.1\n```\n\n* Example for record type `CNAME`\n\n```yaml\napiVersion: externaldns.k8s.io\/v1alpha1\nkind: DNSEndpoint\nmetadata:\n  name: examplecnamerecord\nspec:\n  endpoints:\n  - dnsName: test-a.example.com\n    recordTTL: 300\n    recordType: CNAME\n    targets:\n    - example.com\n```\n\n* Example for record type `TXT`\n\n```yaml\napiVersion: externaldns.k8s.io\/v1alpha1\nkind: DNSEndpoint\nmetadata:\n  name: exampletxtrecord\nspec:\n  endpoints:\n  - dnsName: example.com\n    recordTTL: 3600\n    recordType: TXT\n    targets:\n      - '\"v=spf1 include:spf.protection.example.com include:example.org -all\"'\n      - '\"apple-domain-verification=XXXXXXXXXXXXX\"'\n```\n\n* Example for record type `MX`\n\n```yaml\napiVersion: externaldns.k8s.io\/v1alpha1\nkind: DNSEndpoint\nmetadata:\n  name: examplemxrecord\nspec:\n  endpoints:\n  - dnsName: example.com\n    recordTTL: 3600\n    recordType: MX\n    targets:\n      - \"10 mailhost1.example.com\"\n```\n\n* Example for record type `SRV`\n\n```yaml\napiVersion: externaldns.k8s.io\/v1alpha1\nkind: DNSEndpoint\nmetadata:\n  name: examplesrvrecord\nspec:\n  endpoints:\n  - dnsName: _service._tls.example.com\n    recordTTL: 180\n    recordType: SRV\n    targets:\n      - \"100 1 443 service.example.com\"\n```","site":"external-dns"}
{"questions":"external-dns This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using Gandi Create a new DNS zone where you want to create your records in Let s use as an example here Make sure the zone uses Make sure to use 0 7 7 version of ExternalDNS for this tutorial Gandi Creating a Gandi DNS zone domain","answers":"# Gandi\n\nThis tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using Gandi.\n\nMake sure to use **>=0.7.7** version of ExternalDNS for this tutorial.\n\n## Creating a Gandi DNS zone (domain)\n\nCreate a new DNS zone where you want to create your records in. Let's use `example.com` as an example here. Make sure the zone uses\n\n## Creating Gandi Personal Access Token (PAT)\n\nGenerate a Personal Access Token on [your account](https:\/\/admin.gandi.net) (click on \"User Settings\") with `Manage domain name technical configurations` permission.\n\nThe environment variable `GANDI_PAT` will be needed to run ExternalDNS with Gandi.\n\nYou can also set `GANDI_KEY` if you have an old API key.\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with.\nThen apply one of the following manifests file to deploy ExternalDNS.\n\n### Manifest (for clusters without RBAC enabled)\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: external-dns\n  strategy:\n    type: Recreate\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=gandi\n        env:\n        - name: GANDI_PAT\n          value: 
\"YOUR_GANDI_PAT\"\n```\n\n### Manifest (for clusters with RBAC enabled)\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"] \n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\",\"watch\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: external-dns\n  strategy:\n    type: Recreate\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service # ingress is also possible\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=gandi\n        env:\n        - name: GANDI_PAT\n          value: \"YOUR_GANDI_PAT\"\n```\n\n\n## Deploying an Nginx Service\n\nCreate a service file called 'nginx.yaml' with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - image: nginx\n        name: nginx\n        
ports:\n        - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: my-app.example.com\nspec:\n  selector:\n    app: nginx\n  type: LoadBalancer\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 80\n```\n\nNote the annotation on the service; use the same hostname as the Gandi Domain. Make sure that your Domain is configured to use Live-DNS.\n\nExternalDNS uses this annotation to determine which services should be registered with DNS. Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.\n\nCreate the deployment and service:\n\n```console\n$ kubectl create -f nginx.yaml\n```\n\nDepending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.\n\nOnce the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the Gandi DNS records.\n\n## Verifying Gandi DNS records\n\nCheck your [Gandi Dashboard](https:\/\/admin.gandi.net\/domain) to view the records for your Gandi DNS zone.\n\nClick on the zone for the one created above if a different domain was used.\n\nThis should show the external IP address of the service as the A record for your domain.\n\n## Cleanup\n\nNow that we have verified that ExternalDNS will automatically manage Gandi DNS records, we can delete the tutorial's example:\n\n```console\n$ kubectl delete -f nginx.yaml\n$ kubectl delete -f externaldns.yaml\n```\n\n## Additional options\n\nIf you're using organizations to separate your domains, you can pass the organization's ID in an environment variable called `GANDI_SHARING_ID` to get access to it.","site":"external-dns"}
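The dashboard check in the Gandi tutorial above can also be cross-checked from the command line; a minimal sketch, assuming the tutorial's example hostname `my-app.example.com` and a machine with `dig` available:

```console
# Query public DNS for the A record that ExternalDNS should have created.
# my-app.example.com is the hostname used in the tutorial's annotation.
$ dig +short A my-app.example.com
```

The output should match the external IP of the nginx service; an empty answer usually just means the change has not propagated yet.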
{"questions":"external-dns Make sure to use 0 11 0 version of ExternalDNS for this tutorial are being run on an orchestration node This tutorial uses for all Azure DNS Azure commands and assumes that the Kubernetes cluster was created via Azure Container Services and commands This tutorial describes how to setup ExternalDNS for with","answers":"# Azure DNS\n\nThis tutorial describes how to set up ExternalDNS for [Azure DNS](https:\/\/azure.microsoft.com\/services\/dns\/) with [Azure Kubernetes Service](https:\/\/docs.microsoft.com\/azure\/aks\/).\n\nMake sure to use a **>=0.11.0** version of ExternalDNS for this tutorial.\n\nThis tutorial uses [Azure CLI 2.0](https:\/\/docs.microsoft.com\/en-us\/cli\/azure\/install-azure-cli) for all\nAzure commands and assumes that the Kubernetes cluster was created via Azure Container Services and `kubectl` commands\nare being run on an orchestration node.\n\n## Creating an Azure DNS zone\n\nThe Azure provider for ExternalDNS will find suitable zones for domains it manages; it will not automatically create zones.\n\nFor this tutorial, we will create an Azure resource group named `MyDnsResourceGroup` that can easily be deleted later:\n\n```bash\n$ az group create --name \"MyDnsResourceGroup\" --location \"eastus\"\n```\n\nSubstitute a more suitable location for the resource group if desired.\n\nNext, create an Azure DNS zone for `example.com`:\n\n```bash\n$ az network dns zone create --resource-group \"MyDnsResourceGroup\" --name \"example.com\"\n```\n\nSubstitute a domain you own for `example.com` if desired.\n\nIf using your own domain that was registered with a third-party domain registrar, you should point your domain's name servers to the values in the `nameServers` field from the JSON data returned by the `az network dns zone create` command. 
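If the JSON output from `az network dns zone create` is no longer at hand, the name servers can be listed again at any time; a sketch assuming the tutorial's resource group and zone names (the `nameServers` property is part of the zone object):

```bash
# (sketch) re-list the name servers for the zone created above
$ az network dns zone show --resource-group "MyDnsResourceGroup" \
    --name "example.com" --query "nameServers" --output tsv
```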
Please consult your registrar's documentation on how to do that.\n\n### Internal Load Balancer\n\nTo create internal load balancers, one can set the annotation `service.beta.kubernetes.io\/azure-load-balancer-internal` to `true` on the resource.\n**Note**: The AKS cluster's control plane managed identity needs to be granted the `Network Contributor` role to update the subnet. For more details refer to [Use an internal load balancer with Azure Kubernetes Service (AKS)](https:\/\/learn.microsoft.com\/en-us\/azure\/aks\/internal-lb).\n\n## Configuration file\n\nThe Azure provider will reference a configuration file called `azure.json`.  The preferred way to inject the configuration file is by using a Kubernetes secret. The secret should contain an object named `azure.json` with content similar to this:\n\n```json\n{\n  \"tenantId\": \"01234abc-de56-ff78-abc1-234567890def\",\n  \"subscriptionId\": \"01234abc-de56-ff78-abc1-234567890def\",\n  \"resourceGroup\": \"MyDnsResourceGroup\",\n  \"aadClientId\": \"01234abc-de56-ff78-abc1-234567890def\",\n  \"aadClientSecret\": \"uKiuXeiwui4jo9quae9o\"\n}\n```\n\nThe following fields are used:\n\n* `tenantId` (**required**) - run `az account show --query \"tenantId\"`, or find it by selecting Azure Active Directory in the Azure Portal and checking the _Directory ID_ under Properties.\n* `subscriptionId` (**required**) - run `az account show --query \"id\"`, or find it by selecting Subscriptions in the Azure Portal.\n* `resourceGroup` (**required**) is the Resource Group created in a previous step that contains the Azure DNS Zone.\n* `aadClientId` is associated with the Service Principal. This is used with the Service Principal or Workload Identity methods documented in the next section.\n* `aadClientSecret` is associated with the Service Principal. 
This is only used with the Service Principal method documented in the next section.\n* `useManagedIdentityExtension` - this is set to `true` if you use either the AKS Kubelet Identity or AAD Pod Identities methods documented in the next section.\n* `userAssignedIdentityID` - this contains the client ID of the managed identity when using the AAD Pod Identities method documented in the next section.\n* `activeDirectoryAuthorityHost` - this contains the URI to overwrite the default provided AAD endpoint. This is useful for providing additional support where the endpoint is not available in the default cloud config from the [azure-sdk-for-go](https:\/\/pkg.go.dev\/github.com\/Azure\/azure-sdk-for-go\/sdk\/azcore\/cloud#pkg-variables).\n* `useWorkloadIdentityExtension` - this is set to `true` if you use the Workload Identity method documented in the next section.\n\nThe Azure DNS provider expects, by default, that the configuration file is at `\/etc\/kubernetes\/azure.json`.  This can be overridden with the `--azure-config-file` option when starting ExternalDNS.\n\n## Permissions to modify DNS zone\n\nExternalDNS needs permissions to make changes to the Azure DNS zone. There are four ways to configure the access needed:\n\n- [Service Principal](#service-principal)\n- [Managed Identity Using AKS Kubelet Identity](#managed-identity-using-aks-kubelet-identity)\n- [Managed Identity Using AAD Pod Identities](#managed-identity-using-aad-pod-identities)\n- [Managed Identity Using Workload Identity](#managed-identity-using-workload-identity)\n\n### Service Principal\n\nThese permissions are defined in a Service Principal that should be made available to ExternalDNS as a configuration file `azure.json`.\n\n#### Creating a service principal\n\nA Service Principal with a minimum access level of `DNS Zone Contributor` or `Contributor` to the DNS zone(s) and `Reader` to the resource group containing the Azure DNS zone(s) is necessary for ExternalDNS to be able to edit DNS records. 
However, other more permissive access levels will work too (e.g. `Contributor` to the resource group or the whole subscription).\n\nThis is an Azure CLI example of how to query the Azure API for the information required for the Resource Group and DNS zone you would have already created in previous steps (requires `azure-cli` and `jq`).\n\n```bash\n$ EXTERNALDNS_NEW_SP_NAME=\"ExternalDnsServicePrincipal\" # name of the service principal\n$ AZURE_DNS_ZONE_RESOURCE_GROUP=\"MyDnsResourceGroup\" # name of resource group where dns zone is hosted\n$ AZURE_DNS_ZONE=\"example.com\" # DNS zone name like example.com or sub.example.com\n\n# Create the service principal\n$ DNS_SP=$(az ad sp create-for-rbac --name $EXTERNALDNS_NEW_SP_NAME)\n$ EXTERNALDNS_SP_APP_ID=$(echo $DNS_SP | jq -r '.appId')\n$ EXTERNALDNS_SP_PASSWORD=$(echo $DNS_SP | jq -r '.password')\n```\n\n#### Assign the rights for the service principal\n\nGrant the service principal access to the Azure DNS zone.\n\n```bash\n# fetch DNS id used to grant access to the service principal\n$ DNS_ID=$(az network dns zone show --name $AZURE_DNS_ZONE \\\n --resource-group $AZURE_DNS_ZONE_RESOURCE_GROUP --query \"id\" --output tsv)\n\n# fetch resource group id used to grant read access to the service principal\n$ RG_ID=$(az group show --name $AZURE_DNS_ZONE_RESOURCE_GROUP --query \"id\" --output tsv)\n\n# 1. as a reader to the resource group\n$ az role assignment create --role \"Reader\" --assignee $EXTERNALDNS_SP_APP_ID --scope $RG_ID\n\n# 2. 
as a contributor to the DNS Zone itself\n$ az role assignment create --role \"Contributor\" --assignee $EXTERNALDNS_SP_APP_ID --scope $DNS_ID\n```\n\n#### Creating a configuration file for the service principal\n\nCreate the file `azure.json` with values gathered from the previous steps.\n\n```bash\ncat <<-EOF > \/local\/path\/to\/azure.json\n{\n  \"tenantId\": \"$(az account show --query tenantId -o tsv)\",\n  \"subscriptionId\": \"$(az account show --query id -o tsv)\",\n  \"resourceGroup\": \"$AZURE_DNS_ZONE_RESOURCE_GROUP\",\n  \"aadClientId\": \"$EXTERNALDNS_SP_APP_ID\",\n  \"aadClientSecret\": \"$EXTERNALDNS_SP_PASSWORD\"\n}\nEOF\n```\n\nUse this file to create a Kubernetes secret:\n\n```bash\n$ kubectl create secret generic azure-config-file --namespace \"default\" --from-file \/local\/path\/to\/azure.json\n```\n\n### Managed identity using AKS Kubelet identity\n\nThe [managed identity](https:\/\/docs.microsoft.com\/azure\/active-directory\/managed-identities-azure-resources\/overview) that is assigned to the underlying node pool in the AKS cluster can be given permissions to access Azure DNS.  Managed identities are essentially service principals whose lifecycle is managed for you; for example, deleting the AKS cluster also deletes the service principals associated with it.  
The managed identity assigned to the Kubernetes node pool, or more specifically to the [VMSS](https:\/\/docs.microsoft.com\/azure\/virtual-machine-scale-sets\/overview), is called the Kubelet identity.\n\nManaged identities were previously called MSI (Managed Service Identity) and are enabled by default when creating an AKS cluster.\n\nNote that permissions granted to this identity will be accessible to all containers running inside the Kubernetes cluster, not just the ExternalDNS container(s).\n\nFor the managed identity, the contents of `azure.json` should be similar to this:\n\n```json\n{\n  \"tenantId\": \"01234abc-de56-ff78-abc1-234567890def\",\n  \"subscriptionId\": \"01234abc-de56-ff78-abc1-234567890def\",\n  \"resourceGroup\": \"MyDnsResourceGroup\",\n  \"useManagedIdentityExtension\": true,\n  \"userAssignedIdentityID\": \"01234abc-de56-ff78-abc1-234567890def\"\n}\n```\n\n#### Fetching the Kubelet identity\n\nFor this process, you will need to get the kubelet identity:\n\n```bash\n$ PRINCIPAL_ID=$(az aks show --resource-group $CLUSTER_GROUP --name $CLUSTERNAME \\\n  --query \"identityProfile.kubeletidentity.objectId\" --output tsv)\n$ IDENTITY_CLIENT_ID=$(az aks show --resource-group $CLUSTER_GROUP --name $CLUSTERNAME \\\n  --query \"identityProfile.kubeletidentity.clientId\" --output tsv)\n```\n\n#### Assign rights for the Kubelet identity\n\nGrant the kubelet identity access to the Azure DNS zone.\n\n```bash\n$ AZURE_DNS_ZONE=\"example.com\" # DNS zone name like example.com or sub.example.com\n$ AZURE_DNS_ZONE_RESOURCE_GROUP=\"MyDnsResourceGroup\" # resource group where DNS zone is hosted\n\n# fetch DNS id used to grant access to the kubelet identity\n$ DNS_ID=$(az network dns zone show --name $AZURE_DNS_ZONE \\\n  --resource-group $AZURE_DNS_ZONE_RESOURCE_GROUP --query \"id\" --output tsv)\n\n$ az role assignment create --role \"DNS Zone Contributor\" --assignee $PRINCIPAL_ID --scope $DNS_ID\n```\n\n#### Creating a configuration file for the kubelet 
identity\n\nCreate the file `azure.json` with values gathered from the previous steps.\n\n```bash\ncat <<-EOF > \/local\/path\/to\/azure.json\n{\n  \"tenantId\": \"$(az account show --query tenantId -o tsv)\",\n  \"subscriptionId\": \"$(az account show --query id -o tsv)\",\n  \"resourceGroup\": \"$AZURE_DNS_ZONE_RESOURCE_GROUP\",\n  \"useManagedIdentityExtension\": true,\n  \"userAssignedIdentityID\": \"$IDENTITY_CLIENT_ID\"\n}\nEOF\n```\n\nUse the `azure.json` file to create a Kubernetes secret:\n\n```bash\n$ kubectl create secret generic azure-config-file --namespace \"default\" --from-file \/local\/path\/to\/azure.json\n```\n\n### Managed identity using AAD Pod Identities\n\nFor this process, we will create a [managed identity](https:\/\/docs.microsoft.com\/azure\/active-directory\/managed-identities-azure-resources\/overview) that will be explicitly used by the ExternalDNS container.  This process is similar to the Kubelet identity approach, except that this managed identity is not associated with the Kubernetes node pool, but rather with the ExternalDNS containers themselves.\n\n#### Enable the AAD Pod Identities feature\n\nFor this solution, the [AAD Pod Identities](https:\/\/docs.microsoft.com\/azure\/aks\/use-azure-ad-pod-identity) preview feature needs to be enabled.  
The commands below enable this feature:\n\n```bash\n$ az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService\n$ az feature register --name AutoUpgradePreview --namespace Microsoft.ContainerService\n$ az extension add --name aks-preview\n$ az extension update --name aks-preview\n$ az provider register --namespace Microsoft.ContainerService\n```\n\n#### Deploy the AAD Pod Identities service\n\nOnce enabled, you can update your cluster and install the needed services for the [AAD Pod Identities](https:\/\/docs.microsoft.com\/azure\/aks\/use-azure-ad-pod-identity) feature.\n\n```bash\n$ AZURE_AKS_RESOURCE_GROUP=\"my-aks-cluster-group\" # name of resource group where aks cluster was created\n$ AZURE_AKS_CLUSTER_NAME=\"my-aks-cluster\" # name of aks cluster previously created\n\n$ az aks update --resource-group ${AZURE_AKS_RESOURCE_GROUP} --name ${AZURE_AKS_CLUSTER_NAME} --enable-pod-identity\n```\n\nNote that, if you use the default network plugin `kubenet`, then you need to add the command line option `--enable-pod-identity-with-kubenet` to the above command.\n\n#### Creating the managed identity\n\nAfter this process is finished, create a managed identity.\n\n```bash\n$ IDENTITY_RESOURCE_GROUP=$AZURE_AKS_RESOURCE_GROUP # custom group or reuse AKS group\n$ IDENTITY_NAME=\"example-com-identity\"\n\n# create a managed identity\n$ az identity create --resource-group \"${IDENTITY_RESOURCE_GROUP}\" --name \"${IDENTITY_NAME}\"\n```\n\n#### Assign rights for the managed identity\n\nGrant the managed identity access to the Azure DNS zone.\n\n```bash\n$ AZURE_DNS_ZONE_RESOURCE_GROUP=\"MyDnsResourceGroup\" # name of resource group where dns zone is hosted\n$ AZURE_DNS_ZONE=\"example.com\" # DNS zone name like example.com or sub.example.com\n\n# fetch identity client id from managed identity created earlier\n$ IDENTITY_CLIENT_ID=$(az identity show --resource-group \"${IDENTITY_RESOURCE_GROUP}\" \\\n  --name 
\"${IDENTITY_NAME}\" --query \"clientId\" --output tsv)\n# fetch DNS id used to grant access to the managed identity\n$ DNS_ID=$(az network dns zone show --name \"${AZURE_DNS_ZONE}\" \\\n  --resource-group \"${AZURE_DNS_ZONE_RESOURCE_GROUP}\" --query \"id\" --output tsv)\n\n$ az role assignment create --role \"DNS Zone Contributor\" \\\n  --assignee \"${IDENTITY_CLIENT_ID}\" --scope \"${DNS_ID}\"\n```\n\n#### Creating a configuration file for the managed identity\n\nCreate the file `azure.json` with the values from previous steps:\n\n```bash\ncat <<-EOF > \/local\/path\/to\/azure.json\n{\n  \"tenantId\": \"$(az account show --query tenantId -o tsv)\",\n  \"subscriptionId\": \"$(az account show --query id -o tsv)\",\n  \"resourceGroup\": \"$AZURE_DNS_ZONE_RESOURCE_GROUP\",\n  \"useManagedIdentityExtension\": true,\n  \"userAssignedIdentityID\": \"$IDENTITY_CLIENT_ID\"\n}\nEOF\n```\n\nUse the `azure.json` file to create a Kubernetes secret:\n\n```bash\n$ kubectl create secret generic azure-config-file --namespace \"default\" --from-file \/local\/path\/to\/azure.json\n```\n\n#### Creating an Azure identity binding\n\nA binding between the managed identity and the ExternalDNS pods needs to be setup by creating `AzureIdentity` and `AzureIdentityBinding` resources.  This will allow appropriately labeled ExternalDNS pods to authenticate using the managed identity.  
When AAD Pod Identity feature is enabled from previous steps above, the `az aks pod-identity add` can be used to create these resources:\n\n```bash\n$ IDENTITY_RESOURCE_ID=$(az identity show --resource-group ${IDENTITY_RESOURCE_GROUP} \\\n  --name ${IDENTITY_NAME} --query id --output tsv)\n\n$ az aks pod-identity add --resource-group ${AZURE_AKS_RESOURCE_GROUP}  \\\n  --cluster-name ${AZURE_AKS_CLUSTER_NAME} --namespace \"default\" \\\n  --name \"external-dns\" --identity-resource-id ${IDENTITY_RESOURCE_ID}\n```\n\nThis will add something similar to the following resources:\n\n```yaml\napiVersion: aadpodidentity.k8s.io\/v1\nkind: AzureIdentity\nmetadata:\n  labels:\n    addonmanager.kubernetes.io\/mode: Reconcile\n    kubernetes.azure.com\/managedby: aks\n  name: external-dns\nspec:\n  clientID: $IDENTITY_CLIENT_ID\n  resourceID: $IDENTITY_RESOURCE_ID\n  type: 0\n---\napiVersion: aadpodidentity.k8s.io\/v1\nkind: AzureIdentityBinding\nmetadata:\n  annotations:\n  labels:\n    addonmanager.kubernetes.io\/mode: Reconcile\n    kubernetes.azure.com\/managedby: aks\n  name: external-dns-binding\nspec:\n  azureIdentity: external-dns\n  selector: external-dns\n```\n\n#### Update ExternalDNS labels\n\nWhen deploying ExternalDNS, you want to make sure that deployed pod(s) will have the label `aadpodidbinding: external-dns` to enable AAD Pod Identities. You can patch an existing deployment of ExternalDNS with this command:\n\n```bash\nkubectl patch deployment external-dns --namespace \"default\" --patch \\\n '{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"aadpodidbinding\": \"external-dns\"}}}}}'\n```\n\n### Managed identity using Workload Identity\n\nFor this process, we will create a [managed identity](https:\/\/docs.microsoft.com\/\/azure\/active-directory\/managed-identities-azure-resources\/overview) that will be explicitly used by the ExternalDNS container. 
This process is somewhat similar to Pod Identity except that this managed identity is associated with a kubernetes service account.\n\n#### Deploy OIDC issuer and Workload Identity services\n\nUpdate your cluster to install [OIDC Issuer](https:\/\/learn.microsoft.com\/en-us\/azure\/aks\/use-oidc-issuer) and [Workload Identity](https:\/\/learn.microsoft.com\/en-us\/azure\/aks\/workload-identity-deploy-cluster):\n\n```bash\n$ AZURE_AKS_RESOURCE_GROUP=\"my-aks-cluster-group\" # name of resource group where aks cluster was created\n$ AZURE_AKS_CLUSTER_NAME=\"my-aks-cluster\" # name of aks cluster previously created\n\n$ az aks update --resource-group ${AZURE_AKS_RESOURCE_GROUP} --name ${AZURE_AKS_CLUSTER_NAME} --enable-oidc-issuer --enable-workload-identity\n```\n\n#### Create a managed identity\n\nCreate a managed identity:\n\n```bash\n$ IDENTITY_RESOURCE_GROUP=$AZURE_AKS_RESOURCE_GROUP # custom group or reuse AKS group\n$ IDENTITY_NAME=\"example-com-identity\"\n\n# create a managed identity\n$ az identity create --resource-group \"${IDENTITY_RESOURCE_GROUP}\" --name \"${IDENTITY_NAME}\"\n```\n\n#### Assign a role to the managed identity\n\nGrant access to Azure DNS zone for the managed identity:\n\n```bash\n$ AZURE_DNS_ZONE_RESOURCE_GROUP=\"MyDnsResourceGroup\" # name of resource group where dns zone is hosted\n$ AZURE_DNS_ZONE=\"example.com\" # DNS zone name like example.com or sub.example.com\n\n# fetch identity client id from managed identity created earlier\n$ IDENTITY_CLIENT_ID=$(az identity show --resource-group \"${IDENTITY_RESOURCE_GROUP}\" \\\n  --name \"${IDENTITY_NAME}\" --query \"clientId\" --output tsv)\n# fetch DNS id used to grant access to the managed identity\n$ DNS_ID=$(az network dns zone show --name \"${AZURE_DNS_ZONE}\" \\\n  --resource-group \"${AZURE_DNS_ZONE_RESOURCE_GROUP}\" --query \"id\" --output tsv)\n$ RESOURCE_GROUP_ID=$(az group show --name \"${AZURE_DNS_ZONE_RESOURCE_GROUP}\" --query \"id\" --output tsv)\n\n$ az role assignment create 
--role \"DNS Zone Contributor\" \\\n  --assignee \"${IDENTITY_CLIENT_ID}\" --scope \"${DNS_ID}\"\n$ az role assignment create --role \"Reader\" \\\n  --assignee \"${IDENTITY_CLIENT_ID}\" --scope \"${RESOURCE_GROUP_ID}\"\n```\n\n#### Create a federated identity credential\n\nA binding between the managed identity and the ExternalDNS service account needs to be set up by creating a federated identity resource:\n\n```bash\n$ OIDC_ISSUER_URL=\"$(az aks show -n myAKSCluster -g myResourceGroup --query \"oidcIssuerProfile.issuerUrl\" -otsv)\"\n\n$ az identity federated-credential create --name ${IDENTITY_NAME} --identity-name ${IDENTITY_NAME} --resource-group ${AZURE_AKS_RESOURCE_GROUP} --issuer \"$OIDC_ISSUER_URL\" --subject \"system:serviceaccount:default:external-dns\"\n```\n\nNOTE: make sure the federated credential refers to the correct namespace and service account (`system:serviceaccount:<NAMESPACE>:<SERVICE_ACCOUNT>`).\n\n#### Helm\n\nWhen deploying external-dns with Helm, you need to create a secret to store the Azure config (see below) and create a workload identity (out of scope here) before you can install the chart.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: external-dns-azure\ntype: Opaque\nstringData:\n  azure.json: |\n    {\n      \"tenantId\": \"<TENANT_ID>\",\n      \"subscriptionId\": \"<SUBSCRIPTION_ID>\",\n      \"resourceGroup\": \"<AZURE_DNS_ZONE_RESOURCE_GROUP>\",\n      \"useWorkloadIdentityExtension\": true\n    }\n```\n\nOnce you have created the secret and have a workload identity, you can install the chart with the following values.\n\n```yaml\nfullnameOverride: external-dns\n\nserviceAccount:\n  labels:\n    azure.workload.identity\/use: \"true\"\n  annotations:\n    azure.workload.identity\/client-id: <IDENTITY_CLIENT_ID>\n\npodLabels:\n  azure.workload.identity\/use: \"true\"\n\nextraVolumes:\n  - name: azure-config-file\n    secret:\n      secretName: external-dns-azure\n\nextraVolumeMounts:\n  - name: azure-config-file\n    mountPath: 
\/etc\/kubernetes\n    readOnly: true\n\nprovider:\n  name: azure\n```\n\nNOTE: make sure the pod is restarted whenever you make a configuration change.\n\n#### kubectl (alternative)\n\n##### Create a configuration file for the managed identity\n\nCreate the file `azure.json` with the values from previous steps:\n\n```bash\ncat <<-EOF > \/local\/path\/to\/azure.json\n{\n  \"subscriptionId\": \"$(az account show --query id -o tsv)\",\n  \"resourceGroup\": \"$AZURE_DNS_ZONE_RESOURCE_GROUP\",\n  \"useWorkloadIdentityExtension\": true\n}\nEOF\n```\n\nNOTE: it's also possible to specify (or override) the client ID set in the next section through the `aadClientId` field in this `azure.json` file.\n\nUse the `azure.json` file to create a Kubernetes secret:\n\n```bash\n$ kubectl create secret generic azure-config-file --namespace \"default\" --from-file \/local\/path\/to\/azure.json\n```\n\n##### Update labels and annotations on ExternalDNS service account\n\nTo instruct the Workload Identity webhook to inject a projected token into the ExternalDNS pod, the pod needs to have the label `azure.workload.identity\/use: \"true\"` (before Workload Identity 1.0.0, this label was supposed to be set on the service account instead). 
Also, the service account needs to have the annotation `azure.workload.identity\/client-id: <IDENTITY_CLIENT_ID>`.\n\nTo patch the existing serviceaccount and deployment, use the following commands:\n\n```bash\n$ kubectl patch serviceaccount external-dns --namespace \"default\" --patch \\\n \"{\\\"metadata\\\": {\\\"annotations\\\": {\\\"azure.workload.identity\/client-id\\\": \\\"${IDENTITY_CLIENT_ID}\\\"}}}\"\n$ kubectl patch deployment external-dns --namespace \"default\" --patch \\\n '{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"azure.workload.identity\/use\": \"true\"}}}}}'\n```\n\nNOTE: it's also possible to specify (or override) the client ID through the `aadClientId` field in `azure.json`.\n\nNOTE: make sure the pod is restarted whenever you make a configuration change.\n\n## Throttling\n\nWhen the ExternalDNS managed zones list doesn't change frequently, one can set `--azure-zones-cache-duration` (zones list cache time-to-live). The zones list cache is disabled by default, with a value of 0s.\n\n## Ingress used with ExternalDNS\n\nThis deployment assumes that you will be using nginx-ingress. When using nginx-ingress, do not deploy it as a DaemonSet: doing so causes nginx-ingress to write the Cluster IP of the backend pods into the ingress `status.loadBalancer.ip` property, and external-dns then writes the Cluster IP(s) to DNS instead of 
the nginx-ingress service external IP.\n\nEnsure that your nginx-ingress deployment has the following arg added to it:\n\n```\n- --publish-service=namespace\/nginx-ingress-controller-svcname\n```\n\nFor more details see here: [nginx-ingress external-dns](https:\/\/github.com\/kubernetes-sigs\/external-dns\/blob\/HEAD\/docs\/faq.md#why-is-externaldns-only-adding-a-single-ip-address-in-route-53-on-aws-when-using-the-nginx-ingress-controller-how-do-i-get-it-to-use-the-fqdn-of-the-elb-assigned-to-my-nginx-ingress-controller-service-instead)\n\n## Deploy ExternalDNS\n\nConnect your `kubectl` client to the cluster you want to test ExternalDNS with. Then apply one of the following manifest files to deploy ExternalDNS.\n\nThe deployment assumes that ExternalDNS will be installed into the `default` namespace.  If this namespace is different, the `ClusterRoleBinding` will need to be updated to reflect the desired alternative namespace, such as `external-dns`, `kube-addons`, etc.\n\n### Manifest (for clusters without RBAC enabled)\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      containers:\n      - name: external-dns\n        image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n        - --provider=azure\n        - --azure-resource-group=MyDnsResourceGroup # (optional) use the DNS zones from the tutorial's resource group\n        volumeMounts:\n        - name: azure-config-file\n          mountPath: \/etc\/kubernetes\n          readOnly: true\n      volumes:\n      - name: azure-config-file\n        secret:\n          secretName: azure-config-file\n```\n\n### 
Manifest (for clusters with RBAC enabled, cluster access)\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n  - apiGroups: [\"\"]\n    resources: [\"services\",\"endpoints\",\"pods\", \"nodes\"]\n    verbs: [\"get\",\"watch\",\"list\"]\n  - apiGroups: [\"extensions\",\"networking.k8s.io\"]\n    resources: [\"ingresses\"]\n    verbs: [\"get\",\"watch\",\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n  - kind: ServiceAccount\n    name: external-dns\n    namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n        - name: external-dns\n          image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n          args:\n            - --source=service\n            - --source=ingress\n            - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n            - --provider=azure\n            - --azure-resource-group=MyDnsResourceGroup # (optional) use the DNS zones from the tutorial's resource group\n            - --txt-prefix=externaldns-\n          volumeMounts:\n            - name: azure-config-file\n              mountPath: \/etc\/kubernetes\n              readOnly: true\n      volumes:\n        - name: azure-config-file\n          secret:\n            secretName: azure-config-file\n```\n\n### Manifest (for clusters with RBAC enabled, namespace access)\nThis configuration is the same as above, except it only 
requires privileges for the current namespace, not for the whole cluster.\nHowever, access to [nodes](https:\/\/kubernetes.io\/docs\/concepts\/architecture\/nodes\/) requires cluster access, so when using this manifest,\nservices with type `NodePort` will be skipped!\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: Role\nmetadata:\n  name: external-dns\nrules:\n  - apiGroups: [\"\"]\n    resources: [\"services\",\"endpoints\",\"pods\"]\n    verbs: [\"get\",\"watch\",\"list\"]\n  - apiGroups: [\"extensions\",\"networking.k8s.io\"]\n    resources: [\"ingresses\"]\n    verbs: [\"get\",\"watch\",\"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: RoleBinding\nmetadata:\n  name: external-dns\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: external-dns\nsubjects:\n  - kind: ServiceAccount\n    name: external-dns\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n        - name: external-dns\n          image: registry.k8s.io\/external-dns\/external-dns:v0.15.0\n          args:\n            - --source=service\n            - --source=ingress\n            - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.\n            - --provider=azure\n            - --azure-resource-group=MyDnsResourceGroup # (optional) use the DNS zones from the tutorial's resource group\n          volumeMounts:\n            - name: azure-config-file\n              mountPath: \/etc\/kubernetes\n              readOnly: true\n      volumes:\n        - name: azure-config-file\n          secret:\n            secretName: azure-config-file\n```\n\nCreate the 
deployment for ExternalDNS:\n\n```bash\n$ kubectl create --namespace \"default\" --filename externaldns.yaml\n```\n\n## Ingress Option: Expose an nginx service with an ingress\n\nCreate a file called `nginx.yaml` with the following contents:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n        - image: nginx\n          name: nginx\n          ports:\n          - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-svc\nspec:\n  ports:\n    - port: 80\n      protocol: TCP\n      targetPort: 80\n  selector:\n    app: nginx\n  type: ClusterIP\n\n---\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: nginx\nspec:\n  ingressClassName: nginx\n  rules:\n    - host: server.example.com\n      http:\n        paths:\n          - path: \/\n            pathType: Prefix\n            backend:\n              service:\n                name: nginx-svc\n                port:\n                  number: 80\n```\n\nWhen you use ExternalDNS with Ingress resources, it automatically creates DNS records based on the hostnames listed in those Ingress objects.\nThose hostnames must match the filters that you defined (if any):\n\n- By default, `--domain-filter` filters the Azure DNS zones.\n- If you use `--domain-filter` together with `--zone-name-filter`, the behavior changes: `--domain-filter` then filters Ingress domains, not the Azure DNS zone name.\n\nWhen those hostnames are removed or renamed, the corresponding DNS records are also altered.\n\nCreate the deployment, service, and ingress object:\n\n```bash\n$ kubectl create --namespace \"default\" --filename nginx.yaml\n```\n\nSince your external IP would have already been assigned to the nginx-ingress service, the DNS records pointing to the IP of the nginx-ingress service should be created within a minute.\n\n## Azure
Load Balancer option: Expose an nginx service with a load balancer\n\nCreate a file called `nginx.yaml` with the following contents:\n\n```yaml\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n        - image: nginx\n          name: nginx\n          ports:\n          - containerPort: 80\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-svc\n  annotations:\n    external-dns.alpha.kubernetes.io\/hostname: server.example.com\nspec:\n  ports:\n    - port: 80\n      protocol: TCP\n      targetPort: 80\n  selector:\n    app: nginx\n  type: LoadBalancer\n```\n\nThe annotation `external-dns.alpha.kubernetes.io\/hostname` is used to specify the DNS name that should be created for the service. The annotation value is a comma separated list of host names.\n\n## Verifying Azure DNS records\n\nRun the following command to view the A records for your Azure DNS zone:\n\n```bash\n$ az network dns record-set a list --resource-group \"MyDnsResourceGroup\" --zone-name example.com\n```\n\nSubstitute the zone for the one created above if a different domain was used.\n\nThis should show the external IP address of the service as the A record for your domain ('@' indicates the record is for the zone itself).\n\n## Delete Azure Resource Group\n\nNow that we have verified that ExternalDNS will automatically manage Azure DNS records, we can delete the tutorial's\nresource group:\n\n```bash\n$ az group delete --name \"MyDnsResourceGroup\"\n```\n\n## More tutorials\n\nA video explanation is available here: https:\/\/www.youtube.com\/watch?v=VSn6DPKIhM8&list=PLpbcUe4chE79sB7Jg7B4z3HytqUUEwcNE\n\n![image](https:\/\/user-images.githubusercontent.com\/6548359\/235437721-87611869-75f2-4f32-bb35-9da585e46299.png)","site":"external-dns","answers_cleaned":"  Azure DNS  This tutorial describes how to setup ExternalDNS for 
 Azure DNS  https   azure microsoft com services dns   with  Azure Kubernetes Service  https   docs microsoft com azure aks     Make sure to use     0 11 0   version of ExternalDNS for this tutorial   This tutorial uses  Azure CLI 2 0  https   docs microsoft com en us cli azure install azure cli  for all Azure commands and assumes that the Kubernetes cluster was created via Azure Container Services and  kubectl  commands are being run on an orchestration node      Creating an Azure DNS zone  The Azure provider for ExternalDNS will find suitable zones for domains it manages  it will not automatically create zones   For this tutorial  we will create a Azure resource group named  MyDnsResourceGroup  that can easily be deleted later      bash   az group create   name  MyDnsResourceGroup    location  eastus       Substitute a more suitable location for the resource group if desired   Next  create a Azure DNS zone for  example com       bash   az network dns zone create   resource group  MyDnsResourceGroup    name  example com       Substitute a domain you own for  example com  if desired   If using your own domain that was registered with a third party domain registrar  you should point your domain s name servers to the values in the  nameServers  field from the JSON data returned by the  az network dns zone create  command  Please consult your registrar s documentation on how to do that       Internal Load Balancer  To create internal load balancers  one can set the annotation  service beta kubernetes io azure load balancer internal  to  true  on the resource    Note    AKS cluster s control plane managed identity needs to be granted  Network Contributor  role to update the subnet  For more details refer to  Use an internal load balancer with Azure Kubernetes Service  AKS   https   learn microsoft com en us azure aks internal lb      Configuration file  The azure provider will reference a configuration file called  azure json    The preferred way to inject the 
configuration file is by using a Kubernetes secret  The secret should contain an object named  azure json  with content similar to this      json      tenantId    01234abc de56 ff78 abc1 234567890def      subscriptionId    01234abc de56 ff78 abc1 234567890def      resourceGroup    MyDnsResourceGroup      aadClientId    01234abc de56 ff78 abc1 234567890def      aadClientSecret    uKiuXeiwui4jo9quae9o         The following fields are used      tenantId     required      run  az account show   query  tenantId   or by selecting Azure Active Directory in the Azure Portal and checking the  Directory ID  under Properties     subscriptionId     required      run  az account show   query  id   or by selecting Subscriptions in the Azure Portal     resourceGroup     required    is the Resource Group created in a previous step that contains the Azure DNS Zone     aadClientID  is associated with the Service Principal  This is used with Service Principal or Workload Identity methods documented in the next section     aadClientSecret  is associated with the Service Principal  This is only used with the Service Principal method documented in the next section     useManagedIdentityExtension    this is set to  true  if you use either AKS Kubelet Identity or AAD Pod Identities methods documented in the next section     userAssignedIdentityID    this contains the client id from the Managed identity when using the AAD Pod Identities method documented in the next section     activeDirectoryAuthorityHost    this contains the uri to overwrite the default provided AAD Endpoint  This is useful for providing additional support where the endpoint is not available in the default cloud config from the  azure sdk for go  https   pkg go dev github com Azure azure sdk for go sdk azcore cloud pkg variables      useWorkloadIdentityExtension    this is set to  true  if you use the Workload Identity method documented in the next section   The Azure DNS provider expects  by default  that the configuration file
is at   etc kubernetes azure json    This can be overridden with the    azure config file  option when starting ExternalDNS      Permissions to modify DNS zone  ExternalDNS needs permissions to make changes to the Azure DNS zone  There are four ways to configure the access needed      Service Principal   service principal     Managed Identity Using AKS Kubelet Identity   managed identity using aks kubelet identity     Managed Identity Using AAD Pod Identities   managed identity using aad pod identities     Managed Identity Using Workload Identity   managed identity using workload identity       Service Principal  These permissions are defined in a Service Principal that should be made available to ExternalDNS as a configuration file  azure json         Creating a service principal  A Service Principal with a minimum access level of  DNS Zone Contributor  or  Contributor  to the DNS zone s  and  Reader  to the resource group containing the Azure DNS zone s  is necessary for ExternalDNS to be able to edit DNS records  However  other more permissive access levels will work too  e g   Contributor  to the resource group or the whole subscription    This is an Azure CLI example on how to query the Azure API for the information required for the Resource Group and DNS zone you would have already created in previous steps  requires  azure cli  and  jq       bash   EXTERNALDNS NEW SP NAME  ExternalDnsServicePrincipal    name of the service principal   AZURE DNS ZONE RESOURCE GROUP  MyDnsResourceGroup    name of resource group where dns zone is hosted   AZURE DNS ZONE  example com    DNS zone name like example com or sub example com    Create the service principal   DNS SP   az ad sp create for rbac   name  EXTERNALDNS NEW SP NAME    EXTERNALDNS SP APP ID   echo  DNS SP   jq  r   appId     EXTERNALDNS SP PASSWORD   echo  DNS SP   jq  r   password             Assign the rights for the service principal  Grant access to Azure DNS zone for the service principal      bash   fetch
DNS id used to grant access to the service principal DNS ID   az network dns zone show   name  AZURE DNS ZONE      resource group  AZURE DNS ZONE RESOURCE GROUP   query  id    output tsv     1  as a reader to the resource group   az role assignment create   role  Reader    assignee  EXTERNALDNS SP APP ID   scope  DNS ID    2  as a contributor to DNS Zone itself   az role assignment create   role  Contributor    assignee  EXTERNALDNS SP APP ID   scope  DNS ID           Creating a configuration file for the service principal  Create the file  azure json  with values gathered from previous steps      bash cat    EOF    local path to azure json      tenantId      az account show   query tenantId  o tsv       subscriptionId      az account show   query id  o tsv       resourceGroup     AZURE DNS ZONE RESOURCE GROUP      aadClientId     EXTERNALDNS SP APP ID      aadClientSecret     EXTERNALDNS SP PASSWORD    EOF      Use this file to create a Kubernetes secret      bash   kubectl create secret generic azure config file   namespace  default    from file  local path to azure json          Managed identity using AKS Kubelet identity  The  managed identity  https   docs microsoft com azure active directory managed identities azure resources overview  that is assigned to the underlying node pool in the AKS cluster can be given permissions to access Azure DNS   Managed identities are essentially a service principal whose lifecycle is managed  for example  deleting the AKS cluster will also delete the service principals associated with the AKS cluster   The managed identity assigned to the Kubernetes node pool  or specifically the  VMSS  https   docs microsoft com azure virtual machine scale sets overview   is called the Kubelet identity   The managed identities were previously called MSI  Managed Service Identity  and are enabled by default when creating an AKS cluster   Note that permissions granted to this identity will be accessible to all containers running inside the Kubernetes cluster
not just the ExternalDNS container s    For the managed identity  the contents of  azure json  should be similar to this      json      tenantId    01234abc de56 ff78 abc1 234567890def      subscriptionId    01234abc de56 ff78 abc1 234567890def      resourceGroup    MyDnsResourceGroup      useManagedIdentityExtension   true     userAssignedIdentityID    01234abc de56 ff78 abc1 234567890def              Fetching the Kubelet identity  For this process  you will need to get the kubelet identity      bash   PRINCIPAL ID   az aks show   resource group  CLUSTER GROUP   name  CLUSTERNAME       query  identityProfile kubeletidentity objectId    output tsv    IDENTITY CLIENT ID   az aks show   resource group  CLUSTER GROUP   name  CLUSTERNAME       query  identityProfile kubeletidentity clientId    output tsv            Assign rights for the Kubelet identity  Grant access to Azure DNS zone for the kubelet identity      bash   AZURE DNS ZONE  example com    DNS zone name like example com or sub example com   AZURE DNS ZONE RESOURCE GROUP  MyDnsResourceGroup    resource group where DNS zone is hosted    fetch DNS id used to grant access to the kubelet identity   DNS ID   az network dns zone show   name  AZURE DNS ZONE       resource group  AZURE DNS ZONE RESOURCE GROUP   query  id    output tsv     az role assignment create   role  DNS Zone Contributor    assignee  PRINCIPAL ID   scope  DNS ID           Creating a configuration file for the kubelet identity  Create the file  azure json  with values gathered from previous steps      bash cat    EOF    local path to azure json      tenantId      az account show   query tenantId  o tsv       subscriptionId      az account show   query id  o tsv       resourceGroup     AZURE DNS ZONE RESOURCE GROUP      useManagedIdentityExtension   true     userAssignedIdentityID     IDENTITY CLIENT ID    EOF      Use the  azure json  file to create a Kubernetes secret      bash   kubectl create secret generic azure config file   namespace
default    from file  local path to azure json          Managed identity using AAD Pod Identities  For this process  we will create a  managed identity  https   docs microsoft com  azure active directory managed identities azure resources overview  that will be explicitly used by the ExternalDNS container   This process is similar to Kubelet identity except that this managed identity is not associated with the Kubernetes node pool  but rather associated with explicit ExternalDNS containers        Enable the AAD Pod Identities feature  For this solution   AAD Pod Identities  https   docs microsoft com azure aks use azure ad pod identity  preview feature can be enabled   The commands below should do the trick to enable this feature      bash   az feature register   name EnablePodIdentityPreview   namespace Microsoft ContainerService   az feature register   name AutoUpgradePreview   namespace Microsoft ContainerService   az extension add   name aks preview   az extension update   name aks preview   az provider register   namespace Microsoft ContainerService           Deploy the AAD Pod Identities service  Once enabled  you can update your cluster and install needed services for the  AAD Pod Identities  https   docs microsoft com azure aks use azure ad pod identity  feature      bash   AZURE AKS RESOURCE GROUP  my aks cluster group    name of resource group where aks cluster was created   AZURE AKS CLUSTER NAME  my aks cluster    name of aks cluster previously created    az aks update   resource group   AZURE AKS RESOURCE GROUP    name   AZURE AKS CLUSTER NAME    enable pod identity      Note that  if you use the default network plugin  kubenet   then you need to add the command line option    enable pod identity with kubenet  to the above command        Creating the managed identity  After this process is finished  create a managed identity      bash   IDENTITY RESOURCE GROUP  AZURE AKS RESOURCE GROUP   custom group or reuse AKS group   IDENTITY NAME  example com 
identity     create a managed identity   az identity create   resource group    IDENTITY RESOURCE GROUP     name    IDENTITY NAME             Assign rights for the managed identity  Grant access to Azure DNS zone for the managed identity      bash   AZURE DNS ZONE RESOURCE GROUP  MyDnsResourceGroup    name of resource group where dns zone is hosted   AZURE DNS ZONE  example com    DNS zone name like example com or sub example com    fetch identity client id from managed identity created earlier   IDENTITY CLIENT ID   az identity show   resource group    IDENTITY RESOURCE GROUP         name    IDENTITY NAME     query  clientId    output tsv    fetch DNS id used to grant access to the managed identity   DNS ID   az network dns zone show   name    AZURE DNS ZONE         resource group    AZURE DNS ZONE RESOURCE GROUP     query  id    output tsv     az role assignment create   role  DNS Zone Contributor        assignee    IDENTITY CLIENT ID     scope    DNS ID             Creating a configuration file for the managed identity  Create the file  azure json  with the values from previous steps      bash cat    EOF    local path to azure json      tenantId      az account show   query tenantId  o tsv       subscriptionId      az account show   query id  o tsv       resourceGroup     AZURE DNS ZONE RESOURCE GROUP      useManagedIdentityExtension   true     userAssignedIdentityID     IDENTITY CLIENT ID    EOF      Use the  azure json  file to create a Kubernetes secret      bash   kubectl create secret generic azure config file   namespace  default    from file  local path to azure json           Creating an Azure identity binding  A binding between the managed identity and the ExternalDNS pods needs to be setup by creating  AzureIdentity  and  AzureIdentityBinding  resources   This will allow appropriately labeled ExternalDNS pods to authenticate using the managed identity   When AAD Pod Identity feature is enabled from previous steps above  the  az aks pod identity add  
can be used to create these resources      bash   IDENTITY RESOURCE ID   az identity show   resource group   IDENTITY RESOURCE GROUP        name   IDENTITY NAME    query id   output tsv     az aks pod identity add   resource group   AZURE AKS RESOURCE GROUP         cluster name   AZURE AKS CLUSTER NAME    namespace  default        name  external dns    identity resource id   IDENTITY RESOURCE ID       This will add something similar to the following resources      yaml apiVersion  aadpodidentity k8s io v1 kind  AzureIdentity metadata    labels      addonmanager kubernetes io mode  Reconcile     kubernetes azure com managedby  aks   name  external dns spec    clientID   IDENTITY CLIENT ID   resourceID   IDENTITY RESOURCE ID   type  0     apiVersion  aadpodidentity k8s io v1 kind  AzureIdentityBinding metadata    annotations    labels      addonmanager kubernetes io mode  Reconcile     kubernetes azure com managedby  aks   name  external dns binding spec    azureIdentity  external dns   selector  external dns           Update ExternalDNS labels  When deploying ExternalDNS  you want to make sure that deployed pod s  will have the label  aadpodidbinding  external dns  to enable AAD Pod Identities  You can patch an existing deployment of ExternalDNS with this command      bash kubectl patch deployment external dns   namespace  default    patch       spec     template     metadata     labels     aadpodidbinding    external dns                 Managed identity using Workload Identity  For this process  we will create a  managed identity  https   docs microsoft com  azure active directory managed identities azure resources overview  that will be explicitly used by the ExternalDNS container  This process is somewhat similar to Pod Identity except that this managed identity is associated with a kubernetes service account        Deploy OIDC issuer and Workload Identity services  Update your cluster to install  OIDC Issuer  https   learn microsoft com en us azure aks use oidc 
issuer  and  Workload Identity  https   learn microsoft com en us azure aks workload identity deploy cluster       bash   AZURE AKS RESOURCE GROUP  my aks cluster group    name of resource group where aks cluster was created   AZURE AKS CLUSTER NAME  my aks cluster    name of aks cluster previously created    az aks update   resource group   AZURE AKS RESOURCE GROUP    name   AZURE AKS CLUSTER NAME    enable oidc issuer   enable workload identity           Create a managed identity  Create a managed identity      bash   IDENTITY RESOURCE GROUP  AZURE AKS RESOURCE GROUP   custom group or reuse AKS group   IDENTITY NAME  example com identity     create a managed identity   az identity create   resource group    IDENTITY RESOURCE GROUP     name    IDENTITY NAME             Assign a role to the managed identity  Grant access to Azure DNS zone for the managed identity      bash   AZURE DNS ZONE RESOURCE GROUP  MyDnsResourceGroup    name of resource group where dns zone is hosted   AZURE DNS ZONE  example com    DNS zone name like example com or sub example com    fetch identity client id from managed identity created earlier   IDENTITY CLIENT ID   az identity show   resource group    IDENTITY RESOURCE GROUP         name    IDENTITY NAME     query  clientId    output tsv    fetch DNS id used to grant access to the managed identity   DNS ID   az network dns zone show   name    AZURE DNS ZONE         resource group    AZURE DNS ZONE RESOURCE GROUP     query  id    output tsv    RESOURCE GROUP ID   az group show   name    AZURE DNS ZONE RESOURCE GROUP     query  id    output tsv     az role assignment create   role  DNS Zone Contributor        assignee    IDENTITY CLIENT ID     scope    DNS ID     az role assignment create   role  Reader        assignee    IDENTITY CLIENT ID     scope    RESOURCE GROUP ID             Create a federated identity credential  A binding between the managed identity and the ExternalDNS service account needs to be setup by creating a federated 
identity resource      bash   OIDC ISSUER URL    az aks show  n myAKSCluster  g myResourceGroup   query  oidcIssuerProfile issuerUrl   otsv      az identity federated credential create   name   IDENTITY NAME    identity name   IDENTITY NAME    resource group  AZURE AKS RESOURCE GROUP    issuer   OIDC ISSUER URL    subject  system serviceaccount default external dns       NOTE  make sure federated credential refers to correct namespace and service account   system serviceaccount  NAMESPACE   SERVICE ACCOUNT          Helm  When deploying external dns with Helm you need to create a secret to store the Azure config  see below  and create a workload identity  out of scope here  before you can install the chart      yaml apiVersion  v1 kind  Secret metadata    name  external dns azure type  Opaque data    azure json                 tenantId     TENANT ID           subscriptionId     SUBSCRIPTION ID           resourceGroup     AZURE DNS ZONE RESOURCE GROUP           useWorkloadIdentityExtension   true            Once you have created the secret and have a workload identity you can install the chart with the following values      yaml fullnameOverride  external dns  serviceAccount    labels      azure workload identity use   true    annotations      azure workload identity client id   IDENTITY CLIENT ID   podLabels    azure workload identity use   true   extraVolumes      name  azure config file     secret        secretName  external dns azure  extraVolumeMounts      name  azure config file     mountPath   etc kubernetes     readOnly  true  provider    name  azure      NOTE  make sure the pod is restarted whenever you make a configuration change        kubectl  alternative         Create a configuration file for the managed identity  Create the file  azure json  with the values from previous steps      bash cat    EOF    local path to azure json      subscriptionId      az account show   query id  o tsv       resourceGroup     AZURE DNS ZONE RESOURCE GROUP      
useWorkloadIdentityExtension   true   EOF     NOTE  it s also possible to specify  or override  ClientID specified in the next section through  aadClientId  field in this  azure json  file   Use the  azure json  file to create a Kubernetes secret      bash   kubectl create secret generic azure config file   namespace  default    from file  local path to azure json            Update labels and annotations on ExternalDNS service account  To instruct Workload Identity webhook to inject a projected token into the ExternalDNS pod  the pod needs to have a label  azure workload identity use   true    before Workload Identity 1 0 0  this label was supposed to be set on the service account instead   Also  the service account needs to have an annotation  azure workload identity client id   IDENTITY CLIENT ID     To patch the existing serviceaccount and deployment  use the following command      bash   kubectl patch serviceaccount external dns   namespace  default    patch        metadata       annotations       azure workload identity client id        IDENTITY CLIENT ID          kubectl patch deployment external dns   namespace  default    patch       spec     template     metadata     labels      azure workload identity use      true              NOTE  it s also possible to specify  or override  ClientID through  aadClientId  field in  azure json    NOTE  make sure the pod is restarted whenever you make a configuration change      Throttling  When the ExternalDNS managed zones list doesn t change frequently  one can set    azure zones cache duration   zones list cache time to live   The zones list cache is disabled by default  with a value of 0s      Ingress used with ExternalDNS  This deployment assumes that you will be using nginx ingress  When using nginx ingress do not deploy it as a Daemon Set  This causes nginx ingress to write the Cluster IP of the backend pods in the ingress status loadbalancer ip property which then has external dns write the Cluster IP s  in DNS 
vs  the nginx ingress service external IP   Ensure that your nginx ingress deployment has the following arg  added to it           publish service namespace nginx ingress controller svcname      For more details see here   nginx ingress external dns  https   github com kubernetes sigs external dns blob HEAD docs faq md why is externaldns only adding a single ip address in route 53 on aws when using the nginx ingress controller how do i get it to use the fqdn of the elb assigned to my nginx ingress controller service instead      Deploy ExternalDNS  Connect your  kubectl  client to the cluster you want to test ExternalDNS with  Then apply one of the following manifests file to deploy ExternalDNS   The deployment assumes that ExternalDNS will be installed into the  default  namespace   If this namespace is different  the  ClusterRoleBinding  will need to be updated to reflect the desired alternative namespace  such as  external dns    kube addons   etc       Manifest  for clusters without RBAC enabled     yaml apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        containers          name  external dns         image  registry k8s io external dns external dns v0 15 0         args              source service             source ingress             domain filter example com    optional  limit to only example com domains  change to match the zone created above              provider azure             azure resource group MyDnsResourceGroup    optional  use the DNS zones from the tutorial s resource group         volumeMounts            name  azure config file           mountPath   etc kubernetes           readOnly  true       volumes          name  azure config file         secret            secretName  azure config file          Manifest  for clusters with RBAC enabled  cluster access      
yaml apiVersion  v1 kind  ServiceAccount metadata    name  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules      apiGroups           resources    services   endpoints   pods    nodes       verbs    get   watch   list       apiGroups    extensions   networking k8s io       resources    ingresses       verbs    get   watch   list       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects      kind  ServiceAccount     name  external dns     namespace  default     apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        serviceAccountName  external dns       containers            name  external dns           image  registry k8s io external dns external dns v0 15 0           args                  source service                 source ingress                 domain filter example com    optional  limit to only example com domains  change to match the zone created above                  provider azure                 azure resource group MyDnsResourceGroup    optional  use the DNS zones from the tutorial s resource group                 txt prefix externaldns            volumeMounts                name  azure config file               mountPath   etc kubernetes               readOnly  true       volumes            name  azure config file           secret              secretName  azure config file          Manifest  for clusters with RBAC enabled  namespace access  This configuration is the same as above  except it only requires privileges for the current namespace  not for the whole cluster  However  access to  nodes  https   kubernetes io docs concepts architecture nodes   requires 
cluster access, so when using this manifest, services with type `NodePort` will be skipped.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: external-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.15.0
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above
            - --provider=azure
            - --azure-resource-group=MyDnsResourceGroup # (optional) use the DNS zones from the tutorial's resource group
          volumeMounts:
            - name: azure-config-file
              mountPath: /etc/kubernetes
              readOnly: true
      volumes:
        - name: azure-config-file
          secret:
            secretName: azure-config-file
```

Create the deployment for ExternalDNS:

```bash
kubectl create --namespace "default" --filename externaldns.yaml
```

## Ingress Option: Expose an nginx service with an ingress

Create a file called `nginx.yaml` with the following contents:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx
  rules:
    - host: server.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80
```

When you use ExternalDNS with Ingress resources, it automatically creates DNS records based on the hostnames listed in those Ingress objects. Those hostnames must match the filters that you defined (if any):

- By default, `--domain-filter` filters the Azure DNS zone.
- If you use `--domain-filter` together with `--zone-name-filter`, the behavior changes: `--domain-filter` then filters Ingress domains, not the Azure DNS zone name.

When those hostnames are removed or renamed, the corresponding DNS records are also altered.

Create the deployment, service and ingress object:

```bash
kubectl create --namespace "default" --filename nginx.yaml
```

Since your external IP would have already been assigned to the nginx-ingress service, the DNS records pointing to the IP of the nginx-ingress service should be created within a minute.

## Azure Load Balancer option: Expose an nginx service with a load balancer

Create a file called `nginx.yaml` with the following contents:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  annotations:
    external-dns.alpha.kubernetes.io/hostname: server.example.com
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
```

The annotation `external-dns.alpha.kubernetes.io/hostname` is used to specify the DNS name that should be created for the service. The annotation value is a comma-separated list of host names.

## Verifying Azure DNS records

Run the following command to view the A records for your Azure DNS zone:

```bash
az network dns record-set a list --resource-group "MyDnsResourceGroup" --zone-name "example.com"
```

Substitute the zone for the one created above if a different domain was used.

This should show the external IP address of the service as the A record for your domain (`@` indicates the record is for the zone itself).

## Delete Azure Resource Group

Now that we have verified that ExternalDNS will automatically manage Azure DNS records, we can delete the tutorial's resource group:

```bash
az group delete --name "MyDnsResourceGroup"
```

## More tutorials

A video explanation is available [here](https://www.youtube.com/watch?v=VSn6DPKIhM8&list=PLpbcUe4chE79sB7Jg7B4z3HytqUUEwcNE).

![Image](https://user-images.githubusercontent.com/6548359/235437721-87611869-75f2-4f32-bb35-9da585e46299.png)"}
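The hostname annotation above accepts several DNS names at once. As a minimal sketch (plain POSIX shell, not ExternalDNS itself, and with a hypothetical annotation value), this is how a comma-separated annotation splits into the individual host names that records would be created for:

```shell
# Hypothetical annotation value; ExternalDNS treats it as a comma-separated
# list of host names and manages a record for each entry.
annotation="server.example.com,www.example.com"

# Split on commas: one DNS name per line.
hosts=$(echo "$annotation" | tr ',' '\n')
count=$(echo "$hosts" | wc -l)

# Print each host that would receive an A record.
echo "$hosts"
```

The same splitting applies whether the annotation sits on a `Service` or an `Ingress`; each resulting name must still pass the `--domain-filter` configured on the controller.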
{"questions":"gcp gke docs 01 Google Cloud Account Creation 02 Create GKE Standard Public Cluster Course Modules 04 Install gcloud CLI on Windows OS 03 Install gcloud CLI on mac OS","answers":"# [GCP GKE Google Kubernetes Engine DevOps 75 Real-World Demos](https:\/\/links.stacksimplify.com\/gcp-google-kubernetes-engine-gke-with-devops)\n\n[![Image](images\/course-title.png \"Google Kubernetes Engine GKE with DevOps 75 Real-World Demos\")](https:\/\/links.stacksimplify.com\/gcp-google-kubernetes-engine-gke-with-devops)\n\n\n## Course Modules\n01. Google Cloud Account Creation\n02. Create GKE Standard Public Cluster\t\t\t\t\n03. Install gcloud CLI on mac OS\t\t\t\t\n04. Install gcloud CLI on Windows OS\t\t\t\t\n05. Docker Fundamentals\t\t\t\t\n06. Kubernetes Pods\t\t\t\t\n07. Kubernetes ReplicaSets\t\t\t\t\n08. Kubernetes Deployment - CREATE\t\t\t\t\n09. Kubernetes Deployment - UPDATE\t\t\t\t\n10. Kubernetes Deployment - ROLLBACK\t\t\t\t\n11. Kubernetes Deployments - Pause and Resume\t\t\t\t\n12. Kubernetes ClusterIP and Load Balancer Service\t\t\t\t\n13. YAML Basics\t\t\t\t\n14. Kubernetes Pod  & Service using YAML\t\t\t\t\n15. Kubernetes ReplicaSets using YAML\t\t\t\t\n16. Kubernetes Deployment using YAML\t\t\t\t\n17. Kubernetes Services using YAML\t\t\t\t\n18.  GKE Kubernetes NodePort Service\t\t\t\t\n19. GKE Kubernetes Headless Service\t\t\t\t\n20. GKE Private Cluster\t\t\t\t\n21. How to use GCP Persistent Disks in GKE ?\t\t\t\t\n22. How to use Balanced Persistent Disk in GKE ?\t\t\t\t\n23. How to use Custom Storage Class in GKE for Persistent Disks ?\t\t\t\t\n24. How to use Pre-existing Persistent Disks in GKE ?\t\t\t\t\n25. How to use Regional Persistent Disks in GKE ?\t\t\t\t\n26. How to perform Persistent Disk  Volume Snapshots and Volume Restore ?\t\t\t\t\n28. GKE Workloads and Cloud SQL with Public IP\t\t\t\t\n29. GKE Workloads and Cloud SQL with Private IP\t\t\t\t\n30. GKE Workloads and Cloud SQL with Private IP and No ExternalName Service\t\t\t\t\n31. 
How to use Google Cloud File Store in GKE ?\t\t\t\t\n32. How to use Custom Storage Class for File Store in GKE ?\t\t\t\t\n33. How to perform File Store Instance Volume Snapshots and Volume Restore ?\t\t\t\t\n34. Ingress Service Basics\t\t\t\t\n35. Ingress Context Path based Routing\t\t\t\t\n36. Ingress Custom Health Checks using Readiness Probes\t\t\t\t\n37. Register a Google Cloud Domain for some advanced Ingress Service Demos \t\t\t\t\n38. Ingress with Static External IP and Cloud DNS\t\t\t\t\n39. Google Managed SSL Certificates for Ingress\t\t\t\t\n40. Ingress HTTP to HTTPS Redirect\t\t\t\t\n41. GKE Workload Identity\t\t\t\t\n42. External DNS Controller Install\t\t\t\t\n43. External DNS - Ingress Service\t\t\t\t\n44. External DNS - Kubernetes Service\t\t\t\t\n45. Ingress Name based Virtual Host Routing\t\t\t\t\n46. Ingress SSL Policy\t\t\t\t\n47. Ingress with Identity-Aware Proxy\t\t\t\t\n48. Ingress with Self Signed SSL Certificates\t\t\t\t\n49. Ingress with Pre-shared SSL Certificates\t\t\t\t\n50. Ingress with Cloud CDN, HTTP Access Logging and Timeouts\t\t\t\t\n51. Ingress with Client IP Affinity\t\t\t\t\n52. Ingress with Cookie Affinity\t\t\t\t\n53. Ingress with Custom Health Checks using BackendConfig CRD\t\t\t\t\n54. Ingress Internal Load Balancer\t\t\t\t\n55. Ingress with Google Cloud Armor\t\t\t\t\n56. Google Artifact Registry\t\t\t\t\n57. GKE Continuous Integration\t\t\t\t\n58. GKE Continuous Delivery\t\t\t\t\n59. Kubernetes Liveness Probes\t\t\t\t\n60. Kubernetes Startup Probes\t\t\t\t\n61. Kubernetes Readiness Probe\t\t\t\t\n62. Kubernetes Requests and Limits\t\t\t\t\n63. GKE Cluster Autoscaling\t\t\t\t\n64. Kubernetes Namespaces\t\t\t\t\n65. Kubernetes Namespaces Resource Quota\t\t\t\t\n66. Kubernetes Namespaces Limit Range\t\t\t\t\n67. Kubernetes Horizontal Pod Autoscaler\t\t\t\t\n68. GKE Autopilot Cluster\t\t\t\t\n69. How to manage Multiple Cluster access in kubeconfig ?\t\t\t\t\n\t\n\n\n## Kubernetes Concepts Covered\n01. 
Kubernetes Deployments (Create, Update, Rollback, Pause, Resume)\n02. Kubernetes Pods\n03. Kubernetes Service of Type LoadBalancer\n04. Kubernetes Service of Type ClusterIP\n05. Kubernetes Ingress Service\n06. Kubernetes Storage Class\n07. Kubernetes Storage Persistent Volume\n08. Kubernetes Storage Persistent Volume Claim\n09. Kubernetes Cluster Autoscaler\n10. Kubernetes Horizontal Pod Autoscaler\n11. Kubernetes Namespaces\n12. Kubernetes Namespaces Resource Quota\n13. Kubernetes Namespaces Limit Range\n14. Kubernetes Service Accounts\n15. Kubernetes ConfigMaps\n16. Kubernetes Requests and Limits\n17. Kubernetes Worker Nodes\n18. Kubernetes Service of Type NodePort\n19. Kubernetes Service of Type Headless\n20. Kubernetes ReplicaSets\n\n## Google Services Covered\n01. Google GKE Standard Cluster\n02. Google GKE Autopilot Cluster\n03. Compute Engine - Virtual Machines\n04. Compute Engine - Storage Disks\n05. Compute Engine - Storage Snapshots\n06. Compute Engine - Storage Images\n07. Compute Engine - Instance Groups\n08. Compute Engine - Health Checks\n09. Compute Engine - Network Endpoint Groups\n10. VPC Networks - VPC\n11. VPC Network - External and Internal IP Addresses\n12. VPC Network - Firewall\n13. Network Services - Load Balancing\n14. Network Services - Cloud DNS\n15. Network Services - Cloud CDN\n16. Network Services - Cloud NAT\n17. Network Services - Cloud Domains\n18. Network Services - Private Service Connection\n19. Network Security - Cloud Armor\n20. Network Security - SSL Policies\n21. IAM & Admin - IAM\n22. IAM & Admin - Service Accounts\n23. IAM & Admin - Roles\n24. IAM & Admin - Identity-Aware Proxy\n25. DevOps - Cloud Source Repositories\n26. DevOps - Cloud Build\n27. DevOps - Cloud Storage\n28. SQL - Cloud SQL\n29. Storage - Filestore\n30. Google Artifact Registry\n31. Operations Logging\n32. 
GCP Monitoring\n\n\n## What will students learn in your course?\n- You will learn to master Kubernetes on Google GKE with 75 Real-world demos on Google Cloud Platform with 20+ Kubernetes and 30+ Google Cloud Services\n- You will learn Kubernetes Basics for 4.5 hours\n- You will create GKE Standard and Autopilot clusters with public and private networks\n- You will learn to implement Kubernetes Storage with Google Persistent Disks and Google File Store\n- You will also use Google Cloud SQL and Cloud Load Balancing to deploy a sample application outlining the LB-to-DB use case in a GKE Cluster\n- You will master Kubernetes Ingress concepts in detail on GKE with 22 Real-world Demos\n- You will implement Ingress Context Path Routing and Name based vhost routing\n- You will implement Ingress with Google Managed SSL Certificates\n- You will master Google GKE Workload Identity with a detailed dedicated demo.\n- You will implement the External DNS Controller to automatically add and delete DNS records in the Google Cloud DNS Service\n- You will implement Ingress with Preshared SSL and Self Signed Certificates\n- You will implement Ingress with Cloud CDN, Cloud Armor, Internal Load Balancer, Cookie Affinity, IP Affinity and HTTP Access Logging.\n- You will implement Ingress with Google Identity-Aware Proxy\n- You will learn to use Google Artifact Registry with GKE\n- You will implement DevOps Continuous Integration (CI) and Continuous Delivery (CD) with Cloud Build and Cloud Source Services\n- You will learn to master Kubernetes Probes (Readiness, Startup, Liveness)\n- You will implement Kubernetes Requests, Limits, Namespaces, Resource Quota and Limit Range\n- You will implement GKE Cluster Autoscaler and Horizontal Pod Autoscaler\n\n\n\n## What are the requirements or prerequisites for taking your course?\n- You must have a Google Cloud account to follow along with me for the hands-on activities.\n- You don't need any prior knowledge of Kubernetes. The course starts from the very basics of Kubernetes and takes you to advanced levels\n- Basics of any Cloud Platform are required to understand the terminology\n\n## Who is this course for?\n- Infrastructure Architects, Sysadmins or Developers who are planning to master Kubernetes from a Real-World perspective on Google Cloud Platform (GCP)\n- Any beginner who is interested in learning Kubernetes with Google Cloud Platform (GCP)\n- Any beginner who is planning their career in DevOps\n\n\n## GitHub Repositories used for this course\n- [Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos](https:\/\/github.com\/stacksimplify\/terraform-on-aws-eks)\n- [Course Presentation](https:\/\/github.com\/stacksimplify\/terraform-on-aws-eks\/tree\/main\/course-presentation)\n- [Kubernetes Fundamentals](https:\/\/github.com\/stacksimplify\/kubernetes-fundamentals)\n- **Important Note:** Please go to these repositories, FORK them, and make use of them during the course.\n\n\n## Each of my courses comes with\n- Amazing Hands-on Step By Step Learning Experiences\n- Real Implementation Experience\n- Friendly Support in the Q&A section\n- 30-Day \"No Questions Asked\" Money Back Guarantee by Udemy\n\n## My Other AWS Courses\n- [Udemy Enroll](https:\/\/www.stacksimplify.com\/azure-aks\/courses\/stacksimplify-best-selling-courses-on-udemy\/)\n\n## Stack Simplify Udemy Profile\n- [Udemy Profile](https:\/\/www.udemy.com\/user\/kalyan-reddy-9\/)\n\n# HashiCorp Certified: Terraform Associate - 50 Practical Demos\n[![Image](https:\/\/stacksimplify.com\/course-images\/hashicorp-certified-terraform-associate-highest-rated.png \"HashiCorp Certified: Terraform Associate - 50 Practical Demos\")](https:\/\/links.stacksimplify.com\/hashicorp-certified-terraform-associate)\n\n# AWS EKS - Elastic Kubernetes Service - Masterclass\n[![Image](https:\/\/stacksimplify.com\/course-images\/AWS-EKS-Kubernetes-Masterclass-DevOps-Microservices-course.png \"AWS EKS 
Kubernetes - Masterclass\")](https:\/\/www.udemy.com\/course\/aws-eks-kubernetes-masterclass-devops-microservices\/?referralCode=257C9AD5B5AF8D12D1E1)\n\n\n# Azure Kubernetes Service with Azure DevOps and Terraform \n[![Image](https:\/\/stacksimplify.com\/course-images\/azure-kubernetes-service-with-azure-devops-and-terraform.png \"Azure Kubernetes Service with Azure DevOps and Terraform\")](https:\/\/www.udemy.com\/course\/azure-kubernetes-service-with-azure-devops-and-terraform\/?referralCode=2499BF7F5FAAA506ED42)\n\n# Terraform on AWS with SRE & IaC DevOps | Real-World 20 Demos\n[![Image](https:\/\/stacksimplify.com\/course-images\/terraform-on-aws-best-seller.png \"Terraform on AWS with SRE & IaC DevOps | Real-World 20 Demos\")](https:\/\/links.stacksimplify.com\/terraform-on-aws-with-sre-and-iacdevops)\n\n# Azure - HashiCorp Certified: Terraform Associate - 70 Demos\n[![Image](https:\/\/stacksimplify.com\/course-images\/azure-hashicorp-certified-terraform-associate-highest-rated.png \"Azure - HashiCorp Certified: Terraform Associate - 70 Demos\")](https:\/\/links.stacksimplify.com\/azure-hashicorp-certified-terraform-associate)\n\n# Terraform on Azure with IaC DevOps and SRE | Real-World 25 Demos\n\n[![Image](https:\/\/stacksimplify.com\/course-images\/terraform-on-azure-with-iac-azure-devops-sre-1.png \"Terraform on Azure with IaC DevOps and SRE | Real-World 25 Demos\")](https:\/\/links.stacksimplify.com\/terraform-on-azure-with-iac-devops-sre)\n\n# [Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos](https:\/\/links.stacksimplify.com\/terraform-on-aws-eks-kubernetes-iac-sre)\n\n[![Image](https:\/\/stacksimplify.com\/course-images\/terraform-on-aws-eks-kubernetes.png \"Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos \")](https:\/\/links.stacksimplify.com\/terraform-on-aws-eks-kubernetes-iac-sre)","site":"gcp gke docs"}
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine GKE Ingress Namebased Virtual Host Routing title GCP Google Kubernetes Engine GKE Ingress Namebased Virtual Host Routing Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress Namebased Virtual Host Routing\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Namebased Virtual Host Routing\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. External DNS Controller Installed\n\n## Step-01: Introduction\n1. Requests will be routed in Load Balancer based on DNS Names\n2. `app1-ingress.kalyanreddydaida.com` will send traffic to `App1 Pods`\n3. `app2-ingress.kalyanreddydaida.com` will send traffic to `App2 Pods`\n4. `default-ingress.kalyanreddydaida.com` will send traffic to `App3 Pods`\n\n\n## Step-02: Review kube-manifests\n1. 01-Nginx-App1-Deployment-and-NodePortService.yaml\n2. 02-Nginx-App2-Deployment-and-NodePortService.yaml\n3. 03-Nginx-App3-Deployment-and-NodePortService.yaml\n4. 
NO CHANGES TO ABOVE 3 files - Standard Deployment and NodePort Service we are using from previous Context Path based Routing Demo\n\n\n## Step-03: 04-Ingress-NameBasedVHost-Routing.yaml\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-namebasedvhost-routing\n  annotations:\n    # External Load Balancer\n    kubernetes.io\/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io\/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # Google Managed SSL Certificates\n    networking.gke.io\/managed-certificates: managed-cert-for-ingress\n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io\/v1beta1.FrontendConfig: \"my-frontend-config\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io\/hostname: default-ingress.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80     \n  rules:\n    - host: app1-ingress.kalyanreddydaida.com\n      http:\n        paths:\n          - path: \/\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n    - host: app2-ingress.kalyanreddydaida.com\n      http:\n        paths:                  \n          - path: \/\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n```\n\n## Step-04: 05-Managed-Certificate.yaml\n```yaml\napiVersion: networking.gke.io\/v1\nkind: ManagedCertificate\nmetadata:\n  name: managed-cert-for-ingress\nspec:\n  domains:\n    - default101-ingress.kalyanreddydaida.com\n    - app101-ingress.kalyanreddydaida.com\n    - app201-ingress.kalyanreddydaida.com\n```\n\n## Step-05: 06-frontendconfig.yaml\n```yaml\napiVersion: 
networking.gke.io\/v1beta1\nkind: FrontendConfig\nmetadata:\n  name: my-frontend-config\nspec:\n  redirectToHttps:\n    enabled: true\n    #responseCodeName: RESPONSE_CODE\n```\n\n## Step-06: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Services\nkubectl get ingress\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com\n2. Verify Record sets, DNS Name we added in Ingress Service should be present \n\n# List FrontendConfigs\nkubectl get frontendconfig\n\n# List Managed Certificates\nkubectl get managedcertificate\n\n# Describe Managed Certificates\nkubectl describe managedcertificate managed-cert-for-ingress\nObservation:\n1. Wait for Domain Status to be changed from \"Provisioning\" to \"ACTIVE\"\n2. It might take minimum 60 minutes for provisioning Google Managed SSL Certificates\n```\n\n## Step-07: Access Application\n```t\n# Access Application\nhttp:\/\/app1-ingress.kalyanreddydaida.com\/app1\/index.html\nhttp:\/\/app2-ingress.kalyanreddydaida.com\/app2\/index.html\nhttp:\/\/default-ingress.kalyanreddydaida.com\n\nObservation:\n1. All 3 URLS should work as expected. In your case, replace YOUR_DOMAIN name for testing\n2. 
HTTP to HTTPS redirect should work\n```\n\n## Step-08: Access Application - Negative Use Case Testing\n```t\n# Access Application - App1 DNS Name\nhttp:\/\/app1-ingress.kalyanreddydaida.com\/app2\/index.html\nObservation: SHOULD FAIL - In Pod App1 we don't have the app2 context path (app2 folder) - 404 ERROR\n\n# Access Application - App2 DNS Name\nhttp:\/\/app2-ingress.kalyanreddydaida.com\/app1\/index.html\nObservation: SHOULD FAIL - In Pod App2 we don't have the app1 context path (app1 folder) - 404 ERROR\n\n# Access Application - App3 or Default DNS Name\nhttp:\/\/default-ingress.kalyanreddydaida.com\/app1\/index.html\nObservation: SHOULD FAIL - In Pod App3 we don't have the app1 context path (app1 folder) - 404 ERROR\n```\n\n## Step-09: Clean-Up\n- DON'T DELETE - WE ARE GOING TO USE THESE KUBERNETES RESOURCES IN THE NEXT DEMO RELATED TO SSL-POLICY\n\n## References\n- [Ingress Features](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features)\n","site":"gcp gke docs"}
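The name-based virtual host record above routes purely on the Host header of the request, which is why each DNS name lands on a different backend Service and unmatched hosts fall through to the `defaultBackend`. A minimal sketch of that dispatch logic (the `route` helper is hypothetical; the Service names come from the Ingress rules in the demo):

```shell
#!/bin/sh
# Sketch: how the GCE Ingress above dispatches on the Host header.
# The case arms mirror the rules in 04-Ingress-NameBasedVHost-Routing.yaml;
# any unmatched host falls through to the defaultBackend (app3).
route() {
  case "$1" in
    app1-ingress.kalyanreddydaida.com) echo "app1-nginx-nodeport-service" ;;
    app2-ingress.kalyanreddydaida.com) echo "app2-nginx-nodeport-service" ;;
    *) echo "app3-nginx-nodeport-service" ;;
  esac
}

route app1-ingress.kalyanreddydaida.com    # -> app1-nginx-nodeport-service
route unknown-host.example.com             # -> app3-nginx-nodeport-service (defaultBackend)
```

This also explains the negative tests in Step-08: the host picks the backend Pod, and the path then resolves (or 404s) inside that Pod.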
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine GKE Autopilot Cluster Create GKE Autopilot Cluster Understand in detail about GKE Autopilot cluster title GCP Google Kubernetes Engine Autopilot Cluster Step 02 Pre requisite Verify if Cloud NAT Gateway created Step 01 Introduction","answers":"---\ntitle: GCP Google Kubernetes Engine Autopilot Cluster\ndescription: Implement GCP Google Kubernetes Engine GKE Autopilot Cluster\n---\n\n## Step-01: Introduction\n- Create GKE Autopilot Cluster\n- Understand in detail about GKE Autopilot cluster\n\n## Step-02: Pre-requisite: Verify if Cloud NAT Gateway created \n- Verify if Cloud NAT Gateway created in `Region:us-central1` where you are planning to create GKE Autopilot Private Cluster\n- This is required for Workload in Private subnets to connect to Internet.  \n- Primarily to Connect to Docker Hub to pull the Docker Images\n- Go to Network Services -> Cloud NAT\n\n## Step-03: Create GKE Autopilot Private Cluster\n- Go to Kubernetes Engine -> Clusters -> **CREATE**\n- Create Cluster -> GKE Autopilot -> **CONFIGURE**\n- **Name:** autopilot-cluster-private-1\n- **Region:** us-central1\n- **Network access:** Private Cluster\n- **Access control plane using its external IP address:** CHECK\n- **Control plane ip range:** 172.18.0.0\/28\n- **Enable control plane authorized networks:** CHECK\n- **Authorized networks:** \n  - **Name:** internet-access\n  - **Network:** 0.0.0.0\/0\n  - Click on **DONE**\n- **Network:** default  (LEAVE TO DEFAULTS)\n- **Node subnet:** default (LEAVE TO DEFAULTS)\n- **Cluster default pod address range:** \/17 (LEAVE TO DEFAULTS)\n- **Service Address range:** \/22 (LEAVE TO DEFAULTS)\n- **Release Channel:** Regular Channel (Default)\n- REST ALL LEAVE TO DEFAULTS\n- Click on **CREATE** \n\n## Step-04: Configure kubectl for kubeconfig\n```t\n# Configure kubectl for kubeconfig\ngcloud container clusters get-credentials CLUSTER-NAME --region REGION --project PROJECT-NAME\n\n# Replace 
values CLUSTER-NAME, REGION, PROJECT-NAME\ngcloud container clusters get-credentials autopilot-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\nkubectl get nodes -o wide\n```\n\n## Step-05: Review Kubernetes Manifests\n### Step-05-01: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 5 \n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify\/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          resources:\n            requests:\n              memory: \"128Mi\" # 128 MebiByte is equal to 135 Megabyte (MB)\n              cpu: \"200m\" # `m` means milliCPU\n            limits:\n              memory: \"256Mi\"\n              cpu: \"400m\"  # 1000m is equal to 1 VCPU core                           \n```\n### Step-05-02: 02-kubernetes-loadbalancer-service.yaml\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n```\n\n## Step-06: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>\n```\n\n## Step-07: Scale your Application\n```t\n# Scale your Application\nkubectl scale --replicas=15 deployment\/myapp1-deployment\n\n# List Pods\nkubectl get pods\n\n# List Nodes\nkubectl get nodes\n```\n\n## Step-08: Clean-Up\n```t\n# 
Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Delete GKE Autopilot Cluster\n# NOTE: Don't delete this cluster, as we are going to use it in the next demo.\nGo to Kubernetes Engine -> Clusters -> autopilot-cluster-private-1 -> DELETE\n```\n\n## References\n- https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/autopilot-overview#default_container_resource_requests\n- https:\/\/cloud.google.com\/kubernetes-engine\/quotas#limits_per_cluste","site":"gcp gke docs"}
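Autopilot bills per requested resource, so the units in the Deployment's `resources` block matter. The manifest comment rounds `128Mi` to 135 MB; the exact conversion (1 Mi = 1024 × 1024 bytes, and the `m` suffix means 1/1000 of a vCPU core) can be checked with a quick sketch (the helper names are illustrative):

```shell
#!/bin/sh
# Illustrative helpers for the Kubernetes resource quantities used above.
# 1Mi = 1024 * 1024 bytes; "m" means milliCPU, 1/1000 of a vCPU core.
mib_to_bytes() { awk -v mib="$1" 'BEGIN { printf "%d\n", mib * 1024 * 1024 }'; }
millicpu_to_cores() { awk -v m="$1" 'BEGIN { printf "%.1f\n", m / 1000 }'; }

mib_to_bytes 128       # 134217728 bytes, i.e. about 134.2 MB
millicpu_to_cores 200  # 0.2 of one vCPU core
millicpu_to_cores 1000 # 1.0 - a full vCPU core
```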
{"questions":"gcp gke docs title GCP Google Kubernetes Engine GKE Service with External DNS gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created Implement GCP Google Kubernetes Engine GKE Service with External DNS t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Service with External DNS \ndescription: Implement GCP Google Kubernetes Engine GKE Service with External DNS\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. 
External DNS Controller Installed\n\n## Step-01: Introduction\n- Kubernetes Service of Type Load Balancer with External DNS\n- We are going to use the Annotation `external-dns.alpha.kubernetes.io\/hostname` in the Kubernetes Service.\n- DNS Record sets will be automatically added to Google Cloud DNS by the external-dns controller when the Kubernetes Service is deployed\n\n## Step-02: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata: # Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify\/kubenginx:1.0.0\n          ports: \n            - containerPort: 80      \n```\n\n## Step-03: 02-kubernetes-loadbalancer-service.yaml\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\n  annotations:\n    # External DNS - For creating a Record Set in Google Cloud DNS\n    external-dns.alpha.kubernetes.io\/hostname: extdns-k8s-svc-demo.kalyanreddydaida.com\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n```\n\n## Step-04: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com\n2. Verify Record sets - the DNS Name we added in the Kubernetes Service should be present\n\n# Access Application\nhttp:\/\/<DNS-Name>\nhttp:\/\/extdns-k8s-svc-demo.kalyanreddydaida.com\n```\n\n## Step-05: Delete kube-manifests\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests\/\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com\n2. Verify Record sets - the DNS Name we added in the Kubernetes Service should not be present (already deleted)\n```","site":"gcp gke docs"}
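The Cloud DNS record set created in the record above comes straight from the `external-dns.alpha.kubernetes.io/hostname` annotation value on the Service. A rough sketch of that extraction step, run against an inline stand-in for `02-kubernetes-loadbalancer-service.yaml` (the `extract_hostname` helper is an illustration, not part of external-dns itself):

```shell
#!/bin/sh
# Sketch: pull the hostname annotation out of a Service manifest - the
# same value the external-dns controller publishes to Google Cloud DNS.
extract_hostname() {
  sed -n 's/.*external-dns\.alpha\.kubernetes\.io\/hostname: *//p'
}

# Here-doc stands in for 02-kubernetes-loadbalancer-service.yaml:
extract_hostname <<'EOF'
metadata:
  name: myapp1-lb-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: extdns-k8s-svc-demo.kalyanreddydaida.com
EOF
# -> extdns-k8s-svc-demo.kalyanreddydaida.com
```

In the real flow, the controller watches Services, reads this annotation, and reconciles the matching A record in the `kalyanreddydaida-com` zone.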
{"questions":"gcp gke docs Use GCP File Store for GKE Workloads Implement Backup and Restore title GKE Storage with GCP File Store Backup and Restore Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GKE Storage with GCP File Store - Backup and Restore\ndescription: Use GCP File Store for GKE Workloads - Implement Backup and Restore\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- GKE Storage with GCP File Store\n- Implement Backups using `VolumeSnapshotClass` and `VolumeSnapshot`\n- Implement Restore of FileStore in the myapp2 Application and Verify\n\n\n## Step-02: YAML files are same as first FileStore Demo\n- **Project Folder:** 01-myapp1-kube-manifests\n- YAML files are same as first FileStore Demo\n- 01-filestore-pvc.yaml\n- 02-write-to-filestore-pod.yaml\n- 03-myapp1-deployment.yaml\n- 04-loadBalancer-service.yaml\n\n## Step-03: Deploy 01-myapp1-kube-manifests and Verify\n```t\n# Deploy 01-myapp1-kube-manifests\nkubectl apply -f 01-myapp1-kube-manifests\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List Pods\nkubectl get pods\n```\n\n## Step-04: Verify GCP Cloud FileStore Instance\n- Go to FileStore -> Instances\n- Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f**\n- **Note:** The Instance ID is dynamically generated; it will be different in your case, starting with pvc-*\n\n## Step-05: Connect to filestore write app Kubernetes pods and 
Verify\n```t\n# FileStore write app - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- \/bin\/sh\nkubectl exec --stdin --tty filestore-writer-app  -- \/bin\/sh\ncd \/data\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-06: Connect to myapp1 Kubernetes pods and Verify\n```t\n# List Pods\nkubectl get pods \n\n# myapp1 POD1 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- \/bin\/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- \/bin\/sh\ncd \/usr\/share\/nginx\/html\/filestore\nls\ntail -f myapp1.txt\nexit\n\n# myapp1 POD2 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- \/bin\/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97  -- \/bin\/sh\ncd \/usr\/share\/nginx\/html\/filestore\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-07: Access Application\n```t\n# List Services\nkubectl get svc\n\n# myapp1 - Access Application\nhttp:\/\/<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>\/filestore\/myapp1.txt\nhttp:\/\/35.232.145.61\/filestore\/myapp1.txt\ncurl http:\/\/35.232.145.61\/filestore\/myapp1.txt\n```\n\n\n## Step-08: Volume Backup: 01-VolumeSnapshotClass.yaml\n- **Project Folder:** 02-volume-backup-kube-manifests\n```yaml\napiVersion: snapshot.storage.k8s.io\/v1\nkind: VolumeSnapshotClass\nmetadata:\n  name: csi-gcp-filestore-backup-snap-class\ndriver: filestore.csi.storage.gke.io\nparameters:\n  type: backup\ndeletionPolicy: Delete\n```\n\n## Step-09: Volume Backup: 02-VolumeSnapshot.yaml\n- **Project Folder:** 02-volume-backup-kube-manifests\n```yaml\napiVersion: snapshot.storage.k8s.io\/v1\nkind: VolumeSnapshot\nmetadata:\n  name: myapp1-volume-snapshot\nspec:\n  volumeSnapshotClassName: csi-gcp-filestore-backup-snap-class\n  source:\n    persistentVolumeClaimName: gke-filestore-pvc\n```\n\n## Step-10: Volume Backup: Deploy 02-volume-backup-kube-manifests and Verify\n```t\n# Deploy 02-volume-backup-kube-manifests\nkubectl apply -f 02-volume-backup-kube-manifests\n\n# List 
VolumeSnapshotClass\nkubectl get volumesnapshotclass\n\n# Describe VolumeSnapshotClass\nkubectl describe volumesnapshotclass csi-gcp-filestore-backup-snap-class\n\n# List VolumeSnapshot\nkubectl get volumesnapshot\n\n# Describe VolumeSnapshot\nkubectl describe volumesnapshot myapp1-volume-snapshot\n```\n\n## Step-11: Volume Backup: Verify GCP Cloud FileStore Backups\n- Go to FileStore -> Backups\n- Observation: You should find the Backup with name `snapshot-<SOME-ID>` (Example: snapshot-b4f24bd7-649b-45bb-8a0a-2b09d5b0e631)\n\n## Step-12: Volume Restore: 01-filestore-pvc.yaml\n- **Project Folder:** 03-volume-restore-myapp2-kube-manifests\n```yaml\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: restored-filestore-pvc\nspec:\n  accessModes:\n  - ReadWriteMany\n  storageClassName: standard-rwx\n  resources:\n    requests:\n      storage: 1Ti\n  dataSource:\n    kind: VolumeSnapshot\n    name: myapp1-volume-snapshot\n    apiGroup: snapshot.storage.k8s.io      \n```\n\n## Step-13: Volume Restore: 02-myapp2-deployment.yaml\n- **Project Folder:** 03-volume-restore-myapp2-kube-manifests\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp2-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp2\n  template:  \n    metadata: # Dictionary\n      name: myapp2-pod\n      labels: # Dictionary\n        app: myapp2  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp2-container\n          image: stacksimplify\/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n          volumeMounts:\n            - name: persistent-storage\n              mountPath: \/usr\/share\/nginx\/html\/filestore\n      volumes:\n        - name: persistent-storage\n          persistentVolumeClaim:\n            claimName: restored-filestore-pvc    \n```\n\n## Step-14: Volume Restore: 03-myapp2-loadBalancer-service.yaml\n- **Project Folder:** 
03-volume-restore-myapp2-kube-manifests\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp2-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp2\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n```\n\n## Step-13: Volume Restore: Deploy 03-volume-restore-myapp2-kube-manifests and Verify\n```t\n# Deploy 03-volume-restore-myapp2-kube-manifests\nkubectl apply -f 03-volume-restore-myapp2-kube-manifests\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List Pods\nkubectl get pods\n\n# Verify if new FileStore Instance is Created\nGo to -> FileStore -> Instances\n```\n\n## Step-14: Volume Restore: Connect to myapp2 Kubernetes pods and Verify\n```t\n# List Pods\nkubectl get pods \n\n# myapp1 POD1 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- \/bin\/sh\nkubectl exec --stdin --tty myapp2-deployment-6dccd6557-9x6dn -- \/bin\/sh\ncd \/usr\/share\/nginx\/html\/filestore\nls\ntail -f myapp1.txt\nexit\n\n# myapp1 POD2 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- \/bin\/sh\nkubectl exec --stdin --tty myapp2-deployment-6dccd6557-mbbjm  -- \/bin\/sh\ncd \/usr\/share\/nginx\/html\/filestore\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-15: Volume Restore: Access Applications\n```t\n# List Services\nkubectl get svc\n\n# myapp1 - Access Application\nhttp:\/\/<MYAPP1-EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>\/filestore\/myapp1.txt\nhttp:\/\/35.232.145.61\/filestore\/myapp1.txt\n\n\n# myapp2 - Access Application\nhttp:\/\/<MYAPP2-EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>\/filestore\/myapp1.txt\nhttp:\/\/34.71.145.41\/filestore\/myapp1.txt\n\n\nOBSERVATION: \n1. For MyApp1, writer app is writing to FileStore so we get latest timestamp lines (many lines and file growing)\n2. 
For MyApp2, we have restored it from backup, which means the number of lines present in file at the time of snapshot will be only displayed. \n3. KEY here is we are able to successfully use the filestore backup for our Kubernetes Workloads\n```\n\n## Step-16: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f 01-myapp1-kube-manifests -f 02-volume-backup-kube-manifests -f 03-volume-restore-myapp2-kube-manifests\n\n# Verify if two FileStore Instances are deleted\nGo to -> FileStore -> Instances\n\n# Verify if FileStore Backup is deleted\nGo to -> FileStore -> Backups\n``","site":"gcp gke docs","answers_cleaned":"    title  GKE Storage with GCP File Store   Backup and Restore description  Use GCP File Store for GKE Workloads   Implement Backup and Restore         Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  ZONE  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123         Step 01  Introduction   GKE Storage with GCP File Store    Implement Backups is  VolumeSnapshotClass  and  VolumeSnapshot    Implement Restore of FileStore in myapp2 Application and Verify      Step 02  YAML files are same as first FileStore Demo     Project Folder    01 myapp1 kube manifests   YAML files are same as first FileStore Demo   01 filestore pvc yaml   02 write to filestore pod yaml   03 myapp1 deployment yaml   04 loadBalancer service yaml     Step 03  Deploy 01 myapp1 kube manifests and Verify    t   Deploy 01 myapp1 kube manifests kubectl apply  f 01 myapp1 kube manifests    List Storage Class kubectl get sc    List PVC kubectl get pvc    List PV kubectl get pv    List Pods kubectl get pods          Step 04  Verify GCP Cloud FileStore Instance   Go to 
FileStore    Instances   Click on   Instance ID  pvc 27cd5c27 0ed0 48d1 bc5f 925adfb8495f       Note    Instance ID dynamically generated  it can be different in your case starting with pvc       Step 05  Connect to filestore write app Kubernetes pods and Verify    t   FileStore write app   Connect to Kubernetes Pod kubectl exec   stdin   tty  POD NAME      bin sh kubectl exec   stdin   tty filestore writer app      bin sh cd  data ls tail  f myapp1 txt exit         Step 06  Connect to myapp1 Kubernetes pods and Verify    t   List Pods kubectl get pods     myapp1 POD1   Connect to Kubernetes Pod kubectl exec   stdin   tty  POD NAME      bin sh kubectl exec   stdin   tty myapp1 deployment 5d469f6478 2kp97     bin sh cd  usr share nginx html filestore ls tail  f myapp1 txt exit    myapp1 POD2   Connect to Kubernetes Pod kubectl exec   stdin   tty  POD NAME      bin sh kubectl exec   stdin   tty myapp1 deployment 5d469f6478 2kp97      bin sh cd  usr share nginx html filestore ls tail  f myapp1 txt exit         Step 07  Access Application    t   List Services kubectl get svc    myapp1   Access Application http    EXTERNAL IP OF GET SERVICE OUTPUT  filestore myapp1 txt http   35 232 145 61 filestore myapp1 txt curl http   35 232 145 61 filestore myapp1 txt          Step 08  Volume Backup  01 VolumeSnapshotClass yaml     Project Folder    02 volume backup kube manifests    yaml apiVersion  snapshot storage k8s io v1 kind  VolumeSnapshotClass metadata    name  csi gcp filestore backup snap class driver  filestore csi storage gke io parameters    type  backup deletionPolicy  Delete         Step 09  Volume Backup  02 VolumeSnapshot yaml     Project Folder    02 volume backup kube manifests    yaml apiVersion  snapshot storage k8s io v1 kind  VolumeSnapshot metadata    name  myapp1 volume snapshot spec    volumeSnapshotClassName  csi gcp filestore backup snap class   source      persistentVolumeClaimName  gke filestore pvc         Step 10  Volume Backup  Deploy 02 volume 
"}
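The volume-restore step in the FileStore record above hinges on one detail: the restored PersistentVolumeClaim's `dataSource` must name the VolumeSnapshot (`myapp1-volume-snapshot`) through the `snapshot.storage.k8s.io` API group. A minimal shell sketch of that idea — `render_restore_pvc` is a hypothetical helper invented here; the field values mirror the `01-filestore-pvc.yaml` manifest in the record:

```shell
#!/usr/bin/env sh
# Render a PVC that restores a FileStore volume from a VolumeSnapshot.
# render_restore_pvc <pvc-name> <snapshot-name> is a hypothetical helper;
# field values mirror 01-filestore-pvc.yaml from the record above.
render_restore_pvc() {
  pvc_name="$1"
  snap_name="$2"
  cat <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ${pvc_name}
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Ti
  dataSource:
    kind: VolumeSnapshot
    name: ${snap_name}
    apiGroup: snapshot.storage.k8s.io
EOF
}

# Sanity check: the restore only works if the dataSource fields are present.
manifest="$(render_restore_pvc restored-filestore-pvc myapp1-volume-snapshot)"
echo "$manifest" | grep -q 'kind: VolumeSnapshot' && \
echo "$manifest" | grep -q 'apiGroup: snapshot.storage.k8s.io' && \
echo "dataSource OK"

# Pipe into kubectl once the checks pass (requires cluster access):
#   render_restore_pvc restored-filestore-pvc myapp1-volume-snapshot | kubectl apply -f -
```

Because `dataSource` points at a snapshot rather than an empty volume, the CSI driver provisions a brand-new FileStore instance pre-populated with the snapshot's data, which is why `myapp2` sees `myapp1.txt` frozen at its snapshot-time length.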
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine GKE Artifact Registry title GCP Google Kubernetes Engine GKE Artifact Registry Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Artifact Registry\ndescription: Implement GCP Google Kubernetes Engine GKE Artifact Registry\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal.\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Build a Docker Image\n- Create a Docker repository in Google Artifact Registry.\n- Set up authentication.\n- Push an image to the repository.\n- Pull the image from the repository and Create Deployment in GKE Cluster\n- Access Sample Application in browser and verify\n\n\n## Step-02: Create Dockerfile\n- **Dockerfile**\n```t\nFROM nginx\nCOPY index.html \/usr\/share\/nginx\/html\n```\n\n## Step-03: Build Docker Image\n```t\n# Change Directory\ncd google-kubernetes-engine\/56-GKE-Artifact-Registry\/\ncd 01-Docker-Image\n\n# Build Docker Image\ndocker build -t myapp1:v1 .\n\n# List Docker Image\ndocker images myapp1\n```\n\n## Step-04: Run Docker Image\n```t\n# Run Docker Image\ndocker run --name myapp1 -p 80:80 -d myapp1:v1\n\n# Access in browser\nhttp:\/\/localhost\n\n# List Running Docker Containers\ndocker ps\n\n# Stop Docker Container\ndocker stop myapp1\n\n# List All Docker Containers (Stopped Containers)\ndocker ps -a\n\n# Delete Stopped Container\ndocker rm myapp1\n\n# List All Docker 
Containers (Stopped Containers)\ndocker ps -a\n```\n\n## Step-05: Create Google Artifact Registry\n- Go to Artifact Registry -> Repositories -> Create\n```t\n# Create Google Artifact Registry\nName: gke-artifact-repo1\nFormat: Docker\nRegion: us-central1\nEncryption: Google-managed encryption key\nClick on Create\n```\n\n## Step-06: Configure Google Artifact Repository authentication\n```t\n# Google Artifact Repository authentication\n## To set up authentication to Docker repositories in the region us-central1\ngcloud auth configure-docker <LOCATION>-docker.pkg.dev\ngcloud auth configure-docker us-central1-docker.pkg.dev\n```\n\n## Step-07: Tag & push the Docker image to Google Artifact Registry\n```t\n# Tag the Docker Image\ndocker tag myapp1:v1 <LOCATION>-docker.pkg.dev\/<GOOGLE-PROJECT-ID>\/<GOOGLE-ARTIFACT-REGISTRY-NAME>\/<IMAGE-NAME>:<IMAGE-TAG>\n\n# Replace Values for docker tag command\n# - LOCATION,\n# - GOOGLE-PROJECT-ID,\n# - GOOGLE-ARTIFACT-REGISTRY-NAME,\n# - IMAGE-NAME,\n# - IMAGE-TAG\ndocker tag myapp1:v1 us-central1-docker.pkg.dev\/kdaida123\/gke-artifact-repo1\/myapp1:v1\n\n# Push the Docker Image to Google Artifact Registry\ndocker push us-central1-docker.pkg.dev\/kdaida123\/gke-artifact-repo1\/myapp1:v1\n```\n\n## Step-08: Verify the Docker Image on Google Artifact Registry\n- Go to Google Artifact Registry -> Repositories -> gke-artifact-repo1\n- Review **myapp1** Docker Image\n\n## Step-09: Update Docker Image and Review kube-manifests\n- **Project-Folder:** 02-kube-manifests\n```yaml\n# Docker Image\nimage: us-central1-docker.pkg.dev\/<GCP-PROJECT-ID>\/<ARTIFACT-REPO>\/myapp1:v1\n\n# Update Docker Image in 01-kubernetes-deployment.yaml\nimage: us-central1-docker.pkg.dev\/kdaida123\/gke-artifact-repo1\/myapp1:v1\n```\n\n## Step-10: Deploy kube-manifests\n```t\n# Deploy kube-manifests\nkubectl apply -f 02-kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# Describe Pod\nkubectl describe pod 
<POD-NAME>\n\n## Observation - Verify Events command \"kubectl describe pod <POD-NAME>\"\n### We should see image pulled from \"us-central1-docker.pkg.dev\/kdaida123\/gke-artifact-repo1\/myapp1:v1\"\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  86s   default-scheduler  Successfully assigned default\/myapp1-deployment-5f8d5c6f48-pb686 to gke-standard-cluster-1-default-pool-2c852f67-46hv\n  Normal  Pulling    85s   kubelet            Pulling image \"us-central1-docker.pkg.dev\/kdaida123\/gke-artifact-repo1\/myapp1:v1\"\n  Normal  Pulled     81s   kubelet            Successfully pulled image \"us-central1-docker.pkg.dev\/kdaida123\/gke-artifact-repo1\/myapp1:v1\" in 4.285567138s\n  Normal  Created    81s   kubelet            Created container myapp1-container\n  Normal  Started    80s   kubelet            Started container myapp1-container\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<SVC-EXTERNAL-IP>\n```\n\n## Step-11: Clean-Up\n```t\n# Undeploy sample App\nkubectl delete -f 02-kube-manifests\n```\n\n## References\n- [Google Artifact Registry](https:\/\/cloud.google.com\/artifact-registry\/docs\/overview)","site":"gcp gke docs"}
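The tag-and-push steps in the Artifact Registry record above follow one fixed naming scheme: `<LOCATION>-docker.pkg.dev/<PROJECT-ID>/<REPO>/<IMAGE>:<TAG>`. A small shell sketch — `ar_image_ref` is a hypothetical helper invented here, with the example values taken from the record — that assembles the reference once so the `docker tag` and `docker push` commands cannot drift apart:

```shell
#!/usr/bin/env sh
# Compose a Google Artifact Registry image reference from its parts.
# ar_image_ref <location> <project-id> <repo> <image> <tag> is a
# hypothetical helper; the naming scheme is Artifact Registry's
# documented <LOCATION>-docker.pkg.dev/<PROJECT>/<REPO>/<IMAGE>:<TAG>.
ar_image_ref() {
  echo "$1-docker.pkg.dev/$2/$3/$4:$5"
}

# Values from the record above (project kdaida123, repo gke-artifact-repo1).
ref="$(ar_image_ref us-central1 kdaida123 gke-artifact-repo1 myapp1 v1)"
echo "$ref"   # us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1

# Reuse the same reference for both commands (requires docker + gcloud auth):
#   docker tag myapp1:v1 "$ref"
#   docker push "$ref"
```

The same string is then what goes into the Deployment's `image:` field in Step-09, so computing it in one place keeps the build, push, and manifest consistent.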
{"questions":"gcp gke docs Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Implement GCP Google Kubernetes Engine GKE Ingress Cookie Affinity Configure kubeconfig for kubectl title GCP Google Kubernetes Engine GKE Ingress Cookie Affinity 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress Cookie Affinity\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Cookie Affinity\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n3. ExternalDNS Controller should be installed and ready to use\n```t\n# List Namespaces (external-dns-ns namespace should be present)\nkubectl get ns\n\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n\n## Step-01: Introduction\n- Implement the following features for Ingress Service\n- BackendConfig - GENERATED_COOKIE Affinity for Ingress Service\n- We are going to create two projects\n  - **Project-01:** GENERATED_COOKIE Affinity enabled\n  - **Project-02:** GENERATED_COOKIE Affinity disabled\n\n## Step-02: Create External IP Address using gcloud\n```t\n# Create External IP Address 1 (IF NOT CREATED - ALREADY CREATED IN PREVIOUS SECTIONS)\ngcloud compute addresses create ADDRESS_NAME --global\ngcloud compute addresses create gke-ingress-extip1 --global\n\n# Create External IP Address 2\ngcloud compute addresses create ADDRESS_NAME --global\ngcloud compute addresses create gke-ingress-extip2 --global\n\n# Describe External IP Address to get its value\ngcloud compute addresses describe ADDRESS_NAME 
--global\ngcloud compute addresses describe gke-ingress-extip2 --global\n\n# Verify\nGo to VPC Network -> IP Addresses -> External IP Address\n```\n\n## Step-03: Project-01: Review YAML Manifests\n- **Project Folder:** 01-kube-manifests-with-cookie-affinity \n- 01-kubernetes-deployment.yaml\n- 02-kubernetes-NodePort-service.yaml\n- 03-ingress.yaml\n- 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com\/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  sessionAffinity:\n    affinityType: \"GENERATED_COOKIE\"\n```\n\n## Step-04: Project-02: Review YAML Manifests\n- **Project Folder:** 02-kube-manifests-without-cookie-affinity\n- 01-kubernetes-deployment.yaml\n- 02-kubernetes-NodePort-service.yaml\n- 03-ingress.yaml\n- 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com\/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig2\nspec:\n  timeoutSec: 42 # Backend service timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n```\n## Step-05: Deploy Kubernetes Manifests\n```t\n# Project-01: Deploy Kubernetes Manifests \nkubectl apply -f 01-kube-manifests-with-cookie-affinity\n\n# 
Project-02: Deploy Kubernetes Manifests \nkubectl apply -f 02-kube-manifests-without-cookie-affinity\n\n# Verify Deployments\nkubectl get deploy \n\n# Verify Pods\nkubectl get pods\n\n# Verify Node Port Services\nkubectl get svc\n\n# Verify Ingress Services\nkubectl get ingress\n\n# Verify Backend Config\nkubectl get backendconfig\n\n# Project-01: Verify Load Balancer Settings\nGo to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Cookie Affinity Setting\nObservation:\nCookie Affinity setting should be in enabled state\n\n# Project-02: Verify Load Balancer Settings\nGo to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Cookie Affinity Setting\nCookie Affinity setting should be in disabled state\n```\n\n## Step-06: Access Application\n```t\n# Project-01: Access Application using DNS or ExtIP\nhttp:\/\/ingress-with-cookie-affinity.kalyanreddydaida.com\nhttp:\/\/<EXT-IP-1>\nObservation:\n1. Request will keep going always to only one POD due to GENERATED_COOKIE Affinity we configured\n\n# Project-02: Access Application using DNS or ExtIP\nhttp:\/\/ingress-without-cookie-affinity.kalyanreddydaida.com\nhttp:\/\/<EXT-IP-2>\nObservation:\n1. 
Requests will be load balanced to 4 pods created as part of \"cdn-demo2\" deployment.\n```\n\n## Step-07: Clean-Up\n```t\n# Project-01: Delete Kubernetes Resources\nkubectl delete -f 01-kube-manifests-with-cookie-affinity\n\n# Project-02: Delete Kubernetes Resources\nkubectl delete -f 02-kube-manifests-without-cookie-affinity\n```\n\n## References\n- [Ingress Features](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features)","site":"gcp gke docs"}
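GENERATED_COOKIE affinity in the record above means the load balancer issues a cookie on the first response and then steers every request carrying that cookie to the same backend. A toy shell illustration of the idea — `pick_backend` and its `cksum`-based hash are invented for this demo and are NOT GCP's actual algorithm — showing why Project-01's requests keep landing on one pod while Project-02's spread out:

```shell
#!/usr/bin/env sh
# Toy model of cookie-based session affinity: hash the cookie value to
# deterministically pick one of N backends. pick_backend is invented for
# this demo; it is not the algorithm Google Cloud's load balancer uses.
pick_backend() {
  cookie="$1"
  n_backends="$2"
  # cksum prints "checksum byte-count"; keep the checksum as our hash.
  hash=$(printf '%s' "$cookie" | cksum | cut -d' ' -f1)
  echo "pod-$((hash % n_backends))"
}

# Same cookie -> same pod on every request (Project-01 behaviour):
pick_backend "GCLB=abc123" 4
pick_backend "GCLB=abc123" 4

# Without a persisted cookie, each request hashes differently and can
# land on any pod (Project-02 behaviour):
pick_backend "GCLB=req-1" 4
pick_backend "GCLB=req-2" 4
```

The first two calls always print the same pod name, which mirrors the observation in Step-06 that Project-01 traffic sticks to a single pod.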
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine GKE Ingress and Cloud CDN Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl title GCP Google Kubernetes Engine GKE Ingress and Cloud CDN 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress and Cloud CDN\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress and Cloud CDN\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n3. ExternalDNS Controller should be installed and ready to use\n```t\n# List Namespaces (external-dns-ns namespace should be present)\nkubectl get ns\n\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n\n## Step-01: Introduction\n- Implement the following features for Ingress Service\n1. BackendConfig for Ingress Service\n2. Backend Service Timeout\n3. Connection Draining\n4. Ingress Service HTTP Access Logging\n5. 
Enable Cloud CDN\n\n## Step-02: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: cdn-demo-deployment\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: cdn-demo\n  template:\n    metadata:\n      labels:\n        app: cdn-demo\n    spec:\n      containers:\n        - name: cdn-demo\n          image: us-docker.pkg.dev\/google-samples\/containers\/gke\/hello-app-cdn:1.0\n          ports:\n            - containerPort: 8080\n```\n\n## Step-03: 02-kubernetes-NodePort-service.yaml\n- Update Backend Config with annotation **cloud.google.com\/backend-config: '{\"default\": \"my-backendconfig\"}'**\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: cdn-demo-nodeport-service\n  annotations:\n    cloud.google.com\/backend-config: '{\"default\": \"my-backendconfig\"}'     \nspec:\n  type: NodePort\n  selector:\n    app: cdn-demo\n  ports:\n    - port: 80\n      targetPort: 8080\n```\n## Step-04: 03-ingress.yaml\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-cdn-demo\n  annotations:\n    # External Load Balancer\n    kubernetes.io\/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io\/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io\/hostname: ingress-cdn-demo.kalyanreddydaida.com\nspec:          \n  defaultBackend:\n    service:\n      name: cdn-demo-nodeport-service\n      port:\n        number: 80     \n```\n\n## Step-05: 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com\/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: 
https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  cdn:\n    enabled: true\n    cachePolicy:\n      includeHost: true\n      includeProtocol: true\n      includeQueryString: false  \n```\n\n## Step-06: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Service\nkubectl get ingress\n\n# List Backend Config\nkubectl get backendconfig\nkubectl describe backendconfig my-backendconfig\n```\n## Step-07: Verify Settings in Load Balancer\n- Go to Network Services -> Load Balancing -> Click on Load Balancer\n- Go to Backend -> Backend Services\n- Verify the Settings\n  - **Timeout:** 42 seconds\n  - **Connection draining timeout:** 62 seconds\n  - **Cloud CDN:** Enabled\n  - **Logging: Enabled:** (sample rate: 1)\n\n## Step-08: Verify Cloud CDN \n- Go to Network Services -> Cloud CDN -> (Automatically created when Ingress Deployed k8s1-c6634a10-default-cdn-demo-nodeport-service-80-553facae)\n- Verify Settings\n  - DETAILS TAB\n  - MONITORING TAB\n  - CACHING TAB\n\n## Step-09: Access Application and Verify Cache Age\n```t\n# Access Application\nhttp:\/\/<DNS-NAME-FROM-INGRESS-SERVICE>\n[or]\nhttp:\/\/<IP-ADDRESS-FROM-INGRESS-SERVICE-OUTPUT>\n\n# Access Application using DNS Name\nhttp:\/\/ingress-cdn-demo.kalyanreddydaida.com\ncurl -v http:\/\/ingress-cdn-demo.kalyanreddydaida.com\/?cache=true\ncurl -v http:\/\/ingress-cdn-demo.kalyanreddydaida.com\ncurl -v http:\/\/ingress-cdn-demo.kalyanreddydaida.com\n\n## Important Note:\n1. The output shows the response headers and body. \n2. In the response headers, you can see that the content was cached. 
The Age header tells you how many seconds the content has been cached\n\n## Sample Output\nKalyans-Mac-mini:46-GKE-Ingress-Cloud-CDN kalyanreddy$ curl -v http:\/\/ingress-cdn-demo.kalyanreddydaida.com\n*   Trying 34.120.32.120:80...\n* Connected to ingress-cdn-demo.kalyanreddydaida.com (34.120.32.120) port 80 (#0)\n> GET \/ HTTP\/1.1\n> Host: ingress-cdn-demo.kalyanreddydaida.com\n> User-Agent: curl\/7.79.1\n> Accept: *\/*\n> \n* Mark bundle as not supporting multiuse\n< HTTP\/1.1 200 OK\n< Content-Length: 76\n< Via: 1.1 google\n< Date: Thu, 23 Jun 2022 04:47:42 GMT\n< Content-Type: text\/plain; charset=utf-8\n< Age: 1625\n< Cache-Control: max-age=3600,public\n< \nHello, world!\nVersion: 1.0.0\nHostname: cdn-demo-deployment-6f4c8f655d-htpsn\n* Connection #0 to host ingress-cdn-demo.kalyanreddydaida.com left intact\nKalyans-Mac-mini:46-GKE-Ingress-Cloud-CDN kalyanreddy$ \n```\n\n## Step-10: Verify Cloud CDN Monitoring Tab\n- Go to Network Services -> Cloud CDN -> MONITORING Tab\n- Review Charts\n  - CDN Bandwidth\n  - CDN Hit Rate\n  - CDN Fill Rate\n  - CDN Egress Rate\n  - Requests\n  - Response Codes\n\n## Step-11: Verify Ingress Service Logs in Cloud Logging\n- Go to Cloud Logging -> Logs Explorer -> Log Fields -> Select\n- Resource Type: Cloud HTTP Load Balancer\n- Severity: Info\n- Project ID: kdaida123\n- Review the logs\n- Access Application and parallely review the logs\n```t\n# Access Application\ncurl -v http:\/\/ingress-cdn-demo.kalyanreddydaida.com\n```\n\n## Step-12: Verify Ingress Service Logs in Cloud Logging using Other Approach\n- Go to Cloud Logging -> Logs Dashboard \n- Go to Chart -> HTTP\/S Load Balancer Logs By Severity -> Click on **VIEW LOGS**\n\n\n## References\n- [Ingress Features](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features)\n- [Caching overview](https:\/\/cloud.google.com\/cdn\/docs\/caching#cacheability)\n\n","site":"gcp gke docs","answers_cleaned":"    title  GCP Google Kubernetes Engine GKE Ingress 
and Cloud CDN\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress and Cloud CDN\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n3. ExternalDNS Controller should be installed and ready to use\n```t\n# List Namespaces (external-dns-ns namespace should be present)\nkubectl get ns\n\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n\n## Step-01: Introduction\n- Implement following Features for Ingress Service\n1. BackendConfig for Ingress Service\n2. Backend Service Timeout\n3. Connection Draining\n4. Ingress Service HTTP Access Logging\n5. Enable Cloud CDN\n\n## Step-02: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: cdn-demo-deployment\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: cdn-demo\n  template:\n    metadata:\n      labels:\n        app: cdn-demo\n    spec:\n      containers:\n        - name: cdn-demo\n          image: us-docker.pkg.dev\/google-samples\/containers\/gke\/hello-app-cdn:1.0\n          ports:\n            - containerPort: 8080\n```\n\n## Step-03: 02-kubernetes-NodePort-service.yaml\n- Update Backend Config with annotation `cloud.google.com\/backend-config: '{\"default\": \"my-backendconfig\"}'`\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: cdn-demo-nodeport-service\n  annotations:\n    cloud.google.com\/backend-config: '{\"default\": \"my-backendconfig\"}'\nspec:\n  type: NodePort\n  selector:\n    app: cdn-demo\n  ports:\n    - port: 80\n      targetPort: 8080\n```\n\n## Step-04: 03-ingress.yaml\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-cdn-demo\n  annotations:\n    # External Load Balancer\n    kubernetes.io\/ingress.class: \"gce\"\n    # Static IP for Ingress Service\n    kubernetes.io\/ingress.global-static-ip-name: \"gke-ingress-extip1\"\n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io\/hostname: ingress-cdn-demo.kalyanreddydaida.com\nspec:\n  defaultBackend:\n    service:\n      name: cdn-demo-nodeport-service\n      port:\n        number: 80\n```\n\n## Step-05: 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com\/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  cdn:\n    enabled: true\n    cachePolicy:\n      includeHost: true\n      includeProtocol: true\n      includeQueryString: false\n```\n\n## Step-06: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Service\nkubectl get ingress\n\n# List Backend Config\nkubectl get backendconfig\nkubectl describe backendconfig my-backendconfig\n```\n\n## Step-07: Verify Settings in Load Balancer\n- Go to Network Services -> Load Balancing -> Click on Load Balancer\n- Go to Backend -> Backend Services\n- Verify the Settings\n  - Timeout: 42 seconds\n  - Connection draining timeout: 62 seconds\n  - Cloud CDN: Enabled\n  - Logging: Enabled (sample rate: 1)\n\n## Step-08: Verify Cloud CDN\n- Go to Network Services -> Cloud CDN\n- Automatically created when Ingress Deployed: k8s1-c6634a10-default-cdn-demo-nodeport-service-80-553facae\n- Verify Settings\n  - DETAILS TAB\n  - MONITORING TAB\n  - CACHING TAB\n\n## Step-09: Access Application and Verify Cache Age\n```t\n# Access Application\nhttp:\/\/<DNS-NAME-FROM-INGRESS-SERVICE>\n[or]\nhttp:\/\/<IP-ADDRESS-FROM-INGRESS-SERVICE-OUTPUT>\n\n# Access Application using DNS Name\nhttp:\/\/ingress-cdn-demo.kalyanreddydaida.com\ncurl -v http:\/\/ingress-cdn-demo.kalyanreddydaida.com\/?cache=true\ncurl -v http:\/\/ingress-cdn-demo.kalyanreddydaida.com\ncurl -v http:\/\/ingress-cdn-demo.kalyanreddydaida.com\n```\n- **Important Note:**\n1. The output shows the response headers and body.\n2. In the response headers, you can see that the content was cached. The Age header tells you how many seconds the content has been cached.\n```t\n# Sample Output\nKalyans-Mac-mini:46-GKE-Ingress-Cloud-CDN kalyanreddy$ curl -v http:\/\/ingress-cdn-demo.kalyanreddydaida.com\n*   Trying 34.120.32.120:80...\n* Connected to ingress-cdn-demo.kalyanreddydaida.com (34.120.32.120) port 80 (#0)\n> GET \/ HTTP\/1.1\n> Host: ingress-cdn-demo.kalyanreddydaida.com\n> User-Agent: curl\/7.79.1\n> Accept: *\/*\n>\n* Mark bundle as not supporting multiuse\n< HTTP\/1.1 200 OK\n< Content-Length: 76\n< Via: 1.1 google\n< Date: Thu, 23 Jun 2022 04:47:42 GMT\n< Content-Type: text\/plain; charset=utf-8\n< Age: 1625\n< Cache-Control: max-age=3600,public\n<\nHello, world!\nVersion: 1.0.0\nHostname: cdn-demo-deployment-6f4c8f655d-htpsn\n* Connection #0 to host ingress-cdn-demo.kalyanreddydaida.com left intact\nKalyans-Mac-mini:46-GKE-Ingress-Cloud-CDN kalyanreddy$\n```\n\n## Step-10: Verify Cloud CDN Monitoring Tab\n- Go to Network Services -> Cloud CDN -> MONITORING Tab\n- Review Charts\n  - CDN Bandwidth\n  - CDN Hit Rate\n  - CDN Fill Rate\n  - CDN Egress Rate\n  - Requests\n  - Response Codes\n\n## Step-11: Verify Ingress Service Logs in Cloud Logging\n- Go to Cloud Logging -> Logs Explorer -> Log Fields -> Select\n  - Resource Type: Cloud HTTP Load Balancer\n  - Severity: Info\n  - Project ID: kdaida123\n- Review the logs\n- Access Application and review the logs in parallel\n```t\n# Access Application\ncurl -v http:\/\/ingress-cdn-demo.kalyanreddydaida.com\n```\n\n## Step-12: Verify Ingress Service Logs in Cloud Logging using Other Approach\n- Go to Cloud Logging -> Logs Dashboard\n- Go to Chart: HTTP\/S Load Balancer Logs By Severity\n- Click on **VIEW LOGS**\n\n## References\n- [Ingress Features](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features)\n- [Caching overview](https:\/\/cloud.google.com\/cdn\/docs\/caching#cacheability)\n"}
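The Step-09 verification above boils down to reading the `Age` header out of a curl response. A minimal sketch of that check, with the header text hard-coded as a stand-in for live `curl -sI http://<INGRESS-HOSTNAME>` output (hostname and values are from the tutorial's sample, not re-verified):

```bash
# Stand-in for the headers curl would return through the GCLB + Cloud CDN.
headers='HTTP/1.1 200 OK
Via: 1.1 google
Age: 1625
Cache-Control: max-age=3600,public'

# The Age header only appears once the response was served from cache.
age=$(printf '%s\n' "$headers" | awk -F': ' '/^Age:/ {print $2}')
if [ -n "$age" ] && [ "$age" -gt 0 ]; then
  echo "served from CDN cache (cached for ${age}s)"
else
  echo "not cached (origin response)"
fi
```

Against a live ingress you would replace the `headers` variable with real `curl -sI` output; the first request after deployment typically has no `Age` header, and repeats within `max-age` do.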
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine GKE Ingress with External IP title GCP Google Kubernetes Engine GKE Ingress with External IP Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress with External IP\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress with External IP\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n3. 
Registered Domain using Google Cloud Domains\n\n## Step-01: Introduction\n- Reserve an External IP Address\n- Using Annotation `kubernetes.io\/ingress.global-static-ip-name` associate this External IP to Ingress Service\n\n## Step-02: Create External IP Address using gcloud\n```t\n# Create External IP Address\ngcloud compute addresses create ADDRESS_NAME --global\ngcloud compute addresses create gke-ingress-extip1 --global\n\n# Describe External IP Address \ngcloud compute addresses describe ADDRESS_NAME --global\ngcloud compute addresses describe gke-ingress-extip1 --global\n\n# List External IP Address\ngcloud compute addresses list\n\n# Verify\nGo to VPC Network -> IP Addresses -> External IP Address\n```\n\n## Step-03: Add RECORDSET Google Cloud DNS for this External IP\n- Go to Network Services -> Cloud DNS -> kalyanreddydaida.com -> **ADD RECORD SET**\n- DNS NAME: demo1.kalyanreddydaida.com\n- **IPv4 Address:** <EXTERNAL-IP-RESERVED-IN-STEP-02>\n- Click on **CREATE**\n\n## Step-04: Verify DNS resolving to IP \n```t\n# nslookup test\nnslookup demo1.kalyanreddydaida.com\n\n## Sample Output\nKalyans-Mac-mini:google-kubernetes-engine kalyanreddy$ nslookup demo1.kalyanreddydaida.com\nServer:\t\t192.168.2.1\nAddress:\t192.168.2.1#53\n\nNon-authoritative answer:\nName:\tdemo1.kalyanreddydaida.com\nAddress: 34.120.32.120\n\nKalyans-Mac-mini:google-kubernetes-engine kalyanreddy$ \n```\n\n\n## Step-05: 04-Ingress-external-ip.yaml\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-external-ip\n  annotations:\n    # External Load Balancer\n    kubernetes.io\/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io\/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: \/app1\n            pathType: 
Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: \/app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80          \n```\n\n## Step-06: No changes to other 3 YAML Files\n- 01-Nginx-App1-Deployment-and-NodePortService.yaml\n- 02-Nginx-App2-Deployment-and-NodePortService.yaml\n- 03-Nginx-App3-Deployment-and-NodePortService.yaml\n\n## Step-07: Deploy kube-manifests and verify\n```t\n# Deploy Kubernetes manifests\nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Load Balancers\nkubectl get ingress\n\n# Describe Ingress and view Rules\nkubectl describe ingress ingress-external-ip\n```\n\n\n\n## Step-08: Access Application\n```t\n# Important Note\nWait for 2 to 3 minutes for the Load Balancer to be completely created and ready for use, else we will get HTTP 502 errors\n\n# Access Application\nhttp:\/\/<DNS-DOMAIN-NAME>\/app1\/index.html\nhttp:\/\/<DNS-DOMAIN-NAME>\/app2\/index.html\nhttp:\/\/<DNS-DOMAIN-NAME>\/\n\n# Replace Domain Name registered in Cloud DNS\nhttp:\/\/demo1.kalyanreddydaida.com\/app1\/index.html\nhttp:\/\/demo1.kalyanreddydaida.com\/app2\/index.html\nhttp:\/\/demo1.kalyanreddydaida.com\/\n```\n\n## Step-09: Clean Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Verify Load Balancer Deleted\nGo to Network Services -> Load Balancing -> No Load balancers should be present\n```\n","site":"gcp gke docs"}
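Step-04 of the record above verifies the record set by eyeballing nslookup output. A small sketch of the same check done programmatically, parsing a captured answer section (the lookup text is a stand-in for live `nslookup demo1.kalyanreddydaida.com` output; the expected IP is the one reserved in Step-02):

```bash
# Static IP reserved with `gcloud compute addresses create ... --global`.
expected_ip="34.120.32.120"

# Stand-in for the answer section of a live nslookup run.
lookup='Non-authoritative answer:
Name:	demo1.kalyanreddydaida.com
Address: 34.120.32.120'

# Pull the resolved address and compare it to the reserved one.
resolved=$(printf '%s\n' "$lookup" | awk '/^Address: / {print $2}')
if [ "$resolved" = "$expected_ip" ]; then
  echo "DNS OK: resolves to $resolved"
else
  echo "DNS mismatch: got $resolved, expected $expected_ip"
fi
```

With `dig +short demo1.kalyanreddydaida.com` instead of nslookup the parsing step disappears entirely, since dig prints only the address.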
{"questions":"gcp gke docs Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created title GCP Google Kubernetes Engine GKE NodePort Service Implement GCP Google Kubernetes Engine GKE NodePort Service t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE NodePort Service\ndescription: Implement GCP Google Kubernetes Engine GKE NodePort Service\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# List GKE Kubernetes Worker Nodes\nkubectl get nodes\n\n# List GKE Kubernetes Worker Nodes with -o wide option\nkubectl get nodes -o wide\nObservation: \n1. You should see External-IP Address (Public IP accessible via internet)\n2. 
That is the key thing for testing the Kubernetes NodePort Service on GKE Cluster\n```\n## Step-01: Introduction\n- Implement Kubernetes NodePort Service \n\n## Step-02: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify\/kubenginx:1.0.0\n          ports: \n            - containerPort: 80      \n```\n\n## Step-03: 02-kubernetes-nodeport-service.yaml\n- If you don't specify `nodePort: 30080` it will dynamically assign one port from range `30000-32768`\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-nodeport-service\nspec:\n  type: NodePort # clusterIP, # NodePort, # LoadBalancer, # ExternalName\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n      nodePort: 30080 # NodePort (Optional)(Node Port Range: 30000-32768)\n```\n\n\n## Step-04: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get po\n\n# List Services\nkubectl get svc\n```\n\n## Step-05: Access Application\n```t\n# List Kubernetes Worker Node with -o wide\nkubectl get nodes -o wide\nObservation: \n1. Make a note of any one Node External-IP (Public IP Address)\n\n# Access Application\nhttp:\/\/<NODE-EXTERNAL-IP>:<NodePort>\nhttp:\/\/104.154.52.12:30080\nObservation:\n1. 
This should fail\n```\n\n## Step-06: Create Firewall Rule\n```t\n# Create Firewall Rule\ngcloud compute firewall-rules create fw-rule-gke-node-port \\\n    --allow tcp:NODE_PORT\n\n# Replace NODE_PORT\ngcloud compute firewall-rules create fw-rule-gke-node-port \\\n    --allow tcp:30080   \n\n# List Firewall Rules\ngcloud compute firewall-rules list    \n```\n\n## Step-07: Access Application\n```t\n# List Kubernetes Worker Node with -o wide\nkubectl get nodes -o wide\nObservation: \n1. Make a note of any one Node External-IP (Public IP Address)\n\n# Access Application\nhttp:\/\/<NODE-EXTERNAL-IP>:<NodePort>\nhttp:\/\/104.154.52.12:30080\nObservation:\n1. This should Pass\n```\n\n\n\n## Step-08: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Delete NodePort Service Firewall Rule\ngcloud compute firewall-rules delete fw-rule-gke-node-port\n\n# List Firewall Rules\ngcloud compute firewall-rules list \n```\n\n","site":"gcp gke docs"}
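The NodePort record above notes that an unspecified `nodePort` is auto-assigned from `30000-32768`. A minimal sketch that validates a hand-picked port against that default range before it goes into the Service manifest (the port value mirrors the tutorial's 30080):

```bash
# Default Kubernetes NodePort range; the API server rejects ports outside it.
range_min=30000
range_max=32768

node_port=30080
if [ "$node_port" -ge "$range_min" ] && [ "$node_port" -le "$range_max" ]; then
  echo "nodePort $node_port is inside the default range"
else
  echo "nodePort $node_port is outside ${range_min}-${range_max}; pick another or omit it"
fi
```

The range itself is configurable cluster-side via the API server's `--service-node-port-range` flag, so on a non-default cluster the bounds would need to come from the cluster operator.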
{"questions":"gcp gke docs Step 00 Pre requisites title GKE Persistent Disks Existing StorageClass premium rwo 2 Verify if kubeconfig for kubectl is configured in your local terminal Use existing storageclass premium rwo in Kubernetes Workloads Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GKE Persistent Disks Existing StorageClass premium-rwo\ndescription: Use existing storageclass premium-rwo in Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster.\n\n## Step-01: Introduction\n- Understand Kubernetes Objects\n01. Kubernetes PersistentVolumeClaim\n02. Kubernetes ConfigMap\n03. Kubernetes Deployment\n04. Kubernetes Volumes\n05. Kubernetes Volume Mounts\n06. Kubernetes Environment Variables\n07. Kubernetes ClusterIP Service\n08. Kubernetes Init Containers\n09. Kubernetes Service of Type LoadBalancer\n10. Kubernetes StorageClass \n\n- Use the predefined Storage class `premium-rwo`\n- By default, dynamically provisioned PersistentVolumes use the default StorageClass and are backed by `standard hard disks`. \n- If you need faster SSDs, you can use the `premium-rwo` storage class from the Compute Engine persistent disk CSI Driver to provision your volumes. 
\n- This can be done by setting the storageClassName field to `premium-rwo` in your PersistentVolumeClaim \n- `premium-rwo Storage Class` will provision `SSD Persistent Disk`\n\n## Step-02: List Kubernetes Storage Classes in GKE Cluster\n```t\n# List Storage Classes\nkubectl get sc\n```\n\n## Step-03: 01-persistent-volume-claim.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: premium-rwo \n  resources: \n    requests:\n      storage: 4Gi\n```\n\n## Step-04: Other Kubernetes YAML Manifests\n- No changes to other Kubernetes YAML Manifests\n- They are the same as in the previous section\n1. 01-persistent-volume-claim.yaml\n2. 02-UserManagement-ConfigMap.yaml\n3. 03-mysql-deployment.yaml\n4. 04-mysql-clusterip-service.yaml\n5. 05-UserMgmtWebApp-Deployment.yaml\n6. 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-05: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\/\n\n# List Storage Classes\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-06: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB` Persistent Disk\n- **Observation:** You should see the disk type as **SSD persistent disk**\n\n\n## Step-07: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n```\n\n## Step-08: Clean-Up\n```t\n# Delete kube-manifests\nkubectl delete -f kube-manifests\/\n```\n\n## Reference\n- [Using the Compute Engine persistent disk CSI 
Driver](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/persistent-volumes\/gce-pd-csi-driver)","site":"gcp gke docs"}
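Since the only change between the standard and premium variants of the tutorial is the PVC's `storageClassName`, the manifest from Step-03 can be rendered from a variable so the switch is a one-line change. A minimal sketch, reusing the tutorial's `mysql-pv-claim` name (the templating itself is an assumption, not part of the original manifests):

```bash
# Pick the storage class: premium-rwo (SSD) or standard-rwo (balanced PD).
storage_class="premium-rwo"

# Render the PVC from Step-03 with the class substituted.
manifest=$(cat <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ${storage_class}
  resources:
    requests:
      storage: 4Gi
EOF
)

# Sanity-check the rendered manifest before piping it to kubectl apply -f -
printf '%s\n' "$manifest" | grep "storageClassName:"
```

In a real run the final line would instead be `printf '%s\n' "$manifest" | kubectl apply -f -`.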
{"questions":"gcp gke docs Edit Deployment We can update deployments using two options Learn and Implement Kubernetes Update Deployment Step 00 Introduction Set Image Step 01 Updating Application version V1 to V2 using Set Image Option title Kubernetes Update Deployment","answers":"---\ntitle: Kubernetes - Update Deployment\ndescription: Learn and Implement Kubernetes Update Deployment\n---\n## Step-00: Introduction\n- We can update deployments using two options\n  - Set Image\n  - Edit Deployment\n\n## Step-01: Updating Application version V1 to V2 using \"Set Image\" Option\n### Update Deployment\n- **Observation:** Check the container name in the `spec.container.name` yaml output, make a note of it, and \nreplace `<Container-Name>` in the `kubectl set image` command\n```t\n# Get Container Name from current deployment\nkubectl get deployment my-first-deployment -o yaml\n\n# Update Deployment - SHOULD WORK NOW\nkubectl set image deployment\/<Deployment-Name> <Container-Name>=<Container-Image> \nkubectl set image deployment\/my-first-deployment kubenginx=stacksimplify\/kubenginx:2.0.0 \n```\n\n### Verify Rollout Status (Deployment Status)\n- **Observation:** By default, rollout happens in a rolling update model, so no downtime.\n```t\n# Verify Rollout Status \nkubectl rollout status deployment\/my-first-deployment\n\n# Verify Deployment\nkubectl get deploy\n```\n### Describe Deployment\n- **Observation:**\n  - Verify the Events and understand that Kubernetes by default does a \"Rolling Update\" for new application releases. 
\n  - With that said, we will not have downtime for our application.\n```t\n# Describe Deployment\nkubectl describe deployment my-first-deployment\n```\n### Verify ReplicaSet\n- **Observation:** New ReplicaSet will be created for new version\n```t\n# Verify ReplicaSet\nkubectl get rs\n```\n\n### Verify Pods\n- **Observation:** Pod template hash label of new replicaset should be present for PODs letting us \nknow these pods belong to new ReplicaSet.\n```t\n# List Pods\nkubectl get po\n```\n### Access the Application using Public IP\n- We should see `Application Version:V2` whenever we access the application in browser\n```t\n# Get Load Balancer IP\nkubectl get svc\n\n# Application URL\nhttp:\/\/<External-IP-from-get-service-output>\n```\n\n### Update Change-Cause for the Kubernetes Deployment - Rollout History\n- **Observation:** We have the rollout history, so we can switch back to older revisions using revision history available to us.  \n```t\n# Verify Rollout History\nkubectl rollout history deployment\/my-first-deployment\n\n# Update REVISION CHANGE-CAUSE\nkubectl annotate deployment\/my-first-deployment kubernetes.io\/change-cause=\"Deployment UPDATE - App Version 2.0.0 - SET IMAGE OPTION\"\n\n# Verify Rollout History\nkubectl rollout history deployment\/my-first-deployment\n```\n\n\n## Step-02: Update the Application from V2 to V3 using \"Edit Deployment\" Option\n### Edit Deployment\n```t\n# Edit Deployment\nkubectl edit deployment\/<Deployment-Name> \nkubectl edit deployment\/my-first-deployment \n```\n\n```yaml\n# Change From 2.0.0\n    spec:\n      containers:\n      - image: stacksimplify\/kubenginx:2.0.0\n\n# Change To 3.0.0\n    spec:\n      containers:\n      - image: stacksimplify\/kubenginx:3.0.0\n```\n\n\n### Verify Rollout Status\n- **Observation:** Rollout happens in a rolling update model, so no downtime.\n```t\n# Verify Rollout Status \nkubectl rollout status deployment\/my-first-deployment\n\n# Describe Deployment\nkubectl describe 
deployment\/my-first-deployment\n```\n### Verify Replicasets\n- **Observation:**  We should see 3 ReplicaSets now, as we have updated our application to 3rd version 3.0.0\n```t\n# Verify ReplicaSet and Pods\nkubectl get rs\nkubectl get po\n```\n\n### Access the Application using Public IP\n- We should see `Application Version:V3` whenever we access the application in browser\n```t\n# Get Load Balancer IP\nkubectl get svc\n\n# Application URL\nhttp:\/\/<External-IP-from-get-service-output>\n```\n\n### Update Change-Cause for the Kubernetes Deployment - Rollout History\n- **Observation:** We have the rollout history, so we can switch back to older revisions using revision history available to us. \n```t\n# Verify Rollout History\nkubectl rollout history deployment\/my-first-deployment\n\n# Update REVISION CHANGE-CAUSE\nkubectl annotate deployment\/my-first-deployment kubernetes.io\/change-cause=\"Deployment UPDATE - App Version 3.0.0 - EDIT DEPLOYMENT OPTION\"\n\n# Verify Rollout History\nkubectl rollout history deployment\/my-first-deployment\n``","site":"gcp gke docs","answers_cleaned":"    title  Kubernetes   Update Deployment description  Learn and Implement Kubernetes Update Deployment        Step 00  Introduction   We can update deployments using two options     Set Image     Edit Deployment     Step 01  Updating Application version V1 to V2 using  Set Image  Option     Update Deployment     Observation    Please Check the container name in  spec container name  yaml output and make a note of it and  replace in  kubectl set image  command  Container Name     t   Get Container Name from current deployment kubectl get deployment my first deployment  o yaml    Update Deployment   SHOULD WORK NOW kubectl set image deployment  Deployment Name   Container Name   Container Image   kubectl set image deployment my first deployment kubenginx stacksimplify kubenginx 2 0 0           Verify Rollout Status  Deployment Status      Observation    By default  rollout happens in 
a rolling update model, so no downtime.\n```t\n# Verify Rollout Status\nkubectl rollout status deployment my-first-deployment\n\n# Verify Deployment\nkubectl get deploy\n```\n\n### Describe Deployment\n- **Observation:** Verify the Events and understand that Kubernetes by default does a `Rolling Update` for new application releases. With that said, we will not have downtime for our application.\n```t\n# Describe Deployment\nkubectl describe deployment my-first-deployment\n```\n\n### Verify ReplicaSet\n- **Observation:** A new ReplicaSet will be created for the new version\n```t\n# Verify ReplicaSet\nkubectl get rs\n```\n\n### Verify Pods\n- **Observation:** The pod-template-hash label of the new ReplicaSet should be present on the Pods, letting us know these pods belong to the new ReplicaSet\n```t\n# List Pods\nkubectl get po\n```\n\n### Access the Application using Public IP\n- We should see `Application Version:V2` whenever we access the application in browser\n```t\n# Get Load Balancer IP\nkubectl get svc\n\n# Application URL\nhttp:\/\/<External-IP-from-get-service-output>\n```\n\n### Update Change Cause for the Kubernetes Deployment & Rollout History\n- **Observation:** We have the rollout history, so we can switch back to older revisions using the revision history available to us\n```t\n# Verify Rollout History\nkubectl rollout history deployment my-first-deployment\n\n# Update REVISION CHANGE-CAUSE\nkubectl annotate deployment my-first-deployment kubernetes.io\/change-cause=\"Deployment UPDATE - App Version 2.0.0 - SET IMAGE OPTION\"\n\n# Verify Rollout History\nkubectl rollout history deployment my-first-deployment\n```\n\n## Step-02: Update the Application from V2 to V3 using \"Edit Deployment\" Option\n### Edit Deployment\n```t\n# Edit Deployment\nkubectl edit deployment <Deployment-Name>\nkubectl edit deployment my-first-deployment\n```\n```yaml\n# Change From 2.0.0\n    spec:\n      containers:\n        - image: stacksimplify\/kubenginx:2.0.0\n\n# Change To 3.0.0\n    spec:\n      containers:\n        - image: stacksimplify\/kubenginx:3.0.0\n```\n
\n### Verify Rollout Status\n- **Observation:** Rollout happens in a rolling update model, so no downtime\n```t\n# Verify Rollout Status\nkubectl rollout status deployment my-first-deployment\n\n# Describe Deployment\nkubectl describe deployment my-first-deployment\n```\n\n### Verify ReplicaSets\n- **Observation:** We should see 3 ReplicaSets now, as we have updated our application to the 3rd version 3.0.0\n```t\n# Verify ReplicaSet and Pods\nkubectl get rs\nkubectl get po\n```\n\n### Access the Application using Public IP\n- We should see `Application Version:V3` whenever we access the application in browser\n```t\n# Get Load Balancer IP\nkubectl get svc\n\n# Application URL\nhttp:\/\/<External-IP-from-get-service-output>\n```\n\n### Update Change Cause for the Kubernetes Deployment & Rollout History\n- **Observation:** We have the rollout history, so we can switch back to older revisions using the revision history available to us\n```t\n# Verify Rollout History\nkubectl rollout history deployment my-first-deployment\n\n# Update REVISION CHANGE-CAUSE\nkubectl annotate deployment my-first-deployment kubernetes.io\/change-cause=\"Deployment UPDATE - App Version 3.0.0 - EDIT DEPLOYMENT OPTION\"\n\n# Verify Rollout History\nkubectl rollout history deployment my-first-deployment\n```\n"}
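The change cause shown by `kubectl rollout history` is read from the `kubernetes.io/change-cause` annotation, so it can also be recorded declaratively in the manifest instead of via a separate `kubectl annotate` step. A minimal sketch — the labels, replica count, and container name here are illustrative assumptions, not taken from the course manifests:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-first-deployment
  annotations:
    # Surfaces as CHANGE-CAUSE in "kubectl rollout history"
    kubernetes.io/change-cause: "Deployment UPDATE - App Version 3.0.0 - EDIT DEPLOYMENT OPTION"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-first-app          # illustrative label
  template:
    metadata:
      labels:
        app: my-first-app
    spec:
      containers:
        - name: kubenginx        # illustrative container name
          image: stacksimplify/kubenginx:3.0.0
```

Applying an updated manifest with a new image and a new change-cause keeps the revision history readable without annotating after the fact.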
{"questions":"gcp gke docs Use GCP File Store for GKE Workloads with Custom StorageClass Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created title GKE Storage with GCP File Store Custom StorageClass t","answers":"---\ntitle: GKE Storage with GCP File Store - Custom StorageClass\ndescription: Use GCP File Store for GKE Workloads with Custom StorageClass\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- GKE Storage with GCP File Store - Custom StorageClass\n\n\n## Step-02: Enable Filestore CSI driver\t(If not enabled)\n- Go to Kubernetes Engine -> standard-cluster-private -> Details -> Features -> Filestore CSI driver\t\n- Click on Checkbox **Enable Filestore CSI Driver**\n- Click on **SAVE CHANGES**\n\n## Step-03: Verify if Filestore CSI Driver enabled\n```t\n# Verify Filestore CSI Daemonset in kube-system namespace\nkubectl -n kube-system get ds | grep file\nObservation: \n1. You should find the Daemonset with name \"filestore-node\"\n\n# Verify Filestore CSI Daemonset pods in kube-system namespace\nkubectl -n kube-system get pods | grep file\nObservation: \n1. 
You should find the pods with name \"filestore-node-*\"\n```\n\n## Step-04: Existing Storage Class\n- After you enable the Filestore CSI driver, GKE automatically installs the following StorageClasses for provisioning Filestore instances:\n- **standard-rwx:** using the Basic HDD Filestore service tier\n- **premium-rwx:** using the Basic SSD Filestore service tier\n```t\n# Default Storage Class created as part of FileStore CSI Enablement\nkubectl get sc\nObservation: Below two storage class will be created by default\nstandard-rwx\npremium-rwx \n```\n\n## Step-05: 00-filestore-storage-class.yaml\n```yaml\napiVersion: storage.k8s.io\/v1\nkind: StorageClass\nmetadata:\n  name: filestore-storage-class\nprovisioner: filestore.csi.storage.gke.io # File Store CSI Driver\nvolumeBindingMode: WaitForFirstConsumer\nallowVolumeExpansion: true\nparameters:\n  tier: standard # Allowed values standard, premium, or enterprise\n  network: default # The network parameter can be used when provisioning Filestore instances on non-default VPCs. 
Non-default VPCs require special firewall rules to be set up.\n```\n\n## Step-06: Other YAML files are same as previous section\n- Other YAML files are same as previous section\n- 01-filestore-pvc.yaml\n- 02-write-to-filestore-pod.yaml\n- 03-myapp1-deployment.yaml\n- 04-loadBalancer-service.yaml\n\n## Step-07: Deploy kube-manifests\n```t\n# Deploy kube-manifests\nkubectl apply -f kube-manifests\/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List Pods\nkubectl get pods\n``` \n\n## Step-08: Verify GCP Cloud FileStore Instance\n- Go to FileStore -> Instances\n- Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f**\n- **Note:** Instance ID dynamically generated, it can be different in your case starting with pvc-*\n\n## Step-09: Connect to filestore write app Kubernetes pods and Verify\n```t\n# FileStore write app - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- \/bin\/sh\nkubectl exec --stdin --tty filestore-writer-app  -- \/bin\/sh\ncd \/data\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-10: Connect to myapp1 Kubernetes pods and Verify\n```t\n# List Pods\nkubectl get pods \n\n# myapp1 POD1 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- \/bin\/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- \/bin\/sh\ncd \/usr\/share\/nginx\/html\/filestore\nls\ntail -f myapp1.txt\nexit\n\n# myapp1 POD2 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- \/bin\/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97  -- \/bin\/sh\ncd \/usr\/share\/nginx\/html\/filestore\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-11: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>\/filestore\/myapp1.txt\nhttp:\/\/35.232.145.61\/filestore\/myapp1.txt\ncurl http:\/\/35.232.145.61\/filestore\/myapp1.txt\n```\n\n\n## Step-12: Clean-Up\n```t\n# Delete Kubernetes 
Objects\nkubectl delete -f kube-manifests\/\n\n# Verify if FileStore Instance is deleted\nGo to -> FileStore -> Instances\n```","site":"gcp gke docs"}
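For reference, the `01-filestore-pvc.yaml` mentioned in Step-06 only needs its `storageClassName` switched to the custom class from Step-05; a sketch, assuming the same PVC name as the previous section:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gke-filestore-pvc
spec:
  accessModes:
    - ReadWriteMany                          # Filestore volumes can be shared across pods
  storageClassName: filestore-storage-class  # custom class defined in Step-05
  resources:
    requests:
      storage: 1Ti                           # Basic HDD tier minimum capacity is 1 TiB
```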
{"questions":"gcp gke docs Use GCP File Store for GKE Workloads with Default StorageClass Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl title GKE Storage with GCP File Store Default StorageClass 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GKE Storage with GCP File Store - Default StorageClass\ndescription: Use GCP File Store for GKE Workloads with Default StorageClass\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- GKE Storage with GCP File Store - Default StorageClass\n\n\n## Step-02: Enable Filestore CSI driver\t(If not enabled)\n- Go to Kubernetes Engine -> standard-cluster-private-1 -> Details -> Features -> Filestore CSI driver\t\n- Click on Checkbox **Enable Filestore CSI Driver**\n- Click on **SAVE CHANGES**\n\n## Step-03: Verify if Filestore CSI Driver enabled\n```t\n# Verify Filestore CSI Daemonset in kube-system namespace\nkubectl -n kube-system get ds | grep file\nObservation: \n1. You should find the Daemonset with name \"filestore-node\"\n\n# Verify Filestore CSI Daemonset pods in kube-system namespace\nkubectl -n kube-system get pods | grep file\nObservation: \n1. 
You should find the pods with name \"filestore-node-*\"\n```\n\n## Step-04: Existing Storage Class\n- After you enable the Filestore CSI driver, GKE automatically installs the following StorageClasses for provisioning Filestore instances:\n- **standard-rwx:** using the Basic HDD Filestore service tier\n- **premium-rwx:** using the Basic SSD Filestore service tier\n- **enterprise-rwx**\n- **enterprise-multishare-rwx**\n```t\n# Default Storage Class created as part of FileStore CSI Enablement\nkubectl get sc\nObservation: Below four storage class will be created by default\nstandard-rwx\npremium-rwx \nenterprise-rwx\nenterprise-multishare-rwx\n```\n\n## Step-05: 01-filestore-pvc.yaml\n```yaml\nkind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: gke-filestore-pvc\nspec:\n  accessModes:\n  - ReadWriteMany\n  storageClassName: standard-rwx\n  resources:\n    requests:\n      storage: 1Ti\n```\n\n## Step-06: 02-write-to-filestore-pod.yaml\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: filestore-writer-app\nspec:\n  containers:\n    - name: app\n      image: centos\n      command: [\"\/bin\/sh\"]\n      args: [\"-c\", \"while true; do echo GCP File Store used as PV in GKE $(date -u) >> \/data\/myapp1.txt; sleep 5; done\"]\n      volumeMounts:\n        - name: persistent-storage\n          mountPath: \/data\n  volumes:\n    - name: persistent-storage\n      persistentVolumeClaim:\n        claimName: gke-filestore-pvc\n```\n\n## Step-07: 03-myapp1-deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify\/kubenginx:1.0.0\n          ports: \n            - containerPort: 80  \n     
     volumeMounts:\n            - name: persistent-storage\n              mountPath: \/usr\/share\/nginx\/html\/filestore\n      volumes:\n        - name: persistent-storage\n          persistentVolumeClaim:\n            claimName: gke-filestore-pvc              \n```\n\n## Step-08: 04-loadBalancer-service.yaml\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIp, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n```\n\n## Step-09: Enable Cloud FileStore API (if not enabled)\n- Go to Search -> FileStore -> ENABLE\n\n## Step-09: Deploy kube-manifests\n```t\n# Deploy kube-manifests\nkubectl apply -f kube-manifests\/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List Pods\nkubectl get pods\n``` \n\n## Step-10: Verify GCP Cloud FileStore Instance\n- Go to FileStore -> Instances\n- Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f**\n- **Note:** Instance ID dynamically generated, it can be different in your case starting with pvc-*\n\n## Step-11: Connect to filestore write app Kubernetes pods and Verify\n```t\n# FileStore write app - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- \/bin\/sh\nkubectl exec --stdin --tty filestore-writer-app  -- \/bin\/sh\ncd \/data\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-12: Connect to myapp1 Kubernetes pods and Verify\n```t\n# List Pods\nkubectl get pods \n\n# myapp1 POD1 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- \/bin\/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- \/bin\/sh\ncd \/usr\/share\/nginx\/html\/filestore\nls\ntail -f myapp1.txt\nexit\n\n# myapp1 POD2 - Connect to Kubernetes Pod\nkubectl exec --stdin --tty <POD-NAME> -- \/bin\/sh\nkubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97  -- \/bin\/sh\ncd 
\/usr\/share\/nginx\/html\/filestore\nls\ntail -f myapp1.txt\nexit\n```\n\n## Step-13: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>\/filestore\/myapp1.txt\nhttp:\/\/35.232.145.61\/filestore\/myapp1.txt\ncurl http:\/\/35.232.145.61\/filestore\/myapp1.txt\n```\n\n\n## Step-14: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests\/\n\n# Verify if FileStore Instance is deleted\nGo to -> FileStore -> Instances\n```","site":"gcp gke docs"}
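Because the claim is `ReadWriteMany`, any number of pods can mount the same Filestore volume at once: the writer pod appends while others read the same file. A hypothetical read-only consumer (pod name and image are illustrative, not part of the course manifests):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: filestore-reader-app   # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox
      command: ["/bin/sh", "-c", "tail -f /data/myapp1.txt"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
          readOnly: true       # same RWX volume, mounted read-only here
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: gke-filestore-pvc
```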
{"questions":"gcp gke docs Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl title GCP Google Kubernetes Engine Ingress Internal Load Balancer 1 Verify if GKE Cluster is created t Implement GCP Google Kubernetes Engine GKE Internal Load Balancer with Ingress","answers":"---\ntitle: GCP Google Kubernetes Engine Ingress Internal Load Balancer\ndescription: Implement GCP Google Kubernetes Engine GKE Internal Load Balancer with Ingress\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Ingress Internal Load Balancer\n\n## Step-02: Review Kubernetes Deployment manifests\n- 01-Nginx-App1-Deployment-and-NodePortService.yaml\n- 02-Nginx-App2-Deployment-and-NodePortService.yaml\n- 03-Nginx-App3-Deployment-and-NodePortService.yaml\n\n## Step-03: 04-Ingress-internal-lb.yaml\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-internal-lb\n  annotations:\n    # If the class annotation is not specified it defaults to \"gce\".\n    # gce: external load balancer\n    # gce-internal: internal load balancer  \n    # Internal Load Balancer\n    kubernetes.io\/ingress.class: \"gce-internal\"  \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: \/app1\n            pathType: Prefix\n            backend:\n              service:\n                name: 
app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: \/app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80                 \n```\n\n## Step-04: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 01-kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get po\n\n# List Services\nkubectl get svc\n\n# List Backend Configs\nkubectl get backendconfig\n\n# List Ingress Service\nkubectl get ingress\n\n# Describe Ingress Service\nkubectl describe ingress ingress-internal-lb\n\n# Verify Load Balancer\nGo to Network Services -> Load Balancing -> Load Balancer\n```\n\n## Step-05: Review Curl Kubernetes Manifests\n- **Project Folder:** 02-kube-manifests-curl\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: curl-pod\nspec:\n  containers:\n  - name: curl\n    image: curlimages\/curl \n    command: [ \"sleep\", \"600\" ]\n```\n\n## Step-06: Deploy Curl-pod and Verify Internal LB\n```t\n# Deploy curl-pod\nkubectl apply -f 02-kube-manifests-curl\n\n# Will open up a terminal session into the container\nkubectl exec -it curl-pod -- sh\n\n# App1 Curl Test\ncurl http:\/\/<INTERNAL-INGRESS-LB-IP>\/app1\/index.html\n\n# App2 Curl Test\ncurl http:\/\/<INTERNAL-INGRESS-LB-IP>\/app2\/index.html\n\n# App3 Curl Test\ncurl http:\/\/<INTERNAL-INGRESS-LB-IP>\n```\n\n## Step-07: Clean-Up\n```t\n# Delete Kubernetes Manifests\nkubectl delete -f 01-kube-manifests\nkubectl delete -f 02-kube-manifests-curl\n```\n\n## References\n- [Ingress for Internal HTTP(S) Load Balancing](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/ingress-ilb)","site":"gcp gke docs"}
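By default the internal ingress gets an ephemeral RFC 1918 address. GKE's docs also describe pinning a `gce-internal` ingress to a reserved regional internal address via an annotation; a sketch, assuming a reserved address named `my-internal-ip` (verify the annotation against your GKE version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-internal-lb
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    # Pin the ILB to a reserved regional internal address (assumed name)
    kubernetes.io/ingress.regional-static-ip-name: "my-internal-ip"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
```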
{"questions":"gcp gke docs Fix kubectl version to match with GKE Cluster Server Version Configure kubeconfig for kubectl on your local terminal Install gcloud CLI on WindowsOS Verify if you are able to reach GKE Cluster using kubectl from your local terminal title gcloud cli install on WindowsOS Learn to install gcloud cli on WindowsOS Step 01 Introduction","answers":"---\ntitle: gcloud cli install on WindowsOS\ndescription: Learn to install gcloud cli on WindowsOS\n---\n\n## Step-01: Introduction\n- Install gcloud CLI on WindowsOS\n- Configure kubeconfig for kubectl on your local terminal\n- Verify if you are able to reach GKE Cluster using kubectl from your local terminal\n- Fix kubectl version to match with GKE Cluster Server Version. \n\n## Step-02: Install gcloud cli on WindowsOS\n- [Install gcloud cli on WindowsOS](https:\/\/cloud.google.com\/sdk\/docs\/install-sdk#windows)\n```t\n## Important Note: Download the latest version available on that respective day\nDownload Link: https:\/\/cloud.google.com\/sdk\/docs\/install-sdk#windows\n\n## Run the Installer\nGoogleCloudSDKInstaller.exe\n```\n\n## Step-03: Verify gcloud cli version\n```t\n# gcloud cli version\ngcloud version\n```\n\n## Step-04: Initialize gcloud CLI in local Terminal \n```t\n# Initialize gcloud CLI\ngcloud init\n\n# List accounts whose credentials are stored on the local system:\ngcloud auth list\n\n# List the properties in your active gcloud CLI configuration\ngcloud config list\n\n# View information about your gcloud CLI installation and the active configuration\ngcloud info\n\n# gcloud config Configurations Commands (For Reference)\ngcloud config list\ngcloud config configurations list\ngcloud config configurations activate\ngcloud config configurations create\ngcloud config configurations delete\ngcloud config configurations describe\ngcloud config configurations rename\n```\n\n## Step-05: Verify gke-gcloud-auth-plugin \n```t\n## Important Note about gke-gcloud-auth-plugin: \n1. 
Kubernetes clients require an authentication plugin, gke-gcloud-auth-plugin, which uses the Client-go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters\n\n# Verify if gke-gcloud-auth-plugin installed\ngke-gcloud-auth-plugin --version\n\n# Install gke-gcloud-auth-plugin\ngcloud components install gke-gcloud-auth-plugin\n\n# Verify if gke-gcloud-auth-plugin installed\ngke-gcloud-auth-plugin --version\n```\n\n## Step-06: Remove any existing kubectl clients\n```t\n# Verify kubectl version\nkubectl version --output=yaml\nObservation: \n1. If any kubectl exists before installing it from gcloud then uninstall it.\n2. Usually, if Docker is installed on our desktop, an equivalent kubectl binary will most likely be installed and set on PATH. If it exists, please remove it.\n\n```\n\n## Step-07: Install kubectl client from gcloud CLI\n```t\n# List gcloud components\ngcloud components list\n\n## SAMPLE OUTPUT\nStatus: Not Installed\nName: kubectl\nID: kubectl\nSize: < 1 MiB\n\n# Install kubectl client\ngcloud components install kubectl\n\n# Verify kubectl version\nkubectl version --output=yaml\n```\n\n\n## Step-08: Configure kubeconfig for kubectl in local desktop terminal\n```t\n# Verify kubeconfig file\nkubectl config view\n\n# Configure kubeconfig for kubectl \ngcloud container clusters get-credentials <GKE-CLUSTER-NAME> --region <REGION> --project <PROJECT>\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# Verify kubeconfig file\nkubectl config view\n\n# Verify Kubernetes Worker Nodes\nkubectl get nodes\nObservation: \n1. It should throw a warning at the end about the large difference between the kubectl client version and the GKE Cluster Server Version\n2. Let's fix that in the next step. 
\n\n```\n## Step-09: Fix kubectl client version equal to GKE Cluster version\n- **Important Note:** You must use a kubectl version that is within one minor version difference of your Kubernetes cluster control plane. \n- For example, a 1.24 kubectl client works with Kubernetes Cluster 1.23, 1.24 and 1.25 clusters.\n- As our GKE cluster version is 1.26, we will also upgrade our kubectl to 1.26\n```t\n# Verify kubectl version\nkubectl version --output=yaml\n\n# Change Directory \nGo to Google Cloud SDK \"bin\" directory\n\n# Backup existing kubectl\nBackup \"kubectl\" to \"kubectl_bkup_1.24\"\n\n# Copy latest kubectl\nCopy \"kubectl.1.26\" as \"kubectl\"\n\n# Verify kubectl version\nkubectl version --output=yaml\n```\n\n## References\n- [gcloud CLI](https:\/\/cloud.google.com\/sdk\/gcloud)\n- [Install the Google Cloud CLI](https:\/\/cloud.google.com\/sdk\/docs\/install-sdk#mac)","site":"gcp gke docs"}
{"questions":"gcp gke docs title GCP Google Kubernetes Engine Kubernetes Namespaces Imperative Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created Implement GCP Google Kubernetes Engine Kubernetes Namespaces Imperative t","answers":"---\ntitle: GCP Google Kubernetes Engine Kubernetes Namespaces Imperative\ndescription: Implement GCP Google Kubernetes Engine Kubernetes Namespaces Imperative\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Namespaces allow to split-up resources into different groups.\n- Resource names should be unique in a namespace\n- We can use namespaces to create multiple environments like dev, staging and production etc\n- Kubernetes will always list the resources from `default namespace` unless we provide exclusively from which namespace we need information from.\n\n## Step-02: Namespaces Imperative - Create dev Namespace\n### Step-02-01: Create Namespace\n```t\n# List Namespaces\nkubectl get ns \n\n# Craete Namespace\nkubectl create namespace <namespace-name>\nkubectl create namespace dev\n\n# List Namespaces\nkubectl get ns \n```\n### Step-02-02: Deploy All k8s Objects\n```t\n# Deploy All k8s Objects\nkubectl apply -f 01-kube-manifests-imperative\/ -n dev\n\n# List Namespaces\nkubectl get ns\n\n# List Deployments from dev Namespace\nkubectl get deploy -n dev\n\n# List Pods from dev Namespace\nkubectl get pods -n dev\n\n# List Services from dev Namespace\nkubectl 
get svc -n dev\n\n# List all objects from dev Namespaces\nkubectl get all -n dev\n\n# Access Application\nhttp:\/\/<LB-Service-External-IP>\/\n```\n\n## Step-03: Namespace Declarative - Create qa Namespace\n\n### Step-03-01: Namespace Kubernetes YAML Manifest\n- **File Name:** 00-kubernetes-namespace.yaml\n```yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: qa\n```\n\n### Step-03-02: Update Namespace in Deployment and Service YAML Manifest\n- We are going to update the `namespace: qa` in `metadata` section of Deployment and Service\n```yaml\n# Deployment YAML Manifest\napiVersion: apps\/v1\nkind: Deployment \nmetadata: \n  name: myapp1-deployment\n  namespace: qa\nspec: \n\n# Service YAML Manifest\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\n  namespace: qa\nspec:\n```\n\n### Step-03-03: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 02-kube-manifests-declarative\n\n# List Namespaces\nkubectl get ns\n\n# List Deployments from qa Namespace\nkubectl get deploy -n qa\n\n# List Pods from qa Namespace\nkubectl get pods -n qa\n\n# List Services from qa Namespace\nkubectl get svc -n qa\n\n# List all objects from qa Namespaces\nkubectl get all -n qa\n\n# Access Application\nhttp:\/\/<LB-Service-External-IP>\/\n```\n\n## Step-04: Clean-Up Resources\n- If we delete Namespace, all resources associated with namespace will get deleted.\n```t\n# Delete dev Namespace\nkubectl delete ns dev\n\n# List Namespaces\nkubectl get ns\nObservation:\n1. 
dev namespace should not be present\n\n# Verify Pods from dev Namespace\nkubectl get pods -n dev\nObservation: We should not find any pods because the namespace itself doesn't exist\n\n# Delete qa Namespace Resources (only)\nkubectl delete -f 02-kube-manifests-declarative\n\n# List Namespaces\nkubectl get ns\n\n# Delete qa Namespace\nkubectl delete ns qa\n\n# List Namespaces\nkubectl get ns\n```\n\n## References:\n- https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/namespaces-walkthrough","site":"gcp gke docs"}
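The declarative Namespace manifest from Step-03-01 can be generated for any environment name with a tiny helper. The function name is my own; it is not a file from the course repo, but when called with `qa` it emits the same content as `00-kubernetes-namespace.yaml`.

```bash
# Render a minimal Namespace manifest for a given environment name,
# mirroring the declarative approach from Step-03.
ns_manifest() {
  cat <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: $1
EOF
}

ns_manifest qa
# To apply directly (needs cluster access):
# ns_manifest dev | kubectl apply -f -
```

Piping generated manifests into `kubectl apply -f -` keeps dev/staging/qa environments identical except for the namespace name.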
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks using Readiness Probes title GCP Google Kubernetes Engine Ingress Custom Health Check Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT t","answers":"---\ntitle: GCP Google Kubernetes Engine Ingress Custom Health Check\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks using Readiness Probes \n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n\n## Step-01: Introduction\n- Ingress Context Path based Routing\n- Ingress Custom Health Checks for each application using Kubernetes Readiness Probes\n  - **App1 Health Check Path:** \/app1\/index.html\n  - **App2 Health Check Path:** \/app2\/index.html\n  - **App3 Health Check Path:** \/index.html\n\n\n## Step-02: 01-Nginx-App1-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: app1-nginx-deployment\n  labels:\n    app: app1-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app1-nginx\n  template:\n    metadata:\n      labels:\n        app: app1-nginx\n    spec:\n      containers:\n        - name: app1-nginx\n          image: stacksimplify\/kube-nginxapp1:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/ingress#def_inf_hc)             
\n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: \/app1\/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5       \n```\n\n## Step-03: 02-Nginx-App2-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: app2-nginx-deployment\n  labels:\n    app: app2-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app2-nginx\n  template:\n    metadata:\n      labels:\n        app: app2-nginx\n    spec:\n      containers:\n        - name: app2-nginx\n          image: stacksimplify\/kube-nginxapp2:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: \/app2\/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5   \n```\n\n## Step-04: 03-Nginx-App3-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx \nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify\/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: \/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5     \n```\n\n## Step-05: 
04-Ingress-Custom-Healthcheck.yaml\n- NO CHANGES FROM CONTEXT PATH ROUTING DEMO other than Ingress Service name `ingress-custom-healthcheck`\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-custom-healthcheck\n  annotations:\n    # External Load Balancer\n    kubernetes.io\/ingress.class: \"gce\"  \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: \/app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: \/app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n#          - path: \/\n#            pathType: Prefix\n#            backend:\n#              service:\n#                name: app3-nginx-nodeport-service\n#                port: \n#                  number: 80                     \n             \n```\n\n\n## Step-06: Deploy kube-manifests and verify\n```t\n# Deploy Kubernetes manifests\nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Load Balancers\nkubectl get ingress\n\n# Describe Ingress and view Rules\nkubectl describe ingress ingress-custom-healthcheck\n```\n\n## Step-07: Verify Health Checks\n- Go to Load Balancing -> Click on LB\n- DETAILS TAB\n  - Backend services -> First Backend -> Click on Health Check Link\n- OR\n- Go to Compute Engine -> Instance Groups -> Health Checks\n- Review all 3 Health Checks and their Paths  \n  - **App1 Health Check Path:** \/app1\/index.html\n  - **App2 Health Check Path:** \/app2\/index.html\n  - **App3 Health Check Path:** \/index.html\n\n\n## Step-08: Access 
Application\n```t\n# Important Note\nWait 2 to 3 minutes for the Load Balancer to be fully created and ready for use, else we will get HTTP 502 errors\n\n# Access Application\nhttp:\/\/<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>\/app1\/index.html\nhttp:\/\/<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>\/app2\/index.html\nhttp:\/\/<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>\/\n```\n\n## Step-09: Clean Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Verify Load Balancer Deleted\nGo to Network Services -> Load Balancing -> No Load balancers should be present\n```\n\n## References\n- [GKE Ingress Healthchecks](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/ingress#health_checks)","site":"gcp gke docs"}
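As a rough back-of-the-envelope check (my own helper, not GKE tooling), the readiness-probe fields in the manifests above bound how long a failing container can go unnoticed: roughly `initialDelaySeconds + periodSeconds * failureThreshold`, where `failureThreshold` defaults to 3 when unset, and ignoring `timeoutSeconds`.

```bash
# Worst-case seconds before a failing container is marked not-ready,
# assuming the Kubernetes default failureThreshold of 3 (rough estimate;
# timeoutSeconds and check jitter are ignored).
time_to_unready() {
  local initial=$1 period=$2 failures=${3:-3}
  echo $(( initial + period * failures ))
}

time_to_unready 10 15   # demo manifests: 10 + 15*3 -> prints 55
```

Tightening `periodSeconds` shortens this window at the cost of more probe traffic, which also affects how quickly the GCE health checks derived from these probes react.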
{"questions":"gcp gke docs Step 00 Pre requisites title GKE Persistent Disks Preexisting PD 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl Use Google Disks Preexisting PD for Kubernetes Workloads 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GKE Persistent Disks Preexisting PD\ndescription: Use Google Disks Preexisting PD for Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster. \n\n\n## Step-01: Introduction\n- Use the **pre-existing Persistent Disk** created in previous demo.\n- As part of this demo, we are going to provision the **Persistent Volume (PV)** manually. We call this as Static Provisioning. 
\n\n\n## Step-02: List Kubernetes Storage Classes in GKE Cluster\n```t\n# List Storage Classes\nkubectl get sc\n```\n\n## Step-03: 00-persistent-volume.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: preexisting-pd\nspec:\n  storageClassName: standard-rwo\n  capacity:\n    storage: 8Gi\n  accessModes:\n    - ReadWriteOnce\n  claimRef:\n    namespace: default\n    name: mysql-pv-claim\n  gcePersistentDisk:\n    pdName: preexisting-pd\n    fsType: ext4\n```\n\n## Step-04: 01-persistent-volume-claim.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 8Gi\n```\n\n## Step-05: Other Kubernetes YAML Manifests\n- No changes to other Kubernetes YAML Manifests\n- They are same as previous section\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-06: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-07: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `8GB`\u00a0Persistent Disk\n- **Observation:** You should see the disk type **In Use By** updated and bound to **gke-standard-cluster-1-default-pool-db7b638f-j5lk**\n\n\n\n## Step-08: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: 
admin101\nPassword: password101\n\nObservation:\n1. You should see admin102 already present.\n2. This is because in previous demo, we already created admin102 and that data disk we have mounted here using \"Static Provisioning PV\" concept.\n```\n\n## Step-09: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests\/\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# Delete Persistent Disk: preexisting-pd\n1. \"preexisting-pd\" will not get deleted automatically\n2. We should manually delete it \n3. We should observe that its \"In Use By\" field is empty (Not associated to anything)\n4. Go to Compute Engine -> Disks -> preexisting-pd -> DELETE \n```\n","site":"gcp gke docs","answers_cleaned":"    title  GKE Persistent Disks Preexisting PD description  Use Google Disks Preexisting PD for Kubernetes Workloads         Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  ZONE  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123     3  Feature  Compute Engine persistent disk CSI Driver     Verify the Feature   Compute Engine persistent disk CSI Driver   enabled in GKE Cluster       This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster        Step 01  Introduction   Use the   pre existing Persistent Disk   created in previous demo    As part of this demo  we are going to provision the   Persistent Volume  PV    manually  We call this as Static Provisioning        Step 02  List Kubernetes Storage Classes in GKE Cluster    t   List Storage Classes kubectl get sc         Step 03  00 persistent volume yaml    yaml apiVersion  v1 kind  PersistentVolume metadata    name  
preexisting pd spec    storageClassName  standard rwo   capacity      storage  8Gi   accessModes        ReadWriteOnce   claimRef      namespace  default     name  mysql pv claim   gcePersistentDisk      pdName  preexisting pd     fsType  ext4         Step 04  01 persistent volume claim yaml    yaml apiVersion  v1 kind  PersistentVolumeClaim metadata    name  mysql pv claim spec     accessModes        ReadWriteOnce   storageClassName  standard rwo   resources       requests        storage  8Gi         Step 05  Other Kubernetes YAML Manifests   No changes to other Kubernetes YAML Manifests   They are same as previous section   02 UserManagement ConfigMap yaml   03 mysql deployment yaml   04 mysql clusterip service yaml   05 UserMgmtWebApp Deployment yaml   06 UserMgmtWebApp LoadBalancer Service yaml     Step 06  Deploy kube manifests    t   Deploy Kubernetes Manifests kubectl apply  f kube manifests     List Storage Class kubectl get sc    List PVC kubectl get pvc    List PV kubectl get pv    List ConfigMaps kubectl get configmap    List Deployments kubectl get deploy    List Pods kubectl get pods    List Services kubectl get svc    Verify Pod Logs kubectl get pods kubectl logs  f  USERMGMT POD NAME  kubectl logs  f usermgmt webapp 6ff7d7d849 7lrg5         Step 07  Verify Persistent Disks   Go to Compute Engine    Storage    Disks   Search for  8GB  Persistent Disk     Observation    You should see the disk type   In Use By   updated and bound to   gke standard cluster 1 default pool db7b638f j5lk         Step 08  Access Application    t   List Services kubectl get svc    Access Application http    ExternalIP from get service output  Username  admin101 Password  password101  Observation  1  You should see admin102 already present  2  This is because in previous demo  we already created admin102 and that data disk we have mounted here using  Static Provisioning PV  concept          Step 09  Clean Up    t   Delete Kubernetes Objects kubectl delete  f kube manifests     
List PVC kubectl get pvc    List PV kubectl get pv    Delete Persistent Disk  preexisting pd 1   preexisting pd  will not get deleted automatically 2  We should manually delete it  3  We should observe that its  In Use By  field is empty  Not associated to anything  4  Go to Compute Engine    Disks    preexisting pd    DELETE      "}
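The tail of the record above flattens the static-provisioning PersistentVolume spec into plain text. Reconstructed as YAML for readability — a sketch only, with every field value taken from the flattened text above (`preexisting-pd`, `mysql-pv-claim`, `standard-rwo`, `8Gi`):

```yaml
# Reconstructed sketch of the static-provisioning PV from the flattened record above
apiVersion: v1
kind: PersistentVolume
metadata:
  name: preexisting-pd        # assumed PV object name; the text names only the disk
spec:
  storageClassName: standard-rwo
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  claimRef:                   # pre-binds this PV to the demo's PVC
    namespace: default
    name: mysql-pv-claim
  gcePersistentDisk:
    pdName: preexisting-pd    # the manually cloned Compute Engine disk
    fsType: ext4
```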
{"questions":"gcp gke docs Step 00 Pre requisites title GKE Storage with GCP Cloud SQL Without ExternalName Service 2 Verify if kubeconfig for kubectl is configured in your local terminal 1 Verify if GKE Cluster is created Use GCP Cloud SQL MySQL DB for GKE Workloads without ExternalName Service t","answers":"---\ntitle: GKE Storage with GCP Cloud SQL - Without ExternalName Service\ndescription: Use GCP Cloud SQL MySQL DB for GKE Workloads without ExternalName Service\n---\n\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- GKE Private Cluster \n- GCP Cloud SQL with Private IP \n- [GKE is a Fully Integrated Network Model](https:\/\/cloud.google.com\/architecture\/gke-compare-network-models)\n- GKE is a Fully Integrated Network Model for Kubernetes, that said without ExternalName service we can directly connect to Private or Public IP of Cloud SQL from GKE Cluster itself. \n- We are going to update the UMS Kubernetes Deployment `DB_HOSTNAME` with `Cloud SQL Private IP` and it should work without any issues. \n\n\n\n## Step-02: 03-UserMgmtWebApp-Deployment.yaml\n- **Change-1:** Update Cloud SQL IP Address in `command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! 
nc -z 10.64.18.3 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']`\n- **Change-2:** Update Cloud SQL IP Address for `DB_HOSTNAME` value `value: 10.64.18.3`\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          #command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z 10.64.18.3 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']                \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify\/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              #value: \"mysql-externalname-service\"            \n              value: 10.64.18.3\n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password   \n```\n\n\n## Step-03: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List 
Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n\n## Step-04: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n```\n\n## Step-05: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests\/\n\n# Delete Cloud SQL MySQL Instance\n1. Go to SQL ->  ums-db-private-instance -> DELETE\n2. Instance ID: ums-db-private-instance\n3. Click on DELETE\n```\n\n## References\n- [Private Service Access with MySQL](https:\/\/cloud.google.com\/sql\/docs\/mysql\/configure-private-services-access#console)\n- [Private Service Access](https:\/\/cloud.google.com\/vpc\/docs\/private-services-access)\n- [VPC Network Peering Limits](https:\/\/cloud.google.com\/vpc\/docs\/quota#vpc-peering)\n- [Configuring Private Service Access](https:\/\/cloud.google.com\/vpc\/docs\/configure-private-services-access)\n- [Additional Reference Only - Enabling private services access](https:\/\/cloud.google.com\/service-infrastructure\/docs\/enabling-private-services-access)\n","site":"gcp gke docs","answers_cleaned":"    title  GKE Storage with GCP Cloud SQL   Without ExternalName Service description  Use GCP Cloud SQL MySQL DB for GKE Workloads without ExternalName Service          Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  ZONE  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123         Step 01  Introduction   GKE Private Cluster    GCP Cloud SQL with Private IP     GKE is a Fully Integrated Network Model  https   cloud google com 
architecture gke compare network models    GKE is a Fully Integrated Network Model for Kubernetes  that said without ExternalName service we can directly connect to Private or Public IP of Cloud SQL from GKE Cluster itself     We are going to update the UMS Kubernetes Deployment  DB HOSTNAME  with  Cloud SQL Private IP  and it should work without any issues         Step 02  03 UserMgmtWebApp Deployment yaml     Change 1    Update Cloud SQL IP Address in  command    sh     c    echo  e  Checking for the availability of MySQL Server deployment   while   nc  z 10 64 18 3 3306  do sleep 1  printf      done  echo  e       MySQL DB Server has started          Change 2    Update Cloud SQL IP Address for  DB HOSTNAME  value  value  10 64 18 3     yaml apiVersion  apps v1 kind  Deployment  metadata    name  usermgmt webapp   labels      app  usermgmt webapp spec    replicas  1   selector      matchLabels        app  usermgmt webapp   template        metadata        labels           app  usermgmt webapp     spec        initContainers            name  init db           image  busybox 1 31            command    sh     c    echo  e  Checking for the availability of MySQL Server deployment   while   nc  z mysql externalname service 3306  do sleep 1  printf      done  echo  e       MySQL DB Server has started                     command    sh     c    echo  e  Checking for the availability of MySQL Server deployment   while   nc  z 10 64 18 3 3306  do sleep 1  printf      done  echo  e       MySQL DB Server has started                           containers            name  usermgmt webapp           image  stacksimplify kube usermgmt webapp 1 0 0 MySQLDB           imagePullPolicy  Always           ports                 containerPort  8080                      env                name  DB HOSTNAME                value   mysql externalname service                            value  10 64 18 3               name  DB PORT               value   3306                            name  DB 
NAME               value   webappdb                            name  DB USERNAME               value   root                            name  DB PASSWORD               valueFrom                  secretKeyRef                    name  mysql db password                   key  db password             Step 03  Deploy kube manifests    t   Deploy Kubernetes Manifests kubectl apply  f kube manifests     List Deployments kubectl get deploy    List Pods kubectl get pods    List Services kubectl get svc    Verify Pod Logs kubectl get pods kubectl logs  f  USERMGMT POD NAME  kubectl logs  f usermgmt webapp 6ff7d7d849 7lrg5          Step 04  Access Application    t   List Services kubectl get svc    Access Application http    ExternalIP from get service output  Username  admin101 Password  password101         Step 05  Clean Up    t   Delete Kubernetes Objects kubectl delete  f kube manifests     Delete Cloud SQL MySQL Instance 1  Go to SQL     ums db private instance    DELETE 2  Instance ID  ums db private instance 3  Click on DELETE         References    Private Service Access with MySQL  https   cloud google com sql docs mysql configure private services access console     Private Service Access  https   cloud google com vpc docs private services access     VPC Network Peering Limits  https   cloud google com vpc docs quota vpc peering     Configuring Private Service Access  https   cloud google com vpc docs configure private services access     Additional Reference Only   Enabling private services access  https   cloud google com service infrastructure docs enabling private services access  "}
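Before (or instead of) deploying the full app, the Cloud SQL endpoint used above can be probed directly from the cluster. A hypothetical one-off check, reusing the `busybox:1.31` image and the `nc -z` probe from the Deployment in this record (`10.64.18.3` is that demo's private IP):

```t
# Launch a throwaway pod and probe the Cloud SQL private endpoint
kubectl run mysql-conn-test --rm -it --image=busybox:1.31 --restart=Never \
  -- sh -c 'nc -z -w 3 10.64.18.3 3306 && echo "Cloud SQL reachable on 3306"'
```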
{"questions":"gcp gke docs title GCP Google Kubernetes Engine GKE Ingress Basics Step 00 Pre requisites Implement GCP Google Kubernetes Engine GKE Ingress Basics 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress Basics\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Basics\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- Learn Ingress Basics\n- [Ingress Diagram Reference](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/ingress#ingress_to_resource_mappings)\n\n## Step-02: Verify HTTP Load Balancing enabled for your GKE Cluster\n- Go to Kubernetes Engine -> standard-cluster-private1 -> DETAILS tab -> Networking\n- Verify `HTTP Load Balancing: Enabled` \n\n\n## Step-03: Kubernetes Deployment: 01-Nginx-App3-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify\/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n```\n\n## Step-04: Kubernetes NodePort Service: 01-Nginx-App3-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: 
v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\n  labels:\n    app: app3-nginx\n  annotations:\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n```\n\n## Step-05: 02-ingress-basic.yaml\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-basics\n  annotations:\n    # If the class annotation is not specified it defaults to \"gce\".\n    # gce: external load balancer\n    # gce-internal: internal load balancer\n    kubernetes.io\/ingress.class: \"gce\"  \nspec:\n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                   \n```\n\n## Step-06: Deploy kube-manifests and Verify\n```t\n# Deploy kube-manifests\nkubectl apply -f kube-manifests\/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress\nkubectl get ingress\nObservation:\n1. Wait for ADDRESS field to populate the Public IP Address\n\n# Describe Ingress \nkubectl describe ingress ingress-basics\n\n# Access Application\nhttp:\/\/<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>\nImportant Note:\n1. If you get 502 error, wait for 2 to 3 mins and retry. \n2. 
It takes time to create load balancer on GCP.\n```\n\n## Step-07: Verify Load Balancer\n- Go to Load Balancing -> Click on Load balancer\n### Load Balancer View \n- DETAILS Tab\n  - Frontend\n  - Host and Path Rules\n  - Backend Services\n  - Health Checks\n- MONITORING TAB\n- CACHING TAB \n### Load Balancer Components View\n- FORWARDING RULES\n- TARGET PROXIES\n- BACKEND SERVICES\n- BACKEND BUCKETS\n- CERTIFICATES\n- TARGET POOLS\n\n## Step-08: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Verify if load balancer got deleted\nGo to Load Balancing -> Should not see any load balancers\n```\n\n## GKE Ingress References\n- [Ingress Features](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features)\n- [Ingress Concepts](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/ingress)\n- [Service Networking](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/service-networking)","site":"gcp gke docs","answers_cleaned":"    title  GCP Google Kubernetes Engine GKE Ingress Basics description  Implement GCP Google Kubernetes Engine GKE Ingress Basics        Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  ZONE  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123         Step 01  Introduction   Learn Ingress Basics    Ingress Diagram Reference  https   cloud google com kubernetes engine docs concepts ingress ingress to resource mappings      Step 02  Verify HTTP Load Balancing enabled for your GKE Cluster   Go to Kubernetes Engine    standard cluster private1    DETAILS tab    Networking   Verify  HTTP Load Balancing  Enabled        Step 03  Kubernetes Deployment  01 Nginx App3 
Deployment and NodePortService yaml    yaml apiVersion  apps v1 kind  Deployment metadata    name  app3 nginx deployment   labels      app  app3 nginx spec    replicas  1   selector      matchLabels        app  app3 nginx   template      metadata        labels          app  app3 nginx     spec        containers            name  app3 nginx           image  stacksimplify kubenginx 1 0 0           ports                containerPort  80         Step 04  Kubernetes NodePort Service  01 Nginx App3 Deployment and NodePortService yaml    yaml apiVersion  v1 kind  Service metadata    name  app3 nginx nodeport service   labels      app  app3 nginx   annotations  spec    type  NodePort   selector      app  app3 nginx   ports        port  80       targetPort  80         Step 05  02 ingress basic yaml    yaml apiVersion  networking k8s io v1 kind  Ingress metadata    name  ingress basics   annotations        If the class annotation is not specified it defaults to  gce         gce  external load balancer       gce internal  internal load balancer     kubernetes io ingress class   gce    spec    defaultBackend      service        name  app3 nginx nodeport service       port          number  80                            Step 06  Deploy kube manifests and Verify    t   Deploy kube manifests kubectl apply  f kube manifests     List Deployments kubectl get deploy    List Pods kubectl get pods    List Services kubectl get svc    List Ingress kubectl get ingress Observation  1  Wait for ADDRESS field to populate the Public IP Address    Describe Ingress  kubectl describe ingress ingress basics    Access Application http    ADDRESS FIELD FROM GET INGRESS OUTPUT  Important Note  1  If you get 502 error  wait for 2 to 3 mins and retry   2  It takes time to create load balancer on GCP          Step 07  Verify Load Balancer   Go to Load Balancing    Click on Load balancer     Load Balancer View    DETAILS Tab     Frontend     Host and Path Rules     Backend Services     Health Checks   
MONITORING TAB   CACHING TAB      Load Balancer Components View   FORWARDING RULES   TARGET PROXIES   BACKEND SERVICES   BACKEND BUCKETS   CERTIFICATES   TARGET POOLS     Step 08  Clean Up    t   Delete Kubernetes Resources kubectl delete  f kube manifests    Verify if load balancer got deleted Go to Load Balancing    Should not see any load balancers           GKE Ingress References    Ingress Features  https   cloud google com kubernetes engine docs how to ingress features    Ingress Concepts  https   cloud google com kubernetes engine docs concepts ingress     Service Networking  https   cloud google com kubernetes engine docs concepts service networking"}
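The Step-05 manifest in the record above routes all traffic through `defaultBackend`. For path-based routing, the same `networking.k8s.io/v1` Ingress can carry `rules` instead — a hypothetical variant reusing the `app3-nginx-nodeport-service` from that record:

```yaml
# Sketch: path-based routing instead of a single defaultBackend (assumed paths)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-basics
  annotations:
    kubernetes.io/ingress.class: "gce"   # external load balancer, as in the demo
spec:
  rules:
    - http:
        paths:
          - path: /app3
            pathType: Prefix
            backend:
              service:
                name: app3-nginx-nodeport-service
                port:
                  number: 80
```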
{"questions":"gcp gke docs Step 00 Pre requisites title GKE Persistent Disks Custom StorageClass 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl Use Custom storageclass to provision Google Disks for Kubernetes Workloads 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GKE Persistent Disks Custom StorageClass \ndescription: Use Custom storageclass to provision Google Disks for Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster.\n\n\n## Step-01: Introduction\n- **Feaute-1:** Create custom Kubernetes StorageClass instead of using predefined one in GKE Cluster. custom storage class `gke-pd-standard-rwo-sc`\n- **Feature-2:** Test `allowVolumeExpansion: true` in Storage Class\n- **Feature-3:** Use `reclaimPolicy: Retain` in Storage Class and Test it \n\n## Step-02: List Kubernetes Storage Classes in GKE Cluster\n```t\n# List Storage Classes\nkubectl get sc\n```\n\n## Step-03: 00-storage-class.yaml\n```yaml\napiVersion: storage.k8s.io\/v1\nkind: StorageClass\nmetadata: \n  name: gke-pd-standard-rwo-sc\nprovisioner: pd.csi.storage.gke.io\nvolumeBindingMode: WaitForFirstConsumer \nallowVolumeExpansion: true\nreclaimPolicy: Retain \nparameters:\n  type: pd-balanced\n\n# STORAGE CLASS \n# 1. 
A StorageClass provides a way for administrators \n# to describe the \"classes\" of storage they offer.\n# 2. Here we are offering GCP PD Storage for GKE Cluster\n```\n\n## Step-04: 01-persistent-volume-claim.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: gke-pd-standard-rwo-sc\n  resources: \n    requests:\n      storage: 4Gi\n```\n\n\n## Step-05: Other Kubernetes YAML Manifests\n- No changes to other Kubernetes YAML Manifests\n- They are same as previous section\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-06: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\/\n\n# List Storage Classes\nkubectl get sc\nObservation: \n1. You should find the new custom storage class object created with name as \"gke-pd-standard-rwo-sc\"\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-07: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB`\u00a0Persistent Disk\n- **Observation:** You should see the disk type as **Balanced persistent disk**\n\n\n\n## Step-08: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User (Used for testing  `allowVolumeExpansion: true` Option)\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial 
Security Address: ssn102\n```\n\n## Step-09: Update 01-persistent-volume-claim.yaml from 4Gi to 8Gi\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: gke-pd-standard-rwo-sc\n  resources: \n    requests:\n      #storage: 4Gi # Comment out at Step-09\n      storage: 8Gi # Uncomment at Step-09\n```\n\n## Step-10: Deploy updated kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\/\n\n# List PVC\nkubectl get pvc\nObservation:\n1. Wait 2 to 3 mins; the CAPACITY value automatically changes from 4Gi to 8Gi\n\n# List PV\nkubectl get pv\nObservation:\n1. Wait 2 to 3 mins; the CAPACITY value automatically changes from 4Gi to 8Gi\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\nObservation:\n1. No impact to underlying MySQL Database data.\n2. VolumeExpansion is seamless without impacting the real data. \n3. We should find the two users that were present before the VolumeExpansion, as-is.\n```\n## Step-11: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `8GB`\u00a0Persistent Disk, as the 4GB disk has now expanded to 8GB.\n- **Observation:** You should see the disk type as **Balanced persistent disk**\n\n\n## Step-12: Verify reclaimPolicy: Retain\n```t\n# Delete kube-manifests\nkubectl delete -f kube-manifests\/\n\n# List Storage Class\nkubectl get sc\nObservation:\n1. Custom storage class deleted\n\n# List PVC\nkubectl get pvc\nObservation:\n1. PVC deleted\n\n# List PV\nkubectl get pv\nObservation:\n1. PV still present\n2. 
PV STATUS will be \"Released\"; it is not used by anyone.\n```\n\n## Step-13: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `8GB`\u00a0Persistent Disk.\n- **Observation:** You should see the disk is still present even after all kube-manifests (storageclass, pvc) are deleted.\n- This is because we used **reclaimPolicy: Retain** in the Custom Storage Class\n\n\n## Step-14: Clone Persistent Disk\n- **Question:** Why are we cloning the disk?\n- **Answer:** In the next demo, we are going to use the **pre-existing persistent disk**. For that purpose we are cloning it. \n- Go to Compute Engine -> Storage -> Disks\n- Search for `8GB`\u00a0Persistent Disk.\n- Click on **Clone Disk**\n- **Name:** preexisting-pd\n- **Description:** preexisting-pd Demo with GKE\n- **Location:** Single\n- **Snapshot Schedule:** UNCHECK\n- Click on **CREATE**\n\n## Step-15: Delete Retained Persistent Disk from this Demo\n- Go to Compute Engine -> Storage -> Disks\n- Search for `8GB`\u00a0Persistent Disk.\n- **Disk Name:** pvc-3f2c1daa-122d-4bdb-a7b6-b9943631cc14\n- Click on **DELETE DISK**\n```t\n# List PV\nkubectl get pv\n\n# Delete PV\nkubectl delete pv pvc-3f2c1daa-122d-4bdb-a7b6-b9943631cc14 \n\n# List PV\nkubectl get pv\n```\n\n## Step-16: Change PVC 8Gi to 4Gi: 01-persistent-volume-claim.yaml\n- Change PVC 8Gi to 4Gi so that `kube-manifests` will be demo ready for students. 
\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: gke-pd-standard-rwo-sc\n  resources: \n    requests:\n      storage: 4Gi # Comment out at Step-09\n      #storage: 8Gi # Uncomment at Step-09\n```\n\n\n## Reference\n- [Using the Compute Engine persistent disk CSI Driver](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/persistent-volumes\/gce-pd-csi-driver)","site":"gcp gke docs","answers_cleaned":"    title  GKE Persistent Disks Custom StorageClass  description  Use Custom storageclass to provision Google Disks for Kubernetes Workloads         Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  ZONE  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123     3  Feature  Compute Engine persistent disk CSI Driver     Verify the Feature   Compute Engine persistent disk CSI Driver   enabled in GKE Cluster       This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster       Step 01  Introduction     Feature 1    Create custom Kubernetes StorageClass instead of using predefined one in GKE Cluster  custom storage class  gke pd standard rwo sc      Feature 2    Test  allowVolumeExpansion  true  in Storage Class     Feature 3    Use  reclaimPolicy  Retain  in Storage Class and Test it      Step 02  List Kubernetes Storage Classes in GKE Cluster    t   List Storage Classes kubectl get sc         Step 03  00 storage class yaml    yaml apiVersion  storage k8s io v1 kind  StorageClass metadata     name  gke pd standard rwo sc provisioner  pd csi storage gke io volumeBindingMode  WaitForFirstConsumer  
allowVolumeExpansion  true reclaimPolicy  Retain  parameters    type  pd balanced    STORAGE CLASS    1  A StorageClass provides a way for administrators    to describe the  classes  of storage they offer    2  Here we are offering GCP PD Storage for GKE Cluster         Step 04  01 persistent volume claim yaml    yaml apiVersion  v1 kind  PersistentVolumeClaim metadata    name  mysql pv claim spec     accessModes        ReadWriteOnce   storageClassName  gke pd standard rwo sc   resources       requests        storage  4Gi          Step 05  Other Kubernetes YAML Manifests   No changes to other Kubernetes YAML Manifests   They are same as previous section   02 UserManagement ConfigMap yaml   03 mysql deployment yaml   04 mysql clusterip service yaml   05 UserMgmtWebApp Deployment yaml   06 UserMgmtWebApp LoadBalancer Service yaml     Step 06  Deploy kube manifests    t   Deploy Kubernetes Manifests kubectl apply  f kube manifests     List Storage Classes kubectl get sc Observation   1  You should find the new custom storage class object created with name as  gke pd standard rwo sc     List PVC kubectl get pvc    List PV kubectl get pv    List ConfigMaps kubectl get configmap    List Deployments kubectl get deploy    List Pods kubectl get pods    List Services kubectl get svc    Verify Pod Logs kubectl get pods kubectl logs  f  USERMGMT POD NAME  kubectl logs  f usermgmt webapp 6ff7d7d849 7lrg5         Step 07  Verify Persistent Disks   Go to Compute Engine    Storage    Disks   Search for  4GB  Persistent Disk     Observation    You should see the disk type as   Balanced persistent disk         Step 08  Access Application    t   List Services kubectl get svc    Access Application http    ExternalIP from get service output  Username  admin101 Password  password101    Create New User  Used for testing   allowVolumeExpansion  true  Option  Username  admin102 Password  password102 First Name  fname102 Last Name  lname102 Email Address  admin102 stacksimplify com Social 
Security Address  ssn102         Step 09  Update 01 persistent volume claim yaml from 4Gi to 8Gi    yaml apiVersion  v1 kind  PersistentVolumeClaim metadata    name  mysql pv claim spec     accessModes        ReadWriteOnce   storageClassName  gke pd standard rwo sc   resources       requests         storage  4Gi   Commment at Step 09       storage  8Gi   UnCommment at Step 09         Step 10  Deploy updated kube manifests    t   Deploy Kubernetes Manifests kubectl apply  f kube manifests     List PVC kubectl get pvc Observation  1  Wait for 2 to 3 mins and automatically CAPACITY value changes from 4Gi to 8Gi    List PV kubectl get pv Observation  1  Wait for 2 to 3 mins and automatically CAPACITY value changes from 4Gi to 8Gi    Access Application http    ExternalIP from get service output  Username  admin101 Password  password101 Observation  1  No impact to underlying MySQL Database data  2  VolumeExpansion is seamless without impacting the real data   3  We should find the two users which are present before VolumeExpansion as is         Step 11  Verify Persistent Disks   Go to Compute Engine    Storage    Disks   Search for  8GB  Persistent Disk  as 4GB disk expaned to 8GB now      Observation    You should see the disk type as   Balanced persistent disk        Step 12  Verify reclaimPolicy  Retain    t   Delete kube manifests kubectl delete  f kube manifests     List Storage Class kubectl get sc Observation  1  Custom storage class deleted    List PVC kubectl get pvc Observation  1  PVC deleted    List PV kubectl get pv Observation  1  PV still present 2  PV STATUS will be in  Released   not used by anyoe          Step 13  Verify Persistent Disks   Go to Compute Engine    Storage    Disks   Search for  8GB  Persistent Disk      Observation    You should see the disk is still present even after all kube manifests  storageclass  pvc  all deleted    This is due to we have used   reclaimPolicy  Retain   in Custom Storage Class      Step 14  Clone Persistent Disk    
 Question    Why we are cloning the disk       Answer    In the next demo  we are going use the   pre existing persistent disk   in our demo  For that purpose we are cloning it     Go to Compute Engine    Storage    Disks   Search for  8GB  Persistent Disk    Click on   Clone Disk       Name    preexisting pd     Description    preexisting pd Demo with GKE     Location    Single     Snapshot Schedule    UNCHECK   Click on   CREATE       Step 15  Delete Retained Persistent Disk from this Demo   Go to Compute Engine    Storage    Disks   Search for  8GB  Persistent Disk      Disk Name     pvc 3f2c1daa 122d 4bdb a7b6 b9943631cc14   Click on   DELETE DISK      t   List PV kubectl get pv    Delete  PV  kubectl delete pv pvc 3f2c1daa 122d 4bdb a7b6 b9943631cc14     List PV kubectl get pv         Step 16  Change PVC 8Gi to 4Gi  01 persistent volume claim yaml   Change PVC 8Gi to 4Gi so that  kube manifests  will be demo ready for students      yaml apiVersion  v1 kind  PersistentVolumeClaim metadata    name  mysql pv claim spec     accessModes        ReadWriteOnce   storageClassName  gke pd standard rwo sc   resources       requests        storage  4Gi   Commment at Step 09        storage  8Gi   UnCommment at Step 09          Reference    Using the Compute Engine persistent disk CSI Driver  https   cloud google com kubernetes engine docs how to persistent volumes gce pd csi driver "}
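Step-09/Step-10 in the record above resize the PVC by editing the manifest and re-applying. The same expansion can be triggered imperatively; a sketch (assumes the `mysql-pv-claim` PVC and the expandable custom storage class from that record):

```t
# Patch the PVC request from 4Gi to 8Gi in place
kubectl patch pvc mysql-pv-claim --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'

# Watch CAPACITY move from 4Gi to 8Gi (takes 2 to 3 mins, as noted above)
kubectl get pvc mysql-pv-claim -w
```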
{"questions":"gcp gke docs Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t Implement GCP Google Kubernetes Engine GKE Continuous Delivery Pipeline title GCP Google Kubernetes Engine GKE CD","answers":"---\ntitle: GCP Google Kubernetes Engine GKE CD\ndescription: Implement GCP Google Kubernetes Engine GKE Continuous Delivery Pipeline\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Implement Continuous Delivery Pipeline for GKE Workloads using\n- Google Cloud Source\n- Google Cloud Build\n- Google Artifact Repository\n\n\n## Step-02: Assign Kubernetes Engine Developer IAM Role to Cloud Build\n- To deploy the application in your Googke GKE Kubernetes cluster, **Cloud Build** needs the **Kubernetes Engine Developer Identity and Access Management Role.**\n```t\n# Verify if changes took place using Google Cloud Console    \n1. Go to Cloud Build -> Settings -> SERVICE ACCOUNT -> Service account permissions\n2. 
Kubernetes Engine\t-> Should be in \"DISABLED\" state\n\n# Get current project PROJECT_ID\nPROJECT_ID=\"$(gcloud config get-value project)\"\necho ${PROJECT_ID}\n\n# Get Google Cloud Project Number\nPROJECT_NUMBER=\"$(gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)')\"\necho ${PROJECT_NUMBER}\n\n# Associate Kubernetes Engine Developer IAM Role to Cloud Build\ngcloud projects add-iam-policy-binding ${PROJECT_NUMBER} \\\n    --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \\\n    --role=roles\/container.developer\n\n# Verify if changes took place using Google Cloud Console    \n1. Go to Cloud Build -> Settings -> SERVICE ACCOUNT -> Service account permissions\n2. Kubernetes Engine\t-> Should be in \"ENABLED\" state\n```\n\n## Step-03: Review File cloudbuild-delivery.yaml\n- **File Location:** 01-myapp1-k8s-repo\n```yaml\n# [START cloudbuild-delivery]\nsteps:\n# This step deploys the new version of our container image\n# in the \"standard-cluster-private-1\" Google Kubernetes Engine cluster.\n- name: 'gcr.io\/cloud-builders\/kubectl'\n  id: Deploy\n  args:\n  - 'apply'\n  - '-f'\n  - 'kubernetes.yaml'\n  env:\n  - 'CLOUDSDK_COMPUTE_REGION=us-central1'\n  #- 'CLOUDSDK_COMPUTE_ZONE=us-central1-c'  \n  - 'CLOUDSDK_CONTAINER_CLUSTER=standard-cluster-private-1' # Provide GKE Cluster Name\n\n# This step copies the applied manifest to the production branch\n# The COMMIT_SHA variable is automatically\n# replaced by Cloud Build.\n- name: 'gcr.io\/cloud-builders\/git'\n  id: Copy to production branch\n  entrypoint: \/bin\/sh\n  args:\n  - '-c'\n  - |\n    set -x && \\\n    # Configure Git to create commits with Cloud Build's service account\n    git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)') && \\\n    # Switch to the production branch and copy the kubernetes.yaml file from the candidate branch\n    git fetch origin production && git checkout production && \\\n    git checkout $COMMIT_SHA 
kubernetes.yaml && \\\n    # Commit the kubernetes.yaml file with a descriptive commit message\n    git commit -m \"Manifest from commit $COMMIT_SHA\n    $(git log --format=%B -n 1 $COMMIT_SHA)\" && \\\n    # Push the changes back to Cloud Source Repository\n    git push origin production\n# [END cloudbuild-delivery]\n```\n## Step-04: Create and Initialize myapp1-k8s-repo Repo, Copy Files and Push to Cloud Source Repository\n```t\n# Change Directory \ncd course-repos\n\n# List Cloud Source Repositories\ngcloud source repos list\n\n# Create Cloud Source Git Repo: myapp1-k8s-repo\ngcloud source repos create myapp1-k8s-repo\n\n# Initialize myapp1-k8s-repo Repo\ngcloud source repos clone myapp1-k8s-repo\n\n# Copy Files to myapp1-k8s-repo\ncloudbuild-delivery.yaml from \"58-GKE-Continuous-Delivery-with-CloudBuild\/01-myapp1-k8s-repo\"\n\n# Change Directory\ncd myapp1-k8s-repo\n\n# Commit Changes\ngit add .\ngit commit -m \"Create cloudbuild-delivery.yaml for k8s deployment\"\n\n# Create a candidate branch and push to be available in Cloud Source Repositories.\ngit checkout -b candidate\ngit push origin candidate\n\n# Create a production branch and push to be available in Cloud Source Repositories.\ngit checkout -b production\ngit push origin production\n```\n\n## Step-05: Grant the Cloud Source Repository Writer IAM role to the Cloud Build service account\n- Grant the Cloud Source Repository Writer IAM role to the Cloud Build service account for the **myapp1-k8s-repo** repository.\n\n```t\n# Get current project PROJECT_ID\nPROJECT_ID=\"$(gcloud config get-value project)\"\necho ${PROJECT_ID}\n\n# GET GCP PROJECT NUMBER\nPROJECT_NUMBER=\"$(gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)')\"\necho ${PROJECT_NUMBER}\n\n# Change Directory    \ncd 02-Source-Writer-IAM-Role\n\n# Clean-Up File (truncate the file to empty - no content)\n>myapp1-k8s-repo-policy.yaml\n\n# Create IAM Policy YAML File\ncat >myapp1-k8s-repo-policy.yaml <<EOF\nbindings:\n- members:\n  - 
serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com\n  role: roles\/source.writer\nEOF\n\n# Verify IAM Policy File created with PROJECT_NUMBER\ncat myapp1-k8s-repo-policy.yaml\n\n# Set IAM Policy to Cloud Source Repository: myapp1-k8s-repo\ngcloud source repos set-iam-policy \\\n    myapp1-k8s-repo myapp1-k8s-repo-policy.yaml\n```\n\n## Step-06: Create the trigger for the continuous delivery pipeline\n- Go to Cloud Build -> Triggers -> Region: us-central1 -> Click on **CREATE TRIGGER**\n- **Name:** myapp1-cd\n- **Region:** us-central1\n- **Description:** myapp1 Continuous Deployment Pipeline\n- **Tags:** environment=dev\n- **Event:** Push to a branch\n- **Source:** myapp1-k8s-repo\n- **Branch:** candidate \n- **Configuration:** Cloud Build configuration file (yaml or json)\n- **Location:** Repository\n- **Cloud Build Configuration file location:** cloudbuild-delivery.yaml\n- **Approval:** leave unchecked\n- **Service account:** leave to default\n- Click on **CREATE**\n\n\n## Step-06: Review files in folder 03-myapp1-app-repo\n1. Dockerfile\n2. index.html\n3. kubernetes.yaml.tpl\n4. cloudbuild-trigger-cd.yaml\n5. 
cloudbuild.yaml (Just a copy of cloudbuild-trigger-cd.yaml)\n```yaml\n# [START cloudbuild - Docker Image Build]\nsteps:\n# This step builds the container image.\n- name: 'gcr.io\/cloud-builders\/docker'\n  id: Build\n  args:\n  - 'build'\n  - '-t'\n  - 'us-central1-docker.pkg.dev\/$PROJECT_ID\/myapps-repository\/myapp1:$SHORT_SHA'\n  - '.'\n\n# This step pushes the image to Artifact Registry\n# The PROJECT_ID and SHORT_SHA variables are automatically\n# replaced by Cloud Build.\n- name: 'gcr.io\/cloud-builders\/docker'\n  id: Push\n  args:\n  - 'push'\n  - 'us-central1-docker.pkg.dev\/$PROJECT_ID\/myapps-repository\/myapp1:$SHORT_SHA'\n# [END cloudbuild - Docker Image Build]\n\n\n# [START cloudbuild-trigger-cd]\n# This step clones the myapp1-k8s-repo repository\n- name: 'gcr.io\/cloud-builders\/gcloud'\n  id: Clone myapp1-k8s-repo repository\n  entrypoint: \/bin\/sh\n  args:\n  - '-c'\n  - |\n    gcloud source repos clone myapp1-k8s-repo && \\\n    cd myapp1-k8s-repo && \\\n    git checkout candidate && \\\n    git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)')\n# This step generates the new manifest\n- name: 'gcr.io\/cloud-builders\/gcloud'\n  id: Generate Kubernetes manifest\n  entrypoint: \/bin\/sh\n  args:\n  - '-c'\n  - |\n     sed \"s\/GOOGLE_CLOUD_PROJECT\/${PROJECT_ID}\/g\" kubernetes.yaml.tpl | \\\n     sed \"s\/COMMIT_SHA\/${SHORT_SHA}\/g\" > myapp1-k8s-repo\/kubernetes.yaml\n# This step pushes the manifest back to myapp1-k8s-repo\n- name: 'gcr.io\/cloud-builders\/gcloud'\n  id: Push manifest\n  entrypoint: \/bin\/sh\n  args:\n  - '-c'\n  - |\n    set -x && \\\n    cd myapp1-k8s-repo && \\\n    git add kubernetes.yaml && \\\n    git commit -m \"Deploying image us-central1-docker.pkg.dev\/$PROJECT_ID\/myapps-repository\/myapp1:${SHORT_SHA}\n    Built from commit ${COMMIT_SHA} of repository myapp1-app-repo\n    Author: $(git log --format='%an <%ae>' -n 1 HEAD)\" && \\\n    git push origin candidate\n# [END 
cloudbuild-trigger-cd]\n```\n\n\n## Step-07: Update index.html in myapp1-app-repo, Push and Verify\n```t\n# Change Directory (GIT REPO)\ncd myapp1-app-repo\n\n# Update index.html\n      <p>Application Version: V4<\/p>\n\n# Add additional files to myapp1-app-repo\n1. kubernetes.yaml.tpl\n2. cloudbuild-trigger-cd.yaml\n3. cloudbuild.yaml (Just a copy of cloudbuild-trigger-cd.yaml)\n\n\n# Git Commit and Push to Remote Repository\ngit status\ngit add .\ngit commit -am \"V4 Commit CI CD\"\ngit push\n\n# Verify Cloud Source Repository: myapp1-app-repo\nhttps:\/\/source.cloud.google.com\/\nmyapp1-app-repo\n\n# Verify Cloud Source Repository: myapp1-k8s-repo\nhttps:\/\/source.cloud.google.com\/\nmyapp1-k8s-repo\nBranch: candidate\nYou should find the \"kubernetes.yaml\" file with the latest image tag committed from \"myapp1-app-repo\"\n```\n\n## Step-08: Verify myapp1-ci and myapp1-cd builds\n- Go to Cloud Build -> History\n- Review latest **myapp1-ci** build steps\n- Review latest **myapp1-cd** build steps\n\n## Step-09: Verify Files in Cloud Source Repositories\n- Go to Cloud Source \n- **myapp1-app-repo:** New files should be present\n- **myapp1-k8s-repo:** kubernetes.yaml file should have GOOGLE_CLOUD_PROJECT and COMMIT_SHA values replaced, e.g. `image: us-central1-docker.pkg.dev\/kdaida123\/myapps-repository\/myapp1:2a3e72a`\n\n## Step-10: Verify Google Artifact Registry\n- Go to Artifact Registry -> Repositories -> myapps-repository -> myapp1\n- Should see a new docker image\n\n## Step-11: Access Application\n```t\n# List Pods\nkubectl get pods\n\n# List Deployments\nkubectl get deploy\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<SERVICE-EXTERNALIP>\nObservation:\n1. 
Should see v4 version of application deployed\n```\n\n## Step-12: Test CI CD one more time\n- Update index.html to V5\n```t\n# Change Directory (GIT REPO)\ncd myapp1-app-repo\n\n# Update index.html\n      <p>Application Version: V5<\/p>\n\n# Git Commit and Push to Remote Repository\ngit status\ngit add .\ngit commit -am \"V5 Commit CI CD\"\ngit push\n\n# Verify Build process\nGo to Cloud Build -> myapp1-ci -> BUILD LOG \nGo to Cloud Build -> myapp1-cd -> BUILD LOG\n\n# Access Application\nhttp:\/\/<SERVICE-EXTERNALIP>\nObservation:\n1. Should see v5 version of application deployed\n```\n\n## Step-13: Verify Application Rollback by just rebuilding CD Pipeline\n- Go to ANY version of `myapp1-cd` and click on `REBUILD`\n- Verify by accessing Application\n```t\n# List Pods\nkubectl get pods\n\n# List Deployments\nkubectl get deploy\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<SERVICE-EXTERNALIP>\nObservation:\n1. Should see V4 version of application deployed\n```\n\n## Step-14: Clean-Up\n```t\n# Disable \/ Delete CI CD Pipelines\n1. Go to Cloud Build -> myapp1-ci -> 3 dots -> Delete\n2. Go to Cloud Build -> myapp1-cd -> 3 dots -> Delete\n\n# Delete Cloud Source Repositories\nGo to Cloud Source (https:\/\/source.cloud.google.com\/repos) \n1. myapp1-app-repo -> Settings -> Delete this repository\n2. 
myapp1-k8s-repo -> Settings -> Delete this repository\n\n# Delete Kubernetes Deployment\nkubectl get deploy\nkubectl delete deploy myapp1-deployment\n\n# Delete Kubernetes Service\nkubectl get svc\nkubectl delete svc myapp1-lb-service \n\n# Delete Artifact Registry\nGo to Artifact Registry -> Repositories -> myapps-repository -> DELETE\n\n# Delete Local Repos\ncd course-repos\nrm -rf myapp1-app-repo\nrm -rf myapp1-k8s-repo\n```\n\n## References\n- https:\/\/github.com\/GoogleCloudPlatform\/gke-gitops-tutorial-cloudbuild","site":"gcp gke docs"}
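The manifest-templating done by the `Generate Kubernetes manifest` build step in the record above is just two chained `sed` substitutions. A minimal local sketch of that step, using hypothetical `PROJECT_ID` and `SHORT_SHA` values and a one-line stand-in for the real `kubernetes.yaml.tpl`:

```shell
# Sketch of the "Generate Kubernetes manifest" build step: replace the
# GOOGLE_CLOUD_PROJECT and COMMIT_SHA placeholders in kubernetes.yaml.tpl.
# PROJECT_ID and SHORT_SHA values below are hypothetical examples; in the
# pipeline they are substituted by Cloud Build.
PROJECT_ID="kdaida123"
SHORT_SHA="2a3e72a"

# One-line stand-in for the real template file
printf 'image: us-central1-docker.pkg.dev/GOOGLE_CLOUD_PROJECT/myapps-repository/myapp1:COMMIT_SHA\n' > kubernetes.yaml.tpl

# Same sed pipeline as the build step, run locally
sed "s/GOOGLE_CLOUD_PROJECT/${PROJECT_ID}/g" kubernetes.yaml.tpl \
  | sed "s/COMMIT_SHA/${SHORT_SHA}/g" > kubernetes.yaml

cat kubernetes.yaml
# -> image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:2a3e72a
```

Running this locally before pushing is a cheap way to check that the template placeholders match what the pipeline expects.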
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine GKE Ingress SSL with Google Managed Certificates Step 00 Pre requisites title GCP Google Kubernetes Engine GKE Ingress SSL 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress SSL\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress SSL with Google Managed Certificates\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Registered Domain using Google Cloud Domains\n4. The DNS name for which the SSL Certificate will be created should already exist as a record in Google Cloud DNS (Example: demo1.kalyanreddydaida.com)\n\n\n## Step-01: Introduction\n- Google Managed Certificates for GKE Ingress\n- Ingress SSL\n- Certificate Validity: 90 days\n- 30 days before expiry, Google starts the renewal process. We don't need to worry about it.\n- **Important Note:** Google-managed certificates are only supported with GKE Ingress using the external HTTP(S) load balancer. 
Google-managed certificates do not support third-party Ingress controllers.\n\n## Step-02: kube-manifest - NO CHANGES\n- 01-Nginx-App1-Deployment-and-NodePortService.yaml\n- 02-Nginx-App2-Deployment-and-NodePortService.yaml\n- 03-Nginx-App3-Deployment-and-NodePortService.yaml\n\n## Step-03: 05-Managed-Certificate.yaml\n- **Pre-requisite-1:** Registered Domain using Google Cloud Domains\n- **Pre-requisite-2:** DNS name for which SSL Certificate should be created should already be added as DNS in Google Cloud DNS (Example: demo1.kalyanreddydaida.com)\n```yaml\napiVersion: networking.gke.io\/v1\nkind: ManagedCertificate\nmetadata:\n  name: managed-cert-for-ingress\nspec:\n  domains:\n    - demo1.kalyanreddydaida.com\n```\n\n## Step-04: 04-Ingress-SSL.yaml\n- Add the annotation `networking.gke.io\/managed-certificates` to Ingress Service with Managed Certificate name. \n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-ssl\n  annotations:\n    # External Load Balancer\n    kubernetes.io\/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io\/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # Google Managed SSL Certificates\n    networking.gke.io\/managed-certificates: managed-cert-for-ingress\nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: \/app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: \/app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80   \n```\n\n## Step-06: Deploy kube-manifests and Verify\n```t\n# Deploy Kubernetes manifests\nkubectl apply 
-f kube-manifests\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Load Balancers\nkubectl get ingress\n\n# Describe Ingress and view Rules\nkubectl describe ingress ingress-ssl\n```\n\n## Step-07: Verify Managed Certificates\n```t\n# List Managed Certificate\nkubectl get managedcertificate\n\n# Describe managed certificate\nkubectl describe managedcertificate managed-cert-for-ingress\nObservation: \n1. Wait for the Google-managed certificate to finish provisioning. \n2. This might take up to 60 minutes. \n3. Status of the certificate should change from PROVISIONING to ACTIVE\ndemo1.kalyanreddydaida.com: PROVISIONING\n\n# List Certificates\ngcloud compute ssl-certificates list\n```\n\n## Step-08: Verify SSL Certificates from Certificate Tab in Load Balancer\n### Load Balancers Component View\n- View in **Load Balancers Component View**\n- Click on **CERTIFICATES** tab\n\n### Load Balancers View\n- Review FRONTEND with HTTPS Protocol and associated Certificate\n\n\n\n## Step-09: Access Application\n```t\n# Important Note\nWait 2 to 3 minutes for the Load Balancer to be fully created and ready for use, else we will get HTTP 502 errors\n\n# Access Application\nhttp:\/\/<DNS-DOMAIN-NAME>\/app1\/index.html\nhttp:\/\/<DNS-DOMAIN-NAME>\/app2\/index.html\nhttp:\/\/<DNS-DOMAIN-NAME>\/\n\n# Note: Replace Domain Name registered in Cloud DNS\n# HTTP URLs\nhttp:\/\/demo1.kalyanreddydaida.com\/app1\/index.html\nhttp:\/\/demo1.kalyanreddydaida.com\/app2\/index.html\nhttp:\/\/demo1.kalyanreddydaida.com\/\n\n# HTTPS URLs\nhttps:\/\/demo1.kalyanreddydaida.com\/app1\/index.html\nhttps:\/\/demo1.kalyanreddydaida.com\/app2\/index.html\nhttps:\/\/demo1.kalyanreddydaida.com\/\n```\n\n## References\n- https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/managed-certs\n- https:\/\/cloud.google.com\/load-balancing\/docs\/ssl-certificates\/troubleshooting\n- https:\/\/github.com\/GoogleCloudPlatform\/gke-managed-cert","site":"gcp gke docs"}
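The ManagedCertificate in the record above covers a single domain, but `spec.domains` is a list and can carry several DNS names when more than one host needs a certificate. A sketch — `demo2.kalyanreddydaida.com` is a hypothetical second name, and note that wildcard domains are not supported by Google-managed certificates:

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert-multi-domain
spec:
  domains:
    # Each entry must be a DNS name already resolving to the Ingress IP
    - demo1.kalyanreddydaida.com
    - demo2.kalyanreddydaida.com  # hypothetical second DNS name
```

Each listed domain must already resolve to the Ingress static IP before provisioning can reach ACTIVE.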
{"questions":"gcp gke docs Use Google Disks Volume Clone for GKE Workloads Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal 1 Verify if GKE Cluster is created title GKE Persistent Disks Volume Clone t","answers":"---\ntitle: GKE Persistent Disks - Volume Clone\ndescription: Use Google Disks Volume Clone for GKE Workloads\n---\n\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** is enabled in the GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in the GKE Cluster.\n\n\n## Step-01: Introduction\n- Understand how to implement cloned Disks in GKE\n\n## Step-02: Kubernetes YAML Manifests\n- **Project Folder:** 01-kube-manifests\n- No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo`\n- 01-persistent-volume-claim.yaml\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-03: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 01-kube-manifests\/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f
<USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-04: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB`\u00a0Persistent Disk\n- **Observation:** Review the below items\n  - **Zones:** us-central1-c\n  - **Type:** Balanced persistent disk\n  - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk\n\n## Step-05: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User admin102\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial Security Address: ssn102\n\n# Create New User admin103\nUsername: admin103\nPassword: password103\nFirst Name: fname103\nLast Name: lname103\nEmail Address: admin103@stacksimplify.com\nSocial Security Address: ssn103\n```\n\n## Step-06: Volume Clone: 01-podpvc-clone.yaml\n- **Project Folder:** 02-Use-Cloned-Volume-kube-manifests\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: podpvc-clone\nspec:\n  dataSource:\n    name: mysql-pv-claim # the name of the source PersistentVolumeClaim that you created as part of UMS Web App\n    kind: PersistentVolumeClaim\n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo  # same as the StorageClass of the source PersistentVolumeClaim.   
\n  resources:\n    requests:\n      storage: 4Gi # the amount of storage to request, which must be at least the size of the source PersistentVolumeClaim\n```\n\n## Step-07: 03-mysql-deployment.yaml\n- **Change-1:** Change the `claimName: mysql-pv-claim` to `claimName: podpvc-clone`\n- \n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: mysql2\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql2\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql2\n    spec: \n      containers:\n        - name: mysql2\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: \/var\/lib\/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: \/docker-entrypoint-initdb.d #https:\/\/hub.docker.com\/_\/mysql Refer Initializing a fresh instance                                         \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            #claimName: mysql-pv-claim\n            claimName: podpvc-clone\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script2\n```\n\n## Step-08:  Kubernetes YAML Manifests\n- **Project Folder:** 02-Use-Cloned-Volume-kube-manifests\n- No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo`\n- For all the resource names and labels append with 2 (Example: mysql to mysql2, usermgmt-webapp to usermgmt-webapp2)\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-09: Deploy kube-manifests\n```t\n# Deploy 
Kubernetes Manifests\nkubectl apply -f 02-Use-Cloned-Volume-kube-manifests\/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp2-6ff7d7d849-7lrg5\n```\n\n## Step-10: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB`\u00a0Persistent Disk\n- **Observation:** Review the below items\n  - **Type:** Balanced persistent disk\n  - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk\n\n## Step-11: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\nObservation:\n1. You should see both \"admin102\" and \"admin103\" users already present.\n2. 
This is because we have used the cloned disk from \"01-kube-manifests\"\n```\n\n## Step-12: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f 01-kube-manifests -f 02-Use-Cloned-Volume-kube-manifests\n```\n\n\n```t\n# Reference\nhttps:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/assign-pods-nodes\/\n\n# Get Nodes\nkubectl get nodes \n\n# Show Node Labels\nkubectl get nodes --show-labels\n\n# Label Node\nkubectl label nodes <your-node-name> nodetype=db\nkubectl label nodes gke-standard-cluster-pri-default-pool-4f7ab141-p0gz nodetype=db\n\n# Show Node Labels\nkubectl get nodes --show-labels\n```","site":"gcp gke docs"}
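As the comments in `01-podpvc-clone.yaml` note, the clone's `storage` request must be at least the size of the source PersistentVolumeClaim. A small local sketch of that size rule, with quantity values copied from the manifests above (`to_bytes` is a helper written for this sketch, not a kubectl feature, and handles only the Gi/Mi suffixes used in this section):

```shell
# Convert a Kubernetes quantity like 4Gi or 500Mi to bytes.
to_bytes() {
  q="$1"
  case "$q" in
    *Gi) echo $(( ${q%Gi} * 1024 * 1024 * 1024 )) ;;
    *Mi) echo $(( ${q%Mi} * 1024 * 1024 )) ;;
    *)   echo "$q" ;;
  esac
}

SOURCE_SIZE="4Gi"   # spec.resources.requests.storage of mysql-pv-claim
CLONE_SIZE="4Gi"    # spec.resources.requests.storage of podpvc-clone

if [ "$(to_bytes "$CLONE_SIZE")" -ge "$(to_bytes "$SOURCE_SIZE")" ]; then
  echo "clone size OK"
else
  echo "clone must be at least $SOURCE_SIZE"
fi
```

If the clone requests less than the source size, the PVC is rejected at creation time rather than silently truncated.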
{"questions":"gcp gke docs title GCP Google Kubernetes Engine Kubernetes Resource Quota Implement GCP Google Kubernetes Engine Kubernetes Resource Quota Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GCP Google Kubernetes Engine Kubernetes Resource Quota\ndescription: Implement GCP Google Kubernetes Engine Kubernetes Resource Quota\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n1. Kubernetes Namespaces - ResourceQuota \n2. 
Kubernetes Namespaces - Declarative using YAML\n\n## Step-02: Create Namespace manifest\n- **Important Note:** File name starts with `01-` so that the namespace is created first when applying the manifests and the other objects don't throw an error.\n```yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: qa\n```\n\n## Step-03: Create Kubernetes ResourceQuota manifest\n- **File Name:** 02-kubernetes-resourcequota.yaml\n```yaml\napiVersion: v1\nkind: ResourceQuota\nmetadata:\n  name: qa-namespace-resource-quota\n  namespace: qa\nspec:\n  hard:\n    requests.cpu: \"1\"\n    requests.memory: 1Gi\n    limits.cpu: \"2\"\n    limits.memory: 2Gi  \n    pods: \"3\"    \n    configmaps: \"3\" \n    persistentvolumeclaims: \"3\" \n    secrets: \"3\" \n    services: \"3\"                   \n```\n\n## Step-04: Create Kubernetes objects & Test\n```t\n# Create All Objects\nkubectl apply -f kube-manifests\/\n\n# List Pods\nkubectl get pods -n qa -w\n\n# View Pod Specification (CPU & Memory)\nkubectl describe pod <pod-name> -n qa\n\n# Get Resource Quota - default Namespace\nkubectl get resourcequota\nkubectl describe resourcequota gke-resource-quotas\nObservation:\n1. gke-resource-quotas is pre-created by the GKE Cluster for each namespace. \n2. Any new quota we define below the GKE resource quota limits will be overridden by the default GKE Resource Quota in the Namespace.
\n\n\n# Get Resource Quota - qa Namespace\nkubectl get resourcequota -n qa\n\n# Describe Resource Quota - qa Namespace\nkubectl describe resourcequota qa-namespace-resource-quota -n qa\n\n# Test Quota by increasing the pods to 4 where in resource quota is 3 pods only\nkubectl get deploy -n qa\nkubectl get pods -n qa\nkubectl scale --replicas=4 deployment\/myapp1-deployment -n qa\nkubectl get pods -n qa\nkubectl get deploy -n qa\n\n# Verify Deployment and ReplicaSet Events\nkubectl describe deploy <Deployment-Name> -n qa\nkubectl describe rs <ReplicaSet-Name> -n qa\nObservation: In ReplicaSet Events we should find the error\n\n## WARNING MESSAGE IN REPLICASET EVENTS ABOUT RESOURCE QUOTA\nWarning  FailedCreate      77s                replicaset-controller  Error creating: pods \"myapp1-deployment-5b4bdfc49d-92t9z\" is forbidden: exceeded quota: qa-namespace-resource-quota, requested: pods=1, used: pods=3, limited: pods=3\n\n# List Services\nkubectl get svc -n qa\n\n# Access Application\nhttp:\/\/<SVC-EXTERNAL-IP>\n```\n## Step-05: Clean-Up\n- Delete all Kubernetes objects created as part of this section\n```t\n# Delete All\nkubectl delete -f kube-manifests\/ -n qa\n\n# List Namespaces\nkubectl get ns\n```\n\n## References:\n- https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/namespaces-walkthrough\/\n- https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/manage-resources\/quota-memory-cpu-namespace\/\n\n\n## Additional References:\n- https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/manage-resources\/cpu-constraint-namespace\/ \n- https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/manage-resources\/memory-constraint-namespace\/","site":"gcp gke docs"}
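The `FailedCreate` event quoted in Step-04 is plain quota arithmetic: admission is denied when used plus requested exceeds the hard limit. A small local sketch of that check, with the numbers copied from the event message (`requested: pods=1, used: pods=3, limited: pods=3`):

```shell
# Values from the ReplicaSet event in Step-04.
QUOTA_PODS=3
USED_PODS=3
REQUESTED_PODS=1

if [ $(( USED_PODS + REQUESTED_PODS )) -gt "$QUOTA_PODS" ]; then
  verdict="forbidden: exceeded quota (requested: pods=$REQUESTED_PODS, used: pods=$USED_PODS, limited: pods=$QUOTA_PODS)"
else
  verdict="admitted"
fi
echo "$verdict"
```

This is why the Deployment stays at 3 ready replicas after `kubectl scale --replicas=4`: the fourth Pod never passes admission, and the ReplicaSet keeps retrying and logging the warning.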
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine Kubernetes Liveness Probes Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal title GCP Google Kubernetes Engine Kubernetes Liveness Probes Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GCP Google Kubernetes Engine Kubernetes Liveness Probes\ndescription: Implement GCP Google Kubernetes Engine Kubernetes Liveness Probes\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n- Implement `Liveness Probe` and Test it\n\n## Step-02: Understand Liveness Probe \n1. Liveness probes let Kubernetes know whether the application running in a container inside a pod is healthy or not.\n2. If the application is healthy, Kubernetes does not interfere with the pod's functioning. If the application is unhealthy, Kubernetes marks the pod as unhealthy and restarts the container.\n3. 
In short, Use liveness probe to remove unhealthy pods\n\n## Step-03: Liveness Probe Type: Command\n### Step-03-01: Review Liveness Probe Type: Command\n- **File Name:** `01-liveness-probe-linux-command\/05-UserMgmtWebApp-Deployment.yaml`\n```yaml\n          # Liveness Probe Linux Command                   \n          livenessProbe:\n            exec:\n              command: \n                - \/bin\/sh\n                - -c \n                - nc -z localhost 8080\n            initialDelaySeconds: 60 # initialDelaySeconds field tells  the kubelet that it should wait 60 seconds before performing the first probe. \n            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. \n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value                      \n```\n\n### Step-03-02: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 01-liveness-probe-linux-command\n\n# List Pods\nkubectl get pods\nObservation:\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<LB-IP>\nUsername: admin101\nPassword: password101\n```\n\n### Step-03-03: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f 01-liveness-probe-linux-command\n```\n\n\n## Step-04: Liveness Probe Type: HTTP Request\n### Step-04-01: Review Liveness Probe Type: HTTP Request\n- **File Name:** `02-liveness-probe-HTTP-Request\/05-UserMgmtWebApp-Deployment.yaml`\n```yaml\n          # Liveness Probe HTTP Request\n          livenessProbe:\n            httpGet:\n              path: \/login\n              port: 8080\n              httpHeaders:\n              - name: Custom-Header\n                value: Awesome          \n            initialDelaySeconds: 60 # initialDelaySeconds field tells  the kubelet that it should wait 60 seconds before performing the first probe. 
\n            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. \n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value\n                    \n```\n\n### Step-04-02: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 02-liveness-probe-HTTP-Request\n\n# List Pods\nkubectl get pods\nObservation:\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<LB-IP>\nUsername: admin101\nPassword: password101\n```\n\n### Step-04-03: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f 02-liveness-probe-HTTP-Request\n```\n\n\n\n## Step-05: Liveness Probe Type: TCP Request\n### Step-05-01: Review Liveness Probe Type: TCP Request\n- **File Name:** `03-liveness-probe-TCP-Request\/05-UserMgmtWebApp-Deployment.yaml`\n```yaml\n          # Liveness Probe TCP request\n          livenessProbe:\n            tcpSocket:\n              port: 8080\n            initialDelaySeconds: 60 # initialDelaySeconds field tells  the kubelet that it should wait 60 seconds before performing the first probe. \n            periodSeconds: 10 # periodSeconds field specifies kubelet should perform a liveness probe every 10 seconds. 
\n            failureThreshold: 3 # Default Value\n            successThreshold: 1 # Default value\n                    \n```\n\n### Step-05-02: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 03-liveness-probe-TCP-Request\n\n# List Pods\nkubectl get pods\nObservation:\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<LB-IP>\nUsername: admin101\nPassword: password101\n```\n\n### Step-05-03: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f 03-liveness-probe-TCP-Request\n```\n\n","site":"gcp gke docs"}
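All three probe types above share the same bookkeeping: the kubelet runs the check every `periodSeconds`, and only `failureThreshold` consecutive failures (with `successThreshold: 1`, any success resets the count) trigger a container restart. A local sketch of that counter logic; `run_probe` stands in for the real exec/HTTP/TCP check (e.g. `nc -z localhost 8080`) and is hard-wired to fail here so the loop terminates:

```shell
PERIOD_SECONDS=10
FAILURE_THRESHOLD=3

# Stand-in for the configured probe; always fails in this sketch.
run_probe() { false; }

failures=0
while [ "$failures" -lt "$FAILURE_THRESHOLD" ]; do
  if run_probe; then
    failures=0   # any success resets the counter (successThreshold: 1)
  else
    failures=$(( failures + 1 ))
  fi
done
echo "restart container after ~$(( FAILURE_THRESHOLD * PERIOD_SECONDS ))s of consecutive failures"
```

With the values used in this section (period 10s, threshold 3), an unhealthy container is restarted roughly 30 seconds after its first failed probe, plus the initial `initialDelaySeconds: 60` grace period before probing starts.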
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks title GCP Google Kubernetes Engine GKE Ingress Custom Health Checks Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress Custom Health Checks\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n3. ExternalDNS Controller should be installed and ready to use\n```t\n# List Namespaces (external-dns-ns namespace should be present)\nkubectl get ns\n\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n\n## Step-01: Introduction\n1. Implement Self Signed SSL Certificates with GKE Ingress Service\n2. Create SSL Certificates using OpenSSL.\n3. Create Kubernetes Secret with SSL Certificate and Private Key\n4. 
Reference these Kubernetes Secrets in Ingress Service **Ingress spec.tls**\n\n## Step-02: App1 - Create Self-Signed SSL Certificates and Kubernetes Secrets\n```t\n# Change Directory \ncd SSL-SelfSigned-Certs\n\n# Create your app1 key:\nopenssl genrsa -out app1-ingress.key 2048\n\n# Create your app1 certificate signing request:\nopenssl req -new -key app1-ingress.key -out app1-ingress.csr -subj \"\/CN=app1.kalyanreddydaida.com\"\n\n# Create your app1 certificate:\nopenssl x509 -req -days 7300 -in app1-ingress.csr -signkey app1-ingress.key -out app1-ingress.crt\n\n# Create a Secret that holds your app1 certificate and key:\nkubectl create secret tls app1-secret  --cert app1-ingress.crt --key app1-ingress.key\n\n# List Secrets\nkubectl get secrets\n```\n\n\n## Step-03: App2 - Create Self-Signed SSL Certificates and Kubernetes Secrets\n```t\n# Change Directory \ncd SSL-SelfSigned-Certs\n\n# Create your app2 key:\nopenssl genrsa -out app2-ingress.key 2048\n\n# Create your app2 certificate signing request:\nopenssl req -new -key app2-ingress.key -out app2-ingress.csr -subj \"\/CN=app2.kalyanreddydaida.com\"\n\n# Create your app2 certificate:\nopenssl x509 -req -days 7300 -in app2-ingress.csr -signkey app2-ingress.key -out app2-ingress.crt\n\n# Create a Secret that holds your app2 certificate and key:\nkubectl create secret tls app2-secret  --cert app2-ingress.crt --key app2-ingress.key\n\n# List Secrets\nkubectl get secrets\n```\n\n## Step-03: App3 - Create Self-Signed SSL Certificates and Kubernetes Secrets\n```t\n# Change Directory \ncd SSL-SelfSigned-Certs\n\n# Create your app3 key:\nopenssl genrsa -out app3-ingress.key 2048\n\n# Create your app3 certificate signing request:\nopenssl req -new -key app3-ingress.key -out app3-ingress.csr -subj \"\/CN=app3-default.kalyanreddydaida.com\"\n\n# Create your app3 certificate:\nopenssl x509 -req -days 7300 -in app3-ingress.csr -signkey app3-ingress.key -out app3-ingress.crt\n\n# Create a Secret that holds your app3 certificate 
and key:\nkubectl create secret tls app3-secret  --cert app3-ingress.crt --key app3-ingress.key\n\n# List Secrets\nkubectl get secrets\n```\n\n## Step-04: No changes to following kube-manifests from previous Ingress Name Based Virtual Host Routing Demo\n1. 01-Nginx-App1-Deployment-and-NodePortService.yaml\n2. 02-Nginx-App2-Deployment-and-NodePortService.yaml\n3. 03-Nginx-App3-Deployment-and-NodePortService.yaml\n4. 05-frontendconfig.yaml\n\n## Step-05: Review 04-ingress-self-signed-ssl.yaml\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-selfsigned-ssl\n  annotations:\n    # External Load Balancer\n    kubernetes.io\/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io\/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io\/v1beta1.FrontendConfig: \"my-frontend-config\"   \n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io\/hostname: app3-default.kalyanreddydaida.com\nspec: \n  # SSL Certs - Associate using Kubernetes Secrets         \n  tls:\n  - secretName: app1-secret\n  - secretName: app2-secret\n  - secretName: app3-secret\n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80           \n  rules:\n    - host: app1.kalyanreddydaida.com\n      http:\n        paths:\n          - path: \/\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n    - host: app2.kalyanreddydaida.com\n      http:\n        paths:                  \n          - path: \/\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n```\n\n## Step-06: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes 
Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Services\nkubectl get ingress\n\n# Describe Ingress Service\nkubectl describe ingress ingress-selfsigned-ssl\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com\n2. Verify Record sets, DNS Name we added in Ingress Service should be present \n\n# List FrontendConfigs\nkubectl get frontendconfig\n\n# Verify SSL Certificates\nGo to Load Balancers\n1. Load Balancers View -> In Frontends\n2. Load Balancers Components View -> Certificates Tab\n```\n\n## Step-07: Access Application\n```t\n# Access Application\nhttp:\/\/app1.kalyanreddydaida.com\/app1\/index.html\nhttp:\/\/app2.kalyanreddydaida.com\/app2\/index.html\nhttp:\/\/app3-default.kalyanreddydaida.com\n\nObservation:\n1. All 3 URLS should work as expected. In your case, replace YOUR_DOMAIN name for testing\n2. HTTP to HTTPS redirect should work\n3. You will get a warning \"The certificate is not trusted because it is self-signed.\". 
Click on \"Accept the risk and continue\"\n```\n\n\n## Step-08: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# List Kubernetes Secrets\nkubectl get secrets\n\n# Delete Kubernetes Secrets\nkubectl delete secret app1-secret \nkubectl delete secret app2-secret \nkubectl delete secret app3-secret \n```\n\n## References\n- [User Managed Certificates](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-multi-ssl#user-managed-certs)","site":"gcp gke docs","answers_cleaned":"    title  GCP Google Kubernetes Engine GKE Ingress Custom Health Checks description  Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks         Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  REGION  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123    List Kubernetes Nodes kubectl get nodes      3  ExternalDNS Controller should be installed and ready to use    t   List Namespaces  external dns ns namespace should be present  kubectl get ns    List External DNS Pods kubectl  n external dns ns get pods         Step 01  Introduction 1  Implement Self Signed SSL Certificates with GKE Ingress Service 2  Create SSL Certificates using OpenSSL  3  Create Kubernetes Secret with SSL Certificate and Private Key 4  Reference these Kubernetes Secrets in Ingress Service   Ingress spec tls       Step 02  App1   Create Self Signed SSL Certificates and Kubernetes Secrets    t   Change Directory  cd SSL SelfSigned Certs    Create your app1 key  openssl genrsa  out app1 ingress key 2048    Create your app1 certificate signing request  openssl req  new  key app1 ingress key  out app1 ingress csr  subj   CN app1 kalyanreddydaida com 
    Create your app1 certificate  openssl x509  req  days 7300  in app1 ingress csr  signkey app1 ingress key  out app1 ingress crt    Create a Secret that holds your app1 certificate and key  kubectl create secret tls app1 secret    cert app1 ingress crt   key app1 ingress key    List Secrets kubectl get secrets          Step 03  App2   Create Self Signed SSL Certificates and Kubernetes Secrets    t   Change Directory  cd SSL SelfSigned Certs    Create your app2 key  openssl genrsa  out app2 ingress key 2048    Create your app2 certificate signing request  openssl req  new  key app2 ingress key  out app2 ingress csr  subj   CN app2 kalyanreddydaida com     Create your app2 certificate  openssl x509  req  days 7300  in app2 ingress csr  signkey app2 ingress key  out app2 ingress crt    Create a Secret that holds your app2 certificate and key  kubectl create secret tls app2 secret    cert app2 ingress crt   key app2 ingress key    List Secrets kubectl get secrets         Step 03  App3   Create Self Signed SSL Certificates and Kubernetes Secrets    t   Change Directory  cd SSL SelfSigned Certs    Create your app3 key  openssl genrsa  out app3 ingress key 2048    Create your app3 certificate signing request  openssl req  new  key app3 ingress key  out app3 ingress csr  subj   CN app3 default kalyanreddydaida com     Create your app3 certificate  openssl x509  req  days 7300  in app3 ingress csr  signkey app3 ingress key  out app3 ingress crt    Create a Secret that holds your app3 certificate and key  kubectl create secret tls app3 secret    cert app3 ingress crt   key app3 ingress key    List Secrets kubectl get secrets         Step 04  No changes to following kube manifests from previous Ingress Name Based Virtual Host Routing Demo 1  01 Nginx App1 Deployment and NodePortService yaml 2  02 Nginx App2 Deployment and NodePortService yaml 3  03 Nginx App3 Deployment and NodePortService yaml 4  05 frontendconfig yaml     Step 05  Review 04 ingress self signed ssl yaml   
 yaml apiVersion  networking k8s io v1 kind  Ingress metadata    name  ingress selfsigned ssl   annotations        External Load Balancer     kubernetes io ingress class   gce          Static IP for Ingress Service     kubernetes io ingress global static ip name   gke ingress extip1           SSL Redirect HTTP to HTTPS     networking gke io v1beta1 FrontendConfig   my frontend config           External DNS   For creating a Record Set in Google Cloud Cloud DNS     external dns alpha kubernetes io hostname  app3 default kalyanreddydaida com spec       SSL Certs   Associate using Kubernetes Secrets            tls      secretName  app1 secret     secretName  app2 secret     secretName  app3 secret   defaultBackend      service        name  app3 nginx nodeport service       port          number  80              rules        host  app1 kalyanreddydaida com       http          paths              path                pathType  Prefix             backend                service                  name  app1 nginx nodeport service                 port                     number  80       host  app2 kalyanreddydaida com       http          paths                                path                pathType  Prefix             backend                service                  name  app2 nginx nodeport service                 port                     number  80         Step 06  Deploy Kubernetes Manifests    t   Deploy Kubernetes Manifests kubectl apply  f kube manifests    List Deployments kubectl get deploy    List Pods kubectl get pods    List Services kubectl get svc    List Ingress Services kubectl get ingress    Describe Ingress Service kubectl describe ingress ingress selfsigned ssl    Verify external dns Controller logs kubectl  n external dns ns logs  f   kubectl  n external dns ns get po   egrep  o  external dns A Za z0 9       or  kubectl  n external dns ns get pods kubectl  n external dns ns logs  f  External DNS Pod Name     Verify Cloud DNS 1  Go to Network Services    
Cloud DNS    kalyanreddydaida com 2  Verify Record sets  DNS Name we added in Ingress Service should be present     List FrontendConfigs kubectl get frontendconfig    Verify SSL Certificates Go to Load Balancers 1  Load Balancers View    In Frontends 2  Load Balancers Components View    Certificates Tab         Step 07  Access Application    t   Access Application http   app1 kalyanreddydaida com app1 index html http   app2 kalyanreddydaida com app2 index html http   app3 default kalyanreddydaida com  Observation  1  All 3 URLS should work as expected  In your case  replace YOUR DOMAIN name for testing 2  HTTP to HTTPS redirect should work 3  You will get a warning  The certificate is not trusted because it is self signed    Click on  Accept the risk and continue           Step 08  Clean Up    t   Delete Kubernetes Resources kubectl delete  f kube manifests    List Kubernetes Secrets kubectl get secrets    Delete Kubernetes Secrets kubectl delete secret app1 secret  kubectl delete secret app2 secret  kubectl delete secret app3 secret          References    User Managed Certificates  https   cloud google com kubernetes engine docs how to ingress multi ssl user managed certs "}
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine GKE External DNS Install Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created title GCP Google Kubernetes Engine GKE External DNS Install gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE External DNS Install\ndescription: Implement GCP Google Kubernetes Engine GKE External DNS Install\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n## Step-01: Introduction\n1. Create GCP IAM Service Account: external-dns-gsa\n2. Add IAM Roles to GCP IAM Service Account (add-iam-policy-binding)\n3. Create Kubernetes Namespace: external-dns-ns\n4. Create Kubernetes Service Account: external-dns-ksa\n5. Associate GCP IAM Service Account with Kubernetes Service Account (gcloud iam service-accounts add-iam-policy-binding)\n6. Annotate Kubernetes Service Account with GCP IAM SA email Address (kubectl annotate serviceaccount)\n7. Install Helm CLI on your local desktop (if not installed)\n8. Install  External-DNS using Helm\n9. Verify External-DNS Logs\n10. 
Additional Reference: Install [ExternalDNS Controller using Helm](https:\/\/github.com\/kubernetes-sigs\/external-dns)\n\n## Step-03: Create GCP IAM Service Account\n```t\n# List IAM Service Accounts\ngcloud iam service-accounts list\n\n# Create GCP IAM Service Account\ngcloud iam service-accounts create GSA_NAME --project=GSA_PROJECT\nGSA_NAME: the name of the new IAM service account.\nGSA_PROJECT: the project ID of the Google Cloud project for your IAM service account.\n\n# Replace GSA_NAME and GSA_PROJECT\ngcloud iam service-accounts create external-dns-gsa --project=kdaida123\n\n# List IAM Service Accounts\ngcloud iam service-accounts list\n```\n\n## Step-04: Add IAM Roles to GCP IAM Service Account\n```t\n# Add IAM Roles to GCP IAM Service Account\ngcloud projects add-iam-policy-binding PROJECT_ID \\\n    --member \"serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com\" \\\n    --role \"ROLE_NAME\"\nPROJECT_ID: your Google Cloud project ID.\nGSA_NAME: the name of your IAM service account.\nGSA_PROJECT: the project ID of the Google Cloud project of your IAM service account.\nROLE_NAME: the IAM role to assign to your service account, like roles\/spanner.viewer.\n\n# Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME\ngcloud projects add-iam-policy-binding kdaida123 \\\n    --member \"serviceAccount:external-dns-gsa@kdaida123.iam.gserviceaccount.com\" \\\n    --role \"roles\/dns.admin\" \n```\n\n## Step-05: Create Kubernetes Namespace and Kubernetes Service Account\n```t\n# Create Kubernetes Namespace\nkubectl create namespace <NAMESPACE>\nkubectl create namespace external-dns-ns\n\n# List Namespaces\nkubectl get ns\n\n# Create Service Account\nkubectl create serviceaccount <KSA_NAME>  --namespace <NAMESPACE>\nkubectl create serviceaccount external-dns-ksa  --namespace external-dns-ns\n\n# List Service Accounts\nkubectl -n external-dns-ns get sa\n```\n\n## Step-06: Associate GCP IAM Service Account with Kubernetes Service Account\n- Allow the Kubernetes 
service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts.\n- This binding allows the Kubernetes service account to act as the IAM service account.\n```t\n# Associate GCP IAM Service Account with Kubernetes Service Account\ngcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com \\\n    --role roles\/iam.workloadIdentityUser \\\n    --member \"serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE\/KSA_NAME]\"\n\n# Replace GSA_NAME, GSA_PROJECT, PROJECT_ID, NAMESPACE, KSA_NAME\ngcloud iam service-accounts add-iam-policy-binding external-dns-gsa@kdaida123.iam.gserviceaccount.com \\\n    --role roles\/iam.workloadIdentityUser \\\n    --member \"serviceAccount:kdaida123.svc.id.goog[external-dns-ns\/external-dns-ksa]\"\n```\n\n## Step-07: Annotate Kubernetes Service Account with GCP IAM SA email Address\n- Annotate the Kubernetes service account with the email address of the IAM service account.\n```t\n# Annotate Kubernetes Service Account with GCP IAM SA email Address\nkubectl annotate serviceaccount KSA_NAME \\\n    --namespace NAMESPACE \\\n    iam.gke.io\/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com\n\n# Replace KSA_NAME, NAMESPACE, GSA_NAME, GSA_PROJECT\nkubectl annotate serviceaccount external-dns-ksa \\\n    --namespace external-dns-ns \\\n    iam.gke.io\/gcp-service-account=external-dns-gsa@kdaida123.iam.gserviceaccount.com\n\n# Describe Kubernetes Service Account\nkubectl -n external-dns-ns describe sa external-dns-ksa \n```\n\n## Step-08: Install Helm Client on Local Desktop\n- [Install Helm](https:\/\/helm.sh\/docs\/intro\/install\/)\n```t\n# Install Helm\nbrew install helm\n\n# Verify Helm version\nhelm version\n```\n\n## Step-09: Review external-dns values.yaml\n- [external-dns values.yaml](https:\/\/github.com\/kubernetes-sigs\/external-dns\/blob\/master\/charts\/external-dns\/values.yaml)\n- [external-dns 
Configuration](https:\/\/github.com\/kubernetes-sigs\/external-dns\/tree\/master\/charts\/external-dns#configuration)\n\n\n## Step-10: Review external-dns Deployment Configs\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: external-dns\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: external-dns\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\",\"endpoints\",\"pods\"]\n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"extensions\",\"networking.k8s.io\"]\n  resources: [\"ingresses\"] \n  verbs: [\"get\",\"watch\",\"list\"]\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"get\", \"watch\", \"list\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: external-dns-viewer\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: external-dns\nsubjects:\n- kind: ServiceAccount\n  name: external-dns\n  namespace: default\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: external-dns\nspec:\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: external-dns\n  template:\n    metadata:\n      labels:\n        app: external-dns\n    spec:\n      serviceAccountName: external-dns\n      containers:\n      - name: external-dns\n        image: k8s.gcr.io\/external-dns\/external-dns:v0.8.0\n        args:\n        - --source=service\n        - --source=ingress\n        - --domain-filter=external-dns-test.gcp.zalan.do # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones\n        - --provider=google\n#        - --google-project=zalando-external-dns-test # Use this to specify a project different from the one external-dns is running inside\n        - --google-zone-visibility=private # Use this to filter to only zones with this visibility. Set to either 'public' or 'private'. 
Omitting will match public and private zones\n        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization\n        - --registry=txt\n        - --txt-owner-id=my-identifier\n```\n\n## Step-11: Install external-dns using Helm\n```t\n# Add external-dns repo to Helm\nhelm repo add external-dns https:\/\/kubernetes-sigs.github.io\/external-dns\/\n\n# Install Helm Chart\nhelm upgrade --install external-dns external-dns\/external-dns \\\n    --set provider=google \\\n    --set policy=sync \\\n    --set google-zone-visibility=public \\\n    --set txt-owner-id=k8s \\\n    --set serviceAccount.create=false \\\n    --set serviceAccount.name=external-dns-ksa \\\n    -n external-dns-ns\n    \n# Optional Setting (Important Note: will make ExternalDNS see only the Cloud DNS zones matching provided domain, omit to process all available Cloud DNS zones)\n--set domain-filter=kalyanreddydaida.com \\\n```\n\n## Step-12: Verify external-dns deployment\n```t\n# List Helm \nhelm  list -n external-dns-ns\n\n# List Kubernetes Service Account\nkubectl -n external-dns-ns get sa\n\n# Describe Kubernetes Service Account\nkubectl -n external-dns-ns describe sa external-dns-ksa\n\n# List All resources from default Namespace\nkubectl -n external-dns-ns get all\n\n# List pods (external-dns pod should be in running state)\nkubectl -n external-dns-ns get pods\n\n# Verify Deployment by checking logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n```\n\n## References\n- https:\/\/github.com\/kubernetes-sigs\/external-dns\/tree\/master\/charts\/external-dns\n- https:\/\/github.com\/kubernetes-sigs\/external-dns\/blob\/master\/docs\/tutorials\/gke.md\n\n## External-DNS Logs from Reference\n\n```log\nW0624 07:14:15.829747   14199 gcp.go:120] WARNING: the gcp auth plugin is 
deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.\nTo learn more, consult https:\/\/cloud.google.com\/blog\/products\/containers-kubernetes\/kubectl-auth-changes-in-gke\nError from server (BadRequest): container \"external-dns\" in pod \"external-dns-6f49549d96-2jd5q\" is waiting to start: ContainerCreating\nKalyans-Mac-mini:48-GKE-Ingress-IAP kalyanreddy$ kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\nW0624 07:14:23.520269   14201 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.\nTo learn more, consult https:\/\/cloud.google.com\/blog\/products\/containers-kubernetes\/kubectl-auth-changes-in-gke\nW0624 07:14:24.512312   14203 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.\nTo learn more, consult https:\/\/cloud.google.com\/blog\/products\/containers-kubernetes\/kubectl-auth-changes-in-gke\ntime=\"2022-06-24T01:44:18Z\" level=info msg=\"config: {APIServerURL: KubeConfig: RequestTimeout:30s DefaultTargets:[] ContourLoadBalancerService:heptio-contour\/contour GlooNamespace:gloo-system SkipperRouteGroupVersion:zalando.org\/v1 Sources:[service ingress] Namespace: AnnotationFilter: LabelFilter: FQDNTemplate: CombineFQDNAndAnnotation:false IgnoreHostnameAnnotation:false IgnoreIngressTLSSpec:false IgnoreIngressRulesSpec:false Compatibility: PublishInternal:false PublishHostIP:false AlwaysPublishNotReadyAddresses:false ConnectorSourceServer:localhost:8080 Provider:google GoogleProject: GoogleBatchChangeSize:1000 GoogleBatchChangeInterval:1s GoogleZoneVisibility: DomainFilter:[] ExcludeDomains:[] RegexDomainFilter: RegexDomainExclusion: ZoneNameFilter:[] ZoneIDFilter:[] AlibabaCloudConfigFile:\/etc\/kubernetes\/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType: AWSZoneTagFilter:[] AWSAssumeRole: AWSBatchChangeSize:1000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true 
AWSAPIRetries:3 AWSPreferCNAME:false AWSZoneCacheDuration:0s AWSSDServiceCleanup:false AzureConfigFile:\/etc\/kubernetes\/azure.json AzureResourceGroup: AzureSubscriptionID: AzureUserAssignedIdentityClientID: BluecatDNSConfiguration: BluecatConfigFile:\/etc\/kubernetes\/bluecat.json BluecatDNSView: BluecatGatewayHost: BluecatRootZone: BluecatDNSServerName: BluecatDNSDeployType:no-deploy BluecatSkipTLSVerify:false CloudflareProxied:false CloudflareZonesPerPage:50 CoreDNSPrefix:\/skydns\/ RcodezeroTXTEncrypt:false AkamaiServiceConsumerDomain: AkamaiClientToken: AkamaiClientSecret: AkamaiAccessToken: AkamaiEdgercPath: AkamaiEdgercSection: InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InfobloxView: InfobloxMaxResults:0 InfobloxFQDNRegEx: InfobloxCreatePTR:false InfobloxCacheDuration:0 DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:\/etc\/kubernetes\/oci.yaml InMemoryZones:[] OVHEndpoint:ovh-eu OVHApiRateLimit:20 PDNSServer:http:\/\/localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:sync Registry:txt TXTOwnerID:default TXTPrefix: TXTSuffix: Interval:1m0s MinEventSyncInterval:5s Once:false DryRun:false UpdateEvents:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s TXTWildcardReplacement: ExoscaleEndpoint:https:\/\/api.exoscale.ch\/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io\/v1alpha1 CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] CFAPIEndpoint: CFUsername: CFPassword: RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136GSSTSIG:false RFC2136KerberosRealm: RFC2136KerberosUsername: RFC2136KerberosPassword: RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false RFC2136MinTTL:0s RFC2136BatchChangeSize:50 NS1Endpoint: NS1IgnoreSSL:false NS1MinTTLSeconds:0 TransIPAccountName: TransIPPrivateKeyFile: 
DigitalOceanAPIPageSize:50 ManagedDNSRecordTypes:[A CNAME] GoDaddyAPIKey: GoDaddySecretKey: GoDaddyTTL:0 GoDaddyOTE:false OCPRouterName:}\"\ntime=\"2022-06-24T01:44:18Z\" level=info msg=\"Instantiating new Kubernetes client\"\ntime=\"2022-06-24T01:44:18Z\" level=info msg=\"Using inCluster-config based on serviceaccount-token\"\ntime=\"2022-06-24T01:44:18Z\" level=info msg=\"Created Kubernetes client https:\/\/10.104.0.1:443\"\ntime=\"2022-06-24T01:44:18Z\" level=info msg=\"Google project auto-detected: kdaida123\"\ntime=\"2022-06-24T01:44:23Z\" level=error msg=\"Get \\\"https:\/\/dns.googleapis.com\/dns\/v1\/projects\/kdaida123\/managedZones?alt=json&prettyPrint=false\\\": compute: Received 403 `Unable to generate access token; IAM returned 403 Forbidden: The caller does not have permission\\nThis error could be caused by a missing IAM policy binding on the target IAM service account.\\nFor more information, refer to the Workload Identity documentation:\\n\\thttps:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/workload-identity#authenticating_to\\n\\n`\"\n\n```","site":"gcp gke docs","answers_cleaned":"    title  GCP Google Kubernetes Engine GKE External DNS Install description  Implement GCP Google Kubernetes Engine GKE External DNS Install        Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  REGION  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123        Step 01  Introduction 1  Create GCP IAM Service Account  external dns gsa 2  Add IAM Roles to GCP IAM Service Account  add iam policy binding  3  Create Kubernetes Namespace  external dns ns 4  Create Kubernetes Service Account  external dns ksa 5  Associate GCP IAM Service Account with 
Kubernetes Service Account  gcloud iam service accounts add iam policy binding  6  Annotate Kubernetes Service Account with GCP IAM SA email Address  kubectl annotate serviceaccount  7  Install Helm CLI on your local desktop  if not installed  8  Install  External DNS using Helm 9  Verify External DNS Logs 10  Additional Reference  Install  ExternalDNS Controller using Helm  https   github com kubernetes sigs external dns      Step 03  Create GCP IAM Service Account    t   List IAM Service Accounts gcloud iam service accounts list    Create GCP IAM Service Account gcloud iam service accounts create GSA NAME   project GSA PROJECT GSA NAME  the name of the new IAM service account  GSA PROJECT  the project ID of the Google Cloud project for your IAM service account     Replace GSA NAME and GSA PROJECT gcloud iam service accounts create external dns gsa   project kdaida123    List IAM Service Accounts gcloud iam service accounts list         Step 04  Add IAM Roles to GCP IAM Service Account    t   Add IAM Roles to GCP IAM Service Account gcloud projects add iam policy binding PROJECT ID         member  serviceAccount GSA NAME GSA PROJECT iam gserviceaccount com          role  ROLE NAME  PROJECT ID  your Google Cloud project ID  GSA NAME  the name of your IAM service account  GSA PROJECT  the project ID of the Google Cloud project of your IAM service account  ROLE NAME  the IAM role to assign to your service account  like roles spanner viewer     Replace PROJECT ID  GSA NAME  GSA PROJECT  ROLE NAME gcloud projects add iam policy binding kdaida123         member  serviceAccount external dns gsa kdaida123 iam gserviceaccount com          role  roles dns admin           Step 05  Create Kubernetes Namespace and Kubernetes Service Account    t   Create Kubernetes Namespace kubectl create namespace  NAMESPACE  kubectl create namespace external dns ns    List Namespaces kubectl get ns    Create Service Account kubectl create serviceaccount  KSA NAME     namespace  NAMESPACE  
kubectl create serviceaccount external dns ksa    namespace external dns ns    List Service Accounts kubectl  n external dns ns get sa         Step 06  Associate GCP IAM Service Account with Kubernetes Service Account   Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts    This binding allows the Kubernetes service account to act as the IAM service account     t   Associate GCP IAM Service Account with Kubernetes Service Account gcloud iam service accounts add iam policy binding GSA NAME GSA PROJECT iam gserviceaccount com         role roles iam workloadIdentityUser         member  serviceAccount PROJECT ID svc id goog NAMESPACE KSA NAME      Replace GSA NAME  GSA PROJECT  PROJECT ID  NAMESPACE  KSA NAME gcloud iam service accounts add iam policy binding external dns gsa kdaida123 iam gserviceaccount com         role roles iam workloadIdentityUser         member  serviceAccount kdaida123 svc id goog external dns ns external dns ksa           Step 07  Annotate Kubernetes Service Account with GCP IAM SA email Address   Annotate the Kubernetes service account with the email address of the IAM service account     t   Annotate Kubernetes Service Account with GCP IAM SA email Address kubectl annotate serviceaccount KSA NAME         namespace NAMESPACE       iam gke io gcp service account GSA NAME GSA PROJECT iam gserviceaccount com    Replace KSA NAME  NAMESPACE  GSA NAME  GSA PROJECT kubectl annotate serviceaccount external dns ksa         namespace external dns ns       iam gke io gcp service account external dns gsa kdaida123 iam gserviceaccount com    Describe Kubernetes Service Account kubectl  n external dns ns describe sa external dns ksa          Step 08  Install Helm Client on Local Desktop    Install Helm  https   helm sh docs intro install      t   Install Helm brew install helm    Verify Helm version helm version         Step 09  Review external dns values yaml    
external dns values yaml  https   github com kubernetes sigs external dns blob master charts external dns values yaml     external dns Configuration  https   github com kubernetes sigs external dns tree master charts external dns configuration       Step 10  Review external dns Deployment Configs    yaml apiVersion  v1 kind  ServiceAccount metadata    name  external dns     apiVersion  rbac authorization k8s io v1 kind  ClusterRole metadata    name  external dns rules    apiGroups         resources    services   endpoints   pods     verbs    get   watch   list     apiGroups    extensions   networking k8s io     resources    ingresses      verbs    get   watch   list     apiGroups         resources    nodes     verbs    get    watch    list       apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  external dns viewer roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  external dns subjects    kind  ServiceAccount   name  external dns   namespace  default     apiVersion  apps v1 kind  Deployment metadata    name  external dns spec    strategy      type  Recreate   selector      matchLabels        app  external dns   template      metadata        labels          app  external dns     spec        serviceAccountName  external dns       containers          name  external dns         image  k8s gcr io external dns external dns v0 8 0         args              source service             source ingress             domain filter external dns test gcp zalan do   will make ExternalDNS see only the hosted zones matching provided domain  omit to process all available hosted zones             provider google              google project zalando external dns test   Use this to specify a project different from the one external dns is running inside             google zone visibility private   Use this to filter to only zones with this visibility  Set to either  public  or  private   Omitting will match public and private zones   
          policy upsert only   would prevent ExternalDNS from deleting any records  omit to enable full synchronization             registry txt             txt owner id my identifier         Step 11  Install external dns using Helm    t   Add external dns repo to Helm helm repo add external dns https   kubernetes sigs github io external dns     Install Helm Chart helm upgrade   install external dns external dns external dns         set provider google         set policy sync         set google zone visibility public         set txt owner id k8s         set serviceAccount create false         set serviceAccount name external dns ksa        n external dns ns        Optional Setting  Important Note  will make ExternalDNS see only the Cloud DNS zones matching provided domain  omit to process all available Cloud DNS zones    set domain filter kalyanreddydaida com           Step 12  Verify external dns deployment    t   List Helm  helm  list  n external dns ns    List Kubernetes Service Account kubectl  n external dns ns get sa    Describe Kubernetes Service Account kubectl  n external dns ns describe sa external dns ksa    List All resources from default Namespace kubectl  n external dns ns get all    List pods  external dns pod should be in running state  kubectl  n external dns ns get pods    Verify Deployment by checking logs kubectl  n external dns ns logs  f   kubectl  n external dns ns get po   egrep  o  external dns A Za z0 9       or  kubectl  n external dns ns get pods kubectl  n external dns ns logs  f  External DNS Pod Name          References   https   github com kubernetes sigs external dns tree master charts external dns   https   github com kubernetes sigs external dns blob master docs tutorials gke md     External DNS Logs from Reference     log W0624 07 14 15 829747   14199 gcp go 120  WARNING  the gcp auth plugin is deprecated in v1 22   unavailable in v1 25   use gcloud instead  To learn more  consult https   cloud google com blog products containers 
kubernetes kubectl auth changes in gke Error from server  BadRequest   container  external dns  in pod  external dns 6f49549d96 2jd5q  is waiting to start  ContainerCreating Kalyans Mac mini 48 GKE Ingress IAP kalyanreddy  kubectl  n external dns ns logs  f   kubectl  n external dns ns get po   egrep  o  external dns A Za z0 9      W0624 07 14 23 520269   14201 gcp go 120  WARNING  the gcp auth plugin is deprecated in v1 22   unavailable in v1 25   use gcloud instead  To learn more  consult https   cloud google com blog products containers kubernetes kubectl auth changes in gke W0624 07 14 24 512312   14203 gcp go 120  WARNING  the gcp auth plugin is deprecated in v1 22   unavailable in v1 25   use gcloud instead  To learn more  consult https   cloud google com blog products containers kubernetes kubectl auth changes in gke time  2022 06 24T01 44 18Z  level info msg  config   APIServerURL  KubeConfig  RequestTimeout 30s DefaultTargets    ContourLoadBalancerService heptio contour contour GlooNamespace gloo system SkipperRouteGroupVersion zalando org v1 Sources  service ingress  Namespace  AnnotationFilter  LabelFilter  FQDNTemplate  CombineFQDNAndAnnotation false IgnoreHostnameAnnotation false IgnoreIngressTLSSpec false IgnoreIngressRulesSpec false Compatibility  PublishInternal false PublishHostIP false AlwaysPublishNotReadyAddresses false ConnectorSourceServer localhost 8080 Provider google GoogleProject  GoogleBatchChangeSize 1000 GoogleBatchChangeInterval 1s GoogleZoneVisibility  DomainFilter    ExcludeDomains    RegexDomainFilter  RegexDomainExclusion  ZoneNameFilter    ZoneIDFilter    AlibabaCloudConfigFile  etc kubernetes alibaba cloud json AlibabaCloudZoneType  AWSZoneType  AWSZoneTagFilter    AWSAssumeRole  AWSBatchChangeSize 1000 AWSBatchChangeInterval 1s AWSEvaluateTargetHealth true AWSAPIRetries 3 AWSPreferCNAME false AWSZoneCacheDuration 0s AWSSDServiceCleanup false AzureConfigFile  etc kubernetes azure json AzureResourceGroup  AzureSubscriptionID  
AzureUserAssignedIdentityClientID  BluecatDNSConfiguration  BluecatConfigFile  etc kubernetes bluecat json BluecatDNSView  BluecatGatewayHost  BluecatRootZone  BluecatDNSServerName  BluecatDNSDeployType no deploy BluecatSkipTLSVerify false CloudflareProxied false CloudflareZonesPerPage 50 CoreDNSPrefix  skydns  RcodezeroTXTEncrypt false AkamaiServiceConsumerDomain  AkamaiClientToken  AkamaiClientSecret  AkamaiAccessToken  AkamaiEdgercPath  AkamaiEdgercSection  InfobloxGridHost  InfobloxWapiPort 443 InfobloxWapiUsername admin InfobloxWapiPassword  InfobloxWapiVersion 2 3 1 InfobloxSSLVerify true InfobloxView  InfobloxMaxResults 0 InfobloxFQDNRegEx  InfobloxCreatePTR false InfobloxCacheDuration 0 DynCustomerName  DynUsername  DynPassword  DynMinTTLSeconds 0 OCIConfigFile  etc kubernetes oci yaml InMemoryZones    OVHEndpoint ovh eu OVHApiRateLimit 20 PDNSServer http   localhost 8081 PDNSAPIKey  PDNSTLSEnabled false TLSCA  TLSClientCert  TLSClientCertKey  Policy sync Registry txt TXTOwnerID default TXTPrefix  TXTSuffix  Interval 1m0s MinEventSyncInterval 5s Once false DryRun false UpdateEvents false LogFormat text MetricsAddress  7979 LogLevel info TXTCacheInterval 0s TXTWildcardReplacement  ExoscaleEndpoint https   api exoscale ch dns ExoscaleAPIKey  ExoscaleAPISecret  CRDSourceAPIVersion externaldns k8s io v1alpha1 CRDSourceKind DNSEndpoint ServiceTypeFilter    CFAPIEndpoint  CFUsername  CFPassword  RFC2136Host  RFC2136Port 0 RFC2136Zone  RFC2136Insecure false RFC2136GSSTSIG false RFC2136KerberosRealm  RFC2136KerberosUsername  RFC2136KerberosPassword  RFC2136TSIGKeyName  RFC2136TSIGSecret  RFC2136TSIGSecretAlg  RFC2136TAXFR false RFC2136MinTTL 0s RFC2136BatchChangeSize 50 NS1Endpoint  NS1IgnoreSSL false NS1MinTTLSeconds 0 TransIPAccountName  TransIPPrivateKeyFile  DigitalOceanAPIPageSize 50 ManagedDNSRecordTypes  A CNAME  GoDaddyAPIKey  GoDaddySecretKey  GoDaddyTTL 0 GoDaddyOTE false OCPRouterName    time  2022 06 24T01 44 18Z  level info msg  Instantiating new 
Kubernetes client  time  2022 06 24T01 44 18Z  level info msg  Using inCluster config based on serviceaccount token  time  2022 06 24T01 44 18Z  level info msg  Created Kubernetes client https   10 104 0 1 443  time  2022 06 24T01 44 18Z  level info msg  Google project auto detected  kdaida123  time  2022 06 24T01 44 23Z  level error msg  Get   https   dns googleapis com dns v1 projects kdaida123 managedZones alt json prettyPrint false    compute  Received 403  Unable to generate access token  IAM returned 403 Forbidden  The caller does not have permission nThis error could be caused by a missing IAM policy binding on the target IAM service account  nFor more information  refer to the Workload Identity documentation  n thttps   cloud google com kubernetes engine docs how to workload identity authenticating to n n      "}
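The ExternalDNS walkthrough above installs and verifies the controller but never shows it publishing a record. As a minimal sketch (the Service name and hostname below are illustrative, not from the original tutorial), a LoadBalancer Service carrying the standard `external-dns.alpha.kubernetes.io/hostname` annotation is all the controller needs to create a matching record in a Cloud DNS zone:

```yaml
# Illustrative Service: ExternalDNS watches Services/Ingresses (--source=service, --source=ingress)
# and creates a DNS record for the annotated hostname pointing at the LoadBalancer IP.
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp1.kalyanreddydaida.com
spec:
  type: LoadBalancer
  selector:
    app: myapp1
  ports:
    - port: 80
      targetPort: 80
```

After `kubectl apply`, the controller logs from Step-12 should show the record being created in the zone matching the hostname, subject to the `--domain-filter` and `--policy` settings chosen at install time.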
{"questions":"gcp gke docs Perform Authorized Network Tests Create Cloud NAT Deploy Sample App and Test title GCP Google Kubernetes Engine GKE Private Cluster Implement GCP Google Kubernetes Engine GKE Private Cluster Create GKE Private Cluster Step 01 Introduction","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Private Cluster\ndescription: Implement GCP Google Kubernetes Engine GKE Private Cluster\n---\n\n## Step-01: Introduction\n- Create GKE Private Cluster\n- Create Cloud NAT\n- Deploy Sample App and Test\n- Perform Authorized Network Tests\n \n## Step-02: Create Standard GKE Cluster \n- Go to Kubernetes Engine -> Clusters -> CREATE\n- Select **GKE Standard -> CONFIGURE**\n- **Cluster Basics**\n  - **Name:** standard-cluster-private-1\n  - **Location type:** Regional\n  - **Zone:** us-central1-a, us-central1-b, us-central1-c\n  - **Release Channel**\n    - **Release Channel:** Rapid Channel\n    - **Version:** LATEST AVAIALABLE ON THAT DAY\n  - REST ALL LEAVE TO DEFAULTS\n- **NODE POOLS: default-pool**\n- **Node pool details**\n  - **Name:** default-pool\n  - **Number of Nodes (per Zone):** 1\n- **Nodes: Configure node settings** \n  - **Image type:** Containerized Optimized OS\n  - **Machine configuration**\n    - **GENERAL PURPOSE SERIES:** e2\n    - **Machine Type:** e2-small\n  - **Boot disk type:** standard persistent disk\n  - **Boot disk size(GB):** 20\n  - **Enable Nodes on Spot VMs:** CHECKED\n- **Node Networking:** REVIEW AND LEAVE TO DEFAULTS    \n- **Node Security:** \n  - **Access scopes:** Allow full access to all Cloud APIs\n  - REST ALL REVIEW AND LEAVE TO DEFAULTS\n- **Node Metadata:** REVIEW AND LEAVE TO DEFAULTS\n- **CLUSTER** \n  - **Automation:** REVIEW AND LEAVE TO DEFAULTS\n  - **Networking:** \n    - **Network Access:** Private Cluster\n    - **Access control plane using its external IP address:** BY DEFAULT CHECKED\n      - **Important Note:** Disabling this option locks down external access to the cluster control plane. 
There is still an external IP address used by Google for cluster management purposes, but the IP address is not accessible to anyone. This setting is permanent.\n    - **Enable Control Plane Global Access:** CHECKED\n    - **Control Plane IP Range:** 172.16.0.0\/28\n    - **CHECK THIS BOX: Enable Dataplane V2** CHECK IT - IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED\n  - **Security:** REVIEW AND LEAVE TO DEFAULTS\n    - **CHECK THIS BOX: Enable Workload Identity** IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED\n  - **Metadata:** REVIEW AND LEAVE TO DEFAULTS\n  - **Features:** REVIEW AND LEAVE TO DEFAULTS\n    - **Enable Compute Engine Persistent Disk CSI Driver:** SHOULD BE CHECKED BY DEFAULT - VERIFY\n    - **Enable Filestore CSI Driver:** CHECKED \n- CLICK ON **CREATE**\n\n## Step-03: Review kube-manifests: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify\/kubenginx:1.0.0\n          ports: \n            - containerPort: 80      \n          imagePullPolicy: Always            \n```\n\n## Step-04: Review kube-manifest: 02-kubernetes-loadbalancer-service.yaml\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # ClusterIP, # NodePort\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port      \n```\n\n## Step-05: Deploy Kubernetes Manifests\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\ngcloud container clusters get-credentials 
standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# Change Directory\ncd 20-GKE-Private-Cluster\n\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\/\n\n# Verify Pods \nkubectl get pods \nObservation: SHOULD FAIL - UNABLE TO DOWNLOAD DOCKER IMAGE FROM DOCKER HUB\n\n# Describe Pod\nkubectl describe pod <POD-NAME>\n\n# Clean-Up\nkubectl delete -f kube-manifests\/\n```\n\n## Step-06: Create Cloud NAT\n- Go to Network Services -> CREATE CLOUD NAT GATEWAY\n- **Gateway Name:** gke-us-central1-default-cloudnat-gw\n- **Select Cloud Router:** \n  - **Network:** default\n  - **Region:** us-central1\n  - **Cloud Router:** CREATE NEW ROUTER\n    - **Name:** gke-us-central1-cloud-router\n    - **Description:** GKE Cloud Router Region us-central1\n    - **Network:** default (POPULATED by default)\n    - **Region:** us-central1 (POPULATED by default)\n    - **BGP Peer keepalive interval:** 20 seconds (LEAVE TO DEFAULT)\n    - Click on **CREATE**\n- **Cloud NAT Mapping:** LEAVE TO DEFAULTS\n- **Destination (external):** LEAVE TO DEFAULTS\n- **Stackdriver logging:**  LEAVE TO DEFAULTS\n- **Port allocation:** \n  - CHECK **Enable Dynamic Port Allocation**\n- **Timeouts for protocol connections:** LEAVE TO DEFAULTS\n- CLICK on **CREATE**  \n\n## Step-07: Deploy Kubernetes Manifests\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# Verify Pods \nkubectl get pods \nObservation: SHOULD BE ABLE TO DOWNLOAD THE DOCKER IMAGE\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<External-IP>\n\n# Clean-Up\nkubectl delete -f kube-manifests\n```\n\n## Step-08: Authorized Network Test1: My Network\n- Goto -> standard-cluster-private-1 -> DETAILS -> NETWORKING\n- Control plane 
authorized networks\t-> EDIT\n- **Enable control plane authorized networks:** CHECKED\n- CLICK ON **ADD AUTHORIZED NETWORK**\n- **NAME:** MY-NETWORK-1\n- **NETWORK:** 10.10.10.0\/24 \n- Click on **DONE**\n- Click on **SAVE CHANGES**\n```t\n# List Kubernetes Nodes\nkubectl get nodes\nObservation:\n1. Access to GKE API Service from our local desktop kubectl cli is lost\n2. Access to GKE API Service is now allowed only from \"10.10.10.0\/24\" network\n3. In short, even though our GKE API Server has an Internet-enabled endpoint, its access is restricted to a specific network of IPs\n\n## Sample Output\nKalyan-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes\nUnable to connect to the server: dial tcp 34.70.169.161:443: i\/o timeout\nKalyan-Mac-mini:google-kubernetes-engine kalyan$ \n```\n\n## Step-09: Authorized Network Test2: My Desktop\n- Go to link [whatismyip](https:\/\/www.whatismyip.com\/) and get desktop public IP \n- Goto -> standard-cluster-private-1 -> DETAILS -> NETWORKING\n- Control plane authorized networks\t-> EDIT\n- **Enable control plane authorized networks:** CHECKED\n- CLICK ON **ADD AUTHORIZED NETWORK**\n- **NAME:** MY-DESKTOP-1\n- **NETWORK:** <YOUR-DESKTOP-PUBLIC-IP>\/32 \n- Click on **DONE**\n- Click on **SAVE CHANGES**\n```t\n# List Kubernetes Nodes\nkubectl get nodes\nObservation:\n1. 
Access to GKE API Service from our local desktop kubectl cli should succeed\n\n## Sample Output\nKalyans-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes\nNAME                                                  STATUS   ROLES    AGE   VERSION\ngke-standard-cluster-pri-default-pool-90b1f67b-4z71   Ready    <none>   55m   v1.24.3-gke.900\ngke-standard-cluster-pri-default-pool-90b1f67b-6xn6   Ready    <none>   55m   v1.24.3-gke.900\ngke-standard-cluster-pri-default-pool-90b1f67b-dggg   Ready    <none>   55m   v1.24.3-gke.900\nKalyans-Mac-mini:google-kubernetes-engine kalyan$ \n```\n\n## Step-10: Authorized Network Test3: Delete both network rules (Roll back to old state)\n- Goto -> standard-cluster-private-1 -> DETAILS -> NETWORKING\n- Control plane authorized networks\t-> EDIT\n- **Enable control plane authorized networks:** UN-CHECKED\n- AUTHORIZED NETWORKS -> DELETE -> MY-NETWORK-1, MY-DESKTOP-1\n- Click on **SAVE CHANGES**\n```t\n# List Kubernetes Nodes\nkubectl get nodes\nObservation:\n1. 
Access to GKE API Service from our local desktop kubectl cli should succeed\n\n## Sample Output\nKalyans-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes\nNAME                                                  STATUS   ROLES    AGE   VERSION\ngke-standard-cluster-pri-default-pool-90b1f67b-4z71   Ready    <none>   55m   v1.24.3-gke.900\ngke-standard-cluster-pri-default-pool-90b1f67b-6xn6   Ready    <none>   55m   v1.24.3-gke.900\ngke-standard-cluster-pri-default-pool-90b1f67b-dggg   Ready    <none>   55m   v1.24.3-gke.900\nKalyans-Mac-mini:google-kubernetes-engine kalyan$ \n```\n\n## Additional Reference\n- [GKE Private Cluster with Terraform](https:\/\/github.com\/GoogleCloudPlatform\/gke-private-cluster-demo)","site":"gcp gke docs"}
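Step-06 of the Cloud SQL walkthrough below hardcodes the base64 encoding of the DB password into the Secret manifest. The encoding is easy to reproduce locally; note the use of `printf '%s'` (or `echo -n`) so a trailing newline does not get encoded into the value:

```t
# Encode the DB password for the Secret manifest (password value from the tutorial)
printf '%s' 'KalyanReddy13' | base64
# -> S2FseWFuUmVkZHkxMw==

# Decode to verify the round trip
printf '%s' 'S2FseWFuUmVkZHkxMw==' | base64 --decode
# -> KalyanReddy13
```

`--decode` is spelled out here because the short flag differs between GNU (`-d`) and macOS (`-D`) coreutils.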
{"questions":"gcp gke docs Use GCP Cloud SQL MySQL DB for GKE Workloads Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl title GKE Storage with GCP Cloud SQL MySQL Public Instance 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GKE Storage with GCP Cloud SQL - MySQL Public Instance\ndescription: Use GCP Cloud SQL MySQL DB for GKE Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --zone <ZONE> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-1 --zone us-central1-c --project kdaida123\n```\n\n## Step-01: Introduction\n- GKE Private Cluster \n- GCP Cloud SQL with Public IP and Authorized Network for DB as entire internet (0.0.0.0\/0)\n\n## Step-02: Create Google Cloud SQL MySQL Instance\n- Go to SQL -> Choose MySQL\n- **Instance ID:** ums-db-public-instance\n- **Password:** KalyanReddy13\n- **Database Version:** MYSQL 8.0\n- **Choose a configuration to start with:** Development\n- **Choose region and zonal availability**\n  - **Region:** US-central1(IOWA)\n  - **Zonal availability:** Single Zone\n  - **Primary Zone:** us-central1-a\n- **Customize your instance**\n- **Machine Type**\n  - **Machine Type:** LightWeight (1 vCPU, 3.75GB)\n- **STORAGE**  \n  - **Storage Type:** HDD\n  - **Storage Capacity:** 10GB \n  - **Enable automatic storage increases:** CHECKED\n- **CONNECTIONS**  \n  - **Instance IP Assignment:** \n    - **Private IP:** UNCHECKED\n    - **Public IP:** CHECKED\n  - **Authorized networks**\n    - **Name:** All-Internet\n    - **Network:**  0.0.0.0\/0     \n    - Click on **DONE**\n- **DATA PROTECTION**\n  - **Automatic Backups:** UNCHECKED\n  - **Enable Deletion protection:** 
UNCHECKED\n- **Maintenance:** Leave to defaults\n- **Flags:** Leave to defaults\n- **Labels:** Leave to defaults\n- Click on **CREATE INSTANCE**      \n\n## Step-03: Perform Telnet Test from local desktop\n```t\n# Telnet Test\ntelnet <MYSQL-DB-PUBLIC-IP> 3306\n\n# Replace Public IP\ntelnet 35.184.228.151 3306\n\n## SAMPLE OUTPUT\nKalyans-Mac-mini:25-GKE-Storage-with-GCP-Cloud-SQL kalyanreddy$ telnet 35.184.228.151 3306\nTrying 35.184.228.151...\nConnected to 151.228.184.35.bc.googleusercontent.com.\nEscape character is '^]'.\nQ\n8.0.26-google?h'Sxcr+?nd'h<a(X`z=mysql_native_password2#08S01Got timeout reading communication packetsConnection closed by foreign host.\nKalyans-Mac-mini:25-GKE-Storage-with-GCP-Cloud-SQL kalyanreddy$\n```\n\n\n## Step-04: Create DB Schema webappdb \n- Go to SQL ->  ums-db-public-instance -> Databases -> **CREATE DATABASE**\n- **Database Name:** webappdb\n- **Character set:** utf8\n- **Collation:** Default collation\n- Click on **CREATE**\n\n\n## Step-05: 01-MySQL-externalName-Service.yaml\n- Update Cloud SQL MySQL DB `Public IP` in ExternalName Service\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: mysql-externalname-service\nspec:\n  type: ExternalName\n  externalName: 35.184.228.151\n```\n\n## Step-06: 02-Kubernetes-Secrets.yaml\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\ntype: Opaque\ndata: \n  db-password: S2FseWFuUmVkZHkxMw==\n\n# Base64 of KalyanReddy13\n# https:\/\/www.base64encode.org\/\n# Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw==\n```\n\n## Step-07: 03-UserMgmtWebApp-Deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 
'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify\/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql-externalname-service\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password   \n```\n\n## Step-08: 04-UserMgmtWebApp-LoadBalancer-Service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port\n```\n\n## Step-09: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n\n## Step-10: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n```\n\n## Step-11: Connect to MySQL DB (Cloud SQL) from GKE Cluster using kubectl\n```t\n## Verify from Kubernetes Cluster, we are able to connect to MySQL DB\n# 
Template\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ExternalName-Service> -u <USER_NAME> -p<PASSWORD>\n\n# MySQL Client 8.0: Replace External Name Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from user;\nmysql> exit\n```\n\n## Step-12: Create New user admin102 and verify in Cloud SQL MySQL webappdb\n```t\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User (Used for testing  `allowVolumeExpansion: true` Option)\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial Security Address: ssn102\n\n# MySQL Client 8.0: Replace External Name Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from user;\nmysql> exit\n```\n\n## Step-13: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests\/\n\n# Delete Cloud SQL MySQL Instance\n1. Go to SQL ->  ums-db-public-instance -> DELETE\n2. Instance ID: ums-db-public-instance\n3. 
Click on DELETE\n```","site":"gcp gke docs","answers_cleaned":"    title  GKE Storage with GCP Cloud SQL   MySQL Public Instance description  Use GCP Cloud SQL MySQL DB for GKE Workloads         Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    zone  ZONE    project  PROJECT     Replace Values CLUSTER NAME  ZONE  PROJECT gcloud container clusters get credentials standard cluster 1   zone us central1 c   project kdaida123         Step 01  Introduction   GKE Private Cluster    GCP Cloud SQL with Public IP and Authorized Network for DB as entire internet  0 0 0 0 0      Step 02  Create Google Cloud SQL MySQL Instance   Go to SQL    Choose MySQL     Instance ID    ums db public instance     Password    KalyanReddy13     Database Version    MYSQL 8 0     Choose a configuration to start with    Development     Choose region and zonal availability         Region    US central1 IOWA        Zonal availability    Single Zone       Primary Zone    us central1 a     Customize your instance       Machine Type         Machine Type    LightWeight  1 vCPU  3 75GB      STORAGE           Storage Type    HDD       Storage Capacity    10GB        Enable automatic storage increases    CHECKED     CONNECTIONS           Instance IP Assignment             Private IP    UNCHECKED         Public IP    CHECKED       Authorized networks           Name    All Internet         Network     0 0 0 0 0            Click on   DONE       DATA PROTECTION         Automatic Backups    UNCHECKED       Enable Deletion protection    UNCHECKED     Maintenance    Leave to defaults     Flags    Leave to defaults     Labels    Leave to defaults   Click on   CREATE INSTANCE             Step 03  Perform Telnet Test from local desktop    t   Telnet Test telnet  MYSQL DB PUBLIC IP  3306    Replace Public IP telnet 35 184 228 151 3306    
 SAMPLE OUTPUT Kalyans Mac mini 25 GKE Storage with GCP Cloud SQL kalyanreddy  telnet 35 184 228 151 3306 Trying 35 184 228 151    Connected to 151 228 184 35 bc googleusercontent com  Escape character is       Q 8 0 26 google h Sxcr  nd h a X z mysql native password2 08S01Got timeout reading communication packetsConnection closed by foreign host  Kalyans Mac mini 25 GKE Storage with GCP Cloud SQL kalyanreddy           Step 04  Create DB Schema webappdb    Go to SQL     ums db public instance    Databases      CREATE DATABASE       Database Name    webappdb     Character set    utf8     Collation    Default collation   Click on   CREATE        Step 05  01 MySQL externalName Service yaml   Update Cloud SQL MySQL DB  Public IP  in ExternalName Service    yaml apiVersion  v1 kind  Service metadata    name  mysql externalname service spec    type  ExternalName   externalName  35 184 228 151         Step 06  02 Kubernetes Secrets yaml    yaml apiVersion  v1 kind  Secret metadata    name  mysql db password type  Opaque data     db password  S2FseWFuUmVkZHkxMw      Base64 of KalyanReddy13   https   www base64encode org    Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw           Step 07  03 UserMgmtWebApp Deployment yaml    yaml apiVersion  apps v1 kind  Deployment  metadata    name  usermgmt webapp   labels      app  usermgmt webapp spec    replicas  1   selector      matchLabels        app  usermgmt webapp   template        metadata        labels           app  usermgmt webapp     spec        initContainers            name  init db           image  busybox 1 31           command    sh     c    echo  e  Checking for the availability of MySQL Server deployment   while   nc  z mysql externalname service 3306  do sleep 1  printf      done  echo  e       MySQL DB Server has started                 containers            name  usermgmt webapp           image  stacksimplify kube usermgmt webapp 1 0 0 MySQLDB           imagePullPolicy  Always           ports                 
containerPort  8080                      env                name  DB HOSTNAME               value   mysql externalname service                            name  DB PORT               value   3306                            name  DB NAME               value   webappdb                            name  DB USERNAME               value   root                            name  DB PASSWORD               valueFrom                  secretKeyRef                    name  mysql db password                   key  db password            Step 08  04 UserMgmtWebApp LoadBalancer Service yaml    yaml apiVersion  v1 kind  Service metadata    name  usermgmt webapp lb service   labels       app  usermgmt webapp spec     type  LoadBalancer   selector       app  usermgmt webapp   ports         port  80   Service Port       targetPort  8080   Container Port         Step 09  Deploy kube manifests    t   Deploy Kubernetes Manifests kubectl apply  f kube manifests     List Deployments kubectl get deploy    List Pods kubectl get pods    List Services kubectl get svc    Verify Pod Logs kubectl get pods kubectl logs  f  USERMGMT POD NAME  kubectl logs  f usermgmt webapp 6ff7d7d849 7lrg5          Step 10  Access Application    t   List Services kubectl get svc    Access Application http    ExternalIP from get service output  Username  admin101 Password  password101         Step 11  Connect to MySQL DB  Cloud SQL  from GKE Cluster using kubectl    t    Verify from Kubernetes Cluster  we are able to connect to MySQL DB   Template kubectl run  it   rm   image mysql 8 0   restart Never mysql client    mysql  h  Kubernetes ExternalName Service   u  USER NAME   p PASSWORD     MySQL Client 8 0  Replace External Name Service  Username and Password kubectl run  it   rm   image mysql 8 0   restart Never mysql client    mysql  h mysql externalname service  u root  pKalyanReddy13  mysql  show schemas  mysql  use webappdb  mysql  show tables  mysql  select   from user  mysql  exit         Step 12  Create New 
user admin102 and verify in Cloud SQL MySQL webappdb    t   Access Application http    ExternalIP from get service output  Username  admin101 Password  password101    Create New User  Used for testing   allowVolumeExpansion  true  Option  Username  admin102 Password  password102 First Name  fname102 Last Name  lname102 Email Address  admin102 stacksimplify com Social Security Address  ssn102    MySQL Client 8 0  Replace External Name Service  Username and Password kubectl run  it   rm   image mysql 8 0   restart Never mysql client    mysql  h mysql externalname service  u root  pKalyanReddy13  mysql  show schemas  mysql  use webappdb  mysql  show tables  mysql  select   from user  mysql  exit         Step 13  Clean Up    t   Delete Kubernetes Objects kubectl delete  f kube manifests     Delete Cloud SQL MySQL Instance 1  Go to SQL     ums db public instance    DELETE 2  Instance ID  ums db public instance 3  Click on DELETE    "}
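A side note on Step-06 above: the Secret's `db-password` value is nothing more than the Base64 encoding of the plain-text password. A minimal sketch of producing and checking that value locally instead of using base64encode.org (the helper names here are illustrative, not from the tutorial):

```python
import base64

# Illustrative helpers (not from the tutorial): encode/decode the value
# stored under data.db-password in 02-Kubernetes-Secrets.yaml.
def to_secret_value(plain: str) -> str:
    # Kubernetes Secret 'data' values must be Base64-encoded strings.
    return base64.b64encode(plain.encode("utf-8")).decode("ascii")

def from_secret_value(encoded: str) -> str:
    return base64.b64decode(encoded).decode("utf-8")

print(to_secret_value("KalyanReddy13"))            # S2FseWFuUmVkZHkxMw==
print(from_secret_value("S2FseWFuUmVkZHkxMw=="))   # KalyanReddy13
```

On the command line, `echo -n 'KalyanReddy13' | base64` gives the same result (the `-n` matters: a trailing newline changes the encoding).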
{"questions":"gcp gke docs Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal title GCP Google Kubernetes Engine GKE Ingress Context Path Routing Configure kubeconfig for kubectl Implement GCP Google Kubernetes Engine GKE Ingress Context Path Routing 1 Verify if GKE Cluster is created gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress Context Path Routing\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress Context Path Routing\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n- Ingress Context Path based Routing\n- Discuss about the Architecture we are going to build as part of this Section\n- We are going to deploy all these 3 apps in kubernetes with context path based routing enabled in Ingress Controller\n  - \/app1\/* - should go to app1-nginx-nodeport-service\n  - \/app2\/* - should go to app2-nginx-nodeport-service\n  - \/*    - should go to  app3-nginx-nodeport-service\n\n\n## Step-02: Review Nginx App1, App2 & App3 Deployment & Service\n- Differences for all 3 apps will be only one field from kubernetes manifests perspective and additionally their naming conventions\n  - **Kubernetes Deployment:** Container Image name\n- **App1 Nginx: 01-Nginx-App1-Deployment-and-NodePortService.yaml**\n  - **image:** stacksimplify\/kube-nginxapp1:1.0.0\n- **App2 Nginx: 02-Nginx-App2-Deployment-and-NodePortService.yaml**\n  - **image:** stacksimplify\/kube-nginxapp2:1.0.0\n- **App3 Nginx: 
03-Nginx-App3-Deployment-and-NodePortService.yaml**\n  - **image:** stacksimplify\/kubenginx:1.0.0\n\n\n## Step-03: 04-Ingress-ContextPath-Based-Routing.yaml\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-cpr\n  annotations:\n    # External Load Balancer  \n    kubernetes.io\/ingress.class: \"gce\"  \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: \/app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: \/app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80\n#          - path: \/\n#            pathType: Prefix\n#            backend:\n#              service:\n#                name: app3-nginx-nodeport-service\n#                port: \n#                  number: 80                           \n```\n\n## Step-04: Deploy kube-manifests and test\n```t\n# Deploy Kubernetes manifests\nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Load Balancers\nkubectl get ingress\n\n# Describe Ingress and view Rules\nkubectl describe ingress ingress-cpr\n```\n\n## Step-05: Access Application\n```t\n# Important Note\nWait for 2 to 3 minutes for the Load Balancer to completely create and ready for use else we will get HTTP 502 errors\n\n# Access Application\nhttp:\/\/<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>\/app1\/index.html\nhttp:\/\/<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>\/app2\/index.html\nhttp:\/\/<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>\/\n```\n\n\n## Step-06: Verify Load Balancer\n- Go to Load Balancing -> Click on Load balancer\n### 
Load Balancer View \n- DETAILS Tab\n  - Frontend\n  - Host and Path Rules\n  - Backend Services\n  - Health Checks\n- MONITORING TAB\n- CACHING TAB \n### Load Balancer Components View\n- FORWARDING RULES\n- TARGET PROXIES\n- BACKEND SERVICES\n- BACKEND BUCKETS\n- CERTIFICATES\n- TARGET POOLS\n\n\n## Step-07: Clean Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n```","site":"gcp gke docs","answers_cleaned":"    title  GCP Google Kubernetes Engine GKE Ingress Context Path Routing description  Implement GCP Google Kubernetes Engine GKE Ingress Context Path Routing        Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  ZONE  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123         Step 01  Introduction   Ingress Context Path based Routing   Discuss about the Architecture we are going to build as part of this Section   We are going to deploy all these 3 apps in kubernetes with context path based routing enabled in Ingress Controller      app1     should go to app1 nginx nodeport service      app2     should go to app2 nginx nodeport service             should go to  app3 nginx nodeport service      Step 02  Review Nginx App1  App2   App3 Deployment   Service   Differences for all 3 apps will be only one field from kubernetes manifests perspective and additionally their naming conventions       Kubernetes Deployment    Container Image name     App1 Nginx  01 Nginx App1 Deployment and NodePortService yaml         image    stacksimplify kube nginxapp1 1 0 0     App2 Nginx  02 Nginx App2 Deployment and NodePortService yaml         image    stacksimplify kube nginxapp2 1 0 0     App3 Nginx  03 Nginx App3 Deployment and 
NodePortService yaml         image    stacksimplify kubenginx 1 0 0      Step 03  04 Ingress ContextPath Based Routing yaml    yaml apiVersion  networking k8s io v1 kind  Ingress metadata    name  ingress cpr   annotations        External Load Balancer       kubernetes io ingress class   gce    spec     defaultBackend      service        name  app3 nginx nodeport service       port          number  80                               rules        http          paths                         path   app1             pathType  Prefix             backend                service                  name  app1 nginx nodeport service                 port                     number  80             path   app2             pathType  Prefix             backend                service                  name  app2 nginx nodeport service                 port                     number  80              path                 pathType  Prefix              backend                 service                   name  app3 nginx nodeport service                  port                      number  80                                    Step 04  Deploy kube manifests and test    t   Deploy Kubernetes manifests kubectl apply  f kube manifests    List Pods kubectl get pods    List Services kubectl get svc    List Ingress Load Balancers kubectl get ingress    Describe Ingress and view Rules kubectl describe ingress ingress cpr         Step 05  Access Application    t   Important Note Wait for 2 to 3 minutes for the Load Balancer to completely create and ready for use else we will get HTTP 502 errors    Access Application http    ADDRESS FIELD FROM GET INGRESS OUTPUT  app1 index html http    ADDRESS FIELD FROM GET INGRESS OUTPUT  app2 index html http    ADDRESS FIELD FROM GET INGRESS OUTPUT            Step 06  Verify Load Balancer   Go to Load Balancing    Click on Load balancer     Load Balancer View    DETAILS Tab     Frontend     Host and Path Rules     Backend Services     Health Checks   MONITORING TAB  
 CACHING TAB      Load Balancer Components View   FORWARDING RULES   TARGET PROXIES   BACKEND SERVICES   BACKEND BUCKETS   CERTIFICATES   TARGET POOLS      Step 07  Clean Up    t   Delete Kubernetes Resources kubectl delete  f kube manifests    "}
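The rule set in Step-03 above can be sketched as a tiny routing function. This is an illustration of Kubernetes `Prefix` pathType semantics only, not how the GCE load balancer is actually implemented:

```python
# Illustrative model of the ingress-cpr rules: /app1/* and /app2/* go to
# their NodePort Services, everything else falls through to defaultBackend.
RULES = [
    ("/app1", "app1-nginx-nodeport-service"),
    ("/app2", "app2-nginx-nodeport-service"),
]
DEFAULT_BACKEND = "app3-nginx-nodeport-service"

def route(path: str) -> str:
    # Prefix pathType matches whole path elements: /app1 matches /app1
    # and /app1/index.html, but not /app10.
    for prefix, service in RULES:
        if path == prefix or path.startswith(prefix + "/"):
            return service
    return DEFAULT_BACKEND

print(route("/app1/index.html"))  # app1-nginx-nodeport-service
print(route("/"))                 # app3-nginx-nodeport-service
```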
{"questions":"gcp gke docs title GKE Persistent Disks Use Regional PD Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl Use Google Disks Regional PD for Kubernetes Workloads 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GKE Persistent Disks - Use Regional PD\ndescription: Use Google Disks Regional PD for Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster.\n\n\n## Step-01: Introduction\n- Use Regional Persistent Disks\n\n## Step-02: List Kubernetes Storage Classes in GKE Cluster\n```t\n# List Storage Classes\nkubectl get sc\n```\n\n## Step-03: 00-storage-class.yaml\n```yaml\napiVersion: storage.k8s.io\/v1\nkind: StorageClass\nmetadata:\n  name: regionalpd-storageclass\nprovisioner: pd.csi.storage.gke.io\nparameters:\n  #type: pd-standard # Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. 
If you need a smaller persistent disk, use pd-ssd instead of pd-standard.\n  type: pd-ssd \n  replication-type: regional-pd\nvolumeBindingMode: WaitForFirstConsumer\nallowedTopologies:\n- matchLabelExpressions:\n  - key: topology.gke.io\/zone\n    values:\n    - us-central1-c\n    - us-central1-b\n\n## Important Note - Regional PD \n# If using a regional cluster, you can leave allowedTopologies unspecified. If you do this, when you create a Pod that consumes a PersistentVolumeClaim which uses this StorageClass a regional persistent disk is provisioned with two zones. One zone is the same as the zone that the Pod is scheduled in. The other zone is randomly picked from the zones available to the cluster.\n# When using a zonal cluster, allowedTopologies must be set.    \n\n# STORAGE CLASS \n# 1. A StorageClass provides a way for administrators \n# to describe the \"classes\" of storage they offer.\n# 2. Here we are offering GCP PD Storage for GKE Cluster\n```\n\n## Step-04: 01-persistent-volume-claim.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: regionalpd-storageclass\n  resources: \n    requests:\n      storage: 4Gi\n```\n\n## Step-05: Other Kubernetes YAML Manifests\n- No changes to other Kubernetes YAML Manifests\n- They are same as previous section\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n\n## Step-06: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f 
<USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-07: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB`\u00a0Persistent Disk\n- **Observation:** Review the below items\n  - **Zones:** us-central1-b, us-central1-c\n  - **Type:** Regional SSD persistent disk\n  - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk\n\n\n\n## Step-08: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n```\n\n## Step-09: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests\/\n\n# Verify if PD is deleted\nGo to Compute Engine -> Disks -> Search for 4GB Regional SSD persistent disk.\nIt should be deleted. \n```\n\n\n\n## References \n- [Regional PD](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/persistent-volumes\/regional-pd)","site":"gcp gke docs","answers_cleaned":"    title  GKE Persistent Disks   Use Regional PD description  Use Google Disks Regional PD for Kubernetes Workloads         Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  ZONE  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123     3  Feature  Compute Engine persistent disk CSI Driver     Verify the Feature   Compute Engine persistent disk CSI Driver   enabled in GKE Cluster       This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster       Step 01  Introduction   Use Regional Persistent Disks     Step 02  List Kubernetes Storage Classes in GKE Cluster    t   List Storage Classes kubectl get sc         Step 03 
 00 storage class yaml    yaml apiVersion  storage k8s io v1 kind  StorageClass metadata    name  regionalpd storageclass provisioner  pd csi storage gke io parameters     type  pd standard   Note  To use regional persistent disks of type pd standard  set the PersistentVolumeClaim storage attribute to 200Gi or higher  If you need a smaller persistent disk  use pd ssd instead of pd standard    type  pd ssd    replication type  regional pd volumeBindingMode  WaitForFirstConsumer allowedTopologies    matchLabelExpressions      key  topology gke io zone     values        us central1 c       us central1 b     Important Note   Regional PD    If using a regional cluster  you can leave allowedTopologies unspecified  If you do this  when you create a Pod that consumes a PersistentVolumeClaim which uses this StorageClass a regional persistent disk is provisioned with two zones  One zone is the same as the zone that the Pod is scheduled in  The other zone is randomly picked from the zones available to the cluster    When using a zonal cluster  allowedTopologies must be set         STORAGE CLASS    1  A StorageClass provides a way for administrators    to describe the  classes  of storage they offer    2  Here we are offering GCP PD Storage for GKE Cluster         Step 04  01 persistent volume claim yaml    yaml apiVersion  v1 kind  PersistentVolumeClaim metadata    name  mysql pv claim spec     accessModes        ReadWriteOnce   storageClassName  regionalpd storageclass   resources       requests        storage  4Gi         Step 05  Other Kubernetes YAML Manifests   No changes to other Kubernetes YAML Manifests   They are same as previous section   02 UserManagement ConfigMap yaml   03 mysql deployment yaml   04 mysql clusterip service yaml   05 UserMgmtWebApp Deployment yaml   06 UserMgmtWebApp LoadBalancer Service yaml      Step 06  Deploy kube manifests    t   Deploy Kubernetes Manifests kubectl apply  f kube manifests     List Storage Class kubectl get sc    List PVC 
kubectl get pvc    List PV kubectl get pv    List ConfigMaps kubectl get configmap    List Deployments kubectl get deploy    List Pods kubectl get pods    List Services kubectl get svc    Verify Pod Logs kubectl get pods kubectl logs  f  USERMGMT POD NAME  kubectl logs  f usermgmt webapp 6ff7d7d849 7lrg5         Step 07  Verify Persistent Disks   Go to Compute Engine    Storage    Disks   Search for  4GB  Persistent Disk     Observation    Review the below items       Zones    us central1 b  us central1 c       Type    Regional SSD persistent disk       In use by    gke standard cluster 1 default pool db7b638f j5lk       Step 08  Access Application    t   List Services kubectl get svc    Access Application http    ExternalIP from get service output  Username  admin101 Password  password101         Step 09  Clean Up    t   Delete Kubernetes Objects kubectl delete  f kube manifests     Verify if PD is deleted Go to Compute Engine    Disks    Search for 4GB Regional SSD persistent disk  It should be deleted             References     Regional PD  https   cloud google com kubernetes engine docs how to persistent volumes regional pd"}
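The sizing caveat quoted in Step-03 above (regional `pd-standard` disks require the claim to request 200Gi or more; smaller regional volumes should use `pd-ssd`) can be expressed as a small pre-flight check. The helper below is hypothetical, not part of any GCP SDK:

```python
# Hypothetical pre-flight check for a regional-pd StorageClass:
# mirrors the rule that regional pd-standard needs >= 200Gi,
# while pd-ssd (as used by regionalpd-storageclass) has no such floor.
def regional_pvc_ok(disk_type: str, requested_gi: int) -> bool:
    if disk_type == "pd-standard":
        return requested_gi >= 200
    return disk_type == "pd-ssd"

# The tutorial's PVC asks for 4Gi, hence type: pd-ssd in the StorageClass.
print(regional_pvc_ok("pd-ssd", 4))       # True
print(regional_pvc_ok("pd-standard", 4))  # False
```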
{"questions":"gcp gke docs Use existing storageclass standard rwo in Kubernetes Workloads Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal title GKE Persistent Disks Existing StorageClass standard rwo Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GKE Persistent Disks Existing StorageClass standard-rwo\ndescription: Use existing storageclass standard-rwo in Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster. \n\n\n## Step-01: Introduction\n- Understand Kubernetes Objects\n01. Kubernetes PersistentVolumeClaim\n02. Kubernetes ConfigMap\n03. Kubernetes Deployment\n04. Kubernetes Volumes\n05. Kubernetes Volume Mounts\n06. Kubernetes Environment Variables\n07. Kubernetes ClusterIP Service\n08. Kubernetes Init Containers\n09. Kubernetes Service of Type LoadBalancer\n10. 
Kubernetes StorageClass \n\n- Use predefined Storage Class `standard-rwo`\n- `standard-rwo` uses balanced persistent disk\n\n## Step-02: List Kubernetes Storage Classes in GKE Cluster\n```t\n# List Storage Classes\nkubectl get sc\n```\n\n## Step-03: 01-persistent-volume-claim.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: mysql-pv-claim\nspec: \n  accessModes:\n    - ReadWriteOnce\n  storageClassName: standard-rwo\n  resources: \n    requests:\n      storage: 4Gi\n\n# NEED FOR PVC\n# 1. Dynamic volume provisioning allows storage volumes to be created \n# on-demand. \n\n# 2. Without dynamic provisioning, cluster administrators have to manually \n# make calls to their cloud or storage provider to create new storage \n# volumes, and then create PersistentVolume objects to represent them in k8s\n\n# 3. The dynamic provisioning feature eliminates the need for cluster \n# administrators to pre-provision storage. Instead, it automatically \n# provisions storage when it is requested by users.\n\n# 4. 
PVC: Users request dynamically provisioned storage by including \n# a storage class in their PersistentVolumeClaim\n```\n\n## Step-04: 02-UserManagement-ConfigMap.yaml\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: usermanagement-dbcreation-script\ndata: \n  mysql_usermgmt.sql: |-\n    DROP DATABASE IF EXISTS webappdb;\n    CREATE DATABASE webappdb; \n```\n\n## Step-05: 03-mysql-deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate # terminates all the pods and replaces them with the new version.\n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:8.0\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: \/var\/lib\/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: \/docker-entrypoint-initdb.d #https:\/\/hub.docker.com\/_\/mysql Refer Initializing a fresh instance                                            \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            claimName: mysql-pv-claim\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: usermanagement-dbcreation-script\n\n\n# VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES: \n## 1. On-disk files in a container are ephemeral\n## 2. One problem is the loss of files when a container crashes. \n## 3. Kubernetes Volumes solves above two as these volumes are configured to POD and not container. \n## Only they can be mounted in Container\n## 4. 
Using Compute Engine Persistent Disk CSI Driver is a super generalized approach \n## for having Persistent Volumes for workloads in Kubernetes\n\n\n## ENVIRONMENT VARIABLES\n# 1. When you create a Pod, you can set environment variables for the \n# containers that run in the Pod. \n# 2. To set environment variables, include the env or envFrom field in \n# the configuration file.\n\n\n## DEPLOYMENT STRATEGIES\n# 1. Rolling deployment: This strategy replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.\n# 2. Recreate: This strategy terminates all the pods and replaces them with the new version.\n# 3. Ramped slow rollout: This strategy rolls out replicas of the new version, while in parallel, shutting down old replicas. \n# 4. Best-effort controlled rollout: This strategy specifies a \u201cmax unavailable\u201d parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.\n# 5. Canary Deployment: This strategy uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. 
The test deployment is rolled out to more users if it is successful.\n```\n\n## Step-06: 04-mysql-clusterip-service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata: \n  name: mysql\nspec:\n  selector:\n    app: mysql \n  ports: \n    - port: 3306  \n  clusterIP: None # This means we are going to use Pod IP    \n```\n## Step-07: 05-UserMgmtWebApp-Deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify\/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              value: \"dbpassword11\"            \n```\n## Step-08: 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port\n```\n## Step-09: Deploy kube-manifests\n```t\n# Deploy Kubernetes 
Manifests\nkubectl apply -f kube-manifests\/\n\n# List Storage Classes\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n\n# Sample Message for Successful Start of JVM\n2022-06-20 09:34:32.519  INFO 1 --- [ost-startStop-1] .r.SpringbootSecurityInternalApplication : Started SpringbootSecurityInternalApplication in 14.891 seconds (JVM running for 23.283)\n20-Jun-2022 09:34:32.593 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive \/usr\/local\/tomcat\/webapps\/ROOT.war has finished in 21,016 ms\n20-Jun-2022 09:34:32.623 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler [\"http-apr-8080\"]\n20-Jun-2022 09:34:32.688 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler [\"ajp-apr-8009\"]\n20-Jun-2022 09:34:32.713 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 21275 ms\n```\n\n## Step-10: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB` Persistent Disk\n\n## Step-11: Verify Kubernetes Workloads, Services, ConfigMaps on Kubernetes Engine Dashboard\n```t\n# Verify Workloads\nGo to Kubernetes Engine -> Workloads\nObservation:\n1. You should see \"mysql\" and \"usermgmt-webapp\" deployments\n\n# Verify Services\nGo to Kubernetes Engine -> Services & Ingress\nObservation:\n1. You should see \"mysql ClusterIP Service\" and \"usermgmt-webapp-lb-service\"\n\n# Verify ConfigMaps\nGo to Kubernetes Engine -> Secrets & ConfigMaps\nObservation: \n1. 
You should find the ConfigMap \"usermanagement-dbcreation-script\"\n\n# Verify Persistent Volume Claim\nGo to Kubernetes Engine -> Storage -> PERSISTENT VOLUME CLAIMS TAB\nObservation: \n1. You should see PVC \"mysql-pv-claim\"\n\n# Verify StorageClass\nGo to Kubernetes Engine -> Storage -> STORAGE CLASSES TAB\nObservation: \n1. You should see 3 Storage Classes, out of which \"standard-rwo\" and \"premium-rwo\" are part of Compute Engine Persistent Disks (latest and greatest - Recommended for use)\n2. It is not recommended to use the Storage Class named \"standard\" (older version)\n```\n## Step-12: Connect to MySQL Database\n```t\n# Template: Connect to MySQL Database using kubectl\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ClusterIP-Service> -u <USER_NAME> -p<PASSWORD>\n\n# MySQL Client 8.0: Replace ClusterIP Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u root -pdbpassword11\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from user;\nmysql> exit\n```\n\n\n## Step-13: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial Security Number: ssn102\n\n# Verify this user in MySQL DB\n# Template: Connect to MySQL Database using kubectl\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ClusterIP-Service> -u <USER_NAME> -p<PASSWORD>\n\n# MySQL Client 8.0: Replace ClusterIP Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u root -pdbpassword11\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from 
user;\nObservation:\n1. You should find the user newly created in the browser successfully stored in MySQL DB.\n2. In simple terms, we have done the following:\na. Created MySQL k8s Deployment in GKE Cluster\nb. Created Java Web Application k8s Deployment in GKE Cluster\nc. Accessed the Application in a browser using the GKE Load Balancer IP\nd. Created a new user in this application, and that user was successfully stored in MySQL DB\ne. Verified the end-to-end flow from Browser to DB using the GKE Cluster\n```\n\n## Step-14: Verify GCE PD CSI Driver Logging\n- https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/persistent-volumes\/gce-pd-csi-driver\n```t\n# Cloud Logging Query \nresource.type=\"k8s_container\"\nresource.labels.project_id=\"PROJECT_ID\"\nresource.labels.cluster_name=\"CLUSTER_NAME\"\nresource.labels.namespace_name=\"kube-system\"\nresource.labels.container_name=\"gce-pd-driver\"\n\n# Cloud Logging Query (Replace Values)\nresource.type=\"k8s_container\"\nresource.labels.project_id=\"kdaida123\"\nresource.labels.cluster_name=\"standard-cluster-private-1\"\nresource.labels.namespace_name=\"kube-system\"\nresource.labels.container_name=\"gce-pd-driver\"\n```\n\n## Step-15: Clean-Up\n```t\n# Delete kube-manifests\nkubectl delete -f kube-manifests\/\n```\n\n## Reference\n- [Using the Compute Engine persistent disk CSI Driver](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/persistent-volumes\/gce-pd-csi-driver)\n\n\n## Additional-Data-01\n1. It enables the automatic deployment and management of the persistent disk driver without having to manually set it up.\n2. You can use customer-managed encryption keys (CMEKs). These keys are used to encrypt the data encryption keys that encrypt your data. \n3. You can use volume snapshots with the Compute Engine persistent disk CSI Driver. Volume snapshots let you create a copy of your volume at a specific point in time. 
You can use this copy to bring a volume back to a prior state or to provision a new volume.\n4. Bug fixes and feature updates are rolled out independently from minor Kubernetes releases. This release schedule typically results in a faster release cadence.\n\n## Additional-Data-02\n- For Standard Clusters: The Compute Engine persistent disk CSI Driver is enabled by default on newly created clusters:\n  - Linux clusters: GKE version 1.18.10-gke.2100 or later, or 1.19.3-gke.2100 or later.\n  - Windows clusters: GKE version 1.22.6-gke.300 or later, or 1.23.2-gke.300 or later.\n- For Autopilot clusters: The Compute Engine persistent disk CSI Driver is enabled by default and cannot be disabled or edited.\n\n## Additional-Data-03\n- GKE automatically installs the following StorageClasses:\n  - standard-rwo: using balanced persistent disk\n  - premium-rwo: using SSD persistent disk\n- For Autopilot clusters: The default StorageClass is standard-rwo, which uses the Compute Engine persistent disk CSI Driver. 
\n- For Standard clusters: The default StorageClass uses the Kubernetes in-tree gcePersistentDisk volume plugin.\n```t\n# You can find the name of your installed StorageClasses by running the following command:\nkubectl get sc\nor\nkubectl get storageclass\n```","site":"gcp gke docs"}
{"questions":"gcp gke docs 1 Verify if GKE Cluster is created Implement GCP Google Kubernetes Engine GKE Workload Identity Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl title GCP Google Kubernetes Engine GKE Workload Identity gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Workload Identity\ndescription: Implement GCP Google Kubernetes Engine GKE Workload Identity\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n## Step-01: Introduction\n1. Create GCP IAM Service Account\n2. Add IAM Roles to GCP IAM Service Account (add-iam-policy-binding)\n3. Create Kubernetes Namespace\n4. Create Kubernetes Service Account\n5. Associate GCP IAM Service Account with Kubernetes Service Account (gcloud iam service-accounts add-iam-policy-binding)\n6. Annotate Kubernetes Service Account with GCP IAM SA email Address (kubectl annotate serviceaccount)\n7. Create a Sample App with and without Kubernetes Service Account\n8. Test Workload Identity in GKE Cluster\n\n## Step-02: Verify if Workload Identity Setting is enabled for GKE Cluster\n- Go to Kubernetes Engine -> Clusters -> standard-cluster-private-1 -> DETAILS Tab\n- In Security -> Workload Identity -> SHOULD BE IN ENABLED STATE\n\n## Step-03: Create GCP IAM Service Account\n```t\n# List IAM Service Accounts\ngcloud iam service-accounts list\n\n# List Google Cloud Projects\ngcloud projects list\nObservation: \n1. Get the PROJECT_ID for your current project\n2. 
Replace GSA_PROJECT_ID with PROJECT_ID for your current project\n\n# Create GCP IAM Service Account\ngcloud iam service-accounts create GSA_NAME --project=GSA_PROJECT_ID\nGSA_NAME: the name of the new IAM service account.\nGSA_PROJECT_ID: the project ID of the Google Cloud project for your IAM service account.\nGSA_PROJECT_ID==PROJECT_ID\n\n# Replace GSA_NAME and GSA_PROJECT_ID\ngcloud iam service-accounts create wid-gcpiam-sa --project=kdaida123\n\n# List IAM Service Accounts\ngcloud iam service-accounts list\n```\n\n## Step-04: Add IAM Roles to GCP IAM Service Account\n- We are giving `\"roles\/compute.viewer\"` permissions to the IAM Service Account. \n- From the Kubernetes Pod, we are going to list the compute instances.\n- With the help of the `Google IAM Service Account` and the `Kubernetes Service Account`, the Kubernetes Pod in the GKE cluster should be able to list the Google Compute Engine instances. \n```t\n# Add IAM Roles to GCP IAM Service Account\ngcloud projects add-iam-policy-binding PROJECT_ID \\\n    --member \"serviceAccount:GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com\" \\\n    --role \"ROLE_NAME\"\nPROJECT_ID: your Google Cloud project ID.\nGSA_NAME: the name of your IAM service account.\nGSA_PROJECT_ID: the project ID of the Google Cloud project of your IAM service account.\nGSA_PROJECT_ID==PROJECT_ID\nROLE_NAME: the IAM role to assign to your service account, like roles\/spanner.viewer.\n\n# Replace PROJECT_ID, GSA_NAME, GSA_PROJECT_ID, ROLE_NAME\ngcloud projects add-iam-policy-binding kdaida123 \\\n    --member \"serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\" \\\n    --role \"roles\/compute.viewer\" \n```\n\n## Step-05: Create Kubernetes Namespace and Service Account\n```t\n# Create Kubernetes Namespace\nkubectl create namespace <NAMESPACE>\nkubectl create namespace wid-kns\n\n# Create Service Account\nkubectl create serviceaccount <KSA_NAME> --namespace <NAMESPACE>\nkubectl create serviceaccount wid-ksa --namespace 
wid-kns\n```\n\n## Step-06: Associate GCP IAM Service Account with Kubernetes Service Account\n- Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts.\n- This binding allows the Kubernetes service account to act as the IAM service account.\n```t\n# Associate GCP IAM Service Account with Kubernetes Service Account\ngcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com \\\n    --role roles\/iam.workloadIdentityUser \\\n    --member \"serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE\/KSA_NAME]\"\n\n# Replace GSA_NAME, GSA_PROJECT_ID, PROJECT_ID, NAMESPACE, KSA_NAME\ngcloud iam service-accounts add-iam-policy-binding wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com \\\n    --role roles\/iam.workloadIdentityUser \\\n    --member \"serviceAccount:kdaida123.svc.id.goog[wid-kns\/wid-ksa]\"\n```\n\n## Step-07: Annotate Kubernetes Service Account with GCP IAM SA email Address\n- Annotate the Kubernetes service account with the email address of the IAM service account.\n```t\n# Annotate Kubernetes Service Account with GCP IAM SA email Address\nkubectl annotate serviceaccount KSA_NAME \\\n    --namespace NAMESPACE \\\n    iam.gke.io\/gcp-service-account=GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com\n\n# Replace KSA_NAME, NAMESPACE, GSA_NAME, GSA_PROJECT_ID\nkubectl annotate serviceaccount wid-ksa \\\n    --namespace wid-kns \\\n    iam.gke.io\/gcp-service-account=wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\n\n# Describe Kubernetes Service Account\nkubectl describe sa wid-ksa -n wid-kns\n```\n\n## Step-08: 01-wid-demo-pod-without-sa.yaml\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: wid-demo-without-sa\n  namespace: wid-kns\nspec:\n  containers:\n  - image: google\/cloud-sdk:slim\n    name: wid-demo-without-sa\n    command: [\"sleep\",\"infinity\"]\n  #serviceAccountName: wid-ksa\n  nodeSelector:\n    
iam.gke.io\/gke-metadata-server-enabled: \"true\"\n```\n\n## Step-09: 02-wid-demo-pod-with-sa.yaml\n- **Important Note:** For Autopilot clusters, omit the nodeSelector field. Autopilot rejects this nodeSelector because all nodes use Workload Identity.\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: wid-demo-with-sa\n  namespace: wid-kns\nspec:\n  containers:\n  - image: google\/cloud-sdk:slim\n    name: wid-demo-with-sa\n    command: [\"sleep\",\"infinity\"]\n  serviceAccountName: wid-ksa\n  nodeSelector:\n    iam.gke.io\/gke-metadata-server-enabled: \"true\"\n```\n\n## Step-10: Deploy Kubernetes Manifests and Verify\n```t\n# Deploy kube-manifests\nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl -n wid-kns get pods \n```\n\n## Step-11: Verify from Workload Identity without Service Account Pod\n```t\n# Connect to Pod\nkubectl -n wid-kns exec -it wid-demo-without-sa -- \/bin\/bash\n\n# Check which Service Account the Pod is currently using\ngcloud auth list\nObservation: It uses the default account\n\n## Sample Output\nroot@wid-demo-without-sa:\/# gcloud auth list\n    Credentialed Accounts\nACTIVE  ACCOUNT\n*       kdaida123.svc.id.goog\n\nTo set the active account, run:\n    $ gcloud config set account `ACCOUNT`\n\nroot@wid-demo-without-sa:\/# \n\n\n# List Compute Instances from workload-identity-demo pod\ngcloud compute instances list\n\n## Sample Output\nroot@wid-demo-without-sa:\/# gcloud compute instances list\nERROR: (gcloud.compute.instances.list) Some requests did not succeed:\n - Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. 
See https:\/\/developers.google.com\/identity\/sign-in\/web\/devconsole-project.\n\nroot@wid-demo-without-sa:\/# \n\n# Exit the container terminal\nexit\n```\n\n## Step-12: Verify from Workload Identity with Service Account Pod\n```t\n# Connect to Pod\nkubectl -n wid-kns exec -it wid-demo-with-sa -- \/bin\/bash\n\n# Check which Service Account the Pod is currently using\ngcloud auth list\n\n## Sample Output\nroot@wid-demo-with-sa:\/# gcloud auth list\n                 Credentialed Accounts\nACTIVE  ACCOUNT\n*       wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\n\nTo set the active account, run:\n    $ gcloud config set account `ACCOUNT`\n\nroot@wid-demo-with-sa:\/# \n\n# List Compute Instances from workload-identity-demo pod\ngcloud compute instances list\n\n## Sample Output\nroot@wid-demo-with-sa:\/# gcloud compute instances list\nNAME                                                 ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP  STATUS\ngke-standard-cluster-priva-new-pool-2-7c9415e8-5cds  us-central1-c  g1-small      true         10.128.15.235               RUNNING\ngke-standard-cluster-priva-new-pool-2-7c9415e8-5mpz  us-central1-c  g1-small      true         10.128.0.8                  RUNNING\ngke-standard-cluster-priva-new-pool-2-7c9415e8-8qg6  us-central1-c  g1-small      true         10.128.0.2                  RUNNING\nroot@wid-demo-with-sa:\/# \n```\n\n## Step-13: Negative Use Case: Test access to Cloud DNS Record Sets\n```t\n# List Cloud DNS Records\ngcloud dns record-sets list --zone=kalyanreddydaida-com\nObservation:\n1. 
GCP IAM Service Account \"wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\" doesn't have any roles assigned related to Cloud DNS, so we got HTTP 403\n\n## Sample Output\nroot@wid-demo-with-sa:\/# gcloud dns record-sets list --zone=kalyanreddydaida-com\nERROR: (gcloud.dns.record-sets.list) HTTPError 403: Forbidden\nroot@wid-demo-with-sa:\/# \n\n# Exit the container terminal\nexit\n```\n\n## Step-14: Give Cloud DNS Admin Role to GCP IAM Service Account wid-gcpiam-sa\n```t\n# Add IAM Roles to GCP IAM Service Account\ngcloud projects add-iam-policy-binding PROJECT_ID \\\n    --member \"serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com\" \\\n    --role \"ROLE_NAME\"\nPROJECT_ID: your Google Cloud project ID.\nGSA_NAME: the name of your IAM service account.\nGSA_PROJECT: the project ID of the Google Cloud project of your IAM service account.\nROLE_NAME: the IAM role to assign to your service account, like roles\/spanner.viewer.\nGSA_PROJECT==PROJECT_ID\n\n# Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME\ngcloud projects add-iam-policy-binding kdaida123 \\\n    --member \"serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\" \\\n    --role \"roles\/dns.admin\" \n```\n\n## Step-15: Verify from Workload Identity with Service Account Pod\n```t\n# Connect to Pod\nkubectl -n wid-kns exec -it wid-demo-with-sa -- \/bin\/bash\n\n# List Cloud DNS Record Sets\ngcloud dns record-sets list --zone=kalyanreddydaida-com\n\n## Sample Output\nroot@wid-demo-with-sa:\/# gcloud dns record-sets list --zone=kalyanreddydaida-com\nNAME                         TYPE  TTL    DATA\nkalyanreddydaida.com.        NS    21600  ns-cloud-a1.googledomains.com.,ns-cloud-a2.googledomains.com.,ns-cloud-a3.googledomains.com.,ns-cloud-a4.googledomains.com.\nkalyanreddydaida.com.        SOA   21600  ns-cloud-a1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300\ndemo1.kalyanreddydaida.com.  
A     300    34.120.32.120\nroot@wid-demo-with-sa:\/# \n\n\n# List Compute Instances from workload-identity-demo pod\ngcloud compute instances list\n\n## Sample Output\nroot@wid-demo-with-sa:\/# gcloud compute instances list\nNAME                                                 ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP  STATUS\ngke-standard-cluster-priva-new-pool-2-7c9415e8-5cds  us-central1-c  g1-small      true         10.128.15.235               RUNNING\ngke-standard-cluster-priva-new-pool-2-7c9415e8-5mpz  us-central1-c  g1-small      true         10.128.0.8                  RUNNING\ngke-standard-cluster-priva-new-pool-2-7c9415e8-8qg6  us-central1-c  g1-small      true         10.128.0.2                  RUNNING\nroot@wid-demo-with-sa:\/# \n\n# Exit the container terminal \nexit\n```\n\n\n## Step-16: Clean-Up Kubernetes Resources\n```t\n# Delete Kubernetes Pods\nkubectl delete -f kube-manifests\n\n# List Namespaces\nkubectl get ns\n\n# Delete Kubernetes Namespace \nkubectl delete ns wid-kns\nObservation:\n1. 
Kubernetes Service Account \"wid-ksa\" will get automatically deleted when that namespace is deleted\n```\n\n## Step-17: Clean-Up GCP IAM Resources\n```t\n# List GCP IAM Service Accounts\ngcloud iam service-accounts list\n\n# Remove IAM Roles from GCP IAM Service Account\ngcloud projects remove-iam-policy-binding PROJECT_ID \\\n    --member \"serviceAccount:GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com\" \\\n    --role \"ROLE_NAME\"\nPROJECT_ID: your Google Cloud project ID.\nGSA_NAME: the name of your IAM service account.\nGSA_PROJECT_ID: the project ID of the Google Cloud project of your IAM service account.\nGSA_PROJECT_ID==PROJECT_ID\nROLE_NAME: the IAM role to remove from your service account, like roles\/spanner.viewer.\n\n# REMOVE ROLE: COMPUTE VIEWER: Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME\ngcloud projects remove-iam-policy-binding kdaida123 \\\n    --member \"serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\" \\\n    --role \"roles\/compute.viewer\" \n\n# REMOVE ROLE: DNS ADMIN: Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME\ngcloud projects remove-iam-policy-binding kdaida123 \\\n    --member \"serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com\" \\\n    --role \"roles\/dns.admin\" \n\n# Delete the GCP IAM Service Account we have created\ngcloud iam service-accounts delete wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com --project=kdaida123\n```\n\n## References\n- [GKE - Use Workload Identity](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/workload-identity)","site":"gcp gke docs","answers_cleaned":"    title  GCP Google Kubernetes Engine GKE Workload Identity description  Implement GCP Google Kubernetes Engine GKE Workload Identity        Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  
PROJECT     Replace Values CLUSTER NAME  REGION  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123         Step 01  Introduction 1  Create GCP IAM Service Account 2  Add IAM Roles to GCP IAM Service Account  add iam policy binding  3  Create Kubernetes Namespace 4  Create Kubernetes Service Account 5  Associate GCP IAM Service Account with Kubernetes Service Account  gcloud iam service accounts add iam policy binding  6  Annotate Kubernetes Service Account with GCP IAM SA email Address  kubectl annotate serviceaccount  7  Create a Sample App with and without Kubernetes Service Account 8  Test Workload Identity in GKE Cluster     Step 02  Verify if Workload Identity Setting is enabled for GKE Cluster   Go to Kubernetes Engine    Clusters    standard cluster private 1    DETAILS Tab   In Security    Workload Identity    SHOULD BE IN ENABLED STATE     Step 03  Create GCP IAM Service Account    t   List IAM Service Accounts gcloud iam service accounts list    List Google Cloud Projects gcloud projects list Observation   1  Get the PROJECT ID for your current project 2  Replace GSA PROJECT ID with PROJECT ID for your current project    Create GCP IAM Service Account gcloud iam service accounts create GSA NAME   project GSA PROJECT ID GSA NAME  the name of the new IAM service account  GSA PROJECT ID  the project ID of the Google Cloud project for your IAM service account  GSA PROJECT  PROJECT ID    Replace GSA NAME and GSA PROJECT gcloud iam service accounts create wid gcpiam sa   project kdaida123    List IAM Service Accounts gcloud iam service accounts list         Step 04  Add IAM Roles to GCP IAM Service Account   We are giving   roles compute viewer   permissions to IAM Service Account     From Kubernetes Pod  we are going to list the compute instances    With the help of the  Google IAM Service account  and  Kubernetes Service Account   access for Kubernetes Pod from GKE cluster should be 
successful for listing the google computing instances      t   Add IAM Roles to GCP IAM Service Account gcloud projects add iam policy binding PROJECT ID         member  serviceAccount GSA NAME GSA PROJECT ID iam gserviceaccount com          role  ROLE NAME  PROJECT ID  your Google Cloud project ID  GSA NAME  the name of your IAM service account  GSA PROJECT ID  the project ID of the Google Cloud project of your IAM service account  GSA PROJECT ID  PROJECT ID ROLE NAME  the IAM role to assign to your service account  like roles spanner viewer     Replace PROJECT ID  GSA NAME  GSA PROJECT ID  ROLE NAME gcloud projects add iam policy binding kdaida123         member  serviceAccount wid gcpiam sa kdaida123 iam gserviceaccount com          role  roles compute viewer           Step 05  Create Kubernetes Namepsace and Service Account    t   Create Kubernetes Namespace kubectl create namespace  NAMESPACE  kubectl create namespace wid kns    Create Service Account kubectl create serviceaccount  KSA NAME     namespace  NAMESPACE  kubectl create serviceaccount wid ksa    namespace wid kns         Step 06  Associate GCP IAM Service Account with Kubernetes Service Account   Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts    This binding allows the Kubernetes service account to act as the IAM service account     t   Associate GCP IAM Service Account with Kubernetes Service Account gcloud iam service accounts add iam policy binding GSA NAME GSA PROJECT ID iam gserviceaccount com         role roles iam workloadIdentityUser         member  serviceAccount PROJECT ID svc id goog NAMESPACE KSA NAME      Replace GSA NAME  GSA PROJECT ID  PROJECT ID  NAMESPACE  KSA NAME gcloud iam service accounts add iam policy binding wid gcpiam sa kdaida123 iam gserviceaccount com         role roles iam workloadIdentityUser         member  serviceAccount kdaida123 svc id goog wid kns wid ksa           Step 
07  Annotate Kubernetes Service Account with GCP IAM SA email Address   Annotate the Kubernetes service account with the email address of the IAM service account     t   Annotate Kubernetes Service Account with GCP IAM SA email Address kubectl annotate serviceaccount KSA NAME         namespace NAMESPACE       iam gke io gcp service account GSA NAME GSA PROJECT ID iam gserviceaccount com    Replace KSA NAME  NAMESPACE  GSA NAME  GSA PROJECT ID kubectl annotate serviceaccount wid ksa         namespace wid kns       iam gke io gcp service account wid gcpiam sa kdaida123 iam gserviceaccount com    Describe Kubernetes Service Account kubectl describe sa wid ksa  n wid kns         Step 08  01 wid demo pod without sa yaml    yaml apiVersion  v1 kind  Pod metadata    name  wid demo without sa   namespace  wid kns spec    containers      image  google cloud sdk slim     name  wid demo without sa     command    sleep   infinity      serviceAccountName  wid ksa   nodeSelector      iam gke io gke metadata server enabled   true          Step 09  02 wid demo pod with sa yaml     Important Note    For Autopilot clusters  omit the nodeSelector field  Autopilot rejects this nodeSelector because all nodes use Workload Identity     yaml apiVersion  v1 kind  Pod metadata    name  wid demo with sa   namespace  wid kns spec    containers      image  google cloud sdk slim     name  wid demo with sa     command    sleep   infinity     serviceAccountName  wid ksa   nodeSelector      iam gke io gke metadata server enabled   true          Step 10  Deploy Kubernetes Manifests and Verify    t   Deploy kube manifests kubectl apply  f kube manifests    List Pods kubectl  n wid kns get pods          Step 11  Verify from Workload Identity without Service Account Pod    t   Connect to Pod kubectl  n wid kns exec  it wid demo without sa     bin bash    Default Service Account Pod is using currently gcloud auth list Observation  It chose the default account     Sample Output root wid demo without sa  
  gcloud auth list     Credentialed Accounts ACTIVE  ACCOUNT         kdaida123 svc id goog  To set the active account  run        gcloud config set account  ACCOUNT   root wid demo without sa         List Compute Instances from workload identity demo pod gcloud compute instances list     Sample Output root wid demo without sa    gcloud compute instances list ERROR   gcloud compute instances list  Some requests did not succeed     Request had invalid authentication credentials  Expected OAuth 2 access token  login cookie or other valid authentication credential  See https   developers google com identity sign in web devconsole project   root wid demo without sa        Exit the container terminal exit         Step 12  Verify from Workload Identity with Service Account Pod    t   Connect to Pod kubectl  n wid kns exec  it wid demo with sa     bin bash    Default Service Account Pod is using currently gcloud auth list     Sample Output root wid demo with sa    gcloud auth list                  Credentialed Accounts ACTIVE  ACCOUNT         wid gcpiam sa kdaida123 iam gserviceaccount com  To set the active account  run        gcloud config set account  ACCOUNT   root wid demo with sa        List Compute Instances from workload identity demo pod gcloud compute instances list     Sample Output root wid demo with sa    gcloud compute instances list NAME                                                 ZONE           MACHINE TYPE  PREEMPTIBLE  INTERNAL IP    EXTERNAL IP  STATUS gke standard cluster priva new pool 2 7c9415e8 5cds  us central1 c  g1 small      true         10 128 15 235               RUNNING gke standard cluster priva new pool 2 7c9415e8 5mpz  us central1 c  g1 small      true         10 128 0 8                  RUNNING gke standard cluster priva new pool 2 7c9415e8 8qg6  us central1 c  g1 small      true         10 128 0 2                  RUNNING root wid demo with sa             Step 13  Negative Usecase  Test access to Cloud DNS Record Sets    t   gcloud 
list DNS Records gcloud dns record sets list   zone kalyanreddydaida com Observation  1  GCP IAM Service Account  wid gcpiam sa kdaida123 iam gserviceaccount com  doesnt have roles assigned related to Cloud DNS so we got HTTP 403     Sample Output root wid demo with sa    gcloud dns record sets list   zone kalyanreddydaida com ERROR   gcloud dns record sets list  HTTPError 403  Forbidden root wid demo with sa        Exit the container terminal exit         Step 14  Give Cloud DNS Admin Role for GCP IAM Servic Account wid gcpiam sa    t   Add IAM Roles to GCP IAM Service Account gcloud projects add iam policy binding PROJECT ID         member  serviceAccount GSA NAME GSA PROJECT iam gserviceaccount com          role  ROLE NAME  PROJECT ID  your Google Cloud project ID  GSA NAME  the name of your IAM service account  GSA PROJECT  the project ID of the Google Cloud project of your IAM service account  ROLE NAME  the IAM role to assign to your service account  like roles spanner viewer  GSA PROJECT  PROJECT ID    Replace PROJECT ID  GSA NAME  GSA PROJECT  ROLE NAME gcloud projects add iam policy binding kdaida123         member  serviceAccount wid gcpiam sa kdaida123 iam gserviceaccount com          role  roles dns admin           Step 15  Verify from Workload Identity with Service Account Pod    t   Connect to Pod kubectl  n wid kns exec  it wid demo with sa     bin bash    List Cloud DNS Record Sets gcloud dns record sets list   zone kalyanreddydaida com      Sample Output root wid demo with sa    gcloud dns record sets list   zone kalyanreddydaida com NAME                         TYPE  TTL    DATA kalyanreddydaida com         NS    21600  ns cloud a1 googledomains com  ns cloud a2 googledomains com  ns cloud a3 googledomains com  ns cloud a4 googledomains com  kalyanreddydaida com         SOA   21600  ns cloud a1 googledomains com  cloud dns hostmaster google com  1 21600 3600 259200 300 demo1 kalyanreddydaida com   A     300    34 120 32 120 root wid demo with sa   
      List Compute Instances from workload identity demo pod gcloud compute instances list     Sample Output root wid demo with sa    gcloud compute instances list NAME                                                 ZONE           MACHINE TYPE  PREEMPTIBLE  INTERNAL IP    EXTERNAL IP  STATUS gke standard cluster priva new pool 2 7c9415e8 5cds  us central1 c  g1 small      true         10 128 15 235               RUNNING gke standard cluster priva new pool 2 7c9415e8 5mpz  us central1 c  g1 small      true         10 128 0 8                  RUNNING gke standard cluster priva new pool 2 7c9415e8 8qg6  us central1 c  g1 small      true         10 128 0 2                  RUNNING root wid demo with sa        Exit the container terminal  exit          Step 16  Clean Up Kubernetes Resources    t   Delete Kubernetes Pods kubectl delete  f kube manifests    List Namespaces kubectl get ns    Delete Kubernetes Namespace  kubectl delete ns wid kns Observation  1  Kubernetes Service Account  wid ksa  will get automatically deleted when that namespace is deleted         Step 17  Clean Up GCP IAM Resources    t   List GCP IAM Service Accounts gcloud iam service accounts list    Remove IAM Roles to GCP IAM Service Account gcloud projects remove iam policy binding PROJECT ID         member  serviceAccount GSA NAME GSA PROJECT ID iam gserviceaccount com          role  ROLE NAME  PROJECT ID  your Google Cloud project ID  GSA NAME  the name of your IAM service account  GSA PROJECT ID  the project ID of the Google Cloud project of your IAM service account  GSA PROJECT ID  PROJECT ID ROLE NAME  the IAM role to assign to your service account  like roles spanner viewer     REMOVE ROLE  COMPUTE VIEWER  Replace PROJECT ID  GSA NAME  GSA PROJECT  ROLE NAME gcloud projects remove iam policy binding kdaida123         member  serviceAccount wid gcpiam sa kdaida123 iam gserviceaccount com          role  roles compute viewer      REMOVE ROLE  DNS ADMIN  Replace PROJECT ID  GSA NAME  GSA 
PROJECT  ROLE NAME gcloud projects remove iam policy binding kdaida123         member  serviceAccount wid gcpiam sa kdaida123 iam gserviceaccount com          role  roles dns admin      Delete the GCP IAM Service Account we have created gcloud iam service accounts delete wid gcpiam sa kdaida123 iam gserviceaccount com   project kdaida123         References    GKE   Use Workload Identity  https   cloud google com kubernetes engine docs how to workload identity"}
{"questions":"gcp gke docs title GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy Implement GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n3.  External DNS Controller should be installed and ready to use\n\n## Step-01: Introduction\n1. Configuring the OAuth consent screen\n2. Creating OAuth credentials\n3. Setting up IAP access\n4. Creating a Kubernetes Secret with OAuth Client ID Credentials\n5. Adding an iap block to the BackendConfig\n\n## Step-02: Create basic google gmail users (if not present)\n- I have created below two users for this IAP Demo\n    - gcpuser901@gmail.com\n    - gcpuser902@gmail.com\n\n## Step-03: Enabling IAP for GKE\n- [Enabling IAP for GKE](https:\/\/cloud.google.com\/iap\/docs\/enabling-kubernetes-howto)\n- We will follow steps from above documentation link to create below 2 items\n    1. [Configuring the OAuth consent screen](https:\/\/cloud.google.com\/iap\/docs\/enabling-kubernetes-howto#oauth-configure)\n    2. 
[Creating OAuth credentials](https:\/\/cloud.google.com\/iap\/docs\/enabling-kubernetes-howto#oauth-credentials)\n\n\n```t\n# Make a note of Client ID and Client Secret\nClient ID: 1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com\nClient Secret: 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\n\n# Template\nhttps:\/\/iap.googleapis.com\/v1\/oauth\/clientIds\/CLIENT_ID:handleRedirect\n\n# Replace CLIENT_ID (Update URL in OAuth 2.0 Client IDs -> gke-ingress-iap-demo-oauth-creds)\nhttps:\/\/iap.googleapis.com\/v1\/oauth\/clientIds\/1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com:handleRedirect\n```\n\n## Step-04: Creating a Kubernetes Secret\n```t\n# Make a note of Client ID and Client Secret\nClient ID: 1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com\nClient Secret: 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\n\n# List Kubernetes Secrets (Default Namespace)\nkubectl get secrets\n\n# Create Kubernetes Secret\nkubectl create secret generic my-secret --from-literal=client_id=client_id_key \\\n    --from-literal=client_secret=client_secret_key\n\n# Replace client_id_key, client_secret_key\nkubectl create secret generic my-secret --from-literal=client_id=1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com \\\n    --from-literal=client_secret=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\n\n# List Kubernetes Secrets (Default Namespace)\nkubectl get secrets\n```\n\n## Step-05: Adding an iap block to the BackendConfig\n- **File Name:** 07-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com\/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  iap:\n    enabled: true\n    oauthclientCredentials:\n      secretName: my-secret    \n```\n\n## Step-06: Review Kubernetes Manifests\n- All 3 Node Port Services will have the annotation `cloud.google.com\/backend-config` added\n- 01-Nginx-App1-Deployment-and-NodePortService.yaml\n- 02-Nginx-App2-Deployment-and-NodePortService.yaml\n- 
03-Nginx-App3-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: app1-nginx-nodeport-service\n  labels:\n    app: app1-nginx\n  annotations:\n    cloud.google.com\/backend-config: '{\"default\": \"my-backendconfig\"}'      \nspec:\n  type: NodePort\n  selector:\n    app: app1-nginx\n  ports:\n    - port: 80\n      targetPort: 80\n```\n\n## Step-07: Review Kubernetes Manifests\n- No changes to the below YAML files from the previous section\n- 04-Ingress-NameBasedVHost-Routing.yaml\n- 05-Managed-Certificate.yaml\n- 06-frontendconfig.yaml\n\n\n## Step-08: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\nObservation: \n1. All other configs were already created as part of the previous demo; only the backendconfig change will be applied now. \n\n# List Deployments\nkubectl get deploy \n\n# List Pods\nkubectl get pods \n\n# List Services\nkubectl get svc \n\n# List Ingress Services\nkubectl get ingress \n\n# List Frontend Configs\nkubectl get frontendconfig \n\n# List Backend Configs\nkubectl get backendconfig\n```\n\n## Step-09: Setting up IAP access\n- [Setting up IAP access](https:\/\/cloud.google.com\/iap\/docs\/enabling-kubernetes-howto#iap-access)\n- Add User `gcpuser901@gmail.com` as Principal.\n\n## Step-10: Access Application\n```t\n# Access Application\nhttp:\/\/app1-ingress.kalyanreddydaida.com\/app1\/index.html\nhttp:\/\/app2-ingress.kalyanreddydaida.com\/app2\/index.html\nhttp:\/\/default-ingress.kalyanreddydaida.com\n\nUsername: gcpuser901@gmail.com (In your case it might be a different user you added as part of Step-09)\nPassword: XXXXXXXXXX\n\nObservation:\n1. All 3 URLs will redirect to Google Authentication. Provide credentials to log in\n2. All 3 URLs should work as expected. In your case, replace YOUR_DOMAIN name for testing\n3. 
HTTP to HTTPS redirect should work\n```\n\n## Step-11: Negative Usecase: Access using User which is not added in Principal\n```t\n# Access Application\nhttp:\/\/app1-ingress.kalyanreddydaida.com\/app1\/index.html\nhttp:\/\/app2-ingress.kalyanreddydaida.com\/app2\/index.html\nhttp:\/\/default-ingress.kalyanreddydaida.com\n\nUsername: gcpuser902@gmail.com (user which is not added in principal as part of Step-09)\nPassword: XXXXXXXXXX\n\nObservation:\n1. It should fail, Application should not be accessible. \n```\n\n## Step-12: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Delete Kubernetes Secret\nkubectl delete secret my-secret\n\n# Delete OAuth Credentials\nGo to API & Services -> Credentials -> OAuth 2.0 Client IDs -> gke-ingress-iap-demo-oauth-creds -> DELETE\n```\n\n\n\n## References\n- [Ingress Features](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features)\n- [Enabling IAP for GKE](https:\/\/cloud.google.com\/iap\/docs\/enabling-kubernetes-howto)\n","site":"gcp gke docs","answers_cleaned":"    title  GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy description  Implement GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy         Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  REGION  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123    List Kubernetes Nodes kubectl get nodes     3   External DNS Controller should be installed and ready to use     Step 01  Introduction 1  Configuring the OAuth consent screen 2  Creating OAuth credentials 3  Setting up IAP access 4  Creating a Kubernetes Secret with OAuth Client ID Credentials 5  Adding an iap block to 
the BackendConfig     Step 02  Create basic google gmail users  if not present    I have created below two users for this IAP Demo       gcpuser901 gmail com       gcpuser902 gmail com     Step 03  Enabling IAP for GKE    Enabling IAP for GKE  https   cloud google com iap docs enabling kubernetes howto    We will follow steps from above documentation link to create below 2 items     1   Configuring the OAuth consent screen  https   cloud google com iap docs enabling kubernetes howto oauth configure      2   Creating OAuth credentials  https   cloud google com iap docs enabling kubernetes howto oauth credentials       t   Make a note of Client ID and Client Secret Client ID  1057267725005 0icbqnab9rsvodgmq7dicfvs1f56sj5p apps googleusercontent com Client Secret  GOCSPX TKJOtavKIRI7vjMLQVp s gy0ut5    Template https   iap googleapis com v1 oauth clientIds CLIENT ID handleRedirect    Replace CLIENT ID  Update URL in OAuth 2 0 Client IDs    gke ingress iap demo oauth creds  https   iap googleapis com v1 oauth clientIds 1057267725005 0icbqnab9rsvodgmq7dicfvs1f56sj5p apps googleusercontent com handleRedirect         Step 04  Creating a Kubernetes Secret    t   Make a note of Client ID and Client Secret Client ID  1057267725005 0icbqnab9rsvodgmq7dicfvs1f56sj5p apps googleusercontent com Client Secret  GOCSPX TKJOtavKIRI7vjMLQVp s gy0ut5    List Kubernetes Secrets  Default Namespace  kubectl get secrets    Create Kubernetes Secret kubectl create secret generic my secret   from literal client id client id key         from literal client secret client secret key    Replace  client id key  client secret key kubectl create secret generic my secret   from literal client id 1057267725005 0icbqnab9rsvodgmq7dicfvs1f56sj5p apps googleusercontent com         from literal client secret GOCSPX TKJOtavKIRI7vjMLQVp s gy0ut5    List Kubernetes Secrets  Default Namespace  kubectl get secrets         Step 05  Adding an iap block to the BackendConfig     File Name    07 backendconfig yaml   
 yaml apiVersion  cloud google com v1 kind  BackendConfig metadata    name  my backendconfig spec    iap      enabled  true     oauthclientCredentials        secretName  my secret             Step 06  Review Kubenertes Manifests   All 3 Node Port Services will have annotation added  cloud google com backend config    01 Nginx App1 Deployment and NodePortService yaml   02 Nginx App2 Deployment and NodePortService yaml   03 Nginx App3 Deployment and NodePortService yaml    yaml apiVersion  v1 kind  Service metadata    name  app1 nginx nodeport service   labels      app  app1 nginx   annotations      cloud google com backend config     default    my backendconfig          spec    type  NodePort   selector      app  app1 nginx   ports        port  80       targetPort  80         Step 07  Review Kubenertes Manifests   No changes to below YAML files from previous section   04 Ingress NameBasedVHost Routing yaml   05 Managed Certificate yaml   06 frontendconfig yaml      Step 08  Deploy Kubernetes Manifests    t   Deploy Kubernetes Manifests kubectl apply  f kube manifests Observation   1  All other configs already created as part of previous demo  only backendconfig change will be applied now      List Deployments kubectl get deploy     List Pods kubectl get pods     List Services kubectl get svc     List Ingress Services kubectl get ingress     List Frontend Configs kubectl get frontendconfig     List Backend Configs kubectl get backendconfig         Step 09  Setting up IAP access    Setting up IAP access  https   cloud google com iap docs enabling kubernetes howto iap access    Add User  gcpuser901 gmail com  as Principal      Step 10  Access Application    t   Access Application http   app1 ingress kalyanreddydaida com app1 index html http   app2 ingress kalyanreddydaida com app2 index html http   default ingress kalyanreddydaida com  Username  gcpuser901 gmail com  In your case it might be a different user you added as part of Step 09  Password  XXXXXXXXXX  
Observation  1  All 3 URLS will redirect to Google Authentication  Provide credentials to login 2  All 3 URLS should work as expected  In your case  replace YOUR DOMAIN name for testing 3  HTTP to HTTPS redirect should work         Step 11  Negative Usecase  Access using User which is not added in Principal    t   Access Application http   app1 ingress kalyanreddydaida com app1 index html http   app2 ingress kalyanreddydaida com app2 index html http   default ingress kalyanreddydaida com  Username  gcpuser902 gmail com  user which is not added in principal as part of Step 09  Password  XXXXXXXXXX  Observation  1  It should fail  Application should not be accessible           Step 12  Clean Up    t   Delete Kubernetes Resources kubectl delete  f kube manifests    Delete Kubernetes Secret kubectl delete secret my secret    Delete OAuth Credentials Go to API   Services    Credentials    OAuth 2 0 Client IDs    gke ingress iap demo oauth creds    DELETE           References    Ingress Features  https   cloud google com kubernetes engine docs how to ingress features     Enabling IAP for GKE  https   cloud google com iap docs enabling kubernetes howto  "}
{"questions":"gcp gke docs title GCP Google Kubernetes Engine GKE Ingress with External DNS Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT Implement GCP Google Kubernetes Engine GKE Ingress with External DNS","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress with External DNS \ndescription: Implement GCP Google Kubernetes Engine GKE Ingress with External DNS\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. 
External DNS Controller Installed\n\n## Step-01: Introduction\n- Ingress with External DNS\n- We are going to use the Annotation `external-dns.alpha.kubernetes.io\/hostname` in the Ingress Service.\n- DNS Record sets will be automatically added to Google Cloud DNS using the external-dns controller when the Ingress Service is deployed\n\n\n## Step-02: 01-Nginx-App3-Deployment-and-NodePortService.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: app3-nginx-deployment\n  labels:\n    app: app3-nginx\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: app3-nginx\n  template:\n    metadata:\n      labels:\n        app: app3-nginx\n    spec:\n      containers:\n        - name: app3-nginx\n          image: stacksimplify\/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/ingress#def_inf_hc)             \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: \/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5               \n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: app3-nginx-nodeport-service\nspec:\n  type: NodePort\n  selector:\n    app: app3-nginx\n  ports:\n    - port: 80\n      targetPort: 80   \n```\n\n## Step-03: 02-ingress-external-dns.yaml\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-externaldns-demo\n  annotations:\n    # If the class annotation is not specified, it defaults to \"gce\".\n    # gce: external load balancer\n    # gce-internal: internal load balancer\n    kubernetes.io\/ingress.class: \"gce\"  \n    # External DNS - For creating a Record Set in Google Cloud DNS\n    external-dns.alpha.kubernetes.io\/hostname: ingressextdns101.kalyanreddydaida.com\nspec:\n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      
port:\n        number: 80                  \n```\n\n## Step-04: Deploy Kubernetes Manifests and Verify\n```t\n# Deploy Kubernetes Manifests \nkubectl apply -f kube-manifests\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# List Ingress Services\nkubectl get ingress\n\n# Describe Ingress Service\nkubectl describe ingress ingress-externaldns-demo\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com\n2. Verify Record sets; the DNS Name we added in the Ingress Service should be present \n\n# Access Application\nhttp:\/\/<DNS-Name>\nhttp:\/\/ingressextdns101.kalyanreddydaida.com\n```\n\n## Step-05: Delete kube-manifests\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests\/\n\n# Verify external-dns Controller logs\nkubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')\n[or]\nkubectl -n external-dns-ns get pods\nkubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>\n\n\n# Verify Cloud DNS\n1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com \n2. 
Verify Record sets, the DNS Name we added in the Ingress Service should not be present (already deleted) \n```\n\n\n","site":"gcp gke docs"}
{"questions":"gcp gke docs title GKE Storage with GCP Cloud SQL MySQL Private Instance Use GCP Cloud SQL MySQL DB for GKE Workloads Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GKE Storage with GCP Cloud SQL - MySQL Private Instance\ndescription: Use GCP Cloud SQL MySQL DB for GKE Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n\n\n## Step-01: Introduction\n- GKE Private Cluster \n- GCP Cloud SQL with Private IP \n\n\n## Step-02: Create Private Service Connection to Google Managed Services from our VPC Network\n## Step-02-01: Create ALLOCATED IP RANGES FOR SERVICES\n- Go to VPC Networks -> default -> PRIVATE SERVICE CONNECTION -> ALLOCATED IP RANGES FOR SERVICES\n- Click on **ALLOCATE IP RANGE**\n- **Name:** google-managed-services-default  (google-managed-services-<VPC-NAME>)\n- **Description:** google-managed-services-default  \n- **IP Range:** Automatic\n- **Prefix Length:** 16\n- Click on **ALLOCATE** \n\n## Step-02-02: Create PRIVATE CONNECTIONS TO SERVICES\n- Delete existing connection if any present `servicenetworking-googleapis-com`\n- Click on **CREATE CONNECTION**\n- **Connected Service Provider:** Google Cloud Platform\n- **Connection Name:** servicenetworking-googleapis-com (DEFAULT POPULATED CANNOT CHANGE)\n- **Assigned IP Allocation:** google-managed-services-default  \n- Click on **CONNECT**\n\n## Step-03: Create Google Cloud SQL MySQL Instance\n- Go to SQL -> Choose MySQL\n- **Instance ID:** 
ums-db-private-instance\n- **Password:** KalyanReddy13\n- **Database Version:** MYSQL 8.0\n- **Choose a configuration to start with:** Development\n- **Choose region and zonal availability**\n  - **Region:** us-central1 (Iowa)\n  - **Zonal availability:** Single Zone\n  - **Primary Zone:** us-central1-c\n- **Customize your instance**\n  - **Machine Type:** LightWeight (1 vCPU, 3.75GB)\n  - **Storage Type:** HDD\n  - **Storage Capacity:** 10GB \n  - **Enable automatic storage increases:** CHECKED\n  - **Instance IP Assignment:** \n    - **Private IP:** CHECKED\n      - **Associated networking:** default\n      - **MESSAGE:** Private services access connection for network default has been successfully created. You will now be able to use the same network across all your project's managed services. If you would like to change this connection, please visit the Networking page.\n      - **Allocated IP range (optional):** google-managed-services-default\n    - **Public IP:** UNCHECKED\n  - **Authorized networks:** NOT ADDED ANYTHING\n- **Data Protection**\n  - **Automatic Backups:** UNCHECKED\n- **Instance deletion protection:** UNCHECKED  \n- **Maintenance:** Leave to defaults\n- **Flags:** Leave to defaults\n- **Labels:** Leave to defaults\n- Click on **CREATE INSTANCE**      \n\n\n## Step-04: Create DB Schema webappdb \n- Go to SQL -> ums-db-private-instance -> Databases -> **CREATE DATABASE**\n- **Database Name:** webappdb\n- **Character set:** utf8\n- **Collation:** Default collation\n- Click on **CREATE**\n\n\n## Step-05: 01-MySQL-externalName-Service.yaml\n- Update Cloud SQL MySQL DB `Private IP` in ExternalName Service\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: mysql-externalname-service\nspec:\n  type: ExternalName\n  externalName: 10.64.18.3\n```\n\n## Step-06: 02-Kubernetes-Secrets.yaml\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mysql-db-password\ntype: Opaque\ndata: \n  db-password: S2FseWFuUmVkZHkxMw==\n\n# Base64 of 
KalyanReddy13\n# https:\/\/www.base64encode.org\/\n# Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw==\n```\n\n## Step-07: 03-UserMgmtWebApp-Deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata:\n  name: usermgmt-webapp\n  labels:\n    app: usermgmt-webapp\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: usermgmt-webapp\n  template:  \n    metadata:\n      labels: \n        app: usermgmt-webapp\n    spec:\n      initContainers:\n        - name: init-db\n          image: busybox:1.31\n          command: ['sh', '-c', 'echo -e \"Checking for the availability of MySQL Server deployment\"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf \"-\"; done; echo -e \"  >> MySQL DB Server has started\";']      \n      containers:\n        - name: usermgmt-webapp\n          image: stacksimplify\/kube-usermgmt-webapp:1.0.0-MySQLDB\n          imagePullPolicy: Always\n          ports: \n            - containerPort: 8080           \n          env:\n            - name: DB_HOSTNAME\n              value: \"mysql-externalname-service\"            \n            - name: DB_PORT\n              value: \"3306\"            \n            - name: DB_NAME\n              value: \"webappdb\"            \n            - name: DB_USERNAME\n              value: \"root\"            \n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: mysql-db-password\n                  key: db-password   \n```\n\n## Step-08: 04-UserMgmtWebApp-LoadBalancer-Service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: usermgmt-webapp-lb-service\n  labels: \n    app: usermgmt-webapp\nspec: \n  type: LoadBalancer\n  selector: \n    app: usermgmt-webapp\n  ports: \n    - port: 80 # Service Port\n      targetPort: 8080 # Container Port\n```\n\n## Step-09: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\/\n\n# List Deployments\nkubectl get deploy\n\n# List 
Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n\n## Step-10: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n```\n\n## Step-11: Connect to MySQL DB (Cloud SQL) from GKE Cluster using kubectl\n```t\n## Verify from Kubernetes Cluster, we are able to connect to MySQL DB\n# Template\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ExternalName-Service> -u <USER_NAME> -p<PASSWORD>\n\n# MySQL Client 8.0: Replace External Name Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from user;\nmysql> exit\n```\n\n## Step-12: Create New user admin102 and verify in Cloud SQL MySQL webappdb\n```t\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User (Used for testing  `allowVolumeExpansion: true` Option)\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial Security Address: ssn102\n\n# MySQL Client 8.0: Replace External Name Service, Username and Password\nkubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13\n\nmysql> show schemas;\nmysql> use webappdb;\nmysql> show tables;\nmysql> select * from user;\nmysql> exit\n```\n\n## Step-13: Clean-Up\n```t\n# Delete Kubernetes Objects\nkubectl delete -f kube-manifests\/\n\n# Important Note: \nDONT DELETE GCP Cloud SQL Instance. 
We will use it in next demo and clean-up\n```\n\n## References\n- [Private Service Access with MySQL](https:\/\/cloud.google.com\/sql\/docs\/mysql\/configure-private-services-access#console)\n- [Private Service Access](https:\/\/cloud.google.com\/vpc\/docs\/private-services-access)\n- [VPC Network Peering Limits](https:\/\/cloud.google.com\/vpc\/docs\/quota#vpc-peering)\n- [Configuring Private Service Access](https:\/\/cloud.google.com\/vpc\/docs\/configure-private-services-access)\n- [Additional Reference Only - Enabling private services access](https:\/\/cloud.google.com\/service-infrastructure\/docs\/enabling-private-services-access)\n\n","site":"gcp gke docs"}
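The `db-password` value in the Secret above is only base64-encoded, not encrypted; the encoded string can be reproduced and round-tripped from any shell instead of visiting base64encode.org (a sketch; `base64 -d` assumes GNU coreutils):

```shell
# Encode the plain-text password exactly as stored in the Secret manifest.
# printf '%s' avoids the trailing newline that `echo` would include.
encoded=$(printf '%s' 'KalyanReddy13' | base64)
echo "$encoded"                      # -> S2FseWFuUmVkZHkxMw==

# Round-trip: decoding returns the original password.
printf '%s' "$encoded" | base64 -d   # -> KalyanReddy13
```

Because base64 is reversible, anyone with read access to the Secret can recover the password; RBAC on Secrets matters more than the encoding.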
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine GKE Ingress with Cloud Armor Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t title GCP Google Kubernetes Engine GKE Ingress with Cloud Armor","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress with Cloud Armor\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress with Cloud Armor\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n3. Registered Domain using Google Cloud Domains\n4. External DNS Controller installed and ready to use\n```t\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n5. 
Verify if External IP Address is created\n```t\n# List External IP Address\ngcloud compute addresses list\n\n# Describe External IP Address \ngcloud compute addresses describe gke-ingress-extip1 --global\n```\n\n\n## Step-01: Introduction\n- Ingress Service with Cloud Armor\n\n## Step-02: Create Cloud Armor Policy\n- Go to Network Security -> Cloud Armor -> CREATE POLICY\n### Configure Policy\n- **Name:** cloud-armor-policy-1\n- **Description:** Cloud Armor Demo with GKE Ingress\n- **Policy type:** Backend security policy\n- **Default rule action:** Deny\n- **Deny Status:** 403(Forbidden)\n- Click on **NEXT STEP**\n### Add More Rules (Optional)\n- Leave to default \n- NO NEW RULES OTHER THAN EXISTING DEFAULT RULE\n- ALL IP ADDRESS -> DENY -> With 403 ERROR -> Priority 2,147,483,647\t\n- Click on **NEXT STEP**\n### Add Policy to Targets (Optional)\n- Leave to default \n- Click on **NEXT STEP**\n### Advanced configurations (Adaptive Protection) (optional)\n- Click on **Enable** checkbox\n- Click on **DONE**\n- Click on **CREATE POLICY**\n\n## Step-03: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: cloud-armor-demo-deployment\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: cloud-armor-demo\n  template:\n    metadata:\n      labels:\n        app: cloud-armor-demo\n    spec:\n      containers:\n        - name: cloud-armor-demo\n          image: stacksimplify\/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe (https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/ingress#def_inf_hc)            \n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: \/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5    \n```\n## Step-04: 02-kubernetes-NodePort-service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: 
cloud-armor-demo-nodeport-service\n  annotations:\n    cloud.google.com\/backend-config: '{\"ports\": {\"80\":\"my-backendconfig\"}}'\nspec:\n  type: NodePort\n  selector:\n    app: cloud-armor-demo\n  ports:\n    - port: 80\n      targetPort: 80\n```\n## Step-05: 03-ingress.yaml\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-cloud-armor-demo\n  annotations:\n    # External Load Balancer\n    kubernetes.io\/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io\/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # External DNS - For creating a Record Set in Google Cloud DNS\n    external-dns.alpha.kubernetes.io\/hostname: cloudarmor-ingress.kalyanreddydaida.com\nspec:\n  defaultBackend:\n    service:\n      name: cloud-armor-demo-nodeport-service\n      port:\n        number: 80     \n\n```\n## Step-06: 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com\/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  securityPolicy:\n    name: \"cloud-armor-policy-1\"\n```\n## Step-07: Deploy Kubernetes Manifests and Verify\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get po\n\n# List Services\nkubectl get svc\n\n# List Ingress Services\nkubectl get ingress\n\n# List Backendconfig\nkubectl get backendconfig\n\n# Access 
Application\nhttp:\/\/<DNS-NAME>\nhttp:\/\/cloudarmor-ingress.kalyanreddydaida.com\nObservation:\n1. We should get a 403 Forbidden error.\n2. This is expected because we have configured a Cloud Armor Policy to block All IP Addresses with 403 Error\n```\n\n## Step-08: Make a note of Public IP for your Internet Connection\n- Go to [URL: www.whatismyip.com](https:\/\/www.whatismyip.com\/) and make a note of your local desktop Public IP\n- If you are behind company \/ organization proxies, this may not work. \n- I am using my Home Internet Connection\n\n\n## Step-09: Add new rule in Cloud Armor Policy\n- Go to Network Security -> Cloud Armor -> POLICIES -> cloud-armor-policy-1 -> RULES -> ADD RULE\n- **Description:** Allow-from-my-desktop\n- **Mode:** Basic Mode (IP Address \/ Ranges only)\n- **Match:** 49.206.52.84 (My internet connection public ip)\n- **Action:** Allow\n- **Priority:** 1\n- Click on **ADD**\n- WAIT FOR 5 MINUTES for the new policy to go live\n\n## Step-10: Access Application\n```t\n# Access Application from local desktop\nhttp:\/\/<DNS-NAME>\nhttp:\/\/cloudarmor-ingress.kalyanreddydaida.com\ncurl http:\/\/cloudarmor-ingress.kalyanreddydaida.com\nObservation:\n1. 
Application access should be successful\n```\n\n## Step-11: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Delete Cloud Armor Policy\nGo to Network Security -> Cloud Armor -> POLICIES -> cloud-armor-policy-1 -> DELETE\n```\n\n\n## References\n- https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#cloud_armor\n- https:\/\/cloud.google.com\/armor\/docs\/security-policy-overview\n- https:\/\/cloud.google.com\/armor\/docs\/integrating-cloud-armor\n- https:\/\/cloud.google.com\/armor\/docs\/configure-security-policie","site":"gcp gke docs","answers_cleaned":"    title  GCP Google Kubernetes Engine GKE Ingress with Cloud Armor description  Implement GCP Google Kubernetes Engine GKE Ingress with Cloud Armor         Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  REGION  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123    List Kubernetes Nodes kubectl get nodes      3  Registered Domain using Google Cloud Domains 4  External DNS Controller installed and ready to use    t   List External DNS Pods kubectl  n external dns ns get pods     5  Verify if External IP Address is created    t   List External IP Address gcloud compute addresses list    Describe External IP Address  gcloud compute addresses describe gke ingress extip1   global          Step 01  Introduction   Ingress Service with Cloud Armor     Step 02  Create Cloud Armor Policy   Go to Network Security    Cloud Armor    CREATE POLICY     Configure Policy     Name    cloud armor policy 1     Description    Cloud Armor Demo with GKE Ingress     Policy type    Backend security policy     Default rule action    Deny     Deny Status    403 
Forbidden\n- Click on **NEXT STEP**\n- **Add More Rules (Optional):** Leave to default (NO NEW RULES OTHER THAN THE EXISTING DEFAULT RULE: ALL IP ADDRESSES, DENY with 403 ERROR, Priority 2147483647)\n- Click on **NEXT STEP**\n- **Add Policy to Targets (Optional):** Leave to default\n- Click on **NEXT STEP**\n- **Advanced configurations (Adaptive Protection - optional):** Click on the **Enable** checkbox, then click on **DONE**\n- Click on **CREATE POLICY**\n\n## Step-03: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: cloud-armor-demo-deployment\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: cloud-armor-demo\n  template:\n    metadata:\n      labels:\n        app: cloud-armor-demo\n    spec:\n      containers:\n        - name: cloud-armor-demo\n          image: stacksimplify\/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n          # Readiness Probe: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/ingress#def_inf_hc\n          readinessProbe:\n            httpGet:\n              scheme: HTTP\n              path: \/index.html\n              port: 80\n            initialDelaySeconds: 10\n            periodSeconds: 15\n            timeoutSeconds: 5\n```\n\n## Step-04: 02-kubernetes-NodePort-service.yaml\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: cloud-armor-demo-nodeport-service\n  annotations:\n    cloud.google.com\/backend-config: '{\"ports\": {\"80\": \"my-backendconfig\"}}'\nspec:\n  type: NodePort\n  selector:\n    app: cloud-armor-demo\n  ports:\n    - port: 80\n      targetPort: 80\n```\n\n## Step-05: 03-ingress.yaml\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-cloud-armor-demo\n  annotations:\n    # External Load Balancer\n    kubernetes.io\/ingress.class: gce\n    # Static IP for Ingress Service\n    kubernetes.io\/ingress.global-static-ip-name: gke-ingress-extip1\n    # External DNS - For creating a Record Set in Google Cloud Cloud DNS\n    external-dns.alpha.kubernetes.io\/hostname: cloudarmor-ingress.kalyanreddydaida.com\nspec:\n  defaultBackend:\n    service:\n      name: cloud-armor-demo-nodeport-service\n      port:\n        number: 80\n```\n\n## Step-06: 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com\/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  # Backend service timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#timeout\n  timeoutSec: 42\n  # Connection draining timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#draining_timeout\n  connectionDraining:\n    drainingTimeoutSec: 62\n  # HTTP access logging: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#http_logging\n  logging:\n    enable: true\n    sampleRate: 1.0\n  securityPolicy:\n    name: cloud-armor-policy-1\n```\n\n## Step-07: Deploy Kubernetes Manifests and Verify\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f kube-manifests\/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get po\n\n# List Services\nkubectl get svc\n\n# List Ingress Services\nkubectl get ingress\n\n# List Backendconfig\nkubectl get backendconfig\n\n# Access Application\nhttp:\/\/<DNS-NAME>\nhttp:\/\/cloudarmor-ingress.kalyanreddydaida.com\nObservation:\n1. We should get a 403 Forbidden error\n2. This is expected because we have configured a Cloud Armor Policy to block All IP Addresses with a 403 Error\n```\n\n## Step-08: Make a note of the Public IP for your Internet Connection\n- Go to URL [www.whatismyip.com](https:\/\/www.whatismyip.com) and make a note of your local desktop's Public IP\n- If you are behind Company \/ Organization proxies, not sure if it works\n- I am using my Home Internet Connection\n\n## Step-09: Add a new rule in the Cloud Armor Policy\n- Go to Network Security -> Cloud Armor -> POLICIES -> cloud-armor-policy-1 -> RULES -> ADD RULE\n- **Description:** Allow from my desktop\n- **Mode:** Basic Mode (IP Address \/ Ranges only)\n- **Match:** 49.206.52.84 (My internet connection public ip)\n- **Action:** Allow\n- **Priority:** 1\n- Click on **ADD**\n- WAIT FOR 5 MINUTES for the new policy to go live\n\n## Step-10: Access Application\n```t\n# Access Application from local desktop\nhttp:\/\/<DNS-NAME>\nhttp:\/\/cloudarmor-ingress.kalyanreddydaida.com\ncurl http:\/\/cloudarmor-ingress.kalyanreddydaida.com\nObservation:\n1. Application access should be successful\n```\n\n## Step-11: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\/\n\n# Delete Cloud Armor Policy\nGo to Network Security -> Cloud Armor -> POLICIES -> cloud-armor-policy-1 -> DELETE\n```\n\n## References\n- https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#cloud_armor\n- https:\/\/cloud.google.com\/armor\/docs\/security-policy-overview\n- https:\/\/cloud.google.com\/armor\/docs\/integrating-cloud-armor\n- https:\/\/cloud.google.com\/armor\/docs\/configure-security-policie"}
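The console walkthrough in the record above can also be scripted. Below is a minimal sketch using the gcloud CLI, assuming the policy name `cloud-armor-policy-1` and the example desktop IP from the steps above; run it in Cloud Shell with the project already selected (cloud credentials required, so this cannot run offline):

```t
# Create the Cloud Armor security policy
# (gcloud adds a default allow rule at priority 2147483647 automatically)
gcloud compute security-policies create cloud-armor-policy-1 \
    --description "Cloud Armor demo policy"

# Switch the default rule to deny all traffic with a 403 (matches the policy setup step)
gcloud compute security-policies rules update 2147483647 \
    --security-policy cloud-armor-policy-1 \
    --action deny-403

# Allow only your desktop's public IP at priority 1 (matches the "Add new rule" step)
gcloud compute security-policies rules create 1 \
    --security-policy cloud-armor-policy-1 \
    --description "Allow from my desktop" \
    --src-ip-ranges "49.206.52.84/32" \
    --action allow

# Verify the policy and its rules
gcloud compute security-policies describe cloud-armor-policy-1
```

The BackendConfig's `securityPolicy.name` then attaches this policy to the Ingress backend, the same as in the manifest-based steps.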
{"questions":"gcp gke docs What is a POD What is a Multi Container POD Step 01 PODs Introduction Learn about Kubernetes Pods title Kubernetes PODs Step 02 PODs Demo","answers":"---\ntitle: Kubernetes PODs\ndescription: Learn about Kubernetes Pods\n---\n\n## Step-01: PODs Introduction\n- What is a POD?\n- What is a Multi-Container POD?\n\n## Step-02: PODs Demo\n### Step-02-01: Get Worker Nodes Status\n- Verify that the Kubernetes worker nodes are ready.\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT-NAME>\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# Get Worker Node Status\nkubectl get nodes\n\n# Get Worker Node Status with wide option\nkubectl get nodes -o wide\n```\n\n### Step-02-02: Create a Pod\n- Create a Pod\n```t\n# Template\nkubectl run <desired-pod-name> --image <Container-Image>\n\n# Replace Pod Name, Container Image\nkubectl run my-first-pod --image stacksimplify\/kubenginx:1.0.0\n```\n\n### Step-02-03: List Pods\n- Get the list of pods\n```t\n# List Pods\nkubectl get pods\n\n# Alias name for pods is po\nkubectl get po\n```\n\n### Step-02-04: List Pods with wide option\n- List pods with the wide option, which also shows the Node on which each Pod is running\n```t\n# List Pods with Wide Option\nkubectl get pods -o wide\n```\n\n### Step-02-05: What happened in the background when the above command was run?\n1. Kubernetes created a pod\n2. Pulled the Docker image from Docker Hub\n3. Created the container in the pod\n4. Started the container present in the pod\n\n### Step-02-06: Describe Pod\n- Describe the POD, primarily required during troubleshooting.\n- The events shown are a great help during troubleshooting.\n```t\n# To get list of pod names\nkubectl get pods\n\n# Describe the Pod\nkubectl describe pod <Pod-Name>\nkubectl describe pod my-first-pod\nObservation:\n1. 
Review Events - that's the key for troubleshooting and understanding what happened\n```\n\n### Step-02-07: Access Application\n- Currently we can access this application only inside the worker nodes.\n- To access it externally, we need to create a **NodePort or Load Balancer Service**.\n- **Services** are one of the most important concepts in Kubernetes.\n\n### Step-02-08: Delete Pod\n```t\n# To get list of pod names\nkubectl get pods\n\n# Delete Pod\nkubectl delete pod <Pod-Name>\nkubectl delete pod my-first-pod\n```\n\n## Step-03: Load Balancer Service Introduction\n- What are Services in k8s?\n- What is a Load Balancer Service?\n- How does it work?\n\n## Step-04: Demo - Expose Pod with a Service\n- Expose the pod with a service (Load Balancer Service) to access the application externally (from the internet)\n- **Ports**\n  - **port:** Port on which the service listens inside the Kubernetes cluster\n  - **targetPort:** The container port on which our application is running\n- Verify the following before LB Service creation (so you can spot what gets created)\n  - Existing Google Cloud Load Balancers (Networking Services -> Load Balancing)\n  - Existing External IP addresses\n```t\n# Create a Pod\nkubectl run <desired-pod-name> --image <Container-Image>\nkubectl run my-first-pod --image stacksimplify\/kubenginx:1.0.0\n\n# Expose Pod as a Service\nkubectl expose pod <Pod-Name> --type=LoadBalancer --port=80 --name=<Service-Name>\nkubectl expose pod my-first-pod --type=LoadBalancer --port=80 --name=my-first-service\n\n# Get Service Info\nkubectl get service\nkubectl get svc\nObservation:\n1. Initially External-IP will show as pending; after a short wait the external IP gets assigned and displayed.\n2. 
It will take 2 to 3 minutes for the external IP to be listed\n\n# Describe Service\nkubectl describe service my-first-service\n\n# Access Application\nhttp:\/\/<External-IP-from-get-service-output>\ncurl http:\/\/<External-IP-from-get-service-output>\n```\n- Verify the following after LB Service creation\n  - Google Cloud Load Balancer created; verify it\n    - Verify Backends\n    - Verify Frontends\n  - Verify **Workloads and Services** on the GKE Dashboard in the GCP Console\n\n\n## Step-05: Interact with a Pod\n### Step-05-01: Verify Pod Logs\n```t\n# Get Pod Name\nkubectl get po\n\n# Dump Pod logs\nkubectl logs <pod-name>\nkubectl logs my-first-pod\n\n# Stream pod logs with the -f option and access the application to see logs\nkubectl logs -f <pod-name>\nkubectl logs -f my-first-pod\n```\n- **Important Notes**\n- Refer to the link below and search for **Interacting with running Pods** for additional log options\n- Troubleshooting skills are very important, so please go through all the available logging options and master them.\n- **Reference:** https:\/\/kubernetes.io\/docs\/reference\/kubectl\/cheatsheet\/\n\n### Step-05-02: Connect to a Container in a POD and execute commands\n```t\n# Connect to Nginx Container in a POD\nkubectl exec -it <pod-name> -- \/bin\/bash\nkubectl exec -it my-first-pod -- \/bin\/bash\n\n# Execute some commands in Nginx container\nls\ncd \/usr\/share\/nginx\/html\ncat index.html\nexit\n```\n\n### Step-05-03: Running individual commands in a Container\n```t\n# Template\nkubectl exec -it <pod-name> -- <COMMAND>\n\n# Sample Commands\nkubectl exec -it my-first-pod -- env\nkubectl exec -it my-first-pod -- ls\nkubectl exec -it my-first-pod -- cat \/usr\/share\/nginx\/html\/index.html\n```\n\n## Step-06: Get YAML Output of Pod & Service\n### Get YAML Output\n```t\n# Get pod definition YAML output\nkubectl get pod my-first-pod -o yaml\n\n# Get service definition YAML output\nkubectl get service my-first-service -o yaml\n```\n\n## Step-07: Clean-Up\n```t\n# Get all Objects in 
default namespace\nkubectl get all\n\n# Delete Services\nkubectl delete svc my-first-service\n\n# Delete Pod\nkubectl delete pod my-first-pod\n\n# Get all Objects in default namespace\nkubectl get all\n```\n\n\n## LOGS - More Options\n\n```t\n# Return snapshot logs from pod nginx with only one container\nkubectl logs nginx\n\n# Return snapshot of previous terminated ruby container logs from pod web-1\nkubectl logs -p -c ruby web-1\n\n# Begin streaming the logs of the ruby container in pod web-1\nkubectl logs -f -c ruby web-1\n\n# Display only the most recent 20 lines of output in pod nginx\nkubectl logs --tail=20 nginx\n\n# Show all logs from pod nginx written in the last hour\nkubectl logs --since=1h nginx\n```","site":"gcp gke docs"}
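The imperative `kubectl run` and `kubectl expose` commands in the demo above have a declarative equivalent. Below is a minimal sketch using the same pod name, image, and service name as the demo; note that `kubectl run` actually labels the pod `run=my-first-pod`, while this sketch uses an explicit `app` label for the Service selector:

```yaml
# Declarative equivalent of: kubectl run my-first-pod --image stacksimplify/kubenginx:1.0.0
apiVersion: v1
kind: Pod
metadata:
  name: my-first-pod
  labels:
    app: my-first-pod
spec:
  containers:
    - name: kubenginx
      image: stacksimplify/kubenginx:1.0.0
      ports:
        - containerPort: 80
---
# Declarative equivalent of: kubectl expose pod my-first-pod --type=LoadBalancer --port=80 --name=my-first-service
apiVersion: v1
kind: Service
metadata:
  name: my-first-service
spec:
  type: LoadBalancer
  selector:
    app: my-first-pod
  ports:
    - port: 80        # Service port
      targetPort: 80  # Container port
```

Apply both with `kubectl apply -f <file>.yaml` and clean up with `kubectl delete -f <file>.yaml`.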
{"questions":"gcp gke docs Deploy simple Kubernetes Deployment and Kubernetes Load Balancer Service and Test Create GKE Standard GKE Cluster Clean Up Learn to create Google Kubernetes Engine GKE Cluster title GCP Google Kubernetes Engine Create GKE Cluster Step 01 Introduction Configure Google CloudShell to access GKE Cluster","answers":"---\ntitle: GCP Google Kubernetes Engine - Create GKE Cluster\ndescription: Learn to create Google Kubernetes Engine GKE Cluster\n---\n\n## Step-01: Introduction\n- Create Standard GKE Cluster\n- Configure Google CloudShell to access GKE Cluster\n- Deploy simple Kubernetes Deployment and Kubernetes Load Balancer Service and Test\n- Clean-Up\n\n## Step-02: Create Standard GKE Cluster\n- Go to Kubernetes Engine -> Clusters -> CREATE\n- Select **GKE Standard -> CONFIGURE**\n- **Cluster Basics**\n  - **Name:** standard-public-cluster-1\n  - **Location type:** Regional\n  - **Region:** us-central1\n  - **Specify default node locations:** us-central1-a, us-central1-b, us-central1-c\n  - **Release Channel**\n    - **Release Channel:** Rapid Channel\n    - **Version:** LATEST AVAILABLE ON THAT DAY\n  - REST ALL LEAVE TO DEFAULTS\n- **NODE POOLS: default-pool**\n- **Node pool details**\n  - **Name:** default-pool\n  - **Number of Nodes (per zone):** 1\n  - **Node Pool Upgrade Strategy:** Surge Upgrade\n- **Nodes: Configure node settings**\n  - **Image type:** Container-Optimized OS\n  - **Machine configuration**\n    - **GENERAL PURPOSE SERIES:** E2\n    - **Machine Type:** e2-small\n  - **Boot disk type:** Balanced persistent disk\n  - **Boot disk size (GB):** 20\n  - **Boot disk encryption:** Google-managed encryption key (default)\n  - **Enable nodes on Spot VMs:** CHECKED\n- **Node Networking:** LEAVE TO DEFAULTS\n- **Node Security:**\n  - **Access scopes:** Allow default access (LEAVE TO DEFAULT)\n  - REST ALL REVIEW AND LEAVE TO DEFAULTS\n- **Node Metadata:** REVIEW AND LEAVE TO DEFAULTS\n- **CLUSTER**\n  - 
**Automation:** REVIEW AND LEAVE TO DEFAULTS\n  - **Networking:** REVIEW AND LEAVE TO DEFAULTS\n    - **CHECK THIS BOX: Enable Dataplane V2** CHECK IT - IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED\n  - **Security:** REVIEW AND LEAVE TO DEFAULTS\n    - **CHECK THIS BOX: Enable Workload Identity** CHECK IT - IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED\n  - **Metadata:** REVIEW AND LEAVE TO DEFAULTS\n  - **Features:** REVIEW AND LEAVE TO DEFAULTS\n- CLICK ON **CREATE**\n\n## Step-03: Verify Cluster Details\n- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**\n- Review\n  - Details Tab\n  - Nodes Tab\n    - Review same nodes **Compute Engine**\n  - Storage Tab\n    - Review Storage Classes\n  - Logs Tab\n    - Review Cluster Logs\n    - Review Cluster Logs **Filter By Severity**\n\n## Step-04: Verify Additional Features in GKE on a High-Level\n### Step-04-01: Verify Workloads Tab\n- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**\n- Workloads -> **SHOW SYSTEM WORKLOADS**\n\n### Step-04-02: Verify Services & Ingress\n- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**\n- Services & Ingress -> **SHOW SYSTEM OBJECTS**\n\n### Step-04-03: Verify Applications, Secrets & ConfigMaps\n- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**\n- Applications\n- Secrets & ConfigMaps\n\n### Step-04-04: Verify Storage\n- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**\n- Storage Classes\n  - premium-rwo\n  - standard\n  - standard-rwo\n\n### Step-04-05: Verify the below\n1. Object Browser\n2. Migrate to Containers\n3. Backup for GKE\n4. Config Management\n5. 
Protect\n\n## Step-05: Google CloudShell: Connect to GKE Cluster using kubectl\n- [kubectl Authentication in GKE](https:\/\/cloud.google.com\/blog\/products\/containers-kubernetes\/kubectl-auth-changes-in-gke)\n```t\n# Verify gke-gcloud-auth-plugin Installation (if not installed, install it)\ngke-gcloud-auth-plugin --version \n\n# Install Kubectl authentication plugin for GKE\nsudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin\n\n# Verify gke-gcloud-auth-plugin Installation\ngke-gcloud-auth-plugin --version \n\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT-NAME>\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# Run kubectl with the new plugin prior to the release of v1.25\nvi ~\/.bashrc\nUSE_GKE_GCLOUD_AUTH_PLUGIN=True\n\n# Reload the environment value\nsource ~\/.bashrc\n\n# Check if Environment variable loaded in Terminal\necho $USE_GKE_GCLOUD_AUTH_PLUGIN\n\n# Verify kubectl version\nkubectl version --short\n\n# Install kubectl (if not installed)\ngcloud components install kubectl\n\n# Configure kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --zone <ZONE> --project <PROJECT-ID>\ngcloud container clusters get-credentials standard-cluster-1 --zone us-central1-c --project kdaida123\n\n# Verify Kubernetes Worker Nodes\nkubectl get nodes\n\n# Verify System Pod in kube-system Namespace\nkubectl -n kube-system get pods\n\n# Verify kubeconfig file\ncat $HOME\/.kube\/config\nkubectl config view\n```\n\n## Step-06: Review Sample Application: 01-kubernetes-deployment.yaml\n- **Folder:** kube-manifests\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 2\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # 
Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: stacksimplify\/kubenginx:1.0.0\n          ports:\n            - containerPort: 80\n```\n\n## Step-07: Review Sample Application: 02-kubernetes-loadbalancer-service.yaml\n- **Folder:** kube-manifests\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: myapp1-lb-service\nspec:\n  type: LoadBalancer # other types: ClusterIP, NodePort\n  selector:\n    app: myapp1\n  ports:\n    - name: http\n      port: 80 # Service Port\n      targetPort: 80 # Container Port\n```\n\n## Step-08: Upload Sample App to Google CloudShell\n```t\n# Upload Sample App to Google CloudShell\nGo to Google CloudShell -> 3 Dots -> Upload -> Folder -> google-kubernetes-engine\n\n# Change Directory\ncd google-kubernetes-engine\/02-Create-GKE-Cluster\n\n# Verify folder uploaded\nls kube-manifests\/\n\n# Verify Files\ncat kube-manifests\/01-kubernetes-deployment.yaml\ncat kube-manifests\/02-kubernetes-loadbalancer-service.yaml\n```\n\n## Step-09: Deploy Sample Application and Verify\n```t\n# Change Directory\ncd google-kubernetes-engine\/02-Create-GKE-Cluster\n\n# Deploy Sample App using kubectl\nkubectl apply -f kube-manifests\/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pod\n\n# List Services\nkubectl get svc\n\n# Access Sample Application\nhttp:\/\/<EXTERNAL-IP>\n```\n\n## Step-10: Verify Workloads in GKE Dashboard\n- Go to GCP Console -> Kubernetes Engine -> Workloads\n- Click on **myapp1-deployment**\n- Review all tabs\n\n## Step-11: Verify Services in GKE Dashboard\n- Go to GCP Console -> Kubernetes Engine -> Services & Ingress\n- Click on **myapp1-lb-service**\n- Review all tabs\n\n## Step-12: Verify Load Balancer\n- Go to GCP Console -> Networking Services -> Load Balancing\n- Review all tabs\n\n## Step-13: Clean-Up\n- Go to Google Cloud Shell\n```t\n# Change Directory\ncd google-kubernetes-engine\/02-Create-GKE-Cluster\n\n# Delete Kubernetes Deployment 
and Service\nkubectl delete -f kube-manifests\/\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pod\n\n# List Services\nkubectl get svc\n```\n\n\n","site":"gcp gke docs"}
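The console-driven cluster creation in the record above can be reproduced non-interactively. Below is a minimal gcloud sketch assuming the same names and settings as the walkthrough (`kdaida123` is the course's example project; adjust to yours; cloud credentials are required, so this cannot run offline):

```t
# Create the Standard GKE cluster from the CLI (mirrors the console settings in Step-02)
gcloud container clusters create standard-public-cluster-1 \
    --project kdaida123 \
    --region us-central1 \
    --node-locations us-central1-a,us-central1-b,us-central1-c \
    --release-channel rapid \
    --num-nodes 1 \
    --machine-type e2-small \
    --image-type COS_CONTAINERD \
    --disk-type pd-balanced \
    --disk-size 20 \
    --spot \
    --enable-dataplane-v2 \
    --workload-pool kdaida123.svc.id.goog

# Verify the cluster and fetch credentials for kubectl
gcloud container clusters list
gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123
```

`--num-nodes` is per zone for a regional cluster, matching the console's "Number of Nodes (per zone): 1".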
{"questions":"gcp gke docs title GCP Google Kubernetes Engine Kubernetes Limit Range Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created Implement GCP Google Kubernetes Engine Kubernetes Limit Range t","answers":"---\ntitle: GCP Google Kubernetes Engine Kubernetes Limit Range\ndescription: Implement GCP Google Kubernetes Engine Kubernetes Limit Range\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n## Step-01: Introduction\n1. Kubernetes Namespaces - LimitRange\n2. 
Kubernetes Namespaces - Declarative using YAML\n\n## Step-02: Create Namespace manifest\n- **Important Note:** The file name starts with `01-` so that, when creating the k8s objects, the namespace gets created first and the objects that depend on it don't throw an error.\n```yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: qa\n```\n\n## Step-03: Create LimitRange manifest\n- Instead of specifying `resources` (cpu and memory) in every container spec of a pod definition, we can provide the default CPU & Memory for all containers in a namespace using a `LimitRange`\n```yaml\napiVersion: v1\nkind: LimitRange\nmetadata:\n  name: default-cpu-mem-limit-range\n  namespace: qa\nspec:\n  limits:\n    - default:\n        cpu: \"400m\"  # Default CPU limit applied to containers that don't specify one\n        memory: \"256Mi\" # Default memory limit applied to containers that don't specify one\n      defaultRequest:\n        cpu: \"200m\" # Default CPU request; if omitted, it defaults to the value of limits.default.cpu\n        memory: \"128Mi\" # Default memory request; if omitted, it defaults to the value of limits.default.memory\n      max:\n        cpu: \"500m\"\n        memory: \"500Mi\"\n      min:\n        cpu: \"100m\"\n        memory: \"100Mi\"\n      type: Container\n```\n\n\n## Step-04: Demo-01: Create Kubernetes Resources & Test\n```t\n# Create Kubernetes Resources\nkubectl apply -f 01-kube-manifests-LimitRange-defaults\n\n# List Pods\nkubectl get pods -n qa -w\n\n# View Pod Specification (CPU & Memory)\nkubectl describe pod <pod-name> -n qa\nObservation: \n1. We will find that the \"Limits\" on the pod's container equal the \"default\" values from the LimitRange\n2. 
We will find that the \"Requests\" on the pod's container equal the \"defaultRequest\" values\n\n# Sample from Pod description\n    Limits:\n      cpu:     400m\n      memory:  256Mi\n    Requests:\n      cpu:        200m\n      memory:     128Mi\n\n# Get & Describe Limits\nkubectl get limits -n qa\nkubectl describe limits default-cpu-mem-limit-range -n qa\n\n# List Services\nkubectl get svc -n qa\n\n# Access Application\nhttp:\/\/<SVC-External-IP>\/\n```\n\n## Step-05: Demo-01: Clean-Up\n- Delete all Kubernetes objects created as part of this section\n```t\n# Delete All\nkubectl delete -f 01-kube-manifests-LimitRange-defaults\/\n```\n\n## Step-06: Demo-02: Update Demo-02 Deployment Manifest with Requests & Limits\n- Negative case testing\n- When deployed with these `Requests & Limits`, where `cpu=600m in limits` is above the `max cpu = 500m` in the LimitRange `default-cpu-mem-limit-range`, the pods should not be scheduled and an error should appear in the `ReplicaSet Events`.\n- **File Name:** 03-kubernetes-deployment.yaml\n```t\n# Update Demo-02 Deployment Manifest with Requests & Limits\n          resources:\n            requests:\n              memory: \"128Mi\"\n              cpu: \"450m\"\n            limits:\n              memory: \"256Mi\"\n              cpu: \"600m\"\n```\n\n## Step-07: Demo-02: Create Kubernetes Resources & Test\n```t\n# Create Kubernetes Resources\nkubectl apply -f 02-kube-manifests-LimitRange-MinMax\n\n# List Pods\nkubectl get pods -n qa\nObservation:\n1. No Pod should be scheduled\n\n# List Deployments\nkubectl get deploy -n qa\nObservation: 0\/2 ready, which means no pods were scheduled. 
Verify ReplicaSet Events\n\n# List & Describe ReplicaSets\nkubectl get rs -n qa\nkubectl describe rs <ReplicaSet-Name> -n qa\nObservation: Below error will be displayed\n Warning  FailedCreate  18s (x5 over 56s)  replicaset-controller  (combined from similar events): Error creating: pods \"myapp1-deployment-5dd9f78fd8-k5th6\" is forbidden: maximum cpu usage per Container is 500m, but limit is 600m\n\n# Get & Describe Limits\nkubectl get limits -n qa\nkubectl describe limits default-cpu-mem-limit-range -n qa\n\n# List Services\nkubectl get svc -n qa\n\n# Access Application \nhttp:\/\/<SVC-External-IP>\/\n```\n\n## Step-08: Demo-02: Update Deployment resources.limit=500m\n- **File Name:** 03-kubernetes-deployment.yaml\n```t\n# Demo-02: Update Deployment resources.limit=500m\n          resources:\n            requests:\n              memory: \"128Mi\" \n              cpu: \"450m\"\n            limits:\n              memory: \"256Mi\"\n              cpu: \"500m\" # This is equal to Max value defined in LimitRange, Pods will be scheduled.   \n```\n\n## Step-09: Demo-02: Deploy the updated Deployment\n```t\n# Deploy the Updated Deployment\nkubectl apply -f 02-kube-manifests-LimitRange-MinMax\/03-kubernetes-deployment.yaml\n\n# List Pods\nkubectl get pods -n qa\nObservation:\n1. Pods should be scheduled now. 
\n```\n\n## Step-10: Demo-02: Clean-Up\n```t\n# Delete Demo-02 Kubernetes Resources\nkubectl delete -f 02-kube-manifests-LimitRange-MinMax\n```\n\n\n## References:\n- https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/namespaces-walkthrough\/\n- https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/manage-resources\/cpu-default-namespace\/\n- https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/manage-resources\/memory-default-namespace\/\n\n","site":"gcp gke docs","answers_cleaned":"    title  GCP Google Kubernetes Engine Kubernetes Limit Range description  Implement GCP Google Kubernetes Engine Kubernetes Limit Range         Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  REGION  PROJECT gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123    List Kubernetes Nodes kubectl get nodes        Step 01  Introduction 1  Kubernetes Namespaces   LimitRange  2  Kubernetes Namespaces   Declarative using YAML     Step 02  Create Namespace manifest     Important Note    File name starts with  01    so that when creating k8s objects namespace will get created first so it don t throw an error     yaml apiVersion  v1 kind  Namespace metadata    name  qa         Step 03  Create LimitRange manifest   Instead of specifying  resources like cpu and memory  in every container spec of a pod defintion  we can provide the default CPU   Memory for all containers in a namespace using  LimitRange     yaml apiVersion  v1 kind  LimitRange metadata    name  default cpu mem limit range   namespace  qa spec    limits        default          cpu   400m     If not specified default limit is 1 vCPU per container              memory   256Mi    If not specified the Container s 
memory limit is set to 512Mi  which is the default memory limit for the namespace        defaultRequest          cpu   200m    If not specified default it will take from whatever specified in limits default cpu               memory   128Mi    If not specified default it will take from whatever specified in limits default memory       max           cpu   500m          memory   500Mi        min                 cpu   100m          memory   100Mi        type  Container            Step 04  Demo 01  Create Kubernetes Resources   Test    t   Create Kubernetes Resources kubectl apply  f 01 kube manifests LimitRange defaults    List Pods kubectl get pods  n qa  w    View Pod Specification  CPU   Memory  kubectl describe pod  pod name   n qa Observation   1  We will find the  Limits  in pod container equals to  defaults  from LimitRange 2  We will find the  Requests  in pod container equals to  defaultRequest     Sample from Pod description     Limits        cpu      400m       memory   256Mi     Requests        cpu         200m       memory      128Mi    Get   Describe Limits kubectl get limits  n qa kubectl describe limits default cpu mem limit range  n qa    List Services kubectl get svc  n qa    Access Application  http    SVC External IP           Step 05  Demo 01  Clean Up   Delete all Kubernetes objects created as part of this section    t   Delete All kubectl delete  f 01 kube manifests LimitRange defaults          Step 06  Demo 02  Update Demo 02 Deployment Manifest with Requests   Limits   Negative case testing   When deployed with these  Requests   Limits   where  cpu 600m in limits  which is above the  max cpu   500m  in LimitRange  default cpu mem limit range  it should not schedule the pods and throw  error in  ReplicaSet Events        File Name    03 kubernetes deployment yaml    t   Update Demo 02 Deployment Manifest with Requests   Limits           resources              requests                memory   128Mi                 cpu   450m               limits   
             memory   256Mi                cpu   600m            Step 07  Demo 02  Create Kubernetes Resources   Test    t   Create Kubernetes Resources kubectl apply  f 02 kube manifests LimitRange MinMax    List Pods kubectl get pods  n qa Observation  1  No Pod should be scheduled    List Deployments kubectl get deploy  n qa Observation  0 2 ready which means no pods scheduled  Verify ReplicaSet Events    List   Describe ReplicaSets kubectl get rs  n qa kubectl describe rs  ReplicaSet Name   n qa Observation  Below error will be displayed  Warning  FailedCreate  18s  x5 over 56s   replicaset controller   combined from similar events   Error creating  pods  myapp1 deployment 5dd9f78fd8 k5th6  is forbidden  maximum cpu usage per Container is 500m  but limit is 600m    Get   Describe Limits kubectl get limits  n qa kubectl describe limits default cpu mem limit range  n qa    List Services kubectl get svc  n qa    Access Application  http    SVC External IP           Step 08  Demo 02  Update Deployment resources limit 500m     File Name    03 kubernetes deployment yaml    t   Demo 02  Update Deployment resources limit 500m           resources              requests                memory   128Mi                 cpu   450m              limits                memory   256Mi                cpu   500m    This is equal to Max value defined in LimitRange  Pods will be scheduled             Step 09  Demo 02  Deploy the updated Deployment    t   Deploy the Updated Deployment kubectl apply  f 02 kube manifests LimitRange MinMax 03 kubernetes deployment yaml    List Pods kubectl get pods  n qa Observation  1  Pods should be scheduled now           Step 10  Demo 02  Clean Up    t   Delete Demo 02 Kubernetes Resources kubectl delete  f 02 kube manifests LimitRange MinMax          References    https   kubernetes io docs tasks administer cluster namespaces walkthrough    https   kubernetes io docs tasks administer cluster manage resources cpu default namespace    https   kubernetes 
io docs tasks administer cluster manage resources memory default namespace   "}
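The LimitRange behavior walked through in Demo-01 (defaults filled in) and Demo-02 (600m rejected against max 500m) can be sketched as a small simulation. This is illustrative only — the real enforcement happens in the Kubernetes LimitRanger admission controller — and it handles only CPU, in millicores, using the field names from the manifest above.

```python
def parse_cpu(v):
    # "400m" -> 400 millicores; "1" -> 1000 millicores
    return int(v[:-1]) if v.endswith("m") else int(float(v) * 1000)

def admit(container, lr):
    """Mimic LimitRange defaulting plus max/min validation for CPU."""
    limits = dict(container.get("limits", {}))
    requests = dict(container.get("requests", {}))
    limits.setdefault("cpu", lr["default"]["cpu"])            # default limit
    requests.setdefault("cpu", lr["defaultRequest"]["cpu"])   # default request
    cpu = parse_cpu(limits["cpu"])
    if cpu > parse_cpu(lr["max"]["cpu"]):
        return False, (f"maximum cpu usage per Container is "
                       f"{lr['max']['cpu']}, but limit is {limits['cpu']}")
    if cpu < parse_cpu(lr["min"]["cpu"]):
        return False, f"minimum cpu usage per Container is {lr['min']['cpu']}"
    return True, {"limits": limits, "requests": requests}

# Values from default-cpu-mem-limit-range above
lr = {"default": {"cpu": "400m"}, "defaultRequest": {"cpu": "200m"},
      "max": {"cpu": "500m"}, "min": {"cpu": "100m"}}

ok, resolved = admit({}, lr)                        # Demo-01: no resources given
ok2, msg = admit({"limits": {"cpu": "600m"}}, lr)   # Demo-02: above max -> rejected
```

With no resources specified the container comes back with the LimitRange defaults (400m limit, 200m request); with a 600m limit the rejection message matches the shape of the ReplicaSet event shown above.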
{"questions":"gcp gke docs title GCP Google Kubernetes Engine Access to Multiple Clusters We should have the two clusters created and ready standard cluster private 1 Implement GCP Google Kubernetes Engine Access to Multiple Clusters autopilot cluster private 1 Step 00 Pre requisites","answers":"---\ntitle: GCP Google Kubernetes Engine Access to Multiple Clusters\ndescription: Implement GCP Google Kubernetes Engine Access to Multiple Clusters\n---\n\n## Step-00: Pre-requisites\n- We should have the two clusters created and ready\n- standard-cluster-private-1\n- autopilot-cluster-private-1\n\n## Step-01: Introduction\n- Configure access to Multiple Clusters\n- Understand kube config file $HOME\/.kube\/config\n- Understand kubectl config command\n  - kubectl config view\n  - kubectl config current-context\n  - kubectl config use-context <context-name>\n  - kubectl config get-contexts\n  - kubectl config get-clusters\n\n\n## Step-02: Pre-requisite\n- Verify if you have any two GKE Clusters created and ready for use\n- standard-cluster-private-1\n- autopilot-cluster-private-1\n\n## Step-03: Clean-Up kube config file\n```t\n# Clean existing kube configs\ncd $HOME\/.kube\n>config\ncat config\n```\n\n## Step-04: Configure Standard Cluster Access for kubectl\n- Understand commands \n  - kubectl config view\n  - kubectl config current-context\n```t\n# View kubeconfig\nkubectl config view\n\n# Configure kubeconfig for kubectl: standard-cluster-private-1 \ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# View kubeconfig\nkubectl config view\n\n# View Cluster Information\nkubectl cluster-info\n\n# View the current context for kubectl\nkubectl config current-context\n```\n\n## Step-05: Configure Autopilot Cluster Access for kubectl\n```t\n# Configure kubeconfig for kubectl: autopilot-cluster-private-1\ngcloud container clusters get-credentials autopilot-cluster-private-1 --region us-central1 --project 
kdaida123\n\n# View the current context for kubectl\nkubectl config current-context\n\n# View Cluster Information\nkubectl cluster-info\n\n# View kubeconfig\nkubectl config view\n```\n\n## Step-06: Switch Contexts between clusters\n- Understand the kubectl config command **use-context**\n```t\n# View the current context for kubectl\nkubectl config current-context\n\n# View kubeconfig\nkubectl config view \nGet contexts.context.name to which you want to switch \n\n# Switch Context\nkubectl config use-context gke_kdaida123_us-central1_standard-cluster-private-1\n\n# View the current context for kubectl\nkubectl config current-context\n\n# View Cluster Information\nkubectl cluster-info\n```\n\n## Step-07: List Contexts configured in kubeconfig\n```t\n# List Contexts\nkubectl config get-contexts\n```\n\n## Step-08: List Clusters configured in kubeconfig\n```t\n# List Clusters\nkubectl config get-clusters\n``","site":"gcp gke docs","answers_cleaned":"    title  GCP Google Kubernetes Engine Access to Multiple Clusters description  Implement GCP Google Kubernetes Engine Access to Multiple Clusters         Step 00  Pre requisites   We should have the two clusters created and ready   standard cluster private 1   autopilot cluster private 1     Step 01  Introduction   Configure access to Multiple Clusters   Understand kube config file  HOME  kube config   Understand kubectl config command     kubectl config view     kubectl config current context     kubectl config use context  context name      kubectl config get context     kubectl config get clusters      Step 02  Pre requisite   Verify if you have any two GKE Clusters created and ready for use   standard cluster private 1   autopilot cluster private 1     Step 03  Clean Up kube config file    t   Clean existing kube configs cd  HOME  kube  config cat config         Step 04  Configure Standard Cluster Access for kubectl   Understand commands      kubectl config view     kubectl config current context    t   View 
kubeconfig kubectl config view    Configure kubeconfig for kubectl  standard cluster private 1  gcloud container clusters get credentials standard cluster private 1   region us central1   project kdaida123    View kubeconfig kubectl config view    View Cluster Information kubectl cluster info    View the current context for kubectl kubectl config current context         Step 05  Configure Autopilot Cluster Access for kubectl    t   Configure kubeconfig for kubectl  autopilot cluster private 1 gcloud container clusters get credentials autopilot cluster private 1   region us central1   project kdaida123    View the current context for kubectl kubectl config current context    View Cluster Information kubectl cluster info    View kubeconfig kubectl config view         Step 06  Switch Contexts between clusters   Understand the kubectl config command   use context      t   View the current context for kubectl kubectl config current context    View kubeconfig kubectl config view  Get contexts context name to which you want to switch     Switch Context kubectl config use context gke kdaida123 us central1 standard cluster private 1    View the current context for kubectl kubectl config current context    View Cluster Information kubectl cluster info         Step 07  List Contexts configured in kubeconfig    t   List Contexts kubectl config get contexts         Step 08  List Clusters configured in kubeconfig    t   List Clusters kubectl config get clusters   "}
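The context-switching flow in Steps 04–06 boils down to edits of the kubeconfig file's `current-context` field. A minimal sketch of what `kubectl config use-context` and `get-contexts` do, using an in-memory dict as a stand-in for the real YAML file (context names copied from the walkthrough):

```python
# Toy stand-in for $HOME/.kube/config after both get-credentials calls above.
kubeconfig = {
    "contexts": [
        {"name": "gke_kdaida123_us-central1_standard-cluster-private-1"},
        {"name": "gke_kdaida123_us-central1_autopilot-cluster-private-1"},
    ],
    # get-credentials points current-context at the cluster fetched last
    "current-context": "gke_kdaida123_us-central1_autopilot-cluster-private-1",
}

def get_contexts(cfg):
    """Like `kubectl config get-contexts`: list all context names."""
    return [c["name"] for c in cfg["contexts"]]

def use_context(cfg, name):
    """Like `kubectl config use-context`: repoint current-context."""
    if name not in get_contexts(cfg):
        raise ValueError(f"no context exists with the name: {name}")
    cfg["current-context"] = name

use_context(kubeconfig, "gke_kdaida123_us-central1_standard-cluster-private-1")
current = kubeconfig["current-context"]
```

After the switch, `current` names the standard cluster — exactly what `kubectl config current-context` reports in Step-06.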
{"questions":"gcp gke docs Configure kubeconfig for kubectl on your local terminal Verify if you are able to reach GKE Cluster using kubectl from your local terminal title gcloud cli install on macOS Learn to install gcloud cli on MacOS Install gcloud CLI on MacOS Step 01 Introduction","answers":"---\ntitle: gcloud cli install on macOS\ndescription: Learn to install gcloud cli on MacOS\n---\n\n## Step-01: Introduction\n- Install gcloud CLI on MacOS\n- Configure kubeconfig for kubectl on your local terminal\n- Verify if you are able to reach GKE Cluster using kubectl from your local terminal\n\n## Step-02: Install gcloud cli on MacOS\n- [Install gcloud cli](https:\/\/cloud.google.com\/sdk\/docs\/install-sdk#mac)\n```t\n# Verify Python Version (Supported versions are Python 3.5 to 3.8; 3.7 recommended)\npython3 -V\n\n# Determine your machine hardware \nuname -m\n\n# Create Folder\nmkdir gcloud-cli-software\n\n# Download gcloud cli based on machine hardware \n## Important Note: Download the latest version available on that respective day\nDownload Link: https:\/\/cloud.google.com\/sdk\/docs\/install-sdk#mac\n\n## As on today the below is the latest version (x86_64 bit)\ncurl -O https:\/\/dl.google.com\/dl\/cloudsdk\/channels\/rapid\/downloads\/google-cloud-cli-418.0.0-darwin-x86_64.tar.gz\n\n# Unzip binary\nls -lrta\ntar -zxf google-cloud-cli-418.0.0-darwin-x86_64.tar.gz\n\n# Run the install script with screen reader mode on:\n.\/google-cloud-sdk\/install.sh --screen-reader=true\n```\n\n## Step-03: Verify gcloud cli version\n```t\n# Open new terminal\nAS PATH is updated, open new terminal\n\n# gcloud cli version\ngcloud version\n\n## Sample Output\nKalyans-Mac-mini:gcloud-cli-software kalyanreddy$ gcloud version\nGoogle Cloud SDK 418.0.0\nbq 2.0.85\ncore 2023.02.13\ngcloud-crc32c 1.0.0\ngsutil 5.20\nKalyans-Mac-mini:gcloud-cli-software kalyanreddy$\n```\n\n## Step-04: Initialize gcloud CLI in local Terminal \n```t\n# Initialize gcloud 
CLI\n.\/google-cloud-sdk\/bin\/gcloud init\n\n# gcloud config Configurations Commands (For Reference)\ngcloud config list\ngcloud config configurations list\ngcloud config configurations activate\ngcloud config configurations create\ngcloud config configurations delete\ngcloud config configurations describe\ngcloud config configurations rename\n```\n\n## Step-05: Verify gke-gcloud-auth-plugin \n```t\n# Change Directory\ncd gcloud-cli-software\n\n## Important Note about gke-gcloud-auth-plugin: \n1. Kubernetes clients require an authentication plugin, gke-gcloud-auth-plugin, which uses the Client-go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters\n\n# Verify if gke-gcloud-auth-plugin installed\ngke-gcloud-auth-plugin --version\n\n# Install gke-gcloud-auth-plugin\ngcloud components install gke-gcloud-auth-plugin\n\n# Verify if gke-gcloud-auth-plugin installed\ngke-gcloud-auth-plugin --version\n```\n\n## Step-06: Remove any existing kubectl clients\n```t\n# Verify kubectl version\nkubectl version --short\nwhich kubectl \nObservation: \n1. We are not using kubectl from gcloud CLI and we need to fix that. \n\n# Removing existing kubectl\nwhich kubectl\nrm \/usr\/local\/bin\/kubectl\n```\n\n## Step-07: Install kubectl client from gcloud CLI\n```t\n# List gcloud components\ngcloud components list\n\n## SAMPLE OUTPUT\nStatus: Not Installed\nName: kubectl\nID: kubectl\nSize: < 1 MiB\n\n# Install kubectl client\ngcloud components install kubectl\n\n# Verify kubectl version\nOPEN NEW TERMINAL AS PATH IS UPDATED\nkubectl version --short\nwhich kubectl\n```\n\n\n## Step-08: Fix kubectl client version equal to GKE Cluster version\n- **Important Note:** You must use a kubectl version that is within one minor version difference of your Kubernetes cluster control plane. 
\n- For example, a 1.24 kubectl client works with Kubernetes Cluster 1.23, 1.24 and 1.25 clusters.\n- As our GKE cluster version is 1.26, we will also upgrade our kubectl to 1.26\n```t\n# Verify kubectl version\nOPEN NEW TERMINAL AS PATH IS UPDATED\nkubectl version --short\nwhich kubectl\n\n# Change Directory \ncd \/Users\/kalyanreddy\/Documents\/course-repos\/gcloud-cli-software\/google-cloud-sdk\/bin\/\n\n# List files\nls -lrta\n\n# Backup existing kubectl\ncp kubectl kubectl_bkup_1.24\n\n# Copy latest kubectl\ncp kubectl.1.26 kubectl\n\n# Verify kubectl version\nkubectl version --short\nwhich kubectl\n```\n\n## Step-09: Configure kubeconfig for kubectl in local desktop terminal\n```t\n# Clean-Up kubeconfig file (if any older configs exist)\nrm $HOME\/.kube\/config\n\n# Configure kubeconfig for kubectl \ngcloud container clusters get-credentials <GKE-CLUSTER-NAME> --region <REGION> --project <PROJECT>\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# Verify Kubernetes Worker Nodes\nkubectl get nodes\n\n\n# Verify System Pod in kube-system Namespace\nkubectl -n kube-system get pods\n\n# Verify kubeconfig file\ncat $HOME\/.kube\/config\nkubectl config view\n```\n\n\n\n## References\n- [gcloud CLI](https:\/\/cloud.google.com\/sdk\/gcloud)\n- [Install the Google Cloud CLI](https:\/\/cloud.google.com\/sdk\/docs\/install-sdk#mac)","site":"gcp gke docs","answers_cleaned":"    title  gcloud cli install on macOS description  Learn to install gcloud cli on MacOS         Step 01  Introduction   Install gcloud CLI on MacOS   Configure kubeconfig for kubectl on your local terminal   Verify if you are able to reach GKE Cluster using kubectl from your local terminal     Step 02  Install gcloud cli on MacOS    Install gcloud cli  https   cloud google com sdk docs install sdk mac     t   Verify Python Version  Supported versions are Python 3  3 5 to 3 8  3 7 recommended  python3  V    Determine your machine hardware 
 uname  m    Create Folder mkdir gcloud cli software    Download gcloud cli based on machine hardware     Important Note  Download the latest version available on that respective day Dowload Link  https   cloud google com sdk docs install sdk mac     As on today the below is the latest version  x86 64 bit  curl  O https   dl google com dl cloudsdk channels rapid downloads google cloud cli 418 0 0 darwin x86 64 tar gz    Unzip binary ls  lrta tar  zxf google cloud cli 418 0 0 darwin x86 64 tar gz    Run the install script with screen reader mode on    google cloud sdk install sh   screen reader true         Step 03  Verify gcloud cli version    t   Open new terminal AS PATH is updated  open new terminal    gcloud cli version gcloud version     Sample Output Kalyans Mac mini gcloud cli software kalyanreddy  gcloud version Google Cloud SDK 418 0 0 bq 2 0 85 core 2023 02 13 gcloud crc32c 1 0 0 gsutil 5 20 Kalyans Mac mini gcloud cli software kalyanreddy          Step 04  Intialize gcloud CLI in local Terminal     t   Initialize gcloud CLI   google cloud sdk bin gcloud init    gcloud config Configurations Commands  For Reference  gcloud config list gcloud config configurations list gcloud config configurations activate gcloud config configurations create gcloud config configurations delete gcloud config configurations describe gcloud config configurations rename         Step 05  Verify gke gcloud auth plugin     t   Change Directroy gcloud cli software     Important Note about gke gcloud auth plugin   1  Kubernetes clients require an authentication plugin  gke  gcloud auth plugin  which uses the Client go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters    Verify if gke gcloud auth plugin installed gke gcloud auth plugin   version    Install gke gcloud auth plugin gcloud components install gke gcloud auth plugin    Verify if gke gcloud auth plugin installed gke gcloud auth plugin   version         Step 06  Remove any 
existing kubectl clients    t   Verify kubectl version kubectl version   short which kubectl  Observation   1  We are not using kubectl from gcloud CLI and we need to fix that      Removing existing kubectl which kubectl rm  usr local bin kubectl         Step 07  Install kubectl client from gcloud CLI    t   List gcloud components gcloud components list     SAMPLE OUTPUT Status  Not Installed Name  kubectl ID  kubectl Size    1 MiB    Install kubectl client gcloud components install kubectl    Verify kubectl version OPEN NEW TERMINAL AS PATH IS UPDATED kubectl version   short which kubectl          Step 08  Fix kubectl client version equal to GKE Cluster version     Important Note    You must use a kubectl version that is within one minor version difference of your Kubernetescluster control plane     For example  a 1 24 kubectl client works with Kubernetes Cluster 1 23  1 24 and 1 25 clusters    As our GKE cluster version is 1 26  we will also upgrade our kubectl to 1 26    t   Verify kubectl version OPEN NEW TERMINAL AS PATH IS UPDATED kubectl version   short which kubectl    Change Directroy  cd  Users kalyanreddy Documents course repos gcloud cli software google cloud sdk bin     List files ls  lrta    Backup existing kubectl cp kubectl kubectl bkup 1 24    Copy latest kubectl cp kubectl 1 26 kubectl    Verify kubectl version kubectl version   short which kubectl         Step 09  Configure kubeconfig for kubectl in local desktop terminal    t   Clean Up kubeconfig file  if any older configs exists  rm  HOME  kube config    Configure kubeconfig for kubectl  gcloud container clusters get credentials  GKE CLUSTER NAME    region  REGION    project  PROJECT  gcloud container clusters get credentials standard public cluster 1   region us central1   project kdaida123    Verify Kubernetes Worker Nodes kubectl get nodes     Verify System Pod in kube system Namespace kubectl  n kube system get pods    Verify kubeconfig file cat  HOME  kube config kubectl config view       
    References    gcloud CLI  https   cloud google com sdk gcloud     Install the Google Cloud CLI  https   cloud google com sdk docs install sdk mac"}
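Step-08's version-skew rule — kubectl must be within one minor version of the cluster control plane — is easy to check programmatically. A sketch, assuming version strings shaped like the ones in the walkthrough (`v1.24.9`, `1.26.1-gke.100`):

```python
def minor_version(v):
    """Extract (major, minor) from strings like 'v1.24.9' or '1.26.1-gke.100'."""
    major, minor, *_ = v.lstrip("v").split("-")[0].split(".")
    return int(major), int(minor)

def skew_ok(client, server):
    """kubectl must be within one minor version of the control plane."""
    cmaj, cmin = minor_version(client)
    smaj, smin = minor_version(server)
    return cmaj == smaj and abs(cmin - smin) <= 1

# A 1.24 client against the 1.26 control plane is out of skew -> upgrade kubectl
old_ok = skew_ok("v1.24.9", "1.26.1-gke.100")
new_ok = skew_ok("v1.26.2", "1.26.1-gke.100")
```

This mirrors why Step-08 swaps the bundled `kubectl` for `kubectl.1.26` before talking to the 1.26 GKE cluster.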
{"questions":"gcp gke docs title GCP Google Kubernetes Engine GKE Headless Service Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl Implement GCP Google Kubernetes Engine GKE Headless Service 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Headless Service\ndescription: Implement GCP Google Kubernetes Engine GKE Headless Service\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, ZONE, PROJECT\ngcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123\n\n# List GKE Kubernetes Worker Nodes\nkubectl get nodes\n```\n## Step-01: Introduction\n- Implement Kubernetes ClusterIP and Headless Service\n- Understand Headless Service in detail\n\n## Step-02: 01-kubernetes-deployment.yaml\n```yaml\napiVersion: apps\/v1\nkind: Deployment \nmetadata: #Dictionary\n  name: myapp1-deployment\nspec: # Dictionary\n  replicas: 4\n  selector:\n    matchLabels:\n      app: myapp1\n  template:  \n    metadata: # Dictionary\n      name: myapp1-pod\n      labels: # Dictionary\n        app: myapp1  # Key value pairs\n    spec:\n      containers: # List\n        - name: myapp1-container\n          #image: stacksimplify\/kubenginx:1.0.0\n          image: us-docker.pkg.dev\/google-samples\/containers\/gke\/hello-app:2.0\n          ports: \n            - containerPort: 8080          \n```\n\n## Step-03: 02-kubernetes-clusterip-service.yaml\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-cip-service\nspec:\n  type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n     
 port: 80 # Service Port\n      targetPort: 8080 # Container Port\n```\n\n## Step-04: 03-kubernetes-headless-service.yaml\n- Add `spec.clusterIP: None`\n###  VERY IMPORTANT NOTE\n1. When using a Headless Service, we should keep the \"Service Port and Target Port\" the same. \n2. Headless Service directly sends traffic to the Pod using Pod IP and Container Port. \n3. DNS resolution happens directly from the headless service to the Pod IP.\n\n```yaml\napiVersion: v1\nkind: Service \nmetadata:\n  name: myapp1-headless-service\nspec:\n  #type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName\n  clusterIP: None\n  selector:\n    app: myapp1\n  ports: \n    - name: http\n      port: 8080 # Service Port\n      targetPort: 8080 # Container Port\n\n```\n\n## Step-05: Deploy Kubernetes Manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 01-kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\nkubectl get pods -o wide\nObservation: make a note of Pod IP\n\n# List Services\nkubectl get svc\nObservation: \n1. 
\"CLUSTER-IP\" will be \"NONE\" for Headless Service\n\n## Sample \nKalyans-Mac-mini:19-GKE-Headless-Service kalyanreddy$ kubectl get svc\nNAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE\nkubernetes                ClusterIP   10.24.0.1    <none>        443\/TCP   135m\nmyapp1-cip-service        ClusterIP   10.24.2.34   <none>        80\/TCP    4m9s\nmyapp1-headless-service   ClusterIP   None         <none>        80\/TCP    4m9s\nKalyans-Mac-mini:19-GKE-Headless-Service kalyanreddy$ \n\n```\n\n\n## Step-06: Review Curl Kubernetes Manifests\n- **Project Folder:** 02-kube-manifests-curl\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: curl-pod\nspec:\n  containers:\n  - name: curl\n    image: curlimages\/curl \n    command: [ \"sleep\", \"600\" ]\n```\n\n## Step-07: Deploy Curl-pod and Verify ClusterIP and Headless Services\n```t\n# Deploy curl-pod\nkubectl apply -f 02-kube-manifests-curl\n\n# List Services\nkubectl get svc\n\n# GKE Cluster Kubernetes Service Full DNS Name format\n<svc>.<ns>.svc.cluster.local\n\n# Will open up a terminal session into the container\nkubectl exec -it curl-pod -- sh\n\n# ClusterIP Service: nslookup and curl Test\nnslookup myapp1-cip-service.default.svc.cluster.local\ncurl myapp1-cip-service.default.svc.cluster.local\n\n### ClusterIP Service nslookup Output\n $ nslookup myapp1-cip-service.default.svc.cluster.local\nServer:\t\t10.24.0.10\nAddress:\t10.24.0.10:53\n\nName:\tmyapp1-cip-service.default.svc.cluster.local\nAddress: 10.24.2.34\n\n# Headless Service: nslookup and curl Test\nnslookup myapp1-headless-service.default.svc.cluster.local\ncurl myapp1-headless-service.default.svc.cluster.local:8080\nObservation:\n1. There is no specific IP for a Headless Service\n2. It is directly DNS-resolved to a Pod IP\n3. 
That said we should use the same port as Container Port for Headless Service (VERY VERY IMPORTANT)\n\n### Headless Service nslookup Output\n$ nslookup myapp1-headless-service.default.svc.cluster.local\nServer:\t\t10.24.0.10\nAddress:\t10.24.0.10:53\n\nName:\tmyapp1-headless-service.default.svc.cluster.local\nAddress: 10.20.0.25\nName:\tmyapp1-headless-service.default.svc.cluster.local\nAddress: 10.20.0.26\nName:\tmyapp1-headless-service.default.svc.cluster.local\nAddress: 10.20.1.28\nName:\tmyapp1-headless-service.default.svc.cluster.local\nAddress: 10.20.1.29\n```\n\n## Step-08: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f 01-kube-manifests\n\n# Delete Kubernetes Resources - Curl Pod\nkubectl delete -f 02-kube-manifests-curl\n```\n\n","site":"gcp gke docs","answers_cleaned":"    title  GCP Google Kubernetes Engine GKE Headless Service description  Implement GCP Google Kubernetes Engine GKE Headless Service         Step 00  Pre requisites 1  Verify if GKE Cluster is created 2  Verify if kubeconfig for kubectl is configured in your local terminal    t   Configure kubeconfig for kubectl gcloud container clusters get credentials  CLUSTER NAME    region  REGION    project  PROJECT     Replace Values CLUSTER NAME  ZONE  PROJECT gcloud container clusters get credentials standard public cluster 1   region us central1   project kdaida123    List GKE Kubernetes Worker Nodes kubectl get nodes        Step 01  Introduction   Implement Kubernetes ClusterIP and Headless Service   Understand Headless Service in detail     Step 02  01 kubernetes deployment yaml    yaml apiVersion  apps v1 kind  Deployment  metadata   Dictionary   name  myapp1 deployment spec    Dictionary   replicas  4   selector      matchLabels        app  myapp1   template        metadata    Dictionary       name  myapp1 pod       labels    Dictionary         app  myapp1    Key value pairs     spec        containers    List           name  myapp1 container            image  stacksimplify 
kubenginx 1 0 0           image  us docker pkg dev google samples containers gke hello app 2 0           ports                 containerPort  8080                   Step 03  02 kubernetes clusterip service yaml    yaml apiVersion  v1 kind  Service  metadata    name  myapp1 cip service spec    type  ClusterIP   ClusterIP    NodePort    LoadBalancer    ExternalName   selector      app  myapp1   ports         name  http       port  80   Service Port       targetPort  8080   Container Port         Step 04  03 kubernetes headless service yaml   Add  spec clusterIP  None       VERY IMPORTANT NODE 1  When using Headless Service  we should use both the   Service Port and Target Port  same   2  Headless Service directly sends traffic to Pod with Pod IP and Container Port   3  DNS resolution directly happens from headless service to Pod IP      yaml apiVersion  v1 kind  Service  metadata    name  myapp1 headless service spec     type  ClusterIP   ClusterIP    NodePort    LoadBalancer    ExternalName   clusterIP  None   selector      app  myapp1   ports         name  http       port  8080   Service Port       targetPort  8080   Container Port          Step 05  Deply Kubernetes Manifests    t   Deploy Kubernetes Manifests kubectl apply  f 01 kube manifests    List Deployments kubectl get deploy    List Pods kubectl get pods kubectl get pods  o wide Observation  make a note of Pod IP    List Services kubectl get svc Observation   1   CLUSTER IP  will be  NONE  for Headless Service     Sample  Kalyans Mac mini 19 GKE Headless Service kalyanreddy  kubectl get svc NAME                      TYPE        CLUSTER IP   EXTERNAL IP   PORT S    AGE kubernetes                ClusterIP   10 24 0 1     none         443 TCP   135m myapp1 cip service        ClusterIP   10 24 2 34    none         80 TCP    4m9s myapp1 headless service   ClusterIP   None          none         80 TCP    4m9s Kalyans Mac mini 19 GKE Headless Service kalyanreddy             Step 06  Review Curl Kubernetes 
Manifests     Project Folder    02 kube manifests curl    yaml apiVersion  v1 kind  Pod metadata    name  curl pod spec    containers      name  curl     image  curlimages curl      command     sleep    600            Step 07  Deply Curl pod and Verify ClusterIP and Headless Services    t   Deploy curl pod kubectl apply  f 02 kube manifests curl    List Services kubectl get svc    GKE Cluster Kubernetes Service Full DNS Name format  svc   ns  svc cluster local    Will open up a terminal session into the container kubectl exec  it curl pod    sh    ClusterIP Service  nslookup and curl Test nslookup myapp1 cip service default svc cluster local curl myapp1 cip service default svc cluster local      ClusterIP Service nslookup Outptu    nslookup myapp1 cip service default svc cluster local Server   10 24 0 10 Address  10 24 0 10 53  Name  myapp1 cip service default svc cluster local Address  10 24 2 34    Headless Service  nslookup and curl Test nslookup myapp1 headless service default svc cluster local curl myapp1 headless service default svc cluster local 8080 Observation  1  There is no specific IP for Headless Service 2  It will be directly dns resolved to Pod IP 3  That said we should use the same port as Container Port for Headless Service  VERY VERY IMPORTANT       Headless Service nslookup Output   nslookup myapp1 headless service default svc cluster local Server   10 24 0 10 Address  10 24 0 10 53  Name  myapp1 headless service default svc cluster local Address  10 20 0 25 Name  myapp1 headless service default svc cluster local Address  10 20 0 26 Name  myapp1 headless service default svc cluster local Address  10 20 1 28 Name  myapp1 headless service default svc cluster local Address  10 20 1 29         Step 08  Clean Up    t   Delete Kubernetes Resources kubectl delete  f 01 kube manifests    Delete Kubernetes Resources   Curl Pod kubectl delete  f 02 kube manifests curl      "}
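The nslookup outputs in Step-07 show the key difference: a ClusterIP service resolves to one virtual IP, while a headless service returns one A record per Pod. A toy resolver table (addresses copied from the sample output above) makes the point, and explains why the headless client must dial the container port itself:

```python
# DNS answers as seen from curl-pod in Step-07 (IPs from the sample output)
dns = {
    "myapp1-cip-service.default.svc.cluster.local": ["10.24.2.34"],
    "myapp1-headless-service.default.svc.cluster.local": [
        "10.20.0.25", "10.20.0.26", "10.20.1.28", "10.20.1.29",
    ],
}

def resolve(name):
    """Toy stand-in for the cluster DNS lookup."""
    return dns[name]

cip_ips = resolve("myapp1-cip-service.default.svc.cluster.local")
pod_ips = resolve("myapp1-headless-service.default.svc.cluster.local")

# ClusterIP: one stable VIP; kube-proxy maps service port 80 -> targetPort 8080.
# Headless: the client gets Pod IPs and must connect to containerPort 8080
# directly, which is why the headless Service sets port == targetPort == 8080.
```

One stable VIP versus four Pod A records — with no kube-proxy in the headless path, there is nothing to translate a service port, hence the "keep Service Port and Target Port the same" rule.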
{"questions":"gcp gke docs Use Google Disks Volume Snapshots and Restore Concepts applied for Kubernetes Workloads Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal title GKE Persistent Disks Volume Snapshots and Restore Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GKE Persistent Disks - Volume Snapshots and Restore\ndescription: Use Google Disks Volume Snapshots and Restore Concepts applied for Kubernetes Workloads\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Feature: Compute Engine persistent disk CSI Driver\n  - Verify the Feature **Compute Engine persistent disk CSI Driver** is enabled in the GKE Cluster. \n  - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster.\n\n## Step-01: Introduction\n1. Deploy UMS WebApp with `01-kube-manifests`\n2. Create new Users (admin102, admin103)\n3. Create Volume Snapshot Kubernetes Objects and Deploy them\n4. Delete Users (admin102, admin103)\n5. Deploy PVC Restore `03-Volume-Restore`\n6. Verify that the two users we deleted are restored in our UMS App after the restore\n7. 
Clean Up (kubectl delete -R -f <Folder>)\n\n## Step-02: Kubernetes YAML Manifests\n- **Project Folder:** 01-kube-manifests\n- No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo`\n- 01-persistent-volume-claim.yaml\n- 02-UserManagement-ConfigMap.yaml\n- 03-mysql-deployment.yaml\n- 04-mysql-clusterip-service.yaml\n- 05-UserMgmtWebApp-Deployment.yaml\n- 06-UserMgmtWebApp-LoadBalancer-Service.yaml\n\n## Step-03: Deploy kube-manifests\n```t\n# Deploy Kubernetes Manifests\nkubectl apply -f 01-kube-manifests\/\n\n# List Storage Class\nkubectl get sc\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List ConfigMaps\nkubectl get configmap\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# List Services\nkubectl get svc\n\n# Verify Pod Logs\nkubectl get pods\nkubectl logs -f <USERMGMT-POD-NAME>\nkubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5\n```\n\n## Step-04: Verify Persistent Disks\n- Go to Compute Engine -> Storage -> Disks\n- Search for `4GB` Persistent Disk\n- **Observation:** Review the below items\n  - **Zones:** us-central1-c\n  - **Type:** Balanced persistent disk\n  - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk\n\n## Step-05: Access Application\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Create New User admin102\nUsername: admin102\nPassword: password102\nFirst Name: fname102\nLast Name: lname102\nEmail Address: admin102@stacksimplify.com\nSocial Security Address: ssn102\n\n# Create New User admin103\nUsername: admin103\nPassword: password103\nFirst Name: fname103\nLast Name: lname103\nEmail Address: admin103@stacksimplify.com\nSocial Security Address: ssn103\n```\n\n## Step-06: 02-Volume-Snapshot: Create Volume Snapshots\n- **Project Folder:** 02-Volume-Snapshot\n### Step-06-01: 01-VolumeSnapshotClass.yaml\n```yaml\napiVersion: 
snapshot.storage.k8s.io\/v1\nkind: VolumeSnapshotClass\nmetadata:\n  name: my-snapshotclass\ndriver: pd.csi.storage.gke.io\ndeletionPolicy: Delete\n#parameters: \n#  storage-locations: us-east2\n\n# Optional Note: \n# To use a custom storage location, add a storage-locations parameter to the snapshot class. \n# To use this parameter, your clusters must use version 1.21 or later.\n```\n### Step-06-02: 02-VolumeSnapshot.yaml\n```yaml\napiVersion: snapshot.storage.k8s.io\/v1\nkind: VolumeSnapshot\nmetadata:\n  name: my-snapshot1\nspec:\n  volumeSnapshotClassName: my-snapshotclass\n  source:\n    persistentVolumeClaimName: mysql-pv-claim\n```\n### Step-06-03: Deploy Volume Snapshot Kubernetes Manifests\n```t\n# Deploy Volume Snapshot Kubernetes Manifests\nkubectl apply -f 02-Volume-Snapshot\/\n\n# List VolumeSnapshotClass\nkubectl get volumesnapshotclass\n\n# Describe VolumeSnapshotClass\nkubectl describe volumesnapshotclass my-snapshotclass\n\n# List VolumeSnapshot\nkubectl get volumesnapshot\n\n# Describe VolumeSnapshot\nkubectl describe volumesnapshot my-snapshot1\n\n# Verify the Snapshots\nGo to Compute Engine -> Storage -> Snapshots\nObservation:\n1. You should find the new snapshot created\n2. Review the \"Creation Time\"\n3. 
Review the \"Disk Size: 4GB\"\n```\n\n## Step-07: Delete users admin102, admin103\n```t\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\n\n# Delete Users\nadmin102\nadmin103\n```\n\n\n## Step-08: 03-Volume-Restore: Create Volume Restore\n- **Project Folder:** 03-Volume-Restore\n### Step-08-01: 01-restore-pvc.yaml\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: pvc-restore\nspec:\n  dataSource:\n    name: my-snapshot1\n    kind: VolumeSnapshot\n    apiGroup: snapshot.storage.k8s.io\n  storageClassName: standard-rwo\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 4Gi\n```\n### Step-08-02: 02-mysql-deployment.yaml\n- Update Claim Name from `claimName: mysql-pv-claim` to `claimName: pvc-restore` \n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: mysql\nspec: \n  replicas: 1\n  selector:\n    matchLabels:\n      app: mysql\n  strategy:\n    type: Recreate \n  template: \n    metadata: \n      labels: \n        app: mysql\n    spec: \n      containers:\n        - name: mysql\n          image: mysql:5.6\n          env:\n            - name: MYSQL_ROOT_PASSWORD\n              value: dbpassword11\n          ports:\n            - containerPort: 3306\n              name: mysql    \n          volumeMounts:\n            - name: mysql-persistent-storage\n              mountPath: \/var\/lib\/mysql    \n            - name: usermanagement-dbcreation-script\n              mountPath: \/docker-entrypoint-initdb.d #https:\/\/hub.docker.com\/_\/mysql Refer Initializing a fresh instance                                        \n      volumes: \n        - name: mysql-persistent-storage\n          persistentVolumeClaim:\n            #claimName: mysql-pv-claim\n            claimName: pvc-restore\n        - name: usermanagement-dbcreation-script\n          configMap:\n            name: 
usermanagement-dbcreation-script\n```\n### Step-08-03: Deploy Volume Restore Kubernetes Manifests\n```t\n# Deploy Volume Restore Kubernetes Manifests\nkubectl apply -f 03-Volume-Restore\/\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List Pods\nkubectl get pods\n\n# Restart Deployments (Optional - If ERRORS)\nkubectl rollout restart deployment mysql\nkubectl rollout restart deployment usermgmt-webapp\n\n# Review Persistent Disk\n1. Go to Compute Engine -> Storage -> Disks\n2. You should find a new \"Balanced persistent disk\" created as part of new PVC \"pvc-restore\"\n3. To get the exact Disk name for \"pvc-restore\" PVC run command \"kubectl get pvc\"\n\n# Access Application\nhttp:\/\/<ExternalIP-from-get-service-output>\nUsername: admin101\nPassword: password101\nObservation:\n1. You should find admin102, admin103 present\n2. That proves we have restored the MySQL Data using VolumeSnapshots and PVC\n```\n\n## Step-09: Clean-Up\n```t\n# Delete All (Disks, Snapshots)\nkubectl delete -f 01-kube-manifests -f 02-Volume-Snapshot -f 03-Volume-Restore\n\n# List PVC\nkubectl get pvc\n\n# List PV\nkubectl get pv\n\n# List VolumeSnapshotClass\nkubectl get volumesnapshotclass\n\n# List VolumeSnapshot\nkubectl get volumesnapshot\n\n# Verify Persistent Disks\n1. Go to Compute Engine -> Storage -> Disks -> REFRESH\n2. The two disks created as part of this demo are deleted\n\n# Verify Disk Snapshots\n1. Go to Compute Engine -> Storage -> Snapshots -> REFRESH\n2. There should not be any snapshots that we created as part of this demo. 
\n```\n\n","site":"gcp gke docs"}
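The restore flow in the record above hinges on one detail: the new PVC's `dataSource` points at a `VolumeSnapshot`, so the CSI driver pre-populates the new disk from the snapshot instead of provisioning an empty one. A small Python sketch that generates such a manifest; `build_restore_pvc` is a hypothetical helper, and the names mirror the tutorial (`pvc-restore`, `my-snapshot1`, `standard-rwo`):

```python
import json


def build_restore_pvc(name: str, snapshot: str, storage_class: str, size: str) -> dict:
    """Build a PVC manifest that restores from a VolumeSnapshot.

    The dataSource block is what turns a normal PVC into a restore.
    Note: the requested size must be at least the size of the disk
    the snapshot was taken from (4Gi in the tutorial).
    """
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "dataSource": {
                "name": snapshot,
                "kind": "VolumeSnapshot",
                "apiGroup": "snapshot.storage.k8s.io",
            },
            "storageClassName": storage_class,
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": size}},
        },
    }


if __name__ == "__main__":
    pvc = build_restore_pvc("pvc-restore", "my-snapshot1", "standard-rwo", "4Gi")
    print(json.dumps(pvc, indent=2))
```

Pointing the mysql Deployment's `claimName` at this PVC (as Step-08-02 does) is then all it takes to bring the restored data online.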
{"questions":"gcp gke docs Implement GCP Google Kubernetes Engine GKE Continuous Integration Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl title GCP Google Kubernetes Engine GKE CI 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE CI\ndescription: Implement GCP Google Kubernetes Engine GKE Continuous Integration\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n\n## Step-01: Introduction\n- Implement Continuous Integration for GKE Workloads using:\n  - Google Cloud Source\n  - Google Cloud Build\n  - Google Artifact Registry\n\n\n## Step-02: Enable APIs in Google Cloud\n```t\n# Enable APIs in Google Cloud\ngcloud services enable container.googleapis.com \\\n    cloudbuild.googleapis.com \\\n    sourcerepo.googleapis.com \\\n    artifactregistry.googleapis.com\n\n# Google Cloud Services \nGKE: container.googleapis.com     \nCloud Build: cloudbuild.googleapis.com\nCloud Source: sourcerepo.googleapis.com\nArtifact Registry: artifactregistry.googleapis.com\n```\n\n## Step-03: Create Artifact Repository\n```t\n# List Artifact Repositories\ngcloud artifacts repositories list\n\n# Create Artifact Repository\ngcloud artifacts repositories create myapps-repository \\\n  --repository-format=docker \\\n  --location=us-central1 \n\n# List Artifact Repositories\ngcloud artifacts repositories list\n\n# Describe Artifact Repository \ngcloud artifacts repositories describe myapps-repository 
--location=us-central1\n```\n\n## Step-04: Install Git client on local desktop (if not present)\n```t\n# Download and install the Git client\nhttps:\/\/git-scm.com\/downloads\n```\n\n## Step-05: Create SSH Keys for Git Repo Access\n- [Generating SSH Key Pair](https:\/\/cloud.google.com\/source-repositories\/docs\/authentication#generate_a_key_pair)\n```t\n# Change Directory\ncd 01-SSH-Keys\n\n# Create SSH Keys\nssh-keygen -t [KEY_TYPE] -C \"[USER_EMAIL]\"\nKEY_TYPE: rsa, ecdsa, ed25519\nUSER_EMAIL: dkalyanreddy@gmail.com \n\n# Replace Values KEY_TYPE, USER_EMAIL\nssh-keygen -t ed25519 -C \"dkalyanreddy@gmail.com\"\nProvide the File Name as \"id_gcp_cloud_source\"\n\n## Sample Output\nKalyans-Mac-mini:01-SSH-Keys kalyanreddy$ ssh-keygen -t ed25519 -C \"dkalyanreddy@gmail.com\"\nGenerating public\/private ed25519 key pair.\nEnter file in which to save the key (\/Users\/kalyanreddy\/.ssh\/id_ed25519): id_gcp_cloud_source\nEnter passphrase (empty for no passphrase): \nEnter same passphrase again: \nYour identification has been saved in id_gcp_cloud_source\nYour public key has been saved in id_gcp_cloud_source.pub\nThe key fingerprint is:\nSHA256:YialyCj3XaSa4b8ewk4bcK1hXxO7DDM5uiCP1J2TOZ0 dkalyanreddy@gmail.com\nThe key's randomart image is:\n+--[ED25519 256]--+\n|                 |\n|                 |\n|      . o        |\n| o . + + o       |\n|o = B % S        |\n|...B.&=X.o       |\n|....%B+Eo        |\n|.+ + *o.         |\n|. . +.+.         
|\n+----[SHA256]-----+\nKalyans-Mac-mini:01-SSH-Keys kalyanreddy$ ls -lrta\ntotal 16\ndrwxr-xr-x  6 kalyanreddy  staff  192 Jun 29 09:45 ..\n-rw-------  1 kalyanreddy  staff  419 Jun 29 09:46 id_gcp_cloud_source\ndrwxr-xr-x  4 kalyanreddy  staff  128 Jun 29 09:46 .\n-rw-r--r--  1 kalyanreddy  staff  104 Jun 29 09:46 id_gcp_cloud_source.pub\nKalyans-Mac-mini:01-SSH-Keys kalyanreddy$ \n```\n\n## Step-06: Review SSH Keys (Public and Private Keys)\n```t\n# Change Directory \ncd 01-SSH-Keys\n\n# Review Private Key: id_gcp_cloud_source\ncat id_gcp_cloud_source\n\n# Review Public Key: id_gcp_cloud_source.pub \ncat id_gcp_cloud_source.pub \n```\n\n## Step-07: Update SSH Public Key in Google Cloud Source\n- Go to -> Source Repositories -> 3 Dots -> Manage SSH Keys -> Register SSH Key\n- [Google Cloud Source URL](https:\/\/source.cloud.google.com\/)\n```t\n# Key Name\nName: gke-course\nKey: Output from command \"cat id_gcp_cloud_source.pub\" in previous step. Put content from Public Key\n```\n- Click on **Register**\n\n\n## Step-08: Update SSH Private Key in Git Config\n- Update SSH Private Key in your local desktop Git Config\n```t\n# Copy SSH Private Key from the course folder to the \".ssh\" folder in your Home Directory\ncd 01-SSH-Keys\ncp id_gcp_cloud_source $HOME\/.ssh  \n\n# Change Directory (Your local desktop home directory)\ncd $HOME\/.ssh  \n\n# Verify File in \"$HOME\/.ssh\"\nls -lrta id_gcp_cloud_source\n\n# Verify existing git \"config\" file\ncat config\n\n# Backup any existing \"config\" file\ncp config config_bkup_before_cloud_source\n\n# Update \"config\" file to point to \"id_gcp_cloud_source\" private key\nvi config\n\n## Sample Output after changes\nKalyans-Mac-mini:.ssh kalyanreddy$ cat config\nHost *\n  AddKeysToAgent yes\n  UseKeychain yes\n  IdentityFile ~\/.ssh\/id_gcp_cloud_source\nKalyans-Mac-mini:.ssh kalyanreddy$ \n\n# Backup config with cloudsource\ncp config config_with_cloud_source_key\n```\n\n## Step-09: Update Git Global Config in your local 
desktop\n```t\n# List Global Git Config\ngit config --list\n\n# Update Global Git Config\ngit config --global user.email \"YOUR_EMAIL_ADDRESS\"\ngit config --global user.name \"YOUR_NAME\"\n\n# Replace YOUR_EMAIL_ADDRESS, YOUR_NAME\ngit config --global user.name \"Kalyan Reddy Daida\"\ngit config --global user.email \"dkalyanreddy@gmail.com\"\n\n# List Global Git Config\ngit config --list\n```\n\n## Step-10: Create Git repositories in Cloud Source\n```t\n# List Cloud Source Repositories\ngcloud source repos list\n\n# Create Git repositories in Cloud Source\ngcloud source repos create myapp1-app-repo\n\n# List Cloud Source Repositories\ngcloud source repos list\n\n# Verify using Cloud Console\nSearch for -> Source Repositories \nhttps:\/\/source.cloud.google.com\/repos\n```\n\n## Step-11: Clone Cloud Source Git Repository, Commit a Change, Push to Remote Repo and Verify\n```t\n# Change Directory \ncd course-repos\n\n# Verify using Cloud Console\nSearch for -> Source Repositories \nhttps:\/\/source.cloud.google.com\/repos\nGo to Repo -> myapp1-app-repo -> SSH Authentication\n\n# Copy the git clone command and run \ngit clone ssh:\/\/dkalyanreddy@gmail.com@source.developers.google.com:2022\/p\/kdaida123\/r\/myapp1-app-repo\n\n# Change Directory\ncd myapp1-app-repo\n\n# Create a simple readme file\ntouch README.md\necho \"# GKE CI Demo\" > README.md\nls -lrta\n\n# Add Files and do local commit\ngit add .\ngit commit -am \"First Commit\"\n\n# Push file to Cloud Source Git Repo (Remote Repo)\ngit push\n\n# Verify in Git Remote Repo\nSearch for -> Source Repositories \nhttps:\/\/source.cloud.google.com\/repos\nGo to Repo -> myapp1-app-repo \n```\n\n## Step-12: Review Files in 02-Docker-Image folder\n1. Dockerfile\n2. index.html\n\n## Step-13: Copy files from 02-Docker-Image folder to Git Repo\n```t\n# Change Directory \ncd 57-GKE-Continuous-Integration\/02-Docker-Image\n\n# Copy Files to Git repo \"myapp1-app-repo\"\n1. Dockerfile\n2. 
index.html\n\n# Local Git Commit and Push to Remote Repo\ngit add .\ngit commit -am \"Second Commit\"\ngit push\n\n# Verify in Git Remote Repo\nSearch for -> Source Repositories \nhttps:\/\/source.cloud.google.com\/repos\nGo to Repo -> myapp1-app-repo \n```\n\n## Step-14: Create a container image with Cloud Build and store it in Artifact Registry using the gcloud builds command\n```t\n# Change Directory (Git App Repo: myapp1-app-repo)\ncd myapp1-app-repo\n\n# Get latest git commit id (current branch)\ngit rev-parse HEAD\n\n# Get latest git commit id first 7 chars (current branch)\ngit rev-parse --short=7 HEAD\n\n# Ensure you are in the local git repo folder where \"Dockerfile, index.html\" are present\ncd myapp1-app-repo \n\n# Create a Cloud Build build based on the latest commit \ngcloud builds submit --tag=\"us-central1-docker.pkg.dev\/${PROJECT_ID}\/${APP_ARTIFACT_REPO}\/myapp1:${COMMIT_ID}\" .\n\n# Replace Values ${PROJECT_ID}, ${APP_ARTIFACT_REPO}, ${COMMIT_ID}\ngcloud builds submit --tag=\"us-central1-docker.pkg.dev\/kdaida123\/myapps-repository\/myapp1:6f7d338\" .\n```\n\n## Step-15: Review Cloud Build YAML file\n```yaml\nsteps:\n# This step builds the container image.\n- name: 'gcr.io\/cloud-builders\/docker'\n  id: Build\n  args:\n  - 'build'\n  - '-t'\n  - 'us-central1-docker.pkg.dev\/$PROJECT_ID\/myapps-repository\/myapp1:$SHORT_SHA'\n  - '.'\n\n# This step pushes the image to Artifact Registry\n# The PROJECT_ID and SHORT_SHA variables are automatically\n# replaced by Cloud Build.\n- name: 'gcr.io\/cloud-builders\/docker'\n  id: Push\n  args:\n  - 'push'\n  - 'us-central1-docker.pkg.dev\/$PROJECT_ID\/myapps-repository\/myapp1:$SHORT_SHA'\n```\n\n## Step-16: Copy cloudbuild.yaml to Git Repo\n```t\n# Change Directory \ncd 57-GKE-Continuous-Integration\/03-cloudbuild-yaml\n\n# Copy Files to Git repo\n1. 
cloudbuild.yaml\n\n# Local Git Commit and Push to Remote Repo\ngit add .\ngit commit -am \"Third Commit\"\ngit push\n\n# Verify in Git Remote Repo\nSearch for -> Source Repositories \nhttps:\/\/source.cloud.google.com\/repos\nGo to Repo -> myapp1-app-repo \n```\n\n## Step-17: Create Continuous Integration Pipeline in Cloud Build\n- Go to Cloud Build -> Dashboard -> Region: us-central1 -> Click on **SET UP BUILD TRIGGERS** [OR]\n- Go to Cloud Build -> TRIGGERS -> Click on **CREATE TRIGGER** \n- **Name:** myapp1-ci\n- **Region:** us-central1\n- **Description:** myapp1 Continuous Integration Pipeline\n- **Tags:** environment=dev\n- **Event:** Push to a branch\n- **Source:** myapp1-app-repo\n- **Branch:** main (Auto-populated)\n- **Configuration:** Cloud Build configuration file (yaml or json)\n- **Location:** Repository\n- **Cloud Build Configuration file location:** \/cloudbuild.yaml\n- **Approval:** leave unchecked\n- **Service account:** leave to default\n- Click on **CREATE**\n\n\n## Step-18: Make a simple change in \"index.html\" and push the changes to Git Repo\n```t\n# Change Directory \ncd myapp1-app-repo\n\n# Update file index.html (change V1 to V2)\n<p>Application Version: V2<\/p>\n\n# Local Git Commit and Push to Remote Repo\ngit status\ngit add .\ngit commit -am \"V2 Commit\"\ngit push\n\n# Verify in Git Remote Repo\nSearch for -> Source Repositories \nhttps:\/\/source.cloud.google.com\/repos\nGo to Repo -> myapp1-app-repo \n```\n\n## Step-19: Verify Cloud Build CI Pipeline\n```t\n# Verify Cloud Build\n1. Go to Cloud Build -> Dashboard or go directly to Cloud Build -> History\n2. Click on Build History -> View All\n3. Verify \"BUILD LOG\"\n4. Verify \"EXECUTION DETAILS\"\n5. Verify \"VIEW RAW\"\n\n# Verify Artifact Repository\n1. Go to Artifact Registry -> myapps-repository -> myapp1\n2. 
You should find the docker image pushed to Artifact Registry\n```\n\n## Step-20: Review Kubernetes Manifests\n- **Project Folder:** 04-kube-manifests\n- 01-kubernetes-deployment.yaml\n- 02-kubernetes-loadBalancer-service.yaml\n\n## Step-21: Update Container Image to V1 Docker Image we built\n```yaml\n# 01-kubernetes-deployment.yaml: Update \"image\" \n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: us-central1-docker.pkg.dev\/kdaida123\/myapps-repository\/myapp1:d1c3b88\n          ports: \n            - containerPort: 80  \n```\n\n## Step-22: Deploy Kubernetes Manifests and Verify\n```t\n# Change Directory\nYou should be in the Course Content folder \ngoogle-kubernetes-engine\/<RESPECTIVE-SECTION>\n\n# Deploy Kubernetes Manifests\nkubectl apply -f 04-kube-manifests\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# Describe Pod (Review Events to see where the Docker Image was downloaded from)\nkubectl describe pod <POD-NAME>\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<EXTERNAL-IP-GET-SERVICE-OUTPUT>\nObservation:\n1. 
You should see \"Application Version: V1\"\n```\n\n## Step-23: Update Container Image to V2 Docker Image we built\n```yaml\n# 01-kubernetes-deployment.yaml: Update \"image\" \n    spec:\n      containers: # List\n        - name: myapp1-container\n          image: us-central1-docker.pkg.dev\/kdaida123\/myapps-repository\/myapp1:3af592c\n          ports: \n            - containerPort: 80  \n```\n\n## Step-24: Update Kubernetes Deployment and Verify\n```t\n# Deploy Kubernetes Manifests (Updated Image Tag)\nkubectl apply -f 04-kube-manifests\n\n# Restart Kubernetes Deployment (Optional - if it is not updated)\nkubectl rollout restart deployment myapp1-deployment\n\n# List Deployments\nkubectl get deploy\n\n# List Pods\nkubectl get pods\n\n# Describe Pod (Review Events to see where the Docker Image was downloaded from)\nkubectl describe pod <POD-NAME>\n\n# List Services\nkubectl get svc\n\n# Access Application\nhttp:\/\/<EXTERNAL-IP-GET-SERVICE-OUTPUT>\nObservation:\n1. You should see \"Application Version: V2\"\n```\n\n## Step-25: Clean-Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f 04-kube-manifests\n```\n\n## Step-26: How to add Approvals before starting the Build Process?\n### Step-26-01: Enable Approval in Cloud Build\n- Go to Cloud Build -> Triggers -> myapp1-ci\n- Check the box in **Approval: Require approval before build executes**\n\n### Step-26-02: Add Users to Cloud Build Approver IAM Role\n- Go to IAM & Admin -> GRANT ACCESS \n- **Add Principal:** dkalyanreddy@gmail.com\n- **Assign Roles:** Cloud Build Approver\n- Click on **SAVE**\n\n## Step-27: Update the Git Repo to test Build Approval Process\n```t\n# Change Directory \ncd myapp1-app-repo\n\n# Update file index.html (change V2 to V3)\n<p>Application Version: V3<\/p>\n\n# Local Git Commit and Push to Remote Repo\ngit status\ngit add .\ngit commit -am \"V3 Commit\"\ngit push\n\n# Verify in Git Remote Repo\nSearch for -> Source Repositories \nhttps:\/\/source.cloud.google.com\/repos\nGo to Repo 
-> myapp1-app-repo \n```\n\n## Step-28: Verify and Approve the Build\n- Go to Cloud Build -> Triggers -> myapp1-ci -> Select and Approve\n- Verify if build is successful.\n\n## References\n- [Cloud Build for Docker Images](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/tutorials\/gitops-cloud-build)","site":"gcp gke docs"}
{"questions":"gcp gke docs title Kubernetes ReplicaSets Step 02 Create ReplicaSet Learn about Kubernetes ReplicaSets What are ReplicaSets What is the advantage of using ReplicaSets Step 01 Introduction to ReplicaSets","answers":"---\ntitle: Kubernetes ReplicaSets\ndescription: Learn about Kubernetes ReplicaSets\n---\n\n## Step-01: Introduction to ReplicaSets\n- What are ReplicaSets?\n- What is the advantage of using ReplicaSets?\n\n## Step-02: Create ReplicaSet\n\n### Step-02-01: Create ReplicaSet\n- Create ReplicaSet\n```t\n# Kubernetes ReplicaSet\nkubectl create -f replicaset-demo.yml\n```\n- **replicaset-demo.yml**\n```yaml\napiVersion: apps\/v1\nkind: ReplicaSet\nmetadata:\n  name: my-helloworld-rs\n  labels:\n    app: my-helloworld\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: my-helloworld\n  template:\n    metadata:\n      labels:\n        app: my-helloworld\n    spec:\n      containers:\n      - name: my-helloworld-app\n        image: stacksimplify\/kube-helloworld:1.0.0\n```\n\n### Step-02-02: List ReplicaSets\n- Get list of ReplicaSets\n```t\n# List ReplicaSets\nkubectl get replicaset\nkubectl get rs\n```\n\n### Step-02-03: Describe ReplicaSet\n- Describe the newly created ReplicaSet\n```t\n# Describe ReplicaSet\nkubectl describe rs\/<replicaset-name>\n\nkubectl describe rs\/my-helloworld-rs\n[or]\nkubectl describe rs my-helloworld-rs\n```\n\n### Step-02-04: List of Pods\n- Get list of Pods\n```t\n# Get list of Pods\nkubectl get pods\nkubectl describe pod <pod-name>\n\n# Get list of Pods with Pod IP and the Node on which each is running\nkubectl get pods -o wide\n```\n\n### Step-02-05: Verify the Owner of the Pod\n- Verify the owner reference of the pod.\n- Verify the **\"name\"** field under **\"ownerReferences\"**. We will find the name of the ReplicaSet to which this pod belongs. 
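- The owner-reference block being checked here looks roughly like the excerpt below (a sketch, not complete Pod output; values assume the `my-helloworld-rs` ReplicaSet from this demo):
```yaml
# Illustrative excerpt of the Pod YAML (metadata section only)
metadata:
  ownerReferences:
  - apiVersion: apps\/v1
    kind: ReplicaSet          # kind of the owning object
    name: my-helloworld-rs    # the ReplicaSet this Pod belongs to
    controller: true
    blockOwnerDeletion: true
```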
\n```t\n# List Pod with Output as YAML\nkubectl get pods <pod-name> -o yaml\nkubectl get pods my-helloworld-rs-c8rrj -o yaml \n```\n\n## Step-03: Expose ReplicaSet as a Service\n- Expose ReplicaSet with a service (Load Balancer Service) to access the application externally (from internet)\n```t\n# Expose ReplicaSet as a Service\nkubectl expose rs <ReplicaSet-Name>  --type=LoadBalancer --port=80 --target-port=8080 --name=<Service-Name-To-Be-Created>\nkubectl expose rs my-helloworld-rs  --type=LoadBalancer --port=80 --target-port=8080 --name=my-helloworld-rs-service\n\n# List Services\nkubectl get service\nkubectl get svc\n```\n- **Access the Application using External or Public IP**\n```t\n# Access Application\nhttp:\/\/<External-IP-from-get-service-output>\/hello\ncurl http:\/\/<External-IP-from-get-service-output>\/hello\n\n# Observation\n1. Each time we access the application, the request will be sent to a different pod, and that pod's id will be displayed. \n```\n\n## Step-04: Test ReplicaSet Reliability or High Availability \n- Test how the high availability or reliability concept is achieved automatically in Kubernetes\n- Whenever a Pod is accidentally terminated due to some application issue, the ReplicaSet should auto-create that Pod to maintain the desired number of Replicas configured, to achieve High Availability.\n```t\n# To get Pod Name\nkubectl get pods\n\n# Delete the Pod\nkubectl delete pod <Pod-Name>\n\n# Verify the new pod got created automatically\nkubectl get pods   (Verify Age and name of new pod)\n``` \n\n## Step-05: Test ReplicaSet Scalability feature \n- Test how seamless & quick scaling is\n- Update the **replicas** field in **replicaset-demo.yml** from 3 to 6.\n```yaml\n# Before change\nspec:\n  replicas: 3\n\n# After change\nspec:\n  replicas: 6\n```\n- Update the ReplicaSet\n```t\n# Apply latest changes to ReplicaSet\nkubectl replace -f replicaset-demo.yml\n\n# Verify if new pods got created\nkubectl get pods -o wide\n```\n\n## Step-06: 
Delete ReplicaSet & Service\n### Step-06-01: Delete ReplicaSet\n```t\n# Delete ReplicaSet\nkubectl delete rs <ReplicaSet-Name>\n\n# Sample Commands\nkubectl delete rs\/my-helloworld-rs\n[or]\nkubectl delete rs my-helloworld-rs\n\n# Verify if ReplicaSet got deleted\nkubectl get rs\n```\n\n### Step-06-02: Delete Service created for ReplicaSet\n```t\n# Delete Service\nkubectl delete svc <service-name>\n\n# Sample Commands\nkubectl delete svc my-helloworld-rs-service\n[or]\nkubectl delete svc\/my-helloworld-rs-service\n\n# Verify if Service got deleted\nkubectl get svc\n```","site":"gcp gke docs"}
{"questions":"gcp gke docs Step 00 Pre requisites title GCP Google Kubernetes Engine GKE Ingress SSL Redirect 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl Implement GCP Google Kubernetes Engine GKE Ingress SSL Redirect 1 Verify if GKE Cluster is created gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress SSL Redirect\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress SSL Redirect\n---\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n```\n3. Registered Domain using Google Cloud Domains\n4. 
The DNS name for which the SSL Certificate should be created should already be added as a record in Google Cloud DNS (Example: demo1.kalyanreddydaida.com)\n\n\n## Step-01: Introduction\n- Google Managed Certificates for GKE Ingress\n- Ingress SSL\n- Ingress SSL Redirect (HTTP to HTTPS)\n\n## Step-02: 06-frontendconfig.yaml\n```yaml\napiVersion: networking.gke.io\/v1beta1\nkind: FrontendConfig\nmetadata:\n  name: my-frontend-config\nspec:\n  redirectToHttps:\n    enabled: true\n    #responseCodeName: RESPONSE_CODE\n```\n\n## Step-03: 04-Ingress-SSL.yaml\n- Add the Annotation `networking.gke.io\/v1beta1.FrontendConfig: \"my-frontend-config\"`\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-ssl\n  annotations:\n    # External Load Balancer\n    kubernetes.io\/ingress.class: \"gce\"  \n    # Static IP for Ingress Service\n    kubernetes.io\/ingress.global-static-ip-name: \"gke-ingress-extip1\"   \n    # Google Managed SSL Certificates\n    networking.gke.io\/managed-certificates: managed-cert-for-ingress\n    # SSL Redirect HTTP to HTTPS\n    networking.gke.io\/v1beta1.FrontendConfig: \"my-frontend-config\"    \nspec: \n  defaultBackend:\n    service:\n      name: app3-nginx-nodeport-service\n      port:\n        number: 80                            \n  rules:\n    - http:\n        paths:           \n          - path: \/app1\n            pathType: Prefix\n            backend:\n              service:\n                name: app1-nginx-nodeport-service\n                port: \n                  number: 80\n          - path: \/app2\n            pathType: Prefix\n            backend:\n              service:\n                name: app2-nginx-nodeport-service\n                port: \n                  number: 80                   \n```\n\n## Step-04: Deploy kube-manifests and Verify\n- We didn't clean up the Kubernetes resources from the previous `Ingress SSL` demo.\n- We are going to use them here; compared to the previous demo, we are just 
adding `06-frontendconfig.yaml`\n```t\n# Deploy Kubernetes manifests\nkubectl apply -f kube-manifests\nObservation:\n1. Only \"my-frontend-config\" will be created; the rest remain unchanged\n\n### Sample Output\nKalyans-Mac-mini:38-GKE-Ingress-Google-Managed-SSL-Redirect kalyanreddy$ kubectl apply -f kube-manifests\/\ndeployment.apps\/app1-nginx-deployment unchanged\nservice\/app1-nginx-nodeport-service unchanged\ndeployment.apps\/app2-nginx-deployment unchanged\nservice\/app2-nginx-nodeport-service unchanged\ndeployment.apps\/app3-nginx-deployment unchanged\nservice\/app3-nginx-nodeport-service unchanged\ningress.networking.k8s.io\/ingress-ssl configured\nmanagedcertificate.networking.gke.io\/managed-cert-for-ingress unchanged\nfrontendconfig.networking.gke.io\/my-frontend-config created  \nKalyans-Mac-mini:38-GKE-Ingress-Google-Managed-SSL-Redirect kalyanreddy$ \n\n\n# List FrontendConfig\nkubectl get frontendconfig\n\n# Describe FrontendConfig\nkubectl describe frontendconfig my-frontend-config\n\n# List Ingress Load Balancers\nkubectl get ingress\n\n# Describe Ingress and view Rules\nkubectl describe ingress ingress-ssl\n```\n\n\n## Step-05: Access Application\n```t\n# Important Note\nWait 2 to 3 minutes for the Load Balancer to be fully created and ready for use, else we will get HTTP 502 errors\n\n# Access Application\nhttp:\/\/<DNS-DOMAIN-NAME>\/app1\/index.html\nhttp:\/\/<DNS-DOMAIN-NAME>\/app2\/index.html\nhttp:\/\/<DNS-DOMAIN-NAME>\/\n\n# Note: Replace Domain Name registered in Cloud DNS\n# HTTP URLs: Should redirect to HTTPS URLs \nhttp:\/\/demo1.kalyanreddydaida.com\/app1\/index.html\nhttp:\/\/demo1.kalyanreddydaida.com\/app2\/index.html\nhttp:\/\/demo1.kalyanreddydaida.com\/\n\n# HTTPS URLs\nhttps:\/\/demo1.kalyanreddydaida.com\/app1\/index.html\nhttps:\/\/demo1.kalyanreddydaida.com\/app2\/index.html\nhttps:\/\/demo1.kalyanreddydaida.com\/\n```\n\n## Step-06: Clean Up\n```t\n# Delete Kubernetes Resources\nkubectl delete -f kube-manifests\n\n# Verify Load 
Balancer Deleted\nGo to Network Services -> Load Balancing -> No Load balancers should be present\n```","site":"gcp gke docs"}
{"questions":"gcp gke docs Step 00 Pre requisites title GCP Google Kubernetes Engine GKE Ingress ClientIP Affinity 2 Verify if kubeconfig for kubectl is configured in your local terminal Implement GCP Google Kubernetes Engine GKE Ingress ClientIP Affinity Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created t","answers":"---\ntitle: GCP Google Kubernetes Engine GKE Ingress ClientIP Affinity\ndescription: Implement GCP Google Kubernetes Engine GKE Ingress ClientIP Affinity\n---\n\n## Step-00: Pre-requisites\n1. Verify if GKE Cluster is created\n2. Verify if kubeconfig for kubectl is configured in your local terminal\n```t\n# Configure kubeconfig for kubectl\ngcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>\n\n# Replace Values CLUSTER-NAME, REGION, PROJECT\ngcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123\n\n# List Kubernetes Nodes\nkubectl get nodes\n```\n\n3. ExternalDNS Controller should be installed and ready to use\n```t\n# List Namespaces (external-dns-ns namespace should be present)\nkubectl get ns\n\n# List External DNS Pods\nkubectl -n external-dns-ns get pods\n```\n\n## Step-01: Introduction\n- Implement the following features for the Ingress Service\n- BackendConfig - CLIENT_IP Affinity for Ingress Service\n- We are going to create two projects\n  - **Project-01:** CLIENT_IP Affinity enabled\n  - **Project-02:** CLIENT_IP Affinity disabled\n\n## Step-02: Create External IP Address using gcloud\n```t\n# Create External IP Address 1 (IF NOT CREATED - ALREADY CREATED IN PREVIOUS SECTIONS)\ngcloud compute addresses create ADDRESS_NAME --global\ngcloud compute addresses create gke-ingress-extip1 --global\n\n# Create External IP Address 2\ngcloud compute addresses create ADDRESS_NAME --global\ngcloud compute addresses create gke-ingress-extip2 --global\n\n# Describe External IP Address to get its details\ngcloud compute addresses describe ADDRESS_NAME 
--global\ngcloud compute addresses describe gke-ingress-extip2 --global\n\n# Verify\nGo to VPC Network -> IP Addresses -> External IP Address\n```\n\n## Step-03: Project-01: Review YAML Manifests\n- **Project Folder:** 01-kube-manifests-with-clientip-affinity\n- 01-kubernetes-deployment.yaml\n- 02-kubernetes-NodePort-service.yaml\n- 03-ingress.yaml\n- 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com\/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  sessionAffinity:\n    affinityType: \"CLIENT_IP\"\n```\n\n## Step-04: Project-02: Review YAML Manifests\n- **Project Folder:** 02-kube-manifests-without-clientip-affinity\n- 01-kubernetes-deployment.yaml\n- 02-kubernetes-NodePort-service.yaml\n- 03-ingress.yaml\n- 04-backendconfig.yaml\n```yaml\napiVersion: cloud.google.com\/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig2\nspec:\n  timeoutSec: 42 # Backend service timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n```\n## Step-05: Deploy Kubernetes Manifests\n```t\n# Project-01: Deploy Kubernetes Manifests \nkubectl apply -f 01-kube-manifests-with-clientip-affinity\n\n# 
Project-02: Deploy Kubernetes Manifests \nkubectl apply -f 02-kube-manifests-without-clientip-affinity\n\n# Verify Deployments\nkubectl get deploy\n\n# Verify Pods\nkubectl get pods\n\n# Verify Node Port Services\nkubectl get svc\n\n# Verify Ingress Services\nkubectl get ingress\n\n# Verify Backend Config\nkubectl get backendconfig\n\n# Project-01: Verify Load Balancer Settings\nGo to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting\nObservation:\nClient IP Affinity setting should be in enabled state\n\n# Project-02: Verify Load Balancer Settings\nGo to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting\nObservation:\nClient IP Affinity setting should be in disabled state\n```\n\n## Step-06: Access Application\n```t\n# Project-01: Access Application using DNS or ExtIP\nhttp:\/\/ingress-with-clientip-affinity.kalyanreddydaida.com\nhttp:\/\/<EXT-IP-1>\ncurl ingress-with-clientip-affinity.kalyanreddydaida.com\nObservation:\n1. Requests will always go to only one Pod due to the CLIENT_IP Affinity we configured\n\n# Project-02: Access Application using DNS or ExtIP\nhttp:\/\/ingress-without-clientip-affinity.kalyanreddydaida.com\nhttp:\/\/<EXT-IP-2>\ncurl ingress-without-clientip-affinity.kalyanreddydaida.com\nObservation:\n1. 
Requests will be load balanced to 4 pods created as part of \"cdn-demo2\" deployment.\n```\n\n## Step-07: How to remove a setting from FrontendConfig or BackendConfig\n- To revoke an Ingress feature, you must explicitly disable the feature configuration in the FrontendConfig or BackendConfig CRD\n- **Important Note:** To clear or disable a previously enabled configuration, set the field's value to an empty string (\"\") or to a Boolean value of false, depending on the field type.\n```yaml\napiVersion: cloud.google.com\/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  sessionAffinity:\n    #affinityType: \"CLIENT_IP\"  # Disable at Step-07\n    affinityType: \"\"          # Enable at Step-07\n```\n\n## Step-08: Apply Changes and Verify\n```t\n# Apply Changes\nkubectl apply -f 01-kube-manifests-with-clientip-affinity\n\n# Verify Load Balancer \nGo to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting\nObservation:\nShould be disabled\n```\n\n## Step-09: Deleting a FrontendConfig or BackendConfig\n- [Deleting a FrontendConfig or BackendConfig](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#deleting_a_frontendconfig_or_backendconfig)\n\n## Step-10: Clean-Up\n```t\n# Project-01: Delete Kubernetes Resources \nkubectl delete -f 01-kube-manifests-with-clientip-affinity\n\n# Project-02: Delete Kubernetes Resources \nkubectl delete -f 02-kube-manifests-without-clientip-affinity\n```\n\n## Step-11: Rollback 
04-backendconfig.yaml\n- Put back `affinityType: \"CLIENT_IP\"` so it is ready for the student demo.\n```yaml\napiVersion: cloud.google.com\/v1\nkind: BackendConfig\nmetadata:\n  name: my-backendconfig\nspec:\n  timeoutSec: 42 # Backend service timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#timeout\n  connectionDraining: # Connection draining timeout: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#draining_timeout\n    drainingTimeoutSec: 62\n  logging: # HTTP access logging: https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features#http_logging\n    enable: true\n    sampleRate: 1.0\n  sessionAffinity:\n    affinityType: \"CLIENT_IP\"  # Restored after Step-07\n    #affinityType: \"\"          # Cleared in Step-07\n```\n\n## References\n- [Ingress Features](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/ingress-features)\n","site":"gcp gke docs"}
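The CLIENT_IP affinity compared in the two projects above can be illustrated with a small, self-contained sketch. Everything here is hypothetical (the backend pod names and the hash scheme are ours, not GCP's actual load-balancer implementation); it only shows why requests from one client IP keep landing on the same Pod.

```python
# Hypothetical sketch (not GCP's implementation): CLIENT_IP affinity
# deterministically maps a client IP onto one backend, so repeated
# requests from the same IP always reach the same pod.
import hashlib

BACKENDS = ["pod-a", "pod-b", "pod-c", "pod-d"]  # hypothetical backend pods

def pick_backend_client_ip(client_ip: str) -> str:
    """Hash the client IP and select a backend deterministically."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

# 100 simulated requests from the same client IP select a single backend:
choices = {pick_backend_client_ip("203.0.113.7") for _ in range(100)}
assert len(choices) == 1
```

With affinity disabled (Project-02), the load balancer is free to spread successive requests across all backends, which is what the curl observations in Step-06 demonstrate.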
{"questions":"gcp gke docs Learn to write and test Kubernetes Pods with YAML yml Step 01 Kubernetes YAML Top level Objects title Kubernetes Pods with YAML apiVersion Discuss about the k8s YAML top level objects kube base definition yml","answers":"---\ntitle: Kubernetes Pods with YAML\ndescription: Learn to write and test Kubernetes Pods with YAML\n---\n\n## Step-01: Kubernetes YAML Top level Objects\n- Discuss about the k8s YAML top level objects\n- **kube-base-definition.yml**\n```yml\napiVersion:\nkind:\nmetadata:\n  \nspec:\n```\n- [Kubernetes Reference](https:\/\/kubernetes.io\/docs\/reference\/)\n- [Kubernetes API Reference](https:\/\/kubernetes.io\/docs\/reference\/kubernetes-api\/)\n-  [Pod API Objects Reference](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#pod-v1-core)\n\n## Step-02: Create Simple Pod Definition using YAML \n- We are going to create a very basic pod definition\n- **01-pod-definition.yml**\n```yaml\napiVersion: v1 # String\nkind: Pod  # String\nmetadata: # Dictionary\n  name: myapp-pod\n  labels: # Dictionary \n    app: myapp         \nspec:\n  containers: # List\n    - name: myapp\n      image: stacksimplify\/kubenginx:1.0.0\n      ports:\n        - containerPort: 80\n```\n- **Create Pod**\n```t\n# Change Directory\ncd kube-manifests\n\n# Create Pod\nkubectl create -f 01-pod-definition.yml\n[or]\nkubectl apply -f 01-pod-definition.yml\n\n# List Pods\nkubectl get pods\n```\n\n## Step-03: Create a LoadBalancer Service\n- **02-pod-LoadBalancer-service.yml**\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: myapp-pod-loadbalancer-service  # Name of the Service\nspec:\n  type: LoadBalancer\n  selector:\n  # Loadbalance traffic across Pods matching this label selector\n    app: myapp\n  # Accept traffic sent to port 80    \n  ports: \n    - name: http\n      port: 80    # Service Port\n      targetPort: 80 # Container Port\n```\n- **Create LoadBalancer Service for Pod**\n```t\n# Create Service\nkubectl 
apply -f 02-pod-LoadBalancer-service.yml\n\n# List Service\nkubectl get svc\n\n# Access Application\nhttp:\/\/<Load-Balancer-Service-IP>\ncurl http:\/\/<Load-Balancer-Service-IP>\n```\n\n## Step-04: Clean-Up Kubernetes Pod and Service\n```t\n# Change Directory\ncd kube-manifests\n\n# Delete Pod\nkubectl delete -f 01-pod-definition.yml\n\n# Delete Service\nkubectl delete -f 02-pod-LoadBalancer-service.yml\n```\n\n## API Object References\n- [Kubernetes API Spec](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/)\n- [Pod Spec](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#pod-v1-core)\n- [Service Spec](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#service-v1-core)\n- [Kubernetes API Reference](https:\/\/kubernetes.io\/docs\/reference\/kubernetes-api\/)\n","site":"gcp gke docs"}
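The top-level objects from kube-base-definition.yml above can also be sanity-checked programmatically. A minimal stdlib-only sketch: the dict mirrors 01-pod-definition.yml, the helper name is ours, and a real workflow would parse the YAML file itself (e.g. with PyYAML) rather than hand-writing a dict.

```python
# Hypothetical sketch: the four top-level objects every Kubernetes
# manifest shares (apiVersion, kind, metadata, spec), expressed as a
# plain dict mirroring 01-pod-definition.yml.
pod_definition = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "myapp-pod", "labels": {"app": "myapp"}},
    "spec": {
        "containers": [
            {
                "name": "myapp",
                "image": "stacksimplify/kubenginx:1.0.0",
                "ports": [{"containerPort": 80}],
            }
        ]
    },
}

REQUIRED_TOP_LEVEL_KEYS = {"apiVersion", "kind", "metadata", "spec"}

def has_required_keys(manifest: dict) -> bool:
    """Check that the kube-base-definition top-level objects are present."""
    return REQUIRED_TOP_LEVEL_KEYS.issubset(manifest)

assert has_required_keys(pod_definition)
```

A check like this catches a missing `spec` or misspelled `apiVersion` before `kubectl apply` ever sees the file.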
{"questions":"GCP and reproducibility and many others What s important you should expect that other engineers would be working on your code and occasionally changing it Your code should be readable and maintainable are covered by unit tests This helps you to find errors earlier before submitting a training job or even worse deploying a model to production to find out serving is broken because of a typo One way to achieve this is splitting it into smaller pieces separate functions and classes that After you ve built a successful prototype of a machine learning model there s still plenty of Get to know recommended for testing Tensorflow code It s also typically a good idea to care about plenty of things such as operationalization of your model monitoring CI CD reliability things to do To some extent your journey as an ML engineer only begins here You d need to take","answers":"After you\u2019ve built a successful prototype of a machine learning model, there are still plenty of\nthings to do. To some extent, your journey as an ML engineer only begins here. You\u2019d need to take\ncare of plenty of things, such as the operationalization of your model: monitoring, CI\/CD, reliability\nand reproducibility, and many others. Importantly, you should expect that other engineers will\nbe working on your code and occasionally changing it. Your code should be readable and maintainable.\nOne way to achieve this is to split it into smaller pieces (separate functions and classes) that\nare covered by unit tests. This helps you find errors earlier (before submitting a training job\nor, even worse, deploying a model to production only to find out serving is broken because of a typo).\n\nGet to know the recommended [practices](https:\/\/www.tensorflow.org\/community\/contribute\/tests) for testing TensorFlow code. It\u2019s also typically a good idea to\nlook at test case examples in the TensorFlow source code. 
Use the [tf.test.TestCase](https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/test\/TestCase) class to implement your\nunit tests. Get to know this class and how it makes your life easier - e.g., it takes care of using\nthe same random seed (to make your tests more stable), has a lot of useful assertions, and\ntakes care of creating a temp dir and managing TensorFlow sessions.\n\nStart simple and continue extending your test coverage as your model gets more complex and\nneeds more debugging. It\u2019s typically a good idea to add a unit test each time you fix a specific\nerror, to make sure this error won\u2019t occur again.\n\n## Testing custom layers\nWhen you implement custom training routines, the recommended\npractice is to create\n[custom layers](https:\/\/www.tensorflow.org\/guide\/keras\/custom_layers_and_models)\nby subclassing [tf.keras.layers.Layer](https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/keras\/layers\/Layer).\nThis gives you the possibility to isolate (and test) logical pieces of your code, which makes it more\nreadable and easier to maintain.\n\nLet's play with `LinearBlock` - a simple custom layer. First, we\u2019d like to test whether the shape of the output tensor is the one we expect.\n```python\nclass LinearBlockTest(tf.test.TestCase):\n   def test_shape_default(self):\n       x = np.ones((4, 32))\n       layer = example.LinearBlock()\n       output = layer(x)\n       self.assertAllEqual(output.shape, (4, 32))\n```\n\nYou can find more examples of testing output shape by exploring the `LinearBlockTest` class.\n\nThe next thing we can also check is the actual output. 
It\u2019s not always needed, but it might be a good\nidea when you have a layer with custom logic that needs to be double-checked.\nPlease note that despite using initializers for weights, our test is not flaky (tf.test.TestCase\ntakes care of that).\nWe can also patch various pieces of the layer we\u2019re concerned about (other layers used, loss\nfunctions, stdout, etc.) to check the desired output. Let\u2019s have a look at an example where we\u2019ve\npatched the initializer:\n```python\n@patch.object(initializers, 'get', lambda _: tf.compat.v1.keras.initializers.Ones)\ndef test_output_ones(self):\n   dim = 4\n   batch_size = 3\n   output_dim = 2\n   x = np.ones((batch_size, dim))\n   layer = example.LinearBlock(output_dim)\n   output = layer(x)\n   expected_output = np.ones((batch_size, output_dim)) * (dim + 1)\n   self.assertAllClose(output, expected_output, atol=1e-4)\n```\n\n## Testing custom Keras models\nThe easiest way to test your model is to prepare a small fake dataset and run a few training steps on\nthis dataset to check whether the model can be successfully trained, and that validation and prediction\nalso work. Please keep in mind that \u201csuccessfully\u201d means all steps can be run without generating\nerrors; in the basic case we don\u2019t check whether the training itself makes sense - i.e., whether\na loss decreases to any meaningful value. 
But more about it later.\nLet\u2019s have a look at a simple example of how to test a model from this [tutorial](https:\/\/www.tensorflow.org\/tutorials\/keras\/regression).\n```python\nclass ExampleModelTest(tf.test.TestCase):\n   def _get_data(self):\n       dataset_path = tf.keras.utils.get_file(\n           'auto-mpg.data',\n           'http:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/auto-mpg\/auto-mpg.data')\n       column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',\n                       'Acceleration', 'Model Year', 'Origin']\n       dataset = pd.read_csv(dataset_path, names=column_names, na_values='?', comment='\\t',\n                             sep=' ', skipinitialspace=True)\n       dataset = dataset.dropna()\n       dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})\n       dataset = pd.get_dummies(dataset, prefix='', prefix_sep='')\n       dataset = dataset[dataset.columns].astype('float64')\n       labels = dataset.pop('MPG')\n       return dataset, labels\n\n   def test_basic(self):\n       train_dataset, train_labels = self._get_data()\n       dim = len(train_dataset.keys())\n       example_model = example.get_model(dim)\n       test_ind = train_dataset.sample(10).index\n       test_dataset, test_labels = train_dataset.iloc[test_ind], train_labels.iloc[test_ind]\n       history = example_model.fit(\n           train_dataset, train_labels, steps_per_epoch=2, epochs=2,\n           batch_size=10,\n           validation_split=0.1,\n           validation_steps=1)\n       self.assertAlmostEqual(\n           history.history['mse'][-1],\n           history.history['loss'][-1],\n           places=2)\n       _ = example_model.evaluate(test_dataset, test_labels)\n       _ = example_model.predict(test_dataset)\n```\nYou can find additional examples by looking at `ExampleModelTest` class.\n\n## Testing custom estimators\nIf we\u2019re using tf.estimator API, we can use the same approach. 
There are a few things to keep in\nmind, though. First, you might want to convert your pandas DataFrame (read from CSV or another\nsource) into a [tf.data.Dataset](https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/data\/Dataset) to make a dataset for testing purposes:\n```python\n# Pop the label column first, then build the dataset from the remaining features\nlabels = df.pop(LABEL)\nfaked_dataset = tf.data.Dataset.from_tensor_slices((dict(df), labels.values))\nreturn faked_dataset.repeat().batch(batch_size)\n```\n\nA TensorFlow training routine consists of two main pieces: first, the graph is created and compiled,\nand then the same computational graph is run over the input data step by step. You can test whether the\ngraph can be compiled in train\/eval\/... mode by just invoking the `model_fn`:\n```python\n...\nfeatures = make_faked_batch(...)\nmodel_fn(features, {}, tf.estimator.ModeKeys.TRAIN)\n```\n\nOr you can actually create an estimator and run a few training steps on the faked dataset you\u2019ve prepared:\n```python\n...\ne = tf.estimator.Estimator(model_fn, config=tf.estimator.RunConfig(model_dir=self.get_temp_dir()))\ne.train(input_fn=lambda: make_faked_batch(...), steps=3)\n```\n\n## Testing model logic\nAll of the above guarantees only that our model is formally correct (i.e., tensor input and output shapes\nmatch one another, and there are no typos or other formal errors in the code). Having these unit tests\nis typically a huge step forward since it speeds up the model development process. 
We still might\nwant to extend the test coverage, but it\u2019s also worth looking into other possibilities, for example:\n* use [TensorFlow Data Validation](https:\/\/www.tensorflow.org\/tfx\/tutorials\/data_validation\/tfdv_basic) to inspect your data for anomalies and skewness\n* use [TensorFlow Model Analysis](https:\/\/www.tensorflow.org\/tfx\/tutorials\/model_analysis\/tfma_basic) to check for performance of your trained model\n* have a look how [PerfZero](https:\/\/github.com\/tensorflow\/benchmarks\/tree\/master\/perfzero) helps\ndebug and track TensorFlow models performance with help of [tf.test.Benchmark](https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/test\/Benchmark)\n* implement integration tests if needed\n* consider adding unit tests for your tfx\/kubeflow pipelines\n* implement proper monitoring and alerting for your ML models\n* check additional materials, e.g. [What is your ML test score](https:\/\/research.google\/pubs\/pub45742\/)\nand [Testing & Debugging ML systems](https:\/\/developers.google.com\/machine-learning\/testing-debugging) course","site":"GCP","answers_cleaned":"After you ve built a successful prototype of a machine learning model  there s still plenty of things to do  To some extent  your journey as an ML engineer only begins here  You d need to take care about plenty of things such as operationalization of your model  monitoring  CI CD  reliability and reproducibility  and many others  What s important  you should expect that other engineers would be working on your code and occasionally changing it  Your code should be readable and maintainable  One way to achieve this  is splitting it into smaller pieces  separate functions and classes  that are covered by unit tests  This helps you to find errors earlier  before submitting a training job or   even worse   deploying a model to production to find out serving is broken because of a typo   Get to know recommended  practices  https   www tensorflow org community contribute 
tests for testing TensorFlow code. It's also typically a good idea to look at the test cases and examples in the TensorFlow source code.

Use the [`tf.test.TestCase`](https://www.tensorflow.org/api_docs/python/tf/test/TestCase) class to implement your unit tests. Get to know this class and how it makes your life easier: e.g., it takes care to use the same random seed (which makes your tests more stable), it provides a lot of useful assertions, and it takes care of creating temp dirs and managing TensorFlow sessions.

Start simple and keep extending your test coverage as your model gets more complex and needs more debugging. It's typically a good idea to add a unit test each time you fix a specific error, to make sure this error won't occur again.

## Testing custom layers
When you implement custom training routines, the recommended practice is to create [custom layers](https://www.tensorflow.org/guide/keras/custom_layers_and_models) by subclassing [`tf.keras.layers.Layer`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer). This gives you the possibility to isolate (and test) logical pieces of your code, which makes it more readable and easier to maintain.

Let's play with `ExampleBlock`, a simple custom layer. First, we'd like to test whether the shape of the output tensor is the one we expect:

```python
class LinearBlockTest(tf.test.TestCase):

    def test_shape_default(self):
        x = np.ones((4, 32))
        layer = example.LinearBlock()
        output = layer(x)
        self.assertAllEqual(output.shape, (4, 32))
```

You can find more examples testing output shape by exploring the `LinearBlockTest` class.

The next thing we can check is the actual output. It's not always needed, but it might be a good idea when you have a layer with custom logic that needs to be double-checked. Please note that despite using initializers for the weights, our test is not flaky: `tf.test.TestCase` takes care of that. We can also patch various pieces of the layer we're concerned about (other layers used, loss functions, stdout, etc.) to check the desired output. Let's have a look at an example where we've patched the initializer:

```python
@patch.object(initializers, 'get', lambda _: tf.compat.v1.keras.initializers.Ones)
def test_output_ones(self):
    dim = 4
    batch_size = 3
    output_dim = 2
    x = np.ones((batch_size, dim))
    layer = example.LinearBlock(output_dim)
    output = layer(x)
    expected_output = np.ones((batch_size, output_dim)) * (dim + 1)
    self.assertAllClose(output, expected_output, atol=1e-4)
```

## Testing custom Keras models
The easiest way to test your model is to prepare a small fake dataset and run a few training steps on it to check whether the model can be trained successfully, and that validation and prediction also work. Please keep in mind that "successfully" means all steps run without generating errors; in the basic case we don't check whether the training itself makes sense (i.e., whether the loss decreases to any meaningful value). But more about that later. Let's have a look at a simple example of how to test a model from this [tutorial](https://www.tensorflow.org/tutorials/keras/regression):

```python
class ExampleModelTest(tf.test.TestCase):

    def _get_data(self):
        dataset_path = tf.keras.utils.get_file(
            'auto-mpg.data',
            'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data')
        column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
                        'Acceleration', 'Model Year', 'Origin']
        dataset = pd.read_csv(dataset_path, names=column_names, na_values='?',
                              comment='\t', sep=' ', skipinitialspace=True)
        dataset = dataset.dropna()
        dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})
        dataset = pd.get_dummies(dataset, prefix='', prefix_sep='')
        dataset = dataset[dataset.columns].astype('float64')
        labels = dataset.pop('MPG')
        return dataset, labels

    def test_basic(self):
        train_dataset, train_labels = self._get_data()
        dim = len(train_dataset.keys())
        example_model = example.get_model(dim)
        test_ind = train_dataset.sample(10).index
        test_dataset, test_labels = train_dataset.iloc[test_ind], train_labels.iloc[test_ind]
        history = example_model.fit(
            train_dataset, train_labels, steps_per_epoch=2, epochs=2,
            batch_size=10,
            validation_split=0.1,
            validation_steps=1)
        self.assertAlmostEqual(
            history.history['mse'][-1],
            history.history['loss'][-1],
            places=2)
        example_model.evaluate(test_dataset, test_labels)
        example_model.predict(test_dataset)
```

You can find additional examples by looking at the `ExampleModelTest` class.

## Testing custom estimators
If we're using the tf.estimator API, we can use the same approach. There are a few points to keep in mind, though. First, you might want to convert the pandas DataFrame you've read from CSV (or another source) into a [`tf.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) to make a dataset for testing purposes:

```python
labels = df.pop(LABEL)
faked_dataset = tf.data.Dataset.from_tensor_slices((dict(df), labels.values))
return faked_dataset.repeat().batch(batch_size)
```

The TensorFlow training routine consists of two main pieces: first the graph is created and compiled, and then the same computational graph is run over the input data step by step. You can test whether the graph can be compiled in train/eval mode by just invoking the `model_fn`:

```python
features = make_faked_batch()
model_fn(features, labels=None, mode=tf.estimator.ModeKeys.TRAIN)
```

Or you can actually create an estimator and run a few training steps on the faked dataset you've prepared:

```python
e = tf.estimator.Estimator(model_fn,
                           config=tf.estimator.RunConfig(model_dir=self.get_temp_dir()))
e.train(input_fn=lambda: make_faked_batch(), steps=3)
```

## Testing model logic
All of the above guarantees only that our model is formally correct, i.e., tensor input and output shapes match one another, and there are no typos or other formal errors in the code. Having these unit tests is typically a huge step forward, since it speeds up the model development process. We still might want to extend the test coverage, and it's also worth looking into other possibilities, for example:

- use [TensorFlow Data Validation](https://www.tensorflow.org/tfx/tutorials/data_validation/tfdv_basic) to inspect your data for anomalies and skewness;
- use [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic) to check the performance of your trained model;
- have a look at how [PerfZero](https://github.com/tensorflow/benchmarks/tree/master/perfzero) helps debug and track TensorFlow model performance with the help of [`tf.test.Benchmark`](https://www.tensorflow.org/api_docs/python/tf/test/Benchmark);
- implement integration tests if needed;
- consider adding unit tests for your TFX/Kubeflow pipelines;
- implement proper monitoring and alerting for your ML models;
- check additional materials, e.g., [What is your ML test score?](https://research.google/pubs/pub45742/) and the [Testing & Debugging ML systems](https://developers.google.com/machine-learning/testing-debugging) course."}
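The `patch.object` pattern used in the layer test above isn't TensorFlow-specific. As a minimal stdlib sketch of the same technique (the names `initializers` and `build_weights` here are hypothetical stand-ins, not part of the original example):

```python
from unittest import TestCase
from unittest.mock import patch

# Hypothetical stand-ins: a registry that resolves initializer functions,
# and a helper that uses it to fill a weight vector.
class initializers:
    @staticmethod
    def get(name):
        return lambda size: [0.0] * size  # default: zeros

def build_weights(size):
    init = initializers.get('default')
    return init(size)

class BuildWeightsTest(TestCase):
    # Swap initializer resolution for a deterministic "ones" version,
    # mirroring how the layer test patches initializers.get.
    @patch.object(initializers, 'get', lambda name: lambda size: [1.0] * size)
    def test_output_ones(self):
        self.assertEqual(build_weights(3), [1.0, 1.0, 1.0])
```

Outside the patched test method, `build_weights` still resolves the real (zeros) initializer, so the patch never leaks between tests.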
{"questions":"GCP This example creates a gRCP server that connect to redis to find the name of a gRPC Example user for a given user id Application Project Structure grpcexampleredis","answers":"# gRPC Example\n\nThis example creates a gRPC server that connects to Redis to find the name of a\nuser for a given user id.\n\n## Application Project Structure\n\n ```\n    .\n    \u2514\u2500\u2500 grpc_example_redis\n        \u2514\u2500\u2500 src\n            \u2514\u2500\u2500 main\n                \u251c\u2500\u2500 java\n                        \u2514\u2500\u2500 com.example.grpc\n                            \u251c\u2500\u2500 client\n                                \u2514\u2500\u2500 ConnectClient # Example Client\n                            \u251c\u2500\u2500 server\n                                \u2514\u2500\u2500 ConnectServer # Initializes gRPC Server\n                            \u2514\u2500\u2500 service\n                                \u251c\u2500\u2500 ConnectServiceImpl # Implementation of rpc services in the proto\n                                \u2514\u2500\u2500 RedisUtil # Tool to initialize redis\n                \u2514\u2500\u2500 proto\n                    \u2514\u2500\u2500 connect_service.proto # Proto definition of the server\n       \u251c\u2500\u2500 pom.xml\n       \u2514\u2500\u2500 README.md\n```\n\n## Technology Stack\n\n1. gRPC\n2. Redis\n\n## Setup Instructions\n\n### Redis Emulator Setup\n\n#### Quick Start\n\nReference guide can be found [here](https:\/\/redis.io\/topics\/quickstart)\n\n#### Installation\n\n```\n$ wget http:\/\/download.redis.io\/redis-stable.tar.gz\n$ tar xvzf redis-stable.tar.gz\n$ cd redis-stable\n$ make\n$ make test\n$ make install\n```\n\n#### Start the Server\n\n```\n$ redis-server\n[28550] 01 Aug 19:29:28 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server \/path\/to\/redis.conf'\n[28550] 01 Aug 19:29:28 * Server started, Redis version 2.2.12\n[28550] 01 Aug 19:29:28 * The server is now ready to accept connections on port 6379\n... more logs ...\n\n```\n\n#### Test the Server\n\n```\n$ redis-cli\nredis 127.0.0.1:6379> ping\nPONG\n```\n\n#### Set Data\n\n```\nredis 127.0.0.1:6379> set 1234 MyName\nOK\nredis 127.0.0.1:6379> get 1234\n\"MyName\"\n```\n\n#### Set environment variables\n\n```\n$ export REDIS_HOST=127.0.0.1\n$ export REDIS_PORT=6379\n$ export REDIS_MAX_TOTAL_CONNECTIONS=128\n```\n\n## Usage\n\n### Initialize the server\n\n```\n$ mvn -DskipTests package exec:java\n    -Dexec.mainClass=com.example.grpc.server.ConnectServer\n\n```\n\n### Run the Client\n\n```\n$ mvn -DskipTests package exec:java\n      -Dexec.mainClass=com.example.grpc.client.ConnectClient\n```","site":"GCP"}
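For context on what the `redis-cli` session above actually sends, Redis commands travel as RESP arrays of bulk strings. A small illustrative encoder (a sketch only — the Java example talks to Redis through its client library, not hand-rolled protocol code):

```python
def encode_resp(*parts):
    """Encode one Redis command as a RESP array of bulk strings."""
    out = [b"*%d\r\n" % len(parts)]  # array header: number of arguments
    for part in parts:
        data = part.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))  # bulk string: length, then bytes
    return b"".join(out)

# The "Set Data" step above, as it appears on the wire:
wire = encode_resp("SET", "1234", "MyName")
# b'*3\r\n$3\r\nSET\r\n$4\r\n1234\r\n$6\r\nMyName\r\n'
```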
{"questions":"GCP Dataflow PubSub XML to Google cloud storage sample pipeline you may not use this file except in compliance with the License License https www apache org licenses LICENSE 2 0 You may obtain a copy of the License at Licensed under the Apache License Version 2 0 the License Copyright 2023 Google LLC","answers":"# Dataflow PubSub XML to Google cloud storage sample pipeline\n\n## License\nCopyright 2023 Google LLC\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    https:\/\/www.apache.org\/licenses\/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n# 1. Getting started\n\n## Create a new Google Cloud project\n\n**It is recommended to go through this walkthrough using a new temporary Google\nCloud project, unrelated to any of your existing Google Cloud projects.**\n\nSee https:\/\/cloud.google.com\/resource-manager\/docs\/creating-managing-projects\nfor more details. For a quick reference, please follow these steps:\n\n1. Open the [Cloud Platform Console][cloud-console].\n2. In the drop-down menu at the top, select **Create a project**.\n3. Give your project a name = <CHANGE_ME>\n4. Save your project's name to an environment variable for ease of use:\n```\nexport PROJECT=<CHANGE_ME>\n```\n\n# 2. 
Configure a local environment\n\n## Setup the test environment\n```\npython -m venv dataflow_pub_sub_xml_to_gcs\nsource .\/dataflow_pub_sub_xml_to_gcs\/bin\/activate # Linux, Mac\n# \\path\\to\\env\\Scripts\\activate # Windows\npip install -q --upgrade pip setuptools wheel\npip install 'apache-beam[gcp]'\n```\nIf you're running this on an Apple Silicon Mac and face issues when running, please run the following commands to build the _grpcio_ library from source:\n```\npip uninstall grpcio\nexport GRPC_PYTHON_LDFLAGS=\" -framework CoreFoundation\"\npip install grpcio --no-binary :all:\n```\n\n# 3. Configure the cloud environment\n\n## Setting Google Application Default Credentials\n\nSet your [Google Application Default\nCredentials][application-default-credentials] by [initializing the Google Cloud\nSDK][cloud-sdk-init] with the command:\n\n```\ngcloud init\n```\nGenerate a credentials file by running the\n[application-default login](https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/auth\/application-default\/login) command:\n\n```\ngcloud auth application-default login\n```\nMake sure to enable necessary APIs:\n```\ngcloud services enable dataflow.googleapis.com  compute.googleapis.com  logging.googleapis.com  storage-component.googleapis.com  storage-api.googleapis.com  pubsub.googleapis.com  cloudresourcemanager.googleapis.com  cloudscheduler.googleapis.com\n```\n\n[cloud-sdk-init]: https:\/\/cloud.google.com\/sdk\/docs\/initializing\n[application-default-credentials]: https:\/\/developers.google.com\/identity\/protocols\/application-default-credentials\n\n## Configure a PubSub topic\n### Pubsub Setup\nThe following [doc](https:\/\/cloud.google.com\/pubsub\/docs\/quickstart-console) can be used to set up the topic and optional subscription needed to run this example.\n\n#### Topics\nTo run this example one topic needs to be created:\n1. 
A topic to publish the XML formatted data\n```\nexport TOPIC_ID=<CHANGE_ME>\ngcloud pubsub topics create $TOPIC_ID\n```\n\n#### Subscription\n**Optionally** You can set up a custom subscription. However, this is not mandatory since the Dataflow PubSub source automatically creates one if a topic is provided.\n\n## Create a GCS bucket\n\nThe output will write to a GCS bucket:\n```\nexport BUCKET_NAME=<CHANGE_ME>\ngsutil mb gs:\/\/$BUCKET_NAME\n```\n\n# 4. Run the test\n## Start sending messages to PubSub\nExecute the message sending script as follows:\n```\npython publish2PubSub.py \\\n--project_id $PROJECT \\\n--pub_sub_topic_id $TOPIC_ID \\\n--xml_string XML_STRING \\\n--message_send_interval MESSAGE_SEND_INTERVAL\n```\nFor example:\n```\npython publish2PubSub.py \\\n--project_id $PROJECT \\\n--pub_sub_topic_id $TOPIC_ID \\\n--xml_string \"<note><to>PubSub<\/to><from>Test<\/from><heading>Test<\/heading><body>Sample body<\/body><\/note>\" \\\n--message_send_interval 1\n```\n\n## Start the Pipeline\nOpen up a new terminal and execute the following command:\n```\npython beamPubSubXml2Gcs.py \\\n--project_id $PROJECT \\\n--input_topic_id $TOPIC_ID \\\n--runner RUNNER \\\n--window_size WINDOW_SIZE \\\n--output_path \"gs:\/\/$BUCKET_NAME\/\" \\\n--num_shards NUM_SHARDS\n```\nFor example:\n```\npython beamPubSubXml2Gcs.py \\\n--project_id $PROJECT \\\n--input_topic_id $TOPIC_ID \\\n--runner DataflowRunner \\\n--window_size 1.0 \\\n--output_path \"gs:\/\/$BUCKET_NAME\/\" \\\n--num_shards 2\n```\n\n## Monitor the Dataflow Job\nNavigate to https:\/\/console.cloud.google.com\/dataflow\/jobs to locate the job\nyou just created.  Clicking on the job will let you navigate to the job\nmonitoring screen.\n\n## Debug the Pipeline\n**Optionally** This sample contains the necessary bindings to debug step by step and\/or breakpoint this code in VS Code. 
To do so, please install the VS Code Google Cloud [extension](https:\/\/cloud.google.com\/code\/docs\/vscode\/install)\n\n## View the output in GCS\n\nList the generated files in the GCS bucket and inspect their contents\n```\ngsutil ls gs:\/\/${BUCKET_NAME}\/output_location\/\ngsutil cat gs:\/\/${BUCKET_NAME}\/output_location\/*\n```\n\n# 5. Clean up\n\n## Remove cloud resources\n1. Delete the PubSub topic\n```\ngcloud pubsub topics delete $TOPIC_ID\n```\n2. Delete the GCS files\n```\ngsutil -m rm -rf \"gs:\/\/${BUCKET_NAME}\/output_location\/*\"\n```\n3. Remove the GCS bucket\n```\ngsutil rb gs:\/\/${BUCKET_NAME}\n```\n4. **Optionally** Revoke the authentication credentials that you created, and delete the local credential file.\n```\ngcloud auth application-default revoke\n```\n5. **Optionally** Revoke credentials from the gcloud CLI.\n```\ngcloud auth revoke\n```\n## Terminate the PubSub streaming\nOn the terminal where you ran the _publish2PubSub_ script, press _Ctrl+C_ and _Y_ to confirm","site":"GCP"}
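Each Pub/Sub message in this sample carries an XML `<note>` like the one passed to `publish2PubSub.py`. As a rough sketch of the parsing the pipeline must do before writing to GCS (stdlib only, independent of the actual `beamPubSubXml2Gcs.py` code):

```python
import xml.etree.ElementTree as ET

def parse_note(payload):
    """Flatten one XML <note> message body into a dict of tag -> text."""
    root = ET.fromstring(payload)
    return {child.tag: child.text for child in root}

# The sample message published above:
sample = (b"<note><to>PubSub</to><from>Test</from>"
          b"<heading>Test</heading><body>Sample body</body></note>")
# parse_note(sample) ->
# {'to': 'PubSub', 'from': 'Test', 'heading': 'Test', 'body': 'Sample body'}
```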
{"questions":"GCP This example covers how distributed data preprocessing training and serving A simple machine learning system capable of recommending songs given a user as a query using collaborative filtering and TensorFlow user and item features to be included during training can be done on GCP CloudML Deep Collaborative Filtering Unlike classic matrix factorization approaches using a neural network allows","answers":"# CloudML Deep Collaborative Filtering\nA simple machine learning system capable of recommending songs given a user as a query\nusing collaborative filtering and TensorFlow.\n\nUnlike classic matrix factorization approaches, using a neural network allows\nuser and item features to be included during training.\n\nThis example covers how distributed data preprocessing, training, and serving\ncan be done on [Google Cloud Platform](https:\/\/cloud.google.com\/)(GCP).\n\nFurther reading:\n  - [Neural Collaborative Filtering](https:\/\/arxiv.org\/abs\/1708.05031): A paper\n    on using neural networks instead of matrix factorization to perform\n    collaborative filtering.\n  - [Deep Neural Networks for YouTube Recommendations](https:\/\/ai.google\/research\/pubs\/pub45530):\n    YouTube's approach to recommending millions of videos at low latencies by\n    first generating candidates from multiple models and ranking the candidate\n    pool.\n\nFor a fully managed service, check out [Recommendations\nAI](https:\/\/cloud.google.com\/recommendations\/).\n\n## Setup\nCreate a new project on GCP and set up GCP credentials:\n```shell\ngcloud auth login\ngcloud auth application-default login\n```\n\nEnable the following APIs:\n- [Dataflow](http:\/\/console.cloud.google.com\/apis\/api\/dataflow.googleapis.com)\n- [AI Platform](http:\/\/console.cloud.google.com\/apis\/api\/ml.googleapis.com)\n\nUsing the `preprocessing\/config.example.ini` template, create\n`preprocessing\/config.ini` with the GCP project id fields filled in.\nAdditionally, you will need to 
create a GCS bucket. This code assumes a bucket\nexists by the name of `[project-id]-bucket`.\n\nSet up your python environment:\n```shell\npython3 -m venv venv\nsource .\/venv\/bin\/activate\npip install -r requirements.txt\n```\n\n## Preprocessing\nThe data preprocessing pipeline uses the\n[ListenBrainz](https:\/\/console.cloud.google.com\/marketplace\/details\/metabrainz\/listenbrainz)\ndataset hosted on [Cloud\nMarketplace](https:\/\/console.cloud.google.com\/marketplace). Data is processed\nand written to [Google Cloud Storage](https:\/\/cloud.google.com\/storage\/)(GCS) as\n[TFRecords](https:\/\/www.tensorflow.org\/tutorials\/load_data\/tf_records).\n\nThese files are read using [Cloud DataFlow](https:\/\/cloud.google.com\/dataflow\/).\nThe steps involved are as follows:\n1. Read the data in using the\n   [BigQuery](https:\/\/cloud.google.com\/bigquery\/) query found\n   [here](trainer\/query.py).\n   This query cleans the features and creates a label for each unique user-item\n   pair that exists. This label is 1 if a user has listened to a song more than\n   twice and 0 otherwise. Samples are also given weights based on how many\n   interactions there were between the user and item.\n2. Using [TensorFlow\n   Transform](https:\/\/www.tensorflow.org\/tfx\/transform\/get_started), map each\n   username and product id to an integer value and write the vocabularies to\n   text files. Leave users and items under a set frequency threshold out of the\n   vocabularies.\n3. Filter away user-item pairs where either element is outside of its\n   corresponding vocabulary.\n4. Split the data into train, validation, and test sets.\n5. Write each dataset as TFRecords to GCS.\n\n### Execution\n| Command | Description |\n|---------|-------------|\n| `bin\/run.preprocess.local.sh` | Process a sample of the data locally and write outputs to a local directory. |\n| `bin\/run.preprocess.cloud.sh` | Process the data on GCP using DataFlow and write outputs to a GCS bucket. 
|\n| `bin\/run.test.sh`             | Run unit tests for the preprocessing pipeline. |\n\n\n## Training\nA [Custom Estimator](https:\/\/www.tensorflow.org\/guide\/custom_estimators) is\ntrained using TensorFlow and [Cloud AI Platform](https:\/\/cloud.google.com\/ai-platform\/)(CAIP).\nThe training steps are as follows:\n1. Read TFRecords from GCS and create a `tf.data.Dataset` for each of them that\n   yields data in batches.\n2. Use the TensorFlow Transform output from preprocessing to transform usernames\n   and product ids into int ids.\n3. Use `user_id`s and `item_id`s to train embeddings.\n4. Add item features and create two input layers: one with user embedding\n   vectors, and another with the concatenation of item embedding vectors and\n   item features.\n5. Create a user neural net and item neural net from the input layers, ensuring\n   that the final layers are the same size.\n6. Compute the cosine similarity between the final layers of the user and item\n   nets. Take the absolute value to get a value between 0 and 1.\n7. Calculate error using log loss and train the model.\n8. Evaluate the model performance by sampling 1000 random items and calculating\n   the average recall@k when each positive sample's item is ranked against\n   these random items for the sample's user.\n9. Export a `SavedModel` for use in serving.\n\n### Execution\nTraining job scripts expect the following argument:\n- `MODEL_INPUTS_DIR`: The directory containing the TFRecords from preprocessing.\n\n| Command | Description |\n|---------|-------------|\n| `bin\/run.train.local.sh` | Train the model locally and save model checkpoints to a local model dir. |\n| `bin\/run.train.cloud.sh` | Train the model on CAIP and save model checkpoints to a GCS bucket. |\n| `bin\/run.train.tune.sh` | Train the model on CAIP as above, but using hyperparameter tuning. |\n\nNote: `SCALE_TIER` is set to `STANDARD_1` to demonstrate distributed training.\nHowever, it can be set to `BASIC` to reduce costs. 
See [scale\ntiers](https:\/\/cloud.google.com\/ml-engine\/docs\/tensorflow\/machine-types).\n\n### Tensorboard\nModel training can be monitored on Tensorboard using the following command:\n```shell\ntensorboard --logdir <path to model dir>\/<trial number>\n```\nTensorboard's projector, in particular, is very useful for debugging\nor analyzing embeddings. In the projector tab in Tensorboard, try setting the\nlabel to `name`.\n\n## Serving\nModels can be hosted on CAIP, which can be used to make online and batch predictions via JSON requests.\n1. Upload the `SavedModel` from training to CAIP.\n2. Using a file containing a list of usernames, create inputs to pass to the\n   model hosted on CAIP for predictions.\n3. Make the predictions.\n\n### Execution\nThe cloud serving job and prediction job scripts expect the same arguments:\n- `MODEL_OUTPUTS_DIR`: The model directory containing each model trial.\n- `TRIAL` (optional): The trial number to use.\nThe local serving job expects no arguments, and the local prediction job expects\nthe model version number.\n\n| Command | Description |\n|---------|-------------|\n| `bin\/run.serve.local.sh` | Upload a new version of the recommender model to CAIP using a locally trained model. |\n| `bin\/run.serve.cloud.sh` | Upload a new version of the recommender model to CAIP using a model trained on CAIP. |\n| `bin\/run.predict.local.sh` | Using `serving\/test.json`, create a prediction job on CAIP after using the local serving script. |\n| `bin\/run.predict.cloud.sh` | Using `serving\/test.json`, create a prediction job on CAIP after using the cloud serving script. 
|","site":"GCP","answers_cleaned":"  CloudML Deep Collaborative Filtering A simple machine learning system capable of recommending songs given a user as a query using collaborative filtering and TensorFlow   Unlike classic matrix factorization approaches  using a neural network allows user and item features to be included during training   This example covers how distributed data preprocessing  training  and serving can be done on  Google Cloud Platform  https   cloud google com   GCP    Further reading       Neural Collaborative Filtering  https   arxiv org abs 1708 05031   A paper     on using neural networks instead of matrix factorization to perform     collaborative filtering       Deep Neural Networks for YouTube Recommendations  https   ai google research pubs pub45530       Youtube s approach to recommending millions of videos at low latencies by     first generating candidates from multiple models and ranking the candidate     pool   For a fully managed service  check out  Recommendations AI  https   cloud google com recommendations        Setup Create a new project on GCP and set up GCP credentials     shell gcloud auth login gcloud auth application default login      Enable the following APIS     Dataflow  http   console cloud google com apis api dataflow googleapis com     AI Platform  http   console cloud google com apis api ml googleapis com   Using the  preprocessing config example ini  template  create  preprocessing config ini  with the GCP project id fields filled in  Additionally  you will need to create a GCS bucket  This code assumes a bucket exists by the name of   project id  bucket    Set up your python environment     shell python3  m venv venv source   venv bin activate pip install  r requirements txt         Preprocessing The data preprocessing pipeline uses the  ListenBrainz  https   console cloud google com marketplace details metabrainz listenbrainz  dataset hosted on  Cloud Marketplace  https   console cloud google com marketplace   
Data is processed and written to  Google Cloud Storage  https   cloud google com storage   GCS  as  TFRecords  https   www tensorflow org tutorials load data tf records    These files are read using  Cloud DataFlow  https   cloud google com dataflow    The steps involved are as follows  1  Read the data in using the     BigQuery  https   cloud google com bigquery   query found     here  trainer query py      This query cleans the features and creates a label for each unique user item    pair that exists  This label is 1 if a user has listened to a song more than    twice and 0 otherwise  Samples are also given weights based on how many    interactions there were between the user and item  2  Using  TensorFlow    Transform  https   www tensorflow org tfx transform get started   map each    username and product id to an integer value and write the vocabularies to    text files  Leave users and items under a set frequency threshold out of the    vocabularies  3  Filter away user item pairs where either element is outside of its    corresponding vocabulary  4  Split the data into train  validation  and test sets  5  Write each dataset as TFRecords to GCS       Execution   Command   Description                                bin run preprocess local sh    Process a sample of the data locally and write outputs to a local directory       bin run preprocess cloud sh    Process the data on GCP using DataFlow and write outputs to a GCS bucket       bin run test sh                Run unit tests for the preprocessing pipeline         Training A  Custom Estimator  https   www tensorflow org guide custom estimators  is trained using TensorFlow and  Cloud AI Platform  https   cloud google com ai platform   CAIP   The training steps are as follows  1  Read TFRecords from GCS and create a  tf data Dataset  for each of them that    yields data in batches  2  Use the TensorFlow Transform output from preprocessing to transform usernames    and product ids into int ids  3  Use  user id 
s and  item id s to train embeddings  4  Add item features and create two input layers  one with user embedding    vectors  and another with the concatenation of item embedding vectors and    item features  5  Create a user neural net and item neural net from the input layers  ensuring    that the final layers are the same size  6  Compute the cosine similarity between the final layers of the user and item    nets  Take the absolute value to get a value between 0 and 1  7  Calculate error using log loss and train the model  8  Evaluate the model performance by sampling 1000 random items and calculating    the average recall k when each positive sample s item is ranked against    these random items for the sample s user  9  Export a  SavedModel  for use in serving       Execution Training job scripts expect the following argument     MODEL INPUTS DIR   The directory containing the TFRecords from preprocessing     Command   Description                                bin run train local sh    Train the model locally and save model checkpoints to a local model dir       bin run train cloud sh    Train the model on CAIP and save model checkpoints to a GCS bucket       bin run train tune sh    Train the model on CAIP as above  but using hyperparameter tuning     Note   SCALE TIER  is set to  STANDARD 1  to demonstrate distributed training  However  it can be set to  BASIC  to reduce costs  See  scale tiers  https   cloud google com ml engine docs tensorflow machine types        Tensorboard Model training can be monitored on Tensorboard using the following command     shell tensorboard   logdir  path to model dir   trial number      Tensorboard s projector  in particular  is very useful for debugging or analyzing embeddings  In the projector tab in Tensorboard  try setting the label to  name       Serving Models can be hosted on CAIP  which can be used to make online and batch predictions via JSON requests  1  Upload the  SavedModel  from training to CAIP  2  Using a file 
containing a list of usernames  create inputs to pass to the    model hosted on CAIP for predictions  3  Make the predictions       Execution The cloud serving job and prediction job scripts expect the same argument     MODEL OUTPUTS DIR   The model directory containing each model trial     TRIAL   optional   The trial number to use  The local serving job expects no arguments  and the local prediction job expects the model version number     Command   Description                                bin run serve local sh    Upload a new version of the recommender model to CAIP using a locally trained model       bin run serve cloud sh    Upload a new version of the recommender model to CAIP using a model trained on CAIP       bin run predict local sh    Using  serving test json   create a prediction job on CAIP after using the local serving script       bin run predict cloud sh    Using  serving test json   create a prediction job on CAIP after using the cloud serving script   "}
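Steps 6 and 7 of the training flow above (cosine similarity between the final user-net and item-net layers, with the absolute value taken to land in [0, 1]) can be sketched in a few lines. This is an illustrative numpy stand-in for the TensorFlow graph code, and the function name is a placeholder, not part of the project:

```python
import numpy as np

def cosine_match_score(user_vecs, item_vecs):
    # L2-normalize the final-layer outputs of the user and item nets,
    # take the row-wise dot product (cosine similarity), then abs()
    # to fold the [-1, 1] range into a [0, 1] match score.
    u = user_vecs / np.linalg.norm(user_vecs, axis=1, keepdims=True)
    v = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    return np.abs(np.sum(u * v, axis=1))

# A user vector against a parallel, an anti-parallel, and an orthogonal item:
scores = cosine_match_score(
    np.array([[1.0, 0.0], [3.0, 4.0], [1.0, 0.0]]),
    np.array([[2.0, 0.0], [-3.0, -4.0], [0.0, 5.0]]),
)
```

The log loss of step 7 is then computed against these scores as the model's predicted probabilities.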
{"questions":"GCP The processor service subscribes to the topic processes every message The DFDL definitions are stored in a Bigtable database The application send a request with the binary to process to a pubsub topic Project Structure Data Format Description Language Processor Example This module is a example how to process a binary using a DFDL definition applies the definition and publishes the json result to a topic in pubsub","answers":"# Data Format Description Language ([DFDL](https:\/\/en.wikipedia.org\/wiki\/Data_Format_Description_Language)) Processor Example\nThis module is an example of how to process a binary using a DFDL definition.\nThe DFDL definitions are stored in a Bigtable database.\n\nThe application sends a request with the binary to process to a pubsub topic.\n\nThe processor service subscribes to the topic, processes every message,\napplies the definition and publishes the json result to a topic in pubsub.\n\n## Project Structure\n\n```\n.\n\u2514\u2500\u2500 dfdl_example\n \u251c\u2500\u2500 examples # Contains a binary and dfdl definition used to run this example\n \u2514\u2500\u2500 src\n       \u2514\u2500\u2500 main\n           \u2514\u2500\u2500 java\n               \u2514\u2500\u2500 com.example.dfdl\n                   \u251c\u2500\u2500 BigtableServer # Configures the bigtable database\n                   \u251c\u2500\u2500 BigtableService # Reads dfdl definitions from a bigtable database\n                   \u251c\u2500\u2500 DfdlDef # Embedded entities\n                   \u251c\u2500\u2500 DfdlService # Processes the binary using a dfdl definition and outputs a json\n                   \u251c\u2500\u2500 MessageController # Publishes a message to a topic with a binary to be processed.\n                   \u251c\u2500\u2500 ProcessorService # Initializes components, configurations and services.\n                   \u251c\u2500\u2500 PubSubServer # Publishes and subscribes to topics using channel adapters.\n \u2514\u2500\u2500 
resources\n      \u2514\u2500\u2500 application.properties\n \u2514\u2500\u2500 pom.xml\n \u2514\u2500\u2500 README.md\n```\n\n### Tools\n\nBefore you start, it is recommended that you install the following tools:\n\n1. [Google Cloud SDK](https:\/\/cloud.google.com\/sdk\/docs\/install)\n2. [Cloud Bigtable Tool](https:\/\/cloud.google.com\/bigtable\/docs\/cbt-overview)\n\n## Technology Stack\n1. Cloud Bigtable\n2. Cloud Pubsub\n\n## Frameworks\n1. Spring Boot\n\n## Libraries\n1. [Apache Daffodil](https:\/\/daffodil.apache.org\/)\n\n## Setup Instructions\n### Project Setup\n#### Creating a Project in the Google Cloud Platform Console\n\nIf you haven't already created a project, create one now. Projects enable you to\nmanage all Google Cloud Platform resources for your app, including deployment,\naccess control, billing, and services.\n\n1. Open the [Cloud Platform Console][cloud-console].\n1. In the drop-down menu at the top, select **Create a project**.\n1. Give your project a name, e.g. my-dfdl-project.\n1. Make a note of the project ID, which might be different from the project\n   name. The project ID is used in commands and in configurations.\n\n[cloud-console]: https:\/\/console.cloud.google.com\/\n\n#### Enabling billing for your project.\n\nIf you haven't already enabled billing for your project, [enable\nbilling][enable-billing] now.  Enabling billing is required to use Cloud Bigtable\nand to create VM instances.\n\n[enable-billing]: https:\/\/console.cloud.google.com\/project\/_\/settings\n\n#### Install the Google Cloud SDK.\n\nIf you haven't already installed the Google Cloud SDK, [install the Google\nCloud SDK][cloud-sdk] now. 
The SDK contains tools and libraries that enable you\nto create and manage resources on Google Cloud Platform.\n\n[cloud-sdk]: https:\/\/cloud.google.com\/sdk\/\n\n#### Setting Google Application Default Credentials\n\nSet your [Google Application Default\nCredentials][application-default-credentials] by [initializing the Google Cloud\nSDK][cloud-sdk-init] with the command:\n\n```\ngcloud init\n```\nGenerate a credentials file by running the\n[application-default login](https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/auth\/application-default\/login) command:\n\n```\n    gcloud auth application-default login\n```\n\n[cloud-sdk-init]: https:\/\/cloud.google.com\/sdk\/docs\/initializing\n[application-default-credentials]: https:\/\/developers.google.com\/identity\/protocols\/application-default-credentials\n\n### Bigtable Setup\nHow to create a Bigtable instance can be found [here](https:\/\/cloud.google.com\/bigtable\/docs\/creating-instance).\n\n#### How to add data to bigtable\nThe following doc, [Writing to Bigtable](https:\/\/cloud.google.com\/bigtable\/docs\/writing-data),\ncan be used to add data to bigtable to run the example.\n\nThis example connects to a Cloud Bigtable table with the\nfollowing specification.\nThe configuration can be changed by editing the application.properties file.\n```\n    Table\n     dfdl-schemas =>\n         Column Family\n            dfdl => \n               Column Family Qualifier => \n                   binary_example => {\n                       'name': \"dfdl-name\"\n                       'definition':\n                        \"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n                           <xs:schema xmlns:xs=\"http:\/\/www.w3.org\/2001\/XMLSchema\"\n                            xmlns:dfdl=\"http:\/\/www.ogf.org\/dfdl\/dfdl-1.0\/\"\n                            targetNamespace=\"http:\/\/example.com\/dfdl\/helloworld\/\">\n                              <xs:include\n                            
schemaLocation=\"org\/apache\/daffodil\/xsd\/DFDLGeneralFormat.dfdl.xsd\" \/>\n                              <xs:annotation>\n                                 <xs:appinfo source=\"http:\/\/www.ogf.org\/dfdl\/\">\n                                    <dfdl:format ref=\"GeneralFormat\" \/>\n                                 <\/xs:appinfo>\n                              <\/xs:annotation>\n                              <xs:element name=\"binary_example\">\n                                 <xs:complexType>\n                                    <xs:sequence>\n                                       <xs:element name=\"w\" type=\"xs:int\">\n                                          <xs:annotation>\n                                             <xs:appinfo source=\"http:\/\/www.ogf.org\/dfdl\/\">\n                                                <dfdl:element representation=\"binary\"\n                                                    binaryNumberRep=\"binary\" byteOrder=\"bigEndian\" lengthKind=\"implicit\" \/>\n                                             <\/xs:appinfo>\n                                          <\/xs:annotation>\n                                       <\/xs:element>\n                                       <xs:element name=\"x\" type=\"xs:int\">\n                                          <xs:annotation>\n                                             <xs:appinfo source=\"http:\/\/www.ogf.org\/dfdl\/\">\n                                                <dfdl:element representation=\"binary\"\n                                                      binaryNumberRep=\"binary\" byteOrder=\"bigEndian\" lengthKind=\"implicit\" \/>\n                                             <\/xs:appinfo>\n                                          <\/xs:annotation>\n                                       <\/xs:element>\n                                       <xs:element name=\"y\" type=\"xs:double\">\n                                          <xs:annotation>\n                           
                  <xs:appinfo source=\"http:\/\/www.ogf.org\/dfdl\/\">\n                                                <dfdl:element representation=\"binary\"\n                                                       binaryFloatRep=\"ieee\" byteOrder=\"bigEndian\" lengthKind=\"implicit\" \/>\n                                             <\/xs:appinfo>\n                                          <\/xs:annotation>\n                                       <\/xs:element>\n                                       <xs:element name=\"z\" type=\"xs:float\">\n                                          <xs:annotation>\n                                             <xs:appinfo source=\"http:\/\/www.ogf.org\/dfdl\/\">\n                                                <dfdl:element representation=\"binary\"\n                                                      byteOrder=\"bigEndian\" lengthKind=\"implicit\" binaryFloatRep=\"ieee\" \/>\n                                             <\/xs:appinfo>\n                                          <\/xs:annotation>\n                                       <\/xs:element>\n                                    <\/xs:sequence>\n                                 <\/xs:complexType>\n                              <\/xs:element>\n                           <\/xs:schema>\";\n                        }\n```\nThis dfdl definition example can be found in the binary_example.dfdl.xsd file.\n\n### Pubsub Setup\nThe following [doc](https:\/\/cloud.google.com\/pubsub\/docs\/quickstart-console)\ncan be used to set up the topics and subscriptions needed to run this example.\n\n#### Topics\nTo run this example two topics need to be created:\n1. A topic to publish the final json output: \"data-output-json-topic\"\n2. A topic to publish the binary to be processed: \"data-input-binary-topic\"\n\n#### Subscription\nThe following subscriptions need to be created:\n1. 
A subscription to pull the binary data: data-input-binary-sub\n\n## Usage\n### Initialize the application\nReference: [Building an Application with Spring Boot](https:\/\/spring.io\/guides\/gs\/spring-boot\/)\n```\n      .\/mvnw spring-boot:run\n```\n### Send a request\n```\n    curl --data \"message=0000000500779e8c169a54dd0a1b4a3fce2946f6\" localhost:8081\/publish\n```","site":"GCP","answers_cleaned":"  Data Format Description Language   DFDL  https   en wikipedia org wiki Data Format Description Language   Processor Example This module is a example how to process a binary using a DFDL definition  The DFDL definitions are stored in a Bigtable database   The application send a request with the binary to process to a pubsub topic   The processor service subscribes to the topic  processes every message  applies the definition and publishes the json result to a topic in pubsub      Project Structure            dfdl example      examples   Contain a binary and dfdl definition to be used to run this example      src            main                java                    com example dfdl                        BigtableServer   Configures bigtable database                        BigtableService   Reads dfdl definitons from a bigtable database                        DfdlDef   Embedded entities                        DfdlService   Processes the binary using a dfdl definition and output a json                        MessageController   Publishes message to a topic with a binary to be processed                         ProcessorService   Initializes components  configurations and services                         PubSubServer   Publishes and subscribes to topics using channels adapters       resources           application properties      pom xml      README md          Tools  Before you start is recommended that you install the following tools   1   Google Cloud SDK  https   cloud google com sdk docs install  2   Cloud Bigtable Tool  https   cloud google com bigtable docs cbt 
overview      Technology Stack 1  Cloud Bigtable 2  Cloud Pubsub     Frameworks 1  Spring Boot     Libraries 1   Apache Daffodil  https   daffodil apache org       Setup Instructions     Project Setup      Creating a Project in the Google Cloud Platform Console  If you haven t already created a project  create one now  Projects enable you to manage all Google Cloud Platform resources for your app  including deployment  access control  billing  and services   1  Open the  Cloud Platform Console  cloud console   1  In the drop down menu at the top  select   Create a project    1  Give your project a name   my dfdl project 1  Make a note of the project ID  which might be different from the project    name  The project ID is used in commands and in configurations    cloud console   https   console cloud google com        Enabling billing for your project   If you haven t already enabled billing for your project   enable billing  enable billing  now   Enabling billing allows is required to use Cloud Bigtable and to create VM instances    enable billing   https   console cloud google com project   settings       Install the Google Cloud SDK   If you haven t already installed the Google Cloud SDK   install the Google Cloud SDK  cloud sdk  now  The SDK contains tools and libraries that enable you to create and manage resources on Google Cloud Platform    cloud sdk   https   cloud google com sdk        Setting Google Application Default Credentials  Set your  Google Application Default Credentials  application default credentials  by  initializing the Google Cloud SDK  cloud sdk init  with the command       gcloud init     Generate a credentials file by running the  application default login  https   cloud google com sdk gcloud reference auth application default login  command           gcloud auth application default login       cloud sdk init   https   cloud google com sdk docs initializing  application default credentials   https   developers google com identity 
protocols application default credentials      Bigtable Setup How to create a Bigtable database instance can be found  here  https   cloud google com bigtable docs creating instance        How to add data to bigtable The following doc   Writing to Bigtable  https   cloud google com bigtable docs writing data   can be used to add data to bigtable to run the example   This example connects to a Cloud Bigtable with a collection with the following specification  The configuration can be changed by changing the application properties file          Table      dfdl schemas             Column Family             dfdl                    Column Family Qualifier                        binary example                              name    dfdl name                          definiton                              xml version 1 0  encoding  UTF 8                                xs schema xmlns xs  http   www w3 org 2001 XMLSchema                              xmlns dfdl  http   www ogf org dfdl dfdl 1 0                               targetNamespace  http   example com dfdl helloworld                                   xs include                             schemaLocation  org apache daffodil xsd DFDLGeneralFormat dfdl xsd                                    xs annotation                                    xs appinfo source  http   www ogf org dfdl                                         dfdl format ref  GeneralFormat                                        xs appinfo                                  xs annotation                                 xs element name  binary example                                     xs complexType                                       xs sequence                                          xs element name  w  type  xs int                                              xs annotation                                                xs appinfo source  http   www ogf org dfdl                                                     dfdl element representation  binary         
                                             binaryNumberRep  binary  byteOrder  bigEndian  lengthKind  implicit                                                    xs appinfo                                              xs annotation                                           xs element                                          xs element name  x  type  xs int                                              xs annotation                                                xs appinfo source  http   www ogf org dfdl                                                     dfdl element representation  binary                                                        binaryNumberRep  binary  byteOrder  bigEndian  lengthKind  implicit                                                    xs appinfo                                              xs annotation                                           xs element                                          xs element name  y  type  xs double                                              xs annotation                                                xs appinfo source  http   www ogf org dfdl                                                     dfdl element representation  binary                                                         binaryFloatRep  ieee  byteOrder  bigEndian  lengthKind  implicit                                                    xs appinfo                                              xs annotation                                           xs element                                          xs element name  z  type  xs float                                              xs annotation                                                xs appinfo source  http   www ogf org dfdl                                                     dfdl element representation  binary                                                        byteOrder  bigEndian  lengthKind  implicit  binaryFloatRep  ieee                                                    xs appinfo  
                                            xs annotation                                           xs element                                        xs sequence                                     xs complexType                                  xs element                               xs schema                                  This dfdl definition example can be found in the binary example dfdl xsd file       Pubsub Setup The following  doc  https   cloud google com pubsub docs quickstart console  can be used to set up the topics and subscriptions needed to run this example        Topics To run this example two topics need to be created  1  A topic to publish the final json output   data output json topic  2  A topic to publish the binary to be processed   data input binary topic        Subscription The following subscriptions need to be created  1  A subscription to pull the binary data  data input binary sub     Usage     Initialize the application Reference   Building an Application with Spring Boot  https   spring io guides gs spring boot               mvnw spring boot run         Send a request         curl   data  message 0000000500779e8c169a54dd0a1b4a3fce2946f6  localhost 8081 publish   "}
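As a sanity check on the sample message used in the curl request of the DFDL example, the 20-byte payload can be decoded directly with Python's struct module, assuming the big-endian w/x/y/z layout declared in the binary_example schema. This decoding script is an illustration only, not part of the project:

```python
import struct

# 20-byte sample payload from the curl example
payload = bytes.fromhex("0000000500779e8c169a54dd0a1b4a3fce2946f6")

# binary_example schema fields: w (int), x (int), y (double), z (float),
# all big-endian binary representation, so ">iidf" consumes 4 + 4 + 8 + 4 bytes.
w, x, y, z = struct.unpack(">iidf", payload)
print({"w": w, "x": x, "y": y, "z": z})
```

The json the processor publishes should carry the same four fields, parsed by Daffodil against the stored DFDL definition rather than by struct.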
{"questions":"GCP 1 Drastically reduce time to perform or by supporting execution of resource plans A state scalable project factory pattern with Terragrunt 1 Providing a dynamic way to configure for categories of resources in directories Resolves the problem of state volume explotion with project factory Terragrunt helps with that by Overview 1 Providing configuration of source code by generating code in target directories using dynamic definitions","answers":"# A 'state-scalable' project factory pattern with Terragrunt\n\n## Overview\n\nResolves the problem of state volume explosion with project factory. Terragrunt helps with that by:\n\n1. Providing a dynamic way to configure [remote_state](https:\/\/terragrunt.gruntwork.io\/docs\/features\/keep-your-remote-state-configuration-dry\/#keep-your-remote-state-configuration-dry) for categories of resources in directories.\n1. Providing [DRY](https:\/\/en.wikipedia.org\/wiki\/Don%27t_repeat_yourself) configuration of source code by generating code in target directories using dynamic [source](https:\/\/terragrunt.gruntwork.io\/docs\/features\/keep-your-terraform-code-dry\/#motivation) definitions.\n1. Drastically reducing the time to perform `terraform plan` or `terraform apply` by supporting [parallel](https:\/\/terragrunt.gruntwork.io\/docs\/features\/execute-terraform-commands-on-multiple-modules-at-once\/) execution of resource plans.\n\nThis pattern scales the 'factory' oriented approach of IaC implementation, facilitating both scalability of the Terraform state file size and also developer productivity by minimizing time to run *plans*. 
By providing mechanisms to create resource group definitions using both local and common data configurations through `defaults`, and implementing `DRY` code in a central `source`, it encourages a mature `Infrastructure as Data` implementation practice.\n\n## Explanation\n\n![Diagram](docs\/images\/image2.png)\n\nImplementing a factory oriented pattern for deploying resource groups is a common practice in IaC (Infrastructure as Code). This is typically done by having a configurable blueprint of data to describe the infrastructure to avoid repetition of code. [Project-factory](https:\/\/registry.terraform.io\/modules\/terraform-google-modules\/project-factory\/google\/latest) is a common manifestation of this requirement since in GCP, projects need to be created ubiquitously.\n\n This pattern can, however, result in intractable state file sizes, resulting in slow pipeline steps every time `terraform plan` or `apply` needs to run, because it reads all resources from the Cloud environment. The general guideline from [Google Cloud's documentation](https:\/\/cloud.google.com\/docs\/terraform\/best-practices-for-terraform#minimize-resources) is that each state file should not have more than 100 resources -- which itself can be obfuscated by the use of modules. This example provides a solution to *state explosion* using Terragrunt.\n\nIn addition, it describes a pattern such that the *Terraform source code* for implementing resources can be defined in the sub-directory as a data configuration, instead of repeating the code.\n\nGenerally, this pattern splits IaC into `data` and `src` directories at the top level with their configuration connected by `terragrunt.hcl` files at different levels of the file hierarchy. In this example, the public `project-factory` module is used to create projects for `team1` and `team2` categories, while maintaining separate state files using Terragrunt's configuration. 
As described in the diagram below, the state files for each category would be stored in the following GCS bucket URL paths:\n\n```\nTeam1 -  gcs:\/\/<bucket>\/data\/team1\/default.state\nTeam2 -  gcs:\/\/<bucket>\/data\/team2\/default.state\n```\n\nThis is enabled by the root `terragrunt.hcl` file located under the repository root, defining a dynamic `remote-back-end` configuration that is set at the subdirectory level.\n\n```terraform\n# Root -> terragrunt.hcl\n\nremote_state {\n  backend = \"gcs\"\n  config = {\n    bucket   = local.vars.bucket_prefix\n    prefix   = path_relative_to_include()\n    project  = local.vars.root_project\n    location = local.vars.region\n  }\n}\n```\n\nUnder the subdirectory for team1, the `include` block is defined and the `path` variable is set relative to the root configuration.\n\n```\n# Root -> data -> team1 -> terragrunt.hcl\n\ninclude \"root\" {\n  path = find_in_parent_folders()\n}\n```\n\n### Dynamic source configuration\n\nAn additional configuration defined in the `terragrunt.hcl` file for team1 is the `terraform` block. This specifies that the source code for the data configuration describing the resources for this directory will be implemented by following `terraform` -> `source` code. Terragrunt manages a temporary instance of the source code inside a directory `.terragrunt-cache`, absolving the developer from maintaining several instances of the code base in different data subdirectories.\n\n```terraform\n# Root -> data -> team1 -> terragrunt.hcl\n\nterraform {\n  # Pull the terraform configuration from the local file system. 
Terragrunt will make a copy of the source folder in the\n  # Terragrunt working directory (typically `.terragrunt-cache`).\n  source = \"..\/..\/\/src\"\n\n  # Files to include from the Terraform src directory\n  include_in_copy = [\n    \"main.tf\",\n    \"variables.tf\",\n    \"outputs.tf\",\n    \"provider.tf\"\n  ]\n}\n```\n\nIn this example, the `terraform` configuration of a local source code module in `src\/` is provided, which simply invokes the public `project-factory` module by flattening the specifications provided in the `default` and `data` directories. In practice, instead of local ones these can be modules hosted in private or public registries implementing IaC blueprints for common use-cases -- eg. Projects and supporting resources for Data Platform Teams and Application Teams.\n\n## Requirements\n\nA few resources need to be created before running terragrunt. Either use the terraform scripts under the setup folder or follow the manual steps given below.\nIn either case, make sure the individual running the steps has folder creator, project creator and storage admin roles.\n\n### Setup by terraform scripts\n\n1. cd setup\n2. Create a terraform.tfvars from the sample with the correct org_id, billing_account, default region and bucket name where state will be stored.\n3. Run the commands below; this creates the resources needed to run the sample terragrunt project factory:\n\n- \"terragrunt_test\" folder under org\n- \"terragrunt-seedproject\" project under \"terragrunt_test\" folder\n- \"terragrunt-iac-core-bkt\" GCS Bucket for storing state\n- \"Team1\" and \"Team2\" folders\n- Generate root.yaml and defaults.yaml files inside the teams' directories from template files.\n\n```\nterraform init\nterraform plan\nterraform apply\n```\n\n### Manual setup\n\n- Create two folders where Terragrunt will create projects. 
Add the corresponding folder IDs in data\/team1\/defaults.yaml and data\/team2\/defaults.yaml\n- Create a project to store the terraform state, and a GCS bucket in that project.\n- Add the project_id and gcs_bucket name to root.yaml.\n\n\n## How to run\n*Steps 2 to 5 can be skipped if you ran the setup scripts*\n\n1. [Install Terragrunt](https:\/\/terragrunt.gruntwork.io\/docs\/getting-started\/install\/)\n1. Create [folders](https:\/\/cloud.google.com\/resource-manager\/docs\/creating-managing-folders#creating-folders) in your organization, similar to `team1` and `team2` as shown in the sample. \n1. Create project files in the data \<category\> directories, similar to the `*.yaml.sample` files provided, for project-specific configurations.\n1. Create a defaults.yaml file for each category, similar to the `defaults.yaml.sample` file provided, for common configurations.\n1. Create a root.yaml file, similar to `root.yaml.sample`, for remote backend configurations.\n\n```\nterragrunt run-all init\nterragrunt run-all plan\nterragrunt run-all apply\n```\n\n**Note: `terragrunt plan` or `apply` can be run directly in subdirectories (ie. data\/team1 etc) with a `terragrunt.hcl` file, to create resources for each team. This is useful for separating pipelines. 
`terragrunt run-all` is useful for running all deployments at once and in parallel.** \n\n## Variations\n\n- A version of this pattern that integrates easily with [Fabric FAST](https:\/\/github.com\/GoogleCloudPlatform\/cloud-foundation-fabric\/tree\/master\/blueprints\/factories\/project-factory) or the [Cloud Foundation Toolkit](https:\/\/github.com\/GoogleCloudPlatform\/cloud-foundation-toolkit)\n\n## Resources\n\n- [Terragrunt](https:\/\/terragrunt.gruntwork.io\/docs\/getting-started)\n\n- [*Infrastructure as Data*](https:\/\/medium.com\/dzerolabs\/shifting-from-infrastructure-as-code-to-infrastructure-as-data-bdb1ae1840e3) Medium link\n- [Splitting a monolithic Terraform state using Terragrunt](https:\/\/medium.com\/cts-technologies\/murdering-monoliths-using-terragrunt-to-split-monolithic-terraform-state-up-into-multiple-stacks-17ead2d8e0e9)\n\n## TODO\n\n1. Update example with different service accounts for team directories\n1. Create branches for variations. Variations could include, e.g., integrating with the Fabric FAST project factory pattern.\n\n## Caveats\n\n- Terragrunt has [restrictions](https:\/\/docs.gruntwork.io\/guides\/working-with-code\/tfc-integration) when it comes to integrating with HashiCorp's Terraform Cloud or Terraform Cloud Enterprise platform. TL;DR: You can still use TCE\/TC for storing states, monitoring and auditing but cannot use the UI for Terraform runs. 
Initiating runs using the CLI is still possible.","site":"GCP","answers_cleaned":"  A  state scalable  project factory pattern with Terragrunt     Overview  Resolves the problem of state volume explotion with project factory  Terragrunt helps with that by   1  Providing a dynamic way to configure  remote state  https   terragrunt gruntwork io docs features keep your remote state configuration dry  keep your remote state configuration dry  for categories of resources in directories  1  Providing  DRY  https   en wikipedia org wiki Don 27t repeat yourself  configuration of source code by generating code in target directories using dynamic  source  https   terragrunt gruntwork io docs features keep your terraform code dry  motivation  definitions  1  Drastically reduce time to perform  terraform plan  or  terraform apply  by supporting  parallel  https   terragrunt gruntwork io docs features execute terraform commands on multiple modules at once   execution of resource plans   This pattern scales the  factory  oriented approach of IaC implementation  facilitating both scalability of the Terraform state file size and also developer productivity by minimizing time to run  plans   By providing mechanisms to create resource group definitions using both local and common data configurations through  defaults   and implementing  DRY  code in a central  source   it encourages a mature  Infrastructure as Data  implementation practice      Expalanation    Diagram  docs images image2 png   Implementing a factory oriented pattern for deploying resource groups is a common practice in IaC  Infrastructure as Code   This is typically done by having a configurable blueprint of data to describe the infrastructure to avoid repetition of code   Project factory  https   registry terraform io modules terraform google modules project factory google latest  is a common manifestation of this requirement since in GCP  projects need to be created ubiquitously    This pattern can however result 
in intractable state file sizes resulting in slow pipeline steps every time  terraform plan  or  apply  needs to run  because it reads all resources from the Cloud environment  The general guideline from  Google Cloud s documentation  https   cloud google com docs terraform best practices for terraform minimize resources  is that each state file should not have more than 100 resources    which itself can be obfuscated by the use of modules  This example provides a solution to  state explosion  using Terragrunt   In addition  it describes a pattern such that the  Terraform source code  for implementing resources can be defined in the sub directory as a data configuration  instead of repeating the code   Generally  this pattern splits IaC into  data  and  src  directories at the top level with their configuration connected by  terragrunt hcl  files at different levels of the file hierarchy  In this example  the public  project factory  module is used to create projects for  team1  and  team2  categories  while maintaining separate state files using Terragrunt s configuration  As described in the diagram below  the state files for each category would be stored in the following GCS bucket URL paths       Team1    gcs    bucket  data team1 default state Team2    gcs    bucket  data team2 default state      This is enabled by the root  terragrunt hcl  file located under the repository root  defining a dynamic  remote back end  configuration that is set at the subdirectory level      terraform   Root    terragrunt hcl  remote state     backend    gcs    config         bucket     local vars bucket prefix     prefix     path relative to include       project    local vars root project     location   local vars region            Under the subdirectory for team1  the  include  block is defined and the  path  variable is set relative to the root configuration         Root    data    team1    terragrunt hcl  include  root      path   find in parent folders              
Dynamic source configuration  An additional configuration defined in the  terragrunt hcl  file for team1 is the  terraform  block  This specifies that the source code for the data configuration describing the resources for this directory will be implemented by following  terraform      source  code  Terragrunt manages a temporary instance of the source code inside a directory   terragrunt cache   absolving the developer from maintaining several instances of the code base in different data subdirectories      terraform   Root    data    team1    terragrunt hcl  terraform       Pull the terraform configuration from the local file system  Terragrunt will make a copy of the source folder in the     Terragrunt working directory  typically   terragrunt cache      source           src       Files to include from the Terraform src directory   include in copy          main tf        variables tf        outputs tf        provider tf             In this example  the  terraform  configuration of a local source code module in  src   is provided  which simply invokes the public  project factory  module by flattening the specifications provided in  default  and  data  directories  In practice  instead of local ones this can be modules hosted in private or public registries implementing IaC blueprints for common use cases    eg  Projects and supporting resources for Data Platform Teams and Application Teams      Requirements  A few resources need to be created before running terragrunt  Either use the terraform scripts under setup folder or follow manual steps given below  In either case make sure individuals running the steps have folder creator  project creator and storage admin roles       Setup by terraform scripts  1  cd setup 2  Create a terraform tfvars from the sample with the correct org id  billing account  default region and bucket name where state will be stored  3  This creates resources that are needed to run the sample terragrunt project factory      terragrunt test  
folder under org    terragrunt seedproject  project under  terragrunt test  folder    terragrunt iac core bkt  GCS Bucket for storing state    Team1  and  Team2  folders   Generate root yaml and defaults yaml files inside the teams  directories from template files       terraform init terraform plan terraform apply          Manual setup    Create two Folders where Terragrunt will create projects  Add corresponding folders id s in data team1 defaults yaml and data team2 defaults yaml   Create a Project to store terraform state and a gcs bucket in that project    Add project id and gcs bucket name in root yaml       How to run  Steps 2 to 5 can be skipped if you ran the setup scripts   1   Install Terragrunt  https   terragrunt gruntwork io docs getting started install   1  Create  folders  https   cloud google com resource manager docs creating managing folders creating folders  in your organization  similar to  team1    team2  as shown in sample   1  Create project files in data   category    projects similar to    yaml sample  files provided to project specific configurations  1  Create defaults yaml file for each category similar to  defaults yaml sample  file provided for common configurations  1  Create root yaml file similar to  root yaml sample  for remote backend configurations       terragrunt run all init terragrunt run all plan terragrunt run all apply        Note   terragrunt plan  or  apply  can be run directly in subdirectories  ie  data team1 etc  with a  terragrunt hcl  file  to create resources for each team  This is useful for separating pipelines   terragrunt run all  is useful for running all deployments at once and in parallel         Variations    A version of this pattern that integrates easily with  Fabric FAST  https   github com GoogleCloudPlatform cloud foundation fabric tree master blueprints factories project factory  or  Cloud Foundation Toolkit  https   github com GoogleCloudPlatform cloud foundation toolkit      Resources     
Terragrunt  https   terragrunt gruntwork io docs getting started       Infrastructure as Data   https   medium com dzerolabs shifting from infrastructure as code to infrastructure as data bdb1ae1840e3  Medium link    Splitting a monolithic Terraform state using Terragrunt  https   medium com cts technologies murdering monoliths using terragrunt to split monolithic terraform state up into multiple stacks 17ead2d8e0e9      TODO  1  Update example with different service accounts for team directories 1  Create branches for variations  Variations can be like  integrating with FAST Fabric project factory pattern eg      Caveats    Terragrunt has  restrictions  https   docs gruntwork io guides working with code tfc integration  when it comes to integrating with Hashicorp s Terraform Cloud or Terraform Cloud Enterprise platform  TL DR  You can still use TCE TC for storing states  monitoring and auditing but cannot use the UI for Terraform runs  Initiating runs using the CLI is still possible "}
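The per-team state layout described in the Terragrunt README above (one GCS prefix per `data/` subdirectory, derived by `path_relative_to_include()` in the root `remote_state` block) can be sketched locally in shell. The bucket name and team directories below are illustrative assumptions taken from the example, not the real mapping logic:

```bash
# Sketch: how the root terragrunt.hcl's remote_state config maps each team
# subdirectory under data/ to its own GCS state prefix. The bucket name is
# illustrative; Terragrunt computes the relative path itself via
# path_relative_to_include() at plan/apply time.
bucket="terragrunt-iac-core-bkt"
for team_dir in data/team1 data/team2; do
  rel="${team_dir#data/}"   # path relative to the root terragrunt.hcl, e.g. team1
  echo "gs://${bucket}/data/${rel}/default.tfstate"
done
```

Because each team directory resolves to a distinct state prefix, `terragrunt plan` in `data/team1` reads only that team's state, which is what keeps plan times flat as the factory grows.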
{"questions":"GCP This repo houses the example code for a blog post on using a persistent history Dataproc Persistent History Server server to view job history about your Spark MapReduce jobs and aggregated YARN logs from short lived clusters on GCS Directory structure","answers":"# Dataproc Persistent History Server\nThis repo houses the example code for a blog post on using a persistent history\nserver to view job history about your Spark \/ MapReduce jobs and\naggregated YARN logs from short-lived clusters on GCS.\n\n![Architecture Diagram](img\/persistent-history-arch.png)\n\n## Directory structure\n- `cluster_templates\/`\n  - `history_server.yaml`\n  - `ephemeral_cluster.yaml`\n- `init_actions`\n  - `disable_history_servers.sh`\n- `workflow_templates`\n  - `spark_mr_workflow_template.yaml`\n- `terraform`\n  - `variables.tf`\n  - `network.tf`\n  - `history-server.tf`\n  - `history-bucket.tf`\n  - `long-running-cluster.tf`\n  - `firewall.tf`\n  - `service-account.tf`\n\n## Usage\nThe recommended way to run this example is to use terraform as it creates a vpc network\nto run the example with the appropriate firewall rules.\n\n### Enabling services\n```\ngcloud services enable \\\ncompute.googleapis.com \\\ndataproc.googleapis.com\n```\n\n### Pre-requisites\n- [Install Google Cloud SDK](https:\/\/cloud.google.com\/sdk\/)\n- Enable the following APIs if not already enabled.\n  - `gcloud services enable compute.googleapis.com dataproc.googleapis.com`\n- \\[Optional\\] [Install Terraform](https:\/\/learn.hashicorp.com\/terraform\/getting-started\/install.html)\n\n### Disclaimer\nThis is for example purposes only. You should take a much closer look at the firewall\nrules that make sense for your organization's security requirements.\n\n### Should I run with Terraform or `gcloud` SDK ?\nThis repo provides artifacts to spin up the infrastructure for persisting\njob history and yarn logs with terraform or with gcloud. 
The recommended\nway to use this is to modify the Terraform code to fit your needs for\nlong running resources.\n\nHowever, the cluster templates are included as an example of\nstandardizing cluster creation for ephemeral clusters.\nYou might ask, \"Why is there a cluster template for the history server?\".\nThe history server is simply a cleaner interface for reading your logs\nfrom GCS. For Spark, it is stateless and you may wish to only spin up\n a history server when you'll actually be using it. For MapReduce,\nthe history server will only be aware of the files on GCS when it was\ncreated and those files which it moves from intermediate done directory\nto the done directory. For this reason, MapReduce workflows should\nuse a persistent history server.\n\n### Managing Log Retention\nOften times, it makes sense to leave the history\nserver running because several teams may use it and you could configure\nit to manage clean up of your logs by setting the following additional\nproperties:\n - `yarn:yarn.log-aggregation.retain-seconds`\n - `spark:spark.history.fs.cleaner.enabled`\n - `spark:spark.history.fs.cleaner.maxAge`\n - `mapred:mapreduce.jobhistory.cleaner.enable`\n - `mapred:mapreduce.jobhistory.max-age-ms`\n\n### Terraform\nTo spin up the whole example you could simply edit the\n`terraform.tfvars` file to set the variables to the\ndesired values and run the following commands.\n\nNote, this assumes that you have an existing project and\nthe sufficient permissions to spin up the resources for this\nexample.\n```\ncd terraform\nterraform init\nterraform apply\n```\nThis will create:\n1. VPC Network and subnetwork for your dataproc clusters.\n1. Various firewall rules for this network.\n1. A single node dataproc history-server cluster.\n1. A long running dataproc cluster.\n1. 
A GCS Bucket for YARN log aggregation, and Spark MapReduce Job History\nas well as initialization actions for your clusters.\n\n### Alternatively, with Google Cloud SDK\nThese instructions detail how to run this entire example with `gcloud`.\n1.  Replace `PROJECT` with your GCP project id in each file.\n1.  Replace `HISTORY_BUCKET` with your GCS bucket for logs in each file.\n1.  Replace `HISTORY_SERVER` with your dataproc history server.\n1.  Replace `REGION` with your desired GCP Compute region.\n1.  Replace `ZONE` with your desired GCP Compute zone.\n\n```\ncd workflow_templates\nsed -i 's\/PROJECT\/your-gcp-project-id\/g' *\nsed -i 's\/HISTORY_BUCKET\/your-history-bucket\/g' *\nsed -i 's\/HISTORY_SERVER\/your-history-server\/g' *\nsed -i 's\/REGION\/us-central1\/g' *\nsed -i 's\/ZONE\/us-central1-f\/g' *\nsed -i 's\/SUBNET\/your-subnet-id\/g' *\n\ncd cluster_templates\nsed -i 's\/PROJECT\/your-gcp-project-id\/g' *\nsed -i 's\/HISTORY_BUCKET\/your-history-bucket\/g' *\nsed -i 's\/HISTORY_SERVER\/your-history-server\/g' *\nsed -i 's\/REGION\/us-central1\/g' *\nsed -i 's\/ZONE\/us-central1-f\/g' *\nsed -i 's\/SUBNET\/your-subnet-id\/g' *\n```\n\nStage an empty file to create the spark-events path on GCS.\n\n```\ntouch .keep\ngsutil cp .keep gs:\/\/your-history-bucket\/spark-events\/.keep\nrm .keep\n```\n\nStage our initialization action for disabling history servers\non your ephemeral clusters.\n```\ngsutil cp init_actions\/disable_history_servers.sh gs:\/\/your-history-bucket\/init_actions\/disable_history_servers.sh\n```\n\nCreate the history server.\n\n```sh\ngcloud beta dataproc clusters import \\\n  history-server \\\n  --source=cluster_templates\/history_server.yaml \\\n  --region=us-central1\n```\n\nCreate a cluster which you can manually submit jobs to and tear down.\n\n```sh\ngcloud beta dataproc clusters import \\\nephemeral-cluster \\\n--source=cluster_templates\/ephemeral_cluster.yaml \\\n--region=us-central1\n```\n\n### Running the Workflow Template\nImport the workflow template to run an example spark and hadoop 
job\nto verify your setup is working.\n\n```sh\ngcloud dataproc workflow-templates import spark-mr-example \\\n--source=workflow_templates\/spark_mr_workflow_template.yaml\n```\n\nTrigger the workflow template to spin up a cluster,\nrun the example jobs and tear it down.\n\n```sh\ngcloud dataproc workflow-templates instantiate spark-mr-example\n```\n\n### Viewing the History UI\nFollow [these instructions](https:\/\/cloud.google.com\/dataproc\/docs\/concepts\/accessing\/cluster-web-interfaces)\n to look at the UI by ssh tunneling to the history server.\nPorts to visit:\n - MapReduce Job History: 19888\n - Spark Job History: 18080\n\n\n### Closing Note\nIf you're adapting this example for your own use consider the following:\n1. Setting an appropriate retention for your logs.\n1. Setting more appropriate firewall rules for your security requirements.","site":"GCP"}
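The `sed` placeholder substitution the Dataproc README prescribes can be rehearsed locally before touching the real templates. The file below is a minimal stand-in for one cluster template, and the project id and zone values are the same illustrative ones the README uses:

```bash
# Minimal stand-in for a cluster template carrying the PROJECT/ZONE
# placeholders that the README's sed commands rewrite in place.
set -eu
mkdir -p demo_templates
cat > demo_templates/ephemeral_cluster.yaml <<'EOF'
projectId: PROJECT
config:
  gceClusterConfig:
    zoneUri: ZONE
EOF
# Same substitution style as the README (GNU sed in-place edit).
sed -i 's/PROJECT/your-gcp-project-id/g' demo_templates/*
sed -i 's/ZONE/us-central1-f/g' demo_templates/*
cat demo_templates/ephemeral_cluster.yaml
```

Running the substitutions against a throwaway copy like this makes it easy to confirm every placeholder is rewritten before importing the template with `gcloud dataproc clusters import`.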
{"questions":"GCP BigQuery Benchmark Repos While Google provides some high level guidelines on BigQuery performance in these scenarios we don t provide consistent metrics on how the above factors can impact Customers new to BigQuery often have questions on how to best utilize the platform with regards to performance For example a common question which has routinely resurfaced in this area is the performance of file loads for efficient load times As a second example when informing customers that queries run on external data sources are less efficient than those run on BigQuery managed tables customers have followed up asking exactly how much less efficient queries on external sources are into BigQuery specifically the optimal file parameters file type columns column types file size etc","answers":"# BigQuery Benchmark Repos\nCustomers new to BigQuery often have questions on how to best utilize the platform with regards to performance.\nFor example, a common question which has routinely resurfaced in this area is the performance of file loads\ninto BigQuery, specifically the optimal file parameters (file type, # columns, column types, file size, etc)\nfor efficient load times.  As a second example, when informing customers that queries run on external data\nsources are less efficient than those run on BigQuery managed tables, customers have followed up, asking\nexactly how much less efficient queries on external sources are.\n\nWhile Google provides some high-level guidelines on BigQuery performance in these\nscenarios, we don\u2019t provide consistent metrics on how the above factors can impact\nperformance. 
This repository seeks to create benchmarks to address questions with\nquantitative data and a higher level of confidence, allowing more definitive answers\nwhen interacting with customers.\n\nWhile this repository is intended to continue growing, it currently includes the following\nbenchmarks:\n### File Loader Benchmark\nThe File Loader benchmark measures the effect of file properties on performance when loading\nfiles into BigQuery tables. Files are created using a combination of properties such as file type, compression type,\nnumber of columns, column types (such as 100% STRING vs 50% STRING\/ 50% NUMERIC), number of files,\nand the size of files. Once the files are created, they are loaded into BigQuery tables.\n\n#### Benchmark Parameters\nSpecific file parameters are used in this project for performance testing. While\nthe list of parameters is growing, the current list of parameters and values\nis as follows:\n\n**File Type**:\n\n* Avro\n* CSV\n* JSON\n* Parquet\n\n**Compression**:\n\n* gzip (for CSV and JSON)\n* snappy (for AVRO)\n\n**Number of columns**:\n* 10\n* 100\n* 1000\n\n**Column Types**:\n* String-only\n* 50% String \/ 50% NUMERIC\n* 10% String \/ 90% NUMERIC\n\n**Number of files**:\n* 1\n* 100\n* 1000\n* 10000\n\n**Target Data Size (Size of the BigQuery staging table used to generate each file)**:\n* 10MB\n* 100MB\n* 1GB\n* 2GB\n\nThese parameters are used to create combinations of file types stored in a\nbucket on GCS. An example of a file prefix generated from the above list of file parameters is:\n`fileType=csv\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=100\/tableSize=10MB\/* `.\nThis prefix holds 100 uncompressed CSV files, each generated from a 10 MB BigQuery\nstaging table with 10 string columns. The tool loads the 100 CSV files with this prefix\nto a BigQuery table and records the performance to create a benchmark.\n\nIn the future, a parameter for slot type will be added with values for communal\nand reserved. 
In addition, ORC will be added as a value for file type, and struct\/\narray types will be added to the values for column types.\n\n### Federated Query Benchmark\nThe federated query benchmark quantifies the difference in performance between\nqueries on federated (external) and managed BigQuery tables. A variety of queries\nranging in complexity will be created. These queries will be run on managed BigQuery\ntables and federated Google Cloud Storage files (including AVRO, CSV, JSON, and PARQUET) of\nidentical schemas and sizes. The files created for the File Loader Benchmark will be reused\nhere to run external queries on and to create BQ Managed tables with.\n\n#### Benchmark Parameters\nParameters for this benchmark will include the type of table, type of query,\nand the table properties.\n\n**Table Type**:\n\n* `BQ_MANAGED`: Tables located within and managed by BigQuery.\n* `EXTERNAL`: Data located in GCS files, which are used to create a temporary\nexternal table for querying.\n\n\n**Query Type**:\n* `SIMPLE_SELECT_*`: Select all columns and all rows.\n* `SELECT_ONE_STRING`: Select the first string field in the schema. 
All schemas used in the\nbenchmark contain at least one string field.\n* `SELECT_50_PERCENT`: Select the first 50% of the table's fields.\n\nFuture iterations of this benchmark will include more complex queries, such as those\nthat utilize joins, subqueries, window functions, etc.\n\nSince the files created for the File Loader Benchmark will be reused for this\nbenchmark, both the BQ Managed tables and GCS files will share the File Loader Benchmark\nparameters, with the only difference being that the snappy compression type is not\nsupported for federated queries and therefore will not be included for comparison.\n\n**File Type**:\n\n* Avro\n* CSV\n* JSON\n* Parquet\n\n**Compression**:\n\n* gzip (for CSV and JSON)\n\n**Number of columns**:\n* 10\n* 100\n* 1000\n\n**Column Types**:\n* String-only\n* 50% String \/ 50% NUMERIC\n* 10% String \/ 90% NUMERIC\n\n**Number of files**:\n* 1\n* 100\n* 1000\n* 10000\n\n**Target Data Size (Size of the BigQuery staging table used to generate each file)**:\n* 10MB\n* 100MB\n* 1GB\n* 2GB\n\n\n## Benchmark Results\n\n#### BigQuery\n\nThe results of the benchmarks will be saved in a separate BigQuery table for\nad hoc analysis. The results table will use the following schema: [results_table_schema.json](json_schemas\/results_table_schema.json)\n\n#### DataStudio\nOnce the results table is populated with data, DataStudio can be used to visualize results.\nSee [this article](https:\/\/support.google.com\/datastudio\/answer\/6283323?hl=en) to get started\nwith DataStudio.\n\n## Usage\nThis project contains the tools to create the resources needed to run the benchmarks.\nThe main method for the project is located in\n[`bq_benchmark.py`](bq_benchmark.py).\n\n### Prepping the Benchmarks Resources from Scratch\nThe following steps are needed to create the resources needed for the benchmarks.\nSome steps will only be needed for certain benchmarks, so feel free to skip them if you\nare only focused on a certain set of benchmarks.\n\n#### 1. 
Create the Results Table (Needed for all benchmarks)\n\nIf running the whole project from scratch, the first step is to create a table\nin BigQuery to store the results of the benchmark loads. A json file has been\nprovided in the json_schemas directory ([results_table_schema.json](json_schemas\/results_table_schema.json))\nwith the above schema. The schema can be used to create the results\ntable by running the following command:\n```\npython bq_benchmark.py \\\n--create_results_table \\\n--results_table_schema_path=<optional path to json schema for results table> \\\n--results_table_name=<results table name> \\\n--results_dataset_id=<dataset ID>\n```\nParameters:\n\n`--create_results_table`: Flag to indicate that a results table should be created. It has a value of\n`store_true`, so this flag will be set to False, unless it is provided in the\ncommand.\n\n\n`--results_table_schema_path`: Optional argument. It defaults to `json_schemas\/results_table_schema.json`.\nIf using a json schema in a different location, provide the path to that\nschema.\n\n\n`--results_table_name`: String representing the name of the results table. Note that just the table name\nis needed, not the full project_id.dataset_id.table_name indicator.\n\n`--results_dataset_id`: ID of the dataset to hold the results table.\n\n#### 2. Select File Parameters (Needed for File Loader and Federated Query Benchmarks)\nFile parameters are used to help create the files needed for both the File Loader Benchmark and\n the Federated Query Benchmark. They can be configured in the `FILE_PARAMETERS` dictionary in\n[`generic_benchmark_tools\/file_parameters.py`](generic_benchmark_tools\/file_parameters.py). 
Currently,\nno file parameters can be added to the dictionary, as this will cause errors.\nHowever, parameters can be removed\nfrom the dictionary if you are looking for a smaller set of file combinations.\nNote that the parameter `numFiles` has to include at least the number 1 to\nensure that the subsequent number of files are properly created. This is\nbecause the program uses this first file to make copies to create subsequent\nfiles. This is a much faster alternative than recreating identical files.\nFor example, if you don't want the 1000 or 10000 as `numFile` parameters,\nyou can take them out, but you must leave 1 (e.g. [1, 100]). That way the first\nfile can be copied to create the 100 files.\n\n\n\n#### 3. Create Schemas for the Benchmark Staging Tables (Needed for File Loader and Federated Query Benchmarks)\nIn order to create the files with the above parameters, the [Dataflow Data Generator\ntool](https:\/\/github.com\/GoogleCloudPlatform\/professional-services\/tree\/master\/examples\/dataflow-data-generator)\nfrom the Professional Services Examples library needs to be leveraged to create\nstaging tables containing combinations of `columnTypes` and `numColumns` from the\nlist of file parameters in [`generic_benchmark_tools\/file_parameters.py`](generic_benchmark_tools\/file_parameters.py). The staging tables\nwill later be resized to match the sizes in `targetDataSize` file parameter, and then\nthey will be extracted to files in GCS. However, before any of this can be done, JSON schemas for\nthe staging tables must be created. To do this run the following command:\n\n```\npython bq_benchmark.py \\\n--create_benchmark_schemas \\\n--benchmark_table_schemas_directory=<optional directory where schemas should be stored>\n```\n\nParameters:\n\n`--create_benchmark_schemas`: Flag to indicate that benchmark schemas should be created. 
It has a value of\n`store_true`, so this flag will be set to False, unless it is provided in the\ncommand.\n\n`--benchmark_table_schemas_directory`: Optional argument for the directory where\nthe schemas for the staging tables are to be stored. It defaults to `json_schemas\/benchmark_table_schemas`.\nIf you would prefer that the schemas are written to a different directory, provide that directory.\n\n#### 4. Create Staging Tables (Needed for File Loader and Federated Query Benchmarks)\nOnce the schemas are created for the staging tables, the staging tables themselves can be\ncreated. This is a two step process.\n\nFirst, a set of staging tables are created using the data_generator_pipeline\nmodule in the [Dataflow Data Generator\ntool](https:\/\/github.com\/GoogleCloudPlatform\/professional-services\/tree\/master\/examples\/dataflow-data-generator) using\nthe schemas created in step 3. One staging table is created for each combination of columnTypes and\nnumColumns file parameters. A small number of rows are created in each staging table (500 rows) to\nget the process started. Once the tables are created, they are saved in a staging dataset. The names of\nstaging tables are generated using their respective columnTypes and numColumns parameters.\nFor example, a staging table created using the 100_STRING `columnTypes` param and 10\n`numColumns` would be named `100_STRING_10`.\n\nSecond, each staging table is used to create resized staging tables to match the sizes in the `targetDataSizes` parameter.\nThis is accomplished using the [bq_table_resizer module](https:\/\/github.com\/GoogleCloudPlatform\/professional-services\/blob\/master\/examples\/dataflow-data-generator\/bigquery-scripts\/bq_table_resizer.py)\n of the [Dataflow Data Generator\ntool](https:\/\/github.com\/GoogleCloudPlatform\/professional-services\/tree\/master\/examples\/dataflow-data-generator).\nThe resized staging tables are saved in a second staging dataset. 
The names of\nresized staging tables are generated using the name of the staging table they were\nresized from, plus the `targetDataSizes` param. For example, the `100_STRING_10` staging table\nfrom above will be used to create the following four tables in the\nresized staging dataset: `100_STRING_10_10MB`, `100_STRING_10_100MB`, `100_STRING_10_1GB`,\n`100_STRING_10_2GB`.\n\nTo run the process of creating staging and resized staging tables, run the following\ncommand:\n\n```\npython bq_benchmark.py \\\n--create_staging_tables \\\n--bq_project_id=<ID of project holding BigQuery resources> \\\n--staging_dataset_id=<ID of dataset holding staging tables> \\\n--resized_staging_dataset_id=<ID of dataset holding resized staging tables> \\\n--benchmark_table_schemas_directory=<optional directory where staging table schemas are stored> \\\n--dataflow_staging_location=<path on GCS to serve as staging location for Dataflow> \\\n--dataflow_temp_location=<path on GCS to serve as temp location for Dataflow>\n```\n\nParameters:\n\n`--create_staging_tables`: Flag to indicate that staging and resized staging\ntables should be created. It has a value of `store_true`, so this flag will be\nset to False unless it is provided in the command.\n\n`--bq_project_id`: The ID of the project that will hold the BigQuery resources for\nthe benchmark, including all datasets, results tables, staging tables, and\nbenchmark tables.\n\n`--staging_dataset_id`: The ID of the dataset that will hold the first set of staging\ntables. For the tool to work correctly, the `staging_dataset_id` must only contain\nstaging tables, and it must be different from the `--resized_staging_dataset_id`.\nDo not store tables for any other purposes in this dataset.\n\n`--resized_staging_dataset_id`: The ID of the dataset that will hold the resized staging\ntables. 
For the tool to work correctly, the `resized_staging_dataset_id` must only contain\nresized staging tables, and it must be different from the `--staging_dataset_id`.\nDo not store tables for any other purposes in this dataset.\n\n`--benchmark_table_schemas_directory`: Optional argument for the directory where\nthe schemas for the staging tables are stored. It defaults to\n`json_schemas\/benchmark_table_schemas`. If your schemas are elsewhere, provide\nthat directory.\n\n`--dataflow_staging_location`: Staging location for Dataflow on GCS. Include\nthe 'gs:\/\/' prefix, the name of the bucket you want to use, and any prefix. For example,\n`gs:\/\/<bucket_name>\/staging`. Note: be sure to use a different bucket than the one\nprovided in the `--bucket_name` parameter used below with the `--create_files` and\n`--create_benchmark_tables` flags.\n\n`--dataflow_temp_location`: Temp location for Dataflow on GCS. Include\nthe 'gs:\/\/' prefix, the name of the bucket you want to use, and any prefix. For example,\n`gs:\/\/<bucket_name>\/temp`. Note: be sure to use a different bucket than the one\nprovided in the `--bucket_name` parameter used below with the `--create_files` and\n`--create_benchmark_tables` flags.\n\n#### 5. Create Files (Needed for File Loader and Federated Query Benchmarks)\nOnce the resized staging tables are created, the next step is to use the resized\nstaging tables to create the files on GCS. The resized staging tables already contain\ncombinations of the `columnTypes`, `numColumns`, and `targetDataSize` parameters. Now\neach of the resized staging tables must be extracted to combinations of files\ngenerated from the `fileType` and `compression` parameters. In each combination,\nthe extraction is only done for the first file (`numFiles`=1). 
For example,\nthe resized staging table `100_STRING_10_10MB` must be used to create the following\nfiles on GCS:\n\n* fileType=avro\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=1\/tableSize=10MB\/file1.avro\n* fileType=avro\/compression=snappy\/numColumns=10\/columnTypes=100_STRING\/numFiles=1\/tableSize=10MB\/file1.snappy\n* fileType=csv\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=1\/tableSize=10MB\/file1.csv\n* fileType=csv\/compression=gzip\/numColumns=10\/columnTypes=100_STRING\/numFiles=1\/tableSize=10MB\/file1.gzip\n* fileType=json\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=1\/tableSize=10MB\/file1.json\n* fileType=json\/compression=gzip\/numColumns=10\/columnTypes=100_STRING\/numFiles=1\/tableSize=10MB\/file1.gzip\n* fileType=parquet\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=1\/tableSize=10MB\/file1.parquet\n\nThe method of extracting the resized staging table depends on the combination of parameters.\nBigQuery extract jobs are used if the `fileType` is csv or json, or if the `fileType` is avro and\nthe resized staging table size is <= 1 GB. If the `fileType` is avro and the `targetDataSize`\nis > 1 GB, Dataflow is used to generate the file, since attempting to extract a staging table\nof this size to avro causes errors. If the `fileType` is parquet, Dataflow is used as well,\nsince BigQuery extract jobs don't support the parquet file type.\n\nOnce the first file for each combination is generated (`numFiles`=1), it is copied\nto create the same combination of files, but where `numFiles` > 1. More specifically,\nit is copied 100 times for `numFiles`=100, 1000 times for `numFiles`=1000, and\n10000 times for `numFiles`=10000. Copying is much faster than extracting each\ntable tens of thousands of times. 
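As a quick sanity check on the counts above, the prefix layout and the copy fan-out can be sketched as follows. The helper name is hypothetical; the real tool performs the extracts and copies itself:

```python
# Sketch of the GCS prefix layout and copy fan-out described above.
def gcs_prefix(file_type, compression, num_columns, column_types,
               num_files, table_size):
    # Mirrors the object-name prefixes shown in the bullet lists.
    return (f"fileType={file_type}/compression={compression}/"
            f"numColumns={num_columns}/columnTypes={column_types}/"
            f"numFiles={num_files}/tableSize={table_size}/")

# The 7 (fileType, compression) combinations extracted once each (numFiles=1).
base_combos = [("avro", "none"), ("avro", "snappy"), ("csv", "none"),
               ("csv", "gzip"), ("json", "none"), ("json", "gzip"),
               ("parquet", "none")]

print(gcs_prefix("csv", "none", 10, "100_STRING", 1, "10MB"))
# fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/

# Each base file is copied into the numFiles=100, 1000, and 10000 prefixes:
total_copies = len(base_combos) * (100 + 1000 + 10000)
print(total_copies)  # 77700
```

The arithmetic matches the 77,700 files listed in the next example: 7 base files, each fanned out into 100 + 1000 + 10000 copies.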
As an example, the files listed above are\ncopied to create the following 77,700 files:\n\n* fileType=avro\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=100\/tableSize=10MB\/* (contains file1.avro- file100.avro)\n* fileType=avro\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=1000\/tableSize=10MB\/* (contains file1.avro - file1000.avro)\n* fileType=avro\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=10000\/tableSize=10MB\/* (contains file1.avro - file10000.avro)\n* fileType=avro\/compression=snappy\/numColumns=10\/columnTypes=100_STRING\/numFiles=100\/tableSize=10MB\/* (contains file1.snappy- file100.snappy)\n* fileType=avro\/compression=snappy\/numColumns=10\/columnTypes=100_STRING\/numFiles=1000\/tableSize=10MB\/* (contains file1.snappy - file1000.snappy)\n* fileType=avro\/compression=snappy\/numColumns=10\/columnTypes=100_STRING\/numFiles=10000\/tableSize=10MB\/* (contains file1.snappy - file10000.snappy)\n* fileType=csv\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=100\/tableSize=10MB\/* (contains file1.csv - file100.csv)\n* fileType=csv\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=1000\/tableSize=10MB\/* (contains file1.csv - file1000.csv)\n* fileType=csv\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=10000\/tableSize=10MB\/* (contains file1.csv - file10000.csv)\n* fileType=csv\/compression=gzip\/numColumns=10\/columnTypes=100_STRING\/numFiles=100\/tableSize=10MB\/* (contains file1.gzip - file100.gzip)\n* fileType=csv\/compression=gzip\/numColumns=10\/columnTypes=100_STRING\/numFiles=1000\/tableSize=10MB\/* (contains file1.gzip - file1000.gzip)\n* fileType=csv\/compression=gzip\/numColumns=10\/columnTypes=100_STRING\/numFiles=10000\/tableSize=10MB\/* (contains file1.gzip - file10000.gzip)\n* fileType=json\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=100\/tableSize=10MB\/* (contains file1.json - file100.json)\n* 
fileType=json\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=1000\/tableSize=10MB\/* (contains file1.json - file1000.json)\n* fileType=json\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=10000\/tableSize=10MB\/* (contains file1.json - file10000.json)\n* fileType=json\/compression=gzip\/numColumns=10\/columnTypes=100_STRING\/numFiles=100\/tableSize=10MB\/* (contains file1.gzip - file100.gzip)\n* fileType=json\/compression=gzip\/numColumns=10\/columnTypes=100_STRING\/numFiles=1000\/tableSize=10MB\/* (contains file1.gzip - file1000.gzip)\n* fileType=json\/compression=gzip\/numColumns=10\/columnTypes=100_STRING\/numFiles=10000\/tableSize=10MB\/* (contains file1.gzip - file10000.gzip)\n* fileType=parquet\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=100\/tableSize=10MB\/* (contains file1.parquet - file100.parquet)\n* fileType=parquet\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=1000\/tableSize=10MB\/* (contains file1.parquet - file1000.parquet)\n* fileType=parquet\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=10000\/tableSize=10MB\/* (contains file1.parquet - file10000.parquet)\n\nTo complete the process of creating files, run the following command:\n\n```\npython bq_benchmark.py \\\n--create_files \\\n--gcs_project_id=<ID of project holding GCS resources> \\\n--resized_staging_dataset_id=<ID of dataset holding resized staging tables> \\\n--bucket_name=<name of bucket to hold files> \\\n--dataflow_staging_location=<path on GCS to serve as staging location for Dataflow> \\\n--dataflow_temp_location=<path on GCS to serve as temp location for Dataflow> \\\n--restart_file=<optional file name to restart with if program is stopped>\n```\n\nParameters:\n\n`--create_files`: Flag to indicate that files should be created and stored on GCS.\nIt has a value of `store_true`, so this flag will be\nset to False unless it is provided in the command.\n\n`--gcs_project_id`: The ID 
of the project that will hold the GCS resources for\nthe benchmark, including all files and the bucket that holds them.\n\n`--resized_staging_dataset_id`: The ID of the dataset that holds the resized\nstaging tables generated using the `--create_staging_tables` command.\n\n`--bucket_name`: Name of the bucket that will hold the created files. Note that\nthe only purpose of this bucket should be to hold the created files, and that files\nused for any other reason should be stored in a different bucket.\n\n`--dataflow_staging_location`: Staging location for Dataflow on GCS. Include\nthe 'gs:\/\/' prefix, the name of the bucket you want to use, and any prefix. For example,\n`gs:\/\/<bucket_name>\/staging`. Note: be sure to use a different bucket than the one\nprovided in the `--bucket_name` parameter.\n\n`--dataflow_temp_location`: Temp location for Dataflow on GCS. Include\nthe 'gs:\/\/' prefix, the name of the bucket you want to use, and any prefix. For example,\n`gs:\/\/<bucket_name>\/temp`. Note: be sure to use a different bucket than the one\nprovided in the `--bucket_name` parameter.\n\n`--restart_file`: Optional file name to start the file creation process with. Creating\neach file combination can take hours, and often a backend error or a timeout will\noccur, preventing all the files from being created. If this happens, copy the last file\nthat was successfully created from the logs and use it here. It should start with `fileType=`\nand end with the file extension. For example,\n`fileType=csv\/compression=none\/numColumns=10\/columnTypes=100_STRING\/numFiles=1000\/tableSize=10MB\/file324.csv`\n\n### Running the benchmarks\n\n#### File Loader Benchmark\nOnce the files are created, the File Loader Benchmark can be run. As a prerequisite for this step, a log sink in BigQuery that captures logs\nabout BigQuery must be set up in the same project that holds the benchmark\ntables. 
If a BigQuery log sink is not already set up, follow [these steps](https:\/\/github.com\/GoogleCloudPlatform\/professional-services\/tree\/master\/examples\/bigquery-audit-log#1-getting-the-bigquery-log-data).\n\nNote that this benchmark will delete tables after recording information on load time. Before the\ntables are deleted, the tables and their respective files can be used to run the Federated Query Benchmark. If\nrunning the two benchmarks independently, each file will be used to create a BigQuery table two separate times. Running the two benchmarks\nat the same time can save time if results for both benchmarks are desired. In this case, the `--include_federated_query_benchmark` flag can\nbe added to the below command. Be aware that running the queries will add significant time to the benchmark run, so leave the\nflag out of the command if the primary goal is to obtain results for the File Loader Benchmark.\n\nTo run the benchmark, use the following command:\n\n```\npython bq_benchmark.py \\\n--run_file_loader_benchmark \\\n--bq_project_id=<ID of the project holding the BigQuery resources> \\\n--gcs_project_id=<ID of project holding GCS resources> \\\n--staging_project_id=<ID of project holding staging tables> \\\n--staging_dataset_id=<ID of dataset holding staging tables> \\\n--benchmark_dataset_id=<ID of the dataset holding the benchmark tables> \\\n--bucket_name=<name of bucket to hold files> \\\n--results_table_name=<Name of results table> \\\n--results_dataset_id=<Name of dataset holding results table> \\\n--duplicate_benchmark_tables \\\n--bq_logs_dataset=<Name of dataset holding BQ logs table> \\\n--include_federated_query_benchmark\n```\n\nParameters:\n\n`--run_file_loader_benchmark`: Flag to initiate the process of running the File Loader Benchmark by creating tables from files and storing results for comparison.\nIt has a value of `store_true`, so this flag will be\nset to False unless it is provided in the command.\n\n`--gcs_project_id`: The ID of the 
project that will hold the GCS resources for\nthe benchmark, including all files and the bucket that holds them.\n\n`--bq_project_id`: The ID of the project that will hold the BigQuery resources for\nthe benchmark, including all datasets, results tables, staging tables, and\nbenchmark tables.\n\n`--staging_project_id`: The ID of the project that holds the first set of staging\ntables. While this will be the same as the `--bq_project_id` if running the project\nfrom scratch, it will differ from `--bq_project_id` if you are using file combinations\nthat have already been created and running benchmarks\/saving results in your own project.\n\n`--staging_dataset_id`: The ID of the dataset that will hold the first set of staging\ntables. For the tool to work correctly, the `staging_dataset_id` must only contain\nstaging tables, and it must be different from the `--resized_staging_dataset_id`.\nDo not store tables for any other purposes in this dataset.\n\n`--benchmark_dataset_id`: The ID of the dataset that will hold the benchmark tables.\n\n`--bucket_name`: Name of the bucket that will hold the file combinations to be\nloaded into benchmark tables. Note that the only purpose of this bucket should\nbe to hold the file combinations, and that files used for any other reason\nshould be stored in a different bucket.\n\n`--results_table_name`: Name of the results table to hold relevant information\nabout the benchmark loads.\n\n`--results_dataset_id`: Name of the dataset that holds the results table.\n\n`--duplicate_benchmark_tables`: Flag to indicate that a benchmark table should be\ncreated for a given file combination, even if that file combination has a benchmark\ntable already. Creating multiple benchmark tables for each file combination can\nincrease the accuracy of the average runtimes calculated from the results. If\nthis behavior is desired, include the flag. 
However, if you want to ensure that you\nfirst have at least one benchmark table for each file combination, then leave the\nflag off. In that case, the benchmark creation process will skip a file combination\nif it already has a benchmark table.\n\n`--bq_logs_dataset`: Name of the dataset holding the BQ logs table. This dataset must be\nin the project used for `--bq_project_id`.\n\n`--include_federated_query_benchmark`: Flag to indicate that the Federated Query Benchmark should\nbe run on the created tables and the files the tables were loaded from before\nthe tables are deleted. If results for both benchmarks are desired, this will save time\nwhen compared to running each benchmark independently, since the same tables needed for the\nFile Loader Benchmark are needed for the Federated Query Benchmark. It has a value of `store_true`,\nso this flag will be set to False unless it is provided in the command.\n\n\n#### Federated Query Benchmark\nOnce the files are created, the Federated Query Benchmark can be run. As a prerequisite for this step, a log sink in BigQuery that captures logs\nabout BigQuery must be set up in the same project that holds the benchmark\ntables. If a BigQuery log sink is not already set up, follow [these steps](https:\/\/github.com\/GoogleCloudPlatform\/professional-services\/tree\/master\/examples\/bigquery-audit-log#1-getting-the-bigquery-log-data).\n\nAs mentioned above, the Federated Query Benchmark can be run while running the File Loader Benchmark in addition to using\nthe command below. Note, though, that running federated queries on\nsnappy compressed files is not supported. When the File Loader Benchmark encounters a snappy compressed file, it still\nloads the file into a BigQuery table to capture load results, but it will skip the Federated Query portion. When the Federated\nQuery Benchmark encounters a snappy compressed file, it will skip the load altogether. 
Therefore, if obtaining Federated\nQuery Benchmark results is the primary goal, use the command below.\n\nIt should also be noted that since the Federated Query Benchmark loads files into tables, the load results for the File\nLoader Benchmark will also be captured. This will not add significant time to the benchmark run since the tables have to\nbe loaded regardless.\n\nTo run the benchmark, use the following command:\n\n```\npython bq_benchmark.py \\\n--run_federated_query_benchmark \\\n--bq_project_id=<ID of the project holding the BigQuery resources> \\\n--gcs_project_id=<ID of project holding GCS resources> \\\n--staging_project_id=<ID of project holding staging tables> \\\n--staging_dataset_id=<ID of dataset holding staging tables> \\\n--benchmark_dataset_id=<ID of the dataset holding the benchmark tables> \\\n--bucket_name=<name of bucket to hold files> \\\n--results_table_name=<Name of results table> \\\n--results_dataset_id=<Name of dataset holding results table> \\\n--bq_logs_dataset=<Name of dataset holding BQ logs table>\n```\n\nParameters:\n\n`--run_federated_query_benchmark`: Flag to initiate the process of running the Federated Query\nBenchmark by creating tables from files, running queries on both\nthe table and the files, and storing performance results.\nIt has a value of `store_true`, so this flag will be\nset to False unless it is provided in the command.\n\n`--gcs_project_id`: The ID of the project that will hold the GCS resources for\nthe benchmark, including all files and the bucket that holds them.\n\n`--bq_project_id`: The ID of the project that will hold the BigQuery resources for\nthe benchmark, including all datasets, results tables, staging tables, and\nbenchmark tables.\n\n`--staging_project_id`: The ID of the project that holds the first set of staging\ntables. 
While this will be the same as the `--bq_project_id` if running the project\nfrom scratch, it will differ from `--bq_project_id` if you are using file combinations\nthat have already been created and running benchmarks\/saving results in your own project.\n\n`--staging_dataset_id`: The ID of the dataset that will hold the first set of staging\ntables. For the tool to work correctly, the `staging_dataset_id` must only contain\nstaging tables, and it must be different from the `--resized_staging_dataset_id`.\nDo not store tables for any other purposes in this dataset.\n\n`--benchmark_dataset_id`: The ID of the dataset that will hold the benchmark tables.\n\n`--bucket_name`: Name of the bucket that will hold the file combinations to be\nloaded into benchmark tables. Note that the only purpose of this bucket should\nbe to hold the file combinations, and that files used for any other reason\nshould be stored in a different bucket.\n\n`--results_table_name`: Name of the results table to hold relevant information\nabout the benchmark loads.\n\n`--results_dataset_id`: Name of the dataset that holds the results table.\n\n`--bq_logs_dataset`: Name of the dataset holding the BQ logs table. 
This dataset must be\nin the project used for `--bq_project_id`.\n\n\n## Testing\n\nTests can be run by running the following command in the bq_file_load_benchmark\ndirectory:\n\n```\npython -m pytest --project_id=<ID of project that will hold test resources>\n```\n\nNote that the tests will create and destroy resources in the project denoted\nby `--project_id`.
for performance testing  While the list of parameters is growing  the current list of parameters and values is as follows     File Type       Avro   CSV   JSON   Parquet    Compression       gzip  for CSV and JSON    snappy  for AVRO     Number of columns      10   100   1000    Column Types      String only   50  String   50  NUMERIC   10  String   90  NUMERIC    Number of files      1   100   1000   10000    Target Data Size  Size of the BigQuery staging table used to generate each file       10MB   100MB   1GB   2GB  These parameters are used to create combinations of file types stored on in a bucket on GCS  An example of a file prefix generated from the above list of file parameters is   fileType csv compression none numColumns 10 columnTypes 100 STRING numFiles 100 tableSize 10MB      This prefix holds 100 uncompressed CSV files  each generated from a 10 MB BigQuery staging table with 10 string columns  The tool loads the 100 CSV files with this prefix to a BigQuery table and records the performance to create a benchmark   In the future  a parameter for slot type will be added with values for communal and reserved  In addition  ORC will be added as a value for file type  and struct  array types will be added to the values for column types       Federated Query Benchmark The federated query benchmark quantifies the difference in performance between queries on federated  external  and managed BigQuery tables  A variety of queries ranging in complexity will be created  These queries will be run on managed BigQuery tables and federated Google Cloud Storage files  including AVRO  CSV  JSON  and PARQUET  of identical schemas and sizes  The files created for the File Loader Benchmark will be reused here to run external queries on and to create BQ Managed tables with        Benchmark Parameters Parameters for this benchmark will include the type of table  type of query  and the table properties     Table Type        BQ MANAGED   Tables located within and managed by 
BigQuery     EXTERNAL   Data located in GCS files  which are used to create a temporary external table for querying      Query Type       SIMPLE SELECT     Select all columns and all rows     SELECT ONE STRING   Select the first string field in the schema  All schemas used in the benchmark contain at least one string field     SELECT 50 PERCENT   Select the first 50  of the table s fields   Future iterations of this benchmark will include more complex queries  such as those that utilize joins  subqueries  window functions  etc   Since the files created for the File Loader Benchmark will be reused for this benchmark  both the BQ Managed tables and GCS files will share the File Loader Benchmark parameters  with the only difference being that the snappy compression type is not supported for federated queries and therefore will not be included for comparison     File Type       Avro   CSV   JSON   Parquet    Compression       gzip  for CSV and JSON     Number of columns      10   100   1000    Column Types      String only   50  String   50  NUMERIC   10  String   90  NUMERIC    Number of files      1   100   1000   10000    Target Data Size  Size of the BigQuery staging table used to generate each file       10MB   100MB   1GB   2GB      Benchmark Results       BigQuery  The results of the benchmarks will be saved in a separate BigQuery table for ad hoc analysis  The results table will use the following schema   results table schema json  json schemas results table schema json        DataStudio Once the results table is populated with data  DataStudio can be used to visualize results  See  this article  https   support google com datastudio answer 6283323 hl en  to get started with DataStudio      Usage This project contains the tools to create the resources needed to run the benchmarks  The main method for the project is located in   bq benchmark py   bq benchmark py        Prepping the Benchmarks Resources from Scratch The following steps are needed to create the 
resources needed for the benchmarks  Some steps will only be needed for certain benchmarks  so feel free to skip them if you are only focused on a certain set of benchmarks        1  Create the Results Table  Needed for all benchmarks   If running the whole project from scratch  the first step is to create a table in BigQuery to store the results of the benchmark loads  A json file has been provided in the json schemas directory   results table schema json  json schemas results table schema json   with the  above schema  The schema can be used to create the results table by running the using the following command      python bq benchmark py     create results table     results table schema path  optional path to json schema for results table      results table name  results table name      results dataset id  dataset ID      Parameters      create results table   Flag to indicate that a results table should be created  It has a value of  store true   so this flag will be set to False  unless it is provided in the command       results table schema path   Optional argument  It defaults to  json schemas results table schema json   If using a json schema in a different location  provide the path to that schema       results table name   String representing the name of the results table  Note that just the table name is needed  not the full project id dataset id table name indicator      dataset id   ID of the dataset to hold the results table        2  Select File Parameters  Needed for File Loader and Federated Query Benchmarks  File parameters are used to help create the files needed for both the File Loader Benchmark and  the Federated Query Benchmark  They can be configured in the  FILE PARAMETERS  dictionary in   generic benchmark tools file parameters py   generic benchmark tools file parameters py   Currently  no file parameters can be added to the dictionary  as this will cause errors  However  parameters can be removed from the dictionary if you are looking 
for a smaller set of file combinations  Note that the parameter  numFiles  has to include at least the number 1 to ensure that the subsequent number of files are properly created  This is because the program uses this first file to make copies to create subsequent files  This is a much faster alternative than recreating identical files  For example  if you don t want the 1000 or 10000 as  numFile  parameters  you can take them out  but you must leave 1  e g   1  100    That way the first file can be copied to create the 100 files          3  Create Schemas for the Benchmark Staging Tables  Needed for File Loader and Federated Query Benchmarks  In order to create the files with the above parameters  the  Dataflow Data Generator tool  https   github com GoogleCloudPlatform professional services tree master examples dataflow data generator  from the Professional Services Examples library needs to be leveraged to create staging tables containing combinations of  columnTypes  and  numColumns  from the list of file parameters in   generic benchmark tools file parameters py   generic benchmark tools file parameters py   The staging tables will later be resized to match the sizes in  targetDataSize  file parameter  and then they will be extracted to files in GCS  However  before any of this can be done  JSON schemas for the staging tables must be created  To do this run the following command       python bq benchmark py     create benchmark schemas     benchmark table schemas directory  optional directory where schemas should be stored       Parameters      create benchmark schemas   Flag to indicate that benchmark schemas should be created  It has a value of  store true   so this flag will be set to False  unless it is provided in the command      benchmark table schemas directory   Optional argument for the directory where the schemas for the staging tables are to be stored  It defaults to  json schemas benchmark table schemas   If you would prefer that the schemas are 
written to a different directory  provide that directory        4  Create Staging Tables  Needed for File Loader and Federated Query Benchmarks  Once the schemas are created for the staging tables  the staging tables themselves can be created  This is a two step process   First  a set of staging tables are created using the data generator pipeline module in the  Dataflow Data Generator tool  https   github com GoogleCloudPlatform professional services tree master examples dataflow data generator  using the schemas created in step 3  One staging table is created for each combination of columnTypes and numColumns file parameters  A small number of rows are created in each staging table  500 rows  to get the process started  Once the tables are created  they are saved in a staging dataset  The names of staging tables are generated using their respective columnTypes and numColumms parameters  For example  a staging table created using the 100 STRING  columnTypes  param and 10  numColumns  would be named  100 STRING 10    Second  each staging table is used to create resized staging tables to match the sizes in the  targetDataSizes  parameter  This is accomplished using the  bq table resizer module  https   github com GoogleCloudPlatform professional services blob master examples dataflow data generator bigquery scripts bq table resizer py   of the  Dataflow Data Generator tool  https   github com GoogleCloudPlatform professional services tree master examples dataflow data generator   The resized staging tables are saved in a second staging dataset ws   The names of resized staging tables are generated using the name of the staging table they were resized from  plus the  targetDataSizes  param  For example  the  100 STRING 10  staging table from above will be used to create the following four tables in the resized staging dataset   100 STRING 10 10MB    100 STRING 10 100MB    100 STRING 10 1GB    100 STRING 10 2GB    To run the process of creating staging and resized 
staging tables, run the following command:

```
python bq_benchmark.py \
--create_staging_tables \
--bq_project_id=<ID of project holding BigQuery resources> \
--staging_dataset_id=<ID of dataset holding staging tables> \
--resized_staging_dataset_id=<ID of dataset holding resized staging tables> \
--benchmark_table_schemas_directory=<optional directory where staging table schemas are stored> \
--dataflow_staging_location=<path on GCS to serve as staging location for Dataflow> \
--dataflow_temp_location=<path on GCS to serve as temp location for Dataflow>
```

Parameters:

* `--create_staging_tables`: Flag to indicate that staging and resized staging tables should be created. It has a value of `store_true`, so this flag will be set to False unless it is provided in the command.
* `--bq_project_id`: The ID of the project that will hold the BigQuery resources for the benchmark, including all datasets, results tables, staging tables, and benchmark tables.
* `--staging_dataset_id`: The ID of the dataset that will hold the first set of staging tables. For the tool to work correctly, the `staging_dataset_id` must only contain staging tables, and it must be different than the `--resized_staging_dataset_id`. Do not store tables for any other purposes in this dataset.
* `--resized_staging_dataset_id`: The ID of the dataset that will hold the resized staging tables. For the tool to work correctly, the `resized_staging_dataset_id` must only contain resized staging tables, and it must be different than the `--staging_dataset_id`. Do not store tables for any other purposes in this dataset.
* `--benchmark_table_schemas_directory`: Optional argument for the directory where the schemas for the staging tables are stored. It defaults to `json_schemas/benchmark_table_schemas`. If your schemas are elsewhere, provide that directory.
* `--dataflow_staging_location`: Staging location for Dataflow on GCS. Include the `gs://` prefix, the name of the bucket you want to use, and any prefix. For
example: `gs://<bucket_name>/staging`. Note: be sure to use a different bucket than the one provided in the `--bucket_name` parameter used below with the `--create_files` and `--create_benchmark_tables` flags.
* `--dataflow_temp_location`: Temp location for Dataflow on GCS. Include the `gs://` prefix, the name of the bucket you want to use, and any prefix. For example: `gs://<bucket_name>/temp`. Note: be sure to use a different bucket than the one provided in the `--bucket_name` parameter used below with the `--create_files` and `--create_benchmark_tables` flags.

### 5. Create Files (Needed for File Loader and Federated Query Benchmarks)

Once the resized staging tables are created, the next step is to use the resized staging tables to create the files on GCS. The resized staging tables already contain combinations of the `columnTypes`, `numColumns`, and `targetDataSize` parameters. Now each of the resized staging tables must be extracted to combinations of files generated from the `fileType` and `compression` parameters. In each combination, the extraction is only done for the first file (`numFiles`=1). For example, the resized staging table `100_STRING_10_10MB` must be used to create the following files on GCS:

* `fileType=avro/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.avro`
* `fileType=avro/compression=snappy/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.snappy`
* `fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.csv`
* `fileType=csv/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.gzip`
* `fileType=json/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.json`
* `fileType=json/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.gzip`
* `fileType=parquet/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.parquet`

The method of
extracting the resized staging table depends on the combination of parameters. BigQuery extract jobs are used if the `fileType` is csv or json, or if the `fileType` is avro and the resized staging table size is less than 1 GB. If the `fileType` is avro and the `targetDataSize` is 1 GB or greater, Dataflow is used to generate the file, since attempting to extract a staging table of this size to avro causes errors. If the `fileType` is parquet, Dataflow is used as well, since BigQuery extract jobs don't support the parquet file type.

Once the first file for each combination is generated (`numFiles`=1), it is copied to create the same combination of files, but where `numFiles` > 1. More specifically, it is copied 100 times for `numFiles`=100, 1000 times for `numFiles`=1000, and 10000 times for `numFiles`=10000. Copying is much faster than extracting each table tens of thousands of times. As an example, the files listed above are copied to create the following 77,700 files:

* `fileType=avro/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/` — contains file1.avro ... file100.avro
* `fileType=avro/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/` — contains file1.avro ... file1000.avro
* `fileType=avro/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/` — contains file1.avro ... file10000.avro
* `fileType=avro/compression=snappy/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/` — contains file1.snappy ... file100.snappy
* `fileType=avro/compression=snappy/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/` — contains file1.snappy ... file1000.snappy
* `fileType=avro/compression=snappy/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/` — contains file1.snappy ... file10000.snappy
* `fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/` — contains file1.csv ... file100.csv
* `fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/` — contains file1.csv ... file1000.csv
* `fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/` — contains file1.csv ... file10000.csv
* `fileType=csv/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/` — contains file1.gzip ... file100.gzip
* `fileType=csv/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/` — contains file1.gzip ... file1000.gzip
* `fileType=csv/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/` — contains file1.gzip ... file10000.gzip
* `fileType=json/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/` — contains file1.json ... file100.json
* `fileType=json/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/` — contains file1.json ... file1000.json
* `fileType=json/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/` — contains file1.json ... file10000.json
* `fileType=json/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/` — contains file1.gzip ... file100.gzip
* `fileType=json/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/` — contains file1.gzip ... file1000.gzip
* `fileType=json/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/` — contains file1.gzip ... file10000.gzip
* `fileType=parquet/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/` — contains file1.parquet ... file100.parquet
* `fileType=parquet/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/` — contains file1.parquet ... file1000.parquet
* `fileType=parquet/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/` — contains file1.parquet ... file10000.parquet

To complete the process of creating files, run the following command:
```
python bq_benchmark.py \
--create_files \
--gcs_project_id=<ID of project holding GCS resources> \
--resized_staging_dataset_id=<ID of dataset holding resized staging tables> \
--bucket_name=<name of bucket to hold files> \
--dataflow_staging_location=<path on GCS to serve as staging location for Dataflow> \
--dataflow_temp_location=<path on GCS to serve as temp location for Dataflow> \
--restart_file=<optional file name to restart with if program is stopped>
```

Parameters:

* `--create_files`: Flag to indicate that files should be created and stored on GCS. It has a value of `store_true`, so this flag will be set to False unless it is provided in the command.
* `--gcs_project_id`: The ID of the project that will hold the GCS resources for the benchmark, including all files and the bucket that holds them.
* `--resized_staging_dataset_id`: The ID of the dataset that holds the resized staging tables generated using the `--create_staging_tables` command.
* `--bucket_name`: Name of the bucket that will hold the created files. Note that the only purpose of this bucket should be to hold the created files; files used for any other reason should be stored in a different bucket.
* `--dataflow_staging_location`: Staging location for Dataflow on GCS. Include the `gs://` prefix, the name of the bucket you want to use, and any prefix. For example: `gs://<bucket_name>/staging`. Note: be sure to use a different bucket than the one provided in the `--bucket_name` parameter.
* `--dataflow_temp_location`: Temp location for Dataflow on GCS. Include the `gs://` prefix, the name of the bucket you want to use, and any prefix. For example: `gs://<bucket_name>/temp`. Note: be sure to use a different bucket than the one provided in the `--bucket_name` parameter.
* `--restart_file`: Optional file name to start the file creation process with. Creating each file combination can take hours, and often a backend error or a timeout will occur, preventing all the files from being created. If
this happens, copy the last file that was successfully created from the logs and use it here. It should start with `fileType=` and end with the file extension. For example: `fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/file324.csv`.

## Running the benchmarks

### File Loader Benchmark

Once the files are created, the File Loader Benchmark can be run. As a prerequisite for this step, a log sink in BigQuery that captures logs about BigQuery must be set up in the same project that holds the benchmark tables. If a BigQuery log sink is not already set up, follow [these steps](https://github.com/GoogleCloudPlatform/professional-services/tree/master/examples/bigquery-audit-log#1-getting-the-bigquery-log-data).

Note that this benchmark will delete tables after recording information on load time. Before the tables are deleted, the tables and their respective files can be used to run the Federated Query Benchmark. If running the two benchmarks independently, each file will be used to create a BigQuery table two different times. Running the two benchmarks at the same time can save time if results for both benchmarks are desired. In this case, the `--include_federated_query_benchmark` flag can be added to the below command. Be aware that running the queries will add significant time to the benchmark run, so leave the flag out of the command if the primary goal is to obtain results for the File Loader Benchmark.

To run the benchmark, use the following command:

```
python bq_benchmark.py \
--run_file_loader_benchmark \
--bq_project_id=<ID of the project holding the BigQuery resources> \
--gcs_project_id=<ID of project holding GCS resources> \
--staging_project_id=<ID of project holding staging tables> \
--staging_dataset_id=<ID of dataset holding staging tables> \
--benchmark_dataset_id=<ID of the dataset holding the benchmark tables> \
--bucket_name=<name of bucket holding the files> \
--results_table_name=<name of results table> \
--results_dataset_id=<name of dataset holding results table> \
--duplicate_benchmark_tables \
--bq_logs_dataset=<name of dataset holding BQ logs table> \
--include_federated_query_benchmark
```

Parameters:

* `--run_file_loader_benchmark`: Flag to initiate the process of running the File Loader Benchmark by creating tables from files and storing results for comparison. It has a value of `store_true`, so this flag will be set to False unless it is provided in the command.
* `--gcs_project_id`: The ID of the project that will hold the GCS resources for the benchmark, including all files and the bucket that holds them.
* `--bq_project_id`: The ID of the project that will hold the BigQuery resources for the benchmark, including all datasets, results tables, staging tables, and benchmark tables.
* `--staging_project_id`: The ID of the project that holds the first set of staging tables. While this will be the same as `--bq_project_id` if running the project from scratch, it will differ from `--bq_project_id` if you are using file combinations that have already been created and are running benchmarks that save results in your own project.
* `--staging_dataset_id`: The ID of the dataset that holds the first set of staging tables. For the tool to work correctly, the `staging_dataset_id` must only contain staging tables, and it must be different than the `--resized_staging_dataset_id`. Do not store tables for any other purposes in this dataset.
* `--benchmark_dataset_id`: The ID of the dataset that will hold the benchmark tables.
* `--bucket_name`: Name of the bucket that holds the file combinations to be loaded into benchmark tables. Note that the only purpose of this bucket should be to hold the file combinations; files used for any other reason should be stored in a different bucket.
* `--results_table_name`: Name of the results table to hold relevant information about the benchmark loads.
* `--results_dataset_id`: Name of the dataset that holds the results table.
* `--duplicate_benchmark_tables`: Flag to indicate that a benchmark table should be created for a given file combination, even if that file combination already has a benchmark table. Creating multiple benchmark tables for each file combination can increase the accuracy of the average runtimes calculated from the results. If this behavior is desired, include the flag. However, if you want to ensure that you first have at least one benchmark table for each file combination, then leave the flag off. In that case, the benchmark creation process will skip a file combination if it already has a benchmark table.
* `--bq_logs_dataset`: Name of the dataset holding the BQ logs table. This dataset must be in the project used for `--bq_project_id`.
* `--include_federated_query_benchmark`: Flag to indicate that the Federated Query Benchmark should be run on the created tables and the files the tables were loaded from, before the tables are deleted. If results for both benchmarks are desired, this will save time compared to running each benchmark independently, since the same tables needed for the File Loader Benchmark are needed for the Federated Query Benchmark. It has a value of `store_true`, so this flag will be set to False unless it is provided in the command.

### Federated Query Benchmark

Once the files are created, the Federated Query Benchmark can be run. As a prerequisite for this step, a log sink in BigQuery that captures logs about BigQuery must be set up in the same project that holds the benchmark tables. If a BigQuery log sink is not already set up, follow [these steps](https://github.com/GoogleCloudPlatform/professional-services/tree/master/examples/bigquery-audit-log#1-getting-the-bigquery-log-data).

As mentioned above, the Federated Query Benchmark can be run while running the File Loader Benchmark, in addition to using the command below. Note, though, that running federated queries on snappy-compressed files is not supported. When the File Loader Benchmark encounters a snappy
compressed file, it still loads the file into a BigQuery table to capture load results, but it will skip the Federated Query portion. When the Federated Query Benchmark encounters a snappy-compressed file, it will skip the load altogether. Therefore, if obtaining Federated Query Benchmark results is the primary goal, use the command below.

It should also be noted that since the Federated Query Benchmark loads files into tables, the load results for the File Loader Benchmark will also be captured. This will not add significant time to the benchmark run, since the tables have to be loaded regardless.

To run the benchmark, use the following command:

```
python bq_benchmark.py \
--run_federated_query_benchmark \
--bq_project_id=<ID of the project holding the BigQuery resources> \
--gcs_project_id=<ID of project holding GCS resources> \
--staging_project_id=<ID of project holding staging tables> \
--staging_dataset_id=<ID of dataset holding staging tables> \
--benchmark_dataset_id=<ID of the dataset holding the benchmark tables> \
--bucket_name=<name of bucket holding the files> \
--results_table_name=<name of results table> \
--results_dataset_id=<name of dataset holding results table> \
--bq_logs_dataset=<name of dataset holding BQ logs table>
```

Parameters:

* `--run_federated_query_benchmark`: Flag to initiate the process of running the Federated Query Benchmark by creating tables from files, running queries on both the tables and the files, and storing performance results. It has a value of `store_true`, so this flag will be set to False unless it is provided in the command.
* `--gcs_project_id`: The ID of the project that will hold the GCS resources for the benchmark, including all files and the bucket that holds them.
* `--bq_project_id`: The ID of the project that will hold the BigQuery resources for the benchmark, including all datasets, results tables, staging tables, and benchmark tables.
* `--staging_project_id`: The ID of the project that holds the first set of staging
tables. While this will be the same as `--bq_project_id` if running the project from scratch, it will differ from `--bq_project_id` if you are using file combinations that have already been created and are running benchmarks that save results in your own project.
* `--staging_dataset_id`: The ID of the dataset that holds the first set of staging tables. For the tool to work correctly, the `staging_dataset_id` must only contain staging tables, and it must be different than the `--resized_staging_dataset_id`. Do not store tables for any other purposes in this dataset.
* `--benchmark_dataset_id`: The ID of the dataset that will hold the benchmark tables.
* `--bucket_name`: Name of the bucket that holds the file combinations to be loaded into benchmark tables. Note that the only purpose of this bucket should be to hold the file combinations; files used for any other reason should be stored in a different bucket.
* `--results_table_name`: Name of the results table to hold relevant information about the benchmark loads.
* `--results_dataset_id`: Name of the dataset that holds the results table.
* `--bq_logs_dataset`: Name of the dataset holding the BQ logs table. This dataset must be in the project used for `--bq_project_id`.

## Testing

Tests can be run by running the following command in the bq_file_load_benchmark directory:

```
python -m pytest --project_id=<ID of project that will hold test resources>
```

Note that the tests will create and destroy resources in the project denoted by `--project_id`."}
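The combination and copy counts described in step 5 can be sanity-checked with a short script. A minimal Python sketch, using the parameter values quoted above; the constant names and path layout are illustrative, not the tool's actual identifiers:

```python
from itertools import product

# fileType/compression pairs, numFiles and targetDataSize values from the text
# above (illustrative constants, not the benchmark tool's own code).
FILE_TYPES = [("avro", "none"), ("avro", "snappy"), ("csv", "none"),
              ("csv", "gzip"), ("json", "none"), ("json", "gzip"),
              ("parquet", "none")]
NUM_FILES = [1, 100, 1000, 10000]
TARGET_SIZES = ["10MB", "100MB", "1GB", "2GB"]

def combination_prefix(column_types, num_columns, file_type, compression,
                       num_files, size):
    """Build the GCS prefix for one file combination (layout is illustrative)."""
    return (f"fileType={file_type}/compression={compression}/"
            f"numColumns={num_columns}/columnTypes={column_types}/"
            f"numFiles={num_files}/tableSize={size}")

# Every combination derived from the single staging table 100_STRING_10.
prefixes = [combination_prefix("100_STRING", 10, ft, comp, n, size)
            for (ft, comp), n, size in product(FILE_TYPES, NUM_FILES, TARGET_SIZES)]
print(len(prefixes))  # 7 * 4 * 4 = 112

# Only numFiles=1 is extracted from BigQuery; larger numFiles are copies of
# that first file. For one resized table that is 7 * (100 + 1000 + 10000)
# copies, matching the 77,700 files listed in step 5.
copies_per_resized_table = len(FILE_TYPES) * sum(n for n in NUM_FILES if n > 1)
print(copies_per_resized_table)  # 77700
```

This also explains why `numFiles` must always include 1: without the extracted first file there is nothing to copy.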
{"questions":"GCP grpcexample gRPC Example user for a given user id This example creates a gRCP server that connect to spanner to find the name of Application Project Structure","answers":"# gRPC Example\n\nThis example creates a gRPC server that connects to Spanner to find the name of a\nuser for a given user id.\n\n## Application Project Structure\n\n ```\n    .\n    \u2514\u2500\u2500 grpc_example\n        \u2514\u2500\u2500 src\n            \u2514\u2500\u2500 main\n                \u251c\u2500\u2500 java\n                        \u2514\u2500\u2500 com.example.grpc\n                            \u251c\u2500\u2500 client\n                                \u2514\u2500\u2500 ConnectClient # Example Client\n                            \u251c\u2500\u2500 server\n                                \u2514\u2500\u2500 ConnectServer # Initializes gRPC Server\n                            \u2514\u2500\u2500 service\n                                \u2514\u2500\u2500 ConnectService # Implementation of rpc services in the proto\n                \u2514\u2500\u2500 proto\n                    \u2514\u2500\u2500 connect_service.proto # Proto definition of the server\n       \u251c\u2500\u2500 pom.xml\n       \u2514\u2500\u2500 README.md\n```\n\n## Technology Stack\n\n1. gRPC\n2. Spanner\n\n## Setup Instructions\n\n### Project Setup\n\n#### Creating a Project in the Google Cloud Platform Console\n\nIf you haven't already created a project, create one now. Projects enable you to\nmanage all Google Cloud Platform resources for your app, including deployment,\naccess control, billing, and services.\n\n1. Open the [Cloud Platform Console][cloud-console].\n1. In the drop-down menu at the top, select **Create a project**.\n1. Give your project a name, e.g. my-dfdl-project.\n1. Make a note of the project ID, which might be different from the project\n   name. 
The project ID is used in commands and in configurations.\n\n[cloud-console]: https:\/\/console.cloud.google.com\/\n\n#### Enabling billing for your project.\n\nIf you haven't already enabled billing for your\nproject, [enable billing][enable-billing] now. Enabling billing is\nrequired to use Cloud Spanner and to create VM instances.\n\n[enable-billing]: https:\/\/console.cloud.google.com\/project\/_\/settings\n\n#### Install the Google Cloud SDK.\n\nIf you haven't already installed the Google Cloud\nSDK, [install the Google Cloud SDK][cloud-sdk] now. The SDK contains tools and\nlibraries that enable you to create and manage resources on Google Cloud\nPlatform.\n\n[cloud-sdk]: https:\/\/cloud.google.com\/sdk\/\n\n#### Setting Google Application Default Credentials\n\nSet\nyour [Google Application Default Credentials][application-default-credentials]\nby [initializing the Google Cloud SDK][cloud-sdk-init] with the command:\n\n```\ngcloud init\n```\n\nGenerate a credentials file by running the\n[application-default login](https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/auth\/application-default\/login)\ncommand:\n\n```\ngcloud auth application-default login\n```\n\n[cloud-sdk-init]: https:\/\/cloud.google.com\/sdk\/docs\/initializing\n\n[application-default-credentials]: https:\/\/developers.google.com\/identity\/protocols\/application-default-credentials\n\n### Spanner Setup from the Console\n\n1. Create an instance called grpc-example\n2. Create a database called grpc_example_db\n3. Create a table named Users using the following Data Definition Language (\n   DDL) statement for the database\n\n```\n  CREATE TABLE Users (\n  user_id INT64 NOT NULL,\n  name STRING(MAX),\n) PRIMARY KEY(user_id);\n```\n\n4. 
Insert the following record into the table Users\n\n```\nINSERT INTO\n  Users (user_id,\n    name)\nVALUES\n  (1234, -- type: INT64\n    \"MyName\" -- type: STRING(MAX)\n    );\n```\n\n## Usage\n\n### Initialize the server\n\n```\n$ mvn -DskipTests package exec:java \\\n    -Dexec.mainClass=com.example.grpc.server.ConnectServer\n```\n\n### Run the Client\n\n```\n$ mvn -DskipTests package exec:java \\\n      -Dexec.mainClass=com.example.grpc.client.ConnectClient\n```","site":"GCP"}
{"questions":"GCP The popularization of IoT devices and the evolvement of machine learning technologies have brought tremendous opportunities for new businesses We demonstrate how home appliances e g kettle and washing machine working status on off can be inferred from gross power readings collected by a smart meter together with state of art machine learning techniques An end to end demo system is developed entirely on Google Cloud Platform as shown in the following figure It includes Data collection and ingesting through Cloud IoT Core and Cloud Pub Sub Home Appliances Working Status Monitoring Using Gross Power Readings Machine Learning model serving using CMLE together with App Engine as frontend Steps to deploy the demo system Data visualization and exploration using Colab Machine learning model development using Tensorflow and training using Cloud Machine Learning Engine CMLE","answers":"# Home Appliances\u2019 Working Status Monitoring Using Gross Power Readings\nThe popularization of IoT devices and the evolution of machine learning technologies have brought tremendous opportunities for new businesses. We demonstrate how home appliances\u2019 (e.g. kettle and washing machine) working status (on\/off) can be inferred from gross power readings collected by a smart meter together with state-of-the-art machine learning techniques. An end-to-end demo system is developed entirely on Google Cloud Platform as shown in the following figure. It includes:\n* Data collection and ingesting through Cloud IoT Core and Cloud Pub\/Sub\n* Machine learning model development using TensorFlow and training using Cloud Machine Learning Engine (CMLE)\n* Machine Learning model serving using CMLE together with App Engine as frontend\n* Data visualization and exploration using Colab\n![system architecture](.\/img\/arch.jpg)\n\n## Steps to deploy the demo system\n\n### Step 0. 
Prerequisite\nBefore you follow the instructions below to deploy our demo system, you need a Google Cloud project. If you don't have one, you can find detailed setup instructions [here](https:\/\/cloud.google.com\/dataproc\/docs\/guides\/setup-project).\n\nAfter you have created a Google Cloud project, follow the instructions below:\n```shell\n# clone the repository storing all the necessary code\ngit clone [REPO_URL]\n\ncd professional-services\/examples\/e2e-home-appliance-status-monitoring\n\n# store your project's ID in an environment variable\nGOOGLE_PROJECT_ID=[your-google-project-id]\n\n# create and download a service account\ngcloud --project ${GOOGLE_PROJECT_ID} iam service-accounts create e2e-demo-sc\ngcloud --project ${GOOGLE_PROJECT_ID} projects add-iam-policy-binding ${GOOGLE_PROJECT_ID} \\\n       --member \"serviceAccount:e2e-demo-sc@${GOOGLE_PROJECT_ID}.iam.gserviceaccount.com\" \\\n       --role \"roles\/owner\"\ngcloud --project ${GOOGLE_PROJECT_ID} iam service-accounts keys create e2e_demo_credential.json \\\n       --iam-account e2e-demo-sc@${GOOGLE_PROJECT_ID}.iam.gserviceaccount.com\nGOOGLE_APPLICATION_CREDENTIALS=${PWD}\"\/e2e_demo_credential.json\"\n\n# create a new GCS bucket if you don't have one\nBUCKET_NAME=[your-bucket-name]\ngsutil mb -p ${GOOGLE_PROJECT_ID} gs:\/\/${BUCKET_NAME}\/\n```\n\nYou also need to enable the following APIs in the APIs & Services menu.\n* Cloud ML Engine API\n* Cloud IoT API\n* Cloud PubSub API\n\n### Step 1. 
Deploy a trained ML model in Cloud ML Engine.\nYou can download our trained model [from the `data` directory](.\/data\/model.tar.bz2) or you can train your own model using `ml\/start.sh`.\nNote: you need to enable the Cloud ML Engine API first.\n\nIf you are using our trained model:\n```shell\n# download our trained model\ntar jxvf data\/model.tar.bz2\n\n# upload the model to your bucket\ngsutil cp -r model gs:\/\/${BUCKET_NAME}\n```\n\nIf you want to train your own model:\n```shell\npip install ml\/\ncd ml\n\n# use one of the following commands and your model should be saved in your cloud storage bucket\n\n# train locally with default parameters\nbash start.sh -l\n# train locally with specified parameters\nbash start.sh -l learning-rate=0.00001 lstm-size=128\n# train on Cloud ML Engine with default parameters\nbash start.sh -p ${GOOGLE_PROJECT_ID} -b ${BUCKET_NAME}\n# train on Cloud ML Engine with specified parameters\nbash start.sh -p ${GOOGLE_PROJECT_ID} -b ${BUCKET_NAME} learning-rate=0.00001 lstm-size=128\n# run hyper-parameter tuning on Cloud ML Engine\nbash start.sh -p ${GOOGLE_PROJECT_ID} -b ${BUCKET_NAME} -t\n```\n\nFinally, let's deploy our model to Cloud ML Engine:\n```shell\n# Set up an appropriate region\n# Available regions: https:\/\/cloud.google.com\/ml-engine\/docs\/tensorflow\/regions\nREGION=\"your-application-region\"\n\n# create a model\ngcloud ml-engine models create EnergyDisaggregationModel \\\n      --regions ${REGION} \\\n      --project ${GOOGLE_PROJECT_ID}\n\n# create a model version\ngcloud ml-engine versions create v01 \\\n      --model EnergyDisaggregationModel \\\n      --origin gs:\/\/${BUCKET_NAME}\/model \\\n      --runtime-version 1.12 \\\n      --framework TensorFlow \\\n      --python-version 3.5 \\\n      --project ${GOOGLE_PROJECT_ID}\n```\n\n### Step 2. 
### Step 2. Deploy the server
Type in the following commands to start the server on App Engine.
```shell
cd e2e_demo/server
cp ${GOOGLE_APPLICATION_CREDENTIALS} .
echo "  GOOGLE_APPLICATION_CREDENTIALS: '${GOOGLE_APPLICATION_CREDENTIALS##*/}'" >> app.yaml
echo "  GOOGLE_CLOUD_PROJECT: '${GOOGLE_PROJECT_ID}'" >> app.yaml

# deploy to App Engine; choose any region that suits you and answer yes at the end
gcloud --project ${GOOGLE_PROJECT_ID} app deploy

# create a pubsub topic "data" and a subscription to the topic.
# this is the pubsub channel between IoT devices and the server.
gcloud --project ${GOOGLE_PROJECT_ID} pubsub topics create data
gcloud --project ${GOOGLE_PROJECT_ID} pubsub subscriptions create sub0 \
       --topic=data --push-endpoint=https://${GOOGLE_PROJECT_ID}.appspot.com/upload

# create a pubsub topic "pred" and a subscription to the topic.
# this is the pubsub channel between the server and a client consuming the results.
gcloud --project ${GOOGLE_PROJECT_ID} pubsub topics create pred
gcloud --project ${GOOGLE_PROJECT_ID} pubsub subscriptions create sub1 --topic=pred

# uncompress the data
bunzip2 data/*csv.bz2

# create the BigQuery dataset and tables
bq --project_id ${GOOGLE_PROJECT_ID} mk \
   --dataset ${GOOGLE_PROJECT_ID}:EnergyDisaggregation
bq --project_id ${GOOGLE_PROJECT_ID} load --autodetect \
   --source_format=CSV EnergyDisaggregation.ApplianceInfo \
   ./data/appliance_info.csv
bq --project_id ${GOOGLE_PROJECT_ID} load --autodetect \
   --source_format=CSV EnergyDisaggregation.ApplianceStatusGroundTruth \
   ./data/appliance_status_ground_truth.csv
bq --project_id ${GOOGLE_PROJECT_ID} mk \
   --table ${GOOGLE_PROJECT_ID}:EnergyDisaggregation.ActivePower \
   time:TIMESTAMP,device_id:STRING,power:INTEGER
bq --project_id ${GOOGLE_PROJECT_ID} mk \
   --table ${GOOGLE_PROJECT_ID}:EnergyDisaggregation.Predictions \
   time:TIMESTAMP,device_id:STRING,appliance_id:INTEGER,pred_status:INTEGER,pred_prob:FLOAT
```

### Step 3. Set up your Cloud IoT client(s)
Follow the instructions below to set up your client(s).
Note: you need to enable the Cloud IoT API first.
```shell
# specify the IDs for the Cloud IoT registry and the devices you want.
# see the permitted characters and sizes for each resource:
# https://cloud.google.com/iot/docs/requirements#permitted_characters_and_size_requirements
REGISTRY_ID="your-registry-id"
DEVICE_IDS=("your-device-id1" "your-device-id2" ...)

# create an IoT registry wired to the pubsub topic created above
gcloud --project ${GOOGLE_PROJECT_ID} iot registries create ${REGISTRY_ID} \
       --region ${REGION} --event-notification-config topic=data

# generate a key pair for the Cloud IoT devices.
# the rs256.key generated by the following command will be used to create JWTs.
ssh-keygen -t rsa -b 4096 -f ./rs256.key
# press "Enter" twice
openssl rsa -in ./rs256.key -pubout -outform PEM -out ./rs256.pub

# create multiple devices
for device in "${DEVICE_IDS[@]}"; do
  # create an IoT device with the generated public key in the registry
  gcloud --project ${GOOGLE_PROJECT_ID} iot devices create ${device} \
         --region ${REGION} --public-key path=./rs256.pub,type=rsa-pem \
         --registry ${REGISTRY_ID} --log-level=debug
done

# download root CA certificates for the MQTT client that communicates with the IoT server.
# download using curl: curl -o roots.pem https://pki.goog/roots.pem
wget https://pki.goog/roots.pem --no-check-certificate
```

### Complete! Try the demo system in Colab or locally.
If you want to use Colab, visit https://colab.research.google.com/, where you can import our notebooks either directly from GitHub or upload them from your cloned repository. Follow the instructions in the notebooks and you should be able to reproduce our results.
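Before running the client notebook, note that Cloud IoT Core requires the MQTT client ID and telemetry topic to follow fixed resource-path formats. A small sketch (the values shown are placeholders):

```python
# Cloud IoT Core MQTT connection parameters are plain resource paths;
# these helpers only do string formatting.

def mqtt_client_id(project_id, region, registry_id, device_id):
    """Full device path used as the MQTT client id."""
    return (f"projects/{project_id}/locations/{region}"
            f"/registries/{registry_id}/devices/{device_id}")

def telemetry_topic(device_id):
    """MQTT topic a device publishes telemetry events to."""
    return f"/devices/{device_id}/events"

if __name__ == "__main__":
    print(mqtt_client_id("my-project", "us-central1", "my-registry", "meter-0"))
    print(telemetry_topic("meter-0"))
```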
If you want to run the demo locally:
```shell
cd notebook
virtualenv env
source env/bin/activate
pip install jupyter
jupyter notebook
```

*notebook/EnergyDisaggregationDemo_View.ipynb* allows you to view raw power consumption data and our model's prediction results in near real time. It pulls data from our server's Pub/Sub topic for visualization. Fill in the necessary information in the *Configuration* block and run all the cells.

*notebook/EnergyDisaggregationDemo_Client.ipynb* simulates multiple smart meters by reading power consumption data from a real-world dataset and sending the readings to our server. All Cloud IoT Core related code resides in this notebook. Fill in the necessary information in the *Configuration* block and run all the cells. Once you see messages being sent, you should be able to see plots like the one shown below in *notebook/EnergyDisaggregationDemo_View.ipynb*.

![Demo system sample output](./img/demo03.gif)
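End to end, each simulated meter reading arrives at the server's `/upload` endpoint as a Pub/Sub push request whose `message.data` field is base64-encoded. A sketch of the decoding the server has to do (the payload fields here are hypothetical, not the demo's actual schema):

```python
import base64
import json

# Sketch: decode the device payload carried inside a Pub/Sub push request body.
# The {"device_id": ..., "power": ...} payload is a made-up example.

def decode_push_body(body):
    """Return the decoded device payload from a Pub/Sub push request body."""
    envelope = json.loads(body)
    data = base64.b64decode(envelope["message"]["data"]).decode("utf-8")
    return json.loads(data)

if __name__ == "__main__":
    payload = {"device_id": "meter-0", "power": 1250}
    body = json.dumps({
        "message": {"data": base64.b64encode(json.dumps(payload).encode()).decode()},
        "subscription": "projects/p/subscriptions/sub0",
    })
    print(decode_push_body(body))
```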
Contains several examples and solutions to common use cases observed in various scenarios.


- [Ingesting data from a file into BigQuery](#ingesting-data-from-a-file-into-bigquery)
- [Transforming data in Dataflow](#transforming-data-in-dataflow)
- [Joining file and BigQuery datasets in Dataflow](#joining-file-and-bigquery-datasets-in-dataflow)
- [Ingest data from files into BigQuery reading the file structure from Datastore](#ingest-data-from-files-into-bigquery-reading-the-file-structure-from-datastore)
- [Data lake to data mart](#data-lake-to-data-mart)

The solutions below become more complex as we incorporate more Dataflow features.

## Ingesting data from a file into BigQuery
![Alt text](img/csv_file_to_bigquery.png?raw=true "CSV file to BigQuery")

This example shows how to ingest a raw CSV file into BigQuery with minimal transformation. It is the simplest example and a great one to start with in order to become familiar with Dataflow.

There are three main steps:
1. [Read in the file](pipelines/data_ingestion.py#L100-L106).
2. [Transform the CSV format into a dictionary format](pipelines/data_ingestion.py#L107-L113).
3. [Write the data to BigQuery](pipelines/data_ingestion.py#L114-L126).


### Read data in from the file.
![Alt text](img/csv_file.png?raw=true "CSV file")

Using the built-in TextIO connector allows Beam to have several workers read the file in parallel. This allows larger file sizes and large numbers of input files to scale well within Beam.
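The "transform into a dictionary" step above boils down to zipping a parsed CSV row with the column names. A plain-Python sketch with hypothetical column names (the real mapping lives in `pipelines/data_ingestion.py`):

```python
import csv
import io

# Sketch: map one CSV line (as TextIO would emit it) to a column->value dict.
# The column names below are made up for illustration.
COLUMNS = ["state", "gender", "year", "name", "number"]

def row_to_dict(line):
    """Parse a single CSV line into a dictionary keyed by column name."""
    values = next(csv.reader(io.StringIO(line)))
    return dict(zip(COLUMNS, values))

if __name__ == "__main__":
    print(row_to_dict("KS,F,1923,Dorothy,654"))
```

Using `csv.reader` rather than `str.split` keeps quoted fields containing commas intact.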
Dataflow will read in each row of data from the file and distribute the data to the next stage of the pipeline.

### Transform the CSV format into a dictionary format.
![Alt text](img/custom_python_code.png?raw=true "Custom Python code")

This is the stage of the code where you would typically put your business logic. In this example we are simply transforming the data from CSV format into a Python dictionary. The dictionary maps column names to the values we want to store in BigQuery.

### Write the data to BigQuery.
![Alt text](img/output_to_bigquery.png?raw=true "Output to BigQuery")

Writing the data to BigQuery does not require custom code. Passing the table name and a few other optional arguments into BigQueryIO sets up the final stage of the pipeline.

This stage of the pipeline is typically referred to as our sink. The sink is the final destination of the data. No more processing will occur in the pipeline after this stage.

### Full code examples

Ready to dive deeper? Check out the complete code [here](pipelines/data_ingestion.py).

## Transforming data in Dataflow
![Alt text](img/csv_file_to_bigquery.png?raw=true "CSV file to BigQuery")

This example builds upon simple ingestion and demonstrates some basic data type transformations.

In line with the previous example, there are three steps. The transformation step is made more useful by translating the date format from the source data into a date format BigQuery accepts.

1. [Read in the file](pipelines/data_transformation.py#L136-L142).
2. [Transform the CSV format into a dictionary format and translate the date format](pipelines/data_transformation.py#L143-L149).
3. [Write the data to BigQuery](pipelines/data_transformation.py#L150-L161).
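The date translation in step 2 can be as small as a strptime/strftime round trip. The source format below (`MM/DD/YYYY`) is an assumption for illustration; see `pipelines/data_transformation.py` for the format the sample data actually uses:

```python
from datetime import datetime

# Sketch: rewrite a source date string into the YYYY-MM-DD form that
# BigQuery's DATE type accepts. The source format is hypothetical.

def translate_date(value, source_format="%m/%d/%Y"):
    """Convert a date string from the source format to BigQuery's DATE format."""
    return datetime.strptime(value, source_format).strftime("%Y-%m-%d")

if __name__ == "__main__":
    print(translate_date("04/26/2005"))  # 2005-04-26
```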
### Read data in from the file.
![Alt text](img/csv_file.png?raw=true "CSV file")

Similar to the previous example, this example uses TextIO to read the file from Google Cloud Storage.

### Transform the CSV format into a dictionary format.
![Alt text](img/custom_python_code.png?raw=true "Custom Python code")

This example builds upon the simpler ingestion example by introducing data type transformations.

### Write the data to BigQuery.
![Alt text](img/output_to_bigquery.png?raw=true "Output to BigQuery")

Just as in the previous example, this example uses BigQueryIO to write out to BigQuery.

### Full code examples

Ready to dive deeper? Check out the complete code [here](pipelines/data_transformation.py).

## Joining file and BigQuery datasets in Dataflow
![Alt text](img/csv_join_bigquery_to_bigquery.png?raw=true "CSV file joined with BigQuery data to BigQuery")

This example demonstrates how to work with two datasets. A primary dataset is read from a file, and another dataset containing reference data is read from BigQuery. The two datasets are then joined in Dataflow before the joined dataset is written to BigQuery.

This pipeline contains 4 steps:
1. [Read in the primary dataset from a file](pipelines/data_enrichment.py#L165-L176).
2. [Read in the reference data from BigQuery](pipelines/data_enrichment.py#L155-L163).
3. [Custom Python code](pipelines/data_enrichment.py#L138-L143) is used to [join the two datasets](pipelines/data_enrichment.py#L177-L180).
4. [The joined dataset is written out to BigQuery](pipelines/data_enrichment.py#L181-L194).
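Step 3's join is an ordinary dictionary merge once the BigQuery reference data is available as an in-memory side input. A sketch with hypothetical field names:

```python
# Sketch: enrich a main-dataset row with a value looked up in an in-memory
# side input, mirroring a Dataflow side-input join. Field names are made up.

def join_with_side_input(row, state_names):
    """Merge a row dict with its full state name from the side input."""
    return {**row, "state_full_name": state_names.get(row["state"], "unknown")}

if __name__ == "__main__":
    side_input = {"KS": "Kansas", "TX": "Texas"}
    print(join_with_side_input({"state": "KS", "name": "Dorothy"}, side_input))
```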
### Read in the primary dataset from a file
![Alt text](img/csv_file.png?raw=true "CSV file")

Similar to previous examples, we use TextIO to read the dataset from a CSV file.

### Read in the reference data from BigQuery
![Alt text](img/import_state_name_from_bigquery.png?raw=true "Import state name data from BigQuery")

Using BigQueryIO, we can specify a query to read data from. Dataflow is then able to distribute the data from BigQuery to the next stages in the pipeline.

In this example the additional dataset is represented as a side input. Side inputs in Dataflow are typically reference datasets that fit into memory. Other examples will explore alternative methods for joining datasets, which work well for datasets that do not fit into memory.

### Custom Python code is used to join the two datasets
![Alt text](img/3_custom_python_code.png?raw=true "Custom Python code")

Using custom Python code, we join the two datasets together. Because the two datasets are dictionaries, the Python code is the same as it would be for merging any two Python dictionaries.

### The joined dataset is written out to BigQuery
![Alt text](img/4_output_to_bigquery.png?raw=true "Output to BigQuery")

Finally, the joined dataset is written out to BigQuery. This uses the same BigQueryIO API used in previous examples.

### Full code examples

Ready to dive deeper? Check out the complete code [here](pipelines/data_enrichment.py).
## Ingest data from files into BigQuery reading the file structure from Datastore

In this example we create a Python [Apache Beam](https://beam.apache.org/) pipeline running on [Google Cloud Dataflow](https://cloud.google.com/dataflow/) to import CSV files into BigQuery using the following architecture:

![Apache Beam pipeline to import CSV into BQ](img/data_ingestion_configurable.jpg)

The architecture uses:
* [Google Cloud Storage](https://cloud.google.com/storage/) to store the CSV source files
* [Google Cloud Datastore](https://cloud.google.com/datastore/docs/concepts/overview) to store the CSV file structure and field types
* [Google Cloud Dataflow](https://cloud.google.com/dataflow/) to read the files from Google Cloud Storage, transform the data based on the structure of the file, and import the data into Google BigQuery
* [Google BigQuery](https://cloud.google.com/bigquery/) to store the data in a data lake.

You can use this script as a starting point to import your files into Google BigQuery. You'll probably need to adapt the script logic to your file name structure or to your particular needs.

### 1. Prerequisites
 - An up-and-running GCP project with an enabled billing account
 - gcloud installed and initialized to your project
 - Google Cloud Datastore enabled
 - Google Cloud Dataflow API enabled
 - A Google Cloud Storage bucket containing the files to import (CSV format) using the following naming convention: `TABLENAME_*.csv`
 - A Google Cloud Storage bucket for temp and staging Google Dataflow files
 - A Google BigQuery dataset
 - [Python](https://www.python.org/) >= 2.7 and the python-dev module
 - gcc
 - Google Cloud [Application Default Credentials](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login)

### 2. Create virtual environment
Create a new virtual environment (recommended) and install the requirements:

```
virtualenv env
source ./env/bin/activate
pip install -r requirements.txt
```

### 3. Configure table schema
Create a file that contains the structure of the CSVs to be imported. The filename needs to follow the convention `TABLENAME.csv`.

Example:
```
name,STRING
surname,STRING
age,INTEGER
```

You can check the parameters accepted by the `datastore_schema_import.py` script with the following command:
```
python pipelines/datastore_schema_import.py --help
```

Run the `datastore_schema_import.py` script to create the entry in Google Cloud Datastore using the following command:
```
python pipelines/datastore_schema_import.py --schema-file=<path_to_TABLENAME.csv>
```

### 4. Upload files into Google Cloud Storage
Upload the files to be imported into Google BigQuery to a Google Cloud Storage bucket. You can use `gsutil` with a command like:
```
gsutil cp [LOCAL_OBJECT_LOCATION] gs://[DESTINATION_BUCKET_NAME]/
```
To optimize the upload of big files, see the [documentation](https://cloud.google.com/solutions/transferring-big-data-sets-to-gcp).
Files need to be in CSV format, with the names of the columns as the first row. For example:
```
name,surname,age
test_1,test_1,30
test_2,test_2,40
"test_3, jr",surname,50
```

### 5. Run pipeline
You can check the parameters accepted by the `data_ingestion_configurable.py` script with the following command:
```
python pipelines/data_ingestion_configurable.py --help
```

You can run the pipeline locally with the following command:
```
python pipelines/data_ingestion_configurable.py \
--project=###PUT HERE PROJECT ID### \
--input-bucket=###PUT HERE GCS BUCKET NAME: gs://bucket_name ### \
--input-path=###PUT HERE INPUT FOLDER### \
--input-files=###PUT HERE FILE NAMES### \
--bq-dataset=###PUT HERE BQ DATASET NAME###
```

or you can run the pipeline on Google Dataflow using the following command:

```
python pipelines/data_ingestion_configurable.py \
--runner=DataflowRunner \
--max_num_workers=100 \
--autoscaling_algorithm=THROUGHPUT_BASED \
--region=###PUT HERE REGION### \
--staging_location=###PUT HERE GCS STAGING LOCATION### \
--temp_location=###PUT HERE GCS TMP LOCATION### \
--project=###PUT HERE PROJECT ID### \
--input-bucket=###PUT HERE GCS BUCKET NAME### \
--input-path=###PUT HERE INPUT FOLDER### \
--input-files=###PUT HERE FILE NAMES### \
--bq-dataset=###PUT HERE BQ DATASET NAME###
```

### 6. Check results
You can check the data imported into Google BigQuery from the Google Cloud Console UI.

## Data lake to data mart
![Alt text](img/data_lake_to_data_mart.png?raw=true "Data lake to data mart")

This example demonstrates joining data from two different datasets in BigQuery and applying transformations to the joined dataset before writing it back to BigQuery.

Joining two datasets from BigQuery is a common use case when a data lake has been implemented in BigQuery. Creating a data mart with denormalized datasets facilitates better performance when using visualization tools.

This pipeline contains 4 steps:
1. [Read in the primary dataset from BigQuery](pipelines/data_lake_to_mart.py#L278-L283).
2. [Read in the reference data from BigQuery](pipelines/data_lake_to_mart.py#L248-L276).
3. [Custom Python code](pipelines/data_lake_to_mart.py#L210-L224) is used to [join the two datasets](pipelines/data_lake_to_mart.py#L284-L287).
Alternatively, [CoGroupByKey can be used to join the two datasets](pipelines/data_lake_to_mart_cogroupbykey.py#L300-L310).
4. [The joined dataset is written out to BigQuery](pipelines/data_lake_to_mart.py#L288-L301).


### Read in the primary dataset from BigQuery
![Alt text](img/1_query_orders.png?raw=true "Read from BigQuery")

Similar to previous examples, we use BigQueryIO to read the dataset from the results of a query. In this case our main dataset is a fake orders dataset, containing a history of orders and associated data like quantity.

### Read in the reference data from BigQuery
![Alt text](img/2_query_account_details.png?raw=true "Import account details data from BigQuery")

In this example we use a fake account details dataset. This represents a common use case for denormalizing a dataset. The account details information contains attributes linked to the accounts in the orders dataset, for example the address and city of the account.

### Custom Python code is used to join the two datasets
![Alt text](img/3_custom_python_code.png?raw=true "Custom Python code")

Using custom Python code, we join the two datasets together. We provide two examples of joining these datasets. The first example uses side inputs, which require the dataset to fit into memory. The second example demonstrates how to use CoGroupByKey to join the datasets.

CoGroupByKey will facilitate joins between two datasets even if neither fits into memory. Explore the comments in the two code examples for a more in-depth explanation.

### The joined dataset is written out to BigQuery
![Alt text](img/4_output_to_bigquery.png?raw=true "Output to BigQuery")

Finally, the joined dataset is written out to BigQuery. This uses the same BigQueryIO API used in previous examples.

### Full code examples

Ready to dive deeper? Check out the complete code.
The example using side inputs is [here](pipelines/data_lake_to_mart.py) and the example using CoGroupByKey is [here](pipelines/data_lake_to_mart_cogroupbykey.py).
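The semantics of the CoGroupByKey variant can be illustrated in plain Python: group both keyed datasets, then pair the per-key value lists. Beam streams this, so neither dataset has to fit in memory; the in-memory sketch below (with made-up fields) only shows the shape of the result:

```python
from collections import defaultdict

# Sketch: an in-memory analogue of CoGroupByKey over two keyed datasets.
# Each key maps to a pair (orders-for-key, accounts-for-key).

def cogroup_by_key(orders, accounts):
    """Group two lists of (key, value) pairs into key -> (values_a, values_b)."""
    grouped = defaultdict(lambda: ([], []))
    for key, order in orders:
        grouped[key][0].append(order)
    for key, account in accounts:
        grouped[key][1].append(account)
    return dict(grouped)

if __name__ == "__main__":
    orders = [(1, {"qty": 2}), (1, {"qty": 5})]
    accounts = [(1, {"city": "Austin"})]
    print(cogroup_by_key(orders, accounts))
```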
dictionary format    Alt text  img custom python code png raw true  Custom Python code    This is the stage of the code where you would typically put your business logic   In this example we are simply transforming the data from a CSV format into a python dictionary   The dictionary maps column names to the values we want to store in BigQuery       Write the data to BigQuery    Alt text  img output to bigquery png raw true  Output to BigQuery    Writing the data to BigQuery does not require custom code   Passing the table name and a few other optional arguments into BigQueryIO sets up the final stage of the pipeline   This stage of the pipeline is typically referred to as our sink   The sink is the final destination of data   No more processing will occur in the pipeline after this stage       Full code examples  Ready to dive deeper   Check out the complete code  here  pipelines data ingestion py       Transforming data in Dataflow   Alt text  img csv file to bigquery png raw true  CSV file to BigQuery    This example builds upon simple ingestion  and demonstrates some basic data type transformations   In line with the previous example there are 3 steps   The transformation step is made more useful by translating the date format from the source data into a date format BigQuery accepts   1   Read in the file  pipelines data transformation py L136 L142   2   Transform the CSV format into a dictionary format and translate the date format  pipelines data transformation py L143 L149   3   Write the data to BigQuery  pipelines data transformation py L150 L161         Read data in from the file    Alt text  img csv file png raw true  CSV file    Similar to the previous example  this example uses TextIO to read the file from Google Cloud Storage       Transform the CSV format into a dictionary format    Alt text  img custom python code png raw true  Custom Python code    This example builds upon the simpler ingestion example by introducing data type transformations       
Write the data to BigQuery    Alt text  img output to bigquery png raw true  Output to BigQuery    Just as in our previous example  this example uses BigQuery IO to write out to BigQuery       Full code examples  Ready to dive deeper   Check out the complete code  here  pipelines data transformation py       Joining file and BigQuery datasets in Dataflow   Alt text  img csv join bigquery to bigquery png raw true  CSV file joined with BigQuery data to BigQuery    This example demonstrates how to work with two datasets   A primary dataset is read from a file  and another dataset containing reference data is read from BigQuery   The two datasets are then joined in Dataflow before writing the joined dataset to BigQuery   This pipeline contains 4 steps  1   Read in the primary dataset from a file  pipelines data enrichment py L165 L176   2   Read in the reference data from BigQuery  pipelines data enrichment py L155 L163   3   Custom Python code  pipelines data enrichment py L138 L143  is used to  join the two datasets  pipelines data enrichment py L177 L180   4   The joined dataset is written out to BigQuery  pipelines data enrichment py L181 L194         Read in the primary dataset from a file   Alt text  img csv file png raw true  CSV file    Similar to previous examples  we use TextIO to read the dataset from a CSV file       Read in the reference data from BigQuery   Alt text  img import state name from bigquery png raw true  Import state name data from BigQuery    Using BigQueryIO  we can specify a query to read data from   Dataflow then is able to distribute the data from BigQuery to the next stages in the pipeline   In this example the additional dataset is represented as a side input   Side inputs in Dataflow are typically reference datasets that fit into memory   Other examples will explore alternative methods for joining datasets which work well for datasets that do not fit into memory       Custom Python code is used to join the two datasets   Alt text  img 
3 custom python code png raw true  Custom python code    Using custom python code  we join the two datasets together   Because the two datasets are dictionaries  the python code is the same as it would be for unioning any two python dictionaries       The joined dataset is written out to BigQuery   Alt text  img 4 output to bigquery png raw true  Custom python code    Finally the joined dataset is written out to BigQuery   This uses the same BigQueryIO API which is used in previous examples       Full code examples  Ready to dive deeper   Check out the complete code  here  pipelines data enrichment py       Ingest data from files into BigQuery reading the file structure from Datastore  In this example we create a Python  Apache Beam  https   beam apache org   pipeline running on  Google Cloud Dataflow  https   cloud google com dataflow   to import CSV files into BigQuery using the following architecture     Apache Beam pipeline to import CSV into BQ  img data ingestion configurable jpg   The architecture uses     Google Cloud Storage    to store CSV source files    Google Cloud Datastore  https   cloud google com datastore docs concepts overview  to store CSV file structure and field type    Google Cloud Dataflow  https   cloud google com dataflow   to read files from Google Cloud Storage  Transform data based on the structure of the file and import the data into Google BigQuery    Google BigQuery  https   cloud google com bigquery   to store data in a Data Lake   You can use this script as a starting point to import your files into Google BigQuery  You ll probably need to adapt the script logic to your file name structure or to your particular needs       1  Prerequisites    Up and running GCP project with enabled billing account    gcloud installed and initialized for your project    Google Cloud Datastore enabled    Google Cloud Dataflow API enabled    Google Cloud Storage Bucket containing the file to import  CSV format  using the following naming convention   
TABLENAME   csv     Google Cloud Storage Bucket for temp and staging Google Dataflow files    Google BigQuery dataset     Python  https   www python org      2 7 and python dev module    gcc    Google Cloud  Application Default Credentials  https   cloud google com sdk gcloud reference auth application default login       2  Create virtual environment Create a new virtual environment  recommended  and install requirements       virtualenv env source   env bin activate pip install  r requirements txt          3  Configure Table schema Create a file that contains the structure of the CSVs to be imported  Filename needs to follow convention   TABLENAME csv    Example      name STRING surname STRING age INTEGER      You can check parameters accepted by the  datastore schema import py  script with the following command      python pipelines datastore schema import py   help      Run the  datastore schema import py  script to create the entry in Google Cloud Datastore using the following command      python pipelines datastore schema import py   schema file  path to TABLENAME csv           4  Upload files into Google Cloud Storage Upload files to be imported into Google BigQuery in a Google Cloud Storage Bucket  You can use  gsutil  using a command like      gsutil cp  LOCAL OBJECT LOCATION  gs    DESTINATION BUCKET NAME       To optimize upload of big files see the  documentation  https   cloud google com solutions transferring big data sets to gcp   Files need to be in CSV format  with the names of the columns as the first row  For example      name surname age test 1 test 1 30 test 2 test 2 40  test 3  jr  surname 50          5  Run pipeline You can check parameters accepted by the  data ingestion configurable py  script with the following command      python pipelines data ingestion configurable py   help      You can run the pipeline locally with the following command      python pipelines data ingestion configurable py     project    PUT HERE PROJECT ID        input bucket   
 PUT HERE GCS BUCKET NAME  gs   bucket name         input path    PUT HERE INPUT FOLDER        input files    PUT HERE FILE NAMES        bq dataset    PUT HERE BQ DATASET NAME         or you can run the pipeline on Google Dataflow using the following command       python pipelines data ingestion configurable py     runner DataflowRunner     max num workers 100     autoscaling algorithm THROUGHPUT BASED     region    PUT HERE REGION        staging location    PUT HERE GCS STAGING LOCATION        temp location    PUT HERE GCS TMP LOCATION       project    PUT HERE PROJECT ID        input bucket    PUT HERE GCS BUCKET NAME        input path    PUT HERE INPUT FOLDER        input files    PUT HERE FILE NAMES        bq dataset    PUT HERE BQ DATASET NAME             6  Check results You can check data imported into Google BigQuery from the Google Cloud Console UI      Data lake to data mart   Alt text  img data lake to data mart png raw true  Data lake to data mart    This example demonstrates joining data from two different datasets in BigQuery  applying transformations to the joined dataset before uploading to BigQuery   Joining two datasets from BigQuery is a common use case when a data lake has been implemented in BigQuery  Creating a data mart with denormalized datasets facilitates better performance when using visualization tools   This pipeline contains 4 steps  1   Read in the primary dataset from BigQuery  pipelines data lake to mart py L278 L283   2   Read in the reference data from BigQuery  pipelines data lake to mart py L248 L276   3   Custom Python code  pipelines data lake to mart py L210 L224  is used to  join the two datasets  pipelines data lake to mart py L284 L287   Alternatively   CoGroupByKey can be used to join the two datasets  pipelines data lake to mart cogroupbykey py L300 L310   4   The joined dataset is written out to BigQuery  pipelines data lake to mart py L288 L301         Read in the primary dataset from BigQuery   Alt text  img 1 query 
orders png raw true  Read from BigQuery    Similar to previous examples  we use BigQueryIO to read the dataset from the results of a query   In this case our main dataset is a fake orders dataset  containing a history of orders and associated data like quantity       Read in the reference data from BigQuery   Alt text  img 2 query account details png raw true  Import state name data from BigQuery    In this example we use a fake account details dataset   This represents a common use case for denormalizing a dataset  The account details information contains attributes linked to the accounts in the orders dataset   For example the address and city of the account       Custom Python code is used to join the two datasets   Alt text  img 3 custom python code png raw true  Custom python code    Using custom python code  we join the two datasets together   We provide two examples of joining these datasets   The first example uses side inputs  which require the dataset to fit into memory   The second example demonstrates how to use CoGroupByKey to join the datasets   CoGroupByKey will facilitate joins between two datasets even if neither fits into memory   Explore the comments in the two code examples for a more in depth explanation       The joined dataset is written out to BigQuery   Alt text  img 4 output to bigquery png raw true  Custom python code    Finally the joined dataset is written out to BigQuery   This uses the same BigQueryIO API which is used in previous examples       Full code examples  Ready to dive deeper   Check out the complete code  The example using side inputs is  here  pipelines data lake to mart py  and the example using CoGroupByKey is  here  pipelines data lake to mart cogroupbykey py   
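The dictionary join the pipelines above perform can be sketched in plain Python, outside of Beam. This is a minimal illustration of the side-input pattern (the reference dataset held in memory as a dict keyed by account id); the field names `acct_number`, `address`, and `city` are assumptions for the example, not the pipelines' actual schema:

```python
def join_with_side_input(order_row, account_details_by_id):
    """Join one primary-dataset row with in-memory reference data.

    Because both datasets are dicts, joining is simply merging two
    Python dicts, as the text above describes.
    """
    account = account_details_by_id.get(order_row["acct_number"], {})
    joined = dict(order_row)   # copy so the input row is left untouched
    joined.update(account)     # add the denormalized account attributes
    return joined

# Toy stand-ins for the orders PCollection and the BigQuery side input.
orders = [{"acct_number": "A1", "quantity": 3}]
accounts = {"A1": {"address": "1 Main St", "city": "Springfield"}}

rows = [join_with_side_input(o, accounts) for o in orders]
```

In a real Beam pipeline the `accounts` dict would arrive as a side input (e.g. via `beam.pvalue.AsDict`), which is why this variant requires the reference dataset to fit into memory, while the CoGroupByKey variant does not.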
{"questions":"GCP The test app generates time series with a custom metric called This example demonstrates creating alerts for missing monitoring data with Stackdriver alerts for missing timeseries them is missing If one or two timeseries are missing you want exactly one alert When there is a total outage you still want to get one alert not 100 suppose that you have 100 timeseries and you want to find out when any one of Stackdriver in a way that duplicate alerts are not generated For example alerts","answers":"# Stackdriver alerts for missing timeseries\n\nThis example demonstrates creating alerts for missing monitoring data with\nStackdriver in a way that duplicate alerts are not generated. For example,\nsuppose that you have 100 timeseries and you want to find out when any one of\nthem is missing. If one or two timeseries are missing, you want exactly one\nalert. When there is a total outage you still want to get one alert, not 100\nalerts.\n\nThe test app generates time series with a custom metric called\ntask_latency_distribution with tags based on a partition label. Each\npartition generates its own time series.\n\nThe example assumes that you are familiar with Stackdriver Monitoring and\nAlerting. It builds on the discussion in\n[Alerting policies in depth](https:\/\/cloud.google.com\/monitoring\/alerts\/concepts-indepth).\n\n## Setup\nClone this repo and change to this working directory. Enable the Stackdriver\nMonitoring API\n\n```shell\ngcloud services enable monitoring.googleapis.com\n```\n\nIn the GCP Console, go to\n[Monitoring](https:\/\/console.cloud.google.com\/monitoring).\nIf you have not already created a workspace for this project before, click New\nworkspace, and then click Add. It takes a few minutes to create the workspace.\nClick Alerting | Policies overview. 
The list should be empty at this point\nunless you have created policies previously.\n\n## Deploy the app\n\nThe example code is based on the Go code in\n[Custom metrics with OpenCensus](https:\/\/cloud.google.com\/monitoring\/custom-metrics\/open-census).\n\n[Download](https:\/\/golang.org\/dl\/) and install the latest version of Go.\n\nBuild the test app\n\n```shell\ngo build\n```\n\nThe instructions here are based on the article\n[Setting up authentication](https:\/\/cloud.google.com\/monitoring\/docs\/reference\/libraries#setting_up_authentication)\nfor the Stackdriver client library. Note that if you run the test app on a\nGoogle Cloud Compute Engine instance, on Google Kubernetes Engine, on App\nEngine, or on Cloud Run, then you will not need to create a service account\nor download the credentials.\n\nFirst, set the project id in a shell variable\n\n```shell\nexport GOOGLE_CLOUD_PROJECT=[your project]\n```\n\nCreate a service account:\n\n```shell\nSA_NAME=stackdriver-metrics-writer\ngcloud iam service-accounts create $SA_NAME \\\n --display-name=\"Stackdriver Metrics Writer\"\n```\n\n```shell\nSA_ID=\"$SA_NAME@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com\"\ngcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \\\n  --member \"serviceAccount:$SA_ID\" --role \"roles\/monitoring.metricWriter\"\n```\n\nGenerate a credentials file with an exported variable\nGOOGLE_APPLICATION_CREDENTIALS referring to it.\n\n```shell\nmkdir -p ~\/.auth\nchmod go-rwx ~\/.auth\nexport GOOGLE_APPLICATION_CREDENTIALS=~\/.auth\/stackdriver_demo_credentials.json\ngcloud iam service-accounts keys create $GOOGLE_APPLICATION_CREDENTIALS \\\n --iam-account $SA_ID\n```\n\nRun the program with three partitions, labeled \"1\", \"2\", and \"3\"\n\n```shell\n.\/alert-absence-demo --labels \"1,2,3\"\n```\n\nThis will write three time series with the given labels. 
A few minutes after\nstarting the app you should be able to see the time series data in the\nStackdriver Metric explorer.\n\n## Create the policy\n\nCreate a notification channel with the command\n\n```shell\nEMAIL=\"your email\"\nCHANNEL=$(gcloud alpha monitoring channels create \\\n  --channel-labels=email_address=$EMAIL \\\n  --display-name=\"Email to project owner\" \\\n  --type=email \\\n  --format='value(name)')\n```\n\nCreate an alert policy with the command\n\n```shell\ngcloud alpha monitoring policies create \\\n--notification-channels=$CHANNEL \\\n--documentation-from-file=policy_doc.md \\\n--policy-from-file=alert_policy.json\n```\n\nAt this point no alerts should be firing. You can check that in the Stackdriver\nMonitoring console alert policy detail.\n\n## Testing\n\nKill the process with CTRL-C and restart it with only two partitions:\n\n```shell\n.\/alert-absence-demo --labels \"1,2\"\n```\n\nAn alert should be generated in about 5 minutes with a subject like \u2018One of the\ntimeseries is absent.\u2019 You should be able to see this in the Stackdriver console\nand you should also receive an email notification.\n\nRestart the process with three partitions:\n\n```shell\n.\/alert-absence-demo --labels \"1,2,3\"\n```\n\nAfter a few minutes the alert should resolve itself.\n\nStop the process and restart it with only one partition:\n\n```shell\n.\/alert-absence-demo --labels \"1\"\n```\n\nCheck that only one alert is fired.\n\nStop the process and do not restart it. 
You should see an alert that indicates\nall time series are absent.","site":"GCP","answers_cleaned":"  Stackdriver alerts for missing timeseries  This example demonstrates creating alerts for missing monitoring data with Stackdriver in a way that duplicate alerts are not generated  For example  suppose that you have 100 timeseries and you want to find out when any one of them is missing  If one or two timeseries are missing  you want exactly one alert  When there is a total outage you still want to get one alert  not 100 alerts   The test app generates time series with a custom metric called task latency distribution with tags based on a partition label  Each partition generates its own time series   The example assumes that you are familiar with Stackdriver Monitoring and Alerting  It builds on the discussion in  Alerting policies in depth  https   cloud google com monitoring alerts concepts indepth       Setup Clone this repo and change to this working directory  Enable the Stackdriver Monitoring API     shell gcloud services enable monitoring googleapis com      In the GCP Console  go to  Monitoring  https   console cloud google com monitoring   If you have not already created a workspace for this project before  click New workspace  and then click Add  It takes a few minutes to create the workspace  Click Alerting   Policies overview  The list should be empty at this point unless you have created policies previously      Deploy the app  The example code is based on the Go code in  Custom metrics with OpenCensus  https   cloud google com monitoring custom metrics open census     Download  https   golang org dl   and install the latest version of Go   Build the test app     shell go build      The instructions here are based on the article  Setting up authentication  https   cloud google com monitoring docs reference libraries setting up authentication  for the Stackdriver client library  Note that if you run the test app on a Google Cloud Compute Engine instance 
 on Google Kubernetes Engine  on App Engine  or on Cloud Run  then you will not need to create a service account or download the credentials   First  set the project id in a shell variable     shell export GOOGLE CLOUD PROJECT  your project       Create a service account      shell SA NAME stackdriver metrics writer gcloud iam service accounts create  SA NAME      display name  Stackdriver Metrics Writer          shell SA ID   SA NAME  GOOGLE CLOUD PROJECT iam gserviceaccount com  gcloud projects add iam policy binding  GOOGLE CLOUD PROJECT       member  serviceAccount  SA ID    role  roles monitoring metricWriter       Generate a credentials file with an exported variable GOOGLE APPLICATION CREDENTIALS referring to it      shell mkdir  p    auth chmod go rwx    auth export GOOGLE APPLICATION CREDENTIALS    auth stackdriver demo credentials json gcloud iam service accounts keys create  GOOGLE APPLICATION CREDENTIALS      iam account  SA ID      Run the program with three partitions  labeled  1    2   and  3      shell   alert absence demo   labels  1 2 3       This will write three time series with the given labels  A few minutes after starting the app you should be able to see the time series data in the Stackdriver Metric explorer      Create the policy  Create a notification channel with the command     shell EMAIL  your email  CHANNEL   gcloud alpha monitoring channels create       channel labels email address  EMAIL       display name  Email to project owner        type email       format  value name         Create an alert policy with the command     shell gcloud alpha monitoring policies create     notification channels  CHANNEL     documentation from file policy doc md     policy from file alert policy json      At this point no alerts should be firing  You can check that in the Stackdriver Monitoring console alert policy detail      Testing  Kill the processes with CTL C and restart it with only two partitions      shell   alert absence demo   labels  1 2  
     An alert should be generated in about 5 minutes with a subject like  One of the timeseries is absent   You should be able to see this in the Stackdriver console and you should also receive an email notification   Restart the process with three partitions      shell   alert absence demo   labels  1 2 3       After a few minutes the alert should resolve itself   Stop the process and restart it with only one partition      shell   alert absence demo   labels  1       Check that only one alert is fired   Stop the process and do not restart it  You should see an alert that indicates all time series are absent "}
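The example ships its policy in `alert_policy.json`, whose exact contents are not reproduced in this chunk. As a hedged sketch, a Cloud Monitoring absence condition of the kind described might look like the dict below; the metric filter, duration, and aggregation values here are assumptions for illustration, not the repo's actual file. The key idea is the cross-series reducer: by collapsing all partitioned time series into a single evaluated stream, the condition fires exactly once whether one partition or all of them go missing.

```python
import json

# Illustrative shape of a Cloud Monitoring AlertPolicy with an absence
# condition. Filter/duration/aggregation values are assumed, not taken
# from the repo's alert_policy.json.
policy = {
    "displayName": "Absent timeseries demo",
    "combiner": "OR",
    "conditions": [{
        "displayName": "One of the timeseries is absent",
        "conditionAbsent": {
            # Custom metric written by the test app (name assumed here).
            "filter": 'metric.type = "custom.googleapis.com/opencensus/task_latency_distribution"',
            "duration": "300s",  # matches the ~5 minute firing window described above
            "aggregations": [{
                "alignmentPeriod": "60s",
                "perSeriesAligner": "ALIGN_COUNT",
                # Reducing across series yields one evaluated stream,
                # hence one alert rather than one per missing partition.
                "crossSeriesReducer": "REDUCE_COUNT",
            }],
        },
    }],
}

print(json.dumps(policy, indent=2))
```

A file of this shape is what `gcloud alpha monitoring policies create --policy-from-file=...` consumes.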
{"questions":"GCP An HTTP POST to the airflow endpoint from an on prem system is used as a trigger to initiate the workflow Cloud Composer Ephemeral Dataproc Cluster for Spark Job Workflow Overview","answers":"## Cloud Composer: Ephemeral Dataproc Cluster for Spark Job\n### Workflow Overview\n\n***\n\n\n![Alt text](..\/img\/composer-http-post-arch.png \"A diagram illustrating the workflow described below.\")\n\nAn HTTP POST to the airflow endpoint from an on-prem system is used as a trigger to initiate the workflow.\n\nAt a high level the Cloud Composer workflow performs the following steps:\n1. Extracts some metadata from the HTTP POST that triggered the workflow.\n1. Spins up a Dataproc Cluster.\n1. Submits a Spark job that performs the following:\n    * Reads newline delimited json data generated by an export from the [nyc-tlc:yellow.trips public\n     BigQuery table](https:\/\/bigquery.cloud.google.com\/table\/nyc-tlc:yellow.trips?pli=1).\n    * Enhances the data with an average_speed column.\n    * Writes the enhanced data as in CSV format to a temporary location in Google Cloud storage.\n1. Tear down the Dataproc Cluster Load these files to BigQuery.\n1. Clean up the temporary path of enhanced data in GCS.\n\n##### 1. Extract metadata from POST:\nWhen there is an HTTP POST to the airflow endpoint it should contain a paylaod of the following structure.\n```\n    payload = {\n        'run_id': 'post-triggered-run-%s' % datetime.now().strftime('%Y%m%d%H%M%s'),\n        'conf':  \"{'raw_path': raw_path, 'transformed_path': transformed_path}\"\n\n    }\n```\nWhere raw_path is a timestamped path to the existing raw files in gcs in newline delimited json format and\ntransformed path is a path with matching time stamp to stage the enhanced file before loading to BigQuery.\n\n\n\n##### 2 & 3. Spins up a Dataproc Cluster and submit Spark Job\n\nThe workflow then provisions a Dataproc Cluster and submits a spark job to enhance the data.\n\n##### 4. 
Move to processed bucket\n\nBased on the status of the Spark job, the workflow will then move the processed files to a Cloud Storage bucket set up to store processed data. A separate folder is created along with a processed date field to hold the files in this bucket.\n\n##### Full code examples\n\nReady to dive deeper? Check out the [complete code](ephemeral_dataproc_spark_dag.py)\n\n***\n\n#### Setup and Pre-requisites\nIt is recommended that virtualenv be used to keep everything tidy. The [requirements.txt](requirements.txt) describes the dependencies needed for the code used in this repo.\n```bash\nvirtualenv composer-env\nsource composer-env\/bin\/activate\n```\nThe POST will need to be authenticated with [Identity Aware Proxy](https:\/\/cloud.google.com\/iap\/docs\/).\nWe recommend doing this by copying the latest version of [make_iap_request.py](https:\/\/github.com\/GoogleCloudPlatform\/python-docs-samples\/blob\/master\/iap\/make_iap_request.py)\nfrom the Google Cloud python-docs-samples repo and using the provided [dag_trigger.py](dag_trigger.py).\n```bash\npip install -r ~\/professional-services\/examples\/cloud-composer-examples\/requirements.txt\nwget https:\/\/raw.githubusercontent.com\/GoogleCloudPlatform\/python-docs-samples\/master\/iap\/requirements.txt -O ~\/professional-services\/examples\/cloud-composer-examples\/iap_requirements.txt\npip install -r iap_requirements.txt\nwget https:\/\/raw.githubusercontent.com\/GoogleCloudPlatform\/python-docs-samples\/master\/iap\/make_iap_request.py -O ~\/professional-services\/examples\/cloud-composer-examples\/composer_http_post_example\/make_iap_request.py\n```\n(Or if you are on a Mac you can use curl.)\n```bash\n# From the cloud-composer-examples directory\npip install -r ~\/professional-services\/examples\/cloud-composer-examples\/requirements.txt\ncurl https:\/\/raw.githubusercontent.com\/GoogleCloudPlatform\/python-docs-samples\/master\/iap\/requirements.txt >> 
~\/professional-services\/examples\/cloud-composer-examples\/iap_requirements.txt\npip install -r iap_requirements.txt\ncurl https:\/\/raw.githubusercontent.com\/GoogleCloudPlatform\/python-docs-samples\/master\/iap\/make_iap_request.py >> ~\/professional-services\/examples\/cloud-composer-examples\/composer_http_post_example\/make_iap_request.py\n```\n\nNote that we skipped installing pyspark to keep this example lighter weight to stand up. If you need to test pyspark locally you should additionally run:\n```bash\npip install pyspark>=2.3.1\n```\n\nThe following high-level steps describe the setup needed to run this example:\n0. Set your project information.\n```bash\nexport PROJECT=<REPLACE-THIS-WITH-YOUR-PROJECT-ID>\ngcloud config set project $PROJECT\n```\n1. Create a Cloud Storage (GCS) bucket for receiving input files (*input-gcs-bucket*).\n```bash\ngsutil mb -c regional -l us-central1 gs:\/\/$PROJECT\n```\n2. Export the public BigQuery Table to a new dataset.\n```bash\nbq mk ComposerDemo\nexport EXPORT_TS=`date \"+%Y-%m-%dT%H%M%S\"` && bq extract \\\n--destination_format=NEWLINE_DELIMITED_JSON \\\nnyc-tlc:yellow.trips \\\ngs:\/\/$PROJECT\/cloud-composer-lab\/raw-$EXPORT_TS\/nyc-tlc-yellow-*.json\n```\n3. 
Create a Cloud Composer environment - Follow [these](https:\/\/cloud.google.com\/composer\/docs\/quickstart) steps to create a Cloud Composer environment if needed (*cloud-composer-env*).\nWe will set these variables in the Composer environment.\n\n| Key         | Value                       | Example              |\n| :---------- | :-------------------------- | :------------------- |\n| gcp_project | *your-gcp-project-id*       | cloud-comp-http-demo |\n| gcs_bucket  | *gcs-bucket-with-raw-files* | cloud-comp-http-demo |\n| gce_zone    | *compute-engine-zone*       | us-central1-b        |\n\n```bash\ngcloud beta composer environments create demo-ephemeral-dataproc \\\n    --location us-central1 \\\n    --zone us-central1-b \\\n    --machine-type n1-standard-2 \\\n    --disk-size 20\n\n# Set Airflow Variables in the Composer Environment we just created.\ngcloud composer environments run \\\ndemo-ephemeral-dataproc \\\n--location=us-central1 variables -- \\\n--set gcp_project $PROJECT\ngcloud composer environments run demo-ephemeral-dataproc \\\n--location=us-central1 variables -- \\\n--set gce_zone us-central1-b\ngcloud composer environments run demo-ephemeral-dataproc \\\n--location=us-central1 variables -- \\\n--set gcs_bucket $PROJECT\ngcloud composer environments run demo-ephemeral-dataproc \\\n--location=us-central1 variables -- \\\n--set bq_output_table $PROJECT:ComposerDemo.nyc-tlc-yellow-trips\n```\n\n6. Browse to the Cloud Composer widget in Cloud Console and click on the DAG folder icon as shown below:\n![Alt text](..\/img\/dag-folder-example.png \"Screen shot showing where to find the DAG folder in the console.\")\n\n7. 
Upload the PySpark code [spark_avg_speed.py](composer_http_post_example\/spark_avg_speed.py) into a *spark-jobs* folder in GCS.\n```bash\ngsutil cp ~\/professional-services\/examples\/cloud-composer-examples\/composer_http_post_example\/spark_avg_speed.py gs:\/\/$PROJECT\/spark-jobs\/\n```\n\n8. The DAG folder is essentially a Cloud Storage bucket. Upload the [ephemeral_dataproc_spark_dag.py](composer_http_post_example\/ephemeral_dataproc_spark_dag.py) file into the folder:\n\n```bash\ngsutil cp ~\/professional-services\/examples\/cloud-composer-examples\/composer_http_post_example\/ephemeral_dataproc_spark_dag.py gs:\/\/<dag-folder>\/dags\n```\n***\n\n##### Triggering the workflow\n\nMake sure that you have installed the `iap_requirements.txt` dependencies in the steps above.\nTo use the `dag_trigger.py` script, you will need to create a service account with the necessary permissions to create an IAP request and to use Cloud Composer. This can be achieved with the following commands:\n```bash\ngcloud iam service-accounts create dag-trigger\n\n# Give service account permissions to create tokens for\n# iap requests.\ngcloud projects add-iam-policy-binding $PROJECT \\\n--member \\\nserviceAccount:dag-trigger@$PROJECT.iam.gserviceaccount.com \\\n--role roles\/iam.serviceAccountTokenCreator\n\ngcloud projects add-iam-policy-binding $PROJECT \\\n--member \\\nserviceAccount:dag-trigger@$PROJECT.iam.gserviceaccount.com \\\n--role roles\/iam.serviceAccountActor\n\n# Service account also needs to be authorized to use Composer.\ngcloud projects add-iam-policy-binding $PROJECT \\\n--member \\\nserviceAccount:dag-trigger@$PROJECT.iam.gserviceaccount.com \\\n--role roles\/composer.user\n\n# We need a service account key to trigger the dag.\ngcloud iam service-accounts keys create ~\/$PROJECT-dag-trigger-key.json \\\n--iam-account=dag-trigger@$PROJECT.iam.gserviceaccount.com\n\n# Finally use this as your application credentials by setting the environment variable on the machine you will run 
`dag_trigger.py`\nexport GOOGLE_APPLICATION_CREDENTIALS=~\/$PROJECT-dag-trigger-key.json\n```\nTo trigger this workflow, use [dag_trigger.py](dag_trigger.py), which takes 3 arguments as shown below\n```bash\npython dag_trigger.py \\\n--url=<airflow endpoint url> \\\n--iapClientId=<client id> \\\n--raw_path=<path to raw files for enhancement in GCS>\n```\nThe endpoint for triggering the DAG has the following structure `https:\/\/<airflow web server url>\/api\/experimental\/dags\/<dag-id>\/dag_runs`; in this case our dag-id is average-speed.\nThe airflow webserver can be found once your composer environment is set up by clicking on your environment in the console and checking here:\n\n![Alt text](..\/img\/airflow-ui.png \"Screen Shot showing how to get the airflow URL\")\n\nIn order to obtain your `--iapClientId`,\nvisit the Airflow URL https:\/\/YOUR_UNIQUE_ID.appspot.com (which you noted in the last step) in an incognito window, *don't* authenticate or log in; the first landing page for IAP Auth has the client ID in the URL in the address bar:\nhttps:\/\/accounts.google.com\/signin\/oauth\/identifier?**client_id=00000000000-xxxx0x0xx0xx00xxxx0x00xxx0xxxxx.apps.googleusercontent.com**&as=a6VGEPwFpCL1qIwusi49IQ&destination=https%3A%2F%2Fh0b798498b93687a6-tp.appspot.com&approval_state=!ChRKSmd1TVc1VlQzMDB3MHI2UGI4SxIfWXhaRjJLcWdwcndRVUU3MWpGWk5XazFEbUp6N05SWQ%E2%88%99AB8iHBUAAAAAWvsaqTGCmRazWx9NqQtnYVOllz0r2x_i&xsrfsig=AHgIfE_o0kxXt6N3ch1JH4Fb19CB7wdbMg&flowName=GeneralOAuthFlow\n\n***","site":"GCP","answers_cleaned":"   Cloud Composer  Ephemeral Dataproc Cluster for Spark Job     Workflow Overview          Alt text     img composer http post arch png  A diagram illustrating the workflow described below     An HTTP POST to the airflow endpoint from an on prem system is used as a trigger to initiate the workflow   At a high level the Cloud Composer workflow performs the following steps  1  Extracts some metadata from the HTTP POST that triggered the workflow  1  Spins up a Dataproc 
Cluster  1  Submits a Spark job that performs the following        Reads newline delimited json data generated by an export from the  nyc tlc yellow trips public      BigQuery table  https   bigquery cloud google com table nyc tlc yellow trips pli 1         Enhances the data with an average speed column        Writes the enhanced data as in CSV format to a temporary location in Google Cloud storage  1  Tear down the Dataproc Cluster Load these files to BigQuery  1  Clean up the temporary path of enhanced data in GCS         1  Extract metadata from POST  When there is an HTTP POST to the airflow endpoint it should contain a paylaod of the following structure          payload              run id    post triggered run  s    datetime now   strftime   Y m d H M s             conf       raw path   raw path   transformed path   transformed path              Where raw path is a timestamped path to the existing raw files in gcs in newline delimited json format and transformed path is a path with matching time stamp to stage the enhanced file before loading to BigQuery           2   3  Spins up a Dataproc Cluster and submit Spark Job  The workflow then provisions a Dataproc Cluster and submits a spark job to enhance the data         4  Move to processed bucket  Based on the status of the Spark job  the workflow will then move the processed files to a Cloud Storage bucket setup to store processed data  A separate folder is created along with a processed date field to hold the files in this bucket         Full code examples  Ready to dive deeper  Check out the  complete code  ephemeral dataproc spark dag py             Setup and Pre requisites It is recommended that virtualenv be used to keep everything tidy  The  requirements txt  requirements txt  describes the dependencies needed for the code used in this repo     bash virtualenv composer env source composer env bin activate     The POST will need to be authenticated with  Identity Aware Proxy  https   cloud google com iap 
docs    We reccomend doing this by copying the latest version of  make iap request py  https   github com GoogleCloudPlatform python docs samples blob master iap make iap request py  from the Google Cloud python docs samples repo and using the provided  dag trigger py  dag trigger py      bash pip install  r   professional services examples cloud composer examples requirements txt wget https   raw githubusercontent com GoogleCloudPlatform python docs samples master iap requirements txt  O   professional services examples cloud composer examples iap requirements txt pip install  r iap requirements txt wget https   raw githubusercontent com GoogleCloudPlatform python docs samples master iap make iap request py  O   professional services examples cloud composer examples composer http post example make iap request py      Or if your are on a Mac you can use curl      bash   From the cloud composer examples directory pip install  r   professional services examples cloud composer examples requirements txt curl https   raw githubusercontent com GoogleCloudPlatform python docs samples master iap requirements txt      professional services examples cloud composer examples iap requirements txt pip install  r iap requirements txt curl https   raw githubusercontent com GoogleCloudPlatform python docs samples master iap make iap request py      professional services examples cloud composer examples composer http post example make iap request py      Note that we skipped install pyspark for the purposes of being lighter weight to stand up this example  If you have the need to test pyspark locally you should additionally run     bash pip install pyspark  2 3 1      The following high level steps describe the setup needed to run this example  0  set your project information     bash export PROJECT  REPLACE THIS WITH YOUR PROJECT ID  gcloud config set project  PROJECT     1  Create a Cloud Storage  GCS  bucket for receiving input files   input gcs bucket       bash gsutil mb  c 
regional  l us central1 gs    PROJECT     2  Export the public BigQuery Table to a new dataset     bash bq mk ComposerDemo export EXPORT TS  date    Y  m  dT H M S     bq extract     destination format NEWLINE DELIMITED JSON   nyc tlc yellow trips   gs    PROJECT cloud composer lab raw  EXPORT TS nyc tlc yellow   json     3  Create a Cloud Composer environment   Follow  these  https   cloud google com composer docs quickstart  steps to create a Cloud Composer environment if needed   cloud composer env    We will set these variables in the composer environment     Key                     Value                                            Example                                                                                                                                                              gcp project              your gcp project id                             cloud comp http demo                            gcp bucket               gcs bucket with raw files                       cloud comp http demo              gce zone                 compute engine zone                             us central1 b                                    bash gcloud beta composer environments create demo ephemeral dataproc         location us central1         zone us central1 b         machine type n1 standard 2         disk size 20    Set Airflow Variables in the Composer Environment we just created  gcloud composer environments run   demo ephemeral dataproc     location us central1 variables        set gcp project  PROJECT gcloud composer environments run demo ephemeral dataproc     location us central1 variables        set gce zone us central1 b gcloud composer environments run demo ephemeral dataproc     location us central1 variables        set gcs bucket  PROJECT gcloud composer environments run demo ephemeral dataproc     location us central1 variables        set bq output table  PROJECT ComposerDemo nyc tlc yellow trips      6  Browse to the Cloud Composer widget in Cloud 
Console and click on the DAG folder icon as shown below    Alt text     img dag folder example png  Screen shot showing where to find the DAG folder in the console     7  Upload the PySpark code  spark avg speed py  composer http examples spark avg speed py  into a  spark jobs  folder in GCS     bash gsutil cp   professional services examples cloud composer examples composer http post example spark avg speed py gs    PROJECT spark jobs       8  The DAG folder is essentially a Cloud Storage bucket  Upload the  ephemeral dataproc spark dag py  composer http examples ephemeral dataproc spark dag py  file into the folder      bash gsutil cp   professional services examples cloud composer examples composer http post example ephemeral dataproc spark dag py gs    dag folder  dags                Triggering the workflow  Make sure that you have installed the  iap requirements txt  in the steps above  You will need to create a service account with the necessary permissions to create an IAP request and use Cloud Composer to use the  dag trigger py  script  This can be achieved with the following commands     bash gcloud iam service accounts create dag trigger    Give service account permissions to create tokens for   iap requests  gcloud projects add iam policy binding  PROJECT     member   serviceAccount dag trigger  PROJECT iam gserviceaccount com     role roles iam serviceAccountTokenCreator  gcloud projects add iam policy binding  PROJECT     member   serviceAccount dag trigger  PROJECT iam gserviceaccount com     role roles iam serviceAccountActor    Service account also needs to be authorized to use Composer  gcloud projects add iam policy binding  PROJECT     member   serviceAccount dag trigger  PROJECT iam gserviceaccount com     role roles composer user    We need a service account key to trigger the dag  gcloud iam service accounts keys create    PROJECT dag trigger key json     iam account dag trigger  PROJECT iam gserviceaccount com    Finally use this as your 
application credentials by setting the environment variable on the machine you will run  dag trigger py  export GOOGLE APPLICATION CREDENTIALS    PROJECT dag trigger key json     To trigger this workflow use  dag trigger py  dag trigger py  takes 3 arguments as shown below    bash python dag trigger py     url  airflow endpoint url      iapClientId  client id      raw path  path to raw files for enhancement in GCS      The endpoint for triggering the dag has the following structure  https    airflow web server url  api experimental dags  dag id  dag runs  in this case our dag id is average speed  The airflow webserver can be found once your composer environment is set up by clicking on your environment in the console and checking here     Alt text     img airflow ui png  Screen Shot showing how to get the airflow URL    In order to obtain your    iapClientId  Visit the Airflow URL https   YOUR UNIQUE ID appspot com  which you noted in the last step  in an incognito window   don t  authenticate or login  and the first landing page for IAP Auth has client Id in the url in the address bar  https   accounts google com signin oauth identifier   client id 00000000000 xxxx0x0xx0xx00xxxx0x00xxx0xxxxx apps googleusercontent com   as a6VGEPwFpCL1qIwusi49IQ destination https 3A 2F 2Fh0b798498b93687a6 tp appspot com approval state  ChRKSmd1TVc1VlQzMDB3MHI2UGI4SxIfWXhaRjJLcWdwcndRVUU3MWpGWk5XazFEbUp6N05SWQ E2 88 99AB8iHBUAAAAAWvsaqTGCmRazWx9NqQtnYVOllz0r2x i xsrfsig AHgIfE o0kxXt6N3ch1JH4Fb19CB7wdbMg flowName GeneralOAuthFlow     "}
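The POST payload and dag_runs endpoint described in the walkthrough above can be sketched as follows. Both helper functions are hypothetical (they are not part of `dag_trigger.py`), and the dag id spelling `average-speed` is an assumption about the garbled "average speed" in the text:

```python
from datetime import datetime

def build_dag_run_payload(raw_path, transformed_path):
    """Build the body an HTTP POST to the Airflow endpoint should carry.

    Key names (run_id, conf, raw_path, transformed_path) come from the
    walkthrough; this helper itself is illustrative only.
    """
    return {
        "run_id": "post-triggered-run-%s" % datetime.now().strftime("%Y-%m-%d-%H-%M-%S"),
        "conf": {"raw_path": raw_path, "transformed_path": transformed_path},
    }

def dag_runs_endpoint(airflow_url, dag_id):
    """Endpoint shape quoted in the walkthrough:
    https://<airflow-web-server-url>/api/experimental/dags/<dag-id>/dag_runs
    """
    return "%s/api/experimental/dags/%s/dag_runs" % (airflow_url, dag_id)
```

With these, `dag_trigger.py`'s job reduces to POSTing `build_dag_run_payload(...)` to `dag_runs_endpoint(...)` through an IAP-authenticated request (as done by `make_iap_request.py`).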
{"questions":"GCP This repo contains an example Cloud Composer workflow that triggers Cloud Dataflow to transform enrich and load a delimited text file into Cloud BigQuery Workflow Overview Cloud Composer workflow using Cloud Dataflow A Cloud Function with a Cloud Storage trigger is used to initiate the workflow when a file is uploaded for processing The goal of this example is to provide a common pattern to automatically trigger via Google Cloud Function a Dataflow job when a file arrives in Google Cloud Storage process the data and load it into BigQuery","answers":"## Cloud Composer workflow using Cloud Dataflow\n##### This repo contains an example Cloud Composer workflow that triggers Cloud Dataflow to transform, enrich and load a delimited text file into Cloud BigQuery.\nThe goal of this example is to provide a common pattern to automatically trigger, via Google Cloud Function, a Dataflow job when a file arrives in Google Cloud Storage, process the data and load it into BigQuery.\n\n### Workflow Overview\n\n***\n\n![Alt text](..\/img\/workflow-overview.png \"Workflow Overview\")\nA Cloud Function with a Cloud Storage trigger is used to initiate the workflow when a file is uploaded for processing.\n\nAt a high-level the Cloud Composer workflow performs the following steps:\n1. Extracts the location of the input file that triggered the workflow.\n2. Executes a Cloud Dataflow job that performs the following:\n    - Parses the delimited input file and adds some useful 'metadata'\n        - 'filename': The name of the file that is processed by the Cloud Dataflow job\n        - 'load_dt': The date in YYYY-MM-DD format when the file is processed\n    - Loads the data into an existing Cloud BigQuery table (any existing data is truncated)\n3. Moves the input file to a Cloud Storage bucket that is set up for storing processed files.\n\n##### 1. Extract the input file location:\nWhen a file is uploaded to the Cloud Storage bucket, a Cloud Function is triggered. 
This invocation wraps the event information (_bucket and object details_) that triggered this event and passes it to the Cloud Composer workflow that gets triggered. The workflow extracts this information and passes it to the Cloud Dataflow job.\n```\njob_args = {\n        'input': 'gs:\/\/\/',\n        'output': models.Variable.get('bq_output_table'),\n        'fields': models.Variable.get('input_field_names'),\n        'load_dt': ds_tag\n    }\n```\n\n##### 2. Executes the Cloud Dataflow job\n\nThe workflow then executes a [Cloud Dataflow job](dataflow\/process_delimited.py) to process the delimited file, adds filename and load_dt fields and loads the data into a Cloud BigQuery table.\n\n##### 3. Move to processed bucket\n\n\n![Alt text](..\/img\/sample-dag.png \"DAG Overview\")\n\nBased on the status of the Cloud Dataflow job, the workflow will then move the processed files to a Cloud Storage bucket set up to store processed data. A separate folder is created along with a processed date field to hold the files in this bucket.\n\n##### Full code examples\n\nReady to dive deeper? Check out the complete code [here](simple_load_dag.py)\n\n***\n\n#### Setup and Pre-requisites\nIt is recommended that virtualenv be used to keep everything tidy. The [requirements.txt](requirements.txt) describes the dependencies needed for the code used in this repo.\n\nThe following high-level steps describe the setup needed to run this example:\n\n1. Create a Cloud Storage (GCS) bucket for receiving input files (*input-gcs-bucket*).\n2. Create a GCS bucket for storing processed files (*output-gcs-bucket*).\n3. Create a Cloud Composer environment - Follow [these](https:\/\/cloud.google.com\/composer\/docs\/quickstart) steps to create a Cloud Composer environment if needed (*cloud-composer-env*).\n4. Create a Cloud BigQuery table for the processed output. 
The following schema is used for this example:\n\n|Column Name | Column Type|\n|:-----------|:-----------|\n|state\t     |STRING      |\n|gender\t     |STRING      |\n|year\t     |STRING      |\n|name\t     |STRING      |\n|number\t     |STRING      |\n|created_date|STRING      |\n|filename\t |STRING      |\n|load_dt\t |DATE        |\n\n5. Set the following [Airflow variables](https:\/\/airflow.apache.org\/docs\/stable\/concepts.html#variables) needed for this example:\n\n| Key                   | Value                                           |Example                                   |\n| :--------------------- |:---------------------------------------------- |:---------------------------              |\n| gcp_project           | *your-gcp-project-id*                           |cloud-comp-df-demo                        |\n| gcp_temp_location     | *gcs-bucket-for-dataflow-temp-files*            |gs:\/\/my-comp-df-demo-temp\/tmp             |\n| gcs_completion_bucket | *output-gcs-bucket*                             |my-comp-df-demp-output                    |\n| input_field_names     | *comma-separated-field-names-for-delimited-file*|state,gender,year,name,number,created_date|\n| bq_output_table       | *bigquery-output-table*                         |my_dataset.usa_names                      |\n| email                 | *some-email@mycompany.com*                      |some-email@mycompany.com                  |\n\n The variables can be set as follows:\n\n `gcloud composer environments run` **_cloud-composer-env-name_** `variables -- --set` **_key val_**\n\n6. Browse to the Cloud Composer widget in Cloud Console and click on the DAG folder icon as shown below:\n![Alt text](..\/img\/dag-folder-example.png \"Workflow Overview\")\n\n7. The DAG folder is essentially a Cloud Storage bucket. Upload the [simple_load_dag.py](simple_load_dag.py) file into the folder:\n![Alt text](..\/img\/bucket-example.png \"DAG Bucket\")\n8. 
Upload the Python Dataflow code [process_delimited.py](dataflow\/process_delimited.py) into a *dataflow* folder created in the base DAG folder.\n9. Finally follow [these](https:\/\/cloud.google.com\/composer\/docs\/how-to\/using\/triggering-with-gcf) instructions to create a Cloud Function.\n    - Ensure that the **DAG_NAME** property is set to _**GcsToBigQueryTriggered**_ i.e. The DAG name defined in [simple_load_dag.py](simple_load_dag.py).\n\n***\n\n##### Triggering the workflow\n\nThe workflow is automatically triggered by Cloud Function that gets invoked when a new file is uploaded into the *input-gcs-bucket*\nFor this example workflow, the [usa_names.csv](resources\/usa_names.csv) file can be uploaded into the  *input-gcs-bucket*\n\n`gsutil cp resources\/usa_names.csv gs:\/\/` **_input-gcs-bucket_**\n\n***","site":"GCP"}
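Step 1 of the Dataflow workflow above (turning the Cloud Function's event info into the `job_args` dict) can be sketched as a plain function. The `conf` key names `bucket` and `name` are assumptions about what the trigger event carries, and `build_dataflow_job_args` is a hypothetical helper, not code from `simple_load_dag.py`:

```python
def build_dataflow_job_args(conf, bq_output_table, input_field_names, ds_tag):
    """Assemble the arguments passed to the Dataflow job.

    conf mirrors the dag_run conf forwarded by the Cloud Function trigger;
    the quoted snippet's 'gs:///' placeholder is this input path with the
    bucket and object name filled in.
    """
    return {
        "input": "gs://%s/%s" % (conf["bucket"], conf["name"]),
        "output": bq_output_table,      # e.g. the bq_output_table Airflow Variable
        "fields": input_field_names,    # e.g. the input_field_names Airflow Variable
        "load_dt": ds_tag,
    }
```

In the real DAG, `output` and `fields` would come from `models.Variable.get(...)` as shown in the quoted snippet.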
{"questions":"GCP bigquery analyze realtime reddit data 3 1 5 4 2 Table Of Contents","answers":"# bigquery-analyze-realtime-reddit-data\n\n## Table Of Contents\n\n1. [Use Case](#use-case)\n2. [About](#about)\n3. [Architecture](#architecture)\n4. [Guide](#guide)\n5. [Sample](#sample)\n\n----\n\n## Use-case\n\nSimple deployment of a ([reddit](https:\/\/www.reddit.com)) social media data collection architecture on Google Cloud Platform.\n\n----\n\n## About\n\nThis repository contains the resources necessary to deploy a basic data stream and data lake on Google Cloud Platform.  Terraform templates deploy the entire infrastructure, which includes a [Google Compute Engine](https:\/\/cloud.google.com\/compute) VM with an initialization script that clones a [reddit streaming application repository](https:\/\/github.com\/CYarros10\/reddit-streaming-application). The GCE VM executes a python script from that repo.  The python script accesses a user's [reddit developer client](https:\/\/www.reddit.com\/prefs\/apps\/) and begins to collect reddit comments from a specified list of the top 50 subreddits. \n\nAs the GCE VM collects reddit comments, it cleans, censors, and analyzes sentiment of each comment. Finally, it pushes the comment to a [Google Cloud Pub\/Sub](https:\/\/cloud.google.com\/pubsub) topic.  A [Cloud Dataflow](https:\/\/cloud.google.com\/dataflow) job subscribes to the PubSub topic, reads the comments, and writes them to a [Cloud Bigquery](https:\/\/cloud.google.com\/bigquery) table.\n\nThe user now has access to an ever-increasing dataset of reddit comments + sentiment analysis.\n\n----\n\n## Architecture\n\n![Stack-Resources](images\/architecture.png)\n\n----\n\n## Guide\n\n### 1. Create your reddit bot account\n\n1. [Register a reddit account](https:\/\/www.reddit.com\/register\/)\n\n2. Follow prompts to create new reddit account:\n    * Provide email address\n    * Choose username and password\n    * Click `Finish`\n\n3. 
Once your account is created, go to [reddit developer console.](https:\/\/www.reddit.com\/prefs\/apps\/)\n\n4. Select **\u201care you a developer? Create an app...\u201d**\n\n5. Give it a name.\n\n6. Select script.  <--- **This is important!**\n\n7. For about url and redirect uri, use http:\/\/127.0.0.1\n\n8. You will now get a client_id (underneath web app) and secret\n\n9. Keep track of your reddit account username, password, app client_id (in blue box), and app secret (in red box). These will be used in tutorial Step 11\n\n#### Further Learning \/ References: PRAW\n\n* [PRAW Quick start](https:\/\/praw.readthedocs.io\/en\/latest\/getting_started\/quick_start.html)\n\n### 2. Run setup.sh\n\n\nIf you need to allow externalIPs, run this command (or similar) in your project:\n\n```bash\necho \"{\n  \\\"constraint\\\": \\\"constraints\/compute.vmExternalIpAccess\\\",\n\t\\\"listPolicy\\\": {\n\t    \\\"allValues\\\": \\\"ALLOW\\\"\n\t  }\n}\" > external_ip_policy.json\n\ngcloud resource-manager org-policies set-policy external_ip_policy.json --project=\"$projectId\"\n```\n\n\n```bash\n.\/scripts\/setup.sh -i <project-id> -r <region> -c <reddit-client-id> -u <reddit-user>\n```\n\n### 4. Wait for data collection\n\nThe VM will take a minute or two to set up. Then comments will start to flow into Bigquery in near-realtime!\n\n### 5. Query your data using Bigquery\n\n**example:**\n\n```sql\n    select subreddit, author, comment_text, sentiment_score\n    from reddit.comments_raw\n    order by sentiment_score desc\n    limit 25;\n```\n\n----\n\n## Sample\n\n### Example of a Collected+Analyzed reddit Comment:\n\n```json\n{\n    \"comment_id\": \"fx3wgci\",\n    \"subreddit\": \"Fitness\",\n    \"author\": \"silverbird666\",\n    \"comment_text\": \"well, i dont exactly count my calories, but i run on a competitive base and do kickboxing, that stuff burns quite much calories. 
i just stick to my established diet, and supplement with protein bars and shakes whenever i fail to hit my daily intake of protein. works for me.\",\n    \"distinguished\": null,\n    \"submitter\": false,\n    \"total_words\": 50,\n    \"reading_ease_score\": 71.44,\n    \"reading_ease\": \"standard\",\n    \"reading_grade_level\": \"7th and 8th grade\",\n    \"sentiment_score\": -0.17,\n    \"censored\": 0,\n    \"positive\": 0,\n    \"neutral\": 1,\n    \"negative\": 0,\n    \"subjectivity_score\": 0.35,\n    \"subjective\": 0,\n    \"url\": \"https:\/\/reddit.com\/r\/Fitness\/comments\/hlk84h\/victory_sunday\/fx3wgci\/\",\n    \"comment_date\": \"2020-07-06 15:41:15\",\n    \"comment_timestamp\": \"2020\/07\/06 15:41:15\",\n    \"comment_hour\": 15,\n    \"comment_year\": 2020,\n    \"comment_month\": 7,\n    \"comment_day\": 6\n}\n```","site":"GCP"}
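The one-hot `positive`/`neutral`/`negative` fields in the sample comment above can be derived from the `sentiment_score` with a small bucketing function. This is an illustrative sketch, not the repo's actual code, and the 0.25 threshold is an assumption (it is merely consistent with the sample, where a score of -0.17 yields `neutral = 1`):

```python
def sentiment_flags(score, threshold=0.25):
    """Bucket a polarity score into one-hot positive/neutral/negative fields.

    threshold is a hypothetical cutoff, not taken from the repository.
    """
    return {
        "positive": int(score > threshold),
        "neutral": int(-threshold <= score <= threshold),
        "negative": int(score < -threshold),
    }
```

Exactly one flag is 1 for any score, matching the shape of the stored record.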
{"questions":"GCP This example provides guidance on how to use to batch records that are published to a Pub Sub topic Using correctly allows us to better utilize the available resources cpu memory network bandwidth on the client machine and to improve throughput In order to demonstrate this process end to end we also provided a simple Dataflow pipeline that reads these Avro messages from a Pub Sub topic and streams those records into a BigQuery table Using Batching in Cloud Pub Sub Java client API In addition to BatchingSettings this code sample also demonstrates the use of to encode the messages instead of using JSON strings Components","answers":"# Using Batching in Cloud Pub\/Sub Java client API.\n\nThis example provides guidance on how to use [Pub\/Sub's Java client API](https:\/\/cloud.google.com\/pubsub\/docs\/reference\/libraries) to batch records that are published to a Pub\/Sub topic.\nUsing [BatchingSettings](http:\/\/googleapis.github.io\/gax-java\/1.4.1\/apidocs\/com\/google\/api\/gax\/batching\/BatchingSettings.html) correctly allows us to better utilize the available resources (cpu, memory, network bandwidth) on the client machine and to improve throughput.\nIn addition to BatchingSettings, this code sample also demonstrates the use of [Avro](http:\/\/avro.apache.org\/docs\/current\/) to encode the messages instead of using JSON strings.\n\nIn order to demonstrate this process end-to-end, we also provided a simple Dataflow pipeline that reads these Avro messages from a Pub\/Sub topic and streams those records into a BigQuery table.\n\n## Components\n\n[ObjectPublisher](src\/main\/java\/com\/google\/cloud\/pso\/pubsub\/common\/ObjectPublisher.java) - A generic publisher class that can be used to publish any object as a payload to Cloud Pub\/Sub.\nThis publisher class provides various configurable parameters for controlling the 
[BatchingSettings](http:\/\/googleapis.github.io\/gax-java\/1.4.1\/apidocs\/com\/google\/api\/gax\/batching\/BatchingSettings.html) for the publishing client.\n\n[EmployeePublisher](src\/main\/java\/com\/google\/cloud\/pso\/pubsub\/EmployeePublisherMain.java) -\nAn implementation of the [ObjectPublisher](src\/main\/java\/com\/google\/cloud\/pso\/pubsub\/common\/ObjectPublisher.java) to publish Avro encoded test records to Cloud Pub\/Sub.\nThis will generate random test records using the sample [Employee Avro Schema](src\/main\/avro\/employee.avsc).\n\n[IngestionMain](src\/main\/java\/com\/google\/cloud\/pso\/pipeline\/IngestionMain.java) - A sample Dataflow pipeline to read the Avro encoded test records and stream them into BigQuery.\n\n### Requirements\n\n* Java 8\n* Maven 3\n* Cloud Pub\/Sub topic and subscription\n    * The Pub\/Sub topic will be used by the client to publish messages.\n    * The Pub\/Sub subscription on the same topic will be used by the Dataflow job to read the messages.\n* BigQuery table to stream records into - The table schema should match the Avro schema of the messages.\n\n### Building the Project\n\nBuild the entire project using the maven compile command.\n```sh\nmvn clean compile\n```\n\n### Running unit tests\nRun the unit tests using the maven test command.\n```sh\nmvn clean test\n```\n\n### Publishing sample records to Cloud Pub\/Sub\nPublish sample [Employee](src\/main\/avro\/employee.avsc) records using the maven exec command.\n```sh\nmvn compile exec:java \\\n  -Dexec.mainClass=com.google.cloud.pso.pubsub.EmployeePublisherMain \\\n  -Dexec.cleanupDaemonThreads=false \\\n  -Dexec.args=\" \\\n  --topic <output-pubsub-topic> --numberOfMessages <number-of-messages>\"\n```\n\nThere are several other optional parameters related to batch settings. 
These parameters can be viewed by passing the help flag.\n```sh\nmvn compile exec:java \\\n  -Dexec.mainClass=com.google.cloud.pso.pubsub.EmployeePublisherMain \\\n  -Dexec.cleanupDaemonThreads=false \\\n  -Dexec.args=\"--help\"\n\nUsage: com.google.cloud.pso.pubsub.EmployeePublisherMain [options]\n  * - Required parameters\n  Options:\n    --bytesThreshold, -b\n      Batch threshold bytes.\n      Default: 1024\n    --delayThreshold, -d\n      Delay threshold in milliseconds.\n      Default: PT0.5S\n    --elementCount, -c\n      The number of elements to be batched in each request.\n      Default: 500\n    --help, -h\n\n    --numberOfMessages, -n\n      Number of sample messages to publish to Pub\/Sub\n      Default: 100000\n  * --topic, -t\n      The Pub\/Sub topic to write messages to\n```\n\n### Streaming Cloud Pub\/Sub messages into BigQuery using Dataflow\nThe [IngestionMain](src\/main\/java\/com\/google\/cloud\/pso\/pipeline\/IngestionMain.java) pipeline provides a simple example of consuming the Avro records that were published into Cloud Pub\/Sub and streaming those records into a BigQuery table.\nThis pipeline example can be compiled and executed using the maven exec command.\n```sh\nmvn compile exec:java \\\n  -Dexec.mainClass=com.google.cloud.pso.pipeline.IngestionMain \\\n  -Dexec.cleanupDaemonThreads=false \\\n  -Dexec.args=\" \\\n  --project=<my-gcp-project> \\\n  --stagingLocation=<my-gcs-staging-bucket> \\\n  --tempLocation=<my-gcs-temp-bucket> \\\n  --runner=DataflowRunner \\\n  --autoscalingAlgorithm=THROUGHPUT_BASED \\\n  --maxNumWorkers=<max-num-workers> \\\n  --subscription=<my-input-pubsub-subscription> \\\n  --tableId=<my-output-bigquery-table>\"\n```\n\n### Authentication\nThese examples use the [Google client libraries to implicitly determine the credentials used][1]. 
It is strongly recommended that a Service Account with appropriate permissions be used for accessing the resources in Google Cloud Platform Project.\n\n[1]: https:\/\/cloud.google.com\/docs\/authentication\/getting-started","site":"GCP"}
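The record above describes Cloud Pub/Sub's client-side batching, where a publish request is sent once any of three thresholds trips: element count, accumulated bytes, or elapsed delay (the CLI defaults listed there are 500 elements, 1024 bytes, and 0.5 s). As a rough illustration of how those triggers interact, here is a minimal self-contained Python sketch — this is *not* the Pub/Sub client API; the `BatchBuffer` class is hypothetical, and only its default thresholds mirror the defaults quoted above:

```python
import time


class BatchBuffer:
    """Hypothetical sketch of the three flush triggers that client-side
    batching settings express: element count, total bytes, elapsed delay."""

    def __init__(self, element_count=500, bytes_threshold=1024, delay_seconds=0.5):
        self.element_count = element_count
        self.bytes_threshold = bytes_threshold
        self.delay_seconds = delay_seconds
        self.messages = []
        self.total_bytes = 0
        self.started = None  # time the first message of the batch arrived

    def add(self, payload: bytes):
        if self.started is None:
            self.started = time.monotonic()
        self.messages.append(payload)
        self.total_bytes += len(payload)

    def should_flush(self) -> bool:
        # Whichever threshold trips first triggers the publish.
        if not self.messages:
            return False
        return (len(self.messages) >= self.element_count
                or self.total_bytes >= self.bytes_threshold
                or time.monotonic() - self.started >= self.delay_seconds)

    def flush(self):
        batch, self.messages = self.messages, []
        self.total_bytes, self.started = 0, None
        return batch
```

This is why tuning the delay threshold trades publish latency for batch size: a longer delay lets more messages accumulate before the count or byte limit forces a flush.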
{"questions":"GCP The Dataproc cluster lifecycle management will be done by the automatically generated Airflow DAGs to reuse or create clusters accordingly cluster proposed configuration includes a scalability policy usage Each DAG will execute a single Dataproc cluster Job referenced in a DAG configuration file and that Job will be executed in a Dataproc Cluster that can be reused for multiple Jobs DAGs Writing DAGs isn t practical when having multiple DAGs that run similar Dataproc Jobs and want to share clusters efficiently with just some parameters changing between them Here makes sense to dynamically generate DAGs Using this project you can deploy multiple Composer Airflow DAGs from a single configuration it means you will create configuration files for DAGs to be automatically generated during deployment you can also use it to generate and upload DAGs manually Dataproc lifecycle management orchestrated by Composer","answers":"# Dataproc lifecycle management orchestrated by Composer\n\nWriting DAGs by hand isn\u2019t practical when you have multiple DAGs that run similar Dataproc Jobs, want to share clusters efficiently, and only a few parameters change between them. Here it makes sense to dynamically generate DAGs.\n\nUsing this project you can deploy multiple Composer\/Airflow DAGs from a single configuration: you create configuration files for DAGs that are automatically generated during deployment _(you can also use it to generate and upload DAGs manually)_.\n\nEach DAG will execute a single Dataproc Job referenced in a DAG configuration file, and that Job will be executed in a Dataproc cluster that can be reused for multiple Jobs\/DAGs. \n\nDataproc cluster lifecycle management will be done by the automatically generated Airflow DAGs, which reuse or create clusters accordingly _(the proposed cluster configuration includes an autoscaling policy)_. 
\n\nThis approach aims to use resources efficiently while minimizing provisioning and execution time.\n\nThis is the high-level diagram:\n![](images\/dataproc_lifecycle.png)\n\n\n## Prerequisites\n1. This blueprint will deploy all its resources into the project defined by the `project_id` variable. Please note that we assume this project already exists.\n2. The user deploying the project _(executing terraform plan\/apply)_ should have admin permissions in the selected project, or permission to create all the resources defined in the Terraform scripts. \n\n## Project Folder Structure\n ```bash\nmain.tf\n...\ndags\/          (Autogenerated on Terraform Plan\/Apply from \/dag_config\/ files)\n\u251c\u2500\u2500 ephemeral_cluster_job_1.py\n\u251c\u2500\u2500 ephemeral_cluster_job_2.py\njobs\/\n\u251c\u2500\u2500 hello_world_spark.py\n\u251c\u2500\u2500 ...        (Add your dataproc jobs here)\ninclude\/\n\u251c\u2500\u2500 dag_template.py\n\u251c\u2500\u2500 generate_dag_files.py\n\u2514\u2500\u2500 dag_config\n    \u251c\u2500\u2500 dag1_config.json\n    \u2514\u2500\u2500 dag2_config.json\n    \u2514\u2500\u2500 ...     (Add your Composer\/Airflow DAGs configuration here)\n...\n ```\n## Adding Jobs\n#### Prepare Dataproc Jobs to be executed\n1. Clone this repository\n2. Locate your Dataproc jobs in the **\/jobs\/** folder in your local environment\n\n#### Prepare Composer DAGs to be deployed\n3. Locate your DAG configuration files in the **\/include\/dag_config\/** folder in your local environment. 
DAG configuration files have the following variables:\n```json\n{\n    \"DagId\": \"ephemeral_cluster_job_1\",     --DAG name you will see in Airflow environment\n    \"Schedule\": \"'@daily'\",                 --DAG Schedule\n    \"ClusterName\":\"ephemeral-cluster-test\", --Dataproc cluster to be used\/created for this DAG\/Job to be executed in\n    \"StartYear\":\"2022\",                     --DAG start year\n    \"StartMonth\":\"9\",                       --DAG start month\n    \"StartDay\":\"13\",                        --DAG start day\n    \"Catchup\":\"False\",                      --DAG backfill catchup\n    \"ClusterMachineType\":\"n1-standard-4\",   --Dataproc machine type to be used by master and worker cluster nodes\n    \"ClusterIdleDeleteTtl\":\"300\",           --Time in seconds after which an idle Dataproc cluster is deleted\n    \"SparkJob\":\"hello_world_spark.py\"       --Spark Job to be executed by the DAG, should be placed in the \/jobs\/ folder of this project. (For other types of Dataproc jobs, modify dag_template.py)\n}\n```\n4. (Optional) You can run `python3 include\/generate_dag_files.py` in your local environment if you want to review the generated DAGs before deploying (Terraform plan\/apply) them.\n\n## Deployment\n\n1. Set Google Cloud Platform credentials in your local environment: \nhttps:\/\/cloud.google.com\/source-repositories\/docs\/authentication\n\n\n2. You must supply at least the `project_id` variable in order to deploy the project. Default Terraform variables and example values are in the `variables.tf` file.\n\n3. 
Run Terraform Plan\/Apply\n   ```bash\n    $ cd terraform\/\n    $ terraform init\n    $ terraform plan\n    $ terraform apply\n   ## Optionally, variables can be supplied\n    $ terraform apply -var 'project_id=<PROJECT_ID>' \\\n   -var 'region=<REGION>'\n    ```\n\nOnce you have deployed the Terraform plan for the first time and the Composer environment is running, you can run _terraform plan\/apply_ again after adding new DAG configuration files, to generate and deploy DAGs to the existing environment. \n\nThe first time it is deployed, resource creation will take several minutes (up to 40) because of Composer environment provisioning; expect successful completion along with a list of the created resources.\n\n## Running DAGs\nDAGs will run as per the **Schedule**, **StartDate**, and **Catchup** configuration in the DAG configuration file, or can be triggered manually through the Airflow web console after deployment. \n![](images\/dag_execution_example.png)\n\n## Shared VPC\nThe example supports the configuration of a Shared VPC as an input variable. 
\nTo deploy the solution on a Shared VPC, you have to configure the `network_config` variable:\n```\nnetwork_config = {\n    host_project       = \"PROJECT_ID\"\n    network_self_link  = \"https:\/\/www.googleapis.com\/compute\/v1\/projects\/PROJECT_ID\/global\/networks\/VPC_NAME\"\n    subnet_self_link   = \"https:\/\/www.googleapis.com\/compute\/v1\/projects\/PROJECT_ID\/regions\/$REGION\/subnetworks\/SUBNET_NAME\"\n    name               = \"VPC_NAME\"\n  }\n```\nIf a Shared VPC is used, [firewall rules](https:\/\/cloud.google.com\/composer\/docs\/composer-2\/create-environments) would be needed in the host project.\n\n### TODO\n- Add support for CMEK (Composer and dataproc)","site":"GCP"}
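The DAG-generation pattern described in the record above (JSON config files in `include/dag_config/` rendered through `include/dag_template.py` by `generate_dag_files.py`) can be sketched as follows. This is a simplified, hypothetical stand-in: the real `dag_template.py` holds a full Airflow DAG definition and uses more of the config keys than this toy template does:

```python
import json
import pathlib
import string

# Simplified stand-in for include/dag_template.py. The $Placeholder names
# match keys of the DAG configuration files shown above; the real template
# wraps them in a complete Airflow DAG definition.
DAG_TEMPLATE = string.Template(
    'dag_id = "$DagId"\n'
    'schedule = $Schedule\n'
    'cluster_name = "$ClusterName"\n'
    'spark_job = "jobs/$SparkJob"\n'
)


def generate_dag(config_path: str) -> str:
    """Render one DAG file from one JSON config file, as generate_dag_files.py
    does for every file found under include/dag_config/."""
    config = json.loads(pathlib.Path(config_path).read_text())
    # substitute() raises KeyError if a placeholder is missing from the config,
    # which surfaces incomplete config files at generation time.
    return DAG_TEMPLATE.substitute(config)
```

At deploy time, Terraform (or a manual run of the generator) loops over every `*.json` config, writes the rendered text into `dags/<DagId>.py`, and uploads that folder to the Composer environment's DAG bucket.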
{"questions":"GCP 3 1 4 dataproc job optimization guide 2 Table Of Contents","answers":"# dataproc-job-optimization-guide\n\n----\n\n## Table Of Contents\n\n1. [About](#About)\n2. [Guide](#Guide)\n3. [Results](#Results)\n4. [Next Steps](#Next-steps)\n\n----\n\n## About\n\nThis guide is designed to optimize performance and cost of applications running on Dataproc clusters.  Because Dataproc supports many big data technologies - each with their own intricacies - this guide is intended to be trial-and-error experimentation.  Initially it will begin with a generic dataproc cluster with defaults set. As you proceed through the guide, you\u2019ll increasingly customize Dataproc cluster configurations to fit your specific workload.\n\nPlan to separate Dataproc Jobs into different clusters - they use resources differently and can impact each other\u2019s performances when run simultaneously. Even better, isolating single jobs to single clusters can set you up for ephemeral clusters, where jobs can run in parallel on their own dedicated resources.\n\nOnce your job is running successfully, you can safely iterate on the configuration to improve runtime and cost, falling back to the last successful run whenever experimental changes have a negative impact.\n\n----\n\n## Guide\n\n### 1. Getting Started \n\nFill in your environment variables and run the following code in a terminal to set up your Google Cloud project. \n\n```bash\nexport PROJECT_ID=\"\"\nexport REGION=\"\"\nexport CLUSTER_NAME=\"\"\nexport BUCKET_NAME=\"\"\nTIMESTAMP=$(date \"+%Y-%m-%dT%H%M%S\")\nexport TIMESTAMP\n\n.\/scripts\/setup.sh -p $PROJECT_ID -r $REGION -c $CLUSTER_NAME -b $BUCKET_NAME -t $TIMESTAMP\n```\n\nThis script will:\n\n1. Setup project and enable APIs\n2. Remove any old infrastructure related to this project (in case of previous runs)\n3. Create a Google Cloud Storage bucket and a BigQuery Dataset\n4. Load public data into a personal BigQuery Dataset\n5. Import autoscaling policies\n6. 
Create the first Dataproc sizing cluster\n\n**Monitoring Dataproc Jobs**\n![Stack-Resources](images\/monitoring-jobs.png)\n\n\n### 2. Calculate Dataproc cluster size\n\nA sizing cluster can help determine the right number of workers for your application.  This cluster will have an autoscaling policy attached.  Set the autoscaling policy min\/max values to whatever is allowed in your project. Run your jobs on this cluster. Autoscaling will continue to add nodes until the YARN pending memory metric is zero. A perfectly sized cluster should never have YARN pending memory.\n\n\n```bash\ngsutil -m rm -r gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\n\ngcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-sizing scripts\/spark_average_speed.py -- gs:\/\/$BUCKET_NAME\/raw-$TIMESTAMP\/ gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\/\n```\n\n![Stack-Resources](images\/monitoring-nodemanagers.png)\n\n### 3. Optimize Dataproc cluster configuration\n\nUsing a non-autoscaling cluster during this experimentation phase can lead to the discovery of more accurate machine-types, persistent disks, application properties, etc. For now, build an isolated non-autoscaling cluster for your job that has the optimized number of primary workers.\n\nRun your job on this appropriately-sized non-autoscaling cluster. If the CPU is maxing out, consider using C2 machine type. If memory is maxing out, consider using N2D-highmem machine types.  Also consider increasing the machine cores (while maintaining a consistent overall core count observed during sizing phase). 
\n\n\n**Monitoring CPU Utilization**\n\n![Stack-Resources](images\/monitoring-cpu.png)\n\n\n**Monitoring YARN Memory**\n\n![Stack-Resources](images\/monitoring-yarn-memory.png)\n\n**8 x n2-standard-2 = 1 min 53 seconds**\n\n```bash\ngcloud dataproc clusters create $CLUSTER_NAME-testing-2x8-standard \\\n  --master-machine-type=n2-standard-2 \\\n  --worker-machine-type=n2-standard-2 \\\n  --num-workers=8 \\\n  --master-boot-disk-type=pd-standard \\\n  --master-boot-disk-size=1000GB \\\n  --worker-boot-disk-type=pd-standard \\\n  --worker-boot-disk-size=1000GB \\\n  --region=$REGION\n\ngsutil -m rm -r gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\n\ngcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-2x8-standard scripts\/spark_average_speed.py -- gs:\/\/$BUCKET_NAME\/raw-$TIMESTAMP\/ gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\/\n```\n\n**4 x n2-standard-4 = 1 min 48 seconds**\n\n```bash\ngcloud dataproc clusters delete $CLUSTER_NAME-testing-2x8-standard \\\n  --region=$REGION\n\ngcloud dataproc clusters create $CLUSTER_NAME-testing-4x4-standard \\\n  --master-machine-type=n2-standard-4 \\\n  --worker-machine-type=n2-standard-4 \\\n  --num-workers=4 \\\n  --master-boot-disk-type=pd-standard \\\n  --master-boot-disk-size=1000GB \\\n  --worker-boot-disk-type=pd-standard \\\n  --worker-boot-disk-size=1000GB \\\n  --region=$REGION\n\ngsutil -m rm -r gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\n\ngcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-4x4-standard scripts\/spark_average_speed.py -- gs:\/\/$BUCKET_NAME\/raw-$TIMESTAMP\/ gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\/\n```\n\n**2 x n2-standard-8 = 1 min 31 seconds**\n\n```bash\ngcloud dataproc clusters delete $CLUSTER_NAME-testing-4x4-standard \\\n  --region=$REGION\n\ngcloud dataproc clusters create $CLUSTER_NAME-testing-8x2-standard \\\n  --master-machine-type=n2-standard-8 \\\n  --worker-machine-type=n2-standard-8 \\\n  --num-workers=2 \\\n  
--master-boot-disk-type=pd-standard \\\n  --master-boot-disk-size=1000GB \\\n  --worker-boot-disk-type=pd-standard \\\n  --worker-boot-disk-size=1000GB \\\n  --region=$REGION\n\ngsutil -m rm -r gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\n\ngcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-8x2-standard scripts\/spark_average_speed.py -- gs:\/\/$BUCKET_NAME\/raw-$TIMESTAMP\/ gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\/\n```\n\nIf you\u2019re still observing performance issues, consider moving from pd-standard to pd-balanced or pd-ssd.\n\n- Standard persistent disks (pd-standard) are suited for large data processing workloads that primarily use sequential I\/Os.\n\n- Balanced persistent disks (pd-balanced) are an alternative to SSD persistent disks that balance performance and cost. With the same maximum IOPS as SSD persistent disks and lower IOPS per GB, a balanced persistent disk offers performance levels suitable for most general-purpose applications at a price point between that of standard and SSD persistent disks.\n\n- SSD persistent disks (pd-ssd) are suited for enterprise applications and high-performance database needs that require lower latency and more IOPS than standard persistent disks provide.\n\nFor similar costs, pd-standard 1000GB == pd-balanced 500GB == pd-ssd 250 GB. 
\n\n![Stack-Resources](images\/monitoring-hdfs.png)\n\n\n**2 x n2-standard-8-balanced = 1 min 26 seconds**\n\n```bash\ngcloud dataproc clusters delete $CLUSTER_NAME-testing-8x2-standard \\\n  --region=$REGION\n\ngcloud dataproc clusters create $CLUSTER_NAME-testing-8x2-balanced \\\n  --master-machine-type=n2-standard-8 \\\n  --worker-machine-type=n2-standard-8 \\\n  --num-workers=2 \\\n  --master-boot-disk-type=pd-balanced \\\n  --master-boot-disk-size=500GB \\\n  --worker-boot-disk-type=pd-balanced \\\n  --worker-boot-disk-size=500GB \\\n  --region=$REGION\n\ngsutil -m rm -r gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\n\ngcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-8x2-balanced scripts\/spark_average_speed.py -- gs:\/\/$BUCKET_NAME\/raw-$TIMESTAMP\/ gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\/\n```\n\n**2 x n2-standard-8-ssd = 1 min 21 seconds**\n\n```bash\ngcloud dataproc clusters delete $CLUSTER_NAME-testing-8x2-balanced \\\n  --region=$REGION\n\ngcloud dataproc clusters create $CLUSTER_NAME-testing-8x2-ssd \\\n  --master-machine-type=n2-standard-8 \\\n  --worker-machine-type=n2-standard-8 \\\n  --num-workers=2 \\\n  --master-boot-disk-type=pd-ssd \\\n  --master-boot-disk-size=250GB \\\n  --worker-boot-disk-type=pd-ssd \\\n  --worker-boot-disk-size=250GB \\\n  --region=$REGION\n\ngsutil -m rm -r gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\n\ngcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-8x2-ssd scripts\/spark_average_speed.py -- gs:\/\/$BUCKET_NAME\/raw-$TIMESTAMP\/ gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\/\n```\n\nMonitor HDFS Capacity to determine disk size. If this ever drops to zero, you\u2019ll need to increase the persistent disk size.  
If HDFS Capacity is too large for this job, consider lowering the disk size to save on storage costs.\n\n**2 x n2-standard-8-ssd-costop = 1 min 18 seconds**\n\n```bash\ngcloud dataproc clusters delete $CLUSTER_NAME-testing-8x2-ssd \\\n  --region=$REGION\n\ngcloud dataproc clusters create $CLUSTER_NAME-testing-8x2-ssd-costop \\\n  --master-machine-type=n2-standard-8 \\\n  --worker-machine-type=n2-standard-8 \\\n  --num-workers=2 \\\n  --master-boot-disk-type=pd-ssd \\\n  --master-boot-disk-size=30GB \\\n  --worker-boot-disk-type=pd-ssd \\\n  --worker-boot-disk-size=30GB \\\n  --region=$REGION\n\ngsutil -m rm -r gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\n\ngcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-8x2-ssd-costop scripts\/spark_average_speed.py -- gs:\/\/$BUCKET_NAME\/raw-$TIMESTAMP\/ gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\/\n```\n\n### 4. Optimize application-specific properties\n\n\nIf you\u2019re still observing performance issues, you can begin to adjust application properties. Ideally these properties are set on the job submission. This isolates properties to their respective jobs. Since this job runs on Spark, view the [tuning guide here.](https:\/\/cloud.google.com\/dataproc\/docs\/support\/spark-job-tuning)\n\nSince this guide uses a simple spark application and small amount of data, you may not see job performance improvement.  
This section is more applicable for larger use-cases.\n\nsample job submit:\n\n**2 x n2-standard-8-ssd-costop-appop = 1 min 15 seconds**\n\n```bash\ngsutil -m rm -r gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\n\ngcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-8x2-ssd-costop scripts\/spark_average_speed.py --properties='spark.executor.cores=5,spark.driver.cores=5,spark.executor.instances=1,spark.executor.memory=25459m,spark.driver.memory=25459m,spark.executor.memoryOverhead=2829m,spark.default.parallelism=10,spark.sql.shuffle.partitions=10,spark.shuffle.spill.compress=true,spark.checkpoint.compress=true,spark.io.compression.codec=snappy,spark.dynamicAllocation.enabled=true,spark.shuffle.service.enabled=true' -- gs:\/\/$BUCKET_NAME\/raw-$TIMESTAMP\/ gs:\/\/$BUCKET_NAME\/transformed-$TIMESTAMP\/\n```\n\n### 5.  Handle edge-case workload spikes via an autoscaling policy\n\nNow that you have an optimally sized, configured, tuned cluster, you can choose to introduce autoscaling.  This should NOT be seen as a cost-optimization technique, but it can improve performance during the edge cases that require more worker nodes.\n\nUse ephemeral clusters (see step 6) to allow clusters to scale up, and delete them when the job or workflow is complete. Downscaling may not be necessary on ephemeral, job\/workflow scoped clusters.\n\n- Ensure primary workers make up >50% of your cluster.  Do not scale primary workers.\n\n    - This does increase cost versus a smaller number of primary workers, but this is a tradeoff you can make; stability versus cost.\n\n        - Note: Having too many secondary workers can create job instability. 
This is a tradeoff you can choose to make as you see fit, but best practice is to avoid having the majority of your workers be secondary.\n\n- Prefer ephemeral clusters where possible.\n\n    - Allow these to scale up, but not down, and delete them when jobs are complete.\n\n    - Set scaleDownFactor to 0.0 for ephemeral clusters.\n\n[Autoscaling clusters | Dataproc Documentation | Google Cloud](https:\/\/cloud.google.com\/dataproc\/docs\/concepts\/configuring-clusters\/autoscaling#how_autoscaling_works)\n\nsample template:\n\n```yaml\nworkerConfig:\n  minInstances: 2\n  maxInstances: 2\nsecondaryWorkerConfig:\n  minInstances: 0\n  maxInstances: 10\nbasicAlgorithm:\n  cooldownPeriod: 5m\n  yarnConfig:\n    scaleUpFactor: 1.0\n    scaleDownFactor: 0\n    gracefulDecommissionTimeout: 0s\n```\n\n### 6. Optimize cost and reusability via ephemeral Dataproc clusters\n\nThere are several key advantages of using ephemeral clusters:\n\n- You can use different cluster configurations for individual jobs, eliminating the administrative burden of managing tools across jobs.\n\n- You can scale clusters to suit individual jobs or groups of jobs.\n\n- You only pay for resources when your jobs are using them.\n\n- You don't need to maintain clusters over time, because they are freshly configured every time you use them.\n\n- You don't need to maintain separate infrastructure for development, testing, and production. 
You can use the same definitions to create as many different versions of a cluster as you need when you need them.\n\n\nsample workflow template:\n\n```yaml\njobs:\n- pysparkJob:\n    args:\n    - \"gs:\/\/%%BUCKET_NAME%%\/raw-%%TIMESTAMP%%\/\"\n    - \"gs:\/\/%%BUCKET_NAME%%\/transformed-%%TIMESTAMP%%\/\"\n    mainPythonFileUri: gs:\/\/%%BUCKET_NAME%%\/scripts\/spark_average_speed.py\n  stepId: spark_average_speed\nplacement:\n  managedCluster:\n    clusterName: final-cluster-wft\n    config:\n      gceClusterConfig:\n        zoneUri: %%REGION%%-a\n      masterConfig:\n        diskConfig:\n          bootDiskSizeGb: 30\n          bootDiskType: pd-ssd\n        machineTypeUri: n2-standard-8\n        minCpuPlatform: AUTOMATIC\n        numInstances: 1\n        preemptibility: NON_PREEMPTIBLE\n      workerConfig:\n        diskConfig:\n          bootDiskSizeGb: 30\n          bootDiskType: pd-ssd\n        machineTypeUri: n2-standard-8\n        minCpuPlatform: AUTOMATIC\n        numInstances: 2\n        preemptibility: NON_PREEMPTIBLE\n```\n\nsample workflow template execution:\n\n```bash\ngcloud dataproc workflow-templates instantiate-from-file \\\n  --file templates\/final-cluster-wft.yml \\\n  --region $REGION\n```\n\n\n----\n\n## Results\n\nEven in this small-scale example, job performance improved by 71% (265 seconds -> 75 seconds).  
And with a properly sized ephemeral cluster, you only pay for what is necessary.\n\n![Stack-Resources](images\/monitoring-job-progress.png)\n\n----\n\n## Next-Steps\n\nTo continue optimizing performance, review and consider the guidance laid out in the Google Cloud Blog.\n\n- [Dataproc best practices | Google Cloud Blog](https:\/\/cloud.google.com\/blog\/topics\/developers-practitioners\/dataproc-best-practices-guide)\n- [7 best practices for running Cloud Dataproc in production | Google Cloud Blog](https:\/\/cloud.google.com\/blog\/products\/data-analytics\/7-best-practices-for-running-cloud-dataproc-in-production)","site":"GCP"}
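The Results section above quotes a 71% improvement from 265 seconds down to 75 seconds. As a quick sanity check on that figure (a sketch only; the two timings are the guide's own measurements):

```python
# Sanity check of the speedup quoted in the Results section:
# 265 s with the initial cluster vs. 75 s after optimization.
baseline_s = 265
optimized_s = 75

improvement_pct = (baseline_s - optimized_s) / baseline_s * 100
speedup = baseline_s / optimized_s

print(f"runtime improved by {improvement_pct:.1f}% ({speedup:.1f}x faster)")
# → runtime improved by 71.7% (3.5x faster)
```

Rounded down, this matches the ~71% improvement reported in the guide.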
{"questions":"GCP Why is Survival Analysis Helpful for Churn Prediction Survival Analysis is used to predict the time to event when the event in question has not necessarily occurred yet In this case the event is a customer churning This model uses Survival Analysis to classify customers into time to churn buckets The model output can be used to calculate each user s churn score for different durations Churn Prediction with Survival Analysis If a customer is still active or is censored using Survival Analysis terminology we do not know their final lifetime or when they will churn If we assume that the customer s lifetime ended at the time of prediction or training the results will be biased underestimating lifetime Throwing out active users will also bias results through information loss The same methodology can be used used to predict customers total lifetime from their birth initial signup or t 0 and from the current state t 0","answers":"# Churn Prediction with Survival Analysis\nThis model uses Survival Analysis to classify customers into time-to-churn buckets. The model output can be used to calculate each user's churn score for different durations.\n\nThe same methodology can be used to predict customers' total lifetime from their \"birth\" (initial signup, or t = 0) and from the current state (t > 0).\n\n## Why is Survival Analysis Helpful for Churn Prediction?\nSurvival Analysis is used to predict the time-to-event, when the event in question has not necessarily occurred yet. In this case, the event is a customer churning.\n\nIf a customer is still active, or is \"censored\" using Survival Analysis terminology, we do not know their final lifetime or when they will churn. If we assume that the customer's lifetime ended at the time of prediction (or training), the results will be biased (underestimating lifetime). 
Throwing out active users will also bias results through information loss.\n\nBy using a Survival Analysis approach to churn prediction, the entire population (regardless of current tenure or status) can be included.\n\n## Dataset\nThis example uses the public [Google Analytics Sample Dataset](https:\/\/support.google.com\/analytics\/answer\/7586738?hl=en) on BigQuery and artificially generated subscription start and end dates as input.\n\nTo create a churn model with real data, omit the 'Generate Data' step in the Beam pipeline in preprocessor\/preprocessor\/preprocess.py. Instead of randomly generating values, the BigQuery results should include the following fields: start_date, end_date, and active. These values correspond to the user's subscription lifetime and their censorship status.\n\n## Setup\n### Set up GCP credentials\n```shell\ngcloud auth login\ngcloud auth application-default login\n```\n\n### Set up Python environment\n```shell\nvirtualenv venv\nsource .\/venv\/bin\/activate\npip install -r requirements.txt\n```\n\n\n## Preprocessing\nUsing Dataflow, the data preprocessing script reads user data from BigQuery, generates random (fake) time-to-churn labels, creates TFRecords, and adds them to Google Cloud Storage.\n\nEach record should have three labels before preprocessing:\n1. **active**: indicator for censorship. It is 0 if the user is inactive (uncensored) and 1 if the user is active (censored).\n2. **start_date**: Date when the user began their lifetime.\n3. **end_date**: Date when the user's lifetime ends. It is None if the user is still active.\n`_generateFakeData` randomly generates these three fields in order to create fake sample data. In practice, these fields should be available in some form in the historical data.\n\nDuring preprocessing, the aforementioned fields are combined into a single `2*n-dimensional indicator array`, where n is the number of bounded lifetime buckets (i.e. 
n = 2 for 0-2 months, 2-3 months, 3+ months):\n  + indicator array = [survival array | failure array]\n  + survival array = 1 if individual has survived interval, 0 otherwise (for each of the n intervals)\n  + failure array = 1 if individual failed during interval, 0 otherwise\n    + If an individual is censored (still active), their failure array contains only 0s\n\n### Set Constants\n```shell\nBUCKET=\"gs:\/\/[GCS Bucket]\"\nNOW=\"$(date +%Y%m%d%H%M%S)\"\nOUTPUT_DIR=\"${BUCKET}\/output_data\/${NOW}\"\nPROJECT=\"[PROJECT ID]\"\n```\n\n### Run locally with Dataflow\n```shell\ncd preprocessor\n\npython -m run_preprocessing \\\n--output_dir \"${OUTPUT_DIR}\" \\\n--project_id \"${PROJECT}\"\n\ncd ..\n```\n\n### Run on the Cloud with Dataflow\nThe top-level preprocessor directory should be the working directory for running the preprocessing script. The setup.py file should be located in the working directory.\n\n```shell\ncd preprocessor\n\npython -m run_preprocessing \\\n--cloud \\\n--output_dir \"${OUTPUT_DIR}\" \\\n--project_id \"${PROJECT}\"\n\ncd ..\n```\n\n\n## Model Training\nModel training minimizes the negative of the log likelihood function for a statistical Survival Analysis model with discrete-time intervals. The loss function is based on the paper [A scalable discrete-time survival model for neural networks](https:\/\/peerj.com\/articles\/6257.pdf).\n\nFor each record, the conditional hazard probability is the probability of failure in an interval, given that the individual has survived at least to the beginning of the interval. Therefore, the probability that a user survives the given interval, or the likelihood, is the product of (1 - hazard) for all of the earlier (and current) intervals.\n\nSo, the log likelihood is: ln(current hazard) + sum(ln(1 - earlier hazards)). 
Equivalently, each individual's log likelihood is: `ln(1 - (1 if survived 0 if not)*(Prob of failure)) + ln(1 - (1 if failed 0 if not)*(Prob of survival))` summed over all time intervals.\n\n### Set Constants\nThe TFRecord output of the preprocessing job should be used as input to the training job.\n\nMake sure to navigate back to the top-level directory.\n\n```shell\nINPUT_DIR=\"${OUTPUT_DIR}\"\nMODEL_DIR=\"${BUCKET}\/model\/$(date +%Y%m%d%H%M%S)\"\n```\n\n### Train locally with AI Platform\n```shell\ngcloud ai-platform local train \\\n--module-name trainer.task \\\n--package-path trainer\/trainer \\\n--job-dir ${MODEL_DIR} \\\n-- \\\n--input-dir \"${INPUT_DIR}\"\n```\n\n### Train on the Cloud with AI Platform\n```shell\nJOB_NAME=\"train_$(date +%Y%m%d%H%M%S)\"\n\ngcloud ai-platform jobs submit training ${JOB_NAME} \\\n--job-dir ${MODEL_DIR} \\\n--config trainer\/config.yaml \\\n--module-name trainer.task \\\n--package-path trainer\/trainer \\\n--region us-east1 \\\n--python-version 3.5 \\\n--runtime-version 1.13 \\\n-- \\\n--input-dir ${INPUT_DIR}\n```\n\n### Hyperparameter Tuning with AI Platform\n```shell\nJOB_NAME=\"hptuning_$(date +%Y%m%d%H%M%S)\"\n\ngcloud ai-platform jobs submit training ${JOB_NAME} \\\n--job-dir ${MODEL_DIR} \\\n--module-name trainer.task \\\n--package-path trainer\/trainer \\\n--config trainer\/hptuning_config.yaml \\\n--python-version 3.5 \\\n--runtime-version 1.13 \\\n-- \\\n--input-dir ${INPUT_DIR}\n```\n\n### Launch Tensorboard\n```shell\ntensorboard --logdir ${MODEL_DIR}\n```\n\n## Predictions\nThe model predicts the conditional likelihood that a user survived an interval given that the user reached the interval. 
It outputs an n-dimensional vector, where each element corresponds to the predicted conditional probability of surviving through the end of the time interval (1 - hazard).\n\nIn order to determine the predicted class, the cumulative product of the conditional probabilities must be compared to some threshold.\n\n### Deploy model on AI Platform\nThe SavedModel was saved in a timestamped subdirectory of model_dir.\n```shell\nMODEL_NAME=\"survival_model\"\nVERSION_NAME=\"demo_version\"\nSAVED_MODEL_DIR=$(gsutil ls $MODEL_DIR\/export\/export | tail -1)\n\ngcloud ai-platform models create $MODEL_NAME \\\n--regions us-east1\n\ngcloud ai-platform versions create $VERSION_NAME \\\n  --model $MODEL_NAME \\\n  --origin $SAVED_MODEL_DIR \\\n  --runtime-version=1.13 \\\n  --framework TENSORFLOW \\\n  --python-version=3.5\n```\n### Running batch predictions\n```shell\nINPUT_PATHS=$INPUT_DIR\/data\/test\/*\nOUTPUT_PATH=<GCS directory for predictions>\nJOB_NAME=\"predict_$(date +%Y%m%d%H%M%S)\"\n\ngcloud ai-platform jobs submit prediction $JOB_NAME \\\n    --model $MODEL_NAME \\\n    --input-paths $INPUT_PATHS \\\n    --output-path $OUTPUT_PATH \\\n    --region us-east1 \\\n    --data-format TF_RECORD\n```","site":"GCP"}
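The discrete-time log likelihood described in the Model Training section, combined with the 2n-dimensional indicator layout from the Preprocessing section, can be sketched in NumPy. This is an illustrative sketch, not the trainer's actual loss code; the function name is mine, and probabilities are assumed to lie strictly in (0, 1):

```python
import numpy as np

def negative_log_likelihood(indicator, surv_prob):
    """Per-example discrete-time survival loss (illustrative sketch).

    indicator: 2n-dim array [survival array | failure array], as built
               in the Preprocessing section.
    surv_prob: n-dim predicted conditional survival probabilities
               (1 - hazard) for each interval.
    """
    n = surv_prob.shape[-1]
    survived = indicator[..., :n]   # 1 if the example survived the interval
    failed = indicator[..., n:]     # 1 if the example failed in the interval
    # Survived intervals contribute ln(1 - hazard); the failure interval
    # (censored examples have none) contributes ln(hazard).
    log_lik = survived * np.log(surv_prob) + failed * np.log(1.0 - surv_prob)
    return -np.sum(log_lik, axis=-1)

# A user who survived interval 0 and failed during interval 1 (n = 3 buckets):
ind = np.array([1.0, 0.0, 0.0,   0.0, 1.0, 0.0])
probs = np.array([0.9, 0.7, 0.8])   # predicted (1 - hazard) per interval
loss = negative_log_likelihood(ind, probs)   # -(ln 0.9 + ln 0.3) ≈ 1.31
```

For a censored (still active) user the failure half of the indicator is all zeros, so only the ln(1 - hazard) survival terms contribute; this is exactly why active users can stay in the training population instead of being thrown out.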
{"questions":"GCP The DFDL definitions are stored in a Firestore database The processor service subscribes to the topic processes every message The application send a request with the binary to process to a pubsub topic Project Structure Data Format Description Language Processor Example This module is a example how to process a binary using a DFDL definition applies the definition and publishes the json result to a topic in pubsub","answers":"# Data Format Description Language ([DFDL](https:\/\/en.wikipedia.org\/wiki\/Data_Format_Description_Language)) Processor Example\nThis module is an example of how to process a binary using a DFDL definition.\nThe DFDL definitions are stored in a Firestore database.\n\nThe application sends a request with the binary to process to a pubsub topic.\n\nThe processor service subscribes to the topic, processes every message,\napplies the definition and publishes the JSON result to a topic in pubsub.\n\n## Project Structure\n\n```\n.\n\u2514\u2500\u2500 dfdl_example\n \u251c\u2500\u2500 examples # Contains a binary and a DFDL definition used to run this example\n \u2514\u2500\u2500 src\n       \u2514\u2500\u2500 main\n           \u2514\u2500\u2500 java\n               \u2514\u2500\u2500 com.example.dfdl\n                   \u251c\u2500\u2500 DfdlDef # Embedded entities\n                   \u251c\u2500\u2500 DfdlDefRepository # A Firestore Reactive Repository\n                   \u251c\u2500\u2500 DfdlService # Processes the binary using a DFDL definition and outputs JSON\n                   \u251c\u2500\u2500 FirestoreService # Reads DFDL definitions from a Firestore database\n                   \u251c\u2500\u2500 MessageController # Publishes a message to a topic with a binary to be processed.\n                   \u251c\u2500\u2500 ProcessorService # Initializes components, configurations and services.\n                   \u251c\u2500\u2500 PubSubServer # Publishes and subscribes to topics using channel adapters.\n                   
\u2514\u2500\u2500 README.md\n \u2514\u2500\u2500 resources\n      \u2514\u2500\u2500 application.properties\n \u2514\u2500\u2500 pom.xml\n```\n\n## Technology Stack\n1. Cloud Firestore\n2. Cloud Pubsub\n\n## Frameworks\n1. Spring Boot\n2. [Spring Data Cloud Firestore](https:\/\/docs.spring.io\/spring-cloud-gcp\/docs\/current\/reference\/html\/firestore.html)\n   * [Reactive Repository](https:\/\/docs.spring.io\/spring-cloud-gcp\/docs\/current\/reference\/html\/firestore.html#_reactive_repositories)\n\n## Libraries\n1. [Apache Daffodil](https:\/\/daffodil.apache.org\/)\n\n## Setup Instructions\n### Project Setup\n#### Creating a Project in the Google Cloud Platform Console\n\nIf you haven't already created a project, create one now. Projects enable you to\nmanage all Google Cloud Platform resources for your app, including deployment,\naccess control, billing, and services.\n\n1. Open the [Cloud Platform Console][cloud-console].\n1. In the drop-down menu at the top, select **Create a project**.\n1. Give your project a name, for example my-dfdl-project.\n1. Make a note of the project ID, which might be different from the project\n   name. The project ID is used in commands and in configurations.\n\n[cloud-console]: https:\/\/console.cloud.google.com\/\n\n#### Enabling billing for your project.\n\nIf you haven't already enabled billing for your project, [enable\nbilling][enable-billing] now. Enabling billing is required to use Cloud Firestore\nand Pub\/Sub.\n\n[enable-billing]: https:\/\/console.cloud.google.com\/project\/_\/settings\n\n#### Install the Google Cloud SDK.\n\nIf you haven't already installed the Google Cloud SDK, [install the Google\nCloud SDK][cloud-sdk] now. 
The SDK contains tools and libraries that enable you\nto create and manage resources on Google Cloud Platform.\n\n[cloud-sdk]: https:\/\/cloud.google.com\/sdk\/\n\n#### Setting Google Application Default Credentials\n\nSet your [Google Application Default\nCredentials][application-default-credentials] by [initializing the Google Cloud\nSDK][cloud-sdk-init] with the command:\n\n```\ngcloud init\n```\nGenerate a credentials file by running the\n[application-default login](https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/auth\/application-default\/login) command:\n   \n```\n    gcloud auth application-default login\n```\n\n[cloud-sdk-init]: https:\/\/cloud.google.com\/sdk\/docs\/initializing\n[application-default-credentials]: https:\/\/developers.google.com\/identity\/protocols\/application-default-credentials\n\n### Firestore Setup\nHow to create a Firestore database instance can be found [here](https:\/\/cloud.google.com\/firestore\/docs\/quickstart-servers#create_a_in_native_mode_database).\n\n#### How to add data to Firestore\nThe following doc, [Managing firestore using the console](https:\/\/cloud.google.com\/firestore\/docs\/using-console),\ncan be used to add data to Firestore to run the example.\n\nThis example connects to a Cloud Firestore database with a collection with the\nfollowing specification.\nThe configuration can be changed by editing the application.properties file.\n```\n    Root collection\n     dfdl-schemas =>\n         document_id\n             binary_example => {\n                 'definiton':\n                  \"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n                     <xs:schema xmlns:xs=\"http:\/\/www.w3.org\/2001\/XMLSchema\"\n                      xmlns:dfdl=\"http:\/\/www.ogf.org\/dfdl\/dfdl-1.0\/\"\n                      targetNamespace=\"http:\/\/example.com\/dfdl\/helloworld\/\">\n                        <xs:include\n                      schemaLocation=\"org\/apache\/daffodil\/xsd\/DFDLGeneralFormat.dfdl.xsd\" \/>\n               
         <xs:annotation>\n                           <xs:appinfo source=\"http:\/\/www.ogf.org\/dfdl\/\">\n                              <dfdl:format ref=\"GeneralFormat\" \/>\n                           <\/xs:appinfo>\n                        <\/xs:annotation>\n                        <xs:element name=\"binary_example\">\n                           <xs:complexType>\n                              <xs:sequence>\n                                 <xs:element name=\"w\" type=\"xs:int\">\n                                    <xs:annotation>\n                                       <xs:appinfo source=\"http:\/\/www.ogf.org\/dfdl\/\">\n                                          <dfdl:element representation=\"binary\"\n                                              binaryNumberRep=\"binary\" byteOrder=\"bigEndian\" lengthKind=\"implicit\" \/>\n                                       <\/xs:appinfo>\n                                    <\/xs:annotation>\n                                 <\/xs:element>\n                                 <xs:element name=\"x\" type=\"xs:int\">\n                                    <xs:annotation>\n                                       <xs:appinfo source=\"http:\/\/www.ogf.org\/dfdl\/\">\n                                          <dfdl:element representation=\"binary\"\n                                                binaryNumberRep=\"binary\" byteOrder=\"bigEndian\" lengthKind=\"implicit\" \/>\n                                       <\/xs:appinfo>\n                                    <\/xs:annotation>\n                                 <\/xs:element>\n                                 <xs:element name=\"y\" type=\"xs:double\">\n                                    <xs:annotation>\n                                       <xs:appinfo source=\"http:\/\/www.ogf.org\/dfdl\/\">\n                                          <dfdl:element representation=\"binary\"\n                                                 binaryFloatRep=\"ieee\" byteOrder=\"bigEndian\" 
lengthKind=\"implicit\" \/>\n                                       <\/xs:appinfo>\n                                    <\/xs:annotation>\n                                 <\/xs:element>\n                                 <xs:element name=\"z\" type=\"xs:float\">\n                                    <xs:annotation>\n                                       <xs:appinfo source=\"http:\/\/www.ogf.org\/dfdl\/\">\n                                          <dfdl:element representation=\"binary\"\n                                                byteOrder=\"bigEndian\" lengthKind=\"implicit\" binaryFloatRep=\"ieee\" \/>\n                                       <\/xs:appinfo>\n                                    <\/xs:annotation>\n                                 <\/xs:element>\n                              <\/xs:sequence>\n                           <\/xs:complexType>\n                        <\/xs:element>\n                     <\/xs:schema>\";\n                  }\n```\nThis dfdl definition example can be found in the binary_example.dfdl.xsd file.\n\n### Pubsub Setup\nThe following [doc](https:\/\/cloud.google.com\/pubsub\/docs\/quickstart-console)\ncan be used to set up the topics and subscriptions needed to run this example.\n\n#### Topics\nTo run this example two topics need to be created:\n1. A topic to publish the final json output: \"data-output-json-topic\"\n2. A topic to publish the binary to be processed: \"data-input-binary-topic\"\n\n#### Subscription\nThe following subscriptions need to be created:\n1. 
A subscription to pull the binary data: data-input-binary-sub\n\n## Usage\n### Initialize the application\nReference: [Building an Application with Spring Boot](https:\/\/spring.io\/guides\/gs\/spring-boot\/)\n```\n      .\/mvnw spring-boot:run\n```\n### Send a request\n```\n    curl --data \"message=0000000500779e8c169a54dd0a1b4a3fce2946f6\" localhost:8081\/publish\n``","site":"GCP","answers_cleaned":"  Data Format Description Language   DFDL  https   en wikipedia org wiki Data Format Description Language   Processor Example This module is a example how to process a binary using a DFDL definition  The DFDL definitions are stored in a Firestore database   The application send a request with the binary to process to a pubsub topic   The processor service subscribes to the topic  processes every message  applies the definition and publishes the json result to a topic in pubsub      Project Structure            dfdl example      examples   Contain a binary and dfdl definition to be used to run this example      src            main                java                    com example dfdl                        DfdlDef   Embedded entiites                        DfdlDefRepository   A Firestore Reactive Repository                        DfdlService   Processes the binary using a dfdl definition and output a json                        FirestoreService   Reads dfdl definitons from a firestore database                        MessageController   Publishes message to a topic with a binary to be processed                         ProcessorService   Initializes components  configurations and services                         PubSubServer   Publishes and subscribes to topics using channels adapters                         README md      resources           application properties      pom xml         Technology Stack 1  Cloud Firestore 2  Cloud Pubsub     Frameworks 1  Spring Boot 2   Spring Data Cloud Firestore  https   docs spring io spring cloud gcp docs current reference html 
firestore html        Reactive Repository  https   docs spring io spring cloud gcp docs current reference html firestore html  reactive repositories      Libraries 1   Apache Daffodil  https   daffodil apache org       Setup Instructions     Project Setup      Creating a Project in the Google Cloud Platform Console  If you haven t already created a project  create one now  Projects enable you to manage all Google Cloud Platform resources for your app  including deployment  access control  billing  and services   1  Open the  Cloud Platform Console  cloud console   1  In the drop down menu at the top  select   Create a project    1  Give your project a name   my dfdl project 1  Make a note of the project ID  which might be different from the project    name  The project ID is used in commands and in configurations    cloud console   https   console cloud google com        Enabling billing for your project   If you haven t already enabled billing for your project   enable billing  enable billing  now   Enabling billing allows is required to use Cloud Bigtable and to create VM instances    enable billing   https   console cloud google com project   settings       Install the Google Cloud SDK   If you haven t already installed the Google Cloud SDK   install the Google Cloud SDK  cloud sdk  now  The SDK contains tools and libraries that enable you to create and manage resources on Google Cloud Platform    cloud sdk   https   cloud google com sdk        Setting Google Application Default Credentials  Set your  Google Application Default Credentials  application default credentials  by  initializing the Google Cloud SDK  cloud sdk init  with the command       gcloud init     Generate a credentials file by running the  application default login  https   cloud google com sdk gcloud reference auth application default login  command              gcloud auth application default login       cloud sdk init   https   cloud google com sdk docs initializing  application default 
credentials   https   developers google com identity protocols application default credentials      Firestore Setup How to create a Firestore database instance can be found  here  https   cloud google com firestore docs quickstart servers create a in native mode database         How to add data to firestore The following doc   Managing firestore using the console  https   cloud google com firestore docs using console   can be used to add data to firestore to run the example   This example connects to a Cloud Firestore with a collection with the following specification  The configuration can be changed by changing the application properties file          Root collection      dfdl schemas             document id              binary example                        definiton                        xml version 1 0  encoding  UTF 8                          xs schema xmlns xs  http   www w3 org 2001 XMLSchema                        xmlns dfdl  http   www ogf org dfdl dfdl 1 0                         targetNamespace  http   example com dfdl helloworld                             xs include                       schemaLocation  org apache daffodil xsd DFDLGeneralFormat dfdl xsd                              xs annotation                              xs appinfo source  http   www ogf org dfdl                                   dfdl format ref  GeneralFormat                                  xs appinfo                            xs annotation                           xs element name  binary example                               xs complexType                                 xs sequence                                    xs element name  w  type  xs int                                        xs annotation                                          xs appinfo source  http   www ogf org dfdl                                               dfdl element representation  binary                                                binaryNumberRep  binary  byteOrder  bigEndian  lengthKind  
implicit                                              xs appinfo                                        xs annotation                                     xs element                                    xs element name  x  type  xs int                                        xs annotation                                          xs appinfo source  http   www ogf org dfdl                                               dfdl element representation  binary                                                  binaryNumberRep  binary  byteOrder  bigEndian  lengthKind  implicit                                              xs appinfo                                        xs annotation                                     xs element                                    xs element name  y  type  xs double                                        xs annotation                                          xs appinfo source  http   www ogf org dfdl                                               dfdl element representation  binary                                                   binaryFloatRep  ieee  byteOrder  bigEndian  lengthKind  implicit                                              xs appinfo                                        xs annotation                                     xs element                                    xs element name  z  type  xs float                                        xs annotation                                          xs appinfo source  http   www ogf org dfdl                                               dfdl element representation  binary                                                  byteOrder  bigEndian  lengthKind  implicit  binaryFloatRep  ieee                                              xs appinfo                                        xs annotation                                     xs element                                  xs sequence                               xs complexType                            xs element                         xs 
schema                            This dfdl definition example can be found in the binary example dfdl xsd file       Pubsub Setup The following  doc  https   cloud google com pubsub docs quickstart console  can be used to set up the topics and subscriptions needed to run this example        Topics To run this example two topics need to be created  1  A topic to publish the final json output   data output json topic  2  A topic to publish the binary to be processed   data input binary topic        Subscription The following subscriptions need to be created  1  A subscription to pull the binary data  data input binary sub     Usage     Initialize the application Reference   Building an Application with Spring Boot  https   spring io guides gs spring boot               mvnw spring boot run         Send a request         curl   data  message 0000000500779e8c169a54dd0a1b4a3fce2946f6  localhost 8081 publish   "}
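The DFDL schema above declares four fixed-width big-endian binary fields (w: xs:int, x: xs:int, y: xs:double, z: xs:float), so the 20-byte payload from the curl example can be decoded by hand. As a rough cross-check, here is a minimal sketch using Python's stdlib `struct` (illustrative only; the project itself does this parsing in Java via Apache Daffodil):

```python
import struct

# Hex payload from the README's curl example request
payload = bytes.fromhex("0000000500779e8c169a54dd0a1b4a3fce2946f6")

# The DFDL schema declares, in order: w (xs:int), x (xs:int),
# y (xs:double), z (xs:float), all big-endian binary -> ">iidf" (20 bytes)
w, x, y, z = struct.unpack(">iidf", payload)

print({"w": w, "x": x, "y": y, "z": z})  # w is 5 (first four bytes are 00 00 00 05)
```

Getting the same field values back from the deployed service confirms that the schema and the published binary agree on byte order and field widths.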
{"questions":"GCP tl dr The Terraform project in this repository deploys a single simple Python Cloud Function that executes a simple action against Google Cloud API using identity of the function caller illustrating the full flow described above Often the Google Cloud Functions are used for automation tasks e g cloud infrastrcuture automation In big Google Cloud organizations such automation needs access to all organization tenants and because of that the Cloud Function Service Account needs wide scope IAM permissions at the organization or similar level This example describes one possible solution of reducing the set of Google Cloud IAM permissions required by a Service Account running a Google Cloud Function by executing the Cloud Function on behalf of and with IAM permissions of the caller identity Cloud Function Act As Caller","answers":"# Cloud Function \"Act As\" Caller\n\n> tl;dr The Terraform project in this repository deploys a single simple Python Cloud Function that executes a simple action against the Google Cloud API using the identity of the function caller, illustrating the full flow described above.\n\n---\n\nThis example describes one possible solution for reducing the set of Google Cloud IAM permissions required by a Service Account running a Google Cloud Function: executing the Cloud Function on behalf of, and with the IAM permissions of, the caller identity.\n\nGoogle Cloud Functions are often used for automation tasks, e.g. cloud infrastructure automation. In big Google Cloud organizations such automation needs access to all organization tenants, and because of that the Cloud Function Service Account needs wide-scope IAM permissions at the organization or similar level.\n\nFollowing the Principle of Least Privilege, the Service Account that the Cloud Function executes under needs only the IAM permissions required for its successful execution. 
When a common Cloud Function automation is called by multiple tenants to perform operations on their individual tenant Google Cloud resources, it only requires IAM permissions to the GCP resources of one tenant at a time. Those are typically permissions that the tenant already has or can obtain for its tenant Service Accounts or Workload Identity.\n\nThis example illustrates a possible approach to reducing the set of Cloud Function IAM permissions to a temporary set of permissions defined by the context of the Cloud Function caller, and its identity in particular.\n\nThe example contains a solution for a GitHub based CI\/CD workflow caller of a Google Cloud Function, based on the Workload Identity Federation mechanism supported by GitHub and the [google-github-actions\/auth](https:\/\/github.com\/google-github-actions\/auth) GitHub Action.\n\nThe Terraform project in this repository deploys a single simple Python Cloud Function that executes a simple action against the Google Cloud API using the identity of the function caller, illustrating the full flow described above.\n\n\n## Caller Workload Identity\n\nOne of the typical security concerns in cloud SaaS based CI\/CD pipelines that manage and manipulate Google Cloud resources is the need to securely authenticate the caller process on the Google Cloud side.\n\nA typical way of authenticating is using Google Cloud Service Account keys stored on the caller side, which are effectively long-lived client secrets that need to be protected and kept secret. The need to manage and protect long-lived Service Account keys is a security challenge for the client.\n\nThe recommended way to improve the security posture of the CI\/CD automation is to remove the need to authenticate with Google Cloud using Service Account keys altogether. 
Workload Identity Federation is the mechanism that allows secure authentication, based on the OpenID Connect authentication flow, with no long-lived secrets managed by the client.\n\nThe following diagram describes the authentication process that does not require Google Cloud Service Account key management on the client side.\n\n![Workload Identity Federation Authentication](images\/wif-authentication.png?raw=true \"Workload Identity Federation Authentication\")\n\n## Solution\n\nThe Cloud Function Service Account is not granted any IAM permissions in the tenant GCP project. The only permission it requires is read access to the ephemeral Google Secret Manager Secret resource in its own project, where the Cloud Function is defined and running.\n\nThe Caller is the GitHub Runner that executes the GitHub Workflow defined in this source repository in the [.github\/workflows\/call-function.yml](.\/.github\/workflows\/call-function.yml) file. It authenticates to GCP as a Workload Identity using Workload Identity Federation set up by the Terraform project in this repository.\n\nThe Service Account \"mapped\" to the Workload Identity of the GitHub Workflow run has the needed read\/write permissions to the GCP resources that the Cloud Function needs to manipulate.\n\nSeveral variations are possible with regard to where the ephemeral secrets are stored (application project vs. central automation project).\n\n### Simple Invocation\n\nThe following diagram shows the simplest case of the Cloud Function invocation, in which the GCP access token gets passed in the call to the Cloud Function, e.g. in its payload or, better, in an HTTP header.\n\n![Invocation with the secret in the application project](images\/cf-act-as0.png?raw=true \"Invocation with the secret in the application project\")\n\n1. The GitHub Workflow (the Caller) authenticates to GCP using the [google-github-actions\/auth](https:\/\/github.com\/google-github-actions\/auth) GitHub Action. 
After this step the GitHub Workflow has a short-lived GCP access token available to the subsequent workflow steps, which use it to connect to GCP and call the Cloud Function.\n2. The Caller passes the GCP access token obtained in the previous step in the Cloud Function invocation HTTPS request payload or header.\n3. The Cloud Function authenticates to the Google API using the access token extracted from the request and accesses the target GCP resource on behalf of, and with the IAM permissions of, the Caller.\n\n### Invocation with Secret\n\nIn some cases, such as when the logic represented by a Cloud Function is implemented by several components, e.g. Cloud Functions chained together, the caller's access token needs to be passed between those components securely. In that situation it is preferable to apply a [Claim Check pattern](https:\/\/www.enterpriseintegrationpatterns.com\/StoreInLibrary.html) and pass the resource name of the secret containing the access token between the solution components, as illustrated in the following diagram.\n\n![Invocation with the secret in the application project](images\/cf-act-as3.png?raw=true \"Invocation with the secret in the application project\")\n\n1. The GitHub Workflow (the Caller) authenticates to GCP using the [google-github-actions\/auth](https:\/\/github.com\/google-github-actions\/auth) GitHub Action. After this step the GitHub Workflow has a short-lived GCP access token available to the subsequent workflow steps, which use it to connect to GCP and call the Cloud Function.\n2. The Caller passes the GCP access token obtained in the previous step in the Cloud Function invocation HTTPS request payload or header.\n3. The first Cloud Function extracts the GCP access token from the incoming message payload and stores it in an ephemeral Secret Manager Secret in the central project location.\n4. 
The Cloud Function extracts the access token from the ephemeral Secret Manager Secret.\n5. The Cloud Function authenticates to the Google API using the access token extracted from the Secret Manager Secret in the previous step and accesses the target GCP resource on behalf of, and with the IAM permissions of, the Caller.\n6. (Optionally) The Cloud Function drops the ephemeral Secret Manager Secret resource.\n7. (Optionally) The Caller double-checks and drops the ephemeral Secret Manager Secret resource.\n\n## Service Accounts and IAM Permissions\n\nThis project creates two GCP Service Accounts:\n* Cloud Function Service Account \u2013 replaces the default Cloud Function [runtime service account](https:\/\/cloud.google.com\/functions\/docs\/securing\/function-identity#runtime_service_account) with an explicit customer-managed service account\n* Workload Identity Service Account \u2013 the GCP service account that represents the external GitHub Workload Identity. When the GitHub workflow authenticates to GCP, the GitHub Workload Identity is granted this service account's IAM permissions.\n\n| Service Account     | Role                                        | Description                                       |\n|---------------------|---------------------------------------------|---------------------------------------------------|\n| `cf-sample-account` | `roles\/secretmanager.secretVersionManager`  | To create the Secret Manager secret for the access token in the \"Invocation with central secret\" case |\n|                     | `roles\/secretmanager.secretAccessor`        | To read the access token from the Secret Manager secret  |\n| `wi-sample-account` | `roles\/secretmanager.secretVersionManager`  | To create the Secret Manager secret for the access token in the \"Invocation with application owned secret\" case |\n|                     |                                             | To store the access token in the Secret Manager secret in the \"Invocation with 
application owned secret\" case |\n|                     | `roles\/iam.workloadIdentityUser`            | Maps the external GitHub Workload Identity to the Workload Identity Service Account |\n|                     | `roles\/cloudfunctions.developer`            | `cloudfunctions.functions.call` permission to invoke the sample Cloud Function  |\n|                     | `roles\/viewer`                              | Sample IAM permissions to list GCE VM instances, granted to the Workload Identity Service Account but not to the Cloud Function Service Account  |\n\n\n## Deployment\n\nThe Terraform project in this repository defines the following input variables, which can either be edited in the `variables.tf` file directly or passed on the Terraform command line.\n\n| Name          | Description                                     |\n|---------------|-------------------------------------------------|\n| `project_id`  | Target GCP project id. All resources will be deployed here.  |\n| `project_number`  | Target GCP project number.                  |\n| `location`    | The GCP region to deploy resources in  |\n| `zone`        | The GCP zone for the sample Cloud Function to list GCE VM instances in                                     |\n| `github_repo`  | The name of the GitHub repository in the format `organization\/repository`.                                |\n| `github_ref`  | The git reference to the source code repository. Usually a reference to the branch being built  |\n\nTo deploy the example with the Cloud Function and all required GCP components, including the Workload Identity Pool and Provider, run the usual\n```\nterraform init\nterraform plan\nterraform apply\n```\n\nin the root folder of this repository.\n\nThe project deploys the GCP resources by default into the `europe-west3` region. 
You can change that by passing an alternative value to the `location` input variable: copy `terraform.tfvars.sample` to `terraform.tfvars` and update the values there.\n\n\n## Call Function\n\nAfter the example is provisioned through Terraform, you can test and call the deployed function from the command line with gcloud:\n```\ngcloud functions call sample-function --region=${REGION} --data '{}'\n```\n\nThe sample Cloud Function calls the Google Compute Engine API v1 and [lists](https:\/\/cloud.google.com\/compute\/docs\/reference\/rest\/v1\/instances\/list) Google Compute Engine instances in the specified region.\n\nThe Cloud Function deployed by this project runs as the `cf-sample-account@${PROJECT_ID}.iam.gserviceaccount.com` service account.\nThis service account doesn't have any granted permissions in GCP except for read access to the Secret Manager Secret. Hence, the Cloud Function cannot reach the GCE API and list the VMs by default.\n\nFor this action to complete successfully from the command line as illustrated above, the Cloud Function service\naccount `cf-sample-account@${PROJECT_ID}.iam.gserviceaccount.com` needs to have the `compute.instances.list` permission in the target GCP project.\n\nIf the execution succeeds, the command line output will be similar to the following:\n```\n$ gcloud functions call sample-function --region=${REGION} --data '{}'\n\nexecutionId: 84e2bkg5717v\nresult: ', jumpbox (https:\/\/www.googleapis.com\/compute\/v1\/projects\/${PROJECT_ID}\/zones\/europe-west3-c\/machineTypes\/e2-micro)'\n```\n\n## GitHub Workflow\n\nThe sample GitHub [workflow](.github\/workflows\/call-function.yml) in this repository illustrates the way of calling the sample Cloud Function from a GitHub workflow.\n\nFor the workflow to succeed, a dedicated service account `wi-sample-account` is mapped to the authenticated GitHub Workload Identity. 
It needs to have the `cloudfunctions.functions.call` permission for the deployed sample Cloud Function in order to be able to invoke it. The `roles\/cloudfunctions.developer` built-in role grants that permission.\n\n\n## Running the Example\n\nCopy the `terraform.tfvars.sample` file to `terraform.tfvars` and adjust the settings inside for your project, location, etc.\n\nDeploy the GCP resources with Terraform:\n```\nterraform init\nterraform plan\nterraform apply\n```\n\nTo invoke the Cloud Function on behalf of the GitHub workload identity, create a GitHub Actions workflow from the `.github\/workflows\/call-function.yml` file. Copy this file with its relative folders to the root of your GitHub repository for GitHub to pick up the workflow.\n\nPlease note that the GitHub workflow reads the parameters during the run from the `terraform.tfvars.sample` file in the root repository folder. You'd need to either modify the workflow file or check in correct values to the `terraform.tfvars.sample` file.\n\nAfter the GCP resources are provisioned, and given that the parameters in `terraform.tfvars.sample` are correct, the GitHub Actions run should succeed. The [last step](.github\/workflows\/call-function.yml#L68) is the sample Cloud Function call.\nThis succeeds because the Workload Identity Service Account that the project associates with the GitHub identity has permissions to list GCE VM instances, which is what the sample Python Cloud Function does.\n\nAt this point you can verify that the direct Cloud Function invocation, e.g. 
using an interactive user account, with no access token supplied, fails because the Cloud Function Service Account itself does not have permissions to list GCE VM instances:\n```\ngcloud functions call sample-function --region=${REGION} --data '{}'\n```\nThat call should fail no matter which GCE permissions your current user account has:\n```\nresult: \"Error: <HttpError 403 when requesting https:\/\/compute.googleapis.com\/compute\/v1\/projects\/$PROJECT_ID\/zones\/$ZONE\/instances?alt=json\\\n  \\ returned \\\"Required 'compute.instances.list' permission for 'projects\/$PROJECT_ID'\\\"\\\n  . Details: \\\"[{'message': \\\"Required 'compute.instances.list' permission for 'projects\/$PROJECT_ID'\\\"\\\n  , 'domain': 'global', 'reason': 'forbidden'}]\\\">\"\n```\n\nNow you can try to pass an access token that represents an account that has GCE VM instance list permissions. E.g. if your current user account has that permission:\n```\ngcloud functions call sample-function --region=${REGION} --data \"{ \\\"access_token\\\": \\\"$(gcloud auth print-access-token)\\\" }\"\n```\nThis time the call should succeed and show the list of GCE VMs, e.g.\n```\nexecutionId: 76wkh0r8yhjf\nresult: 'jumpbox'\n```\n\nAlternatively, you can pass the access token via a Secret Manager Secret in the same way as the GitHub workflow does.\n\nSave the access token to the Secret Manager Secret created by this Terraform project and pass the Secret Manager secret resource name in the call to the sample Cloud Function instead of the access token:\n```\ngcloud functions call sample-function --region=${REGION} \\\n    --data \"{ \\\"secret_resource\\\": \\\"$(echo -n $(gcloud auth print-access-token) | \\\n    gcloud secrets versions add access-token-secret --data-file=- --format json | \\\n    jq -r .name)\\\" }\"\n```\n\nThis call should succeed as well. 
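Either way the token reaches the function, the function-side logic only has to dispatch on the payload shape: a literal `access_token`, or a `secret_resource` name to be claim-checked out of Secret Manager. A minimal, hypothetical Python sketch of that dispatch (names like `resolve_access_token` and `fetch_secret` are illustrative, not the repository's actual code; a real `fetch_secret` would wrap a Secret Manager client call):

```python
def resolve_access_token(payload, fetch_secret):
    """Return the caller's access token from a Cloud Function request payload.

    Supports both invocation modes described above:
    - "access_token": the token is passed directly in the payload
    - "secret_resource": a Claim Check; only the Secret Manager resource
      name travels in the payload, and the token is fetched via fetch_secret
    """
    if "access_token" in payload:
        return payload["access_token"]
    if "secret_resource" in payload:
        # A real fetch_secret would wrap something like
        # SecretManagerServiceClient().access_secret_version(...)
        return fetch_secret(payload["secret_resource"])
    raise ValueError("payload carries neither access_token nor secret_resource")

# Direct mode: the token itself travels in the request payload
token = resolve_access_token({"access_token": "ya29.example"}, fetch_secret=None)

# Claim Check mode: only the secret's resource name travels in the payload
token2 = resolve_access_token(
    {"secret_resource": "projects/p/secrets/access-token-secret/versions/1"},
    fetch_secret=lambda name: "ya29.from-secret",
)
```

Keeping the Secret Manager access behind an injected `fetch_secret` callable keeps the dispatch logic testable without GCP credentials.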
The sample Cloud Function will extract the access token of the current user account from the Secret Manager secret and call the GCE API, providing that access token for authentication.\n\n## Cleaning up\n\nTo avoid incurring charges to your Google Cloud account for the resources used in this tutorial, you can use Terraform to delete most of the resources. If you \ncreated a new project for deploying the resources, you can also delete the entire project.\n\nTo delete resources using Terraform, run the following command:\n\n    terraform destroy\n\nTo delete the project, do the following:\n\n1.  In the Cloud Console, go to the [Projects page](https:\/\/console.cloud.google.com\/iam-admin\/projects).\n1.  In the project list, select the project you want to delete and click **Delete**.\n1.  In the dialog, type the project ID, and then click **Shut down** to delete the project.\n---\n\n\n## Useful Commands\n\n### Read current access token using gcloud\n\n[Getting the access token using gcloud](https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/auth\/application-default\/print-access-token)\n\n```\ngcloud auth application-default print-access-token\n```\n\n### Store access token in Secret Manager\n```\necho -n \"$(gcloud auth print-access-token)\" | \\\n    gcloud secrets versions add access-token-secret --data-file=-\n```\n\n### Develop and Debug the Cloud Function locally\nWithin the [function](.\/function) folder run the following commands to start the function framework locally:\n```\npip install -r requirements.txt\nfunctions-framework --target main --debug\n```","site":"GCP","answers_cleaned":"  Cloud Function  Act As  Caller    tl dr The Terraform project in this repository deploys a single simple Python Cloud Function that executes a simple action against Google Cloud API using identity of the function caller illustrating the full flow described above        This example describes one possible solution of reducing the set of Google Cloud IAM permissions required by a 
"}
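The Secret Manager hand-off described above passes a fully qualified secret *version* resource name (the `.name` field that `gcloud secrets versions add --format json` returns). Below is a minimal sketch of the function-side read, assuming the `google-cloud-secret-manager` client library; both helper names are hypothetical, and as the document notes, the function's service account only needs read access to that secret.

```python
def secret_version_name(project: str, secret_id: str, version: str = "latest") -> str:
    # Resource name format expected in the `secret_resource` payload field.
    return f"projects/{project}/secrets/{secret_id}/versions/{version}"


def fetch_caller_token(secret_resource: str) -> str:
    """Read the caller's access token from the ephemeral secret (hypothetical helper)."""
    from google.cloud import secretmanager  # deferred import: runtime-only dependency

    client = secretmanager.SecretManagerServiceClient()
    response = client.access_secret_version(name=secret_resource)
    return response.payload.data.decode("utf-8")


print(secret_version_name("my-project", "access-token-secret"))
# → projects/my-project/secrets/access-token-secret/versions/latest
```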
{"questions":"GCP Scikit learn pipeline trainer for AI Platform This is a example for building a scikit learn based machine learning pipeline trainer 3 Estimator exact machine learning model e g RandomForestClassifier 2 Pre processer handle typical standard pre processing e g scaling imputation one hot encoding and etc to serve online traffic The entire pipeline includes three major components The pipeline can be trained locally or remotely on AI platform The trained model can be further deployed on AI platform that can be run on AI Platform which is built on top of the 1 Transformer generate new features from rawdata with user defined logic function","answers":"# Scikit-learn pipeline trainer for AI Platform\n\nThis is an example for building a scikit-learn-based machine learning pipeline trainer\nthat can be run on AI Platform, which is built on top of the [scikit-learn template](https:\/\/github.com\/GoogleCloudPlatform\/cloudml-samples\/tree\/master\/sklearn\/sklearn-template\/template).\nThe pipeline can be trained locally or remotely on AI Platform. The trained model can be further deployed on AI Platform\nto serve online traffic. The entire pipeline includes three major components:\n\n1. Transformer: generates new features from raw data with user-defined logic (a function).\n2. Pre-processor: handles typical standard pre-processing, e.g. scaling, imputation, one-hot encoding, etc.\n3. Estimator: the actual machine learning model, e.g. RandomForestClassifier.\n\nCompared with the [scikit-learn template](https:\/\/github.com\/GoogleCloudPlatform\/cloudml-samples\/tree\/master\/sklearn\/sklearn-template\/template),\nthis example has the following additional features:\n\n1. Support both Classification and Regression, which can be specified in the configuration\n2. Support serving for both JSON and list-of-values formats\n3. 
Support additional custom transformation logic besides the typical pre-processing provided by scikit-learn\n\nGoogle Cloud tools used:\n- [Google Cloud Platform](https:\/\/cloud.google.com\/) (GCP) lets you build and\nhost applications and websites, store data, and analyze data on Google's\nscalable infrastructure.\n- [Cloud ML Engine](https:\/\/cloud.google.com\/ml-engine\/) is a managed service\nthat enables you to easily build machine learning models that work on any type\nof data, of any size. This is now part of\n[AI Platform](https:\/\/cloud.google.com\/ai-platform\/).\n- [Google Cloud Storage](https:\/\/cloud.google.com\/storage\/) (GCS) is a unified\nobject storage for developers and enterprises, from live data serving to data\nanalytics\/ML to data archiving.\n- [Cloud SDK](https:\/\/cloud.google.com\/sdk\/) is a set of tools for Google Cloud\nPlatform, which contains e.g. the gcloud, gsutil, and bq command-line tools to\ninteract with Google Cloud products and services.\n- [Google BigQuery](https:\/\/cloud.google.com\/bigquery\/) is a fast, highly scalable,\ncost-effective, and fully managed cloud data warehouse for analytics, with\nbuilt-in machine learning.\n\n## Pipeline overview\nThe overall flow of the pipeline can be summarized as follows and illustrated in the flowchart:\n\n**Raw Data -> Transformer -> Pre-processor -> Estimator -> Trained Pipeline**\n\n![Flowchart](.\/sklearn.png)\n\n## Repository structure\n```\ntemplate\n    |__ config\n        |__ config.yaml             # for running a normal training job on AI Platform\n        |__ hptuning_config.yaml    # for running a hyperparameter tuning job on AI Platform\n    |__ scripts\n        |__ train.sh                # convenience script for running machine learning training jobs\n        |__ deploy.sh               # convenience script for deploying a trained scikit-learn model\n        |__ predict.sh              # convenience script for requesting online prediction\n        |__ predict.py              
# helper function for requesting online prediction using Python\n    |__ trainer                     # trainer package\n        |__ util                    # utility functions\n            |__ utils.py            # utility functions, e.g. loading data from BigQuery and Cloud Storage\n            |__ preprocess_utils.py # utility functions for constructing the preprocessing pipeline\n            |__ transform_utils.py  # utility functions for constructing the transform pipeline\n        |__ metadata.py             # dataset metadata and feature column definitions\n        |__ constants.py            # constants used in the project\n        |__ model.py                # pre-processing and machine learning model pipeline definition\n        |__ task.py                 # training job entry point, handling the parameters passed from the command line\n        |__ transform_config.py     # configuration for transform pipeline construction\n    |__ predictor.py                # define custom prediction behavior\n    |__ setup.py                    # specify necessary dependencies for running the job on AI Platform\n    |__ requirements.txt            # specify necessary dependencies; helps set up the environment for local development\n```\n\n## Using the template\n### Step 0. Prerequisites\nBefore you follow the instructions below to adapt the template to your machine learning job,\nyou need a Google Cloud project if you don't have one. 
You can find detailed instructions\n[here](https:\/\/cloud.google.com\/dataproc\/docs\/guides\/setup-project).\n\n- Make sure the following APIs & Services are enabled.\n    * Cloud Storage\n    * Cloud Machine Learning Engine\n    * BigQuery API\n    * Cloud Build API (for CI\/CD integration)\n    * Cloud Source Repositories API (for CI\/CD integration)\n\n- Configure the project ID and bucket ID as environment variables.\n  ```bash\n  $ export PROJECT_ID=[your-google-project-id]\n  $ export BUCKET_ID=[your-google-cloud-storage-bucket-name]\n  ```\n\n- Set up a service account for calls to GCP APIs.\n  More information on setting up a service account can be found\n  [here](https:\/\/cloud.google.com\/docs\/authentication\/getting-started).\n\n### Step 1. Tailor the scikit-learn trainer to your data\n`metadata.py` is where the dataset's metadata is defined.\nBy default, the file is configured to train on the Census dataset, which can be found at\n[`bigquery-public-data.ml_datasets.census_adult_income`](https:\/\/bigquery.cloud.google.com\/table\/bigquery-public-data:ml_datasets.census_adult_income).\n\n```python\n# Usage: Modify below based on the dataset used.\nCSV_COLUMNS = None  # Schema of the data. 
Necessary for data stored in GCS\n\n# The following provides an example based on the Census dataset.\nNUMERIC_FEATURES = [\n    'age',\n    'hours_per_week',\n]\n\nCATEGORICAL_FEATURES = [\n    'workclass',\n    'education',\n    'marital_status',\n    'occupation',\n    'relationship',\n    'race',\n    'sex',\n    'native_country'\n]\n\nFEATURE_COLUMNS = NUMERIC_FEATURES + CATEGORICAL_FEATURES\n\nLABEL = 'income_bracket'\nPROBLEM_TYPE = 'classification'  # 'regression' or 'classification'\n```\n\nIn most cases, only the following items need to be modified in order to adapt to the target dataset.\n- **CSV_COLUMNS**: the schema of the data; only required for data stored in GCS\n- **NUMERIC_FEATURES**: columns that will be treated as numerical features\n- **CATEGORICAL_FEATURES**: columns that will be treated as categorical features\n- **LABEL**: the column that will be treated as the label\n\n### Step 2. Add new features with domain knowledge\n`transform_config.py` is where the logic of generating new features out of the raw dataset is defined.\nTwo parts need to be provided for each new feature-generating logic:\n\n* A user-defined function that handles the generation of the new feature. There are four cases in terms of the\ncombinations of input and output dimensions, as listed below:\n    * () -> (): scalar to scalar\n    * (n) -> (): multi-inputs to scalar\n    * () -> (n): scalar to multi-outputs\n    * (n) -> (n): multi-inputs to multi-outputs\n\nThe example below takes in `age` and converts it into an age bucket, which is an example of a \"scalar to scalar\" function.\n\n```python\ndef _age_class(age):\n  \"\"\"Example scalar processing function.\n\n  Args:\n    age: (int), age as an integer\n\n  Returns:\n    int, age bucket from 1 to 5\n  \"\"\"\n  if age < 10:\n    return 1\n  elif 10 <= age < 18:\n    return 2\n  elif 18 <= age < 30:\n    return 3\n  elif 30 <= age < 50:\n    return 4\n  else:\n    return 5\n```\n\n* An entry in `TRANSFORM_CONFIG`. 
After the user-defined function is done, to incorporate the transformation into the\nentire pipeline, an additional entry needs to be added to `TRANSFORM_CONFIG` with\n    * input_columns: names of the columns needed as inputs to the transform function\n    * process_function: the transform function\n    * output_columns: names assigned to the output columns, together with a data type indicator (N for numerical, C for categorical)\n\nThe example below is the counterpart of the user-defined function in the previous section.\n\n```python\n# this is an example for generating a new categorical feature using a single\n# column from the raw data\n{\n    'input_columns': ['age'],\n    'process_function': _age_class,\n    'output_columns': [('age_class', constants.CATEGORICAL_INDICATOR)]\n},\n```\n\nFor more examples, please refer to the [Appendix](#appendix).\n\n### Step 3. Modify YAML config files for training on AI Platform\nThe files are located in `config`:\n- `config.yaml`: for running a normal training job on AI Platform.\n- `hptuning_config.yaml`: for running a hyperparameter tuning job on AI Platform.\n\nThe YAML files share some configuration parameters. In particular, `runtimeVersion` and `pythonVersion` should\ncorrespond in both files. Note that both Python 2.7 and Python 3.5 are supported, but Python 3.5 is the recommended\none since Python 2.7 will be [deprecated](https:\/\/pythonclock.org\/) soon.\n\n```yaml\ntrainingInput:\n  scaleTier: STANDARD_1   # Machine type\n  runtimeVersion: \"1.13\"  # Scikit-learn version\n  # Note that both Python 2.7 and Python 3.5 are supported, but Python 3.5 is the\n  # recommended one since 2.7 will be deprecated soon\n  pythonVersion: \"3.5\"\n```\n\nMore information on supported runtime versions can be found\n[here](https:\/\/cloud.google.com\/ml-engine\/docs\/tensorflow\/runtime-version-list).\n\n### Step 4. 
Submit scikit-learn training job\n\nYou can run ML training jobs through the `train.sh` Bash script.\n\n```shell\nbash scripts\/train.sh [INPUT] [RUN_ENV] [RUN_TYPE] [EXTRA_TRAINER_ARGS]\n```\n- INPUT: Dataset to use for training and evaluation, which can be a BigQuery table or a CSV file.\n         A BigQuery table should be specified as `PROJECT_ID.DATASET.TABLE_NAME`.\n- RUN_ENV: (Optional) whether to run `local` (on-prem) or `remote` (GCP). Default value is `local`.\n- RUN_TYPE: (Optional) whether to run `train` or `hptuning`. Default value is `train`.\n- EXTRA_TRAINER_ARGS: (Optional) additional arguments to pass to the trainer.\n\n**Note**: Please make sure REGION is set to a supported Cloud region for your project in `train.sh`:\n```shell\nREGION=us-central1\n```\n\n### Step 5. Deploy the trained model\n\nThe trained model can then be deployed to AI Platform for online serving using the `deploy.sh` script.\n\n```shell\nbash scripts\/deploy.sh [MODEL_NAME] [VERSION_NAME] [MODEL_DIR]\n```\n\nwhere:\n\n- MODEL_NAME: Name of the model to be deployed.\n- VERSION_NAME: Version of the model to be deployed.\n- MODEL_DIR: Path to the directory containing the trained and exported scikit-learn model.\n\n**Note**: Please make sure the following parameters are properly set in `deploy.sh`:\n```shell\nREGION=us-central1\n\n# The following two parameters should be aligned with those used during the\n# training job, i.e., specified in the yaml files under config\/\nRUN_TIME=1.13\n# Note that both Python 2.7 and Python 3.5 are supported,\n# but Python 3.5 is the recommended one since 2.7 will be deprecated soon\nPYTHON_VERSION=3.5\n```\n\n### Step 6. 
Run predictions using the deployed model\n\nAfter the model is successfully deployed, you can send small samples of new data to the API associated with the model,\nand it will return predictions in the response.\nThere are two helper scripts available, `predict.sh` and `predict.py`, which use gcloud and the Python API for\nrequesting predictions, respectively.\n\n```shell\nbash scripts\/predict.sh [INPUT_DATA_FILE] [MODEL_NAME] [VERSION_NAME]\n```\n\nwhere:\n\n- INPUT_DATA_FILE: Path to a sample file containing data in line-delimited JSON format.\n  See `sample_data\/sample_list.txt` or `sample_data\/sample_json.txt` for an example. More information can be found\n  [here](https:\/\/cloud.google.com\/ml-engine\/docs\/scikit\/online-predict#formatting_instances_as_lists).\n- MODEL_NAME: Name of the deployed model to use.\n- VERSION_NAME: Version of the deployed model to use.\n\nNote that two data formats are supported for online prediction:\n* List of values:\n```python\n[39,34,\" Private\",\" 9th\",\" Married-civ-spouse\",\" Other-service\",\" Wife\",\" Black\",\" Female\",\" United-States\"]\n```\n* JSON:\n```python\n{\n    \"age\": 39,\n    \"hours_per_week\": 34,\n    \"workclass\": \" Private\",\n    \"education\": \" 9th\",\n    \"marital_status\": \" Married-civ-spouse\",\n    \"occupation\": \" Other-service\",\n    \"relationship\": \" Wife\",\n    \"race\": \" Black\",\n    \"sex\": \" Female\",\n    \"native_country\": \" United-States\"\n}\n```\n\n### Appendix\nThis section provides a complete example for the Iris dataset, demonstrating all four cases of user-defined\nfunctions.\n```python\n\nimport numpy as np\n\ndef _numeric_square(num):\n  \"\"\"Example scalar processing function.\n\n  Args:\n    num: (float)\n\n  Returns:\n    float\n  \"\"\"\n  return np.power(num, 2)\n\n\ndef _numeric_square_root(num):\n  \"\"\"Example scalar processing function.\n\n  Args:\n    num: (float)\n\n  Returns:\n    float\n  \"\"\"\n  return np.sqrt(num)\n\n\ndef 
_numeric_sq_sr(num):\n  \"\"\"Example function that takes a scalar and returns an array.\n\n  Args:\n    num: (float)\n\n  Returns:\n    numpy.array\n  \"\"\"\n  return np.array([_numeric_square(num), _numeric_square_root(num)])\n\n\ndef _area(args):\n  \"\"\"Example function that takes an array and returns a scalar.\n\n  Args:\n    args: (numpy.array), args[0] -> length, args[1] -> width\n\n  Returns:\n    float\n  \"\"\"\n  return args[0] * args[1]\n\n\ndef _area_class(args):\n  \"\"\"Example function that takes an array and returns a scalar.\n\n  Args:\n    args: (numpy.array), args[0] -> length, args[1] -> width\n\n  Returns:\n    int\n  \"\"\"\n  area = args[0] * args[1]\n  cl = 1 if area > 2 else 0\n  return cl\n\n\ndef _area_and_class(args):\n  \"\"\"Example function that takes an array and returns an array.\n\n  Args:\n    args: (numpy.array), args[0] -> length, args[1] -> width\n\n  Returns:\n    numpy.array\n  \"\"\"\n  area = args[0] * args[1]\n  cl = 1 if area > 2 else 0\n  return np.array([area, cl])\n\n\nTRANSFORM_CONFIG = [\n    # this is an example for pass-through features,\n    # i.e., those that don't need any processing\n    {\n        'input_columns': ['sepal_length', 'sepal_width', 'petal_length',\n                          'petal_width'],\n        # the raw feature types are defined in the metadata,\n        # no need to do it here\n        'process_function': None,\n        'output_columns': ['sepal_length', 'sepal_width', 'petal_length',\n                           'petal_width']\n    },\n    # this is an example for generating a new numerical feature using a single\n    # column from the raw data\n    {\n        'input_columns': ['sepal_length'],\n        'process_function': _numeric_square,\n        'output_columns': [('sepal_length_sq', constants.NUMERICAL_INDICATOR)]\n        # 'N' stands for numerical feature\n    },\n    # this is an example for generating a new numerical feature using a single\n    # column from the raw data\n    {\n        
'input_columns': ['sepal_width'],\n        'process_function': _numeric_square_root,\n        'output_columns': [('sepal_width_sr', constants.NUMERICAL_INDICATOR)]\n    },\n    # this is an example for generating a new numerical feature using multiple\n    # columns from the raw data\n    {\n        'input_columns': ['petal_length', 'petal_width'],\n        'process_function': _area,\n        'output_columns': [('petal_area', constants.NUMERICAL_INDICATOR)]\n    },\n    # this is an example for generating a new categorical feature using multiple\n    # columns from the raw data\n    {\n        'input_columns': ['petal_length', 'petal_width'],\n        'process_function': _area_class,\n        'output_columns': [('petal_area_cl', constants.CATEGORICAL_INDICATOR)]\n        # 'C' stands for categorical feature\n    },\n    # this is an example for generating multiple features using a single column\n    # from the raw data\n    {\n        'input_columns': ['petal_length'],\n        'process_function': _numeric_sq_sr,\n        'output_columns': [('petal_length_sq', constants.NUMERICAL_INDICATOR),\n                           ('petal_length_sr', constants.NUMERICAL_INDICATOR)]\n    },\n    # this is an example for generating multiple features using multiple columns\n    # from the raw data\n    {\n        'input_columns': ['sepal_length', 'sepal_width'],\n        'process_function': _area_and_class,\n        'output_columns': [('sepal_area', constants.NUMERICAL_INDICATOR),\n                           ('sepal_area_cl', constants.CATEGORICAL_INDICATOR)]\n    },\n\n]\n\n```","site":"GCP"}
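The `TRANSFORM_CONFIG` entries in the record above are declarative: each names its `input_columns`, a `process_function`, and `output_columns` covering the four scalar/array input-output cases. As a rough sketch of how such a config could be applied to raw rows — the `apply_transform_config` driver and the `NUMERICAL_INDICATOR`/`CATEGORICAL_INDICATOR` stand-ins below are hypothetical, not the template's actual trainer code:

```python
import numpy as np

# Hypothetical stand-ins for the template's constants module.
NUMERICAL_INDICATOR = 'N'
CATEGORICAL_INDICATOR = 'C'


def apply_transform_config(rows, config):
    """Apply each TRANSFORM_CONFIG-style entry to a list of row dicts.

    rows: list of dicts mapping raw column name -> value.
    config: entries with 'input_columns', 'process_function', 'output_columns'.
    Returns a list of dicts holding the generated feature columns.
    """
    out = []
    for row in rows:
        new_row = {}
        for entry in config:
            fn = entry['process_function']
            cols = entry['input_columns']
            if fn is None:  # pass-through features: copy raw values unchanged
                for c in cols:
                    new_row[c] = row[c]
                continue
            # Scalar argument for a single input column, numpy array otherwise.
            arg = row[cols[0]] if len(cols) == 1 else np.array([row[c] for c in cols])
            # Normalize scalar or array results to a 1-d array, then pair each
            # value with its (name, type-indicator) output column.
            values = np.atleast_1d(fn(arg))
            for (name, _indicator), value in zip(entry['output_columns'], values):
                new_row[name] = float(value)
        out.append(new_row)
    return out


def _area(args):
    return args[0] * args[1]


config = [
    {'input_columns': ['petal_length', 'petal_width'],
     'process_function': _area,
     'output_columns': [('petal_area', NUMERICAL_INDICATOR)]},
]
features = apply_transform_config([{'petal_length': 2.0, 'petal_width': 1.5}], config)
# features[0]['petal_area'] == 3.0
```

The same driver handles all four cases (scalar→scalar, multi→scalar, scalar→multi, multi→multi) because `np.atleast_1d` flattens the distinction between scalar and array return values.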
{"questions":"GCP Transpose a BigQuery table using Dataflow img src img simplesqlbasedpivot png alt Simple SQL based pivot height 150 width 650 sql Transposing Pivoting Rotating the orientation of a table is a very common task that is performed as part of a standard report generation workflow While some relational databases provide a built in pivot function of some sort it can also be done via standard SQL As an example the following table can be pivoted using","answers":"# Transpose a BigQuery table using Dataflow\n\nTransposing\/Pivoting\/Rotating the orientation of a table is a very common task that is performed as part of a standard report generation workflow. While some relational databases provide a built-in *pivot* function of some sort, it can also be done via standard SQL.\n\nAs an example, the following table can be pivoted using [BigQuery Standard SQL](https:\/\/cloud.google.com\/bigquery\/docs\/reference\/standard-sql\/):\n\n<img src=\"img\/simple_sql_based_pivot.png\" alt=\"Simple SQL based pivot\" height=150 width=650\/>\n\n\n```sql\nSELECT\n  id,\n  MAX(CASE\n      WHEN class = 'HVAC' THEN SALES END) AS HVAC_SALES,  MAX(CASE\n      WHEN class = 'GENERATORS' THEN SALES END) AS GENERATORS_SALES\nFROM\n  `project-id.dataset_id.table_id`\nGROUP BY\n  id;\n```\n\nHowever, this can get significantly more complicated as:\n\n* Number of pivot fields increase (single pivot field _**class**_ in the above example).\n* Number of distinct values in pivot fields increase (two distinct values _**HVAC**_ and _**GENERATORS**_ in the above example).\n* Number of pivot values increase (single pivot value _**sales**_ in the above example).\n\nThe most common approach to pivoting a complex table would be a two step approach:\n\n1. Run a custom script to analyze the table and generate a SQL statement such as the one above.\n2. 
Run the dynamically generated SQL to pivot the table and write the output to another table.\n\nThis could also be done using a convenient Dataflow pipeline as described below.\n\n## [Pivot Dataflow Pipeline](src\/main\/java\/com\/google\/cloud\/pso\/pipeline\/Pivot.java)\n\n[Pivot](src\/main\/java\/com\/google\/cloud\/pso\/pipeline\/Pivot.java) -\nA Dataflow pipeline that can be used to pivot a BigQuery table across any number of pivot fields and values.\nThis pipeline allows the user to specify a comma separated list of fields across which the table should be rotated in addition to a comma separated list of fields that are rotated.\ni.e. The user can specify:\n\n* Key fields along which the table is rotated (_**id**_ in the above example).\n* Pivot fields that should be rotated (_**class**_ in the above example).\n* Pivot values that should be rotated (_**sales**_ in the above example).\n\nThe pipeline will perform various steps to complete the pivot process:\n\n1. Validate that the fields are valid and have the correct datatypes.\n2. Read the data from an input BigQuery table.\n3. Analyze the pivot fields and dynamically generate the correct schema.\n4. Pivot every record based on the dynamically generated schema.\n5. Write the pivoted records into a target BigQuery table.\n\n<img src=\"img\/pipeline_graph.png\" alt=\"Pipeline Graph\" height=650 width=500\/>\n\n## Getting Started\n\n### Requirements\n\n* Java 8\n* Maven 3\n\n### Building the Project\n\nBuild the entire project using the maven compile command.\n```sh\nmvn clean compile\n```\n\n### Running unit tests\n\nRun all unit tests.\n```sh\nmvn clean test\n```\n\n### Running the Pipeline\n\n<img src=\"img\/example_raw_table.png\" alt=\"Raw input table\" height=200 width=550\/>\n\nThe above input table shows a slightly more complex example. 
In order to pivot this table, we have the following inputs:\n\n* keyFields = id,locid\n* pivotFields = class,on_sale,state\n* pivotValues = sale_price,count\n\nThe _**desc**_ field is ignored and will not be in the output table.\n\nThe [Pivot](src\/main\/java\/com\/google\/cloud\/pso\/pipeline\/Pivot.java) pipeline will create a new pivot table based on the inputs.\n\n<img src=\"img\/example_pivoted_table.png\" alt=\"Pivoted output table\" height=175 width=850\/>\n\nExecute the pipeline using the maven exec:java command.\n```sh\nMY_PROJECT=my-project-id\nMY_STAGING_BUCKET=my-staging-bucket-name\nMY_DATASET_ID=my-dataset-id\nMY_SOURCE_TABLE_ID=my-source-table-id\nMY_TARGET_TABLE_ID=my-target-table-id\n\nmvn compile exec:java -Dexec.mainClass=com.google.cloud.pso.pipeline.Pivot -Dexec.cleanupDaemonThreads=false -Dexec.args=\" \\\n--project=$MY_PROJECT \\\n--runner=DataflowRunner \\\n--stagingLocation=gs:\/\/${MY_STAGING_BUCKET}\/staging \\\n--tempLocation=gs:\/\/${MY_STAGING_BUCKET}\/tmp \\\n--inputTableSpec=${MY_PROJECT}:${MY_DATASET_ID}.${MY_SOURCE_TABLE_ID} \\\n--outputTableSpec=${MY_PROJECT}:${MY_DATASET_ID}.${MY_TARGET_TABLE_ID} \\\n--keyFields=id,locid \\\n--pivotFields=class,on_sale,state \\\n--valueFields=sale_price,count\"\n```","site":"GCP"}
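The record above describes the manual two-step alternative to the Dataflow pipeline: analyze the table, then emit a pivot statement like its SQL example. That first step can be sketched as a small generator; the `build_pivot_sql` helper below is hypothetical and assumes the distinct pivot values have already been fetched (e.g. via a `SELECT DISTINCT` query over the pivot field):

```python
def build_pivot_sql(table, key_fields, pivot_field, pivot_classes, value_field):
    """Generate a BigQuery Standard SQL pivot statement like the record's example.

    pivot_classes would normally come from a first pass over the table
    (e.g. SELECT DISTINCT class FROM `table`); here it is passed in directly.
    """
    selects = []
    for cls in pivot_classes:
        # One MAX(CASE ...) column per (pivot value, value field) pair,
        # mirroring the HVAC_SALES / GENERATORS_SALES columns in the example.
        alias = f"{cls}_{value_field}".upper()
        selects.append(
            f"MAX(CASE WHEN {pivot_field} = '{cls}' "
            f"THEN {value_field} END) AS {alias}"
        )
    key_list = ", ".join(key_fields)
    return (
        f"SELECT {key_list}, " + ", ".join(selects)
        + f" FROM `{table}` GROUP BY {key_list};"
    )


sql = build_pivot_sql(
    table="project-id.dataset_id.table_id",
    key_fields=["id"],
    pivot_field="class",
    pivot_classes=["HVAC", "GENERATORS"],
    value_field="sales",
)
```

This illustrates why the Dataflow pipeline scales better: the generated statement grows with every extra pivot field, distinct value, and pivot value, while the pipeline derives the schema dynamically.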
{"questions":"GCP Assumptions Through the use of machine learning you can build an automated categorization pipeline This solution describes an approach to automating the review process for audio files using machine learning APIs Introduction Many consumer facing applications allow creators to upload audio files as a part of the creative experience If you re running an application with a similar use case you may want to extract the text from the audio file and then classify based on the content For example you may want to categorize content or add appropriate tags for search indexing The process of having humans listening to content is problematic if you have a large volume of content Having users supplying their own tags may also be problematic because they may not include all useful tags or they may tag inaccurately","answers":"Introduction\n============\n\nMany consumer-facing applications allow creators to upload audio files as a part of the creative experience. If you\u2019re running an application with a similar use case, you may want to extract the text from the audio file and then classify based on the content. For example, you may want to categorize content or add appropriate tags for search indexing. The process of having humans listening to content is problematic if you have a large volume of content. Having users supplying their own tags may also be problematic because they may not include all useful tags or they may tag inaccurately.\n\nThrough the use of machine learning, you can build an automated categorization pipeline. 
This solution describes an approach to automating the review process for audio files using machine learning APIs.\n\n\nAssumptions\n============\n\nThis example only supports the encoding formats currently supported by the\n[Speech API](https:\/\/cloud.google.com\/speech-to-text\/docs\/encoding).\n\nIf you try to use .mp3 or another file type, then you may need to perform preprocessing to convert\nyour file into an accepted encoding type.\n\n\n<h2>Background on Serverless Data Processing Pipelines<\/h2>\n\nThe solution involves creating five GCS buckets using default configuration settings. Because of\nthis, no [object lifecycle management](https:\/\/cloud.google.com\/storage\/docs\/lifecycle) policies are\nconfigured. If you would like to specify different retention policies, you can [enable](https:\/\/cloud.google.com\/storage\/docs\/managing-lifecycles#enable)\nthem using `gsutil` while following the deployment process.\n\nDuring processing, audio files are moved between buckets as they progress\nthrough various stages of the pipeline. Specifically, the audio file should first be moved to the\n`staging` bucket. After the Speech API completes processing the file, the audio file is moved from\nthe `staging` bucket to either the `processed` or `error` bucket, depending on whether the Speech\nAPI returned a success or error response.\n\n<h2>Installation\/Set-up<\/h2>\n\n1. [Install and initialize the Cloud SDK](https:\/\/cloud.google.com\/sdk\/docs\/how-to)\n\n<h3>Create a GCP Project<\/h3>\n\n1. Open a Linux terminal window or [Cloud Shell](https:\/\/console.cloud.google.com\/home\/dashboard?cloudshell=true), and enter the following to configure your desired project id.\n\n````\nexport PROJECT=[project_id]\n````\n\n2. Create a new GCP project\n\n````\ngcloud projects create $PROJECT\n````\n\n3. 
Set your terminal to use that project\n\n````\ngcloud config set project $PROJECT\n````\n\nDeployment\n==========\n\nStep 1: [Deploy the App Engine frontend](#step-1)\n\nStep 2a: [Use gcloud for remaining resources](#step-2a)\n\nStep 2b: [Use terraform for remaining resources](#step-2b)\n\nStep 3: [View Results](#view-results)\n\n### Step 1\n<h3>Clone the repository and change your directory into the root folder<\/h3>\n\n1. In your terminal, type the following to clone the professional-services repository:\n\n````\ngit clone https:\/\/github.com\/GoogleCloudPlatform\/professional-services\n````\n\n2. Change directories into this project\n\n````\ncd examples\/ml-audio-content-profiling\/\n````\n\n<h3>Create Project Resources<\/h3>\n\n1. Change directories into the frontend App Engine application folder.\n\n````\ncd app\n````\n\n2a. Compile the Angular frontend application. Note that this requires [installing Angular](https:\/\/angular.io\/cli) on\nyour device and compiling the output.\n\n````\nnpm install -g @angular\/cli\n````\n\n````\ncd angular\/\n````\n\n````\nnpm install\n````\n\n````\nng build --prod\n````\n\n````\ncd ..\n````\n\n2b. There is also an [Open Sourced moderator UI](https:\/\/github.com\/conversationai\/conversationai-moderator)\nwhich contains detailed features for sorting through results from the Perspective API. Note that\nthis will not display any results from the NLP API or a transcript itself, but you may add in these\nadditional features if you wish. This is an alternative to the frontend in the `app` folder in this\nrepository.\n\n\n3. Deploy the application.\n\n````\ngcloud app deploy\n````\n\nYou will be prompted to choose a region to serve the application from. You may pick any region, but you\ncannot change this value later.\n\nYou can verify that the app was deployed correctly by navigating to\nhttps:\/\/`$PROJECT`.appspot.com. 
You should see the following UI:\n\n![text](img\/app_home.png?raw=true)\n\n\n### Step 2a\n<h3>Enable APIs for your Project<\/h3>\n1. Enable APIs\n\n````\ngcloud services enable language.googleapis.com\ngcloud services enable speech.googleapis.com\ngcloud services enable cloudfunctions.googleapis.com\ngcloud services enable commentanalyzer.googleapis.com\ngcloud services enable cloudscheduler.googleapis.com\n````\n\n\n2. Change directories to be at the root directory again.\n\n````\ncd ..\n````\n\n<h3>Create PubSub resources<\/h3>\n\n1. Create a PubSub topic named stt_queue\n\n````\nexport TOPIC_NAME=stt_queue\n\ngcloud pubsub topics create $TOPIC_NAME\n````\n\n2. Create a subscription to the stt_queue topic\n\n````\nexport SUBSCRIPTION_NAME=pull_stt_ids\n\ngcloud pubsub subscriptions create $SUBSCRIPTION_NAME --topic=$TOPIC_NAME\n````\n\n3. Generate a static UUID that you will need in each of your bucket names to ensure that they are\nunique.\n\n3a. First install uuidgen. If it is already installed or if you are using Cloud Shell, skip this step.\n\n````\nsudo apt-get install uuid-runtime\n````\n\n3b. Then generate the random UUID\n\n````\nexport STATIC_UUID=$(echo $(uuidgen | tr '[:upper:]' '[:lower:]') | cut -c1-20)\n````\n\n\n4. Create five GCS buckets to hold the output files\n\n````\nexport staging_audio_bucket=staging-audio-files-$STATIC_UUID\n\ngsutil mb gs:\/\/$staging_audio_bucket\n````\n\n````\nexport processed_audio_bucket=processed-audio-files-$STATIC_UUID\ngsutil mb gs:\/\/$processed_audio_bucket\n````\n\n````\nexport error_audio_bucket=error-audio-files-$STATIC_UUID\ngsutil mb gs:\/\/$error_audio_bucket\n````\n\n\n````\nexport transcription_bucket=transcription-files-$STATIC_UUID\ngsutil mb gs:\/\/$transcription_bucket\n````\n\n````\nexport output_bucket=output-files-$STATIC_UUID\ngsutil mb gs:\/\/$output_bucket\n````\n\n\n5. 
Deploy the first Cloud Function, which sends requests to the STT API\n\nChange directories into the send_stt_api_function directory\n\n````\ncd send_stt_api_function\/\n````\n\nDeploy the function\n\n````\ngcloud functions deploy send_stt_api \\\n  --entry-point main \\\n  --runtime python37 \\\n  --trigger-resource $staging_audio_bucket \\\n  --trigger-event google.storage.object.finalize \\\n  --timeout 540s \\\n  --set-env-vars topic_name=$TOPIC_NAME,error_bucket=$error_audio_bucket\n````\n\n6. Deploy the second Cloud Function, which reads the STT API output\n\nChange directories into the read_stt_api_function directory\n\n````\ncd ..\/read_stt_api_function\/\n````\n\n````\ngcloud functions deploy read_stt_api \\\n  --entry-point main \\\n  --runtime python37 \\\n  --trigger-resource cron_topic \\\n  --trigger-event google.pubsub.topic.publish \\\n  --timeout 540s \\\n  --set-env-vars topic_name=$TOPIC_NAME,subscription_name=$SUBSCRIPTION_NAME,transcription_bucket=$transcription_bucket,staging_audio_bucket=$staging_audio_bucket,processed_audio_bucket=$processed_audio_bucket,error_audio_bucket=$error_audio_bucket\n````\n\n7. Deploy the Cloud Scheduler job\n\n````\ngcloud scheduler jobs create pubsub check_stt_job \\\n  --schedule \"*\/10 * * * *\" \\\n  --topic cron_topic \\\n  --message-body \"Check Speech-to-text results\"\n````\n\nNote that you can edit the `schedule` flag to be any interval in UNIX cron format. By default, our solution\nruns every 10 minutes.\n\n\n8. Deploy the Perspective API function\n\nChange directories into the perspective_api_function directory\n\n````\ncd ..\/perspective_api_function\/\n````\n\n````\ngcloud functions deploy perspective_api \\\n  --entry-point main \\\n  --runtime python37 \\\n  --trigger-resource $transcription_bucket \\\n  --trigger-event google.storage.object.finalize \\\n  --timeout 540s \\\n  --set-env-vars output_bucket=$output_bucket\n````\n\n9. 
Deploy the NLP function\n\nChange directories into the nlp_api_function directory\n\n````\ncd ..\/nlp_api_function\/\n````\n\n````\ngcloud functions deploy nlp_api \\\n  --entry-point main \\\n  --runtime python37 \\\n  --trigger-resource $transcription_bucket \\\n  --trigger-event google.storage.object.finalize \\\n  --timeout 540s \\\n  --set-env-vars output_bucket=$output_bucket\n````\n\n\n### Step 2b\n\n1. [Install Terraform](https:\/\/learn.hashicorp.com\/terraform\/getting-started\/install)\n\n2. Copy `terraform.tfvars.sample` to create the `terraform.tfvars` file.\nYou must input the `project_id` inside of the quotes. If you wish to edit any of the\ndefault values for the other variables specified in `variables.tf`, you may add them to your\n`terraform.tfvars`.\n\n3. In your terminal, cd into the `terraform\/` directory.\n\n````\ncd terraform\/\n````\n\n4. Enter the following commands, ensuring that there are no errors:\n\n  ````\n  terraform init\n  ````\n\n  ````\n  terraform plan\n  ````\n\n  ````\n  terraform apply\n  ````\n\nWhen `terraform apply` prompts for confirmation, type `yes`.\n\n\nAll of the resources should now be deployed.\n\n\n### View Results\n<h3>Test it out<\/h3>\n\n1. You can start by uploading an audio file to GCS. You can do this using `gsutil` or in the\nUI under the <b>staging bucket<\/b>. This will trigger `send_stt_api_function`, which submits the\nrequest to the Speech API and publishes the job id to PubSub.\n\n\n2. By default, `read_stt_api_function` is scheduled to run every ten minutes, as configured by Cloud\nScheduler. If you want to test it earlier, you can navigate to Cloud Scheduler in the console and\nclick 'Run Now'. This will pull from the PubSub topic to grab any job ids. It then calls\nthe Speech API to see if the job is complete. If it is not complete, it pushes the id back to\nPubSub. If it is complete, it extracts the transcript from the Speech API response. Finally, it saves this\ntranscript in GCS in the transcription files bucket. 
It also moves the audio file from the\nstaging bucket to the processed audio bucket. If there were any errors, it moves the audio file\ninstead to the error audio bucket.\n\n![](img\/cloud_scheduler.png)\n\n3. The upload in the previous step to the transcription files bucket then triggers the other two\nfunctions: `perspective_api_function` and `nlp_api_function`. Each of these downloads the\ntranscription file from GCS and then calls its respective API with it to receive insights about the\nprobability of toxic content in the file as well as the entities mentioned. Each then publishes the\nresponse into its respective GCS bucket.\n\n\n4. You can view the result of the entire pipeline by using the deployed App Engine app. Navigate\nto: https:\/\/`[PROJECT_ID]`.appspot.com. This will present a table of all files that have been\nuploaded and have completed processing through the pipeline. You can click through\neach file to view the resulting transcript, toxicity, and entity\/sentiment analysis.\n\n![](img\/app_home_files.png)","site":"GCP"}
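The staging → processed/error routing performed after each Speech API response can be sketched in shell. This is an illustration of the decision only, not code from the repository; `route_audio` is a made-up name, and the bucket names reuse the `STATIC_UUID` convention from the deployment steps.

```shell
# Sketch only: pick the destination bucket for an audio file based on the
# Speech API outcome, mirroring the bucket names created during deployment.
route_audio() {
  local object="$1" status="$2"
  case "$status" in
    success) printf 'gs://processed-audio-files-%s/%s' "$STATIC_UUID" "$object" ;;
    *)       printf 'gs://error-audio-files-%s/%s'     "$STATIC_UUID" "$object" ;;
  esac
}

# e.g. gsutil mv "gs://staging-audio-files-${STATIC_UUID}/clip.flac" \
#                "$(route_audio clip.flac success)"
```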
{"questions":"GCP Prepare the infrastructure e g datasets tables etc needed by the pipeline by referring to the Creating infrastructure components Note the BigQuery dataset name that you crate for late steps Usage dataflow production ready Python","answers":"# dataflow-production-ready (Python)\n\n## Usage\n\n### Creating infrastructure components\n\nPrepare the infrastructure (e.g. datasets, tables, etc.) needed by the pipeline by referring to the\n[Terraform module](\/terraform\/README.MD).\n\nNote the BigQuery dataset name that you create for later steps.\n\n### Creating a Python Virtual Environment for development\n\nIn the module root directory, run the following:\n\n```\npython3 -m venv \/tmp\/venv\/dataflow-production-ready-env\nsource \/tmp\/venv\/dataflow-production-ready-env\/bin\/activate\npip install -r python\/requirements.txt\n```\n\n### Setting the GCP Project\n\nIn the repo root directory, set the environment variables\n\n```\nexport GCP_PROJECT=<PROJECT_ID>\nexport REGION=<GCP_REGION>\nexport BUCKET_NAME=<DEMO_BUCKET_NAME>\n```\n\nThen set the GCP project\n\n```\ngcloud config set project $GCP_PROJECT\n```\n\nThen, create a GCS bucket for this demo\n```\ngsutil mb -l $REGION -p $GCP_PROJECT gs:\/\/$BUCKET_NAME\n```\n\n\n### Running a full build\n\nThe build is defined by [cloudbuild.yaml](cloudbuild.yaml) and runs on Cloud Build. 
It performs the following steps:\n* Run unit tests\n* Build a container image as defined in [Dockerfile](Dockerfile)\n* Create a Dataflow flex template based on the container image\n* Run an automated system integration test using the Flex template (including test resources provisioning)\n\nSet the following variables:\n```\nexport TARGET_GCR_IMAGE=\"dataflow_flex_ml_preproc\"\nexport TARGET_GCR_IMAGE_TAG=\"python\"\nexport TEMPLATE_GCS_LOCATION=\"gs:\/\/$BUCKET_NAME\/template\/spec.json\"\n```\n\nRun the following command in the root folder\n\n```\ngcloud builds submit --config=python\/cloudbuild.yaml --substitutions=_IMAGE_NAME=${TARGET_GCR_IMAGE},_IMAGE_TAG=${TARGET_GCR_IMAGE_TAG},_TEMPLATE_GCS_LOCATION=${TEMPLATE_GCS_LOCATION},_REGION=${REGION}\n```\n\n### Manual Commands\n\n#### Prerequisites\n\n* Create an input file similar to [integration_test_input.csv](\/data\/integration_test_input.csv) (or copy it to GCS and use it as input)\n\n* Set extra variables (use the same dataset name as created by the Terraform module)\n```\nexport INPUT_CSV=\"gs:\/\/$BUCKET_NAME\/input\/path_to_CSV\"\nexport BQ_RESULTS=\"project:dataset.ml_preproc_results\"\nexport BQ_ERRORS=\"project:dataset.ml_preproc_errors\"\nexport TEMP_LOCATION=\"gs:\/\/$BUCKET_NAME\/tmp\"\nexport SETUP_FILE=\"\/dataflow\/template\/ml_preproc\/setup.py\"\n```\n\n#### Running the pipeline locally\n\nExport these extra variables and run the script\n```\nchmod +x run_direct_runner.sh\n.\/run_direct_runner.sh\n```\n\n#### Running the pipeline on the Dataflow service\n\nExport these extra variables and run the script\n```\nchmod +x run_dataflow_runner.sh\n.\/run_dataflow_runner.sh\n```\n\n#### Running Flex Templates\n\nEven if the job runs successfully on the Dataflow service when submitted locally, the template has to be tested as well, since\nit might contain errors in the Dockerfile that prevent the job from running. 
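Whether driven by a wrapper script or by hand, a Flex Template launch boils down to a single `gcloud dataflow flex-template run` call against the spec file. The sketch below shows the likely shape of such a call; the `--parameters` names are assumptions based on the variables above, and `job_name` is a made-up helper (Dataflow job names must be lowercase letters, digits, and hyphens).

```shell
# Hypothetical sketch of a Flex Template launch; the repo's wrapper script
# may differ. job_name builds a unique, Dataflow-legal job name.
job_name() {
  printf 'ml-preproc-%s' "$(date +%Y%m%d-%H%M%S)"
}

# gcloud dataflow flex-template run "$(job_name)" \
#   --template-file-gcs-location="$TEMPLATE_GCS_LOCATION" \
#   --region="$REGION" \
#   --parameters=input-csv="$INPUT_CSV",bq-results="$BQ_RESULTS",bq-errors="$BQ_ERRORS"
```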
\n\nTo run the flex template after deploying it, run:\n\n```\nchmod +x run_dataflow_template.sh\n.\/run_dataflow_template.sh\n```\n\nNote that the parameter `setup_file` must be included in [metadata.json](ml_preproc\/spec\/metadata.json) and passed to the pipeline. It enables working with multiple Python modules\/files and is set to the path of\n[setup.py](ml_preproc\/setup.py) inside the Docker container.\n\n#### Running Unit Tests\n\nTo run all unit tests\n\n```\npython -m unittest discover\n```\n\nTo run a particular test file\n\n```\npython -m unittest ml_preproc.pipeline.ml_preproc_test\n```\n\n#### Debug the flex-template container image\nIn Cloud Shell, run the deployed container image using the bash entrypoint\n```\ndocker run -it --entrypoint \/bin\/bash gcr.io\/$PROJECT_ID\/$TARGET_GCR_IMAGE\n```\n\n\n\n## Dataflow Pipeline\n* [main.py](ml_preproc\/main.py) - The entry point of the pipeline\n* [setup.py](ml_preproc\/setup.py) - To package the pipeline and distribute it to the workers. Without this file, main.py won't be able to import modules at runtime. [[source]](https:\/\/beam.apache.org\/documentation\/sdks\/python-pipeline-dependencies\/#multiple-file-dependencies)\n\n## Flex Templates Overview\nThe pipeline demonstrates how to use Flex Templates in Dataflow to create a template out of practically any Dataflow pipeline. This pipeline\ndoes not use any [ValueProvider](https:\/\/github.com\/apache\/beam\/blob\/master\/sdks\/python\/apache_beam\/options\/value_provider.py) to accept user inputs and is built like any other non-templated\nDataflow pipeline. This pipeline also allows the user to change the job\ngraph depending on the value provided for an option at runtime.\n\nWe make the pipeline ready for reuse by \"packaging\" the pipeline artifacts\nin a Docker container. 
In order to simplify the process of packaging the pipeline into a container, we\nutilize [Google Cloud Build](https:\/\/cloud.google.com\/cloud-build\/).\n\nWe preinstall all the dependencies needed to *compile and execute* the pipeline\ninto a container using a custom [Dockerfile](ml_preproc\/Dockerfile).\n\nIn this example, we are using the following base image for Python 3:\n\n`gcr.io\/dataflow-templates-base\/python3-template-launcher-base`\n\nWe will utilize Google Cloud Build's ability to build a container using a Dockerfile, as documented in the [quickstart](https:\/\/cloud.google.com\/cloud-build\/docs\/quickstart-docker).\n\nIn addition, we will use a CD pipeline on Cloud Build to update the flex template automatically.\n\n\n## Continuous deployment\nThe CD pipeline is defined in [cloudbuild.yaml](ml_preproc\/cloudbuild.yaml) to be executed by Cloud Build. It performs the following steps:\n1. Run unit tests\n2. Build and register a container image via Cloud Build as defined in the [Dockerfile](ml_preproc\/Dockerfile). The container packages the Dataflow pipeline and its dependencies and acts as the Dataflow Flex Template\n3. Build the Dataflow template by creating a spec.json file on GCS including the container image ID and the pipeline metadata based on [metadata.json](ml_preproc\/spec\/metadata.json). The template can be run later on by pointing to this spec.json file\n4. Run a system integration test using the deployed Flex Template and wait for its results\n\n### Substitution variables\nCloud Build provides default variables such as `$PROJECT_ID` that can be used in the build YAML file. User-defined variables can also be used in the form `$_USER_VARIABLE`.\n\nIn this project the following variables are used:\n- `$_TARGET_GCR_IMAGE`: The GCR image name to be submitted to Cloud Build (not a URI) (e.g. wordcount-flex-template)\n- `$_TEMPLATE_GCS_LOCATION`: GCS location to store the template spec file (e.g. gs:\/\/bucket\/dir\/). 
The spec file path is required later on to submit run commands to Dataflow\n- `$_REGION`: GCP region to deploy and run the Dataflow flex template\n- `$_IMAGE_TAG`: Image tag\n\nThese variables must be set during manual build execution or via a build trigger.\n\n\n### Triggering builds automatically\nTo trigger a build on certain actions (e.g. commits to master):\n1. Go to Cloud Build > Triggers > Create Trigger. If you're using GitHub, choose the \"Connect Repository\" option.\n2. Configure the trigger\n3. Point the trigger to the [cloudbuild.yaml](ml_preproc\/cloudbuild.yaml) file in the repository\n4. Add the substitution variables as explained in the [Substitution variables](#substitution-variables) section.\n","site":"GCP"}
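For a manual build execution, the user-defined variables described above are passed through `--substitutions`. A small sketch of composing that flag; the values are examples only, and the commented `gcloud` call mirrors the project's build command.

```shell
# Sketch: composing the --substitutions flag. User-defined Cloud Build
# variables carry a leading underscore; built-ins like $PROJECT_ID do not.
# The values below are examples only.
subs="_TARGET_GCR_IMAGE=wordcount-flex-template"
subs="${subs},_TEMPLATE_GCS_LOCATION=gs://my-bucket/dir/spec.json"
subs="${subs},_REGION=us-central1"
subs="${subs},_IMAGE_TAG=latest"

# gcloud builds submit --config=ml_preproc/cloudbuild.yaml --substitutions="$subs"
```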
{"questions":"GCP In this solution we build an approch to ingestion flat files in GCS to BigQuery using serverless technology This solution might be not be performanct if you have frequent small files that lands to GCP We use data in this example Figure below shows an overall approach Once the object is uploaded in the GCS bucket a object notification is recevied by the Pub Sub Topic Pub Sub Topic triggers a cloud function which then invokes the serverless spark Any error during the cloud function invocation and serverless spark execution is send to dead letter topic Step 1 Create a bucket the bucket holds the data to be ingested in GCP Once the object is upload in a bucket the notification is created in Pub Sub topic Ingesting GCS files to BigQuery using Cloud Functions and Serverless Spark","answers":"# Ingesting GCS files to BigQuery using Cloud Functions and Serverless Spark\n\nIn this solution, we build an approch to ingestion flat files (in GCS) to BigQuery using serverless technology. This solution might be not be performanct if you have frequent small files that lands to GCP. We use [Daily Shelter Occupancy](https:\/\/open.toronto.ca\/dataset\/daily-shelter-occupancy\/) data in this example. Figure below shows an overall approach. Once the object is uploaded in the GCS bucket, a object notification is recevied by the Pub\/Sub Topic. Pub\/Sub Topic triggers a cloud function which then invokes the serverless spark. Any error during the cloud function invocation and serverless spark execution is send to dead letter topic.\n\n![](docs\/gcs2bq_serverless_spark.jpg)\n\n\n- **Step 1:**  Create a bucket, the bucket holds the data to be ingested in GCP. 
Once an object is uploaded to the bucket, a notification is published to the Pub\/Sub topic.\n\n    ```\n    PROJECT_ID=<<project_id>>\n    GCS_BUCKET_NAME=<<Bucket name>>\n    gsutil mb gs:\/\/${GCS_BUCKET_NAME}\n    gsutil notification create \\\n        -t projects\/${PROJECT_ID}\/topics\/create_notification_${GCS_BUCKET_NAME} \\\n        -e OBJECT_FINALIZE \\\n        -f json gs:\/\/${GCS_BUCKET_NAME}\n    ```\n- **Step 2:** Build the jar and copy it to a GCS bucket (create a GCS bucket to store the jar if you don't have one). There are a number of Dataproc templates available to [use](https:\/\/github.com\/GoogleCloudPlatform\/dataproc-templates).\n  \n    ```\n    GCS_ARTIFACT_REPO=<<artifact repo name>>\n    gsutil mb gs:\/\/${GCS_ARTIFACT_REPO}\n    cd gcs2bq-spark\n    mvn clean install\n    gsutil cp target\/GCS2BQWithSpark-1.0-SNAPSHOT.jar gs:\/\/${GCS_ARTIFACT_REPO}\/\n    ```\n\n- **Step 3:** [This page](https:\/\/cloud.google.com\/dataproc-serverless\/docs\/concepts\/network) describes the network configuration required to run serverless Spark.\n  \n  - **Open subnet connectivity:** The subnet must allow subnet communication on all ports. The following gcloud command attaches a firewall rule to a subnet that allows ingress communication using all protocols on all ports if the source and destination are tagged with \"serverless-spark\".\n\n      ```\n      gcloud compute firewall-rules create allow-internal-ingress \\\n      --network=\"default\" \\\n      --source-tags=\"serverless-spark\" \\\n      --target-tags=\"serverless-spark\" \\\n      --direction=\"ingress\" \\\n      --action=\"allow\" \\\n      --rules=\"all\"\n      ```\n\n  - **Private Google Access:** The subnet must have [Private Google Access](https:\/\/cloud.google.com\/vpc\/docs\/configure-private-google-access) enabled.\n      - **External network access:** Drivers and executors have internal IP addresses. 
You can set up [Cloud NAT](https:\/\/cloud.google.com\/nat\/docs\/overview) to allow outbound traffic using internal IPs on your VPC network.\n\n- **Step 4:** Create the GCP resources required by serverless Spark.\n    - **Create BQ Dataset** Create a dataset to load the GCS files into.\n        ```\n        DATASET_NAME=<<dataset_name>>\n        bq --location=US mk -d \\\n            ${DATASET_NAME}\n        ```\n\n    -  **Create BQ table** Create a table using the schema in `schema\/schema.json`\n        ```\n        TABLE_NAME=<<table_name>>\n        bq mk --table ${PROJECT_ID}:${DATASET_NAME}.${TABLE_NAME} \\\n            .\/schema\/schema.json\n        ```\n\n    - **Create service account** Create the service account used to run the serverless Spark workload, and grant it the permissions required to read from the GCS bucket, write to the BigQuery table, and publish error messages to the dead-letter topic. The service account runs the serverless Spark job, so it also needs the Dataproc worker role.\n  \n        ```\n        SERVICE_ACCOUNT_ID=\"gcs-to-bq-sa\"\n        gcloud iam service-accounts create ${SERVICE_ACCOUNT_ID} \\\n            --description=\"GCS to BQ service account for Serverless Spark\" \\\n            --display-name=\"GCS2BQ-SA\"\n        \n        roles=(\"roles\/dataproc.worker\" \"roles\/bigquery.dataEditor\" \"roles\/bigquery.jobUser\" \"roles\/storage.objectViewer\" \"roles\/pubsub.publisher\")\n        for role in \"${roles[@]}\"; do\n            gcloud projects add-iam-policy-binding ${PROJECT_ID} \\\n                --member=\"serviceAccount:${SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com\" \\\n                --role=\"$role\"\n        done\n        ```\n    - **Create BQ temp Bucket** The GCS-to-BigQuery job requires a temporary bucket. 
Let's create one:\n        ```\n        GCS_TEMP_BUCKET=<<temp_bucket>>\n        gsutil mb gs:\/\/${GCS_TEMP_BUCKET}\n        ```\n    - **Create Deadletter Topic and Subscription** Let's create a dead-letter topic and subscription:\n\n        ```\n        ERROR_TOPIC=err_gcs2bq_${GCS_BUCKET_NAME}\n        gcloud pubsub topics create ${ERROR_TOPIC}\n        gcloud pubsub subscriptions create err_sub_${GCS_BUCKET_NAME} \\\n        --topic=${ERROR_TOPIC}\n        ```\n\n    Once all resources are created, please update the variable values in `trigger-serverless-spark-fxn\/main.py` (lines 25 to 29):\n\n        ```\n        bq_temp_bucket = <<GCS_TEMP_BUCKET>>\n        gcs_artifact_rep = <<GCS_ARTIFACT_REPO>>\n        dataset = <<DATASET_NAME>>\n        bq_table = <<TABLE_NAME>>\n        error_topic = <<ERROR_TOPIC>>\n        ```\n\n- **Step 5:** The Cloud Function is triggered once an object is copied to the bucket, and it in turn triggers serverless Spark.\n  \n  Deploy the function:\n\n    ```\n    cd trigger-serverless-spark-fxn\n    gcloud functions deploy trigger-serverless-spark-fxn --entry-point \\\n    invoke_sreverless_spark --runtime python37 \\\n    --trigger-resource create_notification_${GCS_BUCKET_NAME} \\\n    --trigger-event google.pubsub.topic.publish\n    ```\n\n- **Step 6:** Invoke the end-to-end pipeline. Download the [2020 Daily Center Data](https:\/\/ckan0.cf.opendata.inter.prod-toronto.ca\/download_resource\/800cc97f-34b3-4d4d-9bc1-6e2ce2d6f44a?format=csv) and upload it to the GCS bucket (<<GCS_BUCKET_NAME>>) from Step 1.\n  \n  \n**Debugging Pipelines**\n\n  Error messages for failed data pipelines are published to the Pub\/Sub topic (ERROR_TOPIC) created in Step 4 (Create Deadletter Topic and Subscription). Errors from both the Cloud Function and Spark are forwarded to Pub\/Sub, and the topic might have multiple entries for the same data-pipeline instance. Messages in the topic can be filtered using the \"oid\" attribute. 
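As a minimal sketch (assuming `jq` is installed and the `err_sub_<bucket>` subscription from Step 4 exists), messages for a single run can be isolated by their oid attribute; the sample payload below only mimics the JSON shape returned by `gcloud pubsub subscriptions pull --format=json`, and the oid values are hypothetical:

```bash
# Illustrative messages mimicking `gcloud pubsub subscriptions pull --format=json`;
# the oid values (object name + generation id) are made up for this sketch.
msgs='[{"message":{"attributes":{"oid":"shelter/2020.csv#1661381400000000"},"data":"ZXJyMQ=="}},
       {"message":{"attributes":{"oid":"shelter/2021.csv#1661381500000000"},"data":"ZXJyMg=="}}]'

# Keep only the messages belonging to one pipeline run, identified by its oid prefix.
echo "$msgs" | jq -c '.[] | select(.message.attributes.oid | startswith("shelter/2020.csv#"))'

# Against the real dead-letter subscription, the same filter would be:
#   gcloud pubsub subscriptions pull err_sub_${GCS_BUCKET_NAME} --auto-ack --limit=100 --format=json \
#     | jq '.[] | select(.message.attributes.oid | startswith("<object-name>#"))'
```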
The attribute (oid) is unique for each pipeline run and holds the full object name with the [generation id](https:\/\/cloud.google.com\/storage\/docs\/metadata#generation-number).\n  ","site":"GCP"}
{"questions":"GCP This pre commit hook uses open source tools to provide developers with a way to validate Kubernetes manifests before changes are committed and pushed to a repository Left Shift Validation at Pre Commit Hook Using left shift validate means you learn if your deployments are going to fail on the order of seconds as it happens right before committing rather than minutes or hours after it has already undergone multiple parts or even most of your CI CD pipeline While these scripts were first designed to work in environemnts using Kustomize they ve been adapted to work for whatever file organization structure you provide uses git diff to find any and all staged yaml files and then will validate each against the Constraints and ConstraintTemplates you provide Policy checks are typically instantiated after code is pushed to the repository as it goes through each environment dev QA production etc and right before administration to the cluster The goal with this script is to validate at the pre commit hook stage applying your security and policy check even earlier shifting left than usual","answers":"# Left-Shift Validation at Pre-Commit Hook\n\nThis pre-commit hook uses open-source tools to provide developers with a way to validate Kubernetes manifests before changes are committed and pushed to a repository.\n\nPolicy checks are typically instantiated after code is pushed to the repository, as it goes through each environment (dev, QA, production, etc.), and right before administration to the cluster. 
The goal with this script is to validate at the pre-commit hook stage, applying your security and policy checks even earlier (shifting left) than usual.\n\nUsing left-shift validation means you learn *if your deployments are going to fail* on the order of seconds as it happens right before committing, rather than minutes or hours after it has already undergone multiple parts, or even most, of your CI\/CD pipeline.\n\nWhile these scripts were first designed to work in environments using Kustomize, they've been adapted to work for whatever file organization structure you provide. `validate.sh` uses git diff to find any and all staged yaml files, and will then validate each against the Constraints and ConstraintTemplates you provide.\n\n---\n\n## Setting Up left-shift validation\n\nUsing left-shift validation is simple!\n\n### Initial installation\n\n1. From this repository, you only need the following items:\n\n- `validate.sh`\n- `setup.sh`\n- `constraints-and-templates\/` directory\n\n    *If you don't want to clone the whole project, you can use `wget` to download the specific files. For example:*\n\n    `wget https:\/\/raw.githubusercontent.com\/GoogleCloudPlatform\/professional-services\/main\/examples\/left-shift-validation-pre-commit-hook\/validate.sh`\n\n2. Place these items into the same directory as your `.git` folder. `setup.sh` will move `validate.sh` into your hooks from there.\n\n3. Make both files executable, which can be accomplished by:\n\n    `$ chmod +x setup.sh && chmod +x validate.sh`\n\n4. After this step, run `setup.sh`:\n\n    `$ .\/setup.sh`\n\n5. Answer the prompts, and then you're done!\n\nNow, whenever you go to commit code and updated Kubernetes yaml files are found, the pre-commit hook will test your changes against the policy constraints you've specified. 
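In spirit, the gate that runs at commit time can be sketched as follows (a simplified illustration, not the real `validate.sh`, which also handles Kustomize builds and remote policy repositories; `constraints-and-templates/` is the sample policy directory from this project):

```bash
# Simplified sketch of a pre-commit policy gate in the style of validate.sh.
# Collect staged yaml manifests; outside an active commit this list may be empty.
staged=$(git diff --cached --name-only --diff-filter=ACM 2>/dev/null | grep -E '\.ya?ml$' || true)

if [ -n "$staged" ]; then
    # Evaluate every staged manifest against the local policy bundle with gator.
    gator test -f constraints-and-templates/ $(printf -- '-f %s ' $staged) \
        || { echo "Policy violations found; commit aborted." >&2; exit 1; }
fi
```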
Nifty!\n\n**After initial installation:**\n\n`setup.sh` is primarily for dependency installations and contains a guided walkthrough of obtaining the locations of your Constraints and ConstraintTemplates. If your policy constraints change, you can run this script again, which will automatically update dependencies as well.\n\nThe locations you specify for your Constraints, ConstraintTemplates, and (if using Kustomize) base Kustomization folder will be written into a settings file in `.oss_dependencies` for `validate.sh` (the pre-commit hook) to use. You can directly change these variables as necessary without having to run `setup.sh`. The pre-commit hook will run every time you attempt a commit!\n\n---\n\n## Technical Overview\n\nLeft-shift validation is intended to run as a pre-commit hook, so it has been designed with two distinct Bash scripts, `setup.sh` and `validate.sh`. **Here's what they do:**\n\n### setup.sh\n\n1. Install or update dependencies.\n2. Turn `validate.sh` into a pre-commit hook.\n3. Determine the locations of Constraints\/ConstraintTemplates to use, then save the path\/remote repository URL to `.env`.\n\n### validate.sh (Pre-commit Hook)\n\n1. When `git commit` runs, check if any yaml files have been updated.\n2. (If you're using Kustomize) Run `kustomize` to build a unified yaml file for evaluation.\n3. Gather the Constraints and ConstraintTemplates and save them to a local folder. Use `kpt` if from a remote repo.\n4. Run `gator test`, which validates the unified yaml file against the policies (Constraints and ConstraintTemplates).\n5. Fail the commit if violations are found. If there are no errors, continue the commit.\n\n---\n\n## Purpose\n\nOrganizations that deploy applications on Kubernetes clusters often use tools like [Open Policy Agent](https:\/\/www.openpolicyagent.org\/) (OPA) or [Gatekeeper](https:\/\/open-policy-agent.github.io\/gatekeeper\/website\/docs) to enforce security and operational policies. 
These are often essential for a company to meet legal or business requirements, but they have the added benefit of empowering developers with consistent feedback on whether their work meets the security standards of their organization.\n\nTypically, validation checks occur at the end of the CI\/CD pipeline right before admission to deploy to a cluster, throughout the pipeline, or even in the repository, with automated code reviews. These checks are great, and using many in tandem is beneficial for redundancy, but our goal is to reduce the potentially long wait times between when a developer submits their Kubernetes files and when those files either pass or fail policy reviews.\nLeft-shift validation intends to streamline the development process by providing actionable feedback as quickly as possible at the pre-commit hook stage (which one can consider even before the pipeline), which is as early as you can go.\n\nHere's what a typical CI\/CD pipeline might look like with OSS policy validations throughout. In this case, we use Gatekeeper to define Constraints and ConstraintTemplates, and they are stored in a Git repo that can be accessed by the pipeline (Jenkins, Google Cloud Build, etc.).\n\n![A Sample CI\/CD Pipeline with Policy Validation Built-in](img\/sample-cicd-pipeline.png)\n\nAnd what left-shift validation does is extend these redundant validation steps into the developer's local development environment, much earlier in the pipeline, like so:\n\n![Left-shift validation Architectural Diagram](img\/leftshift-validate-architecture.png)\n\n---\n\n## Important Things to Note\n\n### Left-shift validation is an **Enhancement**, not a Replacement\n\nWe do not intend for left-shift validation to replace other automated policy control systems. Instead, this is a project that can be used to help support developers who work on Kubernetes manifests by enhancing the delivery pipeline. 
Using left-shift validation means you learn *if your deployments are going to fail* on the order of seconds, rather than minutes or hours.\n\nThe only thing that happens if you don't use left-shift validation to shift left on automated policy validation is it takes longer for you to learn if you have a problem. That's all!\n\n### Handling Dependencies\n\nLeft-shift validation uses the following dependencies:\n\n| Name | Role |\n| ----- | ----- |\n| [kpt](https:\/\/kpt.dev\/) | Enables us to fetch directories from remote git repositories and use their contents in later steps. Kpt also provides easier integration with Kustomize. |\n| [kustomize](https:\/\/github.com\/kubernetes-sigs\/kustomize) | Collates and hydrates raw yaml files into formats that work best with validation steps. |\n| [gator](https:\/\/open-policy-agent.github.io\/gatekeeper\/website\/docs\/gator\/) | Allows for evaluating Gatekeeper ConstraintTemplates and Constraints in a local environment. |\n\nIn order for left-shift validation to work, these tools must be installed on your system. `setup.sh` will install or update each tool, and they will be accessible to the validation script via a dependency folder. If you'd like to handle installation yourself, you may! As long as the commands are in your `$PATH`, `validate.sh` will recognize and use them.\n\n---\n\n### Deep-Dive\n\nLet's go a bit further into how everything works together. The idea is that you can run `setup.sh` whenever you need to configure your pre-commit script. This can include changes like:\n\n- Updating dependencies (which will happen automatically anytime you run `setup.sh`)\n- Resetting the default behavior of your pre-commit hook, if you make changes that break the code.\n- Describing new Constraints and\/or ConstraintTemplates to use. We have a collection of samples from the [OPA Gatekeeper Library](https:\/\/github.com\/open-policy-agent\/gatekeeper-library) that you can use, but you can also supply your own repository. 
If you have separate repositories for your Constraints and Templates, that is also supported.\n\n`validate.sh` depends on `setup.sh` to take care of the more time-consuming steps in order to run as fast as possible. It's for this reason that `setup.sh` handles all aspects of setting up the environment. Then, `validate.sh` only needs to identify changed resources, gather Constraints and ConstraintTemplates with kpt, and finally run `gator test` to produce an outcome.\n\nWhen configured as a pre-commit script (taken care of by `setup.sh`), `validate.sh` will take the locations of your Constraints and ConstraintTemplates, which you provided in `setup.sh`, and obtain those manifests. Whether they're stored in one or two repositories, stored locally, or if you want to continue with the sample policies in the OPA Gatekeeper Library, the script supports all of those combinations. Here's how that decision flow works:\n\n![validate-decision-tree](img\/validate-decision-tree.png)\n\n---\n\n## Cleanup\n\nDone with left-shift validation? Uninstalling is easy! Since we installed dependencies from pre-compiled binaries, all we need to do is delete a few directories and files. You have two options here:\n\n### Automatic Uninstall Using cleanup.sh\n\nYou can simply use `cleanup.sh`, which will automatically delete folders like `.oss_dependencies\/` and any manifests, Constraints, or ConstraintTemplates that are still lingering around.\n\n### IMPORTANT NOTE\n\nWhen you install left-shift validation, it becomes `pre-commit` in the `.git\/hooks\/` directory. This script will delete the file, rather than renaming it by appending `.sample` to the filename.\n\n### Manual Uninstall\n\nTo remove left-shift validation, you must delete all of the files that have been created. 
This includes:\n\n- `setup.sh`\n- `.oss_dependencies\/*` (which can be found in the root directory of your project)\n\nThen, you must go into your .git\/hooks\/ folder, and either delete `pre-commit`, or add \".sample\" to the end of the filename, which tells Git not to run it in the future.\n\n---\n\n## Contact Info\n\nWe're always looking for help, or suggestions for improvement. Please feel free to reach out to us if you've got ideas or feedback!\n\n- [Janine Bariuan](mailto:janinebariuan@google.com)\n- [Thomas Desrosiers](mailto:tdesrosi@google.com)","site":"GCP","answers_cleaned":"  Left Shift Validation at Pre Commit Hook  This pre commit hook uses open source tools to provide developers with a way to validate Kubernetes manifests before changes are committed and pushed to a repository   Policy checks are typically instantiated after code is pushed to the repository  as it goes through each environment  dev  QA  production  etc    and right before administration to the cluster  The goal with this script is to validate at the pre commit hook stage  applying your security and policy check even earlier  shifting left  than usual   Using left shift validate means you learn  if your deployments are going to fail  on the order of seconds as it happens right before committing  rather than minutes or hours after it has already undergone multiple parts  or even most  of your CI CD pipeline   While these scripts were first designed to work in environemnts using Kustomize  they ve been adapted to work for whatever file organization structure you provide   validate sh  uses git diff to find any and all staged yaml files  and then will validate each against the Constraints and ConstraintTemplates you provide           Setting Up left shift validation  Using left shift validation is simple       Initial installation  1  From this repository  you only need the following items      validate sh     setup sh     constraints and templates   directory       If you don t want to 
clone the whole project  you can use  wget  to download the specific files  For example         wget https   raw githubusercontent com GoogleCloudPlatform professional services main examples left shift validation pre commit hook validate sh   2  Place these items into the same directory as your   git  folder   setup sh  will move  validate sh  into your hooks from there   3  Make both files executable  which can be accomplished by          chmod  x setup sh    chmod  x validate sh   4  After this step  run  setup sh            setup sh   5  Answer the prompts  and then you re done   Now  whenever you go to commit code and updated Kubernetes yaml files are found  the pre commit hook will test your changes against the policy constraints you ve specified  Nifty     After initial installation      setup sh  is primarily for dependency installations and contains a guided walkthrough of obtaining the locations of your Constraint and ConstraintTemplates  If your policy constraints change  you can run this script again  which will automatically update dependencies as well   The locations you specify for your Constraints  ConstraintTemplates  and  if using Kustomize  base Kustomization folder will be written into a settings file in   oss dependencies  for  validate sh   pre commit hook  to use  You can directly change these variables as necessary without having to run  setup sh   The pre commit hook will run every time you attempt a commit           Technical Overview  Left shift validation is intended to run as a pre commit hook  so it has been designed with two distinct Bash scripts   setup sh  and  validate sh     Here s what they do         setup sh  1  Install or update dependencies  2  Turn  validate sh  into a pre commit hook  3  Determine the locations of Constraints ConstraintTemplates to use  then save the path remote repository URL to   env        validate sh  Pre commit Hook   1  When  git commit  runs  check if any yaml files have been updated  2   If you re 
using Kustomize  Run  kustomize  to build a unified yaml file for evaluation  3  Gather the Constraints and ConstraintTemplates and save them to a local folder  Use  kpt  if from a remote repo  4  Run  gator test   which validates the unified yaml file against the policies  Constraints and ConstraintTemplates   5  Fail the commit if violations are found  If there are no errors  continue the commit           Purpose  Organizations that deploy applications on Kubernetes clusters often use tools like  Open Policy Agent  https   www openpolicyagent org    OPA  or  Gatekeeper  https   open policy agent github io gatekeeper website docs  to enforce security and operational policies  These are often essential for a company to meet legal or business requirements  but they have the added benefit of empowering their developers by providing consistent feedback on their work by showcasing if it meets the security standards of their organization   Typically  validation checks occur at the end of the CI CD pipeline right before admission to deploy to a cluster  throughout the pipeline  or even in the repository  with automated code reviews  These checks are great  and using many in tandem is benificial for redundancy  but our goal is to reduce the potentially long wait times between when a developer submits their Kubernetes files and when those files either pass or fail policy reviews  Left shift validation intends to streamline the development process by providing actionable feedback as quickly as possible at the pre commit hook stage  which one can consider even before the pipeline   which as early as you could go   Here s what a typical CI CD pipeline might look like with OSS policy validations throughout  In this case  we use Gatekeeper to define Constraints and ConstraintTemplates  and they are stored in a Git Repo that can be accessed by the pipeline  Jenkins  Google Cloud Build  etc       A Sample Ci CD Pipeline with Policy Validation Built in  img sample cicd pipeline 
png   And what left shift validation does is extend these redundant validation steps into the developer s local development environment  much earlier in the pipeline  like so     Left shift validation Architectural Diagram  img leftshift validate architecture png           Important Things to Note      Left shift validation is an   Enhancement    not a Replacement  We do not intend for left shift validation to replace other automated policy control systems  Instead  this is a project that can be used to help support developers who work on Kubernetes manifests by enhancing the delivery pipeline  Using left shift validation means you learn  if your deployments are going to fail  on the order of seconds  rather than minutes or hours   The only thing that happens if you don t use left shift validation to shift left on automated policy validation is it takes longer for you to learn if you have a problem  That s all       Handling Dependencies  Left shift validation uses the follwing dependencies     Name   Role                       kpt  https   kpt dev     Enables us to fetch directories from remote git repositories and use their contents in later steps  Kpt also provides easier integration with Kustomize       kustomize  https   github com kubernetes sigs kustomize    Collates and hydrates raw yaml files into formats that work best with validation steps       gator  https   open policy agent github io gatekeeper website docs gator     Allows for evaluating Gatekeeper ConstraintTemplates and Constraints in a local environment     In order for left shift validation to work  these tools must be installed on your system   setup sh  will install or update each tool  and they will be accessible to the validation script via a dependency folder  If you d like to handle installation yourself  you may  As long as the commands are in your   PATH    validate sh  will recognize and use them            Deep Dive  Let s go a bit further into how everything works together  The idea 
is that you can run  setup sh  whenever you need to configure your pre commit script  This can include changes like     Updating Dependencies  which will happen automatically anytime you run  setup sh     Resetting the default behavior of your pre commit hook  if you make changes that break the code    Describing new Constraints and or ConstraintTemplates to use  We have a collection of samples from the  OPA Gatekeeper Library  https   github com open policy agent gatekeeper library  that you can use  but you can also supply your own repository  If you have separate repositories for your Constraints and Templates  that is also supported    validate sh  depends on  setup sh  to take care of the more time consuming steps in order to run as fast as possible  It s for this reason that  setup sh  handles all aspects of setting up the environment  Then   validate sh  only needs to identify changed resources  gather Constraints and ConstraintTemplates with kpt  and finally run  gator test  to produce an outcome   When configured as a pre commit script  taken care of by  setup sh     validate sh  will take the locations of your Constraints and ConstraintTemplates  which you provided in  setup sh   and obtain those manifests  Whether they re stored in one or two repositories  stored locally  or if you want to continue with the sample policies in the OPA Gatekeeper Library  the script supports all of those combinations  Here s how that decision flow works     validate decision tree  img validate decision tree png           Cleanup  Done with left shift validation  Uninstalling is easy  Since we installed dependencies from pre compiled binaries  all we need to do is delete a few directories and files  You have two options here       Automatic Uninstall Using cleanup sh  You can simply use  cleanup sh   which will automatically delete folders like   oss dependencies   and any manifests  Constraints  or ConstraintTemplates that are still lingering around       IMPORTANT NOTE  
When you install left shift validation  it becomes  pre commit  in the   git hooks   directory  This script will delete the file  rather than renaming it by appending   sample  to the filename       Manual Uninstall  To remove left shift validation  you must delete all of the files that have been created  This includes      setup sh      oss dependencies     which can be found in the root directory of your project   Then  you must go into your  git hooks  folder  and either delete  pre commit   or add   sample  to the end of the filename  which tells Git not to run it in the future           Contact Info  We re always looking for help  or suggestions for improvement  Please feel free to reach out to us if you ve got ideas or feedback      Janine Bariuan  mailto janinebariuan google com     Thomas Desrosiers  mailto tdesrosi google com "}
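The manual uninstall above amounts to a couple of file operations. A minimal sketch, assuming the hyphenated spelling `oss-dependencies` for the dependency folder and a hypothetical `disable_left_shift` helper (the hook path `.git/hooks` is as described in the doc):

```shell
# Sketch of the manual uninstall. disable_left_shift is a hypothetical
# helper; folder and file names come from the doc (hyphenation assumed).
disable_left_shift() {
  repo_root="$1"
  # Delete what setup.sh created (rm -rf silently skips anything already gone).
  rm -rf "$repo_root/oss-dependencies" "$repo_root/setup.sh" "$repo_root/validate.sh"
  # Appending .sample to the filename tells Git to stop running the hook.
  if [ -f "$repo_root/.git/hooks/pre-commit" ]; then
    mv "$repo_root/.git/hooks/pre-commit" "$repo_root/.git/hooks/pre-commit.sample"
  fi
}
```

Note the difference from `cleanup.sh`, which deletes the `pre-commit` file outright rather than renaming it.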
{"questions":"GCP Config files included in this repo 3 1 The config files in this repo supports the GCP blogpost on Visualizing Cloud DNS public zone query data using log based metrics and Cloud Monitoring Note For details on configuring Cloud Monitoring to monitor GCP cloud DNS public zones please refer to the blog post 2 Overview Monitoring GCP Cloud DNS public zone","answers":"# Monitoring GCP Cloud DNS public zone\n## Overview\nThe config files in this repo supports the GCP blogpost on \"Visualizing Cloud DNS public zone query data using log-based metrics and Cloud Monitoring\".  \n\n### Config files included in this repo\n1. [config.yaml](config.yaml)\n2. [dashboard.json](dashboard.json)\n3. [latency-config.yaml](latency-config.yaml)\n\n###### Note: For details on configuring Cloud Monitoring to monitor GCP cloud DNS public zones, please refer to the blog post.\n\n## Creating the log-based metrics\nWe require the creation of two distinct log-based metrics: a counter metric and a distribution metric.\n\n* Counter metrics count the number of log entries that match a specified filter within a specified period. For example, we can use a counter metric to count the number of log entries for a specific DNS query name, query type, or response code.\n* Distribution metrics also count values, but they collect the counts into ranges of values (histogram buckets). For example, we can use a distribution metric to extract the distribution of server latency.\n\nTo create log-based metrics, use the `gcloud logging metrics create` command. The configuration for the logging metrics can be passed to `gcloud` using the [config.yaml](.\/config.yaml) file. \n\n**Note:** All [user-defined log-based](https:\/\/cloud.google.com\/logging\/docs\/logs-based-metrics#user-metrics) metrics are a class of Cloud Monitoring custom metrics and are subject to charges. 
For pricing information, please refer to [Cloud Logging pricing: Log-based metrics](https:\/\/cloud.google.com\/stackdriver\/pricing#log-based-metrics).\n\n**Note:** The retention period for log-based metrics is six weeks. Please refer to the [data retention](https:\/\/cloud.google.com\/monitoring\/quotas#data_retention_policy) documentation for more details. \n\n\n## **Create the counter metric**\n\n1. Create a file named `config.yaml` with the following content:\n\n    ```\n    filter: |-\n      resource.type=\"dns_query\"\n      resource.labels.target_type=\"public-zone\"\n    labelExtractors:\n      ProjectID: EXTRACT(resource.labels.project_id)\n      QueryName: EXTRACT(jsonPayload.queryName)\n      QueryType: EXTRACT(jsonPayload.queryType)\n      ResponseCode: EXTRACT(jsonPayload.responseCode)\n      TargetName: EXTRACT(resource.labels.target_name)\n    metricDescriptor:\n      labels:\n        - key: QueryName\n        - key: TargetName\n        - key: ResponseCode\n        - key: ProjectID\n        - key: QueryType\n      metricKind: DELTA\n      unit: \"1\"\n      valueType: INT64\n    ```\n\n\n2. To create counter metrics, use the `gcloud logging metrics create` command.\n\n    **Command**\n    ```\n    gcloud logging metrics create cloud-dns-log-based-metric --config-from-file=config.yaml\n    ```\n\n## **Create the distribution metric**\n\n1. 
Create a file named `latency-config.yaml` with the following content:\n\n    ```\n    filter: |\n      resource.type=\"dns_query\"\n      resource.labels.target_type=\"public-zone\"\n    labelExtractors:\n      ProjectID: EXTRACT(resource.labels.project_id)\n      QueryName: EXTRACT(jsonPayload.queryName)\n      QueryType: EXTRACT(jsonPayload.queryType)\n      ResponseCode: EXTRACT(jsonPayload.responseCode)\n      SourceIP: EXTRACT(jsonPayload.sourceIP)\n      TargetName: EXTRACT(resource.labels.target_name)\n    metricDescriptor:\n      labels:\n        - key: ResponseCode\n        - key: QueryType\n        - key: TargetName\n        - key: ProjectID\n        - key: SourceIP\n        - key: QueryName\n      metricKind: DELTA\n      unit: \"1\"\n      valueType: DISTRIBUTION\n    valueExtractor: EXTRACT(jsonPayload.serverLatency)\n    bucketOptions:\n      exponentialBuckets:\n        growthFactor: 2\n        numFiniteBuckets: 64\n        scale: 0.01\n    ```\n\n To create the distribution metric, use the `gcloud logging metrics create` command.\n\n**Command**\n\n    ```\n    gcloud logging metrics create cloud-dns-latency-log-based-metric --config-from-file=latency-config.yaml\n    ```\n\n\n\n## **Customization options**\n\nThe provided customization options are optional and are included for illustrative purposes only. We are not using these options in this blog post. If you decide to use these options in the future, you can edit the log-based metrics to make the desired changes.\n\n\n### **Include Source IP (Counter Metrics Only)**\n\nTo extract the Source IP from the log based metrics, add the following to labelExtractors and metricDescriptor in the `config.yaml` provided above. However, please note that extracting this label comes with risk and would be best suited to temporary testing or zones where the expected volume of DNS queries is low.\n\nIn general, it is best practice to extract labels with a finite set of values. 
Otherwise, values that come from an infinite set, or are always unique, can lead to [high cardinality of metrics](https:\/\/cloud.google.com\/monitoring\/api\/v3\/metric-model#cardinality), which can not only increase costs but also result in ingestion errors. \n\n**Example**\n\n```\nlabelExtractors:\nProjectID: EXTRACT(resource.labels.project_id)\nQueryName: EXTRACT(jsonPayload.queryName)\nQueryType: EXTRACT(jsonPayload.queryType)\nResponseCode: EXTRACT(jsonPayload.responseCode)\nTargetName: EXTRACT(resource.labels.target_name)\nSourceIP: EXTRACT(jsonPayload.sourceIP)\n\nmetricDescriptor:\nlabels:\n    - key: QueryName\n    - key: TargetName\n    - key: ResponseCode\n    - key: ProjectID\n    - key: QueryType\n    - key: SourceIP\n```\n\n### **Finetune the filters**\n\nThe provided `config.yaml` processes all DNS query logs with a target_type of public-zone. Cloud Logging will process logs for all public zones that have logging enabled. To reduce the number of log entries processed, users can update the filter to provide a specific project or public zones.\n\n**Example**\n\n```\nresource.type=\"dns_query\"\nresource.labels.target_type=\"public-zone\"\nresource.labels.project_id=\"my-project-id\"\nresource.labels.target_name=\"my-zone-name\"\n```\n\n## **Create the custom dashboard**\n\nUse the `gcloud monitoring dashboards create command` to create the dashboard. This command will create a custom dashboard named gcloud-custom-dashboard.\n\n**Command**\n\n```\ngcloud monitoring dashboards create --config-from-file=dashboard.json\n```\n\n### **Things to consider**\n\n1. Log-based metrics are not suitable for real-time monitoring or highly sensitive alerts because they have higher ingestion delays than other types of metrics. This is because the ingestion time of the log, the metric processing time, and the reporting time must all be taken into account.\n2. There may be a delay in your metric counts.  
Due to the potential 10 minute delay for log ingestion, the corresponding log-base metric could also have delays in displaying the correct log count.\n3. It is recommended that users change the alignment period to at least 5 minutes when configuring alerts for log-based metrics to account for delays. This will ensure that alerts are triggered only when there is a significant change in the metric, rather than being triggered by minor fluctuations.\n\n## References\n- [GCP Cloud DNS](https:\/\/cloud.google.com\/dns\/docs\/overview)\n- [GCP Cloud DNS log schema](https:\/\/cloud.google.com\/dns\/docs\/monitoring)\n- [GCP Cloud Monitoring](https:\/\/cloud.google.com\/monitoring)\n- [GCP Log based metrics](https:\/\/cloud.google.com\/logging\/docs\/logs-based-metrics)\n- [High cardinality of metrics](https:\/\/cloud.google.com\/monitoring\/api\/v3\/metric-model#cardinality)\n- [User-defined log-based](https:\/\/cloud.google.com\/logging\/docs\/logs-based-metrics#user-metrics) \n- [Cloud Logging pricing: Log-based metrics](https:\/\/cloud.google.com\/stackdriver\/pricing#log-based-metrics)\n- [Data retention](https:\/\/cloud.google.com\/monitoring\/quotas#data_retention_policy)","site":"GCP","answers_cleaned":"  Monitoring GCP Cloud DNS public zone    Overview The config files in this repo supports the GCP blogpost on  Visualizing Cloud DNS public zone query data using log based metrics and Cloud Monitoring          Config files included in this repo 1   config yaml  config yaml  2   dashboard json  dashboard json  3   latency config yaml  latency config yaml          Note  For details on configuring Cloud Monitoring to monitor GCP cloud DNS public zones  please refer to the blog post      Creating the log based metrics We require the creation of two distinct log based metrics  a counter metric and a distribution metric     Counter metrics count the number of log entries that match a specified filter within a specified period  For example  we can use a counter metric to 
count the number of log entries for a specific DNS query name  query type  or response code    Distribution metrics also count values  but they collect the counts into ranges of values  histogram buckets   For example  we can use a distribution metric to extract the distribution of server latency   To create log based metrics  use the  gcloud logging metrics create  command  The configuration for the logging metrics can be passed to  gcloud  using the  config yaml    config yaml  file      Note    All  user defined log based  https   cloud google com logging docs logs based metrics user metrics  metrics are a class of Cloud Monitoring custom metrics and are subject to charges  For pricing information  please refer to  Cloud Logging pricing  Log based metrics  https   cloud google com stackdriver pricing log based metrics      Note    The retention period for log based metrics is six weeks  Please refer to the  data retention  https   cloud google com monitoring quotas data retention policy  documentation for more details          Create the counter metric    1  Create a file named  config yaml  with the following content               filter         resource type  dns query      resource labels target type  public zone      labelExtractors      ProjectID  EXTRACT resource labels project id      QueryName  EXTRACT jsonPayload queryName      QueryType  EXTRACT jsonPayload queryType      ResponseCode  EXTRACT jsonPayload responseCode      TargetName  EXTRACT resource labels target name      metricDescriptor      labels            key  QueryName           key  TargetName           key  ResponseCode           key  ProjectID           key  QueryType     metricKind  DELTA     unit   1      valueType  INT64           2  To create counter metrics  use the  gcloud logging metrics create  command         Command               gcloud logging metrics create cloud dns log based metric   config from file config yaml               Create the distribution metric    1  Create a file 
named  latency config yaml  with the following content               filter        resource type  dns query      resource labels target type  public zone      labelExtractors      ProjectID  EXTRACT resource labels project id      QueryName  EXTRACT jsonPayload queryName      QueryType  EXTRACT jsonPayload queryType      ResponseCode  EXTRACT jsonPayload responseCode      SourceIP  EXTRACT jsonPayload sourceIP      TargetName  EXTRACT resource labels target name      metricDescriptor      labels            key  ResponseCode           key  QueryType           key  TargetName           key  ProjectID           key  SourceIP           key  QueryName     metricKind  DELTA     unit   1      valueType  DISTRIBUTION     valueExtractor  EXTRACT jsonPayload serverLatency      bucketOptions      exponentialBuckets          growthFactor  2         numFiniteBuckets  64         scale  0 01           To create counter metrics  use the  gcloud logging metrics create  command     Command                gcloud logging metrics create cloud dns latency log based metric   config from file latency config yaml                 Customization options    The provided customization options are optional and are included for illustrative purposes only  We are not using these options in this blog post  If you decide to use these options in the future  you can edit the log based metrics to make the desired changes          Include Source IP  Counter Metrics Only     To extract the Source IP from the log based metrics  add the following to labelExtractors and metricDescriptor in the  config yaml  provided above  However  please note that extracting this label comes with risk and would be best suitable for temporary testing or zones where the expected volume of DNS queries is low   In general  it is best practice to extract labels with a finite set of values  Otherwise  values that come from an infinite set  or are always unique  can lead to  high cardinality of metrics  https   cloud google com 
monitoring api v3 metric model cardinality   which can not only increase costs but also result in ingestion errors      Example        labelExtractors  ProjectID  EXTRACT resource labels project id  QueryName  EXTRACT jsonPayload queryName  QueryType  EXTRACT jsonPayload queryType  ResponseCode  EXTRACT jsonPayload responseCode  TargetName  EXTRACT resource labels target name  SourceIP  EXTRACT jsonPayload sourceIP   metricDescriptor  labels        key  QueryName       key  TargetName       key  ResponseCode       key  ProjectID       key  QueryType       key  SourceIP            Finetune the filters    The provided  config yaml  processes all DNS query logs with a target type of public zone  Cloud Logging will process logs for all public zones that have logging enabled  To reduce the number of log entries processed  users can update the filter to provide a specific project or public zones     Example        resource type  dns query  resource labels target type  public zone  resource labels project id  my project id  resource labels target name  my zone name            Create the custom dashboard    Use the  gcloud monitoring dashboards create command  to create the dashboard  This command will create a custom dashboard named gcloud custom dashboard     Command        gcloud monitoring dashboards create   config from file dashboard json            Things to consider    1  Log based metrics are not suitable for real time monitoring or highly sensitive alerts because they have higher ingestion delays than other types of metrics  This is because the ingestion time of the log  the metric processing time  and the reporting time must all be taken into account  2  There may be a delay in your metric counts   Due to the potential 10 minute delay for log ingestion  the corresponding log base metric could also have delays in displaying the correct log count  3  It is recommended that users change the alignment period to at least 5 minutes when configuring alerts for log 
based metrics to account for delays  This will ensure that alerts are triggered only when there is a significant change in the metric  rather than being triggered by minor fluctuations      References    GCP Cloud DNS  https   cloud google com dns docs overview     GCP Cloud DNS log schema  https   cloud google com dns docs monitoring     GCP Cloud Monitoring  https   cloud google com monitoring     GCP Log based metrics  https   cloud google com logging docs logs based metrics     High cardinality of metrics  https   cloud google com monitoring api v3 metric model cardinality     User defined log based  https   cloud google com logging docs logs based metrics user metrics      Cloud Logging pricing  Log based metrics  https   cloud google com stackdriver pricing log based metrics     Data retention  https   cloud google com monitoring quotas data retention policy "}
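The distribution metric's `exponentialBuckets` (scale 0.01, growthFactor 2, 64 finite buckets) place boundary `i` at `scale * growthFactor^i`, so serverLatency values are binned between 0.01 and roughly 1.8e17. A quick sketch of where the boundaries fall (`bucket_bound` is an illustrative helper, not part of the config):

```shell
# Boundary i of the exponential buckets: scale * growthFactor^i,
# with scale=0.01 and growthFactor=2 from latency-config.yaml.
bucket_bound() {
  awk -v i="$1" 'BEGIN { printf "%g\n", 0.01 * 2 ^ i }'
}
bucket_bound 0   # 0.01 -- lower edge of the first finite bucket
bucket_bound 4   # 0.16
```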
{"questions":"GCP Getting user profile from IAP enabled GAE application This setup can be done from This example demonstrates how to retrieve user profile e g name photo from an IAP enabled GAE application GCP project Initial Setup following The following setup assumes that you are setting up your new application GCP project based on the You need permission to run this e g for creating GAE app","answers":"# Getting user profile from IAP-enabled GAE application\nThis example demonstrates how to retrieve user profile (e.g. name, photo) from an IAP-enabled GAE application.\n\n## Initial Setup\nThis setup can be done from `Cloud Shell`.  \nYou need `Project Owner` permission to run this, e.g. for creating GAE app.\n\nThe following setup assumes that you are setting up your new application GCP project based on the \nfollowing:\n* GCP project: `project-id-1234`\n* GAE region: `asia-northeast1`\n\n1.  Set up environment variables.\n    ```bash\n    export PROJECT=project-id-1234\n    export REGION=asia-northeast1\n    ```\n\n1.  Setup your gcloud\n    ```bash\n    gcloud config configurations create iap-user-profile\n    gcloud config set project $PROJECT\n    ```\n\n1.  Create GAE application.\n    ```bash\n    gcloud app create --region=$REGION\n    ```\n\n1.  Enable required APIs\n    ```bash\n    gcloud services enable \\\n        iap.googleapis.com \\\n        secretmanager.googleapis.com \\\n        cloudresourcemanager.googleapis.com \\\n        people.googleapis.com\n    ```\n\n1.  Deploy this sample application. This will become the `default` service.  \n    Note: IAP can only be enabled when there is already a service deployed in GAE.\n    ```bash\n    cd professional-services\/examples\/iap-user-profile\/\n    gcloud app deploy --quiet\n    ```\n    \n1.  Configure `Consent Screen` on the below link  \n    https:\/\/console.cloud.google.com\/apis\/credentials\/consent?project=project-id-1234\n    \n    1.  
Choose `Internal` for the `User Type`, then click `Create`\n    1.  Type in the `Application name`, e.g. IAP User Profile Example\n    1.  Choose the appropriate `Support email`. Alternatively, you can leave it to use your own email.\n    1.  Fill in the `Authorized domains` based on your GAE application domain, e.g.\n        * project-id-1234.an.r.appspot.com\n    1.  Click `Save`\n\n1.  Enable IAP on the below link for the `App Engine app`. Toggle on the button and then click `Turn On`.  \n    https:\/\/console.cloud.google.com\/security\/iap?project=project-id-1234\n\n1.  Add IAM policy binding to the IAP-enabled App Engine application.\n    Register your user email to access the application.\n    ```bash\n    gcloud iap web add-iam-policy-binding --resource-type=app-engine \\\n          --member='user:your-user@domain.com' \\\n          --role='roles\/iap.httpsResourceAccessor'\n    ```\n    \n1.  Create new `Credentials` on the below link. This credential will be used by the OAuth2 login\n    flow to retrieve the user profile.  \n    https:\/\/console.cloud.google.com\/apis\/credentials?project=project-id-1234\n    \n    1.  Click `Create Credentials`. Choose `OAuth client ID`.\n    1.  Choose `Web application` for the `Application type`\n    1.  Type in `IAP User Profile Svc` for the `Name`\n    1.  Fill in the `Authorized JavaScript origins`\n        * https:\/\/project-id-1234.an.r.appspot.com\n    1.  Fill in the `Authorized redirect URIs`\n        * https:\/\/project-id-1234.an.r.appspot.com\/auth-callback\n    1.  Click `Create`\n    1.  Click the newly created `IAP User Profile Example` credential and then click the `Download JSON` button.\n        You'll need to paste the JSON content later as secret in the Secret Manager. \n\n1.  Create a secret in Secret Manager named `iap-user-profile-svc-oauth2-client` with the client credential JSON file as\n    the value. 
\n    ```bash\n    gcloud secrets create iap-user-profile-svc-oauth2-client \\\n        --locations=asia-southeast1 --replication-policy=user-managed \\\n        --data-file=\/path-to\/client_secret.json\n    ```\n    \n1.  Add IAM policy binding to the secret for GAE default service account.\n    ```bash\n    gcloud secrets add-iam-policy-binding iap-user-profile-svc-oauth2-client \\\n          --member='serviceAccount:project-id-1234@appspot.gserviceaccount.com' \\\n          --role='roles\/secretmanager.secretAccessor'\n    ```\n\n## Accessing the Application\n\n1.  Access the application in your browser.  \n    Note: If you are accessing it first time, it may take some time before the policy takes effect.\n    Retry several times until you are prompted the OAuth login screen.  \n    https:\/\/project-id-1234.an.r.appspot.com\/\n    \n1.  You will be prompted the OAuth login one more time.  \n    This is intended since we are going to use this scope to access your People API.\n\n1.  You should be able to see your user profile displayed on the web page.","site":"GCP","answers_cleaned":"  Getting user profile from IAP enabled GAE application This example demonstrates how to retrieve user profile  e g  name  photo  from an IAP enabled GAE application      Initial Setup This setup can be done from  Cloud Shell     You need  Project Owner  permission to run this  e g  for creating GAE app   The following setup assumes that you are setting up your new application GCP project based on the  following    GCP project   project id 1234    GAE region   asia northeast1   1   Set up environment variables         bash     export PROJECT project id 1234     export REGION asia northeast1          1   Setup your gcloud        bash     gcloud config configurations create iap user profile     gcloud config set project  PROJECT          1   Create GAE application         bash     gcloud app create   region  REGION          1   Enable required APIs        bash     gcloud services 
enable           iap googleapis com           secretmanager googleapis com           cloudresourcemanager googleapis com           people googleapis com          1   Deploy this sample application  This will become the  default  service        Note  IAP can only be enabled when there is already a service deployed in GAE         bash     cd professional services examples iap user profile      gcloud app deploy   quiet              1   Configure  Consent Screen  on the below link       https   console cloud google com apis credentials consent project project id 1234          1   Choose  Internal  for the  User Type   then click  Create      1   Type in the  Application name   e g  IAP User Profile Example     1   Choose the appropriate  Support email   Alternatively  you can leave it to use your own email      1   Fill in the  Authorized domains  based on your GAE application domain  e g            project id 1234 an r appspot com     1   Click  Save   1   Enable IAP on the below link for the  App Engine app   Toggle on the button and then click  Turn On         https   console cloud google com security iap project project id 1234  1   Add IAM policy binding to the IAP enabled App Engine application      Register your user email to access the application         bash     gcloud iap web add iam policy binding   resource type app engine               member  user your user domain com                role  roles iap httpsResourceAccessor               1   Create new  Credentials  on the below link  This credential will be used by the OAuth2 login     flow to retrieve the user profile        https   console cloud google com apis credentials project project id 1234          1   Click  Create Credentials   Choose  OAuth client ID       1   Choose  Web application  for the  Application type      1   Type in  IAP User Profile Svc  for the  Name      1   Fill in the  Authorized JavaScript origins            https   project id 1234 an r appspot com     1   Fill in the  
Authorized redirect URIs            https   project id 1234 an r appspot com auth callback     1   Click  Create      1   Click the newly created  IAP User Profile Example  credential and then click the  Download JSON  button          You ll need to paste the JSON content later as secret in the Secret Manager    1   Create a secret in Secret Manager named  iap user profile svc oauth2 client  with the client credential JSON file as     the value          bash     gcloud secrets create iap user profile svc oauth2 client             locations asia southeast1   replication policy user managed             data file  path to client secret json              1   Add IAM policy binding to the secret for GAE default service account         bash     gcloud secrets add iam policy binding iap user profile svc oauth2 client               member  serviceAccount project id 1234 appspot gserviceaccount com                role  roles secretmanager secretAccessor              Accessing the Application  1   Access the application in your browser        Note  If you are accessing it first time  it may take some time before the policy takes effect      Retry several times until you are prompted the OAuth login screen        https   project id 1234 an r appspot com       1   You will be prompted the OAuth login one more time        This is intended since we are going to use this scope to access your People API   1   You should be able to see your user profile displayed on the web page "}
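Several console steps in the guide above ask for URLs derived from the GAE default domain. A small sketch that builds them from the example project ID (the `.an.r.appspot.com` suffix matches the `asia-northeast1` app in the guide; the labels are illustrative):

```shell
# Derive the consent-screen/credential URLs from the example project ID,
# to keep the origin and redirect URI consistent across console steps.
PROJECT=project-id-1234
ORIGIN="https://${PROJECT}.an.r.appspot.com"
echo "Authorized JavaScript origin: ${ORIGIN}"
echo "Authorized redirect URI:      ${ORIGIN}/auth-callback"
```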
{"questions":"GCP Social Security Numbers SSNs While the DLP API in GCP offers the ability to look for SSNs it may not be Hashpipeline accurate especially if there are other items such as account numbers that look similar One solution would Only 5 Million total records be to store SSNs in a Dictionary InfoType in Cloud DLP however that has the following limitations Overview In this solution we are trying to create a way to indicate security teams if there is a file found with US","answers":"# Hashpipeline\n\n## Overview\n\nIn this solution, we are trying to create a way to notify security teams when a file is found with US\nSocial Security Numbers (SSNs). While the DLP API in GCP offers the ability to look for SSNs, it may not be\naccurate, especially if there are other items such as account numbers that look similar. One solution would\nbe to store SSNs in a Dictionary InfoType in Cloud DLP, however that has the following limitations:\n\n* Only 5 Million total records\n* SSNs stored in plain text\n\nTo avoid those limitations, we built a PoC Dataflow pipeline that will run for every new file\nin a specified GCS bucket and determine how many (if any) SSNs are found, triggering a Pubsub Topic. The known\nSSNs will be stored in Firestore, a highly scalable key value store, only after being hashed with a salt and\nkey, which is stored in Secret Manager. 
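The hashing step can be pictured with `openssl`. This is a sketch only: stripping dashes and prepending the salt to the message are assumptions about what `hasher.py` does, and the real salt and key come from Secret Manager, never from literals:

```shell
# Illustrative HMAC-SHA256 hashing of an SSN. KEY/SALT are placeholders;
# the salt-prepended-to-message layout is an assumption, not the repo's
# documented scheme.
KEY="not-a-real-key"
SALT="not-a-real-salt"
hash_ssn() {
  normalized=$(printf '%s' "$1" | tr -d '-')   # e.g. 123-45-6789 -> 123456789
  printf '%s%s' "$SALT" "$normalized" |
    openssl dgst -sha256 -hmac "$KEY" -r | cut -d' ' -f1
}
hash_ssn "123-45-6789"   # same digest as hash_ssn "123456789"
```

Because only keyed, salted digests land in Firestore, the plaintext-storage limitation of the DLP dictionary approach does not apply.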
This is what the architecture will look like when we're done.\n\n![](.\/img\/arch.png)\n\n## Usage\n\nThis repo offers end-to-end deployment of the Hashpipeline solution using [HashiCorp Terraform](https:\/\/terraform.io)\ngiven a project and list of buckets to monitor.\n\n### Prerequisites\n\nThis has only been tested on Mac OSX but will likely work on Linux as well.\n\n* `terraform` executable is available in `$PATH`\n* `gcloud` is installed and up to date\n* `python` is version 3.5 or higher\n\n\n### Step 1: Deploy the Infrastructure\nNote that the following APIs will be enabled on your project by Terraform:\n\n* `iam.googleapis.com`\n* `dlp.googleapis.com`\n* `secretmanager.googleapis.com`\n* `firestore.googleapis.com`\n* `dataflow.googleapis.com`\n* `compute.googleapis.com`\n\nThen deploy the infrastructure to your project\n\n```\ncd infrastructure\ncp terraform.tfvars.sample terraform.tfvars\n# Update with your own values.\nterraform apply\n```\n\n### Step 2: Generate the Hash Key\n\nThis will create a new 64 byte key for use with HMAC and store it in Secret Manager\n\n```\nmake pip\nmake create_key\n```\n\n### Step 3: Seed Firestore with SSNs\n\nSince SSNs can exist in the data center in many stores, we'll assume the input is\na flat, newline separated file including valid SSNs. How you get them in that format is\nup to you. Once you have your input file, simply authenticate to `gcloud` and then run:\n\n```\n.\/scripts\/hasher.py upload \\\n\t\t--project $PROJECT \\\n\t\t--secret $SECRET \\\n\t\t--salt $SALT \\\n\t\t--collection $COLLECTION \\\n\t\t--infile $SSN_FILE\n```\n\nFor more information on the input parameters, just run `.\/scripts\/hasher.py --help`\n\n### Step 4: Build and Deploy\n\nThis uses Dataflow's Templates to build our pipeline and then run it. 
To use the values we created in terraform, just run:\n\n```\nmake build\nmake deploy\n```\n\nAt this point your Dataflow job will start up, so you can check its progress in the GCP Console.\n\n### Step 5: Subscribe\n\nThis pipeline just emits every finding in the file as a separate Pubsub message. We show an\nexample of how to subscribe to this and consume these messages in Python in the [poller.py](.\/scripts\/poller.py)\nscript. However since this is specifically a security solution, you will likely want to consume\nthese notifications in your SIEM such as Splunk, etc.\n\n## Testing\/Demo\n\n### Step 1\n\nFollow Steps 1 and 2 from above to set up the demo environment\n\n### Step 2: Seed the Firestore with Fake SSNs\n\nThis script will do the following:\n\n* Create a list of valid and random Social Security Numbers\n* Store the plain text in `scripts\/socials.txt`\n* Hash the numbers (normalized without dashes) using HMAC-SHA256 and the key generated from `make create_key`\n* Store the hashed values in Firestore under the collection specified in the terraform variable: `firestore_collection`\n\n```\nmake seed_firestore\n```\n\n### Step 3: Generate some input files for dataflow to use\n\nThis will store the input files under the `inputs\/` directory, so we have something to test with.\n\n```\nmake generate_input_files\n```\n\n### Step 5: Test out the pipeline locally\n\nThis will run the pipeline against the `small-input.txt` file generated by the previous step. It only has\n50 lines so it shouldn't take too long.\n\n```\nmake run_local\n```\n\n\n### Step 6: Subscribe\nIn a separate terminal, start the poller from the test subscription and count the findings by filename.\n```\n$ make subscribe\nSuccessfully subscribed to <subscription>. 
Messages will print below...\n```\n\nNow in a third terminal, run the following command to upload a file to the test bucket.\n\n```\nexport BUCKET=<dataflow-test-bucket>\ngsutil cp inputs\/small-input.txt gs:\/\/$BUCKET\/small.txt\n```\n\nAfter a little while, in your `subscribe` terminal, you should get something that looks like this, after\nthe files have been uploaded, along with the raw messages printed to standard out:\n\n```\n...\n-----------------------------------  --------\nFilename                             Findings\ngs:\/\/<dataflow-test-bucket>\/small.txt  26\n-----------------------------------  --------\n```\n\nThis number can be verified by looking in the file itself on the first line, which would say `expected_valid_socials = 26`\nfor this example.\n\n\n### Step 7: Deploy the pipeline to a template\n\n```\nmake build\nmake deploy\n```\n\nNow you can try out the same thing as Step 4 to verify it works.\n\n## Disclaimer\n\nWhile best efforts have been made to make this pipeline hardened from a security perspective, this is meant **only as\na demo and proof of concept** and should not be directly used in a production system without being fully vetted by security\nteams and the people who will maintain the code in the organization.","site":"GCP","answers_cleaned":"  Hashpipeline     Overview  In this solution  we are trying to create a way to indicate security teams if there is a file found with US Social Security Numbers  SSNs   While the DLP API in GCP offers the ability to look for SSNs  it may not be accurate  especially if there are other items such as account numbers that look similar  One solution would be to store SSNs in a Dictionary InfoType in Cloud DLP  however that has the following limitations     Only 5 Million total records   SSNs stored in plain text  To avoid those limitations  we built a PoC Dataflow pipeline that will run for every new file in a specified GCS bucket and determine how many  if any  SSNs are found  triggering a 
Pubsub Topic  The known SSNs will be stored in Firestore  a highly scalable key value store  only after being hashed with a salt and key  which is stored in Secret Manager  This is what the architecture will look like when we re done         img arch png      Usage  This repo offers end to end deployment of the Hashpipeline solution using  HashiCorp Terraform  https   terraform io  given a project and list of buckets to monitor       Prerequisites  This has only been tested on Mac OSX but will likely work on Linux as well      terraform  executable is available in   PATH     gcloud  is installed and up to date    python  is version 3 5 or higher       Step 1  Deploy the Infrastructure Note that the following APIs will be enabled on your project by Terraform      iam googleapis com     dlp googleapis com     secretmanager googleapis com     firestore googleapis com     dataflow googleapis com     compute googleapis com   Then deploy the infrastructure to your project      cd infrastructure cp terraform tfvars sample terraform tfvars   Update with your own values  terraform apply          Step 2  Generate the Hash Key  This will create a new 64 byte key for use with HMAC and store it in Secret Manager      make pip make create key          Step 3  Seed Firestore with SSNs  Since SSNs can exist in the data center in many stores  we ll assume the input is a flat  newline separated file including valid SSNs  How you get them in that format is up to you  Once you have your input file  simply authenticate to  gcloud  and then run         scripts hasher py upload       project  PROJECT       secret  SECRET       salt  SALT       collection  COLLECTION       infile  SSN FILE      For more information on the input parameters  just run    bin hasher py   help       Step 4  Build and Deploy  This uses Dataflow s Templates to build our pipeline and then run it  To use the values we created in terraform  just run       make build make deploy      At this point your Dataflow job 
will start up  so you can check its progress in the GCP Console       Step 5  Subscribe  This pipeline just emits every finding in the file as a separate Pubsub message  We show an example of how to subscribe to this and consume these messages in Python in the  poller py    scripts poller py  script  However since this is specifically a security solution  you will likely want to consume these notifications in your SIEM such as Splunk  etc      Testing Demo      Step 1  Follow Step 1 and 2 from above to set up the demo environment      Step 2  Seed the Firestore with Fake SSNs  This script will do the following     Create a list of valid and random Social Security Numbers   Store the plain text in  scripts socials txt    Hash the numbers  normalized without dashes  using HMAC SHA256 and the key generated from  make create key    Store the hashed values in Firestore under the collection specified in the terraform variable   firestore collection       make seed firestore          Step 3  Generate some input files for dataflow to use  This will store the input files under the  inputs   directory  so we have something to test with       make generate input files          Step 5  Test out the pipeline locally  This will run the pipeline against the  small input txt  file generated by the previous step  In only has 50 lines so it shouldn t take too long       make run local           Step 6  Subscribe In a separate terminal  start the poller from the test subscription and count the findings by filename        make subscribe Successfully subscribed to  subscription   Messages will print below         Now in a third terminal  run the following command to upload a file to the test bucket       export BUCKET  dataflow test bucket  gsutil cp inputs small input txt gs    BUCKET small txt      After a little while  in your  subscribe  terminal  you should get something that looks like this  after the files have been uploaded  along with the raw messages printed to standard out   
                                                      Filename                             Findings gs    dataflow test bucket  small txt  26                                                    This number can be verified by looking in the file itself on the first line  which would say  expected valid socials   26  for this example        Step 7  Deploy the pipeline to a template      make build make deploy      Now you can try out the same thing as Step 4 to verify it works      Disclaimer  While best efforts have been made to make this pipeline hardened from a security perspective  this is meant   only as a demo and proof of concept   and should not be directly used in a production system without being fully vetted by security teams and the people who will maintain the code in the organization "}
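The hashing scheme described in Steps 2-3 above (normalize the SSN by stripping dashes, then HMAC-SHA256 it with the 64-byte Secret Manager key and a salt) can be sketched in a few lines of Python. This is an illustrative sketch only: `hash_ssn` is a hypothetical helper, and the exact way the real `hasher.py` combines the salt with the number is an assumption here.

```python
import hashlib
import hmac

def hash_ssn(ssn: str, key: bytes, salt: str) -> str:
    """Normalize an SSN (strip dashes) and HMAC-SHA256 it.

    Hypothetical helper: how the real hasher.py mixes in the salt is an
    assumption; here the salt is simply prepended to the normalized number."""
    normalized = ssn.replace("-", "")
    return hmac.new(key, (salt + normalized).encode(), hashlib.sha256).hexdigest()

# Stand-in for the 64-byte key produced by `make create_key`.
key = b"k" * 64

# Because the number is normalized first, the dashed and undashed forms of the
# same SSN hash identically, so Firestore lookups are format-insensitive.
digest = hash_ssn("078-05-1120", key, "somesalt")
```

Only the hex digests ever leave the process, which is what lets the pipeline match findings against Firestore without storing any SSN in plain text.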
{"questions":"GCP This directory shows a series of pipelines used to generate data in GCS or BigQuery The intention for these pipelines are to be a tool for partners customers and SCEs who want to create a dummy dataset that Testing Data Generators Human Readable Data Generation looks like the schema of their actual data in order to run some queries in BigQuery Data Generator There are two different types of use cases for this kind of tool which we refer to throughout this documentation as Human Readable and Performance These pipelines are a great place to get started when you only have a customer s schema","answers":"# Data Generator\nThis directory shows a series of pipelines used to generate data in GCS or BigQuery.\nThe intention for these pipelines is to be a tool for partners, customers and SCEs who want to create a dummy dataset that\nlooks like the schema of their actual data in order to run some queries in BigQuery.\nThere are two different types of use cases for this kind of tool, which we refer to throughout this documentation as Human Readable and Performance\nTesting Data Generators.\n\n\n## Human Readable Data Generation\nThese pipelines are a great place to get started when you only have a customer's schema\nand do not have a requirement for your generated dataset to have a similar distribution to\nthe source dataset (this is required for accurately capturing query performance).\n - Human readable \/ queryable data. 
This includes intelligently populating columns with data formatted based on the field name.\n - This can be used in scenarios where there are hurdles to migrating actual data to BigQuery, to unblock integration tests and downstream development.\n - Generate joinable schemas for < 1 Billion distinct keys\n - Generates data from just a schema\n - Numeric columns trend upwards based on a `date` field if it exists.\n\n![Alt text](img\/data-generator.png)\n\n\n - [Data Generator](data-generator-pipeline\/data_generator_pipeline.py): This pipeline\n    can be used to generate the central fact table in a snowflake schema.\n - [Data Generator (Joinable Table)](data-generator-pipeline\/data_generator_joinable_table.py):\n    This pipeline should be used to generate data that joins to an existing BigQuery table\n    on a certain key.\n\n## Performance Testing Data Generation\nThe final pipeline supports the latter use case, where matching the distribution of the source\ndataset to replicate query performance is the goal.\n - Prioritizes speed and distribution matching over human-readable data (i.e. random strings rather than random sentences with English words)\n - Match the distribution of keys in a dataset to benchmark join performance\n - Generate joinable schemas on a larger scale.\n - Generates data based on a schema and a histogram table containing the desired distribution of data across the key columns\n\n![Alt text](img\/distribution-matcher.png)\n\n - [Histogram Tool](bigquery-scripts\/bq_histogram_tool.py): This is an example script of what could\n    be run on a customer's table to extract the distribution information per key without collecting\n    meaningful data. 
This script would be run by the client, and they would share the output table.\n    If the customer is not already in BigQuery, this histogram tool can serve as boilerplate for a\n    histogram tool that reads from their source database and writes to BigQuery.\n - [Distribution Matcher](data-generator-pipeline\/data_distribution_matcher.py): This pipeline operates\n    on a BigQuery table containing key hashes and counts and will replicate this distribution in the\n    generated dataset.\n\n## General Performance Recommendations\nA few recommendations when generating large datasets with any of these pipelines:\n - Write to AVRO on GCS, then load to BigQuery.\n - Use machines with a lot of CPU. We recommend `n1-highcpu-32`.\n - Run on a private network to avoid using public IP addresses.\n - Request higher quotas for your project to support scaling to 300+ large workers;\n   specifically, in the region you wish to run the pipeline:\n   - 300+ in-use IP addresses\n   - 10,000+ CPUs\n\n### Human Readable Data Generator Usage\nThis tool has several parameters to specify what kind of data you would like to generate.\n\n\n#### Schema\nThe schema may be specified using the `--schema_file` parameter, with a file containing a\nlist of JSON objects with `name`, `type`, `mode` and optionally `description` fields.\nThis form follows the output of `bq show --format=json --schema <table_reference>`.\nThis data generator now supports nested types like `RECORD`\/`STRUCT`. 
Note that the approach\ntaken was to generate a `REPEATED` `RECORD` (aka `ARRAY<STRUCT>`), and each record generated\nwill have between 0 and 3 elements in this array.\ne.g.\n```\n--schema_file=gs:\/\/python-dataflow-example\/schemas\/lineorder-schema.json\n```\nlineorder-schema.json:\n```\n{\n    \"fields\": [\n                {\"name\": \"lo_order_key\",\n                 \"type\": \"STRING\",\n                 \"mode\": \"REQUIRED\"\n                },\n                {\"name\": \"lo_linenumber\",\n                 \"type\": \"INTEGER\",\n                 \"mode\": \"NULLABLE\"\n                },\n                {...}\n              ]\n}\n```\nAlternatively, the schema may be specified with a reference to an existing BigQuery table with the\n`--input_bq_table` parameter. We suggest using the BigQuery UI to create an empty BigQuery table to\navoid typos when writing your own schema JSON.\n\n```\n--input_bq_table=BigQueryFaker.lineorders\n```\n\nNote that if you are generating data that is also being loaded into an RDBMS, you can specify the RDBMS type\nin the `description` field of the schema. The data generator will parse this to extract the data size.\ne.g. the field below will have strings truncated to fit within 36 bytes.\n```\n[\n    {\"name\": \"lo_order_key\",\n     \"type\": \"STRING\",\n     \"mode\": \"REQUIRED\",\n     \"description\": \"VARCHAR(36)\"\n    },\n    {...}\n]\n```\n\n#### Number of records\nTo specify the number of records to generate, use the `--num_records` parameter. Note that we recommend only calling this\npipeline for a maximum of 50 Million records at a time. To generate larger tables, you can simply call the pipeline\nscript several times.\n\n```\n--num_records=1000000\n```\n\n#### Output Prefix\nThe output is specified as a GCS prefix. Note that multiple files will be written with\n`<prefix>-<this-shard-number>-of-<total-shards>.<suffix>`. 
The suffix will be the appropriate suffix for the file type,\nbased on whether you pass the `--csv_schema_order` or `--avro_schema_file` parameters described later.\n\n```\n--gcs_output_prefix=gs:\/\/<BUCKET NAME>\/path\/to\/myprefix\n```\n\nWill create files at:\n\n```\ngs:\/\/<BUCKET NAME>\/path\/to\/myprefix-#####-of-#####.<suffix>\n```\n\n#### Output format\n\nOutput format is specified by passing one of the `--csv_schema_order`, `--avro_schema_file`, or `--write_to_parquet` parameters.\n\n`--csv_schema_order` should be a comma-separated list specifying the order of the field names for writing.\nNote that `RECORD` fields are not supported when writing to CSV, because it is a flat file format.\n\n```\n--csv_schema_order=lo_order_key,lo_linenumber,...\n```\n\n`--avro_schema_file` should be a file path to the Avro schema to write.\n\n```\n--avro_schema_file=\/path\/to\/linorders.avsc\n```\n\n`--write_to_parquet` is a flag that specifies the output should be parquet. In order for Beam to write to parquet,\na pyarrow schema is needed. Therefore, this tool translates the schema in the `--schema_file` to\na pyarrow schema automatically if this flag is included, but pyarrow doesn't support all fields that are supported\nby BigQuery. STRING, NUMERIC, INTEGER, FLOAT, BOOLEAN, TIMESTAMP, DATE, TIME, and DATETIME types are supported.\n\nThere is limited support for writing RECORD types to parquet. Due to this [known pyarrow issue](https:\/\/jira.apache.org\/jira\/browse\/ARROW-2587?jql=project%20%3D%20ARROW%20AND%20fixVersion%20%3D%200.14.0%20AND%20text%20~%20%22struct%22), this tool does not support writing arrays nested within structs.\n\nHowever, BYTES and GEOGRAPHY fields are not supported and cannot be included in the `--schema_file` when writing\nto parquet.\n\n```\n--write_to_parquet\n```\n\nAlternatively, you can write directly to a BigQuery table by specifying an `--output_bq_table`. 
However, if you are generating\nmore than 100K records, you may run into a limitation of the Python SDK: because WriteToBigQuery does not orchestrate multiple\nload jobs, you hit one of the single-load-job limitations [BEAM-2801](https:\/\/issues.apache.org\/jira\/browse\/BEAM-2801). If you\nare not concerned with having many duplicates, you can generate an initial BigQuery table with `--num_records=10000000` and\nthen use [`bq_table_resizer.py`](bigquery-scripts\/bq_table_resizer.py) to copy the table into itself until it reaches the\ndesired size.\n\n```\n--output_bq_table=project:dataset.table\n```\n\n\n#### Sparsity (optional)\nData is seldom fully populated for every record, so you can specify the probability of a NULLABLE column being null with the `--p_null` parameter.\n\n```\n--p_null=0.2\n```\n\n\n#### Keys and IDs (optional)\n\nThe data generator will parse your field names and generate keys\/ids for fields whose name contains \"`_key`\" or \"`_id`\".\nThe cardinality of such key columns can be controlled with the `--n_keys` parameter.\n\nAdditionally, you can parameterize the key skew by passing `--key_skew_distribution`. By default this is `None`, meaning roughly equal\ndistribution of rowcount across keys. This also supports `\"binomial\"`, giving a maximum-variance bell curve of keys over the range of the\nkeyset, or `\"zipf\"`, giving a distribution across the keyset according to Zipf's law.\n\n\n##### Primary Key (optional)\nThe data generator can support primary key columns by passing a comma-separated list of field names to `--primary_key_cols`.\nNote this is done by a deduplication process at the end of the pipeline. 
This may be a bottleneck for large data volumes.\nAlso, using this parameter might cause you to fall short of `--num_records` output records due to the deduplication.\nTo mitigate this, you can set `--n_keys` to a number much larger than the number of records you are generating.\n\n#### Date Parameters (optional)\nTo constrain the dates generated in date columns, one can use the `--min_date` and `--max_date` parameters.\n\n`--min_date` will default to January 1, 2000, and `--max_date` will default to today.\n\nIf you are using these parameters, be sure to use YYYY-MM-DD format.\n\n```\n--min_date=1970-01-01 \\\n--max_date=2010-01-01\n```\n\n#### Number Parameters (optional)\nThe range of integers and\/or floats can be constrained with the `--max_int` and `--max_float` parameters.\nThese default to 100 Million.\nThe number of decimal places in a float can be controlled with the `--float_precision` parameter.\nThe default float precision is 2.\nBoth integers and floats can be constrained to strictly positive values using\n`--strictly_pos=True`.\n`True` is the default.\n\n#### Write Disposition (optional)\nThe BigQuery write disposition can be specified using the `--write_disp` parameter.\n\nThe default is `WRITE_APPEND`.\n\n\n#### Dataflow Pipeline parameters\nFor basic usage, we recommend the following parameters:\n```\n# n1-highcpu-32: this is a high-CPU process, so tuning the machine type will boost performance.\n# --runner=DataflowRunner: run on Dataflow workers.\n# --save_main_session: serializes the main session and sends it to each worker.\npython data_generator_pipeline.py \\\n--project=<PROJECT ID> \\\n--setup_file=.\/setup.py \\\n--worker_machine_type=n1-highcpu-32 \\\n--runner=DataflowRunner \\\n--staging_location=gs:\/\/<BUCKET NAME>\/test \\\n--temp_location=gs:\/\/<BUCKET NAME>\/temp \\\n--save_main_session\n```\n\nFor isolating your Dataflow workers on a private network, you can additionally specify:\n```\n...\n--region=us-east1 \\\n--subnetwork=<FULL PATH TO SUBNET> \\\n--network=<NETWORK ID> 
\\\n--no_use_public_ips\n```\n\n### Modifying FakeRowGen\nYou may want to change the `FakeRowGen` DoFn class to more accurately spoof your data. You can use `special_map` to map\nsubstrings in field names to [Faker Providers](https:\/\/faker.readthedocs.io\/en\/latest\/providers.html). The only\nrequirement for this DoFn is for it to return a list containing a single Python dictionary mapping field names to values.\nSo hack away if you need something more specific; any Python code is fair game. Keep in mind\nthat if you use a non-standard module (even one available on PyPI) you will need to make sure it gets installed on each of the workers or you will get\nnamespace issues. This can be done most simply by adding the module to `setup.py`.\n\n### Generating Joinable Tables (Snowflake Schema)\nTo generate multiple tables that join based on certain keys, start by generating the central fact table with\n[`data_generator_pipeline.py`](data-generator-pipeline\/data_generator_pipeline.py) as described above.\nThen use [`data_generator_joinable_table.py`](data-generator-pipeline\/data_generator_joinable_table.py) with the parameters described above\nfor the new table, plus three additional parameters described below.\n - `--fact_table`: The existing fact table in BigQuery that will be queried to obtain the list of distinct key values.\n - `--source_joining_key_col`: The field name of the foreign key column in the existing table.\n - `--dest_joining_key_col`: The field name in the table we are generating with the pipeline, used for joining to the existing table.\n\nNote that this method selects the distinct keys from the `--fact_table` as a side input, which is passed as a list to each worker; each worker randomly\nselects a value from it to assign to each record. This means the list must comfortably fit in memory, which makes this method only suitable for key\ncolumns with relatively low cardinality (< 1 Billion distinct keys). 
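The key-generation behavior described under Keys and IDs above (`--n_keys` plus the optional `--key_skew_distribution` of `"binomial"` or `"zipf"`) can be modeled in a few lines of Python. This is an illustrative sketch, not the pipeline's actual implementation; `pick_key` is a hypothetical helper.

```python
import random

def pick_key(n_keys, skew=None, rng=random):
    """Hypothetical sketch of choosing a key for one generated record,
    mirroring --n_keys / --key_skew_distribution."""
    if skew is None:
        # Roughly equal rowcount across keys.
        return rng.randrange(n_keys)
    if skew == "binomial":
        # Sum of coin flips: a maximum-variance bell curve over the keyset.
        return sum(rng.random() < 0.5 for _ in range(n_keys - 1))
    if skew == "zipf":
        # P(key k) proportional to 1/(k+1), per Zipf's law.
        weights = [1.0 / (k + 1) for k in range(n_keys)]
        return rng.choices(range(n_keys), weights=weights)[0]
    raise ValueError(f"unknown skew: {skew}")

# With zipf skew, low-numbered keys receive far more rows than high-numbered ones.
counts = [0] * 10
for _ in range(10_000):
    counts[pick_key(10, skew="zipf")] += 1
```

Skewed keysets like these are what make join benchmarks realistic: a uniform keyset understates the hot-key pressure most production joins actually see.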
If you have more rigorous needs for generating joinable schemas, you should\nconsider using the distribution matcher pipeline.\n\n## Performance Testing Data Generator Usage\nSteps:\n - Generate the posterior histogram table. For an example of how to do this on an existing BigQuery table, look at the BigQuery Histogram Tool\ndescribed later in this doc.\n - Use the [`data_distribution_matcher.py`](data-generator-pipeline\/data_distribution_matcher.py) pipeline.\n\n\nYou can specify `--schema_file` (or `--input_table`), `--gcs_output_prefix` and `--output_format` the same way as described above in the\nHuman Readable Data Generator section. Additionally, you must specify a `--histogram_table`. This table will have a field for each key column (which will store\na hash of each value) and the frequency with which these values occur.\n\n### Generating Joinable Schemas\nJoinable tables can be created by running the distribution matcher on a histogram for each relevant table in the dataset. Because each histogram table\nentry captures the hash of each key it refers to, we can capture exact join scenarios without handing over any real data.\n\n## BigQuery Scripts\nIncluded are three BigQuery utility scripts to help with your data-generating needs. The first helps with loading many GCS files to BigQuery\nwhile staying under the 15 TB per-load-job limit, the next helps you profile the distribution of an existing dataset, and the last allows\nyou to resize BigQuery tables to a desired size.\n\n\n### BigQuery batch loads\nThis script is meant to orchestrate BigQuery load jobs of many\nJSON files on Google Cloud Storage. It ensures that each load\nstays under the 15 TB per-load-job limit. 
It operates on the\noutput of `gsutil -l`.\n\nThis script can be called with the following arguments:\n\n`--project`: GCP project ID\n\n`--dataset`: BigQuery dataset ID containing the table you wish\n    to populate.\n\n`--table`: BigQuery table ID of the table you wish to populate.\n\n`--source_file`: The output of `gsutil -l` with the URI of\n    each file that you would like to load.\n\n`--create_table`: If passed, this script will create\n    the destination table.\n\n`--schema_file`: Path to a JSON file defining the destination BigQuery\n    table schema.\n\n`--partitioning_column`: Name of the field for date partitioning in\n    the destination table.\n\n`--max_bad_records`: Number of permissible bad records per load job.\n\n#### Example Usage:\n```\ngsutil -l gs:\/\/<bucket>\/path\/to\/json\/<file prefix>-*.json >> .\/files_to_load.txt\n\npython bq_load_batches.py --project=<project> \\\n--dataset=<dataset_id> \\\n--table=<table_id> \\\n--partitioning_column date \\\n--source_file=files_to_load.txt\n```\n### BigQuery Histogram Tool\nThis script will create a BigQuery table containing the hashes of the key columns\nspecified as a comma-separated list to the `--key_cols` parameter, and the frequency\nwith which that group of key values appears in the `--input_table`. 
This serves as\na histogram of the original table and will be used as the source for\n[`data_distribution_matcher.py`](data-generator-pipeline\/data_distribution_matcher.py).\n\n#### Example Usage:\n```\npython bq_histogram_tool.py \\\n--input_table=<project>.<dataset>.<source_table> \\\n--output_table=<project>.<dataset>.<histogram_table> \\\n--key_cols=item_id,store_id\n```\n\n### BigQuery table resizer\n\nThis script helps increase the size of a table based on a generated or sampled table.\nIf you are short on time and have a requirement to generate a 100 TB table, you can\nuse this script to generate a few GB and copy the table into itself until it is the\ndesired size or number of rows. While this would be inappropriate for accurate\nperformance benchmarking, it can be used to get a query-specific cost estimate.\nThis script can be used to copy a table in place, or to create a new table if you\nwant to maintain a record of the original records. You can specify the target\ntable size in either number of rows or GB.\n\n#### Example Usage\n\n```\npython bq_table_resizer.py \\\n--project my-project-id \\\n--source_dataset my-dataset-id \\\n--source_table my-source-table-id \\\n--destination_dataset my-dataset-id \\\n--destination_table my-new-table-id \\\n--target_gb 15000 \\\n--location US\n```\n\n### Running the tests\nNote that the tests for the BigQuery table resizer require that you have\n`GOOGLE_APPLICATION_CREDENTIALS` set to credentials with access to a BigQuery\nenvironment where you can create and destroy tables.\n\n```\ncd data-generator-pipeline\npython -m unittest discover\n```","site":"GCP","answers_cleaned":"  Data Generator This directory shows a series of pipelines used to generate data in GCS or BigQuery  The intention for these pipelines are to be a tool for partners  customers and SCEs who want to create a dummy dataset that looks like the schema of their actual data in order to run some queries in BigQuery  There are two different types of use cases for 
this kind of tool which we refer to throughout this documentation as Human Readable and Performance Testing Data Generators       Human Readable Data Generation These pipelines are a great place to get started when you only have a customer s schema and do not have a requirement for your generated dataset to have similar distribution to the source dataset  this is required for accurately capturing query performance      Human readable   queryable data  This includes smart populating columns with data formatted based on the field name     This can be used in scenarios where there are hurdles to get over in migrating actual data to BigQuery  to unblock integration tests and downstream development     Generate joinable schemas for   1 Billion distinct keys    Generates data from just a schema    Numeric columns trend upwards based on a  date  field if it exists     Alt text  img data generator png        Data Generator  data generator pipeline data generator pipeline py   This pipeline should     can be used to generate a central fact table in snowflake schema      Data Generator  Joinable Table   data generator pipeline data generator pipeline py       this pipeline should be used to generate data that joins to an existing BigQuery Table     on a certain key      Performance Testing Data Generation The final pipeline supports the later use case where matching the distribution of the source dataset for replicating query performance is the goal     Prioritizes speed and distribution matching over human readable data  ie  random strings rather than random sentences w  english words     Match the distribution of keys in a dataset to benchmark join performance    Generate joinable schemas on a larger scale     Generates data based on a schema and a histogram table containing the desired distribution of data across the key columns    Alt text  img distribution matcher png       Histogram Tool  bigquery scripts bq histogram tool py   This is an example script of what could   
  be run on a customer s table to extract the distribution information per key without collecting     meaningful data  This script would be run by the client and they would share the output table      If the customer is not already in BigQuery this histogram tool can serve as boilerplate for a     histogram tool that reads from their source database and writes to BigQuery      Distribution Matcher  data generator pipeline data distribution matcher py   This pipeline operates     on a BigQuery table containing key hashes and counts and will replicate this distribution in the     generated dataset       General Performance Recommendations A few recommendations when generating large datasets with any of these pipelines     Write to AVRO on GCS then load to BigQuery     Use machines with a lot of CPU  We recommend  n1 highcpu 32      Run on a private network to avoid using public ip addresses     Request higher quotas for your project to support scaling to 300  large workers     specifically  in the region you wish to run the pipeline       300  In use IP addresses      10 000  CPUs      Human Readable Data Generator Usage This tool has several parameters to specify what kind of data you would like to generate         Schema The schema may be specified using the    schema file  parameter  with a file containing a list of json objects with  name     type    mode  and optionally  description  fields  This form follows the output of bq show   format json   schema  table reference    This data generator now supports nested types like  RECORD   STRUCT   Note  that the approach taken was to generate a  REPEATED   RECORD   aka  ARRAY STRUCT    and each record generated will have between 0 and 3 elements in this array  ie        schema file gs   python dataflow example schemas lineorder schema json     lineorder schema json             fields                       name    lo order key                     type    STRING                     mode    REQUIRED                       
```
{
    "name": "lo_linenumber",
    "type": "INTEGER",
    "mode": "NULLABLE"
}
```

Alternatively, the schema may be specified with a reference to an existing BigQuery table using the `--input_bq_table` parameter. We suggest using the BigQuery UI to create an empty BigQuery table to avoid typos when writing your own schema JSON.

```
--input_bq_table=BigQueryFaker.lineorders
```

Note: if you are generating data that is also being loaded into an RDBMS, you can specify the RDBMS type in the `description` field of the schema. The data generator will parse this to extract the data size; i.e. the field below will have strings truncated to be within 36 bytes.

```
{
    "name": "lo_order_key",
    "type": "STRING",
    "mode": "REQUIRED",
    "description": "VARCHAR(36)"
}
```

#### Number of records

To specify the number of records to generate, use the `--num_records` parameter. Note that we recommend calling this pipeline for a maximum of 50 million records at a time; for generating larger tables you can simply call the pipeline script several times.

```
--num_records=1000000
```

#### Output Prefix

The output is specified as a GCS prefix. Note that multiple files will be written as `<prefix>-<this shard number>-of-<total shards><suffix>`. The suffix will be the appropriate suffix for the file type, based on whether you pass the `--csv_schema_order` or `--avro_schema_file` parameters described later.

```
--gcs_output_prefix=gs://<BUCKET NAME>/path/to/myprefix
```

Will create files at:

```
gs://<BUCKET NAME>/path/to/myprefix-<shard number>-of-<total shards><suffix>
```

#### Output format

The output format is specified by passing one of the `--csv_schema_order`, `--avro_schema_file`, or `--write_to_parquet` parameters.

`--csv_schema_order` should be a comma-separated list specifying the order of the field names for writing. Note that `RECORD` fields are not supported when writing to CSV, because it is a flat file format.

```
--csv_schema_order=lo_order_key,lo_linenumber
```

`--avro_schema_file` should be a file path to the Avro schema to write.

```
--avro_schema_file=/path/to/linorders.avsc
```

`--write_to_parquet` is a flag that specifies the output should be Parquet. In order for Beam to write to Parquet, a pyarrow schema is needed, so this tool translates the schema in the `--schema_file` to a pyarrow schema automatically if this flag is included. However, pyarrow doesn't support all field types that are supported by BigQuery: STRING, NUMERIC, INTEGER, FLOAT, BOOLEAN, TIMESTAMP, DATE, TIME, and DATETIME types are supported. There is limited support for writing RECORD types to Parquet: due to this [known pyarrow issue](https://jira.apache.org/jira/browse/ARROW-2587?jql=project%20%3D%20ARROW%20AND%20fixVersion%20%3D%200.14.0%20AND%20text%20~%20%22struct%22), this tool does not support writing arrays nested within structs. BYTE and GEOGRAPHY fields are not supported at all and cannot be included in the `--schema_file` when writing to Parquet.

```
--write_to_parquet
```

Alternatively, you can write directly to a BigQuery table by specifying `--output_bq_table`. However, if you are generating more than 100K records, you may run into the limitation of the Python SDK where `WriteToBigQuery` does not orchestrate multiple load jobs, so you hit one of the single-load-job limitations ([BEAM-2801](https://issues.apache.org/jira/browse/BEAM-2801)). If you are not concerned with having many duplicates, you can generate an initial BigQuery table with `--num_records=10000000` and then use [bq_table_resizer.py](bigquery-scripts/bq_table_resizer.py) to copy the table into itself until it reaches the desired size.

```
--output_bq_table=project:dataset.table
```

#### Sparsity (optional)

Data is seldom fully populated for every record, so you can specify the probability of a NULLABLE column being null with the `--p_null` parameter.

```
--p_null=0.2
```

#### Keys and IDs (optional)

The data generator will parse your field names and generate keys/ids for fields whose name contains "key" or "id". The cardinality of such key columns can be controlled with the `--n_keys` parameter. Additionally, you can parameterize the key skew by passing `--key_skew_distribution`. By default this is `None`, meaning a roughly equal distribution of row count across keys. It also supports `binomial`, giving a maximum-variance bell curve of keys over the range of the keyset, or `zipf`, giving a distribution across the keyset according to Zipf's law.

#### Primary Key (optional)

The data generator can support primary key columns: pass a comma-separated list of field names to `--primary_key_cols`. Note that this is done by a deduplication process at the end of the pipeline, which may be a bottleneck for large data volumes. Also, using this parameter might cause you to fall short of `--num_records` output records due to the deduplication. To mitigate this, you can set `--n_keys` to a number much larger than the number of records you are generating.

#### Date Parameters (optional)

To constrain the dates generated in date columns, use the `--min_date` and `--max_date` parameters. The minimum date defaults to January 1, 2000 and the maximum date defaults to today.

```
--min_date=1970-01-01 \
--max_date=2010-01-01
```

#### Number Parameters (optional)

The range of integers and/or floats can be constrained with the `--max_int` and `--max_float` parameters; these default to 100 million. The number of decimal places in a float can be controlled with the `--float_precision` parameter; the default float precision is 2. Both integers and floats can be constrained to strictly positive values using `--strictly_pos=True` (True is the default).

#### Write Disposition (optional)

The BigQuery write disposition can be specified using the `--write_disp` parameter. The default is `WRITE_APPEND`.

#### Dataflow Pipeline parameters

For basic usage we recommend the following parameters:

```
python data_generator_pipeline.py \
--project=<PROJECT ID> \
--setup_file=./setup.py \
--worker_machine_type=n1-highcpu-32 \
--runner=DataflowRunner \
--staging_location=gs://<BUCKET NAME>/test \
--temp_location=gs://<BUCKET NAME>/temp \
--save_main_session
```

`--worker_machine_type`: this is a high-CPU process, so tuning the machine type will boost performance. `--runner=DataflowRunner`: run on Dataflow workers. `--save_main_session`: serializes the main session and sends it to each worker.

For isolating your Dataflow workers on a private network you can additionally specify:

```
--region=us-east1 \
--subnetwork=<FULL PATH TO SUBNET> \
--network=<NETWORK ID> \
--no_use_public_ips
```

### Modifying FakeRowGen

You may want to change the `FakeRowGen` DoFn class to more accurately spoof your data. You can use `special_map` to map substrings in field names to [Faker Providers](https://faker.readthedocs.io/en/latest/providers.html). The only requirement for this DoFn is that it return a list containing a single Python dictionary mapping field names to values, so hack away if you need something more specific; any Python code is fair game. Keep in mind that if you use a non-standard module (available in PyPI), you will need to make sure it gets installed on each of the workers or you will get namespace issues. This can be done most simply by adding the module to `setup.py`.

### Generating Joinable tables (Snowflake schema)

To generate multiple tables that join based on certain keys, start by generating the central fact table with the above-described [data_generator_pipeline.py](data-generator-pipeline/data_generator_pipeline.py). Then use [data_generator_joinable_table.py](data-generator-pipeline/data_generator_joinable_table.py) with the above-described parameters for the new table, plus three additional parameters:

- `--fact_table`: the existing fact table in BigQuery that will be queried to obtain the list of distinct key values.
- `--source_joining_key_col`: the field name of the foreign key column in the existing table.
- `--dest_joining_key_col`: the field name in the table we are generating with the pipeline, used for joining to the existing table.

Note: this method selects the distinct keys from the `--fact_table` as a side input, which is passed as a list to each worker, and each worker randomly selects a value to assign to the record. This means that the list must comfortably fit in memory, which makes this method suitable only for key columns with relatively low cardinality (< 1 billion distinct keys). If you have more rigorous needs for generating joinable schemas, you should consider using the distribution matcher pipeline.

## Performance Testing Data Generator

Usage steps:

1. Generate the posterior histogram table. For an example of how to do this on an existing BigQuery table, look at the BigQuery Histogram Tool described later in this doc.
2. Use the [data_distribution_matcher.py](data-generator-pipeline/data_distribution_matcher.py) pipeline.

You can specify `--schema_file` or `--input_table`, and `--gcs_output_prefix` and `--output_format`, the same way as described above in the Human Readable Data Generator section. Additionally, you must specify a `--histogram_table`. This table will have a field for each key column (which will store a hash of each value) and a frequency with which these values occur.

### Generating Joinable Schemas

Joinable tables can be created by running the distribution matcher on a histogram for all relevant tables in the dataset. Because each histogram table entry captures the hash of each key it refers to, we can capture exact join scenarios without handing over any real data.

## BigQuery Scripts

Included are three BigQuery utility scripts to help you with your data generating needs. The first helps with loading many GCS files to BigQuery while staying under the 15 TB per-load-job limit, the next will help you profile the distribution of an existing dataset, and the last will allow you to resize BigQuery tables to a desired size.

### BigQuery batch loads

This script is meant to orchestrate BigQuery load jobs of many JSON files on Google Cloud Storage. It ensures that each load stays under the 15 TB per-load-job limit. It operates on the output of `gsutil -l`.

This script can be called with the following arguments:

- `--project`: GCP project ID
- `--dataset`: BigQuery dataset ID containing the table you wish to populate
- `--table`: BigQuery table ID of the table you wish to populate
- `--source_file`: the output of `gsutil -l`, with the URI of each file that you would like to load
- `--create_table`: if passed, this script will create the destination table
- `--schema_file`: path to a JSON file defining the destination BigQuery table schema
- `--partitioning_column`: name of the field for date partitioning in the destination table
- `--max_bad_records`: number of permissible bad records per load job

Example usage:

```
gsutil -l gs://<bucket>/path/to/json/<file prefix>*.json >> files_to_load.txt
python bq_load_batches.py --project=<project> \
--dataset=<dataset_id> \
--table=<table_id> \
--partitioning_column=date \
--source_file=files_to_load.txt
```

### BigQuery Histogram Tool

This script will create a BigQuery table containing the hashes of the key columns, specified as a comma-separated list in the `--key_cols` parameter, and the frequency with which that group of key columns appears in the `--input_table`. This serves as a histogram of the original table and will be used as the source for [data_distribution_matcher.py](data-generator-pipeline/data_distribution_matcher.py).

Example usage:

```
python bq_histogram_tool.py \
--input_table=<project>.<dataset>.<source_table> \
--output_table=<project>.<dataset>.<histogram_table> \
--key_cols=item_id,store_id
```

### BigQuery table resizer

This script helps increase the size of a table based on a generated or sampled table. If you are short on time and have a requirement to generate a 100 TB table, you can use this script to generate a few GB and copy the table into itself until it reaches the desired size or number of rows. While this would be inappropriate for accurate performance benchmarking, it can be used to get a query-specific cost estimate. The script can copy a table in place, or create a new table if you want to maintain a record of the original records. You can specify the target table size in either number of rows or GB.

Example usage:

```
python bq_table_resizer.py \
--project my-project-id \
--source_dataset my-dataset-id \
--source_table my-source-table-id \
--destination_dataset my-dataset-id \
--destination_table my-new-table-id \
--target_gb 15000 \
--location US
```

### Running the tests

Note that the tests for the BigQuery table resizer require `GOOGLE_APPLICATION_DEFAULT` to be set to credentials with access to a BigQuery environment where you can create and destroy tables.

```
cd data-generator-pipeline
python -m unittest discover
```
"}
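The table resizer's copy-into-itself approach grows a table geometrically: each self-append doubles the size. A minimal sketch (the `self_copies_needed` helper is hypothetical, for illustration only, not part of the scripts) of estimating how many self-copies the resizer needs:

```python
import math

def self_copies_needed(current_gb: float, target_gb: float) -> int:
    """Return how many append-table-to-itself operations are needed
    before a table of current_gb reaches at least target_gb.
    Each self-append doubles the table: after n copies it holds
    current_gb * 2**n."""
    if current_gb <= 0:
        raise ValueError("current_gb must be positive")
    if target_gb <= current_gb:
        return 0
    return math.ceil(math.log2(target_gb / current_gb))

# Growing a 15 GB seed table to the 15000 GB target from the example
# above takes only 10 doublings (15 * 2**10 = 15360 GB).
print(self_copies_needed(15, 15000))  # -> 10
```

This is why generating a few GB with the pipeline and resizing is so much faster than generating 100 TB directly: the number of copy jobs is logarithmic in the target size.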
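The `zipf` key-skew option described under Keys and IDs above can be pictured with a small pure-Python sketch. This only illustrates the shape of the distribution (probability proportional to `1/rank**s` over a truncated keyset); it is not the tool's actual implementation, and the `sample_zipf_keys` helper and `s` exponent are assumptions for the demo:

```python
import random
from collections import Counter

def sample_zipf_keys(n_keys, n_records, s=1.5, seed=0):
    """Sample key ids in [1, n_keys] with probability proportional to
    1/rank**s, i.e. a truncated Zipf distribution: low-numbered keys
    account for most of the rows."""
    rng = random.Random(seed)
    weights = [1.0 / rank ** s for rank in range(1, n_keys + 1)]
    return rng.choices(range(1, n_keys + 1), weights=weights, k=n_records)

counts = Counter(sample_zipf_keys(n_keys=100, n_records=10_000))
# The hottest key dominates the keyset; frequency falls off polynomially
# with rank, unlike the roughly uniform default (None) distribution.
print(counts.most_common(3))
```

Skewed keys like this are useful for performance testing joins and aggregations, since hot keys stress shuffle and grouping stages in ways a uniform keyset does not.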
{"questions":"GCP Unit Tests pytest bash pip install r requirements dev txt Run unit tests after installing development dependencies","answers":"Unit Tests\n===\n\nRun unit tests after installing development dependencies:\n\n```bash\npip install -r requirements-dev.txt\npytest\n```\n\nSave and Replay VM Deletion Events\n===\n\nIt is useful to replay VM deletion events to test out changes to the Background\nFunction.  See the [Replay Quickstart][replay-qs] for more information.\n\n 1. Deploy the function\n 2. Get the subscription of the function with `gcloud pubsub subscriptions\n    list`\n 3. Create a pubsub snapshot with `gcloud pubsub snapshots create vm-deletions\n    --subscription <subscription>`\n 4. Delete a VM instance\n 5. Deploy a new version of the same function\n 6. Replay the deletion events to the new version `gcloud pubsub subscriptions\n    seek <subscription> --snapshot=vm-deletions`\n 7. Inspect the function output with `gcloud functions logs read --limit 50`\n\nInteractive Testing\n===\n\nA run helper method is provided to run the function from a local workstation.\nFirst, make sure you have the following environment variables set as if they\nwould be in the context of GCF.\n\n```bash\n# env | grep DNS\nDNS_VM_GC_DNS_PROJECT=dnsregistration\nDNS_VM_GC_DNS_ZONES=nonprod-private-zone\n```\n\nIf necessary, use Application Default Credentials:\n\n```bash\n# env | grep GOOG\nGOOGLE_APPLICATION_CREDENTIALS=\/Users\/jmccune\/.credentials\/dns-logging-83a9d261f444.json\n```\n\nRun the python REPL with debugging enabled via the DEBUG environment variable.\n\n```ipython\nPython 3.7.3 (default, Mar 27 2019, 09:23:39)\n[Clang 10.0.0 (clang-1000.11.45.5)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import conftest\n>>> from main_test import run\n>>> run()\n{\"message\": \"BEGIN Zone cleanup\", \"managed_zone\": \"nonprod-private-zone\"}\n{\"message\": \"BEGIN search for deletion candidates\", \"instance\": 
\"test\", \"ip\": \"10.138.0.45\"}\n{\"message\": \"Skipped, not an A record\", \"record\": {\"name\": \"gcp.example.com.\", \"type\": \"NS\", \"ttl\": 21600, \"rrdatas\": [\"ns-gcp-private.googledomains.com.\"], \"signatureRrdatas\": [], \"kind\": \"dns#resourceRecordSet\"}}\n{\"message\": \"Skipped, not an A record\", \"record\": {\"name\": \"gcp.example.com.\", \"type\": \"SOA\", \"ttl\": 21600, \"rrdatas\": [\"ns-gcp-private.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300\"], \"signatureRrdatas\": [], \"kind\": \"dns#resourceRecordSet\"}}\n{\"message\": \"Skipped, shortname != instance\", \"record\": {\"name\": \"keep.gcp.example.com.\", \"type\": \"A\", \"ttl\": 300, \"rrdatas\": [\"10.138.0.45\"], \"signatureRrdatas\": [], \"kind\": \"dns#resourceRecordSet\"}}\n{\"message\": \"Skipped, ip does not match\", \"record\": {\"name\": \"test.keep.gcp.example.com.\", \"type\": \"A\", \"ttl\": 300, \"rrdatas\": [\"10.138.0.43\", \"10.138.0.44\", \"10.138.0.45\"], \"signatureRrdatas\": [], \"kind\": \"dns#resourceRecordSet\"}}\n{\"message\": \"Skipped, ip does not match\", \"record\": {\"name\": \"test.nonprod.gcp.example.com.\", \"type\": \"A\", \"ttl\": 300, \"rrdatas\": [\"10.138.0.250\"], \"signatureRrdatas\": [], \"kind\": \"dns#resourceRecordSet\"}}\n{\"message\": \"END search for deletion candidates\", \"candidates\": []}\n{\"message\": \"END Zone cleanup\", \"managed_zone\": \"nonprod-private-zone\"}\n>>>\n```\n\nUnit Tests\n===\n\nRun unit tests to interact with fixture data.  
Fixture data is provided for the\nmain entry point of a pubsub message sent to Google Cloud Functions and for the\nAPI response when collecting the IP address from the compute API.\n\nTest dependencies\n---\n\nInstall test dependencies:\n\n```bash\npip install pytest\npip install google-api-python-client\n```\n\nRun the tests\n---\n\n```\npytest -vv tests\n```\n\nSample events\n===\n\nThis is the JSON log data sent from Stackdriver's compute.instances.delete\nevent to Pub\/Sub and then to the Background Function implemented in Python.\nThere are two events, one for the GCE_API_CALL initiating the VM deletion, the\nsecond for the GCE_OPERATION_DONE event concluding the VM deletion.\n\nObtained using `gcloud functions logs read --limit 100`.\n\n```txt\nD      dns_vm_gc  578383257362746  2019-06-12 01:30:09.128  Function execution started\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145  {\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145      \"insertId\": \"6o9bdnfxt05mn\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145      \"jsonPayload\": {\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"actor\": {\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"user\": \"jmccune@google.com\"\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          },\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"event_subtype\": \"compute.instances.delete\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"event_timestamp_us\": \"1560300993146327\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"event_type\": \"GCE_API_CALL\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"ip_address\": \"\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"operation\": {\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"id\": \"971500189857477422\",\nI      dns_vm_gc  
578383257362746  2019-06-12 01:30:09.145              \"name\": \"operation-1560300992590-58b15e267f2cc-e7529c4d-f1343924\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"type\": \"operation\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"zone\": \"us-west1-a\"\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          },\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"request\": {\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"body\": \"null\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"url\": \"https:\/\/www.googleapis.com\/compute\/v1\/projects\/user-dev-242122\/zones\/us-west1-a\/instances\/test?key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\"\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          },\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"resource\": {\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"id\": \"613579339353422259\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"name\": \"test\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"type\": \"instance\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"zone\": \"us-west1-a\"\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          },\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"trace_id\": \"operation-1560300992590-58b15e267f2cc-e7529c4d-f1343924\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"user_agent\": \"Mozilla\/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/74.0.3729.169 Safari\/537.36\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"version\": \"1.2\"\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145      },\nI      dns_vm_gc  578383257362746  2019-06-12 
01:30:09.145      \"labels\": {\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"compute.googleapis.com\/resource_id\": \"613579339353422259\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"compute.googleapis.com\/resource_name\": \"test\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"compute.googleapis.com\/resource_type\": \"instance\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"compute.googleapis.com\/resource_zone\": \"us-west1-a\"\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145      },\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145      \"logName\": \"projects\/user-dev-242122\/logs\/compute.googleapis.com%2Factivity_log\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145      \"receiveTimestamp\": \"2019-06-12T00:56:33.188262452Z\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145      \"resource\": {\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"labels\": {\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"instance_id\": \"613579339353422259\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"project_id\": \"user-dev-242122\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145              \"zone\": \"us-west1-a\"\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          },\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145          \"type\": \"gce_instance\"\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145      },\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145      \"severity\": \"INFO\",\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145      \"timestamp\": \"2019-06-12T00:56:33.146327Z\"\nI      dns_vm_gc  578383257362746  2019-06-12 01:30:09.145  }\nD      dns_vm_gc  578383257362746  2019-06-12 01:30:09.146  Function execution took 20 ms, finished with status: 'ok'\nD      
dns_vm_gc  578389534045377  2019-06-12 01:30:20.433  Function execution started\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438  {\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438      \"insertId\": \"hrbv6bfh9qzr2\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438      \"jsonPayload\": {\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"actor\": {\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"user\": \"jmccune@google.com\"\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          },\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"event_subtype\": \"compute.instances.delete\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"event_timestamp_us\": \"1560301035092322\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"event_type\": \"GCE_OPERATION_DONE\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"operation\": {\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"id\": \"971500189857477422\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"name\": \"operation-1560300992590-58b15e267f2cc-e7529c4d-f1343924\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"type\": \"operation\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"zone\": \"us-west1-a\"\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          },\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"resource\": {\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"id\": \"613579339353422259\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"name\": \"test\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"type\": \"instance\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"zone\": 
\"us-west1-a\"\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          },\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"trace_id\": \"operation-1560300992590-58b15e267f2cc-e7529c4d-f1343924\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"version\": \"1.2\"\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438      },\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438      \"labels\": {\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"compute.googleapis.com\/resource_id\": \"613579339353422259\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"compute.googleapis.com\/resource_name\": \"test\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"compute.googleapis.com\/resource_type\": \"instance\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"compute.googleapis.com\/resource_zone\": \"us-west1-a\"\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438      },\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438      \"logName\": \"projects\/user-dev-242122\/logs\/compute.googleapis.com%2Factivity_log\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438      \"receiveTimestamp\": \"2019-06-12T00:57:15.146606691Z\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438      \"resource\": {\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"labels\": {\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"instance_id\": \"613579339353422259\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"project_id\": \"user-dev-242122\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438              \"zone\": \"us-west1-a\"\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          },\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438          \"type\": \"gce_instance\"\nI      dns_vm_gc  
578389534045377  2019-06-12 01:30:20.438      },\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438      \"severity\": \"INFO\",\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438      \"timestamp\": \"2019-06-12T00:57:15.092322Z\"\nI      dns_vm_gc  578389534045377  2019-06-12 01:30:20.438  }\nD      dns_vm_gc  578389534045377  2019-06-12 01:30:20.439  Function execution took 7 ms, finished with status: 'ok'\n```","site":"GCP","answers_cleaned":"Unit Tests      Run unit tests after installing development dependencis      bash pip install  r requirements dev txt pytest      Save and Replay VM Deletion Events      It is useful to replay VM deletion events to test out changes to the Background Function   See the  Replay Quickstart  replay qs  for more information    1  Deploy the function  2  Get the subscription of the function with  gcloud pubsub subscriptions     list   3  Create a pubsub snapshot with  gcloud pubsub snapshots create vm deletions       subscription  subscription    4  Delete a VM instance  5  Deploy a new version of the same function  6  Replay the deletion events to the new version  gcloud pubsub subscriptions     seek  subscription    snapshot vm deletions   7  Inspect the function output with  gcloud functions logs read   limit 50   Interactive Testing      A run helper method is provided to run the function from a local workstation  First  make sure you have the following environment variables set as if they would be in the context of GCF      bash   env   grep DNS DNS VM GC DNS PROJECT dnsregistration DNS VM GC DNS ZONES nonprod private zone      If necessary  use Application Default Credentials      bash   env   grep GOOG GOOGLE APPLICATION CREDENTIALS  Users jmccune  credentials dns logging 83a9d261f444 json      Run the python REPL with debugging enabled via the DEBUG environment variable      ipython Python 3 7 3  default  Mar 27 2019  09 23 39   Clang 10 0 0  clang 1000 11 45 5   on darwin Type  help    copyright    
credits  or  license  for more information      import conftest     from main test import run     run     message    BEGIN Zone cleanup    managed zone    nonprod private zone     message    BEGIN search for deletion candidates    instance    test    ip    10 138 0 45     message    Skipped  not an A record    record     name    gcp example com     type    NS    ttl   21600   rrdatas     ns gcp private googledomains com      signatureRrdatas        kind    dns resourceRecordSet      message    Skipped  not an A record    record     name    gcp example com     type    SOA    ttl   21600   rrdatas     ns gcp private googledomains com  cloud dns hostmaster google com  1 21600 3600 259200 300     signatureRrdatas        kind    dns resourceRecordSet      message    Skipped  shortname    instance    record     name    keep gcp example com     type    A    ttl   300   rrdatas     10 138 0 45     signatureRrdatas        kind    dns resourceRecordSet      message    Skipped  ip does not match    record     name    test keep gcp example com     type    A    ttl   300   rrdatas     10 138 0 43    10 138 0 44    10 138 0 45     signatureRrdatas        kind    dns resourceRecordSet      message    Skipped  ip does not match    record     name    test nonprod gcp example com     type    A    ttl   300   rrdatas     10 138 0 250     signatureRrdatas        kind    dns resourceRecordSet      message    END search for deletion candidates    candidates         message    END Zone cleanup    managed zone    nonprod private zone            Unit Tests      Run unit tests to interact with fixture data   Fixture data is provided for the main entry point of a pubsub message sent to Google Cloud Functions and for the API response when collecting the IP address from the compute API   Test dependencies      Install test dependencies      bash pip install pytest pip install google api python client      Run the tests          pytest  vv tests      Sample events      This is the JSON log data 
sent from Stackdriver s compute instances delete event to Pub Sub and then to the Background Function implemented in Python  There are two events  one for the GCE API CALL initiating the VM deletion  the second for the GCE OPERATION DONE event concluding the VM deletion   Obtained using  gcloud functions logs read   limit 100       txt D      dns vm gc  578383257362746  2019 06 12 01 30 09 128  Function execution started I      dns vm gc  578383257362746  2019 06 12 01 30 09 145    I      dns vm gc  578383257362746  2019 06 12 01 30 09 145       insertId    6o9bdnfxt05mn   I      dns vm gc  578383257362746  2019 06 12 01 30 09 145       jsonPayload     I      dns vm gc  578383257362746  2019 06 12 01 30 09 145           actor     I      dns vm gc  578383257362746  2019 06 12 01 30 09 145               user    jmccune google com  I      dns vm gc  578383257362746  2019 06 12 01 30 09 145             I      dns vm gc  578383257362746  2019 06 12 01 30 09 145           event subtype    compute instances delete   I      dns vm gc  578383257362746  2019 06 12 01 30 09 145           event timestamp us    1560300993146327   I      dns vm gc  578383257362746  2019 06 12 01 30 09 145           event type    GCE API CALL   I      dns vm gc  578383257362746  2019 06 12 01 30 09 145           ip address       I      dns vm gc  578383257362746  2019 06 12 01 30 09 145           operation     I      dns vm gc  578383257362746  2019 06 12 01 30 09 145               id    971500189857477422   I      dns vm gc  578383257362746  2019 06 12 01 30 09 145               name    operation 1560300992590 58b15e267f2cc e7529c4d f1343924   I      dns vm gc  578383257362746  2019 06 12 01 30 09 145               type    operation   I      dns vm gc  578383257362746  2019 06 12 01 30 09 145               zone    us west1 a  I      dns vm gc  578383257362746  2019 06 12 01 30 09 145             I      dns vm gc  578383257362746  2019 06 12 01 30 09 145           request     I      dns vm gc  
[Example log output, condensed: two executions of the dns-vm-gc function, each decoding an activity-log entry for instance  test  (id 613579339353422259) in zone us-west1-a of project user-dev-242122. The first entry records the compute.instances.delete API call requested by jmccune google com; the second records the GCE OPERATION DONE event for operation-1560300992590-58b15e267f2cc-e7529c4d-f1343924. Both executions finished with status  ok .]"}
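The log output above shows the dns-vm-gc function reading fields such as `jsonPayload.event_subtype`, `resource.name`, and `resource.zone` from an exported activity-log entry. A background function receives that entry base64-encoded in the Pub/Sub message `data` field; the following is a hedged, minimal sketch of the decoding step (a hypothetical helper for illustration, not the project's actual code):

```python
import base64
import json

def parse_vm_delete_event(event):
    """Decode an exported activity-log entry from a Pub/Sub background
    function event and extract the fields shown in the logs above.

    Assumes event["data"] holds the base64-encoded LogEntry JSON; the
    field names (jsonPayload.event_subtype, resource.name, resource.zone)
    follow the log output, but this helper is illustrative only.
    """
    entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    payload = entry.get("jsonPayload", {})
    resource = payload.get("resource", {})
    return {
        "event_subtype": payload.get("event_subtype"),
        "vm_name": resource.get("name"),
        "zone": resource.get("zone"),
    }
```

For an entry like the one shown above, this yields the VM name `test` and zone `us-west1-a`, which the function then uses to look up the VM's IP address.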
{"questions":"GCP guaranteed A race exists between the function obtaining the VM IP address and Please note DNS record deletion is implemented however cannot be is obtained the function will not delete the DNS record because it cannot the operation If the VM is deleted before the IP when a VM is deleted VM DNS Garbage Collection This folder contains a Background Function bg which deletes DNS A records","answers":"VM DNS Garbage Collection\n===\n\nThis folder contains a [Background Function][bg] which deletes DNS A records\nwhen a VM is deleted.\n\n**Please note** DNS record deletion is implemented; however, it cannot be\nguaranteed.  A race exists between the function obtaining the VM IP address and\nthe `compute.instances.delete` operation.  If the VM is deleted before the IP\nis obtained, the function will not delete the DNS record because it cannot\nverify that the IP address matches the VM being deleted.\n\nIn practice this background function collects the IP address well within the\n~30 second window of the VM delete operation.\n\nStructured logs identify VM deletions which were not processed because the race\nwas lost.  See [Lost Race](#lost-race) for log filters to identify VMs deleted\nbefore cleanup could take place.\n\n![Example Log Output](.\/img\/example_logs.png)\n\nProject Setup\n===\n\nThis example has been developed for use with multiple service projects.  A\ncentralized logs project is used to host one Pub\/Sub topic for all VM deletion\nevents.  
One deployment of the function implements the event handler.\n\n * The logs project contains the dns-vm-gc Pub\/Sub topic and the\n   dns_vm_gc function deployed as a Background Function.\n * One or more service projects contain VM resources to be deleted.\n * The host project contains a VPC shared with the user project and DNS\n   resource record sets needing to be cleaned up automatically.\n\nIdentify the Logs Project\n===\n\nIdentify a project to host the `vm-deletions` Pub\/Sub topic and the DNS VM GC\nCloud Function.  Service projects are configured to export filtered logs into\nthis topic.\n\nIf a project does not already exist, create a new project.  A suggested name is\n`logs`.  The rest of this document will use `logs-123456` as the project ID for\nthe centralized logs project.\n\nCreate the vm-deletions Pub\/Sub topic\n---\n\nService projects export `compute.instances.delete` events to the `vm-deletions`\ntopic.  The VM DNS GC background function subscribes to this topic and triggers\non each event.\n\nCreate a topic named `vm-deletions` in the logs project as per [Create a\ntopic][pubsub-quickstart].\n\nConfigure Log Exports\n---\n\nConfigure Log Exports in one or more service projects.  Logs are exported to\nthe `vm-deletions` topic in the logs project.\n\n[Stackdriver logs exports][logs-exports] are used to convey VM lifecycle events\nto the DNS VM GC function via Cloud Pub\/Sub.  
A Stackdriver filter is used to\nlimit logs to VM deletion events, reducing data traveling through Pub\/Sub.\n\nConfigure an export to the `vm-deletions` topic (for example\n`projects\/logs-123456\/topics\/vm-deletions`) with the following filter.\n\n```\nresource.type=\"gce_instance\"\njsonPayload.event_type=\"GCE_API_CALL\"\njsonPayload.event_subtype=\"compute.instances.delete\"\n```\n\nThis filter results in one event published per VM deletion, a `GCE_API_CALL`\nevent when the VM deletion is requested.\n\nIf additional events are published to the topic, the function triggers, but\nignores events which do not match this filter.\n\nService Account\n===\n\nThe Background Function runs with a service account identity.  Create a service\naccount named `dns-vm-gc` in the logs project for this purpose.  This example\nassumes [GCP-managed][sa-gcp-managed] keys.\n\nIf you are modifying this example you may download the service account key and\nrun locally as the service account using the GOOGLE_APPLICATION_CREDENTIALS\nenvironment variable.  See [Providing credentials to your application][adc] for\ndetails.\n\nService Account Roles\n===\n\nThe Background Function service account requires the following roles.\n\nDNS Admin\n---\n\nGrant the DNS Admin role to the dns-vm-gc service account in the host project.\nDNS Admin allows the DNS VM GC function to delete DNS records in the host\nproject.\n\nThis role may be granted at the Shared VPC project level.\n\nCompute Viewer\n---\n\nGrant the Compute Viewer role to the dns-vm-gc service account.  Compute Viewer\nallows the DNS VM GC function to read the IP address of the VM, necessary to\nensure the correct A record is deleted.\n\nThis role may be granted at the project, folder, or organization level as\nappropriate.\n\nLogs Writer\n---\n\nGrant the Logs Writer role to the dns-vm-gc service account.  
Logs Writer is\nrequired to write structured event logs to the [Reporting\nStream](#reporting-stream).\n\nThis role may be granted at the project, folder, or organization level as\nappropriate.  It is recommended to grant the role at the same level the log\nstream exists at, the logging project by default.  See [Custom Reporting\nDestination](#custom-reporting-destination) for more information.\n\nDeployment\n===\n\nDeploy this function into the logs project to simplify the subscription to the\n`vm-deletions` topic.\n\nEnvironment variables are used to configure the behavior of the function.\nUpdate the env.yaml file to reflect the correct VPC Host project and Managed\nZone names for your environment.  A sample is provided in env.yaml.sample.\n\n```yaml\n# env.yaml\n---\nDNS_VM_GC_DNS_PROJECT: my-vpc-host-project\nDNS_VM_GC_DNS_ZONES: my-nonprod-private-zone,my-prod-private-zone\n```\n\n```bash\ngcloud functions deploy dns_vm_gc \\\n  --retry \\\n  --runtime=python37 \\\n  --service-account=dns-vm-gc@logs-123456.iam.gserviceaccount.com \\\n  --trigger-topic=vm-deletions \\\n  --env-vars-file=env.yaml\n```\n\nLogging and Reporting\n===\n\nThe DNS VM GC function logs into two different locations.  Structured Events\nintended for reporting are sent to a special purpose reporting stream.  Plain\ntext logs are sent to the standard Cloud Function logs accessible via `gcloud\nfunctions logs read`.\n\nReporting Stream\n---\n\nThe reporting stream is intended to answer two primary questions:\n\n 1. Which VM deletion events, if any, were not processed?\n 2. 
What records were deleted automatically?\n\nWhen the function loses the race against the delete operation, the event is not\nprocessed and the function reports a detail code of `LOST_RACE`.\n\nWhen the function deletes a record automatically, the fully qualified domain\nname is logged along with a detail code of `RR_DELETED` for resource record\ndeleted.\n\nCustom Reporting Destination\n---\n\nBy default the reporting stream is located at\n`projects\/<logs_project>\/logs\/<function_name>`.  The reporting stream is\nconfigurable by setting the `DNS_VM_GC_REPORTING_LOG_STREAM` environment\nvariable when deploying the function.  For example, to send reporting events to\nthe organization level:\n\n```yaml\n# env.yaml\n---\nDNS_VM_GC_DNS_PROJECT: my-vpc-host-project\nDNS_VM_GC_DNS_ZONES: my-nonprod-private-zone,my-prod-private-zone\nDNS_VM_GC_REPORTING_LOG_STREAM: organizations\/000000000000\/logs\/dns-vm-gc-report\n```\n\nSee the `logName` field of the [LogEntry][logentry] resource for a list of\npossible report stream destinations.\n\nReading the Report Logs\n---\n\nDownload all structured logs written to the report stream by the function\nusing:\n\n```bash\ngcloud logging read 'logName=\"projects\/<logs_project>\/logs\/<function_name>\"'\n```\n\nCloud Function Logs\n---\n\nThe function also logs unstructured plain text logs using [Cloud Function\nLogs][gcf-logs].  Because these logs are unstructured, they are less useful\nthan the Report Stream logs for reporting purposes; however, they are present to\nkeep all activity associated with each execution ID of the function.\n\nNote the cloud function logs have an execution_id.  This execution ID is not\nreadily available at runtime and therefore absent from the structured report\nlog stream.  The function logs a message with the `event_id` being processed to\nassociate the execution_id with the event_id.  
This behavior is intended to\ncorrelate each execution in the Cloud Function Logs with each report in the\nReport Stream.  The correlation of execution_id to event_id is not necessary\nfor day-to-day reporting.  The correlation is useful for the rare situation of\ncomplete end-to-end tracing.\n\nReporting\n===\n\nLost Race\n---\n\nPeriodic reporting should be performed to monitor for `NOT_PROCESSED` results.\nIn the event of a lost race, automatic DNS record deletion is not guaranteed.\n\nThe following Stackdriver Advanced Filter identifies when a VM deletion event\nwas not processed automatically:\n\n```\nresource.type=\"cloud_function\"\nresource.labels.function_name=\"dns_vm_gc\"\nlogName=\"projects\/dns-logging\/logs\/dns-vm-gc-report\"\njsonPayload.result=\"NOT_PROCESSED\"\n```\n\nDeleted Resource Records\n---\n\nAll records automatically deleted may be identified with a filter on the\ndetail code.\n\n```\nresource.type=\"cloud_function\"\nresource.labels.function_name=\"dns_vm_gc\"\nlogName=\"projects\/dns-logging\/logs\/dns-vm-gc-report\"\njsonPayload.detail=\"RR_DELETED\"\n```\n\nDebug Logs\n---\n\nDebug logs are also available, but are not sent by default.  To enable, deploy\nthe function with the `DEBUG` environment variable set to a non-empty string.\nNote: debug logging generates 2*N log events every time a VM is deleted, where N is\nthe number of DNS records across all configured managed zones.  
For example,\ndeleting 10 VM instances with 1,000 managed DNS records generates 20,000 debug\nlog entries at minimum.\n\nDetail Codes\n---\n\nThe following detail codes may be reported to the reporting stream:\n\n| Detail Code   | Description                                     | Result        |\n| -----------   | -----------                                     | ------        |\n| NO_MATCHES    | No DNS records matched the VM deleted           | OK            |\n| RR_DELETED    | A DNS record matched and has been deleted       | OK            |\n| VM_NO_IP      | The function won the race, but the VM has no IP | OK            |\n| IGNORED_EVENT | Trigger event is not a VM delete GCE_API_CALL   | OK            |\n| LOST_RACE     | The VM was deleted before the IP was determined | NOT_PROCESSED |\n\nIn addition, there are detail codes when DEBUG is turned on indicating the\nreason why DNS records were not automatically deleted.\n\n| Detail Code      | Reason DNS record not deleted              | Result |\n| -----------      | -----------------------------              | ------ |\n| RR_NOT_A_RECORD  | Resource Record is not an A record         | OK     |\n| RR_NAME_MISMATCH | Shortname doesn't match the VM name        | OK     |\n| RR_IP_MISMATCH   | rrdatas is not one IP matching the VM's IP | OK     |\n\n\n[bg]: https:\/\/cloud.google.com\/functions\/docs\/writing\/background\n[sa-gcp-managed]: https:\/\/cloud.google.com\/iam\/docs\/understanding-service-accounts#managing_service_account_keys\n[pubsub-quickstart]: https:\/\/cloud.google.com\/pubsub\/docs\/quickstart-console#create_a_topic\n[logs-exports]: https:\/\/cloud.google.com\/logging\/docs\/export\/\n[adc]: https:\/\/cloud.google.com\/docs\/authentication\/production#providing_credentials_to_your_application\n[logentry]: https:\/\/cloud.google.com\/logging\/docs\/reference\/v2\/rest\/v2\/LogEntry\n[gcf-logs]: https:\/\/cloud.google.com\/functions\/docs\/monitoring\/logging","site":"GCP"}
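The detail codes in the tables above describe a small decision procedure over each DNS record. As a hedged sketch only (a hypothetical function, not the project's actual implementation; it assumes DNS records represented as dicts with `type`, `name`, and `rrdatas` keys), the matching rules could look like:

```python
def match_detail(record, vm_name, vm_ip):
    """Classify a DNS resource record against a deleted VM using the
    detail codes from the tables above (illustrative sketch only)."""
    if record.get("type") != "A":
        return "RR_NOT_A_RECORD"   # only A records are candidates
    # Compare the record shortname (first DNS label) to the VM name.
    if record.get("name", "").split(".")[0] != vm_name:
        return "RR_NAME_MISMATCH"
    # rrdatas must be exactly one IP matching the VM's IP.
    if record.get("rrdatas") != [vm_ip]:
        return "RR_IP_MISMATCH"
    return "RR_DELETED"            # record matched and would be deleted
```

Under these rules a record like `test.example.com.` with `rrdatas` equal to the deleted VM's single IP classifies as `RR_DELETED`; any other combination maps to one of the debug detail codes.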
{"questions":"GCP representation for any use or purpose Your use of it is subject to your agreement with Google Twilio Conversation Integration with a Virtual Agent using Dialogflow Project Structure This is an example how to integrate a Twilio Conversation Services with Virtual Copyright 2023 Google This software is provided as is without warranty or Agent using Dialogflow","answers":"Copyright 2023 Google. This software is provided as-is, without warranty or\nrepresentation for any use or purpose. Your use of it is subject to your\nagreement with Google.\n\n# Twilio Conversation Integration with a Virtual Agent using Dialogflow\n\nThis is an example of how to integrate Twilio Conversation Services with a\nVirtual Agent using Dialogflow.\n\n## Project Structure\n\n```\n.\n\u2514\u2500\u2500 twilio\n   \u2514\u2500\u2500 src\n   \u2514\u2500\u2500 main\n           \u2514\u2500\u2500 java\n               \u2514\u2500\u2500 com.middleware.controller\n                   \u251c\u2500\u2500 cache # redis initialization and mapping of conversations\n                   \u251c\u2500\u2500 dialogflow # handler of a conversation with dialogflow\n                   \u251c\u2500\u2500 rest # entry point and request handler\n                   \u251c\u2500\u2500 twilio # marketplace and twilio conversation services set up and initialization\n                   \u251c\u2500\u2500 util # utility to initialize a twilio conversation to test the integration  \n                   \u2514\u2500\u2500 webhook # classes to process new message and new participant    \n          \u2514\u2500\u2500 resources\n              \u2514\u2500\u2500 application.properties\n          \u2514\u2500\u2500 proto\n              \u2514\u2500\u2500 dialogflow.proto # dialogflow conversation information holder\n   \u251c\u2500\u2500 pom.xml\n   \u2514\u2500\u2500 README.md\n```\n\n## Components\n\n- Dialogflow\n- Google Cloud MemoryStore (Redis)\n- Twilio with Flex\n\n## How the mapping between Twilio 
Conversation and Dialogflow Conversation is implemented\n\nWe use a mapping between the Twilio Conversation SID and the Dialogflow Conversation ID\nto maintain the conversation context. This mapping is stored in the [Redis\ncache](https:\/\/cloud.google.com\/memorystore\/docs\/redis\/redis-overview) and it works as follows:\n\n1. If there is no active conversation available in\n   the Redis cache, we create a new Dialogflow conversation and store the\n   mapping in the Redis cache.\n2. If there is an active conversation available in the Redis cache, we use\n   the same Dialogflow conversation to send the message to the Dialogflow\n   agent.\n3. If the user is not already present in the conversation, we add them to the\n   conversation.\n4. On each message, we send the message to the Dialogflow agent and get the\n   response\n   using the [AnalyzeContent API](https:\/\/developers.google.com\/resources\/api-libraries\/documentation\/dialogflow\/v2beta1\/java\/latest\/index.html?com\/google\/api\/services\/dialogflow\/v2beta1\/Dialogflow.Projects.Conversations.Participants.AnalyzeContent.html)\n5. The reply message is sent to the user using the Twilio Conversations API.\n6. At the time of handoff, we send the conversation context to the Flex UI\n   using the [Interaction API](https:\/\/www.twilio.com\/docs\/flex\/developer\/conversations\/interactions-api\/interactions).\n7. If the Agent Assist feature is enabled, we won't close the conversation in\n   Dialogflow. Otherwise, we will close the conversation in Dialogflow if\n   agent-handoff or end-of-conversation is detected.\n8. If the conversation is closed in Dialogflow, we delete the mapping from\n   the Redis cache.\n\n## Setup Instructions\n\n### GCP Project Setup\n\n#### Creating a Project in the Google Cloud Platform Console\n\nIf you haven't already created a project, create one now. 
Projects enable you to\nmanage all Google Cloud Platform resources for your app, including deployment,\naccess control, billing, and services.\n\n1. Open the [Cloud Platform Console][cloud-console].\n1. In the drop-down menu at the top, select **Create a project**.\n1. Give your project a name.\n1. Make a note of the project ID, which might be different from the project\n   name. The project ID is used in commands and in configurations.\n\n[cloud-console]: https:\/\/console.cloud.google.com\/\n\n#### Enabling billing for your project.\n\nIf you haven't already enabled billing for your project, [enable\nbilling][enable-billing] now. Enabling billing is required to use Cloud\nBigtable and to create VM instances.\n\n[enable-billing]: https:\/\/console.cloud.google.com\/project\/_\/settings\n\n#### Install the Google Cloud SDK.\n\nIf you haven't already installed the Google Cloud SDK, [install the Google\nCloud SDK][cloud-sdk] now. The SDK contains tools and libraries that enable you\nto create and manage resources on Google Cloud Platform.\n\n[cloud-sdk]: https:\/\/cloud.google.com\/sdk\/\n\n#### Setting Google Application Default Credentials\n\nSet your [Google Application Default\nCredentials][application-default-credentials] by [initializing the Google Cloud\nSDK][cloud-sdk-init] with the command:\n\n```\n   gcloud init\n```\n\nGenerate a credentials file by running the\n[application-default login](https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/auth\/application-default\/login)\ncommand:\n\n```\n    gcloud auth application-default login\n```\n\n[cloud-sdk-init]: https:\/\/cloud.google.com\/sdk\/docs\/initializing\n\n[application-default-credentials]: https:\/\/developers.google.com\/identity\/protocols\/application-default-credentials\n\n### Local Development Set Up\n\nThis is a Spring Boot application designed to run on port 8080. 
Upon launch, the\napplication will initialize and bind to port 8080, making it accessible for\nincoming connections.\n\n#### Set of environment variables:\n\nThe following environment variables need to be set on the localhost:\n\n```\nREDIS_HOST :\n   IP address of the Redis instance\n\nREDIS_PORT :\n   Port of the Redis instance (Default: 6379)\n```\n\n```\nTWILIO_ADD_ON_CONVERSATION_PROFILE_ID : \n   Dialogflow Conversation Profile ID (Provided by Twilio Integration)\n\nTWILIO_ADD_ON_PROJECT_ID :\n   GCP Project ID where the Dialogflow Agent is deployed (Provided\n   by Twilio Integration)\n\nTWILIO_ADD_ON_AGENT_LOCATION(Optional):\n   Dialogflow Agent Location (Provided by Twilio Integration). Default: global\n```   \n\n```\nTWILIO_ACCOUNT_SID :\n   Twilio Account SID\n\nTWILIO_AUTH_TOKEN :\n   OAuth Token for the Twilio Account\n\nTWILIO_WORKSPACE_SID :\n   Flex Workspace SID, (Used for interaction task creation)\n   \nTWILIO_WORKFLOW_SID :\n   Flex Workflow SID, (Used for interaction task creation)   \n```\n\n#### Redis Set Up\n\n##### Install a Redis Emulator\n\nPlease refer to this [doc](https:\/\/redis.io\/docs\/getting-started\/) to install a\nRedis emulator on your localhost.\n\nFor Linux:\n[Install Redis on Linux](https:\/\/redis.io\/docs\/getting-started\/installation\/install-redis-on-linux\/)\n\n##### Initialize the server\n\n```\n   $ redis-server\n```\n\n##### Basic commands to access the data\n\nList all the keys\n\n```\n  $ redis-cli\n   127.0.0.1:6379> KEYS *\n   (empty array)\n```\n\nDelete a key\n\n```\n   127.0.0.1:6379> DEL \"<key>\"\n```\n\nGet a key\n\n```\n  127.0.0.1:6379> GET \"<key>\"\n```\n\n## Usage\n\n### Endpoints\n\n> POST \/handleConversations\n\nEvents Handled\n\n1. **onParticipantAdded** : Expected Variables from the request\n    - ConversationSid\n2. 
**onMessageAdded** : Expected variables from the request\n    - ConversationSid\n    - Body\n    - Source\n\n### Initialize the application\n\nReference: [Building an Application with Spring Boot](https:\/\/spring.io\/guides\/gs\/spring-boot\/)\n\n```\n.\/mvnw spring-boot:run\n```\n\n### Send a request\n\n```\ncurl --location --request POST 'localhost:8080\/handleConversations' \\\n --header 'Content-Type: application\/x-www-form-urlencoded' \\\n --data-urlencode 'Body=Talk to agent' \\\n --data-urlencode 'EventType=onMessageAdded' \\\n --data-urlencode 'Source=whatsapp' \\\n --data-urlencode 'ConversationSid=XXXXXX'\n```\n\n## How to get the Dialogflow Conversation Profile ID\n\nThe Dialogflow Conversation Profile ID, also known as the Integration ID, is a\nunique identifier used for the integration of Dialogflow with third-party\napplications such as Twilio.\n\n> [Create\/Edit a Dialogflow Conversation Profile](https:\/\/cloud.google.com\/agent-assist\/docs\/conversation-profile#create_and_edit_a_conversation_profile)\n\nYou can also create a conversation profile using Twilio One Click Integration\nfrom the Dialogflow Console:\n\n> Dialogflow Console > Corresponding Agent > Manage > Integrations >\n> Twilio > One Click Integration > Connect\n\nOnce the conversation profile is created, you can find the conversation profile\nID as follows:\n\n> Open [Agent Assist](https:\/\/agentassist.cloud.google.com\/), then select Conversation\n> Profiles at the bottom left\n\n## How to create a Twilio conversation\n\n1. Create a Conversation programmatically using the\n   Twilio [Conversations API](https:\/\/www.twilio.com\/docs\/conversations\/api\/conversation-resource?code-sample=code-create-conversation&code-language=curl&code-sdk-version=json)\n2. Add the customer chat participant to the conversation. Use the SID of the\n   conversation that you created in the previous\n   step. 
[Reference](https:\/\/www.twilio.com\/docs\/conversations\/api\/conversation-participant-resource?code-sample=code-create-conversation-participant-chat&code-language=curl&code-sdk-version=json)\n3. Add a [scoped webhook](https:\/\/www.twilio.com\/docs\/conversations\/api\/conversation-scoped-webhook-resource?code-sample=code-create-attach-a-new-conversation-scoped-webhook&code-language=curl&code-sdk-version=json)\n   with \"webhook\" as the target using the Conversations Webhook endpoint. With the\n   webhook set to your endpoint and the configuration filter set to \"onMessageAdded\", any message that is added to the conversation will invoke the configured webhook URL.\n4. Simulate a new customer message by using the Message endpoint.\n   Remember to set Author to the identity you set in step 2 and to add the\n   header \"X-Twilio-Webhook-Enabled\" to the request so the webhook gets invoked.\n5. Use the Twilio [Interaction API](https:\/\/www.twilio.com\/docs\/flex\/developer\/conversations\/interactions-api)\n   to invoke a handoff to the Flex UI.\n\n## How to run the initializer to programmatically create a Twilio conversation\n\n> [ConversationInitializer](.\/src\/main\/java\/com\/middleware\/controller\/util\/ConversationInitializer.java)\n\n```\nmvn -DskipTests package exec:java \\\n        -Dexec.mainClass=com.middleware.controller.util.ConversationInitializer \\\n        -Dexec.args=\"add\"\n\nmvn -DskipTests package exec:java \\\n        -Dexec.mainClass=com.middleware.controller.util.ConversationInitializer \\\n        -Dexec.args=\"delete <conversation sid>\"\n```\n\n# Deployment\n\nJib is used to build the Docker image and push it to GCR.\n\n```bash\nmvn compile jib:build\n```\n\n# References\n\n1. [Twilio Conversations 
Fundamentals](https:\/\/www.twilio.com\/docs\/conversations\/fundamentals)\n2. [Dialogflow](https:\/\/cloud.google.com\/dialogflow\/es\/docs)\n","site":"GCP"}
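As a quick illustration of the webhook contract of `POST /handleConversations`, the form-encoded body from the curl example can be sketched in plain shell. `build_body` is a hypothetical helper, not part of the project, and it skips percent-encoding (curl's `--data-urlencode` handles that for you):

```shell
# Sketch only: compose the form fields from the README's curl example.
# build_body is a hypothetical helper, not part of the project.
build_body() {
  conversation_sid="$1"
  text="$2"
  printf 'EventType=%s&Source=%s&ConversationSid=%s&Body=%s' \
    'onMessageAdded' 'whatsapp' "$conversation_sid" "$text"
}

build_body 'XXXXXX' 'Talk to agent'
# -> EventType=onMessageAdded&Source=whatsapp&ConversationSid=XXXXXX&Body=Talk to agent
```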
{"questions":"GCP Dataflow Streaming Schema Handler This package contains a set of components required to handle unanticipated incoming streaming data into BigQuery with schema mismatch The code will uses Schema enforcement and DLT Dead Letter Table approach to store schema incompability In case of schema incompability detected from the incoming message the incoming message will be evolved to match the targeted table schema by adding and removing fields which then will be store in the targetted table In addition the original message with schema incomability will be store in the DLT for debugging and record purposes Pipeline This is the main class deploying the pipeline The program waits for the Pub sub JSON push event If the predefined schema matches with the data ingests the streaming data into a successful BigQuery dataset","answers":"# Dataflow Streaming Schema Handler\n\nThis package contains a set of components required to handle unanticipated incoming streaming data into BigQuery when there is a schema mismatch.\nThe code uses schema enforcement and a DLT (Dead Letter Table) approach to capture schema incompatibilities. When a schema incompatibility is detected in an incoming message, the message is evolved to match the target table schema by adding and removing fields, and the result is stored in the target table. In addition, the original message (with the incompatible schema) is stored in the DLT for debugging and record-keeping purposes.\n\n## Pipeline\n\n[Dataflow Streaming Schema Handler](src\/main\/java\/com\/google\/cloud\/pso\/pipeline\/PubSubToBigQueryJSON.java) -\nThis is the main class deploying the pipeline. The program waits for Pub\/Sub JSON push events.\nIf the predefined schema matches the data, the pipeline ingests the streaming data into the successful BigQuery dataset. 
\nOtherwise, if there is a schema mismatch, it ingests the data into the unsuccessful BigQuery dataset (the DLT).\n\n\n![Architecture Diagram](img\/architecture.png \"Architecture\")\n\nHappy path:\n> The incoming message fields match the targeted schema.\n> The result is stored in the targeted table.\n\nSchema mismatch:\n> The incoming message fields do not match the targeted schema.\n> The incoming message fields that match the targeted schema are kept, any additional fields that do not match are removed, and missing fields are set to null. The result is then stored in the targeted table.\n> The original message (with the mismatched schema) is stored in the DLT table.\n\n\n## Getting Started\n\n### Requirements\n\n* [gcloud sdk](https:\/\/cloud.google.com\/sdk\/docs\/install-sdk)\n* Java 11\n* Maven 3\n\n### Building the Project\n\nBuild the entire project using the Maven compile command:\n```sh\nmvn clean compile\n```\n\n### Setting GCP Resources\n```bash\n# Set the pipeline vars\nPROJECT_ID=<gcp-project-id>\nBQ_DATASET=<bigquery-dataset-name>\nBUCKET=<gcs-bucket>\nPIPELINE_FOLDER=gs:\/\/${BUCKET}\/dataflow\/pipelines\/streaming-benchmark\nPUBSUB_TOPIC=<pubsub-topic>\nPUBSUB_SUBS_ID=<subscriptions-id>\n```\n\n### Creating an example dataset\n```sh\nbq --location=US mk -d ${BQ_DATASET}\n```\n\n#### Create an example table\n```sh\nbq mk \\\n  --table \\\n  ${PROJECT_ID}:${BQ_DATASET}.tutorial_table \\\n  .\/src\/resources\/person.json\n```\n\n#### Create an example PubSub topic and subscription\n\n```sh\ngcloud pubsub topics create ${PUBSUB_TOPIC}\ngcloud pubsub subscriptions create ${PUBSUB_SUBS_ID} \\\n    --topic=${PUBSUB_TOPIC}\n```\n\n### Executing the Pipeline\nThe instructions below deploy the Dataflow job into the GCP project.\n\n```bash\n# Set the runner\nRUNNER=DataflowRunner\n\n# Compute engine region\nREGION=us-central1\n\n# Build the template\nmvn compile exec:java 
\\\n-Dexec.mainClass=com.google.cloud.pso.pipeline.PubsubToBigQueryJSON \\\n-Dexec.cleanupDaemonThreads=false \\\n-Dexec.args=\" \\\n    --project=${PROJECT_ID} \\\n    --dataset=${BQ_DATASET} \\\n    --stagingLocation=${PIPELINE_FOLDER}\/staging \\\n    --tempLocation=${PIPELINE_FOLDER}\/temp \\\n    --region=${REGION} \\\n    --maxNumWorkers=5 \\\n    --pubsubSubscription=projects\/${PROJECT_ID}\/subscriptions\/${PUBSUB_SUBS_ID} \\\n    --runner=${RUNNER}\"\n```\n\n### Sending sample data to Pub\/Sub\n\nOnce the Dataflow job is deployed and running, we can send a few messages and watch the output being stored in the GCP project.\n\nBelow is the happy-path message:\n```sh\ngcloud pubsub topics publish ${PUBSUB_TOPIC} \\\n  --message=\"{\\\"tutorial_table\\\":{\\\"person\\\":{\\\"name\\\":\\\"happyPath\\\",\\\"age\\\":22},\\\"date\\\": \\\"2022-08-03\\\",\\\"id\\\": \\\"xpze\\\"}}\"\n```\n\nBelow is the schema-mismatch message, with the \"age\" field removed from the payload:\n```sh\ngcloud pubsub topics publish ${PUBSUB_TOPIC} \\\n  --message=\"{\\\"tutorial_table\\\":{\\\"person\\\":{\\\"name\\\":\\\"mismatchSchema\\\"},\\\"date\\\": \\\"2022-08-03\\\",\\\"id\\\": \\\"avghy\\\"}}\"\n```","site":"GCP"}
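Before publishing, it can help to sanity-check a sample message against the example table's fields (`name`, `age`, `date`, `id` from `./src/resources/person.json`). Here is a small, dependency-free shell sketch; the `check_fields` helper is hypothetical and only checks that the field names appear in the JSON text, not the real schema enforcement the pipeline performs:

```shell
# Hypothetical helper: verify each field name appears in the message text.
check_fields() {
  msg="$1"; shift
  for field in "$@"; do
    case "$msg" in
      *"\"$field\""*) ;;                          # field name present
      *) echo "missing field: $field"; return 1 ;;
    esac
  done
  echo "all fields present"
}

HAPPY='{"tutorial_table":{"person":{"name":"happyPath","age":22},"date":"2022-08-03","id":"xpze"}}'
MISMATCH='{"tutorial_table":{"person":{"name":"mismatchSchema"},"date":"2022-08-03","id":"avghy"}}'

check_fields "$HAPPY" name age date id            # prints: all fields present
if ! check_fields "$MISMATCH" name age date id; then
  echo "expect this message to land in the DLT"   # prints after: missing field: age
fi
```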
{"questions":"GCP Everything in this repository has been developed as a parallel to CAI Config Validator The difference is that the Constraint Template schemas target instead of This ensures that the policies only target terraform resource changes instead of the entire CAI metadata library from a project folder or organization Use this when you intend to validate changes rather than declaratively manage a GCP cloud environment See for information on how to use this library Terraform Config Validator Policy Library This repo contains a library of constraint templates and sample constraints to be used for Terraform resource change requests If you re looking for the CAI variant please see User Guide","answers":"# Terraform Config Validator Policy Library\n\nThis repo contains a library of constraint templates and sample constraints to be used for Terraform resource change requests. If you're looking for the CAI variant, please see [Config Validator](https:\/\/github.com\/lykaasegura\/w-secteam-repo).\n\nEverything in this repository has been developed as a parallel to CAI Config Validator. The difference is that the Constraint\/Template schemas target `validation.resourcechange.terraform.cloud.google.com` instead of `validation.gcp.forsetisecurity.org`. This ensures that the policies only target terraform resource changes, instead of the entire CAI metadata library from a project, folder, or organization. Use this when you intend to validate changes, rather than declaratively manage a GCP cloud environment.\n\n## User Guide\n\nSee [docs\/user_guide.md](docs\/user_guide.md) for information on how to use this library.\n\nSee [docs\/functional_principles.md](docs\/functional_principles.md) for information on how to **develop** your own policies to use with `gcloud beta terraform vet`.\n\n## Creating Policies in the Constraint Framework\n\nThis library is set up in the **Constraint Framework** style. 
This means that we utilize Gatekeeper Constraints and ConstraintTemplates to interpret and apply Rego logic to incoming Terraform change resources. This can be challenging to understand at first, so please refer to the [functional principles](docs\/functional_principles.md) documentation found in the `docs` folder.\n\n## General Differences\n\nThis library is intended to validate Terraform plan resources. Therefore, as mentioned, the target has been swapped from `validation.gcp.forsetisecurity.org` to `validation.resourcechange.terraform.cloud.google.com`. This also means that the Constraint and ConstraintTemplate definitions have had to change from Gatekeeper API version `v1alpha1` to `v1beta1`, as this functionality is currently under development. As a result, the inlined Rego policy code has also had to change. If you use CAI Constraints and Templates (i.e. v1alpha1 Constraints\/Templates), those inlined Rego policies **will not work.**\n\nYou can check out documentation on how to create Terraform policies in the [`gcloud beta terraform vet` documentation](https:\/\/cloud.google.com\/docs\/terraform\/policy-validation\/create-terraform-constraints).\n\n## Working with this policy library\n\nThe operation of this library is similar to that of the CAI library, as the development flow with Make and other tools has proven to be quite efficient and helpful. 
Therefore, you can check the [user guide](docs\/user_guide.md) for relevant documentation required to get this library working for your needs.\n\n### Initializing a policy library\n\nYou can easily set up a new (local) policy library by downloading a [bundle](.\/docs\/index.md#policy-bundles) using [kpt](https:\/\/kpt.dev\/).\n\nDownload the full policy library and install the [Forseti bundle](.\/docs\/bundles\/forseti-security.md):\n\n```\nexport BUNDLE=forseti-security\nkpt pkg get https:\/\/github.com\/GoogleCloudPlatform\/policy-library.git .\/policy-library\nkpt fn source policy-library\/samples\/ | \\\n  kpt fn eval - --image gcr.io\/config-validator\/get-policy-bundle:latest -- bundle=$BUNDLE | \\\n  kpt fn sink policy-library\/policies\/constraints\/$BUNDLE\n```\n\nOnce you have initialized a library, you might want to save it to [git](.\/docs\/user_guide.md#get-started-with-the-policy-library-repository).\n\n### Developing a Constraint\n\nIf this library doesn't contain a constraint that matches your use case, you can develop a new one\nusing the [Constraint Template Authoring Guide](docs\/functional_principles.md).\n\n#### Available Commands\n\n```\nmake audit                          Run audit against real CAI dump data\nmake build                          Format and build\nmake build_templates                Inline Rego rules into constraint templates\nmake format                         Format Rego rules\nmake help                           Prints help for targets with comments\nmake test                           Test constraint templates via OPA\n```\n\n#### Inlining\n\nYou can run `make build` to automatically inline Rego rules into your constraint templates.\n\nThis is done by finding `#INLINE(\"filename\")` and `#ENDINLINE` statements in your YAML,\nand replacing everything in between with the contents of the file.\n\nFor example, running `make build` 
would replace the raw content with the replaced content below\n\nRaw:\n\n```\n#INLINE(\"my_rule.rego\")\n# This text will be replaced\n#ENDINLINE\n```\n\nReplaced:\n\n```\n#INLINE(\"my_rule.rego\")\n#contents of my_rule.rego\n#ENDINLINE\n```\n\n#### Linting Policies\n\nConfig Validator provides a policy linter.  You can invoke it as:\n\n```\ngo get github.com\/GoogleCloudPlatform\/config-validator\/cmd\/policy-tool\npolicy-tool --policies .\/policies --policies .\/samples --libs .\/lib\n```\n\n#### Local CI\n\nYou can run the cloudbuild CI locally as follows:\n\n```\ngcloud components install cloud-build-local\ncloud-build-local --config .\/cloudbuild.yaml --dryrun=false .\n```\n\n#### Updating CI Images\n\nYou can update the CI images to add new versions of rego\/opa as they are released.\n\n```\n# Rebuild all images.\nmake -j ci-images\n\n# Rebuild a single image\nmake ci-image-v1.16.0\n```","site":"GCP","answers_cleaned":"  Terraform Config Validator Policy Library  This repo contains a library of constraint templates and sample constraints to be used for Terraform resource change requests  If you re looking for the CAI variant  please see  Config Validator  https   github com lykaasegura w secteam repo    Everything in this repository has been developed as a parallel to CAI Config Validator  The difference is that the Constraint Template schemas target  validation resourcechange terraform cloud google com  instead of  validation gcp forsetisecurity org   This ensures that the policies only target terraform resource changes  instead of the entire CAI metadata library from a project  folder  or organization  Use this when you intend to validate changes  rather than declaratively manage a GCP cloud environment      User Guide  See  docs user guide md  docs user guide md  for information on how to use this library   See  docs functional principles md  docs functional principles md  for information on how to   develop   your own policies to use with  gcloud beta 
terraform vet       Creating Policies in the Constraint Framework  This library is set up in the   Constraint Framework   style  This means that we utilize Gatekeeper Constraints and ConstraintTemplates to interpret and apply rego logic to incoming terraform change resources  This can be challenging to understand at first  so please refer to the  functional principles  docs functional principles md  documentation found in the  docs  folder      General Differences  This library is intended to validate terraform plan resources  Therefore  as mentioned  the target has been swapped from  validation gcp forsetisecurity org  to  validation resourcechange terraform cloud google com   This also means that the Constraint and ConstraintTemplate definitions have also had to be changed from Gatekeeper API version  v1alpha1  to  v1beta1   as this functionality is currently under development  As a result  the Rego policy language has also had to change  If you user CAI Constraints and Templates  ie  v1apha1 Constraints Templates   those inlined Rego policies   will not work     You can check out documentation on how to create terraform policies in the   gcloud terraform beta vet  documentation  https   cloud google com docs terraform policy validation create terraform constraints       Working with this policy library  The operation of this library is similar with the CAI library  as the development flow with Make and other tools has proven to be quite efficient and helpful  Therefore  you can check the  user guide  docs user guide md  for relevant documentation required to get this library working for your needs       Initializing a policy library  You can easily set up a new  local  policy library by downloading a  bundle    docs index md policy bundles  using  kpt  https   kpt dev     Download the full policy library and install the  Forseti bundle    docs bundles forseti security md        export BUNDLE forseti security kpt pkg get https   github com GoogleCloudPlatform 
policy library git   policy library kpt fn source policy library samples        kpt fn eval     image gcr io config validator get policy bundle latest    bundle  BUNDLE       kpt fn sink policy library policies constraints  BUNDLE      Once you have initialized a library  you might want to save it to  git    docs user guide md https   github com GoogleCloudPlatform policy library blob master docs user guide md get started with the policy library repository        Developing a Constraint  If this library doesn t contain a constraint that matches your use case  you can develop a new one using the  Constraint Template Authoring Guide  docs functional principles md         Available Commands      make audit                          Run audit against real CAI dump data make build                          Format and build make build templates                Inline Rego rules into constraint templates make format                         Format Rego rules make help                           Prints help for targets with comments make test                           Test constraint templates via OPA           Inlining  You can run  make build  to automatically inline Rego rules into your constraint templates   This is done by finding a  INLINE  filename    and   ENDINLINE  statements in your yaml  and replacing everything in between with the contents of the file   For example  running  make build  would replace the raw content with the replaced content below  Raw        INLINE  my rule rego     This text will be replaced  ENDINLINE      Replaced        INLINE  my rule rego    contents of my rule rego  ENDINLINE           Linting Policies  Config Validator provides a policy linter   You can invoke it as       go get github com GoogleCloudPlatform config validator cmd policy tool policy tool   policies   policies   policies   samples   libs   lib           Local CI  You can run the cloudbuild CI locally as follows       gcloud components install cloud build local cloud build 
local   config   cloudbuild yaml   dryrun false             Updating CI Images  You can update the CI images to add new versions of rego opa as they are released         Rebuild all images  make  j ci images    Rebuild a single image make ci image v1 16 0    "}
{"questions":"GCP Config Validator Setup User Guide Table of Contents Go from setup to proof of concept in under 1 hour","answers":"## Config Validator | Setup & User Guide\n\n### Go from setup to proof-of-concept in under 1 hour\n\n**Table of Contents**\n\n* [Overview](#overview)\n* [How to set up constraints with Policy Library](#how-to-set-up-constraints-with-policy-library)\n  * [Get started with the Policy Library repository](#get-started-with-the-policy-library-repository)\n  * [Instantiate constraints](#instantiate-constraints)\n* [How to validate policies](#how-to-validate-policies)\n  * [Deploy Forseti](#deploy-forseti)\n  * [Policy Library Sync from Git Repository](https:\/\/forsetisecurity.org\/docs\/latest\/configure\/config-validator\/policy-library-sync-from-git-repo.html)\n  * [Policy Library Sync from GCS](https:\/\/forsetisecurity.org\/docs\/latest\/configure\/config-validator\/policy-library-sync-from-gcs.html)\n* [End to end workflow with sample constraint](#end-to-end-workflow-with-sample-constraint)\n* [Contact Info](#contact-info)\n\n## Overview\n\nThis tool is designed to perform policy validation checks on Terraform resource changes. It will not help with ongoing monitoring in your organization hierarchy, so if you're looking for that, please find the [config-validator](https:\/\/github.com\/GoogleCloudPlatform\/config-validator) project and associated [policy-library](https:\/\/github.com\/GoogleCloudPlatform\/policy-library) to get started with Cloud Asset Inventory policies.\n\nDesigned as an offshoot from the aforementioned policy-library, we set out to design a similar library that targets resource changes before terraform deployments. By refactoring rego policies in our library, we were able to target `validation.resourcechange.terraform.cloud.google.com` instead of the forsetisecurity target for CAI data. 
This allows for policy control in cases when the current state of the environment would clearly conflict with security policies, but you can't enforce fine-grained control to allow for that state to exist while locking out nearby features from terraform. This would likely be the case in automated IAM role or permission granting in a project with a super-admin. The super-admin may need to be there, and if using CAI policy validation, the pipeline would always fail if you define policies that limit the scope of a user's control.\n\nKeep in mind that this behavior may lead to security vulnerabilities, because the tool does not perform any ongoing monitoring.\n\n## How to set up constraints with Policy Library\n\n### Get started with the Policy Library repository\n\nThe Policy Library repository contains the following directories:\n\n* `policies`\n  * `constraints`: This is initially empty. You should place your constraint\n        files here.\n  * `templates`: This directory contains pre-defined constraint templates.\n* `validator`: This directory contains the `.rego` files and their associated\n    unit tests. You do not need to touch this directory unless you intend to\n    modify existing constraint templates or create new ones. Running `make\n    build` will inline the Rego content in the corresponding constraint template\n    files.\n\nThis repository contains a set of pre-defined constraint\ntemplates. You can duplicate this repository into a private repository. First\nyou should create a new **private** git repository. For example, if you use\nGitHub then you can use the [GitHub UI](https:\/\/github.com\/new). Then follow the\nsteps below to get everything set up.\n\nThis policy library can also be made public, but it is not recommended. 
By\nmaking your policy library public, you allow others to see what you\nare and **ARE NOT** scanning for.\n\n#### Duplicate Policy Library Repository\n\nTo run the following commands, you will need to configure git to connect\nsecurely. It is recommended to connect with SSH. [Here is a helpful resource](https:\/\/help.github.com\/en\/github\/authenticating-to-github\/connecting-to-github-with-ssh) for learning about how\nthis works, including steps to set this up for GitHub repositories; other\nproviders offer this feature as well.\n\n```\nexport GIT_REPO_ADDR=\"git@github.com:${YOUR_GITHUB_USERNAME}\/policy-library.git\"\ngit clone --bare https:\/\/github.com\/tdesrosi\/gcp-terraform-config-validator.git\ncd gcp-terraform-config-validator.git\ngit push --mirror ${GIT_REPO_ADDR}\ncd ..\nrm -rf gcp-terraform-config-validator.git\ngit clone ${GIT_REPO_ADDR}\n```\n\n#### Setup Constraints\n\nThen you need to examine the available constraint templates inside the\n`templates` directory. Pick the constraint templates that you wish to use,\ncreate constraint YAML files corresponding to those templates, and place them\nunder `policies\/constraints`. Commit the newly created constraint files to\n**your** Git repository. For example, assuming you have created a Git repository\nnamed \"policy-library\" under your GitHub account, you can use the following\ncommands to perform the initial commit:\n\n```\ncd policy-library\n# Add new constraints...\ngit add --all\ngit commit -m \"Initial commit of policy library constraints\"\ngit push -u origin master\n```\n\n#### Pull in latest changes from Public Repository\n\nPeriodically you should pull any changes from the public repository, which might\ncontain new templates and Rego files.\n\n```\ngit remote add public https:\/\/github.com\/tdesrosi\/policy-library-tf-resource-change.git\ngit pull public main\ngit push origin main\n```\n\n### Instantiate constraints\n\nThe constraint template library only contains templates. 
Templates specify the\nconstraint logic, and you must create constraints based on those templates in\norder to enforce them. Constraint parameters are defined as YAML files in the\nfollowing format:\n\n```\napiVersion: constraints.gatekeeper.sh\/v1beta1\nkind: # place constraint template kind here\nmetadata:\n  name: # place constraint name here\nspec:\n  severity: # low, medium, or high\n  match:\n    target: [] # put the constraint application target here\n    exclude: [] # optional, default is no exclusions\n  parameters: # put the parameters defined in constraint template here\n```\n\nThe <code><em>target<\/em><\/code> field is specified in a path-like format. It\nspecifies where in the GCP resources hierarchy the constraint is to be applied.\nFor example:\n\n<table>\n  <tr>\n   <td>Target\n   <\/td>\n   <td>Description\n   <\/td>\n  <\/tr>\n  <tr>\n   <td>organizations\/**\n   <\/td>\n   <td>All organizations\n   <\/td>\n  <\/tr>\n  <tr>\n   <td>organizations\/123\/**\n   <\/td>\n   <td>Everything in organization 123\n   <\/td>\n  <\/tr>\n  <tr>\n   <td>organizations\/123\/folders\/**\n   <\/td>\n   <td>Everything in organization 123 that is under a folder\n   <\/td>\n  <\/tr>\n  <tr>\n   <td>organizations\/123\/folders\/456\n   <\/td>\n   <td>Everything in folder 456 in organization 123\n   <\/td>\n  <\/tr>\n  <tr>\n   <td>organizations\/123\/folders\/456\/projects\/789\n   <\/td>\n   <td>Everything in project 789 in folder 456 in organization 123\n   <\/td>\n  <\/tr>\n<\/table>\n\nThe <code><em>exclude<\/em><\/code> field follows the same pattern and has\nprecedence over the <code><em>target<\/em><\/code> field. If a resource is in\nboth, it will be excluded.\n\nThe schema of the <code><em>parameters<\/em><\/code> field is defined in the\nconstraint template, using the\n[OpenAPI V3](https:\/\/github.com\/OAI\/OpenAPI-Specification\/blob\/master\/versions\/3.0.0.md#schemaObject)\nschema. 
This is the same validation schema in Kubernetes's custom resource\ndefinition. Every template contains a <code><em>validation<\/em><\/code> section\nthat looks like the following:\n\n```\nvalidation:\n  openAPIV3Schema:\n    properties:\n      mode:\n        type: string\n      instances:\n        type: array\n        items: string\n```\n\nAccording to the template above, the parameter field in the constraint file\nshould contain a string named `mode` and a string array named\n<code><em>instances<\/em><\/code>. For example:\n\n```\nparameters:\n  mode: allowlist\n  instances:\n    - \/\/compute.googleapis.com\/projects\/test-project\/zones\/us-east1-b\/instances\/one\n    - \/\/compute.googleapis.com\/projects\/test-project\/zones\/us-east1-b\/instances\/two\n```\n\nThese parameters specify that two VM instances may have external IP addresses.\nThey are exempt from the constraint since they are allowlisted.\n\nHere is a complete example of a sample external IP address constraint file:\n\n```\napiVersion: constraints.gatekeeper.sh\/v1beta1\nkind: TFGCPExternalIpAccessConstraintV1\nmetadata:\n  name: forbid-external-ip-allowlist\nspec:\n  severity: high\n  match:\n    target: [\"organizations\/**\"]\n  parameters:\n    mode: \"allowlist\"\n    instances:\n    - \/\/compute.googleapis.com\/projects\/test-project\/zones\/us-east1-b\/instances\/one\n    - \/\/compute.googleapis.com\/projects\/test-project\/zones\/us-east1-b\/instances\/two\n```\n\n## How to validate policies\n\nFollow the [instructions](https:\/\/cloud.google.com\/docs\/terraform\/policy-validation\/validate-policies)\nto validate policies in your local or production environments.\n\n## End to end workflow with sample constraint\n\nIn this section, you will apply a constraint that enforces IAM policy member\ndomain restriction using [Cloud Shell](https:\/\/cloud.google.com\/shell\/).\n\nFirst click on 
this\n[link](https:\/\/console.cloud.google.com\/cloudshell\/open?cloudshell_image=gcr.io\/graphite-cloud-shell-images\/terraform:latest&cloudshell_git_repo=https:\/\/github.com\/tdesrosi\/policy-tf-resource-change.git)\nto open a new Cloud Shell session. The Cloud Shell session has Terraform\npre-installed and the Policy Library repository cloned. Once you have the\nsession open, the next step is to copy over the sample IAM domain restriction\nconstraint:\n\n```\ncp samples\/constraints\/iam_service_accounts_only.yaml policies\/constraints\n```\n\nLet's take a look at this constraint:\n\n```\napiVersion: constraints.gatekeeper.sh\/v1beta1\nkind: TFGCPIAMAllowedPolicyMemberDomainsConstraintV2\nmetadata:\n  name: service-accounts-only\n  annotations:\n    description:\n      Checks that members that have been granted IAM roles belong to allowlisted\n      domains. Block IAM role bindings for non-service accounts by domain\n      (gserviceaccount.com)\nspec:\n  severity: high\n  parameters:\n    domains:\n      - gserviceaccount.com\n```\n\nIt specifies that only members from gserviceaccount.com domain can be present in\nan IAM policy. To verify that it works, let's attempt to create a project.\nCreate the following Terraform `main.tf` file:\n\n```\nprovider \"google\" {\n  version = \"~> 1.20\"\n  project = \"your-terraform-provider-project\"\n}\n\nresource \"random_id\" \"proj\" {\n  byte_length = 8\n}\n\nresource \"google_project\" \"sample_project\" {\n  project_id      = \"validator-${random_id.proj.hex}\"\n  name            = \"config validator test project\"\n}\n\nresource \"google_project_iam_binding\" \"sample_iam_binding\" {\n  project = \"${google_project.sample_project.project_id}\"\n  role    = \"roles\/owner\"\n\n  members = [\n    \"user:your-email@your-domain\"\n  ]\n}\n\n```\n\nMake sure to specify your Terraform\n[provider project](https:\/\/www.terraform.io\/docs\/providers\/google\/getting_started.html)\nand email address. 
Then initialize Terraform and generate a Terraform plan:\n\n```\nterraform init\nterraform plan -out=test.tfplan\nterraform show -json .\/test.tfplan > .\/tfplan.json\n```\n\nSince your email address is in the IAM policy binding, the plan should result in\na violation. Let's try this out:\n\n```\ngcloud beta terraform vet tfplan.json --policy-library=policy-library\n```\n\nThe Terraform validator should return a violation. As a test, you can relax the\nconstraint to make the violation go away. Edit the\n`policy-library\/policies\/constraints\/iam_service_accounts_only.yaml` file and\nappend your email domain to the domains allowlist:\n\n```\napiVersion: constraints.gatekeeper.sh\/v1beta1\nkind: TFGCPIAMAllowedPolicyMemberDomainsConstraintV2\nmetadata:\n  name: service-accounts-only\n  annotations:\n    description:\n      Checks that members that have been granted IAM roles belong to allowlisted\n      domains. Block IAM role bindings for non-service accounts by domain\n      (gserviceaccount.com)\nspec:\n  severity: high\n  parameters:\n    domains:\n      - gserviceaccount.com\n      - your-email-domain.com\n```\n\nThen run Terraform plan and validate the output again:\n\n```\nterraform plan -out=test.tfplan\nterraform show -json .\/test.tfplan > .\/tfplan.json\ngcloud beta terraform vet tfplan.json --policy-library=policy-library\n```\n\nThe command above should result in no violations found.\n\n## Contact Info\n\nQuestions or comments? 
Please contact tdesrosi@google.com for this project, or validator-support@google.com for information about the terraform-validator project.","site":"GCP","answers_cleaned":"   Config Validator   Setup   User Guide      Go from setup to proof of concept in under 1 hour    Table of Contents       Overview   overview     How to set up constraints with Policy Library   how to set up constraints with policy library       Get started with the Policy Library repository   get started with the policy library repository       Instantiate constraints   instantiate constraints     How to validate policies   how to validate policies       Deploy Forseti   deploy forseti       Policy Library Sync from Git Repository  https   forsetisecurity org docs latest configure config validator policy library sync from git repo html       Policy Library Sync from GCS  https   forsetisecurity org docs latest configure config validator policy library sync from gcs html     End to end workflow with sample constraint   end to end workflow with sample constraint     Contact Info   contact info      Overview  This tool is designed to perform policy validation check on Terraform resource changes  It will not help with ongoing monitoring in your organization heirarchy  so if you re looking for that  please find the  config validator  https   github com GoogleCloudPlatform config validator  project and associated  policy library  https   github com GoogleCloudPlatform policy library  to get started with Cloud Asset Inventory policies   Designed as an offshoot from the aforementioned policy library  we set out to design a similar library that targets resource changes before terraform deployments  By refactoring rego policies in our library  we were able to target  validation resourcechange terraform cloud google com  instead of the forsetisecurity target for CAI data  This allows for policy control in cases when the current state of the enviornment would clearly conflict with security policies  but 
you can t enforce fine grained control to allow for that state to exist while locking out nearby features from terraform  This would likely be the case in automated IAM role or permission granting in a project with a super admin  The super admin may need to be there  and if using CAI policy validation  the pipeline would always fail if you define policies that limit the scope of a user s control   Keep in mind that this behavior may lead to security vulnerabilities  because the tool does not perform any ongoing monitoring      How to set up constraints with Policy Library      Get started with the Policy Library repository  The Policy Library repository contains the following directories      policies       constraints   This is initially empty  You should place your constraint         files here       templates   This directory contains pre defined constraint templates     validator   This directory contains the   rego  files and their associated     unit tests  You do not need to touch this directory unless you intend to     modify existing constraint templates or create new ones  Running  make     build  will inline the Rego content in the corresponding constraint template     files   This repository contains a set of pre defined constraint templates  You can duplicate this repository into a private repository  First you should create a new   private   git repository  For example  if you use GitHub then you can use the  GitHub UI  https   github com new   Then follow the steps below to get everything setup   This policy library can also be made public  but it is not recommended  By making your policy library public  it would allow others to see what you are and   ARE NOT   scanning for        Duplicate Policy Library Repository  To run the following commands  you will need to configure git to connect securely  It is recommended to connect with SSH   Here is a helpful resource  https   help github com en github authenticating to github connecting to github with 
ssh  for learning about how this works  including steps to set this up for GitHub repositories  other providers offer this feature as well       export GIT REPO ADDR  git github com   YOUR GITHUB USERNAME  policy library git  git clone   bare https   github com tdesrosi gcp terraform config validator git cd policy library git git push   mirror   GIT REPO ADDR  cd    rm  rf policy library git git clone   GIT REPO ADDR            Setup Constraints  Then you need to examine the available constraint templates inside the  templates  directory  Pick the constraint templates that you wish to use  create constraint YAML files corresponding to those templates  and place them under  policies constraints   Commit the newly created constraint files to   your   Git repository  For example  assuming you have created a Git repository named  policy library  under your GitHub account  you can use the following commands to perform the initial commit       cd policy library   Add new constraints    git add   all git commit  m  Initial commit of policy library constraints  git push  u origin master           Pull in latest changes from Public Repository  Periodically you should pull any changes from the public repository  which might contain new templates and Rego files       git remote add public https   github com tdesrosi policy library tf resource change git git pull public main git push origin main          Instantiate constraints  The constraint template library only contains templates  Templates specify the constraint logic  and you must create constraints based on those templates in order to enforce them  Constraint parameters are defined as YAML files in the following format       apiVersion  constraints gatekeeper sh v1beta1 kind    place constraint template kind here metadata    name    place constraint name here spec    severity    low  medium  or high   match      target       put the constraint application target here     exclude       optional  default is no exclusions  
 parameters    put the parameters defined in constraint template here      The  code  em target  em   code  field is specified in a path like format  It specifies where in the GCP resources hierarchy the constraint is to be applied  For example    table     tr      td Target      td      td Description      td      tr     tr      td organizations         td      td All organizations      td      tr     tr      td organizations 123         td      td Everything in organization 123      td      tr     tr      td organizations 123 folders         td      td Everything in organization 123 that is under a folder      td      tr     tr      td organizations 123 folders 456      td      td Everything in folder 456 in organization 123      td      tr     tr      td organizations 123 folders 456 projects 789      td      td Everything in project 789 in folder 456 in organization 123      td      tr    table   The  code  em exclude  em   code  field follows the same pattern and has precedence over the  code  em target  em   code  field  If a resource is in both  it will be excluded   The schema of the  code  em parameters  em   code  field is defined in the constraint template  using the  OpenAPI V3  https   github com OAI OpenAPI Specification blob master versions 3 0 0 md schemaObject  schema  This is the same validation schema in Kubernetes s custom resource definition  Every template contains a  code  em validation  em   code  section that looks like the following       validation    openAPIV3Schema      properties        mode          type  string       instances          type  array         items  string      According to the template above  the parameter field in the constraint file should contain a string named  mode  and a string array named  code  em instances  em   code   For example       parameters    mode  allowlist   instances          compute googleapis com projects test project zones us east1 b instances one         compute googleapis com projects test 
project zones us east1 b instances two      These parameters specify that two VM instances may have external IP addresses  The are exempt from the constraint since they are allowlisted   Here is a complete example of a sample external IP address constraint file       apiVersion  constraints gatekeeper sh v1beta1 kind  TFGCPExternalIpAccessConstraintV1 metadata    name  forbid external ip allowlist spec    severity  high   match      target    organizations        parameters      mode   allowlist      instances          compute googleapis com projects test project zones us east1 b instances one         compute googleapis com projects test project zones us east1 b instances two         How to validate policies  Follow the  instructions  https   cloud google com docs terraform policy validation validate policies  to validate policies in your local or production environments      End to end workflow with sample constraint  In this section  you will apply a constraint that enforces IAM policy member domain restriction using  Cloud Shell  https   cloud google com shell     First click on this  link  https   console cloud google com cloudshell open cloudshell image gcr io graphite cloud shell images terraform latest cloudshell git repo https   github com tdesrosi policy tf resource change git  to open a new Cloud Shell session  The Cloud Shell session has Terraform pre installed and the Policy Library repository cloned  Once you have the session open  the next step is to copy over the sample IAM domain restriction constraint       cp samples constraints iam service accounts only yaml policies constraints      Let s take a look at this constraint       apiVersion  constraints gatekeeper sh v1beta1 kind  TFGCPIAMAllowedPolicyMemberDomainsConstraintV2 metadata    name  service accounts only   annotations      description        Checks that members that have been granted IAM roles belong to allowlisted       domains  Block IAM role bindings for non service accounts by domain  
      gserviceaccount com  spec    severity  high   parameters      domains          gserviceaccount com      It specifies that only members from gserviceaccount com domain can be present in an IAM policy  To verify that it works  let s attempt to create a project  Create the following Terraform  main tf  file       provider  google      version       1 20    project    your terraform provider project     resource  random id   proj      byte length   8    resource  google project   sample project      project id         validator   random id proj hex     name               config validator test project     resource  google project iam binding   sample iam binding      project      google project sample project project id     role       roles owner     members          user your email your domain              Make sure to specify your Terraform  provider project  https   www terraform io docs providers google getting started html  and email address  Then initialize Terraform and generate a Terraform plan       terraform init terraform plan  out test tfplan terraform show  json   test tfplan     tfplan json      Since your email address is in the IAM policy binding  the plan should result in a violation  Let s try this out       gcloud beta terraform vet tfplan json   policy library policy library      The Terraform validator should return a violation  As a test  you can relax the constraint to make the violation go away  Edit the  policy library policies constraints iam service accounts only yaml  file and append your email domain to the domains allowlist       apiVersion  constraints gatekeeper sh v1beta1 kind  TFGCPIAMAllowedPolicyMemberDomainsConstraintV2 metadata    name  service accounts only   annotations      description        Checks that members that have been granted IAM roles belong to allowlisted       domains  Block IAM role bindings for non service accounts by domain        gserviceaccount com  spec    severity  high   parameters      domains          
gserviceaccount com         your email domain com      Then run Terraform plan and validate the output again       terraform plan  out test tfplan terraform show  json   test tfplan     tfplan json gcloud beta terraform vet tfplan json   policy library policy library      The command above should result in no violations found      Contact Info  Questions or comments  Please contact tdesrosi google com for this project  or validator support google com for information about the terraform validator project "}
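The IAM domain restriction exercised in the workflow above can be sketched in Python against the plan JSON produced by `terraform show -json`. This is only an approximation of what the `TFGCPIAMAllowedPolicyMemberDomainsConstraintV2` constraint checks, not the actual Rego; `iam_domain_violations` is a hypothetical helper:

```python
def iam_domain_violations(plan, allowed_domains):
    """Return (address, member) pairs for google_project_iam_binding
    changes whose members fall outside the allowlisted domains."""
    violations = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "google_project_iam_binding":
            continue
        after = (rc.get("change") or {}).get("after") or {}
        for member in after.get("members", []):
            # Members look like "user:alice@example.com".
            domain = member.split("@")[-1]
            if not any(domain == d or domain.endswith("." + d)
                       for d in allowed_domains):
                violations.append((rc["address"], member))
    return violations
```

With `allowed_domains={"gserviceaccount.com"}`, the `user:your-email@your-domain` binding from the walkthrough would be flagged; adding your own domain to the set clears the violation, mirroring the edit made to the constraint YAML.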
{"questions":"GCP The folder structure below contains a TL DR explanation of each item s purpose We ll go into further detail below bash You ll notice that this repository contains a handful of folders each with different items It s confusing at first so let s dive into it First let s start with how the library is organized policy library tf resource change root Functional Principles of the Constraint Framework Folder Structure","answers":"# Functional Principles of the Constraint Framework\n\nYou'll notice that this repository contains a handful of folders, each with different items. It's confusing at first, so let's dive into it! First, let's start with how the library is organized.\n\n## Folder Structure\n\nThe folder structure below contains a TL;DR explanation of each item's purpose. We'll go into further detail below.\n\n```(bash)\npolicy-library-tf-resource-change (root)\/\n\u251c\u2500\u2500 docs\/\n\u2502   \u2514\u2500\u2500 *Contains documentation on this library*\n\u251c\u2500\u2500 lib\/\n\u2502   \u2514\u2500\u2500 *Contains shared rego functions*\n\u251c\u2500\u2500 policies\/\n\u2502   \u251c\u2500\u2500 constraints\/\n\u2502   \u2502   \u2514\u2500\u2500 *Contains constraint yaml files*\n\u2502   \u2514\u2500\u2500 templates\/\n\u2502       \u2514\u2500\u2500 *Contains constraint template yaml files*\n\u251c\u2500\u2500 samples\/\n\u2502   \u2514\u2500\u2500 constraints\/\n\u2502       \u2514\u2500\u2500 *Contains sample constraint yaml files (not checked at runtime)*\n\u251c\u2500\u2500 scripts\/\n\u2502   \u2514\u2500\u2500 *Contains helper scripts to assist with policy development*\n\u251c\u2500\u2500 validator\/\n\u2502   \u251c\u2500\u2500 *Contains rego policies (used in constraint template yaml files)*\n\u2502   \u251c\u2500\u2500 *files ending in `*_test.rego` are base unit testing files for their associated rego policies*\n\u2502   \u2514\u2500\u2500 test\/\n\u2502       \u2514\u2500\u2500 *Contains test data\/constraints used for unit 
testing*\n\u2514\u2500\u2500 Makefile *Allows user to use `make ...` commands for policy development*\n```\n\n## Basic Operation\n\nWhen you run `gcloud beta terraform vet`, a number of things happen. First, the resource being tested (i.e. your Terraform plan JSON file) is translated from its native Terraform schema to Cloud Asset Inventory (CAI) schema. Keep in mind that the data being passed in is exactly the same; it's just translated into a language that the validator can understand. Because the validator is also set up to perform ongoing policy validation on the environment, all resources are cast into the CAI schema to bring all data together. Don't be alarmed, however: our constraints and constraint templates are reworked to tell the validator to *only* look at terraform resource changes. We'll explore how that happens later on.\n\nOnce the data is in CAI format, `gcloud beta terraform vet` then initializes the constraints and templates in your policy library. Let's look at what each type of yaml file contains:\n\n| Type | Description |\n| -- | -- |\n| ConstraintTemplate | Describes how to integrate logic from a rego policy into rules that the validator checks. This file will describe how constraints that use it should be configured, and also provides the rego policy as an inlined yaml string field. You'll also notice the `target` definition, which reads as `validation.resourcechange.terraform.cloud.google.com`. This is the main difference between this policy library and the [existing CAI policy library](https:\/\/github.com\/GoogleCloudPlatform\/policy-library). **This is the part of the library that allows us to target terraform resource changes and skip current environment monitoring.** Ultimately, think of ConstraintTemplates as definitions that our corresponding constraint(s) must abide by. |\n| Constraint | Implements single rules that depend on a constraint template. 
Remember that constraint templates contain a schema that describes how to write constraints that depend on them. The actual constraint will contain the data that you'll be looking to test. For example, if you want to validate that terraform only creates IAM bindings for approved domains, you would create a constraint that passes in the *mode* `allowlist`, and a list of `members` (domains, in this case) that terraform is allowed to bind. The constraint tells the validator to fail the pipeline if terraform tries to bind an IAM role to a user outside of the domain(s) you've passed in. <br \/><br \/> You can create multiple constraints for any given ConstraintTemplate; you just need to make sure that the rules don't conflict with one another. For example, any allowlist\/denylist policy would be difficult to create multiple constraints for. The reason is that if one constraint is of type `allowlist` and the other is of type `denylist`, it's very easy to introduce overlaps between the sets that you create. By this logic, if *domain-a.com* is not a part of your policy member domain allowlist or denylist constraints, it'll automatically be denied. This is because allowlists are more restrictive than denylists: any domain not explicitly allowed is rejected. Even though the denylist never names *domain-a.com*, the allowlist still denies it. |\n\nIf this is still confusing to you, the [Gatekeeper documentation](https:\/\/github.com\/GoogleCloudPlatform\/policy-library) provides a detailed explanation of how Constraints and ConstraintTemplates work together. Yes, Gatekeeper is generally synonymous with Kubernetes resources, but it's simply an extension of Open Policy Agent. We use Gatekeeper's API in the terraform validator, and it's how the tool is able to interpret policies from the policy library.\n\n## Developing Terraform Constraints\n\nIn order to develop new policies, you must first understand the process for writing rego policies and how the constraint framework is used. 
The very first step is to collect some sample data. Terraform-based constraints operate on **resource change data**, which comes from the `resource_changes` key of [Terraform plan JSON](https:\/\/developer.hashicorp.com\/terraform\/internals\/json-format). This will be an array of objects that describe changes that terraform will try to make to the target environment. Here's an example that comes directly from one of the data files used in one of the unit tests in this library.\n\n```(json)\n \"resource_changes\": [\n        {\n            \"address\": \"google_project_iam_binding.iam-service-account-user-12345\",\n            \"mode\": \"managed\",\n            \"type\": \"google_project_iam_binding\",\n            \"name\": \"iam-service-account-user-12345\",\n            \"provider_name\": \"registry.terraform.io\/hashicorp\/google\",\n            \"change\": {\n                \"actions\": [\n                    \"create\"\n                ],\n                \"before\": null,\n                \"after\": {\n                    \"condition\": [],\n                    \"members\": [\n                        \"serviceAccount:service-12345@notiam.gserviceaccount.com\",\n                        \"user:bad@notgoogle.com\"\n                    ],\n                    \"project\": \"12345\",\n                    \"role\": \"roles\/iam.serviceAccountUser\"\n                },\n                \"after_unknown\": {\n                    \"condition\": [],\n                    \"etag\": true,\n                    \"id\": true,\n                    \"members\": [\n                        false,\n                        false\n                    ]\n                },\n                \"before_sensitive\": false,\n                \"after_sensitive\": {\n                    \"condition\": [],\n                    \"members\": [\n                        false,\n                        false\n                    ]\n                }\n            }\n        },\n        [...]\n    
]\n```\n\nYou can start to see the schema of each resource, which will help us with the next step. It's time to write the Rego policy, which you can put under the `validator\/` folder in the project.\n\nNote that Rego policies for terraform resource changes are slightly different than their CAI counterparts. Let's go through a sample policy that allows us to allowlist\/denylist certain IAM roles:\n\n```(rego)\npackage templates.gcp.TFGCPIAMAllowBanRolesConstraintV1\n\nviolation[{\n \"msg\": message,\n \"details\": metadata,\n}] {\n params := input.parameters\n resource := input.review\n resource.type == \"google_project_iam_binding\"\n not resource.change.actions[0] == \"delete\"\n role := resource.change.after.role\n matches_found = [r | r := config_pattern(role); glob.match(params.roles[_], [], r)]\n mode := object.get(params, \"mode\", \"allowlist\")\n target_match_count(mode, desired_count)\n count(matches_found) != desired_count\n message := output_msg(desired_count, resource.name, role)\n metadata := {\n  \"resource\": resource.name,\n  \"role\": role,\n }\n}\ntarget_match_count(mode) = 0 {\n mode == \"denylist\"\n}\ntarget_match_count(mode) = 1 {\n mode == \"allowlist\"\n}\noutput_msg(0, asset_name, role) = msg {\n msg := sprintf(\"%v is in the banned list of IAM policy for %v\", [role, asset_name])\n}\noutput_msg(1, asset_name, role) = msg {\n msg := sprintf(\"%v is NOT in the allowed list of IAM policy for %v\", [role, asset_name])\n}\nconfig_pattern(old_pattern) = \"**\" {\n old_pattern == \"*\"\n}\nconfig_pattern(old_pattern) = old_pattern {\n old_pattern != \"*\"\n}\n```\n\nThere are two parts to this policy. We have a `violation` object, which contains line-by-line logic statements. The way rego works is that the policy will run line-by-line, and will `break` if any of the conditions don't pass. For instance, if the `resource.type` is **not** \"google_project_iam_binding\", the rest of the rule will not be checked, and no violation gets reported. 
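To reason about the allowlist\/denylist counting outside of Rego, here is a rough Python sketch of the same rule, fed by the `resource_changes` array from a plan file (which you can produce with `terraform plan -out=tfplan` followed by `terraform show -json tfplan > plan.json`). This is illustrative only: the `violations` helper and its message strings are hypothetical mirrors of the rego above, and Python's `fnmatch` only approximates rego's delimiter-aware `glob.match`.

```python
import fnmatch

def config_pattern(pattern):
    # Mirror of the rego config_pattern helper: a bare "*" widens to "**"
    return "**" if pattern == "*" else pattern

def violations(resource_changes, roles, mode="allowlist"):
    """Approximate the TFGCPIAMAllowBanRolesConstraintV1 violation rule.
    Each failed condition skips the resource, mimicking how a rego rule
    body stops evaluating as soon as one expression is false."""
    desired_count = 0 if mode == "denylist" else 1  # target_match_count
    messages = []
    for resource in resource_changes:
        if resource.get("type") != "google_project_iam_binding":
            continue  # rego: rule body fails here, no violation reported
        if resource["change"]["actions"][0] == "delete":
            continue
        role = resource["change"]["after"]["role"]
        # roles comes from the constraint's parameters (input.parameters)
        matches = [p for p in roles if fnmatch.fnmatch(role, config_pattern(p))]
        if len(matches) != desired_count:
            verb = "NOT in the allowed" if desired_count == 1 else "in the banned"
            messages.append(f"{role} is {verb} list of IAM policy for {resource['name']}")
    return messages

plan_changes = [{
    "type": "google_project_iam_binding",
    "name": "iam-service-account-user-12345",
    "change": {"actions": ["create"],
               "after": {"role": "roles/iam.serviceAccountUser"}},
}]

# The binding's role is absent from the allowlist, so one violation fires.
print(violations(plan_changes, roles=["roles/logging.viewer"], mode="allowlist"))
```

Keep in mind that `gcloud beta terraform vet` evaluates the real rego policy itself; this sketch only models the counting logic that decides whether a violation is emitted.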
If you're familiar with Golang syntax, a `:=` means *set the variable equal to the value on the right side (declaration, assignment, and redeclaration)*. A `==` means *check the equality of the left and right side*. A `=` is for assignment **only**, meaning that it won't infer the type of the variable. It can be used outside of functions or violations, however.\n\nThe violation object will run with every `input.review` object (which is each object in the `resource_changes` array, evaluated one at a time). `input.parameters` will always be the same, as it contains the information passed in the `parameters` section of the constraint (AHA! That's why we use the constraint framework!). In the case of this policy, if the number of matches we get between the input object and the constraint does not match the desired count (determined by whether the mode of the constraint is `allowlist` or `denylist`), the `violation` will spit out a violation message, and an object containing metadata.\n\nThe second part of the policy is for functions that the policy uses in the violation. You'll see duplicates, like `target_match_count`, for example. These duplicative functions allow us to return a different value based on the input. In the case of `target_match_count`, if the mode is `denylist`, it will return a `0`. So, if we end up getting one hit between our `input.review` object and the constraint's roles, the actual count **will not** equal our desired count, and the violation will trigger and send a violation message to stdout.\n\n### Inlining Rego Policies\n\nThis is the easiest part of the development process. Once you've created a new policy, if you've cloned this repo, simply run `make build` to inline the policy into its accompanying constraint template. Remember the first line of the rego policy? `package templates.gcp.TFGCPIAMAllowBanRolesConstraintV1`? 
You'll need to make sure that you create a constraint template with the `kind` property under spec->crd->spec->names->kind set to `TFGCPIAMAllowBanRolesConstraintV1`. You can see this, and other details, in the ConstraintTemplate that accompanies this policy:\n\n```(yaml)\napiVersion: templates.gatekeeper.sh\/v1beta1\nkind: ConstraintTemplate\nmetadata:\n  name: tfgcpiamallowbanrolesconstraintv1\nspec:\n  crd:\n    spec:\n      names:\n        kind: TFGCPIAMAllowBanRolesConstraintV1\n      validation:\n        openAPIV3Schema:\n          properties:\n            mode:\n              description: \"Enforcement mode, defaults to allow\"\n              type: string\n              enum: [denylist, allowlist]\n            roles:\n              description: \"Roles to be allowed or banned\n                ex. roles\/owner; Wildcards (*) supported\"\n              type: array\n              items:\n                type: string\n  targets:\n    - target: validation.resourcechange.terraform.cloud.google.com\n      rego: |\n        #INLINE(\"validator\/my_rego_rule.rego\")\n        ... (rego code)\n        #ENDINLINE\n```\n\nYou can see that the ConstraintTemplate defines a set of properties that constraints built from this template must abide by. Look at a constraint that uses this template to get a better idea of how the two interact:\n\n```(yaml)\napiVersion: constraints.gatekeeper.sh\/v1beta1\nkind: TFGCPIAMAllowBanRolesConstraintV1\nmetadata:\n  name: iam-allow-roles\n  annotations:\n    description: Allow only the listed IAM role bindings to be created. 
This\n      constraint is member-agnostic.\nspec:\n  severity: high\n  match:\n    target: # {\"$ref\":\"#\/definitions\/io.k8s.cli.setters.target\"}\n      - \"organizations\/**\"\n    exclude: [] # optional, default is no exclusions\n  parameters:\n    mode: \"allowlist\"\n    roles:\n      - \"roles\/logging.viewer\"\n      - \"roles\/resourcemanager.projectIamAdmin\"\n```\n\nNotice that the constraint defines a set of parameters, including `mode` and `roles`. If you look at the ConstraintTemplate, you can see that these two fields are defined and described. `mode` is a string enum that *must* be either `denylist` or `allowlist`. `gcloud beta terraform vet` will actually error out if this is not upheld in the associated constraint(s).\n\n## Additional Resources\n\nDocumentation on the Constraint Framework and `gcloud beta terraform vet` is pretty widespread, but this document provides some application-specific information for using the `validation.resourcechange.terraform.cloud.google.com` target for validating terraform resource changes. Documentation on this use case does exist, and can be found [here](https:\/\/cloud.google.com\/docs\/terraform\/policy-validation\/create-terraform-constraints).\n\nWhen it comes time to create your own rego policies, I also recommend using the [Rego Playground](https:\/\/play.openpolicyagent.org\/) to first get the hang of the Rego language. Developing Rego in a local environment is really challenging, as debugging and tracing features must be integrated manually, usually with a log statement, like `trace(sprintf(\"Value: %s\", [value]))`. However, the very nature of how Rego runs makes this extremely challenging, because if your violation exits before you even get to a log line, the trace will never be printed to stdout, and you won't easily be able to see what's going wrong in your policies.\n\nIn my early development of this policy library, I created a couple of playgrounds in order to get the hang of Rego. 
Here's [an example](https:\/\/play.openpolicyagent.org\/p\/HzzUikhvQ4) of that, where I tested the logic for validating permissions on IAM Custom Roles. Take a look if you're interested! Although keep in mind that this wouldn't work if brought into the policy library, as this policy is unsupported by the Gatekeeper v1beta1 API Version. You would need to change the `deny` rule to `violation` and use updated Rego built-ins, like `object.get` instead of `get_default()`. For me, the output gave me helpful hints on what was wrong with my rules and utility functions.\n\n## Contact Info\n\nQuestions or comments? Please contact tdesrosi@google.com for this project, or validator-support@google.com for information about the terraform-validator project.","site":"GCP"}
{"questions":"GCP format and compressed using compression Application can be extended to support additional redis operations different encodings e g avro protobuf and Application Framework to execute various operations on redis to evaluate key performance metrics such as CPU Memory compression formats Check below for details Overview utilization Bytes transferred Time per command etc For complete list of metrics refer Framework currently supports few operations set get push pop with sample payload Payload is encoded in","answers":"## Overview\n  Application framework to execute various operations on redis to evaluate key performance metrics such as CPU & Memory\n  utilization, Bytes transferred, Time per command, etc. For the complete list of metrics, refer to [MemoryStore Redis Metrics](https:\/\/cloud.google.com\/memorystore\/docs\/redis\/supported-monitoring-metrics).\n\n  The framework currently supports a few operations (set\/get, push & pop) with a sample payload. Payload is encoded in\n  [message pack](https:\/\/msgpack.org\/index.html) format and compressed using [LZ4](https:\/\/lz4.org\/) compression.\n\n  The application can be extended to support additional redis operations, different encodings (e.g. avro, protobuf) and\n  compression formats. 
Check below [section](#ExtendApplication) for details.\n\n## Getting Started\n\n### System Requirements\n\n* Java 11\n* Maven 3\n* [gcloud CLI](https:\/\/cloud.google.com\/sdk\/gcloud)\n\n### Building Jar\n- Execute the below script from the project root directory to build uber jar:\n\n```bash\n$ .\/scripts\/build-jar.sh\n```\n Successful execution of the script will generate the jar in the path:```{project_dir}\/artifacts\/redis-benchmarks-${version}.jar```\n\n### Building Docker image\nDocker image can be built and pushed to [google cloud artifact registry](https:\/\/cloud.google.com\/artifact-registry).\n\nPrior to building the docker image, follow the below steps:\n- [Enable Artifact Registry](https:\/\/cloud.google.com\/artifact-registry\/docs\/enable-service)\n\n- Create a new repository called ```benchmarks``` using the command:\n```bash\n$ location=\"us-central1\"\n$ gcloud artifacts repositories create benchmarks \\\n      --repository-format=docker \\\n      --location=${location} \\\n      --description=\"Contains docker images to execute benchmark tests.\"\n```\n\n- Execute the below script from the project root directory to build & push docker image:\n```bash\n$ .\/scripts\/build-image.sh\n```\nSuccessful execution of the script will push the image to artifact registry.\n\n- Image can be pulled using below command:\n```bash\n$ project_id=<<gcp-project>>\n$ docker pull us-central1-docker.pkg.dev\/${project_id}\/benchmarks\/redis-benchmark:latest\n```\n\n## Executing Application\n\n### Application options\nFollowing are the options that can be supplied while executing the application:\n\n| Name                 | Description                                                                                   | Optional | Default Value |\n|----------------------|:----------------------------------------------------------------------------------------------|:---------|:--------------|\n| Project              | Cloud Project identifier                           
                                            | N        |               |\n| hostname             | Redis host name                                                                               | Y        | localhost     |\n| port                 | Redis port number                                                                             | Y        | 6379          |\n| runduration_minutes  | Amount of time (in minutes) to run the application                                            | Y        | 1             |\n| cpu_scaling_factor   | Determines the parallelism (i.e. tasks) by multiplying cpu_scaling_factor with available cores | Y        | 1             |\n| write_ratio          | Determines the percent of writes compared to reads. Default is 20% writes and 80% reads.      | Y        | 0.2           |\n| task_types           | Task types to be executed. Can be supplied 1 or more as comma separated values.               | N        |               |\n\n### Run application\n```bash\n$ PROJECT_ID=<<gcp-project>>\n$ RUNDURATION_MINUTES=1\n$ CPU_SCALING_FACTOR=1\n$ WRITE_RATIO=0.2\n$ REDIS_HOST=localhost\n$ REDIS_PORT=6379\n$ TASK_TYPES=\"SetGet,ListOps\"\n\n$ java -jar .\/artifacts\/redis-benchmarks-1.0.jar \\\n--project=${PROJECT_ID} \\\n--runduration_minutes=${RUNDURATION_MINUTES} \\\n--cpu_scaling_factor=${CPU_SCALING_FACTOR} \\\n--write_ratio=${WRITE_RATIO} \\\n--task_types=${TASK_TYPES} \\\n--hostname=${REDIS_HOST} \\\n--port=${REDIS_PORT}\n```\n\n### Output\nUpon completion, the application will display the number of writes, reads, cache hits and misses for each task.\n\n![Output.png](img\/Output.png)\n\n## <a name=\"ExtendApplication\">Extending Application<\/a>\n\nThe application can be extended to support additional payloads, encodings and compression formats.\n\n### Adding new payload\nCreate a new class similar to [profile](.\/src\/main\/java\/com\/google\/cloud\/pso\/benchmarks\/redis\/model\/Profile.java) that\nimplements the Payload interface\n\n### Using 
different encoding\nCreate a new class similar to [MessagePack](.\/src\/main\/java\/com\/google\/cloud\/pso\/benchmarks\/redis\/serde\/MessagePack.java)\nthat implements the EncDecoder interface\n\n### Using different Compression\nCreate a new class similar to [LZ4Compression](.\/src\/main\/java\/com\/google\/cloud\/pso\/benchmarks\/redis\/compression\/LZ4Compression.java)\nthat implements the Compression interface\n\n### Adding additional tasks\nCreate a new class similar to [SetGetTask](.\/src\/main\/java\/com\/google\/cloud\/pso\/benchmarks\/redis\/tasks\/SetGetTask.java)\nthat implements the RedisTask interface\n\n- Payload can be generated using [PayloadGenerator](.\/src\/main\/java\/com\/google\/cloud\/pso\/benchmarks\/redis\/PayloadGenerator.java)\nthat accepts payload type, encoding and compression as input parameters.\n\n- Tasks can be created with appropriate payloads and can be supplied to the workload executor. Check the **createRedisTask** method\nin [WorkloadExecutor](.\/src\/main\/java\/com\/google\/cloud\/pso\/benchmarks\/redis\/WorkloadExecutor.java) for reference.\n\n\n## Disclaimer\n\nThis project is not an official Google project. 
It is not supported by Google and disclaims all warranties as to its\nquality, merchantability, or fitness for a particular purpose.","site":"GCP"}
{"questions":"GCP kafka2avro batch mode pipeline This example contains two Dataflow pipelines reads objects from Kafka convert them to Avro and write the generates a set of objects and write them to Kafka This is a output to Google Cloud Storage This is a streaming mode pipeline This example shows how to use Apache Beam and SCIO to read objects from a Kafka topic and serialize them encoded as Avro files in Google Cloud Storage","answers":"# kafka2avro\n\nThis example shows how to use Apache Beam and SCIO to read objects from a Kafka\ntopic, and serialize them encoded as Avro files in Google Cloud Storage.\n\nThis example contains two Dataflow pipelines:\n* [Object2Kafka](src\/main\/scala\/com\/google\/cloud\/pso\/kafka2avro\/Object2Kafka.scala): generates a set of objects and writes them to Kafka. This is a\n  batch mode pipeline.\n* [Kafka2Avro](src\/main\/scala\/com\/google\/cloud\/pso\/kafka2avro\/Kafka2Avro.scala): reads objects from Kafka, converts them to Avro, and writes the\n  output to Google Cloud Storage. This is a streaming mode pipeline.\n\n\n## Configuration\n\nBefore compiling and generating your package, you need to change some options in\n[`src\/main\/resources\/application.conf`](src\/main\/resources\/application.conf):\n\n* `broker`: String with the address of the Kafka brokers.\n* `dest-bucket`: The name of the bucket where the Avro files will be written.\n* `dest-path`: The directory structure where the Avro files will be written\n  (e.g. 
blank to write in the top level dir in the bucket, or anything like\n  a\/b\/c).\n* `kafka-topic`: The name of the topic where the objects are written to, or read\n  from.\n* `num-demo-objects`: Number of objects that will be generated by the Object2Kafka\n  pipeline; these objects can be read with the Kafka2Avro pipeline to test that\n  everything is working as expected.\n\nThe configuration file follows [the HOCON format](https:\/\/github.com\/lightbend\/config\/blob\/master\/README.md#using-hocon-the-json-superset).\n\nHere is a sample configuration file with all the options set:\n\n```bash\nbroker = \"1.2.3.4:9092\"\ndest-bucket = \"my-bucket-in-gcs\"\ndest-path = \"persisted\/fromkafka\/avro\/\"\nkafka-topic = \"my_kafka_topic\"\nnum-demo-objects = 500  # comments are allowed in the config file\n```\n\n## Pre-requirements\n\n### Build tool\n\nThis example is written in Scala and uses SBT as its build tool.\n\nYou need to have SBT >= 1.0 installed. You can download SBT from https:\/\/www.scala-sbt.org\/\n\nThe Scala version is 2.12.8. If you have the JDK > 1.8 installed, SBT should automatically download the Scala compiler.\n\n## Compile\n\nRun `sbt` in the top sources folder.\n\nInside sbt, download all the dependencies:\n\n```\nsbt:kafka2avro> update\n```\n\nand then compile:\n\n```\nsbt:kafka2avro> compile\n```\n\n## Deploy and run\n\nIf you have managed to compile the code, you can generate a JAR package to be\ndeployed on Dataflow, with:\n\n```\nsbt:kafka2avro> pack\n```\n\nThis will generate a set of JAR files in `target\/pack\/lib`.\n\n### Running the Object2Kafka pipeline\n\nThis is a batch pipeline, provided just as an example to populate Kafka and test the\nstreaming pipeline.\n\nOnce you have generated the JAR file using the `pack` command inside SBT, you\ncan now launch the job in Dataflow to populate Kafka with some demo\nobjects. Using Java 1.8, run the following command. 
Notice that you have to set\nthe project id, and a location in a GCS bucket to store the JARs imported by Dataflow:\n\n```\nCLASSPATH=\"target\/pack\/lib\/*\" java com.google.cloud.pso.kafka2avro.Object2Kafka --exec.mainClass=com.google.cloud.pso.kafka2avro.Object2Kafka --project=YOUR_PROJECT_ID --stagingLocation=\"gs:\/\/YOUR_BUCKET\/YOUR_STAGING_LOCATION\" --runner=DataflowRunner\n```\n\n### Running the Kafka2Avro pipeline\n\nThis is a streaming pipeline. It will keep running unless you cancel it. The\ndefault windowing policy is to group messages every 2 minutes, in a fixed\nwindow. To change the policy, please see\n[the function `windowIn` in `Kafka2Avro.scala`](src\/main\/scala\/com\/google\/cloud\/pso\/kafka2avro\/Kafka2Avro.scala#L60-L70).\n\nOnce you have generated the JAR file using the `pack` command inside SBT, you\ncan now launch the job in Dataflow to read the objects from Kafka and write\nthem to GCS as Avro files. Using Java 1.8, run the following command. Notice that you have to set\nthe project id, and a location in a GCS bucket to store the JARs imported by\nDataflow:\n\n```\nCLASSPATH=\"target\/pack\/lib\/*\" java com.google.cloud.pso.kafka2avro.Kafka2Avro --exec.mainClass=com.google.cloud.pso.kafka2avro.Kafka2Avro --project=YOUR_PROJECT_ID --stagingLocation=\"gs:\/\/YOUR_BUCKET\/YOUR_STAGING_LOCATION\" --runner=DataflowRunner\n```\n\nPlease remember that the machine running the JAR may need to have connectivity\nto the Kafka cluster in order to retrieve some metadata, prior to launching the\npipeline in Dataflow.\n\n**Remember that this is a streaming pipeline, it will keep running forever until\nyou cancel or stop it.**\n\n### Wrong filenames for some dependencies\n\nIn some cases, some dependencies may be downloaded with wrong filenames. For\ninstance, containing symbols that need to be escaped. 
Importing these JARs into\nthe Dataflow job will fail.\n\nIf your Dataflow job fails before it is launched because it\ncannot copy some dependencies, change the name of the offending files so they\ndon't contain such symbols. For instance:\n\n```\nmv target\/pack\/lib\/netty-codec-http2-\\[4.1.25.Final,4.1.25.Final\\].jar target\/pack\/lib\/netty-codec-http2.jar\n```\n\n## Continuous Integration\n\nThis example includes [a configuration file for Cloud Build](cloudbuild.yaml),\nso you can use it to run the unit tests with every commit to your repository.\nTo use this configuration file:\n\n* Add your sources to a Git repository (either in Bitbucket, Github or Google\n  Cloud Source).\n* Configure a trigger in Google Cloud Build linked to your Git repository.\n* Set the path for the configuration file to [`cloudbuild.yaml`](cloudbuild.yaml).\n\nThe included configuration file performs the following steps:\n* Download a cache for Ivy2 from a Google Cloud Storage bucket named\n  YOURPROJECT_cache, where `YOURPROJECT` is your GCP project id.\n* Compile and test the Scala code.\n* Generate a package.\n* Upload the new Ivy2 cache to the same bucket as in the first step.\n* Upload the generated package and all its dependencies to a bucket named\n  YOURPROJECT_pkgs, where `YOURPROJECT` is your GCP project id.\n\nSo these default steps will try to write to and read from two different buckets in\nGoogle Cloud Storage. 
Please either create these buckets in your GCP project, or\nchange the configuration.\n\nPlease note that you need to build and include the `scala-sbt` Cloud Builder in\norder to use this configuration file.\n\n* Make sure you have the Google Cloud SDK configured with your credentials and\n  project\n* Download the sources from\n  [GoogleCloudPlatform\/cloud-builders-community\/tree\/master\/scala-sbt](https:\/\/github.com\/GoogleCloudPlatform\/cloud-builders-community\/tree\/master\/scala-sbt)\n* And then in the `scala-sbt` sources dir, run `gcloud builds submit\n  . --config=cloudbuild.yaml` to add the builder to your GCP project. You only\n  need to do this once.","site":"GCP"}
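The one-off `mv` rename in the kafka2avro README above can be generalized. A minimal sketch (the `rename_bracketed_jars` helper name and the bracket-stripping `sed` rule are assumptions for illustration, not part of the original example):

```shell
# Sketch: rename JARs whose filenames contain a bracketed version range
# (e.g. "netty-codec-http2-[4.1.25.Final,4.1.25.Final].jar") so that
# Dataflow can stage them without escaping issues.
rename_bracketed_jars() {
  for jar in "$1"/*\[*\]*.jar; do
    [ -e "$jar" ] || continue                              # nothing to fix
    mv "$jar" "$(printf '%s' "$jar" | sed 's/-\[[^]]*\]//')"
  done
}
```

After running `pack`, a caller could invoke it as `rename_bracketed_jars target/pack/lib`.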
{"questions":"GCP This approach genrally has following benefits With multi folder repositories it s possible to combine similar IaC into a single repository and or centralize the mangement of multiple business units code Organizing code across multiple folders within a single version control repositiroy such as github is a very common practice and we re referring this as multi folder repository Selective deployment approach lets you find the folders changed within your repository and only run the logic for changed folders Selective deployment Reduced overhead in managing multiple CI CD pipelines","answers":"# Selective deployment\n\nOrganizing code across multiple folders within a single version control repository such as Github is a very common practice, and we refer to this as a multi-folder repository.  \n\nThe **Selective deployment** approach lets you find the folders changed within your repository and only run the logic for changed folders.  \n\nWith multi-folder repositories, it's possible to combine similar IaC into a single repository and\/or centralize the management of multiple business units' code.   
\nThis approach generally has the following benefits:\n\n- Reduced overhead in managing multiple CI\/CD pipelines\n- Better code visibility\n- Reduced overhead in managing multiple ACLs for similar code\n\nExample multi-folder repository structure:\n\n```txt\n\u2514\u2500\u2500 single-repo\n    \u251c\u2500\u2500 build-files\n    \u2502   \u2514\u2500\u2500 compile.sh\n    \u251c\u2500\u2500 build.yaml\n    \u2514\u2500\u2500 user-resources\n        \u251c\u2500\u2500 business-unit1\n        \u2502   \u251c\u2500\u2500 dev-env\n        \u2502   \u251c\u2500\u2500 prod-env\n        \u2502   \u2514\u2500\u2500 qa-env\n        \u2514\u2500\u2500 business-unit2\n            \u251c\u2500\u2500 dev-env\n            \u251c\u2500\u2500 prod-env\n            \u2514\u2500\u2500 qa-env\n```\n\n``` Note: A mono repo is always a multi-folder repository; however, the reverse is not always true. Check out the https:\/\/www.hashicorp.com\/blog\/terraform-mono-repo-vs-multi-repo-the-great-debate article for more information on mono vs multi repos for IaC.```\n\n\n## Solution\n\nThis can be addressed in multiple different ways, and our approach is as below:  \n\n### Step 1: Find the commit associated with the last successful build.\n\n```sh\nnth_successful_commit() {\n  local n=$1  # n=1 --> Last successful commit.\n  local trigger_name=$2\n  local project=$3\n\n  local trigger_id=$(get_trigger_value $trigger_name $project \"id\")\n  local nth_successful_build=$(gcloud builds list --filter \"buildTriggerId=$trigger_id AND STATUS=(SUCCESS)\" --format \"value(id)\" --limit=$build_find_limit --project $project | awk \"NR==$n\") || exit 1\n\n  local nth_successful_commit=$(gcloud builds describe $nth_successful_build --format \"value(substitutions.COMMIT_SHA)\" --project $project) || exit 1\n  echo $nth_successful_commit\n}\n```\n\n### Step 2: Find the difference between the current commit and the last successful commit.  
\n\n```sh\nprevious_commit_sha=$(nth_successful_commit 1 $apply_trigger_name $project) || exit 1\n\ngit diff --name-only ${previous_commit_sha} ${commit_sha} | sort -u > $logs_dir\/diff.log || exit 1\n```\nThis step will give you a list of files\/folders that were modified after the commit associated with the last successful build.  \n\n### Step 3: Iterate over changed folders.\n  \nYou can now iterate over only the changed folders listed by Step 2 in the $logs_dir\/diff.log file.\n\n## Implementation steps\n\n### Pre-requisites\n\n- A cloud source repository (or any other source control repository).\n- A cloud build configuration file with basic steps to be executed on a single folder of the repository.\n\n```Note: We are assuming that the pipeline is built on Google Cloud Build. If the pipeline is built on other platforms, you might need to retrofit this solution accordingly.```\n\n### Set up Cloud Build\nAdd the right values for the below CloudBuild substitution variables in the `cloudbuild.yaml` file.  \n\n```sh\n  _TF_SA_EMAIL: ''\n  _PREVIOUS_COMMIT_SHA: ''\n  _RUN_ALL_PROJECTS: 'false'\n```\n\n- `_TF_SA_EMAIL` is the GCP service_account with the necessary IAM permissions that terraform impersonates. The Cloud Build default SA should have [roles\/iam.serviceAccountTokenCreator](https:\/\/cloud.google.com\/iam\/docs\/service-accounts#token-creator-role) on the `_TF_SA_EMAIL` service_account.\n- `_PREVIOUS_COMMIT_SHA` is the github commit_sha used for explicitly checking delta changes between this commit and the latest commit, instead of automatically detecting the last successful commit based on a successful Cloud Build execution. This is especially required for the first execution, when there is no successful Cloud Build execution yet.  \n- `_RUN_ALL_PROJECTS` forces execution through all folders. 
Once in a while this is required:\n    - for deploying a change that impacts all folders, such as a terraform module commonly used by code in all\/multiple folders.\n    - for detecting and fixing any configuration drift, especially when manual changes are performed.\n\n## Important points\n\nUse an unshallow copy of the git clone.  \n\nCloud Build by default uses a shallow copy of the repository (i.e. only the code associated with the commit that triggered the current build). A shallow copy prevents us from performing git operations like git diff. However, we can use the following step in the cloud build to fetch an unshallow copy:\n\n```yaml\n  - id: 'unshallow'\n    name: gcr.io\/cloud-builders\/git\n    args: ['fetch', '--unshallow']\n```","site":"GCP"}
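Step 3 of the selective-deployment approach above (iterating over changed folders) can be sketched as a small helper. `changed_folders` is a hypothetical name, and the two-path-level cut matches the `user-resources/<business-unit>` layout shown in the example repository structure:

```shell
# Sketch: reduce the "git diff --name-only" listing from Step 2 to the
# unique folders (two path levels, e.g. user-resources/business-unit1)
# so per-folder deployment logic runs only where something changed.
changed_folders() {
  cut -d/ -f1-2 "$1" | sort -u
}
```

A caller could then loop over the result, e.g. `for dir in $(changed_folders "$logs_dir/diff.log"); do ...; done`, running the deployment logic once per changed folder.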
{"questions":"GCP GCP Project to host all the resources A terraform script is provided to setup all the required resources Introduction GCP resources This example implements the infrastructure required to deploy an end to end using platform MLOps with Vertex AI Infra setup","answers":"# MLOps with Vertex AI - Infra setup\n\n## Introduction\nThis example implements the infrastructure required to deploy an end-to-end [MLOps process](https:\/\/services.google.com\/fh\/files\/misc\/practitioners_guide_to_mlops_whitepaper.pdf) using the [Vertex AI](https:\/\/cloud.google.com\/vertex-ai) platform.\n\n\n##  GCP resources\nA terraform script is provided to set up all the required resources:\n\n- GCP Project to host all the resources\n- Isolated VPC network and a subnet to be used by Vertex and Dataflow (using a Shared VPC is also possible). \n- Firewall rule to allow the internal subnet communication required by Dataflow\n- Cloud NAT required to reach the internet from the different computing resources (Vertex and Dataflow)\n- GCS buckets to host Vertex AI and Cloud Build artifacts.\n- BigQuery Dataset where the training data will be stored\n- Service account `mlops-[env]@` with the minimum permissions required by Vertex and Dataflow\n- Service account `github` to be used by Workload Identity Federation, to federate the Github identity.\n- Secret to store the Github SSH key to access the CI\/CD code repo (you will set the secret value later, so it can be used).\n\n![MLOps project description](.\/images\/mlops_projects.png \"MLOps project description\")\n\n## Pre-requirements\n\n### User groups\n\nUser groups provide a stable frame of reference that allows decoupling the final set of permissions from the stage where entities and resources are created, and their IAM bindings defined. These groups should be created before launching Terraform.\n\nWe use the following groups to control access to resources:\n\n- *Data Scientists* (gcp-ml-ds@<company.org>). 
They create ML pipelines in the experimentation environment.\n- *ML Engineers* (gcp-ml-eng@<company.org>). They handle and run the different environments, with access to all resources in order to troubleshoot possible issues with pipelines. \n\nThese groups are not suitable for production grade environments. You can configure the group names through the `groups` variable. \n\n### Git environment for the ML Pipelines\n\nClone the Google Cloud Professional Services [repo](https:\/\/github.com\/GoogleCloudPlatform\/professional-services) to a temp directory: \n```\ngit clone https:\/\/github.com\/GoogleCloudPlatform\/professional-services.git\ncd professional-services\/\n```\n\nSet up your new Github repo using the Github web console or CLI.\n\nCopy the `vertex_mlops_enterprise` folder to your local folder, including the Github actions, hidden dirs and files:\n\n```\ncp -r .\/examples\/vertex_mlops_enterprise\/  <YOUR LOCAL FOLDER>\n```\n\nCommit the files in the main branch (`main`):\n```\ngit init\ngit add *\ngit commit -m \"first commit\"\ngit branch -M main\ngit remote add origin https:\/\/github.com\/<ORG>\/<REPO>.git\ngit push -u origin main\n```\nYou will need to configure the Github organization and repo name in the `github` variable.\n\n### Branches\nCreate the additional branches in Github (`dev`, `staging`, `prod`). This can also be done from the UI (`Create branch: dev from main`).\n\nPull the remote repo with `git pull`.\n\nCheck out the dev branch with `git checkout dev`.\n\nReview the `*.yml` files in `.github\/workflows` and modify them if needed. These files should be automatically updated when Terraform is launched.\n\nReview the `*.yaml` files in the `build` folder and modify them if needed. 
These files should be automatically updated when Terraform is launched.\n\n## Instructions\n### Deploy the different environments\n\nYou will need to repeat this process for each of the different environments (01-development, 02-staging, 03-production):\n\n- Go to the environment folder, e.g. `cd ..\/terraform\/01-dev`\n- It is recommended to have a remote state file. In this case, make sure to create the right `providers.tf` file and set the name of the bucket that you want to use as the storage for your Terraform state. This should be an existing bucket that your user has access to.\n- Create a `terraform.tfvars` file and specify the required variables. You can use the `terraform.tfvars.sample` as a starting point:\n\n```tfm\nproject_create = {\n    billing_account_id = \"000000-123456-123456\"\n    parent             = \"folders\/111111111111\"\n}\nproject_id          = \"creditcards-dev\"\n```\n- Make sure you fill in the following parameters:\n  - `project_create.billing_account_id`: Billing account\n  - `project_create.parent`: Parent folder where the project will be created.\n  - `project_id`: Project id; references an existing project if `project_create` is null.\n- Make sure you have the right authentication set up (application default credentials or a service account key)\n- Run `terraform init` and `terraform apply`\n- It is possible that errors like `googleapi: Error 400: Service account xxxx does not exist.` appear. This is due to dependencies on the Project IAM authoritative bindings of the service accounts. 
In this case, re-run `terraform apply`.\n\n## What's next?\nContinue with [configuring the Git integration with Cloud Build](.\/02-GIT_SETUP.md) and [launching the MLOps pipeline](.\/03-MLOPS.md).\n\n<!-- BEGIN TFDOC -->\n<!-- END TFDOC -->","site":"GCP"}
{"questions":"GCP Using dbt and Cloud Composer for managing BigQuery example code Cloud Composer is a fully managed data workflow orchestration service that empowers you to author schedule and monitor pipelines Code Examples 1 Basic This repository demonstrate using the dbt to manage tables in BigQuery and using Cloud Composer for schedule the dbt run DBT Data Building Tool is a command line tool that enables data analysts and engineers to transform data in their warehouses simply by writing select statements There are two sets of example","answers":"# Using dbt and Cloud Composer for managing BigQuery example code   \n\ndbt (data build tool) is a command-line tool that enables data analysts and engineers to transform data in their warehouses simply by writing select statements.   \nCloud Composer is a fully managed data workflow orchestration service that empowers you to author, schedule, and monitor pipelines.   \n    \nThis repository demonstrates using dbt to manage tables in BigQuery and using Cloud Composer to schedule the dbt runs.   \n\n## Code Examples\nThere are two sets of examples:   \n1. Basic   \n    The basic example demonstrates the minimum configuration that you need to run dbt on Cloud Composer\n2. Optimized   \n    The optimized example demonstrates splitting the dbt run per model,   \n    implementing incremental dbt models, and using the Airflow execution date to handle backfills.\n\n## Technical Requirements\nThese GCP services will be used in the example code:   \n- Cloud Composer\n- BigQuery\n- Google Cloud Storage (GCS)\n- Cloud Build\n- Google Container Registry (GCR)\n- Cloud Source Repository (CSR)\n\n## High Level Flow\nThis diagram explains the example solution's flow:   \n<img src=\"img\/dbt-on-cloud-composer-diagram.PNG\" width=\"700\">\n\n1. The code starts from a dbt project stored in a repository. (The example is under the [basic or optimized]\/dbt-project folder)\n2. 
Any change to the dbt project will trigger a Cloud Build run\n3. Cloud Build will create\/update an image in GCR and export the dbt docs to GCS\n4. The Airflow DAG is deployed to Cloud Composer (The example is under the [basic or optimized]\/dag folder)\n5. The dbt run is triggered using a KubernetesPodOperator that pulls the image from step \\#3\n6. At the end of the process the BigQuery objects will be created\/updated (i.e. datasets and tables)\n\n## How to run\n\n### Prerequisites\n1. Cloud Composer environment   \n    https:\/\/cloud.google.com\/composer\/docs\/how-to\/managing\/creating\n2. Set 3 ENVIRONMENT VARIABLES in Cloud Composer (AIRFLOW_VAR_BIGQUERY_LOCATION, AIRFLOW_VAR_RUN_ENVIRONMENT, AIRFLOW_VAR_SOURCE_DATA_PROJECT)   \n    https:\/\/cloud.google.com\/composer\/docs\/how-to\/managing\/environment-variables\n3. Cloud Source Repository (or any git provider)   \n    Store the code from dbt-project in this dedicated repository   \n    The repository should contain a dbt_project.yml file (check the example code under the [basic or optimized]\/dbt-project folder)    \n    Note that the dedicated dbt-project repository is not this example code repository (GitHub repo)\n4. Cloud Build triggers   \n    Trigger the build from the dbt project repository   \n    https:\/\/cloud.google.com\/build\/docs\/automating-builds\/create-manage-triggers   \n\n    Set the trigger's substitution variables: _GCS_BUCKET and _DBT_SERVICE_ACCOUNT   \n    _GCS_BUCKET: A GCS bucket id for storing dbt documentation files.   \n    _DBT_SERVICE_ACCOUNT: A service account to run dbt from Cloud Build.     \n5. BigQuery API enabled\n6. Service account to run dbt commands\n7. Kubernetes Secret to be bound to the service account   \n    https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/secret\n\n    Alternatively, instead of using a Kubernetes Secret, Workload Identity federation can be used (recommended approach). 
More details in the **Authentication** section below.\n\n### Profiles for running the dbt project\nIn \/dbt-project\/.dbt\/profiles.yml you will find 2 options to run dbt:   \n1. local    \n    You can run the dbt project using your local machine or Cloud Shell.\n    To do that, run     \n    ```\n    gcloud auth application-default login   \n    ```\n\n    Trigger the dbt run by using this command:   \n    ```\n    dbt run --vars '{\"project_id\": [Your Project id], \"bigquery_location\": \"us\", \"execution_date\": \"1970-01-01\",\"source_data_project\": \"bigquery-public-data\"}' --profiles-dir .dbt\n    ```\n2. remote   \n    This option is for running dbt using a service account,   \n    for example from Cloud Build and Cloud Composer.     \n    Check cloudbuild.yaml and dag\/dbt_with_kubernetes.py to see how to use this option\n\n### Run the code\nAfter all the prerequisites are prepared, you will have:   \n1. A dbt-project repository\n2. An Airflow DAG to run dbt\n\nHere are the follow-up steps for running the code:\n1. Push the code to the dbt-project repository and make sure Cloud Build is triggered and successfully creates the Docker image\n2. In the Cloud Composer UI, run the DAG (e.g. dbt_with_kubernetes.py)\n3. If successful, check the tables in the BigQuery console\n\nWith this mechanism, you have 2 independent runs.   \nUpdating the dbt-project, including models, schemas and configurations, will run Cloud Build to create the Docker image.   \nThe DAG, as the dbt scheduler, will run the dbt-project from the latest Docker image available.\n\n### Passing variables from Cloud Composer to dbt run\nYou can pass variables from Cloud Composer to the dbt run.    \nAs an example, in this code we configure the BigQuery dataset location in the US as part of the DAG.   
\n```\ndefault_dbt_vars = {\n        \"project_id\": project,\n\n        # Example on using Cloud Composer's variable to be passed to dbt\n        \"bigquery_location\": Variable.get(\"bigquery_location\"),\n        \n        \"key_file_dir\": '\/var\/secrets\/google\/key.json',\n        \"source_data_project\": Variable.get(\"source_data_project\")\n    }\n```\n\nIn the dbt script, you can use the variable like this:\n```\nlocation: \"{{ var('bigquery_location') }}\"\n```\n\n### Authentication\nWhen provisioning the dbt runtime environment using KubernetesPodOperator, there are two available options for authenticating the dbt process. To achieve better separation of concerns and follow good security practices, the identity of the dbt process (Service Account) should be different from the Cloud Composer Service Account.\n\nAuthentication options are:\n\n- Workload Identity federation [**Recommended**]\n\nA better way to manage the identity and authentication for K8s workloads is to avoid using SA keys as Secrets and to use the Workload Identity mechanism instead [[documentation](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/workload-identity)]\n\nDepending on the Composer version, you might need to enable Workload Identity in the cluster and configure the node pool to use the GKE_METADATA metadata server to request short-lived auth tokens.\n\n- Service Account key stored as Kubernetes Secret\n\nCreate the SA key using the following command (Note: SA keys are very sensitive and easy to misuse, which can be a security risk. 
They should be kept protected and only be used under special circumstances)\n```bash\ngcloud iam service-accounts keys create key-file \\\n    --iam-account=sa-name@project-id.iam.gserviceaccount.com\n```\n\nThen save the key JSON file as *key.json* and configure the *kubectl* command-line tool to access the GKE cluster used by the Cloud Composer environment.\n\n```bash\ngcloud container clusters get-credentials gke-cluster-name --zone cluster-zone --project project-name\n```\n\nOnce authenticated, create the dbt secret in the default namespace by running\n```\nkubectl create secret generic dbt-sa-secret --from-file key.json=.\/key.json\n```\n\nFrom then on, in the DAG code, when creating a container, the service account key will be extracted from the K8s Secret and mounted under the \/var\/secrets\/google path in the container filesystem, where it is available for dbt at runtime.\n\n- Composer 1\n\nTo enable Workload Identity on a new cluster, run the following command:\n```bash\ngcloud container clusters create CLUSTER_NAME \\\n    --region=COMPUTE_REGION \\\n    --workload-pool=PROJECT_ID.svc.id.goog\n```\nCreate a new node pool (recommended) or update the existing one (this might break the Airflow setup and require extra steps):\n\n```bash\ngcloud container node-pools create NODEPOOL_NAME \\\n    --cluster=CLUSTER_NAME \\\n    --workload-metadata=GKE_METADATA\n```\n\n- Composer 2\n\nNo further actions are required, as GKE is already using Workload Identity and, thanks to the Autopilot mode, there's no need to manage the node pool manually.\n\nTo let the DAG use Workload Identity, the following steps are required:\n\n1) Create a namespace for the Kubernetes service account:\n```bash\nkubectl create namespace NAMESPACE\n```\n2) Create a Kubernetes service account for your application to use:\n```bash\nkubectl create serviceaccount KSA_NAME \\\n    --namespace NAMESPACE\n```\n3) Assuming that the dbt-sa already exists and has the right permissions to trigger BigQuery jobs, the 
special binding has to be added to allow the Kubernetes service account to impersonate the IAM service account:\n\n```bash\ngcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com \\\n    --role roles\/iam.workloadIdentityUser \\\n    --member \"serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE\/KSA_NAME]\"\n```\n\n4) Using the *kubectl* tool, annotate the Kubernetes service account with the email address of the IAM service account:\n\n```bash\nkubectl annotate serviceaccount KSA_NAME \\\n    --namespace NAMESPACE \\\n    iam.gke.io\/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com\n```\n\nTo make use of Workload Identity in the DAG, replace the existing KubernetesPodOperator call with one that uses Workload Identity.\n\n1) Composer 1\nUse the example configuration from the snippet below:\n\n```python\nKubernetesPodOperator(\n(...)\n    namespace='dbt-namespace',\n    service_account_name=\"dbt-k8s-sa\",\n    affinity={\n        'nodeAffinity': {\n            'requiredDuringSchedulingIgnoredDuringExecution': {\n                'nodeSelectorTerms': [{\n                    'matchExpressions': [{\n                        'key': 'cloud.google.com\/gke-nodepool',\n                        'operator': 'In',\n                        'values': [\n                            'dbt-pool',\n                        ]\n                    }]\n                }]\n            }\n        }\n    }\n(...)\n).execute(context)\n```\n\nThe affinity configuration lets GKE schedule the pod in one of the specific node pools that are set up to use Workload Identity.\n\n2) Composer 2\n\nIn the case of Composer 2 (Autopilot), the configuration is simpler:\n\n```python\nKubernetesPodOperator(\n(...)\n    namespace='dbt-tasks',\n    service_account_name=\"dbt-k8s-sa\"\n(...)\n).execute(context)\n```\n\nWhen using the Workload Identity option, there is no need to store the IAM SA key as a Secret in GKE, which greatly reduces the 
maintenance efforts and is generally considered more secure, as there is no need to generate and export the SA Key.","site":"GCP"}
{"questions":"GCP FairingXGBoost this notebook demonstrate how to complete you can use Kubeflow Fairing to deploy your trained model as a prediction endpoint Kubeflow Fairing Examples Train an XGBoost model in a local notebook Train an XGBoost model remotely on Kubeflow cluster with Kubeflow Fairing ML models in a hybrid cloud environment By using Kubeflow Fairing and adding a few lines of code you can run your ML In the repo we provided three notebooks to demonstrate the usage of Kubeflow Faring training job locally or in the cloud directly from Python code or a Jupyter notebook After your training job is is a Python package that streamlines the process of building training and deploying machine learning","answers":"# Kubeflow Fairing Examples\n`Kubeflow Fairing` is a Python package that streamlines the process of building, training, and deploying machine learning\n(ML) models in a hybrid cloud environment. By using Kubeflow Fairing and adding a few lines of code, you can run your ML\ntraining job locally or in the cloud, directly from Python code or a Jupyter notebook. 
After your training job is\ncomplete, you can use Kubeflow Fairing to deploy your trained model as a prediction endpoint.\n\nIn the repo, we provided three notebooks to demonstrate the usage of Kubeflow Faring:\n- Fairing_XGBoost: this notebook demonstrate how to\n    * Train an XGBoost model in a local notebook,\n    * Train an XGBoost model remotely on Kubeflow cluster, with Kubeflow Fairing\n    * Train an XGBoost model remotely on AI Platform training, with Kubeflow Fairing\n    * Deploy a trained model to Kubeflow, and call the deployed endpoint for predictions, with Kubeflow Fairing\n\n- Fairing_Tensorflow_Keras: this notebook demonstrate how to\n    * Train an Keras model in a local notebook,\n    * Train an Keras model remotely on Kubeflow cluster (distributed), with Kubeflow Fairing\n    * Train an Keras model remotely on AI Platform training, with Kubeflow Fairing\n    * Deploy a trained model to Kubeflow, with Kubeflow Fairing\n\n- Fairing_Py_File: this notebook introduces you to using Kubeflow Fairing to train the model, which is developed\nusing tensorflow or keras and enclosed in python files\n    * Train an Tensorflow model remotely on Kubeflow cluster (distributed), with Kubeflow Fairing\n    * Train an Tensorflow model remotely on AI Platform training, with Kubeflow Fairing\n\n**Note that Kubeflow Fairing doesn't require kubeflow cluster as pre-requisite.\nKubeflow Fairing + AI platform is a valid combination**\n\n## Setups:\n### Prerequisites\nBefore you follow the instructions below to deploy your own kubeflow cluster,\nyou need a Google cloud project if you don't have one. 
You can find detailed instructions\n[here](https:\/\/cloud.google.com\/dataproc\/docs\/guides\/setup-project).\n\n- Make sure the following APIs & services are enabled.\n    * Cloud Storage\n    * Cloud Machine Learning Engine\n    * Cloud Source Repositories API (for CI\/CD integration)\n    * Compute Engine API\n    * GKE API\n    * IAM API\n    * Deployment Manager API\n\n- Configure the project id and bucket id as environment variables.\n  ```bash\n  $ export PROJECT_ID=[your-google-project-id]\n  $ export GCP_BUCKET=[your-google-cloud-storage-bucket-name]\n  $ export DEPLOYMENT_NAME=[your-deployment-name]\n  ```\n\n- Deploy a Kubeflow cluster on GCP.\nRunning training and serving jobs on Kubeflow requires a Kubeflow deployment. Please refer to the link\n[here](https:\/\/www.kubeflow.org\/docs\/gke\/deploy\/) to set up your Kubeflow deployment in your environment.\n\n### Setup Environment\nPlease refer to the link [here](https:\/\/www.kubeflow.org\/docs\/fairing\/gcp-local-notebook\/) to properly\nset up the environment. 
The key steps are summarized as follows:\n- Create a service account\n    ```bash\n    export SA_NAME=[service-account-name]\n    gcloud iam service-accounts create ${SA_NAME}\n    gcloud projects add-iam-policy-binding ${PROJECT_ID} \\\n      --member serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \\\n      --role 'roles\/editor'\n    gcloud iam service-accounts keys create ~\/key.json \\\n      --iam-account ${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com\n    ```\n\n- Authorize Docker to access your container registry\n    ```bash\n    gcloud auth configure-docker\n    ```\n\n- Update the local kubeconfig (for submitting jobs to the Kubeflow cluster)\n    ```bash\n    export CLUSTER_NAME=${DEPLOYMENT_NAME} # this is the deployment name or the kubernetes cluster name\n    export ZONE=us-central1-c\n    gcloud container clusters get-credentials ${CLUSTER_NAME} --zone ${ZONE}\n    ```\n\n- Set the environment variable GOOGLE_APPLICATION_CREDENTIALS\n    ```bash\n    export GOOGLE_APPLICATION_CREDENTIALS=~\/key.json\n    ```\n\n- Install the latest version of Kubeflow Fairing\n    ```bash\n    pip install git+https:\/\/github.com\/kubeflow\/fairing@master\n    ```\n### Running Notebook\nPlease note that the above configuration is required for notebook services running outside the Kubeflow environment.\nThe examples demonstrated here are also fully tested on notebook services outside the Kubeflow cluster, which\nmeans they could run from\n- a notebook on your personal computer\n- a notebook on AI Platform, Google Cloud Platform\n- essentially a notebook in any environment outside the Kubeflow cluster\n\nFor notebooks running inside the Kubeflow cluster (for example, JupyterHub deployed together with Kubeflow), the\nenvironment variables (e.g. service account, project, etc.) should have been pre-configured while\nsetting up the cluster. The fairing package will also be pre-installed together with the deployment. 
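Before running any of the notebooks outside a Kubeflow cluster, it can help to confirm that the variables exported in the setup steps above are actually present. A minimal sanity-check sketch (the variable names come from the steps above; the `check_fairing_env` helper itself is hypothetical):

```python
import os

# Variables exported in the setup steps above.
REQUIRED_VARS = [
    "PROJECT_ID",
    "GCP_BUCKET",
    "DEPLOYMENT_NAME",
    "GOOGLE_APPLICATION_CREDENTIALS",
]

def check_fairing_env(environ=os.environ):
    """Return the list of setup variables that are still missing or broken."""
    missing = [name for name in REQUIRED_VARS if not environ.get(name)]
    key_path = environ.get("GOOGLE_APPLICATION_CREDENTIALS", "")
    if key_path and not os.path.isfile(os.path.expanduser(key_path)):
        # The variable is set but does not point at an existing key file.
        missing.append("GOOGLE_APPLICATION_CREDENTIALS (file not found)")
    return missing

if __name__ == "__main__":
    problems = check_fairing_env()
    if problems:
        print("Missing configuration:", ", ".join(problems))
    else:
        print("Environment looks ready for Kubeflow Fairing.")
```

If the key file path is wrong, Fairing and the Google Cloud clients only fail later with less obvious authentication errors, so checking up front saves a debugging round-trip.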
**The only thing\nto be aware of is that Docker is usually not installed, which requires `cluster` as the builder option, as\nexplained in the following section.**\n\n## Concepts of Kubeflow Fairing\nThere are three major concepts in Kubeflow Fairing: preprocessor, builder, and deployer.\n\n### Preprocessor\nThe preprocessor defines how Kubeflow Fairing will map a set of inputs to a context when building the container image for your training job. The preprocessor can convert input files, exclude some files, and change the entrypoint for the training job.\n\n* **python**: Copies the input files directly into the container image.\n* **notebook**: Converts a notebook into a runnable Python file. Strips out the non-Python code.\n* **full_notebook**: Runs a full notebook as-is, including bash scripts or non-Python code.\n* **function**: FunctionPreProcessor preprocesses a single function. It sets as the command a function_shim that calls the function directly.\n\n### Builder\nThe builder defines how Kubeflow Fairing will build the container image for your training job, and the location of the container registry to store the container image in. There are different strategies that make sense for different environments and use cases.\n\n* **append**: Creates a Dockerfile by appending your code as a new layer on an existing Docker image. This builder requires less time to create a container image for your training job, because the base image is not pulled to create the image and only the differences are pushed to the container image registry.\n* **cluster**: Builds the container image for your training job in the Kubernetes cluster. 
This option is useful for building jobs in environments where a Docker daemon is not present, for example a hosted notebook.\n* **docker**: Uses a local Docker daemon to build and push the container image for your training job to your container image registry.\n\n### Deployer\nThe deployer defines where Kubeflow Fairing will deploy and run your training job. The deployer uses the image produced by the builder to deploy and run your training job on Kubeflow or Kubernetes.\n\n* **Job**: Uses a Kubernetes Job resource to launch your training job.\n* **TfJob**: Uses the TFJob component of Kubeflow to launch your TensorFlow training job.\n* **GCPJob**: Handles submitting a training job to GCP.\n* **Serving**: Serves a prediction endpoint using Kubernetes deployments and services.","site":"GCP","answers_cleaned":"
"}
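The `function` preprocessor described above packages a single function and sets a function_shim as the container command to call it directly. A rough, framework-free sketch of that mechanism (this illustrates the idea only, not Fairing's actual implementation; all names are hypothetical):

```python
import importlib

def function_shim(module_name: str, function_name: str):
    """Generic entrypoint: resolve the packaged training function and call it.

    Inside the container, a shim like this would be the command, so the image
    needs no job-specific entrypoint script.
    """
    module = importlib.import_module(module_name)
    fn = getattr(module, function_name)
    return fn()

# Example "training" function that the preprocessor would package.
def train():
    return "model trained"

if __name__ == "__main__":
    # In a real container the shim would receive the module/function names
    # as arguments, e.g.: python shim.py --module train_module --function train
    print(function_shim("__main__", "train"))
```

The point of the shim is that the container image stays generic: only the target module and function name change between jobs, not the entrypoint.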
{"questions":"GCP you may not use this file except in compliance with the License https www apache org licenses LICENSE 2 0 Copyright 2022 Google LLC You may obtain a copy of the License at Unless required by applicable law or agreed to in writing software Licensed under the Apache License Version 2 0 the License","answers":"<!--\nCopyright 2022 Google LLC\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    https:\/\/www.apache.org\/licenses\/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n-->\n\n# Deploy the Custom Dataflow template by following these steps\n\n## Overview\n\nThe purpose of this walkthrough is to create\n[Custom Dataflow templates](https:\/\/cloud.google.com\/dataflow\/docs\/concepts\/dataflow-templates).\n\nThe value of Custom Dataflow templates is that they allow us to execute\nDataflow jobs without installing any code.  This is useful to enable Dataflow\nexecution using an automated process or to enable others without technical\nexpertise to run jobs via a user-friendly guided user interface.\n\n## 1. Select or create project\n\n**It is recommended to go through this walkthrough using a new temporary Google\nCloud project, unrelated to any of your existing Google Cloud projects.**\n\nSelect or create a project to begin.\n\n<walkthrough-project-setup><\/walkthrough-project-setup>\n\n## 2. Set default project\n\n```sh\ngcloud config set project <walkthrough-project-id\/>\n```\n\n## 3. 
Setup environment\n\nBest practice recommends a Dataflow job to:\n1) Utilize a worker service account to access the pipeline's files and resources\n2) Grant only the minimally necessary IAM permissions to the worker service account\n3) Enable only the minimally required Google Cloud services\n\nTherefore, this step will:\n\n- Create service accounts\n- Provision IAM credentials\n- Enable required Google Cloud services\n\nRun the terraform workflow in\nthe [infrastructure\/01.setup](infrastructure\/01.setup) directory.\n\nTerraform will ask your permission before provisioning resources.\nIf you agree with terraform provisioning resources,\ntype `yes` to proceed.\n\n```sh\nDIR=infrastructure\/01.setup\nterraform -chdir=$DIR init\nterraform -chdir=$DIR apply -var='project=<walkthrough-project-id\/>'\n```\n\n## 4. Provision network\n\nBest practice recommends a Dataflow job to:\n1) Utilize a custom network and subnetwork\n2) Use only the minimally necessary network firewall rules\n3) Building Python custom templates additionally requires the use of a\n[Cloud NAT](https:\/\/cloud.google.com\/nat\/docs\/overview); per best practice we\nexecute the Dataflow job using private IPs\n\nTherefore, this step will:\n\n- Provision a custom network and subnetwork\n- Provision firewall rules\n- Provision a Cloud NAT and its dependent Cloud Router\n\nRun the terraform workflow in\nthe [infrastructure\/02.network](infrastructure\/02.network) directory.\n\nTerraform will ask your permission before provisioning resources.\nIf you agree with terraform provisioning resources,\ntype `yes` to proceed.\n\n```sh\nDIR=infrastructure\/02.network\nterraform -chdir=$DIR init\nterraform -chdir=$DIR apply -var='project=<walkthrough-project-id\/>'\n```\n\n## 5. 
Provision data pipeline IO resources\n\nThe Apache Beam example that our Dataflow template executes is a derived word\ncount for both [Java](https:\/\/github.com\/apache\/beam\/blob\/master\/examples\/java\/src\/main\/java\/org\/apache\/beam\/examples\/MinimalWordCount.java)\nand [Python](https:\/\/github.com\/apache\/beam\/blob\/master\/sdks\/python\/apache_beam\/examples\/wordcount_minimal.py).\n\nThe word count example requires a source [Google Cloud Storage](https:\/\/cloud.google.com\/storage) bucket.\nTo make the example interesting, we copy all the files from\n`gs:\/\/apache-beam-samples\/shakespeare\/*` to a custom bucket in our project.\n\nTherefore, this step will:\n- Provision a Cloud Storage bucket\n- Create Cloud Storage objects to read from in the pipeline\n\nRun the terraform workflow in\nthe [infrastructure\/03.io](infrastructure\/03.io) directory.\n\nTerraform will ask your permission before provisioning resources.\nIf you agree with terraform provisioning resources,\ntype `yes` to proceed.\n\n```sh\nDIR=infrastructure\/03.io\nterraform -chdir=$DIR init\nterraform -chdir=$DIR apply -var='project=<walkthrough-project-id\/>'\n```\n\n## 6. Provision the Dataflow template builder\n\nWe will use [Cloud Build](https:\/\/cloud.google.com\/build) to build the\ncustom Dataflow template.  There are advantages to using Cloud Build to build\nour custom Dataflow template, instead of performing the necessary commands on\nour local machine.  Cloud Build connects to our version control,\n[GitHub](https:\/\/GitHub.com) in this example, so that any changes made to\na specific branch will automatically trigger a new build of our Dataflow\ntemplate.\n\nTherefore, this step will:\n\n- Provision a Cloud Build trigger that will:\n    1. Run the language-specific build process, e.g. gradle shadowJar, go build, etc.\n    2. Execute the `gcloud dataflow flex-template` command with relevant arguments.\n\n### 6.1. 
Fork the repository into your own GitHub organization or personal account\n\nIn order to benefit from [Cloud Build](https:\/\/cloud.google.com\/build), the service\nrequires that we own this repository; it will not work with an arbitrary repository, even\nif it is public.\n\nTherefore, complete these steps before proceeding:\n1) [Fork the repository](https:\/\/github.com\/GoogleCloudPlatform\/professional-services\/fork)\n2) [Connect forked repository to Cloud Build](https:\/\/console.cloud.google.com\/cloud-build\/triggers\/connect)\n\n### 6.2 Execute terraform module to provision Cloud Build trigger\n\nFirst, set your GitHub organization or username:\n\n```sh\nGITHUB_REPO_OWNER=<change me>\n```\n\nNext, set expected defaults.  (_Note: Normally it makes sense to default terraform\nvariables instead of doing this._)\n\n```sh\nGITHUB_REPO_NAME=professional-services\nWORKING_DIR_PREFIX=examples\/dataflow-custom-templates\n```\n\nRun the terraform workflow in\nthe [infrastructure\/04.template](infrastructure\/04.template) directory.\n\nTerraform will ask your permission before provisioning resources.\nIf you agree with terraform provisioning resources,\ntype `yes` to proceed.\n\n```sh\nDIR=infrastructure\/04.template\nterraform -chdir=$DIR init\nterraform -chdir=$DIR apply -var=\"project=$(gcloud config get-value project)\" -var=\"github_repository_owner=$GITHUB_REPO_OWNER\" -var=\"github_repository_name=$GITHUB_REPO_NAME\" -var=\"working_dir_prefix=$WORKING_DIR_PREFIX\"\n```\n\n## 7. 
Run Cloud Build Trigger\n\nNavigate to [cloud-build\/triggers](https:\/\/console.cloud.google.com\/cloud-build\/triggers).\nYou should see a Cloud Build trigger listed for each language of this example.\nClick the `RUN` button next to the created Cloud Build trigger to manually execute the\ncustom-template build for your language of choice.\n\nSee [Create Manual Triggers](https:\/\/cloud.google.com\/build\/docs\/automating-builds\/create-manual-triggers?hl=en#running_manual_triggers)\nfor more information.\n\nThis step will take several minutes to complete.\n\n## 8. Execute the Dataflow Template\n\n### 1. Start the Dataflow Job creation form\n\nThere are multiple ways to run a Dataflow Job from a custom template.  We will\nuse the Google Cloud Web UI.\n\nTo start the process, navigate to [dataflow\/createjob](https:\/\/console.cloud.google.com\/dataflow\/createjob).\n\n### 2. Select Custom Template\n\nSelect `Custom Template` from the `Dataflow template` drop-down menu.  Then,\nclick the `BROWSE` button and navigate to the bucket with the name that starts\nwith `dataflow-templates-`.  Within this bucket, select the JSON file object\nthat represents the template details.  You should see a JSON file for each\nof the Cloud Build triggers you ran to create the custom template.\n\n### 3. Complete Dataflow Job template UI form\n\nThe Google Cloud console will further prompt for required fields such as Job\nname and any required fields for the custom Dataflow template.\n\n### 4. Run the template\n\nWhen you are satisfied with the values provided to the custom Dataflow template,\nclick the `RUN` button.\n\n### 5. Monitor the Dataflow Job\n\nNavigate to [dataflow\/jobs](https:\/\/console.cloud.google.com\/dataflow\/jobs) to locate the job\nyou just created.  
Clicking on the job will let you navigate to the job\nmonitoring screen.","site":"GCP","answers_cleaned":"
"}
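The JSON file selected in the Dataflow console above is the template spec written by `gcloud dataflow flex-template build`; it embeds user-facing metadata describing the template's parameters. A rough sketch of assembling such metadata (field names follow the Dataflow template metadata format; the helper and the example values are illustrative, not taken from this walkthrough's build output):

```python
import json

def build_template_metadata(name, description, parameters):
    """Assemble the user-facing metadata portion of a template spec.

    Each parameter entry carries the name, label, and help text that the
    Dataflow job-creation form renders for the user.
    """
    return {
        "name": name,
        "description": description,
        "parameters": [
            {
                "name": p["name"],
                "label": p["label"],
                "helpText": p["helpText"],
                "isOptional": p.get("isOptional", False),
            }
            for p in parameters
        ],
    }

metadata = build_template_metadata(
    "Word Count",  # illustrative, matching this walkthrough's example pipeline
    "Counts words read from a Cloud Storage bucket.",
    [{"name": "input", "label": "Input files", "helpText": "gs:// path to read."}],
)
print(json.dumps(metadata, indent=2))
```

This metadata is what drives the guided form in step 3 above: the console prompts for exactly the parameters the template declares.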
{"questions":"GCP 1 Cloud Storage Entities creation and update for Dialogflow This module is an example how to create and update entities for Dialogflow Recommended Reading Technology Stack 1 Dialogflow 1 Cloud Functions","answers":"# Entities creation and update for Dialogflow\nThis module is an example of how to create and update entities for Dialogflow.\n\n## Recommended Reading\n[Entities Options](https:\/\/cloud.google.com\/dialogflow\/docs\/entities-options)\n\n## Technology Stack\n1. Cloud Storage\n1. Cloud Functions\n1. Dialogflow\n\n## Programming Language\nPython 3\n\n## Project Structure\n```\n.\n\u2514\u2500\u2500 dialogflow_webhook_bank_example\n \u251c\u2500\u2500 main.py # Implementation of examples of how to load entities in Dialogflow\n \u251c\u2500\u2500 entities.json # File with entities to be loaded, in JSON format\n \u251c\u2500\u2500 requirements.txt # Required libraries for this example\n```\n## Setup Instructions\n ### Project Setup\n Instructions on how to set up your project for this example can be found [here](https:\/\/cloud.google.com\/dialogflow\/docs\/quick\/setup).\n\n ### Dialogflow Agent Setup\n Build an agent by following the instructions [here](https:\/\/cloud.google.com\/dialogflow\/docs\/quick\/build-agent).\n\n ### Cloud Storage Setup\n Upload the entities.json file to a bucket by following the instructions [here](https:\/\/cloud.google.com\/storage\/docs\/quickstart-console#create_a_bucket).\n\n ### Cloud Functions Setup\n This implementation is deployed on GCP using Cloud Functions.\n More info [here](https:\/\/cloud.google.com\/functions\/docs\/concepts\/overview).\n\n To run the Python scripts on GCP, the `gcloud` command-line tool from the Google Cloud SDK is needed.\n Refer to the [installation](https:\/\/cloud.google.com\/sdk\/install) page for the appropriate\n instructions depending on your platform.\n\n Note that this project has been tested on a Unix-based environment.\n\n After installing, make sure to initialize your Cloud project:\n 
```\n $ gcloud init\n```\n\n## Usage\n\n### Create entities one by one\nUse `EntityTypesClient.create_entity_type` to create entities one by one.\n\n#### More Info\n[EntityType proto](https:\/\/github.com\/googleapis\/googleapis\/blob\/551cf1e6e3addcc63740427c4f9b40dedd3dac27\/google\/cloud\/dialogflow\/v2\/entity_type.proto#L200)\n\n[Client for Dialogflow API - EntityTypeClient.create_entity_type](https:\/\/dialogflow-python-client-v2.readthedocs.io\/en\/latest\/_modules\/dialogflow_v2\/gapic\/entity_types_client.html#EntityTypesClient.create_entity_type)\n\n### Example\nRun the sample using the gcloud utility as follows:\n\n```\n      $ gcloud functions call entities_builder --data '{\n          \"entities\": [{\n             \"display_name\":\n                 \"saving-account-types\",\n             \"kind\": \"KIND_MAP\",\n             \"entities\": [{\n                 \"value\": \"saving-account-types\",\n                 \"synonyms\": [\n                     \"saving\",\n                     \"saving account\",\n                     \"child saving\",\n                     \"IRA\",\n                     \"CD\",\n                     \"student saving\"]\n             }]\n         }, {\n             \"display_name\":\n                 \"checking-account-types\",\n             \"kind\": \"KIND_MAP\",\n             \"entities\": [{\n                 \"value\":\n                     \"checking-account-types\",\n                 \"synonyms\": [\n                     \"checking\", \"checking account\", \"student checking account\",\n                     \"student account\", \"business checking account\", \"business account\"\n                 ]\n             }]\n         }, {\n             \"display_name\": \"account_types\",\n             \"kind\": \"KIND_LIST\",\n             \"entities\": [\n                 {\n                     \"value\": \"@saving-account-types:saving-account-types\",\n                     \"synonyms\": [\n                         
\"@saving-account-types:saving-account-types\"\n                     ]\n                 },\n                 {\n                     \"value\": \"@checking-account-types:checking-account-types\",\n                     \"synonyms\": [\n                         \"@checking-account-types:checking-account-types\"\n                     ]\n                 },\n                 {\n                     \"value\": \"@sys.date-period:date-period @saving-account-types:saving-account-types\",\n                     \"synonyms\": [\n                         \"@sys.date-period:date-period @saving-account-types:saving-account-types\"\n                     ]\n                 },\n                 {\n                     \"value\": \"@sys.date-period:date-period @checking-account-types:checking-account-types\",\n                     \"synonyms\": [\n                         \"@sys.date-period:date-period @checking-account-types:checking-account-types\"\n                     ]\n                 }\n             ]\n         }]\n    }'\n```\n\n### Create entities in batch\nUse the EntityTypesClient.batch_update_entity_types to create or update entities in batch.\n\n#### More Info\n[Client for Dialogflow API - EntityTypeClient.batch_update_entity_types](https:\/\/dialogflow-python-client-v2.readthedocs.io\/en\/latest\/_modules\/dialogflow_v2\/gapic\/entity_types_client.html#EntityTypesClient.batch_update_entity_types)\n\n[EntityTypeBatch proto](https:\/\/github.com\/googleapis\/googleapis\/blob\/551cf1e6e3addcc63740427c4f9b40dedd3dac27\/google\/cloud\/dialogflow\/v2\/entity_type.proto#L533)\n\n[BatchUpdateEntityTypesRequest proto](https:\/\/github.com\/googleapis\/googleapis\/blob\/master\/google\/cloud\/dialogflow\/v2\/entity_type.proto#L397)\n\n#### Examples\n##### Using entity_type_batch_uri\nPass the URI of a Google Cloud Storage file containing the entity types to update or create.\nThe URI must start with \"gs:\/\/\".\nThe entities.json file is an example of a JSON file that can be 
uploaded to Cloud Storage and passed to the function.\n```\n    $ gcloud functions call entities_builder --data '{ \"bucket\": \"gs:\/\/<bucket_name>\/entities.json\"}'\n```\n##### Using entity_type_batch_inline\nFor each entity type in the batch:\n- The `name` is the unique identifier of the entity type.\n- If `name` is specified, we update an existing entity type.\n- If `name` is not specified, we create a new entity type.\n```\n   $ gcloud functions call entities_builder --data '{\n       \"entities_batch\": {\n           \"entity_types\":[\n               {\n                   \"name\": \"5201cee0-ddfb-4f7c-ae94-fff87189d13c\",\n                   \"display_name\":\n                     \"saving-account-types\",\n                   \"kind\": \"KIND_MAP\",\n                   \"entities\": [{\n                       \"value\": \"saving-account-types\",\n                       \"synonyms\": [\n                           \"saving\",\n                           \"saving account\",\n                           \"child saving\",\n                           \"IRA\",\n                           \"CD\",\n                           \"student saving\",\n                           \"senior saving\"]\n                   }]\n               },\n               {\n                   \"display_name\":\n                     \"checking-account-types\",\n                   \"kind\": \"KIND_MAP\",\n                   \"entities\": [{\n                       \"value\":\n                         \"checking-account-types\",\n                       \"synonyms\": [\n                           \"checking\", \"checking account\", \"student checking account\",\n                           \"student account\", \"business checking account\", \"business account\"\n                       ]\n                   }]\n               },\n               {\n                   \"display_name\": \"account_types\",\n                   \"kind\": \"KIND_LIST\",\n                   \"entities\": [\n                 
      {\n                           \"value\": \"@saving-account-types:saving-account-types\",\n                           \"synonyms\": [\n                               \"@saving-account-types:saving-account-types\"\n                           ]\n                       },\n                       {\n                           \"value\": \"@checking-account-types:checking-account-types\",\n                           \"synonyms\": [\n                               \"@checking-account-types:checking-account-types\"\n                           ]\n                       },\n                       {\n                           \"value\": \"@sys.date-period:date-period @saving-account-types:saving-account-types\",\n                           \"synonyms\": [\n                               \"@sys.date-period:date-period @saving-account-types:saving-account-types\"\n                           ]\n                       },\n                       {\n                           \"value\": \"@sys.date-period:date-period @checking-account-types:checking-account-types\",\n                           \"synonyms\": [\n                               \"@sys.date-period:date-period @checking-account-types:checking-account-types\"\n                           ]\n                       }\n                   ]\n               }\n           ]\n       }\n}'\n```\n\n# Entities Definition\n```\n    \u2514\u2500\u2500  main\n       \u251c\u2500\u2500 creates a map entity\n       \u251c\u2500\u2500 creates a composite entity\n       \u251c\u2500\u2500 updates a map entity\n```\n\nBelow is the definition of the entities.\n\n## Map entities\n#### entity name: saving-account-types\n\nDefine synonyms: true\n```\n      {\n        \"value\": \"saving-account-types\",\n        \"synonyms\": [\n          \"saving\",\n          \"saving account\",\n          \"child saving\",\n          \"IRA\",\n          \"CD\"\n        ]\n      }\n```\n\n#### entity name: checking-account-types\n\nDefine synonyms: 
true\n```\n      {\n        \"value\": \"checking-account-types\",\n        \"synonyms\": [\n          \"checking\",\n          \"checking account\",\n          \"student checking account\",\n          \"student account\",\n          \"business checking account\",\n          \"business account\"\n        ]\n      }\n```\n## Composite entities\n### entity name: account-types\n```\n      {\n        \"value\": \"@saving-account-type:saving-account-type\",\n        \"synonyms\": [\n          \"@saving-account-type:saving-account-type\"\n        ]\n      },\n\n      {\n        \"value\": \"@checking-account-type:checking-account-type\",\n        \"synonyms\": [\n          \"@checking-account-type:checking-account-type\"\n        ]\n      },\n\n      {\n        \"value\": \"@sys.date-period:date-period @saving-account-type:saving-account-type\",\n        \"synonyms\": [\n          \"@sys.date-period:date-period @saving-account-type:saving-account-type\"\n        ]\n      },\n\n      {\n        \"value\": \"@sys.date-period:date-period @checking-account-type:checking-account-type\",\n        \"synonyms\": [\n          \"@sys.date-period:date-period @checking-account-type:checking-account-type\"\n        ]\n      }\n```\n\n# References\n[Client for Dialogflow API](https:\/\/dialogflow-python-client-v2.readthedocs.io\/en\/latest\/gapic\/v2\/api.html#dialogflow_v2.EntityTypesClient)\n\n[EntityType proto](https:\/\/github.com\/googleapis\/googleapis\/blob\/master\/google\/cloud\/dialogflow\/v2\/entity_type.proto)\n\n[Protocol Buffers Tutorial](https:\/\/developers.google.com\/protocol-buffers\/docs\/pythontutorial)","site":"GCP"}
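The one-by-one flow in the README above (an `entities_builder` function receiving the JSON payload and issuing one `EntityTypesClient.create_entity_type` call per entity type) can be sketched in Python. This is a minimal, hypothetical sketch, not the repository's `main.py`: `build_entity_type` and the injectable `client` parameter are illustration names, and with `client=None` the conversion logic runs without GCP credentials.

```python
# Hypothetical sketch of the entities_builder flow (names assumed,
# not taken from the repository's main.py).

def build_entity_type(spec):
    """Translate one entity-type dict from the request payload into the
    arguments expected by EntityTypesClient.create_entity_type."""
    return {
        "display_name": spec["display_name"],
        "kind": spec["kind"],  # "KIND_MAP" or "KIND_LIST"
        "entities": [
            {"value": e["value"], "synonyms": e["synonyms"]}
            for e in spec.get("entities", [])
        ],
    }

def entities_builder(payload, client=None, parent=None):
    """Create each entity type in the payload one by one.

    In the deployed Cloud Function, `client` would be a
    dialogflow_v2.EntityTypesClient and `parent` the agent resource name
    (projects/<project_id>/agent); here they are injectable so the
    payload conversion can run standalone.
    """
    built = [build_entity_type(spec) for spec in payload["entities"]]
    if client is not None:
        for entity_type in built:
            client.create_entity_type(parent=parent, entity_type=entity_type)
    return built
```

The batch path would instead hand the whole list to `EntityTypesClient.batch_update_entity_types`, either inline or via the `gs://` URI, as described in the README.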
{"questions":"GCP Automatic Subordinate CA activation using Root CA in the CAS This repository contains sample Terraform resource definitions for deploying several Root and Subordinate CA provisioning It demonstrates several Certificate Authority Service features and provides examples Certificate Authority Service Demo of Terraform configuration for the following CAS features related Certificate Authority Service CAS resources to the Google Cloud","answers":"# Certificate Authority Service Demo\n\nThis repository contains sample Terraform resource definitions for deploying several \nrelated Certificate Authority Service (CAS) resources to Google Cloud.\n\nIt demonstrates several Certificate Authority Service features and provides examples\nof Terraform configuration for the following CAS features:\n\n* Root and Subordinate CA provisioning\n* Automatic Subordinate CA activation using Root CA in the CAS\n* Configuration example for manual subordinate CA activation\n* Multi-regional Subordinate CA deployment\n* CA configuration with Cloud HSM signing keys, including an example for imported keys\n* Application team domain ownership validation using CAS Certificate Templates and conditional IAM policies\n* CAS CA Pool throughput scaling and a load test script for certificate request load generation\n* CAS API activation\n\nThe following diagram shows the resources deployed by this project and the resulting CA hierarchy structure:\n\n![Demo Deployment](images\/deployment.png?raw=true)\n\n\n## Pre-requisites\n\nThe deployment presumes and relies upon an existing Google Cloud Project with an attached, active Billing account.\n\nTo perform a successful deployment, your Google Cloud account needs the `Project Editor` role in the target \nGoogle Cloud project.\n\nUpdate the Google Cloud project ID in the [terraform.tfvars](.\/terraform.tfvars) file by setting the `project_id` variable\nto the ID of the target Google Cloud project before proceeding with the execution.\n\n## 
Demonstration\n\nThe Terraform project in this repository defines input variables that can either be edited in the `variables.tf` file directly or passed on the Terraform command line.\n\nThe project deploys the Google Cloud resources by default into the regions defined by the `location1` and `location2` variables. \nYou can change that by copying the `terraform.tfvars.sample` file to `terraform.tfvars` and setting alternative values there.\n\nInitialize Terraform and deploy the Google Cloud resources:\n```\nterraform init\nterraform plan\nterraform apply\n```\n\n\n### Provisioned resources\n\nThe created CAS resources become visible in the Certificate Authority Service [section](https:\/\/console.cloud.google.com\/security\/cas\/caPools) \nof the Cloud Console.\n\n### Domain ownership validation\n\n1. ACME and Non-ACME service accounts are created in [cas-template.tf](.\/cas-template.tf).\n2. [Load the Python cryptography library](https:\/\/cloud.google.com\/kms\/docs\/crypto#macos)\n```\nexport CLOUDSDK_PYTHON_SITEPACKAGES=1\n```\n3. Set environment variables to the desired values, for example:\n```\nexport PROJECT_ID=my_project_id\nexport LOCATION=europe-west3\nexport CA_POOL=acme-sub-pool-europe\n```\n\n4. The Non-ACME account should NOT be able to create a certificate in the acme.com domain:\n```\ngcloud privateca certificates create \\\n   --issuer-location ${LOCATION} \\\n   --issuer-pool ${CA_POOL} \\\n   --generate-key \\\n   --key-output-file .cert.key \\\n   --cert-output-file .cert.crt \\\n   --dns-san \"team1.acme.com\" \\\n   --template \"projects\/${PROJECT_ID}\/locations\/${LOCATION}\/certificateTemplates\/acme-sub-ca-europe-template\" \\\n   --impersonate-service-account \"non-acme-team-sa@${PROJECT_ID}.iam.gserviceaccount.com\"\n```\n\n5. 
The Non-ACME account should NOT be able to create a certificate in the other domain, example.com:\n```\ngcloud privateca certificates create \\\n   --issuer-location ${LOCATION} \\\n   --issuer-pool ${CA_POOL} \\\n   --generate-key \\\n   --key-output-file .cert.key \\\n   --cert-output-file .cert.crt \\\n   --dns-san \"team1.example.com\" \\\n   --template \"projects\/${PROJECT_ID}\/locations\/${LOCATION}\/certificateTemplates\/acme-sub-ca-europe-template\" \\\n   --impersonate-service-account \"non-acme-team-sa@${PROJECT_ID}.iam.gserviceaccount.com\"\n```\n\n6. The ACME account should be able to create a certificate in the acme.com domain:\n```\ngcloud privateca certificates create \\\n   --issuer-location ${LOCATION} \\\n   --issuer-pool ${CA_POOL} \\\n   --generate-key \\\n   --key-output-file .cert.key \\\n   --cert-output-file .cert.crt \\\n   --dns-san \"team1.acme.com\" \\\n   --template \"projects\/${PROJECT_ID}\/locations\/${LOCATION}\/certificateTemplates\/acme-sub-ca-europe-template\" \\\n   --impersonate-service-account \"acme-team-sa@${PROJECT_ID}.iam.gserviceaccount.com\"\n```\n7. The ACME account should NOT be able to create a certificate in the other domain, example.com:\n```\ngcloud privateca certificates create \\\n   --issuer-location ${LOCATION} \\\n   --issuer-pool ${CA_POOL} \\\n   --generate-key \\\n   --key-output-file .cert.key \\\n   --cert-output-file .cert.crt \\\n   --dns-san \"team1.example.com\" \\\n   --template \"projects\/${PROJECT_ID}\/locations\/${LOCATION}\/certificateTemplates\/db-sub-ca-europe-template\" \\\n   --impersonate-service-account \"acme-team-sa@${PROJECT_ID}.iam.gserviceaccount.com\"\n```\n\n### Scaling CA Pool\n\n1. Set environment variables:\n```\nexport PROJECT_ID=my_project_id\nexport LOCATION=europe-west3\nexport CA_POOL=acme-sub-pool-europe\nexport CONCURRENCY=2\nexport QPS=50\nexport TIME=15s\n```\n\n2. 
Run the load test:\n```\n.\/load-cas.sh $PROJECT_ID $LOCATION $CA_POOL $CONCURRENCY $QPS $TIME\n```\n\nThe test uses [Fortio](https:\/\/github.com\/fortio\/fortio) to call the CAS API concurrently over HTTPS\nto generate dummy certificates, simulating load on the CA Pool. The test will run for the time duration\ndefined by the `TIME` environment variable.\n\nCheck the outcome:\n```\n142.251.39.106:443: 3\n172.217.168.234:443: 3\n142.251.36.42:443: 3\n216.58.214.10:443: 3\n172.217.23.202:443: 3\n142.250.179.138:443: 3\n216.58.208.106:443: 3\n142.251.36.10:443: 3\n142.250.179.202:443: 3\n142.250.179.170:443: 3\nCode 200 : 248 (89.2 %)\nCode 429 : 30 (10.8 %)\nResponse Header Sizes : count 278 avg 347.91367 +\/- 121 min 0 max 390 sum 96720\nResponse Body\/Total Sizes : count 278 avg 6981.3165 +\/- 2237 min 550 max 7763 sum 1940806\nAll done 278 calls (plus 4 warmup) 224.976 ms avg, 16.8 qps\n```\n\nNotice the portion of requests returning a 429 error, which indicates that the load exceeds the current CA pool throughput limit.\n\n3. Add an additional subordinate CA to the CA pool. For that, rename `cas-scaling.tf.sample` to `cas-scaling.tf` and run:\n```\nterraform apply --auto-approve\n```\n\n4. 
Run the load test again\n```\n.\/load-cas.sh $PROJECT_ID $LOCATION $CA_POOL $CONCURRENCY $QPS $TIME\n```\n\nand check the outcome:\n```\nIP addresses distribution:\n142.250.179.170:443: 1\n172.217.23.202:443: 1\n142.251.39.106:443: 1\n172.217.168.202:443: 1\nCode 200 : 258 (100.0 %)\nResponse Header Sizes : count 258 avg 390 +\/- 0 min 390 max 390 sum 100620\nResponse Body\/Total Sizes : count 258 avg 7759.624 +\/- 1.497 min 7758 max 7763 sum 2001983\nAll done 258 calls (plus 4 warmup) 233.180 ms avg, 17.1 qps\n```\n\nNotice that there are no more 429 errors in the responses, and the load can now be handled.\n\n### Manual Subordinate CA activation\n\nThe [sub-activation.tf.sample](.\/sub-activation.tf.sample) file contains an example Terraform configuration \nfor performing manual Subordinate CA activation. Have the Certificate Signing Request from the Terraform run output \nsigned by your external Root Certificate Authority.\n\nSet the `pem_ca_certificate` and `subordinate_config.pem_issuer_chain` fields in [ca.tf](.\/modules\/cas-ca\/ca.tf) \nto the files obtained from the issuer.\n\n## Clean up\n\nTo clean up and free the Google Cloud resources created with this project, you can either\n* Delete the Google Cloud project with all created resources\n* Run the following command:\n```\nterraform destroy\n```\n\nIt is not possible to create new CAS resources with the same resource IDs as were used before, even \nif the old resources were deleted by the `terraform destroy` command. A new deployment to the same Google Cloud\nproject needs to use new resource names. 
Modify the values of the following variables in the [terraform.tfvars](.\/terraform.tfvars) \nfile before running the demo deployment again:\n* `root_pool_name`\n* `sub_pool1_name`\n* `sub_pool2_name`\n* `root_ca_name`\n* `sub_ca1_name`\n* `sub_ca2_name","site":"GCP","answers_cleaned":"  Certificate Authority Service Demo  This repository contains sample Terraform resource definitions for deploying several  related Certificate Authority Service  CAS  resources to the Google Cloud   It demonstrates several Certificate Authority Service features and provides examples of Terraform configuration for the following CAS features     Root and Subordinate CA provisioning   Automatic Subordinate CA activation using Root CA in the CAS   Configuration example for manual subordinate CA activation   Multi regional Subordinate CA deployment   CA configuration with Cloud HSM signing keys including example for imported keys   Application team domain ownership validation using CAS Certificate Templates and conditional IAM policies   CAS CA Pool throughput scaling and a load test script for certificate request load generation   CAS API activation  The following diagram shows resources being deployed by this project and the resulting CA hiearchy structure     Demo Deployment  images deployment png raw true       Pre requisites  The deployment presumes and relies upon an existing Google Cloud Project with attached active Billing account   To perform the successful deployment  your Google Cloud account needs to have  Project Editor  role in the target  Google Cloud project   Update the Google Cloud project id in the  terraform tfvar    terraform tfvars  file by setting the  project id  variable to the id of the target Google Cloud project before proceeding with the execution      Demonstration  The Terraform project in this repository defines the following input variables that can either be edited in the  variables tf  file directly or passed over the Terraform command line   The 
project deploys the Google Cloud resources by default into the regions defined by the  location1  and  location2  variables   You can change that by passing alternative values in the  terraform tfvars sample  file and copying it to the  terraform tfvars  file   Initiate Terraform and deploy Google Cloud resources     terraform init terraform plan terraform apply           Provisioned resources  The created CAS resources become visible in the Certificate Authority Service  section  https   console cloud google com security cas caPools   in the Cloud Console       Domain ownership validation  1  ACME and Non ACME service accounts get created in the  cas template tf    cas template tf  2   Load Python cryptography library  https   cloud google com kms docs crypto macos      export CLOUDSDK PYTHON SITEPACKAGES 1     3  Set environment variables to the desired values  for example      export PROJECT ID my project id export LOCATION europe west3 export CA POOL acme sub pool europe      4  Non ACME account should NOT be able to create certificate in acme com domain      gcloud privateca certificates create        issuer location   LOCATION         issuer pool   CA POOL         generate key        key output file  cert key        cert output file  cert crt        dns san  team1 acme com         template  projects   PROJECT ID  locations   LOCATION  certificateTemplates acme sub ca europe template         impersonate service account  non acme team sa   PROJECT ID  iam gserviceaccount com       5  Non ACME account should NOT be able to create certificate in other example com domain      gcloud privateca certificates create        issuer location   LOCATION         issuer pool   CA POOL         generate key        key output file  cert key        cert output file  cert crt        dns san  team1 example com         template  projects   PROJECT ID  locations   LOCATION  certificateTemplates acme sub ca europe template         impersonate service account  non acme team sa   
PROJECT ID  iam gserviceaccount com       6  ACME account should be able to create certificate in acme com domain      gcloud privateca certificates create        issuer location   LOCATION         issuer pool   CA POOL         generate key        key output file  cert key        cert output file  cert crt        dns san  team1 acme com         template  projects   PROJECT ID  locations   LOCATION  certificateTemplates acme sub ca europe template         impersonate service account  acme team sa   PROJECT ID  iam gserviceaccount com      7  ACME account should NOT be able to create certificate in other example com domain      gcloud privateca certificates create        issuer location   LOCATION         issuer pool   CA POOL         generate key        key output file  cert key        cert output file  cert crt        dns san  team1 example com         template  projects   PROJECT ID  locations   LOCATION  certificateTemplates db sub ca europe template         impersonate service account  acme team sa   PROJECT ID  iam gserviceaccount com           Scaling CA Pool  1  Set environment variables     export PROJECT ID my project id export LOCATION europe west3 export CA POOL acme sub pool europe export CONCURRENCY 2 export QPS 50 export TIME 15s      2  Run load test       load cas sh  PROJECT ID  LOCATION  CA POOL  CONCURRENCY  QPS  TIME      The test uses  Fortio  https   github com fortio fortio  to call CAS API concurrenly over HTTPS to generate dummy certificates simulataing load on the CA Pool  The test will run for the time duration defined by the  TIME  environment variable   Check the outcome     142 251 39 106 443  3 172 217 168 234 443  3 142 251 36 42 443  3 216 58 214 10 443  3 172 217 23 202 443  3 142 250 179 138 443  3 216 58 208 106 443  3 142 251 36 10 443  3 142 250 179 202 443  3 142 250 179 170 443  3 Code 200   248  89 2    Code 429   30  10 8    Response Header Sizes   count 278 avg 347 91367     121 min 0 max 390 sum 96720 Response Body Total 
Sizes   count 278 avg 6981 3165     2237 min 550 max 7763 sum 1940806 All done 278 calls  plus 4 warmup  224 976 ms avg  16 8 qps      Notice the portion of the requests returning the 429 error  which indicates that the load exceeds the current CA pool throughput limit   3  Add an additional subordinate CA to the CA pool  For that  rename  cas scaling tf sample  to  cas scaling tf  and run     terraform apply   auto approve      4  Run the load test again       load cas sh  PROJECT ID  LOCATION  CA POOL  CONCURRENCY  QPS  TIME      and check the outcome      IP addresses distribution  142 250 179 170 443  1 172 217 23 202 443  1 142 251 39 106 443  1 172 217 168 202 443  1 Code 200   258  100 0    Response Header Sizes   count 258 avg 390     0 min 390 max 390 sum 100620 Response Body Total Sizes   count 258 avg 7759 624     1 497 min 7758 max 7763 sum 2001983 All done 258 calls  plus 4 warmup  233 180 ms avg  17 1 qps      Notice that there are no 429 errors in the responses any more and the load can now be handled       Manual Subordinate CA activation  The  sub activation tf sample    sub activation tf sample  file contains an example Terraform configuration  to perform manual Subordinate CA activation  Use the Certificate Signing Request in the Terraform run output  to sign it using an external Root Certificate Authority   Set the  pem ca certificate  and  subordinate config pem issuer chain  fields in the  ca tf    modules cas ca ca tf   to the files obtained from the issuer      Clean up  To clean up and free the Google Cloud resources created with this project you can either   Delete the Google Cloud project with all created resources   Run the following command     terraform destroy      It is not possible to create new CAS resources with the same resource id as were already used before even if they were deleted by the  terraform destroy  command  A new deployment attempt to the same Google Cloud project needs to use new resource names  Modify the values of the following
variables in the  terraform tfvars    terraform tfvars   file before running the demo deployment again     root pool name     sub pool1 name     sub pool2 name     root ca name     sub ca1 name     sub ca2 name"}
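Since deleted CAS resource ids cannot be reused, every redeployment needs a fresh set of names for the variables above. A minimal shell sketch of one way to do that (the timestamp suffix and the exact `terraform.tfvars` layout are assumptions; the variable names come from the demo):

```shell
# Generate unique CAS resource names for a new deployment by suffixing a run id,
# because CAS resource ids cannot be reused after `terraform destroy`.
RUN_ID=$(date +%Y%m%d%H%M%S)
cat > terraform.tfvars <<EOF
root_pool_name = "root-pool-${RUN_ID}"
sub_pool1_name = "acme-sub-pool-europe-${RUN_ID}"
sub_pool2_name = "db-sub-pool-europe-${RUN_ID}"
root_ca_name   = "root-ca-${RUN_ID}"
sub_ca1_name   = "acme-sub-ca-${RUN_ID}"
sub_ca2_name   = "db-sub-ca-${RUN_ID}"
EOF
```

After regenerating the file, `terraform apply` can be run again against the same project without resource-id collisions.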
{"questions":"GCP generates fake JSON messages matching the schema to a Pub Sub topic at the rate of the QPS Dataflow Streaming Benchmark fake or generated data This pipeline takes in a QPS parameter a path to a schema file and A streaming pipeline which generates messages at a specified rate to a Pub Sub topic The messages Pipeline When developing Dataflow pipelines it s common to want to benchmark them at a specific QPS using","answers":"# Dataflow Streaming Benchmark\n\nWhen developing Dataflow pipelines, it's common to want to benchmark them at a specific QPS using\nfake or generated data. This pipeline takes in a QPS parameter, a path to a schema file, and\ngenerates fake JSON messages matching the schema to a Pub\/Sub topic at the rate of the QPS.\n\n## Pipeline\n\n[StreamingBenchmark](src\/main\/java\/com\/google\/cloud\/pso\/pipeline\/StreamingBenchmark.java) -\nA streaming pipeline which generates messages at a specified rate to a Pub\/Sub topic. The messages\nare generated according to a schema template which instructs the pipeline how to populate the\nmessages with fake data compliant to constraints.\n\n> Note the number of workers executing the pipeline must be large enough to support the supplied\n> QPS. Use a general rule of 2,500 QPS per core in the worker pool when configuring your pipeline.\n\n\n![Pipeline DAG](img\/pipeline-dag.png \"Pipeline DAG\")\n\n## Getting Started\n\n### Requirements\n\n* Java 8\n* Maven 3\n\n### Building the Project\n\nBuild the entire project using the maven compile command.\n```sh\nmvn clean compile\n```\n\n### Creating the Schema File\nThe schema file used to generate JSON messages with fake data is based on the\n[json-data-generator](https:\/\/github.com\/vincentrussell\/json-data-generator) library. This library\nallows for the structuring of a sample JSON schema and injection of common faker functions to\ninstruct the data generator of what type of fake data to create in each field. 
See the\njson-data-generator [docs](https:\/\/github.com\/vincentrussell\/json-data-generator) for more\ninformation on the faker functions.\n\n#### Message Attributes\nIf the message schema contains fields matching (case-insensitive) the following names then such fields\nwill be added to the output Pub\/Sub message attributes:\neventId, eventTimestamp\n\nAttribute fields can be helpful in various scenarios like deduping messages, inspecting message timestamps etc\n \n#### Example Schema File\nBelow is an example schema file which generates fake game event payloads with random data.\n```javascript\n{\n  \"eventId\": \"\",\n  \"eventTimestamp\": ,\n  \"ipv4\": \"\",\n  \"ipv6\": \"\",\n  \"country\": \"\",\n  \"username\": \"\",\n  \"quest\": \"\",\n  \"score\": ,\n  \"completed\": \n}\n```\n\n#### Example Output Data\nBased on the above schema, the below would be an example of a message which would be output to the\nPub\/Sub topic.\n```javascript\n{\n  \"eventId\": \"5dacca34-163b-42cb-872e-fe3bad7bffa9\",\n  \"eventTimestamp\": 1537729128894,\n  \"ipv4\": \"164.215.241.55\",\n  \"ipv6\": \"e401:58fc:93c5:689b:4401:206f:4734:2740\",\n  \"country\": \"Montserrat\",\n  \"username\": \"asellers\",\n  \"quest\": \"A Break In the Ice\",\n  \"score\": 2721,\n  \"completed\": false\n}\n```\nSince the schema includes the reserved field names of `eventId` and `eventTimestamp`, the output Pub\/Sub \nmessage will also contain these fields in the message attributes in addition to the regular payload.\n\n### Executing the Pipeline\n```bash\n# Set the pipeline vars\nPROJECT_ID=<project-id>\nBUCKET=<bucket>\nPIPELINE_FOLDER=gs:\/\/${BUCKET}\/dataflow\/pipelines\/streaming-benchmark\nSCHEMA_LOCATION=gs:\/\/<path-to-schema-location-in-gcs>\nPUBSUB_TOPIC=projects\/$PROJECT_ID\/topics\/<topic-id>\n\n# Set the desired QPS\nQPS=50000\n\n# Set the runner\nRUNNER=DataflowRunner\n\n# Compute engine zone\nZONE=us-east1-d\n\n# Build the template\nmvn compile exec:java 
\\\n-Dexec.mainClass=com.google.cloud.pso.pipeline.StreamingBenchmark \\\n-Dexec.cleanupDaemonThreads=false \\\n-Dexec.args=\" \\\n    --project=${PROJECT_ID} \\\n    --stagingLocation=${PIPELINE_FOLDER}\/staging \\\n    --tempLocation=${PIPELINE_FOLDER}\/temp \\\n    --runner=${RUNNER} \\\n    --zone=${ZONE} \\\n    --autoscalingAlgorithm=THROUGHPUT_BASED \\\n    --maxNumWorkers=5 \\\n    --qps=${QPS} \\\n    --schemaLocation=${SCHEMA_LOCATION} \\\n    --topic=${PUBSUB_TOPIC}\"\n```","site":"GCP","answers_cleaned":"  Dataflow Streaming Benchmark  When developing Dataflow pipelines  it s common to want to benchmark them at a specific QPS using fake or generated data  This pipeline takes in a QPS parameter  a path to a schema file  and generates fake JSON messages matching the schema to a Pub Sub topic at the rate of the QPS      Pipeline   StreamingBenchmark  src main java com google cloud pso pipeline StreamingBenchmark java    A streaming pipeline which generates messages at a specified rate to a Pub Sub topic  The messages are generated according to a schema template which instructs the pipeline how to populate the messages with fake data compliant to constraints     Note the number of workers executing the pipeline must be large enough to support the supplied   QPS  Use a general rule of 2 500 QPS per core in the worker pool when configuring your pipeline      Pipeline DAG  img pipeline dag png  Pipeline DAG       Getting Started      Requirements    Java 8   Maven 3      Building the Project  Build the entire project using the maven compile command     sh mvn clean compile          Creating the Schema File The schema file used to generate JSON messages with fake data is based on the  json data generator  https   github com vincentrussell json data generator  library  This library allows for the structuring of a sample JSON schema and injection of common faker functions to instruct the data generator of what type of fake data to create in each field  See the 
json data generator  docs  https   github com vincentrussell json data generator  for more information on the faker functions        Message Attributes If the message schema contains fields matching  case insensitive  the following names then such fields will be added to the output Pub Sub message attributes  eventId  eventTimestamp  Attribute fields can be helpful in various scenarios like deduping messages  inspecting message timestamps etc        Example Schema File Below is an example schema file which generates fake game event payloads with random data     javascript      eventId          eventTimestamp        ipv4          ipv6          country          username          quest          score        completed                Example Output Data Based on the above schema  the below would be an example of a message which would be output to the Pub Sub topic     javascript      eventId    5dacca34 163b 42cb 872e fe3bad7bffa9      eventTimestamp   1537729128894     ipv4    164 215 241 55      ipv6    e401 58fc 93c5 689b 4401 206f 4734 2740      country    Montserrat      username    asellers      quest    A Break In the Ice      score   2721     completed   false       Since the schema includes the reserved field names of  eventId  and  eventTimestamp   the output Pub Sub  message will also contain these fields in the message attributes in addition to the regular payload       Executing the Pipeline    bash   Set the pipeline vars PROJECT ID  project id  BUCKET  bucket  PIPELINE FOLDER gs     BUCKET  dataflow pipelines streaming benchmark SCHEMA LOCATION gs    path to schema location in gcs  PUBSUB TOPIC projects  PROJECT ID topics  topic id     Set the desired QPS QPS 50000    Set the runner RUNNER DataflowRunner    Compute engine zone ZONE us east1 d    Build the template mvn compile exec java    Dexec mainClass com google cloud pso pipeline StreamingBenchmark    Dexec cleanupDaemonThreads false    Dexec args           project   PROJECT ID          
stagingLocation   PIPELINE FOLDER  staging         tempLocation   PIPELINE FOLDER  temp         runner   RUNNER          zone   ZONE          autoscalingAlgorithm THROUGHPUT BASED         maxNumWorkers 5         qps   QPS          schemaLocation   SCHEMA LOCATION          topic   PUBSUB TOPIC      "}
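The README's rule of thumb of 2,500 QPS per core can be turned into a quick sizing calculation before choosing `--maxNumWorkers`; a sketch, assuming 4-core workers (the machine type is an assumption, not part of the pipeline docs):

```shell
# Estimate maxNumWorkers from the target QPS using the ~2,500 QPS/core guideline.
QPS=50000
CORES_PER_WORKER=4          # assumption, e.g. n1-standard-4 workers
QPS_PER_CORE=2500
QPS_PER_WORKER=$(( CORES_PER_WORKER * QPS_PER_CORE ))
# Integer ceiling division so capacity always covers the target QPS.
WORKERS=$(( (QPS + QPS_PER_WORKER - 1) / QPS_PER_WORKER ))
echo "maxNumWorkers=${WORKERS}"   # prints maxNumWorkers=5
```

For the 50,000 QPS example this yields 5 workers, matching the `--maxNumWorkers=5` used in the launch command above.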
{"questions":"GCP Webhook example Recommended Reading This module is a webhook example for Dialogflow An agent created in Dialogflow is connected to this webhook that is running in a Cloud Function Technology Stack 1 Cloud Firestore The webhook also connects to a Cloud Firestore to get the users information used in the example","answers":"# Webhook example\n\nThis module is a webhook example for Dialogflow. An agent created in Dialogflow is connected to this webhook that is running in a Cloud Function.\nThe webhook also connects to a Cloud Firestore to get the users' information used in the example.\n\n## Recommended Reading\n[Dialogflow Fulfillment Overview](https:\/\/cloud.google.com\/dialogflow\/docs\/fulfillment-overview).\n\n## Technology Stack\n1. Cloud Firestore\n1. Cloud Functions\n1. Dialogflow\n\n## Libraries\n1. Pandas\n1. Google Cloud Firestore\n1. Dialogflow\n\n## Programming Language\nPython 3\n\n## Project Structure\n\n```\n.\n\u2514\u2500\u2500 dialogflow_webhook_bank_example\n \u251c\u2500\u2500 main.py # Implementation of the webhook\n \u251c\u2500\u2500 intents_config.yaml # Configuration of the intent from dialogflow\n \u251c\u2500\u2500 agent.zip # Configuration of the agent for this example in dialogflow\n \u251c\u2500\u2500 requirements.txt # Required libraries for this example\n```\n\n## Setup Instructions\n### Project Setup\nHow to set up your project for this example can be found [here](https:\/\/cloud.google.com\/dialogflow\/docs\/quick\/setup).\n\n### Dialogflow Agent Setup\n\n1. Build an agent by following the instructions [here](https:\/\/cloud.google.com\/dialogflow\/docs\/quick\/build-agent).\n1. Once the agent is built, go to settings \u2699 and under the Export and Import tab, choose the option RESTORE FROM ZIP.\n1. 
Follow the instructions to restore the agent from agent.zip.\n\n\n### Cloud Functions Setup\nThis implementation is deployed on GCP using Cloud Functions.\nMore info [here](https:\/\/cloud.google.com\/functions\/docs\/concepts\/overview).\n\nTo run the Python scripts on GCP, the `gcloud` command-line tool from the Google Cloud SDK is needed.\nRefer to the [installation](https:\/\/cloud.google.com\/sdk\/install) page for the appropriate\ninstructions depending on your platform.\n\nNote that this project has been tested on a Unix-based environment.\n\nAfter installing, make sure to initialize your Cloud project:\n\n`$ gcloud init`\n\n### Cloud Firestore Setup\nQuick start for Cloud Firestore can be found [here](https:\/\/cloud.google.com\/firestore\/docs\/quickstart-servers).\n\n#### How to add data\nThis example connects to a Cloud Firestore with a collection with the following specification:\n\n    Root collection\n     users =>\n         document_id\n             NXJn5wTqWXwiTuc5tdun => {\n                 'first_name': 'Pedro',\n                 'last_name': 'Perez',\n                 'accounts': {\n                     'saving': {\n                         'transactions': [\n                             {'type': 'deposit', 'amount': 20},\n                             {'type': 'deposit', 'amount': 90}\n                          ],\n                         'balance': 110},\n                     'checking': {\n                         'transactions': [\n                             {'type': 'deposit', 'amount': 50},\n                             {'type': 'withdraw', 'amount': '-10'}\n                         ],\n                         'balance': 150}\n                 },\n                 'user_id': 123456\n             }\n\nExamples how to add data to a collection can be found [here](https:\/\/cloud.google.com\/firestore\/docs\/quickstart-servers#add_data).\n\n    from google.cloud import firestore\n\n    user_dict= {\n      u'user_id': u'123456',\n      
u'first_name': u'Pedro',\n      u'last_name': u'Perez',\n      u'accounts': {\n          u'checking': {\n            u'transactions': [\n                {u'amount': 50, u'type': u'deposit'},\n                {u'type': u'withdraw', u'amount': u'-10'}\n              ],\n              u'balance': 150\n           },\n           u'saving': {\n              u'transactions': [\n                  {u'amount': 20, u'type': u'deposit'},\n                  {u'type': u'deposit', u'amount': 90}\n               ],\n               u'balance': 110\n            }\n        }\n    }\n    db = firestore.Client()\n    db.collection(u'users').document(user_dict['user_id']).set(user_dict)\n\n## Deployment\n\n   $ gcloud functions deploy dialogflow_webhook_bank \\\n         --runtime python37 \\\n         --trigger-http \\\n         --allow-unauthenticated\n\n## Usage\n### Dialogflow Agent Example\n\n      [User] Hi, Hello, I need assistance\n      [Agent] Welcome to our bank! Can I have your user id?\n      [User] <Give an invalid user_id number> user_id 12345\n      \u21b3 [Agent] Sorry I could not find your user_id. Can you try again?\n      [User] <Give a valid user_id number> user id 123456\n      \u21b3 [Agent] What can I do for you?\n      \u21b3 [User] Check my balance, Verify my balance, balance\n           \u21b3 [Agent] Here are your account balances.\n                <List of all account balances from firebase>\n                [Agent] What else can I do for you? - Follow up\n      \u21b3 [User] All my transactions, transactions,\n           \u21b3 [Agent] Here are all the transactions that I found.\n                <List of all the transactions from firebase>\n                [Agent] What else can I do for you? - Follow up\n      \u21b3 [User] Deposit transactions, credits, deposits,\n           \u21b3 [Agent] Here are all the deposit transactions that I found.\n               <List of deposit transactions in firebase>\n               [Agent] What else can I do for you? 
- Follow up\n      \u21b3[User] I am done, thanks, bye\n          \u21b3 [Agent] Have a nice day!\n\n### Running the sample from Dialogflow console\n\nIn [Dialogflow's console](https:\/\/console.dialogflow.com), in the simulator on the right, query your Dialogflow agent with `I need assistance` and respond to the questions your Dialogflow agent asks.\n\n### Running the sample using gcloud util\n\n  Example:\n\n    $ gcloud functions call dialogflow_webhook_bank --data\n        '{\n          \"responseId\": \"ec0be141-e09a-4dca-b445-4e811ad4999b-ab1309b0\",\n          \"queryResult\": {\n            \"queryText\": \"123456 user id\",\n            \"action\": \"welcome.welcome-custom\",\n            \"parameters\": {\n              \"user_id\": 123456\n            },\n            \"allRequiredParamsPresent\": true,\n            \"fulfillmentText\": \"What can I do for you?\",\n            \"fulfillmentMessages\": [\n              {\n                \"text\": {\n                  \"text\": [\n                    \"What can I do for you?\"\n                  ]\n                }\n              }\n            ],\n            \"outputContexts\": [\n              {\n                \"name\": \"projects\/<project-id>\/agent\/sessions\/e7f62474-fd2c-3ca0-dfa6-73d3ed2ab17f\/contexts\/user_id_action-followup\",\n                \"lifespanCount\": 5,\n                \"parameters\": {\n                  \"user_id\": 123456,\n                  \"user_id.original\": \"123456\"\n                }\n              },\n              {\n                \"name\": \"projects\/<project-id>\/agent\/sessions\/e7f62474-fd2c-3ca0-dfa6-73d3ed2ab17f\/contexts\/welcome-followup\",\n                \"lifespanCount\": 1,\n                \"parameters\": {\n                  \"user_id\": 123456,\n                  \"user_id.original\": \"123456\"\n                }\n              },\n              {\n                \"name\": 
\"projects\/<project-id>\/agent\/sessions\/e7f62474-fd2c-3ca0-dfa6-73d3ed2ab17f\/contexts\/__system_counters__\",\n                \"parameters\": {\n                  \"no-input\": 0,\n                  \"no-match\": 0,\n                  \"user_id\": 1234567891,\n                  \"user_id.original\": \"123456\"\n                }\n              }\n            ],\n            \"intent\": {\n              \"name\": \"projects\/<project-id>\/agent\/intents\/e3cabac7-cfb8-4da1-96bb-f14687913bf6\",\n              \"displayName\": \"user_id_action\"\n            },\n            \"intentDetectionConfidence\": 0.78590345,\n            \"languageCode\": \"en\"\n          },\n          \"originalDetectIntentRequest\": {\n            \"payload\": {}\n          },\n          \"session\": \"projects\/<project-id>\/agent\/sessions\/e7f62474-fd2c-3ca0-dfa6-73d3ed2ab17f\"\n        }\n\n\n# References\n[google-cloud-firestore.Documents API](https:\/\/googleapis.dev\/python\/firestore\/latest\/document.html#google.cloud.firestore_v1.document)\n\n[google-cloud-firestore.Queries API](https:\/\/googleapis.dev\/python\/firestore\/latest\/query.html#google.cloud.firestore_v1.query)\n\n[Example querying and filtering data from Cloud Firestore](https:\/\/cloud.google.com\/firestore\/docs\/query-data\/queries)\n\n[Pandas Dataframe Libraries](https:\/\/pandas.pydata.org\/pandas-docs\/stable\/reference\/api\/pandas.DataFrame.html)","site":"GCP","answers_cleaned":"  Webhook example  This module is a webhook example for Dialogflow  An agent created in Dialogflow is connected to this webhook that is running in a Cloud Function  The webhook also connects to a Cloud Firestore to get the users information used in the example      Recommended Reading  Dialogflow Fulfillment Overview  https   cloud google com dialogflow docs fulfillment overview       Technology Stack 1  Cloud Firestore 1  Cloud Functions 1  Dialogflow     Libraries 1  Pandas 1  Google Cloud Firestore 1  Dialogflow     Programming 
Language Python 3     Project Structure            dialogflow webhook bank example      main py   Implementation of the webhook      intents config yaml   Configuration of the intent from dialogflow      agent zip   Configuration of the agent for this example in dialogflow      requirements txt   Required libraries for this example         Setup Instructions     Project Setup How to setup your project for this example can be found  here  https   cloud google com dialogflow docs quick setup        Dialogflow Agent Setup  1  Build an agent by following the instructions  here  https   cloud google com dialogflow docs quick build agent   1  Once the agent is built  go to settings   and under the Export and Import tab  choose the option RESTORE FROM ZIP  1  Follow the instructions to restore the agent from agent zip        Cloud Functions Setup This implementation is deployed on GCP using Cloud Functions  More info  here  https   cloud google com functions docs concepts overview    To run the Python scripts on GCP  the  gcloud  command line tool from the Google Cloud SDK is needed  Refer to the  installation  https   cloud google com sdk install  page for the appropriate instructions depending on your platform   Note that this project has been tested on a Unix based environment   After installing  make sure to initialize your Cloud project      gcloud init       Cloud Firestore Setup Quick start for Cloud Firestore can be found  here  https   cloud google com firestore docs quickstart servers         How to add data This example connects to a Cloud Firestore with a collection with the following specification       Root collection      users             document id              NXJn5wTqWXwiTuc5tdun                        first name    Pedro                     last name    Perez                     accounts                           saving                               transactions                                    type    deposit    amount   20                          
        type    deposit    amount   90                                                         balance   110                         checking                               transactions                                    type    deposit    amount   50                                  type    withdraw    amount     10                                                         balance   150                                        user id   123456                 Examples how to add data to a collection can be found  here  https   cloud google com firestore docs quickstart servers add data        from google cloud import firestore      user dict          u user id   u 123456         u first name   u Pedro         u last name   u Perez         u accounts               u checking                 u transactions                      u amount   50   type    udeposit                     u type   u withdraw   u amount   u  10                                  u balance   150                          u saving                   u transactions                        u amount   20  u type   u deposit                       u type   u deposit   u amount   90                                   u balance   110                                   db   firestore Client       db collection u users   document user dict  user id    set user dict      Deployment       gcloud functions deploy dialogflow webhook bank            runtime python37            trigger http            allow unauthenticated     Usage     Dialogflow Agent Example         User  Hi  Hello  I need assistance        Agent  Welcome to our bank  Can I have your user id         User   Give an invalid user id number  user id 12345          Agent  Sorry I could not find your user id  Can you try again         User   Give a valid user id number  user id 123456          Agent  What can I do for you           User  Check my balance  Verify my balance  balance               Agent  Here are your account balances                   List of 
all account balances from firebase                   Agent What else can I do for you    Follow up          User  All my transactions  transactions                Agent  Here are all the transactions that I found                   List of all the transactions from firebase                   Agent  What else can I do for you    Follow up          User  Deposit transactions  credits  deposits                Agent  Here are all the deposit transactions that I found                  List of deposit transactions in firebase                  Agent  What else can I do for you    Follow up         User  I am done  thanks  bye              Agent  Have a nice day       Running the sample from Dialogflow console  In  Dialogflow s console  https   console dialogflow com   in the simulator on the right  query your Dialogflow agent with  I need assistance  and respond to the questions your Dialogflow agent asks       Running the sample using gcloud util    Example         gcloud functions call dialogflow webhook bank   data                       responseId    ec0be141 e09a 4dca b445 4e811ad4999b ab1309b0              queryResult                  queryText    123456 user id                action    welcome welcome custom                parameters                    user id   123456                             allRequiredParamsPresent   true               fulfillmentText    What can I do for you                 fulfillmentMessages                                      text                        text                          What can I do for you                                                                                     outputContexts                                      name    projects  project id  agent sessions e7f62474 fd2c 3ca0 dfa6 73d3ed2ab17f contexts user id action followup                    lifespanCount   5                   parameters                        user id   123456                     user id original    123456                                         
                             name    projects  project id  agent sessions e7f62474 fd2c 3ca0 dfa6 73d3ed2ab17f contexts welcome followup                    lifespanCount   1                   parameters                        user id   123456                     user id original    123456                                                                      name    projects  project id  agent sessions e7f62474 fd2c 3ca0 dfa6 73d3ed2ab17f contexts   system counters                      parameters                        no input   0                     no match   0                     user id   1234567891                     user id original    123456                                                                intent                    name    projects  project id  agent intents e3cabac7 cfb8 4da1 96bb f14687913bf6                  displayName    user id action                              intentDetectionConfidence   0 78590345               languageCode    en                          originalDetectIntentRequest                  payload                              session    projects  project id  agent sessions e7f62474 fd2c 3ca0 dfa6 73d3ed2ab17f                References  google cloud firestore Documents API  https   googleapis dev python firestore latest document html google cloud firestore v1 document    google cloud firestore Queries API  https   googleapis dev python firestore latest query html google cloud firestore v1 query    Example querying and filtering data from Cloud Firestore  https   cloud google com firestore docs query data queries    Pandas Dataframe Libraries  https   pandas pydata org pandas docs stable reference api pandas DataFrame html"}
{"questions":"GCP based failover The load balancers in front of the Managed Instance Group or Cloud Run service are configured for DNS load balancing for enhancing availability of Cloud Run or Google Cloud Compute Managed Instance Group based applications The application instances get redundantly deployed to two distinct regions Multi regional Application Availability This demo project contains Google Cloud infrastructure components that illustrate use cases The project covers several use cases that can be broken into the following categories with respective entry points in Terraform","answers":"\n# Multi-regional Application Availability\n\nThis demo project contains Google Cloud infrastructure components that illustrate use cases \nfor enhancing the availability of Cloud Run or Google Cloud Compute Managed Instance Group based applications. \nThe application instances get redundantly deployed to two distinct regions. \nThe load balancers in front of the Managed Instance Group or Cloud Run service are configured for DNS load balancing \nbased failover.\n\nThe project covers several use cases that can be broken into the following categories, with respective entry points in Terraform \nfiles containing the Google Cloud resource definitions for load balancing and Cloud DNS service configuration:\n\n| Load Balancer              | Type     |  OSI | Cloud Run Backend                | GCE MIG Backend                    |\n|----------------------------|----------|------|----------------------------------|------------------------------------|\n| Regional Pass-through      | Internal |  L4  |                -                 |[l4-rilb-mig.tf](.\/l4-rilb-mig.tf)  |\n| Regional Application       | Internal |  L7  |[l7-rilb-cr.tf](.\/l7-rilb-cr.tf)  |[l7-rilb-mig.tf](.\/l7-rilb-mig.tf)  |\n| Cross-Regional Application | Internal |  L7  |[l7-crilb-cr.tf](.\/l7-crilb-cr.tf)|[l7-crilb-mig.tf](.\/l7-crilb-mig.tf)|\n| Global Application         | External |  L7  
|[l7-gxlb-cr.tf](.\/l7-gxlb-cr.tf)  |                  -                 |\n\nTerraform files with the `dns-` prefix contain Cloud DNS resource definitions for the respective use case.\n\nWhen all resources from the project are provisioned, the respective demo application endpoints can be used to verify the \ndeployment and test failover. The following table contains the URLs to be tested from a GCE VM attached to the same \ninternal VPC network where the load balancers are deployed.\n\n| Load Balancer              | Type     |  OSI | Cloud Run Backend              | GCE MIG Backend                    |\n|----------------------------|----------|------|--------------------------------|------------------------------------|\n| Regional Pass-through      | Internal |  L4  |                -               |`http:\/\/l4-rilb-mig.hello.zone:8080`|\n| Regional Application       | Internal |  L7  |`https:\/\/l7-rilb-cr.hello.zone` |`https:\/\/l7-rilb-mig.hello.zone`    |\n| Cross-Regional Application | Internal |  L7  |`https:\/\/l7-crilb-cr.hello.zone`|`https:\/\/l7-crilb-mig.hello.zone`   |\n| Global Application         | External |  L7  |`https:\/\/l7-gxlb.hello.zone`    |                  -                 |\n\n\nThe following diagrams illustrate the Google Cloud resources created for the respective load balancer type:\n\n1) L4 Regional Pass-through Internal Load Balancer DNS load balancing to GCE Managed Instance Groups [\\[1\\]](https:\/\/cloud.google.com\/load-balancing\/docs\/internal\/setting-up-internal) [l4-rilb-mig.tf](.\/l4-rilb-mig.tf)\n\n![Deployment Diagram](.\/images\/l4-rilb-mig.png)\n\n2) L7 Regional Internal Application Load Balancer DNS load balancing to Cloud Run service instances [\\[2\\]](https:\/\/cloud.google.com\/load-balancing\/docs\/l7-internal\/setting-up-l7-internal-serverless) [l7-rilb-cr.tf](.\/l7-rilb-cr.tf)\n\n![Deployment Diagram](.\/images\/l7-rilb-cr.png)\n\n3) L7 Cross-Regional Internal Application Load Balancer DNS load balancing to Cloud 
Run service instances [\\[3\\]](https:\/\/cloud.google.com\/load-balancing\/docs\/l7-internal\/setting-up-l7-cross-reg-serverless) [l7-crilb-cr.tf](.\/l7-crilb-cr.tf)\n\n![Deployment Diagram](https:\/\/cloud.google.com\/static\/load-balancing\/images\/cross-reg-int-cloudrun.svg)\n\n4) L7 Regional Internal Application Load Balancer DNS load balancing to GCE Managed Instance Groups [\\[4\\]](https:\/\/cloud.google.com\/load-balancing\/docs\/l7-internal\/setting-up-l7-internal) [l7-rilb-mig.tf](.\/l7-rilb-mig.tf)\n\n![Deployment Diagram](.\/images\/l7-rilb-mig.png)\n\n5) L7 Cross-Regional Internal Application Load Balancer DNS load balancing to GCE Managed Instance Groups [\\[5\\]](https:\/\/cloud.google.com\/load-balancing\/docs\/l7-internal\/setting-up-l7-cross-reg-internal) [l7-crilb-mig.tf](.\/l7-crilb-mig.tf)\n\n![Deployment Diagram](https:\/\/cloud.google.com\/static\/load-balancing\/images\/cross-reg-int-vm.svg)\n\n6) L7 External Application Load Balancer based load balancing [\\[6\\]](https:\/\/cloud.google.com\/load-balancing\/docs\/https\/setting-up-https-serverless) [l7-gxlb-cr.tf](.\/l7-gxlb-cr.tf)\n\n![Deployment Diagram](.\/images\/l7-gxlb-cr.png)\n\n## Pre-requisites\n\nThe deployment presumes and relies upon an existing Google Cloud Project with an attached active Billing account.\n\nTo perform a successful deployment, your Google Cloud account needs the `Project Editor` role in the target \nGoogle Cloud project.\n\nCopy the [terraform.tfvars.sample](.\/terraform.tfvars.sample) file into a `terraform.tfvars` file and update it with the Google Cloud project id in the `project_id` variable and other variables according to your environment.\n\nYou can also choose the generation of the Cloud Run service instances by setting the `cloud_run_generation` input variable to `v1` \nor `v2` (default) respectively.\n\n[Enable](https:\/\/cloud.google.com\/artifact-registry\/docs\/enable-service) the Google Artifact Registry API in the demo project.\n\nThe VPC network and load 
balancer subnet are not created in the project and are referenced in the input variables. \nThe additional proxy subnetwork required for the load balancer setup is defined in [network.tf](.\/network.tf) together with\nthe references to the network resources.\n\nA jumpbox GCE VM attached to the project\u2019s VPC network is required for accessing internal resources and running load tests.\n\n## GCE Managed Instance Groups\n\n1. Check out the demo HTTP responder service container\n```\ngit clone https:\/\/github.com\/GoogleCloudPlatform\/golang-samples.git\ncd golang-samples\/run\/hello-broken\n```\n\n2. Build the container, tag it and push it to the Artifact Registry\n```\ndocker build . -t eu.gcr.io\/${PROJECT_ID}\/hello-broken:latest\ndocker push eu.gcr.io\/${PROJECT_ID}\/hello-broken:latest\n```\n\n3. Edit the `terraform.tfvars` file, setting the `project_id` variable to the id of the Google Cloud project where resources will be deployed.\n\nTo reach the external IP of the Global External (L7) load balancer created by the resources in the `l7-gxlb-cr.tf` file \nyou also need to modify the `domain` variable to the subdomain value of the DNS domain that you control. \n\n4. Provision the demo infrastructure in Google Cloud:\n```\nterraform init\nterraform apply --auto-approve\n```\n\nTo reach the external IP of the Global External (L7) load balancer created by the resources in the `l7-gxlb-cr.tf` file \nyou can now modify your DNS record for the subdomain defined in the `domain` variable and point it to the IP address\nof the created Global External Load Balancer:\n```\ngcloud compute forwarding-rules list | grep gxlb-cr\n```\n\n5. Open the [Cloud Console](https:\/\/console.cloud.google.com\/net-services\/loadbalancing\/list\/loadBalancers) and check the L4 Regional Internal Network Load Balancer, Managed Instance Group, Cloud Run services and the `hello.zone` in the Cloud DNS. In the same way \nyou can also check load balancer resources created for other use cases.\n\n6. 
Log into the jumpbox VM attached to the internal VPC network and switch to sudo mode for simpler docker container execution:\n```\ngcloud compute ssh jumpbox\nsudo -i\n```\nCheck whether all load balancers and components have come up properly:\n```\ncurl -s http:\/\/l4-rilb-mig.hello.zone:8080 && echo OK || echo NOK\ncurl -sk https:\/\/l7-crilb-cr.hello.zone && echo OK || echo NOK\ncurl -sk https:\/\/l7-crilb-mig.hello.zone && echo OK || echo NOK\ncurl -sk https:\/\/l7-gxlb.hello.zone && echo OK || echo NOK\ncurl -sk https:\/\/l7-rilb-cr.hello.zone && echo OK || echo NOK\ncurl -sk https:\/\/l7-rilb-mig.hello.zone && echo OK || echo NOK\n```\nAll of the commands must return successfully and print `OK`.\n\n7. Run the load test\n\nFor the load test you can use the open source [Fortio tool](https:\/\/github.com\/fortio\/fortio), which is often used\nfor testing Kubernetes and service mesh workloads.\n```\ncurl http:\/\/l4-rilb-mig.hello.zone:8080\ndocker run fortio\/fortio load --https-insecure -t 1m -qps 1 http:\/\/l4-rilb-mig.hello.zone:8080\n```\n\nThe result after 1 minute of execution should be similar to:\n\n```\nIP addresses distribution:\n10.156.0.11:8080: 1\nCode 200 : 258 (100.0 %)\nResponse Header Sizes : count 258 avg 390 +\/- 0 min 390 max 390 sum 100620\nResponse Body\/Total Sizes : count 258 avg 7759.624 +\/- 1.497 min 7758 max 7763 sum 2001983\nAll done 258 calls (plus 4 warmup) 233.180 ms avg, 17.1 qps\n```\n\nNote that the IP address of the internal pass-through load balancer in the nearest region receives all the calls.\n\n8. 
Test failover\n \nIn a second console window, SSH into the VM in the GCE MIG in the nearest region:\n```\nexport MIG_VM=$(gcloud compute instances list --format=\"value(name)\" --filter=\"name~l4-europe-west3\")\nexport MIG_VM_ZONE=$(gcloud compute instances list --format=\"value(zone)\" --filter=\"name=${MIG_VM}\")\n\ngcloud compute ssh --zone $MIG_VM_ZONE $MIG_VM --tunnel-through-iap --project $PROJECT_ID\nsudo -i\n\ndocker ps\n```\n\nRun the load test in the first console window again. \nWhile the test is running, switch to the second console window and stop the service container:\n```\n# ${CONTAINER} is the ID of the running service container shown by `docker ps`\ndocker stop ${CONTAINER}\n```\n\nSwitch to the first console window and notice the failover happening. The output at the end of the execution should look like the following:\n\n```\nIP addresses distribution:\n10.156.0.11:8080: 16\n10.199.0.48:8080: 4\nCode -1 : 12 (10.0 %)\nCode 200 : 108 (90.0 %)\nResponse Header Sizes : count 258 avg 390 +\/- 0 min 390 max 390 sum 100620\nResponse Body\/Total Sizes : count 258 avg 7759.624 +\/- 1.497 min 7758 max 7763 sum 2001983\nAll done 120 calls (plus 4 warmup) 83.180 ms avg, 2.0 qps\n```\n\nCloud DNS starts returning the second IP address, from the healthy region with an available backend service, which starts processing\nincoming requests.\n\nPlease note that the service VM in the Managed Instance Group has been automatically restarted by GCE Managed Instance Group\n[autohealing](https:\/\/cloud.google.com\/compute\/docs\/instance-groups\/autohealing-instances-in-migs).\n\n## HA for Cloud Run\n\nThis demo project also contains scenarios for improving cross-regional availability of application services \ndeployed and running in Cloud Run. 
There are several aspects related to the Cloud Run deployment which\nneed to be taken into account and are discussed in the following sections.\n\n### Authentication\n\nWhen the application Cloud Run service needs to be protected by authentication and must not allow unauthenticated \ninvocations, the credentials need to be passed in the `Authorization: Bearer <ID token>` HTTP request header.\n\nWhen the client application is running on Google Cloud, e.g. in a GCE VM, the following commands obtain \ncorrect ID tokens for authentication with each respective regional Cloud Run service instance. \n\nPresuming the Cloud Run instances are deployed in two regions and exposed under the `cr-service-beh76gkxvq-ey.a.run.app` \nand `cr-service-us-beh76gkxvq-uc.a.run.app` hostnames respectively, the commands to obtain authentication tokens\nfor each of them are:\n```\ncurl \"http:\/\/metadata.google.internal\/computeMetadata\/v1\/instance\/service-accounts\/default\/identity?audience=https:\/\/cr-service-beh76gkxvq-ey.a.run.app\" -H \"Metadata-Flavor: Google\" > .\/id-token.txt\n\ncurl \"http:\/\/metadata.google.internal\/computeMetadata\/v1\/instance\/service-accounts\/default\/identity?audience=https:\/\/cr-service-us-beh76gkxvq-uc.a.run.app\" -H \"Metadata-Flavor: Google\" > .\/id-token-us.txt\n```\nOtherwise, the ID token can be obtained using the gcloud command; read on.\n\nAs you can see, the regional Cloud Run service endpoint FQDN is used as the ID token audience. That makes the \ntokens not interchangeable. That is, a token obtained for the Cloud Run service in Region A will fail authentication with \nthe Cloud Run service in Region B.\n\nHere is how to utilize the authentication token when invoking the regional Cloud Run service instance directly:\n\nRegion A (e.g. in EU):\n```\ncurl -H \"Authorization: Bearer $(cat .\/id-token.txt)\"  https:\/\/cr-service-beh76gkxvq-ey.a.run.app\n```\n\nRegion B (e.g. 
in US):\n```\ncurl -H \"Authorization: Bearer $(cat .\/id-token-us.txt)\"  https:\/\/cr-service-us-beh76gkxvq-uc.a.run.app\n```\n\nTo overcome the limitation of distinct ID token audiences and to make the Cloud Run client fail over seamlessly \nto the Cloud Run service in another region using the same ID token for authentication, \n[custom audiences](https:\/\/cloud.google.com\/run\/docs\/configuring\/custom-audiences)\ncan be used. (Note that the custom audience `cr-service` is already set in the `google_cloud_run_v2_service` Terraform resource in this demo project.)\n```\ngcloud run services update cr-service --region=europe-west3 --add-custom-audiences=cr-service\ngcloud run services update cr-service --region=us-central1 --add-custom-audiences=cr-service\n\nexport TOKEN=$(gcloud auth print-identity-token --impersonate-service-account SERVICE_ACCOUNT_EMAIL --audiences='cr-service')\n```\nor\n```\ncurl \"http:\/\/metadata.google.internal\/computeMetadata\/v1\/instance\/service-accounts\/default\/identity?audience=cr-service\" -H \"Metadata-Flavor: Google\" > .\/id-token.txt\n```\n\nNow we can make an authenticated call using the single ID token to the global FQDN hostname representing \nCloud Run instances running in both regions. 
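Before making the call, it can be useful to double-check which audience an ID token was actually minted for by decoding the token's JWT payload and inspecting its `aud` claim. The following is a minimal sketch; it builds a dummy, unsigned token inline for illustration — in practice, substitute the token saved in `./id-token.txt`:

```shell
# Decode the payload (second dot-separated part) of a JWT and print its "aud" claim.
# The token below is a dummy, unsigned example; replace it with $(cat ./id-token.txt).
ID_TOKEN="eyJhbGciOiJSUzI1NiJ9.$(printf '%s' '{"aud":"cr-service"}' | base64 | tr -d '=' | tr '/+' '_-').signature"

PAYLOAD=$(printf '%s' "${ID_TOKEN}" | cut -d '.' -f2 | tr '_-' '/+')
# Restore the base64 padding that the base64url (JWT) encoding strips
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done

AUD=$(printf '%s' "${PAYLOAD}" | base64 -d | grep -o '"aud":"[^"]*"')
echo "${AUD}"   # "aud":"cr-service"
```

A token whose `aud` does not match the configured custom audience (or the regional service URL, when custom audiences are not configured) will be rejected by Cloud Run.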
This call will work and authentication will succeed even in case of a Cloud Run outage or an\nentire Google Cloud region outage in one of the regions.\n\n```\ncurl -k -H \"Authorization: Bearer ${TOKEN}\" https:\/\/l7-crilb-cr.hello.zone\ncurl -k -H \"Authorization: Bearer $(cat .\/id-token.txt)\" https:\/\/l7-crilb-cr.hello.zone\n\n# If the internal or external application load balancer with serverless network endpoint groups (NEGs) \n# is configured with a TLS certificate for the Cloud DNS name resolving to the load balancer IP address, \n# then we can also omit the `-k` curl parameter and the client will verify the server TLS certificate properly:\n\ncurl \"http:\/\/metadata.google.internal\/computeMetadata\/v1\/instance\/service-accounts\/default\/identity?audience=cr-service\" -H \"Metadata-Flavor: Google\" > .\/id-token.txt\n\ncat .\/id-token.txt\n\ncurl -H \"Authorization: Bearer $(cat .\/id-token.txt)\"  https:\/\/l7-crilb-cr.hello.zone\n```\n\n### Failover\n\nWe can follow the [instructions](https:\/\/cloud.google.com\/load-balancing\/docs\/l7-internal\/setting-up-l7-cross-reg-serverless#test-failover) in the public documentation to simulate a regional Cloud Run backend outage.\n\nFirst, let's ensure that our demo application service is accessible via the internal cross-regional load balancer backed by \ntwo Cloud Run instances running in distinct regions (`europe-west3` and `us-central1` by default).\nFrom the client GCE VM attached to the demo project\u2019s private VPC network, run:\n```\ndocker run fortio\/fortio load --https-insecure -t 1m -qps 1 https:\/\/l7-crilb-cr.hello.zone\n```\nWe should see 100% successful invocations:\n```\nIP addresses distribution:\n10.156.0.51:443: 4\nCode 200 : 60 (100.0 %)\n```\n\nNow let's simulate a regional outage by removing all of the serverless NEG backends from one of the regions, e.g.:\n```\n# Check the serverless NEG backends before the backend deletion:\ngcloud compute backend-services list --filter=\"name:l7-crilb-cr\"\n\ngcloud compute 
backend-services remove-backend l7-crilb-cr \\\n   --network-endpoint-group=cloudrun-sneg \\\n   --network-endpoint-group-region=europe-west3 \\\n   --global\n\n# Check the serverless NEG backends after the backend deletion:\ngcloud compute backend-services list --filter=\"name:l7-crilb-cr\"\n```\n\nIf you executed the previous command while running Fortio in parallel (you can increase the time interval the tool runs for by modifying the `-t` command line parameter), you should see an output similar to:\n```\nIP addresses distribution:\n10.156.0.51:443: 4\nCode 200 : 300 (100.0 %)\n```\nWith our test setup of 1 call per second, all calls have reached their destination.\n\nIf we now also delete the serverless NEG backend in the second region, client calls will start failing.\n\nTo restore the load balancer infrastructure, just re-apply the Terraform configuration by running `terraform apply`.\n\nWhat we have seen so far is failover on the backend side of the internal cross-regional Application Load Balancer. That is, \nthe client application (Fortio) was still accessing the load balancer IP address in the nearest `europe-west3` region.\nYou can check that by running `host l7-crilb-cr.hello.zone`, which will return the IP address from the `europe-west3` region.\n\nWhat would happen in case of a full `europe-west3` region outage? \n\nThe [.\/l4-rilb-mig.tf](.\/l4-rilb-mig.tf) use case discussed above illustrates that case.\n\nUnfortunately, the [Cloud DNS health checks](https:\/\/cloud.google.com\/dns\/docs\/zones\/manage-routing-policies#health-checks) \nfor L7 load balancers cannot yet detect an outage of the application backend service; they only check the availability of the \ninternal Google Cloud infrastructure (Envoy proxies) supporting the L7 load balancers. A missing load balancer backend is also \nnot considered an outage, and the IP address switchover does not occur. 
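The DNS failover behaviour discussed here can be pictured as a simple primary/backup selection rule: hand out the primary region's address while its health check passes, otherwise hand out the backup region's address. The sketch below is a mental model only, not the actual Cloud DNS implementation; the IP addresses are the illustrative ones from the test outputs above:

```shell
# Toy model of a DNS failover routing policy: the primary address is returned
# while its health check passes; otherwise the backup address is returned.
# IPs are illustrative values from the demo outputs, not real resources.
pick_address() {
  primary_ip=$1; primary_healthy=$2; backup_ip=$3
  if [ "${primary_healthy}" = "yes" ]; then
    echo "${primary_ip}"
  else
    echo "${backup_ip}"
  fi
}

pick_address 10.156.0.51 yes 10.199.0.48   # europe-west3 healthy -> 10.156.0.51
pick_address 10.156.0.51 no  10.199.0.48   # europe-west3 down    -> 10.199.0.48
```

For the L4 passthrough load balancer the health signal already reflects the backend service, which is why the failover in the MIG test above worked end to end; for L7 load balancers the signal currently reflects only the proxy infrastructure.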
\nIt is difficult to simulate an actual Google Cloud region outage that would trigger the Cloud DNS IP address failover. \nIt is expected that in the future the Cloud DNS health checks will also be able to detect the availability of the application \nservice, providing behaviour similar to what the health checks for the L4 passthrough Network Load Balancer currently provide.\n\n\n## Cleanup\n\nTo clear the custom audiences:\n```\ngcloud run services update cr-service --region=europe-west3 --clear-custom-audiences\ngcloud run services update cr-service --region=us-central1 --clear-custom-audiences\n```\n\nTo remove the resources created by this project deployment, either delete the target Google Cloud project or run:\n```\nterraform destroy --auto-approve\n```\n\n## Useful Links\n\n* [Multi-region failover using Cloud DNS Routing Policies and Health Checks for Internal TCP\/UDP Load Balancer](https:\/\/codelabs.developers.google.com\/clouddns-failover-policy-codelab#0)\n\n* [AWS DNS load balancing example](https:\/\/docs.aws.amazon.com\/whitepapers\/latest\/real-time-communication-on-aws\/cross-region-dns-based-load-balancing-and-failover.html)","site":"GCP"}
{"questions":"GCP This is the BigQuery query that will generate the source data SELECT table that will be created in AlloyDB sql We are going to be moving data from a public dataset stored in BigQuery into a toaddress dataflow bigquery to alloydb fromaddress","answers":"# dataflow-bigquery-to-alloydb\n\nWe are going to be moving data from a public dataset stored in BigQuery into a\ntable that will be created in AlloyDB.\nThis is the BigQuery query that will generate the source data:\n\n```sql\nSELECT\n    from_address,\n    to_address,\n    CASE\n        WHEN SAFE_CAST(value AS NUMERIC) IS NULL THEN 0\n        ELSE SAFE_CAST(value AS NUMERIC)\n        END AS value,\n    block_timestamp\nFROM\n    bigquery-public-data.crypto_ethereum.token_transfers\nWHERE\n    DATE(block_timestamp) = DATE_ADD(CURRENT_DATE(), INTERVAL -1 DAY)\n```\n\n## Create the AlloyDB table in which we will store the BigQuery data\n\nCreate a database for the table in AlloyDB:\n\n```sql\nCREATE DATABASE ethereum;\n```\n\nCreate the table in which we will write the BigQuery data:\n\n```sql\nCREATE TABLE token_transfers (\n    from_address VARCHAR,\n    to_address VARCHAR,\n    value NUMERIC,\n    block_timestamp TIMESTAMP\n);\n```\n\n## Create the local environment\n\n```\npython3 -m venv env\nsource env\/bin\/activate\npip3 install -r requirements.txt\n```\n\n## Running the Dataflow pipeline\n\nIf the Python environment is not activated, activate it:\n\n```\nsource env\/bin\/activate\n```\n\nFor running the Dataflow pipeline, a bucket is needed for staging the BigQuery\ndata. 
If you don't have a bucket, please create one in the same region in\nwhich Dataflow will run, for example in `southamerica-east1`:\n\n```bash\ngcloud storage buckets create gs:\/\/<BUCKET_NAME> --location=southamerica-east1\n```\n\nConfigure the environment variables:\n\n```bash\nTMP_BUCKET=<name of the bucket used for staging>\nPROJECT=<name of your GCP project>\nREGION=<name of the GCP region in which Dataflow will run>\nSUBNETWORK=<ID of the subnetwork in which Dataflow will run, for example:\nhttps:\/\/www.googleapis.com\/compute\/v1\/projects\/<NAME_OF_THE_VPC_PROJECT>\/regions\/<REGION>\/subnetworks\/<NAME_OF_THE_SUBNET>>\nALLOYDB_IP=<IP address of AlloyDB>\nALLOYDB_USERNAME=<USERNAME used for connecting to AlloyDB>\nALLOYDB_PASSWORD=<PASSWORD used for connecting to AlloyDB>\nALLOYDB_DATABASE=ethereum\nALLOYDB_TABLE=token_transfers\nBQ_QUERY=\"\n    SELECT\n        from_address,\n        to_address,\n        CASE\n            WHEN SAFE_CAST(value AS NUMERIC) IS NULL THEN 0\n            ELSE SAFE_CAST(value AS NUMERIC)\n            END AS value,\n        block_timestamp\n    FROM\n        bigquery-public-data.crypto_ethereum.token_transfers\n    WHERE\n        DATE(block_timestamp) = DATE_ADD(CURRENT_DATE(), INTERVAL -1 DAY)\n\"\n```\n\nExecute the pipeline:\n\n```bash\npython3 main.py \\\n    --runner DataflowRunner \\\n    --region ${REGION} \\\n    --project ${PROJECT} \\\n    --temp_location gs:\/\/${TMP_BUCKET}\/tmp\/ \\\n    --alloydb_username ${ALLOYDB_USERNAME} \\\n    --alloydb_password ${ALLOYDB_PASSWORD} \\\n    --alloydb_ip ${ALLOYDB_IP} \\\n    --alloydb_database ${ALLOYDB_DATABASE} \\\n    --alloydb_table ${ALLOYDB_TABLE} \\\n    --bq_query \"${BQ_QUERY}\" \\\n    --no_use_public_ips \\\n    --subnetwork=${SUBNETWORK}\n```","site":"GCP"}
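The `main.py` pipeline itself is not shown in this README. As a rough, hypothetical sketch of the per-row logic such a pipeline needs (function names and structure are assumptions, not the repository's actual code), the helpers below mirror the query's `CASE`/`SAFE_CAST` guard in Python and build the `INSERT` statement matching the `CREATE TABLE` above:

```python
from decimal import Decimal, InvalidOperation

# Hypothetical helpers -- main.py is not shown in this README.
def to_insert_row(bq_row):
    """Map one BigQuery result dict to the token_transfers insert tuple.

    Mirrors the CASE/SAFE_CAST guard of the source query: any value that
    cannot be parsed as NUMERIC is stored as 0 instead of failing the insert.
    """
    try:
        value = Decimal(str(bq_row["value"]))
    except (InvalidOperation, KeyError):
        value = Decimal(0)
    return (
        bq_row.get("from_address"),
        bq_row.get("to_address"),
        value,
        bq_row.get("block_timestamp"),
    )

def insert_statement(table):
    # Column order matches the CREATE TABLE statement above.
    return (f"INSERT INTO {table} "
            "(from_address, to_address, value, block_timestamp) "
            "VALUES (%s, %s, %s, %s)")
```

A worker would then execute `insert_statement(ALLOYDB_TABLE)` with each tuple over a standard PostgreSQL connection to the AlloyDB IP.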
{"questions":"GCP gcs hive external table file optimization 1 Example solution to showcase impact of file count file size and file type on Hive external tables and query speeds 2 Table Of Contents","answers":"# gcs-hive-external-table-file-optimization\n\nExample solution to showcase the impact of file count, file size, and file type on Hive external tables and query speeds\n\n----\n\n## Table Of Contents\n\n1. [About](#about)\n2. [Use Case](#use-case)\n3. [Guide](#guide)\n4. [Sample Queries](#sample-queries)\n5. [Sample Results](#sample-results)\n\n----\n\n## About\n\nOne way to perform data analytics is through Hive on [Cloud Dataproc](https:\/\/cloud.google.com\/dataproc).  You can create external tables in Hive, where the schema resides in Dataproc but the data resides in [Google Cloud Storage](https:\/\/cloud.google.com\/storage).  This allows you to separate compute and storage, enabling you to scale your data independently of compute power.\n\nIn older HDFS \/ Hive on-prem setups, the compute and storage were closely tied together, either on the same machine or on a nearby machine.  But when storage is separated on the cloud, you save on storage costs at the expense of latency.  It takes time for Cloud Dataproc to retrieve files from Google Cloud Storage.  When there are many small files, this can negatively affect query performance.\n\nFile type and compression can also affect query performance.\n\n**It is important to be deliberate in choosing your Google Cloud Storage file strategy when performing data analytics on Google Cloud.**\n\n**In this example you'll see a 99.996% improvement in query run time.**\n\n----\n\n## Use Case\n\nThis repository sets up a real-world example of comparing query performance between different file sizes on Google Cloud Storage.  
It provides code to perform a one-time **file compaction** using [Google BigQuery](https:\/\/cloud.google.com\/bigquery) and the [bq cli](https:\/\/cloud.google.com\/bigquery\/docs\/bq-command-line-tool), and in doing so, optimizes your query performance when using Cloud Dataproc + external tables in Hive + data on Google Cloud Storage.\n\nThe setup script will create external tables with source data in the form of:\n\n- small raw json files\n- compacted json files\n- compacted compressed json files\n- compacted parquet files\n- compacted compressed parquet files\n- compacted avro files\n- compacted compressed avro files\n\nFinally, it will show you how to query all of the tables and demonstrate query run times for each source data \/ file format.\n\n----\n\n## Guide\n\nFirst, follow this sample guide to generate many small files in Google Cloud Storage:\n\nhttps:\/\/github.com\/CYarros10\/gcp-dataproc-workflow-template-custom-image-sample\n\nThen:\n\n```bash\ncd gcs-hive-external-table-file-optimization\n\n.\/scripts\/setup.sh <project_id> <project_number> <region> <dataset> <table>\n```\n\n----\n\n## Sample Queries\n\n**Hive**\n\n```sql\nmsck repair table comments;\nmsck repair table comments_json;\nmsck repair table comments_json_gz;\nmsck repair table comments_avro;\nmsck repair table comments_avro_snappy;\nmsck repair table comments_avro_deflate;\nmsck repair table comments_parquet;\nmsck repair table comments_parquet_snappy;\nmsck repair table comments_parquet_gzip;\n\nadd jar \/lib\/hive\/lib\/hive-hcatalog-core-2.3.7.jar;\nadd jar \/lib\/hive\/lib\/json-1.8.jar;\nadd jar \/lib\/hive\/lib\/json-path-2.1.0.jar;\nadd jar \/lib\/hive\/lib\/json4s-ast_2.12-3.5.3.jar;\nadd jar \/lib\/hive\/lib\/json4s-core_2.12-3.5.3.jar;\nadd jar \/lib\/hive\/lib\/json4s-jackson_2.12-3.5.3.jar;\nadd jar \/lib\/hive\/lib\/json4s-scalap_2.12-3.5.3.jar;\n\nselect count(*) from comments;\nselect count(*) from comments_json;\nselect count(*) from 
comments_json_gz;\nselect count(*) from comments_avro;\nselect count(*) from comments_avro_snappy;\nselect count(*) from comments_avro_deflate;\nselect count(*) from comments_parquet;\nselect count(*) from comments_parquet_snappy;\nselect count(*) from comments_parquet_gzip;\n```\n\n----\n\n## Sample Results\n\nSorted by query runtime:\n\n| file type | compression | file count | file size (mb) | query runtime (seconds) |\n|---|---|---|---|---|\n| parquet | GZIP | 1 | 13.1 | 1.64 |\n| parquet | SNAPPY | 1 | 20.1 | 2.11 |\n| json | none | 1 | 95.6 | 2.35 |\n| parquet | none | 1 | 32.2 | 2.66 |\n| json | GZIP | 1 | 17.1 | 4.20 |\n| avro | SNAPPY | 1 | 25.7 | 8.79 |\n| avro | DEFLATE | 1 | 18.4 | 9.20 |\n| avro | none | 1 | 44.7 | 15.59 |\n| json | none | 6851 | 0.01 | 476.52 |\n\ncomments = 6851 x 10kb file(s)\n\n![Stack-Resources](images\/comments.png)\n\ncomments_json = 1 x 95.6mb file(s)\n\n![Stack-Resources](images\/comments_json.png)\n\ncomments_json_gz = 1 x 17.1mb file(s)\n\n![Stack-Resources](images\/comments_json_gz.png)\n\ncomments_avro = 1 x 44.7mb file(s)\n\n![Stack-Resources](images\/comments_avro.png)\n\ncomments_avro_snappy = 1 x 25.7mb file(s)\n\n![Stack-Resources](images\/comments_avro_snappy.png)\n\ncomments_avro_deflate = 1 x 18.4mb file(s)\n\n![Stack-Resources](images\/comments_avro_deflate.png)\n\ncomments_parquet = 1 x 32.2mb file(s)\n\n![Stack-Resources](images\/comments_parquet.png)\n\ncomments_parquet_snappy = 1 x 20.1mb file(s)\n\n![Stack-Resources](images\/comments_parquet_snappy.png)\n\ncomments_parquet_gzip = 1 x 13.1mb file(s)\n\n![Stack-Resources](images\/comments_parquet_gzip.png)\n","site":"GCP"}
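The one-time compaction in this repository is done with BigQuery and the bq CLI. As a language-neutral illustration of the underlying idea only (this planner is not part of the repo), grouping many small objects into a few target-sized outputs can be sketched like this:

```python
def plan_compaction(file_sizes_mb, target_mb=128):
    """Greedily group small file sizes into compaction batches of roughly
    target_mb each.

    Illustration only -- the repo does its actual compaction by loading the
    small JSON files into BigQuery and extracting a single output file.
    """
    batches, current, current_size = [], [], 0.0
    for size in file_sizes_mb:
        # Flush the current batch once adding this file would overshoot.
        if current and current_size + size > target_mb:
            batches.append(current)
            current, current_size = [], 0.0
        current.append(size)
        current_size += size
    if current:
        batches.append(current)
    return batches
```

For the worst case in the table above (6851 files of ~0.01 MB each, ~68.5 MB total), a 128 MB target collapses everything into a single batch, which is exactly the shape that queried in ~2 seconds instead of ~476.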
{"questions":"GCP the entire solution on Google Cloud Platform GCP IoT Nirvana distributed all over the world and to follow temperature evolution by city in Architecture real time This document will guide you through the necessary steps to set up Internet of Things architecture running on Google Cloud Platform The purpose of This solution was built with the purpose of demonstrating an end to end the solution is to simulate the collection of temperature measures from sensors","answers":"# IoT Nirvana\n\nThis solution demonstrates an end-to-end Internet of Things architecture\nrunning on Google Cloud Platform. It simulates the collection of temperature\nmeasurements from sensors distributed all over the world and follows\ntemperature evolution by city in real time. This document will guide you\nthrough the necessary steps to set up the entire solution on Google Cloud\nPlatform (GCP).\n\n## Architecture\n\nThe image below contains a high-level diagram of the solution.\n![](img\/architecture.png)\n\nThe following components are represented on the diagram:\n\n1. Temperature sensors are simulated by running IoT Java clients on Google Compute\n   Engine\n2. The sensors send temperature data to an IoT Core registry running on GCP\n3. The IoT Core registry publishes it into a PubSub topic\n4. A streaming Dataflow pipeline captures the temperature data in real time\n   by subscribing to and reading from the PubSub topic\n5. Temperature data is pushed into BigQuery for analytics purposes\n6. Temperature data is also saved to Datastore for real-time querying\n7. Temperature is displayed in real time in an AppEngine Web application\n8. All components log data to Stackdriver\n\n## Bootstrapping\n\nAs a prerequisite you will need a GCP project on which you have owner rights\nin order to facilitate the setup of the solution. 
In the remainder of this guide\nthis project's identifier will be referred to as **[PROJECT_ID]**.\n\nEnable the following APIs in your project:\n\n* [Cloud Pub\/Sub API](https:\/\/console.cloud.google.com\/apis\/api\/pubsub.googleapis.com)\n* [DataFlow API](https:\/\/console.cloud.google.com\/apis\/api\/dataflow.googleapis.com)\n* [Google Cloud IoT API](https:\/\/console.cloud.google.com\/apis\/library\/cloudiot.googleapis.com)\n\nIn order to run the simulation in ideal conditions, with 10 virtual machines,\nplease request an increase of your CPU quota to 80 vCPUs. This is, however,\noptional.\n\nCreate the environment variables that will be used throughout this tutorial. You can edit the default values provided below, however please note that not all products may be available in all regions:\n\n```\nexport PROJECT_ID=<PROJECT_ID>\nexport BUCKET_NAME=<BUCKET_NAME>\nexport REGION=us-central1\nexport ZONE=us-central1-a\nexport PUBSUB_TOPIC=telemetry\nexport PUBSUB_SUBSCRIPTION=telemetry-sub\nexport IOT_REGISTRY=devices\nexport BIGQUERY_DATASET=warehouse\nexport BIGQUERY_TABLE=device_data\n```\n\nRun the **setup_gcp_environment.sh** script to create the corresponding\nresources in your GCP project. The following arguments must be provided, in\nthis order:\n\n    1) Project Id\n    2) Region where the Cloud IoT Core registry will be created\n    3) Zone where a temporary VM to generate the Java image will be created\n    4) Cloud IoT Core registry name\n    5) PubSub telemetry topic name\n    6) PubSub subscription name\n    7) BigQuery dataset name\n\nIn addition, the script also creates a Debian image with Java pre-installed,\ncalled **debian9-java8-img**, that will be used to run the Java programs\nsimulating temperature sensors.\n\nExample:\n\n`setup_gcp_environment.sh $PROJECT_ID $REGION $ZONE $IOT_REGISTRY $PUBSUB_TOPIC $PUBSUB_SUBSCRIPTION $BIGQUERY_DATASET`\n\n## Build the solution\n\nThe first action is to compile and package all the 
modules of the solution: the\nclient simulating the temperature sensor, the Dataflow pipeline and the frontend\nAppEngine application displaying temperatures in real time. To do this, run the\nfollowing command at the root of the project:\n\n`mvn clean install`\n\n## Dataflow pipeline\n\nIn order to run the Dataflow pipeline, execute the `run_oncloud.sh` script at the\nroot of the project with the following parameters:\n\n* **[PROJECT_ID]** - your project's identifier\n* **[BUCKET_NAME]** - the name of the bucket created by the bootstrapping\n  script, identical to your project's identifier, **[PROJECT_ID]**, where the\n  Dataflow pipeline's binary package will be stored\n* **[PUBSUB_TOPIC]** - the name of the PubSub topic created by the bootstrapping\n  script, from which the Dataflow pipeline will read the temperature data;\n  please note that this isn't the topic's canonical name, but instead the name\n  relative to your project\n* **[BIGQUERY_TABLE]** - a name for the BigQuery table where Dataflow will save\n  the temperature data; the format of this parameter must follow the rule\n  **[BIGQUERY_DATASET].[TABLE_NAME]**\n\nExample:\n\n`run_oncloud.sh $PROJECT_ID $BUCKET_NAME $PUBSUB_TOPIC $BIGQUERY_DATASET.$BIGQUERY_TABLE`\n\n## Temperature sensor\n\nCopy the JAR package containing the client binaries to Google Cloud Storage in\nthe bucket previously created. Run the following command in the `\/client`\nfolder:\n\n`gsutil cp target\/google-cloud-demo-iot-nirvana-client-jar-with-dependencies.jar gs:\/\/$BUCKET_NAME\/client\/`\n\nCheck that the JAR file has been correctly copied to the Google Cloud Storage\nbucket with the following command:\n\n`gsutil ls gs:\/\/$BUCKET_NAME\/client\/google-cloud-demo-iot-nirvana-client-jar-with-dependencies.jar`\n\n## AppEngine Web frontend\n\nThe following steps will allow you to set up and run on AppEngine the Web\nfrontend that visualizes in real time the temperature data captured\nfrom the temperature sensors:\n1. 
Modify the `src\/main\/webapp\/startup.sh` file in the `\/app-engine` folder by\n   updating the variables below. This is the startup script of the Virtual\n   Machines that will be created from the image **debian9-java8-img**, and it\n   creates 10 instances of the Java client simulating a temperature sensor.\n   * PROJECT_ID - your GCP project's identifier, **[PROJECT_ID]**\n   * BUCKET_NAME - name of the Google Cloud Storage bucket created by the\n     bootstrapping script\n   * REGISTRY_NAME - name of the IoT Core registry created by the bootstrapping\n     script\n   * REGION - region in which the IoT Core registry was created by the\n     bootstrapping script\n2. Copy the `startup.sh` file to the Google Cloud Storage bucket by running the\n   following command in the `\/app-engine` folder:\n   `gsutil cp src\/main\/webapp\/startup.sh gs:\/\/$BUCKET_NAME\/`\n3. Modify the `\/pom.xml` file in the `\/app-engine` folder:\n   * Update the `<app.id\/>` node with the **[PROJECT_ID]** of your GCP project\n   * Update the `<app.version\/>` node with the desired version of the application\n4. Modify the `src\/main\/webapp\/config\/client.properties` file in the\n   `\/app-engine` folder by updating the values of the following parameters:\n   * GCS_BUCKET - name of the Google Cloud Storage bucket created by the\n     bootstrapping script\n   * GCE_METADATA_STARTUP_VALUE - path on Google Cloud Storage to the startup\n     script edited at the previous step, gs:\/\/[BUCKET_NAME]\/startup.sh\n   * GCP_CLOUD_IOT_CORE_REGISTRY_NAME - name of the IoT Core registry created by\n     the bootstrapping script\n   * GCP_CLOUD_IOT_CORE_REGION - region in which the IoT Core registry was\n     created by the bootstrapping script\n5. Enable the [Maps Javascript API](https:\/\/console.cloud.google.com\/apis\/library\/maps-backend.googleapis.com)\n6. In the *Credentials* section of the Maps Javascript API generate the API key\n   that will be used by the Web frontend to call Google Maps. 
This key will be\n   referred to as **[MAPS_API_KEY]** further in the document. Make sure to:\n   * Select HTTP in the \"Application restrictions\" list\n   * Enter the URLs of the application, `https:\/\/[YOUR_PROJECT_ID].appspot.com\/*`\n     and `http:\/\/[YOUR_PROJECT_ID].appspot.com\/*`, in the \"Accept requests for\n     these HTTP referrers (web sites)\" input zone\n7. Update the `src\/main\/webapp\/index.html` file in the `\/app-engine` folder by\n   replacing the **[MAPS_API_KEY]** text with the actual value of the Google\n   Maps API key generated at step 6.\n8. Run the `gcloud app create` command to create the Google AppEngine\n   application\n9. Deploy the frontend Web application on AppEngine by running the following\n   command at the root of the project:\n   `mvn -pl app-engine appengine:update`\n\n## Testing\n\nIn order to test the end-to-end solution, it is necessary first to start the\ntemperature sensor simulation. Follow the steps below to achieve this:\n\n* Go to the following address in your web browser, which will display the map\n  of the Earth with 3 buttons at the bottom: **Start**, **Update**, **Stop**:\n  `https:\/\/[YOUR_PROJECT_ID].appspot.com\/index.html`\n* Click on the **Start** button at the bottom left of the page (this also\n  enables the **Update** and **Stop** buttons)\n* The VM instances being launched are visible in the Google Cloud Console under\n  [Compute Engine](https:\/\/console.cloud.google.com\/compute\/instances)\n\nIn order to visualize temperature data in real time on Google Maps, do the\nfollowing:\n\n* Click on the **Update** button at the bottom center of the page\n  `https:\/\/[YOUR_PROJECT_ID].appspot.com\/index.html`. 
This will display on the\n  map the test cities used for simulating the temperature sensors.\n* Run the following SQL query in BigQuery to retrieve the most recent cities for\n  which data is available:\n  `SELECT City, Time FROM [BIGQUERY_DATASET].[TABLE_NAME] ORDER BY 2 DESC LIMIT 10`\n* Locate on the map one of the cities returned by the query and click on the\n  city icon to visualise the temperature graph.\n\nTo stop the simulation, click on the **Stop** button at the bottom right of the\npage `https:\/\/[YOUR_PROJECT_ID].appspot.com\/index.html`.\n","site":"GCP"}
{"questions":"GCP ","answers":"- [Reusable Plugins](#reusable-plugins-for-cloud-data-fusion-cdf--cdap)\n    - [Overview](#overview)\n    - [CheckPointReadAction, CheckPointUpdateAction](#checkpointreadaction-checkpointupdateaction)\n        - [Dependencies](#dependencies)\n            - [Setting up Firestore](#setting-up-firestore)\n            - [Set Runtime Arguments](#set-runtime-arguments)\n    - [CopyTableAction](#copytableaction)\n    - [DropTableAction](#droptableaction)\n    - [TruncateTableAction](#truncatetableaction)\n- [Putting it all together into a Pipeline](#putting-it-all-together-into-a-pipeline)\n  - [CheckPointReadAction](#checkpointreadaction)\n  - [TruncateTableAction](#truncatetableaction)\n  - [Database source](#database-source)\n  - [BigQuery sink](#bigquery-sink)\n  - [MergeLastUpdateTSAction](#mergelastupdatetsaction)\n  - [CheckPointUpdateAction](#checkpointupdateaction)\n- [Building the CDF\/CDAP Plugin (JAR file \/ JSON file) and deploying into CDF\/CDAP](#building-the-cdfcdap-plugin-jar-file--json-file-and-deploying-into-cdfcdap)\n\n# Reusable Plugins for Cloud Data Fusion (CDF) \/ CDAP\n\n## Overview\nThe CDF\/CDAP plugins detailed below can be reused in the context of data pipelines.\n\nLet's say you run your incremental pipeline once every 5 minutes. When running an incremental pipeline, you have to filter the records by a specific field (e.g., `lastUpdateDateTime` of records > latest watermark value - buffer time) so it will sync the records that were updated since your last incremental sync. 
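The watermark arithmetic behind that filter can be sketched in plain shell (GNU `date`; `CHECKPOINT_VALUE` and `BUFFER_MINUTES` are hypothetical stand-ins for the checkpoint stored in Firestore and the plugin's buffer setting):

```shell
# Hypothetical values: in the real pipeline, CheckPointReadAction reads the
# checkpoint from Firestore and the buffer is a plugin config property.
CHECKPOINT_VALUE="2020-05-08 17:21:01"
BUFFER_MINUTES=1

# Roll the checkpoint back by the buffer so records that committed around
# the previous sync are not missed (GNU date relative-item syntax).
latestWatermarkValue=$(date -u -d "$CHECKPOINT_VALUE UTC $BUFFER_MINUTES minutes ago" '+%Y-%m-%d %H:%M:%S')

# The Database source's import query is then parameterized with this value:
echo "SELECT * FROM test WHERE last_update_datetime > '$latestWatermarkValue'"
```

Because the buffer deliberately re-reads a small overlapping window, the same record can be fetched twice; the merge-and-dedupe step is what keeps that overlap from producing duplicates.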
Subsequently, a merge and dedupe step is done to make sure only new\/updated records are synced into the destination table.\n\n## `CheckPointReadAction`, `CheckPointUpdateAction`\n\n**Plugin Description**  \nCreates, reads, and updates checkpoints in incremental pull pipelines.\n\n`CheckPointReadAction` - reads checkpoints in Firestore DB and provides the data at runtime as an environment variable\n\n`CheckPointUpdateAction` - updates checkpoints in Firestore DB (i.e., creates a new document and stores the maximum update date \/ time from BQ so that the next run can use this checkpoint value to filter records that were added since then)\n\nFor now these plugins only support timestamp values - in the future, integer values can potentially be added.\n\n### Dependencies\n\n#### Setting up Firestore\n\n1. Set up Firestore DB\n    1. Firestore is used to store \/ read checkpoints, which are used in your incremental pipelines\n\n1. Create a collection with a document from the parent path \/\n    1. Collection ID: `PIPELINE_CHECKPOINTS`\n    1. Document ID: `INCREMENTAL_DEMO`\n\n![image](img\/1-create_pipeline_checkpoint_collection.png)\n\n1. Create a collection under Parent path `\/PIPELINE_CHECKPOINTS\/INCREMENTAL_DEMO`\n    1. Collection ID: `CHECKPOINT`\n    1. Document ID: just accept what was provided initially\n        1. Field #1\n            1. Note:\n                1. Set to maximum timestamp from destination (BQ table)\n                1. Set to minimum timestamp if running for the first time from source (e.g., SQL server table)\n\n            1. Field name: `CREATED_TIMESTAMP`\n            1. Field type: `string`\n            1. Date and time: `2020-05-08 17:21:01`\n\n        1. Field #2\n            1. Note: enter the current time in timestamp format\n            1. Field name: `CREATED_TIMESTAMP`\n            1. Field type: timestamp\n            1. 
Date and time: 25\/08\/2020, 15:49\n\n![image](img\/2-create_check_point_collection.png)\n\n#### Set Runtime Arguments\n\nBefore running the pipeline, add the `latestWatermarkValue` variable as a runtime argument (on the Pipeline Studio view, click on the drop-down arrow for the Run button) and set the value to 0:\n\n![image](img\/3-runtime_arguments.png)\n\n`CheckPointReadAction` will populate `latestWatermarkValue` with the `CHECKPOINT_VALUE` from Firestore. The `latestWatermarkValue` runtime argument is then used as a parameter of the import query of the Database source in a subsequent step:  \n```sql\nSELECT * FROM test WHERE last_update_datetime > '${latestWatermarkValue}'\n```\n\nBigQuery holds the actual destination table; this is where the max checkpoint (i.e., the max timestamp) is taken from.\n\n**Use Case**  \nThis plugin can be used at the beginning of an incremental CDAP data pipeline to read the checkpoint value from the last sync.\n\nLet's say you run your pipeline once every 5 minutes. When running an incremental pipeline, you filter the records by a specific timestamp field; the pipeline still performs merge and dedupe, even when the same records are processed again, to make sure duplicate records do not end up in the destination table.\n\n**`CheckpointReadAction` plugin requires the following config properties:**\n\n-  Label: plugin label name.\n-  Specify the collection name in Firestore DB: Name of the Collection.\n-  Specify the document name to read the checkpoint details: Provide the document name 
specified in the Collection.\n-  Buffer time to add to checkpoint value. (Note: in Minutes): Number of minutes that need to be subtracted from the Firestore collection value.\n-  Project: project ID.\n-  Key path: Service account key file path to communicate with the Firestore DB.\n\n**Please see the following screenshot for example.**\n\n![image](img\/4-checkpoint_read_action_plugin_ui.png)\n\n**`CheckpointUpdateAction` plugin requires the following configuration:**\n\n-  Label : plugin label name.\n-  Specify the collection name in firestore DB: Name of the Collection.\n-  Specify the document name to read the checkpoint details: Provide the document name specified in the Collection.\n-  Dataset name where incremental pull table exists: Big Query Dataset name.\n-  Table name that needs incremental pull: Big Query table name.\n-  Specify the checkpoint column from incremental pull table:\n-  Project: project ID.\n-  Key path: Service account key file path to communicate with the Firestore DB.\n\n**Please see the below screenshot for example:**\n\n![image](img\/5-checkpoint_update_action_plugin_ui.png)\n\n## `CopyTableAction`\n\n**Plugin description**  \nCopies the BigQuery table from staging to destination at the end of the pipeline run. A new table is created if it doesn't exist. 
Otherwise, if the table exists, the plugin replaces the existing BigQuery destination table with data from staging.\n\n**Use case**  \nThis is applicable in CDAP data pipelines that do a full import\/scan of the data from the source system to BigQuery.\n\n**Dependencies**  \nDestination dataset : `bq_dataset`  \nDestination table : `bq_table`  \nSource dataset : `bq_dataset_batch_staging`  \nSource table : `bq_table`\n\n**`CopyTableAction` plugin requires the following configuration:**\n\n-  Label: plugin label name.\n-  Key path: Service account key file path to call the Big Query API.\n-  Project ID: GCP project ID.\n-  Dataset: Big Query dataset name.\n-  Table Name: Big Query table name.\n\n**Please see the following screenshot for example:**\n\n![image](img\/6-copy_table_action_ui.png)\n\n## `DropTableAction`\n\n**Plugin Description**  \nDrops a BigQuery table at the beginning of the pipeline run.\n\n**Use Case**  \nUseful to drop staging tables.\n\n**Dependencies**  \nRequires the BQ table to be dropped to exist.\n\n**Drop table action plugin requires the following configuration:**\n\n-  Label: plugin label name.\n-  Key path: Service account key file path to call the Big Query API.\n-  Project ID: GCP project ID.\n-  Dataset: Big Query dataset name.\n-  Table Name: Big Query table name.\n\nPlease see the following screenshot for example configuration:\n\n![image](img\/7-drop_table_action_ui.png)\n\n## `TruncateTableAction`\n\n**Plugin Description**  \nTruncates a BigQuery table when pipelines are set to restore the data from source.\n\n**Use Case**  \nApplicable in pipelines that restore data from source.\n\n**TruncateTable action plugin requires the following configuration:**\n\n-  Label: plugin label name.\n-  Key path: Service account key file path to call the Big Query API.\n-  Project ID: GCP project ID.\n-  Dataset: Big Query dataset name.\n-  Table Name: Big Query table name.\n\n**Please see the following screenshot for example 
configuration:**\n\n![image](img\/8-truncate_table_action_ui.png)\n\n\n# Putting it all together into a Pipeline\n\n`CheckPointReadAction` \u2192 `TruncateTableAction` \u2192 Database \u2192 BigQuery \u2192 `MergeLastUpdateTSAction` \u2192 `CheckPointUpdateAction`\n\nWhat does the pipeline do?\n\n1. `CheckPointReadAction` - reads latest checkpoint from Firestore\n1. `TruncateTableAction` - truncate the records in the log table\n1. Database Source- imports data from the source\n1. BigQuery Sink - exports data into BigQuery from previous step (database source)\n1. `MergeLastUpdateTSAction` -  merge based on timestamp and the update column list (columns to keep in the merge).\n    -  Note: Alternatively, you can use [`BigQueryExecute`](https:\/\/github.com\/data-integrations\/google-cloud\/blob\/develop\/src\/main\/java\/io\/cdap\/plugin\/gcp\/bigquery\/action\/BigQueryExecute.java) action to do a Merge.\n1. `CheckPointUpdateAction` - update checkpoint in Firestore from the max record lastUpdateTimestamp in BigQuery \n\n## Successful run of Incremental Pipeline\n\n![image](img\/9-successful_run_incremental_v2_pipeline.png)\n\n### Runtime arguments (set latestWatermarkValue to 0)\n![image](img\/9-set_runtime_arugments_latestWatermarkValue.png)\n\n### `CheckPointReadAction`\n\n**Label:**  \n`CheckPointReadAction`\n\n**Specify the document name to read the checkpoint details\\*:**  \nINCREMENTAL_DEMO\n\n**Buffer time to add to checkpoint value. 
(Note: in Minutes):**  \n1\n\n**project:**  \n`pso-cdf-plugins-287518`\n\n**serviceFilePath:**  \nauto-detect\n\n**Screenshot:**\n\n![image](img\/10-checkpoint_read_action_ui_pipeline_parameters.png)\n\n### `TruncateTableAction`\n\n**Label:**\n`TruncateTableAction`\n\n**Key path*:**  \nauto-detect\n\n**ProjectId* :**  \n`pso-cdf-plugins-287518`\n\n**Dataset** *\n`bq_dataset`\n\n**Table name***  \n`bq_table_LOG`\n\n![image](img\/11-truncate_table_action_ui_pipeline_parameters.png)\n\n### Database source\n\n**Label** *\nDatabase\n\n**Reference Name***  \ntest\n\n**Plugin Name***  \nsqlserver42\n\n**Plugin Type**  \njdbc\n\n**Connection String**  \njdbc:sqlserver:\/\/<fill in IP address of database server>:<db port>;databaseName=main;user=<fill in user>;password=<fill in password>;\n\n**Import Query:**\n```sql\nSELECT * FROM test WHERE last_update_datetime > '${latestWatermarkValue}'\n```\n\n![image](img\/12-database_source_ui_pipeline_parameters.png)\n\n\n### BigQuery sink\n\n**Label** *\nBigQuery\n\n**Reference Name***  \nbq_table_sink\n\n**Project ID**  \n`pso-cdf-plugins-287518`\n\n**Dataset***  \n`bq_dataset`\n\n**Table***  (write to a temporary table, e.g., bq_table_LOG)\n`bq_table_LOG`\n\n**Service Account File Path**  \nauto-detect\n\n**Schema**\n\n![image](img\/13-bigquery_sink_ui_pipeline_parameters.png)\n\n### `MergeLastUpdateTSAction`\n\n**Label***  \n`MergeLastUpdateTSAction`\n\n**Key path***  \nauto-detect\n\n**Project ID***  \n`pso-cdf-plugins-287518`\n\n**Dataset name**  \n`bq_dataset`\n\n**Table name***  \n`bq_table`\n\n**Primary key list***  \nid\n\n**Update columns list***  \nid,name,last_update_datetime\n\n![image](img\/14-merge_last_update_ts_action.png)\n\n### `CheckPointUpdateAction`\n\n**Label***  \n`CheckPointUpdateAction`\n\n**Specify the collection name in firestore DB***  \nPIPELINE_CHECKPOINTS\n\n**Specify the document name to read the checkpoint details***  \nINCREMENTAL_DEMO\n\n**Dataset name where incremental pull table exists***  
\n`bq_dataset`\n\n**Table name that needs incremental pull***  \n`bq_table`\n\n**Specify the checkpoint column from incremental pull table***  \nlast_update_datetime\n\n**serviceFilePath**  \nauto-detect\n\n**project**  \n`pso-cdf-plugins-287518`\n\n![image](img\/15-checkpoint_update_action_ui_pipeline_parameters.png)\n\n## Building the CDF\/CDAP Plugin (JAR file \/ JSON file) and deploying into CDF\/CDAP\n\nThis plugin requires Java JDK1.8 and maven.\n\n1. To build the CDAP \/ CDF plugin jar, execute the following command on the root.\n```bash\nmvn clean compile package\n```\n\n2. You will find the generated JAR file and JSON file under target folder:\n   1. `GoogleFunctions-1.6.jar`\n   1. `GoogleFunctions-1.6.json`\n    \n1. Deploy `GoogleFunctions-1.6.jar` and `GoogleFunctions-1.6.json` into CDF\/CDAP (note that if you have the same version already deployed then you\u2019ll get an error that it already exists):\n    1. Go to Control Center\n    1. Delete `GoogleFunctions` artifact if the same version already exists.\n    1. Upload plugin by clicking on the circled green + button  \n    1. Pick the JAR file \/ JSON file created under target folder\n    1. 
You\u2019ll see a confirmation of the successful plugin upload  ","site":"GCP"}
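As a sketch of what the merge-and-dedupe step amounts to (BigQuery Standard SQL; the table and column names are taken from the example pipeline above — `id` as primary key, `id,name,last_update_datetime` as the update column list — and the exact statement `MergeLastUpdateTSAction` issues may differ):

```sql
-- Dedupe the staged rows (bq_table_LOG), keeping only the latest version
-- of each id, then merge them into the destination on the primary key.
MERGE `bq_dataset.bq_table` AS target
USING (
  SELECT id, name, last_update_datetime
  FROM `bq_dataset.bq_table_LOG`
  QUALIFY ROW_NUMBER() OVER (
    PARTITION BY id ORDER BY last_update_datetime DESC) = 1
) AS source
ON target.id = source.id
WHEN MATCHED THEN
  UPDATE SET name = source.name,
             last_update_datetime = source.last_update_datetime
WHEN NOT MATCHED THEN
  INSERT (id, name, last_update_datetime)
  VALUES (source.id, source.name, source.last_update_datetime)
```

A statement like this could equally be run through the `BigQueryExecute` action mentioned earlier instead of a custom plugin.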
{"questions":"GCP The GitLab Kubernetes Agent KAS is a tool that helps you deploy your code to a Google Kubernetes Engine GKE cluster using GitOps practices It does this by using a YAML configuration file that you create in your repository It handles automated deployment of services after the images have been built by a CI CD pipeline from source code The flow diagram below illustrate the relationship between the two processes Note that in this example and is the same in production it can be hosted in different GitLab repos This example deploys a dummy image to decouple the Application CI CD step Brief explanation Terraform for Deploying a KAS agent in a GKE cluster This repository provides Terraform code for deploying a KAS agent in a GKE cluster to connect it with a Gitlab repository to automatically deploy manage and monitor your cloud native solutions using GitOps practices This creates resources in your cluster to deploy an agent that communicates with Gitlab to synchronize deployments Here is a to a guide that explains the manual steps for making this configuration and an overview of the solution More resource links are provided in section The Gitlab agent created in the Gitlab project is connected to the agentk service running in the cluster When there is a change in the manifest file describing the deployment the agent in Gitlab pulls the changes and invokes the connected service running in the GKE cluster to apply the declarative changes to the Kubernetes resources The Kubernetes Service Account federating the deployment is governed by the RBAC policies with limited permissions to the cluster configmap and the product namespaces","answers":"# Terraform for Deploying a KAS agent in a GKE cluster\nThis repository provides Terraform code for deploying a KAS agent in a GKE cluster, to connect it with a Gitlab repository to automatically deploy, manage, and monitor your cloud-native solutions using GitOps practices. 
This creates resources in your cluster to deploy an agent that communicates with Gitlab to synchronize deployments. Here is a [link](https:\/\/about.gitlab.com\/blog\/2021\/09\/10\/setting-up-the-k-agent\/) to a guide that explains the manual steps for making this configuration and an overview of the solution. More resource links are provided in [this](#references-and-public-docs) section.\n\n## Brief explanation\nThe GitLab Kubernetes Agent (KAS) is a tool that helps you deploy your code to a Google Kubernetes Engine (GKE) cluster using GitOps practices. It does this by using a YAML configuration file that you create in your repository. It handles automated deployment of services after the images have been built by a CI\/CD pipeline from source code. The flow diagram below illustrates the relationship between the two processes. Note that in this example `Manifest Repository` and `Agent configuration repository` are the same -- in production they can be hosted in different GitLab repos. This example deploys a dummy `nginx` image to decouple the Application CI\/CD step.\n\n![Gitlab KAS flow diagram](docs\/gitlab-kas-flow-diag.png)\n\nThe Gitlab agent created in the Gitlab project is connected to the agentk service running in the cluster. When there is a change in the manifest file describing the deployment, the agent in Gitlab pulls the changes and invokes the connected service running in the GKE cluster to apply the declarative changes to the Kubernetes resources. 
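For reference, a minimal GitOps agent configuration of the kind rendered from `templates/agent-config.yaml.tpl` might look like the sketch below; the repository id and glob are assumptions based on the example variables, not the template's actual contents:

```yaml
# Sketch only: a GitLab agent GitOps config telling agentk which repo and
# which manifest files to synchronize into the cluster.
gitops:
  manifest_projects:
  - id: <user/org>/test-gitlab-kas-gke   # assumed: gitlab_repo_name from tfvars
    paths:
    - glob: 'manifests/**/*.yaml'        # assumed: the manifests/ directory
```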
The Kubernetes Service Account federating the deployment is governed by the RBAC policies with limited permissions to the cluster configmap and the product namespaces.\n\n![KAS Agent connection](docs\/kas-agent-connection.png)\n\n## How to setup\n- Create a new project in Gitlab\n- [Create a personal token](https:\/\/docs.gitlab.com\/ee\/user\/profile\/personal_access_tokens.html) in Gitlab, and set the environment variable `GITLAB_TOKEN` to be used by Terraform\n```\nexport GITLAB_TOKEN=<token value>\n```\nNote that in production you'd want to use a [*Runner authentication token* or a *CI\/CD job token*](https:\/\/docs.gitlab.com\/ee\/security\/token_overview.html#runner-authentication-tokens-also-called-runner-tokens) depending on your IaC pipeline strategy.\n- Set up a [GKE cluster](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/creating-a-zonal-cluster) that can egress to the public internet to communicate with a Gitlab agent.\n- Use a personal or service account that has the `roles\/container.admin` permission.\n- Create a `manifests\/` directory in your Gitlab project\n- Create a tfvars file from the [sample](.\/terraform.tfvars.sample) file\n- Run terraform init and apply to deploy resources\n```\nterraform init\nterraform apply\n```\n- Add a YAML file in the `\/manifests` directory (a sample is provided below) to add a deployment, and commit the change.\n- Ensure your namespace for the product has the correct deployment\n\n## Useful commands\n```\n# Get namespaces. 
Check that the 2 namespaces for product and gitlab-agent exist\nkubectl get ns\n\n# Check pods in the gitlab kas namespace\nkubectl get pods -n <gitlab kas namespace name>\n\n# Check pods in the product namespace\nkubectl get pods -n <product namespace name>\n\n# Check logs in the kas agent pod for synchronization\nkubectl logs <kas agent pod name> -n <gitlab kas namespace name>\n```\n\n## Resources created\nSee [terraform-docs.md](.\/terraform-docs.md) for details.\nHere is a summary:\n\n**Gitlab Resources**\n- Agent instance in Gitlab project to poll configuration changes in deployment manifests\n- Agent config file in the Gitlab project based on template in `templates\/agent-config.yaml.tpl`\n\n**Product K8s Resources**\n- Sample product namespace\n\n**Agentk K8s Resources**\n- Namespace for agentk image in the cluster\n- Deploy KAS agent client through Helm chart\n- Kubernetes Service Account used by agent to manage deployments in product namespace\n- RBAC roles for KSA to read\/write configMap of cluster\n- RBAC roles for KSA to read\/write any k8s resources in product namespace\n\n## Considerations\n- Set up a remote backend in the provider\n- Configure a service account with appropriate permissions in providers.tf\n- This example assumes the SaaS offering of the Gitlab KAS server (\"wss:\/\/kas.gitlab.com\") is used. For a self-managed server, the endpoint will be different\n- The product namespace is created here as an example. Remove it for your implementation.\n- Networking and proxy settings between the cluster and Gitlab agent can be configured in the helm chart values. 
Documentation reference is provided in `main.tf`.\n- This solution assumes that the product namespaces are created separately. Before adding a manifest YAML file for a new namespace, ensure that the namespace is created.\n- Host the agentk image in an Artifact Registry and set the reference to that in the variables rather than pulling it from Gitlab's registry.\n\n## Variables to configure (Example)\n```\nproject_id = \"gitlab-kas-gke\"\ncluster_name = \"gitlab-kas-agent-cluster\"\ncluster_location = \"us-central1-c\"\ngitlab_repo_name = \"<user\/org>\/test-gitlab-kas-gke\"\nproduct_name = \"test-kas\"\nagentk_image_tag = \"v15.9.0-rc1\"\n```\n\n## Sample deployment.yaml\n```\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx-deployment-test\n  namespace: test-kas # Make sure this matches the product's namespace\n  labels:\n    app: nginx\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: nginx\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.14.2\n        ports:\n        - containerPort: 80\n```\n\n## [References and public docs](#references-and-public-docs)\n- [Using GitOps with a Kubernetes cluster](https:\/\/docs.gitlab.com\/ee\/user\/clusters\/agent\/gitops.html)\n- [How to deploy the GitLab Agent for Kubernetes with limited permissions](https:\/\/about.gitlab.com\/blog\/2021\/09\/10\/setting-up-the-k-agent\/)\n- [Troubleshooting](https:\/\/docs.gitlab.com\/ee\/user\/clusters\/agent\/troubleshooting.html)\n- [Installing the agent for Kubernetes](https:\/\/docs.gitlab.com\/ee\/user\/clusters\/agent\/install)\n- [Working with the agent for Kubernetes](https:\/\/docs.gitlab.com\/ee\/user\/clusters\/agent\/work_with_agent.html)","site":"GCP"}
{"questions":"GCP Getting Started Pub Sub topic to be ingested elsewhere This example illustrates how to use the DLP api in a Cloud Function to redact Redacting Sensitive Data Using the DLP API following These instructions will walk you through setting up your environment to do the sensitive data from log exports The scrubbed logs will then be posted to a","answers":"# Redacting Sensitive Data Using the DLP API\n\nThis example illustrates how to use the DLP API in a Cloud Function to redact\nsensitive data from log exports. The scrubbed logs will then be posted to a\nPub\/Sub topic to be ingested elsewhere.\n\n## Getting Started\n\nThese instructions will walk you through setting up your environment to do the\nfollowing:\n\n* Export logs to a Pub\/Sub log export\n* Deploy a Cloud Function that subscribes to the log export\n* Write scrubbed logs to a Pub\/Sub topic.\n\n### Prerequisites\n\nEnsure that you have the [Google Cloud SDK](https:\/\/cloud.google.com\/sdk\/install) installed and authenticated to the project you want\nto deploy the example to.\n\n### Enable Required APIs\n\nThe Cloud Functions, Pub\/Sub, and DLP APIs will all need to be enabled for this\nexample to work properly.\n\n```\ngcloud services enable cloudfunctions.googleapis.com pubsub.googleapis.com dlp.googleapis.com\n```\n\n### Pub\/Sub\n\nPub\/Sub will be used to facilitate the transfer of logs from Stackdriver to the\nCloud Function for processing. 
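\n\nFor reference, the scrubbing step is a DLP `content.deidentify` call. A minimal sketch of the request body such a function might send (the infoTypes match the example output later in this document; `replaceWithInfoTypeConfig` swaps each finding for its infoType name):\n```\n{\n  \"item\": { \"value\": \"user: test_email@test.com\" },\n  \"inspectConfig\": {\n    \"infoTypes\": [\n      { \"name\": \"EMAIL_ADDRESS\" },\n      { \"name\": \"CREDIT_CARD_NUMBER\" },\n      { \"name\": \"DATE_OF_BIRTH\" }\n    ]\n  },\n  \"deidentifyConfig\": {\n    \"infoTypeTransformations\": {\n      \"transformations\": [\n        { \"primitiveTransformation\": { \"replaceWithInfoTypeConfig\": {} } }\n      ]\n    }\n  }\n}\n```\n\n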
Once the logs are scrubbed, they will be sent to\nanother Pub\/Sub topic for final consumption.\n\nDefine the Pub\/Sub topic and subscription names\n```\nexport LOG_EXPORT_TOPIC_NAME=log-export\n\nexport DLP_SCRUBBED_TOPIC_NAME=scrubbed-log-export\n\nexport LOG_EXPORT_NAME=log-export-destination\n\nexport DLP_SCRUBBED_SUBSCRIPTION_NAME=scrubbed-log-export-subscription\n```\n\nCreate the Pub\/Sub topics and subscription\n```\ngcloud pubsub topics create $LOG_EXPORT_TOPIC_NAME\n\ngcloud pubsub topics create $DLP_SCRUBBED_TOPIC_NAME\n\ngcloud pubsub subscriptions create $DLP_SCRUBBED_SUBSCRIPTION_NAME \\\n --topic $DLP_SCRUBBED_TOPIC_NAME\n```\n\n### Stackdriver Log Export\n\nA log export will be created in Stackdriver that sends all \"global\" logs to the\nfirst Pub\/Sub topic we created.\n\nCreate the export.\n```\nexport PROJECT_ID=$(gcloud config get-value project)\n\ngcloud logging sinks create $LOG_EXPORT_NAME \\\n  pubsub.googleapis.com\/projects\/$PROJECT_ID\/topics\/$LOG_EXPORT_TOPIC_NAME \\\n  --log-filter resource.type=\"global\"\n```\n\nGive Pub\/Sub topic writer permissions to the service account being used for the\nlog export.\n```\nexport SERVICE_ACCOUNT=$(gcloud logging sinks describe $LOG_EXPORT_NAME --format=\"value(writerIdentity)\")\n\ngcloud projects add-iam-policy-binding $PROJECT_ID \\\n  --member $SERVICE_ACCOUNT \\\n  --role roles\/pubsub.publisher\n```\n\n### Cloud Function\n\nCloud Functions will be used to call the DLP API to scrub the log content, then post the output to a new Pub\/Sub topic.\n\nClone the professional services repo.\n```\ngit clone https:\/\/github.com\/GoogleCloudPlatform\/professional-services.git\n```\n\nDeploy the Cloud Function.\n```\ngcloud functions deploy dlp-log-scrubber \\\n  --runtime python37 \\\n  --trigger-topic $LOG_EXPORT_TOPIC_NAME \\\n  --entry-point process_log_entry \\\n  --set-env-vars OUTPUT_TOPIC_NAME=$DLP_SCRUBBED_TOPIC_NAME \\\n  --source 
professional-services\/examples\/dlp\/cloud_function_example\/cloud_function\/.\n```\n\nWait a few minutes for the Cloud Function to deploy, then you can proceed to test.\n\n### Testing the DLP API\n\nNow that we have set up our Stackdriver log exports, Pub\/Sub, and Cloud Function, we can proceed to test the DLP API.\n\nFirst, write a log entry to Stackdriver that will be picked up by the log export we created.\n```\ngcloud logging write my-test-log \\\n  '{ \"message\": \"user: test_email@test.com, visa: 1111-2222-3333-4444, DOB: 01\/22\/2019\"}' \\\n  --payload-type=json\n```\n\nYou will see the log entry under Logging > Logs if you view the \"Global\" log type.\n\n![Stackdriver Log Screenshot](img\/stackdriver_log_img.png)\n\nNow that we have written the log to Stackdriver, the log export to Pub\/Sub should have triggered our Cloud Function. To verify, we can change the log viewer to filter for our Cloud Function logs.\n\n![Stackdriver Cloud Function Screenshot](img\/stackdriver_cloud_function_log.png)\n\nFinally, let's grab the Cloud Function output from the Pub\/Sub subscription we created.\n```\ngcloud pubsub subscriptions pull --auto-ack $DLP_SCRUBBED_SUBSCRIPTION_NAME\n```\n\nThe output will show the same log message with the email address, visa card number, and date of birth removed.\n\n``\"jsonPayload\": {\"message\": \"user: [EMAIL_ADDRESS], visa: [CREDIT_CARD_NUMBER], DOB: [DATE_OF_BIRTH]\"}``\n\nThis example demonstrates the identification and replacement of a small number of data types (EMAIL_ADDRESS, CREDIT_CARD_NUMBER, and DATE_OF_BIRTH); however, the DLP API supports many more, which can be found [here](https:\/\/cloud.google.com\/dlp\/docs\/infotypes-reference).","site":"GCP"}
{"questions":"GCP Anthos Service Mesh ASM for multiple GKE clusters using Terraform Here are several reference documents if you encounter an issue when following the instructions below Pod traffic security scanning using ASM Docker and Google Artifact Registry GAR Documentation Multi Cluster ASM on Private Clusters Contents Connecting GKE clusters and ASM to an external database","answers":"# Contents\n- [Multi-Cluster ASM on Private Clusters](.\/infrastructure): Anthos Service Mesh (ASM) for multiple GKE clusters, using Terraform\n- [Twistlock PoC](.\/twistlock): Pod traffic security scanning, using ASM, Docker and Google Artifact Registry (GAR)\n- [Cloud SQL for PostgreSQL PoC](.\/postgres): Connecting GKE clusters and ASM to an external database\n\n# Multi-Cluster ASM on Private Clusters\n\n## Documentation\n\nHere are several reference documents if you encounter an issue when following the instructions below:\n\n- [Installing ASM using Anthos CLI](https:\/\/cloud.google.com\/service-mesh\/docs\/gke-anthos-cli-existing-cluster)\n- [Installing ASM using istioctl](https:\/\/cloud.google.com\/service-mesh\/docs\/gke-install-existing-cluster)\n- [Adding clusters to an Anthos Service Mesh](https:\/\/cloud.google.com\/service-mesh\/docs\/gke-install-multi-cluster)\n\n## Description\n\n[Adding clusters to an Anthos Service Mesh](https:\/\/cloud.google.com\/service-mesh\/docs\/gke-install-multi-cluster) shows how to federate the service meshes of two Anthos **public** clusters. However, it misses a key instruction to open the firewall for the service port to the remote cluster. So, your final test of HelloWorld might not work. \n\nThis sample builds on Google's official Anthos Service Mesh installation documents, and adds instructions on how to federate two private clusters, which is more likely in real-world environments. \n\nAs illustrated in the diagram below, we will create a VPC with three subnets. 
Two subnets are for private clusters, and one is for GCE servers. So, we illustrate using a bastion server to access private clusters as in a real environment. \n\n![NetworkImage](.\/asm-private-multiclusters-intranet.png)\n\nThe clusters are not accessible from an external network. Users can only log into the bastion server via an IAP tunnel to gain access to this VPC. A firewall rule is built to allow IAP tunneling into the GCE subnet (Subnet C) only. For the bastion server in Subnet C to access the Kubernetes APIs of both private clusters, Subnet C's CIDR range is added to the \"_GKE Control Plane Authorized Network_\" of both clusters. This is illustrated as blue lines and yellow underscore lines in the diagram above.\n\nAlso, in order for both clusters to access the service mesh (Istiod) and the services deployed on the other cluster, we need to do the following:\n- The pod CIDR range of one cluster must be added to the \"_GKE Control Plane Authorized Network_\" of the other cluster. This enables one cluster to ping _istiod_ on the other cluster. \n- The firewall needs to be open for one cluster's pod CIDR to access the service port on the other cluster. In this sample, it is port 5000, used by the HelloWorld testing application. Because service invocation is bidirectional in the HelloWorld testing application, we will add firewall rules for each direction. \n\nThe infrastructure used in this sample is coded in Terraform scripts. The ASM installation steps are coded in a Shell script.\n\n## Prerequisites\n\nAs mentioned in [Add GKE clusters to Anthos Service Mesh](https:\/\/cloud.google.com\/service-mesh\/docs\/gke-install-multi-cluster), there are several prerequisites. 
\n\nThis guide assumes that you have:\n\n- [A Cloud project](https:\/\/cloud.google.com\/resource-manager\/docs\/creating-managing-projects).\n- [A Cloud Billing account](https:\/\/cloud.google.com\/billing\/docs\/how-to\/manage-billing-account).\n\nAlso, the multi-cluster configuration has these requirements for the clusters in it:\n\n- All clusters must be on the same network.\n\n  **NOTE:** ASM 1.7 does [not support multiple networks](https:\/\/cloud.google.com\/service-mesh\/docs\/supported-features#platform_environment), even peered ones.\n\n- If you join clusters that are not in the same project, they must be installed using the `asm-gcp-multiproject` profile and the clusters must be in a shared VPC configuration together on the same network. In addition, we recommend that you have one project to host the shared VPC, and two service projects for creating clusters. For more information, see [Setting up clusters with Shared VPC](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/cluster-shared-vpc).\n\nIn this sample, we create two private clusters in different subnets of the same VPC in the same project, and enable the clusters to communicate with each other's API server.\n\n## How to set up and run this sample\n\n### Build Infrastructure\n\n1. Create a GCP project.\n2. Create a VPC in the GCP project.\n3. Create a subnet in the VPC.\n4. Create a VM in the subnet. This will be the bastion server to simulate intranet access to GKE clusters.\n   - This step is now done by Terraform, in file [infrastructure\/bastion.tf](.\/infrastructure\/bastion.tf)\n   - The Bastion host is used for interaction with the GKE clusters\n   - For this demo, we ran Terraform from a local machine, not from the Bastion host\n   - **Note:** you will have to manually [create a Google Cloud firewall rule](https:\/\/cloud.google.com\/vpc\/docs\/using-firewalls) to allow connections to the bastion server via SSH (port 22). We did not automate this for security reasons.\n\n5. 
Set up Git on your local machine, then clone this GitHub sample. Also clone this GitHub sample onto the bastion server.\n\n6. Set up [Terraform](https:\/\/learn.hashicorp.com\/terraform\/getting-started\/install.html) on your local machine, so you will be able to build the infrastructure.\n\n7. On your local machine, update the corresponding parameters for your project.\n   - In ``vars.sh``, check to see whether you need to update `CLUSTER1_LOCATION`, `CLUSTER1_CLUSTER_NAME`, `CLUSTER1_CLUSTER_CTX`, `CLUSTER2_LOCATION`, `CLUSTER2_CLUSTER_NAME`, `CLUSTER2_CLUSTER_CTX`.\n   - Rename [infrastructure\/terraform.example.tfvars](.\/infrastructure\/terraform.example.tfvars) to terraform.tfvars and update \"project_id\" and \"billing_account\".\n   - In [infrastructure\/shared.tf](.\/infrastructure\/shared.tf), check whether you need to update \"project_prefix\" and \"region\".\n   - **[OPTIONAL]** In the locals section of _infrastructure\/shared.tf_, update the CIDR ranges for bastion_cidr and existing_vpc if you need to.\n   - Source _vars.sh_ to set up basic environment variables.\n     \n     ``` \n     source vars.sh\n     ```\n\n8. If you want to run Terraform in your own workspace, create a `backend.tf` file from _infrastructure\/backend.tf_tmpl_, and update your Terraform workspace information in this file.\n\n9. Under the \"_infrastructure_\" directory, run\n   - terraform init\n      \n     ```\n     terraform init\n     ```\n      \n   - terraform plan\n       \n     ```\n     terraform plan -out output.tftxt\n     ```\n       \n     **NOTE:** You may get an error that the Compute Engine API has not been used before in the project. In this case, please [manually enable the Compute Engine API](https:\/\/console.cloud.google.com\/apis\/library\/compute.googleapis.com).\n\n     ```\n     Error: Error when reading or editing GCE default service account: googleapi: Error 403: Compute Engine API has not been used in project XXXXXXXXXXX before or it is disabled. 
Enable it by visiting https:\/\/console.developers.google.com\/apis\/api\/compute.googleapis.com\/overview then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry., accessNotConfigured\n     ```\n       \n   - terraform apply\n       \n     ```\n     terraform apply output.tftxt\n     ```\n\n   - If Terraform completes without error, you should have a VPC, NAT, a bastion server, two private clusters, and firewall rules. Please check all artifacts in the GCP Console.\n\n10. SSH onto the bastion server.\n11. Make sure you have the following tools installed:\n    - The Cloud SDK (the gcloud command-line tool)\n    - The standard command-line tools: awk, curl, grep, sed, sha256sum, and tr\n    - git\n    - kpt\n    - kubectl\n    - jq\n\n### Install ASM\n\n1. On the bastion server, go to this source code directory, then source _vars.sh_.\n   \n   **NOTE:** Make sure you manually [create a Google Cloud firewall rule](https:\/\/cloud.google.com\/vpc\/docs\/using-firewalls) to allow SSH connections to your bastion server over port 22.\n\n   ```\n   cd asm-private-multiclusters-intranet\n   source vars.sh\n   ```\n\n2. Source _scripts\/main.sh_\n   ```\n   source scripts\/main.sh\n   ```\n3. Run install_asm_mesh\n   ```\n   install_asm_mesh\n   ```\n\n   Or, you can run the commands in install_asm_mesh step by step manually:\n   ```\n   # Navigate to your working directory. 
Binaries will be downloaded to this directory.\n   cd ${WORK_DIR}\n\n   # Set up K8s config and context\n   set_up_credential ${CLUSTER1_CLUSTER_NAME} ${CLUSTER1_LOCATION} ${CLUSTER1_CLUSTER_CTX} ${TF_VAR_project_id}\n   set_up_credential ${CLUSTER2_CLUSTER_NAME} ${CLUSTER2_LOCATION} ${CLUSTER2_CLUSTER_CTX} ${TF_VAR_project_id}\n\n   # Download ASM Installer\n   download_asm_installer ${ASM_MAJOR_VER} ${ASM_MINOR_VER}\n\n   # Install ASM\n   install_asm ${CLUSTER1_CLUSTER_NAME} ${CLUSTER1_LOCATION} ${TF_VAR_project_id}\n   install_asm ${CLUSTER2_CLUSTER_NAME} ${CLUSTER2_LOCATION} ${TF_VAR_project_id}\n\n   # Register clusters\n   grant_role_to_connect_agent ${TF_VAR_project_id}\n   register_cluster ${CLUSTER1_CLUSTER_CTX} ${CLUSTER1_LOCATION}\n   register_cluster ${CLUSTER2_CLUSTER_CTX} ${CLUSTER2_LOCATION}\n\n   # Add clusters to mesh\n   cross_cluster_service_secret ${CLUSTER1_CLUSTER_NAME} ${CLUSTER1_CLUSTER_CTX} ${CLUSTER2_CLUSTER_CTX}\n   cross_cluster_service_secret ${CLUSTER2_CLUSTER_NAME} ${CLUSTER2_CLUSTER_CTX} ${CLUSTER1_CLUSTER_CTX}\n\n   ```\n\n### Deploy test helloworld application\n\nRun install_test_app\n```\ninstall_test_app\n```\n\n### Prepare for verification\n```\nexport CTX1=$CLUSTER1_CLUSTER_CTX\nexport CTX2=$CLUSTER2_CLUSTER_CTX\n```\n\nFollow the instructions in the \"**Verify cross-cluster load balancing**\" section of [Add clusters to an Anthos Service Mesh](https:\/\/cloud.google.com\/service-mesh\/docs\/gke-install-multi-cluster) to verify.\n\n**Please Note:** You don't need to install the `Helloworld` application; it has already been installed for you.\n\n## Internal Load Balancer\n\nAnthos ASM deploys the ingress gateway with an external load balancer by default. 
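\n\nAn overlay file like the following sketch can switch the gateway's Service to an internal load balancer (field names follow the IstioOperator API; the annotation shown is GKE's internal load balancer annotation, which may be `cloud.google.com\/load-balancer-type` on older GKE versions):\n```\napiVersion: install.istio.io\/v1alpha1\nkind: IstioOperator\nspec:\n  components:\n    ingressGateways:\n    - name: istio-ingressgateway\n      enabled: true\n      k8s:\n        serviceAnnotations:\n          # Ask GKE for an internal (VPC-only) load balancer\n          networking.gke.io\/load-balancer-type: \"Internal\"\n```\n\n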
If we need to change the ingress gateway to an internal load balancer, we can use the `--option` or `--custom-overlay` parameter along with our load balancer yaml (.\/istio-profiles\/internal-load-balancer.yaml).\n\nPlease note that we need to specify the \"targetPort\" for the https and http2 ports for the current ASM version.\n\n# Twistlock PoC\n- Pod traffic security scanning, using ASM, Docker and Google Artifact Registry (GAR)\n- Please see the [twistlock folder readme](.\/twistlock)\n\n# Cloud SQL for PostgreSQL PoC\n- Connecting GKE clusters and ASM to a database that is external to the Kubernetes clusters\n- Please see the [postgres folder readme](.\/postgres)","site":"GCP","answers_cleaned":"  Contents    Multi Cluster ASM on Private Clusters    infrastructure   Anthos Service Mesh  ASM  for multiple GKE clusters  using Terraform    Twistlock PoC    twistlock   Pod traffic security scanning  using ASM  Docker and Google Artifact Registry  GAR     Cloud SQL for PostgreSQL PoC    postgres   Connecting GKE clusters and ASM to an external database    Multi Cluster ASM on Private Clusters     Documentation  Here are several reference documents if you encounter an issue when following the instructions below      Installing ASM using Anthos CLI  https   cloud google com service mesh docs gke anthos cli existing cluster     Installing ASM using IstioCtl  https   cloud google com service mesh docs gke install existing cluster     Adding clusters to an Athos Service Mesh  https   cloud google com service mesh docs gke install multi cluster      Description  In  Adding clusters to an Athos Service Mesh  https   cloud google com service mesh docs gke install multi cluster   it shows how to federate service meshes of two Anthos   public   clusters  However  it misses a key instruction to open the firewall for the service port to the remote cluster  So  your final test of HelloWorld might not work    This sample builds on the topic of Google s Anthos Service Mesh official 
installation documents  and adds instructions on how to federate two private clusters  which is more likely in real world environments    As illustrated in the diagram below  we will create a VPC with three subnets  Two subnets are for private clusters  and one for GCE servers  So  we illustrate using a bastion server to access private clusters as in a real environment      NetworkImage    asm private multiclusters intranet png   The clusters are not accessible from an external network  Users can only log into the bastion server via an IAP tunnel to gain access to this VPC  A firewall rule is built to allow IAP tunneling into the GCE subnet  Subnet C  only  For the bastion server in Subnet C to access Kubernetes APIs of both private clusters  Subnet C s CIDR range is added to the   GKE Control Plane Authorized Network   of both clusters  This is illustrated as blue lines and yellow underscore lines in the diagram above   Also  in order for both clusters to access the service mesh  Istiod  and service deployed on the other cluster  we need to do the following    The pod CIDR range of one cluster must be added to the   GKE Control Plane Authorized Network   of the other cluster  This enables one cluster to ping  istiod  on the other cluster     The firewall needs to be open for one cluster s pod CIDR to access the service port on the other cluster  In this sample  it is port 5000 used by the HelloWord testing application  Because the invocation of service is bidirectional in HelloWorld testing application  we will add firewall rules for each direction    The infrastructure used in this sample is coded in Terraform scripts  The ASM installation steps are coded in a Shell script           Prerequisites  As mentioned in  Add GKE clusters to Anthos Service Mesh  https   cloud google com service mesh docs gke install multi cluster   there are several prerequisites    This guide assumes that you have      A Cloud project  https   cloud google com resource manager docs 
creating managing projects      A Cloud Billing account  https   cloud google com billing docs how to manage billing account    Also  the multi cluster configuration has these requirements for the clusters in it     All clusters must be on the same network       NOTE    ASM 1 7 does  not support multiple networks  https   cloud google com service mesh docs supported features platform environment   even peered ones     If you join clusters that are not in the same project  they must be installed using the  asm gcp multiproject  profile and the clusters must be in a shared VPC configuration together on the same network  In addition  we recommend that you have one project to host the shared VPC  and two service projects for creating clusters  For more information  see  Setting up clusters with Shared VPC  https   cloud google com kubernetes engine docs how to cluster shared vpc    In this sample  we create two private clusters in different subnets of the same VPC in the same project  and enable clusters to communicate to each other s API server         How to set up and run this sample      Build Infrastructure  1  Create a GCP project  2  Create a VPC in GCP project  3  Create a subnet in the VPC   4  Create a VM in the subnet  This will be the bastion server to simulate an intranet access to GKE clusters       This step is now done by Terraform  in file  infrastructure bastion tf    infrastructure bastion tf       The Bastion host is used for interaction with the GKE clusters      For this demo  we ran Terraform from a local machine  not from the Bastion host        Note    you will have to manually  create a Google Cloud firewall rule  https   cloud google com vpc docs using firewalls   to allow connection to the bastion server via SSH  port 22   We did not automate this for security reasons   5  Set up Git on your local machine  then clone this Github sample  Also clone this Github sample onto the bastion server  6  Set up  Terraform  https   learn hashicorp com 
terraform getting started install html  on your local machine  so you will be able to build infrastructure    7  On your local machine  update the corresponding parameters for your project        In   vars sh    check to see whether you need to update  CLUSTER1 LOCATION   CLUSTER1 CLUSTER NAME    CLUSTER1 CLUSTER CTX    CLUSTER2 LOCATION    CLUSTER2 CLUSTER NAME    CLUSTER2 CLUSTER CTX        In  infrastructure terraform example tfvars    infrastructure terraform example tfvars   rename this file to terraform tfvars and update  project id  and  billing account        In  infrastructure shared tf    infrastructure shared tf   check whether you need to update  project prefix  and  region            OPTIONAL    In the locals section of  infrastructure shared tf   update CIDR ranges for bastion cidr and existing vpc if you need to        Source  vars sh  to set up basic environment variables                       source vars sh           8  If you want to run Terraform in your own workspace  create a  backend tf  file from  infrastructure backend tf tmpl   and update your Terraform workspace information in this file   9  Under   infrastructure   directory  run      terraform init                      terraform init                      terraform plan                       terraform plan  out output tftxt                         NOTE    You may get an error that the Compute Engine API has not been used before in the project  In this case please  manually enable the Compute Engine API  https   console cloud google com apis library compute googleapis com                 Error  Error when reading or editing GCE default service account  googleapi  Error 403  Compute Engine API has not been used in project XXXXXXXXXXX before or it is disabled  Enable it by visiting https   console developers google com apis api compute googleapis com overview then retry  If you enabled this API recently  wait a few minutes for the action to propagate to our systems and retry   
accessNotConfigured                       terraform apply                       terraform apply output tftxt                If Terraform completes without error  you should have VPC  NAT  a bastion server  two private clusters and firewall rules  Please check all artifacts in GCP Console   10  SSH onto the bastion server  11  Make sure you have the following tools installed        The Cloud SDK  the gcloud command line tool        The standard command line tools  awk  curl  grep  sed  sha256sum  and tr       git       kpt       kubectl       jq      Install ASM  1  On bastion server  go to this source code directory  then source  vars sh           NOTE    Make sure you manually  create a Google Cloud firewall rule  https   cloud google com vpc docs using firewalls  to allow SSH connections to your bastion server over port 22            cd asm private multiclusters intranet    source vars sh         2  Source  scripts main sh            source scripts main sh        3  Run install asm mesh           install asm mesh            or  you can run the commands in install asm mesh step by step manually             Navigate to your working directory  Binaries will be downloaded to this directory     cd   WORK DIR        Set up K8s config and context    set up credential   CLUSTER1 CLUSTER NAME    CLUSTER1 LOCATION    CLUSTER1 CLUSTER CTX    TF VAR project id     set up credential   CLUSTER2 CLUSTER NAME    CLUSTER2 LOCATION    CLUSTER2 CLUSTER CTX    TF VAR project id        Download ASM Installer    download asm installer   ASM MAJOR VER    ASM MINOR VER       Install ASM    install asm   CLUSTER1 CLUSTER NAME    CLUSTER1 LOCATION    TF VAR project id     install asm   CLUSTER2 CLUSTER NAME    CLUSTER2 LOCATION    TF VAR project id        Register clusters    grant role to connect agent   TF VAR project id     register cluster   CLUSTER1 CLUSTER CTX    CLUSTER1 LOCATION     register cluster   CLUSTER2 CLUSTER CTX    CLUSTER2 LOCATION        Add clusters to mesh    cross 
cluster service secret   CLUSTER1 CLUSTER NAME    CLUSTER1 CLUSTER CTX    CLUSTER2 CLUSTER CTX     cross cluster service secret   CLUSTER2 CLUSTER NAME    CLUSTER2 CLUSTER CTX    CLUSTER1 CLUSTER CTX               Deploy test helloworld application  Run install test app     install test app          Prepare for verification     export CTX1  CLUSTER1 CLUSTER CTX export CTX2  CLUSTER2 CLUSTER CTX      Follow the instruction in    Verify cross cluster load balancing    section of  Add clusters to an Anthos Service Mesh  https   cloud google com service mesh docs gke install multi cluster  to verify     Please Note    You don t need to install  Helloworld  application  it has been installed for you already      Internal Load Balancer  Anthos ASM deploys ingress gateway using external load balancer by default  If we need to change the ingress gateway to be internal load balancer  we can use    option  or    custom overlay  parameter along with out load balancer yaml    istio profiles internal load balancer yaml    Please note that we need to specify out  targetPort  for https and http2 ports for current ASM version      Twistlock PoC   Pod traffic security scanning  using ASM  Docker and Google Artifact Registry  GAR    Please see the  twistlock folder readme    twistlock     Cloud SQL for PostgreSQL PoC   Connecting GKE clusters and ASM to a database that is external database to the Kubernetes clusters   Please see the  postgres folder readme    postgres "}
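The internal load balancer overlay file mentioned above (.\/istio-profiles\/internal-load-balancer.yaml) is not reproduced in this repo readme. As a rough sketch only (the annotation, port numbers, and structure below are illustrative assumptions, not the repo's actual file), such an `IstioOperator` overlay on GKE can look like:

```yaml
# Hypothetical sketch of an internal-load-balancer overlay.
# Older GKE versions use the annotation
# cloud.google.com/load-balancer-type: "Internal" instead.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          networking.gke.io/load-balancer-type: "Internal"
        service:
          ports:
          # targetPort is set explicitly, as the text above notes.
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
```

Passing an overlay like this via `--custom-overlay` merges it over the default profile, so only the ingress gateway service type and ports change.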
{"questions":"GCP Summary PostgreSQL uses application level protocol negotiation for SSL connection Istio Proxy currently uses TCP level protocol negotiation so Istio Proxy sidecar errors out during SSL handshake when it tries to auto encrypt connection with PostgreSQL In this article we document how to reproduce this issue Auto Encrypt PostgreSQL SSL Connection Using Istio Proxy Sidecar Enforce SSL connection on Cloud SQL PostgreSQL instance Prerequisites Create client certificate and download client certificate client key and server certificate We will use them in the client container for testing without sidecar auto encryption and mount them into Istio Proxy sidecar for sidecar auto encryption","answers":"# Auto Encrypt PostgreSQL SSL Connection Using Istio Proxy Sidecar\n\n### Summary\n\nPostgreSQL uses application-level protocol negotiation for SSL connection. Istio Proxy currently uses TCP-level protocol negotiation, so the Istio Proxy sidecar errors out during the SSL handshake when it tries to auto encrypt the connection with PostgreSQL. In this article, we document how to reproduce this issue.\n\n### Prerequisites\n\n* Enforce SSL connection on the Cloud SQL PostgreSQL instance.\n* Create a client certificate and download the client certificate, client key and server certificate. We will use them in the client container for testing without sidecar auto-encryption, and mount them into the Istio Proxy sidecar for sidecar auto-encryption.\n* Add K8s node IPs to the Authorized Networks of the PostgreSQL instance. Or, we can add \"0.0.0.0\/0\" to allow client connections from any IP address for testing purposes.\n\n### Build Container\n\nUse the Dockerfile to build a testing PostgreSQL client container image. We package the certificates into the Docker image just for initial connection testing. **They are not needed in the client container for sidecar auto-encryption**.\n\n### Test Direct SSL Connection\n\nDeploy a PostgreSQL client without any sidecar. 
Please note that we turn off sidecar injection in the YAML file even though we have labeled our namespace for Istio sidecar auto-injection. \n\n```\nkubectl apply -f postgres-plain.yaml -n <YOUR_NAMESPACE>\n```\n\nRun the following command to make sure the SSL connection works.\n```\n# Enter into postgres-plain Pod\nkubectl exec -it deploy\/postgres-plain -n <YOUR_NAMESPACE> -- \/bin\/bash\n\n# Once in the Pod, run this psql command with SSL mode\n  psql \"sslmode=verify-ca sslrootcert=server-ca.pem \\\n      sslcert=client-cert.pem sslkey=client-key.pem \\\n      hostaddr=YOUR_POSTGRESQL_IP \\\n      port=5432 \\\n      user=YOUR_USERNAME dbname=YOUR_DB_NAME\"\n\n# Enter your password when it is prompted\n```\n\nYou should see something like this:\n```\npsql (12.5, server 12.4)\nSSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)\nType \"help\" for help.\n```\n\nNow, run the `psql` command in Non-SSL mode\n```\npsql \"hostaddr=YOUR_POSTGRESQL_IP port=5432 user=YOUR_USERNAME dbname=YOUR_DB_NAME\"\n```\n\nYou should see the error message below. This proves that a Non-SSL connection doesn't work.\n```\npsql: error: FATAL:  connection requires a valid client certificate\nFATAL:  pg_hba.conf rejects connection for host \"35.235.65.143\", user \"postgres\", database \"postgres\", SSL off\n```\n\n### Deploy Istio Proxy Sidecar\n\n#### Create K8s secret for the certificates\n\nWe will mount our PostgreSQL certificates into the Istio Proxy sidecar. 
In order to achieve this, we need to upload the certificates as a K8s secret.\n\n```\nkubectl create secret generic postgres-cert --from-file=certs\/client-cert.pem  --from-file=certs\/client-key.pem  --from-file=certs\/server-ca.pem\n```\n\n#### Mount the certificates into Istio Proxy sidecar\n\nWe mount our PostgreSQL certificates into the Istio Proxy sidecar via the following annotations.\n```\n        sidecar.istio.io\/userVolume: '[{\"name\":\"postgres-cert\", \"secret\":{\"secretName\":\"postgres-cert\"}}]'\n        sidecar.istio.io\/userVolumeMount: '[{\"name\":\"postgres-cert\", \"mountPath\":\"\/etc\/certs\/postgres-cert\", \"readonly\":true}]'\n```\n\n#### Configure Sidecar certificates\n\nWe use [Service Entry](https:\/\/istio.io\/latest\/docs\/reference\/config\/networking\/service-entry\/) and [Destination Rule](https:\/\/istio.io\/latest\/docs\/reference\/config\/networking\/destination-rule\/) to instruct the Istio Proxy sidecar to auto encrypt network traffic to the specified PostgreSQL host and port with **SIMPLE** TLS. The detailed comments can be found in the `destination-rule.yaml` and `service-entry.yaml` source code.\n\nAlso, here is how we instruct the Istio Proxy sidecar to use our certificates for encryption.\n\n```\n      clientCertificate: \/etc\/certs\/postgres-cert\/client-cert.pem\n      privateKey: \/etc\/certs\/postgres-cert\/client-key.pem\n      caCertificates: \/etc\/certs\/postgres-cert\/server-ca.pem\n```\n\nDeploy both YAML files to your namespace. 
\n```\nkubectl apply -f destination-rule.yaml -n <YOUR_NAMESPACE>\n\nkubectl apply -f service-entry.yaml -n <YOUR_NAMESPACE>\n```\n\n#### Deploy PostgreSQL client with sidecar injection\n\nRun this command to deploy the PostgreSQL client with Istio Proxy sidecar injection\n```\nkubectl apply -f postgres-istio.yaml -n <YOUR_NAMESPACE>\n```\n\n#### Run `psql` command in Non-SSL mode\n```\npsql \"hostaddr=YOUR_POSTGRESQL_IP port=5432 user=YOUR_USERNAME dbname=YOUR_DB_NAME\"\n```\n\nYou should be prompted for a password.\n\nYou will see errors.\n\n#### Look into the Istio Proxy log\n\nYou can read the logs in Cloud Logging. However, you may want to view the sidecar log messages for detailed network traffic information and errors with the following command.\n```\nkubectl logs deploy\/postgres-istio -c istio-proxy -n <YOUR_NAMESPACE>\n```","site":"GCP"}
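The `destination-rule.yaml` and `service-entry.yaml` files described above are referenced but not reproduced here. A hedged sketch of what they might contain, reusing only the certificate paths, **SIMPLE** TLS mode, and port 5432 stated in the article (the resource names, logical host name, and other values are placeholders, not the repo's actual files):

```yaml
# Hypothetical sketch of service-entry.yaml: registers the external
# PostgreSQL endpoint with the mesh so the sidecar handles its traffic.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-postgres        # placeholder name
spec:
  hosts:
  - postgres.external            # placeholder logical host
  addresses:
  - YOUR_POSTGRESQL_IP/32
  ports:
  - number: 5432
    name: tcp-postgres
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
---
# Hypothetical sketch of destination-rule.yaml: tells the sidecar to
# originate SIMPLE TLS using the certificates mounted earlier.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: postgres-tls             # placeholder name
spec:
  host: postgres.external        # must match the ServiceEntry host
  trafficPolicy:
    tls:
      mode: SIMPLE
      clientCertificate: /etc/certs/postgres-cert/client-cert.pem
      privateKey: /etc/certs/postgres-cert/client-key.pem
      caCertificates: /etc/certs/postgres-cert/server-ca.pem
```

With both resources applied, the sidecar intercepts plain-text traffic to the registered endpoint and attempts the TLS origination itself, which is exactly where the handshake failure described in the article surfaces.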
{"questions":"GCP Summary PostgreSQL Auto SSL Connection Using Cloud SQL Proxy Because ASM Istio Proxy sidecar doesn t work with PostgreSQL SSL auto encryption we demonstrate how to use Cloud SQL Proxy to auto encrypt SSL connection with Cloud SQL PostgreSQL database in this article PostgreSQL uses application level protocol negotiation for SSL connection Istio Proxy currently uses TCP level protocol negotiation so Istio Proxy sidecar errors out during SSL handshake when it tries to auto encrypt connection with PostgreSQL Please follow the steps in to see the details of this issue Prerequisites","answers":"# PostgreSQL Auto SSL Connection Using Cloud SQL Proxy\n\n## Summary\n\nPostgreSQL uses application-level protocol negotiation for SSL connection. Istio Proxy currently uses TCP-level protocol negotiation, so the Istio Proxy sidecar errors out during the SSL handshake when it tries to auto encrypt the connection with PostgreSQL. Please follow the steps in [PostgreSQL Auto SSL Connection Problem Using Istio Sidecar](.\/Istio-Sidecar.md) to see the details of this issue.\n\nBecause the ASM Istio Proxy sidecar doesn't work with PostgreSQL SSL auto encryption, we demonstrate in this article how to use Cloud SQL Proxy to auto encrypt the SSL connection with a Cloud SQL PostgreSQL database.\n\n## Prerequisites\n\n* Enforce SSL connection on the Cloud SQL PostgreSQL instance.\n* **We don't need certificates for the Cloud SQL Proxy connection.** However, we will create a client certificate and download the client certificate, client key and server certificate for the purpose of an initial SSL connection test without sidecar auto-encryption. Instructions for downloading Cloud SQL for PostgreSQL certificates are on this page: [Configuring SSL\/TLS certificates](https:\/\/cloud.google.com\/sql\/docs\/postgres\/configure-ssl-instance)\n* Add K8s node IPs to the Authorized Networks of the PostgreSQL instance. Or, we can add \"0.0.0.0\/0\" to allow client connections from any IP address for testing purposes. 
\n\n## Build Container\n\n1. Download the `client-cert.pem`, `client-key.pem` and `server-ca.pem` certificates, using the instructions on [Configuring SSL\/TLS certificates](https:\/\/cloud.google.com\/sql\/docs\/postgres\/configure-ssl-instance)\n\n    **NOTE:** These certificates are not needed for connecting via the Cloud SQL Proxy\n\n2. Use the [Dockerfile](.\/Dockerfile) to build a test PostgreSQL client container image.\n\n    The certificates are packaged into the Docker image just for initial connection testing.\n\n## Test Direct SSL Connection\n\n1. Deploy a PostgreSQL client without any sidecar.\n\n    ```\n    kubectl apply -f postgres-plain.yaml -n sample\n    ```\n\n2. Run the following command to make sure the SSL connection works.\n\n    ```\n    # Enter into postgres-plain Pod\n    kubectl exec -it deploy\/postgres-plain -n sample -- \/bin\/bash\n\n    # Once in the Pod, run this psql command with SSL mode\n      psql \"sslmode=verify-ca sslrootcert=server-ca.pem \\\n          sslcert=client-cert.pem sslkey=client-key.pem \\\n          hostaddr=YOUR_POSTGRESQL_IP \\\n          port=5432 \\\n          user=YOUR_USERNAME dbname=YOUR_DB_NAME\"\n\n    # Enter your password when it is prompted\n    ```\n\n    You should see something like this:\n\n    ```\n    psql (12.5, server 12.4)\n    SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)\n    Type \"help\" for help.\n    ```\n\n3. Now, run the `psql` command in Non-SSL mode\n\n    ```\n    psql \"hostaddr=YOUR_POSTGRESQL_IP port=5432 user=YOUR_USERNAME dbname=YOUR_DB_NAME\"\n    ```\n\n    You should see the below error message. This proves that a Non-SSL connection doesn't work.\n\n    ```\n    psql: error: FATAL:  connection requires a valid client certificate\n    FATAL:  pg_hba.conf rejects connection for host \"35.235.65.143\", user \"postgres\", database \"postgres\", SSL off\n    ```\n\n## Deploy the Cloud SQL Proxy Sidecar\n\n1. 
Create a Kubernetes Service Account\n\n    - We have already deployed our GKE cluster with [Workload Identity](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/workload-identity) enabled.\n    - We will use a Kubernetes Service Account (KSA) bound to a Google Cloud Service Account (GSA) to simplify our Cloud SQL Proxy sidecar deployment.\n    - Create a KSA named \"ksa-sqlproxy\"\n\n      ```\n      kubectl apply -f service-account.yaml -n sample\n      ```\n\n2. Set up a Google Cloud Service Account\n\n    - Set up a Google Cloud Service Account (or use an existing GSA).\n    - Make sure that the [Cloud SQL Client predefined role](https:\/\/cloud.google.com\/sql\/docs\/mysql\/project-access-control#roles) (roles\/cloudsql.client) is granted to this GSA.\n    - In the next step, we will use the GSA `sql-client@${PROJECT_ID}.iam.gserviceaccount.com`.\n\n3. Bind KSA to GSA\n\n    ```\n    export PROJECT_ID=\"$(gcloud config get-value project || ${GOOGLE_CLOUD_PROJECT})\"\n\n    gcloud iam service-accounts add-iam-policy-binding \\\n      --role roles\/iam.workloadIdentityUser \\\n      --member \"serviceAccount:${PROJECT_ID}.svc.id.goog[YOUR_NAMESPACE\/ksa-sqlproxy]\" \\\n      sql-client@${PROJECT_ID}.iam.gserviceaccount.com\n    ```\n\n4. Add an annotation to the service account\n\n    ```\n    kubectl annotate serviceaccount \\\n      ksa-sqlproxy \\\n      iam.gke.io\/gcp-service-account=\"sql-client@${PROJECT_ID}.iam.gserviceaccount.com\" \\\n      -n YOUR_NAMESPACE\n    ```\n\n5. Deploy PostgreSQL client with Cloud SQL Proxy sidecar\n\n    Take a look at the deployment YAML file, [postgres-cloudproxy.yaml](.\/postgres-cloudproxy.yaml). Please note the following two items:\n\n      i. The \"serviceAccountName: ksa-sqlproxy\" entry for the pod.\n\n      - This pod will use this KSA to authenticate itself through Google Cloud IAM.\n      - Remember that we don't need the certificate files.\n\n      ii. The container entry for Cloud SQL Proxy. 
\n\n    ```\n    kubectl apply -f postgres-cloudproxy.yaml -n sample\n    ```\n\n6. Test out Cloud SQL Proxy sidecar\n\n    - Run the following command to get into Postgres client container\n\n      ```\n      kubectl exec -it deploy\/postgres-check -c postgres-check -n sample -- \/bin\/bash\n      ```\n\n    - Run `psql` command in Non-SSL mode\n    \n      ```\n      psql \"hostaddr=YOUR_POSTGRESQL_IP port=5432 user=YOUR_USERNAME dbname=YOUR_DB_NAME\"\n      ```\n\n    You should be prompted for password, then you should be connected to your PostgreSQL database.","site":"GCP","answers_cleaned":"  PostgreSQL Auto SSL Connection Using Cloud SQL Proxy     Summary  PostgreSQL uses application level protocol negotiation for SSL connection  Istio Proxy currently uses TCP level protocol negotiation  so Istio Proxy sidecar errors out during SSL handshake when it tries to auto encrypt connection with PostgreSQL  Please follow the steps in  PostgreSQL Auto SSL Connection Problem Using Istio Sidecar    Istio Sidecar md  to see the details of this issue   Because ASM Istio Proxy sidecar doesn t work with PostgreSQL SSL auto encryption  we demonstrate how to use Cloud SQL Proxy to auto encrypt SSL connection with Cloud SQL PostgreSQL database in this article      Prerequisites    Enforce SSL connection on Cloud SQL PostgreSQL instance      We don t need certificates for Cloud SQL Proxy connection    However  we will create client certificate and download client certificate  client key and server certificate for the purpose of initial SSL connection without sidecar auto encryption  Instructions for downloading Cloud SQL for PostgreSQL certificates is on this page   Configuring SSL TLS certificates  https   cloud google com sql docs postgres configure ssl instance    Add K8s node IPs to the Authorized Networks of PostgreSQL instance  Or  we can add  0 0 0 0 0  to allow client connection from any IP address for testing purpose       Build Container  1  Download the  client cert 
pem    client key pem  and  server ca pem  certificates  using the instructions on  Configuring SSL TLS certificates  https   cloud google com sql docs postgres configure ssl instance         NOTE    These certificates are not needed for connecting via the Cloud SQL Proxy  2  Use the  Dockerfile    Dockerfile  to build a test PostgreSQL client container image       The certificates are packaged into the Docker image just for initial connection testing      Test Direct SSL Connection  1  Deploy a PostgreSQL client without any sidecar               kubectl apply  f postgres plain yaml  n sample          2  Run the follow command to make sure the SSL connection works                 Enter into postgres plain Pod     kubectl exec  it deploy postgres plain  n sample     bin bash        Once in the Pod  run this psql command with SSL mode       psql  sslmode verify ca sslrootcert server ca pem             sslcert client cert pem sslkey client key pem             hostaddr YOUR POSTGRESQL IP             port 5432             user YOUR USERNAME dbname YOUR DB NAME         Enter your password when it is prompted              You should see something like this               psql  12 5  server 12 4      SSL connection  protocol  TLSv1 3  cipher  TLS AES 256 GCM SHA384  bits  256  compression  off      Type  help  for help           3  Now  run  psql  command in Non SSL mode              psql  hostaddr YOUR POSTGRESQL IP port 5432 user YOUR USERNAME dbname YOUR DB NAME               You should see the below error message  This proves that a Non SSL connection doesn t work               psql  error  FATAL   connection requires a valid client certificate     FATAL   pg hba conf rejects connection for host  35 235 65 143   user  postgres   database  postgres   SSL off             Deploy the Cloud SQL Proxy Sidecar  1  Create a Kubernetes Service Account        We have already deployed our GKE cluster with  Workload Identity  https   cloud google com kubernetes engine docs how to 
workload identity enabled.\n\nWe will use a Kubernetes Service Account (KSA) binding to a Google Cloud Service Account (GSA) to simplify our Cloud SQL Proxy sidecar deployment.\n\n1. Create a KSA named `ksa-sqlproxy`:\n\n```\nkubectl apply -f service-account.yaml -n sample\n```\n\n2. Set up a Google Cloud Service Account\n\nSet up a Google Cloud Service Account, or use an existing GSA. Make sure that the [Cloud SQL Client predefined role](https:\/\/cloud.google.com\/sql\/docs\/mysql\/project-access-control#roles) (`roles\/cloudsql.client`) is granted to this GSA. In the next step, we will create a new GSA: `sql-client@<PROJECT_ID>.iam.gserviceaccount.com`.\n\n3. Bind KSA to GSA\n\n```\nexport PROJECT_ID=$(gcloud config get-value project || echo $GOOGLE_CLOUD_PROJECT)\ngcloud iam service-accounts add-iam-policy-binding \\\n  --role roles\/iam.workloadIdentityUser \\\n  --member \"serviceAccount:$PROJECT_ID.svc.id.goog[YOUR_NAMESPACE\/ksa-sqlproxy]\" \\\n  sql-client@$PROJECT_ID.iam.gserviceaccount.com\n```\n\n4. Add annotation to the service account\n\n```\nkubectl annotate serviceaccount \\\n  ksa-sqlproxy \\\n  iam.gke.io\/gcp-service-account=sql-client@$PROJECT_ID.iam.gserviceaccount.com \\\n  -n YOUR_NAMESPACE\n```\n\n5. Deploy PostgreSQL client with Cloud SQL Proxy sidecar\n\nTake a look at the deployment YAML file [postgres_cloudproxy.yaml](postgres_cloudproxy.yaml). Please note the following two items:\n(i) The `serviceAccountName: ksa-sqlproxy` entry for the pod. The pod will use this KSA to authenticate itself through Google Cloud IAM. Remember that we don't need the certificate files.\n(ii) The container entry for Cloud SQL Proxy.\n\n```\nkubectl apply -f postgres_cloudproxy.yaml -n sample\n```\n\n6. Test out the Cloud SQL Proxy sidecar\n\nRun the following command to get into the Postgres client container:\n\n```\nkubectl exec -it deploy\/postgres-check -c postgres-check -n sample -- \/bin\/bash\n```\n\nRun the `psql` command in non-SSL mode:\n\n```\npsql \"hostaddr=YOUR_POSTGRESQL_IP port=5432 user=YOUR_USERNAME dbname=YOUR_DB_NAME\"\n```\n\nYou should be prompted for a password, and then you should be connected to your PostgreSQL database."}
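The KSA-to-GSA binding step above relies on a fixed Workload Identity member string format. As a minimal Python sketch (the helper names `wi_member`/`gsa_email` and the project value are mine; the namespace and KSA names follow the guide):

```python
# Sketch: assemble the strings passed to
# `gcloud iam service-accounts add-iam-policy-binding` in the binding step.

def wi_member(project_id: str, namespace: str, ksa: str) -> str:
    # Workload Identity member: serviceAccount:PROJECT.svc.id.goog[NAMESPACE/KSA]
    return f"serviceAccount:{project_id}.svc.id.goog[{namespace}/{ksa}]"

def gsa_email(project_id: str, gsa_name: str) -> str:
    # Google service account email: NAME@PROJECT.iam.gserviceaccount.com
    return f"{gsa_name}@{project_id}.iam.gserviceaccount.com"

print(wi_member("my-project", "sample", "ksa-sqlproxy"))
# serviceAccount:my-project.svc.id.goog[sample/ksa-sqlproxy]
print(gsa_email("my-project", "sql-client"))
# sql-client@my-project.iam.gserviceaccount.com
```

Getting this member string wrong (for example, omitting the `[NAMESPACE/KSA]` suffix) is a common reason the proxy pod fails to authenticate.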
{"questions":"GCP Cost Optimization Dashboard This repo contains SQL scripts for analyzing GCP Billing Recommendations data and also a guide to setup the Cost Optimization dashboard Introduction For sample dashboard The Cost Optimization dashboard builds on top of existing and adds following additional insights to the dashboard Compute Engine Insights Cloud Storage Insights","answers":"# Cost Optimization Dashboard\n\nThis repo contains SQL scripts for analyzing GCP Billing and Recommendations data, along with a guide to set up the Cost Optimization dashboard.\nFor a sample dashboard, [see here](https:\/\/datastudio.google.com\/c\/u\/0\/reporting\/6cf564a4-9c94-4cfd-becd-b9c770ee7aa2\/page\/r34iB).\n\n\n## Introduction\nThe Cost Optimization dashboard builds on top of the existing [GCP billing dashboard](https:\/\/cloud.google.com\/billing\/docs\/how-to\/visualize-data) and adds the following additional insights to the dashboard.\n* Compute Engine Insights\n* Cloud Storage Insights\n* BigQuery Insights\n* Cost Optimization Recommendations\n* Etc.\n\n\nA few key things to keep in mind before starting.\n* [Recommendations data export](https:\/\/cloud.google.com\/recommender\/docs\/bq-export\/export-recommendations-to-bq) to BigQuery is still in Preview (as of June 2021).\n* Currently, no automation is available for this setup.\n\n> <span style=\"color:red\">*NOTE: Implementing this dashboard will incur additional BigQuery charges.*<\/span>\n\n\n## Prerequisites\nThe user running the steps in this guide should have the ability to:\n* Create a GCP project and BQ datasets.\n* Schedule BQ queries and data transfer jobs.\n* Provision BQ access to the Datastudio dashboard.\n* Look up the fully qualified names of BQ tables. 
Example format: ```<project-id>.<dataset-name>.<table-name>```.\n\n\nAlso, make sure the following resources are accessible.\n* [billing_dashboard_view](https:\/\/datastudio.google.com\/datasources\/c7f4b535-3dd0-4aaa-bd2b-78f9718df9e2)\n* [co_dashboard_view](https:\/\/datastudio.google.com\/datasources\/78dc2597-d8e7-40db-8fbb-f3b2c8271b6d)\n* [co_recommendations_view](https:\/\/datastudio.google.com\/datasources\/c972e0f6-51e1-483c-8947-214b300d26a6)\n* [GCP Cost Optimization Dashboard](https:\/\/datastudio.google.com\/reporting\/6cf564a4-9c94-4cfd-becd-b9c770ee7aa2)\n\n> <span style=\"color:red\">*NOTE: In case of permission issues with the above links, please reach out to gcp-co-dashboard@google.com*<\/span>\n \n## Setup\nThe overall process of the setup is as follows; each step is outlined in detail below.\n* Create a project and the required BigQuery datasets.\n* Create data transfer jobs and scheduled queries.\n  * Initiate data transfer for billing data export into the ```billing``` dataset.\n  * Initiate data transfer for recommendations data export into the ```recommender``` dataset.\n  * Set up dashboard-related functions, views and scheduled queries in the ```dashboard``` dataset.\n* Use pre-existing templates as a starting point to set up the DataStudio dashboard.\n\n### Project and datasets\n* Create a project to hold the BigQuery datasets for the dashboard.\n* Create the following datasets in the project. Please make sure that all datasets are created in the same region (e.g. US). 
[See instructions](https:\/\/cloud.google.com\/bigquery\/docs\/datasets#create-dataset) on how to create a dataset in BigQuery.\n  * ```billing```\n  * ```recommender```\n  * ```dashboard```\n\n### Billing data exports to BigQuery\n* In the project created above, enable export for both \u2018Daily cost detail\u2019 and \u2018Pricing\u2019 data tables to the ```billing``` dataset by following the instructions [here](https:\/\/cloud.google.com\/billing\/docs\/how-to\/export-data-bigquery-setup).\n* Data availability\n\n  > Your [BigQuery dataset](https:\/\/cloud.google.com\/bigquery\/docs\/datasets-intro) only reflects Google Cloud usage and cost data incurred from the date you set up Cloud Billing export, and after. That is, Google Cloud billing data is not added retroactively, so you won't see Cloud Billing data from before you enable export.\n\n### Recommendations data exports to BigQuery\n* Export recommendations data to the ```recommender``` dataset by following the instructions [here](https:\/\/cloud.google.com\/recommender\/docs\/bq-export\/export-recommendations-to-bq).\n* Data availability\n\n  > There is a one-day delay after a transfer is first created before your requested organization is opted in to exports, and recommendations for your organization are available for export. In the interim, you will see the message \u201cTransfer deferred due to source data not being available\u201d. 
This message may also be shown in case the data is not ready for export on future days; the transfers will automatically be rescheduled to check for source data at a later time.\n\n* At this point the datasets would look something like below.\n\n  ![](docs\/image1.png)\n\n### Cost Optimization data analysis scripts\nThis step involves setting up the following data analysis components.\n* Required SQL functions\n* Daily scheduled scripts for data analysis and aggregation\n\n\n#### Common Functions\n* Compose a new query and copy the SQL at [common_functions.sql](scripts\/common_functions.sql).\n* Execute the query to create some required functions in the ```dashboard``` dataset.\n* This is how the dataset will look after the above step.\n\n  ![](docs\/image2.png)\n\n\n#### CO Billing Data Table\n* Compose a new query and copy the SQL at [co_billing_data.sql](scripts\/co_billing_data.sql).\n* Replace ```<BILLING_EXPORT_TABLE>``` with the correct table name created in the \"Billing data exports to BigQuery\" step.\n* Run the query and ensure it completes without errors.\n* Click \u2018Schedule query -> Create new scheduled query\u2019.\n\n  ![](docs\/image3.png)\n\n* Fill in the details as shown in the screenshot below.\n\n  ![](docs\/image4.png)\n\n* Click \u2018Schedule\u2019 to create a scheduled query.\n\n\n#### CO Pricing Data Table\n* Compose a new query and copy the SQL at [co_pricing_data.sql](scripts\/co_pricing_data.sql). 
\n* Replace ```<PRICING_EXPORT_TABLE>``` with the correct table name created in the \"Billing data exports to BigQuery\" step.\n* Click \u2018Schedule query -> Create new scheduled query\u2019.\n\n  ![](docs\/image3.png)\n\n* Fill in the details as shown in the screenshot below.\n\n  [![](docs\/image5.png)](docs\/image5.png)\n\n* Click \u2018Schedule\u2019 to create a scheduled query.\n\n\n#### CO Recommendations Data Table\n* Compose a new query and copy the SQL at [co_recommendations_data.sql](scripts\/co_recommendations_data.sql).\n* Click \u2018Schedule query -> Create new scheduled query\u2019.\n\n  ![](docs\/image3.png)\n\n* Fill in the details as shown in the screenshot below.\n\n  ![](docs\/image6.png)\n\n* Click \u2018Schedule\u2019 to create a scheduled query.\n\n#### Verify\n* This is how the BQ scheduled queries screen will look for CO queries after the above steps.\n\n  ![](docs\/image7.png)\n\n* This is how the dataset will look after the above steps.\n\n  ![](docs\/image8.png)\n\n\n## Dashboard\nThis step involves copying the template data sources and template dashboard report and making the necessary changes.\n\n### Data Sources\nBelow are the template data sources that are of interest.\n* [billing_dashboard_view](https:\/\/datastudio.google.com\/datasources\/c7f4b535-3dd0-4aaa-bd2b-78f9718df9e2)\n* [co_dashboard_view](https:\/\/datastudio.google.com\/datasources\/78dc2597-d8e7-40db-8fbb-f3b2c8271b6d)\n* [co_recommendations_view](https:\/\/datastudio.google.com\/datasources\/c972e0f6-51e1-483c-8947-214b300d26a6)\n\nFor each of the above data sources:\n* Copy the data source.\n\n  ![](docs\/image9.png)\n\n* Change the name at the top left corner.\n  * For example, from \"Copy of [EXTERNAL] xyz_view\" to \u201cxyz_view\u201d.\n* Click the \u2018Edit Connection\u2019 button if the Custom Query editor window does not show up after copying the data source in the above step.\n* In the Billing Project Selector panel (middle panel), select the project created in 
the beginning of this guide.\n* In the Query panel to the right:\n  * Wherever applicable, replace all occurrences of the project name (anilgcp-co-dev) with the project created in the beginning of this guide.\n    * Example: `anilgcp-co-dev.dashboard.co_billing_data` to `REPLACE_WITH_PROJECT_ID.dashboard.co_billing_data`\n  * Wherever applicable, replace all occurrences of the billing export table name with the correct table name.\n    * Example: `anilgcp-co-dev.billing.gcp_billing_export_v1_xxxxxxxxxxxx` to `REPLACE_WITH_PROJECT_ID.billing.billing_export_table_name`\n* Make sure all occurrences of fully qualified table names are enclosed within backticks (`).\n* Make sure \u2018Enable date parameters\u2019 is selected for the data sources from the above steps.\n\n  ![](docs\/image10.png)\n\n* Click \"Reconnect\".\n* Review the confirmation, which should say \"Allow parameter sharing?\", and click \"Allow\".\n\n  ![](docs\/image11.png)\n\n* Review the confirmation, which should say \"No fields changed due to configuration change.\", and click \"Apply\". 
\n\n### Report\n* Copy the [dashboard template](https:\/\/datastudio.google.com\/reporting\/6cf564a4-9c94-4cfd-becd-b9c770ee7aa2) by clicking the \"Make a copy\" button at the top right-hand side, as shown below.\n\n  ![](docs\/image12.png)\n\n* Point the \"New Data Source\" to the newly created data sources from the above steps, and click \"Copy Report\".\n\n  ![](docs\/image13.png)\n\n* Change the name of the dashboard from \u201cCopy of [EXTERNAL] GCP Cost Optimization Dashboard\u201d to \u201cGCP Cost Optimization Dashboard\u201d or something similar.\n\n\n## References and support\n* For feedback and support, reach out to your TAM.","site":"GCP"}
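The data-source setup above repeatedly asks you to replace placeholders such as ```<BILLING_EXPORT_TABLE>``` with backtick-quoted fully qualified table names. A minimal Python sketch of that substitution (the function and the concrete project/table names are illustrative, not part of the repo's scripts):

```python
# Sketch: substitute placeholders in the dashboard SQL templates, keeping
# fully qualified table names enclosed in backticks as the guide requires.

def fill_placeholders(sql: str, replacements: dict) -> str:
    for placeholder, fq_table in replacements.items():
        # Enclose each fully qualified name in backticks for BigQuery.
        sql = sql.replace(placeholder, f"`{fq_table}`")
    return sql

template = "SELECT service, SUM(cost) FROM <BILLING_EXPORT_TABLE> GROUP BY service"
sql = fill_placeholders(
    template,
    {"<BILLING_EXPORT_TABLE>": "my-project.billing.gcp_billing_export_v1_sample"},
)
print(sql)
```

The same pattern applies to ```<PRICING_EXPORT_TABLE>``` and to swapping the template project (anilgcp-co-dev) for your own.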
{"questions":"GCP Copyright 2021 Google LLC All Rights Reserved you may not use this file except in compliance with the License http www apache org licenses LICENSE 2 0 You may obtain a copy of the License at Unless required by applicable law or agreed to in writing software Licensed under the Apache License Version 2 0 the License","answers":"<!--\nCopyright 2021 Google LLC. All Rights Reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at:\n\n     http:\/\/www.apache.org\/licenses\/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n==============================================================================\n-->\n\n# Vertex AI Pipeline\nThis repository demonstrates an end-to-end [MLOps process](https:\/\/services.google.com\/fh\/files\/misc\/practitioners_guide_to_mlops_whitepaper.pdf) \nusing the [Vertex AI](https:\/\/cloud.google.com\/vertex-ai) platform \nand [Smart Analytics](https:\/\/cloud.google.com\/solutions\/smart-analytics) technology capabilities.\n\nIn particular, two general [Vertex AI Pipeline](https:\/\/cloud.google.com\/vertex-ai\/docs\/pipelines) \ntemplates have been provided:\n- Training pipeline including:\n  - Data processing\n  - Custom model training\n  - Model evaluation\n  - Endpoint creation\n  - Model deployment\n  - Deployment testing\n  - Model monitoring\n- Batch-prediction pipeline including:\n  - Data processing\n  - Batch prediction using the deployed model\n\nNote that besides data processing, which is done using BigQuery, all other steps are built on top of\n[Vertex AI](https:\/\/cloud.google.com\/vertex-ai) platform capabilities.\n\n<p 
align=\"center\">\n    <img src=\".\/training_pipeline.png\" alt=\"Sample Training Pipeline\" width=\"600\"\/>\n<\/p>\n\n### Dataset\nThe dataset used throughout the demonstration is the\n[Banknote Authentication Data Set](https:\/\/archive.ics.uci.edu\/ml\/datasets\/banknote+authentication).\nData were extracted from images that were taken from genuine and forged banknote-like specimens. \nFor digitization, an industrial camera usually used for print inspection was used. \nThe final images have 400x400 pixels. Due to the object lens and distance to the \ninvestigated object, gray-scale pictures with a resolution of about 660 dpi were gained. \nA Wavelet Transform tool was used to extract features from the images.\nAttribute Information:\n1. variance of Wavelet Transformed image (continuous)\n2. skewness of Wavelet Transformed image (continuous)\n3. curtosis of Wavelet Transformed image (continuous)\n4. entropy of image (continuous)\n5. class (integer)\n\n### Machine Learning Problem\nGiven the Banknote Authentication Data Set, a binary classification problem is adopted where \nthe attribute `class` is chosen as the label and the remaining attributes are used as features.\n\n[LightGBM](https:\/\/github.com\/microsoft\/LightGBM), a gradient boosting framework that uses tree-based \nlearning algorithms, is used to train the model for the purpose of demonstrating the \n[custom training](https:\/\/cloud.google.com\/vertex-ai\/docs\/training\/custom-training) and\n[custom serving](https:\/\/cloud.google.com\/vertex-ai\/docs\/predictions\/use-custom-container) \ncapabilities of the Vertex AI platform, whose pre-built options provide more native support for e.g. 
TensorFlow,\nPyTorch and Scikit-Learn.\n\n\n## Repository Structure\n\nThe repository contains the following:\n\n```\n.\n\u251c\u2500\u2500 components    : custom vertex pipeline components\n\u251c\u2500\u2500 images        : custom container images for training and serving\n\u251c\u2500\u2500 pipelines     : vertex ai pipeline definitions and runners\n\u251c\u2500\u2500 configs       : configurations for defining vertex ai pipeline\n\u251c\u2500\u2500 scripts       : scripts for running local testing \n\u2514\u2500\u2500 notebooks     : notebooks used for development and testing of vertex ai pipeline\n```\nIn addition:\n- `build_components_cb.sh`: build all components defined in the `components` folder using Cloud Build\n- `build_images_cb.sh`: build custom images (training and serving) defined in the `images` folder using Cloud Build\n- `build_pipeline_cb.sh`: build the training and batch-prediction pipelines defined in the `pipelines` folder using Cloud Build\n\n## Get Started\n\nThe end-to-end process of creating and running the training pipeline contains the following steps:\n\n1. Set up an [MLOps environment](https:\/\/github.com\/GoogleCloudPlatform\/mlops-with-vertex-ai\/tree\/main\/provision) on Google Cloud.\n2. Create an [Artifact Registry](https:\/\/cloud.google.com\/artifact-registry) for your organization to manage container images.\n3. Develop the training and serving logic.\n4. Create the components required to build and run the pipeline.\n5. Prepare and consolidate the configurations of the various steps of the pipeline.\n6. Build the pipeline.\n7. Run and orchestrate the pipeline.\n\n### Create Artifact Registry\n[Artifact Registry](https:\/\/cloud.google.com\/artifact-registry)\nis a single place for your organization to manage container images and language \npackages (such as Maven and npm). It is fully integrated with Google Cloud\u2019s tooling and \nruntimes and comes with support for native artifact protocols. 
More importantly, it supports \nregional and multi-regional repositories.\n\nWe have provided a helper script: `scripts\/create_artifact_registry.sh`\n\n\n### Develop Training and Serving Logic\nDevelop your machine learning program and then containerize it as demonstrated in `images`. \nThe requirements for writing training code can be found \n[here](https:\/\/cloud.google.com\/vertex-ai\/docs\/training\/code-requirements) as well.\nNote that a custom serving image is not necessary if your chosen framework is supported by a \n[pre-built container](https:\/\/cloud.google.com\/vertex-ai\/docs\/predictions\/pre-built-containers).\nPre-built containers are organized by machine learning (ML) framework and framework version, and \nprovide HTTP prediction servers that you can use to serve predictions with minimal configuration.\n\nWe have also provided helper scripts:\n- `scripts\/run_training_local.sh`: test the training program locally\n- `scripts\/run_serving_local.sh`: test the serving program locally\n- `build_images_cb.sh`: build the images using the Cloud Build service\n\n#### Environment variables for special Cloud Storage directories\nVertex AI sets the following environment variables when it runs your training code:\n\n- `AIP_MODEL_DIR`: a Cloud Storage URI of a directory intended for saving model artifacts.\n- `AIP_CHECKPOINT_DIR`: a Cloud Storage URI of a directory intended for saving checkpoints.\n- `AIP_TENSORBOARD_LOG_DIR`: a Cloud Storage URI of a directory intended for saving TensorBoard logs. 
See Using Vertex TensorBoard with custom training.\n\n### Build Components\nThe following template custom components are provided:\n- `components\/data_process`: read a BQ table, perform transformation in BQ and export to GCS\n- `components\/train_model`: launch a custom (distributed) training job on the Vertex AI platform \n- `components\/check_model_metrics`: check the metrics of a training job and verify whether it produces a better model\n- `components\/create_endpoint`: create an endpoint on the Vertex AI platform\n- `components\/deploy_model`: deploy a model artifact to a created endpoint on the Vertex AI platform\n- `components\/test_endpoint`: call the endpoint of the deployed model for verification\n- `components\/monitor_model`: track deployed model performance using Vertex Model Monitoring\n- `components\/batch_prediction`: launch a batch prediction job on the Vertex AI platform\n\nWe have also provided a helper script: `build_components_cb.sh`\n\n### Build and Run Pipeline\nThe sample pipeline definitions are:\n- `pipelines\/training_pipeline.py`\n- `pipelines\/batch_prediction_pipeline.py`\n\nAfter compiling the training or batch-prediction pipeline, you may trigger a pipeline run\nusing the provided runners:\n- `pipelines\/training_pipeline_runner.py`\n- `pipelines\/batch_prediction_pipeline_runner.py`\n\nAn example of running the training pipeline using the runner:\n```shell\npython training_pipeline_runner \\\n  --project_id \"$PROJECT_ID\" \\\n  --pipeline_region $PIPELINE_REGION \\\n  --pipeline_root $PIPELINE_ROOT \\\n  --pipeline_job_spec_path $PIPELINE_SPEC_PATH \\\n  --data_pipeline_root $DATA_PIPELINE_ROOT \\\n  --input_dataset_uri \"$DATA_URI\" \\\n  --training_data_schema ${DATA_SCHEMA} \\\n  --data_region $DATA_REGION \\\n  --gcs_data_output_folder $GCS_OUTPUT_PATH \\\n  --training_container_image_uri \"$TRAIN_IMAGE_URI\" \\\n  --train_additional_args $TRAIN_ARGS \\\n  --serving_container_image_uri \"$SERVING_IMAGE_URI\" \\\n  --custom_job_service_account $CUSTOM_JOB_SA 
\\\n  --hptune_region $PIPELINE_REGION \\\n  --hp_config_max_trials 30 \\\n  --hp_config_suggestions_per_request 5 \\\n  --vpc_network \"$VPC_NETWORK\" \\\n  --metrics_name $METRIC_NAME \\\n  --metrics_threshold $METRIC_THRESHOLD \\\n  --endpoint_machine_type n1-standard-4 \\\n  --endpoint_min_replica_count 1 \\\n  --endpoint_max_replica_count 2 \\\n  --endpoint_test_instances ${TEST_INSTANCE} \\\n  --monitoring_user_emails $MONITORING_EMAIL \\\n  --monitoring_log_sample_rate 0.8 \\\n  --monitor_interval 3600 \\\n  --monitoring_default_threshold 0.3 \\\n  --monitoring_custom_skew_thresholds $MONITORING_CONFIG \\\n  --monitoring_custom_drift_thresholds $MONITORING_CONFIG \\\n  --enable_model_monitoring True \\\n  --pipeline_schedule \"0 2 * * *\" \\\n  --pipeline_schedule_timezone \"US\/Pacific\" \\\n  --enable_pipeline_caching\n```\n\nWe have also provided helper scripts:\n- `scripts\/build_pipeline_spec.sh`: compile and build the pipeline specs locally\n- `scripts\/run_training_pipeline.sh`: create and run the training Vertex AI Pipeline based on the specs\n- `scripts\/run_batch_prediction_pipeline.sh`: create and run the batch-prediction Vertex AI Pipeline based on the specs\n- `build_pipeline_spec_cb.sh`: compile and build the pipeline specs using the Cloud Build service\n\n\n### Some common parameters\n|Field|Explanation|\n|-----|-----|\n|project_id|Your GCP project|\n|pipeline_region|The region to run Vertex AI Pipeline|\n|pipeline_root|The GCS buckets used for storing artifacts of your pipeline runs|\n|data_pipeline_root|The GCS staging location for custom job|\n|input_dataset_uri|Full URI of input dataset|\n|data_region|Region of input dataset|\n\n## Contributors\n- [Shixin Luo](https:\/\/github.com\/luotigerlsx)\n- [Tommy Siu](https:\/\/github.com\/tommysiu)\n- [Nathan Faggian](https:\/\/github.com\/nfaggian)","site":"GCP"}
  1  Setting up  MLOps environment  https   github com GoogleCloudPlatform mlops with vertex ai tree main provision  on Google Cloud  2  Create an  Artifact Registry  https   cloud google com artifact registry  for your organization to manage container images 3  Develop the training and serving logic 4  Create the components required to build and run the pipeline 5  Prepare and consolidate the configurations of the various steps of the pipeline 6  Build the pipeline 7  Run and orchestrate the pipeline      Create Artifact Registry  Artifact Registry  https   cloud google com artifact registry  is a single place for your organization to manage container images and language  packages  such as Maven and npm   It is fully integrated with Google Cloud s tooling and  runtimes and comes with support for native artifact protocols  More importantly  it supports  regional and multi regional repositories   We have provided a helper script   scripts create artifact registry sh        Develop Training and Serving Logic Develop your machine learning program and then containerize them as demonstrated in  images    The requirements for writing training code can be found   here  https   cloud google com vertex ai docs training code requirements  as well  Note that custom serving image is not necessary if your choosen framework is supported by   pre built container  https   cloud google com vertex ai docs predictions pre built containers   which are organized by machine learning  ML  framework and framework version   provide HTTP prediction servers that you can use to serve predictions with minimal configuration  We have also provided helper scripts     scripts run training local sh   test the training program locally    scripts run serving local sh   test the serving program locally    build images cb sh   build the images using Cloud Build service       Environment variables for special Cloud Storage directories Vertex AI sets the following environment variables when it runs your 
training code      AIP MODEL DIR   a Cloud Storage URI of a directory intended for saving model artifacts     AIP CHECKPOINT DIR   a Cloud Storage URI of a directory intended for saving checkpoints     AIP TENSORBOARD LOG DIR   a Cloud Storage URI of a directory intended for saving TensorBoard logs  See Using Vertex TensorBoard with custom training       Build Components The following template custom components are provided     components data process   read BQ table  perform transformation in BQ and export to GCS    components train model   launch custom  distributed  training job on Vertex AI platform     components check model metrics   check the metrics of a training job and verify whether it produces better model    components create endpoint   create an endpoint on Vertex AI platform    components deploy model   deployed a model artifact to a created endpoint on Vertex AI platform    components test endpoint   call the endpoint of deployed model for verification    components monitor model   track deployed model performance using Vertex Model Monitoring    components batch prediction   launch batch prediction job on Vertex AI platform  We have also provided a helper script   build components cb sh       Build and Run Pipeline The sample definition of pipelines are    pipelines training pipeline py     pipelines batch prediction pipeline py   After compiled the training or batch prediction pipeline  you may trigger the pipeline run using the provided runner    pipelines trainin pipeline runner py     pipelines batch prediction pipeline runner py   An example to run training pipeline using the runner    shell python training pipeline runner       project id   PROJECT ID        pipeline region  PIPELINE REGION       pipeline root  PIPELINE ROOT       pipeline job spec path  PIPELINE SPEC PATH       data pipeline root  DATA PIPELINE ROOT       input dataset uri   DATA URI        training data schema   DATA SCHEMA        data region  DATA REGION       gcs data 
output folder  GCS OUTPUT PATH       training container image uri   TRAIN IMAGE URI        train additional args  TRAIN ARGS       serving container image uri   SERVING IMAGE URI        custom job service account  CUSTOM JOB SA       hptune region  PIPELINE REGION       hp config max trials 30       hp config suggestions per request 5       vpc network   VPC NETWORK        metrics name  METRIC NAME       metrics threshold  METRIC THRESHOLD       endpoint machine type n1 standard 4       endpoint min replica count 1       endpoint max replica count 2       endpoint test instances   TEST INSTANCE        monitoring user emails  MONITORING EMAIL       monitoring log sample rate 0 8       monitor interval 3600       monitoring default threshold 0 3       monitoring custom skew thresholds  MONITORING CONFIG       monitoring custom drift thresholds  MONITORING CONFIG       enable model monitoring True       pipeline schedule  0 2              pipeline schedule timezone  US Pacific        enable pipeline caching      We have also provided helper scripts     scripts build pipeline spec sh   compile and build the pipeline specs locally    scripts run training pipeline sh   create and run training Vertex AI Pipeline based on the specs    scripts run batch prediction pipeline sh   create and run batch prediction Vertex AI Pipeline based on the specs    build pipeline spec cb sh   compile and build the pipeline specs using Cloud Build service       Some common parameters  Field Explanation                 project id Your GCP project   pipeline region The region to run Vertex AI Pipeline   pipeline root The GCS buckets used for storing artifacts of your pipeline runs   data pipeline root The GCS staging location for custom job   input dataset uri Full URI of input dataset   data region Region of input dataset      Contributors    Shixin Luo  https   github com luotigerlsx     Tommy Siu  https   github com tommysiu     Nathan Faggian  https   github com nfaggian"}
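The training-pipeline runner above takes all of its settings from shell variables (`$PROJECT_ID`, `$PIPELINE_ROOT`, and so on). A minimal sketch of exporting a few of them and sanity-checking that nothing is left empty before invoking the runner — every value here is hypothetical, substitute your own project, region, and buckets:

```shell
# Hypothetical values -- replace with your own project, region, and buckets.
export PROJECT_ID=my-gcp-project
export PIPELINE_REGION=us-central1
export PIPELINE_ROOT=gs://my-bucket/pipeline-root
export DATA_PIPELINE_ROOT=gs://my-bucket/staging
export DATA_REGION=us-central1

# Fail fast if any required variable is empty before calling the runner script.
for v in PROJECT_ID PIPELINE_REGION PIPELINE_ROOT DATA_PIPELINE_ROOT DATA_REGION; do
  eval "val=\$$v"
  [ -n "$val" ] || { echo "unset: $v" >&2; exit 1; }
done
echo "pipeline config ok"
```

With these exported, the `python training_pipeline_runner.py ...` invocation shown above (or `scripts/run_training_pipeline.sh`) picks them up unchanged.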
{"questions":"GCP Instrumenting Web Applications End to End with Stackdriver and OpenTelemetry Stackdriver export to BigQuery and analyze the logs with SQL queries This tutorial demonstrates instrumenting a web application end to end from the JavaScript browser code that drives HTTP requests to a Node js backend that with OpenTelemetry and Stackdriver to run for a load test It shows how to can be run anywhere that you can run Node collect the instrumentation data from the browser and the server ship to browser to the backend application including logging monitoring and tracing The app is something like a simple web version of Apache Bench It includes","answers":"# Instrumenting Web Applications End-to-End with Stackdriver and OpenTelemetry\n\nThis tutorial demonstrates instrumenting a web application end-to-end, from the\nbrowser to the backend application, including logging, monitoring, and tracing\nwith OpenTelemetry and Stackdriver, and then running a load test. It shows how to\ncollect the instrumentation data from the browser and the server, ship it to\nStackdriver, export to BigQuery, and analyze the logs with SQL queries.\nThe app is something like a simple web version of Apache Bench. It includes\nJavaScript browser code that drives HTTP requests to a Node.js backend that\ncan be run anywhere that you can run Node.\n\n## Setup\n\nTo work through this example, you will need a GCP project. Follow these steps:\n1. [Select or create a GCP project](https:\/\/console.cloud.google.com\/projectselector2\/home\/dashboard)\n2. [Enable billing for your project](https:\/\/support.google.com\/cloud\/answer\/6293499#enable-billing)\n3. Clone the repo\ngit clone https:\/\/github.com\/GoogleCloudPlatform\/professional-services.git\n4. Install Node.js\n5. Install Go\n6. Install Docker\n7. 
Install the [Google Cloud SDK](https:\/\/cloud.google.com\/sdk\/install)\n\nClone this repository to your environment with the command\n\n```shell\ngit clone https:\/\/github.com\/GoogleCloudPlatform\/professional-services.git\n```\n\nChange to this directory and set an environment variable to remember the\nlocation\n\n```shell\ncd professional-services\/examples\/web-instrumentation\nWI_HOME=`pwd`\n```\n\nSet Google Cloud SDK to the current project\n\n```shell\nexport GOOGLE_CLOUD_PROJECT=[Your project]\ngcloud config set project $GOOGLE_CLOUD_PROJECT\n```\n\nEnable the required services\n\n```shell\ngcloud services enable bigquery.googleapis.com \\\n  cloudbuild.googleapis.com \\\n  cloudtrace.googleapis.com \\\n  compute.googleapis.com \\\n  container.googleapis.com \\\n  containerregistry.googleapis.com \\\n  logging.googleapis.com \\\n  monitoring.googleapis.com\n```\n\n\nInstall the JavaScript packages required by both the server and the browser:\n\n```shell\nnpm install\n```\n\n## OpenTelemetry collector\n\nOpen up a new shell. 
In a new directory, clone the OpenTelemetry collector\ncontrib project, which contains the Stackdriver exporter\n\n```shell\ngit clone https:\/\/github.com\/open-telemetry\/opentelemetry-collector-contrib\ncd opentelemetry-collector-contrib\n```\n\nBuild the binary executable\n\n```shell\nmake otelcontribcol\n```\n\nBuild the container\n\n```shell\nmake docker-otelcontribcol\n```\n\nTag it for Google Container Registry (GCR)\n\n```shell\ndocker tag otelcontribcol gcr.io\/$GOOGLE_CLOUD_PROJECT\/otelcontribcol:latest\n```\n\nPush to GCR\n\n```shell\ndocker push gcr.io\/$GOOGLE_CLOUD_PROJECT\/otelcontribcol\n```\n\n### Run the OpenTelemetry collector locally\n\nIf you are running on GKE only, you do not need to do this step.\nFor running locally, the OpenTelemetry collector needs permissions and\ncredentials to write to Stackdriver.\n\nObtain user access credentials and store them for Application Default\nCredentials\n\n```shell\ngcloud auth application-default login \\\n  --scopes=\"https:\/\/www.googleapis.com\/auth\/trace.append\"\n```\n\nInstall Go and run **This bit is not working for OT**\n\n```shell\nmake otelcontribcol\nbin\/linux\/otelcontribcol --config=$WI_HOME\/conf\/otservice-config.yaml\n```\n\n## Browser code\n\nThe browser code refers to ES2015 modules that need to be transpiled and bundled\nwith the help of webpack. Make sure that the variable `collectorURL` in\n`browser\/src\/index.js` refers to localhost if running the OpenTelemetry collector locally, or to\nthe external IP of the collector if running on Kubernetes.\n\nIn the original terminal, change to the browser code directory\n\n```shell\ncd browser\n```\n\nInstall the browser dependencies\n\n```shell\nnpm install\n```\n\nCompile the code\n\n```shell\nnpm run build\n```\n\n## Run app locally\n\nThe app can be deployed locally. 
First change to the top level directory\n\n```shell\ncd ..\n```\n\nTo run the app locally type\n\n```shell\nnode .\/src\/app.js\n```\n\nOpen your browser at http:\/\/localhost:8080\n\nFill in the test form to generate some load. You should see logs from both the\nNode.js server and the browser code in the console. You should see traces in\nStackdriver.\n\n## Deploy to Kubernetes\n\nCreate a cluster with 1 node and cluster autoscaling enabled\n\n```shell\nZONE=us-central1-a\nNAME=web-instrumentation\nCHANNEL=regular # choose rapid if you want to live on the edge\ngcloud beta container clusters create $NAME \\\n   --num-nodes 1 \\\n   --enable-autoscaling --min-nodes 1 --max-nodes 4 \\\n   --enable-basic-auth \\\n   --issue-client-certificate \\\n   --release-channel $CHANNEL \\\n   --zone $ZONE \\\n   --enable-stackdriver-kubernetes\n```\n\nChange the project id in file `k8s\/ot-service.yaml` with the sed command\n\n```shell\nsed -i.bak \"s\/\/$GOOGLE_CLOUD_PROJECT\/\" k8s\/ot-service.yaml\n```\n\nDeploy the OpenTelemetry service to the Kubernetes cluster\n\n```shell\nkubectl apply -f k8s\/ot-service.yaml\n```\n\nGet the external IP. It might take a few minutes for the deployment to complete\nand an IP address to be allocated. 
You may have to execute the command below\nseveral times before the EXTERNAL_IP shell variable is successfully set.\n\n```shell\nEXTERNAL_IP=$(kubectl get svc ot-service-service \\\n    -o jsonpath=\"{.status.loadBalancer.ingress[*].ip}\")\necho \"External IP: $EXTERNAL_IP\"\n```\n\nEdit the file `browser\/src\/index.js` changing the variable `collectorURL` to\nrefer to the external IP and port (80) of the agent with the following sed\ncommand\n\n```shell\nsed -i.bak \"s\/localhost:55678\/${EXTERNAL_IP}:80\/\" browser\/src\/index.js\n```\n\nRebuild the web client\n\n```shell\ncd browser\nnpm run build\ncd ..\n```\n\n### Build the app image\n\nTo deploy the image to the Google Container Registry (GCR), use the following\nCloud Build command\n\n```shell\ngcloud builds submit\n```\n\nChange the project id in file `k8s\/deployment.yaml` with the sed command\n\n```shell\nsed -i.bak \"s\/\/$GOOGLE_CLOUD_PROJECT\/\" k8s\/deployment.yaml\n```\n\nDeploy the app\n\n```shell\nkubectl apply -f k8s\/deployment.yaml\n```\n\nConfigure a service\n\n```shell\nkubectl apply -f k8s\/service.yaml\n```\n\nExpose a service:\n\n```shell\nkubectl apply -f k8s\/ingress.yaml\n```\n\nCheck for the external IP\n\n```shell\nkubectl get ingress\n```\n\nIt may take a few minutes for the service to be exposed through an external IP.\n\nNavigate to the external IP. It should present a form that will allow you to\nsend a series of XML HTTP requests to the server. 
That will generate trace and\nmonitoring data.\n\n## Log exports\n\nBefore running tests, consider first setting up\n[log exports](https:\/\/cloud.google.com\/logging\/docs\/export)\nto BigQuery for more targeted log queries to analyse the results of your load\ntests or production issues.\n\nCreate a BQ dataset to export the container logs to\n\n```shell\nbq --location=US mk -d \\\n  --description \"Web instrumentation container log exports\" \\\n  --project_id $GOOGLE_CLOUD_PROJECT \\\n  web_instr_container\n```\n\nCreate a log export for the container logs\n\n```shell\nLOG_SA=$(gcloud logging sinks create web-instr-container-logs \\\n  bigquery.googleapis.com\/projects\/$GOOGLE_CLOUD_PROJECT\/datasets\/web_instr_container \\\n  --log-filter='resource.type=\"k8s_container\" AND labels.k8s-pod\/app=\"web-instrumentation\"' \\\n  --format='value(\"writerIdentity\")')\n```\n\nThe identity of the logs writer service account is captured in the shell\nvariable `LOG_SA`. Give this service account write access to BigQuery\n\n```shell\ngcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \\\n    --member $LOG_SA \\\n    --role roles\/bigquery.dataEditor\n```\n\nCreate a BQ dataset for the load balancer logs\n\n```shell\nbq --location=US mk -d \\\n  --description \"Web instrumentation load balancer log exports\" \\\n  --project_id $GOOGLE_CLOUD_PROJECT \\\n  web_instr_load_balancer\n```\n\nRepeat creation of the log sink for load balancer logs\n\n```shell\nLOG_SA=$(gcloud logging sinks create web-instr-load-balancer-logs \\\n  bigquery.googleapis.com\/projects\/$GOOGLE_CLOUD_PROJECT\/datasets\/web_instr_load_balancer \\\n  --log-filter='resource.type=\"http_load_balancer\"' \\\n  --format='value(\"writerIdentity\")')\n```\n\nNote that the service account id changes, so you need to\nrepeat the step for granting write access to BigQuery\n\n```shell\ngcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \\\n  --member $LOG_SA \\\n  --role 
roles\/bigquery.dataEditor\n```\n\n## Running the load test\n\nNow you are ready to run the load test. You might try opening two tabs in a\nbrowser. In one tab generate a steady state load with requests of, say, 1 second\napart, to give a baseline. Then hit the app with a sudden spike to see how it\nresponds.\n\nYou can send a request from the command line with cURL\n\n```shell\nEXTERNAL_IP=[from kubectl get ingress command]\nREQUEST_ID=1234567889 # A random number\n# See W3C Trace Context for format\nTRACE=00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01\nMILLIS=`date +%s%N | cut -b1-13`\ncurl \"http:\/\/$EXTERNAL_IP\/data\/$REQUEST_ID\" \\\n  -H \"traceparent: $TRACE\" \\\n  -H 'Content-Type: application\/json' \\\n  --data-binary \"{\\\"data\\\":{\\\"name\\\":\\\"Smoke test\\\",\\\"reqId\\\":$REQUEST_ID,\\\"tSent\\\":$MILLIS}}\"\n```\n\nCheck that you see the log for the request in the Log Viewer. After the log is\nreplicated to BigQuery, you should be able to query it with a query like below.\nNote that the table name will be something like `requests_20200129`. A shell\nvariable is used to set the date below.\n\n```shell\nDATE=$(date -u +'%Y%m%d')\nbq query --use_legacy_sql=false \\\n\"SELECT\n  httpRequest.status,\n  httpRequest.requestUrl,\n  timestamp\nFROM web_instr_load_balancer.requests_${DATE}\nORDER BY timestamp DESC\nLIMIT 10\"\n```\n\nThere are more queries in the Colab sheet\n[load_test_analysis.ipynb](https:\/\/colab.research.google.com\/github\/googlecolab\/GoogleCloudPlatform\/professional-services\/blob\/web-instrumentation\/examples\/web-instrumentation\/load_test_analysis.ipynb).\n\n## Troubleshooting\n\nTry the following, depending on where you encounter problems.\n\n### Check project id\n\nCheck that you have set your project id in files `k8s\/ot-service.yaml`,\n`conf\/otservice-config.yaml`, and `k8s\/deployment.yaml`.\n\n### Tracing issues\nYou can use zPages to see the trace data sent to the OpenTelemetry collector. 
Find the\nname of the POD running the agent:\n\n```shell\nkubectl get pods\n```\n\nStart port forwarding\n\n```shell\nkubectl port-forward $POD 55679:55679\n```\n\nBrowse to the URL http:\/\/localhost:55679\/debug\/tracez\n\n### Browser JavaScript\nFor trouble bundling the web client, see Webpack\n[Getting Started](https:\/\/webpack.js.org\/guides\/getting-started\/).","site":"GCP","answers_cleaned":"  Instrumenting Web Applications End to End with Stackdriver and OpenTelemetry  This tutorial demonstrates instrumenting a web application end to end  from the browser to the backend application  including logging  monitoring  and tracing with OpenTelemetry and Stackdriver to run for a load test  It shows how to collect the instrumentation data from the browser and the server  ship to Stackdriver  export to BigQuery  and analyze the logs with SQL queries  The app is something like a simple web version of Apache Bench  It includes JavaScript browser code that drives HTTP requests to a Node js backend that can be run anywhere that you can run Node      Setup  To work through this example  you will need a GCP project  Follow these steps 1   Select or create a GCP project  https   console cloud google com projectselector2 home dashboard  2   Enable billing for your project  https   support google com cloud answer 6293499 enable billing  3  Clone the repo git clone https   github com GoogleCloudPlatform professional services git 4  Install Node js 5  Install Go 6  Install Docker 7  Install the  Google Cloud SDK  https   cloud google com sdk install   Clone this repository to your environment with the command     shell git clone https   github com GoogleCloudPlatform professional services git      Change to this directory and set an environment variable to remember the location     shell cd professional services examples web instrumentation WI HOME  pwd       Set Google Cloud SDK to the current project     shell export GOOGLE CLOUD PROJECT  Your project  gcloud config set 
project  GOOGLE CLOUD PROJECT      Enable the required services     shell gcloud services enable bigquery googleapis com     cloudbuild googleapis com     cloudtrace googleapis com     compute googleapis com     container googleapis com     containerregistry googleapis com     logging googleapis com     monitoring googleapis com       Install the JavaScript packages required by both the server and the browser      shell npm install         OpenTelemetry collector  Open up a new shell  In a new directory  clone the OpenTelemetry collector contrib project  which contains the Stackdriver exporter     shell git clone https   github com open telemetry opentelemetry collector contrib cd opentelemetry collector contrib      Build the binary executable     shell make otelcontribcol      Build the container     shell make docker otelcontribcol      Tag it for Google Container Registry  GCR      shell docker tag otelcontribcol gcr io  GOOGLE CLOUD PROJECT otelcontribcol latest      Push to GCR     shell docker push gcr io  GOOGLE CLOUD PROJECT otelcontribcol          Run the OpenTelemetry collector locally  If you are running on GKE only  you do do not need to do this step  For running locally  the OpenTelemetry collector needs permissions and credentials to write to Stackdriver   Obtain user access credentials and store them for Application Default Credentials     shell gcloud auth application default login       scopes  https   www googleapis com auth trace append       Install Go and run   This bit is not working for OT       shell make otelcontribcol bin linux otelcontribcol   config  WI HOME conf otservice config yaml         Browser code  The browser code refers to ES2015 modules that need to be transpiled and bundled with the help of webpack  Make sure that the variable  agentURL  in  src index js  refers to localhost if running the OpenCensus agent locally or to the external IP of the OpenCensus agent if running on Kubernetes   In the original terminal  change to the 
browser code directory     shell cd browser      Install the browser dependencies     shell npm install      Compile the code     shell npm run build         Run app locally  The app can be deployed locally  First change to the top level directory     shell cd         To run the app locally type     shell node   src app js      Open your browser at http   localhost 8080  Fill in the test form to generate some load  You should see logs from both the Node js server and the browser code in the console  You should see traces in Stackdriver      Deploy to Kubernetes  Create a cluster with 1 node and cluster autoscaling enabled     shell ZONE us central1 a NAME web instrumentation CHANNEL regular   choose rapid if you want to live on the edge gcloud beta container clusters create  NAME        num nodes 1        enable autoscaling   min nodes 1   max nodes 4        enable basic auth        issue client certificate        release channel  CHANNEL        zone  ZONE        enable stackdriver kubernetes      Change the project id in file  k8s ot service yaml  with the sed command     shell sed  i bak  s   GOOGLE CLOUD PROJECT   k8s ot service yaml      Deploy the OpenTelemetry service to the Kubernetes cluster     shell kubectl apply  f k8s ot service yaml      Get the external IP  It might take a few minutes for the deployment to complete and an IP address be allocated  You may have to execute the command below several times before the EXTERNAL IP shell variable is successfully set      shell EXTERNAL IP   kubectl get svc ot service service        o jsonpath    status loadBalancer ingress    ip    echo  External IP   EXTERNAL IP       Edit the file  browser src index js  changing the variable  collectorURL  to refer to the external IP and port  80  of the agent with the following sed command     shell sed  i bak  s localhost 55678   EXTERNAL IP  80   browser src index js      Rebuild the web client     shell cd browser npm run build cd             Build the app image  To 
deploy the image to the Google Container Registry  GCR   use the following Cloud Build command     shell gcloud builds submit      Change the project id in file  k8s deployment yaml  with the sed command     shell sed  i bak  s   GOOGLE CLOUD PROJECT   k8s deployment yaml      Deploy the app     shell kubectl apply  f k8s deployment yaml      Configure a service     shell kubectl apply  f k8s service yaml      Expose a service      shell kubectl apply  f k8s ingress yaml      Check for the external IP     shell kubectl get ingress      It may take a few minutes for the service to be exposed through an external IP   Navigate to the external IP  It should present a form that will allow you to send a series of XML HTTP requests to the server  That will generate trace and monitoring data      Log exports  Before running tests  consider first setting up  log exports  https   cloud google com logging docs export   to BigQuery for more targeted log queries to analyse the results of your load tests or production issues   Create a BQ dataset to export the container logs to     shell bq   location US mk  d       description  Web instrumentation container log exports        project id  GOOGLE CLOUD PROJECT     web instr container      Create a log export for the container logs     shell LOG SA   gcloud logging sinks create web instr container logs     bigquery googleapis com projects  GOOGLE CLOUD PROJECT datasets web instr container       log filter  resource type  k8s container  AND labels k8s pod app  web instrumentation         format  value  writerIdentity          The identify of the logs writer service account is captured in the shell variable  LOG SA   Give this service account write access to BigQuery     shell gcloud projects add iam policy binding  GOOGLE CLOUD PROJECT         member  LOG SA         role roles bigquery dataEditor      Create a BQ dataset for the load balancer logs     shell bq   location US mk  d       description  Web instrumentation load balancer 
log exports        project id  GOOGLE CLOUD PROJECT     web instr load balancer      Repeat creation of the log sink for load balancer logs     shell LOG SA   gcloud logging sinks create web instr load balancer logs     bigquery googleapis com projects  GOOGLE CLOUD PROJECT datasets web instr load balancer       log filter  resource type  http load balancer         format  value  writerIdentity          Note that the service account id changes so that you need to note that anre repeat the step for granting write access BigQuery     shell gcloud projects add iam policy binding  GOOGLE CLOUD PROJECT       member  LOG SA       role roles bigquery dataEditor         Running the load test  Now you are ready to run the load test  You might try opening two tabs in a browser  In one tab generate a steady state load with request of say  1 second apart  to give a baseline  Then hit the app with a sudden spike to see how it responds   You can send a request from the command line with cURL     shell EXTERNAL IP  from kubectl get ingress command  REQUEST ID 1234567889   A random number   See W3C Trace Context for format TRACE 00 0af7651916cd43dd8448eb211c80319c b7ad6b7169203331 01 MILLIS  date   s N   cut  b1 13  curl  http    EXTERNAL IP data  REQUEST ID       H  traceparent   TRACE       H  Content Type  application json        data binary     data      name     Smoke test     reqId    REQUEST ID   tSent    MILLIS         Check that you see the log for the request in the Log Viewer  After the log is replicated to BigQuery  you should be able to query it with a query like below  Note that the table name will be something like  requests 20200129   A shell variable is used to set the date below      shell DATE   date  u    Y m d   bq query   use legacy sql false    SELECT   httpRequest status    httpRequest requestUrl    timestamp FROM web instr load balancer requests   DATE  ORDER BY timestamp DESC LIMIT 10       There are more queries in the Colab sheet  load test analysis 
ipynb  https   colab research google com github googlecolab GoogleCloudPlatform professional services blob web instrumentation examples web instrumentation load test analysis ipynb       Troubleshooting  Try the following  depending on where you encounter problems       Check project id  Check that you have set your project id in files  k8s oc service yaml    conf ocagent config yaml   and  deployment yaml        Tracing issues You can use zPages to see the trace data sent to the OC agent  Find the name of the POD running the agent      shell kubectl get pods      Start port forwarding     shell kubectl port forward  POD 55679 55679      Browse to the URL http   localhost 55679 debug tracez      Browser JavaScript For trouble bundling the web client  see Webpack  Getting Started  https   webpack js org guides getting started   "}
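The cURL smoke test in the load-test section above hard-codes one W3C `traceparent` value. A sketch of generating a fresh, random one instead, assuming `/dev/urandom`, `od`, and `tr` are available (version `00`, sampled flag `01`, per the W3C Trace Context format):

```shell
# Build a random W3C traceparent: 16 random bytes for the trace id,
# 8 random bytes for the span id, rendered as lowercase hex.
TRACE_ID=$(head -c16 /dev/urandom | od -An -tx1 | tr -d ' \n')
SPAN_ID=$(head -c8 /dev/urandom | od -An -tx1 | tr -d ' \n')
TRACE="00-${TRACE_ID}-${SPAN_ID}-01"
echo "$TRACE"
```

Pass it to the server exactly as before, with `-H "traceparent: $TRACE"`, so each smoke-test request shows up as a distinct trace.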
{"questions":"GCP Five9 Voicestream Integration with Agent Assist Copyright 2024 Google This software is provided as is without warranty or representation for any use or purpose Your use of it is subject to your agreement with Google Project Structure This is a PoC to integrate Five9 Voicestream with Agent Assist","answers":"Copyright 2024 Google. This software is provided as-is, without warranty or\nrepresentation for any use or purpose. Your use of it is subject to your\nagreement with Google.\n\n# Five9 Voicestream Integration with Agent Assist\n\nThis is a PoC to integrate Five9 Voicestream with Agent Assist.\n\n## Project Structure\n\n```\n.\n\u251c\u2500\u2500 assets\n\u2502   \u2514\u2500\u2500 FAQ.csv\n\u251c\u2500\u2500 client\n\u2502   \u251c\u2500\u2500 audio\n\u2502   \u2502   \u251c\u2500\u2500 END_USER.wav\n\u2502   \u2502   \u2514\u2500\u2500 HUMAN_AGENT.wav\n\u2502   \u2514\u2500\u2500 client_voicestream.py\n\u251c\u2500\u2500 .env\n\u251c\u2500\u2500 proto\n\u2502   \u251c\u2500\u2500 voice_pb2_grpc.py\n\u2502   \u251c\u2500\u2500 voice_pb2.py\n\u2502   \u2514\u2500\u2500 voice.proto\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 requirements.txt\n\u2514\u2500\u2500 server\n    \u251c\u2500\u2500 server.py\n    \u251c\u2500\u2500 services\n    \u2502   \u2514\u2500\u2500 get_suggestions.py\n    \u2514\u2500\u2500 utils\n        \u251c\u2500\u2500 conversation_management.py\n        \u2514\u2500\u2500 participant_management.py\n```\n\n## Components\n- Agent Assist\n- Five9 with VoiceStream\n\n## Setup Instructions\n\n### GCP Project Setup\n\n#### Creating a Project in the Google Cloud Platform Console\n\nIf you haven't already created a project, create one now. Projects enable you to\nmanage all Google Cloud Platform resources for your app, including deployment,\naccess control, billing, and services.\n\n1. Open the [Cloud Platform Console][cloud-console].\n1. In the drop-down menu at the top, select **Create a project**.\n1. 
Give your project a name.\n1. Make a note of the project ID, which might be different from the project\n   name. The project ID is used in commands and in configurations.\n\n[cloud-console]: https:\/\/console.cloud.google.com\/\n\n#### Enabling billing for your project.\n\nIf you haven't already enabled billing for your project, [enable\nbilling][enable-billing] now. Enabling billing is required to use Cloud\nBigtable and to create VM instances.\n\n[enable-billing]: https:\/\/console.cloud.google.com\/project\/_\/settings\n\n#### Install the Google Cloud SDK.\n\nIf you haven't already installed the Google Cloud SDK, [install the Google\nCloud SDK][cloud-sdk] now. The SDK contains tools and libraries that enable you\nto create and manage resources on Google Cloud Platform.\n\n[cloud-sdk]: https:\/\/cloud.google.com\/sdk\/\n\n#### Setting Google Application Default Credentials\n\nSet your [Google Application Default\nCredentials][application-default-credentials] by [initializing the Google Cloud\nSDK][cloud-sdk-init] with the command:\n\n```\n   gcloud init\n```\n\nGenerate a credentials file by running the\n[application-default login](https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/auth\/application-default\/login)\ncommand:\n\n```\n    gcloud auth application-default login\n```\n\n[cloud-sdk-init]: https:\/\/cloud.google.com\/sdk\/docs\/initializing\n\n[application-default-credentials]: https:\/\/developers.google.com\/identity\/protocols\/application-default-credentials\n\n#### Create a Knowledge Base\n\nAgent Assist follows a conversation between a human agent and an end-user and provides the human agent with relevant document suggestions. These suggestions are based on knowledge bases, namely, collections of documents that you upload to Agent Assist. 
These documents are called knowledge documents and can be either articles (for use with Article Suggestion) or FAQ documents (for use with FAQ Assist).\n\nIn this specific implementation, a CSV sheet with FAQs will be used as the knowledge document.\n> [FAQ CSV file](.\/assets\/FAQ.csv)\n> [Create a Knowledge Base](https:\/\/cloud.google.com\/agent-assist\/docs\/knowledge-base)\n\n#### Create a Conversation Profile\n\nA conversation profile configures a set of parameters that control the suggestions made to an agent.\n\n> [Create\/Edit an Agent Assist Conversation Profile](https:\/\/cloud.google.com\/agent-assist\/docs\/conversation-profile#create_and_edit_a_conversation_profile)\n\nWhile creating the conversation profile, check the FAQs box. In the \"Knowledge bases\" input box, select the recently created Knowledge Base. The other values in the section should be left at their defaults.\n\nOnce the conversation profile is created, you can find the CONVERSATION_PROFILE_ID (Integration ID) as follows:\n\n> Open [Agent Assist](https:\/\/agentassist.cloud.google.com\/), then Conversation\n> Profiles on the bottom left\n\n### Usage Pre-requisites\n\n- FAQ Suggestions should be enabled in the Agent Assist Conversation Profile\n- Agent Assist will only give you suggestions for conversations with Human Agents. It will not\n  give suggestions if the conversation is being guided by virtual agents.\n\n\n### Local Development Set Up\n\nThis application is designed to run on port 8080. Upon launch, the\napplication will initialize and bind to port 8080, making it accessible for\nincoming connections. This can be changed in the .env file.\n\n#### Protocol Buffer Compiler:\n\nThis implementation leverages Protocol Buffer compilers for service definitions and data serialization. In this case, protoc is used to compile Five9's protofile.\n\n ```\nNOTE: Five9's Voicestream protofile has already been compiled, therefore this step can be skipped. 
But if an update of the protofile is needed, please follow these steps to generate the required Python files.\n ```\n\n> [Protocol Buffer Compiler Installation](https:\/\/grpc.io\/docs\/protoc-installation\/)\n> [Five9's Voicestream protofile](.\/proto\/voice.proto)\n\nTo compile the protofile:\n> Open a terminal window\n> Go to the root where your proto folder is\n> Run the following command:\n   ```\n   python3 -m grpc_tools.protoc -I proto --python_out=proto --grpc_python_out=proto proto\/voice.proto\n   ```\n> Two Python files will be generated inside the proto folder.\n   > [voice_pb2_grpc.py](.\/proto\/voice_pb2_grpc.py)\n   > [voice_pb2.py](.\/proto\/voice_pb2.py)\n\n\n#### Set of variables:\n\nThe following variables need to be set up in the .env file inside the root folder:\n\n```\nSERVER_ADDRESS : \n   Target server address\n\nPORT :\n   Connection Port\n   \nPROJECT_ID :\n   GCP Project ID where the Agent Assist Conversation Profile is deployed.\n\nCONVERSATION_PROFILE_ID : \n   Agent Assist Conversation Profile ID\n\nCHUNK_SIZE :\n   Number of bytes of audio to be sent each time\n\nRESTART_TIMEOUT :\n   Timeout of one stream\n\nMAX_LOOKBACK : \n   Lookback for unprocessed audio data\n   \n```\n\n### Steps to follow\n\n## Start gRPC Server\n\nStart the gRPC Server controller. This will start a server on port 8080, where the voicestream client will send the data.\n\n> [Server Controller](.\/server\/server.py)\n\nInside the server folder, run the following command:\n\n```\npython server.py\n```\n\n## Start gRPC Client\n\nAccording to Five9's Self Service Developer Guide:\n\n```\nVoiceStream does not support multi-channel streaming. VoiceStream\ntransmits each distinct audio stream over a separate gRPC session: one\nfor audio from the agent, and one for audio to the agent.\n```\n\nIn order to simulate this behaviour in our local environment, the same script should be run twice simultaneously. 
One instance sends the customer audio (END_USER) and the other sends the agent audio (HUMAN_AGENT).\n\n> [Five9 Voicestream Client](.\/client\/client_voicestream.py)\n\nInside the client folder, run the following command to send the human agent audio:\n\n```\npython client_voicestream.py --role=HUMAN_AGENT --call_id=<CALL_ID>\n```\nIn another terminal, run the following command to send the customer audio:\n```\npython client_voicestream.py --role=END_USER --call_id=<CALL_ID>\n```\n\nIn order for both streams to be associated with the same conversation, it is fundamental to specify a destination CONVERSATION_ID. For this to happen, the CALL_ID specified in the initial configuration sent by Five9 is passed to Agent Assist as the internal CONVERSATION_ID. In this implementation, we are manually defining this CALL_ID for testing purposes.\n\n\n# References\n1. [Agent Assist Documentation](https:\/\/cloud.google.com\/agent-assist\/docs)\n2. [Dialogflow](https:\/\/cloud.google.com\/dialogflow\/docs)\n3. [Five9 VoiceStream](https:\/\/www.five9.com\/news\/news-releases\/five9-announces-five9-voicestream)\n4. [Five9 VoiceStream Release Notes](https:\/\/releasenotes.five9.com\/space\/RNA\/23143057870\/VoiceStream)\n","site":"GCP"}
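The Five9 README above launches the same client script twice, once per audio direction, against settings taken from the `.env` file. A minimal Python sketch of how those pieces fit together (the helper names `load_settings` and `client_commands` are hypothetical, not part of the PoC):

```python
import os

# Settings the README's "Set of variables" section expects in .env
# (variable names come from the doc; this loader is a sketch).
REQUIRED = ["SERVER_ADDRESS", "PORT", "PROJECT_ID", "CONVERSATION_PROFILE_ID"]

def load_settings(env):
    """Fail fast if any required .env setting is missing."""
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise RuntimeError(f"missing .env settings: {', '.join(missing)}")
    return {k: env[k] for k in REQUIRED}

def client_commands(call_id):
    # VoiceStream carries agent and caller audio over separate gRPC
    # sessions, so the PoC runs the client once per role, sharing one
    # CALL_ID so Agent Assist joins both streams into one conversation.
    return [
        f"python client_voicestream.py --role={role} --call_id={call_id}"
        for role in ("HUMAN_AGENT", "END_USER")
    ]

if __name__ == "__main__":
    print("\n".join(client_commands("test-call-1")))
```

Running both printed commands in separate terminals reproduces the two-terminal setup the README describes.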
{"questions":"GCP export BUCKET YOURBUCKET Before launching training job please copy the raw data and define the environmental variables the bucket for staging and the bucket where you are going to store data as well as training job s outputs TensorFlow Profiling Examples gsutil m cp gs cloud training demos babyweight preproc gs BUCKET babyweight preproc Profiler hooks You also need to have installed export BUCKETSTAGING YOURSTAGINGBUCKET The code below is based on this you can find more","answers":"# TensorFlow Profiling Examples\n\nBefore launching the training job, please copy the raw data and define the environment variables (the bucket for staging and the bucket where you are going to store data as well as the training job's outputs):\n`export BUCKET=YOUR_BUCKET\ngsutil -m cp gs:\/\/cloud-training-demos\/babyweight\/preproc\/* gs:\/\/$BUCKET\/babyweight\/preproc\/\nexport BUCKET_STAGING=YOUR_STAGING_BUCKET`\nYou also need to have [bazel](https:\/\/docs.bazel.build\/versions\/master\/install.html) installed.\nThe code below is based on this [codelab](https:\/\/codelabs.developers.google.com\/codelabs\/scd-babyweight2\/index.html?index=..%2F..%2Fcloud-quest-scientific-data#0) (you can find more [here](https:\/\/github.com\/GoogleCloudPlatform\/training-data-analyst\/tree\/master\/blogs\/babyweight)).\n\n## Profiler hooks\nYou can dump profiles for every *n*-th step. We are going to demonstrate how to collect dumps both in distributed mode and when training is done on a single machine (including training with a GPU accelerator).\nAfter you've trained a model (see examples below), you need to copy the dumps locally in order to inspect them. 
You can do it as follows:\n```shell\nrm -rf \/tmp\/profiler\nmkdir -p \/tmp\/profiler\ngsutil -m cp -r $OUTDIR\/timeline*.json \/tmp\/profiler\n```\nAnd now you can launch a trace event profiling tool in your Chrome browser (chrome:\/\/tracing), load a specific timeline and visually inspect it.\n\n### Training on a single CPU machine: BASIC CMLE tier\nYou can launch the job as follows:\n```shell\nOUTDIR=gs:\/\/$BUCKET\/babyweight\/hooks_basic\nJOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)\ngsutil -m rm -rf $OUTDIR\ngcloud ml-engine jobs submit training $JOBNAME \\\n  --region=us-west1 \\\n  --module-name=trainer-hooks.task \\\n  --package-path=trainer-hooks \\\n  --job-dir=$OUTDIR \\\n  --staging-bucket=gs:\/\/$BUCKET_STAGING \\\n  --scale-tier=BASIC \\\n  --runtime-version=\"1.10\" \\\n  -- \\\n  --bucket=$BUCKET\/babyweight \\\n  --output_dir=${OUTDIR} \\\n  --eval_int=1200 \\\n  --train_steps=50000\n```\n\n### Distributed training on CPUs: STANDARD tier\nYou can launch the job as follows:\n```shell\nOUTDIR=gs:\/\/$BUCKET\/babyweight\/hooks_standard\nJOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)\ngsutil -m rm -rf $OUTDIR\ngcloud ml-engine jobs submit training $JOBNAME \\\n  --region=us-west1 \\\n  --module-name=trainer-hooks.task \\\n  --package-path=trainer-hooks \\\n  --job-dir=$OUTDIR \\\n  --staging-bucket=gs:\/\/$BUCKET_STAGING \\\n  --scale-tier=STANDARD_1 \\\n  --runtime-version=\"1.10\" \\\n  -- \\\n  --bucket=$BUCKET\/babyweight \\\n  --output_dir=${OUTDIR} \\\n  --train_steps=50000\n```\n\n### Training on GPU:\n```shell\nOUTDIR=gs:\/\/$BUCKET\/babyweight\/hooks_gpu\nJOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)\ngsutil -m rm -rf $OUTDIR\ngcloud ml-engine jobs submit training $JOBNAME \\\n  --region=us-west1 \\\n  --module-name=trainer-hooks.task \\\n  --package-path=trainer-hooks \\\n  --job-dir=$OUTDIR \\\n  --staging-bucket=gs:\/\/$BUCKET_STAGING \\\n  --scale-tier=BASIC_GPU \\\n  --runtime-version=\"1.10\" \\\n  -- \\\n  --bucket=$BUCKET\/babyweight 
\\\n  --output_dir=${OUTDIR} \\\n  --batch_size=8192 \\\n  --train_steps=21000\n```\n\n### Defining your own schedule\n```shell\nOUTDIR=gs:\/\/$BUCKET\/babyweight\/hooks_basic-ext\nJOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)\ngsutil -m rm -rf $OUTDIR\ngcloud ml-engine jobs submit training $JOBNAME \\\n  --region=us-west1 \\\n  --module-name=trainer-hooks-ext.task \\\n  --package-path=trainer-hooks-ext \\\n  --job-dir=$OUTDIR \\\n  --staging-bucket=gs:\/\/$BUCKET_STAGING \\\n  --scale-tier=BASIC \\\n  --runtime-version=\"1.10\" \\\n  -- \\\n  --bucket=$BUCKET\/babyweight \\\n  --output_dir=${OUTDIR} \\\n  --eval_int=1200 \\\n  --train_steps=15000\n```\n\n## Deep profiling\nWe can collect a deep profiling dump that can later be analyzed with a profiling CLI tool or with profiler-ui as described in a post (ADD LINK HERE).\nLaunch the training job as follows:\n```shell\nOUTDIR=gs:\/\/$BUCKET\/babyweight\/profiler_standard\nJOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)\ngsutil -m rm -rf $OUTDIR\ngcloud ml-engine jobs submit training $JOBNAME \\\n  --region=us-west1 \\\n  --module-name=trainer-deep-profiling.task \\\n  --package-path=trainer-deep-profiling \\\n  --job-dir=$OUTDIR \\\n  --staging-bucket=gs:\/\/$BUCKET_STAGING \\\n  --scale-tier=STANDARD_1 \\\n  --runtime-version=\"1.10\" \\\n  -- \\\n  --bucket=$BUCKET\/babyweight \\\n  --output_dir=${OUTDIR} \\\n  --train_steps=100000\n```\n\n### Profiler CLI\n1. In order to use the [profiler CLI](https:\/\/github.com\/tensorflow\/tensorflow\/blob\/9590c4c32dd4346ea5c35673336f5912c6072bf2\/tensorflow\/core\/profiler\/README.md), you need to build the profiler first:\n`git clone https:\/\/github.com\/tensorflow\/tensorflow.git\ncd tensorflow\nbazel build -c opt tensorflow\/core\/profiler:profiler`\n2. Copy dumps locally:\n`rm -rf \/tmp\/profiler\nmkdir -p \/tmp\/profiler\ngsutil -m cp -r $OUTDIR\/profiler \/tmp`\n3. 
Launch the profiler with `bazel-bin\/tensorflow\/core\/profiler\/profiler --profile_path=\/tmp\/profiler\/$(ls \/tmp\/profiler\/ | head -1)`\n\n### Profiler UI\nYou can also use [profiler-ui](https:\/\/github.com\/tensorflow\/profiler-ui), i.e. a web interface for the TensorFlow profiler.\n1. If you'd like to install [pprof](https:\/\/github.com\/google\/pprof), please follow the [installation instructions](https:\/\/github.com\/google\/pprof#building-pprof).\n2. Clone the repository:\n`git clone https:\/\/github.com\/tensorflow\/profiler-ui.git\ncd profiler-ui`\n3. Copy dumps locally:\n`rm -rf \/tmp\/profiler\nmkdir -p \/tmp\/profiler\ngsutil -m cp -r $OUTDIR\/$MODEL\/profiler \/tmp`\n4. Launch the profiler with `python ui.py --profile_context_path=\/tmp\/profiler\/$(ls \/tmp\/profiler\/ | head -1)`","site":"GCP"}
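The job-submission examples in this record differ only in the output suffix, scale tier, trainer package, and trailing trainer flags. As a sketch, a small Python helper (hypothetical, not part of the original doc) can assemble the shared `gcloud ml-engine jobs submit training` invocation; the flag names mirror the shell blocks above:

```python
import datetime

def submit_command(bucket, staging_bucket, suffix, scale_tier,
                   package="trainer-hooks", extra_args=()):
    """Build the gcloud job-submission argv used throughout this doc."""
    outdir = f"gs://{bucket}/babyweight/{suffix}"
    # Same timestamped job name as the shell examples:
    # babyweight_$(date -u +%y%m%d_%H%M%S)
    jobname = "babyweight_" + datetime.datetime.utcnow().strftime("%y%m%d_%H%M%S")
    return [
        "gcloud", "ml-engine", "jobs", "submit", "training", jobname,
        "--region=us-west1",
        f"--module-name={package}.task",
        f"--package-path={package}",
        f"--job-dir={outdir}",
        f"--staging-bucket=gs://{staging_bucket}",
        f"--scale-tier={scale_tier}",
        "--runtime-version=1.10",
        "--",  # everything after this separator goes to the trainer itself
        f"--bucket={bucket}/babyweight",
        f"--output_dir={outdir}",
        *extra_args,
    ]

if __name__ == "__main__":
    cmd = submit_command("my-bucket", "my-staging", "hooks_basic", "BASIC",
                         extra_args=("--eval_int=1200", "--train_steps=50000"))
    print(" \\\n  ".join(cmd))
```

Each variant above then reduces to one call, e.g. `submit_command(..., "hooks_gpu", "BASIC_GPU", extra_args=("--batch_size=8192", "--train_steps=21000"))`.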
{"questions":"GCP Google Cloud Platform Estimated cost to run this tutorial is 9 per day Prerequisites This tutorial uses Anthos which is on the Google Cloud Platform GCP If you don t have an account you can for 300 in free credits After signing up a GCP project and to a billing account","answers":"# Prerequisites\n\n## Google Cloud Platform\n\nThis tutorial uses Anthos, which runs on the Google Cloud Platform (GCP). If you don\u2019t have an account, you can [sign up](https:\/\/cloud.google.com\/free\/) for $300 in free credits.\n\nThe estimated cost to run this tutorial is $9 per day.\n\nAfter signing up, [create](https:\/\/cloud.google.com\/resource-manager\/docs\/creating-managing-projects) a GCP project and [attach](https:\/\/cloud.google.com\/billing\/docs\/how-to\/modify-project) it to a billing account.\n\n\n## GitLab\n\nThis tutorial uses GitLab as the code repository and for continuous integration & continuous delivery.\n\n[Sign up](https:\/\/gitlab.com\/users\/sign_up) to create a GitLab account if you don\u2019t already have one.\n\nTo grant your laptop access to make changes as this account, generate and add SSH keys to your account.\n\nGenerate SSH keys:\n\n\n```bash\nssh-keygen\n```\n\n\nIt is recommended to add a passphrase when generating new SSH keys, but for this demo you can proceed without setting one.\n\nCopy the content of `id_rsa.pub`, paste it in the [key textbox](https:\/\/gitlab.com\/-\/profile\/keys), and add the key.\n\nCreate a [new public group](https:\/\/gitlab.com\/groups\/new) and give it a name of your choice (e.g. anthos-demo). When creating a group, GitLab assigns you a URI. Take note of the URI. 
All repositories\/projects created during this tutorial will be under this group.\n\n\n## Install and Initialize the Google Cloud SDK\n\nFollow the Google Cloud SDK [documentation](https:\/\/cloud.google.com\/sdk\/docs\/install) to install and configure the `gcloud` command line utility.\n\nAt the time this doc was created, some tools used in this tutorial are still in beta. To be able to utilize them, install `gcloud beta`\n\n\n```bash\ngcloud components install beta\n```\n\n\nInitialize gcloud config with the project id:\n\n\n```bash\ngcloud init \n```\n\n\nThen be sure to authorize `gcloud` to access the Cloud Platform with your Google user credentials:\n\n\n```bash\ngcloud auth login\n```\n\n\n## Install Kubectl\n\nThe `kubectl` command line utility is used to interact with the Kubernetes API Server. \n\nDownload and install `kubectl` using `gcloud`:\n\n\n```bash\ngcloud components install kubectl\n```\n\n\nVerify installation:\n\n\n```bash\nkubectl version --client\n```\n\n\n\n## Install nomos\n\nThe nomos command is an optional command-line tool used to interact with the Config sync operator. 
\n\nDownload nomos:\n\n\n```bash\ngcloud components install nomos\n```\n\n\nmacOS and Linux clients should run this extra command to configure the binary to be executable.\n\n\n```bash\nchmod +x <\/path\/to\/nomos>\n```\n\n\nMove the command to a location your system searches for binaries, such as `\/usr\/local\/bin`, or you can run the command by using its fully-qualified path.\n\n**Environment variables**\n\nThe following environment variables are required for some of the commands in this guide:\n\n\n```bash\nexport PROJECT_ID=\"<project id you created earlier>\"\nexport PROJECT_NUMBER=\"$(gcloud projects describe \"${PROJECT_ID}\" \\\n--format='value(projectNumber)')\"\nexport USER=\"<user email you are using>\"\nexport GROUP_NAME=\"<your gitlab group name>\" # if your group name has space(s), replace with `-`\nexport GROUP_URI=\"<your gitlab group URI>\"\nexport REGION=\"us-central1\"\nexport ZONE=\"us-central1-c\"\n```\n\n\n**APIs**\n\nEnable all the APIs that are required for this guide:\n\n```bash\ngcloud services enable \\\n   anthos.googleapis.com \\\n   anthosgke.googleapis.com \\\n   anthosaudit.googleapis.com \\\n   binaryauthorization.googleapis.com \\\n   cloudbuild.googleapis.com \\\n   containerscanning.googleapis.com \\\n   cloudresourcemanager.googleapis.com \\\n   container.googleapis.com \\\n   gkeconnect.googleapis.com \\\n   gkehub.googleapis.com \\\n   serviceusage.googleapis.com \\\n   stackdriver.googleapis.com \\\n   monitoring.googleapis.com \\\n   logging.googleapis.com\n```\n\nNext: [Register GKE Clusters with Anthos](2-register-gke-clusters-with-anthos.md)","site":"GCP"}
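The environment-variable block above notes that a `GROUP_NAME` containing spaces should have them replaced with `-`. A minimal sketch of that substitution, assuming a hypothetical helper name `slugify` that is not part of the tutorial:

```shell
# Hypothetical helper: turn a human-readable GitLab group name into the
# hyphenated form the guide expects for GROUP_NAME.
slugify() {
  # tr maps every space character to a hyphen; works in any POSIX shell
  printf '%s\n' "$1" | tr ' ' '-'
}

slugify "anthos demo"   # prints: anthos-demo
```

Exporting `GROUP_NAME="$(slugify "My Anthos Group")"` keeps the later commands that embed `$GROUP_NAME` in local paths and repo URLs free of spaces.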
{"questions":"GCP ACM is a key component of Anthos that lets you define and enforce configs including custom policies and apply it across all your infrastructure both on premises and in the cloud Set up Anthos Config Management ACM Before setting up ACM first enable the ACM feature With ACM you can set configs and policies in one repo Typically the security or operator team manages this repo Using the repo model lets developers focus on app development repo s while the security operators focus on infrastructure As long as clusters are in sync with the ACM repo changes can t be made on policies config managed by ACM except using the repo This allows enterprises to take all the advantages that come with a version control system while creating and modifying configs","answers":"\n# Set up Anthos Config Management (ACM)\n\n[Anthos Config Management](https:\/\/cloud.google.com\/anthos\/config-management) (ACM) is a key component of Anthos that lets you define and enforce configs, including custom policies, and apply it across all your infrastructure both on-premises and in the cloud.\n\nWith ACM, you can set configs and policies in one repo. Typically the security or operator team manages this repo. Using the repo model lets developers focus on app development repo(s) while the security\/operators focus on infrastructure. As long as clusters are in sync with the ACM repo, changes can\u2019t be made on policies\/config managed by ACM except using the repo. This allows enterprises to take all the advantages that come with a version control system while creating and modifying configs. [Learn more](https:\/\/cloud.google.com\/anthos-config-management\/docs\/concepts).\n\nBefore setting up ACM, first enable the ACM feature:\n\n\n```bash\ngcloud alpha container hub config-management enable\n```\n\n\n\n## Create GitLab repo\/project\n\nClick on the [GitLab $GROUP_NAME group](https:\/\/gitlab.com\/dashboard\/groups) you created earlier and create a new blank project called ACM. 
Make it public: \n\n![alt_text](images\/create-blank-project.png \"Create blank project\")\n\n\nClone the repo locally and use nomos to create the [repo structure](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/add-on\/config-sync\/concepts\/repo) that allows [Config Sync](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/add-on\/config-sync\/overview) to read from the repo:\n\n\n```bash\ncd ~\nmkdir $GROUP_NAME\ncd $GROUP_NAME\/\ngit clone git@gitlab.com:$GROUP_NAME\/acm.git\ncd acm\/\ngit switch -c main\nnomos init\n```\n\n\nTo know more about the functions of the different directories created by `nomos init `you can read the [repo structure](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/add-on\/config-sync\/concepts\/repo) documentation.\n\n\n## Deploy config sync operator to clusters\n\nThe [Config Sync Operator](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/add-on\/config-sync\/how-to\/installing#git-creds-secret) is a controller that manages Config Sync in a Kubernetes cluster.\n\n[Download](https:\/\/cloud.google.com\/anthos-config-management\/downloads) the operator and deploy to your clusters:\n\n\n```bash\ncd ~\/$GROUP_NAME\/acm\nmkdir setup\ncd setup\/\ngsutil cp gs:\/\/config-management-release\/released\/latest\/config-management-operator.yaml config-management-operator.yaml\n\nfor i in \"dev\" \"prod\"; do\n  gcloud container clusters get-credentials ${i} --zone=$ZONE\n  kubectl apply -f config-management-operator.yaml\ndone\n```\n\n\n\n## Configure operator\n\nTo configure the behavior of Config sync, create 2 configuration files `config-management-dev.yaml `for the dev cluster and `config-management-prod.yaml` for the prod cluster:\n\n\n```bash\ncat > config-management-dev.yaml << EOF\napiVersion: configmanagement.gke.io\/v1\nkind: ConfigManagement\nmetadata:\n  name: config-management\nspec:\n  clusterName: dev\n  git:\n    syncRepo: https:\/\/gitlab.com\/$GROUP_URI\/acm.git\n    syncBranch: dev\n    secretType: none\n  
policyController:\n    enabled: true\nEOF\n\ncat > config-management-prod.yaml << EOF\napiVersion: configmanagement.gke.io\/v1\nkind: ConfigManagement\nmetadata:\n  name: config-management\nspec:\n  clusterName: prod\n  git:\n    syncRepo: https:\/\/gitlab.com\/$GROUP_URI\/acm.git\n    syncBranch: main\n    secretType: none\n  policyController:\n    enabled: true\nEOF\n```\n\n\nNote: Notice the syncBranch values. secretType is set to none because the repo is public. If repo isn\u2019t public grant operator access by following these [steps](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/add-on\/config-sync\/how-to\/installing#git-creds-secret).\n\n\n## ClusterSelectors and Namespaces\n\n[ClusterSelectors](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/add-on\/config-sync\/how-to\/clusterselectors) and [namespaces](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/add-on\/config-sync\/how-to\/namespace-scoped-objects) are ways of grouping to apply configs on a subset of our infrastructure. 
The ACM repo is where all clusterselectors and namespaces are defined and applied\n\nNavigate to clusterregistry\/ in acm repo:\n\n\n```bash\ncd ~\/$GROUP_NAME\/acm\/clusterregistry\n```\n\n\nCreate prod and dev cluster selectors:\n\n\n```bash\ncat > dev-cluster-selector.yaml << EOF\nkind: Cluster\napiVersion: clusterregistry.k8s.io\/v1alpha1\nmetadata:\n name: dev\n labels:\n   environment: dev\n---\nkind: ClusterSelector\napiVersion: configmanagement.gke.io\/v1\nmetadata:\n name: dev-cluster-selector\nspec:\n selector:\n   matchLabels:\n     environment: dev\n\nEOF\n\ncat > prod-cluster-selector.yaml << EOF\nkind: Cluster\napiVersion: clusterregistry.k8s.io\/v1alpha1\nmetadata:\n name: prod\n labels:\n   environment: prod\n---\nkind: ClusterSelector\napiVersion: configmanagement.gke.io\/v1\nmetadata:\n name: prod-cluster-selector\nspec:\n selector:\n   matchLabels:\n     environment: prod\nEOF\n```\n\n\nCreate 3 namespaces dev, stage and prod and set up so that dev and stage namespaces deploy to the dev cluster and prod deploys to prod cluster:\n\n\n```bash\ncd ~\/$GROUP_NAME\/acm\/namespaces\nmkdir dev\ncd dev\ncat > namespace.yaml << EOF\napiVersion: v1\nkind: Namespace\nmetadata:\n name: dev\n annotations:\n   configmanagement.gke.io\/cluster-selector: dev-cluster-selector\nEOF\n\ncd ..\nmkdir stage\ncd stage\ncat > namespace.yaml << EOF\napiVersion: v1\nkind: Namespace\nmetadata:\n name: stage\n annotations:\n   configmanagement.gke.io\/cluster-selector: dev-cluster-selector\nEOF\n\ncd ..\nmkdir prod\ncd prod\ncat > namespace.yaml << EOF\napiVersion: v1\nkind: Namespace\nmetadata:\n name: prod\n annotations:\n   configmanagement.gke.io\/cluster-selector: prod-cluster-selector\nEOF\n```\n\n\n\n## Policy constraint\n\nLastly, we\u2019ll take advantage of the policy control feature of ACM and create a policy. 
[Policy controller](https:\/\/cloud.google.com\/anthos-config-management\/docs\/concepts\/policy-controller) is used to check, audit and enforce your clusters\u2019 compliance with policies that may be related to security, regulations or arbitrary business rules. \n\nIn this tutorial we\u2019ll create a no privileged container constraint and apply it to the prod namespace. Notice we set policycontroller to true in `config-management-prod.yaml `earlier. This will enable this policy to be enforced in our clusters.\n\nCreate `constraint-restrict-privileged-container.yaml` in `cluster\/` :\n\n\n```bash\ncd ~\/$GROUP_NAME\/acm\/cluster\n\ncat > constraint-restrict-privileged-container.yaml << EOF\napiVersion: templates.gatekeeper.sh\/v1beta1\nkind: ConstraintTemplate\nmetadata:\n name: noprivilegedcontainer\n annotations:\n    configmanagement.gke.io\/cluster-selector: prod-cluster-selector\nspec:\n crd:\n   spec:\n     names:\n       kind: NoPrivilegedContainer\n targets:\n   - target: admission.k8s.gatekeeper.sh\n     rego: |\n       package noprivileged\n       violation[{\"msg\": msg, \"details\": {}}] {\n           c := input_containers[_]\n           c.securityContext.privileged\n           msg := sprintf(\"Privileged container is not allowed: %v, securityContext: %v\", [c.name, c.securityContext])\n       }\n       input_containers[c] {\n           c := input.review.object.spec.containers[_]\n       }\n       input_containers[c] {\n           c := input.review.object.spec.initContainers[_]\n       }\n       input_containers[c] {\n           c := input.review.object.spec.template.spec.containers[_]\n       }\n       input_containers[c] {\n           c := input.review.object.spec.template.spec.initContainers[_]\n       }\n---\napiVersion: constraints.gatekeeper.sh\/v1beta1\nkind: NoPrivilegedContainer\nmetadata:\n name: no-privileged-container\n annotations:\n    configmanagement.gke.io\/cluster-selector: prod-cluster-selector\nspec:\n enforcementAction: dryrun\n 
match:\n   kinds:\n     - apiGroups: [\"*\"]\n       kinds: [\"Deployment\", \"Pod\"]\nEOF\n```\n\n\n\n## Push changes to remote\n\nNow that we have made these changes locally, let\u2019s push them to the remote repo so the clusters can read from it and configure themselves:\n\n\n```bash\ngit add -A\ngit commit -m \"Set up ACM\"\ngit push -u origin main\n```\n\n\nSince the dev cluster is configured to sync with the dev branch, create a new branch called `dev` from the `main` branch by clicking the \u201c+\u201d sign in the repo page.\n\nThen fetch the new branch locally and switch to it:\n\n\n```bash\ngit fetch\ngit checkout dev\n```\n\n\nApply the configuration:\n\n\n```bash\ncd ~\/$GROUP_NAME\/acm\/setup\n\nfor i in \"dev\" \"prod\"; do\n   gcloud container clusters get-credentials ${i} \\\n       --zone=$ZONE\n   kubectl apply -f config-management-${i}.yaml\ndone\n```\n\nVerify the configuration for both dev and prod clusters:\n\n\n```bash\nfor i in \"dev\" \"prod\"; do\n   gcloud container clusters get-credentials ${i} --zone=$ZONE\n   kubectl -n kube-system get pods | grep config-management\ndone\n```\n\n\nYou should see a response like the one below for both of your clusters:\n\n\n```bash\nconfig-management-operator-59455ffc4-c6nvp          1\/1       Running         4m52s\n```\n\n\nConfirm your clusters are synced from the [console](https:\/\/console.cloud.google.com\/anthos\/config_management) or run:\n\n\n```bash\nnomos status\n```\n\n\nA status of `Pending` or `Synced` means your installation is working.\n\n\nNext: [CICD with Anthos and Gitlab](4-cicd-with-anthos-and-gitlab.md)","site":"GCP"}
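In the ACM setup above, each namespace's placement is driven entirely by its `configmanagement.gke.io/cluster-selector` annotation, so a namespace can silently be mapped to the wrong cluster before a push. A small illustrative pre-push check (the `list_selectors` helper is my own sketch, not part of ACM or the tutorial) lists each namespace under an ACM-style `namespaces/` tree with its selector:

```shell
# Hypothetical pre-push check: print "<namespace> -> <cluster-selector>" for
# every namespaces/<name>/namespace.yaml in an ACM-style repo, so the
# namespace-to-cluster mapping can be reviewed before `git push`.
list_selectors() {
  root="$1"
  for f in "$root"/*/namespace.yaml; do
    ns=$(basename "$(dirname "$f")")
    # pull the value that follows the cluster-selector annotation key
    sel=$(grep -o 'cluster-selector: .*' "$f" | awk '{print $2}')
    printf '%s -> %s\n' "$ns" "$sel"
  done
}
```

Run against `~/$GROUP_NAME/acm/namespaces`, the tutorial's layout should show `dev` and `stage` mapped to `dev-cluster-selector` and `prod` to `prod-cluster-selector`; `nomos vet` remains the authoritative validation of the repo.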
{"questions":"GCP In this section we ll automate a CI CD pipeline taking advantage of the features from anthos CICD with Anthos Before creating a CICD pipeline we need an application For this tutorial we ll use the popular hello kubernetes application created by paulbower but with a few modifications Download hello kubernetes app Create app","answers":"# CICD with Anthos\n\nIn this section, we\u2019ll automate a CI\/CD pipeline taking advantage of the features from anthos.\n\n\n## Create app\n\nBefore creating a CICD pipeline we need an application. For this tutorial, we\u2019ll use the popular hello kubernetes application created by paulbower but with a few modifications.\n\nDownload hello-kubernetes app:\n\n\n```bash\ncd ~\/$GROUP_NAME\/\ngit clone https:\/\/github.com\/itodotimothy6\/hello-kubernetes.git\ncd hello-kubernetes\/\nrm -rf .git\n```\n\n\nThe hello-kubernetes dir will later be made to a gitlab repo and this is where the developer team will spend most of their time on. In this tutorial, we\u2019ll isolate developer\u2019s work in one repo and security\/platform in a separate repo, that way developers can focus on application logic and other teams focus on what they do best.\n\n\n## Platform admin repo\n\nAs a good practice to keep out non-developer work from the app repo, create a platform admin repo that\u2019ll contain code\/scripts\/commands that need to be run during the CI\/CD process. \n\nAlso [gitlab](https:\/\/docs.gitlab.com\/ee\/ci\/quick_start\/README.html#cicd-process-overview) uses `.gitlab-ci.yml` file to define a cicd pipeline. For a complex pipeline, we can avoid crowding the `.gitlab-ci.yml` file by abstracting some of the code and storing in the platform admin.\n\nCreate platform-admin:\n\n\n```bash\ncd ~\/$GROUP_NAME\/\nmkdir platform-admin\/\n```\n\n\nNow, we\u2019ll create the different stages of the ci-cd process and store in sub-directories in platform-admin\n\n\n## Build\n\nThis is the first stage. 
In this stage, we\u2019ll create a build container job which builds an image using the `hello-kubernetes` Dockerfile and pushes this image to [container registry](https:\/\/cloud.google.com\/container-registry) (gcr.io).  In this tutorial we\u2019ll use a build container tool known as [kaniko](https:\/\/github.com\/GoogleContainerTools\/kaniko#kaniko---build-images-in-kubernetes).\n\nCreate build stage:\n\n\n```bash\ncd platform-admin\/\nmkdir build\/\ncd build\/\ncat > build-container.yaml << EOF\nbuild:\n stage: Build\n tags:\n   - prod\n image:\n   name: gcr.io\/kaniko-project\/executor:debug\n   entrypoint: [\"\"]\n script:\n   - echo \"Building container image and pushing to gcr.io in ${PROJECT_ID}\"\n   - \/kaniko\/executor --context \\$CI_PROJECT_DIR --dockerfile \\$CI_PROJECT_DIR\/Dockerfile --destination \\${HOSTNAME}\/\\${PROJECT_ID}\/\\${CONTAINER_NAME}:\\$CI_COMMIT_SHORT_SHA\nEOF\n```\n\n\n\n## Binary Authorization\n\n[Binary authorization ](https:\/\/cloud.google.com\/binary-authorization)is the process of creating [attestations](https:\/\/cloud.google.com\/binary-authorization\/docs\/key-concepts#attestations) on container images for the purpose of verifying that certain criteria are met before you can deploy the images to GKE. In this guide, we\u2019ll implement binary authorization using Cloud Build and GKE.  
[Learn more](https:\/\/cloud.google.com\/binary-authorization).\n\nEnable binary authorization on your clusters: \n\n\n```bash\nfor i in \"dev\" \"prod\"; do\n   gcloud container clusters update ${i} --enable-binauthz\ndone\n```\n\n\nCreate signing keys and configure attestations for stage and prod pipelines: (Read this [article](https:\/\/cloud.google.com\/solutions\/binary-auth-with-cloud-build-and-gke) to understand step by step what the below set of commands do)\n\n\n```bash\nexport CLOUD_BUILD_SA_EMAIL=\"${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com\"\n\ngcloud projects add-iam-policy-binding \"${PROJECT_ID}\" \\\n --member \"serviceAccount:${CLOUD_BUILD_SA_EMAIL}\" \\\n --role \"roles\/container.developer\"\n\n# Create signing keys\ngcloud kms keyrings create \"binauthz\" \\\n --project \"${PROJECT_ID}\" \\\n --location \"${REGION}\"\n\ngcloud kms keys create \"vulnz-signer\" \\\n --project \"${PROJECT_ID}\" \\\n --location \"${REGION}\" \\\n --keyring \"binauthz\" \\\n --purpose \"asymmetric-signing\" \\\n --default-algorithm \"rsa-sign-pkcs1-4096-sha512\"\n\ngcloud kms keys create \"qa-signer\" \\\n --project \"${PROJECT_ID}\" \\\n --location \"${REGION}\" \\\n --keyring \"binauthz\" \\\n --purpose \"asymmetric-signing\" \\\n --default-algorithm \"rsa-sign-pkcs1-4096-sha512\"\n\ncurl \"https:\/\/containeranalysis.googleapis.com\/v1\/projects\/${PROJECT_ID}\/notes\/?noteId=vulnz-note\" \\\n --request \"POST\" \\\n --header \"Content-Type: application\/json\" \\\n --header \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n --header \"X-Goog-User-Project: ${PROJECT_ID}\" \\\n --data-binary @- <<EOF\n   {\n     \"name\": \"projects\/${PROJECT_ID}\/notes\/vulnz-note\",\n     \"attestation\": {\n       \"hint\": {\n         \"human_readable_name\": \"Vulnerability scan note\"\n       }\n     }\n   }\nEOF\n\ncurl \"https:\/\/containeranalysis.googleapis.com\/v1\/projects\/${PROJECT_ID}\/notes\/vulnz-note:setIamPolicy\" \\\n --request POST \\\n 
--header \"Content-Type: application\/json\" \\\n --header \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n --header \"X-Goog-User-Project: ${PROJECT_ID}\" \\\n --data-binary @- <<EOF\n   {\n     \"resource\": \"projects\/${PROJECT_ID}\/notes\/vulnz-note\",\n     \"policy\": {\n       \"bindings\": [\n         {\n           \"role\": \"roles\/containeranalysis.notes.occurrences.viewer\",\n           \"members\": [\n             \"serviceAccount:${CLOUD_BUILD_SA_EMAIL}\"\n           ]\n         },\n         {\n           \"role\": \"roles\/containeranalysis.notes.attacher\",\n           \"members\": [\n             \"serviceAccount:${CLOUD_BUILD_SA_EMAIL}\"\n           ]\n         }\n       ]\n     }\n   }\nEOF\n\ngcloud container binauthz attestors create \"vulnz-attestor\" \\\n --project \"${PROJECT_ID}\" \\\n --attestation-authority-note-project \"${PROJECT_ID}\" \\\n --attestation-authority-note \"vulnz-note\" \\\n --description \"Vulnerability scan attestor\"\n\ngcloud beta container binauthz attestors public-keys add \\\n --project \"${PROJECT_ID}\" \\\n --attestor \"vulnz-attestor\" \\\n --keyversion \"1\" \\\n --keyversion-key \"vulnz-signer\" \\\n --keyversion-keyring \"binauthz\" \\\n --keyversion-location \"${REGION}\" \\\n --keyversion-project \"${PROJECT_ID}\"\n\ngcloud container binauthz attestors add-iam-policy-binding \"vulnz-attestor\" \\\n --project \"${PROJECT_ID}\" \\\n --member \"serviceAccount:${CLOUD_BUILD_SA_EMAIL}\" \\\n --role \"roles\/binaryauthorization.attestorsViewer\"\n\ngcloud kms keys add-iam-policy-binding \"vulnz-signer\" \\\n --project \"${PROJECT_ID}\" \\\n --location \"${REGION}\" \\\n --keyring \"binauthz\" \\\n --member \"serviceAccount:${CLOUD_BUILD_SA_EMAIL}\" \\\n --role 'roles\/cloudkms.signerVerifier'\n\ncurl \"https:\/\/containeranalysis.googleapis.com\/v1\/projects\/${PROJECT_ID}\/notes\/?noteId=qa-note\" \\\n --request \"POST\" \\\n --header \"Content-Type: application\/json\" \\\n --header 
\"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n --header \"X-Goog-User-Project: ${PROJECT_ID}\" \\\n --data-binary @- <<EOF\n   {\n     \"name\": \"projects\/${PROJECT_ID}\/notes\/qa-note\",\n     \"attestation\": {\n       \"hint\": {\n         \"human_readable_name\": \"QA note\"\n       }\n     }\n   }\nEOF\n\ncurl \"https:\/\/containeranalysis.googleapis.com\/v1\/projects\/${PROJECT_ID}\/notes\/qa-note:setIamPolicy\" \\\n --request POST \\\n --header \"Content-Type: application\/json\" \\\n --header \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n --header \"X-Goog-User-Project: ${PROJECT_ID}\" \\\n --data-binary @- <<EOF\n   {\n     \"resource\": \"projects\/${PROJECT_ID}\/notes\/qa-note\",\n     \"policy\": {\n       \"bindings\": [\n         {\n           \"role\": \"roles\/containeranalysis.notes.occurrences.viewer\",\n           \"members\": [\n             \"serviceAccount:${CLOUD_BUILD_SA_EMAIL}\"\n           ]\n         },\n         {\n           \"role\": \"roles\/containeranalysis.notes.attacher\",\n           \"members\": [\n             \"serviceAccount:${CLOUD_BUILD_SA_EMAIL}\"\n           ]\n         }\n       ]\n     }\n   }\nEOF\n\ngcloud container binauthz attestors create \"qa-attestor\" \\\n --project \"${PROJECT_ID}\" \\\n --attestation-authority-note-project \"${PROJECT_ID}\" \\\n --attestation-authority-note \"qa-note\" \\\n --description \"QA attestor\"\n\ngcloud beta container binauthz attestors public-keys add \\\n --project \"${PROJECT_ID}\" \\\n --attestor \"qa-attestor\" \\\n --keyversion \"1\" \\\n --keyversion-key \"qa-signer\" \\\n --keyversion-keyring \"binauthz\" \\\n --keyversion-location \"${REGION}\" \\\n --keyversion-project \"${PROJECT_ID}\"\n\ngcloud container binauthz attestors add-iam-policy-binding \"qa-attestor\" \\\n --project \"${PROJECT_ID}\" \\\n --member \"serviceAccount:${CLOUD_BUILD_SA_EMAIL}\" \\\n --role \"roles\/binaryauthorization.attestorsViewer\"\n```\n\n\nVulnerability 
scan checking with Cloud Build is needed to verify `hello-kubernetes` container images in the CI\/CD pipeline. Run the following commands to create a `cloudbuild-attestor` image in Container Registry: \n\n\n```bash\n# Give cloudbuild service account the required roles and permissions\ngcloud projects add-iam-policy-binding ${PROJECT_ID} \\\n --member serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \\\n --role roles\/binaryauthorization.attestorsViewer\n\ngcloud projects add-iam-policy-binding ${PROJECT_ID} \\\n --member serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \\\n --role roles\/cloudkms.signerVerifier\n\ngcloud projects add-iam-policy-binding ${PROJECT_ID} \\\n --member serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \\\n --role roles\/containeranalysis.notes.attacher\n\n# Create attestor using cloudbuild\ngit clone https:\/\/github.com\/GoogleCloudPlatform\/gke-binary-auth-tools ~\/$GROUP_NAME\/binauthz-tools\n\ngcloud builds submit \\\n --project \"${PROJECT_ID}\" \\\n --tag \"gcr.io\/${PROJECT_ID}\/cloudbuild-attestor\" \\\n ~\/$GROUP_NAME\/binauthz-tools\n\n# clean up - delete binauthz-tools\nrm -rf ~\/$GROUP_NAME\/binauthz-tools\n```\n\n\nVerify that the `cloudbuild-attestor` image was created by opening `gcr.io\/<project-id>\/cloudbuild-attestor` in your browser.\n\nCreate `binauth.yaml`, which describes the Binary Authorization policy for the project: \n\n\n```bash\ncd ~\/$GROUP_NAME\/platform-admin\/\nmkdir binauth\/\ncd binauth\/\n\ncat > binauth.yaml << EOF\ndefaultAdmissionRule:\n enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG\n evaluationMode: ALWAYS_DENY\nglobalPolicyEvaluationMode: ENABLE\nadmissionWhitelistPatterns:\n# Gitlab runner\n- namePattern: gitlab\/gitlab-runner-helper:x86_64-8fa89735\n- namePattern: gitlab\/gitlab-runner-helper:x86_64-ece86343\n- namePattern: gitlab\/gitlab-runner:alpine-v13.6.0\n- namePattern: 
gcr.io\/abm-test-bed\/gitlab-runner@sha256:8f623d3c55ffc783752d0b34097c5625a32a910a8c1427308f5c39fd9a23a3c0\n# Gitlab runner job containers\n- namePattern: google\/cloud-sdk\n- namePattern: gcr.io\/cloud-builders\/gke-deploy:latest\n- namePattern: gcr.io\/kaniko-project\/*\n- namePattern: gcr.io\/cloud-solutions-images\/kustomize:3.7\n- namePattern: gcr.io\/kpt-functions\/gatekeeper-validate\n- namePattern: gcr.io\/kpt-functions\/read-yaml\n- namePattern: gcr.io\/stackdriver-prometheus\/*\n- namePattern: gcr.io\/$PROJECT_ID\/cloudbuild-attestor\n- namePattern: gcr.io\/config-management-release\/*\nclusterAdmissionRules:\n # Staging\/dev cluster\n $ZONE.dev:\n   evaluationMode: REQUIRE_ATTESTATION\n   enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG\n   requireAttestationsBy:\n   - projects\/$PROJECT_ID\/attestors\/vulnz-attestor\n # Production cluster\n $ZONE.prod:\n   evaluationMode: REQUIRE_ATTESTATION\n   enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG\n   requireAttestationsBy:\n   - projects\/$PROJECT_ID\/attestors\/vulnz-attestor\n   - projects\/$PROJECT_ID\/attestors\/qa-attestor\nEOF\n```\n\n\nUpload the `binauth.yaml` policy to the project:\n\n\n```bash\ngcloud container binauthz policy import .\/binauth.yaml\n```\n\n\nCreate the Verify Image stage definitions:\n\n\n```bash\ncd ~\/$GROUP_NAME\/platform-admin\/\nmkdir vulnerability\/\ncd vulnerability\/\n\ncat > vulnerability-scan-result.yaml << EOF\ncheck-vulnerability-scan-result:\n stage: Verify Image\n tags:\n - prod\n image:\n   name: gcr.io\/\\${PROJECT_ID}\/cloudbuild-attestor\n script:\n   - |\n     \/scripts\/check_vulnerabilities.sh \\\\\n       -p \\${PROJECT_ID} \\\\\n       -i \\${HOSTNAME}\/\\${PROJECT_ID}\/\\${CONTAINER_NAME}:\\${CI_COMMIT_SHORT_SHA} \\\\\n       -t 8\nEOF\n\ncat > vulnerability-scan-verify.yaml << EOF\nattest-vulnerability-scan:\n stage: Verify Image\n tags:\n - prod\n image:\n   name: 'gcr.io\/\\${PROJECT_ID}\/cloudbuild-attestor'\n script:\n   - mkdir images\n   - echo \"\\$(gcloud container images 
describe --format 'value(image_summary.digest)' \\${HOSTNAME}\/\\${PROJECT_ID}\/\\${CONTAINER_NAME}:\\${CI_COMMIT_SHORT_SHA})\" > images\/digest.txt\n   - |\n     FQ_DIGEST=\\$(gcloud container images describe --format 'value(image_summary.fully_qualified_digest)' \\${HOSTNAME}\/\\${PROJECT_ID}\/\\${CONTAINER_NAME}:\\${CI_COMMIT_SHORT_SHA})\n     \/scripts\/create_attestation.sh \\\\\n       -p \"\\$PROJECT_ID\" \\\\\n       -i \"\\$FQ_DIGEST\" \\\\\n       -a \"\\$_VULNZ_ATTESTOR\" \\\\\n       -v \"\\$_VULNZ_KMS_KEY_VERSION\" \\\\\n       -k \"\\$_VULNZ_KMS_KEY\" \\\\\n       -l \"\\$_KMS_LOCATION\" \\\\\n       -r \"\\$_KMS_KEYRING\"\n artifacts:\n   paths:\n     - images\/\nEOF\n```\n\n\n\n## Hydrate manifest using Kustomize\n\nIn this tutorial, we use Kustomize to create a hydrated manifest of our deployment, which will be stored in a repo called `hello-kubernetes-env`.\n\nCreate the shared nodejs Kustomize base in platform-admin:\n\n\n```bash\ncd ~\/$GROUP_NAME\/platform-admin\/\nmkdir -p shared-kustomize-bases\/nodejs\ncd shared-kustomize-bases\/nodejs\n\ncat > deployment.yaml << EOF\nkind: Deployment\napiVersion: apps\/v1\nmetadata:\n  name: nodejs\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: nodejs\n  template:\n    metadata:\n      labels:\n        app: nodejs\n    spec:\n      containers:\n      - name: nodejs\n        image: app\n        ports:\n        - containerPort: 8080\nEOF\n\ncat > kustomization.yaml << EOF\napiVersion: kustomize.config.k8s.io\/v1beta1\nkind: Kustomization\nresources:\n- deployment.yaml\n- service.yaml\nEOF\n\ncat > service.yaml << EOF\nkind: Service\napiVersion: v1\nmetadata:\n  name: nodejs\nspec:\n  type: LoadBalancer\n  selector:\n    app: nodejs\n  ports:\n  - name: http\n    port: 80\n    targetPort: 8080\nEOF\n```\n\n\nTo allow developers to apply patches when deploying, create overlays for dev, stage, and prod in the `hello-kubernetes` repo:\n\n\n```bash\ncd ~\/$GROUP_NAME\/hello-kubernetes\/\nmkdir -p 
kubernetes\/overlays\/dev\ncd kubernetes\/overlays\/dev\/\n\ncat > kustomization.yaml << EOF\napiVersion: kustomize.config.k8s.io\/v1beta1\nkind: Kustomization\nnamespace: dev\nbases:\n - ..\/..\/base\nnamePrefix: dev-\nEOF\n\ncd ..\nmkdir stage\/\ncd stage\/\ncat > kustomization.yaml << EOF\napiVersion: kustomize.config.k8s.io\/v1beta1\nkind: Kustomization\nnamespace: stage\nbases:\n - ..\/..\/base\nnamePrefix: stage-\nEOF\n\ncd ..\nmkdir prod\/\ncd prod\/\ncat > kustomization.yaml << EOF\napiVersion: kustomize.config.k8s.io\/v1beta1\nkind: Kustomization\nnamespace: prod\nbases:\n - ..\/..\/base\nnamePrefix: prod-\nEOF\n```\n\n\nNow that we have the Kustomize base and overlays, we\u2019ll create the Kustomize CI\/CD jobs.\n\nCreate the fetch-base stage:\n\n\n```bash\ncd ~\/$GROUP_NAME\/platform-admin\/\nmkdir kustomize-steps\/\ncd kustomize-steps\/\n\ncat > fetch-base.yaml << EOF\nfetch_kustomize_base:\n stage: Fetch Bases\n image: gcr.io\/cloud-solutions-images\/kustomize:3.7\n tags:\n - prod\n script:\n # Add auth to git urls\n - git config --global url.\"https:\/\/gitlab-ci-token:\\${CI_JOB_TOKEN}@\\${CI_SERVER_HOST}\".insteadOf \"https:\/\/\\${CI_SERVER_HOST}\"\n - mkdir -p kubernetes\/base\/\n\n # Pull the shared Kustomize base from the platform repo\n - echo \\${SSH_KEY} | base64 -d > \/working\/ssh-key\n - chmod 400 \/working\/ssh-key\n - export GIT_SSH_COMMAND=\"ssh -i \/working\/ssh-key -o UserKnownHostsFile=\/dev\/null -o StrictHostKeyChecking=no\"\n - git clone git@\\${CI_SERVER_HOST}:\\${CI_PROJECT_NAMESPACE}\/platform-admin.git -b main\n - cp platform-admin\/shared-kustomize-bases\/nodejs\/* kubernetes\/base\n\n artifacts:\n   paths:\n     - kubernetes\/base\/\nEOF\n```\n\n\nCreate the hydrate dev\/prod manifest stages:\n\n\n```bash\ncat > hydrate-dev.yaml << EOF\nkustomize-dev:\n stage: Hydrate Manifests\n image: gcr.io\/cloud-solutions-images\/kustomize:3.7\n tags:\n   - prod\n except:\n   refs:\n     - main\n script:\n   - DIGEST=\\$(cat images\/digest.txt)\n\n 
  # dev\n   - mkdir -p .\/hydrated-manifests\/\n   - cd \\${KUSTOMIZATION_PATH_DEV}\n   - kustomize edit set image app=\\${HOSTNAME}\/\\${PROJECT_ID}\/\\${CONTAINER_NAME}@\\${DIGEST}\n   - kustomize build . -o ..\/..\/..\/hydrated-manifests\/dev.yaml\n   - cd -\n\n artifacts:\n   paths:\n     - hydrated-manifests\/\nEOF\n\ncat > hydrate-prod.yaml << EOF\nkustomize:\n stage: Hydrate Manifests\n image: gcr.io\/cloud-solutions-images\/kustomize:3.7\n tags:\n   - prod\n only:\n   refs:\n     - main\n script:\n   - DIGEST=\\$(cat images\/digest.txt)\n\n   # build out staging manifests\n   - mkdir -p .\/hydrated-manifests\/\n\n   # stage\n   - cd \\${KUSTOMIZATION_PATH_NON_PROD}\n   - kustomize edit set image app=\\${HOSTNAME}\/\\${PROJECT_ID}\/\\${CONTAINER_NAME}@\\${DIGEST}\n   - kustomize build . -o ..\/..\/..\/hydrated-manifests\/stage.yaml\n   - cd -\n\n   # prod\n   - cd \\${KUSTOMIZATION_PATH_PROD}\n   - kustomize edit set image app=\\${HOSTNAME}\/\\${PROJECT_ID}\/\\${CONTAINER_NAME}@\\${DIGEST}\n   - kustomize build . -o ..\/..\/..\/hydrated-manifests\/production.yaml\n   - cd -\n artifacts:\n   paths:\n     - hydrated-manifests\/\nEOF\n```\n\n\n\n## ACM policy check in CI pipeline\n\nACM, as discussed earlier, is used to ensure consistency in config and automate policy checks. We\u2019ll incorporate ACM into our CI pipeline so that any change that fails a policy check is stopped at the CI stage, before it is ever deployed.\n\nCreate the stage that downloads ACM policies from the acm repo:\n\n\n```bash\ncd ~\/$GROUP_NAME\/platform-admin\/\nmkdir acm\/\ncd acm\/\n\ncat > download-policies.yaml << EOF\ndownload-acm-policies:\n stage: Download ACM Policy\n image: gcr.io\/cloud-solutions-images\/kustomize:3.7\n tags:\n   - prod\n script:\n # Note: Having SSH_KEY in GitLab is only for demo purposes. 
You should\n # consider saving the key as a secret in the k8s cluster and having the secret\n # mounted as a file inside the container instead.\n - echo \\${SSH_KEY} | base64 -d > \/working\/ssh-key\n - chmod 400 \/working\/ssh-key\n - export GIT_SSH_COMMAND=\"ssh -i \/working\/ssh-key -o UserKnownHostsFile=\/dev\/null -o StrictHostKeyChecking=no\"\n - git clone git@\\${CI_SERVER_HOST}:\\${CI_PROJECT_NAMESPACE}\/acm.git -b main\n - cp acm\/policies\/cluster\/constraint* hydrated-manifests\/.\n artifacts:\n   paths:\n     - hydrated-manifests\/\nEOF\n```\n\n\nCreate the stage that reads the ACM YAML:\n\n\n```bash\ncd ~\/$GROUP_NAME\/platform-admin\/acm\/\n\ncat > read-acm.yaml << EOF\nread-yaml:\n stage: Read ACM YAML\n image:\n   name: gcr.io\/kpt-functions\/read-yaml\n   entrypoint: [\"\/bin\/sh\", \"-c\"]\n tags:\n   - prod\n script:\n - mkdir stage && cp hydrated-manifests\/stage.yaml stage && cp hydrated-manifests\/constraint* stage\n - mkdir prod && cp hydrated-manifests\/production.yaml prod && cp hydrated-manifests\/constraint* prod\n # The following 2 commands combine all the YAMLs from the source_dir into a single YAML file\n - \/usr\/local\/bin\/node \/home\/node\/app\/dist\/read_yaml_run.js -d source_dir=stage\/ --output stage-source.yaml --input \/dev\/null\n - \/usr\/local\/bin\/node \/home\/node\/app\/dist\/read_yaml_run.js -d source_dir=prod\/ --output prod-source.yaml --input \/dev\/null\n artifacts:\n   paths:\n   - stage-source.yaml\n   - prod-source.yaml\n   expire_in: 1 hour\nEOF\n```\n\n\nCreate the ACM validation stage:\n\n\n```bash\ncd ~\/$GROUP_NAME\/platform-admin\/acm\/\n\ncat > validate-acm.yaml << EOF\nvalidate-acm-policy:\n stage: ACM Policy Check\n image:\n   name: gcr.io\/kpt-functions\/gatekeeper-validate\n   entrypoint: [\"\/bin\/sh\", \"-c\"]\n tags:\n   - prod\n script:\n - \/app\/gatekeeper_validate --input stage-source.yaml\n - \/app\/gatekeeper_validate --input prod-source.yaml\nEOF\n```\n\n\n\n## Deploy\n\nThe last stage in the pipeline 
is to deploy our changes. For dev (all branches except main), we\u2019ll deploy the hydrated dev manifest. For stage and prod, we\u2019ll copy the hydrated stage and prod manifests to the `hello-kubernetes-env` repo.\n\nCreate the deploy stages:\n\n\n```bash\ncd ~\/$GROUP_NAME\/platform-admin\/\nmkdir deploy\/\ncd deploy\/\n\ncat > deploy-dev.yaml << EOF\ndeploy-dev:\n stage: Deploy Dev\n tags:\n   - dev\n script:\n   - kubectl apply -f hydrated-manifests\/dev.yaml\n except:\n   refs:\n     - main\nEOF\n\ncat > deploy-prod.yaml << EOF\npush-manifests:\n only:\n   refs:\n     - main\n stage: Push Manifests\n image: gcr.io\/cloud-solutions-images\/kustomize:3.7\n tags:\n   - prod\n script:\n #- cp \/working\/.ssh\/ssh-deploy \/working\/ssh-key\n - echo \\${SSH_KEY} | base64 -d > \/working\/ssh-key\n - chmod 400 \/working\/ssh-key\n - export GIT_SSH_COMMAND=\"ssh -i \/working\/ssh-key -o UserKnownHostsFile=\/dev\/null -o StrictHostKeyChecking=no\"\n - git config --global user.email \"\\${CI_PROJECT_NAME}-ci@\\${CI_SERVER_HOST}\"\n - git config --global user.name \"\\${CI_PROJECT_NAMESPACE}\/\\${CI_PROJECT_NAME}\"\n - git clone git@\\${CI_SERVER_HOST}:\\${CI_PROJECT_NAMESPACE}\/\\${CI_PROJECT_NAME}-env.git -b stage\n - cd \\${CI_PROJECT_NAME}-env\n - cp ..\/hydrated-manifests\/stage.yaml stage.yaml\n - cp ..\/hydrated-manifests\/production.yaml production.yaml\n - |\n   # If files have changed, commit them back to the env repo in the staging branch\n   if [ -z \"\\$(git status --porcelain)\" ]; then\n     echo \"No changes found in env repository.\"\n   else\n     git add stage.yaml\n     git add production.yaml\n     git commit -m \"\\${CI_COMMIT_REF_SLUG} -- \\${CI_PIPELINE_URL}\"\n     git push origin stage\n   fi\nEOF\n```\n\n\nPush platform-admin to its remote:\n\nIn GitLab, create a blank public project under the [$GROUP_NAME](https:\/\/gitlab.com\/dashboard\/groups) group called `platform-admin`, then run the following commands to push 
the `platform-admin` directory to GitLab.\n\n\n```bash\ncd ~\/$GROUP_NAME\/platform-admin\/\ngit init\ngit remote add origin git@gitlab.com:$GROUP_URI\/platform-admin.git\ngit add .\ngit commit -m \"Initial commit\"\ngit push -u origin main\n```\n\n\n\n## gitlab-ci.yml\n\n`.gitlab-ci.yml` is the file GitLab uses to define the CI\/CD pipeline. We\u2019ll create a `.gitlab-ci.yml` that references the different stage files in platform-admin and orders them as listed above. Remember, we separated these stages into the platform-admin repo to avoid a crowded `.gitlab-ci.yml` and to keep operations work out of the app repo.\n\nCreate `.gitlab-ci.yml` in the root directory of hello-kubernetes:\n\n\n```bash\ncd ~\/$GROUP_NAME\/hello-kubernetes\/\ncat > .gitlab-ci.yml << EOF\nimage: google\/cloud-sdk\n\ninclude:\n# Build Steps\n- project: \"$GROUP_URI\/platform-admin\"\n file: \"build\/build-container.yaml\"\n# Vulnerability Scan Steps\n- project: \"$GROUP_URI\/platform-admin\"\n file: \"vulnerability\/vulnerability-scan-result.yaml\"\n- project: \"$GROUP_URI\/platform-admin\"\n file: \"vulnerability\/vulnerability-scan-verify.yaml\"\n# Kustomize Steps\n- project: \"$GROUP_URI\/platform-admin\"\n file: \"kustomize-steps\/fetch-base.yaml\"\n- project: \"$GROUP_URI\/platform-admin\"\n file: \"kustomize-steps\/hydrate-dev.yaml\"\n- project: \"$GROUP_URI\/platform-admin\"\n file: \"kustomize-steps\/hydrate-prod.yaml\"\n# ACM Steps\n- project: \"$GROUP_URI\/platform-admin\"\n file: \"acm\/download-policies.yaml\"\n- project: \"$GROUP_URI\/platform-admin\"\n file: \"acm\/read-acm.yaml\"\n- project: \"$GROUP_URI\/platform-admin\"\n file: \"acm\/validate-acm.yaml\"\n# Deploy Steps\n- project: \"$GROUP_URI\/platform-admin\"\n file: \"deploy\/deploy-dev.yaml\"\n- project: \"$GROUP_URI\/platform-admin\"\n file: \"deploy\/deploy-prod.yaml\"\n\nvariables:\n KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: default\n KUSTOMIZATION_PATH_BASE: \".\/base\"\n KUSTOMIZATION_PATH_DEV: \".\/kubernetes\/overlays\/dev\"\n 
KUSTOMIZATION_PATH_NON_PROD: \".\/kubernetes\/overlays\/stage\"\n KUSTOMIZATION_PATH_PROD: \".\/kubernetes\/overlays\/prod\"\n HOSTNAME: \"gcr.io\"\n PROJECT_ID: \"$PROJECT_ID\"\n CONTAINER_NAME: \"hello-kubernetes\"\n\n # Binary Authorization Variables\n _VULNZ_ATTESTOR: \"vulnz-attestor\"\n _VULNZ_KMS_KEY_VERSION: \"1\"\n _VULNZ_KMS_KEY: \"vulnz-signer\"\n _KMS_KEYRING: \"binauthz\"\n _KMS_LOCATION: \"$REGION\"\n _COMPUTE_REGION: \"$REGION\"\n _PROD_CLUSTER: \"prod\"\n _STAGING_CLUSTER: \"dev\"\n _QA_ATTESTOR: \"qa-attestor\"\n _QA_KMS_KEY: \"qa-signer\"\n _QA_KMS_KEY_VERSION: \"1\"\n\nstages:\n - Build\n - Verify Image\n - Fetch Bases\n - Hydrate Manifests\n - Download ACM Policy\n - Read ACM YAML\n - ACM Policy Check\n - Deploy Dev\n - Push Manifests\nEOF\n```\n\n\n\n## Remote hello-kubernetes & hello-kubernetes-env\n\nIn GitLab, create a blank public project under [$GROUP_NAME](https:\/\/gitlab.com\/dashboard\/groups) called `hello-kubernetes`.\n\nPush the local hello-kubernetes to its remote:\n\nMake sure you have created the `hello-kubernetes` project before running the commands below.\n\n\n```bash\ncd ~\/$GROUP_NAME\/hello-kubernetes\/\ngit init\ngit remote add origin git@gitlab.com:$GROUP_URI\/hello-kubernetes.git\ngit add .\ngit commit -m \"Initial commit\"\ngit push -u origin main\n```\n\n\nCreate a public project under [$GROUP_NAME](https:\/\/gitlab.com\/dashboard\/groups) called `hello-kubernetes-env`. Ensure \u201cInitialize repository with a README\u201d is checked so you have a non-empty repository.\n\nCreate a branch called `prod` in the `hello-kubernetes-env` repo. Set `prod` as the [default branch](https:\/\/docs.gitlab.com\/ee\/user\/project\/repository\/branches\/default.html#change-the-default-branch-name-for-a-project) and delete the `main` branch.\n\nAt this point, the pipeline fails because a GitLab runner does not exist yet.\n\n\n## SSH Keys\n\nSome stages in the pipeline require the GitLab runner to clone repositories. 
We\u2019ll provide SSH keys so the runner can authenticate to read from and write to our repositories.\n\n**Note**: Having SSH_KEY in GitLab is only for demo purposes. You should consider saving the key as a secret in the k8s cluster and having the secret mounted as a file inside the container instead.\n\nGenerate an SSH key pair and store it in a directory of your choice. Base64-encode the private key; the pipeline stages recover it with `base64 -d` and use it as their SSH identity.\n\nFor the `ACM`, `hello-kubernetes` and `hello-kubernetes-env` repos, go to Settings > Repositories > Deploy Keys and paste the public key into the key text box. Check \u201cwrite access allowed\u201d for `hello-kubernetes-env` to enable the pipeline to push into it.\n\n\n\n![alt_text](images\/deploy-ssh-keys.png \"Deploy SSH keys\")\n\n\nCreate an SSH_KEY [variable](https:\/\/docs.gitlab.com\/ee\/ci\/variables\/#custom-cicd-variables) in the hello-kubernetes repo by going to Settings > CI\/CD > Variables. Make sure to mask your variables and uncheck \u201cprotect variable\u201d.\n\n\n![alt_text](images\/environment-variables.png \"GitLab environment variables\")\n\n\n\n## Register gitlab runner\n\n[Gitlab runner](https:\/\/docs.gitlab.com\/runner\/) is what runs the jobs in the gitlab pipeline. In this tutorial, we'll install the Gitlab runner application to our clusters and register them as our gitlab runners. Before registering gitlab runner, we\u2019ll enable [workload identity](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/workload-identity#enable_on_cluster), which is the recommended way to access Google Cloud services from applications running within GKE. 
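The workload identity binding below links a Kubernetes service account to a Google service account through an IAM member string with a fixed naming scheme: `serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]`. As a minimal sketch (the values here are illustrative, not your real project), the member can be constructed in plain bash:

```shell
# Build the workload identity IAM member that maps a Kubernetes service
# account (namespace + name) onto a Google service account.
# Illustrative values only -- substitute your own project and names.
PROJECT_ID="my-project"
K8S_NAMESPACE="gitlab"
K8S_SA="gitlab-runner"

WI_MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${K8S_NAMESPACE}/${K8S_SA}]"
echo "${WI_MEMBER}"
# serviceAccount:my-project.svc.id.goog[gitlab/gitlab-runner]
```

This is the value passed to `--member` in the `gcloud iam service-accounts add-iam-policy-binding` step of the cluster setup.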
\n\nSetup workload identity on your clusters:\n\n\n```bash\nfor i in \"dev\" \"prod\"; do\n gcloud container clusters update ${i} \\\n --workload-pool=$PROJECT_ID.svc.id.goog\n\n gcloud container node-pools update default-pool \\\n --cluster=${i} \\\n --zone=$ZONE \\\n --workload-metadata=GKE_METADATA\n\n gcloud container clusters get-credentials ${i} --zone=$ZONE\n kubectl create namespace gitlab\n kubectl create serviceaccount --namespace gitlab gitlab-runner\ndone\n\ngcloud iam service-accounts create gitlab-sa\n\ngcloud iam service-accounts add-iam-policy-binding \\\n--role roles\/iam.workloadIdentityUser \\\n--member \"serviceAccount:$PROJECT_ID.svc.id.goog[gitlab\/gitlab-runner]\" \\\ngitlab-sa@$PROJECT_ID.iam.gserviceaccount.com\n\nkubectl annotate serviceaccount \\\n--namespace gitlab \\\ngitlab-runner \\\niam.gke.io\/gcp-service-account=gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com\n```\n\n\nSome actions in the pipeline require the runner to have some IAM roles to perform. You can permit the runner to perform these actions by assigning the roles to the workload identity service account.\n\nGive workload service account the required roles and permissions:\n\n\n```bash\ngcloud projects add-iam-policy-binding \"${PROJECT_ID}\" \\\n --member \"serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com\" \\\n --role \"roles\/storage.admin\"\n\ngcloud projects add-iam-policy-binding \"${PROJECT_ID}\" \\\n --member \"serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com\" \\\n --role \"roles\/binaryauthorization.attestorsViewer\"\n\ngcloud projects add-iam-policy-binding \"${PROJECT_ID}\" \\\n --member \"serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com\" \\\n --role \"roles\/cloudkms.signerVerifier\"\n\ngcloud projects add-iam-policy-binding \"${PROJECT_ID}\" \\\n --member \"serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com\" \\\n --role \"roles\/containeranalysis.occurrences.editor\"\n\ngcloud projects add-iam-policy-binding 
\"${PROJECT_ID}\" \\\n --member \"serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com\" \\\n --role \"roles\/containeranalysis.notes.editor\"\n\ngcloud kms keys add-iam-policy-binding \"qa-signer\" \\\n --project \"${PROJECT_ID}\" \\\n --location \"${REGION}\" \\\n --keyring \"binauthz\" \\\n --member \"serviceAccount:gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com\" \\\n --role 'roles\/cloudkms.signerVerifier'\n```\n\n\nDeploy gitlab runner in acm repo: \n\n\n<table>\n  <tr>\n  <\/tr>\n<\/table>\n\n\n\n```bash\ncd ~\/$GROUP_NAME\/acm\/namespaces\n\ncat > gitlab-rbac.yaml << EOF\napiVersion: rbac.authorization.k8s.io\/v1\nkind: Role\nmetadata:\n  name: gitlab-runner\nrules:\n- apiGroups: [\"*\"] # \"\" indicates the core API group\n  resources: [\"*\"]\n  verbs: [\"*\"]\n---\nkind: RoleBinding\napiVersion: rbac.authorization.k8s.io\/v1\nmetadata:\n  name: gitlab-runner\nsubjects:\n- kind: ServiceAccount\n  name: gitlab-runner\n  namespace: gitlab\nroleRef:\n  kind: Role\n  name: gitlab-runner\n  apiGroup: rbac.authorization.k8s.io\nEOF\n\nmkdir operations\/\ncd operations\/\n\ncat > config-map.yaml << EOF\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: gitlab-runner-config\n annotations:\n   configmanagement.gke.io\/cluster-selector: prod-cluster-selector\ndata:\n kubernetes-namespace: \"gitlab\"\n kubernetes-service-account: \"gitlab-runner\"\n gitlab-server-address: \"https:\/\/gitlab.com\/\"\n runner-tag-list: \"prod\"\n entrypoint: |\n   #!\/bin\/bash\n\n   set -xe\n\n   # Register the runner\n   \/entrypoint register --non-interactive \\\\\n     --url \\$GITLAB_SERVER_ADDRESS \\\\\n     --registration-token \\$REGISTRATION_TOKEN \\\\\n     --executor kubernetes\n\n   # Start the runner\n   \/entrypoint run --user=gitlab-runner \\\\\n     --working-directory=\/home\/gitlab-runner\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: gitlab-runner-config\n annotations:\n   configmanagement.gke.io\/cluster-selector: dev-cluster-selector\ndata:\n 
kubernetes-namespace: \"gitlab\"\n kubernetes-service-account: \"gitlab-runner\"\n gitlab-server-address: \"https:\/\/gitlab.com\/\"\n runner-tag-list: \"dev\"\n entrypoint: |\n   #!\/bin\/bash\n\n   set -xe\n\n   # Register the runner\n   \/entrypoint register --non-interactive \\\\\n     --url \\$GITLAB_SERVER_ADDRESS \\\\\n     --registration-token \\$REGISTRATION_TOKEN \\\\\n     --executor kubernetes\n  \n\n   # Start the runner\n   \/entrypoint run --user=gitlab-runner \\\\\n     --working-directory=\/home\/gitlab-runner\n\nEOF\n\ncat > deployment.yaml << EOF\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n name: gitlab-runner\nspec:\n replicas: 1\n selector:\n   matchLabels:\n     name: gitlab-runner\n template:\n   metadata:\n     labels:\n       name: gitlab-runner\n   spec:\n     serviceAccountName: gitlab-runner\n     containers:\n     - command:\n       - \/bin\/bash\n       - \/scripts\/entrypoint\n       image: gcr.io\/abm-test-bed\/gitlab-runner@sha256:8f623d3c55ffc783752d0b34097c5625a32a910a8c1427308f5c39fd9a23a3c0\n       imagePullPolicy: IfNotPresent\n       name: gitlab-runner\n       resources:\n         requests:\n           cpu: \"100m\"\n         limits:\n           cpu: \"100m\"\n       env:\n         - name: GITLAB_SERVER_ADDRESS\n           valueFrom:\n             configMapKeyRef:\n               name: gitlab-runner-config\n               key: gitlab-server-address\n         - name: RUNNER_TAG_LIST\n           valueFrom:\n             configMapKeyRef:\n               name: gitlab-runner-config\n               key: runner-tag-list\n         - name: KUBERNETES_NAMESPACE\n           valueFrom:\n             configMapKeyRef:\n               name: gitlab-runner-config\n               key: kubernetes-namespace\n         - name: KUBERNETES_SERVICE_ACCOUNT\n           valueFrom:\n             configMapKeyRef:\n               name: gitlab-runner-config\n               key: kubernetes-service-account\n         - name: REGISTRATION_TOKEN\n       
    valueFrom:\n             secretKeyRef:\n               name: gitlab-runner-secret\n               key: runner-registration-token\n       volumeMounts:\n         - name: config\n           mountPath: \/scripts\/entrypoint\n           readOnly: true\n           subPath: entrypoint\n         - mountPath: \/tmp\/template.config.toml\n           name: config\n           subPath: template.config.toml\n     volumes:\n       - name: config\n         configMap:\n           name: gitlab-runner-config\n     restartPolicy: Always\nEOF\n\nmkdir gitlab\ncd gitlab\/\n\ncat > namespace.yaml << EOF\napiVersion: v1\nkind: Namespace\nmetadata:\n name: gitlab\nEOF\ncd ..\n\ncat > service-account.yaml << EOF\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: gitlab-runner\n annotations:\n   iam.gke.io\/gcp-service-account: gitlab-sa@$PROJECT_ID.iam.gserviceaccount.com\nEOF\n```\n\n\nPush changes to the acm repo:\n\n\n```bash\ngit add -A\ngit commit -m \"register gitlab runner\"\ngit push origin dev\n```\n\n\nMerge the `ACM` dev branch into main.\n\nCreate the gitlab-runner secret:\n\n\n```bash\nfor i in \"dev\" \"prod\"; do\n  gcloud container clusters get-credentials ${i} --zone=$ZONE\n  kubectl create secret generic gitlab-runner-secret -n gitlab \\\n    --from-literal=runner-registration-token=<REGISTRATION_TOKEN>\ndone\n```\n\n\nTo find your `REGISTRATION_TOKEN`, navigate to the $GROUP_NAME [group](https:\/\/gitlab.com\/dashboard\/groups) page, then click Settings > CI\/CD > Runners > Expand.\n\n\n\n![alt_text](images\/registration-token.png \"Registration token\")\n\n\nVerify your runners have been created under Settings > CI\/CD > Runners > Expand. You should see two runners listed under group runners.\n\n**Verify that the app is deployed on dev**\n\n**Hello-Kubernetes-env**\n\nUsing the <app-repo>-env model allows one more layer of checks before a deployment is pushed to production. The hydrated manifests are pushed to hello-kubernetes-env. 
The stage manifests are pushed to the stage branch and the prod manifests are pushed to the prod branch. In this repo you can set up manual checks to ensure a human reviews and approves changes before they\u2019re pushed to production.\n\nCreate `.gitlab-ci.yml` for hello-kubernetes-env:\n\n\n```bash\ncd ~\/$GROUP_NAME\/\ngit clone git@gitlab.com:$GROUP_URI\/hello-kubernetes-env.git\ncd hello-kubernetes-env\/\n\ncat > .gitlab-ci.yml << EOF\nimage: google\/cloud-sdk\n\nvariables:\n KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: default\n KUSTOMIZATION_PATH_BASE: \".\/base\"\n KUSTOMIZATION_PATH_DEV: \".\/kubernetes\/overlays\/dev\"\n KUSTOMIZATION_PATH_NON_PROD: \".\/kubernetes\/overlays\/stage\"\n KUSTOMIZATION_PATH_PROD: \".\/kubernetes\/overlays\/prod\"\n HOSTNAME: \"gcr.io\"\n PROJECT_ID: \"$PROJECT_ID\"\n CONTAINER_NAME: \"hello-kubernetes\"\n\n # Binary Authorization Variables\n _VULNZ_ATTESTOR: \"vulnz-attestor\"\n _VULNZ_KMS_KEY_VERSION: \"1\"\n _VULNZ_KMS_KEY: \"vulnz-signer\"\n _KMS_KEYRING: \"binauthz\"\n _KMS_LOCATION: \"us-central1\"\n _COMPUTE_REGION: \"us-central1\"\n _PROD_CLUSTER: \"prod\"\n _STAGING_CLUSTER: \"dev\"\n _QA_ATTESTOR: \"qa-attestor\"\n _QA_KMS_KEY: \"qa-signer\"\n _QA_KMS_KEY_VERSION: \"1\"\n\nstages:\n - Deploy Stage\n - Manual Verification\n - Attest QA Deployment\n - Deploy Full GKE\n\ndeploy-stage:\n stage: Deploy Stage\n tags:\n   - dev\n only:\n   - stage\n environment:\n   name: nonprod\n   url: https:\/\/\\$CI_ENVIRONMENT_SLUG\n script:\n   - kubectl apply -f stage.yaml\n\nmanual-verification:\n stage: Manual Verification\n tags:\n   - dev\n only:\n   - stage\n script:\n   - echo \"run uat here\"\n   - sleep 1s\n when: manual\n allow_failure: false\n\nattest-qa-deployment:\n stage: Attest QA Deployment\n tags:\n   - dev\n only:\n   - stage\n environment:\n   name: nonprod\n   url: https:\/\/\\$CI_ENVIRONMENT_SLUG\n retry: 2\n script:\n   - kubectl apply -f stage.yaml --dry-run -o jsonpath='{range 
.items[*]}{.spec.template.spec.containers[*].image}{\"\\n\"}' | sed '\/^[[:space:]]*$\/d' | sed 's\/ \/\\n\/g' > nonprd_images.txt\n   - cat nonprd_images.txt\n   - |\n     while IFS= read -r IMAGE; do\n        if [[ \\$(gcloud beta container binauthz attestations list \\\\\n         --project \"\\$PROJECT_ID\" \\\\\n         --attestor \"\\$_QA_ATTESTOR\" \\\\\n         --attestor-project \"\\$PROJECT_ID\" \\\\\n         --artifact-url \"\\$IMAGE\" \\\\\n         2>&1 | grep \"\\$_QA_KMS_KEY\" | wc -l) = 0 ]]; then\n         gcloud beta container binauthz attestations sign-and-create \\\\\n           --project \"\\$PROJECT_ID\" \\\\\n           --artifact-url \"\\$IMAGE\" \\\\\n           --attestor \"\\$_QA_ATTESTOR\" \\\\\n           --attestor-project \"\\$PROJECT_ID\" \\\\\n           --keyversion \"\\$_QA_KMS_KEY_VERSION\" \\\\\n           --keyversion-key \"\\$_QA_KMS_KEY\" \\\\\n           --keyversion-location \"\\$_KMS_LOCATION\" \\\\\n           --keyversion-keyring \"\\$_KMS_KEYRING\" \\\\\n           --keyversion-project \"\\$PROJECT_ID\"\n         echo \"Attested Image \\$IMAGE\"\n       fi\n     done < nonprd_images.txt\n\ndeploy-production:\n stage: Deploy Full GKE\n tags:\n   - prod\n only:\n   - prod\n environment:\n   name: production\n   url: https:\/\/\\$CI_ENVIRONMENT_SLUG\n script:\n   - kubectl apply -f production.yaml\n\nEOF\n\ngit add .\ngit commit -m \"Add stage pipeline\"\ngit push\n```\n\n\nIn the hello-kubernetes-env repo on gitlab, create a branch named stage from prod","site":"GCP","answers_cleaned":"  CICD with Anthos  In this section  we ll automate a CI CD pipeline taking advantage of the features from anthos       Create app  Before creating a CICD pipeline we need an application  For this tutorial  we ll use the popular hello kubernetes application created by paulbower but with a few modifications   Download hello kubernetes app       bash cd    GROUP NAME  git clone https   github com itodotimothy6 hello kubernetes git cd hello 
in Container Registry:

```bash
# Give cloudbuild service account the required roles and permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
  --role roles/binaryauthorization.attestorsViewer

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
  --role roles/cloudkms.signerVerifier

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
  --role roles/containeranalysis.notes.attacher

# Create attestor using cloudbuild
git clone https://github.com/GoogleCloudPlatform/gke-binary-auth-tools ~/$GROUP_NAME/binauthz-tools

gcloud builds submit \
  --project "${PROJECT_ID}" \
  --tag "gcr.io/${PROJECT_ID}/cloudbuild-attestor" \
  ~/$GROUP_NAME/binauthz-tools

# Clean up - delete binauthz-tools
rm -rf ~/$GROUP_NAME/binauthz-tools
```

- Verify the `cloudbuild-attestor` image is created by inputting `gcr.io/<project-id>/cloudbuild-attestor` in your browser.
- Create binauth.yaml, which describes the Binary Authorization policy for the project:

```bash
cd ~/$GROUP_NAME/platform-admin
mkdir binauth && cd binauth

cat > binauth.yaml <<EOF
defaultAdmissionRule:
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  evaluationMode: ALWAYS_DENY
globalPolicyEvaluationMode: ENABLE
admissionWhitelistPatterns:
# Gitlab runner
- namePattern: gitlab/gitlab-runner-helper:x86_64-8fa89735
- namePattern: gitlab/gitlab-runner-helper:x86_64-ece86343
- namePattern: gitlab/gitlab-runner:alpine-v13.6.0
- namePattern: gcr.io/abm-test-bed/gitlab-runner@sha256:8f623d3c55ffc783752d0b34097c5625a32a910a8c1427308f5c39fd9a23a3c0
# Gitlab runner job containers
- namePattern: google/cloud-sdk
- namePattern: gcr.io/cloud-builders/gke-deploy:latest
- namePattern: gcr.io/kaniko-project/*
- namePattern: gcr.io/cloud-solutions-images/kustomize:3.7
- namePattern: gcr.io/kpt-functions/gatekeeper-validate
- namePattern: gcr.io/kpt-functions/read-yaml
- namePattern: gcr.io/stackdriver-prometheus/*
- namePattern: gcr.io/$PROJECT_ID/cloudbuild-attestor
- namePattern: gcr.io/config-management-release/*
clusterAdmissionRules:
  # Staging/dev cluster
  $ZONE.dev:
    evaluationMode: REQUIRE_ATTESTATION
    enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
    requireAttestationsBy:
    - projects/$PROJECT_ID/attestors/vulnz-attestor
  # Production cluster
  $ZONE.prod:
    evaluationMode: REQUIRE_ATTESTATION
    enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
    requireAttestationsBy:
    - projects/$PROJECT_ID/attestors/vulnz-attestor
    - projects/$PROJECT_ID/attestors/qa-attestor
EOF
```

- Upload the binauth.yaml policy to the project:

```bash
gcloud container binauthz policy import ./binauth.yaml
```

- Create verify-image:

```bash
cd ~/$GROUP_NAME/platform-admin
mkdir vulnerability && cd vulnerability

cat > vulnerability-scan-result.yaml <<'EOF'
check-vulnerability-scan-result:
  stage: Verify Image
  tags:
  - prod
  image:
    name: gcr.io/${PROJECT_ID}/cloudbuild-attestor
  script:
    - |
      /scripts/check_vulnerabilities.sh \
        -p ${PROJECT_ID} \
        -i ${HOSTNAME}/${PROJECT_ID}/${CONTAINER_NAME}:${CI_COMMIT_SHORT_SHA} \
        -t 8
EOF

cat > vulnerability-scan-verify.yaml <<'EOF'
attest-vulnerability-scan:
  stage: Verify Image
  tags:
  - prod
  image:
    name: "gcr.io/${PROJECT_ID}/cloudbuild-attestor"
  script:
    - mkdir images
    - echo "$(gcloud container images describe --format 'value(image_summary.digest)' ${HOSTNAME}/${PROJECT_ID}/${CONTAINER_NAME}:${CI_COMMIT_SHORT_SHA})" > images/digest.txt
    - |
      FQ_DIGEST=$(gcloud container images describe --format 'value(image_summary.fully_qualified_digest)' ${HOSTNAME}/${PROJECT_ID}/${CONTAINER_NAME}:${CI_COMMIT_SHORT_SHA})
      /scripts/create_attestation.sh \
        -p "$PROJECT_ID" \
        -i "$FQ_DIGEST" \
        -a "$VULNZ_ATTESTOR" \
        -v "$VULNZ_KMS_KEY_VERSION" \
        -k "$VULNZ_KMS_KEY" \
        -l "$KMS_LOCATION" \
        -r "$KMS_KEYRING"
  artifacts:
    paths:
    - images/
EOF
```

### Hydrate manifest using Kustomize

In this tutorial, we use Kustomize to create a hydrated manifest of our deployment, which will be stored in a repo called `hello-kubernetes-env`.

- Create the shared nodejs kustomize base in platform-admin:

```bash
cd ~/$GROUP_NAME/platform-admin
mkdir -p shared-kustomize-bases/nodejs
cd shared-kustomize-bases/nodejs

cat > deployment.yaml <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nodejs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
      - name: nodejs
        image: app
        ports:
        - containerPort: 8080
EOF

cat > kustomization.yaml <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
EOF

cat > service.yaml <<EOF
kind: Service
apiVersion: v1
metadata:
  name: nodejs
spec:
  type: LoadBalancer
  selector:
    app: nodejs
  ports:
  - name: http
    port: 80
    targetPort: 8080
EOF
```

- To allow developers to apply patches when deploying, create overlays for dev, stage and prod in the `hello-kubernetes` repo:

```bash
cd ~/$GROUP_NAME/hello-kubernetes
mkdir -p kubernetes/overlays/dev
cd kubernetes/overlays/dev

cat > kustomization.yaml <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
bases:
- ../../base
namePrefix: dev-
EOF

cd ..
mkdir stage && cd stage
cat > kustomization.yaml <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: stage
bases:
- ../../base
namePrefix: stage-
EOF

cd ..
mkdir prod && cd prod
cat > kustomization.yaml <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: prod
bases:
- ../../base
namePrefix: prod-
EOF
```

- Now that we have the kustomize base and overlays, we'll
start creating the kustomize CI/CD jobs.

- Create the fetch-base stage:

```bash
cd ~/$GROUP_NAME/platform-admin
mkdir kustomize-steps && cd kustomize-steps

cat > fetch-base.yaml <<'EOF'
fetch-kustomize-base:
  stage: Fetch Bases
  image: gcr.io/cloud-solutions-images/kustomize:3.7
  tags:
  - prod
  script:
    # Add auth to git urls
    - git config --global url."https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}".insteadOf "https://${CI_SERVER_HOST}"
    - mkdir -p kubernetes/base
    # Pull the Kustomize shared base from the platform repo
    - echo "$SSH_KEY" | base64 -d > working-ssh-key && chmod 400 working-ssh-key
    - export GIT_SSH_COMMAND="ssh -i working-ssh-key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
    - git clone git@${CI_SERVER_HOST}:${CI_PROJECT_NAMESPACE}/platform-admin.git -b main
    - cp platform-admin/shared-kustomize-bases/nodejs/* kubernetes/base/
  artifacts:
    paths:
    - kubernetes/base/
EOF
```

- Create the hydrate dev/prod manifest stage:

```bash
cat > hydrate-dev.yaml <<'EOF'
kustomize-dev:
  stage: Hydrate Manifests
  image: gcr.io/cloud-solutions-images/kustomize:3.7
  tags:
  - prod
  except:
    refs:
    - main
  script:
    - DIGEST=$(cat images/digest.txt)
    # dev
    - mkdir -p ../hydrated-manifests/
    - cd $KUSTOMIZATION_PATH_DEV
    - kustomize edit set image app=${HOSTNAME}/${PROJECT_ID}/${CONTAINER_NAME}@${DIGEST}
    - kustomize build . -o ../../../hydrated-manifests/dev.yaml
    - cd -
  artifacts:
    paths:
    - hydrated-manifests/
EOF

cat > hydrate-prod.yaml <<'EOF'
kustomize:
  stage: Hydrate Manifests
  image: gcr.io/cloud-solutions-images/kustomize:3.7
  tags:
  - prod
  only:
    refs:
    - main
  script:
    - DIGEST=$(cat images/digest.txt)
    # build out staging manifests
    - mkdir -p ../hydrated-manifests/
    # stage
    - cd $KUSTOMIZATION_PATH_NON_PROD
    - kustomize edit set image app=${HOSTNAME}/${PROJECT_ID}/${CONTAINER_NAME}@${DIGEST}
    - kustomize build . -o ../../../hydrated-manifests/stage.yaml
    - cd -
    # prod
    - cd $KUSTOMIZATION_PATH_PROD
    - kustomize edit set image app=${HOSTNAME}/${PROJECT_ID}/${CONTAINER_NAME}@${DIGEST}
    - kustomize build . -o ../../../hydrated-manifests/production.yaml
    - cd -
  artifacts:
    paths:
    - hydrated-manifests/
EOF
```

### ACM policy check in CI pipeline

ACM, as discussed earlier, is used to ensure consistency in config and to automate policy checks. We'll incorporate ACM into our CI pipeline to ensure that any change that fails a policy check is terminated at the CI stage, even before deployment.

- Create a stage that downloads the acm policies from the acm repo:

```bash
cd ~/$GROUP_NAME/platform-admin
mkdir acm && cd acm

cat > download-policies.yaml <<'EOF'
download-acm-policies:
  stage: Download ACM Policy
  image: gcr.io/cloud-solutions-images/kustomize:3.7
  tags:
  - prod
  script:
    # Note: Having SSH_KEY in GitLab is only for demo purposes. You should
    # consider saving the key as a secret in the k8s cluster and have the secret
    # mounted as a file inside the container instead.
    - echo "$SSH_KEY" | base64 -d > working-ssh-key && chmod 400 working-ssh-key
    - export GIT_SSH_COMMAND="ssh -i working-ssh-key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
    - git clone git@${CI_SERVER_HOST}:${CI_PROJECT_NAMESPACE}/acm.git -b main
    - cp acm/policies/cluster/constraint* hydrated-manifests/
  artifacts:
    paths:
    - hydrated-manifests/
EOF
```

- Create a stage that reads the acm YAML:

```bash
cd ~/$GROUP_NAME/platform-admin/acm

cat > read-acm.yaml <<'EOF'
read-yaml:
  stage: Read ACM YAML
  image:
    name: gcr.io/kpt-functions/read-yaml
    entrypoint: ["/bin/sh", "-c"]
  tags:
  - prod
  script:
    - mkdir stage
    - cp hydrated-manifests/stage.yaml stage/
    - cp hydrated-manifests/constraint* stage/
    - mkdir prod
    - cp hydrated-manifests/production.yaml prod/
    - cp hydrated-manifests/constraint* prod/
    # The following 2 commands are combining all the YAMLs from the source_dir into one single YAML file
    - /usr/local/bin/node /home/node/app/dist/read_yaml_run.js -d source_dir=stage --output stage-source.yaml --input /dev/null
    - /usr/local/bin/node /home/node/app/dist/read_yaml_run.js -d source_dir=prod --output prod-source.yaml --input /dev/null
  artifacts:
    paths:
    - stage-source.yaml
    - prod-source.yaml
    expire_in: 1 hour
EOF
```

- Create the validate-acm stage:

```bash
cd ~/$GROUP_NAME/platform-admin/acm

cat > validate-acm.yaml <<'EOF'
validate-acm-policy:
  stage: ACM Policy Check
  image:
    name: gcr.io/kpt-functions/gatekeeper-validate
    entrypoint: ["/bin/sh", "-c"]
  tags:
  - prod
  script:
    - /app/gatekeeper_validate --input stage-source.yaml
    - /app/gatekeeper_validate --input prod-source.yaml
EOF
```

### Deploy

The last stage in the pipeline is to deploy our changes. For dev (all branches except main), we'll deploy the hydrated dev manifest. For stage and prod, we'll copy the hydrated stage and prod manifests to the `hello-kubernetes-env` repo.

- Create the deploy dev and deploy prod stages:

```bash
cd ~/$GROUP_NAME/platform-admin
mkdir deploy && cd deploy

cat > deploy-dev.yaml <<'EOF'
deploy-dev:
  stage: Deploy Dev
  tags:
  - dev
  script:
    - kubectl apply -f hydrated-manifests/dev.yaml
  except:
    refs:
    - main
EOF

cat > deploy-prod.yaml <<'EOF'
push-manifests:
  only:
    refs:
    - main
  stage: Push Manifests
  image: gcr.io/cloud-solutions-images/kustomize:3.7
  tags:
  - prod
  script:
    - cp /working/.ssh/ssh-deploy working-ssh-key
    - echo "$SSH_KEY" | base64 -d > working-ssh-key && chmod 400 working-ssh-key
    - export GIT_SSH_COMMAND="ssh -i working-ssh-key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
    - git config --global user.email "${CI_PROJECT_NAME}-ci@${CI_SERVER_HOST}"
    - git config --global user.name "${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}"
    - git clone git@${CI_SERVER_HOST}:${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}-env.git -b stage
    - cd ${CI_PROJECT_NAME}-env
    - cp ../hydrated-manifests/stage.yaml stage.yaml
    - cp ../hydrated-manifests/production.yaml
      production.yaml
    # If files have changed, commit them back to the env repo in the staging branch
    - |
      if [ -z "$(git status --porcelain)" ]; then
        echo "No changes found in env repository."
      else
        git add stage.yaml
        git add production.yaml
        git commit -m "${CI_COMMIT_REF_SLUG} -- ${CI_PIPELINE_URL}"
        git push origin stage
      fi
EOF
```

### Push platform-admin remote

In gitlab, create a blank public project under the [$GROUP_NAME](https://gitlab.com/dashboard/groups) group called `platform-admin`, then run the following commands to push the `platform-admin` dir to gitlab:

```bash
cd ~/$GROUP_NAME/platform-admin
git init
git remote add origin git@gitlab.com:$GROUP_URI/platform-admin.git
git add .
git commit -m "Initial commit"
git push -u origin main
```

### .gitlab-ci.yml

`.gitlab-ci.yml` is the file used by gitlab for the ci/cd pipeline. We'll create a `.gitlab-ci.yml` that references the different stage files in platform-admin and orders them as listed above. Remember, we separated out these stages into a platform-admin repo to avoid a crowded `.gitlab-ci.yml` and to separate operations from the app repo.

- Create `.gitlab-ci.yml` in the root directory of hello-kubernetes:

```bash
cd ~/$GROUP_NAME/hello-kubernetes

cat > .gitlab-ci.yml <<EOF
image: google/cloud-sdk

include:
  # Build Steps
  - project: '$GROUP_URI/platform-admin'
    file: '/build/build-container.yaml'
  # Vulnerability Scan Steps
  - project: '$GROUP_URI/platform-admin'
    file: '/vulnerability/vulnerability-scan-result.yaml'
  - project: '$GROUP_URI/platform-admin'
    file: '/vulnerability/vulnerability-scan-verify.yaml'
  # Kustomize Steps
  - project: '$GROUP_URI/platform-admin'
    file: '/kustomize-steps/fetch-base.yaml'
  - project: '$GROUP_URI/platform-admin'
    file: '/kustomize-steps/hydrate-dev.yaml'
  - project: '$GROUP_URI/platform-admin'
    file: '/kustomize-steps/hydrate-prod.yaml'
  # ACM Steps
  - project: '$GROUP_URI/platform-admin'
    file: '/acm/download-policies.yaml'
  - project: '$GROUP_URI/platform-admin'
    file: '/acm/read-acm.yaml'
  - project: '$GROUP_URI/platform-admin'
    file: '/acm/validate-acm.yaml'
  # Deploy Steps
  - project: '$GROUP_URI/platform-admin'
    file: '/deploy/deploy-dev.yaml'
  - project: '$GROUP_URI/platform-admin'
    file: '/deploy/deploy-prod.yaml'

variables:
  KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: default
  KUSTOMIZATION_PATH_BASE: "../base"
  KUSTOMIZATION_PATH_DEV: "./kubernetes/overlays/dev"
  KUSTOMIZATION_PATH_NON_PROD: "./kubernetes/overlays/stage"
  KUSTOMIZATION_PATH_PROD: "./kubernetes/overlays/prod"
  HOSTNAME: "gcr.io"
  PROJECT_ID: "$PROJECT_ID"
  CONTAINER_NAME: "hello-kubernetes"
  # Binary Authorization Variables
  VULNZ_ATTESTOR: "vulnz-attestor"
  VULNZ_KMS_KEY_VERSION: "1"
  VULNZ_KMS_KEY: "vulnz-signer"
  KMS_KEYRING: "binauthz"
  KMS_LOCATION: "$REGION"
  COMPUTE_REGION: "$REGION"
  PROD_CLUSTER: "prod"
  STAGING_CLUSTER: "dev"
  QA_ATTESTOR: "qa-attestor"
  QA_KMS_KEY: "qa-signer"
  QA_KMS_KEY_VERSION: "1"

stages:
  - Build
  - Verify Image
  - Fetch Bases
  - Hydrate Manifests
  - Download ACM Policy
  - Read ACM YAML
  - ACM Policy Check
  - Deploy Dev
  - Push Manifests
EOF
```

### Remote hello-kubernetes + hello-kubernetes-env

- In gitlab, create a blank public project under [$GROUP_NAME](https://gitlab.com/dashboard/groups) called `hello-kubernetes`.
- Push local hello-kubernetes to the remote. Make sure you have created the `hello-kubernetes` project before running the below commands:

```bash
cd ~/$GROUP_NAME/hello-kubernetes
git init
git remote add origin git@gitlab.com:$GROUP_URI/hello-kubernetes.git
git add .
git commit -m "Initial commit"
git push -u origin main
```

- Create a public project under [$GROUP_NAME](https://gitlab.com/dashboard/groups) called `hello-kubernetes-env`. Ensure "Initialize repository with a README" is checked so you have a non-empty repository.
- Create a branch called `prod` in the `hello-kubernetes-env` repo. Set `prod` as the [default branch](https://docs.gitlab.com/ee/user/project/repository/branches/default.html#change-the-default-branch-name-for-a-project) and delete the `main` branch.

At this point, the pipeline fails because a gitlab runner does not exist.

### SSH Keys

Some stages in the pipeline require the gitlab runner to clone repositories. We'll give the runner SSH keys to authenticate it to read from or write to our repositories.

> Note: Having SSH_KEY in GitLab is only for demo purposes. You should consider saving the key as a secret in the k8s cluster and have the secret mounted as a file inside the container instead.

- Generate an SSH key pair and store it in your directory of choice. Make sure your pub key is base64 encoded.
- For the `acm`, `hello-kubernetes` and `hello-kubernetes-env` repos, go to Settings > Repositories > Deploy Keys and paste the public key in the key text box. Check "write access allowed" for `hello-kubernetes-env` to enable the pipeline to push into it.

![alt text](images/deploy-ssh-keys.png "Deploy SSH keys")

- Create an SSH_KEY [variable](https://docs.gitlab.com/ee/ci/variables/#custom-cicd-variables) in the hello-kubernetes repo by going to Settings > CI/CD > Variables. Make sure to mask your variables and uncheck "protect variable".

![alt text](images/environment-variables.png "Gitlab environment variables")

### Register gitlab runner

[Gitlab runner](https://docs.gitlab.com/runner/) is what runs the jobs in the gitlab pipeline. In this tutorial, we'll install the Gitlab runner application on our prod cluster and register it as our gitlab runner. Before registering the gitlab runner on our system, we'll enable [workload identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable_on_cluster), which is the recommended way to access Google cloud services from applications running within GKE.

- Setup workload identity on your clusters:

```bash
for i in "dev" "prod"
do
  gcloud container clusters update ${i} \
    --workload-pool=${PROJECT_ID}.svc.id.goog
  gcloud container node-pools update default-pool \
    --cluster=${i} \
    --zone ${ZONE} \
    --workload-metadata=GKE_METADATA
  gcloud container clusters get-credentials ${i} --zone ${ZONE}
  kubectl create namespace gitlab
  kubectl create serviceaccount --namespace gitlab gitlab-runner
done

gcloud iam service-accounts create gitlab-sa

gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${PROJECT_ID}.svc.id.goog[gitlab/gitlab-runner]" \
  gitlab-sa@${PROJECT_ID}.iam.gserviceaccount.com

kubectl annotate serviceaccount \
  --namespace gitlab \
  gitlab-runner \
  iam.gke.io/gcp-service-account=gitlab-sa@${PROJECT_ID}.iam.gserviceaccount.com
```

Some actions in the pipeline require the runner to have certain IAM roles. You can permit the runner to perform these actions by assigning the roles to the workload identity service account.

- Give the workload service account the required roles and permissions:

```bash
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:gitlab-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/storage.admin"

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:gitlab-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/binaryauthorization.attestorsViewer"

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:gitlab-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/cloudkms.signerVerifier"

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:gitlab-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/containeranalysis.occurrences.editor"

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:gitlab-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/containeranalysis.notes.editor"

gcloud kms keys add-iam-policy-binding "qa-signer" \
  --project "${PROJECT_ID}" \
  --location "${REGION}" \
  --keyring "binauthz" \
  --member "serviceAccount:gitlab-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/cloudkms.signerVerifier"
```

- Deploy the gitlab runner in the acm repo:

```bash
cd ~/$GROUP_NAME/acm/namespaces

cat > gitlab-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gitlab-runner
subjects:
- kind: ServiceAccount
  name: gitlab-runner
  namespace: gitlab
roleRef:
  kind: Role
  name: gitlab-runner
  apiGroup: rbac.authorization.k8s.io
EOF

mkdir operations && cd operations

cat > config-map.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner-config
  annotations:
    configmanagement.gke.io/cluster-selector: prod-cluster-selector
data:
  kubernetes_namespace: "gitlab"
  kubernetes_service_account: "gitlab-runner"
  gitlab_server_address: "https://gitlab.com/"
  runner_tag_list: "prod"
  entrypoint: |
    #!/bin/bash
    set -xe
    # Register the runner
    /entrypoint register --non-interactive \
      --url $GITLAB_SERVER_ADDRESS \
      --registration-token $REGISTRATION_TOKEN \
      --executor kubernetes
    # Start the runner
    /entrypoint run --user=gitlab-runner \
      --working-directory=/home/gitlab-runner
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner-config
  annotations:
    configmanagement.gke.io/cluster-selector: dev-cluster-selector
data:
  kubernetes_namespace: "gitlab"
  kubernetes_service_account: "gitlab-runner"
  gitlab_server_address: "https://gitlab.com/"
  runner_tag_list: "dev"
  entrypoint: |
    #!/bin/bash
    set -xe
    # Register the runner
    /entrypoint register --non-interactive \
      --url $GITLAB_SERVER_ADDRESS \
      --registration-token $REGISTRATION_TOKEN \
      --executor kubernetes
    # Start the runner
    /entrypoint run --user=gitlab-runner \
      --working-directory=/home/gitlab-runner
EOF

cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      name: gitlab-runner
  template:
    metadata:
      labels:
        name: gitlab-runner
    spec:
      serviceAccountName: gitlab-runner
      containers:
      - command:
        - /bin/bash
        - /scripts/entrypoint
        image: gcr.io/abm-test-bed/gitlab-runner@sha256:8f623d3c55ffc783752d0b34097c5625a32a910a8c1427308f5c39fd9a23a3c0
        imagePullPolicy: IfNotPresent
        name: gitlab-runner
        resources:
          requests:
            cpu: "100m"
          limits:
            cpu: "100m"
        env:
        - name: GITLAB_SERVER_ADDRESS
          valueFrom:
            configMapKeyRef:
              name: gitlab-runner-config
              key: gitlab_server_address
        - name: RUNNER_TAG_LIST
          valueFrom:
            configMapKeyRef:
              name: gitlab-runner-config
              key: runner_tag_list
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: gitlab-runner-config
              key: kubernetes_namespace
        - name: KUBERNETES_SERVICE_ACCOUNT
          valueFrom:
            configMapKeyRef:
              name: gitlab-runner-config
              key: kubernetes_service_account
        - name: REGISTRATION_TOKEN
          valueFrom:
            secretKeyRef:
              name: gitlab-runner-secret
              key: runner-registration-token
        volumeMounts:
        - name: config
          mountPath: /scripts/entrypoint
          readOnly: true
          subPath: entrypoint
        - mountPath: /tmp/template.config.toml
          name: config
          subPath: template.config.toml
      volumes:
      - name: config
        configMap:
          name: gitlab-runner-config
      restartPolicy: Always
EOF

mkdir gitlab && cd gitlab

cat > namespace.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: gitlab
EOF

cd ..

cat > service-account.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-runner
  annotations:
    iam.gke.io/gcp-service-account: gitlab-sa@${PROJECT_ID}.iam.gserviceaccount.com
EOF
```

- Push the changes to the acm repo:

```bash
git add -A
git commit -m "register gitlab runner"
git push origin dev
```

- Merge the `acm` dev branch with the main branch.
- Create the gitlab runner secret:

```bash
for i in "dev" "prod"
do
  gcloud container clusters get-credentials ${i} --zone ${ZONE}
  kubectl create secret generic gitlab-runner-secret -n gitlab \
    --from-literal=runner-registration-token=${REGISTRATION_TOKEN}
done
```

- To find your REGISTRATION_TOKEN, navigate to the [$GROUP_NAME group](https://gitlab.com/dashboard/groups) page. Click Settings > CI/CD > Runners > Expand.

![alt text](images/registration-token.png "Registration token")

- Verify your runner has been created (Settings > CI/CD > Runners > Expand). You should see two runners listed under group runners.
- Verify that the app is deployed on dev.

### Hello Kubernetes env

Using the `<app-repo>-env` model allows one more layer of checks before a deployment is pushed to production. The hydrated manifests are pushed to hello-kubernetes-env: the stage manifests go to the stage branch and the prod manifests go to the prod branch. In this repo you can set up manual checks to ensure a human reviews and approves changes before they are pushed to production.

- Create `.gitlab-ci.yml` for hello-kubernetes-env:

```bash
cd ~/$GROUP_NAME
git clone git@gitlab.com:$GROUP_URI/hello-kubernetes-env.git
cd hello-kubernetes-env

cat > .gitlab-ci.yml <<EOF
image: google/cloud-sdk

variables:
  KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: default
  KUSTOMIZATION_PATH_BASE: "../base"
  KUSTOMIZATION_PATH_DEV: "./kubernetes/overlays/dev"
  KUSTOMIZATION_PATH_NON_PROD: "./kubernetes/overlays/stage"
  KUSTOMIZATION_PATH_PROD: "./kubernetes/overlays/prod"
  HOSTNAME: "gcr.io"
  PROJECT_ID: "$PROJECT_ID"
  CONTAINER_NAME: "hello-kubernetes"
  # Binary Authorization Variables
  VULNZ_ATTESTOR: "vulnz-attestor"
  VULNZ_KMS_KEY_VERSION: "1"
  VULNZ_KMS_KEY: "vulnz-signer"
  KMS_KEYRING: "binauthz"
  KMS_LOCATION: "us-central1"
  COMPUTE_REGION: "us-central1"
  PROD_CLUSTER: "prod"
  STAGING_CLUSTER: "dev"
  QA_ATTESTOR: "qa-attestor"
  QA_KMS_KEY: "qa-signer"
  QA_KMS_KEY_VERSION: "1"

stages:
  - Deploy Stage
  - Manual Verification
  - Attest QA Deployment
  - Deploy Full GKE

deploy-stage:
  stage: Deploy Stage
  tags:
  - dev
  only:
  - stage
  environment:
    name: nonprod
    url: https://\$CI_ENVIRONMENT_SLUG
  script:
    - kubectl apply -f stage.yaml

manual-verification:
  stage: Manual Verification
  tags:
  - dev
  only:
  - stage
  script:
    - echo "run uat here"
    - sleep 1s
  when: manual
  allow_failure: false

attest-qa-deployment:
  stage: Attest QA Deployment
  tags:
  - dev
  only:
  - stage
  environment:
    name: nonprod
    url: https://\$CI_ENVIRONMENT_SLUG
  retry: 2
  script:
    - kubectl apply -f stage.yaml --dry-run -o=jsonpath='{range .items[*]}{.spec.template.spec.containers[*].image}{"\n"}{end}' | sed '/^[[:space:]]*\$/d' | sed 's/ /\n/g' > nonprd-images.txt
    - cat nonprd-images.txt
    - |
      while IFS= read -r IMAGE; do
        if [[ \$(gcloud beta container binauthz attestations list \\
            --project "\$PROJECT_ID" \\
            --attestor "\$QA_ATTESTOR" \\
            --attestor-project "\$PROJECT_ID" \\
            --artifact-url "\$IMAGE" 2>&1 | grep "\$QA_KMS_KEY" | wc -l) -eq 0 ]]; then
          gcloud beta container binauthz attestations sign-and-create \\
            --project "\$PROJECT_ID" \\
            --artifact-url "\$IMAGE" \\
            --attestor "\$QA_ATTESTOR" \\
            --attestor-project "\$PROJECT_ID" \\
            --keyversion "\$QA_KMS_KEY_VERSION" \\
            --keyversion-key "\$QA_KMS_KEY" \\
            --keyversion-location "\$KMS_LOCATION" \\
            --keyversion-keyring "\$KMS_KEYRING" \\
            --keyversion-project "\$PROJECT_ID"
          echo "Attested Image: \$IMAGE"
        fi
      done < nonprd-images.txt

deploy-production:
  stage: Deploy Full GKE
  tags:
  - prod
  only:
  - prod
  environment:
    name: production
    url: https://\$CI_ENVIRONMENT_SLUG
  script:
    - kubectl apply -f production.yaml
EOF

git add .
git commit -m "Add stage pipeline"
git push
```

- In the hello-kubernetes-env repo on gitlab, create a branch named stage from prod."}
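The attest-qa-deployment job above pipes the dry-run image list through two `sed` filters before attesting each image one at a time. The normalization step can be exercised locally; this is a minimal sketch with hypothetical image digests, assuming GNU `sed` (whose replacement text treats `\n` as a newline):

```shell
# Hypothetical dry-run output: images space-separated on one line,
# followed by a stray blank line (assumed sample data, not real digests).
printf 'gcr.io/my-proj/hello-kubernetes@sha256:aaa gcr.io/my-proj/helper@sha256:bbb\n\n' > raw-images.txt

# Mirror the job's filters: drop blank lines, then put each image on its own
# line so the while-read loop can attest them individually.
sed '/^[[:space:]]*$/d' raw-images.txt | sed 's/ /\n/g' > nonprd-images.txt

cat nonprd-images.txt
```

Each resulting line is then consumed by the job's `while IFS= read -r IMAGE` loop and passed to the `gcloud beta container binauthz attestations` commands.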
{"questions":"GCP node os information This container image can be deployed on a Kubernetes cluster When accessed via a web browser on port it will display The default Hello world message displayed can be overridden using the environment variable The default port of 8080 can be overriden using the environment variable Hello Kubernetes a default Hello world message the pod name","answers":"# Hello Kubernetes!\n\nThis container image can be deployed on a Kubernetes cluster. When accessed via a web browser on port `8080`, it will display:\n- a default **Hello world!** message\n- the pod name\n- node os information\n\n![Hello world! from the hello-kubernetes image](hello-kubernetes.png)\n\nThe default \"Hello world!\" message displayed can be overridden using the `MESSAGE` environment variable. The default port of 8080 can be overriden using the `PORT` environment variable.\n\n## DockerHub\n\nIt is available on DockerHub as:\n\n- [paulbouwer\/hello-kubernetes:1.8](https:\/\/hub.docker.com\/r\/paulbouwer\/hello-kubernetes\/)\n\n## Deploy\n\n### Standard Configuration\n\nDeploy to your Kubernetes cluster using the hello-kubernetes.yaml, which contains definitions for the service and deployment objects:\n\n```yaml\n# hello-kubernetes.yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: hello-kubernetes\nspec:\n  type: LoadBalancer\n  ports:\n  - port: 80\n    targetPort: 8080\n  selector:\n    app: hello-kubernetes\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: hello-kubernetes\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: hello-kubernetes\n  template:\n    metadata:\n      labels:\n        app: hello-kubernetes\n    spec:\n      containers:\n      - name: hello-kubernetes\n        image: paulbouwer\/hello-kubernetes:1.8\n        ports:\n        - containerPort: 8080\n```\n\n```bash\n$ kubectl apply -f yaml\/hello-kubernetes.yaml\n```\n\nThis will display a **Hello world!** message when you hit the service endpoint in a browser. 
You can get the service endpoint ip address by executing the following command and grabbing the returned external ip address value:\n\n```bash\n$ kubectl get service hello-kubernetes\n```\n\n### Customise Message\n\nYou can customise the message displayed by the `hello-kubernetes` container. Deploy using the hello-kubernetes.custom-message.yaml, which contains definitions for the service and deployment objects.\n\nIn the definition for the deployment, add an `env` variable with the name of `MESSAGE`. The value you provide will be displayed as the custom message.\n\n```yaml\n# hello-kubernetes.custom-message.yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: hello-kubernetes-custom\nspec:\n  type: LoadBalancer\n  ports:\n  - port: 80\n    targetPort: 8080\n  selector:\n    app: hello-kubernetes-custom\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: hello-kubernetes-custom\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: hello-kubernetes-custom\n  template:\n    metadata:\n      labels:\n        app: hello-kubernetes-custom\n    spec:\n      containers:\n      - name: hello-kubernetes\n        image: paulbouwer\/hello-kubernetes:1.8\n        ports:\n        - containerPort: 8080\n        env:\n        - name: MESSAGE\n          value: I just deployed this on Kubernetes!\n```\n\n```bash\n$ kubectl apply -f yaml\/hello-kubernetes.custom-message.yaml\n```\n\n### Specify Custom Port\n\nBy default, the `hello-kubernetes` app listens on port `8080`. If you have a requirement for the app to listen on another port, you can specify the port via an env variable with the name of PORT. 
Remember to also update the `containers.ports.containerPort` value to match.\n\nHere is an example:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: hello-kubernetes-custom\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: hello-kubernetes-custom\n  template:\n    metadata:\n      labels:\n        app: hello-kubernetes-custom\n    spec:\n      containers:\n      - name: hello-kubernetes\n        image: paulbouwer\/hello-kubernetes:1.8\n        ports:\n        - containerPort: 80\n        env:\n        - name: PORT\n          value: \"80\"\n```\n\n\n## Build Container Image\n\nIf you'd like to build the image yourself, then you can do so as follows. The `build-arg` parameters provide metadata as defined in [OCI image spec annotations](https:\/\/github.com\/opencontainers\/image-spec\/blob\/master\/annotations.md).\n\nBash\n```bash\n$ docker build --no-cache --build-arg IMAGE_VERSION=\"1.8\" --build-arg IMAGE_CREATE_DATE=\"`date -u +\"%Y-%m-%dT%H:%M:%SZ\"`\" --build-arg IMAGE_SOURCE_REVISION=\"`git rev-parse HEAD`\" -f Dockerfile -t \"hello-kubernetes:1.8\" app\n```\n\nPowershell\n```powershell\nPS> docker build --no-cache --build-arg IMAGE_VERSION=\"1.8\" --build-arg IMAGE_CREATE_DATE=\"$(Get-Date((Get-Date).ToUniversalTime()) -UFormat '%Y-%m-%dT%H:%M:%SZ')\" --build-arg IMAGE_SOURCE_REVISION=\"$(git rev-parse HEAD)\" -f Dockerfile -t \"hello-kubernetes:1.8\" app\n```\n\n## Develop Application\n\nIf you have [VS Code](https:\/\/code.visualstudio.com\/) and the [Visual Studio Code Remote - Containers](https:\/\/marketplace.visualstudio.com\/items?itemName=ms-vscode-remote.remote-containers) extension installed, the `.devcontainer` folder will be used to build a container-based Node.js 13 development environment. \n\nPort `8080` has been configured to be forwarded to your host. If you run `npm start` in the `app` folder in the VS Code Remote Containers terminal, you will be able to access the website on `http:\/\/localhost:8080`. 
You can change the port in the `.devcontainer\\devcontainer.json` file under the `appPort` key.\n\nSee [here](https:\/\/code.visualstudio.com\/docs\/remote\/containers) for more details on working with this setup","site":"GCP"}
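The `MESSAGE`/`PORT` behaviour documented in the hello-kubernetes record above is a plain env-var-with-default pattern. The app itself is Node.js; this Python sketch (function name ours, purely illustrative) shows the same override logic:

```python
# Illustrative sketch (not the app's actual Node.js code) of the
# env-override pattern hello-kubernetes documents: MESSAGE and PORT
# fall back to their defaults when unset.
def resolve_config(env: dict) -> tuple[str, int]:
    message = env.get("MESSAGE", "Hello world!")
    port = int(env.get("PORT", "8080"))
    return message, port

print(resolve_config({}))              # ('Hello world!', 8080)
print(resolve_config({"PORT": "80"}))  # ('Hello world!', 80)
```

Note that overriding `PORT` only changes where the app listens; as the README says, the `containerPort` in the deployment manifest must be updated to match.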
{"questions":"GCP The solution leverages Airflow s dependency management capabilities by dynamically configuring the externaldatefn parameter in the to create a hierarchical relationship between the parent and child DAGs Composer Dependency Management Solution DAG code snippet for Depedency Management using with yearly schedule frequency TL DR This repository presents a Cloud Composer workflow designed to orchestrate complex task dependencies within Apache Airflow The solution specifically addresses the challenge of managing parent child DAG relationships across varying temporal frequencies yearly monthly weekly By implementing similar framework data engineers can ensure reliable and timely triggering of child DAGs in accordance with their respective parent DAG s schedule enhancing overall workflow efficiency and maintainability The goal of this use case is to provide a common pattern to automatically trigger and implement the composer dependency management The primary challenge addressed is the need to handle complex dependencies between DAGs with different frequencies","answers":"# Composer Dependency Management\n\n### TL;DR: This repository presents a Cloud Composer workflow designed to orchestrate complex task dependencies within Apache Airflow. The solution specifically addresses the challenge of managing parent-child DAG relationships across varying temporal frequencies (yearly, monthly, weekly). By implementing similar framework, data engineers can ensure reliable and timely triggering of child DAGs in accordance with their respective parent DAG's schedule, enhancing overall workflow efficiency and maintainability.\n\nThe goal of this use-case is to provide a common pattern to automatically trigger and implement the composer dependency management. The primary challenge addressed is the need to handle complex dependencies between DAGs with different frequencies. 
\n\nThe solution leverages Airflow's dependency management capabilities by dynamically configuring the `execution_date_fn` parameter in the [Airflow External Task Sensor](https:\/\/airflow.apache.org\/docs\/apache-airflow\/stable\/_api\/airflow\/sensors\/external_task\/index.html) to create a hierarchical relationship between the parent and child DAGs.\n\n***Solution DAG code-snippet for Dependency-Management using [external_task_sensor](https:\/\/airflow.apache.org\/docs\/apache-airflow\/stable\/_api\/airflow\/sensors\/external_task\/index.html) with yearly schedule frequency:***\n```python\nfrom datetime import timedelta\n\nfrom airflow.sensors.external_task import ExternalTaskSensor\n\n# Define parent task IDs and external DAG IDs\nparent_tasks = [\n    {\"task_id\": \"parent_task_1\", \"dag_id\": \"company_cal_refresh\", \"schedule_frequency\": \"yearly\"}\n]\n\ndef execution_delta_dependency(logical_date, **kwargs):\n    dt = logical_date\n    # Recover the sensor's task_id from the task instance representation\n    task_instance_id = str(kwargs['task_instance']).split(':')[1].split(' ')[1].split('.')[1]\n    res = None\n    for sub in parent_tasks:\n        if sub['task_id'] == task_instance_id:\n            res = sub\n            break\n\n    schedule_frequency = res['schedule_frequency']\n    parent_dag_poke = ''\n    if schedule_frequency == \"monthly\":\n        parent_dag_poke = dt.replace(day=1).replace(hour=0, minute=0, second=0, microsecond=0)\n    elif schedule_frequency == \"weekly\":\n        parent_dag_poke = (dt - timedelta(days=dt.isoweekday() % 7)).replace(hour=0, minute=0, second=0, microsecond=0)\n    elif schedule_frequency == \"yearly\":\n        parent_dag_poke = dt.replace(day=1, month=1, hour=0, minute=0, second=0, microsecond=0)\n    elif schedule_frequency == \"daily\":\n        parent_dag_poke = dt.replace(hour=0, minute=0, second=0, microsecond=0)\n    print(parent_dag_poke)\n    return parent_dag_poke\n\n# Create external task sensors dynamically\nexternal_task_sensors = []\nfor parent_task in parent_tasks:\n    external_task_sensor = ExternalTaskSensor(\n        task_id=parent_task[\"task_id\"],\n        external_dag_id=parent_task[\"dag_id\"],\n        timeout=900,\n        execution_date_fn=execution_delta_dependency,\n        poke_interval=60,  # Check every 60 seconds\n        mode=\"reschedule\",  # Release the worker slot between pokes\n        check_existence=True\n    )\n    external_task_sensors.append(external_task_sensor)\n```\n\n### Hypothetical use case\n\n#### Workflow Overview\n\n***\nThe workflow involves the following steps:\n\n1. **Create Parent DAGs**:\n    - Create separate DAGs for each parent job (yearly, monthly, and weekly).\n    - Define the schedule for each parent DAG accordingly.\n\n2. **Define Child DAGs**:\n    - Create child DAGs for each task that needs to be executed based on the parent's schedule.\n\n3. **Set Dependencies**:\n    - Use the `ExternalTaskSensor` to establish the dependency between the child DAG and its immediate parent DAG.\n\n4. **Trigger Child DAGs**:\n    - Utilize Airflow's `TriggerDagRunOperator` to trigger child DAGs when the parent DAG completes.\n    - Configure the `wait_for_downstream` parameter to specify the conditions under which the child DAG should be triggered.\n\n5. **Handle Data Lineage**:\n    - Ensure that the child DAGs have access to the necessary data generated by the parent DAG.\n    - Consider using Airflow's XComs or a central data store for data sharing.\n\n![Alt text](img\/composer_mgmt_usecase.png \"Workflow Overview\")\n\n### Benefits\n- Improved DAG organization and maintainability.\n- Simplified dependency management.\n- Reliable execution of child DAGs based on parent schedules.\n- Reduced risk of data inconsistencies.\n- Scalable approach for managing complex DAG dependencies.\n\n## Hypothetical User Story: The Symphony of Data Orchestration\nIn the bustling city of San Francisco, a dynamic e-commerce company named \"Symphony Goods\" was on a mission to revolutionize the online shopping experience. 
At the heart of their success was a robust data infrastructure that seamlessly managed and processed vast amounts of information.\n\n### Symphony Goods Data Workflows\nSymphony Goods relied on a sophisticated data orchestration system powered by Apache Airflow to automate and streamline their data workflows. This system consisted of a series of interconnected data pipelines, each designed to perform specific tasks and produce valuable insights.\n\n#### Yearly Refresh: Company Calendar\nOnce a year, Symphony Goods executed a critical process known as [\"Company_cal_refresh\"](company_cal_refresh.py). This workflow ensured that the company's internal calendars were synchronized across all departments and systems. It involved extracting data from various sources, such as employee schedules, project timelines, and public holidays, and consolidating it into a centralized repository. The updated calendar served as a single source of truth, enabling efficient planning, resource allocation, and communication within the organization.\n\n#### Monthly Refresh: Product Catalog\nEvery month, Symphony Goods performed a \"Product_catalog_refresh\" workflow to keep its product catalog up-to-date. This process involved ingesting data from multiple channels, including supplier feeds, internal databases, and customer feedback. The workflow validated, transformed, and enriched the product information, ensuring that customers had access to accurate and comprehensive product details.\n\n#### Weekly Summary Report\nSymphony Goods generated a \"Weekly_summary_report\" every week to monitor key performance indicators (KPIs) and track business growth. The workflow aggregated data from various sources, such as sales figures, customer engagement metrics, and website traffic analytics. 
It then presented the data in visually appealing dashboards and reports, enabling stakeholders to make informed decisions.\n\n#### Daily Refresh: Product Inventory\nTo ensure optimal inventory management, Symphony Goods ran a [\"Product_inventory_refresh\"](product_inventory_refresh.py) workflow on a daily basis. This workflow extracted inventory data from warehouses, distribution centers, and point-of-sale systems. It calculated available stock levels, identified potential stockouts, and provided recommendations for replenishment. The workflow ensured that Symphony Goods could fulfill customer orders promptly and maintain high levels of customer satisfaction.\n\nThe symphony of data orchestration at Symphony Goods was a testament to the power of automation and integration. By leveraging Apache Airflow, the company was able to streamline its data operations, improve data quality, and gain valuable insights to drive business growth. As Symphony Goods continued to scale its operations, the data orchestration system served as the backbone, ensuring that data was always available, accurate, and actionable.\n\n### Workflow Frequencies\n1. **Yearly**: [Company_cal_refresh](company_cal_refresh.py) \n2. **Monthly**: [Product_catalog_refresh](product_catalog_refresh.py)\n3. **Weekly**: [Weekly_summary_report](weekly_summary_report.py)\n4. **Daily**: [Product_inventory_refresh](product_inventory_refresh.py)\n\n## Use-case Lineage: Summary of Lineage and Dependencies\nThe provided context describes the data orchestration system used by Symphony Goods, an e-commerce company in San Francisco. The system is powered by Apache Airflow and consists of four main workflows:\n\n1. **Yearly: Company_cal_refresh**\n    - Synchronizes internal calendars across all departments and systems, ensuring efficient planning and resource allocation.\n    - Depends on data from employee schedules, project timelines, and public holidays.\n\n2. 
**Monthly: Product_catalog_refresh**\n    - Keeps the product catalog up-to-date by ingesting data from multiple channels and validating, transforming, and enriching it.\n    - Depends on data from supplier feeds, internal databases, and customer feedback.\n\n3. **Weekly: Weekly_summary_report**\n    - Generates weekly summary reports to monitor key performance indicators (KPIs) and track business growth.\n    - Depends on data from sales figures, customer engagement metrics, and website traffic analytics.\n\n4. **Daily: Product_inventory_refresh**\n    - Ensures optimal inventory management by extracting inventory data from various sources and calculating available stock levels.\n    - Depends on data from warehouses, distribution centers, and point-of-sale systems.\n\nThe symphony of data orchestration at Symphony Goods is a testament to the power of automation and integration. By leveraging Apache Airflow, the company was able to streamline its data operations, improve data quality, and gain valuable insights to drive business growth.","site":"GCP"}
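The `execution_delta_dependency` function in the Composer record above boils down to aligning a child DAG's logical date with the logical date of the parent run it must wait on. The same alignment rules can be exercised standalone, without Airflow — a sketch (the function name is ours; the frequency rules mirror the snippet above):

```python
from datetime import datetime, timedelta

# Standalone sketch of the date alignment used by execution_delta_dependency:
# map a child DAG's logical date to the parent run's logical date.
def parent_logical_date(dt: datetime, schedule_frequency: str) -> datetime:
    midnight = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    if schedule_frequency == "monthly":
        return midnight.replace(day=1)            # first of the month
    if schedule_frequency == "weekly":
        # isoweekday() % 7 gives days since the most recent Sunday
        return midnight - timedelta(days=dt.isoweekday() % 7)
    if schedule_frequency == "yearly":
        return midnight.replace(month=1, day=1)   # January 1st
    return midnight  # "daily": same calendar day at midnight

dt = datetime(2024, 7, 18, 13, 45)  # a Thursday
print(parent_logical_date(dt, "yearly"))   # 2024-01-01 00:00:00
print(parent_logical_date(dt, "weekly"))   # 2024-07-14 00:00:00
```

When plugged into the sensor's `execution_date_fn`, a returned date that matches no actual parent run will make the sensor poke until `timeout`, which is why `check_existence=True` is set in the snippet above.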
{"questions":"GCP shanekok9 gmail com Andrew Leach Google karanpalsani utexas edu Dimos Christopoulos Google Contributors Better Consumer Complaint and Support Request Handling With AI Anastasiia Manokhina Google Michael Sherman Google","answers":"# Better Consumer Complaint and Support Request Handling With AI\n\n## Contributors\n\n- Dimos Christopoulos (Google)\n- [Shane Kok](https:\/\/www.linkedin.com\/in\/shane-kok-b1970a82\/) (shanekok9@gmail.com)\n- Andrew Leach (Google)\n- Anastasiia Manokhina (Google)\n- [Karan Palsani](https:\/\/www.linkedin.com\/in\/karanpalsani\/) (karanpalsani@utexas.edu)  \n- Michael Sherman (Google)\n- [Michael Sparkman](https:\/\/www.linkedin.com\/in\/michael-sparkman\/) (michaelsparkman1996@gmail.com)  \n- [Sahana Subramanian](https:\/\/www.linkedin.com\/in\/sahana-subramanian\/) (sahana.subramanian@utexas.edu)  \n\n\n# Overview\n\nThis example shows how to use ML models to predict a company's response to consumer complaints using the public [CFPB Consumer Complaint Database](https:\/\/console.cloud.google.com\/marketplace\/details\/cfpb\/complaint-database?filter=solution-type:dataset&id=5a1b3026-d189-4a35-8620-099f7b5a600b) on BigQuery. It provides an implementation of [AutoML Tables](https:\/\/cloud.google.com\/automl-tables) for model training and batch prediction, and has a flexible config-driven BigQuery SQL pipeline for adapting to new data sources.\n\nThis specific example identifies the outcomes of customer complaints, which could serve a customer support workflow that routes risky cases to specific support channels. But this example can be adapted to other support use cases by changing the label of the machine learning model. For example:\n* Routing support requests to specific teams.\n* Identifing support requests appropriate for templated vs. 
manual responses.\n* Prioritization of support requests.\n* Identifying a specific product (or products) needing support.\n\n## Directory Structure\n\n```\n.\n\u251c\u2500\u2500 scripts         # Python scripts for running the data and modeling pipeline.\n\u251c\u2500\u2500 queries         # SQL queries for data manipulation, cleaning, and transformation.\n\u251c\u2500\u2500 notebooks       # Jupyter notebooks for data exploration. Not part of the pipeline codebase, not reviewed, not tested in the pipeline environment, and dependent on 3rd party Python packages not required by the pipeline. Provided for reference only.\n\u2514\u2500\u2500 config          # Project configuration and table ingestion schemas. The configuration for the pipeline is all in `pipeline.yaml`.\n```\n\n## Solution Diagram\nThe diagram represents what each of the scripts does, including the structure of tables created at each step:\n![diagram](.\/solution-diagram.png)\n\n## Configuration Overview\n\nThe configuration provided with the code is `config\/pipeline.yaml`. This configuration information is used by pipeline scripts and for substitution into SQL queries stored in the `queries` folder.\n\nBasic configuration changes necessary when running the pipeline are discussed with the pipeline running instructions below.\n\nWe recommend making a separate copy of the configuration when you have to change configuration parameters. All pipeline steps are run with the config file as a command line option, and using separate copies makes tracking different pipeline runs more manageable. \n\nThe main sections of the configuration are:\n* `file_paths`: Absolute locations of files read by the pipeline. These will have to be changed to fit your environment.\n* `global`: Core configuration information used by multiple steps of the pipeline. 
It contains the names of the BigQuery dataset and tables, the ID of the Google Cloud Platform project, AutoML Tables model\/data identification parameters, etc.\n* `query_files`: Filenames of SQL queries used by the pipeline.\n* `query_params`: Parameters for substitution into individual SQL queries.\n* `model`: Configuration information for the AutoML Tables model. Includes parameters on training\/optimizing the model, identification of key columns in the training data (e.g., the target), training data columns to exclude from model building, and type configuration for each feature used by the model.\n\n## Instructions for Running the Pipeline to Predict Company Responses to Consumer Complaints\n\nAll instructions were tested on a [Cloud AI Platform Notebook](https:\/\/cloud.google.com\/ai-platform\/notebooks\/docs\/) instance, created through the [UI](https:\/\/console.cloud.google.com\/ai-platform\/notebooks\/instances). If you are running in another environment, you'll have to set up the [`gcloud` SDK](https:\/\/cloud.google.com\/sdk\/install), install Python 3 and virtualenv, and possibly manage other dependencies. We have not tested these instructions in other environments.\n\n**All commands, unless otherwise stated, should be run from the directory containing this README.** \n\n## Enable Required APIs in your Project\n\nThese instructions have been tested in a fresh Google Cloud project without any organization constraints. You should be able to run the code in an existing project, but make sure the following APIs are enabled, and make sure these products can communicate with one another--if you're running in a VPC or have organization-imposed firewall rules or product restrictions, you may have some difficulty.\n\nRequired APIs to enable:\n1. [Compute Engine API](https:\/\/console.cloud.google.com\/apis\/api\/compute.googleapis.com\/)\n1. [BigQuery API](https:\/\/console.cloud.google.com\/apis\/api\/bigquery.googleapis.com\/)\n1. 
[Cloud AutoML API](https:\/\/console.cloud.google.com\/apis\/api\/automl.googleapis.com\/)\n1. [Cloud Storage API](https:\/\/console.cloud.google.com\/apis\/api\/storage-component.googleapis.com\/)\n\n### Setup for a New Local Environment\n\nThese steps should be followed before you run the pipeline for the first time from a new development environment. \n\nAs stated previously, these instructions have been tested in a [Google Cloud AI Platform Notebook](https:\/\/console.cloud.google.com\/ai-platform\/notebooks\/instances).\n\n1. Run `gcloud init`, choose to use a new account, authenticate, and [set your project ID](https:\/\/cloud.google.com\/resource-manager\/docs\/creating-managing-projects#identifying_projects) as the project. Choose a region in the US if prompted to set a default region.\n1. Clone the GitHub project.\n1. Navigate to the directory containing this README.\n1. Create a Python 3 virtual environment (`automl-support` in this example, in your home directory):\n   * Run `python3 -m virtualenv $HOME\/env\/automl-support` .\n   * Activate the environment. Run: `source ~\/env\/automl-support\/bin\/activate` .\n   * Install the required Python packages: `pip install -r requirements.txt` . You may get an error about apache-beam and pyyaml version incompatibilities; this will have no effect.\n\n### Required Configuration Changes\n\nConfiguration is read from a file specified when running the pipeline from the command line. We recommend working with different copies of the configuration for different experiments, environments, and other needs. Note that if values in the configuration match existing tables, resources, etc. in your project, strange errors and possibly data loss may result. The default values in `config\/pipeline.yaml` provided with the code should be changed before running the pipeline.\n\n1. Make a copy of the configuration file: `cp config\/pipeline.yaml config\/my_config.yaml` .\n1. 
Edit `config\/my_config.yaml` and make the following changes, then save:\n   * `file_paths.queries` is the path to the queries subfolder. Change this value to the absolute local path where the queries subfolder resides.\n   * `global.destination_project_id` is the project_id of the project you want to run the pipeline in (and where the AutoML models will live). Change this to your project_id.\n1. Also consider changing the following:\n   * `global.destination_dataset` is the BigQuery dataset where data ingested by the pipeline into your project is stored. Note the table names don't need to change, since they will be written to the new dataset. Make sure this dataset doesn't already exist in your project. If this dataset exists, the training pipeline will fail--you'll need to delete the dataset first.\n   * `global.dataset_display_name` and `global.model_display_name` are the names of the AutoML Tables dataset and model created by the pipeline. Change these to new values if you wish (they can be the same).\n\nYou should create a new config file and change these parameters for every full pipeline run. 
For failed pipeline runs, you'll want to delete the resources specified in these config values since the pipeline will not delete existing resources automatically.\n\nNote that on subsequent pipeline runs, if you aren't rerunning ingestion you don't need to change `global.destination_dataset`, and if you aren't rerunning the model build you don't need to change `global.dataset_display_name` and `global.model_display_name`.\n\nIf you need to change the default paths (because you are running somewhere besides an AI Platform Notebook, because your repo is in a different path, or because your AutoML service account key is in a different location), change the values in `file_paths`.\n\n### Running the Pipeline\n\nThese steps have only been tested for users with the \"Owner\" [IAM role](https:\/\/cloud.google.com\/iam\/docs\/understanding-roles#primitive_role_definitions) in your project. These steps should work for the \"Editor\" role as well, but we have not tested it.\n\nAll commands should be run from the project root (the folder with this README). This assumes your config file is in `config\/my_config.yaml`.\n\n1. Activate the Python environment if it is not already activated. Run: `source ~\/env\/automl-support\/bin\/activate` or similar (see Setup for a New Local Environment, above).\n1. Run the model pipeline: `nohup bash run_pipeline.sh config\/my_config.yaml ftp > pipeline.out & disown` . This command will run the pipeline in the background, save logs to `pipeline.out`, and will not terminate if the terminal is closed. It will run all steps of the pipeline in sequence, or a subset of the steps as determined by the second positional arg (MODE). E.g., `fp` instead of `ftp` would create features and then generate predictions (skipping training) using the model specified in the config. 
Pipeline steps (`$MODE` argument):\n   * Create features (f): This creates the dataset of features (config value `global.destination_dataset`) and feature tables.\n   * Train (t): This creates the training dataset in AutoML Tables (config value `global.dataset_display_name`) and trains the model (config value `global.model_display_name`). Note that in the AutoML Tables UI the dataset will appear as soon as it is created but the model will not appear until it is completely trained.\n   * Predict (p): This makes predictions with the model, and copies the unformatted results to a predictions table (config value `global.predictions_table`). AutoML generates its own dataset in BQ, which will contain errors if predictions for any rows fail. This spurious dataset (named prediction_<model_name>_<timestamp>) will be deleted if there are no errors.\n\nThis command pipes its output to a log file (`pipeline.out`). To follow this log file, run `tail -n 5 -f pipeline.out` to monitor the command while it runs.\n\nSome of the AutoML steps are long-running operations. If you're following the logged output, you'll see increasingly long sleeps between API calls. This is expected behavior. AutoML training can take hours, depending on your config settings. With the default settings, you can expect around two hours to complete the pipeline and model training.\n\n**Note:** If the pipeline is run and the destination dataset has already been created, the run will fail. Use the BQ UI, client, or command line interface to delete the dataset, or select a new destination (`global.destination_dataset`) in the config. AutoML also does not enforce that display names are unique; if multiple datasets or models are created with the same name, the run will fail. 
Use the AutoML UI or client to delete them, or select new display names in the config (`global.dataset_display_name` and `global.model_display_name`).\n\n### Common Configuration Changes\n\n* Change `model.train_budget_hours` to control how long the model trains for. The default is 1, but you should expect an extra hour of spin-up time on top of the training budget. Upping the budget may improve model performance.\n\n## Online Predictions\n\nThe example pipeline makes batch predictions, but a common deployment pattern is to create an API endpoint that receives features and returns a prediction. Follow these steps to deploy a model for online prediction, make a prediction, and then undeploy the model. **Do not leave your model deployed; deployed models can easily cost tens to hundreds of dollars a day.**\n\nAll commands should be run from the project root (the folder with this README). This assumes your config file is in `config\/my_config.yaml`.\n\n1. Make sure you have activated the same virtual environment used for the model training pipeline.\n1. Deploy the model: `bash online_predict.sh config\/my_config.yaml deploy`. Take note of the \"name\" value in the response.\n1. Deployment will take up to 15 minutes. To check the status of the deployment, run the following command, replacing \"operation-name\" with the \"name\" value from the previous step: `curl -X GET -H \"Authorization: Bearer $(gcloud auth application-default print-access-token)\" -H \"Content-Type: application\/json\" https:\/\/automl.googleapis.com\/v1beta1\/operation-name`. When the operation is complete, the response will have a \"done\" item with the value \"true\".\n1. Make a prediction using the provided sample predict_payload.json, containing features of an unlabeled example complaint: `bash online_predict.sh config\/my_config.yaml predict predict_payload.json`. The response will have the different classes with values based on the confidence of the class. 
To predict for different features, change \"values\" in the .json file. The order of features in the JSON is the order of fields in the BigQuery table used to train the model, minus the columns excluded by the `model.exclude_columns` config value.\n1. You should undeploy your model when finished to avoid excessive charges. Run: `bash online_predict.sh config\/my_config.yaml undeploy`. You should also verify in the UI that the model is undeployed.\n\n## Using All Data to Train a Model\n\nThis example intentionally splits the available data into training and prediction. Once you are comfortable with the model's performance, you should train the model on all your available data. You can do this by changing the config value `query_params.train_predict_split.test_threshold` to 0, which will put all data into the training split. Note that once you do this, the batch predict script won't run (since there's no data to use for prediction).","site":"GCP","answers_cleaned":"  Better Consumer Complaint and Support Request Handling With AI     Contributors    Dimos Christopoulos  Google     Shane Kok  https   www linkedin com in shane kok b1970a82    shanekok9 gmail com    Andrew Leach  Google    Anastasiia Manokhina  Google     Karan Palsani  https   www linkedin com in karanpalsani    karanpalsani utexas edu      Michael Sherman  Google     Michael Sparkman  https   www linkedin com in michael sparkman    michaelsparkman1996 gmail com       Sahana Subramanian  https   www linkedin com in sahana subramanian    sahana subramanian utexas edu        Overview  This example shows how to use ML models to predict a company s response to consumer complaints using the public  CFPB Consumer Complaint Database  https   console cloud google com marketplace details cfpb complaint database filter solution type dataset id 5a1b3026 d189 4a35 8620 099f7b5a600b  on BigQuery  It provides an implementation of  AutoML Tables  https   cloud google com automl tables  for model training and batch 
prediction  and has a flexible config driven BigQuery SQL pipeline for adapting to new data sources   This specific example identifies the outcomes of customer complaints  which could serve a customer support workflow that routes risky cases to specific support channels  But this example can be adapted to other support use cases by changing the label of the machine learning model  For example    Routing support requests to specific teams    Identifing support requests appropriate for templated vs  manual responses    Prioritization of support requests    Identifying a specific product  or products  needing support      Directory Structure            scripts           Python scripts for running the data and modeling pipeline      queries           SQL queries for data manipulation  cleaning  and transformation      notebooks         Jupyter notebooks for data exploration  Not part of the pipeline codebase  not reviewed  not tested in the pipeline environment  and dependent on 3rd party Python packages not required by the pipeline  Provided for reference only      config            Project configuration and table ingestion schemas  The configuration for the pipeline is all in  pipeline yaml           Solution Diagram The diagram represents what each of the scripts does  including the structure of tables created at each step    diagram    solution diagram png      Configuration Overview  The configuration provided with the code is  config pipeline yaml   This configuration information is used by pipeline scripts and for substitution into SQL queries stored in the  queries  folder   Basic configuration changes necessary when running the pipeline are discussed with the pipeline running instructions below   We recommend making a separate copy of the configuration when you have to change configuration parameters  All pipeline steps are run with the config file as a command line option  and using separate copies makes tracking different pipeline runs more manageable    The 
main sections of the configuration are     file paths   Absolute locations of files read by the pipeline  These will have to be changed to fit your environment     global   Core configuration information used by multiple steps of the pipeline  It contains the names of the BigQuery dataset and tables  the ID of the Google Cloud Platform project  AutoML Tables model data identification parameters  etc     query files   Filenames of SQL queries used by the pipeline     query params   Parameters for substitution into individual SQL queries     model   Configuration information for the AutoML Tables Model  Includes parameters on training optimizing the model  identification of key columns in the training data  e g   the target   training data columns to exclude from model building  and type configuration for each feature used by the model      Instructions for Running the Pipeline to Predict Company Responses to Consumer Complaints  All instructions were tested on a  Cloud AI Platform Notebook  https   cloud google com ai platform notebooks docs   instance  created through the  UI  https   console cloud google com ai platform notebooks instances   If you are running in another environment  you ll have to setup the   gcloud  SDK  https   cloud google com sdk install   install Python 3 and virtualenv  and possibly manage other dependencies  We have not tested these instructions in other environments     All commands  unless otherwise stated  should be run from the directory containing this README         Enable Required APIs in your Project  These instructions have been tested in a fresh Google Cloud project without any organization constraints  You should be able to run the code in an existing project  but make sure the following APIs are enabled  and make sure these products can communicate with one another  if you re running in a VPC or have organization imposed firewall rule or product restrictions you may have some difficulty   Required APIs to enable  1   Compute 
Engine API  https   console cloud google com apis api compute googleapis com   1   BigQuery API  https   console cloud google com apis api bigquery googleapis com   1   Cloud AutoML API  https   console cloud google com apis api automl googleapis com   1   Cloud Storage API  https   console cloud google com apis api storage component googleapis com        Setup for a New Local Environment  These steps should be followed before you run the pipeline for the first time from a new development environment    As stated previously  these instructions have been tested in a  Google Cloud AI Platforms Notebook  https   console cloud google com ai platform notebooks instances    1  Run  gcloud init   choose to use a new account  authenticate  and  set your project ID  https   cloud google com resource manager docs creating managing projects identifying projects  as the project  Choose a region in the US if prompted to set a default region  1  Clone the github project  1  Navigate to the directory containing this readme  1  Create a Python 3 virtual environment   automl support  in this example  in your home directory        Run  python3  m virtualenv  HOME env automl support         Activate the environment  Run   source   env automl support bin activate         Install the required Python packages   pip install  r requirements txt    You may get an error about apache beam and pyyaml version incompatibilities  this will have no effect       Required Configuration Changes  Configuration is read from a file specified when running the pipeline from the command line  We recommend working with different copies of the configuration for different experiments  environments  and other needs  Note that if values in the configuration match existing tables  resources  etc  in your project  strange errors and possibly data loss may result  The default values in  config pipeline yaml  provided with the code should be changed before running the pipeline   1  Make a copy of the configuration 
file   cp config pipeline yaml config my config yaml    1  Edit  config my config yaml  and make the following changes then save        file paths queries  is the path to the queries subfolder  Change this value to the absolute local path where the queries subfolder resides        global destination project id  is the project id of the project you want to run the pipeline in  and where the AutoML models will live   Change this to your project id  1  Also consider changing the following       global destination dataset  is the BigQuery dataset where data ingested by the pipeline into your project is stored  Note the table names don t need to change  since they will be written to the new dataset  Make sure this dataset doesn t already exist in your poject  If this dataset exists  the training pipeline will fail  you ll need to delete the dataset first        global dataset display name  and  global model display name  are the name of the AutoML Tables dataset and model created by the pipeline  Change these to new values if you wish  they can be the same    You should create a new config file and change these parameters for every full pipeline run  For failed pipeline runs  you ll want to delete the resources specified in these config values since the pipeline will not delete existing resources automatically   Note that on subsequent pipeline runs if you aren t rerunning ingestion you don t need to change  global destination dataset   and if you aren t rerunning the model build you don t need to change  global dataset display name  and  global model display name    If you need to change the default paths  because you are running somewhere besides an AI Platform Notebook  because your repo is in a different path  or because your AutoML service account key is in a different location  change the values in  file paths        Running the Pipeline  These steps have only been tested for users with the  Owner   IAM role  https   cloud google com iam docs understanding roles 
primitive role definitions  in your project  These steps should work for the  Editor  role as well  but we have not tested it   All commands should be run from the project root  the folder with this README   This assumes your config file is in  config my config yaml    1  Active the Python environment if it is not already activated  Run   source   env automl support bin activate  or similar  see Setup for a New Environment  above   1  Run the model pipeline   nohup bash run pipeline sh config my config yaml ftp   pipeline out   disown    This command will run the pipeline in the background  save logs to  pipeline out   and will not terminate if the terminal is closed  It will run all steps of the pipeline in sequence  or a subset of the steps as determined by the second positional arg  MODE   Ex   fp  instead of  fp  would create features and then generate predictions using the model specified in the config  Pipline steps    MODE  argument        Create features  f   This creates the dataset of features  config value  global destination dataset   and feature tables       Train  t   This creates the training dataset in AutoML Tables Forecasting  config value  global dataset display name   and trains the model  config value  global model display name    Note that in the AutoML Tables UI the dataset will appear as soon as it is created but the model will not appear until it is completely trained       Predict  p   This makes predictions with the model  and copies the unformatted results to a predictions table  config value  global predictions table    AutoML generates its own dataset in BQ  which will contain errors if predictions for any rows fail  This spurious dataset  named prediction  model name   timestamp   will be deleted if there are no errors         This command pipes its output to a log file   pipeline out    To follow this log file  run  tail  n 5  f pipeline out  to monitor the command while it runs  This command pipes its output to a log file   pipeline 
out    To follow this log file  run  tail  n 5  f pipeline out  to monitor the command while it runs       Some of the AutoML steps are long running operations  If you re following the logged output  you ll see continually longer sleepings between API calls  This is expected behavior  AutoML training can take hours  depending on your config settings  With the default settings  you can expect around two hours to complete the pipeline and model training     Note    If the pipeline is run and the destination datasets has already been created  the run will fail  Use the BQ UI  client  or command line interface to delete the dataset  or select new destinations   global destination dataset   in the config  AutoML also does not enforce that display names are unique  if multiple datasets or models are created with the same name  the run will fail  Use the AutoML UI or client to delete them  or select new display names in the config   global dataset display name  and  global model display name         Common Configuration Changes    Change  model train budget hours  to control how long the model trains for  The default is 1  but you should expect an extra hour of spin up time on top of the training budget  Upping the budget may improve model performance          Online Predictions  The example pipeline makes batch predictions  but a common deployment pattern is to create an API endpoint that receives features and returns a prediction  Do the following steps to deploy a model for online prediction  make a prediction  and then undeploy the model    Do not leave your model deployed  deployed models can easily cost tens to hundreds of dollars a day     All commands should be run from the project root  the folder with this README   This assumes your config file is in config my config yaml   1  Make sure you have activated the same virtual environment used for the model training pipeline  1  Deploy the model   bash online predict sh config my config yaml deploy   Take note of the 
 name  value in the response  1  Deployment will take up to 15 minutes  To check the status of the deployment run the following command  replacing  operation name  with the  name  value from the previous step   curl  X GET  H  Authorization  Bearer   gcloud auth application default print access token    H  Content Type  application json  https   automl googleapis com v1beta1 operation name   When the operation is complete  the response will have a  done  item with the value  true   1  Make a prediction using the provided sample predict payload json  containing features of an unlabeled example complaint   bash online predict sh config my config yaml predict predict payload json   The response will have the different classes with values based on the confidence of the class  To predict for different features  change  values  in the  json file  The order of features in the json is the order of fields in the BigQuery Table used to train the model  minus the columns excluded by the  model exclude columns  config value  1  You should undeploy your model when finished to avoid excessive charges  Run   bash online predict sh config my config yaml undeploy   You should also verify in the UI that the model is undeployed      Using All Data to Train a Model  This example intentionally splits the available data into training and prediction  Once you are comfortable with the model s performance  you should train the model on your available data  You can do this by changing the the config value  query params train predict split test threshold  to 0  which will put all data into the training split  Note that once you do this  the batch predict script won t run  since there s no data to use for prediction "}
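The `test_threshold` split described in the record above can be pictured with a minimal Python sketch (the hashing scheme and record IDs here are hypothetical; the pipeline's real split lives in its SQL queries):

```python
import hashlib

def assign_split(record_id: str, test_threshold: float) -> str:
    """Deterministically bucket a record into 'train' or 'predict'.

    Hash the record ID to a fraction in [0, 1); IDs whose fraction falls
    below the threshold go to the prediction (held-out) split. Setting
    test_threshold to 0 sends every record to training, which is why the
    batch predict step then has nothing left to score.
    """
    digest = int(hashlib.md5(record_id.encode("utf-8")).hexdigest(), 16)
    fraction = (digest % 10_000) / 10_000.0
    return "predict" if fraction < test_threshold else "train"

# With a threshold of 0, all records land in the training split.
splits = {assign_split(f"complaint-{i}", 0.0) for i in range(100)}
```

Hashing (rather than random sampling) keeps the split stable across pipeline reruns.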
{"questions":"GCP In production setup it s also useful to load test your models to tune example illustrates how to automate deployment of your trained models to GKE your service can handle the required throughput First of all we need to train a model You are welcome to experiment with your This You can serve your TensorFlow models on Google Kubernetes Engine with TensorFlow Serving configuration and your whole setup as well as to make sure Prerequisites Preparing a model own model or you might train an example based on this","answers":"You can serve your TensorFlow models on Google Kubernetes Engine with\n[TensorFlow Serving](https:\/\/www.tensorflow.org\/tfx\/guide\/serving). This\nexample illustrates how to automate deployment of your trained models to GKE.\nIn production setup, it's also useful to load test your models to tune\nTensorFlow Serving configuration and your whole setup, as well as to make sure\nyour service can handle the required throughput.\n# Prerequisites\n## Preparing a model\nFirst of all, we need to train a model. You are welcome to experiment with your\nown model or you might train an example based on this\n[tutorial](https:\/\/www.tensorflow.org\/tutorials\/structured_data\/feature_columns).  \n\n```\ncd tensorflow\npython create_model.py\n```\nwould create\n## Creating GKE clusters for load testing and serving\nNow we need to deploy our model. We're going to serve our model with Tensorflow Serving\nlaunched in a docker container on a GKE cluster. Our _Dockerfile_ looks pretty simple:\n```\nFROM tensorflow\/serving:latest\n\nADD batching_parameters.txt \/benchmark\/batching_parameters.txt\nADD models.config \/benchmark\/models.config\n\nADD saved_model_regression \/models\/regression\n```\nWe only add model(s) binaries and a few configuration files. 
In `models.config` we define\none (or many) models to be launched:\n```\nmodel_config_list {\n  config {\n    name: 'regression'\n    base_path: '\/models\/regression\/'\n    model_platform: \"tensorflow\"\n  }\n}\n\n```\nWe also need to create a GKE cluster and deploy a _tensorflow-app_ service there that would\nexpose ports 8500 and 8501 (for gRPC and HTTP requests, respectively) under a load balancer. \n```\npython experiment.py\n```\nwould create a _kubernetes.yaml_ file with default serving parameters.\n\nFor load testing we use the [locust](https:\/\/locust.io\/) framework. We've implemented a _RegressionUser_\ninheriting from _locust.HttpUser_ and configured locust to work in distributed mode.\n\nNow we need to create two GKE clusters. We're doing this to emulate cross-cluster network latency\nas well as to experiment with different hardware for TensorFlow. All our deployments are\ndone with Cloud Build, and you can use a bash script to run e2e infrastructure creation.\n```\nexport TENSORFLOW_MACHINE_TYPE=e2-highcpu-8\nexport LOCUST_MACHINE_TYPE=e2-highcpu-32\nexport CLUSTER_ZONE=<GCP_ZONE>\nexport GCP_PROJECT=<YOUR_PROJECT>\n.\/create-cluster.sh\n```\n\n## Running a load test\nAfter a cluster has been created, you need to forward a port to localhost:\n```\ngcloud container clusters get-credentials ${LOCUST_CLUSTER_NAME} --zone ${CLUSTER_ZONE}  --project=${GCP_PROJECT}\nexport LOCUST_CONTEXT=\"gke_${GCP_PROJECT}_${CLUSTER_ZONE}_loadtest-locust-${LOCUST_MACHINE_TYPE}\"\nkubectl config use-context ${LOCUST_CONTEXT}\nkubectl port-forward svc\/locust-master 8089:8089\n```\nNow you can access the locust UI at _localhost:8089_ and initiate a load test of your model.\nWe've observed the following results for the example model: 8ms @p50 and 11ms @p99 at 300 queries per\nsecond, and 13ms @p50 and 47ms @p99 at 3900 queries per second.\n\n## Experimenting with additional serving parameters\nTry using different hardware for TensorFlow Serving, e.g., 
recreate a GKE cluster using\n`n2-highcpu-8` machines. We've observed a significant improvement in both tail\nlatency and the throughput we could handle (with the same number of nodes): 3ms @p50 and 5ms @p99 at\n300 queries per second, and 15ms @p50 and 46ms @p90 at 15000 queries per second.\n\nAnother thing to experiment with is different [batching](https:\/\/www.tensorflow.org\/tfx\/serving\/serving_config#batching_configuration)\nparameters (you might look at the batching tuning\n[guide](https:\/\/github.com\/tensorflow\/serving\/blob\/master\/tensorflow_serving\/batching\/README.md#performance-tuning))\nas well as other TensorFlow Serving parameters defined\n[here](https:\/\/github.com\/tensorflow\/serving\/blob\/master\/tensorflow_serving\/model_servers\/main.cc#L59).\n\nOne possible configuration might be this one:\n```\npython experiment.py --enable_batching \\\n--batching_parameters_file=\/benchmark\/batching_parameters.txt \\\n--max_batch_size=8000 --batch_timeout_micros=4 --num_batch_threads=4 \\\n--tensorflow_inter_op_parallelism=4 --tensorflow_intra_op_parallelism=4\n```\nIn this case, your _kubernetes.yaml_ would have the following lines:\n```\nspec:\n  replicas: 3\n  selector:\n    matchLabels:\n      app: tensorflow-app\n  template:\n    metadata:\n      labels:\n        app: tensorflow-app\n    spec:\n      containers:\n      - name: tensorflow-app\n        image: gcr.io\/mogr-test-277422\/tensorflow-app:latest\n        env:\n        - name: MODEL_NAME\n          value: regression\n        ports:\n        - containerPort: 8500\n        - containerPort: 8501\n        args: [\"--model_config_file=\/benchmark\/models.config\", \"--tensorflow_intra_op_parallelism=4\",\n               \"--tensorflow_inter_op_parallelism=4\",\n               \"--batching_parameters_file=\/benchmark\/batching_parameters.txt\", \"--enable_batching\"]\n```\nAnd the _batching_parameters.txt_ would look like this:\n```\nmax_batch_size { value: 8000 
}\nbatch_timeout_micros { value: 4 }\nmax_enqueued_batches { value: 100 }\nnum_batch_threads { value: 4 }\n```\nWith this configuration, we would achieve much better performance (both higher throughput and lower\nlatency).","site":"GCP"}
{"questions":"GCP so that both read and writes are evenly distributed across the keys space Although we have tools our key is performing it is not obvious how to change or update a key for all the records in a table Bigtable instance and to write the same records to another table with the same For an optimal performance of our requests to a Bigtable instance it is crucial to choose Dataflow pipeline to change the key of a Bigtable This example contains a Dataflow pipeline to read data from a table in a such as to diagnose how a good key for our records https cloud google com bigtable docs schema design","answers":"## Dataflow pipeline to change the key of a Bigtable\n\nFor an optimal performance of our requests to a Bigtable instance, [it is crucial to choose\na good key for our records](https:\/\/cloud.google.com\/bigtable\/docs\/schema-design),\nso that both read and writes are evenly distributed across the keys space. Although we have tools\nsuch as [Key Visualizer](https:\/\/cloud.google.com\/bigtable\/docs\/keyvis-overview), to diagnose how\nour key is performing, it is not obvious how to change or update a key for all the records in a table.\n\nThis example contains a Dataflow pipeline to read data from a table in a\nBigtable instance, and to write the same records to another table with the same\nschema, but using a different key.\n\nThe pipeline does not assume any specific schema and can work with any table.\n\n### Build the pipeline\n\nThe build process is managed using Maven. To compile, just run\n\n`mvn compile`\n\nTo create a package for the pipeline, run\n\n`mvn package`\n\n### Setup `cbt`\n\nThe helper scripts in this repo use `cbt` to work (to create sample tables, to\ncreate an output table with the same schema as the input table). 
If you have not\nconfigured `cbt` yet, please see the following link:\n\n* https:\/\/cloud.google.com\/bigtable\/docs\/cbt-overview\n\nIn summary, you need to include your GCP project and Bigtable instance in the\n`~\/.cbtrc` file as shown below:\n\n```\nproject = YOUR_GCP_PROJECT_NAME\ninstance = YOUR_BIGTABLE_INSTANCE_NAME\n```\n\n### Create a sandbox table for testing purposes\n\nIf you already have data in Bigtable, you can ignore this section.\n\nIf you don't have any table available to try this pipeline out, you can create\none using a script in this repo:\n\n```bash\n$ .\/scripts\/create_sandbox_table.sh MY_TABLE\n```\n\nThat will create a table of name `MY_TABLE` with three records. To check that\nthe data has been actually written to the table, you can use `cbt count` and\n`cbt read`:\n\n```bash\n$ cbt count taxi_rides\n2020\/01\/29 17:44:31 -creds flag unset, will use gcloud credential\n3\n$ cbt read taxi_rides\n2020\/01\/29 17:43:34 -creds flag unset, will use gcloud credential\n----------------------------------------\n33cb2a42-d9f5-4b64-9e8a-b5aa1d6e142f#132\n  id:point_idx                             @ 2020\/01\/29-16:22:32.551000\n    \"132\"\n  id:ride_id                               @ 2020\/01\/29-16:22:31.407000\n    \"33cb2a42-d9f5-4b64-9e8a-b5aa1d6e142f\"\n  loc:latitude                             @ 2020\/01\/29-16:22:33.711000\n[...]\n```\n\n### Create the output table\n\nThe output table must exist before the pipeline is run, and it must have the\nsame schema as the input table.\n\nThat table will be **OVERWRITTEN** if it already contains data.\n\nIf you have `cbt` configured, you can use one of the scripts in this repo to\ncreate an empty table replicating the schema of another table:\n\n```bash\n$ .\/scripts\/copy_schema_to_new_table.sh MY_INPUT_TABLE MY_OUTPUT_TABLE\n```\n\n### Command line options for the pipeline\n\nIn addition to the [Dataflow command 
line\noptions](https:\/\/cloud.google.com\/dataflow\/docs\/guides\/specifying-exec-params),\nthis pipeline has three additional required options:\n\n* ``--bigtableInstance``, the name of the Bigtable instance where all the\n  tables are located\n* ``--inputTable``, the name of an existing table with the input data\n* ``--outputTable``, the name of an existing table. **BEWARE: it will be\n  overwritten**.\n\nRemember that Dataflow also requires at least the following command line\noptions:\n\n* ``--project``\n* ``--tempLocation``\n* ``--runner``\n* ``--region``\n\n### Run the pipeline as a standalone Java app\n\nYou don't necessarily need Maven to run a Dataflow pipeline; you can also use\nthe package generated by Maven as a standalone Java application. Make sure that\nyou have generated a package with `mvn package` and that the _bundled_ JAR file\nis available locally on the machine triggering the pipeline.\n\nThen run:\n\n```bash\n# Change if your location is different\nJAR_LOC=target\/bigtable-change-key-bundled-0.1-SNAPSHOT.jar\n\nPROJECT_ID=<YOUR_PROJECT>\nREGION=<YOUR_REGION_TO_RUN_DATAFLOW>\nTMP_GS_LOCATION=<GCS_URI_FOR_TEMP_FILES>\n\nBIGTABLE_INSTANCE=<YOUR_INSTANCE>\nINPUT_TABLE=<YOUR_INPUT_TABLE>\nOUTPUT_TABLE=<YOUR_OUTPUT_TABLE>\n\nRUNNER=DataflowRunner\n\njava -cp ${JAR_LOC} com.google.cloud.pso.pipeline.BigtableChangeKey \\\n        --project=${PROJECT_ID} \\\n        --gcpTempLocation=${TMP_GS_LOCATION} \\\n        --region=${REGION} \\\n        --runner=${RUNNER} \\\n        --bigtableInstance=${BIGTABLE_INSTANCE} \\\n        --inputTable=${INPUT_TABLE} \\\n        --outputTable=${OUTPUT_TABLE}\n```\n\nThen go to the Dataflow UI to check that the job is running properly.\n\nYou should see a job with a simple graph, similar to this one:\n\n![Pipeline graph](.\/imgs\/pipeline_graph.png)\n\nYou can now check that the destination table has the same records as the input\ntable, and that the key has changed. 
You can use `cbt count` and `cbt read` for\nthat purpose, by comparing with the results of the original table.\n\n### Change the update key function\n\nThe pipeline [includes a key transform function that just reverses\nthe key](.\/src\/main\/java\/com\/google\/cloud\/pso\/pipeline\/BigtableChangeKey.java#L30-L49).\nIt is provided only as an example, to make it easier to write your own function.\n\n```java\n\/**\n   * Return a new key for a given key and record in the existing table.\n   *\n   * <p>The purpose of this method is to test different key strategies over the same data in\n   * Bigtable.\n   *\n   * @param key The existing key in the table\n   * @param record The full record, in case it is needed to choose the new key\n   * @return The new key for the same record\n   *\/\n  public static String transformKey(String key, Row record) {\n    \/**\n     * TODO: Change the existing key here, by a new key\n     *\n     * <p>Here we just reverse the key, as a demo. Ideally, you should test different strategies,\n     * test the performance obtained with each key transform strategy, and then decide how you need\n     * to change the keys.\n     *\/\n    return StringUtils.reverse(key);\n  }\n```\n\nThe function has two input parameters:\n\n* `key`: the current key of the record\n* `record`: the full record, with all the column families, columns,\n   values\/cells, versions of cells, etc.\n\nThe `record` is of type\n[com.google.bigtable.v2.Row](http:\/\/googleapis.github.io\/googleapis\/java\/all\/latest\/apidocs\/com\/google\/bigtable\/v2\/Row.html).\nYou can traverse the record to recover all the elements. See [an example of how\nto traverse a\nRow](src\/main\/java\/com\/google\/cloud\/pso\/transforms\/UpdateKey.java#L57-L78).\n\nThe new key must be returned as a `String`. In order to leverage this pipeline,\nyou must create the new key using the previous key and the data contained in the\nrecord. 
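The bundled transform simply reverses the key. Another common strategy is to prepend a short hash-derived salt, which spreads sequential keys across the key space while keeping the original key recoverable. A sketch of that idea (in Python for brevity; the pipeline itself is Java, and the salt length and separator here are arbitrary choices):

```python
import hashlib

def transform_key(key: str) -> str:
    """Prefix the key with a short hash of itself to spread sequential keys."""
    # 4 hex chars of an MD5 digest give 65536 buckets, enough to break up
    # hotspots caused by monotonically increasing keys. The transform is
    # deterministic, so re-running the pipeline produces the same keys.
    salt = hashlib.md5(key.encode("utf-8")).hexdigest()[:4]
    # Keep the original key after the separator so it stays recoverable.
    return f"{salt}#{key}"
```

The same logic ported into the Java `transformKey` above would replace the `StringUtils.reverse(key)` line.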
This pipeline assumes that you don't need any other external piece of\ninformation.\n\nThe function that you create is [passed to the `UpdateKey` transform in these\nlines](src\/main\/java\/com\/google\/cloud\/pso\/pipeline\/BigtableChangeKey.java#L81-L83).\nYou can pass any function (named functions, lambdas, etc.), and the `UpdateKey`\ntransform will make sure that the function can be serialized. You need to make\nsure that you are passing an idempotent function that is _thread-compatible_\nand serializable, or you may experience issues when the function is called from\nthe pipeline workers. For more details about the requirements of your code, see:\n\n* https:\/\/beam.apache.org\/documentation\/programming-guide\/#requirements-for-writing-user-code-for-beam-transforms\n\nAny pure function with no side effects and\/or external dependencies (other than\nthose passed through the input arguments, namely the key and the record) will\nfulfill those requirements.\n\n## License\n\nCopyright 2020 Google LLC\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n* http:\/\/www.apache.org\/licenses\/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.","site":"GCP"}
{"questions":"GCP 1 Create a new project in CSR and clone it to your machine Google Cloud Build to automatically run unit tests and pylint upon code check in By following this tutorial you will learn This repo contains example code and instructions that show you how to use CSR and Basic Python Continuous Integration CI With Cloud Source Repositories CSR and Google Cloud Build how to build basic Python continuous integration pipelines on Google Cloud Platform GCP 1 Create your code and unit tests By following along with this example you will learn how to Overview","answers":"# Basic Python Continuous Integration (CI) With Cloud Source Repositories (CSR) and Google Cloud Build\n\n## Overview\nThis repo contains example code and instructions that show you how to use CSR and\nGoogle Cloud Build to automatically run unit tests and pylint upon code check-in. By following this tutorial you will learn\nhow to build basic Python continuous integration pipelines on Google Cloud Platform (GCP).\n\nBy following along with this example you will learn how to:\n1. Create a new project in CSR and clone it to your machine.\n1. Create your code and unit tests.\n1. Create a custom container image called a cloud builder that Google Cloud Build will use to run your Python tests.\n1. Create a cloud build trigger that will tell Google Cloud Build when to run.\n1. Tie it all together by creating a cloudbuild.yaml file that tells Google Cloud Build how to execute your tests\n when the cloud build trigger fires, using the custom cloud builder you created.\n\nIn order to follow along with this tutorial you'll need to:\n* Create or have access to an existing [GCP project](https:\/\/cloud.google.com\/resource-manager\/docs\/creating-managing-projects).\n* Install and configure the [Google Cloud SDK](https:\/\/cloud.google.com\/sdk).\n\n## 1. 
Create a new project in Cloud Source Repositories\nYou'll start by creating a new repository in CSR, copying the files in this example into the CSR repository, and\ncommitting them to your new repository.\n\n1. Go to [https:\/\/source.cloud.google.com\/](https:\/\/source.cloud.google.com\/).\n1. Click 'Add repository'.\n1. Choose 'Create new repository'.\n1. Specify a name, and your project name.\n1. Follow the instructions to 'git clone' the empty repo to your workstation.\n1. Copy the files from this example into the new repo.\n1. Add the files to the new repo with the command:\n    ```bash\n    git add .\n    ```\n1. Commit and push these files in the new repo by running the commands:\n    ```bash\n    git commit -m 'Creating a repository for Python Cloud Build CI example.'\n    git push origin master\n    ```\n\nYou can alternatively do the same using the Google Cloud SDK:\n1. Choose a name for your source repo and configure an environment variable for that name with the command:\n    ```bash\n    export REPO_NAME=<YOUR_REPO_NAME>\n    ```\n1. Create the repository by running the command:\n     ```bash\n     gcloud source repos create $REPO_NAME\n     ```\n1. Clone the new repository to your local machine by running the command:\n     ```bash\n     gcloud source repos clone $REPO_NAME\n     ```\n1. Copy the files from this example into the new repo.\n1. Add the files to the new repo with the command:\n    ```bash\n    git add .\n    ```\n1. Commit and push these files in the new repo by running the commands:\n    ```bash\n    git commit -m 'Creating repository for Python Cloud Build CI example.'\n    git push origin master\n    ```\n\n## 2. Create your code and unit tests\nCreating unit tests is beyond the scope of this README, but if you review the tests in tests\/ you'll quickly get the idea.\nPytest is being used as the testing suite for this project. 
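For reference, a test in tests/ might look like the sketch below; the `add` function is a hypothetical stand-in, not the actual contents of this repo's my_module:

```python
# tests/test_my_module.py -- illustrative only.

def add(a, b):
    # Stand-in for a function you would normally import from my_module.
    return a + b

def test_add():
    assert add(2, 3) == 5

def test_add_handles_negatives():
    assert add(-1, 1) == 0
```

Pytest discovers any `test_*` function in files matching `test_*.py` automatically, so no registration boilerplate is needed.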
Before proceeding make sure you can run your tests from the\ncommand line by running this command from the root of the project:\n\n```bash\npython3 -m pytest\n```\n\nOr if you want to be fancy and use the [coverage](https:\/\/pytest-cov.readthedocs.io\/en\/latest\/readme.html) plug-in:\n\n```bash\npython3 -m pytest --cov=my_module tests\/\n```\n\nIf everything goes well you should expect to see output like this, showing successful tests:\n\n```bash\n$ python3 -m pytest --cov=my_module tests\/\n============================================================================= test session starts ==============================================================================\nplatform darwin -- Python 3.7.3, pytest-4.6.2, py-1.8.0, pluggy-0.12.0\nrootdir: \/Users\/mikebernico\/dev\/basic-cicd-cloudbuild\nplugins: cov-2.7.1\ncollected 6 items\n\ntests\/test_my_module.py ......                                                                                                                                           [100%]\n\n---------- coverage: platform darwin, python 3.7.3-final-0 -----------\nName                     Stmts   Miss  Cover\n--------------------------------------------\nmy_module\/__init__.py        1      0   100%\nmy_module\/my_module.py       4      0   100%\n--------------------------------------------\nTOTAL                        5      0   100%\n```\n\nNow that your tests are working locally, you can configure Google Cloud Build to run them every time you push new code.\n\n## 3. Building a Python Cloud Build Container\nTo run Python-based tests in Google Cloud Build you need a Python container image used as a [cloud builder](https:\/\/cloud.google.com\/cloud-build\/docs\/cloud-builders).\nA cloud builder is just a container image with the software you need for your build steps in it. Google does not distribute a\nprebuilt Python cloud builder, so a custom cloud builder is required. 
The code needed to build a custom Python 3 cloud build\ncontainer is located in '\/python-cloud-builder' under the root of this project.\n\nInside that folder you will find a Dockerfile and a very minimal cloudbuild.yaml.\n\nThe Dockerfile specifies the software that will be inside the container image.\n\n```Docker\n# Start from the public Python 3 image hosted on Docker Hub.\nFROM python:3\n# Install virtualenv so that a virtual environment can be used to carry build steps forward.\nRUN pip install virtualenv\n```\n\nThe Dockerfile is enough to build the image; however, you will also need a cloudbuild.yaml to tell Cloud Build how to\nbuild and upload the resulting image to GCP.\n\n```yaml\nsteps:\n- name: 'gcr.io\/cloud-builders\/docker'\n  args: [ 'build', '-t', 'gcr.io\/$PROJECT_ID\/python-cloudbuild', '.' ]\n  # This step tells Google Cloud Build to use docker to build the Dockerfile.\nimages:\n- 'gcr.io\/$PROJECT_ID\/python-cloudbuild'\n  # The resulting image is then named python-cloudbuild and uploaded to your project's container registry.\n```\n\nYou don't need to change either of these files to follow this tutorial. They are included here to help you understand\nthe process of building custom build images. Once you're ready, run these commands from the root of the project to build\nand upload your custom Python cloud builder:\n\n```bash\ncd python-cloud-builder\ngcloud builds submit --config=cloudbuild.yaml .\n```\nThis creates the custom Python cloud builder and uploads it to your GCP project's\n[container registry](https:\/\/cloud.google.com\/container-registry\/), which is a private\nlocation to store container images. Your new cloud builder will be called gcr.io\/$PROJECT_ID\/python-cloudbuild, where\n$PROJECT_ID is the name of your GCP project.\n\n## 4. 
Create a Google Cloud Build Trigger\nNow that you've created a Python cloud builder to run your tests, you\nshould [create a trigger](https:\/\/cloud.google.com\/cloud-build\/docs\/running-builds\/automate-builds) that tells Cloud\nBuild when to run those tests. To do that, follow these steps:\n\n1. On the GCP console navigate to 'Cloud Build' > 'Triggers'.\n1. Add a trigger by clicking '+ CREATE TRIGGER'.\n1. Choose the cloud source repository you created in step 1 from the 'Repository' drop-down.\n1. Assuming you want the trigger to fire on any branch, accept the default trigger type and regex.\n1. Choose the 'Cloud Build configuration file (yaml or json)' radio button under 'Build configuration'.\n1. Click 'Create trigger'.\n\n\n## 5. Create a cloudbuild.yaml file that executes your tests and runs pylint\nAt this point you've run some unit tests on the command line, created a new repository in CSR,\ncreated a Python cloud builder, and used Google Cloud Build to create a build trigger that fires whenever you\npush new code. In this last step you'll tie this all together and tell Google Cloud Build how to automatically run\ntests and run pylint to examine your code whenever a code change is pushed into CSR.\n\nIn order to tell Google Cloud Build how to run your tests, you'll need to create a file called cloudbuild.yaml in the\nroot of your project. Inside that file you'll add the steps needed to execute your unit tests. 
For each step you\nwill reference the cloud builder created in step 3 by its location in the Google Container Registry.\n\n*Note: Each step specified in the cloudbuild.yaml is a separate, ephemeral run of a Docker image;\nhowever, [the \/workspace\/ directory is preserved between runs](https:\/\/cloud.google.com\/cloud-build\/docs\/build-config#dir).\nOne way to carry Python packages forward is to use a virtualenv housed in \/workspace\/*\n\n\n```yaml\nsteps:\n- name: 'gcr.io\/$PROJECT_ID\/python-cloudbuild' # Cloud Build automatically substitutes $PROJECT_ID for your Project ID.\n  entrypoint: '\/bin\/bash'\n  args: ['-c','virtualenv \/workspace\/venv' ]\n  # Creates a Python virtualenv stored in \/workspace\/venv that will persist across container runs.\n- name: 'gcr.io\/$PROJECT_ID\/python-cloudbuild'\n  entrypoint: 'venv\/bin\/pip'\n  args: ['install', '-V', '-r', 'requirements.txt']\n  # Installs any dependencies listed in the project's requirements.txt.\n- name: 'gcr.io\/$PROJECT_ID\/python-cloudbuild'\n  entrypoint: 'venv\/bin\/python'\n  args: ['-m', 'pytest', '-v']\n  # Runs pytest from the virtual environment (with all requirements)\n  # using the verbose flag so you can see each individual test.\n- name: 'gcr.io\/$PROJECT_ID\/python-cloudbuild'\n  entrypoint: 'venv\/bin\/pylint'\n  args: ['my_module\/']\n  # Runs pylint against the module my_module, located one folder below the project root.\n```\n\n## Wrap Up\nThat's all there is to it!\n\nFrom here you can inspect your builds in Google Cloud Build's history. You can also build in third-party integrations via\nPubSub. You can find more documentation on how to use third-party integrations\n[here](https:\/\/cloud.google.com\/cloud-build\/docs\/configure-third-party-notifications).\n\nIf you want to automatically deploy this code, you can add additional steps that enable continuous delivery.\n\nPrebuilt cloud builders exist for gcloud, kubectl, etc. 
The full list can be found at [https:\/\/github.com\/GoogleCloudPlatform\/cloud-builders](https:\/\/github.com\/GoogleCloudPlatform\/cloud-builders).\nGoogle also maintains a community repo for other cloud builders contributed by the public [here](https:\/\/github.com\/GoogleCloudPlatform\/cloud-builders-community).\n\n\n## License\n   Copyright 2019 Google LLC\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http:\/\/www.apache.org\/licenses\/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.","site":"GCP","answers_cleaned":"
"}
{"questions":"GCP The last year has been like a roller coaster for the cryptocurrency market At the end of 2017 the value of bitcoin BTC almost reached 20 000 USD only to fall below 4 000 USD a few months later What if there is a pattern in the high volatility of the cryptocurrencies market If so can we learn from it and get an edge on future trends Is there a way to observe all exchanges in real time and visualize it on a single chart A Google Cloud Dataflow Cloud Bigtable Websockets example CryptoRealTime In this tutorial we will graph the trades volume and time delta from trade execution until it reaches our system an indicator of how close to real time we can get the data","answers":"# CryptoRealTime\n\n## A Google Cloud Dataflow\/Cloud Bigtable Websockets example\n\nThe last year has been like a roller coaster for the cryptocurrency market. At the end of 2017, the value of bitcoin (BTC) almost reached $20,000 USD, only to fall below $4,000 USD a few months later. What if there is a pattern in the high volatility of the cryptocurrency market? If so, can we learn from it and get an edge on future trends? 
Is there a way to observe all exchanges in real time and visualize them on a single chart?\n\nIn this tutorial we will graph the trades, volume and time delta from trade execution until it reaches our system (an indicator of how close to real time we can get the data).\n\n\n![realtime multi exchange BTC\/USD observer](crypto.gif)\n\n[Consider reading the Medium article](https:\/\/medium.com\/@igalic\/bigtable-beam-dataflow-cryptocurrencies-gcp-terraform-java-maven-4e7873811e86)\n\n[Terraform - get this up and running in less than 5 minutes](https:\/\/github.com\/galic1987\/professional-services\/blob\/master\/examples\/cryptorealtime\/TERRAFORM-README.md)\n\n## Architecture\n![Cryptorealtime Cloud Architecture overview](https:\/\/i.ibb.co\/dMc9bMz\/Screen-Shot-2019-02-11-at-4-56-29-PM.png)\n\n## Frontend\n![Cryptorealtime Cloud Frontend overview](https:\/\/i.ibb.co\/2S28KYq\/Screen-Shot-2019-02-12-at-2-53-41-PM.png)\n\n## Costs\nThis tutorial uses billable components of GCP, including:\n- Cloud Dataflow\n- Compute Engine\n- Cloud Storage\n- Cloud Bigtable\n\nWe recommend cleaning up the project after finishing this tutorial to avoid costs. 
Use the [Pricing Calculator](https:\/\/cloud.google.com\/products\/calculator\/) to generate a cost estimate based on your projected usage.\n\n## Project setup\n### Install the Google Cloud Platform SDK on a new VM\n  * Log into the console, and activate a cloud console session\n  * Create a new VM\n```console\ngcloud beta compute instances create crypto-driver \\\n  --zone=us-central1-a \\\n  --machine-type=n1-standard-1 \\\n  --service-account=$(gcloud iam service-accounts list --format='value(email)' --filter=\"compute\") \\\n  --scopes=https:\/\/www.googleapis.com\/auth\/cloud-platform \\\n  --image=debian-9-stretch-v20181210 \\\n  --image-project=debian-cloud \\\n  --boot-disk-size=20GB \\\n  --boot-disk-device-name=crypto-driver\n```\n\n\n  * SSH into that VM\n\n```console\n  gcloud compute ssh --zone=us-central1-a crypto-driver\n```\n\n  * Install the necessary tools (Java, Git, Maven, pip, Python, and the Cloud Bigtable command-line tool cbt) using the following commands:\n```console\n  sudo -s\n  apt-get update -y\n  curl https:\/\/bootstrap.pypa.io\/get-pip.py -o get-pip.py\n  sudo python3 get-pip.py\n  sudo pip3 install virtualenv\n  virtualenv -p python3 venv\n  source venv\/bin\/activate\n  sudo apt -y --allow-downgrades install openjdk-8-jdk git maven google-cloud-sdk=271.0.0-0 google-cloud-sdk-cbt=271.0.0-0\n\n```\n\n### Create a Google Cloud Bigtable instance\n```console\nexport PROJECT=$(gcloud info --format='value(config.project)')\nexport ZONE=$(curl \"http:\/\/metadata.google.internal\/computeMetadata\/v1\/instance\/zone\" -H \"Metadata-Flavor: Google\"|cut -d\/ -f4)\ngcloud services enable bigtable.googleapis.com \\\nbigtableadmin.googleapis.com \\\ndataflow.googleapis.com\n\ngcloud bigtable instances create cryptorealtime \\\n  --cluster=cryptorealtime-c1 \\\n  --cluster-zone=${ZONE} \\\n  --display-name=cryptorealtime \\\n  --cluster-storage-type=HDD \\\n  --instance-type=DEVELOPMENT\ncbt -instance=cryptorealtime createtable cryptorealtime 
families=market\n```\n\n### Create a Bucket\n```console\ngsutil mb -p ${PROJECT} gs:\/\/realtimecrypto-${PROJECT}\n```\n\n### Create a firewall rule for the visualization server on port 5000\n```console\ngcloud compute firewall-rules create crypto-dashboard --action=ALLOW --rules=tcp:5000 --source-ranges=0.0.0.0\/0 --target-tags=crypto-console --description=\"Open port 5000 for crypto visualization tutorial\"\n\ngcloud compute instances add-tags crypto-driver --tags=\"crypto-console\" --zone=${ZONE}\n```\n\n\n### Clone the repo\n```console\ngit clone https:\/\/github.com\/GoogleCloudPlatform\/professional-services\n```\n\n### Build the pipeline\n```console\ncd professional-services\/examples\/cryptorealtime\nmvn clean install\n```\n\n### Start the DataFlow pipeline\n```console\n.\/run.sh ${PROJECT} \\\ncryptorealtime gs:\/\/realtimecrypto-${PROJECT}\/temp \\\ncryptorealtime market\n```\n\n### Start the Webserver and Visualization\n```console\ncd frontend\/\npip install -r requirements.txt\npython app.py ${PROJECT} cryptorealtime cryptorealtime market\n```\n\nFind your external IP in the [Compute console instance list](https:\/\/console.cloud.google.com\/compute\/instances) and open it in your browser with port 5000 at the end, e.g.\nhttp:\/\/external-ip:5000\/stream\n\nYou should be able to see the visualization of the aggregated BTC\/USD pair on several exchanges (without the predictor part).\n\n\n## Cleanup\n* To save on cost, we can clean up the pipeline by running the following command:\n```console\ngcloud dataflow jobs cancel \\\n  $(gcloud dataflow jobs list \\\n  --format='value(id)' \\\n  --filter=\"name:runthepipeline*\")\n```\n\n* Empty and delete the bucket:\n```console\n  gsutil -m rm -r gs:\/\/realtimecrypto-${PROJECT}\/*\n  gsutil rb gs:\/\/realtimecrypto-${PROJECT}\n```\n\n* Delete the Cloud Bigtable instance:\n```console\n  gcloud bigtable instances delete cryptorealtime\n```\n\n* Exit the VM and delete it.\n```console\n  gcloud compute instances delete crypto-driver 
--delete-disks=all\n```\n\n1. View the status of your Dataflow job in the Cloud Dataflow console\n\n1. After a few minutes, from the shell,\n\n```console\n  cbt -instance=<bigtable-instance-name> read <bigtable-table-name>\n```\n\nThis should return many rows of crypto trades data that the frontend project will read for its dashboard.\n\n\n## External libraries used to connect to exchanges\nhttps:\/\/github.com\/bitrich-info\/xchange-stream\n\n","site":"GCP","answers_cleaned":"
"}
{"questions":"GCP The word pipeline refers to a collection of all job instances of a query in the transformation process in a data warehouse in this case BigQuery Each pipeline involves source table s and destination table For example a query Table Access Pattern Analysis FROM project dataset flashsalepurchases SELECT purchaseId shop Pipeline Optimisation This module consists of deep dive analysis of a BigQuery environment in Google Cloud Platform according to audit logs data access data which can be used to optimise BigQuery usage and improve time space and cost of BigQuery b Definitions b","answers":"# Table Access Pattern Analysis\nThis module consists of a deep-dive analysis of a BigQuery environment in Google Cloud Platform, according to audit logs - data access data, which can be used to optimise BigQuery usage, and improve time, space and cost of BigQuery.\n\n## Pipeline Optimisation\n\n#### <b>Definitions<\/b>\nThe word 'pipeline' refers to a collection of all job instances of a query in the transformation process in a data warehouse, in this case BigQuery. Each 'pipeline' involves source table(s) and a destination table. For example, a query:\n```\nSELECT purchaseId, shop\nFROM project.dataset.flash_sale_purchases\nWHERE isReturn = TRUE\n```\nwith its destination table set to the return_purchases table. The source table of this query is flash_sale_purchases and its destination table is the return_purchases table. \n\n![](assets\/pipeline-definition.png)\n\nIn the illustration above, one of the pipelines involves T1 and T2 as its source tables and T5 as its destination table.\n\nGiven enough historical data from the audit logs, you can group queries which have the same source table(s) and destination table combination (each group becomes a single pipeline), and see the run history of each of the pipelines. 
The same source table(s) - destination table combination will almost always come from the same query; even if the queries differ, their semantics should be similar, so this assumption still holds. After grouping the jobs into pipelines according to the source table(s) - destination table combination, you might be able to see a pattern in their execution history. You might see that a pipeline is executed hourly, daily, or even monthly, and when it was last executed. \n\n<i>Note that each of the source table(s) can be a different table type: a view, a materialized view, or a normal table. We only consider normal tables, because the notion of 'analysing the update pattern' does not really apply to views and materialized views. A view does not get updated, and a materialized view is updated automatically.<\/i>\n\nWe can categorise every pipeline by pipeline type and scheduling pattern. There are 3 different pipeline types, namely:\n<ul>\n<li>Live pipelines: Pipelines that run regularly on an obvious schedule, up to the present\n<li>Dead pipelines: Pipelines that used to run regularly on an obvious schedule, but stopped some time ago\n<li>Ad hoc pipelines: Pipelines with no regular scheduling pattern detected, or with too few repetitions to conclude that they are scheduled live pipelines\n<\/ul>\n\nThe scheduling pattern can be identified as hourly, daily, weekly, or non-deterministic (no obvious pattern).\n\n#### <b>Purpose<\/b>\nThis tool helps identify tables with a large difference between write and read frequency across data warehouse queries. A high discrepancy between the write and read frequency of a table can be a good starting point for identifying optimisation points. Using this tool we can visualise the pipelines that involve a table of interest. We can then further analyse the pipeline type, its scheduling pattern and the jobs in each pipeline, and pinpoint problems or optimisations from there. 
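To make the grouping and classification concrete, here is a minimal Python sketch. This is illustrative only, not the tool's actual code: job records are assumed to be plain dicts with hypothetical `sources`, `destination` and `start_time` fields, and the 25% tolerance and "two missed slots means dead" rule are arbitrary choices.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def group_pipelines(jobs):
    """Group query jobs into pipelines keyed by their
    (source tables, destination table) combination."""
    pipelines = defaultdict(list)
    for job in jobs:
        key = (frozenset(job["sources"]), job["destination"])
        pipelines[key].append(job["start_time"])
    return pipelines

def classify_schedule(run_times, now, tolerance=0.25):
    """Label a pipeline's run history: live/dead hourly/daily/weekly,
    ad hoc (too few runs), or non-deterministic (no obvious period)."""
    times = sorted(run_times)
    if len(times) < 3:
        return "ad hoc"
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    avg = sum(gaps) / len(gaps)
    for label, period in [("hourly", 3600), ("daily", 86400), ("weekly", 604800)]:
        if abs(avg - period) <= tolerance * period:
            # A pipeline that missed its last two expected slots is considered dead.
            status = "live" if (now - times[-1]).total_seconds() < 2 * period else "dead"
            return f"{status}, {label}"
    return "non-deterministic"
```

Under these assumptions, five runs spaced an hour apart with the most recent one an hour ago classify as a live, hourly pipeline, while the same cadence that stopped a month ago classifies as dead.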
\n\n![](assets\/pipeline-example.gif)\n\nThis GIF shows the graph that is visualised when running the tool. The graph is an HTML page that is rendered as an iFrame in the Jupyter Notebook. You can zoom in, zoom out, and select (or deselect) a node to see more details about it. Each node represents a table, and each edge represents a pipeline from one table to another. The weight of an edge indicates the frequency of jobs of that pipeline compared to the rest of the pipelines in the current graph.\n\n#### <b>Analysing the Result<\/b>\nAs can be seen from the GIF, the tool visualises all the pipelines associated with a table. To be specific, this includes all query jobs that have the table of interest as a source or destination table. As mentioned above, every query job has source table(s) and a single destination table. \n\nThe set of tables included when the pipeline graph of a table of interest is plotted is determined by the following logic:\n\nFor every query job that has the table of interest as one of its source tables or as its destination table, \n* For every source table of a query job that has the table of interest as one of its source table(s), recursively find query jobs that have this source table as their destination table, and collect their source table(s). \n* For every destination table of a query job that has the table of interest as its destination table, recursively find query jobs that have that destination table as a source table, and collect their destination tables.\n\nAs seen from the GIF, for every table involved in the pipeline of the table of interest, you can select it and see the details of the job schedule of every query involving that table. It will list all ad-hoc, live or dead pipelines that have this table as a source or destination table. 
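The recursive source/destination walk described above can be expressed as a small graph traversal over the job history. Again, this is a minimal sketch under the same hypothetical job-dict shape (`sources`, `destination`), not the tool's implementation:

```python
def upstream_tables(table, jobs, seen=None):
    """Recursively collect every table that feeds into `table`: for each
    job whose destination is `table`, take its source tables and walk
    further upstream from each of them."""
    seen = set() if seen is None else seen
    for job in jobs:
        if job["destination"] == table:
            for src in job["sources"]:
                if src not in seen:
                    seen.add(src)
                    upstream_tables(src, jobs, seen)
    return seen

def downstream_tables(table, jobs, seen=None):
    """Mirror image: recursively collect every table derived from `table`."""
    seen = set() if seen is None else seen
    for job in jobs:
        if table in job["sources"] and job["destination"] not in seen:
            seen.add(job["destination"])
            downstream_tables(job["destination"], jobs, seen)
    return seen

def pipeline_graph_tables(table, jobs):
    """All tables shown when the pipeline graph of `table` is plotted."""
    return {table} | upstream_tables(table, jobs) | downstream_tables(table, jobs)
```

Combining the table of interest with its upstream and downstream closures yields the node set drawn in the pipeline graph.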

For example, the pipeline information on the right side of the graph might look like this:

![](assets/pipeline-information-example.png)

This means that the table `data-analytics-pocs.public.bigquery_audit_log` is the destination table of an ad-hoc pipeline whose source table(s) are `project4.finance.cloudaudit_googleapis_com_data_access_*`. The jobs of this pipeline have a non-deterministic schedule, and its pipeline ID is 208250. The pipeline ID is useful if you want to analyse this pipeline further by querying the intermediate tables created. See [intermediate tables](#intermediate-tables-creation).

Given these insights, you can deep dive into whatever is particularly interesting for you. For example, you might identify imbalanced pipelines: the table `flash_sale_purchases` in your data warehouse might be updated hourly but only queried daily. You might also identify pipelines that are already 'dead' and no longer scheduled, according to the last execution time, and check whether this is intended or whether something inside the query caused an error.

## Architecture
This tool is built on top of BigQuery and Python modules. The data source of the tool is the audit logs - data access table, which is located in BigQuery. The module is responsible for creating intermediate tables (from the audit logs - data access source table) and for executing all the relevant queries towards those intermediate tables for analysis purposes. The analysis can be done through a Jupyter notebook, which can be run locally (if installed) or in AI Platform Notebooks.
This guide specifically covers running the tool on AI Platform Notebooks.

![](assets/architecture.png)

### Directories and Files
```
data-dumpling-data-assessment/
├── table-access-pattern-analysis/
│   ├── assets/
│   ├── bq_routines/
│   ├── pipeline_graph/
│   ├── src/
│   ├── templates/
│   ├── README.md
│   ├── pipeline.ipynb
│   ├── pipeline-output_only.ipynb
│   ├── requirements.txt
│   └── var.env
```

There are several subdirectories under the `table-access-pattern-analysis` directory.
<ul>
<li> <b>assets/</b>

This directory contains images and other assets used in README.md.

<li> <b>bq_routines/</b>

This directory contains all the [JS UDF](https://cloud.google.com/bigquery/docs/reference/standard-sql/user-defined-functions#javascript-udf-structure) functions that will be created in BigQuery upon usage of the tool.
These files are not meant to be run independently in a JS environment; their contents are loaded by the Python package `src/` and assembled into function creation queries for BigQuery.

For more information about each of the functions, see this [section](#routines-creation).

<li> <b>pipeline_graph/</b>

This directory contains the HTML file, a webpage used to display the pipeline visualisation of the pipeline optimisation module.

<li> <b>src/</b>

This directory is the source Python package of the module; it drives the logic for table and routine creation, as well as the queries towards BigQuery tables.

<li> <b>templates/</b>

This directory consists of a template HTML file that is filled using the Jinja2 templating system, through the Python code.

<li> <b>README.md</b>

This is the README file, which explains all the details of this directory.

<li> <b>pipeline.ipynb</b>

This Notebook is used for the pipeline optimisation analysis.

<li> <b>pipeline-output_only.ipynb</b>

This Notebook is for demonstration purposes only; it shows the expected output and result of running the pipeline optimisation notebook.

<li> <b>requirements.txt</b>

This file lists all the dependencies. You don't need to install them manually, because installation is part of the Jupyter Notebook commands.

<li> <b>var.env</b>

This is the file in which environment variables are defined, to be loaded by the different Jupyter Notebooks. For every 'analysis workflow', you should redefine some of the variables. For details, see this [section](#environment-variables).

</ul>

## Prerequisites
* Your account must have access to read the audit logs - data access table that will be used as the source table for the analysis.
For more details regarding the different kinds of audit logs, visit this [page](https://cloud.google.com/logging/docs/audit#data-access)
* The audit logs - data access table that will be used as the source table for the analysis should contain BigQuery logs version 1. For more details regarding audit log versions, visit this [page](https://cloud.google.com/bigquery/docs/reference/auditlogs)
* Your account must have access to write to the destination dataset.
* The source and destination dataset must be in the same location.

## Set Up
This set up is for running JupyterLab Notebook in AI Platform Notebooks; you can also choose to run the Jupyter Notebook locally.
1. Go to a GCP project.
2. Navigate to <b>AI Platform -> Notebooks</b>. <b>New Instance -> Choose Python3 Option -> Name the instance</b>
3. Clone this repository.
4. Go to the `table-access-pattern-analysis` directory of the project.
5. Set the environment variables inside `var.env`.
6. Run the analysis, as described [below](#analysis).

### Environment Variables
The environment variables that you need to set include:
* INPUT_PROJECT_ID
* INPUT_DATASET_ID
* INPUT_AUDIT_LOGS_TABLE_ID
* IS_AUDIT_LOGS_INPUT_TABLE_PARTITIONED
* OUTPUT_PROJECT_ID
* OUTPUT_DATASET_ID
* OUTPUT_TABLE_SUFFIX
* LOCATION
* IS_INTERACTIVE_TABLES_MODE

The details of each of the environment variables are as follows:
<ul>
<li><b>INPUT_PROJECT_ID, INPUT_DATASET_ID, INPUT_AUDIT_LOGS_TABLE_ID</b>

<ul>
<li> Definition

* These 3 environment variables should point to the audit logs - data access table that will be the source table of the analysis. The complete path to the audit logs source table will be `INPUT_PROJECT_ID.INPUT_DATASET_ID.INPUT_AUDIT_LOGS_TABLE_ID`. If you want to analyse a table with a wildcard, include the wildcard in the INPUT_AUDIT_LOGS_TABLE_ID variable as well.

<li> Example values

* INPUT_PROJECT_ID = 'project-a'
* INPUT_DATASET_ID = 'dataset-b'
* INPUT_AUDIT_LOGS_TABLE_ID = 'cloudaudit_googleapis_com_data_access_*'
</ul>

<li><b>OUTPUT_PROJECT_ID, OUTPUT_DATASET_ID</b>
<ul>
<li> Definition

* These 2 environment variables should point to the dataset that will contain all the tables and routines that are going to be created during the analysis.

<li> Example values

* OUTPUT_PROJECT_ID = 'project-c'
* OUTPUT_DATASET_ID = 'dataset-d'

</ul>

<li><b>OUTPUT_TABLE_SUFFIX</b>
<ul>
<li> Definition

* The 'OUTPUT_TABLE_SUFFIX' variable denotes an 'analysis environment' that you intend to build. All tables produced by a run will have this variable as their suffix, so the run will not replace any existing tables that you created for other analyses.

* If this variable is not set, the analysis cannot be run, as you might have unintentionally forgotten to change the suffix and would replace an existing set of tables with the same suffix.
<li> Example value

* OUTPUT_TABLE_SUFFIX = 'first-analysis'
</ul>

<li><b>IS_AUDIT_LOGS_INPUT_TABLE_PARTITIONED</b>
<ul>
<li> Definition

* The 'IS_AUDIT_LOGS_INPUT_TABLE_PARTITIONED' variable is a boolean value which denotes whether the input audit logs table is a partitioned table.

<li> Value

* Its value should be either "TRUE" or "FALSE", with the exact casing.
</ul>

<li><b>LOCATION</b>

<ul>
<li> Definition

* The 'LOCATION' variable specifies the region in which the input dataset and output dataset are located; the most commonly used location is 'US'.

<li> Example value

* LOCATION=US
</ul>

<li><b>IS_INTERACTIVE_TABLES_MODE</b>
<ul>
<li> Definition

* A boolean denoting whether you want the tables to be interactive; it is recommended to set this to "TRUE".
* If you want the table output to be interactive (can filter, sort, search), set this value to "TRUE".
* If you do not want the table output to be interactive, set this value to "FALSE".

<li> Value

* Its value should be either "TRUE" or "FALSE", with the exact casing.

</ul>
</ul>

### Caveats
After resetting any environment variables, you need to restart the kernel, because otherwise the new values will not be loaded by Jupyter. To restart, go to the 'Kernel' menu and choose 'Restart'.

## Analysis
1. Open a notebook to run an analysis.
2. Choose the interactivity mode of the output.
    * If you want the table output to be interactive, you can choose to run the Classic Jupyter Notebook. The table output produced by this notebook will be interactive (can filter, sort, search), but it is an older version of the Jupyter notebook in AI Platform Notebooks. To do this,
        1. Navigate to the `Help` menu in Jupyter
        2. Click on `Launch Classic Notebook`.
        3. Navigate the directory and open the Notebook that you want to do the analysis on.

    * If you prefer a newer version of the Jupyter notebook, you can choose not to run the Classic Jupyter Notebook. The table output produced by this notebook is not interactive. You can double click on the intended Notebook in the list of files, without following the steps to launch a Classic Jupyter Notebook.
3. Run the cells of the notebook from top to bottom.
4. In the first cell, there is a datetime picker, which is used to filter the audit logs data source to the specified start and end date range. If you select `05-05-2021` as the start date and `06-05-2021` as the end date, the analysis result of the notebook run will be based on audit logs data from 5th May 2021 to 6th May 2021.
5. Run the pipeline optimisation analysis produced in the Jupyter Notebook.
    <ul>
    <li><b>Pipeline Optimisation, run `pipeline.ipynb`</b>

    This tool helps identify pipeline optimisation points.
First, the tool lists tables with a high difference between write and read frequency throughout the data warehouse queries.

    After identifying the table that you would like to analyse further, you can select the table in the next part of the notebook and display the result in an iFrame inside the notebook.
    </ul>

## Appendix
### Intermediate Tables Creation
As mentioned in the [Architecture](#architecture) section, this module involves the creation of intermediate tables. These are important and relevant for users who want to analyse the insights generated by this tool even further. The schema and details of each intermediate table created are explained below.

<ul>
<li> job_info_with_tables_info<OUTPUT_TABLE_SUFFIX>

This table stores some of the details of the job history that are relevant to pipeline optimisation. Each job history entry corresponds to a single entry in the audit logs. The audit logs are filtered to the ones that are relevant for pipeline optimisation.
```
[
    {
        "name": "jobId",
        "type": "STRING",
        "mode": "NULLABLE",
        "description": "The job ID of this job run"
    },
    {
        "name": "timestamp",
        "type": "TIMESTAMP",
        "mode": "NULLABLE",
        "description": "Timestamp when the job was run"
    },
    {
        "name": "email",
        "type": "STRING",
        "mode": "NULLABLE",
        "description": "The account that ran this job"
    },
    {
        "name": "projectId",
        "type": "STRING",
        "mode": "NULLABLE",
        "description": "The project ID this job was run on"
    },
    {
        "name": "totalSlotMs",
        "type": "INTEGER",
        "mode": "NULLABLE",
        "description": "The slot ms consumed by this job"
    },
    {
        "name": "totalProcessedBytes",
        "type": "STRING",
\"mode\": \"NULLABLE\",\n        \"description\": \"The total bytes processed when this job was ran\"\n    },\n    {\n        \"name\": \"destinationTable\",\n        \"type\": \"STRING\",\n        \"mode\": \"NULLABLE\",\n        \"description\": \"The destination table of this job, in a concatenated 'project.dataset.table' string format\"\n    },\n    {\n        \"name\": \"sourceTables\",\n        \"type\": \"STRING\",\n        \"mode\": \"NULLABLE\",\n        \"description\": \"The source tables of this job, in a JSON string format of the array of concatenated 'project.dataset.table' string format, for example it can be a string of '[tableA, tableB, tableC]'\"\n    }\n]\n```\n\n<li> pipeline_info<OUTPUT_TABLE_SUFFIX>\n\nThis table stores the information of the different pipelines. Each unique pipeline is a collection of all job instances of a query (involving unique source table(s)-destination table combination) in the transformation process in BigQuery.\n```\n[\n    {\n        \"name\": \"pipelineId\",\n        \"type\": \"INTEGER\",\n        \"mode\": \"NULLABLE\",\n        \"description\": \"The pipeline ID of this job run\"\n    },\n    {\n        \"name\": \"timestamps\",\n        \"type\": \"ARRAY<TIMESTAMP>\",\n        \"mode\": \"NULLABLE\",\n        \"description\": \"Timestamps when this pipeline was run in the past\"\n    },\n    {\n        \"name\": \"pipelineType\",\n        \"type\": \"STRING\",\n        \"mode\": \"NULLABLE\",\n        \"description\": \"The pipeline type of this pipeline, its value can be dead\/live\/ad hoc\"\n    },\n    {\n        \"name\": \"schedule\",\n        \"type\": \"STRING\",\n        \"mode\": \"NULLABLE\",\n        \"description\": \"The schedule for this pipeline, its value can be non deterministic\/hourly\/daily\/monthly\"\n    },\n    {\n        \"name\": \"destinationTable\",\n        \"type\": \"STRING\",\n        \"mode\": \"NULLABLE\",\n        \"description\": \"The destination table of this pipeline, in a 
concatenated 'project.dataset.table' string format"
    },
    {
        "name": "sourceTables",
        "type": "STRING",
        "mode": "NULLABLE",
        "description": "The source tables of this pipeline, as a JSON string of an array of concatenated 'project.dataset.table' strings, for example '[tableA, tableB, tableC]'"
    }
]
```

<li>source_destination_table_pairs<OUTPUT_TABLE_SUFFIX>

This table stores all source-destination table pairs. It also stores the pipeline ID of the pipeline that each pair is part of.

```
[
    {
        "name": "destinationTable",
        "type": "STRING",
        "mode": "NULLABLE",
        "description": "The destination table"
    },
    {
        "name": "sourceTable",
        "type": "STRING",
        "mode": "NULLABLE",
        "description": "The source table"
    },
    {
        "name": "pipelineId",
        "type": "INTEGER",
        "mode": "NULLABLE",
        "description": "The pipeline ID for the pipeline that this pair was part of"
    }
]
```

<li>table_direct_pipelines<OUTPUT_TABLE_SUFFIX>

This table stores, for every table, its pipelines as a destination table and as a source table.
```
[
    {
        "name": "table",
        "type": "STRING",
        "mode": "NULLABLE",
        "description": "The table"
    },
    {
        "name": "directBackwardPipelines",
        "type": "ARRAY<STRUCT<INTEGER, STRING, STRING, INTEGER, STRING, STRING>>",
        "mode": "NULLABLE",
        "description": "An array of pipeline information that has the current table as its destination table.
Each struct has information about the pipelineId, sourceTables, destinationTable, frequency, pipelineType, and schedule"
    },
    {
        "name": "directForwardPipelines",
        "type": "ARRAY<STRUCT<INTEGER, STRING, STRING, INTEGER, STRING, STRING>>",
        "mode": "NULLABLE",
        "description": "An array of pipeline information that has the current table as one of its source tables. Each struct has information about the pipelineId, sourceTables, destinationTable, frequency, pipelineType, and schedule"
    }
]
```
</ul>


### Routines Creation
There are several JavaScript UDFs created in BigQuery upon usage of the tool. These function files are not meant to be run independently in a JS environment; their contents are loaded by the Python package `src/` and assembled into function creation queries for BigQuery.

<ul>

<li>getPipelineTypeAndSchedule

This function takes an array of timestamps and returns a struct of the pipeline type and schedule according to the history. There are 3 possible values for pipeline type: live/dead/ad hoc, and there are 4 possible values for schedule: non deterministic/hourly/daily/monthly.

The routine file content is located in `bq_routines/getPipelineTypeAndSchedule.js`

<li>getTablesInvolvedInPipelineOfTable

This function returns a list of tables that are involved in the pipeline of the input table.

The routine file content is located in `bq_routines/getTablesInvolvedInPipelineOfTable.js`

</ul>
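As a rough illustration of how the `src/` package might turn one of these routine files into a function creation query: the `build_udf_query` helper below, and the argument/return types it emits, are guessed from the description of `getPipelineTypeAndSchedule` for illustration only, not taken from the real package.

```python
from pathlib import Path

def build_udf_query(dataset, name, js_dir="bq_routines"):
    """Assemble a CREATE FUNCTION statement from a JS routine file.

    Hypothetical sketch: the signature (ARRAY<TIMESTAMP> in, STRUCT out)
    mirrors the documented behaviour of getPipelineTypeAndSchedule but is
    an assumption, as is this helper itself.
    """
    # Load the JS function body from the routine file, e.g.
    # bq_routines/getPipelineTypeAndSchedule.js
    body = Path(js_dir, f"{name}.js").read_text()
    return (
        f"CREATE OR REPLACE FUNCTION `{dataset}`.{name}(timestamps ARRAY<TIMESTAMP>)\n"
        "RETURNS STRUCT<pipelineType STRING, schedule STRING>\n"
        "LANGUAGE js AS r'''\n"
        f"{body}\n"
        "'''"
    )
```

The resulting string would then be submitted to BigQuery as a query job (for example via the `google-cloud-bigquery` client), which registers the JS UDF in the output dataset.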
combination  in the transformation process in BigQuery                       name    pipelineId            type    INTEGER            mode    NULLABLE            description    The pipeline ID of this job run                        name    timestamps            type    ARRAY TIMESTAMP             mode    NULLABLE            description    Timestamps when this pipeline was run in the past                        name    pipelineType            type    STRING            mode    NULLABLE            description    The pipeline type of this pipeline  its value can be dead live ad hoc                        name    schedule            type    STRING            mode    NULLABLE            description    The schedule for this pipeline  its value can be non deterministic hourly daily monthly                        name    destinationTable            type    STRING            mode    NULLABLE            description    The destination table of this pipeline  in a concatenated  project dataset table  string format                        name    sourceTables            type    STRING            mode    NULLABLE            description    The source tables of this job  in a JSON string format of the array of concatenated  project dataset table  string format  for example it can be a string of   tableA  tableB  tableC                  li source destination table pairs OUTPUT TABLE SUFFIX   This table stores all source destination table pair  It also stores the pipeline ID  which is the pipeline ID that this pair was part of                         name    destinationTable            type    STRING            mode    NULLABLE            description    The destination table                        name    sourceTable            type    STRING            mode    NULLABLE            description    The source table                        name    pipelineId            type    INTEGER            mode    NULLABLE            description    The pipeline ID for the pipeline that this pair was 
part of                li table direct pipelines OUTPUT TABLE SUFFIX   This table stores all table pipeline  as destination table and as source table                      name    table            type    STRING            mode    NULLABLE            description    The table                        name    directBackwardPipelines            type    ARRAY STRUCT INTEGER  STRING  STRING  INTEGER  STRING  STRING              mode    NULLABLE            description    An array of pipeline informations that have the current table as its destination table  Each of the struct has information about the pipelineId  sourceTables  destinationTable  frequency  pipelineType  and schedule                        name    directForwardPipelines            type    ARRAY STRUCT INTEGER  STRING  STRING  INTEGER  STRING  STRING              mode    NULLABLE            description    An array of pipeline informations that have the current table as one of its source table  Each of the struct has information about the pipelineId  sourceTables  destinationTable  frequency  pipelineType  and schedule                ul        Routines Creation There are several JavaScript UDFs created in BigQuery upon usage of the tool  These function files are not to be run independently in a JS environment  these file contents will be loaded by the Python package   src   to be constructed as a function creation query to BigQuery    ul     li getPipelineTypeAndSchedule  This function takes an array of timestamps and returns a struct of the pipeline type and schedule according to the history  There are 3 possible values for pipeline type  live dead ad hoc  and there are 4 possible values for schedule  non deterministic hourly daily monthly    The routine file content is located in  bq routines getPipelineTypeAndSchedule js    li getTablesInvolvedInPipelineOfTable  This function returns a list of tables that are involved in the pipeline of the table of input   The routine file content is located in  bq routines 
getTablesInvolvedInPipelineOfTable js     ul"}
{"questions":"GCP deployment neo4jbackuprestoreviagkegcsexample backup Backup Restore via and Example Project Structure","answers":"\n# [Neo4j](https:\/\/neo4j.com\/developer\/graph-database\/) Backup & Restore via [GKE Cronjob](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/cronjobs) and [GCS](https:\/\/cloud.google.com\/storage) Example\n\n\n## Project Structure\n```\n.\n\u2514\u2500\u2500 neo4j_backup_restore_via_gke_gcs_example\n \u2514\u2500\u2500 backup\n    \u2514\u2500\u2500 deployment \n          \u251c\u2500\u2500 backup-cronjob.yaml  #(Cronjob configuration)\n          \u2514\u2500\u2500 deploy-exec.sh  #(Executable for backup deployment)\n    \u2514\u2500\u2500 docker\n          \u251c\u2500\u2500 Dockerfile  #(Backup pod docker image)\n          \u251c\u2500\u2500 backup-via-admin.sh #(Helper used by docker image)\n          \u2514\u2500\u2500 pod-image-exec.sh #(Executable for build & push docker image)\n    \u251c\u2500\u2500 neo4j-backup-architecture.png\n    \u2514\u2500\u2500  backup.env  #(Update gcloud configuration)\n \u2514\u2500\u2500 restore\n    \u251c\u2500\u2500 restore.env  #(Update gcloud configuration)\n    \u251c\u2500\u2500 download-backup.sh  #(Helper to copy backup from GCS)\n    \u251c\u2500\u2500 restore-via-admin.sh #(Helper to run restore admin commands)\n    \u251c\u2500\u2500 restore-exec.sh  #(Executable for Restore)\n    \u2514\u2500\u2500 cleanup.sh  #(Helper to remove local backup copy on pod)\n \u2514\u2500\u2500 README.md\n```\n\n## Backup\n\n### Backup Architecture\n![image info](.\/backup\/neo4j-backup-architecture.png)\n\n### Build and push backup pod image\n\n* Make sure the environment variables are set correctly in the ```backup\/backup.env``` file. 
\n  - This file should define the Google Cloud Storage `REMOTE_BACKUPSET` to back up the graphs to, the GCR bucket pointing to the `BACKUP_IMAGE` used by the backup pod, the `GKE_NAMESPACE` used to back up from the correct Neo4j cluster, and the `NEO4J_ADMIN_SERVER` IPs of the servers to back up.\n* Simply run the ```pod-image-exec.sh``` file to build and push the backup pod image.\n\n```bash\n# Have execute-access to the script\n$ chmod u+x backup\/docker\/pod-image-exec.sh\n\n# Run the back-up pod image script\n$ .\/backup\/docker\/pod-image-exec.sh\n```\n\n### Deploy backup Kubernetes cronjob\n* Navigate to ```backup\/deployment\/backup-cronjob.yaml``` and edit the schedule or anything else you'd like to customize in the cronjob.\n* Run ```backup\/deployment\/deploy-exec.sh``` to deploy the cronjob on your Neo4j cluster.\n```bash\n# Have execute-access to the script\n$ chmod u+x backup\/deployment\/deploy-exec.sh\n\n# Run the back-up cronjob deployment script\n$ .\/backup\/deployment\/deploy-exec.sh\n```\n\n\n### Update backup pod image\n* Configuration for the image used by the backup pod can be found in the file `backup\/docker\/Dockerfile`. \n* If any changes need to be made to the backup configuration used by the backup pod, modify and save your changes in the shell file `backup-via-admin.sh`.\n  - Once any of these files are changed, an updated container image needs to be built and pushed to the container registry.\n\n```bash\n# Script to build and push the image\n$ .\/pod-image-exec.sh\n```\n\n### Delete backup cronjob\n* Run the following command to delete the backup cronjob. 
Replace <CRONJOB_NAME> with the currently assigned cronjob name.\n\n```bash\n# Delete cronjob\n$ kubectl delete cronjob <CRONJOB_NAME>\n```\n\n### Re-deploy Backup Cronjob\n* Apply the following YAML file to re-deploy the Kubernetes Cronjob which schedules the backups\n\n```bash\n# Re-deploy cronjob\n$ kubectl apply -f backup-cronjob.yaml\n```\n\n## Restore\n\nThis procedure assumes that you either have a sidecar container on your Neo4j instance running the google cloud-sdk or your Neo4j instance servers have the google cloud-sdk pre-installed.\n\n### Download and restore from Google Cloud Storage Bucket\n\nSimply run ```\/restore\/restore-exec.sh```, which will call the helper shell scripts and complete the restore process one server at a time.\n\n```bash\n# Have execute-access to the script\n$ chmod u+x restore\/restore-exec.sh\n\n# Execute restore procedure script\n$ .\/restore\/restore-exec.sh\n\n```\n\n## References\n\nThis software uses the following open source packages:\n\n- [GKE Cronjobs](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/cronjobs)\n- [Artifact Registry](https:\/\/cloud.google.com\/artifact-registry)\n- [Dockerfile](https:\/\/docs.docker.com\/develop\/develop-images\/dockerfile_best-practices\/)\n- [Neo4j Enterprise](https:\/\/neo4j.com\/licensing\/)\n\n\n--","site":"GCP","answers_cleaned":"    Neo4j  https   neo4j com developer graph database   Backup   Restore via  GKE Cronjob  https   cloud google com kubernetes engine docs how to cronjobs  and  GCS  https   cloud google com storage  Example      Project Structure           neo4j backup restore via gke gcs example      backup         deployment                backup cronjob yaml    Cronjob configuration                deploy exec sh    Executable for backup deployment          docker               Dockerfile    Backup pod docker image                backup via admin sh   Helper used by docker image                pod image exec sh   Executable for build   push docker image          
neo4j backup architecture png          backup env    Update gcloud configuration       restore         restore env    Update gcloud configuration          download backup sh    Helper to copy backup from GCS          restore via admin sh   Helper to run restore admin commands          restore exec sh    Excutable for Restore          cleanup sh    Helper to remove local backup copy on pod       README md         Backup      Backup Architecture   image info    backup neo4j backup architecture png       Build and push backup pod image    Make sure the Enviornment variables are set correctly in    backup backup env    file       This file should point to the Google Cloud Storage  REMOTE BACKUPSET  to backup the graphs  GCR bucket used to point to the  BACKUP IMAGE  used by the backup pod   GKE NAMESPACE  to be used to backup from the correct Neo4j cluster  and the  NEO4J ADMIN SERVER  IPs to backup the servers from    Simply run    pod image exec sh    file to build and push the backup pod image      bash   Have execute access to the script   chmod u x backup docker pod image exec sh    Run the back up pod image script     backup docker pod image exec sh          Deploy backup kubernetes cronjob   Navigate to    backup deployment backup cronjob yaml    and edit the schedule or anything else you d like to customize in the cronjob    Run the    backup deployment deploy exec sh    to deploy the cronjob on your neo4j cluster     bash   Have execute access to the script   chmod u x backup deployment deploy exec sh    Run the back up cronjob deployment script     backup deployment deploy exec sh           Update backup pod image   Configuration for image used by the backup pod can be found in the file  backup docker Dockerfile      If any changes need to be made to the Backup configuration used by the back up pod  please modify and save your changes on the following shell file  backup via admin sh      Once any of these files are changed  an updated container image needs to 
be built and pushed to container registry      bash   Script to build and push the image     pod image exec sh          Delete backup cronjob   Run the following command to delete the backup cronjob  Replace the  CRONJOB NAME  with currently assigned cronjob name     bash   Delete cronjob   kubectl delete cronjob  CRONJOB NAME           Re deploy Backup Cronjob   Run the following YAML file to de deploy the Kubernetes Cronjob which schedules the backups     bash   Delete cronjob   kubectl apply  f backup cronjob yaml         Restore  This procedure assumes that you either have sidecar container on your neo4j instance running the google cloud sdk or your neo4j instance servers have the google cloud sdk pre installed      Download and restore from Google Cloud Storage Bucket  Simply run the     restore restore exec sh    which will call the helper shell scripts and complete the restore process one server at a time      bash   Have execute access to the script   chmod u x restore restore exec sh    Execute restore procedure script     restore restore exec sh          References  This software uses the following open source packages      GKE Cronjobs  https   cloud google com kubernetes engine docs how to cronjobs     Artifact Registry  https   cloud google com artifact registry     Dockerfile  https   docs docker com develop develop images dockerfile best practices      Neo4j Enterprise  https   neo4j com licensing       "}
{"questions":"GCP It explains how to create Flex template and run it in a restricted environment where there is no internet connectivity to dataflow launcher or worker nodes It also run dataflow template on shared VPC Example also contains a DAG which can be used to trigger Dataflow job from composer It also demonstrates how we can use cloudbuild to implement CI CD for this dataflow job Dataflow Python Flex Template This example contains a sample Dataflow job which reads a XML file and inserts the records to BQ table Resorces structure","answers":"## Dataflow Python Flex Template\n\nThis example contains a sample Dataflow job which reads an XML file and inserts the records into a BQ table.\nIt explains how to create a Flex template and run it in a restricted environment where there is no \ninternet connectivity to the dataflow launcher or worker nodes. It also runs the dataflow template on a shared VPC.\nThe example also contains a DAG which can be used to trigger the Dataflow job from Composer. It also demonstrates\nhow we can use Cloud Build to implement CI\/CD for this dataflow job.\n\n\n### Resources structure\n\nThe tree below explains the purpose of each file in the folder.\n\n```\ndataflow-flex-python\/\n\u251c\u2500\u2500 cloudbuild_base.yaml   --> Cloudbuild config to build SDK image\n\u251c\u2500\u2500 cloudbuild_df_job.yaml --> Cloudbuild config to build Launcher image and Flex template\n\u251c\u2500\u2500 composer_variables.template --> Definition of all Composer variables used by DAG\n\u251c\u2500\u2500 dag\n\u2502   \u2514\u2500\u2500 xml-to-bq-dag.py --> Dag code to launch Dataflow Job\n\u251c\u2500\u2500 df-package ---> Dataflow template package\n\u2502   \u251c\u2500\u2500 corder\n\u2502   \u2502   \u251c\u2500\u2500 bq_schema.py --> BQ Table Schemas\n\u2502   \u2502   \u251c\u2500\u2500 models.py --> Data Model for input data, generated by xsdata and pydantic plugin\n\u2502   \u2502   \u251c\u2500\u2500 customer_orders.py --> Dataflow pipeline Implementation\n\u2502   \u2502   
\u251c\u2500\u2500 customer_orders_test.py --> pytest for Dataflow pipeline code\n\u2502   \u2502   \u2514\u2500\u2500 __init__.py\n\u2502   \u251c\u2500\u2500 main.py --> Used by launcher to launch the pipeline\n\u2502   \u2514\u2500\u2500 setup.py --> Used to install the package\n\u251c\u2500\u2500 Dockerfile_Launcher --> Dockerfile to create Launcher Image\n\u251c\u2500\u2500 Dockerfile_SDK --> Dockerfile to create SDK image\n\u251c\u2500\u2500 metadata.json --> metadata file used during building flex template\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 requirements-test.txt --> Python Requirements for running tests\n\u251c\u2500\u2500 requirements.txt --> Python Requirements for dataflow job\n\u2514\u2500\u2500 sample-data --> Directory holding some sample data for test\n```\n\n### Prerequisites\n\nThis example assumes the Project, Network, DNS and Firewalls have already been set up.\n\n#### Export Variables\n```\nexport PROJECT_ID=<project_id>\nexport PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format=\"value(projectNumber)\")\nexport HOST_PROJECT_ID=<HOST_PROJECT_ID>\nexport INPUT_BUCKET_NAME=pw-df-input-bkt\nexport STAGING_BUCKET_NAME=pw-df-temp-bkt\nexport LOCATION=us-central1\nexport BQ_DATASET=bqtoxmldataset\nexport NETWORK=shared-vpc\nexport SUBNET=bh-subnet-usc1\nexport REPO=dataflowf-image-repo\nexport DF_WORKER_SA=dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com\n```\n\n#### Setup IAM\n```\n# Create service account for dataflow workers and launchers\ngcloud iam service-accounts create dataflow-worker-sa --project=$PROJECT_ID\n\n# Assign dataflow worker permissions\ngcloud projects add-iam-policy-binding $PROJECT_ID --member \"serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com\" --role roles\/dataflow.worker\n\n# Assign Object viewer permissions in order to read the data from cloud storage\ngcloud projects add-iam-policy-binding $PROJECT_ID --member 
\"serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com\" --role roles\/storage.objectViewer\n\n# Assign Object creator permissions in order to create temp files in cloud storage\ngcloud projects add-iam-policy-binding $PROJECT_ID --member \"serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com\" --role roles\/storage.objectCreator\n\n# Assign Service Account User permissions\ngcloud projects add-iam-policy-binding $PROJECT_ID --member \"serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com\" --role roles\/iam.serviceAccountUser\n\n# Assign BigQuery job user permissions\ngcloud projects add-iam-policy-binding $PROJECT_ID --member \"serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com\" --role roles\/bigquery.jobUser\n\n# Assign BigQuery data editor permissions\ngcloud projects add-iam-policy-binding $PROJECT_ID --member \"serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com\" --role roles\/bigquery.dataEditor\n\n# Assign Artifact Registry reader permissions\ngcloud projects add-iam-policy-binding $PROJECT_ID --member \"serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com\" --role roles\/artifactregistry.reader\n\n# Assign network user permissions on the Host project; this is needed only if dataflow workers will be using a shared VPC\ngcloud projects add-iam-policy-binding $HOST_PROJECT_ID --member \"serviceAccount:service-$PROJECT_NUMBER@dataflow-service-producer-prod.iam.gserviceaccount.com\" --role roles\/compute.networkUser\n```\n\n#### Setup Cloud Storage\n```\n# Create Cloud Storage bucket for input data\ngcloud storage buckets create gs:\/\/$INPUT_BUCKET_NAME --location $LOCATION --project $PROJECT_ID\n\n# Create a bucket for dataflow staging and temp locations\ngcloud storage buckets create gs:\/\/$STAGING_BUCKET_NAME --location $LOCATION --project $PROJECT_ID\n\ngsutil iam ch 
serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com:roles\/storage.legacyBucketWriter gs:\/\/$STAGING_BUCKET_NAME\n\n# Assign Legacy Bucket Writer Role on Input bucket in order to move the object\ngsutil iam ch serviceAccount:dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com:roles\/storage.legacyBucketWriter gs:\/\/$INPUT_BUCKET_NAME\n```\n\n#### Create BQ Dataset\n```\nbq --location=$LOCATION mk --dataset $PROJECT_ID:$BQ_DATASET\n```\n\n#### Create Artifact Registry repository\n```\ngcloud artifacts repositories create $REPO --location $LOCATION --repository-format docker --project $PROJECT_ID\n```\n\n### Build Templates\n\n#### Build and Push Docker Images for template\n```\n# Build Base Image, all packages will be used from this image when the dataflow job runs\ndocker build -t $LOCATION-docker.pkg.dev\/$PROJECT_ID\/$REPO\/dataflow-2.40-base:dev -f Dockerfile_SDK .\n\n# Build Image used by launcher to launch the Dataflow job\ndocker build -t $LOCATION-docker.pkg.dev\/$PROJECT_ID\/$REPO\/df-xml-to-bq:dev -f Dockerfile_Launcher .\n\n# Push both the images to the repo\ndocker push $LOCATION-docker.pkg.dev\/$PROJECT_ID\/$REPO\/dataflow-2.40-base:dev\ndocker push $LOCATION-docker.pkg.dev\/$PROJECT_ID\/$REPO\/df-xml-to-bq:dev\n```\n\n#### Build Dataflow flex template\n```\ngcloud dataflow flex-template build gs:\/\/$INPUT_BUCKET_NAME\/dataflow-templates\/xml-to-bq.json \\\n--image \"$LOCATION-docker.pkg.dev\/$PROJECT_ID\/$REPO\/df-xml-to-bq:dev\" \\\n--sdk-language \"PYTHON\" \\\n--metadata-file metadata.json \\\n--project $PROJECT_ID\n```\n\n### Demo\n\n#### Upload Sample Data\n```\ngcloud storage cp .\/sample-data\/*.xml gs:\/\/$INPUT_BUCKET_NAME\/data\/\n```\n\n#### Run job using DirectRunner locally\n```\n# Install Requirements (each requirements file needs its own -r flag)\npip3 install -r requirements.txt -r requirements-test.txt\n\ncd df-package\n\n# Run tests\npython3 -m pytest\n\n# Run Job\npython3 main.py --input=..\/sample-data\/customer-orders.xml 
\\\n--temp_location=gs:\/\/$STAGING_BUCKET_NAME\/tmp \\\n--staging_location=gs:\/\/$STAGING_BUCKET_NAME\/staging \\\n--output=$PROJECT_ID:$BQ_DATASET \\\n--dead_letter_dir=..\/dead\/ \\\n--runner=DirectRunner\n\ncd ..\/\n```\n\n#### Run Job using Gcloud Command\n```\ngcloud dataflow flex-template run xml-to-bq-sample-pipeline-$(date '+%Y-%m-%d-%H-%M-%S') \\\n--template-file-gcs-location gs:\/\/$INPUT_BUCKET_NAME\/dataflow-templates\/xml-to-bq.json \\\n--additional-experiments use_runner_v2 \\\n--additional-experiments=use_network_tags_for_flex_templates=\"dataflow-worker;allow-iap-ssh\" \\\n--additional-experiments=use_network_tags=\"dataflow-worker;allow-iap-ssh\" \\\n--additional-experiments=use_unsupported_python_version \\\n--disable-public-ips \\\n--network projects\/$HOST_PROJECT_ID\/global\/networks\/$NETWORK \\\n--subnetwork https:\/\/www.googleapis.com\/compute\/v1\/projects\/$HOST_PROJECT_ID\/regions\/$LOCATION\/subnetworks\/$SUBNET \\\n--service-account-email dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com \\\n--staging-location gs:\/\/$STAGING_BUCKET_NAME\/staging \\\n--temp-location gs:\/\/$STAGING_BUCKET_NAME\/tmp \\\n--region $LOCATION --worker-region=$LOCATION \\\n--parameters output=$PROJECT_ID:$BQ_DATASET \\\n--parameters input=gs:\/\/$INPUT_BUCKET_NAME\/data\/* \\\n--parameters dead_letter_dir=gs:\/\/$INPUT_BUCKET_NAME\/invalid_files \\\n--parameters sdk_location=container \\\n--parameters sdk_container_image=$LOCATION-docker.pkg.dev\/$PROJECT_ID\/$REPO\/dataflow-2.40-base:dev \\\n--project $PROJECT_ID\n```\n\n### CI\/CD using Cloudbuild\n\n#### Build Docker Image and Template with Cloud Build\nThe section below uses the gcloud command. In a real-world scenario, Cloud Build triggers\ncan be created which can run this build job whenever there is a change in the code.\n```\n\n# Build and push Base Image\ngcloud builds submit --config cloudbuild_base.yaml . 
--project $PROJECT_ID --substitutions _LOCATION=$LOCATION,_PROJECT_ID=$PROJECT_ID,_REPOSITORY=$REPO\n\n# Build and push launcher image and create flex template\ngcloud builds submit --config cloudbuild_df_job.yaml . --project $PROJECT_ID --substitutions _LOCATION=$LOCATION,_PROJECT_ID=$PROJECT_ID,_REPOSITORY=$REPO,_TEMPLATE_PATH=gs:\/\/$INPUT_BUCKET_NAME\/dataflow-templates\n```\n\n### Run Dataflow Flex template job from Composer Dags\n\n#### Set Environment Variables for composer\n```\nexport COMPOSER_ENV_NAME=<composer-env-name>\nexport COMPOSER_REGION=$LOCATION\n\n\nCOMPOSER_VAR_FILE=composer_variables.json\nif [ ! -f \"${COMPOSER_VAR_FILE}\" ]; then\n    envsubst < composer_variables.template > ${COMPOSER_VAR_FILE}\nfi\n\ngcloud composer environments storage data import \\\n--environment ${COMPOSER_ENV_NAME} \\\n--location ${COMPOSER_REGION} \\\n--source ${COMPOSER_VAR_FILE}\n\ngcloud composer environments run \\\n${COMPOSER_ENV_NAME} \\\n--location ${COMPOSER_REGION} \\\nvariables import -- \/home\/airflow\/gcs\/data\/${COMPOSER_VAR_FILE}\n```\n\n#### Assign permissions to Composer Worker SA\n```\n# Use jq -r so the service account email is captured without surrounding quotes\nCOMPOSER_SA=$(gcloud composer environments describe $COMPOSER_ENV_NAME --location $COMPOSER_REGION --project $PROJECT_ID --format json | jq -r '.config.nodeConfig.serviceAccount')\n\n# Assign Service Account User permissions\ngcloud projects add-iam-policy-binding $PROJECT_ID --member \"serviceAccount:$COMPOSER_SA\" --role roles\/iam.serviceAccountUser\ngcloud projects add-iam-policy-binding $PROJECT_ID --member \"serviceAccount:$COMPOSER_SA\" --role roles\/dataflow.admin\n```\n\n#### Upload the dag to composer's dag bucket\n```\nDAG_PATH=$(gcloud composer environments describe $COMPOSER_ENV_NAME --location $COMPOSER_REGION --project $PROJECT_ID --format json | jq -r '.config.dagGcsPrefix')\ngcloud storage cp dag\/xml-to-bq-dag.py $DAG_PATH\n```\n\n### Limitations\n\nCurrently this pipeline loads the whole XML file into memory\nfor the conversion to dict via xmltodict. 
This approach works for small\nfiles but is not parallelizable on super large XML files as they are not\nread in chunks but in one go. This risks having a single worker dealing\nwith very large file instances (slow) and potentially running out of memory.\nIn our experience any XML file above ~ 300mb would start slowing down the\npipeline considerably and memory failures can potentially start showing up at ~ 500mb.\nThis is if you go with the default worker.\n\n**Contributors:** @singhpradeepk, @kkulczak, @akolkiewicz\n\n**Credit:**\n\nSample data has been borrowed from https:\/\/learn.microsoft.com\/en-in\/dotnet\/standard\/linq\/sample-xml-file-customers-orders-namespace#customersordersinnamespacexml\n\nData Model has been borrowed from https:\/\/learn.microsoft.com\/en-in\/dotnet\/standard\/linq\/sample-xsd-file-customers-orders#customersordersxsd","site":"GCP","answers_cleaned":"   Dataflow Python Flex Template  This example contains a sample Dataflow job which reads a XML file and inserts the records to BQ table  It explains how to create Flex template and run it in a restricted environment where there is no  internet connectivity to dataflow launcher or worker nodes  It also run dataflow template on shared VPC  Example also contains a DAG which can be used to trigger Dataflow job from composer  It also demonstrates how we can use cloudbuild to implement CI CD for this dataflow job        Resorces structure  Below Tree explains the purpose of each file in the folder       dataflow flex python      cloudbuild base yaml       Cloudbuild config to build SDK image     cloudbuild df job yaml     Cloudbuild config to build Launcher image and Flex template     composer variables template     Definition of All Composer variables used by DAG     dag         xml to bq dag py     Dag code to launch Dataflow Job     df package      Dataflow template package         corder             bq schema py     BQ Table Schemas             models py     Data Model for input data  
generated by xsdata and the pydantic plugin

| File | Description |
|------|-------------|
| customer_orders.py | Dataflow pipeline implementation |
| customer_orders_test.py | pytest for the Dataflow pipeline code |
| __init__.py | |
| main.py | Used by the launcher to launch the pipeline |
| setup.py | Used to install the package |
| Dockerfile.Launcher | Dockerfile to create the Launcher image |
| Dockerfile.SDK | Dockerfile to create the SDK image |
| metadata.json | Metadata file used while building the flex template |
| README.md | |
| requirements_test.txt | Python requirements for running tests |
| requirements.txt | Python requirements for the Dataflow job |
| sample_data/ | Directory holding some sample data for tests |

## Prerequisites

This example assumes Project, Network, DNS and Firewalls have already been set up.

### Export Variables

```bash
export PROJECT_ID=<project-id>
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
export HOST_PROJECT_ID=<host-project-id>
export INPUT_BUCKET_NAME=pw-df-input-bkt
export STAGING_BUCKET_NAME=pw-df-temp-bkt
export LOCATION=us-central1
export BQ_DATASET=bqtoxmldataset
export NETWORK=shared-vpc
export SUBNET=bh-subnet-usc1
export REPO=dataflow-image-repo
export DF_WORKER_SA=dataflow-worker-sa@$PROJECT_ID.iam.gserviceaccount.com
```

### Setup IAM

```bash
# Create service account for Dataflow workers and launchers
gcloud iam service-accounts create dataflow-worker-sa --project=$PROJECT_ID

# Assign Dataflow worker permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$DF_WORKER_SA" --role=roles/dataflow.worker

# Assign Object Viewer permissions in order to read the data from Cloud Storage
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$DF_WORKER_SA" --role=roles/storage.objectViewer

# Assign Object Creator permissions in order to create temp files in Cloud Storage
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$DF_WORKER_SA" --role=roles/storage.objectCreator

# Assign Service Account User permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$DF_WORKER_SA" --role=roles/iam.serviceAccountUser

# Assign BigQuery Job User permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$DF_WORKER_SA" --role=roles/bigquery.jobUser

# Assign BigQuery Data Editor permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$DF_WORKER_SA" --role=roles/bigquery.dataEditor

# Assign Artifact Registry Reader permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$DF_WORKER_SA" --role=roles/artifactregistry.reader

# Assign Network User permissions on the host project; this is needed only if
# the Dataflow workers will be using a Shared VPC
gcloud projects add-iam-policy-binding $HOST_PROJECT_ID \
  --member="serviceAccount:service-$PROJECT_NUMBER@dataflow-service-producer-prod.iam.gserviceaccount.com" \
  --role=roles/compute.networkUser
```

### Setup Cloud Storage

```bash
# Create a Cloud Storage bucket for input data
gcloud storage buckets create gs://$INPUT_BUCKET_NAME --location=$LOCATION --project=$PROJECT_ID

# Create a bucket for Dataflow staging and temp locations
gcloud storage buckets create gs://$STAGING_BUCKET_NAME --location=$LOCATION --project=$PROJECT_ID
gsutil iam ch serviceAccount:$DF_WORKER_SA:roles/storage.legacyBucketWriter gs://$STAGING_BUCKET_NAME

# Assign the Legacy Bucket Writer role on the input bucket in order to move objects
gsutil iam ch serviceAccount:$DF_WORKER_SA:roles/storage.legacyBucketWriter gs://$INPUT_BUCKET_NAME
```

### Create BQ Dataset

```bash
bq --location=$LOCATION mk --dataset $PROJECT_ID:$BQ_DATASET
```

### Create Artifact Registry repository

```bash
gcloud artifacts repositories create $REPO \
  --location=$LOCATION --repository-format=docker --project=$PROJECT_ID
```

## Build Templates

### Build and push Docker images for the template

```bash
# Build the base image; all packages will be used from this image when the Dataflow job runs
docker build -t $LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/dataflow-2.40-base:dev -f Dockerfile.SDK .

# Build the image used by the launcher to launch the Dataflow job
docker build -t $LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/df-xml-to-bq:dev -f Dockerfile.Launcher .

# Push both images to the repo
docker push $LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/dataflow-2.40-base:dev
docker push $LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/df-xml-to-bq:dev
```

### Build the Dataflow flex template

```bash
gcloud dataflow flex-template build gs://$INPUT_BUCKET_NAME/dataflow_templates/xml_to_bq.json \
  --image=$LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/df-xml-to-bq:dev \
  --sdk-language=PYTHON \
  --metadata-file=metadata.json \
  --project=$PROJECT_ID
```

## Demo

### Upload Sample Data

```bash
gcloud storage cp ./sample_data/*.xml gs://$INPUT_BUCKET_NAME/data/
```

### Run the job locally using DirectRunner

```bash
# Install requirements
pip3 install -r requirements.txt -r requirements_test.txt
cd df_package

# Run tests
python3 -m pytest

# Run the job
python3 main.py --input ../sample_data/customer_orders.xml \
  --temp_location gs://$STAGING_BUCKET_NAME/tmp \
  --staging_location gs://$STAGING_BUCKET_NAME/staging \
  --output $PROJECT_ID:$BQ_DATASET \
  --dead_letter_dir ./dead \
  --runner DirectRunner
cd ..
```

### Run the job using a gcloud command

```bash
gcloud dataflow flex-template run "xml-to-bq-sample-pipeline-$(date +%Y-%m-%d-%H-%M-%S)" \
  --template-file-gcs-location gs://$INPUT_BUCKET_NAME/dataflow_templates/xml_to_bq.json \
  --additional-experiments use_runner_v2 \
  --additional-experiments "use_network_tags_for_flex_templates=dataflow-worker;allow-iap-ssh" \
  --additional-experiments "use_network_tags=dataflow-worker;allow-iap-ssh" \
  --additional-experiments use_unsupported_python_version \
  --disable-public-ips \
  --network projects/$HOST_PROJECT_ID/global/networks/$NETWORK \
  --subnetwork https://www.googleapis.com/compute/v1/projects/$HOST_PROJECT_ID/regions/$LOCATION/subnetworks/$SUBNET \
  --service-account-email $DF_WORKER_SA \
  --staging-location gs://$STAGING_BUCKET_NAME/staging \
  --temp-location gs://$STAGING_BUCKET_NAME/tmp \
  --region $LOCATION \
  --worker-region $LOCATION \
  --parameters output=$PROJECT_ID:$BQ_DATASET \
  --parameters input=gs://$INPUT_BUCKET_NAME/data/* \
  --parameters dead_letter_dir=gs://$INPUT_BUCKET_NAME/invalid_files \
  --parameters sdk_location=container \
  --parameters sdk_container_image=$LOCATION-docker.pkg.dev/$PROJECT_ID/$REPO/dataflow-2.40-base:dev \
  --project $PROJECT_ID
```

## CI/CD using Cloud Build

### Build the Docker images and template with Cloud Build

The section below uses gcloud commands. In a real-world scenario, Cloud Build triggers can be created which run this build job whenever there is a change in the code.

```bash
# Build and push the base image
gcloud builds submit --config cloudbuild_base.yaml \
  --project=$PROJECT_ID \
  --substitutions=_LOCATION=$LOCATION,_PROJECT_ID=$PROJECT_ID,_REPOSITORY=$REPO

# Build and push the launcher image and create the flex template
gcloud builds submit --config cloudbuild_df_job.yaml \
  --project=$PROJECT_ID \
  --substitutions=_LOCATION=$LOCATION,_PROJECT_ID=$PROJECT_ID,_REPOSITORY=$REPO,_TEMPLATE_PATH=gs://$INPUT_BUCKET_NAME/dataflow_templates
```

## Run the Dataflow flex template job from Composer DAGs

### Set environment variables for Composer

```bash
export COMPOSER_ENV_NAME=<composer-env-name>
export COMPOSER_REGION=$LOCATION

COMPOSER_VAR_FILE=composer_variables.json
if [ ! -f "$COMPOSER_VAR_FILE" ]; then
  envsubst < composer_variables.template > $COMPOSER_VAR_FILE
fi

gcloud composer environments storage data import \
  --environment "$COMPOSER_ENV_NAME" \
  --location "$COMPOSER_REGION" \
  --source "$COMPOSER_VAR_FILE"

gcloud composer environments run "$COMPOSER_ENV_NAME" \
  --location "$COMPOSER_REGION" \
  variables import -- /home/airflow/gcs/data/$COMPOSER_VAR_FILE
```

### Assign permissions to the Composer worker SA

```bash
COMPOSER_SA=$(gcloud composer environments describe "$COMPOSER_ENV_NAME" \
  --location $COMPOSER_REGION --project $PROJECT_ID \
  --format json | jq -r '.config.nodeConfig.serviceAccount')

# Assign Service Account User and Dataflow Admin permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$COMPOSER_SA" --role=roles/iam.serviceAccountUser
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$COMPOSER_SA" --role=roles/dataflow.admin
```

### Upload the DAG to Composer's DAG bucket

```bash
DAG_PATH=$(gcloud composer environments describe "$COMPOSER_ENV_NAME" \
  --location $COMPOSER_REGION --project $PROJECT_ID \
  --format json | jq -r '.config.dagGcsPrefix')
gcloud storage cp dag/xml_to_bq_dag.py "$DAG_PATH"
```

## Limitations

Currently this pipeline loads the whole XML file into memory for the conversion to a dict via xmltodict. This approach works for small files but is not parallelizable for very large XML files, as they are read in one go rather than in chunks. This risks a single worker dealing with very large file instances (slow) and potentially running out of memory. In our experience, any XML file above ~300 MB starts slowing down the pipeline considerably, and memory failures can start showing up at ~500 MB. This is if you go with the default worker.

## Contributors

- singhpradeepk
- kkulczak
- akolkiewicz

## Credit

- Sample data has been borrowed from https://learn.microsoft.com/en-in/dotnet/standard/linq/sample-xml-file-customers-orders-namespace#customersordersinnamespacexml
- The data model has been borrowed from https://learn.microsoft.com/en-in/dotnet/standard/linq/sample-xsd-file-customers-orders#customersordersxsd
"}
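The memory limitation described in this example (xmltodict materializes the whole file as one dict) can be avoided with incremental parsing. Below is a minimal sketch using only the standard library's `xml.etree.ElementTree.iterparse`; the element and attribute names are illustrative placeholders, not the pipeline's real xsdata-generated model:

```python
import io
import xml.etree.ElementTree as ET

def stream_orders(xml_file):
    """Yield one flat dict per <Order> element without building the full tree.

    Memory stays bounded because each element is cleared once consumed,
    unlike xmltodict, which loads the whole document in one go.
    Element and attribute names here are illustrative only.
    """
    for _, elem in ET.iterparse(xml_file, events=("end",)):
        if elem.tag == "Order":
            yield {
                "customer_id": elem.get("CustomerID"),
                "order_id": elem.get("OrderID"),
            }
            elem.clear()  # drop children that were already emitted

# Demo with an in-memory document; a real run would pass an open file.
sample = io.StringIO(
    '<Orders>'
    '<Order OrderID="1" CustomerID="A"/>'
    '<Order OrderID="2" CustomerID="B"/>'
    '</Orders>'
)
rows = list(stream_orders(sample))
```

For files in the 300+ MB range mentioned above, this keeps a worker's footprint proportional to one record rather than the whole file; splitting a single file across workers would still require a custom splittable source.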
{"questions":"GCP Google Cloud Storage Google Speech to Text representation for any use or purpose Your use of it is subject to your agreement with Google Technology Stack Copyright 2023 Google This software is provided as is without warranty or Google Cloud Run Google Artifact Registry","answers":"```\nCopyright 2023 Google. This software is provided as-is, without warranty or\nrepresentation for any use or purpose. Your use of it is subject to your\nagreement with Google.\n```\n## Technology Stack\n- Google Cloud Run\n- Google Artifact Registry\n- Google Cloud Storage\n- Google Speech to Text\n- Vertex AI Conversation\n- Dialogflow CX\n- Dialogflow CX Agent\n- Google Data Store\n- Google Secret Manager\n- Gradio\n\n## GCP Project Setup\n\n### Creating a Project in the Google Cloud Platform Console\n\nIf you haven't already created a project, create one now. Projects enable you to\nmanage all Google Cloud Platform resources for your app, including deployment,\naccess control, billing, and services.\n\n1. Open the [Cloud Platform Console][cloud-console].\n2. In the drop-down menu at the top, select **NEW PROJECT**.\n3. Give your project a name.\n4. Make a note of the project ID, which might be different from the project\n   name. The project ID is used in commands and in configurations.\n\n[cloud-console]: https:\/\/console.cloud.google.com\/\n\n### Enabling billing for your project.\n\nIf you haven't already enabled billing for your project, [enable\nbilling][enable-billing] now. Enabling billing is required to use Cloud\nBigtable and to create VM instances.\n\n[enable-billing]: https:\/\/console.cloud.google.com\/project\/_\/settings\n\n### Install the Google Cloud SDK.\n\nIf you haven't already installed the Google Cloud SDK, [install the Google\nCloud SDK][cloud-sdk] now. 
The SDK contains tools and libraries that enable you\nto create and manage resources on Google Cloud Platform.\n\n[cloud-sdk]: https:\/\/cloud.google.com\/sdk\/\n\n### Setting Google Application Default Credentials\n\nSet your [Google Application Default\nCredentials][application-default-credentials] by [initializing the Google Cloud\nSDK][cloud-sdk-init] with the command:\n\n```\n   gcloud init\n```\n\nGenerate a credentials file by running the\n[application-default login](https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/auth\/application-default\/login)\ncommand:\n\n```\n    gcloud auth application-default login\n```\n\n[cloud-sdk-init]: https:\/\/cloud.google.com\/sdk\/docs\/initializing\n\n[application-default-credentials]: https:\/\/developers.google.com\/identity\/protocols\/application-default-credentials\n\n## Upload your data to a Cloud Storage bucket\nFollow these [instructions][instructions] to upload your pdf documents \nor pdf manuals to be used in this example.\n\n[instructions]:https:\/\/cloud.google.com\/storage\/docs\/uploading-objects\n \n## Create a Generative AI Agent\nFollow the instructions at this [link][link] and perform the following:\n1. Create Data Stores: Select information that you would like the Vertex AI Search and Conversation to query\n2. Create an Agent: Create the Dialogflow CX agent that queries the Data Store\n3. Test the agent in the simulator\n4. Take note of your agent link by going to the [Dialogflow CX Console][Dialogflow CX Console] and see the information about the agent you created\n\n[link]: https:\/\/cloud.google.com\/generative-ai-app-builder\/docs\/a\n[Dialogflow CX Console]:https:\/\/cloud.google.com\/dialogflow\/cx\/docs\/concept\/console#agent\n\n### Dialogflow CX Agent Data Stores\nData Stores are used to find answers for end-users' questions. \nData Stores are a collection of documents, each of which references your data.\n\nFor this particular example, the data store has the following characteristics:\n1. 
Your organizational documents or manuals.  \n2. The data store type will be unstructured in a pdf format\n3. The data is uploaded without metadata for simplicity.\nOnly need to point the import to the gcp bucket folder where the pdf files are. \nTheir extension will decide their type.\n\nWhen an end-user asks the agent a question, the agent searches for an answer from the \ngiven source content and summarizes the findings into a coherent agent response. \nIt also provides supporting links to the sources of the response for the end-user to learn more. \n\n\n","site":"GCP"}
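The upload step in the record above amounts to copying each PDF under a bucket prefix. A small sketch of that bookkeeping, with `my-docs-bucket` and the `manuals/` prefix as invented placeholders; the actual transfer can be done with `gcloud storage cp` or the `google-cloud-storage` client's `Blob.upload_from_filename`:

```python
import tempfile
from pathlib import Path

def plan_uploads(local_dir, bucket, prefix="manuals"):
    """Map each local PDF to the gs:// object URI it would be uploaded to.

    `bucket` and `prefix` are placeholders, not names from the original doc.
    Only .pdf files are selected, matching the unstructured-pdf data store
    described above (the file extension decides the document type).
    """
    pdfs = sorted(Path(local_dir).glob("*.pdf"))
    return {p.name: f"gs://{bucket}/{prefix}/{p.name}" for p in pdfs}

# Demo against a throwaway directory: the .txt file is skipped.
demo_dir = Path(tempfile.mkdtemp())
(demo_dir / "user_guide.pdf").touch()
(demo_dir / "readme.txt").touch()
plan = plan_uploads(demo_dir, "my-docs-bucket")
```

Each planned pair would then be uploaded, e.g. with the `google-cloud-storage` client: `client.bucket(bucket).blob(name).upload_from_filename(path)`.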
{"questions":"GCP Near realtime NRT Feature Producer Hypothetical Scenario Features We want to build and use near real time NRT features in the hypothetical scoring system Scoring is not part of this example There are multiple sources that produce NRT features Features are ideally defined in the feature store system and are exposed in the online store Features are stored in BigQuery and synced to Online Feature store Vertex ai Below you can see the definition Feature type Feature name Feature source Window assuming sliding Period Method Beam SQL Destination","answers":"# Near realtime (NRT) Feature Producer\n\n## Hypothetical Scenario\n\nWe want to build and use near real time (NRT) features in the hypothetical scoring system. Scoring is not part of this example. There are multiple sources that produce NRT features. Features are ideally defined in the feature store system and are exposed in the online store.\n\n### Features \nFeatures are stored in BigQuery and synced to the Online Feature store (Vertex AI). Below you can see the definition. 
\n\n| Feature type | Feature name | Feature source | Window (assuming sliding)\/Period | Method (Beam SQL) | Destination |\n|--------|---------------------|------------------|-----------------------|----------------------|----------------------|\n| NRT   | Total_number_of_clicks_last_90sec per user_id   | Ga4 topic           | 90sec\/30s  | count(*)   | BQ table  |\n| NRT    | Total_number_of_logins_last_5min per user_id   | Authn topic   |   300sec\/30s     | count(*)  | BQ table |\n| NRT    | Total_number_of_transactions_last5min per user_id    |  Transactions topic   | 300sec\/30s  |  count(*)  | BQ table |\n\n### Scoring pipeline (not part of the example)\n\nPipeline input is a transaction topic; for each message, it takes the entity id and uses it for enrichment - it reads total_number_of_clicks_last_90sec, total_number_of_logins_last_5min, total_number_of_transactions_last5min and many other historical features that are needed for scoring. \nThe score is emitted downstream along with transaction details. \n\n### Near real time feature engineering pipeline\nThe pipeline takes events from the source topic, splits into multiple branches based on windowing strategy (duration, period) and does aggregations.\n\nBranches are joined back (which is tricky!), and stored into the destination table. \n\n### Visualization\nTo simplify the visualization here are 2 features (f1, f2) - 90s and 60s. As there is a sliding window happening, each row has overlapping windows visualized.\n\nEvents happen within the (window start, window end) boundary, but the output of aggregation is triggered at the end of the window with a timestamp of the end boundary minus 1ms, e.g. 29.999.  \n\n![viz](viz.png)\n\nNotice that windows emitted by the end of the first and second periods already contain aggregations for the 60s and 90s windows. \n\nNotice also that this pipeline should also emit 0 (the default for some aggregations) even if no window is triggered because there is no data for the key.  
\n\n### Resetting feature value \nThere is a need to reset total_number_of_clicks_last_90sec if there are no more events for a specific user_id. \n\nThe solution is implementing a stateful processing step after each feature calculation that resets the timer or expires the value (producing default\/0\/null). There is additional windowing needed to make this possible. \n\n### Merging branches\nTotal_number_of_clicks_last_90sec and total_number_of_locations_last_5min are features that are calculated based on the same data product or similar period and should ideally be stored in the same destination. \nThe pipeline takes events from the ga4 topic, splits into two branches, does windowing (90s and 300s) and aggregations (count and count distinct). \nAs the windows are different, the result of window & aggregation can\u2019t be instantly co-grouped and stored as one row (entity_id, Total_number_of_clicks_last_90sec, total_number_of_locations_last_5min, timestamp). \n\nThe solution here is that the windowing period should match (events are produced at the same rate) and branches should be re-windowed to a fixed window and co-grouped. \n\n## Sources\n\nThe repository is created based on the [quickstart](https:\/\/cloud.google.com\/dataflow\/docs\/quickstarts\/create-pipeline-java), but most of the files are removed.\n\nIt contains the NRTFeature transform and other building blocks to showcase how to implement the above requirements. \n\n## Architecture of demo pipeline\n\nThere is a demo pipeline implemented with the taxi data source, producing two features and storing them to a BQ-backed feature store. 
\n\n\n```\n\n                   \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n                   \u2502  PubSubIO    \u2502 Topic: taxirides-realtime\n                   \u2502 (Read\/Source)\u2502\n                   \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                          \u2502  PCollection<String>\n                          v\n                   \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n                   \u2502  JsonToRow     \u2502\n                   \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2518\n                          \u2502    \u2502  PCollection<Row>\n                          \u2502    \u2502\n                          \u2502    \u2502\n                          \u2502    \u2502\n                  \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518    \u2502\n                  \u2502            \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510  \n                  v                     v\n           \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510      \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n           \u2502NRTFeature    \u2502      \u2502 NRT Feature (pax) \u2502 max(passenger_count) group by ride_id\n           \u2502   (meter)    \u2502      \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n           \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518             \u2502  PCollection<KV<String,Row>>\n                  \u2502                     \u2502\n                  \u2502                     \u2502\n                  \u2502       
              \u2502\n                  \u2502                     \u2502\n                  \u2514\u2500 \u2500\u2500\u2500\u2500\u2500\u2500\u2510  \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                           v  v\n                     \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n                     \u2502 CoGroupByKey  \u2502\n                     \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n                            \u2502  PCollection<KV<String, CoGbkResult>> \n                     \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n                     \u2502 CoGroupByKey  \u2502\n                     \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518          \n                            \u2502   PCollection<KV<TableRow> \n                            v\n                  \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n                  \u2502  BigQueryIO          \u2502 \n                  \u2502(features)            \u2502\n                  \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n\n\n```\n\n## Run\n\nAltough pom.xml supports multiple profiles, this was tested locally and dataflow only.\n\n### Dataflow\n\n```\nmvn -Pdataflow-runner compile exec:java \\\n-Dexec.mainClass=com.google.dataflow.feature.pipeline.TaxiNRTPipeline \\\n-Dexec.args=\"--project=PROJECT_ID \\\n--gcpTempLocation=gs:\/\/BUCKET_NAME\/temp\/ \\\n--output=gs:\/\/BUCKET_NAME\/output \\\n--runner=DataflowRunner \\\n--projectId=FEATURE_PROJECT_ID \\\n--datasetName=FEATURE_DATASET_NAME \\\n--tableName=FEATURE_TABLE_NAME 
\\n--region=REGION\"\n```\n","site":"GCP"}
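The 90sec/30s notation in the feature table above can be made concrete without a Beam dependency. The following is a pure-Python sketch of sliding-window assignment following Beam's `SlidingWindows` semantics (window start offset assumed zero), plus the end-boundary-minus-1ms output timestamp described in the visualization section:

```python
def sliding_windows(ts, duration=90, period=30):
    """Return the [start, end) intervals of every sliding window containing ts.

    A new window starts every `period` seconds and spans `duration` seconds,
    so with duration=90 and period=30 each event belongs to 3 overlapping
    windows; that is why a 90sec/30s feature re-emits every 30 seconds.
    """
    last_start = ts - ts % period
    first_start = last_start - duration + period
    return [(s, s + duration) for s in range(first_start, last_start + 1, period)]

def output_timestamp(window):
    """Aggregations fire at the window's end boundary minus 1 ms (e.g. 29.999)."""
    return window[1] - 0.001

windows = sliding_windows(45)            # an event at t=45s
ts_of_first = output_timestamp((0, 30))  # a 30s window ending at t=30
```

An event at t=45s lands in the windows starting at -30s, 0s and 30s; re-windowing branches to a shared fixed window (as in the merging-branches section) works precisely because these emission timestamps line up across features with the same period.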
{"questions":"GCP At a high level the Cloud Dataflow pipeline performs the following steps Workflow Overview img src img dataflowelasticworkflow png alt Workflow Overview height 400 width 800 Indexing documents into Elasticsearch using Cloud Dataflow This example Cloud Dataflow pipeline demonstrates the process of reading JSON documents from Cloud Pub Sub enhancing the document using metadata stored in Cloud Bigtable and indexing those documents into The pipeline also validates the documents for correctness and availability of metadata and publishes any documents that fail validation into another Cloud Pub Sub topic for debugging and eventual reprocessing","answers":"## Indexing documents into Elasticsearch using Cloud Dataflow\nThis example Cloud Dataflow pipeline demonstrates the process of reading JSON documents from Cloud Pub\/Sub, enhancing the documents using metadata stored in Cloud Bigtable and indexing those documents into [Elasticsearch](https:\/\/www.elastic.co\/). The pipeline also validates the documents for correctness and availability of metadata and publishes any documents that fail validation into another Cloud Pub\/Sub topic for debugging and eventual reprocessing.\n\n### Workflow Overview\n\n***\n\n<img src=\"img\/dataflow_elastic_workflow.png\" alt=\"Workflow Overview\" height=\"400\" width=\"800\"\/>\n\nAt a high level, the Cloud Dataflow pipeline performs the following steps:\n1. Reads JSON documents from Cloud Pub\/Sub, validates that the documents are well-formed and contain a user-provided unique id field (e.g. **SKU**).\n2. Enhances the document using external metadata stored in a Cloud Bigtable table. The pipeline looks up the metadata from Cloud Bigtable using the unique id field (e.g. **SKU**) extracted from the document.\n3. Indexes the enhanced document into an existing Elasticsearch index.\n4. Publishes into a Cloud Pub\/Sub topic any documents that either fail validation (i.e. 
are not well-formed JSON documents) or do not have a metadata record in Cloud Bigtable.\n5. Optionally corrects and republishes the failed documents back into Cloud Pub\/Sub. *Note: This workflow is not part of the sample code provided in this repo*.\n\n#### Sample Data\nFor the purpose of demonstrating this pipeline, we will use the [products](https:\/\/github.com\/BestBuyAPIs\/open-data-set\/blob\/master\/products.json) data provided [here](https:\/\/github.com\/BestBuyAPIs\/open-data-set). The products data provides JSON documents with various attributes associated with a product:\n```json\n{\n                    \"image\": \"http:\/\/img.bbystatic.com\/BestBuy_US\/images\/products\/4853\/48530_sa.jpg\",\n                    \"shipping\": 5.49,\n                    \"price\": 5.49,\n                    \"name\": \"Duracell - AA 1.5V CopperTop Batteries (4-Pack)\",\n                    \"upc\": \"041333415017\",\n                    \"description\": \"Long-lasting energy; DURALOCK Power Preserve technology; for toys, clocks, radios, games, remotes, PDAs and more\",\n                    \"model\": \"MN1500B4Z\",\n                    \"sku\": 48530,\n                    \"type\": \"HardGood\",\n                    \"category\": [\n                        {\n                            \"name\": \"Connected Home & Housewares\",\n                            \"id\": \"pcmcat312300050015\"\n                        },\n                        {\n                            \"name\": \"Housewares\",\n                            \"id\": \"pcmcat248700050021\"\n                        },\n                        {\n                            \"name\": \"Household Batteries\",\n                            \"id\": \"pcmcat303600050001\"\n                        },\n                        {\n                            \"name\": \"Alkaline Batteries\",\n                            \"id\": \"abcat0208002\"\n                        }\n                    ],\n                  
  \"url\": \"http:\/\/www.bestbuy.com\/site\/duracell-aa-1-5v-coppertop-batteries-4-pack\/48530.p?id=1099385268988&skuId=48530&cmp=RMXCC\",\n                    \"manufacturer\": \"Duracell\"\n                }\n```\n\n#### Sample metadata\nIn order to demonstrate how the documents are enhanced using external metadata stored in Cloud Bigtable, we will create a Cloud Bigtable table (e.g. *products_metadata*) with a single column family (e.g. *cf*). A randomly generated *boolean* value is then stored for a field called **in_stock** associated with the **SKU** that is used as a *rowkey*:\n\n|rowkey      |in_stock    |\n|:-----------|:-----------|\n|1234\t       |true        |\n|5678\t       |false       |\n|....\t       |....        |\n\n#### Generating sample data and metadata.\nIn order to assist with publishing the products data into Cloud Pub\/Sub and populating the metadata table in Cloud Bigtable, we provided a helper pipeline [Publish Products](ElasticIndexer\/src\/main\/java\/com\/google\/cloud\/pso\/utils\/PublishProducts.java).\nThe sample pipeline can be executed from the folder containing the [pom.xml](ElasticIndexer\/pom.xml) file:\n```bash\nmvn compile exec:java -Dexec.mainClass=com.google.cloud.pso.utils.PublishProducts -Dexec.args=\" \\\n--runner=DataflowRunner \\\n--project=[GCP_PROJECT_ID] \\\n--stagingLocation=[GCS_STAGING_BUCKET] \\\n--input=[GCS_BUCKET_CONTAINING_PRODUCTS_FILE]\/products.json.gz \\\n--topic=[INPUT_Pub\/Sub_TOPIC] \\\n--idField=\/sku \\\n--instanceId=[BIGTABLE_INSTANCE_ID] \\\n--tableName=[BIGTABLE_TABLE_NAME] \\\n--columnFamily=[BIGTABLE_COLUMN_FAMILY] \\\n--columnQualifier=[BIGTABLE_COLUMN_QUALIFIER]\"\n```\n<img src=\"img\/sample_data_gen_pipeline.png\" alt=\"Sample data generation workflow\" height=\"864\" width=\"800\"\/>\n\n***\n\n#### Setup and Pre-requisites\nThe sample pipeline is written in Java and requires Java 8 and [Apache Maven](https:\/\/maven.apache.org\/).\n\nThe following high-level steps describe the setup 
needed to run this example:\n\n1. Create a Cloud Pub\/Sub topic and subscription for consuming the documents to be indexed.\n2. Create a Cloud Pub\/Sub topic and subscription for publishing the invalid documents.\n3. Create a Cloud Bigtable table to store the metadata. The metadata can be stored in a single column family (e.g. *cf*).\n4. Identify the following relevant fields for the existing Elasticsearch index where the documents will be published.\n\n| Field     | Value                          | Example             |\n| :-------- | :----------------------------- | :------------------ |\n| addresses | *comma-separated-es-addresses* | http:\/\/x.x.x.x:9200 |\n| index     | *es-index-name*                | prod_index          |\n| type      | *es-index-type*                | prod                |\n\n5. Generate sample data and metadata using the helper pipeline as described earlier.\n\n##### Build and Execute\nThe sample pipeline can be executed from the folder containing the [pom.xml](ElasticIndexer\/pom.xml) file:\n```bash\nmvn compile exec:java -Dexec.mainClass=com.google.cloud.pso.IndexerMain -Dexec.args=\" \\\n--runner=DataflowRunner \\\n--project=[GCP_PROJECT_ID] \\\n--stagingLocation=[GCS_STAGING_BUCKET] \\\n--inputSubscription=[INPUT_Pub\/Sub_SUBSCRIPTION] \\\n--idField=[DOC_ID_FIELD] \\\n--addresses=[ES_ADDRESSES] \\\n--index=[ES_INDEX_NAME] \\\n--type=[ES_INDEX_TYPE] \\\n--rejectionTopic=[Pub\/Sub_REJECTED_DOCS_TOPIC] \\\n--instanceId=[BIGTABLE_INSTANCE_ID] \\\n--tableName=[BIGTABLE_TABLE_NAME] \\\n--columnFamily=[BIGTABLE_COLUMN_FAMILY] \\\n--columnQualifier=[BIGTABLE_COLUMN_QUALIFIER]\"\n```\n\n***\n\n##### Full code examples\n\nReady to dive deeper? 
Check out the complete code [here](ElasticIndexer\/src\/main\/java\/com\/google\/cloud\/pso\/IndexerMain.java)\n","site":"GCP"}
{"questions":"GCP present as bigquery user defined functions to deploy their custom services or libraries written in any language other than SQL and javascript which are not BigQuery Remote Function Sample Code This repository has string format Java code which can be deployed on cloud run or cloud function and can be invoked BQ remote functions provide direct integration with cloud function or cloud run allows user using SQL queries from BigQuery","answers":"# BigQuery Remote Function Sample Code\n\n[BigQuery remote functions](https:\/\/cloud.google.com\/bigquery\/docs\/reference\/standard-sql\/remote-functions) allow users\nto deploy custom services or libraries written in any language other than SQL and JavaScript, which are not\navailable as BigQuery user-defined functions.\nBQ remote functions provide direct integration with Cloud Functions and Cloud Run.\n\nThis repository contains a string-formatting Java sample, which can be deployed on Cloud Run or Cloud Functions, and can be invoked\nusing SQL queries from BigQuery.\n\nBigQuery sends an HTTP POST request to Cloud Run\nin the [input JSON format](https:\/\/cloud.google.com\/bigquery\/docs\/reference\/standard-sql\/remote-functions#input_format),\nexpects the endpoint to respond\nin the [output JSON format](https:\/\/cloud.google.com\/bigquery\/docs\/reference\/standard-sql\/remote-functions#output_format),\nand in case of failure, the endpoint sends back error messages.\n\n### Deployment Steps on Cloud Run:\n\n\n1. Set environment variables:\n    ```\n    PROJECT_NAME=$(gcloud config get-value project)\n    INSTANCE_NAME=string-format\n    REGION=us-central1\n    JAVA_VERSION=java11\n    SERVICE_ENTRY_POINT=com.google.cloud.pso.bqremotefunc.StringFormat\n    ```\n2. Clone this git repo in your GCP project and go to the directory:\n    ```\n   cd  examples\/bq-remote-function\/string_formatter\n   ```\n3. 
Deploy the code as a gen2 Cloud Function (which runs on Cloud Run) using the commands below:\n    ```\n       gcloud functions deploy $INSTANCE_NAME \\\n       --project=$PROJECT_NAME \\\n       --gen2 \\\n       --region=$REGION \\\n       --runtime=$JAVA_VERSION \\\n       --entry-point=$SERVICE_ENTRY_POINT \\\n       --trigger-http\n    ```\n4. Copy the HTTPS URL from the Cloud Run UI.\n\n5. Create a remote function in BigQuery.\n    1. Create a connection of type **CLOUD_RESOURCE**.\n\n   Replace the connection name in the command below and run it in Cloud Shell:\n   ```\n   bq mk --connection \\\n    --display_name=<connection-name> \\\n    --connection_type=CLOUD_RESOURCE \\\n    --project_id=$PROJECT_NAME \\\n    --location=$REGION  <connection-name>\n\n   ```\n    2. Create a remote function in the BigQuery editor with the query below (replace the variables based on your environment):\n    ```\n   CREATE OR REPLACE FUNCTION `<project-id>.<dataset>.<function-name>`\n   (text STRING) RETURNS STRING\n    REMOTE WITH CONNECTION `<BQ connection name>`\n    OPTIONS (endpoint = '<HTTP end point of the cloud run service>');\n   ```\n\n6. Use the remote function in a query just like any other user-defined function.\n\n```\n   SELECT\n   `<project-id>.<dataset>.<function-name>`(col_name)\n   FROM\n   (SELECT\n   *\n   FROM\n   UNNEST(['text1','text2','text3']) AS col_name);\n```\n\n7. 
Expected Output\n\n```\ntext1_test\ntext2_test\ntext3_test\n```\n\n### Logging and Monitoring the Cloud Run service\n\nGo to Cloud Run in the GCP console, click the instance created,\nand select LOGS on the action bar;\nwhen the instance is invoked from BigQuery, you will see the logs printed.\nIn parallel, in the METRICS section, you can check the request count, container utilisation and billable time.\n\n### Cost\n\nThe cost can be calculated using the [pricing calculator](https:\/\/cloud.google.com\/products\/calculator) for both Cloud Run\nand BigQuery utilization by entering CPU, memory and concurrent request counts.\n\n### Clean up\n\nTo clean up, delete the Cloud Run instance and the BQ remote function.\n\n### Limitations\n\nBQ remote functions do not\nsupport [payloads larger than 10 MB](https:\/\/cloud.google.com\/bigquery\/quotas#query_jobs:~:text=Maximum%20request%20size,like%20query%20parameters)\nand accept only\ncertain [data types](https:\/\/cloud.google.com\/bigquery\/docs\/reference\/standard-sql\/remote-functions#limitations).\n\n### Next steps\n\nFor more Cloud Run samples beyond Java, see the main list in\nthe [Cloud Run Samples repository](https:\/\/github.com\/GoogleCloudPlatform\/cloud-run-samples).\n","site":"GCP"}
{"questions":"GCP Uploading files directly to Google Cloud Storage by using Signed URL This is an architecture for uploading files directly to Google Cloud Storage by using Signed URL Overview This code implements the following architecture","answers":"# Uploading files directly to Google Cloud Storage by using Signed URL\n\nThis is an architecture for uploading files directly to Google Cloud Storage by using a Signed URL.\n\n## Overview\n\nThis code implements the following architecture:\n\n![architecture diagram](.\/architecture.png)\n\nA key characteristic of the architecture is that the entire flow, from file upload to delivery, is handled by serverless components. Let\u2019s walk through the processing steps in order.\n\n1. Generates a Signed URL that allows a PUT request to be executed only for a specific bucket and object, for a user authenticated by the application's domain logic.\n2. The user uploads a file to that specific bucket and object by using the given Signed URL.\n3. When the file upload to GCS completes, a Google Cloud Functions (GCF) function is triggered by the finalize event. GCF validates the uploaded file.\n4. After step 3 confirms that the file is in an image format and of appropriate size, the image is annotated via the Cloud Vision API to filter inappropriate content.\n5. Once both validations (steps 3 and 4) are complete, the image file is copied from the upload bucket to the distribution bucket.\n6. 
The copied image file is now available to the public.\n\n## Usage\n\nFirst off, you should check the requirements for realizing this system.\n\n## Requirements\n\n### API\n\nIn order to realize this system, you need to enable the following APIs:\n\n- Cloud Storage API\n- Cloud Functions API\n- Identity and Access Management (IAM) API\n  - If you are going to use your own service account and its private key instead of the `signBlob` API, you don't need to enable this API.\n- Cloud Vision API\n\n### Service Account\n\nIn order to generate the Signed URL on App Engine Standard, you need to prepare a service account for signing.\nThe service account must have the following permissions:\n\n- `storage.buckets.get`\n- `storage.objects.create`\n- `storage.objects.delete`\n\nYou also need to grant your service account the `Service Account Token Creator` role.\n\n### Step.1 Create uploadable and distribution buckets\n\nBefore deploying applications, you should create two buckets for use in this system.\n\n```sh\nREGION=\"<REGION>\"\nPROJECT_ID=\"<PROJECT ID>\"\nUPLOADABLE_BUCKET=\"<UPLOADABLE BUCKET NAME>\"\nDISTRIBUTION_BUCKET=\"<DISTRIBUTION BUCKET NAME>\"\nLIFECYCLE_POLICY_FILE=\".\/lifecycle.json\"\n\n# Creates the uploadable bucket\ngsutil mb -p $PROJECT_ID -l $REGION --retention 900s gs:\/\/$UPLOADABLE_BUCKET\n# Creates the bucket for distribution\ngsutil mb -p $PROJECT_ID -l $REGION gs:\/\/$DISTRIBUTION_BUCKET\n# Set the lifecycle policy for the uploadable bucket\ngsutil lifecycle set $LIFECYCLE_POLICY_FILE gs:\/\/$UPLOADABLE_BUCKET\n# Publish all objects to all users\ngsutil iam ch allUsers:objectViewer gs:\/\/$DISTRIBUTION_BUCKET\n```\n\n### Step.2 Deploy to App Engine Standard\n\nTo generate the Signed URL, you need to deploy the code placed in `appengine`.\nMake sure environment variables have appropriate values in `app.yaml` before deploying.\n\n```sh\ncd appengine\n# Make sure environment variables have appropriate values in app.yaml\ngcloud app deploy\n```\n\n### Step.3 Deploy to 
Google Cloud Functions\n\nTo validate files and copy files to distribution bucket, you need to deploy the code placed in `function`.\nMake sure constant variables have appropriate values in `function\/main.go`.\n\n```sh\nUPLOADABLE_BUCKET=\"<UPLOADABLE_BUCKET>\"\ncd function\n# Make sure constant variables have appropriate values in `function\/main.go`.\ngcloud functions deploy UploadImage --runtime go111 --trigger-resource $UPLOADABLE_BUCKET --trigger-event google.storage.object.finalize --retry\n```\n\n### Step.4 Try to upload your image!\n\nBy executing the following code, you can try to upload sample image by using Signed URL.\n\n```go\npackage main\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\/ioutil\"\n\t\"log\"\n\t\"net\/http\"\n\t\"net\/url\"\n\t\"strings\"\n)\n\nconst signerUrl = \"<APPENGINE_URL>\"\n\nfunc getSignedURL(target string, values url.Values) (string, error) {\n\tresp, err := http.PostForm(target, values)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tdefer resp.Body.Close()\n\tb, err := ioutil.ReadAll(resp.Body)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn strings.TrimSpace(string(b)), nil\n}\n\nfunc main() {\n\t\/\/ Get signed url by requesting API server hosted on App Engine.\n\tu, err := getSignedURL(signerUrl, url.Values{\"content_type\": {\"image\/png\"}, \"ext\": {\"png\"}})\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfmt.Printf(\"Signed URL here: %q\\n\", u)\n\n\tb, err := ioutil.ReadFile(\".\/sample.png\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\treq, err := http.NewRequest(\"PUT\", u, bytes.NewReader(b))\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\treq.Header.Add(\"Content-Type\", \"image\/png\")\n\tclient := new(http.Client)\n\tresp, err := client.Do(req)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tfmt.Println(resp)\n}\n```\n\nAnd then you can confirm that the sample image file is now published by accessing 
`https:\/\/console.cloud.google.com\/storage\/browser\/$DISTRIBUTION_BUCKET?project=PROJECT_ID`.","site":"GCP"}
{"questions":"GCP Context De id Pipeline Design Document Objective The DLP De identification Pipeline aims to identify and anonymize sensitive data stored in BigQuery or Google Cloud Storage GCS The pipeline reads data from a source de identifies sensitive information using Cloud DLP and writes the de identified data to a corresponding location in the specified destination This enables the secure migration of data to lower environments such as development and testing where developers or other users require data access but sensitive information needs to be removed to mitigate privacy and security risks Background","answers":"## De-id Pipeline Design Document\n\n### Context\n\n#### Objective\n\nThe DLP De-identification Pipeline aims to identify and anonymize sensitive data stored in BigQuery or Google Cloud Storage (GCS). The pipeline reads data from a source, de-identifies sensitive information using Cloud DLP, and writes the de-identified data to a corresponding location in the specified destination. This enables the secure migration of data to lower environments, such as development and testing, where developers or other users require data access, but sensitive information needs to be removed to mitigate privacy and security risks.\n\n#### Background\n\nProduction environments often contain sensitive information or Personally Identifiable Information (PII), but lower environments require de-identified data to prevent unauthorized access and potential breaches. Therefore, migrating de-identified data to these environments is crucial for purposes including testing, development, and analysis.\n\nIdeally, de-identified data should closely resemble the source data to facilitate the accurate replication of processes and scenarios found in production. 
This allows users in lower environments to work with realistic data without compromising sensitive information.\n\nGoogle Cloud's Sensitive Data Protection (also known as Data Loss Prevention or DLP) service offers built-in features for identifying and de-identifying sensitive data in Cloud Storage and integrates with services like BigQuery. However, it has limitations regarding file types and sizes and lacks a unified solution that seamlessly handles both BigQuery and Cloud Storage data de-identification. This pipeline addresses these limitations by providing a comprehensive and scalable solution for de-identifying data across both BigQuery and GCS. \n\n### Design\n\n#### Overview\n\nThe DLP De-identification Pipeline is a Dataflow pipeline that anonymizes sensitive data residing in BigQuery or Google Cloud Storage (GCS). It offers a comprehensive solution for migrating data to lower environments while ensuring privacy and security. \n![GCS mode diagram](diagrams\/design_diagram_gcs.png)\n![BQ mode diagram](diagrams\/design_diagram_gcs.png)\n\nThe De-id pipeline works as follows:\n\n1.  **Data Ingestion**: The pipeline reads data from either BigQuery tables or various file formats stored in GCS. \n\n2.  **De-identification**: Leveraging Cloud DLP\u2019s powerful de-identification capabilities, the pipeline anonymizes sensitive data within the ingested data. This includes techniques like:\n\n    *   **Format-Preserving Encryption:** This technique encrypts sensitive data while maintaining its original format and referential integrity. This is crucial for preserving data utility in lower environments.\n    *   **Other De-identification Techniques:** Cloud DLP offers a range of other de-identification techniques, such as masking, redaction, tokenization, and pseudonymization, which can be configured based on specific needs and privacy requirements. \n3.  
**Output**: The pipeline writes the de-identified data to the specified destination, mirroring the source structure and format. This ensures consistency and facilitates seamless integration with downstream processes in lower environments. \n\nThe De-id pipeline offers several **key benefits**:\n\n*   **Comprehensive Solution:** Handles both structured data from BigQuery and unstructured\/semi-structured data from GCS.\n*   **Scalability and Reliability:** Built on Dataflow, the pipeline provides scalability and reliability for handling large datasets and heavy de-identification tasks. \n*   **Data Utility:** Format-preserving encryption and other de-identification techniques ensure that the anonymized data remains useful for testing, development, and analysis in lower environments.\n*   **Security and Privacy:** By de-identifying sensitive data, the pipeline helps protect sensitive information and comply with privacy regulations. \n\nThe De-id pipeline offers a robust and efficient way to create secure and usable data copies for lower environments, enabling various data-driven activities without compromising sensitive information. \n\n#### Detailed Design\n\n##### DLP Templates\n\nThis solution employs templates to streamline the de-identification process for sensitive data. By configuring de-identification settings within a template, a reusable blueprint is established. This eliminates the need for repetitive configuration, allowing de-identification jobs to be executed multiple times with ease. \n\nTo ensure referential integrity while masking sensitive information, a combination of format-preserving encryption (FPE) and regular expressions (regex) is utilized. This approach enables the original data pattern to be maintained even after de-identification. \n\n**Illustrative Example: Customer ID**\n\nConsider a scenario where a Customer ID follows the format \"A123456\" (i.e., \"A\" followed by a 6-digit number) and is classified as PII. 
A custom PII info type named \"CUSTOMER\\_ID\" can be configured within the inspection template, utilizing the following regex:\n\n```json\n{\n  \"info_type\": {\n    \"name\": \"CUSTOMER_ID\"\n  },\n  \"regex\": {\n    \"group_indexes\": [2],\n    \"pattern\": \"(A)(\\\\d{6})\"\n  }\n}\n```\n\nIn this regex, two group indexes are defined, but only the second group index (the 6-digit number) is designated as sensitive. This ensures that during de-identification, only the numerical portion undergoes transformation. FPE guarantees that the output remains a 6-digit number, and by preserving the prefix \"A,\" the overall pattern of the Customer ID is retained.\n\n**FPE Configuration**\n\nHere\u2019s an example of how FPE can be configured within the de-identification template for this Customer ID: \n\n```json\n{\n  \"primitive_transformation\": {\n    \"crypto_replace_ffx_fpe_config\": {\n      \"crypto_key\": <CRYPTO_KEY>,\n      \"common_alphabet\": \"NUMERIC\"\n    }\n  },\n  \"info_types\": [\n    {\n      \"name\": \"CUSTOMER_ID\"\n    }\n  ]\n}\n```\n\n**Template Configuration for this Example**\n\nThe table below shows the configured PII info types and how they are de-identified using this solution. The inspection and de-identification templates can be customized to suit your specific needs and integrated into your data processing pipeline. 
\n\n| **PII Info Type** | **Original** | **De-identified** |\n| :----------------- | :----------- | :--------------- |\n| Customer ID        | A935492      | A678512          |\n| Email Address      | email@example.net | 9jRsv@example.net    |\n| Credit Card Number | 3524882434259679 | 1839406548854298     |\n| SSN                | 298-34-4337  | 515-57-9132       |\n| Date               | 1979-10-29  | 1982-08-24       |","site":"GCP","answers_cleaned":"   De id Pipeline Design Document      Context       Objective  The DLP De identification Pipeline aims to identify and anonymize sensitive data stored in BigQuery or Google Cloud Storage  GCS   The pipeline reads data from a source  de identifies sensitive information using Cloud DLP  and writes the de identified data to a corresponding location in the specified destination  This enables the secure migration of data to lower environments  such as development and testing  where developers or other users require data access  but sensitive information needs to be removed to mitigate privacy and security risks        Background  Production environments often contain sensitive information or Personally Identifiable Information  PII   but lower environments require de identified data to prevent unauthorized access and potential breaches  Therefore  migrating de identified data to these environments is crucial for purposes including testing  development  and analysis   Ideally  de identified data should closely resemble the source data to facilitate the accurate replication of processes and scenarios found in production  This allows users in lower environments to work with realistic data without compromising sensitive information   Google Cloud s Sensitive Data Protection  also known as Data Loss Prevention or DLP  service offers built in features for identifying and de identifying sensitive data in Cloud Storage and integrates with services like BigQuery  However  it has limitations regarding file types and sizes and 
lacks a unified solution that seamlessly handles both BigQuery and Cloud Storage data de identification  This pipeline addresses these limitations by providing a comprehensive and scalable solution for de identifying data across both BigQuery and GCS        Design       Overview  The DLP De identification Pipeline is a Dataflow pipeline that anonymizes sensitive data residing in BigQuery or Google Cloud Storage  GCS   It offers a comprehensive solution for migrating data to lower environments while ensuring privacy and security     GCS mode diagram  diagrams design diagram gcs png    BQ mode diagram  diagrams design diagram gcs png   The De id pipeline works as follows   1     Data Ingestion    The pipeline reads data from either BigQuery tables or various file formats stored in GCS    2     De identification    Leveraging Cloud DLP s powerful de identification capabilities  the pipeline anonymizes sensitive data within the ingested data  This includes techniques like             Format Preserving Encryption    This technique encrypts sensitive data while maintaining its original format and referential integrity  This is crucial for preserving data utility in lower environments            Other De identification Techniques    Cloud DLP offers a range of other de identification techniques  such as masking  redaction  tokenization  and pseudonymization  which can be configured based on specific needs and privacy requirements   3     Output    The pipeline writes the de identified data to the specified destination  mirroring the source structure and format  This ensures consistency and facilitates seamless integration with downstream processes in lower environments    The De id pipeline offers several   key benefits           Comprehensive Solution    Handles both structured data from BigQuery and unstructured semi structured data from GCS        Scalability and Reliability    Built on Dataflow  the pipeline provides scalability and reliability for handling large 
datasets and heavy de identification tasks         Data Utility    Format preserving encryption and other de identification techniques ensure that the anonymized data remains useful for testing  development  and analysis in lower environments        Security and Privacy    By de identifying sensitive data  the pipeline helps protect sensitive information and comply with privacy regulations    The De id pipeline offers a robust and efficient way to create secure and usable data copies for lower environments  enabling various data driven activities without compromising sensitive information         Detailed Design        DLP Templates  This solution employs templates to streamline the de identification process for sensitive data  By configuring de identification settings within a template  a reusable blueprint is established  This eliminates the need for repetitive configuration  allowing de identification jobs to be executed multiple times with ease    To ensure referential integrity while masking sensitive information  a combination of format preserving encryption  FPE  and regular expressions  regex  is utilized  This approach enables the original data pattern to be maintained even after de identification      Illustrative Example  Customer ID    Consider a scenario where a Customer ID follows the format  A123456   i e    A  followed by a 6 digit number  and is classified as PII  A custom PII info type named  CUSTOMER  ID  can be configured within the inspection template  utilizing the following regex      json      info type          name    CUSTOMER ID          regex          group indexes         pattern     A    d 6               In this regex  two group indexes are defined  but only the second group index  the 6 digit number  is designated as sensitive  This ensures that during de identification  only the numerical portion undergoes transformation  FPE guarantees that the output remains a 6 digit number  and by preserving the prefix  A   the overall pattern 
of the Customer ID is retained     FPE Configuration    Here s an example of how FPE can be configured within the de identification template for this Customer ID       json      primitive transformation          crypto replace ffx fpe config            crypto key    CYPTO KEY         common alphabet    NUMERIC                info types                  name    CUSTOMER ID                     Template Configuration for this Example    This table below shows the PII configured and how they are de identified using this solution  The inspection and de identification templates can be customized to suit your specific needs and integrate them into your data processing pipeline        PII Info Type       Original       De identified                                                                Customer ID          A935492        A678512              Email Address        email example net   9jRsv example net        Credit Card Number   3524882434259679   1839406548854298         SSN                  298 34 4337    515 57 9132           Date                 1979 10 29    1982 08 24        "}
{"questions":"GCP This Beam pipeline reads data from either Google Cloud Storage GCS or BigQuery BQ de identifies sensitive data using DLP and writes the de identified data to the corresponding destination in GCS or BQ The pipeline supports two modes To learn more read DLP De identification Pipeline GCS mode For processing files stored in GCS in Avro CSV TXT DAT and JSON formats Setup BigQuery mode For processing data stored in BigQuery tables","answers":"# DLP De-identification Pipeline\n\nThis Beam pipeline reads data from either Google Cloud Storage (GCS) or BigQuery (BQ), de-identifies sensitive data using DLP, and writes the de-identified data to the corresponding destination in GCS or BQ. The pipeline supports two modes:\n\n* **GCS mode:** For processing files stored in GCS in Avro, CSV, TXT, DAT, and JSON formats.\n* **BigQuery mode:** For processing data stored in BigQuery tables.\n\nTo learn more, read [DOC.md](DOC.md)\n\n## Setup\n\nBefore running the pipeline, ensure the following prerequisites are met:\n\n* **DLP Inspect Template:** A DLP inspect template that defines the types of sensitive data to be identified. See [steps](src\/dlp\/templates\/README.md#setup-and-deploy-the-templates) for creating a DLP template. \n* **DLP De-identify Template:** A DLP de-identify template that defines how to transform the sensitive data. See [steps](src\/dlp\/templates\/README.md#setup-and-deploy-the-templates) for creating a DLP template. \n* **Service Account:** A service account with the necessary permissions to access resources in both the source and destination projects. 
This service account should have the following roles:\n    * **Source Project:** Dataflow Admin, Dataflow Worker, Storage Object Admin, DLP Administrator, BigQuery Data Editor, BigQuery Job User.\n    * **Destination Project:** Storage Object Viewer, BigQuery Data Editor, BigQuery Job User.\n* **BigQuery Schema (BigQuery mode only):**  The dataset and tables, including their schema, should already exist in the destination project.\n\n\n## Pipeline Options\n| Pipeline Option | Description |\n|---|---|\n| `project` | The Google Cloud project ID. |\n| `region` | The Google Cloud region where the Dataflow job will run. |\n| `job_name` | The name of the Dataflow job. |\n| `service_account` | The service account used to run the Dataflow job. |\n| `machine_type` | The machine type for Dataflow workers. |\n| `max_num_workers` | The maximum number of Dataflow workers. |\n| `job_dir` | The GCS location for staging Dataflow job files. |\n| `prod` | A boolean flag indicating whether the pipeline is running in production mode. 'True' runs the pipeline with Dataflow runner while 'False' runs the pipeline locally.|\n| `inspect_template` | The name of the DLP inspect template. |\n| `deidentify_template` | The name of the DLP de-identify template. |\n| `dlp_batch_size` | The batch size for processing data with DLP. The default is 100.|\n| `mode` | The mode of operation: \"gcs\" for Google Cloud Storage or \"bq\" for BigQuery. |\n| `input_dir` | (GCS mode) The GCS location of the input data. |\n| `output_dir` | (GCS mode) The GCS location for the output data. It can be in a different project from the input |\n| `input_projects` | (BigQuery mode) A list of Google Cloud project IDs containing the input BigQuery tables. |\n| `output_projects` | (BigQuery mode) A list of Google Cloud project IDs where the output BigQuery tables will be written. |\n| `config_file` | YAML config file with all the pipeline options set to avoid passing a lot of options in a command. |\n\n## Run\n1. 
To avoid passing many flags in the run command, fill [config.yaml](config.yaml) with the parameters.\n\n    - Example `config.yaml` to deidentify data in GCS\n        ```yaml\n        project: project-id\n        region: us-central1\n        job_name: dlp-deid-pipeline\n        service_account: dlp-deid-pipeline-sa@project-id.iam.gserviceaccount.com\n        machine_type: n1-standard-2\n        max_num_workers: 30\n        job_dir: gs:\/\/staging-bucket\n        prod: False\n\n        # DLP Params\n        inspect_template: projects\/project-id\/locations\/global\/inspectTemplates\/inspect_template\n        deidentify_template: projects\/project-id\/locations\/global\/deidentifyTemplates\/deidentify_template\n        dlp_batch_size: 100\n\n        # Either \"gcs\" or \"bq\"\n        mode: gcs\n\n        # GCS mode required params\n        input_dir: gs:\/\/input-bucket\/dir\n        output_dir: gs:\/\/output-bucket\/dir\n        ```\n\n    - Example `config.yaml` to deidentify data in BigQuery\n        ```yaml\n        project: project-id\n        region: us-central1\n        job_name: dlp-deid-pipeline\n        service_account: dlp-deid-pipeline-sa@project-id.iam.gserviceaccount.com\n        machine_type: n1-standard-2\n        max_num_workers: 30\n        job_dir: gs:\/\/staging-bucket\n\n        # DLP Params\n        inspect_template: projects\/project-id\/locations\/global\/inspectTemplates\/inspect_template\n        deidentify_template: projects\/project-id\/locations\/global\/deidentifyTemplates\/deidentify_template\n        dlp_batch_size: 100\n\n        # Either \"gcs\" or \"bq\"\n        mode: bq\n\n        # BigQuery mode required params\n        input_projects: input_projects1,input_projects2\n        output_projects: output_projects1,output_projects2\n        ```\n\n2. Create a virtual env\n    ```bash\n    python3 -m venv .venv\n    source .venv\/bin\/activate\n    ```\n\n3. 
Run\n    - Run locally\n        ```\n        python3 src.run --config_file config.yaml\n        ```\n    - Run on Dataflow\n        ```\n        python3 src.run --config_file config.yaml --prod true\n        ```","site":"GCP","answers_cleaned":"  DLP De identification Pipeline  This Beam pipeline reads data from either Google Cloud Storage  GCS  or BigQuery  BQ   de identifies sensitive data using DLP  and writes the de identified data to the corresponding destination in GCS or BQ  The pipeline supports two modes       GCS mode    For processing files stored in GCS in Avro  CSV  TXT  DAT  and JSON formats      BigQuery mode    For processing data stored in BigQuery tables   To learn more  read  DOC md  DOC md      Setup  Before running the pipeline  ensure the following prerequisites are met       DLP Inspect Template    A DLP inspect template that defines the types of sensitive data to be identified  See  steps  src dlp templates README md setup and deploy the templates  for creating a DLP template       DLP De identify Template    A DLP de identify template that defines how to transform the sensitive data  See  steps  src dlp templates README md setup and deploy the templates  for creating a DLP template       Service Account    A service account with the necessary permissions to access resources in both the source and destination projects  This service account should have the following roles          Source Project    Dataflow Admin  Dataflow Worker  Storage Object Admin  DLP Administrator  BigQuery Data Editor  BigQuery Job User          Destination Project    Storage Object Viewer  BigQuery Data Editor  BigQuery Job User      BigQuery Schema  BigQuery mode only      The dataset and tables  including their schema  should already exist in the destination project       Pipeline Options   Pipeline Option   Description                project    The Google Cloud project ID       region    The Google Cloud region where the Dataflow job will run       job name    The 
name of the Dataflow job       service account    The service account used to run the Dataflow job       machine type    The machine type for Dataflow workers       max num workers    The maximum number of Dataflow workers       job dir    The GCS location for staging Dataflow job files       prod    A boolean flag indicating whether the pipeline is running in production mode   True  runs the pipeline with Dataflow runner while  False  runs the pipeline locally      inspect template    The name of the DLP inspect template       deidentify template    The name of the DLP de identify template       dlp batch size    The batch size for processing data with DLP  The default is 100      mode    The mode of operation   gcs  for Google Cloud Storage or  bq  for BigQuery       input dir     GCS mode  The GCS location of the input data       output dir     GCS mode  The GCS location for the output data  It can be in a different project from the input      input projects     BigQuery mode  A list of Google Cloud project IDs containing the input BigQuery tables       output projects     BigQuery mode  A list of Google Cloud project IDs where the output BigQuery tables will be written       config file    YAML config file with all the pipeline options set to avoid passing a lot of options in a command        Run 1  To avoid passing many flags in the run command  fill  config yaml  config yaml  with the paramaters         Example  config yaml   to deidentify data in GCS            yaml         project  project id         region  us central1         job name  dlp deid pipeline         service account  dlp deid pipeline sa project id iam gserviceaccount com         machine type  n1 standard 2         max num workers  30         job dir  gs   staging bucket         prod  False            DLP Params         inspect template  projects project id locations global inspectTemplates inspect template         deidentify template  projects project id locations global deidentifyTemplates 
deidentify template         dlp batch size  100            Either  gcs  or  bq          mode  gcs            GCS mode required paramas         input dir  gs   input bucket dir         output dir  gs   output bucket dir                    Example  config yaml  to deidentify data in BigQuery            yaml         project  project id         region  us central1         job name  dlp deid pipeline         service account  dlp deid pipeline sa project id iam gserviceaccount com         machine type  n1 standard 2         max num workers  30         job dir  gs   staging bucket            DLP Params         inspect template  projects project id locations global inspectTemplates inspect template         deidentify template  projects project id locations global deidentifyTemplates deidentify template         dlp batch size  100            Either  gcs  or  bq          mode  bq            BigQuery mode required params         input projects   input projects1 input projects2         output projects  output projects1 output projects2          2  Create virtual env        bash     python3  m venv  venv     source  venv bin activate          3  Run       Run locally                     python3 src run   config file config yaml                   Run on Dataflow                     python3 src run   config file config yaml   prod true            "}
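The `dlp_batch_size` option described above bounds how many records go into each DLP request. A minimal sketch of that batching idea (the `batch_rows` helper is hypothetical, for illustration only, and is not the actual pipeline code):

```python
from typing import Iterable, Iterator, List


def batch_rows(rows: Iterable[dict], batch_size: int = 100) -> Iterator[List[dict]]:
    """Group rows into fixed-size batches, mirroring what a
    dlp_batch_size-style option would control before each DLP
    de-identify request (default 100, matching the doc)."""
    batch: List[dict] = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch
```

For example, five rows with `batch_size=2` would produce batches of sizes 2, 2, and 1, so the final partial batch is still sent to DLP rather than dropped.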
{"questions":"GCP Credit Card Number 3524882434259679 1839406548854298 are reusable configurations that tell DLP how to inspect de identify or re identify your data PII Info Type Original De identified DLP Templates Email Address email example net 9jRsv example net Customer ID A935492 A678512 This solution considers the following as sensitive data and provides the expected outcome","answers":"# DLP Templates\n\n[Templates](https:\/\/cloud.google.com\/sensitive-data-protection\/docs\/concepts-templates) are reusable configurations that tell DLP how to inspect, de-identify, or re-identify your data.\nThis solution considers the following as sensitive data and provides the expected outcome:\n\n| PII Info Type    | Original          | De-identified      |\n|-----------------|-------------------|-------------------|\n| Customer ID     | A935492          | A678512          |\n| Email Address   | email@example.net | 9jRsv@example.net |\n| Credit Card Number | 3524882434259679 | 1839406548854298 |\n| SSN             | 298-34-4337      | 515-57-9132      |\n| Date            | 1979-10-29      | 1982-08-24      |\n\nIn this solution, the templates are created using [Cloud Functions](https:\/\/cloud.google.com\/functions\/1stgendocs\/concepts\/overview).\n\n## Setup and Deploy the Templates\n- Set the Project ID and Region\n  ```\n  PROJECT_ID=<project_id>\n  REGION=<region>\n  PROJECT_NUMBER=<project_number>\n  ```\n- Enable required APIs\n  ```\n  gcloud services enable \\\n    cloudfunctions.googleapis.com \\\n    secretmanager.googleapis.com \\\n    dlp.googleapis.com \\\n    cloudkms.googleapis.com\n  ```\n- Ensure the service account has the required permissions (since a 1st Gen function is used, the default service account is the App Engine service account). The `--role` flag is not repeatable, so grant each role with its own command:\n  ```\n  for role in roles\/secretmanager.secretAccessor roles\/dlp.user roles\/cloudkms.cryptoKeyEncrypterDecrypter; do\n    gcloud projects add-iam-policy-binding $PROJECT_ID \\\n      --member=\"serviceAccount:${PROJECT_ID}@appspot.gserviceaccount.com\" \\\n      --role=\"$role\"\n  done\n  ```\n- Create a KMS key ring\n  ```\n  gcloud kms keyrings create \"dlp-keyring\" \\\n    --location \"global\"\n  ```\n- Create a key\n  ```\n  gcloud kms keys create \"dlp-key\" \\\n    --location \"global\" \\\n    --keyring \"dlp-keyring\" \\\n    --purpose \"encryption\"\n  ```\n- Create a 256-bit AES key using openssl:\n  ```\n  openssl rand -out \".\/aes_key.bin\" 32\n  ```\n- Encode the key as a base64 string and wrap it using the Cloud KMS key\n  ```\n  curl \"https:\/\/cloudkms.googleapis.com\/v1\/projects\/$PROJECT_ID\/locations\/global\/keyRings\/dlp-keyring\/cryptoKeys\/dlp-key:encrypt\" \\\n    --request \"POST\" \\\n    --header \"Authorization:Bearer $(gcloud auth application-default print-access-token)\" \\\n    --header \"content-type: application\/json\" \\\n    --data \"{\\\"plaintext\\\": \\\"$(base64 -i .\/aes_key.bin)\\\"}\"\n  ```\n- Store the wrapped key in Secret Manager\n  ```\n  echo -n \"<ciphertext from previous result>\" | gcloud secrets create dlp-wrapped-key \\\n    --replication-policy=\"automatic\" \\\n    --data-file=-\n  ```\n- Set the key name and the wrapped key\n  ```\n  KMS_KEY_NAME=projects\/$PROJECT_ID\/locations\/global\/keyRings\/dlp-keyring\/cryptoKeys\/dlp-key\n  SECRET_NAME=projects\/$PROJECT_NUMBER\/secrets\/dlp-wrapped-key\n  ```\n- Deploy the function to create an inspect template\n  ```\n  gcloud functions deploy create-inspect-template \\\n    --runtime python311 \\\n    --trigger-http \\\n    --source src\/dlp\/templates\/inspect \\\n    --entry-point main \\\n    --region $REGION\n  ```\n- Deploy the function to create a de-identify template\n  ```\n  gcloud functions deploy create-deidentify-template \\\n    --runtime python311 \\\n    --trigger-http \\\n    --source src\/dlp\/templates\/deidentify \\\n    --entry-point main \\\n    --region $REGION\n  ```\n- Create inspect template\n  ```\n  gcloud functions call 
create-inspect-template \\\n    --data \"{\\\"project\\\": \\\"$PROJECT_ID\\\"}\"\n  ```\n- Create deidentify template\n  ```\n  gcloud functions call create-deidentify-template \\\n    --data \"{\\\"project\\\": \\\"$PROJECT_ID\\\", \\\"kms_key_name\\\": \\\"$KMS_KEY_NAME\\\", \\\"secret_name\\\": \\\"$SECRET_NAME\\\"}\"\n  ```\n\n## Templates\nIf you followed the steps correctly, you should now have two DLP templates in your project. These template names should look like the following:\n```\nprojects\/<project_id>\/locations\/global\/inspectTemplates\/inspect_template\nprojects\/<project_id>\/locations\/global\/deidentifyTemplates\/deidentify_template\n```","site":"GCP","answers_cleaned":"  DLP Templates   Templates  https   cloud google com sensitive data protection docs concepts templates  are reusable configurations that tell DLP how to inspect  de identify  or re identify your data  This solution considers the following as sensitive data and provides the expected outcome     PII Info Type      Original            De identified                                                                      Customer ID       A935492            A678512              Email Address     email example net   9jRsv example net     Credit Card Number   3524882434259679   1839406548854298     SSN               298 34 4337        515 57 9132          Date              1979 10 29        1982 08 24         In this solution  the templates are created using  Cloud Functions  https   cloud google com functions 1stgendocs concepts overview       Setup and Deploy the Templates   Set Project ID and Region         PROJECT ID  project id    REGION  region    PROJECT NUMBER  project number          Enable required APIs          gcloud services enable       cloudfunctions googleapis com       secretmanager googleapis com       dlp googleapis com       cloudkms googleapis com         Ensure the service account has the required permissions  since a 1st Gen function is used  the default service account is the App Engine service account     
     gcloud projects add iam policy binding  PROJECT ID         member  serviceAccount   PROJECT ID  appspot gserviceaccount com          role  roles secretmanager secretAccessor          role  roles dlp user          role  roles cloudkms cryptoKeyEncrypterDecrypter          Create a KMS key ring         gcloud kms keyrings create  dlp keyring          location  global          Create a key         gcloud kms keys create  dlp key          location  global          keyring  dlp keyring          purpose  encryption          Create a 256 bit AES key using openssl          openssl rand  out    aes key bin  32         Encode the key as base64 string and wrap key using Cloud KMS key         curl  https   cloudkms googleapis com v1 projects datastream rm locations global keyRings dlp keyring cryptoKeys dlp key encrypt          request  POST          header  Authorization Bearer   gcloud auth application default print access token           header  content type  application json          data     plaintext        base64  i   aes key bin             Store wrapped key in secret manager         echo  n   ciphertext from previous result     gcloud secrets create dlp wrapped key         replication policy  automatic          data file           Set the key name and the wrapped key         KMS KEY NAME projects  PROJECT locations global keyRings dlp keyring cryptoKeys dlp key   SECRET NAME projects  PROJECT NUMBER secrets tdm dlp wrapped key          Deploy the function to create an inspect template         gcloud functions deploy create inspect template         runtime python311         trigger http         source src dlp templates inspect         entry point main         region  REGION          Deploy the function to create a de identify template         gcloud functions deploy create deidentify template         runtime python311         trigger http         source src dlp templates deidentify         entry point main         region  REGION         Create inspect template      
   gcloud functions call create inspect template         data    project      PROJECT ID             Create deidentify template         gcloud functions call create deidentify template         data    project      PROJECT ID     kms key name     KMS KEY NAME     secret name      SECRET NAME              Templates If you followed the steps correctly  you should now have two DLP templates in your project  These templates names should look like below      projects  project id  locations global inspectTemplates inspect template projects  project id  locations global inspectTemplates deidentify template   "}
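The openssl and base64 steps above produce a 256-bit AES key encoded for wrapping with the Cloud KMS key. As a quick illustration of the same key-generation step in Python (stdlib only; `make_wrappable_key` is a hypothetical name, not part of this solution's source):

```python
import base64
import os


def make_wrappable_key(num_bytes: int = 32) -> str:
    """Generate a random AES key and base64-encode it, matching what
    `openssl rand -out ./aes_key.bin 32` followed by
    `base64 -i ./aes_key.bin` produce before the key is wrapped
    via the Cloud KMS encrypt call."""
    raw_key = os.urandom(num_bytes)  # 32 bytes = 256-bit key
    return base64.b64encode(raw_key).decode("ascii")
```

Decoding the result always yields the original 32 bytes, which is the `plaintext` payload the curl request sends to the KMS `:encrypt` endpoint.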
{"questions":"GCP Sentiment analysis using TensorFlow RNNEstimator on Google Cloud Platform TensorFlow input without preprocessing needed A more detailed guide can be found This code aims at providing a simple example of how to train a RNN model using Overview on Google Cloud Platform The model is designed to handle raw text files in","answers":"# Sentiment analysis using TensorFlow RNNEstimator on Google Cloud Platform.\n\n### Overview.\n\nThis code provides a simple example of how to train an RNN model using the\nTensorFlow\n[RNNEstimator](https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/contrib\/estimator\/RNNEstimator)\non Google Cloud Platform. The model is designed to handle raw text files as\ninput, with no preprocessing needed. A more detailed guide can be found\n[here](https:\/\/docs.google.com\/document\/d\/1CKYdv_LyTcpQw07UH_4iCsxL6IGs6hmsFWwUMv5bwug\/edit#).\n\n### Problem and data.\n\nThe problem is a text classification example where we categorize movie\nreviews into positive or negative sentiment. 
We base this example on the IMDb\ndataset provided from this website:\nhttp:\/\/ai.stanford.edu\/~amaas\/data\/sentiment\/\n\n### Set-up environment.\n\n```sh\nPROJECT_NAME=sentiment_analysis\ngit clone https:\/\/github.com\/GoogleCloudPlatform\/professional-services.git\ncd professional-services\/examples\/cloudml-sentiment-analysis\npython -m virtualenv env\nsource env\/bin\/activate\npython -m pip install -U pip\npython -m pip install -r requirements.txt\n```\n\n### Download data.\n\n```sh\nDATA_PATH=data\nINPUT_DATA=${DATA_PATH}\/aclImdb\/train\nTRAINING_INPUT_DATA=${DATA_PATH}\/training_data\nwget http:\/\/ai.stanford.edu\/~amaas\/data\/sentiment\/aclImdb_v1.tar.gz -P $DATA_PATH\ntar -xzf ${DATA_PATH}\/aclImdb_v1.tar.gz -C $DATA_PATH\n```\n\n### Configure GCP.\n\n```sh\nPROJECT_ID=<...>\nBUCKET_PATH=<...>\ngcloud config set project $PROJECT_ID\n```\n\n### Move data to GCP.\n\n```sh\ngsutil -m cp -r $DATA_PATH\/aclImdb $BUCKET_PATH\nGCP_INPUT_DATA=$BUCKET_PATH\/aclImdb\/train\n```\n\n### Preprocess data.\n\n```sh\nJOB_NAME=training-$(date +\"%Y%m%d-%H%M%S\")\nPROCESSED_DATA=$BUCKET_PATH\/processed_data\/$JOB_NAME\npython run_preprocessing.py \\\n  --input_dir=$GCP_INPUT_DATA \\\n  --output_dir=$PROCESSED_DATA \\\n  --gcp=True \\\n  --project_id=$PROJECT_ID \\\n  --job_name=$JOB_NAME \\\n  --num_workers=8 \\\n  --worker_machine_type=n1-highcpu-4 \\\n  --region=us-central1\n```\n\n### Train model locally.\n\n```sh\nMODEL_NAME=${PROJECT_NAME}_$(date +\"%Y%m%d_%H%M%S\")\nTRAINING_OUTPUT_DIR=models\/$MODEL_NAME\npython -m trainer.task \\\n  --input_dir=$PROCESSED_DATA \\\n  --model_dir=$TRAINING_OUTPUT_DIR\n```\n\n### Train model on GCP.\n\n```sh\nMODEL_NAME=${PROJECT_NAME}_$(date +\"%Y%m%d_%H%M%S\")\nTRAINING_OUTPUT_DIR=${BUCKET_PATH}\/$MODEL_NAME\ngcloud ml-engine jobs submit training $MODEL_NAME \\\n  --module-name trainer.task \\\n  --staging-bucket $BUCKET_PATH \\\n  --package-path $PWD\/trainer \\\n  --region=us-central1 \\\n  --runtime-version 1.12 \\\n  
--config=config_hp_tuning.yaml \\\n  --stream-logs \\\n  -- \\\n  --input_dir $PROCESSED_DATA \\\n  --model_dir $TRAINING_OUTPUT_DIR\n```\n\n### Train model locally with gcloud.\n\n```sh\nMODEL_NAME=${PROJECT_NAME}_$(date +\"%Y%m%d_%H%M%S\")\nTRAINING_OUTPUT_DIR=models\/$MODEL_NAME\ngcloud ml-engine local train \\\n  --module-name=trainer.task \\\n  --package-path=$PWD\/trainer \\\n  -- \\\n  --input_dir=$PROCESSED_DATA \\\n  --model_dir=$TRAINING_OUTPUT_DIR\n```\n\n### Monitor with tensorboard.\n\n```sh\ntensorboard --logdir=$TRAINING_OUTPUT_DIR\n```\n\n### Save model in GCP.\n\n**With HP tuning:**\n```sh\nTRIAL_NUMBER=''\nMODEL_SAVED_NAME=$(gsutil ls ${TRAINING_OUTPUT_DIR}\/${TRIAL_NUMBER}\/export\/exporter\/ | tail -1)\n```\n\n**Without HP tuning:**\n```sh\nMODEL_SAVED_NAME=$(gsutil ls ${TRAINING_OUTPUT_DIR}\/export\/exporter\/ | tail -1)\n```\n\n```sh\ngcloud ml-engine models create $PROJECT_NAME \\\n  --regions us-central1\ngcloud ml-engine versions create $MODEL_NAME \\\n  --model $PROJECT_NAME \\\n  --origin $MODEL_SAVED_NAME \\\n  --runtime-version 1.12\n```\n\n### Make local online predictions.\n\n```sh\ngcloud ml-engine local predict \\\n  --model-dir=${TRAINING_OUTPUT_DIR}\/export\/exporter\/$(ls ${TRAINING_OUTPUT_DIR}\/export\/exporter\/ | tail -1) \\\n  --text-instances=${DATA_PATH}\/aclImdb\/test\/*\/*.txt\n```\n\n### Make online predictions with GCP.\n\n```sh\ngcloud ml-engine predict \\\n  --model=$PROJECT_NAME \\\n  --version=$MODEL_NAME \\\n  --text-instances=$DATA_PATH\/aclImdb\/test\/neg\/0_2.txt\n```\n\n### Move out of sample data to GCS.\n\n```sh\nPREDICTION_DATA_PATH=${BUCKET_PATH}\/prediction_data\ngsutil -m cp -r ${DATA_PATH}\/aclImdb\/test\/ $PREDICTION_DATA_PATH\n```\n\n### Make batch predictions with GCP.\n\n```sh\nJOB_NAME=${PROJECT_NAME}_predict_$(date +\"%Y%m%d_%H%M%S\")\nPREDICTIONS_OUTPUT_PATH=${BUCKET_PATH}\/predictions\/$JOB_NAME\ngcloud ml-engine jobs submit prediction $JOB_NAME \\\n  --model $PROJECT_NAME \\\n  --input-paths 
$PREDICTION_DATA_PATH\/neg\/* \\\n  --output-path $PREDICTIONS_OUTPUT_PATH \\\n  --region us-central1 \\\n  --data-format TEXT \\\n  --version $MODEL_NAME\n```\n\n### Scoring.\n\n```sh\npython scoring.py \\\n  --project_name=$PROJECT_ID \\\n  --model_name=$PROJECT_NAME \\\n  --input_path=$DATA_PATH\/aclImdb\/test \\\n  --size=1000 \\\n  --batch_size=20\n```","site":"GCP"}
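The final `scoring.py` step in the record above compares the model's predictions against the ground-truth labels that the aclImdb layout encodes in directory names (`test/pos`, `test/neg`). As a rough, hypothetical illustration of that idea (not the actual script, whose internals are not shown here), an accuracy check might look like:

```python
from pathlib import Path

def accuracy(predictions):
    """Fraction of correct predictions.

    `predictions` maps a review file path to a predicted label;
    in the aclImdb test layout the true label is the parent
    directory name ('pos' or 'neg').
    """
    correct = sum(
        1 for path, pred in predictions.items()
        if Path(path).parent.name == pred
    )
    return correct / len(predictions)

# Two hypothetical predictions: one right, one wrong.
preds = {
    "data/aclImdb/test/pos/0_10.txt": "pos",
    "data/aclImdb/test/neg/0_2.txt": "pos",
}
print(accuracy(preds))  # 0.5
```

The file paths and labels here are placeholders; the real script also handles batching and sampling (`--size`, `--batch_size`).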
{"questions":"grafana getting started title Get started with Grafana and InfluxDB Learn how to build your first InfluxDB dashboard in Grafana getting started influxdb aliases products weight 400 labels oss enterprise","answers":"---\naliases:\n  - getting-started-influxdb\/\ndescription: Learn how to build your first InfluxDB dashboard in Grafana.\nlabels:\n  products:\n    - enterprise\n    - oss\ntitle: Get started with Grafana and InfluxDB\nweight: 400\n---\n\n# Get started with Grafana and InfluxDB\n\n\n\n#### Get InfluxDB\n\nYou can [download InfluxDB](https:\/\/portal.influxdata.com\/downloads\/) and install it locally or you can sign up for [InfluxDB Cloud](https:\/\/www.influxdata.com\/products\/influxdb-cloud\/). Windows installers are not available for some versions of InfluxDB.\n\n#### Install other InfluxDB software\n\n[Install Telegraf](https:\/\/docs.influxdata.com\/telegraf\/v1.18\/introduction\/installation\/). This tool is an agent that helps you get metrics into InfluxDB. For more information, refer to [Telegraf documentation](https:\/\/docs.influxdata.com\/telegraf\/v1.18\/).\n\nIf you chose to use InfluxDB Cloud, then you should [download and install the InfluxDB Cloud CLI](https:\/\/portal.influxdata.com\/downloads\/). This tool allows you to send command line instructions to your cloud account. For more information, refer to [Influx CLI documentation](https:\/\/docs.influxdata.com\/influxdb\/cloud\/write-data\/developer-tools\/influx-cli\/).\n\n#### Get data into InfluxDB\n\nIf you downloaded and installed InfluxDB on your local machine, then use the [Quick Start](https:\/\/docs.influxdata.com\/influxdb\/v2.0\/write-data\/#quick-start-for-influxdb-oss) feature to visualize InfluxDB metrics.\n\nIf you are using the cloud account, then the wizards will guide you through the initial process. 
For more information, refer to [Configure Telegraf](https:\/\/docs.influxdata.com\/influxdb\/cloud\/write-data\/no-code\/use-telegraf\/#configure-telegraf).\n\n##### Note for Windows users:\n\nWindows users might need to make additional adjustments. Look for special instructions in the InfluxData documentation and the [Using Telegraf on Windows](https:\/\/www.influxdata.com\/blog\/using-telegraf-on-windows\/) blog post. The regular system monitoring template in InfluxDB Cloud is not compatible with Windows. Windows users who use InfluxDB Cloud to monitor their system will need to use the [Windows System Monitoring Template](https:\/\/github.com\/influxdata\/community-templates\/tree\/master\/windows_system).\n\n#### Add your InfluxDB data source to Grafana\n\nYou can have more than one InfluxDB data source defined in Grafana.\n\n1. Follow the general instructions to [add a data source]().\n1. Decide if you will use InfluxQL or Flux as your query language.\n   - [Configure the data source]() for your chosen query language.\n     Each query language has its own unique data source settings.\n   - For querying features specific to each language, see the data source's [query editor documentation]().\n\n##### InfluxDB guides\n\nInfluxDB publishes guidance for connecting different versions of their product to Grafana.\n\n- **InfluxDB OSS or Enterprise 1.8+.** To turn on Flux, refer to [Configure InfluxDB](https:\/\/docs.influxdata.com\/influxdb\/v1.8\/administration\/config\/#flux-enabled-false). Select your InfluxDB version in the upper right corner.\n- **InfluxDB OSS or Enterprise 2.x.** Refer to [Use Grafana with InfluxDB](https:\/\/docs.influxdata.com\/influxdb\/v2.0\/tools\/grafana\/). Select your InfluxDB version in the upper right corner.\n- **InfluxDB Cloud.** Refer to [Use Grafana with InfluxDB Cloud](https:\/\/docs.influxdata.com\/influxdb\/cloud\/tools\/grafana\/).\n\n##### Important tips\n\n- Make sure your Grafana token has read access. If it doesn't, you'll get an authentication error and be unable to connect Grafana to InfluxDB.\n- Avoid apostrophes and other non-standard characters in bucket and token names.\n- If the text name of the organization or bucket doesn't work, try the ID number.\n- If you change your bucket name in InfluxDB, then you must also change it in Grafana and in your Telegraf .conf file.\n\n#### Add a query\n\nThis step varies depending on the query language that you selected when you set up your data source in Grafana.\n\n##### InfluxQL query language\n\nIn the query editor, click **select measurement**.\n\n![InfluxQL query](\/static\/img\/docs\/influxdb\/influxql-query-7-5.png)\n\nGrafana displays a list of possible series. Click one to select it, and Grafana graphs any available data. If there is no data to display, try another selection or check your data source.\n\n##### Flux query language\n\nCreate a simple Flux query.\n\n1. [Add a panel]().\n1. In the query editor, select your InfluxDB-Flux data source. For more information, refer to [Queries]().\n1. Select the **Table** visualization.\n1. In the query editor text field, enter `buckets()` and then click outside of the query editor.\n\nThis generic query returns a list of buckets.\n\n![Flux query](\/static\/img\/docs\/influxdb\/flux-query-7-5.png)\n\nYou can also create Flux queries in the InfluxDB Explore view.\n\n1. In your browser, log in to the InfluxDB native UI (typically http:\/\/localhost:8086 for OSS, or https:\/\/cloud2.influxdata.com for InfluxDB Cloud).\n1. Click **Explore** to open the Data Explorer.\n1. The InfluxDB Data Explorer provides two mechanisms for creating Flux queries: a graphical query editor and a script editor. Using the graphical query editor, [create a query](https:\/\/docs.influxdata.com\/influxdb\/cloud\/query-data\/execute-queries\/data-explorer\/). 
It will look something like this:\n\n   ![InfluxDB Explore query](\/static\/img\/docs\/influxdb\/influx-explore-query-7-5.png)\n\n1. Click **Script Editor** to view the text of the query, and then copy all the lines of your Flux code, which will look something like this:\n\n   ![InfluxDB Explore Script Editor](\/static\/img\/docs\/influxdb\/explore-query-text-7-5.png)\n\n1. In Grafana, [add a panel]() and then paste your Flux code into the query editor.\n1. Click **Apply**. Your new panel should be visible with data from your Flux query.\n\n#### Check InfluxDB metrics in Grafana Explore\n\nIn your Grafana instance, go to the [Explore]() view and build queries to experiment with the metrics you want to monitor. Here you can also debug issues related to collecting metrics.\n\n#### Start building dashboards\n\nThere you go! Use Explore and Data Explorer to experiment with your data, and add the queries that you like to your dashboard as panels. Have fun!\n\nHere are some resources to learn more:\n\n- Grafana documentation: [InfluxDB data source]()\n- InfluxDB documentation: [Comparison of Flux vs InfluxQL](https:\/\/docs.influxdata.com\/influxdb\/v1.8\/flux\/flux-vs-influxql\/)","site":"grafana getting started"}
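Behind the scenes, Grafana's Flux editor sends queries like `buckets()` to the InfluxDB v2 `/api/v2/query` HTTP endpoint. As a hedged sketch of that API call (the base URL, org, and token values are placeholders, and the request is only built, not sent):

```python
import urllib.request

def build_flux_request(base_url, org, token, flux):
    """Build (but do not send) a Flux query request for the InfluxDB v2 API.

    The endpoint expects a POST with the raw Flux script as the body,
    `Content-Type: application/vnd.flux`, and a token in the
    Authorization header; results come back as annotated CSV.
    """
    return urllib.request.Request(
        url=f"{base_url}/api/v2/query?org={org}",
        data=flux.encode("utf-8"),
        method="POST",
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "application/vnd.flux",
            "Accept": "application/csv",
        },
    )

# Placeholder org/token; urllib.request.urlopen(req) would execute it.
req = build_flux_request("http://localhost:8086", "my-org", "my-token", "buckets()")
```

This is only an illustration of the transport the guide relies on; in Grafana itself you never issue this call by hand.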
{"questions":"grafana getting started Learn how to build your first Prometheus dashboard in Grafana guides gettingstarted aliases products getting started prometheus labels oss enterprise","answers":"---\naliases:\n  - ..\/guides\/getting_started\/\n  - ..\/guides\/gettingstarted\/\n  - getting-started-prometheus\/\ndescription: Learn how to build your first Prometheus dashboard in Grafana.\nlabels:\n  products:\n    - enterprise\n    - oss\ntitle: Get started with Grafana and Prometheus\nweight: 300\n---\n\n# Get started with Grafana and Prometheus\n\nPrometheus is an open source monitoring system for which Grafana provides out-of-the-box support. This topic walks you through the steps to create a series of dashboards in Grafana to display system metrics for a server monitored by Prometheus.\n\n_Grafana and Prometheus_:\n\n1. Download Prometheus and node_exporter\n1. Install Prometheus node_exporter\n1. Install and configure Prometheus\n1. Configure Prometheus for Grafana\n1. Check Prometheus metrics in Grafana Explore view\n1. Start building dashboards\n\n#### Download Prometheus and node_exporter\n\nDownload the following components:\n\n- [Prometheus](https:\/\/prometheus.io\/download\/#prometheus)\n- [node_exporter](https:\/\/prometheus.io\/download\/#node_exporter)\n\nLike Grafana, you can install Prometheus on many different operating systems. Refer to the [Prometheus download page](https:\/\/prometheus.io\/download\/) to see a list of stable versions of Prometheus components.\n\n#### Install Prometheus node_exporter\n\nInstall node_exporter on all hosts you want to monitor. This guide shows you how to install it locally.\n\nPrometheus node_exporter is a widely used tool that exposes system metrics. 
For instructions on installing node_exporter, refer to the [Installing and running the node_exporter](https:\/\/prometheus.io\/docs\/guides\/node-exporter\/#installing-and-running-the-node-exporter) section in the Prometheus documentation.\n\nWhen you run node_exporter locally, navigate to `http:\/\/localhost:9100\/metrics` to check that it is exporting metrics.\n\n\nThe instructions in the referenced topic are intended for Linux users. You may have to alter the instructions slightly depending on your operating system. For example, if you are on Windows, use the [windows_exporter](https:\/\/github.com\/prometheus-community\/windows_exporter) instead.\n\n\n#### Install and configure Prometheus\n\n1. After [downloading Prometheus](https:\/\/prometheus.io\/download\/#prometheus), extract it and navigate to the directory.\n\n   ```\n   tar xvfz prometheus-*.tar.gz\n   cd prometheus-*\n   ```\n\n1. Locate the `prometheus.yml` file in the directory.\n\n1. Modify Prometheus's configuration file to monitor the hosts where you installed node_exporter.\n\n   By default, Prometheus looks for the file `prometheus.yml` in the current working directory. This behavior can be changed via the `--config.file` command line flag. For example, some Prometheus installers use it to set the configuration file to `\/etc\/prometheus\/prometheus.yml`.\n\n   The following example shows the configuration you should add. Notice that the `static_configs` targets are set to `['localhost:9100']` to target node_exporter when running it locally.\n\n   ```\n   # A scrape configuration containing exactly one endpoint to scrape from node_exporter running on a host:\n   scrape_configs:\n     # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.\n     - job_name: 'node'\n\n       # metrics_path defaults to '\/metrics'\n       # scheme defaults to 'http'.\n\n       static_configs:\n         - targets: ['localhost:9100']\n   ```\n\n1. 
Start the Prometheus service:\n\n   ```\n    .\/prometheus --config.file=.\/prometheus.yml\n   ```\n\n1. Confirm that Prometheus is running by navigating to `http:\/\/localhost:9090`.\n\nYou can see that the node_exporter metrics have been delivered to Prometheus. Next, the metrics will be sent to Grafana.\n\n#### Configure Prometheus for Grafana\n\nWhen running Prometheus locally, there are two ways to configure Prometheus for Grafana. You can use a hosted Grafana instance at [Grafana Cloud](\/) or run Grafana locally.\n\nThis guide describes configuring Prometheus in a hosted Grafana instance on Grafana Cloud.\n\n1. Sign up for [https:\/\/grafana.com\/](\/auth\/sign-up\/create-user). Grafana gives you a Prometheus instance out of the box.\n\n![Prometheus details in Grafana.com](\/static\/img\/docs\/getting-started\/screenshot-grafana-prometheus-details.png)\n\n1. Because you are running your own Prometheus instance locally, you must `remote_write` your metrics to the Grafana.com Prometheus instance. Grafana provides code to add to your `prometheus.yml` config file. This includes a remote write endpoint, your user name and password.\n\nAdd the following code to your prometheus.yml file to begin sending metrics to your hosted Grafana instance.\n\n```\nremote_write:\n- url: <https:\/\/your-remote-write-endpoint>\n  basic_auth:\n    username: <your user name>\n    password: <Your Grafana.com API Key>\n```\n\n\nTo configure your Prometheus instance to work with Grafana locally instead of Grafana Cloud, install Grafana [here](\/grafana\/download) and follow the configuration steps listed [here](\/docs\/grafana\/latest\/datasources\/prometheus\/#configure-the-data-source).\n\n\n#### Check Prometheus metrics in Grafana Explore view\n\nIn your Grafana instance, go to the [Explore]() view and build queries to experiment with the metrics you want to monitor. 
Here you can also debug issues related to collecting metrics from Prometheus.\n\n#### Start building dashboards\n\nNow that you have a curated list of queries, create [dashboards]() to render system metrics monitored by Prometheus. When you install Prometheus and node_exporter or windows_exporter, you will find recommended dashboards for use.\n\nThe following image shows a dashboard with three panels showing some system metrics.\n\n![Prometheus dashboards](\/static\/img\/docs\/getting-started\/simple_grafana_prom_dashboard.png)\n\nTo learn more:\n\n- Grafana documentation: [Prometheus data source]()\n- Prometheus documentation: [What is Prometheus?](https:\/\/prometheus.io\/docs\/introduction\/overview\/)","site":"grafana getting started","answers_cleaned":"    aliases         guides getting started         guides gettingstarted      getting started prometheus  description  Learn how to build your first Prometheus dashboard in Grafana  labels    products        enterprise       oss title  Get started with Grafana and Prometheus weight  300        Get started with Grafana and Prometheus  Prometheus is an open source monitoring system for which Grafana provides out of the box support  This topic walks you through the steps to create a series of dashboards in Grafana to display system metrics for a server monitored by Prometheus    Grafana and Prometheus    1  Download Prometheus and node exporter 1  Install Prometheus node exporter 1  Install and configure Prometheus 1  Configure Prometheus for Grafana 1  Check Prometheus metrics in Grafana Explore view 1  Start building dashboards       Download Prometheus and node exporter  Download the following components      Prometheus  https   prometheus io download  prometheus     node exporter  https   prometheus io download  node exporter   Like Grafana  you can install Prometheus on many different operating systems  Refer to the  Prometheus download page  https   prometheus io download   to see a list of stable versions of 
Prometheus components

#### Install Prometheus node exporter

Install node exporter on all hosts you want to monitor. This guide shows you how to install it locally.

Prometheus node exporter is a widely used tool that exposes system metrics. For instructions on installing node exporter, refer to the [Installing and running the node exporter](https://prometheus.io/docs/guides/node-exporter/#installing-and-running-the-node-exporter) section in the Prometheus documentation.

When you run node exporter locally, navigate to http://localhost:9100/metrics to check that it is exporting metrics.

**Note**: The instructions in the referenced topic are intended for Linux users. You may have to alter the instructions slightly depending on your operating system. For example, if you are on Windows, use the [windows_exporter](https://github.com/prometheus-community/windows_exporter) instead.

#### Install and configure Prometheus

1. After [downloading Prometheus](https://prometheus.io/download/#prometheus), extract it and navigate to the directory:

   ```bash
   tar xvfz prometheus-*.tar.gz
   cd prometheus-*
   ```

1. Locate the `prometheus.yml` file in the directory.

1. Modify Prometheus's configuration file to monitor the hosts where you installed node exporter.

   By default, Prometheus looks for the file `prometheus.yml` in the current working directory. This behavior can be changed via the `--config.file` command line flag. For example, some Prometheus installers use it to set the configuration file to `/etc/prometheus/prometheus.yml`.

   The following example shows you the code you should add. Notice that `static_configs` targets are set to `['localhost:9100']` to target node exporter when running it locally.

   ```yaml
   # A scrape configuration containing exactly one endpoint to scrape from:
   # node exporter running on a host.
   scrape_configs:
     # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
     - job_name: 'node'
       # metrics_path defaults to '/metrics'
       # scheme defaults to 'http'
       static_configs:
         - targets: ['localhost:9100']
   ```

1. Start the Prometheus service:

   ```bash
   ./prometheus --config.file=./prometheus.yml
   ```

1. Confirm that Prometheus is running by navigating to http://localhost:9090.

You can see that the node exporter metrics have been delivered to Prometheus. Next, the metrics will be sent to Grafana.

#### Configure Prometheus for Grafana

When running Prometheus locally, there are two ways to configure Prometheus for Grafana: you can use a hosted Grafana instance at Grafana Cloud, or run Grafana locally. This guide describes configuring Prometheus in a hosted Grafana instance on Grafana Cloud.

1. Sign up for [https://grafana.com](https://grafana.com/auth/sign-up/create-user). Grafana gives you a Prometheus instance out of the box.

   ![Prometheus details in Grafana.com](/static/img/docs/getting-started/screenshot-grafana-prometheus-details.png)

1. Because you are running your own Prometheus instance locally, you must `remote_write` your metrics to the Grafana.com Prometheus instance. Grafana provides code to add to your `prometheus.yml` config file. This includes a remote write endpoint, your user name, and password.

   Add the following code to your `prometheus.yml` file to begin sending metrics to your hosted Grafana instance:

   ```yaml
   remote_write:
     - url: <https://your-remote-write-endpoint>
       basic_auth:
         username: <your user name>
         password: <Your Grafana.com API Key>
   ```

To configure your Prometheus instance to work with Grafana locally instead of Grafana Cloud, install Grafana [here](/grafana/download) and follow the configuration steps listed [here](/docs/grafana/latest/datasources/prometheus/#configure-the-data-source).

#### Check Prometheus metrics in Grafana Explore view

In your Grafana instance, go to the **Explore** view and build queries to experiment with the metrics you want to monitor. Here you can also debug issues related to collecting metrics from Prometheus.

#### Start building dashboards

Now that you have a curated list of queries, create dashboards to render system metrics monitored by Prometheus. When you install Prometheus and node exporter or windows_exporter, you will find recommended dashboards for use.

The following image shows a dashboard with three panels showing some system metrics.

![Prometheus dashboards](/static/img/docs/getting-started/simple_grafana_prom_dashboard.png)

To learn more:

- Grafana documentation: Prometheus data source
- Prometheus documentation: [What is Prometheus?](https://prometheus.io/docs/introduction/overview/)
"}
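For the Explore step above, a few starter queries are sketched below against standard node exporter metric names (these names are emitted by the default node exporter; windows_exporter uses different metric names):

```promql
# Per-mode CPU time rate over the last 5 minutes
rate(node_cpu_seconds_total{mode!="idle"}[5m])

# Fraction of memory still available
node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes

# Bytes received per second on each network interface
rate(node_network_receive_bytes_total[5m])
```

Each of these can be pasted into the Explore query field once the `node` scrape job is delivering data.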
{"questions":"grafana getting started getting started sql guides gettingstarted aliases products Learn how to build your first MS SQL Server dashboard in Grafana labels oss enterprise","answers":"---\naliases:\n  - ..\/guides\/getting_started\/\n  - ..\/guides\/gettingstarted\/\n  - getting-started-sql\/\ndescription: Learn how to build your first MS SQL Server dashboard in Grafana.\nlabels:\n  products:\n    - enterprise\n    - oss\ntitle: Get started with Grafana and MS SQL Server\nweight: 500\n---\n\n# Get started with Grafana and MS SQL Server\n\nMicrosoft SQL Server is a popular relational database management system that is widely used in development and production environments. This topic walks you through the steps to create a series of dashboards in Grafana to display metrics from an MS SQL Server database.\n\n#### Download MS SQL Server\n\nMS SQL Server can be installed on Windows or Linux operating systems and also on Docker containers. Refer to the [MS SQL Server downloads page](https:\/\/www.microsoft.com\/en-us\/sql-server\/sql-server-downloads) for a complete list of all available options.\n\n#### Install MS SQL Server\n\nYou can install MS SQL Server on the host running Grafana or on a remote server. To install the software from the [downloads page](https:\/\/www.microsoft.com\/en-us\/sql-server\/sql-server-downloads), follow their setup prompts.\n\nIf you are on a Windows host but want to use Grafana and the MS SQL data source in a Linux environment, refer to [how to set up your Grafana development environment using WSL](\/blog\/2021\/03\/03\/how-to-set-up-a-grafana-development-environment-on-a-windows-pc-using-wsl). This will allow you to leverage the resources available in the [grafana\/grafana](https:\/\/github.com\/grafana\/grafana) GitHub repository. 
Here you will find a collection of supported data sources, including MS SQL Server, along with test data and pre-configured dashboards for use.\n\n#### Add the MS SQL data source\n\nThere are several ways to authenticate in MSSQL. Start by:\n\n1. Click **Connections** in the left-side menu and filter by `mssql`.\n1. Select the **Microsoft SQL Server** option.\n1. Click **Create a Microsoft SQL Server data source** in the top right corner to open the configuration page.\n1. Select the desired authentication method and fill in the right information as detailed below.\n1. Click **Save & test**.\n\n##### General configuration\n\n| Name       | Description                                                                                                           |\n| ---------- | --------------------------------------------------------------------------------------------------------------------- |\n| `Name`     | The data source name. This is how you refer to the data source in panels and queries.                                 |\n| `Host`     | The IP address\/hostname and optional port of your MS SQL instance. If port is omitted, the default 1433 will be used. |\n| `Database` | Name of your MS SQL database.                                                                                         |\n\n##### SQL Server Authentication\n\n| Name       | Description                     |\n| ---------- | ------------------------------- |\n| `User`     | Database user's login\/username. |\n| `Password` | Database user's password.       
|\n\n##### Windows Active Directory (Kerberos)\n\nBelow are the four possible ways to authenticate via Windows Active Directory\/Kerberos.\n\n**Note**: Windows Active Directory (Kerberos) authentication is not supported in Grafana Cloud at the moment.\n\n| Method                    | Description                                                                                                                                                  |\n| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| **Username + password**   | Enter the domain user and password.                                                                                                                          |\n| **Keytab file**           | Specify the path to a valid keytab file to use for authentication.                                                                                           |\n| **Credential cache**      | Log in on the host via `kinit` and pass the path to the credential cache. The cache path can be found by running `klist` on the host in question.            |\n| **Credential cache file** | This option allows multiple valid configurations to be present; matching is performed on host, database, and user. See the example JSON below this table.    |\n\n```json\n[\n  {\n    \"user\": \"grot@GF.LAB\",\n    \"database\": \"dbone\",\n    \"address\": \"mysql1.mydomain.com:3306\",\n    \"credentialCache\": \"\/tmp\/krb5cc_1000\"\n  },\n  {\n    \"user\": \"grot@GF.LAB\",\n    \"database\": \"dbtwo\",\n    \"address\": \"mysql2.gf.lab\",\n    \"credentialCache\": \"\/tmp\/krb5cc_1000\"\n  }\n]\n```\n\nFor installations from the [grafana\/grafana](https:\/\/github.com\/grafana\/grafana\/tree\/main) repository, the `gdev-mssql` data source is available. 
Once you add this data source, you can use the `Datasource tests - MSSQL` dashboard with three panels showing metrics generated from a test database.\n\n![MS SQL Server dashboard](\/static\/img\/docs\/getting-started\/gdev-sql-dashboard.png)\n\nOptionally, play around with this dashboard and customize it to:\n\n- Create different panels.\n- Change titles for panels.\n- Change the frequency of data polling.\n- Change the period for which the data is displayed.\n- Rearrange and resize panels.\n\n#### Start building dashboards\n\nNow that you have become familiar with the pre-packaged MS SQL data source and some test data, the next step is to set up your own instance of MS SQL Server and add data from your development or sandbox area.\n\nTo fetch data from your own instance of MS SQL Server, add the data source using the instructions in Step 4 of this topic. In Grafana [Explore](), build queries to experiment with the metrics you want to monitor.\n\nOnce you have a curated list of queries, create [dashboards]() to render metrics from the SQL Server database. 
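As a sketch of what a panel query can look like, the following uses Grafana's MS SQL macros against a hypothetical `metrics` table (the table and column names are illustrative and not part of the gdev test data):\n\n```sql\nSELECT\n  $__timeEpoch(created_at),\n  value AS cpu_load\nFROM metrics\nWHERE $__timeFilter(created_at)\nORDER BY created_at ASC\n```\n\nThe `$__timeEpoch()` and `$__timeFilter()` macros expand to the dashboard's current time range, so the same query works at any zoom level.\n\n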
For troubleshooting, user permissions, known issues, and query examples, refer to [Using Microsoft SQL Server in Grafana]().","site":"grafana getting started"}
{"questions":"grafana setup certificates labels https products Learn how to set up Grafana HTTPS for secure web traffic keywords ssl grafana enterprise","answers":"---\ndescription: Learn how to set up Grafana HTTPS for secure web traffic.\nkeywords:\n  - grafana\n  - https\n  - ssl\n  - certificates\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: Set up HTTPS\ntitle: Set up Grafana HTTPS for secure web traffic\nweight: 900\n---\n\n# Set up Grafana HTTPS for secure web traffic\n\nWhen accessing the Grafana UI through the web, it is important to set up HTTPS to ensure the communication between Grafana and the end user is encrypted, including login credentials and retrieved metric data.\n\nIn order to ensure secure traffic over the internet, Grafana must have a key for encryption and a [Secure Socket Layer (SSL) Certificate](https:\/\/www.kaspersky.com\/resource-center\/definitions\/what-is-a-ssl-certificate) to verify the identity of the site.\n\nThe following image shows a browser lock icon, which confirms the connection is safe.\n\n\n\nThis topic shows you how to:\n\n1. Obtain a certificate and key\n2. Configure Grafana HTTPS\n3. Restart the Grafana server\n\n## Before you begin\n\nTo follow these instructions, you need:\n\n- Shell access to the system and `sudo` access to perform actions as root or administrator.\n- For the CA-signed option, a domain name that you possess and that is associated with the machine you are using.\n\n## Obtain a certificate and key\n\nYou can use one of two methods to obtain a certificate and a key. The faster and easier _self-signed_ option might show browser warnings to the user that they will have to accept each time they visit the site. Alternatively, the Certificate Authority (CA) signed option requires more steps to complete, but it enables full trust with the browser. 
To learn more about the difference between these options, refer to [Difference between self-signed CA and self-signed certificate](https:\/\/www.baeldung.com\/cs\/self-signed-ca-vs-certificate).\n\n### Generate a self-signed certificate\n\nThis section shows you how to use `openssl` tooling to generate all necessary files from the command line.\n\n1. Run the following command to generate a 2048-bit RSA private key, which is used to decrypt traffic:\n\n   ```bash\n   $ sudo openssl genrsa -out \/etc\/grafana\/grafana.key 2048\n   ```\n\n1. Run the following command to generate a certificate, using the private key from the previous step.\n\n   ```bash\n   $ sudo openssl req -new -key \/etc\/grafana\/grafana.key -out \/etc\/grafana\/grafana.csr\n   ```\n\n   When prompted, answer the questions, which might include your fully-qualified domain name, email address, country code, and others. The following example is similar to the prompts you will see.\n\n   ```\n   You are about to be asked to enter information that will be incorporated\n   into your certificate request.\n   What you are about to enter is what is called a Distinguished Name or a DN.\n   There are quite a few fields but you can leave some blank\n   For some fields there will be a default value,\n   If you enter '.', the field will be left blank.\n   -----\n   Country Name (2 letter code) [AU]:US\n   State or Province Name (full name) [Some-State]:Virginia\n   Locality Name (eg, city) []:Richmond\n   Organization Name (eg, company) [Internet Pty Ltd]:\n   Organizational Unit Name (eg, section) []:\n   Common Name (e.g. server FQDN or YOUR name) []:subdomain.mysite.com\n   Email Address []:me@mysite.com\n\n   Please enter the following 'extra' attributes\n   to be sent with your certificate request\n   A challenge password []:\n   An optional company name []:\n   ```\n\n1. 
Run the following command to self-sign the certificate with the private key, for a period of validity of 365 days:\n\n   ```bash\n   sudo openssl x509 -req -days 365 -in \/etc\/grafana\/grafana.csr -signkey \/etc\/grafana\/grafana.key -out \/etc\/grafana\/grafana.crt\n   ```\n\n1. Run the following commands to set the appropriate permissions for the files:\n\n   ```bash\n   sudo chown grafana:grafana \/etc\/grafana\/grafana.crt\n   sudo chown grafana:grafana \/etc\/grafana\/grafana.key\n   sudo chmod 400 \/etc\/grafana\/grafana.key \/etc\/grafana\/grafana.crt\n   ```\n\n   **Note**: When using these files, browsers will show trust warnings for the resulting website because the certificate is not signed by a trusted third party; however, the connection will remain encrypted.\n\n   The following image shows an insecure HTTP connection.\n\n   \n\n### Obtain a signed certificate from LetsEncrypt\n\n[LetsEncrypt](https:\/\/letsencrypt.org\/) is a nonprofit certificate authority that provides certificates without any charge. For signed certificates, there are multiple companies and certificate authorities (CAs) available. The principles for generating the certificates might vary slightly from provider to provider but generally remain the same.\n\nThe examples in this section use LetsEncrypt because it is free.\n\n\nThe instructions provided in this section are for a Debian-based Linux system. For other distributions and operating systems, refer to the [certbot instructions](https:\/\/certbot.eff.org\/instructions). These instructions also require you to have a domain name that you control. Dynamic domain names like those from Amazon EC2 or DynDNS providers will not work.\n\n\n#### Install `snapd` and `certbot`\n\n`certbot` is an open-source program used to manage LetsEncrypt certificates, and `snapd` is a tool that assists in running `certbot` and installing the certificates.\n\n1. 
To install `snapd`, run the following commands:\n\n   ```bash\n   sudo apt-get install snapd\n   sudo snap install core; sudo snap refresh core\n   ```\n\n1. Run the following commands to install:\n\n   ```bash\n   sudo apt-get remove certbot\n   sudo snap install --classic certbot\n   sudo ln -s \/snap\/bin\/certbot \/usr\/bin\/certbot\n   ```\n\n   These commands:\n\n   - Uninstall `certbot` from your system if it has been installed using a package manager\n   - Install `certbot` using `snapd`\n\n#### Generate certificates using `certbot`\n\nThe `sudo certbot certonly --standalone` command prompts you to answer questions before it generates a certificate. This process temporarily opens a service on port `80` that LetsEncrypt uses to verify communication with your host.\n\nTo generate certificates using `certbot`, complete the following steps:\n\n1. Ensure that port `80` traffic is permitted by applicable firewall rules.\n\n1. Run the following command to generate certificates:\n\n   ```bash\n   $ sudo certbot certonly --standalone\n\n   Saving debug log to \/var\/log\/letsencrypt\/letsencrypt.log\n   Enter email address (used for urgent renewal and security notices)\n   (Enter 'c' to cancel): me@mysite.com\n\n   - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n   Please read the Terms of Service at\n   https:\/\/letsencrypt.org\/documents\/LE-SA-v1.3-September-21-2022.pdf. You must\n   agree in order to register with the ACME server. Do you agree?\n   - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n   (Y)es\/(N)o: y\n\n   - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n   Would you be willing, once your first certificate is successfully issued, to\n   share your email address with the Electronic Frontier Foundation, a founding\n   partner of the Let\u2019s Encrypt project and the non-profit organization that\n   develops Certbot? 
We\u2019d like to send you email about our work encrypting the web,\n   EFF news, campaigns, and ways to support digital freedom.\n   - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n   (Y)es\/(N)o: n\n   Account registered.\n   Please enter the domain name(s) you would like on your certificate (comma and\/or\n   space separated) (Enter 'c' to cancel): subdomain.mysite.com\n   Requesting a certificate for subdomain.mysite.com\n\n   Successfully received certificate.\n   Certificate is saved at: \/etc\/letsencrypt\/live\/subdomain.mysite.com\/fullchain.pem\n   Key is saved at:         \/etc\/letsencrypt\/live\/subdomain.mysite.com\/privkey.pem\n   This certificate expires on 2023-06-20.\n   These files will be updated when the certificate renews.\n   Certbot has set up a scheduled task to automatically renew this certificate in the background.\n\n   - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n   If you like Certbot, please consider supporting our work by:\n   * Donating to ISRG \/ Let\u2019s Encrypt:   https:\/\/letsencrypt.org\/donate\n   * Donating to EFF:                    https:\/\/eff.org\/donate-le\n   - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n   ```\n\n#### Set up symlinks to Grafana\n\nSymbolic links, also known as symlinks, enable you to create pointers to existing LetsEncrypt files in the `\/etc\/grafana` directory. 
By using symlinks rather than copying files, you can use `certbot` to refresh or request updated certificates from LetsEncrypt without the need to reconfigure the Grafana settings.\n\nTo set up symlinks to Grafana, run the following commands:\n\n```bash\n$ sudo ln -s \/etc\/letsencrypt\/live\/subdomain.mysite.com\/privkey.pem \/etc\/grafana\/grafana.key\n$ sudo ln -s \/etc\/letsencrypt\/live\/subdomain.mysite.com\/fullchain.pem \/etc\/grafana\/grafana.crt\n```\n\n#### Adjust permissions\n\nGrafana usually runs under the `grafana` Linux group, and you must ensure that the Grafana server process has permission to read the relevant files. Without read access, the HTTPS server fails to start properly.\n\nTo adjust permissions, perform the following steps:\n\n1. Run the following commands to set the appropriate permissions and groups for the files:\n\n   ```bash\n   $ sudo chgrp -R grafana \/etc\/letsencrypt\/*\n   $ sudo chmod -R g+rx \/etc\/letsencrypt\/*\n   $ sudo chgrp -R grafana \/etc\/grafana\/grafana.crt \/etc\/grafana\/grafana.key\n   $ sudo chmod 400 \/etc\/grafana\/grafana.crt \/etc\/grafana\/grafana.key\n   ```\n\n1. Run the following command to verify that the `grafana` group can read the symlinks:\n\n   ```bash\n   $ ls -l \/etc\/grafana\/grafana.*\n\n   lrwxrwxrwx 1 root grafana    67 Mar 22 14:15 \/etc\/grafana\/grafana.crt -> \/etc\/letsencrypt\/live\/subdomain.mysite.com\/fullchain.pem\n   -rw-r----- 1 root grafana 54554 Mar 22 14:13 \/etc\/grafana\/grafana.ini\n   lrwxrwxrwx 1 root grafana    65 Mar 22 14:15 \/etc\/grafana\/grafana.key -> \/etc\/letsencrypt\/live\/subdomain.mysite.com\/privkey.pem\n   ```\n\n## Configure Grafana HTTPS and restart Grafana\n\nIn this section you edit the `grafana.ini` file so that it includes the certificate you created. 
If you need help identifying where to find this file, or what each key means, refer to [Configuration file location]().\n\nTo configure Grafana HTTPS and restart Grafana, complete the following steps.\n\n1. Open the `grafana.ini` file and edit the following configuration parameters:\n\n   ```\n   [server]\n   http_addr =\n   http_port = 3000\n   domain = mysite.com\n   root_url = https:\/\/subdomain.mysite.com:3000\n   cert_key = \/etc\/grafana\/grafana.key\n   cert_file = \/etc\/grafana\/grafana.crt\n   enforce_domain = False\n   protocol = https\n   ```\n\n   > **Note**: The standard port for SSL traffic is 443, which you can use instead of Grafana's default port 3000. This change might require additional operating system privileges or configuration to bind to lower-numbered privileged ports.\n\n1. [Restart the Grafana server]() using `systemd`, `init.d`, or the binary as appropriate for your environment.\n\n## Troubleshooting\n\nRefer to the following troubleshooting tips as required.\n\n### Failure to obtain a certificate\n\nThe following reasons explain why the `certbot` process might fail:\n\n- LetsEncrypt must be able to communicate with your machine over port 80. If port 80 is blocked or a firewall prevents the exchange, certificate issuance fails and you won't receive a certificate.\n- LetsEncrypt requires proof that you control the domain, so attempts to obtain certificates for domains you do not control might be rejected.\n\n### Grafana starts, but HTTPS is unavailable\n\nWhen you configure HTTPS, the following errors might appear in Grafana's logs.\n\n#### Permission denied\n\n```\nlevel=error msg=\"Stopped background service\" service=*api.HTTPServer reason=\"open \/etc\/grafana\/grafana.crt: permission denied\"\n```\n\n##### Resolution\n\nTo ensure a secure HTTPS setup, it is essential that the cryptographic keys and certificates are as restricted as possible. 
However, if the file permissions are too restricted, the Grafana process may not have access to the necessary files, thus impeding a successful HTTPS setup. Please re-examine the listed instructions to double check the file permissions and try again.\n\n#### Cannot assign requested address\n\n```\nlisten tcp 34.148.30.243:3000: bind: cannot assign requested address\n```\n\n##### Resolution\n\nCheck the config to ensure the `http_addr` is left blank, allowing Grafana to bind to all interfaces. If you have set `http_addr` to a specific subdomain, such as `subdomain.mysite.com`, this might prevent the Grafana process from binding to an external address, due to network address translation layers being present.","site":"grafana setup"}
Grafana server process has permission to read the relevant files  Without read access  the HTTPS server fails to start properly   To adjust permissions  perform the following steps   1  Run the following commands to set the appropriate permissions and groups for the files         bash      sudo chgrp  R grafana  etc letsencrypt        sudo chmod  R g rx  etc letsencrypt        sudo chgrp  R grafana  etc grafana grafana crt  etc grafana grafana key      sudo chmod 400  etc grafana grafana crt  etc grafana grafana key         1  Run the following command to verify that the  grafana  group can read the symlinks         bash        ls  l  etc grafana grafana       lrwxrwxrwx 1 root grafana    67 Mar 22 14 15  etc grafana grafana crt     etc letsencrypt live subdomain mysite com fullchain pem     rw r      1 root grafana 54554 Mar 22 14 13  etc grafana grafana ini    lrwxrwxrwx 1 root grafana    65 Mar 22 14 15  etc grafana grafana key     etc letsencrypt live subdomain mysite com privkey pem            Configure Grafana HTTPS and restart Grafana  In this section you edit the  grafana ini  file so that it includes the certificate you created  If you need help identifying where to find this file  or what each key means  refer to  Configuration file location      To configure Grafana HTTPS and restart Grafana  complete the following steps   1  Open the  grafana ini  file and edit the following configuration parameters              server     http addr      http port   3000    domain   mysite com    root url   https   subdomain mysite com 3000    cert key    etc grafana grafana key    cert file    etc grafana grafana crt    enforce domain   False    protocol   https                Note    The standard port for SSL traffic is 443  which you can use instead of Grafana s default port 3000  This change might require additional operating system privileges or configuration to bind to lower numbered privileged ports   1   Restart the Grafana server    using  systemd    init d   
or the binary as appropriate for your environment      Troubleshooting  Refer to the following troubleshooting tips as required       Failure to obtain a certificate  The following reasons explain why the  certbot  process might fail     To make sure you can get a certificate from LetsEncrypt  you need to ensure that port 80 is open so that LetsEncrypt can communicate with your machine  If port 80 is blocked or firewall is enabled  the exchange will fail and you won t be able to receive a certificate    LetsEncrypt requires proof that you control the domain  so attempts to obtain certificates for domains you do not   control might be rejected       Grafana starts  but HTTPS is unavailable  When you configure HTTPS  the following errors might appear in Grafana s logs        Permission denied      level error msg  Stopped background service  service  api HTTPServer reason  open  etc grafana grafana crt  permission denied             Resolution  To ensure secure HTTPS setup  it is essential that the cryptographic keys and certificates are as restricted as possible  However  if the file permissions are too restricted  the Grafana process may not have access to the necessary files  thus impeding a successful HTTPS setup  Please re examine the listed instructions to double check the file permissions and try again        Cannot assign requested address      listen tcp 34 148 30 243 3000  bind  cannot assign requested address            Resolution  Check the config to ensure the  http addr  is left blank  allowing Grafana to bind to all interfaces  If you have set  http addr  to a specific subdomain  such as  subdomain mysite com   this might prevent the Grafana process from binding to an external address  due to network address translation layers being present "}
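The certificate steps above can be sanity-checked end to end. The following sketch is illustrative only: it uses a temporary directory instead of `/etc/grafana` (so it can run unprivileged), a non-interactive `-subj` instead of the DN prompts, and a hypothetical CN. It generates a throwaway key and self-signed certificate, then confirms the pair belong together by comparing their embedded public keys:

```shell
# Sketch: generate a throwaway key + self-signed cert in a temp directory
# and verify that the certificate was issued for that key.
set -e
dir=$(mktemp -d)

# 2048-bit RSA key, CSR, and 365-day self-signed certificate
# (-subj skips the interactive Distinguished Name prompts).
openssl genrsa -out "$dir/grafana.key" 2048 2>/dev/null
openssl req -new -key "$dir/grafana.key" -out "$dir/grafana.csr" \
  -subj "/CN=subdomain.mysite.com"
openssl x509 -req -days 365 -in "$dir/grafana.csr" \
  -signkey "$dir/grafana.key" -out "$dir/grafana.crt" 2>/dev/null

# A certificate and key match when they embed the same public key.
cert_pub=$(openssl x509 -in "$dir/grafana.crt" -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in "$dir/grafana.key" -pubout | openssl sha256)

if [ "$cert_pub" = "$key_pub" ]; then
  echo "certificate and key match"
else
  echo "certificate and key DO NOT match"
fi
rm -rf "$dir"
```

The same public-key comparison works for the LetsEncrypt files, pointed at `/etc/grafana/grafana.crt` and `/etc/grafana/grafana.key`.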
{"questions":"grafana setup restart grafana menuTitle Start Grafana aliases products installation restart grafana How to start the Grafana server labels oss enterprise","answers":"---\naliases:\n  - ..\/installation\/restart-grafana\/\n  - .\/restart-grafana\/\ndescription: How to start the Grafana server\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: Start Grafana\ntitle: Start the Grafana server\nweight: 300\n---\n\n# Start the Grafana server\n\nThis topic includes instructions for starting the Grafana server. For certain configuration changes, you might have to restart the Grafana server for them to take effect.\n\nThe following instructions start the `grafana-server` process as the `grafana` user, which was created during the package installation.\n\nIf you installed with the APT repository or `.deb` package, then you can start the server using `systemd` or `init.d`. If you installed a binary `.tar.gz` file, then you execute the binary.\n\n## Linux\n\nThe following subsections describe three methods of starting and restarting the Grafana server: with systemd, initd, or by directly running the binary. You should follow only one set of instructions, depending on how your machine is configured.\n\n### Start the Grafana server with systemd\n\nComplete the following steps to start the Grafana server using systemd and verify that it is running.\n\n1. To start the service, run the following commands:\n\n   ```bash\n   sudo systemctl daemon-reload\n   sudo systemctl start grafana-server\n   ```\n\n1. 
To verify that the service is running, run the following command:\n\n   ```bash\n   sudo systemctl status grafana-server\n   ```\n\n### Configure the Grafana server to start at boot using systemd\n\nTo configure the Grafana server to start at boot, run the following command:\n\n```bash\nsudo systemctl enable grafana-server.service\n```\n\n#### Serve Grafana on a port < 1024\n\n\n\n### Restart the Grafana server using systemd\n\nTo restart the Grafana server, run the following command:\n\n```bash\nsudo systemctl restart grafana-server\n```\n\n\nSUSE or openSUSE users might need to start the server with the systemd method, then use the init.d method to configure Grafana to start at boot.\n\n\n### Start the Grafana server using init.d\n\nComplete the following steps to start the Grafana server using init.d and verify that it is running:\n\n1. To start the Grafana server, run the following command:\n\n   ```bash\n   sudo service grafana-server start\n   ```\n\n1. To verify that the service is running, run the following command:\n\n   ```bash\n   sudo service grafana-server status\n   ```\n\n### Configure the Grafana server to start at boot using init.d\n\nTo configure the Grafana server to start at boot, run the following command:\n\n```bash\nsudo update-rc.d grafana-server defaults\n```\n\n#### Restart the Grafana server using init.d\n\nTo restart the Grafana server, run the following command:\n\n```bash\nsudo service grafana-server restart\n```\n\n### Start the server using the binary\n\nThe `grafana` binary .tar.gz needs the working directory to be the root install directory where the binary and the `public` folder are located.\n\nTo start the Grafana server, run the following command:\n\n```bash\n.\/bin\/grafana server\n```\n\n## Docker\n\nTo restart the Grafana service, use the `docker restart` command.\n\n`docker restart grafana`\n\nAlternatively, you can use the `docker compose restart` command to restart Grafana. 
For more information, refer to [docker compose documentation](https:\/\/docs.docker.com\/compose\/).\n\n### Docker compose example\n\nConfigure your `docker-compose.yml` file. For example:\n\n```yml\nversion: '3.8'\nservices:\n  grafana:\n    image: grafana\/grafana:latest\n    container_name: grafana\n    restart: unless-stopped\n    environment:\n      - TERM=linux\n      - GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-polystat-panel\n    ports:\n      - '3000:3000'\n    volumes:\n      - 'grafana_storage:\/var\/lib\/grafana'\nvolumes:\n  grafana_storage: {}\n```\n\nStart the Grafana server:\n\n`docker compose up -d`\n\nThis starts the Grafana server container in detached mode along with the two plugins specified in the YAML file.\n\nTo restart the running container, use this command:\n\n`docker compose restart grafana`\n\n## Windows\n\nComplete the following steps to start the Grafana server on Windows:\n\n1. Execute `grafana.exe server`; the `grafana` binary is located in the `bin` directory.\n\n   We recommend that you run `grafana.exe server` from the command line.\n\n   If you want to run Grafana as a Windows service, you can download [NSSM](https:\/\/nssm.cc\/).\n\n1. To run Grafana, open your browser and go to the Grafana port (http:\/\/localhost:3000\/ is the default).\n\n   > **Note:** The default Grafana port is `3000`. This port might require extra permissions on Windows. If Grafana fails to start on the default port, try changing to a different port.\n\n1. To change the port, complete the following steps:\n\n   a. In the `conf` directory, copy `sample.ini` to `custom.ini`.\n\n   > **Note:** You should edit `custom.ini`, never `defaults.ini`.\n\n   b. Edit `custom.ini` and uncomment the `http_port` configuration option (`;` is the comment character in ini files) and change it to a port such as `8080`, which should not require extra Windows privileges.\n\nTo restart the Grafana server, complete the following steps:\n\n1. 
Open the **Services** app.\n1. Right-click on the **Grafana** service.\n1. In the context menu, click **Restart**.\n\n## macOS\n\nRestart methods differ depending on whether you installed Grafana using Homebrew or as standalone macOS binaries.\n\n### Start Grafana using Homebrew\n\nTo start Grafana using [Homebrew](http:\/\/brew.sh\/), run the following start command:\n\n```bash\nbrew services start grafana\n```\n\n### Restart Grafana using Homebrew\n\nUse the [Homebrew](http:\/\/brew.sh\/) restart command:\n\n```bash\nbrew services restart grafana\n```\n\n### Restart standalone macOS binaries\n\nTo restart Grafana:\n\n1. Open a terminal and go to the directory where you copied the install setup files.\n1. Run the command:\n\n```bash\n.\/bin\/grafana server\n```\n\n## Next steps\n\nAfter the Grafana server is up and running, consider taking the next steps:\n\n- Refer to [Get Started]() to learn how to build your first dashboard.\n- Refer to [Configuration]() to learn about how you can customize your environment.","site":"grafana setup"}
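Whichever start method above you use, it can be handy to script a readiness check against Grafana's `/api/health` endpoint. A minimal sketch (the URL and timeout are illustrative defaults, and `curl` is assumed to be installed):

```shell
# Sketch: poll Grafana's /api/health endpoint until the server responds,
# or give up after a timeout.
wait_for_grafana() {
  url="${1:-http://localhost:3000/api/health}"
  timeout="${2:-30}"   # seconds
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    # /api/health returns HTTP 200 with a small JSON body once the server is up.
    if curl -sf --max-time 2 "$url" >/dev/null; then
      echo "grafana is up"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "grafana did not become healthy within ${timeout}s" >&2
  return 1
}
```

For example, `sudo systemctl restart grafana-server && wait_for_grafana` blocks until the restarted server is actually serving requests.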
{"questions":"grafana setup Grafana Live is a real time messaging engine that pushes event data to live live ha setup live configure grafana live live live feature overview live aliases live live channel live set up grafana live a frontend when an event occurs","answers":"---\naliases:\n  - ..\/live\/\n  - ..\/live\/configure-grafana-live\/\n  - ..\/live\/live-channel\/\n  - ..\/live\/live-feature-overview\/\n  - ..\/live\/live-ha-setup\/\n  - ..\/live\/set-up-grafana-live\/\ndescription: Grafana Live is a real-time messaging engine that pushes event data to\n  a frontend when an event occurs.\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: Set up Grafana Live\ntitle: Set up Grafana Live\nweight: 1100\n---\n\n# Set up Grafana Live\n\nGrafana Live is a real-time messaging engine you can use to push event data to a frontend as soon as an event occurs.\n\nThis could be notifications about dashboard changes, new frames for rendered data, and so on. Live features can eliminate page reloads or polling in many places, and can stream Internet of Things (IoT) sensor data or any other real-time data to panels.\n\nBy `real-time`, we mean soft real-time. Due to network latencies, garbage collection cycles, and so on, the delay of a delivered message can be several hundred milliseconds or higher.\n\n## Concepts\n\nGrafana Live sends data to clients over a persistent WebSocket connection. The Grafana frontend subscribes to channels to receive data published to those channels \u2013 in other words, PUB\/SUB mechanics are used. All subscriptions on a page are multiplexed inside a single WebSocket connection. There are some rules regarding Live channel names \u2013 see [Grafana Live channel]().\n\nHandling persistent connections like WebSocket at scale may require operating system and infrastructure tuning. That's why, by default, Grafana Live supports a maximum of 100 simultaneous connections. For more details on how to tune this limit, refer to [Live configuration section]().\n\n## Features\n\nHaving a way to send data to clients in real time opens the door to new ways of interacting with and visualizing data. The following sections describe the Grafana Live features currently supported.\n\n### Dashboard change notifications\n\nAs soon as there is a change to the dashboard layout, it is automatically reflected on other devices connected to Grafana Live.\n\n### Data streaming from plugins\n\nWith Grafana Live, backend data source plugins can stream updates to frontend panels.\n\nFor data source plugin channels, Grafana uses the `ds` scope. For data source channels, the namespace is the data source unique ID (UID), which Grafana issues when the data source is created. The path is a custom string that plugin authors are free to choose themselves (just make sure it consists of allowed symbols).\n\nFor example, a data source channel looks like this: `ds\/<DATASOURCE_UID>\/<CUSTOM_PATH>`.\n\nRefer to the tutorial about [building a streaming data source backend plugin](\/tutorials\/build-a-streaming-data-source-plugin\/) for more details.\n\nThe basic streaming example included in Grafana core streams frames with some generated data to a panel. To try it, create a new panel and point it to the `-- Grafana --` data source. Next, choose `Live Measurements` and select the `plugin\/testdata\/random-20Hz-stream` channel.\n\n### Data streaming from Telegraf\n\nA new API endpoint `\/api\/live\/push\/:streamId` accepts metrics data in Influx format from Telegraf. These metrics are transformed into Grafana data frames and published to channels.\n\nRefer to the tutorial about [streaming metrics from Telegraf to Grafana](\/tutorials\/stream-metrics-from-telegraf-to-grafana\/) for more information.\n\n## Grafana Live channel\n\nGrafana Live is a PUB\/SUB server: clients subscribe to channels to receive real-time updates published to those channels.\n\n### Channel structure\n\nA channel is a string identifier. In Grafana, a channel consists of 3 parts delimited by `\/`:\n\n- Scope\n- Namespace\n- Path\n\nFor example, the channel `grafana\/dashboard\/xyz` has the scope `grafana`, namespace `dashboard`, and path `xyz`.\n\nScope, namespace and path can currently contain only ASCII alphanumeric characters (A-Z, a-z, 0-9), `_` (underscore) and `-` (dash). The path part can additionally contain `\/`, `.` and `=` symbols. The meaning of scope, namespace and path is context-specific.\n\nThe maximum length of a channel is 160 characters.\n\nThe scope determines the purpose of a channel in Grafana. For example, for data source plugin channels Grafana uses the `ds` scope. For built-in features like dashboard edit notifications, Grafana uses the `grafana` scope.\n\nThe namespace has a different meaning depending on the scope. For example, for the `grafana` scope this could be the name of a built-in real-time feature like `dashboard` (that is, dashboard events).\n\nThe path, which is the final part of a channel, usually contains the identifier of some concrete resource, such as the ID of a dashboard that a user is currently looking at. But a path can be anything.\n\nChannels are lightweight and ephemeral: they are created automatically on user subscription and removed as soon as the last user leaves the channel.\n\n### Data format\n\nAll data travelling over Live channels must be JSON-encoded.\n\n## Configure Grafana Live\n\nGrafana Live is enabled by default. 
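The scope\/namespace\/path split described for Live channels can be illustrated with a small shell sketch (illustrative only; the actual parsing happens inside Grafana). Note that only the first two `/` separators delimit parts, since a path may itself contain `/`:

```shell
# Sketch: split a Live channel string into scope, namespace, and path.
parse_channel() {
  ch="$1"
  scope="${ch%%/*}"         # everything up to the first '/'
  rest="${ch#*/}"
  namespace="${rest%%/*}"   # up to the next '/'
  path="${rest#*/}"         # the remainder; may contain '/', '.', '='
  echo "scope=$scope namespace=$namespace path=$path"
}

parse_channel "grafana/dashboard/xyz"
# → scope=grafana namespace=dashboard path=xyz

parse_channel "ds/abc123/my/custom.path"
# → scope=ds namespace=abc123 path=my/custom.path
```

The second call shows why the path must be taken greedily: a data source channel `ds/<DATASOURCE_UID>/<CUSTOM_PATH>` keeps any extra `/` characters inside the path part.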
In Grafana v8.0, it has a strict default for the maximum number of connections per Grafana server instance.\n\n### Max number of connections\n\nGrafana Live uses persistent connections (WebSocket at the moment) to deliver real-time updates to clients.\n\nWebSocket is a persistent connection that starts with an HTTP Upgrade request (using the same HTTP port as the rest of Grafana) and then switches to a TCP mode where WebSocket frames can travel in both directions between a client and a server. Each logged-in user opens a WebSocket connection \u2013 one per browser tab.\n\nThe maximum number of WebSocket connections users can establish with Grafana is 100 by default. See the [max_connections]() option.\n\nIf you want to increase this limit, ensure that your server and infrastructure can handle more connections. The following sections discuss several common problems that can occur when managing persistent connections, in particular WebSocket connections.\n\n### Request origin check\n\nTo avoid hijacking of the WebSocket connection, Grafana Live checks the Origin request header sent by the client in the HTTP Upgrade request. Requests without an Origin header pass through without any origin check.\n\nBy default, Live accepts connections whose Origin header matches the configured [root_url]() (the public Grafana URL).\n\nYou can provide a list of additional origin patterns from which to allow WebSocket connections. This can be achieved using the [allowed_origins]() option of the Grafana Live configuration.\n\n#### Resource usage\n\nEach persistent connection costs some memory on the server \u2013 currently about 50 KB per connection. Thus a server with 1 GB of RAM can be expected to handle at most about 20k connections. Each active connection also consumes additional CPU resources, since the client and server exchange PING\/PONG frames to maintain the connection.\n\nUsing the streaming functionality results in additional CPU usage. 
The exact CPU resource utilization can be hard to estimate as it heavily depends on the Grafana Live usage pattern.\n\n#### Open file limit\n\nEach WebSocket connection costs a file descriptor on the server machine where Grafana runs. Most operating systems have a quite low default limit for the maximum number of descriptors that a process can open.\n\nTo check the current limit on Unix, run:\n\n```\nulimit -n\n```\n\nOn a Linux system, you can also check the current limits for a running process with:\n\n```\ncat \/proc\/<PROCESS_PID>\/limits\n```\n\nThe open files limit shows approximately how many user connections your server can currently handle.\n\nTo increase this limit, refer to [these instructions](https:\/\/docs.riak.com\/riak\/kv\/2.2.3\/using\/performance\/open-files-limit.1.html) for popular operating systems.\n\n#### Ephemeral port exhaustion\n\nThe ephemeral port exhaustion problem can happen between your load balancer (or reverse proxy) and the Grafana server \u2013 for example, when you load balance requests or connections between different Grafana instances. If you connect directly to a single Grafana server instance, you should not encounter this issue.\n\nThe problem arises because each TCP connection is uniquely identified in the OS by a 4-part tuple:\n\n```\nsource ip | source port | destination ip | destination port\n```\n\nBy default, at the load balancer\/server boundary you are limited to 65535 possible variants. But in practice, due to OS limits (for example, on Unix the available ports are defined by the `ip_local_port_range` sysctl parameter) and sockets in the TIME_WAIT state, the number is even lower.\n\nTo mitigate the problem, you can:\n\n- Increase the ephemeral port range by tuning the `ip_local_port_range` kernel option.\n- Deploy more Grafana server instances to load balance across.\n- Deploy more load balancer instances.\n- Use virtual network interfaces.\n\n#### WebSocket and proxies\n\nNot all proxies can transparently proxy WebSocket connections by default. For example, if you are using Nginx in front of Grafana, you need to configure WebSocket proxying like this:\n\n```\nhttp {\n    map $http_upgrade $connection_upgrade {\n        default upgrade;\n        '' close;\n    }\n\n    upstream grafana {\n        server 127.0.0.1:3000;\n    }\n\n    server {\n        listen 8000;\n\n        location \/ {\n            proxy_http_version 1.1;\n            proxy_set_header Upgrade $http_upgrade;\n            proxy_set_header Connection $connection_upgrade;\n            proxy_set_header Host $http_host;\n            proxy_pass http:\/\/grafana;\n        }\n    }\n}\n```\n\nSee the [Nginx blog on their website](https:\/\/www.nginx.com\/blog\/websocket-nginx\/) for more information. Also, refer to your load balancer or reverse proxy documentation for more information on dealing with WebSocket connections.\n\nSome corporate proxies can remove headers required to properly establish a WebSocket connection. In this case, you should tune intermediate proxies not to remove the required headers. However, a better option is to use Grafana with TLS: the WebSocket connection will then inherit TLS and should be handled transparently by proxies.\n\nProxies like Nginx and Envoy have default limits on the maximum number of connections that can be established. 
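The ephemeral-port budget mentioned above is easy to inspect. A small sketch (the helper and the sample range values are illustrative; `/proc/sys/net/ipv4/ip_local_port_range` is Linux-specific):

```shell
# Sketch: count how many ephemeral ports a host offers, given the contents
# of ip_local_port_range, which is "<low> <high>".
ephemeral_port_count() {
  low=$(echo "$1" | awk '{print $1}')
  high=$(echo "$1" | awk '{print $2}')
  echo $((high - low + 1))
}

# On a real Linux host you could feed in the live value, e.g.:
#   ephemeral_port_count "$(cat /proc/sys/net/ipv4/ip_local_port_range)"

# A common Linux default range:
ephemeral_port_count "32768 60999"
# → 28232
```

With only ~28k ports per source/destination address pair, a single load balancer talking to a single Grafana instance can exhaust them well before the theoretical 65535, which is why widening the range or adding instances helps.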
Make sure you have a reasonable limit for the maximum number of incoming and outgoing connections in your proxy configuration.\n\n## Configure Grafana Live HA setup\n\nBy default, Grafana Live uses in-memory data structures and an in-memory PUB\/SUB hub for handling subscriptions.\n\nIn a high availability Grafana setup involving several Grafana server instances behind a load balancer, the following limitations apply:\n\n- Built-in features like dashboard change notifications will only be broadcast to users connected to the same Grafana server process instance.\n- Streaming from Telegraf will deliver data only to clients connected to the same instance that received the Telegraf data; the active stream cache is not shared between different Grafana instances.\n- A separate unidirectional stream between Grafana and a backend data source may be opened on different Grafana servers for the same channel.\n\nTo work around these limitations, Grafana v8.1 introduced an experimental Live HA engine that requires Redis.\n\n### Configure Redis Live engine\n\nWhen the Redis engine is configured, Grafana Live keeps its state in Redis and uses Redis PUB\/SUB functionality to deliver messages to all subscribers across all Grafana server nodes.\n\nHere is an example configuration:\n\n```\n[live]\nha_engine = redis\nha_engine_address = 127.0.0.1:6379\n```\n\nFor additional information, refer to the [ha_engine]() and [ha_engine_address]() options.\n\nAfter the Redis engine is running:\n\n- All built-in real-time notifications like dashboard changes are delivered to all Grafana server instances and broadcast to all subscribers.\n- Streaming from Telegraf delivers messages to all subscribers.\n- A separate unidirectional stream between Grafana and a backend data source opens on different Grafana servers. 
Publishing data to a channel delivers messages to all instance subscribers; as a result, publications from different instances on different machines do not produce duplicate data on panels.\n\nAt the moment, only a single Redis node is supported.\n\n> **Note:** It's possible to use Redis Sentinel and HAProxy to achieve a highly available Redis setup. Redis nodes should be managed by [Redis Sentinel](https:\/\/redis.io\/topics\/sentinel) to achieve automatic failover. HAProxy configuration example:\n>\n> ```\n> listen redis\n>   bind *:6379\n>   mode tcp\n>   option tcpka\n>   option tcplog\n>   option tcp-check\n>   tcp-check send PING\\r\\n\n>   tcp-check expect string +PONG\n>   tcp-check send info\\ replication\\r\\n\n>   tcp-check expect string role:master\n>   tcp-check send QUIT\\r\\n\n>   tcp-check expect string +OK\n>   balance roundrobin\n>   server redis-01 127.0.0.1:6380 check port 6380 check inter 2s weight 1 inter 2s downinter 5s rise 10 fall 2 on-marked-down shutdown-sessions on-marked-up shutdown-backup-sessions\n>   server redis-02 127.0.0.1:6381 check port 6381 check inter 2s weight 1 inter 2s downinter 5s rise 10 fall 2 backup\n> ```\n>\n> Next, point Grafana Live to the HAProxy address:port.","site":"grafana setup","answers_cleaned":""}
{"questions":"grafana setup documentation Guide for configuring the Grafana Docker image installation configure docker aliases administration configure docker keywords configuration grafana docker","answers":"---\naliases:\n  - ..\/administration\/configure-docker\/\n  - ..\/installation\/configure-docker\/\ndescription: Guide for configuring the Grafana Docker image\nkeywords:\n  - grafana\n  - configuration\n  - documentation\n  - docker\n  - docker compose\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: Configure a Docker image\ntitle: Configure a Grafana Docker image\nweight: 1800\n---\n\n# Configure a Grafana Docker image\n\nThis topic explains how to run Grafana on Docker in complex environments that require you to:\n\n- Use different images\n- Change logging levels\n- Define secrets on the Cloud\n- Configure plugins\n\n> **Note:** The examples in this topic use the Grafana Enterprise Docker image. You can use the Grafana Open Source edition by changing the Docker image to `grafana\/grafana-oss`.\n\n## Supported Docker image variants\n\nYou can install and run Grafana using the following official Docker images.\n\n- **Grafana Enterprise**: `grafana\/grafana-enterprise`\n\n- **Grafana Open Source**: `grafana\/grafana-oss`\n\nEach edition is available in two variants: Alpine and Ubuntu.\n\n## Alpine image (recommended)\n\n[Alpine Linux](https:\/\/alpinelinux.org\/about\/) is a Linux distribution not affiliated with any commercial entity. It is a versatile operating system that caters to users who prioritize security, efficiency, and user-friendliness. 
Alpine Linux is much smaller than other distribution base images, allowing for slimmer and more secure images to be created.\n\nBy default, the images are built using the widely used [Alpine Linux project](http:\/\/alpinelinux.org\/) base image, which can be found in the [Alpine docker repo](https:\/\/hub.docker.com\/_\/alpine).\nIf you prioritize security and want to minimize the size of your image, it is recommended that you use the Alpine variant. However, it's important to note that the Alpine variant uses [musl libc](http:\/\/www.musl-libc.org\/) instead of [glibc and others](http:\/\/www.etalabs.net\/compare_libcs.html). As a result, some software might encounter problems depending on its libc requirements. Nonetheless, most software should not experience any issues, so the Alpine variant is generally reliable.\n\n## Ubuntu image\n\nThe Ubuntu-based Grafana Enterprise and OSS images are built using the [Ubuntu](https:\/\/ubuntu.com\/) base image, which can be found in the [Ubuntu docker repo](https:\/\/hub.docker.com\/_\/ubuntu). An Ubuntu-based image can be a good option for users who prefer Ubuntu or require certain tools that are unavailable on Alpine.\n\n- **Grafana Enterprise**: `grafana\/grafana-enterprise:<version>-ubuntu`\n\n- **Grafana Open Source**: `grafana\/grafana-oss:<version>-ubuntu`\n\n## Run a specific version of Grafana\n\nYou can also run a specific version of Grafana or a beta version based on the main branch of the [grafana\/grafana GitHub repository](https:\/\/github.com\/grafana\/grafana).\n\n> **Note:** If you use a Linux operating system such as Debian or Ubuntu and encounter permission errors when running Docker commands, you might need to prefix the command with `sudo` or add your user to the `docker` group. 
The official Docker documentation provides instructions on how to [manage Docker as a non-root user](https:\/\/docs.docker.com\/engine\/install\/linux-postinstall\/).\n\nTo run a specific version of Grafana, specify it in the `<version number>` section of the command:\n\n```bash\ndocker run -d -p 3000:3000 --name grafana grafana\/grafana-enterprise:<version number>\n```\n\nExample:\n\nThe following command runs the Grafana Enterprise container and specifies version 9.4.7. If you want to run a different version, modify the version number section.\n\n```bash\ndocker run -d -p 3000:3000 --name grafana grafana\/grafana-enterprise:9.4.7\n```\n\n## Run the Grafana main branch\n\nAfter every successful build of the main branch, two tags, `grafana\/grafana-oss:main` and `grafana\/grafana-oss:main-ubuntu`, are updated. Additionally, two new tags are created: `grafana\/grafana-oss-dev:<version><build ID>-pre` and `grafana\/grafana-oss-dev:<version><build ID>-pre-ubuntu`, where `version` is the next version of Grafana and `build ID` is the ID of the corresponding CI build. These tags provide access to the most recent Grafana main builds. For more information, refer to [grafana\/grafana-oss-dev](https:\/\/hub.docker.com\/r\/grafana\/grafana-oss-dev\/tags).\n\nTo ensure stability and consistency, we strongly recommend using the `grafana\/grafana-oss-dev:<version><build ID>-pre` tag when running the Grafana main branch in a production environment. This tag ensures that you are using a specific version of Grafana instead of the most recent commit, which could potentially introduce bugs or issues. 
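As a sketch, the recommended pinned tag is used like any other version tag (the `<version><build ID>` placeholder stands for a concrete tag taken from the Docker Hub listing):\n\n```bash\ndocker run -d -p 3000:3000 --name=grafana grafana\/grafana-oss-dev:<version><build ID>-pre\n```\n\nPinning a tag this way keeps you on a known build. 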
It also avoids polluting the tag namespace for the main Grafana images with thousands of pre-release tags.\n\nFor a list of available tags, refer to [grafana\/grafana-oss](https:\/\/hub.docker.com\/r\/grafana\/grafana-oss\/tags\/) and [grafana\/grafana-oss-dev](https:\/\/hub.docker.com\/r\/grafana\/grafana-oss-dev\/tags\/).\n\n## Default paths\n\nGrafana comes with default configuration parameters that remain the same across versions regardless of the operating system or the environment (for example, virtual machine, Docker, or Kubernetes). You can refer to the [Configure Grafana]() documentation to view all the default configuration settings.\n\nThe following configurations are set by default when you start the Grafana Docker container. When running in Docker, you cannot change the configuration by editing the `conf\/grafana.ini` file. Instead, you can modify the configuration using [environment variables]().\n\n| Setting               | Default value             |\n| --------------------- | ------------------------- |\n| GF_PATHS_CONFIG       | \/etc\/grafana\/grafana.ini  |\n| GF_PATHS_DATA         | \/var\/lib\/grafana          |\n| GF_PATHS_HOME         | \/usr\/share\/grafana        |\n| GF_PATHS_LOGS         | \/var\/log\/grafana          |\n| GF_PATHS_PLUGINS      | \/var\/lib\/grafana\/plugins  |\n| GF_PATHS_PROVISIONING | \/etc\/grafana\/provisioning |\n\n## Install plugins in the Docker container\n\nYou can install publicly available plugins and plugins that are private or used internally in an organization. 
For plugin installation instructions, refer to [Install plugins in the Docker container]().\n\n### Install plugins from other sources\n\nTo install plugins from other sources, you must define the custom URL and specify it after the plugin ID (and optional version) in the `GF_PLUGINS_PREINSTALL` environment variable: `GF_PLUGINS_PREINSTALL=<plugin ID>@[<plugin version>]@<url to plugin zip>`.\n\nExample:\n\nThe following command runs Grafana Enterprise on **port 3000** in detached mode and installs the custom plugin, which is specified as a URL parameter in the `GF_PLUGINS_PREINSTALL` environment variable.\n\n```bash\ndocker run -d -p 3000:3000 --name=grafana \\n  -e \"GF_PLUGINS_PREINSTALL=custom-plugin@@http:\/\/plugin-domain.com\/my-custom-plugin.zip,grafana-clock-panel\" \\n  grafana\/grafana-enterprise\n```\n\n## Build a custom Grafana Docker image\n\nIn the Grafana GitHub repository, the `packaging\/docker\/custom\/` folder includes a `Dockerfile` that you can use to build a custom Grafana image. The `Dockerfile` accepts `GRAFANA_VERSION`, `GF_INSTALL_PLUGINS`, and `GF_INSTALL_IMAGE_RENDERER_PLUGIN` as build arguments.\n\nThe `GRAFANA_VERSION` build argument must be a valid `grafana\/grafana` Docker image tag. By default, Grafana builds an Alpine-based image. 
To build an Ubuntu-based image, append `-ubuntu` to the `GRAFANA_VERSION` build argument.\n\nExample:\n\nThe following example shows you how to build and run a custom Grafana Docker image based on the latest official Ubuntu-based Grafana Docker image:\n\n```bash\n# go to the custom directory\ncd packaging\/docker\/custom\n\n# run the docker build command to build the image\ndocker build \\\n  --build-arg \"GRAFANA_VERSION=latest-ubuntu\" \\\n  -t grafana-custom .\n\n# run the custom grafana container using docker run command\ndocker run -d -p 3000:3000 --name=grafana grafana-custom\n```\n\n### Build Grafana with the Image Renderer plugin pre-installed\n\n> **Note:** This feature is experimental.\n\nCurrently, the Grafana Image Renderer plugin requires dependencies that are not available in the Grafana Docker image (see [GitHub Issue#301](https:\/\/github.com\/grafana\/grafana-image-renderer\/issues\/301) for more details). However, you can create a customized Docker image using the `GF_INSTALL_IMAGE_RENDERER_PLUGIN` build argument as a solution. This will install the necessary dependencies for the Grafana Image Renderer plugin to run.\n\nExample:\n\nThe following example shows how to build a customized Grafana Docker image that includes the Image Renderer plugin.\n\n```bash\n# go to the folder\ncd packaging\/docker\/custom\n\n# running the build command\ndocker build \\\n  --build-arg \"GRAFANA_VERSION=latest\" \\\n  --build-arg \"GF_INSTALL_IMAGE_RENDERER_PLUGIN=true\" \\\n  -t grafana-custom .\n\n# running the docker run command\ndocker run -d -p 3000:3000 --name=grafana grafana-custom\n```\n\n### Build a Grafana Docker image with pre-installed plugins\n\nIf you run multiple Grafana installations with the same plugins, you can save time by building a customized image that includes plugins available on the [Grafana Plugin download page](\/grafana\/plugins). 
When you build a customized image, Grafana doesn't have to install the plugins each time it starts, making the startup process more efficient.\n\n> **Note:** To specify the version of a plugin, you can use the `GF_INSTALL_PLUGINS` build argument and add the version number. The latest version is used if you don't specify a version number. For example, you can use `--build-arg \"GF_INSTALL_PLUGINS=grafana-clock-panel 1.0.1,grafana-simple-json-datasource 1.3.5\"` to specify the versions of two plugins.\n\nExample:\n\nThe following example shows how to build and run a custom Grafana Docker image with pre-installed plugins.\n\n```bash\n# go to the custom directory\ncd packaging\/docker\/custom\n\n# run the build command\n# include the plugins you want, e.g. the clock panel\ndocker build \\n  --build-arg \"GRAFANA_VERSION=latest\" \\n  --build-arg \"GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource\" \\n  -t grafana-custom .\n\n# run the custom Grafana container using the docker run command\ndocker run -d -p 3000:3000 --name=grafana grafana-custom\n```\n\n### Build a Grafana Docker image with pre-installed plugins from other sources\n\nYou can create a Docker image containing a plugin that is exclusive to your organization, even if it is not accessible to the public. Simply use the `GF_INSTALL_PLUGINS` build argument to specify the plugin's URL and installation folder name, such as `GF_INSTALL_PLUGINS=<url to plugin zip>;<plugin install folder name>`.\n\nThe following example demonstrates creating a customized Grafana Docker image that includes a custom plugin from a URL link, the clock panel plugin, and the simple-json-datasource plugin. 
You can define these plugins in the build argument using the Grafana Plugin environment variable.\n\n```bash\n# go to the folder\ncd packaging\/docker\/custom\n\n# running the build command\ndocker build \\\n  --build-arg \"GRAFANA_VERSION=latest\" \\\n  --build-arg \"GF_INSTALL_PLUGINS=http:\/\/plugin-domain.com\/my-custom-plugin.zip;my-custom-plugin,grafana-clock-panel,grafana-simple-json-datasource\" \\\n  -t grafana-custom .\n\n# running the docker run command\ndocker run -d -p 3000:3000 --name=grafana grafana-custom\n```\n\n## Logging\n\nBy default, Docker container logs are directed to `STDOUT`, a common practice in the Docker community. You can change this by setting a different [log mode]() such as `console`, `file`, or `syslog`. You can use one or more modes by separating them with spaces, for example, `console file`. By default, both `console` and `file` modes are enabled.\n\nExample:\n\nThe following example runs Grafana using the `console file` log mode that is set in the `GF_LOG_MODE` environment variable.\n\n```bash\n# Run Grafana while logging to both standard out\n# and \/var\/log\/grafana\/grafana.log\n\ndocker run -p 3000:3000 -e \"GF_LOG_MODE=console file\" grafana\/grafana-enterprise\n```\n\n## Configure Grafana with Docker Secrets\n\nYou can input confidential data like login credentials and secrets into Grafana using configuration files. This method works well with [Docker Secrets](https:\/\/docs.docker.com\/engine\/swarm\/secrets\/), as the secrets are automatically mapped to the `\/run\/secrets\/` location within the container.\n\nYou can apply this technique to any configuration options in `conf\/grafana.ini` by setting `GF_<SectionName>_<KeyName>__FILE` to the file path that contains the secret information. 
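As a sketch of the `__FILE` convention end to end (the password value here is illustrative, and Docker secrets require Swarm mode), the admin password from the example below could be wired up like this:\n\n```bash\n# Docker secrets require Swarm mode\ndocker swarm init\n\n# Create the secret; its content is mounted at \/run\/secrets\/admin_password\necho \"s3cr3tpassw0rd\" | docker secret create admin_password -\n\n# Run Grafana as a service and point the __FILE variable at the mounted secret\ndocker service create --name grafana \\n  --secret admin_password \\n  -e \"GF_SECURITY_ADMIN_PASSWORD__FILE=\/run\/secrets\/admin_password\" \\n  -p 3000:3000 \\n  grafana\/grafana-enterprise\n```\n\n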
For more information about Docker secret command usage, refer to [docker secret](https:\/\/docs.docker.com\/engine\/reference\/commandline\/secret\/).\n\nThe following example demonstrates how to set the admin password:\n\n- Admin password secret: `\/run\/secrets\/admin_password`\n- Environment variable: `GF_SECURITY_ADMIN_PASSWORD__FILE=\/run\/secrets\/admin_password`\n\n### Configure Docker secrets credentials for AWS CloudWatch\n\nGrafana ships with built-in support for the [Amazon CloudWatch datasource](). To configure the data source, you must provide information such as the AWS access key ID, secret access key, region, and so on. You can use Docker secrets as a way to provide this information.\n\nExample:\n\nThe example below shows how to use Grafana environment variables via Docker secrets for the AWS access key ID, secret access key, region, and profile.\n\nThe example uses the following values for the AWS CloudWatch data source:\n\n```bash\nAWS_default_ACCESS_KEY_ID=aws01us02\nAWS_default_SECRET_ACCESS_KEY=topsecret9b78c6\nAWS_default_REGION=us-east-1\n```\n\n1. Create a Docker secret for each of the values noted above.\n\n   ```bash\n   echo \"aws01us02\" | docker secret create aws_access_key_id -\n   ```\n\n   ```bash\n   echo \"topsecret9b78c6\" | docker secret create aws_secret_access_key -\n   ```\n\n   ```bash\n   echo \"us-east-1\" | docker secret create aws_region -\n   ```\n\n1. 
Run the following command to confirm that the secrets were created.\n\n   ```bash\n   $ docker secret ls\n   ```\n\n   The output from the command should look similar to the following:\n\n   ```\n   ID                          NAME           DRIVER    CREATED              UPDATED\n   i4g62kyuy80lnti5d05oqzgwh   aws_access_key_id             5 minutes ago        5 minutes ago\n   uegit5plcwodp57fxbqbnke7h   aws_secret_access_key         3 minutes ago        3 minutes ago\n   fxbqbnke7hplcwodp57fuegit   aws_region                    About a minute ago   About a minute ago\n   ```\n\n   Where:\n\n   ID = the unique secret ID that will be used in the docker run command\n\n   NAME = the logical name defined for each secret\n\n1. Add the secrets to the command line when you run Docker.\n\n   ```bash\n   docker run -d -p 3000:3000 --name grafana \\n     -e \"GF_DEFAULT_INSTANCE_NAME=my-grafana\" \\n     -e \"GF_AWS_PROFILES=default\" \\n     -e \"GF_AWS_default_ACCESS_KEY_ID__FILE=\/run\/secrets\/aws_access_key_id\" \\n     -e \"GF_AWS_default_SECRET_ACCESS_KEY__FILE=\/run\/secrets\/aws_secret_access_key\" \\n     -e \"GF_AWS_default_REGION__FILE=\/run\/secrets\/aws_region\" \\n     -v grafana-data:\/var\/lib\/grafana \\n     grafana\/grafana-enterprise\n   ```\n\nYou can also specify multiple profiles to `GF_AWS_PROFILES` (for example, `GF_AWS_PROFILES=default another`).\n\nThe following list includes the supported environment variables:\n\n- `GF_AWS_${profile}_ACCESS_KEY_ID`: AWS access key ID (required).\n- `GF_AWS_${profile}_SECRET_ACCESS_KEY`: AWS secret access key (required).\n- `GF_AWS_${profile}_REGION`: AWS region (optional).\n\n## Troubleshoot a Docker deployment\n\nBy default, the Grafana log level is set to `INFO`, but you can increase the log level to `DEBUG` mode when you want to reproduce a problem.\n\nFor more information about logging, refer to [logs]().\n\n### Increase log level using the Docker run (CLI) command\n\nTo increase the log level to 
`DEBUG` mode, add the environment variable `GF_LOG_LEVEL` to the command line.\n\n```bash\ndocker run -d -p 3000:3000 --name=grafana \\n  -e \"GF_LOG_LEVEL=debug\" \\n  grafana\/grafana-enterprise\n```\n\n### Increase log level using Docker Compose\n\nTo increase the log level to `DEBUG` mode, add the environment variable `GF_LOG_LEVEL` to the `docker-compose.yaml` file.\n\n```yaml\nversion: '3.8'\nservices:\n  grafana:\n    image: grafana\/grafana-enterprise\n    container_name: grafana\n    restart: unless-stopped\n    environment:\n      # increases the log level from info to debug\n      - GF_LOG_LEVEL=debug\n    ports:\n      - '3000:3000'\n    volumes:\n      - 'grafana_storage:\/var\/lib\/grafana'\nvolumes:\n  grafana_storage: {}\n```\n\n### Validate Docker Compose YAML file\n\nThe chance of syntax errors appearing in a YAML file increases as the file becomes more complex. You can use the following command to check for syntax errors.\n\n```bash\n# go to your docker-compose.yaml directory\ncd \/path-to\/docker-compose\/file\n\n# run the validation command\ndocker compose config\n```\n\nIf there are errors in the YAML file, the command output highlights the lines that contain errors. 
If there are no errors in the YAML file, the output includes the content of the `docker-compose.yaml` file in detailed YAML format.","site":"grafana setup"}
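The `__FILE` suffix convention used by the secrets example above (a `GF_*` variable ending in `__FILE` points at a file that holds the real value) can be sketched in plain shell. This is an illustrative sketch, not Grafana's actual implementation; the secret value and the `/tmp` path are hypothetical stand-ins for `/run/secrets`:

```shell
#!/bin/sh
# Sketch of the GF_*__FILE lookup convention (illustrative; not Grafana's code).
# A hypothetical secret file standing in for /run/secrets/aws_access_key_id:
mkdir -p /tmp/run-secrets
printf 'aws01us02' > /tmp/run-secrets/aws_access_key_id
export GF_AWS_default_ACCESS_KEY_ID__FILE=/tmp/run-secrets/aws_access_key_id

# resolve NAME: prefer the file named by NAME__FILE, else fall back to $NAME.
resolve() {
  file=$(eval "printf '%s' \"\${${1}__FILE}\"")
  if [ -n "$file" ]; then
    cat "$file"
  else
    eval "printf '%s' \"\$${1}\""
  fi
}

resolve GF_AWS_default_ACCESS_KEY_ID   # prints: aws01us02
```

The benefit of the file-based form is that the secret never appears in the container's environment listing, only the path to it.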
{"questions":"grafana setup administration jaeger instrumentation Jaeger traces emitted and propagation by Grafana administration view server internal metrics tracing jaeger aliases keywords admin metrics grafana","answers":"---\naliases:\n  - ..\/admin\/metrics\/\n  - ..\/administration\/jaeger-instrumentation\/\n  - ..\/administration\/view-server\/internal-metrics\/\ndescription: Jaeger traces emitted and propagation by Grafana\nkeywords:\n  - grafana\n  - jaeger\n  - tracing\nlabels:\n  products:\n    - enterprise\n    - oss\ntitle: Set up Grafana monitoring\nweight: 800\n---\n\n# Set up Grafana monitoring\n\nGrafana supports tracing.\n\nGrafana can emit Jaeger or OpenTelemetry Protocol (OTLP) traces for its HTTP API endpoints and propagate Jaeger and [w3c Trace Context](https:\/\/www.w3.org\/TR\/trace-context\/) trace information to compatible data sources.\nAll HTTP endpoints are logged evenly (annotations, dashboard, tags, and so on).\nWhen a trace ID is propagated, it is reported with operation 'HTTP \/datasources\/proxy\/:id\/\\*'.\n\nRefer to [Configuration's OpenTelemetry section]() for a reference of tracing options available in Grafana.\n\n## View Grafana internal metrics\n\nGrafana collects some metrics about itself internally. Grafana supports pushing metrics to Graphite or exposing them to be scraped by Prometheus.\n\nFor more information about configuration options related to Grafana metrics, refer to [metrics]() and [metrics.graphite]() in [Configuration]().\n\n### Available metrics\n\nWhen enabled, Grafana exposes a number of metrics, including:\n\n- Active Grafana instances\n- Number of dashboards, users, and playlists\n- HTTP status codes\n- Requests by routing group\n- Grafana active alerts\n- Grafana performance\n\n### Pull metrics from Grafana into Prometheus\n\nThese instructions assume you have already added Prometheus as a data source in Grafana.\n\n1. Enable Prometheus to scrape metrics from Grafana. 
In your configuration file (`grafana.ini` or `custom.ini` depending on your operating system) remove the semicolon to enable the following configuration options:\n\n   ```\n   # Metrics available at HTTP URL \/metrics and \/metrics\/plugins\/:pluginId\n   [metrics]\n   # Disable \/ Enable internal metrics\n   enabled           = true\n\n   # Disable total stats (stat_totals_*) metrics to be generated\n   disable_total_stats = false\n   ```\n\n1. (optional) If you want to require authorization to view the metrics endpoints, then uncomment and set the following options:\n\n   ```\n   basic_auth_username =\n   basic_auth_password =\n   ```\n\n1. Restart Grafana. Grafana now exposes metrics at http:\/\/localhost:3000\/metrics.\n1. Add the job to your prometheus.yml file.\n   Example:\n\n   ```\n   - job_name: 'grafana_metrics'\n\n      scrape_interval: 15s\n      scrape_timeout: 5s\n\n      static_configs:\n        - targets: ['localhost:3000']\n   ```\n\n1. Restart Prometheus. Your new job should appear on the Targets tab.\n1. In Grafana, click **Connections** in the left-side menu.\n1. Under your connections, click **Data Sources**.\n1. Select the **Prometheus** data source.\n1. Under the name of your data source, click **Dashboards**.\n1. On the Dashboards tab, click **Import** in the _Grafana metrics_ row to import the Grafana metrics dashboard. All scraped Grafana metrics are available in the dashboard.\n\n### View Grafana metrics in Graphite\n\nThese instructions assume you have already added Graphite as a data source in Grafana.\n\n1. Enable sending metrics to Graphite. 
In your configuration file (`grafana.ini` or `custom.ini` depending on your operating system) remove the semicolon to enable the following configuration options:\n\n   ```\n   # Metrics available at HTTP API Url \/metrics\n   [metrics]\n   # Disable \/ Enable internal metrics\n   enabled           = true\n\n   # Disable total stats (stat_totals_*) metrics to be generated\n   disable_total_stats = false\n   ```\n\n1. Enable [metrics.graphite] options:\n\n   ```\n   # Send internal metrics to Graphite\n   [metrics.graphite]\n   # Enable by setting the address setting (ex localhost:2003)\n   address = <hostname or ip>:<port#>\n   prefix = prod.grafana.%(instance_name)s.\n   ```\n\n1. Restart Grafana. Grafana now exposes metrics at http:\/\/localhost:3000\/metrics and sends them to the Graphite location you specified.\n\n### Pull metrics from Grafana backend plugin into Prometheus\n\nAny installed [backend plugin](https:\/\/grafana.com\/developers\/plugin-tools\/key-concepts\/backend-plugins\/) exposes a metrics endpoint through Grafana that you can configure Prometheus to scrape.\n\nThese instructions assume you have already added Prometheus as a data source in Grafana.\n\n1. Enable Prometheus to scrape backend plugin metrics from Grafana. In your configuration file (`grafana.ini` or `custom.ini` depending on your operating system) remove the semicolon to enable the following configuration options:\n\n   ```\n   # Metrics available at HTTP URL \/metrics and \/metrics\/plugins\/:pluginId\n   [metrics]\n   # Disable \/ Enable internal metrics\n   enabled           = true\n\n   # Disable total stats (stat_totals_*) metrics to be generated\n   disable_total_stats = false\n   ```\n\n1. (optional) If you want to require authorization to view the metrics endpoints, then uncomment and set the following options:\n\n   ```\n   basic_auth_username =\n   basic_auth_password =\n   ```\n\n1. Restart Grafana. 
Grafana now exposes metrics at `http:\/\/localhost:3000\/metrics\/plugins\/<plugin id>`, e.g. http:\/\/localhost:3000\/metrics\/plugins\/grafana-github-datasource if you have the [Grafana GitHub datasource](\/grafana\/plugins\/grafana-github-datasource\/) installed.\n1. Add the job to your prometheus.yml file.\n   Example:\n\n   ```\n   - job_name: 'grafana_github_datasource'\n\n      scrape_interval: 15s\n      scrape_timeout: 5s\n      metrics_path: \/metrics\/plugins\/grafana-test-datasource\n\n      static_configs:\n        - targets: ['localhost:3000']\n   ```\n\n1. Restart Prometheus. Your new job should appear on the Targets tab.\n1. In Grafana, hover your mouse over the **Configuration** (gear) icon on the left sidebar and then click **Data Sources**.\n1. Select the **Prometheus** data source.\n1. Import a Golang application metrics dashboard - for example [Go Processes](\/grafana\/dashboards\/6671).","site":"grafana setup"}
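The `/metrics` endpoints scraped above serve the Prometheus text exposition format, which is plain text and easy to inspect by hand. A minimal sketch of filtering such output offline — the sample metric names below are made up for illustration and are not actual Grafana metric names:

```shell
# Hypothetical sample of Prometheus text-format output, as served by /metrics
# (metric names are illustrative only):
cat > /tmp/metrics_sample.txt <<'EOF'
# HELP grafana_stat_totals_dashboard total amount of dashboards
# TYPE grafana_stat_totals_dashboard gauge
grafana_stat_totals_dashboard 42
grafana_http_request_duration_seconds_count{handler="/api/health"} 1337
EOF

# Print the value of a single metric, skipping the HELP/TYPE comment lines:
awk '$1 == "grafana_stat_totals_dashboard" { print $2 }' /tmp/metrics_sample.txt
# prints: 42
```

Against a live instance you would pipe `curl -s http://localhost:3000/metrics` into the same filter instead of reading a file.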
{"questions":"grafana setup labels aliases keywords logs grafana audit Auditing auditing enterprise auditing","answers":"---\naliases:\n  - ..\/..\/enterprise\/auditing\/\ndescription: Auditing\nkeywords:\n  - grafana\n  - auditing\n  - audit\n  - logs\nlabels:\n  products:\n    - cloud\n    - enterprise\ntitle: Audit a Grafana instance\nweight: 800\n---\n\n# Audit a Grafana instance\n\nAuditing allows you to track important changes to your Grafana instance. By default, audit logs are written to a file, but the auditing feature also supports sending logs directly to Loki.\n\nTo enable sending Grafana Cloud audit logs to your Grafana Cloud Logs instance, please [file a support ticket](\/profile\/org\/tickets\/new). Note that standard ingest and retention rates apply for ingesting these audit logs.\n\nOnly API requests or UI actions that trigger an API request generate an audit log.\n\nAvailable in [Grafana Enterprise]() and [Grafana Cloud](\/docs\/grafana-cloud).\n\n## Audit logs\n\nAudit logs are JSON objects representing user actions, such as:\n\n- Modifications to resources such as dashboards and data sources.\n- A user failing to log in.\n\n### Format\n\nAudit logs contain the following fields. 
The fields followed by **\\*** are always available; the others depend on the type of action logged.\n\n| Field name | Type | Description |\n| ----------------------- | ------- | ----------- |\n| `timestamp`\\* | string | The date and time the request was made, in coordinated universal time (UTC), using the [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339#section-5.6) format. |\n| `user`\\* | object | Information about the user that made the request. One of the `userId` or `apiKeyId` fields contains content when `isAnonymous=false`. |\n| `user.userId` | number | ID of the Grafana user that made the request. |\n| `user.orgId`\\* | number | Current organization of the user that made the request. |\n| `user.orgRole` | string | Current role of the user that made the request. |\n| `user.name` | string | Name of the Grafana user that made the request. |\n| `user.authTokenId` | number | ID of the user authentication token. |\n| `user.apiKeyId` | number | ID of the Grafana API key used to make the request. |\n| `user.isAnonymous`\\* | boolean | If an anonymous user made the request, `true`. Otherwise, `false`. |\n| `action`\\* | string | The request action. For example, `create`, `update`, or `manage-permissions`. |\n| `request`\\* | object | Information about the HTTP request. |\n| `request.params` | object | Request\u2019s path parameters. |\n| `request.query` | object | Request\u2019s query parameters. |\n| `request.body` | string | Request\u2019s body. Filled with `<non-marshalable format>` when it isn't valid JSON. |\n| `result`\\* | object | Information about the HTTP response. |\n| `result.statusType` | string | If the request action was successful, `success`. Otherwise, `failure`. |\n| `result.statusCode` | number | HTTP status of the request. |\n| `result.failureMessage` | string | HTTP error message. |\n| `result.body` | string | Response body. Filled with `<non-marshalable format>` when it isn't valid JSON. |\n| `resources` | array | Information about the resources that the request action affected. This field can be null for non-resource actions such as `login` or `logout`. |\n| `resources[x].id`\\* | number | ID of the resource. |\n| `resources[x].type`\\* | string | The type of the resource that was logged: `alert`, `alert-notification`, `annotation`, `api-key`, `auth-token`, `dashboard`, `datasource`, `folder`, `org`, `panel`, `playlist`, `report`, `team`, `user`, or `version`. |\n| `requestUri`\\* | string | Request URI. |\n| `ipAddress`\\* | string | IP address that the request was made from. |\n| `userAgent`\\* | string | Agent through which the request was made. |\n| `grafanaVersion`\\* | string | Current version of Grafana when this log is created. |\n| `additionalData` | object | Additional information that can be provided about the request. |\n\nThe `additionalData` field can contain the following information:\n| Field name | Action | Description |\n| ---------- | ------ | ----------- |\n| `loginUsername` | `login` | Login used in the Grafana authentication form. 
|\n| `extUserInfo` | `login` | User information provided by the external system that was used to log in. |\n| `authTokenCount` | `login` | Number of active authentication tokens for the user that logged in. |\n| `terminationReason` | `logout` | The reason why the user logged out, such as a manual logout or a token expiring. |\n| `billing_role` | `billing-information` | The billing role associated with the billing information being sent. |\n\n### Recorded actions\n\nThe audit logs include records about the following categories of actions. Each action is\ndistinguished by the `action` and `resources[...].type` fields in the JSON record.\n\nFor example, creating an API key produces an audit log like this:\n\n```json {hl_lines=4}\n{\n  \"action\": \"create\",\n  \"resources\": [\n    {\n      \"id\": 1,\n      \"type\": \"api-key\"\n    }\n  ],\n  \"timestamp\": \"2021-11-12T22:12:36.144795692Z\",\n  \"user\": {\n    \"userId\": 1,\n    \"orgId\": 1,\n    \"orgRole\": \"Admin\",\n    \"username\": \"admin\",\n    \"isAnonymous\": false,\n    \"authTokenId\": 1\n  },\n  \"request\": {\n    \"body\": \"{\\\"name\\\":\\\"example\\\",\\\"role\\\":\\\"Viewer\\\",\\\"secondsToLive\\\":null}\"\n  },\n  \"result\": {\n    \"statusType\": \"success\",\n    \"statusCode\": 200,\n    \"responseBody\": \"{\\\"id\\\":1,\\\"name\\\":\\\"example\\\"}\"\n  },\n  \"requestUri\": \"\/api\/auth\/keys\",\n  \"ipAddress\": \"127.0.0.1:54652\",\n  \"userAgent\": \"Mozilla\/5.0 (X11; Linux x86_64; rv:94.0) Gecko\/20100101 Firefox\/94.0\",\n  \"grafanaVersion\": \"8.3.0-pre\"\n}\n```\n\nSome actions can only be distinguished by their `requestUri` fields. For those actions, the relevant\npattern of the `requestUri` field is given.\n\nNote that almost all of these recorded actions correspond to API requests, or to UI actions that\ntrigger an API request. 
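As an illustration only (a hypothetical consumer script, not part of Grafana), a record like the example above can be summarized in Python by reading the same `action`, `resources[...].type`, `user`, and `result` fields described in the Format section:

```python
import json

# Hypothetical helper (not part of Grafana): summarize one audit-log record
# by the fields used to distinguish recorded actions.
def describe(record: dict) -> str:
    action = record.get("action", "unknown")
    # `resources` can be null for non-resource actions such as login/logout.
    types = [r.get("type", "?") for r in record.get("resources") or []]
    user = record.get("user", {})
    who = "anonymous" if user.get("isAnonymous") else f"user {user.get('userId')}"
    target = "/".join(types) if types else "no resource"
    status = record.get("result", {}).get("statusType", "?")
    return f"{who}: {action} on {target} -> {status}"

# One audit-log line, as it would appear in the file exporter's output.
line = (
    '{"action": "create", "resources": [{"id": 1, "type": "api-key"}], '
    '"user": {"userId": 1, "isAnonymous": false}, '
    '"result": {"statusType": "success", "statusCode": 200}}'
)
print(describe(json.loads(line)))  # user 1: create on api-key -> success
```

Running it against the API-key example prints `user 1: create on api-key -> success`.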
Therefore, the action `{\"action\": \"email\", \"resources\": [{\"type\": \"report\"}]}` corresponds\nto the action when the user requests a report's preview to be sent through email, and not the scheduled ones.\n\n#### Sessions\n\n| Action                           | Distinguishing fields                                                                      |\n| -------------------------------- | ------------------------------------------------------------------------------------------ |\n| Log in                           | `{\"action\": \"login-AUTH-MODULE\"}` \\*                                                       |\n| Log out \\*\\*                     | `{\"action\": \"logout\"}`                                                                     |\n| Force logout for user            | `{\"action\": \"logout-user\"}`                                                                |\n| Remove user authentication token | `{\"action\": \"revoke-auth-token\", \"resources\": [{\"type\": \"auth-token\"}, {\"type\": \"user\"}]}` |\n| Create API key                   | `{\"action\": \"create\", \"resources\": [{\"type\": \"api-key\"}]}`                                 |\n| Delete API key                   | `{\"action\": \"delete\", \"resources\": [{\"type\": \"api-key\"}]}`                                 |\n\n\\* Where `AUTH-MODULE` is the name of the authentication module: `grafana`, `saml`,\n`ldap`, etc. 
\\\n\\*\\* Includes manual log out, token expired\/revoked, and [SAML Single Logout]().\n\n#### Service accounts\n\n| Action | Distinguishing fields |\n| ---------------------------- | ----------------------------------------------------------------------------------------------------- |\n| Create service account | `{\"action\": \"create\", \"resources\": [{\"type\": \"service-account\"}]}` |\n| Update service account | `{\"action\": \"update\", \"resources\": [{\"type\": \"service-account\"}]}` |\n| Delete service account | `{\"action\": \"delete\", \"resources\": [{\"type\": \"service-account\"}]}` |\n| Create service account token | `{\"action\": \"create\", \"resources\": [{\"type\": \"service-account\"}, {\"type\": \"service-account-token\"}]}` |\n| Delete service account token | `{\"action\": \"delete\", \"resources\": [{\"type\": \"service-account\"}, {\"type\": \"service-account-token\"}]}` |\n| Hide API keys | `{\"action\": \"hide-api-keys\"}` |\n| Migrate API keys | `{\"action\": \"migrate-api-keys\"}` |\n| Migrate API key | `{\"action\": \"migrate-api-keys\", \"resources\": [{\"type\": \"api-key\"}]}` |\n\n#### Access control\n\n| Action | Distinguishing fields |\n| ---------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- |\n| Create role | `{\"action\": 
\"create\", \"resources\": [{\"type\": \"role\"}]}`                                                                     |\n| Update role                              | `{\"action\": \"update\", \"resources\": [{\"type\": \"role\"}]}`                                                                     |\n| Delete role                              | `{\"action\": \"delete\", \"resources\": [{\"type\": \"role\"}]}`                                                                     |\n| Assign built-in role                     | `{\"action\": \"assign-builtin-role\", \"resources\": [{\"type\": \"role\"}, {\"type\": \"builtin-role\"}]}`                              |\n| Remove built-in role                     | `{\"action\": \"remove-builtin-role\", \"resources\": [{\"type\": \"role\"}, {\"type\": \"builtin-role\"}]}`                              |\n| Grant team role                          | `{\"action\": \"grant-team-role\", \"resources\": [{\"type\": \"team\"}]}`                                                            |\n| Set team roles                           | `{\"action\": \"set-team-roles\", \"resources\": [{\"type\": \"team\"}]}`                                                             |\n| Revoke team role                         | `{\"action\": \"revoke-team-role\", \"resources\": [{\"type\": \"role\"}, {\"type\": \"team\"}]}`                                         |\n| Grant user role                          | `{\"action\": \"grant-user-role\", \"resources\": [{\"type\": \"role\"}, {\"type\": \"user\"}]}`                                          |\n| Set user roles                           | `{\"action\": \"set-user-roles\", \"resources\": [{\"type\": \"user\"}]}`                                                             |\n| Revoke user role                         | `{\"action\": \"revoke-user-role\", \"resources\": [{\"type\": \"role\"}, {\"type\": \"user\"}]}`                                         |\n| Set user permissions on folder     
      | `{\"action\": \"set-user-permissions-on-folder\", \"resources\": [{\"type\": \"folder\"}, {\"type\": \"user\"}]}`                         |\n| Set team permissions on folder           | `{\"action\": \"set-team-permissions-on-folder\", \"resources\": [{\"type\": \"folder\"}, {\"type\": \"team\"}]}`                         |\n| Set basic role permissions on folder     | `{\"action\": \"set-basic-role-permissions-on-folder\", \"resources\": [{\"type\": \"folder\"}, {\"type\": \"builtin-role\"}]}`           |\n| Set user permissions on dashboard        | `{\"action\": \"set-user-permissions-on-dashboards\", \"resources\": [{\"type\": \"dashboard\"}, {\"type\": \"user\"}]}`                  |\n| Set team permissions on dashboard        | `{\"action\": \"set-team-permissions-on-dashboards\", \"resources\": [{\"type\": \"dashboard\"}, {\"type\": \"team\"}]}`                  |\n| Set basic role permissions on dashboard  | `{\"action\": \"set-basic-role-permissions-on-dashboards\", \"resources\": [{\"type\": \"dashboard\"}, {\"type\": \"builtin-role\"}]}`    |\n| Set user permissions on team             | `{\"action\": \"set-user-permissions-on-teams\", \"resources\": [{\"type\": \"teams\"}, {\"type\": \"user\"}]}`                           |\n| Set user permissions on service account  | `{\"action\": \"set-user-permissions-on-service-accounts\", \"resources\": [{\"type\": \"service-account\"}, {\"type\": \"user\"}]}`      |\n| Set user permissions on datasource       | `{\"action\": \"set-user-permissions-on-data-sources\", \"resources\": [{\"type\": \"datasource\"}, {\"type\": \"user\"}]}`               |\n| Set team permissions on datasource       | `{\"action\": \"set-team-permissions-on-data-sources\", \"resources\": [{\"type\": \"datasource\"}, {\"type\": \"team\"}]}`               |\n| Set basic role permissions on datasource | `{\"action\": \"set-basic-role-permissions-on-data-sources\", \"resources\": [{\"type\": \"datasource\"}, {\"type\": 
\"builtin-role\"}]}` |\n\n#### User management\n\n| Action                    | Distinguishing fields                                               |\n| ------------------------- | ------------------------------------------------------------------- |\n| Create user               | `{\"action\": \"create\", \"resources\": [{\"type\": \"user\"}]}`             |\n| Update user               | `{\"action\": \"update\", \"resources\": [{\"type\": \"user\"}]}`             |\n| Delete user               | `{\"action\": \"delete\", \"resources\": [{\"type\": \"user\"}]}`             |\n| Disable user              | `{\"action\": \"disable\", \"resources\": [{\"type\": \"user\"}]}`            |\n| Enable user               | `{\"action\": \"enable\", \"resources\": [{\"type\": \"user\"}]}`             |\n| Update password           | `{\"action\": \"update-password\", \"resources\": [{\"type\": \"user\"}]}`    |\n| Send password reset email | `{\"action\": \"send-reset-email\"}`                                    |\n| Reset password            | `{\"action\": \"reset-password\"}`                                      |\n| Update permissions        | `{\"action\": \"update-permissions\", \"resources\": [{\"type\": \"user\"}]}` |\n| Send signup email         | `{\"action\": \"signup-email\"}`                                        |\n| Click signup link         | `{\"action\": \"signup\"}`                                              |\n| Reload LDAP configuration | `{\"action\": \"ldap-reload\"}`                                         |\n| Get user in LDAP          | `{\"action\": \"ldap-search\"}`                                         |\n| Sync user with LDAP       | `{\"action\": \"ldap-sync\", \"resources\": [{\"type\": \"user\"}]`           |\n\n#### Team and organization management\n\n| Action                               | Distinguishing fields                                                        |\n| ------------------------------------ | 
---------------------------------------------------------------------------- |\n| Add team                             | `{\"action\": \"create\", \"requestUri\": \"\/api\/teams\"}`                           |\n| Update team                          | `{\"action\": \"update\", \"requestUri\": \"\/api\/teams\/TEAM-ID\"}`\\*                 |\n| Delete team                          | `{\"action\": \"delete\", \"requestUri\": \"\/api\/teams\/TEAM-ID\"}`\\*                 |\n| Add external group for team          | `{\"action\": \"create\", \"requestUri\": \"\/api\/teams\/TEAM-ID\/groups\"}`\\*          |\n| Remove external group for team       | `{\"action\": \"delete\", \"requestUri\": \"\/api\/teams\/TEAM-ID\/groups\/GROUP-ID\"}`\\* |\n| Add user to team                     | `{\"action\": \"create\", \"resources\": [{\"type\": \"user\"}, {\"type\": \"team\"}]}`    |\n| Update team member permissions       | `{\"action\": \"update\", \"resources\": [{\"type\": \"user\"}, {\"type\": \"team\"}]}`    |\n| Remove user from team                | `{\"action\": \"delete\", \"resources\": [{\"type\": \"user\"}, {\"type\": \"team\"}]}`    |\n| Create organization                  | `{\"action\": \"create\", \"resources\": [{\"type\": \"org\"}]}`                       |\n| Update organization                  | `{\"action\": \"update\", \"resources\": [{\"type\": \"org\"}]}`                       |\n| Delete organization                  | `{\"action\": \"delete\", \"resources\": [{\"type\": \"org\"}]}`                       |\n| Add user to organization             | `{\"action\": \"create\", \"resources\": [{\"type\": \"org\"}, {\"type\": \"user\"}]}`     |\n| Change user role in organization     | `{\"action\": \"update\", \"resources\": [{\"type\": \"user\"}, {\"type\": \"org\"}]}`     |\n| Remove user from organization        | `{\"action\": \"delete\", \"resources\": [{\"type\": \"user\"}, {\"type\": \"org\"}]}`     |\n| Invite external user to organization | 
`{\"action\": \"org-invite\", \"resources\": [{\"type\": \"org\"}, {\"type\": \"user\"}]}` |\n| Revoke invitation                    | `{\"action\": \"revoke-org-invite\", \"resources\": [{\"type\": \"org\"}]}`            |\n\n\\* Where `TEAM-ID` is the ID of the affected team, and `GROUP-ID` (if present) is the ID of the\nexternal group.\n\n#### Folder and dashboard management\n\n| Action                        | Distinguishing fields                                                    |\n| ----------------------------- | ------------------------------------------------------------------------ |\n| Create folder                 | `{\"action\": \"create\", \"resources\": [{\"type\": \"folder\"}]}`                |\n| Update folder                 | `{\"action\": \"update\", \"resources\": [{\"type\": \"folder\"}]}`                |\n| Update folder permissions     | `{\"action\": \"manage-permissions\", \"resources\": [{\"type\": \"folder\"}]}`    |\n| Delete folder                 | `{\"action\": \"delete\", \"resources\": [{\"type\": \"folder\"}]}`                |\n| Create\/update dashboard       | `{\"action\": \"create-update\", \"resources\": [{\"type\": \"dashboard\"}]}`      |\n| Import dashboard              | `{\"action\": \"create\", \"resources\": [{\"type\": \"dashboard\"}]}`             |\n| Update dashboard permissions  | `{\"action\": \"manage-permissions\", \"resources\": [{\"type\": \"dashboard\"}]}` |\n| Restore old dashboard version | `{\"action\": \"restore\", \"resources\": [{\"type\": \"dashboard\"}]}`            |\n| Delete dashboard              | `{\"action\": \"delete\", \"resources\": [{\"type\": \"dashboard\"}]}`             |\n\n#### Library elements management\n\n| Action                 | Distinguishing fields                                              |\n| ---------------------- | ------------------------------------------------------------------ |\n| Create library element | `{\"action\": \"create\", \"resources\": [{\"type\": 
\"library-element\"}]}` |\n| Update library element | `{\"action\": \"update\", \"resources\": [{\"type\": \"library-element\"}]}` |\n| Delete library element | `{\"action\": \"delete\", \"resources\": [{\"type\": \"library-element\"}]}` |\n\n#### Data sources management\n\n| Action                                             | Distinguishing fields                                                                     |\n| -------------------------------------------------- | ----------------------------------------------------------------------------------------- |\n| Create datasource                                  | `{\"action\": \"create\", \"resources\": [{\"type\": \"datasource\"}]}`                             |\n| Update datasource                                  | `{\"action\": \"update\", \"resources\": [{\"type\": \"datasource\"}]}`                             |\n| Delete datasource                                  | `{\"action\": \"delete\", \"resources\": [{\"type\": \"datasource\"}]}`                             |\n| Enable permissions for datasource                  | `{\"action\": \"enable-permissions\", \"resources\": [{\"type\": \"datasource\"}]}`                 |\n| Disable permissions for datasource                 | `{\"action\": \"disable-permissions\", \"resources\": [{\"type\": \"datasource\"}]}`                |\n| Grant datasource permission to role, team, or user | `{\"action\": \"create\", \"resources\": [{\"type\": \"datasource\"}, {\"type\": \"dspermission\"}]}`\\* |\n| Remove datasource permission                       | `{\"action\": \"delete\", \"resources\": [{\"type\": \"datasource\"}, {\"type\": \"dspermission\"}]}`   |\n| Enable caching for datasource                      | `{\"action\": \"enable-cache\", \"resources\": [{\"type\": \"datasource\"}]}`                       |\n| Disable caching for datasource                     | `{\"action\": \"disable-cache\", \"resources\": [{\"type\": \"datasource\"}]}`                      
|\n| Update datasource caching configuration            | `{\"action\": \"update\", \"resources\": [{\"type\": \"datasource\"}]}`                             |\n\n\\* `resources` may also contain a third item with `\"type\":` set to `\"user\"` or `\"team\"`.\n\n#### Data source query\n\n| Action           | Distinguishing fields                                        |\n| ---------------- | ------------------------------------------------------------ |\n| Query datasource | `{\"action\": \"query\", \"resources\": [{\"type\": \"datasource\"}]}` |\n\n#### Reporting\n\n| Action                    | Distinguishing fields                                                            |\n| ------------------------- | -------------------------------------------------------------------------------- |\n| Create report             | `{\"action\": \"create\", \"resources\": [{\"type\": \"report\"}, {\"type\": \"dashboard\"}]}` |\n| Update report             | `{\"action\": \"update\", \"resources\": [{\"type\": \"report\"}, {\"type\": \"dashboard\"}]}` |\n| Delete report             | `{\"action\": \"delete\", \"resources\": [{\"type\": \"report\"}]}`                        |\n| Send report by email      | `{\"action\": \"email\", \"resources\": [{\"type\": \"report\"}]}`                         |\n| Update reporting settings | `{\"action\": \"change-settings\"}`                                                  |\n\n#### Annotations, playlists and snapshots management\n\n| Action                            | Distinguishing fields                                                                |\n| --------------------------------- | ------------------------------------------------------------------------------------ |\n| Create annotation                 | `{\"action\": \"create\", \"resources\": [{\"type\": \"annotation\"}]}`                        |\n| Create Graphite annotation        | `{\"action\": \"create-graphite\", \"resources\": [{\"type\": \"annotation\"}]}`             
  |\n| Update annotation                 | `{\"action\": \"update\", \"resources\": [{\"type\": \"annotation\"}]}`                        |\n| Patch annotation                  | `{\"action\": \"patch\", \"resources\": [{\"type\": \"annotation\"}]}`                         |\n| Delete annotation                 | `{\"action\": \"delete\", \"resources\": [{\"type\": \"annotation\"}]}`                        |\n| Delete all annotations from panel | `{\"action\": \"mass-delete\", \"resources\": [{\"type\": \"dashboard\"}, {\"type\": \"panel\"}]}` |\n| Create playlist                   | `{\"action\": \"create\", \"resources\": [{\"type\": \"playlist\"}]}`                          |\n| Update playlist                   | `{\"action\": \"update\", \"resources\": [{\"type\": \"playlist\"}]}`                          |\n| Delete playlist                   | `{\"action\": \"delete\", \"resources\": [{\"type\": \"playlist\"}]}`                          |\n| Create a snapshot                 | `{\"action\": \"create\", \"resources\": [{\"type\": \"dashboard\"}, {\"type\": \"snapshot\"}]}`   |\n| Delete a snapshot                 | `{\"action\": \"delete\", \"resources\": [{\"type\": \"snapshot\"}]}`                          |\n| Delete a snapshot by delete key   | `{\"action\": \"delete\", \"resources\": [{\"type\": \"snapshot\"}]}`                          |\n\n#### Provisioning\n\n| Action                            | Distinguishing fields                      |\n| --------------------------------- | ------------------------------------------ |\n| Reload provisioned dashboards     | `{\"action\": \"provisioning-dashboards\"}`    |\n| Reload provisioned datasources    | `{\"action\": \"provisioning-datasources\"}`   |\n| Reload provisioned plugins        | `{\"action\": \"provisioning-plugins\"}`       |\n| Reload provisioned alerts         | `{\"action\": \"provisioning-alerts\"}`        |\n| Reload provisioned access control | `{\"action\": 
\"provisioning-accesscontrol\"}` |\n\n#### Plugins management\n\n| Action           | Distinguishing fields     |\n| ---------------- | ------------------------- |\n| Install plugin   | `{\"action\": \"install\"}`   |\n| Uninstall plugin | `{\"action\": \"uninstall\"}` |\n\n#### Miscellaneous\n\n| Action                   | Distinguishing fields                                        |\n| ------------------------ | ------------------------------------------------------------ |\n| Set licensing token      | `{\"action\": \"create\", \"requestUri\": \"\/api\/licensing\/token\"}` |\n| Save billing information | `{\"action\": \"billing-information\"}`                          |\n\n#### Cloud migration management\n\n\n\n| Action                           | Distinguishing fields                                       |\n| -------------------------------- | ----------------------------------------------------------- |\n| Connect to a cloud instance      | `{\"action\": \"connect-instance\"}`                            |\n| Disconnect from a cloud instance | `{\"action\": \"disconnect-instance\"}`                         |\n| Build a snapshot                 | `{\"action\": \"build\", \"resources\": [{\"type\": \"snapshot\"}]}`  |\n| Upload a snapshot                | `{\"action\": \"upload\", \"resources\": [{\"type\": \"snapshot\"}]}` |\n\n#### Generic actions\n\nIn addition to the actions listed above, any HTTP request (`POST`, `PATCH`, `PUT`, and `DELETE`)\nagainst the API is recorded with one of the following generic actions.\n\nFurthermore, you can also record `GET` requests. 
See below for how to configure it.\n\n| Action         | Distinguishing fields          |\n| -------------- | ------------------------------ |\n| POST request   | `{\"action\": \"post-action\"}`    |\n| PATCH request  | `{\"action\": \"partial-update\"}` |\n| PUT request    | `{\"action\": \"update\"}`         |\n| DELETE request | `{\"action\": \"delete\"}`         |\n| GET request    | `{\"action\": \"retrieve\"}`       |\n\n## Configuration\n\n\nThe auditing feature is disabled by default.\n\n\nAudit logs can be saved to files, sent to a Loki instance, or sent to the Grafana default logger. By default, only the file exporter is enabled.\nYou can choose which exporter to use in the [configuration file]().\n\nOptions are `file`, `loki`, and `logger`. Use spaces to separate multiple modes, such as `file loki`.\n\nBy default, when a user creates or updates a dashboard, its content does not appear in the logs, because it can significantly increase the size of your logs. If this information is important to you and you can handle the amount of data generated, you can enable this option in the configuration.\n\n```ini\n[auditing]\n# Enable the auditing feature\nenabled = false\n# List of enabled loggers\nloggers = file\n# Keep dashboard content in the logs (request or response fields); this can significantly increase the size of your logs.\nlog_dashboard_content = false\n# Keep request and response bodies; this can significantly increase the size of your logs.\nverbose = false\n# Write an audit log for every status code.\n# By default, it only logs the following ones: 2XX, 3XX, 401, 403, and 500.\nlog_all_status_codes = false\n# Maximum response body (in bytes) to be audited; 500 KiB by default.\n# May help reduce the memory footprint caused by auditing.\nmax_response_size_bytes = 512000\n```\n\nEach exporter has its own configuration fields.\n\n### File exporter\n\nAudit logs are saved to files. You can configure the folder used to save these files. 
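As an illustration only (a hypothetical consumer, not part of Grafana; the one-JSON-record-per-line layout of the sample input is an assumption), a short Python sketch that scans audit-log lines for failed requests:

```python
import json

def failed_requests(lines):
    """Yield (action, statusCode) for every record whose result indicates failure."""
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        record = json.loads(line)
        result = record.get("result", {})
        if result.get("statusType") == "failure":
            yield record.get("action"), result.get("statusCode")

# Two sample records shaped like the audit-log format described above.
sample = [
    '{"action": "login-grafana", "result": {"statusType": "failure", "statusCode": 401}}',
    '{"action": "create", "result": {"statusType": "success", "statusCode": 200}}',
]
print(list(failed_requests(sample)))  # [('login-grafana', 401)]
```

The same generator can be fed lines read from the exporter's log file.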
Logs are rotated when the file size is exceeded and at the start of a new day.\n\n```ini\n[auditing.logs.file]\n# Path to logs folder\npath = data\/log\n# Maximum log files to keep\nmax_files = 5\n# Max size in megabytes per log file\nmax_file_size_mb = 256\n```\n\n### Loki exporter\n\nAudit logs are sent to a [Loki](\/oss\/loki\/) service through HTTP or gRPC.\n\n\nThe HTTP option for the Loki exporter is available only in Grafana Enterprise version 7.4 and later.\n\n\n```ini\n[auditing.logs.loki]\n# Set the communication protocol to use with Loki (can be grpc or http)\ntype = grpc\n# Set the address for writing logs to Loki\nurl = localhost:9095\n# Defaults to true. If true, it establishes a secure connection to Loki\ntls = true\n# Set the tenant ID for Loki communication, which is disabled by default.\n# The tenant ID is required to interact with Loki running in multi-tenant mode.\ntenant_id =\n```\n\nIf you have multiple Grafana instances sending logs to the same Loki service, or if you are using Loki for non-audit logs, audit logs come with additional labels to help identify them:\n\n- **host** - OS hostname on which the Grafana instance is running.\n- **grafana_instance** - Application URL.\n- **kind** - `auditing`\n\nWhen basic authentication is needed to ingest logs in your Loki instance, you can specify credentials in the URL field. For example:\n\n```ini\n# Set the communication protocol to use with Loki (can be grpc or http)\ntype = http\n# Set the address for writing logs to Loki\nurl = user:password@localhost:3000\n```\n\n### Console exporter\n\nAudit logs are sent to the Grafana default logger. The audit logs use the `auditing.console` logger and are logged at the `debug` level; learn how to enable debug logging in the [log configuration]() section of the documentation. 
Accessing the audit logs in this way is not recommended for production use.","site":"grafana setup","answers_cleaned":"    aliases            enterprise auditing  description  Auditing keywords      grafana     auditing     audit     logs labels    products        cloud       enterprise title  Audit a Grafana instance weight  800        Audit a Grafana instance  Auditing allows you to track important changes to your Grafana instance  By default  audit logs are logged to file but the auditing feature also supports sending logs directly to Loki    To enable sending Grafana Cloud audit logs to your Grafana Cloud Logs instance  please  file a support ticket   profile org tickets new   Note that standard ingest and retention rates apply for ingesting these audit logs    Only API requests or UI actions that trigger an API request generate an audit log    Available in  Grafana Enterprise    and  Grafana Cloud   docs grafana cloud        Audit logs  Audit logs are JSON objects representing user actions like     Modifications to resources such as dashboards and data sources    A user failing to log in       Format  Audit logs contain the following fields  The fields followed by        are always available  the others depend on the type of action logged     Field name                Type      Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    timestamp                string    The date and time the request was made  in coordinated universal time  UTC  using the  RFC3339  https   tools ietf org html rfc3339 section 5 6  format                               
                                        user                     object    Information about the user that made the request  Either one of the  UserID  or  ApiKeyID  fields will contain content if  isAnonymous false                                                                                  user userId              number    ID of the Grafana user that made the request                                                                                                                                                                                  user orgId               number    Current organization of the user that made the request                                                                                                                                                                        user orgRole             string    Current role of the user that made the request                                                                                                                                                                                user name                string    Name of the Grafana user that made the request                                                                                                                                                                                user authTokenId         number    ID of the user authentication token                                                                                                                                                                                           user apiKeyId            number    ID of the Grafana API key used to make the request                                                                                                                                                                            user isAnonymous         boolean   If an anonymous user made the request   true   Otherwise   false                                                              
                                                                                                action                   string    The request action  For example   create    update   or  manage permissions                                                                                                                                                   request                  object    Information about the HTTP request                                                                                                                                                                                            request params           object    Request s path parameters                                                                                                                                                                                                     request query            object    Request s query parameters                                                                                                                                                                                                    request body             string    Request s body  Filled with   non marshalable format   when it isn t a valid JSON                                                                                                                                             result                   object    Information about the HTTP response                                                                                                                                                                                           result statusType        string    If the request action was successful   success   Otherwise   failure                                                                                                                                                          result statusCode        number    HTTP status of the request                                            
                                                                                                                                                        result failureMessage    string    HTTP error message                                                                                                                                                                                                            result body              string    Response body  Filled with   non marshalable format   when it isn t a valid JSON                                                                                                                                              resources                array     Information about the resources that the request action affected  This field can be null for non resource actions such as  login  or  logout                                                                                  resources x  id          number    ID of the resource                                                                                                                                                                                                            resources x  type        string    The type of the resource that was logged   alert    alert notification    annotation    api key    auth token    dashboard    datasource    folder    org    panel    playlist    report    team    user   or  version        requestUri               string    Request URI                                                                                                                                                                                                                   ipAddress                string    IP address that the request was made from                                                                                                                                                                                     userAgent                string    Agent through 
which the request was made                                                                                                                                                                                      grafanaVersion           string    Current version of Grafana when this log is created                                                                                                                                                                           additionalData           object    Additional information that can be provided about the request                                                                                                                                                               The  additionalData  field can contain the following information    Field name   Action   Description                                            loginUsername     login    Login used in the Grafana authentication form       extUserInfo     login    User information provided by the external system that was used to log in       authTokenCount     login    Number of active authentication tokens for the user that logged in       terminationReason     logout    The reason why the user logged out  such as a manual logout or a token expiring       billing role     billing information    The billing role associated with the billing information being sent         Recorded actions  The audit logs include records about the following categories of actions  Each action is distinguished by the  action  and  resources      type  fields in the JSON record   For example  creating an API key produces an audit log like this      json  hl lines 4       action    create      resources                  id   1         type    api key                timestamp    2021 11 12T22 12 36 144795692Z      user          userId   1       orgId   1       orgRole    Admin        username    admin        isAnonymous   false       authTokenId   1         request          body       name     
example     role     Viewer     secondsToLive   null           result          statusType    success        statusCode   200       responseBody       id   1   name     example             resources                  id   1         type    api key                requestUri     api auth keys      ipAddress    127 0 0 1 54652      userAgent    Mozilla 5 0  X11  Linux x86 64  rv 94 0  Gecko 20100101 Firefox 94 0      grafanaVersion    8 3 0 pre         Some actions can only be distinguished by their  requestUri  fields  For those actions  the relevant pattern of the  requestUri  field is given   Note that almost all these recorded actions are actions that correspond to API requests or UI actions that trigger an API request  Therefore  the action    action    email    resources      type    report      corresponds to the action when the user requests a report s preview to be sent through email  and not the scheduled ones        Sessions    Action                             Distinguishing fields                                                                                                                                                                                                            Log in                                action    login AUTH MODULE                                                                 Log out                               action    logout                                                                            Force logout for user                 action    logout user                                                                       Remove user authentication token      action    revoke auth token    resources      type    auth token      type    user          Create API key                        action    create    resources      type    api key                                          Delete API key                        action    delete    resources      type    api key                                            Where  
AUTH MODULE  is the name of the authentication module   grafana    saml    ldap   etc         Includes manual log out  token expired revoked  and  SAML Single Logout           Service accounts    Action                         Distinguishing fields                                                                                                                                                                                                                              Create service account            action    create    resources      type    service account                                             Update service account            action    update    resources      type    service account                                             Delete service account            action    delete    resources      type    service account                                             Create service account token      action    create    resources      type    service account      type    service account token          Delete service account token      action    delete    resources      type    service account      type    service account token          Hide API keys                     action    hide api keys                                                                                Migrate API keys                  action    migrate api keys                                                                             Migrate API key                   action    migrate api keys     resources      type    api key                                              Access control    Action                                     Distinguishing fields                                                                                                                                                                                                                                                                                      Create role                                   action    create    
resources      type    role                                                                              Update role                                   action    update    resources      type    role                                                                              Delete role                                   action    delete    resources      type    role                                                                              Assign built in role                          action    assign builtin role    resources      type    role      type    builtin role                                       Remove built in role                          action    remove builtin role    resources      type    role      type    builtin role                                       Grant team role                               action    grant team role    resources      type    team                                                                     Set team roles                                action    set team roles    resources      type    team                                                                      Revoke team role                              action    revoke team role    resources      type    role      type    team                                                  Grant user role                               action    grant user role    resources      type    role      type    user                                                   Set user roles                                action    set user roles    resources      type    user                                                                      Revoke user role                              action    revoke user role    resources      type    role      type    user                                                  Set user permissions on folder                action    set user permissions on folder    resources      type    folder      type    user                                  Set team 
permissions on folder                action    set team permissions on folder    resources      type    folder      type    team                                  Set basic role permissions on folder          action    set basic role permissions on folder    resources      type    folder      type    builtin role                    Set user permissions on dashboard             action    set user permissions on dashboards    resources      type    dashboard      type    user                           Set team permissions on dashboard             action    set team permissions on dashboards    resources      type    dashboard      type    team                           Set basic role permissions on dashboard       action    set basic role permissions on dashboards    resources      type    dashboard      type    builtin role             Set user permissions on team                  action    set user permissions on teams    resources      type    teams      type    user                                    Set user permissions on service account       action    set user permissions on service accounts    resources      type    service account      type    user               Set user permissions on datasource            action    set user permissions on data sources    resources      type    datasource      type    user                        Set team permissions on datasource            action    set team permissions on data sources    resources      type    datasource      type    team                        Set basic role permissions on datasource      action    set basic role permissions on data sources    resources      type    datasource      type    builtin role              User management    Action                      Distinguishing fields                                                                                                                                                       Create user                    action    create    resources      type    
user                      Update user                    action    update    resources      type    user                      Delete user                    action    delete    resources      type    user                      Disable user                   action    disable    resources      type    user                     Enable user                    action    enable    resources      type    user                      Update password                action    update password    resources      type    user             Send password reset email      action    send reset email                                           Reset password                 action    reset password                                             Update permissions             action    update permissions    resources      type    user          Send signup email              action    signup email                                               Click signup link              action    signup                                                     Reload LDAP configuration      action    ldap reload                                                Get user in LDAP               action    ldap search                                                Sync user with LDAP            action    ldap sync    resources      type    user                       Team and organization management    Action                                 Distinguishing fields                                                                                                                                                                                    Add team                                  action    create    requestUri     api teams                                  Update team                               action    update    requestUri     api teams TEAM ID                          Delete team                               action    delete    requestUri     api teams TEAM ID                          Add external group for team       
        action    create    requestUri     api teams TEAM ID groups                   Remove external group for team            action    delete    requestUri     api teams TEAM ID groups GROUP ID          Add user to team                          action    create    resources      type    user      type    team             Update team member permissions            action    update    resources      type    user      type    team             Remove user from team                     action    delete    resources      type    user      type    team             Create organization                       action    create    resources      type    org                                Update organization                       action    update    resources      type    org                                Delete organization                       action    delete    resources      type    org                                Add user to organization                  action    create    resources      type    org      type    user              Change user role in organization          action    update    resources      type    user      type    org              Remove user from organization             action    delete    resources      type    user      type    org              Invite external user to organization      action    org invite    resources      type    org      type    user          Revoke invitation                         action    revoke org invite    resources      type    org                       Where  TEAM ID  is the ID of the affected team  and  GROUP ID   if present  is the ID of the external group        Folder and dashboard management    Action                          Distinguishing fields                                                                                                                                                                     Create folder                      action    create    resources      type    folder                    
     Update folder                      action    update    resources      type    folder                         Update folder permissions          action    manage permissions    resources      type    folder             Delete folder                      action    delete    resources      type    folder                         Create update dashboard            action    create update    resources      type    dashboard               Import dashboard                   action    create    resources      type    dashboard                      Update dashboard permissions       action    manage permissions    resources      type    dashboard          Restore old dashboard version      action    restore    resources      type    dashboard                     Delete dashboard                   action    delete    resources      type    dashboard                          Library elements management    Action                   Distinguishing fields                                                                                                                                                  Create library element      action    create    resources      type    library element          Update library element      action    update    resources      type    library element          Delete library element      action    delete    resources      type    library element              Data sources management    Action                                               Distinguishing fields                                                                                                                                                                                                                            Create datasource                                       action    create    resources      type    datasource                                      Update datasource                                       action    update    resources      type    datasource                           
           Delete datasource                                       action    delete    resources      type    datasource                                      Enable permissions for datasource                       action    enable permissions    resources      type    datasource                          Disable permissions for datasource                      action    disable permissions    resources      type    datasource                         Grant datasource permission to role  team  or user      action    create    resources      type    datasource      type    dspermission            Remove datasource permission                            action    delete    resources      type    datasource      type    dspermission            Enable caching for datasource                           action    enable cache    resources      type    datasource                                Disable caching for datasource                          action    disable cache    resources      type    datasource                               Update datasource caching configuration                 action    update    resources      type    datasource                                         resources  may also contain a third item with   type    set to   user   or   team          Data source query    Action             Distinguishing fields                                                                                                                                Query datasource      action    query    resources      type    datasource              Reporting    Action                      Distinguishing fields                                                                                                                                                                                 Create report                  action    create    resources      type    report      type    dashboard          Update report                  action    update    resources      type    report      
type    dashboard          Delete report                  action    delete    resources      type    report                                 Send report by email           action    email    resources      type    report                                  Update reporting settings      action    change settings                                                             Annotations  playlists and snapshots management    Action                              Distinguishing fields                                                                                                                                                                                                 Create annotation                      action    create    resources      type    annotation                                 Create Graphite annotation             action    create graphite    resources      type    annotation                        Update annotation                      action    update    resources      type    annotation                                 Patch annotation                       action    patch    resources      type    annotation                                  Delete annotation                      action    delete    resources      type    annotation                                 Delete all annotations from panel      action    mass delete    resources      type    dashboard      type    panel          Create playlist                        action    create    resources      type    playlist                                   Update playlist                        action    update    resources      type    playlist                                   Delete playlist                        action    delete    resources      type    playlist                                   Create a snapshot                      action    create    resources      type    dashboard      type    snapshot            Delete a snapshot                      action    delete    resources      type 
   snapshot                                   Delete a snapshot by delete key        action    delete    resources      type    snapshot                                       Provisioning    Action                              Distinguishing fields                                                                                                             Reload provisioned dashboards          action    provisioning dashboards           Reload provisioned datasources         action    provisioning datasources          Reload provisioned plugins             action    provisioning plugins              Reload provisioned alerts              action    provisioning alerts               Reload provisioned access control      action    provisioning accesscontrol            Plugins management    Action             Distinguishing fields                                                          Install plugin        action    install          Uninstall plugin      action    uninstall            Miscellaneous    Action                     Distinguishing fields                                                                                                                                        Set licensing token           action    create    requestUri     api licensing token        Save billing information      action    billing information                                     Cloud migration management      Action                             Distinguishing fields                                                                                                                                              Connect to a cloud instance           action    connect instance                                   Disconnect from a cloud instance      action    disconnect instance                                Build a snapshot                      action    build    resources      type    snapshot           Upload a snapshot                     action    upload    resources      type    
snapshot\n\n### Generic actions\n\nIn addition to the actions listed above, any HTTP request (`POST`, `PATCH`, `PUT`, and `DELETE`) against the API is recorded with one of the following generic actions. Furthermore, you can also record `GET` requests. See below how to configure it.\n\n| Action | Distinguishing fields |\n| -------------- | -------------------------- |\n| POST request | `action`: `post-action` |\n| PATCH request | `action`: `partial-update` |\n| PUT request | `action`: `update` |\n| DELETE request | `action`: `delete` |\n| GET request | `action`: `retrieve` |\n\n## Configuration\n\nThe auditing feature is disabled by default.\n\nAudit logs can be saved into files, sent to a Loki instance, or sent to the Grafana default logger. By default, only the file exporter is enabled. You can choose which exporters to use in the configuration file. Options are `file`, `loki`, and `logger`. Use spaces to separate multiple modes, such as `file loki`.\n\nBy default, when a user creates or updates a dashboard, its content does not appear in the logs, as it can significantly increase the size of your logs. If this is important information for you and you can handle the amount of data generated, you can enable this option in the configuration.\n\n```ini\n[auditing]\n# Enable the auditing feature\nenabled = false\n# List of enabled loggers\nloggers = file\n# Keep dashboard content in the logs (request or response fields); this can significantly increase the size of your logs\nlog_dashboard_content = false\n# Keep requests and responses body; this can significantly increase the size of your logs\nverbose = false\n# Write an audit log for every status code.\n# By default it only logs the following ones: 2XX, 3XX, 401, 403 and 500\nlog_all_status_codes = false\n# Maximum response body (in bytes) to be audited. 500KiB by default.\n# May help reducing the memory footprint caused by auditing\nmax_response_size_bytes = 512000\n```\n\nEach exporter has its own configuration fields.\n\n### File exporter\n\nAudit logs are saved into files. You can configure the folder used to save these files. Logs are rotated when the file size is exceeded and at the start of a new day.\n\n```ini\n[auditing.logs.file]\n# Path to logs folder\npath = data\/log\n# Maximum log files to keep\nmax_files = 5\n# Max size in megabytes per log file\nmax_file_size_mb = 256\n```\n\n### Loki exporter\n\nAudit logs are sent to a [Loki](\/oss\/loki\/) service, through HTTP or gRPC.\n\nThe HTTP option for the Loki exporter is available only in Grafana Enterprise version 7.4 and later.\n\n```ini\n[auditing.logs.loki]\n# Set the communication protocol to use with Loki (can be grpc or http)\ntype = grpc\n# Set the address for writing logs to Loki\nurl = localhost:9095\n# Defaults to true. If true, it establishes a secure connection to Loki\ntls = true\n# Set the tenant ID for Loki communication, which is disabled by default.\n# The tenant ID is required to interact with Loki running in multi-tenant mode.\ntenant_id =\n```\n\nIf you have multiple Grafana instances sending logs to the same Loki service, or if you are using Loki for non-audit logs, audit logs come with additional labels to help identify them:\n\n- `host` - OS hostname on which the Grafana instance is running\n- `grafana_instance` - Application URL\n- `kind` - `auditing`\n\nWhen basic authentication is needed to ingest logs in your Loki instance, you can specify credentials in the URL field. For example:\n\n```ini\n# Set the communication protocol to use with Loki (can be grpc or http)\ntype = http\n# Set the address for writing logs to Loki\nurl = user:password@localhost:3000\n```\n\n### Console exporter\n\nAudit logs are sent to the Grafana default logger. The audit logs use the `auditing.console` logger and are logged at `debug` level; see the [log configuration]() section of the documentation to learn how to enable debug logging. Accessing the audit logs in this way is not recommended for production use."}
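The `log_all_status_codes` behavior described above boils down to a simple predicate. A minimal Python sketch of that default filter (the function name and structure are illustrative, not Grafana's actual implementation):

```python
def should_audit(status_code: int, log_all_status_codes: bool = False) -> bool:
    """Mirror the documented default: when log_all_status_codes = false,
    only 2XX, 3XX, 401, 403 and 500 responses produce an audit record."""
    if log_all_status_codes:
        return True
    return 200 <= status_code < 400 or status_code in (401, 403, 500)


# With the default setting, a 404 is not audited, but a 403 is:
print(should_audit(404))  # False
print(should_audit(403))  # True
```

Enabling `log_all_status_codes = true` corresponds to passing `log_all_status_codes=True`, which audits every response regardless of its status code.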
{"questions":"grafana setup export enterprise usage insights export logs labels aliases usage insights keywords Export logs of usage insights grafana enterprise","answers":"---\naliases:\n  - ..\/..\/enterprise\/usage-insights\/export-logs\/\ndescription: Export logs of usage insights\nkeywords:\n  - grafana\n  - export\n  - usage-insights\n  - enterprise\nlabels:\n  products:\n    - cloud\n    - enterprise\ntitle: Export logs of usage insights\nweight: 900\n---\n\n# Export logs of usage insights\n\n\nAvailable in [Grafana Enterprise]() and [Grafana Cloud Pro and Advanced](\/docs\/grafana-cloud\/).\n\n\nBy exporting usage logs to Loki, you can directly query them and create dashboards of the information that matters to you most, such as dashboard errors, most active organizations, or your top-10 most-used queries. This configuration is done for you in Grafana Cloud, with provisioned dashboards. Read about them in the [Grafana Cloud documentation](\/docs\/grafana-cloud\/usage-insights\/).\n\n## Usage insights logs\n\nUsage insights logs are JSON objects that represent certain user activities, such as:\n\n- A user opens a dashboard.\n- A query is sent to a data source.\n\n### Scope\n\nA log is created every time:\n\n- A user opens a dashboard.\n- A query is sent to a data source in the dashboard view.\n- A query is performed via Explore.\n\n### Format\n\nLogs of usage insights contain the following fields, where the fields followed by \\* are always available, and the others depend on the logged event:\n| Field name | Type | Description |\n| ---------- | ---- | ----------- |\n| `eventName`\\* | string | Type of the event, which can be either `data-request` or `dashboard-view`. |\n| `folderName`\\* | string | Name of the dashboard folder. |\n| `dashboardName`\\* | string | Name of the dashboard where the event happened. |\n| `dashboardId`\\* | number | ID of the dashboard where the event happened. 
|\n| `datasourceName`| string | Name of the data source that was queried. |\n| `datasourceType` | string | Type of the data source that was queried. For example, `prometheus`, `elasticsearch`, or `loki`. |\n| `datasourceId` | number | ID of the data source that was queried. |\n| `panelId` | number | ID of the panel of the query. |\n| `panelName` | string | Name of the panel of the query. |\n| `error` | string | Error returned by the query. |\n| `duration` | number | Duration of the query. |\n| `source` | string | Source of the query. For example, `dashboard` or `explore`. |\n| `orgId`\\* | number | ID of the user\u2019s organization. |\n| `orgName`\\* | string | Name of the user\u2019s organization. |\n| `timestamp`\\* | string | The date and time that the request was made, in Coordinated Universal Time (UTC) in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339#section-5.6) format. |\n| `tokenId`\\* | number | ID of the user\u2019s authentication token. |\n| `username`\\* | string | Name of the Grafana user that made the request. |\n| `userId`\\* | number | ID of the Grafana user that made the request. |\n| `totalQueries`\\* | number | Number of queries executed for the data request. |\n| `cachedQueries`\\* | number | Number of fetched queries that came from the cache. |\n\n## Configuration\n\nTo export your logs, enable the usage insights feature and [configure]() an export location in the configuration file:\n\n```ini\n[usage_insights.export]\n# Enable the usage insights export feature\nenabled = true\n# Storage type\nstorage = loki\n```\n\nThe options for storage type are `loki` and `logger` (added in Grafana Enterprise 8.2).\n\nIf the storage type is set to `loki` you'll need to also configure Grafana\nto export to a Loki ingestion server. 
To do this, you'll need Loki installed.\nRefer to [Install Loki](\/docs\/loki\/latest\/installation\/) for instructions\non how to install Loki.\n\n```ini\n[usage_insights.export.storage.loki]\n# Set the communication protocol to use with Loki (can be grpc or http)\ntype = grpc\n# Set the address for writing logs to Loki (format must be host:port)\nurl = localhost:9095\n# Defaults to true. If true, it establishes a secure connection to Loki\ntls = true\n# Set the tenant ID for Loki communication, which is disabled by default.\n# The tenant ID is required to interact with Loki running in multi-tenant mode.\ntenant_id =\n```\n\nUsing `logger` will print usage insights to your [Grafana server log]().\nThere is no option for configuring the `logger` storage type.\n\n## Visualize Loki usage insights in Grafana\n\nIf you export logs into Loki, you can build Grafana dashboards to understand your Grafana instance usage.\n\n1. Add Loki as a data source. Refer to [Grafana fundamentals tutorial](\/tutorials\/grafana-fundamentals\/#6).\n1. Import one of the following dashboards:\n   - [Usage insights](\/grafana\/dashboards\/13785)\n   - [Usage insights datasource details](\/grafana\/dashboards\/13786)\n1. 
Play with usage insights to understand them:\n   - In Explore, you can use the query `{datasource=\"gdev-loki\",kind=\"usage_insights\"}` to retrieve all logs related to your `gdev-loki` data source.\n   - In a dashboard, you can build a table panel with the query `topk(10, sum by (error) (count_over_time({kind=\"usage_insights\", datasource=\"gdev-prometheus\"} | json | error != \"\" [$__interval])))` to display the 10 most common errors your users see using the `gdev-prometheus` data source.\n   - In a dashboard, you can build a graph panel with the queries `sum by(host) (count_over_time({kind=\"usage_insights\"} | json | eventName=\"data-request\" | error != \"\" [$__interval]))` and `sum by(host) (count_over_time({kind=\"usage_insights\"} | json | eventName=\"data-request\" | error = \"\" [$__interval]))` to show the evolution of the data request count over time. Using `by (host)` allows you to have more information for each Grafana server you have if you have set up Grafana for [high availability](<>).","site":"grafana setup"}
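The filter in the `topk` LogQL query above (data-request events with a non-empty `error` field) can also be reproduced offline against exported log lines. A minimal Python sketch, using invented sample records in the documented usage-insights format:

```python
import json
from collections import Counter

# Invented sample usage-insights log lines for illustration.
lines = [
    '{"eventName": "data-request", "datasourceName": "gdev-prometheus", "error": "timeout"}',
    '{"eventName": "data-request", "datasourceName": "gdev-prometheus", "error": "timeout"}',
    '{"eventName": "data-request", "datasourceName": "gdev-prometheus", "error": "bad query"}',
    '{"eventName": "dashboard-view", "dashboardName": "Ops"}',
    '{"eventName": "data-request", "datasourceName": "gdev-prometheus", "error": ""}',
]


def top_errors(raw_lines, k=10):
    """Count errors across data-request events, analogous to
    topk(k, sum by (error) (... | json | error != "")) in LogQL."""
    counts = Counter()
    for raw in raw_lines:
        rec = json.loads(raw)
        # Skip dashboard views and successful requests (empty error).
        if rec.get("eventName") == "data-request" and rec.get("error"):
            counts[rec["error"]] += 1
    return counts.most_common(k)


print(top_errors(lines))  # [('timeout', 2), ('bad query', 1)]
```

In practice you would run the LogQL query against Loki directly; a local script like this is mainly useful for spot-checking what the exporter emits.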
{"questions":"grafana setup stop certain vulnerabilities from being exploited by a malicious attacker labels aliases products docs grafana latest setup grafana configure security configure security hardening title Configure security hardening Security hardening enables you to apply additional security which might oss enterprise","answers":"---\naliases:\n  - \/docs\/grafana\/latest\/setup-grafana\/configure-security\/configure-security-hardening\/\ndescription: Security hardening enables you to apply additional security which might\n  stop certain vulnerabilities from being exploited by a malicious attacker.\nlabels:\n  products:\n    - enterprise\n    - oss\ntitle: Configure security hardening\n---\n\n# Configure security hardening\n\nSecurity hardening enables you to apply additional security, which can help stop certain vulnerabilities from being exploited by a malicious attacker.\n\n\nThese settings are available in the [grafana.ini configuration file](). To apply changes to the configuration file, restart the Grafana server.\n\n\n## Additional security for cookies\n\nIf Grafana uses HTTPS, you can further secure the cookie that the system uses to authenticate access to the web UI. By applying additional security to the cookie, you might mitigate certain attacks that result from an attacker obtaining the cookie value.\n\n\nGrafana must use HTTPS for the following configurations to work properly.\n\n\n### Add a secure attribute to cookies\n\nTo provide mitigation against some MITM attacks, add the `Secure` attribute to the cookie that is used to authenticate users. This attribute forces users only to send the cookie over a valid HTTPS secure connection.\n\nExample:\n\n```toml\n# Set to true if you host Grafana behind HTTPS. The default value is false.\ncookie_secure = true\n```\n\n### Add a SameSite attribute to cookies\n\nTo mitigate almost all CSRF-attacks, set the _cookie_samesite_ option to `strict`. 
This setting allows the cookie to be sent only in requests made from the site that created it, and prevents clients from sending it in cross-site requests.\n\nExample:\n\n```toml\n# set cookie SameSite attribute. defaults to `lax`. can be set to \"lax\", \"strict\", \"none\" and \"disabled\"\ncookie_samesite = strict\n```\n\n\nWith the SameSite attribute set to \"strict,\" only clicks made within the Grafana instance work; the default option, \"lax,\" does not produce this behavior.\n\n\n### Add a prefix to cookie names\n\nYou can further secure the cookie authentication by adding a [Cookie Prefix](https:\/\/googlechrome.github.io\/samples\/cookie-prefixes\/). Cookies without a special prefix can be overwritten in a man-in-the-middle attack, even if the site uses HTTPS. A cookie prefix forces clients to accept the cookie only if certain criteria are met.\nAdd a prefix to the current cookie name with either `__Secure-` or `__Host-`, where the latter provides additional protection by only allowing the cookie to be created from the host that sent the Set-Cookie header.\n\nExample:\n\n```toml\n# Login cookie name\nlogin_cookie_name = __Host-grafana_session\n```\n\n## Security headers\n\nGrafana includes a few additional headers that you can configure to help mitigate certain attacks, such as XSS.\n\n### Add a Content Security Policy\n\nA content security policy (CSP) is an HTTP response header that controls how the web browser handles content, such as allowing inline scripts to execute or loading images from certain domains. The default CSP template is already configured to provide sufficient protection against some attacks. 
This makes it more difficult for attackers to execute arbitrary JavaScript if such a vulnerability is present.\n\nExample:\n\n```toml\n# Enable adding the Content-Security-Policy header to your requests.\n# CSP enables you to control the resources the user agent can load and helps prevent XSS attacks.\ncontent_security_policy = true\n\n# Set the Content Security Policy template that is used when the Content-Security-Policy header is added to your requests.\n# $NONCE in the template includes a random nonce.\n# $ROOT_PATH is server.root_url without the protocol.\ncontent_security_policy_template = \"\"\"script-src 'self' 'unsafe-eval' 'unsafe-inline' 'strict-dynamic' $NONCE;object-src 'none';font-src 'self';style-src 'self' 'unsafe-inline' blob:;img-src * data:;base-uri 'self';connect-src 'self' grafana.com ws:\/\/$ROOT_PATH wss:\/\/$ROOT_PATH;manifest-src 'self';media-src 'none';form-action 'self';\"\"\"\n```\n\n### Enable trusted types\n\n**Currently in development. [Trusted types](https:\/\/github.com\/w3c\/trusted-types\/blob\/main\/explainer.md) is an experimental Javascript API with [limited browser support](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Headers\/Content-Security-Policy\/trusted-types#browser_compatibility).**\n\nTrusted types reduce the risk of DOM XSS by enforcing developers to sanitize strings that are used in injection sinks, such as setting `innerHTML` on an element. Furthermore, when enabling trusted types, these injection sinks need to go through a policy that will sanitize, or leave the string intact and return it as \"safe\". 
This provides some protection from client side injection vulnerabilities in third party libraries, such as jQuery, Angular and even third party plugins.\n\nTo enable trusted types in enforce mode, where injection sinks are automatically sanitized:\n\n- Enable `content_security_policy` in the configuration.\n- Add `require-trusted-types-for 'script'` to the `content_security_policy_template` in the configuration.\n\nTo enable trusted types in report mode, where inputs that have not been sanitized with trusted types will be logged to the console:\n\n- Enable `content_security_policy_report_only` in the configuration.\n- Add `require-trusted-types-for 'script'` to the `content_security_policy_report_only_template` in the configuration.\n\nAs this is a feature currently in development, things may break. If they do, or if you have any other feedback, feel free to [open an issue](https:\/\/github.com\/grafana\/grafana\/issues\/new\/choose).\n\n## Additional security hardening\n\nThe Grafana server has several built-in security features that you can opt-in to enhance security. This section describes additional techniques you can use to harden security.\n\n### Hide the version number\n\nIf set to `true`, the Grafana server hides the running version number for unauthenticated users. Version numbers might reveal if you are running an outdated and vulnerable version of Grafana.\n\nExample:\n\n```toml\n# mask the Grafana version number for unauthenticated users\nhide_version = true\n```\n\n### Enforce domain verification\n\nIf set to `true`, the Grafana server redirects requests that have a Host-header value that is mismatched to the actual domain. 
This might help to mitigate some DNS rebinding attacks.\n\nExample:\n\n```toml\n# Redirect to correct domain if host header does not match domain\n# Prevents DNS rebinding attacks\nenforce_domain = true\n```","site":"grafana setup"}
{"questions":"grafana setup weight 100 menuTitle Plan your IAM integration strategy Learn how to plan your identity and access management strategy before setting up Grafana title Plan your IAM integration strategy keywords IAM Auth Grafana IdP","answers":"---\ntitle: Plan your IAM integration strategy\nmenuTitle: Plan your IAM integration strategy\ndescription: Learn how to plan your identity and access management strategy before setting up Grafana.\nweight: 100\nkeywords:\n  - IdP\n  - IAM\n  - Auth\n  - Grafana\n---\n\n# Plan your IAM integration strategy\n\nThis section describes the decisions you should make when using an Identity and Access Management (IAM) provider to manage access to Grafana. IAM ensures that users have secure access to sensitive data and [other resources](), simplifying user management and authentication.\n\n## Benefits of integrating with an IAM provider\n\nIntegrating with an IAM provider provides the following benefits:\n\n- **User management**: By providing Grafana access to your current user management system, you eliminate the overhead of replicating user information and instead have centralized user management for users' roles and permissions to Grafana resources.\n\n- **Security**: Many IAM solutions provide advanced security features such as multi-factor authentication, RBAC, and audit trails, which can help to improve the security of your Grafana installation.\n\n- **SSO**: Properly setting up Grafana with your current IAM solution enables users to access Grafana with the same credentials they use for other applications.\n\n- **Scalability**: User additions and updates in your user database are immediately reflected in Grafana.\n\nIn order to plan an integration with Grafana, assess your organization's current needs, requirements, and any existing IAM solutions being used. 
This includes thinking about how roles and permissions will be mapped to users in Grafana and how users can be grouped to access shared resources.\n\n## Internal vs external users\n\nAs a first step, determine how you want to manage users who will access Grafana.\n\nDo you already use an identity provider to manage users? If so, Grafana might be able to integrate with your identity provider through one of our IdP integrations.\nRefer to [Configure authentication documentation]() for the list of supported providers.\n\nIf you are not interested in setting up an external identity provider, but still want to limit access to your Grafana instance, consider using Grafana's basic authentication.\n\nFinally, if you want your Grafana instance to be accessible to everyone, you can enable anonymous access to Grafana.\nFor information, refer to the [anonymous authentication documentation]().\n\n## Ways to organize users\n\nOrganize users in subgroups that are sensible to the organization. For example:\n\n- **Security**: Different groups of users or customers should only have access to their intended resources.\n- **Simplicity**: Reduce the scope of dashboards and resources available.\n- **Cost attribution**: Track and bill costs to individual customers, departments, or divisions.\n- **Customization**: Each group of users could have a personalized experience like different dashboards or theme colors.\n\n### Users in Grafana teams\n\nYou can organize users into [teams]() and assign them roles and permissions reflecting the current organization. For example, instead of assigning five users access to the same dashboard, you can create a team of those users and assign dashboard permissions to the team.\n\nA user can belong to multiple teams and be a member or an administrator for a given team. Team members inherit permissions from the team but cannot edit the team itself. 
Team administrators can add members to a team and update its settings, such as the team name, team members, roles assigned, and UI preferences.\n\nTeams are a perfect solution for working with a subset of users. Teams can share resources with other teams.\n\n### Users in Grafana organizations\n\n[Grafana organizations]() allow complete isolation of resources, such as dashboards and data sources. Users can be members of one or several organizations, and they can only access resources from an organization they belong to.\n\nHaving multiple organizations in a single instance of Grafana lets you manage your users in one place while completely separating resources.\n\nOrganizations provide a higher measure of isolation within Grafana than teams do and can be helpful in certain scenarios. However, because organizations lack the scalability and flexibility of teams and [folders](), we do not recommend using them as the default way to group users and resources.\n\nNote that Grafana Cloud does not support having more than one organization per instance.\n\n### Choosing between teams and organizations\n\n[Grafana teams]() and Grafana organizations serve similar purposes in the Grafana platform. Both are designed to help group users and manage and control access to resources.\n\nTeams provide more flexibility, as resources can be accessible by multiple teams, and team creation and management are simple.\n\nIn contrast, organizations provide more isolation than teams, as resources cannot be shared between organizations.\nThey are more difficult to manage than teams, as you must create and update resources for each organization individually.\nOrganizations cater to bigger companies or users with intricate access needs, necessitating complete resource segregation.\n\n## Access to external systems\n\nConsider the need for machine-to-machine ([M2M](https:\/\/en.wikipedia.org\/wiki\/Machine_to_machine)) communications. 
If a system needs to interact with Grafana, ensure it has proper access.\n\nConsider the following scenarios:\n\n**Schedule reports**: Generate reports periodically from Grafana through the reporting API and have them delivered to different communication channels like email or instant messaging, or store them in shared storage.\n\n**Define alerts**: Define alert rules to be triggered when a specific condition is met. Route alert notifications to different teams according to your organization's needs.\n\n**Provisioning file**: Provisioning files can be used to automate the creation of dashboards, data sources, and other resources.\n\nThese are just a few examples of how Grafana can be used in M2M scenarios. The platform is highly flexible and can be used in various M2M applications, making it a powerful tool for organizations seeking insights into their systems and devices.\n\n### Service accounts\n\nYou can use a service account to run automated workloads in Grafana, such as dashboard provisioning, configuration, or report generation. Create service accounts and service account tokens to authenticate applications, such as Terraform, with the Grafana API.\n\n\nService accounts will eventually replace [API keys](\/docs\/grafana\/<GRAFANA_VERSION>\/administration\/service-accounts\/migrate-api-keys\/) as the primary way to authenticate applications that interact with Grafana.\n\n\nA common use case for creating a service account is to perform operations on automated or triggered tasks. 
You can use service accounts to:\n\n- Schedule reports for specific dashboards to be delivered on a daily\/weekly\/monthly basis\n- Define alerts in your system to be used in Grafana\n- Set up an external SAML authentication provider\n- Interact with Grafana without signing in as a user\n\nIn [Grafana Enterprise](), you can also use service accounts in combination with [role-based access control]() to grant very specific permissions to applications that interact with Grafana.\n\n\nService accounts can only act in the organization they are created for. If you need the same task performed in multiple organizations, we recommend creating a service account in each organization.\n\n\nThe following video shows how to migrate from API keys to service accounts.\n\n<br>\n\n#### Service account tokens\n\nTo authenticate with Grafana's HTTP API, a randomly generated string known as a service account token can be used as an alternative to a password.\n\nWhen a service account is created, it can be linked to multiple access tokens. These service account tokens can be used in the same manner as API keys, providing programmatic access to the Grafana HTTP API.\n\nYou can create multiple tokens for the same service account. You might want to do this if:\n\n- Multiple applications use the same permissions, but you want to audit or manage their actions separately.\n- You need to rotate or replace a compromised token.\n\n\nIn Grafana's audit logs, these tokens will still show up as the same service account.\n\n\nService account access tokens inherit permissions from the service account.\n\n### API keys\n\n\nGrafana recommends using service accounts instead of API keys. API keys will be deprecated in the near future. 
For more information, refer to [Grafana service accounts]().\n\n\nYou can use Grafana API keys to interact with data sources via HTTP APIs.\n\n## How to work with roles?\n\nGrafana roles control the access of users and service accounts to specific resources and determine their authorized actions.\n\nYou can assign roles through the user interface or APIs, establish them through Terraform, or synchronize them automatically via an external IAM provider.\n\n### What are roles?\n\nWithin an organization, Grafana has established three primary [organization roles]() - organization administrator, editor, and viewer - which dictate the user's level of access and permissions, including the ability to edit data sources or create teams. Grafana also has an empty role that you can start with and to which you can gradually add custom permissions.\nTo be a member of any organization, every user must be assigned a role.\n\nIn addition, Grafana provides a server administrator role that grants access to and enables interaction with resources that affect the entire instance, including organizations, users, and server-wide settings.\nThis particular role can only be accessed by users of self-hosted Grafana instances. It is a significant role intended for the administrators of the Grafana instance.\n\n### What are permissions?\n\nEach role consists of a set of [permissions]() that determine the tasks a user can perform in the system.\nFor example, the **Admin** role includes permissions that let an administrator create and delete users.\n\nGrafana allows for precise permission settings on both dashboards and folders, giving you the ability to control which users and teams can view, edit, and administer them.\nFor example, you might want a certain viewer to be able to edit a dashboard. 
While that user can see all dashboards, you can grant them access to update only one of them.\n\nIn [Grafana Enterprise](), you can also grant granular permissions for data sources to control who can query and edit them.\n\nDashboard, folder, and data source permissions can be set through the UI or APIs or provisioned through Terraform.\n\n### Role-based access control\n\n\nAvailable in [Grafana Enterprise]() and [Grafana Cloud](\/docs\/grafana-cloud\/).\n\n\nIf you think that the basic organization and server administrator roles are too limiting, it might be beneficial to employ [role-based access control (RBAC)]().\nRBAC is a flexible approach to managing user access to Grafana resources, including users, data sources, and reports. It enables easy granting, changing, and revoking of read and write access for users.\n\nRBAC comes with pre-defined roles, such as data source writer, which allows updating, reading, or querying all data sources.\nYou can assign these roles to users, teams, and service accounts.\n\nIn addition, RBAC empowers you to generate personalized roles and modify permissions authorized by the standard Grafana roles.\n\n## User synchronization between Grafana and identity providers\n\nWhen connecting Grafana to an identity provider, it's important to think beyond just the initial authentication setup. You should also think about the maintenance of user bases and roles. Using Grafana's team and role synchronization features ensures that updates you make to a user in your identity provider will be reflected in their role assignment and team memberships in Grafana.\n\n### Team sync\n\nTeam sync is a feature that allows you to synchronize teams or groups from your authentication provider with teams in Grafana. This means that users of specific teams or groups in LDAP, OAuth, or SAML will be automatically added or removed as members of corresponding teams in Grafana. 
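\n\nAs an illustration, an external group can also be linked to a team programmatically. The sketch below assumes the Grafana Enterprise team sync endpoint `\/api\/teams\/:teamId\/groups`; the team ID, group DN, hostname, and token are placeholders:\n\n```bash\n# Hypothetical token; create a real one under a service account in Grafana\nTOKEN='glsa_examplePlaceholder'\n\n# Link the LDAP group to team 1 so its members are synced on login\n# (placeholder host; `|| true` keeps dry runs from failing on it)\ncurl -s -X POST -H \"Authorization: Bearer ${TOKEN}\" \\\n  -H 'Content-Type: application\/json' \\\n  -d '{\"groupId\": \"cn=editors,ou=groups,dc=example,dc=com\"}' \\\n  'https:\/\/grafana.example.com\/api\/teams\/1\/groups' || true\n```\n\n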
Whenever a user logs in, Grafana will check for any changes in the teams or groups of the authentication provider and update the user's teams in Grafana accordingly. This makes it easy to manage user permissions across multiple systems.\n\n\nAvailable in [Grafana Enterprise]() and [Grafana Cloud Advanced](\/docs\/grafana-cloud\/).\n\n\n\nTeam synchronization occurs only when a user logs in. However, if you are using LDAP, it is possible to enable active background synchronization. This allows for the continuous synchronization of teams.\n\n\n### Role sync\n\nGrafana can synchronize basic roles from your authentication provider by mapping attributes from the identity provider to the user role in Grafana. This means that users with specific attributes, like role, team, or group membership in LDAP, OAuth, or SAML, will be automatically assigned the corresponding role in Grafana. Whenever a user logs in, Grafana will check for any changes in the user information retrieved from the authentication provider and update the user's role in Grafana accordingly.\n\n### Organization sync\n\nOrganization sync is the process of binding users to Grafana organizations based on their identity provider groups. This delegates user management to the identity provider. This way, there's no need to manage user access from Grafana because the identity provider will be queried whenever a new user tries to log in.\n\nWith organization sync, users from identity provider groups can be assigned to corresponding Grafana organizations. This functionality is similar to role sync but with the added benefit of specifying the organization that a user belongs to for a particular identity provider group. 
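\n\nAs a sketch, a SAML integration might express such a mapping in the Grafana configuration file; the group name, organization ID, and role below are placeholders, and the exact option name should be checked against the SAML configuration reference:\n\n```ini\n[auth.saml]\n# Hypothetical mapping: members of the IdP group Engineering are assigned\n# the Editor role in Grafana organization 2\norg_mapping = Engineering:2:Editor\n```\n\n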
Please note that this feature is only available for self-hosted Grafana instances, as Cloud Grafana instances have a single organization limit.\n\n\nOrganization sync is currently only supported for SAML and LDAP.\n\n\n\nYou don't need to invite users through Grafana when syncing with Organization sync.\n\n\n\nCurrently, only basic roles can be mapped via Organization sync.\n","site":"grafana setup"}
{"questions":"grafana setup of key management system providers administration database encryption enterprise enterprise encryption aliases products If you have a Grafana Enterprise license you can integrate with a variety labels oss enterprise","answers":"---\naliases:\n  - ..\/..\/administration\/database-encryption\/\n  - ..\/..\/enterprise\/enterprise-encryption\/\ndescription: If you have a Grafana Enterprise license, you can integrate with a variety\n  of key management system providers.\nlabels:\n  products:\n    - enterprise\n    - oss\ntitle: Configure database encryption\nweight: 700\n---\n\n# Configure database encryption\n\nGrafana\u2019s database contains secrets, which are used to query data sources, send alert notifications, and perform other functions within Grafana.\n\nGrafana encrypts these secrets before they are written to the database, by using a symmetric-key encryption algorithm called Advanced Encryption Standard (AES). These secrets are signed using a [secret key]() that you can change when you configure a new Grafana instance.\n\n\nGrafana v9.0 and newer use [envelope encryption](#envelope-encryption) by default, which adds a layer of indirection to the encryption process that introduces an [**implicit breaking change**](#implicit-breaking-change) for older versions of Grafana.\n\n\nFor further details about how to operate a Grafana instance with envelope encryption, see the [Operational work]() section.\n\n\nIn Grafana Enterprise, you can also [encrypt secrets in AES-GCM (Galois\/Counter Mode)]() instead of the default AES-CFB (Cipher FeedBack mode).\n\n\n## Envelope encryption\n\n\nSince Grafana v9.0, you can turn envelope encryption off by adding the feature toggle `disableEnvelopeEncryption` to your [Grafana configuration]().\n\n\nInstead of encrypting all secrets with a single key, Grafana uses a set of keys called data encryption keys (DEKs) to encrypt them. 
These data encryption keys are themselves encrypted with a single key encryption key (KEK), configured through the `secret_key` attribute in your\n[Grafana configuration]() or by [Encrypting your database with a key from a key management service (KMS)](#encrypting-your-database-with-a-key-from-a-key-management-service-kms).\n\n### Implicit breaking change\n\nEnvelope encryption introduces an implicit breaking change to versions of Grafana prior to v9.0, because it changes how secrets stored in the Grafana database are encrypted. Grafana administrators can upgrade to Grafana v9.0 with no action required from the database encryption perspective, but must be extremely careful if they need to roll an upgrade back to Grafana v8.5 or earlier because secrets created or modified after upgrading to Grafana v9.0 can\u2019t be decrypted by previous versions.\n\nGrafana v8.5 implemented envelope encryption behind an optional feature toggle. Grafana administrators who need to downgrade to Grafana v8.5 can enable envelope encryption as a workaround by adding the feature toggle `envelopeEncryption` to the [Grafana configuration]().\n\n## Operational work\n\nFrom the database encryption perspective, Grafana administrators can:\n\n- [**Re-encrypt secrets**](#re-encrypt-secrets): re-encrypt secrets with envelope encryption and a fresh data key.\n- [**Roll back secrets**](#roll-back-secrets): decrypt secrets encrypted with envelope encryption and re-encrypt them with legacy encryption.\n- [**Re-encrypt data keys**](#re-encrypt-data-keys): re-encrypt data keys with a fresh key encryption key and a KMS integration.\n- [**Rotate data keys**](#rotate-data-keys): disable active data keys and stop using them for encryption in favor of a fresh one.\n\n### Re-encrypt secrets\n\nYou can re-encrypt secrets in order to:\n\n- Move already existing secrets' encryption forward from legacy to envelope encryption.\n- Re-encrypt secrets after a [data keys rotation](#rotate-data-keys).\n\nTo 
re-encrypt secrets, use the [Grafana CLI]() by running the `grafana cli admin secrets-migration re-encrypt` command or the `\/encryption\/reencrypt-secrets` endpoint of the Grafana [Admin API](). It's safe to run more than once; running it under maintenance mode is recommended.\n\n### Roll back secrets\n\nYou can roll back secrets encrypted with envelope encryption to legacy encryption. This might be necessary to downgrade to Grafana versions prior to v9.0 after an unsuccessful upgrade.\n\nTo roll back secrets, use the [Grafana CLI]() by running the `grafana cli admin secrets-migration rollback` command or the `\/encryption\/rollback-secrets` endpoint of the Grafana [Admin API](). It's safe to run more than once; running it under maintenance mode is recommended.\n\n### Re-encrypt data keys\n\nYou can re-encrypt data keys encrypted with a specific key encryption key (KEK). This allows you to either re-encrypt existing data keys with a new KEK version or to re-encrypt them with a completely different KEK.\n\nTo re-encrypt data keys, use the [Grafana CLI]() by running the `grafana cli admin secrets-migration re-encrypt-data-keys` command or the `\/encryption\/reencrypt-data-keys` endpoint of the Grafana [Admin API](). It's safe to run more than once; running it under maintenance mode is recommended.\n\n### Rotate data keys\n\nYou can rotate data keys to disable the active data key and therefore stop using it for encryption operations. For high-availability setups, you might need to wait until the data keys cache's time-to-live (TTL) expires to ensure that all rotated data keys are no longer being used for encryption operations.\n\nNew data keys for encryption operations are generated on demand.\n\n\nData key rotation does **not** implicitly re-encrypt secrets. Grafana will continue to use rotated data keys to decrypt\nsecrets still encrypted with them. 
To completely stop using\nrotated data keys for both encryption and decryption, see [secrets re-encryption](#re-encrypt-secrets).\n\n\nTo rotate data keys, use the `\/encryption\/rotate-data-keys` endpoint of the Grafana [Admin API](). It's safe to call more than once; calling it under maintenance mode is recommended.\n\n## Encrypting your database with a key from a key management service (KMS)\n\nIf you are using Grafana Enterprise, you can integrate with a key management service (KMS) provider and change Grafana\u2019s cryptographic mode of operation from AES-CFB to AES-GCM.\n\nYou can choose to encrypt secrets stored in the Grafana database using a key from a KMS, which is a secure central storage location that is designed to help you to create and manage cryptographic keys and control their use across many services. When you integrate with a KMS, Grafana does not directly store your encryption key. Instead, Grafana stores KMS credentials and the identifier of the key, which Grafana uses to encrypt the database.\n\nGrafana integrates with the following key management services:\n\n- [AWS KMS]()\n- [Azure Key Vault]()\n- [Google Cloud KMS]()\n- [HashiCorp Vault]()\n\n## Changing your encryption mode to AES-GCM\n\nGrafana encrypts secrets using Advanced Encryption Standard in Cipher FeedBack mode (AES-CFB). You might prefer to use AES in Galois\/Counter Mode (AES-GCM) instead, to meet your company\u2019s security requirements or in order to maintain consistency with other services.\n\nTo change your encryption mode, update the `algorithm` value in the `[security.encryption]` section of your Grafana configuration file. 
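\n\nFor example, the relevant section of the configuration file might look like this (a sketch; verify the exact accepted value against the Enterprise configuration reference):\n\n```ini\n[security.encryption]\n# Switch from the default AES-CFB to AES-GCM (assumed value name)\nalgorithm = aes-gcm\n```\n\n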
For further details, refer to [Enterprise configuration]().","site":"grafana setup","answers_cleaned":"    aliases            administration database encryption            enterprise enterprise encryption  description  If you have a Grafana Enterprise license  you can integrate with a variety   of key management system providers  labels    products        enterprise       oss title  Configure database encryption weight  700        Configure database encryption  Grafana s database contains secrets  which are used to query data sources  send alert notifications  and perform other functions within Grafana   Grafana encrypts these secrets before they are written to the database  by using a symmetric key encryption algorithm called Advanced Encryption Standard  AES   These secrets are signed using a  secret key    that you can change when you configure a new Grafana instance    Grafana v9 0 and newer use  envelope encryption   envelope encryption  by default  which adds a layer of indirection to the encryption process that introduces an    implicit breaking change     implicit breaking change  for older versions of Grafana    For further details about how to operate a Grafana instance with envelope encryption  see the  Operational work    section    In Grafana Enterprise  you can also  encrypt secrets in AES GCM  Galois Counter Mode     instead of the default AES CFB  Cipher FeedBack mode        Envelope encryption   Since Grafana v9 0  you can turn envelope encryption off by adding the feature toggle  disableEnvelopeEncryption  to your  Grafana configuration       Instead of encrypting all secrets with a single key  Grafana uses a set of keys called data encryption keys  DEKs  to encrypt them  These data encryption keys are themselves encrypted with a single key encryption key  KEK   configured through the  secret key  attribute in your  Grafana configuration    or by  Encrypting your database with a key from a key management service  KMS    encrypting your database 
with a key from a key management service kms        Implicit breaking change  Envelope encryption introduces an implicit breaking change to versions of Grafana prior to v9 0  because it changes how secrets stored in the Grafana database are encrypted  Grafana administrators can upgrade to Grafana v9 0 with no action required from the database encryption perspective  but must be extremely careful if they need to roll an upgrade back to Grafana v8 5 or earlier because secrets created or modified after upgrading to Grafana v9 0 can t be decrypted by previous versions   Grafana v8 5 implemented envelope encryption behind an optional feature toggle  Grafana administrators who need to downgrade to Grafana v8 5 can enable envelope encryption as a workaround by adding the feature toggle  envelopeEncryption  to the  Grafana configuration         Operational work  From the database encryption perspective  Grafana administrators can        Re encrypt secrets     re encrypt secrets   re encrypt secrets with envelope encryption and a fresh data key       Roll back secrets     roll back secrets   decrypt secrets encrypted with envelope encryption and re encrypt them with legacy encryption       Re encrypt data keys     re encrypt data keys   re encrypt data keys with a fresh key encryption key and a KMS integration       Rotate data keys     rotate data keys   disable active data keys and stop using them for encryption in favor of a fresh one       Re encrypt secrets  You can re encrypt secrets in order to     Move already existing secrets  encryption forward from legacy to envelope encryption    Re encrypt secrets after a  data keys rotation   rotate data keys    To re encrypt secrets  use the  Grafana CLI    by running the  grafana cli admin secrets migration re encrypt  command or the   encryption reencrypt secrets  endpoint of the Grafana  Admin API     It s safe to run more than once  more recommended under maintenance mode       Roll back secrets  You can roll back secrets 
encrypted with envelope encryption to legacy encryption  This might be necessary to downgrade to Grafana versions prior to v9 0 after an unsuccessful upgrade   To roll back secrets  use the  Grafana CLI    by running the  grafana cli admin secrets migration rollback  command or the   encryption rollback secrets  endpoint of the Grafana  Admin API     It s safe to run more than once  more recommended under maintenance mode       Re encrypt data keys  You can re encrypt data keys encrypted with a specific key encryption key  KEK   This allows you to either re encrypt existing data keys with a new KEK version or to re encrypt them with a completely different KEK   To re encrypt data keys  use the  Grafana CLI    by running the  grafana cli admin secrets migration re encrypt data keys  command or the   encryption reencrypt data keys  endpoint of the Grafana  Admin API     It s safe to run more than once  more recommended under maintenance mode       Rotate data keys  You can rotate data keys to disable the active data key and therefore stop using them for encryption operations  For high availability setups  you might need to wait until the data keys cache s time to live  TTL  expires to ensure that all rotated data keys are no longer being used for encryption operations   New data keys for encryption operations are generated on demand    Data key rotation does   not   implicitly re encrypt secrets  Grafana will continue to use rotated data keys to decrypt secrets still encrypted with them  To completely stop using rotated data keys for both encryption and decryption  see  secrets re encryption   re encrypt secrets     To rotate data keys  use the   encryption rotate data keys  endpoint of the Grafana  Admin API     It s safe to call more than once  more recommended under maintenance mode      Encrypting your database with a key from a key management service  KMS   If you are using Grafana Enterprise  you can integrate with a key management service  KMS  provider  and 
change Grafana s cryptographic mode of operation from AES CFB to AES GCM   You can choose to encrypt secrets stored in the Grafana database using a key from a KMS  which is a secure central storage location that is designed to help you to create and manage cryptographic keys and control their use across many services  When you integrate with a KMS  Grafana does not directly store your encryption key  Instead  Grafana stores KMS credentials and the identifier of the key  which Grafana uses to encrypt the database   Grafana integrates with the following key management services      AWS KMS       Azure Key Vault       Google Cloud KMS       Hashicorp Key Vault        Changing your encryption mode to AES GCM  Grafana encrypts secrets using Advanced Encryption Standard in Cipher FeedBack mode  AES CFB   You might prefer to use AES in Galois Counter Mode  AES GCM  instead  to meet your company s security requirements or in order to maintain consistency with other services   To change your encryption mode  update the  algorithm  value in the   security encryption   section of your Grafana configuration file  For further details  refer to  Enterprise configuration    "}
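The encryption-mode change described above can be sketched as a minimal configuration fragment. This assumes Grafana Enterprise and the `[security.encryption]` section named in the text; check the Enterprise configuration reference for the exact set of accepted values.

```ini
# grafana.ini — switch the database secrets cipher
# from the default AES-CFB to AES-GCM (Grafana Enterprise)
[security.encryption]
algorithm = aes-gcm
```

Secrets encrypted before the change remain decryptable; the new algorithm applies to subsequent encryption operations, so a secrets re-encryption (as described above) is needed to move existing secrets to the new mode.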
{"questions":"grafana setup enterprise vault aliases products secrets for configuration and provisioning title Integrate Grafana with Hashicorp Vault Learn how to integrate Grafana with Hashicorp Vault so that you can use labels oss enterprise","answers":"---\naliases:\n  - ..\/..\/..\/enterprise\/vault\/\ndescription: Learn how to integrate Grafana with Hashicorp Vault so that you can use\n  secrets for configuration and provisioning.\nlabels:\n  products:\n    - enterprise\n    - oss\ntitle: Integrate Grafana with Hashicorp Vault\nweight: 500\n---\n\n# Integrate Grafana with Hashicorp Vault\n\nIf you manage your secrets with [Hashicorp Vault](https:\/\/www.hashicorp.com\/products\/vault), you can use them for [Configuration]() and [Provisioning]().\n\n\nAvailable in [Grafana Enterprise]().\n\n\n\nIf you have Grafana [set up for high availability](), then we advise not to use dynamic secrets for provisioning files.\nEach Grafana instance is responsible for renewing its own leases. Your data source leases might expire when one of your Grafana servers shuts down.\n\n\n## Configuration\n\nBefore using Vault, you need to activate it by providing a URL, authentication method (currently only token),\nand a token for your Vault service. Grafana automatically renews the service token if it is renewable and\nset up with a limited lifetime.\n\nIf you're using short-lived leases, then you can also configure how often Grafana should renew the lease and for how long. We recommend keeping the defaults unless you run into problems.\n\n```ini\n[keystore.vault]\n# Location of the Vault server\n;url =\n# Vault namespace if using Vault with multi-tenancy\n;namespace =\n# Method for authenticating towards Vault. 
Vault is inactive if this option is not set\n# Possible values: token\n;auth_method =\n# Secret token to connect to Vault when auth_method is token\n;token =\n# Time between checking if there are any secrets which need to be renewed.\n;lease_renewal_interval = 5m\n# Time until expiration for tokens which are renewed. Should have a value higher than lease_renewal_interval\n;lease_renewal_expires_within = 15m\n# New duration for renewed tokens. Vault may be configured to ignore this value and impose a stricter limit.\n;lease_renewal_increment = 1h\n```\n\nExample for `vault server -dev`:\n\n```ini\n[keystore.vault]\nurl = http:\/\/127.0.0.1:8200 # HTTP should only be used for local testing\nauth_method = token\ntoken = s.sAZLyI0r7sFLMPq6MWtoOhAN # replace with your key\n```\n\n## Using the Vault expander\n\nAfter you configure Vault, you must update the configuration or provisioning files in which you wish to\nuse Vault. Vault configuration is an extension of configuration's [variable expansion]() and follows the\n`$__vault{<argument>}` syntax.\n\nThe argument to Vault consists of three parts separated by a colon:\n\n- The first part specifies which secrets engine should be used.\n- The second part specifies which secret should be accessed.\n- The third part specifies which field of that secret should be used.\n\nFor example, if you place a Key\/Value secret for the Grafana admin user in _secret\/grafana\/admin_defaults_,\nthe syntax for accessing its _password_ field would be `$__vault{kv:secret\/grafana\/admin_defaults:password}`.\n\n### Secrets engines\n\nVault supports many secrets engines which represent different methods for storing or generating secrets when requested by an
authorized user. Grafana supports a subset of these which are most likely to be relevant for a Grafana installation.\n\n#### Key\/Value\n\nGrafana supports Vault's [K\/V version 2](https:\/\/www.vaultproject.io\/docs\/secrets\/kv\/kv-v2) storage engine which\nis used to store and retrieve arbitrary secrets as `kv`.\n\n```ini\n$__vault{kv:secret\/grafana\/smtp:username}\n```\n\n#### Databases\n\nThe Vault [databases secrets engines](https:\/\/www.vaultproject.io\/docs\/secrets\/databases) are a family of\nsecrets engines which share a similar syntax and grant the user dynamic access to a database.\nYou can use this both for setting up Grafana's own database access and for provisioning data sources.\n\n```ini\n$__vault{database:database\/creds\/grafana:username}\n```\n\n### Examples\n\nThe following examples show you how to set your [configuration]() or [provisioning]() files to use Vault to retrieve configuration values.\n\n#### Configuration\n\nThe following is a partial example for using Vault to set up a Grafana configuration file's email and database credentials.\nRefer to [Configuration]() for more information.\n\n```ini\n[smtp]\nenabled = true\nhost = $__vault{kv:secret\/grafana\/smtp:hostname}:587\nuser = $__vault{kv:secret\/grafana\/smtp:username}\npassword = $__vault{kv:secret\/grafana\/smtp:password}\n\n[database]\ntype = mysql\nhost = mysqlhost:3306\nname = grafana\nuser = $__vault{database:database\/creds\/grafana:username}\npassword = $__vault{database:database\/creds\/grafana:password}\n```\n\n#### Provisioning\n\nThe following is a full example of a provisioning YAML file setting up a MySQL data source using Vault's\ndatabase secrets engine.\nRefer to [Provisioning]() for more information.\n\n**provisioning\/custom.yaml**\n\n```yaml\napiVersion: 1\n\ndatasources:\n  - name: statistics\n    type: mysql\n    url: localhost:3306\n    database: stats\n    user: $__vault{database:database\/creds\/ro\/stats:username}\n    secureJsonData:\n      password: 
$__vault{database:database\/creds\/ro\/stats:password}\n```","site":"grafana setup"}
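The three-part `$__vault{<argument>}` syntax described above splits the argument on colons: engine first, field last, with the secret path (which may itself contain slashes) in between. A quick sketch of that parsing rule — `parse_vault_ref` is a hypothetical helper for illustration, not a Grafana API:

```python
def parse_vault_ref(ref: str) -> tuple[str, str, str]:
    """Split a $__vault{engine:path:field} reference into its three parts.

    Hypothetical illustrative helper; Grafana performs this expansion
    internally and does not expose such a function.
    """
    prefix, suffix = "$__vault{", "}"
    if not (ref.startswith(prefix) and ref.endswith(suffix)):
        raise ValueError(f"not a vault reference: {ref!r}")
    inner = ref[len(prefix):-len(suffix)]
    # Engine is everything before the first colon, field everything after
    # the last one; the remainder is the secret path.
    engine, rest = inner.split(":", 1)
    path, field = rest.rsplit(":", 1)
    return engine, path, field

print(parse_vault_ref("$__vault{kv:secret/grafana/admin_defaults:password}"))
# → ('kv', 'secret/grafana/admin_defaults', 'password')
```

The same rule covers dynamic-secret references such as `$__vault{database:database/creds/grafana:username}`, where the middle part is the Vault role path rather than a stored key.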
{"questions":"grafana setup auth overview Learn about all the ways in which you can configure Grafana to authenticate aliases products auth users labels cloud enterprise","answers":"---\naliases:\n  - ..\/..\/auth\/\n  - ..\/..\/auth\/overview\/\ndescription: Learn about all the ways in which you can configure Grafana to authenticate\n  users.\nlabels:\n  products:\n    - cloud\n    - enterprise\n    - oss\ntitle: Configure authentication\nweight: 200\n---\n\n# Configure authentication\n\nGrafana provides many ways to authenticate users. Some authentication integrations also enable syncing user permissions and org memberships.\n\nThe following table shows all supported authentication methods and the features available for them. [Team sync]() and [active sync]() are only available in Grafana Enterprise.\n\n| Authentication method                                 | Multi Org Mapping | Enforce Sync | Role Mapping | Grafana Admin Mapping | Team Sync | Allowed groups | Active Sync | Skip OrgRole mapping | Auto Login | Single Logout |\n| :---------------------------------------------------- | :---------------- | :----------- | :----------- | :-------------------- | :-------- | :------------- | :---------- | :------------------- | :--------- | :------------ |\n| [Anonymous access]() | N\/A               | N\/A          | N\/A          | N\/A                   | N\/A       | N\/A            | N\/A         | N\/A                  | N\/A        | N\/A           |\n| [Auth Proxy]()           | no                | yes          | yes          | no                    | yes       | no             | N\/A         | no                   | N\/A        | N\/A           |\n| [Azure AD OAuth]()          | yes               | yes          | yes          | yes                   | yes       | yes            | N\/A         | yes                  | yes        | yes           |\n| [Basic auth]()              | yes               | N\/A          | yes          | yes                   | N\/A      
 | N\/A            | N\/A         | N\/A                  | N\/A        | N\/A           |\n| [Generic OAuth]()     | yes               | yes          | yes          | yes                   | yes       | no             | N\/A         | yes                  | yes        | yes           |\n| [GitHub OAuth]()             | yes               | yes          | yes          | yes                   | yes       | yes            | N\/A         | yes                  | yes        | yes           |\n| [GitLab OAuth]()             | yes               | yes          | yes          | yes                   | yes       | yes            | N\/A         | yes                  | yes        | yes           |\n| [Google OAuth]()             | yes               | no           | no           | no                    | yes       | no             | N\/A         | no                   | yes        | yes           |\n| [Grafana.com OAuth]() | no                | no           | yes          | no                    | N\/A       | N\/A            | N\/A         | yes                  | yes        | yes           |\n| [Okta OAuth]()                 | yes               | yes          | yes          | yes                   | yes       | yes            | N\/A         | yes                  | yes        | yes           |\n| [SAML]() (Enterprise only)     | yes               | yes          | yes          | yes                   | yes       | yes            | N\/A         | yes                  | yes        | yes           |\n| [LDAP]()                       | yes               | yes          | yes          | yes                   | yes       | yes            | yes         | no                   | N\/A        | N\/A           |\n| [JWT Proxy]()                   | no                | yes          | yes          | yes                   | no        | no             | N\/A         | no                   | N\/A        | N\/A           |\n\nFields explanation:\n\n**Multi Org Mapping:** Able to add a user and 
map roles to multiple organizations\n\n**Enforce Sync:** Whether the integration skips setting that user\u2019s field or enforces a default when the information provided by the identity provider is empty.\n\n**Role Mapping:** Able to map a user\u2019s role in the default org\n\n**Grafana Admin Mapping:** Able to map a user\u2019s admin role in the default org\n\n**Team Sync:** Able to sync teams from a predefined group\/team in your IdP\n\n**Allowed Groups:** Only allow members of certain groups to log in\n\n**Active Sync:** Add users to teams and update their profile without requiring them to log in\n\n**Skip OrgRole Sync:** Able to modify org role for users and not sync it back to the IdP\n\n**Auto Login:** Automatically redirects to the provider login page if the user is not logged in. For OAuth, this only works if it is the only configured provider\n\n**Single Logout:** Logging out from Grafana also logs you out of the provider session\n\n## Configuring multiple identity providers\n\nGrafana allows you to configure more than one authentication provider; however, it is not possible to configure the same type of authentication provider twice.\nFor example, you can have [SAML]() (Enterprise only) and [Generic OAuth]() configured, but you cannot have two different [Generic OAuth]() configurations.\n\n> Note: Grafana does not support multiple identity providers resolving the same user. Ensure there are no user account overlaps between the different providers.\n\nIn scenarios where you have multiple identity providers of the same type, there are a couple of options:\n\n- Use different Grafana instances each configured with a given identity provider.\n- Check if the identity provider supports account federation. 
In such cases, you can configure it once and let your identity provider federate the accounts from different providers.\n- If SAML is supported by the identity provider, you can configure one [Generic OAuth]() and one [SAML]() (Enterprise only).\n\n## Using the same email address to log in with different identity providers\n\nIf users want to use the same email address with multiple identity providers (for example, Grafana.com OAuth and Google OAuth), you can configure Grafana to use the email address as the unique identifier for the user. This is done by enabling the `oauth_allow_insecure_email_lookup` option, which is disabled by default. Please note that enabling this option can lower the security of your Grafana instance. If you enable this option, you should also ensure that the `Allowed organization`, `Allowed groups`, and `Allowed domains` settings are configured correctly to prevent unauthorized access.\n\nTo enable this option, refer to the [Enable email lookup](#enable-email-lookup) section.\n\n## Multi-factor authentication (MFA\/2FA)\n\nGrafana and the Grafana Cloud portal currently do not include built-in support for multi-factor authentication (MFA).\n\nWe strongly recommend integrating an external identity provider (IdP) that supports MFA, such as Okta, Azure AD, or Google Workspace. 
By configuring your Grafana instances to use an external IdP, you can leverage MFA to protect your accounts and resources effectively.\n\n## Login and short-lived tokens\n\n> The following applies when using Grafana's basic authentication, LDAP (without Auth proxy) or OAuth integration.\n\nGrafana uses short-lived tokens as a mechanism for verifying authenticated users.\nThese short-lived tokens are rotated on an interval specified by `token_rotation_interval_minutes` for active authenticated users.\n\nInactive authenticated users will remain logged in for a duration specified by `login_maximum_inactive_lifetime_duration`.\nThis means that a user can close a Grafana window and return before `now + login_maximum_inactive_lifetime_duration` to continue their session.\nThis is true as long as the time since last user login is less than `login_maximum_lifetime_duration`.\n\n## Settings\n\nExample:\n\n```bash\n[auth]\n\n# Login cookie name\nlogin_cookie_name = grafana_session\n\n# The maximum lifetime (duration) an authenticated user can be inactive before being required to login at next visit. Default is 7 days (7d). This setting should be expressed as a duration, e.g. 5m (minutes), 6h (hours), 10d (days), 2w (weeks), 1M (month). The lifetime resets at each successful token rotation (token_rotation_interval_minutes).\nlogin_maximum_inactive_lifetime_duration =\n\n# The maximum lifetime (duration) an authenticated user can be logged in since login time before being required to login. Default is 30 days (30d). This setting should be expressed as a duration, e.g. 5m (minutes), 6h (hours), 10d (days), 2w (weeks), 1M (month).\nlogin_maximum_lifetime_duration =\n\n# How often should auth tokens be rotated for authenticated users when being active. The default is every 10 minutes.\ntoken_rotation_interval_minutes = 10\n\n# The maximum lifetime (seconds) an API key can be used. 
If it is set, all API keys should have a limited lifetime that is lower than this value.\napi_key_max_seconds_to_live = -1\n\n# Enforce user lookup based on email instead of the unique ID provided by the IdP.\noauth_allow_insecure_email_lookup = false\n```\n\n## Extended authentication settings\n\n### Enable email lookup\n\nBy default, Grafana identifies users based on the unique ID provided by the identity provider (IdP).\nIn certain cases, however, enabling user lookups by email can be a feasible option, such as when:\n\n- The identity provider is a single-tenant setup.\n- Unique, validated, and non-editable emails are provided by the IdP.\n- The infrastructure allows email-based identification without compromising security.\n\n**Important note**: While it is possible to configure Grafana to allow email-based user lookups, we strongly recommend against this approach in most cases due to potential security risks.\nIf you still choose to proceed, the following configuration can be applied to enable email lookup.\n\n```bash\n[auth]\noauth_allow_insecure_email_lookup = true\n```\n\nYou can also enable email lookup using the API:\n\n\nAvailable in [Grafana Enterprise]() and [Grafana Cloud]() since Grafana v10.4.\n\n\n```bash\ncurl --request PUT \\\n  --url http:\/\/{slug}.grafana.com\/api\/admin\/settings \\\n  --header 'Authorization: Bearer glsa_yourserviceaccounttoken' \\\n  --header 'Content-Type: application\/json' \\\n  --data '{ \"updates\": { \"auth\": { \"oauth_allow_insecure_email_lookup\": \"true\" }}}'\n```\n\nFinally, you can also enable it using the UI by going to **Administration -> Authentication -> Auth settings**.\n\n### Automatic OAuth login\n\nSet to true to attempt login with a specific OAuth provider automatically, skipping the login screen.\nThis setting is ignored if multiple auth providers are configured to use auto login.\nDefaults to `false`.\n\n```bash\n[auth.generic_oauth]\nauto_login = true\n```\n\n### Avoid automatic login\n\nThe 
`disableAutoLogin=true` URL parameter allows users to bypass the automatic login feature in scenarios where incorrect configuration changes prevent normal login functionality.\nThis feature is especially helpful when you need to access the login screen to troubleshoot and fix misconfigurations.\n\n#### How to use\n\n1. Add `disableAutoLogin=true` as a query parameter to your Grafana URL.\n   - Example: `grafana.example.net\/login?disableAutoLogin=true` or `grafana.example.net\/login?disableAutoLogin`\n1. This will redirect you to the standard login screen, bypassing the automatic login mechanism.\n1. Fix any configuration issues and test your login setup.\n\nThis feature is available for both OAuth and SAML. Ensure that after fixing the issue, you remove the parameter or revert the configuration to re-enable the automatic login feature, if desired.\n\n### Hide sign-out menu\n\nSet the option detailed below to true to hide the sign-out menu link. Useful if you use an auth proxy or JWT authentication.\n\n```bash\n[auth]\ndisable_signout_menu = true\n```\n\n### URL redirect after signing out\n\nURL to redirect the user to after signing out from Grafana. This can, for example, be used to enable sign-out from an OAuth provider.\n\nExample for Generic OAuth:\n\n```bash\n[auth.generic_oauth]\nsignout_redirect_url =\n```\n\n### Remote logout\n\nYou can log out from other devices by removing login sessions from the bottom of your profile page. If you are\na Grafana admin user, you can also do the same for any user from the Server Admin \/ Edit User view.\n\n### Protected roles\n\n\nAvailable in [Grafana Enterprise]() and [Grafana Cloud]().\n\n\nBy default, after you configure an authorization provider, Grafana will adopt existing users into the new authentication scheme. 
For example, if you have created a user with basic authentication having the login `jsmith@example.com`, then set up SAML authentication where `jsmith@example.com` is an account, the user's authentication type will be changed to SAML if they perform a SAML sign-in.\n\nYou can disable this user adoption for certain roles using the `protected_roles` property:\n\n```bash\n[auth.security]\nprotected_roles = server_admins org_admins\n```\n\nThe value of `protected_roles` should be a list of roles to protect, separated by spaces. Valid roles are `viewers`, `editors`, `org_admins`, `server_admins`, and `all` (a superset of the other roles).","site":"grafana setup","answers_cleaned":"    aliases            auth            auth overview  description  Learn about all the ways in which you can configure Grafana to authenticate   users  labels    products        cloud       enterprise       oss title  Configure authentication weight  200        Configure authentication  Grafana provides many ways to authenticate users  Some authentication integrations also enable syncing user permissions and org memberships   The following table shows all supported authentication methods and the features available for them   Team sync    and  active sync    are only available in Grafana Enterprise     Authentication method                                   Multi Org Mapping   Enforce Sync   Role Mapping   Grafana Admin Mapping   Team Sync   Allowed groups   Active Sync   Skip OrgRole mapping   Auto Login   Single Logout                                                                                                                                                                                                                                         Anonymous access      N A                 N A            N A            N A                     N A         N A              N A           N A                    N A          N A                Auth Proxy                no                  yes        
    yes            no                      yes         no               N A           no                     N A          N A

| Provider | Multi Org Mapping | Enforce Sync | Role Mapping | Grafana Admin Mapping | Team Sync | Allowed Groups | Active Sync | Skip OrgRole Sync | Auto Login | Single Logout |
| -------- | ----------------- | ------------ | ------------ | --------------------- | --------- | -------------- | ----------- | ----------------- | ---------- | ------------- |
| Azure AD OAuth | yes | yes | yes | yes | yes | yes | N/A | yes | yes | yes |
| Basic auth | yes | N/A | yes | yes | N/A | N/A | N/A | N/A | N/A | N/A |
| Generic OAuth | yes | yes | yes | yes | yes | no | N/A | yes | yes | yes |
| GitHub OAuth | yes | yes | yes | yes | yes | yes | N/A | yes | yes | yes |
| GitLab OAuth | yes | yes | yes | yes | yes | yes | N/A | yes | yes | yes |
| Google OAuth | yes | no | no | no | yes | no | N/A | no | yes | yes |
| Grafana.com OAuth | no | no | yes | no | N/A | N/A | N/A | yes | yes | yes |
| Okta OAuth | yes | yes | yes | yes | yes | yes | N/A | yes | yes | yes |
| SAML (Enterprise only) | yes | yes | yes | yes | yes | yes | N/A | yes | yes | yes |
| LDAP | yes | yes | yes | yes | yes | yes | yes | no | N/A | N/A |
| JWT Proxy | no | yes | yes | yes | no | no | N/A | no | N/A | N/A |

**Fields explanation:**

- **Multi Org Mapping:** Able to add a user and map roles to multiple organizations.
- **Enforce Sync:** If the information provided by the identity provider is empty, does the integration skip setting that user's field or does it enforce a default.
- **Role Mapping:** Able to map a user's role in the default org.
- **Grafana Admin Mapping:** Able to map a user's admin role in the default org.
- **Team Sync:** Able to sync teams from a predefined group/team in your IdP.
- **Allowed Groups:** Only allow members of certain groups to login.
- **Active Sync:** Add users to teams and update their profile without requiring them to log in.
- **Skip OrgRole Sync:** Able to modify org role for users and not sync it back to the IdP.
- **Auto Login:** Automatically redirects to the provider login page if the user is not logged in (for OAuth, only works if it's the only configured provider).
- **Single Logout:** Logging out from Grafana also logs you out of the provider session.

## Configuring multiple identity providers

Grafana allows you to configure more than one authentication provider, however it is not possible to configure the same type of authentication provider twice. For example, you can have SAML (Enterprise only) and Generic OAuth configured, but you cannot have two different Generic OAuth configurations.

**Note:** Grafana does not support multiple identity providers resolving the same user. Ensure there are no user account overlaps between the different providers.

In scenarios where you have multiple identity providers of the same type, there are a couple of options:

- Use different Grafana instances, each configured with a given identity provider.
- Check if the identity provider supports account federation. In such cases, you can configure it once and let your identity provider federate the accounts from different providers.
- If SAML is supported by the identity provider, you can configure one Generic OAuth and one SAML (Enterprise only).

## Using the same email address to login with different identity providers

If users want to use the same email address with multiple identity providers (for example, Grafana.com OAuth and Google OAuth), you can configure Grafana to use the email address as the unique identifier for the user. This is done by enabling the `oauth_allow_insecure_email_lookup` option, which is disabled by default. Please note that enabling this option can lower the security of your Grafana instance. If you enable this option, you should also ensure that the Allowed organization, Allowed groups, and Allowed domains settings are configured correctly to prevent unauthorized access.

To enable this option, refer to the Enable email lookup section.

## Multi-factor authentication (MFA/2FA)

Grafana and the Grafana Cloud portal currently do not include built-in support for multi-factor authentication (MFA).

We strongly recommend integrating an external identity provider (IdP) that supports MFA, such as Okta, Azure AD, or Google Workspace. By configuring your Grafana instances to use an external IdP, you can leverage MFA to protect your accounts and resources effectively.

## Login and short-lived tokens

**Note:** The following applies when using Grafana's basic authentication, LDAP (without Auth proxy), or OAuth integration.

Grafana uses short-lived tokens as a mechanism for verifying authenticated users. These short-lived tokens are rotated on an interval specified by `token_rotation_interval_minutes` for active authenticated users.

Inactive authenticated users will remain logged in for a duration specified by `login_maximum_inactive_lifetime_duration`. This means that a user can close a Grafana window and return before `now + login_maximum_inactive_lifetime_duration` to continue their session. This is true as long as the time since last user login is less than `login_maximum_lifetime_duration`.

#### Settings (example)

```bash
[auth]

# Login cookie name
login_cookie_name = grafana_session

# The maximum lifetime (duration) an authenticated user can be inactive before
# being required to login at next visit. Default is 7 days (7d). This setting
# should be expressed as a duration, e.g. 5m (minutes), 6h (hours), 10d (days),
# 2w (weeks), 1M (month). The lifetime resets at each successful token rotation
# (token_rotation_interval_minutes).
login_maximum_inactive_lifetime_duration =

# The maximum lifetime (duration) an authenticated user can be logged in since
# login time before being required to login. Default is 30 days (30d). This
# setting should be expressed as a duration, e.g. 5m (minutes), 6h (hours),
# 10d (days), 2w (weeks), 1M (month).
login_maximum_lifetime_duration =

# How often should auth tokens be rotated for authenticated users when being
# active. The default is every 10 minutes.
token_rotation_interval_minutes = 10

# The maximum lifetime (seconds) an API key can be used. If it is set, all the
# API keys should have a limited lifetime that is lower than this value.
api_key_max_seconds_to_live = -1

# Enforce user lookup based on email instead of the unique ID provided by the IdP.
oauth_allow_insecure_email_lookup = false
```

## Extended authentication settings

### Enable email lookup

By default, Grafana identifies users based on the unique ID provided by the identity provider (IdP). In certain cases, however, enabling user lookups by email can be a feasible option, such as when:

- The identity provider is a single-tenant setup.
- Unique, validated, and non-editable emails are provided by the IdP.
- The infrastructure allows email-based identification without compromising security.

**Important note:** While it is possible to configure Grafana to allow email-based user lookups, we strongly recommend against this approach in most cases due to potential security risks. If you still choose to proceed, the following configuration can be applied to enable email lookup:

```bash
[auth]
oauth_allow_insecure_email_lookup = true
```

You can also enable email lookup using the API (available in Grafana Enterprise and Grafana Cloud since Grafana v10.4):

```bash
curl --request PUT \
  --url http://{slug}.grafana.com/api/admin/settings \
  --header 'Authorization: Bearer glsa_yourserviceaccounttoken' \
  --header 'Content-Type: application/json' \
  --data '{ \"updates\": { \"auth\": { \"oauth_allow_insecure_email_lookup\": \"true\" }}}'
```

Finally, you can also enable it using the UI by going to **Administration > Authentication > Auth settings**.

### Automatic OAuth login

Set to true to attempt login with a specific OAuth provider automatically, skipping the login screen. This setting is ignored if multiple auth providers are configured to use auto login. Defaults to `false`.

```bash
[auth.generic_oauth]
auto_login = true
```

### Avoid automatic login

The `disableAutoLogin=true` URL parameter allows users to bypass the automatic login feature in scenarios where incorrect configuration changes prevent normal login functionality. This feature is especially helpful when you need to access the login screen to troubleshoot and fix misconfigurations.

How to use:

1. Add `disableAutoLogin=true` as a query parameter to your Grafana URL. Example: `grafana.example.net/login?disableAutoLogin=true` or `grafana.example.net/login?disableAutoLogin`.
1. This will redirect you to the standard login screen, bypassing the automatic login mechanism.
1. Fix any configuration issues and test your login setup.

This feature is available both for OAuth and SAML. Ensure that after fixing the issue, you remove the parameter or revert the configuration to re-enable the automatic login feature, if desired.

### Hide sign-out menu

Set the option detailed below to true to hide the sign-out menu link. Useful if you use an auth proxy or JWT authentication.

```bash
[auth]
disable_signout_menu = true
```

### URL redirect after signing out

URL to redirect the user to after signing out from Grafana. This can for example be used to enable signout from an OAuth provider.

Example for Generic OAuth:

```bash
[auth.generic_oauth]
signout_redirect_url =
```

### Remote logout

You can log out from other devices by removing login sessions from the bottom of your profile page. If you are a Grafana admin user, you can also do the same for any user from the **Server Admin > Edit User** view.

## Protected roles

Available in Grafana Enterprise and Grafana Cloud.

By default, after you configure an authorization provider, Grafana will adopt existing users into the new authentication scheme. For example, if you have created a user with basic authentication having the login `jsmith@example.com`, then set up SAML authentication where `jsmith@example.com` is an account, the user's authentication type will be changed to SAML if they perform a SAML sign-in.

You can disable this user adoption for certain roles using the `protected_roles` property:

```bash
[auth.security]
protected_roles = server_admins org_admins
```

The value of `protected_roles` should be a list of roles to protect, separated by spaces. Valid roles are `viewers`, `editors`, `org_admins`, `server_admins`, and `all` (a superset of the other roles).
"}
{"questions":"grafana setup title Configure JWT authentication menuTitle JWT Grafana JWT Authentication aliases products auth jwt labels oss enterprise","answers":"---\naliases:\n  - ..\/..\/..\/auth\/jwt\/\ndescription: Grafana JWT Authentication\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: JWT\ntitle: Configure JWT authentication\nweight: 1600\n---\n\n# Configure JWT authentication\n\nYou can configure Grafana to accept a JWT token provided in the HTTP header. The token is verified using any of the following:\n\n- PEM-encoded key file\n- JSON Web Key Set (JWKS) in a local file\n- JWKS provided by the configured JWKS endpoint\n\nThis method of authentication is useful for integrating with other systems that\nuse JWKS but can't directly integrate with Grafana or if you want to use pass-through\nauthentication in an app embedding Grafana.\n\n\nGrafana does not currently support refresh tokens.\n\n\n## Enable JWT\n\nTo use JWT authentication:\n\n1. Enable JWT in the [main config file]().\n1. Specify the header name that contains a token.\n\n```ini\n[auth.jwt]\n# By default, auth.jwt is disabled.\nenabled = true\n\n# HTTP header to look into to get a JWT token.\nheader_name = X-JWT-Assertion\n```\n\n## Configure login claim\n\nTo identify the user, some of the claims needs to be selected as a login info. The subject claim called `\"sub\"` is mandatory and needs to identify the principal that is the subject of the JWT.\n\nTypically, the subject claim called `\"sub\"` would be used as a login but it might also be set to some application specific claim.\n\n```ini\n# [auth.jwt]\n# ...\n\n# Specify a claim to use as a username to sign in.\nusername_claim = sub\n\n# Specify a claim to use as an email to sign in.\nemail_claim = sub\n\n# auto-create users if they are not already matched\n# auto_sign_up = true\n```\n\nIf `auto_sign_up` is enabled, then the `sub` claim is used as the \"external Auth ID\". 
The `name` claim is used as the user's full name if it is present.\n\nAdditionally, if the login username or the email claims are nested inside the JWT structure, you can specify the path to the attributes using the `username_attribute_path` and `email_attribute_path` configuration options using the JMESPath syntax.\n\nJWT structure example.\n\n```json\n{\n  \"user\": {\n    \"UID\": \"1234567890\",\n    \"name\": \"John Doe\",\n    \"username\": \"johndoe\",\n    \"emails\": [\"personal@email.com\", \"professional@email.com\"]\n  }\n}\n```\n\n```ini\n# [auth.jwt]\n# ...\n\n# Specify a nested attribute to use as a username to sign in.\nusername_attribute_path = user.username # user's login is johndoe\n\n# Specify a nested attribute to use as an email to sign in.\nemail_attribute_path = user.emails[1] # user's email is professional@email.com\n```\n\n## Iframe Embedding\n\nIf you want to embed Grafana in an iframe while maintaining user identity and role checks,\nyou can use JWT authentication to authenticate the iframe.\n\n\nFor Grafana Cloud, or scenarios where verifying viewer identity is not required,\nembed [shared dashboards](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/dashboards\/share-dashboards-panels\/shared-dashboards\/).\n\n\nIn this scenario, you will need to configure Grafana to accept a JWT\nprovided in the HTTP header and a reverse proxy should rewrite requests to the\nGrafana instance to include the JWT in the request's headers.\n\n\nFor embedding to work, you must enable `allow_embedding` in the [security section](). 
This setting is not available in Grafana Cloud.\n\n\nIn a scenario where it is not possible to rewrite the request headers, you\ncan use URL login instead.\n\n### URL login\n\n`url_login` allows Grafana to search for a JWT in the URL query parameter\n`auth_token` and use it as the authentication token.\n\n**Note**: JWT authentication must already be enabled before this setting takes effect; see the Enable JWT section above.\n\n\nThis can lead to JWTs being exposed in logs and to possible session hijacking if the server is not\nusing HTTP over TLS.\n\n\n```ini\n# [auth.jwt]\n# ...\nurl_login = true # enable JWT authentication in the URL\n```\n\nAn example of a URL for accessing Grafana with JWT URL authentication is:\n\n```\nhttp:\/\/env.grafana.local\/d\/RciOKLR4z\/board-identifier?orgId=1&kiosk&auth_token=eyJhbxxxxxxxxxxxxx\n```\n\nA sample repository using this authentication method is available\nat [grafana-iframe-oauth-sample](https:\/\/github.com\/grafana\/grafana-iframe-oauth-sample).\n\n## Signature verification\n\nJSON Web Token integrity is verified with a cryptographic signature, so every token must be signed with a known cryptographic key.\n\nYou have several options for specifying where the keys are located.\n\n### Verify token using a JSON Web Key Set loaded from an HTTPS endpoint\n\nFor more information on JWKS endpoints, refer to [Auth0 docs](https:\/\/auth0.com\/docs\/tokens\/json-web-tokens\/json-web-key-sets).\n\n```ini\n# [auth.jwt]\n# ...\n\njwk_set_url = https:\/\/your-auth-provider.example.com\/.well-known\/jwks.json\n\n# Cache TTL for data loaded from the HTTP endpoint.\ncache_ttl = 60m\n```\n\n> **Note**: If the JWKS endpoint includes cache control headers and the value is less than the configured `cache_ttl`, then the cache control header value is used instead. If `cache_ttl` is not set, no caching is performed. 
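The TTL selection in the note can be sketched as follows; `effective_cache_ttl` is a hypothetical helper (values in seconds), not Grafana's actual code:

```python
# Illustrative sketch of the JWKS cache rule described in the note above.
def effective_cache_ttl(configured_ttl, header_max_age=None):
    if configured_ttl is None:
        return 0  # cache_ttl not set: no caching is performed
    if header_max_age is not None and header_max_age < configured_ttl:
        return header_max_age  # smaller Cache-Control max-age wins
    return configured_ttl

print(effective_cache_ttl(3600, header_max_age=600))  # header value is used
```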
`no-store` and `no-cache` cache control headers are ignored.\n\n### Verify token using a JSON Web Key Set loaded from JSON file\n\nKey set in the same format as in JWKS endpoint but located on disk.\n\n```ini\njwk_set_file = \/path\/to\/jwks.json\n```\n\n### Verify token using a single key loaded from PEM-encoded file\n\nPEM-encoded key file in PKIX, PKCS #1, PKCS #8 or SEC 1 format.\n\n```ini\nkey_file = \/path\/to\/key.pem\n```\n\nIf the JWT token's header specifies a `kid` (Key ID), then the Key ID must be set using the `key_id` configuration option.\n\n```ini\nkey_id = my-key-id\n```\n\n## Validate claims\n\nBy default, only `\"exp\"`, `\"nbf\"` and `\"iat\"` claims are validated.\n\nConsider validating that other claims match your expectations by using the `expect_claims` configuration option.\nToken claims must match exactly the values set here.\n\n```ini\n# This can be seen as a required \"subset\" of a JWT Claims Set.\nexpect_claims = {\"iss\": \"https:\/\/your-token-issuer\", \"your-custom-claim\": \"foo\"}\n```\n\n## Roles\n\nGrafana checks for the presence of a role using the [JMESPath](http:\/\/jmespath.org\/examples.html) specified via the `role_attribute_path` configuration option. The JMESPath is applied to JWT token claims. The result after evaluation of the `role_attribute_path` JMESPath expression should be a valid Grafana role, for example, `None`, `Viewer`, `Editor` or `Admin`.\n\nThe organization that the role is assigned to can be configured using the `X-Grafana-Org-Id` header.\n\n### JMESPath examples\n\nTo ease configuration of a proper JMESPath expression, you can test\/evaluate expressions with custom payloads at http:\/\/jmespath.org\/.\n\n### Role mapping\n\nIf the `role_attribute_path` property does not return a role, then the user is assigned the `Viewer` role by default. You can disable the role assignment by setting `role_attribute_strict = true`. 
It denies user access if no role or an invalid role is returned.\n\n**Basic example:**\n\nIn the following example, the user gets the `Editor` role when authenticating. The value of the property `role` will be the resulting role if the role is a proper Grafana role, i.e. `None`, `Viewer`, `Editor` or `Admin`.\n\nPayload:\n\n```json\n{\n    ...\n    \"role\": \"Editor\",\n    ...\n}\n```\n\nConfig:\n\n```bash\nrole_attribute_path = role\n```\n\n**Advanced example:**\n\nIn the following example, the user gets the `Admin` role when authenticating, because the token has the role `admin`. A user with the role `editor` gets the `Editor` role; otherwise, `Viewer` is assigned.\n\nPayload:\n\n```json\n{\n    ...\n    \"info\": {\n        ...\n        \"roles\": [\n            \"engineer\",\n            \"admin\"\n        ],\n        ...\n    },\n    ...\n}\n```\n\nConfig:\n\n```bash\nrole_attribute_path = contains(info.roles[*], 'admin') && 'Admin' || contains(info.roles[*], 'editor') && 'Editor' || 'Viewer'\n```\n\n### Grafana Admin Role\n\nIf the `role_attribute_path` property returns a `GrafanaAdmin` role, Grafana Admin is not assigned by default; instead, the `Admin` role is assigned. 
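Putting the mapping rules above together, a minimal Python sketch (assuming a hypothetical `effective_role` helper; this is not Grafana's implementation) of how an evaluated `role_attribute_path` result becomes the effective role:

```python
# Illustrative sketch (hypothetical helper, not Grafana's code) of how an
# evaluated role_attribute_path result is turned into the effective role.
VALID_ROLES = {"None", "Viewer", "Editor", "Admin"}

def effective_role(mapped, strict=False, allow_assign_grafana_admin=False):
    if mapped == "GrafanaAdmin":
        # GrafanaAdmin is only honoured when allow_assign_grafana_admin = true;
        # otherwise the plain Admin role is assigned instead.
        return "GrafanaAdmin" if allow_assign_grafana_admin else "Admin"
    if mapped in VALID_ROLES:
        return mapped
    # No role (or an invalid role): strict mode denies access,
    # otherwise the user falls back to Viewer.
    return None if strict else "Viewer"

assert effective_role("Editor") == "Editor"
assert effective_role("GrafanaAdmin") == "Admin"
assert effective_role("bogus", strict=True) is None
```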
To allow `Grafana Admin` role to be assigned set `allow_assign_grafana_admin = true`.\n\n### Skip organization role mapping\n\nTo skip the assignment of roles and permissions upon login via JWT and handle them via other mechanisms like the user interface, we can skip the organization role synchronization with the following configuration.\n\n```ini\n[auth.jwt]\n# ...\n\nskip_org_role_sync = true\n```","site":"grafana setup"}
{"questions":"grafana setup weight 600 Learn how to configure SAML authentication in Grafana s UI products title Configure SAML authentication using the Grafana user interface menuTitle SAML user interface labels cloud enterprise","answers":"---\ndescription: Learn how to configure SAML authentication in Grafana's UI.\nlabels:\n  products:\n    - cloud\n    - enterprise\nmenuTitle: SAML user interface\ntitle: Configure SAML authentication using the Grafana user interface\nweight: 600\n---\n\n# Configure SAML authentication using the Grafana user interface\n\n\nAvailable in [Grafana Enterprise]() version 10.0 and later, and [Grafana Cloud Pro and Advanced](\/docs\/grafana-cloud\/).\n\n\nYou can configure SAML authentication in Grafana through the user interface (UI) or the Grafana configuration file. For instructions on how to set up SAML using the Grafana configuration file, refer to [Configure SAML authentication using the configuration file]().\n\nThe Grafana SAML UI provides the following advantages over configuring SAML in the Grafana configuration file:\n\n- It is accessible by Grafana Cloud users\n- SAML UI carries out input validation and provides useful feedback on the correctness of the configuration, making SAML setup easier\n- It doesn't require Grafana to be restarted after a configuration update\n- Access to the SAML UI only requires access to authentication settings, so it can be used by users with limited access to Grafana's configuration\n\n\nAny configuration changes made through the Grafana user interface (UI) will take precedence over settings specified in the Grafana configuration file or through environment variables. This means that if you modify any configuration settings in the UI, they will override any corresponding settings set via environment variables or defined in the configuration file. 
For more information on how Grafana determines the order of precedence for its settings, please refer to the [Settings update at runtime]().\n\n\n\nDisabling the UI does not affect any configuration settings that were previously set up through the UI. Those settings will continue to function as intended even with the UI disabled.\n\n\n## Before you begin\n\nTo follow this guide, you need:\n\n- Knowledge of SAML authentication. Refer to [SAML authentication in Grafana]() for an overview of Grafana's SAML integration.\n- Permissions `settings:read` and `settings:write` with scope `settings:auth.saml:*` that allow you to read and update SAML authentication settings.\n\n  These permissions are granted by `fixed:authentication.config:writer` role.\n  By default, this role is granted to Grafana server administrator in self-hosted instances and to Organization admins in Grafana Cloud instances.\n\n- Grafana instance running Grafana version 10.0 or later with [Grafana Enterprise]() or [Grafana Cloud Pro or Advanced](\/docs\/grafana-cloud\/) license.\n\n\nIt is possible to set up Grafana with SAML authentication using Azure AD. However, if an Azure AD user belongs to more than 150 groups, a Graph API endpoint is shared instead.\n\nGrafana versions 11.1 and below do not support fetching the groups from the Graph API endpoint. As a result, users with more than 150 groups will not be able to retrieve their groups. Instead, it is recommended that you use OIDC\/OAuth workflows.\n\nAs of Grafana 11.2, the SAML integration offers a mechanism to retrieve user groups from the Graph API.\n\nRelated links:\n\n- [Azure AD SAML limitations](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/id-token-claims-reference#groups-overage-claim)\n- [Set up SAML with Azure AD]()\n- [Configure a Graph API application in Azure AD]()\n  \n\n## Steps To Configure SAML Authentication\n\nSign in to Grafana and navigate to **Administration > Authentication > Configure SAML**.\n\n### 1. 
General Settings Section\n\n1. Complete the **General settings** fields.\n\n   For assistance, consult the following table for additional guidance about certain fields:\n\n   | Field | Description |\n   | ----- | ----------- |\n   | **Allow signup** | If enabled, you can create new users through the SAML login. If disabled, then only existing Grafana users can log in with SAML. |\n   | **Auto login** | If enabled, Grafana will attempt to automatically log in with SAML, skipping the login screen. |\n   | **Single logout** | The SAML single logout feature enables users to log out from all applications associated with the current IdP session established using SAML SSO. For more information, refer to [SAML single logout documentation](). |\n   | **Identity provider initiated login** | Enables users to log in to Grafana directly from the SAML IdP. For more information, refer to [IdP initiated login documentation](). |\n\n1. Click **Next: Sign requests**.\n\n### 2. Sign Requests Section\n\n1. 
In the **Sign requests** field, specify whether you want the outgoing requests to be signed, and, if so, then:\n\n   1. Provide a certificate and a private key that will be used by the service provider (Grafana) and the SAML IdP.\n\n      Use the [PKCS #8](https:\/\/en.wikipedia.org\/wiki\/PKCS_8) format to issue the private key.\n\n      For more information, refer to an [example on how to generate SAML credentials]().\n\n      Alternatively, you can generate a new private key and certificate pair directly from the UI. Click on the `Generate key and certificate` button to open a form where you enter some information you want to be embedded into the new certificate.\n\n   1. Choose which signature algorithm should be used.\n\n      The SAML standard recommends using a digital signature for some types of messages, like authentication or logout requests to avoid [man-in-the-middle attacks](https:\/\/en.wikipedia.org\/wiki\/Man-in-the-middle_attack).\n\n1. Click **Next: Connect Grafana with Identity Provider**.\n\n### 3. Connect Grafana with Identity Provider Section\n\n1. Configure IdP using Grafana Metadata\n   1. Copy the **Metadata URL** and provide it to your SAML IdP to establish a connection between Grafana and the IdP.\n      - The metadata URL contains all the necessary information for the IdP to establish a connection with Grafana.\n   1. Copy the **Assertion Consumer Service URL** and provide it to your SAML IdP.\n      - The Assertion Consumer Service URL is the endpoint where the IdP sends the SAML assertion after the user has been authenticated.\n   1. If you want to use the **Single Logout** feature, copy the **Single Logout Service URL** and provide it to your SAML IdP.\n1. Finish configuring Grafana using IdP data\n   1. Provide IdP Metadata to Grafana.\n   - The metadata contains all the necessary information for Grafana to establish a connection with the IdP.\n   - This can be provided as Base64-encoded value, a path to a file, or as a URL.\n1. 
Click **Next: User mapping**.\n\n### 4. User Mapping Section\n\n1. If you wish to [map user information from SAML assertions](), complete the **Assertion attributes mappings** section.\n\n   You also need to configure the **Groups attribute** field if you want to use group synchronization. Group sync allows you to automatically map users to Grafana teams or role-based access control roles based on their SAML group membership.\n   To learn more about how to configure group synchronization, refer to the [Configure team sync]() and [Configure group attribute sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-group-attribute-sync) documentation.\n\n1. If you want to automatically assign users' roles based on their SAML roles, complete the **Role mapping** section.\n\n   First, configure the **Role attribute** field to specify which SAML attribute should be used to retrieve SAML role information.\n   Then enter the SAML roles that you want to map to Grafana roles in the **Role mapping** section. If you want to map multiple SAML roles to a Grafana role, separate them with a comma and a space. For example, `Editor: editor, developer`.\n\n   Role mapping will automatically update the user's [basic role]() based on their SAML roles every time the user logs in to Grafana.\n   Learn more about [SAML role synchronization]().\n\n1. If you're setting up Grafana with Azure AD using the SAML protocol and want to fetch user groups from the Graph API, complete the **Azure AD Service Account Configuration** subsection.\n   1. Set up a service account in Azure AD and provide the necessary details in the **Azure AD Service Account Configuration** section.\n   1. Provide the **Client ID** of your Azure AD application.\n   1. Provide the **Client Secret** of your Azure AD application; the **Client Secret** is used to request an access token from Azure AD.\n   1. Provide the Azure AD request **Access Token URL**.\n   1. 
If you don't have users with more than 150 groups, you can still force the use of the Graph API by enabling the **Force use Graph API** toggle.\n1. If you have multiple organizations and want to automatically add users to organizations, complete the **Org mapping** section.\n\n   First, configure the **Org attribute** field to specify which SAML attribute should be used to retrieve SAML organization information.\n   Then fill in the **Org mapping** field with mappings from SAML organizations to Grafana organizations. For example, `Org mapping: Engineering:2, Sales:2` maps users who belong to the `Engineering` or `Sales` organizations in SAML to the Grafana organization with ID 2.\n   If you want users to have different roles in different organizations, you can additionally specify a role. For example, `Org mapping: Engineering:2:Editor` maps users who belong to the `Engineering` organization in SAML to the Grafana organization with ID 2 and assigns them the Editor role.\n\n   Organization mapping will automatically update the user's organization memberships (and roles, if they have been configured) based on their SAML organization every time the user logs in to Grafana.\n   Learn more about [SAML organization mapping]().\n\n1. If you want to limit access to Grafana based on a user's SAML organization membership, fill in the **Allowed organizations** field.\n1. Click **Next: Test and enable**.\n\n### 5. Test And Enable Section\n\n1. Click **Save and enable**.\n   - If there are issues with your configuration, an error message will appear. Refer back to the previous steps to correct the issues and click `Save and apply` in the top right corner when you are done.\n1. If there are no configuration issues, the SAML integration status will change to `Enabled`.\n   Your SAML configuration is now enabled.\n1. 
To disable SAML integration, click `Disable` in the top right corner.","site":"grafana setup","answers_cleaned":"    description  Learn how to configure SAML authentication in Grafana s UI  labels    products        cloud       enterprise menuTitle  SAML user interface title  Configure SAML authentication using the Grafana user interface weight  600        Configure SAML authentication using the Grafana user interface   Available in  Grafana Enterprise    version 10 0 and later  and  Grafana Cloud Pro and Advanced   docs grafana cloud      You can configure SAML authentication in Grafana through the user interface  UI  or the Grafana configuration file  For instructions on how to set up SAML using the Grafana configuration file  refer to  Configure SAML authentication using the configuration file      The Grafana SAML UI provides the following advantages over configuring SAML in the Grafana configuration file     It is accessible by Grafana Cloud users   SAML UI carries out input validation and provides useful feedback on the correctness of the configuration  making SAML setup easier   It doesn t require Grafana to be restarted after a configuration update   Access to the SAML UI only requires access to authentication settings  so it can be used by users with limited access to Grafana s configuration   Any configuration changes made through the Grafana user interface  UI  will take precedence over settings specified in the Grafana configuration file or through environment variables  This means that if you modify any configuration settings in the UI  they will override any corresponding settings set via environment variables or defined in the configuration file  For more information on how Grafana determines the order of precedence for its settings  please refer to the  Settings update at runtime        Disabling the UI does not affect any configuration settings that were previously set up through the UI  Those settings will continue to function as intended even 
with the UI disabled       Before you begin  To follow this guide  you need     Knowledge of SAML authentication  Refer to  SAML authentication in Grafana    for an overview of Grafana s SAML integration    Permissions  settings read  and  settings write  with scope  settings auth saml    that allow you to read and update SAML authentication settings     These permissions are granted by  fixed authentication config writer  role    By default  this role is granted to Grafana server administrator in self hosted instances and to Organization admins in Grafana Cloud instances     Grafana instance running Grafana version 10 0 or later with  Grafana Enterprise    or  Grafana Cloud Pro or Advanced   docs grafana cloud   license    It is possible to set up Grafana with SAML authentication using Azure AD  However  if an Azure AD user belongs to more than 150 groups  a Graph API endpoint is shared instead   Grafana versions 11 1 and below do not support fetching the groups from the Graph API endpoint  As a result  users with more than 150 groups will not be able to retrieve their groups  Instead  it is recommended that you use OIDC OAuth workflows   As of Grafana 11 2  the SAML integration offers a mechanism to retrieve user groups from the Graph API   Related links      Azure AD SAML limitations  https   learn microsoft com en us entra identity platform id token claims reference groups overage claim     Set up SAML with Azure AD       Configure a Graph API application in Azure AD           Steps To Configure SAML Authentication  Sign in to Grafana and navigate to   Administration   Authentication   Configure SAML         1  General Settings Section  1  Complete the   General settings   fields      For assistance  consult the following table for additional guidance about certain fields        Field                                   Description                                                                                                                                       
                                                                                                                                                                                                                                                                                                                                                                                                                                  Allow signup                          If enabled  you can create new users through the SAML login  If disabled  then only existing Grafana users can log in with SAML                                                                                                                                        Auto login                            If enabled  Grafana will attempt to automatically log in with SAML skipping the login screen                                                                                                                                                                           Single logout                         The SAML single logout feature enables users to log out from all applications associated with the current IdP session established using SAML SSO  For more information  refer to  SAML single logout documentation               Identity provider initiated login     Enables users to log in to Grafana directly from the SAML IdP  For more information  refer to  IdP initiated login documentation                                                                        1  Click   Next  Sign requests         2  Sign Requests Section  1  In the   Sign requests   field  specify whether you want the outgoing requests to be signed  and  if so  then      1  Provide a certificate and a private key that will be used by the service provider  Grafana  and the SAML IdP         Use the  PKCS  8  https   en wikipedia org wiki PKCS 8  format to issue the private key         For more information  refer to an  example on how to generate SAML credentials  
          Alternatively  you can generate a new private key and certificate pair directly from the UI  Click on the  Generate key and certificate  button to open a form where you enter some information you want to be embedded into the new certificate      1  Choose which signature algorithm should be used         The SAML standard recommends using a digital signature for some types of messages  like authentication or logout requests to avoid  man in the middle attacks  https   en wikipedia org wiki Man in the middle attack    1  Click   Next  Connect Grafana with Identity Provider         3  Connect Grafana with Identity Provider Section  1  Configure IdP using Grafana Metadata    1  Copy the   Metadata URL   and provide it to your SAML IdP to establish a connection between Grafana and the IdP          The metadata URL contains all the necessary information for the IdP to establish a connection with Grafana     1  Copy the   Assertion Consumer Service URL   and provide it to your SAML IdP          The Assertion Consumer Service URL is the endpoint where the IdP sends the SAML assertion after the user has been authenticated     1  If you want to use the   Single Logout   feature  copy the   Single Logout Service URL   and provide it to your SAML IdP  1  Finish configuring Grafana using IdP data    1  Provide IdP Metadata to Grafana       The metadata contains all the necessary information for Grafana to establish a connection with the IdP       This can be provided as Base64 encoded value  a path to a file  or as a URL  1  Click   Next  User mapping         4  User Mapping Section  1  If you wish to  map user information from SAML assertions     complete the   Assertion attributes mappings   section      You also need to configure the   Groups attribute   field if you want to use group synchronization  Group sync allows you to automatically map users to Grafana teams or role based access control roles based on their SAML group membership     To learn more about how 
to configure group synchronization  refer to  Configure team sync    and  Configure group attribute sync  https   grafana com docs grafana  GRAFANA VERSION  setup grafana configure security configure group attribute sync  documentation   1  If you want to automatically assign users  roles based on their SAML roles  complete the   Role mapping   section      First  you need to configure the   Role attribute   field to specify which SAML attribute should be used to retrieve SAML role information     Then enter the SAML roles that you want to map to Grafana roles in   Role mapping   section  If you want to map multiple SAML roles to a Grafana role  separate them by a comma and a space  For example   Editor  editor  developer       Role mapping will automatically update user s  basic role    based on their SAML roles every time the user logs in to Grafana     Learn more about  SAML role synchronization      1  If you re setting up Grafana with Azure AD using the SAML protocol and want to fetch user groups from the Graph API  complete the   Azure AD Service Account Configuration   subsection     1  Set up a service account in Azure AD and provide the necessary details in the   Azure AD Service Account Configuration   section     1  Provide the   Client ID   of your Azure AD application     1  Provide the   Client Secret   of your Azure AD application  the   Client Secret   will be used to request an access token from Azure AD     1  Provide the Azure AD request   Access Token URL       1  If you don t have users with more than 150 groups  you can still force the use of the Graph API by enabling the   Force use Graph API   toggle  1  If you have multiple organizations and want to automatically add users to organizations  complete the   Org mapping section        First  you need to configure the   Org attribute   field to specify which SAML attribute should be used to retrieve SAML organization information     Now fill in the   Org mapping   field with mappings from SAML 
organization to Grafana organization  For example   Org mapping  Engineering 2  Sales 2  will map users who belong to  Engineering  or  Sales  organizations in SAML to Grafana organization with ID 2     If you want users to have different roles in different organizations  you can additionally specify a role  For example   Org mapping  Engineering 2 Editor  will map users who belong to  Engineering  organizations in SAML to Grafana organization with ID 2 and assign them Editor role      Organization mapping will automatically update user s organization memberships  and roles  if they have been configured  based on their SAML organization every time the user logs in to Grafana     Learn more about  SAML organization mapping      1  If you want to limit the access to Grafana based on user s SAML organization membership  fill in the   Allowed organizations   field  1  Click   Next  Test and enable         5  Test And Enable Section  1  Click   Save and enable        If there are issues with your configuration  an error message will appear  Refer back to the previous steps to correct the issues and click on  Save and apply  on the top right corner once you are done  1  If there are no configuration issues  SAML integration status will change to  Enabled      Your SAML configuration is now enabled  1  To disable SAML integration  click  Disable  in the top right corner "}
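The SAML UI walkthrough above has file-based equivalents: the same fields map to keys in the `[auth.saml]` section of the Grafana configuration file. The following is a rough sketch only — key names follow Grafana's SAML configuration reference, but all values are placeholders, and the IdP metadata may alternatively be supplied as a Base64 value (`idp_metadata`) or a file path (`idp_metadata_path`):

```ini
[auth.saml]
enabled = true
allow_sign_up = true
auto_login = false
single_logout = true
# IdP metadata: provide exactly one of idp_metadata, idp_metadata_path, idp_metadata_url
idp_metadata_url = https://idp.example.com/saml/metadata
# Assertion attribute mappings (User mapping step); attribute names are placeholders
assertion_attr_login = login
assertion_attr_email = email
assertion_attr_groups = groups
# Role mapping: the SAML role attribute, and the SAML roles mapped to the Editor basic role
assertion_attr_role = role
role_values_editor = editor, developer
# Org mapping and allowed organizations (example values from the walkthrough)
assertion_attr_org = org
org_mapping = Engineering:2:Editor, Sales:2
allowed_organizations = Engineering Sales
```

Note that settings changed through the UI take precedence over these file-based values, so this form is mainly useful for configuration-as-code setups.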
{"questions":"grafana setup documentation labels auth gitlab aliases keywords configuration grafana Grafana GitLab OAuth Guide oauth","answers":"---\naliases:\n  - ..\/..\/..\/auth\/gitlab\/\ndescription: Grafana GitLab OAuth Guide\nkeywords:\n  - grafana\n  - configuration\n  - documentation\n  - oauth\nlabels:\n  products:\n    - cloud\n    - enterprise\n    - oss\nmenuTitle: GitLab OAuth\ntitle: Configure GitLab OAuth authentication\nweight: 1000\n---\n\n# Configure GitLab OAuth authentication\n\n\n\nThis topic describes how to configure GitLab OAuth authentication.\n\n\nIf users use the same email address in GitLab that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to the [Using the same email address to log in with different identity providers]() documentation for more information.\n\n\n## Before you begin\n\nEnsure you know how to create a GitLab OAuth application. Consult GitLab's documentation on [creating a GitLab OAuth application](https:\/\/docs.gitlab.com\/ee\/integration\/oauth_provider.html) for more information.\n\n### Create a GitLab OAuth Application\n\n1. Log in to your GitLab account and go to **Profile > Preferences > Applications**.\n1. Click **Add new application**.\n1. Fill out the fields.\n   - In the **Redirect URI** field, enter the following: `https:\/\/<YOUR-GRAFANA-URL>\/login\/gitlab` and check `openid`, `email`, `profile` in the **Scopes** list.\n   - Leave the **Confidential** checkbox checked.\n1. Click **Save application**.\n1. Note your **Application ID** (this is the `Client Id`) and **Secret** (this is the `Client Secret`).\n\n## Configure GitLab authentication client using the Grafana UI\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.\n\n\nAs a Grafana Admin, you can configure the GitLab OAuth client from within Grafana using the Grafana UI. 
To do this, navigate to the **Administration > Authentication > GitLab** page and fill in the form. If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values. Otherwise, the form will contain default values.\n\nAfter you have filled in the form, click **Save** to save the configuration. If the save was successful, Grafana will apply the new configuration.\n\nIf you need to reset changes you made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.\n\n\nIf you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.\n\n\nRefer to [configuration options]() for more information.\n\n## Configure GitLab authentication client using the Terraform provider\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. 
Supported in the Terraform provider since v2.12.0.\n\n\n```terraform\nresource \"grafana_sso_settings\" \"gitlab_sso_settings\" {\n  provider_name = \"gitlab\"\n  oauth2_settings {\n    name                  = \"Gitlab\"\n    client_id             = \"YOUR_GITLAB_APPLICATION_ID\"\n    client_secret         = \"YOUR_GITLAB_APPLICATION_SECRET\"\n    allow_sign_up         = true\n    auto_login            = false\n    scopes                = \"openid email profile\"\n    allowed_domains       = \"mycompany.com mycompany.org\"\n    role_attribute_path   = \"contains(groups[*], 'example-group') && 'Editor' || 'Viewer'\"\n    role_attribute_strict = false\n    allowed_groups        = \"[\\\"admins\\\", \\\"software engineers\\\", \\\"developers\/frontend\\\"]\"\n    use_pkce              = true\n    use_refresh_token     = true\n  }\n}\n```\n\nGo to the [Terraform Registry](https:\/\/registry.terraform.io\/providers\/grafana\/grafana\/latest\/docs\/resources\/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.\n\n## Configure GitLab authentication client using the Grafana configuration file\n\nEnsure that you have access to the [Grafana configuration file]().\n\n### Steps\n\nTo configure GitLab authentication with Grafana, follow these steps:\n\n1. Create an OAuth application in GitLab.\n\n   1. Set the redirect URI to `http:\/\/<my_grafana_server_name_or_ip>:<grafana_server_port>\/login\/gitlab`.\n\n      Ensure that the Redirect URI is the complete HTTP address that you use to access Grafana via your browser, but with the appended path of `\/login\/gitlab`.\n\n      For the Redirect URI to be correct, it might be necessary to set the `root_url` option in the `[server]` section of the Grafana configuration file. For example, if you are serving Grafana behind a proxy.\n\n   1. Set the OAuth2 scopes to `openid`, `email` and `profile`.\n\n1. 
Refer to the following table to update field values located in the `[auth.gitlab]` section of the Grafana configuration file:\n\n   | Field | Description |\n   | --- | --- |\n   | `client_id`, `client_secret` | These values must match the `Application ID` and `Secret` from your GitLab OAuth application. |\n   | `enabled` | Enables GitLab authentication. Set this value to `true`. |\n\n   Review the list of other GitLab [configuration options]() and complete them, as necessary.\n\n1. Optional: [Configure a refresh token]():\n\n   a. Set `use_refresh_token` to `true` in the `[auth.gitlab]` section of the Grafana configuration file.\n\n1. [Configure role mapping]().\n1. Optional: [Configure group synchronization]().\n1. Restart Grafana.\n\n   You should now see a GitLab login button on the login page and be able to log in or sign up with your GitLab account.\n\n### Configure a refresh token\n\nWhen a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token.\n\nGrafana uses a refresh token to obtain a new access token without requiring the user to log in again. If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.\n\nBy default, GitLab provides a refresh token.\n\nRefresh token fetching and the access token expiration check are enabled by default for the GitLab provider since Grafana v10.1.0. 
If you would like to disable the access token expiration check, set the `use_refresh_token` configuration value to `false`.\n\n\nThe `accessTokenExpirationCheck` feature toggle was removed in Grafana v10.3.0, and the `use_refresh_token` configuration value is used instead to configure refresh token fetching and the access token expiration check.\n\n\n### Configure allowed groups\n\nTo limit access to authenticated users that are members of one or more [GitLab groups](https:\/\/docs.gitlab.com\/ce\/user\/group\/index.html), set `allowed_groups` to a comma- or space-separated list of groups.\n\nGitLab groups are referenced by the group name. For example, `developers`. To reference a subgroup `frontend`, use `developers\/frontend`.\nNote that in GitLab, the group or subgroup name does not always match its display name, especially if the display name contains spaces or special characters.\nMake sure you always use the group or subgroup name as it appears in the URL of the group or subgroup.\n\n### Configure role mapping\n\nUnless the `skip_org_role_sync` option is enabled, the user's role will be set to the role retrieved from GitLab upon user login.\n\nThe user's role is retrieved using a [JMESPath](http:\/\/jmespath.org\/examples.html) expression from the `role_attribute_path` configuration option.\nTo map the server administrator role, use the `allow_assign_grafana_admin` configuration option.\nRefer to [configuration options]() for more information.\n\nYou can use the `org_mapping` configuration option to assign the user to multiple organizations and specify their role based on their GitLab group membership. For more information, refer to the [Org roles mapping example](#org-roles-mapping-example). 
If the org role mapping (`org_mapping`) is specified and GitLab returns a valid role, then the user will get the higher of the two roles.\n\nIf no valid role is found, the user is assigned the role specified by [the `auto_assign_org_role` option]().\nYou can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions.\n\nTo ease configuration of a proper JMESPath expression, go to [JMESPath](http:\/\/jmespath.org\/) to test and evaluate expressions with custom payloads.\n\n### Role mapping examples\n\nThis section includes examples of JMESPath expressions used for role mapping.\n\n#### Org roles mapping example\n\nThe GitLab integration uses the external users' groups in the `org_mapping` configuration to map organizations and roles based on their GitLab group membership.\n\nIn this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs.\n\nThe external user is part of the following GitLab groups: `group-1` and `group-2`.\n\nConfig:\n\n```ini\norg_mapping = group-1:org_foo:Viewer group-2:org_bar:Editor *:org_baz:Editor\n```\n\n#### Map roles using user information from OAuth token\n\nIn this example, the user with email `admin@company.com` has been granted the `Admin` role.\nAll other users are granted the `Viewer` role.\n\n```ini\nrole_attribute_path = email=='admin@company.com' && 'Admin' || 'Viewer'\n```\n\n#### Map roles using groups\n\nIn this example, users from the GitLab group 'example-group' are granted the `Editor` role.\nAll other users are granted the `Viewer` role.\n\n```ini\nrole_attribute_path = contains(groups[*], 'example-group') && 'Editor' || 'Viewer'\n```\n\n#### Map server administrator role\n\nIn this example, the user with email `admin@company.com` has been granted 
the `Admin` organization role as well as the Grafana server admin role.\nAll other users are granted the `Viewer` role.\n\n```ini\nrole_attribute_path = email=='admin@company.com' && 'GrafanaAdmin' || 'Viewer'\n```\n\n#### Map one role to all users\n\nIn this example, all users will be assigned the `Viewer` role regardless of the user information received from the identity provider.\n\n```ini\nrole_attribute_path = \"'Viewer'\"\nskip_org_role_sync = false\n```\n\n### Example of GitLab configuration in Grafana\n\nThis section includes an example of GitLab configuration in the Grafana configuration file.\n\n```ini\n[auth.gitlab]\nenabled = true\nallow_sign_up = true\nauto_login = false\nclient_id = YOUR_GITLAB_APPLICATION_ID\nclient_secret = YOUR_GITLAB_APPLICATION_SECRET\nscopes = openid email profile\nauth_url = https:\/\/gitlab.com\/oauth\/authorize\ntoken_url = https:\/\/gitlab.com\/oauth\/token\napi_url = https:\/\/gitlab.com\/api\/v4\nrole_attribute_path = contains(groups[*], 'example-group') && 'Editor' || 'Viewer'\nrole_attribute_strict = false\nallow_assign_grafana_admin = false\nallowed_groups = [\"admins\", \"software engineers\", \"developers\/frontend\"]\nallowed_domains = mycompany.com mycompany.org\ntls_skip_verify_insecure = false\nuse_pkce = true\nuse_refresh_token = true\n```\n\n## Configure group synchronization\n\n\nAvailable in [Grafana Enterprise](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/introduction\/grafana-enterprise) and [Grafana Cloud](\/docs\/grafana-cloud\/).\n\n\nGrafana supports synchronization of GitLab groups with Grafana teams and roles. This allows automatically assigning users to the appropriate teams or granting them the mapped roles.\nTeams and roles get synchronized when the user logs in.\n\nGitLab groups are referenced by the group name. For example, `developers`. 
To reference a subgroup `frontend`, use `developers\/frontend`.\nNote that in GitLab, the group or subgroup name does not always match its display name, especially if the display name contains spaces or special characters.\nMake sure you always use the group or subgroup name as it appears in the URL of the group or subgroup.\n\nTo learn more about group synchronization, refer to [Configure team sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-team-sync) and [Configure group attribute sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-group-attribute-sync).\n\n## Configuration options\n\nThe table below describes all GitLab OAuth configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables]().\n\n\nIf the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors. 
For example, `role_attribute_path: \"role:view\"`.\n\n\n| Setting | Required | Supported on Cloud | Description | Default |\n| --- | --- | --- | --- | --- |\n| `enabled` | Yes | Yes | Whether GitLab OAuth authentication is allowed. | `false` |\n| `client_id` | Yes | Yes | Client ID provided by your GitLab OAuth app. | |\n| `client_secret` | Yes | Yes | Client secret provided by your GitLab OAuth app. | |\n| `auth_url` | Yes | Yes | Authorization endpoint of your GitLab OAuth provider. If you use your own instance of GitLab instead of gitlab.com, adjust `auth_url` by replacing the `gitlab.com` hostname with your own. | `https:\/\/gitlab.com\/oauth\/authorize` |\n| `token_url` | Yes | Yes | Endpoint used to obtain GitLab OAuth access token. If you use your own instance of GitLab instead of gitlab.com, adjust `token_url` by replacing the `gitlab.com` hostname with your own. 
                                                                                                                                                                                                                   | `https:\/\/gitlab.com\/oauth\/token`     |\n| `api_url`                    | No       | Yes                | Grafana uses `<api_url>\/user` endpoint to obtain GitLab user information compatible with [OpenID UserInfo](https:\/\/connect2id.com\/products\/server\/docs\/api\/userinfo).                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | `https:\/\/gitlab.com\/api\/v4`          |\n| `name`                       | No       | Yes                | Name used to refer to the GitLab authentication in the Grafana user interface.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
| `GitLab`                             |\n| `icon`                       | No       | Yes                | Icon used for GitLab authentication in the Grafana user interface.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | `gitlab`                             |\n| `scopes`                     | No       | Yes                | List of comma or space-separated GitLab OAuth scopes.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | `openid email profile`               |\n| `allow_sign_up`              | No       | Yes                | Whether to allow new Grafana user creation through GitLab login. 
If set to `false`, then only existing Grafana users can log in with GitLab OAuth.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | `true`                               |\n| `auto_login`                 | No       | Yes                | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    | `false`                              |\n| `role_attribute_path`        | No       | Yes                | [JMESPath](http:\/\/jmespath.org\/examples.html) expression to use for Grafana role lookup. Grafana will first evaluate the expression using the GitLab OAuth token. 
If no role is found, Grafana creates a JSON data with `groups` key that maps to groups obtained from GitLab's `\/oauth\/userinfo` endpoint, and evaluates the expression using this data. Finally, if a valid role is still not found, the expression is evaluated against the user information retrieved from `api_url\/users` endpoint and groups retrieved from `api_url\/groups` endpoint. The result of the evaluation should be a valid Grafana role (`None`, `Viewer`, `Editor`, `Admin` or `GrafanaAdmin`). For more information on user role mapping, refer to [Configure role mapping](). |                                      |\n| `role_attribute_strict`      | No       | Yes                | Set to `true` to deny user login if the Grafana role cannot be extracted using `role_attribute_path`. For more information on user role mapping, refer to [Configure role mapping]().                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             | `false`                              |\n| `org_mapping`                | No       | No                 | List of comma- or space-separated `<ExternalGitlabGroupName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning \"All users\". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example).                                                                                                                    
                                                                                                                                                                                                                                                                                                                        |                                      |\n| `skip_org_role_sync`         | No       | Yes                | Set to `true` to stop automatically syncing user roles.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   | `false`                              |\n| `allow_assign_grafana_admin` | No       | No                 | Set to `true` to enable automatic sync of the Grafana server administrator role. If this option is set to `true` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user the server administrator privileges and organization administrator role. If this option is set to `false` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user only organization administrator role. For more information on user role mapping, refer to [Configure role mapping]().                                                                                                                                                      
                                          | `false`                              |\n| `allowed_domains`            | No       | Yes                | List of comma or space-separated domains. User must belong to at least one domain to log in.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |                                      |\n| `allowed_groups`             | No       | Yes                | List of comma or space-separated groups. The user should be a member of at least one group to log in.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     
|                                      |\n| `tls_skip_verify_insecure`   | No       | No                 | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL\/TLS susceptible to man-in-the-middle attacks.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    | `false`                              |\n| `tls_client_cert`            | No       | No                 | The path to the certificate.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |                                      |\n| `tls_client_key`             | No       | No                 | The path to the key.                                                                                                      
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |                                      |\n| `tls_client_ca`              | No       | No                 | The path to the trusted certificate authority list.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |                                      |\n| `use_pkce`                   | No       | Yes                | Set to `true` to use [Proof Key for Code Exchange (PKCE)](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7636). Grafana uses the SHA256 based `S256` challenge method and a 128 bytes (base64url encoded) code verifier.                                                                                                                                         
                                                                                                                                                                                                                                                                                                                                                                                                                                               | `true`                               |\n| `use_refresh_token`          | No       | Yes                | Set to `true` to use refresh token and check access token expiration. The `accessTokenExpirationCheck` feature toggle should also be enabled to use refresh token.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | `true`                               |\n| `signout_redirect_url`       | No       | Yes                | URL to redirect to after the user logs out.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      
                                                                                                                                                                                                         |                                      |","site":"grafana setup","answers_cleaned":"    aliases               auth gitlab  description  Grafana GitLab OAuth Guide keywords      grafana     configuration     documentation     oauth labels    products        cloud       enterprise       oss menuTitle  GitLab OAuth title  Configure GitLab OAuth authentication weight  1000        Configure GitLab OAuth authentication    This topic describes how to configure GitLab OAuth authentication    If Users use the same email address in GitLab that they use with other authentication providers  such as Grafana com   you need to do additional configuration to ensure that the users are matched correctly  Please refer to the  Using the same email address to login with different identity providers    documentation for more information       Before you begin  Ensure you know how to create a GitLab OAuth application  Consult GitLab s documentation on  creating a GitLab OAuth application  https   docs gitlab com ee integration oauth provider html  for more information       Create a GitLab OAuth Application  1  Log in to your GitLab account and go to   Profile   Preferences   Applications    1  Click   Add new application    1  Fill out the fields       In the   Redirect URI   field  enter the following   https    YOUR GRAFANA URL  login gitlab  and check  openid    email    profile  in the   Scopes   list       Leave the   Confidential   checkbox checked  1  Click   Save application    1  Note your   Application ID    this is the  Client Id   and   Secret    this is the  Client Secret        Configure GitLab authentication client using the Grafana UI   Available in Public Preview in Grafana 10 4 behind the  ssoSettingsApi  feature toggle    As a Grafana Admin  you can configure GitLab 
OAuth client from within Grafana using the Grafana UI. To do this, navigate to the **Administration > Authentication > GitLab** page and fill in the form. If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values; otherwise the form will contain default values.

After you have filled in the form, click **Save** to save the configuration. If the save was successful, Grafana will apply the new configuration.

If you need to reset changes you made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.

> If you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.

Refer to [configuration options](#configuration-options) for more information.

## Configure GitLab authentication client using the Terraform provider

> Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. Supported in the Terraform provider since v2.12.0.

```terraform
resource "grafana_sso_settings" "gitlab_sso_settings" {
  provider_name = "gitlab"
  oauth2_settings {
    name                  = "Gitlab"
    client_id             = "YOUR_GITLAB_APPLICATION_ID"
    client_secret         = "YOUR_GITLAB_APPLICATION_SECRET"
    allow_sign_up         = true
    auto_login            = false
    scopes                = "openid email profile"
    allowed_domains       = "mycompany.com mycompany.org"
    role_attribute_path   = "contains(groups[*], 'example_group') && 'Editor' || 'Viewer'"
    role_attribute_strict = false
    allowed_groups        = "[\"admins\", \"software engineers\", \"developers/frontend\"]"
    use_pkce              = true
    use_refresh_token     = true
  }
}
```

Go to [Terraform Registry](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.

## Configure GitLab authentication client using the Grafana configuration file

Ensure that you have access to the Grafana configuration file.

### Steps

To configure GitLab authentication with Grafana, follow these steps:

1. Create an OAuth application in GitLab.

   1. Set the redirect URI to `http://<my_grafana_server_name_or_ip>:<grafana_server_port>/login/gitlab`.

      Ensure that the Redirect URI is the complete HTTP address that you use to access Grafana via your browser, but with the appended path of `/login/gitlab`.

      For the Redirect URI to be correct, it might be necessary to set the `root_url` option in the `[server]` section of the Grafana configuration file, for example, if you are serving Grafana behind a proxy.

   1. Set the OAuth2 scopes to `openid`, `email` and `profile`.

1. Refer to the following table to update field values located in the `[auth.gitlab]` section of the Grafana configuration file:

   | Field                        | Description |
   | ---------------------------- | ----------- |
   | `client_id`, `client_secret` | These values must match the `Application ID` and `Secret` from your GitLab OAuth application. |
   | `enabled`                    | Enables GitLab authentication. Set this value to `true`. |

   Review the list of other GitLab [configuration options](#configuration-options) and complete them, as necessary.

1. Optional: Configure a refresh token:

   a. Set `use_refresh_token` to `true` in the `[auth.gitlab]` section of the Grafana configuration file.

1. [Configure role mapping](#configure-role-mapping).
1. Optional: [Configure group synchronization](#configure-group-synchronization).
1. Restart Grafana.

You should now see a
GitLab login button on the login page, and you will be able to log in or sign up with your GitLab account.

## Configure a refresh token

When a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token.

Grafana uses a refresh token to obtain a new access token without requiring the user to log in again. If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.

By default, GitLab provides a refresh token.

Refresh token fetching and access token expiration checks have been enabled by default for the GitLab provider since Grafana v10.1.0. If you would like to disable the access token expiration check, set the `use_refresh_token` configuration value to `false`.

> The `accessTokenExpirationCheck` feature toggle has been removed in Grafana v10.3.0, and the `use_refresh_token` configuration value is used instead for configuring refresh token fetching and access token expiration checks.

## Configure allowed groups

To limit access to authenticated users that are members of one or more [GitLab groups](https://docs.gitlab.com/ce/user/group/index.html), set `allowed_groups` to a comma- or space-separated list of groups.

GitLab groups are referenced by the group name. For example, `developers`. To reference a subgroup `frontend`, use `developers/frontend`. Note that in GitLab, the group or subgroup name does not always match its display name, especially if the display name contains spaces or special characters. Make sure you always use the group or subgroup name as it appears in the URL of the group or subgroup.

## Configure role mapping

Unless the `skip_org_role_sync` option is enabled, the user's role will be set to the role retrieved from GitLab upon user login.

The user's role is retrieved using a [JMESPath](http://jmespath.org/examples.html) expression from the `role_attribute_path` configuration option. To map the server administrator role, use the `allow_assign_grafana_admin` configuration option. Refer to [configuration options](#configuration-options) for more information.

You can use the `org_mapping` configuration option to assign the user to multiple organizations and specify their role based on their GitLab group membership. For more information, refer to [Org roles mapping example](#org-roles-mapping-example). If the org role mapping (`org_mapping`) is specified and GitLab returns a valid role, the user is granted the highest of the two roles.

If no valid role is found, the user is assigned the role specified by the `auto_assign_org_role` option. You can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions.

To ease configuration of a proper JMESPath expression, go to [JMESPath](http://jmespath.org/) to test and evaluate expressions with custom payloads.

### Role mapping examples

This section includes examples of JMESPath expressions used for role mapping.

#### Org roles mapping example

The GitLab integration uses the external users' groups in the `org_mapping` configuration to map organizations and roles based on their GitLab group membership.

In this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs.

The external user is part of the following GitLab groups: `group_1` and `group_2`.

Config:

```ini
org_mapping = group_1:org_foo:Viewer group_2:org_bar:Editor *:org_baz:Editor
```

#### Map roles using user information from OAuth token

In this example, the user with email `admin@company.com` has been granted the `Admin` role. All other users are granted the `Viewer` role.

```ini
role_attribute_path = email=='admin@company.com' && 'Admin' || 'Viewer'
```
#### Map roles using groups

In this example, the user from the GitLab group `example_group` has been granted the `Editor` role. All other users are granted the `Viewer` role.

```ini
role_attribute_path = contains(groups[*], 'example_group') && 'Editor' || 'Viewer'
```

#### Map server administrator role

In this example, the user with email `admin@company.com` has been granted the `Admin` organization role as well as the Grafana server admin role. All other users are granted the `Viewer` role.

```bash
role_attribute_path = email=='admin@company.com' && 'GrafanaAdmin' || 'Viewer'
```

#### Map one role to all users

In this example, all users are assigned the `Viewer` role regardless of the user information received from the identity provider.

```ini
role_attribute_path = "'Viewer'"
skip_org_role_sync = false
```

## Example of GitLab configuration in Grafana

This section includes an example of GitLab configuration in the Grafana configuration file.

```bash
[auth.gitlab]
enabled = true
allow_sign_up = true
auto_login = false
client_id = YOUR_GITLAB_APPLICATION_ID
client_secret = YOUR_GITLAB_APPLICATION_SECRET
scopes = openid email profile
auth_url = https://gitlab.com/oauth/authorize
token_url = https://gitlab.com/oauth/token
api_url = https://gitlab.com/api/v4
role_attribute_path = contains(groups[*], 'example_group') && 'Editor' || 'Viewer'
role_attribute_strict = false
allow_assign_grafana_admin = false
allowed_groups = ["admins", "software engineers", "developers/frontend"]
allowed_domains = mycompany.com mycompany.org
tls_skip_verify_insecure = false
use_pkce = true
use_refresh_token = true
```

## Configure group synchronization

> Available in [Grafana Enterprise](https://grafana.com/docs/grafana/GRAFANA_VERSION/introduction/grafana-enterprise/) and [Grafana Cloud](/docs/grafana-cloud/).

Grafana supports synchronization of GitLab groups with Grafana teams and roles. This allows automatically assigning users to the appropriate teams or granting them the
mapped roles. Teams and roles get synchronized when the user logs in.

GitLab groups are referenced by the group name. For example, `developers`. To reference a subgroup `frontend`, use `developers/frontend`. Note that in GitLab, the group or subgroup name does not always match its display name, especially if the display name contains spaces or special characters. Make sure you always use the group or subgroup name as it appears in the URL of the group or subgroup.

To learn more about group synchronization, refer to [Configure team sync](https://grafana.com/docs/grafana/GRAFANA_VERSION/setup-grafana/configure-security/configure-team-sync/) and [Configure group attribute sync](https://grafana.com/docs/grafana/GRAFANA_VERSION/setup-grafana/configure-security/configure-group-attribute-sync/).

## Configuration options

The table below describes all GitLab OAuth configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to "Override configuration with environment variables".

> If the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors. For example, `role_attribute_path: "role:view"`.

| Setting     | Required | Supported on Cloud | Description | Default |
| ----------- | -------- | ------------------ | ----------- | ------- |
| `enabled`   | Yes      | Yes                | Whether GitLab OAuth authentication is allowed. | `false` |
| `client_id` | Yes      | Yes                | Client ID provided by your GitLab OAuth app. | |
     Client ID provided by your GitLab OAuth app                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           client secret                 Yes        Yes                  Client secret provided by your GitLab OAuth app                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       auth url                      Yes        Yes                  Authorization endpoint of your GitLab OAuth provider  If you use your own instance of GitLab instead of gitlab com  adjust  auth url  by replacing the  gitlab com  hostname with your own                                         
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          https   gitlab com oauth authorize       token url                     Yes        Yes                  Endpoint used to obtain GitLab OAuth access token  If you use your own instance of GitLab instead of gitlab com  adjust  token url  by replacing the  gitlab com  hostname with your own                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     https   gitlab com oauth token           api url                       No         Yes                  Grafana uses   api url  user  endpoint to obtain GitLab user information compatible with  OpenID UserInfo  https   connect2id com products server docs api userinfo                                                                                                                                                                                                                                                                                                        
                                                                                                                                                                                                                                                                                                                                  https   gitlab com api v4                name                          No         Yes                  Name used to refer to the GitLab authentication in the Grafana user interface                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                GitLab                                   icon                          No         Yes                  Icon used for GitLab authentication in the Grafana user interface                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
                                                                                          gitlab                                   scopes                        No         Yes                  List of comma or space separated GitLab OAuth scopes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         openid email profile                     allow sign up                 No         Yes                  Whether to allow new Grafana user creation through GitLab login  If set to  false   then only existing Grafana users can log in with GitLab OAuth                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            true                                     auto login                    No         Yes                  Set to  true  to enable users to 
bypass the login screen and automatically log in  This setting is ignored if you configure multiple auth providers to use auto login                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        false                                    role attribute path           No         Yes                   JMESPath  http   jmespath org examples html  expression to use for Grafana role lookup  Grafana will first evaluate the expression using the GitLab OAuth token  If no role is found  Grafana creates a JSON data with  groups  key that maps to groups obtained from GitLab s   oauth userinfo  endpoint  and evaluates the expression using this data  Finally  if a valid role is still not found  the expression is evaluated against the user information retrieved from  api url users  endpoint and groups retrieved from  api url groups  endpoint  The result of the evaluation should be a valid Grafana role   None    Viewer    Editor    Admin  or  GrafanaAdmin    For more information on user role mapping  refer to  Configure role mapping                                                 role attribute strict         No         Yes                  Set to  true  to deny user login if the Grafana role cannot be extracted using  role attribute path   For more information on user role mapping  refer to  Configure role mapping                                                                                                                                
                                                                                                                                                                                                                                                                                                                                                                                                                                                    false                                    org mapping                   No         No                   List of comma  or space separated   ExternalGitlabGroupName   OrgIdOrName   Role   mappings  Value can be     meaning  All users   Role is optional and can have the following values   None    Viewer    Editor  or  Admin   For more information on external organization to role mapping  refer to  Org roles mapping example   org roles mapping example                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          skip org role sync            No         Yes                  Set to  true  to stop automatically syncing user roles                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           
                                                                                                                                                                                                            false                                    allow assign grafana admin    No         No                   Set to  true  to enable automatic sync of the Grafana server administrator role  If this option is set to  true  and the result of evaluating  role attribute path  for a user is  GrafanaAdmin   Grafana grants the user the server administrator privileges and organization administrator role  If this option is set to  false  and the result of evaluating  role attribute path  for a user is  GrafanaAdmin   Grafana grants the user only organization administrator role  For more information on user role mapping  refer to  Configure role mapping                                                                                                                                                                                                       false                                    allowed domains               No         Yes                  List of comma or space separated domains  User must belong to at least one domain to log in                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           allowed groups             
   No         Yes                  List of comma or space separated groups  The user should be a member of at least one group to log in                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  tls skip verify insecure      No         No                   If set to  true   the client accepts any certificate presented by the server and any host name in that certificate   You should only use this for testing   because this mode leaves SSL TLS susceptible to man in the middle attacks                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        false                                    tls client cert               No         No                   The path to the certificate                                                                                                                                                                          
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 tls client key                No         No                   The path to the key                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   tls client ca                 No         No                   The path to the trusted certificate authority list                                                                                                                                                                                                                                                                                                                                                                                           
                                                                                                                                                                                                                                                                                                                                                                                                         use pkce                      No         Yes                  Set to  true  to use  Proof Key for Code Exchange  PKCE   https   datatracker ietf org doc html rfc7636   Grafana uses the SHA256 based  S256  challenge method and a 128 bytes  base64url encoded  code verifier                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            true                                     use refresh token             No         Yes                  Set to  true  to use refresh token and check access token expiration  The  accessTokenExpirationCheck  feature toggle should also be enabled to use refresh token                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    
                                                                                                                        true                                     signout redirect url          No         Yes                  URL to redirect to after the user logs out                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        "}
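Putting the options above together, a minimal `[auth.gitlab]` block in `grafana.ini` might look like the following sketch. The client ID/secret values, the `developers` group, and the JMESPath expression are placeholders for illustration, not values from the table:

```bash
[auth.gitlab]
enabled = true
allow_sign_up = true
# Placeholders: use the Application ID and Secret from your GitLab OAuth app
client_id = YOUR_GITLAB_APPLICATION_ID
client_secret = YOUR_GITLAB_APPLICATION_SECRET
scopes = openid email profile
auth_url = https://gitlab.com/oauth/authorize
token_url = https://gitlab.com/oauth/token
api_url = https://gitlab.com/api/v4
# Hypothetical group: only members of `developers` may log in
allowed_groups = developers
# Map members of the developers/frontend subgroup to Editor, everyone else to Viewer
role_attribute_path = contains(groups[*], 'developers/frontend') && 'Editor' || 'Viewer'
use_pkce = true
```

If you run a self-hosted GitLab instance, replace the `gitlab.com` hostname in `auth_url`, `token_url`, and `api_url` as described in the table.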
{"questions":"grafana setup documentation Grafana Auth Proxy Guide aliases tutorials authproxy proxy auth auth proxy keywords configuration grafana","answers":"---\naliases:\n  - ..\/..\/..\/auth\/auth-proxy\/\n  - ..\/..\/..\/tutorials\/authproxy\/\ndescription: Grafana Auth Proxy Guide\nkeywords:\n  - grafana\n  - configuration\n  - documentation\n  - proxy\nlabels:\n  products:\n    - cloud\n    - enterprise\n    - oss\nmenuTitle: Auth proxy\ntitle: Configure auth proxy authentication\nweight: 1500\n---\n\n# Configure auth proxy authentication\n\nYou can configure Grafana to let an HTTP reverse proxy handle authentication. Popular web servers have a very\nextensive list of pluggable authentication modules, and any of them can be used with the AuthProxy feature.\nBelow we detail the configuration options for auth proxy.\n\n```bash\n[auth.proxy]\n# Defaults to false, but set to true to enable this feature\nenabled = true\n# HTTP Header name that will contain the username or email\nheader_name = X-WEBAUTH-USER\n# HTTP Header property, defaults to `username` but can also be `email`\nheader_property = username\n# Set to `true` to enable auto sign up of users who do not exist in Grafana DB. 
Defaults to `true`.\nauto_sign_up = true\n# Define cache time to live in minutes\n# If combined with Grafana LDAP integration it is also the sync interval\n# Set to 0 to always fetch and sync the latest user data\nsync_ttl = 15\n# Limit where auth proxy requests come from by configuring a list of IP addresses.\n# This can be used to prevent users spoofing the X-WEBAUTH-USER header.\n# Example `whitelist = 192.168.1.1, 192.168.1.0\/24, 2001::23, 2001::0\/120`\nwhitelist =\n# Optionally define more headers to sync other user attributes\n# Example `headers = Name:X-WEBAUTH-NAME Role:X-WEBAUTH-ROLE Email:X-WEBAUTH-EMAIL Groups:X-WEBAUTH-GROUPS`\nheaders =\n# Non-ASCII strings in header values are encoded using quoted-printable encoding\n;headers_encoded = false\n# See the Login token and session cookie section below for details on this setting\nenable_login_token = false\n```\n\n## Interacting with Grafana\u2019s AuthProxy via curl\n\nYou can test the auth proxy by sending the configured header directly. With `X-WEBAUTH-USER` set, Grafana treats the request as already authenticated:\n\n```bash\ncurl -H \"X-WEBAUTH-USER: admin\"  http:\/\/localhost:3000\/api\/users\n[\n    {\n        \"id\":1,\n        \"name\":\"\",\n        \"login\":\"admin\",\n        \"email\":\"admin@localhost\",\n        \"isAdmin\":true\n    }\n]\n```\n\nWe can then send a second request to the `\/api\/user` method which will return the details of the logged-in user. We will use this request to show how Grafana automatically adds the new user we specify to the system. Here we create a new user called \u201canthony\u201d.\n\n```bash\ncurl -H \"X-WEBAUTH-USER: anthony\" http:\/\/localhost:3000\/api\/user\n{\n    \"email\":\"anthony\",\n    \"name\":\"\",\n    \"login\":\"anthony\",\n    \"theme\":\"\",\n    \"orgId\":1,\n    \"isGrafanaAdmin\":false\n}\n```\n\n## Making Apache\u2019s auth work together with Grafana\u2019s AuthProxy\n\nI\u2019ll demonstrate how to use Apache for authenticating users. In this example we use BasicAuth with Apache\u2019s text file based authentication handler, i.e. htpasswd files. 
However, any available Apache authentication capabilities could be used.\n\n### Apache BasicAuth\n\nIn this example we use Apache as a reverse proxy in front of Grafana. Apache handles the Authentication of users before forwarding requests to the Grafana backend service.\n\n#### Apache configuration\n\n```bash\n    <VirtualHost *:80>\n        ServerAdmin webmaster@authproxy\n        ServerName authproxy\n        ErrorLog \"logs\/authproxy-error_log\"\n        CustomLog \"logs\/authproxy-access_log\" common\n\n        <Proxy *>\n            AuthType Basic\n            AuthName GrafanaAuthProxy\n            AuthBasicProvider file\n            AuthUserFile \/etc\/apache2\/grafana_htpasswd\n            Require valid-user\n\n            RewriteEngine On\n            RewriteRule .* - [E=PROXY_USER:%{LA-U:REMOTE_USER},NS]\n            RequestHeader set X-WEBAUTH-USER \"%{PROXY_USER}e\"\n        <\/Proxy>\n\n        RequestHeader unset Authorization\n\n        ProxyRequests Off\n        ProxyPass \/ http:\/\/localhost:3000\/\n        ProxyPassReverse \/ http:\/\/localhost:3000\/\n    <\/VirtualHost>\n```\n\n- The first four lines of the virtualhost configuration are standard, so we won\u2019t go into detail on what they do.\n\n- We use a **\\<proxy>** configuration block for applying our authentication rules to every proxied request. These rules include requiring basic authentication where user:password credentials are stored in the **\/etc\/apache2\/grafana_htpasswd** file. This file can be created with the `htpasswd` command.\n\n  - The next part of the configuration is the tricky part. We use Apache\u2019s rewrite engine to create our **X-WEBAUTH-USER header**, populated with the authenticated user.\n\n    - **RewriteRule .\\* - [E=PROXY_USER:%{LA-U:REMOTE_USER}, NS]**: This line is a little bit of magic. 
For every request, it uses the rewrite engine's look-ahead (LA-U) feature to determine what the REMOTE_USER variable would be set to after processing the request, then assigns the result to the variable PROXY_USER. This is necessary as the REMOTE_USER variable is not available to the RequestHeader function.\n\n    - **RequestHeader set X-WEBAUTH-USER \u201c%{PROXY_USER}e\u201d**: With the authenticated username now stored in the PROXY_USER variable, we create a new HTTP request header that will be sent to our backend Grafana containing the username.\n\n- The **RequestHeader unset Authorization** removes the Authorization header from the HTTP request before it is forwarded to Grafana. This ensures that Grafana does not try to authenticate the user using these credentials (BasicAuth is a supported authentication handler in Grafana).\n\n- The last three lines are then just standard reverse proxy configuration to direct all authenticated requests to our Grafana server running on port 3000.\n\n## Full walkthrough using Docker\n\nFor this example, we use the official Grafana Docker image available at [Docker Hub](https:\/\/hub.docker.com\/r\/grafana\/grafana\/).\n\n- Create a file `grafana.ini` with the following contents:\n\n```bash\n[users]\nallow_sign_up = false\nauto_assign_org = true\nauto_assign_org_role = Editor\n\n[auth.proxy]\nenabled = true\nheader_name = X-WEBAUTH-USER\nheader_property = username\nauto_sign_up = true\n```\n\nLaunch the Grafana container, using our custom grafana.ini to replace `\/etc\/grafana\/grafana.ini`. 
We don't expose\nany ports for this container as it will only be connected to by our Apache container.\n\n```bash\ndocker run -i -v $(pwd)\/grafana.ini:\/etc\/grafana\/grafana.ini --name grafana grafana\/grafana\n```\n\n### Apache Container\n\nFor this example we use the official Apache docker image available at [Docker Hub](https:\/\/hub.docker.com\/_\/httpd\/)\n\n- Create a file `httpd.conf` with the following contents\n\n```bash\nServerRoot \"\/usr\/local\/apache2\"\nListen 80\nLoadModule mpm_event_module modules\/mod_mpm_event.so\nLoadModule authn_file_module modules\/mod_authn_file.so\nLoadModule authn_core_module modules\/mod_authn_core.so\nLoadModule authz_host_module modules\/mod_authz_host.so\nLoadModule authz_user_module modules\/mod_authz_user.so\nLoadModule authz_core_module modules\/mod_authz_core.so\nLoadModule auth_basic_module modules\/mod_auth_basic.so\nLoadModule log_config_module modules\/mod_log_config.so\nLoadModule env_module modules\/mod_env.so\nLoadModule headers_module modules\/mod_headers.so\nLoadModule unixd_module modules\/mod_unixd.so\nLoadModule rewrite_module modules\/mod_rewrite.so\nLoadModule proxy_module modules\/mod_proxy.so\nLoadModule proxy_http_module modules\/mod_proxy_http.so\n<IfModule unixd_module>\nUser daemon\nGroup daemon\n<\/IfModule>\nServerAdmin you@example.com\n<Directory \/>\n    AllowOverride none\n    Require all denied\n<\/Directory>\nDocumentRoot \"\/usr\/local\/apache2\/htdocs\"\nErrorLog \/proc\/self\/fd\/2\nLogLevel error\n<IfModule log_config_module>\n    LogFormat \"%h %l %u %t \\\"%r\\\" %>s %b \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\"\" combined\n    LogFormat \"%h %l %u %t \\\"%r\\\" %>s %b\" common\n    <IfModule logio_module>\n    LogFormat \"%h %l %u %t \\\"%r\\\" %>s %b \\\"%{Referer}i\\\" \\\"%{User-Agent}i\\\" %I %O\" combinedio\n    <\/IfModule>\n    CustomLog \/proc\/self\/fd\/1 common\n<\/IfModule>\n<Proxy *>\n    AuthType Basic\n    AuthName GrafanaAuthProxy\n    AuthBasicProvider file\n    
AuthUserFile \/tmp\/htpasswd\n    Require valid-user\n    RewriteEngine On\n    RewriteRule .* - [E=PROXY_USER:%{LA-U:REMOTE_USER},NS]\n    RequestHeader set X-WEBAUTH-USER \"%{PROXY_USER}e\"\n<\/Proxy>\nRequestHeader unset Authorization\nProxyRequests Off\nProxyPass \/ http:\/\/grafana:3000\/\nProxyPassReverse \/ http:\/\/grafana:3000\/\n```\n\n- Create an htpasswd file. We create a new user **anthony** with the password **password**\n\n  ```bash\n  htpasswd -bc htpasswd anthony password\n  ```\n\n- Launch the httpd container using our custom httpd.conf and our htpasswd file. The container will listen on port 80, and we create a link to the **grafana** container so that this container can resolve the hostname **grafana** to the Grafana container\u2019s IP address.\n\n  ```bash\n  docker run -i -p 80:80 --link grafana:grafana -v $(pwd)\/httpd.conf:\/usr\/local\/apache2\/conf\/httpd.conf -v $(pwd)\/htpasswd:\/tmp\/htpasswd httpd:2.4\n  ```\n\n### Use Grafana\n\nWith our Grafana and Apache containers running, you can now connect to http:\/\/localhost\/ and log in using the username\/password we created in the htpasswd file.\n\n\nIf the user is deleted from Grafana, the user will not be able to log in and resync until the `sync_ttl` has expired.\n\n\n### Team Sync (Enterprise only)\n\n> Only available in Grafana Enterprise v6.3+\n\nWith Team Sync, it's possible to set up synchronization between teams in your authentication provider and Grafana. You can send Grafana values as part of an HTTP header and have Grafana map them to your team structure. This allows you to put users into specific teams automatically.\n\nTo support the feature, auth proxy allows optional headers to map additional user attributes. The specific attribute to support team sync is `Groups`.\n\n```bash\n# Optionally define more headers to sync other user attributes\nheaders = \"Groups:X-WEBAUTH-GROUPS\"\n```\n\nYou use the `X-WEBAUTH-GROUPS` header to send the team information for each user. 
Specifically, the set of Grafana's group IDs that the user belongs to.\n\nFirst, we need to set up the mapping between your authentication provider and Grafana. Follow [these instructions]() to add groups to a team within Grafana.\n\nOnce that's done, you can verify your mappings by querying the API.\n\n```bash\n# First, inspect your teams and obtain the corresponding ID of the team we want to inspect the groups for.\ncurl -H \"X-WEBAUTH-USER: admin\" -H \"X-WEBAUTH-GROUPS: lokiTeamOnExternalSystem\" http:\/\/localhost:3000\/api\/teams\/search\n{\n  \"totalCount\": 2,\n  \"teams\": [\n    {\n      \"id\": 1,\n      \"orgId\": 1,\n      \"name\": \"Core\",\n      \"email\": \"core@grafana.com\",\n      \"avatarUrl\": \"\/avatar\/327a5353552d2dc3966e2e646908f540\",\n      \"memberCount\": 1,\n      \"permission\": 0\n    },\n    {\n      \"id\": 2,\n      \"orgId\": 1,\n      \"name\": \"Loki\",\n      \"email\": \"loki@grafana.com\",\n      \"avatarUrl\": \"\/avatar\/102f937d5344d33fdb37b65d430f36ef\",\n      \"memberCount\": 0,\n      \"permission\": 0\n    }\n  ],\n  \"page\": 1,\n  \"perPage\": 1000\n}\n\n# Then, query the groups for that particular team. In our case, the Loki team, which has an ID of \"2\".\ncurl -H \"X-WEBAUTH-USER: admin\" -H \"X-WEBAUTH-GROUPS: lokiTeamOnExternalSystem\" http:\/\/localhost:3000\/api\/teams\/2\/groups\n[\n  {\n    \"orgId\": 1,\n    \"teamId\": 2,\n    \"groupId\": \"lokiTeamOnExternalSystem\"\n  }\n]\n```\n\nFinally, whenever Grafana receives a request with a header of `X-WEBAUTH-GROUPS: lokiTeamOnExternalSystem`, the user under authentication will be placed into the specified team. Placement in multiple teams is supported by using comma-separated values e.g. 
`lokiTeamOnExternalSystem,CoreTeamOnExternalSystem`.\n\n```bash\ncurl -H \"X-WEBAUTH-USER: leonard\" -H \"X-WEBAUTH-GROUPS: lokiTeamOnExternalSystem\" http:\/\/localhost:3000\/api\/dashboards\/home\n{\n  \"meta\": {\n    \"isHome\": true,\n    \"canSave\": false,\n    ...\n}\n```\n\nWith this, the user `leonard` will be automatically placed into the Loki team as part of Grafana authentication.\n\n\nAn empty `X-WEBAUTH-GROUPS` header or the absence of a groups header will remove the user from all teams.\n\n\n[Learn more about Team Sync]()\n\n## Login token and session cookie\n\nWith `enable_login_token` set to `true` Grafana will, after successful auth proxy header validation, assign the user\na login token and cookie. You only have to configure your auth proxy to provide headers for the \/login route.\nRequests via other routes will be authenticated using the cookie.\n\nUse settings `login_maximum_inactive_lifetime_duration` and `login_maximum_lifetime_duration` under `[auth]` to control session\nlifetime.","site":"grafana setup"}
{"questions":"grafana setup Grafana LDAP Authentication Guide auth ldap aliases products labels cloud installation ldap oss enterprise","answers":"---\naliases:\n  - ..\/..\/..\/auth\/ldap\/\n  - ..\/..\/..\/installation\/ldap\/\ndescription: Grafana LDAP Authentication Guide\nlabels:\n  products:\n    - cloud\n    - enterprise\n    - oss\nmenuTitle: LDAP\ntitle: Configure LDAP authentication\nweight: 300\n---\n\n# Configure LDAP authentication\n\nThe LDAP integration in Grafana allows your Grafana users to log in with their LDAP credentials. You can also specify mappings between LDAP\ngroup memberships and Grafana Organization user roles.\n\n\n[Enhanced LDAP authentication]() is available in [Grafana Cloud](\/docs\/grafana-cloud\/) and in [Grafana Enterprise]().\n\n\nRefer to [Role-based access control]() to understand how you can control access with role-based permissions.\n\n## Supported LDAP Servers\n\nGrafana uses a [third-party LDAP library](https:\/\/github.com\/go-ldap\/ldap) under the hood that supports basic LDAP v3 functionality.\nThis means that you should be able to configure LDAP integration using any compliant LDAPv3 server, for example [OpenLDAP](#openldap) or\n[Active Directory](#active-directory) among [others](https:\/\/en.wikipedia.org\/wiki\/Directory_service#LDAP_implementations).\n\n## Enable LDAP\n\nIn order to use LDAP integration you'll first need to enable LDAP in the [main config file]() as well as specify the path to the LDAP-specific\nconfiguration file (default: `\/etc\/grafana\/ldap.toml`).\n\nAfter enabling LDAP, the default behavior is for Grafana users to be created automatically upon successful LDAP authentication. 
If you prefer for only existing Grafana users to be able to sign in, you can change `allow_sign_up` to `false` in the `[auth.ldap]` section.\n\n```ini\n[auth.ldap]\n# Set to `true` to enable LDAP integration (default: `false`)\nenabled = true\n\n# Path to the LDAP specific configuration file (default: `\/etc\/grafana\/ldap.toml`)\nconfig_file = \/etc\/grafana\/ldap.toml\n\n# Allow sign-up should be `true` (default) to allow Grafana to create users on successful LDAP authentication.\n# If set to `false` only already existing Grafana users will be able to login.\nallow_sign_up = true\n```\n\n## Disable org role synchronization\n\nIf you use LDAP to authenticate users but don't use role mapping, and prefer to manually assign organizations\nand roles, you can use the `skip_org_role_sync` configuration option.\n\n```ini\n[auth.ldap]\n# Set to `true` to enable LDAP integration (default: `false`)\nenabled = true\n\n# Path to the LDAP specific configuration file (default: `\/etc\/grafana\/ldap.toml`)\nconfig_file = \/etc\/grafana\/ldap.toml\n\n# Allow sign-up should be `true` (default) to allow Grafana to create users on successful LDAP authentication.\n# If set to `false` only already existing Grafana users will be able to login.\nallow_sign_up = true\n\n# Prevent synchronizing ldap users organization roles\nskip_org_role_sync = true\n```\n\n## Grafana LDAP Configuration\n\nDepending on which LDAP server you're using and how that's configured your Grafana LDAP configuration may vary.\nSee [configuration examples](#configuration-examples) for more information.\n\n**LDAP specific configuration file (ldap.toml) example:**\n\n```bash\n[[servers]]\n# Ldap server host (specify multiple hosts space separated)\nhost = \"ldap.my_secure_remote_server.org\"\n# Default port is 389 or 636 if use_ssl = true\nport = 636\n# Set to true if LDAP server should use an encrypted TLS connection (either with STARTTLS or LDAPS)\nuse_ssl = true\n# If set to true, use LDAP with STARTTLS instead of 
LDAPS\nstart_tls = false\n# The value of an accepted TLS cipher. By default, this value is empty. Example value: [\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"])\n# For a complete list of supported ciphers and TLS versions, refer to: https:\/\/go.dev\/src\/crypto\/tls\/cipher_suites.go\n# Starting with Grafana v11.0 only ciphers with ECDHE support are accepted for TLS 1.2 connections.\ntls_ciphers = []\n# This is the minimum TLS version allowed. By default, this value is empty. Accepted values are: TLS1.1 (only for Grafana v10.4 or earlier), TLS1.2, TLS1.3.\nmin_tls_version = \"\"\n# set to true if you want to skip SSL cert validation\nssl_skip_verify = false\n# set to the path to your root CA certificate or leave unset to use system defaults\n# root_ca_cert = \"\/path\/to\/certificate.crt\"\n# Authentication against LDAP servers requiring client certificates\n# client_cert = \"\/path\/to\/client.crt\"\n# client_key = \"\/path\/to\/client.key\"\n\n# Search user bind dn\nbind_dn = \"cn=admin,dc=grafana,dc=org\"\n# Search user bind password\n# If the password contains # or ; you have to wrap it with triple quotes. Ex \"\"\"#password;\"\"\"\nbind_password = \"grafana\"\n# We recommend using variable expansion for the bind_password, for more info https:\/\/grafana.com\/docs\/grafana\/latest\/setup-grafana\/configure-grafana\/#variable-expansion\n# bind_password = '$__env{LDAP_BIND_PASSWORD}'\n\n# Timeout in seconds. 
Applies to each host specified in the 'host' entry (space separated).\ntimeout = 10\n\n# User search filter, for example \"(cn=%s)\" or \"(sAMAccountName=%s)\" or \"(uid=%s)\"\n# Allow login from email or username, example \"(|(sAMAccountName=%s)(userPrincipalName=%s))\"\nsearch_filter = \"(cn=%s)\"\n\n# An array of base dns to search through\nsearch_base_dns = [\"dc=grafana,dc=org\"]\n\n# group_search_filter = \"(&(objectClass=posixGroup)(memberUid=%s))\"\n# group_search_filter_user_attribute = \"distinguishedName\"\n# group_search_base_dns = [\"ou=groups,dc=grafana,dc=org\"]\n\n# Specify names of the LDAP attributes your LDAP uses\n[servers.attributes]\nmember_of = \"memberOf\"\nemail =  \"email\"\n```\n\n\nWhenever you modify the ldap.toml file, you must restart Grafana in order for the change(s) to take effect.\n\n\n### Using environment variables\n\nYou can interpolate variables in the TOML configuration from environment variables. For instance, you could externalize your `bind_password` that way:\n\n```bash\nbind_password = \"${LDAP_ADMIN_PASSWORD}\"\n```\n\n## LDAP debug view\n\nGrafana has an LDAP debug view built-in which allows you to test your LDAP configuration directly within Grafana. Only Grafana admins can use the LDAP debug view.\n\nWithin this view, you'll be able to see which LDAP servers are currently reachable and test your current configuration.\n\n\n\nTo use the debug view, complete the following steps:\n\n1.  Type the username of a user that exists within any of your LDAP server(s)\n1.  Then, press \"Run\"\n1.  If the user is found within any of your LDAP instances, the mapping information is displayed.\n\nNote that this does not work if you are using the single bind configuration outlined below.\n\n\n\n[Grafana Enterprise]() users with [enhanced LDAP integration]() enabled can also see sync status in the debug view. 
This requires the `ldap.status:read` permission.\n\n\n\n### Bind and bind password\n\nBy default the configuration expects you to specify a bind DN and bind password. This should be a read only user that can perform LDAP searches.\nWhen the user DN is found a second bind is performed with the user provided username and password (in the normal Grafana login form).\n\n```bash\nbind_dn = \"cn=admin,dc=grafana,dc=org\"\nbind_password = \"grafana\"\n```\n\n#### Single bind example\n\nIf you can provide a single bind expression that matches all possible users, you can skip the second bind and bind against the user DN directly.\nThis allows you to not specify a bind_password in the configuration file.\n\n```bash\nbind_dn = \"cn=%s,o=users,dc=grafana,dc=org\"\n```\n\nIn this case you skip providing a `bind_password` and instead provide a `bind_dn` value with a `%s` somewhere. This will be replaced with the username entered in on the Grafana login page.\nThe search filter and search bases settings are still needed to perform the LDAP search to retrieve the other LDAP information (like LDAP groups and email).\n\n### POSIX schema\n\nIf your LDAP server does not support the `memberOf` attribute, add the following options:\n\n```bash\n## Group search filter, to retrieve the groups of which the user is a member (only set if memberOf attribute is not available)\ngroup_search_filter = \"(&(objectClass=posixGroup)(memberUid=%s))\"\n## An array of the base DNs to search through for groups. Typically uses ou=groups\ngroup_search_base_dns = [\"ou=groups,dc=grafana,dc=org\"]\n## the %s in the search filter will be replaced with the attribute defined below\ngroup_search_filter_user_attribute = \"uid\"\n```\n\n### Group mappings\n\nIn `[[servers.group_mappings]]` you can map an LDAP group to a Grafana organization and role. 
These will be synced every time the user logs in, with LDAP being the authoritative source.\n\nThe first group mapping that an LDAP user is matched to will be used for the sync. If you have LDAP users that fit multiple mappings, the topmost mapping in the TOML configuration will be used.\n\n**LDAP specific configuration file (ldap.toml) example:**\n\n```bash\n[[servers]]\n# other settings omitted for clarity\n\n[[servers.group_mappings]]\ngroup_dn = \"cn=superadmins,dc=grafana,dc=org\"\norg_role = \"Admin\"\ngrafana_admin = true\n\n[[servers.group_mappings]]\ngroup_dn = \"cn=admins,dc=grafana,dc=org\"\norg_role = \"Admin\"\n\n[[servers.group_mappings]]\ngroup_dn = \"cn=users,dc=grafana,dc=org\"\norg_role = \"Editor\"\n\n[[servers.group_mappings]]\ngroup_dn = \"*\"\norg_role = \"Viewer\"\n```\n\n| Setting         | Required | Description                                                                                                                                           | Default              |\n| --------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------- |\n| `group_dn`      | Yes      | LDAP distinguished name (DN) of LDAP group. If you want to match all (or no LDAP groups) then you can use wildcard (`\"*\"`)                            |\n| `org_role`      | Yes      | Assign users of `group_dn` the organization role `Admin`, `Editor`, or `Viewer`. The organization role name is case sensitive.                        |\n| `org_id`        | No       | The Grafana organization database id. Setting this allows for multiple group_dn's to be assigned to the same `org_role` provided the `org_id` differs | `1` (default org id) |\n| `grafana_admin` | No       | When `true` makes user of `group_dn` Grafana server admin. A Grafana server admin has admin access over all organizations and users.                  
| `false`              |\n\n\nCommenting out a group mapping requires also commenting out the header of\nsaid group or it will fail validation as an empty mapping.\n\n\nExample:\n\n```bash\n[[servers]]\n# other settings omitted for clarity\n\n[[servers.group_mappings]]\ngroup_dn = \"cn=superadmins,dc=grafana,dc=org\"\norg_role = \"Admin\"\ngrafana_admin = true\n\n# [[servers.group_mappings]]\n# group_dn = \"cn=admins,dc=grafana,dc=org\"\n# org_role = \"Admin\"\n\n[[servers.group_mappings]]\ngroup_dn = \"cn=users,dc=grafana,dc=org\"\norg_role = \"Editor\"\n```\n\n### Nested\/recursive group membership\n\nUsers with nested\/recursive group membership must have an LDAP server that supports `LDAP_MATCHING_RULE_IN_CHAIN`\nand configure `group_search_filter` in a way that it returns the groups the submitted username is a member of.\n\nTo configure `group_search_filter`:\n\n- You can set `group_search_base_dns` to specify where the matching groups are defined.\n- If you do not use `group_search_base_dns`, then the previously defined `search_base_dns` is used.\n\n**Active Directory example:**\n\nActive Directory groups store the Distinguished Names (DNs) of members, so your filter will need to know the DN for the user based only on the submitted username.\nMultiple DN templates are searched by combining filters with the LDAP OR-operator. 
Two examples:\n\n```bash\ngroup_search_filter = \"(member:1.2.840.113556.1.4.1941:=%s)\"\ngroup_search_base_dns = [\"DC=mycorp,DC=mytld\"]\ngroup_search_filter_user_attribute = \"dn\"\n```\n\n```bash\ngroup_search_filter = \"(member:1.2.840.113556.1.4.1941:=CN=%s,[user container\/OU])\"\ngroup_search_filter = \"(|(member:1.2.840.113556.1.4.1941:=CN=%s,[user container\/OU])(member:1.2.840.113556.1.4.1941:=CN=%s,[another user container\/OU]))\"\ngroup_search_filter_user_attribute = \"cn\"\n```\n\nFor more information on AD searches see [Microsoft's Search Filter Syntax](https:\/\/docs.microsoft.com\/en-us\/windows\/desktop\/adsi\/search-filter-syntax) documentation.\n\nFor troubleshooting, changing `member_of` in `[servers.attributes]` to \"dn\" will show you more accurate group memberships when [debug is enabled](#troubleshooting).\n\n## Configuration examples\n\nThe following examples describe different LDAP configuration options.\n\n### OpenLDAP\n\n[OpenLDAP](http:\/\/www.openldap.org\/) is an open source directory service.\n\n**LDAP specific configuration file (ldap.toml):**\n\n```bash\n[[servers]]\nhost = \"127.0.0.1\"\nport = 389\nuse_ssl = false\nstart_tls = false\nssl_skip_verify = false\nbind_dn = \"cn=admin,dc=grafana,dc=org\"\nbind_password = \"grafana\"\nsearch_filter = \"(cn=%s)\"\nsearch_base_dns = [\"dc=grafana,dc=org\"]\n\n[servers.attributes]\nmember_of = \"memberOf\"\nemail =  \"email\"\n\n# [[servers.group_mappings]] omitted for clarity\n```\n\n### Multiple LDAP servers\n\nGrafana does support receiving information from multiple LDAP servers.\n\n**LDAP specific configuration file (ldap.toml):**\n\n```bash\n# --- First LDAP Server ---\n\n[[servers]]\nhost = \"10.0.0.1\"\nport = 389\nuse_ssl = false\nstart_tls = false\nssl_skip_verify = false\nbind_dn = \"cn=admin,dc=grafana,dc=org\"\nbind_password = \"grafana\"\nsearch_filter = \"(cn=%s)\"\nsearch_base_dns = [\"ou=users,dc=grafana,dc=org\"]\n\n[servers.attributes]\nmember_of = \"memberOf\"\nemail =  
\"email\"\n\n[[servers.group_mappings]]\ngroup_dn = \"cn=admins,ou=groups,dc=grafana,dc=org\"\norg_role = \"Admin\"\ngrafana_admin = true\n\n# --- Second LDAP Server ---\n\n[[servers]]\nhost = \"10.0.0.2\"\nport = 389\nuse_ssl = false\nstart_tls = false\nssl_skip_verify = false\n\nbind_dn = \"cn=admin,dc=grafana,dc=org\"\nbind_password = \"grafana\"\nsearch_filter = \"(cn=%s)\"\nsearch_base_dns = [\"ou=users,dc=grafana,dc=org\"]\n\n[servers.attributes]\nmember_of = \"memberOf\"\nemail =  \"email\"\n\n[[servers.group_mappings]]\ngroup_dn = \"cn=editors,ou=groups,dc=grafana,dc=org\"\norg_role = \"Editor\"\n\n[[servers.group_mappings]]\ngroup_dn = \"*\"\norg_role = \"Viewer\"\n```\n\n### Active Directory\n\n[Active Directory](<https:\/\/technet.microsoft.com\/en-us\/library\/hh831484(v=ws.11).aspx>) is a directory service which is commonly used in Windows environments.\n\nAssuming the following Active Directory server setup:\n\n- IP address: `10.0.0.1`\n- Domain: `CORP`\n- DNS name: `corp.local`\n\n**LDAP specific configuration file (ldap.toml):**\n\n```bash\n[[servers]]\nhost = \"10.0.0.1\"\nport = 3269\nuse_ssl = true\nstart_tls = false\nssl_skip_verify = true\nbind_dn = \"CORP\\\\%s\"\nsearch_filter = \"(sAMAccountName=%s)\"\nsearch_base_dns = [\"dc=corp,dc=local\"]\n\n[servers.attributes]\nmember_of = \"memberOf\"\nemail =  \"mail\"\n\n# [[servers.group_mappings]] omitted for clarity\n```\n\n#### Port requirements\n\nIn the above example, SSL is enabled and an encrypted port has been configured. If your Active Directory doesn't support SSL, change `use_ssl = false` and `port = 389`.\nPlease inspect your Active Directory configuration and documentation to find the correct settings. 
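\n\nAs a minimal sketch of that plaintext variant (assuming the same hypothetical server `10.0.0.1` and domain `corp.local` as above; verify every value against your own directory):\n\n```bash\n[[servers]]\nhost = \"10.0.0.1\"\n# Standard unencrypted LDAP port\nport = 389\n# No TLS: only acceptable on a trusted internal network\nuse_ssl = false\nstart_tls = false\nssl_skip_verify = false\nbind_dn = \"CORP\\\\%s\"\nsearch_filter = \"(sAMAccountName=%s)\"\nsearch_base_dns = [\"dc=corp,dc=local\"]\n\n[servers.attributes]\nmember_of = \"memberOf\"\nemail =  \"mail\"\n```\n\n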
For more information about Active Directory and port requirements see [link](<https:\/\/technet.microsoft.com\/en-us\/library\/dd772723(v=ws.10)>).\n\n## Troubleshooting\n\nTo troubleshoot and get more log info enable LDAP debug logging in the [main config file]().\n\n```bash\n[log]\nfilters = ldap:debug\n```","site":"grafana setup","answers_cleaned":"---
aliases:
  - ../../../auth/ldap/
  - ../../../installation/ldap/
description: Grafana LDAP Authentication Guide
labels:
  products:
    - cloud
    - enterprise
    - oss
menuTitle: LDAP
title: Configure LDAP authentication
weight: 300
---

# Configure LDAP authentication

The LDAP integration in Grafana allows your Grafana users to login with their LDAP credentials. You can also specify mappings between LDAP group memberships and Grafana Organization user roles.

Enhanced LDAP authentication is available in [Grafana Cloud](/docs/grafana-cloud/) and in Grafana Enterprise.

Refer to [Role-based access control]() to understand how you can control access with role-based permissions.

## Supported LDAP Servers

Grafana uses a [third-party LDAP library](https://github.com/go-ldap/ldap) under the hood that supports basic LDAP v3 functionality. This means that you should be able to configure LDAP integration using any compliant LDAPv3 server, for example [OpenLDAP](#openldap) or [Active Directory](#active-directory) among [others](https://en.wikipedia.org/wiki/Directory_service#LDAP_implementations).

## Enable LDAP

In order to use LDAP integration you'll first need to enable LDAP in the [main config file]() as well as specify the path to the LDAP specific configuration file (default: `/etc/grafana/ldap.toml`).

After enabling LDAP, the default behavior is for Grafana users to be created automatically upon successful LDAP authentication. If you prefer for only existing Grafana users to be able to sign in, you can change `allow_sign_up` to `false` in the `[auth.ldap]` section.

```ini
[auth.ldap]
# Set to `true` to enable LDAP integration (default: `false`)
enabled = true

# Path to the LDAP specific configuration file (default: `/etc/grafana/ldap.toml`)
config_file = /etc/grafana/ldap.toml

# Allow sign up should be `true` (default) to allow Grafana to create users on successful LDAP authentication.
# If set to `false` only already existing Grafana users will be able to login.
allow_sign_up = true
```

### Disable org role synchronization

If you use LDAP to authenticate users but don't use role mapping, and prefer to manually assign organizations and roles, you can use the `skip_org_role_sync` configuration option.

```ini
[auth.ldap]
# Set to `true` to enable LDAP integration (default: `false`)
enabled = true

# Path to the LDAP specific configuration file (default: `/etc/grafana/ldap.toml`)
config_file = /etc/grafana/ldap.toml

# Allow sign up should be `true` (default) to allow Grafana to create users on successful LDAP authentication.
# If set to `false` only already existing Grafana users will be able to login.
allow_sign_up = true

# Prevent synchronizing ldap users organization roles
skip_org_role_sync = true
```

## Grafana LDAP Configuration

Depending on which LDAP server you're using and how that's configured, your Grafana LDAP configuration may vary. See [configuration examples](#configuration-examples) for more information.

**LDAP specific configuration file (ldap.toml) example:**

```bash
[[servers]]
# Ldap server host (specify multiple hosts space separated)
host = \"ldap.my_secure_remote_server.org\"
# Default port is 389 or 636 if use_ssl = true
port = 636
# Set to true if LDAP server should use an encrypted TLS connection (either with STARTTLS or LDAPS)
use_ssl = true
# If set to true, use LDAP with STARTTLS instead of LDAPS
start_tls = false
# The value of an accepted TLS cipher. By default, this value is empty. Example value: [\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"]
# For a complete list of supported ciphers and TLS versions, refer to: https://go.dev/src/crypto/tls/cipher_suites.go
# Starting with Grafana v11.0 only ciphers with ECDHE support are accepted for TLS 1.2 connections.
tls_ciphers = []
# This is the minimum TLS version allowed. By default, this value is empty. Accepted values are: TLS1.1 (only for Grafana v10.4 or earlier), TLS1.2, TLS1.3.
min_tls_version = \"\"
# set to true if you want to skip SSL cert validation
ssl_skip_verify = false
# set to the path to your root CA certificate or leave unset to use system defaults
# root_ca_cert = \"/path/to/certificate.crt\"
# Authentication against LDAP servers requiring client certificates
# client_cert = \"/path/to/client.crt\"
# client_key = \"/path/to/client.key\"

# Search user bind dn
bind_dn = \"cn=admin,dc=grafana,dc=org\"
# Search user bind password
# If the password contains # or ; you have to wrap it with triple quotes. Ex \"\"\"#password;\"\"\"
bind_password = \"grafana\"
# We recommend using variable expansion for the bind_password, for more info https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#variable-expansion
# bind_password = '$__env{LDAP_BIND_PASSWORD}'

# Timeout in seconds. Applies to each host specified in the 'host' entry (space separated).
timeout = 10

# User search filter, for example \"(cn=%s)\" or \"(sAMAccountName=%s)\" or \"(uid=%s)\"
# Allow login from email or username, example \"(|(sAMAccountName=%s)(userPrincipalName=%s))\"
search_filter = \"(cn=%s)\"

# An array of base dns to search through
search_base_dns = [\"dc=grafana,dc=org\"]

## group_search_filter = \"(&(objectClass=posixGroup)(memberUid=%s))\"
## group_search_filter_user_attribute = \"distinguishedName\"
## group_search_base_dns = [\"ou=groups,dc=grafana,dc=org\"]

# Specify names of the LDAP attributes your LDAP uses
[servers.attributes]
member_of = \"memberOf\"
email = \"email\"
```

Whenever you modify the ldap.toml file, you must restart Grafana in order for the change(s) to take effect.

### Using environment variables

You can interpolate variables in the TOML configuration from environment variables. For instance, you could externalize your `bind_password` that way:

```bash
bind_password = \"${LDAP_ADMIN_PASSWORD}\"
```

## LDAP debug view

Grafana has an LDAP debug view built in which allows you to test your LDAP configuration directly within Grafana. Only Grafana admins can use the LDAP debug view.

Within this view, you'll be able to see which LDAP servers are currently reachable and test your current configuration.

To use the debug view, complete the following steps:

1. Type the username of a user that exists within any of your LDAP server(s)
1. Then, press **Run**
1. If the user is found within any of your LDAP instances, the mapping information is displayed

Note that this does not work if you are using the single bind configuration outlined below.

Grafana Enterprise users with [enhanced LDAP integration]() enabled can also see sync status in the debug view. This requires the `ldap.status:read` permission.

## Bind and bind password

By default the configuration expects you to specify a bind DN and bind password. This should be a read only user that can perform LDAP searches. When the user DN is found a second bind is performed with the user provided username and password (in the normal Grafana login form).

```bash
bind_dn = \"cn=admin,dc=grafana,dc=org\"
bind_password = \"grafana\"
```

### Single bind example

If you can provide a single bind expression that matches all possible users, you can skip the second bind and bind against the user DN directly. This allows you to not specify a bind_password in the configuration file.

```bash
bind_dn = \"cn=%s,o=users,dc=grafana,dc=org\"
```

In this case you skip providing a `bind_password` and instead provide a `bind_dn` value with a `%s` somewhere. This will be replaced with the username entered in on the Grafana login page. The search filter and search bases settings are still needed to perform the LDAP search to retrieve the other LDAP information (like LDAP groups and email).

### POSIX schema

If your LDAP server does not support the `memberOf` attribute, add the following options:

```bash
## Group search filter, to retrieve the groups of which the user is a member (only set if memberOf attribute is not available)
group_search_filter = \"(&(objectClass=posixGroup)(memberUid=%s))\"
## An array of the base DNs to search through for groups. Typically uses ou=groups
group_search_base_dns = [\"ou=groups,dc=grafana,dc=org\"]
## the %s in the search filter will be replaced with the attribute defined below
group_search_filter_user_attribute = \"uid\"
```

## Group mappings

In `[[servers.group_mappings]]` you can map an LDAP group to a Grafana organization and role. These will be synced every time the user logs in, with LDAP being the authoritative source.

The first group mapping that an LDAP user is matched to will be used for the sync. If you have LDAP users that fit multiple mappings, the topmost mapping in the TOML configuration will be used.

**LDAP specific configuration file (ldap.toml) example:**

```bash
[[servers]]
# other settings omitted for clarity

[[servers.group_mappings]]
group_dn = \"cn=superadmins,dc=grafana,dc=org\"
org_role = \"Admin\"
grafana_admin = true

[[servers.group_mappings]]
group_dn = \"cn=admins,dc=grafana,dc=org\"
org_role = \"Admin\"

[[servers.group_mappings]]
group_dn = \"cn=users,dc=grafana,dc=org\"
org_role = \"Editor\"

[[servers.group_mappings]]
group_dn = \"*\"
org_role = \"Viewer\"
```

| Setting         | Required | Description                                                                                                                                           | Default              |
| --------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------- |
| `group_dn`      | Yes      | LDAP distinguished name (DN) of LDAP group. If you want to match all (or no) LDAP groups then you can use wildcard (`\"*\"`)                            |                      |
| `org_role`      | Yes      | Assign users of `group_dn` the organization role `Admin`, `Editor`, or `Viewer`. The organization role name is case sensitive.                         |                      |
| `org_id`        | No       | The Grafana organization database id. Setting this allows for multiple group_dn's to be assigned to the same `org_role` provided the `org_id` differs  | `1` (default org id) |
| `grafana_admin` | No       | When `true` makes user of `group_dn` Grafana server admin. A Grafana server admin has admin access over all organizations and users.                   | `false`              |

Commenting out a group mapping requires also commenting out the header of said group or it will fail validation as an empty mapping.

Example:

```bash
[[servers]]
# other settings omitted for clarity

[[servers.group_mappings]]
group_dn = \"cn=superadmins,dc=grafana,dc=org\"
org_role = \"Admin\"
grafana_admin = true

# [[servers.group_mappings]]
# group_dn = \"cn=admins,dc=grafana,dc=org\"
# org_role = \"Admin\"

[[servers.group_mappings]]
group_dn = \"cn=users,dc=grafana,dc=org\"
org_role = \"Editor\"
```

### Nested/recursive group membership

Users with nested/recursive group membership must have an LDAP server that supports `LDAP_MATCHING_RULE_IN_CHAIN` and configure `group_search_filter` in a way that it returns the groups the submitted username is a member of.

To configure `group_search_filter`:

- You can set `group_search_base_dns` to specify where the matching groups are defined.
- If you do not use `group_search_base_dns`, then the previously defined `search_base_dns` is used.

**Active Directory example:**

Active Directory groups store the Distinguished Names (DNs) of members, so your filter will need to know the DN for the user based only on the submitted username. Multiple DN templates are searched by combining filters with the LDAP OR-operator. Two examples:

```bash
group_search_filter = \"(member:1.2.840.113556.1.4.1941:=%s)\"
group_search_base_dns = [\"DC=mycorp,DC=mytld\"]
group_search_filter_user_attribute = \"dn\"
```

```bash
group_search_filter = \"(member:1.2.840.113556.1.4.1941:=CN=%s,[user container/OU])\"
# group_search_filter = \"(|(member:1.2.840.113556.1.4.1941:=CN=%s,[user container/OU])(member:1.2.840.113556.1.4.1941:=CN=%s,[another user container/OU]))\"
group_search_filter_user_attribute = \"cn\"
```

For more information on AD searches see [Microsoft's Search Filter Syntax](https://docs.microsoft.com/en-us/windows/desktop/adsi/search-filter-syntax) documentation.

For troubleshooting, changing `member_of` in `[servers.attributes]` to \"dn\" will show you more accurate group memberships when [debug is enabled](#troubleshooting).

## Configuration examples

The following examples describe different LDAP configuration options.

### OpenLDAP

[OpenLDAP](http://www.openldap.org/) is an open source directory service.

**LDAP specific configuration file (ldap.toml):**

```bash
[[servers]]
host = \"127.0.0.1\"
port = 389
use_ssl = false
start_tls = false
ssl_skip_verify = false
bind_dn = \"cn=admin,dc=grafana,dc=org\"
bind_password = \"grafana\"
search_filter = \"(cn=%s)\"
search_base_dns = [\"dc=grafana,dc=org\"]

[servers.attributes]
member_of = \"memberOf\"
email = \"email\"

# [[servers.group_mappings]] omitted for clarity
```

### Multiple LDAP servers

Grafana does support receiving information from multiple LDAP servers.

**LDAP specific configuration file (ldap.toml):**

```bash
# --- First LDAP Server ---

[[servers]]
host = \"10.0.0.1\"
port = 389
use_ssl = false
start_tls = false
ssl_skip_verify = false
bind_dn = \"cn=admin,dc=grafana,dc=org\"
bind_password = \"grafana\"
search_filter = \"(cn=%s)\"
search_base_dns = [\"ou=users,dc=grafana,dc=org\"]

[servers.attributes]
member_of = \"memberOf\"
email = \"email\"

[[servers.group_mappings]]
group_dn = \"cn=admins,ou=groups,dc=grafana,dc=org\"
org_role = \"Admin\"
grafana_admin = true

# --- Second LDAP Server ---

[[servers]]
host = \"10.0.0.2\"
port = 389
use_ssl = false
start_tls = false
ssl_skip_verify = false
bind_dn = \"cn=admin,dc=grafana,dc=org\"
bind_password = \"grafana\"
search_filter = \"(cn=%s)\"
search_base_dns = [\"ou=users,dc=grafana,dc=org\"]

[servers.attributes]
member_of = \"memberOf\"
email = \"email\"

[[servers.group_mappings]]
group_dn = \"cn=editors,ou=groups,dc=grafana,dc=org\"
org_role = \"Editor\"

[[servers.group_mappings]]
group_dn = \"*\"
org_role = \"Viewer\"
```

### Active Directory

[Active Directory](<https://technet.microsoft.com/en-us/library/hh831484(v=ws.11).aspx>) is a directory service which is commonly used in Windows environments.

Assuming the following Active Directory server setup:

- IP address: `10.0.0.1`
- Domain: `CORP`
- DNS name: `corp.local`

**LDAP specific configuration file (ldap.toml):**

```bash
[[servers]]
host = \"10.0.0.1\"
port = 3269
use_ssl = true
start_tls = false
ssl_skip_verify = true
bind_dn = \"CORP\\%s\"
search_filter = \"(sAMAccountName=%s)\"
search_base_dns = [\"dc=corp,dc=local\"]

[servers.attributes]
member_of = \"memberOf\"
email = \"mail\"

# [[servers.group_mappings]] omitted for clarity
```

#### Port requirements

In the above example SSL is enabled and an encrypted port has been configured. If your Active Directory doesn't support SSL, change `use_ssl = false` and `port = 389`. Please inspect your Active Directory configuration and documentation to find the correct settings.

For more information about Active Directory and port requirements see [link](<https://technet.microsoft.com/en-us/library/dd772723(v=ws.10)>).

## Troubleshooting

To troubleshoot and get more log info enable LDAP debug logging in the [main config file]().

```bash
[log]
filters = ldap:debug
```
"}
{"questions":"grafana setup menuTitle Google OAuth auth google aliases products labels cloud Grafana Google OAuth Guide oss enterprise","answers":"---\naliases:\n  - ..\/..\/..\/auth\/google\/\ndescription: Grafana Google OAuth Guide\nlabels:\n  products:\n    - cloud\n    - enterprise\n    - oss\nmenuTitle: Google OAuth\ntitle: Configure Google OAuth authentication\nweight: 1100\n---\n\n# Configure Google OAuth authentication\n\nTo enable Google OAuth you must register your application with Google. Google will generate a client ID and secret key for you to use.\n\n\nIf users use the same email address in Google that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to the [Using the same email address to login with different identity providers]() documentation for more information.\n\n\n## Create Google OAuth keys\n\nFirst, you need to create a Google OAuth Client:\n\n1. Go to https:\/\/console.developers.google.com\/apis\/credentials.\n1. Create a new project if you don't have one already.\n   1. Enter a project name. The **Organization** and **Location** fields should both be set to your organization's information.\n   1. In **OAuth consent screen** select the **External** User Type. Click **CREATE**.\n   1. Fill out the requested information using the URL of your Grafana Cloud instance.\n   1. Accept the defaults, or customize the consent screen options.\n1. Click **Create Credentials**, then click **OAuth Client ID** in the drop-down menu.\n1. 
Enter the following:\n   - **Application Type**: Web application\n   - **Name**: Grafana\n   - **Authorized JavaScript origins**: `https:\/\/<YOUR_GRAFANA_URL>`\n   - **Authorized redirect URIs**: `https:\/\/<YOUR_GRAFANA_URL>\/login\/google`\n   - Replace `<YOUR_GRAFANA_URL>` with the URL of your Grafana instance.\n     \n     The URL you enter is the one for your Grafana instance home page, not your Grafana Cloud portal URL.\n     \n1. Click **Create**.\n1. Copy the Client ID and Client Secret from the 'OAuth Client' modal.\n\n## Configure Google authentication client using the Grafana UI\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.\n\n\nAs a Grafana Admin, you can configure Google OAuth client from within Grafana using the Grafana UI. To do this, navigate to the **Administration > Authentication > Google** page and fill in the form. If you have a current configuration in the Grafana configuration file then the form will be pre-populated with those values; otherwise, the form will contain default values.\n\nAfter you have filled in the form, click **Save**. If the save was successful, Grafana will apply the new configuration.\n\nIf you need to reset changes made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.\n\n\nIf you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.\n\n\n## Configure Google authentication client using the Terraform provider\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. 
Supported in the Terraform provider since v2.12.0.\n\n\n```terraform\nresource \"grafana_sso_settings\" \"google_sso_settings\" {\n  provider_name = \"google\"\n  oauth2_settings {\n    name            = \"Google\"\n    client_id       = \"CLIENT_ID\"\n    client_secret   = \"CLIENT_SECRET\"\n    allow_sign_up   = true\n    auto_login      = false\n    scopes          = \"openid email profile\"\n    allowed_domains = \"mycompany.com mycompany.org\"\n    hosted_domain   = \"mycompany.com\"\n    use_pkce        = true\n  }\n}\n```\n\nGo to [Terraform Registry](https:\/\/registry.terraform.io\/providers\/grafana\/grafana\/latest\/docs\/resources\/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.\n\n## Configure Google authentication client using the Grafana configuration file\n\nEnsure that you have access to the [Grafana configuration file]().\n\n### Enable Google OAuth in Grafana\n\nSpecify the Client ID and Secret in the [Grafana configuration file](). For example:\n\n```bash\n[auth.google]\nenabled = true\nallow_sign_up = true\nauto_login = false\nclient_id = CLIENT_ID\nclient_secret = CLIENT_SECRET\nscopes = openid email profile\nauth_url = https:\/\/accounts.google.com\/o\/oauth2\/v2\/auth\ntoken_url = https:\/\/oauth2.googleapis.com\/token\napi_url = https:\/\/openidconnect.googleapis.com\/v1\/userinfo\nallowed_domains = mycompany.com mycompany.org\nhosted_domain = mycompany.com\nuse_pkce = true\n```\n\nYou may have to set the `root_url` option of `[server]` for the callback URL to be\ncorrect, for example if you are serving Grafana behind a proxy.\n\nRestart the Grafana back-end. You should now see a Google login button\non the login page. You can now log in or sign up with your Google\naccount. The `allowed_domains` option is optional, and domains are separated by spaces.\n\nYou may allow users to sign-up via Google authentication by setting the\n`allow_sign_up` option to `true`. 
When this option is set to `true`, any\nuser successfully authenticating via Google authentication will be\nautomatically signed up.\n\nYou may specify a domain to be passed as the `hd` query parameter accepted by Google's\nOAuth 2.0 authentication API. Refer to Google's OAuth [documentation](https:\/\/developers.google.com\/identity\/openid-connect\/openid-connect#hd-param).\n\n\nSince Grafana 10.3.0, the `hd` parameter retrieved from the Google ID token is also used to determine the user's hosted domain. The Google OAuth `allowed_domains` configuration option is used to restrict access to users from a specific domain. If the `allowed_domains` configuration option is set, the `hd` parameter from the Google ID token must match it; if it does not match, the user is denied access.\n\nWhen an account does not belong to a Google Workspace, the `hd` claim will not be available.\n\nThis validation is enabled by default. To disable this validation, set the `validate_hd` configuration option to `false`. The `allowed_domains` configuration option will use the email claim to validate the domain.\n\n\n#### PKCE\n\nIETF's [RFC 7636](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7636)\nintroduces \"proof key for code exchange\" (PKCE) which provides\nadditional protection against some forms of authorization code\ninterception attacks. PKCE will be required in [OAuth 2.1](https:\/\/datatracker.ietf.org\/doc\/html\/draft-ietf-oauth-v2-1-03).\n\n\nYou can disable PKCE in Grafana by setting `use_pkce` to `false` in the `[auth.google]` section.\n\n\n#### Configure refresh token\n\nWhen a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. 
When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token.\n\nGrafana uses a refresh token to obtain a new access token without requiring the user to log in again. If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.\n\nBy default, Grafana includes the `access_type=offline` parameter in the authorization request to request a refresh token.\n\nRefresh token fetching and access token expiration check are enabled by default for the Google provider since Grafana v10.1.0. If you would like to disable the access token expiration check, set the `use_refresh_token` configuration value to `false`.\n\n\nThe `accessTokenExpirationCheck` feature toggle has been removed in Grafana v10.3.0 and the `use_refresh_token` configuration value will be used instead for configuring refresh token fetching and access token expiration check.\n\n\n#### Configure automatic login\n\nSet the `auto_login` option to `true` to attempt login automatically, skipping the login screen.\nThis setting is ignored if multiple auth providers are configured to use auto login.\n\n```\nauto_login = true\n```\n\n### Configure group synchronization\n\n\nAvailable in [Grafana Enterprise](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/introduction\/grafana-enterprise) and [Grafana Cloud](\/docs\/grafana-cloud\/).\n\n\nGrafana supports syncing users to teams and roles based on their Google groups.\n\nTo set up group sync for Google OAuth:\n\n1. Enable the Google Cloud Identity API on your [organization's dashboard](https:\/\/console.cloud.google.com\/apis\/api\/cloudidentity.googleapis.com\/).\n\n1. 
Add the `https:\/\/www.googleapis.com\/auth\/cloud-identity.groups.readonly` scope to your Grafana `[auth.google]` configuration:\n\n   Example:\n\n   ```ini\n   [auth.google]\n   # ..\n   scopes = openid email profile https:\/\/www.googleapis.com\/auth\/cloud-identity.groups.readonly\n   ```\n\nThe external group ID for a Google group is the group's email address, such as `dev@grafana.com`.\n\nTo learn more about how to configure group synchronization, refer to the [Configure team sync]() and [Configure group attribute sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-group-attribute-sync) documentation.\n\n#### Configure allowed groups\n\nTo limit access to authenticated users that are members of one or more groups, set `allowed_groups`\nto a comma- or space-separated list of groups.\n\nGoogle groups are referenced by the group email key. For example, `developers@google.com`.\n\n\nAdd the `https:\/\/www.googleapis.com\/auth\/cloud-identity.groups.readonly` scope to your Grafana `[auth.google]` scopes configuration to retrieve groups.\n\n\n#### Configure role mapping\n\nUnless the `skip_org_role_sync` option is enabled, the user's role will be set to the role mapped from Google upon user login. If no mapping is set, the default instance role is used.\n\nThe user's role is retrieved using a [JMESPath](http:\/\/jmespath.org\/examples.html) expression from the `role_attribute_path` configuration option.\nTo map the server administrator role, use the `allow_assign_grafana_admin` configuration option.\n\nIf no valid role is found, the user is assigned the role specified by [the `auto_assign_org_role` option]().\nYou can disable this default role assignment by setting `role_attribute_strict = true`. 
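\n\nAs an illustration of how these options combine, here is a hedged `[auth.google]` sketch (the group address is a placeholder for this example, not a value from this guide) that only admits members of one group and denies login when no role can be mapped:\n\n```ini\n[auth.google]\n# placeholder group, referenced by its email key\nallowed_groups = developers@google.com\n# returns 'Editor' for group members and nothing for everyone else\nrole_attribute_path = contains(groups[*], 'developers@google.com') && 'Editor'\n# with no valid role returned, login is denied\nrole_attribute_strict = true\n```\n\n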
The `role_attribute_strict` setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions.\n\nTo ease configuration of a proper JMESPath expression, go to [JMESPath](http:\/\/jmespath.org\/) to test and evaluate expressions with custom payloads.\n\nBy default, `skip_org_role_sync` is enabled. `skip_org_role_sync` will default to `false` in Grafana v10.3.0 and later versions.\n\n\n##### Role mapping examples\n\nThis section includes examples of JMESPath expressions used for role mapping.\n\n###### Org roles mapping example\n\nThe Google integration uses the external users' groups in the `org_mapping` configuration to map organizations and roles based on their Google group membership.\n\nIn this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs.\n\nThe external user is part of the following Google groups: `group-1` and `group-2`.\n\nConfig:\n\n```ini\norg_mapping = group-1:org_foo:Viewer group-2:org_bar:Editor *:org_baz:Editor\n```\n\n###### Map roles using user information from OAuth token\n\nIn this example, the user with email `admin@company.com` has been granted the `Admin` role.\nAll other users are granted the `Viewer` role.\n\n```ini\nrole_attribute_path = email=='admin@company.com' && 'Admin' || 'Viewer'\nskip_org_role_sync = false\n```\n\n###### Map roles using groups\n\nIn this example, the user from the Google group 'example-group@google.com' has been granted the `Editor` role.\nAll other users are granted the `Viewer` role.\n\n```ini\nrole_attribute_path = contains(groups[*], 'example-group@google.com') && 'Editor' || 'Viewer'\nskip_org_role_sync = false\n```\n\n\nAdd the `https:\/\/www.googleapis.com\/auth\/cloud-identity.groups.readonly` scope to your Grafana `[auth.google]` scopes configuration to retrieve groups.\n\n\n###### Map server administrator role\n\nIn this example, the user with email 
`admin@company.com` has been granted the `Admin` organization role as well as the Grafana server admin role.\nAll other users are granted the `Viewer` role.\n\n```ini\nallow_assign_grafana_admin = true\nskip_org_role_sync = false\nrole_attribute_path = email=='admin@company.com' && 'GrafanaAdmin' || 'Viewer'\n```\n\n###### Map one role to all users\n\nIn this example, all users will be assigned `Viewer` role regardless of the user information received from the identity provider.\n\n```ini\nrole_attribute_path = \"'Viewer'\"\nskip_org_role_sync = false\n```\n\n## Configuration options\n\nThe following table outlines the various Google OAuth configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables]().\n\n| Setting                      | Required | Supported on Cloud | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | Default                                            |\n| ---------------------------- | -------- | ------------------ | 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------- |\n| `enabled`                    | No       | Yes                | Enables Google authentication.                                                                                                                                                                                                                                                                                                                                                                                                                                                                  | `false`                                            |\n| `name`                       | No       | Yes                | Name that refers to the Google authentication from the Grafana user interface.                                                                                                                                                                                                                                                                                                                                                                                                                  | `Google`                                           |\n| `icon`                       | No       | Yes                | Icon used for the Google authentication in the Grafana user interface.                                                                                                                               
| `google` |
| `client_id` | Yes | Yes | Client ID of the App. | |
| `client_secret` | Yes | Yes | Client secret of the App. | |
| `auth_url` | Yes | Yes | Authorization endpoint of the Google OAuth provider. | `https://accounts.google.com/o/oauth2/v2/auth` |
| `token_url` | Yes | Yes | Endpoint used to obtain the OAuth2 access token. | `https://oauth2.googleapis.com/token` |
| `api_url` | Yes | Yes | Endpoint used to obtain user information compatible with [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo). | `https://openidconnect.googleapis.com/v1/userinfo` |
| `auth_style` | No | Yes | Name of the [OAuth2 AuthStyle](https://pkg.go.dev/golang.org/x/oauth2#AuthStyle) used when the ID token is requested from the OAuth2 provider. It determines how `client_id` and `client_secret` are sent to the OAuth2 provider. Available values are `AutoDetect`, `InParams` and `InHeader`. | `AutoDetect` |
| `scopes` | No | Yes | List of comma- or space-separated OAuth2 scopes. | `openid email profile` |
| `allow_sign_up` | No | Yes | Controls Grafana user creation through the Google login. Only existing Grafana users can log in with Google if set to `false`. | `true` |
| `auto_login` | No | Yes | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login. | `false` |
| `hosted_domain` | No | Yes | Specifies the domain to restrict access to users from that domain. This value is appended to the authorization request using the `hd` parameter. | |
| `validate_hd` | No | Yes | Set to `false` to disable the validation of the `hd` parameter from the Google ID token. For more information, refer to [Enable Google OAuth in Grafana](). | `true` |
| `role_attribute_strict` | No | Yes | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Configure role mapping](). | `false` |
| `org_attribute_path` | No | No | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana org to role lookup. Grafana first evaluates the expression using the OAuth2 ID token. If no value is returned, the expression is evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation is mapped to org roles based on `org_mapping`. For more information on org to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | |
| `org_mapping` | No | No | List of comma- or space-separated `<ExternalOrgName>:<OrgIdOrName>:<Role>` mappings. Value can be `*`, meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | |
| `allow_assign_grafana_admin` | No | No | Set to `true` to automatically sync the Grafana server administrator role. When enabled, if the Google user's App role is `GrafanaAdmin`, Grafana grants the user server administrator privileges and the organization administrator role. If disabled, the user receives only the organization administrator role. For more details on user role mapping, refer to [Map roles](). | `false` |
| `skip_org_role_sync` | No | Yes | Set to `true` to stop automatically syncing user roles. This allows you to set organization roles for your users manually from within Grafana. | `false` |
| `allowed_groups` | No | Yes | List of comma- or space-separated groups. The user must be a member of at least one group to log in. If you configure `allowed_groups`, you must also configure Google to include the `groups` claim following [Configure allowed groups](). | |
| `allowed_organizations` | No | Yes | List of comma- or space-separated Azure tenant identifiers. The user must be a member of at least one tenant to log in. | |
| `allowed_domains` | No | Yes | List of comma- or space-separated domains. The user must belong to at least one domain to log in. | |
| `tls_skip_verify_insecure` | No | No | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL/TLS susceptible to man-in-the-middle attacks. | `false` |
| `tls_client_cert` | No | No | The path to the certificate. | |
| `tls_client_key` | No | No | The path to the key. | |
| `tls_client_ca` | No | No | The path to the trusted certificate authority list. | |
| `use_pkce` | No | Yes | Set to `true` to use [Proof Key for Code Exchange (PKCE)](https://datatracker.ietf.org/doc/html/rfc7636). Grafana uses the SHA256-based `S256` challenge method and a 128-byte (base64url-encoded) code verifier. | `true` |
| `use_refresh_token` | No | Yes | Enables the use of refresh tokens and checks for access token expiration. When enabled, Grafana automatically adds the `prompt=consent` and `access_type=offline` parameters to the authorization request. | `true` |
| `signout_redirect_url` | No | Yes | URL to redirect to after the user logs out. | |
---
aliases:
  - auth/google
description: Grafana Google OAuth Guide
labels:
  products:
    - cloud
    - enterprise
    - oss
menuTitle: Google OAuth
title: Configure Google OAuth authentication
weight: 1100
---

# Configure Google OAuth authentication

To enable Google OAuth you must register your application with Google. Google will generate a client ID and secret key for you to use.

Note: If users use the same email address in Google that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Refer to the [Using the same email address to login with different identity providers]() documentation for more information.

## Create Google OAuth keys

First, you need to create a Google OAuth client:

1. Go to https://console.developers.google.com/apis/credentials.
1. Create a new project if you don't have one already.
   1. Enter a project name. The **Organization** and **Location** fields should both be set to your organization's information.
1. In **OAuth consent screen**, select the **External** User Type, then click **CREATE**.
   1. Fill out the requested information using the URL of your Grafana Cloud instance.
   1. Accept the defaults, or customize the consent screen options.
1. Click **Create Credentials**, then click **OAuth Client ID** in the drop-down menu.
1. Enter the following:
   - Application Type: Web application
   - Name: Grafana
   - Authorized JavaScript origins: `https://<YOUR_GRAFANA_URL>`
   - Authorized redirect URIs: `https://<YOUR_GRAFANA_URL>/login/google`

   Replace `<YOUR_GRAFANA_URL>` with the URL of your Grafana instance. The URL you enter is the one for your Grafana instance home page, not your Grafana Cloud portal URL.
1. Click Create.
1. Copy the Client ID and Client Secret from the 'OAuth Client' modal.
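For example, if your Grafana instance were served at the hypothetical URL `https://grafana.mycompany.com`, the two fields above would be filled in as:

```
Authorized JavaScript origins:  https://grafana.mycompany.com
Authorized redirect URIs:       https://grafana.mycompany.com/login/google
```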
## Configure Google authentication client using the Grafana UI

Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.

As a Grafana Admin, you can configure the Google OAuth client from within Grafana using the Grafana UI. To do this, navigate to the **Administration > Authentication > Google** page and fill in the form. If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values; otherwise the form will contain default values.

After you have filled in the form, click **Save**. If the save was successful, Grafana will apply the new configuration.

If you need to reset changes made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.

Note: If you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.

## Configure Google authentication client using the Terraform provider

Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. Supported in the Terraform provider since v2.12.0.

```terraform
resource "grafana_sso_settings" "google_sso_settings" {
  provider_name = "google"
  oauth2_settings {
    name            = "Google"
    client_id       = "CLIENT_ID"
    client_secret   = "CLIENT_SECRET"
    allow_sign_up   = true
    auto_login      = false
    scopes          = "openid email profile"
    allowed_domains = "mycompany.com mycompany.org"
    hosted_domain   = "mycompany.com"
    use_pkce        = true
  }
}
```

Go to [Terraform Registry](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.
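If you prefer not to keep the client secret in the configuration file or in Terraform state, the same settings can also be supplied as environment variables. This sketch assumes Grafana's `GF_<SECTION>_<KEY>` naming convention for the `[auth.google]` section; the values shown are placeholders:

```shell
# Environment-variable equivalents of the [auth.google] settings above.
# Replace CLIENT_ID and CLIENT_SECRET with the values from the Google console.
export GF_AUTH_GOOGLE_ENABLED=true
export GF_AUTH_GOOGLE_CLIENT_ID=CLIENT_ID
export GF_AUTH_GOOGLE_CLIENT_SECRET=CLIENT_SECRET
export GF_AUTH_GOOGLE_SCOPES="openid email profile"
```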
## Configure Google authentication client using the Grafana configuration file

Ensure that you have access to the Grafana configuration file.

### Enable Google OAuth in Grafana

Specify the Client ID and Secret in the Grafana configuration file. For example:

```ini
[auth.google]
enabled = true
allow_sign_up = true
auto_login = false
client_id = CLIENT_ID
client_secret = CLIENT_SECRET
scopes = openid email profile
auth_url = https://accounts.google.com/o/oauth2/v2/auth
token_url = https://oauth2.googleapis.com/token
api_url = https://openidconnect.googleapis.com/v1/userinfo
allowed_domains = mycompany.com mycompany.org
hosted_domain = mycompany.com
use_pkce = true
```

You may have to set the `root_url` option of `[server]` for the callback URL to be correct, for example if you are serving Grafana behind a proxy.

Restart the Grafana back-end. You should now see a Google login button on the login page, and you can log in or sign up with your Google accounts. The `allowed_domains` option is optional, and domains are separated by spaces.

You may allow users to sign up via Google authentication by setting the `allow_sign_up` option to `true`. When this option is set to `true`, any user who successfully authenticates via Google is automatically signed up.

You may specify a domain to be passed as the `hd` query parameter accepted by Google's OAuth 2.0 authentication API. Refer to Google's OAuth [documentation](https://developers.google.com/identity/openid-connect/openid-connect#hd-param).

Since Grafana 10.3.0, the `hd` parameter retrieved from the Google ID token is also used to determine the user's hosted domain. The Google OAuth `allowed_domains` configuration option is used to restrict access to users from a specific domain. If the `allowed_domains` configuration option is set, the `hd` parameter from the Google ID token must match it; if it does not match, the user is denied access.
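As noted above, when Grafana sits behind a reverse proxy the OAuth callback URL is derived from `root_url`. A minimal sketch, assuming the hypothetical public URL `https://grafana.mycompany.com/`:

```ini
[server]
; Public URL that external clients (and Google's redirect) use to reach Grafana.
root_url = https://grafana.mycompany.com/
```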
When an account does not belong to a Google Workspace, the `hd` claim will not be available.

This validation is enabled by default. To disable it, set the `validate_hd` configuration option to `false`; the `allowed_domains` configuration option will then use the email claim to validate the domain.

### PKCE

IETF's [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636) introduces "proof key for code exchange" (PKCE), which provides additional protection against some forms of authorization code interception attacks. PKCE will be required in [OAuth 2.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-03).

You can disable PKCE in Grafana by setting `use_pkce` to `false` in the `[auth.google]` section.

### Configure refresh token

When a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token.

Grafana uses a refresh token to obtain a new access token without requiring the user to log in again. If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.

By default, Grafana includes the `access_type=offline` parameter in the authorization request to request a refresh token.

Refresh token fetching and the access token expiration check are enabled by default for the Google provider since Grafana v10.1.0. If you would like to disable the access token expiration check, set the `use_refresh_token` configuration value to `false`.

Note: The `accessTokenExpirationCheck` feature toggle has been removed in Grafana v10.3.0; the `use_refresh_token` configuration value is used instead to configure refresh token fetching and the access token expiration check.

### Configure automatic login

Set the `auto_login` option to `true` to attempt login automatically, skipping the login screen. This setting is ignored if multiple auth providers are configured to use auto-login.

```ini
auto_login = true
```
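The PKCE and refresh-token behaviors described above are both enabled by default. If you need to turn them off, for example with a setup that cannot complete the PKCE exchange or must not request offline access, a minimal sketch of the non-default settings:

```ini
[auth.google]
; Disable the PKCE code challenge on the authorization request.
use_pkce = false
; Disable refresh token fetching and the access token expiration check.
use_refresh_token = false
```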
### Configure group synchronization

Available in [Grafana Enterprise](https://grafana.com/docs/grafana/GRAFANA_VERSION/introduction/grafana-enterprise) and [Grafana Cloud](/docs/grafana-cloud).

Grafana supports syncing users to teams and roles based on their Google groups.

To set up group sync for Google OAuth:

1. Enable the Google Cloud Identity API on your [organization's dashboard](https://console.cloud.google.com/apis/api/cloudidentity.googleapis.com).
1. Add the `https://www.googleapis.com/auth/cloud-identity.groups.readonly` scope to your Grafana `[auth.google]` configuration.

Example:

```ini
[auth.google]
# ...
scopes = openid email profile https://www.googleapis.com/auth/cloud-identity.groups.readonly
```

The external group ID for a Google group is the group's email address, such as `dev@grafana.com`.

To learn more about how to configure group synchronization, refer to the [Configure team sync]() and [Configure group attribute sync](https://grafana.com/docs/grafana/GRAFANA_VERSION/setup-grafana/configure-security/configure-group-attribute-sync) documentation.

### Configure allowed groups

To limit access to authenticated users that are members of one or more groups, set `allowed_groups` to a comma- or space-separated list of groups.

Google groups are referenced by the group email key, for example `developers@google.com`.

Note: Add the `https://www.googleapis.com/auth/cloud-identity.groups.readonly` scope to your Grafana `[auth.google]` scopes configuration to retrieve groups.
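A sketch of restricting login to a single group, assuming the hypothetical group address `developers@mycompany.com`:

```ini
[auth.google]
; The groups scope is required so Grafana can resolve the user's group membership.
scopes = openid email profile https://www.googleapis.com/auth/cloud-identity.groups.readonly
allowed_groups = developers@mycompany.com
```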
### Configure role mapping

Unless the `skip_org_role_sync` option is enabled, the user's role is set to the role mapped from Google upon user login. If no mapping is set, the default instance role is used.

The user's role is retrieved using a [JMESPath](http://jmespath.org/examples.html) expression from the `role_attribute_path` configuration option. To map the server administrator role, use the `allow_assign_grafana_admin` configuration option.

If no valid role is found, the user is assigned the role specified by the `auto_assign_org_role` option. You can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions.

To ease configuration of a proper JMESPath expression, go to [JMESPath](http://jmespath.org/) to test and evaluate expressions with custom payloads.

By default, `skip_org_role_sync` is enabled. `skip_org_role_sync` will default to `false` in Grafana v10.3.0 and later versions.

#### Role mapping examples

This section includes examples of JMESPath expressions used for role mapping.

##### Org roles mapping example

The Google integration uses the external users' groups in the `org_mapping` configuration to map organizations and roles based on their Google group membership.

In this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs. The external user is part of the following Google groups: `group_1` and `group_2`.

Config:

```ini
org_mapping = group_1:org_foo:Viewer group_2:org_bar:Editor *:org_baz:Editor
```

##### Map roles using user information from OAuth token

In this example, the user with email `admin@company.com` has been granted the `Admin` role. All other users are granted the `Viewer` role.

```ini
role_attribute_path = email=='admin@company.com' && 'Admin' || 'Viewer'
skip_org_role_sync = false
```

##### Map roles using groups

In this example, users from the Google group `example-group@google.com` have been granted the `Editor` role. All other users are granted the `Viewer` role.

```ini
role_attribute_path = contains(groups[*], 'example-group@google.com') && 'Editor' || 'Viewer'
skip_org_role_sync = false
```
Note: Add the `https://www.googleapis.com/auth/cloud-identity.groups.readonly` scope to your Grafana `[auth.google]` scopes configuration to retrieve groups.

##### Map server administrator role

In this example, the user with email `admin@company.com` has been granted the `Admin` organization role as well as the Grafana server admin role. All other users are granted the `Viewer` role.

```ini
allow_assign_grafana_admin = true
skip_org_role_sync = false
role_attribute_path = email=='admin@company.com' && 'GrafanaAdmin' || 'Viewer'
```

##### Map one role to all users

In this example, all users are assigned the `Viewer` role, regardless of the user information received from the identity provider.

```ini
role_attribute_path = "'Viewer'"
skip_org_role_sync = false
```

## Configuration options

The following table outlines the various Google OAuth configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables]().

| Setting | Required | Supported on Cloud | Description | Default |
| ------- | -------- | ------------------ | ----------- | ------- |
| `enabled` | No | Yes | Enables Google authentication. | `false` |
| `name` | No | Yes | Name that refers to the Google authentication from the Grafana user interface. | `Google` |
| `icon` | No | Yes | Icon used for the Google authentication in the Grafana user interface. | `google` |
| `client_id` | Yes | Yes | Client ID of the App. | |
| `client_secret` | Yes | Yes | Client secret of the App. | |
| `auth_url` | Yes | Yes | Authorization endpoint of the Google OAuth provider. | `https://accounts.google.com/o/oauth2/v2/auth` |
| `token_url` | Yes | Yes | Endpoint used to obtain the OAuth2 access token. | `https://oauth2.googleapis.com/token` |
| `api_url` | Yes | Yes | Endpoint used to obtain user information compatible with [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo). | `https://openidconnect.googleapis.com/v1/userinfo` |
| `auth_style` | No | Yes | Name of the [OAuth2 AuthStyle](https://pkg.go.dev/golang.org/x/oauth2#AuthStyle) used when the ID token is requested from the OAuth2 provider. It determines how `client_id` and `client_secret` are sent to the OAuth2 provider. Available values are `AutoDetect`, `InParams` and `InHeader`. | `AutoDetect` |
| `scopes` | No | Yes | List of comma- or space-separated OAuth2 scopes. | `openid email profile` |
| `allow_sign_up` | No | Yes | Controls Grafana user creation through the Google login. Only existing Grafana users can log in with Google if set to `false`. | `true` |
| `auto_login` | No | Yes | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login. | `false` |
| `hosted_domain` | No | Yes | Specifies the domain to restrict access to users from that domain. This value is appended to the authorization request using the `hd` parameter. | |
| `validate_hd` | No | Yes | Set to `false` to disable the validation of the `hd` parameter from the Google ID token. For more information, refer to [Enable Google OAuth in Grafana](#enable-google-oauth-in-grafana). | `true` |
| `role_attribute_strict` | No | Yes | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Configure role mapping](#configure-role-mapping). | `false` |
| `org_attribute_path` | No | No | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana org to role lookup. Grafana first evaluates the expression using the OAuth2 ID token. If no value is returned, the expression is evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation is mapped to org roles based on `org_mapping`. For more information on org to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | |
| `org_mapping` | No | No | List of comma- or space-separated `<ExternalOrgName>:<OrgIdOrName>:<Role>` mappings. Value can be `*`, meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | |
and can have the following values   None    Viewer    Editor  or  Admin   For more information on external organization to role mapping  refer to  Org roles mapping example   org roles mapping example                                                                                                                                                                                                      allow assign grafana admin    No         No                   Set to  true  to automatically sync the Grafana server administrator role  When enabled  if the Google user s App role is  GrafanaAdmin   Grafana grants the user server administrator privileges and the organization administrator role  If disabled  the user will only receive the organization administrator role  For more details on user role mapping  refer to  Map roles                                                                                      false                                                  skip org role sync            No         Yes                  Set to  true  to stop automatically syncing user roles  This will allow you to set organization roles for your users from within Grafana manually                                                                                                                                                                                                                                                                                                                                                  false                                                  allowed groups                No         Yes                  List of comma  or space separated groups  The user should be a member of at least one group to log in  If you configure  allowed groups   you must also configure Google to include the  groups  claim following  Configure allowed groups                                                                                                                                     
                                                                                                                                 allowed organizations         No         Yes                  List of comma  or space separated Azure tenant identifiers  The user should be a member of at least one tenant to log in                                                                                                                                                                                                                                                                                                                                                                                                                                  allowed domains               No         Yes                  List of comma  or space separated domains  The user should belong to at least one domain to log in                                                                                                                                                                                                                                                                                                                                                                                                                                                        tls skip verify insecure      No         No                   If set to  true   the client accepts any certificate presented by the server and any host name in that certificate   You should only use this for testing   because this mode leaves SSL TLS susceptible to man in the middle attacks                                                                                                                                                                                                                                                              false                                                  tls client cert               No         No                   The path 
to the certificate                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               tls client key                No         No                   The path to the key                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       tls client ca                 No         No                   The path to the trusted certificate authority list                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        use pkce                      No         Yes                  Set to  true  to use  Proof Key for Code Exchange  PKCE   https   datatracker ietf org doc html rfc7636   Grafana uses the SHA256 based  S256  challenge method and a 128 bytes  base64url encoded  code 
verifier                                                                                                                                                                                                                                                                                  true                                                   use refresh token             No         Yes                  Enables the use of refresh tokens and checks for access token expiration  When enabled  Grafana automatically adds the  promp consent  and  access type offline  parameters to the authorization request                                                                                                                                                                                                                                                                                           true                                                   signout redirect url          No         Yes                  URL to redirect to after the user logs out                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            "}
{"questions":"grafana setup Grafana Azure AD OAuth Guide documentation labels aliases auth azuread keywords configuration grafana oauth","answers":"---\naliases:\n  - ..\/..\/..\/auth\/azuread\/\ndescription: Grafana Azure AD OAuth Guide\nkeywords:\n  - grafana\n  - configuration\n  - documentation\n  - oauth\nlabels:\n  products:\n    - cloud\n    - enterprise\n    - oss\nmenuTitle: Azure AD\/Entra ID OAuth\ntitle: Configure Azure AD\/Entra ID OAuth authentication\nweight: 800\n---\n\n# Configure Azure AD\/Entra ID OAuth authentication\n\nThe Azure AD authentication allows you to use a Microsoft Entra ID (formerly known as Azure Active Directory) tenant as an identity provider for Grafana. You can use Entra ID application roles to assign users and groups to Grafana roles from the Azure Portal.\n\n\nIf users use the same email address in Microsoft Entra ID that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to [Using the same email address to login with different identity providers]() for more information.\n\n\n## Create the Microsoft Entra ID application\n\nTo enable Azure AD\/Entra ID OAuth, register your application with Entra ID.\n\n1. Log in to [Azure Portal](https:\/\/portal.azure.com), then click **Microsoft Entra ID** in the side menu.\n\n1. If you have access to more than one tenant, select your account in the upper right. Set your session to the Entra ID tenant you wish to use.\n\n1. Under **Manage** in the side menu, click **App Registrations** > **New Registration**. Enter a descriptive name.\n\n1. Under **Redirect URI**, select the app type **Web**.\n\n1. Add the following redirect URLs `https:\/\/<grafana domain>\/login\/azuread` and `https:\/\/<grafana domain>`, then click **Register**. The app's **Overview** page opens.\n\n1. Note the **Application ID**. This is the OAuth client ID.\n\n1. 
Click **Endpoints** from the top menu.\n\n   - Note the **OAuth 2.0 authorization endpoint (v2)** URL. This is the authorization URL.\n   - Note the **OAuth 2.0 token endpoint (v2)**. This is the token URL.\n\n1. Click **Certificates & secrets** in the side menu, then add a new entry under **Client secrets** with the following configuration.\n\n   - Description: Grafana OAuth\n   - Expires: Select an expiration period\n\n1. Click **Add**, then copy the key **Value**. This is the OAuth client secret.\n\n\nMake sure that you copy the string in the **Value** field, rather than the one in the **Secret ID** field.\n\n\n1. Define the required application roles for Grafana [using the Azure Portal](#configure-application-roles-for-grafana-in-the-azure-portal) or [using the manifest file](#configure-application-roles-for-grafana-in-the-manifest-file).\n\n1. Go to **Microsoft Entra ID** and then to **Enterprise Applications**, under **Manage**.\n\n1. Search for your application and click it.\n\n1. Click **Users and Groups**.\n1. Click **Add user\/group** to add a user or group to the Grafana roles.\n\n\nWhen assigning a group to a Grafana role, ensure that users are direct members of the group. Users in nested groups will not have access to Grafana due to limitations on the Azure AD\/Entra ID side. For more information, see [Microsoft Entra service limits and restrictions](https:\/\/learn.microsoft.com\/en-us\/entra\/identity\/users\/directory-service-limits-restrictions).\n\n\n### Configure application roles for Grafana in the Azure Portal\n\nThis section describes setting up basic application roles for Grafana within the Azure Portal. For more information, see [Add app roles to your application and receive them in the token](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/howto-add-app-roles-in-apps).\n\n1. Go to **App Registrations**, search for your application, and click it.\n\n1. Click **App roles** and then **Create app role**.\n\n1. 
Define a role corresponding to each Grafana role: Viewer, Editor, and Admin.\n\n   1. Choose a **Display name** for the role. For example, \"Grafana Editor\".\n\n   1. Set the **Allowed member types** to **Users\/Groups**.\n\n   1. Ensure that the **Value** field matches the Grafana role name. For example, \"Editor\".\n\n   1. Choose a **Description** for the role. For example, \"Grafana Editor Users\".\n\n   1. Click **Apply**.\n\n### Configure application roles for Grafana in the manifest file\n\nIf you prefer to configure the application roles for Grafana in the manifest file, complete the following steps:\n\n1. Go to **App Registrations**, search for your application, and click it.\n\n1. Click **Manifest**.\n\n1. Add a Universally Unique Identifier to each role.\n\n\nEvery role requires a [Universally Unique Identifier](https:\/\/en.wikipedia.org\/wiki\/Universally_unique_identifier) which you can generate on Linux with `uuidgen`, and on Windows through Microsoft PowerShell with `New-Guid`.\n\n\n1. 
Replace each \"SOME_UNIQUE_ID\" with the generated ID in the manifest file:\n\n   ```json\n   \t\"appRoles\": [\n   \t\t\t{\n   \t\t\t\t\"allowedMemberTypes\": [\n   \t\t\t\t\t\"User\"\n   \t\t\t\t],\n   \t\t\t\t\"description\": \"Grafana org admin Users\",\n   \t\t\t\t\"displayName\": \"Grafana Org Admin\",\n   \t\t\t\t\"id\": \"SOME_UNIQUE_ID\",\n   \t\t\t\t\"isEnabled\": true,\n   \t\t\t\t\"lang\": null,\n   \t\t\t\t\"origin\": \"Application\",\n   \t\t\t\t\"value\": \"Admin\"\n   \t\t\t},\n   \t\t\t{\n   \t\t\t\t\"allowedMemberTypes\": [\n   \t\t\t\t\t\"User\"\n   \t\t\t\t],\n   \t\t\t\t\"description\": \"Grafana read only Users\",\n   \t\t\t\t\"displayName\": \"Grafana Viewer\",\n   \t\t\t\t\"id\": \"SOME_UNIQUE_ID\",\n   \t\t\t\t\"isEnabled\": true,\n   \t\t\t\t\"lang\": null,\n   \t\t\t\t\"origin\": \"Application\",\n   \t\t\t\t\"value\": \"Viewer\"\n   \t\t\t},\n   \t\t\t{\n   \t\t\t\t\"allowedMemberTypes\": [\n   \t\t\t\t\t\"User\"\n   \t\t\t\t],\n   \t\t\t\t\"description\": \"Grafana Editor Users\",\n   \t\t\t\t\"displayName\": \"Grafana Editor\",\n   \t\t\t\t\"id\": \"SOME_UNIQUE_ID\",\n   \t\t\t\t\"isEnabled\": true,\n   \t\t\t\t\"lang\": null,\n   \t\t\t\t\"origin\": \"Application\",\n   \t\t\t\t\"value\": \"Editor\"\n   \t\t\t}\n   \t\t],\n   ```\n\n1. 
Click **Save**.\n\n### Assign server administrator privileges\n\nIf the application role received by Grafana is `GrafanaAdmin`, Grafana grants the user server administrator privileges.\nThis is useful if you want to grant server administrator privileges to a subset of users.\nGrafana also assigns the user the `Admin` role of the default organization.\n\nThe setting `allow_assign_grafana_admin` under `[auth.azuread]` must be set to `true` for this to work.\nIf the setting is set to `false`, the user is assigned the role of `Admin` of the default organization, but not server administrator privileges.\n\n```json\n{\n  \"allowedMemberTypes\": [\"User\"],\n  \"description\": \"Grafana server admin Users\",\n  \"displayName\": \"Grafana Server Admin\",\n  \"id\": \"SOME_UNIQUE_ID\",\n  \"isEnabled\": true,\n  \"lang\": null,\n  \"origin\": \"Application\",\n  \"value\": \"GrafanaAdmin\"\n}\n```\n\n## Before you begin\n\nEnsure that you have followed the steps in [Create the Microsoft Entra ID application](#create-the-microsoft-entra-id-application) before you begin.\n\n## Configure Azure AD authentication client using the Grafana UI\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.\n\n\nAs a Grafana Admin, you can configure your Azure AD\/Entra ID OAuth client from within Grafana using the Grafana UI. To do this, navigate to the **Administration > Authentication > Azure AD** page and fill in the form. If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values. Otherwise the form will contain default values.\n\nAfter you have filled in the form, click **Save** to save the configuration. If the save was successful, Grafana will apply the new configurations.\n\nIf you need to reset changes you made in the UI back to the default values, click **Reset**. 
After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.\n\n\nIf you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.\n\n\n## Configure Azure AD authentication client using the Terraform provider\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. Supported in the Terraform provider since v2.12.0.\n\n\n```terraform\nresource \"grafana_sso_settings\" \"azuread_sso_settings\" {\n  provider_name = \"azuread\"\n  oauth2_settings {\n    name                       = \"Azure AD\"\n    auth_url                   = \"https:\/\/login.microsoftonline.com\/TENANT_ID\/oauth2\/v2.0\/authorize\"\n    token_url                  = \"https:\/\/login.microsoftonline.com\/TENANT_ID\/oauth2\/v2.0\/token\"\n    client_id                  = \"APPLICATION_ID\"\n    client_secret              = \"CLIENT_SECRET\"\n    allow_sign_up              = true\n    auto_login                 = false\n    scopes                     = \"openid email profile\"\n    allowed_organizations      = \"TENANT_ID\"\n    role_attribute_strict      = false\n    allow_assign_grafana_admin = false\n    skip_org_role_sync         = false\n    use_pkce                   = true\n  }\n}\n```\n\nRefer to [Terraform Registry](https:\/\/registry.terraform.io\/providers\/grafana\/grafana\/latest\/docs\/resources\/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.\n\n## Configure Azure AD authentication client using the Grafana configuration file\n\nEnsure that you have access to the [Grafana configuration file]().\n\n### Enable Azure AD OAuth in Grafana\n\nAdd the following to the [Grafana configuration file]():\n\n```\n[auth.azuread]\nname = Azure AD\nenabled = true\nallow_sign_up 
= true\nauto_login = false\nclient_id = APPLICATION_ID\nclient_secret = CLIENT_SECRET\nscopes = openid email profile\nauth_url = https:\/\/login.microsoftonline.com\/TENANT_ID\/oauth2\/v2.0\/authorize\ntoken_url = https:\/\/login.microsoftonline.com\/TENANT_ID\/oauth2\/v2.0\/token\nallowed_domains =\nallowed_groups =\nallowed_organizations = TENANT_ID\nrole_attribute_strict = false\nallow_assign_grafana_admin = false\nskip_org_role_sync = false\nuse_pkce = true\n```\n\nYou can also use these environment variables to configure **client_id** and **client_secret**:\n\n```\nGF_AUTH_AZUREAD_CLIENT_ID\nGF_AUTH_AZUREAD_CLIENT_SECRET\n```\n\n\nVerify that the Grafana [root_url]() is set in your Azure Application Redirect URLs.\n\n\n### Configure refresh token\n\nWhen a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token.\n\nGrafana uses a refresh token to obtain a new access token without requiring the user to log in again. If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.\n\nRefresh token fetching and the access token expiration check are enabled by default for the AzureAD provider since Grafana v10.1.0. If you would like to disable the access token expiration check, set the `use_refresh_token` configuration value to `false`.\n\n> **Note:** The `accessTokenExpirationCheck` feature toggle has been removed in Grafana v10.3.0 and the `use_refresh_token` configuration value will be used instead for configuring refresh token fetching and access token expiration check.\n\n### Configure allowed tenants\n\nTo limit access to authenticated users who are members of one or more tenants, set `allowed_organizations`\nto a comma- or space-separated list of tenant IDs. 
You can find tenant IDs on the Azure portal under **Microsoft Entra ID -> Overview**.\n\nMake sure to include the tenant IDs of all the federated Users' root directory if your Entra ID contains external identities.\n\nFor example, if you want to only give access to members of the tenant `example` with an ID of `8bab1c86-8fba-33e5-2089-1d1c80ec267d`, then set the following:\n\n```\nallowed_organizations = 8bab1c86-8fba-33e5-2089-1d1c80ec267d\n```\n\n### Configure allowed groups\n\nMicrosoft Entra ID groups can be used to limit user access to Grafana. For more information about managing groups in Entra ID, refer to [Manage Microsoft Entra groups and group membership](https:\/\/learn.microsoft.com\/en-us\/entra\/fundamentals\/how-to-manage-groups).\n\nTo limit access to authenticated users who are members of one or more Entra ID groups, set `allowed_groups`\nto a **comma-** or **space-separated** list of group object IDs.\n\n1. To find object IDs for a specific group on the Azure portal, go to **Microsoft Entra ID > Manage > Groups**.\n\n   You can find the Object Id of a group by clicking on the group and then clicking on **Properties**. The object ID is listed under **Object ID**. If you want to only give access to members of the group `example` with an Object Id of `8bab1c86-8fba-33e5-2089-1d1c80ec267d`, then set the following:\n\n   ```\n     allowed_groups = 8bab1c86-8fba-33e5-2089-1d1c80ec267d\n   ```\n\n1. 
You must enable adding the [group attribute](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/optional-claims#configure-groups-optional-claims) to the tokens in your Entra ID App registration either [from the Azure Portal](#configure-group-membership-claims-on-the-azure-portal) or [from the manifest file](#configure-group-membership-claim-in-the-manifest-file).\n\n#### Configure group membership claims on the Azure Portal\n\nTo ensure that the `groups` claim is included in the token, add the `groups` claim to the token configuration either through the Azure Portal UI or by editing the manifest file.\n\nTo configure group membership claims from the Azure Portal UI, complete the following steps:\n\n1. Navigate to the **App Registrations** page and select your application.\n1. Under **Manage** in the side menu, select **Token configuration**.\n1. Click **Add groups claim** and select the relevant option for your use case (for example, **Security groups** and **Groups assigned to the application**).\n\nFor more information, see [Configure groups optional claims](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/optional-claims#configure-groups-optional-claims).\n\n\nIf the user is a member of more than 200 groups, Entra ID does not emit the groups claim in the token and instead emits a group overage claim. To set up a group overage claim, see [Users with over 200 Group assignments](#users-with-over-200-group-assignments).\n\n\n#### Configure group membership claim in the manifest file\n\n1. Go to **App Registrations**, search for your application, and click it.\n\n1. Click **Manifest**.\n\n1. Add the following to the root of the manifest file:\n\n   ```\n   \"groupMembershipClaims\": \"ApplicationGroup, SecurityGroup\"\n   ```\n\n### Configure allowed domains\n\nThe `allowed_domains` option limits access to users who belong to specific domains. Separate domains with space or comma. 
For example,\n\n```\nallowed_domains = mycompany.com mycompany.org\n```\n\n### PKCE\n\nIETF's [RFC 7636](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7636)\nintroduces \"proof key for code exchange\" (PKCE) which provides\nadditional protection against some forms of authorization code\ninterception attacks. PKCE will be required in [OAuth 2.1](https:\/\/datatracker.ietf.org\/doc\/html\/draft-ietf-oauth-v2-1-03).\n\n> You can disable PKCE in Grafana by setting `use_pkce` to `false` in the `[auth.azuread]` section.\n\n### Configure automatic login\n\nTo bypass the login screen and log in automatically, enable the \"auto_login\" feature.\nThis setting is ignored if multiple auth providers are configured to use auto login.\n\n```\nauto_login = true\n```\n\n### Group sync (Enterprise only)\n\nWith group sync you can map your Entra ID groups to teams and roles in Grafana. This allows users to automatically be added to\nthe correct teams and be granted the correct roles in Grafana.\n\nYou can reference Entra ID groups by group object ID, like `8bab1c86-8fba-33e5-2089-1d1c80ec267d`.\n\nTo learn more about group synchronization, refer to [Configure team sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-team-sync) and [Configure group attribute sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-group-attribute-sync).\n\n## Common troubleshooting\n\nHere are some common issues and particulars you can run into when\nconfiguring Azure AD authentication in Grafana.\n\n### Users with over 200 Group assignments\n\nTo ensure that the token size doesn't exceed HTTP header size limits,\nEntra ID limits the number of object IDs that it includes in the groups claim.\nIf a user is a member of more groups than the\noverage limit (200), then\nEntra ID does not emit the groups claim in the token and emits a group overage claim instead.\n\n> More information in [Groups overage 
claim](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/id-token-claims-reference#groups-overage-claim)\n\nIf Grafana receives a token with a group overage claim instead of a groups claim,\nGrafana attempts to retrieve the user's group membership by calling the included endpoint.\n\n\nThe 'App registration' must include the `GroupMember.Read.All` API permission for group overage claim calls to succeed.\n\nAdmin consent might be required for this permission.\n\n\n#### Configure the required Graph API permissions\n\n1. Navigate to **Microsoft Entra ID > Manage > App registrations** and select your application.\n1. Select **API permissions** and then click on **Add a permission**.\n1. Select **Microsoft Graph** from the list of APIs.\n1. Select **Delegated permissions**.\n1. Under the **GroupMember** section, select **GroupMember.Read.All**.\n1. Click **Add permissions**.\n\n\nAdmin consent may be required for this permission.\n\n\n### Force fetching groups from Microsoft Graph API\n\nTo force fetching groups from the Microsoft Graph API instead of the `id_token`, use the `force_use_graph_api` config option.\n\n```\nforce_use_graph_api = true\n```\n\n### Map roles\n\nBy default, Azure AD authentication will map users to organization roles based on the most privileged application role assigned to the user in Entra ID.\n\nIf no application role is found, the user is assigned the role specified by\n[the `auto_assign_org_role` option]().\nYou can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned and the `org_mapping` expression evaluates to an empty mapping.\n\nYou can use the `org_mapping` configuration option to assign the user to multiple organizations and specify their role based on their Entra ID group membership. For more information, refer to [Org roles mapping example](#org-roles-mapping-example). 
If the org role mapping (`org_mapping`) is specified and Entra ID returns a valid role, then the user will get the highest of the two roles.\n\n**On every login** the user organization role will be reset to match Entra ID's application role and\ntheir organization membership will be reset to the default organization.\n\n#### Org roles mapping example\n\nThe Entra ID integration uses the external users' groups in the `org_mapping` configuration to map organizations and roles based on their Entra ID group membership.\n\nIn this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs.\n\nThe external user is part of the following Entra ID groups: `032cb8e0-240f-4347-9120-6f33013e817a` and `bce1c492-0679-4989-941b-8de5e6789cb9`.\n\nConfig:\n\n```ini\norg_mapping = [\"032cb8e0-240f-4347-9120-6f33013e817a:org_foo:Viewer\", \"bce1c492-0679-4989-941b-8de5e6789cb9:org_bar:Editor\", \"*:org_baz:Editor\"]\n```\n\n## Skip organization role sync\n\nTo prevent Azure AD authentication from syncing user roles and organization membership from Entra ID, set `skip_org_role_sync` to `true`. This is useful if you want to manage the organization roles for your users from within Grafana, or if your organization roles are synced from another provider.\nSee [Configure Grafana]() for more details.\n\n```ini\n[auth.azuread]\n# ..\n# prevents the sync of org roles from AzureAD\nskip_org_role_sync = true\n```\n\n## Configuration options\n\nThe following table outlines the various Azure AD\/Entra ID configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. 
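As a minimal sketch of such an override (the variable names follow the `GF_AUTH_AZUREAD_<OPTION>` pattern shown for `client_id` and `client_secret` earlier in this guide; `APPLICATION_ID` and `CLIENT_SECRET` are placeholders, not real values):

```shell
# Override [auth.azuread] options through environment variables
# before starting Grafana. Placeholder values shown below.
export GF_AUTH_AZUREAD_ENABLED=true
export GF_AUTH_AZUREAD_CLIENT_ID=APPLICATION_ID
export GF_AUTH_AZUREAD_CLIENT_SECRET=CLIENT_SECRET
export GF_AUTH_AZUREAD_ROLE_ATTRIBUTE_STRICT=false
```

Environment variables take precedence over the corresponding values in the configuration file.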
For more information, refer to [Override configuration with environment variables]().

| Setting | Required | Supported on Cloud | Description | Default |
| ------- | -------- | ------------------ | ----------- | ------- |
| `enabled` | No | Yes | Enables Azure AD/Entra ID authentication. | `false` |
| `name` | No | Yes | Name that refers to the Azure AD/Entra ID authentication from the Grafana user interface. | `OAuth` |
| `icon` | No | Yes | Icon used for the Azure AD/Entra ID authentication in the Grafana user interface. | `signin` |
| `client_id` | Yes | Yes | Client ID of the App (`Application (client) ID` on the **App registration** dashboard). | |
| `client_secret` | Yes | Yes | Client secret of the App. | |
| `auth_url` | Yes | Yes | Authorization endpoint of the Azure AD/Entra ID OAuth2 provider. | |
| `token_url` | Yes | Yes | Endpoint used to obtain the OAuth2 access token. | |
| `auth_style` | No | Yes | Name of the [OAuth2 AuthStyle](https://pkg.go.dev/golang.org/x/oauth2#AuthStyle) to be used when the ID token is requested from the OAuth2 provider. It determines how `client_id` and `client_secret` are sent to the OAuth2 provider. Available values are `AutoDetect`, `InParams` and `InHeader`. | `AutoDetect` |
| `scopes` | No | Yes | List of comma- or space-separated OAuth2 scopes. | `openid email profile` |
| `allow_sign_up` | No | Yes | Controls Grafana user creation through the Azure AD/Entra ID login. Only existing Grafana users can log in with Azure AD/Entra ID if set to `false`. | `true` |
| `auto_login` | No | Yes | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login. | `false` |
| `role_attribute_strict` | No | Yes | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Map roles](#map-roles). | `false` |
| `org_attribute_path` | No | No | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana org to role lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no value is returned, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation will be mapped to org roles based on `org_mapping`. For more information on org to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | |
| `org_mapping` | No | No | List of comma- or space-separated `<ExternalOrgName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | |
| `allow_assign_grafana_admin` | No | No | Set to `true` to automatically sync the Grafana server administrator role. When enabled, if the Azure AD/Entra ID user's App role is `GrafanaAdmin`, Grafana grants the user server administrator privileges and the organization administrator role. If disabled, the user will only receive the organization administrator role. For more details on user role mapping, refer to [Map roles](#map-roles). | `false` |
| `skip_org_role_sync` | No | Yes | Set to `true` to stop automatically syncing user roles. This allows you to set organization roles for your users from within Grafana manually. | `false` |
| `allowed_groups` | No | Yes | List of comma- or space-separated groups. The user should be a member of at least one group to log in. If you configure `allowed_groups`, you must also configure Azure AD/Entra ID to include the `groups` claim following [Configure group membership claims on the Azure Portal](#configure-group-membership-claims-on-the-azure-portal). | |
| `allowed_organizations` | No | Yes | List of comma- or space-separated Azure tenant identifiers. The user should be a member of at least one tenant to log in. | |
| `allowed_domains` | No | Yes | List of comma- or space-separated domains. The user should belong to at least one domain to log in. | |
| `tls_skip_verify_insecure` | No | No | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL/TLS susceptible to man-in-the-middle attacks. | `false` |
| `tls_client_cert` | No | No | The path to the certificate. | |
| `tls_client_key` | No | No | The path to the key. | |
| `tls_client_ca` | No | No | The path to the trusted certificate authority list. | |
| `use_pkce` | No | Yes | Set to `true` to use [Proof Key for Code Exchange (PKCE)](https://datatracker.ietf.org/doc/html/rfc7636). Grafana uses the SHA256 based `S256` challenge method and a 128 bytes (base64url encoded) code verifier. | `true` |
| `use_refresh_token` | No | Yes | Enables the use of refresh tokens and checks for access token expiration. When enabled, Grafana automatically adds the `offline_access` scope to the list of scopes. | `true` |
| `force_use_graph_api` | No | Yes | Set to `true` to always fetch groups from the Microsoft Graph API instead of the `id_token`. If a user belongs to more than 200 groups, the Microsoft Graph API will be used to retrieve the groups regardless of this setting. | `false` |
| `signout_redirect_url` | No | Yes | URL to redirect to after the user logs out. | |

# Configure Azure AD/Entra ID OAuth authentication

The Azure AD authentication allows you to use a Microsoft Entra ID (formerly known as Azure Active Directory) tenant as an identity provider for Grafana. You can use Entra ID application roles to assign users and groups to Grafana roles from the Azure Portal.

> **Note:** If users use the same email address in Microsoft Entra ID that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Refer to [Using the same email address to login with different identity providers]() for more information.

## Create the Microsoft Entra ID application

To enable the Azure AD/Entra ID OAuth, register your application with Entra ID:

1. Log in to [Azure Portal](https://portal.azure.com), then click **Microsoft Entra ID** in the side menu.
1. If you have access to more than one tenant, select your account in the upper right. Set your session to the Entra ID tenant you wish to use.
1. Under **Manage** in the side menu, click **App Registrations** > **New Registration**. Enter a descriptive name.
1. Under **Redirect URI**, select the app type **Web**.
1. Add the following redirect URLs: `https://<grafana domain>/login/azuread` and `https://<grafana domain>`, then click **Register**. The app's **Overview** page opens.
1. Note the **Application ID**. This is the OAuth client ID.
1. Click **Endpoints** from the top menu.
   - Note the **OAuth 2.0 authorization endpoint (v2)** URL. This is the authorization URL.
   - Note the **OAuth 2.0 token endpoint (v2)**. This is the token URL.
1. Click **Certificates & secrets** in the side menu, then add a new entry under **Client secrets** with the following configuration:
   - Description: Grafana OAuth
   - Expires: Select an expiration period
1. Click **Add**, then copy the key **Value**. This is the OAuth client secret. Make sure that you copy the string in the **Value** field, rather than the one in the **Secret ID** field.
1. Define the required application roles for Grafana [using the Azure Portal](#configure-application-roles-for-grafana-in-the-azure-portal) or [using the manifest file](#configure-application-roles-for-grafana-in-the-manifest-file).
1. Go to **Microsoft Entra ID** and then to **Enterprise Applications**, under **Manage**.
1. Search for your application and click it.
1. Click **Users and Groups**.
1. Click **Add user/group** to add a user or group to the Grafana roles.

> **Note:** When assigning a group to a Grafana role, ensure that users are direct members of the group. Users in nested groups will not have access to Grafana due to limitations on the Azure AD/Entra ID side. For more information, see [Microsoft Entra service limits and restrictions](https://learn.microsoft.com/en-us/entra/identity/users/directory-service-limits-restrictions).
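For reference, the two v2.0 endpoints noted in the steps above follow a fixed, tenant-scoped pattern (the same one used in the configuration examples later in this guide). A minimal sketch that derives both from a tenant ID, with `TENANT_ID` as a placeholder:

```python
# Sketch: derive the Entra ID v2.0 OAuth2 endpoints from a tenant ID.
# TENANT_ID is a placeholder; copy the real value from the app's Overview page.
TENANT_ID = "TENANT_ID"

BASE = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0"
auth_url = f"{BASE}/authorize"  # OAuth 2.0 authorization endpoint (v2)
token_url = f"{BASE}/token"     # OAuth 2.0 token endpoint (v2)

print(auth_url)
print(token_url)
```

Copying the URLs from the **Endpoints** page remains the authoritative route; this pattern is just a convenient cross-check.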
## Configure application roles for Grafana in the Azure Portal

This section describes setting up basic application roles for Grafana within the Azure Portal. For more information, see [Add app roles to your application and receive them in the token](https://learn.microsoft.com/en-us/entra/identity-platform/howto-add-app-roles-in-apps).

1. Go to **App Registrations**, search for your application, and click it.
1. Click **App roles** and then **Create app role**.
1. Define a role corresponding to each Grafana role: Viewer, Editor, and Admin.
   1. Choose a **Display name** for the role. For example, "Grafana Editor".
   1. Set the **Allowed member types** to **Users/Groups**.
   1. Ensure that the **Value** field matches the Grafana role name. For example, "Editor".
   1. Choose a **Description** for the role. For example, "Grafana Editor Users".
   1. Click **Apply**.

## Configure application roles for Grafana in the manifest file

If you prefer to configure the application roles for Grafana in the manifest file, complete the following steps:

1. Go to **App Registrations**, search for your application, and click it.
1. Click **Manifest**.
1. Add a Universally Unique Identifier to each role.

   Every role requires a [Universally Unique Identifier](https://en.wikipedia.org/wiki/Universally_unique_identifier), which you can generate on Linux with `uuidgen`, and on Windows through Microsoft PowerShell with `New-Guid`.

1. Replace each `SOME_UNIQUE_ID` with the generated ID in the manifest file:

   ```json
   "appRoles": [
     {
       "allowedMemberTypes": ["User"],
       "description": "Grafana org admin Users",
       "displayName": "Grafana Org Admin",
       "id": "SOME_UNIQUE_ID",
       "isEnabled": true,
       "lang": null,
       "origin": "Application",
       "value": "Admin"
     },
     {
       "allowedMemberTypes": ["User"],
       "description": "Grafana read only Users",
       "displayName": "Grafana Viewer",
       "id": "SOME_UNIQUE_ID",
       "isEnabled": true,
       "lang": null,
       "origin": "Application",
       "value": "Viewer"
     },
     {
       "allowedMemberTypes": ["User"],
       "description": "Grafana Editor Users",
       "displayName": "Grafana Editor",
       "id": "SOME_UNIQUE_ID",
       "isEnabled": true,
       "lang": null,
       "origin": "Application",
       "value": "Editor"
     }
   ]
   ```
1. Click **Save**.

## Assign server administrator privileges

If the application role received by Grafana is `GrafanaAdmin`, Grafana grants the user server administrator privileges. This is useful if you want to grant server administrator privileges to a subset of users. Grafana also assigns the user the `Admin` role of the default organization.

The setting `allow_assign_grafana_admin` under `[auth.azuread]` must be set to `true` for this to work. If the setting is set to `false`, the user is assigned the role of `Admin` of the default organization, but not server administrator privileges.

```json
{
  "allowedMemberTypes": ["User"],
  "description": "Grafana server admin Users",
  "displayName": "Grafana Server Admin",
  "id": "SOME_UNIQUE_ID",
  "isEnabled": true,
  "lang": null,
  "origin": "Application",
  "value": "GrafanaAdmin"
}
```

## Before you begin

Ensure that you have followed the steps in [Create the Microsoft Entra ID application](#create-the-microsoft-entra-id-application) before you begin.

## Configure Azure AD authentication client using the Grafana UI

> Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.

As a Grafana Admin, you can configure your Azure AD/Entra ID OAuth client from within Grafana using the Grafana UI. To do this, navigate to the **Administration > Authentication > Azure AD** page and fill in the form. If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values. Otherwise the form will contain default values.

After you have filled in the form, click **Save** to save the configuration.
If the save was successful, Grafana will apply the new configuration.

If you need to reset changes you made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.

> **Note:** If you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.

## Configure Azure AD authentication client using the Terraform provider

> Available in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. Supported in the Terraform provider since v2.12.0.

```terraform
resource "grafana_sso_settings" "azuread_sso_settings" {
  provider_name = "azuread"
  oauth2_settings {
    name                       = "Azure AD"
    auth_url                   = "https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/authorize"
    token_url                  = "https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token"
    client_id                  = "APPLICATION_ID"
    client_secret              = "CLIENT_SECRET"
    allow_sign_up              = true
    auto_login                 = false
    scopes                     = "openid email profile"
    allowed_organizations      = "TENANT_ID"
    role_attribute_strict      = false
    allow_assign_grafana_admin = false
    skip_org_role_sync         = false
    use_pkce                   = true
  }
}
```

Refer to [Terraform Registry](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.

## Configure Azure AD authentication client using the Grafana configuration file

Ensure that you have access to the Grafana configuration file.

### Enable Azure AD OAuth in Grafana
Add the following to the Grafana configuration file:

```ini
[auth.azuread]
name = Azure AD
enabled = true
allow_sign_up = true
auto_login = false
client_id = APPLICATION_ID
client_secret = CLIENT_SECRET
scopes = openid email profile
auth_url = https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/authorize
token_url = https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token
allowed_domains =
allowed_groups =
allowed_organizations = TENANT_ID
role_attribute_strict = false
allow_assign_grafana_admin = false
skip_org_role_sync = false
use_pkce = true
```

You can also use these environment variables to configure `client_id` and `client_secret`:

```
GF_AUTH_AZUREAD_CLIENT_ID
GF_AUTH_AZUREAD_CLIENT_SECRET
```

> **Note:** Verify that the Grafana `root_url` is set in your Azure Application Redirect URLs.

### Configure refresh token

When a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token.

Grafana uses a refresh token to obtain a new access token without requiring the user to log in again. If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.

Refresh token fetching and the access token expiration check are enabled by default for the AzureAD provider since Grafana v10.1.0. If you would like to disable the access token expiration check, set the `use_refresh_token` configuration value to `false`.

> **Note:** The `accessTokenExpirationCheck` feature toggle has been removed in Grafana v10.3.0; the `use_refresh_token` configuration value is used instead to configure refresh token fetching and the access token expiration check.

### Configure allowed tenants

To limit access to authenticated users who are members of one or more tenants, set `allowed_organizations` to a comma- or space-separated list of tenant IDs. You can find tenant IDs on the Azure portal under **Microsoft Entra ID** > **Overview**.
Make sure to include the tenant IDs of all the federated users' root directory if your Entra ID contains external identities.

For example, if you want to only give access to members of the tenant `example` with an ID of `8bab1c86-8fba-33e5-2089-1d1c80ec267d`, then set the following:

```ini
allowed_organizations = 8bab1c86-8fba-33e5-2089-1d1c80ec267d
```

### Configure allowed groups

Microsoft Entra ID groups can be used to limit user access to Grafana. For more information about managing groups in Entra ID, refer to [Manage Microsoft Entra groups and group membership](https://learn.microsoft.com/en-us/entra/fundamentals/how-to-manage-groups).

To limit access to authenticated users who are members of one or more Entra ID groups, set `allowed_groups` to a comma- or space-separated list of group object IDs.

1. To find object IDs for a specific group on the Azure portal, go to **Microsoft Entra ID** > **Manage** > **Groups**.

   You can find the object ID of a group by clicking on the group and then clicking on **Properties**. The object ID is listed under **Object ID**. If you want to only give access to members of the group `example` with an object ID of `8bab1c86-8fba-33e5-2089-1d1c80ec267d`, then set the following:

   ```ini
   allowed_groups = 8bab1c86-8fba-33e5-2089-1d1c80ec267d
   ```

1. You must enable adding the [group attribute](https://learn.microsoft.com/en-us/entra/identity-platform/optional-claims#configure-groups-optional-claims) to the tokens in your Entra ID App registration, either [from the Azure Portal](#configure-group-membership-claims-on-the-azure-portal) or [from the manifest file](#configure-group-membership-claim-in-the-manifest-file).

### Configure group membership claims on the Azure Portal

To ensure that the `groups` claim is included in the token, add the `groups` claim to the token configuration either through the Azure Portal UI or by editing the manifest file.

To configure group membership claims from the Azure Portal UI, complete the following steps:
1. Navigate to the **App Registrations** page and select your application.
1. Under **Manage** in the side menu, select **Token configuration**.
1. Click **Add groups claim** and select the relevant option for your use case (for example, **Security groups** and **Groups assigned to the application**).

For more information, see [Configure groups optional claims](https://learn.microsoft.com/en-us/entra/identity-platform/optional-claims#configure-groups-optional-claims).

> **Note:** If the user is a member of more than 200 groups, Entra ID does not emit the groups claim in the token and instead emits a group overage claim. To set up a group overage claim, see [Users with over 200 Group assignments](#users-with-over-200-group-assignments).

### Configure group membership claim in the manifest file

1. Go to **App Registrations**, search for your application, and click it.
1. Click **Manifest**.
1. Add the following to the root of the manifest file:

   ```json
   "groupMembershipClaims": "ApplicationGroup, SecurityGroup"
   ```

### Configure allowed domains

The `allowed_domains` option limits access to users who belong to specific domains. Separate domains with space or comma. For example:

```ini
allowed_domains = mycompany.com mycompany.org
```

### PKCE

IETF's [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636) introduces "proof key for code exchange" (PKCE), which provides additional protection against some forms of authorization code interception attacks. PKCE will be required in [OAuth 2.1](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-03).

You can disable PKCE in Grafana by setting `use_pkce` to `false` in the `[auth.azuread]` section.

### Configure automatic login

To bypass the login screen and log in automatically, enable the `auto_login` feature. This setting is ignored if multiple auth providers are configured to use auto-login.

```ini
auto_login = true
```
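The `S256` challenge method described in the PKCE section above can be sketched as follows. This is an illustration of RFC 7636, not Grafana code: a random base64url-encoded verifier (96 random bytes yield the 128-character encoded verifier this guide mentions) is hashed with SHA256 to produce the challenge.

```python
import base64
import hashlib
import secrets

# Illustrative RFC 7636 PKCE pair with the S256 method, as used by Grafana
# when use_pkce = true. Grafana performs this itself; shown only as a sketch.
def make_pkce_pair() -> tuple[str, str]:
    # Random code verifier, base64url-encoded without padding (128 chars).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(96)).rstrip(b"=").decode()
    # S256 challenge: BASE64URL(SHA256(ASCII(verifier))).
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier), challenge)
```

The challenge is sent in the authorization request, while the verifier is only revealed in the token exchange, so an intercepted authorization code is useless on its own.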
## Group sync (Enterprise only)

With group sync you can map your Entra ID groups to teams and roles in Grafana. This allows users to automatically be added to the correct teams and be granted the correct roles in Grafana.

You can reference Entra ID groups by group object ID, like `8bab1c86-8fba-33e5-2089-1d1c80ec267d`.

To learn more about group synchronization, refer to [Configure team sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-team-sync/) and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync/).

## Common troubleshooting

Here are some common issues and particulars you can run into when configuring Azure AD authentication in Grafana.

### Users with over 200 Group assignments

To ensure that the token size doesn't exceed HTTP header size limits, Entra ID limits the number of object IDs that it includes in the groups claim. If a user is a member of more groups than the overage limit (200), then Entra ID does not emit the groups claim in the token and emits a group overage claim instead. More information in [Groups overage claim](https://learn.microsoft.com/en-us/entra/identity-platform/id-token-claims-reference#groups-overage-claim).

If Grafana receives a token with a group overage claim instead of a groups claim, Grafana attempts to retrieve the user's group membership by calling the included endpoint.

> **Note:** The App registration must include the `GroupMember.Read.All` API permission for group overage claim calls to succeed. Admin consent might be required for this permission.

#### Configure the required Graph API permissions

1. Navigate to **Microsoft Entra ID** > **Manage** > **App registrations** and select your application.
1. Select **API permissions** and then click on **Add a permission**.
1. Select **Microsoft Graph** from the list of APIs.
1. Select **Delegated permissions**.
1. Under the **GroupMember** section, select **GroupMember.Read.All**.
1. Click **Add permissions**.

> **Note:** Admin consent may be required for this permission.

### Force fetching groups from Microsoft Graph API

To force fetching groups from the Microsoft Graph API instead of the `id_token`, you can use the `force_use_graph_api` config option.

```ini
force_use_graph_api = true
```

## Map roles

By default, Azure AD authentication will map users to organization roles based on the most privileged application role assigned to the user in Entra ID.

If no application role is found, the user is assigned the role specified by the `auto_assign_org_role` option.

You can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned and the `org_mapping` expression evaluates to an empty mapping.

You can use the `org_mapping` configuration option to assign the user to multiple organizations and specify their role based on their Entra ID group membership. For more information, refer to [Org roles mapping example](#org-roles-mapping-example). If the org role mapping (`org_mapping`) is specified and Entra ID returns a valid role, then the user will get the highest of the two roles.

> **Note:** On every login, the user organization role will be reset to match Entra ID's application role, and their organization membership will be reset to the default organization.

### Org roles mapping example

The Entra ID integration uses the external users' groups in the `org_mapping` configuration to map organizations and roles based on their Entra ID group membership.

In this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs.

The external user is part of the following Entra ID groups: `032cb8e0-240f-4347-9120-6f33013e817a` and `bce1c492-0679-4989-941b-8de5e6789cb9`.

Config:

```ini
org_mapping = ["032cb8e0-240f-4347-9120-6f33013e817a:org_foo:Viewer", "bce1c492-0679-4989-941b-8de5e6789cb9:org_bar:Editor", "*:org_baz:Editor"]
```
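As an illustrative sketch (not Grafana's implementation), the `org_mapping` entries in the example above resolve a user's group memberships to per-org roles roughly like this; the helper name and parsing are hypothetical:

```python
# Hypothetical sketch of how org_mapping entries of the form
# <ExternalOrgName>:<OrgIdOrName>:<Role> resolve a user's org roles.
ORG_MAPPING = [
    "032cb8e0-240f-4347-9120-6f33013e817a:org_foo:Viewer",
    "bce1c492-0679-4989-941b-8de5e6789cb9:org_bar:Editor",
    "*:org_baz:Editor",
]

def resolve_org_roles(user_groups: list[str]) -> dict[str, str]:
    roles: dict[str, str] = {}
    for entry in ORG_MAPPING:
        group, org, role = entry.split(":")
        # "*" matches all users; otherwise the user must be in the group.
        if group == "*" or group in user_groups:
            roles[org] = role
    return roles

user = [
    "032cb8e0-240f-4347-9120-6f33013e817a",
    "bce1c492-0679-4989-941b-8de5e6789cb9",
]
print(resolve_org_roles(user))
# {'org_foo': 'Viewer', 'org_bar': 'Editor', 'org_baz': 'Editor'}
```

A user in neither group would still land in `org_baz` as an `Editor` via the `*` wildcard entry.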
org foo Viewer    bce1c492 0679 4989 941b 8de5e6789cb9 org bar Editor      org baz Editor           Skip organization role sync  If Azure AD authentication is not intended to sync user roles and organization membership and prevent the sync of org roles from Entra ID  set  skip org role sync  to  true   This is useful if you want to manage the organization roles for your users from within Grafana or that your organization roles are synced from another provider  See  Configure Grafana    for more details      ini  auth azuread         prevents the sync of org roles from AzureAD skip org role sync   true         Configuration options  The following table outlines the various Azure AD Entra ID configuration options  You can apply these options as environment variables  similar to any other configuration within Grafana  For more information  refer to  Override configuration with environment variables        Setting                        Required   Supported on Cloud   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       Default                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    
                                                             enabled                       No         Yes                  Enables Azure AD Entra ID authentication                                                                                                                                                                                                                                                                                                                                                                                                                                                           false                      name                          No         Yes                  Name that refers to the Azure AD Entra ID authentication from the Grafana user interface                                                                                                                                                                                                                                                                                                                                                                                                           OAuth                      icon                          No         Yes                  Icon used for the Azure AD Entra ID authentication in the Grafana user interface                                                                                                                                                                                                                                                                                                                                                                                                                   signin                     client id                     Yes        Yes                  Client ID of the App   Application  client  ID  on the   App registration   dashboard                                                                            
                                                                                                                                                                                                                                                                                                                                                             client secret                 Yes        Yes                  Client secret of the App                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      auth url                      Yes        Yes                  Authorization endpoint of the Azure AD Entra ID OAuth2 provider                                                                                                                                                                                                                                                                                                                                                                                                                                                               token url                     Yes        Yes                  Endpoint used to obtain the OAuth2 access token                                                                                                                                                                                                                                                                                                                                                                                                              
                                                                 auth style                    No         Yes                  Name of the  OAuth2 AuthStyle  https   pkg go dev golang org x oauth2 AuthStyle  to be used when ID token is requested from OAuth2 provider  It determines how  client id  and  client secret  are sent to Oauth2 provider  Available values are  AutoDetect    InParams  and  InHeader                                                                                                                                                                                                            AutoDetect                 scopes                        No         Yes                  List of comma  or space separated OAuth2 scopes                                                                                                                                                                                                                                                                                                                                                                                                                                                    openid email profile       allow sign up                 No         Yes                  Controls Grafana user creation through the Azure AD Entra ID login  Only existing Grafana users can log in with Azure AD Entra ID if set to  false                                                                                                                                                                                                                                                                                                                                                 true                       auto login                    No         Yes                  Set to  true  to enable users to bypass the login screen and automatically log in  This setting is ignored if you configure multiple auth providers to use 
auto login                                                                                                                                                                                                                                                                                                                              false                      role attribute strict         No         Yes                  Set to  true  to deny user login if the Grafana org role cannot be extracted using  role attribute path  or  org mapping   For more information on user role mapping  refer to  Map roles                                                                                                                                                                                                                                                                               false                      org attribute path            No         No                    JMESPath  http   jmespath org examples html  expression to use for Grafana org to role lookup  Grafana will first evaluate the expression using the OAuth2 ID token  If no value is returned  the expression will be evaluated using the user information obtained from the UserInfo endpoint  The result of the evaluation will be mapped to org roles based on  org mapping   For more information on org to role mapping  refer to  Org roles mapping example   org roles mapping example                                 org mapping                   No         No                   List of comma  or space separated   ExternalOrgName   OrgIdOrName   Role   mappings  Value can be     meaning  All users   Role is optional and can have the following values   None    Viewer    Editor  or  Admin   For more information on external organization to role mapping  refer to  Org roles mapping example   org roles mapping example                                                                                                                              
                                            allow assign grafana admin    No         No                   Set to  true  to automatically sync the Grafana server administrator role  When enabled  if the Azure AD Entra ID user s App role is  GrafanaAdmin   Grafana grants the user server administrator privileges and the organization administrator role  If disabled  the user will only receive the organization administrator role  For more details on user role mapping  refer to  Map roles                                                                           false                      skip org role sync            No         Yes                  Set to  true  to stop automatically syncing user roles  This will allow you to set organization roles for your users from within Grafana manually                                                                                                                                                                                                                                                                                                                                                  false                      allowed groups                No         Yes                  List of comma  or space separated groups  The user should be a member of at least one group to log in  If you configure  allowed groups   you must also configure Azure AD Entra ID to include the  groups  claim following  Configure group membership claims on the Azure Portal                                                                                                                                                                     allowed organizations         No         Yes                  List of comma  or space separated Azure tenant identifiers  The user should be a member of at least one tenant to log in                                                                                                                                                            
                                                                                                                                                                                                                                          allowed domains               No         Yes                  List of comma  or space separated domains  The user should belong to at least one domain to log in                                                                                                                                                                                                                                                                                                                                                                                                                            tls skip verify insecure      No         No                   If set to  true   the client accepts any certificate presented by the server and any host name in that certificate   You should only use this for testing   because this mode leaves SSL TLS susceptible to man in the middle attacks                                                                                                                                                                                                                                                              false                      tls client cert               No         No                   The path to the certificate                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   tls client key                No         No       
            The path to the key                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           tls client ca                 No         No                   The path to the trusted certificate authority list                                                                                                                                                                                                                                                                                                                                                                                                                                                                            use pkce                      No         Yes                  Set to  true  to use  Proof Key for Code Exchange  PKCE   https   datatracker ietf org doc html rfc7636   Grafana uses the SHA256 based  S256  challenge method and a 128 bytes  base64url encoded  code verifier                                                                                                                                                                                                                                                                                  true                       use refresh token             No         Yes                  Enables the use of refresh tokens and checks for access token expiration  When enabled  Grafana automatically adds the  offline access  scope to the list of scopes                                                                                                             
                                                                                                                                                                                                                   true                       force use graph api           No         Yes                  Set to  true  to always fetch groups from the Microsoft Graph API instead of the  id token   If a user belongs to more than 200 groups  the Microsoft Graph API will be used to retrieve the groups regardless of this setting                                                                                                                                                                                                                                                                     false                      signout redirect url          No         Yes                  URL to redirect to after the user logs out                                                                                                                                                                                                                                                                                                                                                                                                                                                                                "}
{"questions":"grafana setup Learn about configuring LDAP authentication in Grafana using the Grafana UI aliases products auth enhanced ldap menuTitle LDAP user interface labels cloud oss enterprise","answers":"---\naliases:\n  - ..\/..\/..\/auth\/enhanced-ldap\/\ndescription: Learn about configuring LDAP authentication in Grafana using the Grafana UI.\nlabels:\n  products:\n    - cloud\n    - enterprise\n    - oss\nmenuTitle: LDAP user interface\ntitle: Configure LDAP authentication using the Grafana user interface\nweight: 300\n---\n\n# Configure LDAP authentication using the Grafana user interface\n\nThis page explains how to configure LDAP authentication in Grafana using the Grafana user interface. For more detailed information about configuring LDAP authentication using the configuration file, refer to [LDAP authentication]().\n\nBenefits of using the Grafana user interface to configure LDAP authentication include:\n\n- There is no need to edit the configuration file manually.\n- Quickly test the connection to the LDAP server.\n- There is no need to restart Grafana after making changes.\n\n\nAny configuration changes made through the Grafana user interface (UI) will take precedence over settings specified in the Grafana configuration file or through environment variables. If you modify any configuration settings in the UI, they will override any corresponding settings set via environment variables or defined in the configuration file.\n\n\n## Before you begin\n\nPrerequisites:\n\n- Knowledge of LDAP authentication and how it works.\n- Grafana instance v11.3.0 or later.\n- Permissions `settings:read` and `settings:write` with `settings:auth.ldap:*` scope.\n- This feature requires the `ssoSettingsLDAP` feature toggle to be enabled.\n\n## Steps to configure LDAP authentication\n\nSign in to Grafana and navigate to **Administration > Authentication > LDAP**.\n\n### 1. Complete mandatory fields\n\nThe mandatory fields have an asterisk (**\\***) next to them. 
Complete the following fields:\n\n1. **Server host**: Host name or IP address of the LDAP server.\n1. **Search filter**: The LDAP search filter finds entries within the directory.\n1. **Search base DNS**: List of base DNs to search through.\n\n### 2. Complete optional fields\n\nComplete the optional fields as needed:\n\n1. **Bind DN**: Distinguished name (DN) of the user to bind to.\n1. **Bind password**: Password for the server.\n\n### 3. Advanced settings\n\nClick the **Edit** button in the **Advanced settings** section to configure the following settings:\n\n#### 1. Miscellaneous settings\n\nComplementary settings for LDAP authentication.\n\n1. **Allow sign-up**: Allows new users to register upon logging in.\n1. **Port**: Port number of the LDAP server. The default is 389.\n1. **Timeout**: Time in seconds to wait for a response from the LDAP server.\n\n#### 2. Attributes\n\nAttributes used to map LDAP user assertion to Grafana user attributes.\n\n1. **Name**: Name of the assertion attribute to map to the Grafana user name.\n1. **Surname**: Name of the assertion attribute to map to the Grafana user surname.\n1. **Username**: Name of the assertion attribute to map to the Grafana user username.\n1. **Member Of**: Name of the assertion attribute to map to the Grafana user membership.\n1. **Email**: Name of the assertion attribute to map to the Grafana user email.\n\n#### 3. Group mapping\n\nMap LDAP groups to Grafana roles.\n\n1. **Skip organization role sync**: This option avoids syncing organization roles. It is useful when you want to manage roles manually.\n1. **Group search filter**: The LDAP search filter finds groups within the directory.\n1. **Group search base DNS**: List of base DNS to specify the matching groups' locations.\n1. **Group name attribute**: Identifies users within group entries.\n1. **Manage group mappings**:\n\n   When managing group mappings, the following fields will become available. 
To add a new group mapping, click the **Add group mapping** button.\n\n   1. **Add a group DN mapping**: The name of the key used to extract the ID token.\n   1. **Add an organization role mapping**: Select the Basic Role mapped to this group.\n   1. **Add the organization ID membership mapping**: Map the group to an organization ID.\n   1. **Define Grafana Admin membership**: Enables Grafana Admin privileges for the group.\n\n#### 4. Extra security settings\n\nAdditional security options for LDAP authentication.\n\n1. **Enable SSL**: This option will enable SSL to connect to the LDAP server.\n1. **Start TLS**: Use StartTLS to secure the connection to the LDAP server.\n1. **Min TLS version**: Choose the minimum TLS version to use, either TLS1.2 or TLS1.3.\n1. **TLS ciphers**: List the ciphers to use for the connection. For a complete list of ciphers, refer to the [Cipher Go library](https:\/\/go.dev\/src\/crypto\/tls\/cipher_suites.go).\n1. **Encryption key and certificate provision specification**:\n   This section allows you to specify the key and certificate for the LDAP server. You can provide the key and certificate in two ways: **base-64** encoded or **path to files**.\n   1. **Base-64 encoded certificate**:\n      All values used in this section must be base-64 encoded.\n      1. **Root CA certificate content**: List of root CA certificates.\n      1. **Client certificate content**: Client certificate content.\n      1. **Client key content**: Client key content.\n   1. **Path to files**:\n      Path in the file system to the key and certificate files.\n      1. **Root CA certificate path**: Path to the root CA certificate.\n      1. **Client certificate path**: Path to the client certificate.\n      1. **Client key path**: Path to the client key.\n\n### 4. 
Persisting the configuration\n\nOnce you have configured the LDAP settings, click **Save** to persist the configuration.\n\nIf you want to delete all the changes made through the UI and revert to the configuration file settings, click the three dots menu icon and click **Reset to default values**.","site":"grafana setup"}
{"questions":"grafana setup documentation labels aliases auth github keywords configuration grafana Configure GitHub OAuth authentication oauth","answers":"---\naliases:\n  - ..\/..\/..\/auth\/github\/\ndescription: Configure GitHub OAuth authentication\nkeywords:\n  - grafana\n  - configuration\n  - documentation\n  - oauth\nlabels:\n  products:\n    - cloud\n    - enterprise\n    - oss\nmenuTitle: GitHub OAuth\ntitle: Configure GitHub OAuth authentication\nweight: 900\n---\n\n# Configure GitHub OAuth authentication\n\n\n\nThis topic describes how to configure GitHub OAuth authentication.\n\n\nIf users use the same email address in GitHub that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to the [Using the same email address to login with different identity providers]() documentation for more information.\n\n\n## Before you begin\n\nEnsure you know how to create a GitHub OAuth app. Consult GitHub's documentation on [creating an OAuth app](https:\/\/docs.github.com\/en\/apps\/oauth-apps\/building-oauth-apps\/creating-an-oauth-app) for more information.\n\n### Create a GitHub OAuth App\n\n1. Log in to your GitHub account.\n   In **Profile > Settings > Developer settings**, select **OAuth Apps**.\n1. Click **New OAuth App**.\n1. Fill out the fields, using your Grafana homepage URL when appropriate.\n   In the **Authorization callback URL** field, enter the following: `https:\/\/<YOUR-GRAFANA-URL>\/login\/github`.\n1. Note your client ID.\n1. Generate, then note, your client secret.\n\n## Configure GitHub authentication client using the Grafana UI\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.\n\n\nAs a Grafana Admin, you can configure the GitHub OAuth client from within Grafana using the Grafana UI. To do this, navigate to the **Administration > Authentication > GitHub** page and fill in the form. 
If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values. Otherwise, the form will contain default values.\n\nAfter you have filled in the form, click **Save**. If the save was successful, Grafana will apply the new configuration.\n\nIf you need to reset changes you made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.\n\n\nIf you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.\n\n\nRefer to [configuration options]() for more information.\n\n## Configure GitHub authentication client using the Terraform provider\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. 
Supported in the Terraform provider since v2.12.0.\n\n\n```terraform\nresource \"grafana_sso_settings\" \"github_sso_settings\" {\n  provider_name = \"github\"\n  oauth2_settings {\n    name                  = \"Github\"\n    client_id             = \"YOUR_GITHUB_APP_CLIENT_ID\"\n    client_secret         = \"YOUR_GITHUB_APP_CLIENT_SECRET\"\n    allow_sign_up         = true\n    auto_login            = false\n    scopes                = \"user:email,read:org\"\n    team_ids              = \"150,300\"\n    allowed_organizations = \"[\\\"My Organization\\\", \\\"Octocats\\\"]\"\n    allowed_domains       = \"mycompany.com mycompany.org\"\n    role_attribute_path   = \"[login=='octocat'][0] && 'GrafanaAdmin' || 'Viewer'\"\n  }\n}\n```\n\nGo to [Terraform Registry](https:\/\/registry.terraform.io\/providers\/grafana\/grafana\/latest\/docs\/resources\/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.\n\n## Configure GitHub authentication client using the Grafana configuration file\n\nEnsure that you have access to the [Grafana configuration file]().\n\n### Configure GitHub authentication\n\nTo configure GitHub authentication with Grafana, follow these steps:\n\n1. Create an OAuth application in GitHub.\n1. Set the callback URL for your GitHub OAuth app to `http:\/\/<my_grafana_server_name_or_ip>:<grafana_server_port>\/login\/github`.\n\n   Ensure that the callback URL is the complete HTTP address that you use to access Grafana via your browser, but with the appended path of `\/login\/github`.\n\n   For the callback URL to be correct, it might be necessary to set the `root_url` option in the `[server]` section of the Grafana configuration file, for example, if you are serving Grafana behind a proxy.\n\n1. 
Refer to the following table to update field values located in the `[auth.github]` section of the Grafana configuration file:\n\n   | Field                        | Description                                                                         |\n   | ---------------------------- | ----------------------------------------------------------------------------------- |\n   | `client_id`, `client_secret` | These values must match the client ID and client secret from your GitHub OAuth app. |\n   | `enabled`                    | Enables GitHub authentication. Set this value to `true`.                            |\n\n   Review the list of other GitHub [configuration options]() and complete them as necessary.\n\n1. [Configure role mapping]().\n1. Optional: [Configure group synchronization]().\n1. Restart Grafana.\n\n   You should now see a GitHub login button on the login page and be able to log in or sign up with your GitHub account.\n\n### Configure role mapping\n\nUnless the `skip_org_role_sync` option is enabled, the user's role will be set to the role retrieved from GitHub upon user login.\n\nThe user's role is retrieved using a [JMESPath](http:\/\/jmespath.org\/examples.html) expression from the `role_attribute_path` configuration option.\nTo map the server administrator role, use the `allow_assign_grafana_admin` configuration option.\nRefer to [configuration options]() for more information.\n\nIf no valid role is found, the user is assigned the role specified by [the `auto_assign_org_role` option]().\nYou can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions.\n\nYou can use the `org_mapping` configuration option to assign the user to organizations and specify their role based on their GitHub team membership. 
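To make the interaction between `role_attribute_path` and `role_attribute_strict` concrete, here is a minimal sketch of an `[auth.github]` fragment; the team name `@my-github-organization/my-github-team` is a placeholder, not a real team:

```ini
[auth.github]
# Members of the placeholder team become Editors; the expression
# yields no role at all for anyone else
role_attribute_path = contains(groups[*], '@my-github-organization/my-github-team') && 'Editor'
# With strict mode on, users for whom no valid role is produced are
# denied login instead of receiving the auto_assign_org_role default
role_attribute_strict = true
```

Without `role_attribute_strict = true`, users outside the team would instead fall back to the role specified by `auto_assign_org_role`.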
For more information, refer to [Org roles mapping example](#org-roles-mapping-example). If both org role mapping (`org_mapping`) and the regular role mapping (`role_attribute_path`) are specified, then the user will get the highest of the two mapped roles.\n\nTo ease configuration of a proper JMESPath expression, go to [JMESPath](http:\/\/jmespath.org\/) to test and evaluate expressions with custom payloads.\n\n#### Role mapping examples\n\nThis section includes examples of JMESPath expressions used for role mapping.\n\n##### Org roles mapping example\n\nThe GitHub integration uses the external users' teams in the `org_mapping` configuration to map organizations and roles based on their GitHub team membership.\n\nIn this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs.\n\nThe external user is part of the following GitHub teams: `@my-github-organization\/my-github-team-1` and `@my-github-organization\/my-github-team-2`.\n\nConfig:\n\n```ini\norg_mapping = @my-github-organization\/my-github-team-1:org_foo:Viewer @my-github-organization\/my-github-team-2:org_bar:Editor *:org_baz:Editor\n```\n\n##### Map roles using GitHub user information\n\nIn this example, the user with login `octocat` has been granted the `Admin` role.\nAll other users are granted the `Viewer` role.\n\n```bash\nrole_attribute_path = [login=='octocat'][0] && 'Admin' || 'Viewer'\n```\n\n##### Map roles using GitHub teams\n\nIn this example, the user from GitHub team `my-github-team` has been granted the `Editor` role.\nAll other users are granted the `Viewer` role.\n\n```bash\nrole_attribute_path = contains(groups[*], '@my-github-organization\/my-github-team') && 'Editor' || 'Viewer'\n```\n\n##### Map roles using multiple GitHub teams\n\nIn this example, the users from GitHub teams `admins` and `devops` have been granted the `Admin` role,\nthe users from GitHub teams `engineers` and `managers` 
have been granted the `Editor` role,\nthe users from GitHub team `qa` have been granted the `Viewer` role and\nall other users are granted the `None` role.\n\n```bash\nrole_attribute_path = contains(groups[*], '@my-github-organization\/admins') && 'Admin' || contains(groups[*], '@my-github-organization\/devops') && 'Admin' || contains(groups[*], '@my-github-organization\/engineers') && 'Editor' || contains(groups[*], '@my-github-organization\/managers') && 'Editor' || contains(groups[*], '@my-github-organization\/qa') && 'Viewer' || 'None'\n```\n\n##### Map server administrator role\n\nIn this example, the user with login `octocat` has been granted the `Admin` organization role as well as the Grafana server admin role.\nAll other users are granted the `Viewer` role.\n\n```bash\nrole_attribute_path = [login=='octocat'][0] && 'GrafanaAdmin' || 'Viewer'\n```\n\n##### Map one role to all users\n\nIn this example, all users will be assigned `Viewer` role regardless of the user information received from the identity provider.\n\n```ini\nrole_attribute_path = \"'Viewer'\"\nskip_org_role_sync = false\n```\n\n### Example of GitHub configuration in Grafana\n\nThis section includes an example of GitHub configuration in the Grafana configuration file.\n\n```bash\n[auth.github]\nenabled = true\nclient_id = YOUR_GITHUB_APP_CLIENT_ID\nclient_secret = YOUR_GITHUB_APP_CLIENT_SECRET\nscopes = user:email,read:org\nauth_url = https:\/\/github.com\/login\/oauth\/authorize\ntoken_url = https:\/\/github.com\/login\/oauth\/access_token\napi_url = https:\/\/api.github.com\/user\nallow_sign_up = true\nauto_login = false\nteam_ids = 150,300\nallowed_organizations = [\"My Organization\", \"Octocats\"]\nallowed_domains = mycompany.com mycompany.org\nrole_attribute_path = [login=='octocat'][0] && 'GrafanaAdmin' || 'Viewer'\n```\n\n## Configure group synchronization\n\n\nAvailable in [Grafana Enterprise](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/introduction\/grafana-enterprise) 
and [Grafana Cloud](\/docs\/grafana-cloud\/).\n\n\nGrafana supports synchronization of teams from your GitHub organization with Grafana teams and roles. This allows automatically assigning users to the appropriate teams or granting them the mapped roles.\nTeams and roles get synchronized when the user logs in.\n\nGitHub teams can be referenced in two ways:\n\n- `https:\/\/github.com\/orgs\/<org>\/teams\/<slug>`\n- `@<org>\/<slug>`\n\nExamples: `https:\/\/github.com\/orgs\/grafana\/teams\/developers` or `@grafana\/developers`.\n\nTo learn more about group synchronization, refer to [Configure team sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-team-sync) and [Configure group attribute sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-group-attribute-sync).\n\n## Configuration options\n\nThe table below describes all GitHub OAuth configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables]().\n\n\nIf the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors. 
For example `role_attribute_path: \"role:view\"`\n\n\n| Setting                      | Required | Supported on Cloud | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           | Default                                       |\n| ---------------------------- | -------- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------- |\n| `enabled`                    | No       | Yes                | Whether GitHub OAuth authentication is allowed.                                                                                                                                                                                                                      
                                                                                                                                                                                                                                                                                                                                                                                                                                                 | `false`                                       |\n| `name`                       | No       | Yes                | Name used to refer to the GitHub authentication in the Grafana user interface.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | `GitHub`                                      |\n| `icon`                       | No       | Yes                | Icon used for GitHub authentication in the Grafana user interface.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               
                                                     | `github`                                      |\n| `client_id`                  | Yes      | Yes                | Client ID provided by your GitHub OAuth app.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |                                               |\n| `client_secret`              | Yes      | Yes                | Client secret provided by your GitHub OAuth app.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |                                               |\n| `auth_url`                   | Yes      | Yes                | Authorization endpoint of your GitHub OAuth provider.                                                                                                                                                              
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   | `https:\/\/github.com\/login\/oauth\/authorize`    |\n| `token_url`                  | Yes      | Yes                | Endpoint used to obtain GitHub OAuth access token.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    | `https:\/\/github.com\/login\/oauth\/access_token` |\n| `api_url`                    | Yes      | Yes                | Endpoint used to obtain GitHub user information compatible with [OpenID UserInfo](https:\/\/connect2id.com\/products\/server\/docs\/api\/userinfo).                                                                                                                                                                                                                                                                                                                                                                                                                                                  
                                                                                                                        | `https:\/\/api.github.com\/user`                 |\n| `scopes`                     | No       | Yes                | List of comma- or space-separated GitHub OAuth scopes.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                | `user:email,read:org`                         |\n| `allow_sign_up`              | No       | Yes                | Whether to allow new Grafana user creation through GitHub login. If set to `false`, then only existing Grafana users can log in with GitHub OAuth.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    | `true`                                        |\n| `auto_login`                 | No       | Yes                | Set to `true` to enable users to bypass the login screen and automatically log in. 
This setting is ignored if you configure multiple auth providers to use auto-login.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                | `false`                                       |\n| `role_attribute_path`        | No       | Yes                | [JMESPath](http:\/\/jmespath.org\/examples.html) expression to use for Grafana role lookup. Grafana will first evaluate the expression using the user information obtained from the UserInfo endpoint. If no role is found, Grafana creates a JSON data with `groups` key that maps to GitHub teams obtained from GitHub's [`\/api\/user\/teams`](https:\/\/docs.github.com\/en\/rest\/teams\/teams#list-teams-for-the-authenticated-user) endpoint, and evaluates the expression using this data. The result of the evaluation should be a valid Grafana role (`None`, `Viewer`, `Editor`, `Admin` or `GrafanaAdmin`). For more information on user role mapping, refer to [Configure role mapping](#org-roles-mapping-example). |                                               |\n| `role_attribute_strict`      | No       | Yes                | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Configure role mapping](#org-roles-mapping-example).                                                                                                                                                                                                                               
                                                                                                                                                                                                                                                   | `false`                                       |\n| `org_mapping`                | No       | No                 | List of comma- or space-separated `<ExternalGitHubTeamName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning \"All users\". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example).                                                                                                                                                                                                                                                                                                                                                         |                                               |\n| `skip_org_role_sync`         | No       | Yes                | Set to `true` to stop automatically syncing user roles.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               
| `false`                                       |\n| `allow_assign_grafana_admin` | No       | No                 | Set to `true` to enable automatic sync of the Grafana server administrator role. If this option is set to `true` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user the server administrator privileges and organization administrator role. If this option is set to `false` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user only organization administrator role. For more information on user role mapping, refer to [Configure role mapping]().                                                                                                            | `false`                                       |\n| `allowed_organizations`      | No       | Yes                | List of comma- or space-separated organizations. User must be a member of at least one organization to log in.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |                                               |\n| `allowed_domains`            | No       | Yes                | List of comma- or space-separated domains. User must belong to at least one domain to log in.                                                                                                                                                                                                                   
                                                                                                                                                                                                                                                                                                                                                                                                      |                                               |\n| `team_ids`                   | No       | Yes                | Integer list of team IDs. If set, user has to be a member of one of the given teams to log in.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |                                               |\n| `tls_skip_verify_insecure`   | No       | No                 | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL\/TLS susceptible to man-in-the-middle attacks.                                                                                                                                                                                                                                                                                                                                                                                                                                                                     
           | `false`                                       |\n| `tls_client_cert`            | No       | No                 | The path to the certificate.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |                                               |\n| `tls_client_key`             | No       | No                 | The path to the key.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |                                               |\n| `tls_client_ca`              | No       | No                 | The path to the trusted certificate authority list.                                                                                                                                                                                                          
                    |                                               |\n| `signout_redirect_url`       | No       | Yes                | URL to redirect to after the user logs out.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |                                               |","site":"grafana setup"}
   | `client_id`, `client_secret` | These values must match the client ID and client secret from your GitHub OAuth app. |
   | `enabled`                    | Enables GitHub authentication. Set this value to `true`.                            |

   Review the list of other GitHub [configuration options]() and complete them, as necessary.

1. [Configure role mapping]().
1. Optional: [Configure group synchronization]().
1. Restart Grafana.

   You should now see a GitHub login button on the login page and be able to log in or sign up with your GitHub accounts.

### Configure role mapping

Unless the `skip_org_role_sync` option is enabled, the user's role will be set to the role retrieved from GitHub upon user login.

The user's role is retrieved using a [JMESPath](http://jmespath.org/examples.html) expression from the `role_attribute_path` configuration option. To map the server administrator role, use the `allow_assign_grafana_admin` configuration option. Refer to [configuration options]() for more information.

If no valid role is found, the user is assigned the role specified by [the `auto_assign_org_role` option](). You can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions.

You can use the `org_mapping` configuration options to assign the user to organizations and specify their role based on their GitHub team membership. For more information, refer to [Org roles mapping example](#org-roles-mapping-example). If both org role mapping (`org_mapping`) and the regular role mapping (`role_attribute_path`) are specified, then the user will get the highest of the two mapped roles.

To ease configuration of a proper JMESPath expression, go to [JMESPath](http://jmespath.org/) to test and evaluate expressions with custom payloads.

### Role mapping examples

This section includes examples of JMESPath expressions used for role mapping.

#### Org roles mapping example

The GitHub integration uses the external users' teams in the `org_mapping` configuration to map organizations and roles based on their GitHub team membership.

In this example, the user has been granted the role of a `Viewer` in the `org_foo` organization, and the role of an `Editor` in the `org_bar` and `org_baz` orgs.

The external user is part of the following GitHub teams: `@my-github-organization/my-github-team-1` and `@my-github-organization/my-github-team-2`.

Config:

```ini
org_mapping = @my-github-organization/my-github-team-1:org_foo:Viewer @my-github-organization/my-github-team-2:org_bar:Editor *:org_baz:Editor
```

#### Map roles using GitHub user information

In this example, the user with login `octocat` has been granted the `Admin` role. All other users are granted the `Viewer` role.

```bash
role_attribute_path = [login=='octocat'][0] && 'Admin' || 'Viewer'
```

#### Map roles using GitHub teams

In this example, the user from GitHub team `my-github-team` has been granted the `Editor` role. All other users are granted the `Viewer` role.

```bash
role_attribute_path = contains(groups[*], '@my-github-organization/my-github-team') && 'Editor' || 'Viewer'
```

#### Map roles using multiple GitHub teams

In this example, the users from GitHub teams `admins` and `devops` have been granted the `Admin` role, the users from GitHub teams `engineers` and `managers` have been granted the `Editor` role, the users from GitHub team `qa` have been granted the `Viewer` role, and all other users are granted the `None` role.

```bash
role_attribute_path = contains(groups[*], '@my-github-organization/admins') && 'Admin' || contains(groups[*], '@my-github-organization/devops') && 'Admin' || contains(groups[*], '@my-github-organization/engineers') && 'Editor' || contains(groups[*], '@my-github-organization/managers') && 'Editor' || contains(groups[*], '@my-github-organization/qa') && 'Viewer' || 'None'
```

#### Map server administrator role

In this example, the user with login `octocat` has been granted the `Admin` organization role as well as the Grafana server admin role. All other users are granted the `Viewer` role.

```bash
role_attribute_path = [login=='octocat'][0] && 'GrafanaAdmin' || 'Viewer'
```

#### Map one role to all users

In this example, all users will be assigned the `Viewer` role regardless of the user information received from the identity provider.

```ini
role_attribute_path = "'Viewer'"
skip_org_role_sync = false
```

### Example of GitHub configuration in Grafana

This section includes an example of GitHub configuration in the Grafana configuration file.

```bash
[auth.github]
enabled = true
client_id = YOUR_GITHUB_APP_CLIENT_ID
client_secret = YOUR_GITHUB_APP_CLIENT_SECRET
scopes = user:email,read:org
auth_url = https://github.com/login/oauth/authorize
token_url = https://github.com/login/oauth/access_token
api_url = https://api.github.com/user
allow_sign_up = true
auto_login = false
team_ids = 150,300
allowed_organizations = ["My Organization", "Octocats"]
allowed_domains = mycompany.com mycompany.org
role_attribute_path = [login=='octocat'][0] && 'GrafanaAdmin' || 'Viewer'
```

## Configure group synchronization

Available in [Grafana Enterprise](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/introduction/grafana-enterprise/) and [Grafana Cloud](/docs/grafana-cloud/).

Grafana supports synchronization of teams from your GitHub organization with Grafana teams and roles. This allows automatically assigning users to the appropriate teams or granting them the mapped roles. Teams and roles get synchronized when the user logs in.

GitHub teams can be referenced in two ways:

- `https://github.com/orgs/<org>/teams/<slug>`
- `<org>/<slug>`

Examples: `https://github.com/orgs/grafana/teams/developers` or `grafana/developers`.
To learn more about group synchronization, refer to [Configure team sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-team-sync) and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync).

## Configuration options

The table below describes all GitHub OAuth configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables]().

If the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors. For example, `role_attribute_path: "role:view"`.

| Setting                      | Required | Supported on Cloud | Description | Default |
| ---------------------------- | -------- | ------------------ | ----------- | ------- |
| `enabled`                    | No       | Yes                | Whether GitHub OAuth authentication is allowed. | `false` |
| `name`                       | No       | Yes                | Name used to refer to the GitHub authentication in the Grafana user interface. | `GitHub` |
| `icon`                       | No       | Yes                | Icon used for GitHub authentication in the Grafana user interface. | `github` |
| `client_id`                  | Yes      | Yes                | Client ID provided by your GitHub OAuth app. | |
| `client_secret`              | Yes      | Yes                | Client secret provided by your GitHub OAuth app. | |
| `auth_url`                   | Yes      | Yes                | Authorization endpoint of your GitHub OAuth provider. | `https://github.com/login/oauth/authorize` |
| `token_url`                  | Yes      | Yes                | Endpoint used to obtain the GitHub OAuth access token. | `https://github.com/login/oauth/access_token` |
| `api_url`                    | Yes      | Yes                | Endpoint used to obtain GitHub user information compatible with [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo). | `https://api.github.com/user` |
| `scopes`                     | No       | Yes                | List of comma- or space-separated GitHub OAuth scopes. | `user:email,read:org` |
| `allow_sign_up`              | No       | Yes                | Whether to allow new Grafana user creation through GitHub login. If set to `false`, then only existing Grafana users can log in with GitHub OAuth. | `true` |
| `auto_login`                 | No       | Yes                | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login. | `false` |
| `role_attribute_path`        | No       | Yes                | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana role lookup. Grafana will first evaluate the expression using the user information obtained from the UserInfo endpoint. If no role is found, Grafana creates a JSON data with a `groups` key that maps to GitHub teams obtained from GitHub's [`/api/user/teams`](https://docs.github.com/en/rest/teams/teams#list-teams-for-the-authenticated-user) endpoint, and evaluates the expression using this data. The result of the evaluation should be a valid Grafana role (`None`, `Viewer`, `Editor`, `Admin` or `GrafanaAdmin`). For more information on user role mapping, refer to [Configure role mapping](#org-roles-mapping-example). | |
| `role_attribute_strict`      | No       | Yes                | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Configure role mapping](#org-roles-mapping-example). | `false` |
| `org_mapping`                | No       | No                 | List of comma- or space-separated `<ExternalGitHubTeamName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | |
| `skip_org_role_sync`         | No       | Yes                | Set to `true` to stop automatically syncing user roles. | `false` |
| `allow_assign_grafana_admin` | No       | No                 | Set to `true` to enable automatic sync of the Grafana server administrator role. If this option is set to `true` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user the server administrator privileges and organization administrator role. If this option is set to `false` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user only the organization administrator role. For more information on user role mapping, refer to [Configure role mapping](). | `false` |
| `allowed_organizations`      | No       | Yes                | List of comma- or space-separated organizations. User must be a member of at least one organization to log in. | |
| `allowed_domains`            | No       | Yes                | List of comma- or space-separated domains. User must belong to at least one domain to log in. | |
| `team_ids`                   | No       | Yes                | Integer list of team IDs. If set, user has to be a member of one of the given teams to log in. | |
| `tls_skip_verify_insecure`   | No       | No                 | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL/TLS susceptible to man-in-the-middle attacks. | `false` |
| `tls_client_cert`            | No       | No                 | The path to the certificate. | |
| `tls_client_key`             | No       | No                 | The path to the key. | |
| `tls_client_ca`              | No       | No                 | The path to the trusted certificate authority list. | |
| `signout_redirect_url`       | No       | Yes                | URL to redirect to after the user logs out. | |
"}
{"questions":"grafana setup Configure Generic OAuth authentication documentation labels auth generic oauth aliases keywords configuration grafana oauth","answers":"---\naliases:\n  - ..\/..\/..\/auth\/generic-oauth\/\ndescription: Configure Generic OAuth authentication\nkeywords:\n  - grafana\n  - configuration\n  - documentation\n  - oauth\nlabels:\n  products:\n    - cloud\n    - enterprise\n    - oss\nmenuTitle: Generic OAuth\ntitle: Configure Generic OAuth authentication\nweight: 700\n---\n\n# Configure Generic OAuth authentication\n\n\n\nGrafana provides OAuth2 integrations for the following auth providers:\n\n- [Azure AD OAuth]()\n- [GitHub OAuth]()\n- [GitLab OAuth]()\n- [Google OAuth]()\n- [Grafana Com OAuth]()\n- [Keycloak OAuth]()\n- [Okta OAuth]()\n\nIf your OAuth2 provider is not listed, you can use Generic OAuth authentication.\n\nThis topic describes how to configure Generic OAuth authentication using different methods and includes [examples of setting up Generic OAuth]() with specific OAuth2 providers.\n\n## Before you begin\n\nTo follow this guide:\n\n- Ensure you know how to create an OAuth2 application with your OAuth2 provider. Consult the documentation of your OAuth2 provider for more information.\n- Ensure your identity provider returns OpenID UserInfo compatible information such as the `sub` claim.\n- If you are using refresh tokens, ensure you know how to set them up with your OAuth2 provider. Consult the documentation of your OAuth2 provider for more information.\n\n\nIf Users use the same email address in Azure AD that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. 
Please refer to the [Using the same email address to login with different identity providers]() documentation for more information.\n\n\n## Configure generic OAuth authentication client using the Grafana UI\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.\n\n\nAs a Grafana Admin, you can configure Generic OAuth client from within Grafana using the Generic OAuth UI. To do this, navigate to **Administration > Authentication > Generic OAuth** page and fill in the form. If you have a current configuration in the Grafana configuration file then the form will be pre-populated with those values otherwise the form will contain default values.\n\nAfter you have filled in the form, click **Save** to save the configuration. If the save was successful, Grafana will apply the new configurations.\n\nIf you need to reset changes you made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.\n\n\nIf you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.\n\n\nRefer to [configuration options]() for more information.\n\n## Configure generic OAuth authentication client using the Terraform provider\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. 
Supported in the Terraform provider since v2.12.0.\n\n\n```terraform\nresource \"grafana_sso_settings\" \"generic_sso_settings\" {\n  provider_name = \"generic_oauth\"\n  oauth2_settings {\n    name              = \"Auth0\"\n    auth_url          = \"https:\/\/<domain>\/authorize\"\n    token_url         = \"https:\/\/<domain>\/oauth\/token\"\n    api_url           = \"https:\/\/<domain>\/userinfo\"\n    client_id         = \"<client id>\"\n    client_secret     = \"<client secret>\"\n    allow_sign_up     = true\n    auto_login        = false\n    scopes            = \"openid profile email offline_access\"\n    use_pkce          = true\n    use_refresh_token = true\n  }\n}\n```\n\nRefer to [Terraform Registry](https:\/\/registry.terraform.io\/providers\/grafana\/grafana\/latest\/docs\/resources\/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.\n\n## Configure generic OAuth authentication client using the Grafana configuration file\n\nEnsure that you have access to the [Grafana configuration file]().\n\n### Steps\n\nTo integrate your OAuth2 provider with Grafana using our Generic OAuth authentication, follow these steps:\n\n1. Create an OAuth2 application in your chosen OAuth2 provider.\n1. Set the callback URL for your OAuth2 app to `http:\/\/<my_grafana_server_name_or_ip>:<grafana_server_port>\/login\/generic_oauth`.\n\n   Ensure that the callback URL is the complete HTTP address that you use to access Grafana via your browser, but with the appended path of `\/login\/generic_oauth`.\n\n   For the callback URL to be correct, it might be necessary to set the `root_url` option in the `[server]`section of the Grafana configuration file. For example, if you are serving Grafana behind a proxy.\n\n1. 
Refer to the following table to update field values located in the `[auth.generic_oauth]` section of the Grafana configuration file:\n\n   | Field                        | Description                                                                                                                                                                                       |\n   | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n   | `client_id`, `client_secret` | These values must match the client ID and client secret from your OAuth2 app.                                                                                                                     |\n   | `auth_url`                   | The authorization endpoint of your OAuth2 provider.                                                                                                                                               |\n   | `api_url`                    | The user information endpoint of your OAuth2 provider. Information returned by this endpoint must be compatible with [OpenID UserInfo](https:\/\/connect2id.com\/products\/server\/docs\/api\/userinfo). |\n   | `enabled`                    | Enables Generic OAuth authentication. Set this value to `true`.                                                                                                                                   |\n\n   Review the list of other Generic OAuth [configuration options]() and complete them, as necessary.\n\n1. Optional: [Configure a refresh token]():\n\n   a. Extend the `scopes` field of `[auth.generic_oauth]` section in Grafana configuration file with refresh token scope used by your OAuth2 provider.\n\n   b. Set `use_refresh_token` to `true` in `[auth.generic_oauth]` section in Grafana configuration file.\n\n   c. 
Enable the refresh token on the provider if required.\n\n1. [Configure role mapping]().\n1. Optional: [Configure group synchronization]().\n1. Restart Grafana.\n\n   You should now see a Generic OAuth login button on the login page and be able to log in or sign up with your OAuth2 provider.\n\n### Configure login\n\nGrafana can resolve a user's login from the OAuth2 ID token or user information retrieved from the OAuth2 UserInfo endpoint.\nGrafana looks at these sources in the order listed until it finds a login.\nIf no login is found, then the user's login is set to the user's email address.\n\nRefer to the following table for information on what to configure based on how your OAuth2 provider returns a user's login:\n\n| Source of login                                                                 | Required configuration                           |\n| ------------------------------------------------------------------------------- | ------------------------------------------------ |\n| `login` or `username` field of the OAuth2 ID token.                             | N\/A                                              |\n| Another field of the OAuth2 ID token.                                           | Set `login_attribute_path` configuration option. |\n| `login` or `username` field of the user information from the UserInfo endpoint. | N\/A                                              |\n| Another field of the user information from the UserInfo endpoint.               | Set `login_attribute_path` configuration option. 
|\n\n### Configure display name\n\nGrafana can resolve a user's display name from the OAuth2 ID token or user information retrieved from the OAuth2 UserInfo endpoint.\nGrafana looks at these sources in the order listed until it finds a display name.\nIf no display name is found, then the user's login is displayed instead.\n\nRefer to the following table for information on what you need to configure depending on how your OAuth2 provider returns a user's name:\n\n| Source of display name                                                             | Required configuration                          |\n| ---------------------------------------------------------------------------------- | ----------------------------------------------- |\n| `name` or `display_name` field of the OAuth2 ID token.                             | N\/A                                             |\n| Another field of the OAuth2 ID token.                                              | Set `name_attribute_path` configuration option. |\n| `name` or `display_name` field of the user information from the UserInfo endpoint. | N\/A                                             |\n| Another field of the user information from the UserInfo endpoint.                  | Set `name_attribute_path` configuration option. 
|\n\n### Configure email address\n\nGrafana can resolve the user's email address from the OAuth2 ID token, the user information retrieved from the OAuth2 UserInfo endpoint, or the OAuth2 `\/emails` endpoint.\nGrafana looks at these sources in the order listed until an email address is found.\nIf no email is found, then the email address of the user is set to an empty string.\n\nRefer to the following table for information on what to configure based on how the OAuth2 provider returns a user's email address:\n\n| Source of email address                                                                                                                                                 | Required configuration                                                                                             |\n| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |\n| `email` field of the OAuth2 ID token.                                                                                                                                   | N\/A                                                                                                                |\n| `attributes` map of the OAuth2 ID token.                                                                                                                                | Set `email_attribute_name` configuration option. By default, Grafana searches for email under `email:primary` key. |\n| `upn` field of the OAuth2 ID token.                                                                                                                                     
| N\/A                                                                                                                |\n| `email` field of the user information from the UserInfo endpoint.                                                                                                       | N\/A                                                                                                                |\n| Another field of the user information from the UserInfo endpoint.                                                                                                       | Set `email_attribute_path` configuration option.                                                                   |\n| Email address marked as primary from the `\/emails` endpoint of <br \/> the OAuth2 provider (obtained by appending `\/emails` to the URL <br \/> configured with `api_url`) | N\/A                                                                                                                |\n\n### Configure a refresh token\n\nWhen a user logs in using an OAuth2 provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token.\n\nGrafana uses a refresh token to obtain a new access token without requiring the user to log in again. If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.\n\nTo configure Generic OAuth to use a refresh token, set `use_refresh_token` configuration option to `true` and perform one or both of the following steps, if required:\n\n1. Extend the `scopes` field of `[auth.generic_oauth]` section in Grafana configuration file with additional scopes.\n1. 
Enable the refresh token on the provider.\n\n> **Note:** The `accessTokenExpirationCheck` feature toggle has been removed in Grafana v10.3.0, and the `use_refresh_token` configuration value is used instead to configure refresh token fetching and the access token expiration check.\n\n### Configure role mapping\n\nUnless the `skip_org_role_sync` option is enabled, the user's role is set to the role retrieved from the auth provider upon user login.\n\nThe user's role is retrieved using a [JMESPath](http:\/\/jmespath.org\/examples.html) expression from the `role_attribute_path` configuration option.\nTo map the server administrator role, use the `allow_assign_grafana_admin` configuration option.\nRefer to [configuration options]() for more information.\n\nIf no valid role is found, the user is assigned the role specified by [the `auto_assign_org_role` option]().\nYou can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions.\n\nYou can use the `org_attribute_path` and `org_mapping` configuration options to assign the user to organizations and specify their role. For more information, refer to [Org roles mapping example](#org-roles-mapping-example). If both org role mapping (`org_mapping`) and the regular role mapping (`role_attribute_path`) are specified, then the user gets the higher of the two mapped roles.\n\nTo ease configuration of a proper JMESPath expression, go to [JMESPath](http:\/\/jmespath.org\/) to test and evaluate expressions with custom payloads.\n\n#### Role mapping examples\n\nThis section includes examples of JMESPath expressions used for role mapping.\n\n##### Map user organization role\n\nIn this example, the user has been granted the role of an `Editor`. 
The role assigned is based on the value of the property `role`, which must be a valid Grafana role such as `Admin`, `Editor`, `Viewer` or `None`.\n\nPayload:\n\n```json\n{\n    ...\n    \"role\": \"Editor\",\n    ...\n}\n```\n\nConfig:\n\n```ini\nrole_attribute_path = role\n```\n\nIn the following more complex example, the user has been granted the `Admin` role. This is because they are a member of the `admin` group of their OAuth2 provider.\nIf the user was a member of the `editor` group, they would be granted the `Editor` role, otherwise `Viewer`.\n\nPayload:\n\n```json\n{\n    ...\n    \"info\": {\n        ...\n        \"groups\": [\n            \"engineer\",\n            \"admin\"\n        ],\n        ...\n    },\n    ...\n}\n```\n\nConfig:\n\n```ini\nrole_attribute_path = contains(info.groups[*], 'admin') && 'Admin' || contains(info.groups[*], 'editor') && 'Editor' || 'Viewer'\n```\n\n##### Map server administrator role\n\nIn the following example, the user is granted the Grafana server administrator role.\n\nPayload:\n\n```json\n{\n    ...\n    \"info\": {\n        ...\n        \"roles\": [\n            \"admin\"\n        ],\n        ...\n    },\n    ...\n}\n```\n\nConfig:\n\n```ini\nrole_attribute_path = contains(info.roles[*], 'admin') && 'GrafanaAdmin' || contains(info.roles[*], 'editor') && 'Editor' || 'Viewer'\nallow_assign_grafana_admin = true\n```\n\n##### Map one role to all users\n\nIn this example, all users are assigned the `Viewer` role regardless of the user information received from the identity provider.\n\nConfig:\n\n```ini\nrole_attribute_path = \"'Viewer'\"\nskip_org_role_sync = false\n```\n\n#### Org roles mapping example\n\nIn this example, the user has been granted the role of a `Viewer` in the `org_foo` org, and the role of an `Editor` in the `org_bar` and `org_baz` orgs.\n\nIf the user was a member of the `admin` group, they would be granted the Grafana server administrator role.\n\nPayload:\n\n```json\n{\n    ...\n    \"info\": {\n 
       ...\n        \"roles\": [\n            \"org_foo\",\n            \"org_bar\",\n            \"another_org\"\n        ],\n        ...\n    },\n    ...\n}\n```\n\nConfig:\n\n```ini\nrole_attribute_path = contains(info.roles[*], 'admin') && 'GrafanaAdmin' || 'None'\nallow_assign_grafana_admin = true\norg_attribute_path = info.roles\norg_mapping = org_foo:org_foo:Viewer org_bar:org_bar:Editor *:org_baz:Editor\n```\n\n## Configure group synchronization\n\n\nAvailable in [Grafana Enterprise](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/introduction\/grafana-enterprise) and [Grafana Cloud](\/docs\/grafana-cloud\/).\n\n\nGrafana supports synchronization of OAuth2 groups with Grafana teams and roles. This lets Grafana automatically assign users to the appropriate teams and grant them the mapped roles.\nTeams and roles are synchronized when the user logs in.\n\nGeneric OAuth groups can be referenced by group ID, such as `8bab1c86-8fba-33e5-2089-1d1c80ec267d` or `myteam`.\nFor information on configuring OAuth2 groups with Grafana using the `groups_attribute_path` configuration option, refer to [configuration options]().\n\nTo learn more about group synchronization, refer to [Configure team sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-team-sync) and [Configure group attribute sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-group-attribute-sync).\n\n### Group attribute synchronization example\n\nConfiguration:\n\n```ini\ngroups_attribute_path = info.groups\n```\n\nPayload:\n\n```json\n{\n    ...\n    \"info\": {\n        ...\n        \"groups\": [\n            \"engineers\",\n            \"analysts\"\n        ],\n        ...\n    },\n    ...\n}\n```\n\n## Configuration options\n\nThe following table outlines the various Generic OAuth configuration options. 
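\n\nAs a point of reference, a minimal `[auth.generic_oauth]` section wiring up only the required options might look like the following sketch (every value is a placeholder for your provider's actual endpoints and credentials, not a default):\n\n```ini\n[auth.generic_oauth]\nenabled = true\nname = My OAuth2 Provider\nclient_id = <client id>\nclient_secret = <client secret>\nscopes = openid profile email\nauth_url = https:\/\/<provider>\/authorize\ntoken_url = https:\/\/<provider>\/oauth\/token\napi_url = https:\/\/<provider>\/userinfo\n```\n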
You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables]().\n\n\nIf the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors. For example `role_attribute_path: \"role:view\"`\n\n\n| Setting                      | Required | Supported on Cloud | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                | Default         |\n| ---------------------------- | -------- | ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- |\n| `enabled`                    | No       | Yes                | Enables Generic OAuth authentication.                                                                                                                                                         
                                                                                                                                                                                                                                                                                                                                                                                                             | `false`         |\n| `name`                       | No       | Yes                | Name that refers to the Generic OAuth authentication from the Grafana user interface.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      | `OAuth`         |\n| `icon`                       | No       | Yes                | Icon used for the Generic OAuth authentication in the Grafana user interface.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | `signin`        |\n| `client_id`                  | Yes      | Yes                | Client ID provided by your OAuth2 app.                                                                                                                                     
                                                                                                                                                                                                                                                                                                                                                                                                                                |                 |\n| `client_secret`              | Yes      | Yes                | Client secret provided by your OAuth2 app.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |                 |\n| `auth_url`                   | Yes      | Yes                | Authorization endpoint of your OAuth2 provider.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |                 |\n| `token_url`                  | Yes      | Yes                | Endpoint used to obtain the OAuth2 access token.                                                                                                        
                                                                                    |                 |\n| `api_url`                    | Yes      | Yes                | Endpoint used to obtain user information compatible with [OpenID UserInfo](https:\/\/connect2id.com\/products\/server\/docs\/api\/userinfo).                                                                                                                                                                                                                                                                                                                                                                                      |                 |\n| `auth_style`                 | No       | Yes                | Name of the [OAuth2 AuthStyle](https:\/\/pkg.go.dev\/golang.org\/x\/oauth2#AuthStyle) to be used when the ID token is requested from the OAuth2 provider. It determines how `client_id` and `client_secret` are sent to the OAuth2 provider. Available values are `AutoDetect`, `InParams` and `InHeader`.                                                                                                                                                                                                                       | `AutoDetect`    |\n| `scopes`                     | No       | Yes                | List of comma- or space-separated OAuth2 scopes.                                                                         
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  | `user:email`    |\n| `empty_scopes`               | No       | Yes                | Set to `true` to use an empty scope during authentication.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 | `false`         |\n| `allow_sign_up`              | No       | Yes                | Controls Grafana user creation through the Generic OAuth login. Only existing Grafana users can log in with Generic OAuth if set to `false`.                                                                                                                                                                                                                                                                                                                                                                                                                                                               | `true`          |\n| `auto_login`                 | No       | Yes                | Set to `true` to enable users to bypass the login screen and automatically log in. 
This setting is ignored if you configure multiple auth providers to use auto-login.                                                                                                                                                                                                                                                                                                                                                                                                                                     | `false`         |\n| `id_token_attribute_name`    | No       | Yes                | The name of the key used to extract the ID token from the returned OAuth2 token.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           | `id_token`      |\n| `login_attribute_path`       | No       | Yes                | [JMESPath](http:\/\/jmespath.org\/examples.html) expression to use for user login lookup from the user ID token. For more information on how user login is retrieved, refer to [Configure login]().                                                                                                                                                                                                                                                                                                                                                                          
|                 |\n| `name_attribute_path`        | No       | Yes                | [JMESPath](http:\/\/jmespath.org\/examples.html) expression to use for user name lookup from the user ID token. This name will be used as the user's display name. For more information on how user display name is retrieved, refer to [Configure display name]().                                                                                                                                                                                                                                                                                                   |                 |\n| `email_attribute_path`       | No       | Yes                | [JMESPath](http:\/\/jmespath.org\/examples.html) expression to use for user email lookup from the user information. For more information on how user email is retrieved, refer to [Configure email address]().                                                                                                                                                                                                                                                                                                                                                       |                 |\n| `email_attribute_name`       | No       | Yes                | Name of the key to use for user email lookup within the `attributes` map of OAuth2 ID token. For more information on how user email is retrieved, refer to [Configure email address]().                                                                                                                                                                                                                                                                                                                                                                           
| `email:primary` |
| `role_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana role lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no role is found, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation should be a valid Grafana role (`None`, `Viewer`, `Editor`, `Admin` or `GrafanaAdmin`). For more information on user role mapping, refer to [Configure role mapping](). | |
| `role_attribute_strict` | No | Yes | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Configure role mapping](). | `false` |
| `skip_org_role_sync` | No | Yes | Set to `true` to stop automatically syncing user roles. This will allow you to set organization roles for your users from within Grafana manually. | `false` |
| `org_attribute_path` | No | No | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana org to role lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no value is returned, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation will be mapped to org roles based on `org_mapping`. For more information on org to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | |
| `org_mapping` | No | No | List of comma- or space-separated `<ExternalOrgName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | |
| `allow_assign_grafana_admin` | No | No | Set to `true` to enable automatic sync of the Grafana server administrator role. If this option is set to `true` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user both the server administrator privileges and the organization administrator role. If this option is set to `false` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user only the organization administrator role. For more information on user role mapping, refer to [Configure role mapping](). | `false` |
| `groups_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for user group lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no groups are found, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation should be a string array of groups. | |
| `allowed_groups` | No | Yes | List of comma- or space-separated groups. The user should be a member of at least one group to log in. If you configure `allowed_groups`, you must also configure `groups_attribute_path`. | |
| `allowed_organizations` | No | Yes | List of comma- or space-separated organizations. The user should be a member of at least one organization to log in. | |
| `allowed_domains` | No | Yes | List of comma- or space-separated domains. The user should belong to at least one domain to log in. | |
| `team_ids` | No | Yes | String list of team IDs. If set, the user must be a member of one of the given teams to log in. If you configure `team_ids`, you must also configure `teams_url` and `team_ids_attribute_path`. | |
| `team_ids_attribute_path` | No | Yes | The [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana team ID lookup within the results returned by the `teams_url` endpoint. | |
| `teams_url` | No | Yes | The URL used to query for team IDs. If not set, the default value is `/teams`. If you configure `teams_url`, you must also configure `team_ids_attribute_path`. | |
| `tls_skip_verify_insecure` | No | No | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL/TLS susceptible to man-in-the-middle attacks. | `false` |
| `tls_client_cert` | No | No | The path to the certificate. | |
| `tls_client_key` | No | No | The path to the key. | |
| `tls_client_ca` | No | No | The path to the trusted certificate authority list. | |
| `use_pkce` | No | Yes | Set to `true` to use [Proof Key for Code Exchange (PKCE)](https://datatracker.ietf.org/doc/html/rfc7636). Grafana uses the SHA256-based `S256` challenge method and a 128-byte (base64url-encoded) code verifier. | `false` |
| `use_refresh_token` | No | Yes | Set to `true` to use a refresh token and check access token expiration. | `false` |
| `signout_redirect_url` | No | Yes | URL to redirect to after the user logs out. | |

## Examples of setting up Generic OAuth

This section includes examples of setting up Generic OAuth integration.

### Set up OAuth2 with Descope

To set up Generic OAuth authentication with Descope, follow these steps:

1. Create a Descope Project [here](https://app.descope.com/gettingStarted), and go through the Getting Started Wizard to configure your authentication. You can skip this step if you already have a Descope project set up.

1. If you wish to use a flow besides `Sign Up or In`, go to the **IdP Applications** menu in the console, and select your IdP application. Then alter the **Flow Hosting URL** query parameter `?flow=sign-up-or-in` to change which flow ID you wish to use.

1. Click **Save**.

1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the values from the **Settings** tab:

   You can get your Client ID (Descope Project ID) under [Project Settings](https://app.descope.com/settings/project). Your Client Secret (Descope Access Key) can be generated under [Access Keys](https://app.descope.com/accesskeys).

   ```bash
   [auth.generic_oauth]
   enabled = true
   allow_sign_up = true
   auto_login = false
   team_ids =
   allowed_organizations =
   name = Descope
   client_id = <Descope Project ID>
   client_secret = <Descope Access Key>
   scopes = openid profile email descope.claims descope.custom_claims
   auth_url = https://api.descope.com/oauth2/v1/authorize
   token_url = https://api.descope.com/oauth2/v1/token
   api_url = https://api.descope.com/oauth2/v1/userinfo
   use_pkce = true
   use_refresh_token = true
   ```

### Set up OAuth2 with Auth0

Support for the Auth0 "audience" feature is not currently available in Grafana. For roles and permissions, the available options are described [here]().

To set up Generic OAuth authentication with Auth0, follow these steps:

1. Create an Auth0 application using the following parameters:

   - Name: Grafana
   - Type: Regular Web Application

1. Go to the **Settings** tab of the application and set **Allowed Callback URLs** to `https://<grafana domain>/login/generic_oauth`.

1. Click **Save Changes**.

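1. Optional: confirm the endpoint URLs before wiring them into Grafana. Auth0 is an OpenID Connect provider, so it publishes a standard discovery document; the check below is a sketch, not part of the official setup, and assumes `curl` and `jq` are installed and `<domain>` is your Auth0 tenant domain:

   ```bash
   # Hypothetical sanity check: list the endpoints that map to the
   # auth_url, token_url, and api_url values configured in the next step.
   curl -s https://<domain>/.well-known/openid-configuration | \
     jq '{authorization_endpoint, token_endpoint, userinfo_endpoint}'
   ```
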
1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the values from the **Settings** tab:

   ```bash
   [auth.generic_oauth]
   enabled = true
   allow_sign_up = true
   auto_login = false
   team_ids =
   allowed_organizations =
   name = Auth0
   client_id = <client id>
   client_secret = <client secret>
   scopes = openid profile email offline_access
   auth_url = https://<domain>/authorize
   token_url = https://<domain>/oauth/token
   api_url = https://<domain>/userinfo
   use_pkce = true
   use_refresh_token = true
   ```

### Set up OAuth2 with Bitbucket

To set up Generic OAuth authentication with Bitbucket, follow these steps:

1. Navigate to **Settings > Workspace settings > OAuth consumers** in Bitbucket.

1. Create an application by selecting **Add consumer** and using the following parameters:

   - Allowed Callback URLs: `https://<grafana domain>/login/generic_oauth`

1. Click **Save**.

1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the `Key` and `Secret` values from the consumer description:

   ```bash
   [auth.generic_oauth]
   name = BitBucket
   enabled = true
   allow_sign_up = true
   auto_login = false
   client_id = <client key>
   client_secret = <client secret>
   scopes = account email
   auth_url = https://bitbucket.org/site/oauth2/authorize
   token_url = https://bitbucket.org/site/oauth2/access_token
   api_url = https://api.bitbucket.org/2.0/user
   teams_url = https://api.bitbucket.org/2.0/user/permissions/workspaces
   team_ids_attribute_path = values[*].workspace.slug
   team_ids =
   allowed_organizations =
   use_refresh_token = true
   ```

By default, a refresh token is included in the response for the **Authorization Code Grant**.

### Set up OAuth2 with OneLogin

To set up Generic OAuth authentication with OneLogin, follow these steps:

1. Create a new Custom Connector in OneLogin with the following settings:

   - Name: Grafana
   - Sign On Method: OpenID Connect
   - Redirect URI: `https://<grafana domain>/login/generic_oauth`
   - Signing Algorithm: RS256
   - Login URL: `https://<grafana domain>/login/generic_oauth`

1. Add an app to the Grafana Connector:

   - Display Name: Grafana

1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the client ID and client secret from the **SSO** tab of the app details page:

   Your OneLogin Domain will match the URL you use to access OneLogin.

   ```bash
   [auth.generic_oauth]
   name = OneLogin
   enabled = true
   allow_sign_up = true
   auto_login = false
   client_id = <client id>
   client_secret = <client secret>
   scopes = openid email name
   auth_url = https://<onelogin domain>.onelogin.com/oidc/2/auth
   token_url = https://<onelogin domain>.onelogin.com/oidc/2/token
   api_url = https://<onelogin domain>.onelogin.com/oidc/2/me
   team_ids =
   allowed_organizations =
   ```

### Set up OAuth2 with Dex

To set up Generic OAuth authentication with [Dex IdP](https://dexidp.io/), follow these steps:

1. Add Grafana as a client in the Dex config YAML file:

   ```yaml
   staticClients:
     - id: <client id>
       name: Grafana
       secret: <client secret>
       redirectURIs:
         - 'https://<grafana domain>/login/generic_oauth'
   ```

   Unlike many other OAuth2 providers, Dex doesn't provide a `<client secret>`. Instead, you can generate one yourself, for example with `openssl rand -hex 20`.

Update the `[auth.generic_oauth]` section of the Grafana configuration:\n\n   ```bash\n   [auth.generic_oauth]\n   name = Dex\n   enabled = true\n   client_id = <client id>\n   client_secret = <client secret>\n   scopes = openid email profile groups offline_access\n   auth_url = https:\/\/<dex base uri>\/auth\n   token_url = https:\/\/<dex base uri>\/token\n   api_url = https:\/\/<dex base uri>\/userinfo\n   ```\n\n   `<dex base uri>` corresponds to the `issuer: ` configuration in Dex (e.g. the Dex\n   domain possibly including a path such as e.g. `\/dex`). The `offline_access` scope is\n   needed when using [refresh tokens]().","site":"grafana setup","answers_cleaned":"    aliases               auth generic oauth  description  Configure Generic OAuth authentication keywords      grafana     configuration     documentation     oauth labels    products        cloud       enterprise       oss menuTitle  Generic OAuth title  Configure Generic OAuth authentication weight  700        Configure Generic OAuth authentication    Grafana provides OAuth2 integrations for the following auth providers      Azure AD OAuth       GitHub OAuth       GitLab OAuth       Google OAuth       Grafana Com OAuth       Keycloak OAuth       Okta OAuth     If your OAuth2 provider is not listed  you can use Generic OAuth authentication   This topic describes how to configure Generic OAuth authentication using different methods and includes  examples of setting up Generic OAuth    with specific OAuth2 providers      Before you begin  To follow this guide     Ensure you know how to create an OAuth2 application with your OAuth2 provider  Consult the documentation of your OAuth2 provider for more information    Ensure your identity provider returns OpenID UserInfo compatible information such as the  sub  claim    If you are using refresh tokens  ensure you know how to set them up with your OAuth2 provider  Consult the documentation of your OAuth2 provider for more information    If Users use the 
same email address in Azure AD that they use with other authentication providers  such as Grafana com   you need to do additional configuration to ensure that the users are matched correctly  Please refer to the  Using the same email address to login with different identity providers    documentation for more information       Configure generic OAuth authentication client using the Grafana UI   Available in Public Preview in Grafana 10 4 behind the  ssoSettingsApi  feature toggle    As a Grafana Admin  you can configure Generic OAuth client from within Grafana using the Generic OAuth UI  To do this  navigate to   Administration   Authentication   Generic OAuth   page and fill in the form  If you have a current configuration in the Grafana configuration file then the form will be pre populated with those values otherwise the form will contain default values   After you have filled in the form  click   Save   to save the configuration  If the save was successful  Grafana will apply the new configurations   If you need to reset changes you made in the UI back to the default values  click   Reset    After you have reset the changes  Grafana will apply the configuration from the Grafana configuration file  if there is any configuration  or the default values    If you run Grafana in high availability mode  configuration changes may not get applied to all Grafana instances immediately  You may need to wait a few minutes for the configuration to propagate to all Grafana instances    Refer to  configuration options    for more information      Configure generic OAuth authentication client using the Terraform provider   Available in Public Preview in Grafana 10 4 behind the  ssoSettingsApi  feature toggle  Supported in the Terraform provider since v2 12 0       terraform resource  grafana sso settings   generic sso settings      provider name    generic oauth    oauth2 settings       name                 Auth0      auth url             https    domain  authorize      token 
url            https    domain  oauth token      api url              https    domain  userinfo      client id             client id       client secret         client secret       allow sign up       true     auto login          false     scopes               openid profile email offline access      use pkce            true     use refresh token   true            Refer to  Terraform Registry  https   registry terraform io providers grafana grafana latest docs resources sso settings  for a complete reference on using the  grafana sso settings  resource      Configure generic OAuth authentication client using the Grafana configuration file  Ensure that you have access to the  Grafana configuration file          Steps  To integrate your OAuth2 provider with Grafana using our Generic OAuth authentication  follow these steps   1  Create an OAuth2 application in your chosen OAuth2 provider  1  Set the callback URL for your OAuth2 app to  http    my grafana server name or ip   grafana server port  login generic oauth       Ensure that the callback URL is the complete HTTP address that you use to access Grafana via your browser  but with the appended path of   login generic oauth       For the callback URL to be correct  it might be necessary to set the  root url  option in the   server  section of the Grafana configuration file  For example  if you are serving Grafana behind a proxy   1  Refer to the following table to update field values located in the   auth generic oauth   section of the Grafana configuration file        Field                          Description                                                                                                                                                                                                                                                                                                                                                                                                                              
         client id    client secret    These values must match the client ID and client secret from your OAuth2 app                                                                                                                              auth url                      The authorization endpoint of your OAuth2 provider                                                                                                                                                        api url                       The user information endpoint of your OAuth2 provider  Information returned by this endpoint must be compatible with  OpenID UserInfo  https   connect2id com products server docs api userinfo           enabled                       Enables Generic OAuth authentication  Set this value to  true                                                                                                                                           Review the list of other Generic OAuth  configuration options    and complete them  as necessary   1  Optional   Configure a refresh token         a  Extend the  scopes  field of   auth generic oauth   section in Grafana configuration file with refresh token scope used by your OAuth2 provider      b  Set  use refresh token  to  true  in   auth generic oauth   section in Grafana configuration file      c  Enable the refresh token on the provider if required   1   Configure role mapping     1  Optional   Configure group synchronization     1  Restart Grafana      You should now see a Generic OAuth login button on the login page and be able to log in or sign up with your OAuth2 provider       Configure login  Grafana can resolve a user s login from the OAuth2 ID token or user information retrieved from the OAuth2 UserInfo endpoint  Grafana looks at these sources in the order listed until it finds a login  If no login is found  then the user s login is set to user s email address   Refer to the following table for information on what to configure based 
on how your Oauth2 provider returns a user s login     Source of login                                                                   Required configuration                                                                                                                                                                       login  or  username  field of the OAuth2 ID token                                N A                                                  Another field of the OAuth2 ID token                                              Set  login attribute path  configuration option       login  or  username  field of the user information from the UserInfo endpoint    N A                                                  Another field of the user information from the UserInfo endpoint                  Set  login attribute path  configuration option         Configure display name  Grafana can resolve a user s display name from the OAuth2 ID token or user information retrieved from the OAuth2 UserInfo endpoint  Grafana looks at these sources in the order listed until it finds a display name  If no display name is found  then user s login is displayed instead   Refer to the following table for information on what you need to configure depending on how your Oauth2 provider returns a user s name     Source of display name                                                               Required configuration                                                                                                                                                                        name  or  display name  field of the OAuth2 ID token                                N A                                                 Another field of the OAuth2 ID token                                                 Set  name attribute path  configuration option       name  or  display name  field of the user information from the UserInfo endpoint    N A                                                 
Another field of the user information from the UserInfo endpoint                     Set  name attribute path  configuration option         Configure email address  Grafana can resolve the user s email address from the OAuth2 ID token  the user information retrieved from the OAuth2 UserInfo endpoint  or the OAuth2   emails  endpoint  Grafana looks at these sources in the order listed until an email address is found  If no email is found  then the email address of the user is set to an empty string   Refer to the following table for information on what to configure based on how the Oauth2 provider returns a user s email address     Source of email address                                                                                                                                                   Required configuration                                                                                                                                                                                                                                                                                                                                                                                                   email  field of the OAuth2 ID token                                                                                                                                      N A                                                                                                                     attributes  map of the OAuth2 ID token                                                                                                                                   Set  email attribute name  configuration option  By default  Grafana searches for email under  email primary  key       upn  field of the OAuth2 ID token                                                                                                                                        N A                                
                                                                                     email  field of the user information from the UserInfo endpoint                                                                                                          N A                                                                                                                    Another field of the user information from the UserInfo endpoint                                                                                                          Set  email attribute path  configuration option                                                                        Email address marked as primary from the   emails  endpoint of  br    the OAuth2 provider  obtained by appending   emails  to the URL  br    configured with  api url     N A                                                                                                                       Configure a refresh token  When a user logs in using an OAuth2 provider  Grafana verifies that the access token has not expired  When an access token expires  Grafana uses the provided refresh token  if any exists  to obtain a new access token   Grafana uses a refresh token to obtain a new access token without requiring the user to log in again  If a refresh token doesn t exist  Grafana logs the user out of the system after the access token has expired   To configure Generic OAuth to use a refresh token  set  use refresh token  configuration option to  true  and perform one or both of the following steps  if required   1  Extend the  scopes  field of   auth generic oauth   section in Grafana configuration file with additional scopes  1  Enable the refresh token on the provider       Note    The  accessTokenExpirationCheck  feature toggle has been removed in Grafana v10 3 0 and the  use refresh token  configuration value will be used instead for configuring refresh token fetching and access token expiration check       Configure 
role mapping  Unless  skip org role sync  option is enabled  the user s role will be set to the role retrieved from the auth provider upon user login   The user s role is retrieved using a  JMESPath  http   jmespath org examples html  expression from the  role attribute path  configuration option  To map the server administrator role  use the  allow assign grafana admin  configuration option  Refer to  configuration options    for more information   If no valid role is found  the user is assigned the role specified by  the  auto assign org role  option     You can disable this default role assignment by setting  role attribute strict   true   This setting denies user access if no role or an invalid role is returned after evaluating the  role attribute path  and the  org mapping  expressions   You can use the  org attribute path  and  org mapping  configuration options to assign the user to organizations and specify their role  For more information  refer to  Org roles mapping example   org roles mapping example   If both org role mapping   org mapping   and the regular role mapping   role attribute path   are specified  then the user will get the highest of the two mapped roles   To ease configuration of a proper JMESPath expression  go to  JMESPath  http   jmespath org   to test and evaluate expressions with custom payloads        Role mapping examples  This section includes examples of JMESPath expressions used for role mapping         Map user organization role  In this example  the user has been granted the role of an  Editor   The role assigned is based on the value of the property  role   which must be a valid Grafana role such as  Admin    Editor    Viewer  or  None    Payload      json                role    Editor                  Config      bash role attribute path   role      In the following more complex example  the user has been granted the  Admin  role  This is because they are a member of the  admin  group of their OAuth2 provider  If the user was 
a member of the  editor  group  they would be granted the  Editor  role  otherwise  Viewer    Payload      json                info                          groups                  engineer                admin                                                Config      bash role attribute path   contains info groups      admin       Admin     contains info groups      editor       Editor      Viewer             Map server administrator role  In the following example  the user is granted the Grafana server administrator role   Payload      json                info                          roles                  admin                                                Config      ini role attribute path   contains info roles      admin       GrafanaAdmin     contains info roles      editor       Editor      Viewer  allow assign grafana admin   true            Map one role to all users  In this example  all users will be assigned  Viewer  role regardless of the user information received from the identity provider   Config      ini role attribute path     Viewer   skip org role sync   false           Org roles mapping example  In this example  the user has been granted the role of a  Viewer  in the  org foo  org  and the role of an  Editor  in the  org bar  and  org baz  orgs   If the user was a member of the  admin  group  they would be granted the Grafana server administrator role   Payload      json                info                          roles                  org foo                org bar                another org                                               Config      ini role attribute path   contains info roles      admin       GrafanaAdmin      None  allow assign grafana admin   true org attribute path   info roles org mapping   org foo org foo Viewer org bar org bar Editor   org baz Editor         Configure group synchronization   Available in  Grafana Enterprise  https   grafana com docs grafana  GRAFANA VERSION  introduction grafana enterprise  and  
Grafana supports synchronization of OAuth2 groups with Grafana teams and roles. This allows automatically assigning users to the appropriate teams or automatically granting them the mapped roles. Teams and roles get synchronized when the user logs in.

Generic OAuth groups can be referenced by group ID, such as `8bab1c86-8fba-33e5-2089-1d1c80ec267d` or `myteam`. For information on configuring OAuth2 groups with Grafana using the `groups_attribute_path` configuration option, refer to [configuration options](#configuration-options).

To learn more about group synchronization, refer to [Configure team sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-team-sync/) and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync/).

### Group attribute synchronization example

Configuration:

```bash
groups_attribute_path = info.groups
```

Payload:

```json
{
    ...
    "info": {
        ...
        "groups": [
            "engineers",
            "analysts"
        ],
        ...
    },
    ...
}
```

## Configuration options

The following table outlines the various Generic OAuth configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to Override configuration with environment variables.

> If the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors. For example, `role_attribute_path: "role:view"`.

| Setting | Required | Supported on Cloud | Description | Default |
| ------- | -------- | ------------------ | ----------- | ------- |
| `enabled` | No | Yes | Enables Generic OAuth authentication. | `false` |
| `name` | No | Yes | Name that refers to the Generic OAuth authentication from the Grafana user interface. | `OAuth` |
| `icon` | No | Yes | Icon used for the Generic OAuth authentication in the Grafana user interface. | `signin` |
| `client_id` | Yes | Yes | Client ID provided by your OAuth2 app. | |
| `client_secret` | Yes | Yes | Client secret provided by your OAuth2 app. | |
| `auth_url` | Yes | Yes | Authorization endpoint of your OAuth2 provider. | |
| `token_url` | Yes | Yes | Endpoint used to obtain the OAuth2 access token. | |
| `api_url` | Yes | Yes | Endpoint used to obtain user information compatible with [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo). | |
| `auth_style` | No | Yes | Name of the [OAuth2 AuthStyle](https://pkg.go.dev/golang.org/x/oauth2#AuthStyle) to be used when ID token is requested from OAuth2 provider. It determines how `client_id` and `client_secret` are sent to the OAuth2 provider. Available values are `AutoDetect`, `InParams` and `InHeader`. | `AutoDetect` |
| `scopes` | No | Yes | List of comma- or space-separated OAuth2 scopes. | `user:email` |
| `empty_scopes` | No | Yes | Set to `true` to use an empty scope during authentication. | `false` |
| `allow_sign_up` | No | Yes | Controls Grafana user creation through the Generic OAuth login. Only existing Grafana users can log in with Generic OAuth if set to `false`. | `true` |
| `auto_login` | No | Yes | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login. | `false` |
| `id_token_attribute_name` | No | Yes | The name of the key used to extract the ID token from the returned OAuth2 token. | `id_token` |
| `login_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for user login lookup from the user ID token. For more information on how user login is retrieved, refer to [Configure login](#configure-login). | |
| `name_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for user name lookup from the user ID token. This name will be used as the user's display name. For more information on how user display name is retrieved, refer to [Configure display name](#configure-display-name). | |
| `email_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for user email lookup from the user information. For more information on how user email is retrieved, refer to [Configure email address](#configure-email-address). | |
| `email_attribute_name` | No | Yes | Name of the key to use for user email lookup within the `attributes` map of the OAuth2 ID token. For more information on how user email is retrieved, refer to [Configure email address](#configure-email-address). | `email:primary` |
| `role_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana role lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no role is found, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation should be a valid Grafana role (`None`, `Viewer`, `Editor`, `Admin` or `GrafanaAdmin`). For more information on user role mapping, refer to [Configure role mapping](#configure-role-mapping). | |
| `role_attribute_strict` | No | Yes | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Configure role mapping](#configure-role-mapping). | `false` |
| `skip_org_role_sync` | No | Yes | Set to `true` to stop automatically syncing user roles. This will allow you to set organization roles for your users from within Grafana manually. | `false` |
| `org_attribute_path` | No | No | [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana org to role lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no value is returned, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation will be mapped to org roles based on `org_mapping`. For more information on org to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | |
| `org_mapping` | No | No | List of comma- or space-separated `<ExternalOrgName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning "All users". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example). | |
| `allow_assign_grafana_admin` | No | No | Set to `true` to enable automatic sync of the Grafana server administrator role. If this option is set to `true` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user the server administrator privileges and organization administrator role. If this option is set to `false` and the result of evaluating `role_attribute_path` for a user is `GrafanaAdmin`, Grafana grants the user only the organization administrator role. For more information on user role mapping, refer to [Configure role mapping](#configure-role-mapping). | `false` |
| `groups_attribute_path` | No | Yes | [JMESPath](http://jmespath.org/examples.html) expression to use for user group lookup. Grafana will first evaluate the expression using the OAuth2 ID token. If no groups are found, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation should be a string array of groups. | |
| `allowed_groups` | No | Yes | List of comma- or space-separated groups. The user should be a member of at least one group to log in. If you configure `allowed_groups`, you must also configure `groups_attribute_path`. | |
| `allowed_organizations` | No | Yes | List of comma- or space-separated organizations. The user should be a member of at least one organization to log in. | |
| `allowed_domains` | No | Yes | List of comma- or space-separated domains. The user should belong to at least one domain to log in. | |
| `team_ids` | No | Yes | String list of team IDs. If set, the user must be a member of one of the given teams to log in. If you configure `team_ids`, you must also configure `teams_url` and `team_ids_attribute_path`. | |
| `team_ids_attribute_path` | No | Yes | The [JMESPath](http://jmespath.org/examples.html) expression to use for Grafana team ID lookup within the results returned by the `teams_url` endpoint. | |
| `teams_url` | No | Yes | The URL used to query for team IDs. If not set, the default value is `/teams`. If you configure `teams_url`, you must also configure `team_ids_attribute_path`. | |
| `tls_skip_verify_insecure` | No | No | If set to `true`, the client accepts any certificate presented by the server and any host name in that certificate. _You should only use this for testing_, because this mode leaves SSL/TLS susceptible to man-in-the-middle attacks. | `false` |
| `tls_client_cert` | No | No | The path to the certificate. | |
| `tls_client_key` | No | No | The path to the key. | |
| `tls_client_ca` | No | No | The path to the trusted certificate authority list. | |
| `use_pkce` | No | Yes | Set to `true` to use [Proof Key for Code Exchange (PKCE)](https://datatracker.ietf.org/doc/html/rfc7636). Grafana uses the SHA256 based `S256` challenge method and a 128 bytes (base64url encoded) code verifier. | `false` |
| `use_refresh_token` | No | Yes | Set to `true` to use refresh token and check access token expiration. | `false` |
| `signout_redirect_url` | No | Yes | URL to redirect to after the user logs out. | |
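To make the `org_mapping` format and the "highest of the two mapped roles" rule more concrete, here is a minimal Python sketch. This is illustrative only, not Grafana's actual implementation: the function names are hypothetical, and the assumption that an omitted `<Role>` falls back to `Viewer` is this sketch's own simplification.

```python
from typing import List, Optional, Tuple

# Grafana org roles ordered from lowest to highest, per the table above.
ROLE_RANK = {"None": 0, "Viewer": 1, "Editor": 2, "Admin": 3, "GrafanaAdmin": 4}


def parse_org_mapping(raw: str) -> List[Tuple[str, str, str]]:
    """Parse space-separated `<ExternalOrgName>:<OrgIdOrName>:<Role>` entries.

    The Role part is optional; this sketch assumes `Viewer` when it is omitted.
    """
    mappings = []
    for entry in raw.split():
        parts = entry.split(":")
        ext_org, grafana_org = parts[0], parts[1]
        role = parts[2] if len(parts) > 2 else "Viewer"
        mappings.append((ext_org, grafana_org, role))
    return mappings


def resolve_role(path_role: Optional[str], org_role: Optional[str]) -> str:
    """If both `role_attribute_path` and `org_mapping` yield a role,
    keep the highest of the two, as described on this page."""
    candidates = [r for r in (path_role, org_role) if r]
    if not candidates:
        return "None"
    return max(candidates, key=lambda r: ROLE_RANK[r])


# Usage with the org mapping from the example above:
mappings = parse_org_mapping("org_foo:org_foo:Viewer org_bar:org_bar:Editor *:org_baz:Editor")
# mappings[0] is ('org_foo', 'org_foo', 'Viewer')
# resolve_role('Viewer', 'Editor') is 'Editor'
```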
## Examples of setting up Generic OAuth

This section includes examples of setting up Generic OAuth integration.

### Set up OAuth2 with Descope

To set up Generic OAuth authentication with Descope, follow these steps:

1. Create a Descope Project [here](https://app.descope.com/gettingStarted), and go through the Getting Started Wizard to configure your authentication. You can skip this step if you already have a Descope project set up.
1. If you wish to use a flow besides "Sign Up or In", go to the **IdP Applications** menu in the console, and select your IdP application. Then alter the **Flow Hosting URL** query parameter `?flow=sign-up-or-in` to change which flow id you wish to use.
1. Click **Save**.
1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the values from the **Settings** tab.

   You can get your Client ID (Descope Project ID) under [Project Settings](https://app.descope.com/settings/project). Your Client Secret (Descope Access Key) can be generated under [Access Keys](https://app.descope.com/accesskeys).

```bash
[auth.generic_oauth]
enabled = true
allow_sign_up = true
auto_login = false
team_ids =
allowed_organizations =
name = Descope
client_id = <Descope Project ID>
client_secret = <Descope Access Key>
scopes = openid profile email descope.claims descope.custom_claims
auth_url = https://api.descope.com/oauth2/v1/authorize
token_url = https://api.descope.com/oauth2/v1/token
api_url = https://api.descope.com/oauth2/v1/userinfo
use_pkce = true
use_refresh_token = true
```

### Set up OAuth2 with Auth0

> Support for the Auth0 "audience" feature is not currently available in Grafana. For roles and permissions, the available options are described here.

To set up Generic OAuth authentication with Auth0, follow these steps:

1. Create an Auth0 application using the following parameters:
   - Name: Grafana
   - Type: Regular Web Application
1. Go to the **Settings** tab of the application and set **Allowed Callback URLs** to `https://<grafana domain>/login/generic_oauth`.
1. Click **Save Changes**.
1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the values from the **Settings** tab:

```bash
[auth.generic_oauth]
enabled = true
allow_sign_up = true
auto_login = false
team_ids =
allowed_organizations =
name = Auth0
client_id = <client id>
client_secret = <client secret>
scopes = openid profile email offline_access
auth_url = https://<domain>/authorize
token_url = https://<domain>/oauth/token
api_url = https://<domain>/userinfo
use_pkce = true
use_refresh_token = true
```

### Set up OAuth2 with Bitbucket

To set up Generic OAuth authentication with Bitbucket, follow these steps:

1. Navigate to **Settings > Workspace setting > OAuth consumers** in BitBucket.
1. Create an application by selecting **Add consumer** and using the following parameters:
   - Allowed Callback URLs: `https://<grafana domain>/login/generic_oauth`
1. Click **Save**.
1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the `Key` and `Secret` from the consumer description:

```bash
[auth.generic_oauth]
name = BitBucket
enabled = true
allow_sign_up = true
auto_login = false
client_id = <client key>
client_secret = <client secret>
scopes = account email
auth_url = https://bitbucket.org/site/oauth2/authorize
token_url = https://bitbucket.org/site/oauth2/access_token
api_url = https://api.bitbucket.org/2.0/user
teams_url = https://api.bitbucket.org/2.0/user/permissions/workspaces
team_ids_attribute_path = values[*].workspace.slug
team_ids =
allowed_organizations =
use_refresh_token = true
```

By default, a refresh token is included in the response for the **Authorization Code Grant**.

### Set up OAuth2 with OneLogin

To set up Generic OAuth authentication with OneLogin, follow these steps:

1. Create a new Custom Connector in OneLogin with the following settings:
   - Name: Grafana
   - Sign On Method: OpenID Connect
   - Redirect URI: `https://<grafana domain>/login/generic_oauth`
   - Signing Algorithm: RS256
   - Login URL: `https://<grafana domain>/login/generic_oauth`
1. Add an app to the Grafana Connector:
   - Display Name: Grafana
1. Update the `[auth.generic_oauth]` section of the Grafana configuration file using the client ID and client secret from the **SSO** tab of the app details page.

   Your OneLogin Domain will match the URL you use to access OneLogin.

```bash
[auth.generic_oauth]
name = OneLogin
enabled = true
allow_sign_up = true
auto_login = false
client_id = <client id>
client_secret = <client secret>
scopes = openid email name
auth_url = https://<onelogin domain>.onelogin.com/oidc/2/auth
token_url = https://<onelogin domain>.onelogin.com/oidc/2/token
api_url = https://<onelogin domain>.onelogin.com/oidc/2/me
team_ids =
allowed_organizations =
```

### Set up OAuth2 with Dex

To set up Generic OAuth authentication with [Dex IdP](https://dexidp.io/), follow these steps:

1. Add Grafana as a client in the Dex config YAML file:

   ```yaml
   staticClients:
     - id: <client id>
       name: Grafana
       secret: <client secret>
       redirectURIs:
         - 'https://<grafana domain>/login/generic_oauth'
   ```

   Unlike many other OAuth2 providers, Dex doesn't provide `<client secret>`. Instead, a secret can be generated with, for example, `openssl rand -hex 20`.

2. Update the `[auth.generic_oauth]` section of the Grafana configuration:

   ```bash
   [auth.generic_oauth]
   name = Dex
   enabled = true
   client_id = <client id>
   client_secret = <client secret>
   scopes = openid email profile groups offline_access
   auth_url = https://<dex base uri>/auth
   token_url = https://<dex base uri>/token
   api_url = https://<dex base uri>/userinfo
   ```

   `<dex base uri>` corresponds to the `issuer` configuration in Dex, for example, the Dex domain, possibly including a path such as `/dex`. The `offline_access` scope is needed when using refresh tokens.
{"questions":"grafana setup Grafana Okta OIDC Guide enterprise aliases products menuTitle Okta OIDC labels cloud oss auth okta","answers":"---\naliases:\n  - ..\/..\/..\/auth\/okta\/\ndescription: Grafana Okta OIDC Guide\nlabels:\n  products:\n    - cloud\n    - enterprise\n    - oss\nmenuTitle: Okta OIDC\ntitle: Configure Okta OIDC authentication\nweight: 1400\n---\n\n# Configure Okta OIDC authentication\n\n\n\n\nIf users use the same email address in Okta that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to the [Using the same email address to login with different identity providers]() documentation for more information.\n\n\n## Before you begin\n\nTo follow this guide, ensure you have permissions in your Okta workspace to create an OIDC app.\n\n## Create an Okta app\n\n1. From the Okta Admin Console, select **Create App Integration** from the **Applications** menu.\n1. For **Sign-in method**, select **OIDC - OpenID Connect**.\n1. For **Application type**, select **Web Application** and click **Next**.\n1. Configure **New Web App Integration Operations**:\n\n   - **App integration name**: Choose a name for the app.\n   - **Logo (optional)**: Add a logo.\n   - **Grant type**: Select **Authorization Code** and **Refresh Token**.\n   - **Sign-in redirect URIs**: Replace the default setting with the Grafana Cloud Okta path, replacing <YOUR_ORG> with the name of your Grafana organization: https:\/\/<YOUR_ORG>.grafana.net\/login\/okta. For on-premises installation, use the Grafana server URL: http:\/\/<my_grafana_server_name_or_ip>:<grafana_server_port>\/login\/okta.\n   - **Sign-out redirect URIs (optional)**: Replace the default setting with the Grafana Cloud Okta path, replacing <YOUR_ORG> with the name of your Grafana organization: https:\/\/<YOUR_ORG>.grafana.net\/logout. For on-premises installation, use the Grafana server URL: http:\/\/<my_grafana_server_name_or_ip>:<grafana_server_port>\/logout.\n   - **Base URIs (optional)**: Add any base URIs.\n   - **Controlled access**: Select whether to assign the app integration to everyone in your organization, or only selected groups. You can assign this option after you create the app.\n\n1. Make a note of the following:\n   - **Client ID**\n   - **Client Secret**\n   - **Auth URL**\n     For example: https:\/\/<TENANT_ID>.okta.com\/oauth2\/v1\/authorize\n   - **Token URL**\n     For example: https:\/\/<TENANT_ID>.okta.com\/oauth2\/v1\/token\n   - **API URL**\n     For example: https:\/\/<TENANT_ID>.okta.com\/oauth2\/v1\/userinfo\n\n### Configure Okta to Grafana role mapping\n\n1. In the **Okta Admin Console**, select **Directory > Profile Editor**.\n1. Select the Okta Application Profile you created previously (the default name for this is `<App name> User`).\n1. Select **Add Attribute** and fill in the following fields:\n\n   - **Data Type**: string\n   - **Display Name**: Meaningful name. For example, `Grafana Role`.\n   - **Variable Name**: Meaningful name. For example, `grafana_role`.\n   - **Description (optional)**: A description of the role.\n   - **Enum**: Select **Define enumerated list of values** and add the following:\n     - Display Name: Admin, Value: Admin\n     - Display Name: Editor, Value: Editor\n     - Display Name: Viewer, Value: Viewer\n\n   The remaining attributes are optional and can be set as needed.\n\n1. Click **Save**.\n1. (Optional) You can add the role attribute to the default User profile. To do this, please follow the steps in the [Optional: Add the role attribute to the User (default) Okta profile]() section.\n\n### Configure Groups claim\n\n1. In the **Okta Admin Console**, select **Applications > Applications**.\n1. Select the OpenID Connect application you created.\n1. 
Go to the **Sign On** tab and click **Edit** in the **OpenID Connect ID Token** section.\n1. In the **Group claim type** section, select **Filter**.\n1. In the **Group claim filter** section, leave the default name `groups` (or add it if the box is empty), then select **Matches regex** and add the following regex: `.*`.\n1. Click **Save**.\n1. Click the **Back to applications** link at the top of the page.\n1. From the **More** button dropdown menu, click **Refresh Application Data**.\n1. Include the `groups` scope in the **Scopes** field of the Okta integration in Grafana.\n   For Terraform or in the Grafana configuration file, include the `groups` scope in the `scopes` field.\n\n\nIf you configure the `groups` claim differently, ensure that the `groups` claim is a string array.\n\n\n#### Optional: Add the role attribute to the User (default) Okta profile\n\nIf you want to configure the role for all users in the Okta directory, you can add the role attribute to the User (default) Okta profile.\n\n1. Return to the **Directory** section and select **Profile Editor**.\n1. Select the User (default) Okta profile, and click **Add Attribute**.\n1. Set all of the attributes in the same way you did in **Step 3**.\n1. Select **Add Mapping** to add your new attributes.\n   For example, **user.grafana_role -> grafana_role**.\n1. To add a role to a user, select the user from the **Directory**, and click **Profile -> Edit**.\n1. Select an option from your new attribute and click **Save**.\n1. Update the Okta integration by setting the `Role attribute path` (`role_attribute_path` in Terraform and config file) to `<YOUR_ROLE_VARIABLE>`. For example: `role_attribute_path = grafana_role` (when using the configuration file).\n\n## Configure Okta authentication client using the Grafana UI\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle.\n\n\nAs a Grafana Admin, you can configure the Okta OAuth2 client from within Grafana using the UI. To do this, navigate to the **Administration > Authentication > Okta** page and fill in the form. If you have a current configuration in the Grafana configuration file, the form will be pre-populated with those values. Otherwise, the form will contain default values.\n\nAfter you have filled in the form, click **Save**. If the save was successful, Grafana will apply the new configuration.\n\nIf you need to reset changes you made in the UI back to the default values, click **Reset**. After you have reset the changes, Grafana will apply the configuration from the Grafana configuration file (if there is any configuration) or the default values.\n\n\nIf you run Grafana in high availability mode, configuration changes may not get applied to all Grafana instances immediately. You may need to wait a few minutes for the configuration to propagate to all Grafana instances.\n\n\nRefer to [configuration options]() for more information.\n\n## Configure Okta authentication client using the Terraform provider\n\n\nAvailable in Public Preview in Grafana 10.4 behind the `ssoSettingsApi` feature toggle. 
Supported in the Terraform provider since v2.12.0.\n\n\n```terraform\nresource \"grafana_sso_settings\" \"okta_sso_settings\" {\n  provider_name = \"okta\"\n  oauth2_settings {\n    name                  = \"Okta\"\n    auth_url              = \"https:\/\/<okta tenant id>.okta.com\/oauth2\/v1\/authorize\"\n    token_url             = \"https:\/\/<okta tenant id>.okta.com\/oauth2\/v1\/token\"\n    api_url               = \"https:\/\/<okta tenant id>.okta.com\/oauth2\/v1\/userinfo\"\n    client_id             = \"CLIENT_ID\"\n    client_secret         = \"CLIENT_SECRET\"\n    allow_sign_up         = true\n    auto_login            = false\n    scopes                = \"openid profile email offline_access\"\n    role_attribute_path   = \"contains(groups[*], 'Example::DevOps') && 'Admin' || 'None'\"\n    role_attribute_strict = true\n    allowed_groups        = \"Example::DevOps,Example::Dev,Example::QA\"\n  }\n}\n```\n\nGo to [Terraform Registry](https:\/\/registry.terraform.io\/providers\/grafana\/grafana\/latest\/docs\/resources\/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.\n\n## Configure Okta authentication client using the Grafana configuration file\n\nEnsure that you have access to the [Grafana configuration file]().\n\n### Steps\n\nTo integrate your Okta OIDC provider with Grafana using our Okta OIDC integration, follow these steps:\n\n1. Follow the [Create an Okta app]() steps to create an OIDC app in Okta.\n\n1. Refer to the following table to update field values located in the `[auth.okta]` section of the Grafana configuration file:\n\n   | Field       | Description                                                                                                      |\n   | ----------- | ---------------------------------------------------------------------------------------------------------------- |\n   | `client_id` | This value must match the client ID from your Okta OIDC app.                                                      |\n   | `auth_url`  | The authorization endpoint of your Okta OIDC provider. `https:\/\/<okta-tenant-id>.okta.com\/oauth2\/v1\/authorize`   |\n   | `token_url` | The token endpoint of your Okta OIDC provider. `https:\/\/<okta-tenant-id>.okta.com\/oauth2\/v1\/token`               |\n   | `api_url`   | The user information endpoint of your Okta OIDC provider. `https:\/\/<okta-tenant-id>.okta.com\/oauth2\/v1\/userinfo` |\n   | `enabled`   | Enables Okta OIDC authentication. Set this value to `true`.                                                       |\n\n1. Review the list of other Okta OIDC [configuration options]() and complete them as necessary.\n\n1. Optional: [Configure a refresh token]().\n1. [Configure role mapping]().\n1. Optional: [Configure group synchronization]().\n1. Restart Grafana.\n\n   You should now see an Okta OIDC login button on the login page and be able to log in or sign up with your OIDC provider.\n\nThe following is an example of a minimally functioning integration when\nconfigured with the instructions above:\n\n```ini\n[auth.okta]\nname = Okta\nicon = okta\nenabled = true\nallow_sign_up = true\nclient_id = <client id>\nscopes = openid profile email offline_access\nauth_url = https:\/\/<okta tenant id>.okta.com\/oauth2\/v1\/authorize\ntoken_url = https:\/\/<okta tenant id>.okta.com\/oauth2\/v1\/token\napi_url = https:\/\/<okta tenant id>.okta.com\/oauth2\/v1\/userinfo\nrole_attribute_path = grafana_role\nrole_attribute_strict = true\nallowed_groups = \"Example::DevOps\" \"Example::Dev\" \"Example::QA\"\n```\n\n### Configure a refresh token\n\nWhen a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. 
When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token without requiring the user to log in again.\n\nIf a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.\n\nTo enable the `Refresh Token`, head over to the Okta application settings and:\n\n1. Under the `General` tab, find the `General Settings` section.\n1. Within the `Grant Type` options, enable the `Refresh Token` checkbox.\n\nIn the configuration file, extend the `scopes` in the `[auth.okta]` section with `offline_access` and set `use_refresh_token` to `true`.\n\n### Configure role mapping\n\n\nUnless the `skip_org_role_sync` option is enabled, the user's role will be set to the role retrieved from the auth provider upon user login.\n\n\nThe user's role is retrieved using a [JMESPath](http:\/\/jmespath.org\/examples.html) expression from the `role_attribute_path` configuration option against the payload of the `api_url` endpoint (the `\/userinfo` OIDC endpoint).\n\nIf no valid role is found, the user is assigned the role specified by [the `auto_assign_org_role` option]().\nYou can disable this default role assignment by setting `role_attribute_strict = true`. This setting denies user access if no role or an invalid role is returned after evaluating the `role_attribute_path` and the `org_mapping` expressions.\n\nYou can use the `org_attribute_path` and `org_mapping` configuration options to assign the user to organizations and specify their role. For more information, refer to [Org roles mapping example](#org-roles-mapping-example). 
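\n\nAs a minimal sketch (reusing the placeholder attribute, group, and org names from the examples on this page), both mapping styles can be set at the same time in the `[auth.okta]` section:\n\n```ini\n# illustrative values: the custom attribute drives the role, the groups claim drives the org\nrole_attribute_path = grafana_role\norg_attribute_path = groups\norg_mapping = [\"Group 1:org_foo:Viewer\"]\n```\n\n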
If both org role mapping (`org_mapping`) and the regular role mapping (`role_attribute_path`) are specified, then the user will get the highest of the two mapped roles.\n\nTo allow mapping Grafana server administrator role, use the `allow_assign_grafana_admin` configuration option.\nRefer to [configuration options]() for more information.\n\nIn [Create an Okta app](), you created a custom attribute in Okta to store the role. You can use this attribute to map the role to a Grafana role by setting the `role_attribute_path` configuration option to the custom attribute name: `role_attribute_path = grafana_role`.\n\nIf you want to map the role based on the user's group, you can use the `groups` attribute from the user info endpoint. An example of this is `role_attribute_path = contains(groups[*], 'Example::DevOps') && 'Admin' || 'None'`. You can find more examples of JMESPath expressions on the Generic OAuth page for [JMESPath examples]().\n\nTo learn about adding custom claims to the user info in Okta, refer to [add custom claims](https:\/\/developer.okta.com\/docs\/guides\/customize-tokens-returned-from-okta\/main\/#add-a-custom-claim-to-a-token).\n\n#### Org roles mapping example\n\n\nAvailable in on-premise Grafana installations.\n\n\nIn this example, the `org_mapping` uses the `groups` attribute as the source (`org_attribute_path`) to map the current user to different organizations and roles. 
The user has been granted the role of a `Viewer` in the `org_foo` org if they are a member of the `Group 1` group, the role of an `Editor` in the `org_bar` org if they are a member of the `Group 2` group, and the role of an `Editor` in the `org_baz` (OrgID=3) org.\n\nConfig:\n\n```ini\norg_attribute_path = groups\norg_mapping = [\"Group 1:org_foo:Viewer\", \"Group 2:org_bar:Editor\", \"*:3:Editor\"]\n```\n\n### Configure group synchronization (Enterprise only)\n\n\nAvailable in [Grafana Enterprise](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/introduction\/grafana-enterprise) and [Grafana Cloud](\/docs\/grafana-cloud\/).\n\n\nBy using group synchronization, you can link your Okta groups to teams and roles within Grafana. This allows automatically assigning users to the appropriate teams or granting them the mapped roles.\nTeams and roles get synchronized when the user logs in.\n\nOkta groups can be referenced by group names, like `Admins` or `Editors`.\n\nTo learn more about how to configure group synchronization, refer to the [Configure team sync]() and [Configure group attribute sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-group-attribute-sync) documentation.\n\n## Configuration options\n\nThe following table outlines the various Okta OIDC configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to [Override configuration with environment variables]().\n\n\nIf the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors. 
For example `role_attribute_path: \"role:view\"`\n\n\n| Setting                 | Required | Supported on Cloud | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          | Default                       |\n| ----------------------- | -------- | ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------- |\n| `enabled`               | No       | Yes                | Enables Okta OIDC authentication.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    
| `false`                       |\n| `name`                  | No       | Yes                | Name that refers to the Okta OIDC authentication from the Grafana user interface.                                                                                                                                                                                                                                                                                                                                                                                                                                                    | `Okta`                        |\n| `icon`                  | No       | Yes                | Icon used for the Okta OIDC authentication in the Grafana user interface.                                                                                                                                                                                                                                                                                                                                                                                                                                                            | `okta`                        |\n| `client_id`             | Yes      | Yes                | Client ID provided by your Okta OIDC app.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |                               |\n| `client_secret`         | Yes      | Yes                | Client secret provided by your Okta OIDC app.                        
                                                                                                                                                                                                                                                                                                                                                                                                                                                                |                               |\n| `auth_url`              | Yes      | Yes                | Authorization endpoint of your Okta OIDC provider.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |                               |\n| `token_url`             | Yes      | Yes                | Endpoint used to obtain the Okta OIDC access token.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |                               |\n| `api_url`               | Yes      | Yes                | Endpoint used to obtain user information.                                                                                                                                                                                                
                                                                                                                                                                                                                                                                                            |                               |\n| `scopes`                | No       | Yes                | List of comma- or space-separated Okta OIDC scopes.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  | `openid profile email groups` |\n| `allow_sign_up`         | No       | Yes                | Controls Grafana user creation through the Okta OIDC login. Only existing Grafana users can log in with Okta OIDC if set to `false`.                                                                                                                                                                                                                                                                                                                                                                                                 | `true`                        |\n| `auto_login`            | No       | Yes                | Set to `true` to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto-login.                                                                                                                                                                                                                                       
                                                                                                                        | `false`                       |\n| `role_attribute_path`   | No       | Yes                | [JMESPath](http:\/\/jmespath.org\/examples.html) expression to use for Grafana role lookup. Grafana will first evaluate the expression using the Okta OIDC ID token. If no role is found, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation should be a valid Grafana role (`None`, `Viewer`, `Editor`, `Admin` or `GrafanaAdmin`). For more information on user role mapping, refer to [Configure role mapping](). |                               |\n| `role_attribute_strict` | No       | Yes                | Set to `true` to deny user login if the Grafana org role cannot be extracted using `role_attribute_path` or `org_mapping`. For more information on user role mapping, refer to [Configure role mapping]().                                                                                                                                                                                                                                                                                   | `false`                       |\n| `org_attribute_path`    | No       | No                 | [JMESPath](http:\/\/jmespath.org\/examples.html) expression to use for Grafana org to role lookup. The result of the evaluation will be mapped to org roles based on `org_mapping`. For more information on org to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example).                                                                                                                                                                                                                                      
|                               |\n| `org_mapping`           | No       | No                 | List of comma- or space-separated `<ExternalOrgName>:<OrgIdOrName>:<Role>` mappings. Value can be `*` meaning \"All users\". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`. For more information on external organization to role mapping, refer to [Org roles mapping example](#org-roles-mapping-example).                                                                                                                                                                               |                               |\n| `skip_org_role_sync`    | No       | Yes                | Set to `true` to stop automatically syncing user roles. This will allow you to set organization roles for your users from within Grafana manually.                                                                                                                                                                                                                                                                                                                                                                                   | `false`                       |\n| `allowed_groups`        | No       | Yes                | List of comma- or space-separated groups. The user should be a member of at least one group to log in.                                                                                                                                                                                                                                                                                                                                                                                                                               |                               |\n| `allowed_domains`       | No       | Yes                | List of comma- or space-separated domains. 
The user should belong to at least one domain to log in.                                                                                                                                                                                                                                                                                                                                                                                                                                  |                               |\n| `use_pkce`              | No       | Yes                | Set to `true` to use [Proof Key for Code Exchange (PKCE)](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7636). Grafana uses the SHA256 based `S256` challenge method and a 128 bytes (base64url encoded) code verifier.                                                                                                                                                                                                                                                                                                                   | `true`                        |\n| `use_refresh_token`     | No       | Yes                | Set to `true` to use refresh token and check access token expiration.                                                                                                                                                                                                                                                                                                                                                                                                                                                                | `false`                       |\n| `signout_redirect_url`  | No       | Yes                | URL to redirect to after the user logs out.                                                                                                                                                               
                                                                                                                                                                                                                                                                                                                           |                               |","site":"grafana setup","answers_cleaned":"    aliases               auth okta  description  Grafana Okta OIDC Guide labels    products        cloud       enterprise       oss menuTitle  Okta OIDC title  Configure Okta OIDC authentication weight  1400        Configure Okta OIDC authentication     If Users use the same email address in Okta that they use with other authentication providers  such as Grafana com   you need to do additional configuration to ensure that the users are matched correctly  Please refer to the  Using the same email address to login with different identity providers    documentation for more information       Before you begin  To follow this guide  ensure you have permissions in your Okta workspace to create an OIDC app      Create an Okta app  1  From the Okta Admin Console  select   Create App Integration   from the   Applications   menu  1  For   Sign in method    select   OIDC   OpenID Connect    1  For   Application type    select   Web Application   and click   Next    1  Configure   New Web App Integration Operations            App integration name    Choose a name for the app         Logo  optional     Add a logo         Grant type    Select   Authorization Code   and   Refresh Token           Sign in redirect URIs    Replace the default setting with the Grafana Cloud Okta path  replacing  YOUR ORG  with the name of your Grafana organization  https    YOUR ORG  grafana net login okta  For on premises installation  use the Grafana server URL  http    my grafana server name or ip   grafana server port  login okta         Sign out redirect URIs  optional     Replace the default setting with the 
Grafana Cloud Okta path  replacing  YOUR ORG  with the name of your Grafana organization  https    YOUR ORG  grafana net logout  For on premises installation  use the Grafana server URL  http    my grafana server name or ip   grafana server port  logout         Base URIs  optional     Add any base URIs        Controlled access    Select whether to assign the app integration to everyone in your organization  or only selected groups  You can assign this option after you create the app   1  Make a note of the following         ClientID          Client Secret          Auth URL        For example  https    TENANT ID  okta com oauth2 v1 authorize        Token URL        For example  https    TENANT ID  okta com oauth2 v1 token        API URL        For example  https    TENANT ID  okta com oauth2 v1 userinfo      Configure Okta to Grafana role mapping  1  In the   Okta Admin Console    select   Directory   Profile Editor    1  Select the Okta Application Profile you created previously  the default name for this is   App name  User    1  Select   Add Attribute   and fill in the following fields          Data Type    string        Display Name    Meaningful name  For example   Grafana Role          Variable Name    Meaningful name  For example   grafana role          Description  optional     A description of the role         Enum    Select   Define enumerated list of values   and add the following         Display Name  Admin Value  Admin        Display Name  Editor Value  Editor        Display Name  Viewer Value  Viewer     The remaining attributes are optional and can be set as needed   1  Click   Save    1   Optional  You can add the role attribute to the default User profile  To do this  please follow the steps in the  Optional  Add the role attribute to the User  default  Okta profile    section       Configure Groups claim  1  In the   Okta Admin Console    select   Application   Applications    1  Select the OpenID Connect application you created  1  Go to the   
Sign On   tab and click   Edit   in the   OpenID Connect ID Token   section  1  In the   Group claim type   section  select   Filter    1  In the   Group claim filter   section  leave the default name  groups   or add it if the box is empty   then select   Matches regex   and add the following regex        1  Click   Save    1  Click the   Back to applications   link at the top of the page  1  From the   More   button dropdown menu  click   Refresh Application Data    1  Include the  groups  scope in the   Scopes   field in Grafana of the Okta integration     For Terraform or in the Grafana configuration file  include the  groups  scope in  scopes  field    If you configure the  groups  claim differently  ensure that the  groups  claim is a string array         Optional  Add the role attribute to the User  default  Okta profile  If you want to configure the role for all users in the Okta directory  you can add the role attribute to the User  default  Okta profile   1  Return to the   Directory   section and select   Profile Editor    1  Select the User  default  Okta profile  and click   Add Attribute    1  Set all of the attributes in the same way you did in   Step 3    1  Select   Add Mapping   to add your new attributes     For example    user grafana role    grafana role    1  To add a role to a user  select the user from the   Directory    and click   Profile    Edit    1  Select an option from your new attribute and click   Save    1  Update the Okta integration by setting the  Role attribute path    role attribute path  in Terraform and config file  to   YOUR ROLE VARIABLE    For example   role attribute path   grafana role   using the configuration       Configure Okta authentication client using the Grafana UI   Available in Public Preview in Grafana 10 4 behind the  ssoSettingsApi  feature toggle    As a Grafana Admin  you can configure Okta OAuth2 client from within Grafana using the Okta UI  To do this  navigate to   Administration   Authentication   
Okta   page and fill in the form  If you have a current configuration in the Grafana configuration file then the form will be pre populated with those values otherwise the form will contain default values   After you have filled in the form  click   Save    If the save was successful  Grafana will apply the new configurations   If you need to reset changes you made in the UI back to the default values  click   Reset    After you have reset the changes  Grafana will apply the configuration from the Grafana configuration file  if there is any configuration  or the default values    If you run Grafana in high availability mode  configuration changes may not get applied to all Grafana instances immediately  You may need to wait a few minutes for the configuration to propagate to all Grafana instances    Refer to  configuration options    for more information      Configure Okta authentication client using the Terraform provider   Available in Public Preview in Grafana 10 4 behind the  ssoSettingsApi  feature toggle  Supported in the Terraform provider since v2 12 0       terraform resource  grafana sso settings   okta sso settings      provider name    okta    oauth2 settings       name                     Okta      auth url                 https    okta tenant id  okta com oauth2 v1 authorize      token url                https    okta tenant id  okta com oauth2 v1 token      api url                  https    okta tenant id  okta com oauth2 v1 userinfo      client id                CLIENT ID      client secret            CLIENT SECRET      allow sign up           true     auto login              false     scopes                   openid profile email offline access      role attribute path      contains groups      Example  DevOps       Admin      None       role attribute strict   true     allowed groups           Example  DevOps Example  Dev Example  QA             Go to  Terraform Registry  https   registry terraform io providers grafana grafana latest docs 
resources sso settings  for a complete reference on using the  grafana sso settings  resource      Configure Okta authentication client using the Grafana configuration file  Ensure that you have access to the  Grafana configuration file          Steps  To integrate your Okta OIDC provider with Grafana using our Okta OIDC integration  follow these steps   1  Follow the  Create an Okta app    steps to create an OIDC app in Okta   1  Refer to the following table to update field values located in the   auth okta   section of the Grafana configuration file        Field         Description                                                                                                                                                                                                                                          client id    These values must match the client ID from your Okta OIDC app                                                       auth url     The authorization endpoint of your OIDC provider   https    okta tenant id  okta com oauth2 v1 authorize            token url    The token endpoint of your Okta OIDC provider   https    okta tenant id  okta com oauth2 v1 token                   api url      The user information endpoint of your Okta OIDC provider   https    tenant id  okta com oauth2 v1 userinfo          enabled      Enables Okta OIDC authentication  Set this value to  true                                                      1  Review the list of other Okta OIDC  configuration options    and complete them as necessary   1  Optional   Configure a refresh token     1   Configure role mapping     1  Optional   Configure group synchronization     1  Restart Grafana      You should now see a Okta OIDC login button on the login page and be able to log in or sign up with your OIDC provider   The following is an example of a minimally functioning integration when configured with the instructions above      ini  auth okta  name   Okta icon   okta enabled   
true allow sign up   true client id    client id  scopes   openid profile email offline access auth url   https    okta tenant id  okta com oauth2 v1 authorize token url   https    okta tenant id  okta com oauth2 v1 token api url   https    okta tenant id  okta com oauth2 v1 userinfo role attribute path   grafana role role attribute strict   true allowed groups    Example  DevOps   Example  Dev   Example  QA           Configure a refresh token  When a user logs in using an OAuth provider  Grafana verifies that the access token has not expired  When an access token expires  Grafana uses the provided refresh token  if any exists  to obtain a new access token without requiring the user to log in again   If a refresh token doesn t exist  Grafana logs the user out of the system after the access token has expired   To enable the  Refresh Token  head over the Okta application settings and   1  Under  General  tab  find the  General Settings  section  1  Within the  Grant Type  options  enable the  Refresh Token  checkbox   At the configuration file  extend the  scopes  in   auth okta   section with  offline access  and set  use refresh token  to  true        Configure role mapping   Unless  skip org role sync  option is enabled  the user s role will be set to the role retrieved from the auth provider upon user login    The user s role is retrieved using a  JMESPath  http   jmespath org examples html  expression from the  role attribute path  configuration option against the  api url     userinfo  OIDC endpoint  endpoint payload   If no valid role is found  the user is assigned the role specified by  the  auto assign org role  option     You can disable this default role assignment by setting  role attribute strict   true   This setting denies user access if no role or an invalid role is returned after evaluating the  role attribute path  and the  org mapping  expressions   You can use the  org attribute path  and  org mapping  configuration options to assign the user to 
organizations and specify their role  For more information  refer to  Org roles mapping example   org roles mapping example   If both org role mapping   org mapping   and the regular role mapping   role attribute path   are specified  then the user will get the highest of the two mapped roles   To allow mapping Grafana server administrator role  use the  allow assign grafana admin  configuration option  Refer to  configuration options    for more information   In  Create an Okta app     you created a custom attribute in Okta to store the role  You can use this attribute to map the role to a Grafana role by setting the  role attribute path  configuration option to the custom attribute name   role attribute path   grafana role    If you want to map the role based on the user s group  you can use the  groups  attribute from the user info endpoint  An example of this is  role attribute path   contains groups      Example  DevOps       Admin      None    You can find more examples of JMESPath expressions on the Generic OAuth page for  JMESPath examples      To learn about adding custom claims to the user info in Okta  refer to  add custom claims  https   developer okta com docs guides customize tokens returned from okta main  add a custom claim to a token         Org roles mapping example   Available in on premise Grafana installations    In this example  the  org mapping  uses the  groups  attribute as the source   org attribute path   to map the current user to different organizations and roles  The user has been granted the role of a  Viewer  in the  org foo  org if they are a member of the  Group 1  group  the role of an  Editor  in the  org bar  org if they are a member of the  Group 2  group  and the role of an  Editor  in the  org baz  OrgID 3  org   Config      ini org attribute path   groups org mapping     Group 1 org foo Viewer    Group 2 org bar Editor      3 Editor            Configure group synchronization  Enterprise only    Available in  Grafana 
Enterprise (https://grafana.com/docs/grafana/<GRAFANA_VERSION>/introduction/grafana-enterprise) and Grafana Cloud (/docs/grafana-cloud/). By using group synchronization, you can link your Okta groups to teams and roles within Grafana. This allows automatically assigning users to the appropriate teams or granting them the mapped roles. Teams and roles get synchronized when the user logs in. Okta groups can be referenced by group names, like 'Admins' or 'Editors'. To learn more about how to configure group synchronization, refer to the Configure team sync and Configure group attribute sync (https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync) documentation.

Configuration options: The following table outlines the various Okta OIDC configuration options. You can apply these options as environment variables, similar to any other configuration within Grafana. For more information, refer to Override configuration with environment variables. If the configuration option requires a JMESPath expression that includes a colon, enclose the entire expression in quotes to prevent parsing errors. For example: role_attribute_path: 'role:view'.

| Setting | Required | Supported on Cloud | Description | Default |
|---|---|---|---|---|
| enabled | No | Yes | Enables Okta OIDC authentication. | false |
| name | No | Yes | Name that refers to the Okta OIDC authentication from the Grafana user interface. | Okta |
| icon | No | Yes | Icon used for the Okta OIDC authentication in the Grafana user interface. | okta |
| client_id | Yes | Yes | Client ID provided by your Okta OIDC app. | |
| client_secret | Yes | Yes | Client secret provided by your Okta OIDC app. | |
| auth_url | Yes | Yes | Authorization endpoint of your Okta OIDC provider. | |
| token_url | Yes | Yes | Endpoint used to obtain the Okta OIDC access token. | |
| api_url | Yes | Yes | Endpoint used to obtain user information. | |
| scopes | No | Yes | List of comma- or space-separated Okta OIDC scopes. | openid profile email groups |
| allow_sign_up | No | Yes | Controls Grafana user creation through the Okta OIDC login. Only existing Grafana users can log in with Okta OIDC if set to false. | true |
| auto_login | No | Yes | Set to true to enable users to bypass the login screen and automatically log in. This setting is ignored if you configure multiple auth providers to use auto login. | false |
| role_attribute_path | No | Yes | JMESPath (http://jmespath.org/examples.html) expression to use for Grafana role lookup. Grafana will first evaluate the expression using the Okta OIDC ID token. If no role is found, the expression will be evaluated using the user information obtained from the UserInfo endpoint. The result of the evaluation should be a valid Grafana role (None, Viewer, Editor, Admin or GrafanaAdmin). For more information on user role mapping, refer to Configure role mapping. | |
| role_attribute_strict | No | Yes | Set to true to deny user login if the Grafana org role cannot be extracted using role_attribute_path or org_mapping. For more information on user role mapping, refer to Configure role mapping. | false |
| org_attribute_path | No | No | JMESPath (http://jmespath.org/examples.html) expression to use for Grafana org to role lookup. The result of the evaluation will be mapped to org roles based on org_mapping. For more information on org to role mapping, refer to Org roles mapping example. | |
| org_mapping | No | No | List of comma- or space-separated <ExternalOrgName>:<OrgIdOrName>:<Role> mappings. Value can be *, meaning all users. Role is optional and can have the following values: None, Viewer, Editor or Admin. For more information on external organization to role mapping, refer to Org roles mapping example. | |
| skip_org_role_sync | No | Yes | Set to true to stop automatically syncing user roles. This will allow you to set organization roles for your users from within Grafana manually. | false |
| allowed_groups | No | Yes | List of comma- or space-separated groups. The user should be a member of at least one group to log in. | |
| allowed_domains | No | Yes | List of comma- or space-separated domains. The user should belong to at least one domain to log in. | |
| use_pkce | No | Yes | Set to true to use Proof Key for Code Exchange (PKCE) (https://datatracker.ietf.org/doc/html/rfc7636). Grafana uses the SHA256 based S256 challenge method and a 128 bytes (base64url encoded) code verifier. | true |
| use_refresh_token | No | Yes | Set to true to use refresh token and check access token expiration. | false |
| signout_redirect_url | No | Yes | URL to redirect to after the user logs out. | |
"}
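The Okta record above maps roles with JMESPath expressions such as contains(groups[*], 'Example:DevOps') && 'Admin' || 'None'. The following Python sketch mirrors that mapping logic so it can be sanity-checked against a sample userinfo payload before wiring it into Grafana; the group names are the illustrative 'Example:*' values from the document, not anything Grafana requires, and this is not Grafana's actual implementation.

```python
def map_okta_role(userinfo: dict) -> str:
    """Return the Grafana org role that the example JMESPath expression
    contains(groups[*], 'Example:DevOps') && 'Admin' || 'None' would yield
    for the given userinfo payload."""
    groups = userinfo.get("groups", [])
    if "Example:DevOps" in groups:
        return "Admin"
    # 'None' assigns no role; with role_attribute_strict = true, Grafana
    # denies login when no valid role is returned.
    return "None"


print(map_okta_role({"groups": ["Example:DevOps", "Example:QA"]}))  # Admin
print(map_okta_role({"groups": ["Example:Dev"]}))                   # None
```

This only previews the mapping decision; Grafana itself evaluates the expression first against the ID token and then against the userinfo endpoint response, as the configuration table notes.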
{"questions":"grafana setup documentation Grafana Keycloak Guide aliases keycloak keywords configuration auth keycloak grafana oauth","answers":"---\naliases:\n  - ..\/..\/..\/auth\/keycloak\/\ndescription: Grafana Keycloak Guide\nkeywords:\n  - grafana\n  - keycloak\n  - configuration\n  - documentation\n  - oauth\nlabels:\n  products:\n    - cloud\n    - enterprise\n    - oss\nmenuTitle: Keycloak OAuth2\ntitle: Configure Keycloak OAuth2 authentication\nweight: 1300\n---\n\n# Configure Keycloak OAuth2 authentication\n\nKeycloak OAuth2 authentication allows users to log in to Grafana using their Keycloak credentials. This guide explains how to set up Keycloak as an authentication provider in Grafana.\n\nRefer to [Generic OAuth authentication]() for extra configuration options available for this provider.\n\n\nIf Users use the same email address in Keycloak that they use with other authentication providers (such as Grafana.com), you need to do additional configuration to ensure that the users are matched correctly. Please refer to the [Using the same email address to login with different identity providers]() documentation for more information.\n\n\nYou may have to set the `root_url` option of `[server]` for the callback URL to be\ncorrect. 
For example in case you are serving Grafana behind a proxy.\n\nExample config:\n\n```ini\n[auth.generic_oauth]\nenabled = true\nname = Keycloak-OAuth\nallow_sign_up = true\nclient_id = YOUR_APP_CLIENT_ID\nclient_secret = YOUR_APP_CLIENT_SECRET\nscopes = openid email profile offline_access roles\nemail_attribute_path = email\nlogin_attribute_path = username\nname_attribute_path = full_name\nauth_url = https:\/\/<PROVIDER_DOMAIN>\/realms\/<REALM_NAME>\/protocol\/openid-connect\/auth\ntoken_url = https:\/\/<PROVIDER_DOMAIN>\/realms\/<REALM_NAME>\/protocol\/openid-connect\/token\napi_url = https:\/\/<PROVIDER_DOMAIN>\/realms\/<REALM_NAME>\/protocol\/openid-connect\/userinfo\nrole_attribute_path = contains(roles[*], 'admin') && 'Admin' || contains(roles[*], 'editor') && 'Editor' || 'Viewer'\n```\n\nAs an example, `<PROVIDER_DOMAIN>` can be `keycloak-demo.grafana.org`\nand `<REALM_NAME>` can be `grafana`.\n\nTo configure the `kc_idp_hint` parameter for Keycloak, you need to change the `auth_url` configuration to include the `kc_idp_hint` parameter. For example if you want to hint the Google identity provider:\n\n```ini\nauth_url = https:\/\/<PROVIDER_DOMAIN>\/realms\/<REALM_NAME>\/protocol\/openid-connect\/auth?kc_idp_hint=google\n```\n\n\napi_url is not required if the id_token contains all the necessary user information and can add latency to the login process.\nIt is useful as a fallback or if the user has more than 150 group memberships.\n\n\n## Keycloak configuration\n\n1. 
Create a client in Keycloak with the following settings:\n\n- Client ID: `grafana-oauth`\n- Enabled: `ON`\n- Client Protocol: `openid-connect`\n- Access Type: `confidential`\n- Standard Flow Enabled: `ON`\n- Implicit Flow Enabled: `OFF`\n- Direct Access Grants Enabled: `ON`\n- Root URL: `<grafana_root_url>`\n- Valid Redirect URIs: `<grafana_root_url>\/login\/generic_oauth`\n- Web Origins: `<grafana_root_url>`\n- Admin URL: `<grafana_root_url>`\n- Base URL: `<grafana_root_url>`\n\nAs an example, `<grafana_root_url>` can be `https:\/\/play.grafana.org`.\nNon-listed configuration options can be left at their default values.\n\n2. In the client scopes configuration, _Assigned Default Client Scopes_ should match:\n\n```\nemail\noffline_access\nprofile\nroles\n```\n\n\nThese scopes do not add group claims to the id_token. Without group claims, group synchronization will not work. Group synchronization is covered further down in this document.\n\n\n3. For role mapping to work with the example configuration above,\n   you need to create the following roles and assign them to users:\n\n```\nadmin\neditor\nviewer\n```\n\n## Group synchronization\n\n\nAvailable in [Grafana Enterprise](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/introduction\/grafana-enterprise) and [Grafana Cloud](\/docs\/grafana-cloud\/).\n\n\nBy using group synchronization, you can link your Keycloak groups to teams and roles within Grafana. This allows automatically assigning users to the appropriate teams or granting them the mapped roles.\nThis is useful if you want to give your users access to specific resources based on their group membership.\nTeams and roles get synchronized when the user logs in.\n\nTo enable group synchronization, you need to add a `groups` mapper to the client configuration in Keycloak.\nThis will add the `groups` claim to the id_token. You can then use the `groups` claim to map groups to teams and roles in Grafana.\n\n1. 
In the client configuration, head to `Mappers` and create a mapper with the following settings:\n\n- Name: `Group Mapper`\n- Mapper Type: `Group Membership`\n- Token Claim Name: `groups`\n- Full group path: `OFF`\n- Add to ID token: `ON`\n- Add to access token: `OFF`\n- Add to userinfo: `ON`\n\n2. In Grafana's configuration add the following option:\n\n```ini\n[auth.generic_oauth]\ngroups_attribute_path = groups\n```\n\nIf you use nested groups containing special characters such as quotes or colons, the JMESPath parser can perform a harmless reverse function so Grafana can properly evaluate nested groups. The following example shows a parent group named `Global` with nested group `department` that contains a list of groups:\n\n```ini\n[auth.generic_oauth]\ngroups_attribute_path = reverse(\"Global:department\")\n```\n\nTo learn more about how to configure group synchronization, refer to [Configure team sync]() and [Configure group attribute sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-group-attribute-sync) documentation.\n\n## Enable Single Logout\n\nTo enable Single Logout, you need to add the following option to the configuration of Grafana:\n\n```ini\n[auth.generic_oauth]\nsignout_redirect_url = https:\/\/<PROVIDER_DOMAIN>\/auth\/realms\/<REALM_NAME>\/protocol\/openid-connect\/logout?post_logout_redirect_uri=https%3A%2F%2F<GRAFANA_DOMAIN>%2Flogin\n```\n\nAs an example, `<PROVIDER_DOMAIN>` can be `keycloak-demo.grafana.org`,\n`<REALM_NAME>` can be `grafana` and `<GRAFANA_DOMAIN>` can be `play.grafana.org`.\n\n\nGrafana supports ID token hints for single logout. 
Grafana automatically adds the `id_token_hint` parameter to the logout request if it detects OAuth as the authentication method.\n\n\n## Allow assigning Grafana Admin\n\nIf the application role received by Grafana is `GrafanaAdmin`, Grafana grants the user server administrator privileges.\n\nThis is useful if you want to grant server administrator privileges to a subset of users.\nGrafana also assigns the user the `Admin` role of the default organization.\n\n```ini\nrole_attribute_path = contains(roles[*], 'grafanaadmin') && 'GrafanaAdmin' || contains(roles[*], 'admin') && 'Admin' || contains(roles[*], 'editor') && 'Editor' || 'Viewer'\nallow_assign_grafana_admin = true\n```\n\n### Configure refresh token\n\nWhen a user logs in using an OAuth provider, Grafana verifies that the access token has not expired. When an access token expires, Grafana uses the provided refresh token (if any exists) to obtain a new access token.\n\nGrafana uses a refresh token to obtain a new access token without requiring the user to log in again. If a refresh token doesn't exist, Grafana logs the user out of the system after the access token has expired.\n\nTo enable a refresh token for Keycloak, do the following:\n\n1. Extend the `scopes` in `[auth.generic_oauth]` with `offline_access`.\n\n1. 
Add `use_refresh_token = true` to the `[auth.generic_oauth]` configuration.","site":"grafana setup"}
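The two refresh-token steps above combine into an `[auth.generic_oauth]` fragment along these lines (a sketch only; the Keycloak client settings shown earlier in the guide stay as they are):

```ini
[auth.generic_oauth]
# existing Keycloak settings (client_id, client_secret, auth_url, token_url, api_url) stay unchanged
# offline_access asks Keycloak to issue a refresh token alongside the access token
scopes = openid email profile offline_access roles
# tell Grafana to store and use the refresh token when the access token expires
use_refresh_token = true
```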
{"questions":"grafana setup enterprise saml set up saml with okta enterprise configure saml enterprise saml enterprise saml troubleshoot saml enterprise saml enable saml aliases enterprise saml about saml auth saml enterprise saml configure saml","answers":"---\naliases:\n  - ..\/..\/..\/auth\/saml\/\n  - ..\/..\/..\/enterprise\/configure-saml\/\n  - ..\/..\/..\/enterprise\/saml\/\n  - ..\/..\/..\/enterprise\/saml\/about-saml\/\n  - ..\/..\/..\/enterprise\/saml\/configure-saml\/\n  - ..\/..\/..\/enterprise\/saml\/enable-saml\/\n  - ..\/..\/..\/enterprise\/saml\/set-up-saml-with-okta\/\n  - ..\/..\/..\/enterprise\/saml\/troubleshoot-saml\/\ndescription: Learn how to configure SAML authentication in Grafana's configuration\n  file.\nlabels:\n  products:\n    - cloud\n    - enterprise\nmenuTitle: SAML\ntitle: Configure SAML authentication using the configuration file\nweight: 500\n---\n\n# Configure SAML authentication using the configuration file\n\n\nAvailable in [Grafana Enterprise]() and [Grafana Cloud](\/docs\/grafana-cloud).\n\n\nSAML authentication integration allows your Grafana users to log in by using an external SAML 2.0 Identity Provider (IdP). To enable this, Grafana becomes a Service Provider (SP) in the authentication flow, interacting with the IdP to exchange user information.\n\nYou can configure SAML authentication in Grafana through one of the following methods:\n\n- the Grafana configuration file\n- the API (refer to [SSO Settings API]())\n- the user interface (refer to [Configure SAML authentication using the Grafana user interface]())\n- the Terraform provider (refer to [Terraform docs](https:\/\/registry.terraform.io\/providers\/grafana\/grafana\/latest\/docs\/resources\/sso_settings))\n\n\nThe API and Terraform support are available in Public Preview in Grafana v11.1 behind the `ssoSettingsSAML` feature toggle. 
You must also enable the `ssoSettingsApi` flag.\n\n\nAll methods offer the same configuration options, but you might prefer using the Grafana configuration file or the Terraform provider if you want to keep all of Grafana's authentication settings in one place. Grafana Cloud users do not have access to the Grafana configuration file, so they should configure SAML through the other methods.\n\n\nConfiguration in the API takes precedence over the configuration in the Grafana configuration file. SAML settings from the API will override any SAML configuration set in the Grafana configuration file.\n\n\n## Supported SAML\n\nGrafana supports the following SAML 2.0 bindings:\n\n- From the Service Provider (SP) to the Identity Provider (IdP):\n\n  - `HTTP-POST` binding\n  - `HTTP-Redirect` binding\n\n- From the Identity Provider (IdP) to the Service Provider (SP):\n  - `HTTP-POST` binding\n\nIn terms of security:\n\n- Grafana supports signed and encrypted assertions.\n- Grafana does not support signed or encrypted requests.\n\nIn terms of initiation, Grafana supports:\n\n- SP-initiated requests\n- IdP-initiated requests\n\nBy default, SP-initiated requests are enabled. For instructions on how to enable IdP-initiated logins, see [IdP-initiated Single Sign-On (SSO)]().\n\n\nIt is possible to set up Grafana with SAML authentication using Azure AD. However, if an Azure AD user belongs to more than 150 groups, Azure AD shares a link to a Graph API endpoint instead of including the groups in the token.\n\nGrafana versions 11.1 and below do not support fetching the groups from the Graph API endpoint. As a result, users with more than 150 groups will not be able to retrieve their groups. 
Instead, it is recommended that you use OIDC\/OAuth workflows.\n\nAs of Grafana 11.2, the SAML integration offers a mechanism to retrieve user groups from the Graph API.\n\nRelated links:\n\n- [Azure AD SAML limitations](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/id-token-claims-reference#groups-overage-claim)\n- [Set up SAML with Azure AD]()\n- [Configure a Graph API application in Azure AD]()\n\n### Edit SAML options in the Grafana config file\n\n1. In the `[auth.saml]` section in the Grafana configuration file, set [`enabled`]() to `true`.\n1. Configure the [certificate and private key]().\n1. On the Okta application page to which you were redirected after creating the application, navigate to the **Sign On** tab and find the **Identity Provider metadata** link in the **Settings** section.\n1. Set the [`idp_metadata_url`]() to the URL obtained from the previous step. The URL should look like `https:\/\/<your-org-id>.okta.com\/app\/<application-id>\/sso\/saml\/metadata`.\n1. Set the following options to the attribute names configured in **step 10** of the SAML integration setup. You can find these attributes on the **General** tab of the application page (**ATTRIBUTE STATEMENTS** and **GROUP ATTRIBUTE STATEMENTS** in the **SAML Settings** section).\n   - [`assertion_attribute_login`]()\n   - [`assertion_attribute_email`]()\n   - [`assertion_attribute_name`]()\n   - [`assertion_attribute_groups`]()\n1. (Optional) Set the `name` parameter in the `[auth.saml]` section in the Grafana configuration file. This parameter replaces SAML in the Grafana user interface in locations such as the sign-in button.\n1. 
Save the configuration file and then restart the Grafana server.\n\nWhen you are finished, the Grafana configuration might look like this example:\n\n```ini\n[server]\nroot_url = https:\/\/grafana.example.com\n\n[auth.saml]\nenabled = true\nname = My IdP\nauto_login = false\nprivate_key_path = \"\/path\/to\/private_key.pem\"\ncertificate_path = \"\/path\/to\/certificate.cert\"\nidp_metadata_url = \"https:\/\/my-org.okta.com\/app\/my-application\/sso\/saml\/metadata\"\nassertion_attribute_name = DisplayName\nassertion_attribute_login = Login\nassertion_attribute_email = Email\nassertion_attribute_groups = Group\n```\n\n## Enable SAML authentication in Grafana\n\nTo use the SAML integration, in the `[auth.saml]` section of the Grafana custom configuration file, set `enabled` to `true`.\n\nRefer to [Configuration]() for more information about configuring Grafana.\n\n## Additional configuration for HTTP-Post binding\n\nIf multiple bindings are supported for SAML Single Sign-On (SSO) by the Identity Provider (IdP), Grafana will use the `HTTP-Redirect` binding by default. If the IdP only supports the `HTTP-Post` binding, then updating the `content_security_policy_template` (in case `content_security_policy = true`) and `content_security_policy_report_only_template` (in case `content_security_policy_report_only = true`) might be required to allow Grafana to initiate a POST request to the IdP. These settings are used to define the [Content Security Policy (CSP)](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Headers\/Content-Security-Policy) headers that are sent by Grafana.\n\nTo allow Grafana to initiate a POST request to the IdP, update the `content_security_policy_template` and `content_security_policy_report_only_template` settings in the Grafana configuration file and add the IdP's domain to the `form-action` directive. By default, the `form-action` directive is set to `self`, which only allows POST requests to the same domain as Grafana. 
To allow POST requests to the IdP's domain, update the `form-action` directive to include the IdP's domain, for example: `form-action 'self' https:\/\/idp.example.com`.\n\n\nFor Grafana Cloud instances, please contact Grafana Support to update the `content_security_policy_template` and `content_security_policy_report_only_template` settings of your Grafana instance. Please provide the metadata URL\/file of your IdP.\n\n\n## Certificate and private key\n\nThe SAML SSO standard uses asymmetric encryption to exchange information between the SP (Grafana) and the IdP. To perform such encryption, you need a public part and a private part. In this case, the X.509 certificate provides the public part, while the private key provides the private part. The private key needs to be issued in a [PKCS#8](https:\/\/en.wikipedia.org\/wiki\/PKCS_8) format.\n\nGrafana supports two ways of specifying both the `certificate` and `private_key`.\n\n- Without a suffix (`certificate` or `private_key`), the configuration assumes you've supplied the base64-encoded file contents.\n- With the `_path` suffix (`certificate_path` or `private_key_path`), Grafana treats the value entered as a file path and attempts to read the file from the file system.\n\n\nYou can only use one form of each configuration option. 
Using multiple forms, such as both `certificate` and `certificate_path`, results in an error.\n\n\n---\n\n### Generate private key for SAML authentication\n\nAn example of how to generate a self-signed certificate and private key that's valid for one year:\n\n```sh\n$ openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes\n```\n\nThe generated `key.pem` and `cert.pem` files are then used for `certificate` and `private_key`.\n\nThe key you provide should look like:\n\n```\n-----BEGIN PRIVATE KEY-----\n...\n...\n-----END PRIVATE KEY-----\n```\n\n## Set up SAML with Azure AD\n\nGrafana supports user authentication through Azure AD, which is useful when you want users to access Grafana using single sign-on. This topic shows you how to configure SAML authentication in Grafana with [Azure AD](https:\/\/azure.microsoft.com\/en-us\/services\/active-directory\/).\n\n**Before you begin:**\n\n- Ensure you have permission to administer SAML authentication. For more information about roles and permissions in Grafana, refer to:\n  - [Roles and permissions]()\n- Learn the limitations of Azure AD SAML integration.\n  - [Azure AD SAML limitations](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/id-token-claims-reference#groups-overage-claim)\n- To configure SAML integration with Azure AD, create an app integration inside the Azure AD organization first.\n  - [Add app integration in Azure AD](https:\/\/docs.microsoft.com\/en-us\/azure\/active-directory\/manage-apps\/add-application-portal-configure)\n- If you have users that belong to more than 150 groups, you need to register an application that uses the Azure Graph API to retrieve the groups.\n  - [Setup Azure AD Graph API applications]()\n\n### Generate self-signed certificates\n\nAzure AD requires a certificate to sign the SAML requests. 
You can generate a self-signed certificate using the following command:\n\n```sh\n$ openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes\n```\n\nThis will generate a `key.pem` and `cert.pem` file that you can use for the `private_key_path` and `certificate_path` configuration options.\n\n### Add Microsoft Entra SAML Toolkit from the gallery\n\n> Taken from https:\/\/learn.microsoft.com\/en-us\/entra\/identity\/saas-apps\/saml-toolkit-tutorial#add-microsoft-entra-saml-toolkit-from-the-gallery\n\n1. Go to the [Azure portal](https:\/\/portal.azure.com\/#home) and sign in with your Azure AD account.\n1. Search for **Enterprise Applications**.\n1. In the **Enterprise applications** pane, select **New application**.\n1. In the search box, enter **SAML Toolkit**, and then select the **Microsoft Entra SAML Toolkit** from the results panel.\n1. Add a descriptive name and select **Create**.\n\n### Configure the SAML Toolkit application endpoints\n\nIn order to validate Azure AD users with Grafana, you need to configure the SAML Toolkit application endpoints by creating a new SAML integration in the Azure AD organization.\n\n> For the following configuration, we will use `https:\/\/localhost` as the Grafana URL. Replace it with your Grafana URL.\n\n1. In the **SAML Toolkit application**, select **Set up single sign-on**.\n1. In the **Single sign-on** pane, select **SAML**.\n1. In the Set up **Single Sign-On with SAML** pane, select the pencil icon for **Basic SAML Configuration** to edit the settings.\n1. 
In the **Basic SAML Configuration** pane, click on the **Edit** button and update the following fields:\n   - In the **Identifier (Entity ID)** field, enter `https:\/\/localhost\/saml\/metadata`.\n   - In the **Reply URL (Assertion Consumer Service URL)** field, enter `https:\/\/localhost\/saml\/acs`.\n   - In the **Sign on URL** field, enter `https:\/\/localhost`.\n   - In the **Relay State** field, enter `https:\/\/localhost`.\n   - In the **Logout URL** field, enter `https:\/\/localhost\/saml\/slo`.\n1. Select **Save**.\n1. At the **SAML Certificate** section, copy the **App Federation Metadata Url**.\n   - Use this URL in the `idp_metadata_url` field in the `custom.ini` file.\n\n### Configure a Graph API application in Azure AD\n\nWhile an Azure AD tenant can be configured in Grafana via SAML, some additional information is only accessible via the Graph API. To retrieve this information, create a new application in Azure AD and grant it the necessary permissions.\n\n> [Azure AD SAML limitations](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/id-token-claims-reference#groups-overage-claim)\n\n> For the following configuration, the URL `https:\/\/localhost` will be used as the Grafana URL. Replace it with your Grafana instance URL.\n\n#### Create a new Application registration\n\nThis app registration will be used as a Service Account to retrieve more information about the user from the Azure AD.\n\n1. Go to the [Azure portal](https:\/\/portal.azure.com\/#home) and sign in with your Azure AD account.\n1. In the left-hand navigation pane, select the Azure Active Directory service, and then select **App registrations**.\n1. Click the **New registration** button.\n1. In the **Register an application** pane, enter a name for the application.\n1. In the **Supported account types** section, select the account types that can use the application.\n1. In the **Redirect URI** section, select Web and enter `https:\/\/localhost\/login\/azuread`.\n1. 
Click the **Register** button.\n\n#### Set up permissions for the application\n\n1. In the overview pane, look for the **API permissions** section and select **Add a permission**.\n1. In the **Request API permissions** pane, select **Microsoft Graph**, and click **Application permissions**.\n1. In the **Select permissions** pane, under the **GroupMember** section, select **GroupMember.Read.All**.\n1. In the **Select permissions** pane, under the **User** section, select **User.Read.All**.\n1. Click the **Add permissions** button at the bottom of the page.\n1. In the **Request API permissions** pane, select **Microsoft Graph**, and click **Delegated permissions**.\n1. In the **Select permissions** pane, under the **User** section, select **User.Read**.\n1. Click the **Add permissions** button at the bottom of the page.\n1. In the **API permissions** section, select **Grant admin consent for <your-organization>**.\n\nThe following table shows what the permissions look like from the Azure AD portal:\n\n| Permissions name       | Type        | Admin consent required | Status  |\n| ---------------------- | ----------- | ---------------------- | ------- |\n| `GroupMember.Read.All` | Application | Yes                    | Granted |\n| `User.Read`            | Delegated   | No                     | Granted |\n| `User.Read.All`        | Application | Yes                    | Granted |\n\n\n\n#### Generate a client secret\n\n1. In the **Overview** pane, select **Certificates & secrets**.\n1. Select **New client secret**.\n1. In the **Add a client secret** pane, enter a description for the secret.\n1. Set the expiration date for the secret.\n1. Select **Add**.\n1. Copy the value of the secret. This value is used in the `client_secret` field in the `custom.ini` file.\n\n## Set up SAML with Okta\n\nGrafana supports user authentication through Okta, which is useful when you want your users to access Grafana using single sign-on. 
This guide walks you through the steps of configuring SAML authentication in Grafana with [Okta](https:\/\/okta.com\/). You need to be an admin in your Okta organization to access the Admin Console and create a SAML integration. You also need permissions to edit the Grafana config file and restart the Grafana server.\n\n**Before you begin:**\n\n- To configure SAML integration with Okta, create an app integration inside the Okta organization first. [Add app integration in Okta](https:\/\/help.okta.com\/en\/prod\/Content\/Topics\/Apps\/apps-overview-add-apps.htm)\n- Ensure you have permission to administer SAML authentication. For more information about roles and permissions in Grafana, refer to [Roles and permissions]().\n\n**To set up SAML with Okta:**\n\n1. Log in to the [Okta portal](https:\/\/login.okta.com\/).\n1. Go to the Admin Console in your Okta organization by clicking **Admin** in the upper-right corner. If you are in the Developer Console, then click **Developer Console** in the upper-left corner and then click **Classic UI** to switch over to the Admin Console.\n1. In the Admin Console, navigate to **Applications** > **Applications**.\n1. Click **Create App Integration** to start the Application Integration Wizard.\n1. Choose **SAML 2.0** as the **Sign-in method**.\n1. Click **Create**.\n1. On the **General Settings** tab, enter a name for your Grafana integration. You can also upload a logo.\n1. On the **Configure SAML** tab, enter the SAML information related to your Grafana instance:\n\n   - In the **Single sign on URL** field, use the `\/saml\/acs` endpoint URL of your Grafana instance, for example, `https:\/\/grafana.example.com\/saml\/acs`.\n   - In the **Audience URI (SP Entity ID)** field, use the `\/saml\/metadata` endpoint URL; by default, it is the `\/saml\/metadata` endpoint of your Grafana instance (for example `https:\/\/example.grafana.com\/saml\/metadata`). 
This could be configured differently, but the value here must match the `entity_id` setting in the SAML settings of Grafana.\n   - Leave the default values for **Name ID format** and **Application username**.\n     \n     If you plan to enable SAML Single Logout, consider setting the **Name ID format** to `EmailAddress` or `Persistent`. This must match the `name_id_format` setting of the Grafana instance.\n     \n   - In the **ATTRIBUTE STATEMENTS (OPTIONAL)** section, enter the SAML attributes to be shared with Grafana. The attribute names in Okta need to match exactly what is defined within Grafana, for example:\n\n     | Attribute name (in Grafana) | Name and value (in Okta profile)                     | Grafana configuration (under `auth.saml`) |\n     | --------------------------- | ---------------------------------------------------- | ----------------------------------------- |\n     | Login                       | Login - `user.login`                                 | `assertion_attribute_login = Login`       |\n     | Email                       | Email - `user.email`                                 | `assertion_attribute_email = Email`       |\n     | DisplayName                 | DisplayName - `user.firstName + \" \" + user.lastName` | `assertion_attribute_name = DisplayName`  |\n\n   - In the **GROUP ATTRIBUTE STATEMENTS (OPTIONAL)** section, enter a group attribute name (for example, `Group`; ensure it matches the `assertion_attribute_groups` setting in Grafana) and set the filter to `Matches regex .*` to return all user groups.\n\n1. Click **Next**.\n1. On the final Feedback tab, fill out the form and then click **Finish**.\n\n### Signature algorithm\n\nThe SAML standard recommends using a digital signature for some types of messages, like authentication or logout requests. If the `signature_algorithm` option is configured, Grafana will put a digital signature into SAML requests. Supported signature types are `rsa-sha1`, `rsa-sha256`, `rsa-sha512`. 
This option should match your IdP configuration, otherwise, signature validation will fail. Grafana uses key and certificate configured with `private_key` and `certificate` options for signing SAML requests.\n\n### Specify user's Name ID\n\nThe `name_id_format` configuration field specifies the format of the NameID element in the SAML assertion.\n\nBy default, this is set to `urn:oasis:names:tc:SAML:2.0:nameid-format:transient` and does not need to be specified in the configuration file.\n\nThe following list includes valid configuration field values:\n\n| `name_id_format` value in the configuration file or Terraform | `Name identifier format` on the UI |\n| ------------------------------------------------------------- | ---------------------------------- |\n| `urn:oasis:names:tc:SAML:2.0:nameid-format:transient`         | Default                            |\n| `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified`       | Unspecified                        |\n| `urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress`      | Email address                      |\n| `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent`        | Persistent                         |\n| `urn:oasis:names:tc:SAML:2.0:nameid-format:transient`         | Transient                          |\n\n### IdP metadata\n\nYou also need to define the public part of the IdP for message verification. The SAML IdP metadata XML defines where and how Grafana exchanges user information.\n\nGrafana supports three ways of specifying the IdP metadata.\n\n- Without a suffix `idp_metadata`, Grafana assumes base64-encoded XML file contents.\n- With the `_path` suffix, Grafana assumes a file path and attempts to read the file from the file system.\n- With the `_url` suffix, Grafana assumes a URL and attempts to load the metadata from the given location.\n\n### Maximum issue delay\n\nPrevents SAML response replay attacks and internal clock skews between the SP (Grafana) and the IdP. 
You can set a maximum amount of time between the IdP issuing a response and the SP (Grafana) processing it.\n\nThe configuration option is specified as a duration, such as `max_issue_delay = 90s` or `max_issue_delay = 1h`.\n\n### Metadata valid duration\n\nSP metadata is likely to expire at some point, perhaps due to a certificate rotation or change of location binding. Grafana allows you to specify for how long the metadata should be valid. Leveraging the `validUntil` field, you can tell consumers until when your metadata is going to be valid. The expiry time is computed by adding the duration to the current time.\n\nThe configuration option is specified as a duration, such as `metadata_valid_duration = 48h`.\n\n### Identity provider (IdP) registration\n\nFor the SAML integration to work correctly, you need to make the IdP aware of the SP.\n\nThe integration provides two key endpoints as part of Grafana:\n\n- The `\/saml\/metadata` endpoint, which contains the SP metadata. You can either download and upload it manually, or make the IdP request it directly from the endpoint. Some providers name it Identifier or Entity ID.\n- The `\/saml\/acs` endpoint, which is intended to receive the ACS (Assertion Consumer Service) callback. Some providers name it SSO URL or Reply URL.\n\n### IdP-initiated Single Sign-On (SSO)\n\nBy default, Grafana allows only service provider (SP) initiated logins (when the user logs in with SAML via Grafana\u2019s login page). If you want users to log in to Grafana directly from your identity provider (IdP), set the `allow_idp_initiated` configuration option to `true` and configure `relay_state` with the same value specified in the IdP configuration.\n\nIdP-initiated SSO has some security risks, so make sure you understand the risks before enabling this feature. When using IdP-initiated SSO, Grafana receives unsolicited SAML requests and can't verify that the login flow was started by the user. 
This makes it hard to detect whether the SAML message has been stolen or replaced. Because of this, IdP-initiated SSO is vulnerable to login cross-site request forgery (CSRF) and man-in-the-middle (MITM) attacks. We do not recommend using IdP-initiated SSO; keep it disabled whenever possible.\n\n### Single logout\n\nSAML's single logout feature allows users to log out from all applications associated with the current IdP session established via SAML SSO. If the `single_logout` option is set to `true` and a user logs out, Grafana requests the IdP to end the user session, which in turn triggers logout from all other applications the user is logged into using the same IdP session (provided those applications support single logout). Conversely, if another application connected to the same IdP logs out using single logout, Grafana receives a logout request from the IdP and ends the user session.\n\n`HTTP-Redirect` and `HTTP-POST` bindings are supported for single logout.\nWhen using the `HTTP-Redirect` binding, the query should include a request signature.\n\n### Assertion mapping\n\nDuring the SAML SSO authentication flow, Grafana receives the ACS callback. The callback contains all the relevant information about the user under authentication, embedded in the SAML response. Grafana parses the response to create (or update) the user within its internal database.\n\nFor Grafana to map the user information, it looks at the individual attributes within the assertion. You can think of these attributes as key\/value pairs (although they contain more information than that).\n\nGrafana provides configuration options that let you modify which keys to look at for these values. 
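For example, the default assertion mappings, written out explicitly, correspond to the following configuration (these are the defaults, so they don't need to be set):

```ini
[auth.saml]
# Default attribute mappings (shown explicitly for illustration)
assertion_attribute_name = displayName
assertion_attribute_login = mail
assertion_attribute_email = mail
```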
The data Grafana needs to create the user is the name, login handle, and email.\n\n#### The `assertion_attribute_name` option\n\n`assertion_attribute_name` is a special assertion mapping that can either be a simple key, indicating a mapping to a single assertion attribute on the SAML response, or a complex template with variables using the `$__saml{<attribute>}` syntax. If this property is misconfigured, Grafana will log an error message on startup and disallow SAML sign-ins. Grafana will also log errors after a login attempt if a variable in the template is missing from the SAML response.\n\n**Examples**\n\n```ini\n#plain string mapping\nassertion_attribute_name = displayName\n```\n\n```ini\n#template mapping\nassertion_attribute_name = $__saml{firstName} $__saml{lastName}\n```\n\n### Allow new user signups\n\nBy default, new Grafana users using SAML authentication will have an account created for them automatically. To decouple authentication and account creation and ensure only users with existing accounts can log in with SAML, set the `allow_sign_up` option to `false`.\n\n### Configure automatic login\n\nSet the `auto_login` option to `true` to attempt login automatically, skipping the login screen.\nThis setting is ignored if multiple auth providers are configured to use auto login.\n\n```ini\nauto_login = true\n```\n\n### Configure group synchronization\n\nGroup synchronization allows you to map user groups from an identity provider to Grafana teams and roles.\n\nTo use SAML group synchronization, set [`assertion_attribute_groups`]() to the attribute name where you store user groups.\nGrafana then uses the attribute values extracted from the SAML assertion to add users to Grafana teams and grant them roles.\n\n\nTeam sync allows you to sync users from SAML to Grafana teams. It does not automatically create teams in Grafana. 
You need to create teams in Grafana before you can use this feature.\n\n\nGiven the following partial SAML assertion:\n\n```xml\n<saml2:Attribute\n    Name=\"groups\"\n    NameFormat=\"urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified\">\n    <saml2:AttributeValue\n        xmlns:xs=\"http:\/\/www.w3.org\/2001\/XMLSchema\"\n        xmlns:xsi=\"http:\/\/www.w3.org\/2001\/XMLSchema-instance\"\n        xsi:type=\"xs:string\">admins_group\n    <\/saml2:AttributeValue>\n    <saml2:AttributeValue\n        xmlns:xs=\"http:\/\/www.w3.org\/2001\/XMLSchema\"\n        xmlns:xsi=\"http:\/\/www.w3.org\/2001\/XMLSchema-instance\"\n        xsi:type=\"xs:string\">division_1\n    <\/saml2:AttributeValue>\n<\/saml2:Attribute>\n```\n\nThe configuration would look like this:\n\n```ini\n[auth.saml]\n# ...\nassertion_attribute_groups = groups\n```\n\nThe following `External Group ID`s would be valid for configuring team sync or role sync in Grafana:\n\n- `admins_group`\n- `division_1`\n\nTo learn more about how to configure group synchronization, refer to the [Configure team sync]() and [Configure group attribute sync](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/setup-grafana\/configure-security\/configure-group-attribute-sync) documentation.\n\n### Configure role sync\n\nRole sync allows you to map user roles from an identity provider to Grafana. To enable role sync, configure the role attribute and the possible values for the Editor, Admin, and Grafana Admin roles. For more information about user roles, refer to [Roles and permissions]().\n\n1. In the configuration file, set the [`assertion_attribute_role`]() option to the attribute name the role information will be extracted from.\n1. Set the [`role_values_none`]() option to the values mapped to the `None` role.\n1. Set the [`role_values_viewer`]() option to the values mapped to the `Viewer` role.\n1. Set the [`role_values_editor`]() option to the values mapped to the `Editor` role.\n1. 
Set the [`role_values_admin`]() option to the values mapped to the organization `Admin` role.\n1. Set the [`role_values_grafana_admin`]() option to the values mapped to the `Grafana Admin` role.\n\nIf a user role doesn't match any of the configured values, then the role specified by the `auto_assign_org_role` configuration option is assigned. If the `auto_assign_org_role` field is not set, the user role defaults to `Viewer`.\n\nFor more information about roles and permissions in Grafana, refer to [Roles and permissions]().\n\nExample configuration:\n\n```ini\n[auth.saml]\nassertion_attribute_role = role\nrole_values_none = none\nrole_values_viewer = external\nrole_values_editor = editor, developer\nrole_values_admin = admin, operator\nrole_values_grafana_admin = superadmin\n```\n\n**Important**: When role sync is configured, any changes to user roles and organization membership made manually in Grafana are overwritten on the next user login. Assign user organizations and roles in the IdP instead.\n\nIf you don't want user organizations and roles to be synchronized with the IdP, you can use the `skip_org_role_sync` configuration option.\n\nExample configuration:\n\n```ini\n[auth.saml]\nskip_org_role_sync = true\n```\n\n### Configure organization mapping\n\nOrganization mapping allows you to assign users to a particular Grafana organization depending on an attribute value obtained from the identity provider.\n\n1. In the configuration file, set [`assertion_attribute_org`]() to the attribute name where you store the organization information. This attribute can be an array if you want a user to be in multiple organizations.\n1. Set the [`org_mapping`]() option to a comma-separated list of `Organization:OrgId` pairs to map organizations from the IdP to Grafana organizations specified by ID. 
If you want users to have different roles in multiple organizations, you can set this option to a comma-separated list of `Organization:OrgId:Role` mappings.\n\nFor example, the following configuration assigns users from the `Engineering` organization to the Grafana organization with ID `2` as Editor, and users from `Sales` to the organization with ID `3` as Admin, based on the `Org` assertion attribute value:\n\n```ini\n[auth.saml]\nassertion_attribute_org = Org\norg_mapping = Engineering:2:Editor, Sales:3:Admin\n```\n\nYou can specify multiple organizations both for the IdP and Grafana:\n\n- `org_mapping = Engineering:2, Sales:2` to map users from `Engineering` and `Sales` to the organization with ID `2` in Grafana.\n- `org_mapping = Engineering:2, Engineering:3` to assign users from `Engineering` to both organizations `2` and `3` in Grafana.\n\nYou can use `*` as the SAML Organization if you want all your users to be in a given Grafana organization with a default role:\n\n- `org_mapping = *:2:Editor` to map all users to the organization with ID `2` in Grafana as Editors.\n\nYou can use `*` as the Grafana organization in the mapping if you want all users from a given SAML Organization to be added to all existing Grafana organizations.\n\n- `org_mapping = Engineering:*` to map users from `Engineering` to all existing Grafana organizations.\n- `org_mapping = Administration:*:Admin` to map users from `Administration` to all existing Grafana organizations as Admins.\n\n### Configure allowed organizations\n\nWith the [`allowed_organizations`]() option, you can specify a list of organizations; a user must be a member of at least one of them to be able to log in to Grafana.\n\nTo put values containing spaces in the list, use the following JSON syntax:\n\n```ini\nallowed_organizations = [\"org 1\", \"second org\"]\n```\n\n### Example SAML configuration\n\n```ini\n[auth.saml]\nenabled = true\nauto_login = false\ncertificate_path = \"\/path\/to\/certificate.cert\"\nprivate_key_path = \"\/path\/to\/private_key.pem\"\nidp_metadata_path = 
\"\/my\/metadata.xml\"\nmax_issue_delay = 90s\nmetadata_valid_duration = 48h\nassertion_attribute_name = displayName\nassertion_attribute_login = mail\nassertion_attribute_email = mail\n\nassertion_attribute_groups = Group\nassertion_attribute_role = Role\nassertion_attribute_org = Org\nrole_values_viewer = external\nrole_values_editor = editor, developer\nrole_values_admin = admin, operator\nrole_values_grafana_admin = superadmin\norg_mapping = Engineering:2:Editor, Engineering:3:Viewer, Sales:3:Editor, *:1:Editor\nallowed_organizations = Engineering, Sales\n```\n\n### Example SAML configuration in Terraform\n\n\nAvailable in Public Preview in Grafana v11.1 behind the `ssoSettingsSAML` feature toggle. Supported in the Terraform provider since v2.17.0.\n\n\n```terraform\nresource \"grafana_sso_settings\" \"saml_sso_settings\" {\n  provider_name = \"saml\"\n  saml_settings {\n    name                       = \"SAML\"\n    auto_login                 = false\n    certificate_path           = \"\/path\/to\/certificate.cert\"\n    private_key_path           = \"\/path\/to\/private_key.pem\"\n    idp_metadata_path          = \"\/my\/metadata.xml\"\n    max_issue_delay            = \"90s\"\n    metadata_valid_duration    = \"48h\"\n    assertion_attribute_name   = \"displayName\"\n    assertion_attribute_login  = \"mail\"\n    assertion_attribute_email  = \"mail\"\n    assertion_attribute_groups = \"Group\"\n    assertion_attribute_role   = \"Role\"\n    assertion_attribute_org    = \"Org\"\n    role_values_editor         = \"editor, developer\"\n    role_values_admin          = \"admin, operator\"\n    role_values_grafana_admin  = \"superadmin\"\n    org_mapping                = \"Engineering:2:Editor, Engineering:3:Viewer, Sales:3:Editor, *:1:Editor\"\n    allowed_organizations      = \"Engineering, Sales\"\n  }\n}\n```\n\nGo to [Terraform Registry](https:\/\/registry.terraform.io\/providers\/grafana\/grafana\/latest\/docs\/resources\/sso_settings) for a complete 
reference on using the `grafana_sso_settings` resource.\n\n## Troubleshoot SAML authentication in Grafana\n\nTo troubleshoot and get more log information, enable SAML debug logging in the configuration file. Refer to [Configuration]() for more information.\n\n```ini\n[log]\nfilters = saml.auth:debug\n```\n\n## Troubleshooting\n\nThe following are common issues encountered when configuring SAML authentication in Grafana, and how to resolve them.\n\n### Infinite redirect loop \/ User gets redirected to the login page after successful login on the IdP side\n\nIf you experience an infinite redirect loop when `auto_login = true`, or you are redirected to the login page after a successful login, it is likely that the `grafana_session` cookie's SameSite setting is set to `Strict`. This setting prevents the `grafana_session` cookie from being sent to Grafana during cross-site requests. To resolve this issue, set the `security.cookie_samesite` option to `Lax` in the Grafana configuration file.\n\n### SAML authentication fails with error:\n\n- `asn1: structure error: tags don't match`\n\nGrafana supports only one private key format: PKCS#8.\n\nThe keys may be in a different format (PKCS#1 or PKCS#12); in that case, it may be necessary to convert the private key format.\n\nThe following command creates a PKCS#8 key file.\n\n```bash\nopenssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes\n```\n\n#### Convert the private key format to base64\n\nThe following command converts keys to base64 format.\n\nBase64-encode the `cert.pem` and `key.pem` files (the `-w0` switch is needed on Linux only; omit it on macOS):\n\n```sh\n$ base64 -w0 key.pem > key.pem.base64\n$ base64 -w0 cert.pem > cert.pem.base64\n```\n\nThe base64-encoded values (the `key.pem.base64` and `cert.pem.base64` files) are then used for the `certificate` and `private_key` options.\n\nThe keys you provide should look like:\n\n```\n-----BEGIN PRIVATE KEY-----\n...\n...\n-----END PRIVATE KEY-----\n```\n\n### SAML login attempts fail with request response 
\"origin not allowed\"\n\nWhen the user logs in using SAML and gets presented with \"origin not allowed\", the user might be issuing the login from an IdP (identity provider) service or the user is behind a reverse proxy. This potentially happens as Grafana's CSRF checks deem the requests to be invalid. For more information [CSRF](https:\/\/owasp.org\/www-community\/attacks\/csrf).\n\nTo solve this issue, you can configure either the [`csrf_trusted_origins`]() or [`csrf_additional_headers`]() option in the SAML configuration.\n\nExample of a configuration file:\n\n```bash\n# config.ini\n...\n[security]\ncsrf_trusted_origins = https:\/\/grafana.example.com\ncsrf_additional_headers = X-Forwarded-Host\n...\n```\n\n### SAML login attempts fail with request response \"login session has expired\"\n\nAccessing the Grafana login page from a URL that is not the root URL of the\nGrafana server can cause the instance to return the following error: \"login session has expired\".\n\nIf you are accessing grafana through a proxy server, ensure that cookies are correctly\nrewritten to the root URL of Grafana.\nCookies must be set on the same url as the `root_url` of Grafana. This is normally the reverse proxy's domain\/address.\n\nReview the cookie settings in your proxy server configuration to ensure that cookies are\nnot being discarded\n\nReview the following settings in your grafana config:\n\n```ini\n[security]\ncookie_samesite = none\n```\n\nThis setting should be set to none to allow grafana session cookies to work correctly with redirects.\n\n```ini\n[security]\ncookie_secure = true\n```\n\nEnsure cookie_secure is set to true to ensure that cookies are only sent over HTTPS.\n\n## Configure SAML authentication in Grafana\n\nThe table below describes all SAML configuration options. Continue reading below for details on specific options. 
Like any other Grafana configuration, you can apply these options as [environment variables]().\n\n| Setting                                                    | Required | Description                                                                                                                                                                                                  | Default                                               |\n| ---------------------------------------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------- |\n| `enabled`                                                  | No       | Whether SAML authentication is allowed.                                                                                                                                                                      | `false`                                               |\n| `name`                                                     | No       | Name used to refer to the SAML authentication in the Grafana user interface.                                                                                                                                 | `SAML`                                                |\n| `entity_id`                                                | No       | The entity ID of the service provider. This is the unique identifier of the service provider.                                                                                                                | `https:\/\/{Grafana URL}\/saml\/metadata`                 |\n| `single_logout`                                            | No       | Whether SAML Single Logout is enabled.                                                                                               
                                                                        | `false`                                               |\n| `allow_sign_up`                                            | No       | Whether to allow new Grafana user creation through SAML login. If set to `false`, then only existing Grafana users can log in with SAML.                                                                     | `true`                                                |\n| `auto_login`                                               | No       | Whether SAML auto login is enabled.                                                                                                                                                                          | `false`                                               |\n| `allow_idp_initiated`                                      | No       | Whether SAML IdP-initiated login is allowed.                                                                                                                                                                 | `false`                                               |\n| `certificate` or `certificate_path`                        | Yes      | Base64-encoded string or Path for the SP X.509 certificate.                                                                                                                                                  |                                                       |\n| `private_key` or `private_key_path`                        | Yes      | Base64-encoded string or Path for the SP private key.                                                                                                                                                        |                                                       |\n| `signature_algorithm`                                      | No       | Signature algorithm used for signing requests to the IdP. 
Supported values are `rsa-sha1`, `rsa-sha256`, and `rsa-sha512`. |  |\n| `idp_metadata`, `idp_metadata_path`, or `idp_metadata_url` | Yes | Base64-encoded string, path, or URL for the IdP SAML metadata XML. |  |\n| `max_issue_delay` | No | Maximum time allowed between the issuance of an AuthnRequest by the SP and the processing of the Response. | `90s` |\n| `metadata_valid_duration` | No | Duration for which the SP metadata remains valid. | `48h` |\n| `relay_state` | No | Relay state for IdP-initiated login. This should match the relay state configured in the IdP. |  |\n| `assertion_attribute_name` | No | Friendly name or name of the attribute within the SAML assertion to use as the user name. Alternatively, this can be a template with variables that match the names of attributes within the SAML assertion. 
| `displayName` |\n| `assertion_attribute_login` | No | Friendly name or name of the attribute within the SAML assertion to use as the user login handle. | `mail` |\n| `assertion_attribute_email` | No | Friendly name or name of the attribute within the SAML assertion to use as the user email. | `mail` |\n| `assertion_attribute_groups` | No | Friendly name or name of the attribute within the SAML assertion to use as the user groups. |  |\n| `assertion_attribute_role` | No | Friendly name or name of the attribute within the SAML assertion to use as the user roles. |  |\n| `assertion_attribute_org` | No | Friendly name or name of the attribute within the SAML assertion to use as the user organization. |  |\n| `allowed_organizations` | No | List of comma- or space-separated organizations. User should be a member of at least one organization to log in. 
                            |                                                       |\n| `org_mapping`                                              | No       | List of comma- or space-separated Organization:OrgId:Role mappings. Organization can be `*` meaning \"All users\". Role is optional and can have the following values: `None`, `Viewer`, `Editor` or `Admin`.  |                                                       |\n| `role_values_none`                                         | No       | List of comma- or space-separated roles which will be mapped into the None role.                                                                                                                             |                                                       |\n| `role_values_viewer`                                       | No       | List of comma- or space-separated roles which will be mapped into the Viewer role.                                                                                                                           |                                                       |\n| `role_values_editor`                                       | No       | List of comma- or space-separated roles which will be mapped into the Editor role.                                                                                                                           |                                                       |\n| `role_values_admin`                                        | No       | List of comma- or space-separated roles which will be mapped into the Admin role.                                                                                                                            |                                                       |\n| `role_values_grafana_admin`                                | No       | List of comma- or space-separated roles which will be mapped into the Grafana Admin (Super Admin) role.                                            
|  |\n| `skip_org_role_sync` | No | Whether to skip organization role synchronization. | `false` |\n| `name_id_format` | No | Specifies the format of the requested NameID element in the SAML AuthnRequest. | `urn:oasis:names:tc:SAML:2.0:nameid-format:transient` |\n| `client_id` | No | Client ID of the IdP service application used to retrieve more information about the user from the IdP. (Microsoft Entra ID only) |  |\n| `client_secret` | No | Client secret of the IdP service application used to retrieve more information about the user from the IdP. (Microsoft Entra ID only) |  |\n| `token_url` | No | URL to retrieve the access token from the IdP. (Microsoft Entra ID only) |  |\n| `force_use_graph_api` | No | Whether to use the IdP service application to retrieve more information about the user from the IdP. (Microsoft Entra ID only) | `false` |
do not have access to Grafana configuration file  so they should configure SAML through the other methods    Configuration in the API takes precedence over the configuration in the Grafana configuration file  SAML settings from the API will override any SAML configuration set in the Grafana configuration file       Supported SAML  Grafana supports the following SAML 2 0 bindings     From the Service Provider  SP  to the Identity Provider  IdP         HTTP POST  binding      HTTP Redirect  binding    From the Identity Provider  IdP  to the Service Provider  SP        HTTP POST  binding  In terms of security     Grafana supports signed and encrypted assertions    Grafana does not support signed or encrypted requests   In terms of initiation  Grafana supports     SP initiated requests   IdP initiated requests  By default  SP initiated requests are enabled  For instructions on how to enable IdP initiated logins  see  IdP initiated Single Sign On  SSO        It is possible to set up Grafana with SAML authentication using Azure AD  However  if an Azure AD user belongs to more than 150 groups  a Graph API endpoint is shared instead   Grafana versions 11 1 and below  do not support fetching the groups from the Graph API endpoint  As a result  users with more than 150 groups will not be able to retrieve their groups  Instead  it is recommended that you use OIDC OAuth workflows    As of Grafana 11 2  the SAML integration offers a mechanism to retrieve user groups from the Graph API   Related links      Azure AD SAML limitations  https   learn microsoft com en us entra identity platform id token claims reference groups overage claim     Set up SAML with Azure AD       Configure a Graph API application in Azure AD            Edit SAML options in the Grafana config file  1  In the   auth saml   section in the Grafana configuration file  set   enabled     to  true   1  Configure the  certificate and private key     1  On the Okta application page where you have been redirected 
After the application is created, navigate to the **Sign On** tab and find the **Identity Provider metadata** link in the **Settings** section.

1. Set the `idp_metadata_url` to the URL obtained from the previous step. The URL should look like `https://<your-org-id>.okta.com/app/<application-id>/sso/saml/metadata`.
1. Set the following options to the attribute names configured at step 10 of the SAML integration setup. You can find these attributes on the **General** tab of the application page (**ATTRIBUTE STATEMENTS** and **GROUP ATTRIBUTE STATEMENTS** in the **SAML Settings** section):
   - `assertion_attribute_login`
   - `assertion_attribute_email`
   - `assertion_attribute_name`
   - `assertion_attribute_groups`
1. (Optional) Set the `name` parameter in the `auth.saml` section in the Grafana configuration file. This parameter replaces SAML in the Grafana user interface in locations such as the sign-in button.
1. Save the configuration file and then restart the Grafana server.

When you are finished, the Grafana configuration might look like this example:

```bash
[server]
root_url = https://grafana.example.com

[auth.saml]
enabled = true
name = My IdP
auto_login = false
private_key_path = "/path/to/private_key.pem"
certificate_path = "/path/to/certificate.cert"
idp_metadata_url = "https://my-org.okta.com/app/my-application/sso/saml/metadata"
assertion_attribute_name = DisplayName
assertion_attribute_login = Login
assertion_attribute_email = Email
assertion_attribute_groups = Group
```

## Enable SAML authentication in Grafana

To use the SAML integration, in the `auth.saml` section in the Grafana custom configuration file, set `enabled` to `true`.

Refer to Configuration for more information about configuring Grafana.

## Additional configuration for HTTP-Post binding

If multiple bindings are supported for SAML Single Sign-On (SSO) by the Identity Provider (IdP), Grafana will use the `HTTP-Redirect` binding by default. If the IdP only supports the `HTTP-Post` binding, then updating the `content_security_policy_template` (in case `content_security_policy = true`) and `content_security_policy_report_only_template` (in case `content_security_policy_report_only = true`) might be required to allow Grafana to initiate a POST request to the IdP. These settings are used to define the [Content Security Policy (CSP)](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy) headers that are sent by Grafana.

To allow Grafana to initiate a POST request to the IdP, update the `content_security_policy_template` and `content_security_policy_report_only_template` settings in the Grafana configuration file and add the IdP's domain to the `form-action` directive. By default, the `form-action` directive is set to `self`, which only allows POST requests to the same domain as Grafana. To allow POST requests to the IdP's domain, update the `form-action` directive to include the IdP's domain, for example: `form-action 'self' https://idp.example.com`.

> For Grafana Cloud instances, please contact Grafana Support to update the `content_security_policy_template` and `content_security_policy_report_only_template` settings of your Grafana instance. Please provide the metadata URL/file of your IdP.

## Certificate and private key

The SAML SSO standard uses asymmetric encryption to exchange information between the SP (Grafana) and the IdP. To perform such encryption, you need a public part and a private part. In this case, the X.509 certificate provides the public part, while the private key provides the private part. The private key needs to be issued in a [PKCS#8](https://en.wikipedia.org/wiki/PKCS_8) format.

Grafana supports two ways of specifying both the `certificate` and `private_key`:

- Without a suffix (`certificate` or `private_key`), the configuration assumes you've supplied the base64-encoded file contents.
- With the `_path` suffix (`certificate_path` or `private_key_path`), Grafana treats the value entered as a file path and attempts to read the file from the file system.
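As a sketch, the two forms look like this in the configuration file (paths and the base64 values are placeholders; pick one form per option, never both):

```ini
[auth.saml]
# Form 1: file paths on the Grafana host
certificate_path = "/path/to/certificate.cert"
private_key_path = "/path/to/private_key.pem"

# Form 2: base64-encoded file contents (do not combine with the _path form)
; certificate = "<base64-encoded certificate contents>"
; private_key = "<base64-encoded private key contents>"
```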
You can only use one form of each configuration option. Using multiple forms, such as both `certificate` and `certificate_path`, results in an error.

### Generate private key for SAML authentication

An example of how to generate a self-signed certificate and private key that's valid for one year:

```sh
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
```

The generated `key.pem` and `cert.pem` files are then used for certificate and private key.

The key you provide should look like:

```
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
```

## Set up SAML with Azure AD

Grafana supports user authentication through Azure AD, which is useful when you want users to access Grafana using single sign-on. This topic shows you how to configure SAML authentication in Grafana with [Azure AD](https://azure.microsoft.com/en-us/services/active-directory/).

**Before you begin**

- Ensure you have permission to administer SAML authentication. For more information about roles and permissions in Grafana, refer to Roles and permissions.
- Learn the limitations of Azure AD SAML integration: [Azure AD SAML limitations](https://learn.microsoft.com/en-us/entra/identity-platform/id-token-claims-reference#groups-overage-claim)

To configure SAML integration with Azure AD, create an app integration inside the Azure AD organization first: [Add app integration in Azure AD](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal-configure)

If you have users that belong to more than 150 groups, you need to configure a registered application to provide an Azure Graph API to retrieve the groups: Setup Azure AD Graph API applications.

### Generate self-signed certificates

Azure AD requires a certificate to sign the SAML requests. You can generate a self-signed certificate using the following command:

```sh
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
```

This will generate a `key.pem` and `cert.pem` file that you can use for the `private_key_path` and `certificate_path` configuration options.

### Add Microsoft Entra SAML Toolkit from the gallery

> Taken from https://learn.microsoft.com/en-us/entra/identity/saas-apps/saml-toolkit-tutorial#add-microsoft-entra-saml-toolkit-from-the-gallery

1. Go to the [Azure portal](https://portal.azure.com/#home) and sign in with your Azure AD account.
1. Search for **Enterprise Applications**.
1. In the **Enterprise applications** pane, select **New application**.
1. In the search box, enter **SAML Toolkit**, and then select the **Microsoft Entra SAML Toolkit** from the results panel.
1. Add a descriptive name and select **Create**.

### Configure the SAML Toolkit application endpoints

In order to validate Azure AD users with Grafana, you need to configure the SAML Toolkit application endpoints by creating a new SAML integration in the Azure AD organization.

For the following configuration, we will use `https://localhost` as the Grafana URL. Replace it with your Grafana URL.

1. In the **SAML Toolkit application**, select **Set up single sign-on**.
1. In the **Single sign-on** pane, select **SAML**.
1. In the **Set up Single Sign-On with SAML** pane, select the pencil icon for **Basic SAML Configuration** to edit the settings.
1. In the **Basic SAML Configuration** pane, click on the **Edit** button and update the following fields:
   - In the **Identifier (Entity ID)** field, enter `https://localhost/saml/metadata`.
   - In the **Reply URL (Assertion Consumer Service URL)** field, enter `https://localhost/saml/acs`.
   - In the **Sign on URL** field, enter `https://localhost`.
   - In the **Relay State** field, enter `https://localhost`.
   - In the **Logout URL** field, enter `https://localhost/saml/slo`.
1. Select **Save**.
1. At the **SAML Certificate** section, copy the **App Federation Metadata Url**.
   - Use this URL in the `idp_metadata_url` field in the `custom.ini` file.
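Wired into Grafana, the result might look like the following sketch. The metadata URL shape shown here is an assumption based on Azure AD's usual federation metadata pattern; use the exact **App Federation Metadata Url** you copied from the portal:

```ini
[auth.saml]
enabled = true
certificate_path = "/path/to/cert.pem"
private_key_path = "/path/to/key.pem"
# Hypothetical App Federation Metadata Url copied from the Azure portal
idp_metadata_url = "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>"
```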
## Configure a Graph API application in Azure AD

While an Azure AD tenant can be configured in Grafana via SAML, some additional information is only accessible via the Graph API. To retrieve this information, create a new application in Azure AD and grant it the necessary permissions: [Azure AD SAML limitations](https://learn.microsoft.com/en-us/entra/identity-platform/id-token-claims-reference#groups-overage-claim)

For the following configuration, the URL `https://localhost` will be used as the Grafana URL. Replace it with your Grafana instance URL.

### Create a new Application registration

This app registration will be used as a Service Account to retrieve more information about the user from the Azure AD.

1. Go to the [Azure portal](https://portal.azure.com/#home) and sign in with your Azure AD account.
1. In the left-hand navigation pane, select the Azure Active Directory service, and then select **App registrations**.
1. Click the **New registration** button.
1. In the **Register an application** pane, enter a name for the application.
1. In the **Supported account types** section, select the account types that can use the application.
1. In the **Redirect URI** section, select Web and enter `https://localhost/login/azuread`.
1. Click the **Register** button.

### Set up permissions for the application

1. In the overview pane, look for the **API permissions** section and select **Add a permission**.
1. In the **Request API permissions** pane, select **Microsoft Graph**, and click **Application permissions**.
1. In the **Select permissions** pane, under the **GroupMember** section, select **GroupMember.Read.All**.
1. In the **Select permissions** pane, under the **User** section, select **User.Read.All**.
1. Click the **Add permissions** button at the bottom of the page.
1. In the **Request API permissions** pane, select **Microsoft Graph**, and click **Delegated permissions**.
1. In the **Select permissions** pane, under the **User** section, select **User.Read**.
1. Click the **Add permissions** button at the bottom of the page.
1. In the **API permissions** section, select **Grant admin consent for [your organization]**.

The following table shows what the permissions look like from the Azure AD portal:

| Permissions name       | Type        | Admin consent required | Status  |
|------------------------|-------------|------------------------|---------|
| `GroupMember.Read.All` | Application | Yes                    | Granted |
| `User.Read`            | Delegated   | No                     | Granted |
| `User.Read.All`        | Application | Yes                    | Granted |

### Generate a client secret

1. In the **Overview** pane, select **Certificates & secrets**.
1. Select **New client secret**.
1. In the **Add a client secret** pane, enter a description for the secret.
1. Set the expiration date for the secret.
1. Select **Add**.
1. Copy the value of the secret. This value is used in the `client_secret` field in the `custom.ini` file.

## Set up SAML with Okta

Grafana supports user authentication through Okta, which is useful when you want your users to access Grafana using single sign-on. This guide walks you through the steps of configuring SAML authentication in Grafana with [Okta](https://okta.com/).

You need to be an admin in your Okta organization to access the Admin Console and create a SAML integration. You also need permissions to edit the Grafana configuration file and restart the Grafana server.

**Before you begin:**

- To configure SAML integration with Okta, create an app integration inside the Okta organization first: [Add app integration in Okta](https://help.okta.com/en/prod/Content/Topics/Apps/apps-overview-add-apps.htm)
- Ensure you have permission to administer SAML authentication. For more information about roles and permissions in Grafana, refer to Roles and permissions.

**To set up SAML with Okta:**
1. Log in to the [Okta portal](https://login.okta.com/).
1. Go to the Admin Console in your Okta organization by clicking **Admin** in the upper-right corner. If you are in the Developer Console, then click **Developer Console** in the upper-left corner and then click **Classic UI** to switch over to the Admin Console.
1. In the Admin Console, navigate to **Applications** > **Applications**.
1. Click **Create App Integration** to start the Application Integration Wizard.
1. Choose **SAML 2.0** as the **Sign-in method**.
1. Click **Create**.
1. On the **General Settings** tab, enter a name for your Grafana integration. You can also upload a logo.
1. On the **Configure SAML** tab, enter the SAML information related to your Grafana instance:

   - In the **Single sign-on URL** field, use the `/saml/acs` endpoint URL of your Grafana instance, for example, `https://grafana.example.com/saml/acs`.
   - In the **Audience URI (SP Entity ID)** field, use the `/saml/metadata` endpoint URL; by default it is the `/saml/metadata` endpoint of your Grafana instance (for example `https://example.grafana.com/saml/metadata`). This could be configured differently, but the value here must match the `entity_id` setting of the SAML settings of Grafana.
   - Leave the default values for **Name ID format** and **Application username**.

     > If you plan to enable SAML Single Logout, consider setting the **Name ID format** to `EmailAddress` or `Persistent`. This must match the `name_id_format` setting of the Grafana instance.

   - In the **ATTRIBUTE STATEMENTS (OPTIONAL)** section, enter the SAML attributes to be shared with Grafana. The attribute names in Okta need to match exactly what is defined within Grafana, for example:

     | Attribute name (in Grafana) | Name and value (in Okta profile)                 | Grafana configuration (under `auth.saml`)  |
     |-----------------------------|--------------------------------------------------|--------------------------------------------|
     | Login                       | Login - `user.login`                             | `assertion_attribute_login = Login`        |
     | Email                       | Email - `user.email`                             | `assertion_attribute_email = Email`        |
     | DisplayName                 | DisplayName - `user.firstName + user.lastName`   | `assertion_attribute_name = DisplayName`   |

   - In the **GROUP ATTRIBUTE STATEMENTS (OPTIONAL)** section, enter a group attribute name (for example, `Group`; ensure it matches the `assertion_attribute_groups` setting in Grafana) and set the filter to `Matches regex` to return all user groups.
1. Click **Next**.
1. On the final Feedback tab, fill out the form and then click **Finish**.

## Signature algorithm

The SAML standard recommends using a digital signature for some types of messages, like authentication or logout requests. If the `signature_algorithm` option is configured, Grafana will put a digital signature into SAML requests. Supported signature types are `rsa-sha1`, `rsa-sha256`, `rsa-sha512`. This option should match your IdP configuration; otherwise, signature validation will fail. Grafana uses the key and certificate configured with the `private_key` and `certificate` options for signing SAML requests.
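For instance, a minimal sketch that signs requests with SHA-256 (the paths are placeholders):

```ini
[auth.saml]
signature_algorithm = rsa-sha256
certificate_path = "/path/to/certificate.cert"
private_key_path = "/path/to/private_key.pem"
```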
## Specify user's Name ID

The `name_id_format` configuration field specifies the format of the NameID element in the SAML assertion.

By default, this is set to `urn:oasis:names:tc:SAML:2.0:nameid-format:transient` and does not need to be specified in the configuration file.

The following list includes valid configuration field values:

| `name_id_format` value in the configuration file or Terraform | **Name identifier format** on the UI |
|---------------------------------------------------------------|--------------------------------------|
| `urn:oasis:names:tc:SAML:2.0:nameid-format:transient`         | Default                              |
| `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified`       | Unspecified                          |
| `urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress`      | Email address                        |
| `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent`        | Persistent                           |
| `urn:oasis:names:tc:SAML:2.0:nameid-format:transient`         | Transient                            |

## IdP metadata

You also need to define the public part of the IdP for message verification. The SAML IdP metadata XML defines where and how Grafana exchanges user information.

Grafana supports three ways of specifying the IdP metadata:

- Without a suffix (`idp_metadata`), Grafana assumes base64-encoded XML file contents.
- With the `_path` suffix, Grafana assumes a file path and attempts to read the file from the file system.
- With the `_url` suffix, Grafana assumes a URL and attempts to load the metadata from the given location.

## Maximum issue delay

Prevents SAML response replay attacks and internal clock skews between the SP (Grafana) and the IdP. You can set a maximum amount of time between the IdP issuing a response and the SP (Grafana) processing it.

The configuration option is specified as a duration, such as `max_issue_delay = 90s` or `max_issue_delay = 1h`.

## Metadata valid duration

SP metadata is likely to expire at some point, perhaps due to a certificate rotation or change of location binding. Grafana allows you to specify for how long the metadata should be valid. Leveraging the `validUntil` field, you can tell consumers until when your metadata is going to be valid. The duration is computed by adding the duration to the current time.

The configuration option is specified as a duration, such as `metadata_valid_duration = 48h`.

## Identity provider (IdP) registration

For the SAML integration to work correctly, you need to make the IdP aware of the SP.

The integration provides two key endpoints as part of Grafana:

- The `/saml/metadata` endpoint, which contains the SP metadata. You can either download and upload it manually, or you can make the IdP request it directly from the endpoint. Some providers name it Identifier or Entity ID.
- The `/saml/acs` endpoint, which is intended to receive the ACS (Assertion Consumer Service) callback. Some providers name it SSO URL or Reply URL.

## IdP-initiated Single Sign-On (SSO)

By default, Grafana allows only service provider (SP) initiated logins (when the user logs in with SAML via Grafana's login page). If you want users to log in into Grafana directly from your identity provider (IdP), set the `allow_idp_initiated` configuration option to `true` and configure `relay_state` with the same value specified in the IdP configuration.

IdP-initiated SSO has some security risks, so make sure you understand the risks before enabling this feature. When using IdP-initiated SSO, Grafana receives unsolicited SAML requests and can't verify that the login flow was started by the user. This makes it hard to detect whether the SAML message has been stolen or replaced. Because of this, IdP-initiated SSO is vulnerable to login cross-site request forgery (CSRF) and man-in-the-middle (MITM) attacks. We do not recommend using IdP-initiated SSO; keep it disabled whenever possible.

## Single logout

SAML's single logout feature allows users to log out from all applications associated with the current IdP session established via SAML SSO. If the `single_logout` option is set to `true` and a user logs out, Grafana requests the IdP to end the user session, which in turn triggers logout from all other applications the user is logged into using the same IdP session (applications should support single logout). Conversely, if another application connected to the same IdP logs out using single logout, Grafana receives a logout request from the IdP and ends the user session.

`HTTP-Redirect` and `HTTP-POST` bindings are supported for single logout. When using `HTTP-Redirect` bindings the query should include a request signature.
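Putting the related options together, a minimal sketch (the Name ID format chosen here is an example; it must match what the IdP is configured to send):

```ini
[auth.saml]
single_logout = true
# Must match the Name ID format configured at the IdP side
name_id_format = urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
```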
## Assertion mapping

During the SAML SSO authentication flow, Grafana receives the ACS callback. The callback contains all the relevant information of the user under authentication embedded in the SAML response. Grafana parses the response to create (or update) the user within its internal database.

For Grafana to map the user information, it looks at the individual attributes within the assertion. You can think of these attributes as key-value pairs (although they contain more information than that).

Grafana provides configuration options that let you modify which keys to look at for these values. The data we need to create the user in Grafana is Name, Login handle, and email.

### The `assertion_attribute_name` option

`assertion_attribute_name` is a special assertion mapping that can either be a simple key, indicating a mapping to a single assertion attribute on the SAML response, or a complex template with variables using the `$__saml{attribute}` syntax. If this property is misconfigured, Grafana will log an error message on startup and disallow SAML sign-ins. Grafana will also log errors after a login attempt if a variable in the template is missing from the SAML response.

**Examples**

```ini
# plain string mapping
assertion_attribute_name = displayName
```

```ini
# template mapping
assertion_attribute_name = $__saml{firstName} $__saml{lastName}
```

## Allow new user signups

By default, new Grafana users using SAML authentication will have an account created for them automatically. To decouple authentication and account creation and ensure only users with existing accounts can log in with SAML, set the `allow_sign_up` option to false.

## Configure automatic login

Set the `auto_login` option to true to attempt login automatically, skipping the login screen. This setting is ignored if multiple auth providers are configured to use auto-login.

```ini
auto_login = true
```

## Configure group synchronization

Group synchronization allows you to map user groups from an identity provider to Grafana teams and roles.

To use SAML group synchronization, set `assertion_attribute_groups` to the attribute name where you store user groups. Then Grafana will use attribute values extracted from the SAML assertion to add the user to Grafana teams and grant them roles.

> Team sync allows you to sync users from SAML to Grafana teams. It does not automatically create teams in Grafana. You need to create teams in Grafana before you can use this feature.

Given the following partial SAML assertion:

```xml
<saml2:Attribute
    Name="groups"
    NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
    <saml2:AttributeValue
        xmlns:xs="http://www.w3.org/2001/XMLSchema"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:type="xs:string">admins_group</saml2:AttributeValue>
    <saml2:AttributeValue
        xmlns:xs="http://www.w3.org/2001/XMLSchema"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:type="xs:string">division_1</saml2:AttributeValue>
</saml2:Attribute>
```

The configuration would look like this:

```ini
[auth.saml]
# ...
assertion_attribute_groups = groups
```

The following External Group IDs would be valid for configuring team sync or role sync in Grafana:

- `admins_group`
- `division_1`

To learn more about how to configure group synchronization, refer to the Configure team sync and [Configure group attribute sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-security/configure-group-attribute-sync/) documentation.
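As a quick sanity check outside Grafana, you could pull the group values out of a saved assertion with standard tools. This is only an illustration of the assertion structure above; the file name `assertion.xml` is an assumption, and the pattern relies on each value and its closing tag sitting on the same line:

```shell
# Extract the text content of each <saml2:AttributeValue> element
# from a SAML assertion saved locally as assertion.xml (hypothetical file).
grep -o '>[^<]*</saml2:AttributeValue>' assertion.xml | sed 's/^>//;s/<.*$//'
```

For a real deployment, prefer a proper XML parser; this one-liner is just for eyeballing which External Group IDs an assertion carries.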
## Configure role sync

Role sync allows you to map user roles from an identity provider to Grafana. To enable role sync, configure the role attribute and possible values for the Editor, Admin, and Grafana Admin roles. For more information about user roles, refer to Roles and permissions.

1. In the configuration file, set the `assertion_attribute_role` option to the attribute name where the role information will be extracted from.
1. Set the `role_values_none` option to the values mapped to the `None` role.
1. Set the `role_values_viewer` option to the values mapped to the `Viewer` role.
1. Set the `role_values_editor` option to the values mapped to the `Editor` role.
1. Set the `role_values_admin` option to the values mapped to the organization `Admin` role.
1. Set the `role_values_grafana_admin` option to the values mapped to the `Grafana Admin` role.

If a user role doesn't match any of the configured values, then the role specified by the `auto_assign_org_role` config option will be assigned. If the `auto_assign_org_role` field is not set, then the user role will default to `Viewer`.

For more information about roles and permissions in Grafana, refer to Roles and permissions.

Example configuration:

```ini
[auth.saml]
assertion_attribute_role = role
role_values_none = none
role_values_viewer = external
role_values_editor = editor, developer
role_values_admin = admin, operator
role_values_grafana_admin = superadmin
```

> **Important:** When role sync is configured, any changes of user roles and organization membership made manually in Grafana will be overwritten on next user login. Assign user organizations and roles in the IdP instead.

If you don't want user organizations and roles to be synchronized with the IdP, you can use the `skip_org_role_sync` configuration option.

Example configuration:

```ini
[auth.saml]
skip_org_role_sync = true
```

## Configure organization mapping

Organization mapping allows you to assign users to a particular organization in Grafana depending on the attribute value obtained from the identity provider.

1. In the configuration file, set `assertion_attribute_org` to the attribute name you store organization info in. This attribute can be an array if you want a user to be in multiple organizations.
1. Set the `org_mapping` option to the comma-separated list of `Organization:OrgId` pairs to map an organization from the IdP to the Grafana organization specified by ID. If you want users to have different roles in multiple organizations, you can set this option to a comma-separated list of `Organization:OrgId:Role` mappings.

For example, use the following configuration to assign users from the `Engineering` organization to the Grafana organization with ID `2` as Editor and users from `Sales` to the org with ID `3` as Admin, based on the `Org` assertion attribute value:

```bash
[auth.saml]
assertion_attribute_org = Org
org_mapping = Engineering:2:Editor, Sales:3:Admin
```

You can specify multiple organizations both for the IdP and Grafana:

- `org_mapping = Engineering:2, Sales:2` to map users from `Engineering` and `Sales` to `2` in Grafana.
- `org_mapping = Engineering:2, Engineering:3` to assign `Engineering` to both `2` and `3` in Grafana.

You can use `*` as the SAML Organization if you want all your users to be in some Grafana organizations with a default role:

- `org_mapping = *:2:Editor` to map all users to `2` in Grafana as Editors.

You can use `*` as the Grafana organization in the mapping if you want all users from a given SAML Organization to be added to all existing Grafana organizations:

- `org_mapping = Engineering:*` to map users from `Engineering` to all existing Grafana organizations.
- `org_mapping = Administration:*:Admin` to map users from `Administration` to all existing Grafana organizations as Admins.

## Configure allowed organizations

With the `allowed_organizations` option you can specify a list of organizations where the user must be a member of at least one of them to be able to log in to Grafana.

To put values containing spaces in the list, use the following JSON syntax:

```ini
allowed_organizations = ["org 1", "second org"]
```
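To illustrate how the `org_mapping` entries decompose into `Organization:OrgId:Role` triples, here is a small sketch; the mapping string is the example from above, and the `org=... id=... role=...` output format is purely illustrative:

```shell
org_mapping='Engineering:2:Editor, Sales:3:Admin'

# Strip spaces, split the list on commas, then break each entry
# into its organization, org ID, and role fields.
echo "$org_mapping" | tr -d ' ' | tr ',' '\n' \
  | awk -F: '{ printf "org=%s id=%s role=%s\n", $1, $2, $3 }'
```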
## Example SAML configuration

```bash
[auth.saml]
enabled = true
auto_login = false
certificate_path = "/path/to/certificate.cert"
private_key_path = "/path/to/private_key.pem"
idp_metadata_path = "/my/metadata.xml"
max_issue_delay = 90s
metadata_valid_duration = 48h
assertion_attribute_name = displayName
assertion_attribute_login = mail
assertion_attribute_email = mail
assertion_attribute_groups = Group
assertion_attribute_role = Role
assertion_attribute_org = Org
role_values_viewer = external
role_values_editor = editor, developer
role_values_admin = admin, operator
role_values_grafana_admin = superadmin
org_mapping = Engineering:2:Editor, Engineering:3:Viewer, Sales:3:Editor, *:1:Editor
allowed_organizations = Engineering, Sales
```

## Example SAML configuration in Terraform

> Available in Public Preview in Grafana v11.1 behind the `ssoSettingsSAML` feature toggle. Supported in the Terraform provider since v2.17.0.

```terraform
resource "grafana_sso_settings" "saml_sso_settings" {
  provider_name = "saml"
  saml_settings {
    name                       = "SAML"
    auto_login                 = false
    certificate_path           = "/path/to/certificate.cert"
    private_key_path           = "/path/to/private_key.pem"
    idp_metadata_path          = "/my/metadata.xml"
    max_issue_delay            = "90s"
    metadata_valid_duration    = "48h"
    assertion_attribute_name   = "displayName"
    assertion_attribute_login  = "mail"
    assertion_attribute_email  = "mail"
    assertion_attribute_groups = "Group"
    assertion_attribute_role   = "Role"
    assertion_attribute_org    = "Org"
    role_values_editor         = "editor, developer"
    role_values_admin          = "admin, operator"
    role_values_grafana_admin  = "superadmin"
    org_mapping                = "Engineering:2:Editor, Engineering:3:Viewer, Sales:3:Editor, *:1:Editor"
    allowed_organizations      = "Engineering, Sales"
  }
}
```

Go to the [Terraform Registry](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/sso_settings) for a complete reference on using the `grafana_sso_settings` resource.

## Troubleshoot SAML authentication in Grafana

To troubleshoot and get more log information, enable SAML debug logging in the configuration file. Refer to Configuration for more information.

```bash
[log]
filters = saml.auth:debug
```

## Troubleshooting

Following are common issues found in configuring SAML authentication in Grafana and how to resolve them.

### Infinite redirect loop / User gets redirected to the login page after successful login on the IdP side

If you experience an infinite redirect loop when `auto_login = true`, or are redirected to the login page after a successful login, it is likely that the `grafana_session` cookie's SameSite setting is set to `Strict`. This setting prevents the `grafana_session` cookie from being sent to Grafana during cross-site requests. To resolve this issue, set the `security.cookie_samesite` option to `Lax` in the Grafana configuration file.

### SAML authentication fails with error:

- `asn1: structure error: tags don't match`

We only support one private key format: PKCS#8.

The keys may be in a different format (PKCS#1 or PKCS#12); in that case, it may be necessary to convert the private key format.

The following command creates a pkcs8 key file:

```bash
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
```

#### Convert the private key format to base64

The following command converts keys to base64 format.

Base64-encode the cert.pem and key.pem files (the `-w0` switch is not needed on Mac, only for Linux):

```sh
base64 -w0 key.pem > key.pem.base64
base64 -w0 cert.pem > cert.pem.base64
```

The base64-encoded values (`key.pem.base64`, `cert.pem.base64` files) are then used for certificate and private key.

The keys you provide should look like:

```
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
```
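If you already have a PKCS#1 key and want to convert it rather than generate a new one, a sketch using `openssl pkcs8` (file names are placeholders):

```shell
# Convert an existing private key into the PKCS#8 format Grafana expects.
# key-pkcs1.pem is a hypothetical input file name.
openssl pkcs8 -topk8 -nocrypt -in key-pkcs1.pem -out key-pkcs8.pem
```

The converted file should begin with `-----BEGIN PRIVATE KEY-----` rather than `-----BEGIN RSA PRIVATE KEY-----`.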
### SAML login attempts fail with request response "origin not allowed"

When the user logs in using SAML and gets presented with "origin not allowed", the user might be issuing the login from an IdP (identity provider) service, or the user is behind a reverse proxy. This potentially happens as Grafana's CSRF checks deem the requests to be invalid. For more information, see [CSRF](https://owasp.org/www-community/attacks/csrf).

To solve this issue, you can configure either the `csrf_trusted_origins` or `csrf_additional_headers` option in the SAML configuration.

Example of a configuration file:

```bash
# config.ini
...
[security]
csrf_trusted_origins = https://grafana.example.com
csrf_additional_headers = X-Forwarded-Host
...
```

### SAML login attempts fail with request response "login session has expired"

Accessing the Grafana login page from a URL that is not the root URL of the Grafana server can cause the instance to return the following error: "login session has expired".

If you are accessing Grafana through a proxy server, ensure that cookies are correctly rewritten to the root URL of Grafana. Cookies must be set on the same URL as the `root_url` of Grafana. This is normally the reverse proxy's domain address.

Review the cookie settings in your proxy server configuration to ensure that cookies are not being discarded.

Review the following settings in your Grafana config:

```ini
[security]
cookie_samesite = none
```

This setting should be set to none to allow Grafana session cookies to work correctly with redirects.

```ini
[security]
cookie_secure = true
```

Ensure `cookie_secure` is set to true to ensure that cookies are only sent over HTTPS.

## Configure SAML authentication in Grafana

The table below describes all SAML configuration options. Continue reading below for details on specific options. Like any other Grafana configuration, you can apply these options as environment variables.

| Setting                                                    | Required | Description                                                                                                                            | Default                               |
|------------------------------------------------------------|----------|----------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------|
| `enabled`                                                  | No       | Whether SAML authentication is allowed.                                                                                                 | `false`                               |
| `name`                                                     | No       | Name used to refer to the SAML authentication in the Grafana user interface.                                                            | `SAML`                                |
| `entity_id`                                                | No       | The entity ID of the service provider. This is the unique identifier of the service provider.                                           | `https://{Grafana URL}/saml/metadata` |
| `single_logout`                                            | No       | Whether SAML Single Logout is enabled.                                                                                                  | `false`                               |
| `allow_sign_up`                                            | No       | Whether to allow new Grafana user creation through SAML login. If set to `false`, then only existing Grafana users can log in with SAML. | `true`                                |
| `auto_login`                                               | No       | Whether SAML auto login is enabled.                                                                                                     | `false`                               |
| `allow_idp_initiated`                                      | No       | Whether SAML IdP-initiated login is allowed.                                                                                            | `false`                               |
| `certificate` or `certificate_path`                        | Yes      | Base64-encoded string or path for the SP X.509 certificate.                                                                             |                                       |
| `private_key` or `private_key_path`                        | Yes      | Base64-encoded string or path for the SP private key.                                                                                   |                                       |
| `signature_algorithm`                                      | No       | Signature algorithm used for signing requests to the IdP. Supported values are rsa-sha1, rsa-sha256, rsa-sha512.                        |                                       |
| `idp_metadata`, `idp_metadata_path`, or `idp_metadata_url` | Yes      | Base64-encoded string, path, or URL for the IdP SAML metadata XML.                                                                      |                                       |
                                                                                                                               max issue delay                                             No         Maximum time allowed between the issuance of an AuthnRequest by the SP and the processing of the Response                                                                                                       90s                                                       metadata valid duration                                     No         Duration for which the SP metadata remains valid                                                                                                                                                                48h                                                       relay state                                                 No         Relay state for IdP initiated login  This should match the relay state configured in the IdP                                                                                                                                                                              assertion attribute name                                    No         Friendly name or name of the attribute within the SAML assertion to use as the user name  Alternatively  this can be a template with variables that match the names of attributes within the SAML assertion     displayName                                               assertion attribute login                                   No         Friendly name or name of the attribute within the SAML assertion to use as the user login handle                                                                                                                mail                                                      assertion attribute email                                   No         Friendly name or name of the attribute within the SAML assertion to use as the user email                            
                                                                                           mail                                                      assertion attribute groups                                  No         Friendly name or name of the attribute within the SAML assertion to use as the user groups                                                                                                                                                                                assertion attribute role                                    No         Friendly name or name of the attribute within the SAML assertion to use as the user roles                                                                                                                                                                                 assertion attribute org                                     No         Friendly name or name of the attribute within the SAML assertion to use as the user organization                                                                                                                                                                          allowed organizations                                       No         List of comma  or space separated organizations  User should be a member of at least one organization to log in                                                                                                                                                           org mapping                                                 No         List of comma  or space separated Organization OrgId Role mappings  Organization can be     meaning  All users   Role is optional and can have the following values   None    Viewer    Editor  or  Admin                                                                 role values none                                            No         List of comma  or space separated roles which will be mapped into the None role                
                                                                                                                                                                           role values viewer                                          No         List of comma  or space separated roles which will be mapped into the Viewer role                                                                                                                                                                                         role values editor                                          No         List of comma  or space separated roles which will be mapped into the Editor role                                                                                                                                                                                         role values admin                                           No         List of comma  or space separated roles which will be mapped into the Admin role                                                                                                                                                                                          role values grafana admin                                   No         List of comma  or space separated roles which will be mapped into the Grafana Admin  Super Admin  role                                                                                                                                                                    skip org role sync                                          No         Whether to skip organization role synchronization                                                                                                                                                               false                                                     name id format                                              No         Specifies the format of the requested NameID element in the SAML 
AuthnRequest                                                                                                                                   urn oasis names tc SAML 2 0 nameid format transient       client id                                                   No         Client ID of the IdP service application used to retrieve more information about the user from the IdP   Microsoft Entra ID only                                                                                                                                          client secret                                               No         Client secret of the IdP service application used to retrieve more information about the user from the IdP   Microsoft Entra ID only                                                                                                                                      token url                                                   No         URL to retrieve the access token from the IdP   Microsoft Entra ID only                                                                                                                                                                                                   force use graph api                                         No         Whether to use the IdP service application retrieve more information about the user from the IdP   Microsoft Entra ID only                                                                                      false                                                 "}
{"questions":"grafana setup installation configuration aliases products Configuration documentation administration configuration labels title Configure Grafana oss enterprise","answers":"---\naliases:\n  - ..\/administration\/configuration\/\n  - ..\/installation\/configuration\/\ndescription: Configuration documentation\nlabels:\n  products:\n    - enterprise\n    - oss\ntitle: Configure Grafana\nweight: 200\n---\n\n# Configure Grafana\n\nGrafana has default and custom configuration files. You can customize your Grafana instance by modifying the custom configuration file or by using environment variables. To see the list of settings for a Grafana instance, refer to [View server settings]().\n\n\nAfter you add custom options, [uncomment](#remove-comments-in-the-ini-files) the relevant sections of the configuration file. Restart Grafana for your changes to take effect.\n\n\n## Configuration file location\n\nThe default settings for a Grafana instance are stored in the `$WORKING_DIR\/conf\/defaults.ini` file. _Do not_ change this file.\n\nDepending on your OS, your custom configuration file is either the `$WORKING_DIR\/conf\/custom.ini` file or the `\/usr\/local\/etc\/grafana\/grafana.ini` file. The custom configuration file path can be overridden using the `--config` parameter.\n\n### Linux\n\nIf you installed Grafana using the `deb` or `rpm` packages, then your configuration file is located at `\/etc\/grafana\/grafana.ini` and a separate `custom.ini` is not used. This path is specified in the Grafana init.d script using `--config` file parameter.\n\n### Docker\n\nRefer to [Configure a Grafana Docker image]() for information about environmental variables, persistent storage, and building custom Docker images.\n\n### Windows\n\nOn Windows, the `sample.ini` file is located in the same directory as `defaults.ini` file. It contains all the settings commented out. 
Copy `sample.ini` and name it `custom.ini`.\n\n### macOS\n\nBy default, the configuration file is located at `\/opt\/homebrew\/etc\/grafana\/grafana.ini` or `\/usr\/local\/etc\/grafana\/grafana.ini`. For a Grafana instance installed using Homebrew, edit the `grafana.ini` file directly. Otherwise, add a configuration file named `custom.ini` to the `conf` folder to override the settings defined in `conf\/defaults.ini`.\n\n## Remove comments in the .ini files\n\nGrafana uses semicolons (the `;` char) to comment out lines in a `.ini` file. You must uncomment each line in the `custom.ini` or the `grafana.ini` file that you are modifying by removing `;` from the beginning of that line. Otherwise, your changes will be ignored.\n\nFor example:\n\n```\n# The HTTP port to use\n;http_port = 3000\n```\n\n## Override configuration with environment variables\n\nDo not use environment variables to _add_ new configuration settings. Instead, use environment variables to _override_ existing options.\n\nTo override an option:\n\n```bash\nGF_<SectionName>_<KeyName>\n```\n\nWhere the section name is the text within the brackets. Everything should be uppercase, and `.` and `-` should be replaced by `_`. 
For example, if you have these configuration settings:\n\n```bash\n# default section\ninstance_name = ${HOSTNAME}\n\n[security]\nadmin_user = admin\n\n[auth.google]\nclient_secret = 0ldS3cretKey\n\n[plugin.grafana-image-renderer]\nrendering_ignore_https_errors = true\n\n[feature_toggles]\nenable = newNavigation\n```\n\nYou can override variables on Linux machines with:\n\n```bash\nexport GF_DEFAULT_INSTANCE_NAME=my-instance\nexport GF_SECURITY_ADMIN_USER=owner\nexport GF_AUTH_GOOGLE_CLIENT_SECRET=newS3cretKey\nexport GF_PLUGIN_GRAFANA_IMAGE_RENDERER_RENDERING_IGNORE_HTTPS_ERRORS=true\nexport GF_FEATURE_TOGGLES_ENABLE=newNavigation\n```\n\n## Variable expansion\n\nIf any of your options contain the expression `$__<provider>{<argument>}`\nor `${<environment variable>}`, they will be processed by Grafana's\nvariable expander. The expander runs the provider with the provided argument\nto get the final value of the option.\n\nThere are three providers: `env`, `file`, and `vault`.\n\n### Env provider\n\nThe `env` provider can be used to expand an environment variable. If you\nset an option to `$__env{PORT}`, the `PORT` environment variable will be\nused in its place. For environment variables you can also use the\nshort-hand syntax `${PORT}`.\nIn the following example, Grafana's log directory is set to the `grafana`\nsubdirectory of the directory specified by the `LOGDIR` environment\nvariable.\n\n```ini\n[paths]\nlogs = $__env{LOGDIR}\/grafana\n```\n\n### File provider\n\n`file` reads a file from the filesystem. 
It trims whitespace from the\nbeginning and the end of files.\nThe database password in the following example would be replaced by\nthe content of the `\/etc\/secrets\/gf_sql_password` file:\n\n```ini\n[database]\npassword = $__file{\/etc\/secrets\/gf_sql_password}\n```\n\n### Vault provider\n\nThe `vault` provider allows you to manage your secrets with [Hashicorp Vault](https:\/\/www.hashicorp.com\/products\/vault).\n\n> The Vault provider is only available in Grafana Enterprise v7.1+. For more information, refer to [Vault integration]() in [Grafana Enterprise]().\n\n<hr \/>\n\n## app_mode\n\nOptions are `production` and `development`. Default is `production`. _Do not_ change this option unless you are working on Grafana development.\n\n## instance_name\n\nSet the name of the grafana-server instance. Used in logging, internal metrics, and clustering info. Defaults to `${HOSTNAME}`, which will be replaced with the value of the\n`HOSTNAME` environment variable. If that is empty or does not exist, Grafana will try to use system calls to get the machine name.\n\n<hr \/>\n\n## [paths]\n\n### data\n\nPath to where Grafana stores the sqlite3 database (if used), file-based sessions (if used), and other data. This path is usually specified via command line in the init.d script or the systemd service file.\n\n**macOS:** The default SQLite database is located at `\/usr\/local\/var\/lib\/grafana`.\n\n### temp_data_lifetime\n\nHow long temporary images in the `data` directory should be kept. Defaults to `24h`. Supported modifiers: `h` (hours),\n`m` (minutes), for example: `168h`, `30m`, `10h30m`. Use `0` to never clean up temporary files.\n\n### logs\n\nPath to where Grafana stores logs. This path is usually specified via command line in the init.d script or the systemd service file. You can override it in the configuration file or in the default environment variable file. 
However, note that if you override this setting, the default log path is used until Grafana has fully initialized\/started.\n\nOverride the log path using the command line argument `cfg:default.paths.logs`:\n\n```bash\n.\/grafana-server --config \/custom\/config.ini --homepath \/custom\/homepath cfg:default.paths.logs=\/custom\/path\n```\n\n**macOS:** By default, the log file should be located at `\/usr\/local\/var\/log\/grafana\/grafana.log`.\n\n### plugins\n\nDirectory where Grafana automatically scans and looks for plugins. For information about manually or automatically installing plugins, refer to [Install Grafana plugins]().\n\n**macOS:** By default, the Mac plugin location is: `\/usr\/local\/var\/lib\/grafana\/plugins`.\n\n### provisioning\n\nFolder that contains [provisioning]() config files that Grafana will apply on startup. Dashboards will be reloaded when the JSON files change.\n\n<hr \/>\n\n## [server]\n\n### protocol\n\n`http`, `https`, `h2`, or `socket`\n\n### min_tls_version\n\nThe TLS handshake requires a minimum TLS version. The available options are TLS1.2 and TLS1.3.\nIf you do not specify a version, the system uses TLS1.2.\n\n### http_addr\n\nThe host for the server to listen on. If your machine has more than one network interface, you can use this setting to expose the Grafana service on only one network interface and not have it available on others, such as the loopback interface. An empty value is equivalent to setting the value to `0.0.0.0`, which means the Grafana service binds to all interfaces.\n\nIn environments where network address translation (NAT) is used, ensure you use the network interface address and not a final public address; otherwise, you might see errors such as `bind: cannot assign requested address` in the logs.\n\n### http_port\n\nThe port to bind to, defaults to `3000`. 
To use port 80 you need to either give the Grafana binary permission to bind to privileged ports, for example:\n\n```bash\n$ sudo setcap 'cap_net_bind_service=+ep' \/usr\/sbin\/grafana-server\n```\n\nOr redirect port 80 to the Grafana port using:\n\n```bash\n$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000\n```\n\nAnother way is to put a web server like Nginx or Apache in front of Grafana and have them proxy requests to Grafana.\n\n### domain\n\nThis setting is only used as part of the `root_url` setting (see below). Important if you use GitHub or Google OAuth.\n\n### enforce_domain\n\nRedirect to the correct domain if the host header does not match the domain. Prevents DNS rebinding attacks. Default is `false`.\n\n### root_url\n\nThis is the full URL used to access Grafana from a web browser. This is\nimportant if you use Google or GitHub OAuth authentication (for the\ncallback URL to be correct).\n\n\nThis setting is also important if you have a reverse proxy\nin front of Grafana that exposes it through a subpath. In that\ncase, add the subpath to the end of this URL setting.\n\n\n### serve_from_sub_path\n\nServe Grafana from the subpath specified in the `root_url` setting. By default it is set to `false` for compatibility reasons.\n\nBy enabling this setting and using a subpath in `root_url` above, e.g. `root_url = http:\/\/localhost:3000\/grafana`, Grafana is accessible on `http:\/\/localhost:3000\/grafana`. If accessed without the subpath, Grafana redirects to\na URL with the subpath.\n\n### router_logging\n\nSet to `true` for Grafana to log all HTTP requests (not just errors). These are logged as Info level events to the Grafana log.\n\n### static_root_path\n\nThe path to the directory where the front end files (HTML, JS, and CSS\nfiles) are located. 
Defaults to `public`, which is why the Grafana binary needs to be\nexecuted with the working directory set to the installation path.\n\n### enable_gzip\n\nSet this option to `true` to enable HTTP compression; this can improve\ntransfer speed and bandwidth utilization. It is recommended that most\nusers set it to `true`. By default it is set to `false` for compatibility\nreasons.\n\n### cert_file\n\nPath to the certificate file (if `protocol` is set to `https` or `h2`).\n\n### cert_key\n\nPath to the certificate key file (if `protocol` is set to `https` or `h2`).\n\n### certs_watch_interval\n\nControls whether `cert_key` and `cert_file` are periodically watched for changes.\nDisabled by default. When enabled, `cert_key` and `cert_file`\nare watched for changes. If there is a change, the new certificates are loaded automatically.\n\n\nAfter the new certificates are loaded, connections that use the old certificates\nwill not work. You must reload those connections for them to work.\n\n\n### socket_gid\n\nGID where the socket should be set when `protocol=socket`.\nMake sure that the target group is in the group of the Grafana process and that the Grafana process is the file owner before you change this setting.\nIt is recommended to set the GID to the HTTP server user's GID.\nIt is not set when the value is `-1`.\n\n### socket_mode\n\nMode where the socket should be set when `protocol=socket`. Make sure that the Grafana process is the file owner before you change this setting.\n\n### socket\n\nPath where the socket should be created when `protocol=socket`. Make sure Grafana has appropriate permissions for that path before you change this setting.\n\n### cdn_url\n\nSpecify a full HTTP URL address to the root of your Grafana CDN assets. 
Grafana will add edition and version paths.\n\nFor example, given a CDN URL like `https:\/\/cdn.myserver.com`, Grafana will try to load a JavaScript file from\n`https:\/\/cdn.myserver.com\/grafana-oss\/7.4.0\/public\/build\/app.<hash>.js`.\n\n### read_timeout\n\nSets the maximum time using a duration format (5s\/5m\/5ms) before timing out the read of an incoming request and closing idle connections.\n`0` means there is no timeout for reading the request.\n\n<hr \/>\n\n## [server.custom_response_headers]\n\nThis setting enables you to specify additional headers that the server adds to HTTP(S) responses.\n\n```\nexampleHeader1 = exampleValue1\nexampleHeader2 = exampleValue2\n```\n\n<hr \/>\n\n## [database]\n\nGrafana needs a database to store users and dashboards (and other\nthings). By default it is configured to use [`sqlite3`](https:\/\/www.sqlite.org\/index.html) which is an\nembedded database (included in the main Grafana binary).\n\n### type\n\nEither `mysql`, `postgres` or `sqlite3`, it's your choice.\n\n### host\n\nOnly applicable to MySQL or Postgres. Includes the IP or hostname and port, or in the case of Unix sockets, the path to the socket.\nFor example, for MySQL running on the same host as Grafana: `host = 127.0.0.1:3306` or with Unix sockets: `host = \/var\/run\/mysqld\/mysqld.sock`\n\n### name\n\nThe name of the Grafana database. Leave it set to `grafana` or some\nother name.\n\n### user\n\nThe database user (not applicable for `sqlite3`).\n\n### password\n\nThe database user's password (not applicable for `sqlite3`). If the password contains `#` or `;`, you have to wrap it with triple quotes. For example: `\"\"\"#password;\"\"\"`\n\n### url\n\nUse either URL or the other fields below to configure the database.\nExample: `mysql:\/\/user:secret@host:port\/database`\n\n### max_idle_conn\n\nThe maximum number of connections in the idle connection pool.\n\n### max_open_conn\n\nThe maximum number of open connections to the database. 
For MySQL, configure this setting on both Grafana and the database. For more information, refer to [`sysvar_max_connections`](https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/server-system-variables.html#sysvar_max_connections).\n\n### conn_max_lifetime\n\nSets the maximum amount of time a connection may be reused. The default is 14400 (which means 14400 seconds or 4 hours). For MySQL, this setting should be shorter than the [`wait_timeout`](https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/server-system-variables.html#sysvar_wait_timeout) variable.\n\n### migration_locking\n\nSet to `false` to disable database locking during the migrations. Default is `true`.\n\n### locking_attempt_timeout_sec\n\nFor \"mysql\" and \"postgres\" only. Specify the time (in seconds) to wait before failing to lock the database for the migrations. Default is `0`.\n\n### log_queries\n\nSet to `true` to log the SQL calls and execution times.\n\n### ssl_mode\n\nFor Postgres, use any [valid libpq `sslmode`](https:\/\/www.postgresql.org\/docs\/current\/libpq-ssl.html#LIBPQ-SSL-SSLMODE-STATEMENTS), e.g. `disable`, `require`, `verify-full`, etc.\nFor MySQL, use either `true`, `false`, or `skip-verify`.\n\n### ssl_sni\n\nFor Postgres, set to `0` to disable [Server Name Indication](https:\/\/www.postgresql.org\/docs\/current\/libpq-connect.html#LIBPQ-CONNECT-SSLSNI). This is enabled by default on SSL-enabled connections.\n\n### isolation_level\n\nOnly the MySQL driver supports isolation levels in Grafana. If the value is empty, the driver's default isolation level is applied. Available options are \"READ-UNCOMMITTED\", \"READ-COMMITTED\", \"REPEATABLE-READ\" or \"SERIALIZABLE\".\n\n### ca_cert_path\n\nThe path to the CA certificate to use. On many Linux systems, certs can be found in `\/etc\/ssl\/certs`.\n\n### client_key_path\n\nThe path to the client key. Only required if the server requires client authentication.\n\n### client_cert_path\n\nThe path to the client cert. 
Only if server requires client authentication.\n\n### server_cert_name\n\nThe common name field of the certificate used by the `mysql` or `postgres` server. Not necessary if `ssl_mode` is set to `skip-verify`.\n\n### path\n\nOnly applicable for `sqlite3` database. The file path where the database\nwill be stored.\n\n### cache_mode\n\nFor \"sqlite3\" only. [Shared cache](https:\/\/www.sqlite.org\/sharedcache.html) setting used for connecting to the database. (private, shared)\nDefaults to `private`.\n\n### wal\n\nFor \"sqlite3\" only. Setting to enable\/disable [Write-Ahead Logging](https:\/\/sqlite.org\/wal.html). The default value is `false` (disabled).\n\n### query_retries\n\nThis setting applies to `sqlite` only and controls the number of times the system retries a query when the database is locked. The default value is `0` (disabled).\n\n### transaction_retries\n\nThis setting applies to `sqlite` only and controls the number of times the system retries a transaction when the database is locked. The default value is `5`.\n\n### instrument_queries\n\nSet to `true` to add metrics and tracing for database queries. The default value is `false`.\n\n<hr \/>\n\n## [remote_cache]\n\nCaches authentication details and session information in the configured database, Redis or Memcached. This setting does not configure [Query Caching in Grafana Enterprise]().\n\n### type\n\nEither `redis`, `memcached`, or `database`. Defaults to `database`\n\n### connstr\n\nThe remote cache connection string. The format depends on the `type` of the remote cache. 
Options are `database`, `redis`, and `memcache`.\n\n#### database\n\nLeave empty when using `database` since it will use the primary database.\n\n#### redis\n\nExample connstr: `addr=127.0.0.1:6379,pool_size=100,db=0,ssl=false`\n\n- `addr` is the host `:` port of the redis server.\n- `pool_size` (optional) is the number of underlying connections that can be made to redis.\n- `db` (optional) is the number identifier of the redis database you want to use.\n- `ssl` (optional) is if SSL should be used to connect to redis server. The value may be `true`, `false`, or `insecure`. Setting the value to `insecure` skips verification of the certificate chain and hostname when making the connection.\n\n#### memcache\n\nExample connstr: `127.0.0.1:11211`\n\n<hr \/>\n\n## [dataproxy]\n\n### logging\n\nThis enables data proxy logging, default is `false`.\n\n### timeout\n\nHow long the data proxy should wait before timing out. Default is 30 seconds.\n\nThis setting also applies to core backend HTTP data sources where query requests use an HTTP client with timeout set.\n\n### keep_alive_seconds\n\nInterval between keep-alive probes. Default is `30` seconds. For more details check the [Dialer.KeepAlive](https:\/\/golang.org\/pkg\/net\/#Dialer.KeepAlive) documentation.\n\n### tls_handshake_timeout_seconds\n\nThe length of time that Grafana will wait for a successful TLS handshake with the datasource. Default is `10` seconds. For more details check the [Transport.TLSHandshakeTimeout](https:\/\/golang.org\/pkg\/net\/http\/#Transport.TLSHandshakeTimeout) documentation.\n\n### expect_continue_timeout_seconds\n\nThe length of time that Grafana will wait for a datasource\u2019s first response headers after fully writing the request headers, if the request has an \u201cExpect: 100-continue\u201d header. A value of `0` will result in the body being sent immediately. Default is `1` second. 
For more details check the [Transport.ExpectContinueTimeout](https:\/\/golang.org\/pkg\/net\/http\/#Transport.ExpectContinueTimeout) documentation.\n\n### max_conns_per_host\n\nOptionally limits the total number of connections per host, including connections in the dialing, active, and idle states. On limit violation, dials are blocked. A value of `0` means that there are no limits. Default is `0`.\nFor more details check the [Transport.MaxConnsPerHost](https:\/\/golang.org\/pkg\/net\/http\/#Transport.MaxConnsPerHost) documentation.\n\n### max_idle_connections\n\nThe maximum number of idle connections that Grafana will maintain. Default is `100`. For more details check the [Transport.MaxIdleConns](https:\/\/golang.org\/pkg\/net\/http\/#Transport.MaxIdleConns) documentation.\n\n### idle_conn_timeout_seconds\n\nThe length of time that Grafana maintains idle connections before closing them. Default is `90` seconds. For more details check the [Transport.IdleConnTimeout](https:\/\/golang.org\/pkg\/net\/http\/#Transport.IdleConnTimeout) documentation.\n\n### send_user_header\n\nIf enabled and user is not anonymous, data proxy will add X-Grafana-User header with username into the request. Default is `false`.\n\n### response_limit\n\nLimits the amount of bytes that will be read\/accepted from responses of outgoing HTTP requests. Default is `0` which means disabled.\n\n### row_limit\n\nLimits the number of rows that Grafana will process from SQL (relational) data sources. Default is `1000000`.\n\n### user_agent\n\nSets a custom value for the `User-Agent` header for outgoing data proxy requests. If empty, the default value is `Grafana\/<BuildVersion>` (for example `Grafana\/9.0.0`).\n\n<hr \/>\n\n## [analytics]\n\n### enabled\n\nThis option is also known as _usage analytics_. 
When `false`, this option disables the writers that write to the Grafana database and the associated features, such as dashboard and data source insights, presence indicators, and advanced dashboard search. The default value is `true`.\n\n### reporting_enabled\n\nWhen enabled, Grafana will send anonymous usage statistics to\n`stats.grafana.org`. No IP addresses are being tracked, only simple counters to\ntrack running instances, versions, dashboard and error counts. It is very helpful\nto us, so please leave this enabled. Counters are sent every 24 hours. Default\nvalue is `true`.\n\n### check_for_updates\n\nSet to `false` to disable checking for new versions of Grafana from Grafana's GitHub repository. When enabled, the check for a new version runs every 10 minutes. It will notify, via the UI, when a new version is available. The check itself will not prompt any auto-updates of the Grafana software, nor will it send any sensitive information.\n\n### check_for_plugin_updates\n\nSet to `false` to disable checking for new versions of installed plugins from https:\/\/grafana.com. When enabled, the check for a new plugin runs every 10 minutes. It will notify, via the UI, when a new plugin update exists. The check itself will not prompt any auto-updates of the plugin, nor will it send any sensitive information.\n\n### google_analytics_ua_id\n\nIf you want to track Grafana usage via Google Analytics, specify _your_ Universal\nAnalytics ID here. By default this feature is disabled.\n\n### google_analytics_4_id\n\nIf you want to track Grafana usage via Google Analytics 4, specify _your_ GA4 ID here. By default this feature is disabled.\n\n### google_tag_manager_id\n\nGoogle Tag Manager ID, only enabled if you enter an ID here.\n\n### rudderstack_write_key\n\nIf you want to track Grafana usage via Rudderstack, specify _your_ Rudderstack\nWrite Key here. The `rudderstack_data_plane_url` must also be provided for this\nfeature to be enabled. 
By default this feature is disabled.

### rudderstack_data_plane_url

Rudderstack data plane URL that will receive Rudderstack events. The `rudderstack_write_key` must also be provided for this feature to be enabled.

### rudderstack_sdk_url

Optional. If tracking with Rudderstack is enabled, you can provide a custom URL to load the Rudderstack SDK.

### rudderstack_config_url

Optional. If tracking with Rudderstack is enabled, you can provide a custom URL to load the Rudderstack config.

### rudderstack_integrations_url

Optional. If tracking with Rudderstack is enabled, you can provide a custom URL to load the SDK for destinations running in device mode. This setting is only valid for Rudderstack version 1.1 and higher.

### application_insights_connection_string

If you want to track Grafana usage via Azure Application Insights, then specify _your_ Application Insights connection string. Since the connection string contains semicolons, you need to wrap it in backticks (`). By default, tracking usage is disabled.

### application_insights_endpoint_url

Optionally, use this option to override the default endpoint address for Application Insights data collecting. For details, refer to the [Azure documentation](https://docs.microsoft.com/en-us/azure/azure-monitor/app/custom-endpoints?tabs=js).

### feedback_links_enabled

Set to `false` to remove all feedback links from the UI. Default is `true`.

<hr />

## [security]

### disable_initial_admin_creation

Disable creation of the admin user on first start of Grafana. Default is `false`.

### admin_user

The name of the default Grafana Admin user, who has full permissions. Default is `admin`.

### admin_password

The password of the default Grafana Admin. Set once on first-run. Default is `admin`.

### admin_email

The email of the default Grafana Admin, created on startup.
Default is `admin@localhost`.

### secret_key

Used for signing some data source settings, like secrets and passwords; the encryption format used is AES-256 in CFB mode. It cannot be changed without requiring an update to data source settings to re-encode them.

### disable_gravatar

Set to `true` to disable the use of Gravatar for user profile images.
Default is `false`.

### data_source_proxy_whitelist

Define a whitelist of allowed IP addresses or domains, with ports, to be used in data source URLs with the Grafana data source proxy. Format: `ip_or_domain:port` separated by spaces. PostgreSQL, MySQL, and MSSQL data sources do not use the proxy and are therefore unaffected by this setting.

### disable_brute_force_login_protection

Set to `true` to disable [brute force login protection](https://cheatsheetseries.owasp.org/cheatsheets/Authentication_Cheat_Sheet.html#account-lockout). Default is `false`. An existing user's account will be unable to log in for 5 minutes if all login attempts are spent within a 5-minute window.

### brute_force_login_protection_max_attempts

Configure how many login attempts a user can make within a 5-minute window before the account is locked. Default is `5`.

### cookie_secure

Set to `true` if you host Grafana behind HTTPS. Default is `false`.

### cookie_samesite

Sets the `SameSite` cookie attribute and prevents the browser from sending this cookie along with cross-site requests. The main goal is to mitigate the risk of cross-origin information leakage. This setting also provides some protection against cross-site request forgery attacks (CSRF), [read more about SameSite here](https://owasp.org/www-community/SameSite). Valid values are `lax`, `strict`, `none`, and `disabled`. Default is `lax`.
Using value `disabled` does not add any `SameSite` attribute to cookies.\n\n### allow_embedding\n\nWhen `false`, the HTTP header `X-Frame-Options: deny` will be set in Grafana HTTP responses which will instruct\nbrowsers to not allow rendering Grafana in a `<frame>`, `<iframe>`, `<embed>` or `<object>`. The main goal is to\nmitigate the risk of [Clickjacking](https:\/\/owasp.org\/www-community\/attacks\/Clickjacking). Default is `false`.\n\n### strict_transport_security\n\nSet to `true` if you want to enable HTTP `Strict-Transport-Security` (HSTS) response header. Only use this when HTTPS is enabled in your configuration, or when there is another upstream system that ensures your application does HTTPS (like a frontend load balancer). HSTS tells browsers that the site should only be accessed using HTTPS.\n\n### strict_transport_security_max_age_seconds\n\nSets how long a browser should cache HSTS in seconds. Only applied if strict_transport_security is enabled. The default value is `86400`.\n\n### strict_transport_security_preload\n\nSet to `true` to enable HSTS `preloading` option. Only applied if strict_transport_security is enabled. The default value is `false`.\n\n### strict_transport_security_subdomains\n\nSet to `true` to enable the HSTS includeSubDomains option. Only applied if strict_transport_security is enabled. The default value is `false`.\n\n### x_content_type_options\n\nSet to `false` to disable the X-Content-Type-Options response header. The X-Content-Type-Options response HTTP header is a marker used by the server to indicate that the MIME types advertised in the Content-Type headers should not be changed and be followed. The default value is `true`.\n\n### x_xss_protection\n\nSet to `false` to disable the X-XSS-Protection header, which tells browsers to stop pages from loading when they detect reflected cross-site scripting (XSS) attacks. 
The default value is `true`.

### content_security_policy

Set to `true` to add the Content-Security-Policy header to your requests. CSP allows you to control the resources that the user agent can load, and helps prevent XSS attacks.

### content_security_policy_template

Set the policy template that will be used when adding the `Content-Security-Policy` header to your requests. `$NONCE` in the template includes a random nonce.

### content_security_policy_report_only

Set to `true` to add the `Content-Security-Policy-Report-Only` header to your requests. CSP in Report Only mode enables you to experiment with policies by monitoring their effects without enforcing them.
You can enable both policies simultaneously.

### content_security_policy_report_only_template

Set the policy template that will be used when adding the `Content-Security-Policy-Report-Only` header to your requests. `$NONCE` in the template includes a random nonce.

### actions_allow_post_url

Sets API paths to be accessible between plugins using the POST verb. If the value is empty, you can only pass remote requests through the proxy. If the value is set, you can also send authenticated POST requests to the local server. You typically use this to enable backend communication between plugins.

This is a comma-separated list which uses glob matching.

This will allow access to all plugins that have a backend:

`actions_allow_post_url=/api/plugins/*`

This will limit access to the backend of a single plugin:

`actions_allow_post_url=/api/plugins/grafana-special-app`

<hr />

### angular_support_enabled

This is set to `false` by default, meaning that the Angular framework and support components will not be loaded.
This means that all [plugins]() and core features that depend on Angular support will stop working.

The core features that depend on Angular are:

- Old graph panel
- Old table panel

These features each have supported alternatives, and we recommend using them.

### csrf_trusted_origins

List of additional allowed URLs to pass the CSRF check. Suggested when authentication comes from an IdP.

### csrf_additional_headers

List of allowed headers to be set by the user. Suggested for use when authentication lives behind reverse proxies.

### csrf_always_check

Set to `true` to execute the CSRF check even if the login cookie is not in a request (default `false`).

### enable_frontend_sandbox_for_plugins

Comma-separated list of plugin IDs that will be loaded inside the frontend sandbox.

## [snapshots]

### enabled

Set to `false` to disable the snapshot feature (default `true`).

### external_enabled

Set to `false` to disable the external snapshot publish endpoint (default `true`).

### external_snapshot_url

Set the root URL of a Grafana instance where you want to publish external snapshots (defaults to https://snapshots.raintank.io).

### external_snapshot_name

Set the name for the external snapshot button. Defaults to `Publish to snapshots.raintank.io`.

### public_mode

Set to `true` to enable this Grafana instance to act as an external snapshot server and allow unauthenticated requests for creating and deleting snapshots. Default is `false`.

<hr />

## [dashboards]

### versions_to_keep

Number of dashboard versions to keep (per dashboard). Default: `20`, Minimum: `1`.

### min_refresh_interval

This feature prevents users from setting the dashboard refresh interval to a lower value than a given interval value. The default interval value is 5 seconds.
The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g.
`30s` or `1m`.\n\nThis also limits the refresh interval options in Explore.\n\n### default_home_dashboard_path\n\nPath to the default home dashboard. If this value is empty, then Grafana uses StaticRootPath + \"dashboards\/home.json\".\n\n\nOn Linux, Grafana uses `\/usr\/share\/grafana\/public\/dashboards\/home.json` as the default home dashboard location.\n\n\n<hr \/>\n\n## [sql_datasources]\n\n### max_open_conns_default\n\nFor SQL data sources (MySql, Postgres, MSSQL) you can override the default maximum number of open connections (default: 100). The value configured in data source settings will be preferred over the default value.\n\n### max_idle_conns_default\n\nFor SQL data sources (MySql, Postgres, MSSQL) you can override the default allowed number of idle connections (default: 100). The value configured in data source settings will be preferred over the default value.\n\n### max_conn_lifetime_default\n\nFor SQL data sources (MySql, Postgres, MSSQL) you can override the default maximum connection lifetime specified in seconds (default: 14400). The value configured in data source settings will be preferred over the default value.\n\n<hr\/>\n\n## [users]\n\n### allow_sign_up\n\nSet to `false` to prohibit users from being able to sign up \/ create\nuser accounts. Default is `false`. The admin user can still create\nusers. For more information about creating a user, refer to [Add a user]().\n\n### allow_org_create\n\nSet to `false` to prohibit users from creating new organizations.\nDefault is `false`.\n\n### auto_assign_org\n\nSet to `true` to automatically add new users to the main organization\n(id 1). When set to `false`, new users automatically cause a new\norganization to be created for that new user. The organization will be\ncreated even if the `allow_org_create` setting is set to `false`. Default is `true`.\n\n### auto_assign_org_id\n\nSet this value to automatically add new users to the provided org.\nThis requires `auto_assign_org` to be set to `true`. 
Please make sure\nthat this organization already exists. Default is 1.\n\n### auto_assign_org_role\n\nThe `auto_assign_org_role` setting determines the default role assigned to new users in the main organization if `auto_assign_org` setting is set to `true`.\nYou can set this to one of the following roles: (`Viewer` (default), `Admin`, `Editor`, and `None`). For example:\n\n`auto_assign_org_role = Viewer`\n\n### verify_email_enabled\n\nRequire email validation before sign up completes or when updating a user email address. Default is `false`.\n\n### login_default_org_id\n\nSet the default organization for users when they sign in. The default is `-1`.\n\n### login_hint\n\nText used as placeholder text on login page for login\/username input.\n\n### password_hint\n\nText used as placeholder text on login page for password input.\n\n### default_theme\n\nSets the default UI theme: `dark`, `light`, or `system`. The default theme is `dark`.\n\n`system` matches the user's system theme.\n\n### default_language\n\nThis option will set the default UI language if a supported IETF language tag like `en-US` is available.\nIf set to `detect`, the default UI language will be determined by browser preference.\nThe default is `en-US`.\n\n### home_page\n\nPath to a custom home page. Users are only redirected to this if the default home dashboard is used. It should match a frontend route and contain a leading slash.\n\n### External user management\n\nIf you manage users externally you can replace the user invite button for organizations with a link to an external site together with a description.\n\n### viewers_can_edit\n\nViewers can access and use [Explore]() and perform temporary edits on panels in dashboards they have access to. They cannot save their changes. 
Default is `false`.\n\n### editors_can_admin\n\nEditors can administrate dashboards, folders and teams they create.\nDefault is `false`.\n\n### user_invite_max_lifetime_duration\n\nThe duration in time a user invitation remains valid before expiring.\nThis setting should be expressed as a duration. Examples: 6h (hours), 2d (days), 1w (week).\nDefault is `24h` (24 hours). The minimum supported duration is `15m` (15 minutes).\n\n### verification_email_max_lifetime_duration\n\nThe duration in time a verification email, used to update the email address of a user, remains valid before expiring.\nThis setting should be expressed as a duration. Examples: 6h (hours), 2d (days), 1w (week).\nDefault is 1h (1 hour).\n\n### last_seen_update_interval\n\nThe frequency of updating a user's last seen time.\nThis setting should be expressed as a duration. Examples: 1h (hour), 15m (minutes)\nDefault is `15m` (15 minutes). The minimum supported duration is `5m` (5 minutes). The maximum supported duration is `1h` (1 hour).\n\n### hidden_users\n\nThis is a comma-separated list of usernames. Users specified here are hidden in the Grafana UI. They are still visible to Grafana administrators and to themselves.\n\n<hr>\n\n## [auth]\n\nGrafana provides many ways to authenticate users. Refer to the Grafana [Authentication overview]() and other authentication documentation for detailed instructions on how to set up and configure authentication.\n\n### login_cookie_name\n\nThe cookie name for storing the auth token. Default is `grafana_session`.\n\n### login_maximum_inactive_lifetime_duration\n\nThe maximum lifetime (duration) an authenticated user can be inactive before being required to login at next visit. Default is 7 days (7d).\nThis setting should be expressed as a duration, e.g. 5m (minutes), 6h (hours), 10d (days), 2w (weeks), 1M (month). 
The lifetime resets at each successful token rotation (token_rotation_interval_minutes).\n\n### login_maximum_lifetime_duration\n\nThe maximum lifetime (duration) an authenticated user can be logged in since login time before being required to login. Default is 30 days (30d).\nThis setting should be expressed as a duration, e.g. 5m (minutes), 6h (hours), 10d (days), 2w (weeks), 1M (month).\n\n### token_rotation_interval_minutes\n\nHow often auth tokens are rotated for authenticated users when the user is active. The default is each 10 minutes.\n\n### disable_login_form\n\nSet to true to disable (hide) the login form, useful if you use OAuth. Default is false.\n\n### disable_signout_menu\n\nSet to `true` to disable the signout link in the side menu. This is useful if you use auth.proxy. Default is `false`.\n\n### signout_redirect_url\n\nThe URL the user is redirected to upon signing out. To support [OpenID Connect RP-Initiated Logout](https:\/\/openid.net\/specs\/openid-connect-rpinitiated-1_0.html), the user must add `post_logout_redirect_uri` to the `signout_redirect_url`.\n\nExample:\n\nsignout_redirect_url = http:\/\/localhost:8087\/realms\/grafana\/protocol\/openid-connect\/logout?post_logout_redirect_uri=http%3A%2F%2Flocalhost%3A3000%2Flogin\n\n### oauth_auto_login\n\n\nThis option is deprecated - use `auto_login` option for specific OAuth provider instead.\n\n\nSet to `true` to attempt login with OAuth automatically, skipping the login screen.\nThis setting is ignored if multiple OAuth providers are configured. Default is `false`.\n\n### oauth_state_cookie_max_age\n\nHow many seconds the OAuth state cookie lives before being deleted. Default is `600` (seconds)\nAdministrators can increase this if they experience OAuth login state mismatch errors.\n\n### oauth_login_error_message\n\nA custom error message for when users are unauthorized. 
Default is a key for an internationalized phrase in the frontend, `Login provider denied login request`.

### oauth_refresh_token_server_lock_min_wait_ms

Minimum wait time in milliseconds for the server lock retry mechanism. Default is `1000` (milliseconds). The server lock retry mechanism is used to prevent multiple Grafana instances from simultaneously refreshing OAuth tokens. This mechanism waits at least this amount of time before retrying to acquire the server lock.

There are five retries in total, so with the default value, the total wait time (for acquiring the lock) is at least 5 seconds (the wait time between retries is calculated as random(n, n + 500)), which means that the maximum token refresh duration must be less than 5-6 seconds.

If you experience issues with the OAuth token refresh mechanism, you can increase this value to allow more time for the token refresh to complete.

### oauth_skip_org_role_update_sync

This option was removed in Grafana 11 in favor of the OAuth provider-specific `skip_org_role_sync` settings. The following sections explain the settings for each provider.

If you want to change the `oauth_skip_org_role_update_sync` setting from `true` to `false`, then for each provider you have set up, use the `skip_org_role_sync` setting to specify whether you want to skip the synchronization.

Currently, if no organization role mapping is found for a user, Grafana doesn't update the user's organization role.
With Grafana 10, if the `oauth_skip_org_role_update_sync` option is set to `false`, users with no mapping will be reset to the default organization role on every login.
[See `auto_assign_org_role` option]().\n\n\n### skip_org_role_sync\n\n`skip_org_role_sync` prevents the synchronization of organization roles for a specific OAuth integration, while the deprecated setting `oauth_skip_org_role_update_sync` affects all configured OAuth providers.\n\nThe default value for `skip_org_role_sync` is `false`.\n\nWith `skip_org_role_sync` set to `false`, the users' organization and role is reset on every new login, based on the external provider's role. See your provider in the tables below.\n\nWith `skip_org_role_sync` set to `true`, when a user logs in for the first time, Grafana sets the organization role based on the value specified in `auto_assign_org_role` and forces the organization to `auto_assign_org_id` when specified, otherwise it falls back to OrgID `1`.\n\n> **Note**: Enabling `skip_org_role_sync` also disables the synchronization of Grafana Admins from the external provider, as such `allow_assign_grafana_admin` is ignored.\n\nUse this setting when you want to manage the organization roles of your users from within Grafana and be able to manually assign them to multiple organizations, or to prevent synchronization conflicts when they can be synchronized from another provider.\n\nThe behavior of `oauth_skip_org_role_update_sync` and `skip_org_role_sync`, can be seen in the tables below:\n\n**[auth.grafana_com]**\n| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable |\n|-----------------------------------|----------------------|-------------------------------------------------------------------------------------------------------------------------------------|---------------------------|\n| false | false | Synchronize user organization role with Grafana.com role. If no role is provided, `auto_assign_org_role` is set. | false |\n| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. 
| true |\n| false | true | Skips organization role synchronization for Grafana.com users. Role is set to `auto_assign_org_role`. | true |\n| true | true | Skips organization role synchronization for Grafana.com users and all other OAuth providers. Role is set to `auto_assign_org_role`. | true |\n\n**[auth.azuread]**\n| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable |\n|-----------------------------------|----------------------|---------------------------------------------------------------------------------------------------------------------------------|---------------------------|\n| false | false | Synchronize user organization role with AzureAD role. If no role is provided, `auto_assign_org_role` is set. | false |\n| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true |\n| false | true | Skips organization role synchronization for AzureAD users. Role is set to `auto_assign_org_role`. | true |\n| true | true | Skips organization role synchronization for AzureAD users and all other OAuth providers. Role is set to `auto_assign_org_role`. | true |\n\n**[auth.google]**\n| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable |\n|-----------------------------------|----------------------|----------------------------------------------------------------------------------------|---------------------------|\n| false | false | User organization role is set to `auto_assign_org_role` and cannot be changed. | false |\n| true | false | User organization role is set to `auto_assign_org_role` and can be changed in Grafana. | true |\n| false | true | User organization role is set to `auto_assign_org_role` and can be changed in Grafana. | true |\n| true | true | User organization role is set to `auto_assign_org_role` and can be changed in Grafana. 
| true |\n\n\nFor GitLab, GitHub, Okta, Generic OAuth providers, Grafana synchronizes organization roles and sets Grafana Admins. The `allow_assign_grafana_admin` setting is also accounted for, to allow or not setting the Grafana Admin role from the external provider.\n\n\n**[auth.github]**\n| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable |\n|-----------------------------------|----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------|\n| false | false | Synchronize user organization role with GitHub role. If no role is provided, `auto_assign_org_role` is set. | false |\n| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true |\n| false | true | Skips organization role and Grafana Admin synchronization for GitHub users. Role is set to `auto_assign_org_role`. | true |\n| true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for GitHub users. Role is set to `auto_assign_org_role`. | true |\n\n**[auth.gitlab]**\n| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable |\n|-----------------------------------|----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------|\n| false | false | Synchronize user organization role with Gitlab role. If no role is provided, `auto_assign_org_role` is set. | false |\n| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true |\n| false | true | Skips organization role and Grafana Admin synchronization for Gitlab users. 
Role is set to `auto_assign_org_role`. | true |\n| true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for Gitlab users. Role is set to `auto_assign_org_role`. | true |\n\n**[auth.generic_oauth]**\n| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable |\n|-----------------------------------|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------|\n| false | false | Synchronize user organization role with the provider's role. If no role is provided, `auto_assign_org_role` is set. | false |\n| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true |\n| false | true | Skips organization role and Grafana Admin synchronization for the provider's users. Role is set to `auto_assign_org_role`. | true |\n| true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for the provider's users. Role is set to `auto_assign_org_role`. | true |\n\n**[auth.okta]**\n| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | Modifiable |\n|-----------------------------------|----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------|\n| false | false | Synchronize user organization role with Okta role. If no role is provided, `auto_assign_org_role` is set. | false |\n| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true |\n| false | true | Skips organization role and Grafana Admin synchronization for Okta users. 
Role is set to `auto_assign_org_role`. | true |
| true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for Okta users. Role is set to `auto_assign_org_role`. | true |

#### Example skip_org_role_sync

**[auth.google]**
| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | **Resulting Org Role** | **Example Scenario** |
|-----------------------------------|----------------------|------------------------|----------------------|
| false | false | Synchronized with Google Auth organization roles | A user logs in to Grafana using their Google account and their organization role is automatically set based on their role in Google. |
| true | false | Skipped synchronization of organization roles from all OAuth providers | A user logs in to Grafana using their Google account and their organization role is **not** set based on their role. But Grafana Administrators can modify the role from the UI. |
| false | true | Skipped synchronization of organization roles from Google | A user logs in to Grafana using their Google account and their organization role is **not** set based on their role in Google. But Grafana Administrators can modify the role from the UI. |
| true | true | Skipped synchronization of organization roles from all OAuth providers, including Google | A user logs in to Grafana using their Google account and their organization role is **not** set based on their role in Google. But Grafana Administrators can modify the role from the UI. |

### api_key_max_seconds_to_live

Limit of API key seconds to live before expiration.
Default is -1 (unlimited).\n\n### sigv4_auth_enabled\n\nSet to `true` to enable the AWS Signature Version 4 Authentication option for HTTP-based datasources. Default is `false`.\n\n### sigv4_verbose_logging\n\nSet to `true` to enable verbose request signature logging when AWS Signature Version 4 Authentication is enabled. Default is `false`.\n\n<hr \/>\n\n### managed_service_accounts_enabled\n\n> Only available in Grafana 11.3+.\n\nSet to `true` to enable the use of managed service accounts for plugin authentication. Default is `false`.\n\n> **Limitations:**\n> This feature currently **only supports single-organization deployments**.\n> The plugin's service account is automatically created in the default organization. This means the plugin can only access data and resources within that specific organization.\n\n## [auth.anonymous]\n\nRefer to [Anonymous authentication]() for detailed instructions.\n\n<hr \/>\n\n## [auth.github]\n\nRefer to [GitHub OAuth2 authentication]() for detailed instructions.\n\n<hr \/>\n\n## [auth.gitlab]\n\nRefer to [Gitlab OAuth2 authentication]() for detailed instructions.\n\n<hr \/>\n\n## [auth.google]\n\nRefer to [Google OAuth2 authentication]() for detailed instructions.\n\n<hr \/>\n\n## [auth.grafananet]\n\nLegacy key names, still in the config file so they work in env variables.\n\n<hr \/>\n\n## [auth.grafana_com]\n\nLegacy key names, still in the config file so they work in env variables.\n\n<hr \/>\n\n## [auth.azuread]\n\nRefer to [Azure AD OAuth2 authentication]() for detailed instructions.\n\n<hr \/>\n\n## [auth.okta]\n\nRefer to [Okta OAuth2 authentication]() for detailed instructions.\n\n<hr \/>\n\n## [auth.generic_oauth]\n\nRefer to [Generic OAuth authentication]() for detailed instructions.\n\n<hr \/>\n\n## [auth.basic]\n\nRefer to [Basic authentication]() for detailed instructions.\n\n<hr \/>\n\n## [auth.proxy]\n\nRefer to [Auth proxy authentication]() for detailed instructions.\n\n<hr \/>\n\n## [auth.ldap]\n\nRefer to [LDAP 
authentication]() for detailed instructions.\n\n## [aws]\n\nYou can configure core and external AWS plugins.\n\n### allowed_auth_providers\n\nSpecify what authentication providers the AWS plugins allow. For a list of allowed providers, refer to the data-source configuration page for a given plugin. If you configure a plugin by provisioning, only providers that are specified in `allowed_auth_providers` are allowed.\n\nOptions: `default` (AWS SDK default), `keys` (Access and secret key), `credentials` (Credentials file), `ec2_iam_role` (EC2 IAM role)\n\n### assume_role_enabled\n\nSet to `false` to disable AWS authentication from using an assumed role with temporary security credentials. For details about assume roles, refer to the AWS API reference documentation about the [AssumeRole](https:\/\/docs.aws.amazon.com\/STS\/latest\/APIReference\/API_AssumeRole.html) operation.\n\nIf this option is disabled, the **Assume Role** and the **External Id** field are removed from the AWS data source configuration page. If the plugin is configured using provisioning, it is possible to use an assumed role as long as `assume_role_enabled` is set to `true`.\n\n### list_metrics_page_limit\n\nUse the [List Metrics API](https:\/\/docs.aws.amazon.com\/AmazonCloudWatch\/latest\/APIReference\/API_ListMetrics.html) option to load metrics for custom namespaces in the CloudWatch data source. 
By default, the page limit is 500.

<hr />

## [azure]

Grafana supports additional integration with Azure services when hosted in the Azure Cloud.

### cloud

Azure cloud environment where Grafana is hosted:

| Azure Cloud                                      | Value                  |
| ------------------------------------------------ | ---------------------- |
| Microsoft Azure public cloud                     | AzureCloud (_default_) |
| Microsoft Chinese national cloud                 | AzureChinaCloud        |
| US Government cloud                              | AzureUSGovernment      |
| Microsoft German national cloud ("Black Forest") | AzureGermanCloud       |

### clouds_config

The JSON config defines a list of Azure clouds and their associated properties when hosted in custom Azure environments.

For example:

```ini
clouds_config = `[
		{
			"name":"CustomCloud1",
			"displayName":"Custom Cloud 1",
			"aadAuthority":"https://login.cloud1.contoso.com/",
			"properties":{
				"azureDataExplorerSuffix": ".kusto.windows.cloud1.contoso.com",
				"logAnalytics":            "https://api.loganalytics.cloud1.contoso.com",
				"portal":                  "https://portal.azure.cloud1.contoso.com",
				"prometheusResourceId":    "https://prometheus.monitor.azure.cloud1.contoso.com",
				"resourceManager":         "https://management.azure.cloud1.contoso.com"
			}
		}]`
```

### managed_identity_enabled

Specifies whether Grafana is hosted in an Azure service with Managed Identity configured (for example, an Azure Virtual Machines instance).
Disabled by default, needs to be explicitly enabled.\n\n### managed_identity_client_id\n\nThe client ID to use for user-assigned managed identity.\n\nShould be set for user-assigned identity and should be empty for system-assigned identity.\n\n### workload_identity_enabled\n\nSpecifies whether Azure AD Workload Identity authentication should be enabled in datasources that support it.\n\nFor more information, refer to the [Azure AD Workload Identity](https:\/\/azure.github.io\/azure-workload-identity\/docs\/) documentation.\n\nDisabled by default, needs to be explicitly enabled.\n\n### workload_identity_tenant_id\n\nTenant ID of the Azure AD Workload Identity.\n\nAllows overriding the default tenant ID of the Azure AD identity associated with the Kubernetes service account.\n\n### workload_identity_client_id\n\nClient ID of the Azure AD Workload Identity.\n\nAllows overriding the default client ID of the Azure AD identity associated with the Kubernetes service account.\n\n### workload_identity_token_file\n\nCustom path to token file for the Azure AD Workload Identity.\n\nAllows setting a custom path to the projected service account token file.\n\n### user_identity_enabled\n\nSpecifies whether user identity authentication (on behalf of the currently signed-in user) should be enabled in datasources that support it (requires AAD authentication).\n\nDisabled by default, needs to be explicitly enabled.\n\n### user_identity_fallback_credentials_enabled\n\nSpecifies whether user identity authentication fallback credentials should be enabled in data sources. Enabling this allows data source creators to provide fallback credentials for backend-initiated requests, such as alerting, recorded queries, and so on.\n\nIt is enabled by default and needs to be explicitly disabled.
It will not have any effect if user identity authentication is disabled.\n\n### user_identity_token_url\n\nOverride token URL for Azure Active Directory.\n\nBy default, this is the same as the token URL configured in the AAD authentication settings.\n\n### user_identity_client_id\n\nOverride the AAD application ID used to exchange the user's token for an access token for the data source.\n\nBy default, this is the same as the application used in AAD authentication, or it can be set to another application (for the OBO flow).\n\n### user_identity_client_secret\n\nOverride the AAD application client secret.\n\nBy default, this is the same as the application used in AAD authentication, or it can be set to another application (for the OBO flow).\n\n### forward_settings_to_plugins\n\nSet plugins that will receive Azure settings via plugin context.\n\nBy default, this will include all Grafana Labs owned Azure plugins or those that use Azure settings (Azure Monitor, Azure Data Explorer, Prometheus, MSSQL).\n\n### azure_entra_password_credentials_enabled\n\nSpecifies whether Entra password auth can be used for the MSSQL data source. This authentication method is not recommended, and you should take careful consideration before enabling it.\n\nDisabled by default, needs to be explicitly enabled.\n\n## [auth.jwt]\n\nRefer to [JWT authentication]() for more information.\n\n<hr \/>\n\n## [smtp]\n\nEmail server settings.\n\n### enabled\n\nEnable this to allow Grafana to send email. Default is `false`.\n\n### host\n\nDefault is `localhost:25`. Use port 465 for implicit TLS.\n\n### user\n\nIn case of SMTP auth, default is `empty`.\n\n### password\n\nIn case of SMTP auth, default is `empty`. If the password contains `#` or `;`, then you have to wrap it with triple quotes.
Example: \"\"\"#password;\"\"\"\n\n### cert_file\n\nFile path to a cert file, default is `empty`.\n\n### key_file\n\nFile path to a key file, default is `empty`.\n\n### skip_verify\n\nVerify SSL for SMTP server, default is `false`.\n\n### from_address\n\nAddress used when sending out emails, default is `admin@grafana.localhost`.\n\n### from_name\n\nName to be used when sending out emails, default is `Grafana`.\n\n### ehlo_identity\n\nName to be used as client identity for EHLO in SMTP dialog, default is `<instance_name>`.\n\n### startTLS_policy\n\nEither \"OpportunisticStartTLS\", \"MandatoryStartTLS\", \"NoStartTLS\". Default is `empty`.\n\n### enable_tracing\n\nEnable trace propagation in e-mail headers, using the `traceparent`, `tracestate` and (optionally) `baggage` fields. Default is `false`. To enable, you must first configure tracing in one of the `tracing.opentelemetry.*` sections.\n\n<hr>\n\n## [smtp.static_headers]\n\nEnter key-value pairs on their own lines to be included as headers on outgoing emails. All keys must be in canonical mail header format.\nExamples: `Foo=bar`, `Foo-Header=bar`.\n\n<hr>\n\n## [emails]\n\n### welcome_email_on_sign_up\n\nDefault is `false`.\n\n### templates_pattern\n\nEnter a comma-separated list of template patterns. Default is `emails\/*.html, emails\/*.txt`.\n\n### content_types\n\nEnter a comma-separated list of content types that should be included in the emails that are sent. List the content types according to descending preference, e.g. `text\/html, text\/plain` for HTML as the most preferred. The order of the parts is significant as the mail clients will use the content type that is supported and most preferred by the sender. Supported content types are `text\/html` and `text\/plain`. Default is `text\/html`.\n\n<hr>\n\n## [log]\n\nGrafana logging options.\n\n### mode\n\nOptions are \"console\", \"file\", and \"syslog\". Default is \"console\" and \"file\". Use spaces to separate multiple modes, e.g.
`console file`.\n\n### level\n\nOptions are \"debug\", \"info\", \"warn\", \"error\", and \"critical\". Default is `info`.\n\n### filters\n\nOptional settings to set different levels for specific loggers.\nFor example: `filters = sqlstore:debug`\n\n### user_facing_default_error\n\nUse this configuration option to set the default error message shown to users. This message is displayed instead of sensitive backend errors, which should be obfuscated. The default message is `Please inspect the Grafana server log for details.`.\n\n<hr>\n\n## [log.console]\n\nOnly applicable when \"console\" is used in `[log]` mode.\n\n### level\n\nOptions are \"debug\", \"info\", \"warn\", \"error\", and \"critical\". Default is inherited from `[log]` level.\n\n### format\n\nLog line format, valid options are text, console and json. Default is `console`.\n\n<hr>\n\n## [log.file]\n\nOnly applicable when \"file\" is used in `[log]` mode.\n\n### level\n\nOptions are \"debug\", \"info\", \"warn\", \"error\", and \"critical\". Default is inherited from `[log]` level.\n\n### format\n\nLog line format, valid options are text, console and json. Default is `text`.\n\n### log_rotate\n\nEnable automated log rotation, valid options are `false` or `true`. Default is `true`.\nWhen enabled, use the `max_lines`, `max_size_shift`, `daily_rotate` and `max_days` options to configure the behavior of the log rotation.\n\n### max_lines\n\nMaximum lines per file before rotating it. Default is `1000000`.\n\n### max_size_shift\n\nMaximum size of file before rotating it. Default is `28`, which means `1 << 28`, `256MB`.\n\n### daily_rotate\n\nEnable daily rotation of files, valid options are `false` or `true`. Default is `true`.\n\n### max_days\n\nMaximum number of days to keep log files. Default is `7`.\n\n<hr>\n\n## [log.syslog]\n\nOnly applicable when \"syslog\" is used in `[log]` mode.\n\n### level\n\nOptions are \"debug\", \"info\", \"warn\", \"error\", and \"critical\".
Default is inherited from `[log]` level.\n\n### format\n\nLog line format, valid options are text, console, and json. Default is `text`.\n\n### network and address\n\nSyslog network type and address. This can be UDP, TCP, or UNIX. If left blank, then the default UNIX endpoints are used.\n\n### facility\n\nSyslog facility. Valid options are user, daemon or local0 through local7. Default is empty.\n\n### tag\n\nSyslog tag. By default, the process's `argv[0]` is used.\n\n<hr>\n\n## [log.frontend]\n\n### enabled\n\nEnables the Faro JavaScript agent. Default is `false`.\n\n### custom_endpoint\n\nCustom HTTP endpoint to send events captured by the Faro agent to. The default, `\/log-grafana-javascript-agent`, logs the events to stdout.\n\n### log_endpoint_requests_per_second_limit\n\nRequests-per-second limit enforced over an extended period for the Grafana backend log ingestion endpoint, `\/log-grafana-javascript-agent`. Default is `3`.\n\n### log_endpoint_burst_limit\n\nMaximum requests accepted per short interval of time for the Grafana backend log ingestion endpoint, `\/log-grafana-javascript-agent`. Default is `15`.\n\n### instrumentations_all_enabled\n\nEnables all Faro default instrumentation by using `getWebInstrumentations`. Overrides other instrumentation flags.\n\n### instrumentations_errors_enabled\n\nTurn on error instrumentation. Only affects Grafana Javascript Agent.\n\n### instrumentations_console_enabled\n\nTurn on console instrumentation. Only affects Grafana Javascript Agent.\n\n### instrumentations_webvitals_enabled\n\nTurn on webvitals instrumentation. Only affects Grafana Javascript Agent.\n\n### instrumentations_tracing_enabled\n\nTurns on tracing instrumentation. Only affects Grafana Javascript Agent.\n\n### api_key\n\nIf `custom_endpoint` requires authentication, you can set the API key here. Only relevant for Grafana Javascript Agent provider.\n\n<hr>\n\n## [quota]\n\nSet quotas to `-1` to make unlimited.\n\n### enabled\n\nEnable usage quotas.
Default is `false`.\n\n### org_user\n\nLimit the number of users allowed per organization. Default is 10.\n\n### org_dashboard\n\nLimit the number of dashboards allowed per organization. Default is 100.\n\n### org_data_source\n\nLimit the number of data sources allowed per organization. Default is 10.\n\n### org_api_key\n\nLimit the number of API keys that can be entered per organization. Default is 10.\n\n### org_alert_rule\n\nLimit the number of alert rules that can be entered per organization. Default is 100.\n\n### user_org\n\nLimit the number of organizations a user can create. Default is 10.\n\n### global_user\n\nSets a global limit on the number of users. Default is -1 (unlimited).\n\n### global_org\n\nSets a global limit on the number of organizations that can be created. Default is -1 (unlimited).\n\n### global_dashboard\n\nSets a global limit on the number of dashboards that can be created. Default is -1 (unlimited).\n\n### global_api_key\n\nSets a global limit on the number of API keys that can be entered. Default is -1 (unlimited).\n\n### global_session\n\nSets a global limit on the number of users that can be logged in at one time. Default is -1 (unlimited).\n\n### global_alert_rule\n\nSets a global limit on the number of alert rules that can be created. Default is -1 (unlimited).\n\n### global_correlations\n\nSets a global limit on the number of correlations that can be created. Default is -1 (unlimited).\n\n### alerting_rule_evaluation_results\n\nLimit the number of query evaluation results per alert rule. If the condition query of an alert rule produces more results than this limit, the evaluation results in an error. Default is -1 (unlimited).\n\n<hr>\n\n## [unified_alerting]\n\nFor more information about Grafana alerts, refer to [Grafana Alerting]().\n\n### enabled\n\nEnable or disable Grafana Alerting.
The default value is `true`.\n\nAlerting rules migrated from dashboards and panels will include a link back via the `annotations`.\n\n### disabled_orgs\n\nComma-separated list of organization IDs for which to disable Grafana 8 Unified Alerting.\n\n### admin_config_poll_interval\n\nSpecify the frequency of polling for admin config changes. The default value is `60s`.\n\nThe interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.\n\n### alertmanager_config_poll_interval\n\nSpecify the frequency of polling for Alertmanager config changes. The default value is `60s`.\n\nThe interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.\n\n### ha_redis_address\n\nThe Redis server address that should be connected to.\n\n\nFor more information on Redis, refer to [Enable alerting high availability using Redis](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/alerting\/set-up\/configure-high-availability\/#enable-alerting-high-availability-using-redis).\n\n\n### ha_redis_username\n\nThe username that should be used to authenticate with the Redis server.\n\n### ha_redis_password\n\nThe password that should be used to authenticate with the Redis server.\n\n### ha_redis_db\n\nThe Redis database. The default value is `0`.\n\n### ha_redis_prefix\n\nA prefix that is used for every key or channel that is created on the Redis server as part of HA for alerting.\n\n### ha_redis_peer_name\n\nThe name of the cluster peer that will be used as an identifier. If none is provided, a random one will be generated.\n\n### ha_redis_max_conns\n\nThe maximum number of simultaneous Redis connections.\n\n### ha_listen_address\n\nListen IP address and port to receive unified alerting messages for other Grafana instances. The port is used for both TCP and UDP. It is assumed other Grafana instances are also running on the same port. 
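For example, a gossip-based HA setup might set the listen address explicitly (the address shown is the documented default):\n\n```ini\n[unified_alerting]\n; port 9094 serves both TCP and UDP gossip traffic\nha_listen_address = 0.0.0.0:9094\n```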
The default value is `0.0.0.0:9094`.\n\n### ha_advertise_address\n\nExplicit IP address and port to advertise to other Grafana instances. The port is used for both TCP and UDP.\n\n### ha_peers\n\nComma-separated list of initial instances (in the format `host:port`) that will form the HA cluster. Configuring this setting will enable High Availability mode for alerting.\n\n### ha_peer_timeout\n\nTime to wait for an instance to send a notification via the Alertmanager. In HA, each Grafana instance will\nbe assigned a position (e.g. 0, 1). We then multiply this position by the timeout to indicate how long each\ninstance should wait before sending the notification, to take replication lag into account. The default value is `15s`.\n\nThe interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.\n\n### ha_label\n\nThe label is an optional string to include on each packet and stream. It uniquely identifies the cluster and prevents cross-communication issues when sending gossip messages in an environment with multiple clusters.\n\n### ha_gossip_interval\n\nThe interval between sending gossip messages. By lowering this value (more frequent messages), gossip is propagated\nacross the cluster more quickly at the expense of increased bandwidth usage. The default value is `200ms`.\n\nThe interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.\n\n### ha_reconnect_timeout\n\nLength of time to attempt to reconnect to a lost peer. When running Grafana in a Kubernetes cluster, set this duration to less than `15m`.\n\nThe string is a possibly signed sequence of decimal numbers followed by a unit suffix (ms, s, m, h, d), such as `30s` or `1m`.\n\n### ha_push_pull_interval\n\nThe interval between gossip full state syncs.
Setting this interval lower (more frequent) will increase convergence speeds\nacross larger clusters at the expense of increased bandwidth usage. The default value is `60s`.\n\nThe interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.\n\n### execute_alerts\n\nEnable or disable alerting rule execution. The default value is `true`. The alerting UI remains visible.\n\n### evaluation_timeout\n\nSets the alert evaluation timeout when fetching data from the data source. The default value is `30s`.\n\nThe timeout string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.\n\n### max_attempts\n\nSets a maximum number of times we'll attempt to evaluate an alert rule before giving up on that evaluation. The default value is `1`.\n\n### min_interval\n\nSets the minimum interval to enforce between rule evaluations. The default value is `10s`, which equals the scheduler interval. Rules will be adjusted if they are less than this value or if they are not a multiple of the scheduler interval (10s). Higher values can help with resource management as we'll schedule fewer evaluations over time.\n\nThe interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.\n\n> **Note.** This setting has precedence over each individual rule frequency. If a rule frequency is lower than this value, then this value is enforced.\n\n<hr>\n\n## [unified_alerting.screenshots]\n\nFor more information about screenshots, refer to [Images in notifications]().\n\n### capture\n\nEnable screenshots in notifications. This option requires a remote HTTP image rendering service. Please see `[rendering]` for further configuration options.\n\n### capture_timeout\n\nThe timeout for capturing screenshots.
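As an illustrative sketch (the timeout value shown is an assumption, not a default), screenshot capture could be configured as:\n\n```ini\n[unified_alerting.screenshots]\n; requires a remote HTTP image rendering service, see [rendering]\ncapture = true\n; illustrative value; must not exceed the 30-second maximum\ncapture_timeout = 10s\n```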
If a screenshot cannot be captured within the timeout then the notification is sent without a screenshot.\nThe maximum duration is 30 seconds. This timeout should be less than the minimum Interval of all Evaluation Groups to avoid back pressure on alert rule evaluation.\n\n### max_concurrent_screenshots\n\nThe maximum number of screenshots that can be taken at the same time. This option is different from `concurrent_render_request_limit` as `max_concurrent_screenshots` sets the number of concurrent screenshots that can be taken at the same time for all firing alerts, whereas `concurrent_render_request_limit` sets the total number of concurrent screenshots across all Grafana services.\n\n### upload_external_image_storage\n\nUploads screenshots to the local Grafana server or remote storage such as Azure, S3 and GCS. Please see `[external_image_storage]` for further configuration options. If this option is `false`, screenshots are persisted to disk for up to `temp_data_lifetime`.\n\n<hr>\n\n## [unified_alerting.reserved_labels]\n\nFor more information about Grafana Reserved Labels, refer to [Labels in Grafana Alerting](\/docs\/grafana\/next\/alerting\/fundamentals\/annotation-label\/how-to-use-labels\/).\n\n### disabled_labels\n\nComma-separated list of reserved labels added by the Grafana Alerting engine that should be disabled.\n\nFor example: `disabled_labels=grafana_folder`\n\n<hr>\n\n## [unified_alerting.state_history.annotations]\n\nThis section controls retention of annotations automatically created while evaluating alert rules, when the alerting state history backend is configured to be annotations (see the `backend` setting in [unified_alerting.state_history]).\n\n### max_age\n\nConfigures how long alert annotations are stored. Default is 0, which keeps them forever. This setting should be expressed as a duration. Examples: 6h (hours), 10d (days), 2w (weeks), 1M (month).\n\n### max_annotations_to_keep\n\nConfigures max number of alert annotations that Grafana stores.
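A hypothetical retention policy for the annotation-backed state history (both values are illustrative) might look like:\n\n```ini\n[unified_alerting.state_history.annotations]\n; keep alert annotations for 30 days\nmax_age = 30d\n; cap the number of stored alert annotations (0 keeps all)\nmax_annotations_to_keep = 100000\n```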
Default value is 0, which keeps all alert annotations.\n\n<hr>\n\n## [annotations]\n\n### cleanupjob_batchsize\n\nConfigures the batch size for the annotation clean-up job. This setting is used for dashboard, API, and alert annotations.\n\n### tags_length\n\nEnforces the maximum allowed length of the tags for any newly introduced annotations. It can be between 500 and 4096 (inclusive). Default value is 500. Setting it to a higher value would impact performance and is therefore not recommended.\n\n## [annotations.dashboard]\n\nDashboard annotations are annotations associated with the dashboard on which they are created.\n\n### max_age\n\nConfigures how long dashboard annotations are stored. Default is 0, which keeps them forever.\nThis setting should be expressed as a duration. Examples: 6h (hours), 10d (days), 2w (weeks), 1M (month).\n\n### max_annotations_to_keep\n\nConfigures max number of dashboard annotations that Grafana stores. Default value is 0, which keeps all dashboard annotations.\n\n## [annotations.api]\n\nAPI annotations are annotations created using the API without any association with a dashboard.\n\n### max_age\n\nConfigures how long Grafana stores API annotations. Default is 0, which keeps them forever.\nThis setting should be expressed as a duration. Examples: 6h (hours), 10d (days), 2w (weeks), 1M (month).\n\n### max_annotations_to_keep\n\nConfigures max number of API annotations that Grafana keeps. Default value is 0, which keeps all API annotations.\n\n<hr>\n\n## [explore]\n\nFor more information about this feature, refer to [Explore]().\n\n### enabled\n\nEnable or disable the Explore section. Default is `enabled`.\n\n### defaultTimeOffset\n\nSet a default time offset from now on the time picker. Default is 1 hour.\nThis setting should be expressed as a duration. Examples: 1h (hour), 1d (day), 1w (week), 1M (month).\n\n## [help]\n\nConfigures the help section.\n\n### enabled\n\nEnable or disable the Help section.
Default is `enabled`.\n\n## [profile]\n\nConfigures the Profile section.\n\n### enabled\n\nEnable or disable the Profile section. Default is `enabled`.\n\n## [news]\n\n### news_feed_enabled\n\nEnables the news feed section. Default is `true`.\n\n<hr>\n\n## [query]\n\n### concurrent_query_limit\n\nSet the number of queries that can be executed concurrently in a mixed data source panel. Default is the number of CPUs.\n\n## [query_history]\n\nConfigures Query history in Explore.\n\n### enabled\n\nEnable or disable the Query history. Default is `enabled`.\n\n<hr>\n\n## [short_links]\n\nConfigures settings around the short link feature.\n\n### expire_time\n\nShort links that are never accessed are considered expired or stale and are deleted during cleanup. Set the expiration time in days. The default is `7` days. The maximum is `365` days, and settings above the maximum are reduced to `365`. Setting `0` means the short links will be cleaned up approximately every 10 minutes. A negative value such as `-1` will disable expiry.\n\n\nShort links without an expiration increase the size of the database and can\u2019t be deleted.\n\n\n<hr>\n\n## [metrics]\n\nFor detailed instructions, refer to [Internal Grafana metrics]().\n\n### enabled\n\nEnable metrics reporting. Default is `true`. Available via HTTP API `<URL>\/metrics`.\n\n### interval_seconds\n\nFlush\/write interval when sending metrics to external TSDB. Defaults to `10`.\n\n### disable_total_stats\n\nIf set to `true`, then total stats generation (`stat_totals_*` metrics) is disabled. Default is `false`.\n\n### total_stats_collector_interval_seconds\n\nSets the total stats collector interval.
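Putting the `[metrics]` options together, a minimal sketch (values shown mirror the documented defaults) could be:\n\n```ini\n[metrics]\nenabled = true\n; flush\/write interval in seconds when exporting to an external TSDB\ninterval_seconds = 10\n; keep stat_totals_* metrics enabled\ndisable_total_stats = false\ntotal_stats_collector_interval_seconds = 1800\n```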
The default is 1800 seconds (30 minutes).\n\n### basic_auth_username and basic_auth_password\n\nIf both are set, then basic authentication is required to access the metrics endpoint.\n\n<hr>\n\n## [metrics.environment_info]\n\nAdds dimensions to the `grafana_environment_info` metric, which can expose more information about the Grafana instance.\n\n```\n; exampleLabel1 = exampleValue1\n; exampleLabel2 = exampleValue2\n```\n\n## [metrics.graphite]\n\nUse these options if you want to send internal Grafana metrics to Graphite.\n\n### address\n\nEnable by setting the address. Format is `<Hostname or ip>`:port.\n\n### prefix\n\nGraphite metric prefix. Defaults to `prod.grafana.%(instance_name)s.`\n\n<hr>\n\n## [grafana_net]\n\nRefer to [grafana_com] config as that is the new and preferred config name.\nThe grafana_net config is still accepted and parsed to grafana_com config.\n\n<hr>\n\n## [grafana_com]\n\n### url\n\nDefault is https:\/\/grafana.com.\nThe default authentication identity provider for Grafana Cloud.\n\n<hr>\n\n## [tracing.jaeger]\n\n[Deprecated - use tracing.opentelemetry.jaeger or tracing.opentelemetry.otlp instead]\n\nConfigure Grafana's Jaeger client for distributed tracing.\n\nYou can also use the standard `JAEGER_*` environment variables to configure\nJaeger. See the table at the end of https:\/\/www.jaegertracing.io\/docs\/1.16\/client-features\/\nfor the full list. Environment variables will override any settings provided here.\n\n### address\n\nThe host:port destination for reporting spans. 
(ex: `localhost:6831`)\n\nCan be set with the environment variables `JAEGER_AGENT_HOST` and `JAEGER_AGENT_PORT`.\n\n### always_included_tag\n\nComma-separated list of tags to include in all new spans, such as `tag1:value1,tag2:value2`.\n\nCan be set with the environment variable `JAEGER_TAGS` (use `=` instead of `:` with the environment variable).\n\n### sampler_type\n\nDefault value is `const`.\n\nSpecifies the type of sampler: `const`, `probabilistic`, `ratelimiting`, or `remote`.\n\nRefer to https:\/\/www.jaegertracing.io\/docs\/1.16\/sampling\/#client-sampling-configuration for details on the different tracing types.\n\nCan be set with the environment variable `JAEGER_SAMPLER_TYPE`.\n\n_To override this setting, enter `sampler_type` in the `tracing.opentelemetry` section._\n\n### sampler_param\n\nDefault value is `1`.\n\nThis is the sampler configuration parameter. Depending on the value of `sampler_type`, it can be `0`, `1`, or a decimal value in between.\n\n- For `const` sampler, `0` or `1` for always `false`\/`true` respectively\n- For `probabilistic` sampler, a probability between `0` and `1.0`\n- For `rateLimiting` sampler, the number of spans per second\n- For `remote` sampler, param is the same as for `probabilistic`\n  and indicates the initial sampling rate before the actual one\n  is received from the mothership\n\nMay be set with the environment variable `JAEGER_SAMPLER_PARAM`.\n\n_Setting `sampler_param` in the `tracing.opentelemetry` section will override this setting._\n\n### sampling_server_url\n\nsampling_server_url is the URL of a sampling manager providing a sampling strategy.\n\n_Setting `sampling_server_url` in the `tracing.opentelemetry` section will override this setting._\n\n### zipkin_propagation\n\nDefault value is `false`.\n\nControls whether or not to use Zipkin's span propagation format (with `x-b3-` HTTP headers). 
By default, Jaeger's format is used.\n\nCan be set with the environment variable and value `JAEGER_PROPAGATION=b3`.\n\n### disable_shared_zipkin_spans\n\nDefault value is `false`.\n\nSetting this to `true` turns off shared RPC spans. Leaving this available is the most common setting when using Zipkin elsewhere in your infrastructure.\n\n<hr>\n\n## [tracing.opentelemetry]\n\nConfigure general parameters shared between OpenTelemetry providers.\n\n### custom_attributes\n\nComma-separated list of attributes to include in all new spans, such as `key1:value1,key2:value2`.\n\nCan be set or overridden with the environment variable `OTEL_RESOURCE_ATTRIBUTES` (use `=` instead of `:` with the environment variable). The service name can be set or overridden using attributes or with the environment variable `OTEL_SERVICE_NAME`.\n\n### sampler_type\n\nDefault value is `const`.\n\nSpecifies the type of sampler: `const`, `probabilistic`, `ratelimiting`, or `remote`.\n\n### sampler_param\n\nDefault value is `1`.\n\nDepending on the value of `sampler_type`, the sampler configuration parameter can be `0`, `1`, or any decimal value between `0` and `1`.\n\n- For the `const` sampler, use `0` to never sample or `1` to always sample\n- For the `probabilistic` sampler, you can use a decimal value between `0.0` and `1.0`\n- For the `rateLimiting` sampler, enter the number of spans per second\n- For the `remote` sampler, use a decimal value between `0.0` and `1.0`\n  to specify the initial sampling rate used before the first update\n  is received from the sampling server\n\n### sampling_server_url\n\nWhen `sampler_type` is `remote`, this specifies the URL of the sampling server. 
This can be used by all tracing providers.\n\nUse a sampling server that supports the Jaeger remote sampling API, such as jaeger-agent, jaeger-collector, opentelemetry-collector-contrib, or [Grafana Alloy](https:\/\/grafana.com\/oss\/alloy-opentelemetry-collector\/).\n\n<hr>\n\n## [tracing.opentelemetry.jaeger]\n\nConfigure Grafana's Jaeger client for distributed tracing.\n\n### address\n\nThe host:port destination for reporting spans. (ex: `localhost:14268\/api\/traces`)\n\n### propagation\n\nThe propagation specifies the text map propagation format. The values `jaeger` and `w3c` are supported. Add a comma (`,`) between values to specify multiple formats (for example, `\"jaeger,w3c\"`). The default value is `w3c`.\n\n<hr>\n\n## [tracing.opentelemetry.otlp]\n\nConfigure Grafana's OTLP client for distributed tracing.\n\n### address\n\nThe host:port destination for reporting spans. (ex: `localhost:4317`)\n\n### propagation\n\nThe propagation specifies the text map propagation format. The values `jaeger` and `w3c` are supported. Add a comma (`,`) between values to specify multiple formats (for example, `\"jaeger,w3c\"`). The default value is `w3c`.\n\n<hr>\n\n## [external_image_storage]\n\nThese options control how images should be made public so they can be shared on services like Slack or in email messages.\n\n### provider\n\nOptions are `s3`, `webdav`, `gcs`, `azure_blob`, and `local`. If left empty, then Grafana ignores the upload action.\n\n<hr>\n\n## [external_image_storage.s3]\n\n### endpoint\n\nOptional endpoint URL (hostname or fully qualified URI) to override the default generated S3 endpoint. If you want to\nkeep the default, just leave this empty.
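For instance, pointing the S3 uploader at an S3-compatible service (the hostname below is a placeholder) still requires a region:\n\n```ini\n[external_image_storage.s3]\n; placeholder endpoint for an S3-compatible service such as MinIO\nendpoint = http:\/\/minio.example.com:9000\nregion = us-east-1\nbucket = grafana.snapshot\n```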
You must still provide a `region` value if you specify an endpoint.\n\n### path_style_access\n\nSet this to true to force path-style addressing in S3 requests, i.e., `http:\/\/s3.amazonaws.com\/BUCKET\/KEY`, instead\nof the default, which is virtual hosted bucket addressing when possible (`http:\/\/BUCKET.s3.amazonaws.com\/KEY`).\n\n\nThis option is specific to the Amazon S3 service.\n\n\n### bucket_url\n\n(for backward compatibility, only works when no bucket or region are configured)\nBucket URL for S3. AWS region can be specified within URL or defaults to 'us-east-1', e.g.\n\n- http:\/\/grafana.s3.amazonaws.com\/\n- https:\/\/grafana.s3-ap-southeast-2.amazonaws.com\/\n\n### bucket\n\nBucket name for S3. e.g. grafana.snapshot.\n\n### region\n\nRegion name for S3. e.g. 'us-east-1', 'cn-north-1', etc.\n\n### path\n\nOptional extra path inside bucket, useful to apply expiration policies.\n\n### access_key\n\nAccess key, e.g. AAAAAAAAAAAAAAAAAAAA.\n\nAccess key requires permissions to the S3 bucket for the 's3:PutObject' and 's3:PutObjectAcl' actions.\n\n### secret_key\n\nSecret key, e.g. AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.\n\n<hr>\n\n## [external_image_storage.webdav]\n\n### url\n\nURL where Grafana sends PUT request with images.\n\n### username\n\nBasic auth username.\n\n### password\n\nBasic auth password.\n\n### public_url\n\nOptional URL to send to users in notifications. If the string contains the sequence ``, it is replaced with the uploaded filename. Otherwise, the file name is appended to the path part of the URL, leaving any query string unchanged.\n\n<hr>\n\n## [external_image_storage.gcs]\n\n### key_file\n\nOptional path to JSON key file associated with a Google service account to authenticate and authorize. 
If no value is provided it tries to use the [application default credentials](https:\/\/cloud.google.com\/docs\/authentication\/production#finding_credentials_automatically).\nService Account keys can be created and downloaded from https:\/\/console.developers.google.com\/permissions\/serviceaccounts.\n\nThe service account should have the \"Storage Object Writer\" role. The access control model of the bucket needs to be \"Set object-level and bucket-level permissions\". Grafana itself makes the images publicly readable when signed URLs are not enabled.\n\n### bucket\n\nBucket Name on Google Cloud Storage.\n\n### path\n\nOptional extra path inside bucket.\n\n### enable_signed_urls\n\nIf set to true, Grafana creates a [signed URL](https:\/\/cloud.google.com\/storage\/docs\/access-control\/signed-urls) for\nthe image uploaded to Google Cloud Storage.\n\n### signed_url_expiration\n\nSets the signed URL expiration, which defaults to seven days.\n\n## [external_image_storage.azure_blob]\n\n### account_name\n\nStorage account name.\n\n### account_key\n\nStorage account key.\n\n### container_name\n\nContainer name in which to store \"Blob\" images with random names. Creating the blob container beforehand is required. Only public containers are supported.\n\n### sas_token_expiration_days\n\nNumber of days for SAS token validity. If specified, the SAS token is attached to the image URL. This allows storing images in private containers.\n\n<hr>\n\n## [external_image_storage.local]\n\nThis option does not require any configuration.\n\n<hr>\n\n## [rendering]\n\nOptions to configure a remote HTTP image rendering service, e.g. using https:\/\/github.com\/grafana\/grafana-image-renderer.\n\n### renderer_token\n\nAn auth token will be sent to and verified by the renderer. The renderer will deny any request without an auth token matching the one configured on the renderer.\n\n### server_url\n\nURL to a remote HTTP image renderer service, e.g.
http:\/\/localhost:8081\/render, will enable Grafana to render panels and dashboards to PNG-images using HTTP requests to an external service.\n\n### callback_url\n\nIf the remote HTTP image renderer service runs on a different server than the Grafana server you may have to configure this to a URL where Grafana is reachable, e.g. http:\/\/grafana.domain\/.\n\n### concurrent_render_request_limit\n\nConcurrent render request limit affects when the \/render HTTP endpoint is used. Rendering many images at the same time can overload the server,\nwhich this setting can help protect against by only allowing a certain number of concurrent requests. Default is `30`.\n\n### default_image_width\n\nConfigures the width of the rendered image. The default width is `1000`.\n\n### default_image_height\n\nConfigures the height of the rendered image. The default height is `500`.\n\n### default_image_scale\n\nConfigures the scale of the rendered image. The default scale is `1`.\n\n## [panels]\n\n### enable_alpha\n\nSet to `true` if you want to test alpha panels that are not yet ready for general usage. Default is `false`.\n\n### disable_sanitize_html\n\n\nThis configuration is not available in Grafana Cloud instances.\n\n\nIf set to true Grafana will allow script tags in text panels. Not recommended as it enables XSS vulnerabilities. Default is false.\n\n## [plugins]\n\n### enable_alpha\n\nSet to `true` if you want to test alpha plugins that are not yet ready for general usage. Default is `false`.\n\n### allow_loading_unsigned_plugins\n\nEnter a comma-separated list of plugin identifiers to identify plugins to load even if they are unsigned. Plugins with modified signatures are never loaded.\n\nWe do _not_ recommend using this option. For more information, refer to [Plugin signatures]().\n\n### plugin_admin_enabled\n\nAvailable to Grafana administrators only, enables installing \/ uninstalling \/ updating plugins directly from the Grafana UI. Set to `true` by default. 
Setting it to `false` will hide the install, uninstall, and update controls.

For more information, refer to [Plugin catalog]().

### plugin_admin_external_manage_enabled

Set to `true` if you want to enable external management of plugins. Default is `false`. This is only applicable to Grafana Cloud users.

### plugin_catalog_url

Custom install/learn more URL for enterprise plugins. Defaults to https://grafana.com/grafana/plugins/.

### plugin_catalog_hidden_plugins

Enter a comma-separated list of plugin identifiers to hide in the plugin catalog.

### public_key_retrieval_disabled

Disables download of the public key used to verify plugin signatures. The default is `false`. When download is disabled, the hardcoded public key is used.

### public_key_retrieval_on_startup

Forces download of the public key used to verify plugin signatures on startup. The default is `false`. If set to `false`, the public key is retrieved every 10 days. Requires `public_key_retrieval_disabled` to be `false` to have any effect.

### disable_plugins

Enter a comma-separated list of plugin identifiers to avoid loading (including core plugins). These plugins will be hidden in the catalog.

### preinstall

Enter a comma-separated list of plugin identifiers to preinstall. These plugins will be installed on startup, using the Grafana catalog as the source. Preinstalled plugins cannot be uninstalled from the Grafana user interface; they need to be removed from this list first.

To pin plugins to a specific version, use the format `plugin_id@version`, for example, `grafana-piechart-panel@1.6.0`. If no version is specified, the latest version is installed.
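As a minimal sketch of the formats above, the following entry preinstalls one plugin at the latest catalog version and pins another to a specific version (`grafana-clock-panel` is a hypothetical example; `grafana-piechart-panel@1.6.0` is the pinned form shown earlier):

```ini
[plugins]
; grafana-clock-panel (hypothetical example) installs at the latest catalog version;
; grafana-piechart-panel is pinned to version 1.6.0
preinstall = grafana-clock-panel, grafana-piechart-panel@1.6.0
```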
_The plugin is automatically updated_ on startup to the latest version available in the Grafana plugin catalog (except for new major versions).

To use a custom URL to download a plugin, use the format `plugin_id@version@url`, for example, `grafana-piechart-panel@1.6.0@https://example.com/grafana-piechart-panel-1.6.0.zip`.

By default, Grafana preinstalls some suggested plugins. Check the default configuration file for the list of plugins.

### preinstall_async

By default, plugins are preinstalled asynchronously, as a background process. This means that Grafana will start up faster, but the plugins may not be available immediately. If you need a plugin to be installed for provisioning, set this option to `false`. This causes Grafana to wait for the plugins to be installed before starting up (and to fail if a plugin can't be installed).

### preinstall_disabled

Set to `true` to disable all plugin preinstallation. The default is `false`. To prevent a specific plugin from being preinstalled, use the `disable_plugins` option.

<hr>

## [live]

### max_connections

The `max_connections` option specifies the maximum number of connections to the Grafana Live WebSocket endpoint per Grafana server instance.
Default is `100`.

Refer to [Grafana Live configuration documentation]() if you specify a number higher than the default, since this can require some operating system and infrastructure tuning.

`0` disables Grafana Live, and `-1` means unlimited connections.

### allowed_origins

The `allowed_origins` option is a comma-separated list of additional origins (`Origin` header of the HTTP Upgrade request during WebSocket connection establishment) that will be accepted by Grafana Live.

If not set (the default), the origin is matched against [root_url](), which should be sufficient for most scenarios.

Origin patterns support the wildcard symbol `*`.

For example:

```ini
[live]
allowed_origins = "https://*.example.com"
```

### ha_engine

**Experimental**

The high availability (HA) engine name for Grafana Live. By default, it's not set. The only possible value is "redis".

For more information, refer to [Configure Grafana Live HA setup]().

### ha_engine_address

**Experimental**

Address string of the selected high availability (HA) Live engine. For Redis, it's a `host:port` string. Example:

```ini
[live]
ha_engine = redis
ha_engine_address = 127.0.0.1:6379
```

<hr>

## [plugin.plugin_id]

This section can be used to configure plugin-specific settings. Replace the `plugin_id` attribute with the plugin ID present in `plugin.json`.

Properties described in this section are available for all plugins, but you must set them individually for each plugin.

### tracing

> **Note:** [OpenTelemetry must be configured as well](#tracingopentelemetry).

If `true`, propagate the tracing context to the plugin backend and enable tracing (if the backend supports it).

### as_external

Load an external version of a core plugin if it has been installed.

Experimental.
Requires the feature toggle `externalCorePlugins` to be enabled.

<hr>

## [plugin.grafana-image-renderer]

For more information, refer to [Image rendering]().

### rendering_timezone

Instructs the headless browser instance to use a default timezone when one is not provided by Grafana, e.g. when rendering a panel image for an alert. See [ICU's metaZones.txt](https://cs.chromium.org/chromium/src/third_party/icu/source/data/misc/metaZones.txt) for a list of supported timezone IDs. Falls back to the `TZ` environment variable if not set.

### rendering_language

Instructs the headless browser instance to use a default language when one is not provided by Grafana, e.g. when rendering a panel image for an alert.
Refer to the HTTP header Accept-Language to understand how to format this value, e.g. 'fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7, *;q=0.5'.

### rendering_viewport_device_scale_factor

Instructs the headless browser instance to use a default device scale factor when one is not provided by Grafana, e.g. when rendering a panel image for an alert.
Default is `1`. Using a higher value produces more detailed images (higher DPI), but requires more disk space to store each image.

### rendering_ignore_https_errors

Instructs the headless browser instance whether to ignore HTTPS errors during navigation. By default, HTTPS errors are not ignored. Due to the security risk, we do not recommend that you ignore HTTPS errors.

### rendering_verbose_logging

Instructs the headless browser instance whether to capture and log verbose information when rendering an image.
Default is `false`, which captures and logs only error messages.

When enabled, debug messages are captured and logged as well.

For the verbose information to be included in the Grafana server log, adjust the rendering log level to debug by configuring `[log]` `filter = rendering:debug`.

### rendering_dumpio

Instructs the headless browser instance whether to output its debug and error messages into the running process of the remote rendering service. Default is `false`.

It can be useful to set this to `true` when troubleshooting.

### rendering_timing_metrics

> **Note:** Available from grafana-image-renderer v3.9.0+

Instructs the headless browser instance whether to record metrics for the duration of every rendering step. Default is `false`.

It can be useful to set this to `true` when optimizing the rendering mode settings to improve plugin performance, or when troubleshooting.

### rendering_args

Additional arguments to pass to the headless browser instance. Defaults are `--no-sandbox,--disable-gpu`. The list of Chromium flags can be found at https://peter.sh/experiments/chromium-command-line-switches/. Separate multiple arguments with commas.

### rendering_chrome_bin

You can configure the plugin to use a different browser binary instead of the pre-packaged version of Chromium.

Please note that this is _not_ recommended. You might encounter problems if the installed version of Chrome/Chromium is not compatible with the plugin.

### rendering_mode

Instructs how headless browser instances are created.
Default is `default`, which creates a new browser instance for each request.

Mode `clustered` ensures that no more than a maximum number of browser instances or incognito pages execute concurrently.

Mode `reusable` keeps one browser instance and creates a new incognito page for each request.

### rendering_clustering_mode

When `rendering_mode = clustered`, you can instruct how many browsers or incognito pages can execute concurrently. Default is `browser`, which clusters using browser instances.

Mode `context` will cluster using incognito pages.

### rendering_clustering_max_concurrency

When `rendering_mode = clustered`, you can define the maximum number of browser instances or incognito pages that can execute concurrently. Default is `5`.

### rendering_clustering_timeout

> **Note:** Available in grafana-image-renderer v3.3.0 and later versions.

When `rendering_mode = clustered`, you can specify the duration a rendering request can take before it times out. Default is `30` seconds.

### rendering_viewport_max_width

Limit the maximum viewport width that can be requested.

### rendering_viewport_max_height

Limit the maximum viewport height that can be requested.

### rendering_viewport_max_device_scale_factor

Limit the maximum viewport device scale factor that can be requested.

### grpc_host

Change the listening host of the gRPC server. Default host is `127.0.0.1`.

### grpc_port

Change the listening port of the gRPC server. Default is `0`, which automatically assigns an unused port.

<hr>

## [enterprise]

For more information about Grafana Enterprise, refer to [Grafana Enterprise]().

<hr>

## [feature_toggles]

### enable

Keys of features to enable, separated by spaces.

### FEATURE_TOGGLE_NAME = false

Some feature toggles for stable features are on by default.
Use this setting to disable an on-by-default feature toggle with the name FEATURE_TOGGLE_NAME, for example, `exploreMixedDatasource = false`.

<hr>

## [feature_management]

The options in this section configure the experimental Feature Toggle Admin Page feature, which is enabled using the `featureToggleAdminPage` feature toggle. Grafana Labs offers support on a best-effort basis, and breaking changes might occur prior to the feature being made generally available.

Please see [Configure feature toggles]() for more information.

### allow_editing

Lets you switch the feature toggle state in the feature management page. The default is `false`.

### update_webhook

Set the URL of the controller that manages the feature toggle updates. If not set, feature toggles in the feature management page will be read-only.

> **Note:** The API for feature toggle updates has not been defined yet.

### hidden_toggles

Hide additional specific feature toggles from the feature management page. By default, feature toggles in the `unknown`, `experimental`, and `private preview` stages are hidden from the UI. Use this option to hide toggles in the `public preview`, `general availability`, and `deprecated` stages.

### read_only_toggles

Use this setting to disable updates for additional specific feature toggles in the feature management page. By default, feature toggles can only be updated if they are in the `general availability` and `deprecated` stages. Use this option to disable updates for toggles in those stages.

<hr>

## [date_formats]

This section controls system-wide defaults for date formats used in time ranges, graphs, and date input boxes.

The format patterns use [Moment.js](https://momentjs.com/docs/#/displaying/) formatting tokens.

### full_date

Full date format used by the time range picker and in other places where a full date is rendered.

### intervals

These interval formats are used in the graph to show only a partial date or time.
For example, if there are only
minutes between Y-axis tick labels then the `interval_minute` format is used.

Defaults:

```
interval_second = HH:mm:ss
interval_minute = HH:mm
interval_hour = MM/DD HH:mm
interval_day = MM/DD
interval_month = YYYY-MM
interval_year = YYYY
```

### use_browser_locale

Set this to `true` to have date formats automatically derived from your browser location. Defaults to `false`. This is an experimental feature.

### default_timezone

Used as the default time zone for user preferences. Can be either `browser` for the browser's local time zone, or a time zone name from the IANA Time Zone database, such as `UTC` or `Europe/Amsterdam`.

### default_week_start

Set the default start of the week. Valid values are: `saturday`, `sunday`, `monday`, or `browser` to use the browser locale to define the first day of the week. Default is `browser`.

## [expressions]

### enabled

Set this to `false` to disable expressions and hide them in the Grafana UI. Default is `true`.

## [geomap]

This section controls the default settings for the Geomap plugin.

### default_baselayer_config

The JSON configuration used to define the default base map. There are four base map options to choose from: `carto`, `esriXYZTiles`, `xyzTiles`, and `standard`.
For example, to set an OpenStreetMap XYZ tile layer as the default base layer:

```ini
default_baselayer_config = `{
  "type": "xyz",
  "config": {
    "attribution": "Open street map",
    "url": "https://tile.openstreetmap.org/{z}/{x}/{y}.png"
  }
}`
```

### enable_custom_baselayers

Set this to `false` to disable loading other custom base maps and hide them in the Grafana UI. Default is `true`.

## [rbac]

Refer to [Role-based access control]() for more information.

## [navigation.app_sections]

Move an app plugin (referenced by its id), including all its pages, to a specific navigation section.
Format: `<pluginId> = <sectionId> <sortWeight>`

## [navigation.app_standalone_pages]

Move an individual app plugin page (referenced by its `path` field) to a specific navigation section.
Format: `<pageUrl> = <sectionId> <sortWeight>`

## [public_dashboards]

This section configures the [shared dashboards](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/dashboards/share-dashboards-panels/shared-dashboards/) feature.

### enabled

Set this to `false` to disable the shared dashboards feature. This prevents users from creating new shared dashboards and disables existing ones.
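As a sketch of the two navigation formats above, the following configuration moves an app plugin and one of its standalone pages; the plugin ID, page URL, section IDs, and sort weights are all hypothetical examples, not values defined by Grafana:

```ini
; hypothetical plugin and section IDs, illustrating
; <pluginId> = <sectionId> <sortWeight>
[navigation.app_sections]
myorg-example-app = monitoring 1

; hypothetical page URL, illustrating
; <pageUrl> = <sectionId> <sortWeight>
[navigation.app_standalone_pages]
/a/myorg-example-app/settings = admin 2
```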
 Refer to  Configure a Grafana Docker image    for information about environmental variables  persistent storage  and building custom Docker images       Windows  On Windows  the  sample ini  file is located in the same directory as  defaults ini  file  It contains all the settings commented out  Copy  sample ini  and name it  custom ini        macOS  By default  the configuration file is located at   opt homebrew etc grafana grafana ini  or   usr local etc grafana grafana ini   For a Grafana instance installed using Homebrew  edit the  grafana ini  file directly  Otherwise  add a configuration file named  custom ini  to the  conf  folder to override the settings defined in  conf defaults ini       Remove comments in the  ini files  Grafana uses semicolons  the     char  to comment out lines in a   ini  file  You must uncomment each line in the  custom ini  or the  grafana ini  file that you are modify by removing     from the beginning of that line  Otherwise your changes will be ignored   For example         The HTTP port  to use  http port   3000         Override configuration with environment variables  Do not use environment variables to  add  new configuration settings  Instead  use environmental variables to  override  existing options   To override an option      bash GF  SectionName   KeyName       Where the section name is the text within the brackets  Everything should be uppercase      and     should be replaced by      For example  if you have these configuration settings      bash   default section instance name     HOSTNAME    security  admin user   admin   auth google  client secret   0ldS3cretKey   plugin grafana image renderer  rendering ignore https errors   true   feature toggles  enable   newNavigation      You can override variables on Linux machines with      bash export GF DEFAULT INSTANCE NAME my instance export GF SECURITY ADMIN USER owner export GF AUTH GOOGLE CLIENT SECRET newS3cretKey export GF PLUGIN GRAFANA IMAGE RENDERER RENDERING 
IGNORE HTTPS ERRORS true export GF FEATURE TOGGLES ENABLE newNavigation         Variable expansion  If any of your options contains the expression      provider   argument    or     environment variable     then they will be processed by Grafana s variable expander  The expander runs the provider with the provided argument to get the final value of the option   There are three providers   env    file   and  vault        Env provider  The  env  provider can be used to expand an environment variable  If you set an option to     env PORT   the  PORT  environment variable will be used in its place  For environment variables you can also use the short hand syntax    PORT    Grafana s log directory would be set to the  grafana  directory in the directory behind the  LOGDIR  environment variable in the following example      ini  paths  logs      env LOGDIR  grafana          File provider   file  reads a file from the filesystem  It trims whitespace from the beginning and the end of files  The database password in the following example would be replaced by the content of the   etc secrets gf sql password  file      ini  database  password      file  etc secrets gf sql password           Vault provider  The  vault  provider allows you to manage your secrets with  Hashicorp Vault  https   www hashicorp com products vault      Vault provider is only available in Grafana Enterprise v7 1   For more information  refer to  Vault integration    in  Grafana Enterprise       hr        app mode  Options are  production  and  development   Default is  production    Do not  change this option unless you are working on Grafana development      instance name  Set the name of the grafana server instance  Used in logging  internal metrics  and clustering info  Defaults to     HOSTNAME    which will be replaced with environment variable  HOSTNAME   if that is empty or does not exist Grafana will try to use system calls to get the machine name    hr         paths       data  Path to where 
Grafana stores the sqlite3 database  if used   file based sessions  if used   and other data  This path is usually specified via command line in the init d script or the systemd service file     macOS    The default SQLite database is located at   usr local var lib grafana       temp data lifetime  How long temporary images in  data  directory should be kept  Defaults to   24h   Supported modifiers   h   hours    m   minutes   for example   168h    30m    10h30m   Use  0  to never clean up temporary files       logs  Path to where Grafana stores logs  This path is usually specified via command line in the init d script or the systemd service file  You can override it in the configuration file or in the default environment variable file  However  please note that by overriding this the default log path will be used temporarily until Grafana has fully initialized started   Override log path using the command line argument  cfg default paths logs       bash   grafana server   config  custom config ini   homepath  custom homepath cfg default paths logs  custom path        macOS    By default  the log file should be located at   usr local var log grafana grafana log        plugins  Directory where Grafana automatically scans and looks for plugins  For information about manually or automatically installing plugins  refer to  Install Grafana plugins        macOS    By default  the Mac plugin location is    usr local var lib grafana plugins        provisioning  Folder that contains  provisioning    config files that Grafana will apply on startup  Dashboards will be reloaded when the json files changes    hr         server       protocol   http   https   h2  or  socket       min tls version  The TLS Handshake requires a minimum TLS version  The available options are TLS1 2 and TLS1 3  If you do not specify a version  the system uses TLS1 2       http addr  The host for the server to listen on  If your machine has more than one network interface  you can use this setting to 
expose the Grafana service on only one network interface and not have it available on others  such as the loopback interface  An empty value is equivalent to setting the value to  0 0 0 0   which means the Grafana service binds to all interfaces   In environments where network address translation  NAT  is used  ensure you use the network interface address and not a final public address  otherwise  you might see errors such as  bind  cannot assign requested address  in the logs       http port  The port to bind to  defaults to  3000   To use port 80 you need to either give the Grafana binary permission for example      bash   sudo setcap  cap net bind service  ep   usr sbin grafana server      Or redirect port 80 to the Grafana port using      bash   sudo iptables  t nat  A PREROUTING  p tcp   dport 80  j REDIRECT   to port 3000      Another way is to put a web server like Nginx or Apache in front of Grafana and have them proxy requests to Grafana       domain  This setting is only used in as a part of the  root url  setting  see below   Important if you use GitHub or Google OAuth       enforce domain  Redirect to correct domain if the host header does not match the domain  Prevents DNS rebinding attacks  Default is  false        root url  This is the full URL used to access Grafana from a web browser  This is important if you use Google or GitHub OAuth authentication  for the callback URL to be correct     This setting is also important if you have a reverse proxy in front of Grafana that exposes it through a subpath  In that case add the subpath to the end of this URL setting        serve from sub path  Serve Grafana from subpath specified in  root url  setting  By default it is set to  false  for compatibility reasons   By enabling this setting and using a subpath in  root url  above  e g  root url   http   localhost 3000 grafana   Grafana is accessible on  http   localhost 3000 grafana   If accessed without subpath Grafana will redirect to an URL with the 
subpath       router logging  Set to  true  for Grafana to log all HTTP requests  not just errors   These are logged as Info level events to the Grafana log       static root path  The path to the directory where the front end files  HTML  JS  and CSS files   Defaults to  public  which is why the Grafana binary needs to be executed with working directory set to the installation path       enable gzip  Set this option to  true  to enable HTTP compression  this can improve transfer speed and bandwidth utilization  It is recommended that most users set it to  true   By default it is set to  false  for compatibility reasons       cert file  Path to the certificate file  if  protocol  is set to  https  or  h2         cert key  Path to the certificate key file  if  protocol  is set to  https  or  h2         certs watch interval  Controls whether  cert key  and  cert file  are periodically watched for changes  Disabled  by default  When enabled   cert key  and  cert file  are watched for changes  If there is change  the new certificates are loaded automatically    After the new certificates are loaded  connections with old certificates will not work  You must reload the connections to the old certs for them to work        socket gid  GID where the socket should be set when  protocol socket   Make sure that the target group is in the group of Grafana process and that Grafana process is the file owner before you change this setting  It is recommended to set the gid as http server user gid  Not set when the value is  1       socket mode  Mode where the socket should be set when  protocol socket   Make sure that Grafana process is the file owner before you change this setting       socket  Path where the socket should be created when  protocol socket   Make sure Grafana has appropriate permissions for that path before you change this setting       cdn url  Specify a full HTTP URL address to the root of your Grafana CDN assets  Grafana will add edition and version paths   For 
example  given a cdn url like  https   cdn myserver com  grafana will try to load a javascript file from  http   cdn myserver com grafana oss 7 4 0 public build app  hash  js        read timeout  Sets the maximum time using a duration format  5s 5m 5ms  before timing out read of an incoming request and closing idle connections   0  means there is no timeout for reading the request    hr         server custom response headers   This setting enables you to specify additional headers that the server adds to HTTP S  responses       exampleHeader1   exampleValue1 exampleHeader2   exampleValue2       hr         database   Grafana needs a database to store users and dashboards  and other things   By default it is configured to use   sqlite3   https   www sqlite org index html  which is an embedded database  included in the main Grafana binary        type  Either  mysql    postgres  or  sqlite3   it s your choice       host  Only applicable to MySQL or Postgres  Includes IP or hostname and port or in case of Unix sockets the path to it  For example  for MySQL running on the same host as Grafana   host   127 0 0 1 3306  or with Unix sockets   host    var run mysqld mysqld sock       name  The name of the Grafana database  Leave it set to  grafana  or some other name       user  The database user  not applicable for  sqlite3         password  The database user s password  not applicable for  sqlite3    If the password contains     or     you have to wrap it with triple quotes  For example      password           url  Use either URL or the other fields below to configure the database Example   mysql   user secret host port database       max idle conn  The maximum number of connections in the idle connection pool       max open conn  The maximum number of open connections to the database  For MYSQL  configure this setting on both Grafana and the database  For more information  refer to   sysvar max connections   https   dev mysql com doc refman 8 0 en server system variables 
html sysvar max connections        conn max lifetime  Sets the maximum amount of time a connection may be reused  The default is 14400  which means 14400 seconds or 4 hours   For MySQL  this setting should be shorter than the   wait timeout   https   dev mysql com doc refman 5 7 en server system variables html sysvar wait timeout  variable       migration locking  Set to  false  to disable database locking during the migrations  Default is true       locking attempt timeout sec  For  mysql  and  postgres  only  Specify the time  in seconds  to wait before failing to lock the database for the migrations  Default is 0       log queries  Set to  true  to log the sql calls and execution times       ssl mode  For Postgres  use use any  valid libpq  sslmode   https   www postgresql org docs current libpq ssl html LIBPQ SSL SSLMODE STATEMENTS   e g  disable    require    verify full   etc  For MySQL  use either  true    false   or  skip verify        ssl sni  For Postgres  set to  0  to disable  Server Name Indication  https   www postgresql org docs current libpq connect html LIBPQ CONNECT SSLSNI   This is enabled by default on SSL enabled connections       isolation level  Only the MySQL driver supports isolation levels in Grafana  In case the value is empty  the driver s default isolation level is applied  Available options are  READ UNCOMMITTED    READ COMMITTED    REPEATABLE READ  or  SERIALIZABLE        ca cert path  The path to the CA certificate to use  On many Linux systems  certs can be found in   etc ssl certs        client key path  The path to the client key  Only if server requires client authentication       client cert path  The path to the client cert  Only if server requires client authentication       server cert name  The common name field of the certificate used by the  mysql  or  postgres  server  Not necessary if  ssl mode  is set to  skip verify        path  Only applicable for  sqlite3  database  The file path where the database will be stored     
  cache mode  For  sqlite3  only   Shared cache  https   www sqlite org sharedcache html  setting used for connecting to the database   private  shared  Defaults to  private        wal  For  sqlite3  only  Setting to enable disable  Write Ahead Logging  https   sqlite org wal html   The default value is  false   disabled        query retries  This setting applies to  sqlite  only and controls the number of times the system retries a query when the database is locked  The default value is  0   disabled        transaction retries  This setting applies to  sqlite  only and controls the number of times the system retries a transaction when the database is locked  The default value is  5        instrument queries  Set to  true  to add metrics and tracing for database queries  The default value is  false     hr         remote cache   Caches authentication details and session information in the configured database  Redis or Memcached  This setting does not configure  Query Caching in Grafana Enterprise          type  Either  redis    memcached   or  database   Defaults to  database       connstr  The remote cache connection string  The format depends on the  type  of the remote cache  Options are  database    redis   and  memcache         database  Leave empty when using  database  since it will use the primary database        redis  Example connstr   addr 127 0 0 1 6379 pool size 100 db 0 ssl false      addr  is the host     port of the redis server     pool size   optional  is the number of underlying connections that can be made to redis     db   optional  is the number identifier of the redis database you want to use     ssl   optional  is if SSL should be used to connect to redis server  The value may be  true    false   or  insecure   Setting the value to  insecure  skips verification of the certificate chain and hostname when making the connection        memcache  Example connstr   127 0 0 1 11211    hr         dataproxy       logging  This enables data proxy 
## [dataproxy]

### logging

This enables data proxy logging. Default is `false`.

### timeout

How long the data proxy should wait before timing out. Default is `30` seconds. This setting also applies to core backend HTTP data sources where query requests use an HTTP client with timeout set.

### keep_alive_seconds

Interval between keep-alive probes. Default is `30` seconds. For more details check the [`Dialer.KeepAlive`](https://golang.org/pkg/net/#Dialer.KeepAlive) documentation.

### tls_handshake_timeout_seconds

The length of time that Grafana will wait for a successful TLS handshake with the datasource. Default is `10` seconds. For more details check the [`Transport.TLSHandshakeTimeout`](https://golang.org/pkg/net/http/#Transport.TLSHandshakeTimeout) documentation.

### expect_continue_timeout_seconds

The length of time that Grafana will wait for a datasource's first response headers after fully writing the request headers, if the request has an `Expect: 100-continue` header. A value of `0` will result in the body being sent immediately. Default is `1` second. For more details check the [`Transport.ExpectContinueTimeout`](https://golang.org/pkg/net/http/#Transport.ExpectContinueTimeout) documentation.

### max_conns_per_host

Optionally limits the total number of connections per host, including connections in the dialing, active, and idle states. On limit violation, dials are blocked. A value of `0` means that there are no limits. Default is `0`. For more details check the [`Transport.MaxConnsPerHost`](https://golang.org/pkg/net/http/#Transport.MaxConnsPerHost) documentation.

### max_idle_connections

The maximum number of idle connections that Grafana will maintain. Default is `100`. For more details check the [`Transport.MaxIdleConns`](https://golang.org/pkg/net/http/#Transport.MaxIdleConns) documentation.

### idle_conn_timeout_seconds

The length of time that Grafana maintains idle connections before closing them. Default is `90` seconds. For more details check the [`Transport.IdleConnTimeout`](https://golang.org/pkg/net/http/#Transport.IdleConnTimeout) documentation.
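The timeout options above map directly onto Go's `net/http` transport settings. A sketch of a tuned `[dataproxy]` section, with every value illustrative rather than recommended:

```ini
; Illustrative [dataproxy] tuning for slow upstream data sources.
; A longer overall timeout, with the Go transport defaults left explicit.
[dataproxy]
timeout = 60
keep_alive_seconds = 30
tls_handshake_timeout_seconds = 10
max_idle_connections = 100
idle_conn_timeout_seconds = 90
```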
### send_user_header

If enabled and the user is not anonymous, the data proxy will add the `X-Grafana-User` header with the username into the request. Default is `false`.

### response_limit

Limits the amount of bytes that will be read/accepted from responses of outgoing HTTP requests. Default is `0`, which means disabled.

### row_limit

Limits the number of rows that Grafana will process from SQL (relational) data sources. Default is `1000000`.

### user_agent

Sets a custom value for the `User-Agent` header for outgoing data proxy requests. If empty, the default value is `Grafana/<BuildVersion>` (for example `Grafana/9.0.0`).

## [analytics]

### enabled

This option is also known as _usage analytics_. When `false`, this option disables the writers that write to the Grafana database and the associated features, such as dashboard and data source insights, presence indicators, and advanced dashboard search. The default value is `true`.

### reporting_enabled

When enabled, Grafana will send anonymous usage statistics to `stats.grafana.org`. No IP addresses are being tracked, only simple counters to track running instances, versions, dashboard and error counts. It is very helpful to us, so please leave this enabled. Counters are sent every 24 hours. Default value is `true`.

### check_for_updates

Set to `false` to disable checking for new versions of Grafana from Grafana's GitHub repository. When enabled, the check for a new version runs every 10 minutes. It will notify, via the UI, when a new version is available. The check itself will not prompt any auto-updates of the Grafana software, nor will it send any sensitive information.

### check_for_plugin_updates

Set to `false` to disable checking for new versions of installed plugins from https://grafana.com. When enabled, the check for a new plugin runs every 10 minutes. It will notify, via the UI, when a new plugin update exists. The check itself will not prompt any auto-updates of the plugin, nor will it send any sensitive information.
### google_analytics_ua_id

If you want to track Grafana usage via Google Analytics, specify _your_ Universal Analytics ID here. By default this feature is disabled.

### google_analytics_4_id

If you want to track Grafana usage via Google Analytics 4, specify _your_ GA4 ID here. By default this feature is disabled.

### google_tag_manager_id

Google Tag Manager ID, only enabled if you enter an ID here.

### rudderstack_write_key

If you want to track Grafana usage via Rudderstack, specify _your_ Rudderstack Write Key here. The `rudderstack_data_plane_url` must also be provided for this feature to be enabled. By default this feature is disabled.

### rudderstack_data_plane_url

Rudderstack data plane URL that will receive Rudderstack events. The `rudderstack_write_key` must also be provided for this feature to be enabled.

### rudderstack_sdk_url

Optional. If tracking with Rudderstack is enabled, you can provide a custom URL to load the Rudderstack SDK.

### rudderstack_config_url

Optional. If tracking with Rudderstack is enabled, you can provide a custom URL to load the Rudderstack config.

### rudderstack_integrations_url

Optional. If tracking with Rudderstack is enabled, you can provide a custom URL to load the SDK for destinations running in device mode. This setting is only valid for Rudderstack version 1.1 and higher.

### application_insights_connection_string

If you want to track Grafana usage via Azure Application Insights, then specify _your_ Application Insights connection string. Since the connection string contains semicolons, you need to wrap it in backticks. By default, tracking usage is disabled.

### application_insights_endpoint_url

Optionally, use this option to override the default endpoint address for Application Insights data collecting. For details, refer to the [Azure documentation](https://docs.microsoft.com/en-us/azure/azure-monitor/app/custom-endpoints?tabs=js).
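As a quick orientation, a hedged sketch of an `[analytics]` section with GA4 tracking enabled. The GA4 ID is a placeholder, not a real property:

```ini
; Hypothetical [analytics] section; G-XXXXXXXXXX is a placeholder GA4 ID.
[analytics]
reporting_enabled = true
check_for_updates = true
google_analytics_4_id = G-XXXXXXXXXX
```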
### feedback_links_enabled

Set to `false` to remove all feedback links from the UI. Default is `true`.

## [security]

### disable_initial_admin_creation

Disable creation of the admin user on first start of Grafana. Default is `false`.

### admin_user

The name of the default Grafana Admin user, who has full permissions. Default is `admin`.

### admin_password

The password of the default Grafana Admin. Set once on first run. Default is `admin`.

### admin_email

The email of the default Grafana Admin, created on startup. Default is `admin@localhost`.

### secret_key

Used for signing some data source settings like secrets and passwords; the encryption format used is AES-256 in CFB mode. Cannot be changed without requiring an update to data source settings to re-encode them.

### disable_gravatar

Set to `true` to disable the use of Gravatar for user profile images. Default is `false`.

### data_source_proxy_whitelist

Define a whitelist of allowed IP addresses or domains, with ports, to be used in data source URLs with the Grafana data source proxy. Format: `ip_or_domain:port` separated by spaces. PostgreSQL, MySQL, and MSSQL data sources do not use the proxy and are therefore unaffected by this setting.

### disable_brute_force_login_protection

Set to `true` to disable [brute force login protection](https://cheatsheetseries.owasp.org/cheatsheets/Authentication_Cheat_Sheet.html#account-lockout). Default is `false`. An existing user's account will be unable to login for 5 minutes if all login attempts are spent within a 5 minute window.

### brute_force_login_protection_max_attempts

Configure how many login attempts a user has within a 5 minute window before the account is locked. Default is `5`.

### cookie_secure

Set to `true` if you host Grafana behind HTTPS. Default is `false`.

### cookie_samesite

Sets the `SameSite` cookie attribute and prevents the browser from sending this cookie along with cross-site requests. The main goal is to mitigate
the risk of cross-origin information leakage. This setting also provides some protection against cross-site request forgery attacks (CSRF); [read more about SameSite here](https://owasp.org/www-community/SameSite). Valid values are `lax`, `strict`, `none`, and `disabled`. Default is `lax`. Using value `disabled` does not add any `SameSite` attribute to cookies.

### allow_embedding

When `false`, the HTTP header `X-Frame-Options: deny` will be set in Grafana HTTP responses, which will instruct browsers to not allow rendering Grafana in a `<frame>`, `<iframe>`, `<embed>`, or `<object>`. The main goal is to mitigate the risk of [Clickjacking](https://owasp.org/www-community/attacks/Clickjacking). Default is `false`.

### strict_transport_security

Set to `true` if you want to enable the HTTP `Strict-Transport-Security` (HSTS) response header. Only use this when HTTPS is enabled in your configuration, or when there is another upstream system that ensures your application does HTTPS (like a frontend load balancer). HSTS tells browsers that the site should only be accessed using HTTPS.

### strict_transport_security_max_age_seconds

Sets how long a browser should cache HSTS, in seconds. Only applied if strict_transport_security is enabled. The default value is `86400`.

### strict_transport_security_preload

Set to `true` to enable the HSTS `preload` option. Only applied if strict_transport_security is enabled. The default value is `false`.

### strict_transport_security_subdomains

Set to `true` to enable the HSTS `includeSubDomains` option. Only applied if strict_transport_security is enabled. The default value is `false`.

### x_content_type_options

Set to `false` to disable the `X-Content-Type-Options` response header. The `X-Content-Type-Options` response HTTP header is a marker used by the server to indicate that the MIME types advertised in the `Content-Type` headers should not be changed and should be followed. The default value is `true`.

### x_xss_protection

Set to `false` to disable the `X-XSS-Protection` header, which tells browsers to stop pages from loading when they detect reflected cross-site scripting (XSS) attacks. The default value is `true`.
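For an instance served over HTTPS, the cookie and HSTS options above are typically enabled together. An illustrative hardening sketch, not a prescription:

```ini
; Example [security] hardening for a Grafana instance behind HTTPS.
; Only enable strict_transport_security when HTTPS is actually terminated
; for this application.
[security]
cookie_secure = true
cookie_samesite = lax
strict_transport_security = true
strict_transport_security_max_age_seconds = 86400
x_content_type_options = true
```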
### content_security_policy

Set to `true` to add the `Content-Security-Policy` header to your requests. CSP allows you to control the resources that the user agent can load and helps prevent XSS attacks.

### content_security_policy_template

Set the policy template that will be used when adding the `Content-Security-Policy` header to your requests. `$NONCE` in the template includes a random nonce.

### content_security_policy_report_only

Set to `true` to add the `Content-Security-Policy-Report-Only` header to your requests. CSP in Report-Only mode enables you to experiment with policies by monitoring their effects without enforcing them. You can enable both policies simultaneously.

### content_security_policy_report_only_template

Set the policy template that will be used when adding the `Content-Security-Policy-Report-Only` header to your requests. `$NONCE` in the template includes a random nonce.

### actions_allow_post_url

Sets API paths to be accessible between plugins using the POST verb. If the value is empty, you can only pass remote requests through the proxy. If the value is set, you can also send authenticated POST requests to the local server. You typically use this to enable backend communication between plugins. This is a comma-separated list which uses glob matching.

This will allow access to all plugins that have a backend:

`actions_allow_post_url=/api/plugins/*`

This will limit access to the backend of a single plugin:

`actions_allow_post_url=/api/plugins/grafana-special-app`

### angular_support_enabled

This is set to `false` by default, meaning that the Angular framework and support components will not be loaded. This means that all plugins and core features that depend on Angular support will stop working.

The core features that depend on Angular are:

- Old graph panel
- Old table panel
These features each have supported alternatives, and we recommend using them.

### csrf_trusted_origins

List of additional allowed URLs to pass by the CSRF check. Suggested when authentication comes from an IdP.

### csrf_additional_headers

List of allowed headers to be set by the user. Suggested for use if authentication lives behind reverse proxies.

### csrf_always_check

Set to `true` to execute the CSRF check even if the login cookie is not in a request (default `false`).

### enable_frontend_sandbox_for_plugins

Comma-separated list of plugin IDs that will be loaded inside the frontend sandbox.

## [snapshots]

### enabled

Set to `false` to disable the snapshot feature (default `true`).

### external_enabled

Set to `false` to disable the external snapshot publish endpoint (default `true`).

### external_snapshot_url

Set the root URL to a Grafana instance where you want to publish external snapshots (defaults to https://snapshots.raintank.io).

### external_snapshot_name

Set the name for the external snapshot button. Defaults to `Publish to snapshots.raintank.io`.

### public_mode

Set to `true` to enable this Grafana instance to act as an external snapshot server and allow unauthenticated requests for creating and deleting snapshots. Default is `false`.

## [dashboards]

### versions_to_keep

Number of dashboard versions to keep (per dashboard). Default: `20`. Minimum: `1`.

### min_refresh_interval

This feature prevents users from setting the dashboard refresh interval to a lower value than a given interval value. The default interval value is 5 seconds. The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. `30s` or `1m`. This also limits the refresh interval options in Explore.

### default_home_dashboard_path

Path to the default home dashboard. If this value is empty, then Grafana uses `StaticRootPath` + `dashboards/home.json`. On Linux, Grafana uses `/usr/share/grafana/public/dashboards/home.json` as the default home dashboard location.
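The `[dashboards]` options above combine as in this sketch; the interval and custom home-dashboard path are illustrative assumptions:

```ini
; Example [dashboards] section; values shown are illustrative.
[dashboards]
versions_to_keep = 20
min_refresh_interval = 10s
default_home_dashboard_path = /etc/grafana/dashboards/home.json
```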
## [sql_datasources]

### max_open_conns_default

For SQL data sources (MySQL, Postgres, MSSQL) you can override the default maximum number of open connections (default: `100`). The value configured in data source settings will be preferred over the default value.

### max_idle_conns_default

For SQL data sources (MySQL, Postgres, MSSQL) you can override the default allowed number of idle connections (default: `100`). The value configured in data source settings will be preferred over the default value.

### max_conn_lifetime_default

For SQL data sources (MySQL, Postgres, MSSQL) you can override the default maximum connection lifetime specified in seconds (default: `14400`). The value configured in data source settings will be preferred over the default value.

## [users]

### allow_sign_up

Set to `false` to prohibit users from being able to sign up / create user accounts. Default is `false`. The admin user can still create users. For more information about creating a user, refer to Add a user.

### allow_org_create

Set to `false` to prohibit users from creating new organizations. Default is `false`.

### auto_assign_org

Set to `true` to automatically add new users to the main organization (id 1). When set to `false`, new users automatically cause a new organization to be created for that new user. The organization will be created even if the `allow_org_create` setting is set to `false`. Default is `true`.

### auto_assign_org_id

Set this value to automatically add new users to the provided org. This requires `auto_assign_org` to be set to `true`. Please make sure that this organization already exists. Default is `1`.

### auto_assign_org_role

The `auto_assign_org_role` setting determines the default role assigned to new users in the main organization if the `auto_assign_org` setting is set to `true`. You can set this to one of the following roles: `Viewer` (default), `Admin`, `Editor`, and `None`. For example: `auto_assign_org_role = Viewer`.
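The three auto-assignment options above work together: a sketch placing all new users into one existing organization as viewers (the org ID is an example):

```ini
; Illustrative [users] auto-assignment into a single existing org.
[users]
auto_assign_org = true
auto_assign_org_id = 1
auto_assign_org_role = Viewer
```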
### verify_email_enabled

Require email validation before sign-up completes or when updating a user email address. Default is `false`.

### login_default_org_id

Set the default organization for users when they sign in. The default is `-1`.

### login_hint

Text used as placeholder text on the login page for the login/username input.

### password_hint

Text used as placeholder text on the login page for the password input.

### default_theme

Sets the default UI theme: `dark`, `light`, or `system`. The default theme is `dark`. `system` matches the user's system theme.

### default_language

This option will set the default UI language if a supported IETF language tag like `en-US` is available. If set to `detect`, the default UI language will be determined by browser preference. The default is `en-US`.

### home_page

Path to a custom home page. Users are only redirected to this if the default home dashboard is used. It should match a frontend route and contain a leading slash.

### External user management

If you manage users externally, you can replace the user invite button for organizations with a link to an external site together with a description.

### viewers_can_edit

Viewers can access and use Explore and perform temporary edits on panels in dashboards they have access to. They cannot save their changes. Default is `false`.

### editors_can_admin

Editors can administrate dashboards, folders and teams they create. Default is `false`.

### user_invite_max_lifetime_duration

The duration in time a user invitation remains valid before expiring. This setting should be expressed as a duration. Examples: `6h` (hours), `2d` (days), `1w` (week). Default is `24h` (24 hours). The minimum supported duration is `15m` (15 minutes).

### verification_email_max_lifetime_duration

The duration in time a verification email, used to update the email address of a user, remains valid before expiring. This setting should be expressed as a duration. Examples: `6h` (hours), `2d` (days), `1w` (week). Default is `1h` (1 hour).
### last_seen_update_interval

The frequency of updating a user's last seen time. This setting should be expressed as a duration. Examples: `1h` (hour), `15m` (minutes). Default is `15m` (15 minutes). The minimum supported duration is `5m` (5 minutes). The maximum supported duration is `1h` (1 hour).

### hidden_users

This is a comma-separated list of usernames. Users specified here are hidden in the Grafana UI. They are still visible to Grafana administrators and to themselves.

## [auth]

Grafana provides many ways to authenticate users. Refer to the Grafana Authentication overview and other authentication documentation for detailed instructions on how to set up and configure authentication.

### login_cookie_name

The cookie name for storing the auth token. Default is `grafana_session`.

### login_maximum_inactive_lifetime_duration

The maximum lifetime (duration) an authenticated user can be inactive before being required to login at next visit. Default is 7 days (`7d`). This setting should be expressed as a duration, e.g. `5m` (minutes), `6h` (hours), `10d` (days), `2w` (weeks), `1M` (month). The lifetime resets at each successful token rotation (`token_rotation_interval_minutes`).

### login_maximum_lifetime_duration

The maximum lifetime (duration) an authenticated user can be logged in since login time before being required to login. Default is 30 days (`30d`). This setting should be expressed as a duration, e.g. `5m` (minutes), `6h` (hours), `10d` (days), `2w` (weeks), `1M` (month).

### token_rotation_interval_minutes

How often auth tokens are rotated for authenticated users when the user is active. The default is every 10 minutes.

### disable_login_form

Set to `true` to disable (hide) the login form, useful if you use OAuth. Default is `false`.

### disable_signout_menu

Set to `true` to disable the signout link in the side menu. This is useful if you use auth proxy. Default is `false`.
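The session-lifetime options above are often tuned together. A sketch restating the defaults explicitly, so each duration is visible in the config file:

```ini
; Example [auth] session lifetimes; these are the documented defaults,
; written out for clarity.
[auth]
login_maximum_inactive_lifetime_duration = 7d
login_maximum_lifetime_duration = 30d
token_rotation_interval_minutes = 10
```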
### signout_redirect_url

The URL the user is redirected to upon signing out. To support [OpenID Connect RP-Initiated Logout](https://openid.net/specs/openid-connect-rpinitiated-1_0.html), the user must add `post_logout_redirect_uri` to the `signout_redirect_url`.

Example: `signout_redirect_url = http://localhost:8087/realms/grafana/protocol/openid-connect/logout?post_logout_redirect_uri=http%3A%2F%2Flocalhost%3A3000%2Flogin`

### oauth_auto_login

This option is deprecated; use the `auto_login` option for the specific OAuth provider instead. Set to `true` to attempt login with OAuth automatically, skipping the login screen. This setting is ignored if multiple OAuth providers are configured. Default is `false`.

### oauth_state_cookie_max_age

How many seconds the OAuth state cookie lives before being deleted. Default is `600` (seconds). Administrators can increase this if they experience OAuth login state mismatch errors.

### oauth_login_error_message

A custom error message for when users are unauthorized. Default is a key for an internationalized phrase in the frontend, `Login provider denied login request`.

### oauth_refresh_token_server_lock_min_wait_ms

Minimum wait time in milliseconds for the server lock retry mechanism. Default is `1000` (milliseconds). The server lock retry mechanism is used to prevent multiple Grafana instances from simultaneously refreshing OAuth tokens. This mechanism waits at least this amount of time before retrying to acquire the server lock.

There are five retries in total, so with the default value, the total wait time (for acquiring the lock) is at least 5 seconds; the wait time between retries is calculated as `random(n, n + 500)`, which means that the maximum token refresh duration must be less than 5-6 seconds. If you experience issues with the OAuth token refresh mechanism, you can increase this value to allow more time for the token refresh to complete.

### oauth_skip_org_role_update_sync

This option is removed
from Grafana 11 in favor of OAuth provider-specific `skip_org_role_sync` settings. The following sections explain the settings for each provider.

If you want to change the `oauth_skip_org_role_update_sync` setting from `true` to `false`, then for each provider you have set up, use the `skip_org_role_sync` setting to specify whether you want to skip the synchronization.

Currently, if no organization role mapping is found for a user, Grafana doesn't update the user's organization role. With Grafana 10, if the `oauth_skip_org_role_update_sync` option is set to `false`, users with no mapping will be reset to the default organization role on every login. See the `auto_assign_org_role` option.

### skip_org_role_sync

`skip_org_role_sync` prevents the synchronization of organization roles for a specific OAuth integration, while the deprecated setting `oauth_skip_org_role_update_sync` affects all configured OAuth providers. The default value for `skip_org_role_sync` is `false`.

With `skip_org_role_sync` set to `false`, the user's organization and role is reset on every new login, based on the external provider's role. See your provider in the tables below.

With `skip_org_role_sync` set to `true`, when a user logs in for the first time, Grafana sets the organization role based on the value specified in `auto_assign_org_role` and forces the organization to `auto_assign_org_id` when specified, otherwise it falls back to OrgID `1`.

> **Note:** Enabling `skip_org_role_sync` also disables the synchronization of Grafana Admins from the external provider; as such, `allow_assign_grafana_admin` is ignored.

Use this setting when you want to manage the organization roles of your users from within Grafana and be able to manually assign them to multiple organizations, or to prevent synchronization conflicts when they can be synchronized from another provider.

The behavior of `oauth_skip_org_role_update_sync` and `skip_org_role_sync` can be seen in the tables below.

#### [auth.grafana_com]
| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | Resulting Org Role | Modifiable |
| --- | --- | --- | --- |
| false | false | Synchronize user organization role with Grafana.com role. If no role is provided, `auto_assign_org_role` is set. | false |
| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true |
| false | true | Skips organization role synchronization for Grafana.com users. Role is set to `auto_assign_org_role`. | true |
| true | true | Skips organization role synchronization for Grafana.com users and all other OAuth providers. Role is set to `auto_assign_org_role`. | true |

#### [auth.azuread]

| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | Resulting Org Role | Modifiable |
| --- | --- | --- | --- |
| false | false | Synchronize user organization role with AzureAD role. If no role is provided, `auto_assign_org_role` is set. | false |
| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true |
| false | true | Skips organization role synchronization for AzureAD users. Role is set to `auto_assign_org_role`. | true |
| true | true | Skips organization role synchronization for AzureAD users and all other OAuth providers. Role is set to `auto_assign_org_role`. | true |

#### [auth.google]

| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | Resulting Org Role | Modifiable |
| --- | --- | --- | --- |
| false | false | User organization role is set to `auto_assign_org_role` and cannot be changed. | false |
| true | false | User organization role is set to `auto_assign_org_role` and can be changed in Grafana. | true |
| false | true | User organization role is set to `auto_assign_org_role` and can be changed in Grafana. | true |
| true | true | User organization role is set to `auto_assign_org_role` and can be changed in Grafana. | true |

For the GitLab, GitHub, Okta, and Generic OAuth providers, Grafana synchronizes organization roles and sets Grafana Admins. The `allow_assign_grafana_admin` setting is also accounted for, to allow or prevent setting the Grafana Admin role from the external provider.

#### [auth.github]

| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | Resulting Org Role | Modifiable |
| --- | --- | --- | --- |
| false | false | Synchronize user organization role with GitHub role. If no role is provided, `auto_assign_org_role` is set. | false |
| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true |
| false | true | Skips organization role and Grafana Admin synchronization for GitHub users. Role is set to `auto_assign_org_role`. | true |
| true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for GitHub users. Role is set to `auto_assign_org_role`. | true |

#### [auth.gitlab]

| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | Resulting Org Role | Modifiable |
| --- | --- | --- | --- |
| false | false | Synchronize user organization role with GitLab role. If no role is provided, `auto_assign_org_role` is set. | false |
| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true |
| false | true | Skips organization role and Grafana Admin synchronization for GitLab users. Role is set to `auto_assign_org_role`. | true |
| true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for GitLab users. Role is set to `auto_assign_org_role`. | true |

#### [auth.generic_oauth]

| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | Resulting Org Role | Modifiable |
| --- | --- | --- | --- |
| false | false | Synchronize user organization role with the provider's role. If no role is provided, `auto_assign_org_role` is set. | false |
| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true |
| false | true | Skips organization role and Grafana Admin synchronization for the provider's users. Role is set to `auto_assign_org_role`. | true |
| true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for the provider's users. Role is set to `auto_assign_org_role`. | true |

#### [auth.okta]

| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | Resulting Org Role | Modifiable |
| --- | --- | --- | --- |
| false | false | Synchronize user organization role with Okta role. If no role is provided, `auto_assign_org_role` is set. | false |
| true | false | Skips organization role synchronization for all OAuth providers' users. Role is set to `auto_assign_org_role`. | true |
| false | true | Skips organization role and Grafana Admin synchronization for Okta users. Role is set to `auto_assign_org_role`. | true |
| true | true | Skips organization role synchronization for all OAuth providers and skips Grafana Admin synchronization for Okta users. Role is set to `auto_assign_org_role`. | true |

#### Example `skip_org_role_sync` ([auth.google])

| `oauth_skip_org_role_update_sync` | `skip_org_role_sync` | Resulting Org Role | Example Scenario |
| --- | --- | --- | --- |
| false | false | Synchronized with Google Auth organization roles. | A user logs in to Grafana using their Google account and their organization role is automatically set based on their role in Google. |
| true | false | Skipped synchronization of organization roles from all OAuth providers. | A user logs in to Grafana using their Google account and their organization role is **not** set based on their role. But Grafana Administrators can modify the role from the UI. |
| false | true | Skipped synchronization of organization roles from Google. | A user logs in to Grafana using their Google account and their organization role is **not** set based on their role in Google. But Grafana Administrators can modify the role from the UI. |
| true | true | Skipped synchronization of organization roles from all OAuth providers, including Google. | A user logs in to Grafana using their Google account and their organization role is **not** set based on their role in Google. But Grafana Administrators can modify the role from the UI. |
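In configuration terms, the "false | true" row above corresponds to a provider-specific override like this sketch (shown for Google purely as an example):

```ini
; Hypothetical override: keep Google users' organization roles managed
; manually in Grafana, while other providers still synchronize.
[auth.google]
skip_org_role_sync = true
```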
### api_key_max_seconds_to_live

Limit of API key seconds to live before expiration. Default is `-1` (unlimited).

### sigv4_auth_enabled

Set to `true` to enable the AWS Signature Version 4 Authentication option for HTTP-based datasources. Default is `false`.

### sigv4_verbose_logging

Set to `true` to enable verbose request signature logging when AWS Signature Version 4 Authentication is enabled. Default is `false`.

### managed_service_accounts_enabled

> Only available in Grafana 11.3+.

Set to `true` to enable the use of managed service accounts for plugin authentication. Default is `false`.

Limitations:

- This feature currently **only supports single organization deployments**.
- The plugin's service account is automatically created in the default organization. This means the plugin can only access data and resources within that specific organization.

## [auth.anonymous]

Refer to Anonymous authentication for detailed instructions.

## [auth.github]

Refer to GitHub OAuth2 authentication for detailed instructions.

## [auth.gitlab]

Refer to GitLab OAuth2 authentication for detailed instructions.

## [auth.google]

Refer to Google OAuth2 authentication for detailed instructions.

## [auth.grafananet]

Legacy key names, still in the config file so they work in env variables.

## [auth.grafana_com]

Legacy key names, still in the config file so they work in env variables.

## [auth.azuread]

Refer to Azure AD OAuth2 authentication for detailed instructions.

## [auth.okta]

Refer to Okta OAuth2 authentication for detailed instructions.

## [auth.generic_oauth]

Refer to Generic OAuth authentication for detailed instructions.

## [auth.basic]
Refer to Basic authentication for detailed instructions.

<hr>

## [auth.proxy]

Refer to Auth proxy authentication for detailed instructions.

<hr>

## [auth.ldap]

Refer to LDAP authentication for detailed instructions.

## [aws]

You can configure core and external AWS plugins.

### allowed_auth_providers

Specify what authentication providers the AWS plugins allow. For a list of allowed providers, refer to the data source configuration page for a given plugin. If you configure a plugin by provisioning, only providers that are specified in `allowed_auth_providers` are allowed.

Options: `default` (AWS SDK default), `keys` (Access and secret key), `credentials` (Credentials file), `ec2_iam_role` (EC2 IAM role)

### assume_role_enabled

Set to `false` to disable AWS authentication from using an assumed role with temporary security credentials. For details about assume roles, refer to the AWS API reference documentation about the [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) operation.

If this option is disabled, the **Assume Role** and the **External Id** field are removed from the AWS data source configuration page. If the plugin is configured using provisioning, it is possible to use an assumed role as long as `assume_role_enabled` is set to `true`.

### list_metrics_page_limit

Use the [List Metrics API](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_ListMetrics.html) option to load metrics for custom namespaces in the CloudWatch data source. By default, the page limit is 500.

<hr>

## [azure]

Grafana supports additional integration with Azure services when hosted in the Azure Cloud.

### cloud

Azure cloud environment where Grafana is hosted:

| Azure Cloud                                      | Value                |
| ------------------------------------------------ | -------------------- |
| Microsoft Azure public cloud                     | AzureCloud (default) |
| Microsoft Chinese national cloud                 | AzureChinaCloud      |
| US Government cloud                              | AzureUSGovernment    |
| Microsoft German national cloud ("Black Forest") | AzureGermanCloud     |

### clouds_config

The JSON config defines a list of Azure clouds and their associated properties when hosted in custom Azure environments.

For example:

```ini
clouds_config = [
  {
    "name": "CustomCloud1",
    "displayName": "Custom Cloud 1",
    "aadAuthority": "https://login.cloud1.contoso.com/",
    "properties": {
      "azureDataExplorerSuffix": ".kusto.windows.cloud1.contoso.com",
      "logAnalytics": "https://api.loganalytics.cloud1.contoso.com",
      "portal": "https://portal.azure.cloud1.contoso.com",
      "prometheusResourceId": "https://prometheus.monitor.azure.cloud1.contoso.com",
      "resourceManager": "https://management.azure.cloud1.contoso.com"
    }
  }
]
```

### managed_identity_enabled

Specifies whether Grafana is hosted in an Azure service with Managed Identity configured (e.g. an Azure Virtual Machines instance). Disabled by default, needs to be explicitly enabled.

### managed_identity_client_id

The client ID to use for user-assigned managed identity. Should be set for user-assigned identity and should be empty for system-assigned identity.

### workload_identity_enabled

Specifies whether Azure AD Workload Identity authentication should be enabled in datasources that support it. For more documentation on Azure AD Workload Identity, review the [Azure AD Workload Identity](https://azure.github.io/azure-workload-identity/docs/) documentation. Disabled by default, needs to be explicitly enabled.

### workload_identity_tenant_id

Tenant ID of the Azure AD Workload Identity. Allows overriding the default tenant ID of the Azure AD identity associated with the Kubernetes service account.

### workload_identity_client_id

Client ID of the Azure AD Workload Identity. Allows overriding the default client ID of the Azure AD identity associated with the Kubernetes service account.

### workload_identity_token_file

Custom path to the token file for the Azure AD Workload Identity. Allows setting a custom path to the projected service account token file.

### user_identity_enabled

Specifies whether user identity authentication (on behalf of the currently signed-in user) should be enabled in datasources that support it (requires AAD authentication). Disabled by default, needs to be explicitly enabled.

### user_identity_fallback_credentials_enabled

Specifies whether user identity authentication fallback credentials should be enabled in data sources. Enabling this allows data source creators to provide fallback credentials for backend-initiated requests, such as alerting, recorded queries, and so on. It is enabled by default and needs to be explicitly disabled. It will not have any effect if user identity authentication is disabled.

### user_identity_token_url

Override the token URL for Azure Active Directory. By default it is the same as the token URL configured for AAD authentication settings.

### user_identity_client_id

Override the AAD application ID which would be used to exchange a user's token for an access token for the datasource. By default it is the same as used in AAD authentication, or it can be set to another application (for OBO flow).

### user_identity_client_secret

Override the AAD application client secret. By default it is the same as used in AAD authentication, or it can be set to another application (for OBO flow).

### forward_settings_to_plugins

Set plugins that will receive Azure settings via plugin context. By default, this will include all Grafana Labs owned Azure plugins or those that use Azure settings (Azure Monitor, Azure Data Explorer, Prometheus, MSSQL).

### azure_entra_password_credentials_enabled

Specifies whether Entra password auth can be used for the MSSQL data source. This authentication is not recommended and consideration should be taken before enabling this.
Disabled by default, needs to be explicitly enabled.

## [auth.jwt]

Refer to JWT authentication for more information.

<hr>

## [smtp]

Email server settings.

### enabled

Enable this to allow Grafana to send email. Default is `false`.

### host

Default is `localhost:25`. Use port 465 for implicit TLS.

### user

In case of SMTP auth, default is `empty`.

### password

In case of SMTP auth, default is `empty`. If the password contains `#` or `;`, then you have to wrap it with triple quotes. Example: `"""#password;"""`

### cert_file

File path to a cert file, default is `empty`.

### key_file

File path to a key file, default is `empty`.

### skip_verify

Verify SSL for SMTP server, default is `false`.

### from_address

Address used when sending out emails, default is `admin@grafana.localhost`.

### from_name

Name to be used when sending out emails, default is `Grafana`.

### ehlo_identity

Name to be used as client identity for EHLO in SMTP dialog, default is `<instance_name>`.

### startTLS_policy

Either `OpportunisticStartTLS`, `MandatoryStartTLS`, or `NoStartTLS`. Default is `empty`.

### enable_tracing

Enable trace propagation in e-mail headers, using the `traceparent`, `tracestate` and (optionally) `baggage` fields. Default is `false`. To enable, you must first configure tracing in one of the `tracing.opentelemetry*` sections.

<hr>

## [smtp.static_headers]

Enter key-value pairs on their own lines to be included as headers on outgoing emails. All keys must be in canonical mail header format. Examples: `Foo=bar`, `Foo-Header=bar`.

<hr>

## [emails]

### welcome_email_on_sign_up

Default is `false`.

### templates_pattern

Enter a comma separated list of template patterns. Default is `emails/*.html, emails/*.txt`.

### content_types

Enter a comma separated list of content types that should be included in the emails that are sent. List the content types according to descending preference, e.g. `text/html, text/plain` for HTML as the most preferred. The order of the parts is significant as the mail clients will use the content type that is supported and most preferred by the sender. Supported content types are `text/html` and `text/plain`. Default is `text/html`.

<hr>

## [log]

Grafana logging options.

### mode

Options are `console`, `file`, and `syslog`. Default is `console` and `file`. Use spaces to separate multiple modes, e.g. `console file`.

### level

Options are `debug`, `info`, `warn`, `error`, and `critical`. Default is `info`.

### filters

Optional settings to set different levels for specific loggers. For example: `filters = sqlstore:debug`

### user_facing_default_error

Use this configuration option to set the default error message shown to users. This message is displayed instead of sensitive backend errors, which should be obfuscated. The default message is `Please inspect the Grafana server log for details.`.

<hr>

## [log.console]

Only applicable when `console` is used in `[log]` mode.

### level

Options are `debug`, `info`, `warn`, `error`, and `critical`. Default is inherited from `[log]` level.

### format

Log line format, valid options are text, console and json. Default is `console`.

<hr>

## [log.file]

Only applicable when `file` is used in `[log]` mode.

### level

Options are `debug`, `info`, `warn`, `error`, and `critical`. Default is inherited from `[log]` level.

### format

Log line format, valid options are text, console and json. Default is `text`.

### log_rotate

Enable automated log rotation, valid options are `false` or `true`. Default is `true`. When enabled use the `max_lines`, `max_size_shift`, `daily_rotate` and `max_days` settings to configure the behavior of the log rotation.

### max_lines

Maximum lines per file before rotating it. Default is `1000000`.

### max_size_shift

Maximum size of file before rotating it. Default is `28`, which means `1 << 28`, `256MB`.

### daily_rotate

Enable daily rotation of files, valid options are `false` or `true`. Default is `true`.
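As a minimal sketch, the logging and rotation options described above might be combined in `grafana.ini` like this (the values shown are illustrative, not recommendations):

```ini
[log]
; write logs to the console and to rotated files
mode = console file
level = info

[log.file]
; rotate when a file reaches 2^28 bytes (256MB) or 1,000,000 lines,
; and also start a fresh file each day
log_rotate = true
max_lines = 1000000
max_size_shift = 28
daily_rotate = true
```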
### max_days

Maximum number of days to keep log files. Default is `7`.

<hr>

## [log.syslog]

Only applicable when `syslog` is used in `[log]` mode.

### level

Options are `debug`, `info`, `warn`, `error`, and `critical`. Default is inherited from `[log]` level.

### format

Log line format, valid options are text, console, and json. Default is `text`.

### network and address

Syslog network type and address. This can be UDP, TCP, or UNIX. If left blank, then the default UNIX endpoints are used.

### facility

Syslog facility. Valid options are user, daemon or local0 through local7. Default is empty.

### tag

Syslog tag. By default, the process's `argv[0]` is used.

<hr>

## [log.frontend]

### enabled

Faro javascript agent is initialized. Default is `false`.

### custom_endpoint

Custom HTTP endpoint to send events captured by the Faro agent to. Default, `/log-grafana-javascript-agent`, will log the events to stdout.

### log_endpoint_requests_per_second_limit

Requests per second limit enforced per an extended period, for the Grafana backend log ingestion endpoint, `/log-grafana-javascript-agent`. Default is `3`.

### log_endpoint_burst_limit

Maximum requests accepted per short interval of time for the Grafana backend log ingestion endpoint, `/log-grafana-javascript-agent`. Default is `15`.

### instrumentations_all_enabled

Enables all Faro default instrumentation by using `getWebInstrumentations`. Overrides other instrumentation flags.

### instrumentations_errors_enabled

Turn on error instrumentation. Only affects Grafana Javascript Agent.

### instrumentations_console_enabled

Turn on console instrumentation. Only affects Grafana Javascript Agent.

### instrumentations_webvitals_enabled

Turn on webvitals instrumentation. Only affects Grafana Javascript Agent.

### instrumentations_tracing_enabled

Turns on tracing instrumentation. Only affects Grafana Javascript Agent.

### api_key

If `custom_endpoint` requires authentication, you can set the API key here. Only relevant for the Grafana Javascript Agent provider.

<hr>

## [quota]

Set quotas to `-1` to make unlimited.

### enabled

Enable usage quotas. Default is `false`.

### org_user

Limit the number of users allowed per organization. Default is 10.

### org_dashboard

Limit the number of dashboards allowed per organization. Default is 100.

### org_data_source

Limit the number of data sources allowed per organization. Default is 10.

### org_api_key

Limit the number of API keys that can be entered per organization. Default is 10.

### org_alert_rule

Limit the number of alert rules that can be entered per organization. Default is 100.

### user_org

Limit the number of organizations a user can create. Default is 10.

### global_user

Sets a global limit of users. Default is `-1` (unlimited).

### global_org

Sets a global limit on the number of organizations that can be created. Default is `-1` (unlimited).

### global_dashboard

Sets a global limit on the number of dashboards that can be created. Default is `-1` (unlimited).

### global_api_key

Sets a global limit of API keys that can be entered. Default is `-1` (unlimited).

### global_session

Sets a global limit on the number of users that can be logged in at one time. Default is `-1` (unlimited).

### global_alert_rule

Sets a global limit on the number of alert rules that can be created. Default is `-1` (unlimited).

### global_correlations

Sets a global limit on the number of correlations that can be created. Default is `-1` (unlimited).

### alerting_rule_evaluation_results

Limit the number of query evaluation results per alert rule. If the condition query of an alert rule produces more results than this limit, the evaluation results in an error. Default is `-1` (unlimited).

<hr>

## [unified_alerting]

For more information about the Grafana alerts, refer to Grafana Alerting.

### enabled

Enable or disable Grafana Alerting. The default value is `true`.

Alerting rules migrated from dashboards and panels will include a link back via
the annotations.

### disabled_orgs

Comma separated list of organization IDs for which to disable Grafana 8 Unified Alerting.

### admin_config_poll_interval

Specify the frequency of polling for admin config changes. The default value is `60s`.

The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.

### alertmanager_config_poll_interval

Specify the frequency of polling for Alertmanager config changes. The default value is `60s`.

The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.

### ha_redis_address

The Redis server address that should be connected to.

For more information on Redis, refer to [Enable alerting high availability using Redis](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/alerting/set-up/configure-high-availability/#enable-alerting-high-availability-using-redis).

### ha_redis_username

The username that should be used to authenticate with the Redis server.

### ha_redis_password

The password that should be used to authenticate with the Redis server.

### ha_redis_db

The Redis database. The default value is `0`.

### ha_redis_prefix

A prefix that is used for every key or channel that is created on the Redis server as part of HA for alerting.

### ha_redis_peer_name

The name of the cluster peer that will be used as an identifier. If none is provided, a random one will be generated.

### ha_redis_max_conns

The maximum number of simultaneous Redis connections.

### ha_listen_address

Listen IP address and port to receive unified alerting messages for other Grafana instances. The port is used for both TCP and UDP. It is assumed other Grafana instances are also running on the same port. The default value is `0.0.0.0:9094`.

### ha_advertise_address

Explicit IP address and port to advertise to other Grafana instances. The port is used for both TCP and UDP.

### ha_peers

Comma separated list of initial instances (in a format of host:port) that will form the HA cluster. Configuring this setting will enable High Availability mode for alerting.

### ha_peer_timeout

Time to wait for an instance to send a notification via the Alertmanager. In HA, each Grafana instance will be assigned a position (e.g. 0, 1). We then multiply this position with the timeout to indicate how long each instance should wait before sending the notification, to take into account replication lag. The default value is `15s`.

The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.

### ha_label

The label is an optional string to include on each packet and stream. It uniquely identifies the cluster and prevents cross-communication issues when sending gossip messages in an environment with multiple clusters.

### ha_gossip_interval

The interval between sending gossip messages. By lowering this value (more frequent), gossip messages are propagated across the cluster more quickly at the expense of increased bandwidth usage. The default value is `200ms`.

The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.

### ha_reconnect_timeout

Length of time to attempt to reconnect to a lost peer. When running Grafana in a Kubernetes cluster, set this duration to less than `15m`.

The string is a possibly signed sequence of decimal numbers followed by a unit suffix (ms, s, m, h, d), such as `30s` or `1m`.

### ha_push_pull_interval

The interval between gossip full state syncs. Setting this interval lower (more frequent) will increase convergence speeds across larger clusters at the expense of increased bandwidth usage. The default value is `60s`.

The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.

### execute_alerts

Enable or disable alerting rule execution. The default value is `true`. The alerting UI remains visible.

### evaluation_timeout

Sets the alert evaluation timeout when fetching data from the data source. The default value is `30s`.

The timeout string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.

### max_attempts

Sets the maximum number of times we'll attempt to evaluate an alert rule before giving up on that evaluation. The default value is `1`.

### min_interval

Sets the minimum interval to enforce between rule evaluations. The default value is `10s`, which equals the scheduler interval. Rules will be adjusted if they are less than this value or if they are not a multiple of the scheduler interval (10s). Higher values can help with resource management as we'll schedule fewer evaluations over time.

The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.

> **Note:** This setting has precedence over each individual rule frequency. If a rule frequency is lower than this value, then this value is enforced.

<hr>

## [unified_alerting.screenshots]

For more information about screenshots, refer to Images in notifications.

### capture

Enable screenshots in notifications. This option requires a remote HTTP image rendering service. Please see `[rendering]` for further configuration options.

### capture_timeout

The timeout for capturing screenshots. If a screenshot cannot be captured within the timeout then the notification is sent without a screenshot. The maximum duration is 30 seconds. This timeout should be less than the minimum Interval of all Evaluation Groups to avoid back pressure on alert rule evaluation.

### max_concurrent_screenshots

The maximum number of screenshots that can be taken at the same time. This option is different from `concurrent_render_request_limit` as `max_concurrent_screenshots` sets the number of concurrent screenshots that can be taken at the
same time for all firing alerts, whereas `concurrent_render_request_limit` sets the total number of concurrent screenshots across all Grafana services.

### upload_external_image_storage

Uploads screenshots to the local Grafana server or remote storage such as Azure, S3 and GCS. Please see `[external_image_storage]` for further configuration options. If this option is false then screenshots will be persisted to disk for up to `temp_data_lifetime`.

<hr>

## [unified_alerting.reserved_labels]

For more information about Grafana Reserved Labels, refer to Labels in Grafana Alerting.

### disabled_labels

Comma separated list of reserved labels added by the Grafana Alerting engine that should be disabled.

For example: `disabled_labels=grafana_folder`

<hr>

## [unified_alerting.state_history.annotations]

This section controls retention of annotations automatically created while evaluating alert rules when the alerting state history backend is configured to be annotations (see setting `[unified_alerting.state_history]` `backend`).

### max_age

Configures for how long alert annotations are stored. Default is 0, which keeps them forever. This setting should be expressed as a duration. Examples: 6h (hours), 10d (days), 2w (weeks), 1M (month).

### max_annotations_to_keep

Configures the max number of alert annotations that Grafana stores. Default value is 0, which keeps all alert annotations.

<hr>

## [annotations]

### cleanupjob_batchsize

Configures the batch size for the annotation clean-up job. This setting is used for dashboard, API, and alert annotations.

### tags_length

Enforces the maximum allowed length of the tags for any newly introduced annotations. It can be between 500 and 4096 (inclusive). Default value is 500. Setting it to a higher value would impact performance and therefore is not recommended.

## [annotations.dashboard]

Dashboard annotations means that annotations are associated with the dashboard they are created on.

### max_age

Configures how long dashboard annotations are stored. Default is 0, which keeps them forever. This setting should be expressed as a duration. Examples: 6h (hours), 10d (days), 2w (weeks), 1M (month).

### max_annotations_to_keep

Configures the max number of dashboard annotations that Grafana stores. Default value is 0, which keeps all dashboard annotations.

## [annotations.api]

API annotations means that the annotations have been created using the API without any association with a dashboard.

### max_age

Configures how long Grafana stores API annotations. Default is 0, which keeps them forever. This setting should be expressed as a duration. Examples: 6h (hours), 10d (days), 2w (weeks), 1M (month).

### max_annotations_to_keep

Configures the max number of API annotations that Grafana keeps. Default value is 0, which keeps all API annotations.

<hr>

## [explore]

For more information about this feature, refer to Explore.

### enabled

Enable or disable the Explore section. Default is `enabled`.

### defaultTimeOffset

Set a default time offset from now on the time picker. Default is 1 hour. This setting should be expressed as a duration. Examples: 1h (hour), 1d (day), 1w (week), 1M (month).

## [help]

Configures the help section.

### enabled

Enable or disable the Help section. Default is `enabled`.

## [profile]

Configures the Profile section.

### enabled

Enable or disable the Profile section. Default is `enabled`.

## [news]

### news_feed_enabled

Enables the news feed section. Default is `true`.

<hr>

## [query]

### concurrent_query_limit

Set the number of queries that can be executed concurrently in a mixed data source panel. Default is the number of CPUs.

## [query_history]

Configures Query history in Explore.

### enabled

Enable or disable the Query history. Default is `enabled`.

<hr>

## [short_links]

Configures settings around the short link feature.

### expire_time

Short links that are never
accessed are considered expired or stale and will be deleted as cleanup. Set the expiration time in days. The default is `7` days. The maximum is `365` days, and setting above the maximum will have `365` set instead. Setting `0` means the short links will be cleaned up approximately every 10 minutes. A negative value such as `-1` will disable expiry.

Short links without an expiration increase the size of the database and can't be deleted.

<hr>

## [metrics]

For detailed instructions, refer to Internal Grafana metrics.

### enabled

Enable metrics reporting, defaults true. Available via HTTP API `<URL>/metrics`.

### interval_seconds

Flush/write interval when sending metrics to an external TSDB. Defaults to `10`.

### disable_total_stats

If set to `true`, then total stats generation (`stat_totals_*` metrics) is disabled. Default is `false`.

### total_stats_collector_interval_seconds

Sets the total stats collector interval. The default is 1800 seconds (30 minutes).

### basic_auth_username and basic_auth_password

If both are set, then basic authentication is required to access the metrics endpoint.

<hr>

## [metrics.environment_info]

Adds dimensions to the `grafana_environment_info` metric, which can expose more information about the Grafana instance. For example:

`exampleLabel1 = exampleValue1`
`exampleLabel2 = exampleValue2`

## [metrics.graphite]

Use these options if you want to send internal Grafana metrics to Graphite.

### address

Enable by setting the address. Format is `<Hostname or ip>:port`.

### prefix

Graphite metric prefix. Defaults to `prod.grafana.%(instance_name)s.`

<hr>

## [grafana_net]

Refer to the `[grafana_com]` config as that is the new and preferred config name. The grafana_net config is still accepted and parsed to grafana_com config.

<hr>

## [grafana_com]

### url

Default is https://grafana.com. The default authentication identity provider for Grafana Cloud.

<hr>

## [tracing.jaeger]

_Deprecated: use `[tracing.opentelemetry.jaeger]` or `[tracing.opentelemetry.otlp]` instead._

Configure Grafana's Jaeger client for distributed tracing.

You can also use the standard `JAEGER_*` environment variables to configure Jaeger. See the table at the end of https://www.jaegertracing.io/docs/1.16/client-features/ for the full list. Environment variables will override any settings provided here.

### address

The host:port destination for reporting spans. (ex: `localhost:6831`)

Can be set with the environment variables `JAEGER_AGENT_HOST` and `JAEGER_AGENT_PORT`.

### always_included_tag

Comma separated list of tags to include in all new spans, such as `tag1:value1,tag2:value2`.

Can be set with the environment variable `JAEGER_TAGS` (use `=` instead of `:` with the environment variable).

### sampler_type

Default value is `const`.

Specifies the type of sampler: `const`, `probabilistic`, `ratelimiting`, or `remote`.

Refer to https://www.jaegertracing.io/docs/1.16/sampling/#client-sampling-configuration for details on the different tracing types.

Can be set with the environment variable `JAEGER_SAMPLER_TYPE`.

To override this setting, enter `sampler_type` in the `tracing.opentelemetry` section.

### sampler_param

Default value is `1`.

This is the sampler configuration parameter. Depending on the value of `sampler_type`, it can be `0`, `1`, or a decimal value in between.

- For the `const` sampler, `0` or `1` for always `false`/`true` respectively
- For the `probabilistic` sampler, a probability between `0` and `1.0`
- For the `rateLimiting` sampler, the number of spans per second
- For the `remote` sampler, param is the same as for `probabilistic` and indicates the initial sampling rate before the actual one is received from the mothership

May be set with the environment variable `JAEGER_SAMPLER_PARAM`.

Setting `sampler_param` in the `tracing.opentelemetry` section will override this setting.

### sampling_server_url

sampling_server_url is the URL of a sampling manager providing a sampling strategy.

Setting
`sampling_server_url` in the `tracing.opentelemetry` section will override this setting.

### zipkin_propagation

Default value is `false`.

Controls whether or not to use Zipkin's span propagation format (with `x-b3-` HTTP headers).

By default, Jaeger's format is used.

Can be set with the environment variable and value `JAEGER_PROPAGATION=b3`.

### disable_shared_zipkin_spans

Default value is `false`.

Setting this to `true` turns off shared RPC spans. Leaving this available is the most common setting when using Zipkin elsewhere in your infrastructure.

<hr>

## [tracing.opentelemetry]

Configure general parameters shared between OpenTelemetry providers.

### custom_attributes

Comma separated list of attributes to include in all new spans, such as `key1:value1,key2:value2`.

Can be set or overridden with the environment variable `OTEL_RESOURCE_ATTRIBUTES` (use `=` instead of `:` with the environment variable). The service name can be set or overridden using attributes or with the environment variable `OTEL_SERVICE_NAME`.

### sampler_type

Default value is `const`.

Specifies the type of sampler: `const`, `probabilistic`, `ratelimiting`, or `remote`.

### sampler_param

Default value is `1`.

Depending on the value of `sampler_type`, the sampler configuration parameter can be `0`, `1`, or any decimal value between `0` and `1`.

- For the `const` sampler, use `0` to never sample or `1` to always sample
- For the `probabilistic` sampler, you can use a decimal value between `0.0` and `1.0`
- For the `rateLimiting` sampler, enter the number of spans per second
- For the `remote` sampler, use a decimal value between `0.0` and `1.0` to specify the initial sampling rate used before the first update is received from the sampling server

### sampling_server_url

When `sampler_type` is `remote`, this specifies the URL of the sampling server. This can be used by all tracing providers.

Use a sampling server that supports the Jaeger remote sampling API, such as jaeger-agent, jaeger-collector, opentelemetry-collector-contrib, or [Grafana Alloy](https://grafana.com/oss/alloy-opentelemetry-collector/).

<hr>

## [tracing.opentelemetry.jaeger]

Configure Grafana's Jaeger client for distributed tracing.

### address

The host:port destination for reporting spans. (ex: `localhost:14268/api/traces`)

### propagation

The propagation specifies the text map propagation format. The values `jaeger` and `w3c` are supported. Add a comma (`,`) between values to specify multiple formats (for example, `"jaeger,w3c"`). The default value is `w3c`.

<hr>

## [tracing.opentelemetry.otlp]

Configure Grafana's otlp client for distributed tracing.

### address

The host:port destination for reporting spans. (ex: `localhost:4317`)

### propagation

The propagation specifies the text map propagation format. The values `jaeger` and `w3c` are supported. Add a comma (`,`) between values to specify multiple formats (for example, `"jaeger,w3c"`). The default value is `w3c`.

<hr>

## [external_image_storage]

These options control how images should be made public so they can be shared on services like Slack or email message.

### provider

Options are s3, webdav, gcs, azure_blob, local. If left empty, then Grafana ignores the upload action.

<hr>

## [external_image_storage.s3]

### endpoint

Optional endpoint URL (hostname or fully qualified URI) to override the default generated S3 endpoint. If you want to keep the default, just leave this empty. You must still provide a `region` value if you specify an endpoint.

### path_style_access

Set this to true to force path-style addressing in S3 requests, i.e. `http://s3.amazonaws.com/BUCKET/KEY`, instead of the default, which is virtual hosted bucket addressing when possible (`http://BUCKET.s3.amazonaws.com/KEY`).

> **Note:** This option is specific to the Amazon S3 service.

### bucket_url

(for backward compatibility, only works when no bucket or region are configured) Bucket URL for S3. AWS region can be specified within URL or defaults to `us-east-1`, e.g.

- http://grafana.s3.amazonaws.com/
- https://grafana.s3-ap-southeast-2.amazonaws.com/

### bucket

Bucket name for S3, e.g. grafana.snapshot.

### region

Region name for S3, e.g. `us-east-1`, `cn-north-1`, etc.

### path

Optional extra path inside bucket, useful to apply expiration policies.

### access_key

Access key, e.g. AAAAAAAAAAAAAAAAAAAA.

Access key requires permissions to the S3 bucket for the `s3:PutObject` and `s3:PutObjectAcl` actions.

### secret_key

Secret key, e.g. AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.

<hr>

## [external_image_storage.webdav]

### url

URL where Grafana sends a PUT request with images.

### username

Basic auth username.

### password

Basic auth password.

### public_url

Optional URL to send to users in notifications. If the string contains the sequence `${file}`, it is replaced with the uploaded filename. Otherwise, the file name is appended to the path part of the URL, leaving any query string unchanged.

<hr>

## [external_image_storage.gcs]

### key_file

Optional path to a JSON key file associated with a Google service account to authenticate and authorize. If no value is provided it tries to use the [application default credentials](https://cloud.google.com/docs/authentication/production#finding_credentials_automatically).

Service Account keys can be created and downloaded from https://console.developers.google.com/permissions/serviceaccounts.

The Service Account should have the "Storage Object Writer" role. The access control model of the bucket needs to be "Set object-level and bucket-level permissions". Grafana itself will make the images public readable when signed urls are not enabled.

### bucket

Bucket Name on Google Cloud Storage.

### path

Optional extra path inside bucket.

### enable_signed_urls

If set to true, Grafana creates a [signed URL](https://cloud.google.com/storage/docs/access-control/signed-urls) for the image uploaded to Google Cloud Storage.

### signed_url_expiration

Sets the signed URL expiration, which defaults to seven days.

## [external_image_storage.azure_blob]

### account_name

Storage account name.

### account_key

Storage account key.

### container_name

Container name where to store "Blob" images with random names. Creating the blob container beforehand is required. Only public containers are supported.

### sas_token_expiration_days

Number of days for SAS token validity. If specified, a SAS token will be attached to the image URL. Allows storing images in private containers.

<hr>

## [external_image_storage.local]

This option does not require any configuration.

<hr>

## [rendering]

Options to configure a remote HTTP image rendering service, e.g. using https://github.com/grafana/grafana-image-renderer.

### renderer_token

An auth token will be sent to and verified by the renderer. The renderer will deny any request without an auth token matching the one configured on the renderer.

### server_url

URL to a remote HTTP image renderer service, e.g. http://localhost:8081/render, will enable Grafana to render panels and dashboards to PNG images using HTTP requests to an external service.

### callback_url

If the remote HTTP image renderer service runs on a different server than the Grafana server you may have to configure this to a URL where Grafana is reachable, e.g. http://grafana.domain/.

### concurrent_render_request_limit

Concurrent render request limit affects when the /render HTTP endpoint is used. Rendering many images at the same time can overload the server, which this setting can help protect against by only allowing a certain number of concurrent requests. Default is `30`.

### default_image_width

Configures the width of the rendered image. The default width is `1000`.

### default_image_height

Configures the height of the rendered image. The default height is `500`.

### default_image_scale

Configures the scale of the rendered image. The default scale is `1`.

## [panels]

### enable_alpha
Set to  true  if you want to test alpha panels that are not yet ready for general usage  Default is  false        disable sanitize html   This configuration is not available in Grafana Cloud instances    If set to true Grafana will allow script tags in text panels  Not recommended as it enables XSS vulnerabilities  Default is false       plugins       enable alpha  Set to  true  if you want to test alpha plugins that are not yet ready for general usage  Default is  false        allow loading unsigned plugins  Enter a comma separated list of plugin identifiers to identify plugins to load even if they are unsigned  Plugins with modified signatures are never loaded   We do  not  recommend using this option  For more information  refer to  Plugin signatures          plugin admin enabled  Available to Grafana administrators only  enables installing   uninstalling   updating plugins directly from the Grafana UI  Set to  true  by default  Setting it to  false  will hide the install   uninstall   update controls   For more information  refer to  Plugin catalog          plugin admin external manage enabled  Set to  true  if you want to enable external management of plugins  Default is  false   This is only applicable to Grafana Cloud users       plugin catalog url  Custom install learn more URL for enterprise plugins  Defaults to https   grafana com grafana plugins        plugin catalog hidden plugins  Enter a comma separated list of plugin identifiers to hide in the plugin catalog       public key retrieval disabled  Disable download of the public key for verifying plugin signature  The default is  false   If disabled  it will use the hardcoded public key       public key retrieval on startup  Force download of the public key for verifying plugin signature on startup  The default is  false   If disabled  the public key will be retrieved every 10 days  Requires  public key retrieval disabled  to be false to have any effect       disable plugins  Enter a comma separated list 
of plugin identifiers to avoid loading  including core plugins   These plugins will be hidden in the catalog       preinstall  Enter a comma separated list of plugin identifiers to preinstall  These plugins will be installed on startup  using the Grafana catalog as the source  Preinstalled plugins cannot be uninstalled from the Grafana user interface  they need to be removed from this list first   To pin plugins to a specific version  use the format  plugin id version   for example  grafana piechart panel 1 6 0   If no version is specified  the latest version is installed   The plugin is automatically updated  to the latest version when a new version is available in the Grafana plugin catalog on startup  except for new major versions    To use a custom URL to download a plugin  use the format  plugin id version url   for example   grafana piechart panel 1 6 0 https   example com grafana piechart panel 1 6 0 zip    By default  Grafana preinstalls some suggested plugins  Check the default configuration file for the list of plugins       preinstall async  By default  plugins are preinstalled asynchronously  as a background process  This means that Grafana will start up faster  but the plugins may not be available immediately  If you need a plugin to be installed for provisioning  set this option to  false   This causes Grafana to wait for the plugins to be installed before starting up  and fail if a plugin can t be installed        preinstall disabled  This option disables all preinstalled plugins  The default is  false   To disable a specific plugin from being preinstalled  use the  disable plugins  option    hr       live       max connections  The  max connections  option specifies the maximum number of connections to the Grafana Live WebSocket endpoint per Grafana server instance  Default is  100    Refer to  Grafana Live configuration documentation    if you specify a number higher than default since this can require some operating system and infrastructure 
tuning   0 disables Grafana Live   1 means unlimited connections       allowed origins  The  allowed origins  option is a comma separated list of additional origins   Origin  header of HTTP Upgrade request during WebSocket connection establishment  that will be accepted by Grafana Live   If not set  default   then the origin is matched over  root url    which should be sufficient for most scenarios   Origin patterns support wildcard symbol        For example      ini  live  allowed origins    https     example com           ha engine    Experimental    The high availability  HA  engine name for Grafana Live  By default  it s not set  The only possible value is  redis    For more information  refer to the  Configure Grafana Live HA setup          ha engine address    Experimental    Address string of selected the high availability  HA  Live engine  For Redis  it s a  host port  string  Example      ini  live  ha engine   redis ha engine address   127 0 0 1 6379       hr       plugin plugin id   This section can be used to configure plugin specific settings  Replace the  plugin id  attribute with the plugin ID present in  plugin json    Properties described in this section are available for all plugins  but you must set them individually for each plugin       tracing    OpenTelemetry must be configured as well   tracingopentelemetry     If  true   propagate the tracing context to the plugin backend and enable tracing  if the backend supports it       as external  Load an external version of a core plugin if it has been installed   Experimental  Requires the feature toggle  externalCorePlugins  to be enabled    hr       plugin grafana image renderer   For more information  refer to  Image rendering          rendering timezone  Instruct headless browser instance to use a default timezone when not provided by Grafana  e g  when rendering panel image of alert  See  ICUs metaZones txt  https   cs chromium org chromium src third party icu source data misc metaZones txt  
for a list of supported timezone IDs  Fallbacks to TZ environment variable if not set       rendering language  Instruct headless browser instance to use a default language when not provided by Grafana  e g  when rendering panel image of alert  Refer to the HTTP header Accept Language to understand how to format this value  e g   fr CH  fr q 0 9  en q 0 8  de q 0 7     q 0 5        rendering viewport device scale factor  Instruct headless browser instance to use a default device scale factor when not provided by Grafana  e g  when rendering panel image of alert  Default is  1   Using a higher value will produce more detailed images  higher DPI   but requires more disk space to store an image       rendering ignore https errors  Instruct headless browser instance whether to ignore HTTPS errors during navigation  Per default HTTPS errors are not ignored  Due to the security risk  we do not recommend that you ignore HTTPS errors       rendering verbose logging  Instruct headless browser instance whether to capture and log verbose information when rendering an image  Default is  false  and will only capture and log error messages   When enabled  debug messages are captured and logged as well   For the verbose information to be included in the Grafana server log you have to adjust the rendering log level to debug  configure  log  filter   rendering debug       rendering dumpio  Instruct headless browser instance whether to output its debug and error messages into running process of remote rendering service  Default is  false    It can be useful to set this to  true  when troubleshooting       rendering timing metrics      Note    Available from grafana image renderer v3 9 0   Instruct a headless browser instance on whether to record metrics for the duration of every rendering step  Default is  false    Setting this to  true  when optimizing the rendering mode settings to improve the plugin performance or when troubleshooting can be useful       rendering args  
Additional arguments to pass to the headless browser instance  Defaults are    no sandbox   disable gpu   The list of Chromium flags can be found at  https   peter sh experiments chromium command line switches    Separate multiple arguments with commas       rendering chrome bin  You can configure the plugin to use a different browser binary instead of the pre packaged version of Chromium   Please note that this is  not  recommended  You might encounter problems if the installed version of Chrome Chromium is not compatible with the plugin       rendering mode  Instruct how headless browser instances are created  Default is  default  and will create a new browser instance on each request   Mode  clustered  will make sure that only a maximum of browsers incognito pages can execute concurrently   Mode  reusable  will have one browser instance and will create a new incognito page on each request       rendering clustering mode  When rendering mode   clustered  you can instruct how many browsers or incognito pages can execute concurrently  Default is  browser  and will cluster using browser instances   Mode  context  will cluster using incognito pages       rendering clustering max concurrency  When rendering mode   clustered  you can define the maximum number of browser instances incognito pages that can execute concurrently  Default is  5        rendering clustering timeout   Available in grafana image renderer v3 3 0 and later versions    When rendering mode   clustered  you can specify the duration a rendering request can take before it will time out  Default is  30  seconds       rendering viewport max width  Limit the maximum viewport width that can be requested       rendering viewport max height  Limit the maximum viewport height that can be requested       rendering viewport max device scale factor  Limit the maximum viewport device scale factor that can be requested       grpc host  Change the listening host of the gRPC server  Default host is  127 0 0 1       
 grpc port  Change the listening port of the gRPC server  Default port is  0  and will automatically assign a port not in use    hr       enterprise   For more information about Grafana Enterprise  refer to  Grafana Enterprise       hr       feature toggles       enable  Keys of features to enable  separated by space       FEATURE TOGGLE NAME   false  Some feature toggles for stable features are on by default  Use this setting to disable an on by default feature toggle with the name FEATURE TOGGLE NAME  for example   exploreMixedDatasource   false     hr       feature management   The options in this section configure the experimental Feature Toggle Admin Page feature  which is enabled using the  featureToggleAdminPage  feature toggle  Grafana Labs offers support on a best effort basis  and breaking changes might occur prior to the feature being made generally available   Please see  Configure feature toggles    for more information       allow editing  Lets you switch the feature toggle state in the feature management page  The default is  false        update webhook  Set the URL of the controller that manages the feature toggle updates  If not set  feature toggles in the feature management page will be read only    The API for feature toggle updates has not been defined yet        hidden toggles  Hide additional specific feature toggles from the feature management page  By default  feature toggles in the  unknown    experimental   and  private preview  stages are hidden from the UI  Use this option to hide toggles in the  public preview    general availability   and  deprecated  stages       read only toggles  Use to disable updates for additional specific feature toggles in the feature management page  By default  feature toggles can only be updated if they are in the  general availability  and  deprecated stages  Use this option to disable updates for toggles in those stages    hr       date formats   This section controls system wide defaults for date formats 
used in time ranges  graphs  and date input boxes   The format patterns use  Moment js  https   momentjs com docs   displaying   formatting tokens       full date  Full date format used by time range picker and in other places where a full date is rendered       intervals  These intervals formats are used in the graph to show only a partial date or time  For example  if there are only minutes between Y axis tick labels then the  interval minute  format is used   Defaults      interval second   HH mm ss interval minute   HH mm interval hour   MM DD HH mm interval day   MM DD interval month   YYYY MM interval year   YYYY          use browser locale  Set this to  true  to have date formats automatically derived from your browser location  Defaults to  false   This is an experimental feature       default timezone  Used as the default time zone for user preferences  Can be either  browser  for the browser local time zone or a time zone name from the IANA Time Zone database  such as  UTC  or  Europe Amsterdam        default week start  Set the default start of the week  valid values are   saturday    sunday    monday  or  browser  to use the browser locale to define the first day of the week  Default is  browser        expressions       enabled  Set this to  false  to disable expressions and hide them in the Grafana UI  Default is  true        geomap   This section controls the defaults settings for Geomap Plugin       default baselayer config  The json config used to define the default base map  Four base map options to choose from are  carto    esriXYZTiles    xyzTiles    standard   For example  to set cartoDB light as the default base layer      ini default baselayer config         type    xyz      config          attribution    Open street map        url    https   tile openstreetmap org  z   x   y  png                  enable custom baselayers  Set this to  false  to disable loading other custom base maps and hide them in the Grafana UI  Default is  true        
rbac   Refer to  Role based access control    for more information       navigation app sections   Move an app plugin  referenced by its id   including all its pages  to a specific navigation section  Format    pluginId     sectionId   sortWeight        navigation app standalone pages   Move an individual app plugin page  referenced by its  path  field  to a specific navigation section  Format    pageUrl     sectionId   sortWeight        public dashboards   This section configures the  shared dashboards  https   grafana com docs grafana  GRAFANA VERSION  dashboards share dashboards panels shared dashboards   feature       enabled  Set this to  false  to disable the shared dashboards feature  This prevents users from creating new shared dashboards and disables existing ones "}
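As a sketch, the `preinstall` pinning format and the Live origin allow-list described in the record above could be combined in `grafana.ini` like this (the plugin ID, version, and domain are illustrative values, not defaults):

```ini
[plugins]
# Pin a preinstalled plugin to a version (format: plugin_id@version);
# omit @version to track the latest release instead.
preinstall = grafana-piechart-panel@1.6.0
# Wait for preinstalled plugins before starting, e.g. when provisioning
# depends on them being present.
preinstall_async = false

[live]
# Accept WebSocket upgrades from any subdomain of example.com,
# in addition to origins matching root_url.
allowed_origins = "https://*.example.com"
```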
{"questions":"grafana setup documentation guide troubleshooting diagnostics enable diagnostics grafana aliases troubleshooting keywords Learn how to configure profiling and tracing so that you can troubleshoot Grafana","answers":"---\naliases:\n  - ..\/..\/troubleshooting\/diagnostics\/\n  - ..\/enable-diagnostics\/\ndescription: Learn how to configure profiling and tracing so that you can troubleshoot Grafana.\nkeywords:\n  - grafana\n  - troubleshooting\n  - documentation\n  - guide\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: Configure profiling and tracing\ntitle: Configure profiling and tracing to troubleshoot Grafana\nweight: 200\n---\n\n# Configure profiling and tracing to troubleshoot Grafana\n\nYou can set up the `grafana-server` process to enable certain diagnostics when it starts. This can be useful\nwhen investigating certain performance problems. It's _not_ recommended to have these enabled by default.\n\n## Turn on profiling and collect profiles\n\nThe `grafana-server` can be started with the command-line option `-profile` to enable profiling, `-profile-addr` to override the default HTTP address (`localhost`), and\n`-profile-port` to override the default HTTP port (`6060`) where the `pprof` debugging endpoints are available. Further, [`-profile-block-rate`](https:\/\/pkg.go.dev\/runtime#SetBlockProfileRate) controls the fraction of goroutine blocking events that are reported in the blocking profile, default `1` (i.e. track every event) for backward compatibility reasons, and [`-profile-mutex-rate`](https:\/\/pkg.go.dev\/runtime#SetMutexProfileFraction) controls the fraction of mutex contention events that are reported in the mutex profile, default `0` (i.e. track no events). 
The higher the fraction (that is, the smaller this value), the more overhead it adds to normal operations.\n\nRunning Grafana with profiling enabled and without block and mutex profiling enabled should only add a fraction of overhead and is suitable for [continuous profiling](https:\/\/grafana.com\/oss\/pyroscope\/). Adding a small fraction of block and mutex profiling, such as 10-5 (10%-20%), should in general be fine.\n\nEnable profiling:\n\n```bash\n.\/grafana server -profile -profile-addr=0.0.0.0 -profile-port=8080\n```\n\nEnable profiling with block and mutex profiling enabled with a fraction of 20%:\n\n```bash\n.\/grafana server -profile -profile-addr=0.0.0.0 -profile-port=8080 -profile-block-rate=5 -profile-mutex-rate=5\n```\n\nNote that `pprof` debugging endpoints are served on a different port than the Grafana HTTP server. Check what debugging endpoints are available by browsing `http:\/\/<profile-addr>:<profile-port>\/debug\/pprof`.\n\nThere are some additional [godeltaprof](https:\/\/github.com\/grafana\/pyroscope-go\/tree\/main\/godeltaprof) endpoints available which are more suitable in a continuous profiling scenario. These endpoints are `\/debug\/pprof\/delta_heap`, `\/debug\/pprof\/delta_block`, `\/debug\/pprof\/delta_mutex`.\n\nYou can configure or override profiling settings using environment variables:\n\n```bash\nexport GF_DIAGNOSTICS_PROFILING_ENABLED=true\nexport GF_DIAGNOSTICS_PROFILING_ADDR=0.0.0.0\nexport GF_DIAGNOSTICS_PROFILING_PORT=8080\nexport GF_DIAGNOSTICS_PROFILING_BLOCK_RATE=5\nexport GF_DIAGNOSTICS_PROFILING_MUTEX_RATE=5\n```\n\nIn general, you use the [Go command pprof](https:\/\/golang.org\/cmd\/pprof\/) to both collect and analyze profiling data. You can also use [curl](https:\/\/curl.se\/) or similar to collect profiles, which can be convenient in environments where you don't have the Go\/pprof command available. 
The following examples show how to use curl and pprof to collect and analyze memory and CPU profiles.\n\n**Analyzing high memory usage\/memory leaks:**\n\nWhen experiencing high memory usage or potential memory leaks, it's useful to collect several heap profiles and compare them when analyzing. It's a good idea to wait some time, e.g. 30 seconds, between collecting each profile to allow memory consumption to increase.\n\n```bash\ncurl http:\/\/<profile-addr>:<profile-port>\/debug\/pprof\/heap > heap1.pprof\nsleep 30\ncurl http:\/\/<profile-addr>:<profile-port>\/debug\/pprof\/heap > heap2.pprof\n```\n\nYou can then use the pprof tool to compare the two heap profiles:\n\n```bash\ngo tool pprof -http=localhost:8081 --base heap1.pprof heap2.pprof\n```\n\n**Analyzing high CPU usage:**\n\nWhen experiencing high CPU usage, it's suggested to collect CPU profiles over a period of time, e.g. 30 seconds.\n\n```bash\ncurl 'http:\/\/<profile-addr>:<profile-port>\/debug\/pprof\/profile?seconds=30' > profile.pprof\n```\n\nYou can then use the pprof tool to analyze the collected CPU profile:\n\n```bash\ngo tool pprof -http=localhost:8081 profile.pprof\n```\n\n## Use tracing\n\nThe `grafana-server` can be started with the arguments `-tracing` to enable tracing and `-tracing-file` to override the default trace file (`trace.out`) where the trace result is written. For example:\n\n```bash\n.\/grafana server -tracing -tracing-file=\/tmp\/trace.out\n```\n\nYou can configure or override tracing settings using environment variables:\n\n```bash\nexport GF_DIAGNOSTICS_TRACING_ENABLED=true\nexport GF_DIAGNOSTICS_TRACING_FILE=\/tmp\/trace.out\n```\n\nView the trace in a web browser (Go must be installed):\n\n```bash\ngo tool trace <trace file>\n2019\/11\/24 22:20:42 Parsing trace...\n2019\/11\/24 22:20:42 Splitting trace...\n2019\/11\/24 22:20:42 Opening browser. 
Trace viewer is listening on http:\/\/127.0.0.1:39735\n```\n\nFor more information about how to analyze trace files, refer to [Go command trace](https:\/\/golang.org\/cmd\/trace\/).","site":"grafana setup"}
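The collect-and-compare heap workflow in the record above can be wrapped in a small script. This is a sketch: the address, port, and output file names are assumptions to adapt to your `-profile-addr`/`-profile-port` flags, and it defaults to a dry run that only prints the commands it would execute.

```shell
#!/bin/sh
# Sketch of the heap-profile collect-and-compare workflow.
# PROFILE_ADDR / PROFILE_PORT are assumptions; match your -profile flags.
PROFILE_ADDR="${PROFILE_ADDR:-localhost}"
PROFILE_PORT="${PROFILE_PORT:-6060}"
HEAP_URL="http://${PROFILE_ADDR}:${PROFILE_PORT}/debug/pprof/heap"

# Dry-run by default: print each command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

run curl -s -o heap1.pprof "$HEAP_URL"   # first snapshot
run sleep 30                             # let memory consumption grow
run curl -s -o heap2.pprof "$HEAP_URL"   # second snapshot
# Diff the snapshots; allocations that grew between them stand out.
run go tool pprof -http=localhost:8081 --base heap1.pprof heap2.pprof
```

Set `DRY_RUN=0` to actually collect and compare the profiles against a running Grafana instance.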
{"questions":"grafana setup settings enterprise settings updates labels grafana aliases products keywords Settings updates at runtime runtime","answers":"---\naliases:\n  - ..\/..\/enterprise\/settings-updates\/\ndescription: Settings updates at runtime\nkeywords:\n  - grafana\n  - runtime\n  - settings\nlabels:\n  products:\n    - enterprise\n    - oss\ntitle: Settings updates at runtime\nweight: 500\n---\n\n# Settings updates at runtime\n\n\nThis functionality is deprecated and will be removed in a future release. For configuring SAML authentication, please use the new [SSO settings API]().\n\n\nBy updating settings at runtime, you can update Grafana settings without needing to restart the Grafana server.\n\nUpdates that happen at runtime are stored in the database and override\n[settings from other sources]()\n(arguments, environment variables, settings file, etc). Therefore, every time a specific setting key is removed at runtime,\nthe value used for that key is the inherited one from the other sources in the reverse order of precedence\n(`arguments > environment variables > settings file`). When no value is provided through any of these options, then the value used will be the application default.\n\nCurrently, **it only supports updates on the `auth.saml` section.**\n\n## Update settings via the API\n\nYou can update settings through the [Admin API]().\n\nWhen you submit a settings update via API, Grafana verifies if the given settings updates are allowed and valid. 
If they are, then Grafana stores the settings in the database and reloads\nGrafana services with no need to restart the instance.\n\nSo, the payload of a `PUT` request to the update settings endpoint (`\/api\/admin\/settings`)\nshould contain (either one or both):\n\n- An `updates` map with a key and a value per section you want to set.\n- A `removals` list with keys per section you want to unset.\n\nFor example, if you provide the following `updates`:\n\n```json\n{\n  \"updates\": {\n    \"auth.saml\": {\n      \"enabled\": \"true\",\n      \"single_logout\": \"false\"\n    }\n  }\n}\n```\n\nit would enable SAML and disable single logouts. And, if you provide the following `removals`:\n\n```json\n{\n  \"removals\": {\n    \"auth.saml\": [\"allow_idp_initiated\"]\n  }\n}\n```\n\nit would remove the key\/value setting identified by `allow_idp_initiated` within the `auth.saml` section.\nSo, the SAML service would be reloaded and the value for that key would be inherited from the other configuration sources (the settings `.ini` file,\nenvironment variables, command line arguments, or any other accepted mechanism to provide configuration).\n\nTherefore, the complete HTTP payload would look like:\n\n```json\n{\n  \"updates\": {\n    \"auth.saml\": {\n      \"enabled\": \"true\",\n      \"single_logout\": \"false\"\n    }\n  },\n  \"removals\": {\n    \"auth.saml\": [\"allow_idp_initiated\"]\n  }\n}\n```\n\nIf any of these settings cannot be overridden or are not valid, the API returns an error and the settings\nwon't be persisted in the database.\n\n## Background job (high availability set-ups)\n\nGrafana Enterprise has a built-in scheduled background job that looks into the database every minute for\nsettings updates. If there are updates, it reloads the Grafana services affected by the detected changes.\n\nThe background job synchronizes settings between instances in a highly available set-up. 
So, after you perform some changes through the\nHTTP API, then the other instances are synchronized through the database and the background job.\n\n## Control access with role-based access control\n\nIf you have [role-based access control]() enabled, you can control who can read or update settings.\nRefer to the [Admin API]() for more information.","site":"grafana setup","answers_cleaned":"    aliases            enterprise settings updates  description  Settings updates at runtime keywords      grafana     runtime     settings labels    products        enterprise       oss title  Settings updates at runtime weight  500        Settings updates at runtime   This functionality is deprecated and will be removed in a future release  For configuring SAML authentication  please use the new  SSO settings API       By updating settings at runtime  you can update Grafana settings without needing to restart the Grafana server   Updates that happen at runtime are stored in the database and override  settings from other sources     arguments  environment variables  settings file  etc   Therefore  every time a specific setting key is removed at runtime  the value used for that key is the inherited one from the other sources in the reverse order of precedence   arguments   environment variables   settings file    When no value is provided through any of these options  then the value used will be the application default  Currently    it only supports updates on the  auth saml  section        Update settings via the API  You can update settings through the  Admin API      When you submit a settings update via API  Grafana verifies if the given settings updates are allowed and valid  If they are  then Grafana stores the settings in the database and reloads Grafana services with no need to restart the instance   So  the payload of a  PUT  request to the update settings endpoint    api admin settings   should contain  either one or both      An  updates  map with a key  and a value 
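Putting the pieces together, the combined payload above can be sent to a running instance with `curl`. This is a minimal sketch: the host URL and the `admin:admin` credentials are placeholders for your own deployment.

```shell
# Compose the combined payload: enable SAML, disable single logout,
# and unset allow_idp_initiated so its value falls back to the other
# configuration sources.
PAYLOAD='{
  "updates": {
    "auth.saml": {
      "enabled": "true",
      "single_logout": "false"
    }
  },
  "removals": {
    "auth.saml": ["allow_idp_initiated"]
  }
}'
echo "$PAYLOAD"

# Send it as a PUT to the update settings endpoint (placeholder host
# and credentials -- substitute your own):
#   curl -X PUT "http://localhost:3000/api/admin/settings" \
#     -u admin:admin \
#     -H "Content-Type: application/json" \
#     -d "$PAYLOAD"
```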
---
aliases:
  - ../../enterprise/enterprise-configuration/
description: Learn about Grafana Enterprise configuration options that you can specify.
labels:
  products:
    - enterprise
    - oss
title: Configure Grafana Enterprise
weight: 100
---

# Configure Grafana Enterprise

This page describes Grafana Enterprise-specific configuration options that you can specify in a `.ini` configuration file or using environment variables. Refer to [Configuration]() for more information about available configuration options.

## [enterprise]

### license_path

Local filesystem path to Grafana Enterprise's license file. Defaults to `<paths.data>/license.jwt`.

### license_text

When set to the text representation (i.e. content of the license file) of the license, Grafana will evaluate and apply the given license to the instance.

### auto_refresh_license

When enabled, Grafana will send the license and usage statistics to the license issuer. If the license has been updated on the issuer's side to be valid for a different number of users or a new duration, your Grafana instance will be updated with the new terms automatically. Defaults to `true`.

The license only automatically updates once per day. To immediately update the terms for a license, use the Grafana UI to renew your license token.

### license_validation_type

When set to `aws`, Grafana will validate its license status with Amazon Web Services (AWS) instead of with Grafana Labs. Only use this setting if you purchased an Enterprise license from AWS Marketplace. Defaults to empty, which means that by default Grafana Enterprise will validate using a license issued by Grafana Labs.
For details about licenses issued by AWS, refer to [Activate a Grafana Enterprise license purchased through AWS Marketplace]().

## [white_labeling]

### app_title

Set to your company name to override the application title.

### login_logo

Set to a complete URL to override the login logo.

### login_background

Set to a complete CSS background expression to override the login background. Example:

```bash
[white_labeling]
login_background = url(http://www.bhmpics.com/wallpapers/starfield-1920x1080.jpg)
```

### menu_logo

Set to a complete URL to override the menu logo.

### fav_icon

Set to a complete URL to override the fav icon (the icon shown in the browser tab).

### apple_touch_icon

Set to a complete URL to override the Apple/iOS icon.

### hide_edition

Set to `true` to remove the Grafana edition from appearing in the footer.

### footer_links

List the link IDs to use here. Grafana will look for matching link configurations; the link IDs should be space-separated and contain no whitespace.

## [usage_insights.export]

By [exporting usage logs](), you can directly query them and create dashboards of the information that matters to you most, such as dashboard errors, most active organizations, or your top-10 most-used queries.

### enabled

Enable the usage insights export feature.

### storage

Specify a storage type. Defaults to `loki`.

## [usage_insights.export.storage.loki]

### type

Set the communication protocol to use with Loki, which is either `grpc` or `http`. Defaults to `grpc`.

### url

Set the address for writing logs to Loki (format must be `host:port`).

### tls

Decide whether or not to enable the TLS (Transport Layer Security) protocol when establishing the connection to Loki. Defaults to `true`.

### tenant_id

Set the tenant ID for Loki communication, which is disabled by default.
The tenant ID is required to interact with Loki running in [multi-tenant mode](/docs/loki/latest/operations/multi-tenancy/).

## [analytics.summaries]

### buffer_write_interval

Interval for writing the dashboard usage stats buffer to the database.

### buffer_write_timeout

Timeout for writing the dashboard usage stats buffer to the database.

### rollup_interval

Interval for attempting to roll up the per-dashboard usage summary. Summaries are rolled up at most once per day.

### rollup_timeout

Timeout for attempting to roll up the per-dashboard usage summary.

## [analytics.views]

### recent_users_age

Age for recent active users.

## [reporting]

### rendering_timeout

Timeout for each panel rendering request.

### concurrent_render_limit

Maximum number of concurrent calls to the rendering service.

### image_scale_factor

Scale factor for rendering images. A value of `2` is enough for monitor resolutions; `4` is better for printed material. Setting a higher value affects performance and memory.

### max_attachment_size_mb

Set the maximum file size in megabytes for the CSV attachments.

### fonts_path

Path to the directory containing font files.

### font_regular

Name of the TrueType font file with regular style.

### font_bold

Name of the TrueType font file with bold style.

### font_italic

Name of the TrueType font file with italic style.

### max_retries_per_panel

Maximum number of panel rendering request retries before returning an error. To disable the retry feature, enter `0`. This is available in public preview and requires the `reportingRetries` feature toggle.

### allowed_domains

Allowed domains to receive reports. Use an asterisk (`*`) to allow all domains. Use a comma-separated list to allow multiple domains. Example: `allowed_domains = grafana.com, example.org`

## [auditing]

[Auditing]() allows you to track important changes to your Grafana instance.
By default, audit logs are written to file, but the auditing feature also supports sending logs directly to Loki.

### enabled

Enable the auditing feature. Defaults to `false`.

### loggers

List of enabled loggers.

### log_dashboard_content

Keep dashboard content in the logs (request or response fields). This can significantly increase the size of your logs.

### verbose

Log all requests and keep the request and response bodies. This can significantly increase the size of your logs.

### log_all_status_codes

Set to `false` to only log requests with 2xx, 3xx, 401, 403, and 500 responses.

### max_response_size_bytes

Maximum response body (in bytes) to be recorded. This can help reduce the memory footprint caused by auditing.

## [auditing.logs.file]

### path

Path to the logs folder.

### max_files

Maximum number of log files to keep.

### max_file_size_mb

Maximum size in megabytes per log file.

## [auditing.logs.loki]

### url

Set the URL for writing logs to Loki.

### tls

If `true`, establishes a secure connection to Loki. Defaults to `true`.

### tenant_id

Set the tenant ID for Loki communication, which is disabled by default. The tenant ID is required to interact with Loki running in [multi-tenant mode](/docs/loki/latest/operations/multi-tenancy/).

## [auth.saml]

### enabled

If `true`, the feature is enabled. Defaults to `false`.

### allow_sign_up

If `true`, allow new Grafana users to be created through SAML logins. Defaults to `true`.

### certificate

Base64-encoded public X.509 certificate. Used to sign requests to the IdP.

### certificate_path

Path to the public X.509 certificate. Used to sign requests to the IdP.

### private_key

Base64-encoded private key. Used to decrypt assertions from the IdP.

### private_key_path

Path to the private key. Used to decrypt assertions from the IdP.

### idp_metadata

Base64-encoded IdP SAML metadata XML.
Used to verify and obtain binding locations from the IdP.

### idp_metadata_path

Path to the SAML metadata XML. Used to verify and obtain binding locations from the IdP.

### idp_metadata_url

URL to fetch SAML IdP metadata. Used to verify and obtain binding locations from the IdP.

### max_issue_delay

The maximum time between the IdP issuing a response and the SP being allowed to process it. Defaults to 90 seconds.

### metadata_valid_duration

How long the SP's metadata is valid. Defaults to 48 hours.

### assertion_attribute_name

Friendly name or name of the attribute within the SAML assertion to use as the user name. Alternatively, this can be a template with variables that match the names of attributes within the SAML assertion.

### assertion_attribute_login

Friendly name or name of the attribute within the SAML assertion to use as the user login handle.

### assertion_attribute_email

Friendly name or name of the attribute within the SAML assertion to use as the user email.

### assertion_attribute_groups

Friendly name or name of the attribute within the SAML assertion to use as the user groups.

### assertion_attribute_role

Friendly name or name of the attribute within the SAML assertion to use as the user roles.

### assertion_attribute_org

Friendly name or name of the attribute within the SAML assertion to use as the user organization.

### allowed_organizations

List of comma- or space-separated organizations. Each user must be a member of at least one organization to log in.

### org_mapping

List of comma- or space-separated `Organization:OrgId:Role` mappings. Organization can be `*`, meaning "All users".
Role is optional and can have the following values: `Admin`, `Editor`, `Viewer`, or `None`.

### role_values_none

List of comma- or space-separated roles that will be mapped to the None role.

### role_values_viewer

List of comma- or space-separated roles that will be mapped to the Viewer role.

### role_values_editor

List of comma- or space-separated roles that will be mapped to the Editor role.

### role_values_admin

List of comma- or space-separated roles that will be mapped to the Admin role.

### role_values_grafana_admin

List of comma- or space-separated roles that will be mapped to the Grafana Admin (Super Admin) role.

## [keystore.vault]

### url

Location of the Vault server.

### namespace

Vault namespace if using Vault with multi-tenancy.

### auth_method

Method for authenticating with Vault. Vault is inactive if this option is not set. Current possible values: `token`.

### token

Secret token to connect to Vault when `auth_method` is `token`.

### lease_renewal_interval

Time between checks for secrets that need to be renewed.

### lease_renewal_expires_within

Time until expiration for tokens which are renewed. Should have a value higher than `lease_renewal_interval`.

### lease_renewal_increment

New duration for renewed tokens. Vault may be configured to ignore this value and impose a stricter limit.

## [security.egress]

Security egress makes it possible to control outgoing traffic from the Grafana server.

### host_deny_list

A list of hostnames or IP addresses separated by spaces for which requests are blocked.

### host_allow_list

A list of hostnames or IP addresses separated by spaces for which requests are allowed.
All other requests are blocked.

### header_drop_list

A list of headers that are stripped from the outgoing data source and alerting requests.

### cookie_drop_list

A list of cookies that are stripped from the outgoing data source and alerting requests.

## [security.encryption]

### algorithm

Encryption algorithm used to encrypt secrets stored in the database and cookies. Possible values are `aes-cfb` (default) and `aes-gcm`. AES-CFB stands for _Advanced Encryption Standard_ in _cipher feedback_ mode, and AES-GCM stands for _Advanced Encryption Standard_ in _Galois/Counter Mode_.

## [caching]

When query caching is enabled, Grafana can temporarily store the results of data source queries and serve cached responses to similar requests.

### backend

The caching backend to use when storing cached queries. Options: `memory`, `redis`, and `memcached`.

The default is `memory`.

### enabled

Setting `enabled` to `true` allows users to configure query caching for data sources.

This value is `true` by default.

This setting enables the caching feature, but it does not turn on query caching for any data source. To turn on query caching for a data source, update the setting on the data source configuration page. For more information, refer to the [query caching docs]().

### ttl

_Time to live_ (TTL) is the time that a query result is stored in the caching system before it is deleted or refreshed. This setting defines the time to live for query caching when TTL is not configured in data source settings. The default value is `1m` (1 minute).

### max_ttl

The maximum duration that a query result is stored in the caching system before it is deleted or refreshed. This value overrides the `ttl` config option or data source setting if that `ttl` value is greater than `max_ttl`.
To disable this constraint, set this value to `0s`.

The default is `0s` (disabled).

Disabling this constraint is not recommended in production environments.

### max_value_mb

This value limits the size of a single cache value. If a cache value (or query result) exceeds this size, then it is not cached. To disable this limit, set this value to `0`.

The default is `1`.

### connection_timeout

This setting defines the duration to wait for a connection to the caching backend.

The default is `5s`.

### read_timeout

This setting defines the duration to wait for the caching backend to return a cached result. To disable this timeout, set this value to `0s`.

The default is `0s` (disabled).

Disabling this timeout is not recommended in production environments.

### write_timeout

This setting defines how long to wait for the caching backend to store a result. To disable this timeout, set this value to `0s`.

The default is `0s` (disabled).

Disabling this timeout is not recommended in production environments.

## [caching.encryption]

### enabled

When `enabled` is `true`, query values in the cache are encrypted.

The default is `false`.

### encryption_key

A string used to generate a key for encrypting the cache. For the encrypted cache data to persist between Grafana restarts, you must specify this key. If it is empty when encryption is enabled, then the key is automatically generated on startup, and the cache clears upon restarts.

The default is `""`.

## [caching.memory]

### gc_interval

When storing cache data in memory, this setting defines how often a background process cleans up stale data from the in-memory cache. More frequent "garbage collection" can keep memory usage from climbing but will increase CPU usage.

The default is `1m`.

### max_size_mb

The maximum size of the in-memory cache in megabytes. Once this size is reached, new cache items are rejected.
For more flexible control over cache eviction policies and size, use the Redis or Memcached backend.

To disable the maximum, set this value to `0`.

The default is `25`.

Disabling the maximum is not recommended in production environments.

## [caching.redis]

### url

The full Redis URL of your Redis server. For example: `redis://username:password@localhost:6379`. To enable TLS, use the `rediss` scheme.

The default is `"redis://localhost:6379"`.

### cluster

A comma-separated list of Redis cluster members, either in `host:port` format or as full Redis URLs (`redis://username:password@localhost:6379`). For example, `localhost:7000, localhost:7001, localhost:7002`.
If you use full Redis URLs, then you can specify the scheme, username, and password only once. For example, `redis://username:password@localhost:0000,localhost:1111,localhost:2222`. You cannot specify a different username and password for each URL.

If you specify `cluster`, the value for `url` is ignored.

You can enable TLS for cluster mode using the `rediss` scheme in Grafana Enterprise v8.5 and later versions.

### prefix

A string that prefixes all Redis keys. This value must be set if you use a shared database in Redis. If `prefix` is empty, then no prefix is used.

The default is `"grafana"`.

## [caching.memcached]

### servers

A space-separated list of memcached servers. Example: `memcached-server-1:11211 memcached-server-2:11212 memcached-server-3:11211`. Or, if there's only one server: `memcached-server:11211`.

The default is `"localhost:11211"`.

The following memcached configuration requires the `tlsMemcached` feature toggle.

### tls_enabled

Enables TLS authentication for memcached. Defaults to `false`.

### tls_cert_path

Path to the client certificate, which will be used for authenticating with the server.
Also requires the key path to be configured.

### tls_key_path

Path to the key for the client certificate. Also requires the client certificate to be configured.

### tls_ca_path

Path to the CA certificates to validate the server certificate against. If not set, the host's root CA certificates are used.

### tls_server_name

Override the expected name on the server certificate.

### connection_timeout

Timeout for the memcached client to connect to memcached. Defaults to `0`, which uses the memcached client's default timeout per connection scheme.

## [recorded_queries]

### enabled

Whether the recorded queries feature is enabled.

### min_interval

Sets the minimum interval to enforce between query evaluations. The default value is `10s`. Query evaluation intervals will be adjusted if they are less than this value. Higher values can help with resource management.

The interval string is a possibly signed sequence of decimal numbers followed by a unit suffix (ms, s, m, h, d), for example `30s` or `1m`.

### max_queries

The maximum number of recorded queries that can exist.

### default_remote_write_datasource_uid

The UID of the data source where the query data will be written.

If all `default_remote_write_*` properties are set, this information will be populated at startup. If a remote write target has already been configured, nothing will happen.

### default_remote_write_path

The API path where metrics will be written.

If all `default_remote_write_*` properties are set, this information will be populated at startup. If a remote write target has already been configured, nothing will happen.

### default_remote_write_datasource_org_id

The org ID of the data source where the query data will be written.

If all `default_remote_write_*` properties are set, this information will be populated at startup.
If a remote write target has already been configured, nothing will happen.
memory  this setting defines how often a background process cleans up stale data from the in memory cache  More frequent  garbage collection  can keep memory usage from climbing but will increase CPU usage   The default is  1m        max size mb  The maximum size of the in memory cache in megabytes  Once this size is reached  new cache items are rejected  For more flexible control over cache eviction policies and size  use the Redis or Memcached backend   To disable the maximum  set this value to  0    The default is  25     Disabling the maximum is not recommended in production environments        caching redis       url  The full Redis URL of your Redis server  For example   redis   username password localhost 6379   To enable TLS  use the  rediss  scheme   The default is   redis   localhost 6379         cluster  A comma separated list of Redis cluster members  either in  host port  format or using the full Redis URLs   redis   username password localhost 6379    For example   localhost 7000  localhost  7001  localhost 7002   If you use the full Redis URLs  then you can specify the scheme  username  and password only once  For example   redis   username password localhost 0000 localhost 1111 localhost 2222   You cannot specify a different username and password for each URL    If you have specify  cluster   the value for  url  is ignored     You can enable TLS for cluster mode using the  rediss  scheme in Grafana Enterprise v8 5 and later versions        prefix  A string that prefixes all Redis keys  This value must be set if using a shared database in Redis  If  prefix  is empty  then one will not be used   The default is   grafana         caching memcached       servers  A space separated list of memcached servers  Example   memcached server 1 11211 memcached server 2 11212 memcached server 3 11211   Or if there s only one server   memcached server 11211    The default is   localhost 11211      The following memcached configuration requires the  tlsMemcached  
feature toggle        tls enabled  Enables TLS authentication for memcached  Defaults to  false        tls cert path  Path to the client certificate  which will be used for authenticating with the server  Also requires the key path to be configured       tls key path  Path to the key for the client certificate  Also requires the client certificate to be configured       tls ca path  Path to the CA certificates to validate the server certificate against  If not set  the host s root CA certificates are used       tls server name  Override the expected name on the server certificate       connection timeout  Timeout for the memcached client to connect to memcached  Defaults to  0   which uses the memcached client default timeout per connection scheme       recorded queries       enabled  Whether the recorded queries feature is enabled      min interval  Sets the minimum interval to enforce between query evaluations  The default value is  10s   Query evaluation will be adjusted if they are less than this value  Higher values can help with resource management   The interval string is a possibly signed sequence of decimal numbers  followed by a unit suffix  ms  s  m  h  d   e g  30s or 1m       max queries  The maximum number of recorded queries that can exist       default remote write datasource uid  The UID of the datasource where the query data will be written   If all  default remote write    properties are set  this information will be populated at startup  If a remote write target has already been configured  nothing will happen       default remote write path  The api path where metrics will be written  If all  default remote write    properties are set  this information will be populated at startup  If a remote write target has already been configured  nothing will happen       default remote write datasource org id  The org id of the datasource where the query data will be written   If all  default remote write    properties are set  this information will be 
populated at startup  If a remote write target has already been configured  nothing will happen "}
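The caching options described in the record above can be combined into a single `grafana.ini` section. A minimal sketch, assuming a local Redis instance; the values shown are illustrative assumptions, not recommendations:

```ini
# Sketch only: option names follow the descriptions above;
# values are illustrative assumptions, not production guidance.
[caching]
backend = redis
enabled = true
ttl = 1m
max_value_mb = 1
connection_timeout = 5s

[caching.encryption]
enabled = true
# Persist this key so the encrypted cache survives restarts
encryption_key = change-me

[caching.redis]
url = redis://username:password@localhost:6379
prefix = grafana
```

Note that `enabled = true` only makes query caching available; caching must still be turned on per data source from its configuration page, as described above.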
{"questions":"grafana setup weight 300 aliases products enterprise white labeling enable custom branding title Configure custom branding Change the look of Grafana to match your corporate brand labels enterprise","answers":"---\naliases:\n  - ..\/..\/enterprise\/white-labeling\/\n  - ..\/enable-custom-branding\/\ndescription: Change the look of Grafana to match your corporate brand.\nlabels:\n  products:\n    - enterprise\ntitle: Configure custom branding\nweight: 300\n---\n\n# Configure custom branding\n\nCustom branding enables you to replace the Grafana Labs brand and logo with your corporate brand and logo.\n\n\nAvailable in [Grafana Enterprise]() and [Grafana Cloud](\/docs\/grafana-cloud). For Cloud Advanced and Enterprise customers, please provide custom elements and logos to our Support team. We will help you host your images and update your custom branding.\n\nThis feature is not available for Grafana Free and Pro tiers.\nFor more information on feature availability across plans, refer to our [feature comparison page](\/docs\/grafana-cloud\/cost-management-and-billing\/understand-grafana-cloud-features\/)\n\n\n\nThe `grafana.ini` file includes Grafana Enterprise custom branding. As with all configuration options, you can use environment variables to set custom branding.\n\nWith custom branding, you have the ability to modify the following elements:\n\n- Application title\n- Login background\n- Login logo\n- Side menu top logo\n- Footer and help menu links\n- Fav icon (shown in browser tab)\n- Login title (will not appear if a login logo is set)\n- Login subtitle (will not appear if a login logo is set)\n- Login box background\n- Loading logo\n\n> You will have to host your logo and other images used by the custom branding feature separately. Make sure Grafana can access the URL where the assets are stored.\n\nThe configuration file in Grafana Enterprise contains the following options. 
For more information about configuring Grafana, refer to [Configure Grafana]().\n\n```ini\n# Enterprise only\n[white_labeling]\n# Set to your company name to override application title\n;app_title =\n\n# Set to main title on the login page (Will not appear if a login logo is set)\n;login_title =\n\n# Set to login subtitle (Will not appear if a login logo is set)\n;login_subtitle =\n\n# Set to complete URL to override login logo\n;login_logo =\n\n# Set to complete CSS background expression to override login background\n# example: login_background = url(http:\/\/www.bhmpics.com\/wallpapers\/starfield-1920x1080.jpg)\n;login_background =\n\n# Set to complete CSS background expression to override login box background\n;login_box_background =\n\n# Set to complete URL to override menu logo\n;menu_logo =\n\n# Set to complete URL to override fav icon (icon shown in browser tab)\n;fav_icon =\n\n# Set to complete URL to override apple\/ios icon\n;apple_touch_icon =\n\n# Set to complete URL to override loading logo\n;loading_logo =\n\n# Set to `true` to remove the Grafana edition from appearing in the footer\n;hide_edition =\n```\n\nYou have the option of adding custom links in place of the default footer links (Documentation, Support, Community). 
Below is an example of how to replace the default footer and help links with custom links.\n\n```ini\nfooter_links = support guides extracustom\nfooter_links_support_text = Support\nfooter_links_support_url = http:\/\/your.support.site\nfooter_links_guides_text = Guides\nfooter_links_guides_url = http:\/\/your.guides.site\nfooter_links_extracustom_text = Custom text\nfooter_links_extracustom_url = http:\/\/your.custom.site\n```\n\nThe following example shows configuring custom branding using environment variables instead of the `custom.ini` or `grafana.ini` files.\n\n```\nGF_WHITE_LABELING_FOOTER_LINKS=support guides extracustom\nGF_WHITE_LABELING_FOOTER_LINKS_SUPPORT_TEXT=Support\nGF_WHITE_LABELING_FOOTER_LINKS_SUPPORT_URL=http:\/\/your.support.site\nGF_WHITE_LABELING_FOOTER_LINKS_GUIDES_TEXT=Guides\nGF_WHITE_LABELING_FOOTER_LINKS_GUIDES_URL=http:\/\/your.guides.site\nGF_WHITE_LABELING_FOOTER_LINKS_EXTRACUSTOM_TEXT=Custom Text\nGF_WHITE_LABELING_FOOTER_LINKS_EXTRACUSTOM_URL=http:\/\/your.custom.site\n```\n\nThe following two links are always present in the footer:\n\n- Grafana edition\n- Grafana version with build number\n\nIf you specify `footer_links` or `GF_WHITE_LABELING_FOOTER_LINKS`, then all other default links are removed from the footer, and only what is specified is included.\n\n## Custom branding for shared dashboards\n\nIn addition to the customizations described above, you can customize the footer of your shared dashboards.\nTo customize the footer of a shared dashboard, add the following section to the `grafana.ini` file.\n\n```ini\n[white_labeling.public_dashboards]\n\n# Hides the footer for the shared dashboards if set to `true`.\n# example: footer_hide = \"true\"\n;footer_hide =\n\n# Set to text shown in the footer\n;footer_text =\n\n# Set to complete url to override shared dashboard footer logo. 
Default is `grafana-logo` and will display the Grafana logo.\n# An empty value will hide the footer logo.\n;footer_logo =\n\n# Set to link for the footer\n;footer_link =\n\n# Set to `true` to hide the Grafana logo next to the title\n;header_logo_hide =\n```\n\nIf you set `footer_hide` to `true`, all the other values are ignored because the footer will not be shown.","site":"grafana setup"}
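The shared-dashboard footer options can also be supplied through environment variables, using the same `GF_<SECTION>_<KEY>` convention shown in the record above for footer links (dots in section names become underscores). A sketch with placeholder values:

```
GF_WHITE_LABELING_PUBLIC_DASHBOARDS_FOOTER_HIDE=false
GF_WHITE_LABELING_PUBLIC_DASHBOARDS_FOOTER_TEXT=Example Corp Dashboards
GF_WHITE_LABELING_PUBLIC_DASHBOARDS_FOOTER_LINK=https://example.com
```

As with the other branding variables, these take effect the next time Grafana starts.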
{"questions":"grafana setup docs grafana latest setup grafana configure grafana feature toggles weight 150 Learn about feature toggles which you can enable or disable DO NOT EDIT THIS PAGE it is machine generated by running the test in aliases https github com grafana grafana blob main pkg services featuremgmt togglesgentest go L27 title Configure feature toggles","answers":"---\naliases:\n  - \/docs\/grafana\/latest\/setup-grafana\/configure-grafana\/feature-toggles\/\ndescription: Learn about feature toggles, which you can enable or disable.\ntitle: Configure feature toggles\nweight: 150\n---\n\n<!-- DO NOT EDIT THIS PAGE, it is machine generated by running the test in -->\n<!-- https:\/\/github.com\/grafana\/grafana\/blob\/main\/pkg\/services\/featuremgmt\/toggles_gen_test.go#L27 -->\n\n# Configure feature toggles\n\nYou use feature toggles, also known as feature flags, to enable or disable features in Grafana. You can turn on feature toggles to try out new functionality in development or test environments.\n\nThis page contains a list of available feature toggles. To learn how to turn on feature toggles, refer to our [Configure Grafana documentation](). Feature toggles are also available to Grafana Cloud Advanced customers. If you use Grafana Cloud Advanced, you can open a support ticket and specify the feature toggles and stack for which you want them enabled.\n\nFor more information about feature release stages, refer to [Release life cycle for Grafana Labs](https:\/\/grafana.com\/docs\/release-life-cycle\/) and [Manage feature toggles](https:\/\/grafana.com\/docs\/grafana\/<GRAFANA_VERSION>\/administration\/feature-toggles\/#manage-feature-toggles).\n\n## General availability feature toggles\n\nMost [generally available](https:\/\/grafana.com\/docs\/release-life-cycle\/#general-availability) features are enabled by default. 
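Concretely, toggles are set in the `[feature_toggles]` section of `grafana.ini`. A sketch, using toggle names drawn from the tables that follow (the chosen toggles are arbitrary examples, not recommendations):

```ini
[feature_toggles]
# Comma-separated list of toggles to enable
enable = panelTitleSearch,grpcServer

# A toggle can also be set individually; `false` turns off
# a toggle that is enabled by default
correlations = false
```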
You can disable these features by setting the feature flag to \"false\" in the configuration.\n\n| Feature toggle name                    | Description                                                                                                                                                        | Enabled by default |\n| -------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------ |\n| `disableEnvelopeEncryption`            | Disable envelope encryption (emergency only)                                                                                                                       |                    |\n| `publicDashboardsScene`                | Enables public dashboard rendering using scenes                                                                                                                    | Yes                |\n| `featureHighlights`                    | Highlight Grafana Enterprise features                                                                                                                              |                    |\n| `correlations`                         | Correlations page                                                                                                                                                  | Yes                |\n| `cloudWatchCrossAccountQuerying`       | Enables cross-account querying in CloudWatch datasources                                                                                                           | Yes                |\n| `accessControlOnCall`                  | Access control primitives for OnCall                                                                                                                               | Yes                |\n| `nestedFolders`                        | Enable folder 
nesting                                                                                                                                              | Yes                |\n| `logsContextDatasourceUi`              | Allow datasource to provide custom UI for context view                                                                                                             | Yes                |\n| `lokiQuerySplitting`                   | Split large interval queries into subqueries with smaller time intervals                                                                                           | Yes                |\n| `prometheusMetricEncyclopedia`         | Adds the metrics explorer component to the Prometheus query builder as an option in metric select                                                                  | Yes                |\n| `influxdbBackendMigration`             | Query InfluxDB InfluxQL without the proxy                                                                                                                          | Yes                |\n| `dataplaneFrontendFallback`            | Support dataplane contract field name change for transformations and field name matchers where the name is different                                               | Yes                |\n| `unifiedRequestLog`                    | Writes error logs to the request logger                                                                                                                            | Yes                |\n| `recordedQueriesMulti`                 | Enables writing multiple items from a single query within Recorded Queries                                                                                         | Yes                |\n| `logsExploreTableVisualisation`        | A table visualisation for logs in Explore                                                                                                                          | Yes      
          |\n| `transformationsRedesign`              | Enables the transformations redesign                                                                                                                               | Yes                |\n| `traceQLStreaming`                     | Enables response streaming of TraceQL queries of the Tempo data source                                                                                             |                    |\n| `awsAsyncQueryCaching`                 | Enable caching for async queries for Redshift and Athena. Requires that the datasource has caching and async query support enabled                                 | Yes                |\n| `prometheusConfigOverhaulAuth`         | Update the Prometheus configuration page with the new auth component                                                                                               | Yes                |\n| `alertingNoDataErrorExecution`         | Changes how Alerting state manager handles execution of NoData\/Error                                                                                               | Yes                |\n| `angularDeprecationUI`                 | Display Angular warnings in dashboards and panels                                                                                                                  | Yes                |\n| `dashgpt`                              | Enable AI powered features in dashboards                                                                                                                           | Yes                |\n| `alertingInsights`                     | Show the new alerting insights landing page                                                                                                                        | Yes                |\n| `panelMonitoring`                      | Enables panel monitoring through logs and measurements                                                 
                                                            | Yes                |\n| `formatString`                         | Enable format string transformer                                                                                                                                   | Yes                |\n| `transformationsVariableSupport`       | Allows using variables in transformations                                                                                                                          | Yes                |\n| `kubernetesPlaylists`                  | Use the kubernetes API in the frontend for playlists, and route \/api\/playlist requests to k8s                                                                      | Yes                |\n| `recoveryThreshold`                    | Enables feature recovery threshold (aka hysteresis) for threshold server-side expression                                                                           | Yes                |\n| `lokiStructuredMetadata`               | Enables the loki data source to request structured metadata from the Loki server                                                                                   | Yes                |\n| `managedPluginsInstall`                | Install managed plugins directly from plugins catalog                                                                                                              | Yes                |\n| `addFieldFromCalculationStatFunctions` | Add cumulative and window functions to the add field from calculation transformation                                                                               | Yes                |\n| `annotationPermissionUpdate`           | Change the way annotation permissions work by scoping them to folders and dashboards.                                                                              
| Yes                |\n| `dashboardSceneForViewers`             | Enables dashboard rendering using Scenes for viewer roles                                                                                                          | Yes                |\n| `dashboardSceneSolo`                   | Enables rendering dashboards using scenes for solo panels                                                                                                          | Yes                |\n| `dashboardScene`                       | Enables dashboard rendering using scenes for all roles                                                                                                             | Yes                |\n| `ssoSettingsApi`                       | Enables the SSO settings API and the OAuth configuration UIs in Grafana                                                                                            | Yes                |\n| `logsInfiniteScrolling`                | Enables infinite scrolling for the Logs panel in Explore and Dashboards                                                                                            | Yes                |\n| `exploreMetrics`                       | Enables the new Explore Metrics core app                                                                                                                           | Yes                |\n| `alertingSimplifiedRouting`            | Enables users to easily configure alert notifications by specifying a contact point directly when editing or creating an alert rule                                | Yes                |\n| `logRowsPopoverMenu`                   | Enable filtering menu displayed when text of a log line is selected                                                                                                | Yes                |\n| `lokiQueryHints`                       | Enables query hints for Loki                                                                 
                                                                      | Yes                |\n| `alertingQueryOptimization`            | Optimizes eligible queries in order to reduce load on datasources                                                                                                  |                    |\n| `promQLScope`                          | In-development feature that will allow injection of labels into prometheus queries.                                                                                | Yes                |\n| `groupToNestedTableTransformation`     | Enables the group to nested table transformation                                                                                                                   | Yes                |\n| `tlsMemcached`                         | Use TLS-enabled memcached in the enterprise caching feature                                                                                                        | Yes                |\n| `cloudWatchNewLabelParsing`            | Updates CloudWatch label parsing to be more accurate                                                                                                               | Yes                |\n| `accessActionSets`                     | Introduces action sets for resource permissions. Also ensures that all folder editors and admins can create subfolders without needing any additional permissions. 
| Yes                |\n| `newDashboardSharingComponent`         | Enables the new sharing drawer design                                                                                                                              |                    |\n| `notificationBanner`                   | Enables the notification banner UI and API                                                                                                                         | Yes                |\n| `pluginProxyPreserveTrailingSlash`     | Preserve plugin proxy trailing slash.                                                                                                                              |                    |\n| `pinNavItems`                          | Enables pinning of nav items                                                                                                                                       | Yes                |\n| `openSearchBackendFlowEnabled`         | Enables the backend query flow for Open Search datasource plugin                                                                                                   | Yes                |\n| `cloudWatchRoundUpEndTime`             | Round up end time for metric queries to the next minute to avoid missing data                                                                                      | Yes                |\n| `cloudwatchMetricInsightsCrossAccount` | Enables cross account observability for Cloudwatch Metric Insights query builder                                                                                   | Yes                |\n| `singleTopNav`                         | Unifies the top search bar and breadcrumb bar into one                                                                                                             | Yes                |\n| `azureMonitorDisableLogLimit`          | Disables the log limit restriction for Azure Monitor when true. 
The limit is enabled by default.                                                                   |                    |\n| `preinstallAutoUpdate`                 | Enables automatic updates for pre-installed plugins                                                                                                                | Yes                |\n| `alertingUIOptimizeReducer`            | Enables removing the reducer from the alerting UI when creating a new alert rule and using instant query                                                           | Yes                |\n| `azureMonitorEnableUserAuth`           | Enables user auth for Azure Monitor datasource only                                                                                                                | Yes                |\n\n## Public preview feature toggles\n\n[Public preview](https:\/\/grafana.com\/docs\/release-life-cycle\/#public-preview) features are supported by our Support teams, but might be limited to enablement, configuration, and some troubleshooting.\n\n| Feature toggle name               | Description                                                                                                                                                                                  |\n| --------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `panelTitleSearch`                | Search for dashboards using panel title                                                                                                                                                      |\n| `autoMigrateOldPanels`            | Migrate old angular panels to supported versions (graph, table-old, worldmap, etc)                                                                                                           |\n| 
`autoMigrateGraphPanel`           | Migrate old graph panel to supported time series panel - broken out from autoMigrateOldPanels to enable granular tracking                                                                    |\n| `autoMigrateTablePanel`           | Migrate old table panel to supported table panel - broken out from autoMigrateOldPanels to enable granular tracking                                                                          |\n| `autoMigratePiechartPanel`        | Migrate old piechart panel to supported piechart panel - broken out from autoMigrateOldPanels to enable granular tracking                                                                    |\n| `autoMigrateWorldmapPanel`        | Migrate old worldmap panel to supported geomap panel - broken out from autoMigrateOldPanels to enable granular tracking                                                                      |\n| `autoMigrateStatPanel`            | Migrate old stat panel to supported stat panel - broken out from autoMigrateOldPanels to enable granular tracking                                                                            |\n| `disableAngular`                  | Dynamic flag to disable angular at runtime. The preferred method is to set `angular_support_enabled` to `false` in the [security] settings, which allows you to change the state at startup. 
|\n| `grpcServer`                      | Run the GRPC server                                                                                                                                                                          |\n| `alertingNoNormalState`           | Stop maintaining state of alerts that are not firing                                                                                                                                         |\n| `renderAuthJWT`                   | Uses JWT-based auth for rendering instead of relying on remote cache                                                                                                                         |\n| `refactorVariablesTimeRange`      | Refactor time range variables flow to reduce number of API calls made when query variables are chained                                                                                       |\n| `faroDatasourceSelector`          | Enable the data source selector within the Frontend Apps section of the Frontend Observability                                                                                               |\n| `enableDatagridEditing`           | Enables the edit functionality in the datagrid panel                                                                                                                                         |\n| `sqlDatasourceDatabaseSelection`  | Enables previous SQL data source dataset dropdown behavior                                                                                                                                   |\n| `reportingRetries`                | Enables rendering retries for the reporting feature                                                                                                                                          |\n| `externalServiceAccounts`         | Automatic service account and token setup for plugins                                                                  
                                                                      |\n| `cloudWatchBatchQueries`          | Runs CloudWatch metrics queries as separate batches                                                                                                                                          |\n| `teamHttpHeaders`                 | Enables LBAC for data sources, applying LogQL filtering of logs to client requests for users in teams                                                                                        |\n| `pdfTables`                       | Enables generating table data as PDF in reporting                                                                                                                                            |\n| `canvasPanelPanZoom`              | Allow pan and zoom in canvas panel                                                                                                                                                           |\n| `regressionTransformation`        | Enables regression analysis transformation                                                                                                                                                   |\n| `onPremToCloudMigrations`         | Enable the Grafana Migration Assistant, which helps you easily migrate on-prem dashboards, folders, and data source configurations to your Grafana Cloud stack.                              
|\n| `newPDFRendering`                 | New implementation for the dashboard-to-PDF rendering                                                                                                                                        |\n| `ssoSettingsSAML`                 | Use the new SSO Settings API to configure the SAML connector                                                                                                                                 |\n| `azureMonitorPrometheusExemplars` | Allows configuration of Azure Monitor as a data source that can provide Prometheus exemplars                                                                                                 |\n| `ssoSettingsLDAP`                 | Use the new SSO Settings API to configure LDAP                                                                                                                                               |\n| `useSessionStorageForRedirection` | Use session storage for handling the redirection after login                                                                                                                                 |\n| `reportingUseRawTimeRange`        | Uses the original report or dashboard time range instead of making an absolute transformation                                                                                                |\n\n## Experimental feature toggles\n\n[Experimental](https:\/\/grafana.com\/docs\/release-life-cycle\/#experimental) features are early in their development lifecycle and so are not yet supported in Grafana Cloud.\nExperimental features might be changed or removed without prior notice.\n\n| Feature toggle name                           | Description                                                                                                                                                                                                                                                                       
|\n| --------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `live-service-web-worker`                     | This will use a webworker thread to process events rather than the main thread                                                                                                                                                                                                    |\n| `queryOverLive`                               | Use Grafana Live WebSocket to execute backend queries                                                                                                                                                                                                                             |\n| `lokiExperimentalStreaming`                   | Support new streaming approach for Loki (prototype, needs special Loki build)                                                                                                                                                                                                     |\n| `storage`                                     | Configurable storage for dashboards, datasources, and resources                                                                                                                                                                                                                   |\n| `canvasPanelNesting`                          | Allow element nesting                                                                                                                                                                                                                                                             |\n| `vizActions`                     
             | Allow actions in visualizations                                                                                                                                                                                                                                                   |\n| `disableSecretsCompatibility`                 | Disable duplicated secret storage in legacy tables                                                                                                                                                                                                                                |\n| `logRequestsInstrumentedAsUnknown`            | Logs the path for requests that are instrumented as unknown                                                                                                                                                                                                                       |\n| `showDashboardValidationWarnings`             | Show warnings when dashboards do not validate against the schema                                                                                                                                                                                                                  |\n| `mysqlAnsiQuotes`                             | Use double quotes to escape keyword in a MySQL query                                                                                                                                                                                                                              |\n| `mysqlParseTime`                              | Ensure the parseTime flag is set for MySQL driver                                                                                                                                                                                                                                 |\n| `alertingBacktesting`                         | Rule backtesting API 
for alerting                                                                                                                                                                                                                                                 |\n| `editPanelCSVDragAndDrop`                     | Enables drag and drop for CSV and Excel files                                                                                                                                                                                                                                     |\n| `lokiShardSplitting`                          | Use stream shards to split queries into smaller subqueries                                                                                                                                                                                                                        |\n| `lokiQuerySplittingConfig`                    | Give users the option to configure split durations for Loki queries                                                                                                                                                                                                               |\n| `individualCookiePreferences`                 | Support overriding cookie preferences per user                                                                                                                                                                                                                                    |\n| `influxqlStreamingParser`                     | Enable streaming JSON parser for InfluxDB datasource InfluxQL query language                                                                                                                                                                                                      |\n| `lokiLogsDataplane`                           | Changes logs responses from Loki to be compliant with the 
dataplane specification.                                                                                                                                                                                                |\n| `disableSSEDataplane`                         | Disables dataplane specific processing in server side expressions.                                                                                                                                                                                                                |\n| `alertStateHistoryLokiSecondary`              | Enable Grafana to write alert state history to an external Loki instance in addition to Grafana annotations.                                                                                                                                                                      |\n| `alertStateHistoryLokiPrimary`                | Enable a remote Loki instance as the primary source for state history reads.                                                                                                                                                                                                      |\n| `alertStateHistoryLokiOnly`                   | Disable Grafana alerts from emitting annotations when a remote Loki instance is available.                                                                                                                                                                                        
|\n| `extraThemes`                                 | Enables extra themes                                                                                                                                                                                                                                                              |\n| `lokiPredefinedOperations`                    | Adds predefined query operations to Loki query editor                                                                                                                                                                                                                             |\n| `pluginsFrontendSandbox`                      | Enables the plugins frontend sandbox                                                                                                                                                                                                                                              |\n| `frontendSandboxMonitorOnly`                  | Enables monitor only in the plugin frontend sandbox (if enabled)                                                                                                                                                                                                                  |\n| `pluginsDetailsRightPanel`                    | Enables right panel for the plugins details page                                                                                                                                                                                                                                  |\n| `awsDatasourcesTempCredentials`               | Support temporary security credentials in AWS plugins for Grafana Cloud customers                                                                                                                                                                                                 |\n| `mlExpressions`                  
             | Enable support for Machine Learning in server-side expressions                                                                                                                                                                                                                    |\n| `metricsSummary`                              | Enables metrics summary queries in the Tempo data source                                                                                                                                                                                                                          |\n| `datasourceAPIServers`                        | Expose some datasources as apiservers.                                                                                                                                                                                                                                            |\n| `provisioning`                                | Next generation provisioning... 
and git                                                                                                                                                                                                                                           |\n| `permissionsFilterRemoveSubquery`             | Alternative permission filter implementation that does not use subqueries for fetching the dashboard folder                                                                                                                                                                       |\n| `aiGeneratedDashboardChanges`                 | Enable AI-powered features for dashboards to auto-summarize changes when saving                                                                                                                                                                                                   |\n| `sseGroupByDatasource`                        | Send query to the same datasource in a single request when using server side expressions. The `cloudWatchBatchQueries` feature toggle should be enabled if this is used with CloudWatch.                                                                                          
|\n| `libraryPanelRBAC`                            | Enables RBAC support for library panels                                                                                                                                                                                                                                           |\n| `wargamesTesting`                             | Placeholder feature flag for internal testing                                                                                                                                                                                                                                     |\n| `externalCorePlugins`                         | Allow core plugins to be loaded as external                                                                                                                                                                                                                                       |\n| `pluginsAPIMetrics`                           | Sends metrics on the usage of public Grafana packages by plugins                                                                                                                                                                                                                  |\n| `enableNativeHTTPHistogram`                   | Enables native HTTP Histograms                                                                                                                                                                                                                                                    |\n| `disableClassicHTTPHistogram`                 | Disables classic HTTP Histogram (use with enableNativeHTTPHistogram)                                                                                                                                                                                                              |\n| `kubernetesSnapshots`            
             | Routes snapshot requests from \/api to the \/apis endpoint                                                                                                                                                                                                                          |\n| `kubernetesDashboards`                        | Use the kubernetes API in the frontend for dashboards                                                                                                                                                                                                                             |\n| `kubernetesDashboardsAPI`                     | Use the kubernetes API in the backend for dashboards                                                                                                                                                                                                                              |\n| `kubernetesFolders`                           | Use the kubernetes API in the frontend for folders, and route \/api\/folders requests to k8s                                                                                                                                                                                        |\n| `grafanaAPIServerTestingWithExperimentalAPIs` | Facilitate integration testing of experimental APIs                                                                                                                                                                                                                               |\n| `datasourceQueryTypes`                        | Show query type endpoints in datasource API servers (currently hardcoded for testdata, expressions, and prometheus)                                                                                                                                                               |\n| `queryService`                                | Register 
\/apis\/query.grafana.app\/ -- will eventually replace \/api\/ds\/query                                                                                                                                                                                                        |\n| `queryServiceRewrite`                         | Rewrite requests targeting \/ds\/query to the query service                                                                                                                                                                                                                         |\n| `queryServiceFromUI`                          | Routes requests to the new query service                                                                                                                                                                                                                                          |\n| `cachingOptimizeSerializationMemoryUsage`     | If enabled, the caching backend gradually serializes query responses for the cache, comparing against the configured `[caching]max_value_mb` value as it goes. This can help prevent Grafana from running out of memory while attempting to cache very large query responses. 
|\n| `prometheusPromQAIL`                          | Uses AI\/ML to assist users in creating a Prometheus query                                                                                                                                                                                                                         |\n| `prometheusCodeModeMetricNamesSearch`         | Enables search for metric names in Code Mode, to improve performance when working with an enormous number of metric names                                                                                                                                                         |\n| `alertmanagerRemoteSecondary`                 | Enable Grafana to sync configuration and state with a remote Alertmanager.                                                                                                                                                                                                        |\n| `alertmanagerRemotePrimary`                   | Enable Grafana to have a remote Alertmanager instance as the primary Alertmanager.                                                                                                                                                                                                |\n| `alertmanagerRemoteOnly`                      | Disable the internal Alertmanager and only use the external one defined.                                                                                                                                                                                                          
|\n| `extractFieldsNameDeduplication`              | Make sure extracted field names are unique in the dataframe                                                                                                                                                                                                                       |\n| `dashboardNewLayouts`                         | Enables experimental new dashboard layouts                                                                                                                                                                                                                                        |\n| `pluginsSkipHostEnvVars`                      | Disables passing host environment variable to plugin processes                                                                                                                                                                                                                    |\n| `tableSharedCrosshair`                        | Enables shared crosshair in table panel                                                                                                                                                                                                                                           |\n| `kubernetesFeatureToggles`                    | Use the kubernetes API for feature toggle management in the frontend                                                                                                                                                                                                              |\n| `newFolderPicker`                             | Enables the nested folder picker without having nested folders enabled                                                                                                                                                                                                            |\n| `onPremToCloudMigrationsAlerts`  
             | Enables the migration of alerts and their child resources to your Grafana Cloud stack. Requires `onPremToCloudMigrations` to also be enabled.                                                                                                                                     |\n| `onPremToCloudMigrationsAuthApiMig`           | Enables the use of the auth API instead of gcom for internal token services. Requires `onPremToCloudMigrations` to also be enabled.                                                                                                                                               |\n| `scopeApi`                                    | In-development feature flag for the scope API using the app platform.                                                                                                                                                                                                             |\n| `sqlExpressions`                              | Enables using SQL and DuckDB functions as Expressions.                                                                                                                                                                                                                            
|\n| `nodeGraphDotLayout`                          | Changes the layout algorithm for the node graph                                                                                                                                                                                                                                   |\n| `kubernetesAggregator`                        | Enable Grafana's embedded kube-aggregator                                                                                                                                                                                                                                         |\n| `expressionParser`                            | Enable new expression parser                                                                                                                                                                                                                                                      |\n| `disableNumericMetricsSortingInExpressions`   | In server-side expressions, disable the sorting of numeric-kind metrics by their metric name or labels.                                                                                                                                                                           
|\n| `queryLibrary`                                | Enables Query Library feature in Explore                                                                                                                                                                                                                                          |\n| `logsExploreTableDefaultVisualization`        | Sets the logs table as the default visualization in logs Explore                                                                                                                                                                                                                  |\n| `alertingListViewV2`                          | Enables the new alert list view design                                                                                                                                                                                                                                            |\n| `dashboardRestore`                            | Enables deleted dashboard restore feature                                                                                                                                                                                                                                         |\n| `alertingCentralAlertHistory`                 | Enables the new central alert history.                                                                                                                                                                                                                                            
|\n| `sqlQuerybuilderFunctionParameters`           | Enables SQL query builder function parameters                                                                                                                                                                                                                                     |\n| `failWrongDSUID`                              | Throws an error if a datasource has an invalid UID                                                                                                                                                                                                                                |\n| `alertingApiServer`                           | Register Alerting APIs with the K8s API server                                                                                                                                                                                                                                    |\n| `dataplaneAggregator`                         | Enable Grafana dataplane aggregator                                                                                                                                                                                                                                               |\n| `newFiltersUI`                                | Enables new combobox style UI for the Ad hoc filters variable in scenes architecture                                                                                                                                                                                              |\n| `lokiSendDashboardPanelNames`                 | Send dashboard and panel names to Loki when querying                                                                                                                                                                                                                              |\n| `alertingPrometheusRulesPrimary` 
             | Uses Prometheus rules as the primary source of truth for ruler-enabled data sources                                                                                                                                                                                               |\n| `exploreLogsShardSplitting`                   | Used in Explore Logs to split queries into multiple queries based on the number of shards                                                                                                                                                                                         |\n| `exploreLogsAggregatedMetrics`                | Used in Explore Logs to query by aggregated metrics                                                                                                                                                                                                                               |\n| `exploreLogsLimitedTimeRange`                 | Used in Explore Logs to limit the time range                                                                                                                                                                                                                                      |\n| `homeSetupGuide`                              | Used in Home for users who want to return to the onboarding flow or quickly find popular config pages                                                                                                                                                                             |\n| `appSidecar`                                  | Enable the app sidecar feature that allows rendering 2 apps at the same time                                                                                                                                                                                                      |\n| `alertingQueryAndExpressionsStepMode`         | Enables step mode for 
alerting queries and expressions                                                                                                                                                                                                                            |\n| `rolePickerDrawer`                            | Enables the new role picker drawer design                                                                                                                                                                                                                                         |\n| `pluginsSriChecks`                            | Enables SRI checks for plugin assets                                                                                                                                                                                                                                              |\n| `unifiedStorageBigObjectsSupport`             | Enables to save big objects in blob storage                                                                                                                                                                                                                                       |\n| `timeRangeProvider`                           | Enables time pickers sync                                                                                                                                                                                                                                                         |\n| `prometheusUsesCombobox`                      | Use new combobox component for Prometheus query editor                                                                                                                                                                                                                            |\n| `userStorageAPI`                              | Enables the user storage API                                
                                                                                                                                                                                                                      |\n| `dashboardSchemaV2`                           | Enables the new dashboard schema version 2, implementing changes necessary for dynamic dashboards and dashboards as code.                                                                                                                                                         |\n| `playlistsWatcher`                            | Enables experimental watcher for playlists                                                                                                                                                                                                                                        |\n| `enableExtensionsAdminPage`                   | Enables the extension admin page regardless of development mode                                                                                                                                                                                                                   |\n| `zipkinBackendMigration`                      | Enables querying Zipkin data source without the proxy                                                                                                                                                                                                                             |\n| `enableSCIM`                                  | Enables SCIM support for user and group management                                                                                                                                                                                                                                |\n| `crashDetection`                              | Enables browser crash detection reporting to Faro.                                                
                                                                                                                                                                                |\n| `jaegerBackendMigration`                      | Enables querying the Jaeger data source without the proxy                                                                                                                                                                                                                         |\n| `alertingNotificationsStepMode`               | Enables simplified step mode in the notifications section                                                                                                                                                                                                                         |\n\n## Development feature toggles\n\nThe following toggles require explicitly setting Grafana's [app mode]() to 'development' before you can enable this feature toggle. These features tend to be experimental.\n\n| Feature toggle name                    | Description                                                                   |\n| -------------------------------------- | ----------------------------------------------------------------------------- |\n| `grafanaAPIServerWithExperimentalAPIs` | Register experimental APIs with the k8s API server, including all datasources |\n| `grafanaAPIServerEnsureKubectlAccess`  | Start an additional https handler and write kubectl options                   |\n| `panelTitleSearchInV1`                 | Enable searching for dashboards using panel title in search v1                |","site":"grafana setup","answers_cleaned":"    aliases       docs grafana latest setup grafana configure grafana feature toggles  description  Learn about feature toggles  which you can enable or disable  title  Configure feature toggles weight  150           DO NOT EDIT THIS PAGE  it is machine generated by running the test in        
https://github.com/grafana/grafana/blob/main/pkg/services/featuremgmt/toggles_gen.test.go#L27

# Configure feature toggles

You use feature toggles, also known as feature flags, to enable or disable features in Grafana. You can turn on feature toggles to try out new functionality in development or test environments.

This page contains a list of available feature toggles. To learn how to turn on feature toggles, refer to our Configure Grafana documentation. Feature toggles are also available to Grafana Cloud Advanced customers. If you use Grafana Cloud Advanced, you can open a support ticket and specify the feature toggles and stack for which you want them enabled.

For more information about feature release stages, refer to [Release life cycle for Grafana Labs](https://grafana.com/docs/release-life-cycle/) and [Manage feature toggles](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/administration/feature-toggles/#manage-feature-toggles).

## General availability feature toggles

Most [generally available](https://grafana.com/docs/release-life-cycle/#general-availability) features are enabled by default. You can disable these features by setting the feature flag to `false` in the configuration.

| Feature toggle name                    | Description                                                                                                                                                       | Enabled by default |
| -------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
| `disableEnvelopeEncryption`            | Disable envelope encryption (emergency only)                                                                                                                       |                    |
| `publicDashboardsScene`                | Enables public dashboard rendering using scenes                                                                                                                    | Yes                |
| `featureHighlights`                    | Highlight Grafana Enterprise features                                                                                                                              |                    |
| `correlations`                         | Correlations page                                                                                                                                                  | Yes                |
| `cloudWatchCrossAccountQuerying`       | Enables cross account querying in CloudWatch datasources                                                                                                           | Yes                |
| `accessControlOnCall`                  | Access control primitives for OnCall                                                                                                                               | Yes                |
| `nestedFolders`                        | Enable folder nesting                                                                                                                                              | Yes                |
| `logsContextDatasourceUi`              | Allow datasource to provide custom UI for context view                                                                                                             | Yes                |
| `lokiQuerySplitting`                   | Split large interval queries into subqueries with smaller time intervals                                                                                           | Yes                |
| `prometheusMetricEncyclopedia`         | Adds the metrics explorer component to the Prometheus query builder as an option in metric select                                                                  | Yes                |
| `influxdbBackendMigration`             | Query InfluxDB InfluxQL without the proxy                                                                                                                          | Yes                |
| `dataplaneFrontendFallback`            | Support dataplane contract field name change for transformations and field name matchers where the name is different                                               | Yes                |
| `unifiedRequestLog`                    | Writes error logs to the request logger                                                                                                                            | Yes                |
| `recordedQueriesMulti`                 | Enables writing multiple items from a single query within Recorded Queries                                                                                         | Yes                |
| `logsExploreTableVisualisation`        | A table visualisation for logs in Explore                                                                                                                          | Yes                |
| `transformationsRedesign`              | Enables the transformations redesign                                                                                                                               | Yes                |
| `traceQLStreaming`                     | Enables response streaming of TraceQL queries of the Tempo data source                                                                                             |                    |
| `awsAsyncQueryCaching`                 | Enable caching for async queries for Redshift and Athena. Requires that the datasource has caching and async query support enabled                                 | Yes                |
| `prometheusConfigOverhaulAuth`         | Update the Prometheus configuration page with the new auth component                                                                                               | Yes                |
| `alertingNoDataErrorExecution`         | Changes how Alerting state manager handles execution of NoData/Error                                                                                               | Yes                |
| `angularDeprecationUI`                 | Display Angular warnings in dashboards and panels                                                                                                                  | Yes                |
| `dashgpt`                              | Enable AI powered features in dashboards                                                                                                                           | Yes                |
| `alertingInsights`                     | Show the new alerting insights landing page                                                                                                                        | Yes                |
| `panelMonitoring`                      | Enables panel monitoring through logs and measurements                                                                                                             | Yes                |
| `formatString`                         | Enable format string transformer                                                                                                                                   | Yes                |
| `transformationsVariableSupport`       | Allows using variables in transformations                                                                                                                          | Yes                |
| `kubernetesPlaylists`                  | Use the kubernetes API in the frontend for playlists, and route /api/playlist requests to k8s                                                                      | Yes                |
| `recoveryThreshold`                    | Enables feature recovery threshold (aka hysteresis) for threshold server side expression                                                                           | Yes                |
| `lokiStructuredMetadata`               | Enables the loki data source to request structured metadata from the Loki server                                                                                   | Yes                |
| `managedPluginsInstall`                | Install managed plugins directly from plugins catalog                                                                                                              | Yes                |
| `addFieldFromCalculationStatFunctions` | Add cumulative and window functions to the add field from calculation transformation                                                                               | Yes                |
| `annotationPermissionUpdate`           | Change the way annotation permissions work by scoping them to folders and dashboards                                                                               | Yes                |
| `dashboardSceneForViewers`             | Enables dashboard rendering using Scenes for viewer roles                                                                                                          | Yes                |
| `dashboardSceneSolo`                   | Enables rendering dashboards using scenes for solo panels                                                                                                          | Yes                |
| `dashboardScene`                       | Enables dashboard rendering using scenes for all roles                                                                                                             | Yes                |
| `ssoSettingsApi`                       | Enables the SSO settings API and the OAuth configuration UIs in Grafana                                                                                            | Yes                |
| `logsInfiniteScrolling`                | Enables infinite scrolling for the Logs panel in Explore and Dashboards                                                                                            | Yes                |
| `exploreMetrics`                       | Enables the new Explore Metrics core app                                                                                                                           | Yes                |
| `alertingSimplifiedRouting`            | Enables users to easily configure alert notifications by specifying a contact point directly when editing or creating an alert rule                                | Yes                |
| `logRowsPopoverMenu`                   | Enable filtering menu displayed when text of a log line is selected                                                                                                | Yes                |
| `lokiQueryHints`                       | Enables query hints for Loki                                                                                                                                       | Yes                |
| `alertingQueryOptimization`            | Optimizes eligible queries in order to reduce load on datasources                                                                                                  |                    |
| `promQLScope`                          | In development feature that will allow injection of labels into prometheus queries                                                                                 | Yes                |
| `groupToNestedTableTransformation`     | Enables the group to nested table transformation                                                                                                                   | Yes                |
| `tlsMemcached`                         | Use TLS enabled memcached in the enterprise caching feature                                                                                                        | Yes                |
| `cloudWatchNewLabelParsing`            | Updates CloudWatch label parsing to be more accurate                                                                                                               | Yes                |
| `accessActionSets`                     | Introduces action sets for resource permissions. Also ensures that all folder editors and admins can create subfolders without needing any additional permissions  | Yes                |
| `newDashboardSharingComponent`         | Enables the new sharing drawer design                                                                                                                              |                    |
| `notificationBanner`                   | Enables the notification banner UI and API                                                                                                                         | Yes                |
| `pluginProxyPreserveTrailingSlash`     | Preserve plugin proxy trailing slash                                                                                                                               |                    |
| `pinNavItems`                          | Enables pinning of nav items                                                                                                                                       | Yes                |
| `openSearchBackendFlowEnabled`         | Enables the backend query flow for Open Search datasource plugin                                                                                                   | Yes                |
| `cloudWatchRoundUpEndTime`             | Round up end time for metric queries to the next minute to avoid missing data                                                                                      | Yes                |
| `cloudwatchMetricInsightsCrossAccount` | Enables cross account observability for Cloudwatch Metric Insights query builder                                                                                   | Yes                |
| `singleTopNav`                         | Unifies the top search bar and breadcrumb bar into one                                                                                                             | Yes                |
| `azureMonitorDisableLogLimit`          | Disables the log limit restriction for Azure Monitor when true. The limit is enabled by default                                                                    |                    |
| `preinstallAutoUpdate`                 | Enables automatic updates for pre-installed plugins                                                                                                                | Yes                |
| `alertingUIOptimizeReducer`            | Enables removing the reducer from the alerting UI when creating a new alert rule and using instant query                                                           | Yes                |
| `azureMonitorEnableUserAuth`           | Enables user auth for Azure Monitor datasource only                                                                                                                | Yes                |

## Public preview feature toggles

[Public preview](https://grafana.com/docs/release-life-cycle/#public-preview) features are supported by our Support teams, but might be limited to enablement, configuration, and some troubleshooting.

| Feature toggle name                | Description                                                                                                                                                                                  |
| ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `panelTitleSearch`                 | Search for dashboards using panel title                                                                                                                                                       |
| `autoMigrateOldPanels`             | Migrate old angular panels to supported versions (graph, table-old, worldmap, etc)                                                                                                            |
| `autoMigrateGraphPanel`            | Migrate old graph panel to supported time series panel - broken out from autoMigrateOldPanels to enable granular tracking                                                                     |
| `autoMigrateTablePanel`            | Migrate old table panel to supported table panel - broken out from autoMigrateOldPanels to enable granular tracking                                                                           |
| `autoMigratePiechartPanel`         | Migrate old piechart panel to supported piechart panel - broken out from autoMigrateOldPanels to enable granular tracking                                                                     |
| `autoMigrateWorldmapPanel`         | Migrate old worldmap panel to supported geomap panel - broken out from autoMigrateOldPanels to enable granular tracking                                                                       |
| `autoMigrateStatPanel`             | Migrate old stat panel to supported stat panel - broken out from autoMigrateOldPanels to enable granular tracking                                                                             |
| `disableAngular`                   | Dynamic flag to disable angular at runtime. The preferred method is to set `angular_support_enabled` to `false` in the `[security]` settings, which allows you to change the state at runtime |
| `grpcServer`                       | Run the GRPC server                                                                                                                                                                           |
| `alertingNoNormalState`            | Stop maintaining state of alerts that are not firing                                                                                                                                          |
| `renderAuthJWT`                    | Uses JWT-based auth for rendering instead of relying on remote cache                                                                                                                          |
| `refactorVariablesTimeRange`       | Refactor time range variables flow to reduce number of API calls made when query variables are chained                                                                                        |
| `faroDatasourceSelector`           | Enable the data source selector within the Frontend Apps section of the Frontend Observability                                                                                                |
| `enableDatagridEditing`            | Enables the edit functionality in the datagrid panel                                                                                                                                          |
| `sqlDatasourceDatabaseSelection`   | Enables previous SQL data source dataset dropdown behavior                                                                                                                                    |
| `reportingRetries`                 | Enables rendering retries for the reporting feature                                                                                                                                           |
| `externalServiceAccounts`          | Automatic service account and token setup for plugins                                                                                                                                         |
| `cloudWatchBatchQueries`           | Runs CloudWatch metrics queries as separate batches                                                                                                                                           |
| `teamHttpHeaders`                  | Enables LBAC for datasources to apply LogQL filtering of logs to the client requests for users in teams                                                                                       |
| `pdfTables`                        | Enables generating table data as PDF in reporting                                                                                                                                             |
| `canvasPanelPanZoom`               | Allow pan and zoom in canvas panel                                                                                                                                                            |
| `regressionTransformation`         | Enables regression analysis transformation                                                                                                                                                    |
| `onPremToCloudMigrations`          | Enable the Grafana Migration Assistant, which helps you easily migrate on-prem dashboards, folders, and data source configurations to your Grafana Cloud stack                                |
| `newPDFRendering`                  | New implementation for the dashboard-to-PDF rendering                                                                                                                                         |
| `ssoSettingsSAML`                  | Use the new SSO Settings API to configure the SAML connector                                                                                                                                  |
| `azureMonitorPrometheusExemplars`  | Allows configuration of Azure Monitor as a data source that can provide Prometheus exemplars                                                                                                  |
| `ssoSettingsLDAP`                  | Use the new SSO Settings API to configure LDAP                                                                                                                                                |
| `useSessionStorageForRedirection`  | Use session storage for handling the redirection after login                                                                                                                                  |
| `reportingUseRawTimeRange`         | Uses the original report or dashboard time range instead of making an absolute transformation                                                                                                 |

## Experimental feature toggles

[Experimental](https://grafana.com/docs/release-life-cycle/#experimental) features are early in their development lifecycle and so are not yet supported in Grafana Cloud. Experimental features might be changed or removed without prior notice.

| Feature toggle name                | Description                                                                     |
| ---------------------------------- | ------------------------------------------------------------------------------- |
| `live-service-web-worker`          | This will use a webworker thread to process events rather than the main thread  |
| `queryOverLive`                    | Use Grafana Live WebSocket to execute backend queries                           |
| `lokiExperimentalStreaming`        | Support new streaming approach for loki (prototype, needs special loki build)   |
| `storage`                          | Configurable storage for dashboards, datasources, and resources                 |
| `canvasPanelNesting`               | Allow elements nesting                                                          |
| `vizActions`                       | Allow actions in visualizations                                                 |
| `disableSecretsCompatibility`      | Disable duplicated secret storage in legacy tables                              |
| `logRequestsInstrumentedAsUnknown` | Logs the path for requests that are instrumented as unknown                     |
| `showDashboardValidationWarnings`  | Show warnings when dashboards do not validate against the schema                |
| `mysqlAnsiQuotes`                  | Use double quotes to escape keyword in a MySQL query                            |
| `mysqlParseTime`                   | Ensure the parseTime flag is set for MySQL driver                               |
| `alertingBacktesting`              | Rule backtesting API for alerting                                               |
| `editPanelCSVDragAndDrop`          | Enables drag and drop for CSV and Excel files                                   |
| `lokiShardSplitting`               | Use stream shards to split queries into smaller subqueries                      |
                                                                          lokiQuerySplittingConfig                       Give users the option to configure split durations for Loki queries                                                                                                                                                                                                                    individualCookiePreferences                    Support overriding cookie preferences per user                                                                                                                                                                                                                                         influxqlStreamingParser                        Enable streaming JSON parser for InfluxDB datasource InfluxQL query language                                                                                                                                                                                                           lokiLogsDataplane                              Changes logs responses from Loki to be compliant with the dataplane specification                                                                                                                                                                                                      disableSSEDataplane                            Disables dataplane specific processing in server side expressions                                                                                                                                                                                                                      alertStateHistoryLokiSecondary                 Enable Grafana to write alert state history to an external Loki instance in addition to Grafana annotations                                                                                                                                              
                              alertStateHistoryLokiPrimary                   Enable a remote Loki instance as the primary source for state history reads                                                                                                                                                                                                            alertStateHistoryLokiOnly                      Disable Grafana alerts from emitting annotations when a remote Loki instance is available                                                                                                                                                                                              extraThemes                                    Enables extra themes                                                                                                                                                                                                                                                                   lokiPredefinedOperations                       Adds predefined query operations to Loki query editor                                                                                                                                                                                                                                  pluginsFrontendSandbox                         Enables the plugins frontend sandbox                                                                                                                                                                                                                                                   frontendSandboxMonitorOnly                     Enables monitor only in the plugin frontend sandbox  if enabled                                                                                                                                                                                                                        
pluginsDetailsRightPanel                       Enables right panel for the plugins details page                                                                                                                                                                                                                                       awsDatasourcesTempCredentials                  Support temporary security credentials in AWS plugins for Grafana Cloud customers                                                                                                                                                                                                      mlExpressions                                  Enable support for Machine Learning in server side expressions                                                                                                                                                                                                                         metricsSummary                                 Enables metrics summary queries in the Tempo data source                                                                                                                                                                                                                               datasourceAPIServers                           Expose some datasources as apiservers                                                                                                                                                                                                                                                  provisioning                                   Next generation provisioning    and git                                                                                                                                                                                                                                                permissionsFilterRemoveSubquery             
   Alternative permission filter implementation that does not use subqueries for fetching the dashboard folder                                                                                                                                                                            aiGeneratedDashboardChanges                    Enable AI powered features for dashboards to auto summary changes when saving                                                                                                                                                                                                          sseGroupByDatasource                           Send query to the same datasource in a single request when using server side expressions  The  cloudWatchBatchQueries  feature toggle should be enabled if this used with CloudWatch                                                                                                   libraryPanelRBAC                               Enables RBAC support for library panels                                                                                                                                                                                                                                                wargamesTesting                                Placeholder feature flag for internal testing                                                                                                                                                                                                                                          externalCorePlugins                            Allow core plugins to be loaded as external                                                                                                                                                                                                                                            pluginsAPIMetrics                              Sends metrics of public grafana packages 
usage by plugins                                                                                                                                                                                                                              enableNativeHTTPHistogram                      Enables native HTTP Histograms                                                                                                                                                                                                                                                         disableClassicHTTPHistogram                    Disables classic HTTP Histogram  use with enableNativeHTTPHistogram                                                                                                                                                                                                                    kubernetesSnapshots                            Routes snapshot requests from  api to the  apis endpoint                                                                                                                                                                                                                               kubernetesDashboards                           Use the kubernetes API in the frontend for dashboards                                                                                                                                                                                                                                  kubernetesDashboardsAPI                        Use the kubernetes API in the backend for dashboards                                                                                                                                                                                                                                   kubernetesFolders                              Use the kubernetes API in the frontend for folders  and route  api folders requests 
to k8s                                                                                                                                                                                             grafanaAPIServerTestingWithExperimentalAPIs    Facilitate integration testing of experimental APIs                                                                                                                                                                                                                                    datasourceQueryTypes                           Show query type endpoints in datasource API servers  currently hardcoded for testdata  expressions  and prometheus                                                                                                                                                                     queryService                                   Register  apis query grafana app     will eventually replace  api ds query                                                                                                                                                                                                             queryServiceRewrite                            Rewrite requests targeting  ds query to the query service                                                                                                                                                                                                                              queryServiceFromUI                             Routes requests to the new query service                                                                                                                                                                                                                                               cachingOptimizeSerializationMemoryUsage        If enabled  the caching backend gradually serializes query responses for the cache  comparing against the configured   caching 
max value mb  value as it goes  This can can help prevent Grafana from running out of memory while attempting to cache very large query responses       prometheusPromQAIL                             Prometheus and AI ML to assist users in creating a query                                                                                                                                                                                                                               prometheusCodeModeMetricNamesSearch            Enables search for metric names in Code Mode  to improve performance when working with an enormous number of metric names                                                                                                                                                              alertmanagerRemoteSecondary                    Enable Grafana to sync configuration and state with a remote Alertmanager                                                                                                                                                                                                              alertmanagerRemotePrimary                      Enable Grafana to have a remote Alertmanager instance as the primary Alertmanager                                                                                                                                                                                                      alertmanagerRemoteOnly                         Disable the internal Alertmanager and only use the external one defined                                                                                                                                                                                                                extractFieldsNameDeduplication                 Make sure extracted field names are unique in the dataframe                                                                                                                
                                                                                                            dashboardNewLayouts                            Enables experimental new dashboard layouts                                                                                                                                                                                                                                             pluginsSkipHostEnvVars                         Disables passing host environment variable to plugin processes                                                                                                                                                                                                                         tableSharedCrosshair                           Enables shared crosshair in table panel                                                                                                                                                                                                                                                kubernetesFeatureToggles                       Use the kubernetes API for feature toggle management in the frontend                                                                                                                                                                                                                   newFolderPicker                                Enables the nested folder picker without having nested folders enabled                                                                                                                                                                                                                 onPremToCloudMigrationsAlerts                  Enables the migration of alerts and its child resources to your Grafana Cloud stack  Requires  onPremToCloudMigrations  to be enabled in conjunction                                                                   
                                                                onPremToCloudMigrationsAuthApiMig              Enables the use of auth api instead of gcom for internal token services  Requires  onPremToCloudMigrations  to be enabled in conjunction                                                                                                                                               scopeApi                                       In development feature flag for the scope api using the app platform                                                                                                                                                                                                                   sqlExpressions                                 Enables using SQL and DuckDB functions as Expressions                                                                                                                                                                                                                                  nodeGraphDotLayout                             Changed the layout algorithm for the node graph                                                                                                                                                                                                                                        kubernetesAggregator                           Enable grafana s embedded kube aggregator                                                                                                                                                                                                                                              expressionParser                               Enable new expression parser                                                                                                                                                                                                                                       
                    disableNumericMetricsSortingInExpressions      In server side expressions  disable the sorting of numeric kind metrics by their metric name or labels                                                                                                                                                                                 queryLibrary                                   Enables Query Library feature in Explore                                                                                                                                                                                                                                               logsExploreTableDefaultVisualization           Sets the logs table as default visualisation in logs explore                                                                                                                                                                                                                           alertingListViewV2                             Enables the new alert list view design                                                                                                                                                                                                                                                 dashboardRestore                               Enables deleted dashboard restore feature                                                                                                                                                                                                                                              alertingCentralAlertHistory                    Enables the new central alert history                                                                                                                                                                                                                                                  
sqlQuerybuilderFunctionParameters              Enables SQL query builder function parameters                                                                                                                                                                                                                                          failWrongDSUID                                 Throws an error if a datasource has an invalid UIDs                                                                                                                                                                                                                                    alertingApiServer                              Register Alerting APIs with the K8s API server                                                                                                                                                                                                                                         dataplaneAggregator                            Enable grafana dataplane aggregator                                                                                                                                                                                                                                                    newFiltersUI                                   Enables new combobox style UI for the Ad hoc filters variable in scenes architecture                                                                                                                                                                                                   lokiSendDashboardPanelNames                    Send dashboard and panel names to Loki when querying                                                                                                                                                                                                                                   alertingPrometheusRulesPrimary              
   Uses Prometheus rules as the primary source of truth for ruler enabled data sources                                                                                                                                                                                                    exploreLogsShardSplitting                      Used in Explore Logs to split queries into multiple queries based on the number of shards                                                                                                                                                                                              exploreLogsAggregatedMetrics                   Used in Explore Logs to query by aggregated metrics                                                                                                                                                                                                                                    exploreLogsLimitedTimeRange                    Used in Explore Logs to limit the time range                                                                                                                                                                                                                                           homeSetupGuide                                 Used in Home for users who want to return to the onboarding flow or quickly find popular config pages                                                                                                                                                                                  appSidecar                                     Enable the app sidecar feature that allows rendering 2 apps at the same time                                                                                                                                                                                                           alertingQueryAndExpressionsStepMode            Enables step mode for alerting queries 
and expressions                                                                                                                                                                                                                                 rolePickerDrawer                               Enables the new role picker drawer design                                                                                                                                                                                                                                              pluginsSriChecks                               Enables SRI checks for plugin assets                                                                                                                                                                                                                                                   unifiedStorageBigObjectsSupport                Enables to save big objects in blob storage                                                                                                                                                                                                                                            timeRangeProvider                              Enables time pickers sync                                                                                                                                                                                                                                                              prometheusUsesCombobox                         Use new combobox component for Prometheus query editor                                                                                                                                                                                                                                 userStorageAPI                                 Enables the user storage API                                                       
                                                                                                                                                                                                    dashboardSchemaV2                              Enables the new dashboard schema version 2  implementing changes necessary for dynamic dashboards and dashboards as code                                                                                                                                                               playlistsWatcher                               Enables experimental watcher for playlists                                                                                                                                                                                                                                             enableExtensionsAdminPage                      Enables the extension admin page regardless of development mode                                                                                                                                                                                                                        zipkinBackendMigration                         Enables querying Zipkin data source without the proxy                                                                                                                                                                                                                                  enableSCIM                                     Enables SCIM support for user and group management                                                                                                                                                                                                                                     crashDetection                                 Enables browser crash detection reporting to Faro                                                                              
                                                                                                                                                        jaegerBackendMigration                         Enables querying the Jaeger data source without the proxy                                                                                                                                                                                                                              alertingNotificationsStepMode                  Enables simplified step mode in the notifications section                                                                                                                                                                                                                               Development feature toggles  The following toggles require explicitly setting Grafana s  app mode    to  development  before you can enable this feature toggle  These features tend to be experimental     Feature toggle name                      Description                                                                                                                                                                                                   grafanaAPIServerWithExperimentalAPIs    Register experimental APIs with the k8s API server  including all datasources      grafanaAPIServerEnsureKubectlAccess     Start an additional https handler and write kubectl options                        panelTitleSearchInV1                    Enable searching for dashboards using panel title in search v1                 "}
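As a sketch of how a toggle from the tables above is switched on, Grafana reads `GF_<SECTION>_<KEY>` environment variables as overrides for its configuration file, so the `[feature_toggles]` `enable` option maps to `GF_FEATURE_TOGGLES_ENABLE`. The toggle names below are examples picked from the table, not recommendations:

```bash
# Enable a comma-separated list of feature toggles via an environment
# variable override when running the official Grafana container.
docker run -d -p 3000:3000 --name=grafana \
  -e "GF_FEATURE_TOGGLES_ENABLE=extraThemes,lokiLogsDataplane" \
  grafana/grafana-enterprise

# Development feature toggles additionally require development app mode,
# e.g. by also passing:
#   -e "GF_DEFAULT_APP_MODE=development"
```

The same effect can be achieved by setting `enable = extraThemes,lokiLogsDataplane` under `[feature_toggles]` in `grafana.ini`; the environment-variable form is simply more convenient for containerized deployments.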
{"questions":"grafana setup title Run Grafana Docker image labels menuTitle Grafana Docker image aliases installation docker products Guide for running Grafana using Docker oss enterprise","answers":"---\naliases:\n  - ..\/..\/installation\/docker\/\ndescription: Guide for running Grafana using Docker\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: Grafana Docker image\ntitle: Run Grafana Docker image\nweight: 400\n---\n\n# Run Grafana Docker image\n\nThis topic guides you through installing Grafana via the official Docker images. Specifically, it covers running Grafana via the Docker command line interface (CLI) and docker-compose.\n\n\n\nGrafana Docker images come in two editions:\n\n- **Grafana Enterprise**: `grafana\/grafana-enterprise`\n- **Grafana Open Source**: `grafana\/grafana-oss`\n\n> **Note:** The recommended and default edition of Grafana is Grafana Enterprise. It is free and includes all the features of the OSS edition. Additionally, you have the option to upgrade to the [full Enterprise feature set](\/products\/enterprise\/?utm_source=grafana-install-page), which includes support for [Enterprise plugins](\/grafana\/plugins\/?enterprise=1&utcm_source=grafana-install-page).\n\nThe default images for Grafana are created using the Alpine Linux project and can be found in the Alpine official image. For instructions on configuring a Docker image for Grafana, refer to [Configure a Grafana Docker image]().\n\n## Run Grafana via Docker CLI\n\nThis section shows you how to run Grafana using the Docker CLI.\n\n> **Note:** If you are on a Linux system (for example, Debian or Ubuntu), you might need to add `sudo` before the command or add your user to the `docker` group. 
For more information, refer to [Linux post-installation steps for Docker Engine](https:\/\/docs.docker.com\/engine\/install\/linux-postinstall\/).\n\nTo run the latest stable version of Grafana, run the following command:\n\n```bash\ndocker run -d -p 3000:3000 --name=grafana grafana\/grafana-enterprise\n```\n\nWhere:\n\n- [`docker run`](https:\/\/docs.docker.com\/engine\/reference\/commandline\/run\/) is a Docker CLI command that runs a new container from an image\n- `-d` (`--detach`) runs the container in the background\n- `-p <host-port>:<container-port>` (`--publish`) publishes a container's port(s) to the host, allowing you to reach the container's port via a host port. In this case, we can reach the container's port `3000` via the host's port `3000`\n- `--name` assigns a logical name to the container (e.g. `grafana`). This allows you to refer to the container by name instead of by ID.\n- `grafana\/grafana-enterprise` is the image to run\n\n### Stop the Grafana container\n\nTo stop the Grafana container, run the following command:\n\n```bash\n# The `docker ps` command shows the processes running in Docker\ndocker ps\n\n# This will display a list of containers that looks like the following:\nCONTAINER ID   IMAGE  COMMAND   CREATED  STATUS   PORTS    NAMES\ncd48d3994968   grafana\/grafana-enterprise   \"\/run.sh\"   8 seconds ago   Up 7 seconds   0.0.0.0:3000->3000\/tcp   grafana\n\n# To stop the grafana container run the command\n# docker stop CONTAINER-ID or use\n# docker stop NAME, which is `grafana` as previously defined\ndocker stop grafana\n```\n\n### Save your Grafana data\n\nBy default, Grafana uses an embedded SQLite version 3 database to store configuration, users, dashboards, and other data. When you run Docker images as containers, changes to this Grafana data are written to the filesystem within the container, which will only persist for as long as the container exists. If you stop and remove the container, any filesystem changes (i.e. 
the Grafana data) will be discarded. To avoid losing your data, you can set up persistent storage using [Docker volumes](https:\/\/docs.docker.com\/storage\/volumes\/) or [bind mounts](https:\/\/docs.docker.com\/storage\/bind-mounts\/) for your container.\n\n> **Note:** Though both methods are similar, there is a slight difference. If you want your storage to be fully managed by Docker and accessed only through Docker containers and the Docker CLI, you should choose Docker volumes. However, if you need full control of the storage and want to allow other processes besides Docker to access or modify the storage layer, then bind mounts are the right choice for your environment.\n\n#### Use Docker volumes (recommended)\n\nUse Docker volumes when you want the Docker Engine to manage the storage volume.\n\nTo use Docker volumes for persistent storage, complete the following steps:\n\n1. Create a Docker volume to be used by the Grafana container, giving it a descriptive name (e.g. `grafana-storage`). Run the following command:\n\n   ```bash\n   # create a persistent volume for your data\n   docker volume create grafana-storage\n\n   # verify that the volume was created correctly\n   # you should see some JSON output\n   docker volume inspect grafana-storage\n   ```\n\n1. 
Start the Grafana container by running the following command:\n   ```bash\n   # start grafana\n   docker run -d -p 3000:3000 --name=grafana \\\n     --volume grafana-storage:\/var\/lib\/grafana \\\n     grafana\/grafana-enterprise\n   ```\n\n#### Use bind mounts\n\nIf you plan to use directories on your host for the database or configuration when running Grafana in Docker, you must start the container with a user with permission to access and write to the directory you map.\n\nTo use bind mounts, run the following command:\n\n```bash\n# create a directory for your data\nmkdir data\n\n# start grafana with your user id and using the data directory\ndocker run -d -p 3000:3000 --name=grafana \\\n  --user \"$(id -u)\" \\\n  --volume \"$PWD\/data:\/var\/lib\/grafana\" \\\n  grafana\/grafana-enterprise\n```\n\n### Use environment variables to configure Grafana\n\nGrafana supports specifying custom configuration settings using [environment variables]().\n\n```bash\n# enable debug logs\n\ndocker run -d -p 3000:3000 --name=grafana \\\n  -e \"GF_LOG_LEVEL=debug\" \\\n  grafana\/grafana-enterprise\n```\n\n## Install plugins in the Docker container\n\nYou can install plugins in Grafana from the official and community [plugins page](\/grafana\/plugins) or by using a custom URL to install a private plugin. These plugins allow you to add new visualization types, data sources, and applications to help you better visualize your data.\n\nGrafana currently supports three types of plugins: panel, data source, and app. For more information on managing plugins, refer to [Plugin Management]().\n\nTo install plugins in the Docker container, complete the following steps:\n\n1. 
Pass the plugins you want to be installed to Docker with the `GF_PLUGINS_PREINSTALL` environment variable as a comma-separated list.\n\n   This starts a background process that installs the list of plugins while the Grafana server starts.\n\n   For example:\n\n   ```bash\n   docker run -d -p 3000:3000 --name=grafana \\\n     -e \"GF_PLUGINS_PREINSTALL=grafana-clock-panel,grafana-simple-json-datasource\" \\\n     grafana\/grafana-enterprise\n   ```\n\n1. To specify the version of a plugin, add the version number to the `GF_PLUGINS_PREINSTALL` environment variable.\n\n   For example:\n\n   ```bash\n   docker run -d -p 3000:3000 --name=grafana \\\n     -e \"GF_PLUGINS_PREINSTALL=grafana-clock-panel@1.0.1\" \\\n     grafana\/grafana-enterprise\n   ```\n\n   > **Note:** If you do not specify a version number, the latest version is used.\n\n1. To install a plugin from a custom URL, use the following convention to specify the URL: `<plugin ID>@[<plugin version>]@<url to plugin zip>`.\n\n   For example:\n\n   ```bash\n   docker run -d -p 3000:3000 --name=grafana \\\n     -e \"GF_PLUGINS_PREINSTALL=custom-plugin@@https:\/\/github.com\/VolkovLabs\/custom-plugin.zip\" \\\n     grafana\/grafana-enterprise\n   ```\n\n## Example\n\nThe following example runs the latest stable version of Grafana, listening on port 3000, with the container named `grafana`, persistent storage in the `grafana-storage` docker volume, the server root URL set, and the official [clock panel](\/grafana\/plugins\/grafana-clock-panel) plugin installed.\n\n```bash\n# create a persistent volume for your data\ndocker volume create grafana-storage\n\n# start grafana by using the above persistent storage\n# and defining environment variables\n\ndocker run -d -p 3000:3000 --name=grafana \\\n  --volume grafana-storage:\/var\/lib\/grafana \\\n  -e \"GF_SERVER_ROOT_URL=http:\/\/my.grafana.server\/\" \\\n  -e \"GF_PLUGINS_PREINSTALL=grafana-clock-panel\" \\\n  grafana\/grafana-enterprise\n```\n\n## Run Grafana via 
Docker Compose\n\nDocker Compose is a software tool that makes it easy to define and share applications that consist of multiple containers. It works by using a YAML file, usually called `docker-compose.yaml`, which lists all the services that make up the application. You can start the containers in the correct order with a single command, and with another command, you can shut them down. For more information about the benefits of using Docker Compose and how to use it, refer to [Use Docker Compose](https:\/\/docs.docker.com\/get-started\/08_using_compose\/).\n\n### Before you begin\n\nTo run Grafana via Docker Compose, install the compose tool on your machine. To determine if the compose tool is available, run the following command:\n\n```bash\ndocker compose version\n```\n\nIf the compose tool is unavailable, refer to [Install Docker Compose](https:\/\/docs.docker.com\/compose\/install\/).\n\n### Run the latest stable version of Grafana\n\nThis section shows you how to run Grafana using Docker Compose. The examples in this section use Compose version 3. For more information about compatibility, refer to [Compose and Docker compatibility matrix](https:\/\/docs.docker.com\/compose\/compose-file\/compose-file-v3\/).\n\n> **Note:** If you are on a Linux system (for example, Debian or Ubuntu), you might need to add `sudo` before the command or add your user to the `docker` group. For more information, refer to [Linux post-installation steps for Docker Engine](https:\/\/docs.docker.com\/engine\/install\/linux-postinstall\/).\n\nTo run the latest stable version of Grafana using Docker Compose, complete the following steps:\n\n1. Create a `docker-compose.yaml` file.\n\n   ```bash\n   # first go into the directory where you have created this docker-compose.yaml file\n   cd \/path\/to\/docker-compose-directory\n\n   # now create the docker-compose.yaml file\n   touch docker-compose.yaml\n   ```\n\n1. 
Now, add the following code into the `docker-compose.yaml` file.\n\n   For example:\n\n   ```yaml\n   services:\n     grafana:\n       image: grafana\/grafana-enterprise\n       container_name: grafana\n       restart: unless-stopped\n       ports:\n        - '3000:3000'\n   ```\n\n1. To start the services defined in `docker-compose.yaml`, run the following command:\n\n   ```bash\n   # start the grafana container\n   docker compose up -d\n   ```\n\n   Where:\n\n   - `up` creates and starts the containers\n   - `-d` (`--detach`) runs them in the background\n\nTo determine that Grafana is running, open a browser window and type `IP_ADDRESS:3000`. The sign in screen should appear.\n\n### Stop the Grafana container\n\nTo stop the Grafana container, run the following command:\n\n```bash\ndocker compose down\n```\n\n> **Note:** For more information about using Docker Compose commands, refer to [docker compose](https:\/\/docs.docker.com\/engine\/reference\/commandline\/compose\/).\n\n### Save your Grafana data\n\nBy default, Grafana uses an embedded SQLite version 3 database to store configuration, users, dashboards, and other data. When you run Docker images as containers, changes to this Grafana data are written to the filesystem within the container, which will only persist for as long as the container exists. If you stop and remove the container, any filesystem changes (i.e. the Grafana data) will be discarded. To avoid losing your data, you can set up persistent storage using [Docker volumes](https:\/\/docs.docker.com\/storage\/volumes\/) or [bind mounts](https:\/\/docs.docker.com\/storage\/bind-mounts\/) for your container.\n\n#### Use Docker volumes (recommended)\n\nUse Docker volumes when you want the Docker Engine to manage the storage volume.\n\nTo use Docker volumes for persistent storage, complete the following steps:\n\n1. 
Create a `docker-compose.yaml` file\n\n   ```bash\n   # first go into the directory where you have created this docker-compose.yaml file\n   cd \/path\/to\/docker-compose-directory\n\n   # now create the docker-compose.yaml file\n   touch docker-compose.yaml\n   ```\n\n1. Add the following code into the `docker-compose.yaml` file.\n\n   ```yaml\n   services:\n     grafana:\n       image: grafana\/grafana-enterprise\n       container_name: grafana\n       restart: unless-stopped\n       ports:\n         - '3000:3000'\n       volumes:\n         - grafana-storage:\/var\/lib\/grafana\n   volumes:\n     grafana-storage: {}\n   ```\n\n1. Save the file and run the following command:\n\n   ```bash\n   docker compose up -d\n   ```\n\n#### Use bind mounts\n\nIf you plan to use directories on your host for the database or configuration when running Grafana in Docker, you must start the container with a user that has permission to access and write to the directory you map.\n\nTo use bind mounts, complete the following steps:\n\n1. Create a `docker-compose.yaml` file\n\n   ```bash\n   # first go into the directory where you have created this docker-compose.yaml file\n   cd \/path\/to\/docker-compose-directory\n\n   # now create the docker-compose.yaml file\n   touch docker-compose.yaml\n   ```\n\n1. Create the directory you will mount your data into, in this case a `data` directory in your current working directory:\n\n   ```bash\n   mkdir $PWD\/data\n   ```\n\n1. 
Now, add the following code into the `docker-compose.yaml` file.\n\n   ```yaml\n   services:\n     grafana:\n       image: grafana\/grafana-enterprise\n       container_name: grafana\n       restart: unless-stopped\n       # if you are running as root, set this to '0';\n       # otherwise find the right id with the `id -u` command\n       user: '0'\n       ports:\n         - '3000:3000'\n       # mount the data directory created earlier\n       volumes:\n         - '$PWD\/data:\/var\/lib\/grafana'\n   ```\n\n1. Save the file and run the following command:\n\n   ```bash\n   docker compose up -d\n   ```\n\n### Example\n\nThe following example runs the latest stable version of Grafana, listening on port 3000, with the container named `grafana`, persistent storage in the `grafana-storage` docker volume, the server root URL set, and the official [clock panel](\/grafana\/plugins\/grafana-clock-panel\/) plugin installed.\n\n```yaml\nservices:\n  grafana:\n    image: grafana\/grafana-enterprise\n    container_name: grafana\n    restart: unless-stopped\n    environment:\n     - GF_SERVER_ROOT_URL=http:\/\/my.grafana.server\/\n     - GF_PLUGINS_PREINSTALL=grafana-clock-panel\n    ports:\n     - '3000:3000'\n    volumes:\n     - 'grafana-storage:\/var\/lib\/grafana'\nvolumes:\n  grafana-storage: {}\n```\n\n> **Note:** If you want to specify the version of a plugin, add the version number to the `GF_PLUGINS_PREINSTALL` environment variable. For example: `-e \"GF_PLUGINS_PREINSTALL=grafana-clock-panel@1.0.1,grafana-simple-json-datasource@1.3.5\"`. 
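In a Compose file, the same pinned plugin list can be expressed as an `environment` entry (a sketch; the plugin IDs and version numbers here are only examples):

```yaml
services:
  grafana:
    image: grafana/grafana-enterprise
    environment:
      # pin plugin versions with @; omitting the version installs the latest
      - GF_PLUGINS_PREINSTALL=grafana-clock-panel@1.0.1,grafana-simple-json-datasource@1.3.5
```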
If you do not specify a version number, the latest version is used.\n\n## Next steps\n\nRefer to the [Getting Started]() guide for information about logging in, setting up data sources, and so on.\n\n## Configure Docker image\n\nRefer to the [Configure a Grafana Docker image]() page for details on options for customizing your environment, logging, database, and so on.\n\n## Configure Grafana\n\nRefer to the [Configuration]() page for details on options for customizing your environment, logging, database, and so on.","site":"grafana setup"}
{"questions":"grafana setup menuTitle Debian or Ubuntu installation debian aliases products installation installation debian Install guide for Grafana on Debian or Ubuntu labels oss enterprise","answers":"---\naliases:\n  - ..\/..\/installation\/debian\/\n  - ..\/..\/installation\/installation\/debian\/\ndescription: Install guide for Grafana on Debian or Ubuntu\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: Debian or Ubuntu\ntitle: Install Grafana on Debian or Ubuntu\nweight: 100\n---\n\n# Install Grafana on Debian or Ubuntu\n\nThis topic explains how to install Grafana dependencies, install Grafana on Linux Debian or Ubuntu, and start the Grafana server on your Debian or Ubuntu system.\n\nThere are multiple ways to install Grafana: using the Grafana Labs APT repository, by downloading a `.deb` package, or by downloading a binary `.tar.gz` file. Choose only one of the methods below that best suits your needs.\n\n\nIf you install via the `.deb` package or `.tar.gz` file, then you must manually update Grafana for each new version.\n\n\nThe following video demonstrates how to install Grafana on Debian and Ubuntu as outlined in this document:\n\n\n\n## Install from APT repository\n\nIf you install from the APT repository, Grafana automatically updates when you run `apt-get update`.\n\n| Grafana Version           | Package            | Repository                            |\n| ------------------------- | ------------------ | ------------------------------------- |\n| Grafana Enterprise        | grafana-enterprise | `https:\/\/apt.grafana.com stable main` |\n| Grafana Enterprise (Beta) | grafana-enterprise | `https:\/\/apt.grafana.com beta main`   |\n| Grafana OSS               | grafana            | `https:\/\/apt.grafana.com stable main` |\n| Grafana OSS (Beta)        | grafana            | `https:\/\/apt.grafana.com beta main`   |\n\n\nGrafana Enterprise is the recommended and default edition. 
It is available for free and includes all the features of the OSS edition. You can also upgrade to the [full Enterprise feature set](\/products\/enterprise\/?utm_source=grafana-install-page), which has support for [Enterprise plugins](\/grafana\/plugins\/?enterprise=1&utcm_source=grafana-install-page).\n\n\nComplete the following steps to install Grafana from the APT repository:\n\n1. Install the prerequisite packages:\n\n   ```bash\n   sudo apt-get install -y apt-transport-https software-properties-common wget\n   ```\n\n1. Import the GPG key:\n\n   ```bash\n   sudo mkdir -p \/etc\/apt\/keyrings\/\n   wget -q -O - https:\/\/apt.grafana.com\/gpg.key | gpg --dearmor | sudo tee \/etc\/apt\/keyrings\/grafana.gpg > \/dev\/null\n   ```\n\n1. To add a repository for stable releases, run the following command:\n\n   ```bash\n   echo \"deb [signed-by=\/etc\/apt\/keyrings\/grafana.gpg] https:\/\/apt.grafana.com stable main\" | sudo tee -a \/etc\/apt\/sources.list.d\/grafana.list\n   ```\n\n1. To add a repository for beta releases, run the following command:\n\n   ```bash\n   echo \"deb [signed-by=\/etc\/apt\/keyrings\/grafana.gpg] https:\/\/apt.grafana.com beta main\" | sudo tee -a \/etc\/apt\/sources.list.d\/grafana.list\n   ```\n\n1. Run the following command to update the list of available packages:\n\n   ```bash\n   # Updates the list of available packages\n   sudo apt-get update\n   ```\n\n1. To install Grafana OSS, run the following command:\n\n   ```bash\n   # Installs the latest OSS release:\n   sudo apt-get install grafana\n   ```\n\n1. 
To install Grafana Enterprise, run the following command:\n\n   ```bash\n   # Installs the latest Enterprise release:\n   sudo apt-get install grafana-enterprise\n   ```\n\n## Install Grafana using a deb package or as a standalone binary\n\nIf you choose not to install Grafana using APT, you can download and install Grafana using the deb package or as a standalone binary.\n\nComplete the following steps to install Grafana using DEB or the standalone binaries:\n\n1. Navigate to the [Grafana download page](\/grafana\/download).\n1. Select the Grafana version you want to install.\n   - The most recent Grafana version is selected by default.\n   - The **Version** field displays only tagged releases. If you want to install a nightly build, click **Nightly Builds** and then select a version.\n1. Select an **Edition**.\n   - **Enterprise:** This is the recommended version. It is functionally identical to the open source version, but includes features you can unlock with a license, if you so choose.\n   - **Open Source:** This version is functionally identical to the Enterprise version, but you will need to download the Enterprise version if you want Enterprise features.\n1. Depending on which system you are running, click the **Linux** or **ARM** tab on the [download page](\/grafana\/download).\n1. Copy and paste the code from the [download page](\/grafana\/download) into your command line and run.\n\n## Uninstall on Debian or Ubuntu\n\nComplete any of the following steps to uninstall Grafana.\n\nTo uninstall Grafana, run the following commands in a terminal window:\n\n1. If you configured Grafana to run with systemd, stop the systemd service for Grafana server:\n\n   ```shell\n   sudo systemctl stop grafana-server\n   ```\n\n1. If you configured Grafana to run with init.d, stop the init.d service for Grafana server:\n\n   ```shell\n   sudo service grafana-server stop\n   ```\n\n1. To uninstall Grafana OSS:\n\n   ```shell\n   sudo apt-get remove grafana\n   ```\n\n1. 
To uninstall Grafana Enterprise:\n\n   ```shell\n   sudo apt-get remove grafana-enterprise\n   ```\n\n1. Optional: To remove the Grafana repository:\n\n   ```bash\n   sudo rm -i \/etc\/apt\/sources.list.d\/grafana.list\n   ```\n\n## Next steps\n\n- [Start the Grafana server]()","site":"grafana setup"}
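The APT table above pairs each Grafana edition with a package name and each channel with a repository line. As a minimal sketch of that mapping (the `grafana_apt_source` helper is illustrative, not official Grafana tooling):

```bash
#!/bin/sh
# Map a Grafana edition and release channel to the APT package name and
# the sources.list entry from the table above. The helper name
# grafana_apt_source is illustrative, not part of any Grafana tooling.
grafana_apt_source() {
  edition="$1"   # "oss" or "enterprise"
  channel="$2"   # "stable" or "beta"
  case "$edition" in
    enterprise) pkg="grafana-enterprise" ;;
    *)          pkg="grafana" ;;
  esac
  echo "package: $pkg"
  echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com $channel main"
}

grafana_apt_source enterprise stable
```

Piping the second output line through `sudo tee -a /etc/apt/sources.list.d/grafana.list` reproduces the stable or beta repository step above.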
{"questions":"grafana setup Install guide for Grafana on RHEL and Fedora menuTitle RHEL or Fedora products weight 200 title Install Grafana on RHEL or Fedora labels oss enterprise","answers":"---\ndescription: Install guide for Grafana on RHEL and Fedora.\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: RHEL or Fedora\ntitle: Install Grafana on RHEL or Fedora\nweight: 200\n---\n\n# Install Grafana on RHEL or Fedora\n\nThis topic explains how to install Grafana dependencies, install Grafana on RHEL or Fedora, and start the Grafana server on your system.\n\nYou can install Grafana from the RPM repository, from standalone RPM, or with the binary `.tar.gz` file.\n\nIf you install via RPM or the `.tar.gz` file, then you must manually update Grafana for each new version.\n\nThe following video demonstrates how to install Grafana on RHEL or Fedora as outlined in this document:\n\n\n\n## Install Grafana from the RPM repository\n\nIf you install from the RPM repository, then Grafana is automatically updated every time you update your applications.\n\n| Grafana Version           | Package            | Repository                     |\n| ------------------------- | ------------------ | ------------------------------ |\n| Grafana Enterprise        | grafana-enterprise | `https:\/\/rpm.grafana.com`      |\n| Grafana Enterprise (Beta) | grafana-enterprise | `https:\/\/rpm-beta.grafana.com` |\n| Grafana OSS               | grafana            | `https:\/\/rpm.grafana.com`      |\n| Grafana OSS (Beta)        | grafana            | `https:\/\/rpm-beta.grafana.com` |\n\n\nGrafana Enterprise is the recommended and default edition. It is available for free and includes all the features of the OSS edition. 
You can also upgrade to the [full Enterprise feature set](\/products\/enterprise\/?utm_source=grafana-install-page), which has support for [Enterprise plugins](\/grafana\/plugins\/?enterprise=1&utm_source=grafana-install-page).\n\n\nTo install Grafana from the RPM repository, complete the following steps:\n\n\nIf you wish to install beta versions of Grafana, replace the repository URL with the beta URL listed above.\n\n\n1. Import the GPG key:\n\n   ```bash\n   wget -q -O gpg.key https:\/\/rpm.grafana.com\/gpg.key\n   sudo rpm --import gpg.key\n   ```\n\n1. Create `\/etc\/yum.repos.d\/grafana.repo` with the following content:\n\n   ```bash\n   [grafana]\n   name=grafana\n   baseurl=https:\/\/rpm.grafana.com\n   repo_gpgcheck=1\n   enabled=1\n   gpgcheck=1\n   gpgkey=https:\/\/rpm.grafana.com\/gpg.key\n   sslverify=1\n   sslcacert=\/etc\/pki\/tls\/certs\/ca-bundle.crt\n   ```\n\n1. To install Grafana OSS, run the following command:\n\n   ```bash\n   sudo dnf install grafana\n   ```\n\n1. To install Grafana Enterprise, run the following command:\n\n   ```bash\n   sudo dnf install grafana-enterprise\n   ```\n\n## Install the Grafana RPM package manually\n\nIf you install Grafana manually using YUM or RPM, then you must manually update Grafana for each new version. This method varies according to which Linux OS you are running.\n\n**Note:** The RPM files are signed. You can verify the signature with this [public GPG key](https:\/\/rpm.grafana.com\/gpg.key).\n\n1. On the [Grafana download page](\/grafana\/download), select the Grafana version you want to install.\n   - The most recent Grafana version is selected by default.\n   - The **Version** field displays only finished releases. If you want to install a beta version, click **Nightly Builds** and then select a version.\n1. Select an **Edition**.\n   - **Enterprise** - Recommended download. 
Functionally identical to the open source version, but includes features you can unlock with a license if you so choose.\n   - **Open Source** - Functionally identical to the Enterprise version, but you will need to download the Enterprise version if you want Enterprise features.\n1. Depending on which system you are running, click **Linux** or **ARM**.\n1. Copy and paste the RPM package URL and the local RPM package information from the [download page](\/grafana\/download) into the pattern shown below and run the command.\n\n   ```bash\n   sudo yum install -y <rpm package url>\n   ```\n\n## Install Grafana as a standalone binary\n\nComplete the following steps to install Grafana using the standalone binaries:\n\n1. Navigate to the [Grafana download page](\/grafana\/download).\n1. Select the Grafana version you want to install.\n   - The most recent Grafana version is selected by default.\n   - The **Version** field displays only tagged releases. If you want to install a nightly build, click **Nightly Builds** and then select a version.\n1. Select an **Edition**.\n   - **Enterprise:** This is the recommended version. It is functionally identical to the open-source version but includes features you can unlock with a license if you so choose.\n   - **Open Source:** This version is functionally identical to the Enterprise version, but you will need to download the Enterprise version if you want Enterprise features.\n1. Depending on which system you are running, click the **Linux** or **ARM** tab on the [download page](\/grafana\/download).\n1. Copy and paste the code from the [download page](\/grafana\/download) page into your command line and run.\n\n## Uninstall on RHEL or Fedora\n\nTo uninstall Grafana, run the following commands in a terminal window:\n\n1. If you configured Grafana to run with systemd, stop the systemd service for Grafana server:\n\n   ```shell\n   sudo systemctl stop grafana-server\n   ```\n\n1. 
If you configured Grafana to run with init.d, stop the init.d service for Grafana server:\n\n   ```shell\n   sudo service grafana-server stop\n   ```\n\n1. To uninstall Grafana OSS:\n\n   ```shell\n   sudo dnf remove grafana\n   ```\n\n1. To uninstall Grafana Enterprise:\n\n   ```shell\n   sudo dnf remove grafana-enterprise\n   ```\n\n1. Optional: To remove the Grafana repository:\n\n   ```shell\n   sudo rm -i \/etc\/yum.repos.d\/grafana.repo\n   ```\n\n## Next steps\n\nRefer to [Start the Grafana server]().","site":"grafana setup"}
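The RPM-repository install above writes a fixed `grafana.repo` definition. That step can be sketched as one idempotent helper; the target path is a parameter so the sketch can be tried without root (`write_grafana_repo` is not official tooling, and on a real host you would pass `/etc/yum.repos.d/grafana.repo` with sudo):

```bash
#!/bin/sh
# Write the grafana.repo definition from the RPM-repository install steps.
# Quoted heredoc delimiter ('EOF') prevents any shell expansion, so the
# file content is emitted verbatim.
write_grafana_repo() {
  target="$1"
  cat > "$target" <<'EOF'
[grafana]
name=grafana
baseurl=https://rpm.grafana.com
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://rpm.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
EOF
}

write_grafana_repo /tmp/grafana.repo
cat /tmp/grafana.repo
```

Because the function truncates and rewrites the file, rerunning it cannot duplicate the `[grafana]` section the way repeated `tee -a` appends would.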
{"questions":"grafana setup menuTitle SUSE or openSUSE weight 300 Install guide for Grafana on SUSE or openSUSE products title Install Grafana on SUSE or openSUSE labels oss enterprise","answers":"---\ndescription: Install guide for Grafana on SUSE or openSUSE.\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: SUSE or openSUSE\ntitle: Install Grafana on SUSE or openSUSE\nweight: 300\n---\n\n# Install Grafana on SUSE or openSUSE\n\nThis topic explains how to install Grafana dependencies, install Grafana on SUSE or openSUSE, and start the Grafana server on your system.\n\nYou can install Grafana using the RPM repository, or by downloading a binary `.tar.gz` file.\n\nIf you install via RPM or the `.tar.gz` file, then you must manually update Grafana for each new version.\n\nThe following video demonstrates how to install Grafana on SUSE or openSUSE as outlined in this document:\n\n\n\n## Install Grafana from the RPM repository\n\nIf you install from the RPM repository, then Grafana is automatically updated every time you run `sudo zypper update`.\n\n| Grafana Version    | Package            | Repository                |\n| ------------------ | ------------------ | ------------------------- |\n| Grafana Enterprise | grafana-enterprise | `https:\/\/rpm.grafana.com` |\n| Grafana OSS        | grafana            | `https:\/\/rpm.grafana.com` |\n\n\nGrafana Enterprise is the recommended and default edition. It is available for free and includes all the features of the OSS edition. You can also upgrade to the [full Enterprise feature set](\/products\/enterprise\/?utm_source=grafana-install-page), which has support for [Enterprise plugins](\/grafana\/plugins\/?enterprise=1&utm_source=grafana-install-page).\n\n\nTo install Grafana using the RPM repository, complete the following steps:\n\n1. Import the GPG key:\n\n   ```bash\n   wget -q -O gpg.key https:\/\/rpm.grafana.com\/gpg.key\n   sudo rpm --import gpg.key\n   ```\n\n1. 
Use zypper to add the grafana repo.\n\n   ```bash\n   sudo zypper addrepo https:\/\/rpm.grafana.com grafana\n   ```\n\n1. To install Grafana OSS, run the following command:\n\n   ```bash\n   sudo zypper install grafana\n   ```\n\n1. To install Grafana Enterprise, run the following command:\n\n   ```bash\n   sudo zypper install grafana-enterprise\n   ```\n\n## Install the Grafana RPM package manually\n\nIf you install Grafana manually using RPM, then you must manually update Grafana for each new version. This method varies according to which Linux OS you are running.\n\n**Note:** The RPM files are signed. You can verify the signature with this [public GPG key](https:\/\/rpm.grafana.com\/gpg.key).\n\n1. On the [Grafana download page](\/grafana\/download), select the Grafana version you want to install.\n   - The most recent Grafana version is selected by default.\n   - The **Version** field displays only finished releases. If you want to install a beta version, click **Nightly Builds** and then select a version.\n1. Select an **Edition**.\n   - **Enterprise** - Recommended download. Functionally identical to the open source version, but includes features you can unlock with a license if you so choose.\n   - **Open Source** - Functionally identical to the Enterprise version, but you will need to download the Enterprise version if you want Enterprise features.\n1. Depending on which system you are running, click **Linux** or **ARM**.\n1. Copy and paste the RPM package URL and the local RPM package information from the installation page into the pattern shown below, then run the commands.\n\n   ```bash\n   sudo zypper install initscripts urw-fonts wget\n   wget <rpm package url>\n   sudo rpm -Uvh <local rpm package>\n   ```\n\n## Install Grafana as a standalone binary\n\nComplete the following steps to install Grafana using the standalone binaries:\n\n1. Navigate to the [Grafana download page](\/grafana\/download).\n1. 
Select the Grafana version you want to install.\n   - The most recent Grafana version is selected by default.\n   - The **Version** field displays only tagged releases. If you want to install a nightly build, click **Nightly Builds** and then select a version.\n1. Select an **Edition**.\n   - **Enterprise:** This is the recommended version. It is functionally identical to the open-source version but includes features you can unlock with a license if you so choose.\n   - **Open Source:** This version is functionally identical to the Enterprise version, but you will need to download the Enterprise version if you want Enterprise features.\n1. Depending on which system you are running, click the **Linux** or **ARM** tab on the [download page](\/grafana\/download).\n1. Copy and paste the code from the [download page](\/grafana\/download) into your command line and run.\n\n## Uninstall on SUSE or openSUSE\n\nTo uninstall Grafana, run the following commands in a terminal window:\n\n1. If you configured Grafana to run with systemd, stop the systemd service for Grafana server:\n\n   ```shell\n   sudo systemctl stop grafana-server\n   ```\n\n1. If you configured Grafana to run with init.d, stop the init.d service for Grafana server:\n\n   ```shell\n   sudo service grafana-server stop\n   ```\n\n1. To uninstall Grafana OSS:\n\n   ```shell\n   sudo zypper remove grafana\n   ```\n\n1. To uninstall Grafana Enterprise:\n\n   ```shell\n   sudo zypper remove grafana-enterprise\n   ```\n\n1. 
Optional: To remove the Grafana repository:\n\n   ```shell\n   sudo zypper removerepo grafana\n   ```\n\n## Next steps\n\nRefer to [Start the Grafana server]().","site":"grafana setup"}
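The Debian/Ubuntu, RHEL/Fedora, and SUSE/openSUSE guides above differ mainly in which package manager they invoke. As an illustrative sketch (not official Grafana tooling), the OSS install command can be picked from the `ID` field of os-release(5):

```bash
#!/bin/sh
# Choose the Grafana OSS install command for a distribution, keyed on an
# os-release(5) ID value. grafana_install_cmd is a hypothetical helper.
grafana_install_cmd() {
  case "$1" in
    debian|ubuntu)      echo "sudo apt-get install grafana" ;;
    rhel|fedora|centos) echo "sudo dnf install grafana" ;;
    sles|opensuse*)     echo "sudo zypper install grafana" ;;
    *) echo "unsupported distribution: $1" >&2; return 1 ;;
  esac
}

# Typically the argument would come from: . /etc/os-release && echo "$ID"
grafana_install_cmd opensuse-leap
```

The `opensuse*` glob covers variants such as `opensuse-leap` and `opensuse-tumbleweed`, which report distinct `ID` values.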
{"questions":"grafana setup title Deploy Grafana on Kubernetes aliases products oss menuTitle Grafana on Kubernetes labels Guide for deploying Grafana on Kubernetes installation kubernetes enterprise","answers":"---\naliases:\n  - ..\/..\/installation\/kubernetes\/\ndescription: Guide for deploying Grafana on Kubernetes\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: Grafana on Kubernetes\ntitle: Deploy Grafana on Kubernetes\nweight: 500\n---\n\n# Deploy Grafana on Kubernetes\n\nOn this page, you will find instructions for installing and running Grafana on Kubernetes using Kubernetes manifests for the setup. If Helm is your preferred option, refer to [Grafana Helm community charts](https:\/\/github.com\/grafana\/helm-charts).\n\nWatch this video to learn more about installing Grafana on Kubernetes: \n\n## Before you begin\n\nTo follow this guide:\n\n- You need the latest version of [Kubernetes](https:\/\/kubernetes.io\/) running either locally or remotely on a public or private cloud.\n\n- If you plan to use it in a local environment, you can use various Kubernetes options such as [minikube](https:\/\/minikube.sigs.k8s.io\/docs\/), [kind](https:\/\/kind.sigs.k8s.io\/), [Docker Desktop](https:\/\/docs.docker.com\/desktop\/kubernetes\/), and others.\n\n- If you plan to use Kubernetes in a production setting, it's recommended to utilize managed cloud services like [Google Kubernetes Engine (GKE)](https:\/\/cloud.google.com\/kubernetes-engine), [Amazon Elastic Kubernetes Service (EKS)](https:\/\/aws.amazon.com\/eks\/), or [Azure Kubernetes Service (AKS)](https:\/\/azure.microsoft.com\/en-us\/products\/kubernetes-service\/).\n\n## System requirements\n\nThis section provides minimum hardware and software requirements.\n\n### Minimum Hardware Requirements\n\n- Disk space: 1 GB\n- Memory: 750 MiB (approx 750 MB)\n- CPU: 250m (approx 0.25 cores)\n\n### Supported databases\n\nFor a list of supported databases, refer to [supported 
databases](\/docs\/grafana\/latest\/setup-grafana\/installation#supported-databases).\n\n### Supported web browsers\n\nFor a list of supported web browsers, refer to [supported web browsers](\/docs\/grafana\/latest\/setup-grafana\/installation#supported-web-browsers).\n\n\nEnable port `3000` in your network environment, as this is the Grafana default port.\n\n\n## Deploy Grafana OSS on Kubernetes\n\nThis section explains how to install Grafana OSS using Kubernetes.\n\nIf you want to install Grafana Enterprise on Kubernetes,\u00a0refer to [Deploy Grafana Enterprise on Kubernetes](#deploy-grafana-enterprise-on-kubernetes).\n\n\nIf you deploy an application in Kubernetes, it will use the default namespace, which may already have other applications running. This can result in conflicts and other issues.\n\nIt is recommended to create a new namespace in Kubernetes to better organize, allocate, and manage cluster resources. For more information about Namespaces, refer to the official [Kubernetes documentation](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/namespaces\/).\n\n1. To create a namespace, run the following command:\n\n   ```bash\n   kubectl create namespace my-grafana\n   ```\n\n   In this example, the namespace is `my-grafana`.\n\n1. To verify and view the newly created namespace, run the following command:\n\n   ```bash\n   kubectl get namespace my-grafana\n   ```\n\n   The output of the command provides more information about the newly created namespace.\n\n1. Create a YAML manifest file named `grafana.yaml`. 
This file will contain the necessary code for deployment.\n\n   ```bash\n   touch grafana.yaml\n   ```\n\n   In the next step you define the following three objects in the YAML file.\n\n   | Object                        | Description                                                                                                                   |\n   | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |\n   | Persistent Volume Claim (PVC) | This object stores the data.                                                                                                  |\n   | Service                       | This object provides network access to the Pod defined in the deployment.                                                     |\n   | Deployment                    | This object is responsible for creating the pods, ensuring they stay up to date, and managing Replicaset and Rolling updates. |\n\n1. 
Copy and paste the following contents and save it in the `grafana.yaml` file.\n\n   ```yaml\n   ---\n   apiVersion: v1\n   kind: PersistentVolumeClaim\n   metadata:\n     name: grafana-pvc\n   spec:\n     accessModes:\n       - ReadWriteOnce\n     resources:\n       requests:\n         storage: 1Gi\n   ---\n   apiVersion: apps\/v1\n   kind: Deployment\n   metadata:\n     labels:\n       app: grafana\n     name: grafana\n   spec:\n     selector:\n       matchLabels:\n         app: grafana\n     template:\n       metadata:\n         labels:\n           app: grafana\n       spec:\n         securityContext:\n           fsGroup: 472\n           supplementalGroups:\n             - 0\n         containers:\n           - name: grafana\n             image: grafana\/grafana:latest\n             imagePullPolicy: IfNotPresent\n             ports:\n               - containerPort: 3000\n                 name: http-grafana\n                 protocol: TCP\n             readinessProbe:\n               failureThreshold: 3\n               httpGet:\n                 path: \/robots.txt\n                 port: 3000\n                 scheme: HTTP\n               initialDelaySeconds: 10\n               periodSeconds: 30\n               successThreshold: 1\n               timeoutSeconds: 2\n             livenessProbe:\n               failureThreshold: 3\n               initialDelaySeconds: 30\n               periodSeconds: 10\n               successThreshold: 1\n               tcpSocket:\n                 port: 3000\n               timeoutSeconds: 1\n             resources:\n               requests:\n                 cpu: 250m\n                 memory: 750Mi\n             volumeMounts:\n               - mountPath: \/var\/lib\/grafana\n                 name: grafana-pv\n         volumes:\n           - name: grafana-pv\n             persistentVolumeClaim:\n               claimName: grafana-pvc\n   ---\n   apiVersion: v1\n   kind: Service\n   metadata:\n     name: grafana\n   spec:\n     
ports:\n       - port: 3000\n         protocol: TCP\n         targetPort: http-grafana\n     selector:\n       app: grafana\n     sessionAffinity: None\n     type: LoadBalancer\n   ```\n\n1. Run the following command to send the manifest to the Kubernetes API server:\n\n   ```bash\n   kubectl apply -f grafana.yaml --namespace=my-grafana\n   ```\n\n   This command creates the PVC, Deployment, and Service objects.\n\n1. Complete the following steps to verify the deployment status of each object.\n\n   a. For PVC, run the following command:\n\n   ```bash\n   kubectl get pvc --namespace=my-grafana -o wide\n   ```\n\n   b. For Deployment, run the following command:\n\n   ```bash\n   kubectl get deployments --namespace=my-grafana -o wide\n   ```\n\n   c. For Service, run the following command:\n\n   ```bash\n   kubectl get svc --namespace=my-grafana -o wide\n   ```\n\n## Access Grafana on Managed K8s Providers\n\nIn this task, you access Grafana deployed on a Managed Kubernetes provider using a web browser. Accessing Grafana via a web browser is straightforward if it is deployed on a Managed Kubernetes Provider as it uses the cloud provider\u2019s **LoadBalancer** to which the external load balancer routes are automatically created.\n\n1. 
Run the following command to obtain the deployment information:\n\n   ```bash\n   kubectl get all --namespace=my-grafana\n   ```\n\n   The output returned should look similar to the following:\n\n   ```bash\n   NAME                           READY   STATUS    RESTARTS   AGE\n   pod\/grafana-69946c9bd6-kwjb6   1\/1     Running   0          7m27s\n\n   NAME              TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE\n   service\/grafana   LoadBalancer   10.5.243.226   1.120.130.330   3000:31171\/TCP   7m27s\n\n   NAME                      READY   UP-TO-DATE   AVAILABLE   AGE\n   deployment.apps\/grafana   1\/1     1            1           7m29s\n\n   NAME                                 DESIRED   CURRENT   READY   AGE\n   replicaset.apps\/grafana-69946c9bd6   1         1         1       7m30s\n   ```\n\n1. Identify the **EXTERNAL-IP** value in the output and type it into your browser.\n\n   The Grafana sign-in page appears.\n\n1. To sign in, enter `admin` for both the username and password.\n\n1. If you do not see the EXTERNAL-IP, complete the following steps:\n\n   a. Run the following command to do a port-forwarding of the Grafana service on port `3000`.\n\n   ```bash\n   kubectl port-forward service\/grafana 3000:3000 --namespace=my-grafana\n   ```\n\n   For more information about port-forwarding, refer to [Use Port Forwarding to Access Applications in a Cluster](https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/port-forward-access-application-cluster\/).\n\n   b. Navigate to `localhost:3000` in your browser.\n\n   The Grafana sign-in page appears.\n\n   c. To sign in, enter `admin` for both the username and password.\n\n## Access Grafana using minikube\n\nThere are multiple ways to access the Grafana UI on a web browser when using minikube. 
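Before looking at the minikube-specific options, note that the managed-provider decision above (use the EXTERNAL-IP when one is assigned, otherwise fall back to port forwarding) can be scripted. This is a rough sketch, not an official procedure: it parses a canned sample of `kubectl get svc` output with `awk` (column positions assume the default table layout) and prints the fallback port-forward command when no external IP exists yet:

```shell
# Canned sample of `kubectl get svc` output (illustrative only).
svc_output='NAME      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
grafana   LoadBalancer   10.5.243.226   <pending>     3000:31171/TCP   7m27s'

# Pull the EXTERNAL-IP column for the grafana service (header row is skipped
# because its first field is NAME, not grafana).
external_ip=$(printf '%s\n' "$svc_output" | awk '$1 == "grafana" {print $4}')

if [ -z "$external_ip" ] || [ "$external_ip" = "<pending>" ]; then
  # No load balancer address yet: fall back to port forwarding.
  echo "kubectl port-forward service/grafana 3000:3000 --namespace=my-grafana"
else
  echo "http://$external_ip:3000"
fi
```

In practice, `kubectl get svc grafana --namespace=my-grafana -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` is a more robust way to read the external IP than parsing table columns.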
For more information about minikube, refer to [How to access applications running within minikube](https:\/\/minikube.sigs.k8s.io\/docs\/handbook\/accessing\/).\n\nThis section lists the two most common options for accessing an application running in minikube.\n\n### Option 1: Expose the service\n\nThis option uses the `type: LoadBalancer` in the `grafana.yaml` service manifest, which makes the service accessible through the `minikube service` command. For more information, refer to [minikube Service command usage](https:\/\/minikube.sigs.k8s.io\/docs\/commands\/service\/).\n\n1. Run the following command to obtain the Grafana service IP:\n\n   ```bash\n   minikube service grafana --namespace=my-grafana\n   ```\n\n   The output returns the Kubernetes URL for service in your local cluster.\n\n   ```bash\n   |------------|---------|-------------|------------------------------|\n   | NAMESPACE  |  NAME   | TARGET PORT |             URL              |\n   |------------|---------|-------------|------------------------------|\n   | my-grafana | grafana |        3000 | http:\/\/192.168.122.144:32182 |\n   |------------|---------|-------------|------------------------------|\n   Opening service my-grafana\/grafana in default browser...\n   http:\/\/192.168.122.144:32182\n   ```\n\n1. Run a `curl` command to verify whether a given connection should work in a browser under ideal circumstances.\n\n   ```bash\n   curl 192.168.122.144:32182\n   ```\n\n   The following example output shows that an endpoint has been located:\n\n   `<a href=\"\/login\">Found<\/a>.`\n\n1. Access the Grafana UI in the browser using the provided IP:Port from the command above. For example `192.168.122.144:32182`\n\n   The Grafana sign-in page appears.\n\n1. 
To sign in to Grafana, enter `admin` for both the username and password.\n\n### Option 2: Use port forwarding\n\nIf Option 1 does not work in your minikube environment (this mostly depends on the network), you can instead use port forwarding for the Grafana service on port `3000`.\n\nFor more information about port forwarding, refer to [Use Port Forwarding to Access Applications in a Cluster](https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/port-forward-access-application-cluster\/).\n\n1. To find the minikube IP address, run the following command:\n\n   ```bash\n   minikube ip\n   ```\n\n   The output contains the IP address that you use to access the Grafana Pod during port forwarding.\n\n   A Pod is the smallest deployment unit in Kubernetes and is the core building block for running applications in a Kubernetes cluster. For more information about Pods, refer to [Pods](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/).\n\n1. To obtain the Grafana Pod information, run the following command:\n\n   ```bash\n   kubectl get pods --namespace=my-grafana\n   ```\n\n   The output should look similar to the following:\n\n   ```bash\n   NAME                       READY   STATUS    RESTARTS   AGE\n   grafana-58445b6986-dxrrw   1\/1     Running   0          9m54s\n   ```\n\n   The output shows the Grafana Pod name in the `NAME` column, which you use for port forwarding.\n\n1. Run the following command to enable port forwarding on the Pod:\n\n   ```bash\n   kubectl port-forward pod\/grafana-58445b6986-dxrrw --namespace=my-grafana --address 0.0.0.0 3000:3000\n   ```\n\n1. To access the Grafana UI on the web browser, type the minikube IP along with the forwarded port. For example, `192.168.122.144:3000`.\n\n   The Grafana sign-in page appears.\n\n1. 
To sign in to Grafana, enter `admin` for both the username and password.\n\n## Update an existing deployment using a rolling update strategy\n\nRolling updates enable deployment updates to take place with no downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on nodes with available resources. For more information about rolling updates, refer to [Performing a Rolling Update](https:\/\/kubernetes.io\/docs\/tutorials\/kubernetes-basics\/update\/update-intro\/).\n\nThe following steps use the `kubectl annotate` command to add the metadata and keep track of the deployment. For more information about `kubectl annotate`, refer to [kubectl annotate documentation](https:\/\/jamesdefabia.github.io\/docs\/user-guide\/kubectl\/kubectl_annotate\/).\n\n\nInstead of using the `annotate` flag, you can still use the `--record` flag. However, it has been deprecated and will be removed in the future version of Kubernetes. See: https:\/\/github.com\/kubernetes\/kubernetes\/issues\/40422\n\n\n1. To view the current status of the rollout, run the following command:\n\n   ```bash\n   kubectl rollout history deployment\/grafana --namespace=my-grafana\n   ```\n\n   The output will look similar to this:\n\n   ```bash\n   deployment.apps\/grafana\n   REVISION  CHANGE-CAUSE\n   1         NONE\n   ```\n\n   The output shows that nothing has been updated or changed after applying the `grafana.yaml` file.\n\n1. To add metadata to keep record of the initial deployment, run the following command:\n\n   ```bash\n   kubectl annotate deployment\/grafana kubernetes.io\/change-cause='deployed the default base yaml file' --namespace=my-grafana\n   ```\n\n1. To review the rollout history and verify the changes, run the following command:\n\n   ```bash\n   kubectl rollout history deployment\/grafana --namespace=my-grafana\n   ```\n\n   You should see the updated information that you added in the `CHANGE-CAUSE` earlier.\n\n### Change Grafana image version\n\n1. 
To change the deployed Grafana version, run the following `kubectl edit` command:\n\n   ```bash\n   kubectl edit deployment grafana --namespace=my-grafana\n   ```\n\n1. In the editor, change the container image under the `kind: Deployment` section.\n\n   For example:\n\n   - From\n\n     - `image: grafana\/grafana-oss:10.0.1`\n\n   - To\n\n     - `image: grafana\/grafana-oss-dev:10.1.0-124419pre`\n\n1. Save the changes.\n\n   Once you save the file, you receive a message similar to the following:\n\n   ```bash\n   deployment.apps\/grafana edited\n   ```\n\n   This means that the changes have been applied.\n\n1. To verify that the rollout on the cluster is successful, run the following command:\n\n   ```bash\n   kubectl rollout status deployment grafana --namespace=my-grafana\n   ```\n\n   A successful deployment rollout means that the Grafana Dev cluster is now available.\n\n1. To check the statuses of all deployed objects, run the following command and include the `-o wide` flag to get more detailed output:\n\n   ```bash\n   kubectl get all --namespace=my-grafana -o wide\n   ```\n\n   You should see the newly deployed `grafana-oss-dev` image.\n\n1. To verify it, access the Grafana UI in the browser using the provided IP:Port from the command above.\n\n   The Grafana sign-in page appears.\n\n1. To sign in to Grafana, enter `admin` for both the username and password.\n1. In the top-right corner, click the help icon.\n\n   The version information appears.\n\n1. Add the `change-cause` metadata to keep track of this change by running the following command:\n\n   ```bash\n   kubectl annotate deployment grafana --namespace=my-grafana kubernetes.io\/change-cause='using grafana-oss-dev:10.1.0-124419pre for testing'\n   ```\n\n1. 
To verify, run the `kubectl rollout history` command:\n\n   ```bash\n   kubectl rollout history deployment grafana --namespace=my-grafana\n   ```\n\n   You will see an output similar to this:\n\n   ```bash\n   deployment.apps\/grafana\n   REVISION  CHANGE-CAUSE\n   1         deploying the default yaml\n   2         using grafana-oss-dev:10.1.0-124419pre for testing\n   ```\n\nThis means that `REVISION#2` is the current version.\n\n\nThe last line of the `kubectl rollout history deployment` command output is the one which is currently active and running on your Kubernetes environment.\n\n\n### Roll back a deployment\n\nWhen the Grafana deployment becomes unstable due to crash looping, bugs, and so on, you can roll back a deployment to an earlier version (a `REVISION`).\n\nBy default, Kubernetes deployment rollout history remains in the system so that you can roll back at any time. For more information, refer to\u00a0[Rolling Back to a Previous Revision](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/#rolling-back-to-a-previous-revision).\n\n1. To list all possible `REVISION` values, run the following command:\n\n   ```bash\n   kubectl rollout history deployment grafana --namespace=my-grafana\n   ```\n\n1. To roll back to a previous version, run the `kubectl rollout undo` command and provide a revision number.\n\n   Example: To roll back to a previous version, specify the `REVISION` number, which appears after you run the `kubectl rollout history deployment` command, in the `--to-revision` parameter.\n\n   ```bash\n   kubectl rollout undo deployment grafana --to-revision=1 --namespace=my-grafana\n   ```\n\n1. To verify that the rollback on the cluster is successful, run the following command:\n\n   ```bash\n   kubectl rollout status deployment grafana --namespace=my-grafana\n   ```\n\n1. Access the Grafana UI in the browser using the provided IP:Port from the command above.\n\n   The Grafana sign-in page appears.\n\n1. 
To sign in to Grafana, enter `admin` for both the username and password.\n1. In the top-right corner, click the help icon to display the version number.\n\n1. To see the new rollout history, run the following command:\n\n   ```bash\n   kubectl rollout history deployment grafana --namespace=my-grafana\n   ```\n\nIf you need to go back to any other `REVISION`, just repeat the steps above and use the correct revision number in the `--to-revision` parameter.\n\n## Provision Grafana resources using configuration files\n\nProvisioning can add, update, or delete resources specified in your configuration files when Grafana starts. For detailed information, refer to [Grafana Provisioning](\/docs\/grafana\/<GRAFANA_VERSION>\/administration\/provisioning).\n\nThis section outlines general instructions for provisioning Grafana resources within Kubernetes, using a persistent volume to supply the configuration files to the Grafana pod.\n\n1. Add a new `PersistentVolumeClaim` to the `grafana.yaml` file.\n\n   ```yaml\n   ---\n   apiVersion: v1\n   kind: PersistentVolumeClaim\n   metadata:\n     name: grafana-provisioning-pvc\n   spec:\n     accessModes:\n       - ReadWriteOnce\n     resources:\n       requests:\n         storage: 1Mi\n   ```\n\n1. In the `grafana.yaml` file, mount the persistent volume into `\/etc\/grafana\/provisioning` as follows.\n\n   ```yaml\n   ...\n       volumeMounts:\n         - mountPath: \/etc\/grafana\/provisioning\n           name: grafana-provisioning-pv\n         ...\n   volumes:\n     - name: grafana-provisioning-pv\n       persistentVolumeClaim:\n         claimName: grafana-provisioning-pvc\n   ...\n   ```\n\n1. Find or create the provisioning resources you want to add. For instance, create an `alerting.yaml` file that adds a mute timing (an alerting resource).\n\n   ```yaml\n   apiVersion: 1\n   muteTimes:\n     - orgId: 1\n       name: MuteWeekends\n       time_intervals:\n         - weekdays: [saturday, sunday]\n   ```\n\n1. 
By default, configuration files for alerting resources need to be placed in the `provisioning\/alerting` directory.\n\n   Save the `alerting.yaml` file in a directory named `alerting`, as we will next supply this `alerting` directory to the `\/etc\/grafana\/provisioning` folder of the Grafana pod.\n\n1. Verify first the content of the provisioning directory in the running Grafana pod.\n\n   ```bash\n   kubectl exec -n my-grafana <pod_name> -- ls \/etc\/grafana\/provisioning\/\n   ```\n\n   ```bash\n   kubectl exec -n my-grafana <pod_name> -- ls \/etc\/grafana\/provisioning\/alerting\n   ```\n\n   Because the `alerting` folder is not available yet, the last command should output a `No such file or directory` error.\n\n1. Copy the local `alerting` directory to `\/etc\/grafana\/provisioning\/` in the Grafana pod.\n\n   ```bash\n   kubectl cp alerting my-grafana\/<pod_name>:\/etc\/grafana\/provisioning\/\n   ```\n\n   You can follow the same process to provision additional Grafana resources by supplying the following folders:\n\n   - `provisioning\/dashboards`\n   - `provisioning\/datasources`\n   - `provisioning\/plugins`\n\n1. Verify the `alerting` directory in the running Grafana pod includes the `alerting.yaml` file.\n\n   ```bash\n   kubectl exec -n my-grafana <pod_name> -- ls \/etc\/grafana\/provisioning\/alerting\n   ```\n\n1. Restart the Grafana pod to provision the resources.\n\n   ```bash\n   kubectl rollout restart -n my-grafana deployment --selector=app=grafana\n   ```\n\n   Note that `rollout restart` kills the previous pod and scales a new pod. When the old pod terminates, you may have to enable port-forwarding in the new pod. For instructions, refer to the previous sections about port forwarding in this guide.\n\n1. 
Verify the Grafana resources are properly provisioned within the Grafana instance.\n\n## Troubleshooting\n\nThis section includes troubleshooting tips you might find helpful when deploying Grafana on Kubernetes.\n\n### Collecting logs\n\nIt is important to view the Grafana server logs while troubleshooting any issues.\n\n1. To check the Grafana logs, run the following command:\n\n   ```bash\n   # dump Pod logs for a Deployment (single-container case)\n   kubectl logs --namespace=my-grafana deploy\/grafana\n   ```\n\n1. If you have multiple containers running in the deployment, run the following command to obtain the logs only for the Grafana deployment:\n\n   ```bash\n   # dump Pod logs for a Deployment (multi-container case)\n   kubectl logs --namespace=my-grafana deploy\/grafana -c grafana\n   ```\n\nFor more information about accessing Kubernetes application logs, refer to [Pods](https:\/\/kubernetes.io\/docs\/reference\/kubectl\/cheatsheet\/#interacting-with-running-pods) and [Deployments](https:\/\/kubernetes.io\/docs\/reference\/kubectl\/cheatsheet\/#interacting-with-deployments-and-services).\n\n### Increasing log levels to debug mode\n\nBy default, the Grafana log level is set to `info`, but you can increase it to `debug` mode to fetch information needed to diagnose and troubleshoot a problem. For more information about Grafana log levels, refer to [Configuring logs](\/docs\/grafana\/latest\/setup-grafana\/configure-grafana#log).\n\nThe following example uses the Kubernetes ConfigMap which is an API object that stores non-confidential data in key-value pairs. For more information, refer to [Kubernetes ConfigMap Concept](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/configmap\/).\n\n1. 
Create an empty file named `grafana.ini` and add the following:\n\n   ```ini\n   [log]\n   ; # Either \"debug\", \"info\", \"warn\", \"error\", \"critical\", default is \"info\"\n   ; # we change from info to debug level\n   level = debug\n   ```\n\n   This example adds the portion of the log section from the configuration file. You can refer to the [Configure Grafana](\/docs\/grafana\/latest\/setup-grafana\/configure-grafana\/) documentation to view all the default configuration settings.\n\n1. To add the configuration file into the Kubernetes cluster via the ConfigMap object, run the following command:\n\n   ```bash\n   kubectl create configmap ge-config --from-file=\/path\/to\/file\/grafana.ini --namespace=my-grafana\n   ```\n\n1. To verify the ConfigMap object creation, run the following command:\n\n   ```bash\n   kubectl get configmap --namespace=my-grafana\n   ```\n\n1. Open the `grafana.yaml` file and, in the Deployment section, provide the mount path to the custom configuration (`\/etc\/grafana`) and reference the newly created ConfigMap for it.\n\n   ```yaml\n   ---\n   apiVersion: apps\/v1\n   kind: Deployment\n   metadata:\n     labels:\n       app: grafana\n     name: grafana\n   # the rest of the code remains the same.\n   ...\n   ....\n   ...\n               requests:\n               cpu: 250m\n               memory: 750Mi\n           volumeMounts:\n             - mountPath: \/var\/lib\/grafana\n               name: grafana-pv\n              # This is to mount the volume for the custom configuration\n             - mountPath: \/etc\/grafana\n               name: ge-config\n       volumes:\n         - name: grafana-pv\n           persistentVolumeClaim:\n             claimName: grafana-pvc\n          # This is to provide the reference to the ConfigMap for the volume\n         - name: ge-config\n           configMap:\n             name: ge-config\n   ```\n\n1. 
Deploy the manifest using the following `kubectl apply` command:\n\n   ```bash\n   kubectl apply -f grafana.yaml --namespace=my-grafana\n   ```\n\n1. To verify the status, run the following commands:\n\n   ```bash\n   # first check the rollout status\n   kubectl rollout status deployment grafana --namespace=my-grafana\n\n   # then check the deployment and configMap information\n   kubectl get all --namespace=my-grafana\n   ```\n\n1. To verify it, access the Grafana UI in the browser using the provided IP:Port.\n\n   The Grafana sign-in page appears.\n\n1. To sign in to Grafana, enter `admin` for both the username and password.\n\n1. Navigate to **Server Admin > Settings** and then search for `log`.\n\n   You should see the log level set to `debug`.\n\n### Using the `--dry-run` flag\n\nYou can use the Kubernetes `--dry-run` flag to send requests to modifying endpoints and determine if the request would have succeeded.\n\nPerforming a dry run can be useful for catching errors or unintended consequences before they occur.\u00a0For more information, refer to [Kubernetes Dry-run](https:\/\/github.com\/kubernetes\/enhancements\/blob\/master\/keps\/sig-api-machinery\/576-dry-run\/README.md).\n\nExample:\n\nThe following example shows how to perform a dry run when you make changes to the `grafana.yaml` file, such as using a new image version or adding new labels, and you want to determine if there are syntax errors or conflicts.\n\nTo perform a dry run, run the following command:\n\n```bash\nkubectl apply -f grafana.yaml --dry-run=server --namespace=my-grafana\n```\n\nIf there are no errors, then the output will look similar to this:\n\n```bash\npersistentvolumeclaim\/grafana-pvc unchanged (server dry run)\ndeployment.apps\/grafana unchanged (server dry run)\nservice\/grafana unchanged (server dry run)\n```\n\nIf there are errors or warnings, you will see them in the terminal.\n\n## Remove Grafana\n\nIf you want to remove any of the Grafana deployment objects, use the `kubectl delete 
command`.\n\n1. If you want to remove the complete Grafana deployment, run the following command:\n\n   ```bash\n   kubectl delete -f grafana.yaml --namespace=my-grafana\n   ```\n\n   This command deletes the deployment, persistentvolumeclaim, and service objects.\n\n1. To delete the ConfigMap, run the following command:\n\n   ```bash\n   kubectl delete configmap ge-config --namespace=my-grafana\n   ```\n\n## Deploy Grafana Enterprise on Kubernetes\n\nThe process for deploying Grafana Enterprise is almost identical to the preceding process, except for additional steps that are required for adding your license file.\n\n### Obtain Grafana Enterprise license\n\nTo run Grafana Enterprise, you need a valid license.\nTo obtain a license, [contact a Grafana Labs representative](\/contact?about=grafana-enterprise).\nThis topic assumes that you have a valid license in a `license.jwt` file.\nAssociate your license with a URL that you can use later in the topic.\n\n### Create license secret\n\nCreate a Kubernetes secret from your license file using the following command:\n\n```bash\nkubectl create secret generic ge-license --from-file=\/path\/to\/your\/license.jwt\n```\n\n### Create Grafana Enterprise configuration\n\n1. Create a Grafana configuration file with the name `grafana.ini`.\n\n1. Paste the following contents into the file you created:\n\n   ```ini\n   [enterprise]\n   license_path = \/etc\/grafana\/license\/license.jwt\n   [server]\n   root_url = \/your\/license\/root\/url\n   ```\n\n1. Update the `root_url` field to the URL associated with the license provided to you.\n\n### Create ConfigMap for Grafana Enterprise configuration\n\nCreate a Kubernetes ConfigMap from your `grafana.ini` file with the following command:\n\n```bash\nkubectl create configmap ge-config --from-file=\/path\/to\/your\/grafana.ini\n```\n\n### Create Grafana Enterprise Kubernetes manifest\n\n1. 
Create a `grafana.yaml` file, and copy-and-paste the following content into it.\n\n   The following YAML is identical to the one for a Grafana installation, except for the additional references to the Configmap that contains your Grafana configuration file and the secret that has your license.\n\n   ```yaml\n   ---\n   apiVersion: v1\n   kind: PersistentVolumeClaim\n   metadata:\n     name: grafana-pvc\n   spec:\n     accessModes:\n       - ReadWriteOnce\n     resources:\n       requests:\n         storage: 1Gi\n   ---\n   apiVersion: apps\/v1\n   kind: Deployment\n   metadata:\n     labels:\n       app: grafana\n     name: grafana\n   spec:\n     selector:\n       matchLabels:\n         app: grafana\n     template:\n       metadata:\n         labels:\n           app: grafana\n       spec:\n         securityContext:\n           fsGroup: 472\n           supplementalGroups:\n             - 0\n         containers:\n           - image: grafana\/grafana-enterprise:latest\n             imagePullPolicy: IfNotPresent\n             name: grafana\n             ports:\n               - containerPort: 3000\n                 name: http-grafana\n                 protocol: TCP\n             readinessProbe:\n               failureThreshold: 3\n               httpGet:\n                 path: \/robots.txt\n                 port: 3000\n                 scheme: HTTP\n               initialDelaySeconds: 10\n               periodSeconds: 30\n               successThreshold: 1\n               timeoutSeconds: 2\n             resources:\n               limits:\n                 memory: 4Gi\n               requests:\n                 cpu: 100m\n                 memory: 2Gi\n             volumeMounts:\n               - mountPath: \/var\/lib\/grafana\n                 name: grafana-pv\n               - mountPath: \/etc\/grafana\n                 name: ge-config\n               - mountPath: \/etc\/grafana\/license\n                 name: ge-license\n         volumes:\n           - name: 
grafana-pv\n             persistentVolumeClaim:\n               claimName: grafana-pvc\n           - name: ge-config\n             configMap:\n               name: ge-config\n           - name: ge-license\n             secret:\n               secretName: ge-license\n   ---\n   apiVersion: v1\n   kind: Service\n   metadata:\n     name: grafana\n   spec:\n     ports:\n       - port: 3000\n         protocol: TCP\n         targetPort: http-grafana\n     selector:\n       app: grafana\n     sessionAffinity: None\n     type: LoadBalancer\n   ```\n\n   \n   If you use `LoadBalancer` in the Service and depending on your cloud platform and network configuration, doing so might expose your Grafana instance to the Internet. To eliminate this risk, use `ClusterIP` to restrict access from within the cluster Grafana is deployed to.\n   \n\n1. To send the manifest to Kubernetes API Server, run the following command:\n   `kubectl apply -f grafana.yaml`\n\n1. To verify the manifest was sent, run the following command:\n   `kubectl port-forward service\/grafana 3000:3000`\n\n1. Navigate to `localhost:3000` in your browser.\n\n   You should see the Grafana login page.\n\n1. Use `admin` for both the username and password to login.\n1. 
To verify you are working with an enterprise license, scroll to the bottom of the page where you should see `Enterprise (Licensed)`.","site":"grafana setup","answers_cleaned":"    aliases            installation kubernetes  description  Guide for deploying Grafana on Kubernetes labels    products        enterprise       oss menuTitle  Grafana on Kubernetes title  Deploy Grafana on Kubernetes weight  500        Deploy Grafana on Kubernetes  On this page  you will find instructions for installing and running Grafana on Kubernetes using Kubernetes manifests for the setup  If Helm is your preferred option  refer to  Grafana Helm community charts  https   github com grafana helm charts    Watch this video to learn more about installing Grafana on Kubernetes       Before you begin  To follow this guide     You need the latest version of  Kubernetes  https   kubernetes io   running either locally or remotely on a public or private cloud     If you plan to use it in a local environment  you can use various Kubernetes options such as  minikube  https   minikube sigs k8s io docs     kind  https   kind sigs k8s io     Docker Desktop  https   docs docker com desktop kubernetes    and others     If you plan to use Kubernetes in a production setting  it s recommended to utilize managed cloud services like  Google Kubernetes Engine  GKE   https   cloud google com kubernetes engine    Amazon Elastic Kubernetes Service  EKS   https   aws amazon com eks    or  Azure Kubernetes Service  AKS   https   azure microsoft com en us products kubernetes service        System requirements  This section provides minimum hardware and software requirements       Minimum Hardware Requirements    Disk space  1 GB   Memory  750 MiB  approx 750 MB    CPU  250m  approx 0 25 cores       Supported databases  For a list of supported databases  refer to  supported databases   docs grafana latest setup grafana installation supported databases        Supported web browsers  For a list of support web 
For a list of supported web browsers, refer to [supported web browsers](/docs/grafana/latest/setup-grafana/installation/supported-web-browsers/).

Enable port `3000` in your network environment, as this is the Grafana default port.

## Deploy Grafana OSS on Kubernetes

This section explains how to install Grafana OSS using Kubernetes. If you want to install Grafana Enterprise on Kubernetes, refer to [Deploy Grafana Enterprise on Kubernetes](#deploy-grafana-enterprise-on-kubernetes).

If you deploy an application in Kubernetes, it will use the default namespace, which may already have other applications running. This can result in conflicts and other issues. It is recommended to create a new namespace in Kubernetes to better organize, allocate, and manage cluster resources. For more information about namespaces, refer to the official [Kubernetes documentation](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).

1. To create a namespace, run the following command:

   ```bash
   kubectl create namespace my-grafana
   ```

   In this example, the namespace is `my-grafana`.

1. To verify and view the newly created namespace, run the following command:

   ```bash
   kubectl get namespace my-grafana
   ```

   The output of the command provides more information about the newly created namespace.

1. Create a YAML manifest file named `grafana.yaml`. This file will contain the necessary code for deployment.

   ```bash
   touch grafana.yaml
   ```

   In the next step, you define the following three objects in the YAML file.

   | Object                        | Description                                                              |
   |-------------------------------|--------------------------------------------------------------------------|
   | Persistent Volume Claim (PVC) | This object stores the data.                                             |
   | Service                       | This object provides network access to the Pod defined in the deployment. |
   | Deployment                    | This object is responsible for creating the pods, ensuring they stay up to date, and managing ReplicaSets and rolling updates. |

1. Copy and paste the following contents and save it in the `grafana.yaml` file.

   ```yaml
   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: grafana-pvc
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     labels:
       app: grafana
     name: grafana
   spec:
     selector:
       matchLabels:
         app: grafana
     template:
       metadata:
         labels:
           app: grafana
       spec:
         securityContext:
           fsGroup: 472
           supplementalGroups:
             - 0
         containers:
           - name: grafana
             image: grafana/grafana:latest
             imagePullPolicy: IfNotPresent
             ports:
               - containerPort: 3000
                 name: http-grafana
                 protocol: TCP
             readinessProbe:
               failureThreshold: 3
               httpGet:
                 path: /robots.txt
                 port: 3000
                 scheme: HTTP
               initialDelaySeconds: 10
               periodSeconds: 30
               successThreshold: 1
               timeoutSeconds: 2
             livenessProbe:
               failureThreshold: 3
               initialDelaySeconds: 30
               periodSeconds: 10
               successThreshold: 1
               tcpSocket:
                 port: 3000
               timeoutSeconds: 1
             resources:
               requests:
                 cpu: 250m
                 memory: 750Mi
             volumeMounts:
               - mountPath: /var/lib/grafana
                 name: grafana-pv
         volumes:
           - name: grafana-pv
             persistentVolumeClaim:
               claimName: grafana-pvc
   ---
   apiVersion: v1
   kind: Service
   metadata:
     name: grafana
   spec:
     ports:
       - port: 3000
         protocol: TCP
         targetPort: http-grafana
     selector:
       app: grafana
     sessionAffinity: None
     type: LoadBalancer
   ```

1. Run the following command to send the manifest to the Kubernetes API server:

   ```bash
   kubectl apply -f grafana.yaml --namespace=my-grafana
   ```

   This command creates the PVC, Deployment, and Service objects.

1. Complete the following steps to verify the deployment status of each object.

   a. For the PVC, run the following command:

   ```bash
   kubectl get pvc --namespace=my-grafana -o wide
   ```

   b. For the Deployment, run the following command:

   ```bash
   kubectl get deployments --namespace=my-grafana -o wide
   ```

   c. For the Service, run the following command:

   ```bash
   kubectl get svc --namespace=my-grafana -o wide
   ```

### Access Grafana on managed Kubernetes providers

In this task, you access Grafana deployed on a managed Kubernetes provider using a web browser. Accessing Grafana via a web browser is straightforward if it is deployed on a managed Kubernetes provider, because it uses the cloud provider's `LoadBalancer`, for which the external load balancer routes are created automatically.

1. Run the following command to obtain the deployment information:

   ```bash
   kubectl get all --namespace=my-grafana
   ```

   The output returned should look similar to the following:

   ```bash
   NAME                           READY   STATUS    RESTARTS   AGE
   pod/grafana-69946c9bd6-kwjb6   1/1     Running   0          7m27s

   NAME              TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
   service/grafana   LoadBalancer   10.5.243.226   1.120.130.330   3000:31171/TCP   7m27s

   NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/grafana   1/1     1            1           7m29s

   NAME                                 DESIRED   CURRENT   READY   AGE
   replicaset.apps/grafana-69946c9bd6   1         1         1       7m30s
   ```

1. Identify the **EXTERNAL-IP** value in the output and type it into your browser.

   The Grafana sign-in page appears.

1. To sign in, enter `admin` for both the username and password.

1. If you do not see the **EXTERNAL-IP**, complete the following steps:

   a. Run the following command to port-forward the Grafana service on port `3000`:

   ```bash
   kubectl port-forward service/grafana 3000:3000 --namespace=my-grafana
   ```

   For more information about port forwarding, refer to [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).

   b. Navigate to `localhost:3000` in your browser.

   The Grafana sign-in page appears.

   c. To sign in, enter `admin` for both the username and password.

### Access Grafana using minikube

There are multiple ways to access the Grafana UI in a web browser when using minikube. For more information about minikube, refer to [How to access applications running within minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/).

This section lists the two most common options for accessing an application running in minikube.

#### Option 1: Expose the service

This option uses `type: LoadBalancer` in the `grafana.yaml` service manifest, which makes the service accessible through the `minikube service` command. For more information, refer to [minikube Service command usage](https://minikube.sigs.k8s.io/docs/commands/service/).

1. Run the following command to obtain the Grafana service IP:

   ```bash
   minikube service grafana --namespace=my-grafana
   ```
   The output returns the Kubernetes URL for the service in your local cluster:

   ```bash
   |------------|---------|-------------|------------------------------|
   | NAMESPACE  |  NAME   | TARGET PORT |             URL              |
   |------------|---------|-------------|------------------------------|
   | my-grafana | grafana |        3000 | http://192.168.122.144:32182 |
   |------------|---------|-------------|------------------------------|
   Opening service my-grafana/grafana in default browser...
   http://192.168.122.144:32182
   ```

1. Run a `curl` command to verify whether a given connection should work in a browser under ideal circumstances:

   ```bash
   curl 192.168.122.144:32182
   ```

   The following example output shows that an endpoint has been located:

   ```bash
   <a href="/login">Found</a>.
   ```

1. Access the Grafana UI in the browser using the IP and port provided by the command above, for example `192.168.122.144:32182`.

   The Grafana sign-in page appears.

1. To sign in to Grafana, enter `admin` for both the username and password.

#### Option 2: Use port forwarding

If Option 1 does not work in your minikube environment (this mostly depends on the network), then as an alternative you can use port forwarding for the Grafana service on port `3000`. For more information about port forwarding, refer to [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).

1. To find the minikube IP address, run the following command:

   ```bash
   minikube ip
   ```

   The output contains the IP address that you use to access the Grafana Pod during port forwarding.

   A Pod is the smallest deployment unit in Kubernetes and is the core building block for running applications in a Kubernetes cluster. For more information about Pods, refer to [Pods](https://kubernetes.io/docs/concepts/workloads/pods/).

1. To obtain the Grafana Pod information, run the following command:
   ```bash
   kubectl get pods --namespace=my-grafana
   ```

   The output should look similar to the following:

   ```bash
   NAME                       READY   STATUS    RESTARTS   AGE
   grafana-58445b6986-dxrrw   1/1     Running   0          9m54s
   ```

   The output shows the Grafana Pod name in the **NAME** column, which you use for port forwarding.

1. Run the following command to enable port forwarding on the Pod:

   ```bash
   kubectl port-forward pod/grafana-58445b6986-dxrrw --namespace=my-grafana --address 0.0.0.0 3000:3000
   ```

1. To access the Grafana UI in the web browser, type the minikube IP along with the forwarded port, for example `192.168.122.144:3000`.

   The Grafana sign-in page appears.

1. To sign in to Grafana, enter `admin` for both the username and password.

## Update an existing deployment using a rolling update strategy

Rolling updates enable deployment updates to take place with no downtime by incrementally updating Pod instances with new ones. The new Pods are scheduled on nodes with available resources. For more information about rolling updates, refer to [Performing a Rolling Update](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/).

The following steps use the `kubectl annotate` command to add the metadata and keep track of the deployment. For more information about `kubectl annotate`, refer to the [kubectl annotate documentation](https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_annotate/).

Instead of the `annotate` flag, you can still use the `--record` flag. However, it has been deprecated and will be removed in a future version of Kubernetes. See [kubernetes/kubernetes#40422](https://github.com/kubernetes/kubernetes/issues/40422).

1. To view the current status of the rollout, run the following command:

   ```bash
   kubectl rollout history deployment/grafana --namespace=my-grafana
   ```

   The output will look similar to this:

   ```bash
   deployment.apps/grafana
   REVISION  CHANGE-CAUSE
   1         <none>
   ```

   The output shows that nothing has been updated or changed after applying the `grafana.yaml` file.

1. To add metadata to keep a record of the initial deployment, run the following command:

   ```bash
   kubectl annotate deployment/grafana kubernetes.io/change-cause='deployed the default base yaml file' --namespace=my-grafana
   ```

1. To review the rollout history and verify the changes, run the following command:

   ```bash
   kubectl rollout history deployment/grafana --namespace=my-grafana
   ```

   You should see the updated information that you added to **CHANGE-CAUSE** earlier.

### Change Grafana image version

1. To change the deployed Grafana version, run the following `kubectl edit` command:

   ```bash
   kubectl edit deployment grafana --namespace=my-grafana
   ```

1. In the editor, change the container image under the `kind: Deployment` section.

   For example:

   - From: `image: grafana/grafana-oss:10.0.1`
   - To: `image: grafana/grafana-oss-dev:10.1.0-124419pre`

1. Save the changes.

   Once you save the file, you receive a message similar to the following:

   ```bash
   deployment.apps/grafana edited
   ```

   This means that the changes have been applied.

1. To verify that the rollout on the cluster is successful, run the following command:

   ```bash
   kubectl rollout status deployment grafana --namespace=my-grafana
   ```

   A successful deployment rollout means that the Grafana dev cluster is now available.

1. To check the statuses of all deployed objects, run the following command and include the `-o wide` flag to get more detailed output:

   ```bash
   kubectl get all --namespace=my-grafana -o wide
   ```

   You should see the newly deployed `grafana-oss-dev` image.

1. To verify it, access the Grafana UI in the browser using the IP and port provided by the command above.

   The Grafana sign-in page appears.

1. To sign in to Grafana, enter `admin` for both the username and password.

1. In the top right corner, click the help icon.

   The version information appears.

1. Add the change-cause metadata to keep track of things, using the following command:

   ```bash
   kubectl annotate deployment/grafana --namespace=my-grafana kubernetes.io/change-cause='using grafana-oss-dev:10.1.0-124419pre for testing'
   ```

1. To verify, run the `kubectl rollout history` command:

   ```bash
   kubectl rollout history deployment/grafana --namespace=my-grafana
   ```

   You will see an output similar to this:

   ```bash
   deployment.apps/grafana
   REVISION  CHANGE-CAUSE
   1         deploying the default yaml
   2         using grafana-oss-dev:10.1.0-124419pre for testing
   ```

   This means that `REVISION 2` is the current version. The last line of the `kubectl rollout history deployment` command output is the one that is currently active and running on your Kubernetes environment.

## Roll back a deployment

When the Grafana deployment becomes unstable due to crash looping, bugs, and so on, you can roll back a deployment to an earlier version (a `REVISION`).

By default, the Kubernetes deployment rollout history remains in the system so that you can roll back at any time. For more information, refer to [Rolling Back to a Previous Revision](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-to-a-previous-revision).

1. To list all possible `REVISION` values, run the following command:

   ```bash
   kubectl rollout history deployment/grafana --namespace=my-grafana
   ```

1. To roll back to a previous version, run the `kubectl rollout undo` command and provide a revision number.

   Example: To roll back to a previous version, specify the `REVISION` number (which appears after you run the `kubectl rollout history deployment` command) in the `--to-revision` parameter:

   ```bash
   kubectl rollout undo deployment/grafana --to-revision=1 --namespace=my-grafana
   ```

1. To verify that the rollback on the cluster is successful, run the following command:
   ```bash
   kubectl rollout status deployment grafana --namespace=my-grafana
   ```

1. Access the Grafana UI in the browser using the IP and port provided by the command above.

   The Grafana sign-in page appears.

1. To sign in to Grafana, enter `admin` for both the username and password.

1. In the top right corner, click the help icon to display the version number.

1. To see the new rollout history, run the following command:

   ```bash
   kubectl rollout history deployment/grafana --namespace=my-grafana
   ```

If you need to go back to any other `REVISION`, just repeat the steps above and use the correct revision number in the `--to-revision` parameter.

## Provision Grafana resources using configuration files

Provisioning can add, update, or delete resources specified in your configuration files when Grafana starts. For detailed information, refer to [Grafana provisioning](/docs/grafana/<GRAFANA_VERSION>/administration/provisioning/).

This section outlines general instructions for provisioning Grafana resources within Kubernetes, using a persistent volume to supply the configuration files to the Grafana Pod.

1. Add a new `PersistentVolumeClaim` to the `grafana.yaml` file:

   ```yaml
   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: grafana-provisioning-pvc
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 1Mi
   ```

1. In the `grafana.yaml` file, mount the persistent volume into `/etc/grafana/provisioning` as follows:

   ```yaml
   volumeMounts:
     - mountPath: /etc/grafana/provisioning
       name: grafana-provisioning-pv
   volumes:
     - name: grafana-provisioning-pv
       persistentVolumeClaim:
         claimName: grafana-provisioning-pvc
   ```

1. Find or create the provisioning resources you want to add. For instance, create an `alerting.yaml` file adding a mute timing (alerting resource):
   ```yaml
   apiVersion: 1
   muteTimes:
     - orgId: 1
       name: MuteWeekends
       time_intervals:
         - weekdays: [saturday, sunday]
   ```

1. By default, configuration files for alerting resources need to be placed in the `provisioning/alerting` directory.

   Save the `alerting.yaml` file in a directory named `alerting`, as we will next supply this `alerting` directory to the `/etc/grafana/provisioning` folder of the Grafana Pod.

1. First, verify the content of the provisioning directory in the running Grafana Pod:

   ```bash
   kubectl exec -n my-grafana <pod-name> -- ls /etc/grafana/provisioning/
   ```

   ```bash
   kubectl exec -n my-grafana <pod-name> -- ls /etc/grafana/provisioning/alerting
   ```

   Because the `alerting` folder is not available yet, the last command should output a `No such file or directory` error.

1. Copy the local `alerting` directory to `/etc/grafana/provisioning/` in the Grafana Pod:

   ```bash
   kubectl cp alerting my-grafana/<pod-name>:/etc/grafana/provisioning/
   ```

   You can follow the same process to provision additional Grafana resources by supplying the following folders:

   - `provisioning/dashboards`
   - `provisioning/datasources`
   - `provisioning/plugins`

1. Verify that the `alerting` directory in the running Grafana Pod includes the `alerting.yaml` file:

   ```bash
   kubectl exec -n my-grafana <pod-name> -- ls /etc/grafana/provisioning/alerting
   ```

1. Restart the Grafana Pod to provision the resources:

   ```bash
   kubectl rollout restart -n my-grafana deployment --selector=app=grafana
   ```

   Note that `rollout restart` kills the previous Pod and scales up a new Pod. When the old Pod terminates, you may have to enable port forwarding to the new Pod. For instructions, refer to the previous sections about port forwarding in this guide.

1. Verify that the Grafana resources are properly provisioned within the Grafana instance.

## Troubleshooting

This section includes troubleshooting tips you might find helpful when deploying Grafana on Kubernetes.
### Collecting logs

It is important to view the Grafana server logs while troubleshooting any issues.

1. To check the Grafana logs, run the following command:

   ```bash
   # dump Pod logs for a Deployment (single-container case)
   kubectl logs --namespace=my-grafana deploy/grafana
   ```

1. If you have multiple containers running in the deployment, run the following command to obtain the logs only for the Grafana deployment:

   ```bash
   # dump Pod logs for a Deployment (multi-container case)
   kubectl logs --namespace=my-grafana deploy/grafana -c grafana
   ```

For more information about accessing Kubernetes application logs, refer to [Pods](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods) and [Deployments](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-deployments-and-services).

### Increasing log levels to debug mode

By default, the Grafana log level is set to `info`, but you can increase it to `debug` mode to fetch the information needed to diagnose and troubleshoot a problem. For more information about Grafana log levels, refer to [Configuring logs](/docs/grafana/latest/setup-grafana/configure-grafana/#log).

The following example uses a Kubernetes ConfigMap, which is an API object that stores non-confidential data in key-value pairs. For more information, refer to [Kubernetes ConfigMap Concept](https://kubernetes.io/docs/concepts/configuration/configmap/).

1. Create an empty file, name it `grafana.ini`, and add the following:

   ```ini
   [log]
   # Either "debug", "info", "warn", "error", "critical", default is "info"
   # we change from info to debug level
   level = debug
   ```

   This example adds the log section from the configuration file. You can refer to the [Configure Grafana](/docs/grafana/latest/setup-grafana/configure-grafana/) documentation to view all the default configuration settings.

1. To add the configuration file to the Kubernetes cluster via the ConfigMap object, run the following command:
   ```bash
   kubectl create configmap ge-config --from-file=/path/to/file/grafana.ini --namespace=my-grafana
   ```

1. To verify the ConfigMap object creation, run the following command:

   ```bash
   kubectl get configmap --namespace=my-grafana
   ```

1. Open the `grafana.yaml` file and, in the Deployment section, provide the mount path to the custom configuration (`/etc/grafana`) and reference the newly created ConfigMap for it:

   ```yaml
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     labels:
       app: grafana
     name: grafana
   # the rest of the code remains the same
   ...
             resources:
               requests:
                 cpu: 250m
                 memory: 750Mi
             volumeMounts:
               - mountPath: /var/lib/grafana
                 name: grafana-pv
               # This is to mount the volume for the custom configuration
               - mountPath: /etc/grafana
                 name: ge-config
         volumes:
           - name: grafana-pv
             persistentVolumeClaim:
               claimName: grafana-pvc
           # This is to provide the reference to the ConfigMap for the volume
           - name: ge-config
             configMap:
               name: ge-config
   ```

1. Deploy the manifest using the following `kubectl apply` command:

   ```bash
   kubectl apply -f grafana.yaml --namespace=my-grafana
   ```

1. To verify the status, run the following commands:

   ```bash
   # first check the rollout status
   kubectl rollout status deployment grafana --namespace=my-grafana

   # then check the deployment and ConfigMap information
   kubectl get all --namespace=my-grafana
   ```

1. To verify it, access the Grafana UI in the browser using the provided IP and port.

   The Grafana sign-in page appears.

1. To sign in to Grafana, enter `admin` for both the username and password.

1. Navigate to **Server Admin > Settings** and then search for `log`.

   You should see the level set to debug mode.
### Using the `--dry-run` command

You can use the Kubernetes `--dry-run` command to send requests to modifying endpoints and determine if the request would have succeeded. Performing a dry run can be useful for catching errors or unintended consequences before they occur. For more information, refer to [Kubernetes Dry run](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/576-dry-run/README.md).

Example:

The following example shows how to perform a dry run when you make changes to `grafana.yaml`, such as using a new image version or adding new labels, and you want to determine if there are syntax errors or conflicts.

To perform a dry run, run the following command:

```bash
kubectl apply -f grafana.yaml --dry-run=server --namespace=my-grafana
```

If there are no errors, then the output will look similar to this:

```bash
persistentvolumeclaim/grafana-pvc unchanged (server dry run)
deployment.apps/grafana unchanged (server dry run)
service/grafana unchanged (server dry run)
```

If there are errors or warnings, you will see them in the terminal.

## Remove Grafana

If you want to remove any of the Grafana deployment objects, use the `kubectl delete` command.

1. If you want to remove the complete Grafana deployment, run the following command:

   ```bash
   kubectl delete -f grafana.yaml --namespace=my-grafana
   ```

   This command deletes the Deployment, PersistentVolumeClaim, and Service objects.

1. To delete the ConfigMap, run the following command:

   ```bash
   kubectl delete configmap ge-config --namespace=my-grafana
   ```

## Deploy Grafana Enterprise on Kubernetes

The process for deploying Grafana Enterprise is almost identical to the preceding process, except for additional steps that are required for adding your license file.

### Obtain Grafana Enterprise license

To run Grafana Enterprise, you need a valid license. To obtain a license, [contact a Grafana Labs representative](/contact?about=grafana-enterprise). This topic assumes that you have a valid license in a `license.jwt` file. Associate your license with a URL that you can use later in the topic.
### Create license secret

Create a Kubernetes secret from your license file using the following command:

```bash
kubectl create secret generic ge-license --from-file=/path/to/your/license.jwt
```

### Create Grafana Enterprise configuration

1. Create a Grafana configuration file with the name `grafana.ini`.

1. Paste the following contents into the file you created:

   ```ini
   [enterprise]
   license_path = /etc/grafana/license/license.jwt

   [server]
   root_url = <your-license-root-url>
   ```

1. Update the `root_url` field to the URL associated with the license provided to you.

### Create ConfigMap for Grafana Enterprise configuration

Create a Kubernetes ConfigMap from your `grafana.ini` file with the following command:

```bash
kubectl create configmap ge-config --from-file=/path/to/your/grafana.ini
```

### Create Grafana Enterprise Kubernetes manifest

1. Create a `grafana.yaml` file, and copy and paste the following content into it.

   The following YAML is identical to the one for a Grafana OSS installation, except for the additional references to the ConfigMap that contains your Grafana configuration file and the secret that has your license.

   ```yaml
   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: grafana-pvc
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     labels:
       app: grafana
     name: grafana
   spec:
     selector:
       matchLabels:
         app: grafana
     template:
       metadata:
         labels:
           app: grafana
       spec:
         securityContext:
           fsGroup: 472
           supplementalGroups:
             - 0
         containers:
           - image: grafana/grafana-enterprise:latest
             imagePullPolicy: IfNotPresent
             name: grafana
             ports:
               - containerPort: 3000
                 name: http-grafana
                 protocol: TCP
             readinessProbe:
               failureThreshold: 3
               httpGet:
                 path: /robots.txt
                 port: 3000
                 scheme: HTTP
               initialDelaySeconds: 10
               periodSeconds: 30
               successThreshold: 1
               timeoutSeconds: 2
             resources:
               limits:
                 memory: 4Gi
               requests:
                 cpu: 100m
                 memory: 2Gi
             volumeMounts:
               - mountPath: /var/lib/grafana
                 name: grafana-pv
               - mountPath: /etc/grafana
                 name: ge-config
               - mountPath: /etc/grafana/license
                 name: ge-license
         volumes:
           - name: grafana-pv
             persistentVolumeClaim:
               claimName: grafana-pvc
           - name: ge-config
             configMap:
               name: ge-config
           - name: ge-license
             secret:
               secretName: ge-license
   ---
   apiVersion: v1
   kind: Service
   metadata:
     name: grafana
   spec:
     ports:
       - port: 3000
         protocol: TCP
         targetPort: http-grafana
     selector:
       app: grafana
     sessionAffinity: None
     type: LoadBalancer
   ```

   If you use `LoadBalancer` in the Service, then depending on your cloud platform and network configuration it might expose your Grafana instance to the Internet. To eliminate this risk, use `ClusterIP` to restrict access to within the cluster Grafana is deployed to.

1. To send the manifest to the Kubernetes API server, run the following command:

   `kubectl apply -f grafana.yaml`

1. To verify the manifest was applied, run the following command:
   `kubectl port-forward service/grafana 3000:3000`

1. Navigate to `localhost:3000` in your browser.

   You should see the Grafana login page.

1. Use `admin` for both the username and password to log in.

1. To verify you are working with an Enterprise license, scroll to the bottom of the page, where you should see **Enterprise (Licensed)**.
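The Enterprise configuration steps above (write `grafana.ini`, then create the ConfigMap and license secret) can be scripted as a small preflight check. This is a minimal sketch under stated assumptions: `https://grafana.example.com/` is a placeholder `root_url` you must replace with the URL tied to your license, and the `kubectl` commands are left commented so nothing is applied by accident.

```shell
#!/bin/sh
# Sketch: generate the Enterprise grafana.ini used by the ge-config ConfigMap.
# NOTE: the root_url below is a placeholder, not a real value.
cat > grafana.ini <<'EOF'
[enterprise]
license_path = /etc/grafana/license/license.jwt

[server]
root_url = https://grafana.example.com/
EOF

# Sanity-check that the license path matches the /etc/grafana/license mount
# used by the ge-license secret volume in the manifest above.
grep -q 'license_path = /etc/grafana/license/license.jwt' grafana.ini || exit 1
grep -q '^root_url' grafana.ini || exit 1
echo "grafana.ini is consistent with the manifest mount paths"

# Then, assuming license.jwt is in the current directory:
# kubectl create secret generic ge-license --from-file=license.jwt
# kubectl create configmap ge-config --from-file=grafana.ini
```

Running the checks before `kubectl create configmap` catches a mismatched mount path locally instead of after a failed pod start.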
---
aliases:
  - ../../installation/helm/
description: Guide for deploying Grafana using Helm Charts
labels:
  products:
    - oss
menuTitle: Grafana on Helm Charts
title: Deploy Grafana using Helm Charts
weight: 500
---

# Deploy Grafana using Helm Charts

This topic includes instructions for installing and running Grafana on Kubernetes using Helm Charts.

[Helm](https://helm.sh/) is an open-source command line tool used for managing Kubernetes applications. It is a graduated project in the [CNCF Landscape](https://www.cncf.io/projects/helm/).

The Grafana open-source community offers Helm Charts for running it on Kubernetes. Please be aware that the code is provided without any warranties. If you encounter any problems, you can report them to the [official GitHub repository](https://github.com/grafana/helm-charts/).

Watch this video to learn more about installing Grafana using Helm Charts:

## Before you begin

To install Grafana using Helm, ensure you have completed the following:

- Install a Kubernetes server on your machine. For information about installing Kubernetes, refer to [Install Kubernetes](https://kubernetes.io/docs/setup/).
- Install the latest stable version of Helm. For information on installing Helm, refer to [Install Helm](https://helm.sh/docs/intro/install/).

## Install Grafana using Helm

When you install Grafana using Helm, you complete the following tasks:

1. Set up the Grafana Helm repository, which provides a space in which you will install Grafana.

1. Deploy Grafana using Helm, which installs Grafana into a namespace.
Access Grafana, which provides steps to sign in to Grafana.\n\n### Set up the Grafana Helm repository\n\nTo set up the Grafana Helm repository so that you download the correct Grafana Helm charts on your machine, complete the following steps:\n\n1. To add the Grafana repository, use the following command syntax:\n\n   `helm repo add <DESIRED-NAME> <HELM-REPO-URL>`\n\n   The following example adds the `grafana` Helm repository.\n\n   ```bash\n   helm repo add grafana https:\/\/grafana.github.io\/helm-charts\n   ```\n\n1. Run the following command to verify the repository was added:\n\n   ```bash\n   helm repo list\n   ```\n\n   After you add the repository, you should see an output similar to the following:\n\n   ```bash\n   NAME    URL\n   grafana https:\/\/grafana.github.io\/helm-charts\n   ```\n\n1. Run the following command to update the repository to download the latest Grafana Helm charts:\n\n   ```bash\n   helm repo update\n   ```\n\n### Deploy the Grafana Helm charts\n\nAfter you have set up the Grafana Helm repository, you can deploy Grafana on your Kubernetes cluster.\n\nWhen you deploy Grafana Helm charts, use a separate namespace instead of relying on the default namespace. The default namespace might already have other applications running, which can lead to conflicts and other potential issues.\n\nWhen you create a new namespace in Kubernetes, you can better organize, allocate, and manage cluster resources. For more information, refer to [Namespaces](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/namespaces\/).\n\n1. To create a namespace, run the following command:\n\n   ```bash\n   kubectl create namespace monitoring\n   ```\n\n   You will see an output similar to this, which means that the namespace has been successfully created:\n\n   ```bash\n   namespace\/monitoring created\n   ```\n\n1. 
Search for the official `grafana\/grafana` repository using the command:\n\n   `helm search repo <repo-name\/package-name>`\n\n   For example, the following command provides a list of the Grafana Helm Charts from which you will install the latest version of the Grafana chart.\n\n   ```bash\n   helm search repo grafana\/grafana\n   ```\n\n1. Run the following command to deploy the Grafana Helm Chart inside your namespace.\n\n   ```bash\n   helm install my-grafana grafana\/grafana --namespace monitoring\n   ```\n\n   Where:\n\n   - `helm install`: Installs the chart by deploying it on the Kubernetes cluster\n   - `my-grafana`: The logical chart name that you provided\n   - `grafana\/grafana`: The repository and package name to install\n   - `--namespace`: The Kubernetes namespace (in this example, `monitoring`) where you want to deploy the chart\n\n1. To verify the deployment status, run the following command and verify that `deployed` appears in the **STATUS** column:\n\n   ```bash\n   helm list -n monitoring\n   ```\n\n   You should see an output similar to the following:\n\n   ```bash\n   NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART          APP VERSION\n   my-grafana      monitoring      1               2024-01-13 23:06:42.737989554 +0000 UTC deployed        grafana-6.59.0 10.1.0\n   ```\n\n1. To check the overall status of all the objects in the namespace, run the following command:\n\n   ```bash\n   kubectl get all -n monitoring\n   ```\n\n   If you encounter errors or warnings in the **STATUS** column, check the logs and refer to the Troubleshooting section of this documentation.\n\n### Access Grafana\n\nThis section describes the steps you must complete to access Grafana via a web browser.\n\n1. Run the following `helm get notes` command:\n\n   ```bash\n   helm get notes my-grafana -n monitoring\n   ```\n\n   This command will print out the chart notes. 
The `NOTES` output provides complete instructions about:\n\n   - How to decode the login password for the Grafana admin account\n   - How to access the Grafana service in your web browser\n\n1. To get the Grafana admin password, run the following command:\n\n   ```bash\n   kubectl get secret --namespace monitoring my-grafana -o jsonpath=\"{.data.admin-password}\" | base64 --decode ; echo\n   ```\n\n   This prints the decoded `base64` string, which is the password for the admin account.\n\n1. Save the decoded password to a file on your machine.\n\n1. To access the Grafana service in your web browser, run the following command:\n\n   ```bash\n   export POD_NAME=$(kubectl get pods --namespace monitoring -l \"app.kubernetes.io\/name=grafana,app.kubernetes.io\/instance=my-grafana\" -o jsonpath=\"{.items[0].metadata.name}\")\n   ```\n\n   This command exports a shell variable named `POD_NAME` containing the full name of the deployed pod.\n\n1. Run the following port-forwarding command to direct the Grafana pod to listen on port `3000`:\n\n   ```bash\n   kubectl --namespace monitoring port-forward $POD_NAME 3000\n   ```\n\n   For more information about port-forwarding, refer to [Use Port Forwarding to Access Applications in a Cluster](https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/port-forward-access-application-cluster\/).\n\n1. Navigate to `127.0.0.1:3000` in your browser. The Grafana sign-in page appears.\n\n1. To sign in, enter `admin` for the username.\n\n1. For the password, paste the decoded password that you saved to a file earlier.\n\n## Customize Grafana default configuration\n\nHelm is a popular package manager for Kubernetes. It bundles Kubernetes resource manifests to be re-used across different environments. 
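For example, a minimal override file might look like the following sketch (the `replicas` and `service.type` keys are exposed by the upstream Grafana chart, but verify the names against your chart version; the file name is hypothetical):\n\n```yaml\n# my-values.yaml (hypothetical file name)\n# Apply with: helm upgrade my-grafana grafana\/grafana -f my-values.yaml -n monitoring\nreplicas: 2\nservice:\n  type: NodePort\n```\n\n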
These manifests are written in a templating language, allowing you to provide configuration values via the `values.yaml` file, or in-line using Helm, to replace the placeholders in the manifest where these configurations should reside.\n\nThe `values.yaml` file allows you to customize the chart's configuration by specifying values for various parameters such as image versions, resource limits, service configurations, etc.\n\nBy modifying the values in the `values.yaml` file, you can tailor the deployment of a Helm chart to your specific requirements by using the `helm install` or `helm upgrade` commands. For more information about configuring Helm, refer to [Values Files](https:\/\/helm.sh\/docs\/chart_template_guide\/values_files\/).\n\n### Download the values.yaml file\n\nTo make any configuration changes, download the `values.yaml` file from the Grafana Helm Charts repository:\n\nhttps:\/\/github.com\/grafana\/helm-charts\/blob\/main\/charts\/grafana\/values.yaml\n\n\nDepending on your use case requirements, you can use a single YAML file that contains your configuration changes or you can create multiple YAML files.\n\n\n### Enable persistent storage **(recommended)**\n\nBy default, persistent storage is disabled, which means that Grafana uses ephemeral storage, and all data will be stored within the container's file system. This data will be lost if the container is stopped, restarted, or if the container crashes.\n\nIt is highly recommended that you enable persistent storage so that your data persists across container restarts or failures, providing a reliable setup for running Grafana in production environments.\n\nTo enable persistent storage in the Grafana Helm charts, complete the following steps:\n\n1. Open the `values.yaml` file in your favorite editor.\n\n1. 
Edit the values and, under the `persistence` section, change the `enabled` flag from `false` to `true`:\n\n   ```yaml\n   .......\n   ............\n   ......\n   persistence:\n     type: pvc\n     enabled: true\n     # storageClassName: default\n   .......\n   ............\n   ......\n   ```\n\n1. Run the following `helm upgrade` command by specifying the `values.yaml` file to make the changes take effect:\n\n   ```bash\n   helm upgrade my-grafana grafana\/grafana -f values.yaml -n monitoring\n   ```\n\nThe PVC will now store all your data such as dashboards, data sources, and so on.\n\n### Install plugins (e.g. Zabbix app, Clock panel, etc.)\n\nYou can install plugins in Grafana from the official and community [plugins page](https:\/\/grafana.com\/grafana\/plugins). These plugins allow you to add new visualization types, data sources, and applications to help you better visualize your data.\n\nGrafana currently supports three types of plugins: panel, data source, and app. For more information on managing plugins, refer to [Plugin Management](https:\/\/grafana.com\/docs\/grafana\/latest\/administration\/plugin-management\/).\n\nTo install plugins in the Grafana Helm Charts, complete the following steps:\n\n1. Open the `values.yaml` file in your favorite editor.\n\n1. Find the line that says `plugins:` and under that section, define the plugins that you want to install.\n\n   ```yaml\n   .......\n   ............\n   ......\n   plugins:\n   # here we are installing two plugins, make sure to keep the indentation correct as written here.\n\n   - alexanderzobnin-zabbix-app\n   - grafana-clock-panel\n   .......\n   ............\n   ......\n   ```\n\n1. Save the changes and use the `helm upgrade` command to install these plugins:\n\n   ```bash\n   helm upgrade my-grafana grafana\/grafana -f values.yaml -n monitoring\n   ```\n\n1. Navigate to `127.0.0.1:3000` in your browser.\n\n1. Log in with the admin credentials when the Grafana sign-in page appears.\n\n1. 
In the UI, navigate to Administration > Plugins.\n\n1. Search for the plugins you installed; they should be marked as installed.\n\n### Configure a Private CA (Certificate Authority)\n\nIn many enterprise networks, TLS certificates are issued by a private certificate authority and are not trusted by default (using the provided OS trust chain).\n\nIf your Grafana instance needs to interact with services exposing certificates issued by these private CAs, then you need to ensure Grafana trusts the root certificate.\n\nYou might need to configure this if you:\n\n- have plugins that require connectivity to other self-hosted systems. For example, if you've installed the Grafana Enterprise Metrics, Logs, or Traces (GEM, GEL, GET) plugins, and your GEM (or GEL\/GET) cluster is using a private certificate.\n- want to connect to data sources which are listening on HTTPS with a private certificate.\n- are using a backend database for persistence, or a caching service, that uses private certificates for encryption in transit.\n\nIn some cases you can specify a self-signed certificate within Grafana (such as in some data sources), or choose to skip TLS certificate validation (this is not recommended unless absolutely necessary).\n\nA simple solution which should work across your entire instance (plugins, data sources, and backend connections) is to add your self-signed CA certificate to your Kubernetes deployment:\n\n1. Create a ConfigMap containing the certificate, and deploy it to your Kubernetes cluster:\n\n   ```yaml\n   # grafana-ca-configmap.yaml\n   ---\n   apiVersion: v1\n   kind: ConfigMap\n   metadata:\n     name: grafana-ca-cert\n   data:\n     ca.pem: |\n       -----BEGIN CERTIFICATE-----\n       (rest of the CA cert)\n       -----END CERTIFICATE-----\n   ```\n\n   ```bash\n   kubectl apply --filename grafana-ca-configmap.yaml --namespace monitoring\n   ```\n\n1. Open the Helm `values.yaml` file in your favorite editor.\n\n1. 
Find the line that says `extraConfigmapMounts:` and under that section, specify the additional ConfigMap that you want to mount.\n\n   ```yaml\n   .......\n   ............\n   ......\n   extraConfigmapMounts:\n      - name: ca-certs-configmap\n        mountPath: \/etc\/ssl\/certs\/ca.pem\n        subPath: ca.pem\n        configMap: grafana-ca-cert\n        readOnly: true\n   .......\n   ............\n   ......\n   ```\n\n1. Save the changes and use the `helm upgrade` command to update your Grafana deployment and mount the new ConfigMap:\n\n   ```bash\n   helm upgrade my-grafana grafana\/grafana --values values.yaml --namespace monitoring\n   ```\n\n## Troubleshooting\n\nThis section includes troubleshooting tips you might find helpful when deploying Grafana on Kubernetes via Helm.\n\n### Collect logs\n\nIt is important to view the Grafana server logs while troubleshooting any issues.\n\nTo check the Grafana logs, run the following command:\n\n```bash\n# dump Pod logs for a Deployment (single-container case)\n\nkubectl logs --namespace=monitoring deploy\/my-grafana\n```\n\nIf you have multiple containers running in the deployment, run the following command to obtain the logs only for the Grafana container:\n\n```bash\n# dump Pod logs for a Deployment (multi-container case)\n\nkubectl logs --namespace=monitoring deploy\/my-grafana -c grafana\n```\n\nFor more information about accessing Kubernetes application logs, refer to [Pods](https:\/\/kubernetes.io\/docs\/reference\/kubectl\/cheatsheet\/#interacting-with-running-pods) and [Deployments](https:\/\/kubernetes.io\/docs\/reference\/kubectl\/cheatsheet\/#interacting-with-deployments-and-services).\n\n### Increase log levels\n\nBy default, the Grafana log level is set to `info`, but you can increase it to `debug` mode to fetch information needed to diagnose and troubleshoot a problem. 
For more information about Grafana log levels, refer to [Configuring logs](https:\/\/grafana.com\/docs\/grafana\/latest\/setup-grafana\/configure-grafana#log).\n\nTo increase the log level to `debug` mode, complete the following steps:\n\n1. Open the `values.yaml` file in your favorite editor and search for the string `grafana.ini`; there you will find a section about log mode.\n\n1. Add `level: debug` just below the line `mode: console`:\n\n   ```yaml\n   # This is the values.yaml file\n      .....\n   .......\n   ....\n   grafana.ini:\n   paths:\n      data: \/var\/lib\/grafana\/\n      .....\n   .......\n   ....\n      mode: console\n      level: debug\n   ```\n\n   Make sure to keep the indentation the same; otherwise it will not work.\n\n1. To apply this change, run the `helm upgrade` command as follows:\n\n   ```bash\n   helm upgrade my-grafana grafana\/grafana -f values.yaml -n monitoring\n   ```\n\n1. To verify it, access the Grafana UI in the browser using the provided `IP:Port`. The Grafana sign-in page appears.\n\n1. To sign in to Grafana, enter `admin` for the username and paste the password that was decoded earlier. Navigate to Server Admin > Settings and then search for log. You should see that the level is set to `debug`.\n\n### Reset Grafana admin secrets (login credentials)\n\nBy default, the login credentials for the super admin account are generated via `secrets`. However, this can be changed easily. To achieve this, use the following steps:\n\n1. Edit the `values.yaml` file and search for the string `adminPassword`. There you can define a new password:\n\n   ```yaml\n   # Administrator credentials when not using an existing secret (see below)\n   adminUser: admin\n   adminPassword: admin\n   ```\n\n1. Then use the `helm upgrade` command as follows:\n\n   ```bash\n   helm upgrade my-grafana grafana\/grafana -f values.yaml -n monitoring\n   ```\n\n   Your super admin login credentials are now `admin` for both the username and password.\n\n1. 
To verify, sign in to Grafana with `admin` for both the username and password. You should be able to log in as super admin.\n\n## Uninstall the Grafana deployment\n\nTo uninstall the Grafana deployment, run the command:\n\n`helm uninstall <RELEASE-NAME> -n <NAMESPACE-NAME>`\n\n```bash\nhelm uninstall my-grafana -n monitoring\n```\n\nThis deletes all of the objects from the given namespace `monitoring`.\n\nIf you want to delete the namespace `monitoring`, then run the command:\n\n```bash\nkubectl delete namespace monitoring\n```","site":"grafana setup"}
{"questions":"grafana setup plugin grafana rendering image aliases keywords image rendering Image rendering administration imagerendering","answers":"---\naliases:\n  - ..\/administration\/image_rendering\/\n  - ..\/image-rendering\/\ndescription: Image rendering\nkeywords:\n  - grafana\n  - image\n  - rendering\n  - plugin\nlabels:\n  products:\n    - enterprise\n    - oss\ntitle: Set up image rendering\nweight: 1000\n---\n\n# Set up image rendering\n\nGrafana supports automatic rendering of panels as PNG images. This allows Grafana to automatically generate images of your panels to include in alert notifications, [PDF export](), and [Reporting](). PDF Export and Reporting are available only in [Grafana Enterprise]() and [Grafana Cloud](\/docs\/grafana-cloud\/).\n\n> **Note:** Image rendering of dashboards is not supported at this time.\n\nWhile an image is being rendered, the PNG image is temporarily written to the file system; specifically, to the `png` folder in the Grafana `data` folder.\n\nA background job runs every 10 minutes and removes temporary images. You can configure how long an image should be stored before being removed by configuring the [temp_data_lifetime]() setting.\n\nYou can also render a PNG by hovering over the panel to display the actions menu in the top-right corner, and then clicking **Share > Share link**. The **Render image** option is displayed in the link settings.\n\n## Alerting and render limits\n\nAlert notifications can include images, but rendering many images at the same time can overload the server where the renderer is running. For instructions on how to configure this, see [max_concurrent_screenshots]().\n\n## Install Grafana Image Renderer plugin\n\n\nAll PhantomJS support has been removed. 
Instead, use the Grafana Image Renderer plugin or remote rendering service.\n\n\nTo install the plugin, refer to the [Grafana Image Renderer Installation instructions](\/grafana\/plugins\/grafana-image-renderer\/?tab=installation#installation).\n\n### Memory requirements\n\nRendering images requires a lot of memory, mainly because Grafana creates browser instances in the background for the actual rendering. Grafana recommends a minimum of 16GB of free memory on the system rendering images.\n\nRendering multiple images in parallel requires an even bigger memory footprint. You can use the remote rendering service to render images on a remote system, so your local system resources are not affected.\n\n## Configuration\n\nThe Grafana Image Renderer plugin has a number of configuration options that are used in plugin or remote rendering modes.\n\nIn plugin mode, you can specify them directly in the [Grafana configuration file]().\n\nIn remote rendering mode, you can specify them in a `.json` [configuration file](#configuration-file) or, for some of them, you can override the configuration defaults using environment variables.\n\n### Configuration file\n\nYou can update your settings by using a configuration file; see [default.json](https:\/\/github.com\/grafana\/grafana-image-renderer\/tree\/master\/default.json) for defaults. 
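For instance, a minimal custom `config.json` might override just a few of the defaults. This is an illustrative sketch; the keys mirror the `service` and `rendering` settings documented later on this page:

```json
{
  "service": {
    "port": 8081
  },
  "rendering": {
    "mode": "default",
    "timezone": "Europe/Stockholm"
  }
}
```

Settings omitted from the file keep their values from `default.json`.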
Note that any configured environment variable takes precedence over configuration file settings.\n\nYou can volume mount your custom configuration file when starting the Docker container:\n\n```bash\ndocker run -d --name=renderer --network=host -v \/some\/path\/config.json:\/usr\/src\/app\/config.json grafana\/grafana-image-renderer:latest\n```\n\nYou can see a docker-compose example using a custom configuration file [here](https:\/\/github.com\/grafana\/grafana-image-renderer\/tree\/master\/devenv\/docker\/custom-config).\n\n### Security\n\n\nThis feature is available in Image Renderer v3.6.1 and later.\n\n\nYou can restrict access to the rendering endpoint by specifying a secret token. The token should be configured in the Grafana configuration file and the renderer configuration file. This token is important when you run the plugin in remote rendering mode.\n\nRenderer versions v3.6.1 or later require a Grafana version with this feature. These include:\n\n- Grafana v9.1.2 or later\n- Grafana v9.0.8 or later patch releases\n- Grafana v8.5.11 or later patch releases\n- Grafana v8.4.11 or later patch releases\n- Grafana v8.3.11 or later patch releases\n\n```bash\nAUTH_TOKEN=-\n```\n\n```json\n{\n  \"service\": {\n    \"security\": {\n      \"authToken\": \"-\"\n    }\n  }\n}\n```\n\nSee [Grafana configuration]() for how to configure the token in Grafana.\n\n### Rendering mode\n\nYou can control how headless browser instances are created by configuring a rendering mode. The default is `default`; other supported values are `clustered` and `reusable`.\n\n#### Default\n\nDefault mode creates a new browser instance on each request. When handling multiple concurrent requests, this mode increases memory usage as it will launch multiple browsers at the same time. 
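On the Grafana side, the shared secret from the Security section above goes into the Grafana configuration file. A sketch, assuming the `renderer_token` option in the `[rendering]` section; verify the exact key against your Grafana version's configuration reference:

```ini
[rendering]
; Sketch: must match the AUTH_TOKEN / service.security.authToken
; value configured on the image renderer service.
renderer_token = -
```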
If you want to set a maximum number of browsers to open, you'll need to use the [clustered mode](#clustered).\n\n\nWhen using the `default` mode, it's recommended not to remove the default Chromium flag `--disable-gpu`. When receiving a lot of concurrent requests, not using this flag can cause Puppeteer's `newPage` function to freeze, causing request timeouts and leaving browsers open.\n\n\n```bash\nRENDERING_MODE=default\n```\n\n```json\n{\n  \"rendering\": {\n    \"mode\": \"default\"\n  }\n}\n```\n\n#### Clustered\n\nWith the `clustered` mode, you can configure how many browser instances or incognito pages can execute concurrently. The default is `browser`, which caps the number of browser instances that can execute concurrently. Mode `context` instead caps the number of incognito pages that can execute concurrently. You can also configure the maximum concurrency allowed, which defaults to `5`, and the maximum duration of a rendering request, which defaults to `30` seconds.\n\nUsing a cluster of incognito pages is more performant and consumes less CPU and memory than a cluster of browsers. However, if one page crashes, it can bring down the entire browser with it (making all the rendering requests happening at the same time fail). Also, each page isn't guaranteed to be totally clean (cookies and storage might bleed through as seen [here](https:\/\/bugs.chromium.org\/p\/chromium\/issues\/detail?id=754576)).\n\n```bash\nRENDERING_MODE=clustered\nRENDERING_CLUSTERING_MODE=browser\nRENDERING_CLUSTERING_MAX_CONCURRENCY=5\nRENDERING_CLUSTERING_TIMEOUT=30\n```\n\n```json\n{\n  \"rendering\": {\n    \"mode\": \"clustered\",\n    \"clustering\": {\n      \"mode\": \"browser\",\n      \"maxConcurrency\": 5,\n      \"timeout\": 30\n    }\n  }\n}\n```\n\n#### Reusable (experimental)\n\nWhen using the rendering mode `reusable`, one browser instance will be created and reused. A new incognito page will be opened for each request. 
This mode is experimental since, if the browser instance crashes, it will not automatically be restarted. You can achieve a similar behavior using `clustered` mode with a high `maxConcurrency` setting.\n\n```bash\nRENDERING_MODE=reusable\n```\n\n```json\n{\n  \"rendering\": {\n    \"mode\": \"reusable\"\n  }\n}\n```\n\n#### Optimize the performance, CPU and memory usage of the image renderer\n\nThe performance and resource consumption of the different modes depend a lot on the number of concurrent requests your service is handling. To understand how many concurrent requests your service is handling, [monitor your image renderer service]().\n\nWith no concurrent requests, the different modes show very similar performance and CPU \/ memory usage.\n\nWhen handling concurrent requests, we see the following trends:\n\n- To improve performance and reduce CPU and memory consumption, use [clustered](#clustered) mode with `RENDERING_CLUSTERING_MODE` set to `context`. This parallelizes incognito pages instead of browsers.\n- If you use the [clustered](#clustered) mode with a `maxConcurrency` setting below your average number of concurrent requests, performance will drop as the rendering requests will need to wait for the others to finish before getting access to an incognito page \/ browser.\n\nTo achieve better performance, monitor the machine on which your service is running. If you don't have enough memory and \/ or CPU, every rendering step will be slower than usual, increasing the duration of every rendering request.\n\n### Other available settings\n\n\nPlease note that not all settings are available using environment variables. If a setting below has no environment variable example, it means that you need to update the configuration file.\n\n\n#### HTTP host\n\nChange the listening host of the HTTP server. 
Default is unset and will use the local host.\n\n```bash\nHTTP_HOST=localhost\n```\n\n```json\n{\n  \"service\": {\n    \"host\": \"localhost\"\n  }\n}\n```\n\n#### HTTP port\n\nChange the listening port of the HTTP server. Default is `8081`. Setting `0` will automatically assign a port not in use.\n\n```bash\nHTTP_PORT=0\n```\n\n```json\n{\n  \"service\": {\n    \"port\": 0\n  }\n}\n```\n\n#### HTTP protocol\n\n\nHTTPS protocol is supported in the image renderer v3.11.0 and later.\n\n\nChange the protocol of the server; it can be `http` or `https`. Default is `http`.\n\n```json\n{\n  \"service\": {\n    \"protocol\": \"http\"\n  }\n}\n```\n\n#### HTTPS certificate and key file\n\nPath to the image renderer certificate and key file used to start an HTTPS server.\n\n```json\n{\n  \"service\": {\n    \"certFile\": \".\/path\/to\/cert\",\n    \"certKey\": \".\/path\/to\/key\"\n  }\n}\n```\n\n#### HTTPS min TLS version\n\nMinimum TLS version allowed. Accepted values are: `TLSv1.2`, `TLSv1.3`. Default is `TLSv1.2`.\n\n```json\n{\n  \"service\": {\n    \"minTLSVersion\": \"TLSv1.2\"\n  }\n}\n```\n\n#### Enable Prometheus metrics\n\nYou can enable the [Prometheus](https:\/\/prometheus.io\/) metrics endpoint `\/metrics` using the environment variable `ENABLE_METRICS`. 
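Once metrics are enabled, a quick way to confirm the endpoint responds is a plain HTTP request. This sketch assumes the renderer is running locally on the default port `8081`; adjust host and port to your setup:

```shell
# Print the first metric help line if the endpoint is reachable,
# otherwise fall back to a short message.
curl -s http://localhost:8081/metrics | grep -m 1 '^# HELP' \
  || echo "renderer not reachable on localhost:8081"
```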
Node.js and render request duration metrics are included; see [Enable Prometheus metrics endpoint]() for details.\n\nDefault is `false`.\n\n```bash\nENABLE_METRICS=true\n```\n\n```json\n{\n  \"service\": {\n    \"metrics\": {\n      \"enabled\": true,\n      \"collectDefaultMetrics\": true,\n      \"requestDurationBuckets\": [1, 5, 7, 9, 11, 13, 15, 20, 30]\n    }\n  }\n}\n```\n\n#### Enable detailed timing metrics\n\nWith the [Prometheus metrics enabled](#enable-prometheus-metrics), you can also enable detailed metrics to get the duration of every rendering step.\n\nDefault is `false`.\n\n```bash\n# Available from v3.9.0+\nRENDERING_TIMING_METRICS=true\n```\n\n```json\n{\n  \"rendering\": {\n    \"timingMetrics\": true\n  }\n}\n```\n\n#### Log level\n\nChange the log level. Default is `info` and will include log messages with level `error`, `warning` and `info`.\n\n```bash\nLOG_LEVEL=debug\n```\n\n```json\n{\n  \"service\": {\n    \"logging\": {\n      \"level\": \"debug\",\n      \"console\": {\n        \"json\": false,\n        \"colorize\": true\n      }\n    }\n  }\n}\n```\n\n#### Verbose logging\n\nInstructs the headless browser instance whether to capture and log verbose information when rendering an image. Default is `false` and will only capture and log error messages. When enabled (`true`), debug messages are captured and logged as well.\n\nNote that you need to change the log level to `debug` (see above) for the verbose information to be included in the logs.\n\n```bash\nRENDERING_VERBOSE_LOGGING=true\n```\n\n```json\n{\n  \"rendering\": {\n    \"verboseLogging\": true\n  }\n}\n```\n\n#### Capture browser output\n\nInstructs the headless browser instance whether to output its debug and error messages into the running process of the remote rendering service. 
Default is `false`.\nThis can be useful to enable (`true`) when troubleshooting.\n\n```bash\nRENDERING_DUMPIO=true\n```\n\n```json\n{\n  \"rendering\": {\n    \"dumpio\": true\n  }\n}\n```\n\n#### Custom Chrome\/Chromium\n\nIf you already have [Chrome](https:\/\/www.google.com\/chrome\/) or [Chromium](https:\/\/www.chromium.org\/)\ninstalled on your system, you can use it instead of the pre-packaged version of Chromium.\n\n\nPlease note that this is not recommended, since you may encounter problems if the installed version of Chrome\/Chromium is not compatible with the [Grafana Image renderer plugin](\/grafana\/plugins\/grafana-image-renderer).\n\n\nYou need to make sure that the Chrome\/Chromium executable is available for the Grafana\/image rendering service process.\n\n```bash\nCHROME_BIN=\"\/usr\/bin\/chromium-browser\"\n```\n\n```json\n{\n  \"rendering\": {\n    \"chromeBin\": \"\/usr\/bin\/chromium-browser\"\n  }\n}\n```\n\n#### Start browser with additional arguments\n\nAdditional arguments to pass to the headless browser instance. Defaults are `--no-sandbox,--disable-gpu`. The list of Chromium flags can be found [here](https:\/\/peter.sh\/experiments\/chromium-command-line-switches\/) and the list of flags used as defaults by Puppeteer can be found [here](https:\/\/cri.dev\/posts\/2020-04-04-Full-list-of-Chromium-Puppeteer-flags\/). Multiple arguments are separated with a comma.\n\n```bash\nRENDERING_ARGS=--no-sandbox,--disable-setuid-sandbox,--disable-dev-shm-usage,--disable-accelerated-2d-canvas,--disable-gpu,--window-size=1280x758\n```\n\n```json\n{\n  \"rendering\": {\n    \"args\": [\n      \"--no-sandbox\",\n      \"--disable-setuid-sandbox\",\n      \"--disable-dev-shm-usage\",\n      \"--disable-accelerated-2d-canvas\",\n      \"--disable-gpu\",\n      \"--window-size=1280x758\"\n    ]\n  }\n}\n```\n\n#### Ignore HTTPS errors\n\nInstructs the headless browser instance whether to ignore HTTPS errors during navigation. 
By default, HTTPS errors are not ignored.\nDue to the security risk, it's not recommended to ignore HTTPS errors.\n\n```bash\nIGNORE_HTTPS_ERRORS=true\n```\n\n```json\n{\n  \"rendering\": {\n    \"ignoresHttpsErrors\": true\n  }\n}\n```\n\n#### Default timezone\n\nInstructs the headless browser instance to use a default timezone when one is not provided by Grafana, e.g. when rendering a panel image of an alert. See [ICU\u2019s metaZones.txt](https:\/\/cs.chromium.org\/chromium\/src\/third_party\/icu\/source\/data\/misc\/metaZones.txt?rcl=faee8bc70570192d82d2978a71e2a615788597d1) for a list of supported timezone IDs. Falls back to the `TZ` environment variable if not set.\n\n```bash\nBROWSER_TZ=Europe\/Stockholm\n```\n\n```json\n{\n  \"rendering\": {\n    \"timezone\": \"Europe\/Stockholm\"\n  }\n}\n```\n\n#### Default language\n\nInstructs the headless browser instance to use a default language when one is not provided by Grafana, e.g. when rendering a panel image of an alert.\nRefer to the HTTP `Accept-Language` header to understand how to format this value.\n\n```bash\n# Available from v3.9.0+\nRENDERING_LANGUAGE=\"fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7, *;q=0.5\"\n```\n\n```json\n{\n  \"rendering\": {\n    \"acceptLanguage\": \"fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7, *;q=0.5\"\n  }\n}\n```\n\n#### Viewport width\n\nDefault viewport width when width is not specified in the rendering request. Default is `1000`.\n\n```bash\n# Available from v3.9.0+\nRENDERING_VIEWPORT_WIDTH=1000\n```\n\n```json\n{\n  \"rendering\": {\n    \"width\": 1000\n  }\n}\n```\n\n#### Viewport height\n\nDefault viewport height when height is not specified in the rendering request. Default is `500`.\n\n```bash\n# Available from v3.9.0+\nRENDERING_VIEWPORT_HEIGHT=500\n```\n\n```json\n{\n  \"rendering\": {\n    \"height\": 500\n  }\n}\n```\n\n#### Viewport maximum width\n\nLimit the maximum viewport width that can be requested. 
Default is `3000`.\n\n```bash\n# Available from v3.9.0+\nRENDERING_VIEWPORT_MAX_WIDTH=1000\n```\n\n```json\n{\n  \"rendering\": {\n    \"maxWidth\": 1000\n  }\n}\n```\n\n#### Viewport maximum height\n\nLimit the maximum viewport height that can be requested. Default is `3000`.\n\n```bash\n# Available from v3.9.0+\nRENDERING_VIEWPORT_MAX_HEIGHT=500\n```\n\n```json\n{\n  \"rendering\": {\n    \"maxHeight\": 500\n  }\n}\n```\n\n#### Device scale factor\n\nSpecify default device scale factor for rendering images. `2` is enough for monitor resolutions, `4` would be better for printed material. Setting a higher value affects performance and memory. Default is `1`.\nThis can be overridden in the rendering request.\n\n```bash\n# Available from v3.9.0+\nRENDERING_VIEWPORT_DEVICE_SCALE_FACTOR=2\n```\n\n```json\n{\n  \"rendering\": {\n    \"deviceScaleFactor\": 2\n  }\n}\n```\n\n#### Maximum device scale factor\n\nLimit the maximum device scale factor that can be requested. Default is `4`.\n\n```bash\n# Available from v3.9.0+\nRENDERING_VIEWPORT_MAX_DEVICE_SCALE_FACTOR=4\n```\n\n```json\n{\n  \"rendering\": {\n    \"maxDeviceScaleFactor\": 4\n  }\n}\n```\n\n#### Page zoom level\n\nThe following command sets a page zoom level. The default value is `1`. 
A value of `1.5` equals 150% zoom.\n\n```bash\nRENDERING_VIEWPORT_PAGE_ZOOM_LEVEL=1\n```\n\n```json\n{\n  \"rendering\": {\n    \"pageZoomLevel\": 1\n  }\n}\n```","site":"grafana setup"}
{"questions":"grafana setup image rendering troubleshooting plugin rendering aliases Image rendering troubleshooting troubleshooting keywords grafana image","answers":"---\naliases:\n  - ..\/..\/image-rendering\/troubleshooting\/\ndescription: Image rendering troubleshooting\nkeywords:\n  - grafana\n  - image\n  - rendering\n  - plugin\n  - troubleshooting\nlabels:\n  products:\n    - enterprise\n    - oss\nmenuTitle: Troubleshooting\ntitle: Troubleshoot image rendering\nweight: 200\n---\n\n# Troubleshoot image rendering\n\nIn this section, you'll learn how to enable logging for the image renderer and you'll find the most common issues.\n\n## Enable debug logging\n\nTo troubleshoot the image renderer, different kinds of logs are available.\n\nYou can enable debug log messages for rendering in the Grafana configuration file and inspect the Grafana server logs.\n\n```bash\n[log]\nfilters = rendering:debug\n```\n\nYou can also enable more logs in the image renderer service itself by enabling [debug logging]().\n\n## Missing libraries\n\nThe plugin and rendering service use [Chromium browser](https:\/\/www.chromium.org\/), which depends on certain libraries.\nIf you don't have all of those libraries installed in your system, you may encounter errors when trying to render an image, e.g.\n\n```bash\nRendering failed: Error: Failed to launch chrome!\/var\/lib\/grafana\/plugins\/grafana-image-renderer\/chrome-linux\/chrome:\nerror while loading shared libraries: libX11.so.6: cannot open shared object file: No such file or directory\\n\\n\\nTROUBLESHOOTING: https:\/\/github.com\/GoogleChrome\/puppeteer\/blob\/master\/docs\/troubleshooting.md\n```\n\nIn general, you can use the [`ldd`](<https:\/\/en.wikipedia.org\/wiki\/Ldd_(Unix)>) utility to figure out what shared libraries\nare not installed in your system:\n\n```bash\ncd <grafana-image-render plugin directory>\nldd chrome-headless-shell\/linux-132.0.6781.0\/chrome-headless-shell-linux64\/chrome-headless-shell\n      
linux-vdso.so.1 (0x00007fff1bf65000)\n      libdl.so.2 => \/lib\/x86_64-linux-gnu\/libdl.so.2 (0x00007f2047945000)\n      libpthread.so.0 => \/lib\/x86_64-linux-gnu\/libpthread.so.0 (0x00007f2047924000)\n      librt.so.1 => \/lib\/x86_64-linux-gnu\/librt.so.1 (0x00007f204791a000)\n      libX11.so.6 => not found\n      libX11-xcb.so.1 => not found\n      libxcb.so.1 => not found\n      libXcomposite.so.1 => not found\n        ...\n```\n\n**Ubuntu:**\n\nOn Ubuntu 18.10 the following dependencies are required for the image rendering to function.\n\n```bash\nlibx11-6 libx11-xcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrender1 libxtst6 libglib2.0-0 libnss3 libcups2  libdbus-1-3 libxss1 libxrandr2 libgtk-3-0 libasound2 libxcb-dri3-0 libgbm1 libxshmfence1\n```\n\n**Debian:**\n\nOn Debian 9 (Stretch) the following dependencies are required for the image rendering to function.\n\n```bash\nlibx11 libcairo libcairo2 libxtst6 libxcomposite1 libx11-xcb1 libxcursor1 libxdamage1 libnss3 libcups libcups2 libxss libxss1 libxrandr2 libasound2 libatk1.0-0 libatk-bridge2.0-0 libpangocairo-1.0-0 libgtk-3-0 libgbm1 libxshmfence1\n```\n\nOn Debian 10 (Buster) the following dependencies are required for the image rendering to function.\n\n```bash\nlibxdamage1 libxext6 libxi6 libxtst6 libnss3 libcups2 libxss1 libxrandr2 libasound2 libatk1.0-0 libatk-bridge2.0-0 libpangocairo-1.0-0 libpango-1.0-0 libcairo2 libatspi2.0-0 libgtk3.0-cil libgdk3.0-cil libx11-xcb-dev libgbm1 libxshmfence1\n```\n\n**Centos:**\n\nOn a minimal CentOS 7 installation, the following dependencies are required for the image rendering to function:\n\n```bash\nlibXcomposite libXdamage libXtst cups libXScrnSaver pango atk adwaita-cursor-theme adwaita-icon-theme at at-spi2-atk at-spi2-core cairo-gobject colord-libs dconf desktop-file-utils ed emacs-filesystem gdk-pixbuf2 glib-networking gnutls gsettings-desktop-schemas gtk-update-icon-cache gtk3 hicolor-icon-theme jasper-libs json-glib 
libappindicator-gtk3 libdbusmenu libdbusmenu-gtk3 libepoxy liberation-fonts liberation-narrow-fonts liberation-sans-fonts liberation-serif-fonts libgusb libindicator-gtk3 libmodman libproxy libsoup libwayland-cursor libwayland-egl libxkbcommon m4 mailx nettle patch psmisc redhat-lsb-core redhat-lsb-submod-security rest spax time trousers xdg-utils xkeyboard-config alsa-lib\n```\n\nOn a minimal CentOS 8 installation, the following dependencies are required for the image rendering to function:\n\n```bash\nlibXcomposite libXdamage libXtst cups libXScrnSaver pango atk adwaita-cursor-theme adwaita-icon-theme at at-spi2-atk at-spi2-core cairo-gobject colord-libs dconf desktop-file-utils ed emacs-filesystem gdk-pixbuf2 glib-networking gnutls gsettings-desktop-schemas gtk-update-icon-cache gtk3 hicolor-icon-theme jasper-libs json-glib libappindicator-gtk3 libdbusmenu libdbusmenu-gtk3 libepoxy liberation-fonts liberation-narrow-fonts liberation-sans-fonts liberation-serif-fonts libgusb libindicator-gtk3 libmodman libproxy libsoup libwayland-cursor libwayland-egl libxkbcommon m4 mailx nettle patch psmisc redhat-lsb-core redhat-lsb-submod-security rest spax time trousers xdg-utils xkeyboard-config alsa-lib libX11-xcb\n```\n\n**RHEL:**\n\nOn a minimal RHEL 8 installation, the following dependencies are required for the image rendering to function:\n\n```bash\nlinux-vdso.so.1 libdl.so.2 libpthread.so.0 libgobject-2.0.so.0 libglib-2.0.so.0 libnss3.so libnssutil3.so libsmime3.so libnspr4.so libatk-1.0.so.0 libatk-bridge-2.0.so.0 libcups.so.2 libgio-2.0.so.0 libdrm.so.2 libdbus-1.so.3 libexpat.so.1 libxcb.so.1 libxkbcommon.so.0 libm.so.6 libX11.so.6 libXcomposite.so.1 libXdamage.so.1 libXext.so.6 libXfixes.so.3 libXrandr.so.2 libgbm.so.1 libpango-1.0.so.0 libcairo.so.2 libasound.so.2 libatspi.so.0 libgcc_s.so.1 libc.so.6 \/lib64\/ld-linux-x86-64.so.2 libgnutls.so.30 libpcre.so.1 libffi.so.6 libplc4.so libplds4.so librt.so.1 libgmodule-2.0.so.0 libgssapi_krb5.so.2 libkrb5.so.3 
libk5crypto.so.3 libcom_err.so.2 libavahi-common.so.3 libavahi-client.so.3 libcrypt.so.1 libz.so.1 libselinux.so.1 libresolv.so.2 libmount.so.1 libsystemd.so.0 libXau.so.6 libXrender.so.1 libthai.so.0 libfribidi.so.0 libpixman-1.so.0 libfontconfig.so.1 libpng16.so.16 libxcb-render.so.0 libidn2.so.0 libunistring.so.2 libtasn1.so.6 libnettle.so.6 libhogweed.so.4 libgmp.so.10 libkrb5support.so.0 libkeyutils.so.1 libpcre2-8.so.0 libuuid.so.1 liblz4.so.1 libgcrypt.so.20 libbz2.so.1\n```\n\n## Certificate signed by internal certificate authorities\n\nIn many cases, Grafana runs on internal servers and uses certificates that have not been signed by a CA ([Certificate Authority](https:\/\/en.wikipedia.org\/wiki\/Certificate_authority)) known to Chrome, and therefore cannot be validated. Chrome internally uses NSS ([Network Security Services](https:\/\/en.wikipedia.org\/wiki\/Network_Security_Services)) for cryptographic operations such as the validation of certificates.\n\nIf you are using the Grafana Image Renderer with a Grafana server that uses a certificate signed by such a custom CA (for example a company-internal CA), rendering images will fail and you will see messages like this in the Grafana log:\n\n```\nt=2019-12-04T12:39:22+0000 lvl=error msg=\"Render request failed\" logger=rendering error=map[] url=\"https:\/\/192.168.106.101:3443\/d-solo\/zxDJxNaZk\/graphite-metrics?orgId=1&refresh=1m&from=1575438321300&to=1575459921300&var-Host=master1&panelId=4&width=1000&height=500&tz=Europe%2FBerlin&render=1\" timestamp=0001-01-01T00:00:00.000Z\nt=2019-12-04T12:39:22+0000 lvl=error msg=\"Rendering failed.\" logger=context userId=1 orgId=1 uname=admin error=\"Rendering failed: Error: net::ERR_CERT_AUTHORITY_INVALID at https:\/\/192.168.106.101:3443\/d-solo\/zxDJxNaZk\/graphite-metrics?orgId=1&refresh=1m&from=1575438321300&to=1575459921300&var-Host=master1&panelId=4&width=1000&height=500&tz=Europe%2FBerlin&render=1\"\nt=2019-12-04T12:39:22+0000 lvl=error msg=\"Request 
Completed\" logger=context userId=1 orgId=1 uname=admin method=GET path=\/render\/d-solo\/zxDJxNaZk\/graphite-metrics status=500 remote_addr=192.168.106.101 time_ms=310 size=1722 referer=\"https:\/\/grafana.xxx-xxx\/d\/zxDJxNaZk\/graphite-metrics?orgId=1&refresh=1m\"\n```\n\nIf this happens, then you have to add the certificate to the trust store. If you have the certificate file for the internal root CA in the file `internal-root-ca.crt.pem`, then use the following commands to create a user-specific NSS trust store for the Grafana user (`grafana` for the purpose of this example):\n\n**Linux:**\n\n```\n[root@server ~]# [ -d \/usr\/share\/grafana\/.pki\/nssdb ] || mkdir -p \/usr\/share\/grafana\/.pki\/nssdb\n[root@server ~]# certutil -d sql:\/usr\/share\/grafana\/.pki\/nssdb -A -n internal-root-ca -t C -i \/etc\/pki\/tls\/certs\/internal-root-ca.crt.pem\n[root@server ~]# chown -R grafana: \/usr\/share\/grafana\/.pki\/nssdb\n```\n\n**Windows:**\n\n```\ncertutil -addstore \"Root\" <path>\/internal-root-ca.crt.pem\n```\n\n**Container:**\n\n```Dockerfile\nFROM grafana\/grafana-image-renderer:latest\n\nUSER root\n\nRUN apk add --no-cache nss-tools\n\nUSER grafana\n\nCOPY internal-root-ca.crt.pem \/etc\/pki\/tls\/certs\/internal-root-ca.crt.pem\nRUN mkdir -p \/home\/grafana\/.pki\/nssdb\nRUN certutil -d sql:\/home\/grafana\/.pki\/nssdb -A -n internal-root-ca -t C -i \/etc\/pki\/tls\/certs\/internal-root-ca.crt.pem\n```\n\n## Custom Chrome\/Chromium\n\nAs a last resort, if you already have [Chrome](https:\/\/www.google.com\/chrome\/) or [Chromium](https:\/\/www.chromium.org\/)\ninstalled on your system, then you can configure the Grafana Image renderer plugin to use this\ninstead of the pre-packaged version of Chromium.\n\n\nPlease note that this is not recommended, since you may encounter problems if the installed version of Chrome\/Chromium is not\ncompatible with the [Grafana Image renderer 
plugin](\/grafana\/plugins\/grafana-image-renderer).\n\n\nTo override the path to the Chrome\/Chromium executable in plugin mode, set an environment variable and make sure that it's available for the Grafana process. For example:\n\n```bash\nexport GF_PLUGIN_RENDERING_CHROME_BIN=\"\/usr\/bin\/chromium-browser\"\n```\n\nIn remote rendering mode, you need to set the environment variable or update the configuration file and make sure that it's available for the image rendering service process:\n\n```bash\nCHROME_BIN=\"\/usr\/bin\/chromium-browser\"\n```\n\n```json\n{\n  \"rendering\": {\n    \"chromeBin\": \"\/usr\/bin\/chromium-browser\"\n  }\n}\n```","site":"grafana setup"}
{"questions":"grafana setup plugin Image rendering monitoring rendering aliases monitoring keywords grafana image rendering monitoring image","answers":"---\naliases:\n  - ..\/..\/image-rendering\/monitoring\/\ndescription: Image rendering monitoring\nkeywords:\n  - grafana\n  - image\n  - rendering\n  - plugin\n  - monitoring\nlabels:\n  products:\n    - enterprise\n    - oss\ntitle: Monitor the image renderer\nweight: 100\n---\n\n# Monitor the image renderer\n\nRendering images requires a lot of memory, mainly because Grafana creates browser instances in the background for the actual rendering. Monitoring your service can help you allocate the right amount of resources to your rendering service and set the right [rendering mode]().\n\n## Enable Prometheus metrics endpoint\n\nConfigure this service to expose a Prometheus metrics endpoint. For information on how to configure and monitor this service using Prometheus as a data source, refer to [Grafana Image Rendering Service dashboard](\/grafana\/dashboards\/12203).\n\n**Metrics endpoint output example:**\n\n```\n# HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.\n# TYPE process_cpu_user_seconds_total counter\nprocess_cpu_user_seconds_total 0.536 1579444523566\n\n# HELP process_cpu_system_seconds_total Total system CPU time spent in seconds.\n# TYPE process_cpu_system_seconds_total counter\nprocess_cpu_system_seconds_total 0.064 1579444523566\n\n# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.\n# TYPE process_cpu_seconds_total counter\nprocess_cpu_seconds_total 0.6000000000000001 1579444523566\n\n# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.\n# TYPE process_start_time_seconds gauge\nprocess_start_time_seconds 1579444433\n\n# HELP process_resident_memory_bytes Resident memory size in bytes.\n# TYPE process_resident_memory_bytes gauge\nprocess_resident_memory_bytes 52686848 1579444523568\n\n# HELP 
process_virtual_memory_bytes Virtual memory size in bytes.\n# TYPE process_virtual_memory_bytes gauge\nprocess_virtual_memory_bytes 2055344128 1579444523568\n\n# HELP process_heap_bytes Process heap size in bytes.\n# TYPE process_heap_bytes gauge\nprocess_heap_bytes 1996390400 1579444523568\n\n# HELP process_open_fds Number of open file descriptors.\n# TYPE process_open_fds gauge\nprocess_open_fds 31 1579444523567\n\n# HELP process_max_fds Maximum number of open file descriptors.\n# TYPE process_max_fds gauge\nprocess_max_fds 1573877\n\n# HELP nodejs_eventloop_lag_seconds Lag of event loop in seconds.\n# TYPE nodejs_eventloop_lag_seconds gauge\nnodejs_eventloop_lag_seconds 0.000915922 1579444523567\n\n# HELP nodejs_active_handles Number of active libuv handles grouped by handle type. Every handle type is C++ class name.\n# TYPE nodejs_active_handles gauge\nnodejs_active_handles{type=\"WriteStream\"} 2 1579444523566\nnodejs_active_handles{type=\"Server\"} 1 1579444523566\nnodejs_active_handles{type=\"Socket\"} 9 1579444523566\nnodejs_active_handles{type=\"ChildProcess\"} 2 1579444523566\n\n# HELP nodejs_active_handles_total Total number of active handles.\n# TYPE nodejs_active_handles_total gauge\nnodejs_active_handles_total 14 1579444523567\n\n# HELP nodejs_active_requests Number of active libuv requests grouped by request type. 
Every request type is C++ class name.\n# TYPE nodejs_active_requests gauge\nnodejs_active_requests{type=\"FSReqCallback\"} 2\n\n# HELP nodejs_active_requests_total Total number of active requests.\n# TYPE nodejs_active_requests_total gauge\nnodejs_active_requests_total 2 1579444523567\n\n# HELP nodejs_heap_size_total_bytes Process heap size from node.js in bytes.\n# TYPE nodejs_heap_size_total_bytes gauge\nnodejs_heap_size_total_bytes 13725696 1579444523567\n\n# HELP nodejs_heap_size_used_bytes Process heap size used from node.js in bytes.\n# TYPE nodejs_heap_size_used_bytes gauge\nnodejs_heap_size_used_bytes 12068008 1579444523567\n\n# HELP nodejs_external_memory_bytes Nodejs external memory size in bytes.\n# TYPE nodejs_external_memory_bytes gauge\nnodejs_external_memory_bytes 1728962 1579444523567\n\n# HELP nodejs_heap_space_size_total_bytes Process heap space size total from node.js in bytes.\n# TYPE nodejs_heap_space_size_total_bytes gauge\nnodejs_heap_space_size_total_bytes{space=\"read_only\"} 262144 1579444523567\nnodejs_heap_space_size_total_bytes{space=\"new\"} 1048576 1579444523567\nnodejs_heap_space_size_total_bytes{space=\"old\"} 9809920 1579444523567\nnodejs_heap_space_size_total_bytes{space=\"code\"} 425984 1579444523567\nnodejs_heap_space_size_total_bytes{space=\"map\"} 1052672 1579444523567\nnodejs_heap_space_size_total_bytes{space=\"large_object\"} 1077248 1579444523567\nnodejs_heap_space_size_total_bytes{space=\"code_large_object\"} 49152 1579444523567\nnodejs_heap_space_size_total_bytes{space=\"new_large_object\"} 0 1579444523567\n\n# HELP nodejs_heap_space_size_used_bytes Process heap space size used from node.js in bytes.\n# TYPE nodejs_heap_space_size_used_bytes gauge\nnodejs_heap_space_size_used_bytes{space=\"read_only\"} 32296 1579444523567\nnodejs_heap_space_size_used_bytes{space=\"new\"} 601696 1579444523567\nnodejs_heap_space_size_used_bytes{space=\"old\"} 9376600 1579444523567\nnodejs_heap_space_size_used_bytes{space=\"code\"} 286688 
1579444523567\nnodejs_heap_space_size_used_bytes{space=\"map\"} 704320 1579444523567\nnodejs_heap_space_size_used_bytes{space=\"large_object\"} 1064872 1579444523567\nnodejs_heap_space_size_used_bytes{space=\"code_large_object\"} 3552 1579444523567\nnodejs_heap_space_size_used_bytes{space=\"new_large_object\"} 0 1579444523567\n\n# HELP nodejs_heap_space_size_available_bytes Process heap space size available from node.js in bytes.\n# TYPE nodejs_heap_space_size_available_bytes gauge\nnodejs_heap_space_size_available_bytes{space=\"read_only\"} 229576 1579444523567\nnodejs_heap_space_size_available_bytes{space=\"new\"} 445792 1579444523567\nnodejs_heap_space_size_available_bytes{space=\"old\"} 417712 1579444523567\nnodejs_heap_space_size_available_bytes{space=\"code\"} 20576 1579444523567\nnodejs_heap_space_size_available_bytes{space=\"map\"} 343632 1579444523567\nnodejs_heap_space_size_available_bytes{space=\"large_object\"} 0 1579444523567\nnodejs_heap_space_size_available_bytes{space=\"code_large_object\"} 0 1579444523567\nnodejs_heap_space_size_available_bytes{space=\"new_large_object\"} 1047488 1579444523567\n\n# HELP nodejs_version_info Node.js version info.\n# TYPE nodejs_version_info gauge\nnodejs_version_info{version=\"v14.16.1\",major=\"14\",minor=\"16\",patch=\"1\"} 1\n\n# HELP grafana_image_renderer_service_http_request_duration_seconds duration histogram of http responses labeled with: status_code\n# TYPE grafana_image_renderer_service_http_request_duration_seconds histogram\ngrafana_image_renderer_service_http_request_duration_seconds_bucket{le=\"1\",status_code=\"200\"} 0\ngrafana_image_renderer_service_http_request_duration_seconds_bucket{le=\"5\",status_code=\"200\"} 4\ngrafana_image_renderer_service_http_request_duration_seconds_bucket{le=\"7\",status_code=\"200\"} 4\ngrafana_image_renderer_service_http_request_duration_seconds_bucket{le=\"9\",status_code=\"200\"} 
4\ngrafana_image_renderer_service_http_request_duration_seconds_bucket{le=\"11\",status_code=\"200\"} 4\ngrafana_image_renderer_service_http_request_duration_seconds_bucket{le=\"13\",status_code=\"200\"} 4\ngrafana_image_renderer_service_http_request_duration_seconds_bucket{le=\"15\",status_code=\"200\"} 4\ngrafana_image_renderer_service_http_request_duration_seconds_bucket{le=\"20\",status_code=\"200\"} 4\ngrafana_image_renderer_service_http_request_duration_seconds_bucket{le=\"30\",status_code=\"200\"} 4\ngrafana_image_renderer_service_http_request_duration_seconds_bucket{le=\"+Inf\",status_code=\"200\"} 4\ngrafana_image_renderer_service_http_request_duration_seconds_sum{status_code=\"200\"} 10.492873834\ngrafana_image_renderer_service_http_request_duration_seconds_count{status_code=\"200\"} 4\n\n# HELP up 1 = up, 0 = not up\n# TYPE up gauge\nup 1\n\n# HELP grafana_image_renderer_http_request_in_flight A gauge of requests currently being served by the image renderer.\n# TYPE grafana_image_renderer_http_request_in_flight gauge\ngrafana_image_renderer_http_request_in_flight 1\n\n# HELP grafana_image_renderer_step_duration_seconds duration histogram of browser steps for rendering an image labeled with: step\n# TYPE grafana_image_renderer_step_duration_seconds histogram\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.3\",step=\"launch\"} 0\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.5\",step=\"launch\"} 0\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"1\",step=\"launch\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"2\",step=\"launch\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"3\",step=\"launch\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"5\",step=\"launch\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"+Inf\",step=\"launch\"} 1\ngrafana_image_renderer_step_duration_seconds_sum{step=\"launch\"} 
0.7914972\ngrafana_image_renderer_step_duration_seconds_count{step=\"launch\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.3\",step=\"newPage\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.5\",step=\"newPage\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"1\",step=\"newPage\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"2\",step=\"newPage\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"3\",step=\"newPage\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"5\",step=\"newPage\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"+Inf\",step=\"newPage\"} 1\ngrafana_image_renderer_step_duration_seconds_sum{step=\"newPage\"} 0.2217868\ngrafana_image_renderer_step_duration_seconds_count{step=\"newPage\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.3\",step=\"prepare\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.5\",step=\"prepare\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"1\",step=\"prepare\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"2\",step=\"prepare\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"3\",step=\"prepare\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"5\",step=\"prepare\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"+Inf\",step=\"prepare\"} 1\ngrafana_image_renderer_step_duration_seconds_sum{step=\"prepare\"} 0.0819274\ngrafana_image_renderer_step_duration_seconds_count{step=\"prepare\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.3\",step=\"navigate\"} 0\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.5\",step=\"navigate\"} 0\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"1\",step=\"navigate\"} 0\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"2\",step=\"navigate\"} 0\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"3\",step=\"navigate\"} 
0\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"5\",step=\"navigate\"} 0\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"+Inf\",step=\"navigate\"} 1\ngrafana_image_renderer_step_duration_seconds_sum{step=\"navigate\"} 15.3311258\ngrafana_image_renderer_step_duration_seconds_count{step=\"navigate\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.3\",step=\"panelsRendered\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.5\",step=\"panelsRendered\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"1\",step=\"panelsRendered\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"2\",step=\"panelsRendered\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"3\",step=\"panelsRendered\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"5\",step=\"panelsRendered\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"+Inf\",step=\"panelsRendered\"} 1\ngrafana_image_renderer_step_duration_seconds_sum{step=\"panelsRendered\"} 0.0205577\ngrafana_image_renderer_step_duration_seconds_count{step=\"panelsRendered\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.3\",step=\"screenshot\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"0.5\",step=\"screenshot\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"1\",step=\"screenshot\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"2\",step=\"screenshot\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"3\",step=\"screenshot\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"5\",step=\"screenshot\"} 1\ngrafana_image_renderer_step_duration_seconds_bucket{le=\"+Inf\",step=\"screenshot\"} 1\ngrafana_image_renderer_step_duration_seconds_sum{step=\"screenshot\"} 0.2866623\ngrafana_image_renderer_step_duration_seconds_count{step=\"screenshot\"} 1\n\n# HELP grafana_image_renderer_browser_info A metric with a constant '1 value labeled by version of the browser in use\n# 
TYPE grafana_image_renderer_browser_info gauge\ngrafana_image_renderer_browser_info{version=\"HeadlessChrome\/79.0.3945.0\"} 1\n```","site":"grafana setup","answers_cleaned":"    aliases            image rendering monitoring  description  Image rendering monitoring keywords      grafana     image     rendering     plugin     monitoring labels    products        enterprise       oss title  Monitor the image renderer weight  100        Monitor the image renderer  Rendering images requires a lot of memory  mainly because Grafana creates browser instances in the background for the actual rendering  Monitoring your service can help you allocate the right amount of resources to your rendering service and set the right  rendering mode         Enable Prometheus metrics endpoint  Configure this service to expose a Prometheus metrics endpoint  For information on how to configure and monitor this service using Prometheus as a data source  refer to  Grafana Image Rendering Service dashboard   grafana dashboards 12203      Metrics endpoint output example           HELP process cpu user seconds total Total user CPU time spent in seconds    TYPE process cpu user seconds total counter process cpu user seconds total 0 536 1579444523566    HELP process cpu system seconds total Total system CPU time spent in seconds    TYPE process cpu system seconds total counter process cpu system seconds total 0 064 1579444523566    HELP process cpu seconds total Total user and system CPU time spent in seconds    TYPE process cpu seconds total counter process cpu seconds total 0 6000000000000001 1579444523566    HELP process start time seconds Start time of the process since unix epoch in seconds    TYPE process start time seconds gauge process start time seconds 1579444433    HELP process resident memory bytes Resident memory size in bytes    TYPE process resident memory bytes gauge process resident memory bytes 52686848 1579444523568    HELP process virtual memory bytes Virtual memory size in 
bytes    TYPE process virtual memory bytes gauge process virtual memory bytes 2055344128 1579444523568    HELP process heap bytes Process heap size in bytes    TYPE process heap bytes gauge process heap bytes 1996390400 1579444523568    HELP process open fds Number of open file descriptors    TYPE process open fds gauge process open fds 31 1579444523567    HELP process max fds Maximum number of open file descriptors    TYPE process max fds gauge process max fds 1573877    HELP nodejs eventloop lag seconds Lag of event loop in seconds    TYPE nodejs eventloop lag seconds gauge nodejs eventloop lag seconds 0 000915922 1579444523567    HELP nodejs active handles Number of active libuv handles grouped by handle type  Every handle type is C   class name    TYPE nodejs active handles gauge nodejs active handles type  WriteStream   2 1579444523566 nodejs active handles type  Server   1 1579444523566 nodejs active handles type  Socket   9 1579444523566 nodejs active handles type  ChildProcess   2 1579444523566    HELP nodejs active handles total Total number of active handles    TYPE nodejs active handles total gauge nodejs active handles total 14 1579444523567    HELP nodejs active requests Number of active libuv requests grouped by request type  Every request type is C   class name    TYPE nodejs active requests gauge nodejs active requests type  FSReqCallback   2    HELP nodejs active requests total Total number of active requests    TYPE nodejs active requests total gauge nodejs active requests total 2 1579444523567    HELP nodejs heap size total bytes Process heap size from node js in bytes    TYPE nodejs heap size total bytes gauge nodejs heap size total bytes 13725696 1579444523567    HELP nodejs heap size used bytes Process heap size used from node js in bytes    TYPE nodejs heap size used bytes gauge nodejs heap size used bytes 12068008 1579444523567    HELP nodejs external memory bytes Nodejs external memory size in bytes    TYPE nodejs external memory bytes 
gauge nodejs external memory bytes 1728962 1579444523567    HELP nodejs heap space size total bytes Process heap space size total from node js in bytes    TYPE nodejs heap space size total bytes gauge nodejs heap space size total bytes space  read only   262144 1579444523567 nodejs heap space size total bytes space  new   1048576 1579444523567 nodejs heap space size total bytes space  old   9809920 1579444523567 nodejs heap space size total bytes space  code   425984 1579444523567 nodejs heap space size total bytes space  map   1052672 1579444523567 nodejs heap space size total bytes space  large object   1077248 1579444523567 nodejs heap space size total bytes space  code large object   49152 1579444523567 nodejs heap space size total bytes space  new large object   0 1579444523567    HELP nodejs heap space size used bytes Process heap space size used from node js in bytes    TYPE nodejs heap space size used bytes gauge nodejs heap space size used bytes space  read only   32296 1579444523567 nodejs heap space size used bytes space  new   601696 1579444523567 nodejs heap space size used bytes space  old   9376600 1579444523567 nodejs heap space size used bytes space  code   286688 1579444523567 nodejs heap space size used bytes space  map   704320 1579444523567 nodejs heap space size used bytes space  large object   1064872 1579444523567 nodejs heap space size used bytes space  code large object   3552 1579444523567 nodejs heap space size used bytes space  new large object   0 1579444523567    HELP nodejs heap space size available bytes Process heap space size available from node js in bytes    TYPE nodejs heap space size available bytes gauge nodejs heap space size available bytes space  read only   229576 1579444523567 nodejs heap space size available bytes space  new   445792 1579444523567 nodejs heap space size available bytes space  old   417712 1579444523567 nodejs heap space size available bytes space  code   20576 1579444523567 nodejs heap space size 
available bytes space  map   343632 1579444523567 nodejs heap space size available bytes space  large object   0 1579444523567 nodejs heap space size available bytes space  code large object   0 1579444523567 nodejs heap space size available bytes space  new large object   1047488 1579444523567    HELP nodejs version info Node js version info    TYPE nodejs version info gauge nodejs version info version  v14 16 1  major  14  minor  16  patch  1   1    HELP grafana image renderer service http request duration seconds duration histogram of http responses labeled with  status code   TYPE grafana image renderer service http request duration seconds histogram grafana image renderer service http request duration seconds bucket le  1  status code  200   0 grafana image renderer service http request duration seconds bucket le  5  status code  200   4 grafana image renderer service http request duration seconds bucket le  7  status code  200   4 grafana image renderer service http request duration seconds bucket le  9  status code  200   4 grafana image renderer service http request duration seconds bucket le  11  status code  200   4 grafana image renderer service http request duration seconds bucket le  13  status code  200   4 grafana image renderer service http request duration seconds bucket le  15  status code  200   4 grafana image renderer service http request duration seconds bucket le  20  status code  200   4 grafana image renderer service http request duration seconds bucket le  30  status code  200   4 grafana image renderer service http request duration seconds bucket le   Inf  status code  200   4 grafana image renderer service http request duration seconds sum status code  200   10 492873834 grafana image renderer service http request duration seconds count status code  200   4    HELP up 1   up  0   not up   TYPE up gauge up 1    HELP grafana image renderer http request in flight A gauge of requests currently being served by the image renderer    TYPE 
grafana image renderer http request in flight gauge grafana image renderer http request in flight 1    HELP grafana image renderer step duration seconds duration histogram of browser steps for rendering an image labeled with  step   TYPE grafana image renderer step duration seconds histogram grafana image renderer step duration seconds bucket le  0 3  step  launch   0 grafana image renderer step duration seconds bucket le  0 5  step  launch   0 grafana image renderer step duration seconds bucket le  1  step  launch   1 grafana image renderer step duration seconds bucket le  2  step  launch   1 grafana image renderer step duration seconds bucket le  3  step  launch   1 grafana image renderer step duration seconds bucket le  5  step  launch   1 grafana image renderer step duration seconds bucket le   Inf  step  launch   1 grafana image renderer step duration seconds sum step  launch   0 7914972 grafana image renderer step duration seconds count step  launch   1 grafana image renderer step duration seconds bucket le  0 3  step  newPage   1 grafana image renderer step duration seconds bucket le  0 5  step  newPage   1 grafana image renderer step duration seconds bucket le  1  step  newPage   1 grafana image renderer step duration seconds bucket le  2  step  newPage   1 grafana image renderer step duration seconds bucket le  3  step  newPage   1 grafana image renderer step duration seconds bucket le  5  step  newPage   1 grafana image renderer step duration seconds bucket le   Inf  step  newPage   1 grafana image renderer step duration seconds sum step  newPage   0 2217868 grafana image renderer step duration seconds count step  newPage   1 grafana image renderer step duration seconds bucket le  0 3  step  prepare   1 grafana image renderer step duration seconds bucket le  0 5  step  prepare   1 grafana image renderer step duration seconds bucket le  1  step  prepare   1 grafana image renderer step duration seconds bucket le  2  step  prepare   1 grafana image renderer 
step duration seconds bucket le  3  step  prepare   1 grafana image renderer step duration seconds bucket le  5  step  prepare   1 grafana image renderer step duration seconds bucket le   Inf  step  prepare   1 grafana image renderer step duration seconds sum step  prepare   0 0819274 grafana image renderer step duration seconds count step  prepare   1 grafana image renderer step duration seconds bucket le  0 3  step  navigate   0 grafana image renderer step duration seconds bucket le  0 5  step  navigate   0 grafana image renderer step duration seconds bucket le  1  step  navigate   0 grafana image renderer step duration seconds bucket le  2  step  navigate   0 grafana image renderer step duration seconds bucket le  3  step  navigate   0 grafana image renderer step duration seconds bucket le  5  step  navigate   0 grafana image renderer step duration seconds bucket le   Inf  step  navigate   1 grafana image renderer step duration seconds sum step  navigate   15 3311258 grafana image renderer step duration seconds count step  navigate   1 grafana image renderer step duration seconds bucket le  0 3  step  panelsRendered   1 grafana image renderer step duration seconds bucket le  0 5  step  panelsRendered   1 grafana image renderer step duration seconds bucket le  1  step  panelsRendered   1 grafana image renderer step duration seconds bucket le  2  step  panelsRendered   1 grafana image renderer step duration seconds bucket le  3  step  panelsRendered   1 grafana image renderer step duration seconds bucket le  5  step  panelsRendered   1 grafana image renderer step duration seconds bucket le   Inf  step  panelsRendered   1 grafana image renderer step duration seconds sum step  panelsRendered   0 0205577 grafana image renderer step duration seconds count step  panelsRendered   1 grafana image renderer step duration seconds bucket le  0 3  step  screenshot   1 grafana image renderer step duration seconds bucket le  0 5  step  screenshot   1 grafana image renderer step 
duration seconds bucket le  1  step  screenshot   1 grafana image renderer step duration seconds bucket le  2  step  screenshot   1 grafana image renderer step duration seconds bucket le  3  step  screenshot   1 grafana image renderer step duration seconds bucket le  5  step  screenshot   1 grafana image renderer step duration seconds bucket le   Inf  step  screenshot   1 grafana image renderer step duration seconds sum step  screenshot   0 2866623 grafana image renderer step duration seconds count step  screenshot   1    HELP grafana image renderer browser info A metric with a constant  1 value labeled by version of the browser in use   TYPE grafana image renderer browser info gauge grafana image renderer browser info version  HeadlessChrome 79 0 3945 0   1    "}
{"questions":"helm aliases docs quickstart title Quickstart Guide This guide covers how you can quickly get started using Helm Prerequisites How to install and get started with Helm including instructions for distros FAQs and plugins weight 1","answers":"---\ntitle: \"Quickstart Guide\"\ndescription: \"How to install and get started with Helm including instructions for distros, FAQs, and plugins.\"\nweight: 1\naliases: [\"\/docs\/quickstart\/\"]\n---\n\nThis guide covers how you can quickly get started using Helm.\n\n## Prerequisites\n\nThe following prerequisites are required for a successful and properly secured\nuse of Helm.\n\n1. A Kubernetes cluster\n2. Deciding what security configurations to apply to your installation, if any\n3. Installing and configuring Helm.\n\n### Install Kubernetes or have access to a cluster\n\n- You must have Kubernetes installed. For the latest release of Helm, we\n  recommend the latest stable release of Kubernetes, which in most cases is the\n  second-latest minor release.\n- You should also have a local configured copy of `kubectl`.\n\nSee the [Helm Version Support Policy](https:\/\/helm.sh\/docs\/topics\/version_skew\/) for the maximum version skew supported between Helm and Kubernetes.\n\n## Install Helm\n\nDownload a binary release of the Helm client. You can use tools like `homebrew`,\nor look at [the official releases page](https:\/\/github.com\/helm\/helm\/releases).\n\nFor more details, or for other options, see [the installation guide]().\n\n## Initialize a Helm Chart Repository\n\nOnce you have Helm ready, you can add a chart repository. 
Check [Artifact\nHub](https:\/\/artifacthub.io\/packages\/search?kind=0) for available Helm chart\nrepositories.\n\n```console\n$ helm repo add bitnami https:\/\/charts.bitnami.com\/bitnami\n```\n\nOnce this is installed, you will be able to list the charts you can install:\n\n```console\n$ helm search repo bitnami\nNAME                             \tCHART VERSION\tAPP VERSION  \tDESCRIPTION\nbitnami\/bitnami-common           \t0.0.9        \t0.0.9        \tDEPRECATED Chart with custom templates used in ...\nbitnami\/airflow                  \t8.0.2        \t2.0.0        \tApache Airflow is a platform to programmaticall...\nbitnami\/apache                   \t8.2.3        \t2.4.46       \tChart for Apache HTTP Server\nbitnami\/aspnet-core              \t1.2.3        \t3.1.9        \tASP.NET Core is an open-source framework create...\n# ... and many more\n```\n\n## Install an Example Chart\n\nTo install a chart, you can run the `helm install` command. Helm has several\nways to find and install a chart, but the easiest is to use the `bitnami`\ncharts.\n\n```console\n$ helm repo update              # Make sure we get the latest list of charts\n$ helm install bitnami\/mysql --generate-name\nNAME: mysql-1612624192\nLAST DEPLOYED: Sat Feb  6 16:09:56 2021\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\nNOTES: ...\n```\n\nIn the example above, the `bitnami\/mysql` chart was released, and the name of\nour new release is `mysql-1612624192`.\n\nYou get a simple idea of the features of this MySQL chart by running `helm show\nchart bitnami\/mysql`. Or you could run `helm show all bitnami\/mysql` to get all\ninformation about the chart.\n\nWhenever you install a chart, a new release is created. So one chart can be\ninstalled multiple times into the same cluster. And each can be independently\nmanaged and upgraded.\n\nThe `helm install` command is a very powerful command with many capabilities. 
To\nlearn more about it, check out the [Using Helm Guide]()\n\n## Learn About Releases\n\nIt's easy to see what has been released using Helm:\n\n```console\n$ helm list\nNAME            \tNAMESPACE\tREVISION\tUPDATED                             \tSTATUS  \tCHART      \tAPP VERSION\nmysql-1612624192\tdefault  \t1       \t2021-02-06 16:09:56.283059 +0100 CET\tdeployed\tmysql-8.3.0\t8.0.23\n```\n\nThe `helm list` (or `helm ls`) function will show you a list of all deployed releases.\n\n## Uninstall a Release\n\nTo uninstall a release, use the `helm uninstall` command:\n\n```console\n$ helm uninstall mysql-1612624192\nrelease \"mysql-1612624192\" uninstalled\n```\n\nThis will uninstall `mysql-1612624192` from Kubernetes, which will remove all\nresources associated with the release as well as the release history.\n\nIf the flag `--keep-history` is provided, release history will be kept. You will\nbe able to request information about that release:\n\n```console\n$ helm status mysql-1612624192\nStatus: UNINSTALLED\n...\n```\n\nBecause Helm tracks your releases even after you've uninstalled them, you can\naudit a cluster's history, and even undelete a release (with `helm rollback`).\n\n## Reading the Help Text\n\nTo learn more about the available Helm commands, use `helm help` or type a\ncommand followed by the `-h` flag:\n\n```console\n$ helm get -h\n```","site":"helm","answers_cleaned":"    title   Quickstart Guide  description   How to install and get started with Helm including instructions for distros  FAQs  and plugins   weight  1 aliases     docs quickstart         This guide covers how you can quickly get started using Helm      Prerequisites  The following prerequisites are required for a successful and properly secured use of Helm   1  A Kubernetes cluster 2  Deciding what security configurations to apply to your installation  if any 3  Installing and configuring Helm       Install Kubernetes or have access to a cluster    You must have Kubernetes installed  For 
the latest release of Helm  we   recommend the latest stable release of Kubernetes  which in most cases is the   second latest minor release    You should also have a local configured copy of  kubectl    See the  Helm Version Support Policy  https   helm sh docs topics version skew   for the maximum version skew supported between Helm and Kubernetes      Install Helm  Download a binary release of the Helm client  You can use tools like  homebrew   or look at  the official releases page  https   github com helm helm releases    For more details  or for other options  see  the installation guide         Initialize a Helm Chart Repository  Once you have Helm ready  you can add a chart repository  Check  Artifact Hub  https   artifacthub io packages search kind 0  for available Helm chart repositories      console   helm repo add bitnami https   charts bitnami com bitnami      Once this is installed  you will be able to list the charts you can install      console   helm search repo bitnami NAME                              CHART VERSION APP VERSION   DESCRIPTION bitnami bitnami common            0 0 9         0 0 9         DEPRECATED Chart with custom templates used in     bitnami airflow                   8 0 2         2 0 0         Apache Airflow is a platform to programmaticall    bitnami apache                    8 2 3         2 4 46        Chart for Apache HTTP Server bitnami aspnet core               1 2 3         3 1 9         ASP NET Core is an open source framework create          and many more         Install an Example Chart  To install a chart  you can run the  helm install  command  Helm has several ways to find and install a chart  but the easiest is to use the  bitnami  charts      console   helm repo update                Make sure we get the latest list of charts   helm install bitnami mysql   generate name NAME  mysql 1612624192 LAST DEPLOYED  Sat Feb  6 16 09 56 2021 NAMESPACE  default STATUS  deployed REVISION  1 TEST SUITE  None NOTES           In 
the example above  the  bitnami mysql  chart was released  and the name of our new release is  mysql 1612624192    You get a simple idea of the features of this MySQL chart by running  helm show chart bitnami mysql   Or you could run  helm show all bitnami mysql  to get all information about the chart   Whenever you install a chart  a new release is created  So one chart can be installed multiple times into the same cluster  And each can be independently managed and upgraded   The  helm install  command is a very powerful command with many capabilities  To learn more about it  check out the  Using Helm Guide        Learn About Releases  It s easy to see what has been released using Helm      console   helm list NAME             NAMESPACE REVISION UPDATED                              STATUS   CHART       APP VERSION mysql 1612624192 default   1        2021 02 06 16 09 56 283059  0100 CET deployed mysql 8 3 0 8 0 23      The  helm list   or  helm ls   function will show you a list of all deployed releases      Uninstall a Release  To uninstall a release  use the  helm uninstall  command      console   helm uninstall mysql 1612624192 release  mysql 1612624192  uninstalled      This will uninstall  mysql 1612624192  from Kubernetes  which will remove all resources associated with the release as well as the release history   If the flag    keep history  is provided  release history will be kept  You will be able to request information about that release      console   helm status mysql 1612624192 Status  UNINSTALLED          Because Helm tracks your releases even after you ve uninstalled them  you can audit a cluster s history  and even undelete a release  with  helm rollback        Reading the Help Text  To learn more about the available Helm commands  use  helm help  or type a command followed by the   h  flag      console   helm get  h    "}
{"questions":"helm Helm cheatsheet Basic interpretations context Helm cheatsheet featuring all the necessary commands required to manage an application through Helm weight 4 title Cheat Sheet","answers":"---\ntitle: \"Cheat Sheet\"\ndescription: \"Helm cheatsheet\"\nweight: 4\n---\n\nHelm cheatsheet featuring all the necessary commands required to manage an application through Helm.\n\n-----------------------------------------------------------------------------------------------------------------------------------------------\n### Basic interpretations\/context\n\nChart:\n- It is the name of your chart in case it has been pulled and untarred.\n- It is <repo_name>\/<chart_name> in case the repository has been added but chart not pulled.\n- It is the URL\/Absolute path to the chart.\n\nName:\n- It is the name you want to give to your current helm chart installation.\n\nRelease:\n- Is the name you assigned to an installation instance. \n\nRevision:\n- Is the value from the Helm history command\n\nRepo-name:\n- The name of a repository. 
\n\nDIR:\n- Directory name\/path\n\n------------------------------------------------------------------------------------------------------------------------------------------------\n\n### Chart Management\n\n```bash\nhelm create <name>                      # Creates a chart directory along with the common files and directories used in a chart.\nhelm package <chart-path>               # Packages a chart into a versioned chart archive file.\nhelm lint <chart>                       # Run tests to examine a chart and identify possible issues:\nhelm show all <chart>                   # Inspect a chart and list its contents:\nhelm show values <chart>                # Displays the contents of the values.yaml file\nhelm pull <chart>                       # Download\/pull chart \nhelm pull <chart> --untar=true          # If set to true, will untar the chart after downloading it\nhelm pull <chart> --verify              # Verify the package before using it\nhelm pull <chart> --version <number>    # Default-latest is used, specify a version constraint for the chart version to use\nhelm dependency list <chart>            # Display a list of a chart\u2019s dependencies:\n``` \n--------------------------------------------------------------------------------------------------------------------------------------------------\n\n### Install and Uninstall Apps\n\n```bash\nhelm install <name> <chart>                           # Install the chart with a name\nhelm install <name> <chart> --namespace <namespace>   # Install the chart in a specific namespace\nhelm install <name> <chart> --set key1=val1,key2=val2 # Set values on the command line (can specify multiple or separate values with commas)\nhelm install <name> <chart> --values <yaml-file\/url>  # Install the chart with your specified values\nhelm install <name> <chart> --dry-run --debug         # Run a test installation to validate chart (p)\nhelm install <name> <chart> --verify                  # Verify the package before using it 
\nhelm install <name> <chart> --dependency-update       # update dependencies if they are missing before installing the chart\nhelm uninstall <name>                                 # Uninstall a release\n```\n------------------------------------------------------------------------------------------------------------------------------------------------\n### Perform App Upgrade and Rollback\n\n```bash\nhelm upgrade <release> <chart>                            # Upgrade a release\nhelm upgrade <release> <chart> --atomic                   # If set, upgrade process rolls back changes made in case of failed upgrade.\nhelm upgrade <release> <chart> --dependency-update        # update dependencies if they are missing before installing the chart\nhelm upgrade <release> <chart> --version <version_number> # specify a version constraint for the chart version to use\nhelm upgrade <release> <chart> --values                   # specify values in a YAML file or a URL (can specify multiple)\nhelm upgrade <release> <chart> --set key1=val1,key2=val2  # Set values on the command line (can specify multiple or separate values)\nhelm upgrade <release> <chart> --force                    # Force resource updates through a replacement strategy\nhelm rollback <release> <revision>                        # Roll back a release to a specific revision\nhelm rollback <release> <revision>  --cleanup-on-fail     # Allow deletion of new resources created in this rollback when rollback fails\n``` \n------------------------------------------------------------------------------------------------------------------------------------------------\n### List, Add, Remove, and Update Repositories\n\n```bash\nhelm repo add <repo-name> <url>   # Add a repository from the internet:\nhelm repo list                    # List added chart repositories\nhelm repo update                  # Update information of available charts locally from chart repositories\nhelm repo remove <repo_name>      # Remove one or more 
chart repositories\nhelm repo index <DIR>             # Read the current directory and generate an index file based on the charts found.\nhelm repo index <DIR> --merge     # Merge the generated index with an existing index file\nhelm search repo <keyword>        # Search repositories for a keyword in charts\nhelm search hub <keyword>         # Search for charts in the Artifact Hub or your own hub instance\n```\n-------------------------------------------------------------------------------------------------------------------------------------------------\n### Helm Release monitoring\n\n```bash\nhelm list                       # Lists all of the releases for a specified namespace, uses current namespace context if namespace not specified\nhelm list --all                 # Show all releases without any filter applied, can use -a\nhelm list --all-namespaces      # List releases across all namespaces, we can use -A\nhelm list -l key1=value1,key2=value2 # Selector (label query) to filter on, supports '=', '==', and '!='\nhelm list --date                # Sort by release date\nhelm list --deployed            # Show deployed releases. If no other is specified, this will be automatically enabled\nhelm list --pending             # Show pending releases\nhelm list --failed              # Show failed releases\nhelm list --uninstalled         # Show uninstalled releases (if 'helm uninstall --keep-history' was used)\nhelm list --superseded          # Show superseded releases\nhelm list -o yaml               # Prints the output in the specified format. 
Allowed values: table, json, yaml (default table)\nhelm status <release>           # This command shows the status of a named release.\nhelm status <release> --revision <number>   # if set, display the status of the named release with revision\nhelm history <release>          # Historical revisions for a given release.\nhelm env                        # Env prints out all the environment information in use by Helm.\n```\n-------------------------------------------------------------------------------------------------------------------------------------------------\n### Download Release Information\n\n```bash\nhelm get all <release>      # A human readable collection of information about the notes, hooks, supplied values, and generated manifest file of the given release.\nhelm get hooks <release>    # This command downloads hooks for a given release. Hooks are formatted in YAML and separated by the YAML '---\\n' separator.\nhelm get manifest <release> # A manifest is a YAML-encoded representation of the Kubernetes resources that were generated from this release's chart(s). If a chart is dependent on other charts, those resources will also be included in the manifest.\nhelm get notes <release>    # Shows notes provided by the chart of a named release.\nhelm get values <release>   # Downloads a values file for a given release. 
use -o to format output\n```\n-------------------------------------------------------------------------------------------------------------------------------------------------\n### Plugin Management\n\n```bash\nhelm plugin install <path\/url1>     # Install plugins\nhelm plugin list                    # View a list of all installed plugins\nhelm plugin update <plugin>         # Update plugins\nhelm plugin uninstall <plugin>      # Uninstall a plugin\n```\n-------------------------------------------------------------------------------------------------------------------------------------------------","site":"helm"}
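The cheat sheet's `helm history <release>` and `helm rollback <release> <revision>` entries both rest on Helm's revision bookkeeping: each install, upgrade, or rollback appends a new numbered revision, and a rollback re-deploys an old revision's chart under a brand-new number rather than rewinding history. A minimal toy model of that behavior (illustrative only, not Helm's actual implementation; the release and chart names are made up):

```python
# Toy model of Helm release revisions: install creates revision 1, every
# upgrade or rollback appends the next revision, and the previously
# deployed revision becomes "superseded" (cf. `helm list --superseded`).
class Release:
    def __init__(self, name, chart):
        self.name = name
        self.history = []                 # one dict per revision
        self._record(chart)               # `helm install` -> revision 1

    def _record(self, chart):
        for rev in self.history:
            if rev["status"] == "deployed":
                rev["status"] = "superseded"
        self.history.append({"revision": len(self.history) + 1,
                             "chart": chart, "status": "deployed"})

    def upgrade(self, chart):
        self._record(chart)

    def rollback(self, revision):
        # Rolling back to revision N creates a NEW revision with N's chart.
        self._record(self.history[revision - 1]["chart"])


r = Release("happy-panda", "wordpress-9.0.0")   # hypothetical chart version
r.upgrade("wordpress-9.0.1")
r.rollback(1)                                   # like `helm rollback happy-panda 1`
print([h["revision"] for h in r.history])       # -> [1, 2, 3]
print(r.history[-1]["chart"])                   # -> wordpress-9.0.0
```

Note that the rollback produced revision 3, not a return to revision 1, which is why `helm history` keeps growing after rollbacks.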
{"questions":"helm title Installing Helm aliases docs install Learn how to install and get running with Helm weight 2 This guide shows how to install the Helm CLI Helm can be installed either from source or from pre built binary releases","answers":"---\ntitle: \"Installing Helm\"\ndescription: \"Learn how to install and get running with Helm.\"\nweight: 2\naliases: [\"\/docs\/install\/\"]\n---\n\nThis guide shows how to install the Helm CLI. Helm can be installed either from\nsource, or from pre-built binary releases.\n\n## From The Helm Project\n\nThe Helm project provides two ways to fetch and install Helm. These are the\nofficial methods to get Helm releases. In addition to that, the Helm community\nprovides methods to install Helm through different package managers.\nInstallation through those methods can be found below the official methods.\n\n### From the Binary Releases\n\nEvery [release](https:\/\/github.com\/helm\/helm\/releases) of Helm provides binary\nreleases for a variety of OSes. These binary versions can be manually downloaded\nand installed.\n\n1. Download your [desired version](https:\/\/github.com\/helm\/helm\/releases)\n2. Unpack it (`tar -zxvf helm-v3.0.0-linux-amd64.tar.gz`)\n3. Find the `helm` binary in the unpacked directory, and move it to its desired\n   destination (`mv linux-amd64\/helm \/usr\/local\/bin\/helm`)\n\nFrom there, you should be able to run the client and [add the stable\nchart repository](https:\/\/helm.sh\/docs\/intro\/quickstart\/#initialize-a-helm-chart-repository):\n`helm help`.\n\n**Note:** Helm automated tests are performed for Linux AMD64 only during\nGitHub Actions builds and releases. 
Testing on other OSes is the responsibility of\nthe community requesting Helm for the OS in question.\n\n### From Script\n\nHelm now has an installer script that will automatically grab the latest version\nof Helm and [install it\nlocally](https:\/\/raw.githubusercontent.com\/helm\/helm\/main\/scripts\/get-helm-3).\n\nYou can fetch that script, and then execute it locally. It's well documented so\nthat you can read through it and understand what it is doing before you run it.\n\n```console\n$ curl -fsSL -o get_helm.sh https:\/\/raw.githubusercontent.com\/helm\/helm\/main\/scripts\/get-helm-3\n$ chmod 700 get_helm.sh\n$ .\/get_helm.sh\n```\n\nYes, you can `curl\nhttps:\/\/raw.githubusercontent.com\/helm\/helm\/main\/scripts\/get-helm-3 | bash` if\nyou want to live on the edge.\n\n## Through Package Managers\n\nThe Helm community provides the ability to install Helm through operating system\npackage managers. These are not supported by the Helm project and are not\nconsidered trusted 3rd parties.\n\n### From Homebrew (macOS)\n\nMembers of the Helm community have contributed a Helm formula build to Homebrew.\nThis formula is generally up to date.\n\n```console\nbrew install helm\n```\n\n(Note: There is also a formula for emacs-helm, which is a different project.)\n\n### From Chocolatey (Windows)\n\nMembers of the Helm community have contributed a [Helm\npackage](https:\/\/chocolatey.org\/packages\/kubernetes-helm) build to\n[Chocolatey](https:\/\/chocolatey.org\/). This package is generally up to date.\n\n```console\nchoco install kubernetes-helm\n```\n\n### From Scoop (Windows)\n\nMembers of the Helm community have contributed a [Helm\npackage](https:\/\/github.com\/ScoopInstaller\/Main\/blob\/master\/bucket\/helm.json) build to [Scoop](https:\/\/scoop.sh). 
This package is generally up to date.\n\n```console\nscoop install helm\n```\n\n### From Winget (Windows)\n\nMembers of the Helm community have contributed a [Helm\npackage](https:\/\/github.com\/microsoft\/winget-pkgs\/tree\/master\/manifests\/h\/Helm\/Helm) build to [Winget](https:\/\/learn.microsoft.com\/en-us\/windows\/package-manager\/). This package is generally up to date.\n\n```console\nwinget install Helm.Helm\n```\n\n### From Apt (Debian\/Ubuntu)\n\nMembers of the Helm community have contributed a [Helm\npackage](https:\/\/helm.baltorepo.com\/stable\/debian\/) for Apt. This package is\ngenerally up to date.\n\n```console\ncurl https:\/\/baltocdn.com\/helm\/signing.asc | gpg --dearmor | sudo tee \/usr\/share\/keyrings\/helm.gpg > \/dev\/null\nsudo apt-get install apt-transport-https --yes\necho \"deb [arch=$(dpkg --print-architecture) signed-by=\/usr\/share\/keyrings\/helm.gpg] https:\/\/baltocdn.com\/helm\/stable\/debian\/ all main\" | sudo tee \/etc\/apt\/sources.list.d\/helm-stable-debian.list\nsudo apt-get update\nsudo apt-get install helm\n```\n\n### From dnf\/yum (Fedora)\n\nSince Fedora 35, helm is available in the official repository.\nYou can install helm by invoking:\n\n```console\nsudo dnf install helm\n```\n\n### From Snap\n\nThe [Snapcrafters](https:\/\/github.com\/snapcrafters) community maintains the Snap\nversion of the [Helm package](https:\/\/snapcraft.io\/helm):\n\n```console\nsudo snap install helm --classic\n```\n\n### From pkg (FreeBSD)\n\nMembers of the FreeBSD community have contributed a [Helm\npackage](https:\/\/www.freshports.org\/sysutils\/helm) build to the\n[FreeBSD Ports Collection](https:\/\/man.freebsd.org\/ports).\nThis package is generally up to date.\n\n```console\npkg install helm\n```\n\n### Development Builds\n\nIn addition to releases, you can download or install development snapshots of\nHelm.\n\n### From Canary Builds\n\n\"Canary\" builds are versions of the Helm software that are built from the latest\n`main` 
branch. They are not official releases, and may not be stable. However,\nthey offer the opportunity to test the cutting edge features.\n\nCanary Helm binaries are stored at [get.helm.sh](https:\/\/get.helm.sh). Here are\nlinks to the common builds:\n\n- [Linux AMD64](https:\/\/get.helm.sh\/helm-canary-linux-amd64.tar.gz)\n- [macOS AMD64](https:\/\/get.helm.sh\/helm-canary-darwin-amd64.tar.gz)\n- [Experimental Windows\n  AMD64](https:\/\/get.helm.sh\/helm-canary-windows-amd64.zip)\n\n### From Source (Linux, macOS)\n\nBuilding Helm from source is slightly more work, but is the best way to go if\nyou want to test the latest (pre-release) Helm version.\n\nYou must have a working Go environment.\n\n```console\n$ git clone https:\/\/github.com\/helm\/helm.git\n$ cd helm\n$ make\n```\n\nIf required, it will fetch the dependencies and cache them, and validate\nconfiguration. It will then compile `helm` and place it in `bin\/helm`.\n\n## Conclusion\n\nIn most cases, installation is as simple as getting a pre-built `helm` binary.\nThis document covers additional cases for those who want to do more\nsophisticated things with Helm.\n\nOnce you have the Helm Client successfully installed, you can move on to using\nHelm to manage charts and [add the stable\nchart repository](https:\/\/helm.sh\/docs\/intro\/quickstart\/#initialize-a-helm-chart-repository).","site":"helm"}
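The binary archive names that appear throughout the installation guide (`helm-v3.0.0-linux-amd64.tar.gz`, `helm-canary-darwin-amd64.tar.gz`, `helm-canary-windows-amd64.zip`) all follow a single `helm-<version>-<os>-<arch>` pattern on get.helm.sh, with Windows builds shipped as `.zip` and everything else as `.tar.gz`. A small sketch of that naming convention, inferred from the filenames above; the helper names are my own and not part of any official Helm tooling:

```python
# Sketch of the get.helm.sh asset-naming convention, inferred from the
# filenames shown in this guide (not an official Helm API).
def helm_asset_name(version: str, goos: str, goarch: str) -> str:
    # Windows builds are distributed as .zip, all other OSes as .tar.gz.
    ext = "zip" if goos == "windows" else "tar.gz"
    return f"helm-{version}-{goos}-{goarch}.{ext}"

def helm_download_url(version: str, goos: str, goarch: str) -> str:
    return f"https://get.helm.sh/{helm_asset_name(version, goos, goarch)}"

print(helm_download_url("v3.0.0", "linux", "amd64"))
# -> https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
print(helm_download_url("canary", "windows", "amd64"))
# -> https://get.helm.sh/helm-canary-windows-amd64.zip
```

This is why step 2 of the manual install unpacks a `helm-v3.0.0-linux-amd64.tar.gz` whose top-level directory is named after the `<os>-<arch>` pair (`linux-amd64/helm`).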
{"questions":"helm Explains the basics of Helm install md the Helm client Kubernetes cluster It assumes that you have already installed ref title Using Helm weight 3 This guide explains the basics of using Helm to manage packages on your","answers":"---\ntitle: \"Using Helm\"\ndescription: \"Explains the basics of Helm.\"\nweight: 3\n---\n\nThis guide explains the basics of using Helm to manage packages on your\nKubernetes cluster. It assumes that you have already [installed]() the Helm client.\n\nIf you are simply interested in running a few quick commands, you may wish to\nbegin with the [Quickstart Guide](). This chapter\ncovers the particulars of Helm commands, and explains how to use Helm.\n\n## Three Big Concepts\n\nA *Chart* is a Helm package. It contains all of the resource definitions\nnecessary to run an application, tool, or service inside of a Kubernetes\ncluster. Think of it like the Kubernetes equivalent of a Homebrew formula, an\nApt dpkg, or a Yum RPM file.\n\nA *Repository* is the place where charts can be collected and shared. It's like\nPerl's [CPAN archive](https:\/\/www.cpan.org) or the [Fedora Package\nDatabase](https:\/\/src.fedoraproject.org\/), but for Kubernetes packages.\n\nA *Release* is an instance of a chart running in a Kubernetes cluster. One chart\ncan often be installed many times into the same cluster. And each time it is\ninstalled, a new _release_ is created. Consider a MySQL chart. If you want two\ndatabases running in your cluster, you can install that chart twice. Each one\nwill have its own _release_, which will in turn have its own _release name_.\n\nWith these concepts in mind, we can now explain Helm like this:\n\nHelm installs _charts_ into Kubernetes, creating a new _release_ for each\ninstallation. And to find new charts, you can search Helm chart _repositories_.\n\n## 'helm search': Finding Charts\n\nHelm comes with a powerful search command. 
It can be used to search two\ndifferent types of source:\n\n- `helm search hub` searches [the Artifact Hub](https:\/\/artifacthub.io), which\n  lists helm charts from dozens of different repositories.\n- `helm search repo` searches the repositories that you have added to your local\n  helm client (with `helm repo add`). This search is done over local data, and\n  no public network connection is needed.\n\nYou can find publicly available charts by running `helm search hub`:\n\n```console\n$ helm search hub wordpress\nURL                                                 CHART VERSION APP VERSION DESCRIPTION\nhttps:\/\/hub.helm.sh\/charts\/bitnami\/wordpress        7.6.7         5.2.4       Web publishing platform for building blogs and ...\nhttps:\/\/hub.helm.sh\/charts\/presslabs\/wordpress-...  v0.6.3        v0.6.3      Presslabs WordPress Operator Helm Chart\nhttps:\/\/hub.helm.sh\/charts\/presslabs\/wordpress-...  v0.7.1        v0.7.1      A Helm chart for deploying a WordPress site on ...\n```\n\nThe above searches for all `wordpress` charts on Artifact Hub.\n\nWith no filter, `helm search hub` shows you all of the available charts.\n\n`helm search hub` exposes the URL to the location on [artifacthub.io](https:\/\/artifacthub.io\/) but not the actual Helm repo. 
`helm search hub --list-repo-url` exposes the actual Helm repo URL which comes in handy when you are looking to add a new repo: `helm repo\nadd [NAME] [URL]`.\n\nUsing `helm search repo`, you can find the names of the charts in repositories\nyou have already added:\n\n```console\n$ helm repo add brigade https:\/\/brigadecore.github.io\/charts\n\"brigade\" has been added to your repositories\n$ helm search repo brigade\nNAME                          CHART VERSION APP VERSION DESCRIPTION\nbrigade\/brigade               1.3.2         v1.2.1      Brigade provides event-driven scripting of Kube...\nbrigade\/brigade-github-app    0.4.1         v0.2.1      The Brigade GitHub App, an advanced gateway for...\nbrigade\/brigade-github-oauth  0.2.0         v0.20.0     The legacy OAuth GitHub Gateway for Brigade\nbrigade\/brigade-k8s-gateway   0.1.0                     A Helm chart for Kubernetes\nbrigade\/brigade-project       1.0.0         v1.0.0      Create a Brigade project\nbrigade\/kashti                0.4.0         v0.4.0      A Helm chart for Kubernetes\n```\n\nHelm search uses a fuzzy string matching algorithm, so you can type parts of\nwords or phrases:\n\n```console\n$ helm search repo kash\nNAME            CHART VERSION APP VERSION DESCRIPTION\nbrigade\/kashti  0.4.0         v0.4.0      A Helm chart for Kubernetes\n```\n\nSearch is a good way to find available packages. Once you have found a package\nyou want to install, you can use `helm install` to install it.\n\n## 'helm install': Installing a Package\n\nTo install a new package, use the `helm install` command. 
At its simplest, it\ntakes two arguments: A release name that you pick, and the name of the chart you\nwant to install.\n\n```console\n$ helm install happy-panda bitnami\/wordpress\nNAME: happy-panda\nLAST DEPLOYED: Tue Jan 26 10:27:17 2021\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nNOTES:\n** Please be patient while the chart is being deployed **\n\nYour WordPress site can be accessed through the following DNS name from within your cluster:\n\n    happy-panda-wordpress.default.svc.cluster.local (port 80)\n\nTo access your WordPress site from outside the cluster follow the steps below:\n\n1. Get the WordPress URL by running these commands:\n\n  NOTE: It may take a few minutes for the LoadBalancer IP to be available.\n        Watch the status with: 'kubectl get svc --namespace default -w happy-panda-wordpress'\n\n   export SERVICE_IP=$(kubectl get svc --namespace default happy-panda-wordpress --template \"\")\n   echo \"WordPress URL: http:\/\/$SERVICE_IP\/\"\n   echo \"WordPress Admin URL: http:\/\/$SERVICE_IP\/admin\"\n\n2. Open a browser and access WordPress using the obtained URL.\n\n3. Login with the following credentials below to see your blog:\n\n  echo Username: user\n  echo Password: $(kubectl get secret --namespace default happy-panda-wordpress -o jsonpath=\"{.data.wordpress-password}\" | base64 --decode)\n```\n\nNow the `wordpress` chart is installed. Note that installing a chart creates a\nnew _release_ object. The release above is named `happy-panda`. 
(If you want\nHelm to generate a name for you, leave off the release name and use\n`--generate-name`.)\n\nDuring installation, the `helm` client will print useful information about which\nresources were created, what the state of the release is, and also whether there\nare additional configuration steps you can or should take.\n\nHelm installs resources in the following order:\n\n- Namespace\n- NetworkPolicy\n- ResourceQuota\n- LimitRange\n- PodSecurityPolicy\n- PodDisruptionBudget\n- ServiceAccount\n- Secret\n- SecretList\n- ConfigMap\n- StorageClass\n- PersistentVolume\n- PersistentVolumeClaim\n- CustomResourceDefinition\n- ClusterRole\n- ClusterRoleList\n- ClusterRoleBinding\n- ClusterRoleBindingList\n- Role\n- RoleList\n- RoleBinding\n- RoleBindingList\n- Service\n- DaemonSet\n- Pod\n- ReplicationController\n- ReplicaSet\n- Deployment\n- HorizontalPodAutoscaler\n- StatefulSet\n- Job\n- CronJob\n- Ingress\n- APIService\n\nHelm does not wait until all of the resources are running before it exits. Many\ncharts require Docker images that are over 600MB in size, and may take a long\ntime to install into the cluster.\n\nTo keep track of a release's state, or to re-read configuration information, you\ncan use `helm status`:\n\n```console\n$ helm status happy-panda\nNAME: happy-panda\nLAST DEPLOYED: Tue Jan 26 10:27:17 2021\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nNOTES:\n** Please be patient while the chart is being deployed **\n\nYour WordPress site can be accessed through the following DNS name from within your cluster:\n\n    happy-panda-wordpress.default.svc.cluster.local (port 80)\n\nTo access your WordPress site from outside the cluster follow the steps below:\n\n1. 
Get the WordPress URL by running these commands:\n\n  NOTE: It may take a few minutes for the LoadBalancer IP to be available.\n        Watch the status with: 'kubectl get svc --namespace default -w happy-panda-wordpress'\n\n   export SERVICE_IP=$(kubectl get svc --namespace default happy-panda-wordpress --template \"\")\n   echo \"WordPress URL: http:\/\/$SERVICE_IP\/\"\n   echo \"WordPress Admin URL: http:\/\/$SERVICE_IP\/admin\"\n\n2. Open a browser and access WordPress using the obtained URL.\n\n3. Login with the following credentials below to see your blog:\n\n  echo Username: user\n  echo Password: $(kubectl get secret --namespace default happy-panda-wordpress -o jsonpath=\"{.data.wordpress-password}\" | base64 --decode)\n```\n\nThe above shows the current state of your release.\n\n### Customizing the Chart Before Installing\n\nInstalling the way we have here will only use the default configuration options\nfor this chart. Many times, you will want to customize the chart to use your\npreferred configuration.\n\nTo see what options are configurable on a chart, use `helm show values`:\n\n```console\n$ helm show values bitnami\/wordpress\n## Global Docker image parameters\n## Please, note that this will override the image parameters, including dependencies, configured to use the global value\n## Current available global Docker image parameters: imageRegistry and imagePullSecrets\n##\n# global:\n#   imageRegistry: myRegistryName\n#   imagePullSecrets:\n#     - myRegistryKeySecretName\n#   storageClass: myStorageClass\n\n## Bitnami WordPress image version\n## ref: https:\/\/hub.docker.com\/r\/bitnami\/wordpress\/tags\/\n##\nimage:\n  registry: docker.io\n  repository: bitnami\/wordpress\n  tag: 5.6.0-debian-10-r35\n  [..]\n```\n\nYou can then override any of these settings in a YAML formatted file, and then\npass that file during installation.\n\n```console\n$ echo '{mariadb.auth.database: user0db, mariadb.auth.username: user0}' > values.yaml\n$ helm install -f 
values.yaml bitnami\/wordpress --generate-name\n```\n\nThe above will create a default MariaDB user with the name `user0`, and grant\nthis user access to a newly created `user0db` database, but will accept all the\nrest of the defaults for that chart.\n\nThere are two ways to pass configuration data during install:\n\n- `--values` (or `-f`): Specify a YAML file with overrides. This can be\n  specified multiple times and the rightmost file will take precedence\n- `--set`: Specify overrides on the command line.\n\nIf both are used, `--set` values are merged into `--values` with higher\nprecedence. Overrides specified with `--set` are persisted in a Secret.\nValues that have been `--set` can be viewed for a given release with `helm get\nvalues <release-name>`. Values that have been `--set` can be cleared by running\n`helm upgrade` with `--reset-values` specified.\n\n#### The Format and Limitations of `--set`\n\nThe `--set` option takes zero or more name\/value pairs. At its simplest, it is\nused like this: `--set name=value`. The YAML equivalent of that is:\n\n```yaml\nname: value\n```\n\nMultiple values are separated by `,` characters. So `--set a=b,c=d` becomes:\n\n```yaml\na: b\nc: d\n```\n\nMore complex expressions are supported. For example, `--set outer.inner=value`\nis translated into this:\n```yaml\nouter:\n  inner: value\n```\n\nLists can be expressed by enclosing values in `{` and `}`. For example, `--set\nname={a, b, c}` translates to:\n\n```yaml\nname:\n  - a\n  - b\n  - c\n```\n\nCertain name\/key can be set to be `null` or to be an empty array `[]`. For example, `--set name=[],a=null` translates\n\n```yaml\nname:\n  - a\n  - b\n  - c\na: b\n```\n\nto\n\n```yaml\nname: []\na: null\n```\n\nAs of Helm 2.5.0, it is possible to access list items using an array index\nsyntax. For example, `--set servers[0].port=80` becomes:\n\n```yaml\nservers:\n  - port: 80\n```\n\nMultiple values can be set this way. 
The line `--set\nservers[0].port=80,servers[0].host=example` becomes:\n\n```yaml\nservers:\n  - port: 80\n    host: example\n```\n\nSometimes you need to use special characters in your `--set` lines. You can use\na backslash to escape the characters; `--set name=value1\\,value2` will become:\n\n```yaml\nname: \"value1,value2\"\n```\n\nSimilarly, you can escape dot sequences as well, which may come in handy when\ncharts use the `toYaml` function to parse annotations, labels and node\nselectors. The syntax for `--set nodeSelector.\"kubernetes\\.io\/role\"=master`\nbecomes:\n\n```yaml\nnodeSelector:\n  kubernetes.io\/role: master\n```\n\nDeeply nested data structures can be difficult to express using `--set`. Chart\ndesigners are encouraged to consider the `--set` usage when designing the format\nof a `values.yaml` file  (read more about [Values Files](..\/chart_template_guide\/values_files\/)).\n\n### More Installation Methods\n\nThe `helm install` command can install from several sources:\n\n- A chart repository (as we've seen above)\n- A local chart archive (`helm install foo foo-0.1.1.tgz`)\n- An unpacked chart directory (`helm install foo path\/to\/foo`)\n- A full URL (`helm install foo https:\/\/example.com\/charts\/foo-1.2.3.tgz`)\n\n## 'helm upgrade' and 'helm rollback': Upgrading a Release, and Recovering on Failure\n\nWhen a new version of a chart is released, or when you want to change the\nconfiguration of your release, you can use the `helm upgrade` command.\n\nAn upgrade takes an existing release and upgrades it according to the\ninformation you provide. Because Kubernetes charts can be large and complex,\nHelm tries to perform the least invasive upgrade. 
It will only update things\nthat have changed since the last release.\n\n```console\n$ helm upgrade -f panda.yaml happy-panda bitnami\/wordpress\n```\n\nIn the above case, the `happy-panda` release is upgraded with the same chart,\nbut with a new YAML file:\n\n```yaml\nmariadb.auth.username: user1\n```\n\nWe can use `helm get values` to see whether that new setting took effect.\n\n```console\n$ helm get values happy-panda\nmariadb:\n  auth:\n    username: user1\n```\n\nThe `helm get` command is a useful tool for looking at a release in the cluster.\nAnd as we can see above, it shows that our new values from `panda.yaml` were\ndeployed to the cluster.\n\nNow, if something does not go as planned during a release, it is easy to roll\nback to a previous release using `helm rollback [RELEASE] [REVISION]`.\n\n```console\n$ helm rollback happy-panda 1\n```\n\nThe above rolls back our happy-panda to its very first release version. A\nrelease version is an incremental revision. Every time an install, upgrade, or\nrollback happens, the revision number is incremented by 1. The first revision\nnumber is always 1. And we can use `helm history [RELEASE]` to see revision\nnumbers for a certain release.\n\n## Helpful Options for Install\/Upgrade\/Rollback\n\nThere are several other helpful options you can specify for customizing the\nbehavior of Helm during an install\/upgrade\/rollback. Please note that this is\nnot a full list of cli flags. To see a description of all flags, just run `helm\n<command> --help`.\n\n- `--timeout`: A [Go duration](https:\/\/golang.org\/pkg\/time\/#ParseDuration) value\n  to wait for Kubernetes commands to complete. This defaults to `5m0s`.\n- `--wait`: Waits until all Pods are in a ready state, PVCs are bound,\n  Deployments have minimum (`Desired` minus `maxUnavailable`) Pods in ready\n  state and Services have an IP address (and Ingress if a `LoadBalancer`) before\n  marking the release as successful. 
It will wait for as long as the `--timeout`\n  value. If timeout is reached, the release will be marked as `FAILED`. Note: In\n  scenarios where Deployment has `replicas` set to 1 and `maxUnavailable` is not\n  set to 0 as part of rolling update strategy, `--wait` will return as ready as\n  it has satisfied the minimum Pod in ready condition.\n- `--no-hooks`: This skips running hooks for the command\n- `--recreate-pods` (only available for `upgrade` and `rollback`): This flag\n  will cause all pods to be recreated (with the exception of pods belonging to\n  deployments). (DEPRECATED in Helm 3)\n\n## 'helm uninstall': Uninstalling a Release\n\nWhen it is time to uninstall a release from the cluster, use the `helm\nuninstall` command:\n\n```console\n$ helm uninstall happy-panda\n```\n\nThis will remove the release from the cluster. You can see all of your currently\ndeployed releases with the `helm list` command:\n\n```console\n$ helm list\nNAME            VERSION UPDATED                         STATUS          CHART\ninky-cat        1       Wed Sep 28 12:59:46 2016        DEPLOYED        alpine-0.1.0\n```\n\nFrom the output above, we can see that the `happy-panda` release was\nuninstalled.\n\nIn previous versions of Helm, when a release was deleted, a record of its\ndeletion would remain. In Helm 3, deletion removes the release record as well.\nIf you wish to keep a deletion release record, use `helm uninstall\n--keep-history`. 
Using `helm list --uninstalled` will only show releases that\nwere uninstalled with the `--keep-history` flag.\n\nThe `helm list --all` flag will show you all release records that Helm has\nretained, including records for failed or deleted items (if `--keep-history` was\nspecified):\n\n```console\n$  helm list --all\nNAME            VERSION UPDATED                         STATUS          CHART\nhappy-panda     2       Wed Sep 28 12:47:54 2016        UNINSTALLED     wordpress-10.4.5.6.0\ninky-cat        1       Wed Sep 28 12:59:46 2016        DEPLOYED        alpine-0.1.0\nkindred-angelf  2       Tue Sep 27 16:16:10 2016        UNINSTALLED     alpine-0.1.0\n```\n\nNote that because releases are now deleted by default, it is no longer possible\nto rollback an uninstalled resource.\n\n## 'helm repo': Working with Repositories\n\nHelm 3 no longer ships with a default chart repository. The `helm repo` command\ngroup provides commands to add, list, and remove repositories.\n\nYou can see which repositories are configured using `helm repo list`:\n\n```console\n$ helm repo list\nNAME            URL\nstable          https:\/\/charts.helm.sh\/stable\nmumoshu         https:\/\/mumoshu.github.io\/charts\n```\n\nAnd new repositories can be added with `helm repo\nadd [NAME] [URL]`:\n\n```console\n$ helm repo add dev https:\/\/example.com\/dev-charts\n```\n\nBecause chart repositories change frequently, at any point you can make sure\nyour Helm client is up to date by running `helm repo update`.\n\nRepositories can be removed with `helm repo remove`.\n\n## Creating Your Own Charts\n\nThe [Chart Development Guide]() explains how\nto develop your own charts. But you can get started quickly by using the `helm\ncreate` command:\n\n```console\n$ helm create deis-workflow\nCreating deis-workflow\n```\n\nNow there is a chart in `.\/deis-workflow`. 
You can edit it and create your own\ntemplates.\n\nAs you edit your chart, you can validate that it is well-formed by running `helm\nlint`.\n\nWhen it's time to package the chart up for distribution, you can run the `helm\npackage` command:\n\n```console\n$ helm package deis-workflow\ndeis-workflow-0.1.0.tgz\n```\n\nAnd that chart can now easily be installed by `helm install`:\n\n```console\n$ helm install deis-workflow .\/deis-workflow-0.1.0.tgz\n...\n```\n\nCharts that are packaged can be loaded into chart repositories. See the\ndocumentation for [Helm chart\nrepositories]() for more details.\n\n## Conclusion\n\nThis chapter has covered the basic usage patterns of the `helm` client,\nincluding searching, installation, upgrading, and uninstalling. It has also\ncovered useful utility commands like `helm status`, `helm get`, and `helm repo`.\n\nFor more information on these commands, take a look at Helm's built-in help:\n`helm help`.\n\nIn the [next chapter](..\/howto\/charts_tips_and_tricks\/), we look at the process of developing charts.","site":"helm","answers_cleaned":"    title   Using Helm  description   Explains the basics of Helm   weight  3      This guide explains the basics of using Helm to manage packages on your Kubernetes cluster  It assumes that you have already  installed    the Helm client   If you are simply interested in running a few quick commands  you may wish to begin with the  Quickstart Guide     This chapter covers the particulars of Helm commands  and explains how to use Helm      Three Big Concepts  A  Chart  is a Helm package  It contains all of the resource definitions necessary to run an application  tool  or service inside of a Kubernetes cluster  Think of it like the Kubernetes equivalent of a Homebrew formula  an Apt dpkg  or a Yum RPM file   A  Repository  is the place where charts can be collected and shared  It s like Perl s  CPAN archive  https   www cpan org  or the  Fedora Package Database  https   src fedoraproject org    but 
for Kubernetes packages   A  Release  is an instance of a chart running in a Kubernetes cluster  One chart can often be installed many times into the same cluster  And each time it is installed  a new  release  is created  Consider a MySQL chart  If you want two databases running in your cluster  you can install that chart twice  Each one will have its own  release   which will in turn have its own  release name    With these concepts in mind  we can now explain Helm like this   Helm installs  charts  into Kubernetes  creating a new  release  for each installation  And to find new charts  you can search Helm chart  repositories        helm search   Finding Charts  Helm comes with a powerful search command  It can be used to search two different types of source      helm search hub  searches  the Artifact Hub  https   artifacthub io   which   lists helm charts from dozens of different repositories     helm search repo  searches the repositories that you have added to your local   helm client  with  helm repo add    This search is done over local data  and   no public network connection is needed   You can find publicly available charts by running  helm search hub       console   helm search hub wordpress URL                                                 CHART VERSION APP VERSION DESCRIPTION https   hub helm sh charts bitnami wordpress        7 6 7         5 2 4       Web publishing platform for building blogs and     https   hub helm sh charts presslabs wordpress      v0 6 3        v0 6 3      Presslabs WordPress Operator Helm Chart https   hub helm sh charts presslabs wordpress      v0 7 1        v0 7 1      A Helm chart for deploying a WordPress site on          The above searches for all  wordpress  charts on Artifact Hub   With no filter   helm search hub  shows you all of the available charts    helm search hub  exposes the URL to the location on  artifacthub io  https   artifacthub io   but not the actual Helm repo   helm search hub   list repo url  exposes 
the actual Helm repo URL which comes in handy when you are looking to add a new repo   helm repo add  NAME   URL     Using  helm search repo   you can find the names of the charts in repositories you have already added      console   helm repo add brigade https   brigadecore github io charts  brigade  has been added to your repositories   helm search repo brigade NAME                          CHART VERSION APP VERSION DESCRIPTION brigade brigade               1 3 2         v1 2 1      Brigade provides event driven scripting of Kube    brigade brigade github app    0 4 1         v0 2 1      The Brigade GitHub App  an advanced gateway for    brigade brigade github oauth  0 2 0         v0 20 0     The legacy OAuth GitHub Gateway for Brigade brigade brigade k8s gateway   0 1 0                     A Helm chart for Kubernetes brigade brigade project       1 0 0         v1 0 0      Create a Brigade project brigade kashti                0 4 0         v0 4 0      A Helm chart for Kubernetes      Helm search uses a fuzzy string matching algorithm  so you can type parts of words or phrases      console   helm search repo kash NAME            CHART VERSION APP VERSION DESCRIPTION brigade kashti  0 4 0         v0 4 0      A Helm chart for Kubernetes      Search is a good way to find available packages  Once you have found a package you want to install  you can use  helm install  to install it       helm install   Installing a Package  To install a new package  use the  helm install  command  At its simplest  it takes two arguments  A release name that you pick  and the name of the chart you want to install      console   helm install happy panda bitnami wordpress NAME  happy panda LAST DEPLOYED  Tue Jan 26 10 27 17 2021 NAMESPACE  default STATUS  deployed REVISION  1 NOTES     Please be patient while the chart is being deployed     Your WordPress site can be accessed through the following DNS name from within your cluster       happy panda wordpress default svc cluster local  
port 80   To access your WordPress site from outside the cluster follow the steps below   1  Get the WordPress URL by running these commands     NOTE  It may take a few minutes for the LoadBalancer IP to be available          Watch the status with   kubectl get svc   namespace default  w happy panda wordpress      export SERVICE IP   kubectl get svc   namespace default happy panda wordpress   template        echo  WordPress URL  http    SERVICE IP      echo  WordPress Admin URL  http    SERVICE IP admin   2  Open a browser and access WordPress using the obtained URL   3  Login with the following credentials below to see your blog     echo Username  user   echo Password    kubectl get secret   namespace default happy panda wordpress  o jsonpath    data wordpress password     base64   decode       Now the  wordpress  chart is installed  Note that installing a chart creates a new  release  object  The release above is named  happy panda    If you want Helm to generate a name for you  leave off the release name and use    generate name     During installation  the  helm  client will print useful information about which resources were created  what the state of the release is  and also whether there are additional configuration steps you can or should take   Helm installs resources in the following order     Namespace   NetworkPolicy   ResourceQuota   LimitRange   PodSecurityPolicy   PodDisruptionBudget   ServiceAccount   Secret   SecretList   ConfigMap   StorageClass   PersistentVolume   PersistentVolumeClaim   CustomResourceDefinition   ClusterRole   ClusterRoleList   ClusterRoleBinding   ClusterRoleBindingList   Role   RoleList   RoleBinding   RoleBindingList   Service   DaemonSet   Pod   ReplicationController   ReplicaSet   Deployment   HorizontalPodAutoscaler   StatefulSet   Job   CronJob   Ingress   APIService  Helm does not wait until all of the resources are running before it exits  Many charts require Docker images that are over 600MB in size  and may take a 
long time to install into the cluster   To keep track of a release s state  or to re read configuration information  you can use  helm status       console   helm status happy panda NAME  happy panda LAST DEPLOYED  Tue Jan 26 10 27 17 2021 NAMESPACE  default STATUS  deployed REVISION  1 NOTES     Please be patient while the chart is being deployed     Your WordPress site can be accessed through the following DNS name from within your cluster       happy panda wordpress default svc cluster local  port 80   To access your WordPress site from outside the cluster follow the steps below   1  Get the WordPress URL by running these commands     NOTE  It may take a few minutes for the LoadBalancer IP to be available          Watch the status with   kubectl get svc   namespace default  w happy panda wordpress      export SERVICE IP   kubectl get svc   namespace default happy panda wordpress   template        echo  WordPress URL  http    SERVICE IP      echo  WordPress Admin URL  http    SERVICE IP admin   2  Open a browser and access WordPress using the obtained URL   3  Login with the following credentials below to see your blog     echo Username  user   echo Password    kubectl get secret   namespace default happy panda wordpress  o jsonpath    data wordpress password     base64   decode       The above shows the current state of your release       Customizing the Chart Before Installing  Installing the way we have here will only use the default configuration options for this chart  Many times  you will want to customize the chart to use your preferred configuration   To see what options are configurable on a chart  use  helm show values       console   helm show values bitnami wordpress    Global Docker image parameters    Please  note that this will override the image parameters  including dependencies  configured to use the global value    Current available global Docker image parameters  imageRegistry and imagePullSecrets      global      imageRegistry  myRegistryName 
    imagePullSecrets          myRegistryKeySecretName     storageClass  myStorageClass     Bitnami WordPress image version    ref  https   hub docker com r bitnami wordpress tags     image    registry  docker io   repository  bitnami wordpress   tag  5 6 0 debian 10 r35             You can then override any of these settings in a YAML formatted file  and then pass that file during installation      console   echo   mariadb auth database  user0db  mariadb auth username  user0     values yaml   helm install  f values yaml bitnami wordpress   generate name      The above will create a default MariaDB user with the name  user0   and grant this user access to a newly created  user0db  database  but will accept all the rest of the defaults for that chart   There are two ways to pass configuration data during install        values   or   f    Specify a YAML file with overrides  This can be   specified multiple times and the rightmost file will take precedence      set   Specify overrides on the command line   If both are used     set  values are merged into    values  with higher precedence  Overrides specified with    set  are persisted in a Secret  Values that have been    set  can be viewed for a given release with  helm get values  release name    Values that have been    set  can be cleared by running  helm upgrade  with    reset values  specified        The Format and Limitations of    set   The    set  option takes zero or more name value pairs  At its simplest  it is used like this     set name value   The YAML equivalent of that is      yaml name  value      Multiple values are separated by     characters  So    set a b c d  becomes      yaml a  b c  d      More complex expressions are supported  For example     set outer inner value  is translated into this     yaml outer    inner  value      Lists can be expressed by enclosing values in     and      For example     set name  a  b  c   translates to      yaml name      a     b     c      Certain name key can be 
set to be  null  or to be an empty array       For example     set name    a null  translates     yaml name      a     b     c a  b      to     yaml name     a  null      As of Helm 2 5 0  it is possible to access list items using an array index syntax  For example     set servers 0  port 80  becomes      yaml servers      port  80      Multiple values can be set this way  The line    set servers 0  port 80 servers 0  host example  becomes      yaml servers      port  80     host  example      Sometimes you need to use special characters in your    set  lines  You can use a backslash to escape the characters     set name value1  value2  will become      yaml name   value1 value2       Similarly  you can escape dot sequences as well  which may come in handy when charts use the  toYaml  function to parse annotations  labels and node selectors  The syntax for    set nodeSelector  kubernetes  io role  master  becomes      yaml nodeSelector    kubernetes io role  master      Deeply nested data structures can be difficult to express using    set   Chart designers are encouraged to consider the    set  usage when designing the format of a  values yaml  file   read more about  Values Files     chart template guide values files          More Installation Methods  The  helm install  command can install from several sources     A chart repository  as we ve seen above    A local chart archive   helm install foo foo 0 1 1 tgz     An unpacked chart directory   helm install foo path to foo     A full URL   helm install foo https   example com charts foo 1 2 3 tgz        helm upgrade  and  helm rollback   Upgrading a Release  and Recovering on Failure  When a new version of a chart is released  or when you want to change the configuration of your release  you can use the  helm upgrade  command   An upgrade takes an existing release and upgrades it according to the information you provide  Because Kubernetes charts can be large and complex  Helm tries to perform the least invasive 
upgrade  It will only update things that have changed since the last release      console   helm upgrade  f panda yaml happy panda bitnami wordpress      In the above case  the  happy panda  release is upgraded with the same chart  but with a new YAML file      yaml mariadb auth username  user1      We can use  helm get values  to see whether that new setting took effect      console   helm get values happy panda mariadb    auth      username  user1      The  helm get  command is a useful tool for looking at a release in the cluster  And as we can see above  it shows that our new values from  panda yaml  were deployed to the cluster   Now  if something does not go as planned during a release  it is easy to roll back to a previous release using  helm rollback  RELEASE   REVISION        console   helm rollback happy panda 1      The above rolls back our happy panda to its very first release version  A release version is an incremental revision  Every time an install  upgrade  or rollback happens  the revision number is incremented by 1  The first revision number is always 1  And we can use  helm history  RELEASE   to see revision numbers for a certain release      Helpful Options for Install Upgrade Rollback  There are several other helpful options you can specify for customizing the behavior of Helm during an install upgrade rollback  Please note that this is not a full list of cli flags  To see a description of all flags  just run  helm  command    help         timeout   A  Go duration  https   golang org pkg time  ParseDuration  value   to wait for Kubernetes commands to complete  This defaults to  5m0s        wait   Waits until all Pods are in a ready state  PVCs are bound    Deployments have minimum   Desired  minus  maxUnavailable   Pods in ready   state and Services have an IP address  and Ingress if a  LoadBalancer   before   marking the release as successful  It will wait for as long as the    timeout    value  If timeout is reached  the release will be 
marked as  FAILED   Note  In   scenarios where Deployment has  replicas  set to 1 and  maxUnavailable  is not   set to 0 as part of rolling update strategy     wait  will return as ready as   it has satisfied the minimum Pod in ready condition       no hooks   This skips running hooks for the command      recreate pods   only available for  upgrade  and  rollback    This flag   will cause all pods to be recreated  with the exception of pods belonging to   deployments    DEPRECATED in Helm 3       helm uninstall   Uninstalling a Release  When it is time to uninstall a release from the cluster  use the  helm uninstall  command      console   helm uninstall happy panda      This will remove the release from the cluster  You can see all of your currently deployed releases with the  helm list  command      console   helm list NAME            VERSION UPDATED                         STATUS          CHART inky cat        1       Wed Sep 28 12 59 46 2016        DEPLOYED        alpine 0 1 0      From the output above  we can see that the  happy panda  release was uninstalled   In previous versions of Helm  when a release was deleted  a record of its deletion would remain  In Helm 3  deletion removes the release record as well  If you wish to keep a deletion release record  use  helm uninstall   keep history   Using  helm list   uninstalled  will only show releases that were uninstalled with the    keep history  flag   The  helm list   all  flag will show you all release records that Helm has retained  including records for failed or deleted items  if    keep history  was specified       console    helm list   all NAME            VERSION UPDATED                         STATUS          CHART happy panda     2       Wed Sep 28 12 47 54 2016        UNINSTALLED     wordpress 10 4 5 6 0 inky cat        1       Wed Sep 28 12 59 46 2016        DEPLOYED        alpine 0 1 0 kindred angelf  2       Tue Sep 27 16 16 10 2016        UNINSTALLED     alpine 0 1 0      Note that because 
releases are now deleted by default  it is no longer possible to rollback an uninstalled resource       helm repo   Working with Repositories  Helm 3 no longer ships with a default chart repository  The  helm repo  command group provides commands to add  list  and remove repositories   You can see which repositories are configured using  helm repo list       console   helm repo list NAME            URL stable          https   charts helm sh stable mumoshu         https   mumoshu github io charts      And new repositories can be added with  helm repo add  NAME   URL        console   helm repo add dev https   example com dev charts      Because chart repositories change frequently  at any point you can make sure your Helm client is up to date by running  helm repo update    Repositories can be removed with  helm repo remove       Creating Your Own Charts  The  Chart Development Guide    explains how to develop your own charts  But you can get started quickly by using the  helm create  command      console   helm create deis workflow Creating deis workflow      Now there is a chart in    deis workflow   You can edit it and create your own templates   As you edit your chart  you can validate that it is well formed by running  helm lint    When it s time to package the chart up for distribution  you can run the  helm package  command      console   helm package deis workflow deis workflow 0 1 0 tgz      And that chart can now easily be installed by  helm install       console   helm install deis workflow   deis workflow 0 1 0 tgz          Charts that are packaged can be loaded into chart repositories  See the documentation for  Helm chart repositories    for more details      Conclusion  This chapter has covered the basic usage patterns of the  helm  client  including searching  installation  upgrading  and uninstalling  It has also covered useful utility commands like  helm status    helm get   and  helm repo    For more information on these commands  take a look at 
Helm s built in help   helm help    In the  next chapter     howto charts tips and tricks    we look at the process of developing charts "}
{"questions":"helm A closer look at best practices surrounding templates Structure of title Templates aliases docs topics chartbestpractices templates This part of the Best Practices Guide focuses on templates weight 3","answers":"---\ntitle: \"Templates\"\ndescription: \"A closer look at best practices surrounding templates.\"\nweight: 3\naliases: [\"\/docs\/topics\/chart_best_practices\/templates\/\"]\n---\n\nThis part of the Best Practices Guide focuses on templates.\n\n## Structure of `templates\/`\n\nThe `templates\/` directory should be structured as follows:\n\n- Template files should have the extension `.yaml` if they produce YAML output.\n  The extension `.tpl` may be used for template files that produce no formatted\n  content.\n- Template file names should use dashed notation (`my-example-configmap.yaml`),\n  not camelcase.\n- Each resource definition should be in its own template file.\n- Template file names should reflect the resource kind in the name. e.g.\n  `foo-pod.yaml`, `bar-svc.yaml`\n\n## Names of Defined Templates\n\nDefined templates (templates created inside a ` ` directive) are\nglobally accessible. 
That means that a chart and all of its subcharts will have\naccess to all of the templates created with ``.\n\nFor that reason, _all defined template names should be namespaced._\n\nCorrect:\n\n```yaml\n\n\n\n```\n\nIncorrect:\n\n```yaml\n\n\n\n```\nIt is highly recommended that new charts are created via `helm create` command\nas the template names are automatically defined as per this best practice.\n\n## Formatting Templates\n\nTemplates should be indented using _two spaces_ (never tabs).\n\nTemplate directives should have whitespace after the opening  braces and before\nthe closing braces:\n\nCorrect:\n```\n\n\n\n```\n\nIncorrect:\n```\n\n\n\n```\n\nTemplates should chomp whitespace where possible:\n\n```yaml\nfoo:\n  \n  \n  \n```\n\nBlocks (such as control structures) may be indented to indicate flow of the\ntemplate code.\n\n```\n\n  Hello\n\n```\n\nHowever, since YAML is a whitespace-oriented language, it is often not possible\nfor code indentation to follow that convention.\n\n## Whitespace in Generated Templates\n\nIt is preferable to keep the amount of whitespace in generated templates to a\nminimum. In particular, numerous blank lines should not appear adjacent to each\nother. But occasional empty lines (particularly between logical sections) is\nfine.\n\nThis is best:\n\n```yaml\napiVersion: batch\/v1\nkind: Job\nmetadata:\n  name: example\n  labels:\n    first: first\n    second: second\n```\n\nThis is okay:\n\n```yaml\napiVersion: batch\/v1\nkind: Job\n\nmetadata:\n  name: example\n\n  labels:\n    first: first\n    second: second\n\n```\n\nBut this should be avoided:\n\n```yaml\napiVersion: batch\/v1\nkind: Job\n\nmetadata:\n  name: example\n\n\n\n\n\n  labels:\n    first: first\n\n    second: second\n\n```\n\n## Comments (YAML Comments vs. 
Template Comments)\n\nBoth YAML and Helm Templates have comment markers.\n\nYAML comments:\n```yaml\n# This is a comment\ntype: sprocket\n```\n\nTemplate Comments:\n```yaml\n{{- \/*\nThis is a comment.\n*\/}}\ntype: frobnitz\n```\n\nTemplate comments should be used when documenting features of a template, such\nas explaining a defined template:\n\n```yaml\n{{- \/*\nmychart.shortname provides a 6 char truncated version of the release name.\n*\/}}\n{{ define \"mychart.shortname\" }}\n{{ .Release.Name | trunc 6 }}\n{{ end -}}\n```\n\nInside of templates, YAML comments may be used when it is useful for Helm users\nto (possibly) see the comments during debugging.\n\n```yaml\n# This may cause problems if the value is more than 100Gi\nmemory: {{ .Values.maxMem | quote }}\n```\n\nThe comment above is visible when the user runs `helm install --debug`, while\ncomments specified in `{{ \/* *\/ }}` sections are not.\n\nBeware of adding `#` YAML comments on template sections containing Helm values that may be required by certain template functions.\n\nFor example, if the `required` function is introduced to the above example, and `maxMem` is unset, then a `#` YAML comment will introduce a rendering error.\n\nCorrect: `helm template` does not render this block\n```yaml\n{{- \/*\n# This may cause problems if the value is more than 100Gi\nmemory: {{ required \"maxMem must be set\" .Values.maxMem | quote }}\n*\/ -}}\n```\n\nIncorrect: `helm template` returns `Error: execution error at (templates\/test.yaml:2:13): maxMem must be set`\n```yaml\n# This may cause problems if the value is more than 100Gi\n# memory: {{ required \"maxMem must be set\" .Values.maxMem | quote }}\n```\n\nReview [Debugging Templates](..\/chart_template_guide\/debugging.md) for another example of this behavior, showing how YAML comments are left intact.\n\n## Use of JSON in Templates and Template Output\n\nYAML is a superset of JSON. In some cases, using a JSON syntax can be more\nreadable than other YAML representations.\n\nFor example, this YAML is closer to the normal YAML method of expressing lists:\n\n```yaml\narguments:\n  - \"--dirname\"\n  - \"\/foo\"\n```\n\nBut it is easier to read when collapsed into a JSON list style:\n\n```yaml\narguments: [\"--dirname\", \"\/foo\"]\n```\n\nUsing JSON for increased legibility is good. 
However, JSON syntax should not be\nused for representing more complex constructs.\n\nWhen dealing with pure JSON embedded inside of YAML (such as init container\nconfiguration), it is of course appropriate to use the JSON format.","site":"helm"}
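As an illustration of the guidance above, here is a hypothetical snippet (the `arguments` list and probe fields are invented for this sketch): JSON syntax suits short, flat lists, while more complex structures stay in plain YAML.

```yaml
# A short, flat list is more legible collapsed into JSON style:
arguments: ["--dirname", "/foo"]

# A nested structure should remain plain YAML rather than inline JSON:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
```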
{"questions":"helm guide we provide recommendations on how you should structure and use your weight 2 title Values Focuses on how you should structure and use your values This part of the best practices guide covers using values In this part of the values with focus on designing a chart s file aliases docs topics chartbestpractices values","answers":"---\ntitle: \"Values\"\ndescription: \"Focuses on how you should structure and use your values.\"\nweight: 2\naliases: [\"\/docs\/topics\/chart_best_practices\/values\/\"]\n---\n\nThis part of the best practices guide covers using values. In this part of the\nguide, we provide recommendations on how you should structure and use your\nvalues, with focus on designing a chart's `values.yaml` file.\n\n## Naming Conventions\n\nVariable names should begin with a lowercase letter, and words should be\nseparated with camelcase:\n\nCorrect:\n\n```yaml\nchicken: true\nchickenNoodleSoup: true\n```\n\nIncorrect:\n\n```yaml\nChicken: true  # initial caps may conflict with built-ins\nchicken-noodle-soup: true # do not use hyphens in the name\n```\n\nNote that all of Helm's built-in variables begin with an uppercase letter to\neasily distinguish them from user-defined values: `.Release.Name`,\n`.Capabilities.KubeVersion`.\n\n## Flat or Nested Values\n\nYAML is a flexible format, and values may be nested deeply or flattened.\n\nNested:\n\n```yaml\nserver:\n  name: nginx\n  port: 80\n```\n\nFlat:\n\n```yaml\nserverName: nginx\nserverPort: 80\n```\n\nIn most cases, flat should be favored over nested. The reason for this is that\nit is simpler for template developers and users.\n\n\nFor optimal safety, a nested value must be checked at every level:\n\n```\n{{ if .Values.server }}\n  {{ default \"none\" .Values.server.name }}\n{{ end }}\n```\n\nFor every layer of nesting, an existence check must be done. But for flat\nconfiguration, such checks can be skipped, making the template easier to read\nand use.\n\n```\n{{ default \"none\" .Values.serverName }}\n```\n\nWhen there are a large number of related variables, and at least one of them is\nnon-optional, nested values may be used to improve readability.\n\n## Make Types Clear\n\nYAML's type coercion rules are sometimes counterintuitive. For example, `foo:\nfalse` is not the same as `foo: \"false\"`. Large integers like `foo: 12345678`\nwill get converted to scientific notation in some cases.\n\nThe easiest way to avoid type conversion errors is to be explicit about strings,\nand implicit about everything else. Or, in short, _quote all strings_.\n\nOften, to avoid the integer casting issues, it is advantageous to store your\nintegers as strings as well, and use `{{ int $value }}` in the template to\nconvert from a string back to an integer.\n\nIn most cases, explicit type tags are respected, so `foo: !!string 1234` should\ntreat `1234` as a string. _However_, the YAML parser consumes tags, so the type\ndata is lost after one parse.\n\n## Consider How Users Will Use Your Values\n\nThere are three potential sources of values:\n\n- A chart's `values.yaml` file\n- A values file supplied by `helm install -f` or `helm upgrade -f`\n- The values passed to a `--set` or `--set-string` flag on `helm install` or\n  `helm upgrade`\n\nWhen designing the structure of your values, keep in mind that users of your\nchart may want to override them via either the `-f` flag or with the `--set`\noption.\n\nSince `--set` is more limited in expressiveness, the first guideline for\nwriting your `values.yaml` file is _make it easy to override from `--set`_.\n\nFor this reason, it's often better to structure your values file using maps.\n\nDifficult to use with `--set`:\n\n```yaml\nservers:\n  - name: foo\n    port: 80\n  - name: bar\n    port: 81\n```\n\nThe above cannot be expressed with `--set` in Helm `<=2.4`. In Helm 2.5,\naccessing the port on foo is `--set servers[0].port=80`.
Not only is it harder\nfor the user to figure out, but it is prone to errors if at some later time the\norder of the `servers` is changed.\n\nEasy to use:\n\n```yaml\nservers:\n  foo:\n    port: 80\n  bar:\n    port: 81\n```\n\nAccessing foo's port is much more obvious: `--set servers.foo.port=80`.\n\n## Document `values.yaml`\n\nEvery defined property in `values.yaml` should be documented. The documentation\nstring should begin with the name of the property that it describes, and then\ngive at least a one-sentence description.\n\nIncorrect:\n\n```yaml\n# the host name for the webserver\nserverHost: example\nserverPort: 9191\n```\n\nCorrect:\n\n```yaml\n# serverHost is the host name for the webserver\nserverHost: example\n# serverPort is the HTTP listener port for the webserver\nserverPort: 9191\n```\n\nBeginning each comment with the name of the parameter it documents makes it easy\nto grep out documentation, and will enable documentation tools to reliably\ncorrelate doc strings with the parameters they describe.","site":"helm"}
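Pulling the recommendations above together, a hypothetical `values.yaml` (all names and values invented for illustration) might combine camelcase naming, flat-where-possible structure, map-based collections, and per-property doc comments like this:

```yaml
# serverHost is the host name for the webserver
serverHost: "example"   # quoted: strings are explicit, everything else implicit
# serverPort is the HTTP listener port for the webserver
serverPort: 9191
# servers maps each server name to its settings; using a map (not a list)
# keeps every entry easy to override, e.g. --set servers.foo.port=8080
servers:
  foo:
    port: 80
  bar:
    port: 81
```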
{"questions":"helm title Troubleshooting Run If it shows your repository pointing to a URL you I am getting a warning about Unable to get an update from the stable chart repository weight 4 Troubleshooting","answers":"---\ntitle: \"Troubleshooting\"\nweight: 4\n---\n\n## Troubleshooting\n\n### I am getting a warning about \"Unable to get an update from the \"stable\" chart repository\"\n\nRun `helm repo list`. If it shows your `stable` repository pointing to a `storage.googleapis.com` URL, you\nwill need to update that repository. On November 13, 2020, the Helm Charts repo [became unsupported](https:\/\/github.com\/helm\/charts#deprecation-timeline) after a year-long deprecation. An archive has been made available at\n`https:\/\/charts.helm.sh\/stable` but will no longer receive updates. \n\nYou can run the following command to fix your repository:\n\n```console\n$ helm repo add stable https:\/\/charts.helm.sh\/stable --force-update  \n```\n\nThe same goes for the `incubator` repository, which has an archive available at https:\/\/charts.helm.sh\/incubator.\nYou can run the following command to repair it:\n\n```console\n$ helm repo add incubator https:\/\/charts.helm.sh\/incubator --force-update  \n```\n\n### I am getting the warning 'WARNING: \"kubernetes-charts.storage.googleapis.com\" is deprecated for \"stable\" and will be deleted Nov. 
13, 2020.'\n\nThe old Google helm chart repository has been replaced by a new Helm chart repository.\n\nRun the following command to permanently fix this:\n\n```console\n$ helm repo add stable https:\/\/charts.helm.sh\/stable --force-update  \n```\n\nIf you get a similar error for `incubator`, run this command:\n\n```console\n$ helm repo add incubator https:\/\/charts.helm.sh\/incubator --force-update  \n```\n\n### When I add a Helm repo, I get the error 'Error: Repo \"https:\/\/kubernetes-charts.storage.googleapis.com\" is no longer available'\n\nThe Helm Chart repositories are no longer supported after [a year-long deprecation period](https:\/\/github.com\/helm\/charts#deprecation-timeline). \nArchives for these repositories are available at `https:\/\/charts.helm.sh\/stable` and `https:\/\/charts.helm.sh\/incubator`, however they will no longer receive updates. The command\n`helm repo add` will not let you add the old URLs unless you specify `--use-deprecated-repos`.\n\n### On GKE (Google Container Engine) I get \"No SSH tunnels currently open\"\n\n```\nError: Error forwarding ports: error upgrading connection: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-[redacted]\"?\n```\n\nAnother variation of the error message is:\n\n\n```\nUnable to connect to the server: x509: certificate signed by unknown authority\n```\n\nThe issue is that your local Kubernetes config file must have the correct\ncredentials.\n\nWhen you create a cluster on GKE, it will give you credentials, including SSL\ncertificates and certificate authorities. These need to be stored in a\nKubernetes config file (Default: `~\/.kube\/config`) so that `kubectl` and `helm`\ncan access them.\n\n### After migration from Helm 2, `helm list` shows only some (or none) of my releases\n\nIt is likely that you have missed the fact that Helm 3 now uses cluster\nnamespaces throughout to scope releases. 
This means that for all commands\nreferencing a release you must either:\n\n* rely on the current namespace in the active kubernetes context (as described\n  by the `kubectl config view --minify` command),\n* specify the correct namespace using the `--namespace`\/`-n` flag, or\n* for the `helm list` command, specify the `--all-namespaces`\/`-A` flag\n\nThis applies to `helm ls`, `helm uninstall`, and all other `helm` commands\nreferencing a release.\n\n\n### On macOS, the file `\/etc\/.mdns_debug` is accessed. Why?\n\nWe are aware of a case on macOS where Helm will try to access a file named\n`\/etc\/.mdns_debug`. If the file exists, Helm holds the file handle open while it\nexecutes.\n\nThis is caused by macOS's MDNS library. It attempts to load that file to read\ndebugging settings (if enabled). The file handle probably should not be held open, and\nthis issue has been reported to Apple. However, it is macOS, not Helm, that causes this\nbehavior.\n\nIf you do not want Helm to load this file, you may be able to compile Helm as\na static library that does not use the host network stack. Doing so will inflate the\nbinary size of Helm, but will prevent the file from being opened.\n\nThis issue was originally flagged as a potential security problem. But it has since\nbeen determined that there is no flaw or vulnerability caused by this behavior.\n\n### helm repo add fails when it used to work\n\nIn helm 3.3.1 and before, the command `helm repo add <reponame> <url>` will give\nno output if you attempt to add a repo which already exists. The flag\n`--no-update` would raise an error if the repo was already registered.\n\nIn helm 3.3.2 and beyond, an attempt to add an existing repo will error:\n\n`Error: repository name (reponame) already exists, please specify a different name`\n\nThe default behavior is now reversed.
`--no-update` is now ignored, while if you\nwant to replace (overwrite) an existing repo, you can use `--force-update`.\n\nThis is due to a breaking change for a security fix as explained in the [Helm\n3.3.2 release notes](https:\/\/github.com\/helm\/helm\/releases\/tag\/v3.3.2).\n\n### Enabling Kubernetes client logging\n\nPrinting log messages for debugging the Kubernetes client can be enabled using\nthe [klog](https:\/\/pkg.go.dev\/k8s.io\/klog) flags. Using the `-v` flag to set\nverbosity level will be enough for most cases.\n\nFor example:\n\n```\nhelm list -v 6\n```\n\n### Tiller installations stopped working and access is denied\n\nHelm releases used to be available from <https:\/\/storage.googleapis.com\/kubernetes-helm\/>. As explained in [\"Announcing get.helm.sh\"](https:\/\/helm.sh\/blog\/get-helm-sh\/), the official location changed in June 2019. [GitHub Container Registry](https:\/\/github.com\/orgs\/helm\/packages\/container\/package\/tiller) makes all the old Tiller images available.\n\n\nIf you are trying to download older versions of Helm from the storage bucket you used in the past, you may find that they are missing:\n\n```\n<Error>\n    <Code>AccessDenied<\/Code>\n    <Message>Access denied.<\/Message>\n    <Details>Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.<\/Details>\n<\/Error>\n```\n\nThe [legacy Tiller image location](https:\/\/gcr.io\/kubernetes-helm\/tiller) began the removal of images in August 2021. We have made these images available at the [GitHub Container Registry](https:\/\/github.com\/orgs\/helm\/packages\/container\/package\/tiller) location. 
For example, to download version v2.17.0, replace:\n\n`https:\/\/storage.googleapis.com\/kubernetes-helm\/helm-v2.17.0-linux-amd64.tar.gz`\n\nwith:\n\n`https:\/\/get.helm.sh\/helm-v2.17.0-linux-amd64.tar.gz`\n\nTo initialize with Helm v2.17.0:\n\n`helm init --upgrade`\n\nOr if a different version is needed, use the `--tiller-image` flag to override the default location and install a specific Helm v2 version:\n\n`helm init --tiller-image ghcr.io\/helm\/tiller:v2.16.9`\n\n**Note:** The Helm maintainers recommend migration to a currently-supported version of Helm. Helm v2.17.0 was the final release of Helm v2; Helm v2 is unsupported since November 2020, as detailed in [Helm 2 and the Charts Project Are Now Unsupported](https:\/\/helm.sh\/blog\/helm-2-becomes-unsupported\/). Many CVEs have been flagged against Helm since then, and those exploits are patched in Helm v3 but will never be patched in Helm v2. See the [current list of published Helm advisories](https:\/\/github.com\/helm\/helm\/security\/advisories?state=published) and make a plan to [migrate to Helm v3](https:\/\/helm.sh\/docs\/topics\/v2_v3_migration\/#helm) today.","site":"helm"}
{"questions":"helm Changes since Helm 2 Here s an exhaustive list of all the major changes introduced in Helm 3 Removal of Tiller title Changes Since Helm 2 weight 1","answers":"---\ntitle: \"Changes Since Helm 2\"\nweight: 1\n---\n\n## Changes since Helm 2\n\nHere's an exhaustive list of all the major changes introduced in Helm 3.\n\n### Removal of Tiller\n\nDuring the Helm 2 development cycle, we introduced Tiller. Tiller played an\nimportant role for teams working on a shared cluster - it made it possible for\nmultiple different operators to interact with the same set of releases.\n\nWith role-based access controls (RBAC) enabled by default in Kubernetes 1.6,\nlocking down Tiller for use in a production scenario became more difficult to\nmanage. Due to the vast number of possible security policies, our stance was to\nprovide a permissive default configuration. This allowed first-time users to\nstart experimenting with Helm and Kubernetes without having to dive headfirst\ninto the security controls. Unfortunately, this permissive configuration could\ngrant a user a broad range of permissions they weren\u2019t intended to have. DevOps\nand SREs had to learn additional operational steps when installing Tiller into a\nmulti-tenant cluster.\n\nAfter hearing how community members were using Helm in certain scenarios, we\nfound that Tiller\u2019s release management system did not need to rely upon an\nin-cluster operator to maintain state or act as a central hub for Helm release\ninformation. Instead, we could simply fetch information from the Kubernetes API\nserver, render the Charts client-side, and store a record of the installation in\nKubernetes.\n\nTiller\u2019s primary goal could be accomplished without Tiller, so one of the first\ndecisions we made regarding Helm 3 was to completely remove Tiller.\n\nWith Tiller gone, the security model for Helm is radically simplified. 
Helm 3\nnow supports all the modern security, identity, and authorization features of\nmodern Kubernetes. Helm\u2019s permissions are evaluated using your [kubeconfig\nfile](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/organize-cluster-access-kubeconfig\/).\nCluster administrators can restrict user permissions at whatever granularity\nthey see fit. Releases are still recorded in-cluster, and the rest of Helm\u2019s\nfunctionality remains.\n\n### Improved Upgrade Strategy: 3-way Strategic Merge Patches\n\nHelm 2 used a two-way strategic merge patch. During an upgrade, it compared the\nmost recent chart's manifest against the proposed chart's manifest (the one\nsupplied during `helm upgrade`). It compared the differences between these two\ncharts to determine what changes needed to be applied to the resources in\nKubernetes. If changes were applied to the cluster out-of-band (such as during a\n`kubectl edit`), those changes were not considered. This resulted in resources\nbeing unable to roll back to their previous state: because Helm only considered\nthe last applied chart's manifest as its current state, if there were no changes\nin the chart's state, the live state was left unchanged.\n\nIn Helm 3, we now use a three-way strategic merge patch. Helm considers the old\nmanifest, its live state, and the new manifest when generating a patch.\n\n#### Examples\n\nLet's go through a few common examples of what this change impacts.\n\n##### Rolling back where live state has changed\n\nYour team just deployed their application to production on Kubernetes using\nHelm. The chart contains a Deployment object where the number of replicas is set\nto three:\n\n```console\n$ helm install myapp .\/myapp\n```\n\nA new developer joins the team.
On their first day while observing the\nproduction cluster, a horrible coffee-spilling-on-the-keyboard accident happens\nand they `kubectl scale` the production deployment from three replicas down to\nzero.\n\n```console\n$ kubectl scale --replicas=0 deployment\/myapp\n```\n\nAnother developer on your team notices that the production site is down and\ndecides to rollback the release to its previous state:\n\n```console\n$ helm rollback myapp\n```\n\nWhat happens?\n\nIn Helm 2, it would generate a patch, comparing the old manifest against the new\nmanifest. Because this is a rollback, it's the same manifest. Helm would\ndetermine that there is nothing to change because there is no difference between\nthe old manifest and the new manifest. The replica count continues to stay at\nzero. Panic ensues.\n\nIn Helm 3, the patch is generated using the old manifest, the live state, and\nthe new manifest. Helm recognizes that the old state was at three, the live\nstate is at zero and the new manifest wishes to change it back to three, so it\ngenerates a patch to change the state back to three.\n\n##### Upgrades where live state has changed\n\nMany service meshes and other controller-based applications inject data into\nKubernetes objects. This can be something like a sidecar, labels, or other\ninformation. Previously if you had the given manifest rendered from a Chart:\n\n```yaml\ncontainers:\n- name: server\n  image: nginx:2.0.0\n```\n\nAnd the live state was modified by another application to\n\n```yaml\ncontainers:\n- name: server\n  image: nginx:2.0.0\n- name: my-injected-sidecar\n  image: my-cool-mesh:1.0.0\n```\n\nNow, you want to upgrade the `nginx` image tag to `2.1.0`. So, you upgrade to a\nchart with the given manifest:\n\n```yaml\ncontainers:\n- name: server\n  image: nginx:2.1.0\n```\n\nWhat happens?\n\nIn Helm 2, Helm generates a patch of the `containers` object between the old\nmanifest and the new manifest. 
The cluster's live state is not considered during\nthe patch generation.\n\nThe cluster's live state is modified to look like the following:\n\n```yaml\ncontainers:\n- name: server\n  image: nginx:2.1.0\n```\n\nThe sidecar pod is removed from live state. More panic ensues.\n\nIn Helm 3, Helm generates a patch of the `containers` object between the old\nmanifest, the live state, and the new manifest. It notices that the new manifest\nchanges the image tag to `2.1.0`, but live state contains a sidecar container.\n\nThe cluster's live state is modified to look like the following:\n\n```yaml\ncontainers:\n- name: server\n  image: nginx:2.1.0\n- name: my-injected-sidecar\n  image: my-cool-mesh:1.0.0\n```\n\n### Release Names are now scoped to the Namespace\n\nWith the removal of Tiller, the information about each release had to go\nsomewhere. In Helm 2, this was stored in the same namespace as Tiller. In\npractice, this meant that once a name was used by a release, no other release\ncould use that same name, even if it was deployed in a different namespace.\n\nIn Helm 3, information about a particular release is now stored in the same\nnamespace as the release itself. This means that users can now `helm install\nwordpress stable\/wordpress` in two separate namespaces, and each can be referred to\nwith `helm list` by changing the current namespace context (e.g. `helm list\n--namespace foo`).\n\nWith this greater alignment to native cluster namespaces, the `helm list`\ncommand no longer lists all releases by default. Instead, it will list only the\nreleases in the namespace of your current Kubernetes context (i.e. the namespace\nshown when you run `kubectl config view --minify`). It also means you must\nsupply the `--all-namespaces` flag to `helm list` to get behaviour similar to\nHelm 2.\n\n### Secrets as the default storage driver\n\nIn Helm 3, Secrets are now used as the [default storage\ndriver](\/docs\/topics\/advanced\/#storage-backends). 
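In practice, each release revision is stored as a Secret of type `helm.sh\/release.v1` named `sh.helm.release.v1.<release>.v<revision>` in the release's namespace, so the records can be listed with plain `kubectl` (a sketch; `owner=helm` is the label Helm applies to its own records, and the namespace is an assumption for illustration):\n\n```console\n$ kubectl get secrets --namespace default -l owner=helm\n```\n\n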
Helm 2 used ConfigMaps by\ndefault to store release information. In Helm 2.7.0, a new storage backend that\nuses Secrets for storing release information was implemented, and it is now the\ndefault starting in Helm 3.\n\nChanging to Secrets as the Helm 3 default allows for additional protection of\nrelease data, in conjunction with the Secret encryption features now available in\nKubernetes.\n\n[Encrypting secrets at\nrest](https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/encrypt-data\/) became\navailable as an alpha feature in Kubernetes 1.7 and became stable as of\nKubernetes 1.13. This allows users to encrypt Helm release metadata at rest, and\nso it is a good starting point that can be expanded later into using something\nlike Vault.\n\n### Go import path changes\n\nIn Helm 3, Helm switched the Go import path over from `k8s.io\/helm` to\n`helm.sh\/helm\/v3`. If you intend to upgrade to the Helm 3 Go client libraries,\nmake sure to change your import paths.\n\n### Capabilities\n\nThe `.Capabilities` built-in object available during the rendering stage has\nbeen simplified.\n\n[Built-in Objects](\/docs\/chart_template_guide\/builtin_objects\/)\n\n### Validating Chart Values with JSONSchema\n\nA JSON Schema can now be imposed upon chart values. This ensures that values\nprovided by the user follow the schema laid out by the chart maintainer,\nproviding better error reporting when the user provides an incorrect set of\nvalues for a chart.\n\nValidation occurs when any of the following commands are invoked:\n\n* `helm install`\n* `helm upgrade`\n* `helm template`\n* `helm lint`\n\nSee the documentation on [Schema files](\/docs\/topics\/charts#schema-files) for\nmore information.\n\n### Consolidation of `requirements.yaml` into `Chart.yaml`\n\nThe Chart dependency management system moved from requirements.yaml and\nrequirements.lock to Chart.yaml and Chart.lock. We recommend that new charts\nmeant for Helm 3 use the new format. 
However, Helm 3 still understands Chart API\nversion 1 (`v1`) and will load existing `requirements.yaml` files.\n\nIn Helm 2, this is how a `requirements.yaml` looked:\n\n```yaml\ndependencies:\n- name: mariadb\n  version: 5.x.x\n  repository: https:\/\/charts.helm.sh\/stable\n  condition: mariadb.enabled\n  tags:\n    - database\n```\n\nIn Helm 3, the dependency is expressed the same way, but now from your\n`Chart.yaml`:\n\n```yaml\ndependencies:\n- name: mariadb\n  version: 5.x.x\n  repository: https:\/\/charts.helm.sh\/stable\n  condition: mariadb.enabled\n  tags:\n    - database\n```\n\nCharts are still downloaded and placed in the `charts\/` directory, so subcharts\nvendored into the `charts\/` directory will continue to work without\nmodification.\n\n### Name (or --generate-name) is now required on install\n\nIn Helm 2, if no name was provided, an auto-generated name would be given. In\nproduction, this proved to be more of a nuisance than a helpful feature. In Helm\n3, Helm will throw an error if no name is provided with `helm install`.\n\nFor those who still wish to have a name auto-generated, you can use the\n`--generate-name` flag to create one.\n\n### Pushing Charts to OCI Registries\n\nThis is an experimental feature introduced in Helm 3. To use, set the\nenvironment variable `HELM_EXPERIMENTAL_OCI=1`.\n\nAt a high level, a Chart Repository is a location where Charts can be stored and\nshared. The Helm client packs and ships Helm Charts to a Chart Repository.\nSimply put, a Chart Repository is a basic HTTP server that houses an index.yaml\nfile and some packaged charts.\n\nWhile there are several benefits to the Chart Repository API meeting the most\nbasic storage requirements, a few drawbacks have started to show:\n\n- Chart Repositories have a very hard time abstracting most of the security\n  implementations required in a production environment. 
Having a standard API\n  for authentication and authorization is very important in production\n  scenarios.\n- Helm\u2019s Chart provenance tools used for signing and verifying the integrity and\n  origin of a chart are an optional piece of the Chart publishing process.\n- In multi-tenant scenarios, the same Chart can be uploaded by another tenant,\n  paying the storage cost twice for the same content. Smarter chart\n  repositories have been designed to handle this, but it\u2019s not a part of the\n  formal specification.\n- Using a single index file for search, metadata information, and fetching\n  Charts has made it difficult or clunky to design around in secure multi-tenant\n  implementations.\n\nDocker\u2019s Distribution project (also known as Docker Registry v2) is the\nsuccessor to the Docker Registry project. Many major cloud vendors have a\nproduct offering of the Distribution project, and with so many vendors offering\nthe same product, the Distribution project has benefited from many years of\nhardening, security best practices, and battle-testing.\n\nPlease have a look at `helm help chart` and `helm help registry` for more\ninformation on how to package a chart and push it to a Docker registry.\n\nFor more info, please see [this page](\/docs\/topics\/registries\/).\n\n### Removal of `helm serve`\n\n`helm serve` ran a local Chart Repository on your machine for development\npurposes. However, it didn't receive much uptake as a development tool and had\nnumerous issues with its design. In the end, we decided to remove it and split\nit out as a plugin.\n\nFor a similar experience to `helm serve`, have a look at the local filesystem\nstorage option in\n[ChartMuseum](https:\/\/chartmuseum.com\/docs\/#using-with-local-filesystem-storage)\nand the [servecm plugin](https:\/\/github.com\/jdolitsky\/helm-servecm).\n\n\n### Library chart support\n\nHelm 3 supports a class of chart called a \u201clibrary chart\u201d. 
This is a chart that\nis shared by other charts, but does not create any release artifacts of its own.\nA library chart\u2019s templates can only declare `define` elements. Globally scoped\nnon-`define` content is simply ignored. This allows users to share\nsnippets of code that can be re-used across many charts, avoiding redundancy and\nkeeping charts [DRY](https:\/\/en.wikipedia.org\/wiki\/Don%27t_repeat_yourself).\n\nLibrary charts are declared in the dependencies directive in Chart.yaml, and are\ninstalled and managed like any other chart.\n\n```yaml\ndependencies:\n  - name: mylib\n    version: 1.x.x\n    repository: quay.io\n```\n\nWe\u2019re very excited to see the use cases this feature opens up for chart\ndevelopers, as well as any best practices that arise from consuming library\ncharts.\n\n### Chart.yaml apiVersion bump\n\nWith the introduction of library chart support and the consolidation of\nrequirements.yaml into Chart.yaml, clients that understood Helm 2's package\nformat won't understand these new features. 
So, we bumped the apiVersion in\nChart.yaml from `v1` to `v2`.\n\n`helm create` now creates charts using this new format, so the default\napiVersion was bumped there as well.\n\nClients wishing to support both versions of Helm charts should inspect the\n`apiVersion` field in Chart.yaml to understand how to parse the package format.\n\n### XDG Base Directory Support\n\n[The XDG Base Directory\nSpecification](https:\/\/specifications.freedesktop.org\/basedir-spec\/basedir-spec-latest.html)\nis a portable standard defining where configuration, data, and cached files\nshould be stored on the filesystem.\n\nIn Helm 2, Helm stored all this information in `~\/.helm` (affectionately known\nas `helm home`), which could be changed by setting the `$HELM_HOME` environment\nvariable, or by using the global flag `--home`.\n\nIn Helm 3, Helm now respects the following environment variables as per the XDG\nBase Directory Specification:\n\n- `$XDG_CACHE_HOME`\n- `$XDG_CONFIG_HOME`\n- `$XDG_DATA_HOME`\n\nHelm plugins are still passed `$HELM_HOME` as an alias to `$XDG_DATA_HOME` for\nbackwards compatibility with plugins looking to use `$HELM_HOME` as a scratchpad\nenvironment.\n\nSeveral new environment variables are also passed in to the plugin's environment\nto accommodate this change:\n\n- `$HELM_PATH_CACHE` for the cache path\n- `$HELM_PATH_CONFIG` for the config path\n- `$HELM_PATH_DATA` for the data path\n\nHelm plugins looking to support Helm 3 should consider using these new\nenvironment variables instead.\n\n### CLI Command Renames\n\nIn order to better align with the verbiage of other package managers, `helm delete`\nwas re-named to `helm uninstall`. `helm delete` is still retained as an alias to\n`helm uninstall`, so either form can be used.\n\nIn Helm 2, in order to purge the release ledger, the `--purge` flag had to be\nprovided. This functionality is now enabled by default. 
To retain the previous\nbehavior, use `helm uninstall --keep-history`.\n\nAdditionally, several other commands were re-named to accommodate the same\nconventions:\n\n- `helm inspect` -> `helm show`\n- `helm fetch` -> `helm pull`\n\nThese commands have also retained their older verbs as aliases, so you can\ncontinue to use them in either form.\n\n### Automatically creating namespaces\n\nWhen creating a release in a namespace that does not exist, Helm 2 created the\nnamespace. Helm 3 follows the behavior of other Kubernetes tooling and returns\nan error if the namespace does not exist. Helm 3 will create the namespace if\nyou explicitly specify the `--create-namespace` flag.\n\n### What happened to .Chart.ApiVersion?\n\nHelm follows the typical convention for CamelCasing, which is to capitalize an\nacronym. We have done this elsewhere in the code, such as with\n`.Capabilities.APIVersions.Has`. In Helm v3, we corrected `.Chart.ApiVersion`\nto follow this pattern, renaming it to `.Chart.APIVersion`.\n","site":"helm"}
{"questions":"helm weight 7 title Flow Control template author with the ability to control the flow of a template s generation Helm s template language provides the following control structures Control structures called actions in template parlance provide you the A quick overview on the flow structure within templates","answers":"---\ntitle: \"Flow Control\"\ndescription: \"A quick overview on the flow structure within templates.\"\nweight: 7\n---\n\nControl structures (called \"actions\" in template parlance) provide you, the\ntemplate author, with the ability to control the flow of a template's\ngeneration. Helm's template language provides the following control structures:\n\n- `if`\/`else` for creating conditional blocks\n- `with` to specify a scope\n- `range`, which provides a \"for each\"-style loop\n\nIn addition to these, it provides a few actions for declaring and using named\ntemplate segments:\n\n- `define` declares a new named template inside of your template\n- `template` imports a named template\n- `block` declares a special kind of fillable template area\n\nIn this section, we'll talk about `if`, `with`, and `range`. The others are\ncovered in the \"Named Templates\" section later in this guide.\n\n## If\/Else\n\nThe first control structure we'll look at is for conditionally including blocks\nof text in a template. This is the `if`\/`else` block.\n\nThe basic structure for a conditional looks like this:\n\n```\n{{ if PIPELINE }}\n  # Do something\n{{ else if OTHER PIPELINE }}\n  # Do something else\n{{ else }}\n  # Default case\n{{ end }}\n```\n\nNotice that we're now talking about _pipelines_ instead of values. 
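For instance, the condition can itself be a full pipeline (a small sketch; the `.Values.favorite.drink` value is borrowed from the examples used later in this section, and `lower` is a standard template function):\n\n```\n{{ if eq (.Values.favorite.drink | lower) \"coffee\" }}mug: \"true\"{{ end }}\n```\n\n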
The reason\nfor this is to make it clear that control structures can execute an entire\npipeline, not just evaluate a value.\n\nA pipeline is evaluated as _false_ if the value is:\n\n- a boolean false\n- a numeric zero\n- an empty string\n- a `nil` (empty or null)\n- an empty collection (`map`, `slice`, `tuple`, `dict`, `array`)\n\nUnder all other conditions, the condition is true.\n\nLet's add a simple conditional to our ConfigMap. We'll add another setting if\nthe drink is set to coffee:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: {{ .Values.favorite.drink | default \"tea\" | quote }}\n  food: {{ .Values.favorite.food | upper | quote }}\n  {{ if eq .Values.favorite.drink \"coffee\" }}mug: \"true\"{{ end }}\n```\n\nSince we commented out `drink: coffee` in our last example, the output should\nnot include a `mug: \"true\"` flag. But if we add that line back into our\n`values.yaml` file, the output should look like this:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: eyewitness-elk-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: \"coffee\"\n  food: \"PIZZA\"\n  mug: \"true\"\n```\n\n## Controlling Whitespace\n\nWhile we're looking at conditionals, we should take a quick look at the way\nwhitespace is controlled in templates. Let's take the previous example and\nformat it to be a little easier to read:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: {{ .Values.favorite.drink | default \"tea\" | quote }}\n  food: {{ .Values.favorite.food | upper | quote }}\n  {{ if eq .Values.favorite.drink \"coffee\" }}\n    mug: \"true\"\n  {{ end }}\n```\n\nInitially, this looks good. But if we run it through the template engine, we'll\nget an unfortunate result:\n\n```console\n$ helm install --dry-run --debug .\/mychart\nSERVER: \"localhost:44134\"\nCHART PATH: \/Users\/mattbutcher\/Code\/Go\/src\/helm.sh\/helm\/_scratch\/mychart\nError: YAML parse error on mychart\/templates\/configmap.yaml: error converting YAML to JSON: yaml: line 9: did not find expected key\n```\n\nWhat happened? 
We generated incorrect YAML because of the whitespacing above.\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: eyewitness-elk-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: \"coffee\"\n  food: \"PIZZA\"\n    mug: \"true\"\n```\n\n`mug` is incorrectly indented. Let's simply out-dent that one line, and re-run:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: {{ .Values.favorite.drink | default \"tea\" | quote }}\n  food: {{ .Values.favorite.food | upper | quote }}\n  {{ if eq .Values.favorite.drink \"coffee\" }}\n  mug: \"true\"\n  {{ end }}\n```\n\nWhen we send that, we'll get YAML that is valid, but still looks a little funny:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: telling-chimp-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: \"coffee\"\n  food: \"PIZZA\"\n\n  mug: \"true\"\n\n```\n\nNotice that we received a few empty lines in our YAML. Why? When the template\nengine runs, it _removes_ the contents inside of `{{` and `}}`, but it leaves\nthe remaining whitespace exactly as is.\n\nYAML ascribes meaning to whitespace, so managing the whitespace becomes pretty\nimportant. Fortunately, Helm templates have a few tools to help.\n\nFirst, the curly brace syntax of template declarations can be modified with\nspecial characters to tell the template engine to chomp whitespace. `{{-` (with the\ndash and space added) indicates that whitespace should be chomped left, while\n`-}}` means whitespace to the right should be consumed. 
_Be careful! Newlines are whitespace!_

> Make sure there is a space between the `-` and the rest of your directive. `{{- 3 }}` means "trim left whitespace and print 3" while `{{-3 }}` means "print -3".

Using this syntax, we can modify our template to get rid of those new lines:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
  {{- if eq .Values.favorite.drink "coffee" }}
  mug: "true"
  {{- end }}
```

Just for the sake of making this point clear, let's adjust the above, and substitute an `*` for each whitespace that will be deleted following this rule. An `*` at the end of the line indicates a newline character that would be removed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | default "tea" | quote }}
  food: {{ .Values.favorite.food | upper | quote }}*
**{{- if eq .Values.favorite.drink "coffee" }}
  mug: "true"*
**{{- end }}

```

Keeping that in mind, we can run our template through Helm and see the result:

```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: clunky-cat-configmap
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "PIZZA"
  mug: "true"
```

Be careful with the chomping modifiers. It is easy to accidentally do things like this:

```yaml
  food: {{ .Values.favorite.food | upper | quote }}
  {{- if eq .Values.favorite.drink "coffee" -}}
  mug: "true"
  {{- end -}}

```

That will produce `food: "PIZZA"mug: "true"` because it consumed newlines on both sides.

> For the details on whitespace control in templates, see the [Official Go template documentation](https://godoc.org/text/template)

Finally, sometimes it's easier to tell the template system how to indent for you instead of trying to master the spacing of template directives. For that reason, you may sometimes find it useful to use the `indent` function (`{{ indent 2 "mug:true" }}`).

## Modifying scope using `with`

The next control structure to look at is the `with` action. This controls variable scoping. Recall that `.` is a reference to _the current scope_.
So `.Values` tells the template to find the `Values` object in the current scope.

The syntax for `with` is similar to a simple `if` statement:

```
{{ with PIPELINE }}
  # restricted scope
{{ end }}
```

Scopes can be changed. `with` can allow you to set the current scope (`.`) to a particular object. For example, we've been working with `.Values.favorite`. Let's rewrite our ConfigMap to alter the `.` scope to point to `.Values.favorite`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  {{- end }}
```

Note that we removed the `if` conditional from the previous exercise because it is now unnecessary: the block after `with` only executes if the value of `PIPELINE` is not empty.

Notice that now we can reference `.drink` and `.food` without qualifying them. That is because the `with` statement sets `.` to point to `.Values.favorite`. The `.` is reset to its previous scope after `{{ end }}`.

But here's a note of caution! Inside of the restricted scope, you will not be able to access the other objects from the parent scope using `.`. This, for example, will fail:

```yaml
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  release: {{ .Release.Name }}
  {{- end }}
```

It will produce an error because `Release.Name` is not inside of the restricted scope for `.`. However, if we swap the last two lines, all will work as expected because the scope is reset after `{{ end }}`.

```yaml
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  {{- end }}
  release: {{ .Release.Name }}
```

Or, we can use `$` for accessing the object `Release.Name` from the parent scope. `$` is mapped to the root scope when template execution begins and it does not change during template execution.
The following would work as well:

```yaml
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  release: {{ $.Release.Name }}
  {{- end }}
```

After looking at `range`, we will take a look at template variables, which offer one solution to the scoping issue above.

## Looping with the `range` action

Many programming languages have support for looping using `for` loops, `foreach` loops, or similar functional mechanisms. In Helm's template language, the way to iterate through a collection is to use the `range` operator.

To start, let's add a list of pizza toppings to our `values.yaml` file:

```yaml
favorite:
  drink: coffee
  food: pizza
pizzaToppings:
  - mushrooms
  - cheese
  - peppers
  - onions
```

Now we have a list (called a `slice` in templates) of `pizzaToppings`. We can modify our template to print this list into our ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  {{- end }}
  toppings: |-
    {{- range .Values.pizzaToppings }}
    - {{ . | title | quote }}
    {{- end }}
```

We can use `$` for accessing the list `Values.pizzaToppings` from the parent scope. `$` is mapped to the root scope when template execution begins and it does not change during template execution. The following would work as well:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  {{- with .Values.favorite }}
  drink: {{ .drink | default "tea" | quote }}
  food: {{ .food | upper | quote }}
  toppings: |-
    {{- range $.Values.pizzaToppings }}
    - {{ . | title | quote }}
    {{- end }}
  {{- end }}
```

Let's take a closer look at the `toppings:` list. The `range` function will "range over" (iterate through) the `pizzaToppings` list. But now something interesting happens. Just like `with` sets the scope of `.`, so does a `range` operator. Each time through the loop, `.` is set to the current pizza topping. That is, the first time, `.` is set to `mushrooms`. The second iteration it is set to `cheese`, and so on.

We can send the value of `.` directly down a pipeline, so when we do `{{ . | title | quote }}`, it sends `.` to `title` (title case function) and then to `quote`.
If we run this template, the output will be:

```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: edgy-dragonfly-configmap
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "PIZZA"
  toppings: |-
    - "Mushrooms"
    - "Cheese"
    - "Peppers"
    - "Onions"
```

Now, in this example we've done something tricky. The `toppings: |-` line is declaring a multi-line string. So our list of toppings is actually not a YAML list. It's a big string. Why would we do this? Because the data in ConfigMaps `data` is composed of key/value pairs, where both the key and the value are simple strings. To understand why this is the case, take a look at the [Kubernetes ConfigMap docs](https://kubernetes.io/docs/concepts/configuration/configmap/). For us, though, this detail doesn't matter much.

> The `|-` marker in YAML takes a multi-line string. This can be a useful technique for embedding big blocks of data inside of your manifests, as exemplified here.

Sometimes it's useful to be able to quickly make a list inside of your template, and then iterate over that list. Helm templates have a function to make this easy: `tuple`. In computer science, a tuple is a list-like collection of fixed size, but with arbitrary data types. This roughly conveys the way a `tuple` is used.

```yaml
  sizes: |-
    {{- range tuple "small" "medium" "large" }}
    - {{ . }}
    {{- end }}
```

The above will produce this:

```yaml
  sizes: |-
    - small
    - medium
    - large
```

In addition to lists and tuples, `range` can be used to iterate over collections that have a key and a value (like a `map` or `dict`).
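As a quick preview (a sketch, not part of the ConfigMap we are building in this guide), ranging over a map with two variables yields key/value pairs; here we range over the `favorite` map from our `values.yaml`:

```yaml
  favorites: |-
    {{- range $key, $val := .Values.favorite }}
    {{ $key }}: {{ $val | quote }}
    {{- end }}
```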
We'll see how to do that in the next section when we introduce template variables.
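Helm's engine is Go's `text/template`, so the chomping rules above can be checked outside of Helm. Here is a minimal sketch in plain Go (standard library only, so no Sprig functions like `title` or `quote`), mirroring the `toppings` example:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// render executes a tiny template that mirrors the toppings example.
// "{{-" chomps the whitespace (including the newline) to its left, which
// is what keeps the rendered list tight under "toppings: |-".
func render(items []string) string {
	tpl := template.Must(template.New("demo").Parse(
		"toppings: |-\n{{- range . }}\n  - {{ . }}\n{{- end }}\n"))
	var b strings.Builder
	if err := tpl.Execute(&b, items); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Print(render([]string{"mushrooms", "cheese"}))
	// prints:
	// toppings: |-
	//   - mushrooms
	//   - cheese
}
```

Removing the `-` markers and re-running is a quick way to see the stray blank lines reappear.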
---
title: "Template Functions and Pipelines"
description: "Using functions in templates."
weight: 5
---

So far, we've seen how to place information into a template. But that information is placed into the template unmodified. Sometimes we want to transform the supplied data in a way that makes it more useable to us.

Let's start with a best practice: When injecting strings from the `.Values` object into the template, we ought to quote these strings. We can do that by calling the `quote` function in the template directive:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ quote .Values.favorite.drink }}
  food: {{ quote .Values.favorite.food }}
```

Template functions follow the syntax `functionName arg1 arg2...`. In the snippet above, `quote .Values.favorite.drink` calls the `quote` function and passes it a single argument.

Helm has over 60 available functions. Some of them are defined by the [Go template language](https://godoc.org/text/template) itself. Most of the others are part of the [Sprig template library](https://masterminds.github.io/sprig/). We'll see many of them as we progress through the examples.

> While we talk about the "Helm template language" as if it is Helm-specific, it is actually a combination of the Go template language, some extra functions, and a variety of wrappers to expose certain objects to the templates. Many resources on Go templates may be helpful as you learn about templating.

## Pipelines

One of the powerful features of the template language is its concept of _pipelines_.
Drawing on a concept from UNIX, pipelines are a tool for chaining together a series of template commands to compactly express a series of transformations. In other words, pipelines are an efficient way of getting several things done in sequence. Let's rewrite the above example using a pipeline.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | quote }}
  food: {{ .Values.favorite.food | quote }}
```

In this example, instead of calling `quote ARGUMENT`, we inverted the order. We "sent" the argument to the function using a pipeline (`|`): `.Values.favorite.drink | quote`. Using pipelines, we can chain several functions together:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
```

> Inverting the order is a common practice in templates. You will see `.val | quote` more often than `quote .val`. Either practice is fine.

When evaluated, that template will produce this:

```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: trendsetting-p-configmap
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "PIZZA"
```

Note that our original `pizza` has now been transformed to `"PIZZA"`.

When pipelining arguments like this, the result of the first evaluation (`.Values.favorite.drink`) is sent as the _last argument to the function_.
We can modify the drink example above to illustrate with a function that takes two arguments: `repeat COUNT STRING`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  myvalue: "Hello World"
  drink: {{ .Values.favorite.drink | repeat 5 | quote }}
  food: {{ .Values.favorite.food | upper | quote }}
```

The `repeat` function will echo the given string the given number of times, so we will get this for output:

```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: melting-porcup-configmap
data:
  myvalue: "Hello World"
  drink: "coffeecoffeecoffeecoffeecoffee"
  food: "PIZZA"
```

## Using the `default` function

One function frequently used in templates is the `default` function: `default DEFAULT_VALUE GIVEN_VALUE`. This function allows you to specify a default value inside of the template, in case the value is omitted. Let's use it to modify the drink example above:

```yaml
drink: {{ .Values.favorite.drink | default "tea" | quote }}
```

If we run this as normal, we'll get our `coffee`:

```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: virtuous-mink-configmap
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "PIZZA"
```

Now, we will remove the favorite drink setting from `values.yaml`:

```yaml
favorite:
  #drink: coffee
  food: pizza
```

Now re-running `helm install --dry-run --debug fair-worm ./mychart` will produce this YAML:

```yaml
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fair-worm-configmap
data:
  myvalue: "Hello World"
  drink: "tea"
  food: "PIZZA"
```

In an actual chart, all static default values should live in `values.yaml`, and should not be repeated using the `default` command (otherwise they would be redundant). However, the `default` command is perfect for computed values, which cannot be declared inside `values.yaml`.
For example:

```yaml
drink: {{ .Values.favorite.drink | default (printf "%s-tea" (include "fullname" .)) }}
```

In some places, an `if` conditional guard may be better suited than `default`. We'll see those in the next section.

Template functions and pipelines are a powerful way to transform information and then insert it into your YAML. But sometimes it's necessary to add some template logic that is a little more sophisticated than just inserting a string. In the next section we will look at the control structures provided by the template language.

## Using the `lookup` function

The `lookup` function can be used to _look up_ resources in a running cluster. The synopsis of the lookup function is `lookup apiVersion, kind, namespace, name -> resource or resource list`.

| parameter  | type   |
|------------|--------|
| apiVersion | string |
| kind       | string |
| namespace  | string |
| name       | string |

Both `name` and `namespace` are optional and can be passed as an empty string (`""`).

The following combinations of parameters are possible:

| Behavior                               | Lookup function                            |
|----------------------------------------|--------------------------------------------|
| `kubectl get pod mypod -n mynamespace` | `lookup "v1" "Pod" "mynamespace" "mypod"`  |
| `kubectl get pods -n mynamespace`      | `lookup "v1" "Pod" "mynamespace" ""`       |
| `kubectl get pods --all-namespaces`    | `lookup "v1" "Pod" "" ""`                  |
| `kubectl get namespace mynamespace`    | `lookup "v1" "Namespace" "" "mynamespace"` |
| `kubectl get namespaces`               | `lookup "v1" "Namespace" "" ""`            |

When `lookup` returns an object, it will return a dictionary.
This dictionary can be further navigated to extract specific values.

The following example will return the annotations present for the `mynamespace` object:

```go
(lookup "v1" "Namespace" "" "mynamespace").metadata.annotations
```

When `lookup` returns a list of objects, it is possible to access the object list via the `items` field:

```go
{{ range $index, $service := (lookup "v1" "Service" "mynamespace" "").items }}
    {{/* do something with each service */}}
{{ end }}
```

When no object is found, an empty value is returned. This can be used to check for the existence of an object.

The `lookup` function uses Helm's existing Kubernetes connection configuration to query Kubernetes. If any error is returned when interacting with the API server (for example, due to lack of permission to access a resource), Helm's template processing will fail.

Keep in mind that Helm is not supposed to contact the Kubernetes API Server during a `helm template|install|upgrade|delete|rollback --dry-run` operation. To test `lookup` against a running cluster, `helm template|install|upgrade|delete|rollback --dry-run=server` should be used instead to allow cluster connection.

## Operators are functions

For templates, the operators (`eq`, `ne`, `lt`, `gt`, `and`, `or` and so on) are all implemented as functions.
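Because they are functions, operators are written in prefix position; for example (a hypothetical snippet, not from the chart built in this guide), `and` takes two arguments and `eq`/`ne` are called like any other two-argument function:

```yaml
{{ if and (eq .Values.favorite.drink "coffee") (ne .Values.favorite.food "stew") }}
mug: "true"
{{ end }}
```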
In pipelines, operations can be grouped with\nparentheses (`(`, and `)`).\n\nNow we can turn from functions and pipelines to flow control with conditions,\nloops, and scope modifiers.","site":"helm","answers_cleaned":"    title   Template Functions and Pipelines  description   Using functions in templates   weight  5      So far  we ve seen how to place information into a template  But that information is placed into the template unmodified  Sometimes we want to transform the supplied data in a way that makes it more useable to us   Let s start with a best practice  When injecting strings from the   Values  object into the template  we ought to quote these strings  We can do that by calling the  quote  function in the template directive      yaml apiVersion  v1 kind  ConfigMap metadata    name   configmap data    myvalue   Hello World    drink     food        Template functions follow the syntax  functionName arg1 arg2      In the snippet above   quote  Values favorite drink  calls the  quote  function and passes it a single argument   Helm has over 60 available functions  Some of them are defined by the  Go template language  https   godoc org text template  itself  Most of the others are part of the  Sprig template library  https   masterminds github io sprig    We ll see many of them as we progress through the examples     While we talk about the  Helm template language  as if it is Helm specific  it   is actually a combination of the Go template language  some extra functions    and a variety of wrappers to expose certain objects to the templates  Many   resources on Go templates may be helpful as you learn about templating      Pipelines  One of the powerful features of the template language is its concept of  pipelines   Drawing on a concept from UNIX  pipelines are a tool for chaining together a series of template commands to compactly express a series of transformations  In other words  pipelines are an efficient way of getting several things done in 
sequence  Let s rewrite the above example using a pipeline      yaml apiVersion  v1 kind  ConfigMap metadata    name   configmap data    myvalue   Hello World    drink     food        In this example  instead of calling  quote ARGUMENT   we inverted the order  We  sent  the argument to the function using a pipeline          Values favorite drink   quote   Using pipelines  we can chain several functions together      yaml apiVersion  v1 kind  ConfigMap metadata    name   configmap data    myvalue   Hello World    drink     food          Inverting the order is a common practice in templates  You will see   val     quote  more often than  quote  val   Either practice is fine   When evaluated  that template will produce this      yaml   Source  mychart templates configmap yaml apiVersion  v1 kind  ConfigMap metadata    name  trendsetting p configmap data    myvalue   Hello World    drink   coffee    food   PIZZA       Note that our original  pizza  has now been transformed to   PIZZA     When pipelining arguments like this  the result of the first evaluation    Values favorite drink   is sent as the  last argument to the function   We can modify the drink example above to illustrate with a function that takes two arguments   repeat COUNT STRING       yaml apiVersion  v1 kind  ConfigMap metadata    name   configmap data    myvalue   Hello World    drink     food        The  repeat  function will echo the given string the given number of times  so we will get this for output      yaml   Source  mychart templates configmap yaml apiVersion  v1 kind  ConfigMap metadata    name  melting porcup configmap data    myvalue   Hello World    drink   coffeecoffeecoffeecoffeecoffee    food   PIZZA          Using the  default  function  One function frequently used in templates is the  default  function   default DEFAULT VALUE GIVEN VALUE   This function allows you to specify a default value inside of the template  in case the value is omitted  Let s use it to modify the drink 
example above:\n\n```yaml\ndrink: {{ .Values.favorite.drink | default \"tea\" | quote }}\n```\n\nIf we run this as normal, we'll get our `coffee`:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: virtuous-mink-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: \"coffee\"\n  food: \"PIZZA\"\n```\n\nNow, we will remove the favorite drink setting from `values.yaml`:\n\n```yaml\nfavorite:\n  #drink: coffee\n  food: pizza\n```\n\nNow re-running `helm install --dry-run --debug fair-worm .\/mychart` will produce this YAML:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: fair-worm-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: \"tea\"\n  food: \"PIZZA\"\n```\n\nIn an actual chart, all static default values should live in `values.yaml`, and should not be\nrepeated using the `default` command (otherwise they would be redundant). However, the `default`\ncommand is perfect for computed values, which cannot be declared inside `values.yaml`.\n\nIn some places, an `if` conditional guard may be better suited than `default`. We'll see those in\nthe next section.\n\nTemplate functions and pipelines are a powerful way to transform information and then insert it\ninto your YAML. But sometimes it's necessary to add some template logic that is a little more\nsophisticated than just inserting a string. In the next section we will look at the control\nstructures provided by the template language.\n\n## Using the `lookup` function\n\nThe `lookup` function can be used to _look up_ resources in a running cluster. The synopsis of\nthe lookup function is `lookup apiVersion, kind, namespace, name -> resource or resource list`.\n\n| parameter  | type   |\n|------------|--------|\n| apiVersion | string |\n| kind       | string |\n| namespace  | string |\n| name       | string |\n\nBoth `name` and `namespace` are optional and can be passed as an empty string (`\"\"`).\n\nThe following combination of parameters are possible:\n\n| Behavior                               | Lookup function                            |\n|----------------------------------------|--------------------------------------------|\n| `kubectl get pod mypod -n mynamespace` | `lookup \"v1\" \"Pod\" \"mynamespace\" \"mypod\"`  |\n| `kubectl get pods -n mynamespace`      | `lookup \"v1\" \"Pod\" \"mynamespace\" \"\"`       |\n| `kubectl get pods --all-namespaces`    | `lookup \"v1\" \"Pod\" \"\" \"\"`                  |\n| `kubectl get namespace mynamespace`    | `lookup \"v1\" \"Namespace\" \"\" \"mynamespace\"` |\n| `kubectl get namespaces`               | `lookup \"v1\" \"Namespace\" \"\" \"\"`            |\n\nWhen `lookup` returns an object, it will return a dictionary. This dictionary can be further\nnavigated to extract specific values.\n\nThe following example will return the annotations present for the `mynamespace` object:\n\n```go\n(lookup \"v1\" \"Namespace\" \"\" \"mynamespace\").metadata.annotations\n```\n\nWhen `lookup` returns a list of objects, it is possible to access the object list via the\n`items` field:\n\n```go\n{{ range $index, $service := (lookup \"v1\" \"Service\" \"mynamespace\" \"\").items }}\n    {{\/* do something with each service *\/}}\n{{ end }}\n```\n\nWhen no object is found, an empty value is returned. This can be used to check for the existence\nof an object.\n\nThe `lookup` function uses Helm's existing Kubernetes connection configuration to query\nKubernetes. If any error is returned when interacting with the API server (for example due to\nlack of permission to access a resource), Helm's template processing will fail.\n\nKeep in mind that Helm is not supposed to contact the Kubernetes API Server during a\n`helm template|install|upgrade|delete|rollback --dry-run` operation. To test `lookup` against a\nrunning cluster, `helm template|install|upgrade|delete|rollback --dry-run=server` should be used\ninstead to allow cluster connection.\n\n## Operators are functions\n\nFor templates, the operators (`eq`, `ne`, `lt`, `gt`, `and`, `or` and so on) are all implemented\nas functions. In pipelines, operations can be grouped with parentheses (`(`, and `)`).\n\nNow we can turn from functions and pipelines to flow control with conditions, loops, and scope\nmodifiers."}
{"questions":"helm templates This makes it easy to import one template from within another How to access files from within a template template and inject its contents without sending the contents through the weight 10 In the previous section we looked at several ways to create and access named title Accessing Files Inside Templates template But sometimes it is desirable to import a file that is not a","answers":"---\ntitle: \"Accessing Files Inside Templates\"\ndescription: \"How to access files from within a template.\"\nweight: 10\n---\n\nIn the previous section we looked at several ways to create and access named\ntemplates. This makes it easy to import one template from within another\ntemplate. But sometimes it is desirable to import a _file that is not a\ntemplate_ and inject its contents without sending the contents through the\ntemplate renderer.\n\nHelm provides access to files through the `.Files` object. Before we get going\nwith the template examples, though, there are a few things to note about how\nthis works:\n\n- It is okay to add extra files to your Helm chart. These files will be bundled.\n  Be careful, though. 
Charts must be smaller than 1M because of the storage\n  limitations of Kubernetes objects.\n- Some files cannot be accessed through the `.Files` object, usually for\n  security reasons.\n  - Files in `templates\/` cannot be accessed.\n  - Files excluded using `.helmignore` cannot be accessed.\n  - Files outside of a Helm application [subchart](), including those of the parent, cannot be accessed.\n- Charts do not preserve UNIX mode information, so file-level permissions will\n  have no impact on the availability of a file when it comes to the `.Files`\n  object.\n\n<!-- (see https:\/\/github.com\/jonschlinkert\/markdown-toc) -->\n\n<!-- toc -->\n\n- [Basic example](#basic-example)\n- [Path helpers](#path-helpers)\n- [Glob patterns](#glob-patterns)\n- [ConfigMap and Secrets utility functions](#configmap-and-secrets-utility-functions)\n- [Encoding](#encoding)\n- [Lines](#lines)\n\n<!-- tocstop -->\n\n## Basic example\n\nWith those caveats behind, let's write a template that reads three files into\nour ConfigMap. To get started, we will add three files to the chart, putting all\nthree directly inside of the `mychart\/` directory.\n\n`config1.toml`:\n\n```toml\nmessage = Hello from config 1\n```\n\n`config2.toml`:\n\n```toml\nmessage = This is config 2\n```\n\n`config3.toml`:\n\n```toml\nmessage = Goodbye from config 3\n```\n\nEach of these is a simple TOML file (think old-school Windows INI files). We\nknow the names of these files, so we can use a `range` function to loop through\nthem and inject their contents into our ConfigMap.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-configmap\ndata:\n  {{- $files := .Files }}\n  {{- range tuple \"config1.toml\" \"config2.toml\" \"config3.toml\" }}\n  {{ . }}: |-\n        {{ $files.Get . }}\n  {{- end }}\n```\n\nThis ConfigMap uses several of the techniques discussed in previous sections.\nFor example, we create a `$files` variable to hold a reference to the `.Files`\nobject. We also use the `tuple` function to create a list of files that we loop\nthrough. 
Then we print each file name (`{{ . }}: |-`) followed by the contents\nof the file `{{ $files.Get . }}`.\n\nRunning this template will produce a single ConfigMap with the contents of all\nthree files:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: quieting-giraf-configmap\ndata:\n  config1.toml: |-\n    message = Hello from config 1\n\n  config2.toml: |-\n    message = This is config 2\n\n  config3.toml: |-\n    message = Goodbye from config 3\n```\n\n## Path helpers\n\nWhen working with files, it can be very useful to perform some standard\noperations on the file paths themselves. To help with this, Helm imports many of\nthe functions from Go's [path](https:\/\/golang.org\/pkg\/path\/) package for your\nuse. They are all accessible with the same names as in the Go package, but with\na lowercase first letter. For example, `Base` becomes `base`, etc.\n\nThe imported functions are:\n- Base\n- Dir\n- Ext\n- IsAbs\n- Clean\n\n## Glob patterns\n\nAs your chart grows, you may find you have a greater need to organize your files\nmore, and so we provide a `Files.Glob(pattern string)` method to assist in\nextracting certain files with all the flexibility of [glob\npatterns](https:\/\/godoc.org\/github.com\/gobwas\/glob).\n\n`.Glob` returns a `Files` type, so you may call any of the `Files` methods on\nthe returned object.\n\nFor example, imagine the directory structure:\n\n```\nfoo\/:\n  foo.txt foo.yaml\n\nbar\/:\n  bar.go bar.conf baz.yaml\n```\n\nYou have multiple options with Globs:\n\n```yaml\n{{ $currentScope := . }}\n{{ range $path, $_ := .Files.Glob \"**.yaml\" }}\n    {{- with $currentScope }}\n        {{ .Files.Get $path }}\n    {{- end }}\n{{ end }}\n```\n\nOr\n\n```yaml\n{{ range $path, $_ := .Files.Glob \"**.yaml\" }}\n      {{ $.Files.Get $path }}\n{{ end }}\n```\n\n## ConfigMap and Secrets utility functions\n\n(Available Helm 2.0.2 and after)\n\nIt is very common to want to place file content into both ConfigMaps and\nSecrets, for mounting into your pods at run time. 
To help with this, we provide\na couple utility methods on the `Files` type.\n\nFor further organization, it is especially useful to use these methods in\nconjunction with the `Glob` method.\n\nGiven the directory structure from the [Glob](#glob-patterns) example above:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: conf\ndata:\n{{ (.Files.Glob \"foo\/*\").AsConfig | indent 2 }}\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: very-secret\ntype: Opaque\ndata:\n{{ (.Files.Glob \"bar\/*\").AsSecrets | indent 2 }}\n```\n\n## Encoding\n\nYou can import a file and have the template base-64 encode it to ensure\nsuccessful transmission:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: {{ .Release.Name }}-secret\ntype: Opaque\ndata:\n  token: |-\n    {{ .Files.Get \"config1.toml\" | b64enc }}\n```\n\nThe above will take the same `config1.toml` file we used before and encode it:\n\n```yaml\n# Source: mychart\/templates\/secret.yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: lucky-turkey-secret\ntype: Opaque\ndata:\n  token: |-\n    bWVzc2FnZSA9IEhlbGxvIGZyb20gY29uZmlnIDEK\n```\n\n## Lines\n\nSometimes it is desirable to access each line of a file in your template. We\nprovide a convenient `Lines` method for this.\n\nYou can loop through `Lines` using a `range` function:\n\n```yaml\ndata:\n  some-file.txt: {{ range .Files.Lines \"foo\/bar.txt\" }}\n    {{ . }}{{ end }}\n```\n\nThere is no way to pass files external to the chart during `helm install`. So if\nyou are asking users to supply data, it must be loaded using `helm install -f`\nor `helm install --set`.\n\nThis discussion wraps up our dive into the tools and techniques for writing Helm\ntemplates. 
In the next section we will see how you can use one special file,\n`templates\/NOTES.txt`, to send post-installation instructions to the users of\nyour chart.","site":"helm"}
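Tying the Glob and path-helper sections together, here is a small sketch (the `configs/` directory and the `-flat-configmap` name are hypothetical, not from the guide) that flattens nested file paths into ConfigMap keys using the `base` helper:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-flat-configmap
data:
  {{- /* Hypothetical layout: configs/ may contain nested .toml files.
         base strips the directory portion, so each key is just the
         file name; $.Files reaches back to the root scope inside range. */}}
  {{- range $path, $_ := .Files.Glob "configs/**.toml" }}
  {{ base $path }}: |-
{{ $.Files.Get $path | indent 4 }}
  {{- end }}
```

Note that flattening paths this way assumes no two files share a base name, since later keys would overwrite earlier ones.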
{"questions":"helm dependencies called subcharts that also have their own values and templates weight 11 Interacting with a subchart s and global values To this point we have been working only with one chart But charts can have In this section we will create a subchart and see the different ways we can title Subcharts and Global Values access values from within templates","answers":"---\ntitle: \"Subcharts and Global Values\"\ndescription: \"Interacting with a subchart's and global values.\"\nweight: 11\n---\n\nTo this point we have been working only with one chart. But charts can have\ndependencies, called _subcharts_, that also have their own values and templates.\nIn this section we will create a subchart and see the different ways we can\naccess values from within templates.\n\nBefore we dive into the code, there are a few important details to learn about application subcharts.\n\n1. A subchart is considered \"stand-alone\", which means a subchart can never\n   explicitly depend on its parent chart.\n2. For that reason, a subchart cannot access the values of its parent.\n3. A parent chart can override values for subcharts.\n4. Helm has a concept of _global values_ that can be accessed by all charts.\n\n> These limitations do not all necessarily apply to [library charts](), which are designed to provide standardized helper functionality.\n\nAs we walk through the examples in this section, many of these concepts will\nbecome clearer.\n\n## Creating a Subchart\n\nFor these exercises, we'll start with the `mychart\/` chart we created at the\nbeginning of this guide, and we'll add a new chart inside of it.\n\n```console\n$ cd mychart\/charts\n$ helm create mysubchart\nCreating mysubchart\n$ rm -rf mysubchart\/templates\/*\n```\n\nNotice that just as before, we deleted all of the base templates so that we can\nstart from scratch. In this guide, we are focused on how templates work, not on\nmanaging dependencies. 
But the [Charts Guide]()\nhas more information on how subcharts work.\n\n## Adding Values and a Template to the Subchart\n\nNext, let's create a simple template and values file for our `mysubchart` chart.\nThere should already be a `values.yaml` in `mychart\/charts\/mysubchart`. We'll\nset it up like this:\n\n```yaml\ndessert: cake\n```\n\nNext, we'll create a new ConfigMap template in\n`mychart\/charts\/mysubchart\/templates\/configmap.yaml`:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-cfgmap2\ndata:\n  dessert: {{ .Values.dessert }}\n```\n\nBecause every subchart is a _stand-alone chart_, we can test `mysubchart` on its\nown:\n\n```console\n$ helm install --generate-name --dry-run --debug mychart\/charts\/mysubchart\nSERVER: \"localhost:44134\"\nCHART PATH: \/Users\/mattbutcher\/Code\/Go\/src\/helm.sh\/helm\/_scratch\/mychart\/charts\/mysubchart\nNAME:   newbie-elk\nTARGET NAMESPACE:   default\nCHART:  mysubchart 0.1.0\nMANIFEST:\n---\n# Source: mysubchart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: newbie-elk-cfgmap2\ndata:\n  dessert: cake\n```\n\n## Overriding Values from a Parent Chart\n\nOur original chart, `mychart`, is now the _parent_ chart of `mysubchart`. This\nrelationship is based entirely on the fact that `mysubchart` is within\n`mychart\/charts`.\n\nBecause `mychart` is a parent, we can specify configuration in `mychart` and\nhave that configuration pushed into `mysubchart`. For example, we can modify\n`mychart\/values.yaml` like this:\n\n```yaml\nfavorite:\n  drink: coffee\n  food: pizza\npizzaToppings:\n  - mushrooms\n  - cheese\n  - peppers\n  - onions\n\nmysubchart:\n  dessert: ice cream\n```\n\nNote the last two lines. Any directives inside of the `mysubchart` section will\nbe sent to the `mysubchart` chart. 
So if we run `helm install --generate-name --dry-run --debug\nmychart`, one of the things we will see is the `mysubchart` ConfigMap:\n\n```yaml\n# Source: mychart\/charts\/mysubchart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: unhinged-bee-cfgmap2\ndata:\n  dessert: ice cream\n```\n\nThe value at the top level has now overridden the value of the subchart.\n\nThere's an important detail to notice here. We didn't change the template of\n`mychart\/charts\/mysubchart\/templates\/configmap.yaml` to point to\n`.Values.mysubchart.dessert`. From that template's perspective, the value is\nstill located at `.Values.dessert`. As the template engine passes values along,\nit sets the scope. So for the `mysubchart` templates, only values specifically\nfor `mysubchart` will be available in `.Values`.\n\nSometimes, though, you do want certain values to be available to all of the\ntemplates. This is accomplished using global chart values.\n\n## Global Chart Values\n\nGlobal values are values that can be accessed from any chart or subchart by\nexactly the same name. Globals require explicit declaration. You can't use an\nexisting non-global as if it were a global.\n\nThe Values data type has a reserved section called `Values.global` where global\nvalues can be set. 
Let's set one in our `mychart\/values.yaml` file.\n\n```yaml\nfavorite:\n  drink: coffee\n  food: pizza\npizzaToppings:\n  - mushrooms\n  - cheese\n  - peppers\n  - onions\n\nmysubchart:\n  dessert: ice cream\n\nglobal:\n  salad: caesar\n```\n\nBecause of the way globals work, both `mychart\/templates\/configmap.yaml` and\n`mysubchart\/templates\/configmap.yaml` should be able to access that value as\n`{{ .Values.global.salad }}`.\n\n`mychart\/templates\/configmap.yaml`:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-configmap\ndata:\n  salad: {{ .Values.global.salad }}\n```\n\n`mysubchart\/templates\/configmap.yaml`:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-cfgmap2\ndata:\n  dessert: {{ .Values.dessert }}\n  salad: {{ .Values.global.salad }}\n```\n\nNow if we run a dry run install, we'll see the same value in both outputs:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: silly-snake-configmap\ndata:\n  salad: caesar\n\n---\n# Source: mychart\/charts\/mysubchart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: silly-snake-cfgmap2\ndata:\n  dessert: ice cream\n  salad: caesar\n```\n\nGlobals are useful for passing information like this, though it does take some\nplanning to make sure the right templates are configured to use globals.\n\n## Sharing Templates with Subcharts\n\nParent charts and subcharts can share templates. Any defined block in any chart\nis available to other charts.\n\nFor example, we can define a simple template like this:\n\n```yaml\n{{- define \"labels\" }}from: mychart{{ end }}\n```\n\nRecall how the labels on templates are _globally shared_. Thus, the `labels`\ntemplate can be included from any other chart.\n\nWhile chart developers have a choice between `include` and `template`, one\nadvantage of using `include` is that `include` can dynamically reference\ntemplates:\n\n```yaml\n{{ include $mytemplate }}\n```\n\nThe above will dereference `$mytemplate`. 
The `template` function, in contrast,\nwill only accept a string literal.\n\n## Avoid Using Blocks\n\nThe Go template language provides a `block` keyword that allows developers to\nprovide a default implementation which is overridden later. In Helm charts,\nblocks are not the best tool for overriding because if multiple implementations\nof the same block are provided, the one selected is unpredictable.\n\nThe suggestion is to instead use `include`.","site":"helm"}
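As a sketch of the template-sharing pattern described above (the `labels` define follows the guide's example; the extra `app` field, the `-labeled` name, and the `nindent` placement are assumptions about typical usage, not part of the guide):

```yaml
{{- /* Defined once, in either mychart or mysubchart; named templates
       are globally shared, so any chart in the tree can include it. */}}
{{- define "labels" }}
from: mychart
app: {{ .Chart.Name }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-labeled
  labels:
    {{- include "labels" . | nindent 4 }}
data:
  dessert: {{ .Values.dessert | default "cake" }}
```

Passing `.` to `include` hands the current scope to the shared template, so `.Chart.Name` renders as the name of whichever chart does the including.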
{"questions":"helm aliases intro gettingstarted In this section of the guide we ll create a chart and then add a first guide weight 2 template The chart we created here will be used throughout the rest of the A quick guide on Chart templates title Getting Started","answers":"---\ntitle: \"Getting Started\"\nweight: 2\ndescription: \"A quick guide on Chart templates.\"\naliases: [\"\/intro\/getting_started\/\"]\n---\n\nIn this section of the guide, we'll create a chart and then add a first\ntemplate. The chart we created here will be used throughout the rest of the\nguide.\n\nTo get going, let's take a brief look at a Helm chart.\n\n## Charts\n\nAs described in the [Charts Guide](..\/..\/topics\/charts), Helm charts are\nstructured like this:\n\n```\nmychart\/\n  Chart.yaml\n  values.yaml\n  charts\/\n  templates\/\n  ...\n```\n\nThe `templates\/` directory is for template files. When Helm evaluates a chart,\nit will send all of the files in the `templates\/` directory through the template\nrendering engine. It then collects the results of those templates and sends them\non to Kubernetes.\n\nThe `values.yaml` file is also important to templates. This file contains the\n_default values_ for a chart. These values may be overridden by users during\n`helm install` or `helm upgrade`.\n\nThe `Chart.yaml` file contains a description of the chart. You can access it\nfrom within a template.\n\nThe `charts\/` directory _may_ contain other charts\n(which we call _subcharts_). Later in this guide we will see how those work when\nit comes to template rendering.\n\n## A Starter Chart\n\nFor this guide, we'll create a simple chart called `mychart`, and then we'll\ncreate some templates inside of the chart.\n\n```console\n$ helm create mychart\nCreating mychart\n```\n\n### A Quick Glimpse of `mychart\/templates\/`\n\nIf you take a look at the `mychart\/templates\/` directory, you'll notice a few\nfiles already there.\n\n- `NOTES.txt`: The \"help text\" for your chart. 
This will be displayed to your\n  users when they run `helm install`.\n- `deployment.yaml`: A basic manifest for creating a Kubernetes\n  [deployment](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/)\n- `service.yaml`: A basic manifest for creating a [service\n  endpoint](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/) for your deployment\n- `_helpers.tpl`: A place to put template helpers that you can re-use throughout\n  the chart\n\nAnd what we're going to do is... _remove them all!_ That way we can work through\nour tutorial from scratch. We'll actually create our own `NOTES.txt` and\n`_helpers.tpl` as we go.\n\n```console\n$ rm -rf mychart\/templates\/*\n```\n\nWhen you're writing production grade charts, having basic versions of these\ncharts can be really useful. So in your day-to-day chart authoring, you probably\nwon't want to remove them.\n\n## A First Template\n\nThe first template we are going to create will be a `ConfigMap`. In Kubernetes,\na ConfigMap is simply an object for storing configuration data. Other things,\nlike pods, can access the data in a ConfigMap.\n\nBecause ConfigMaps are basic resources, they make a great starting point for us.\n\nLet's begin by creating a file called `mychart\/templates\/configmap.yaml`:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: mychart-configmap\ndata:\n  myvalue: \"Hello World\"\n```\n\n**TIP:** Template names do not follow a rigid naming pattern. However, we\nrecommend using the extension `.yaml` for YAML files and `.tpl` for helpers.\n\nThe YAML file above is a bare-bones ConfigMap, having the minimal necessary\nfields. By virtue of the fact that this file is in the `mychart\/templates\/`\ndirectory, it will be sent through the template engine.\n\nIt is just fine to put a plain YAML file like this in the `mychart\/templates\/`\ndirectory. 
When Helm reads this template, it will simply send it to Kubernetes\nas-is.\n\nWith this simple template, we now have an installable chart. And we can install\nit like this:\n\n```console\n$ helm install full-coral .\/mychart\nNAME: full-coral\nLAST DEPLOYED: Tue Nov  1 17:36:01 2016\nNAMESPACE: default\nSTATUS: DEPLOYED\nREVISION: 1\nTEST SUITE: None\n```\n\nUsing Helm, we can retrieve the release and see the actual template that was\nloaded.\n\n```console\n$ helm get manifest full-coral\n\n---\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: mychart-configmap\ndata:\n  myvalue: \"Hello World\"\n```\n\nThe `helm get manifest` command takes a release name (`full-coral`) and prints\nout all of the Kubernetes resources that were uploaded to the server. Each file\nbegins with `---` to indicate the start of a YAML document, and then is followed\nby an automatically generated comment line that tells us what template file\ngenerated this YAML document.\n\nFrom there on, we can see that the YAML data is exactly what we put in our\n`configmap.yaml` file.\n\nNow we can uninstall our release: `helm uninstall full-coral`.\n\n### Adding a Simple Template Call\n\nHard-coding the `name:` into a resource is usually considered to be bad\npractice. Names should be unique to a release. So we might want to generate a\nname field by inserting the release name.\n\n**TIP:** The `name:` field is limited to 63 characters because of limitations to\nthe DNS system. 
For that reason, release names are limited to 53 characters.\nKubernetes 1.3 and earlier limited to only 24 characters (thus 14 character\nnames).\n\nLet's alter `configmap.yaml` accordingly.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-configmap\ndata:\n  myvalue: \"Hello World\"\n```\n\nThe big change comes in the value of the `name:` field, which is now\n`{{ .Release.Name }}-configmap`.\n\n> A template directive is enclosed in `{{` and `}}` blocks.\n\nThe template directive `{{ .Release.Name }}` injects the release name into the\ntemplate. The values that are passed into a template can be thought of as\n_namespaced objects_, where a dot (`.`) separates each namespaced element.\n\nThe leading dot before `Release` indicates that we start with the top-most\nnamespace for this scope (we'll talk about scope in a bit). So we could read\n`.Release.Name` as \"start at the top namespace, find the `Release` object, then\nlook inside of it for an object called `Name`\".\n\nThe `Release` object is one of the built-in objects for Helm, and we'll cover it\nin more depth later. But for now, it is sufficient to say that this will display\nthe release name that the library assigns to our release.\n\nNow when we install our resource, we'll immediately see the result of using this\ntemplate directive:\n\n```console\n$ helm install clunky-serval .\/mychart\nNAME: clunky-serval\nLAST DEPLOYED: Tue Nov  1 17:45:37 2016\nNAMESPACE: default\nSTATUS: DEPLOYED\nREVISION: 1\nTEST SUITE: None\n```\n\nYou can run `helm get manifest clunky-serval` to see the entire generated YAML.\n\nNote that the name of the ConfigMap inside Kubernetes is `clunky-serval-configmap`\ninstead of the previous `mychart-configmap`.\n\nAt this point, we've seen templates at their most basic: YAML files that have\ntemplate directives embedded in `{{` and `}}`. In the next part, we'll take a\ndeeper look into templates. 
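\n\nTo recap with a concrete rendering, the manifest retrieved for this second release would look like the earlier `full-coral` output with only the generated name changed (a sketch, abbreviated):\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: clunky-serval-configmap\ndata:\n  myvalue: \"Hello World\"\n```\n\n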
But before moving on, there's one quick trick that\ncan make building templates faster: When you want to test the template\nrendering, but not actually install anything, you can use `helm install --debug\n--dry-run goodly-guppy .\/mychart`. This will render the templates. But instead\nof installing the chart, it will return the rendered template to you so you can\nsee the output:\n\n```console\n$ helm install --debug --dry-run goodly-guppy .\/mychart\ninstall.go:149: [debug] Original chart version: \"\"\ninstall.go:166: [debug] CHART PATH: \/Users\/ninja\/mychart\n\nNAME: goodly-guppy\nLAST DEPLOYED: Thu Dec 26 17:24:13 2019\nNAMESPACE: default\nSTATUS: pending-install\nREVISION: 1\nTEST SUITE: None\nUSER-SUPPLIED VALUES:\n{}\n\nCOMPUTED VALUES:\naffinity: {}\nfullnameOverride: \"\"\nimage:\n  pullPolicy: IfNotPresent\n  repository: nginx\nimagePullSecrets: []\ningress:\n  annotations: {}\n  enabled: false\n  hosts:\n  - host: chart-example.local\n    paths: []\n  tls: []\nnameOverride: \"\"\nnodeSelector: {}\npodSecurityContext: {}\nreplicaCount: 1\nresources: {}\nsecurityContext: {}\nservice:\n  port: 80\n  type: ClusterIP\nserviceAccount:\n  create: true\n  name: null\ntolerations: []\n\nHOOKS:\nMANIFEST:\n---\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: goodly-guppy-configmap\ndata:\n  myvalue: \"Hello World\"\n\n```\n\nUsing `--dry-run` will make it easier to test your code, but it won't ensure\nthat Kubernetes itself will accept the templates you generate. It's best not to\nassume that your chart will install just because `--dry-run` works.\n\nIn the [Chart Template Guide](_index.md), we take the basic chart we defined\nhere and explore the Helm template language in detail. 
And we'll get started\nwith built-in objects.","site":"helm"}
{"questions":"helm template authors can use to make our templates less error prone and easier to title Appendix YAML Techniques read Most of this guide has been focused on writing the template language Here we ll look at the YAML format YAML has some useful features that we as A closer look at the YAML specification and how it applies to Helm weight 15","answers":"---\ntitle: \"Appendix: YAML Techniques\"\ndescription: \"A closer look at the YAML specification and how it applies to Helm.\"\nweight: 15\n---\n\nMost of this guide has been focused on writing the template language. Here,\nwe'll look at the YAML format. YAML has some useful features that we, as\ntemplate authors, can use to make our templates less error prone and easier to\nread.\n\n## Scalars and Collections\n\nAccording to the [YAML spec](https:\/\/yaml.org\/spec\/1.2\/spec.html), there are two\ntypes of collections, and many scalar types.\n\nThe two types of collections are maps and sequences:\n\n```yaml\nmap:\n  one: 1\n  two: 2\n  three: 3\n\nsequence:\n  - one\n  - two\n  - three\n```\n\nScalar values are individual values (as opposed to collections)\n\n### Scalar Types in YAML\n\nIn Helm's dialect of YAML, the scalar data type of a value is determined by a\ncomplex set of rules, including the Kubernetes schema for resource definitions.\nBut when inferring types, the following rules tend to hold true.\n\nIf an integer or float is an unquoted bare word, it is typically treated as a\nnumeric type:\n\n```yaml\ncount: 1\nsize: 2.34\n```\n\nBut if they are quoted, they are treated as strings:\n\n```yaml\ncount: \"1\" # <-- string, not int\nsize: '2.34' # <-- string, not float\n```\n\nThe same is true of booleans:\n\n```yaml\nisGood: true   # bool\nanswer: \"true\" # string\n```\n\nThe word for an empty value is `null` (not `nil`).\n\nNote that `port: \"80\"` is valid YAML, and will pass through both the template\nengine and the YAML parser, but will fail if Kubernetes expects `port` to be 
an\ninteger.\n\nIn some cases, you can force a particular type inference using YAML node tags:\n\n```yaml\ncoffee: \"yes, please\"\nage: !!str 21\nport: !!int \"80\"\n```\n\nIn the above, `!!str` tells the parser that `age` is a string, even if it looks\nlike an int. And `port` is treated as an int, even though it is quoted.\n\n\n## Strings in YAML\n\nMuch of the data that we place in YAML documents are strings. YAML has more than\none way to represent a string. This section explains the ways and demonstrates\nhow to use some of them.\n\nThere are three \"inline\" ways of declaring a string:\n\n```yaml\nway1: bare words\nway2: \"double-quoted strings\"\nway3: 'single-quoted strings'\n```\n\nAll inline styles must be on one line.\n\n- Bare words are unquoted, and are not escaped. For this reason, you have to be\n  careful what characters you use.\n- Double-quoted strings can have specific characters escaped with `\\`. For\n  example `\"\\\"Hello\\\", she said\"`. You can escape line breaks with `\\n`.\n- Single-quoted strings are \"literal\" strings, and do not use the `\\` to escape\n  characters. The only escape sequence is `''`, which is decoded as a single\n  `'`.\n\nIn addition to the one-line strings, you can declare multi-line strings:\n\n```yaml\ncoffee: |\n  Latte\n  Cappuccino\n  Espresso\n```\n\nThe above will treat the value of `coffee` as a single string equivalent to\n`Latte\\nCappuccino\\nEspresso\\n`.\n\nNote that the first line after the `|` must be correctly indented. 
So we could\nbreak the example above by doing this:\n\n```yaml\ncoffee: |\n         Latte\n  Cappuccino\n  Espresso\n\n```\n\nBecause `Latte` is incorrectly indented, we'd get an error like this:\n\n```\nError parsing file: error converting YAML to JSON: yaml: line 7: did not find expected key\n```\n\nIn templates, it is sometimes safer to put a fake \"first line\" of content in a\nmulti-line document just for protection from the above error:\n\n```yaml\ncoffee: |\n  # Commented first line\n         Latte\n  Cappuccino\n  Espresso\n\n```\n\nNote that whatever that first line is, it will be preserved in the output of the\nstring. So if you are, for example, using this technique to inject a file's\ncontents into a ConfigMap, the comment should be of the type expected by\nwhatever is reading that entry.\n\n### Controlling Spaces in Multi-line Strings\n\nIn the example above, we used `|` to indicate a multi-line string. But notice\nthat the content of our string was followed with a trailing `\\n`. If we want the\nYAML processor to strip off the trailing newline, we can add a `-` after the\n`|`:\n\n```yaml\ncoffee: |-\n  Latte\n  Cappuccino\n  Espresso\n```\n\nNow the `coffee` value will be: `Latte\\nCappuccino\\nEspresso` (with no trailing\n`\\n`).\n\nOther times, we might want all trailing whitespace to be preserved. We can do\nthis with the `|+` notation:\n\n```yaml\ncoffee: |+\n  Latte\n  Cappuccino\n  Espresso\n\n\nanother: value\n```\n\nNow the value of `coffee` will be `Latte\\nCappuccino\\nEspresso\\n\\n\\n`.\n\nIndentation inside of a text block is preserved, and results in the preservation\nof line breaks, too:\n\n```yaml\ncoffee: |-\n  Latte\n    12 oz\n    16 oz\n  Cappuccino\n  Espresso\n```\n\nIn the above case, `coffee` will be `Latte\\n  12 oz\\n  16\noz\\nCappuccino\\nEspresso`.\n\n### Indenting and Templates\n\nWhen writing templates, you may find yourself wanting to inject the contents of\na file into the template. 
As we saw in previous chapters, there are two ways of\ndoing this:\n\n- Use `{{ .Files.Get \"FILENAME\" }}` to get the contents of a file in the chart.\n- Use `{{ include \"TEMPLATE\" . }}` to render a template and then place its\n  contents into the chart.\n\nWhen inserting files into YAML, it's good to understand the multi-line rules\nabove. Oftentimes, the easiest way to insert a static file is to do something\nlike this:\n\n```yaml\nmyfile: |\n{{ .Files.Get \"myfile.txt\" | indent 2 }}\n```\n\nNote how we do the indentation above: `indent 2` tells the template engine to\nindent every line in \"myfile.txt\" with two spaces. Note that we do not indent\nthat template line. That's because if we did, the file content of the first line\nwould be indented twice.\n\n### Folded Multi-line Strings\n\nSometimes you want to represent a string in your YAML with multiple lines, but\nwant it to be treated as one long line when it is interpreted. This is called\n\"folding\". To declare a folded block, use `>` instead of `|`:\n\n```yaml\ncoffee: >\n  Latte\n  Cappuccino\n  Espresso\n\n\n```\n\nThe value of `coffee` above will be `Latte Cappuccino Espresso\\n`. Note that all\nbut the last line feed will be converted to spaces. You can combine the\nwhitespace controls with the folded text marker, so `>-` will replace or trim\nall newlines.\n\nNote that in the folded syntax, indenting text will cause lines to be preserved.\n\n```yaml\ncoffee: >-\n  Latte\n    12 oz\n    16 oz\n  Cappuccino\n  Espresso\n```\n\nThe above will produce `Latte\\n  12 oz\\n  16 oz\\nCappuccino Espresso`. Note that\nboth the spacing and the newlines are still there.\n\n## Embedding Multiple Documents in One File\n\nIt is possible to place more than one YAML document into a single file. This is\ndone by prefixing a new document with `---` and ending the document with\n`...`\n\n```yaml\n\n---\ndocument: 1\n...\n---\ndocument: 2\n...\n```\n\nIn many cases, either the `---` or the `...` may be omitted.\n\nSome files in Helm cannot contain more than one doc. 
If, for example, more than\none document is provided inside of a `values.yaml` file, only the first will be\nused.\n\nTemplate files, however, may have more than one document. When this happens, the\nfile (and all of its documents) is treated as one object during template\nrendering. But then the resulting YAML is split into multiple documents before\nit is fed to Kubernetes.\n\nWe recommend only using multiple documents per file when it is absolutely\nnecessary. Having multiple documents in a file can be difficult to debug.\n\n## YAML is a Superset of JSON\n\nBecause YAML is a superset of JSON, any valid JSON document _should_ be valid\nYAML.\n\n```json\n{\n  \"coffee\": \"yes, please\",\n  \"coffees\": [\n    \"Latte\", \"Cappuccino\", \"Espresso\"\n  ]\n}\n```\n\nThe above is another way of representing this:\n\n```yaml\ncoffee: yes, please\ncoffees:\n- Latte\n- Cappuccino\n- Espresso\n```\n\nAnd the two can be mixed (with care):\n\n```yaml\ncoffee: \"yes, please\"\ncoffees: [ \"Latte\", \"Cappuccino\", \"Espresso\"]\n```\n\nAll three of these should parse into the same internal representation.\n\nWhile this means that files such as `values.yaml` may contain JSON data, Helm\ndoes not treat the file extension `.json` as a valid suffix.\n\n## YAML Anchors\n\nThe YAML spec provides a way to store a reference to a value, and later refer to\nthat value by reference. YAML refers to this as \"anchoring\":\n\n```yaml\ncoffee: \"yes, please\"\nfavorite: &favoriteCoffee \"Cappuccino\"\ncoffees:\n  - Latte\n  - *favoriteCoffee\n  - Espresso\n```\n\nIn the above, `&favoriteCoffee` sets a reference to `Cappuccino`. Later, that\nreference is used as `*favoriteCoffee`. 
So `coffees` becomes `Latte, Cappuccino,\nEspresso`.\n\nWhile there are a few cases where anchors are useful, there is one aspect of\nthem that can cause subtle bugs: The first time the YAML is consumed, the\nreference is expanded and then discarded.\n\nSo if we were to decode and then re-encode the example above, the resulting YAML\nwould be:\n\n```yaml\ncoffee: yes, please\nfavorite: Cappuccino\ncoffees:\n- Latte\n- Cappuccino\n- Espresso\n```\n\nBecause Helm and Kubernetes often read, modify, and then rewrite YAML files, the\nanchors will be lost.","site":"helm"}
{"questions":"helm It is time to move beyond one template and begin to create others In this them elsewhere A named template sometimes called a partial or a How to define named templates subtemplate is simply a template defined inside of a file and given a name section we will see how to define named templates in one file and then use weight 9 title Named Templates","answers":"---\ntitle: \"Named Templates\"\ndescription: \"How to define named templates.\"\nweight: 9\n---\n\nIt is time to move beyond one template, and begin to create others. In this\nsection, we will see how to define _named templates_ in one file, and then use\nthem elsewhere. A _named template_ (sometimes called a _partial_ or a\n_subtemplate_) is simply a template defined inside of a file, and given a name.\nWe'll see two ways to create them, and a few different ways to use them.\n\nIn the [Flow Control](.\/control_structures.md) section we introduced three actions\nfor declaring and managing templates: `define`, `template`, and `block`. In this\nsection, we'll cover those three actions, and also introduce a special-purpose\n`include` function that works similarly to the `template` action.\n\nAn important detail to keep in mind when naming templates: **template names are\nglobal**. If you declare two templates with the same name, whichever one is\nloaded last will be the one used. Because templates in subcharts are compiled\ntogether with top-level templates, you should be careful to name your templates\nwith _chart-specific names_.\n\nOne popular naming convention is to prefix each defined template with the name\nof the chart: ``. By using the specific chart name\nas a prefix we can avoid any conflicts that may arise due to two different\ncharts that implement templates of the same name.\n\nThis behavior also applies to different versions of a chart. 
If you have\n`mychart` version `1.0.0` that defines a template one way, and a `mychart`\nversion `2.0.0` that modifies the existing named template, it will use the one\nthat was loaded last. You can work around this issue by also adding a version\nin the name of the chart: `` and\n``.\n\n## Partials and `_` files\n\nSo far, we've used one file, and that one file has contained a single template.\nBut Helm's template language allows you to create named embedded templates, that\ncan be accessed by name elsewhere.\n\nBefore we get to the nuts-and-bolts of writing those templates, there is a file\nnaming convention that deserves mention:\n\n* Most files in `templates\/` are treated as if they contain Kubernetes manifests\n* The `NOTES.txt` is one exception\n* But files whose name begins with an underscore (`_`) are assumed to _not_ have\n  a manifest inside. These files are not rendered to Kubernetes object\n  definitions, but are available everywhere within other chart templates for\n  use.\n\nThese files are used to store partials and helpers. In fact, when we first\ncreated `mychart`, we saw a file called `_helpers.tpl`. That file is the default\nlocation for template partials.\n\n## Declaring and using templates with `define` and `template`\n\nThe `define` action allows us to create a named template inside of a template\nfile. 
Its syntax goes like this:\n\n```yaml\n\n  # body of template here\n\n```\n\nFor example, we can define a template to encapsulate a Kubernetes block of\nlabels:\n\n```yaml\n\n  labels:\n    generator: helm\n    date: \n\n```\n\nNow we can embed this template inside of our existing ConfigMap, and then\ninclude it with the `template` action:\n\n```yaml\n\n  labels:\n    generator: helm\n    date: \n\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: -configmap\n  \ndata:\n  myvalue: \"Hello World\"\n  \n  : \n  \n```\n\nWhen the template engine reads this file, it will store away the reference to\n`mychart.labels` until `template \"mychart.labels\"` is called. Then it will\nrender that template inline. So the result will look like this:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: running-panda-configmap\n  labels:\n    generator: helm\n    date: 2016-11-02\ndata:\n  myvalue: \"Hello World\"\n  drink: \"coffee\"\n  food: \"pizza\"\n```\n\nNote: a `define` does not produce output unless it is called with a template,\nas in this example.\n\nConventionally, Helm charts put these templates inside of a partials file,\nusually `_helpers.tpl`. Let's move this function there:\n\n```yaml\n\n\n  labels:\n    generator: helm\n    date: \n\n```\n\nBy convention, `define` functions should have a simple documentation block\n(``) describing what they do.\n\nEven though this definition is in `_helpers.tpl`, it can still be accessed in\n`configmap.yaml`:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: -configmap\n  \ndata:\n  myvalue: \"Hello World\"\n  \n  : \n  \n```\n\nAs mentioned above, **template names are global**. As a result of this, if two\ntemplates are declared with the same name the last occurrence will be the one\nthat is used. Since templates in subcharts are compiled together with top-level\ntemplates, it is best to name your templates with _chart specific names_. 
A\npopular naming convention is to prefix each defined template with the name of\nthe chart: ``.\n\n## Setting the scope of a template\n\nIn the template we defined above, we did not use any objects. We just used\nfunctions. Let's modify our defined template to include the chart name and chart\nversion:\n\n```yaml\n\n\n  labels:\n    generator: helm\n    date: \n    chart: \n    version: \n\n```\n\nIf we render this, we will get an error like this:\n\n```console\n$ helm install --dry-run moldy-jaguar .\/mychart\nError: unable to build kubernetes objects from release manifest: error validating \"\": error validating data: [unknown object type \"nil\" in ConfigMap.metadata.labels.chart, unknown object type \"nil\" in ConfigMap.metadata.labels.version]\n```\n\nTo see what rendered, re-run with `--disable-openapi-validation`:\n`helm install --dry-run --disable-openapi-validation moldy-jaguar .\/mychart`.\nThe result will not be what we expect:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: moldy-jaguar-configmap\n  labels:\n    generator: helm\n    date: 2021-03-06\n    chart:\n    version:\n```\n\nWhat happened to the name and version? They weren't in the scope for our defined\ntemplate. When a named template (created with `define`) is rendered, it will\nreceive the scope passed in by the `template` call. In our example, we included\nthe template like this:\n\n```yaml\n\n```\n\nNo scope was passed in, so within the template we cannot access anything in `.`.\nThis is easy enough to fix, though. We simply pass a scope to the template:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: -configmap\n  \n```\n\nNote that we pass `.` at the end of the `template` call. We could just as easily\npass `.Values` or `.Values.favorite` or whatever scope we want. 
But what we want\nis the top-level scope.\n\nNow when we execute this template with `helm install --dry-run --debug\nplinking-anaco .\/mychart`, we get this:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: plinking-anaco-configmap\n  labels:\n    generator: helm\n    date: 2021-03-06\n    chart: mychart\n    version: 0.1.0\n```\n\nNow `` resolves to `mychart`, and ``\nresolves to `0.1.0`.\n\n## The `include` function\n\nSay we've defined a simple template that looks like this:\n\n```yaml\n\napp_name: \napp_version: \"\"\n\n```\n\nNow say I want to insert this both into the `labels:` section of my template,\nand also the `data:` section:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: -configmap\n  labels:\n    \ndata:\n  myvalue: \"Hello World\"\n  \n  : \n  \n\n```\n\nIf we render this, we will get an error like this:\n\n```console\n$ helm install --dry-run measly-whippet .\/mychart\nError: unable to build kubernetes objects from release manifest: error validating \"\": error validating data: [ValidationError(ConfigMap): unknown field \"app_name\" in io.k8s.api.core.v1.ConfigMap, ValidationError(ConfigMap): unknown field \"app_version\" in io.k8s.api.core.v1.ConfigMap]\n```\n\nTo see what rendered, re-run with `--disable-openapi-validation`:\n`helm install --dry-run --disable-openapi-validation measly-whippet .\/mychart`.\nThe output will not be what we expect:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: measly-whippet-configmap\n  labels:\n    app_name: mychart\napp_version: \"0.1.0\"\ndata:\n  myvalue: \"Hello World\"\n  drink: \"coffee\"\n  food: \"pizza\"\napp_name: mychart\napp_version: \"0.1.0\"\n```\n\nNote that the indentation on `app_version` is wrong in both places. Why? Because\nthe template that is substituted in has the text aligned to the left. 
Because\n`template` is an action, and not a function, there is no way to pass the output\nof a `template` call to other functions; the data is simply inserted inline.\n\nTo work around this case, Helm provides an alternative to `template` that will\nimport the contents of a template into the present pipeline where it can be\npassed along to other functions in the pipeline.\n\nHere's the example above, corrected to use `indent` to indent the `mychart.app`\ntemplate correctly:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: -configmap\n  labels:\n\ndata:\n  myvalue: \"Hello World\"\n  \n  : \n  \n\n```\n\nNow the produced YAML is correctly indented for each section:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: edgy-mole-configmap\n  labels:\n    app_name: mychart\n    app_version: \"0.1.0\"\ndata:\n  myvalue: \"Hello World\"\n  drink: \"coffee\"\n  food: \"pizza\"\n  app_name: mychart\n  app_version: \"0.1.0\"\n```\n\n> It is considered preferable to use `include` over `template` in Helm templates\n> simply so that the output formatting can be handled better for YAML documents.\n\nSometimes we want to import content, but not as templates. That is, we want to\nimport files verbatim. 
We can achieve this by accessing files through the\n`.Files` object described in the next section.","site":"helm"}
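The indentation pitfall that motivates `include` over `template` in the record above is easy to reproduce outside of Helm. A minimal sketch using Python's `textwrap.indent` as a stand-in for the `indent` template function (the partial content and helper usage are illustrative, not Helm's implementation):

```python
import textwrap

# A left-aligned partial, like the body of a define'd template.
partial = 'app_name: mychart\napp_version: "0.1.0"'

# Naive inline substitution (what the `template` action does): only the
# first line inherits the surrounding indentation, so the YAML nesting
# breaks -- app_version ends up flush left.
broken = "labels:\n    " + partial
assert broken.splitlines()[2] == 'app_version: "0.1.0"'

# Capture-then-indent (what `include "mychart.app" . | indent 4` does):
# indent every line of the partial before splicing it into the document.
fixed = "labels:\n" + textwrap.indent(partial, "    ")
print(fixed)
```

Because `include` returns a string instead of writing output inline, its result can flow through a pipeline like any other value, which is exactly what `indent` needs.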
{"questions":"helm In the previous section we looked at the built in objects that Helm templates Instructions on how to use the values flag title Values Files weight 4 offer One of the built in objects is This object provides access to values passed into the chart Its contents come from multiple sources","answers":"---\ntitle: \"Values Files\"\ndescription: \"Instructions on how to use the --values flag.\"\nweight: 4\n---\n\nIn the previous section we looked at the built-in objects that Helm templates\noffer. One of the built-in objects is `Values`. This object provides access to\nvalues passed into the chart. Its contents come from multiple sources:\n\n- The `values.yaml` file in the chart\n- If this is a subchart, the `values.yaml` file of a parent chart\n- A values file is passed into `helm install` or `helm upgrade` with the `-f`\n  flag (`helm install -f myvals.yaml .\/mychart`)\n- Individual parameters are passed with `--set` (such as `helm install --set foo=bar\n  .\/mychart`)\n\nThe list above is in order of specificity: `values.yaml` is the default, which\ncan be overridden by a parent chart's `values.yaml`, which can in turn be\noverridden by a user-supplied values file, which can in turn be overridden by\n`--set` parameters.\n\nValues files are plain YAML files. 
Let's edit `mychart\/values.yaml` and then\nedit our ConfigMap template.\n\nRemoving the defaults in `values.yaml`, we'll set just one parameter:\n\n```yaml\nfavoriteDrink: coffee\n```\n\nNow we can use this inside of a template:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: -configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: \n```\n\nNotice on the last line we access `favoriteDrink` as an attribute of `Values`:\n``.\n\nLet's see how this renders.\n\n```console\n$ helm install geared-marsupi .\/mychart --dry-run --debug\ninstall.go:158: [debug] Original chart version: \"\"\ninstall.go:175: [debug] CHART PATH: \/home\/bagratte\/src\/playground\/mychart\n\nNAME: geared-marsupi\nLAST DEPLOYED: Wed Feb 19 23:21:13 2020\nNAMESPACE: default\nSTATUS: pending-install\nREVISION: 1\nTEST SUITE: None\nUSER-SUPPLIED VALUES:\n{}\n\nCOMPUTED VALUES:\nfavoriteDrink: coffee\n\nHOOKS:\nMANIFEST:\n---\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: geared-marsupi-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: coffee\n```\n\nBecause `favoriteDrink` is set in the default `values.yaml` file to `coffee`,\nthat's the value displayed in the template. 
We can easily override that by\nadding a `--set` flag in our call to `helm install`:\n\n```console\n$ helm install solid-vulture .\/mychart --dry-run --debug --set favoriteDrink=slurm\ninstall.go:158: [debug] Original chart version: \"\"\ninstall.go:175: [debug] CHART PATH: \/home\/bagratte\/src\/playground\/mychart\n\nNAME: solid-vulture\nLAST DEPLOYED: Wed Feb 19 23:25:54 2020\nNAMESPACE: default\nSTATUS: pending-install\nREVISION: 1\nTEST SUITE: None\nUSER-SUPPLIED VALUES:\nfavoriteDrink: slurm\n\nCOMPUTED VALUES:\nfavoriteDrink: slurm\n\nHOOKS:\nMANIFEST:\n---\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: solid-vulture-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: slurm\n```\n\nSince `--set` has a higher precedence than the default `values.yaml` file, our\ntemplate generates `drink: slurm`.\n\nValues files can contain more structured content, too. For example, we could\ncreate a `favorite` section in our `values.yaml` file, and then add several keys\nthere:\n\n```yaml\nfavorite:\n  drink: coffee\n  food: pizza\n```\n\nNow we would have to modify the template slightly:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: -configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: \n  food: \n```\n\nWhile structuring data this way is possible, the recommendation is that you keep\nyour values trees shallow, favoring flatness. When we look at assigning values\nto subcharts, we'll see how values are named using a tree structure.\n\n## Deleting a default key\n\nIf you need to delete a key from the default values, you may override the value\nof the key to be `null`, in which case Helm will remove the key from the\noverridden values merge.\n\nFor example, the stable Drupal chart allows configuring the liveness probe, in\ncase you configure a custom image. 
Here are the default values:\n```yaml\nlivenessProbe:\n  httpGet:\n    path: \/user\/login\n    port: http\n  initialDelaySeconds: 120\n```\n\nIf you try to override the livenessProbe handler to `exec` instead of `httpGet`\nusing `--set livenessProbe.exec.command=[cat,docroot\/CHANGELOG.txt]`, Helm will\ncoalesce the default and overridden keys together, resulting in the following\nYAML:\n```yaml\nlivenessProbe:\n  httpGet:\n    path: \/user\/login\n    port: http\n  exec:\n    command:\n    - cat\n    - docroot\/CHANGELOG.txt\n  initialDelaySeconds: 120\n```\n\nHowever, Kubernetes would then fail because you can not declare more than one\nlivenessProbe handler. To overcome this, you may instruct Helm to delete the\n`livenessProbe.httpGet` by setting it to null:\n```sh\nhelm install stable\/drupal --set image=my-registry\/drupal:0.1.0 --set livenessProbe.exec.command=[cat,docroot\/CHANGELOG.txt] --set livenessProbe.httpGet=null\n```\n\nAt this point, we've seen several built-in objects, and used them to inject\ninformation into a template. 
Now we will take a look at another aspect of the\ntemplate engine: functions and pipelines.","site":"helm","answers_cleaned":"    title   Values Files  description   Instructions on how to use the   values flag   weight  4      In the previous section we looked at the built in objects that Helm templates offer  One of the built in objects is  Values   This object provides access to values passed into the chart  Its contents come from multiple sources     The  values yaml  file in the chart   If this is a subchart  the  values yaml  file of a parent chart   A values file is passed into  helm install  or  helm upgrade  with the   f    flag   helm install  f myvals yaml   mychart     Individual parameters are passed with    set   such as  helm install   set foo bar     mychart    The list above is in order of specificity   values yaml  is the default  which can be overridden by a parent chart s  values yaml   which can in turn be overridden by a user supplied values file  which can in turn be overridden by    set  parameters   Values files are plain YAML files  Let s edit  mychart values yaml  and then edit our ConfigMap template   Removing the defaults in  values yaml   we ll set just one parameter      yaml favoriteDrink  coffee      Now we can use this inside of a template      yaml apiVersion  v1 kind  ConfigMap metadata    name   configmap data    myvalue   Hello World    drink        Notice on the last line we access  favoriteDrink  as an attribute of  Values        Let s see how this renders      console   helm install geared marsupi   mychart   dry run   debug install go 158   debug  Original chart version     install go 175   debug  CHART PATH   home bagratte src playground mychart  NAME  geared marsupi LAST DEPLOYED  Wed Feb 19 23 21 13 2020 NAMESPACE  default STATUS  pending install REVISION  1 TEST SUITE  None USER SUPPLIED VALUES      COMPUTED VALUES  favoriteDrink  coffee  HOOKS  MANIFEST        Source  mychart templates configmap yaml apiVersion  v1 
kind  ConfigMap metadata    name  geared marsupi configmap data    myvalue   Hello World    drink  coffee      Because  favoriteDrink  is set in the default  values yaml  file to  coffee   that s the value displayed in the template  We can easily override that by adding a    set  flag in our call to  helm install       console   helm install solid vulture   mychart   dry run   debug   set favoriteDrink slurm install go 158   debug  Original chart version     install go 175   debug  CHART PATH   home bagratte src playground mychart  NAME  solid vulture LAST DEPLOYED  Wed Feb 19 23 25 54 2020 NAMESPACE  default STATUS  pending install REVISION  1 TEST SUITE  None USER SUPPLIED VALUES  favoriteDrink  slurm  COMPUTED VALUES  favoriteDrink  slurm  HOOKS  MANIFEST        Source  mychart templates configmap yaml apiVersion  v1 kind  ConfigMap metadata    name  solid vulture configmap data    myvalue   Hello World    drink  slurm      Since    set  has a higher precedence than the default  values yaml  file  our template generates  drink  slurm    Values files can contain more structured content  too  For example  we could create a  favorite  section in our  values yaml  file  and then add several keys there      yaml favorite    drink  coffee   food  pizza      Now we would have to modify the template slightly      yaml apiVersion  v1 kind  ConfigMap metadata    name   configmap data    myvalue   Hello World    drink     food        While structuring data this way is possible  the recommendation is that you keep your values trees shallow  favoring flatness  When we look at assigning values to subcharts  we ll see how values are named using a tree structure      Deleting a default key  If you need to delete a key from the default values  you may override the value of the key to be  null   in which case Helm will remove the key from the overridden values merge   For example  the stable Drupal chart allows configuring the liveness probe  in case you configure a custom image  
Here are the default values     yaml livenessProbe    httpGet      path   user login     port  http   initialDelaySeconds  120      If you try to override the livenessProbe handler to  exec  instead of  httpGet  using    set livenessProbe exec command  cat docroot CHANGELOG txt    Helm will coalesce the default and overridden keys together  resulting in the following YAML     yaml livenessProbe    httpGet      path   user login     port  http   exec      command        cat       docroot CHANGELOG txt   initialDelaySeconds  120      However  Kubernetes would then fail because you can not declare more than one livenessProbe handler  To overcome this  you may instruct Helm to delete the  livenessProbe httpGet  by setting it to null     sh helm install stable drupal   set image my registry drupal 0 1 0   set livenessProbe exec command  cat docroot CHANGELOG txt    set livenessProbe httpGet null      At this point  we ve seen several built in objects  and used them to inject information into a template  Now we will take a look at another aspect of the template engine  functions and pipelines "}
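The precedence and null-deletion rules in the record above amount to a recursive merge in which the more specific source wins and a `null` override deletes the default key. A rough Python sketch of that merge (illustrative only, not Helm's actual coalesce code):

```python
def merge_values(defaults: dict, overrides: dict) -> dict:
    """Recursively merge overrides onto defaults; a None override deletes the key."""
    result = dict(defaults)
    for key, value in overrides.items():
        if value is None:
            result.pop(key, None)  # --set foo=null removes the default key
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_values(result[key], value)  # merge nested maps
        else:
            result[key] = value    # the more specific source wins
    return result

# The Drupal livenessProbe example: swap the handler to exec and delete
# the conflicting httpGet block from the defaults.
defaults = {
    "livenessProbe": {
        "httpGet": {"path": "/user/login", "port": "http"},
        "initialDelaySeconds": 120,
    }
}
overrides = {
    "livenessProbe": {
        "exec": {"command": ["cat", "docroot/CHANGELOG.txt"]},
        "httpGet": None,
    }
}
merged = merge_values(defaults, overrides)
print(merged)
```

After the merge, `livenessProbe` keeps `initialDelaySeconds` from the defaults, gains the `exec` handler, and no longer contains `httpGet`, so Kubernetes sees a single probe handler.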
{"questions":"helm weight 6 They are listed here and broken down by the following categories A list of template functions available in Helm Helm includes many template functions you can take advantage of in templates title Template Function List","answers":"---\ntitle: \"Template Function List\"\ndescription: \"A list of template functions available in Helm\"\nweight: 6\n---\n\nHelm includes many template functions you can take advantage of in templates.\nThey are listed here and broken down by the following categories:\n\n* [Cryptographic and Security](#cryptographic-and-security-functions)\n* [Date](#date-functions)\n* [Dictionaries](#dictionaries-and-dict-functions)\n* [Encoding](#encoding-functions)\n* [File Path](#file-path-functions)\n* [Kubernetes and Chart](#kubernetes-and-chart-functions)\n* [Logic and Flow Control](#logic-and-flow-control-functions)\n* [Lists](#lists-and-list-functions)\n* [Math](#math-functions)\n* [Float Math](#float-math-functions)\n* [Network](#network-functions)\n* [Reflection](#reflection-functions)\n* [Regular Expressions](#regular-expressions)\n* [Semantic Versions](#semantic-version-functions)\n* [String](#string-functions)\n* [Type Conversion](#type-conversion-functions)\n* [URL](#url-functions)\n* [UUID](#uuid-functions)\n\n## Logic and Flow Control Functions\n\nHelm includes numerous logic and control flow functions including [and](#and),\n[coalesce](#coalesce), [default](#default), [empty](#empty), [eq](#eq),\n[fail](#fail), [ge](#ge), [gt](#gt), [le](#le), [lt](#lt), [ne](#ne),\n[not](#not), [or](#or), and [required](#required).\n\n### and\n\nReturns the boolean AND of two or more arguments\n(the first empty argument, or the last argument).\n\n```\nand .Arg1 .Arg2\n```\n\n### or\n\nReturns the boolean OR of two or more arguments\n(the first non-empty argument, or the last argument).\n\n```\nor .Arg1 .Arg2\n```\n\n### not\n\nReturns the boolean negation of its argument.\n\n```\nnot .Arg\n```\n\n### eq\n\nReturns the boolean 
equality of the arguments (e.g., Arg1 == Arg2).\n\n```\neq .Arg1 .Arg2\n```\n\n### ne\n\nReturns the boolean inequality of the arguments (e.g., Arg1 != Arg2)\n\n```\nne .Arg1 .Arg2\n```\n\n### lt\n\nReturns a boolean true if the first argument is less than the second. False is\nreturned otherwise (e.g., Arg1 < Arg2).\n\n```\nlt .Arg1 .Arg2\n```\n\n### le\n\nReturns a boolean true if the first argument is less than or equal to the\nsecond. False is returned otherwise (e.g., Arg1 <= Arg2).\n\n```\nle .Arg1 .Arg2\n```\n\n### gt\n\nReturns a boolean true if the first argument is greater than the second. False\nis returned otherwise (e.g., Arg1 > Arg2).\n\n```\ngt .Arg1 .Arg2\n```\n\n### ge\n\nReturns a boolean true if the first argument is greater than or equal to the\nsecond. False is returned otherwise (e.g., Arg1 >= Arg2).\n\n```\nge .Arg1 .Arg2\n```\n\n### default\n\nTo set a simple default value, use `default`:\n\n```\ndefault \"foo\" .Bar\n```\n\nIn the above, if `.Bar` evaluates to a non-empty value, it will be used. But if\nit is empty, `foo` will be returned instead.\n\nThe definition of \"empty\" depends on type:\n\n- Numeric: 0\n- String: \"\"\n- Lists: `[]`\n- Dicts: `{}`\n- Boolean: `false`\n- And always `nil` (aka null)\n\nFor structs, there is no definition of empty, so a struct will never return the\ndefault.\n\n### required\n\nSpecify values that must be set with `required`:\n\n```\nrequired \"A valid foo is required!\" .Bar\n```\n\nIf `.Bar` is empty or not defined (see [default](#default) on how this is \nevaluated), the template will not render and will return the error message \nsupplied instead.\n\n### empty\n\nThe `empty` function returns `true` if the given value is considered empty, and\n`false` otherwise. The empty values are listed in the `default` section.\n\n```\nempty .Foo\n```\n\nNote that in Go template conditionals, emptiness is calculated for you. Thus,\nyou rarely need `if not empty .Foo`. 
Instead, just use `if .Foo`.\n\n### fail\n\nUnconditionally returns an empty `string` and an `error` with the specified\ntext. This is useful in scenarios where other conditionals have determined that\ntemplate rendering should fail.\n\n```\nfail \"Please accept the end user license agreement\"\n```\n\n### coalesce\n\nThe `coalesce` function takes a list of values and returns the first non-empty\none.\n\n```\ncoalesce 0 1 2\n```\n\nThe above returns `1`.\n\nThis function is useful for scanning through multiple variables or values:\n\n```\ncoalesce .name .parent.name \"Matt\"\n```\n\nThe above will first check to see if `.name` is empty. If it is not, it will\nreturn that value. If it _is_ empty, `coalesce` will evaluate `.parent.name` for\nemptiness. Finally, if both `.name` and `.parent.name` are empty, it will return\n`Matt`.\n\n### ternary\n\nThe `ternary` function takes two values, and a test value. If the test value is\ntrue, the first value will be returned. If the test value is false, the second\nvalue will be returned. 
This is similar to the ternary operator in C and other programming languages.\n\n#### true test value\n\n```\nternary \"foo\" \"bar\" true\n```\n\nor\n\n```\ntrue | ternary \"foo\" \"bar\"\n```\n\nThe above returns `\"foo\"`.\n\n#### false test value\n\n```\nternary \"foo\" \"bar\" false\n```\n\nor\n\n```\nfalse | ternary \"foo\" \"bar\"\n```\n\nThe above returns `\"bar\"`.\n\n## String Functions\n\nHelm includes the following string functions: [abbrev](#abbrev),\n[abbrevboth](#abbrevboth), [camelcase](#camelcase), [cat](#cat),\n[contains](#contains), [hasPrefix](#hasprefix-and-hassuffix),\n[hasSuffix](#hasprefix-and-hassuffix), [indent](#indent), [initials](#initials),\n[kebabcase](#kebabcase), [lower](#lower), [nindent](#nindent),\n[nospace](#nospace), [plural](#plural), [print](#print), [printf](#printf),\n[println](#println), [quote](#quote-and-squote),\n[randAlpha](#randalphanum-randalpha-randnumeric-and-randascii),\n[randAlphaNum](#randalphanum-randalpha-randnumeric-and-randascii),\n[randAscii](#randalphanum-randalpha-randnumeric-and-randascii),\n[randNumeric](#randalphanum-randalpha-randnumeric-and-randascii),\n[repeat](#repeat), [replace](#replace), [shuffle](#shuffle),\n[snakecase](#snakecase), [squote](#quote-and-squote), [substr](#substr),\n[swapcase](#swapcase), [title](#title), [trim](#trim), [trimAll](#trimall),\n[trimPrefix](#trimprefix), [trimSuffix](#trimsuffix), [trunc](#trunc),\n[untitle](#untitle), [upper](#upper), [wrap](#wrap), and [wrapWith](#wrapwith).\n\n### print\n\nReturns a string from the combination of its parts.\n\n```\nprint \"Matt has \" .Dogs \" dogs\"\n```\n\nTypes that are not strings are converted to strings where possible.\n\nNote, when two arguments next to each other are not strings a space is added\nbetween them.\n\n### println\n\nWorks the same way as [print](#print) but adds a new line at the end.\n\n### printf\n\nReturns a string based on a formatting string and the arguments to pass to it in\norder.\n\n```\nprintf \"%s 
has %d dogs.\" .Name .NumberDogs\n```\n\nThe placeholder to use depends on the type for the argument being passed in.\nThis includes:\n\nGeneral purpose:\n\n* `%v` the value in a default format\n  * when printing dicts, the plus flag (%+v) adds field names\n* `%%` a literal percent sign; consumes no value\n\nBoolean:\n\n* `%t` the word true or false\n\nInteger:\n\n* `%b` base 2\n* `%c` the character represented by the corresponding Unicode code point\n* `%d` base 10\n* `%o` base 8\n* `%O` base 8 with 0o prefix\n* `%q` a single-quoted character literal safely escaped\n* `%x` base 16, with lower-case letters for a-f\n* `%X` base 16, with upper-case letters for A-F\n* `%U` Unicode format: U+1234; same as \"U+%04X\"\n\nFloating-point and complex constituents:\n\n* `%b` decimalless scientific notation with exponent a power of two, e.g.\n  -123456p-78\n* `%e` scientific notation, e.g. -1.234456e+78\n* `%E` scientific notation, e.g. -1.234456E+78\n* `%f` decimal point but no exponent, e.g. 123.456\n* `%F` synonym for %f\n* `%g` %e for large exponents, %f otherwise\n* `%G` %E for large exponents, %F otherwise\n* `%x` hexadecimal notation (with decimal power of two exponent), e.g.\n  -0x1.23abcp+20\n* `%X` upper-case hexadecimal notation, e.g. 
-0X1.23ABCP+20\n\nString and slice of bytes (treated equivalently with these verbs):\n\n* `%s` the uninterpreted bytes of the string or slice\n* `%q` a double-quoted string safely escaped\n* `%x` base 16, lower-case, two characters per byte\n* `%X` base 16, upper-case, two characters per byte\n\nSlice:\n\n* `%p` address of 0th element in base 16 notation, with leading 0x\n\n### trim\n\nThe `trim` function removes white space from both sides of a string:\n\n```\ntrim \"   hello    \"\n```\n\nThe above produces `hello`\n\n### trimAll\n\nRemoves the given characters from the front and back of a string:\n\n```\ntrimAll \"$\" \"$5.00\"\n```\n\nThe above returns `5.00` (as a string).\n\n### trimPrefix\n\nTrim just the prefix from a string:\n\n```\ntrimPrefix \"-\" \"-hello\"\n```\n\nThe above returns `hello`\n\n### trimSuffix\n\nTrim just the suffix from a string:\n\n```\ntrimSuffix \"-\" \"hello-\"\n```\n\nThe above returns `hello`\n\n### lower\n\nConvert the entire string to lowercase:\n\n```\nlower \"HELLO\"\n```\n\nThe above returns `hello`\n\n### upper\n\nConvert the entire string to uppercase:\n\n```\nupper \"hello\"\n```\n\nThe above returns `HELLO`\n\n### title\n\nConvert to title case:\n\n```\ntitle \"hello world\"\n```\n\nThe above returns `Hello World`\n\n### untitle\n\nRemove title casing. `untitle \"Hello World\"` produces `hello world`.\n\n### repeat\n\nRepeat a string multiple times:\n\n```\nrepeat 3 \"hello\"\n```\n\nThe above returns `hellohellohello`\n\n### substr\n\nGet a substring from a string. 
It takes three parameters:\n\n- start (int)\n- end (int)\n- string (string)\n\n```\nsubstr 0 5 \"hello world\"\n```\n\nThe above returns `hello`\n\n### nospace\n\nRemove all whitespace from a string.\n\n```\nnospace \"hello w o r l d\"\n```\n\nThe above returns `helloworld`\n\n### trunc\n\nTruncate a string\n\n```\ntrunc 5 \"hello world\"\n```\n\nThe above produces `hello`.\n\n```\ntrunc -5 \"hello world\"\n```\n\nThe above produces `world`.\n\n### abbrev\n\nTruncate a string with ellipses (`...`)\n\nParameters:\n\n- max length\n- the string\n\n```\nabbrev 5 \"hello world\"\n```\n\nThe above returns `he...`, since it counts the width of the ellipses against the\nmaximum length.\n\n### abbrevboth\n\nAbbreviate both sides:\n\n```\nabbrevboth 5 10 \"1234 5678 9123\"\n```\n\nthe above produces `...5678...`\n\nIt takes:\n\n- left offset\n- max length\n- the string\n\n### initials\n\nGiven multiple words, take the first letter of each word and combine.\n\n```\ninitials \"First Try\"\n```\n\nThe above returns `FT`\n\n### randAlphaNum, randAlpha, randNumeric, and randAscii\n\nThese four functions generate cryptographically secure (uses ```crypto\/rand```)\nrandom strings, but with different base character sets:\n\n- `randAlphaNum` uses `0-9a-zA-Z`\n- `randAlpha` uses `a-zA-Z`\n- `randNumeric` uses `0-9`\n- `randAscii` uses all printable ASCII characters\n\nEach of them takes one parameter: the integer length of the string.\n\n```\nrandNumeric 3\n```\n\nThe above will produce a random string with three digits.\n\n### wrap\n\nWrap text at a given column count:\n\n```\nwrap 80 $someText\n```\n\nThe above will wrap the string in `$someText` at 80 columns.\n\n### wrapWith\n\n`wrapWith` works as `wrap`, but lets you specify the string to wrap with.\n(`wrap` uses `\\n`)\n\n```\nwrapWith 5 \"\\t\" \"Hello World\"\n```\n\nThe above produces `Hello World` (where the whitespace is an ASCII tab\ncharacter)\n\n### contains\n\nTest to see if one string is contained inside of 
another:\n\n```\ncontains \"cat\" \"catch\"\n```\n\nThe above returns `true` because `catch` contains `cat`.\n\n### hasPrefix and hasSuffix\n\nThe `hasPrefix` and `hasSuffix` functions test whether a string has a given\nprefix or suffix:\n\n```\nhasPrefix \"cat\" \"catch\"\n```\n\nThe above returns `true` because `catch` has the prefix `cat`.\n\n### quote and squote\n\nThese functions wrap a string in double quotes (`quote`) or single quotes\n(`squote`).\n\n### cat\n\nThe `cat` function concatenates multiple strings together into one, separating\nthem with spaces:\n\n```\ncat \"hello\" \"beautiful\" \"world\"\n```\n\nThe above produces `hello beautiful world`\n\n### indent\n\nThe `indent` function indents every line in a given string to the specified\nindent width. This is useful when aligning multi-line strings:\n\n```\nindent 4 $lots_of_text\n```\n\nThe above will indent every line of text by 4 space characters.\n\n### nindent\n\nThe `nindent` function is the same as the indent function, but prepends a new\nline to the beginning of the string.\n\n```\nnindent 4 $lots_of_text\n```\n\nThe above will indent every line of text by 4 space characters and add a new\nline to the beginning.\n\n### replace\n\nPerform simple string replacement.\n\nIt takes three arguments:\n\n- string to replace\n- string to replace with\n- source string\n\n```\n\"I Am Henry VIII\" | replace \" \" \"-\"\n```\n\nThe above will produce `I-Am-Henry-VIII`\n\n### plural\n\nPluralize a string.\n\n```\nlen $fish | plural \"one anchovy\" \"many anchovies\"\n```\n\nIn the above, if the length of the string is 1, the first argument will be\nprinted (`one anchovy`). Otherwise, the second argument will be printed (`many\nanchovies`).\n\nThe arguments are:\n\n- singular string\n- plural string\n- length integer\n\nNOTE: Helm does not currently support languages with more complex pluralization\nrules. 
And `0` is considered a plural because the English language treats it as\nsuch (`zero anchovies`).\n\n### snakecase\n\nConvert string from camelCase to snake_case.\n\n```\nsnakecase \"FirstName\"\n```\n\nThe above will produce `first_name`.\n\n### camelcase\n\nConvert string from snake_case to CamelCase.\n\n```\ncamelcase \"http_server\"\n```\n\nThe above will produce `HttpServer`.\n\n### kebabcase\n\nConvert string from camelCase to kebab-case.\n\n```\nkebabcase \"FirstName\"\n```\n\nThe above will produce `first-name`.\n\n### swapcase\n\nSwap the case of a string using a word-based algorithm.\n\nConversion algorithm:\n\n- Upper case character converts to Lower case\n- Title case character converts to Lower case\n- Lower case character after Whitespace or at start converts to Title case\n- Other Lower case character converts to Upper case\n- Whitespace is defined by unicode.IsSpace(char)\n\n```\nswapcase \"This Is A.Test\"\n```\n\nThe above will produce `tHIS iS a.tEST`.\n\n### shuffle\n\nShuffle a string.\n\n```\nshuffle \"hello\"\n```\n\nThe above will randomize the letters in `hello`, perhaps producing `oelhl`.\n\n## Type Conversion Functions\n\nThe following type conversion functions are provided by Helm:\n\n- `atoi`: Convert a string to an integer.\n- `float64`: Convert to a `float64`.\n- `int`: Convert to an `int` at the system's width.\n- `int64`: Convert to an `int64`.\n- `toDecimal`: Convert a unix octal to an `int64`.\n- `toString`: Convert to a string.\n- `toStrings`: Convert a list, slice, or array to a list of strings.\n- `toJson` (`mustToJson`): Convert list, slice, array, dict, or object to JSON.\n- `toPrettyJson` (`mustToPrettyJson`): Convert list, slice, array, dict, or\n  object to indented JSON.\n- `toRawJson` (`mustToRawJson`): Convert list, slice, array, dict, or object to\n  JSON with HTML characters unescaped.\n- `fromYaml`: Convert a YAML string to an object.\n- `fromJson`: Convert a JSON string to an object.\n- `fromJsonArray`: Convert a 
JSON array to a list.\n- `toYaml`: Convert list, slice, array, dict, or object to indented yaml, can be used to copy chunks of yaml from any source. This function is equivalent to GoLang yaml.Marshal function, see docs here: https:\/\/pkg.go.dev\/gopkg.in\/yaml.v2#Marshal\n- `toToml`: Convert list, slice, array, dict, or object to toml, can be used to copy chunks of toml from any source.\n- `fromYamlArray`: Convert a YAML array to a list.\n\nOnly `atoi` requires that the input be a specific type. The others will attempt\nto convert from any type to the destination type. For example, `int64` can\nconvert floats to ints, and it can also convert strings to ints.\n\n### toStrings\n\nGiven a list-like collection, produce a slice of strings.\n\n```\nlist 1 2 3 | toStrings\n```\n\nThe above converts `1` to `\"1\"`, `2` to `\"2\"`, and so on, and then returns them\nas a list.\n\n### toDecimal\n\nGiven a unix octal permission, produce a decimal.\n\n```\n\"0777\" | toDecimal\n```\n\nThe above converts `0777` to `511` and returns the value as an int64.\n\n### toJson, mustToJson\n\nThe `toJson` function encodes an item into a JSON string. If the item cannot be\nconverted to JSON the function will return an empty string. 
`mustToJson` will\nreturn an error in case the item cannot be encoded in JSON.\n\n```\ntoJson .Item\n```\n\nThe above returns JSON string representation of `.Item`.\n\n### toPrettyJson, mustToPrettyJson\n\nThe `toPrettyJson` function encodes an item into a pretty (indented) JSON\nstring.\n\n```\ntoPrettyJson .Item\n```\n\nThe above returns indented JSON string representation of `.Item`.\n\n### toRawJson, mustToRawJson\n\nThe `toRawJson` function encodes an item into JSON string with HTML characters\nunescaped.\n\n```\ntoRawJson .Item\n```\n\nThe above returns unescaped JSON string representation of `.Item`.\n\n### fromYaml\n\nThe `fromYaml` function takes a YAML string and returns an object that can be used in templates.\n\n`File at: yamls\/person.yaml`\n```yaml\nname: Bob\nage: 25\nhobbies:\n  - hiking\n  - fishing\n  - cooking\n```\n```yaml\n{{- $person := .Files.Get \"yamls\/person.yaml\" | fromYaml }}\ngreeting: |\n  Hi, my name is {{ $person.name }} and I am {{ $person.age }} years old.\n  My hobbies are {{ range $person.hobbies }}{{ . }} {{ end }}.\n```\n\n### fromJson\n\nThe `fromJson` function takes a JSON string and returns an object that can be used in templates.\n\n`File at: jsons\/person.json`\n```json\n{\n  \"name\": \"Bob\",\n  \"age\": 25,\n  \"hobbies\": [\n    \"hiking\",\n    \"fishing\",\n    \"cooking\"\n  ]\n}\n```\n```yaml\n{{- $person := .Files.Get \"jsons\/person.json\" | fromJson }}\ngreeting: |\n  Hi, my name is {{ $person.name }} and I am {{ $person.age }} years old.\n  My hobbies are {{ range $person.hobbies }}{{ . }} {{ end }}.\n```\n\n### fromJsonArray\n\nThe `fromJsonArray` function takes a JSON Array and returns a list that can be used in templates.\n\n`File at: jsons\/people.json`\n```json\n[\n { \"name\": \"Bob\",\"age\": 25 },\n { \"name\": \"Ram\",\"age\": 16 }\n]\n```\n```yaml\n{{- $people := .Files.Get \"jsons\/people.json\" | fromJsonArray }}\n{{- range $person := $people }}\ngreeting: |\n  Hi, my name is {{ $person.name }} and I am {{ $person.age }} years old.\n{{- end }}\n```\n\n### fromYamlArray\n\nThe `fromYamlArray` function takes a YAML Array and returns a list that can be used in templates.\n\n`File at: yamls\/people.yml`\n```yaml\n- name: Bob\n  age: 25\n- name: Ram\n  age: 16\n```\n```yaml\n{{- $people := .Files.Get \"yamls\/people.yml\" | fromYamlArray }}\n{{- range $person := $people }}\ngreeting: |\n  Hi, my name is {{ $person.name }} and I am {{ $person.age }} years old.\n{{- end }}\n```\n\n## Regular Expressions\n\nHelm includes the following regular expression 
functions: [regexFind\n(mustRegexFind)](#regexfind-mustregexfind), [regexFindAll\n(mustRegexFindAll)](#regexfindall-mustregexfindall), [regexMatch\n(mustRegexMatch)](#regexmatch-mustregexmatch), [regexReplaceAll\n(mustRegexReplaceAll)](#regexreplaceall-mustregexreplaceall),\n[regexReplaceAllLiteral\n(mustRegexReplaceAllLiteral)](#regexreplaceallliteral-mustregexreplaceallliteral),\nand [regexSplit (mustRegexSplit)](#regexsplit-mustregexsplit).\n\n### regexMatch, mustRegexMatch\n\nReturns true if the input string contains any match of the regular expression.\n\n```\nregexMatch \"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\\\.[A-Za-z]{2,}$\" \"test@acme.com\"\n```\n\nThe above produces `true`.\n\n`regexMatch` panics if there is a problem and `mustRegexMatch` returns an error\nto the template engine if there is a problem.\n\n### regexFindAll, mustRegexFindAll\n\nReturns a slice of all matches of the regular expression in the input string.\nThe last parameter `n` determines the number of substrings to return, where `-1`\nmeans return all matches.\n\n```\nregexFindAll \"[2,4,6,8]\" \"123456789\" -1\n```\n\nThe above produces `[2 4 6 8]`.\n\n`regexFindAll` panics if there is a problem and `mustRegexFindAll` returns an\nerror to the template engine if there is a problem.\n\n### regexFind, mustRegexFind\n\nReturns the first (left-most) match of the regular expression in the input\nstring.\n\n```\nregexFind \"[a-zA-Z][1-9]\" \"abcd1234\"\n```\n\nThe above produces `d1`.\n\n`regexFind` panics if there is a problem and `mustRegexFind` returns an error to\nthe template engine if there is a problem.\n\n### regexReplaceAll, mustRegexReplaceAll\n\nReturns a copy of the input string, replacing matches of the Regexp with the\nreplacement string replacement. 
Inside string replacement, $ signs are\ninterpreted as in Expand, so for instance $1 represents the text of the first\nsubmatch\n\n```\nregexReplaceAll \"a(x*)b\" \"-ab-axxb-\" \"${1}W\"\n```\n\nThe above produces `-W-xxW-`\n\n`regexReplaceAll` panics if there is a problem and `mustRegexReplaceAll` returns\nan error to the template engine if there is a problem.\n\n### regexReplaceAllLiteral, mustRegexReplaceAllLiteral\n\nReturns a copy of the input string, replacing matches of the Regexp with the\nreplacement string replacement. The replacement string is substituted directly,\nwithout using Expand\n\n```\nregexReplaceAllLiteral \"a(x*)b\" \"-ab-axxb-\" \"${1}\"\n```\n\nThe above produces `-${1}-${1}-`\n\n`regexReplaceAllLiteral` panics if there is a problem and\n`mustRegexReplaceAllLiteral` returns an error to the template engine if there is\na problem.\n\n### regexSplit, mustRegexSplit\n\nSlices the input string into substrings separated by the expression and returns\na slice of the substrings between those expression matches. The last parameter\n`n` determines the number of substrings to return, where `-1` means return all\nmatches\n\n```\nregexSplit \"z+\" \"pizza\" -1\n```\n\nThe above produces `[pi a]`\n\n`regexSplit` panics if there is a problem and `mustRegexSplit` returns an error\nto the template engine if there is a problem.\n\n## Cryptographic and Security Functions\n\nHelm provides some advanced cryptographic functions. 
They include\n[adler32sum](#adler32sum), [buildCustomCert](#buildcustomcert),\n[decryptAES](#decryptaes), [derivePassword](#derivepassword),\n[encryptAES](#encryptaes), [genCA](#genca), [genPrivateKey](#genprivatekey),\n[genSelfSignedCert](#genselfsignedcert), [genSignedCert](#gensignedcert),\n[htpasswd](#htpasswd), [sha1sum](#sha1sum), and [sha256sum](#sha256sum).\n\n### sha1sum\n\nThe `sha1sum` function receives a string, and computes its SHA1 digest.\n\n```\nsha1sum \"Hello world!\"\n```\n\n### sha256sum\n\nThe `sha256sum` function receives a string, and computes its SHA256 digest.\n\n```\nsha256sum \"Hello world!\"\n```\n\nThe above will compute the SHA 256 sum in an \"ASCII armored\" format that is safe\nto print.\n\n### adler32sum\n\nThe `adler32sum` function receives a string, and computes its Adler-32 checksum.\n\n```\nadler32sum \"Hello world!\"\n```\n\n### htpasswd\n\nThe `htpasswd` function takes a `username` and `password` and generates a\n`bcrypt` hash of the password. The result can be used for basic authentication\non an [Apache HTTP\nServer](https:\/\/httpd.apache.org\/docs\/2.4\/misc\/password_encryptions.html#basic).\n\n```\nhtpasswd \"myUser\" \"myPassword\"\n```\n\nNote that it is insecure to store the password directly in the template.\n\n### derivePassword\n\nThe `derivePassword` function can be used to derive a specific password based on\nsome shared \"master password\" constraints. 
The algorithm for this is [well\nspecified](https:\/\/web.archive.org\/web\/20211019121301\/https:\/\/masterpassword.app\/masterpassword-algorithm.pdf).\n\n```\nderivePassword 1 \"long\" \"password\" \"user\" \"example.com\"\n```\n\nNote that it is considered insecure to store the parts directly in the template.\n\n### genPrivateKey\n\nThe `genPrivateKey` function generates a new private key encoded into a PEM\nblock.\n\nIt takes one of the values for its first param:\n\n- `ecdsa`: Generate an elliptic curve DSA key (P256)\n- `dsa`: Generate a DSA key (L2048N256)\n- `rsa`: Generate an RSA 4096 key\n\n### buildCustomCert\n\nThe `buildCustomCert` function allows customizing the certificate.\n\nIt takes the following string parameters:\n\n- A base64 encoded PEM format certificate\n- A base64 encoded PEM format private key\n\nIt returns a certificate object with the following attributes:\n\n- `Cert`: A PEM-encoded certificate\n- `Key`: A PEM-encoded private key\n\nExample:\n\n```\n$ca := buildCustomCert \"base64-encoded-ca-crt\" \"base64-encoded-ca-key\"\n```\n\nNote that the returned object can be passed to the `genSignedCert` function to\nsign a certificate using this CA.\n\n### genCA\n\nThe `genCA` function generates a new, self-signed x509 certificate authority.\n\nIt takes the following parameters:\n\n- Subject's common name (cn)\n- Cert validity duration in days\n\nIt returns an object with the following attributes:\n\n- `Cert`: A PEM-encoded certificate\n- `Key`: A PEM-encoded private key\n\nExample:\n\n```\n$ca := genCA \"foo-ca\" 365\n```\n\nNote that the returned object can be passed to the `genSignedCert` function to\nsign a certificate using this CA.\n\n### genSelfSignedCert\n\nThe `genSelfSignedCert` function generates a new, self-signed x509 certificate.\n\nIt takes the following parameters:\n\n- Subject's common name (cn)\n- Optional list of IPs; may be nil\n- Optional list of alternate DNS names; may be nil\n- Cert validity duration in days\n\nIt 
returns an object with the following attributes:\n\n- `Cert`: A PEM-encoded certificate\n- `Key`: A PEM-encoded private key\n\nExample:\n\n```\n$cert := genSelfSignedCert \"foo.com\" (list \"10.0.0.1\" \"10.0.0.2\") (list \"bar.com\" \"bat.com\") 365\n```\n\n### genSignedCert\n\nThe `genSignedCert` function generates a new, x509 certificate signed by the\nspecified CA.\n\nIt takes the following parameters:\n\n- Subject's common name (cn)\n- Optional list of IPs; may be nil\n- Optional list of alternate DNS names; may be nil\n- Cert validity duration in days\n- CA (see `genCA`)\n\nExample:\n\n```\n$ca := genCA \"foo-ca\" 365\n$cert := genSignedCert \"foo.com\" (list \"10.0.0.1\" \"10.0.0.2\") (list \"bar.com\" \"bat.com\") 365 $ca\n```\n\n### encryptAES\n\nThe `encryptAES` function encrypts text with AES-256 CBC and returns a base64\nencoded string.\n\n```\nencryptAES \"secretkey\" \"plaintext\"\n```\n\n### decryptAES\n\nThe `decryptAES` function receives a base64 string encoded by the AES-256 CBC\nalgorithm and returns the decoded text.\n\n```\n\"30tEfhuJSVRhpG97XCuWgz2okj7L8vQ1s6V9zVUPeDQ=\" | decryptAES \"secretkey\"\n```\n\n## Date Functions\n\nHelm includes the following date functions you can use in templates:\n[ago](#ago), [date](#date), [dateInZone](#dateinzone), [dateModify\n(mustDateModify)](#datemodify-mustdatemodify), [duration](#duration),\n[durationRound](#durationround), [htmlDate](#htmldate),\n[htmlDateInZone](#htmldateinzone), [now](#now), [toDate\n(mustToDate)](#todate-musttodate), and [unixEpoch](#unixepoch).\n\n### now\n\nThe current date\/time. Use this in conjunction with other date functions.\n\n### ago\n\nThe `ago` function returns the duration since `time.Now` in seconds resolution.\n\n```\nago .CreatedAt\n```\n\nreturns in `time.Duration` String() format\n\n```\n2h34m7s\n```\n\n### date\n\nThe `date` function formats a date.\n\nFormat the date to YEAR-MONTH-DAY:\n\n```\nnow | date \"2006-01-02\"\n```\n\nDate formatting in Go is a [little bit\ndifferent](https:\/\/pauladamsmith.com\/blog\/2011\/05\/go_time.html).\n\nIn short, take this as the base date:\n\n```\nMon Jan 2 15:04:05 MST 2006\n```\n\nWrite it in the format you want. Above, `2006-01-02` is the same date, but in\nthe format we want.\n\n### dateInZone\n\nSame as `date`, but with a timezone.\n\n```\ndateInZone \"2006-01-02\" (now) \"UTC\"\n```\n\n### duration\n\nFormats a given amount of seconds as a `time.Duration`.\n\nThis returns 1m35s:\n\n```\nduration \"95\"\n```\n\n### durationRound\n\nRounds a given duration to the most significant unit. Strings and\n`time.Duration` get parsed as a duration, while a `time.Time` is calculated as\nthe duration since.\n\nThis returns 2h:\n\n```\ndurationRound \"2h10m5s\"\n```\n\nThis returns 3mo:\n\n```\ndurationRound \"2400h10m5s\"\n```\n\n### unixEpoch\n\nReturns the seconds since the unix epoch for a `time.Time`.\n\n```\nnow | unixEpoch\n```\n\n### dateModify, mustDateModify\n\nThe `dateModify` function takes a modification and a date and returns the timestamp.\n\nSubtract an hour and thirty minutes from the current time:\n\n```\nnow | dateModify \"-1.5h\"\n```\n\nIf the modification format is wrong `dateModify` will return the date\nunmodified. `mustDateModify` will return an error otherwise.\n\n### htmlDate\n\nThe `htmlDate` function formats a date for inserting into an HTML date picker\ninput field.\n\n```\nnow | htmlDate\n```\n\n### htmlDateInZone\n\nSame as htmlDate, but with a timezone.\n\n```\nhtmlDateInZone (now) \"UTC\"\n```\n\n### toDate, mustToDate\n\n`toDate` converts a string to a date. The first argument is the date layout and\nthe second the date string. If the string can't be converted it returns the zero\nvalue. 
`mustToDate` will return an error in case the string cannot be converted.\n\nThis is useful when you want to convert a string date to another format (using\npipe). The example below converts \"2017-12-31\" to \"31\/12\/2017\".\n\n```\ntoDate \"2006-01-02\" \"2017-12-31\" | date \"02\/01\/2006\"\n```\n\n## Dictionaries and Dict Functions\n\nHelm provides a key\/value storage type called a `dict` (short for \"dictionary\",\nas in Python). A `dict` is an _unordered_ type.\n\nThe key to a dictionary **must be a string**. However, the value can be any\ntype, even another `dict` or `list`.\n\nUnlike `list`s, `dict`s are not immutable. The `set` and `unset` functions will\nmodify the contents of a dictionary.\n\nHelm provides the following functions to support working with dicts: [deepCopy\n(mustDeepCopy)](#deepcopy-mustdeepcopy), [dict](#dict), [dig](#dig), [get](#get),\n[hasKey](#haskey), [keys](#keys), [merge (mustMerge)](#merge-mustmerge),\n[mergeOverwrite (mustMergeOverwrite)](#mergeoverwrite-mustmergeoverwrite),\n[omit](#omit), [pick](#pick), [pluck](#pluck), [set](#set), [unset](#unset), and\n[values](#values).\n\n### dict\n\nCreating dictionaries is done by calling the `dict` function and passing it a\nlist of pairs.\n\nThe following creates a dictionary with three items:\n\n```\n$myDict := dict \"name1\" \"value1\" \"name2\" \"value2\" \"name3\" \"value 3\"\n```\n\n### get\n\nGiven a map and a key, get the value from the map.\n\n```\nget $myDict \"name1\"\n```\n\nThe above returns `\"value1\"`\n\nNote that if the key is not found, this operation will simply return `\"\"`. 
No\nerror will be generated.\n\n### set\n\nUse `set` to add a new key\/value pair to a dictionary.\n\n```\n$_ := set $myDict \"name4\" \"value4\"\n```\n\nNote that `set` _returns the dictionary_ (a requirement of Go template\nfunctions), so you may need to trap the value as done above with the `$_`\nassignment.\n\n### unset\n\nGiven a map and a key, delete the key from the map.\n\n```\n$_ := unset $myDict \"name4\"\n```\n\nAs with `set`, this returns the dictionary.\n\nNote that if the key is not found, this operation will simply return. No error\nwill be generated.\n\n### hasKey\n\nThe `hasKey` function returns `true` if the given dict contains the given key.\n\n```\nhasKey $myDict \"name1\"\n```\n\nIf the key is not found, this returns `false`.\n\n### pluck\n\nThe `pluck` function makes it possible to give one key and multiple maps, and\nget a list of all of the matches:\n\n```\npluck \"name1\" $myDict $myOtherDict\n```\n\nThe above will return a `list` containing every found value (`[value1\notherValue1]`).\n\nIf the given key is _not found_ in a map, that map will not have an item in the\nlist (and the length of the returned list will be less than the number of dicts\nin the call to `pluck`).\n\nIf the key is _found_ but the value is an empty value, that value will be\ninserted.\n\nA common idiom in Helm templates is to use `pluck... | first` to get the first\nmatching key out of a collection of dictionaries.\n\n### dig\n\nThe `dig` function traverses a nested set of dicts, selecting keys from a list\nof values. It returns a default value if any of the keys are not found at the\nassociated dict.\n\n```\ndig \"user\" \"role\" \"humanName\" \"guest\" $dict\n```\n\nGiven a dict structured like\n```\n{\n  user: {\n    role: {\n      humanName: \"curator\"\n    }\n  }\n}\n```\n\nthe above would return `\"curator\"`. 
If the dict lacked even a `user` field,\nthe result would be `\"guest\"`.\n\nDig can be very useful in cases where you'd like to avoid guard clauses,\nespecially since Go's template package's `and` doesn't shortcut. For instance\n`and a.maybeNil a.maybeNil.iNeedThis` will always evaluate\n`a.maybeNil.iNeedThis`, and panic if `a` lacks a `maybeNil` field.\n\n`dig` accepts its dict argument last in order to support pipelining. For instance:\n```\nmerge a b c | dig \"one\" \"two\" \"three\" \"<missing>\"\n```\n\n### merge, mustMerge\n\nMerge two or more dictionaries into one, giving precedence to the dest\ndictionary:\n\nGiven:\n\n```\ndst:\n  default: default\n  overwrite: me\n  key: true\n\nsrc:\n  overwrite: overwritten\n  key: false\n```\n\nwill result in:\n\n```\nnewdict:\n  default: default\n  overwrite: me\n  key: true\n```\n\n```\n$newdict := merge $dest $source1 $source2\n```\n\nThis is a deep merge operation but not a deep copy operation. Nested objects\nthat are merged are the same instance on both dicts. If you want a deep copy\nalong with the merge, then use the `deepCopy` function along with merging. For\nexample,\n\n```\ndeepCopy $source | merge $dest\n```\n\n`mustMerge` will return an error in case of unsuccessful merge.\n\n### mergeOverwrite, mustMergeOverwrite\n\nMerge two or more dictionaries into one, giving precedence from **right to\nleft**, effectively overwriting values in the dest dictionary:\n\nGiven:\n\n```\ndst:\n  default: default\n  overwrite: me\n  key: true\n\nsrc:\n  overwrite: overwritten\n  key: false\n```\n\nwill result in:\n\n```\nnewdict:\n  default: default\n  overwrite: overwritten\n  key: false\n```\n\n```\n$newdict := mergeOverwrite $dest $source1 $source2\n```\n\nThis is a deep merge operation but not a deep copy operation. Nested objects\nthat are merged are the same instance on both dicts. If you want a deep copy\nalong with the merge then use the `deepCopy` function along with merging. 
For\nexample,\n\n```\ndeepCopy $source | mergeOverwrite $dest\n```\n\n`mustMergeOverwrite` will return an error in case of unsuccessful merge.\n\n### keys\n\nThe `keys` function will return a `list` of all of the keys in one or more\n`dict` types. Since a dictionary is _unordered_, the keys will not be in a\npredictable order. They can be sorted with `sortAlpha`.\n\n```\nkeys $myDict | sortAlpha\n```\n\nWhen supplying multiple dictionaries, the keys will be concatenated. Use the\n`uniq` function along with `sortAlpha` to get a unique, sorted list of keys.\n\n```\nkeys $myDict $myOtherDict | uniq | sortAlpha\n```\n\n### pick\n\nThe `pick` function selects just the given keys out of a dictionary, creating a\nnew `dict`.\n\n```\n$new := pick $myDict \"name1\" \"name2\"\n```\n\nThe above returns `{name1: value1, name2: value2}`\n\n### omit\n\nThe `omit` function is similar to `pick`, except it returns a new `dict` with\nall the keys that _do not_ match the given keys.\n\n```\n$new := omit $myDict \"name1\" \"name3\"\n```\n\nThe above returns `{name2: value2}`\n\n### values\n\nThe `values` function is similar to `keys`, except it returns a new `list` with\nall the values of the source `dict` (only one dictionary is supported).\n\n```\n$vals := values $myDict\n```\n\nThe above returns `list[\"value1\", \"value2\", \"value 3\"]`. Note that the `values`\nfunction gives no guarantees about the result ordering; if you care about this,\nthen use `sortAlpha`.\n\n### deepCopy, mustDeepCopy\n\nThe `deepCopy` and `mustDeepCopy` functions take a value and make a deep copy\nof the value. This includes dicts and other structures. `deepCopy` panics when\nthere is a problem, while `mustDeepCopy` returns an error to the template system\nwhen there is an error.\n\n```\ndict \"a\" 1 \"b\" 2 | deepCopy\n```\n\n### A Note on Dict Internals\n\nA `dict` is implemented in Go as a `map[string]interface{}`. 
Go developers can\npass `map[string]interface{}` values into the context to make them available to\ntemplates as `dict`s.\n\n## Encoding Functions\n\nHelm has the following encoding and decoding functions:\n\n- `b64enc`\/`b64dec`: Encode or decode with Base64\n- `b32enc`\/`b32dec`: Encode or decode with Base32\n\n## Lists and List Functions\n\nHelm provides a simple `list` type that can contain arbitrary sequential lists\nof data. This is similar to arrays or slices, but lists are designed to be used\nas immutable data types.\n\nCreate a list of integers:\n\n```\n$myList := list 1 2 3 4 5\n```\n\nThe above creates a list of `[1 2 3 4 5]`.\n\nHelm provides the following list functions: [append\n(mustAppend)](#append-mustappend), [compact\n(mustCompact)](#compact-mustcompact), [concat](#concat), [first\n(mustFirst)](#first-mustfirst), [has (mustHas)](#has-musthas), [initial\n(mustInitial)](#initial-mustinitial), [last (mustLast)](#last-mustlast),\n[prepend (mustPrepend)](#prepend-mustprepend), [rest\n(mustRest)](#rest-mustrest), [reverse (mustReverse)](#reverse-mustreverse),\n[seq](#seq), [index](#index), [slice (mustSlice)](#slice-mustslice), [uniq\n(mustUniq)](#uniq-mustuniq), [until](#until), [untilStep](#untilstep), and\n[without (mustWithout)](#without-mustwithout).\n\n### first, mustFirst\n\nTo get the head item on a list, use `first`.\n\n`first $myList` returns `1`\n\n`first` panics if there is a problem, while `mustFirst` returns an error to the\ntemplate engine if there is a problem.\n\n### rest, mustRest\n\nTo get the tail of the list (everything but the first item), use `rest`.\n\n`rest $myList` returns `[2 3 4 5]`\n\n`rest` panics if there is a problem, while `mustRest` returns an error to the\ntemplate engine if there is a problem.\n\n### last, mustLast\n\nTo get the last item on a list, use `last`:\n\n`last $myList` returns `5`. 
This is roughly analogous to reversing a list and\nthen calling `first`.\n\n### initial, mustInitial\n\nThis complements `last` by returning all _but_ the last element. `initial\n$myList` returns `[1 2 3 4]`.\n\n`initial` panics if there is a problem, while `mustInitial` returns an error to\nthe template engine if there is a problem.\n\n### append, mustAppend\n\nAppend a new item to an existing list, creating a new list.\n\n```\n$new := append $myList 6\n```\n\nThe above would set `$new` to `[1 2 3 4 5 6]`. `$myList` would remain unaltered.\n\n`append` panics if there is a problem, while `mustAppend` returns an error to the\ntemplate engine if there is a problem.\n\n### prepend, mustPrepend\n\nPush an element onto the front of a list, creating a new list.\n\n```\nprepend $myList 0\n```\n\nThe above would produce `[0 1 2 3 4 5]`. `$myList` would remain unaltered.\n\n`prepend` panics if there is a problem, while `mustPrepend` returns an error to\nthe template engine if there is a problem.\n\n### concat\n\nConcatenate an arbitrary number of lists into one.\n\n```\nconcat $myList ( list 6 7 ) ( list 8 )\n```\n\nThe above would produce `[1 2 3 4 5 6 7 8]`.
`$myList` would remain unaltered.\n\n### reverse, mustReverse\n\nProduce a new list with the reversed elements of the given list.\n\n```\nreverse $myList\n```\n\nThe above would generate the list `[5 4 3 2 1]`.\n\n`reverse` panics if there is a problem, while `mustReverse` returns an error to\nthe template engine if there is a problem.\n\n### uniq, mustUniq\n\nGenerate a list with all of the duplicates removed.\n\n```\nlist 1 1 1 2 | uniq\n```\n\nThe above would produce `[1 2]`\n\n`uniq` panics if there is a problem, while `mustUniq` returns an error to the\ntemplate engine if there is a problem.\n\n### without, mustWithout\n\nThe `without` function filters items out of a list.\n\n```\nwithout $myList 3\n```\n\nThe above would produce `[1 2 4 5]`\n\n`without` can take more than one filter:\n\n```\nwithout $myList 1 3 5\n```\n\nThat would produce `[2 4]`\n\n`without` panics if there is a problem, while `mustWithout` returns an error to\nthe template engine if there is a problem.\n\n### has, mustHas\n\nTest to see if a list has a particular element.\n\n```\nhas 4 $myList\n```\n\nThe above would return `true`, while `has \"hello\" $myList` would return false.\n\n`has` panics if there is a problem, while `mustHas` returns an error to the\ntemplate engine if there is a problem.\n\n### compact, mustCompact\n\nAccepts a list and removes entries with empty values.\n\n```\n$list := list 1 \"a\" \"foo\" \"\"\n$copy := compact $list\n```\n\n`compact` will return a new list with the empty (i.e., \"\") item removed.\n\n`compact` panics if there is a problem and `mustCompact` returns an error to the\ntemplate engine if there is a problem.\n\n### index\n\nTo get the nth element of a list, use `index list [n]`. To index into \nmulti-dimensional lists, use `index list [n] [m] ...`\n- `index $myList 0` returns `1`. 
It is the same as `myList[0]`\n- `index $myList 0 1` would be the same as `myList[0][1]`\n\n### slice, mustSlice\n\nTo get partial elements of a list, use `slice list [n] [m]`. It is the equivalent of\n`list[n:m]`.\n\n- `slice $myList` returns `[1 2 3 4 5]`. It is the same as `myList[:]`.\n- `slice $myList 3` returns `[4 5]`. It is the same as `myList[3:]`.\n- `slice $myList 1 3` returns `[2 3]`. It is the same as `myList[1:3]`.\n- `slice $myList 0 3` returns `[1 2 3]`. It is the same as `myList[:3]`.\n\n`slice` panics if there is a problem, while `mustSlice` returns an error to the\ntemplate engine if there is a problem.\n\n### until\n\nThe `until` function builds a range of integers.\n\n```\nuntil 5\n```\n\nThe above generates the list `[0, 1, 2, 3, 4]`.\n\nThis is useful for looping with `range $i, $e := until 5`.\n\n### untilStep\n\nLike `until`, `untilStep` generates a list of counting integers. But it allows\nyou to define a start, stop, and step:\n\n```\nuntilStep 3 6 2\n```\n\nThe above will produce `[3 5]` by starting with 3, and adding 2 until it is\nequal to or greater than 6.
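As an illustrative sketch of the looping idiom mentioned above (the rendered labels are hypothetical, not from the Helm docs), `untilStep` pairs naturally with `range`:

```
{{- range $i := untilStep 0 6 2 }}
port-{{ $i }}
{{- end }}
```

Since `untilStep 0 6 2` produces `[0 2 4]`, the loop body renders three times.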
This is similar to Python's `range` function.\n\n### seq\n\nWorks like the bash `seq` command.\n\n* 1 parameter  (end) - will generate all counting integers between 1 and `end`\n  inclusive.\n* 2 parameters (start, end) - will generate all counting integers between\n  `start` and `end` inclusive incrementing or decrementing by 1.\n* 3 parameters (start, step, end) - will generate all counting integers between\n  `start` and `end` inclusive incrementing or decrementing by `step`.\n\n```\nseq 5       => 1 2 3 4 5\nseq -3      => 1 0 -1 -2 -3\nseq 0 2     => 0 1 2\nseq 2 -2    => 2 1 0 -1 -2\nseq 0 2 10  => 0 2 4 6 8 10\nseq 0 -2 -5 => 0 -2 -4\n```\n\n## Math Functions\n\nAll math functions operate on `int64` values unless specified otherwise.\n\nThe following math functions are available: [add](#add), [add1](#add1),\n[ceil](#ceil), [div](#div), [floor](#floor), [len](#len), [max](#max),\n[min](#min), [mod](#mod), [mul](#mul), [round](#round), and [sub](#sub).\n\n### add\n\nSum numbers with `add`. Accepts two or more inputs.\n\n```\nadd 1 2 3\n```\n\n### add1\n\nTo increment by 1, use `add1`.\n\n### sub\n\nTo subtract, use `sub`.\n\n### div\n\nPerform integer division with `div`.\n\n### mod\n\nModulo with `mod`.\n\n### mul\n\nMultiply with `mul`. 
Accepts two or more inputs.\n\n```\nmul 1 2 3\n```\n\n### max\n\nReturn the largest of a series of integers.\n\nThis will return `3`:\n\n```\nmax 1 2 3\n```\n\n### min\n\nReturn the smallest of a series of integers.\n\n`min 1 2 3` will return `1`.\n\n### len\n\nReturns the length of the argument as an integer.\n\n```\nlen .Arg\n```\n\n## Float Math Functions\n\nAll math functions operate on `float64` values.\n\n### addf\n\nSum numbers with `addf`.\n\nThis will return `5.5`:\n\n```\naddf 1.5 2 2\n```\n\n### add1f\n\nTo increment by 1, use `add1f`.\n\n### subf\n\nTo subtract, use `subf`.\n\nThis is equivalent to `7.5 - 2 - 3` and will return `2.5`:\n\n```\nsubf 7.5 2 3\n```\n\n### divf\n\nPerform floating-point division with `divf`.\n\nThis is equivalent to `10 \/ 2 \/ 4` and will return `1.25`:\n\n```\ndivf 10 2 4\n```\n\n### mulf\n\nMultiply with `mulf`.\n\nThis will return `6`:\n\n```\nmulf 1.5 2 2\n```\n\n### maxf\n\nReturn the largest of a series of floats.\n\nThis will return `3`:\n\n```\nmaxf 1 2.5 3\n```\n\n### minf\n\nReturn the smallest of a series of floats.\n\nThis will return `1.5`:\n\n```\nminf 1.5 2 3\n```\n\n### floor\n\nReturns the greatest float value less than or equal to the input value.\n\n`floor 123.9999` will return `123.0`.\n\n### ceil\n\nReturns the least float value greater than or equal to the input value.\n\n`ceil 123.001` will return `124.0`.\n\n### round\n\nReturns a float value with the remainder rounded to the given number of digits\nafter the decimal point.\n\n`round 123.555555 3` will return `123.556`.\n\n## Network Functions\n\nHelm has a single network function, `getHostByName`.\n\nThe `getHostByName` function receives a domain name and returns the IP address.\n\n`getHostByName \"www.google.com\"` would return the corresponding IP address of `www.google.com`.\n\n## File Path Functions\n\nWhile Helm template functions do not grant access to the filesystem, they do\nprovide functions for working with strings that follow file path conventions.\nThose include
[base](#base), [clean](#clean), [dir](#dir), [ext](#ext), and\n[isAbs](#isabs).\n\n### base\n\nReturn the last element of a path.\n\n```\nbase \"foo\/bar\/baz\"\n```\n\nThe above returns `baz`.\n\n### dir\n\nReturn the directory, stripping the last part of the path. So `dir\n\"foo\/bar\/baz\"` returns `foo\/bar`.\n\n### clean\n\nClean up a path.\n\n```\nclean \"foo\/bar\/..\/baz\"\n```\n\nThe above resolves the `..` and returns `foo\/baz`.\n\n### ext\n\nReturn the file extension.\n\n```\next \"foo.bar\"\n```\n\nThe above returns `.bar`.\n\n### isAbs\n\nTo check whether a file path is absolute, use `isAbs`.\n\n## Reflection Functions\n\nHelm provides rudimentary reflection tools. These help advanced template\ndevelopers understand the underlying Go type information for a particular value.\nHelm is written in Go and is strongly typed. The type system applies within\ntemplates.\n\nGo has several primitive _kinds_, like `string`, `slice`, `int64`, and `bool`.\n\nGo has an open _type_ system that allows developers to create their own types.\n\nHelm provides a set of functions for each via [kind functions](#kind-functions)\nand [type functions](#type-functions). A [deepEqual](#deepequal) function is\nalso provided to compare two values.\n\n### Kind Functions\n\nThere are two Kind functions. `kindOf` returns the kind of an object:\n\n```\nkindOf \"hello\"\n```\n\nThe above would return `string`.
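As a further hedged example (the value is chosen for illustration), `kindOf` reports composite kinds as well:

```
kindOf (list 1 2 3)
```

The above would return `slice`, since Helm lists are backed by Go slices.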
For simple tests (like in `if` blocks), the\n`kindIs` function will let you verify that a value is a particular kind:\n\n```\nkindIs \"int\" 123\n```\n\nThe above will return `true`.\n\n### Type Functions\n\nTypes are slightly harder to work with, so there are three different functions:\n\n- `typeOf` returns the underlying type of a value: `typeOf $foo`\n- `typeIs` is like `kindIs`, but for types: `typeIs \"*io.Buffer\" $myVal`\n- `typeIsLike` works as `typeIs`, except that it also dereferences pointers\n\n**Note:** None of these can test whether or not something implements a given\ninterface, since doing so would require compiling the interface ahead of\ntime.\n\n### deepEqual\n\n`deepEqual` returns true if two values are [\"deeply\nequal\"](https:\/\/golang.org\/pkg\/reflect\/#DeepEqual).\n\nUnlike the built-in `eq`, it works for non-primitive types as well.\n\n```\ndeepEqual (list 1 2 3) (list 1 2 3)\n```\n\nThe above will return `true`.\n\n## Semantic Version Functions\n\nSome version schemes are easily parseable and comparable. Helm provides\nfunctions for working with [SemVer 2](http:\/\/semver.org) versions. These include\n[semver](#semver) and [semverCompare](#semvercompare).
Below you will also find\ndetails on using ranges for comparisons.\n\n### semver\n\nThe `semver` function parses a string into a Semantic Version:\n\n```\n$version := semver \"1.2.3-alpha.1+123\"\n```\n\n_If the parser fails, it will cause template execution to halt with an error._\n\nAt this point, `$version` is a pointer to a `Version` object with the following\nproperties:\n\n- `$version.Major`: The major number (`1` above)\n- `$version.Minor`: The minor number (`2` above)\n- `$version.Patch`: The patch number (`3` above)\n- `$version.Prerelease`: The prerelease (`alpha.1` above)\n- `$version.Metadata`: The build metadata (`123` above)\n- `$version.Original`: The original version as a string\n\nAdditionally, you can compare a `Version` to another `Version` using the\n`Compare` function:\n\n```\nsemver \"1.4.3\" | (semver \"1.2.3\").Compare\n```\n\nThe above will return `-1`.\n\nThe return values are:\n\n- `-1` if the given semver is greater than the semver whose `Compare` method was\n  called\n- `1` if the version whose `Compare` method was called is greater\n- `0` if they are the same version\n\n(Note that in SemVer, the `Metadata` field is not compared during version\ncomparison operations.)\n\n### semverCompare\n\nA more robust comparison function is provided as `semverCompare`. This version\nsupports version ranges:\n\n- `semverCompare \"1.2.3\" \"1.2.3\"` checks for an exact match\n- `semverCompare \"~1.2.0\" \"1.2.3\"` checks that the major and minor versions\n  match, and that the patch number of the second version is _greater than or\n  equal to_ the first parameter.\n\nThe SemVer functions use the [Masterminds semver\nlibrary](https:\/\/github.com\/Masterminds\/semver), from the creators of Sprig.\n\n### Basic Comparisons\n\nThere are two elements to the comparisons. First, a comparison string is a list\nof space- or comma-separated AND comparisons. These are then separated by || (OR)\ncomparisons.
For example, `\">= 1.2 < 3.0.0 || >= 4.2.3\"` is looking for a\ncomparison that's greater than or equal to 1.2 and less than 3.0.0 or is greater\nthan or equal to 4.2.3.\n\nThe basic comparisons are:\n\n- `=`: equal (aliased to no operator)\n- `!=`: not equal\n- `>`: greater than\n- `<`: less than\n- `>=`: greater than or equal to\n- `<=`: less than or equal to\n\n### Working With Prerelease Versions\n\nPre-releases, for those not familiar with them, are used for software releases\nprior to stable or generally available releases. Examples of prereleases include\ndevelopment, alpha, beta, and release candidate releases. A prerelease may be a\nversion such as `1.2.3-beta.1`, while the stable release would be `1.2.3`. In the\norder of precedence, prereleases come before their associated releases. In this\nexample `1.2.3-beta.1 < 1.2.3`.\n\nAccording to the Semantic Version specification prereleases may not be API\ncompliant with their release counterpart. It says,\n\n> A pre-release version indicates that the version is unstable and might not\n> satisfy the intended compatibility requirements as denoted by its associated\n> normal version.\n\nSemVer comparisons using constraints without a prerelease comparator will skip\nprerelease versions. For example, `>=1.2.3` will skip prereleases when looking\nat a list of releases, while `>=1.2.3-0` will evaluate and find prereleases.\n\nThe reason for the `0` as a pre-release version in the example comparison is\nbecause pre-releases can only contain ASCII alphanumerics and hyphens (along\nwith `.` separators), per the spec. Sorting happens in ASCII sort order, again\nper the spec. The lowest character is a `0` in ASCII sort order (see an [ASCII\nTable](http:\/\/www.asciitable.com\/))\n\nUnderstanding ASCII sort ordering is important because A-Z comes before a-z.\nThat means `>=1.2.3-BETA` will return `1.2.3-alpha`. What you might expect from\ncase sensitivity doesn't apply here. 
This is due to ASCII sort ordering, which is\nwhat the spec specifies.\n\n### Hyphen Range Comparisons\n\nThere are multiple methods to handle ranges; the first is hyphen ranges.\nThese look like:\n\n- `1.2 - 1.4.5` which is equivalent to `>= 1.2 <= 1.4.5`\n- `2.3.4 - 4.5` which is equivalent to `>= 2.3.4 <= 4.5`\n\n### Wildcards In Comparisons\n\nThe `x`, `X`, and `*` characters can be used as wildcard characters. This works\nfor all comparison operators. When used on the `=` operator it falls back to the\npatch level comparison (see tilde below). For example,\n\n- `1.2.x` is equivalent to `>= 1.2.0, < 1.3.0`\n- `>= 1.2.x` is equivalent to `>= 1.2.0`\n- `<= 2.x` is equivalent to `< 3`\n- `*` is equivalent to `>= 0.0.0`\n\n### Tilde Range Comparisons (Patch)\n\nThe tilde (`~`) comparison operator is for patch level ranges when a minor\nversion is specified and major level changes when the minor number is missing.\nFor example,\n\n- `~1.2.3` is equivalent to `>= 1.2.3, < 1.3.0`\n- `~1` is equivalent to `>= 1, < 2`\n- `~2.3` is equivalent to `>= 2.3, < 2.4`\n- `~1.2.x` is equivalent to `>= 1.2.0, < 1.3.0`\n- `~1.x` is equivalent to `>= 1, < 2`\n\n### Caret Range Comparisons (Major)\n\nThe caret (`^`) comparison operator is for major level changes once a stable\n(1.0.0) release has occurred. Prior to a 1.0.0 release the minor version acts\nas the API stability level. This is useful when comparing API versions, as a\nmajor change is API breaking.
For example,\n\n- `^1.2.3` is equivalent to `>= 1.2.3, < 2.0.0`\n- `^1.2.x` is equivalent to `>= 1.2.0, < 2.0.0`\n- `^2.3` is equivalent to `>= 2.3, < 3`\n- `^2.x` is equivalent to `>= 2.0.0, < 3`\n- `^0.2.3` is equivalent to `>=0.2.3 <0.3.0`\n- `^0.2` is equivalent to `>=0.2.0 <0.3.0`\n- `^0.0.3` is equivalent to `>=0.0.3 <0.0.4`\n- `^0.0` is equivalent to `>=0.0.0 <0.1.0`\n- `^0` is equivalent to `>=0.0.0 <1.0.0`\n\n## URL Functions\n\nHelm includes the [urlParse](#urlparse), [urlJoin](#urljoin), and\n[urlquery](#urlquery) functions enabling you to work with URL parts.\n\n### urlParse\n\nParses a string into a URL and produces a dict with the URL parts:\n\n```\nurlParse \"http:\/\/admin:secret@server.com:8080\/api?list=false#anchor\"\n```\n\nThe above returns a dict containing the URL parts:\n\n```yaml\nscheme:   'http'\nhost:     'server.com:8080'\npath:     '\/api'\nquery:    'list=false'\nopaque:   nil\nfragment: 'anchor'\nuserinfo: 'admin:secret'\n```\n\nThis is implemented using the URL package from the Go standard library.
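The parts of the parsed result can be read with ordinary map access; a small sketch, assuming the lowercase keys shown above (the URL is illustrative):

```
{{ $u := urlParse "http://example.com:8080/api" }}
{{ $u.host }}
```

If `urlParse` behaves as described, this would render `example.com:8080`.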
For more\ninfo, check https:\/\/golang.org\/pkg\/net\/url\/#URL\n\n### urlJoin\n\nJoins a map (such as one produced by `urlParse`) to produce a URL string:\n\n```\nurlJoin (dict \"fragment\" \"fragment\" \"host\" \"host:80\" \"path\" \"\/path\" \"query\" \"query\" \"scheme\" \"http\")\n```\n\nThe above returns the following string:\n```\nhttp:\/\/host:80\/path?query#fragment\n```\n\n### urlquery\n\nReturns the escaped version of the value passed in as an argument so that it is\nsuitable for embedding in the query portion of a URL.\n\n```\n$var := urlquery \"string for query\"\n```\n\n## UUID Functions\n\nHelm can generate UUID v4 universally unique IDs.\n\n```\nuuidv4\n```\n\nThe above returns a new UUID of the v4 (randomly generated) type.\n\n## Kubernetes and Chart Functions\n\nHelm includes functions for working with Kubernetes including\n[.Capabilities.APIVersions.Has](#capabilitiesapiversionshas),\n[Files](#file-functions), and [lookup](#lookup).\n\n### lookup\n\n`lookup` is used to look up a resource in a running cluster. When used with the\n`helm template` command it always returns an empty response.\n\nYou can find more detail in the [documentation on the lookup\nfunction](functions_and_pipelines.md\/#using-the-lookup-function).\n\n### .Capabilities.APIVersions.Has\n\nReturns whether an API version or resource is available in a cluster.\n\n```\n.Capabilities.APIVersions.Has \"apps\/v1\"\n.Capabilities.APIVersions.Has \"apps\/v1\/Deployment\"\n```\n\nMore information is available in the [built-in object\ndocumentation](builtin_objects.md).\n\n### File Functions\n\nThere are several functions that enable you to access non-special files within a\nchart, for example application configuration files. These are\ndocumented in [Accessing Files Inside Templates](accessing_files.md).\n\n_Note, the documentation for many of these functions comes from\n[Sprig](https:\/\/github.com\/Masterminds\/sprig).
Sprig is a template function\nlibrary available to Go applications._","site":"helm","answers_cleaned":"    title   Template Function List  description   A list of template functions available in Helm  weight  6      Helm includes many template functions you can take advantage of in templates  They are listed here and broken down by the following categories      Cryptographic and Security   cryptographic and security functions     Date   date functions     Dictionaries   dictionaries and dict functions     Encoding   encoding functions     File Path   file path functions     Kubernetes and Chart   kubernetes and chart functions     Logic and Flow Control   logic and flow control functions     Lists   lists and list functions     Math   math functions     Float Math   float math functions     Network   network functions     Reflection   reflection functions     Regular Expressions   regular expressions     Semantic Versions   semantic version functions     String   string functions     Type Conversion   type conversion functions     URL   url functions     UUID   uuid functions      Logic and Flow Control Functions  Helm includes numerous logic and control flow functions including  and   and    coalesce   coalesce    default   default    empty   empty    eq   eq    fail   fail    ge   ge    gt   gt    le   le    lt   lt    ne   ne    not   not    or   or   and  required   required        and  Returns the boolean AND of two or more arguments  the first empty argument  or the last argument        and  Arg1  Arg2          or  Returns the boolean OR of two or more arguments  the first non empty argument  or the last argument        or  Arg1  Arg2          not  Returns the boolean negation of its argument       not  Arg          eq  Returns the boolean equality of the arguments  e g   Arg1    Arg2        eq  Arg1  Arg2          ne  Returns the boolean inequality of the arguments  e g   Arg1    Arg2       ne  Arg1  Arg2          lt  Returns a boolean true if the first 
argument is less than the second  False is returned otherwise  e g   Arg1   Arg2        lt  Arg1  Arg2          le  Returns a boolean true if the first argument is less than or equal to the second  False is returned otherwise  e g   Arg1    Arg2        le  Arg1  Arg2          gt  Returns a boolean true if the first argument is greater than the second  False is returned otherwise  e g   Arg1   Arg2        gt  Arg1  Arg2          ge  Returns a boolean true if the first argument is greater than or equal to the second  False is returned otherwise  e g   Arg1    Arg2        ge  Arg1  Arg2          default  To set a simple default value  use  default        default  foo   Bar      In the above  if   Bar  evaluates to a non empty value  it will be used  But if it is empty   foo  will be returned instead   The definition of  empty  depends on type     Numeric  0   String       Lists         Dicts         Boolean   false    And always  nil   aka null   For structs  there is no definition of empty  so a struct will never return the default       required  Specify values that must be set with  required        required  A valid foo is required    Bar      If   Bar  is empty or not defined  see  default   default  on how this is  evaluated   the template will not render and will return the error message  supplied instead       empty  The  empty  function returns  true  if the given value is considered empty  and  false  otherwise  The empty values are listed in the  default  section       empty  Foo      Note that in Go template conditionals  emptiness is calculated for you  Thus  you rarely need  if not empty  Foo   Instead  just use  if  Foo        fail  Unconditionally returns an empty  string  and an  error  with the specified text  This is useful in scenarios where other conditionals have determined that template rendering should fail       fail  Please accept the end user license agreement           coalesce  The  coalesce  function takes a list of values and returns the 
first non empty one       coalesce 0 1 2      The above returns  1    This function is useful for scanning through multiple variables or values       coalesce  name  parent name  Matt       The above will first check to see if   name  is empty  If it is not  it will return that value  If it  is  empty   coalesce  will evaluate   parent name  for emptiness  Finally  if both   name  and   parent name  are empty  it will return  Matt        ternary  The  ternary  function takes two values  and a test value  If the test value is true  the first value will be returned  If the test value is empty  the second value will be returned  This is similar to the ternary operator in C and other programming languages        true test value      ternary  foo   bar  true      or      true   ternary  foo   bar       The above returns   foo          false test value      ternary  foo   bar  false      or      false   ternary  foo   bar       The above returns   bar        String Functions  Helm includes the following string functions   abbrev   abbrev    abbrevboth   abbrevboth    camelcase   camelcase    cat   cat    contains   contains    hasPrefix   hasprefix and hassuffix    hasSuffix   hasprefix and hassuffix    indent   indent    initials   initials    kebabcase   kebabcase    lower   lower    nindent   nindent    nospace   nospace    plural   plural    print   print    printf   printf    println   println    quote   quote and squote    randAlpha   randalphanum randalpha randnumeric and randascii    randAlphaNum   randalphanum randalpha randnumeric and randascii    randAscii   randalphanum randalpha randnumeric and randascii    randNumeric   randalphanum randalpha randnumeric and randascii    repeat   repeat    replace   replace    shuffle   shuffle    snakecase   snakecase    squote   quote and squote    substr   substr    swapcase   swapcase    title   title    trim   trim    trimAll   trimall    trimPrefix   trimprefix    trimSuffix   trimsuffix    trunc   trunc    untitle   
untitle    upper   upper    wrap   wrap   and  wrapWith   wrapwith        print  Returns a string from the combination of its parts       print  Matt has    Dogs   dogs       Types that are not strings are converted to strings where possible   Note  when two arguments next to each other are not strings a space is added between them       println  Works the same way as  print   print  but adds a new line at the end       printf  Returns a string based on a formatting string and the arguments to pass to it in order       printf   s has  d dogs    Name  NumberDogs      The placeholder to use depends on the type for the argument being passed in  This includes   General purpose       v  the value in a default format     when printing dicts  the plus flag    v  adds field names        a literal percent sign  consumes no value  Boolean       t  the word true or false  Integer       b  base 2     c  the character represented by the corresponding Unicode code point     d  base 10     o  base 8     O  base 8 with 0o prefix     q  a single quoted character literal safely escaped     x  base 16  with lower case letters for a f     X  base 16  with upper case letters for A F     U  Unicode format  U 1234  same as  U  04X    Floating point and complex constituents       b  decimal less scientific notation with exponent a power of two  e g     123456p 78     e  scientific notation  e g   1 234456e 78     E  scientific notation  e g   1 234456E 78     f  decimal point but no exponent  e g  123 456     F  synonym for  f     g   e for large exponents   f otherwise      G   E for large exponents   F otherwise     x  hexadecimal notation  with decimal power of two exponent   e g     0x1 23abcp 20     X  upper case hexadecimal notation  e g   0X1 23ABCP 20  String and slice of bytes  treated equivalently with these verbs        s  the uninterpreted bytes of the string or slice     q  a double quoted string safely escaped     x  base 16  lower case  two characters per byte     X  base 
16  upper case  two characters per byte  Slice       p  address of 0th element in base 16 notation  with leading 0x      trim  The  trim  function removes white space from both sides of a string       trim     hello           The above produces  hello       trimAll  Removes the given characters from the front and back of a string       trimAll       5 00       The above returns  5 00   as a string        trimPrefix  Trim just the prefix from a string       trimPrefix       hello       The above returns  hello       trimSuffix  Trim just the suffix from a string       trimSuffix      hello        The above returns  hello       lower  Convert the entire string to lowercase       lower  HELLO       The above returns  hello       upper  Convert the entire string to uppercase       upper  hello       The above returns  HELLO       title  Convert to title case       title  hello world       The above returns  Hello World       untitle  Remove title casing   untitle  Hello World   produces  hello world        repeat  Repeat a string multiple times       repeat 3  hello       The above returns  hellohellohello       substr  Get a substring from a string  It takes three parameters     start  int    end  int    string  string       substr 0 5  hello world       The above returns  hello       nospace  Remove all whitespace from a string       nospace  hello w o r l d       The above returns  helloworld       trunc  Truncate a string      trunc 5  hello world       The above produces  hello        trunc  5  hello world       The above produces  world        abbrev  Truncate a string with ellipses          Parameters     max length   the string      abbrev 5  hello world       The above returns  he      since it counts the width of the ellipses against the maximum length       abbrevboth  Abbreviate both sides       abbrevboth 5 10  1234 5678 9123       the above produces     5678      It takes     left offset   max length   the string      initials  Given multiple words  take 
the first letter of each word and combine:

```
initials "First Try"
```

The above returns `FT`.

### randAlphaNum, randAlpha, randNumeric, and randAscii

These four functions generate cryptographically secure (uses `crypto/rand`) random strings, but with different base character sets:

- `randAlphaNum` uses `0-9a-zA-Z`
- `randAlpha` uses `a-zA-Z`
- `randNumeric` uses `0-9`
- `randAscii` uses all printable ASCII characters

Each of them takes one parameter: the integer length of the string.

```
randNumeric 3
```

The above will produce a random string with three digits.

### wrap

Wrap text at a given column count:

```
wrap 80 $someText
```

The above will wrap the string in `$someText` at 80 columns.

### wrapWith

`wrapWith` works as `wrap`, but lets you specify the string to wrap with (`wrap` uses `\n`):

```
wrapWith 5 "\t" "Hello World"
```

The above produces `Hello World`, where the whitespace is an ASCII tab character.

### contains

Test to see if one string is contained inside of another:

```
contains "cat" "catch"
```

The above returns `true` because `catch` contains `cat`.

### hasPrefix and hasSuffix

The `hasPrefix` and `hasSuffix` functions test whether a string has a given prefix or suffix:

```
hasPrefix "cat" "catch"
```

The above returns `true` because `catch` has the prefix `cat`.

### quote and squote

These functions wrap a string in double quotes (`quote`) or single quotes (`squote`).

### cat

The `cat` function concatenates multiple strings together into one, separating them with spaces:

```
cat "hello" "beautiful" "world"
```

The above produces `hello beautiful world`.

### indent

The `indent` function indents every line in a given string to the specified indent width. This is useful when aligning multi-line strings:

```
indent 4 $lots_of_text
```

The above will indent every line of text by 4 space characters.

### nindent

The `nindent` function is the same as the `indent` function, but prepends a new line to the beginning of
the string:

```
nindent 4 $lots_of_text
```

The above will indent every line of text by 4 space characters and add a new line to the beginning.

### replace

Perform simple string replacement. It takes three arguments:

- string to replace
- string to replace with
- source string

```
"I Am Henry VIII" | replace " " "-"
```

The above will produce `I-Am-Henry-VIII`.

### plural

Pluralize a string:

```
len $fish | plural "one anchovy" "many anchovies"
```

In the above, if the length of the string is 1, the first argument will be printed (`one anchovy`). Otherwise, the second argument will be printed (`many anchovies`).

The arguments are:

- singular string
- plural string
- length integer

NOTE: Helm does not currently support languages with more complex pluralization rules. And `0` is considered a plural because the English language treats it as such (`zero anchovies`).

### snakecase

Convert string from camelCase to snake_case:

```
snakecase "FirstName"
```

The above will produce `first_name`.

### camelcase

Convert string from snake_case to CamelCase:

```
camelcase "http_server"
```

The above will produce `HttpServer`.

### kebabcase

Convert string from camelCase to kebab-case:

```
kebabcase "FirstName"
```

The above will produce `first-name`.

### swapcase

Swap the case of a string using a word-based algorithm.

Conversion algorithm:

- Upper case character converts to Lower case
- Title case character converts to Lower case
- Lower case character after Whitespace or at start converts to Title case
- Other Lower case character converts to Upper case
- Whitespace is defined by unicode.IsSpace(char)

```
swapcase "This Is A Test"
```

The above will produce `tHIS iS a tEST`.

### shuffle

Shuffle a string:

```
shuffle "hello"
```

The above will randomize the letters in `hello`, perhaps producing `oelhl`.

## Type Conversion Functions

The following type conversion functions are provided by Helm:

- `atoi`: Convert a string to an integer.
- `float64`: Convert to a `float64`.
- `int`: Convert to an `int` at the system's width.
- `int64`: Convert to an `int64`.
- `toDecimal`: Convert a unix octal to an `int64`.
- `toString`: Convert to a string.
- `toStrings`: Convert a list, slice, or array to a list of strings.
- `toJson` (`mustToJson`): Convert list, slice, array, dict, or object to JSON.
- `toPrettyJson` (`mustToPrettyJson`): Convert list, slice, array, dict, or object to indented JSON.
- `toRawJson` (`mustToRawJson`): Convert list, slice, array, dict, or object to JSON with HTML characters unescaped.
- `fromYaml`: Convert a YAML string to an object.
- `fromJson`: Convert a JSON string to an object.
- `fromJsonArray`: Convert a JSON array to a list.
- `toYaml`: Convert list, slice, array, dict, or object to indented yaml; can be used to copy chunks of yaml from any source. This function is equivalent to GoLang's `yaml.Marshal` function, see [docs here](https://pkg.go.dev/gopkg.in/yaml.v2#Marshal).
- `toToml`: Convert list, slice, array, dict, or object to toml; can be used to copy chunks of toml from any source.
- `fromYamlArray`: Convert a YAML array to a list.

Only `atoi` requires that the input be a specific type. The others will attempt to convert from any type to the destination type. For example, `int64` can convert floats to ints, and it can also convert strings to ints.

### toStrings

Given a list-like collection, produce a slice of strings:

```
list 1 2 3 | toStrings
```

The above converts `1` to `"1"`, `2` to `"2"`, and so on, and then returns them as a list.

### toDecimal

Given a unix octal permission, produce a decimal:

```
"0777" | toDecimal
```

The above converts `0777` to `511` and returns the value as an int64.

### toJson, mustToJson

The `toJson` function encodes an item into a JSON string. If the item cannot be converted to JSON the function will return an empty string. `mustToJson` will return an error in case the item cannot be encoded in JSON:

```
toJson .Item
```

The
above returns a JSON string representation of `.Item`.

### toPrettyJson, mustToPrettyJson

The `toPrettyJson` function encodes an item into a pretty (indented) JSON string:

```
toPrettyJson .Item
```

The above returns an indented JSON string representation of `.Item`.

### toRawJson, mustToRawJson

The `toRawJson` function encodes an item into a JSON string with HTML characters unescaped:

```
toRawJson .Item
```

The above returns an unescaped JSON string representation of `.Item`.

### fromYaml

The `fromYaml` function takes a YAML string and returns an object that can be used in templates.

File at `yamls/person.yaml`:

```yaml
name: Bob
age: 25
hobbies:
  - hiking
  - fishing
  - cooking
```

```yaml
{{- $person := .Files.Get "yamls/person.yaml" | fromYaml }}
greeting: |
  Hi, my name is {{ $person.name }} and I am {{ $person.age }} years old.
  My hobbies are {{ range $person.hobbies }}{{ . }} {{ end }}.
```

### fromJson

The `fromJson` function takes a JSON string and returns an object that can be used in templates.

File at `jsons/person.json`:

```json
{
  "name": "Bob",
  "age": 25,
  "hobbies": [
    "hiking",
    "fishing",
    "cooking"
  ]
}
```

```yaml
{{- $person := .Files.Get "jsons/person.json" | fromJson }}
greeting: |
  Hi, my name is {{ $person.name }} and I am {{ $person.age }} years old.
  My hobbies are {{ range $person.hobbies }}{{ . }} {{ end }}.
```

### fromJsonArray

The `fromJsonArray` function takes a JSON array and returns a list that can be used in templates.

File at `jsons/people.json`:

```json
[
  { "name": "Bob", "age": 25 },
  { "name": "Ram", "age": 16 }
]
```

```yaml
{{- $people := .Files.Get "jsons/people.json" | fromJsonArray }}
greeting: |
{{- range $people }}
  Hi, my name is {{ .name }} and I am {{ .age }} years old.
{{- end }}
```

### fromYamlArray

The `fromYamlArray` function takes a YAML array and returns a list that can be used in templates.

File at `yamls/people.yml`:

```yaml
- name: Bob
  age: 25
- name: Ram
  age: 16
```

```yaml
{{- $people := .Files.Get "yamls/people.yml" | fromYamlArray }}
greeting: |
{{- range $people }}
  Hi, my name is {{ .name }} and I am {{ .age }} years old.
{{- end }}
```

## Regular Expressions

Helm includes the following regular expression functions: [regexFind (mustRegexFind)](#regexfind-mustregexfind), [regexFindAll (mustRegexFindAll)](#regexfindall-mustregexfindall), [regexMatch (mustRegexMatch)](#regexmatch-mustregexmatch), [regexReplaceAll (mustRegexReplaceAll)](#regexreplaceall-mustregexreplaceall), [regexReplaceAllLiteral (mustRegexReplaceAllLiteral)](#regexreplaceallliteral-mustregexreplaceallliteral), and [regexSplit (mustRegexSplit)](#regexsplit-mustregexsplit).

### regexMatch, mustRegexMatch

Returns true if the input string contains any match of the regular expression:

```
regexMatch "^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$" "test@acme.com"
```

The above produces `true`.

`regexMatch` panics if there is a problem and `mustRegexMatch` returns an error to the template engine if there is a problem.

### regexFindAll, mustRegexFindAll

Returns a slice of all matches of the regular expression in the input string. The last parameter `n` determines the number of substrings to return, where `-1` means return all matches:

```
regexFindAll "[2,4,6,8]" "123456789" -1
```

The above produces `[2 4 6 8]`.

`regexFindAll` panics if there is a problem and `mustRegexFindAll` returns an error to the template engine if there is a problem.

### regexFind, mustRegexFind

Return the first (left-most) match of the regular expression in the input string:

```
regexFind "[a-zA-Z][1-9]" "abcd1234"
```

The above produces `d1`.

`regexFind` panics if there is a problem and `mustRegexFind` returns an error to the template engine if there is a problem.

### regexReplaceAll, mustRegexReplaceAll

Returns a copy of the input string, replacing matches of the Regexp with the replacement string. Inside the replacement string, `$` signs are interpreted as in `Expand`, so for instance `$1` represents the text of the first submatch:

```
regexReplaceAll "a(x*)b" "-ab-axxb-" "${1}W"
```

The above produces `-W-xxW-`.

`regexReplaceAll` panics if there is a problem and `mustRegexReplaceAll` returns an error to the template engine if there is a problem.

### regexReplaceAllLiteral, mustRegexReplaceAllLiteral

Returns a copy of the input string, replacing matches of the Regexp with the replacement string. The replacement string is substituted directly, without
using `Expand`:

```
regexReplaceAllLiteral "a(x*)b" "-ab-axxb-" "${1}"
```

The above produces `-${1}-${1}-`.

`regexReplaceAllLiteral` panics if there is a problem and `mustRegexReplaceAllLiteral` returns an error to the template engine if there is a problem.

### regexSplit, mustRegexSplit

Slices the input string into substrings separated by the expression and returns a slice of the substrings between those expression matches. The last parameter `n` determines the number of substrings to return, where `-1` means return all matches:

```
regexSplit "z+" "pizza" -1
```

The above produces `[pi a]`.

`regexSplit` panics if there is a problem and `mustRegexSplit` returns an error to the template engine if there is a problem.

## Cryptographic and Security Functions

Helm provides some advanced cryptographic functions. They include [adler32sum](#adler32sum), [buildCustomCert](#buildcustomcert), [decryptAES](#decryptaes), [derivePassword](#derivepassword), [encryptAES](#encryptaes), [genCA](#genca), [genPrivateKey](#genprivatekey), [genSelfSignedCert](#genselfsignedcert), [genSignedCert](#gensignedcert), [htpasswd](#htpasswd), [sha1sum](#sha1sum), and [sha256sum](#sha256sum).

### sha1sum

The `sha1sum` function receives a string, and computes its SHA1 digest:

```
sha1sum "Hello world!"
```

### sha256sum

The `sha256sum` function receives a string, and computes its SHA256 digest:

```
sha256sum "Hello world!"
```

The above will compute the SHA 256 sum in an "ASCII armored" format that is safe to print.

### adler32sum

The `adler32sum` function receives a string, and computes its Adler-32 checksum:

```
adler32sum "Hello world!"
```

### htpasswd

The `htpasswd` function takes a `username` and `password` and generates a `bcrypt` hash of the password. The result can be used for basic authentication on an [Apache HTTP Server](https://httpd.apache.org/docs/2.4/misc/password_encryptions.html#basic):

```
htpasswd "myUser" "myPassword"
```

Note that it is insecure to
store the password directly in the template.

### derivePassword

The `derivePassword` function can be used to derive a specific password based on some shared "master password" constraints. The algorithm for this is [well specified](https://web.archive.org/web/20211019121301/https://masterpassword.app/masterpassword-algorithm.pdf):

```
derivePassword 1 "long" "password" "user" "example.com"
```

Note that it is considered insecure to store the parts directly in the template.

### genPrivateKey

The `genPrivateKey` function generates a new private key encoded into a PEM block.

It takes one of the values for its first param:

- `ecdsa`: Generate an elliptic curve DSA key (P256)
- `dsa`: Generate a DSA key (L2048N256)
- `rsa`: Generate an RSA 4096 key

### buildCustomCert

The `buildCustomCert` function allows customizing the certificate.

It takes the following string parameters:

- A base64 encoded PEM format certificate
- A base64 encoded PEM format private key

It returns a certificate object with the following attributes:

- `Cert`: A PEM-encoded certificate
- `Key`: A PEM-encoded private key

Example:

```
$ca := buildCustomCert "base64-encoded-ca-crt" "base64-encoded-ca-key"
```

Note that the returned object can be passed to the `genSignedCert` function to sign a certificate using this CA.

### genCA

The `genCA` function generates a new, self-signed x509 certificate authority. It takes the following parameters:

- Subject's common name (cn)
- Cert validity duration in days

It returns an object with the following attributes:

- `Cert`: A PEM-encoded certificate
- `Key`: A PEM-encoded private key

Example:

```
$ca := genCA "foo-ca" 365
```

Note that the returned object can be passed to the `genSignedCert` function to sign a certificate using this CA.

### genSelfSignedCert

The `genSelfSignedCert` function generates a new, self-signed x509 certificate. It takes the following parameters:

- Subject's common name (cn)
- Optional list of IPs (may be nil)
- Optional list of alternate DNS names (may be nil)
- Cert validity duration in days

It returns an object with the following attributes:

- `Cert`: A PEM-encoded certificate
- `Key`: A PEM-encoded private key

Example:

```
$cert := genSelfSignedCert "foo.com" (list "10.0.0.1" "10.0.0.2") (list "bar.com" "bat.com") 365
```

### genSignedCert

The `genSignedCert` function generates a new, x509 certificate signed by the specified CA. It takes the following parameters:

- Subject's common name (cn)
- Optional list of IPs (may be nil)
- Optional list of alternate DNS names (may be nil)
- Cert validity duration in days
- CA (see `genCA`)

Example:

```
$ca := genCA "foo-ca" 365
$cert := genSignedCert "foo.com" (list "10.0.0.1" "10.0.0.2") (list "bar.com" "bat.com") 365 $ca
```

### encryptAES

The `encryptAES` function encrypts text with AES-256-CBC and returns a base64-encoded string:

```
encryptAES "secretkey" "plaintext"
```

### decryptAES

The `decryptAES` function receives a base64 string encoded by the AES-256-CBC algorithm and returns the decoded text:

```
"30tEfhuJSVRhpG97XCuWgz2okj7L8vQ1s6V9zVUPeDQ=" | decryptAES "secretkey"
```

## Date Functions

Helm includes the following date functions you can use in templates: [ago](#ago), [date](#date), [dateInZone](#dateinzone), [dateModify (mustDateModify)](#datemodify-mustdatemodify), [duration](#duration), [durationRound](#durationround), [htmlDate](#htmldate), [htmlDateInZone](#htmldateinzone), [now](#now), [toDate (mustToDate)](#todate-musttodate), and [unixEpoch](#unixepoch).

### now

The current date/time. Use this in conjunction with other date functions.

### ago

The `ago` function returns the duration from time.Now in seconds resolution:

```
ago .CreatedAt
```

returns in `time.Duration` String() format:

```
2h34m7s
```

### date

The `date` function formats a date. Format the date to YEAR-MONTH-DAY:

```
now | date "2006-01-02"
```

Date formatting in Go is a [little bit different](https://pauladamsmith.com/blog/2011/05/go_time.html).

In short, take this as the base date:

```
Mon Jan 2 15:04:05 MST 2006
```

Write it in the format you want. Above, `2006-01-02` is the same date, but in the format we want.

### dateInZone

Same as `date`, but with a timezone:

```
dateInZone "2006-01-02" (now) "UTC"
```

### duration

Formats a given amount of seconds as a `time.Duration`.

This returns 1m35s:

```
duration "95"
```

### durationRound

Rounds a given duration to the most significant unit. Strings and `time.Duration` get parsed as a duration, while a `time.Time` is calculated as the duration since.

This returns 2h:

```
durationRound "2h10m5s"
```

This returns 3mo:

```
durationRound "2400h10m5s"
```

### unixEpoch

Returns the seconds since the unix epoch for a `time.Time`:

```
now | unixEpoch
```

### dateModify, mustDateModify

`dateModify` takes a modification and a date and returns the timestamp.

Subtract an hour and thirty minutes from the current time:

```
now | dateModify "-1.5h"
```

If the modification format is wrong, `dateModify` will return the date unmodified. `mustDateModify` will return an error otherwise.

### htmlDate

The `htmlDate` function formats a date for inserting into an HTML date picker input field:

```
now | htmlDate
```

### htmlDateInZone

Same as `htmlDate`, but with a timezone:

```
htmlDateInZone (now) "UTC"
```

### toDate, mustToDate

`toDate` converts a string to a date. The first argument is the date layout and the second the date string. If the string cannot be converted, it returns the zero value. `mustToDate` will return an error in case the string cannot be converted.

This is useful when you want to convert a string date to another format (using a pipe). The example below converts "2017-12-31" to "31/12/2017":

```
toDate "2006-01-02" "2017-12-31" | date "02/01/2006"
```

## Dictionaries and Dict Functions

Helm provides a key/value storage type called a `dict` (short for "dictionary", as in Python). A `dict` is an _unordered_
type. The key to a dictionary **must be a string**. However, the value can be any type, even another `dict` or `list`.

Unlike `list`s, `dict`s are not immutable. The `set` and `unset` functions will modify the contents of a dictionary.

Helm provides the following functions to support working with dicts: [deepCopy (mustDeepCopy)](#deepcopy-mustdeepcopy), [dict](#dict), [dig](#dig), [get](#get), [hasKey](#haskey), [keys](#keys), [merge (mustMerge)](#merge-mustmerge), [mergeOverwrite (mustMergeOverwrite)](#mergeoverwrite-mustmergeoverwrite), [omit](#omit), [pick](#pick), [pluck](#pluck), [set](#set), [unset](#unset), and [values](#values).

### dict

Creating dictionaries is done by calling the `dict` function and passing it a list of pairs.

The following creates a dictionary with three items:

```
$myDict := dict "name1" "value1" "name2" "value2" "name3" "value 3"
```

### get

Given a map and a key, get the value from the map:

```
get $myDict "name1"
```

The above returns `"value1"`.

Note that if the key is not found, this operation will simply return `""`. No error will be generated.

### set

Use `set` to add a new key/value pair to a dictionary:

```
$_ := set $myDict "name4" "value4"
```

Note that `set` _returns the dictionary_ (a requirement of Go template functions), so you may need to trap the value as done above with the `$_` assignment.

### unset

Given a map and a key, delete the key from the map:

```
$_ := unset $myDict "name4"
```

As with `set`, this returns the dictionary.

Note that if the key is not found, this operation will simply return. No error will be generated.

### hasKey

The `hasKey` function returns `true` if the given dict contains the given key:

```
hasKey $myDict "name1"
```

If the key is not found, this returns `false`.

### pluck

The `pluck` function makes it possible to give one key and multiple maps, and get a list of all of the matches:

```
pluck "name1" $myDict $myOtherDict
```

The above will return a `list`
containing every found value (`[value1 otherValue1]`).

If the given key is _not found_ in a map, that map will not have an item in the list (and the length of the returned list will be less than the number of dicts in the call to `pluck`).

If the key is _found_ but the value is an empty value, that value will be inserted.

A common idiom in Helm templates is to use `pluck ... | first` to get the first matching key out of a collection of dictionaries.

### dig

The `dig` function traverses a nested set of dicts, selecting keys from a list of values. It returns a default value if any of the keys are not found at the associated dict:

```
dig "user" "role" "humanName" "guest" $dict
```

Given a dict structured like

```json
{
  "user": {
    "role": {
      "humanName": "curator"
    }
  }
}
```

the above would return `"curator"`. If the dict lacked even a `user` field, the result would be `"guest"`.

Dig can be very useful in cases where you'd like to avoid guard clauses, especially since Go's template package's `and` doesn't shortcut. For instance `and a.maybeNil a.maybeNil.iNeedThis` will always evaluate `a.maybeNil.iNeedThis`, and panic if `a` lacks a `maybeNil` field.

`dig` accepts its dict argument last in order to support pipelining. For instance:

```
merge a b c | dig "one" "two" "three" "missing"
```

### merge, mustMerge

Merge two or more dictionaries into one, giving precedence to the dest dictionary.

Given:

```yaml
dst:
  default: default
  overwrite: me
  key: true

src:
  overwrite: overwritten
  key: false
```

will result in:

```yaml
newdict:
  default: default
  overwrite: me
  key: true
```

```
$newdict := merge $dest $source1 $source2
```

This is a deep merge operation but not a deep copy operation. Nested objects that are merged are the same instance on both dicts. If you want a deep copy along with the merge, then use the `deepCopy` function along with merging. For example:

```
deepCopy $source | merge $dest
```

`mustMerge` will return an error in case
of unsuccessful merge.

### mergeOverwrite, mustMergeOverwrite

Merge two or more dictionaries into one, giving precedence from **right to left**, effectively overwriting values in the dest dictionary.

Given:

```yaml
dst:
  default: default
  overwrite: me
  key: true

src:
  overwrite: overwritten
  key: false
```

will result in:

```yaml
newdict:
  default: default
  overwrite: overwritten
  key: false
```

```
$newdict := mergeOverwrite $dest $source1 $source2
```

This is a deep merge operation but not a deep copy operation. Nested objects that are merged are the same instance on both dicts. If you want a deep copy along with the merge then use the `deepCopy` function along with merging. For example:

```
deepCopy $source | mergeOverwrite $dest
```

`mustMergeOverwrite` will return an error in case of unsuccessful merge.

### keys

The `keys` function will return a `list` of all of the keys in one or more `dict` types. Since a dictionary is _unordered_, the keys will not be in a predictable order. They can be sorted with `sortAlpha`:

```
keys $myDict | sortAlpha
```

When supplying multiple dictionaries, the keys will be concatenated. Use the `uniq` function along with `sortAlpha` to get a unique, sorted list of keys:

```
keys $myDict $myOtherDict | uniq | sortAlpha
```

### pick

The `pick` function selects just the given keys out of a dictionary, creating a new `dict`:

```
$new := pick $myDict "name1" "name2"
```

The above returns `{name1: value1, name2: value2}`.

### omit

The `omit` function is similar to `pick`, except it returns a new `dict` with all the keys that _do not_ match the given keys:

```
$new := omit $myDict "name1" "name3"
```

The above returns `{name2: value2}`.

### values

The `values` function is similar to `keys`, except it returns a new `list` with all the values of the source `dict` (only one dictionary is supported):

```
$vals := values $myDict
```

The above returns `list["value1", "value2", "value 3"]`. Note that the `values`
function gives no guarantees about the result ordering; if you care about this, then use `sortAlpha`.

### deepCopy, mustDeepCopy

The `deepCopy` and `mustDeepCopy` functions take a value and make a deep copy of the value. This includes dicts and other structures. `deepCopy` panics when there is a problem, while `mustDeepCopy` returns an error to the template system when there is an error:

```
dict "a" 1 "b" 2 | deepCopy
```

### A Note on Dict Internals

A `dict` is implemented in Go as a `map[string]interface{}`. Go developers can pass `map[string]interface{}` values into the context to make them available to templates as `dict`s.

## Encoding Functions

Helm has the following encoding and decoding functions:

- `b64enc`/`b64dec`: Encode or decode with Base64
- `b32enc`/`b32dec`: Encode or decode with Base32

## Lists and List Functions

Helm provides a simple `list` type that can contain arbitrary sequential lists of data. This is similar to arrays or slices, but lists are designed to be used as immutable data types.

Create a list of integers:

```
$myList := list 1 2 3 4 5
```

The above creates a list of `[1 2 3 4 5]`.

Helm provides the following list functions: [append (mustAppend)](#append-mustappend), [compact (mustCompact)](#compact-mustcompact), [concat](#concat), [first (mustFirst)](#first-mustfirst), [has (mustHas)](#has-musthas), [initial (mustInitial)](#initial-mustinitial), [last (mustLast)](#last-mustlast), [prepend (mustPrepend)](#prepend-mustprepend), [rest (mustRest)](#rest-mustrest), [reverse (mustReverse)](#reverse-mustreverse), [seq](#seq), [index](#index), [slice (mustSlice)](#slice-mustslice), [uniq (mustUniq)](#uniq-mustuniq), [until](#until), [untilStep](#untilstep), and [without (mustWithout)](#without-mustwithout).

### first, mustFirst

To get the head item on a list, use `first`.

`first $myList` returns `1`.

`first` panics if there is a problem, while `mustFirst` returns an error to the template engine if there is a problem.
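The `first`/`mustFirst` split mirrors a common Go convention: one variant panics on failure, the other returns an error the caller can handle. A minimal sketch of how such a pair can be built (illustrative only, not Helm's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// mustFirst returns the head of the list, or an error for an empty list.
func mustFirst(list []int) (int, error) {
	if len(list) == 0 {
		return 0, errors.New("first: empty list")
	}
	return list[0], nil
}

// first wraps mustFirst and panics on error, matching the
// panic-vs-error convention described above.
func first(list []int) int {
	v, err := mustFirst(list)
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	fmt.Println(first([]int{1, 2, 3, 4, 5})) // 1
	_, err := mustFirst(nil)
	fmt.Println(err != nil) // true
}
```

The same pattern applies to every `must*` pairing in this section: the plain function halts template execution, while the `must*` form surfaces the error to the template engine.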
### rest, mustRest

To get the tail of the list (everything but the first item), use `rest`.

`rest $myList` returns `[2 3 4 5]`.

`rest` panics if there is a problem, while `mustRest` returns an error to the template engine if there is a problem.

### last, mustLast

To get the last item on a list, use `last`.

`last $myList` returns `5`. This is roughly analogous to reversing a list and then calling `first`.

### initial, mustInitial

This complements `last` by returning all _but_ the last element. `initial $myList` returns `[1 2 3 4]`.

`initial` panics if there is a problem, while `mustInitial` returns an error to the template engine if there is a problem.

### append, mustAppend

Append a new item to an existing list, creating a new list:

```
$new := append $myList 6
```

The above would set `$new` to `[1 2 3 4 5 6]`. `$myList` would remain unaltered.

`append` panics if there is a problem, while `mustAppend` returns an error to the template engine if there is a problem.

### prepend, mustPrepend

Push an element onto the front of a list, creating a new list:

```
prepend $myList 0
```

The above would produce `[0 1 2 3 4 5]`. `$myList` would remain unaltered.

`prepend` panics if there is a problem, while `mustPrepend` returns an error to the template engine if there is a problem.

### concat

Concatenate an arbitrary number of lists into one:

```
concat $myList (list 6 7) (list 8)
```

The above would produce `[1 2 3 4 5 6 7 8]`. `$myList` would remain unaltered.

### reverse, mustReverse

Produce a new list with the reversed elements of the given list:

```
reverse $myList
```

The above would generate the list `[5 4 3 2 1]`.

`reverse` panics if there is a problem, while `mustReverse` returns an error to the template engine if there is a problem.

### uniq, mustUniq

Generate a list with all of the duplicates removed:

```
list 1 1 1 2 | uniq
```

The above would produce `[1 2]`.

`uniq` panics if there is a problem, while `mustUniq` returns an error to the
template engine if there is a problem.

### without, mustWithout

The `without` function filters items out of a list:

```
without $myList 3
```

The above would produce `[1 2 4 5]`.

`without` can take more than one filter:

```
without $myList 1 3 5
```

That would produce `[2 4]`.

`without` panics if there is a problem, while `mustWithout` returns an error to the template engine if there is a problem.

### has, mustHas

Test to see if a list has a particular element:

```
has 4 $myList
```

The above would return `true`, while `has "hello" $myList` would return false.

`has` panics if there is a problem, while `mustHas` returns an error to the template engine if there is a problem.

### compact, mustCompact

Accepts a list and removes entries with empty values:

```
$list := list 1 "a" "foo" ""
$copy := compact $list
```

`compact` will return a new list with the empty (i.e. `""`) item removed.

`compact` panics if there is a problem and `mustCompact` returns an error to the template engine if there is a problem.

### index

To get the nth element of a list, use `index list [n]`. To index into multi-dimensional lists, use `index list [n] [m] ...`:

- `index $myList 0` returns `1`. It is the same as `myList[0]`
- `index $myList 0 1` would be the same as `myList[0][1]`

### slice, mustSlice

To get partial elements of a list, use `slice list [n] [m]`. It is the equivalent of `list[n:m]`:

- `slice $myList` returns `[1 2 3 4 5]`. It is the same as `myList[:]`.
- `slice $myList 3` returns `[4 5]`. It is the same as `myList[3:]`.
- `slice $myList 1 3` returns `[2 3]`. It is the same as `myList[1:3]`.
- `slice $myList 0 3` returns `[1 2 3]`. It is the same as `myList[:3]`.

`slice` panics if there is a problem, while `mustSlice` returns an error to the template engine if there is a problem.

### until

The `until` function builds a range of integers:

```
until 5
```

The above generates the list `[0, 1, 2, 3, 4]`.

This is useful for looping with `range $i, $e := until 5`.

### untilStep
Like `until`, `untilStep` generates a list of counting integers. But it allows you to define a start, stop, and step:

```
untilStep 3 6 2
```

The above will produce `[3 5]` by starting with 3, and adding 2 until it is equal to or greater than 6. This is similar to Python's `range` function.

### seq

Works like the bash `seq` command:

- 1 parameter (end): will generate all counting integers between 1 and `end` inclusive.
- 2 parameters (start, end): will generate all counting integers between `start` and `end` inclusive, incrementing or decrementing by 1.
- 3 parameters (start, step, end): will generate all counting integers between `start` and `end` inclusive, incrementing or decrementing by `step`.

```
seq 5       => 1 2 3 4 5
seq -3      => 1 0 -1 -2 -3
seq 0 2     => 0 1 2
seq 2 -2    => 2 1 0 -1 -2
seq 0 2 10  => 0 2 4 6 8 10
seq 0 -2 -5 => 0 -2 -4
```

## Math Functions

All math functions operate on `int64` values unless specified otherwise.

The following math functions are available: [add](#add), [add1](#add1), [ceil](#ceil), [div](#div), [floor](#floor), [len](#len), [max](#max), [min](#min), [mod](#mod), [mul](#mul), [round](#round), and [sub](#sub).

### add

Sum numbers with `add`. Accepts two or more inputs:

```
add 1 2 3
```

### add1

To increment by 1, use `add1`.

### sub

To subtract, use `sub`.

### div

Perform integer division with `div`.

### mod

Modulo with `mod`.

### mul

Multiply with `mul`. Accepts two or more inputs:

```
mul 1 2 3
```

### max

Return the largest of a series of integers.

This will return `3`:

```
max 1 2 3
```

### min

Return the smallest of a series of integers.

`min 1 2 3` will return `1`.

### len

Returns the length of the argument as an integer:

```
len .Arg
```

## Float Math Functions

All math functions operate on `float64` values.

### addf

Sum numbers with `addf`. This will return `5.5`:

```
addf 1.5 2 2
```

### add1f

To increment by 1, use `add1f`.

### subf

To subtract, use `subf`. This
is equivalent to `7.5 - 2 - 3` and will return `2.5`:

```
subf 7.5 2 3
```

### divf

Perform division with `divf`. This is equivalent to `10 / 2 / 4` and will return `1.25`:

```
divf 10 2 4
```

### mulf

Multiply with `mulf`. This will return `6`:

```
mulf 1.5 2 2
```

### maxf

Return the largest of a series of floats.

This will return `3`:

```
maxf 1 2.5 3
```

### minf

Return the smallest of a series of floats.

This will return `1.5`:

```
minf 1.5 2 3
```

### floor

Returns the greatest float value less than or equal to the input value.

`floor 123.9999` will return `123.0`.

### ceil

Returns the least float value greater than or equal to the input value.

`ceil 123.001` will return `124.0`.

### round

Returns a float value with the remainder rounded to the given number of digits after the decimal point.

`round 123.555555 3` will return `123.556`.

## Network Functions

Helm has a single network function, `getHostByName`.

The `getHostByName` receives a domain name and returns the IP address.

`getHostByName "www.google.com"` would return the corresponding IP address of `www.google.com`.

## File Path Functions

While Helm template functions do not grant access to the filesystem, they do provide functions for working with strings that follow file path conventions. Those include [base](#base), [clean](#clean), [dir](#dir), [ext](#ext), and [isAbs](#isabs).

### base

Return the last element of a path:

```
base "foo/bar/baz"
```

The above prints "baz".

### dir

Return the directory, stripping the last part of the path. So `dir "foo/bar/baz"` returns `foo/bar`.

### clean

Clean up a path:

```
clean "foo/bar/../baz"
```

The above resolves the `..` and returns `foo/baz`.

### ext

Return the file extension:

```
ext "foo.bar"
```

The above returns `.bar`.

### isAbs

To check whether a file path is absolute, use `isAbs`.

## Reflection Functions

Helm provides rudimentary reflection tools. These help advanced template developers understand the
underlying Go type information for a particular value  Helm is written in Go and is strongly typed  The type system applies within templates   Go has several primitive  kinds   like  string    slice    int64   and  bool    Go has an open  type  system that allows developers to create their own types   Helm provides a set of functions for each via  kind functions   kind functions  and  type functions   type functions   A  deepEqual   deepequal  function is also provided to compare to values       Kind Functions  There are two Kind functions   kindOf  returns the kind of an object       kindOf  hello       The above would return  string   For simple tests  like in  if  blocks   the  kindIs  function will let you verify that a value is a particular kind       kindIs  int  123      The above will return  true        Type Functions  Types are slightly harder to work with  so there are three different functions      typeOf  returns the underlying type of a value   typeOf  foo     typeIs  is like  kindIs   but for types   typeIs   io Buffer   myVal     typeIsLike  works as  typeIs   except that it also dereferences pointers    Note    None of these can test whether or not something implements a given interface  since doing so would require compiling the interface in ahead of time       deepEqual   deepEqual  returns true if two values are   deeply equal   https   golang org pkg reflect  DeepEqual   Works for non primitive types as well  compared to the built in  eq         deepEqual  list 1 2 3   list 1 2 3       The above will return  true       Semantic Version Functions  Some version schemes are easily parseable and comparable  Helm provides functions for working with  SemVer 2  http   semver org  versions  These include  semver   semver  and  semverCompare   semvercompare   Below you will also find details on using ranges for comparisons       semver  The  semver  function parses a string into a Semantic Version        version    semver  1 2 3 alpha 1 123        If 
the parser fails  it will cause template execution to halt with an error    At this point    version  is a pointer to a  Version  object with the following properties       version Major   The major number   1  above      version Minor   The minor number   2  above      version Patch   The patch number   3  above      version Prerelease   The prerelease   alpha 1  above      version Metadata   The build metadata   123  above      version Original   The original version as a string  Additionally  you can compare a  Version  to another  version  using the  Compare  function       semver  1 4 3     semver  1 2 3   Compare      The above will return   1    The return values are       1  if the given semver is greater than the semver whose  Compare  method was   called    1  if the version who s  Compare  function was called is greater     0  if they are the same version   Note that in SemVer  the  Metadata  field is not compared during version comparison operations        semverCompare  A more robust comparison function is provided as  semverCompare   This version supports version ranges      semverCompare  1 2 3   1 2 3   checks for an exact match    semverCompare   1 2 0   1 2 3   checks that the major and minor versions   match  and that the patch number of the second version is  greater than or   equal to  the first parameter   The SemVer functions use the  Masterminds semver library  https   github com Masterminds semver   from the creators of Sprig       Basic Comparisons  There are two elements to the comparisons  First  a comparison string is a list of space or comma separated AND comparisons  These are then separated by     OR  comparisons  For example       1 2   3 0 0       4 2 3   is looking for a comparison that s greater than or equal to 1 2 and less than 3 0 0 or is greater than or equal to 4 2 3   The basic comparisons are          equal  aliased to no operator          not equal        greater than        less than         greater than or equal to      
   less than or equal to      Working With Prerelease Versions  Pre releases  for those not familiar with them  are used for software releases prior to stable or generally available releases  Examples of prereleases include development  alpha  beta  and release candidate releases  A prerelease may be a version such as  1 2 3 beta 1   while the stable release would be  1 2 3   In the order of precedence  prereleases come before their associated releases  In this example  1 2 3 beta 1   1 2 3    According to the Semantic Version specification prereleases may not be API compliant with their release counterpart  It says     A pre release version indicates that the version is unstable and might not   satisfy the intended compatibility requirements as denoted by its associated   normal version   SemVer comparisons using constraints without a prerelease comparator will skip prerelease versions  For example     1 2 3  will skip prereleases when looking at a list of releases  while    1 2 3 0  will evaluate and find prereleases   The reason for the  0  as a pre release version in the example comparison is because pre releases can only contain ASCII alphanumerics and hyphens  along with     separators   per the spec  Sorting happens in ASCII sort order  again per the spec  The lowest character is a  0  in ASCII sort order  see an  ASCII Table  http   www asciitable com     Understanding ASCII sort ordering is important because A Z comes before a z  That means    1 2 3 BETA  will return  1 2 3 alpha   What you might expect from case sensitivity doesn t apply here  This is due to ASCII sort ordering which is what the spec specifies       Hyphen Range Comparisons  There are multiple methods to handle ranges and the first is hyphens ranges  These look like      1 2   1 4 5  which is equivalent to     1 2    1 4 5     2 3 4   4 5  which is equivalent to     2 3 4    4 5       Wildcards In Comparisons  The  x    X   and     characters can be used as a wildcard character  This 
works for all comparison operators  When used on the     operator it falls back to the patch level comparison  see tilde below   For example      1 2 x  is equivalent to     1 2 0    1 3 0        1 2 x  is equivalent to     1 2 0        2 x  is equivalent to    3        is equivalent to     0 0 0       Tilde Range Comparisons  Patch   The tilde       comparison operator is for patch level ranges when a minor version is specified and major level changes when the minor number is missing  For example       1 2 3  is equivalent to     1 2 3    1 3 0      1  is equivalent to     1    2      2 3  is equivalent to     2 3    2 4      1 2 x  is equivalent to     1 2 0    1 3 0      1 x  is equivalent to     1    2       Caret Range Comparisons  Major   The caret       comparison operator is for major level changes once a stable  1 0 0  release has occurred  Prior to a 1 0 0 release the minor versions acts as the API stability level  This is useful when comparisons of API versions as a major change is API breaking  For example       1 2 3  is equivalent to     1 2 3    2 0 0      1 2 x  is equivalent to     1 2 0    2 0 0      2 3  is equivalent to     2 3    3      2 x  is equivalent to     2 0 0    3      0 2 3  is equivalent to    0 2 3  0 3 0      0 2  is equivalent to    0 2 0  0 3 0      0 0 3  is equivalent to    0 0 3  0 0 4      0 0  is equivalent to    0 0 0  0 1 0      0  is equivalent to    0 0 0  1 0 0      URL Functions  Helm includes the  urlParse   urlparse    urlJoin   urljoin   and  urlquery   urlquery  functions enabling you to work with URL parts       urlParse  Parses string for URL and produces dict with URL parts      urlParse  http   admin secret server com 8080 api list false anchor       The above returns a dict  containing URL object      yaml scheme     http  host       server com 8080  path        api  query      list false  opaque    nil fragment   anchor  userinfo   admin secret       This is implemented used the URL packages from the Go 
standard library  For more info  check https   golang org pkg net url  URL      urlJoin  Joins map  produced by  urlParse   to produce URL string      urlJoin  dict  fragment   fragment   host   host 80   path    path   query   query   scheme   http        The above returns the following string      http   host 80 path query fragment          urlquery  Returns the escaped version of the value passed in as an argument so that it is suitable for embedding in the query portion of a URL        var    urlquery  string for query          UUID Functions  Helm can generate UUID v4 universally unique IDs       uuidv4      The above returns a new UUID of the v4  randomly generated  type      Kubernetes and Chart Functions  Helm includes functions for working with Kubernetes including   Capabilities APIVersions Has   capabilitiesapiversionshas    Files   file functions   and  lookup   lookup        lookup   lookup  is used to look up resource in a running cluster  When used with the  helm template  command it always returns an empty response   You can find more detail in the  documentation on the lookup function  functions and pipelines md  using the lookup function         Capabilities APIVersions Has  Returns if an API version or resource is available in a cluster        Capabilities APIVersions Has  apps v1   Capabilities APIVersions Has  apps v1 Deployment       More information is available on the  built in object documentation  builtin objects md        File Functions  There are several functions that enable you to get to non special files within a chart  For example  to access application configuration files  These are documented in  Accessing Files Inside Templates  accessing files md     Note  the documentation for many of these functions come from  Sprig  https   github com Masterminds sprig   Sprig is a template function library available to Go applications  "}
{"questions":"helm variables In templates they are less frequently used But we will see how to With functions pipelines objects and control structures under our belts we title Variables use them to simplify code and to make better use of and weight 8 Using variables in templates can turn to one of the more basic ideas in many programming languages","answers":"---\ntitle: \"Variables\"\ndescription: \"Using variables in templates.\"\nweight: 8\n---\n\nWith functions, pipelines, objects, and control structures under our belts, we\ncan turn to one of the more basic ideas in many programming languages:\nvariables. In templates, they are less frequently used. But we will see how to\nuse them to simplify code, and to make better use of `with` and `range`.\n\nIn an earlier example, we saw that this code will fail:\n\n```yaml\n  {{- with .Values.favorite }}\n  drink: {{ .drink }}\n  food: {{ .food }}\n  release: {{ .Release.Name }}\n  {{- end }}\n```\n\n`Release.Name` is not inside of the scope that's restricted in the `with` block.\nOne way to work around scoping issues is to assign objects to variables that can\nbe accessed without respect to the present scope.\n\nIn Helm templates, a variable is a named reference to another object. It follows\nthe form `$name`. Variables are assigned with a special assignment operator:\n`:=`. We can rewrite the above to use a variable for `Release.Name`.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-configmap\ndata:\n  myvalue: \"Hello World\"\n  {{- $relname := .Release.Name -}}\n  {{- with .Values.favorite }}\n  drink: {{ .drink }}\n  food: {{ .food }}\n  release: {{ $relname }}\n  {{- end }}\n```\n\nNotice that before we start the `with` block, we assign `$relname :=\n.Release.Name`. Now inside of the `with` block, the `$relname` variable still\npoints to the release name.\n\nRunning that will produce this:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: viable-badger-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: \"coffee\"\n  food: \"PIZZA\"\n  release: viable-badger\n```\n\nVariables are particularly useful in `range` loops. 
They can be used on\nlist-like objects to capture both the index and the value:\n\n```yaml\n  toppings: |-\n    {{- range $index, $topping := .Values.pizzaToppings }}\n      {{ $index }}: {{ $topping }}\n    {{- end }}\n```\n\nNote that `range` comes first, then the variables, then the assignment operator,\nthen the list. This will assign the integer index (starting from zero) to\n`$index` and the value to `$topping`. Running it will produce:\n\n```yaml\n  toppings: |-\n      0: mushrooms\n      1: cheese\n      2: peppers\n      3: onions\n```\n\nFor data structures that have both a key and a value, we can use `range` to get\nboth. For example, we can loop through `.Values.favorite` like this:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-configmap\ndata:\n  myvalue: \"Hello World\"\n  {{- range $key, $val := .Values.favorite }}\n  {{ $key }}: {{ $val | quote }}\n  {{- end }}\n```\n\nNow on the first iteration, `$key` will be `drink` and `$val` will be `coffee`,\nand on the second, `$key` will be `food` and `$val` will be `pizza`. Running the\nabove will generate this:\n\n```yaml\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: eager-rabbit-configmap\ndata:\n  myvalue: \"Hello World\"\n  drink: \"coffee\"\n  food: \"pizza\"\n```\n\nVariables are normally not \"global\". They are scoped to the block in which they\nare declared. Earlier, we assigned `$relname` in the top level of the template.\nThat variable will be in scope for the entire template. But in our last example,\n`$key` and `$val` will only be in scope inside of the `{{ range... }}`\nblock.\n\nHowever, there is one variable that is always global - `$` - this variable will\nalways point to the root context. 
This can be very useful when you are looping\nin a range and you need to know the chart's release name.\n\nAn example illustrating this:\n```yaml\n{{- range .Values.tlsSecrets }}\napiVersion: v1\nkind: Secret\nmetadata:\n  name: {{ .name }}\n  labels:\n    # Many helm templates would use `.` below, but that will not work,\n    # however `$` will work here\n    app.kubernetes.io\/name: {{ $.Release.Name }}\n    # I cannot reference .Chart.Name, but I can do $.Chart.Name\n    helm.sh\/chart: \"{{ $.Chart.Name }}-{{ $.Chart.Version }}\"\n    app.kubernetes.io\/instance: \"{{ $.Release.Name }}\"\n    # Value from appVersion in Chart.yaml\n    app.kubernetes.io\/version: \"{{ $.Chart.AppVersion }}\"\n    app.kubernetes.io\/managed-by: \"{{ $.Release.Service }}\"\ntype: kubernetes.io\/tls\ndata:\n  tls.crt: {{ .certificate }}\n  tls.key: {{ .key }}\n---\n{{- end }}\n```\n\nSo far we have looked at just one template declared in just one file. But one of\nthe powerful features of the Helm template language is its ability to declare\nmultiple templates and use them together. We'll turn to that in the next\nsection.","site":"helm"}
{"questions":"helm Terms used to describe components of Helm s architecture title Glossary Glossary weight 10 Chart","answers":"---\ntitle: \"Glossary\"\ndescription: \"Terms used to describe components of Helm's architecture.\"\nweight: 10\n---\n\n# Glossary\n\n## Chart\n\nA Helm package that contains information sufficient for installing a set of\nKubernetes resources into a Kubernetes cluster.\n\nCharts contain a `Chart.yaml` file as well as templates, default values\n(`values.yaml`), and dependencies.\n\nCharts are developed in a well-defined directory structure, and then packaged\ninto an archive format called a _chart archive_.\n\n## Chart Archive\n\nA _chart archive_ is a tarred and gzipped (and optionally signed) chart.\n\n## Chart Dependency (Subcharts)\n\nCharts may depend upon other charts. There are two ways a dependency may occur:\n\n- Soft dependency: A chart may simply not function without another chart being\n  installed in a cluster. Helm does not provide tooling for this case. In this\n  case, dependencies may be managed separately.\n- Hard dependency: A chart may contain (inside of its `charts\/` directory)\n  another chart upon which it depends. In this case, installing the chart will\n  install all of its dependencies. In this case, a chart and its dependencies\n  are managed as a collection.\n\nWhen a chart is packaged (via `helm package`) all of its hard dependencies are\nbundled with it.\n\n## Chart Version\n\nCharts are versioned according to the [SemVer 2 spec](https:\/\/semver.org). A\nversion number is required on every chart.\n\n## Chart.yaml\n\nInformation about a chart is stored in a special file called `Chart.yaml`. Every\nchart must have this file.\n\n## Helm (and helm)\n\nHelm is the package manager for Kubernetes. 
As an operating system package\nmanager makes it easy to install tools on an OS, Helm makes it easy to install\napplications and resources into Kubernetes clusters.\n\nWhile _Helm_ is the name of the project, the command line client is also named\n`helm`. By convention, when speaking of the project, _Helm_ is capitalized. When\nspeaking of the client, _helm_ is in lowercase.\n\n## Helm Configuration Files (XDG)\n\nHelm stores its configuration files in XDG directories. These directories are\ncreated the first time `helm` is run.\n\n## Kube Config (KUBECONFIG)\n\nThe Helm client learns about Kubernetes clusters by using files in the _Kube\nconfig_ file format. By default, Helm attempts to find this file in the place\nwhere `kubectl` creates it (`$HOME\/.kube\/config`).\n\n## Lint (Linting)\n\nTo _lint_ a chart is to validate that it follows the conventions and\nrequirements of the Helm chart standard. Helm provides tools to do this, notably\nthe `helm lint` command.\n\n## Provenance (Provenance file)\n\nHelm charts may be accompanied by a _provenance file_ which provides information\nabout where the chart came from and what it contains.\n\nProvenance files are one part of the Helm security story. A provenance contains\na cryptographic hash of the chart archive file, the Chart.yaml data, and a\nsignature block (an OpenPGP \"clearsign\" block). 
When coupled with a keychain,\nthis provides chart users with the ability to:\n\n- Validate that a chart was signed by a trusted party\n- Validate that the chart file has not been tampered with\n- Validate the contents of a chart metadata (`Chart.yaml`)\n- Quickly match a chart to its provenance data\n\nProvenance files have the `.prov` extension, and can be served from a chart\nrepository server or any other HTTP server.\n\n## Release\n\nWhen a chart is installed, the Helm library creates a _release_ to track that\ninstallation.\n\nA single chart may be installed many times into the same cluster, and create\nmany different releases. For example, one can install three PostgreSQL databases\nby running `helm install` three times with a different release name.\n\n## Release Number (Release Version)\n\nA single release can be updated multiple times. A sequential counter is used to\ntrack releases as they change. After a first `helm install`, a release will have\n_release number_ 1. Each time a release is upgraded or rolled back, the release\nnumber will be incremented.\n\n## Rollback\n\nA release can be upgraded to a newer chart or configuration. But since release\nhistory is stored, a release can also be _rolled back_ to a previous release\nnumber. 
This is done with the `helm rollback` command.\n\nImportantly, a rolled back release will receive a new release number.\n\n| Operation  | Release Number                                       |\n|------------|------------------------------------------------------|\n| install    | release 1                                            |\n| upgrade    | release 2                                            |\n| upgrade    | release 3                                            |\n| rollback 1 | release 4 (but running the same config as release 1) |\n\nThe above table illustrates how release numbers increment across install,\nupgrade, and rollback.\n\n## Helm Library (or SDK)\n\nThe Helm Library (or SDK) refers to the Go code that interacts directly with the\nKubernetes API server to install, upgrade, query, and remove Kubernetes\nresources. It can be imported into a project to use Helm as a client library\ninstead of a CLI.\n\n## Repository (Repo, Chart Repository)\n\nHelm charts may be stored on dedicated HTTP servers called _chart repositories_\n(_repositories_, or just _repos_).\n\nA chart repository server is a simple HTTP server that can serve an `index.yaml`\nfile that describes a batch of charts, and provides information on where each\nchart can be downloaded from. (Many chart repositories serve the charts as well\nas the `index.yaml` file.)\n\nA Helm client can point to zero or more chart repositories. By default, Helm\nclients are not configured with any chart repositories. Chart repositories can\nbe added at any time using the `helm repo add` command.\n\n## Chart Registry (OCI-based Registry)\n\nA Helm Chart Registry is an [OCI-based](https:\/\/opencontainers.org\/about\/overview\/) storage and distribution system that is used to host and share Helm chart packages. 
For more information, see the [Helm documentation on registries](https:\/\/helm.sh\/docs\/topics\/registries\/).\n\n## Values (Values Files, values.yaml)\n\nValues provide a way to override template defaults with your own information.\n\nHelm Charts are \"parameterized\", which means the chart developer may expose\nconfiguration that can be overridden at installation time. For example, a chart\nmay expose a `username` field that allows setting a user name for a service.\n\nThese exposed variables are called _values_ in Helm parlance.\n\nValues can be set during `helm install` and `helm upgrade` operations, either by\npassing them in directly, or by using a `values.yaml` file.","site":"helm"}
{"questions":"helm Describe how to use Chart Releaser Action to automate releasing charts through GitHub pages workflow to turn a GitHub project into a self hosted Helm chart repo using This guide describes how to use Chart Releaser releasing charts through GitHub pages Chart Releaser Action is a GitHub Action Action https github com marketplace actions helm chart releaser to automate title Chart Releaser Action to Automate GitHub Page Charts weight 3","answers":"---\ntitle: \"Chart Releaser Action to Automate GitHub Page Charts\"\ndescription: \"Describe how to use Chart Releaser Action to automate releasing charts through GitHub pages.\"\nweight: 3\n---\n\nThis guide describes how to use [Chart Releaser\nAction](https:\/\/github.com\/marketplace\/actions\/helm-chart-releaser) to automate\nreleasing charts through GitHub pages.  Chart Releaser Action is a GitHub Action\nworkflow to turn a GitHub project into a self-hosted Helm chart repo, using\nthe [helm\/chart-releaser](https:\/\/github.com\/helm\/chart-releaser) CLI tool.\n\n## Repository Changes\n\nCreate a Git repository under your GitHub organization.  You could give the name\nof the repository as `helm-charts`, though other names are also acceptable.  The\nsources of all the charts can be placed under the `main` branch.  The charts\nshould be placed under the `\/charts` directory at the top-level of the directory\ntree.\n\nThere should be another branch named `gh-pages` to publish the charts.  The\nchanges to that branch will be automatically created by the Chart Releaser\nAction described here.  However, you can create that `gh-pages` branch and add a\n`README.md` file, which is going to be visible to the users visiting the page.\n\nYou can add instructions in the `README.md` for charts installation like this\n(replace `<alias>`, `<orgname>`, and `<chart-name>`):\n\n```\n## Usage\n\n[Helm](https:\/\/helm.sh) must be installed to use the charts.  
Please refer to\nHelm's [documentation](https:\/\/helm.sh\/docs) to get started.\n\nOnce Helm has been set up correctly, add the repo as follows:\n\n  helm repo add <alias> https:\/\/<orgname>.github.io\/helm-charts\n\nIf you had already added this repo earlier, run `helm repo update` to retrieve\nthe latest versions of the packages.  You can then run `helm search repo\n<alias>` to see the charts.\n\nTo install the <chart-name> chart:\n\n    helm install my-<chart-name> <alias>\/<chart-name>\n\nTo uninstall the chart:\n\n    helm delete my-<chart-name>\n```\n\nThe charts will be published to a website with a URL like this:\n\n    https:\/\/<orgname>.github.io\/helm-charts\n\n## GitHub Actions Workflow\n\nCreate a GitHub Actions workflow file in the `main` branch at\n`.github\/workflows\/release.yml`:\n\n```\nname: Release Charts\n\non:\n  push:\n    branches:\n      - main\n\njobs:\n  release:\n    permissions:\n      contents: write\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout\n        uses: actions\/checkout@v4\n        with:\n          fetch-depth: 0\n\n      - name: Configure Git\n        run: |\n          git config user.name \"$GITHUB_ACTOR\"\n          git config user.email \"$GITHUB_ACTOR@users.noreply.github.com\"\n\n      - name: Run chart-releaser\n        uses: helm\/chart-releaser-action@v1.6.0\n        env:\n          CR_TOKEN: \"${{ secrets.GITHUB_TOKEN }}\"\n```\n\nThe above configuration uses\n[@helm\/chart-releaser-action](https:\/\/github.com\/helm\/chart-releaser-action) to\nturn your GitHub project into a self-hosted Helm chart repo.  
It does this -\nduring every push to main - by checking each chart in your project, and whenever\nthere's a new chart version, creates a corresponding GitHub release named for\nthe chart version, adds Helm chart artifacts to the release, and creates or\nupdates an `index.yaml` file with metadata about those releases, which is then\nhosted on GitHub pages.\n\nThe Chart Releaser Action version number used in the above example is `v1.6.0`.\nYou can change it to the [latest available\nversion](https:\/\/github.com\/helm\/chart-releaser-action\/releases).\n\nNote: The Chart Releaser Action is almost always used in tandem with the [Helm Testing\nAction](https:\/\/github.com\/marketplace\/actions\/helm-chart-testing) and [Kind\nAction](https:\/\/github.com\/marketplace\/actions\/kind-cluster).","site":"helm"}
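The chart-releaser-action step in the workflow above also accepts inputs for non-default repository layouts. As a hedged sketch, if your charts live somewhere other than the top-level `/charts` directory, the step might look like this (the `charts_dir` input name is taken from the action's README; verify it against the version you pin, and the `helm/charts` path is hypothetical):

```yaml
      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1.6.0
        with:
          charts_dir: helm/charts   # hypothetical non-default chart location
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
```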
{"questions":"helm Covers some of the tips and tricks Helm chart developers have learned while building production quality charts aliases docs chartstipsandtricks while building production quality charts This guide covers some of the tips and tricks Helm chart developers have learned title Chart Development Tips and Tricks weight 1","answers":"---\ntitle: \"Chart Development Tips and Tricks\"\ndescription: \"Covers some of the tips and tricks Helm chart developers have learned while building production-quality charts.\"\nweight: 1\naliases: [\"\/docs\/charts_tips_and_tricks\/\"]\n---\n\nThis guide covers some of the tips and tricks Helm chart developers have learned\nwhile building production-quality charts.\n\n## Know Your Template Functions\n\nHelm uses [Go templates](https:\/\/godoc.org\/text\/template) for templating your\nresource files. While Go ships several built-in functions, we have added many\nothers.\n\nFirst, we added all of the functions in the [Sprig\nlibrary](https:\/\/masterminds.github.io\/sprig\/), except `env` and `expandenv`, for security reasons.\n\nWe also added two special template functions: `include` and `required`. The\n`include` function allows you to bring in another template, and then pass the\nresults to other template functions.\n\nFor example, this template snippet includes a template called `mytpl`, then\nlowercases the result, then wraps that in double quotes.\n\n```yaml\nvalue: {{ include \"mytpl\" . | lower | quote }}\n```\n\nThe `required` function allows you to declare a particular values entry as\nrequired for template rendering.  
If the value is empty, the template rendering\nwill fail with a user-submitted error message.\n\nThe following example of the `required` function declares that an entry for\n`.Values.who` is required, and will print an error message when that entry is\nmissing:\n\n```yaml\nvalue: {{ required \"A valid .Values.who entry required!\" .Values.who }}\n```\n\n## Quote Strings, Don't Quote Integers\n\nWhen you are working with string data, you are always safer quoting the strings\nthan leaving them as bare words:\n\n```yaml\nname: {{ .Values.MyName | quote }}\n```\n\nBut when working with integers _do not quote the values._ That can, in many\ncases, cause parsing errors inside of Kubernetes.\n\n```yaml\nport: {{ .Values.Port }}\n```\n\nThis remark does not apply to env variable values which are expected to be\nstrings, even if they represent integers:\n\n```yaml\nenv:\n  - name: HOST\n    value: \"http:\/\/host\"\n  - name: PORT\n    value: \"1234\"\n```\n\n## Using the 'include' Function\n\nGo provides a way of including one template in another using a built-in\n`template` directive. However, the built-in function cannot be used in Go\ntemplate pipelines.\n\nTo make it possible to include a template, and then perform an operation on that\ntemplate's output, Helm has a special `include` function:\n\n```\n{{ include \"toYaml\" $value | indent 2 }}\n```\n\nThe above includes a template called `toYaml`, passes it `$value`, and then\npasses the output of that template to the `indent` function.\n\nBecause YAML ascribes significance to indentation levels and whitespace, this is\none great way to include snippets of code, but handle indentation in a relevant\ncontext.\n\n## Using the 'required' function\n\nGo provides a way for setting template options to control behavior when a map is\nindexed with a key that's not present in the map. This is typically set with\n`template.Options(\"missingkey=option\")`, where `option` can be `default`,\n`zero`, or `error`. While setting this option to error will stop execution with\nan error, this would apply to every missing key in the map. 
There may be\nsituations where a chart developer wants to enforce this behavior for select\nvalues in the `values.yaml` file.\n\nThe `required` function gives developers the ability to declare a value entry as\nrequired for template rendering. If the entry is empty in `values.yaml`, the\ntemplate will not render and will return an error message supplied by the\ndeveloper.\n\nFor example:\n\n```\n{{ required \"A valid foo is required!\" .Values.foo }}\n```\n\nThe above will render the template when `.Values.foo` is defined, but will fail\nto render and exit when `.Values.foo` is undefined.\n\n## Using the 'tpl' Function\n\nThe `tpl` function allows developers to evaluate strings as templates inside a\ntemplate. This is useful to pass a template string as a value to a chart or\nrender external configuration files. Syntax: `{{ tpl TEMPLATE_STRING VALUES }}`\n\nExamples:\n\n```yaml\n# values\ntemplate: \"{{ .Values.name }}\"\nname: \"Tom\"\n\n# template\n{{ tpl .Values.template . }}\n\n# output\nTom\n```\n\nRendering an external configuration file:\n\n```yaml\n# external configuration file conf\/app.conf\nfirstName={{ .Values.firstName }}\nlastName={{ .Values.lastName }}\n\n# values\nfirstName: Peter\nlastName: Parker\n\n# template\n{{ tpl (.Files.Get \"conf\/app.conf\") . }}\n\n# output\nfirstName=Peter\nlastName=Parker\n```\n\n## Creating Image Pull Secrets\nImage pull secrets are essentially a combination of _registry_, _username_, and\n_password_.  You may need them in an application you are deploying, but to\ncreate them requires running `base64` a couple of times.  We can write a helper\ntemplate to compose the Docker configuration file for use as the Secret's\npayload.  
Here is an example:\n\nFirst, assume that the credentials are defined in the `values.yaml` file like\nso:\n```yaml\nimageCredentials:\n  registry: quay.io\n  username: someone\n  password: sillyness\n  email: someone@host.com\n```\n\nWe then define our helper template as follows:\n```\n{{- define \"imagePullSecret\" }}\n{{- with .Values.imageCredentials }}\n{{- printf \"{\\\"auths\\\":{\\\"%s\\\":{\\\"username\\\":\\\"%s\\\",\\\"password\\\":\\\"%s\\\",\\\"email\\\":\\\"%s\\\",\\\"auth\\\":\\\"%s\\\"}}}\" .registry .username .password .email (printf \"%s:%s\" .username .password | b64enc) | b64enc }}\n{{- end }}\n{{- end }}\n```\n\nFinally, we use the helper template in a larger template to create the Secret\nmanifest:\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: myregistrykey\ntype: kubernetes.io\/dockerconfigjson\ndata:\n  .dockerconfigjson: {{ template \"imagePullSecret\" . }}\n```\n\n## Automatically Roll Deployments\n\nOften times ConfigMaps or Secrets are injected as configuration files in\ncontainers or there are other external dependency changes that require rolling\npods. Depending on the application a restart may be required should those be\nupdated with a subsequent `helm upgrade`, but if the deployment spec itself\ndidn't change the application keeps running with the old configuration resulting\nin an inconsistent deployment.\n\nThe `sha256sum` function can be used to ensure a deployment's annotation section\nis updated if another file changes:\n\n```yaml\nkind: Deployment\nspec:\n  template:\n    metadata:\n      annotations:\n        checksum\/config: {{ include (print $.Template.BasePath \"\/configmap.yaml\") . | sha256sum }}\n[...]\n```\n\nNOTE: If you're adding this to a library chart you won't be able to access your\nfile in `$.Template.BasePath`. 
Instead you can reference your definition with\n``.\n\nIn the event you always want to roll your deployment, you can use a similar\nannotation step as above, instead replacing with a random string so it always\nchanges and causes the deployment to roll:\n\n```yaml\nkind: Deployment\nspec:\n  template:\n    metadata:\n      annotations:\n        rollme: {{ randAlphaNum 5 | quote }}\n[...]\n```\n\nEach invocation of the template function will generate a unique random string.\nThis means that if it's necessary to sync the random strings used by multiple\nresources, all relevant resources will need to be in the same template file.\n\nBoth of these methods allow your Deployment to leverage the built-in update\nstrategy logic to avoid taking downtime.\n\nNOTE: In the past we recommended using the `--recreate-pods` flag as another\noption. This flag has been marked as deprecated in Helm 3 in favor of the more\ndeclarative method above.\n\n## Tell Helm Not To Uninstall a Resource\n\nSometimes there are resources that should not be uninstalled when Helm runs a\n`helm uninstall`. Chart developers can add an annotation to a resource to\nprevent it from being uninstalled.\n\n```yaml\nkind: Secret\nmetadata:\n  annotations:\n    helm.sh\/resource-policy: keep\n[...]\n```\n\nThe annotation `helm.sh\/resource-policy: keep` instructs Helm to skip deleting\nthis resource when a helm operation (such as `helm uninstall`, `helm upgrade` or\n`helm rollback`) would result in its deletion. _However_, this resource becomes\norphaned. Helm will no longer manage it in any way. This can lead to problems if\nusing `helm install --replace` on a release that has already been uninstalled,\nbut has kept resources.\n\n## Using \"Partials\" and Template Includes\n\nSometimes you want to create some reusable parts in your chart, whether they're\nblocks or template partials. 
And often, it's cleaner to keep these in their own\nfiles.\n\nIn the `templates\/` directory, any file that begins with an underscore (`_`) is\nnot expected to output a Kubernetes manifest file. So by convention, helper\ntemplates and partials are placed in a `_helpers.tpl` file.\n\n## Complex Charts with Many Dependencies\n\nMany of the charts in the CNCF [Artifact\nHub](https:\/\/artifacthub.io\/packages\/search?kind=0) are \"building blocks\" for\ncreating more advanced applications. But charts may be used to create instances\nof large-scale applications. In such cases, a single umbrella chart may have\nmultiple subcharts, each of which functions as a piece of the whole.\n\nThe current best practice for composing a complex application from discrete\nparts is to create a top-level umbrella chart that exposes the global\nconfigurations, and then use the `charts\/` subdirectory to embed each of the\ncomponents.\n\n## YAML is a Superset of JSON\n\nAccording to the YAML specification, YAML is a superset of JSON. That means that\nany valid JSON structure ought to be valid in YAML.\n\nThis has an advantage: Sometimes template developers may find it easier to\nexpress a data structure with a JSON-like syntax rather than deal with YAML's\nwhitespace sensitivity.\n\nAs a best practice, templates should follow a YAML-like syntax _unless_ the JSON\nsyntax substantially reduces the risk of a formatting issue.\n\n## Be Careful with Generating Random Values\n\nThere are functions in Helm that allow you to generate random data,\ncryptographic keys, and so on. These are fine to use. But be aware that during\nupgrades, templates are re-executed. When a template run generates data that\ndiffers from the last run, that will trigger an update of that resource.\n\n## Install or Upgrade a Release with One Command\n\nHelm provides a way to perform an install-or-upgrade as a single command. Use\n`helm upgrade` with the `--install` flag. 
This will cause Helm to see if the\nrelease is already installed. If not, it will run an install. If it is, then the\nexisting release will be upgraded.\n\n```console\n$ helm upgrade --install <release name> --values <values file> <chart directory>\n```","site":"helm"}
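The `imagePullSecret` helper in the record above is just string formatting plus two rounds of base64 encoding. As a minimal sketch, the same payload can be composed by hand in the shell, which is a handy way to check what the template emits; the credentials are the sample values from the `values.yaml` snippet, and `-w 0` (disable line wrapping) assumes GNU coreutils `base64`:

```shell
# Compose a .dockerconfigjson payload the way the chart helper does:
# 1. base64-encode "username:password" for the auth field,
# 2. build the auths JSON document with printf,
# 3. base64-encode the whole document for the Secret's data field.
registry="quay.io"
username="someone"
password="sillyness"
email="someone@host.com"

auth=$(printf '%s:%s' "$username" "$password" | base64)
config=$(printf '{"auths":{"%s":{"username":"%s","password":"%s","email":"%s","auth":"%s"}}}' \
  "$registry" "$username" "$password" "$email" "$auth")

printf '%s' "$config" | base64 -w 0
```

Piping the final output through `base64 -d` should round-trip back to the JSON document, which is what Kubernetes decodes when pulling from the registry.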
{"questions":"helm aliases docs localization weight 5 Instructions for localizing the Helm documentation Getting Started title Localizing Helm Documentation This guide explains how to localize the Helm documentation","answers":"---\ntitle: \"Localizing Helm Documentation\"\ndescription: \"Instructions for localizing the Helm documentation.\"\naliases: [\"\/docs\/localization\/\"]\nweight: 5\n---\n\nThis guide explains how to localize the Helm documentation.\n\n## Getting Started\n\nContributions for translations use the same process as contributions for\ndocumentation. Translations are supplied through [pull\nrequests](https:\/\/help.github.com\/en\/github\/collaborating-with-issues-and-pull-requests\/about-pull-requests)\nto the [helm-www](https:\/\/github.com\/helm\/helm-www) git repository and pull\nrequests are reviewed by the team that manages the website.\n\n### Two-letter Language Code\n\nDocumentation is organized by the [ISO 639-1\nstandard](https:\/\/www.loc.gov\/standards\/iso639-2\/php\/code_list.php) for the\nlanguage codes. For example, the two-letter code for Korean is `ko`.\n\nIn content and configuration you will find the language code in use. Here are 3\nexamples:\n\n- In the `content` directory the language codes are the subdirectories and the\n  localized content for the language is in each directory. Primarily in the\n  `docs` subdirectory of each language code directory.\n- The `i18n` directory contains a configuration file for each language with\n  phrases used on the website. 
The files are named with the pattern `[LANG].toml`\n  where `[LANG]` is the two letter language code.\n- In the top level `config.toml` file there is configuration for navigation and\n  other details organized by language code.\n\nEnglish, with a language code of `en`, is the default language and source for\ntranslations.\n\n### Fork, Branch, Change, Pull Request\n\nTo contribute translations start by [creating a\nfork](https:\/\/help.github.com\/en\/github\/getting-started-with-github\/fork-a-repo)\nof the [helm-www repository](https:\/\/github.com\/helm\/helm-www) on GitHub. You\nwill start by committing the changes to your fork.\n\nBy default your fork will be set to work on the default branch known as `main`.\nPlease use branches to develop your changes and create pull requests. If you are\nunfamiliar with branches you can [read about them in the GitHub\ndocumentation](https:\/\/help.github.com\/en\/github\/collaborating-with-issues-and-pull-requests\/about-branches).\n\nOnce you have a branch make changes to add translations and localize the content\nto a language.\n\nNote, Helm uses a [Developers Certificate of\nOrigin](https:\/\/developercertificate.org\/). All commits need to have signoff.\nWhen making a commit you can use the `-s` or `--signoff` flag to use your Git\nconfigured name and email address to signoff on the commit. More details are\navailable in the\n[CONTRIBUTING.md](https:\/\/github.com\/helm\/helm-www\/blob\/main\/CONTRIBUTING.md#sign-your-work)\nfile.\n\nWhen you are ready, create a [pull\nrequest](https:\/\/help.github.com\/en\/github\/collaborating-with-issues-and-pull-requests\/about-pull-requests)\nwith the translation back to the helm-www repository.\n\nOnce a pull request has been created one of the maintainers will review it.\nDetails on that process are in the\n[CONTRIBUTING.md](https:\/\/github.com\/helm\/helm-www\/blob\/main\/CONTRIBUTING.md)\nfile.\n\n## Translating Content\n\nLocalizing all of the Helm content is a large task. 
It is ok to start small. The\ntranslations can be expanded over time.\n\n### Starting A New Language\n\nWhen starting a new language there is a minimum needed. This includes:\n\n- Adding a `content\/[LANG]\/docs` directory containing an `_index.md` file. This\n  is the top level documentation landing page.\n- Creating a `[LANG].toml` file in the `i18n` directory. Initially you can copy\n  the `en.toml` file as a starting point.\n- Adding a section for the language to the `config.toml` file to expose the new\n  language. An existing language section can serve as a starting point.\n\n### Translating\n\nTranslated content needs to reside in the `content\/[LANG]\/docs` directory. It\nshould have the same URL as the English source. For example, to translate the\nintro into Korean it can be useful to copy the English source like:\n\n```sh\nmkdir -p content\/ko\/docs\/intro\ncp content\/en\/docs\/intro\/install.md content\/ko\/docs\/intro\/install.md\n```\n\nThe content in the new file can then be translated into the other language.\n\nDo not add an untranslated copy of an English file to `content\/[LANG]\/`.\nOnce a language exists on the site, any untranslated pages will redirect to\nEnglish automatically. Translation takes time, and you always want to be\ntranslating the most current version of the docs, not an outdated fork.\n\nMake sure you remove any `aliases` lines from the header section. A line like\n`aliases: [\"\/docs\/using_helm\/\"]` does not belong in the translations. Those\nare redirections for old links which don't exist for new pages.\n\nNote, translation tools can help with the process. This includes machine\ngenerated translations. 
Machine-generated translations should be edited or\notherwise reviewed for grammar and meaning by a native language speaker before\npublishing.\n\n\n## Navigating Between Languages\n\n![Screen Shot 2020-05-11 at 11 24 22\nAM](https:\/\/user-images.githubusercontent.com\/686194\/81597103-035de600-937a-11ea-9834-cd9dcef4e914.png)\n\nThe site global\n[config.toml](https:\/\/github.com\/helm\/helm-www\/blob\/main\/config.toml#L83L89)\nfile is where language navigation is configured.\n\nTo add a new language, add a new set of parameters using the [two-letter\nlanguage code](.\/localization\/#two-letter-language-code) defined above. Example:\n\n```\n# Korean\n[languages.ko]\ntitle = \"Helm\"\ndescription = \"Helm - The Kubernetes Package Manager.\"\ncontentDir = \"content\/ko\"\nlanguageName = \"\ud55c\uad6d\uc5b4 Korean\"\nweight = 1\n```\n\n## Resolving Internal Links\n\nTranslated content will sometimes include links to pages that only exist in\nanother language. This will result in site [build\nerrors](https:\/\/app.netlify.com\/sites\/helm-merge\/deploys). Example:\n\n```\n12:45:31 PM: htmltest started at 12:45:30 on app\n12:45:31 PM: ========================================================================\n12:45:31 PM: ko\/docs\/chart_template_guide\/accessing_files\/index.html\n12:45:31 PM:   hash does not exist --- ko\/docs\/chart_template_guide\/accessing_files\/index.html --> #basic-example\n12:45:31 PM: \u2718\u2718\u2718 failed in 197.566561ms\n12:45:31 PM: 1 error in 212 documents\n```\n\nTo resolve this, you need to check your content for internal links.\n\n* anchor links need to reflect the translated `id` value\n* internal page links need to be fixed\n\nFor internal pages that do not exist _(or have not been translated yet)_, the\nsite will not build until a correction is made. 
As a fallback, the URL can point\nto another language where that content _does_ exist as follows:\n\n`{{< relref path=\"\/docs\/topics\/library_charts.md\" lang=\"en\" >}}`\n\nSee the [Hugo Docs on cross references between\nlanguages](https:\/\/gohugo.io\/content-management\/cross-references\/#link-to-another-language-version)\nfor more info.","site":"helm","answers_cleaned":"    title   Localizing Helm Documentation  description   Instructions for localizing the Helm documentation   aliases     docs localization    weight  5      This guide explains how to localize the Helm documentation      Getting Started  Contributions for translations use the same process as contributions for documentation  Translations are supplied through  pull requests  https   help github com en github collaborating with issues and pull requests about pull requests  to the  helm www  https   github com helm helm www  git repository and pull requests are reviewed by the team that manages the website       Two letter Language Code  Documentation is organized by the  ISO 639 1 standard  https   www loc gov standards iso639 2 php code list php  for the language codes  For example  the two letter code for Korean is  ko    In content and configuration you will find the language code in use  Here are 3 examples     In the  content  directory the language codes are the subdirectories and the   localized content for the language is in each directory  Primarily in the    docs  subdirectory of each language code directory    The  i18n  directory contains a configuration file for each language with   phrases used on the website  The files are named with the pattern   LANG  toml    where   LANG   is the two letter language code    In the top level  config toml  file there is configuration for navigation and   other details organized by language code   English  with a language code of  en   is the default language and source for translations       Fork  Branch  Change  Pull Request  To contribute 
translations start by  creating a fork  https   help github com en github getting started with github fork a repo  of the  helm www repository  https   github com helm helm www  on GitHub  You will start by committing the changes to your fork   By default your fork will be set to work on the default branch known as  main   Please use branches to develop your changes and create pull requests  If you are unfamiliar with branches you can  read about them in the GitHub documentation  https   help github com en github collaborating with issues and pull requests about branches    Once you have a branch make changes to add translations and localize the content to a language   Note  Helm uses a  Developers Certificate of Origin  https   developercertificate org    All commits need to have signoff  When making a commit you can use the   s  or    signoff  flag to use your Git configured name and email address to signoff on the commit  More details are available in the  CONTRIBUTING md  https   github com helm helm www blob main CONTRIBUTING md sign your work  file   When you are ready  create a  pull request  https   help github com en github collaborating with issues and pull requests about pull requests  with the translation back to the helm www repository   Once a pull request has been created one of the maintainers will review it  Details on that process are in the  CONTRIBUTING md  https   github com helm helm www blob main CONTRIBUTING md  file      Translating Content  Localizing all of the Helm content is a large task  It is ok to start small  The translations can be expanded over time       Starting A New Language  When starting a new language there is a minimum needed  This includes     Adding a  content  LANG  docs  directory containing an   index md  file  This   is the top level documentation landing page    Creating a   LANG  toml  file in the  i18n  directory  Initially you can copy   the  en toml  file as a starting point    Adding a section for the language 
to the  config toml  file to expose the new   language  An existing language section can serve as a starting point       Translating  Translated content needs to reside in the  content  LANG  docs  directory  It should have the same URL as the English source  For example  to translate the intro into Korean it can be useful to copy the english source like      sh mkdir  p content ko docs intro cp content en docs intro install md content ko docs intro install md      The content in the new file can then be translated into the other language   Do not add an untranslated copy of an English file to  content  LANG     Once a language exists on the site  any untranslated pages will redirect to English automatically  Translation takes time  and you always want to be translating the most current version of the docs  not an outdated fork   Make sure you remove any  aliases  lines from the header section  A line like  aliases     docs using helm     does not belong in the translations  Those are redirections for old links which don t exist for new pages   Note  translation tools can help with the process  This includes machine generated translations  Machine generated translations should be edited or otherwise reviewing for grammar and meaning by a native language speaker before publishing       Navigating Between Languages    Screen Shot 2020 05 11 at 11 24 22 AM  https   user images githubusercontent com 686194 81597103 035de600 937a 11ea 9834 cd9dcef4e914 png   The site global  config toml  https   github com helm helm www blob main config toml L83L89  file is where language navigation is configured   To add a new language  add a new set of parameters using the  two letter language code    localization  two letter language code  defined above  Example         Korean  languages ko  title    Helm  description    Helm   The Kubernetes Package Manager   contentDir    content ko  languageName        Korean  weight   1         Resolving Internal Links  Translated content will 
sometimes include links to pages that only exist in another language  This will result in site  build errors  https   app netlify com sites helm merge deploys   Example       12 45 31 PM  htmltest started at 12 45 30 on app 12 45 31 PM                                                                           12 45 31 PM  ko docs chart template guide accessing files index html 12 45 31 PM    hash does not exist     ko docs chart template guide accessing files index html      basic example 12 45 31 PM      failed in 197 566561ms 12 45 31 PM  1 error in 212 documents      To resolve this  you need to check your content for internal links     anchor links need to reflect the translated  id  value   internal page links need to be fixed  For internal pages that do not exist   or have not been translated yet    the site will not build until a correction is made  As a fallback  the url can point to another language where that content  does  exist as follows      relref path   docs topics library charts md  lang  en      See the  Hugo Docs on cross references between languages  https   gohugo io content management cross references  link to another language version  for more info "}
{"questions":"helm A Maintainer s Guide to Releasing Helm weight 2 the best person to update this Time for a new Helm release As a Helm maintainer cutting a release you are title Release Checklist Checklist for maintainers when releasing the next version of Helm","answers":"---\ntitle: \"Release Checklist\"\ndescription: \"Checklist for maintainers when releasing the next version of Helm.\"\nweight: 2\n---\n\n# A Maintainer's Guide to Releasing Helm\n\nTime for a new Helm release! As a Helm maintainer cutting a release, you are\nthe best person to [update this\nrelease checklist](https:\/\/github.com\/helm\/helm-www\/blob\/main\/content\/en\/docs\/community\/release_checklist.md)\nshould your experiences vary from what's documented here.\n\nAll releases will be of the form vX.Y.Z where X is the major version number, Y\nis the minor version number and Z is the patch release number. This project\nstrictly follows [semantic versioning](https:\/\/semver.org\/) so following this\nstep is critical.\n\nHelm announces in advance the date of its next minor release. Every effort\nshould be made to respect the announced date.  Furthermore, when starting\nthe release process, the date for the next release should have been selected\nas it will be used in the release process.\n\nThese directions will cover initial configuration followed by the release\nprocess for three different kinds of releases:\n\n* Major Releases - released less frequently - have breaking changes\n* Minor Releases - released every 3 to 4 months - no breaking changes\n* Patch Releases - released monthly - do not require all steps in this guide\n\n[Initial Configuration](#initial-configuration)\n\n1. [Create the Release Branch](#1-create-the-release-branch)\n2. [Major\/Minor releases: Change the Version Number in Git](#2-majorminor-releases-change-the-version-number-in-git)\n3. [Major\/Minor releases: Commit and Push the Release Branch](#3-majorminor-releases-commit-and-push-the-release-branch)\n4. 
[Major\/Minor releases: Create a Release Candidate](#4-majorminor-releases-create-a-release-candidate)\n5. [Major\/Minor releases: Iterate on Successive Release Candidates](#5-majorminor-releases-iterate-on-successive-release-candidates)\n6. [Finalize the Release](#6-finalize-the-release)\n7. [Write the Release Notes](#7-write-the-release-notes)\n8. [PGP Sign the downloads](#8-pgp-sign-the-downloads)\n9. [Publish Release](#9-publish-release)\n10. [Update Docs](#10-update-docs)\n11. [Tell the Community](#11-tell-the-community)\n\n## Initial Configuration\n\n### Set Up Git Remote\n\nIt is important to note that this document assumes that the git remote in your\nrepository that corresponds to <https:\/\/github.com\/helm\/helm> is named\n\"upstream\". If yours is not (for example, if you've chosen to name it \"origin\"\nor something similar instead), be sure to adjust the listed snippets for your\nlocal environment accordingly. If you are not sure what your upstream remote is\nnamed, use a command like `git remote -v` to find out.\n\nIf you don't have an [upstream\nremote](https:\/\/docs.github.com\/en\/github\/collaborating-with-issues-and-pull-requests\/configuring-a-remote-for-a-fork)\n, you can add one using something like:\n\n```shell\ngit remote add upstream git@github.com:helm\/helm.git\n```\n\n### Set Up Environment Variables\n\nIn this doc, we are going to reference a few environment variables as well,\nwhich you may want to set for convenience. 
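As a concrete illustration of the variables this document references, here is what they might look like for a hypothetical v3.14.0 minor release (the version numbers are purely illustrative):

```shell
# Hypothetical v3.14.0 minor release: concrete values for the exports
# this guide references (illustration only, not a real release).
export RELEASE_NAME=v3.14.0
export RELEASE_BRANCH_NAME="release-3.14"
export RELEASE_CANDIDATE_NAME="$RELEASE_NAME-rc.1"

echo "$RELEASE_CANDIDATE_NAME"   # v3.14.0-rc.1
```

The same variables drive every later `git tag` and `git push` step, so it is worth double-checking them before proceeding.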
For major\/minor releases, use the\nfollowing:\n\n```shell\nexport RELEASE_NAME=vX.Y.0\nexport RELEASE_BRANCH_NAME=\"release-X.Y\"\nexport RELEASE_CANDIDATE_NAME=\"$RELEASE_NAME-rc.1\"\n```\n\nIf you are creating a patch release, use the following instead:\n\n```shell\nexport PREVIOUS_PATCH_RELEASE=vX.Y.Z\nexport RELEASE_NAME=vX.Y.Z+1\nexport RELEASE_BRANCH_NAME=\"release-X.Y\"\n```\n\n### Set Up Signing Key\n\nWe are also going to be adding security and verification of the release process\nby hashing the binaries and providing signature files. We perform this using\n[GitHub and\nGPG](https:\/\/help.github.com\/en\/articles\/about-commit-signature-verification).\nIf you do not have GPG already set up, you can follow these steps:\n\n1. [Install GPG](https:\/\/gnupg.org\/index.html)\n2. [Generate GPG\n   key](https:\/\/help.github.com\/en\/articles\/generating-a-new-gpg-key)\n3. [Add key to GitHub\n   account](https:\/\/help.github.com\/en\/articles\/adding-a-new-gpg-key-to-your-github-account)\n4. [Set signing key in\n   Git](https:\/\/help.github.com\/en\/articles\/telling-git-about-your-signing-key)\n\nOnce you have a signing key, you need to add it to the KEYS file at the root of\nthe repository. The instructions for adding it to the KEYS file are in the file.\nIf you have not done so already, you need to add your public key to the\nkeyserver network. If you use GnuPG you can follow the [instructions provided by\nDebian](https:\/\/debian-administration.org\/article\/451\/Submitting_your_GPG_key_to_a_keyserver).\n\n## 1. Create the Release Branch\n\n### Major\/Minor Releases\n\nMajor releases are for new feature additions and behavioral changes *that break\nbackwards compatibility*. Minor releases are for new feature additions that do\nnot break backwards compatibility. 
To create a major or minor release, start by\ncreating a `release-X.Y` branch from main.\n\n```shell\ngit fetch upstream\ngit checkout upstream\/main\ngit checkout -b $RELEASE_BRANCH_NAME\n```\n\nThis new branch is going to be the base for the release, which we are going to\niterate upon later.\n\nVerify that a [helm\/helm milestone](https:\/\/github.com\/helm\/helm\/milestones)\nfor the release exists on GitHub (creating it if necessary). Make sure PRs and\nissues for this release are in this milestone.\n\nFor major & minor releases, move on to step 2: [Major\/Minor releases: Change\nthe Version Number in Git](#2-majorminor-releases-change-the-version-number-in-git).\n\n### Patch releases\n\nPatch releases are a few critical cherry-picked fixes to existing releases.\nStart by creating a `release-X.Y` branch:\n\n```shell\ngit fetch upstream\ngit checkout -b $RELEASE_BRANCH_NAME upstream\/$RELEASE_BRANCH_NAME\n```\n\nFrom here, we can cherry-pick the commits we want to bring into the patch\nrelease:\n\n```shell\n# get the commit ids we want to cherry-pick\ngit log --oneline\n# cherry-pick the commits starting from the oldest one, without including merge commits\ngit cherry-pick -x <commit-id>\n```\n\nAfter the commits have been cherry-picked, the release branch needs to be pushed.\n\n```shell\ngit push upstream $RELEASE_BRANCH_NAME\n```\n\nPushing the branch will cause the tests to run. Make sure they pass prior to\ncreating the tag. This new tag is going to be the base for the patch release.\n\nCreating a [helm\/helm\nmilestone](https:\/\/github.com\/helm\/helm\/milestones) is optional for patch\nreleases.\n\nMake sure to check [GitHub Actions](https:\/\/github.com\/helm\/helm\/actions) to see\nthat the release passed CI before proceeding. Patch releases can skip steps 2-5\nand proceed to step 6 to [Finalize the Release](#6-finalize-the-release).\n\n## 2. 
Major\/Minor releases: Change the Version Number in Git\n\nWhen doing a major or minor release, make sure to update\n`internal\/version\/version.go` with the new release version.\n\n```shell\n$ git diff internal\/version\/version.go\ndiff --git a\/internal\/version\/version.go b\/internal\/version\/version.go\nindex 712aae64..c1ed191e 100644\n--- a\/internal\/version\/version.go\n+++ b\/internal\/version\/version.go\n@@ -30,7 +30,7 @@ var (\n        \/\/ Increment major number for new feature additions and behavioral changes.\n        \/\/ Increment minor number for bug fixes and performance enhancements.\n        \/\/ Increment patch number for critical fixes to existing releases.\n-       version = \"v3.3\"\n+       version = \"v3.4\"\n\n        \/\/ metadata is extra build time data\n        metadata = \"\"\n```\n\nIn addition to updating the version within the `version.go` file, you will also\nneed to update corresponding tests that are using that version number.\n\n* `cmd\/helm\/testdata\/output\/version.txt`\n* `cmd\/helm\/testdata\/output\/version-client.txt`\n* `cmd\/helm\/testdata\/output\/version-client-shorthand.txt`\n* `cmd\/helm\/testdata\/output\/version-short.txt`\n* `cmd\/helm\/testdata\/output\/version-template.txt`\n* `pkg\/chartutil\/capabilities_test.go`\n\n```shell\ngit add .\ngit commit -m \"bump version to $RELEASE_NAME\"\n```\n\nThis will update it for the $RELEASE_BRANCH_NAME only. You will also need to\npull this change into the main branch for when the next release is being\ncreated, as in [this example of 3.2 to\n3.3](https:\/\/github.com\/helm\/helm\/pull\/8411\/files), and add it to the milestone\nfor the next release.\n\n```shell\n# get the last commit id i.e. 
commit to bump the version\ngit log --format=\"%H\" -n 1\n\n# create new branch off main\ngit checkout main\ngit checkout -b bump-version-<release_version>\n\n# cherry pick the commit using id from first command\ngit cherry-pick -x <commit-id>\n\n# commit the change\ngit push origin bump-version-<release-version>\n```\n\n## 3. Major\/Minor releases: Commit and Push the Release Branch\n\nIn order for others to start testing, we can now push the release branch\nupstream and start the test process.\n\n```shell\ngit push upstream $RELEASE_BRANCH_NAME\n```\n\nMake sure to check [GitHub Actions](https:\/\/github.com\/helm\/helm\/actions) to see\nthat the release passed CI before proceeding.\n\nIf anyone is available, let others peer-review the branch before continuing to\nensure that all the proper changes have been made and all of the commits for the\nrelease are there.\n\n## 4. Major\/Minor releases: Create a Release Candidate\n\nNow that the release branch is out and ready, it is time to start creating and\niterating on release candidates.\n\n```shell\ngit tag --sign --annotate \"${RELEASE_CANDIDATE_NAME}\" --message \"Helm release ${RELEASE_CANDIDATE_NAME}\"\ngit push upstream $RELEASE_CANDIDATE_NAME\n```\n\nGitHub Actions will automatically create a tagged release image and client binary to\ntest with.\n\nFor testers, the process to start testing after GitHub Actions finishes building the\nartifacts involves the following steps to grab the client:\n\nlinux\/amd64, using \/bin\/bash:\n\n```shell\nwget https:\/\/get.helm.sh\/helm-$RELEASE_CANDIDATE_NAME-linux-amd64.tar.gz\n```\n\ndarwin\/amd64, using Terminal.app:\n\n```shell\nwget https:\/\/get.helm.sh\/helm-$RELEASE_CANDIDATE_NAME-darwin-amd64.tar.gz\n```\n\nwindows\/amd64, using PowerShell:\n\n```shell\nPS C:\\> Invoke-WebRequest -Uri \"https:\/\/get.helm.sh\/helm-$RELEASE_CANDIDATE_NAME-windows-amd64.tar.gz\" -OutFile \"helm-$ReleaseCandidateName-windows-amd64.tar.gz\"\n```\n\nThen, unpack and move the binary to 
somewhere on your $PATH, or move it\nsomewhere and add it to your $PATH (e.g. \/usr\/local\/bin\/helm for linux\/macOS,\nC:\\Program Files\\helm\\helm.exe for Windows).\n\n## 5. Major\/Minor releases: Iterate on Successive Release Candidates\n\nSpend several days explicitly investing time and resources to try and break helm\nin every possible way, documenting any findings pertinent to the release. This\ntime should be spent testing and finding ways in which the release might have\ncaused various features or upgrade environments to have issues, not coding.\nDuring this time, the release is in code freeze, and any additional code changes\nwill be pushed out to the next release.\n\nDuring this phase, the $RELEASE_BRANCH_NAME branch will keep evolving as you\nwill produce new release candidates. The frequency of new candidates is up to\nthe release manager: use your best judgement taking into account the severity of\nreported issues, testers' availability, and the release deadline date. Generally\nspeaking, it is better to let a release roll over the deadline than to ship a\nbroken release.\n\nEach time you'll want to produce a new release candidate, you will start by\nadding commits to the branch by cherry-picking from main:\n\n```shell\ngit cherry-pick -x <commit_id>\n```\n\nYou will also want to push the branch to GitHub and ensure it passes CI.\n\nAfter that, tag it and notify users of the new release candidate:\n\n```shell\nexport RELEASE_CANDIDATE_NAME=\"$RELEASE_NAME-rc.2\"\ngit tag --sign --annotate \"${RELEASE_CANDIDATE_NAME}\" --message \"Helm release ${RELEASE_CANDIDATE_NAME}\"\ngit push upstream $RELEASE_CANDIDATE_NAME\n```\n\nOnce pushed to GitHub, check to ensure the branch with this tag builds in CI.\n\nFrom here on just repeat this process, continuously testing until you're happy\nwith the release candidate. For a release candidate, we don't write the full notes,\nbut you can scaffold out some [release notes](#7-write-the-release-notes).\n\n## 6. 
Finalize the Release\n\nWhen you're finally happy with the quality of a release candidate, you can move\non and create the real thing. Double-check one last time to make sure everything\nis in order, then finally push the release tag.\n\n```shell\ngit checkout $RELEASE_BRANCH_NAME\ngit tag --sign --annotate \"${RELEASE_NAME}\" --message \"Helm release ${RELEASE_NAME}\"\ngit push upstream $RELEASE_NAME\n```\n\nVerify that the release succeeded in\n[GitHub Actions](https:\/\/github.com\/helm\/helm\/actions). If not, you will need to fix the\nrelease and push the release again.\n\nAs the CI job will take some time to run, you can move on to writing release\nnotes while you wait for it to complete.\n\n## 7. Write the Release Notes\n\nWe will auto-generate a changelog based on the commits that occurred during a\nrelease cycle, but it is usually more beneficial to the end-user if the release\nnotes are hand-written by a human being\/marketing team\/dog.\n\nIf you're releasing a major\/minor release, listing notable user-facing features\nis usually sufficient. For patch releases, do the same, but make note of the\nsymptoms and who is affected.\n\nThe release notes should include the version and planned date of the next release.\n\nAn example release note for a minor release would look like this:\n\n```markdown\n## vX.Y.Z\n\nHelm vX.Y.Z is a feature release. This release, we focused on <insert focal point>. 
Users are encouraged to upgrade for the best experience.\n\nThe community keeps growing, and we'd love to see you there!\n\n- Join the discussion in [Kubernetes Slack](https:\/\/kubernetes.slack.com):\n  - `#helm-users` for questions and just to hang out\n  - `#helm-dev` for discussing PRs, code, and bugs\n- Hang out at the Public Developer Call: Thursday, 9:30 Pacific via [Zoom](https:\/\/zoom.us\/j\/696660622)\n- Test, debug, and contribute charts: [Artifact Hub helm charts](https:\/\/artifacthub.io\/packages\/search?kind=0)\n\n## Notable Changes\n\n- Kubernetes 1.16 is now supported including new manifest apiVersions\n- Sprig was upgraded to 2.22\n\n## Installation and Upgrading\n\nDownload Helm X.Y. The common platform binaries are here:\n\n- [MacOS amd64](https:\/\/get.helm.sh\/helm-vX.Y.Z-darwin-amd64.tar.gz) ([checksum](https:\/\/get.helm.sh\/helm-vX.Y.Z-darwin-amd64.tar.gz.sha256sum) \/ CHECKSUM_VAL)\n- [Linux amd64](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-amd64.tar.gz) ([checksum](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-amd64.tar.gz.sha256sum) \/ CHECKSUM_VAL)\n- [Linux arm](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-arm.tar.gz) ([checksum](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-arm.tar.gz.sha256) \/ CHECKSUM_VAL)\n- [Linux arm64](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-arm64.tar.gz) ([checksum](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-arm64.tar.gz.sha256sum) \/ CHECKSUM_VAL)\n- [Linux i386](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-386.tar.gz) ([checksum](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-386.tar.gz.sha256) \/ CHECKSUM_VAL)\n- [Linux ppc64le](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-ppc64le.tar.gz) ([checksum](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-ppc64le.tar.gz.sha256sum) \/ CHECKSUM_VAL)\n- [Linux s390x](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-s390x.tar.gz) ([checksum](https:\/\/get.helm.sh\/helm-vX.Y.Z-linux-s390x.tar.gz.sha256sum) \/ CHECKSUM_VAL)\n- [Windows amd64](https:\/\/get.helm.sh\/helm-vX.Y.Z-windows-amd64.zip) 
([checksum](https:\/\/get.helm.sh\/helm-vX.Y.Z-windows-amd64.zip.sha256sum) \/ CHECKSUM_VAL)\n\nThe [Quickstart Guide](https:\/\/docs.helm.sh\/using_helm\/#quickstart-guide) will get you going from there. For **upgrade instructions** or detailed installation notes, check the [install guide](https:\/\/docs.helm.sh\/using_helm\/#installing-helm). You can also use a [script to install](https:\/\/raw.githubusercontent.com\/helm\/helm\/main\/scripts\/get-helm-3) on any system with `bash`.\n\n## What's Next\n\n- vX.Y.Z+1 will contain only bug fixes and is planned for <insert DATE>.\n- vX.Y+1.0 is the next feature release and is planned for <insert DATE>. This release will focus on ...\n\n## Changelog\n\n- chore(*): bump version to v2.7.0 08c1144f5eb3e3b636d9775617287cc26e53dba4 (Adam Reese)\n- fix circle not building tags f4f932fabd197f7e6d608c8672b33a483b4b76fa (Matthew Fisher)\n```\n\nA partially completed set of release notes including the changelog can be\ncreated by running the following commands:\n\n```shell\nexport VERSION=\"$RELEASE_NAME\"\nexport PREVIOUS_RELEASE=vX.Y.Z\nmake clean\nmake fetch-dist\nmake release-notes\n```\n\nThis will create a good baseline set of release notes in which you should just\nneed to fill out the **Notable Changes** and **What's next** sections.\n\nFeel free to add your voice to the release notes; it's nice for people to think\nwe're not all robots.\n\nYou should also double-check that the URLs and checksums are correct in the\nauto-generated release notes.\n\nOnce finished, go into GitHub to [helm\/helm\nreleases](https:\/\/github.com\/helm\/helm\/releases) and edit the release notes for\nthe tagged release with the notes written here.\nFor target branch, set to $RELEASE_BRANCH_NAME.\n\nIt is now worth getting other people to take a look at the release notes before\nthe release is published. Send a request out to\n[#helm-dev](https:\/\/kubernetes.slack.com\/messages\/C51E88VDG) for review. 
It is\nalways beneficial as it can be easy to miss something.\n\n## 8. PGP Sign the downloads\n\nWhile hashes verify that the content of the downloads is what\nwas generated, signed packages provide traceability of where the package came\nfrom.\n\nTo do this, run the following `make` commands:\n\n```shell\nexport VERSION=\"$RELEASE_NAME\"\nmake clean\t\t# if not already run\nmake fetch-dist\t# if not already run\nmake sign\n```\n\nThis will generate ASCII-armored signature files for each of the files pushed by\nCI.\n\nAll of the signature files (`*.asc`) need to be uploaded to the release on\nGitHub (attach binaries).\n\n## 9. Publish Release\n\nTime to make the release official!\n\nAfter the release notes are saved on GitHub, the CI build is completed, and\nyou've added the signature files to the release, you can hit \"Publish\" on\nthe release. This publishes the release, listing it as \"latest\", and shows this\nrelease on the front page of the [helm\/helm](https:\/\/github.com\/helm\/helm) repo.\n\n## 10. Update Docs\n\nThe [Helm website docs section](https:\/\/helm.sh\/docs) lists the Helm versions\nfor the docs. Major, minor, and patch versions need to be updated on the site.\nThe date for the next minor release is also published on the site and must be\nupdated.\nTo do that, create a pull request against the [helm-www\nrepository](https:\/\/github.com\/helm\/helm-www). In the `config.toml` file find\nthe proper `params.versions` section and update the Helm version, like in this\nexample of [updating the current\nversion](https:\/\/github.com\/helm\/helm-www\/pull\/676\/files).  
In the same\n`config.toml` file, update the `params.nextversion` section.\n\nClose the [helm\/helm milestone](https:\/\/github.com\/helm\/helm\/milestones) for\nthe release, if applicable.\n\nUpdate the [version\nskew](https:\/\/github.com\/helm\/helm-www\/blob\/main\/content\/en\/docs\/topics\/version_skew.md)\nfor major and minor releases.\n\nUpdate the release calendar [here](https:\/\/helm.sh\/calendar\/release):\n* create an entry for the next minor release with a reminder for that day at 5pm GMT\n* create an entry for the RC1 of the next minor release on the Monday of the week before the planned release, with a reminder for that day at 5pm GMT\n\n## 11. Tell the Community\n\nCongratulations! You're done. Go grab yourself a $DRINK_OF_CHOICE. You've earned\nit.\n\nAfter enjoying a nice $DRINK_OF_CHOICE, go forth and announce the new release\nin Slack and on Twitter with a link to the [release on\nGitHub](https:\/\/github.com\/helm\/helm\/releases).\n\nOptionally, write a blog post about the new release and showcase some of the new\nfeatures on there!","site":"helm","answers_cleaned":"    title   Release Checklist  description   Checklist for maintainers when releasing the next version of Helm   weight  2        A Maintainer s Guide to Releasing Helm  Time for a new Helm release  As a Helm maintainer cutting a release  you are the best person to  update this release checklist  https   github com helm helm www blob main content en docs community release checklist md  should your experiences vary from what s documented here   All releases will be of the form vX Y Z where X is the major version number  Y is the minor version number and Z is the patch release number  This project strictly follows  semantic versioning  https   semver org   so following this step is critical   Helm announces in advance the date of its next minor release  Every effort should be made to respect the announced date   Furthermore  when starting the release process  the date for the next 
release should have been selected, as it will be used in the release process.

These directions will cover initial configuration followed by the release process for three different kinds of releases:

- Major Releases - released less frequently - have breaking changes
- Minor Releases - released every 3 to 4 months - no breaking changes
- Patch Releases - released monthly - do not require all steps in this guide

[Initial Configuration](#initial-configuration)

1. [Create the Release Branch](#1-create-the-release-branch)
2. [Major/Minor releases: Change the Version Number in Git](#2-majorminor-releases-change-the-version-number-in-git)
3. [Major/Minor releases: Commit and Push the Release Branch](#3-majorminor-releases-commit-and-push-the-release-branch)
4. [Major/Minor releases: Create a Release Candidate](#4-majorminor-releases-create-a-release-candidate)
5. [Major/Minor releases: Iterate on Successive Release Candidates](#5-majorminor-releases-iterate-on-successive-release-candidates)
6. [Finalize the Release](#6-finalize-the-release)
7. [Write the Release Notes](#7-write-the-release-notes)
8. [PGP Sign the downloads](#8-pgp-sign-the-downloads)
9. [Publish Release](#9-publish-release)
10. [Update Docs](#10-update-docs)
11. [Tell the Community](#11-tell-the-community)

## Initial Configuration

### Set Up Git Remote

It is important to note that this document assumes that the git remote in your repository that corresponds to <https://github.com/helm/helm> is named `upstream`. If yours is not (for example, if you've chosen to name it `origin` or something similar instead), be sure to adjust the listed snippets for your local environment accordingly. If you are not sure what your upstream remote is named, use a command like `git remote -v` to find out.

If you don't have an [upstream remote](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/configuring-a-remote-for-a-fork), you can add one using something like:

```shell
git remote add upstream git@github.com:helm/helm.git
```

### Set Up Environment Variables

In this doc, we are going to reference a few environment variables as well, which you may want to set for convenience. For major/minor releases, use the following:

```shell
export RELEASE_NAME=vX.Y.0
export RELEASE_BRANCH_NAME="release-X.Y"
export RELEASE_CANDIDATE_NAME="$RELEASE_NAME-rc.1"
```

If you are creating a patch release, use the following instead:

```shell
export PREVIOUS_PATCH_RELEASE=vX.Y.Z
export RELEASE_NAME=vX.Y.Z+1
export RELEASE_BRANCH_NAME="release-X.Y"
```

### Set Up Signing Key

We are also going to be adding security and verification of the release process by hashing the binaries and providing signature files. We perform this using [GitHub and GPG](https://help.github.com/en/articles/about-commit-signature-verification). If you do not have GPG already set up, you can follow these steps:

1. [Install GPG](https://gnupg.org/index.html)
2. [Generate GPG key](https://help.github.com/en/articles/generating-a-new-gpg-key)
3. [Add key to GitHub account](https://help.github.com/en/articles/adding-a-new-gpg-key-to-your-github-account)
4. [Set signing key in Git](https://help.github.com/en/articles/telling-git-about-your-signing-key)

Once you have a signing key, you need to add it to the KEYS file at the root of the repository. The instructions for adding it to the KEYS file are in the file. If you have not done so already, you need to add your public key to the keyserver network. If you use GnuPG, you can follow the [instructions provided by Debian](https://debian-administration.org/article/451/Submitting_your_GPG_key_to_a_keyserver).

## 1. Create the Release Branch

### Major/Minor Releases

Major releases are for new feature additions and behavioral changes *that break backwards compatibility*. Minor releases are for new feature additions that do not break backwards compatibility. To create a major or minor release, start by creating a `release-X.Y` branch from main:

```shell
git fetch upstream
git checkout upstream/main
git checkout -b $RELEASE_BRANCH_NAME
```

This new branch is going to be the base for the release, which we are going to iterate upon later.

Verify that a [helm/helm milestone](https://github.com/helm/helm/milestones) for the release exists on GitHub (creating it if necessary). Make sure PRs and issues for this release are in this milestone.

For major/minor releases, move on to step 2: [Major/Minor releases: Change the Version Number in Git](#2-majorminor-releases-change-the-version-number-in-git).

### Patch releases

Patch releases are a few critical cherry-picked fixes to existing releases. Start by creating a `release-X.Y` branch:

```shell
git fetch upstream
git checkout -b $RELEASE_BRANCH_NAME upstream/$RELEASE_BRANCH_NAME
```

From here, we can cherry-pick the commits we want to bring into the patch release:

```shell
# get the commit ids we want to cherry-pick
git log --oneline
# cherry-pick the commits starting from the oldest one, without including merge commits
git cherry-pick -x <commit-id>
```

After the commits have been cherry-picked, the release branch needs to be pushed:

```shell
git push upstream $RELEASE_BRANCH_NAME
```

Pushing the branch will cause the tests to run. Make sure they pass prior to creating the tag. This new tag is going to be the base for the patch release.

Creating a [helm/helm milestone](https://github.com/helm/helm/milestones) is optional for patch releases.

Make sure to check [GitHub Actions](https://github.com/helm/helm/actions) to see that the release passed CI before proceeding. Patch releases can skip steps 2-5 and proceed to step 6 to [Finalize the Release](#6-finalize-the-release).

## 2. Major/Minor releases: Change the Version Number in Git

When doing a major or minor release, make sure to update `internal/version/version.go` with the new release version.

```shell
$ git diff internal/version/version.go
diff --git a/internal/version/version.go b/internal/version/version.go
index 712aae64..c1ed191e 100644
--- a/internal/version/version.go
+++ b/internal/version/version.go
@@ -30,7 +30,7 @@ var (
        // Increment major number for new feature additions and behavioral changes.
        // Increment minor number for bug fixes and performance enhancements.
        // Increment patch number for critical fixes to existing releases.
-       version = "v3.3"
+       version = "v3.4"

        // metadata is extra build time data
        metadata = ""
```

In addition to updating the version within the `version.go` file, you will also need to update corresponding tests that are using that version number:

- `cmd/helm/testdata/output/version.txt`
- `cmd/helm/testdata/output/version-client.txt`
- `cmd/helm/testdata/output/version-client-shorthand.txt`
- `cmd/helm/testdata/output/version-short.txt`
- `cmd/helm/testdata/output/version-template.txt`
- `pkg/chartutil/capabilities_test.go`

```shell
git add .
git commit -m "bump version to $RELEASE_NAME"
```

This will update it for the $RELEASE_BRANCH_NAME only. You will also need to pull this change into the main branch for when the next release is being created, as in [this example of 3.2 to 3.3](https://github.com/helm/helm/pull/8411/files), and add it to the milestone for the next release.

```shell
# get the last commit id, i.e. the commit to bump the version
git log --format="%H" -n 1

# create a new branch off main
git checkout main
git checkout -b bump-version-<release-version>

# cherry-pick the commit using the id from the first command
git cherry-pick -x <commit-id>

# commit the change
git push origin bump-version-<release-version>
```

## 3. Major/Minor releases: Commit and Push the Release Branch

In order for others to start testing, we can now push the release branch upstream and start the test process.

```shell
git push upstream $RELEASE_BRANCH_NAME
```

Make sure to check [GitHub Actions](https://github.com/helm/helm/actions) to see that the release passed CI before proceeding.

If anyone is available, let others peer-review the branch before continuing to ensure that all the proper changes have been made and all of the commits for the release are there.

## 4. Major/Minor releases: Create a Release Candidate

Now that the release branch is out and ready, it is time to start creating and iterating on release candidates.

```shell
git tag --sign --annotate "${RELEASE_CANDIDATE_NAME}" --message "Helm release ${RELEASE_CANDIDATE_NAME}"
git push upstream $RELEASE_CANDIDATE_NAME
```

GitHub Actions will automatically create a tagged release image and client binary to test with.

For testers, the process to start testing after GitHub Actions finishes building the artifacts involves the following steps to grab the client:

linux/amd64, using /bin/bash:

```shell
wget https://get.helm.sh/helm-$RELEASE_CANDIDATE_NAME-linux-amd64.tar.gz
```

darwin/amd64, using Terminal.app:

```shell
wget https://get.helm.sh/helm-$RELEASE_CANDIDATE_NAME-darwin-amd64.tar.gz
```

windows/amd64, using PowerShell:

```shell
PS C:\> Invoke-WebRequest -Uri "https://get.helm.sh/helm-$RELEASE_CANDIDATE_NAME-windows-amd64.tar.gz" -OutFile "helm-$ReleaseCandidateName-windows-amd64.tar.gz"
```

Then, unpack and move the binary to somewhere on your $PATH, or move it somewhere and add it to your $PATH (e.g. /usr/local/bin/helm for linux/macOS, C:\Program Files\helm\helm.exe for Windows).

## 5. Major/Minor releases: Iterate on Successive Release Candidates

Spend several days explicitly investing time and resources to try and break helm in every possible way, documenting any findings pertinent to the release. This time should be spent testing and finding ways in which the release might have caused various features or upgrade environments to have issues, not coding. During this time, the release is in code freeze, and any additional code changes will be pushed out to the next release.

During this phase, the $RELEASE_BRANCH_NAME branch will keep evolving as you will produce new release candidates. The frequency of new candidates is up to the release manager: use your best judgement taking into account the severity of reported issues, testers' availability, and the release deadline date. Generally speaking, it is better to let a release roll over the deadline than to ship a broken release.

Each time you'll want to produce a new release candidate, you will start by adding commits to the branch by cherry-picking from main:

```shell
git cherry-pick -x <commit_id>
```

You will also want to push the branch to GitHub and ensure it passes CI.

After that, tag it and notify users of the new release candidate:

```shell
export RELEASE_CANDIDATE_NAME="$RELEASE_NAME-rc.2"
git tag --sign --annotate "${RELEASE_CANDIDATE_NAME}" --message "Helm release ${RELEASE_CANDIDATE_NAME}"
git push upstream $RELEASE_CANDIDATE_NAME
```

Once pushed to GitHub, check to ensure the branch with this tag builds in CI.

From here on, just repeat this process, continuously testing until you're happy with the release candidate. For a release candidate, we don't write the full notes, but you can scaffold out some [release notes](#7-write-the-release-notes).

## 6. Finalize the Release

When you're finally happy with the quality of a release candidate, you can move on and create the real thing. Double-check one last time to make sure everything is in order, then finally push the release tag.

```shell
git checkout $RELEASE_BRANCH_NAME
git tag --sign --annotate "${RELEASE_NAME}" --message "Helm release ${RELEASE_NAME}"
git push upstream $RELEASE_NAME
```

Verify that the release succeeded in [GitHub Actions](https://github.com/helm/helm/actions). If not, you will need to fix the release and push the release again.

As the CI job will take some time to run, you can move on to writing release notes while you wait for it to complete.

## 7. Write the Release Notes

We will auto-generate a changelog based on the commits that occurred during a release cycle, but it is usually more beneficial to the end user if the release notes are hand-written by a human being/marketing team/dog.

If you're releasing a major/minor release, listing notable user-facing features is usually sufficient. For patch releases, do the same, but make note of the symptoms and who is affected.

The release notes should include the version and planned date of the next release.

An example release note for a minor release would look like this:

```markdown
## vX.Y.Z

Helm vX.Y.Z is a feature release. This release, we focused on <insert focal point>. Users are encouraged to upgrade for the best experience.

The community keeps growing, and we'd love to see you there!

- Join the discussion in [Kubernetes Slack](https://kubernetes.slack.com):
  - `#helm-users` for questions and just to hang out
  - `#helm-dev` for discussing PRs, code, and bugs
- Hang out at the Public Developer Call: Thursday, 9:30 Pacific via [Zoom](https://zoom.us/j/696660622)
- Test, debug, and contribute charts: [Artifact Hub helm charts](https://artifacthub.io/packages/search?kind=0)

## Notable Changes

- Kubernetes 1.16 is now supported including new manifest apiVersions
- Sprig was upgraded to 2.22

## Installation and Upgrading

Download Helm X.Y. The common platform binaries are here:

- [MacOS amd64](https://get.helm.sh/helm-vX.Y.Z-darwin-amd64.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-darwin-amd64.tar.gz.sha256sum) / CHECKSUM_VAL)
- [Linux amd64](https://get.helm.sh/helm-vX.Y.Z-linux-amd64.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-amd64.tar.gz.sha256sum) / CHECKSUM_VAL)
- [Linux arm](https://get.helm.sh/helm-vX.Y.Z-linux-arm.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-arm.tar.gz.sha256) / CHECKSUM_VAL)
- [Linux arm64](https://get.helm.sh/helm-vX.Y.Z-linux-arm64.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-arm64.tar.gz.sha256sum) / CHECKSUM_VAL)
- [Linux i386](https://get.helm.sh/helm-vX.Y.Z-linux-386.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-386.tar.gz.sha256) / CHECKSUM_VAL)
- [Linux ppc64le](https://get.helm.sh/helm-vX.Y.Z-linux-ppc64le.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-ppc64le.tar.gz.sha256sum) / CHECKSUM_VAL)
- [Linux s390x](https://get.helm.sh/helm-vX.Y.Z-linux-s390x.tar.gz) ([checksum](https://get.helm.sh/helm-vX.Y.Z-linux-s390x.tar.gz.sha256sum) / CHECKSUM_VAL)
- [Windows amd64](https://get.helm.sh/helm-vX.Y.Z-windows-amd64.zip) ([checksum](https://get.helm.sh/helm-vX.Y.Z-windows-amd64.zip.sha256sum) / CHECKSUM_VAL)

The [Quickstart Guide](https://docs.helm.sh/using_helm/#quickstart-guide) will get you going from there. For **upgrade instructions** or detailed installation notes, check the [install guide](https://docs.helm.sh/using_helm/#installing-helm). You can also use a [script to install](https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3) on any system with `bash`.

## What's Next

- vX.Y.Z+1 will contain only bug fixes and is planned for <insert DATE>.
- vX.Y+1.0 is the next feature release and is planned for <insert DATE>. This release will focus on ...

## Changelog

- chore(*): bump version to v2.7.0 08c1144f5eb3e3b636d9775617287cc26e53dba4 (Adam Reese)
- fix circle not building tags f4f932fabd197f7e6d608c8672b33a483b4b76fa (Matthew Fisher)
```

A partially completed set of release notes, including the changelog, can be created by running the following command:

```shell
export VERSION="$RELEASE_NAME"
export PREVIOUS_RELEASE=vX.Y.Z
make clean
make fetch-dist
make release-notes
```

This will create a good baseline set of release notes to which you should just need to fill out the **Notable Changes** and **What's next** sections.

Feel free to add your voice to the release notes; it's nice for people to think we're not all robots.

You should also double-check that the URLs and checksums are correct in the auto-generated release notes.

Once finished, go into GitHub to [helm/helm releases](https://github.com/helm/helm/releases) and edit the release notes for the tagged release with the notes written here. For target branch, set to $RELEASE_BRANCH_NAME.

It is now worth getting other people to take a look at the release notes before the release is published. Send a request out to [#helm-dev](https://kubernetes.slack.com/messages/C51E88VDG) for review. It is always beneficial, as it can be easy to miss something.

## 8. PGP Sign the downloads

While hashes provide a signature that the content of the downloads is what was generated, signed packages provide traceability of where the package came from.

To do this, run the following `make` commands:

```shell
export VERSION="$RELEASE_NAME"
make clean          # if not already run
make fetch-dist     # if not already run
make sign
```

This will generate ascii-armored signature files for each of the files pushed by CI.

All of the signature files (`*.asc`) need to be uploaded to the release on GitHub (attach binaries).

## 9. Publish Release

Time to make the release official!

After the release notes are saved on GitHub, the CI build is completed, and you've added the signature files to the release, you can hit "Publish" on the release. This publishes the release, listing it as "latest", and shows this release on the front page of the [helm/helm](https://github.com/helm/helm) repo.

## 10. Update Docs

The [Helm website docs section](https://helm.sh/docs) lists the Helm versions for the docs. Major, minor, and patch versions need to be updated on the site. The date for the next minor release is also published on the site and must be updated.

To do that, create a pull request against the [helm-www repository](https://github.com/helm/helm-www). In the `config.toml` file, find the proper `params.versions` section and update the Helm version, like in this example of [updating the current version](https://github.com/helm/helm-www/pull/676/files). In the same `config.toml` file, update the `params.nextversion` section.

Close the [helm/helm milestone](https://github.com/helm/helm/milestones) for the release, if applicable.

Update the [version skew](https://github.com/helm/helm-www/blob/main/content/en/docs/topics/version_skew.md) for major and minor releases.

Update the release calendar [here](https://helm.sh/calendar/release):

- create an entry for the next minor release with a reminder for that day at 5pm GMT
- create an entry for the RC1 of the next minor release on the Monday of the week before the planned release, with a reminder for that day at 5pm GMT

## 11. Tell the Community

Congratulations! You're done. Go grab yourself a $DRINK_OF_CHOICE. You've earned it.

After enjoying a nice $DRINK_OF_CHOICE, go forth and announce the new release in Slack and on Twitter with a link to the [release on GitHub](https://github.com/helm/helm/releases).

Optionally, write a blog post about the new release and showcase some of the new features on there!
"}
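The release-notes links in the guide above pair each platform tarball with a `.sha256sum` file that users are expected to verify. A minimal sketch of that verification, using a locally generated stand-in file instead of a real download (the `helm-vX.Y.Z-linux-amd64.tar.gz` name mirrors the placeholder naming in the guide; a real check would `wget` both files from https://get.helm.sh first):

```shell
# Sketch: verify a release artifact against its published checksum file.
# The tarball here is a fake stand-in so the example is self-contained.
set -e
workdir="$(mktemp -d)"
cd "$workdir"

# Stand-in for the downloaded helm-vX.Y.Z-linux-amd64.tar.gz
printf 'fake release payload' > helm-vX.Y.Z-linux-amd64.tar.gz

# A real release publishes this .sha256sum file alongside the tarball.
sha256sum helm-vX.Y.Z-linux-amd64.tar.gz > helm-vX.Y.Z-linux-amd64.tar.gz.sha256sum

# Users verify the download with --check; a match reports "<file>: OK".
sha256sum --check helm-vX.Y.Z-linux-amd64.tar.gz.sha256sum
```

The same pattern applies to every platform binary listed in the notes; only the filename changes.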
{"questions":"helm This guide explains how to set up your environment for developing on Helm aliases docs developers title Developer Guide Prerequisites weight 1 Instructions for setting up your environment for developing Helm","answers":"---\ntitle: \"Developer Guide\"\ndescription: \"Instructions for setting up your environment for developing Helm.\"\nweight: 1\naliases: [\"\/docs\/developers\/\"]\n---\n\nThis guide explains how to set up your environment for developing on Helm.\n\n## Prerequisites\n\n- The latest version of Go\n- A Kubernetes cluster w\/ kubectl (optional)\n- Git\n\n## Building Helm\n\nWe use Make to build our programs. The simplest way to get started is:\n\n```console\n$ make\n```\n\nIf required, this will first install dependencies and validate configuration. It will then compile `helm` and place it in\n`bin\/helm`.\n\nTo run Helm locally, you can run `bin\/helm`.\n\n- Helm is known to run on macOS and most Linux distributions, including Alpine.\n\n## Running tests\n\nTo run all the tests, run `make test`.\nAs a pre-requisite, you would need to have\n[golangci-lint](https:\/\/golangci-lint.run)\ninstalled.\n\n## Running Locally\n\nYou can update your path and add the path of your local helm binary. In an editor\nopen your shell config file. Add the following line making sure you replace\n`<path to your binary folder>` with your local bin directory.\n\n``` bash\nexport PATH=\"<path to your binary folder>:$PATH\"\n```\n\nThis will allow you to run the locally built version of helm from your terminal.\n\n## Contribution Guidelines\n\nWe welcome contributions. This project has set up some guidelines in order to\nensure that (a) code quality remains high, (b) the project remains consistent,\nand (c) contributions follow the open source legal requirements. 
Our intent is\nnot to burden contributors, but to build elegant and high-quality open source\ncode so that our users will benefit.\n\nMake sure you have read and understood the main CONTRIBUTING guide:\n\n<https:\/\/github.com\/helm\/helm\/blob\/main\/CONTRIBUTING.md>\n\n### Structure of the Code\n\nThe code for the Helm project is organized as follows:\n\n- The individual programs are located in `cmd\/`. Code inside of `cmd\/` is not\n  designed for library re-use.\n- Shared libraries are stored in `pkg\/`.\n- The `scripts\/` directory contains a number of utility scripts. Most of these\n  are used by the CI\/CD pipeline.\n\nGo dependency management is in flux, and it is likely to change during the\ncourse of Helm's lifecycle. We encourage developers to _not_ try to manually\nmanage dependencies. Instead, we suggest relying upon the project's `Makefile`\nto do that for you. With Helm 3, it is recommended that you are on Go version\n1.13 or later.\n\n### Writing Documentation\n\nSince Helm 3, documentation has been moved to its own repository. When writing\nnew features, please write accompanying documentation and submit it to the\n[helm-www](https:\/\/github.com\/helm\/helm-www) repository.\n\nOne exception: [Helm CLI output (in English)](https:\/\/helm.sh\/docs\/helm\/) is\ngenerated from the `helm` binary itself. See [Updating the Helm CLI Reference Docs](https:\/\/github.com\/helm\/helm-www#updating-the-helm-cli-reference-docs)\nfor instructions on how to generate this output. When translated, the CLI\noutput is not generated and can be found in `\/content\/<lang>\/docs\/helm`.\n\n### Git Conventions\n\nWe use Git for our version control system. The `main` branch is the home of\nthe current development candidate. Releases are tagged.\n\nWe accept changes to the code via GitHub Pull Requests (PRs). One workflow for\ndoing this is as follows:\n\n1. Fork the `github.com\/helm\/helm` repository into your GitHub account\n2. 
`git clone` the forked repository into your desired directory\n3. Create a new working branch (`git checkout -b feat\/my-feature`) and do your\n   work on that branch.\n4. When you are ready for us to review, push your branch to GitHub, and then\n   open a new pull request with us.\n\nFor Git commit messages, we follow the [Semantic Commit\nMessages](https:\/\/karma-runner.github.io\/0.13\/dev\/git-commit-msg.html):\n\n```\nfix(helm): add --foo flag to 'helm install'\n\nWhen 'helm install --foo bar' is run, this will print \"foo\" in the\noutput regardless of the outcome of the installation.\n\nCloses #1234\n```\n\nCommon commit types:\n\n- fix: Fix a bug or error\n- feat: Add a new feature\n- docs: Change documentation\n- test: Improve testing\n- ref: refactor existing code\n\nCommon scopes:\n\n- helm: The Helm CLI\n- pkg\/lint: The lint package. Follow a similar convention for any package\n- `*`: two or more scopes\n\nRead more:\n\n- The [Deis\n  Guidelines](https:\/\/github.com\/deis\/workflow\/blob\/master\/src\/contributing\/submitting-a-pull-request.md)\n  were the inspiration for this section.\n- Karma Runner\n  [defines](https:\/\/karma-runner.github.io\/0.13\/dev\/git-commit-msg.html) the\n  semantic commit message idea.\n\n### Go Conventions\n\nWe follow the Go coding style standards very closely. Typically, running `go\nfmt` will make your code beautiful for you.\n\nWe also typically follow the conventions recommended by `go lint` and\n`gometalinter`. Run `make test-style` to test the style conformance.\n\nRead more:\n\n- Effective Go [introduces\n  formatting](https:\/\/golang.org\/doc\/effective_go.html#formatting).\n- The Go Wiki has a great article on\n  [formatting](https:\/\/github.com\/golang\/go\/wiki\/CodeReviewComments).\n\nIf you run the `make test` target, not only will unit tests be run, but so will\nstyle tests. 
If the `make test` target fails, even for stylistic reasons, your\nPR will not be considered ready for merging.","site":"helm","answers_cleaned":"    title   Developer Guide  description   Instructions for setting up your environment for developing Helm   weight  1 aliases     docs developers         This guide explains how to set up your environment for developing on Helm      Prerequisites    The latest version of Go   A Kubernetes cluster w  kubectl  optional    Git     Building Helm  We use Make to build our programs  The simplest way to get started is      console   make      If required  this will first install dependencies and validate configuration  It will then compile  helm  and place it in  bin helm    To run Helm locally  you can run  bin helm      Helm is known to run on macOS and most Linux distributions  including Alpine      Running tests  To run all the tests  run  make test   As a pre requisite  you would need to have  golangci lint  https   golangci lint run  installed      Running Locally  You can update your path and add the path of your local helm binary  In an editor open your shell config file  Add the following line making sure you replace   path to your binary folder   with your local bin directory       bash export PATH   path to your binary folder   PATH       This will allow you to run the locally built version of helm from your terminal      Contribution Guidelines  We welcome contributions  This project has set up some guidelines in order to ensure that  a  code quality remains high   b  the project remains consistent  and  c  contributions follow the open source legal requirements  Our intent is not to burden contributors  but to build elegant and high quality open source code so that our users will benefit   Make sure you have read and understood the main CONTRIBUTING guide    https   github com helm helm blob main CONTRIBUTING md       Structure of the Code  The code for the Helm project is organized as follows     The individual 
programs are located in  cmd    Code inside of  cmd   is not   designed for library re use    Shared libraries are stored in  pkg      The  scripts   directory contains a number of utility scripts  Most of these   are used by the CI CD pipeline   Go dependency management is in flux  and it is likely to change during the course of Helm s lifecycle  We encourage developers to  not  try to manually manage dependencies  Instead  we suggest relying upon the project s  Makefile  to do that for you  With Helm 3  it is recommended that you are on Go version 1 13 or later       Writing Documentation  Since Helm 3  documentation has been moved to its own repository  When writing new features  please write accompanying documentation and submit it to the  helm www  https   github com helm helm www  repository   One exception   Helm CLI output  in English   https   helm sh docs helm   is generated from the  helm  binary itself  See  Updating the Helm CLI Reference Docs  https   github com helm helm www updating the helm cli reference docs  for instructions on how to generate this output  When translated  the CLI output is not generated and can be found in   content  lang  docs helm        Git Conventions  We use Git for our version control system  The  main  branch is the home of the current development candidate  Releases are tagged   We accept changes to the code via GitHub Pull Requests  PRs   One workflow for doing this is as follows   1  Fork the  github com helm helm  repository into your GitHub account 2   git clone  the forked repository into your desired directory 3  Create a new working branch   git checkout  b feat my feature   and do your    work on that branch  4  When you are ready for us to review  push your branch to GitHub  and then    open a new pull request with us   For Git commit messages  we follow the  Semantic Commit Messages  https   karma runner github io 0 13 dev git commit msg html        fix helm   add   foo flag to  helm install   When  helm 
install   foo bar  is run  this will print  foo  in the output regardless of the outcome of the installation   Closes  1234      Common commit types     fix  Fix a bug or error   feat  Add a new feature   docs  Change documentation   test  Improve testing   ref  refactor existing code  Common scopes     helm  The Helm CLI   pkg lint  The lint package  Follow a similar convention for any package        two or more scopes  Read more     The  Deis   Guidelines  https   github com deis workflow blob master src contributing submitting a pull request md    were the inspiration for this section    Karma Runner    defines  https   karma runner github io 0 13 dev git commit msg html  the   semantic commit message idea       Go Conventions  We follow the Go coding style standards very closely  Typically  running  go fmt  will make your code beautiful for you   We also typically follow the conventions recommended by  go lint  and  gometalinter   Run  make test style  to test the style conformance   Read more     Effective Go  introduces   formatting  https   golang org doc effective go html formatting     The Go Wiki has a great article on    formatting  https   github com golang go wiki CodeReviewComments    If you run the  make test  target  not only will unit tests be run  but so will style tests  If the  make test  target fails  even for stylistic reasons  your PR will not be considered ready for merging "}
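The Git Conventions section above describes a branch-then-commit workflow with semantic commit messages. A minimal sketch of that workflow in a throwaway repository (the branch name, file, and commit subject are illustrative, not from the Helm codebase):

```shell
# Sketch: working branch + semantic commit message, per the guide's
# Git Conventions, exercised in a disposable local repository.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.email "dev@example.com"   # local identity for the demo only
git config user.name "Helm Dev"

git checkout -q -b feat/my-feature        # step 3 of the PR workflow
echo 'demo' > CHANGES
git add CHANGES
# type(scope): summary, with the issue reference in the body
git commit -q -m "feat(helm): add demo change" -m "Closes #1234"

git log -1 --format=%s   # prints: feat(helm): add demo change
```

From here the guide's step 4 applies: push the branch to your fork and open a pull request.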
{"questions":"helm The Helm community has produced many extra tools plugins and documentation title Related Projects and Documentation weight 3 about Helm We love to hear about these projects aliases docs related third party tools plugins and documentation provided by the community","answers":"---\ntitle: \"Related Projects and Documentation\"\ndescription: \"third-party tools, plugins and documentation provided by the community!\"\nweight: 3\naliases: [\"\/docs\/related\/\"]\n---\n\nThe Helm community has produced many extra tools, plugins, and documentation\nabout Helm. We love to hear about these projects.\n\nIf you have anything you'd like to add to this list, please open an\n[issue](https:\/\/github.com\/helm\/helm-www\/issues) or [pull\nrequest](https:\/\/github.com\/helm\/helm-www\/pulls).\n\n## Helm Plugins\n\n- [helm-adopt](https:\/\/github.com\/HamzaZo\/helm-adopt) - A helm v3 plugin to adopt\n  existing k8s resources into a new generated helm chart.\n- [helm-chartsnap](https:\/\/github.com\/jlandowner\/helm-chartsnap) - Snapshot testing plugin for Helm charts.\n- [Helm Diff](https:\/\/github.com\/databus23\/helm-diff) - Preview `helm upgrade`\n  as a coloured diff\n- [Helm Dt](https:\/\/github.com\/vmware-labs\/distribution-tooling-for-helm) - Plugin that helps distributing Helm charts across OCI registries and on Air gap environments\n- [Helm Dashboard](https:\/\/github.com\/komodorio\/helm-dashboard) - GUI for Helm, visualize releases and repositories, manifest diffs\n- [helm-gcs](https:\/\/github.com\/hayorov\/helm-gcs) - Plugin to manage repositories\n  on Google Cloud Storage\n- [helm-git](https:\/\/github.com\/aslafy-z\/helm-git) - Install charts and retrieve\n  values files from your Git repositories\n- [helm-k8comp](https:\/\/github.com\/cststack\/k8comp) - Plugin to create Helm\n  Charts from hiera using k8comp\n- [helm-mapkubeapis](https:\/\/github.com\/helm\/helm-mapkubeapis) - Update helm release\n  metadata to replace deprecated or removed 
Kubernetes APIs\n- [helm-migrate-values](https:\/\/github.com\/OctopusDeployLabs\/helm-migrate-values) - Plugin to migrate user-specified values across Helm chart versions to handle breaking schema changes in `values.yaml`\n- [helm-monitor](https:\/\/github.com\/ContainerSolutions\/helm-monitor) - Plugin to\n  monitor a release and rollback based on Prometheus\/ElasticSearch query\n- [helm-release-plugin](https:\/\/github.com\/JovianX\/helm-release-plugin) - Plugin for Release management, Update release values, pulls(re-creates) helm Charts from deployed releases, set helm release TTL.\n- [helm-s3](https:\/\/github.com\/hypnoglow\/helm-s3) - Helm plugin that allows to\n  use AWS S3 as a [private] chart repository\n- [helm-schema-gen](https:\/\/github.com\/karuppiah7890\/helm-schema-gen) - Helm\n  Plugin that generates values yaml schema for your Helm 3 charts\n- [helm-secrets](https:\/\/github.com\/jkroepke\/helm-secrets) - Plugin to manage\n  and store secrets safely (based on [sops](https:\/\/github.com\/mozilla\/sops))\n- [helm-sigstore](https:\/\/github.com\/sigstore\/helm-sigstore) -\n  Plugin for Helm to integrate the [sigstore](https:\/\/sigstore.dev\/) ecosystem. 
Search, upload and verify signed Helm charts.\n- [helm-tanka](https:\/\/github.com\/Duologic\/helm-tanka) - A Helm plugin for\n  rendering Tanka\/Jsonnet inside Helm charts.\n- [hc-unit](https:\/\/github.com\/xchapter7x\/hcunit) - Plugin for unit testing\n  charts locally using OPA (Open Policy Agent) & Rego\n- [helm-unittest](https:\/\/github.com\/quintush\/helm-unittest) - Plugin for unit\n  testing chart locally with YAML\n- [helm-val](https:\/\/github.com\/HamzaZo\/helm-val) - A plugin to get\n  values from a previous release.\n- [helm-external-val](https:\/\/github.com\/kuuji\/helm-external-val) - A plugin that fetches helm values from external sources (configMaps, Secrets, etc.)\n- [helm-images](https:\/\/github.com\/nikhilsbhat\/helm-images) - Helm plugin to fetch all possible images from the chart before deployment or from a deployed release\n- [helm-drift](https:\/\/github.com\/nikhilsbhat\/helm-drift) - Helm plugin that identifies the configuration that has drifted from the Helm chart\n\nWe also encourage GitHub authors to use the\n[helm-plugin](https:\/\/github.com\/search?q=topic%3Ahelm-plugin&type=Repositories)\ntag on their plugin repositories.\n\n## Additional Tools\n\nTools layered on top of Helm.\n\n- [Aptakube](https:\/\/aptakube.com) - Desktop UI for Kubernetes and Helm Releases\n- [Armada](https:\/\/airshipit.readthedocs.io\/projects\/armada\/en\/latest\/) - Manage\n  prefixed releases throughout various Kubernetes namespaces, and removes\n  completed jobs for complex deployments\n- [avionix](https:\/\/github.com\/zbrookle\/avionix) -\n  Python interface for generating Helm\n  charts and Kubernetes yaml, allowing for inheritance and less duplication of code\n- [Botkube](https:\/\/botkube.io) - Run Helm commands directly from Slack,\n  Discord, Microsoft Teams, and Mattermost.\n- [Captain](https:\/\/github.com\/alauda\/captain) - A Helm3 Controller using\n  HelmRequest and Release CRD\n- [Chartify](https:\/\/github.com\/appscode\/chartify) - 
Generate Helm charts from\n  existing Kubernetes resources.\n- [ChartMuseum](https:\/\/github.com\/helm\/chartmuseum) - Helm Chart Repository\n  with support for Amazon S3 and Google Cloud Storage\n- [chart-registry](https:\/\/github.com\/hangyan\/chart-registry) - Helm Charts\n  Hosts on OCI Registry\n- [Codefresh](https:\/\/codefresh.io) - Kubernetes native CI\/CD and management\n  platform with UI dashboards for managing Helm charts and releases\n- \u2060[Cyclops](https:\/\/cyclops-ui.com) - Dynamic Kubernetes UI rendering based\n  on Helm charts\n- [Flux](https:\/\/fluxcd.io\/docs\/components\/helm\/) -\n  Continuous and progressive delivery from Git to Kubernetes.\n- [Helmfile](https:\/\/github.com\/helmfile\/helmfile) - Helmfile is a declarative\n  spec for deploying helm charts\n- [Helmper](https:\/\/github.com\/ChristofferNissen\/helmper) - Helmper helps you\n  import Helm Charts - including all OCI artifacts(images), to your own OCI\n  registries. Helmper also facilitates security scanning and patching of OCI\n  images. 
Helmper utilizes Helm, Oras, Trivy, Copacetic and Buildkitd.\n- [Helmsman](https:\/\/github.com\/Praqma\/helmsman) - Helmsman is a\n  helm-charts-as-code tool which enables\n  installing\/upgrading\/protecting\/moving\/deleting releases from version\n  controlled desired state files (described in a simple TOML format)\n- [HULL](https:\/\/github.com\/vidispine\/hull) - This library chart provides a \n  ready-to-use interface for specifying all Kubernetes objects directly in the `values.yaml`.\n  It removes the need to write any templates for your charts and comes with many\n  additional features to simplify Helm chart creation and usage.\n- [Konveyor Move2Kube](https:\/\/konveyor.io\/move2kube\/) -\n  Generate Helm charts for your\n  existing projects.\n- [Landscaper](https:\/\/github.com\/Eneco\/landscaper\/) - \"Landscaper takes a set\n  of Helm Chart references with values (a desired state), and realizes this in a\n  Kubernetes cluster.\"\n- [Monocular](https:\/\/github.com\/helm\/monocular) - Web UI for Helm Chart\n  repositories\n- [Monokle](https:\/\/monokle.io) - Desktop tool for creating, debugging and deploying Kubernetes resources and Helm Charts\n- [Orkestra](https:\/\/azure.github.io\/orkestra\/) - A cloud-native Release\n  Orchestration and Lifecycle Management (LCM) platform for a related group of\n  Helm releases and their subcharts\n- [Tanka](https:\/\/tanka.dev\/helm) - Grafana Tanka configures Kubernetes\n  resources through Jsonnet with the ability to consume Helm Charts\n- [Terraform Helm\n  Provider](https:\/\/github.com\/hashicorp\/terraform-provider-helm) - The Helm\n  provider for HashiCorp Terraform enables lifecycle management of Helm Charts\n  with a declarative infrastructure-as-code syntax.  
The Helm provider is often\n  paired with the other Terraform providers, like the Kubernetes provider, to create\n  a common workflow across all infrastructure services.\n- [VIM-Kubernetes](https:\/\/github.com\/andrewstuart\/vim-kubernetes) - VIM plugin\n  for Kubernetes and Helm\n\n## Helm Included\n\nPlatforms, distributions, and services that include Helm support.\n\n- [Kubernetic](https:\/\/kubernetic.com\/) - Kubernetes Desktop Client\n- [Jenkins X](https:\/\/jenkins-x.io\/) - open source automated CI\/CD for\n  Kubernetes which uses Helm for\n  [promoting](https:\/\/jenkins-x.io\/docs\/getting-started\/promotion\/) applications\n  through environments via GitOps\n\n## Misc\n\nGrab bag of useful things for Chart authors and Helm users.\n\n- [Await](https:\/\/github.com\/saltside\/await) - Docker image to \"await\" different\n  conditions--especially useful for init containers. [More\n  Info](https:\/\/blog.slashdeploy.com\/2017\/02\/16\/introducing-await\/)","site":"helm"}
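The plugins listed above are installed through Helm's built-in plugin manager. As a hedged sketch (assuming `helm` v3 is on the PATH and using the helm-diff repository listed above; `my-release` and `./mychart` are placeholder names):

```shell
# Guarded so this is a no-op where helm is not installed.
PLUGIN_URL="https://github.com/databus23/helm-diff"
if command -v helm >/dev/null 2>&1; then
  helm plugin install "$PLUGIN_URL"      # fetch and install the plugin
  helm plugin list                       # the new 'diff' plugin should be listed
  helm diff upgrade my-release ./mychart # preview the upgrade as a coloured diff
fi
```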
{"questions":"helm Synopsis The Helm package manager for Kubernetes title Helm helm","answers":"---\ntitle: \"Helm\"\n---\n\n## helm\n\nThe Helm package manager for Kubernetes.\n\n### Synopsis\n\nThe Kubernetes package manager\n\nCommon actions for Helm:\n\n- helm search:    search for charts\n- helm pull:      download a chart to your local directory to view\n- helm install:   upload the chart to Kubernetes\n- helm list:      list releases of charts\n\nEnvironment variables:\n\n| Name                               | Description                                                                                                |\n|------------------------------------|------------------------------------------------------------------------------------------------------------|\n| $HELM_CACHE_HOME                   | set an alternative location for storing cached files.                                                      |\n| $HELM_CONFIG_HOME                  | set an alternative location for storing Helm configuration.                                                |\n| $HELM_DATA_HOME                    | set an alternative location for storing Helm data.                                                         |\n| $HELM_DEBUG                        | indicate whether or not Helm is running in Debug mode                                                      |\n| $HELM_DRIVER                       | set the backend storage driver. Values are: configmap, secret, memory, sql.                                |\n| $HELM_DRIVER_SQL_CONNECTION_STRING | set the connection string the SQL storage driver should use.                                               |\n| $HELM_MAX_HISTORY                  | set the maximum number of helm release history.                                                            |\n| $HELM_NAMESPACE                    | set the namespace used for the helm operations.                                                            
|\n| $HELM_NO_PLUGINS                   | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.                                                 |\n| $HELM_PLUGINS                      | set the path to the plugins directory                                                                      |\n| $HELM_REGISTRY_CONFIG              | set the path to the registry config file.                                                                  |\n| $HELM_REPOSITORY_CACHE             | set the path to the repository cache directory                                                             |\n| $HELM_REPOSITORY_CONFIG            | set the path to the repositories file.                                                                     |\n| $KUBECONFIG                        | set an alternative Kubernetes configuration file (default \"~\/.kube\/config\")                                |\n| $HELM_KUBEAPISERVER                | set the Kubernetes API Server Endpoint for authentication                                                  |\n| $HELM_KUBECAFILE                   | set the Kubernetes certificate authority file.                                                             |\n| $HELM_KUBEASGROUPS                 | set the Groups to use for impersonation using a comma-separated list.                                      |\n| $HELM_KUBEASUSER                   | set the Username to impersonate for the operation.                                                         |\n| $HELM_KUBECONTEXT                  | set the name of the kubeconfig context.                                                                    |\n| $HELM_KUBETOKEN                    | set the Bearer KubeToken used for authentication.                                                          
|\n| $HELM_KUBEINSECURE_SKIP_TLS_VERIFY | indicate if the Kubernetes API server's certificate validation should be skipped (insecure)                |\n| $HELM_KUBETLS_SERVER_NAME          | set the server name used to validate the Kubernetes API server certificate                                 |\n| $HELM_BURST_LIMIT                  | set the default burst limit in the case the server contains many CRDs (default 100, -1 to disable)         |\n| $HELM_QPS                          | set the Queries Per Second in cases where a high number of calls exceed the option for higher burst values |\n\nHelm stores cache, configuration, and data based on the following configuration order:\n\n- If a HELM_*_HOME environment variable is set, it will be used\n- Otherwise, on systems supporting the XDG base directory specification, the XDG variables will be used\n- When no other location is set, a default location will be used based on the operating system\n\nThe default directories depend on the Operating System. 
The defaults are listed below:\n\n| Operating System | Cache Path                | Configuration Path             | Data Path               |\n|------------------|---------------------------|--------------------------------|-------------------------|\n| Linux            | $HOME\/.cache\/helm         | $HOME\/.config\/helm             | $HOME\/.local\/share\/helm |\n| macOS            | $HOME\/Library\/Caches\/helm | $HOME\/Library\/Preferences\/helm | $HOME\/Library\/helm      |\n| Windows          | %TEMP%\\helm               | %APPDATA%\\helm                 | %APPDATA%\\helm          |\n\n\n### Options\n\n```\n      --burst-limit int                 client-side default throttling limit (default 100)\n      --debug                           enable verbose output\n  -h, --help                            help for helm\n      --kube-apiserver string           the address and the port for the Kubernetes API server\n      --kube-as-group stringArray       group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --kube-as-user string             username to impersonate for the operation\n      --kube-ca-file string             the certificate authority file for the Kubernetes API server connection\n      --kube-context string             name of the kubeconfig context to use\n      --kube-insecure-skip-tls-verify   if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kube-tls-server-name string     server name to use for Kubernetes API server certificate validation. 
If it is not provided, the hostname used to contact the server is used\n      --kube-token string               bearer token used for authentication\n      --kubeconfig string               path to the kubeconfig file\n  -n, --namespace string                namespace scope for this request\n      --qps float32                     queries per second used when communicating with the Kubernetes API, not including bursting\n      --registry-config string          path to the registry config file (default \"~\/.config\/helm\/registry\/config.json\")\n      --repository-cache string         path to the directory containing cached repository indexes (default \"~\/.cache\/helm\/repository\")\n      --repository-config string        path to the file containing repository names and URLs (default \"~\/.config\/helm\/repositories.yaml\")\n```\n\n### SEE ALSO\n\n* [helm completion](helm_completion.md)\t - generate autocompletion scripts for the specified shell\n* [helm create](helm_create.md)\t - create a new chart with the given name\n* [helm dependency](helm_dependency.md)\t - manage a chart's dependencies\n* [helm env](helm_env.md)\t - helm client environment information\n* [helm get](helm_get.md)\t - download extended information of a named release\n* [helm history](helm_history.md)\t - fetch release history\n* [helm install](helm_install.md)\t - install a chart\n* [helm lint](helm_lint.md)\t - examine a chart for possible issues\n* [helm list](helm_list.md)\t - list releases\n* [helm package](helm_package.md)\t - package a chart directory into a chart archive\n* [helm plugin](helm_plugin.md)\t - install, list, or uninstall Helm plugins\n* [helm pull](helm_pull.md)\t - download a chart from a repository and (optionally) unpack it in local directory\n* [helm push](helm_push.md)\t - push a chart to remote\n* [helm registry](helm_registry.md)\t - login to or logout from a registry\n* [helm repo](helm_repo.md)\t - add, list, remove, update, and index chart repositories\n* 
[helm rollback](helm_rollback.md)\t - roll back a release to a previous revision\n* [helm search](helm_search.md)\t - search for a keyword in charts\n* [helm show](helm_show.md)\t - show information of a chart\n* [helm status](helm_status.md)\t - display the status of the named release\n* [helm template](helm_template.md)\t - locally render templates\n* [helm test](helm_test.md)\t - run tests for a release\n* [helm uninstall](helm_uninstall.md)\t - uninstall a release\n* [helm upgrade](helm_upgrade.md)\t - upgrade a release\n* [helm verify](helm_verify.md)\t - verify that a chart at the given path has been signed and is valid\n* [helm version](helm_version.md)\t - print the client version information\n\n###### ","site":"helm"}
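The lookup order documented above (a `HELM_*_HOME` variable, then the XDG base directory, then a per-OS default) can be sketched in shell. This is an illustration only; `helm_config_home` is a hypothetical helper, not a real helm command, and only the Linux default path is shown:

```shell
# Resolve the configuration path following the precedence list above.
helm_config_home() {
  if [ -n "${HELM_CONFIG_HOME:-}" ]; then
    printf '%s\n' "$HELM_CONFIG_HOME"     # 1. explicit HELM_CONFIG_HOME wins
  elif [ -n "${XDG_CONFIG_HOME:-}" ]; then
    printf '%s/helm\n' "$XDG_CONFIG_HOME" # 2. XDG base directory
  else
    printf '%s/.config/helm\n' "$HOME"    # 3. per-OS default (Linux shown)
  fi
}

(HELM_CONFIG_HOME=/opt/helm-cfg; helm_config_home)   # prints /opt/helm-cfg
```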
{"questions":"helm install a chart title Helm Install Synopsis helm install","answers":"---\ntitle: \"Helm Install\"\n---\n\n## helm install\n\ninstall a chart\n\n### Synopsis\n\n\nThis command installs a chart archive.\n\nThe install argument must be a chart reference, a path to a packaged chart,\na path to an unpacked chart directory or a URL.\n\nTo override values in a chart, use either the '--values' flag and pass in a file\nor use the '--set' flag and pass configuration from the command line, to force\na string value use '--set-string'. You can use '--set-file' to set individual\nvalues from a file when the value itself is too long for the command line\nor is dynamically generated. You can also use '--set-json' to set json values\n(scalars\/objects\/arrays) from the command line.\n\n    $ helm install -f myvalues.yaml myredis .\/redis\n\nor\n\n    $ helm install --set name=prod myredis .\/redis\n\nor\n\n    $ helm install --set-string long_int=1234567890 myredis .\/redis\n\nor\n\n    $ helm install --set-file my_script=dothings.sh myredis .\/redis\n\nor\n\n    $ helm install --set-json 'master.sidecars=[{\"name\":\"sidecar\",\"image\":\"myImage\",\"imagePullPolicy\":\"Always\",\"ports\":[{\"name\":\"portname\",\"containerPort\":1234}]}]' myredis .\/redis\n\n\nYou can specify the '--values'\/'-f' flag multiple times. The priority will be given to the\nlast (right-most) file specified. For example, if both myvalues.yaml and override.yaml\ncontained a key called 'Test', the value set in override.yaml would take precedence:\n\n    $ helm install -f myvalues.yaml -f override.yaml  myredis .\/redis\n\nYou can specify the '--set' flag multiple times. The priority will be given to the\nlast (right-most) set specified. 
For example, if both 'bar' and 'newbar' values are\nset for a key called 'foo', the 'newbar' value would take precedence:\n\n    $ helm install --set foo=bar --set foo=newbar  myredis .\/redis\n\nSimilarly, in the following example 'foo' is set to '[\"four\"]':\n\n    $ helm install --set-json='foo=[\"one\", \"two\", \"three\"]' --set-json='foo=[\"four\"]' myredis .\/redis\n\nAnd in the following example, 'foo' is set to '{\"key1\":\"value1\",\"key2\":\"bar\"}':\n\n    $ helm install --set-json='foo={\"key1\":\"value1\",\"key2\":\"value2\"}' --set-json='foo.key2=\"bar\"' myredis .\/redis\n\nTo check the generated manifests of a release without installing the chart,\nthe --debug and --dry-run flags can be combined.\n\nThe --dry-run flag will output all generated chart manifests, including Secrets\nwhich can contain sensitive values. To hide Kubernetes Secrets use the\n--hide-secret flag. Please carefully consider how and when these flags are used.\n\nIf --verify is set, the chart MUST have a provenance file, and the provenance\nfile MUST pass all verification steps.\n\nThere are six different ways you can express the chart you want to install:\n\n1. By chart reference: helm install mymaria example\/mariadb\n2. By path to a packaged chart: helm install mynginx .\/nginx-1.2.3.tgz\n3. By path to an unpacked chart directory: helm install mynginx .\/nginx\n4. By absolute URL: helm install mynginx https:\/\/example.com\/charts\/nginx-1.2.3.tgz\n5. By chart reference and repo url: helm install --repo https:\/\/example.com\/charts\/ mynginx nginx\n6. By OCI registries: helm install mynginx --version 1.2.3 oci:\/\/example.com\/charts\/nginx\n\nCHART REFERENCES\n\nA chart reference is a convenient way of referencing a chart in a chart repository.\n\nWhen you use a chart reference with a repo prefix ('example\/mariadb'), Helm will look in the local\nconfiguration for a chart repository named 'example', and will then look for a\nchart in that repository whose name is 'mariadb'. 
It will install the latest stable version of that chart\nunless you specify the '--devel' flag to also include development versions (alpha, beta, and release candidate releases), or\nsupply a version number with the '--version' flag.\n\nTo see the list of chart repositories, use 'helm repo list'. To search for\ncharts in a repository, use 'helm search'.\n\n\n```\nhelm install [NAME] [CHART] [flags]\n```\n\n### Options\n\n```\n      --atomic                                     if set, the installation process deletes the installation on failure. The --wait flag will be set automatically if --atomic is used\n      --ca-file string                             verify certificates of HTTPS-enabled servers using this CA bundle\n      --cert-file string                           identify HTTPS client using this SSL certificate file\n      --create-namespace                           create the release namespace if not present\n      --dependency-update                          update dependencies if they are missing before installing the chart\n      --description string                         add a custom description\n      --devel                                      use development versions, too. Equivalent to version '>0.0.0-0'. If --version is set, this is ignored\n      --disable-openapi-validation                 if set, the installation process will not validate rendered templates against the Kubernetes OpenAPI Schema\n      --dry-run string[=\"client\"]                  simulate an install. If --dry-run is set with no option being specified or as '--dry-run=client', it will not attempt cluster connections. 
Setting '--dry-run=server' allows attempting cluster connections.\n      --enable-dns                                 enable DNS lookups when rendering templates\n      --force                                      force resource updates through a replacement strategy\n  -g, --generate-name                              generate the name (and omit the NAME parameter)\n  -h, --help                                       help for install\n      --hide-notes                                 if set, do not show notes in install output. Does not affect presence in chart metadata\n      --hide-secret                                hide Kubernetes Secrets when also using the --dry-run flag\n      --insecure-skip-tls-verify                   skip tls certificate checks for the chart download\n      --key-file string                            identify HTTPS client using this SSL key file\n      --keyring string                             location of public keys used for verification (default \"~\/.gnupg\/pubring.gpg\")\n  -l, --labels stringToString                      Labels that would be added to release metadata. Should be divided by comma. (default [])\n      --name-template string                       specify template used to name the release\n      --no-hooks                                   prevent hooks from running during install\n  -o, --output format                              prints the output in the specified format. Allowed values: table, json, yaml (default table)\n      --pass-credentials                           pass credentials to all domains\n      --password string                            chart repository password where to locate the requested chart\n      --plain-http                                 use insecure HTTP connections for the chart download\n      --post-renderer postRendererString           the path to an executable to be used for post rendering. 
If it exists in $PATH, the binary will be used, otherwise it will try to look for the executable at the given path\n      --post-renderer-args postRendererArgsSlice   an argument to the post-renderer (can specify multiple) (default [])\n      --render-subchart-notes                      if set, render subchart notes along with the parent\n      --replace                                    re-use the given name, only if that name is a deleted release which remains in the history. This is unsafe in production\n      --repo string                                chart repository url where to locate the requested chart\n      --set stringArray                            set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)\n      --set-file stringArray                       set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)\n      --set-json stringArray                       set JSON values on the command line (can specify multiple or separate values with commas: key1=jsonval1,key2=jsonval2)\n      --set-literal stringArray                    set a literal STRING value on the command line\n      --set-string stringArray                     set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)\n      --skip-crds                                  if set, no CRDs will be installed. 
By default, CRDs are installed if not already present\n      --skip-schema-validation                     if set, disables JSON schema validation\n      --timeout duration                           time to wait for any individual Kubernetes operation (like Jobs for hooks) (default 5m0s)\n      --username string                            chart repository username where to locate the requested chart\n  -f, --values strings                             specify values in a YAML file or a URL (can specify multiple)\n      --verify                                     verify the package before using it\n      --version string                             specify a version constraint for the chart version to use. This constraint can be a specific tag (e.g. 1.1.1) or it may reference a valid range (e.g. ^2.0.0). If this is not specified, the latest version is used\n      --wait                                       if set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as --timeout\n      --wait-for-jobs                              if set and --wait enabled, will wait until all Jobs have been completed before marking the release as successful. 
It will wait for as long as --timeout\n```\n\n### Options inherited from parent commands\n\n```\n      --burst-limit int                 client-side default throttling limit (default 100)\n      --debug                           enable verbose output\n      --kube-apiserver string           the address and the port for the Kubernetes API server\n      --kube-as-group stringArray       group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --kube-as-user string             username to impersonate for the operation\n      --kube-ca-file string             the certificate authority file for the Kubernetes API server connection\n      --kube-context string             name of the kubeconfig context to use\n      --kube-insecure-skip-tls-verify   if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kube-tls-server-name string     server name to use for Kubernetes API server certificate validation. 
If it is not provided, the hostname used to contact the server is used\n      --kube-token string               bearer token used for authentication\n      --kubeconfig string               path to the kubeconfig file\n  -n, --namespace string                namespace scope for this request\n      --qps float32                     queries per second used when communicating with the Kubernetes API, not including bursting\n      --registry-config string          path to the registry config file (default \"~\/.config\/helm\/registry\/config.json\")\n      --repository-cache string         path to the directory containing cached repository indexes (default \"~\/.cache\/helm\/repository\")\n      --repository-config string        path to the file containing repository names and URLs (default \"~\/.config\/helm\/repositories.yaml\")\n```\n\n### SEE ALSO\n\n* [helm](helm.md)\t - The Helm package manager for Kubernetes.\n\n###### ","site":"helm"}
{"questions":"helm upgrade a release helm upgrade Synopsis title Helm Upgrade","answers":"---\ntitle: \"Helm Upgrade\"\n---\n\n## helm upgrade\n\nupgrade a release\n\n### Synopsis\n\n\nThis command upgrades a release to a new version of a chart.\n\nThe upgrade arguments must be a release and chart. The chart\nargument can be either: a chart reference ('example\/mariadb'), a path to a chart directory,\na packaged chart, or a fully qualified URL. For chart references, the latest\nversion will be specified unless the '--version' flag is set.\n\nTo override values in a chart, use either the '--values' flag and pass in a file\nor use the '--set' flag and pass configuration from the command line; to force string\nvalues, use '--set-string'. You can use '--set-file' to set individual\nvalues from a file when the value itself is too long for the command line\nor is dynamically generated. You can also use '--set-json' to set json values\n(scalars\/objects\/arrays) from the command line.\n\nYou can specify the '--values'\/'-f' flag multiple times. The priority will be given to the\nlast (right-most) file specified. For example, if both myvalues.yaml and override.yaml\ncontained a key called 'Test', the value set in override.yaml would take precedence:\n\n    $ helm upgrade -f myvalues.yaml -f override.yaml redis .\/redis\n\nYou can specify the '--set' flag multiple times. The priority will be given to the\nlast (right-most) set specified. For example, if both 'bar' and 'newbar' values are\nset for a key called 'foo', the 'newbar' value would take precedence:\n\n    $ helm upgrade --set foo=bar --set foo=newbar redis .\/redis\n\nYou can update the values for an existing release with this command as well via the\n'--reuse-values' flag. The 'RELEASE' and 'CHART' arguments should be set to the original\nparameters, and existing values will be merged with any values set via '--values'\/'-f'\nor '--set' flags. 
Priority is given to new values.\n\n    $ helm upgrade --reuse-values --set foo=bar --set foo=newbar redis .\/redis\n\nThe --dry-run flag will output all generated chart manifests, including Secrets\nwhich can contain sensitive values. To hide Kubernetes Secrets use the\n--hide-secret flag. Please carefully consider how and when these flags are used.\n\n\n```\nhelm upgrade [RELEASE] [CHART] [flags]\n```\n\n### Options\n\n```\n      --atomic                                     if set, upgrade process rolls back changes made in case of failed upgrade. The --wait flag will be set automatically if --atomic is used\n      --ca-file string                             verify certificates of HTTPS-enabled servers using this CA bundle\n      --cert-file string                           identify HTTPS client using this SSL certificate file\n      --cleanup-on-fail                            allow deletion of new resources created in this upgrade when upgrade fails\n      --create-namespace                           if --install is set, create the release namespace if not present\n      --dependency-update                          update dependencies if they are missing before installing the chart\n      --description string                         add a custom description\n      --devel                                      use development versions, too. Equivalent to version '>0.0.0-0'. If --version is set, this is ignored\n      --disable-openapi-validation                 if set, the upgrade process will not validate rendered templates against the Kubernetes OpenAPI Schema\n      --dry-run string[=\"client\"]                  simulate an install. If --dry-run is set with no option being specified or as '--dry-run=client', it will not attempt cluster connections. 
Setting '--dry-run=server' allows attempting cluster connections.\n      --enable-dns                                 enable DNS lookups when rendering templates\n      --force                                      force resource updates through a replacement strategy\n  -h, --help                                       help for upgrade\n      --hide-notes                                 if set, do not show notes in upgrade output. Does not affect presence in chart metadata\n      --hide-secret                                hide Kubernetes Secrets when also using the --dry-run flag\n      --history-max int                            limit the maximum number of revisions saved per release. Use 0 for no limit (default 10)\n      --insecure-skip-tls-verify                   skip tls certificate checks for the chart download\n  -i, --install                                    if a release by this name doesn't already exist, run an install\n      --key-file string                            identify HTTPS client using this SSL key file\n      --keyring string                             location of public keys used for verification (default \"~\/.gnupg\/pubring.gpg\")\n  -l, --labels stringToString                      Labels that would be added to release metadata. Should be separated by comma. Original release labels will be merged with upgrade labels. You can unset label using null. (default [])\n      --no-hooks                                   disable pre\/post upgrade hooks\n  -o, --output format                              prints the output in the specified format. 
Allowed values: table, json, yaml (default table)\n      --pass-credentials                           pass credentials to all domains\n      --password string                            chart repository password where to locate the requested chart\n      --plain-http                                 use insecure HTTP connections for the chart download\n      --post-renderer postRendererString           the path to an executable to be used for post rendering. If it exists in $PATH, the binary will be used, otherwise it will try to look for the executable at the given path\n      --post-renderer-args postRendererArgsSlice   an argument to the post-renderer (can specify multiple) (default [])\n      --render-subchart-notes                      if set, render subchart notes along with the parent\n      --repo string                                chart repository url where to locate the requested chart\n      --reset-then-reuse-values                    when upgrading, reset the values to the ones built into the chart, apply the last release's values and merge in any overrides from the command line via --set and -f. If '--reset-values' or '--reuse-values' is specified, this is ignored\n      --reset-values                               when upgrading, reset the values to the ones built into the chart\n      --reuse-values                               when upgrading, reuse the last release's values and merge in any overrides from the command line via --set and -f. 
If '--reset-values' is specified, this is ignored\n      --set stringArray                            set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)\n      --set-file stringArray                       set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)\n      --set-json stringArray                       set JSON values on the command line (can specify multiple or separate values with commas: key1=jsonval1,key2=jsonval2)\n      --set-literal stringArray                    set a literal STRING value on the command line\n      --set-string stringArray                     set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)\n      --skip-crds                                  if set, no CRDs will be installed when an upgrade is performed with install flag enabled. By default, CRDs are installed if not already present, when an upgrade is performed with install flag enabled\n      --skip-schema-validation                     if set, disables JSON schema validation\n      --timeout duration                           time to wait for any individual Kubernetes operation (like Jobs for hooks) (default 5m0s)\n      --username string                            chart repository username where to locate the requested chart\n  -f, --values strings                             specify values in a YAML file or a URL (can specify multiple)\n      --verify                                     verify the package before using it\n      --version string                             specify a version constraint for the chart version to use. This constraint can be a specific tag (e.g. 1.1.1) or it may reference a valid range (e.g. ^2.0.0). 
If this is not specified, the latest version is used\n      --wait                                       if set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as --timeout\n      --wait-for-jobs                              if set and --wait enabled, will wait until all Jobs have been completed before marking the release as successful. It will wait for as long as --timeout\n```\n\n### Options inherited from parent commands\n\n```\n      --burst-limit int                 client-side default throttling limit (default 100)\n      --debug                           enable verbose output\n      --kube-apiserver string           the address and the port for the Kubernetes API server\n      --kube-as-group stringArray       group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --kube-as-user string             username to impersonate for the operation\n      --kube-ca-file string             the certificate authority file for the Kubernetes API server connection\n      --kube-context string             name of the kubeconfig context to use\n      --kube-insecure-skip-tls-verify   if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kube-tls-server-name string     server name to use for Kubernetes API server certificate validation. 
If it is not provided, the hostname used to contact the server is used\n      --kube-token string               bearer token used for authentication\n      --kubeconfig string               path to the kubeconfig file\n  -n, --namespace string                namespace scope for this request\n      --qps float32                     queries per second used when communicating with the Kubernetes API, not including bursting\n      --registry-config string          path to the registry config file (default \"~\/.config\/helm\/registry\/config.json\")\n      --repository-cache string         path to the directory containing cached repository indexes (default \"~\/.cache\/helm\/repository\")\n      --repository-config string        path to the file containing repository names and URLs (default \"~\/.config\/helm\/repositories.yaml\")\n```\n\n### SEE ALSO\n\n* [helm](helm.md)\t - The Helm package manager for Kubernetes.\n\n###### ","site":"helm"}
  add a custom description         devel                                      use development versions  too  Equivalent to version   0 0 0 0   If   version is set  this is ignored         disable openapi validation                 if set  the upgrade process will not validate rendered templates against the Kubernetes OpenAPI Schema         dry run string   client                    simulate an install  If   dry run is set with no option being specified or as    dry run client   it will not attempt cluster connections  Setting    dry run server  allows attempting cluster connections          enable dns                                 enable DNS lookups when rendering templates         force                                      force resource updates through a replacement strategy    h    help                                       help for upgrade         hide notes                                 if set  do not show notes in upgrade output  Does not affect presence in chart metadata         hide secret                                hide Kubernetes Secrets when also using the   dry run flag         history max int                            limit the maximum number of revisions saved per release  Use 0 for no limit  default 10          insecure skip tls verify                   skip tls certificate checks for the chart download    i    install                                    if a release by this name doesn t already exist  run an install         key file string                            identify HTTPS client using this SSL key file         keyring string                             location of public keys used for verification  default     gnupg pubring gpg      l    labels stringToString                      Labels that would be added to release metadata  Should be separated by comma  Original release labels will be merged with upgrade labels  You can unset label using null   default             no hooks                                   disable pre post 
upgrade hooks    o    output format                              prints the output in the specified format  Allowed values  table  json  yaml  default table          pass credentials                           pass credentials to all domains         password string                            chart repository password where to locate the requested chart         plain http                                 use insecure HTTP connections for the chart download         post renderer postRendererString           the path to an executable to be used for post rendering  If it exists in  PATH  the binary will be used  otherwise it will try to look for the executable at the given path         post renderer args postRendererArgsSlice   an argument to the post renderer  can specify multiple   default             render subchart notes                      if set  render subchart notes along with the parent         repo string                                chart repository url where to locate the requested chart         reset then reuse values                    when upgrading  reset the values to the ones built into the chart  apply the last release s values and merge in any overrides from the command line via   set and  f  If    reset values  or    reuse values  is specified  this is ignored         reset values                               when upgrading  reset the values to the ones built into the chart         reuse values                               when upgrading  reuse the last release s values and merge in any overrides from the command line via   set and  f  If    reset values  is specified  this is ignored         set stringArray                            set values on the command line  can specify multiple or separate values with commas  key1 val1 key2 val2          set file stringArray                       set values from respective files specified via the command line  can specify multiple or separate values with commas  key1 path1 key2 path2          set json 
stringArray                       set JSON values on the command line  can specify multiple or separate values with commas  key1 jsonval1 key2 jsonval2          set literal stringArray                    set a literal STRING value on the command line         set string stringArray                     set STRING values on the command line  can specify multiple or separate values with commas  key1 val1 key2 val2          skip crds                                  if set  no CRDs will be installed when an upgrade is performed with install flag enabled  By default  CRDs are installed if not already present  when an upgrade is performed with install flag enabled         skip schema validation                     if set  disables JSON schema validation         timeout duration                           time to wait for any individual Kubernetes operation  like Jobs for hooks   default 5m0s          username string                            chart repository username where to locate the requested chart    f    values strings                             specify values in a YAML file or a URL  can specify multiple          verify                                     verify the package before using it         version string                             specify a version constraint for the chart version to use  This constraint can be a specific tag  e g  1 1 1  or it may reference a valid range  e g   2 0 0   If this is not specified  the latest version is used         wait                                       if set  will wait until all Pods  PVCs  Services  and minimum number of Pods of a Deployment  StatefulSet  or ReplicaSet are in a ready state before marking the release as successful  It will wait for as long as   timeout         wait for jobs                              if set and   wait enabled  will wait until all Jobs have been completed before marking the release as successful  It will wait for as long as   timeout          Options inherited from parent commands    
          burst limit int                 client side default throttling limit  default 100          debug                           enable verbose output         kube apiserver string           the address and the port for the Kubernetes API server         kube as group stringArray       group to impersonate for the operation  this flag can be repeated to specify multiple groups          kube as user string             username to impersonate for the operation         kube ca file string             the certificate authority file for the Kubernetes API server connection         kube context string             name of the kubeconfig context to use         kube insecure skip tls verify   if true  the Kubernetes API server s certificate will not be checked for validity  This will make your HTTPS connections insecure         kube tls server name string     server name to use for Kubernetes API server certificate validation  If it is not provided  the hostname used to contact the server is used         kube token string               bearer token used for authentication         kubeconfig string               path to the kubeconfig file    n    namespace string                namespace scope for this request         qps float32                     queries per second used when communicating with the Kubernetes API  not including bursting         registry config string          path to the registry config file  default     config helm registry config json           repository cache string         path to the directory containing cached repository indexes  default     cache helm repository           repository config string        path to the file containing repository names and URLs  default     config helm repositories yaml            SEE ALSO     helm  helm md     The Helm package manager for Kubernetes          "}
{"questions":"helm helm template title Helm Template locally render templates Synopsis","answers":"---\ntitle: \"Helm Template\"\n---\n\n## helm template\n\nlocally render templates\n\n### Synopsis\n\n\nRender chart templates locally and display the output.\n\nAny values that would normally be looked up or retrieved in-cluster will be\nfaked locally. Additionally, none of the server-side testing of chart validity\n(e.g. whether an API is supported) is done.\n\n\n```\nhelm template [NAME] [CHART] [flags]\n```\n\n### Options\n\n```\n  -a, --api-versions strings                       Kubernetes api versions used for Capabilities.APIVersions\n      --atomic                                     if set, the installation process deletes the installation on failure. The --wait flag will be set automatically if --atomic is used\n      --ca-file string                             verify certificates of HTTPS-enabled servers using this CA bundle\n      --cert-file string                           identify HTTPS client using this SSL certificate file\n      --create-namespace                           create the release namespace if not present\n      --dependency-update                          update dependencies if they are missing before installing the chart\n      --description string                         add a custom description\n      --devel                                      use development versions, too. Equivalent to version '>0.0.0-0'. If --version is set, this is ignored\n      --disable-openapi-validation                 if set, the installation process will not validate rendered templates against the Kubernetes OpenAPI Schema\n      --dry-run string[=\"client\"]                  simulate an install. If --dry-run is set with no option being specified or as '--dry-run=client', it will not attempt cluster connections. 
Setting '--dry-run=server' allows attempting cluster connections.\n      --enable-dns                                 enable DNS lookups when rendering templates\n      --force                                      force resource updates through a replacement strategy\n  -g, --generate-name                              generate the name (and omit the NAME parameter)\n  -h, --help                                       help for template\n      --hide-notes                                 if set, do not show notes in install output. Does not affect presence in chart metadata\n      --include-crds                               include CRDs in the templated output\n      --insecure-skip-tls-verify                   skip tls certificate checks for the chart download\n      --is-upgrade                                 set .Release.IsUpgrade instead of .Release.IsInstall\n      --key-file string                            identify HTTPS client using this SSL key file\n      --keyring string                             location of public keys used for verification (default \"~\/.gnupg\/pubring.gpg\")\n      --kube-version string                        Kubernetes version used for Capabilities.KubeVersion\n  -l, --labels stringToString                      Labels that would be added to release metadata. Should be divided by comma. 
(default [])\n      --name-template string                       specify template used to name the release\n      --no-hooks                                   prevent hooks from running during install\n      --output-dir string                          writes the executed templates to files in output-dir instead of stdout\n      --pass-credentials                           pass credentials to all domains\n      --password string                            chart repository password where to locate the requested chart\n      --plain-http                                 use insecure HTTP connections for the chart download\n      --post-renderer postRendererString           the path to an executable to be used for post rendering. If it exists in $PATH, the binary will be used, otherwise it will try to look for the executable at the given path\n      --post-renderer-args postRendererArgsSlice   an argument to the post-renderer (can specify multiple) (default [])\n      --release-name                               use release name in the output-dir path.\n      --render-subchart-notes                      if set, render subchart notes along with the parent\n      --replace                                    re-use the given name, only if that name is a deleted release which remains in the history. 
This is unsafe in production\n      --repo string                                chart repository url where to locate the requested chart\n      --set stringArray                            set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)\n      --set-file stringArray                       set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)\n      --set-json stringArray                       set JSON values on the command line (can specify multiple or separate values with commas: key1=jsonval1,key2=jsonval2)\n      --set-literal stringArray                    set a literal STRING value on the command line\n      --set-string stringArray                     set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)\n  -s, --show-only stringArray                      only show manifests rendered from the given templates\n      --skip-crds                                  if set, no CRDs will be installed. By default, CRDs are installed if not already present\n      --skip-schema-validation                     if set, disables JSON schema validation\n      --skip-tests                                 skip tests from templated output\n      --timeout duration                           time to wait for any individual Kubernetes operation (like Jobs for hooks) (default 5m0s)\n      --username string                            chart repository username where to locate the requested chart\n      --validate                                   validate your manifests against the Kubernetes cluster you are currently pointing at. 
This is the same validation performed on an install\n  -f, --values strings                             specify values in a YAML file or a URL (can specify multiple)\n      --verify                                     verify the package before using it\n      --version string                             specify a version constraint for the chart version to use. This constraint can be a specific tag (e.g. 1.1.1) or it may reference a valid range (e.g. ^2.0.0). If this is not specified, the latest version is used\n      --wait                                       if set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment, StatefulSet, or ReplicaSet are in a ready state before marking the release as successful. It will wait for as long as --timeout\n      --wait-for-jobs                              if set and --wait enabled, will wait until all Jobs have been completed before marking the release as successful. It will wait for as long as --timeout\n```\n\n### Options inherited from parent commands\n\n```\n      --burst-limit int                 client-side default throttling limit (default 100)\n      --debug                           enable verbose output\n      --kube-apiserver string           the address and the port for the Kubernetes API server\n      --kube-as-group stringArray       group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --kube-as-user string             username to impersonate for the operation\n      --kube-ca-file string             the certificate authority file for the Kubernetes API server connection\n      --kube-context string             name of the kubeconfig context to use\n      --kube-insecure-skip-tls-verify   if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kube-tls-server-name string     server name to use for Kubernetes API server certificate validation. 
If it is not provided, the hostname used to contact the server is used\n      --kube-token string               bearer token used for authentication\n      --kubeconfig string               path to the kubeconfig file\n  -n, --namespace string                namespace scope for this request\n      --qps float32                     queries per second used when communicating with the Kubernetes API, not including bursting\n      --registry-config string          path to the registry config file (default \"~\/.config\/helm\/registry\/config.json\")\n      --repository-cache string         path to the directory containing cached repository indexes (default \"~\/.cache\/helm\/repository\")\n      --repository-config string        path to the file containing repository names and URLs (default \"~\/.config\/helm\/repositories.yaml\")\n```\n\n### SEE ALSO\n\n* [helm](helm.md)\t - The Helm package manager for Kubernetes.\n\n###### ","site":"helm","answers_cleaned":"    title   Helm Template          helm template  locally render templates      Synopsis   Render chart templates locally and display the output   Any values that would normally be looked up or retrieved in cluster will be faked locally  Additionally  none of the server side testing of chart validity  e g  whether an API is supported  is done        helm template  NAME   CHART   flags           Options         a    api versions strings                       Kubernetes api versions used for Capabilities APIVersions         atomic                                     if set  the installation process deletes the installation on failure  The   wait flag will be set automatically if   atomic is used         ca file string                             verify certificates of HTTPS enabled servers using this CA bundle         cert file string                           identify HTTPS client using this SSL certificate file         create namespace                           create the release namespace if not present         
dependency update                          update dependencies if they are missing before installing the chart         description string                         add a custom description         devel                                      use development versions  too  Equivalent to version   0 0 0 0   If   version is set  this is ignored         disable openapi validation                 if set  the installation process will not validate rendered templates against the Kubernetes OpenAPI Schema         dry run string   client                    simulate an install  If   dry run is set with no option being specified or as    dry run client   it will not attempt cluster connections  Setting    dry run server  allows attempting cluster connections          enable dns                                 enable DNS lookups when rendering templates         force                                      force resource updates through a replacement strategy    g    generate name                              generate the name  and omit the NAME parameter     h    help                                       help for template         hide notes                                 if set  do not show notes in install output  Does not affect presence in chart metadata         include crds                               include CRDs in the templated output         insecure skip tls verify                   skip tls certificate checks for the chart download         is upgrade                                 set  Release IsUpgrade instead of  Release IsInstall         key file string                            identify HTTPS client using this SSL key file         keyring string                             location of public keys used for verification  default     gnupg pubring gpg           kube version string                        Kubernetes version used for Capabilities KubeVersion    l    labels stringToString                      Labels that would be added to release metadata  Should be 
divided by comma   default             name template string                       specify template used to name the release         no hooks                                   prevent hooks from running during install         output dir string                          writes the executed templates to files in output dir instead of stdout         pass credentials                           pass credentials to all domains         password string                            chart repository password where to locate the requested chart         plain http                                 use insecure HTTP connections for the chart download         post renderer postRendererString           the path to an executable to be used for post rendering  If it exists in  PATH  the binary will be used  otherwise it will try to look for the executable at the given path         post renderer args postRendererArgsSlice   an argument to the post renderer  can specify multiple   default             release name                               use release name in the output dir path          render subchart notes                      if set  render subchart notes along with the parent         replace                                    re use the given name  only if that name is a deleted release which remains in the history  This is unsafe in production         repo string                                chart repository url where to locate the requested chart         set stringArray                            set values on the command line  can specify multiple or separate values with commas  key1 val1 key2 val2          set file stringArray                       set values from respective files specified via the command line  can specify multiple or separate values with commas  key1 path1 key2 path2          set json stringArray                       set JSON values on the command line  can specify multiple or separate values with commas  key1 jsonval1 key2 jsonval2          set literal 
stringArray                    set a literal STRING value on the command line         set string stringArray                     set STRING values on the command line  can specify multiple or separate values with commas  key1 val1 key2 val2     s    show only stringArray                      only show manifests rendered from the given templates         skip crds                                  if set  no CRDs will be installed  By default  CRDs are installed if not already present         skip schema validation                     if set  disables JSON schema validation         skip tests                                 skip tests from templated output         timeout duration                           time to wait for any individual Kubernetes operation  like Jobs for hooks   default 5m0s          username string                            chart repository username where to locate the requested chart         validate                                   validate your manifests against the Kubernetes cluster you are currently pointing at  This is the same validation performed on an install    f    values strings                             specify values in a YAML file or a URL  can specify multiple          verify                                     verify the package before using it         version string                             specify a version constraint for the chart version to use  This constraint can be a specific tag  e g  1 1 1  or it may reference a valid range  e g   2 0 0   If this is not specified  the latest version is used         wait                                       if set  will wait until all Pods  PVCs  Services  and minimum number of Pods of a Deployment  StatefulSet  or ReplicaSet are in a ready state before marking the release as successful  It will wait for as long as   timeout         wait for jobs                              if set and   wait enabled  will wait until all Jobs have been completed before marking the release as 
successful  It will wait for as long as   timeout          Options inherited from parent commands              burst limit int                 client side default throttling limit  default 100          debug                           enable verbose output         kube apiserver string           the address and the port for the Kubernetes API server         kube as group stringArray       group to impersonate for the operation  this flag can be repeated to specify multiple groups          kube as user string             username to impersonate for the operation         kube ca file string             the certificate authority file for the Kubernetes API server connection         kube context string             name of the kubeconfig context to use         kube insecure skip tls verify   if true  the Kubernetes API server s certificate will not be checked for validity  This will make your HTTPS connections insecure         kube tls server name string     server name to use for Kubernetes API server certificate validation  If it is not provided  the hostname used to contact the server is used         kube token string               bearer token used for authentication         kubeconfig string               path to the kubeconfig file    n    namespace string                namespace scope for this request         qps float32                     queries per second used when communicating with the Kubernetes API  not including bursting         registry config string          path to the registry config file  default     config helm registry config json           repository cache string         path to the directory containing cached repository indexes  default     cache helm repository           repository config string        path to the file containing repository names and URLs  default     config helm repositories yaml            SEE ALSO     helm  helm md     The Helm package manager for Kubernetes          "}
{"questions":"helm Captures information about using Helm in specific Kubernetes environments title Kubernetes Distribution Guide Kubernetes https github com cncf k8s conformance whether weight 10 or not aliases docs kubernetesdistros Helm should work with any conformant version of","answers":"---\ntitle: \"Kubernetes Distribution Guide\"\ndescription: \"Captures information about using Helm in specific Kubernetes environments.\"\naliases: [\"\/docs\/kubernetes_distros\/\"]\nweight: 10\n---\n\nHelm should work with any [conformant version of\nKubernetes](https:\/\/github.com\/cncf\/k8s-conformance) (whether\n[certified](https:\/\/www.cncf.io\/certification\/software-conformance\/) or not).\n\nThis document captures information about using Helm in specific Kubernetes\nenvironments. Please contribute more details about any distros (sorted\nalphabetically) if desired.\n\n\n## AKS\n\nHelm works with [Azure Kubernetes\nService](https:\/\/docs.microsoft.com\/en-us\/azure\/aks\/kubernetes-helm).\n\n## DC\/OS\n\nHelm has been tested and is working on Mesosphere's DC\/OS 1.11 Kubernetes\nplatform, and requires no additional configuration.\n\n## EKS\n\nHelm works with Amazon Elastic Kubernetes Service (Amazon EKS):\n[Using Helm with Amazon\nEKS](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/helm.html).\n\n## GKE\n\nGoogle's GKE hosted Kubernetes platform is known to work with Helm, and requires\nno additional configuration.\n\n## `scripts\/local-cluster` and Hyperkube\n\nHyperkube configured via `scripts\/local-cluster.sh` is known to work. 
For raw\nHyperkube you may need to do some manual configuration.\n\n## IKS\n\nHelm works with [IBM Cloud Kubernetes\nService](https:\/\/cloud.ibm.com\/docs\/containers?topic=containers-helm).\n\n## KIND (Kubernetes IN Docker)\n\nHelm is regularly tested on [KIND](https:\/\/github.com\/kubernetes-sigs\/kind).\n\n## KubeOne\n\nHelm works in clusters that are set up by KubeOne without caveats.\n\n## Kubermatic\n\nHelm works in user clusters that are created by Kubermatic without caveats.\nSince seed clusters can be set up in different ways, Helm support depends on their\nconfiguration.\n\n## MicroK8s\n\nHelm can be enabled in [MicroK8s](https:\/\/microk8s.io) using the command:\n`microk8s.enable helm3`\n\n## Minikube\n\nHelm is tested and known to work with\n[Minikube](https:\/\/github.com\/kubernetes\/minikube). It requires no additional\nconfiguration.\n\n## Openshift\n\nHelm works straightforwardly on OpenShift Online, OpenShift Dedicated, OpenShift\nContainer Platform (version >= 3.6) or OpenShift Origin (version >= 3.6). To\nlearn more, read [this\nblog](https:\/\/blog.openshift.com\/getting-started-helm-openshift\/) post.\n\n## Platform9\n\nHelm is pre-installed with [Platform9 Managed\nKubernetes](https:\/\/platform9.com\/managed-kubernetes\/?utm_source=helm_distro_notes).\nPlatform9 provides access to all official Helm charts through the App Catalog UI\nand native Kubernetes CLI. 
Additional repositories can be manually added.\nFurther details are available in this [Platform9 App Catalog\narticle](https:\/\/platform9.com\/support\/deploying-kubernetes-apps-platform9-managed-kubernetes\/?utm_source=helm_distro_notes).\n\n## Ubuntu with `kubeadm`\n\nKubernetes bootstrapped with `kubeadm` is known to work on the following Linux\ndistributions:\n\n- Ubuntu 16.04\n- Fedora release 25\n\nSome versions of Helm (v2.0.0-beta2) require you to `export\nKUBECONFIG=\/etc\/kubernetes\/admin.conf` or create a `~\/.kube\/config`.\n\n## VMware Tanzu Kubernetes Grid\n\nHelm runs on VMware Tanzu Kubernetes Grid (TKG) without needing configuration changes. \nThe Tanzu CLI can manage installing packages for [helm-controller](https:\/\/fluxcd.io\/flux\/components\/helm\/), allowing Helm chart releases to be managed declaratively. \nFurther details are available in the TKG documentation for [CLI-Managed Packages](https:\/\/docs.vmware.com\/en\/VMware-Tanzu-Kubernetes-Grid\/1.6\/vmware-tanzu-kubernetes-grid-16\/GUID-packages-user-managed-index.html#package-locations-and-dependencies-5).","site":"helm","answers_cleaned":"    title   Kubernetes Distribution Guide  description   Captures information about using Helm in specific Kubernetes environments   aliases     docs kubernetes distros    weight  10      Helm should work with any  conformant version of Kubernetes  https   github com cncf k8s conformance   whether  certified  https   www cncf io certification software conformance   or not    This document captures information about using Helm in specific Kubernetes environments  Please contribute more details about any distros  sorted alphabetically  if desired       AKS  Helm works with  Azure Kubernetes Service  https   docs microsoft com en us azure aks kubernetes helm       DC OS  Helm has been tested and is working on Mesospheres DC OS 1 11 Kubernetes platform  and requires no additional configuration      EKS  Helm works with Amazon Elastic Kubernetes Service  
Amazon EKS    Using Helm with Amazon EKS  https   docs aws amazon com eks latest userguide helm html       GKE  Google s GKE hosted Kubernetes platform is known to work with Helm  and requires no additional configuration       scripts local cluster  and Hyperkube  Hyperkube configured via  scripts local cluster sh  is known to work  For raw Hyperkube you may need to do some manual configuration      IKS  Helm works with  IBM Cloud Kubernetes Service  https   cloud ibm com docs containers topic containers helm       KIND  Kubernetes IN Docker   Helm is regularly tested on  KIND  https   github com kubernetes sigs kind       KubeOne  Helm works in clusters that are set up by KubeOne without caveats      Kubermatic  Helm works in user clusters that are created by Kubermatic without caveats  Since seed cluster can be set up in different ways Helm support depends on their configuration      MicroK8s  Helm can be enabled in  MicroK8s  https   microk8s io  using the command   microk8s enable helm3      Minikube  Helm is tested and known to work with  Minikube  https   github com kubernetes minikube   It requires no additional configuration      Openshift  Helm works straightforward on OpenShift Online  OpenShift Dedicated  OpenShift Container Platform  version    3 6  or OpenShift Origin  version    3 6   To learn more read  this blog  https   blog openshift com getting started helm openshift   post      Platform9  Helm is pre installed with  Platform9 Managed Kubernetes  https   platform9 com managed kubernetes  utm source helm distro notes   Platform9 provides access to all official Helm charts through the App Catalog UI and native Kubernetes CLI  Additional repositories can be manually added  Further details are available in this  Platform9 App Catalog article  https   platform9 com support deploying kubernetes apps platform9 managed kubernetes  utm source helm distro notes       Ubuntu with  kubeadm   Kubernetes bootstrapped with  kubeadm  is known to work on the 
following Linux distributions     Ubuntu 16 04   Fedora release 25  Some versions of Helm  v2 0 0 beta2  require you to  export KUBECONFIG  etc kubernetes admin conf  or create a     kube config       VMware Tanzu Kubernetes Grid  Helm runs on VMware Tanzu Kubernetes Grid  TKG  without needing configuration changes   The Tanzu CLI can manage installing packages for  helm controller  https   fluxcd io flux components helm   allowing for declaratively managing Helm chart releases   Further details available in the TKG documentation for  CLI Managed Packages  https   docs vmware com en VMware Tanzu Kubernetes Grid 1 6 vmware tanzu kubernetes grid 16 GUID packages user managed index html package locations and dependencies 5  "}
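The `kubeadm` note above boils down to two kubeconfig options; a minimal shell sketch (assuming a stock `kubeadm` control-plane node where `/etc/kubernetes/admin.conf` exists):

```shell
# Option 1: point this shell session at the admin kubeconfig
export KUBECONFIG=/etc/kubernetes/admin.conf

# Option 2: copy it to the default location kubectl and helm look in
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```

Option 2 is usually preferable on multi-user machines, since the copied file is owned by the invoking user rather than root.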
{"questions":"helm aliases docs chartshooks Describes how to work with chart hooks certain points in a release s life cycle For example you can use hooks to weight 2 Helm provides a hook mechanism to allow chart developers to intervene at title Chart Hooks","answers":"---\ntitle: \"Chart Hooks\"\ndescription: \"Describes how to work with chart hooks.\"\naliases: [\"\/docs\/charts_hooks\/\"]\nweight: 2\n---\n\nHelm provides a _hook_ mechanism to allow chart developers to intervene at\ncertain points in a release's life cycle. For example, you can use hooks to:\n\n- Load a ConfigMap or Secret during install before any other charts are loaded.\n- Execute a Job to back up a database before installing a new chart, and then\n  execute a second job after the upgrade in order to restore data.\n- Run a Job before deleting a release to gracefully take a service out of\n  rotation before removing it.\n\nHooks work like regular templates, but they have special annotations that cause\nHelm to utilize them differently. 
In this section, we cover the basic usage\npattern for hooks.\n\n## The Available Hooks\n\nThe following hooks are defined:\n\n| Annotation Value | Description                                                                                           |\n| ---------------- | ----------------------------------------------------------------------------------------------------- |\n| `pre-install`    | Executes after templates are rendered, but before any resources are created in Kubernetes             |\n| `post-install`   | Executes after all resources are loaded into Kubernetes                                               |\n| `pre-delete`     | Executes on a deletion request before any resources are deleted from Kubernetes                       |\n| `post-delete`    | Executes on a deletion request after all of the release's resources have been deleted                 |\n| `pre-upgrade`    | Executes on an upgrade request after templates are rendered, but before any resources are updated     |\n| `post-upgrade`   | Executes on an upgrade request after all resources have been upgraded                                 |\n| `pre-rollback`   | Executes on a rollback request after templates are rendered, but before any resources are rolled back |\n| `post-rollback`  | Executes on a rollback request after all resources have been modified                                 |\n| `test`           | Executes when the Helm test subcommand is invoked ([view test docs](\/docs\/chart_tests\/))              |\n\n_Note that the `crd-install` hook has been removed in favor of the `crds\/`\ndirectory in Helm 3._\n\n## Hooks and the Release Lifecycle\n\nHooks allow you, the chart developer, an opportunity to perform operations at\nstrategic points in a release lifecycle. For example, consider the lifecycle for\na `helm install`. By default, the lifecycle looks like this:\n\n1. User runs `helm install foo`\n2. The Helm library install API is called\n3. 
After some verification, the library renders the `foo` templates\n4. The library loads the resulting resources into Kubernetes\n5. The library returns the release object (and other data) to the client\n6. The client exits\n\nHelm defines two hooks for the `install` lifecycle: `pre-install` and\n`post-install`. If the developer of the `foo` chart implements both hooks, the\nlifecycle is altered like this:\n\n1. User runs `helm install foo`\n2. The Helm library install API is called\n3. CRDs in the `crds\/` directory are installed\n4. After some verification, the library renders the `foo` templates\n5. The library prepares to execute the `pre-install` hooks (loading hook\n   resources into Kubernetes)\n6. The library sorts hooks by weight (assigning a weight of 0 by default), \n   by resource kind and finally by name in ascending order.\n7. The library then loads the hook with the lowest weight first (negative to\n   positive)\n8. The library waits until the hook is \"Ready\" (except for CRDs)\n9. The library loads the resulting resources into Kubernetes. Note that if the\n   `--wait` flag is set, the library will wait until all resources are in a\n   ready state and will not run the `post-install` hook until they are ready.\n10. The library executes the `post-install` hook (loading hook resources)\n11. The library waits until the hook is \"Ready\"\n12. The library returns the release object (and other data) to the client\n13. The client exits\n\nWhat does it mean to wait until a hook is ready? This depends on the resource\ndeclared in the hook. If the resource is a `Job` or `Pod` kind, Helm will wait\nuntil it successfully runs to completion. And if the hook fails, the release\nwill fail. This is a _blocking operation_, so the Helm client will pause while\nthe Job is run.\n\nFor all other kinds, as soon as Kubernetes marks the resource as loaded (added\nor updated), the resource is considered \"Ready\". 
When many resources are\ndeclared in a hook, the resources are executed serially. If they have hook\nweights (see below), they are executed in weighted order.\nStarting from Helm 3.2.0, hook resources with the same weight are installed in the same\norder as normal non-hook resources. Otherwise, ordering is\nnot guaranteed. (In Helm 2.3.0 and after, they are sorted alphabetically. That\nbehavior, though, is not considered binding and could change in the future.) It\nis considered good practice to add a hook weight, and set it to `0` if weight is\nnot important.\n\n### Hook resources are not managed with corresponding releases\n\nThe resources that a hook creates are currently not tracked or managed as part\nof the release. Once Helm verifies that the hook has reached its ready state, it\nwill leave the hook resource alone. Garbage collection of hook resources when\nthe corresponding release is deleted may be added to Helm 3 in the future, so\nany hook resources that must never be deleted should be annotated with\n`helm.sh\/resource-policy: keep`.\n\nPractically speaking, this means that if you create resources in a hook, you\ncannot rely upon `helm uninstall` to remove the resources. To destroy such\nresources, you need to either [add a custom `helm.sh\/hook-delete-policy`\nannotation](#hook-deletion-policies) to the hook template file, or [set the time\nto live (TTL) field of a Job\nresource](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/ttlafterfinished\/).\n\n## Writing a Hook\n\nHooks are just Kubernetes manifest files with special annotations in the\n`metadata` section. 
Because they are template files, you can use all of the\nnormal template features, including reading `.Values`, `.Release`, and\n`.Template`.\n\nFor example, this template, stored in `templates\/post-install-job.yaml`,\ndeclares a job to be run on `post-install`:\n\n```yaml\napiVersion: batch\/v1\nkind: Job\nmetadata:\n  name: \"{{ .Release.Name }}\"\n  labels:\n    app.kubernetes.io\/managed-by: {{ .Release.Service | quote }}\n    app.kubernetes.io\/instance: {{ .Release.Name | quote }}\n    app.kubernetes.io\/version: {{ .Chart.AppVersion }}\n    helm.sh\/chart: \"{{ .Chart.Name }}-{{ .Chart.Version }}\"\n  annotations:\n    # This is what defines this resource as a hook. Without this line, the\n    # job is considered part of the release.\n    \"helm.sh\/hook\": post-install\n    \"helm.sh\/hook-weight\": \"-5\"\n    \"helm.sh\/hook-delete-policy\": hook-succeeded\nspec:\n  template:\n    metadata:\n      name: \"{{ .Release.Name }}\"\n      labels:\n        app.kubernetes.io\/managed-by: {{ .Release.Service | quote }}\n        app.kubernetes.io\/instance: {{ .Release.Name | quote }}\n        helm.sh\/chart: \"{{ .Chart.Name }}-{{ .Chart.Version }}\"\n    spec:\n      restartPolicy: Never\n      containers:\n      - name: post-install-job\n        image: \"alpine:3.3\"\n        command: [\"\/bin\/sleep\",\"{{ default \"10\" .Values.sleepyTime }}\"]\n\n```\n\nWhat makes this template a hook is the annotation:\n\n```yaml\nannotations:\n  \"helm.sh\/hook\": post-install\n```\n\nOne resource can implement multiple hooks:\n\n```yaml\nannotations:\n  \"helm.sh\/hook\": post-install,post-upgrade\n```\n\nSimilarly, there is no limit to the number of different resources that may\nimplement a given hook. For example, one could declare both a secret and a\nconfig map as a pre-install hook.\n\nWhen subcharts declare hooks, those are also evaluated. There is no way for a\ntop-level chart to disable the hooks declared by subcharts.\n\nIt is possible to define a weight for a hook which will help build a\ndeterministic executing order. Weights are defined using the following\nannotation:\n\n```yaml\nannotations:\n  \"helm.sh\/hook-weight\": \"5\"\n```\n\nHook weights can be positive or negative numbers but must be represented as\nstrings. 
When Helm starts the execution cycle of hooks of a particular Kind it\nwill sort those hooks in ascending order.\n\n### Hook deletion policies\n\nIt is possible to define policies that determine when to delete corresponding\nhook resources. Hook deletion policies are defined using the following\nannotation:\n\n```yaml\nannotations:\n  \"helm.sh\/hook-delete-policy\": before-hook-creation,hook-succeeded\n```\n\nYou can choose one or more defined annotation values:\n\n| Annotation Value       | Description                                                          |\n| ---------------------- | -------------------------------------------------------------------- |\n| `before-hook-creation` | Delete the previous resource before a new hook is launched (default) |\n| `hook-succeeded`       | Delete the resource after the hook is successfully executed          |\n| `hook-failed`          | Delete the resource if the hook failed during execution              |\n\nIf no hook deletion policy annotation is specified, the `before-hook-creation`\nbehavior applies by default.","site":"helm"}
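Putting these annotations together: a minimal sketch of a hook template, stored in a hypothetical `templates/pre-install-secret.yaml`, that combines a hook annotation, a weight, and a deletion policy (the resource name and values key are illustrative, not from the docs):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: "{{ .Release.Name }}-bootstrap"  # hypothetical name
  annotations:
    # Run before any release resources are created
    "helm.sh/hook": pre-install
    # Low weight: runs before pre-install hooks with higher weights
    "helm.sh/hook-weight": "-10"
    # Replace any stale copy before each run; delete once the hook succeeds
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
type: Opaque
stringData:
  bootstrapToken: {{ .Values.bootstrapToken | default "changeme" | quote }}
```

With weight `-10` this Secret is loaded before any pre-install hook of higher weight, and the deletion policy cleans up the previous copy before each run and removes the Secret after the hook succeeds.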
{"questions":"helm weight 13 and managing releases in one or more clusters title Migrating Helm v2 to v3 This guide shows how to migrate Helm v2 to v3 Helm v2 needs to be installed Learn how to migrate Helm v2 to v3 Overview of Helm 3 Changes","answers":"---\ntitle: \"Migrating Helm v2 to v3\"\ndescription: \"Learn how to migrate Helm v2 to v3.\"\nweight: 13\n---\n\nThis guide shows how to migrate Helm v2 to v3. Helm v2 needs to be installed\nand managing releases in one or more clusters.\n\n## Overview of Helm 3 Changes\n\nThe full list of changes from Helm 2 to 3 is documented in the [FAQ\nsection](https:\/\/v3.helm.sh\/docs\/faq\/#changes-since-helm-2). The following is a\nsummary of some of those changes that a user should be aware of before and\nduring migration:\n\n1. Removal of Tiller:\n   - Replaces client\/server with client\/library architecture (`helm` binary\n     only)\n   - Security is now on a per-user basis (delegated to Kubernetes user cluster\n     security)\n   - Releases are now stored as in-cluster secrets and the release object\n     metadata has changed\n   - Releases are persisted on a release namespace basis and not in the Tiller\n     namespace anymore\n2. Chart repository updated:\n   - `helm search` now supports both local repository searches and making search\n     queries against Artifact Hub\n3. Chart apiVersion bumped to \"v2\" for the following specification changes:\n   - Dynamically linked chart dependencies moved to `Chart.yaml`\n     (`requirements.yaml` removed and requirements --> dependencies)\n   - Library charts (helper\/common charts) can now be added as dynamically\n     linked chart dependencies\n   - Charts have a `type` metadata field to define the chart to be of an\n     `application` or `library` chart. It is application by default which means\n     it is renderable and installable\n   - Helm 2 charts (apiVersion=v1) are still installable\n4. 
XDG directory specification added:\n   - Helm home removed and replaced with XDG directory specification for storing\n     configuration files\n   - No longer need to initialize Helm\n   - `helm init` and `helm home` removed\n5. Additional changes:\n   - Helm install\/set-up is simplified:\n     - Helm client (helm binary) only (no Tiller)\n     - Run-as-is paradigm\n   - `local` or `stable` repositories are not set-up by default\n   - `crd-install` hook removed and replaced with `crds` directory in chart\n     where all CRDs defined in it will be installed before any rendering of the\n     chart\n   - `test-failure` hook annotation value removed, and `test-success`\n     deprecated. Use `test` instead\n   - Commands removed\/replaced\/added:\n       - delete --> uninstall : removes all release history by default\n         (previously needed `--purge`)\n       - fetch --> pull\n       - home (removed)\n       - init (removed)\n       - install: requires release name or `--generate-name` argument\n       - inspect --> show\n       - reset (removed)\n       - serve (removed)\n       - template: `-x`\/`--execute` argument renamed to `-s`\/`--show-only`\n       - upgrade: Added argument `--history-max` which limits the maximum number\n         of revisions saved per release (0 for no limit)\n   - Helm 3 Go library has undergone a lot of changes and is incompatible with\n     the Helm 2 library\n   - Release binaries are now hosted on `get.helm.sh`\n\n## Migration Use Cases\n\nThe migration use cases are as follows:\n\n1. Helm v2 and v3 managing the same cluster:\n   - This use case is only recommended if you intend to phase out Helm v2\n     gradually and do not require v3 to manage any releases deployed by v2. All\n     new releases being deployed should be performed by v3 and existing v2\n     deployed releases are updated\/removed by v2 only\n   - Helm v2 and v3 can quite happily manage the same cluster. 
The Helm versions\n     can be installed on the same or separate systems\n   - If installing Helm v3 on the same system, you need to perform an additional\n     step to ensure that both client versions can co-exist until ready to remove\n     Helm v2 client. Rename or put the Helm v3 binary in a different folder to\n     avoid conflict\n   - Otherwise there are no conflicts between both versions because of the\n     following distinctions:\n     - v2 and v3 release (history) storage are independent of each other. The\n       changes include the Kubernetes resource for storage and the release\n       object metadata contained in the resource. Releases will also be on a per\n       user namespace instead of using the Tiller namespace (for example, v2\n       default Tiller namespace kube-system). v2 uses \"ConfigMaps\" or \"Secrets\"\n       under the Tiller namespace and `TILLER` ownership. v3 uses \"Secrets\" in\n       the user namespace and `helm` ownership. Releases are incremental in both\n       v2 and v3\n     - The only issue could be if Kubernetes cluster scoped resources (e.g.\n       `clusterroles.rbac`) are defined in a chart. The v3 deployment would then\n       fail even if unique in the namespace as the resources would clash\n     - v3 configuration no longer uses `$HELM_HOME` and uses XDG directory\n       specification instead. It is also created on the fly as need be. It is\n       therefore independent of v2 configuration. This is applicable only when\n       both versions are installed on the same system\n\n2. Migrating Helm v2 to Helm v3:\n   - This use case applies when you want Helm v3 to manage existing Helm v2\n     releases\n   - It should be noted that a Helm v2 client:\n     - can manage 1 to many Kubernetes clusters\n     - can connect to 1 to many Tiller instances for a cluster\n   - This means that you have to be aware of this when migrating as releases\n     are deployed into clusters by Tiller and its namespace. 
You have to\n     therefore be aware of migrating for each cluster and each Tiller instance\n     that is managed by the Helm v2 client instance\n   - The recommended data migration path is as follows:\n     1. Backup v2 data\n     2. Migrate Helm v2 configuration\n     3. Migrate Helm v2 releases\n     4. When confident that Helm v3 is managing all Helm v2 data (for all\n        clusters and Tiller instances of the Helm v2 client instance) as\n        expected, then clean up Helm v2 data\n   - The migration process is automated by the Helm v3\n     [2to3](https:\/\/github.com\/helm\/helm-2to3) plugin\n\n## Reference\n\n   - Helm v3 [2to3](https:\/\/github.com\/helm\/helm-2to3) plugin\n   - Blog [post](https:\/\/helm.sh\/blog\/migrate-from-helm-v2-to-helm-v3\/)\n     explaining `2to3` plugin usage with examples","site":"helm","answers_cleaned":"    title   Migrating Helm v2 to v3  description   Learn how to migrate Helm v2 to v3   weight  13      This guide shows how to migrate  Helm v2 to v3  Helm v2 needs to be installed and managing releases in one or more clusters      Overview of Helm 3 Changes  The full list of changes from Helm 2 to 3 are documented in the  FAQ section  https   v3 helm sh docs faq  changes since helm 2   The following is a summary of some of those changes that a user should be aware of before and during migration   1  Removal of Tiller       Replaces client server with client library architecture   helm  binary      only       Security is now on per user basis  delegated to Kubernetes user cluster      security       Releases are now stored as in cluster secrets and the release object      metadata has changed      Releases are persisted on a release namespace basis and not in the Tiller      namespace anymore 2  Chart repository updated        helm search  now supports both local repository searches and making search      queries against Artifact Hub 3  Chart apiVersion bumped to  v2  for following specification changes       Dynamically 
linked chart dependencies moved to  Chart yaml         requirements yaml  removed and  requirements     dependencies       Library charts  helper common charts  can now be added as dynamically      linked chart dependencies      Charts have a  type  metadata field to define the chart to be of an       application  or  library  chart  It is application by default which means      it is renderable and installable      Helm 2 charts  apiVersion v1  are still installable 4  XDG directory specification added       Helm home removed and replaced with XDG directory specification for storing      configuration files      No longer need to initialize Helm       helm init  and  helm home  removed 5  Additional changes       Helm install set up is simplified         Helm client  helm binary  only  no Tiller         Run as is paradigm       local  or  stable  repositories are not set up by default       crd install  hook removed and replaced with  crds  directory in chart      where all CRDs defined in it will be installed before any rendering of the      chart       test failure  hook annotation value removed  and  test success       deprecated  Use  test  instead      Commands removed replaced added           delete     uninstall   removes all release history by default           previously needed    purge            fetch     pull          home  removed           init  removed           install  requires release name or    generate name  argument          inspect     show          reset  removed           serve  removed           template    x     execute  argument renamed to   s     show only           upgrade  Added argument    history max  which limits the maximum number          of revisions saved per release  0 for no limit       Helm 3 Go library has undergone a lot of changes and is incompatible with      the Helm 2 library      Release binaries are now hosted on  get helm sh      Migration Use Cases  The migration use cases are as follows   1  Helm v2 and v3 
"}
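The recommended Helm v2→v3 migration path described in the record above is automated by the `2to3` plugin. A hedged console sketch of the flow (the release name `my-release` is illustrative, and you should back up v2 data before converting anything):

```console
$ helm plugin install https://github.com/helm/helm-2to3
$ helm 2to3 move config --dry-run    # preview the v2 configuration migration
$ helm 2to3 move config              # migrate Helm v2 configuration
$ helm 2to3 convert my-release       # migrate one Helm v2 release
$ helm 2to3 cleanup                  # remove v2 data once everything is verified
```

Run `convert` once per release, for each cluster and Tiller instance the v2 client manages, and only run `cleanup` after confirming Helm v3 manages everything as expected.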
{"questions":"helm title Chart Tests your chart works as expected when it is installed These tests also help the aliases docs charttests together As a chart author you may want to write some tests that validate that A chart contains a number of Kubernetes resources and components that work weight 3 Describes how to run and test your charts","answers":"---\ntitle: \"Chart Tests\"\ndescription: \"Describes how to run and test your charts.\"\naliases: [\"\/docs\/chart_tests\/\"]\nweight: 3\n---\n\nA chart contains a number of Kubernetes resources and components that work\ntogether. As a chart author, you may want to write some tests that validate that\nyour chart works as expected when it is installed. These tests also help the\nchart consumer understand what your chart is supposed to do.\n\nA **test** in a helm chart lives under the `templates\/` directory and is a job\ndefinition that specifies a container with a given command to run. The container\nshould exit successfully (exit 0) for a test to be considered a success. The job\ndefinition must contain the helm test hook annotation: `helm.sh\/hook: test`.\n\nNote that until Helm v3, the job definition needed to contain one of these helm\ntest hook annotations: `helm.sh\/hook: test-success` or `helm.sh\/hook: test-failure`.\n`helm.sh\/hook: test-success` is still accepted as a backwards-compatible\nalternative to `helm.sh\/hook: test`.\n\nExample tests:\n\n- Validate that your configuration from the values.yaml file was properly\n  injected.\n  - Make sure your username and password work correctly\n  - Make sure an incorrect username and password does not work\n- Assert that your services are up and correctly load balancing\n- etc.\n\nYou can run the pre-defined tests in Helm on a release using the command `helm\ntest <RELEASE_NAME>`. 
For a chart consumer, this is a great way to check that\ntheir release of a chart (or application) works as expected.\n\n## Example Test\n\nThe [helm create](\/docs\/helm\/helm_create) command will automatically create a number of folders and files. To try the helm test functionality, first create a demo helm chart. \n\n```console\n$ helm create demo\n```\n\nYou will now be able to see the following structure in your demo helm chart.\n\n```\ndemo\/\n  Chart.yaml\n  values.yaml\n  charts\/\n  templates\/\n  templates\/tests\/test-connection.yaml\n```\n\nIn `demo\/templates\/tests\/test-connection.yaml` you'll see a test you can try. You can see the helm test pod definition here:\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: \"{{ include \"demo.fullname\" . }}-test-connection\"\n  labels:\n    {{- include \"demo.labels\" . | nindent 4 }}\n  annotations:\n    \"helm.sh\/hook\": test\nspec:\n  containers:\n    - name: wget\n      image: busybox\n      command: ['wget']\n      args: ['{{ include \"demo.fullname\" . }}:{{ .Values.service.port }}']\n  restartPolicy: Never\n\n```\n\n## Steps to Run a Test Suite on a Release\n\nFirst, install the chart on your cluster to create a release. 
You may have to\nwait for all pods to become active; if you test immediately after this install,\nit is likely to show a transient failure, and you will want to re-test.\n\n```console\n$ helm install demo demo --namespace default\n$ helm test demo\nNAME: demo\nLAST DEPLOYED: Mon Feb 14 20:03:16 2022\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nTEST SUITE:     demo-test-connection\nLast Started:   Mon Feb 14 20:35:19 2022\nLast Completed: Mon Feb 14 20:35:23 2022\nPhase:          Succeeded\n[...]\n```\n\n## Notes\n\n- You can define as many tests as you would like in a single yaml file or spread\n  across several yaml files in the `templates\/` directory.\n- You are welcome to nest your test suite under a `tests\/` directory like\n  `<chart-name>\/templates\/tests\/` for more isolation.\n- A test is a [Helm hook](\/docs\/charts_hooks\/), so annotations like\n  `helm.sh\/hook-weight` and `helm.sh\/hook-delete-policy` may be used with test\n  resources.","site":"helm"}
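As a sketch of the credential-validation idea listed under "Example tests" in the record above, a hand-written test pod might look like the following. All names here (`mydb`, the `mydb-credentials` Secret, the Postgres image) are hypothetical stand-ins, not something `helm create` generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "mydb-test-login"            # hypothetical; real charts template this name
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: db-login
      image: postgres:15
      env:
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: mydb-credentials   # assumed Secret installed by the chart
              key: password
      # the test succeeds only if the injected credentials actually authenticate
      command: ['psql', '-h', 'mydb', '-U', 'postgres', '-c', 'SELECT 1;']
  restartPolicy: Never
```

Because the hook annotation is `helm.sh/hook: test`, this pod only runs during `helm test <RELEASE_NAME>`, and its exit code decides whether the test passes.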
{"questions":"helm aliases docs registries weight 7 title Use OCI based registries Beginning in Helm 3 you can use container registries with support to store and share chart packages Beginning in Helm v3 8 0 OCI support is enabled by default Describes how to use OCI for Chart distribution","answers":"---\ntitle: \"Use OCI-based registries\"\ndescription: \"Describes how to use OCI for Chart distribution.\"\naliases: [\"\/docs\/registries\/\"]\nweight: 7\n---\n\nBeginning in Helm 3, you can use container registries with [OCI](https:\/\/www.opencontainers.org\/) support to store and share chart packages. Beginning in Helm v3.8.0, OCI support is enabled by default. \n\n\n## OCI support prior to v3.8.0\n\nOCI support graduated from experimental to general availability with Helm v3.8.0. In prior versions of Helm, OCI support behaved differently. If you were using OCI support prior to Helm v3.8.0, it's important to understand what has changed with different versions of Helm.\n\n### Enabling OCI support prior to v3.8.0\n\nPrior to Helm v3.8.0, OCI support is *experimental* and must be enabled.\n\nTo enable OCI experimental support for Helm versions prior to v3.8.0, set `HELM_EXPERIMENTAL_OCI` in your environment. For example:\n\n```console\nexport HELM_EXPERIMENTAL_OCI=1\n```\n\n### OCI feature deprecation and behavior changes with v3.8.0\n\nWith the release of [Helm v3.8.0](https:\/\/github.com\/helm\/helm\/releases\/tag\/v3.8.0), the following features and behaviors are different from previous versions of Helm:\n\n- When setting a chart in the dependencies as OCI, the version can be set to a range like other dependencies.\n- SemVer tags that include build information can be pushed and used. OCI registries don't support `+` as a tag character. Helm translates the `+` to `_` when stored as a tag.\n- The `helm registry login` command now follows the same structure as the Docker CLI for storing credentials. 
The same location for registry configuration can be passed to both Helm and the Docker CLI.\n\n### OCI feature deprecation and behavior changes with v3.7.0\n\nThe release of [Helm v3.7.0](https:\/\/github.com\/helm\/helm\/releases\/tag\/v3.7.0) included the implementation of [HIP 6](https:\/\/github.com\/helm\/community\/blob\/main\/hips\/hip-0006.md) for OCI support. As a result, the following features and behaviors are different from previous versions of Helm:\n\n- The `helm chart` subcommand has been removed.\n- The chart cache has been removed (no `helm chart list` etc.).\n- OCI registry references are now always prefixed with `oci:\/\/`.\n- The basename of the registry reference must *always* match the chart's name.\n- The tag of the registry reference must *always* match the chart's semantic version (i.e. no `latest` tags).\n- The chart layer media type was switched from `application\/tar+gzip` to `application\/vnd.cncf.helm.chart.content.v1.tar+gzip`.\n\n\n## Using an OCI-based registry\n\n### Helm repositories in OCI-based registries\n\nA [Helm repository]() is a way to house and distribute packaged Helm charts. An OCI-based registry can contain zero or more Helm repositories and each of those repositories can contain zero or more packaged Helm charts.\n\n### Use hosted registries\n\nThere are several hosted container registries with OCI support that you can use for your Helm charts. 
For example:\n\n- [Amazon ECR](https:\/\/docs.aws.amazon.com\/AmazonECR\/latest\/userguide\/push-oci-artifact.html)\n- [Azure Container Registry](https:\/\/docs.microsoft.com\/azure\/container-registry\/container-registry-helm-repos#push-chart-to-registry-as-oci-artifact)\n- [Docker Hub](https:\/\/docs.docker.com\/docker-hub\/oci-artifacts\/)\n- [Google Artifact Registry](https:\/\/cloud.google.com\/artifact-registry\/docs\/helm\/manage-charts)\n- [Harbor](https:\/\/goharbor.io\/docs\/main\/administration\/user-defined-oci-artifact\/)\n- [IBM Cloud Container Registry](https:\/\/cloud.ibm.com\/docs\/Registry?topic=Registry-registry_helm_charts)\n- [JFrog Artifactory](https:\/\/jfrog.com\/help\/r\/jfrog-artifactory-documentation\/helm-oci-repositories)\n  \n\nFollow the hosted container registry provider's documentation to create and configure a registry with OCI support. \n\n**Note:**  You can run [Docker Registry](https:\/\/docs.docker.com\/registry\/deploying\/) or [`zot`](https:\/\/github.com\/project-zot\/zot), which are OCI-based registries, on your development computer. Running an OCI-based registry on your development computer should only be used for testing purposes.\n\n### Using sigstore to sign OCI-based charts\n\nThe [`helm-sigstore`](https:\/\/github.com\/sigstore\/helm-sigstore) plugin allows using [Sigstore](https:\/\/sigstore.dev\/) to sign Helm charts with the same tools used to sign container images.  
This provides an alternative to the [GPG-based provenance]() supported by classic [chart repositories]().\n\nFor more details on using the `helm sigstore` plugin, see [that project's documentation](https:\/\/github.com\/sigstore\/helm-sigstore\/blob\/main\/USAGE.md).\n\n## Commands for working with registries\n\n### The `registry` subcommand\n\n#### `login`\n\nlogin to a registry (with manual password entry)\n\n```console\n$ helm registry login -u myuser localhost:5000\nPassword:\nLogin succeeded\n```\n\n#### `logout`\n\nlogout from a registry\n\n```console\n$ helm registry logout localhost:5000\nLogout succeeded\n```\n\n### The `push` subcommand\n\nUpload a chart to an OCI-based registry:\n\n```console\n$ helm push mychart-0.1.0.tgz oci:\/\/localhost:5000\/helm-charts\nPushed: localhost:5000\/helm-charts\/mychart:0.1.0\nDigest: sha256:ec5f08ee7be8b557cd1fc5ae1a0ac985e8538da7c93f51a51eff4b277509a723\n```\n\nThe `push` subcommand can only be used against `.tgz` files\ncreated ahead of time using `helm package`.\n\nWhen using `helm push` to upload a chart to an OCI registry, the reference\nmust be prefixed with `oci:\/\/` and must not contain the basename or tag.\n\nThe registry reference basename is inferred from the chart's name,\nand the tag is inferred from the chart's semantic version. This is\ncurrently a strict requirement.\n\nCertain registries require the repository and\/or namespace (if specified)\nto be created beforehand. Otherwise, an error will be produced during the\n`helm push` operation.\n\nIf you have created a [provenance file]() (`.prov`), and it is present next to the chart `.tgz` file, it will\nautomatically be uploaded to the registry upon `push`. 
This results in\nan extra layer on [the Helm chart manifest](#helm-chart-manifest).\n\nUsers of the [helm-push plugin](https:\/\/github.com\/chartmuseum\/helm-push) (for uploading charts to [ChartMuseum](#chartmuseum-repository-server))\nmay experience issues, since the plugin conflicts with the new, built-in `push`.\nAs of version v0.10.0, the plugin has been renamed to `cm-push`.\n\n### Other subcommands\n\nSupport for the `oci:\/\/` protocol is also available in various other subcommands.\nHere is a complete list:\n\n- `helm pull`\n- `helm show`\n- `helm template`\n- `helm install`\n- `helm upgrade`\n\nThe basename (chart name) of the registry reference *is*\nincluded for any type of action involving chart download\n(vs. `helm push` where it is omitted).\n\nHere are a few examples of using the subcommands listed above against\nOCI-based charts:\n\n```\n$ helm pull oci:\/\/localhost:5000\/helm-charts\/mychart --version 0.1.0\nPulled: localhost:5000\/helm-charts\/mychart:0.1.0\nDigest: sha256:0be7ec9fb7b962b46d81e4bb74fdcdb7089d965d3baca9f85d64948b05b402ff\n\n$ helm show all oci:\/\/localhost:5000\/helm-charts\/mychart --version 0.1.0\napiVersion: v2\nappVersion: 1.16.0\ndescription: A Helm chart for Kubernetes\nname: mychart\n...\n\n$ helm template myrelease oci:\/\/localhost:5000\/helm-charts\/mychart --version 0.1.0\n---\n# Source: mychart\/templates\/serviceaccount.yaml\napiVersion: v1\nkind: ServiceAccount\n...\n\n$ helm install myrelease oci:\/\/localhost:5000\/helm-charts\/mychart --version 0.1.0\nNAME: myrelease\nLAST DEPLOYED: Wed Oct 27 15:11:40 2021\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nNOTES:\n...\n\n$ helm upgrade myrelease oci:\/\/localhost:5000\/helm-charts\/mychart --version 0.2.0\nRelease \"myrelease\" has been upgraded. 
Happy Helming!\nNAME: myrelease\nLAST DEPLOYED: Wed Oct 27 15:12:05 2021\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 2\nNOTES:\n...\n```\n\n## Specifying dependencies\n\nDependencies of a chart can be pulled from a registry using the `dependency update` subcommand.\n\nThe `repository` for a given entry in `Chart.yaml` is specified as the registry reference without the basename:\n\n```\ndependencies:\n  - name: mychart\n    version: \"2.7.0\"\n    repository: \"oci:\/\/localhost:5000\/myrepo\"\n```\nThis will fetch `oci:\/\/localhost:5000\/myrepo\/mychart:2.7.0` when `dependency update` is executed.\n\n## Helm chart manifest\n\nExample Helm chart manifest as represented in a registry\n(note the `mediaType` fields):\n```json\n{\n  \"schemaVersion\": 2,\n  \"config\": {\n    \"mediaType\": \"application\/vnd.cncf.helm.config.v1+json\",\n    \"digest\": \"sha256:8ec7c0f2f6860037c19b54c3cfbab48d9b4b21b485a93d87b64690fdb68c2111\",\n    \"size\": 117\n  },\n  \"layers\": [\n    {\n      \"mediaType\": \"application\/vnd.cncf.helm.chart.content.v1.tar+gzip\",\n      \"digest\": \"sha256:1b251d38cfe948dfc0a5745b7af5ca574ecb61e52aed10b19039db39af6e1617\",\n      \"size\": 2487\n    }\n  ]\n}\n```\n\nThe following example contains a\n[provenance file]()\n(note the extra layer):\n\n```json\n{\n  \"schemaVersion\": 2,\n  \"config\": {\n    \"mediaType\": \"application\/vnd.cncf.helm.config.v1+json\",\n    \"digest\": \"sha256:8ec7c0f2f6860037c19b54c3cfbab48d9b4b21b485a93d87b64690fdb68c2111\",\n    \"size\": 117\n  },\n  \"layers\": [\n    {\n      \"mediaType\": \"application\/vnd.cncf.helm.chart.content.v1.tar+gzip\",\n      \"digest\": \"sha256:1b251d38cfe948dfc0a5745b7af5ca574ecb61e52aed10b19039db39af6e1617\",\n      \"size\": 2487\n    },\n    {\n      \"mediaType\": \"application\/vnd.cncf.helm.chart.provenance.v1.prov\",\n      \"digest\": \"sha256:3e207b409db364b595ba862cdc12be96dcdad8e36c59a03b7b3b61c946a5741a\",\n      \"size\": 643\n    }\n  ]\n}\n```\n\n## 
Migrating from chart repos\n\nMigrating from classic [chart repositories]()\n(index.yaml-based repos) is as simple as using `helm pull`, then using `helm push` to upload the resulting `.tgz` files to a registry.\n\n","site":"helm"}
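The v3.8.0 behavior note in the record above says OCI tags cannot contain `+`, so Helm stores SemVer build metadata with `+` translated to `_`. A toy shell illustration of that mapping (this reproduces the character substitution only, not Helm's actual implementation):

```shell
# SemVer version with build metadata, as it might appear in Chart.yaml
version="1.2.3+build.42"

# OCI tags disallow '+', so the stored tag swaps it for '_'
tag=$(printf '%s' "$version" | tr '+' '_')
echo "$tag"    # prints 1.2.3_build.42
```

The mapping is reversible (`tr '_' '+'`), which is how the original SemVer version can be recovered from the stored tag.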
{"questions":"helm Helm has provenance tools which help chart users verify the integrity and origin aliases docs provenance weight 5 well respected package managers Helm can generate and verify signature files title Helm Provenance and Integrity Describes how to verify the integrity and origin of a Chart of a package Using industry standard tools based on PKI GnuPG and","answers":"---\ntitle: \"Helm Provenance and Integrity\"\ndescription: \"Describes how to verify the integrity and origin of a Chart.\"\naliases: [\"\/docs\/provenance\/\"]\nweight: 5\n---\n\nHelm has provenance tools which help chart users verify the integrity and origin\nof a package. Using industry-standard tools based on PKI, GnuPG, and\nwell-respected package managers, Helm can generate and verify signature files.\n\n## Overview\n\nIntegrity is established by comparing a chart to a provenance record. Provenance\nrecords are stored in _provenance files_, which are stored alongside a packaged\nchart. For example, if a chart is named `myapp-1.2.3.tgz`, its provenance file\nwill be `myapp-1.2.3.tgz.prov`.\n\nProvenance files are generated at packaging time (`helm package --sign ...`),\nand can be checked by multiple commands, notably `helm install --verify`.\n\n## The Workflow\n\nThis section describes a potential workflow for using provenance data\neffectively.\n\nPrerequisites:\n\n- A valid PGP keypair in a binary (not ASCII-armored) format\n- The `helm` command line tool\n- GnuPG command line tools (optional)\n- Keybase command line tools (optional)\n\n**NOTE:** If your PGP private key has a passphrase, you will be prompted to\nenter that passphrase for any commands that support the `--sign` option.\n\nCreating a new chart is the same as before:\n\n```console\n$ helm create mychart\nCreating mychart\n```\n\nOnce ready to package, add the `--sign` flag to `helm package`. 
Also, specify\nthe name under which the signing key is known and the keyring containing the\ncorresponding private key:\n\n```console\n$ helm package --sign --key 'John Smith' --keyring path\/to\/keyring.secret mychart\n```\n\n**Note:** The value of the `--key` argument must be a substring of the desired\nkey's `uid` (in the output of `gpg --list-keys`), for example the name or email.\n**The fingerprint _cannot_ be used.**\n\n**TIP:** for GnuPG users, your secret keyring is in `~\/.gnupg\/secring.gpg`. You\ncan use `gpg --list-secret-keys` to list the keys you have.\n\n**Warning:** GnuPG v2 stores your secret keyring using a new format, `kbx`, in\nthe default location `~\/.gnupg\/pubring.kbx`. Use the following commands\nto convert your keyring to the legacy gpg format:\n\n```console\n$ gpg --export >~\/.gnupg\/pubring.gpg\n$ gpg --export-secret-keys >~\/.gnupg\/secring.gpg\n```\n\nAt this point, you should see both `mychart-0.1.0.tgz` and\n`mychart-0.1.0.tgz.prov`. Both files should eventually be uploaded to your\ndesired chart repository.\n\nYou can verify a chart using `helm verify`:\n\n```console\n$ helm verify mychart-0.1.0.tgz\n```\n\nA failed verification looks like this:\n\n```console\n$ helm verify topchart-0.1.0.tgz\nError: sha256 sum does not match for topchart-0.1.0.tgz: \"sha256:1939fbf7c1023d2f6b865d137bbb600e0c42061c3235528b1e8c82f4450c12a7\" != \"sha256:5a391a90de56778dd3274e47d789a2c84e0e106e1a37ef8cfa51fd60ac9e623a\"\n```\n\nTo verify during an install, use the `--verify` flag.\n\n```console\n$ helm install --generate-name --verify mychart-0.1.0.tgz\n```\n\nIf the keyring containing the public key associated with the signed chart is not\nin the default location, you may need to point to the keyring with `--keyring\nPATH` as in the `helm package` example.\n\nIf verification fails, the install will be aborted before the chart is even\nrendered.\n\n### Using Keybase.io credentials\n\nThe [Keybase.io](https:\/\/keybase.io) service makes it 
easy to establish a chain\nof trust for a cryptographic identity. Keybase credentials can be used to sign\ncharts.\n\nPrerequisites:\n\n- A configured Keybase.io account\n- GnuPG installed locally\n- The `keybase` CLI installed locally\n\n#### Signing packages\n\nThe first step is to import your keybase keys into your local GnuPG keyring:\n\n```console\n$ keybase pgp export -s | gpg --import\n```\n\nThis will convert your Keybase key into the OpenPGP format, and then import it\nlocally into your `~\/.gnupg\/secring.gpg` file.\n\nYou can double check by running `gpg --list-secret-keys`.\n\n```console\n$ gpg --list-secret-keys\n\/Users\/mattbutcher\/.gnupg\/secring.gpg\n-------------------------------------\nsec   2048R\/1FC18762 2016-07-25\nuid                  technosophos (keybase.io\/technosophos) <technosophos@keybase.io>\nssb   2048R\/D125E546 2016-07-25\n```\n\nNote that your secret key will have an identifier string:\n\n```\ntechnosophos (keybase.io\/technosophos) <technosophos@keybase.io>\n```\n\nThat is the full name of your key.\n\nNext, you can package and sign a chart with `helm package`. Make sure you use at\nleast part of that name string in `--key`.\n\n```console\n$ helm package --sign --key technosophos --keyring ~\/.gnupg\/secring.gpg mychart\n```\n\nAs a result, the `package` command should produce both a `.tgz` file and a\n`.tgz.prov` file.\n\n#### Verifying packages\n\nYou can also use a similar technique to verify a chart signed by someone else's\nKeybase key. Say you want to verify a package signed by\n`keybase.io\/technosophos`. To do this, use the `keybase` tool:\n\n```console\n$ keybase follow technosophos\n$ keybase pgp pull\n```\n\nThe first command above tracks the user `technosophos`. 
Next, `keybase pgp pull`\ndownloads the OpenPGP keys of all of the accounts you follow, placing them in\nyour GnuPG keyring (`~\/.gnupg\/pubring.gpg`).\n\nAt this point, you can use `helm verify` or any of the commands with a\n`--verify` flag:\n\n```console\n$ helm verify somechart-1.2.3.tgz\n```\n\n### Reasons a chart may not verify\n\nThese are common reasons for failure.\n\n- The `.prov` file is missing or corrupt. This indicates that something is\n  misconfigured or that the original maintainer did not create a provenance\n  file.\n- The key used to sign the file is not in your keyring. This indicates that the\n  entity who signed the chart is not someone you've already signaled that you\n  trust.\n- The verification of the `.prov` file failed. This indicates that something is\n  wrong with either the chart or the provenance data.\n- The file hashes in the provenance file do not match the hash of the archive\n  file. This indicates that the archive has been tampered with.\n\nIf a verification fails, there is reason to distrust the package.\n\n## The Provenance File\n\nThe provenance file contains a chart\u2019s YAML file plus several pieces of\nverification information. 
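The digest comparison behind the "file hashes do not match" failure case above is simple enough to sketch by hand. The script below fabricates a stand-in archive and a recorded digest purely for illustration (in a real workflow both come from `helm package --sign`, and the recorded digest lives in the `files:` section of the `.prov` file); the comparison itself mirrors the integrity half of `helm verify`:

```shell
# Stand-in archive and recorded digest; a real digest list lives in the
# "files:" section of a .prov file generated by `helm package --sign`.
printf 'demo chart contents' > mychart-0.1.0.tgz
echo "mychart-0.1.0.tgz: sha256:$(sha256sum mychart-0.1.0.tgz | awk '{print $1}')" > recorded-digests.txt

# The verification step: recompute the digest and compare to the recorded one.
expected=$(awk -F 'sha256:' '/mychart-0.1.0.tgz/ {print $2}' recorded-digests.txt)
actual=$(sha256sum mychart-0.1.0.tgz | awk '{print $1}')
if [ "$expected" = "$actual" ]; then echo "digest matches"; else echo "digest mismatch"; fi
```

If the archive is modified after the digest was recorded, the two values diverge and the comparison fails, which is exactly the tampering case described above.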
Provenance files are designed to be automatically\ngenerated.\n\nThe following pieces of provenance data are added:\n\n* The chart file (`Chart.yaml`) is included to give both humans and tools an\n  easy view into the contents of the chart.\n* The signature (SHA256, just like Docker) of the chart package (the `.tgz`\n  file) is included, and may be used to verify the integrity of the chart\n  package.\n* The entire body is signed using the algorithm used by OpenPGP (see\n  [Keybase.io](https:\/\/keybase.io) for an emerging way of making crypto\n  signing and verification easy).\n\nThe combination of this gives users the following assurances:\n\n* The package itself has not been tampered with (checksum package `.tgz`).\n* The entity who released this package is known (via the GnuPG\/PGP signature).\n\nThe format of the file looks something like this:\n\n```\nHash: SHA512\n\napiVersion: v2\nappVersion: \"1.16.0\"\ndescription: Sample chart\nname: mychart\ntype: application\nversion: 0.1.0\n\n...\nfiles:\n  mychart-0.1.0.tgz: sha256:d31d2f08b885ec696c37c7f7ef106709aaf5e8575b6d3dc5d52112ed29a9cb92\n-----BEGIN PGP SIGNATURE-----\n\nwsBcBAEBCgAQBQJdy0ReCRCEO7+YH8GHYgAAfhUIADx3pHHLLINv0MFkiEYpX\/Kd\nnvHFBNps7hXqSocsg0a9Fi1LRAc3OpVh3knjPfHNGOy8+xOdhbqpdnB+5ty8YopI\nmYMWp6cP\/Mwpkt7\/gP1ecWFMevicbaFH5AmJCBihBaKJE4R1IX49\/wTIaLKiWkv2\ncR64bmZruQPSW83UTNULtdD7kuTZXeAdTMjAK0NECsCz9\/eK5AFggP4CDf7r2zNi\nhZsNrzloIlBZlGGns6mUOTO42J\/+JojnOLIhI3Psd0HBD2bTlsm\/rSfty4yZUs7D\nqtgooNdohoyGSzR5oapd7fEvauRQswJxOA0m0V+u9\/eyLR0+JcYB8Udi1prnWf8=\n=aHfz\n-----END PGP SIGNATURE-----\n```\n\nNote that the YAML section contains two documents (separated by `...\\n`). The\nfirst file is the content of `Chart.yaml`. 
The second is the checksums, a map of\nfilenames to SHA-256 digests of that file's content at packaging time.\n\nThe signature block is a standard PGP signature, which provides [tamper\nresistance](https:\/\/www.rossde.com\/PGP\/pgp_signatures.html).\n\n## Chart Repositories\n\nChart repositories serve as a centralized collection of Helm charts.\n\nChart repositories must make it possible to serve provenance files over HTTP via\na specific request, and must make them available at the same URI path as the\nchart.\n\nFor example, if the base URL for a package is\n`https:\/\/example.com\/charts\/mychart-1.2.3.tgz`, the provenance file, if it\nexists, MUST be accessible at\n`https:\/\/example.com\/charts\/mychart-1.2.3.tgz.prov`.\n\nFrom the end user's perspective, `helm install --verify myrepo\/mychart-1.2.3`\nshould result in the download of both the chart and the provenance file with no\nadditional user configuration or action.\n\n### Signatures in OCI-based registries\n\nWhen publishing charts to an [OCI-based registry](), the\n[`helm-sigstore` plugin](https:\/\/github.com\/sigstore\/helm-sigstore\/) can be used\nto publish provenance to [sigstore](https:\/\/sigstore.dev\/). [As described in the\ndocumentation](https:\/\/github.com\/sigstore\/helm-sigstore\/blob\/main\/USAGE.md), the\nprocess of creating the provenance file and signing it with a GPG key is the\nsame, but the `helm sigstore upload` command can be used to publish the\nprovenance to an immutable transparency log.\n\n## Establishing Authority and Authenticity\n\nWhen dealing with chain-of-trust systems, it is important to be able to\nestablish the authority of a signer. Or, to put this plainly, the system above\nhinges on the fact that you trust the person who signed the chart. That, in\nturn, means you need to trust the public key of the signer.\n\nOne of the design decisions with Helm has been that the Helm project would not\ninsert itself into the chain of trust as a necessary party. 
We don't want to be\n\"the certificate authority\" for all chart signers. Instead, we strongly favor a\ndecentralized model, which is part of the reason we chose OpenPGP as our\nfoundational technology. So when it comes to establishing authority, we have\nleft this step more-or-less undefined in Helm 2 (a decision carried forward in\nHelm 3).\n\nHowever, we have some pointers and recommendations for those interested in using\nthe provenance system:\n\n- The [Keybase](https:\/\/keybase.io) platform provides a public centralized\n  repository for trust information.\n  - You can use Keybase to store your keys or to get the public keys of others.\n  - Keybase also has fabulous documentation available.\n  - While we haven't tested it, Keybase's \"secure website\" feature could be used\n    to serve Helm charts.\n  - The basic idea is that an official \"chart reviewer\" signs charts with her or\n    his key, and the resulting provenance file is then uploaded to the chart\n    repository.\n  - There has been some work on the idea that a list of valid signing keys may\n    be included in the `index.yaml` file of a repository.\n","site":"helm"}
{"questions":"helm weight 6 aliases docs chartrepository and shared This section explains how to create and work with Helm chart repositories At a title The Chart Repository Guide high level a chart repository is a location where packaged charts can be stored How to create and work with Helm chart repositories","answers":"---\ntitle: \"The Chart Repository Guide\"\ndescription: \"How to create and work with Helm chart repositories.\"\naliases: [\"\/docs\/chart_repository\/\"]\nweight: 6\n---\n\nThis section explains how to create and work with Helm chart repositories. At a\nhigh level, a chart repository is a location where packaged charts can be stored\nand shared.\n\nThe distributed community Helm chart repository is located at\n[Artifact Hub](https:\/\/artifacthub.io\/packages\/search?kind=0) and welcomes\nparticipation. But Helm also makes it possible to create and run your own chart\nrepository. This guide explains how to do so.\n\n## Prerequisites\n\n* Go through the [Quickstart]() Guide\n* Read through the [Charts]() document\n\n## Create a chart repository\n\nA _chart repository_ is an HTTP server that houses an `index.yaml` file and\noptionally some packaged charts.  When you're ready to share your charts, the\npreferred way to do so is by uploading them to a chart repository.\n\nAs of Helm 2.2.0, client-side SSL auth to a repository is supported. Other\nauthentication protocols may be available as plugins.\n\nBecause a chart repository can be any HTTP server that can serve YAML and tar\nfiles and can answer GET requests, you have a plethora of options when it comes\ndown to hosting your own chart repository. 
For example, you can use a Google\nCloud Storage (GCS) bucket, Amazon S3 bucket, GitHub Pages, or even create your\nown web server.\n\n### The chart repository structure\n\nA chart repository consists of packaged charts and a special file called\n`index.yaml` which contains an index of all of the charts in the repository.\nFrequently, the charts that `index.yaml` describes are also hosted on the same\nserver, as are the [provenance files]().\n\nFor example, the layout of the repository `https:\/\/example.com\/charts` might\nlook like this:\n\n```\ncharts\/\n  |\n  |- index.yaml\n  |\n  |- alpine-0.1.2.tgz\n  |\n  |- alpine-0.1.2.tgz.prov\n```\n\nIn this case, the index file would contain information about one chart, the\nAlpine chart, and provide the download URL\n`https:\/\/example.com\/charts\/alpine-0.1.2.tgz` for that chart.\n\nIt is not required that a chart package be located on the same server as the\n`index.yaml` file. However, doing so is often the easiest.\n\n### The index file\n\nThe index file is a yaml file called `index.yaml`. It contains some metadata\nabout the package, including the contents of a chart's `Chart.yaml` file. A\nvalid chart repository must have an index file. The index file contains\ninformation about each chart in the chart repository. 
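Before looking at a full example, note that a client resolving a chart only needs the `urls` (and `version`) fields of an entry. A rough sketch with a hypothetical, minimal index — crude text extraction for illustration only; real clients parse the YAML properly, and real index files are generated by `helm repo index`:

```shell
# A minimal, hypothetical index.yaml with a single chart entry.
cat > index.yaml <<'EOF'
apiVersion: v1
entries:
  alpine:
    - urls:
        - https://example.com/charts/alpine-0.2.0.tgz
      version: 0.2.0
EOF

# Crude URL extraction for illustration only.
url=$(awk '$1 == "-" && $2 ~ /^https/ {print $2}' index.yaml)
echo "$url"
```

A repository client would then fetch that URL (and, for signed charts, the same URL with `.prov` appended).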
The `helm repo index`\ncommand will generate an index file based on a given local directory that\ncontains packaged charts.\n\nThis is an example of an index file:\n\n```yaml\napiVersion: v1\nentries:\n  alpine:\n    - created: 2016-10-06T16:23:20.499814565-06:00\n      description: Deploy a basic Alpine Linux pod\n      digest: 99c76e403d752c84ead610644d4b1c2f2b453a74b921f422b9dcb8a7c8b559cd\n      home: https:\/\/helm.sh\/helm\n      name: alpine\n      sources:\n      - https:\/\/github.com\/helm\/helm\n      urls:\n      - https:\/\/technosophos.github.io\/tscharts\/alpine-0.2.0.tgz\n      version: 0.2.0\n    - created: 2016-10-06T16:23:20.499543808-06:00\n      description: Deploy a basic Alpine Linux pod\n      digest: 515c58e5f79d8b2913a10cb400ebb6fa9c77fe813287afbacf1a0b897cd78727\n      home: https:\/\/helm.sh\/helm\n      name: alpine\n      sources:\n      - https:\/\/github.com\/helm\/helm\n      urls:\n      - https:\/\/technosophos.github.io\/tscharts\/alpine-0.1.0.tgz\n      version: 0.1.0\n  nginx:\n    - created: 2016-10-06T16:23:20.499543808-06:00\n      description: Create a basic nginx HTTP server\n      digest: aaff4545f79d8b2913a10cb400ebb6fa9c77fe813287afbacf1a0b897cdffffff\n      home: https:\/\/helm.sh\/helm\n      name: nginx\n      sources:\n      - https:\/\/github.com\/helm\/charts\n      urls:\n      - https:\/\/technosophos.github.io\/tscharts\/nginx-1.1.0.tgz\n      version: 1.1.0\ngenerated: 2016-10-06T16:23:20.499029981-06:00\n```\n\n## Hosting Chart Repositories\n\nThis part shows several ways to serve a chart repository.\n\n### Google Cloud Storage\n\nThe first step is to **create your GCS bucket**. 
We'll call ours\n`fantastic-charts`.\n\n![Create a GCS Bucket](https:\/\/helm.sh\/img\/create-a-bucket.png)\n\nNext, make your bucket public by **editing the bucket permissions**.\n\n![Edit Permissions](https:\/\/helm.sh\/img\/edit-permissions.png)\n\nInsert this line item to **make your bucket public**:\n\n![Make Bucket Public](https:\/\/helm.sh\/img\/make-bucket-public.png)\n\nCongratulations, now you have an empty GCS bucket ready to serve charts!\n\nYou may upload your chart repository using the Google Cloud Storage command\nline tool, or using the GCS web UI. A public GCS bucket can be accessed via\nsimple HTTPS at this address: `https:\/\/bucket-name.storage.googleapis.com\/`.\n\n### Cloudsmith\n\nYou can also set up chart repositories using Cloudsmith. Read more about\nchart repositories with Cloudsmith\n[here](https:\/\/help.cloudsmith.io\/docs\/helm-chart-repository).\n\n### JFrog Artifactory\n\nSimilarly, you can also set up chart repositories using JFrog Artifactory. Read more about\nchart repositories with JFrog Artifactory\n[here](https:\/\/www.jfrog.com\/confluence\/display\/RTF\/Helm+Chart+Repositories).\n\n### GitHub Pages example\n\nIn a similar way, you can create a charts repository using GitHub Pages.\n\nGitHub allows you to serve static web pages in two different ways:\n\n- By configuring a project to serve the contents of its `docs\/` directory\n- By configuring a project to serve a particular branch\n\nWe'll take the second approach, though the first is just as easy.\n\nThe first step will be to **create your gh-pages branch**. 
You can do that\nlocally with:\n\n```console\n$ git checkout -b gh-pages\n```\n\nOr via your web browser, using the **Branch** button on your GitHub repository:\n\n![Create GitHub Pages branch](https:\/\/helm.sh\/img\/create-a-gh-page-button.png)\n\nNext, you'll want to make sure your **gh-pages branch** is set as the GitHub\nPages source: click on your repo **Settings**, scroll down to the **GitHub\npages** section, and configure it as shown below:\n\n![Create GitHub Pages branch](https:\/\/helm.sh\/img\/set-a-gh-page.png)\n\nBy default, **Source** is usually set to the **gh-pages branch**. If it is not,\nselect it.\n\nYou can use a **custom domain** there if you wish.\n\nAlso check that **Enforce HTTPS** is ticked, so that **HTTPS** is used when\ncharts are served.\n\nIn such a setup, you can use your default branch to store your charts code and\nthe **gh-pages branch** as the charts repository, e.g.:\n`https:\/\/USERNAME.github.io\/REPONAME`. The demonstration [TS\nCharts](https:\/\/github.com\/technosophos\/tscharts) repository is accessible at\n`https:\/\/technosophos.github.io\/tscharts\/`.\n\nIf you have decided to use GitHub Pages to host the chart repository, check out\n[Chart Releaser Action]().\nChart Releaser Action is a GitHub Action workflow to turn a GitHub project into\na self-hosted Helm chart repo, using the\n[helm\/chart-releaser](https:\/\/github.com\/helm\/chart-releaser) CLI tool.\n\n### Ordinary web servers\n\nTo configure an ordinary web server to serve Helm charts, you merely need to do\nthe following:\n\n- Put your index and charts in a directory that the server can serve\n- Make sure the `index.yaml` file can be accessed with no authentication\n  requirement\n- Make sure `yaml` files are served with the correct content type (`text\/yaml`\n  or `text\/x-yaml`)\n\nFor example, if you want to serve your charts out of `$WEBROOT\/charts`, make\nsure there is a `charts\/` directory in your web root, and put the index file and\ncharts inside of that folder.\n\n### 
ChartMuseum Repository Server\n\nChartMuseum is an open-source Helm Chart Repository server written in Go\n(Golang), with support for cloud storage backends, including [Google Cloud\nStorage](https:\/\/cloud.google.com\/storage\/), [Amazon\nS3](https:\/\/aws.amazon.com\/s3\/), [Microsoft Azure Blob\nStorage](https:\/\/azure.microsoft.com\/en-us\/services\/storage\/blobs\/), [Alibaba\nCloud OSS Storage](https:\/\/www.alibabacloud.com\/product\/oss), [Openstack Object\nStorage](https:\/\/developer.openstack.org\/api-ref\/object-store\/), [Oracle Cloud\nInfrastructure Object Storage](https:\/\/cloud.oracle.com\/storage), [Baidu Cloud\nBOS Storage](https:\/\/cloud.baidu.com\/product\/bos.html), [Tencent Cloud Object\nStorage](https:\/\/intl.cloud.tencent.com\/product\/cos), [DigitalOcean\nSpaces](https:\/\/www.digitalocean.com\/products\/spaces\/),\n[Minio](https:\/\/min.io\/), and [etcd](https:\/\/etcd.io\/).\n\nYou can also use the\n[ChartMuseum](https:\/\/chartmuseum.com\/docs\/#using-with-local-filesystem-storage)\nserver to host a chart repository from a local file system.\n\n### GitLab Package Registry\n\nWith GitLab you can publish Helm charts in your project\u2019s Package Registry.\nRead more about setting up a helm package repository with GitLab [here](https:\/\/docs.gitlab.com\/ee\/user\/packages\/helm_repository\/).\n\n## Managing Chart Repositories\n\nNow that you have a chart repository, the last part of this guide explains how\nto maintain charts in that repository.\n\n\n### Store charts in your chart repository\n\nNow that you have a chart repository, let's upload a chart and an index file to\nthe repository.  
Charts in a chart repository must be packaged (`helm package\nchart-name\/`) and versioned correctly (following [SemVer 2](https:\/\/semver.org\/)\nguidelines).\n\nThese next steps compose an example workflow, but you are welcome to use\nwhatever workflow you fancy for storing and updating charts in your chart\nrepository.\n\nOnce you have a packaged chart ready, create a new directory, and move your\npackaged chart to that directory.\n\n```console\n$ helm package docs\/examples\/alpine\/\n$ mkdir fantastic-charts\n$ mv alpine-0.1.0.tgz fantastic-charts\/\n$ helm repo index fantastic-charts --url https:\/\/fantastic-charts.storage.googleapis.com\n```\n\nThe last command takes the path of the local directory that you just created and\nthe URL of your remote chart repository and composes an `index.yaml` file inside\nthe given directory path.\n\nNow you can upload the chart and the index file to your chart repository using a\nsync tool or manually. If you're using Google Cloud Storage, check out this\n[example workflow]()\nusing the gsutil client. For GitHub, you can simply put the charts in the\nappropriate destination branch.\n\n### Add new charts to an existing repository\n\nEach time you want to add a new chart to your repository, you must regenerate\nthe index. The `helm repo index` command will completely rebuild the\n`index.yaml` file from scratch, including only the charts that it finds locally.\n\nHowever, you can use the `--merge` flag to incrementally add new charts to an\nexisting `index.yaml` file (a great option when working with a remote repository\nlike GCS). Run `helm repo index --help` to learn more.\n\nMake sure that you upload both the revised `index.yaml` file and the chart. 
And\nif you generated a provenance file, upload that too.\n\n### Share your charts with others\n\nWhen you're ready to share your charts, simply let someone know what the URL of\nyour repository is.\n\nFrom there, they will add the repository to their helm client via the `helm repo\nadd [NAME] [URL]` command with any name they would like to use to reference the\nrepository.\n\n```console\n$ helm repo add fantastic-charts https:\/\/fantastic-charts.storage.googleapis.com\n$ helm repo list\nfantastic-charts    https:\/\/fantastic-charts.storage.googleapis.com\n```\n\nIf the charts are protected by HTTP basic authentication, you can also supply the\nusername and password here:\n\n```console\n$ helm repo add fantastic-charts https:\/\/fantastic-charts.storage.googleapis.com --username my-username --password my-password\n$ helm repo list\nfantastic-charts    https:\/\/fantastic-charts.storage.googleapis.com\n```\n\n**Note:** A repository will not be added if it does not contain a valid\n`index.yaml`.\n\n**Note:** If your Helm repository is using, for example, a self-signed\ncertificate, you can use `helm repo add --insecure-skip-tls-verify ...` in order\nto skip the CA verification.\n\nAfter that, your users will be able to search through your charts. After you've\nupdated the repository, they can use the `helm repo update` command to get the\nlatest chart information.\n\n*Under the hood, the `helm repo add` and `helm repo update` commands are\nfetching the `index.yaml` file and storing it in the\n`$XDG_CACHE_HOME\/helm\/repository\/cache\/` directory. 
This is where the `helm\nsearch` function finds information about charts.*","site":"helm"}
{"questions":"helm aliases docs k8sapis title Deprecated Kubernetes APIs Kubernetes is an API driven system and the API evolves over time to reflect the Explains deprecated Kubernetes APIs in Helm systems and their APIs An important part of evolving APIs is a good deprecation evolving understanding of the problem space This is common practice across policy and process to inform users of how changes to APIs are implemented In","answers":"---\ntitle: \"Deprecated Kubernetes APIs\"\ndescription: \"Explains deprecated Kubernetes APIs in Helm\"\naliases: [\"docs\/k8s_apis\/\"]\n---\n\nKubernetes is an API-driven system and the API evolves over time to reflect the\nevolving understanding of the problem space. This is common practice across\nsystems and their APIs. An important part of evolving APIs is a good deprecation\npolicy and process to inform users of how changes to APIs are implemented. In\nother words, consumers of your API need to know in advance and in what release\nan API will be removed or changed. This removes the element of surprise and\nbreaking changes to consumers.\n\nThe [Kubernetes deprecation\npolicy](https:\/\/kubernetes.io\/docs\/reference\/using-api\/deprecation-policy\/)\ndocuments how Kubernetes handles the changes to its API versions. The policy for\ndeprecation states the timeframe that API versions will be supported following a\ndeprecation announcement. It is therefore important to be aware of deprecation\nannouncements and know when API versions will be removed, to help minimize the\neffect.\n\nThis is an example of an announcement [for the removal of deprecated API\nversions in Kubernetes\n1.16](https:\/\/kubernetes.io\/blog\/2019\/07\/18\/api-deprecations-in-1-16\/) and was\nadvertised a few months prior to the release. These API versions would have been\nannounced for deprecation prior to this again. 
This shows that there is a good\npolicy in place which informs consumers of API version support.\n\nHelm templates specify a [Kubernetes API\ngroup](https:\/\/kubernetes.io\/docs\/concepts\/overview\/kubernetes-api\/#api-groups)\nwhen defining a Kubernetes object, similar to a Kubernetes manifest file. It is\nspecified in the `apiVersion` field of the template and it identifies the API\nversion of the Kubernetes object. This means that Helm users and chart\nmaintainers need to be aware when Kubernetes API versions have been deprecated\nand in what Kubernetes version they will be removed.\n\n## Chart Maintainers\n\nYou should audit your charts checking for Kubernetes API versions that are\ndeprecated or are removed in a Kubernetes version. The API versions found as due\nto be or that are now out of support, should be updated to the supported version\nand a new version of the chart released. The API version is defined by the\n`kind` and `apiVersion` fields. For example, here is a removed `Deployment`\nobject API version in Kubernetes 1.16:\n\n```yaml\napiVersion: apps\/v1beta1\nkind: Deployment\n```\n\n## Helm Users\n\nYou should audit the charts that you use (similar to [chart\nmaintainers](#chart-maintainers)) and identify any charts where API versions are\ndeprecated or removed in a Kubernetes version. For the charts identified, you\nneed to check for the latest version of the chart (which has supported API\nversions) or update the chart yourself.\n\nAdditionally, you also need to audit any charts deployed (i.e. Helm releases)\nchecking again for any deprecated or removed API versions. This can be done by\ngetting details of a release using the `helm get manifest` command.\n\nThe means for updating a Helm release to supported APIs depends on your findings\nas follows:\n\n1. 
If you find deprecated API versions only then:\n  - Perform a `helm upgrade` with a version of the chart with supported\n    Kubernetes API versions\n  - Add a description in the upgrade, something along the lines of not performing a\n    rollback to a Helm version prior to this current version\n2.  If you find any API version(s) that is\/are removed in a Kubernetes version\n    then:\n  - If you are running a Kubernetes version where the API version(s) are still\n    available (for example, you are on Kubernetes 1.15 and found you use APIs\n    that will be removed in Kubernetes 1.16):\n    - Follow the step 1 procedure\n  - Otherwise (for example, you are already running a Kubernetes version where\n    some API versions reported by `helm get manifest` are no longer available):\n    - You need to edit the release manifest that is stored in the cluster to\n      update the API versions to supported APIs. See [Updating API Versions of a\n      Release Manifest](#updating-api-versions-of-a-release-manifest) for more\n      details\n\n> Note: In all cases of updating a Helm release with supported APIs, you should\nnever roll back the release to a version prior to the release version with the\nsupported APIs.\n\n> Recommendation: The best practice is to upgrade releases using deprecated API\nversions to supported API versions, prior to upgrading to a Kubernetes cluster\nthat removes those API versions.\n\nIf you don't update a release as suggested previously, you will have an error\nsimilar to the following when trying to upgrade a release in a Kubernetes\nversion where its API version(s) is\/are removed:\n\n```\nError: UPGRADE FAILED: current release manifest contains removed kubernetes api(s)\nfor this kubernetes version and it is therefore unable to build the kubernetes\nobjects for performing the diff.
error from kubernetes: unable to recognize \"\":\nno matches for kind \"Deployment\" in version \"apps\/v1beta1\"\n```\n\nHelm fails in this scenario because it attempts to create a diff patch between\nthe current deployed release (which contains the Kubernetes APIs that are\nremoved in this Kubernetes version) against the chart you are passing with the\nupdated\/supported API versions. The underlying reason for failure is that when\nKubernetes removes an API version, the Kubernetes Go client library can no\nlonger parse the deprecated objects and Helm therefore fails when calling the\nlibrary. Helm unfortunately is unable to recover from this situation and is no\nlonger able to manage such a release. See [Updating API Versions of a Release\nManifest](#updating-api-versions-of-a-release-manifest) for more details on how\nto recover from this scenario.\n\n## Updating API Versions of a Release Manifest\n\nThe manifest is a property of the Helm release object which is stored in the\ndata field of a Secret (default) or ConfigMap in the cluster. The data field\ncontains a gzipped object which is base 64 encoded (there is an additional base\n64 encoding for a Secret). There is a Secret\/ConfigMap per release\nversion\/revision in the namespace of the release.\n\nYou can use the Helm [mapkubeapis](https:\/\/github.com\/helm\/helm-mapkubeapis)\nplugin to perform the update of a release to supported APIs. Check out the\nreadme for more details.\n\nAlternatively, you can follow these manual steps to perform an update of the API\nversions of a release manifest. 
Depending on your configuration you will follow\nthe steps for the Secret or ConfigMap backend.\n\n- Get the name of the Secret or ConfigMap associated with the latest deployed\n  release:\n  - Secrets backend: `kubectl get secret -l\n    owner=helm,status=deployed,name=<release_name> --namespace\n    <release_namespace> | awk '{print $1}' | grep -v NAME`\n  - ConfigMap backend: `kubectl get configmap -l\n    owner=helm,status=deployed,name=<release_name> --namespace\n    <release_namespace> | awk '{print $1}' | grep -v NAME`\n- Get latest deployed release details:\n  - Secrets backend: `kubectl get secret <release_secret_name> -n\n    <release_namespace> -o yaml > release.yaml`\n  - ConfigMap backend: `kubectl get configmap <release_configmap_name> -n\n    <release_namespace> -o yaml > release.yaml`\n- Back up the release in case you need to restore if something goes wrong:\n  - `cp release.yaml release.bak`\n  - In case of emergency, restore: `kubectl apply -f release.bak -n\n    <release_namespace>`\n- Decode the release object:\n  - Secrets backend: `cat release.yaml | grep -oP '(?<=release: ).*' | base64 -d\n    | base64 -d | gzip -d > release.data.decoded`\n  - ConfigMap backend: `cat release.yaml | grep -oP '(?<=release: ).*' | base64\n    -d | gzip -d > release.data.decoded`\n- Change API versions of the manifests. You can use any tool (e.g. an editor) to make\n  the changes.
This is in the `manifest` field of your decoded release object\n  (`release.data.decoded`)\n- Encode the release object:\n  - Secrets backend: `cat release.data.decoded | gzip | base64 | base64`\n  - ConfigMap backend: `cat release.data.decoded | gzip | base64`\n- Replace `data.release` property value in the deployed release file\n  (`release.yaml`) with the new encoded release object\n- Apply file to namespace: `kubectl apply -f release.yaml -n\n  <release_namespace>`\n- Perform a `helm upgrade` with a version of the chart with supported Kubernetes\n  API versions\n- Add a description in the upgrade, something along the lines of not performing a\n  rollback to a Helm version prior to this current version","site":"helm"}
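The Secrets-backend encoding described above (gzip, then two rounds of base64) can be exercised locally without touching a cluster. A small round-trip sketch, where the manifest text is a stand-in for a real release object:

```shell
# Encode a stand-in manifest the way the Secrets backend stores release
# data (gzip, then base64 twice), then decode it with the same pipeline
# used in the manual steps above (base64 -d | base64 -d | gzip -d).
encoded=$(printf 'apiVersion: apps/v1beta1\nkind: Deployment\n' | gzip | base64 -w0 | base64 -w0)
decoded=$(printf '%s' "$encoded" | base64 -d | base64 -d | gzip -d)
echo "$decoded"
```

Note that `-w0` is the GNU coreutils flag to disable base64 line wrapping; adjust for other `base64` implementations.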
{"questions":"helm title Charts docs developingcharts aliases Explains the chart format and provides basic guidance for building charts with Helm weight 1 developingcharts","answers":"---\ntitle: \"Charts\"\ndescription: \"Explains the chart format, and provides basic guidance for building charts with Helm.\"\naliases: [\n  \"docs\/developing_charts\/\",\n  \"developing_charts\"\n]\nweight: 1\n---\n\nHelm uses a packaging format called _charts_. A chart is a collection of files\nthat describe a related set of Kubernetes resources. A single chart might be\nused to deploy something simple, like a memcached pod, or something complex,\nlike a full web app stack with HTTP servers, databases, caches, and so on.\n\nCharts are created as files laid out in a particular directory tree. They can be\npackaged into versioned archives to be deployed.\n\nIf you want to download and look at the files for a published chart, without\ninstalling it, you can do so with `helm pull chartrepo\/chartname`.\n\nThis document explains the chart format, and provides basic guidance for\nbuilding charts with Helm.\n\n## The Chart File Structure\n\nA chart is organized as a collection of files inside of a directory. The\ndirectory name is the name of the chart (without versioning information). 
Thus,\na chart describing WordPress would be stored in a `wordpress\/` directory.\n\nInside of this directory, Helm will expect a structure that matches this:\n\n```text\nwordpress\/\n  Chart.yaml          # A YAML file containing information about the chart\n  LICENSE             # OPTIONAL: A plain text file containing the license for the chart\n  README.md           # OPTIONAL: A human-readable README file\n  values.yaml         # The default configuration values for this chart\n  values.schema.json  # OPTIONAL: A JSON Schema for imposing a structure on the values.yaml file\n  charts\/             # A directory containing any charts upon which this chart depends.\n  crds\/               # Custom Resource Definitions\n  templates\/          # A directory of templates that, when combined with values,\n                      # will generate valid Kubernetes manifest files.\n  templates\/NOTES.txt # OPTIONAL: A plain text file containing short usage notes\n```\n\nHelm reserves use of the `charts\/`, `crds\/`, and `templates\/` directories, and\nof the listed file names. Other files will be left as they are.\n\n## The Chart.yaml File\n\nThe `Chart.yaml` file is required for a chart. 
It contains the following fields:\n\n```yaml\napiVersion: The chart API version (required)\nname: The name of the chart (required)\nversion: A SemVer 2 version (required)\nkubeVersion: A SemVer range of compatible Kubernetes versions (optional)\ndescription: A single-sentence description of this project (optional)\ntype: The type of the chart (optional)\nkeywords:\n  - A list of keywords about this project (optional)\nhome: The URL of this project's home page (optional)\nsources:\n  - A list of URLs to source code for this project (optional)\ndependencies: # A list of the chart requirements (optional)\n  - name: The name of the chart (nginx)\n    version: The version of the chart (\"1.2.3\")\n    repository: (optional) The repository URL (\"https:\/\/example.com\/charts\") or alias (\"@repo-name\")\n    condition: (optional) A yaml path that resolves to a boolean, used for enabling\/disabling charts (e.g. subchart1.enabled)\n    tags: # (optional)\n      - Tags can be used to group charts for enabling\/disabling together\n    import-values: # (optional)\n      - ImportValues holds the mapping of source values to parent key to be imported. Each item can be a string or pair of child\/parent sublist items.\n    alias: (optional) Alias to be used for the chart. Useful when you have to add the same chart multiple times\nmaintainers: # (optional)\n  - name: The maintainer's name (required for each maintainer)\n    email: The maintainer's email (optional for each maintainer)\n    url: A URL for the maintainer (optional for each maintainer)\nicon: A URL to an SVG or PNG image to be used as an icon (optional).\nappVersion: The version of the app that this contains (optional). Needn't be SemVer.
Quotes recommended.\ndeprecated: Whether this chart is deprecated (optional, boolean)\nannotations:\n  example: A list of annotations keyed by name (optional).\n```\n\nAs of [v3.3.2](https:\/\/github.com\/helm\/helm\/releases\/tag\/v3.3.2), additional\nfields are not allowed.\nThe recommended approach is to add custom metadata in `annotations`.\n\n### Charts and Versioning\n\nEvery chart must have a version number. A version must follow the [SemVer\n2](https:\/\/semver.org\/spec\/v2.0.0.html) standard. Unlike Helm Classic, Helm v2\nand later uses version numbers as release markers. Packages in repositories are\nidentified by name plus version.\n\nFor example, an `nginx` chart whose version field is set to `version: 1.2.3`\nwill be named:\n\n```text\nnginx-1.2.3.tgz\n```\n\nMore complex SemVer 2 names are also supported, such as `version:\n1.2.3-alpha.1+ef365`. But non-SemVer names are explicitly disallowed by the\nsystem.\n\n**NOTE:** Whereas Helm Classic and Deployment Manager were both very GitHub\noriented when it came to charts, Helm v2 and later does not rely upon or require\nGitHub or even Git. Consequently, it does not use Git SHAs for versioning at\nall.\n\nThe `version` field inside of the `Chart.yaml` is used by many of the Helm\ntools, including the CLI. When generating a package, the `helm package` command\nwill use the version that it finds in the `Chart.yaml` as a token in the package\nname. The system assumes that the version number in the chart package name\nmatches the version number in the `Chart.yaml`. Failure to meet this assumption\nwill cause an error.\n\n### The `apiVersion` Field\n\nThe `apiVersion` field should be `v2` for Helm charts that require at least Helm\n3. 
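
For instance, a minimal `apiVersion: v2` `Chart.yaml` might look like the following sketch; the field values are illustrative, and `helm create` generates something very similar:

```yaml
apiVersion: v2
name: mychart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
```
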
Charts supporting previous Helm versions have an `apiVersion` set to `v1` and\nare still installable by Helm 3.\n\nChanges from `v1` to `v2`:\n\n- A `dependencies` field defining chart dependencies, which were located in a\n  separate `requirements.yaml` file for `v1` charts (see [Chart\n  Dependencies](#chart-dependencies)).\n- The `type` field, discriminating application and library charts (see [Chart\n  Types](#chart-types)).\n\n### The `appVersion` Field\n\nNote that the `appVersion` field is not related to the `version` field. It is a\nway of specifying the version of the application. For example, the `drupal`\nchart may have an `appVersion: \"8.2.1\"`, indicating that the version of Drupal\nincluded in the chart (by default) is `8.2.1`. This field is informational, and\nhas no impact on chart version calculations. Wrapping the version in quotes is highly recommended. It forces the YAML parser to treat the version number as a string. Leaving it unquoted can lead to parsing issues in some cases. For example, YAML interprets `1.0` as a floating point value, and a git commit SHA like `1234e10` as scientific notation.\n\nAs of Helm v3.5.0, `helm create` wraps the default `appVersion` field in quotes.\n\n### The `kubeVersion` Field\n\nThe optional `kubeVersion` field can define semver constraints on supported\nKubernetes versions. 
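
For example, a chart that supports only a given range of Kubernetes releases could declare (the version numbers here are illustrative):

```yaml
kubeVersion: ">= 1.13.0 < 1.15.0"
```
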
Helm will validate the version constraints when installing
the chart and fail if the cluster runs an unsupported Kubernetes version.

Version constraints may comprise space-separated AND comparisons such as
```
>= 1.13.0 < 1.15.0
```
which themselves can be combined with the OR `||` operator, as in the following
example
```
>= 1.13.0 < 1.14.0 || >= 1.14.1 < 1.15.0
```
In this example the version `1.14.0` is excluded, which can make sense if a bug
in certain versions is known to prevent the chart from running properly.

Apart from version constraints employing the operators `=` `!=` `>` `<` `>=` `<=`, the
following shorthand notations are supported:

 * hyphen ranges for closed intervals, where `1.1 - 2.3.4` is equivalent to `>=
   1.1 <= 2.3.4`.
 * wildcards `x`, `X` and `*`, where `1.2.x` is equivalent to `>= 1.2.0 <
   1.3.0`.
 * tilde ranges (patch version changes allowed), where `~1.2.3` is equivalent to
   `>= 1.2.3 < 1.3.0`.
 * caret ranges (minor version changes allowed), where `^1.2.3` is equivalent to
   `>= 1.2.3 < 2.0.0`.

For a detailed explanation of supported semver constraints see
[Masterminds/semver](https://github.com/Masterminds/semver).

### Deprecating Charts

When managing charts in a Chart Repository, it is sometimes necessary to
deprecate a chart. The optional `deprecated` field in `Chart.yaml` can be used
to mark a chart as deprecated. If the **latest** version of a chart in the
repository is marked as deprecated, then the chart as a whole is considered to
be deprecated. The chart name can later be reused by publishing a newer version
that is not marked as deprecated. The workflow for deprecating charts is:

1. Update the chart's `Chart.yaml` to mark the chart as deprecated, bumping the
   version
2. Release the new chart version in the Chart Repository
3. Remove the chart from the source repository (e.g. git)

### Chart Types

The `type` field defines the type of chart. 
There are two types: `application`
and `library`. Application is the default type; it is the standard chart which
can be operated on fully. A library chart provides utilities or functions for
the chart builder. A library chart differs from an application chart because it
is not installable and usually doesn't contain any resource objects.

**Note:** An application chart can be used as a library chart. This is enabled
by setting the type to `library`. The chart will then be rendered as a library
chart where all utilities and functions can be leveraged. None of the chart's
resource objects will be rendered.

## Chart LICENSE, README and NOTES

Charts can also contain files that describe the installation, configuration,
usage and license of a chart.

A LICENSE is a plain text file containing the
[license](https://en.wikipedia.org/wiki/Software_license) for the chart. The
chart can contain a license as it may have programming logic in the templates
and would therefore not be configuration only. There can also be separate
license(s) for the application installed by the chart, if required.

A README for a chart should be formatted in Markdown (README.md), and should
generally contain:

- A description of the application or service the chart provides
- Any prerequisites or requirements to run the chart
- Descriptions of options in `values.yaml` and default values
- Any other information that may be relevant to the installation or
  configuration of the chart

When hubs and other user interfaces display details about a chart, that detail
is pulled from the content in the `README.md` file.

The chart can also contain a short plain text `templates/NOTES.txt` file that
will be printed out after installation, and when viewing the status of a
release. This file is evaluated as a [template](#templates-and-values), and can
be used to display usage notes, next steps, or any other information relevant to
a release of the chart. 
For example, instructions could be provided for
connecting to a database, or accessing a web UI. Since this file is printed to
STDOUT when running `helm install` or `helm status`, it is recommended to keep
the content brief and point to the README for greater detail.

## Chart Dependencies

In Helm, one chart may depend on any number of other charts. These dependencies
can be dynamically linked using the `dependencies` field in `Chart.yaml` or
brought into the `charts/` directory and managed manually.

### Managing Dependencies with the `dependencies` field

The charts required by the current chart are defined as a list in the
`dependencies` field.

```yaml
dependencies:
  - name: apache
    version: 1.2.3
    repository: https://example.com/charts
  - name: mysql
    version: 3.2.1
    repository: https://another.example.com/charts
```

- The `name` field is the name of the chart you want.
- The `version` field is the version of the chart you want.
- The `repository` field is the full URL to the chart repository. Note that you
  must also use `helm repo add` to add that repo locally.
- You may use the name of the repo instead of the URL

```console
$ helm repo add fantastic-charts https://charts.helm.sh/incubator
```

```yaml
dependencies:
  - name: awesomeness
    version: 1.0.0
    repository: "@fantastic-charts"
```

Once you have defined dependencies, you can run `helm dependency update` and it
will use your dependency file to download all the specified charts into your
`charts/` directory for you.

```console
$ helm dep up foochart
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "example" chart repository
...Successfully got an update from the "another" chart repository
Update Complete. 
Happy Helming!
Saving 2 charts
Downloading apache from repo https://example.com/charts
Downloading mysql from repo https://another.example.com/charts
```

When `helm dependency update` retrieves charts, it will store them as chart
archives in the `charts/` directory. So for the example above, one would expect
to see the following files in the charts directory:

```text
charts/
  apache-1.2.3.tgz
  mysql-3.2.1.tgz
```

#### Alias field in dependencies

In addition to the other fields above, each requirements entry may contain the
optional field `alias`.

Adding an alias for a dependency chart adds a chart to the dependencies, using
the alias as the name of the new dependency.

One can use `alias` in cases where the same chart needs to be accessed under one
or more other names.

```yaml
# parentchart/Chart.yaml

dependencies:
  - name: subchart
    repository: http://localhost:10191
    version: 0.1.0
    alias: new-subchart-1
  - name: subchart
    repository: http://localhost:10191
    version: 0.1.0
    alias: new-subchart-2
  - name: subchart
    repository: http://localhost:10191
    version: 0.1.0
```

In the above example we will get 3 dependencies in all for `parentchart`:

```text
subchart
new-subchart-1
new-subchart-2
```

The manual way of achieving this is by copy/pasting the same chart in the
`charts/` directory multiple times with different names.

#### Tags and Condition fields in dependencies

In addition to the other fields above, each requirements entry may contain the
optional fields `tags` and `condition`.

All charts are loaded by default. If `tags` or `condition` fields are present,
they will be evaluated and used to control loading for the chart(s) they are
applied to.

Condition - The condition field holds one or more YAML paths (delimited by
commas). 
If this path exists in the top parent's values and resolves to a\nboolean value, the chart will be enabled or disabled based on that boolean\nvalue.  Only the first valid path found in the list is evaluated and if no paths\nexist then the condition has no effect.\n\nTags - The tags field is a YAML list of labels to associate with this chart. In\nthe top parent's values, all charts with tags can be enabled or disabled by\nspecifying the tag and a boolean value.\n\n```yaml\n# parentchart\/Chart.yaml\n\ndependencies:\n  - name: subchart1\n    repository: http:\/\/localhost:10191\n    version: 0.1.0\n    condition: subchart1.enabled,global.subchart1.enabled\n    tags:\n      - front-end\n      - subchart1\n  - name: subchart2\n    repository: http:\/\/localhost:10191\n    version: 0.1.0\n    condition: subchart2.enabled,global.subchart2.enabled\n    tags:\n      - back-end\n      - subchart2\n```\n\n```yaml\n# parentchart\/values.yaml\n\nsubchart1:\n  enabled: true\ntags:\n  front-end: false\n  back-end: true\n```\n\nIn the above example all charts with the tag `front-end` would be disabled but\nsince the `subchart1.enabled` path evaluates to 'true' in the parent's values,\nthe condition will override the `front-end` tag and `subchart1` will be enabled.\n\nSince `subchart2` is tagged with `back-end` and that tag evaluates to `true`,\n`subchart2` will be enabled. 
Also note that although `subchart2` has a condition\nspecified, there is no corresponding path and value in the parent's values so\nthat condition has no effect.\n\n##### Using the CLI with Tags and Conditions\n\nThe `--set` parameter can be used as usual to alter tag and condition values.\n\n```console\nhelm install --set tags.front-end=true --set subchart2.enabled=false\n```\n\n##### Tags and Condition Resolution\n\n- **Conditions (when set in values) always override tags.** The first condition\n  path that exists wins and subsequent ones for that chart are ignored.\n- Tags are evaluated as 'if any of the chart's tags are true then enable the\n  chart'.\n- Tags and conditions values must be set in the top parent's values.\n- The `tags:` key in values must be a top level key. Globals and nested `tags:`\n  tables are not currently supported.\n\n#### Importing Child Values via dependencies\n\nIn some cases it is desirable to allow a child chart's values to propagate to\nthe parent chart and be shared as common defaults. An additional benefit of\nusing the `exports` format is that it will enable future tooling to introspect\nuser-settable values.\n\nThe keys containing the values to be imported can be specified in the parent\nchart's `dependencies` in the field `import-values` using a YAML list. Each item\nin the list is a key which is imported from the child chart's `exports` field.\n\nTo import values not contained in the `exports` key, use the\n[child-parent](#using-the-child-parent-format) format. 
Examples of both formats\nare described below.\n\n##### Using the exports format\n\nIf a child chart's `values.yaml` file contains an `exports` field at the root,\nits contents may be imported directly into the parent's values by specifying the\nkeys to import as in the example below:\n\n```yaml\n# parent's Chart.yaml file\n\ndependencies:\n  - name: subchart\n    repository: http:\/\/localhost:10191\n    version: 0.1.0\n    import-values:\n      - data\n```\n\n```yaml\n# child's values.yaml file\n\nexports:\n  data:\n    myint: 99\n```\n\nSince we are specifying the key `data` in our import list, Helm looks in the\n`exports` field of the child chart for `data` key and imports its contents.\n\nThe final parent values would contain our exported field:\n\n```yaml\n# parent's values\n\nmyint: 99\n```\n\nPlease note the parent key `data` is not contained in the parent's final values.\nIf you need to specify the parent key, use the 'child-parent' format.\n\n##### Using the child-parent format\n\nTo access values that are not contained in the `exports` key of the child\nchart's values, you will need to specify the source key of the values to be\nimported (`child`) and the destination path in the parent chart's values\n(`parent`).\n\nThe `import-values` in the example below instructs Helm to take any values found\nat `child:` path and copy them to the parent's values at the path specified in\n`parent:`\n\n```yaml\n# parent's Chart.yaml file\n\ndependencies:\n  - name: subchart1\n    repository: http:\/\/localhost:10191\n    version: 0.1.0\n    ...\n    import-values:\n      - child: default.data\n        parent: myimports\n```\n\nIn the above example, values found at `default.data` in the subchart1's values\nwill be imported to the `myimports` key in the parent chart's values as detailed\nbelow:\n\n```yaml\n# parent's values.yaml file\n\nmyimports:\n  myint: 0\n  mybool: false\n  mystring: \"helm rocks!\"\n```\n\n```yaml\n# subchart1's values.yaml file\n\ndefault:\n  
data:\n    myint: 999\n    mybool: true\n```\n\nThe parent chart's resulting values would be:\n\n```yaml\n# parent's final values\n\nmyimports:\n  myint: 999\n  mybool: true\n  mystring: \"helm rocks!\"\n```\n\nThe parent's final values now contains the `myint` and `mybool` fields imported\nfrom subchart1.\n\n### Managing Dependencies manually via the `charts\/` directory\n\nIf more control over dependencies is desired, these dependencies can be\nexpressed explicitly by copying the dependency charts into the `charts\/`\ndirectory.\n\nA dependency should be an unpacked chart directory but its name cannot start \nwith `_` or `.`. Such files are ignored by the chart loader.\n\nFor example, if the WordPress chart depends on the Apache chart, the Apache\nchart (of the correct version) is supplied in the WordPress chart's `charts\/`\ndirectory:\n\n```yaml\nwordpress:\n  Chart.yaml\n  # ...\n  charts\/\n    apache\/\n      Chart.yaml\n      # ...\n    mysql\/\n      Chart.yaml\n      # ...\n```\n\nThe example above shows how the WordPress chart expresses its dependency on\nApache and MySQL by including those charts inside of its `charts\/` directory.\n\n**TIP:** _To drop a dependency into your `charts\/` directory, use the `helm\npull` command_\n\n### Operational aspects of using dependencies\n\nThe above sections explain how to specify chart dependencies, but how does this\naffect chart installation using `helm install` and `helm upgrade`?\n\nSuppose that a chart named \"A\" creates the following Kubernetes objects\n\n- namespace \"A-Namespace\"\n- statefulset \"A-StatefulSet\"\n- service \"A-Service\"\n\nFurthermore, A is dependent on chart B that creates objects\n\n- namespace \"B-Namespace\"\n- replicaset \"B-ReplicaSet\"\n- service \"B-Service\"\n\nAfter installation\/upgrade of chart A a single Helm release is created\/modified.\nThe release will create\/update all of the above Kubernetes objects in the\nfollowing order:\n\n- A-Namespace\n- B-Namespace\n- 
A-Service
- B-Service
- B-ReplicaSet
- A-StatefulSet

This is because when Helm installs/upgrades charts, the Kubernetes objects from
the charts and all its dependencies are

- aggregated into a single set; then
- sorted by type followed by name; and then
- created/updated in that order.

Hence a single release is created with all the objects for the chart and its
dependencies.

The install order of Kubernetes types is given by the enumeration InstallOrder
in kind_sorter.go (see [the Helm source
file](https://github.com/helm/helm/blob/484d43913f97292648c867b56768775a55e4bba6/pkg/releaseutil/kind_sorter.go)).

## Templates and Values

Helm Chart templates are written in the [Go template
language](https://golang.org/pkg/text/template/), with the addition of 50 or so
add-on template functions [from the Sprig
library](https://github.com/Masterminds/sprig) and a few other specialized
functions.

All template files are stored in a chart's `templates/` folder. When Helm
renders the charts, it will pass every file in that directory through the
template engine.

Values for the templates are supplied two ways:

- Chart developers may supply a file called `values.yaml` inside of a chart.
  This file can contain default values.
- Chart users may supply a YAML file that contains values. This can be provided
  on the command line with `helm install`.

When a user supplies custom values, these values will override the values in the
chart's `values.yaml` file.

### Template Files

Template files follow the standard conventions for writing Go templates (see
[the text/template Go package
documentation](https://golang.org/pkg/text/template/) for details). 
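
For instance, built-in data and Sprig functions can be chained in pipelines. The following sketch truncates the release name so it fits within Kubernetes' 63-character name limit:

```yaml
metadata:
  name: {{ .Release.Name | trunc 63 | trimSuffix "-" }}
```
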
An example
template file might look something like this:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: deis-database
  namespace: deis
  labels:
    app.kubernetes.io/managed-by: deis
spec:
  replicas: 1
  selector:
    app.kubernetes.io/name: deis-database
  template:
    metadata:
      labels:
        app.kubernetes.io/name: deis-database
    spec:
      serviceAccount: deis-database
      containers:
        - name: deis-database
          image: {{ .Values.imageRegistry }}/postgres:{{ .Values.dockerTag }}
          imagePullPolicy: {{ .Values.pullPolicy }}
          ports:
            - containerPort: 5432
          env:
            - name: DATABASE_STORAGE
              value: {{ default "minio" .Values.storage }}
```

The above example, based loosely on
[https://github.com/deis/charts](https://github.com/deis/charts), is a template
for a Kubernetes replication controller. It can use the following four template
values (usually defined in a `values.yaml` file):

- `imageRegistry`: The source registry for the Docker image.
- `dockerTag`: The tag for the Docker image.
- `pullPolicy`: The Kubernetes pull policy.
- `storage`: The storage backend, whose default is set to `"minio"`

All of these values are defined by the template author. Helm does not require or
dictate parameters.

To see many working charts, check out the CNCF [Artifact
Hub](https://artifacthub.io/packages/search?kind=0).

### Predefined Values

Values that are supplied via a `values.yaml` file (or via the `--set` flag) are
accessible from the `.Values` object in a template. But there are other
pre-defined pieces of data you can access in your templates.

The following values are pre-defined, are available to every template, and
cannot be overridden. 
As with all values, the names are _case sensitive_.

- `Release.Name`: The name of the release (not the chart)
- `Release.Namespace`: The namespace the chart was released to.
- `Release.Service`: The service that conducted the release.
- `Release.IsUpgrade`: This is set to true if the current operation is an
  upgrade or rollback.
- `Release.IsInstall`: This is set to true if the current operation is an
  install.
- `Chart`: The contents of the `Chart.yaml`. Thus, the chart version is
  obtainable as `Chart.Version` and the maintainers are in `Chart.Maintainers`.
- `Files`: A map-like object containing all non-special files in the chart. This
  will not give you access to templates, but will give you access to additional
  files that are present (unless they are excluded using `.helmignore`). Files
  can be accessed using `{{ index .Files "file.name" }}` or using the
  `{{ .Files.Get name }}` function. You can also access the contents of the file
  as `[]byte` using `{{ .Files.GetBytes }}`
- `Capabilities`: A map-like object that contains information about the versions
  of Kubernetes (`{{ .Capabilities.KubeVersion }}`) and the supported Kubernetes
  API versions (`{{ .Capabilities.APIVersions }}`)

**NOTE:** Any unknown `Chart.yaml` fields will be dropped. They will not be
accessible inside of the `Chart` object. Thus, `Chart.yaml` cannot be used to
pass arbitrarily structured data into the template. The values file can be used
for that, though.

### Values files

Considering the template in the previous section, a `values.yaml` file that
supplies the necessary values would look like this:

```yaml
imageRegistry: "quay.io/deis"
dockerTag: "latest"
pullPolicy: "Always"
storage: "s3"
```

A values file is formatted in YAML. A chart may include a default `values.yaml`
file. The Helm install command allows a user to override values by supplying
additional YAML values:

```console
$ helm install --generate-name --values=myvals.yaml wordpress
```

When values are passed in this way, they will be merged into the default values
file. 
For example, consider a `myvals.yaml` file that looks like this:

```yaml
storage: "gcs"
```

When this is merged with the `values.yaml` in the chart, the resulting generated
content will be:

```yaml
imageRegistry: "quay.io/deis"
dockerTag: "latest"
pullPolicy: "Always"
storage: "gcs"
```

Note that only the last field was overridden.

**NOTE:** The default values file included inside of a chart _must_ be named
`values.yaml`. But files specified on the command line can be named anything.

**NOTE:** If the `--set` flag is used on `helm install` or `helm upgrade`, those
values are simply converted to YAML on the client side.

**NOTE:** If any required entries in the values file exist, they can be declared
as required in the chart template by using the `required` function.

Any of these values are then accessible inside of templates using the `.Values`
object:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: deis-database
  namespace: deis
  labels:
    app.kubernetes.io/managed-by: deis
spec:
  replicas: 1
  selector:
    app.kubernetes.io/name: deis-database
  template:
    metadata:
      labels:
        app.kubernetes.io/name: deis-database
    spec:
      serviceAccount: deis-database
      containers:
        - name: deis-database
          image: {{ .Values.imageRegistry }}/postgres:{{ .Values.dockerTag }}
          imagePullPolicy: {{ .Values.pullPolicy }}
          ports:
            - containerPort: 5432
          env:
            - name: DATABASE_STORAGE
              value: {{ default "minio" .Values.storage }}
```

### Scope, Dependencies, and Values

Values files can declare values for the top-level chart, as well as for any of
the charts that are included in that chart's `charts/` directory. Or, to phrase
it differently, a values file can supply values to the chart as well as to any
of its dependencies. For example, the demonstration WordPress chart above has
both `mysql` and `apache` as dependencies. 
The values file could supply values
to all of these components:

```yaml
title: "My WordPress Site" # Sent to the WordPress template

mysql:
  max_connections: 100 # Sent to MySQL
  password: "secret"

apache:
  port: 8080 # Passed to Apache
```

Charts at a higher level have access to all of the variables defined beneath. So
the WordPress chart can access the MySQL password as `.Values.mysql.password`.
But lower level charts cannot access things in parent charts, so MySQL will not
be able to access the `title` property. Nor, for that matter, can it access
`apache.port`.

Values are namespaced, but namespaces are pruned. So for the WordPress chart, it
can access the MySQL password field as `.Values.mysql.password`. But for the
MySQL chart, the scope of the values has been reduced and the namespace prefix
removed, so it will see the password field simply as `.Values.password`.

#### Global Values

As of 2.0.0-Alpha.2, Helm supports a special "global" value. Consider this
modified version of the previous example:

```yaml
title: "My WordPress Site" # Sent to the WordPress template

global:
  app: MyWordPress

mysql:
  max_connections: 100 # Sent to MySQL
  password: "secret"

apache:
  port: 8080 # Passed to Apache
```

The above adds a `global` section with the value `app: MyWordPress`. This value
is available to _all_ charts as `.Values.global.app`.

For example, the `mysql` templates may access `app` as
`{{ .Values.global.app }}`, and so can the `apache` chart. 
Effectively, the values\nfile above is regenerated like this:\n\n```yaml\ntitle: \"My WordPress Site\" # Sent to the WordPress template\n\nglobal:\n  app: MyWordPress\n\nmysql:\n  global:\n    app: MyWordPress\n  max_connections: 100 # Sent to MySQL\n  password: \"secret\"\n\napache:\n  global:\n    app: MyWordPress\n  port: 8080 # Passed to Apache\n```\n\nThis provides a way of sharing one top-level variable with all subcharts, which\nis useful for things like setting `metadata` properties like labels.\n\nIf a subchart declares a global variable, that global will be passed _downward_\n(to the subchart's subcharts), but not _upward_ to the parent chart. There is no\nway for a subchart to influence the values of the parent chart.\n\nAlso, global variables of parent charts take precedence over the global\nvariables from subcharts.\n\n### Schema Files\n\nSometimes, a chart maintainer might want to define a structure on their values.\nThis can be done by defining a schema in the `values.schema.json` file. A schema\nis represented as a [JSON Schema](https:\/\/json-schema.org\/). It might look\nsomething like this:\n\n```json\n{\n  \"$schema\": \"https:\/\/json-schema.org\/draft-07\/schema#\",\n  \"properties\": {\n    \"image\": {\n      \"description\": \"Container Image\",\n      \"properties\": {\n        \"repo\": {\n          \"type\": \"string\"\n        },\n        \"tag\": {\n          \"type\": \"string\"\n        }\n      },\n      \"type\": \"object\"\n    },\n    \"name\": {\n      \"description\": \"Service name\",\n      \"type\": \"string\"\n    },\n    \"port\": {\n      \"description\": \"Port\",\n      \"minimum\": 0,\n      \"type\": \"integer\"\n    },\n    \"protocol\": {\n      \"type\": \"string\"\n    }\n  },\n  \"required\": [\n    \"protocol\",\n    \"port\"\n  ],\n  \"title\": \"Values\",\n  \"type\": \"object\"\n}\n```\n\nThis schema will be applied to the values to validate it. 
Validation occurs when\nany of the following commands are invoked:\n\n- `helm install`\n- `helm upgrade`\n- `helm lint`\n- `helm template`\n\nAn example of a `values.yaml` file that meets the requirements of this schema\nmight look something like this:\n\n```yaml\nname: frontend\nprotocol: https\nport: 443\n```\n\nNote that the schema is applied to the final `.Values` object, and not just to\nthe `values.yaml` file. This means that the following `yaml` file is valid,\ngiven that the chart is installed with the appropriate `--set` option shown\nbelow.\n\n```yaml\nname: frontend\nprotocol: https\n```\n\n```console\nhelm install --set port=443\n```\n\nFurthermore, the final `.Values` object is checked against *all* subchart\nschemas. This means that restrictions on a subchart can't be circumvented by a\nparent chart. This also works backwards - if a subchart has a requirement that\nis not met in the subchart's `values.yaml` file, the parent chart *must* satisfy\nthose restrictions in order to be valid.\n\n### References\n\nWhen it comes to writing templates, values, and schema files, there are several\nstandard references that will help you out.\n\n- [Go templates](https:\/\/godoc.org\/text\/template)\n- [Extra template functions](https:\/\/godoc.org\/github.com\/Masterminds\/sprig)\n- [The YAML format](https:\/\/yaml.org\/spec\/)\n- [JSON Schema](https:\/\/json-schema.org\/)\n\n## Custom Resource Definitions (CRDs)\n\nKubernetes provides a mechanism for declaring new types of Kubernetes objects.\nUsing CustomResourceDefinitions (CRDs), Kubernetes developers can declare custom\nresource types.\n\nIn Helm 3, CRDs are treated as a special kind of object. They are installed\nbefore the rest of the chart, and are subject to some limitations.\n\nCRD YAML files should be placed in the `crds\/` directory inside of a chart.\nMultiple CRDs (separated by YAML start and end markers) may be placed in the\nsame file. 
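
For example, one file in `crds/` could carry two CRDs separated by the `---` document marker (the names below are illustrative, and the CRD bodies are elided):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
# ...
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.stable.example.com
# ...
```
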
Helm will attempt to load _all_ of the files in the CRD directory
into Kubernetes.

CRD files _cannot be templated_. They must be plain YAML documents.

When Helm installs a new chart, it will upload the CRDs, pause until the CRDs
are made available by the API server, and then start the template engine, render
the rest of the chart, and upload it to Kubernetes. Because of this ordering,
CRD information is available in the `.Capabilities` object in Helm templates,
and Helm templates may create new instances of objects that were declared in
CRDs.

For example, if your chart had a CRD for `CronTab` in the `crds/` directory, you
may create instances of the `CronTab` kind in the `templates/` directory:

```text
crontabs/
  Chart.yaml
  crds/
    crontab.yaml
  templates/
    mycrontab.yaml
```

The `crontab.yaml` file must contain the CRD with no template directives:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true
      storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```

Then the template `mycrontab.yaml` may create a new `CronTab` (using templates
as usual):

```yaml
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: {{ .Values.name }}
spec:
   # ...
```

Helm will make sure that the `CronTab` kind has been installed and is available
from the Kubernetes API server before it proceeds installing the things in
`templates/`.

### Limitations on CRDs

Unlike most objects in Kubernetes, CRDs are installed globally. For that reason,
Helm takes a very cautious approach in managing CRDs. CRDs are subject to the
following limitations:

- CRDs are never reinstalled. 
If Helm determines that the CRDs in the `crds/`
  directory are already present (regardless of version), Helm will not attempt
  to install or upgrade.
- CRDs are never installed on upgrade or rollback. Helm will only create CRDs on
  installation operations.
- CRDs are never deleted. Deleting a CRD automatically deletes all of the CRD's
  contents across all namespaces in the cluster. Consequently, Helm will not
  delete CRDs.

Operators who want to upgrade or delete CRDs are encouraged to do this manually
and with great care.

## Using Helm to Manage Charts

The `helm` tool has several commands for working with charts.

It can create a new chart for you:

```console
$ helm create mychart
Created mychart/
```

Once you have edited a chart, `helm` can package it into a chart archive for
you:

```console
$ helm package mychart
Archived mychart-0.1.0.tgz
```

You can also use `helm` to help you find issues with your chart's formatting or
information:

```console
$ helm lint mychart
No issues found
```

## Chart Repositories

A _chart repository_ is an HTTP server that houses one or more packaged charts.
While `helm` can be used to manage local chart directories, when it comes to
sharing charts, the preferred mechanism is a chart repository.

Any HTTP server that can serve YAML files and tar files and can answer GET
requests can be used as a repository server. The Helm team has tested some
servers, including Google Cloud Storage with website mode enabled, and S3 with
website mode enabled.

A repository is characterized primarily by the presence of a special file called
`index.yaml` that has a list of all of the packages supplied by the repository,
together with metadata that allows retrieving and verifying those packages.

On the client side, repositories are managed with the `helm repo` commands.
However, Helm does not provide tools for uploading charts to remote repository
servers.
This is because doing so would add substantial requirements to an
implementing server, and thus raise the barrier for setting up a repository.

## Chart Starter Packs

The `helm create` command takes an optional `--starter` option that lets you
specify a "starter chart". The starter option also has a short alias, `-p`.

Examples of usage:

```console
helm create my-chart --starter starter-name
helm create my-chart -p starter-name
helm create my-chart -p /absolute/path/to/starter-name
```

Starters are just regular charts, but are located in
`$XDG_DATA_HOME/helm/starters`. As a chart developer, you may author charts that
are specifically designed to be used as starters. Such charts should be designed
with the following considerations in mind:

- The `Chart.yaml` will be overwritten by the generator.
- Users will expect to modify such a chart's contents, so documentation should
  indicate how users can do so.
- All occurrences of `<CHARTNAME>` will be replaced with the specified chart
  name, so that starter charts can be used as templates. Some files are
  excluded from this substitution: for example, `<CHARTNAME>` is NOT replaced
  inside custom files in the `vars` directory or in certain `README.md` files.
  Additionally, the chart description is not inherited.

Currently the only way to add a chart to `$XDG_DATA_HOME/helm/starters` is to
manually copy it there.
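As a minimal sketch, copying a locally authored chart into the starters directory might look like the following (`mychart` is a hypothetical chart; a stub is created here so the commands are self-contained):

```shell
# Create a stub chart standing in for a chart you have authored.
mkdir -p mychart
printf 'apiVersion: v2\nname: mychart\nversion: 0.1.0\n' > mychart/Chart.yaml

# XDG_DATA_HOME commonly defaults to ~/.local/share when unset.
STARTERS="${XDG_DATA_HOME:-$HOME/.local/share}/helm/starters"
mkdir -p "$STARTERS"

# Copy the chart into the starters directory.
cp -R mychart "$STARTERS/"
```

Afterwards, `helm create newchart -p mychart` would scaffold a new chart from that starter.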
In your chart's documentation, you may want to explain that process.
is valid  given that the chart is installed with the appropriate    set  option shown below      yaml name  frontend protocol  https         console helm install   set port 443      Furthermore  the final   Values  object is checked against  all  subchart schemas  This means that restrictions on a subchart can t be circumvented by a parent chart  This also works backwards   if a subchart has a requirement that is not met in the subchart s  values yaml  file  the parent chart  must  satisfy those restrictions in order to be valid       References  When it comes to writing templates  values  and schema files  there are several standard references that will help you out      Go templates  https   godoc org text template     Extra template functions  https   godoc org github com Masterminds sprig     The YAML format  https   yaml org spec      JSON Schema  https   json schema org       Custom Resource Definitions  CRDs   Kubernetes provides a mechanism for declaring new types of Kubernetes objects  Using CustomResourceDefinitions  CRDs   Kubernetes developers can declare custom resource types   In Helm 3  CRDs are treated as a special kind of object  They are installed before the rest of the chart  and are subject to some limitations   CRD YAML files should be placed in the  crds   directory inside of a chart  Multiple CRDs  separated by YAML start and end markers  may be placed in the same file  Helm will attempt to load  all  of the files in the CRD directory into Kubernetes   CRD files  cannot be templated   They must be plain YAML documents   When Helm installs a new chart  it will upload the CRDs  pause until the CRDs are made available by the API server  and then start the template engine  render the rest of the chart  and upload it to Kubernetes  Because of this ordering  CRD information is available in the   Capabilities  object in Helm templates  and Helm templates may create new instances of objects that were declared in CRDs   For example  if your chart had 
a CRD for  CronTab  in the  crds   directory  you may create instances of the  CronTab  kind in the  templates   directory      text crontabs    Chart yaml   crds      crontab yaml   templates      mycrontab yaml      The  crontab yaml  file must contain the CRD with no template directives      yaml kind  CustomResourceDefinition metadata    name  crontabs stable example com spec    group  stable example com   versions        name  v1       served  true       storage  true   scope  Namespaced   names      plural  crontabs     singular  crontab     kind  CronTab      Then the template  mycrontab yaml  may create a new  CronTab   using templates as usual       yaml apiVersion  stable example com kind  CronTab metadata    name   spec                Helm will make sure that the  CronTab  kind has been installed and is available from the Kubernetes API server before it proceeds installing the things in  templates         Limitations on CRDs  Unlike most objects in Kubernetes  CRDs are installed globally  For that reason  Helm takes a very cautious approach in managing CRDs  CRDs are subject to the following limitations     CRDs are never reinstalled  If Helm determines that the CRDs in the  crds     directory are already present  regardless of version   Helm will not attempt   to install or upgrade    CRDs are never installed on upgrade or rollback  Helm will only create CRDs on   installation operations    CRDs are never deleted  Deleting a CRD automatically deletes all of the CRD s   contents across all namespaces in the cluster  Consequently  Helm will not   delete CRDs   Operators who want to upgrade or delete CRDs are encouraged to do this manually and with great care      Using Helm to Manage Charts  The  helm  tool has several commands for working with charts   It can create a new chart for you      console   helm create mychart Created mychart       Once you have edited a chart   helm  can package it into a chart archive for you      console   helm package 
mychart Archived mychart 0 1   tgz      You can also use  helm  to help you find issues with your chart s formatting or information      console   helm lint mychart No issues found         Chart Repositories  A  chart repository  is an HTTP server that houses one or more packaged charts  While  helm  can be used to manage local chart directories  when it comes to sharing charts  the preferred mechanism is a chart repository   Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server  The Helm team has tested some servers  including Google Cloud Storage with website mode enabled  and S3 with website mode enabled   A repository is characterized primarily by the presence of a special file called  index yaml  that has a list of all of the packages supplied by the repository  together with metadata that allows retrieving and verifying those packages   On the client side  repositories are managed with the  helm repo  commands  However  Helm does not provide tools for uploading charts to remote repository servers  This is because doing so would add substantial requirements to an implementing server  and thus raise the barrier for setting up a repository      Chart Starter Packs  The  helm create  command takes an optional    starter  option that lets you specify a  starter chart   Also  the starter option has a short alias   p    Examples of usage      console helm create my chart   starter starter name helm create my chart  p starter name helm create my chart  p  absolute path to starter name      Starters are just regular charts  but are located in   XDG DATA HOME helm starters   As a chart developer  you may author charts that are specifically designed to be used as starters  Such charts should be designed with the following considerations in mind     The  Chart yaml  will be overwritten by the generator    Users will expect to modify such a chart s contents  so documentation should   indicate how users can 
do so    All occurrences of   CHARTNAME   will be replaced with the specified chart name so that starter charts can be used as templates  except for some variable files  For example  if you use custom files in the  vars  directory or certain  README md  files    CHARTNAME   will NOT override inside them  Additionally  the chart description is not inherited   Currently the only way to add a chart to   XDG DATA HOME helm starters  is to manually copy it there  In your chart s documentation  you may want to explain that process "}
{"questions":"helm Explains various advanced features for Helm power users aliases docs advancedhelmtechniques The information in this section is intended for power users of Helm that wish title Advanced Helm Techniques This section explains various advanced features and techniques for using Helm to do advanced customization and manipulation of their charts and releases Each weight 9","answers":"---\ntitle: \"Advanced Helm Techniques\"\ndescription: \"Explains various advanced features for Helm power users\"\naliases: [\"\/docs\/advanced_helm_techniques\"]\nweight: 9\n---\n\nThis section explains various advanced features and techniques for using Helm.\nThe information in this section is intended for \"power users\" of Helm that wish\nto do advanced customization and manipulation of their charts and releases. Each\nof these advanced features comes with their own tradeoffs and caveats, so each\none must be used carefully and with deep knowledge of Helm. Or in other words,\nremember the [Peter Parker\nprinciple](https:\/\/en.wikipedia.org\/wiki\/With_great_power_comes_great_responsibility)\n\n## Post Rendering\nPost rendering gives chart installers the ability to manually manipulate,\nconfigure, and\/or validate rendered manifests before they are installed by Helm.\nThis allows users with advanced configuration needs to be able to use tools like\n[`kustomize`](https:\/\/kustomize.io) to apply configuration changes without the\nneed to fork a public chart or requiring chart maintainers to specify every last\nconfiguration option for a piece of software. There are also use cases for\ninjecting common tools and side cars in enterprise environments or analysis of\nthe manifests before deployment.\n\n### Prerequisites\n- Helm 3.1+\n\n### Usage\nA post-renderer can be any executable that accepts rendered Kubernetes manifests\non STDIN and returns valid Kubernetes manifests on STDOUT. It should return a\nnon-0 exit code in the event of a failure. 
This is the only \"API\" between the\ntwo components. It allows for great flexibility in what you can do with your\npost-render process.\n\nA post renderer can be used with `install`, `upgrade`, and `template`. To use a\npost-renderer, use the `--post-renderer` flag with a path to the renderer\nexecutable you wish to use:\n\n```shell\n$ helm install mychart stable\/wordpress --post-renderer .\/path\/to\/executable\n```\n\nIf the path does not contain any separators, it will search in $PATH, otherwise\nit will resolve any relative paths to a fully qualified path\n\nIf you wish to use multiple post-renderers, call all of them in a script or\ntogether in whatever binary tool you have built. In bash, this would be as\nsimple as `renderer1 | renderer2 | renderer3`.\n\nYou can see an example of using `kustomize` as a post renderer\n[here](https:\/\/github.com\/thomastaylor312\/advanced-helm-demos\/tree\/master\/post-render).\n\n### Caveats\nWhen using post renderers, there are several important things to keep in mind.\nThe most important of these is that when using a post-renderer, all people\nmodifying that release **MUST** use the same renderer in order to have\nrepeatable builds. This feature is purposefully built to allow any user to\nswitch out which renderer they are using or to stop using a renderer, but this\nshould be done deliberately to avoid accidental modification or data loss.\n\nOne other important note is around security. If you are using a post-renderer,\nyou should ensure it is coming from a reliable source (as is the case for any\nother arbitrary executable). Using non-trusted or non-verified renderers is NOT\nrecommended as they have full access to rendered templates, which often contain\nsecret data.\n\n### Custom Post Renderers\nThe post render step offers even more flexibility when used in the Go SDK. 
Any\npost renderer only needs to implement the following Go interface:\n\n```go\ntype PostRenderer interface {\n    \/\/ Run expects a single buffer filled with Helm rendered manifests. It\n    \/\/ expects the modified results to be returned on a separate buffer or an\n    \/\/ error if there was an issue or failure while running the post render step\n    Run(renderedManifests *bytes.Buffer) (modifiedManifests *bytes.Buffer, err error)\n}\n```\n\nFor more information on using the Go SDK, See the [Go SDK section](#go-sdk)\n\n## Go SDK\nHelm 3 debuted a completely restructured Go SDK for a better experience when\nbuilding software and tools that leverage Helm. Full documentation can be found\nin the [Go SDK Section](..\/sdk\/gosdk.md).\n\n## Storage backends\n\nHelm 3 changed the default release information storage to Secrets in the\nnamespace of the release. Helm 2 by default stores release information as\nConfigMaps in the namespace of the Tiller instance. The subsections which follow\nshow how to configure different backends. This configuration is based on the\n`HELM_DRIVER` environment variable. It can be set to one of the values:\n`[configmap, secret, sql]`.\n\n### ConfigMap storage backend\n\nTo enable the ConfigMap backend, you'll need to set the environmental variable\n`HELM_DRIVER` to `configmap`.\n\nYou can set it in a shell as follows:\n\n```shell\nexport HELM_DRIVER=configmap\n```\n\nIf you want to switch from the default backend to the ConfigMap backend, you'll\nhave to do the migration for this on your own. You can retrieve release\ninformation with the following command:\n\n```shell\nkubectl get secret --all-namespaces -l \"owner=helm\"\n```\n\n**PRODUCTION NOTES**: The release information includes the contents of charts and\nvalues files, and therefore might contain sensitive data (like\npasswords, private keys, and other credentials) that needs to be protected from\nunauthorized access. 
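To see why this matters, a release Secret can be decoded locally to inspect exactly what it holds. Helm 3 stores the release payload gzip-compressed and base64-encoded, and Kubernetes base64-encodes Secret `data` fields once more; the secret name below is hypothetical, so substitute one returned by the `kubectl get secret` command above:

```shell
# Decoding a real release Secret (hypothetical name) would look like:
#   kubectl get secret sh.helm.release.v1.myrelease.v1 \
#     -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d
#
# The same encoding layers, simulated without a cluster:
payload='name: myrelease'
# gzip, then Helm's base64 layer, then the Kubernetes Secret base64 layer:
encoded=$(printf '%s' "$payload" | gzip -c | base64 | base64)
# Reversing the layers recovers the plaintext release data:
printf '%s' "$encoded" | base64 -d | base64 -d | gzip -d   # prints: name: myrelease
```

Anyone with read access to these Secrets can recover the rendered values this way, which is the reason for the access-control cautions in this section.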
When managing Kubernetes authorization, for instance with\n[RBAC](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/rbac\/), it is\npossible to grant broader access to ConfigMap resources, while restricting\naccess to Secret resources. For instance, the default [user-facing\nrole](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/rbac\/#user-facing-roles)\n\"view\" grants access to most resources, but not to Secrets. Furthermore, secrets\ndata can be configured for [encrypted\nstorage](https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/encrypt-data\/).\nPlease keep that in mind if you decide to switch to the ConfigMap backend, as it\ncould expose your application's sensitive data.\n\n### SQL storage backend\n\nThere is a ***beta*** SQL storage backend that stores release information in an SQL\ndatabase.\n\nUsing such a storage backend is particularly useful if your release information\nweighs more than 1MB (in which case, it can't be stored in ConfigMaps\/Secrets\nbecause of internal limits in Kubernetes' underlying etcd key-value store).\n\nTo enable the SQL backend, you'll need to deploy a SQL database and set the\nenvironmental variable `HELM_DRIVER` to `sql`. The DB details are set with the\nenvironmental variable `HELM_DRIVER_SQL_CONNECTION_STRING`.\n\nYou can set it in a shell as follows:\n\n```shell\nexport HELM_DRIVER=sql\nexport HELM_DRIVER_SQL_CONNECTION_STRING=postgresql:\/\/helm-postgres:5432\/helm?user=helm&password=changeme\n```\n\n> Note: Only PostgreSQL is supported at this moment.\n\n**PRODUCTION NOTES**: It is recommended to:\n- Make your database production ready. 
For PostgreSQL, refer to the [Server Administration](https:\/\/www.postgresql.org\/docs\/12\/admin.html) docs for more details\n- Enable [permission management](\/docs\/permissions_sql_storage_backend\/) to\nmirror Kubernetes RBAC for release information\n\nIf you want to switch from the default backend to the SQL backend, you'll have\nto do the migration for this on your own. You can retrieve release information\nwith the following command:\n\n```shell\nkubectl get secret --all-namespaces -l \"owner=helm\"\n```","site":"helm"}
{"questions":"helm aliases docs plugins A Helm plugin is a tool that can be accessed through the CLI but which Introduces how to use and create plugins to extend Helm s functionality is not part of the built in Helm codebase title The Helm Plugins Guide weight 12","answers":"---\ntitle: \"The Helm Plugins Guide\"\ndescription: \"Introduces how to use and create plugins to extend Helm's functionality.\"\naliases: [\"\/docs\/plugins\/\"]\nweight: 12\n---\n\nA Helm plugin is a tool that can be accessed through the `helm` CLI, but which\nis not part of the built-in Helm codebase.\n\nExisting plugins can be found in the [related]() section or by searching\n[GitHub](https:\/\/github.com\/search?q=topic%3Ahelm-plugin&type=Repositories).\n\nThis guide explains how to use and create plugins.\n\n## An Overview\n\nHelm plugins are add-on tools that integrate seamlessly with Helm. They provide\na way to extend the core feature set of Helm, but without requiring every new\nfeature to be written in Go and added to the core tool.\n\nHelm plugins have the following features:\n\n- They can be added and removed from a Helm installation without impacting the\n  core Helm tool.\n- They can be written in any programming language.\n- They integrate with Helm, and will show up in `helm help` and other places.\n\nHelm plugins live in `$HELM_PLUGINS`. You can find the current value of this,\nincluding the default value when not set in the environment, using the\n`helm env` command.\n\nThe Helm plugin model is partially modeled on Git's plugin model. To that end,\nyou may sometimes hear `helm` referred to as the _porcelain_ layer, with plugins\nbeing the _plumbing_. This is a shorthand way of suggesting that Helm provides\nthe user experience and top level processing logic, while the plugins do the\n\"detail work\" of performing a desired action.\n\n## Installing a Plugin\n\nPlugins are installed using the `$ helm plugin install <path|url>` command. 
You\ncan pass in a path to a plugin on your local file system or a url of a remote\nVCS repo. The `helm plugin install` command clones or copies the plugin at the\npath\/url given into `$HELM_PLUGINS`\n\n```console\n$ helm plugin install https:\/\/github.com\/adamreese\/helm-env\n```\n\nIf you have a plugin tar distribution, simply untar the plugin into the\n`$HELM_PLUGINS` directory. You can also install tarball plugins\ndirectly from url by issuing `helm plugin install\nhttps:\/\/domain\/path\/to\/plugin.tar.gz`\n\n## Testing a locally built Plugin\n\nFirst you need to find your `HELM_PLUGINS` path to do it run the folowing command:\n\n``` bash\nhelm env\n```\n\nChange your current directory to the director that `HELM_PLUGINS` is set to.\n\nNow you can add a symbolic link to your build out put of your plugin in this example we did it for `mapkubeapis`.\n\n``` bash\nln -s ~\/GitHub\/helm-mapkubeapis .\/helm-mapkubeapis\n```\n\n## Building Plugins\n\nIn many ways, a plugin is similar to a chart. Each plugin has a top-level\ndirectory, and then a `plugin.yaml` file.\n\n```\n$HELM_PLUGINS\/\n  |- last\/\n      |\n      |- plugin.yaml\n      |- last.sh\n\n```\n\nIn the example above, the `last` plugin is contained inside of a directory\nnamed `last`. It has two files: `plugin.yaml` (required) and an executable\nscript, `last.sh` (optional).\n\nThe core of a plugin is a simple YAML file named `plugin.yaml`. 
Here is a plugin\nYAML for a plugin that helps get the last release name:\n\n```yaml\nname: \"last\"\nversion: \"0.1.0\"\nusage: \"get the last release name\"\ndescription: \"get the last release name\"\nignoreFlags: false\ncommand: \"$HELM_BIN --host $TILLER_HOST list --short --max 1 --date -r\"\nplatformCommand:\n  - os: linux\n    arch: i386\n    command: \"$HELM_BIN list --short --max 1 --date -r\"\n  - os: linux\n    arch: amd64\n    command: \"$HELM_BIN list --short --max 1 --date -r\"\n  - os: windows\n    arch: amd64\n    command: \"$HELM_BIN list --short --max 1 --date -r\"\n```\n\nThe `name` is the name of the plugin. When Helm executes this plugin, this is\nthe name it will use (e.g. `helm NAME` will invoke this plugin).\n\n_`name` should match the directory name._ In our example above, that means the\nplugin with `name: last` should be contained in a directory named `last`.\n\nRestrictions on `name`:\n\n- `name` cannot duplicate one of the existing `helm` top-level commands.\n- `name` must be restricted to the characters ASCII a-z, A-Z, 0-9, `_` and `-`.\n\n`version` is the SemVer 2 version of the plugin. `usage` and `description` are\nboth used to generate the help text of a command.\n\nThe `ignoreFlags` switch tells Helm to _not_ pass flags to the plugin. So if a\nplugin is called with `helm myplugin --foo` and `ignoreFlags: true`, then\n`--foo` is silently discarded.\n\nFinally, and most importantly, `platformCommand` or `command` is the command\nthat this plugin will execute when it is called. The `platformCommand` section\ndefines the OS\/Architecture specific variations of a command. 
The following\nrules will apply in deciding which command to use:\n\n- If `platformCommand` is present, it will be searched first.\n- If both `os` and `arch` match the current platform, search will stop and the\n  command will be used.\n- If `os` matches and there is no more specific `arch` match, the command will\n  be used.\n- If no `platformCommand` match is found, the default `command` will be used.\n- If no matches are found in `platformCommand` and no `command` is present, Helm\n  will exit with an error.\n\nEnvironment variables are interpolated before the plugin is executed. The\npattern above illustrates the preferred way to indicate where the plugin program\nlives.\n\nThere are some strategies for working with plugin commands:\n\n- If a plugin includes an executable, the executable for a `platformCommand:` or\n  a `command:` should be packaged in the plugin directory.\n- The `platformCommand:` or `command:` line will have any environment variables\n  expanded before execution. `$HELM_PLUGIN_DIR` will point to the plugin\n  directory.\n- The command itself is not executed in a shell. So you can't oneline a shell\n  script.\n- Helm injects lots of configuration into environment variables. Take a look at\n  the environment to see what information is available.\n- Helm makes no assumptions about the language of the plugin. You can write it\n  in whatever you prefer.\n- Commands are responsible for implementing specific help text for `-h` and\n  `--help`. Helm will use `usage` and `description` for `helm help` and `helm\n  help myplugin`, but will not handle `helm myplugin --help`.\n\n## Downloader Plugins\nBy default, Helm is able to pull Charts using HTTP\/S. 
As of Helm 2.4.0, plugins\ncan have a special capability to download Charts from arbitrary sources.\n\nPlugins shall declare this special capability in the `plugin.yaml` file (top\nlevel):\n\n```yaml\ndownloaders:\n- command: \"bin\/mydownloader\"\n  protocols:\n  - \"myprotocol\"\n  - \"myprotocols\"\n```\n\nIf such plugin is installed, Helm can interact with the repository using the\nspecified protocol scheme by invoking the `command`. The special repository\nshall be added similarly to the regular ones: `helm repo add favorite\nmyprotocol:\/\/example.com\/` The rules for the special repos are the same to the\nregular ones: Helm must be able to download the `index.yaml` file in order to\ndiscover and cache the list of available Charts.\n\nThe defined command will be invoked with the following scheme: `command certFile\nkeyFile caFile full-URL`. The SSL credentials are coming from the repo\ndefinition, stored in `$HELM_REPOSITORY_CONFIG`\n(i.e., `$HELM_CONFIG_HOME\/repositories.yaml`). A Downloader plugin\nis expected to dump the raw content to stdout and report errors on stderr.\n\nThe downloader command also supports sub-commands or arguments, allowing you to\nspecify for example `bin\/mydownloader subcommand -d` in the `plugin.yaml`. This\nis useful if you want to use the same executable for the main plugin command and\nthe downloader command, but with a different sub-command for each.\n\n## Environment Variables\n\nWhen Helm executes a plugin, it passes the outer environment to the plugin, and\nalso injects some additional environment variables.\n\nVariables like `KUBECONFIG` are set for the plugin if they are set in the outer\nenvironment.\n\nThe following variables are guaranteed to be set:\n\n- `HELM_PLUGINS`: The path to the plugins directory.\n- `HELM_PLUGIN_NAME`: The name of the plugin, as invoked by `helm`. 
So `helm\n  myplug` will have the short name `myplug`.\n- `HELM_PLUGIN_DIR`: The directory that contains the plugin.\n- `HELM_BIN`: The path to the `helm` command (as executed by the user).\n- `HELM_DEBUG`: Indicates if the debug flag was set by helm.\n- `HELM_REGISTRY_CONFIG`: The location for the registry configuration (if\n  using). Note that the use of Helm with registries is an experimental feature.\n- `HELM_REPOSITORY_CACHE`: The path to the repository cache files.\n- `HELM_REPOSITORY_CONFIG`: The path to the repository configuration file.\n- `HELM_NAMESPACE`: The namespace given to the `helm` command (generally using\n  the `-n` flag).\n- `HELM_KUBECONTEXT`: The name of the Kubernetes config context given to the\n  `helm` command.\n\nAdditionally, if a Kubernetes configuration file was explicitly specified, it\nwill be set as the `KUBECONFIG` variable.\n\n## A Note on Flag Parsing\n\nWhen executing a plugin, Helm will parse global flags for its own use. None of\nthese flags are passed on to the plugin.\n\n- `--debug`: If this is specified, `$HELM_DEBUG` is set to `1`\n- `--registry-config`: This is converted to `$HELM_REGISTRY_CONFIG`\n- `--repository-cache`: This is converted to `$HELM_REPOSITORY_CACHE`\n- `--repository-config`: This is converted to `$HELM_REPOSITORY_CONFIG`\n- `--namespace` and `-n`: This is converted to `$HELM_NAMESPACE`\n- `--kube-context`: This is converted to `$HELM_KUBECONTEXT`\n- `--kubeconfig`: This is converted to `$KUBECONFIG`\n\nPlugins _should_ display help text and then exit for `-h` and `--help`. 
In all\nother cases, plugins may use flags as appropriate.\n\n## Providing shell auto-completion\n\nAs of Helm 3.2, a plugin can optionally provide support for shell\nauto-completion as part of Helm's existing auto-completion mechanism.\n\n### Static auto-completion\n\nIf a plugin provides its own flags and\/or sub-commands, it can inform Helm of\nthem by having a `completion.yaml` file located in the plugin's root directory.\nThe `completion.yaml` file has the form:\n\n```yaml\nname: <pluginName>\nflags:\n- <flag 1>\n- <flag 2>\nvalidArgs:\n- <arg value 1>\n- <arg value 2>\ncommands:\n  name: <commandName>\n  flags:\n  - <flag 1>\n  - <flag 2>\n  validArgs:\n  - <arg value 1>\n  - <arg value 2>\n  commands:\n     <and so on, recursively>\n```\n\nNotes:\n1. All sections are optional but should be provided if applicable.\n1. Flags should not include the `-` or `--` prefix.\n1. Both short and long flags can and should be specified. A short flag need not\n   be associated with its corresponding long form, but both forms should be\n   listed.\n1. Flags need not be ordered in any way, but need to be listed at the correct\n   point in the sub-command hierarchy of the file.\n1. Helm's existing global flags are already handled by Helm's auto-completion\n   mechanism; therefore, plugins need not specify the following flags: `--debug`,\n   `--namespace` or `-n`, `--kube-context`, and `--kubeconfig`, or any other\n   global flag.\n1. The `validArgs` list provides a static list of possible completions for the\n   first parameter following a sub-command.  It is not always possible to\n   provide such a list in advance (see the [Dynamic\n   Completion](#dynamic-completion) section below), in which case the\n   `validArgs` section can be omitted.\n\nThe `completion.yaml` file is entirely optional.  If it is not provided, Helm\nwill simply not provide shell auto-completion for the plugin (unless [Dynamic\nCompletion](#dynamic-completion) is supported by the plugin).  
Also, adding a\n`completion.yaml` file is backwards-compatible and will not impact the behavior\nof the plugin when using older helm versions.\n\nAs an example, for the [`fullstatus\nplugin`](https:\/\/github.com\/marckhouzam\/helm-fullstatus), which has no\nsub-commands but accepts the same flags as the `helm status` command, the\n`completion.yaml` file is:\n\n```yaml\nname: fullstatus\nflags:\n- o\n- output\n- revision\n```\n\nA more intricate example, the [`2to3\nplugin`](https:\/\/github.com\/helm\/helm-2to3), has a `completion.yaml` file of:\n\n```yaml\nname: 2to3\ncommands:\n- name: cleanup\n  flags:\n  - config-cleanup\n  - dry-run\n  - l\n  - label\n  - release-cleanup\n  - s\n  - release-storage\n  - tiller-cleanup\n  - t\n  - tiller-ns\n  - tiller-out-cluster\n- name: convert\n  flags:\n  - delete-v2-releases\n  - dry-run\n  - l\n  - label\n  - s\n  - release-storage\n  - release-versions-max\n  - t\n  - tiller-ns\n  - tiller-out-cluster\n- name: move\n  commands:\n  - name: config\n    flags:\n    - dry-run\n```\n\n### Dynamic completion\n\nAlso starting with Helm 3.2, plugins can provide their own dynamic shell\nauto-completion. Dynamic shell auto-completion is the completion of parameter\nvalues or flag values that cannot be defined in advance; for example, the\ncompletion of the names of helm releases currently available on the cluster.\n\nFor the plugin to support dynamic auto-completion, it must provide an\n**executable** file called `plugin.complete` in its root directory. When the\nHelm completion script requires dynamic completions for the plugin, it will\nexecute the `plugin.complete` file, passing it the command-line that needs to be\ncompleted.  The `plugin.complete` executable will need to have the logic to\ndetermine what the proper completion choices are and output them to standard\noutput to be consumed by the Helm completion script.\n\nThe `plugin.complete` file is entirely optional.  
If it is not provided, Helm\nwill simply not provide dynamic auto-completion for the plugin.  Also, adding a\n`plugin.complete` file is backwards-compatible and will not impact the behavior\nof the plugin when using older helm versions.\n\nThe output of the `plugin.complete` script should be a newline-separated list\nsuch as:\n\n```\nrel1\nrel2\nrel3\n```\n\nWhen `plugin.complete` is called, the plugin environment is set just like when\nthe plugin's main script is called. Therefore, the variables `$HELM_NAMESPACE`,\n`$HELM_KUBECONTEXT`, and all other plugin variables will already be set, and\ntheir corresponding global flags will be removed.\n\nThe `plugin.complete` file can be in any executable form; it can be a shell\nscript, a Go program, or any other type of program that Helm can execute. The\n`plugin.complete` file ***must*** have executable permissions for the user. The\n`plugin.complete` file ***must*** exit with a success code (value 0).\n\nIn some cases, dynamic completion will require obtaining information from the\nKubernetes cluster.  For example, the `helm fullstatus` plugin requires a\nrelease name as input. In the `fullstatus` plugin, for its `plugin.complete`\nscript to provide completion for current release names, it can simply run `helm\nlist -q` and output the result.\n\nIf it is desired to use the same executable for plugin execution and for plugin\ncompletion, the `plugin.complete` script can be made to call the main plugin\nexecutable with some special parameter or flag; when the main plugin executable\ndetects the special parameter or flag, it will know to run the completion. 
In\nour example, `plugin.complete` could be implemented like this:\n\n```sh\n#!\/usr\/bin\/env sh\n\n# \"$@\" is the entire command-line that requires completion.\n# It is important to double-quote the \"$@\" variable to preserve a possibly empty last parameter.\n$HELM_PLUGIN_DIR\/status.sh --complete \"$@\"\n```\n\nThe `fullstatus` plugin's real script (`status.sh`) must then look for the\n`--complete` flag and, if found, print out the proper completions.\n\n### Tips and tricks\n\n1. The shell will automatically filter out completion choices that don't match\n   user input. A plugin can therefore return all relevant completions without\n   removing the ones that don't match the user input.  For example, if the\n   command-line is `helm fullstatus ngin<TAB>`, the `plugin.complete` script can\n   print *all* release names (of the `default` namespace), not just the ones\n   starting with `ngin`; the shell will only retain the ones starting with\n   `ngin`.\n1. To simplify dynamic completion support, especially if you have a complex\n   plugin, you can have your `plugin.complete` script call your main plugin\n   script and request completion choices.  See the [Dynamic\n   Completion](#dynamic-completion) section above for an example.\n1. To debug dynamic completion and the `plugin.complete` file, one can run the\n   following to see the completion results:\n    - `helm __complete <pluginName> <arguments to complete>`.  
For example:\n    - `helm __complete fullstatus --output js<ENTER>`,\n    - `helm __complete fullstatus -o json \"\"<ENTER>`","site":"helm"}
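The `plugin.complete`-plus-`--complete` pattern described above can be sketched end to end. This is a minimal, self-contained illustration, not the actual `fullstatus` code: the script layout and the fixed candidate list are assumptions, and a real plugin would run `helm list -q` in completion mode instead.

```shell
#!/usr/bin/env sh
# Hypothetical single entry point (e.g. status.sh) that serves both the
# plugin's normal invocation and completion requests. plugin.complete
# would invoke it as: $HELM_PLUGIN_DIR/status.sh --complete "$@"

if [ "$1" = "--complete" ]; then
    shift
    # Completion mode: print one candidate per line to stdout for the
    # Helm completion script, then exit 0 (completions must succeed).
    # A real plugin would run `helm list -q`; a fixed list keeps this
    # sketch runnable without a cluster.
    printf '%s\n' rel1 rel2 rel3
    exit 0
fi

# Normal mode: the plugin's actual work would go here.
echo "full status for release: $1"
```

Because the shell filters the printed candidates against what the user has typed, the completion branch does not need to pre-filter its output.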
{"answers":"---\ntitle: \"Role-based Access Control\"\ndescription: \"Explains how Helm interacts with Kubernetes' Role-Based Access Control.\"\naliases: [\"\/docs\/rbac\/\"]\nweight: 11\n---\n\nIn Kubernetes, granting roles to a user or an application-specific service\naccount is a best practice to ensure that your application is operating in the\nscope that you have specified. Read more about service account permissions [in\nthe official Kubernetes\ndocs](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/rbac\/#service-account-permissions).\n\nFrom Kubernetes 1.6 onwards, Role-based Access Control is enabled by default.\nRBAC allows you to specify which types of actions are permitted depending on the\nuser and their role in your organization.\n\nWith RBAC, you can\n\n- grant privileged operations (creating cluster-wide resources, like new roles)\n  to administrators\n- limit a user's ability to create resources (pods, persistent volumes,\n  deployments) to specific namespaces, or in cluster-wide scopes (resource\n  quotas, roles, custom resource definitions)\n- limit a user's ability to view resources either in specific namespaces or at a\n  cluster-wide scope.\n\nThis guide is for administrators who want to restrict the scope of a user's\ninteraction with the Kubernetes API.\n\n## Managing user accounts\n\nAll Kubernetes clusters have two categories of users: service accounts managed\nby Kubernetes, and normal users.\n\nNormal users are assumed to be managed by an outside, independent service. 
Examples include an\nadministrator distributing private keys, a user store like Keystone or Google\nAccounts, or even a file with a list of usernames and passwords. In this regard,\nKubernetes does not have objects which represent normal user accounts. Normal\nusers cannot be added to a cluster through an API call.\n\nIn contrast, service accounts are users managed by the Kubernetes API. They are\nbound to specific namespaces, and created automatically by the API server or\nmanually through API calls. Service accounts are tied to a set of credentials\nstored as Secrets, which are mounted into pods allowing in-cluster processes to\ntalk to the Kubernetes API.\n\nAPI requests are tied to either a normal user or a service account, or are\ntreated as anonymous requests. This means every process inside or outside the\ncluster, from a human user typing `kubectl` on a workstation, to kubelets on\nnodes, to members of the control plane, must authenticate when making requests\nto the API server, or be treated as an anonymous user.\n\n## Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings\n\nIn Kubernetes, user accounts and service accounts can only view and edit\nresources they have been granted access to. This access is granted through the\nuse of Roles and RoleBindings. Roles and RoleBindings are bound to a particular\nnamespace and grant users the ability to view and\/or edit the resources within\nthat namespace that the Role covers.\n\nAt a cluster scope, these are called ClusterRoles and ClusterRoleBindings.\nGranting a user a ClusterRole grants them access to view and\/or edit resources\nacross the entire cluster. It is also required to view and\/or edit resources at\nthe cluster scope (namespaces, resource quotas, nodes).\n\nClusterRoles can be bound to a particular namespace through reference in a\nRoleBinding. The `admin`, `edit` and `view` default ClusterRoles are commonly\nused in this manner.\n\nThese are a few ClusterRoles available by default in Kubernetes. 
They are\nintended to be user-facing roles. They include super-user roles\n(`cluster-admin`), and roles with more granular access (`admin`, `edit`,\n`view`).\n\n| Default ClusterRole | Default ClusterRoleBinding | Description\n|---------------------|----------------------------|-------------\n| `cluster-admin`     | `system:masters` group     | Allows super-user access to perform any action on any resource. When used in a ClusterRoleBinding, it gives full control over every resource in the cluster and in all namespaces. When used in a RoleBinding, it gives full control over every resource in the rolebinding's namespace, including the namespace itself.\n| `admin`             | None                       | Allows admin access, intended to be granted within a namespace using a RoleBinding. If used in a RoleBinding, allows read\/write access to most resources in a namespace, including the ability to create roles and rolebindings within the namespace. It does not allow write access to resource quota or to the namespace itself.\n| `edit`              | None                       | Allows read\/write access to most objects in a namespace. It does not allow viewing or modifying roles or rolebindings.\n| `view`              | None                       | Allows read-only access to see most objects in a namespace. It does not allow viewing roles or rolebindings. It does not allow viewing secrets, since those are escalating.\n\n## Restricting a user account's access using RBAC\n\nNow that we understand the basics of Role-based Access Control, let's discuss\nhow an administrator can restrict a user's scope of access.\n\n### Example: Grant a user read\/write access to a particular namespace\n\nTo restrict a user's access to a particular namespace, we can use either the\n`edit` or the `admin` role. 
If your charts create or interact with Roles and\nRoleBindings, you'll want to use the `admin` ClusterRole.\n\nAdditionally, you may create a RoleBinding with `cluster-admin` access.\nGranting a user `cluster-admin` access at the namespace scope provides full\ncontrol over every resource in the namespace, including the namespace itself.\n\nFor this example, we will create a user with the `edit` Role. First, create the\nnamespace:\n\n```console\n$ kubectl create namespace foo\n```\n\nNow, create a RoleBinding in that namespace, granting the user the `edit` role.\n\n```console\n$ kubectl create rolebinding sam-edit \\\n    --clusterrole edit \\\n    --user sam \\\n    --namespace foo\n```\n\n### Example: Grant a user read\/write access at the cluster scope\n\nIf a user wishes to install a chart that installs cluster-scope resources\n(namespaces, roles, custom resource definitions, etc.), they will require\ncluster-scope write access.\n\nTo do that, grant the user either `admin` or `cluster-admin` access.\n\nGranting a user `cluster-admin` access grants them access to absolutely every\nresource available in Kubernetes, including node access with `kubectl drain` and\nother administrative tasks. It is highly recommended to consider providing the\nuser `admin` access instead, or to create a custom ClusterRole tailored to their\nneeds.\n\n```console\n$ kubectl create clusterrolebinding sam-admin \\\n    --clusterrole admin \\\n    --user sam\n```\n\n### Example: Grant a user read-only access to a particular namespace\n\nYou might've noticed that there is no ClusterRole available for viewing secrets.\nThe `view` ClusterRole does not grant a user read access to Secrets due to\nescalation concerns. Helm stores release metadata as Secrets by default.\n\nIn order for a user to run `helm list`, they need to be able to read these\nsecrets. 
For that, we will create a special `secret-reader` ClusterRole.\n\nCreate the file `cluster-role-secret-reader.yaml` and write the following\ncontent into the file:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: secret-reader\nrules:\n- apiGroups: [\"\"]\n  resources: [\"secrets\"]\n  verbs: [\"get\", \"watch\", \"list\"]\n```\n\nThen, create the ClusterRole using\n\n```console\n$ kubectl create -f cluster-role-secret-reader.yaml\n```\n\nOnce that's done, we can grant a user read access to most resources, and then\ngrant them read access to secrets:\n\n```console\n$ kubectl create namespace foo\n\n$ kubectl create rolebinding sam-view \\\n    --clusterrole view \\\n    --user sam \\\n    --namespace foo\n\n$ kubectl create rolebinding sam-secret-reader \\\n    --clusterrole secret-reader \\\n    --user sam \\\n    --namespace foo\n```\n\n### Example: Grant a user read-only access at the cluster scope\n\nIn certain scenarios, it may be beneficial to grant a user cluster-scope access.\nFor example, if a user wants to run the command `helm list --all-namespaces`,\nthe API requires the user to have cluster-scope read access.\n\nTo do that, grant the user both `view` and `secret-reader` access as described\nabove, but with a ClusterRoleBinding.\n\n```console\n$ kubectl create clusterrolebinding sam-view \\\n    --clusterrole view \\\n    --user sam\n\n$ kubectl create clusterrolebinding sam-secret-reader \\\n    --clusterrole secret-reader \\\n    --user sam\n```\n\n## Additional Thoughts\n\nThe examples shown above utilize the default ClusterRoles provided with\nKubernetes. 
For more fine-grained control over what resources users are granted\naccess to, have a look at [the Kubernetes\ndocumentation](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/rbac\/) on\ncreating your own custom Roles and ClusterRoles.","site":"helm","answers_cleaned":""}
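The RoleBinding and ClusterRoleBinding examples in the RBAC guide above reduce to a simple evaluation: a request is allowed only if some binding for the user, in the right namespace, references a role whose rules cover the requested verb and resource. A minimal Python sketch of that check (an illustration only, not Kubernetes code; real RBAC also matches API groups, resource names, wildcarded bindings, cluster scope, and aggregated roles, and the `is_allowed` helper here is hypothetical):

```python
# Simplified sketch of RBAC evaluation: a request (user, verb, resource,
# namespace) is allowed if any binding for that user in that namespace
# points at a role whose rules cover both the verb and the resource.

def rule_allows(rule, verb, resource):
    """A rule matches when both its verb list and resource list cover the request."""
    return (verb in rule["verbs"] or "*" in rule["verbs"]) and \
           (resource in rule["resources"] or "*" in rule["resources"])

def is_allowed(user, verb, resource, namespace, roles, bindings):
    """Check every binding granted to `user` in `namespace` against `roles`."""
    for b in bindings:
        if b["user"] == user and b["namespace"] == namespace:
            for rule in roles[b["role"]]:
                if rule_allows(rule, verb, resource):
                    return True
    return False

# The `view` + `secret-reader` setup from the guide, bound in namespace "foo":
roles = {
    "view": [{"verbs": ["get", "watch", "list"],
              "resources": ["pods", "configmaps"]}],
    "secret-reader": [{"verbs": ["get", "watch", "list"],
                       "resources": ["secrets"]}],
}
bindings = [
    {"user": "sam", "role": "view", "namespace": "foo"},
    {"user": "sam", "role": "secret-reader", "namespace": "foo"},
]

print(is_allowed("sam", "list", "secrets", "foo", roles, bindings))    # True
print(is_allowed("sam", "delete", "secrets", "foo", roles, bindings))  # False
print(is_allowed("sam", "list", "secrets", "bar", roles, bindings))    # False: RoleBindings are namespaced
```

The last check is why `helm list --all-namespaces` needs a ClusterRoleBinding: a namespaced RoleBinding never matches requests in other namespaces.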
{"questions":"helm Explains library charts and examples of usage that defines chart primitives or definitions which can be shared by Helm title Library Charts A library chart is a type of templates in other charts This allows users to share snippets of code that can weight 4 aliases docs librarycharts","answers":"---\ntitle: \"Library Charts\"\ndescription: \"Explains library charts and examples of usage\"\naliases: [\"docs\/library_charts\/\"]\nweight: 4\n---\n\nA library chart is a type of [Helm chart](https:\/\/helm.sh\/docs\/topics\/charts\/)\nthat defines chart primitives or definitions which can be shared by Helm\ntemplates in other charts. This allows users to share snippets of code that can\nbe re-used across charts, avoiding repetition and keeping charts\n[DRY](https:\/\/en.wikipedia.org\/wiki\/Don%27t_repeat_yourself).\n\nThe library chart was introduced in Helm 3 to formally recognize common or\nhelper charts that have been used by chart maintainers since Helm 2. By\nincluding it as a chart type, it provides:\n- A means to explicitly distinguish between common and application charts\n- Logic to prevent installation of a common chart\n- No rendering of templates in a common chart which may contain release\n  artifacts\n- A way for dependent charts to use the importer's context\n\nA chart maintainer can define a common chart as a library chart and now be\nconfident that Helm will handle the chart in a standard, consistent fashion. It\nalso means that definitions in an application chart can be shared by changing\nthe chart type.\n\n## Create a Simple Library Chart\n\nAs mentioned previously, a library chart is a type of [Helm chart](https:\/\/helm.sh\/docs\/topics\/charts\/). 
This means that you can start off by creating a\nscaffold chart:\n\n```console\n$ helm create mylibchart\nCreating mylibchart\n```\n\nFirst, remove all the files in the `templates\/` directory, as we will create\nour own template definitions in this example.\n\n```console\n$ rm -rf mylibchart\/templates\/*\n```\n\nThe values file will not be required either.\n\n```console\n$ rm -f mylibchart\/values.yaml\n```\n\nBefore we jump into creating common code, let's do a quick review of some\nrelevant Helm concepts. A [named template](https:\/\/helm.sh\/docs\/chart_template_guide\/named_templates\/) (sometimes called a partial\nor a subtemplate) is simply a template defined inside of a file, and given a\nname. In the `templates\/` directory, any file that begins with an underscore (`_`)\nis not expected to output a Kubernetes manifest file. So by convention, helper\ntemplates and partials are placed in `_*.tpl` or `_*.yaml` files.\n\nIn this example, we will code a common ConfigMap which creates an empty\nConfigMap resource. We will define the common ConfigMap in the file\n`mylibchart\/templates\/_configmap.yaml` as follows:\n\n```yaml\n{{- define \"mylibchart.configmap.tpl\" -}}\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-{{ .Chart.Name }}\ndata: {}\n{{- end -}}\n{{- define \"mylibchart.configmap\" -}}\n{{- include \"mylibchart.util.merge\" (append . \"mylibchart.configmap.tpl\") -}}\n{{- end -}}\n```\n\nThe ConfigMap construct is defined in the named template `mylibchart.configmap.tpl`.\nIt is a simple ConfigMap with an empty resource, `data`. Within this file there\nis another named template called `mylibchart.configmap`. This named template\nincludes another named template, `mylibchart.util.merge`, which takes 2 named\ntemplates as arguments: the template calling `mylibchart.configmap` and\n`mylibchart.configmap.tpl`.\n\nThe helper function `mylibchart.util.merge` is a named template in\n`mylibchart\/templates\/_util.yaml`. 
It is a handy util from [The Common Helm\nHelper Chart](#the-common-helm-helper-chart) because it merges the 2 templates\nand overrides any common parts in both:\n\n```yaml\n{{- \/*\nmylibchart.util.merge will merge two YAML templates and output the result.\nThis takes an array of three values:\n- the top context\n- the template name of the overrides (destination)\n- the template name of the base (source)\n*\/}}\n{{- define \"mylibchart.util.merge\" -}}\n{{- $top := first . -}}\n{{- $overrides := fromYaml (include (index . 1) $top) | default (dict ) -}}\n{{- $tpl := fromYaml (include (index . 2) $top) | default (dict ) -}}\n{{- toYaml (merge $overrides $tpl) -}}\n{{- end -}}\n```\n\nThis is important when a chart wants to use common code that it needs to\ncustomize with its configuration.\n\nFinally, let's change the chart type to `library`. This requires editing\n`mylibchart\/Chart.yaml` as follows:\n\n```yaml\napiVersion: v2\nname: mylibchart\ndescription: A Helm chart for Kubernetes\n\n# A chart can be either an 'application' or a 'library' chart.\n#\n# Application charts are a collection of templates that can be packaged into versioned archives\n# to be deployed.\n#\n# Library charts provide useful utilities or functions for the chart developer. They're included as\n# a dependency of application charts to inject those utilities and functions into the rendering\n# pipeline. Library charts do not define any templates and therefore cannot be deployed.\n# type: application\ntype: library\n\n# This is the chart version. This version number should be incremented each time you make changes\n# to the chart and its templates, including the app version.\nversion: 0.1.0\n\n# This is the version number of the application being deployed. This version number should be\n# incremented each time you make changes to the application and it is recommended to use it with quotes.\nappVersion: \"1.16.0\"\n```\n\nThe library chart is now ready to be shared and its ConfigMap definition to be\nre-used.\n\nBefore moving on, it is worth checking whether Helm recognizes the chart as a library\nchart:\n\n```console\n$ helm install mylibchart mylibchart\/\nError: library charts are not installable\n```\n\n## Use the Simple Library Chart\n\nIt is time to use the library chart. 
This means creating a scaffold chart again:\n\n```console\n$ helm create mychart\nCreating mychart\n```\n\nLet's clean out the template files again, as we want to create a ConfigMap only:\n\n```console\n$ rm -rf mychart\/templates\/*\n```\n\nWhen we want to create a simple ConfigMap in a Helm template, it could look\nsimilar to the following:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: {{ .Release.Name }}-configmap\ndata:\n  myvalue: \"Hello World\"\n```\n\nWe are, however, going to re-use the common code already created in `mylibchart`.\nThe ConfigMap can be created in the file `mychart\/templates\/configmap.yaml` as\nfollows:\n\n```yaml\n{{- include \"mylibchart.configmap\" (list . \"mychart.configmap\") -}}\n{{- define \"mychart.configmap\" -}}\ndata:\n  myvalue: \"Hello World\"\n{{- end -}}\n```\n\nYou can see that it simplifies the work we have to do by inheriting the common\nConfigMap definition, which adds the standard properties for a ConfigMap. In our\ntemplate we add the configuration, in this case the data key `myvalue` and its\nvalue. The configuration overrides the empty resource of the common ConfigMap.\nThis is feasible because of the helper function `mylibchart.util.merge` we\nmentioned in the previous section.\n\nTo be able to use the common code, we need to add `mylibchart` as a dependency.\nAdd the following to the end of the file `mychart\/Chart.yaml`:\n\n```yaml\n# My common code in my library chart\ndependencies:\n- name: mylibchart\n  version: 0.1.0\n  repository: file:\/\/..\/mylibchart\n```\n\nThis includes the library chart as a dynamic dependency from the filesystem,\nwhich is at the same parent path as our application chart. As we are including\nthe library chart as a dynamic dependency, we need to run `helm dependency\nupdate`. It will copy the library chart into your `charts\/` directory.\n\n```console\n$ helm dependency update mychart\/\nHang tight while we grab the latest from your chart repositories...\n...Successfully got an update from the \"stable\" chart repository\nUpdate Complete. 
\u2388Happy Helming!\u2388\nSaving 1 charts\nDeleting outdated charts\n```\n\nWe are now ready to deploy our chart. Before installing, it is worth checking\nthe rendered template first.\n\n```console\n$ helm install mydemo mychart\/ --debug --dry-run\ninstall.go:159: [debug] Original chart version: \"\"\ninstall.go:176: [debug] CHART PATH: \/root\/test\/helm-charts\/mychart\n\nNAME: mydemo\nLAST DEPLOYED: Tue Mar  3 17:48:47 2020\nNAMESPACE: default\nSTATUS: pending-install\nREVISION: 1\nTEST SUITE: None\nUSER-SUPPLIED VALUES:\n{}\n\nCOMPUTED VALUES:\naffinity: {}\nfullnameOverride: \"\"\nimage:\n  pullPolicy: IfNotPresent\n  repository: nginx\nimagePullSecrets: []\ningress:\n  annotations: {}\n  enabled: false\n  hosts:\n  - host: chart-example.local\n    paths: []\n  tls: []\nmylibchart:\n  global: {}\nnameOverride: \"\"\nnodeSelector: {}\npodSecurityContext: {}\nreplicaCount: 1\nresources: {}\nsecurityContext: {}\nservice:\n  port: 80\n  type: ClusterIP\nserviceAccount:\n  annotations: {}\n  create: true\n  name: null\ntolerations: []\n\nHOOKS:\nMANIFEST:\n---\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\ndata:\n  myvalue: Hello World\nkind: ConfigMap\nmetadata:\n  labels:\n    app: mychart\n    chart: mychart-0.1.0\n    release: mydemo\n  name: mychart-mydemo\n```\n\nThis looks like the ConfigMap we want with data override of `myvalue: Hello\nWorld`. 
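The data override visible in the rendered manifest follows from the merge semantics of `mylibchart.util.merge`: keys supplied by the application chart's template win, and anything missing is filled in from the library chart's base template. A rough Python model of that behavior (plain dicts standing in for rendered YAML; the `merge` helper here is an illustration, not Helm's implementation):

```python
# Rough model of the library chart's merge: the override template (from the
# application chart) wins; the base template (from the library chart) fills
# in any keys the override did not set. Nested maps are merged recursively.

def merge(override, base):
    out = dict(base)  # start from the base template's keys
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(base.get(key), dict):
            out[key] = merge(val, base[key])  # recurse into nested maps
        else:
            out[key] = val                    # override replaces base value
    return out

base = {  # what mylibchart.configmap.tpl renders to
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "mychart-mydemo"},
    "data": {},
}
override = {  # what mychart.configmap renders to
    "data": {"myvalue": "Hello World"},
}

print(merge(override, base)["data"])  # {'myvalue': 'Hello World'}
```

The empty `data: {}` from the library chart is replaced by the application chart's `data` map, while `apiVersion`, `kind`, and `metadata` are inherited untouched, which is exactly what the `--dry-run` output above shows.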
Let's install it:\n\n```console\n$ helm install mydemo mychart\/\nNAME: mydemo\nLAST DEPLOYED: Tue Mar  3 17:52:40 2020\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\n```\n\nWe can retrieve the release and see that the actual template was loaded.\n\n```console\n$ helm get manifest mydemo\n---\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\ndata:\n  myvalue: Hello World\nkind: ConfigMap\nmetadata:\n  labels:\n    app: mychart\n    chart: mychart-0.1.0\n    release: mydemo\n  name: mychart-mydemo\n```\n\n## Library Chart Benefits\n\nBecause they cannot act as standalone charts, library charts can leverage the following functionality:\n- The `.Files` object references the file paths on the parent chart, rather than the path local to the library chart\n- The `.Values` object is the same as the parent chart's, in contrast to application [subcharts](https:\/\/helm.sh\/docs\/chart_template_guide\/subcharts_and_globals\/), which receive the section of values configured under their header in the parent.\n\n## The Common Helm Helper Chart\n\n> **Note:** The Common Helm Helper Chart repo on GitHub is no longer actively maintained, and the repo has been deprecated and archived.\n\nThis [chart](https:\/\/github.com\/helm\/charts\/tree\/master\/incubator\/common) was\nthe original pattern for common charts. It provides utilities that reflect best\npractices of Kubernetes chart development. Best of all, it can be used off the\nbat when developing your charts, giving you handy shared code.\n\nHere is a quick way to use it. For more details, have a look at the\n[README](https:\/\/github.com\/helm\/charts\/blob\/master\/incubator\/common\/README.md).\n\nCreate a scaffold chart again:\n\n```console\n$ helm create demo\nCreating demo\n```\n\nLet's use the common code from the helper chart. 
First, edit the deployment template\n`demo\/templates\/deployment.yaml` as follows:\n\n```yaml\n{{- template \"common.deployment\" (list . \"demo.deployment\") -}}\n{{- define \"demo.deployment\" -}}\n## Define overrides for your Deployment resource here, e.g.\napiVersion: apps\/v1\nspec:\n  replicas: {{ .Values.replicaCount }}\n  selector:\n    matchLabels:\n      {{- include \"demo.selectorLabels\" . | nindent 6 }}\n  template:\n    metadata:\n      labels:\n        {{- include \"demo.selectorLabels\" . | nindent 8 }}\n{{- end -}}\n```\n\nAnd now the service file, `demo\/templates\/service.yaml`, as follows:\n\n```yaml\n{{- template \"common.service\" (list . \"demo.service\") -}}\n{{- define \"demo.service\" -}}\n## Define overrides for your Service resource here, e.g.\n# metadata:\n#   labels:\n#     custom: label\n# spec:\n#   ports:\n#   - port: 8080\n{{- end -}}\n```\n\nThese templates show how inheriting the common code from the helper chart\nsimplifies your coding down to your configuration or customization of the\nresources.\n\nTo be able to use the common code, we need to add `common` as a dependency. Add\nthe following to the end of the file `demo\/Chart.yaml`:\n\n```yaml\ndependencies:\n- name: common\n  version: \"^0.0.5\"\n  repository: \"https:\/\/charts.helm.sh\/incubator\/\"\n```\n\nNote: You will need to add the `incubator` repo to the Helm repository list\n(`helm repo add`).\n\nAs we are including the chart as a dynamic dependency, we need to run `helm\ndependency update`. 
It will copy the helper chart into your `charts\/` directory.\n\nAs helper chart is using some Helm 2 constructs, you will need to add the\nfollowing to `demo\/values.yaml` to enable the `nginx` image to be loaded as this\nwas updated in Helm 3 scaffold chart:\n\n```yaml\nimage:\n  tag: 1.16.0\n```\n\nYou can test that the chart templates are correct prior to deploying using the `helm lint` and `helm template` commands.\n\nIf it's good to go, deploy away using `helm install`!\n","site":"helm","answers_cleaned":"    title   Library Charts  description   Explains library charts and examples of usage  aliases    docs library charts    weight  4      A library chart is a type of  Helm chart    that defines chart primitives or definitions which can be shared by Helm templates in other charts  This allows users to share snippets of code that can be re used across charts  avoiding repetition and keeping charts  DRY  https   en wikipedia org wiki Don 27t repeat yourself    The library chart was introduced in Helm 3 to formally recognize common or helper charts that have been used by chart maintainers since Helm 2  By including it as a chart type  it provides    A means to explicitly distinguish between common and application charts   Logic to prevent installation of a common chart   No rendering of templates in a common chart which may contain release   artifacts   Allow for dependent charts to use the importer s context  A chart maintainer can define a common chart as a library chart and now be confident that Helm will handle the chart in a standard consistent fashion  It also means that definitions in an application chart can be shared by changing the chart type      Create a Simple Library Chart  As mentioned previously  a library chart is a type of  Helm chart     This means that you can start off by creating a scaffold chart      console   helm create mylibchart Creating mylibchart      You will first remove all the files in  templates  directory as we will create our 
own templates definitions in this example      console   rm  rf mylibchart templates        The values file will not be required either      console   rm  f mylibchart values yaml      Before we jump into creating common code  lets do a quick review of some relevant Helm concepts  A  named template     sometimes called a partial or a subtemplate  is simply a template defined inside of a file  and given a name   In the  templates   directory  any file that begins with an underscore    is not expected to output a Kubernetes manifest file  So by convention  helper templates and partials are placed in a     tpl  or     yaml  files   In this example  we will code a common ConfigMap which creates an empty ConfigMap resource  We will define the common ConfigMap in file  mylibchart templates  configmap yaml  as follows      yaml  apiVersion  v1 kind  ConfigMap metadata    name   data              The ConfigMap construct is defined in named template  mylibchart configmap tpl   It is a simple ConfigMap with an empty resource   data   Within this file there is another named template called  mylibchart configmap   This named template includes another named template  mylibchart util merge  which will take 2 named templates as arguments  the template calling  mylibchart configmap  and  mylibchart configmap tpl    The helper function  mylibchart util merge  is a named template in  mylibchart templates  util yaml   It is a handy util from  The Common Helm Helper Chart   the common helm helper chart  because it merges the 2 templates and overrides any common parts in both      yaml             This is important when a chart wants to use common code that it needs to customize with its configuration   Finally  lets change the chart type to  library   This requires editing  mylibchart Chart yaml  as follows      yaml apiVersion  v2 name  mylibchart description  A Helm chart for Kubernetes    A chart can be either an  application  or a  library  chart      Application charts are a 
collection of templates that can be packaged into versioned archives   to be deployed      Library charts provide useful utilities or functions for the chart developer  They re included as   a dependency of application charts to inject those utilities and functions into the rendering   pipeline  Library charts do not define any templates and therefore cannot be deployed    type  application type  library    This is the chart version  This version number should be incremented each time you make changes   to the chart and its templates  including the app version  version  0 1 0    This is the version number of the application being deployed  This version number should be   incremented each time you make changes to the application and it is recommended to use it with quotes  appVersion   1 16 0       The library chart is now ready to be shared and its ConfigMap definition to be re used   Before moving on  it is worth checking if Helm recognizes the chart as a library chart      console   helm install mylibchart mylibchart  Error  library charts are not installable         Use the Simple Library Chart  It is time to use the library chart  This means creating a scaffold chart again      console   helm create mychart Creating mychart      Lets clean out the template files again as we want to create a ConfigMap only      console   rm  rf mychart templates        When we want to create a simple ConfigMap in a Helm template  it could look similar to the following      yaml apiVersion  v1 kind  ConfigMap metadata    name   data    myvalue   Hello World       We are however going to re use the common code already created in  mylibchart   The ConfigMap can be created in the file  mychart templates configmap yaml  as follows      yaml   data    myvalue   Hello World        You can see that it simplifies the work we have to do by inheriting the common ConfigMap definition which adds standard properties for ConfigMap  In our template we add the configuration  in this case the 
data key `myvalue` and its value. The configuration overrides the empty resource of the common ConfigMap. This is feasible because of the helper function `mylibchart.util.merge` we mentioned in the previous section.\n\nTo be able to use the common code, we need to add `mylibchart` as a dependency. Add the following to the end of the file `mychart\/Chart.yaml`:\n\n```yaml\n# My common code in my library chart\ndependencies:\n- name: mylibchart\n  version: 0.1.0\n  repository: file:\/\/..\/mylibchart\n```\n\nThis includes the library chart as a dynamic dependency from the filesystem, which is at the same parent path as our application chart. As we are including the library chart as a dynamic dependency, we need to run `helm dependency update`. It will copy the library chart into your `charts\/` directory.\n\n```console\n$ helm dependency update mychart\nHang tight while we grab the latest from your chart repositories...\n...Successfully got an update from the \"stable\" chart repository\nUpdate Complete. Happy Helming!\nSaving 1 charts\nDeleting outdated charts\n```\n\nWe are now ready to deploy our chart. Before installing, it is worth checking the rendered template first.\n\n```console\n$ helm install mydemo mychart --debug --dry-run\ninstall.go:159: [debug] Original chart version: \"\"\ninstall.go:176: [debug] CHART PATH: \/root\/test\/helm-charts\/mychart\n\nNAME: mydemo\nLAST DEPLOYED: Tue Mar  3 17:48:47 2020\nNAMESPACE: default\nSTATUS: pending-install\nREVISION: 1\nTEST SUITE: None\nUSER-SUPPLIED VALUES:\n{}\n\nCOMPUTED VALUES:\naffinity: {}\nfullnameOverride: \"\"\nimage:\n  pullPolicy: IfNotPresent\n  repository: nginx\nimagePullSecrets: []\ningress:\n  annotations: {}\n  enabled: false\n  hosts:\n  - host: chart-example.local\n    paths: []\n  tls: []\nmylibchart:\n  global: {}\nnameOverride: \"\"\nnodeSelector: {}\npodSecurityContext: {}\nreplicaCount: 1\nresources: {}\nsecurityContext: {}\nservice:\n  port: 80\n  type: ClusterIP\nserviceAccount:\n  annotations: {}\n  create: true\n  name: null\ntolerations: []\n\nHOOKS:\nMANIFEST:\n---\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\ndata:\n  myvalue: Hello World\nkind: ConfigMap\nmetadata:\n  labels:\n    app: mychart\n    chart: mychart-0.1.0\n    release: mydemo\n  name: mychart-mydemo\n```\n\nThis looks like the ConfigMap we want, with the data override of `myvalue: Hello World`. Let's install it:\n\n```console\n$ helm install mydemo mychart\nNAME: mydemo\nLAST DEPLOYED: Tue Mar  3 17:52:40 2020\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\n```\n\nWe can retrieve the release and see that the actual template was loaded.\n\n```console\n$ helm get manifest mydemo\n---\n# Source: mychart\/templates\/configmap.yaml\napiVersion: v1\ndata:\n  myvalue: Hello World\nkind: ConfigMap\nmetadata:\n  labels:\n    app: mychart\n    chart: mychart-0.1.0\n    release: mydemo\n  name: mychart-mydemo\n```\n\n## Library Chart Benefits\n\nBecause of their inability to act as standalone charts, library charts can leverage the following functionality:\n\n- The `.Files` object references the file paths on the parent chart, rather than the path local to the library chart\n- The `.Values` object is the same as the parent chart, in contrast to application subcharts, which receive the section of values configured under their header in the parent\n\n## The Common Helm Helper Chart\n\n```markdown\nNote: The Common Helm Helper Chart repo on GitHub is no longer actively maintained, and the repo has been deprecated and archived.\n```\n\nThis [chart](https:\/\/github.com\/helm\/charts\/tree\/master\/incubator\/common) was the original pattern for common charts. It provides utilities that reflect best practices of Kubernetes chart development. Best of all, it can be used off the bat when developing your charts to give you handy shared code.\n\nHere is a quick way to use it. For more details, have a look at the [README](https:\/\/github.com\/helm\/charts\/blob\/master\/incubator\/common\/README.md).\n\nCreate a scaffold chart again:\n\n```console\n$ helm create demo\nCreating demo\n```\n\nLet's use the common code from the helper chart. First, edit the deployment `demo\/templates\/deployment.yaml` as follows:\n\n```yaml\n## Define overrides for your Deployment resource here, e.g.\napiVersion: apps\/v1\nspec:\n  replicas: {{ .Values.replicaCount }}\n  selector:\n    matchLabels:\n      {{- include \"demo.selectorLabels\" . | nindent 6 }}\n  template:\n    metadata:\n      labels:\n        {{- include \"demo.selectorLabels\" . | nindent 8 }}\n```\n\nAnd now the service file `demo\/templates\/service.yaml` as follows:\n\n```yaml\n## Define overrides for your Service resource here, e.g.\nmetadata:\n  labels:\n    custom: label\nspec:\n  ports:\n  - port: 8080\n```\n\nThese templates show how inheriting the common code from the helper chart simplifies your coding down to the configuration or customization of the resources.\n\nTo be able to use the common code, we need to add `common` as a dependency. Add the following to the end of the file `demo\/Chart.yaml`:\n\n```yaml\ndependencies:\n- name: common\n  version: \"^0.0.5\"\n  repository: \"https:\/\/charts.helm.sh\/incubator\/\"\n```\n\nNote: You will need to add the `incubator` repo to the Helm repository list (`helm repo add`).\n\nAs we are including the chart as a dynamic dependency, we need to run `helm dependency update`. It will copy the helper chart into your `charts\/` directory.\n\nAs the helper chart is using some Helm 2 constructs, you will need to add the following to `demo\/values.yaml` to enable the `nginx` image to be loaded, as this was updated in the Helm 3 scaffold chart:\n\n```yaml\nimage:\n  tag: 1.16.0\n```\n\nYou can test that the chart templates are correct prior to deploying using the `helm lint` and `helm template` commands.\n\nIf it's good to go, deploy away using `helm install`!"}
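The value-merging behaviour described in the record above (the parent chart's rendered template overriding the library chart's empty ConfigMap skeleton, via `mylibchart.util.merge`) can be sketched as a recursive dictionary merge. This is only an illustration of the merge semantics, not the actual Helm template helper, and the dictionaries below are hypothetical stand-ins for the rendered resources:

```python
def merge(overrides: dict, base: dict) -> dict:
    """Recursively merge `overrides` over `base`; keys in `overrides` win."""
    result = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(value, result[key])  # descend into nested maps
        else:
            result[key] = value
    return result

# Stand-in for the library chart's empty ConfigMap skeleton (common code).
common_configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "RELEASE-NAME-mylibchart"},
    "data": {},
}

# Stand-in for the parent chart's override: its own name plus a data key.
override = {
    "metadata": {"name": "mychart-mydemo"},
    "data": {"myvalue": "Hello World"},
}

rendered = merge(override, common_configmap)
print(rendered["data"]["myvalue"])  # Hello World
```

Keys the parent chart does not set (here `apiVersion` and `kind`) fall through from the common skeleton, which is why the library chart's defaults survive the override.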
{"questions":"ingress nginx Do not move it without providing redirects https github com kubernetes ingress nginx blob main docs kubectl plugin md This file is referenced in code as NOTICE The ingress nginx kubectl plugin","answers":"<!--\n-----------------NOTICE------------------------\nThis file is referenced in code as\nhttps:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/docs\/kubectl-plugin.md\nDo not move it without providing redirects.\n-----------------------------------------------\n-->\n\n# The ingress-nginx kubectl plugin\n\n## Installation\n\nInstall [krew](https:\/\/github.com\/GoogleContainerTools\/krew), then run\n\n```console\nkubectl krew install ingress-nginx\n```\n\nto install the plugin. Then run\n\n```console\nkubectl ingress-nginx --help\n```\n\nto make sure the plugin is properly installed and to get a list of commands:\n\n```console\nkubectl ingress-nginx --help\nA kubectl plugin for inspecting your ingress-nginx deployments\n\nUsage:\n  ingress-nginx [command]\n\nAvailable Commands:\n  backends    Inspect the dynamic backend information of an ingress-nginx instance\n  certs       Output the certificate data stored in an ingress-nginx pod\n  conf        Inspect the generated nginx.conf\n  exec        Execute a command inside an ingress-nginx pod\n  general     Inspect the other dynamic ingress-nginx information\n  help        Help about any command\n  info        Show information about the ingress-nginx service\n  ingresses   Provide a short summary of all of the ingress definitions\n  lint        Inspect kubernetes resources for possible issues\n  logs        Get the kubernetes logs for an ingress-nginx pod\n  ssh         ssh into a running ingress-nginx pod\n\nFlags:\n      --as string                      Username to impersonate for the operation\n      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --cache-dir string               Default HTTP 
cache directory (default \"\/Users\/alexkursell\/.kube\/http-cache\")\n      --certificate-authority string   Path to a cert file for the certificate authority\n      --client-certificate string      Path to a client certificate file for TLS\n      --client-key string              Path to a client key file for TLS\n      --cluster string                 The name of the kubeconfig cluster to use\n      --context string                 The name of the kubeconfig context to use\n  -h, --help                           help for ingress-nginx\n      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.\n  -n, --namespace string               If present, the namespace scope for this CLI request\n      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n  -s, --server string                  The address and port of the Kubernetes API server\n      --token string                   Bearer token for authentication to the API server\n      --user string                    The name of the kubeconfig user to use\n\nUse \"ingress-nginx [command] --help\" for more information about a command.\n```\n\n## Common Flags\n\n- Every subcommand supports the basic `kubectl` configuration flags like `--namespace`, `--context`, `--client-key` and so on.\n- Subcommands that act on a particular `ingress-nginx` pod (`backends`, `certs`, `conf`, `exec`, `general`, `logs`, `ssh`), support the `--deployment <deployment>`, `--pod <pod>`, and `--container <container>` flags to select either a pod from a deployment with the given name, or a pod with the given name (and the given container name). 
The `--deployment` flag defaults to `ingress-nginx-controller`, and the `--container` flag defaults to `controller`.\n- Subcommands that inspect resources (`ingresses`, `lint`) support the `--all-namespaces` flag, which causes them to inspect resources in every namespace.\n\n## Subcommands\n\nNote that `backends`, `general`, `certs`, and `conf` require `ingress-nginx` version `0.23.0` or higher.\n\n### backends\n\nRun `kubectl ingress-nginx backends` to get a JSON array of the backends that an ingress-nginx controller currently knows about:\n\n```console\n$ kubectl ingress-nginx backends -n ingress-nginx\n[\n  {\n    \"name\": \"default-apple-service-5678\",\n    \"service\": {\n      \"metadata\": {\n        \"creationTimestamp\": null\n      },\n      \"spec\": {\n        \"ports\": [\n          {\n            \"protocol\": \"TCP\",\n            \"port\": 5678,\n            \"targetPort\": 5678\n          }\n        ],\n        \"selector\": {\n          \"app\": \"apple\"\n        },\n        \"clusterIP\": \"10.97.230.121\",\n        \"type\": \"ClusterIP\",\n        \"sessionAffinity\": \"None\"\n      },\n      \"status\": {\n        \"loadBalancer\": {}\n      }\n    },\n    \"port\": 0,\n    \"sslPassthrough\": false,\n    \"endpoints\": [\n      {\n        \"address\": \"10.1.3.86\",\n        \"port\": \"5678\"\n      }\n    ],\n    \"sessionAffinityConfig\": {\n      \"name\": \"\",\n      \"cookieSessionAffinity\": {\n        \"name\": \"\"\n      }\n    },\n    \"upstreamHashByConfig\": {\n      \"upstream-hash-by-subset-size\": 3\n    },\n    \"noServer\": false,\n    \"trafficShapingPolicy\": {\n      \"weight\": 0,\n      \"header\": \"\",\n      \"headerValue\": \"\",\n      \"cookie\": \"\"\n    }\n  },\n  {\n    \"name\": \"default-echo-service-8080\",\n    ...\n  },\n  {\n    \"name\": \"upstream-default-backend\",\n    ...\n  }\n]\n```\n\nAdd the `--list` option to show only the backend names. 
Add the `--backend <backend>` option to show only the backend with the given name.\n\n### certs\n\nUse `kubectl ingress-nginx certs --host <hostname>` to dump the SSL cert\/key information for a given host.\n\n**WARNING:** This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere.\n\n```console\n$ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n\n-----BEGIN RSA PRIVATE KEY-----\n<REDACTED! DO NOT SHARE THIS!>\n-----END RSA PRIVATE KEY-----\n```\n\n### conf\n\nUse `kubectl ingress-nginx conf` to dump the generated `nginx.conf` file. Add the `--host <hostname>` option to view only the server block for that host:\n\n```console\nkubectl ingress-nginx conf -n ingress-nginx --host testaddr.local\n\n\tserver {\n\t\tserver_name testaddr.local ;\n\n\t\tlisten 80;\n\n\t\tset $proxy_upstream_name \"-\";\n\t\tset $pass_access_scheme $scheme;\n\t\tset $pass_server_port $server_port;\n\t\tset $best_http_host $http_host;\n\t\tset $pass_port $pass_server_port;\n\n\t\tlocation \/ {\n\n\t\t\tset $namespace      \"\";\n\t\t\tset $ingress_name   \"\";\n\t\t\tset $service_name   \"\";\n\t\t\tset $service_port   \"0\";\n\t\t\tset $location_path  \"\/\";\n\n...\n```\n\n### exec\n\n`kubectl ingress-nginx exec` is exactly the same as `kubectl exec`, with the same command flags. 
It will automatically choose an `ingress-nginx` pod to run the command in.\n\n```console\n$ kubectl ingress-nginx exec -i -n ingress-nginx -- ls \/etc\/nginx\nfastcgi_params\ngeoip\nlua\nmime.types\nmodsecurity\nmodules\nnginx.conf\nopentracing.json\nopentelemetry.toml\nowasp-modsecurity-crs\ntemplate\n```\n\n### info\n\nShows the internal and external IP\/CNAMES for an `ingress-nginx` service.\n\n```console\n$ kubectl ingress-nginx info -n ingress-nginx\nService cluster IP address: 10.187.253.31\nLoadBalancer IP|CNAME: 35.123.123.123\n```\n\nUse the `--service <service>` flag if your `ingress-nginx` `LoadBalancer` service is not named `ingress-nginx`.\n\n### ingresses\n\n`kubectl ingress-nginx ingresses`, alternately `kubectl ingress-nginx ing`, shows a more detailed view of the ingress definitions in a namespace.\n\nCompare:\n\n```console\n$ kubectl get ingresses --all-namespaces\nNAMESPACE   NAME               HOSTS                            ADDRESS     PORTS   AGE\ndefault     example-ingress1   testaddr.local,testaddr2.local   localhost   80      5d\ndefault     test-ingress-2     *                                localhost   80      5d\n```\n\nvs.\n\n```console\n$ kubectl ingress-nginx ingresses --all-namespaces\nNAMESPACE   INGRESS NAME       HOST+PATH                        ADDRESSES   TLS   SERVICE         SERVICE PORT   ENDPOINTS\ndefault     example-ingress1   testaddr.local\/etameta           localhost   NO    pear-service    5678           5\ndefault     example-ingress1   testaddr2.local\/otherpath        localhost   NO    apple-service   5678           1\ndefault     example-ingress1   testaddr2.local\/otherotherpath   localhost   NO    pear-service    5678           5\ndefault     test-ingress-2     *                                localhost   NO    echo-service    8080           2\n```\n\n### lint\n\n`kubectl ingress-nginx lint` can check a namespace or entire cluster for potential configuration issues. 
This command is especially useful when upgrading between `ingress-nginx` versions.\n\n```console\n$ kubectl ingress-nginx lint --all-namespaces --verbose\nChecking ingresses...\n\u2717 anamespace\/this-nginx\n  - Contains the removed session-cookie-hash annotation.\n       Lint added for version 0.24.0\n       https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/3743\n\u2717 othernamespace\/ingress-definition-blah\n  - The rewrite-target annotation value does not reference a capture group\n      Lint added for version 0.22.0\n      https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/3174\n\nChecking deployments...\n\u2717 namespace2\/ingress-nginx-controller\n  - Uses removed config flag --sort-backends\n      Lint added for version 0.22.0\n      https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/3655\n  - Uses removed config flag --enable-dynamic-certificates\n      Lint added for version 0.24.0\n      https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/3808\n```\n\nTo show the lints added **only** for a particular `ingress-nginx` release, use the `--from-version` and `--to-version` flags:\n\n```console\n$ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0.24.0 --to-version 0.24.0\nChecking ingresses...\n\u2717 anamespace\/this-nginx\n  - Contains the removed session-cookie-hash annotation.\n       Lint added for version 0.24.0\n       https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/3743\n\nChecking deployments...\n\u2717 namespace2\/ingress-nginx-controller\n  - Uses removed config flag --enable-dynamic-certificates\n      Lint added for version 0.24.0\n      https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/3808\n```\n\n### logs\n\n`kubectl ingress-nginx logs` is almost the same as `kubectl logs`, with fewer flags. 
It will automatically choose an `ingress-nginx` pod to read logs from.\n\n```console\n$ kubectl ingress-nginx logs -n ingress-nginx\n-------------------------------------------------------------------------------\nNGINX Ingress controller\n  Release:    dev\n  Build:      git-48dc3a867\n  Repository: git@github.com:kubernetes\/ingress-nginx.git\n-------------------------------------------------------------------------------\n\nW0405 16:53:46.061589       7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)\nnginx version: nginx\/1.15.9\nW0405 16:53:46.070093       7 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.\nI0405 16:53:46.070499       7 main.go:205] Creating API client for https:\/\/10.96.0.1:443\nI0405 16:53:46.077784       7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux\/amd64\nI0405 16:53:46.183359       7 nginx.go:265] Starting NGINX Ingress controller\nI0405 16:53:46.193913       7 event.go:209] Event(v1.ObjectReference{Kind:\"ConfigMap\", Namespace:\"ingress-nginx\", Name:\"udp-services\", UID:\"82258915-563e-11e9-9c52-025000000001\", APIVersion:\"v1\", ResourceVersion:\"494\", FieldPath:\"\"}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx\/udp-services\n...\n```\n\n### ssh\n\n`kubectl ingress-nginx ssh` is exactly the same as `kubectl ingress-nginx exec -it -- \/bin\/bash`. 
Use it when you want to quickly be dropped into a shell inside a running `ingress-nginx` container.\n\n```console\n$ kubectl ingress-nginx ssh -n ingress-nginx\nwww-data@ingress-nginx-controller-7cbf77c976-wx5pn:\/etc\/nginx$\n```","site":"ingress nginx"}
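Since `kubectl ingress-nginx backends` emits plain JSON, its `--list` and `--backend <backend>` filters are easy to reproduce in a script when post-processing captured output. A minimal sketch, using a hand-written sample shaped like the documented output (the records below are illustrative, not captured from a live cluster):

```python
import json

# Sample shaped like `kubectl ingress-nginx backends` output from the docs;
# only the fields used below are included.
backends_json = """
[
  {"name": "default-apple-service-5678",
   "endpoints": [{"address": "10.1.3.86", "port": "5678"}]},
  {"name": "default-echo-service-8080", "endpoints": []},
  {"name": "upstream-default-backend", "endpoints": []}
]
"""

backends = json.loads(backends_json)

# Equivalent of `--list`: backend names only.
names = [b["name"] for b in backends]
print(names)

# Equivalent of `--backend <backend>`: a single entry selected by name.
apple = next(b for b in backends if b["name"] == "default-apple-service-5678")
print(apple["endpoints"][0]["address"])  # 10.1.3.86
```

In practice the same filtering can be done with `jq`; the point is only that the plugin's output is machine-readable JSON, so either tool works.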
{"questions":"ingress nginx Do not move it without providing redirects This file is referenced in code as NOTICE https github com kubernetes ingress nginx blob main docs troubleshooting md Troubleshooting","answers":"<!--\n-----------------NOTICE------------------------\nThis file is referenced in code as\nhttps:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/docs\/troubleshooting.md\nDo not move it without providing redirects.\n-----------------------------------------------\n-->\n\n# Troubleshooting\n\n## Ingress-Controller Logs and Events\n\nThere are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting\nmethods to obtain more information.\n\n### Check the Ingress Resource Events\n\n```console\n$ kubectl get ing -n <namespace-of-ingress-resource>\nNAME           HOSTS      ADDRESS     PORTS     AGE\ncafe-ingress   cafe.com   10.0.2.15   80        25s\n\n$ kubectl describe ing <ingress-resource-name> -n <namespace-of-ingress-resource>\nName:             cafe-ingress\nNamespace:        default\nAddress:          10.0.2.15\nDefault backend:  default-http-backend:80 (172.17.0.5:8080)\nRules:\n  Host      Path  Backends\n  ----      ----  --------\n  cafe.com\n            \/tea      tea-svc:80 (<none>)\n            \/coffee   coffee-svc:80 (<none>)\nAnnotations:\n  kubectl.kubernetes.io\/last-applied-configuration:  {\"apiVersion\":\"networking.k8s.io\/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"\/apis\/networking\/v1\/namespaces\/default\/ingresses\/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"\/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"\/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}}\n\nEvents:\n  Type    Reason  Age   From                      
Message\n  ----    ------  ----  ----                      -------\n  Normal  CREATE  1m    ingress-nginx-controller  Ingress default\/cafe-ingress\n  Normal  UPDATE  58s   ingress-nginx-controller  Ingress default\/cafe-ingress\n```\n\n### Check the Ingress Controller Logs\n\n```console\n$ kubectl get pods -n <namespace-of-ingress-controller>\nNAME                                        READY     STATUS    RESTARTS   AGE\ningress-nginx-controller-67956bf89d-fv58j   1\/1       Running   0          1m\n\n$ kubectl logs -n <namespace> ingress-nginx-controller-67956bf89d-fv58j\n-------------------------------------------------------------------------------\nNGINX Ingress controller\n  Release:    0.14.0\n  Build:      git-734361d\n  Repository: https:\/\/github.com\/kubernetes\/ingress-nginx\n-------------------------------------------------------------------------------\n....\n```\n\n### Check the Nginx Configuration\n\n```console\n$ kubectl get pods -n <namespace-of-ingress-controller>\nNAME                                        READY     STATUS    RESTARTS   AGE\ningress-nginx-controller-67956bf89d-fv58j   1\/1       Running   0          1m\n\n$ kubectl exec -it -n <namespace-of-ingress-controller> ingress-nginx-controller-67956bf89d-fv58j -- cat \/etc\/nginx\/nginx.conf\ndaemon off;\nworker_processes 2;\npid \/run\/nginx.pid;\nworker_rlimit_nofile 523264;\nworker_shutdown_timeout 240s;\nevents {\n\tmulti_accept        on;\n\tworker_connections  16384;\n\tuse                 epoll;\n}\nhttp {\n....\n```\n\n### Check if used Services Exist\n\n```console\n$ kubectl get svc --all-namespaces\nNAMESPACE     NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE\ndefault       coffee-svc             ClusterIP   10.106.154.35    <none>        80\/TCP          18m\ndefault       kubernetes             ClusterIP   10.96.0.1        <none>        443\/TCP         30m\ndefault       tea-svc                ClusterIP   10.104.172.12    <none>      
  80\/TCP          18m\nkube-system   default-http-backend   NodePort    10.108.189.236   <none>        80:30001\/TCP    30m\nkube-system   kube-dns               ClusterIP   10.96.0.10       <none>        53\/UDP,53\/TCP   30m\nkube-system   kubernetes-dashboard   NodePort    10.103.128.17    <none>        80:30000\/TCP    30m\n```\n\n## Debug Logging\n\nUsing the flag `--v=XX` it is possible to increase the level of logging. This is performed by editing\nthe deployment.\n\n```console\n$ kubectl get deploy -n <namespace-of-ingress-controller>\nNAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\ndefault-http-backend       1         1         1            1           35m\ningress-nginx-controller   1         1         1            1           35m\n\n$ kubectl edit deploy -n <namespace-of-ingress-controller> ingress-nginx-controller\n# Add --v=X to \"- args\", where X is an integer\n```\n\n- `--v=2` shows details using `diff` about the changes in the configuration in nginx\n- `--v=3` shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format\n- `--v=5` configures NGINX in [debug mode](https:\/\/nginx.org\/en\/docs\/debugging_log.html)\n\n## Authentication to the Kubernetes API Server\n\nA number of components are involved in the authentication process and the first step is to narrow\ndown the source of the problem, namely whether it is a problem with service authentication or\nwith the kubeconfig file.\n\nBoth authentications must work:\n\n```\n+-------------+   service          +------------+\n|             |   authentication   |            |\n+  apiserver  +<-------------------+  ingress   |\n|             |                    | controller |\n+-------------+                    +------------+\n```\n\n**Service authentication**\n\nThe Ingress controller needs information from apiserver. 
Therefore, authentication is required, which can be achieved in a couple of ways:\n\n* _Service Account:_ This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details.\n\n* _Kubeconfig file:_ In some Kubernetes environments service accounts are not available. In this case, manual configuration is required. The Ingress controller binary can be started with the `--kubeconfig` flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the `--kubeconfig` flag does not require the flag `--apiserver-host`.\n   The format of the file is identical to `~\/.kube\/config`, which is used by kubectl to connect to the API server. See 'kubeconfig' section for details.\n\n* _Using the flag `--apiserver-host`:_ With this flag, e.g. `--apiserver-host=http:\/\/localhost:8080`, it is possible to specify an unsecured API server or to reach a remote Kubernetes cluster using [kubectl proxy](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#proxy).\n   Please do not use this approach in production.\n\nIn the diagram below you can see the full authentication flow with all options, starting with the browser\non the lower left-hand side.\n\n```\nKubernetes                                                  Workstation\n+---------------------------------------------------+     +------------------+\n|                                                   |     |                  |\n|  +-----------+   apiserver        +------------+  |     |  +------------+  |\n|  |           |   proxy            |            |  |     |  |            |  |\n|  | apiserver |                    |  ingress   |  |     |  |  ingress   |  |\n|  |           |                    | controller |  |     |  | controller |  |\n|  |           |                    |            |  |     |  |            |  |\n|  |           |               
     |            |  |     |  |            |  |\n|  |           |  service account\/  |            |  |     |  |            |  |\n|  |           |  kubeconfig        |            |  |     |  |            |  |\n|  |           +<-------------------+            |  |     |  |            |  |\n|  |           |                    |            |  |     |  |            |  |\n|  +------+----+      kubeconfig    +------+-----+  |     |  +------+-----+  |\n|         |<--------------------------------------------------------|        |\n|                                                   |     |                  |\n+---------------------------------------------------+     +------------------+\n```\n\n### Service Account\n\nIf using a service account to connect to the API server, the ingress-controller expects the file\n`\/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token` to be present. It provides a secret\ntoken that is required to authenticate with the API server.\n\nVerify with the following commands:\n\n```console\n# start a container that contains curl\n$ kubectl run -it --rm test --image=curlimages\/curl --restart=Never -- \/bin\/sh\n\n# check if secret exists\n\/ $ ls \/var\/run\/secrets\/kubernetes.io\/serviceaccount\/\nca.crt     namespace  token\n\/ $\n\n# check base connectivity from cluster inside\n\/ $ curl -k https:\/\/kubernetes.default.svc.cluster.local\n{\n  \"kind\": \"Status\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {\n\n  },\n  \"status\": \"Failure\",\n  \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"\/\\\"\",\n  \"reason\": \"Forbidden\",\n  \"details\": {\n\n  },\n  \"code\": 403\n}\/ $\n\n# connect using tokens\n}\/ $ curl --cacert \/var\/run\/secrets\/kubernetes.io\/serviceaccount\/ca.crt -H  \"Authorization: Bearer $(cat \/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token)\" https:\/\/kubernetes.default.svc.cluster.local\n&& echo\n{\n  \"paths\": [\n    \"\/api\",\n    \"\/api\/v1\",\n    \"\/apis\",\n    
\"\/apis\/\",\n    ... TRUNCATED\n    \"\/readyz\/shutdown\",\n    \"\/version\"\n  ]\n}\n\/ $\n\n# when you type `exit` or `^D` the test pod will be deleted.\n```\n\nIf it is not working, there are two possible reasons:\n\n1. The contents of the tokens are invalid. Find the secret name with `kubectl get secrets | grep service-account` and\n   delete it with `kubectl delete secret <name>`. It will automatically be recreated.\n\n2. You have a non-standard Kubernetes installation and the file containing the token may not be present.\n   The API server will mount a volume containing this file, but only if the API server is configured to use\n   the ServiceAccount admission controller.\n   If you experience this error, verify that your API server is using the ServiceAccount admission controller.\n   If you are configuring the API server by hand, you can set this with the `--admission-control` parameter.\n   > Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers.\n\nMore information:\n\n- [User Guide: Service Accounts](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/)\n- [Cluster Administrator Guide: Managing Service Accounts](http:\/\/kubernetes.io\/docs\/admin\/service-accounts-admin\/)\n\n## Kube-Config\n\nIf you want to use a kubeconfig file for authentication, follow the [deploy procedure](deploy\/index.md) and\nadd the flag `--kubeconfig=\/etc\/kubernetes\/kubeconfig.yaml` to the args section of the deployment.\n\n## Using GDB with Nginx\n\n[Gdb](https:\/\/www.gnu.org\/software\/gdb\/) can be used with nginx to perform a configuration\ndump. This allows us to see which configuration is being used, as well as older configurations.\n\nNote: The below is based on the nginx [documentation](https:\/\/docs.nginx.com\/nginx\/admin-guide\/monitoring\/debugging\/#dumping-nginx-configuration-from-a-running-process).\n\n1. 
SSH into the worker\n\n    ```console\n    $ ssh user@workerIP\n    ```\n\n2. Obtain the Docker Container Running nginx\n\n    ```console\n    $ docker ps | grep ingress-nginx-controller\n    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES\n    d9e1d243156a        registry.k8s.io\/ingress-nginx\/controller   \"\/usr\/bin\/dumb-init \u2026\"   19 minutes ago      Up 19 minutes                                                                            k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0\n    ```\n\n3. Exec into the container\n\n    ```console\n    $ docker exec -it --user=0 --privileged d9e1d243156a bash\n    ```\n\n4. Make sure nginx was built with `--with-debug`\n\n    ```console\n    $ nginx -V 2>&1 | grep -- '--with-debug'\n    ```\n\n5. Get the list of processes running in the container\n\n    ```console\n    $ ps -ef\n    UID        PID  PPID  C STIME TTY          TIME CMD\n    root         1     0  0 20:23 ?        00:00:00 \/usr\/bin\/dumb-init \/nginx-ingres\n    root         5     1  0 20:23 ?        00:00:05 \/ingress-nginx-controller --defa\n    root        21     5  0 20:23 ?        00:00:00 nginx: master process \/usr\/sbin\/\n    nobody     106    21  0 20:23 ?        00:00:00 nginx: worker process\n    nobody     107    21  0 20:23 ?        00:00:00 nginx: worker process\n    root       172     0  0 20:43 pts\/0    00:00:00 bash\n    ```\n\n6. Attach gdb to the nginx master process\n\n    ```console\n    $ gdb -p 21\n    ....\n    Attaching to process 21\n    Reading symbols from \/usr\/sbin\/nginx...done.\n    ....\n    (gdb)\n    ```\n\n7. 
Copy and paste the following:\n\n    ```console\n    set $cd = ngx_cycle->config_dump\n    set $nelts = $cd.nelts\n    set $elts = (ngx_conf_dump_t*)($cd.elts)\n    while ($nelts-- > 0)\n    set $name = $elts[$nelts]->name.data\n    printf \"Dumping %s to nginx_conf.txt\\n\", $name\n    append memory nginx_conf.txt \\\n            $elts[$nelts]->buffer.start $elts[$nelts]->buffer.end\n    end\n    ```\n\n8. Quit GDB by pressing CTRL+D\n\n9. Open nginx_conf.txt\n\n    ```console\n    cat nginx_conf.txt\n    ```\n\n## Image-related issues faced on NGINX 4.2.5 or other versions (Helm chart versions)\n\n1. In case you face the below error while installing NGINX using the Helm chart (either via helm commands or the helm_release Terraform provider):\n```\nWarning  Failed     5m5s (x4 over 6m34s)   kubelet            Failed to pull image \"registry.k8s.io\/ingress-nginx\/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io\/ingress-nginx\/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to resolve reference \"registry.k8s.io\/ingress-nginx\/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to do request: Head \"https:\/\/eu.gcr.io\/v2\/k8s-artifacts-prod\/ingress-nginx\/kube-webhook-certgen\/manifests\/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": EOF\n```\n   Then please follow the below steps.\n\n2. During troubleshooting, you can also execute the below commands to test connectivity from your local machine to the repositories:\n\n      a. 
curl registry.k8s.io\/ingress-nginx\/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > \/dev\/null\n      ```\n      (\u2388 |myprompt)\u279c  ~ curl registry.k8s.io\/ingress-nginx\/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > \/dev\/null\n                          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                                          Dload  Upload   Total   Spent    Left  Speed\n                          0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\n       (\u2388 |myprompt)\u279c  ~\n      ```\n      b. curl -I https:\/\/eu.gcr.io\/v2\/k8s-artifacts-prod\/ingress-nginx\/kube-webhook-certgen\/manifests\/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\n      ```\n      (\u2388 |myprompt)\u279c  ~ curl -I https:\/\/eu.gcr.io\/v2\/k8s-artifacts-prod\/ingress-nginx\/kube-webhook-certgen\/manifests\/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\n                                          HTTP\/2 200\n                                          docker-distribution-api-version: registry\/2.0\n                                          content-type: application\/vnd.docker.distribution.manifest.list.v2+json\n                                          docker-content-digest: sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\n                                          content-length: 1384\n                                          date: Wed, 28 Sep 2022 16:46:28 GMT\n                                          server: Docker Registry\n                                          x-xss-protection: 0\n                                          x-frame-options: SAMEORIGIN\n                                          alt-svc: h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; 
ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"\n\n        (\u2388 |myprompt)\u279c  ~\n      ```\n   The registry serves these images through redirects, so the redirect target domains must also be reachable for image pulls to succeed.\n\n3. The recommended solution is to whitelist the below image repositories:\n     ```\n     *.appspot.com\n     *.k8s.io\n     *.pkg.dev\n     *.gcr.io\n     ```\n     More details about the above repos:\n     a. *.k8s.io -> To ensure you can pull any images from registry.k8s.io\n     b. *.gcr.io -> GCP services are used for image hosting. This is among the domains suggested by GCP to allow so that users can pull images from their container registry services.\n     c. *.appspot.com -> This is a Google domain, part of the domains used for GCR.\n\n## Unable to listen on port (80\/443)\n\nOne possible reason for this error is lack of permission to bind to the port.  Ports 80, 443, and any other port < 1024 are Linux privileged ports which historically could only be bound by root.  The ingress-nginx-controller uses the CAP_NET_BIND_SERVICE [linux capability](https:\/\/man7.org\/linux\/man-pages\/man7\/capabilities.7.html) to allow binding these ports as a normal user (www-data \/ 101).  This involves two components:\n1. In the image, the \/nginx-ingress-controller file has the cap_net_bind_service capability added (e.g. via [setcap](https:\/\/man7.org\/linux\/man-pages\/man8\/setcap.8.html))\n2. The NET_BIND_SERVICE capability is added to the container in the containerSecurityContext of the deployment.\n\nIf you encounter this on one or some node(s) and not on others, try to purge and pull a fresh copy of the image on the affected node(s), in case the underlying layers have been corrupted and the executable has lost the capability.\n\n### Create a test pod\n\nThe \/nginx-ingress-controller process exits\/crashes when encountering this error, making it difficult to troubleshoot what is happening inside the container.  
To get around this, start an equivalent container running \"sleep 3600\", and exec into it for further troubleshooting.  For example:\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: ingress-nginx-sleep\n  namespace: default\n  labels:\n    app: nginx\nspec:\n  containers:\n    - name: nginx\n      image: ##_CONTROLLER_IMAGE_##\n      resources:\n        requests:\n          memory: \"512Mi\"\n          cpu: \"500m\"\n        limits:\n          memory: \"1Gi\"\n          cpu: \"1\"\n      command: [\"sleep\"]\n      args: [\"3600\"]\n      ports:\n      - containerPort: 80\n        name: http\n        protocol: TCP\n      - containerPort: 443\n        name: https\n        protocol: TCP\n      securityContext:\n        allowPrivilegeEscalation: true\n        capabilities:\n          add:\n          - NET_BIND_SERVICE\n          drop:\n          - ALL\n        runAsUser: 101\n  restartPolicy: Never\n  nodeSelector:\n    kubernetes.io\/hostname: ##_NODE_NAME_##\n  tolerations:\n  - key: \"node.kubernetes.io\/unschedulable\"\n    operator: \"Exists\"\n    effect: NoSchedule\n```\n* update the namespace if applicable\/desired\n* replace `##_NODE_NAME_##` with the problematic node (or remove nodeSelector section if problem is not confined to one node)\n* replace `##_CONTROLLER_IMAGE_##` with the same image as in use by your ingress-nginx deployment\n* confirm the securityContext section matches what is in place for ingress-nginx-controller pods in your cluster\n\nApply the YAML and open a shell into the pod.\nTry to manually run the controller process:\n```console\n$ \/nginx-ingress-controller\n```\nYou should get the same error as from the ingress controller pod logs.\n\nConfirm the capabilities are properly surfacing into the pod:\n```console\n$ grep CapBnd \/proc\/1\/status\nCapBnd: 0000000000000400\n```\nThe above value has only net_bind_service enabled (per security context in YAML which adds that and drops all). 
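If `capsh` is not at hand, the same hex mask can be decoded with plain shell arithmetic; a minimal sketch (the mask shown is the example value above, and bit numbering follows the kernel's capability list, where bit 10 is CAP_NET_BIND_SERVICE):

```shell
# Decode a /proc/<pid>/status capability mask (hex) into the set bit numbers.
mask=0000000000000400          # example CapBnd value from above
val=$((16#$mask))              # hex -> integer
bits=""
bit=0
while [ "$val" -ne 0 ]; do
  # record this bit if its lowest-order flag is set, then shift right
  [ $((val & 1)) -eq 1 ] && bits="$bits $bit"
  val=$((val >> 1))
  bit=$((bit + 1))
done
echo "capability bits set:$bits"   # prints: capability bits set: 10
```

Any extra or missing bit numbers relative to what the securityContext requests point at the capability set being altered on its way into the container.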
If you get a different value, then you can decode it on another Linux box (capsh is not available in this container) like below, and then figure out why the specified capabilities are not propagating into the pod\/container.\n```console\n$ capsh --decode=0000000000000400\n0x0000000000000400=cap_net_bind_service\n```\n\n### Create a test pod as root\n\n(Note: this may be restricted by PodSecurityAdmission\/Standards, OPA Gatekeeper, etc., in which case you will need to do the appropriate workaround for testing, e.g. deploy in a new namespace without the restrictions.)\nTo test further you may want to install additional utilities, etc. Modify the pod YAML by:\n* changing runAsUser from 101 to 0\n* removing the \"drop: ALL\" section from the capabilities.\n\nSome things to try after shelling into this container:\n\nTry running the controller as the www-data (101) user:\n```console\n$ chmod 4755 \/nginx-ingress-controller\n$ \/nginx-ingress-controller\n```\nExamine the errors to see if there is still an issue listening on the port, or if it passed that and moved on to other errors that are expected because the controller is running outside its normal context.\n\nInstall the libcap package and check the capabilities on the file:\n```console\n$ apk add libcap\n(1\/1) Installing libcap (2.50-r0)\nExecuting busybox-1.33.1-r7.trigger\nOK: 26 MiB in 41 packages\n$ getcap \/nginx-ingress-controller\n\/nginx-ingress-controller cap_net_bind_service=ep\n```\n(If the capability is missing, see above about purging the image on the server and re-pulling.)\n\nStrace the executable to see which system calls are being executed when it fails:\n```console\n$ apk add strace\n(1\/1) Installing strace (5.12-r0)\nExecuting busybox-1.33.1-r7.trigger\nOK: 28 MiB in 42 packages\n$ strace \/nginx-ingress-controller\nexecve(\"\/nginx-ingress-controller\", [\"\/nginx-ingress-controller\"], 0x7ffeb9eb3240 \/* 131 vars *\/) = 0\narch_prctl(ARCH_SET_FS, 0x29ea690)      = 0\n...\n```","site":"ingress nginx"
 in case there has been corruption of the underlying layers to lose the capability on the executable       Create a test pod The  nginx ingress controller process exits crashes when encountering this error  making it difficult to troubleshoot what is happening inside the container   To get around this  start an equivalent container running  sleep 3600   and exec into it for further troubleshooting   For example     yaml apiVersion  v1 kind  Pod metadata    name  ingress nginx sleep   namespace  default   labels      app  nginx spec    containers        name  nginx       image     CONTROLLER IMAGE          resources          requests            memory   512Mi            cpu   500m          limits            memory   1Gi            cpu   1        command    sleep         args    3600         ports          containerPort  80         name  http         protocol  TCP         containerPort  443         name  https         protocol  TCP       securityContext          allowPrivilegeEscalation  true         capabilities            add              NET BIND SERVICE           drop              ALL         runAsUser  101   restartPolicy  Never   nodeSelector      kubernetes io hostname     NODE NAME      tolerations      key   node kubernetes io unschedulable      operator   Exists      effect  NoSchedule       update the namespace if applicable desired   replace     NODE NAME     with the problematic node  or remove nodeSelector section if problem is not confined to one node    replace     CONTROLLER IMAGE     with the same image as in use by your ingress nginx deployment   confirm the securityContext section matches what is in place for ingress nginx controller pods in your cluster  Apply the YAML and open a shell into the pod  Try to manually run the controller process     console    nginx ingress controller     You should get the same error as from the ingress controller pod logs   Confirm the capabilities are properly surfacing into the pod     console   grep CapBnd  proc 
1 status CapBnd  0000000000000400     The above value has only net bind service enabled  per security context in YAML which adds that and drops all   If you get a different value  then you can decode it on another linux box  capsh not available in this container  like below  and then figure out why specified capabilities are not propagating into the pod container     console   capsh   decode 0000000000000400 0x0000000000000400 cap net bind service         Create a test pod as root  Note  this may be restricted by PodSecurityAdmission Standards  OPA Gatekeeper  etc  in which case you will need to do the appropriate workaround for testing  e g  deploy in a new namespace without the restrictions   To test further you may want to install additional utilities  etc   Modify the pod yaml by    changing runAsUser from 101 to 0   removing the  drop  ALL  section from the capabilities   Some things to try after shelling into this container   Try running the controller as the www data  101  user     console   chmod 4755  nginx ingress controller    nginx ingress controller     Examine the errors to see if there is still an issue listening on the port or if it passed that and moved on to other expected errors due to running out of context   Install the libcap package and check capabilities on the file     console   apk add libcap  1 1  Installing libcap  2 50 r0  Executing busybox 1 33 1 r7 trigger OK  26 MiB in 41 packages   getcap  nginx ingress controller  nginx ingress controller cap net bind service ep      if missing  see above about purging image on the server and re pulling   Strace the executable to see what system calls are being executed when it fails     console   apk add strace  1 1  Installing strace  5 12 r0  Executing busybox 1 33 1 r7 trigger OK  28 MiB in 42 packages   strace  nginx ingress controller execve   nginx ingress controller      nginx ingress controller    0x7ffeb9eb3240    131 vars       0 arch prctl ARCH SET FS  0x29ea690         0        "}
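The `capsh --decode` step can also be reproduced without capsh: each bit position in a `CapBnd` mask corresponds to a capability number from linux/capability.h (CAP_CHOWN = 0 through CAP_CHECKPOINT_RESTORE = 40). A minimal Python sketch, assuming that header's numbering — `decode_caps` is an illustrative helper, not part of any tool mentioned above:

```python
# Decode a Linux capability bitmask (e.g. the CapBnd value from /proc/<pid>/status).
# Name table transcribed from linux/capability.h; list index == capability number.
CAP_NAMES = [
    "cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner",
    "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap",
    "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast",
    "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner",
    "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace",
    "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice",
    "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod",
    "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap",
    "cap_mac_override", "cap_mac_admin", "cap_syslog", "cap_wake_alarm",
    "cap_block_suspend", "cap_audit_read", "cap_perfmon", "cap_bpf",
    "cap_checkpoint_restore",
]

def decode_caps(mask: int) -> list:
    """Return the names of the capabilities whose bits are set in the mask."""
    return [name for bit, name in enumerate(CAP_NAMES) if mask >> bit & 1]

if __name__ == "__main__":
    # CapBnd: 0000000000000400 from the test pod -> only bit 10 is set
    print(decode_caps(0x0000000000000400))  # ['cap_net_bind_service']
```

A mask with unexpected extra bits set (or with bit 10 missing) would point at the securityContext not being applied as written.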
{"questions":"ingress nginx e2e test suite for This file is autogenerated Do not try to edit it manually","answers":"<!---\nThis file is autogenerated!\nDo not try to edit it manually.\n-->\n\n# e2e test suite for [Ingress NGINX Controller](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/)\n\n\n### [[Admission] admission controller](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L39)\n- [should not allow overlaps of host and paths without canary annotations](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L47)\n- [should allow overlaps of host and paths with canary annotation](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L64)\n- [should block ingress with invalid path](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L85)\n- [should return an error if there is an error validating the ingress definition](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L102)\n- [should return an error if there is an invalid value in some annotation](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L116)\n- [should return an error if there is a forbidden value in some annotation](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L130)\n- [should return an error if there is an invalid path and wrong pathType is set](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L144)\n- [should not return an error if the Ingress V1 definition is valid with Ingress Class](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L178)\n- [should not return an error if the Ingress V1 definition is valid with IngressClass 
annotation](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L194)\n- [should return an error if the Ingress V1 definition contains invalid annotations](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L210)\n- [should not return an error for an invalid Ingress when it has unknown class](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/admission\/admission.go#L224)\n### [affinity session-cookie-name](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L43)\n- [should set sticky cookie SERVERID](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L50)\n- [should change cookie name on ingress definition change](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L72)\n- [should set the path to \/something on the generated cookie](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L107)\n- [does not set the path to \/ on the generated cookie if there's more than one rule referring to the same backend](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L129)\n- [should set cookie with expires](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L202)\n- [should set cookie with domain](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L234)\n- [should not set cookie without domain annotation](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L257)\n- [should work with use-regex annotation and session-cookie-path](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L279)\n- [should warn user when use-regex is true and 
session-cookie-path is not set](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L303)\n- [should not set affinity across all server locations when using separate ingresses](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L329)\n- [should set sticky cookie without host](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L361)\n- [should work with server-alias annotation](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L381)\n- [should set secure in cookie with provided true annotation on http](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L421)\n- [should not set secure in cookie with provided false annotation on http](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L444)\n- [should set secure in cookie with provided false annotation on https](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinity.go#L467)\n### [affinitymode](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinitymode.go#L33)\n- [Balanced affinity mode should balance](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinitymode.go#L36)\n- [Check persistent affinity mode](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/affinitymode.go#L69)\n### [server-alias](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/alias.go#L31)\n- [should return status code 200 for host 'foo' and 404 for 'bar'](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/alias.go#L38)\n- [should return status code 200 for host 'foo' and 
'bar'](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/alias.go#L64)\n- [should return status code 200 for hosts defined in two ingresses, different path with one alias](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/alias.go#L89)\n### [app-root](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/approot.go#L28)\n- [should redirect to \/foo](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/approot.go#L35)\n### [auth-*](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L45)\n- [should return status code 200 when no authentication is configured](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L52)\n- [should return status code 503 when authentication is configured with an invalid secret](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L71)\n- [should return status code 401 when authentication is configured but Authorization header is not configured](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L95)\n- [should return status code 401 when authentication is configured and Authorization header is sent with invalid credentials](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L122)\n- [should return status code 401 and cors headers when authentication and cors is configured but Authorization header is not configured](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L150)\n- [should return status code 200 when authentication is configured and Authorization header is sent](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L178)\n- [should return status code 200 when authentication is configured with a 
map and Authorization header is sent](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L205)\n- [should return status code 401 when authentication is configured with invalid content and Authorization header is sent](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L233)\n- [proxy_set_header My-Custom-Header 42;](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L272)\n- [proxy_set_header My-Custom-Header 42;](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L292)\n- [proxy_set_header 'My-Custom-Header' '42';](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L311)\n- [user retains cookie by default](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L420)\n- [user does not retain cookie if upstream returns error status code](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L431)\n- [user with annotated ingress retains cookie if upstream returns error status code](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L442)\n- [should return status code 200 when signed in](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L481)\n- [should redirect to signin url when not signed in](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L490)\n- [keeps processing new ingresses even if one of the existing ingresses is misconfigured](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L501)\n- [should overwrite Foo header with auth response](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L525)\n- [should return status code 200 when signed 
in](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L701)\n- [should redirect to signin url when not signed in](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L710)\n- [keeps processing new ingresses even if one of the existing ingresses is misconfigured](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L721)\n- [should return status code 200 when signed in after auth backend is deleted ](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L780)\n- [should deny login for different location on same server](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L800)\n- [should deny login for different servers](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L828)\n- [should redirect to signin url when not signed in](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L857)\n- [should return 503 (location was denied)](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L887)\n- [should add error to the config](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/auth.go#L895)\n### [auth-tls-*](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/authtls.go#L31)\n- [should set sslClientCertificate, sslVerifyClient and sslVerifyDepth with auth-tls-secret](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/authtls.go#L38)\n- [should set valid auth-tls-secret, sslVerify to off, and sslVerifyDepth to 2](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/authtls.go#L86)\n- [should 302 redirect to error page instead of 400 when auth-tls-error-page is 
set](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/authtls.go#L116)\n- [should pass URL-encoded certificate to upstream](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/authtls.go#L163)\n- [should validate auth-tls-verify-client](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/authtls.go#L208)\n- [should return 403 using auth-tls-match-cn with no matching CN from client](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/authtls.go#L267)\n- [should return 200 using auth-tls-match-cn with matching CN from client](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/authtls.go#L296)\n- [should reload the nginx config when auth-tls-match-cn is updated](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/authtls.go#L325)\n- [should return 200 using auth-tls-match-cn where atleast one of the regex options matches CN from client](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/authtls.go#L368)\n### [backend-protocol](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/backendprotocol.go#L29)\n- [should set backend protocol to https:\/\/ and use proxy_pass](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/backendprotocol.go#L36)\n- [should set backend protocol to https:\/\/ and use proxy_pass with lowercase annotation](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/backendprotocol.go#L51)\n- [should set backend protocol to $scheme:\/\/ and use proxy_pass](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/backendprotocol.go#L66)\n- [should set backend protocol to grpc:\/\/ and use 
grpc_pass](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/backendprotocol.go#L81)\n- [should set backend protocol to grpcs:\/\/ and use grpc_pass](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/backendprotocol.go#L96)\n- [should set backend protocol to '' and use fastcgi_pass](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/backendprotocol.go#L111)\n### [canary-*](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L36)\n- [should response with a 200 status from the mainline upstream when requests are made to the mainline ingress](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L45)\n- [should return 404 status for requests to the canary if no matching ingress is found](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L89)\n- [should return the correct status codes when endpoints are unavailable](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L120)\n- [should route requests to the correct upstream if mainline ingress is created before the canary ingress](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L174)\n- [should route requests to the correct upstream if mainline ingress is created after the canary ingress](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L232)\n- [should route requests to the correct upstream if the mainline ingress is modified](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L289)\n- [should route requests to the correct upstream if the canary ingress is modified](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L363)\n- [should route requests to 
the correct upstream](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L445)\n- [should route requests to the correct upstream](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L513)\n- [should route requests to the correct upstream](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L594)\n- [should route requests to the correct upstream](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L647)\n- [should routes to mainline upstream when the given Regex causes error](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L692)\n- [should route requests to the correct upstream](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L741)\n- [respects always and never values](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L790)\n- [should route requests only to mainline if canary weight is 0](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L862)\n- [should route requests only to canary if canary weight is 100](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L910)\n- [should route requests only to canary if canary weight is equal to canary weight total](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L952)\n- [should route requests split between mainline and canary if canary weight is 50](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L995)\n- [should route requests split between mainline and canary if canary weight is 100 and weight total is 200](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L1031)\n- 
[should not use canary as a catch-all server](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L1070)\n- [should not use canary with domain as a server](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L1104)\n- [does not crash when canary ingress has multiple paths to the same non-matching backend](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L1138)\n- [always routes traffic to canary if first request was affinitized to canary (default behavior)](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L1175)\n- [always routes traffic to canary if first request was affinitized to canary (explicit sticky behavior)](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L1242)\n- [routes traffic to either mainline or canary backend (legacy behavior)](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/canary.go#L1310)\n### [client-body-buffer-size](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/clientbodybuffersize.go#L30)\n- [should set client_body_buffer_size to 1000](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/clientbodybuffersize.go#L37)\n- [should set client_body_buffer_size to 1K](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/clientbodybuffersize.go#L59)\n- [should set client_body_buffer_size to 1k](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/clientbodybuffersize.go#L81)\n- [should set client_body_buffer_size to 1m](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/clientbodybuffersize.go#L103)\n- [should set client_body_buffer_size to 
1M](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/clientbodybuffersize.go#L125)\n- [should not set client_body_buffer_size to invalid 1b](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/clientbodybuffersize.go#L147)\n### [connection-proxy-header](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/connection.go#L28)\n- [set connection header to keep-alive](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/connection.go#L35)\n### [cors-*](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L33)\n- [should enable cors](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L40)\n- [should set cors methods to only allow POST, GET](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L67)\n- [should set cors max-age](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L83)\n- [should disable cors allow credentials](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L99)\n- [should allow origin for cors](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L115)\n- [should allow headers for cors](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L142)\n- [should expose headers for cors](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L158)\n- [should allow - single origin for multiple cors values](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L174)\n- [should not allow - single origin for multiple cors values](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L201)\n- [should allow correct 
origins - single origin for multiple cors values](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L221)\n- [should not break functionality](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L272)\n- [should not break functionality - without `*`](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L296)\n- [should not break functionality with extra domain](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L319)\n- [should not match](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L343)\n- [should allow - single origin with required port](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L363)\n- [should not allow - single origin with port and origin without port](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L391)\n- [should not allow - single origin without port and origin with required port](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L410)\n- [should allow - matching origin with wildcard origin (2 subdomains)](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L430)\n- [should not allow - unmatching origin with wildcard origin (2 subdomains)](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L473)\n- [should allow - matching origin+port with wildcard origin](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L493)\n- [should not allow - portless origin with wildcard origin](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/annotations\/cors.go#L520)\n- [should allow correct origins - missing subdomain + origin with wildcard origin and correct 
origin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L540)
- [should allow - missing origins (should allow all origins)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L576)
- [should allow correct origin but not others - cors allow origin annotations contain trailing comma](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L636)
- [should allow - origins with non-http[s] protocols](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/cors.go#L673)
### [custom-headers-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L33)
- [should return status code 200 when no custom-headers is configured](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L40)
- [should return status code 503 when custom-headers is configured with an invalid secret](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L57)
- [more_set_headers 'My-Custom-Header' '42';](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customheaders.go#L78)
### [custom-http-errors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customhttperrors.go#L34)
- [configures Nginx correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/customhttperrors.go#L41)
### [default-backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/default_backend.go#L29)
- [should use a custom default backend as upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/default_backend.go#L37)
### [disable-access-log disable-http-access-log disable-stream-access-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L28)
- [disable-access-log set access_log off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L35)
- [disable-http-access-log set access_log off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L53)
- [disable-stream-access-log set access_log off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableaccesslog.go#L71)
### [disable-proxy-intercept-errors](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableproxyintercepterrors.go#L31)
- [configures Nginx correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/disableproxyintercepterrors.go#L39)
### [backend-protocol - FastCGI](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L30)
- [should use fastcgi_pass in the configuration file](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L37)
- [should add fastcgi_index in the configuration file](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L54)
- [should add fastcgi_param in the configuration file](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L71)
- [should return OK for service with backend protocol FastCGI](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fastcgi.go#L102)
### [force-ssl-redirect](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/forcesslredirect.go#L27)
- [should redirect to https](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/forcesslredirect.go#L34)
### [from-to-www-redirect](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fromtowwwredirect.go#L31)
- [should redirect from www HTTP to HTTP](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fromtowwwredirect.go#L38)
- [should redirect from www HTTPS to HTTPS](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/fromtowwwredirect.go#L64)
### [backend-protocol - GRPC](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L45)
- [should use grpc_pass in the configuration file](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L48)
- [should return OK for service with backend protocol GRPC](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L71)
- [authorization metadata should be overwritten by external auth response headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L132)
- [should return OK for service with backend protocol GRPCS](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L193)
- [should return OK when request not exceed timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L260)
- [should return Error when request exceed timeout](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/grpc.go#L303)
### [http2-push-preload](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/http2pushpreload.go#L27)
- [enable the http2-push-preload directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/http2pushpreload.go#L34)
### [allowlist-source-range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipallowlist.go#L27)
- [should set valid ip allowlist range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipallowlist.go#L34)
### [denylist-source-range](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L28)
- [only deny explicitly denied IPs, allow all others](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L35)
- [only allow explicitly allowed IPs, deny all others](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/ipdenylist.go#L86)
### [Annotation - limit-connections](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitconnections.go#L31)
- [should limit-connections](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitconnections.go#L38)
### [limit-rate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitrate.go#L29)
- [Check limit-rate annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/limitrate.go#L37)
### [enable-access-log enable-rewrite-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/log.go#L27)
- [set access_log off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/log.go#L34)
- [set rewrite_log on](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/log.go#L49)
### [mirror-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L28)
- [should set mirror-target to http://localhost/mirror](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L36)
- [should set mirror-target to https://test.env.com/$request_uri](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L51)
- [should disable mirror-request-body](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/mirror.go#L67)
### [modsecurity owasp](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L39)
- [should enable modsecurity](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L46)
- [should enable modsecurity with transaction ID and OWASP rules](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L64)
- [should disable modsecurity](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L85)
- [should enable modsecurity with snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L102)
- [should enable modsecurity without using 'modsecurity on;'](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L124)
- [should disable modsecurity using 'modsecurity off;'](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L147)
- [should enable modsecurity with snippet and block requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L169)
- [should enable modsecurity globally and with modsecurity-snippet block requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L202)
- [should enable modsecurity when enable-owasp-modsecurity-crs is set to true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L235)
- [should enable modsecurity through the config map](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L269)
- [should enable modsecurity through the config map but ignore snippet as disabled by admin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L309)
- [should disable default modsecurity conf setting when modsecurity-snippet is specified](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/modsecurity/modsecurity.go#L354)
### [preserve-trailing-slash](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/preservetrailingslash.go#L27)
- [should allow preservation of trailing slashes](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/preservetrailingslash.go#L34)
### [proxy-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L30)
- [should set proxy_redirect to off](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L38)
- [should set proxy_redirect to default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L54)
- [should set proxy_redirect to hello.com goodbye.com](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L70)
- [should set proxy client-max-body-size to 8m](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L87)
- [should not set proxy client-max-body-size to incorrect value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L102)
- [should set valid proxy timeouts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L117)
- [should not set invalid proxy timeouts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L138)
- [should turn on proxy-buffering](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L159)
- [should turn off proxy-request-buffering](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L181)
- [should build proxy next upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L196)
- [should setup proxy cookies](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L217)
- [should change the default proxy HTTP version](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxy.go#L235)
### [proxy-ssl-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L32)
- [should set valid proxy-ssl-secret](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L39)
- [should set valid proxy-ssl-secret, proxy-ssl-verify to on, proxy-ssl-verify-depth to 2, and proxy-ssl-server-name to on](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L66)
- [should set valid proxy-ssl-secret, proxy-ssl-ciphers to HIGH:!AES](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L96)
- [should set valid proxy-ssl-secret, proxy-ssl-protocols](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L124)
- [proxy-ssl-location-only flag should change the nginx config server part](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/proxyssl.go#L152)
### [permanent-redirect permanent-redirect-code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/redirect.go#L30)
- [should respond with a standard redirect code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/redirect.go#L33)
- [should respond with a custom redirect code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/redirect.go#L61)
### [rewrite-target use-regex enable-rewrite-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L32)
- [should write rewrite logs](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L39)
- [should use correct longest path match](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L68)
- [should use ~* location modifier if regex annotation is present](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L113)
- [should fail to use longest match for documented warning](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L160)
- [should allow for custom rewrite parameters](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/rewrite.go#L192)
### [satisfy](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/satisfy.go#L33)
- [should configure satisfy directive correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/satisfy.go#L40)
- [should allow multiple auth with satisfy any](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/satisfy.go#L82)
### [server-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serversnippet.go#L28)
### [service-upstream](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L32)
- [should use the Service Cluster IP and Port ](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L41)
- [should use the Service Cluster IP and Port ](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L69)
- [should not use the Service Cluster IP and Port](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/serviceupstream.go#L97)
### [configuration-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/snippet.go#L28)
- [set snippet more_set_headers in all locations](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/snippet.go#L34)
- [drops snippet more_set_header in all locations if disabled by admin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/snippet.go#L66)
### [ssl-ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/sslciphers.go#L28)
- [should change ssl ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/sslciphers.go#L35)
- [should keep ssl ciphers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/sslciphers.go#L58)
### [stream-snippet](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/streamsnippet.go#L34)
- [should add value of stream-snippet to nginx config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/streamsnippet.go#L41)
- [should add stream-snippet and drop annotations per admin config](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/streamsnippet.go#L88)
### [upstream-hash-by-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamhashby.go#L79)
- [should connect to the same pod](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamhashby.go#L86)
- [should connect to the same subset of pods](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamhashby.go#L95)
### [upstream-vhost](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamvhost.go#L27)
- [set host to upstreamvhost.bar.com](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/upstreamvhost.go#L34)
### [x-forwarded-prefix](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/xforwardedprefix.go#L28)
- [should set the X-Forwarded-Prefix to the annotation value](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/xforwardedprefix.go#L35)
- [should not add X-Forwarded-Prefix if the annotation value is empty](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/annotations/xforwardedprefix.go#L57)
### [[CGroups] cgroups](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/cgroups/cgroups.go#L32)
- [detects cgroups version v1](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/cgroups/cgroups.go#L40)
- [detect cgroups version v2](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/cgroups/cgroups.go#L83)
### [Debug CLI](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L29)
- [should list the backend servers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L37)
- [should get information for a specific backend server](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L56)
- [should produce valid JSON for /dbg general](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/dbg/main.go#L85)
### [[Default Backend] custom service](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/custom_default_backend.go#L33)
- [uses custom default backend that returns 200 as status code](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/custom_default_backend.go#L36)
### [[Default Backend]](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L30)
- [should return 404 sending requests when only a default backend is running](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L33)
- [enables access logging for default backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L88)
- [disables access logging for default backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/default_backend.go#L105)
### [[Default Backend] SSL](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/ssl.go#L26)
- [should return a self generated SSL certificate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/ssl.go#L29)
### [[Default Backend] change default settings](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/with_hosts.go#L30)
- [should apply the annotation to the default backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/defaultbackend/with_hosts.go#L38)
### [[Disable Leader] Routing works when leader election was disabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/disableleaderelection/disable_leader.go#L28)
- [should create multiple ingress routings rules when leader election has disabled](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/disableleaderelection/disable_leader.go#L35)
### [[Endpointslices] long service name](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/longname.go#L29)
- [should return 200 when service name has max allowed number of characters 63](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/longname.go#L38)
### [[TopologyHints] topology aware routing](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/topology.go#L34)
- [should return 200 when service has topology hints](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/endpointslices/topology.go#L42)
### [[Shutdown] Grace period shutdown](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/grace_period.go#L32)
- [/healthz should return status code 500 during shutdown grace period](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/grace_period.go#L35)
### [[Shutdown] ingress controller](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/shutdown.go#L30)
- [should shutdown in less than 60 seconds without pending connections](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/shutdown.go#L40)
### [[Shutdown] Graceful shutdown with pending request](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/slow_requests.go#L25)
- [should let slow requests finish before shutting down](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/gracefulshutdown/slow_requests.go#L33)
### [[Ingress] DeepInspection](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/deep_inspection.go#L27)
- [should drop whole ingress if one path matches invalid regex](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/deep_inspection.go#L34)
### [single ingress - multiple hosts](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/multiple_rules.go#L30)
- [should set the correct $service_name NGINX variable](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/multiple_rules.go#L38)
### [[Ingress] [PathType] exact](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_exact.go#L30)
- [should choose exact location for /exact](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_exact.go#L37)
### [[Ingress] [PathType] mix Exact and Prefix paths](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_mixed.go#L30)
- [should choose the correct location](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_mixed.go#L39)
### [[Ingress] [PathType] prefix checks](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L28)
- [should return 404 when prefix /aaa does not match request /aaaccc](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L35)
- [should test prefix path using simple regex pattern for /id/{int}](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L72)
- [should test prefix path using regex pattern for /id/{int} ignoring non-digits characters at end of string](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L113)
- [should test prefix path using fixed path size regex pattern /id/{int}{3}](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L142)
- [should correctly route multi-segment path patterns](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/pathtype_prefix.go#L177)
### [[Ingress] definition without host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/without_host.go#L31)
- [should set ingress details variables for ingresses without a host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/without_host.go#L34)
- [should set ingress details variables for ingresses with host without IngressRuleValue, only Backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/ingress/without_host.go#L55)
### [[Memory Leak] Dynamic Certificates](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/leaks/lua_ssl.go#L35)
- [should not leak memory from ingress SSL certificates or configuration updates](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/leaks/lua_ssl.go#L42)
### [[Load Balancer] load-balance](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/configmap.go#L30)
- [should apply the configmap load-balance setting](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/configmap.go#L37)
### [[Load Balancer] EWMA](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/ewma.go#L31)
- [does not fail requests](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/ewma.go#L43)
### [[Load Balancer] round-robin](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/round_robin.go#L31)
- [should evenly distribute requests with round-robin (default algorithm)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/loadbalance/round_robin.go#L39)
### [[Lua] dynamic certificates](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L37)
- [picks up the certificate when we add TLS spec to existing ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L45)
- [picks up the previously missing secret for a given ingress without reloading](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L70)
- [supports requests with domain with trailing dot](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L145)
- [picks up the updated certificate without reloading](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L149)
- [falls back to using default certificate when secret gets deleted without reloading](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L185)
- [picks up a non-certificate only change](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L218)
- [removes HTTPS configuration when we delete TLS spec](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_certificates.go#L233)
### [[Lua] dynamic configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L41)
- [configures balancer Lua middleware correctly](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L49)
- [handles endpoints only changes](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L56)
- [handles endpoints only changes (down scaling of replicas)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L81)
- [handles endpoints only changes consistently (down scaling of replicas vs. empty service)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L119)
- [handles an annotation change](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/lua/dynamic_configuration.go#L165)
### [[metrics] exported prometheus metrics](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L36)
- [exclude socket request metrics are absent](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L51)
- [exclude socket request metrics are present](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L73)
- [request metrics per undefined host are present when flag is set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L95)
- [request metrics per undefined host are not present when flag is not set](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/metrics/metrics.go#L128)
### [nginx-configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L99)
- [start nginx with default configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L102)
- [fails when using alias directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L114)
- [fails when using root directive](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/nginx/nginx.go#L121)
### [[Security] request smuggling](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/security/request_smuggling.go#L32)
- [should not return body content from error_page](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/security/request_smuggling.go#L39)
### [[Service] backend status code 503](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_backend.go#L34)
- [should return 503 when backend service does not exist](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_backend.go#L37)
- [should return 503 when all backend service endpoints are unavailable](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_backend.go#L55)
### [[Service] Type ExternalName](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L38)
- [works with external name set to incomplete fqdn](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L41)
- [should return 200 for service type=ExternalName without a port defined](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L78)
- [should return 200 for service type=ExternalName with a port defined](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L118)
- [should return status 502 for service type=ExternalName with an invalid host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L148)
- [should return 200 for service type=ExternalName using a port name](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L184)
- [should return 200 for service type=ExternalName using FQDN with trailing dot](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L225)
- [should update the external name after a service update](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L261)
- [should sync ingress on external name service addition/deletion](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_externalname.go#L344)
### [[Service] Nil Service Backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_nil_backend.go#L31)
- [should return 404 when backend service is nil](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/servicebackend/service_nil_backend.go#L38)
### [access-log](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L27)
- [use the default configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L31)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L41)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L52)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L64)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/access_log.go#L76)
### [aio-write](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L27)
- [should be enabled by default](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L30)
- [should be enabled when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L37)
- [should be disabled when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/aio_write.go#L46)
### [Bad annotation values](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L29)
- [[BAD_ANNOTATIONS] should drop an ingress if there is an invalid character in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L36)
- [[BAD_ANNOTATIONS] should drop an ingress if there is a forbidden word in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L68)
- [[BAD_ANNOTATIONS] should allow an ingress if there is a default blocklist config in place](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L105)
- [[BAD_ANNOTATIONS] should drop an ingress if there is a custom blocklist config in place and allow others to pass](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/badannotationvalues.go#L138)
### [brotli](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/brotli.go#L30)
- [should only compress responses that meet the `brotli-min-length` condition](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/brotli.go#L38)
### [Configmap change](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/configmap_change.go#L29)
- [should reload after an update in the configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/configmap_change.go#L36)
### [add-headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/custom_header.go#L30)
- [Add a custom header](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/custom_header.go#L40)
- [Add multiple custom headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/custom_header.go#L65)
### [[SSL] [Flag] default-ssl-certificate](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/default_ssl_certificate.go#L35)
- [uses default ssl certificate for catch-all ingress](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/default_ssl_certificate.go#L66)
- [uses default ssl certificate for host based ingress when configured certificate does not match host](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/default_ssl_certificate.go#L82)
### [[Flag] disable-catch-all](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L33)
- [should ignore catch all Ingress with backend](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L50)
- [should ignore catch all Ingress with backend and rules](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L69)
- [should delete Ingress updated to catch-all](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L81)
- [should allow Ingress with rules](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_catch_all.go#L123)
### [[Flag] disable-service-external-name](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_service_external_name.go#L35)
- [should ignore services of external-name type](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_service_external_name.go#L55)
### [[Flag] disable-sync-events](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L32)
- [should create sync events (default)](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L35)
- [should create sync events](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L55)
- [should not create sync events](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/disable_sync_events.go#L83)
### [enable-real-ip](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/enable_real_ip.go#L30)
- [trusts X-Forwarded-For header only when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/enable_real_ip.go#L40)
- [should not trust X-Forwarded-For header when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/enable_real_ip.go#L79)
### [use-forwarded-headers](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/forwarded_headers.go#L31)
- [should trust X-Forwarded headers when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/forwarded_headers.go#L41)
- [should not trust X-Forwarded headers when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/forwarded_headers.go#L93)
### [Geoip2](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L36)
- [should include geoip2 line in config when enabled and db file exists](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L45)
- [should only allow requests from specific countries](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L69)
- [should up and running nginx controller using autoreload flag](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/geoip2.go#L122)
### [[Security] block-*](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_access_block.go#L28)
- [should block CIDRs defined in the ConfigMap](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/settings/global_access_block.go#L38)
- [should block 
User-Agents defined in the ConfigMap](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_access_block.go#L55)\n- [should block Referers defined in the ConfigMap](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_access_block.go#L88)\n### [[Security] global-auth-url](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_external_auth.go#L39)\n- [should return status code 401 when request any protected service](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_external_auth.go#L91)\n- [should return status code 200 when request whitelisted (via no-auth-locations) service and 401 when request protected service](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_external_auth.go#L107)\n- [should return status code 200 when request whitelisted (via ingress annotation) service and 401 when request protected service](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_external_auth.go#L130)\n- [should still return status code 200 after auth backend is deleted using cache](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_external_auth.go#L158)\n- [user retains cookie by default](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_external_auth.go#L322)\n- [user does not retain cookie if upstream returns error status code](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_external_auth.go#L333)\n- [user with global-auth-always-set-cookie key in configmap retains cookie if upstream returns error status code](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_external_auth.go#L344)\n### 
[global-options](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_options.go#L28)\n- [should have worker_rlimit_nofile option](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_options.go#L31)\n- [should have worker_rlimit_nofile option and be independent on amount of worker processes](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/global_options.go#L37)\n### [GRPC](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/grpc.go#L39)\n- [should set the correct GRPC Buffer Size](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/grpc.go#L42)\n### [gzip](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/gzip.go#L30)\n- [should be disabled by default](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/gzip.go#L40)\n- [should be enabled with default settings](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/gzip.go#L56)\n- [should set gzip_comp_level to 4](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/gzip.go#L82)\n- [should set gzip_disable to msie6](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/gzip.go#L102)\n- [should set gzip_min_length to 100](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/gzip.go#L132)\n- [should set gzip_types to text\/html](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/gzip.go#L164)\n### [hash size](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/hash-size.go#L27)\n- [should set server_names_hash_bucket_size](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/hash-size.go#L39)\n- [should set 
server_names_hash_max_size](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/hash-size.go#L47)\n- [should set proxy-headers-hash-bucket-size](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/hash-size.go#L57)\n- [should set proxy-headers-hash-max-size](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/hash-size.go#L65)\n- [should set variables-hash-bucket-size](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/hash-size.go#L75)\n- [should set variables-hash-max-size](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/hash-size.go#L83)\n- [should set vmap-hash-bucket-size](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/hash-size.go#L93)\n### [[Flag] ingress-class](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L41)\n- [should ignore Ingress with a different class annotation](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L70)\n- [should ignore Ingress with different controller class](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L106)\n- [should accept both Ingresses with default IngressClassName and IngressClass annotation](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L134)\n- [should ignore Ingress without IngressClass configuration](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L166)\n- [should delete Ingress when class is removed](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L194)\n- [should serve Ingress when class is added](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L259)\n- 
[should serve Ingress when class is updated between annotation and ingressClassName](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L325)\n- [should ignore Ingress with no class and accept the correctly configured Ingresses](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L414)\n- [should watch Ingress with no class and ignore ingress with a different class](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L482)\n- [should watch Ingress that uses the class name even if spec is different](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L538)\n- [should watch Ingress with correct annotation](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L628)\n- [should ignore Ingress with only IngressClassName](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ingress_class.go#L648)\n### [keep-alive keep-alive-requests](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/keep-alive.go#L28)\n- [should set keepalive_timeout](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/keep-alive.go#L40)\n- [should set keepalive_requests](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/keep-alive.go#L48)\n- [should set keepalive connection to upstream server](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/keep-alive.go#L58)\n- [should set keep alive connection timeout to upstream server](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/keep-alive.go#L68)\n- [should set keepalive time to upstream server](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/keep-alive.go#L78)\n- [should set the request 
count to upstream server through one keep alive connection](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/keep-alive.go#L88)\n### [Configmap - limit-rate](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/limit_rate.go#L28)\n- [Check limit-rate config](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/limit_rate.go#L36)\n### [[Flag] custom HTTP and HTTPS ports](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/listen_nondefault_ports.go#L30)\n- [should set X-Forwarded-Port headers accordingly when listening on a non-default HTTP port](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/listen_nondefault_ports.go#L45)\n- [should set X-Forwarded-Port header to 443](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/listen_nondefault_ports.go#L65)\n- [should set the X-Forwarded-Port header to 443](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/listen_nondefault_ports.go#L93)\n### [log-format-*](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/log-format.go#L28)\n- [should not configure log-format escape by default](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/log-format.go#L39)\n- [should enable the log-format-escape-json](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/log-format.go#L46)\n- [should disable the log-format-escape-json](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/log-format.go#L54)\n- [should enable the log-format-escape-none](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/log-format.go#L62)\n- [should disable the 
log-format-escape-none](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/log-format.go#L70)\n- [log-format-escape-json enabled](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/log-format.go#L80)\n- [log-format default escape](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/log-format.go#L103)\n- [log-format-escape-none enabled](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/log-format.go#L126)\n### [[Lua] lua-shared-dicts](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/lua_shared_dicts.go#L26)\n- [configures lua shared dicts](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/lua_shared_dicts.go#L29)\n### [main-snippet](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/main_snippet.go#L27)\n- [should add value of main-snippet setting to nginx config](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/main_snippet.go#L31)\n### [[Security] modsecurity-snippet](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/modsecurity\/modsecurity_snippet.go#L27)\n- [should add value of modsecurity-snippet setting to nginx config](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/modsecurity\/modsecurity_snippet.go#L30)\n### [enable-multi-accept](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/multi_accept.go#L27)\n- [should be enabled by default](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/multi_accept.go#L31)\n- [should be enabled when set to true](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/multi_accept.go#L39)\n- [should be disabled when set to 
false](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/multi_accept.go#L49)\n### [[Flag] watch namespace selector](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/namespace_selector.go#L30)\n- [should ignore Ingress of namespace without label foo=bar and accept those of namespace with label foo=bar](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/namespace_selector.go#L62)\n### [[Security] no-auth-locations](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/no_auth_locations.go#L33)\n- [should return status code 401 when accessing '\/' unauthentication](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/no_auth_locations.go#L54)\n- [should return status code 200 when accessing '\/'  authentication](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/no_auth_locations.go#L68)\n- [should return status code 200 when accessing '\/noauth' unauthenticated](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/no_auth_locations.go#L82)\n### [Add no tls redirect locations](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/no_tls_redirect_locations.go#L27)\n- [Check no tls redirect locations config](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/no_tls_redirect_locations.go#L30)\n### [OCSP](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ocsp\/ocsp.go#L43)\n- [should enable OCSP and contain stapling information in the connection](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ocsp\/ocsp.go#L50)\n### [Configure Opentelemetry](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/opentelemetry.go#L39)\n- [should not exists opentelemetry 
directive](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/opentelemetry.go#L49)\n- [should exists opentelemetry directive when is enabled](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/opentelemetry.go#L62)\n- [should include opentelemetry_trust_incoming_spans on directive when enabled](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/opentelemetry.go#L76)\n- [should not exists opentelemetry_operation_name directive when is empty](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/opentelemetry.go#L91)\n- [should exists opentelemetry_operation_name directive when is configured](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/opentelemetry.go#L106)\n### [proxy-connect-timeout](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_connect_timeout.go#L29)\n- [should set valid proxy timeouts using configmap values](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_connect_timeout.go#L37)\n- [should not set invalid proxy timeouts using configmap values](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_connect_timeout.go#L53)\n### [Dynamic $proxy_host](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_host.go#L28)\n- [should exist a proxy_host](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_host.go#L36)\n- [should exist a proxy_host using the upstream-vhost annotation value](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_host.go#L60)\n### [proxy-next-upstream](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_next_upstream.go#L28)\n- [should build proxy next upstream using configmap 
values](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_next_upstream.go#L36)\n### [use-proxy-protocol](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_protocol.go#L38)\n- [should respect port passed by the PROXY Protocol](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_protocol.go#L48)\n- [should respect proto passed by the PROXY Protocol server port](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_protocol.go#L85)\n- [should enable PROXY Protocol for HTTPS](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_protocol.go#L121)\n- [should enable PROXY Protocol for TCP](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_protocol.go#L164)\n### [proxy-read-timeout](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_read_timeout.go#L29)\n- [should set valid proxy read timeouts using configmap values](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_read_timeout.go#L37)\n- [should not set invalid proxy read timeouts using configmap values](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_read_timeout.go#L53)\n### [proxy-send-timeout](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_send_timeout.go#L29)\n- [should set valid proxy send timeouts using configmap values](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_send_timeout.go#L37)\n- [should not set invalid proxy send timeouts using configmap values](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/proxy_send_timeout.go#L53)\n### 
[reuse-port](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/reuse-port.go#L27)\n- [reuse port should be enabled by default](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/reuse-port.go#L38)\n- [reuse port should be disabled](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/reuse-port.go#L44)\n- [reuse port should be enabled](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/reuse-port.go#L52)\n### [configmap server-snippet](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/server_snippet.go#L28)\n- [should add value of server-snippet setting to all ingress config](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/server_snippet.go#L35)\n- [should add global server-snippet and drop annotations per admin config](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/server_snippet.go#L100)\n### [server-tokens](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/server_tokens.go#L29)\n- [should not exists Server header in the response](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/server_tokens.go#L38)\n- [should exists Server header in the response when is enabled](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/server_tokens.go#L50)\n### [ssl-ciphers](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ssl_ciphers.go#L28)\n- [Add ssl ciphers](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ssl_ciphers.go#L31)\n### [[Flag] enable-ssl-passthrough](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ssl_passthrough.go#L36)\n### [With enable-ssl-passthrough 
enabled](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ssl_passthrough.go#L55)\n- [should enable ssl-passthrough-proxy-port on a different port](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ssl_passthrough.go#L56)\n- [should pass unknown traffic to default backend and handle known traffic](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/ssl_passthrough.go#L78)\n### [configmap stream-snippet](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/stream_snippet.go#L35)\n- [should add value of stream-snippet via config map to nginx config](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/stream_snippet.go#L42)\n### [[SSL] TLS protocols, ciphers and headers](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/tls.go#L32)\n- [setting cipher suite](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/tls.go#L66)\n- [setting max-age parameter](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/tls.go#L110)\n- [setting includeSubDomains parameter](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/tls.go#L127)\n- [setting preload parameter](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/tls.go#L147)\n- [overriding what's set from the upstream](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/tls.go#L168)\n- [should not use ports during the HTTP to HTTPS redirection](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/tls.go#L190)\n- [should not use ports or X-Forwarded-Host during the HTTP to HTTPS redirection](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/tls.go#L208)\n### [annotation 
validations](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/validations\/validations.go#L30)\n- [should allow ingress based on their risk on webhooks](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/validations\/validations.go#L33)\n- [should allow ingress based on their risk on webhooks](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/settings\/validations\/validations.go#L68)\n### [[SSL] redirect to HTTPS](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/ssl\/http_redirect.go#L29)\n- [should redirect from HTTP to HTTPS when secret is missing](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/ssl\/http_redirect.go#L36)\n### [[SSL] secret update](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/ssl\/secret_update.go#L33)\n- [should not appear references to secret updates not used in ingress rules](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/ssl\/secret_update.go#L40)\n- [should return the fake SSL certificate if the secret is invalid](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/ssl\/secret_update.go#L83)\n### [[Status] status update](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/status\/update.go#L38)\n- [should update status field after client-go reconnection](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/status\/update.go#L43)\n### [[TCP] tcp-services](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/tcpudp\/tcp.go#L38)\n- [should expose a TCP service](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/tcpudp\/tcp.go#L46)\n- [should expose an ExternalName TCP service](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/\/test\/e2e\/tcpudp\/tcp.go#L80)\n- [should reload after an update in the 
configuration](https://github.com/kubernetes/ingress-nginx/tree/main//test/e2e/tcpudp/tcp.go#L169)
different servers  https   github com kubernetes ingress nginx tree main  test e2e annotations auth go L828     should redirect to signin url when not signed in  https   github com kubernetes ingress nginx tree main  test e2e annotations auth go L857     should return 503  location was denied   https   github com kubernetes ingress nginx tree main  test e2e annotations auth go L887     should add error to the config  https   github com kubernetes ingress nginx tree main  test e2e annotations auth go L895       auth tls    https   github com kubernetes ingress nginx tree main  test e2e annotations authtls go L31     should set sslClientCertificate  sslVerifyClient and sslVerifyDepth with auth tls secret  https   github com kubernetes ingress nginx tree main  test e2e annotations authtls go L38     should set valid auth tls secret  sslVerify to off  and sslVerifyDepth to 2  https   github com kubernetes ingress nginx tree main  test e2e annotations authtls go L86     should 302 redirect to error page instead of 400 when auth tls error page is set  https   github com kubernetes ingress nginx tree main  test e2e annotations authtls go L116     should pass URL encoded certificate to upstream  https   github com kubernetes ingress nginx tree main  test e2e annotations authtls go L163     should validate auth tls verify client  https   github com kubernetes ingress nginx tree main  test e2e annotations authtls go L208     should return 403 using auth tls match cn with no matching CN from client  https   github com kubernetes ingress nginx tree main  test e2e annotations authtls go L267     should return 200 using auth tls match cn with matching CN from client  https   github com kubernetes ingress nginx tree main  test e2e annotations authtls go L296     should reload the nginx config when auth tls match cn is updated  https   github com kubernetes ingress nginx tree main  test e2e annotations authtls go L325     should return 200 using auth tls match cn where atleast one 
of the regex options matches CN from client  https   github com kubernetes ingress nginx tree main  test e2e annotations authtls go L368       backend protocol  https   github com kubernetes ingress nginx tree main  test e2e annotations backendprotocol go L29     should set backend protocol to https    and use proxy pass  https   github com kubernetes ingress nginx tree main  test e2e annotations backendprotocol go L36     should set backend protocol to https    and use proxy pass with lowercase annotation  https   github com kubernetes ingress nginx tree main  test e2e annotations backendprotocol go L51     should set backend protocol to  scheme    and use proxy pass  https   github com kubernetes ingress nginx tree main  test e2e annotations backendprotocol go L66     should set backend protocol to grpc    and use grpc pass  https   github com kubernetes ingress nginx tree main  test e2e annotations backendprotocol go L81     should set backend protocol to grpcs    and use grpc pass  https   github com kubernetes ingress nginx tree main  test e2e annotations backendprotocol go L96     should set backend protocol to    and use fastcgi pass  https   github com kubernetes ingress nginx tree main  test e2e annotations backendprotocol go L111       canary    https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L36     should response with a 200 status from the mainline upstream when requests are made to the mainline ingress  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L45     should return 404 status for requests to the canary if no matching ingress is found  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L89     should return the correct status codes when endpoints are unavailable  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L120     should route requests to the correct upstream if mainline ingress is created 
before the canary ingress  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L174     should route requests to the correct upstream if mainline ingress is created after the canary ingress  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L232     should route requests to the correct upstream if the mainline ingress is modified  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L289     should route requests to the correct upstream if the canary ingress is modified  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L363     should route requests to the correct upstream  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L445     should route requests to the correct upstream  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L513     should route requests to the correct upstream  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L594     should route requests to the correct upstream  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L647     should routes to mainline upstream when the given Regex causes error  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L692     should route requests to the correct upstream  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L741     respects always and never values  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L790     should route requests only to mainline if canary weight is 0  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L862     should route requests only to canary if canary weight is 100  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L910     should 
route requests only to canary if canary weight is equal to canary weight total  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L952     should route requests split between mainline and canary if canary weight is 50  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L995     should route requests split between mainline and canary if canary weight is 100 and weight total is 200  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L1031     should not use canary as a catch all server  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L1070     should not use canary with domain as a server  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L1104     does not crash when canary ingress has multiple paths to the same non matching backend  https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L1138     always routes traffic to canary if first request was affinitized to canary  default behavior   https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L1175     always routes traffic to canary if first request was affinitized to canary  explicit sticky behavior   https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L1242     routes traffic to either mainline or canary backend  legacy behavior   https   github com kubernetes ingress nginx tree main  test e2e annotations canary go L1310       client body buffer size  https   github com kubernetes ingress nginx tree main  test e2e annotations clientbodybuffersize go L30     should set client body buffer size to 1000  https   github com kubernetes ingress nginx tree main  test e2e annotations clientbodybuffersize go L37     should set client body buffer size to 1K  https   github com kubernetes ingress nginx tree main  test e2e annotations 
clientbodybuffersize go L59     should set client body buffer size to 1k  https   github com kubernetes ingress nginx tree main  test e2e annotations clientbodybuffersize go L81     should set client body buffer size to 1m  https   github com kubernetes ingress nginx tree main  test e2e annotations clientbodybuffersize go L103     should set client body buffer size to 1M  https   github com kubernetes ingress nginx tree main  test e2e annotations clientbodybuffersize go L125     should not set client body buffer size to invalid 1b  https   github com kubernetes ingress nginx tree main  test e2e annotations clientbodybuffersize go L147       connection proxy header  https   github com kubernetes ingress nginx tree main  test e2e annotations connection go L28     set connection header to keep alive  https   github com kubernetes ingress nginx tree main  test e2e annotations connection go L35       cors    https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L33     should enable cors  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L40     should set cors methods to only allow POST  GET  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L67     should set cors max age  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L83     should disable cors allow credentials  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L99     should allow origin for cors  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L115     should allow headers for cors  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L142     should expose headers for cors  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L158     should allow   single origin for multiple cors values  https   github com kubernetes ingress nginx tree main  test 
e2e annotations cors go L174     should not allow   single origin for multiple cors values  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L201     should allow correct origins   single origin for multiple cors values  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L221     should not break functionality  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L272     should not break functionality   without      https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L296     should not break functionality with extra domain  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L319     should not match  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L343     should allow   single origin with required port  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L363     should not allow   single origin with port and origin without port  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L391     should not allow   single origin without port and origin with required port  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L410     should allow   matching origin with wildcard origin  2 subdomains   https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L430     should not allow   unmatching origin with wildcard origin  2 subdomains   https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L473     should allow   matching origin port with wildcard origin  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L493     should not allow   portless origin with wildcard origin  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L520     should allow correct origins   
missing subdomain   origin with wildcard origin and correct origin  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L540     should allow   missing origins  should allow all origins   https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L576     should allow correct origin but not others   cors allow origin annotations contain trailing comma  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L636     should allow   origins with non http s  protocols  https   github com kubernetes ingress nginx tree main  test e2e annotations cors go L673       custom headers    https   github com kubernetes ingress nginx tree main  test e2e annotations customheaders go L33     should return status code 200 when no custom headers is configured  https   github com kubernetes ingress nginx tree main  test e2e annotations customheaders go L40     should return status code 503 when custom headers is configured with an invalid secret  https   github com kubernetes ingress nginx tree main  test e2e annotations customheaders go L57     more set headers  My Custom Header   42    https   github com kubernetes ingress nginx tree main  test e2e annotations customheaders go L78       custom http errors  https   github com kubernetes ingress nginx tree main  test e2e annotations customhttperrors go L34     configures Nginx correctly  https   github com kubernetes ingress nginx tree main  test e2e annotations customhttperrors go L41       default backend  https   github com kubernetes ingress nginx tree main  test e2e annotations default backend go L29     should use a custom default backend as upstream  https   github com kubernetes ingress nginx tree main  test e2e annotations default backend go L37       disable access log disable http access log disable stream access log  https   github com kubernetes ingress nginx tree main  test e2e annotations disableaccesslog go L28     disable access log 
set access log off  https   github com kubernetes ingress nginx tree main  test e2e annotations disableaccesslog go L35     disable http access log set access log off  https   github com kubernetes ingress nginx tree main  test e2e annotations disableaccesslog go L53     disable stream access log set access log off  https   github com kubernetes ingress nginx tree main  test e2e annotations disableaccesslog go L71       disable proxy intercept errors  https   github com kubernetes ingress nginx tree main  test e2e annotations disableproxyintercepterrors go L31     configures Nginx correctly  https   github com kubernetes ingress nginx tree main  test e2e annotations disableproxyintercepterrors go L39       backend protocol   FastCGI  https   github com kubernetes ingress nginx tree main  test e2e annotations fastcgi go L30     should use fastcgi pass in the configuration file  https   github com kubernetes ingress nginx tree main  test e2e annotations fastcgi go L37     should add fastcgi index in the configuration file  https   github com kubernetes ingress nginx tree main  test e2e annotations fastcgi go L54     should add fastcgi param in the configuration file  https   github com kubernetes ingress nginx tree main  test e2e annotations fastcgi go L71     should return OK for service with backend protocol FastCGI  https   github com kubernetes ingress nginx tree main  test e2e annotations fastcgi go L102       force ssl redirect  https   github com kubernetes ingress nginx tree main  test e2e annotations forcesslredirect go L27     should redirect to https  https   github com kubernetes ingress nginx tree main  test e2e annotations forcesslredirect go L34       from to www redirect  https   github com kubernetes ingress nginx tree main  test e2e annotations fromtowwwredirect go L31     should redirect from www HTTP to HTTP  https   github com kubernetes ingress nginx tree main  test e2e annotations fromtowwwredirect go L38     should redirect from www HTTPS to 
HTTPS  https   github com kubernetes ingress nginx tree main  test e2e annotations fromtowwwredirect go L64       backend protocol   GRPC  https   github com kubernetes ingress nginx tree main  test e2e annotations grpc go L45     should use grpc pass in the configuration file  https   github com kubernetes ingress nginx tree main  test e2e annotations grpc go L48     should return OK for service with backend protocol GRPC  https   github com kubernetes ingress nginx tree main  test e2e annotations grpc go L71     authorization metadata should be overwritten by external auth response headers  https   github com kubernetes ingress nginx tree main  test e2e annotations grpc go L132     should return OK for service with backend protocol GRPCS  https   github com kubernetes ingress nginx tree main  test e2e annotations grpc go L193     should return OK when request not exceed timeout  https   github com kubernetes ingress nginx tree main  test e2e annotations grpc go L260     should return Error when request exceed timeout  https   github com kubernetes ingress nginx tree main  test e2e annotations grpc go L303       http2 push preload  https   github com kubernetes ingress nginx tree main  test e2e annotations http2pushpreload go L27     enable the http2 push preload directive  https   github com kubernetes ingress nginx tree main  test e2e annotations http2pushpreload go L34       allowlist source range  https   github com kubernetes ingress nginx tree main  test e2e annotations ipallowlist go L27     should set valid ip allowlist range  https   github com kubernetes ingress nginx tree main  test e2e annotations ipallowlist go L34       denylist source range  https   github com kubernetes ingress nginx tree main  test e2e annotations ipdenylist go L28     only deny explicitly denied IPs  allow all others  https   github com kubernetes ingress nginx tree main  test e2e annotations ipdenylist go L35     only allow explicitly allowed IPs  deny all others  https   github 
com kubernetes ingress nginx tree main  test e2e annotations ipdenylist go L86       Annotation   limit connections  https   github com kubernetes ingress nginx tree main  test e2e annotations limitconnections go L31     should limit connections  https   github com kubernetes ingress nginx tree main  test e2e annotations limitconnections go L38       limit rate  https   github com kubernetes ingress nginx tree main  test e2e annotations limitrate go L29     Check limit rate annotation  https   github com kubernetes ingress nginx tree main  test e2e annotations limitrate go L37       enable access log enable rewrite log  https   github com kubernetes ingress nginx tree main  test e2e annotations log go L27     set access log off  https   github com kubernetes ingress nginx tree main  test e2e annotations log go L34     set rewrite log on  https   github com kubernetes ingress nginx tree main  test e2e annotations log go L49       mirror    https   github com kubernetes ingress nginx tree main  test e2e annotations mirror go L28     should set mirror target to http   localhost mirror  https   github com kubernetes ingress nginx tree main  test e2e annotations mirror go L36     should set mirror target to https   test env com  request uri  https   github com kubernetes ingress nginx tree main  test e2e annotations mirror go L51     should disable mirror request body  https   github com kubernetes ingress nginx tree main  test e2e annotations mirror go L67       modsecurity owasp  https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L39     should enable modsecurity  https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L46     should enable modsecurity with transaction ID and OWASP rules  https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L64     should disable modsecurity  https   github com kubernetes ingress nginx 
tree main  test e2e annotations modsecurity modsecurity go L85     should enable modsecurity with snippet  https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L102     should enable modsecurity without using  modsecurity on    https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L124     should disable modsecurity using  modsecurity off    https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L147     should enable modsecurity with snippet and block requests  https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L169     should enable modsecurity globally and with modsecurity snippet block requests  https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L202     should enable modsecurity when enable owasp modsecurity crs is set to true  https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L235     should enable modsecurity through the config map  https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L269     should enable modsecurity through the config map but ignore snippet as disabled by admin  https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L309     should disable default modsecurity conf setting when modsecurity snippet is specified  https   github com kubernetes ingress nginx tree main  test e2e annotations modsecurity modsecurity go L354       preserve trailing slash  https   github com kubernetes ingress nginx tree main  test e2e annotations preservetrailingslash go L27     should allow preservation of trailing slashes  https   github com kubernetes ingress nginx tree main  test e2e annotations preservetrailingslash go L34       proxy    https   github com 
kubernetes ingress nginx tree main  test e2e annotations proxy go L30     should set proxy redirect to off  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L38     should set proxy redirect to default  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L54     should set proxy redirect to hello com goodbye com  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L70     should set proxy client max body size to 8m  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L87     should not set proxy client max body size to incorrect value  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L102     should set valid proxy timeouts  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L117     should not set invalid proxy timeouts  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L138     should turn on proxy buffering  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L159     should turn off proxy request buffering  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L181     should build proxy next upstream  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L196     should setup proxy cookies  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L217     should change the default proxy HTTP version  https   github com kubernetes ingress nginx tree main  test e2e annotations proxy go L235       proxy ssl    https   github com kubernetes ingress nginx tree main  test e2e annotations proxyssl go L32     should set valid proxy ssl secret  https   github com kubernetes ingress nginx tree main  test e2e annotations proxyssl go L39     should set valid proxy ssl secret  proxy ssl verify to on  proxy ssl verify 
depth to 2  and proxy ssl server name to on  https   github com kubernetes ingress nginx tree main  test e2e annotations proxyssl go L66     should set valid proxy ssl secret  proxy ssl ciphers to HIGH  AES  https   github com kubernetes ingress nginx tree main  test e2e annotations proxyssl go L96     should set valid proxy ssl secret  proxy ssl protocols  https   github com kubernetes ingress nginx tree main  test e2e annotations proxyssl go L124     proxy ssl location only flag should change the nginx config server part  https   github com kubernetes ingress nginx tree main  test e2e annotations proxyssl go L152       permanent redirect permanent redirect code  https   github com kubernetes ingress nginx tree main  test e2e annotations redirect go L30     should respond with a standard redirect code  https   github com kubernetes ingress nginx tree main  test e2e annotations redirect go L33     should respond with a custom redirect code  https   github com kubernetes ingress nginx tree main  test e2e annotations redirect go L61       rewrite target use regex enable rewrite log  https   github com kubernetes ingress nginx tree main  test e2e annotations rewrite go L32     should write rewrite logs  https   github com kubernetes ingress nginx tree main  test e2e annotations rewrite go L39     should use correct longest path match  https   github com kubernetes ingress nginx tree main  test e2e annotations rewrite go L68     should use    location modifier if regex annotation is present  https   github com kubernetes ingress nginx tree main  test e2e annotations rewrite go L113     should fail to use longest match for documented warning  https   github com kubernetes ingress nginx tree main  test e2e annotations rewrite go L160     should allow for custom rewrite parameters  https   github com kubernetes ingress nginx tree main  test e2e annotations rewrite go L192       satisfy  https   github com kubernetes ingress nginx tree main  test e2e annotations satisfy go 
L33     should configure satisfy directive correctly  https   github com kubernetes ingress nginx tree main  test e2e annotations satisfy go L40     should allow multiple auth with satisfy any  https   github com kubernetes ingress nginx tree main  test e2e annotations satisfy go L82       server snippet  https   github com kubernetes ingress nginx tree main  test e2e annotations serversnippet go L28       service upstream  https   github com kubernetes ingress nginx tree main  test e2e annotations serviceupstream go L32     should use the Service Cluster IP and Port   https   github com kubernetes ingress nginx tree main  test e2e annotations serviceupstream go L41     should use the Service Cluster IP and Port   https   github com kubernetes ingress nginx tree main  test e2e annotations serviceupstream go L69     should not use the Service Cluster IP and Port  https   github com kubernetes ingress nginx tree main  test e2e annotations serviceupstream go L97       configuration snippet  https   github com kubernetes ingress nginx tree main  test e2e annotations snippet go L28     set snippet more set headers in all locations  https   github com kubernetes ingress nginx tree main  test e2e annotations snippet go L34     drops snippet more set header in all locations if disabled by admin  https   github com kubernetes ingress nginx tree main  test e2e annotations snippet go L66       ssl ciphers  https   github com kubernetes ingress nginx tree main  test e2e annotations sslciphers go L28     should change ssl ciphers  https   github com kubernetes ingress nginx tree main  test e2e annotations sslciphers go L35     should keep ssl ciphers  https   github com kubernetes ingress nginx tree main  test e2e annotations sslciphers go L58       stream snippet  https   github com kubernetes ingress nginx tree main  test e2e annotations streamsnippet go L34     should add value of stream snippet to nginx config  https   github com kubernetes ingress nginx tree main  test e2e 
### [stream-snippet](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/annotations/streamsnippet.go#L41)
- [should add stream snippet and drop annotations per admin config](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/annotations/streamsnippet.go#L88)

### [upstream-hash-by](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/annotations/upstreamhashby.go#L79)
- [should connect to the same pod](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/annotations/upstreamhashby.go#L86)
- [should connect to the same subset of pods](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/annotations/upstreamhashby.go#L95)

### [upstream-vhost](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/annotations/upstreamvhost.go#L27)
- [set host to upstreamvhost.bar.com](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/annotations/upstreamvhost.go#L34)

### [x-forwarded-prefix](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/annotations/xforwardedprefix.go#L28)
- [should set the X-Forwarded-Prefix to the annotation value](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/annotations/xforwardedprefix.go#L35)
- [should not add X-Forwarded-Prefix if the annotation value is empty](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/annotations/xforwardedprefix.go#L57)

### [[CGroups] cgroups](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/cgroups/cgroups.go#L32)
- [detects cgroups version v1](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/cgroups/cgroups.go#L40)
- [detect cgroups version v2](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/cgroups/cgroups.go#L83)

### [[Debug CLI]](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/dbg/main.go#L29)
- [should list the backend servers](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/dbg/main.go#L37)
- [should get information for a specific backend server](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/dbg/main.go#L56)
- [should produce valid JSON for /dbg general](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/dbg/main.go#L85)

### [[Default Backend] custom service](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/defaultbackend/custom_default_backend.go#L33)
- [uses custom default backend that returns 200 as status code](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/defaultbackend/custom_default_backend.go#L36)

### [[Default Backend]](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/defaultbackend/default_backend.go#L30)
- [should return 404 sending requests when only a default backend is running](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/defaultbackend/default_backend.go#L33)
- [enables access logging for default backend](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/defaultbackend/default_backend.go#L88)
- [disables access logging for default backend](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/defaultbackend/default_backend.go#L105)

### [[Default Backend] SSL](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/defaultbackend/ssl.go#L26)
- [should return a self generated SSL certificate](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/defaultbackend/ssl.go#L29)

### [[Default Backend] change default settings](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/defaultbackend/with_hosts.go#L30)
- [should apply the annotation to the default backend](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/defaultbackend/with_hosts.go#L38)

### [[Disable Leader] Routing works when leader election was disabled](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/disableleaderelection/disable_leader.go#L28)
- [should create multiple ingress routings rules when leader election has disabled](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/disableleaderelection/disable_leader.go#L35)

### [[Endpointslices] long service name](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/endpointslices/longname.go#L29)
- [should return 200 when service name has max allowed number of characters 63](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/endpointslices/longname.go#L38)

### [[TopologyHints] topology aware routing](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/endpointslices/topology.go#L34)
- [should return 200 when service has topology hints](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/endpointslices/topology.go#L42)

### [[Shutdown] Grace period shutdown](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/gracefulshutdown/grace_period.go#L32)
- [/healthz should return status code 500 during shutdown grace period](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/gracefulshutdown/grace_period.go#L35)

### [[Shutdown] ingress controller](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/gracefulshutdown/shutdown.go#L30)
- [should shutdown in less than 60 seconds without pending connections](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/gracefulshutdown/shutdown.go#L40)

### [[Shutdown] Graceful shutdown with pending request](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/gracefulshutdown/slow_requests.go#L25)
- [should let slow requests finish before shutting down](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/gracefulshutdown/slow_requests.go#L33)

### [[Ingress] DeepInspection](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/deep_inspection.go#L27)
- [should drop whole ingress if one path matches invalid regex](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/deep_inspection.go#L34)

### [single ingress - multiple hosts](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/multiple_rules.go#L30)
- [should set the correct service name NGINX variable](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/multiple_rules.go#L38)

### [[Ingress] [PathType] exact](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/pathtype_exact.go#L30)
- [should choose exact location for /exact](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/pathtype_exact.go#L37)

### [[Ingress] [PathType] mix Exact and Prefix paths](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/pathtype_mixed.go#L30)
- [should choose the correct location](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/pathtype_mixed.go#L39)

### [[Ingress] [PathType] prefix checks](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/pathtype_prefix.go#L28)
- [should return 404 when prefix /aaa does not match request /aaaccc](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/pathtype_prefix.go#L35)
- [should test prefix path using simple regex pattern for /id/{int}](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/pathtype_prefix.go#L72)
- [should test prefix path using regex pattern for /id/{int} ignoring non-digits characters at end of string](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/pathtype_prefix.go#L113)
- [should test prefix path using fixed path size regex pattern /id/{int}/3](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/pathtype_prefix.go#L142)
- [should correctly route multi-segment path patterns](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/pathtype_prefix.go#L177)

### [[Ingress] definition without host](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/without_host.go#L31)
- [should set ingress details variables for ingresses without a host](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/without_host.go#L34)
- [should set ingress details variables for ingresses with host without IngressRuleValue, only Backend](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/ingress/without_host.go#L55)

### [[Memory Leak] Dynamic Certificates](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/leaks/lua_ssl.go#L35)
- [should not leak memory from ingress SSL certificates or configuration updates](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/leaks/lua_ssl.go#L42)

### [[Load Balancer] load-balance](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/loadbalance/configmap.go#L30)
- [should apply the configmap load-balance setting](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/loadbalance/configmap.go#L37)

### [[Load Balancer] EWMA](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/loadbalance/ewma.go#L31)
- [does not fail requests](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/loadbalance/ewma.go#L43)

### [[Load Balancer] round robin](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/loadbalance/round_robin.go#L31)
- [should evenly distribute requests with round-robin (default algorithm)](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/loadbalance/round_robin.go#L39)

### [[Lua] dynamic certificates](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_certificates.go#L37)
- [picks up the certificate when we add TLS spec to existing ingress](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_certificates.go#L45)
- [picks up the previously missing secret for a given ingress without reloading](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_certificates.go#L70)
- [supports requests with domain with trailing dot](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_certificates.go#L145)
- [picks up the updated certificate without reloading](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_certificates.go#L149)
- [falls back to using default certificate when secret gets deleted without reloading](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_certificates.go#L185)
- [picks up a non-certificate only change](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_certificates.go#L218)
- [removes HTTPS configuration when we delete TLS spec](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_certificates.go#L233)

### [[Lua] dynamic configuration](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_configuration.go#L41)
- [configures balancer Lua middleware correctly](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_configuration.go#L49)
- [handles endpoints only changes](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_configuration.go#L56)
- [handles endpoints only changes (down scaling of replicas)](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_configuration.go#L81)
- [handles endpoints only changes consistently (down scaling of replicas vs. empty service)](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_configuration.go#L119)
- [handles an annotation change](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/lua/dynamic_configuration.go#L165)

### [[metrics] exported prometheus metrics](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/metrics/metrics.go#L36)
- [exclude socket request metrics are absent](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/metrics/metrics.go#L51)
- [exclude socket request metrics are present](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/metrics/metrics.go#L73)
- [request metrics per undefined host are present when flag is set](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/metrics/metrics.go#L95)
- [request metrics per undefined host are not present when flag is not set](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/metrics/metrics.go#L128)

### [nginx-configuration](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/nginx/nginx.go#L99)
- [start nginx with default configuration](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/nginx/nginx.go#L102)
- [fails when using alias directive](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/nginx/nginx.go#L114)
- [fails when using root directive](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/nginx/nginx.go#L121)

### [[Security] request smuggling](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/security/request_smuggling.go#L32)
- [should not return body content from error_page](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/security/request_smuggling.go#L39)

### [[Service] backend status code 503](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_backend.go#L34)
- [should return 503 when backend service does not exist](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_backend.go#L37)
- [should return 503 when all backend service endpoints are unavailable](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_backend.go#L55)

### [[Service] Type ExternalName](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_externalname.go#L38)
- [works with external name set to incomplete fqdn](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_externalname.go#L41)
- [should return 200 for service type=ExternalName without a port defined](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_externalname.go#L78)
- [should return 200 for service type=ExternalName with a port defined](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_externalname.go#L118)
- [should return status 502 for service type=ExternalName with an invalid host](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_externalname.go#L148)
- [should return 200 for service type=ExternalName using a port name](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_externalname.go#L184)
- [should return 200 for service type=ExternalName using FQDN with trailing dot](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_externalname.go#L225)
- [should update the external name after a service update](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_externalname.go#L261)
- [should sync ingress on external name service addition/deletion](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_externalname.go#L344)

### [[Service] Nil Service Backend](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_nil_backend.go#L31)
- [should return 404 when backend service is nil](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/servicebackend/service_nil_backend.go#L38)

### [access-log](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/access_log.go#L27)
- [use the default configuration](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/access_log.go#L31)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/access_log.go#L41)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/access_log.go#L52)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/access_log.go#L64)
- [use the specified configuration](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/access_log.go#L76)

### [aio-write](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/aio_write.go#L27)
- [should be enabled by default](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/aio_write.go#L30)
- [should be enabled when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/aio_write.go#L37)
- [should be disabled when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/aio_write.go#L46)

### [Bad annotation values](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/badannotationvalues.go#L29)
- [[BAD_ANNOTATIONS] should drop an ingress if there is an invalid character in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/badannotationvalues.go#L36)
- [[BAD_ANNOTATIONS] should drop an ingress if there is a forbidden word in some annotation](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/badannotationvalues.go#L68)
- [[BAD_ANNOTATIONS] should allow an ingress if there is a default blocklist config in place](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/badannotationvalues.go#L105)
- [[BAD_ANNOTATIONS] should drop an ingress if there is a custom blocklist config in place and allow others to pass](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/badannotationvalues.go#L138)

### [brotli](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/brotli.go#L30)
- [should only compress responses that meet the brotli-min-length condition](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/brotli.go#L38)

### [Configmap change](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/configmap_change.go#L29)
- [should reload after an update in the configuration](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/configmap_change.go#L36)

### [add-headers](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/custom_header.go#L30)
- [Add a custom header](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/custom_header.go#L40)
- [Add multiple custom headers](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/custom_header.go#L65)

### [[SSL] [Flag] default-ssl-certificate](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/default_ssl_certificate.go#L35)
- [uses default ssl certificate for catch-all ingress](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/default_ssl_certificate.go#L66)
- [uses default ssl certificate for host based ingress when configured certificate does not match host](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/default_ssl_certificate.go#L82)

### [[Flag] disable-catch-all](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/disable_catch_all.go#L33)
- [should ignore catch all Ingress with backend](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/disable_catch_all.go#L50)
- [should ignore catch all Ingress with backend and rules](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/disable_catch_all.go#L69)
- [should delete Ingress updated to catch-all](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/disable_catch_all.go#L81)
- [should allow Ingress with rules](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/disable_catch_all.go#L123)

### [[Flag] disable-service-external-name](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/disable_service_external_name.go#L35)
- [should ignore services of external name type](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/disable_service_external_name.go#L55)

### [[Flag] disable-sync-events](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/disable_sync_events.go#L32)
- [should create sync events (default)](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/disable_sync_events.go#L35)
- [should create sync events](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/disable_sync_events.go#L55)
- [should not create sync events](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/disable_sync_events.go#L83)

### [enable-real-ip](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/enable_real_ip.go#L30)
- [trusts X-Forwarded-For header only when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/enable_real_ip.go#L40)
- [should not trust X-Forwarded-For header when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/enable_real_ip.go#L79)

### [use-forwarded-headers](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/forwarded_headers.go#L31)
- [should trust X-Forwarded headers when setting is true](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/forwarded_headers.go#L41)
- [should not trust X-Forwarded headers when setting is false](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/forwarded_headers.go#L93)

### [Geoip2](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/geoip2.go#L36)
- [should include geoip2 line in config when enabled and db file exists](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/geoip2.go#L45)
- [should only allow requests from specific countries](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/geoip2.go#L69)
- [should up and running nginx controller using autoreload flag](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/geoip2.go#L122)

### [[Security] block-*](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_access_block.go#L28)
- [should block CIDRs defined in the ConfigMap](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_access_block.go#L38)
- [should block User-Agents defined in the ConfigMap](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_access_block.go#L55)
- [should block Referers defined in the ConfigMap](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_access_block.go#L88)

### [[Security] global-auth-url](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_external_auth.go#L39)
- [should return status code 401 when request any protected service](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_external_auth.go#L91)
- [should return status code 200 when request whitelisted (via no-auth-locations) service and 401 when request protected service](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_external_auth.go#L107)
- [should return status code 200 when request whitelisted (via ingress annotation) service and 401 when request protected service](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_external_auth.go#L130)
- [should still return status code 200 after auth backend is deleted using cache](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_external_auth.go#L158)
- [user retains cookie by default](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_external_auth.go#L322)
- [user does not retain cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_external_auth.go#L333)
- [user with global-auth-always-set-cookie key in configmap retains cookie if upstream returns error status code](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_external_auth.go#L344)

### [global-options](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_options.go#L28)
- [should have worker_rlimit_nofile option](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_options.go#L31)
- [should have worker_rlimit_nofile option and be independent on amount of worker processes](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/global_options.go#L37)

### [GRPC](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/grpc.go#L39)
- [should set the correct GRPC Buffer Size](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/grpc.go#L42)

### [gzip](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/gzip.go#L30)
- [should be disabled by default](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/gzip.go#L40)
- [should be enabled with default settings](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/gzip.go#L56)
- [should set gzip_comp_level to 4](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/gzip.go#L82)
- [should set gzip_disable to msie6](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/gzip.go#L102)
- [should set gzip_min_length to 100](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/gzip.go#L132)
- [should set gzip_types to text/html](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/gzip.go#L164)

### [hash size](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/hash_size.go#L27)
- [should set server_names_hash_bucket_size](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/hash_size.go#L39)
- [should set server_names_hash_max_size](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/hash_size.go#L47)
- [should set proxy_headers_hash_bucket_size](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/hash_size.go#L57)
- [should set proxy_headers_hash_max_size](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/hash_size.go#L65)
- [should set variables_hash_bucket_size](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/hash_size.go#L75)
- [should set variables_hash_max_size](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/hash_size.go#L83)
- [should set vmap hash bucket size](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/hash_size.go#L93)

### [[Flag] ingress-class](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L41)
- [should ignore Ingress with a different class annotation](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L70)
- [should ignore Ingress with different controller class](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L106)
- [should accept both Ingresses with default IngressClassName and IngressClass annotation](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L134)
- [should ignore Ingress without IngressClass configuration](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L166)
- [should delete Ingress when class is removed](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L194)
- [should serve Ingress when class is added](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L259)
- [should serve Ingress when class is updated between annotation and ingressClassName](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L325)
- [should ignore Ingress with no class and accept the correctly configured Ingresses](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L414)
- [should watch Ingress with no class and ignore ingress with a different class](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L482)
- [should watch Ingress that uses the class name even if spec is different](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L538)
- [should watch Ingress with correct annotation](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L628)
- [should ignore Ingress with only IngressClassName](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ingress_class.go#L648)

### [keep-alive keep-alive-requests](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/keep_alive.go#L28)
- [should set keepalive_timeout](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/keep_alive.go#L40)
- [should set keepalive_requests](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/keep_alive.go#L48)
- [should set keepalive connection to upstream server](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/keep_alive.go#L58)
- [should set keep alive connection timeout to upstream server](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/keep_alive.go#L68)
- [should set keepalive time to upstream server](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/keep_alive.go#L78)
- [should set the request count to upstream server through one keep alive connection](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/keep_alive.go#L88)

### [Configmap - limit-rate](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/limit_rate.go#L28)
- [Check limit-rate config](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/limit_rate.go#L36)

### [[Flag] custom HTTP and HTTPS ports](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/listen_nondefault_ports.go#L30)
- [should set X-Forwarded-Port headers accordingly when listening on a non-default HTTP port](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/listen_nondefault_ports.go#L45)
- [should set X-Forwarded-Port header to 443](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/listen_nondefault_ports.go#L65)
- [should set the X-Forwarded-Port header to 443](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/listen_nondefault_ports.go#L93)

### [log-format-*](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/log_format.go#L28)
- [should not configure log-format escape by default](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/log_format.go#L39)
- [should enable the log-format-escape-json](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/log_format.go#L46)
- [should disable the log-format-escape-json](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/log_format.go#L54)
- [should enable the log-format-escape-none](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/log_format.go#L62)
- [should disable the log-format-escape-none](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/log_format.go#L70)
- [log-format-escape-json enabled](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/log_format.go#L80)
- [log-format default escape](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/log_format.go#L103)
- [log-format-escape-none enabled](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/log_format.go#L126)

### [[Lua] lua-shared-dicts](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/lua_shared_dicts.go#L26)
- [configures lua shared dicts](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/lua_shared_dicts.go#L29)

### [main-snippet](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/main_snippet.go#L27)
- [should add value of main-snippet setting to nginx config](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/main_snippet.go#L31)

### [[Security] modsecurity-snippet](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/modsecurity/modsecurity_snippet.go#L27)
- [should add value of modsecurity-snippet setting to nginx config](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/modsecurity/modsecurity_snippet.go#L30)

### [enable-multi-accept](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/multi_accept.go#L27)
- [should be enabled by default](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/multi_accept.go#L31)
- [should be enabled when set to true](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/multi_accept.go#L39)
- [should be disabled when set to false](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/multi_accept.go#L49)

### [[Flag] watch namespace selector](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/namespace_selector.go#L30)
- [should ignore Ingress of namespace without label foo=bar and accept those of namespace with label foo=bar](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/namespace_selector.go#L62)

### [[Security] no-auth-locations](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/no_auth_locations.go#L33)
- [should return status code 401 when accessing '/' unauthentication](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/no_auth_locations.go#L54)
- [should return status code 200 when accessing '/' authentication](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/no_auth_locations.go#L68)
- [should return status code 200 when accessing '/noauth' unauthenticated](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/no_auth_locations.go#L82)

### [Add no-tls-redirect-locations](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/no_tls_redirect_locations.go#L27)
- [Check no-tls-redirect-locations config](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/no_tls_redirect_locations.go#L30)

### [OCSP](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ocsp/ocsp.go#L43)
- [should enable OCSP and contain stapling information in the connection](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ocsp/ocsp.go#L50)

### [Configure Opentelemetry](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/opentelemetry.go#L39)
- [should not exists opentelemetry directive](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/opentelemetry.go#L49)
- [should exists opentelemetry directive when is enabled](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/opentelemetry.go#L62)
- [should include opentelemetry_trust_incoming_spans on directive when enabled](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/opentelemetry.go#L76)
- [should not exists opentelemetry operation name directive when is empty](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/opentelemetry.go#L91)
- [should exists opentelemetry operation name directive when is configured](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/opentelemetry.go#L106)

### [proxy-connect-timeout](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_connect_timeout.go#L29)
- [should set valid proxy timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_connect_timeout.go#L37)
- [should not set invalid proxy timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_connect_timeout.go#L53)

### [Dynamic $proxy_host](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_host.go#L28)
- [should exist a proxy_host](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_host.go#L36)
- [should exist a proxy_host using the upstream-vhost annotation value](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_host.go#L60)

### [proxy-next-upstream](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_next_upstream.go#L28)
- [should build proxy next upstream using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_next_upstream.go#L36)

### [use-proxy-protocol](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_protocol.go#L38)
- [should respect port passed by the PROXY Protocol](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_protocol.go#L48)
- [should respect proto passed by the PROXY Protocol server port](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_protocol.go#L85)
- [should enable PROXY Protocol for HTTPS](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_protocol.go#L121)
- [should enable PROXY Protocol for TCP](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_protocol.go#L164)

### [proxy-read-timeout](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_read_timeout.go#L29)
- [should set valid proxy read timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_read_timeout.go#L37)
- [should not set invalid proxy read timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_read_timeout.go#L53)

### [proxy-send-timeout](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_send_timeout.go#L29)
- [should set valid proxy send timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_send_timeout.go#L37)
- [should not set invalid proxy send timeouts using configmap values](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/proxy_send_timeout.go#L53)

### [reuse-port](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/reuse_port.go#L27)
- [reuse port should be enabled by default](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/reuse_port.go#L38)
- [reuse port should be disabled](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/reuse_port.go#L44)
- [reuse port should be enabled](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/reuse_port.go#L52)

### [configmap server-snippet](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/server_snippet.go#L28)
- [should add value of server-snippet setting to all ingress config](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/server_snippet.go#L35)
- [should add global server-snippet and drop annotations per admin config](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/server_snippet.go#L100)

### [server-tokens](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/server_tokens.go#L29)
- [should not exists Server header in the response](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/server_tokens.go#L38)
- [should exists Server header in the response when is enabled](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/server_tokens.go#L50)

### [ssl-ciphers](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ssl_ciphers.go#L28)
- [Add ssl ciphers](https://github.com/kubernetes/ingress-nginx/tree/main/test/e2e/settings/ssl_ciphers.go#L31)

### [[Flag] enable-ssl-passthrough](https://github.com/kubernetes/ingress
nginx tree main  test e2e settings ssl passthrough go L36       With enable ssl passthrough enabled  https   github com kubernetes ingress nginx tree main  test e2e settings ssl passthrough go L55     should enable ssl passthrough proxy port on a different port  https   github com kubernetes ingress nginx tree main  test e2e settings ssl passthrough go L56     should pass unknown traffic to default backend and handle known traffic  https   github com kubernetes ingress nginx tree main  test e2e settings ssl passthrough go L78       configmap stream snippet  https   github com kubernetes ingress nginx tree main  test e2e settings stream snippet go L35     should add value of stream snippet via config map to nginx config  https   github com kubernetes ingress nginx tree main  test e2e settings stream snippet go L42        SSL  TLS protocols  ciphers and headers  https   github com kubernetes ingress nginx tree main  test e2e settings tls go L32     setting cipher suite  https   github com kubernetes ingress nginx tree main  test e2e settings tls go L66     setting max age parameter  https   github com kubernetes ingress nginx tree main  test e2e settings tls go L110     setting includeSubDomains parameter  https   github com kubernetes ingress nginx tree main  test e2e settings tls go L127     setting preload parameter  https   github com kubernetes ingress nginx tree main  test e2e settings tls go L147     overriding what s set from the upstream  https   github com kubernetes ingress nginx tree main  test e2e settings tls go L168     should not use ports during the HTTP to HTTPS redirection  https   github com kubernetes ingress nginx tree main  test e2e settings tls go L190     should not use ports or X Forwarded Host during the HTTP to HTTPS redirection  https   github com kubernetes ingress nginx tree main  test e2e settings tls go L208       annotation validations  https   github com kubernetes ingress nginx tree main  test e2e settings validations validations 
go L30     should allow ingress based on their risk on webhooks  https   github com kubernetes ingress nginx tree main  test e2e settings validations validations go L33     should allow ingress based on their risk on webhooks  https   github com kubernetes ingress nginx tree main  test e2e settings validations validations go L68        SSL  redirect to HTTPS  https   github com kubernetes ingress nginx tree main  test e2e ssl http redirect go L29     should redirect from HTTP to HTTPS when secret is missing  https   github com kubernetes ingress nginx tree main  test e2e ssl http redirect go L36        SSL  secret update  https   github com kubernetes ingress nginx tree main  test e2e ssl secret update go L33     should not appear references to secret updates not used in ingress rules  https   github com kubernetes ingress nginx tree main  test e2e ssl secret update go L40     should return the fake SSL certificate if the secret is invalid  https   github com kubernetes ingress nginx tree main  test e2e ssl secret update go L83        Status  status update  https   github com kubernetes ingress nginx tree main  test e2e status update go L38     should update status field after client go reconnection  https   github com kubernetes ingress nginx tree main  test e2e status update go L43        TCP  tcp services  https   github com kubernetes ingress nginx tree main  test e2e tcpudp tcp go L38     should expose a TCP service  https   github com kubernetes ingress nginx tree main  test e2e tcpudp tcp go L46     should expose an ExternalName TCP service  https   github com kubernetes ingress nginx tree main  test e2e tcpudp tcp go L80     should reload after an update in the configuration  https   github com kubernetes ingress nginx tree main  test e2e tcpudp tcp go L169"}
{"questions":"ingress nginx FAQ For example the Ingress NGINX control plane has global and per Ingress configuration options that make it insecure if enabled in a multi tenant environment For example enabling snippets a global configuration allows any Ingress object to run arbitrary Lua code that could affect the security of all Ingress objects that a controller is running Do not use in multi tenant Kubernetes production installations This project assumes that users that can create Ingress objects are administrators of the cluster Multi tenant Kubernetes","answers":"\n# FAQ\n\n## Multi-tenant Kubernetes\n\nDo not use in multi-tenant Kubernetes production installations. This project assumes that users that can create Ingress objects are administrators of the cluster.\n\nFor example, the Ingress NGINX control plane has global and per-Ingress configuration options that make it insecure, if enabled, in a multi-tenant environment. \n\nFor instance, enabling snippets, a global configuration, allows any Ingress object to run arbitrary Lua code that could affect the security of all Ingress objects that a controller is running. 
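\n\nAs a minimal sketch of that hardening (the ConfigMap name and namespace below are assumptions matching a default Helm install and may differ in your cluster), snippets stay disabled when the controller ConfigMap sets the `allow-snippet-annotations` key to `\"false\"`:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller  # assumed name from a default Helm install\n  namespace: ingress-nginx        # assumed namespace\ndata:\n  allow-snippet-annotations: \"false\"  # reject snippet annotations on Ingress objects\n```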
\n\nWe changed the default for allowing snippets to `false` in https:\/\/github.com\/kubernetes\/ingress-nginx\/pull\/10393.\n\n## Multiple controllers in one cluster\n\nQuestion - How can I easily install multiple instances of the ingress-nginx controller in the same cluster?\n\nYou can install them in different namespaces.\n\n- Create a new namespace\n\n  ```\n  kubectl create namespace ingress-nginx-2\n  ```\n\n- Use Helm to install the additional instance of the ingress controller\n- Ensure you have Helm working (refer to the [Helm documentation](https:\/\/helm.sh\/docs\/))\n- This assumes that the helm repo for the ingress-nginx controller is already added to your Helm config.\n  If you have not added it yet, add the repo to your helm config like this;\n\n  ```\n  helm repo add ingress-nginx https:\/\/kubernetes.github.io\/ingress-nginx\n  ```\n\n- Make sure you have updated the helm repo data;\n\n  ```\n  helm repo update\n  ```\n\n- Now, install an additional instance of the ingress-nginx controller like this:\n\n  ```\n  helm install ingress-nginx-2 ingress-nginx\/ingress-nginx  \\\n  --namespace ingress-nginx-2 \\\n  --set controller.ingressClassResource.name=nginx-two \\\n  --set controller.ingressClass=nginx-two \\\n  --set controller.ingressClassResource.controllerValue=\"example.com\/ingress-nginx-2\" \\\n  --set controller.ingressClassResource.enabled=true \\\n  --set controller.ingressClassByName=true\n  ```\n\nIf you need to install yet another instance, then repeat the procedure to create a new namespace,\nchange the values such as names & namespaces (for example from \"-2\" to \"-3\"), or anything else that meets your needs.\n\nNote that `controller.ingressClassResource.name` and `controller.ingressClass` have to be set correctly.\nThe first is to create the IngressClass object and the other is to modify the deployment of the actual ingress controller pod.\n\n### I can't use multiple namespaces, what should I 
do?\n\nIf you need to install all instances in the same namespace, then you need to specify a different **election id**, like this:\n\n```\nhelm install ingress-nginx-2 ingress-nginx\/ingress-nginx  \\\n--namespace kube-system \\\n--set controller.electionID=nginx-two-leader \\\n--set controller.ingressClassResource.name=nginx-two \\\n--set controller.ingressClass=nginx-two \\\n--set controller.ingressClassResource.controllerValue=\"example.com\/ingress-nginx-2\" \\\n--set controller.ingressClassResource.enabled=true \\\n--set controller.ingressClassByName=true\n```\n\n## Retaining Client IP Address\n\nQuestion - How to obtain the real client IP address?\n\nThe go-to solution for retaining the real client IP address is to enable PROXY protocol.\n\nEnabling PROXY protocol has to be done on both the Ingress NGINX controller and the L4 load balancer in front of the controller.\n\nThe real client IP address is lost by default when traffic is forwarded over the network. But enabling PROXY protocol ensures that the connection details are retained and hence the real client IP address doesn't get lost.\n\nEnabling proxy-protocol on the controller is documented [here](https:\/\/kubernetes.github.io\/ingress-nginx\/user-guide\/nginx-configuration\/configmap\/#use-proxy-protocol).\n\nFor enabling proxy-protocol on the LoadBalancer, please refer to the documentation of your infrastructure provider because that is where the LB is provisioned.\n\nSome more info is available [here](https:\/\/kubernetes.github.io\/ingress-nginx\/user-guide\/miscellaneous\/#source-ip-address)\n\nSome more info on proxy-protocol is [here](https:\/\/kubernetes.github.io\/ingress-nginx\/user-guide\/miscellaneous\/#proxy-protocol)\n\n### Client IP address on a single-node cluster\n\nSingle-node clusters are created for dev & test use with tools like \"kind\" or \"minikube\". 
A trick to simulate a realistic network with these clusters (kind or minikube) is to install MetalLB and configure the IP address of the kind container or the minikube VM\/container as the start and end of the pool for MetalLB in L2 mode. Then the host IP becomes a real client IP address for curl requests sent from the host.\n\nAfter installing the ingress-nginx controller on a kind or a minikube cluster with helm, you can configure it to see the real client IP with a simple change to the service that the ingress-nginx controller creates. The service object of `--type LoadBalancer` has the field `service.spec.externalTrafficPolicy`. If you set the value of this field to \"Local\", then the real IP address of a client is visible to the controller.\n\n```\n% kubectl explain service.spec.externalTrafficPolicy\nKIND:       Service\nVERSION:    v1\n\nFIELD: externalTrafficPolicy <string>\n\nDESCRIPTION:\n    externalTrafficPolicy describes how nodes distribute service traffic they\n    receive on one of the Service's \"externally-facing\" addresses (NodePorts,\n    ExternalIPs, and LoadBalancer IPs). If set to \"Local\", the proxy will\n    configure the service in a way that assumes that external load balancers\n    will take care of balancing the service traffic between nodes, and so each\n    node will deliver traffic only to the node-local endpoints of the service,\n    without masquerading the client source IP. (Traffic mistakenly sent to a\n    node with no endpoints will be dropped.) The default value, \"Cluster\", uses\n    the standard behavior of routing to all endpoints evenly (possibly modified\n    by topology and other features). 
Note that traffic sent to an External IP or\n    LoadBalancer IP from within the cluster will always get \"Cluster\" semantics,\n    but clients sending to a NodePort from within the cluster may need to take\n    traffic policy into account when picking a node.\n    \n    Possible enum values:\n     - `\"Cluster\"` routes traffic to all endpoints.\n     - `\"Local\"` preserves the source IP of the traffic by routing only to\n    endpoints on the same node as the traffic was received on (dropping the\n    traffic if there are no local endpoints).\n```\n\n### Client IP address at L7\n\nThe solution is to get the real client IP address from the [\"X-Forwarded-For\" HTTP header](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Headers\/X-Forwarded-For).\n\nExample: If your application pod behind the Ingress NGINX controller uses the NGINX webserver and its reverse proxy, then you can do the following to preserve the remote client IP.\n\n- First you need to make sure that the X-Forwarded-For header reaches the backend pod. This is done by using an Ingress NGINX controller ConfigMap key. 
It's documented [here](https:\/\/kubernetes.github.io\/ingress-nginx\/user-guide\/nginx-configuration\/configmap\/#use-forwarded-headers)\n\n- Next, edit the `nginx.conf` file inside your app pod to contain the directives shown below:\n\n```\nset_real_ip_from 0.0.0.0\/0; # Trust all IPs (use your VPC CIDR block in production)\nreal_ip_header X-Forwarded-For;\nreal_ip_recursive on;\n\nlog_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n                '$status $body_bytes_sent \"$http_referer\" '\n                '\"$http_user_agent\" '\n                'host=$host x-forwarded-for=$http_x_forwarded_for';\n\naccess_log \/var\/log\/nginx\/access.log main;\n\n```\n\n## Kubernetes v1.22 Migration\n\nIf you are using Ingress objects in your cluster (running Kubernetes older than\nversion 1.22), and you plan to upgrade your Kubernetes version to K8S 1.22 or\nabove, then please read [the migration guide here](.\/user-guide\/k8s-122-migration.md).\n\n## Validation Of **`path`**\n\n- For improving security and also following desired standards on Kubernetes API\nspec, the next release, scheduled for v1.8.0, will include a new & optional\nfeature of validating the value for the key `ingress.spec.rules.http.paths.path`.\n\n- This behavior will be disabled by default on the 1.8.0 release and enabled by\ndefault on the next breaking change release, set for 2.0.0.\n\n- When \"`ingress.spec.rules.http.pathType=Exact`\" or \"`pathType=Prefix`\", this\nvalidation will limit the characters accepted on the field \"`ingress.spec.rules.http.paths.path`\"\nto \"`alphanumeric characters`\" and `\"\/\"`, `\"_\"`, `\"-\"`. Also, in this case,\nthe path should start with `\"\/\"`.\n\n- When the ingress resource path contains other characters (like on rewrite\nconfigurations), the pathType value should be \"`ImplementationSpecific`\".\n\n- API Spec on pathType is documented [here](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/#path-types)\n\n- When 
this option is enabled, the validation will happen on the Admission\nWebhook. So if any new ingress object contains characters other than\nalphanumeric characters and `\"\/\"`, `\"_\"`, `\"-\"` in the `path` field, but\ndoes not set the `pathType` value to `ImplementationSpecific`, then the ingress\nobject will be denied admission.\n\n- The cluster admin should establish validation rules using mechanisms like\n\"`Open Policy Agent`\", to validate that only authorized users can use\nImplementationSpecific pathType and that only the authorized characters can be\nused. [The configmap value is here](https:\/\/kubernetes.github.io\/ingress-nginx\/user-guide\/nginx-configuration\/configmap\/#strict-validate-path-type)\n\n- A complete example of an Open Policy Agent Gatekeeper rule is available [here](https:\/\/kubernetes.github.io\/ingress-nginx\/examples\/openpolicyagent\/)\n\n- If you have any issues or concerns, please do one of the following:\n  - Open a GitHub issue\n  - Comment in our Dev Slack Channel\n  - Open a thread in our Google Group <ingress-nginx-dev@kubernetes.io>\n\n## Why is chunking not working since controller v1.10?\n\n- If your code is setting the HTTP header `\"Transfer-Encoding: chunked\"` and\nthe controller log messages show an error about a duplicate header, it is\nbecause of this change <http:\/\/hg.nginx.org\/nginx\/rev\/2bf7792c262e>\n\n- More details are available in this issue <https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/11162>","site":"ingress nginx","answers_cleaned":"   FAQ     Multi tenant Kubernetes  Do not use in multi tenant Kubernetes production installations  This project assumes that users that can create Ingress objects are administrators of the cluster   For example  the Ingress NGINX control plane has global and per Ingress configuration options that make it insecure  if enabled  in a multi tenant environment    For example  enabling snippets  a global configuration  allows any Ingress object to run arbitrary Lua code that 
could affect the security of all Ingress objects that a controller is running    We changed the default to allow snippets to  false  in https   github com kubernetes ingress nginx pull 10393      Multiple controller in one cluster  Question   How can I easily install multiple instances of the ingress nginx controller in the same cluster   You can install them in different namespaces     Create a new namespace          kubectl create namespace ingress nginx 2          Use Helm to install the additional instance of the ingress controller   Ensure you have Helm working  refer to the  Helm documentation  https   helm sh docs      We have to assume that you have the helm repo for the ingress nginx controller already added to your Helm config    But  if you have not added the helm repo then you can do this to add the repo to your helm config           helm repo add ingress nginx https   kubernetes github io ingress nginx          Make sure you have updated the helm repo data           helm repo update          Now  install an additional instance of the ingress nginx controller like this           helm install ingress nginx 2 ingress nginx ingress nginx        namespace ingress nginx 2       set controller ingressClassResource name nginx two       set controller ingressClass nginx two       set controller ingressClassResource controllerValue  example com ingress nginx 2        set controller ingressClassResource enabled true       set controller ingressClassByName true        If you need to install yet another instance  then repeat the procedure to create a new namespace  change the values such as names   namespaces  for example from   2  to   3    or anything else that meets your needs   Note that  controller ingressClassResource name  and  controller ingressClass  have to be set correctly  The first is to create the IngressClass object and the other is to modify the deployment of the actual ingress controller pod       I can t use multiple namespaces  what should I do   
If you need to install all instances in the same namespace  then you need to specify a different   election id    like this       helm install ingress nginx 2 ingress nginx ingress nginx      namespace kube system     set controller electionID nginx two leader     set controller ingressClassResource name nginx two     set controller ingressClass nginx two     set controller ingressClassResource controllerValue  example com ingress nginx 2      set controller ingressClassResource enabled true     set controller ingressClassByName true         Retaining Client IPAddress  Question   How to obtain the real client ipaddress    The goto solution for retaining the real client IPaddress is to enable PROXY protocol   Enabling PROXY protocol has to be done on both  the Ingress NGINX controller  as well as the L4 load balancer  in front of the controller   The real client IP address is lost by default  when traffic is forwarded over the network  But enabling PROXY protocol ensures that the connection details are retained and hence the real client IP address doesn t get lost   Enabling proxy protocol on the controller is documented  here  https   kubernetes github io ingress nginx user guide nginx configuration configmap  use proxy protocol     For enabling proxy protocol on the LoadBalancer  please refer to the documentation of your infrastructure provider because that is where the LB is provisioned   Some more info available  here  https   kubernetes github io ingress nginx user guide miscellaneous  source ip address   Some more info on proxy protocol is  here  https   kubernetes github io ingress nginx user guide miscellaneous  proxy protocol       client ipaddress on single node cluster  Single node clusters are created for dev   test uses with tools like  kind  or  minikube   A trick to simulate a real use network with these clusters  kind or minikube  is to install Metallb and configure the ipaddress of the kind container or the minikube vm container  as the starting and 
ending of the pool for Metallb in L2 mode  Then the host ip becomes a real client ipaddress  for curl requests sent from the host   After installing ingress nginx controller on a kind or a minikube cluster with helm  you can configure it for real client ip with a simple change to the service that ingress nginx controller creates  The service object of   type LoadBalancer has a field service spec externalTrafficPolicy  If you set the value of this field to  Local  then the real ipaddress of a client is visible to the controller         kubectl explain service spec externalTrafficPolicy KIND        Service VERSION     v1  FIELD  externalTrafficPolicy  string   DESCRIPTION      externalTrafficPolicy describes how nodes distribute service traffic they     receive on one of the Service s  externally facing  addresses  NodePorts      ExternalIPs  and LoadBalancer IPs   If set to  Local   the proxy will     configure the service in a way that assumes that external load balancers     will take care of balancing the service traffic between nodes  and so each     node will deliver traffic only to the node local endpoints of the service      without masquerading the client source IP   Traffic mistakenly sent to a     node with no endpoints will be dropped   The default value   Cluster   uses     the standard behavior of routing to all endpoints evenly  possibly modified     by topology and other features   Note that traffic sent to an External IP or     LoadBalancer IP from within the cluster will always get  Cluster  semantics      but clients sending to a NodePort from within the cluster may need to take     traffic policy into account when picking a node           Possible enum values           Cluster   routes traffic to all endpoints           Local   preserves the source IP of the traffic by routing only to     endpoints on the same node as the traffic was received on  dropping the     traffic if there are no local endpoints            client ipaddress L7  The solution 
is to get the real client IPaddress from the   X Forward For  HTTP header  https   developer mozilla org en US docs Web HTTP Headers X Forwarded For   Example   If your application pod behind Ingress NGINX controller  uses the NGINX webserver and the reverseproxy inside it  then you can do the following to preserve the remote client IP     First you need to make sure that the X Forwarded For header reaches the backend pod  This is done by using a Ingress NGINX conftroller ConfigMap key  Its documented  here  https   kubernetes github io ingress nginx user guide nginx configuration configmap  use forwarded headers     Next  edit  nginx conf  file inside your app pod  to contain the directives shown below       set real ip from 0 0 0 0 0    Trust all IPs  use your VPC CIDR block in production  real ip header X Forwarded For  real ip recursive on   log format main   remote addr    remote user   time local    request                      status  body bytes sent   http referer                       http user agent                     host  host x forwarded for  http x forwarded for    access log  var log nginx access log main           Kubernetes v1 22 Migration  If you are using Ingress objects in your cluster  running Kubernetes older than version 1 22   and you plan to upgrade your Kubernetes version to K8S 1 22 or above  then please read  the migration guide here    user guide k8s 122 migration md       Validation Of    path       For improving security and also following desired standards on Kubernetes API spec  the next release  scheduled for v1 8 0  will include a new   optional feature of validating the value for the key  ingress spec rules http paths path      This behavior will be disabled by default on the 1 8 0 release and enabled by default on the next breaking change release  set for 2 0 0     When   ingress spec rules http pathType Exact   or   pathType Prefix    this validation will limit the characters accepted on the field   ingress spec rules http 
paths path    to   alphanumeric characters    and                   Also  in this case  the path should start with           When the ingress resource path contains other characters  like on rewrite configurations   the pathType value should be   ImplementationSpecific       API Spec on pathType is documented  here  https   kubernetes io docs concepts services networking ingress  path types     When this option is enabled  the validation will happen on the Admission Webhook  So if any new ingress object contains characters other than alphanumeric characters  and                  in the  path  field  but is not using  pathType  value as  ImplementationSpecific   then the ingress object will be denied admission     The cluster admin should establish validation rules using mechanisms like   Open Policy Agent    to validate that only authorized users can use ImplementationSpecific pathType and that only the authorized characters can be used   The configmap value is here  https   kubernetes github io ingress nginx user guide nginx configuration configmap  strict validate path type     A complete example of an Openpolicyagent gatekeeper rule is available  here  https   kubernetes github io ingress nginx examples openpolicyagent      If you have any issues or concerns  please do one of the following      Open a GitHub issue     Comment in our Dev Slack Channel     Open a thread in our Google Group  ingress nginx dev kubernetes io      Why is chunking not working since controller v1 10      If your code is setting the HTTP header   Transfer Encoding  chunked   and the controller log messages show an error about duplicate header  it is because of this change  http   hg nginx org nginx rev 2bf7792c262e     More details are available in this issue  https   github com kubernetes ingress nginx issues 11162 "}
{"questions":"ingress nginx TLS Secrets warning Anytime we reference a TLS secret we mean a PEM encoded X 509 RSA 2048 secret You can generate a self signed certificate and private key with TLS HTTPS Ensure that the certificate order is leaf intermediate root otherwise the controller will not be able to import the certificate and you ll see this error in the logs","answers":"# TLS\/HTTPS\n\n## TLS Secrets\n\nAnytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.\n\n!!! warning\n    Ensure that the certificate order is leaf->intermediate->root, otherwise the controller will not be able to import the certificate, and you'll see this error in the logs ```W1012 09:15:45.920000       6 backend_ssl.go:46] Error obtaining X.509 certificate: unexpected error creating SSL Cert: certificate and private key does not have a matching public key: tls: private key does not match public key```\n\nYou can generate a self-signed certificate and private key with:\n\n```bash\n$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj \"\/CN=${HOST}\/O=${HOST}\" -addext \"subjectAltName = DNS:${HOST}\"\n```\n\nThen create the secret in the cluster via:\n\n```bash\nkubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}\n```\n\nThe resulting secret will be of type `kubernetes.io\/tls`.\n\n## Host names\n\nEnsure that the relevant [ingress rules specify a matching hostname](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/#tls).\n\n## Default SSL Certificate\n\nNGINX provides the option to configure a server as a catch-all with\n[server_name](https:\/\/nginx.org\/en\/docs\/http\/server_names.html)\nfor requests that do not match any of the configured server names.\nThis configuration works out-of-the-box for HTTP traffic.\nFor HTTPS, a certificate is naturally required.\n\nFor this reason the Ingress controller provides the flag `--default-ssl-certificate`.\nThe secret referred 
to by this flag contains the default certificate to be used when\naccessing the catch-all server.\nIf this flag is not provided NGINX will use a self-signed certificate.\n\nFor instance, if you have a TLS secret `foo-tls` in the `default` namespace,\nadd `--default-ssl-certificate=default\/foo-tls` in the `nginx-controller` deployment.\n\nIf the `tls:` section is not set, NGINX will provide the default certificate but will not force HTTPS redirect.\n\nOn the other hand, if the `tls:` section is set - even without specifying a `secretName` option - NGINX will force HTTPS redirect. \n\nTo force redirects for Ingresses that do not specify a TLS-block at all, take a look at `force-ssl-redirect` in [ConfigMap][ConfigMap].\n\n## SSL Passthrough\n\nThe [`--enable-ssl-passthrough`](cli-arguments.md) flag enables the SSL Passthrough feature, which is disabled by\ndefault. This is required to enable passthrough backends in Ingress objects.\n\n!!! warning\n    This feature is implemented by intercepting **all traffic** on the configured HTTPS port (default: 443) and handing\n    it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.\n\nSSL Passthrough leverages [SNI][SNI] and reads the virtual domain from the TLS negotiation, which requires compatible\nclients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back\nand forth between the backend and the client.\n\nIf there is no hostname matching the requested host name, the request is handed over to NGINX on the configured\npassthrough proxy port (default: 442), which proxies the request to the default backend.\n\n!!! 
note\n    Unlike HTTP backends, traffic to Passthrough backends is sent to the *clusterIP* of the backing Service instead of\n    individual Endpoints.\n\n## HTTP Strict Transport Security\n\nHTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified\nthrough the use of a special response header. Once a supported browser receives\nthis header that browser will prevent any communications from being sent over\nHTTP to the specified domain and will instead send all communications over HTTPS.\n\nHSTS is enabled by default.\n\nTo disable this behavior use `hsts: \"false\"` in the configuration [ConfigMap][ConfigMap].\n\n## Server-side HTTPS enforcement through redirect\n\nBy default the controller redirects HTTP clients to the HTTPS port\n443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.\n\nThis can be disabled globally using `ssl-redirect: \"false\"` in the NGINX [config map][ConfigMap],\nor per-Ingress with the `nginx.ingress.kubernetes.io\/ssl-redirect: \"false\"`\nannotation in the particular resource.\n\n!!! tip\n    When using SSL offloading outside of cluster (e.g. 
AWS ELB) it may be useful to enforce a\n    redirect to HTTPS even when there is no TLS certificate available.\n    This can be achieved by using the `nginx.ingress.kubernetes.io\/force-ssl-redirect: \"true\"`\n    annotation in the particular resource.\n\n## Automated Certificate Management with cert-manager\n\n[cert-manager] automatically requests missing or expired certificates from a range of \n[supported issuers][cert-manager-issuer-config] (including [Let's Encrypt]) by monitoring \ningress resources.\n\nTo set up cert-manager you should take a look at this [full example][full-cert-manager-example].\n\nTo enable it for an ingress resource you have to deploy cert-manager, configure a certificate \nissuer and update the manifest:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-demo\n  annotations:\n    cert-manager.io\/issuer: \"letsencrypt-staging\" # Replace this with a production issuer once you've tested it\n    [..]\nspec:\n  tls:\n    - hosts:\n        - ingress-demo.example.com\n      secretName: ingress-demo-tls\n    [...]\n```\n\n## Default TLS Version and Ciphers\n\nTo provide the most secure baseline configuration possible,\ningress-nginx defaults to using TLS 1.2 and 1.3 only, with a [secure set of TLS ciphers][ssl-ciphers].\n\n### Legacy TLS\n\nThe default configuration, though secure, does not support some older browsers and operating systems.\n\nFor instance, TLS 1.1+ is only enabled by default from Android 5.0 on. 
At the time of writing,\nMay 2018, [approximately 15% of Android devices](https:\/\/developer.android.com\/about\/dashboards\/#Platform)\nare not compatible with ingress-nginx's default configuration.\n\nTo change this default behavior, use a [ConfigMap][ConfigMap].\n\nA sample ConfigMap fragment to allow these older clients to connect could look something like the following\n(generated using the [Mozilla SSL Configuration Generator][mozilla-ssl-config-old]):\n\n```\nkind: ConfigMap\napiVersion: v1\nmetadata:\n  name: nginx-config\ndata:\n  ssl-ciphers: \"ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA\"\n  ssl-protocols: \"TLSv1.2 TLSv1.3\"\n```\n\n[Let's Encrypt]:https:\/\/letsencrypt.org\n[ConfigMap]: .\/nginx-configuration\/configmap.md\n[ssl-ciphers]: .\/nginx-configuration\/configmap.md#ssl-ciphers\n[SNI]: https:\/\/en.wikipedia.org\/wiki\/Server_Name_Indication\n[mozilla-ssl-config-old]: https:\/\/ssl-config.mozilla.org\/#server=nginx&config=old\n[cert-manager]: https:\/\/github.com\/jetstack\/cert-manager\/\n[full-cert-manager-example]:https:\/\/cert-manager.io\/docs\/tutorials\/acme\/nginx-ingress\/\n[cert-manager-issuer-config]:https:\/\/cert-manager.io\/docs\/configuration\/","site":"ingress nginx"}
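The TLS-secret steps in the record above can be sketched end to end. This is a minimal illustration, not the docs' own script: the host `demo.example.com` and the file names `tls.key`/`tls.crt` are hypothetical, and the public-key comparison is one way to catch the key/cert mismatch that produces the `backend_ssl.go` error quoted in the warning (requires OpenSSL 1.1.1+ for `-addext`):

```shell
# Generate a self-signed certificate and private key
# (hypothetical host "demo.example.com", illustrative file names).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=demo.example.com/O=demo.example.com" \
  -addext "subjectAltName = DNS:demo.example.com"

# The controller rejects a secret whose key does not match its certificate;
# comparing the derived public keys detects that mismatch up front.
if [ "$(openssl x509 -in tls.crt -pubkey -noout)" = "$(openssl pkey -in tls.key -pubout)" ]; then
  echo "key matches cert"
fi

# Then create the secret (needs a cluster, so not executed here):
# kubectl create secret tls demo-tls --key tls.key --cert tls.crt
```

For a chained certificate, the same check applies to the leaf (first) certificate in the file, per the leaf->intermediate->root ordering requirement above.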
{"questions":"ingress nginx FAQ Migration to Kubernetes 1 22 and apiVersion Please read this If you are using Ingress objects in your cluster running Kubernetes older than v1 22 What is an IngressClass and why is it important for users of ingress nginx controller now and you plan to upgrade to Kubernetes v1 22 this page is relevant to you","answers":"# FAQ - Migration to Kubernetes 1.22 and apiVersion `networking.k8s.io\/v1`\n\nIf you are using Ingress objects in your cluster (running Kubernetes older than v1.22),\nand you plan to upgrade to Kubernetes v1.22, this page is relevant to you.\n\n- Please read this [official blog on deprecated Ingress API versions](https:\/\/kubernetes.io\/blog\/2021\/07\/26\/update-with-ingress-nginx\/)\n- Please read this [official documentation on the IngressClass object](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/#ingress-class)\n\n## What is an IngressClass and why is it important for users of ingress-nginx controller now?\n\nIngressClass is a Kubernetes resource. See the description below.\nIt's important because until now, a default install of the ingress-nginx controller did not require an IngressClass object.\nFrom version 1.0.0 of the ingress-nginx controller, an IngressClass object is required.\n\nOn clusters with more than one instance of the ingress-nginx controller, all instances of the controllers must be aware of which Ingress objects they serve.\nThe `ingressClassName` field of an Ingress is the way to let the controller know about that.\n\n```console\nkubectl explain ingressclass\n```\n\n```\nKIND:     IngressClass\nVERSION:  networking.k8s.io\/v1\nDESCRIPTION:\n     IngressClass represents the class of the Ingress, referenced by the Ingress\n     Spec. The `ingressclass.kubernetes.io\/is-default-class` annotation can be\n     used to indicate that an IngressClass should be considered default. 
When a\n     single IngressClass resource has this annotation set to true, new Ingress\n     resources without a class specified will be assigned this default class.\nFIELDS:\n   apiVersion   <string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources\n   kind <string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n   metadata     <Object>\n     Standard object's metadata. More info:\n     https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n   spec <Object>\n     Spec is the desired state of the IngressClass. 
More info:\n     https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n```\n\n## What has caused this change in behavior?\n\nThere are two primary reasons.\n\n### Reason 1\n\nUntil K8s version 1.21, it was possible to create an Ingress resource using deprecated versions of the Ingress API, such as:\n\n- `extensions\/v1beta1`\n- `networking.k8s.io\/v1beta1`\n  You would get a message about deprecation, but the Ingress resource would get created.\n\nFrom K8s version 1.22 onwards, you can **only** access the Ingress API via the stable, `networking.k8s.io\/v1` API.\nThe reason is explained in the [official blog on deprecated ingress API versions](https:\/\/kubernetes.io\/blog\/2021\/07\/26\/update-with-ingress-nginx\/).\n\n### Reason 2\n\nIf you are already using the ingress-nginx controller and then upgrade to Kubernetes 1.22,\nthere are several scenarios where your existing Ingress objects will not work how you expect.\n\nRead this FAQ to check which scenario matches your use case.\n\n## What is the `ingressClassName` field?\n\n`ingressClassName` is a field in the spec of an Ingress object.\n\n```shell\nkubectl explain ingress.spec.ingressClassName\n```\n\n```console\nKIND:     Ingress\nVERSION:  networking.k8s.io\/v1\nFIELD:    ingressClassName <string>\nDESCRIPTION:\n     IngressClassName is the name of the IngressClass cluster resource. The\n     associated IngressClass defines which controller will implement the\n     resource. This replaces the deprecated `kubernetes.io\/ingress.class`\n     annotation. For backwards compatibility, when that annotation is set, it\n     must be given precedence over this field. The controller may emit a warning\n     if the field and annotation have different values. Implementations of this\n     API should ignore Ingresses without a class specified. An IngressClass\n     resource may be marked as default, which can be used to set a default value\n     for this field. 
For more information, refer to the IngressClass\n     documentation.\n```\n\nThe `.spec.ingressClassName` behavior has precedence over the deprecated `kubernetes.io\/ingress.class` annotation.\n\n## I have only one ingress controller in my cluster. What should I do?\n\nIf a single instance of the ingress-nginx controller is the sole Ingress controller running in your cluster,\nyou should add the annotation \"ingressclass.kubernetes.io\/is-default-class\" in your IngressClass,\nso any new Ingress objects will have this one as default IngressClass.\n\nWhen using Helm, you can enable this annotation by setting `.controller.ingressClassResource.default: true` in your Helm chart installation's values file.\n\nIf you have any old Ingress objects remaining without an IngressClass set, you can do one or more of the following to make the ingress-nginx controller aware of the old objects:\n\n- You can manually set the [`.spec.ingressClassName`](https:\/\/kubernetes.io\/docs\/reference\/kubernetes-api\/service-resources\/ingress-v1\/#IngressSpec) field in the manifest of your own Ingress resources.\n- You can re-create them after setting the `ingressclass.kubernetes.io\/is-default-class` annotation to `true` on the IngressClass\n- Alternatively you can make the ingress-nginx controller watch Ingress objects without the ingressClassName field set by starting your ingress-nginx with the flag [--watch-ingress-without-class=true](#what-is-the-flag-watch-ingress-without-class).\n  When using Helm, you can configure your Helm chart installation's values file with `.controller.watchIngressWithoutClass: true`.\n\nWe recommend that you create the IngressClass as shown below:\n\n```\n---\napiVersion: networking.k8s.io\/v1\nkind: IngressClass\nmetadata:\n  labels:\n    app.kubernetes.io\/component: controller\n  name: nginx\n  annotations:\n    ingressclass.kubernetes.io\/is-default-class: \"true\"\nspec:\n  controller: k8s.io\/ingress-nginx\n```\n\nand add the value 
`spec.ingressClassName=nginx` in your Ingress objects.\n\n## I have many ingress objects in my cluster. What should I do?\n\nIf you have a lot of ingress objects without ingressClass configuration,\nyou can run the ingress controller with the flag `--watch-ingress-without-class=true`.\n\n### What is the flag `--watch-ingress-without-class`?\n\nIt's a flag that is passed, as an argument, to the `nginx-ingress-controller` executable.\nIn the configuration, it looks like this:\n\n```yaml\n# ...\nargs:\n  - \/nginx-ingress-controller\n  - --watch-ingress-without-class=true\n  - --controller-class=k8s.io\/ingress-nginx\n  # ...\n# ...\n```\n\n## I have more than one controller in my cluster, and I'm already using the annotation\n\nNo problem. This should still keep working, but we highly recommend you to test!\nEven though `kubernetes.io\/ingress.class` is deprecated, the ingress-nginx controller still understands that annotation.\nIf you want to follow good practice, you should consider migrating to use IngressClass and `.spec.ingressClassName`.\n\n## I have more than one controller running in my cluster, and I want to use the new API\n\nIn this scenario, you need to create multiple IngressClasses (see the example above).\n\nBe aware that IngressClass works in a very specific way: you will need to change the `.spec.controller` value in your IngressClass and configure the controller to expect the exact same value.\n\nLet's see an example, supposing that you have three IngressClasses:\n\n- IngressClass `ingress-nginx-one`, with `.spec.controller` equal to `example.com\/ingress-nginx1`\n- IngressClass `ingress-nginx-two`, with `.spec.controller` equal to `example.com\/ingress-nginx2`\n- IngressClass `ingress-nginx-three`, with `.spec.controller` equal to `example.com\/ingress-nginx1`\n\nFor private use, you can also use a controller name that doesn't contain a `\/`, e.g. 
`ingress-nginx1`.\n\nWhen deploying your ingress controllers, you will have to change the `--controller-class` field as follows:\n\n- Ingress-Nginx A, configured to use controller class name `example.com\/ingress-nginx1`\n- Ingress-Nginx B, configured to use controller class name `example.com\/ingress-nginx2`\n\nWhen you create an Ingress object with its `ingressClassName` set to `ingress-nginx-two`,\nonly controllers looking for the `example.com\/ingress-nginx2` controller class pay attention to the new object.\n\nGiven that Ingress-Nginx B is set up that way, it will serve that object, whereas Ingress-Nginx A ignores the new Ingress.\n\nBear in mind that if you start Ingress-Nginx B with the command line argument `--watch-ingress-without-class=true`, it will serve:\n\n1. Ingresses without any `ingressClassName` set\n2. Ingresses where the deprecated annotation (`kubernetes.io\/ingress.class`) matches the value set in the command line argument `--ingress-class`\n3. Ingresses that refer to any IngressClass that has the same `spec.controller` as configured in `--controller-class`\n4. 
If you start Ingress-Nginx B with the command line argument `--watch-ingress-without-class=true` and you run Ingress-Nginx A with the command line argument `--watch-ingress-without-class=false` then this is a supported configuration.\n   If you have two ingress-nginx controllers for the same cluster, both running with `--watch-ingress-without-class=true`, then there is likely to be a conflict.\n\n## Why am I seeing \"ingress class annotation is not equal to the expected by Ingress Controller\" in my controller logs?\n\nIt is highly likely that you will also see the name of the ingress resource in the same error message.\nThis error message has been observed when the deprecated annotation (`kubernetes.io\/ingress.class`) is used in an Ingress resource manifest.\nIt is recommended to use the `.spec.ingressClassName` field of the Ingress resource to specify the name of the IngressClass of the Ingress you are defining.","site":"ingress nginx"}
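The multi-controller scenario in the FAQ record above can be sketched as manifests. The class name `ingress-nginx-two` and controller value `example.com\/ingress-nginx2` come from the FAQ's own example; the Service `demo-app` and host `demo.example.com` are hypothetical placeholders, not part of the original docs:

```yaml
# IngressClass served by the controller started with
# --controller-class=example.com/ingress-nginx2 (Ingress-Nginx B above)
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: ingress-nginx-two
spec:
  controller: example.com/ingress-nginx2
---
# An Ingress selecting that class; Ingress-Nginx A ignores it because
# its --controller-class does not match the class's spec.controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app            # hypothetical
spec:
  ingressClassName: ingress-nginx-two
  rules:
    - host: demo.example.com   # hypothetical
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app   # hypothetical Service
                port:
                  number: 80
```

The key point the sketch illustrates: matching happens on the IngressClass's `spec.controller` value, not on the IngressClass name itself.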
{"questions":"ingress nginx note important Regular Expression Support Please see the for Validation Of Ingress Path Matching Regular expressions is not supported in the field The wildcard character must appear by itself as the first DNS label and matches only a single label You cannot have a wildcard label by itself e g Host","answers":"# Ingress Path Matching\n\n## Regular Expression Support\n\n!!! important\n    Regular expressions is not supported in the `spec.rules.host` field. The wildcard character '\\*' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == \"\\*\").\n\n!!! note\n    Please see the [FAQ](..\/faq.md#validation-of-path) for Validation Of __`path`__\n\nThe ingress controller supports **case insensitive** regular expressions in the `spec.rules.http.paths.path` field.\nThis can be enabled by setting the `nginx.ingress.kubernetes.io\/use-regex` annotation to `true` (the default is false).\n\n!!! hint\n    Kubernetes only accept expressions that comply with the RE2 engine syntax. 
It is possible that valid expressions accepted by NGINX cannot be used with ingress-nginx, because the PCRE library (used in NGINX) supports a wider syntax than RE2.\n    See the [RE2 Syntax](https:\/\/github.com\/google\/re2\/wiki\/Syntax) documentation for differences.\n\nSee the [description](.\/nginx-configuration\/annotations.md#use-regex) of the `use-regex` annotation for more details.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: test-ingress\n  annotations:\n    nginx.ingress.kubernetes.io\/use-regex: \"true\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: \/foo\/.*\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: test\n            port:\n              number: 80\n```\n\nThe preceding ingress definition would translate to the following location block within the NGINX configuration for the `test.com` server:\n\n```txt\nlocation ~* \"^\/foo\/.*\" {\n  ...\n}\n```\n\n## Path Priority\n\nIn NGINX, regular expressions follow a **first match** policy. 
In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks.\n\n**Please read the [warning](#warning) before using regular expressions in your ingress definitions.**\n\n### Example\n\nLet the following two ingress definitions be created:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: test-ingress-1\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: \/foo\/bar\n        pathType: Prefix\n        backend:\n          service:\n            name: service1\n            port:\n              number: 80\n      - path: \/foo\/bar\/\n        pathType: Prefix\n        backend:\n          service:\n            name: service2\n            port:\n              number: 80\n```\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: test-ingress-2\n  annotations:\n    nginx.ingress.kubernetes.io\/rewrite-target: \/$1\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: \/foo\/bar\/(.+)\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: service3\n            port: \n              number: 80\n```\n\nThe ingress controller would define the following location blocks, in order of descending length, within the NGINX template for the `test.com` server:\n\n```txt\nlocation ~* ^\/foo\/bar\/.+ {\n  ...\n}\n\nlocation ~* \"^\/foo\/bar\/\" {\n  ...\n}\n\nlocation ~* \"^\/foo\/bar\" {\n  ...\n}\n```\n\nThe following request URI's would match the corresponding location blocks:\n\n- `test.com\/foo\/bar\/1` matches `~* ^\/foo\/bar\/.+` and will go to service 3.\n- `test.com\/foo\/bar\/` matches `~* ^\/foo\/bar\/` and will go to service 2.\n- `test.com\/foo\/bar` matches `~* ^\/foo\/bar` and will go to service 1.\n\n**IMPORTANT NOTES**:\n\n- If the `use-regex` OR `rewrite-target` 
annotation is used on any Ingress for a given host, then the case insensitive regular expression [location modifier](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#location) will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.\n\n## Warning\n\nThe following example describes a case that may cause unwanted path matching behavior.\n\nThis case is expected and a result of NGINX's first match policy for paths that use the regular expression [location modifier](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#location). For more information about how a path is chosen, please read the following article: [\"Understanding Nginx Server and Location Block Selection Algorithms\"](https:\/\/www.digitalocean.com\/community\/tutorials\/understanding-nginx-server-and-location-block-selection-algorithms).\n\n### Example\n\nLet the following ingress be defined:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: test-ingress-3\n  annotations:\n    nginx.ingress.kubernetes.io\/use-regex: \"true\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: \/foo\/bar\/bar\n        pathType: Prefix\n        backend:\n          service:\n            name: test\n            port:\n              number: 80\n      - path: \/foo\/bar\/[A-Z0-9]{3}\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: test\n            port:\n              number: 80\n```\n\nThe ingress controller would define the following location blocks (in this order) within the NGINX template for the `test.com` server:\n\n```txt\nlocation ~* \"^\/foo\/bar\/[A-Z0-9]{3}\" {\n  ...\n}\n\nlocation ~* \"^\/foo\/bar\/bar\" {\n  ...\n}\n```\n\nA request to `test.com\/foo\/bar\/bar` would match the `^\/foo\/bar\/[A-Z0-9]{3}` location block instead of the longest EXACT matching path.","site":"ingress nginx"}
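The ordering-plus-first-match behavior described in the record above can be simulated with a short Python sketch (illustrative only, not controller source): order the paths by descending length, then return the first case-insensitive regex match, reproducing the service1/service2/service3 routing from the example.

```python
import re

# Paths from the two example Ingresses, mapped to their backends.
paths = {
    "/foo/bar": "service1",
    "/foo/bar/": "service2",
    "/foo/bar/(.+)": "service3",
}

# ingress-nginx writes location blocks ordered by descending path length.
ordered = sorted(paths.items(), key=lambda kv: len(kv[0]), reverse=True)

def match(uri):
    """Return the backend of the FIRST matching location block,
    using a case-insensitive anchored regex (the ~* modifier)."""
    for path, service in ordered:
        if re.match("^" + path, uri, re.IGNORECASE):
            return service
    return None

print(match("/foo/bar/1"))  # service3
print(match("/foo/bar/"))   # service2
print(match("/foo/bar"))    # service1
```

The same first-match policy explains the Warning above: a longer regex path such as `/foo/bar/[A-Z0-9]{3}` is emitted before `/foo/bar/bar` and wins even against an exact-looking path.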
{"questions":"ingress nginx The ingress nginx ingress controller can be used to directly expose servers Enabling FastCGI in your Ingress only requires setting the backend protocol annotation to and with a couple more annotations you can customize the way ingress nginx handles the communication with your FastCGI server For most practical use cases php applications are a good example PHP is not HTML so a FastCGI server like php fpm processes a index php script for the response to a request See a working example below FastCGI is a for interfacing interactive programs with a It s aim is to reduce the overhead related to interfacing between web server and CGI programs allowing a server to handle more web page requests per unit of time mdash Wikipedia Exposing FastCGI Servers","answers":"# Exposing FastCGI Servers\n\n> **FastCGI** is a [binary protocol](https:\/\/en.wikipedia.org\/wiki\/Binary_protocol \"Binary protocol\") for interfacing interactive programs with a [web server](https:\/\/en.wikipedia.org\/wiki\/Web_server \"Web server\"). [...] (Its) aim is to reduce the overhead related to interfacing between web server and CGI programs, allowing a server to handle more web page requests per unit of time.\n>\n> &mdash; Wikipedia\n\nThe _ingress-nginx_ ingress controller can be used to directly expose [FastCGI](https:\/\/en.wikipedia.org\/wiki\/FastCGI) servers. Enabling FastCGI in your Ingress only requires setting the _backend-protocol_ annotation to `FCGI`, and with a couple more annotations you can customize the way _ingress-nginx_ handles the communication with your FastCGI _server_.\n\nFor most practical use-cases, PHP applications are a good example. PHP is not HTML, so a FastCGI server like php-fpm processes an index.php script for the response to a request. See a working example below.\n\nThis [post in a FastCGI feature issue](https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/8207#issuecomment-2161405468) describes a test for the FastCGI feature. 
The same test is described below.\n\n## Example Objects to expose a FastCGI server pod\n\n### The FastCGI server pod\n\nThe _Pod_ object example below exposes port `9000`, which is the conventional FastCGI port.\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: example-app\n  labels:\n    app: example-app\nspec:\n  containers:\n  - name: example-app\n    image: php:fpm-alpine\n    ports:\n    - containerPort: 9000\n      name: fastcgi\n```\n\n- For this example to work, an HTML response should be received from the FastCGI server being exposed\n- An HTTP request to the FastCGI server pod should be sent\n- The response should be generated by a PHP script, as that is what we are demonstrating here\n\nThe image we are using here, `php:fpm-alpine`, does not ship with a ready-to-use PHP script inside it. So we need to provide the image with a simple PHP script for this example to work.\n\n- Use `kubectl exec` to get into the example-app pod\n- You will land at the path `\/var\/www\/html`\n- Create a simple PHP script at that path named `index.php`\n- Make the index.php file look like this:\n\n```php\n<!DOCTYPE html>\n<html>\n    <head>\n        <title>PHP Test<\/title>\n    <\/head>\n    <body>\n        <?php echo '<p>FastCGI Test Worked!<\/p>'; ?>\n    <\/body>\n<\/html>\n```\n\n- Save and exit from the shell in the pod\n- If you delete the pod, then you will have to recreate the file, as this method is not persistent\n\n### The FastCGI service\n\nThe _Service_ object example below matches port `9000` from the _Pod_ object above.\n\n```yaml\napiVersion: v1\nkind: Service\nmetadata:\n  name: example-service\nspec:\n  selector:\n    app: example-app\n  ports:\n  - port: 9000\n    targetPort: 9000\n    name: fastcgi\n```\n\n### The configMap object and the ingress object\n\nThe _Ingress_ and _ConfigMap_ objects below demonstrate the supported _FastCGI_ specific annotations.\n\n!!! 
Important\n    NGINX has 50 [FastCGI directives](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_fastcgi_module.html#directives).\n    Not all of them have been exposed in the ingress yet.\n\n### The ConfigMap object\n\nThis configMap object is required to set the parameters of [FastCGI directives](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_fastcgi_module.html#directives).\n\n!!! Attention\n    - The _ConfigMap_ **must** be created before creating the ingress object\n\n- The _Ingress Controller_ needs to find the configMap when the _Ingress_ object with the FastCGI annotations is created\n- So create the configMap before the ingress\n- If the configMap is created after the ingress is created, then you will need to restart the _Ingress Controller_ pods.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: example-cm\ndata:\n  SCRIPT_FILENAME: \"\/var\/www\/html\/index.php\"\n```\n\n### The ingress object\n\n- Do not create the ingress shown below until you have created the configMap seen above.\n- You can see that this ingress matches the service `example-service`, and the port named `fastcgi` from above.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io\/backend-protocol: \"FCGI\"\n    nginx.ingress.kubernetes.io\/fastcgi-index: \"index.php\"\n    nginx.ingress.kubernetes.io\/fastcgi-params-configmap: \"example-cm\"\n  name: example-app\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: app.example.com\n    http:\n      paths:\n      - path: \/\n        pathType: Prefix\n        backend:\n          service:\n            name: example-service\n            port:\n              name: fastcgi\n```\n\n## Send a request to the exposed FastCGI server\n\nSend the HTTP request either to the external IP of the ingress or to the ClusterIP address of the ingress-nginx controller pod.\n\n```\n% curl 172.19.0.2 -H \"Host: app.example.com\" -vik\n* 
  Trying 172.19.0.2:80...\n* Connected to 172.19.0.2 (172.19.0.2) port 80\n> GET \/ HTTP\/1.1\n> Host: app.example.com\n> User-Agent: curl\/8.6.0\n> Accept: *\/*\n> \n< HTTP\/1.1 200 OK\nHTTP\/1.1 200 OK\n< Date: Wed, 12 Jun 2024 07:11:59 GMT\nDate: Wed, 12 Jun 2024 07:11:59 GMT\n< Content-Type: text\/html; charset=UTF-8\nContent-Type: text\/html; charset=UTF-8\n< Transfer-Encoding: chunked\nTransfer-Encoding: chunked\n< Connection: keep-alive\nConnection: keep-alive\n< X-Powered-By: PHP\/8.3.8\nX-Powered-By: PHP\/8.3.8\n\n< \n<!DOCTYPE html>\n<html>\n    <head>\n        <title>PHP Test<\/title>\n    <\/head>\n    <body>\n        <p>FastCGI Test Worked<\/p>    <\/body>\n<\/html>\n\n```\n\n## FastCGI Ingress Annotations\n\nTo enable FastCGI, the `nginx.ingress.kubernetes.io\/backend-protocol` annotation needs to be set to `FCGI`, which overrides the default `HTTP` value.\n\n> `nginx.ingress.kubernetes.io\/backend-protocol: \"FCGI\"`\n\n**This enables the _FastCGI_ mode for all paths defined in the _Ingress_ object**\n\n### The `nginx.ingress.kubernetes.io\/fastcgi-index` Annotation\n\nTo specify an index file, the `fastcgi-index` annotation value can optionally be set.  In the example below, the value is set to `index.php`.  
This annotation corresponds to [the _NGINX_ `fastcgi_index` directive](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_fastcgi_module.html#fastcgi_index).\n\n> `nginx.ingress.kubernetes.io\/fastcgi-index: \"index.php\"`\n\n### The `nginx.ingress.kubernetes.io\/fastcgi-params-configmap` Annotation\n\nTo specify [_NGINX_ `fastcgi_param` directives](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_fastcgi_module.html#fastcgi_param), the `fastcgi-params-configmap` annotation is used, which in turn must lead to a _ConfigMap_ object containing the _NGINX_ `fastcgi_param` directives as key\/values.\n\n> `nginx.ingress.kubernetes.io\/fastcgi-params-configmap: \"example-configmap\"`\n\nAnd the _ConfigMap_ object to specify the `SCRIPT_FILENAME` and `HTTP_PROXY` _NGINX_ `fastcgi_param` directives will look like the following:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: example-configmap\ndata:\n  SCRIPT_FILENAME: \"\/example\/index.php\"\n  HTTP_PROXY: \"\"\n```\n\nUsing the _namespace\/_ prefix is also supported, for example:\n\n> `nginx.ingress.kubernetes.io\/fastcgi-params-configmap: \"example-namespace\/example-configmap\"`","site":"ingress nginx"}
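As a rough illustration of the `fastcgi-params-configmap` mechanism described above, the Python sketch below (a hypothetical helper, not part of ingress-nginx; the exact quoting the controller emits may differ) renders ConfigMap data keys as NGINX `fastcgi_param` directive lines.

```python
def render_fastcgi_params(configmap_data):
    """Render ConfigMap data key/values as NGINX fastcgi_param lines
    (illustrative formatting, sorted for deterministic output)."""
    return [
        f'fastcgi_param {key} "{value}";'
        for key, value in sorted(configmap_data.items())
    ]

# The example ConfigMap from the walkthrough above.
cm = {"SCRIPT_FILENAME": "/var/www/html/index.php"}

for line in render_fastcgi_params(cm):
    print(line)  # fastcgi_param SCRIPT_FILENAME "/var/www/html/index.php";
```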
{"questions":"ingress nginx Using IngressClasses Multiple Ingress controllers To fix this problem use The annotation is not being preferred or suggested to use as it can be deprecated in the future Better to use the field But when user has deployed with then the ingress class resource field is not used By default deploying multiple Ingress controllers e g will result in all controllers simultaneously racing to update Ingress status fields in confusing ways","answers":"# Multiple Ingress controllers\n\nBy default, deploying multiple Ingress controllers (e.g., `ingress-nginx` & `gce`) will result in all controllers simultaneously racing to update Ingress status fields in confusing ways.\n\nTo fix this problem, use [IngressClasses](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/#ingress-class). The `kubernetes.io\/ingress.class` annotation is deprecated and may be removed in the future, so it is not recommended; use the `ingress.spec.ingressClassName` field instead.\nNote that when the controller is deployed with `scope.enabled`, the IngressClass resource field is not used.\n\n## Using IngressClasses\n\nIf all ingress controllers respect IngressClasses (e.g. 
multiple instances of ingress-nginx v1.0), you can deploy two Ingress controllers by granting them control over two different IngressClasses, then selecting one of the two IngressClasses with `ingressClassName`.\n\nFirst, ensure the `--controller-class=` and `--ingress-class` are set to something different on each ingress controller. If your additional ingress controller is to be installed in a namespace where one or more ingress-nginx controllers are already installed, then you need to specify a different, unique `--election-id` for the new instance of the controller.\n\n```yaml\n# ingress-nginx Deployment\/Statefulset\nspec:\n  template:\n     spec:\n       containers:\n         - name: ingress-nginx-internal-controller\n           args:\n             - \/nginx-ingress-controller\n             - '--election-id=ingress-controller-leader'\n             - '--controller-class=k8s.io\/internal-ingress-nginx'\n             - '--ingress-class=k8s.io\/internal-nginx'\n            ...\n```\n\nThen use the same value in the IngressClass:\n\n```yaml\n# ingress-nginx IngressClass\napiVersion: networking.k8s.io\/v1\nkind: IngressClass\nmetadata:\n  name: internal-nginx\nspec:\n  controller: k8s.io\/internal-ingress-nginx\n  ...\n```\n\nAnd refer to that IngressClass in your Ingress:\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: my-ingress\nspec:\n  ingressClassName: internal-nginx\n  ...\n```\n\nor if installing with Helm:\n\n```yaml\ncontroller:\n  electionID: ingress-controller-leader\n  ingressClass: internal-nginx  # default: nginx\n  ingressClassResource:\n    name: internal-nginx  # default: nginx\n    enabled: true\n    default: false\n    controllerValue: \"k8s.io\/internal-ingress-nginx\"  # default: k8s.io\/ingress-nginx\n```\n\n!!! 
important\n\n    When running multiple ingress-nginx controllers, they will only process an unset class annotation if one of the controllers uses the default\n    `--controller-class` value (see the `IsValid` method in `internal\/ingress\/annotations\/class\/main.go`); otherwise, the class annotation becomes required.\n\n    If `--controller-class` is set to the default value of `k8s.io\/ingress-nginx`, the controller will monitor Ingresses with no class annotation *and* Ingresses with annotation class set to `nginx`. Use a non-default value for `--controller-class` to ensure that the controller only satisfies the specific class of Ingresses.\n\n## Using the kubernetes.io\/ingress.class annotation (in deprecation)\n\nIf you're running multiple ingress controllers where one or more do not support IngressClasses, you must specify the annotation `kubernetes.io\/ingress.class: \"nginx\"` in all ingresses that you would like ingress-nginx to claim.\n\nFor instance,\n\n```yaml\nmetadata:\n  name: foo\n  annotations:\n    kubernetes.io\/ingress.class: \"gce\"\n```\n\nwill target the GCE controller, forcing the Ingress-NGINX controller to ignore it, while an annotation like:\n\n```yaml\nmetadata:\n  name: foo\n  annotations:\n    kubernetes.io\/ingress.class: \"nginx\"\n```\n\nwill target the Ingress-NGINX controller, forcing the GCE controller to ignore it.\n\nYou can change the value \"nginx\" to something else by setting the `--ingress-class` flag:\n\n```yaml\nspec:\n  template:\n     spec:\n       containers:\n         - name: ingress-nginx-internal-controller\n           args:\n             - \/nginx-ingress-controller\n             - --ingress-class=internal-nginx\n```\n\nthen setting the corresponding `kubernetes.io\/ingress.class: \"internal-nginx\"` annotation on your Ingresses.\n\nTo reiterate, setting the annotation to any value which does not match a valid ingress class will force the Ingress-Nginx Controller to ignore your Ingress.\nIf you are only running a 
single Ingress-Nginx Controller, this can be achieved by setting the annotation to any value except \"nginx\" or an empty string.\n\nDo this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.","site":"ingress nginx","answers_cleaned":""}
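The class-selection rules described in the record above (a controller with the default class claims Ingresses with an unset annotation; any other annotation value must match a controller's class exactly, or that controller ignores the Ingress) can be sketched in a few lines. This is an illustrative model only, not the controller's actual `IsValid` implementation in `internal/ingress/annotations/class/main.go`; the function name and structure are hypothetical.

```python
from typing import Optional

# Hypothetical sketch of how the kubernetes.io/ingress.class annotation
# selects a controller, per the rules described above. Not the real code.
DEFAULT_CLASS = "nginx"  # the default --ingress-class value


def controller_claims(controller_class: str, annotation: Optional[str]) -> bool:
    """Decide whether a controller configured with `controller_class`
    should process an Ingress carrying `annotation` (None = unset)."""
    if annotation is None:
        # Only a controller running with the default class picks up
        # Ingresses that carry no class annotation at all.
        return controller_class == DEFAULT_CLASS
    # Otherwise the annotation must match the controller's class exactly;
    # any other value makes this controller ignore the Ingress.
    return annotation == controller_class


print(controller_claims("nginx", None))                       # True
print(controller_claims("internal-nginx", None))              # False
print(controller_claims("internal-nginx", "internal-nginx"))  # True
print(controller_claims("nginx", "gce"))                      # False
```

The last case is the "To reiterate" rule: an annotation that matches no controller's class causes every controller to ignore the Ingress.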
{"questions":"ingress nginx Prometheus and Grafana installation using Pod Annotations This installs Prometheus and Grafana in the same namespace as NGINX Ingress Prometheus and Grafana installation using Service Monitors This installs Prometheus and Grafana in two different namespaces This is the preferred method and helm charts supports this by default This tutorial will show you how to install and for scraping the metrics of the Ingress Nginx Controller Monitoring Prometheus and Grafana installation using Pod Annotations Two different methods to install and configure Prometheus and Grafana are described in this doc","answers":"# Monitoring\n\nTwo different methods to install and configure Prometheus and Grafana are described in this doc.\n* Prometheus and Grafana installation using Pod Annotations. This installs Prometheus and Grafana in the same namespace as NGINX Ingress.\n* Prometheus and Grafana installation using Service Monitors. This installs Prometheus and Grafana in two different namespaces. This is the preferred method, and the Helm charts support this by default.\n\n## Prometheus and Grafana installation using Pod Annotations\n\nThis tutorial will show you how to install [Prometheus](https:\/\/prometheus.io\/) and [Grafana](https:\/\/grafana.com\/) for scraping the metrics of the Ingress-Nginx Controller.\n\n!!! important\n    This example uses `emptyDir` volumes for Prometheus and Grafana. This means that once the pod is terminated, you will lose all the data.\n\n### Before You Begin\n\n- The Ingress-Nginx Controller should already be deployed according to the deployment instructions [here](..\/deploy\/index.md).\n\n- The controller should be configured for exporting metrics. This requires 3 configurations to the controller. These configurations are:\n  1. controller.metrics.enabled=true\n  2. controller.podAnnotations.\"prometheus.io\/scrape\"=\"true\"\n  3. 
controller.podAnnotations.\"prometheus.io\/port\"=\"10254\"\n\n  - The easiest way to configure the controller for metrics is via helm upgrade. Assuming you have installed the ingress-nginx controller as a helm release named ingress-nginx, you can simply type the command shown below:\n  ```\n  helm upgrade ingress-nginx ingress-nginx \\\n  --repo https:\/\/kubernetes.github.io\/ingress-nginx \\\n  --namespace ingress-nginx \\\n  --set controller.metrics.enabled=true \\\n  --set-string controller.podAnnotations.\"prometheus\\.io\/scrape\"=\"true\" \\\n  --set-string controller.podAnnotations.\"prometheus\\.io\/port\"=\"10254\"\n  ```\n  - You can validate that the controller is configured for metrics by looking at the values of the installed release, like this:\n  ```\n  helm get values ingress-nginx --namespace ingress-nginx\n  ```\n  - You should be able to see the values shown below:\n  ```\n  ..\n  controller:\n    metrics:\n      enabled: true\n    podAnnotations:\n      prometheus.io\/port: \"10254\"\n      prometheus.io\/scrape: \"true\"\n  ..\n  ```\n   - If you are **not using helm**, you will have to edit your manifests like this:\n     - Service manifest:\n       ```\n       apiVersion: v1\n       kind: Service\n       ..\n       spec:\n         ports:\n           - name: prometheus\n             port: 10254\n             targetPort: prometheus\n             ..\n\n       ```\n      - Deployment manifest:\n         ```\n         apiVersion: apps\/v1\n         kind: Deployment\n         ..\n         spec:\n           template:\n             metadata:\n               annotations:\n                 prometheus.io\/scrape: \"true\"\n                 prometheus.io\/port: \"10254\"\n             spec:\n               containers:\n                 - name: controller\n                   ports:\n                     - name: prometheus\n                       containerPort: 10254\n                     ..\n         ```\n\n### Deploy and configure Prometheus 
Server\n\nNote that the kustomize bases used in this tutorial are stored in the [deploy](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/deploy) folder of the GitHub repository [kubernetes\/ingress-nginx](https:\/\/github.com\/kubernetes\/ingress-nginx).\n\n- The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and is configured in a way that it can find the ingress controller pods, no extra configuration is needed.\n\n- If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server.\n\n- Running the following command deploys Prometheus in Kubernetes:\n\n  ```\n  kubectl apply --kustomize github.com\/kubernetes\/ingress-nginx\/deploy\/prometheus\/\n  ```\n\n#### Prometheus Dashboard\n\n- Open the Prometheus dashboard in a web browser:\n\n  ```console\n  kubectl get svc -n ingress-nginx\n  NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE\n  default-http-backend   ClusterIP   10.103.59.201   <none>        80\/TCP                                       3d\n  ingress-nginx          NodePort    10.97.44.72     <none>        80:30100\/TCP,443:30154\/TCP,10254:32049\/TCP   5h\n  prometheus-server      NodePort    10.98.233.86    <none>        9090:32630\/TCP                               1m\n  ```\n\n  - Obtain the IP address of the nodes in the running cluster:\n\n  ```console\n  kubectl get nodes -o wide\n  ```\n\n  - In some cases, where the nodes only have internal IP addresses, we need to execute:\n\n  ```\n  kubectl get nodes --selector=kubernetes.io\/role!=master -o jsonpath={.items[*].status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}\n  10.192.0.2 10.192.0.3 10.192.0.4\n  ```\n\n  - Open your browser and visit the following URL: _http:\/\/{node IP address}:{prometheus-svc-nodeport}_ to 
load the Prometheus Dashboard.\n\n  - According to the above example, this URL will be http:\/\/10.192.0.3:32630\n\n  ![Prometheus Dashboard](..\/images\/prometheus-dashboard.png)\n\n#### Grafana\n  - Install Grafana using the command below\n  ```\n  kubectl apply --kustomize github.com\/kubernetes\/ingress-nginx\/deploy\/grafana\/\n  ```\n  - Look at the services\n  ```\n  kubectl get svc -n ingress-nginx\n  NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE\n  default-http-backend   ClusterIP   10.103.59.201   <none>        80\/TCP                                       3d\n  ingress-nginx          NodePort    10.97.44.72     <none>        80:30100\/TCP,443:30154\/TCP,10254:32049\/TCP   5h\n  prometheus-server      NodePort    10.98.233.86    <none>        9090:32630\/TCP                               10m\n  grafana                NodePort    10.98.233.87    <none>        3000:31086\/TCP                               10m\n  ```\n\n  - Open your browser and visit the following URL: _http:\/\/{node IP address}:{grafana-svc-nodeport}_ to load the Grafana Dashboard.\nAccording to the above example, this URL will be http:\/\/10.192.0.3:31086\n\n  The username and password are both `admin`.\n\n  - After the login you can import the Grafana dashboard from [official dashboards](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/deploy\/grafana\/dashboards), by following the steps given below:\n\n    - Navigate to the left-hand panel of Grafana\n    - Hover on the gearwheel icon for Configuration and click \"Data Sources\"\n    - Click \"Add data source\"\n    - Select \"Prometheus\"\n    - Enter the details (note: I used http:\/\/CLUSTER_IP_PROMETHEUS_SVC:9090)\n    - Left menu (hover over +) -> Dashboard\n    - Click \"Import\"\n    - Enter the copy-pasted JSON from https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/main\/deploy\/grafana\/dashboards\/nginx.json\n    - Click Import JSON\n    - Select 
the Prometheus data source\n    - Click \"Import\"\n\n  ![Grafana Dashboard](..\/images\/grafana.png)\n\n### Caveats\n\n#### Wildcard ingresses\n\n  - By default, request metrics are labeled with the hostname. When you have a wildcard domain ingress, there will be no metrics for that ingress (to prevent the metrics from exploding in cardinality). To get metrics in this case you have two options:\n    - Run the ingress controller with `--metrics-per-host=false`. You will lose labeling by hostname, but still have labeling by ingress.\n    - Run the ingress controller with `--metrics-per-undefined-host=true --metrics-per-host=true`. You will get labeling by hostname even if the hostname is not explicitly defined on an ingress. Be warned that cardinality could explode due to many hostnames, and CPU usage could also increase.\n\n### Grafana dashboard using ingress resource\n  - If you want to expose the dashboard for Grafana using an ingress resource, then you can:\n    - change the service type of the prometheus-server service and the grafana service to \"ClusterIP\" like this:\n    ```\n    kubectl -n ingress-nginx edit svc grafana\n    ```\n    - This will open the currently deployed service grafana in the default editor configured in your shell (vi\/nvim\/nano\/other)\n    - scroll down to the line that looks like \"type: NodePort\"\n    - change it to look like \"type: ClusterIP\". 
Save and exit.\n    - create an ingress resource with backend as \"grafana\" and port as \"3000\"\n  - Similarly, you can edit the service \"prometheus-server\" and add an ingress resource.\n\n## Prometheus and Grafana installation using Service Monitors\nThis document assumes you're using Helm and the kube-prometheus-stack package to install Prometheus and Grafana.\n\n### Verify Ingress-Nginx Controller is installed\n\n- The Ingress-Nginx Controller should already be deployed according to the deployment instructions [here](..\/deploy\/index.md).\n\n- To check if the Ingress controller is deployed, run:\n  ```\n  kubectl get pods -n ingress-nginx\n  ```\n- The result should look something like:\n  ```\n  NAME                                        READY   STATUS    RESTARTS   AGE\n  ingress-nginx-controller-7c489dc7b7-ccrf6   1\/1     Running   0          19h\n  ```\n\n### Verify Prometheus is installed\n\n- To check if Prometheus is already deployed, run the following command:\n\n  ```\n  helm ls -A\n  ```\n  ```\n  NAME         \tNAMESPACE    \tREVISION\tUPDATED                             \tSTATUS  \tCHART                       \tAPP VERSION\n  ingress-nginx\tingress-nginx\t10      \t2022-01-20 18:08:55.267373 -0800 PST\tdeployed\tingress-nginx-4.0.16        \t1.1.1\n  prometheus   \tprometheus   \t1       \t2022-01-20 16:07:25.086828 -0800 PST\tdeployed\tkube-prometheus-stack-30.1.0\t0.53.1\n  ```\n- Notice that Prometheus is installed in a different namespace than ingress-nginx\n\n- If Prometheus is not installed, you can install it from [here](https:\/\/artifacthub.io\/packages\/helm\/prometheus-community\/kube-prometheus-stack)\n\n### Re-configure Ingress-Nginx Controller\n\n- The Ingress NGINX controller needs to be reconfigured for exporting metrics. This requires 3 additional configurations to the controller. 
These configurations are:\n  ```\n  controller.metrics.enabled=true\n  controller.metrics.serviceMonitor.enabled=true\n  controller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\"\n  ```\n- The easiest way of doing this is via helm upgrade:\n  ```\n  helm upgrade ingress-nginx ingress-nginx\/ingress-nginx \\\n  --namespace ingress-nginx \\\n  --set controller.metrics.enabled=true \\\n  --set controller.metrics.serviceMonitor.enabled=true \\\n  --set controller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\"\n  ```\n- Here `controller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\"` should match the name of the helm release of the `kube-prometheus-stack`.\n\n- You can validate that the controller has been successfully reconfigured to export metrics by looking at the values of the installed release, like this:\n  ```\n  helm get values ingress-nginx --namespace ingress-nginx\n  ```\n  ```\n  controller:\n    metrics:\n      enabled: true\n      serviceMonitor:\n        additionalLabels:\n          release: prometheus\n        enabled: true\n  ```\n### Configure Prometheus\n\n- Since Prometheus is running in a different namespace and not in the ingress-nginx namespace, it will not be able to discover ServiceMonitors in other namespaces when installed with default values. Reconfigure your kube-prometheus-stack Helm installation to set the `serviceMonitorSelectorNilUsesHelmValues` flag to false. By default, Prometheus only discovers PodMonitors within its own namespace. 
This should be disabled by setting `podMonitorSelectorNilUsesHelmValues` to false\n- The configurations required are:\n  ```\n  prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false\n  prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false\n  ```\n- The easiest way of doing this is to use `helm upgrade ...`\n  ```\n  helm upgrade prometheus prometheus-community\/kube-prometheus-stack \\\n  --namespace prometheus  \\\n  --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \\\n  --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false\n  ```\n- You can validate that Prometheus has been reconfigured by looking at the values of the installed release, like this:\n  ```\n  helm get values prometheus --namespace prometheus\n  ```\n- You should be able to see the values shown below:\n  ```\n  prometheus:\n    prometheusSpec:\n      podMonitorSelectorNilUsesHelmValues: false\n      serviceMonitorSelectorNilUsesHelmValues: false\n  ```\n\n### Connect and view Prometheus dashboard\n- Port forward to Prometheus service. 
Find out the name of the prometheus service by using the following command:\n  ```\n  kubectl get svc -n prometheus\n  ```\n\n  The result of this command would look like:\n  ```\n  NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE\n  alertmanager-operated                     ClusterIP   None             <none>        9093\/TCP,9094\/TCP,9094\/UDP   7h46m\n  prometheus-grafana                        ClusterIP   10.106.28.162    <none>        80\/TCP                       7h46m\n  prometheus-kube-prometheus-alertmanager   ClusterIP   10.108.125.245   <none>        9093\/TCP                     7h46m\n  prometheus-kube-prometheus-operator       ClusterIP   10.110.220.1     <none>        443\/TCP                      7h46m\n  prometheus-kube-prometheus-prometheus     ClusterIP   10.102.72.134    <none>        9090\/TCP                     7h46m\n  prometheus-kube-state-metrics             ClusterIP   10.104.231.181   <none>        8080\/TCP                     7h46m\n  prometheus-operated                       ClusterIP   None             <none>        9090\/TCP                     7h46m\n  prometheus-prometheus-node-exporter       ClusterIP   10.96.247.128    <none>        9100\/TCP                     7h46m\n  ```\n  prometheus-kube-prometheus-prometheus is the service we want to port forward to. We can do so using the following command:\n  ```\n  kubectl port-forward svc\/prometheus-kube-prometheus-prometheus -n prometheus 9090:9090\n  ```\n  When you run the above command, you should see something like:\n  ```\n  Forwarding from 127.0.0.1:9090 -> 9090\n  Forwarding from [::1]:9090 -> 9090\n  ```\n- Open your browser and visit the following URL http:\/\/localhost:{port-forwarded-port} according to the above example it would be, http:\/\/localhost:9090\n\n  ![Prometheus Dashboard](..\/images\/prometheus-dashboard1.png)\n\n### Connect and view Grafana dashboard\n- Port forward to Grafana service. 
Find out the name of the Grafana service by using the following command:\n  ```\n  kubectl get svc -n prometheus\n  ```\n\n  The result of this command would look like:\n  ```\n  NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE\n  alertmanager-operated                     ClusterIP   None             <none>        9093\/TCP,9094\/TCP,9094\/UDP   7h46m\n  prometheus-grafana                        ClusterIP   10.106.28.162    <none>        80\/TCP                       7h46m\n  prometheus-kube-prometheus-alertmanager   ClusterIP   10.108.125.245   <none>        9093\/TCP                     7h46m\n  prometheus-kube-prometheus-operator       ClusterIP   10.110.220.1     <none>        443\/TCP                      7h46m\n  prometheus-kube-prometheus-prometheus     ClusterIP   10.102.72.134    <none>        9090\/TCP                     7h46m\n  prometheus-kube-state-metrics             ClusterIP   10.104.231.181   <none>        8080\/TCP                     7h46m\n  prometheus-operated                       ClusterIP   None             <none>        9090\/TCP                     7h46m\n  prometheus-prometheus-node-exporter       ClusterIP   10.96.247.128    <none>        9100\/TCP                     7h46m\n  ```\n  prometheus-grafana is the service we want to port forward to. 
We can do so using the following command:\n  ```\n  kubectl port-forward svc\/prometheus-grafana 3000:80 -n prometheus\n  ```\n  When you run the above command, you should see something like:\n  ```\n  Forwarding from 127.0.0.1:3000 -> 3000\n  Forwarding from [::1]:3000 -> 3000\n  ```\n- Open your browser and visit the URL _http:\/\/localhost:{port-forwarded-port}_. According to the above example, it would be http:\/\/localhost:3000\n  The default username\/password is admin\/prom-operator\n- After the login you can import the Grafana dashboard from [official dashboards](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/deploy\/grafana\/dashboards), by following the steps given below:\n\n  - Navigate to the left-hand panel of Grafana\n  - Hover on the gearwheel icon for Configuration and click \"Data Sources\"\n  - Click \"Add data source\"\n  - Select \"Prometheus\"\n  - Enter the details (note: I used http:\/\/10.102.72.134:9090 which is the CLUSTER-IP for the Prometheus service)\n  - Left menu (hover over +) -> Dashboard\n  - Click \"Import\"\n  - Enter the copy-pasted JSON from https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/main\/deploy\/grafana\/dashboards\/nginx.json\n  - Click Import JSON\n  - Select the Prometheus data source\n  - Click \"Import\"\n\n  ![Grafana Dashboard](..\/images\/grafana-dashboard1.png)\n\n## Exposed metrics\n\nPrometheus metrics are exposed on port 10254.\n\n### Request metrics\n\n* `nginx_ingress_controller_request_duration_seconds` Histogram\\\n  The request processing time in seconds (time elapsed between the first bytes read from the client and the log write after the last bytes were sent to the client; affected by client speed).\\\n  nginx var: `request_time`\n\n* `nginx_ingress_controller_response_duration_seconds` Histogram\\\n  The time spent on receiving the response from the upstream server in seconds (affected by client speed when the response is bigger than proxy buffers).\\\n  Note: can be 
up to several milliseconds bigger than `nginx_ingress_controller_request_duration_seconds` because of the different measuring method.\\\n  nginx var: `upstream_response_time`\n\n* `nginx_ingress_controller_header_duration_seconds` Histogram\\\n  The time spent on receiving the first header from the upstream server\\\n  nginx var: `upstream_header_time`\n\n* `nginx_ingress_controller_connect_duration_seconds` Histogram\\\n  The time spent on establishing a connection with the upstream server\\\n  nginx var: `upstream_connect_time`\n\n* `nginx_ingress_controller_response_size` Histogram\\\n  The response length (including response line, headers, and body)\\\n  nginx var: `bytes_sent`\n\n* `nginx_ingress_controller_request_size` Histogram\\\n  The request length (including request line, header, and request body)\\\n  nginx var: `request_length`\n\n* `nginx_ingress_controller_requests` Counter\\\n  The total number of client requests\n\n* `nginx_ingress_controller_bytes_sent` Histogram\\\n  The number of bytes sent to a client. **Deprecated**, use `nginx_ingress_controller_response_size`\\\n  nginx var: `bytes_sent`\n\n```\n# HELP nginx_ingress_controller_bytes_sent The number of bytes sent to a client. DEPRECATED! 
Use nginx_ingress_controller_response_size\n# TYPE nginx_ingress_controller_bytes_sent histogram\n# HELP nginx_ingress_controller_connect_duration_seconds The time spent on establishing a connection with the upstream server\n# TYPE nginx_ingress_controller_connect_duration_seconds histogram\n# HELP nginx_ingress_controller_header_duration_seconds The time spent on receiving first header from the upstream server\n# TYPE nginx_ingress_controller_header_duration_seconds histogram\n# HELP nginx_ingress_controller_request_duration_seconds The request processing time in seconds\n# TYPE nginx_ingress_controller_request_duration_seconds histogram\n# HELP nginx_ingress_controller_request_size The request length (including request line, header, and request body)\n# TYPE nginx_ingress_controller_request_size histogram\n# HELP nginx_ingress_controller_requests The total number of client requests.\n# TYPE nginx_ingress_controller_requests counter\n# HELP nginx_ingress_controller_response_duration_seconds The time spent on receiving the response from the upstream server\n# TYPE nginx_ingress_controller_response_duration_seconds histogram\n# HELP nginx_ingress_controller_response_size The response length (including request line, header, and request body)\n# TYPE nginx_ingress_controller_response_size histogram\n```\n\n### Nginx process metrics\n```\n# HELP nginx_ingress_controller_nginx_process_connections current number of client connections with state {active, reading, writing, waiting}\n# TYPE nginx_ingress_controller_nginx_process_connections gauge\n# HELP nginx_ingress_controller_nginx_process_connections_total total number of connections with state {accepted, handled}\n# TYPE nginx_ingress_controller_nginx_process_connections_total counter\n# HELP nginx_ingress_controller_nginx_process_cpu_seconds_total Cpu usage in seconds\n# TYPE nginx_ingress_controller_nginx_process_cpu_seconds_total counter\n# HELP 
nginx_ingress_controller_nginx_process_num_procs number of processes\n# TYPE nginx_ingress_controller_nginx_process_num_procs gauge\n# HELP nginx_ingress_controller_nginx_process_oldest_start_time_seconds start time in seconds since 1970\/01\/01\n# TYPE nginx_ingress_controller_nginx_process_oldest_start_time_seconds gauge\n# HELP nginx_ingress_controller_nginx_process_read_bytes_total number of bytes read\n# TYPE nginx_ingress_controller_nginx_process_read_bytes_total counter\n# HELP nginx_ingress_controller_nginx_process_requests_total total number of client requests\n# TYPE nginx_ingress_controller_nginx_process_requests_total counter\n# HELP nginx_ingress_controller_nginx_process_resident_memory_bytes number of bytes of memory in use\n# TYPE nginx_ingress_controller_nginx_process_resident_memory_bytes gauge\n# HELP nginx_ingress_controller_nginx_process_virtual_memory_bytes number of bytes of memory in use\n# TYPE nginx_ingress_controller_nginx_process_virtual_memory_bytes gauge\n# HELP nginx_ingress_controller_nginx_process_write_bytes_total number of bytes written\n# TYPE nginx_ingress_controller_nginx_process_write_bytes_total counter\n```\n\n### Controller metrics\n```\n# HELP nginx_ingress_controller_build_info A metric with a constant '1' labeled with information about the build.\n# TYPE nginx_ingress_controller_build_info gauge\n# HELP nginx_ingress_controller_check_success Cumulative number of Ingress controller syntax check operations\n# TYPE nginx_ingress_controller_check_success counter\n# HELP nginx_ingress_controller_config_hash Running configuration hash actually running\n# TYPE nginx_ingress_controller_config_hash gauge\n# HELP nginx_ingress_controller_config_last_reload_successful Whether the last configuration reload attempt was successful\n# TYPE nginx_ingress_controller_config_last_reload_successful gauge\n# HELP nginx_ingress_controller_config_last_reload_successful_timestamp_seconds Timestamp of the last successful configuration reload.\n# 
TYPE nginx_ingress_controller_config_last_reload_successful_timestamp_seconds gauge\n# HELP nginx_ingress_controller_ssl_certificate_info Hold all labels associated to a certificate\n# TYPE nginx_ingress_controller_ssl_certificate_info gauge\n# HELP nginx_ingress_controller_success Cumulative number of Ingress controller reload operations\n# TYPE nginx_ingress_controller_success counter\n# HELP nginx_ingress_controller_orphan_ingress Gauge reporting status of ingress orphanity, 1 indicates orphaned ingress. 'namespace' is the string used to identify namespace of ingress, 'ingress' for ingress name and 'type' for 'no-service' or 'no-endpoint' of orphanity\n# TYPE nginx_ingress_controller_orphan_ingress gauge\n```\n\n### Admission metrics\n```\n# HELP nginx_ingress_controller_admission_config_size The size of the tested configuration\n# TYPE nginx_ingress_controller_admission_config_size gauge\n# HELP nginx_ingress_controller_admission_render_duration The processing duration of ingresses rendering by the admission controller (float seconds)\n# TYPE nginx_ingress_controller_admission_render_duration gauge\n# HELP nginx_ingress_controller_admission_render_ingresses The length of ingresses rendered by the admission controller\n# TYPE nginx_ingress_controller_admission_render_ingresses gauge\n# HELP nginx_ingress_controller_admission_roundtrip_duration The complete duration of the admission controller at the time to process a new event (float seconds)\n# TYPE nginx_ingress_controller_admission_roundtrip_duration gauge\n# HELP nginx_ingress_controller_admission_tested_duration The processing duration of the admission controller tests (float seconds)\n# TYPE nginx_ingress_controller_admission_tested_duration gauge\n# HELP nginx_ingress_controller_admission_tested_ingresses The length of ingresses processed by the admission controller\n# TYPE nginx_ingress_controller_admission_tested_ingresses gauge\n```\n\n### Histogram buckets\n\nYou can configure buckets for histogram 
metrics using these command line options (here are their default values):\n* `--time-buckets=[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]`\n* `--length-buckets=[10, 20, 30, 40, 50, 60, 70, 80, 90, 100]`\n* `--size-buckets=[10, 100, 1000, 10000, 100000, 1e+06, 1e+07]`","site":"ingress nginx","answers_cleaned":"
server running  the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server     Running the following command deploys prometheus in Kubernetes           kubectl apply   kustomize github com kubernetes ingress nginx deploy prometheus              Prometheus Dashboard    Open Prometheus dashboard in a web browser        console   kubectl get svc  n ingress nginx   NAME                   TYPE        CLUSTER IP      EXTERNAL IP   PORT S                                       AGE   default http backend   ClusterIP   10 103 59 201    none         80 TCP                                       3d   ingress nginx          NodePort    10 97 44 72      none         80 30100 TCP 443 30154 TCP 10254 32049 TCP   5h   prometheus server      NodePort    10 98 233 86     none         9090 32630 TCP                               1m            Obtain the IP address of the nodes in the running cluster        console   kubectl get nodes  o wide            In some cases where the node only have internal IP addresses we need to execute           kubectl get nodes   selector kubernetes io role  master  o jsonpath   items    status addresses      type    InternalIP      address    10 192 0 2 10 192 0 3 10 192 0 4            Open your browser and visit the following URL   http    node IP address   prometheus svc nodeport   to load the Prometheus Dashboard       According to the above example  this URL will be http   10 192 0 3 32630      Prometheus Dashboard     images prometheus dashboard png        Grafana     Install grafana using the below command         kubectl apply   kustomize github com kubernetes ingress nginx deploy grafana            Look at the services         kubectl get svc  n ingress nginx   NAME                   TYPE        CLUSTER IP      EXTERNAL IP   PORT S                                       AGE   default http backend   ClusterIP   10 103 59 201    none         80 TCP                                       3d   
ingress nginx          NodePort    10 97 44 72      none         80 30100 TCP 443 30154 TCP 10254 32049 TCP   5h   prometheus server      NodePort    10 98 233 86     none         9090 32630 TCP                               10m   grafana                NodePort    10 98 233 87     none         3000 31086 TCP                               10m            Open your browser and visit the following URL   http    node IP address   grafana svc nodeport   to load the Grafana Dashboard  According to the above example  this URL will be http   10 192 0 3 31086    The username and password is  admin       After the login you can import the Grafana dashboard from  official dashboards  https   github com kubernetes ingress nginx tree main deploy grafana dashboards   by following steps given below          Navigate to lefthand panel of grafana       Hover on the gearwheel icon for Configuration and click  Data Sources        Click  Add data source        Select  Prometheus        Enter the details  note  I used http   CLUSTER IP PROMETHEUS SVC 9090        Left menu  hover over       Dashboard       Click  Import        Enter the copy pasted json from https   raw githubusercontent com kubernetes ingress nginx main deploy grafana dashboards nginx json       Click Import JSON       Select the Prometheus data source       Click  Import         Grafana Dashboard     images grafana png       Caveats       Wildcard ingresses      By default request metrics are labeled with the hostname  When you have a wildcard domain ingress  then there will be no metrics for that ingress  to prevent the metrics from exploding in cardinality   To get metrics in this case you have two options        Run the ingress controller with    metrics per host false   You will lose labeling by hostname  but still have labeling by ingress        Run the ingress controller with    metrics per undefined host true   metrics per host true   You will get labeling by hostname even if the hostname is not explicitly 
defined on an ingress  Be warned that cardinality could explode due to many hostnames and CPU usage could also increase       Grafana dashboard using ingress resource     If you want to expose the dashboard for grafana using an ingress resource  then you can         change the service type of the prometheus server service and the grafana service to  ClusterIP  like this               kubectl  n ingress nginx edit svc grafana               This will open the currently deployed service grafana in the default editor configured in your shell  vi nvim nano other        scroll down to line 34 that looks like  type  NodePort        change it to look like  type  ClusterIP   Save and exit        create an ingress resource with backend as  grafana  and port as  3000      Similarly  you can edit the service  prometheus server  and add an ingress resource      Prometheus and Grafana installation using Service Monitors This document assumes you re using helm and using the kube prometheus stack package to install Prometheus and Grafana       Verify Ingress Nginx Controller is installed    The Ingress Nginx Controller should already be deployed according to the deployment instructions  here     deploy index md      To check if Ingress controller is deployed          kubectl get pods  n ingress nginx         The result should look something like          NAME                                        READY   STATUS    RESTARTS   AGE   ingress nginx controller 7c489dc7b7 ccrf6   1 1     Running   0          19h              Verify Prometheus is installed    To check if Prometheus is already deployed  run the following command           helm ls  A               NAME          NAMESPACE     REVISION UPDATED                              STATUS   CHART                        APP VERSION   ingress nginx ingress nginx 10       2022 01 20 18 08 55 267373  0800 PST deployed ingress nginx 4 0 16         1 1 1   prometheus    prometheus    1        2022 01 20 16 07 25 086828  0800 PST deployed 
kube prometheus stack 30 1 0 0 53 1         Notice that prometheus is installed in a differenet namespace than ingress nginx    If prometheus is not installed  then you can install from  here  https   artifacthub io packages helm prometheus community kube prometheus stack       Re configure Ingress Nginx Controller    The Ingress NGINX controller needs to be reconfigured for exporting metrics  This requires 3 additional configurations to the controller  These configurations are           controller metrics enabled true   controller metrics serviceMonitor enabled true   controller metrics serviceMonitor additionalLabels release  prometheus          The easiest way of doing this is to helm upgrade         helm upgrade ingress nginx ingress nginx ingress nginx       namespace ingress nginx       set controller metrics enabled true       set controller metrics serviceMonitor enabled true       set controller metrics serviceMonitor additionalLabels release  prometheus          Here  controller metrics serviceMonitor additionalLabels release  prometheus   should match the name of the helm release of the  kube prometheus stack     You can validate that the controller has been successfully reconfigured to export metrics by looking at the values of the installed release  like this          helm get values ingress nginx   namespace ingress nginx               controller      metrics        enabled  true       serviceMonitor          additionalLabels            release  prometheus         enabled  true           Configure Prometheus    Since Prometheus is running in a different namespace and not in the ingress nginx namespace  it would not be able to discover ServiceMonitors in other namespaces when installed  Reconfigure your kube prometheus stack Helm installation to set  serviceMonitorSelectorNilUsesHelmValues  flag to false  By default  Prometheus only discovers PodMonitors within its own namespace  This should be disabled by setting  podMonitorSelectorNilUsesHelmValues  
to false   The configurations required are          prometheus prometheusSpec podMonitorSelectorNilUsesHelmValues false   prometheus prometheusSpec serviceMonitorSelectorNilUsesHelmValues false         The easiest way of doing this is to use  helm upgrade              helm upgrade prometheus prometheus community kube prometheus stack       namespace prometheus        set prometheus prometheusSpec podMonitorSelectorNilUsesHelmValues false       set prometheus prometheusSpec serviceMonitorSelectorNilUsesHelmValues false         You can validate that Prometheus has been reconfigured by looking at the values of the installed release  like this          helm get values prometheus   namespace prometheus         You should be able to see the values shown below          prometheus      prometheusSpec        podMonitorSelectorNilUsesHelmValues  false       serviceMonitorSelectorNilUsesHelmValues  false            Connect and view Prometheus dashboard   Port forward to Prometheus service  Find out the name of the prometheus service by using the following command          kubectl get svc  n prometheus          The result of this command would look like          NAME                                      TYPE        CLUSTER IP       EXTERNAL IP   PORT S                       AGE   alertmanager operated                     ClusterIP   None              none         9093 TCP 9094 TCP 9094 UDP   7h46m   prometheus grafana                        ClusterIP   10 106 28 162     none         80 TCP                       7h46m   prometheus kube prometheus alertmanager   ClusterIP   10 108 125 245    none         9093 TCP                     7h46m   prometheus kube prometheus operator       ClusterIP   10 110 220 1      none         443 TCP                      7h46m   prometheus kube prometheus prometheus     ClusterIP   10 102 72 134     none         9090 TCP                     7h46m   prometheus kube state metrics             ClusterIP   10 104 231 181    none         8080 TCP        
             7h46m   prometheus operated                       ClusterIP   None              none         9090 TCP                     7h46m   prometheus prometheus node exporter       ClusterIP   10 96 247 128     none         9100 TCP                     7h46m         prometheus kube prometheus prometheus is the service we want to port forward to  We can do so using the following command          kubectl port forward svc prometheus kube prometheus prometheus  n prometheus 9090 9090         When you run the above command  you should see something like          Forwarding from 127 0 0 1 9090    9090   Forwarding from    1  9090    9090         Open your browser and visit the following URL http   localhost  port forwarded port  according to the above example it would be  http   localhost 9090      Prometheus Dashboard     images prometheus dashboard1 png       Connect and view Grafana dashboard   Port forward to Grafana service  Find out the name of the Grafana service by using the following command          kubectl get svc  n prometheus          The result of this command would look like          NAME                                      TYPE        CLUSTER IP       EXTERNAL IP   PORT S                       AGE   alertmanager operated                     ClusterIP   None              none         9093 TCP 9094 TCP 9094 UDP   7h46m   prometheus grafana                        ClusterIP   10 106 28 162     none         80 TCP                       7h46m   prometheus kube prometheus alertmanager   ClusterIP   10 108 125 245    none         9093 TCP                     7h46m   prometheus kube prometheus operator       ClusterIP   10 110 220 1      none         443 TCP                      7h46m   prometheus kube prometheus prometheus     ClusterIP   10 102 72 134     none         9090 TCP                     7h46m   prometheus kube state metrics             ClusterIP   10 104 231 181    none         8080 TCP                     7h46m   prometheus operated               
        ClusterIP   None              none         9090 TCP                     7h46m   prometheus prometheus node exporter       ClusterIP   10 96 247 128     none         9100 TCP                     7h46m         prometheus grafana is the service we want to port forward to  We can do so using the following command          kubectl port forward svc prometheus grafana  3000 80  n prometheus         When you run the above command  you should see something like          Forwarding from 127 0 0 1 3000    3000   Forwarding from    1  3000    3000         Open your browser and visit the following URL http   localhost  port forwarded port  according to the above example it would be  http   localhost 3000   The default username  password is admin prom operator   After the login you can import the Grafana dashboard from  official dashboards  https   github com kubernetes ingress nginx tree main deploy grafana dashboards   by following steps given below        Navigate to lefthand panel of grafana     Hover on the gearwheel icon for Configuration and click  Data Sources      Click  Add data source      Select  Prometheus      Enter the details  note  I used http   10 102 72 134 9090 which is the CLUSTER IP for Prometheus service      Left menu  hover over       Dashboard     Click  Import      Enter the copy pasted json from https   raw githubusercontent com kubernetes ingress nginx main deploy grafana dashboards nginx json     Click Import JSON     Select the Prometheus data source     Click  Import       Grafana Dashboard     images grafana dashboard1 png       Exposed metrics  Prometheus metrics are exposed on port 10254       Request metrics     nginx ingress controller request duration seconds  Histogram    The request processing  time elapsed between the first bytes were read from the client and the log write after the last bytes were sent to the client  time in seconds  affected by client speed      nginx var   request time      nginx ingress controller response 
duration seconds  Histogram    The time spent on receiving the response from the upstream server in seconds  affected by client speed when the response is bigger than proxy buffers      Note  can be up to several millis bigger than the  nginx ingress controller request duration seconds  because of the different measuring method    nginx var   upstream response time      nginx ingress controller header duration seconds  Histogram    The time spent on receiving first header from the upstream server    nginx var   upstream header time      nginx ingress controller connect duration seconds  Histogram    The time spent on establishing a connection with the upstream server    nginx var   upstream connect time      nginx ingress controller response size  Histogram    The response length  including request line  header  and request body     nginx var   bytes sent      nginx ingress controller request size  Histogram    The request length  including request line  header  and request body     nginx var   request length      nginx ingress controller requests  Counter    The total number of client requests     nginx ingress controller bytes sent  Histogram    The number of bytes sent to a client    Deprecated    use  nginx ingress controller response size     nginx var   bytes sent         HELP nginx ingress controller bytes sent The number of bytes sent to a client  DEPRECATED  Use nginx ingress controller response size   TYPE nginx ingress controller bytes sent histogram   HELP nginx ingress controller connect duration seconds The time spent on establishing a connection with the upstream server   TYPE nginx ingress controller connect duration seconds nginx ingress controller connect duration seconds   HELP nginx ingress controller header duration seconds The time spent on receiving first header from the upstream server   TYPE nginx ingress controller header duration seconds histogram   HELP nginx ingress controller request duration seconds The request processing time in 
milliseconds   TYPE nginx ingress controller request duration seconds histogram   HELP nginx ingress controller request size The request length  including request line  header  and request body    TYPE nginx ingress controller request size histogram   HELP nginx ingress controller requests The total number of client requests    TYPE nginx ingress controller requests counter   HELP nginx ingress controller response duration seconds The time spent on receiving the response from the upstream server   TYPE nginx ingress controller response duration seconds histogram   HELP nginx ingress controller response size The response length  including request line  header  and request body    TYPE nginx ingress controller response size histogram           Nginx process metrics       HELP nginx ingress controller nginx process connections current number of client connections with state  active  reading  writing  waiting    TYPE nginx ingress controller nginx process connections gauge   HELP nginx ingress controller nginx process connections total total number of connections with state  accepted  handled    TYPE nginx ingress controller nginx process connections total counter   HELP nginx ingress controller nginx process cpu seconds total Cpu usage in seconds   TYPE nginx ingress controller nginx process cpu seconds total counter   HELP nginx ingress controller nginx process num procs number of processes   TYPE nginx ingress controller nginx process num procs gauge   HELP nginx ingress controller nginx process oldest start time seconds start time in seconds since 1970 01 01   TYPE nginx ingress controller nginx process oldest start time seconds gauge   HELP nginx ingress controller nginx process read bytes total number of bytes read   TYPE nginx ingress controller nginx process read bytes total counter   HELP nginx ingress controller nginx process requests total total number of client requests   TYPE nginx ingress controller nginx process requests total counter   HELP nginx 
ingress controller nginx process resident memory bytes number of bytes of memory in use   TYPE nginx ingress controller nginx process resident memory bytes gauge   HELP nginx ingress controller nginx process virtual memory bytes number of bytes of memory in use   TYPE nginx ingress controller nginx process virtual memory bytes gauge   HELP nginx ingress controller nginx process write bytes total number of bytes written   TYPE nginx ingress controller nginx process write bytes total counter          Controller metrics       HELP nginx ingress controller build info A metric with a constant  1  labeled with information about the build    TYPE nginx ingress controller build info gauge   HELP nginx ingress controller check success Cumulative number of Ingress controller syntax check operations   TYPE nginx ingress controller check success counter   HELP nginx ingress controller config hash Running configuration hash actually running   TYPE nginx ingress controller config hash gauge   HELP nginx ingress controller config last reload successful Whether the last configuration reload attempt was successful   TYPE nginx ingress controller config last reload successful gauge   HELP nginx ingress controller config last reload successful timestamp seconds Timestamp of the last successful configuration reload    TYPE nginx ingress controller config last reload successful timestamp seconds gauge   HELP nginx ingress controller ssl certificate info Hold all labels associated to a certificate   TYPE nginx ingress controller ssl certificate info gauge   HELP nginx ingress controller success Cumulative number of Ingress controller reload operations   TYPE nginx ingress controller success counter   HELP nginx ingress controller orphan ingress Gauge reporting status of ingress orphanity  1 indicates orphaned ingress   namespace  is the string used to identify namespace of ingress   ingress  for ingress name and  type  for  no service  or  no endpoint  of orphanity   TYPE nginx ingress 
controller orphan ingress gauge          Admission metrics       HELP nginx ingress controller admission config size The size of the tested configuration   TYPE nginx ingress controller admission config size gauge   HELP nginx ingress controller admission render duration The processing duration of ingresses rendering by the admission controller  float seconds    TYPE nginx ingress controller admission render duration gauge   HELP nginx ingress controller admission render ingresses The length of ingresses rendered by the admission controller   TYPE nginx ingress controller admission render ingresses gauge   HELP nginx ingress controller admission roundtrip duration The complete duration of the admission controller at the time to process a new event  float seconds    TYPE nginx ingress controller admission roundtrip duration gauge   HELP nginx ingress controller admission tested duration The processing duration of the admission controller tests  float seconds    TYPE nginx ingress controller admission tested duration gauge   HELP nginx ingress controller admission tested ingresses The length of ingresses processed by the admission controller   TYPE nginx ingress controller admission tested ingresses gauge          Histogram buckets  You can configure buckets for histogram metrics using these command line options  here are their default values        time buckets  0 005  0 01  0 025  0 05  0 1  0 25  0 5  1  2 5  5  10        length buckets  10  20  30  40  50  60  70  80  90  100        size buckets  10  100  1000  10000  100000  1e 06  1e 07  "}
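The bucket options above follow standard Prometheus histogram semantics: an observation is counted in every bucket whose upper bound (`le`) is greater than or equal to it. A minimal sketch of that rule for the default `--time-buckets` (the helper function is hypothetical, not part of the controller):

```python
import bisect

# Default ingress-nginx --time-buckets values, in seconds.
TIME_BUCKETS = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]

def smallest_bucket(observation: float, buckets=TIME_BUCKETS) -> float:
    """Return the smallest `le` bound that counts this observation,
    or +Inf for observations beyond the largest configured bucket."""
    # bisect_left finds the first bound >= observation, matching the
    # inclusive `le` semantics of Prometheus histogram buckets.
    i = bisect.bisect_left(buckets, observation)
    return buckets[i] if i < len(buckets) else float("inf")

print(smallest_bucket(0.03))   # a 30 ms request first lands in the 0.05 bucket
print(smallest_bucket(12.0))   # anything slower than 10 s falls into the implicit +Inf bucket
```

These cumulative buckets are what PromQL's `histogram_quantile()` interpolates over, e.g. `histogram_quantile(0.95, sum(rate(nginx_ingress_controller_request_duration_seconds_bucket[5m])) by (le))` for an approximate p95 request latency; picking bucket boundaries close to your expected latencies keeps that estimate accurate.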
{"questions":"ingress nginx The ConfigMap API resource stores configuration data as key value pairs The data provides the configurations for system components for the nginx controller ConfigMaps ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable you can add key value pairs to the data section of the config map For Example In order to overwrite nginx controller configuration values as seen in","answers":"# ConfigMaps\n\nConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.\n\nThe ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system\ncomponents for the nginx-controller.\n\nIn order to overwrite nginx-controller configuration values as seen in [config.go](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/internal\/ingress\/controller\/config\/config.go),\nyou can add key-value pairs to the data section of the config-map. For Example:\n\n```yaml\ndata:\n  map-hash-bucket-size: \"128\"\n  ssl-protocols: SSLv2\n```\n\n!!! 
important\n    The key and values in a ConfigMap can only be strings.\n    This means that if we want a value with boolean values, we need to quote the values, like \"true\" or \"false\".\n    Same for numbers, like \"100\".\n\n    \"Slice\" types (defined below as `[]string` or `[]int`) can be provided as a comma-delimited string.\n\n## Configuration options\n\nThe following table shows a configuration option's name, type, and the default value:\n\n| name                                                                            | type         | default                     | notes |\n|:--------------------------------------------------------------------------------|:-------------|:----------------------------|:------|\n| [add-headers](#add-headers)                                                     | string       | \"\"
                                                            |\n| [allow-backend-server-header](#allow-backend-server-header)                     | bool         | \"false\"                                                                                                                                                                                                                                                                                                                                                      |                                                                                     |\n| [allow-cross-namespace-resources](#allow-cross-namespace-resources)             | bool         | \"false\"                                                                                                                                                                                                                                                                                                                                                      |                                                                                     |\n| [allow-snippet-annotations](#allow-snippet-annotations)                         | bool         | \"false\"                                                                                                                                                                                                                                                                                                                                                      |                                                                                     |\n| [annotations-risk-level](#annotations-risk-level)                               | string       | High                                                                                                                                                                                                                         
| |
| [annotation-value-word-blocklist](#annotation-value-word-blocklist) | string array | "" | |
| [hide-headers](#hide-headers) | string array | empty | |
| [access-log-params](#access-log-params) | string | "" | |
| [access-log-path](#access-log-path) | string | "/var/log/nginx/access.log" | |
| [http-access-log-path](#http-access-log-path) | string | "" | |
| [stream-access-log-path](#stream-access-log-path) | string | "" | |
| [enable-access-log-for-default-backend](#enable-access-log-for-default-backend) | bool | "false" | |
| [error-log-path](#error-log-path) | string | "/var/log/nginx/error.log" | |
| [enable-modsecurity](#enable-modsecurity) | bool | "false" | |
| [modsecurity-snippet](#modsecurity-snippet) | string | "" | |
| [enable-owasp-modsecurity-crs](#enable-owasp-modsecurity-crs) | bool | "false" | |
| [client-header-buffer-size](#client-header-buffer-size) | string | "1k" | |
| [client-header-timeout](#client-header-timeout) | int | 60 | |
| [client-body-buffer-size](#client-body-buffer-size) | string | "8k" | |
| [client-body-timeout](#client-body-timeout) | int | 60 | |
| [disable-access-log](#disable-access-log) | bool | "false" | |
| [disable-ipv6](#disable-ipv6) | bool | "false" | |
| [disable-ipv6-dns](#disable-ipv6-dns) | bool | "false" | |
| [enable-underscores-in-headers](#enable-underscores-in-headers) | bool | "false" | |
| [enable-ocsp](#enable-ocsp) | bool | "false" | |
| [ignore-invalid-headers](#ignore-invalid-headers) | bool | "true" | |
| [retry-non-idempotent](#retry-non-idempotent) | bool | "false" | |
| [error-log-level](#error-log-level) | string | "notice" | |
| [http2-max-field-size](#http2-max-field-size) | string | "" | DEPRECATED in favour of [large_client_header_buffers](#large-client-header-buffers) |
| [http2-max-header-size](#http2-max-header-size) | string | "" | DEPRECATED in favour of [large_client_header_buffers](#large-client-header-buffers) |
| [http2-max-requests](#http2-max-requests) | int | 0 | DEPRECATED in favour of [keepalive_requests](#keepalive-requests) |
| [http2-max-concurrent-streams](#http2-max-concurrent-streams) | int | 128 | |
| [hsts](#hsts) | bool | "true" | |
| [hsts-include-subdomains](#hsts-include-subdomains) | bool | "true" | |
| [hsts-max-age](#hsts-max-age) | string | "31536000" | |
| [hsts-preload](#hsts-preload) | bool | "false" | |
| [keep-alive](#keep-alive) | int | 75 | |
| [keep-alive-requests](#keep-alive-requests) | int | 1000 | |
| [large-client-header-buffers](#large-client-header-buffers) | string | "4 8k" | |
| [log-format-escape-none](#log-format-escape-none) | bool | "false" | |
| [log-format-escape-json](#log-format-escape-json) | bool | "false" | |
| [log-format-upstream](#log-format-upstream) | string | `$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id` | |
| [log-format-stream](#log-format-stream) | string | `[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time` | |
| [enable-multi-accept](#enable-multi-accept) | bool | "true" | |
| [max-worker-connections](#max-worker-connections) | int | 16384 | |
| [max-worker-open-files](#max-worker-open-files) | int | 0 | |
| [map-hash-bucket-size](#max-hash-bucket-size) | int | 64 | |
| [nginx-status-ipv4-whitelist](#nginx-status-ipv4-whitelist) | []string | "127.0.0.1" | |
| [nginx-status-ipv6-whitelist](#nginx-status-ipv6-whitelist) | []string | "::1" | |
| [proxy-real-ip-cidr](#proxy-real-ip-cidr) | []string | "0.0.0.0/0" | |
| [proxy-set-headers](#proxy-set-headers) | string | "" | |
| [server-name-hash-max-size](#server-name-hash-max-size) | int | 1024 | |
| [server-name-hash-bucket-size](#server-name-hash-bucket-size) | int | `<size of the processor's cache line>` | |
| [proxy-headers-hash-max-size](#proxy-headers-hash-max-size) | int | 512 | |
| [proxy-headers-hash-bucket-size](#proxy-headers-hash-bucket-size) | int | 64 | |
| [reuse-port](#reuse-port) | bool | "true" | |
| [server-tokens](#server-tokens) | bool | "false" | |
| [ssl-ciphers](#ssl-ciphers) | string | "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384" | |
| [ssl-ecdh-curve](#ssl-ecdh-curve) | string | "auto" | |
| [ssl-dh-param](#ssl-dh-param) | string | "" | |
| [ssl-protocols](#ssl-protocols) | string | "TLSv1.2 TLSv1.3" | |
| [ssl-session-cache](#ssl-session-cache) | bool | "true" | |
| [ssl-session-cache-size](#ssl-session-cache-size) | string | "10m" | |
| [ssl-session-tickets](#ssl-session-tickets) | bool | "false" | |
| [ssl-session-ticket-key](#ssl-session-ticket-key) | string | `<Randomly Generated>` | |
| [ssl-session-timeout](#ssl-session-timeout) | string | "10m" | |
| [ssl-buffer-size](#ssl-buffer-size) | string | "4k" | |
| [use-proxy-protocol](#use-proxy-protocol) | bool | "false" | |
| [proxy-protocol-header-timeout](#proxy-protocol-header-timeout) | string | "5s" | |
| [enable-aio-write](#enable-aio-write) | bool | "true" | |
| [use-gzip](#use-gzip) | bool | "false" | |
| [use-geoip](#use-geoip) | bool | "true" | |
| [use-geoip2](#use-geoip2) | bool | "false" | |
| [geoip2-autoreload-in-minutes](#geoip2-autoreload-in-minutes) | int | "0" | |
| [enable-brotli](#enable-brotli) | bool | "false" | |
| [brotli-level](#brotli-level) | int | 4
                                                                                                                                                                                                                                |                                                                                     |\n| [brotli-min-length](#brotli-min-length)                                         | int          | 20                                                                                                                                                                                                                                                                                                                                                           |                                                                                     |\n| [brotli-types](#brotli-types)                                                   | string       | \"application\/xml+rss application\/atom+xml application\/javascript application\/x-javascript application\/json application\/rss+xml application\/vnd.ms-fontobject application\/x-font-ttf application\/x-web-app-manifest+json application\/xhtml+xml application\/xml font\/opentype image\/svg+xml image\/x-icon text\/css text\/javascript text\/plain text\/x-component\" |                                                                                     |\n| [use-http2](#use-http2)                                                         | bool         | \"true\"                                                                                                                                                                                                                                                                                                                                                       |                                                                                     |\n| [gzip-disable](#gzip-disable)                       
                            | string       | \"\"                                                                                                                                                                                                                                                                                                                                                           |                                                                                     |\n| [gzip-level](#gzip-level)                                                       | int          | 1                                                                                                                                                                                                                                                                                                                                                            |                                                                                     |\n| [gzip-min-length](#gzip-min-length)                                             | int          | 256                                                                                                                                                                                                                                                                                                                                                          |                                                                                     |\n| [gzip-types](#gzip-types)                                                       | string       | \"application\/atom+xml application\/javascript application\/x-javascript application\/json application\/rss+xml application\/vnd.ms-fontobject application\/x-font-ttf application\/x-web-app-manifest+json application\/xhtml+xml application\/xml font\/opentype image\/svg+xml image\/x-icon text\/css text\/javascript text\/plain 
text\/x-component\"                     |                                                                                     |\n| [worker-processes](#worker-processes)                                           | string       | `<Number of CPUs>`                                                                                                                                                                                                                                                                                                                                           |                                                                                     |\n| [worker-cpu-affinity](#worker-cpu-affinity)                                     | string       | \"\"                                                                                                                                                                                                                                                                                                                                                           |                                                                                     |\n| [worker-shutdown-timeout](#worker-shutdown-timeout)                             | string       | \"240s\"                                                                                                                                                                                                                                                                                                                                                       |                                                                                     |\n| [enable-serial-reloads](#enable-serial-reloads)                                 | bool         | \"false\"                                                                                                                                                    
                                                                                                                                                                                                  |                                                                                     |\n| [load-balance](#load-balance)                                                   | string       | \"round_robin\"                                                                                                                                                                                                                                                                                                                                                |                                                                                     |\n| [variables-hash-bucket-size](#variables-hash-bucket-size)                       | int          | 128                                                                                                                                                                                                                                                                                                                                                          |                                                                                     |\n| [variables-hash-max-size](#variables-hash-max-size)                             | int          | 2048                                                                                                                                                                                                                                                                                                                                                         |                                                                                     |\n| [upstream-keepalive-connections](#upstream-keepalive-connections)               | int          | 320  
                                                                                                                                                                                                                                                                                                                                                        |                                                                                     |\n| [upstream-keepalive-time](#upstream-keepalive-time)                             | string       | \"1h\"                                                                                                                                                                                                                                                                                                                                                         |                                                                                     |\n| [upstream-keepalive-timeout](#upstream-keepalive-timeout)                       | int          | 60                                                                                                                                                                                                                                                                                                                                                           |                                                                                     |\n| [upstream-keepalive-requests](#upstream-keepalive-requests)                     | int          | 10000                                                                                                                                                                                                                                                                                                                                                        |                                          
                                           |\n| [limit-conn-zone-variable](#limit-conn-zone-variable)                           | string       | \"$binary_remote_addr\"                                                                                                                                                                                                                                                                                                                                        |                                                                                     |\n| [proxy-stream-timeout](#proxy-stream-timeout)                                   | string       | \"600s\"                                                                                                                                                                                                                                                                                                                                                       |                                                                                     |\n| [proxy-stream-next-upstream](#proxy-stream-next-upstream)                       | bool         | \"true\"                                                                                                                                                                                                                                                                                                                                                       |                                                                                     |\n| [proxy-stream-next-upstream-timeout](#proxy-stream-next-upstream-timeout)       | string       | \"600s\"                                                                                                                                                                                                                                      
                                                                                                                 |                                                                                     |\n| [proxy-stream-next-upstream-tries](#proxy-stream-next-upstream-tries)           | int          | 3                                                                                                                                                                                                                                                                                                                                                            |                                                                                     |\n| [proxy-stream-responses](#proxy-stream-responses)                               | int          | 1                                                                                                                                                                                                                                                                                                                                                            |                                                                                     |\n| [bind-address](#bind-address)                                                   | []string     | \"\"                                                                                                                                                                                                                                                                                                                                                           |                                                                                     |\n| [use-forwarded-headers](#use-forwarded-headers)                                 | bool         | \"false\"                                                                             
                                                                                                                                                                                                                                                                         |                                                                                     |\n| [enable-real-ip](#enable-real-ip)                                               | bool         | \"false\"                                                                                                                                                                                                                                                                                                                                                      |                                                                                     |\n| [forwarded-for-header](#forwarded-for-header)                                   | string       | \"X-Forwarded-For\"                                                                                                                                                                                                                                                                                                                                            |                                                                                     |\n| [compute-full-forwarded-for](#compute-full-forwarded-for)                       | bool         | \"false\"                                                                                                                                                                                                                                                                                                                                                      |                                                                                     |\n| 
[proxy-add-original-uri-header](#proxy-add-original-uri-header)                 | bool         | \"false\"                                                                                                                                                                                                                                                                                                                                                      |                                                                                     |\n| [generate-request-id](#generate-request-id)                                     | bool         | \"true\"                                                                                                                                                                                                                                                                                                                                                       |                                                                                     |\n| [jaeger-collector-host](#jaeger-collector-host)                                 | string       | \"\"                                                                                                                                                                                                                                                                                                                                                           |                                                                                     |\n| [jaeger-collector-port](#jaeger-collector-port)                                 | int          | 6831                                                                                                                                                                                                                                                                                          
                                                               |                                                                                     |\n| [jaeger-endpoint](#jaeger-endpoint)                                             | string       | \"\"                                                                                                                                                                                                                                                                                                                                                           |                                                                                     |\n| [jaeger-service-name](#jaeger-service-name)                                     | string       | \"nginx\"                                                                                                                                                                                                                                                                                                                                                      |                                                                                     |\n| [jaeger-propagation-format](#jaeger-propagation-format)                         | string       | \"jaeger\"                                                                                                                                                                                                                                                                                                                                                     |                                                                                     |\n| [jaeger-sampler-type](#jaeger-sampler-type)                                     | string       | \"const\"                                                                                                                           
                                                                                                                                                                                                                           |                                                                                     |\n| [jaeger-sampler-param](#jaeger-sampler-param)                                   | string       | \"1\"                                                                                                                                                                                                                                                                                                                                                          |                                                                                     |\n| [jaeger-sampler-host](#jaeger-sampler-host)                                     | string       | \"http:\/\/127.0.0.1\"                                                                                                                                                                                                                                                                                                                                           |                                                                                     |\n| [jaeger-sampler-port](#jaeger-sampler-port)                                     | int          | 5778                                                                                                                                                                                                                                                                                                                                                         |                                                                                     |\n| [jaeger-trace-context-header-name](#jaeger-trace-context-header-name)    
       | string       | uber-trace-id                                                                                                                                                                                                                                                                                                                                                |                                                                                     |\n| [jaeger-debug-header](#jaeger-debug-header)                                     | string       | uber-debug-id                                                                                                                                                                                                                                                                                                                                                |                                                                                     |\n| [jaeger-baggage-header](#jaeger-baggage-header)                                 | string       | jaeger-baggage                                                                                                                                                                                                                                                                                                                                               |                                                                                     |\n| [jaeger-trace-baggage-header-prefix](#jaeger-trace-baggage-header-prefix)       | string       | uberctx-                                                                                                                                                                                                                                                                                                                                                     |               
                                                                      |\n| [datadog-collector-host](#datadog-collector-host)                               | string       | \"\"                                                                                                                                                                                                                                                                                                                                                           |                                                                                     |\n| [datadog-collector-port](#datadog-collector-port)                               | int          | 8126                                                                                                                                                                                                                                                                                                                                                         |                                                                                     |\n| [datadog-service-name](#datadog-service-name)                                   | string       | \"nginx\"                                                                                                                                                                                                                                                                                                                                                      |                                                                                     |\n| [datadog-environment](#datadog-environment)                                     | string       | \"prod\"                                                                                                                                                                                                             
                                                                                                                                          |                                                                                     |\n| [datadog-operation-name-override](#datadog-operation-name-override)             | string       | \"nginx.handle\"                                                                                                                                                                                                                                                                                                                                               |                                                                                     |\n| [datadog-priority-sampling](#datadog-priority-sampling)                         | bool         | \"true\"                                                                                                                                                                                                                                                                                                                                                       |                                                                                     |\n| [datadog-sample-rate](#datadog-sample-rate)                                     | float        | 1.0                                                                                                                                                                                                                                                                                                                                                          |                                                                                     |\n| [enable-opentelemetry](#enable-opentelemetry)                                   | bool         | \"false\"                                                  
                                                                                                                                                                                                                                                                                                    |                                                                                     |\n| [opentelemetry-trust-incoming-span](#opentelemetry-trust-incoming-span)         | bool         | \"true\"                                                                                                                                                                                                                                                                                                                                                       |                                                                                     |\n| [opentelemetry-operation-name](#opentelemetry-operation-name)                   | string       | \"\"                                                                                                                                                                                                                                                                                                                                                           |                                                                                     |\n| [opentelemetry-config](#\/etc\/nginx\/opentelemetry.toml)                          | string       | \"\/etc\/nginx\/opentelemetry.toml\"                                                                                                                                                                                                                                                                                                                              |                                                                                    
|
| [otlp-collector-host](#otlp-collector-host) | string | "" | |
| [otlp-collector-port](#otlp-collector-port) | int | 4317 | |
| [otel-max-queuesize](#otel-max-queuesize) | int | | |
| [otel-schedule-delay-millis](#otel-schedule-delay-millis) | int | | |
| [otel-max-export-batch-size](#otel-max-export-batch-size) | int | | |
| [otel-service-name](#otel-service-name) | string | "nginx" | |
| [otel-sampler](#otel-sampler) | string | "AlwaysOff" | |
| [otel-sampler-parent-based](#otel-sampler-parent-based) | bool | "false" | |
| [otel-sampler-ratio](#otel-sampler-ratio) | float | 0.01 | |
| [main-snippet](#main-snippet) | string | "" | |
| [http-snippet](#http-snippet) | string | "" | |
| [server-snippet](#server-snippet) | string | "" | |
| [stream-snippet](#stream-snippet) | string | "" | |
| [location-snippet](#location-snippet) | string | "" | |
| [custom-http-errors](#custom-http-errors) | []int | []int{} | |
| [proxy-body-size](#proxy-body-size) | string | "1m" | |
| [proxy-connect-timeout](#proxy-connect-timeout) | int | 5 | |
| [proxy-read-timeout](#proxy-read-timeout) | int | 60 | |
| [proxy-send-timeout](#proxy-send-timeout) | int | 60 | |
| [proxy-buffers-number](#proxy-buffers-number) | int | 4 | |
| [proxy-buffer-size](#proxy-buffer-size) | string | "4k" | |
| [proxy-busy-buffers-size](#proxy-busy-buffers-size) | string | "8k" | |
| [proxy-cookie-path](#proxy-cookie-path) | string | "off" | |
| [proxy-cookie-domain](#proxy-cookie-domain) | string | "off" | |
| [proxy-next-upstream](#proxy-next-upstream) | string | "error timeout" | |
| [proxy-next-upstream-timeout](#proxy-next-upstream-timeout) | int | 0 | |
| [proxy-next-upstream-tries](#proxy-next-upstream-tries) | int | 3 | |
| [proxy-redirect-from](#proxy-redirect-from) | string | "off" | |
| [proxy-request-buffering](#proxy-request-buffering) | string | "on" | |
| [ssl-redirect](#ssl-redirect) | bool | "true" | |
| [force-ssl-redirect](#force-ssl-redirect) | bool | "false" | |
| [denylist-source-range](#denylist-source-range) | []string | []string{} | |
| [whitelist-source-range](#whitelist-source-range) | []string | []string{} | |
| [skip-access-log-urls](#skip-access-log-urls) | []string | []string{} | |
| [limit-rate](#limit-rate) | int | 0 | |
| [limit-rate-after](#limit-rate-after) | int | 0 | |
| [lua-shared-dicts](#lua-shared-dicts) | string | "" | |
| [http-redirect-code](#http-redirect-code) | int | 308 | |
| [proxy-buffering](#proxy-buffering) | string | "off" | |
| [limit-req-status-code](#limit-req-status-code) | int | 503 | |
| [limit-conn-status-code](#limit-conn-status-code) | int | 503 | |
| [enable-syslog](#enable-syslog) | bool | "false" | |
| [syslog-host](#syslog-host) | string | "" | |
| [syslog-port](#syslog-port) | int | 514 | |
| [no-tls-redirect-locations](#no-tls-redirect-locations) | string | "/.well-known/acme-challenge" | |
| [global-allowed-response-headers](#global-allowed-response-headers) | string | "" | |
| [global-auth-url](#global-auth-url) | string | "" | |
| [global-auth-method](#global-auth-method) | string | "" | |
| [global-auth-signin](#global-auth-signin) | string | "" | |
| [global-auth-signin-redirect-param](#global-auth-signin-redirect-param) | string | "rd" | |
| [global-auth-response-headers](#global-auth-response-headers) | string | "" | |
| [global-auth-request-redirect](#global-auth-request-redirect) | string | "" | |
| [global-auth-snippet](#global-auth-snippet) | string | "" | |
| [global-auth-cache-key](#global-auth-cache-key) | string | "" | |
| [global-auth-cache-duration](#global-auth-cache-duration) | string | "200 202 401 5m" | |
| [no-auth-locations](#no-auth-locations) | string | "/.well-known/acme-challenge" | |
| [block-cidrs](#block-cidrs) | []string | "" | |
| [block-user-agents](#block-user-agents) | []string | "" | |
| [block-referers](#block-referers) | []string | "" | |
| [proxy-ssl-location-only](#proxy-ssl-location-only) | bool | "false" | |
| [default-type](#default-type) | string | "text/html" | |
| [service-upstream](#service-upstream) | bool | "false" | |
| [ssl-reject-handshake](#ssl-reject-handshake) | bool | "false" | |
| [debug-connections](#debug-connections) | []string | "127.0.0.1,1.1.1.1/24" | |
| [strict-validate-path-type](#strict-validate-path-type) | bool | "true" | |
| [grpc-buffer-size-kb](#grpc-buffer-size-kb) | int | 0 | |
| [relative-redirects](#relative-redirects) | bool | false |
|\n\n## add-headers\n\nSets custom headers from named configmap before sending traffic to the client. See [proxy-set-headers](#proxy-set-headers). [example](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/docs\/examples\/customization\/custom-headers)\n\n## allow-backend-server-header\n\nEnables the return of the header Server from the backend instead of the generic nginx string. _**default:**_ is disabled\n\n## allow-cross-namespace-resources\n\nEnables users to consume cross-namespace resources in annotations, where this was previously allowed. _**default:**_ false\n\n**Annotations that may be impacted with this change**:\n\n* `auth-secret`\n* `auth-proxy-set-header`\n* `auth-tls-secret`\n* `fastcgi-params-configmap`\n* `proxy-ssl-secret`\n\n## allow-snippet-annotations\n\nEnables Ingress to parse and add *-snippet annotations\/directives created by the user. _**default:**_ `false`\n\nWarning: We recommend enabling this option only if you TRUST users with permission to create Ingress objects, as this\nmay allow a user to add restricted configurations to the final nginx.conf file.\n\n## annotations-risk-level\n\nRepresents the risk accepted on an annotation. If the risk is, for instance `Medium`, annotations with risk High and Critical will not be accepted.\n\nAccepted values are `Critical`, `High`, `Medium` and `Low`.\n\n_**default:**_ `High`\n\n## annotation-value-word-blocklist\n\nContains a comma-separated list of characters\/words that are well known to be used to abuse Ingress configuration\nand must be blocked.
Related to [CVE-2021-25742](https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/7837)\n\nWhen an annotation is detected with a value that matches one of the blocked bad words, the whole Ingress won't be configured.\n\n_**default:**_ `""`\n\nWhen setting this value, the default blocklist is overridden, which means that the Ingress admin should add all the words\nthat should be blocked; here is a suggested blocklist.\n\n_**suggested:**_ `\"load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,},',\\\"\"`\n\n## hide-headers\n\nSets additional headers that will not be passed from the upstream server to the client response.\n_**default:**_ empty\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_hide_header](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_hide_header)\n\n## access-log-params\n\nAdditional params for access_log. For example, buffer=16k, gzip, flush=1m\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_log_module.html#access_log](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_log_module.html#access_log)\n\n## access-log-path\n\nAccess log path for both http and stream context. Goes to `\/var\/log\/nginx\/access.log` by default.\n\n__Note:__ the file `\/var\/log\/nginx\/access.log` is a symlink to `\/dev\/stdout`\n\n## http-access-log-path\n\nAccess log path for http context globally.\n_**default:**_ ""\n\n__Note:__ If not specified, the `access-log-path` will be used.\n\n## stream-access-log-path\n\nAccess log path for stream context globally.\n_**default:**_ ""\n\n__Note:__ If not specified, the `access-log-path` will be used.\n\n## enable-access-log-for-default-backend\n\nEnables logging access to the default backend. _**default:**_ is disabled.\n\n## error-log-path\n\nError log path.
Goes to `\/var\/log\/nginx\/error.log` by default.\n\n__Note:__ the file `\/var\/log\/nginx\/error.log` is a symlink to `\/dev\/stderr`\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/ngx_core_module.html#error_log](https:\/\/nginx.org\/en\/docs\/ngx_core_module.html#error_log)\n\n## enable-modsecurity\n\nEnables the modsecurity module for NGINX. _**default:**_ is disabled\n\n## enable-owasp-modsecurity-crs\n\nEnables the OWASP ModSecurity Core Rule Set (CRS). _**default:**_ is disabled\n\n## modsecurity-snippet\n\nAdds custom rules to the modsecurity section of the nginx configuration.\n\n## client-header-buffer-size\n\nAllows configuring a custom buffer size for reading the client request header.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#client_header_buffer_size](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#client_header_buffer_size)\n\n## client-header-timeout\n\nDefines a timeout for reading the client request header, in seconds.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#client_header_timeout](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#client_header_timeout)\n\n## client-body-buffer-size\n\nSets the buffer size for reading the client request body.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#client_body_buffer_size](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#client_body_buffer_size)\n\n## client-body-timeout\n\nDefines a timeout for reading the client request body, in seconds.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#client_body_timeout](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#client_body_timeout)\n\n## disable-access-log\n\nDisables the Access Log from the entire Ingress Controller.
_**default:**_ `false`\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_log_module.html#access_log](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_log_module.html#access_log)\n\n## disable-ipv6\n\nDisables listening on IPv6. _**default:**_ `false`; IPv6 listening is enabled\n\n## disable-ipv6-dns\n\nDisables IPv6 for the nginx DNS resolver. _**default:**_ `false`; IPv6 resolving enabled.\n\n## enable-underscores-in-headers\n\nEnables underscores in header names. _**default:**_ is disabled\n\n## enable-ocsp\n\nEnables [Online Certificate Status Protocol stapling](https:\/\/en.wikipedia.org\/wiki\/OCSP_stapling) (OCSP) support.\n_**default:**_ is disabled\n\n## ignore-invalid-headers\n\nSets if header fields with invalid names should be ignored.\n_**default:**_ is enabled\n\n## retry-non-idempotent\n\nSince 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value "true".\n\n## error-log-level\n\nConfigures the logging level of errors. Possible values, listed in order of increasing severity, are `debug`, `info`, `notice`, `warn`, `error`, `crit`, `alert` and `emerg`.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/ngx_core_module.html#error_log](https:\/\/nginx.org\/en\/docs\/ngx_core_module.html#error_log)\n\n## http2-max-field-size\n\n!!! warning\n    This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use [large-client-header-buffers](#large-client-header-buffers) instead.\n\nLimits the maximum size of an HPACK-compressed request header field.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_v2_module.html#http2_max_field_size](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_v2_module.html#http2_max_field_size)\n\n## http2-max-header-size\n\n!!! warning\n    This feature was deprecated in 1.1.3 and will be removed in 1.3.0.
Use [large-client-header-buffers](#large-client-header-buffers) instead.\n\nLimits the maximum size of the entire request header list after HPACK decompression.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_v2_module.html#http2_max_header_size](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_v2_module.html#http2_max_header_size)\n\n## http2-max-requests\n\n!!! warning\n    This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use [upstream-keepalive-requests](#upstream-keepalive-requests) instead.\n\nSets the maximum number of requests (including push requests) that can be served through one HTTP\/2 connection, after which the next client request will lead to connection closing and the need to establish a new connection.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_v2_module.html#http2_max_requests](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_v2_module.html#http2_max_requests)\n\n## http2-max-concurrent-streams\n\nSets the maximum number of concurrent HTTP\/2 streams in a connection.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_v2_module.html#http2_max_concurrent_streams](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_v2_module.html#http2_max_concurrent_streams)\n\n## hsts\n\nEnables or disables the header HSTS in servers running SSL.\nHTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS, instead of HTTP.
It provides protection against protocol downgrade attacks and cookie theft.\n\n_References:_\n\n- [https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/Security\/HTTP_strict_transport_security](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/Security\/HTTP_strict_transport_security)\n- [https:\/\/blog.qualys.com\/securitylabs\/2016\/03\/28\/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server](https:\/\/blog.qualys.com\/securitylabs\/2016\/03\/28\/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server)\n\n## hsts-include-subdomains\n\nEnables or disables the use of HSTS in all the subdomains of the server-name.\n\n## hsts-max-age\n\nSets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.\n\n## hsts-preload\n\nEnables or disables the preload attribute in the HSTS feature (when it is enabled).\n\n## keep-alive\n\nSets the time, in seconds, during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#keepalive_timeout](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#keepalive_timeout)\n\n!!! 
important\n    Setting `keep-alive: '0'` will most likely break concurrent http\/2 requests due to changes introduced with nginx 1.19.7\n\n```\nChanges with nginx 1.19.7                                        16 Feb 2021\n\n    *) Change: connections handling in HTTP\/2 has been changed to better\n       match HTTP\/1.x; the \"http2_recv_timeout\", \"http2_idle_timeout\", and\n       \"http2_max_requests\" directives have been removed, the\n       \"keepalive_timeout\" and \"keepalive_requests\" directives should be\n       used instead.\n```\n\n_References:_\n[nginx change log](https:\/\/nginx.org\/en\/CHANGES)\n[nginx issue tracker](https:\/\/trac.nginx.org\/nginx\/ticket\/2155)\n[nginx mailing list](https:\/\/mailman.nginx.org\/pipermail\/nginx\/2021-May\/060697.html)\n\n## keep-alive-requests\n\nSets the maximum number of requests that can be served through one keep-alive connection.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#keepalive_requests](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#keepalive_requests)\n\n## large-client-header-buffers\n\nSets the maximum number and size of buffers used for reading large client request header. 
_**default:**_ 4 8k\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#large_client_header_buffers](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#large_client_header_buffers)\n\n## log-format-escape-none\n\nSets if the escape parameter is disabled entirely for character escaping in variables ("true") or controlled by [log-format-escape-json](#log-format-escape-json) ("false"). This applies to the nginx [log format](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_log_module.html#log_format).\n\n## log-format-escape-json\n\nSets if the escape parameter allows JSON ("true") or default character escaping in variables ("false"). This applies to the nginx [log format](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_log_module.html#log_format).\n\n## log-format-upstream\n\nSets the nginx [log format](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_log_module.html#log_format).\nExample for json output:\n\n```json\n\nlog-format-upstream: '{"time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr", "x_forwarded_for": "$proxy_add_x_forwarded_for", "request_id": "$req_id",\n  "remote_user": "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status": $status, "vhost": "$host", "request_proto": "$server_protocol",\n  "path": "$uri", "request_query": "$args", "request_length": $request_length, "duration": $request_time,"method": "$request_method", "http_referrer": "$http_referer",\n  "http_user_agent": "$http_user_agent" }'\n```\n\nPlease check the [log-format](log-format.md) for the definition of each field.\n\n## log-format-stream\n\nSets the nginx [stream format](https:\/\/nginx.org\/en\/docs\/stream\/ngx_stream_log_module.html#log_format).\n\n## enable-multi-accept\n\nIf disabled, a worker process will accept one new connection at a time.
Otherwise, a worker process will accept all new connections at a time.\n_**default:**_ true\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/ngx_core_module.html#multi_accept](https:\/\/nginx.org\/en\/docs\/ngx_core_module.html#multi_accept)\n\n## max-worker-connections\n\nSets the [maximum number of simultaneous connections](https:\/\/nginx.org\/en\/docs\/ngx_core_module.html#worker_connections) that can be opened by each worker process.\n0 will use the value of [max-worker-open-files](#max-worker-open-files).\n_**default:**_ 16384\n\n!!! tip\n    Using 0 in scenarios of high load improves performance at the cost of increasing RAM utilization (even on idle).\n\n## max-worker-open-files\n\nSets the [maximum number of files](https:\/\/nginx.org\/en\/docs\/ngx_core_module.html#worker_rlimit_nofile) that can be opened by each worker process.\nThe default of 0 means \"max open files (system's limit) - 1024\".\n_**default:**_ 0\n\n## map-hash-bucket-size\n\nSets the bucket size for the [map variables hash tables](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_map_module.html#map_hash_bucket_size). The details of setting up hash tables are provided in a separate [document](https:\/\/nginx.org\/en\/docs\/hash.html).\n\n## proxy-real-ip-cidr\n\nIf `use-forwarded-headers` or `use-proxy-protocol` is enabled, `proxy-real-ip-cidr` defines the default IP\/network address of your external load balancer. Can be a comma-separated list of CIDR blocks.\n_**default:**_ \"0.0.0.0\/0\"\n\n## proxy-set-headers\n\nSets custom headers from named configmap before sending traffic to backends. The value format is namespace\/name.  
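As a sketch of how the value is consumed (the `ingress-nginx` namespace and the `custom-headers` ConfigMap name here are illustrative, not required), the controller ConfigMap references a second ConfigMap whose key\/value pairs become the headers:\n\n```yaml\n# Controller ConfigMap: reference the headers ConfigMap as namespace\/name.\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\ndata:\n  proxy-set-headers: "ingress-nginx\/custom-headers"\n---\n# Referenced ConfigMap: each entry is sent to the backends as a request header.\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: custom-headers\n  namespace: ingress-nginx\ndata:\n  X-Different-Name: "true"\n  X-Request-Start: t=${msec}\n```\n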
See [example](https:\/\/kubernetes.github.io\/ingress-nginx\/examples\/customization\/custom-headers\/)\n\n## server-name-hash-max-size\n\nSets the maximum size of the [server names hash tables](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#server_names_hash_max_size) used in server names, map directive\u2019s values, MIME types, names of request header strings, etc.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/hash.html](https:\/\/nginx.org\/en\/docs\/hash.html)\n\n## server-name-hash-bucket-size\n\nSets the size of the bucket for the server names hash tables.\n\n_References:_\n\n- [https:\/\/nginx.org\/en\/docs\/hash.html](https:\/\/nginx.org\/en\/docs\/hash.html)\n- [https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#server_names_hash_bucket_size](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#server_names_hash_bucket_size)\n\n## proxy-headers-hash-max-size\n\nSets the maximum size of the proxy headers hash tables.\n\n_References:_\n\n- [https:\/\/nginx.org\/en\/docs\/hash.html](https:\/\/nginx.org\/en\/docs\/hash.html)\n- [https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_headers_hash_max_size](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_headers_hash_max_size)\n\n## reuse-port\n\nInstructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes.\n_**default:**_ true\n\n## proxy-headers-hash-bucket-size\n\nSets the size of the bucket for the proxy headers hash tables.\n\n_References:_\n\n- [https:\/\/nginx.org\/en\/docs\/hash.html](https:\/\/nginx.org\/en\/docs\/hash.html)\n- [https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size)\n\n## server-tokens\n\nSends the NGINX Server header in responses and displays the
NGINX version in error pages. _**default:**_ is disabled\n\n## ssl-ciphers\n\nSets the [ciphers](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_ciphers) list to enable. The ciphers are specified in the format understood by the OpenSSL library.\n\nThe default cipher list is:\n `ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384`.\n\nThe ordering of a cipher suite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect [forward secrecy](https:\/\/wiki.mozilla.org\/Security\/Server_Side_TLS#Forward_Secrecy).\n\nDHE-based ciphers will not be available until a DH parameter is configured; see [Custom DH parameters for perfect forward secrecy](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/docs\/examples\/customization\/ssl-dh-param).\n\nPlease check the [Mozilla SSL Configuration Generator](https:\/\/mozilla.github.io\/server-side-tls\/ssl-config-generator\/).\n\n__Note:__ the ssl_prefer_server_ciphers directive will be enabled by default for the http context.\n\n## ssl-ecdh-curve\n\nSpecifies a curve for ECDHE ciphers.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_ecdh_curve](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_ecdh_curve)\n\n## ssl-dh-param\n\nSets the name of the secret that contains the Diffie-Hellman key to help with "Perfect Forward Secrecy".\n\n_References:_\n\n- [https:\/\/wiki.openssl.org\/index.php\/Diffie-Hellman_parameters](https:\/\/wiki.openssl.org\/index.php\/Diffie-Hellman_parameters)\n- [https:\/\/wiki.mozilla.org\/Security\/Server_Side_TLS#DHE_handshake_and_dhparam](https:\/\/wiki.mozilla.org\/Security\/Server_Side_TLS#DHE_handshake_and_dhparam)\n- 
[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_dhparam](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_dhparam)\n\n## ssl-protocols\n\nSets the [SSL protocols](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_protocols) to use. The default is: `TLSv1.2 TLSv1.3`.\n\nPlease check the result of the configuration using `https:\/\/ssllabs.com\/ssltest\/analyze.html` or `https:\/\/testssl.sh`.\n\n## ssl-early-data\n\nEnables or disables TLS 1.3 [early data](https:\/\/tools.ietf.org\/html\/rfc8446#section-2.3), also known as Zero Round Trip\nTime Resumption (0-RTT).\n\nThis requires `ssl-protocols` to have `TLSv1.3` enabled. Enable this with caution, because requests sent within early\ndata are subject to [replay attacks](https:\/\/tools.ietf.org\/html\/rfc8470).\n\n[ssl_early_data](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_early_data). The default is: `false`.\n\n## ssl-session-cache\n\nEnables or disables the use of shared [SSL cache](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_session_cache) among worker processes.\n\n## ssl-session-cache-size\n\nSets the size of the [SSL shared session cache](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_session_cache) between all worker processes.\n\n## ssl-session-tickets\n\nEnables or disables session resumption through [TLS session tickets](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_session_tickets).\n\n## ssl-session-ticket-key\n\nSets the secret key used to encrypt and decrypt TLS session tickets. 
The value must be a valid base64 string.\nTo create a ticket: `openssl rand 80 | openssl enc -A -base64`\n\n[TLS session ticket-key](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_session_tickets), by default, a randomly generated key is used.\n\n## ssl-session-timeout\n\nSets the time during which a client may [reuse the session](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_session_timeout) parameters stored in a cache.\n\n## ssl-buffer-size\n\nSets the size of the [SSL buffer](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_buffer_size) used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).\n\n_References:_\n[https:\/\/www.igvita.com\/2013\/12\/16\/optimizing-nginx-tls-time-to-first-byte\/](https:\/\/www.igvita.com\/2013\/12\/16\/optimizing-nginx-tls-time-to-first-byte\/)\n\n## use-proxy-protocol\n\nEnables or disables the [PROXY protocol](https:\/\/www.nginx.com\/resources\/admin-guide\/proxy-protocol\/) to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).\n\n## proxy-protocol-header-timeout\n\nSets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection.\n_**default:**_ 5s\n\n## enable-aio-write\n\nEnables or disables the directive [aio_write](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#aio_write) that writes files asynchronously. _**default:**_ true\n\n## use-gzip\n\nEnables or disables compression of HTTP responses using the [\"gzip\" module](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_gzip_module.html). MIME types to compress are controlled by [gzip-types](#gzip-types). 
_**default:**_ false\n\n## use-geoip\n\nEnables or disables [\"geoip\" module](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_geoip_module.html) that creates variables with values depending on the client IP address, using the precompiled MaxMind databases.\n_**default:**_ true\n\n> __Note:__ MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. [discontinuation notice](https:\/\/support.maxmind.com\/geolite-legacy-discontinuation-notice\/). Consider [use-geoip2](#use-geoip2) below.\n\n## use-geoip2\n\nEnables the [geoip2 module](https:\/\/github.com\/leev\/ngx_http_geoip2_module) for NGINX.\nSince `0.27.0` and due to a [change in the MaxMind databases](https:\/\/blog.maxmind.com\/2019\/12\/significant-changes-to-accessing-and-using-geolite2-databases\/) a license is required to have access to the databases.\nFor this reason, it is required to define a new flag `--maxmind-license-key` in the ingress controller deployment to download the databases needed during the initialization of the ingress controller.\nAlternatively, it is possible to use a volume to mount the files `\/etc\/ingress-controller\/geoip\/GeoLite2-City.mmdb` and `\/etc\/ingress-controller\/geoip\/GeoLite2-ASN.mmdb`, avoiding the overhead of the download.\n\n!!! 
important\n    If the feature is enabled but the files are missing, GeoIP2 will not be enabled.\n\n_**default:**_ false\n\n## geoip2-autoreload-in-minutes\n\nEnables autoreload of the MaxMind databases for the [geoip2 module](https:\/\/github.com\/leev\/ngx_http_geoip2_module), setting the interval in minutes.\n\n_**default:**_ 0\n\n## enable-brotli\n\nEnables or disables compression of HTTP responses using the ["brotli" module](https:\/\/github.com\/google\/ngx_brotli).\nThe default mime type list to compress is: `application\/xml+rss application\/atom+xml application\/javascript application\/x-javascript application\/json application\/rss+xml application\/vnd.ms-fontobject application\/x-font-ttf application\/x-web-app-manifest+json application\/xhtml+xml application\/xml font\/opentype image\/svg+xml image\/x-icon text\/css text\/plain text\/x-component`.\n_**default:**_ false\n\n> __Note:__ Brotli does not work in Safari < 11. For more information see [https:\/\/caniuse.com\/#feat=brotli](https:\/\/caniuse.com\/#feat=brotli)\n\n## brotli-level\n\nSets the Brotli Compression Level that will be used. _**default:**_ 4\n\n## brotli-min-length\n\nMinimum length of responses, in bytes, that will be eligible for brotli compression.
_**default:**_ 20\n\n## brotli-types\n\nSets the MIME Types that will be compressed on-the-fly by brotli.\n_**default:**_ `application\/xml+rss application\/atom+xml application\/javascript application\/x-javascript application\/json application\/rss+xml application\/vnd.ms-fontobject application\/x-font-ttf application\/x-web-app-manifest+json application\/xhtml+xml application\/xml font\/opentype image\/svg+xml image\/x-icon text\/css text\/plain text\/x-component`\n\n## use-http2\n\nEnables or disables [HTTP\/2](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_v2_module.html) support in secure connections.\n\n## gzip-disable\n\nDisables [gzipping](http:\/\/nginx.org\/en\/docs\/http\/ngx_http_gzip_module.html#gzip_disable) of responses for requests with \"User-Agent\" header fields matching any of the specified regular expressions.\n\n## gzip-level\n\nSets the gzip Compression Level that will be used. _**default:**_ 1\n\n## gzip-min-length\n\nMinimum length of responses to be returned to the client before it is eligible for gzip compression, in bytes. _**default:**_ 256\n\n## gzip-types\n\nSets the MIME types in addition to \"text\/html\" to compress. The special value \"\\*\" matches any MIME type. Responses with the \"text\/html\" type are always compressed if [`use-gzip`](#use-gzip) is enabled.\n_**default:**_ `application\/atom+xml application\/javascript application\/x-javascript application\/json application\/rss+xml application\/vnd.ms-fontobject application\/x-font-ttf application\/x-web-app-manifest+json application\/xhtml+xml application\/xml font\/opentype image\/svg+xml image\/x-icon text\/css text\/plain text\/x-component`.\n\n## worker-processes\n\nSets the number of [worker processes](https:\/\/nginx.org\/en\/docs\/ngx_core_module.html#worker_processes).\nThe default of \"auto\" means number of available CPU cores.\n\n## worker-cpu-affinity\n\nBinds worker processes to the sets of CPUs. 
[worker_cpu_affinity](https:\/\/nginx.org\/en\/docs\/ngx_core_module.html#worker_cpu_affinity).\nBy default worker processes are not bound to any specific CPUs. The value can be:\n\n- "": an empty string indicates no affinity is applied.\n- cpumask: e.g. `0001 0010 0100 1000` to bind processes to specific CPUs.\n- auto: binds worker processes automatically to available CPUs.\n\n## worker-shutdown-timeout\n\nSets a timeout for Nginx to [wait for workers to gracefully shut down](https:\/\/nginx.org\/en\/docs\/ngx_core_module.html#worker_shutdown_timeout). _**default:**_ "240s"\n\n## load-balance\n\nSets the algorithm to use for load balancing.\nThe value can either be:\n\n- round_robin: to use the default round-robin load balancer\n- ewma: to use the Peak EWMA method for routing ([implementation](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/rootfs\/etc\/nginx\/lua\/balancer\/ewma.lua))\n\nThe default is `round_robin`.\n\n- To load balance using consistent hashing of IP or other variables, consider the `nginx.ingress.kubernetes.io\/upstream-hash-by` annotation.\n- To load balance using session cookies, consider the `nginx.ingress.kubernetes.io\/affinity` annotation.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/load_balancing.html](https:\/\/nginx.org\/en\/docs\/http\/load_balancing.html)\n\n## variables-hash-bucket-size\n\nSets the bucket size for the variables hash table.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_map_module.html#variables_hash_bucket_size](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_map_module.html#variables_hash_bucket_size)\n\n## variables-hash-max-size\n\nSets the maximum size of the variables hash table.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_map_module.html#variables_hash_max_size](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_map_module.html#variables_hash_max_size)\n\n## upstream-keepalive-connections\n\nActivates the cache for connections to upstream servers.
The connections parameter sets the maximum number of idle\nkeepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is\nexceeded, the least recently used connections are closed.\n_**default:**_ 320\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#keepalive](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#keepalive)\n\n\n## upstream-keepalive-time\n\nSets the maximum time during which requests can be processed through one keepalive connection.\n _**default:**_ \"1h\"\n\n_References:_\n[http:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#keepalive_time](http:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#keepalive_time)\n\n## upstream-keepalive-timeout\n\nSets a timeout during which an idle keepalive connection to an upstream server will stay open.\n _**default:**_ 60\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#keepalive_timeout](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#keepalive_timeout)\n\n\n## upstream-keepalive-requests\n\nSets the maximum number of requests that can be served through one keepalive connection. After the maximum number of\nrequests is made, the connection is closed.\n_**default:**_ 10000\n\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#keepalive_requests](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#keepalive_requests)\n\n\n## limit-conn-zone-variable\n\nSets parameters for a shared memory zone that will keep states for various keys of [limit_conn_zone](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_limit_conn_module.html#limit_conn_zone). 
The default is \"$binary_remote_addr\"; this variable\u2019s size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.\n\n## proxy-stream-timeout\n\nSets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/stream\/ngx_stream_proxy_module.html#proxy_timeout](https:\/\/nginx.org\/en\/docs\/stream\/ngx_stream_proxy_module.html#proxy_timeout)\n\n## proxy-stream-next-upstream\n\nWhen a connection to the proxied server cannot be established, determines whether a client connection will be passed to the next server.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/stream\/ngx_stream_proxy_module.html#proxy_next_upstream](https:\/\/nginx.org\/en\/docs\/stream\/ngx_stream_proxy_module.html#proxy_next_upstream)\n\n## proxy-stream-next-upstream-timeout\n\nLimits the time allowed to pass a connection to the next server. The 0 value turns off this limitation.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/stream\/ngx_stream_proxy_module.html#proxy_next_upstream_timeout](https:\/\/nginx.org\/en\/docs\/stream\/ngx_stream_proxy_module.html#proxy_next_upstream_timeout)\n\n## proxy-stream-next-upstream-tries\n\nLimits the number of possible tries for passing a request to the next server. 
The 0 value turns off this limitation.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/stream\/ngx_stream_proxy_module.html#proxy_next_upstream_tries](https:\/\/nginx.org\/en\/docs\/stream\/ngx_stream_proxy_module.html#proxy_next_upstream_tries)\n\n## proxy-stream-responses\n\nSets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/stream\/ngx_stream_proxy_module.html#proxy_responses](https:\/\/nginx.org\/en\/docs\/stream\/ngx_stream_proxy_module.html#proxy_responses)\n\n## bind-address\n\nSets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.\n\n## use-forwarded-headers\n\nIf true, NGINX passes the incoming `X-Forwarded-*` headers to upstreams. Use this option when NGINX is behind another L7 proxy \/ load balancer that is setting these headers.\n\nIf false, NGINX ignores incoming `X-Forwarded-*` headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or is behind an L3\/packet-based load balancer that doesn't alter the source IP in the packets.\n\n## enable-real-ip\n\n`enable-real-ip` enables the configuration of [https:\/\/nginx.org\/en\/docs\/http\/ngx_http_realip_module.html](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_realip_module.html). Specific attributes of the module can be configured further by using the `forwarded-for-header` and `proxy-real-ip-cidr` settings.\n\n## forwarded-for-header\n\nSets the header field for identifying the originating IP address of a client. _**default:**_ X-Forwarded-For\n\n## compute-full-forwarded-for\n\nAppends the remote address to the X-Forwarded-For header instead of replacing it. 
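As a minimal sketch of how the forwarding options above fit together when NGINX sits behind a trusted L7 proxy (the ConfigMap name and namespace shown here are installation-specific assumptions):\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  # name\/namespace depend on how the controller was installed\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\ndata:\n  # trust and pass through incoming X-Forwarded-* headers\n  use-forwarded-headers: \"true\"\n  # append the remote address instead of replacing the header\n  compute-full-forwarded-for: \"true\"\n  forwarded-for-header: \"X-Forwarded-For\"\n```\n\nAll ConfigMap values must be strings, which is why the boolean values are quoted.\n\n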
When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.\n\n## proxy-add-original-uri-header\n\nAdds an X-Original-Uri header with the original request URI to the backend request.\n\n## generate-request-id\n\nEnsures that X-Request-ID defaults to a random value if no X-Request-ID is present in the request.\n\n## jaeger-collector-host\n\nSpecifies the host to use when uploading traces. It must be a valid URL.\n\n## jaeger-collector-port\n\nSpecifies the port to use when uploading traces. _**default:**_ 6831\n\n## jaeger-endpoint\n\nSpecifies the endpoint to use when uploading traces to a collector. This takes priority over `jaeger-collector-host` if both are specified.\n\n## jaeger-service-name\n\nSpecifies the service name to use for any traces created. _**default:**_ nginx\n\n## jaeger-propagation-format\n\nSpecifies the traceparent\/tracestate propagation format. _**default:**_ jaeger\n\n## jaeger-sampler-type\n\nSpecifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. _**default:**_ const\n\n## jaeger-sampler-param\n\nSpecifies the argument to be passed to the sampler constructor. Must be a number.\nFor const this should be 0 to never sample and 1 to always sample. _**default:**_ 1\n\n## jaeger-sampler-host\n\nSpecifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL.\nLeave blank to use the default value (localhost). _**default:**_ http:\/\/127.0.0.1\n\n## jaeger-sampler-port\n\nSpecifies the custom remote sampler port to be passed to the sampler constructor. Must be a number. _**default:**_ 5778\n\n## jaeger-trace-context-header-name\n\nSpecifies the header name used for passing trace context. _**default:**_ uber-trace-id\n\n## jaeger-debug-header\n\nSpecifies the header name used for force sampling. 
_**default:**_ jaeger-debug-id\n\n## jaeger-baggage-header\n\nSpecifies the header name used to submit baggage if there is no root span. _**default:**_ jaeger-baggage\n\n## jaeger-tracer-baggage-header-prefix\n\nSpecifies the header prefix used to propagate baggage. _**default:**_ uberctx-\n\n## datadog-collector-host\n\nSpecifies the datadog agent host to use when uploading traces. It must be a valid URL.\n\n## datadog-collector-port\n\nSpecifies the port to use when uploading traces. _**default:**_ 8126\n\n## datadog-service-name\n\nSpecifies the service name to use for any traces created. _**default:**_ nginx\n\n## datadog-environment\n\nSpecifies the environment this trace belongs to. _**default:**_ prod\n\n## datadog-operation-name-override\n\nOverrides the operation name to use for any traces created. _**default:**_ nginx.handle\n\n## datadog-priority-sampling\n\nSpecifies whether to use client-side sampling.\nIf true, this disables client-side sampling (thus ignoring `sample_rate`) and enables distributed priority sampling, where traces are sampled based on a combination of user-assigned priorities and configuration from the agent. _**default:**_ true\n\n## datadog-sample-rate\n\nSpecifies the sample rate for any traces created.\nThis is effective only when `datadog-priority-sampling` is `false`. _**default:**_ 1.0\n\n## enable-opentelemetry\n\nEnables the nginx OpenTelemetry extension. _**default:**_ is disabled\n\n_References:_\n[https:\/\/github.com\/open-telemetry\/opentelemetry-cpp-contrib](https:\/\/github.com\/open-telemetry\/opentelemetry-cpp-contrib\/tree\/main\/instrumentation\/nginx)\n\n## opentelemetry-operation-name\n\nSpecifies a custom name for the server span. _**default:**_ is empty\n\nFor example, set to \"HTTP $request_method $uri\".\n\n## otlp-collector-host\n\nSpecifies the host to use when uploading traces. It must be a valid URL.\n\n## otlp-collector-port\n\nSpecifies the port to use when uploading traces. 
_**default:**_ 4317\n\n## otel-service-name\n\nSpecifies the service name to use for any traces created. _**default:**_ nginx\n\n## opentelemetry-trust-incoming-span\n\nEnables or disables using spans from incoming requests as parents for created ones. _**default:**_ true\n\n## otel-sampler-parent-based\n\nUses a sampler implementation which, by default, will take a sample if the parent Activity is sampled. _**default:**_ false\n\n## otel-sampler-ratio\n\nSpecifies the sample rate for any traces created. _**default:**_ 0.01\n\n## otel-sampler\n\nSpecifies the sampler to be used when sampling traces. The available samplers are: AlwaysOff, AlwaysOn, TraceIdRatioBased, remote. _**default:**_ AlwaysOff\n\n## main-snippet\n\nAdds custom configuration to the main section of the nginx configuration.\n\n## http-snippet\n\nAdds custom configuration to the http section of the nginx configuration.\n\n## server-snippet\n\nAdds custom configuration to all the servers in the nginx configuration.\n\n## stream-snippet\n\nAdds custom configuration to the stream section of the nginx configuration.\n\n## location-snippet\n\nAdds custom configuration to all the locations in the nginx configuration.\n\nYou cannot use this to add new locations that proxy to the Kubernetes pods, as the snippet does not have access to the Go template functions. 
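As a hedged sketch, the snippet options above are plain ConfigMap keys whose values are injected verbatim into the generated nginx.conf (the directives shown are only illustrative):\n\n```yaml\ndata:\n  # injected into the http block of nginx.conf\n  http-snippet: |\n    map $http_user_agent $is_bot {\n      default 0;\n      \"~*bot\" 1;\n    }\n  # injected into every server block\n  server-snippet: |\n    add_header X-Served-By ingress-nginx always;\n```\n\n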
If you want to add custom locations you will have to [provide your own nginx.tmpl](https:\/\/kubernetes.github.io\/ingress-nginx\/user-guide\/nginx-configuration\/custom-template\/).\n\n## custom-http-errors\n\nSets which HTTP codes should be passed for processing with the [error_page directive](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#error_page).\n\nSetting at least one code also enables [proxy_intercept_errors](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_intercept_errors), which is required to process error_page.\n\nExample usage: `custom-http-errors: 404,415`\n\n## proxy-body-size\n\nSets the maximum allowed size of the client request body.\nSee NGINX [client_max_body_size](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#client_max_body_size).\n\n## proxy-connect-timeout\n\nSets the timeout for [establishing a connection with a proxied server](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_connect_timeout). It should be noted that this timeout cannot usually exceed 75 seconds.\n\nIt will also set the [grpc_connect_timeout](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_grpc_module.html#grpc_connect_timeout) for gRPC connections.\n\n## proxy-read-timeout\n\nSets the timeout in seconds for [reading a response from the proxied server](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_read_timeout). The timeout is set only between two successive read operations, not for the transmission of the whole response.\n\nIt will also set the [grpc_read_timeout](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_grpc_module.html#grpc_read_timeout) for gRPC connections.\n\n## proxy-send-timeout\n\nSets the timeout in seconds for [transmitting a request to the proxied server](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_send_timeout). 
The timeout is set only between two successive write operations, not for the transmission of the whole request.\n\nIt will also set the [grpc_send_timeout](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_grpc_module.html#grpc_send_timeout) for gRPC connections.\n\n## proxy-buffers-number\n\nSets the number of buffers used for [reading the first part of the response](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_buffers) received from the proxied server. This part usually contains a small response header.\n\n## proxy-buffer-size\n\nSets the size of the buffer used for [reading the first part of the response](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_buffer_size) received from the proxied server. This part usually contains a small response header.\n\n## proxy-busy-buffers-size\n\n[Limits the total size of buffers that can be busy](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_busy_buffers_size) sending a response to the client while the response is not yet fully read.\n\n## proxy-cookie-path\n\nSets a text that [should be changed in the path attribute](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_cookie_path) of the \u201cSet-Cookie\u201d header fields of a proxied server response.\n\n## proxy-cookie-domain\n\nSets a text that [should be changed in the domain attribute](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_cookie_domain) of the \u201cSet-Cookie\u201d header fields of a proxied server response.\n\n## proxy-next-upstream\n\nSpecifies in [which cases](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_next_upstream) a request should be passed to the next server.\n\n## proxy-next-upstream-timeout\n\n[Limits the time](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_next_upstream_timeout) in seconds during which a request can be passed to the next server.\n\n## proxy-next-upstream-tries\n\nLimits the number of [possible tries](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_next_upstream_tries) for passing a request to the next server.\n\n## proxy-redirect-from\n\nSets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. _**default:**_ off\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_redirect](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_redirect)\n\n## proxy-request-buffering\n\nEnables or disables [buffering of a client request body](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_request_buffering).\n\n## ssl-redirect\n\nSets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule).\n_**default:**_ \"true\"\n\n## force-ssl-redirect\n\nSets the global value of redirects (308) to HTTPS if the server has a default TLS certificate (defined in extra-args).\n_**default:**_ \"false\"\n\n## denylist-source-range\n\nSets the default denylisted IPs for each `server` block. This can be overwritten by an annotation on an Ingress rule.\nSee [ngx_http_access_module](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_access_module.html).\n\n## whitelist-source-range\n\nSets the default whitelisted IPs for each `server` block. This can be overwritten by an annotation on an Ingress rule.\nSee [ngx_http_access_module](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_access_module.html).\n\n## skip-access-log-urls\n\nSets a list of URLs that should not appear in the NGINX access log. This is useful with URLs like `\/health` or `health-check` that make reading the logs \"complex\". _**default:**_ is empty\n\n## limit-rate\n\nLimits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. 
The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#limit_rate](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#limit_rate)\n\n## limit-rate-after\n\nSets the initial amount after which the further transmission of a response to a client will be rate limited.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#limit_rate_after](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#limit_rate_after)\n\n## lua-shared-dicts\n\nCustomize the default Lua shared dictionaries or define more. You can use the following syntax to do so:\n\n```\nlua-shared-dicts: \"<my dict name>: <my dict size>, [<my dict name>: <my dict size>], ...\"\n```\n\nFor example, the following will set the default `certificate_data` dictionary to `100M` and will introduce a new dictionary called\n`my_custom_plugin`:\n\n```\nlua-shared-dicts: \"certificate_data: 100, my_custom_plugin: 5\"\n```\n\nYou can optionally set a size unit to allow for kilobyte granularity. Allowed units are 'm' or 'k' (case-insensitive), and it defaults to MB if no unit is provided. 
Here is a similar example, but the `my_custom_plugin` dict is only 512KB.\n\n```\nlua-shared-dicts: \"certificate_data: 100, my_custom_plugin: 512k\"\n```\n\n## http-redirect-code\n\nSets the HTTP status code to be used in redirects.\nSupported codes are [301](https:\/\/developer.mozilla.org\/docs\/Web\/HTTP\/Status\/301), [302](https:\/\/developer.mozilla.org\/docs\/Web\/HTTP\/Status\/302), [307](https:\/\/developer.mozilla.org\/docs\/Web\/HTTP\/Status\/307) and [308](https:\/\/developer.mozilla.org\/docs\/Web\/HTTP\/Status\/308).\n_**default:**_ 308\n\n> __Why is the default code 308?__\n\n> [RFC 7238](https:\/\/tools.ietf.org\/html\/rfc7238) was created to define the 308 (Permanent Redirect) status code, which is similar to 301 (Moved Permanently) but keeps the payload in the redirect. This is important when redirecting requests made with methods like POST.\n\n## proxy-buffering\n\nEnables or disables [buffering of responses from the proxied server](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_buffering).\n\n## limit-req-status-code\n\nSets the [status code to return in response to rejected requests](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_limit_req_module.html#limit_req_status). _**default:**_ 503\n\n## limit-conn-status-code\n\nSets the [status code to return in response to rejected connections](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_limit_conn_module.html#limit_conn_status). _**default:**_ 503\n\n## enable-syslog\n\nEnables the [syslog](https:\/\/nginx.org\/en\/docs\/syslog.html) feature for the access log and error log. _**default:**_ false\n\n## syslog-host\n\nSets the address of the syslog server. The address can be specified as a domain name or IP address.\n\n## syslog-port\n\nSets the port of the syslog server. 
_**default:**_ 514\n\n## no-tls-redirect-locations\n\nA comma-separated list of locations on which HTTP requests will never get redirected to their HTTPS counterpart.\n_**default:**_ \"\/.well-known\/acme-challenge\"\n\n## global-allowed-response-headers\n\nA comma-separated list of allowed response headers inside the [custom headers annotations](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/docs\/user-guide\/nginx-configuration\/annotations.md#custom-headers).\n\n## global-auth-url\n\nA URL to an existing service that provides authentication for all the locations.\nSimilar to the Ingress rule annotation `nginx.ingress.kubernetes.io\/auth-url`.\nLocations that should not get authenticated can be listed using `no-auth-locations`. See [no-auth-locations](#no-auth-locations). In addition, each service can be excluded from authentication via the annotation `enable-global-auth` set to \"false\".\n_**default:**_ \"\"\n\n_References:_ [https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/docs\/user-guide\/nginx-configuration\/annotations.md#external-authentication](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/docs\/user-guide\/nginx-configuration\/annotations.md#external-authentication)\n\n## global-auth-method\n\nAn HTTP method to use for an existing service that provides authentication for all the locations.\nSimilar to the Ingress rule annotation `nginx.ingress.kubernetes.io\/auth-method`.\n_**default:**_ \"\"\n\n## global-auth-signin\n\nSets the location of the error page for an existing service that provides authentication for all the locations.\nSimilar to the Ingress rule annotation `nginx.ingress.kubernetes.io\/auth-signin`.\n_**default:**_ \"\"\n\n## global-auth-signin-redirect-param\n\nSets the query parameter in the error page signin URL which contains the original URL of the request that failed authentication.\nSimilar to the Ingress rule annotation `nginx.ingress.kubernetes.io\/auth-signin-redirect-param`.\n_**default:**_ 
\"rd\"\n\n## global-auth-response-headers\n\nSets the headers to pass to the backend once the authentication request completes. Applied to all the locations.\nSimilar to the Ingress rule annotation `nginx.ingress.kubernetes.io\/auth-response-headers`.\n_**default:**_ \"\"\n\n## global-auth-request-redirect\n\nSets the X-Auth-Request-Redirect header value. Applied to all the locations.\nSimilar to the Ingress rule annotation `nginx.ingress.kubernetes.io\/auth-request-redirect`.\n_**default:**_ \"\"\n\n## global-auth-snippet\n\nSets a custom snippet to use with external authentication. Applied to all the locations.\nSimilar to the Ingress rule annotation `nginx.ingress.kubernetes.io\/auth-snippet`.\n_**default:**_ \"\"\n\n## global-auth-cache-key\n\nEnables caching for global auth requests. Specify a lookup key for auth responses, e.g. `$remote_user$http_authorization`.\n\n## global-auth-cache-duration\n\nSets a caching time for auth responses based on their response codes, e.g. `200 202 30m`. See [proxy_cache_valid](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_cache_valid) for details. You may specify multiple, comma-separated values: `200 202 10m, 401 5m`. _**default:**_ `200 202 401 5m`\n\n## global-auth-always-set-cookie\n\nAlways sets a cookie returned by the auth request. 
By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308.\n_**default:**_ false\n\n## no-auth-locations\n\nA comma-separated list of locations that should not get authenticated.\n_**default:**_ \"\/.well-known\/acme-challenge\"\n\n## block-cidrs\n\nA comma-separated list of IP addresses (or subnets), requests from which have to be blocked globally.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_access_module.html#deny](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_access_module.html#deny)\n\n## block-user-agents\n\nA comma-separated list of User-Agents, requests from which have to be blocked globally.\nBoth full strings and regular expressions can be used here. More details about valid patterns can be found in the `map` Nginx directive documentation.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_map_module.html#map](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_map_module.html#map)\n\n## block-referers\n\nA comma-separated list of Referers, requests from which have to be blocked globally.\nBoth full strings and regular expressions can be used here. More details about valid patterns can be found in the `map` Nginx directive documentation.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_map_module.html#map](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_map_module.html#map)\n\n## proxy-ssl-location-only\n\nSet if proxy-ssl parameters should be applied only on locations and not on servers.\n_**default:**_ is disabled\n\n## default-type\n\nSets the default MIME type of a response.\n_**default:**_ text\/html\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#default_type](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#default_type)\n\n## service-upstream\n\nSet if the service's Cluster IP and port should be used instead of a list of all endpoints. 
This can be overwritten by an annotation on an Ingress rule.\n_**default:**_ \"false\"\n\n## ssl-reject-handshake\n\nSet to reject SSL handshakes for unknown virtual hosts. This parameter helps mitigate fingerprinting based on the default certificate of the ingress.\n_**default:**_ \"false\"\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_reject_handshake](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_reject_handshake)\n\n## debug-connections\n\nEnables the debugging log for selected client connections.\n_**default:**_ \"\"\n\n_References:_\n[http:\/\/nginx.org\/en\/docs\/ngx_core_module.html#debug_connection](http:\/\/nginx.org\/en\/docs\/ngx_core_module.html#debug_connection)\n\n## strict-validate-path-type\n\nIngress objects contain a field called pathType that defines the proxy behavior. It can be `Exact`, `Prefix` or `ImplementationSpecific`.\n\nWhen pathType is configured as `Exact` or `Prefix`, a stricter validation applies, allowing only paths starting with \"\/\" and\ncontaining only alphanumeric characters and \"-\", \"_\" and additional \"\/\".\n\nWhen this option is enabled, the validation happens on the Admission Webhook, causing any Ingress not using pathType `ImplementationSpecific`\nand containing invalid characters to be denied.\n\nThis means that Ingress objects that rely on paths containing regex characters should use the `ImplementationSpecific` pathType.\n\nThe cluster admin should establish validation rules using mechanisms like [Open Policy Agent](https:\/\/www.openpolicyagent.org\/) to\nvalidate that only authorized users can use the `ImplementationSpecific` pathType and that only the authorized characters can be used.\n\n_**default:**_ \"true\"\n\n## grpc-buffer-size-kb\n\nSets the configuration for the gRPC buffer size parameter. 
If not set, it will use the default from NGINX.\n\n_References:_\n[https:\/\/nginx.org\/en\/docs\/http\/ngx_http_grpc_module.html#grpc_buffer_size](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_grpc_module.html#grpc_buffer_size)\n\n## relative-redirects\n\nUse relative redirects instead of absolute redirects. Absolute redirects are the default in nginx; RFC 7231 has allowed relative redirects since 2014.\nSimilar to the Ingress rule annotation `nginx.ingress.kubernetes.io\/relative-redirects`.\n\n_**default:**_ \"false\"\n\n_References:_\n- [https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#absolute_redirect](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#absolute_redirect)\n- [https:\/\/datatracker.ietf.org\/doc\/html\/rfc7231#section-7.1.2](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7231#section-7.1.2)
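Putting several of the options documented above together, a complete ConfigMap might look like the following sketch (the keys are the documented ones; the values and the name\/namespace are illustrative assumptions, not recommendations):\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\ndata:\n  # all values must be strings, hence the quoting\n  custom-http-errors: \"404,503\"\n  proxy-body-size: \"8m\"\n  proxy-connect-timeout: \"10\"\n  proxy-read-timeout: \"60\"\n  ssl-redirect: \"true\"\n  relative-redirects: \"false\"\n```\n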
                                                                                                                                                client header timeout   client header timeout                                    int            60                                                                                                                                                                                                                                                                                                                                                                                                                                                      client body buffer size   client body buffer size                                string          8k                                                                                                                                                                                                                                                                                                                                                                                                                                                     client body timeout   client body timeout                                        int            60                                                                                                                                                                                                                                                                                                                                                                                                                                                      disable access log   disable access log                                          bool            false                                                                                                                                                  
                                                                                                                                                                                                                                                                                                disable ipv6   disable ipv6                                                      bool            false                                                                                                                                                                                                                                                                                                                                                                                                                                                  disable ipv6 dns   disable ipv6 dns                                              bool            false                                                                                                                                                                                                                                                                                                                                                                                                                                                  enable underscores in headers   enable underscores in headers                    bool            false                                                                                                                                                                                                                                                                                                                                                                                                                                                  enable ocsp   enable ocsp                                                        bool            false  
                                                                                                                                                                                                                                                                                                                                                                                                                                                ignore invalid headers   ignore invalid headers                                  bool            true                                                                                                                                                                                                                                                                                                                                                                                                                                                   retry non idempotent   retry non idempotent                                      bool            false                                                                                                                                                                                                                                                                                                                                                                                                                                                  error log level   error log level                                                string          notice                                                                                                                                                                                                                                                                                                                                                                                                         
                                        http2 max field size   http2 max field size                                      string                                                                                                                                                                                                                                                                                                                                                                        DEPRECATED in favour of  large client header buffers   large client header buffers       http2 max header size   http2 max header size                                    string                                                                                                                                                                                                                                                                                                                                                                        DEPRECATED in favour of  large client header buffers   large client header buffers       http2 max requests   http2 max requests                                          int            0                                                                                                                                                                                                                                                                                                                                                              DEPRECATED in favour of  keepalive requests   keepalive requests                         http2 max concurrent streams   http2 max concurrent streams                      int            128                                                                                                                                                                                                                                                             
                                                                                                                                                                                        hsts   hsts                                                                      bool            true                                                                                                                                                                                                                                                                                                                                                                                                                                                   hsts include subdomains   hsts include subdomains                                bool            true                                                                                                                                                                                                                                                                                                                                                                                                                                                   hsts max age   hsts max age                                                      string          31536000                                                                                                                                                                                                                                                                                                                                                                                                                                               hsts preload   hsts preload                                                      bool            false                                                                                                          
                                                                                                                                                                                                                                                                                                                                        keep alive   keep alive                                                          int            75                                                                                                                                                                                                                                                                                                                                                                                                                                                      keep alive requests   keep alive requests                                        int            1000                                                                                                                                                                                                                                                                                                                                                                                                                                                    large client header buffers   large client header buffers                        string          4 8k                                                                                                                                                                                                                                                                                                                                                                                                                                                   log format escape none   log format escape none                 
                 bool            false                                                                                                                                                                                                                                                                                                                                                                                                                                                  log format escape json   log format escape json                                  bool            false                                                                                                                                                                                                                                                                                                                                                                                                                                                  log format upstream   log format upstream                                        string           remote addr    remote user   time local    request   status  body bytes sent   http referer    http user agent   request length  request time   proxy upstream name    proxy alternative upstream name   upstream addr  upstream response length  upstream response time  upstream status  req id                                                                                                                                                     log format stream   log format stream                                            string            remote addr    time local   protocol  status  bytes sent  bytes received  session time                                                                                                                                                                                                                                                                               
                                                                                enable multi accept   enable multi accept                                        bool            true                                                                                                                                                                                                                                                                                                                                                                                                                                                   max worker connections   max worker connections                                  int            16384                                                                                                                                                                                                                                                                                                                                                                                                                                                   max worker open files   max worker open files                                    int            0                                                                                                                                                                                                                                                                                                                                                                                                                                                       map hash bucket size   max hash bucket size                                      int            64                                                                                                                                                                                                                      
                                                                                                                                                                                                                                nginx status ipv4 whitelist   nginx status ipv4 whitelist                          string        127 0 0 1                                                                                                                                                                                                                                                                                                                                                                                                                                              nginx status ipv6 whitelist   nginx status ipv6 whitelist                          string          1                                                                                                                                                                                                                                                                                                                                                                                                                                                    proxy real ip cidr   proxy real ip cidr                                            string        0 0 0 0 0                                                                                                                                                                                                                                                                                                                                                                                                                                              proxy set headers   proxy set headers                                            string                                                                                 
                                                                                                                                                                                                                                                                                                                                                                                server name hash max size   server name hash max size                            int            1024                                                                                                                                                                                                                                                                                                                                                                                                                                                    server name hash bucket size   server name hash bucket size                      int              size of the processor s cache line                                                                                                                                                                                                                                                                                                                              proxy headers hash max size   proxy headers hash max size                        int            512                                                                                                                                                                                                                                                                                                                                                                                                                                                     proxy headers hash bucket size   proxy headers hash bucket size                  int            64            
                                                                                                                                                                                                                                                                                                                                                                                                                                          reuse port   reuse port                                                          bool            true                                                                                                                                                                                                                                                                                                                                                                                                                                                   server tokens   server tokens                                                    bool            false                                                                                                                                                                                                                                                                                                                                                                                                                                                  ssl ciphers   ssl ciphers                                                        string          ECDHE ECDSA AES128 GCM SHA256 ECDHE RSA AES128 GCM SHA256 ECDHE ECDSA AES256 GCM SHA384 ECDHE RSA AES256 GCM SHA384 ECDHE ECDSA CHACHA20 POLY1305 ECDHE RSA CHACHA20 POLY1305 DHE RSA AES128 GCM SHA256 DHE RSA AES256 GCM SHA384                                                                                                                                                                                    
                                  ssl ecdh curve   ssl ecdh curve                                                  string          auto                                                                                                                                                                                                                                                                                                                                                                                                                                                   ssl dh param   ssl dh param                                                      string                                                                                                                                                                                                                                                                                                                                                                                                                                                                 ssl protocols   ssl protocols                                                    string          TLSv1 2 TLSv1 3                                                                                                                                                                                                                                                                                                                                                                                                                                        ssl session cache   ssl session cache                                            bool            true                                                                                                                                                                                                                                                                 
                                                                                                                                                                                  ssl session cache size   ssl session cache size                                  string          10m                                                                                                                                                                                                                                                                                                                                                                                                                                                    ssl session tickets   ssl session tickets                                        bool            false                                                                                                                                                                                                                                                                                                                                                                                                                                                  ssl session ticket key   ssl session ticket key                                  string           Randomly Generated                                                                                                                                                                                                                                                                                                                                              ssl session timeout   ssl session timeout                                        string          10m                                                                                                                                                                                                        
|[ssl-buffer-size](#ssl-buffer-size)|string|"4k"|
|[use-proxy-protocol](#use-proxy-protocol)|bool|"false"|
|[proxy-protocol-header-timeout](#proxy-protocol-header-timeout)|string|"5s"|
|[enable-aio-write](#enable-aio-write)|bool|"true"|
|[use-gzip](#use-gzip)|bool|"false"|
|[use-geoip](#use-geoip)|bool|"true"|
|[use-geoip2](#use-geoip2)|bool|"false"|
|[geoip2-autoreload-in-minutes](#geoip2-autoreload-in-minutes)|int|"0"|
|[enable-brotli](#enable-brotli)|bool|"false"|
|[brotli-level](#brotli-level)|int|4|
|[brotli-min-length](#brotli-min-length)|int|20|
|[brotli-types](#brotli-types)|string|"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component"|
|[use-http2](#use-http2)|bool|"true"|
|[gzip-disable](#gzip-disable)|string|""|
|[gzip-level](#gzip-level)|int|1|
|[gzip-min-length](#gzip-min-length)|int|256|
|[gzip-types](#gzip-types)|string|"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component"|
|[worker-processes](#worker-processes)|string|`<Number of CPUs>`|
|[worker-cpu-affinity](#worker-cpu-affinity)|string|""|
|[worker-shutdown-timeout](#worker-shutdown-timeout)|string|"240s"|
|[enable-serial-reloads](#enable-serial-reloads)|bool|"false"|
|[load-balance](#load-balance)|string|"round_robin"|
|[variables-hash-bucket-size](#variables-hash-bucket-size)|int|128|
|[variables-hash-max-size](#variables-hash-max-size)|int|2048|
|[upstream-keepalive-connections](#upstream-keepalive-connections)|int|320|
|[upstream-keepalive-time](#upstream-keepalive-time)|string|"1h"|
|[upstream-keepalive-timeout](#upstream-keepalive-timeout)|int|60|
|[upstream-keepalive-requests](#upstream-keepalive-requests)|int|10000|
|[limit-conn-zone-variable](#limit-conn-zone-variable)|string|"$binary_remote_addr"|
|[proxy-stream-timeout](#proxy-stream-timeout)|string|"600s"|
|[proxy-stream-next-upstream](#proxy-stream-next-upstream)|bool|"true"|
|[proxy-stream-next-upstream-timeout](#proxy-stream-next-upstream-timeout)|string|"600s"|
|[proxy-stream-next-upstream-tries](#proxy-stream-next-upstream-tries)|int|3|
|[proxy-stream-responses](#proxy-stream-responses)|int|1|
|[bind-address](#bind-address)|string|""|
|[use-forwarded-headers](#use-forwarded-headers)|bool|"false"|
|[enable-real-ip](#enable-real-ip)|bool|"false"|
|[forwarded-for-header](#forwarded-for-header)|string|"X-Forwarded-For"|
|[compute-full-forwarded-for](#compute-full-forwarded-for)|bool|"false"|
|[proxy-add-original-uri-header](#proxy-add-original-uri-header)|bool|"false"|
|[generate-request-id](#generate-request-id)|bool|"true"|
|[jaeger-collector-host](#jaeger-collector-host)|string|""|
|[jaeger-collector-port](#jaeger-collector-port)|int|6831|
|[jaeger-endpoint](#jaeger-endpoint)|string|""|
|[jaeger-service-name](#jaeger-service-name)|string|"nginx"|
|[jaeger-propagation-format](#jaeger-propagation-format)|string|"jaeger"|
|[jaeger-sampler-type](#jaeger-sampler-type)|string|"const"|
|[jaeger-sampler-param](#jaeger-sampler-param)|string|"1"|
|[jaeger-sampler-host](#jaeger-sampler-host)|string|"http://127.0.0.1"|
|[jaeger-sampler-port](#jaeger-sampler-port)|int|5778|
|[jaeger-trace-context-header-name](#jaeger-trace-context-header-name)|string|"uber-trace-id"|
|[jaeger-debug-header](#jaeger-debug-header)|string|"uber-debug-id"|
|[jaeger-baggage-header](#jaeger-baggage-header)|string|"jaeger-baggage"|
|[jaeger-trace-baggage-header-prefix](#jaeger-trace-baggage-header-prefix)|string|"uberctx-"|
|[datadog-collector-host](#datadog-collector-host)|string|""|
|[datadog-collector-port](#datadog-collector-port)|int|8126|
|[datadog-service-name](#datadog-service-name)|string|"nginx"|
|[datadog-environment](#datadog-environment)|string|"prod"|
|[datadog-operation-name-override](#datadog-operation-name-override)|string|"nginx.handle"|
|[datadog-priority-sampling](#datadog-priority-sampling)|bool|"true"|
|[datadog-sample-rate](#datadog-sample-rate)|float|1.0|
|[enable-opentelemetry](#enable-opentelemetry)|bool|"false"|
|[opentelemetry-trust-incoming-span](#opentelemetry-trust-incoming-span)|bool|"true"|
|[opentelemetry-operation-name](#opentelemetry-operation-name)|string|""|
|[opentelemetry-config](#opentelemetry-config)|string|"/etc/nginx/opentelemetry.toml"|
|[otlp-collector-host](#otlp-collector-host)|string|""|
|[otlp-collector-port](#otlp-collector-port)|int|4317|
                                                                                                                                                           otel max queuesize   otel max queuesize                                          int                                                                                                                                                                                                                                                                                                                                                                                                                                                                    otel schedule delay millis   otel schedule delay millis                          int                                                                                                                                                                                                                                                                                                                                                                                                                                                                    otel max export batch size   otel max export batch size                          int                                                                                                                                                                                                                                                                                                                                                                                                                                                                    otel service name   otel service name                                            string          nginx                                                                                                                                       
                                                                                                                                                                                                                                                                                                           otel sampler   otel sampler                                                      string          AlwaysOff                                                                                                                                                                                                                                                                                                                                                                                                                                              otel sampler parent based   otel sampler parent based                            bool            false                                                                                                                                                                                                                                                                                                                                                                                                                                                  otel sampler ratio   otel sampler ratio                                          float          0 01                                                                                                                                                                                                                                                                                                                                                                                                                                                    main snippet   main snippet                                                      string      
                                                                                                                                                                                                                                                                                                                                                                                                                                                           http snippet   http snippet                                                      string                                                                                                                                                                                                                                                                                                                                                                                                                                                                 server snippet   server snippet                                                  string                                                                                                                                                                                                                                                                                                                                                                                                                                                                 stream snippet   stream snippet                                                  string                                                                                                                                                                                                                                                                                                                                                                                                              
                                                   location snippet   location snippet                                              string                                                                                                                                                                                                                                                                                                                                                                                                                                                                 custom http errors   custom http errors                                            int            int                                                                                                                                                                                                                                                                                                                                                                                                                                                   proxy body size   proxy body size                                                string          1m                                                                                                                                                                                                                                                                                                                                                                                                                                                     proxy connect timeout   proxy connect timeout                                    int            5                                                                                                                                                                                                                                                    
                                                                                                                                                                                                   proxy read timeout   proxy read timeout                                          int            60                                                                                                                                                                                                                                                                                                                                                                                                                                                      proxy send timeout   proxy send timeout                                          int            60                                                                                                                                                                                                                                                                                                                                                                                                                                                      proxy buffers number   proxy buffers number                                      int            4                                                                                                                                                                                                                                                                                                                                                                                                                                                       proxy buffer size   proxy buffer size                                            string          4k                                                                                                  
                                                                                                                                                                                                                                                                                                                                                   proxy busy buffers size   proxy busy buffers size                                string          8k                                                                                                                                                                                                                                                                                                                                                                                                                                                     proxy cookie path   proxy cookie path                                            string          off                                                                                                                                                                                                                                                                                                                                                                                                                                                    proxy cookie domain   proxy cookie domain                                        string          off                                                                                                                                                                                                                                                                                                                                                                                                                                                    proxy next upstream   proxy next upstream            
                            string          error timeout                                                                                                                                                                                                                                                                                                                                                                                                                                          proxy next upstream timeout   proxy next upstream timeout                        int            0                                                                                                                                                                                                                                                                                                                                                                                                                                                       proxy next upstream tries   proxy next upstream tries                            int            3                                                                                                                                                                                                                                                                                                                                                                                                                                                       proxy redirect from   proxy redirect from                                        string          off                                                                                                                                                                                                                                                                                                                                                         
                                                                                           proxy request buffering   proxy request buffering                                string          on                                                                                                                                                                                                                                                                                                                                                                                                                                                     ssl redirect   ssl redirect                                                      bool            true                                                                                                                                                                                                                                                                                                                                                                                                                                                   force ssl redirect   force ssl redirect                                          bool            false                                                                                                                                                                                                                                                                                                                                                                                                                                                  denylist source range   denylist source range                                      string         string                                                                                                                                                                                                     
                                                                                                                                                                                                                                           whitelist source range   whitelist source range                                    string         string                                                                                                                                                                                                                                                                                                                                                                                                                                                skip access log urls   skip access log urls                                        string         string                                                                                                                                                                                                                                                                                                                                                                                                                                                limit rate   limit rate                                                          int            0                                                                                                                                                                                                                                                                                                                                                                                                                                                       limit rate after   limit rate after                                              int            0                                                            
                                                                                                                                                                                                                                                                                                                                                                                           lua shared dicts   lua shared dicts                                              string                                                                                                                                                                                                                                                                                                                                                                                                                                                                 http redirect code   http redirect code                                          int            308                                                                                                                                                                                                                                                                                                                                                                                                                                                     proxy buffering   proxy buffering                                                string          off                                                                                                                                                                                                                                                                                                                                                                                                                                                    limit req 
status code   limit req status code                                    int            503                                                                                                                                                                                                                                                                                                                                                                                                                                                     limit conn status code   limit conn status code                                  int            503                                                                                                                                                                                                                                                                                                                                                                                                                                                     enable syslog   enable syslog                                                    bool            false                                                                                                                                                                                                                                                                                                                                                                                                                                                  syslog host   syslog host                                                        string                                                                                                                                                                                                                                                                                                                           
                                                                                                                                      syslog port   syslog port                                                        int            514                                                                                                                                                                                                                                                                                                                                                                                                                                                     no tls redirect locations   no tls redirect locations                            string            well known acme challenge                                                                                                                                                                                                                                                                                                                                                                                                                            global allowed response headers   global allowed response headers                string                                                                                                                                                                                                                                                                                                                                                                                                                                                                 global auth url   global auth url                                                string                                                                                                                                                                           
                                                                                                                                                                                                                                                                                      global auth method   global auth method                                          string                                                                                                                                                                                                                                                                                                                                                                                                                                                                 global auth signin   global auth signin                                          string                                                                                                                                                                                                                                                                                                                                                                                                                                                                 global auth signin redirect param   global auth signin redirect param            string          rd                                                                                                                                                                                                                                                                                                                                                                                                                                                     global auth response headers   global auth response headers                      string                           
                                                                                                                                                                                                                                                                                                                                                                                                                                      global auth request redirect   global auth request redirect                      string                                                                                                                                                                                                                                                                                                                                                                                                                                                                 global auth snippet   global auth snippet                                        string                                                                                                                                                                                                                                                                                                                                                                                                                                                                 global auth cache key   global auth cache key                                    string                                                                                                                                                                                                                                                                                                                                                                                                                                   
|[global-auth-cache-duration](#global-auth-cache-duration)|string|"200 202 401 5m"|
|[no-auth-locations](#no-auth-locations)|string|"/.well-known/acme-challenge"|
|[block-cidrs](#block-cidrs)|string|""|
|[block-user-agents](#block-user-agents)|string|""|
|[block-referers](#block-referers)|string|""|
|[proxy-ssl-location-only](#proxy-ssl-location-only)|bool|"false"|
|[default-type](#default-type)|string|"text/html"|
|[service-upstream](#service-upstream)|bool|"false"|
|[ssl-reject-handshake](#ssl-reject-handshake)|bool|"false"|
|[debug-connections](#debug-connections)|string|"127.0.0.1,1.1.1.1/24"|
|[strict-validate-path-type](#strict-validate-path-type)|bool|"true"|
|[grpc-buffer-size-kb](#grpc-buffer-size-kb)|int|0|
|[relative-redirects](#relative-redirects)|bool|false|

## add-headers

Sets custom headers from named configmap before sending traffic to the client. See [proxy-set-headers](#proxy-set-headers). [example](https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-headers)

## allow-backend-server-header

Enables the return of the header Server from the backend instead of the generic nginx string. _**default:**_ is disabled

## allow-cross-namespace-resources

Enables users to consume cross namespace resource on annotations, when it was previously enabled. _**default:**_ false

**Annotations that may be impacted with this change:**

- auth-secret
- auth-proxy-set-header
- auth-tls-secret
- fastcgi-params-configmap
- proxy-ssl-secret

## allow-snippet-annotations

Enables Ingress to parse and add *-snippet annotations/directives created by the user. _**default:**_ false

Warning: We recommend enabling this option only if you TRUST users with permission to create Ingress objects, as this may allow a user to add restricted configurations to the final nginx.conf file.

## annotations-risk-level

Represents the risk accepted on an annotation. If the risk is, for instance, Medium, annotations with risk High and Critical will not be accepted. Accepted values are Critical, High, Medium and Low. _**default:**_ High

## annotation-value-word-blocklist

Contains a comma-separated value of chars/words that are well known of being used to abuse Ingress configuration and must be blocked. Related to [CVE-2021-25742](https://github.com/kubernetes/ingress-nginx/issues/7837). When an annotation is detected with a value that matches one of the blocked bad words, the whole Ingress won't be configured. _**default:**_ ""

When doing this, the default blocklist is overridden, which means that the Ingress admin should add all the words that should be blocked; here is a suggested block list.

_**suggested:**_ `load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount`

## hide-headers

Sets additional header that will not be passed from the upstream server to the client response. _**default:**_ empty

_References:_ https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header

## access-log-params

Additional params for access_log. For example, buffer=16k, gzip, flush=1m.

_References:_ https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log

## access-log-path

Access log path for both http and stream context. Goes to `/var/log/nginx/access.log` by default.

__Note:__ the file `/var/log/nginx/access.log` is a symlink to `/dev/stdout`.

## http-access-log-path

Access log path for http context globally. _**default:**_ ""

__Note:__ If not specified, the `access-log-path` will be used.

## stream-access-log-path

Access log path for stream context globally. _**default:**_ ""

__Note:__ If not specified, the `access-log-path` will be used.

## enable-access-log-for-default-backend

Enables logging access to default backend. _**default:**_ is disabled

## error-log-path

Error log path. Goes to `/var/log/nginx/error.log` by default.

__Note:__ the file `/var/log/nginx/error.log` is a symlink to `/dev/stderr`.

_References:_ https://nginx.org/en/docs/ngx_core_module.html#error_log

## enable-modsecurity

Enables the modsecurity module for NGINX. _**default:**_ is disabled

## enable-owasp-modsecurity-crs

Enables the OWASP ModSecurity Core Rule Set (CRS). _**default:**_ is disabled

## modsecurity-snippet

Adds custom rules to the modsecurity section of the nginx configuration.

## client-header-buffer-size

Allows to configure a custom buffer size for reading client request header.

_References:_ https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size

## client-header-timeout

Defines a timeout for reading client request header, in seconds.

_References:_ https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout

## client-body-buffer-size

Sets buffer size for reading client request body.

_References:_ https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size

## client-body-timeout

Defines a timeout for reading client request body, in seconds.

_References:_ https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout

## disable-access-log

Disables the Access Log from the entire Ingress Controller. _**default:**_ false

_References:_ https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log

## disable-ipv6

Disable listening on IPV6. _**default:**_ false; IPv6 listening is enabled.

## disable-ipv6-dns

Disable IPV6 for nginx DNS resolver. _**default:**_ false; IPv6 resolving enabled.

## enable-underscores-in-headers

Enables underscores in header names. _**default:**_ is disabled

## enable-ocsp

Enables [Online Certificate Status Protocol stapling](https://en.wikipedia.org/wiki/OCSP_stapling) (OCSP) support. _**default:**_ is disabled

## ignore-invalid-headers

Set if header fields with invalid names should be ignored. _**default:**_ is enabled

## retry-non-idempotent

Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value "true".

## error-log-level

Configures the logging level of errors. Log levels above are listed in the order of increasing severity.

_References:_ https://nginx.org/en/docs/ngx_core_module.html#error_log

## http2-max-field-size

!!! warning
    This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use [large-client-header-buffers](#large-client-header-buffers) instead.

Limits the maximum size of an HPACK-compressed request header field.

_References:_ https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size

## http2-max-header-size

!!! warning
    This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use [large-client-header-buffers](#large-client-header-buffers) instead.

Limits the maximum size of the entire request header list after HPACK decompression.

_References:_ https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size
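All of these keys are plain string values in the NGINX Ingress Controller's ConfigMap, including booleans and sizes. A minimal sketch, assuming the ConfigMap name and namespace of a default Helm install (`ingress-nginx-controller` in `ingress-nginx`; yours may differ):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name; depends on your installation
  namespace: ingress-nginx         # assumed namespace
data:
  # every value is a quoted string, even booleans and numbers
  enable-modsecurity: "true"
  client-body-buffer-size: "16k"
  error-log-level: "notice"
```

The controller watches this ConfigMap and reconfigures NGINX when its values change.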
## http2-max-requests

!!! warning
    This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use [upstream-keepalive-requests](#upstream-keepalive-requests) instead.

Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection.

_References:_ https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests

## http2-max-concurrent-streams

Sets the maximum number of concurrent HTTP/2 streams in a connection.

_References:_ https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_concurrent_streams

## hsts

Enables or disables the header HSTS in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that it should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft.

_References:_

- https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
- https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server

## hsts-include-subdomains

Enables or disables the use of HSTS in all the subdomains of the server-name.

## hsts-max-age

Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.

## hsts-preload

Enables or disables the preload attribute in the HSTS feature (when it is enabled).

## keep-alive

Sets the time, in seconds, during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.

_References:_ https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout

!!! important
    Setting `keep-alive: '0'` will most likely break concurrent http/2 requests due to changes introduced with nginx 1.19.7:

    ```
    Changes with nginx 1.19.7                                        16 Feb 2021

        *) Change: connections handling in HTTP/2 has been changed to better
           match HTTP/1.x; the "http2_recv_timeout", "http2_idle_timeout", and
           "http2_max_requests" directives have been removed; the
           "keepalive_timeout" and "keepalive_requests" directives should be
           used instead.
    ```

_References:_

- [nginx change log](https://nginx.org/en/CHANGES)
- [nginx issue tracker](https://trac.nginx.org/nginx/ticket/2155)
- [nginx mailing list](https://mailman.nginx.org/pipermail/nginx/2021-May/060697.html)

## keep-alive-requests

Sets the maximum number of requests that can be served through one keep-alive connection.

_References:_ https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests

## large-client-header-buffers

Sets the maximum number and size of buffers used for reading large client request header. _**default:**_ 4 8k

_References:_ https://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers

## log-format-escape-none

Sets if the escape parameter is disabled entirely for character escaping in variables ("true") or controlled by log-format-escape-json ("false"). Sets the nginx [log format](https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format).

## log-format-escape-json

Sets if the escape parameter allows JSON ("true") or default characters escaping in variables ("false"). Sets the nginx [log format](https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format).

## log-format-upstream

Sets the nginx [log format](https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format).
Example for json output:

```json
log-format-upstream: '{"time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr",
  "x_forwarded_for": "$proxy_add_x_forwarded_for", "request_id": "$req_id",
  "remote_user": "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time,
  "status": $status, "vhost": "$host", "request_proto": "$server_protocol",
  "path": "$uri", "request_query": "$args", "request_length": $request_length,
  "duration": $request_time, "method": "$request_method", "http_referrer": "$http_referer",
  "http_user_agent": "$http_user_agent"}'
```

Please check the [log-format](log-format.md) for definition of each field.

## log-format-stream

Sets the nginx [stream format](https://nginx.org/en/docs/stream/ngx_stream_log_module.html#log_format).

## enable-multi-accept

If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at a time. _**default:**_ true

_References:_ https://nginx.org/en/docs/ngx_core_module.html#multi_accept

## max-worker-connections

Sets the [maximum number of simultaneous connections](https://nginx.org/en/docs/ngx_core_module.html#worker_connections) that can be opened by each worker process. 0 will use the value of [max-worker-open-files](#max-worker-open-files). _**default:**_ 16384

!!! tip
    Using 0 in scenarios of high load improves performance at the cost of increasing RAM utilization (even on idle).

## max-worker-open-files

Sets the [maximum number of files](https://nginx.org/en/docs/ngx_core_module.html#worker_rlimit_nofile) that can be opened by each worker process. The default of 0 means "max open files (system's limit) - 1024". _**default:**_ 0

## map-hash-bucket-size

Sets the bucket size for the [map variables hash tables](https://nginx.org/en/docs/http/ngx_http_map_module.html#map_hash_bucket_size). The details of setting up hash tables are provided in a separate [document](https://nginx.org/en/docs/hash.html).

## proxy-real-ip-cidr

If `use-forwarded-headers` or `use-proxy-protocol` is enabled, `proxy-real-ip-cidr` defines the default IP/network address of your external load balancer. Can be a comma-separated list of CIDR blocks. _**default:**_ "0.0.0.0/0"

## proxy-set-headers

Sets custom headers from named configmap before sending traffic to backends. The value format is namespace/name. See [example](https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/).

## server-name-hash-max-size

Sets the maximum size of the [server names hash tables](https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_max_size) used in server names, map directive's values, MIME types, names of request header strings, etc.

_References:_ https://nginx.org/en/docs/hash.html

## server-name-hash-bucket-size

Sets the size of the bucket for the server names hash tables.

_References:_

- https://nginx.org/en/docs/hash.html
- https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size

## proxy-headers-hash-max-size

Sets the maximum size of the proxy headers hash tables.

_References:_

- https://nginx.org/en/docs/hash.html
- https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size

## reuse-port

Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes. _**default:**_ true

## proxy-headers-hash-bucket-size

Sets the size of the bucket for the proxy headers hash tables.

_References:_

- https://nginx.org/en/docs/hash.html
- https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size

## server-tokens

Send NGINX Server header in responses and display NGINX version in error pages. _**default:**_ is disabled

## ssl-ciphers

Sets the [ciphers](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers) list to enable. The ciphers are specified in the format understood by the OpenSSL library.

The default cipher list is: `ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384`.

The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect [forward secrecy](https://wiki.mozilla.org/Security/Server_Side_TLS#Forward_Secrecy).

DHE-based ciphers will not be available until DH parameter is configured: [Custom DH parameters for perfect forward secrecy](https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/ssl-dh-param)

Please check the [Mozilla SSL Configuration Generator](https://mozilla.github.io/server-side-tls/ssl-config-generator/).

__Note:__ the ssl_prefer_server_ciphers directive will be enabled by default for http context.

## ssl-ecdh-curve

Specifies a curve for ECDHE ciphers.

_References:_ https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve

## ssl-dh-param

Sets the name of the secret that contains Diffie-Hellman key to help with "Perfect Forward Secrecy".

_References:_

- https://wiki.openssl.org/index.php/Diffie-Hellman_parameters
- https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam
- https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam

## ssl-protocols

Sets the [SSL protocols](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols) to use. The default is: `TLSv1.2 TLSv1.3`.

Please check the result of the configuration using `https://ssllabs.com/ssltest/analyze.html` or `https://testssl.sh`.

## ssl-early-data

Enables or disables TLS 1.3 [early data](https://tools.ietf.org/html/rfc8446#section-2.3), also known as Zero Round Trip Time Resumption (0-RTT).

This requires `ssl-protocols` to have `TLSv1.3` enabled. Enable this with caution, because requests sent within early data are subject to [replay attacks](https://tools.ietf.org/html/rfc8470).

[ssl_early_data](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_early_data). The default is: `false`.

## ssl-session-cache

Enables or disables the use of shared [SSL cache](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) among worker processes.

## ssl-session-cache-size

Sets the size of the [SSL shared session cache](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) between all worker processes.

## ssl-session-tickets

Enables or disables session resumption through [TLS session tickets](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets).

## ssl-session-ticket-key

Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: `openssl rand 80 | openssl enc -A -base64`

[TLS session ticket-key](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets), by default, a randomly generated key is used.

## ssl-session-timeout

Sets the time during which a client may [reuse the session](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_timeout) parameters stored in a cache.

## ssl-buffer-size

Sets the size of the [SSL buffer](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).

_References:_ https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/

## use-proxy-protocol

Enables or disables the [PROXY protocol](https://www.nginx.com/resources/admin-guide/proxy-protocol/) to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).

## proxy-protocol-header-timeout

Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. _**default:**_ 5s

## enable-aio-write

Enables or disables the directive [aio_write](https://nginx.org/en/docs/http/ngx_http_core_module.html#aio_write) that writes files asynchronously. _**default:**_ true

## use-gzip

Enables or disables compression of HTTP responses using the ["gzip" module](https://nginx.org/en/docs/http/ngx_http_gzip_module.html). MIME types to compress are controlled by [gzip-types](#gzip-types). _**default:**_ false
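As noted under `ssl-session-ticket-key`, the value must be valid base64 of the raw ticket key; the recipe above pipes 80 random bytes through base64 on a single line. A quick sanity check (the `kubectl` command in the comment is illustrative; the ConfigMap name and namespace depend on your installation):

```shell
# ssl-session-ticket-key expects base64 of an 80-byte random key
key=$(openssl rand 80 | openssl enc -A -base64)

# 80 bytes -> ceil(80/3) * 4 = 108 base64 characters, no trailing newline (-A)
test "${#key}" -eq 108 && echo "key looks valid"

# Storing it could then look like (illustrative only):
#   kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
#     --type merge -p "{\"data\":{\"ssl-session-ticket-key\":\"$key\"}}"
```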
## use-geoip

Enables or disables ["geoip" module](https://nginx.org/en/docs/http/ngx_http_geoip_module.html) that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. _**default:**_ true

__Note:__ MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. [discontinuation notice](https://support.maxmind.com/geolite-legacy-discontinuation-notice/). Consider [use-geoip2](#use-geoip2) below.

## use-geoip2

Enables the [geoip2 module](https://github.com/leev/ngx_http_geoip2_module) for NGINX. Since `0.27.0` and due to a [change in the MaxMind databases](https://blog.maxmind.com/2019/12/significant-changes-to-accessing-and-using-geolite2-databases/), a license is required to have access to the databases. For this reason, it is required to define a new flag `--maxmind-license-key` in the ingress controller deployment to download the databases needed during the initialization of the ingress controller. Alternatively, it is possible to use a volume to mount the files `/etc/ingress-controller/geoip/GeoLite2-City.mmdb` and `/etc/ingress-controller/geoip/GeoLite2-ASN.mmdb`, avoiding the overhead of the download.

!!! important
    If the feature is enabled but the files are missing, GeoIP2 will not be enabled.

_**default:**_ false

## geoip2-autoreload-in-minutes

Enables the [geoip2 module](https://github.com/leev/ngx_http_geoip2_module) autoreload in MaxMind databases, setting the interval in minutes. _**default:**_ 0

## enable-brotli

Enables or disables compression of HTTP responses using the ["brotli" module](https://github.com/google/ngx_brotli). The default mime type list to compress is: `application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`. _**default:**_ false

__Note:__ Brotli does not work in Safari < 11. For more information see https://caniuse.com/#feat=brotli.

## brotli-level

Sets the Brotli Compression Level that will be used. _**default:**_ 4

## brotli-min-length

Minimum length of responses, in bytes, that will be eligible for brotli compression. _**default:**_ 20

## brotli-types

Sets the MIME Types that will be compressed on-the-fly by brotli. _**default:**_ `application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`

## use-http2

Enables or disables [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) support in secure connections.

## gzip-disable

Disables [gzipping](http://nginx.org/en/docs/http/ngx_http_gzip_module.html#gzip_disable) of responses for requests with "User-Agent" header fields matching any of the specified regular expressions.

## gzip-level

Sets the gzip Compression Level that will be used. _**default:**_ 1

## gzip-min-length

Minimum length of responses to be returned to the client before it is eligible for gzip compression, in bytes. _**default:**_ 256

## gzip-types

Sets the MIME types in addition to "text/html" to compress. The special value "\*" matches any MIME type. Responses with the "text/html" type are always compressed if [use-gzip](#use-gzip) is enabled. _**default:**_ `application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`

## worker-processes

Sets the number of [worker processes](https://nginx.org/en/docs/ngx_core_module.html#worker_processes). The default of "auto" means number of available CPU cores.

## worker-cpu-affinity

Binds worker processes to the sets of CPUs. [worker_cpu_affinity](https://nginx.org/en/docs/ngx_core_module.html#worker_cpu_affinity). By default worker processes are not bound to any specific CPUs. The value can be:

- "": empty string indicate no affinity is applied.
- cpumask: e.g. `0001 0010 0100 1000` to bind processes to specific cpus.
- auto: binding worker processes automatically to available CPUs.

## worker-shutdown-timeout

Sets a timeout for Nginx to [wait for worker to gracefully shutdown](https://nginx.org/en/docs/ngx_core_module.html#worker_shutdown_timeout). _**default:**_ "240s"

## load-balance

Sets the algorithm to use for load balancing. The value can either be:

- round_robin: to use the default round robin loadbalancer
- ewma: to use the Peak EWMA method for routing ([implementation](https://github.com/kubernetes/ingress-nginx/blob/main/rootfs/etc/nginx/lua/balancer/ewma.lua))

The default is `round_robin`.

- To load balance using consistent hashing of IP or other variables, consider the `nginx.ingress.kubernetes.io/upstream-hash-by` annotation.
- To load balance using session cookies, consider the `nginx.ingress.kubernetes.io/affinity` annotation.

_References:_ https://nginx.org/en/docs/http/load_balancing.html

## variables-hash-bucket-size

Sets the bucket size for the variables hash table.

_References:_ https://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size

## variables-hash-max-size

Sets the maximum size of the variables hash table.

_References:_ https://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size

## upstream-keepalive-connections

Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. _**default:**_ 320

_References:_ https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive

## upstream-keepalive-time

Sets the maximum time during which requests can be processed through one keepalive connection. _**default:**_ "1h"

_References:_ http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_time

## upstream-keepalive-timeout

Sets a timeout during which an idle keepalive connection to an upstream server will stay open. _**default:**_ 60

_References:_ https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout

## upstream-keepalive-requests

Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. _**default:**_ 10000

_References:_ https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests

## limit-conn-zone-variable

Sets parameters for a shared memory zone that will keep states for various keys of [limit_conn_zone](https://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn_zone). The default of "$binary_remote_addr" variable's size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.

## proxy-stream-timeout

Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.

_References:_ https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout

## proxy-stream-next-upstream

When a connection to the proxied server cannot be established, determines whether a client connection will be passed to the next server.

_References:_ https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream

## proxy-stream-next-upstream-timeout

Limits the time allowed to pass a connection to the next server. The 0 value turns off this limitation.

_References:_ https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_timeout

## proxy-stream-next-upstream-tries

Limits the number of possible tries a request should be passed to the next server. The 0 value turns off this limitation.

_References:_ https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_tries

## proxy-stream-responses

Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.

_References:_ https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses

## bind-address

Sets the addresses on which the server will accept requests instead of `*`. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.

## use-forwarded-headers

If true, NGINX passes the incoming `X-Forwarded-*` headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.

If false, NGINX ignores incoming `X-Forwarded-*` headers, filling them with the request information it sees. Use
this option if NGINX is exposed directly to the internet  or it s behind a L3 packet based load balancer that doesn t alter the source IP in the packets      enable real ip   enable real ip  enables the configuration of  https   nginx org en docs http ngx http realip module html  https   nginx org en docs http ngx http realip module html   Specific attributes of the module can be configured further by using  forwarded for header  and  proxy real ip cidr  settings      forwarded for header  Sets the header field for identifying the originating IP address of a client     default     X Forwarded For     compute full forwarded for  Append the remote address to the X Forwarded For header instead of replacing it  When this option is enabled  the upstream application is responsible for extracting the client IP based on its own list of trusted proxies      proxy add original uri header  Adds an X Original Uri header with the original request URI to the backend request     generate request id  Ensures that X Request ID is defaulted to a random value  if no X Request ID is present in the request     jaeger collector host  Specifies the host to use when uploading traces  It must be a valid URL      jaeger collector port  Specifies the port to use when uploading traces     default     6831     jaeger endpoint  Specifies the endpoint to use when uploading traces to a collector  This takes priority over  jaeger collector host  if both are specified      jaeger service name  Specifies the service name to use for any traces created     default     nginx     jaeger propagation format  Specifies the traceparent tracestate propagation format     default     jaeger     jaeger sampler type  Specifies the sampler to be used when sampling traces  The available samplers are  const  probabilistic  ratelimiting  remote     default     const     jaeger sampler param  Specifies the argument to be passed to the sampler constructor  Must be a number  For const this should be 0 to never sample 
and 1 to always sample     default     1     jaeger sampler host  Specifies the custom remote sampler host to be passed to the sampler constructor  Must be a valid URL  Leave blank to use default value  localhost      default     http   127 0 0 1     jaeger sampler port  Specifies the custom remote sampler port to be passed to the sampler constructor  Must be a number     default     5778     jaeger trace context header name  Specifies the header name used for passing trace context     default     uber trace id     jaeger debug header  Specifies the header name used for force sampling     default     jaeger debug id     jaeger baggage header  Specifies the header name used to submit baggage if there is no root span     default     jaeger baggage     jaeger tracer baggage header prefix  Specifies the header prefix used to propagate baggage     default     uberctx      datadog collector host  Specifies the datadog agent host to use when uploading traces  It must be a valid URL      datadog collector port  Specifies the port to use when uploading traces     default     8126     datadog service name  Specifies the service name to use for any traces created     default     nginx     datadog environment  Specifies the environment this trace belongs to     default     prod     datadog operation name override  Overrides the operation name to use for any traces crated     default     nginx handle     datadog priority sampling  Specifies to use client side sampling  If true disables client side sampling  thus ignoring  sample rate   and enables distributed priority sampling  where traces are sampled based on a combination of user assigned priorities and configuration from the agent     default     true     datadog sample rate  Specifies sample rate for any traces created  This is effective only when  datadog priority sampling  is  false     default     1 0     enable opentelemetry  Enables the nginx OpenTelemetry extension     default     is disabled   References    https   
github com open telemetry opentelemetry cpp contrib  https   github com open telemetry opentelemetry cpp contrib tree main instrumentation nginx      opentelemetry operation name  Specifies a custom name for the server span     default     is empty  For example  set to  HTTP  request method  uri       otlp collector host  Specifies the host to use when uploading traces  It must be a valid URL      otlp collector port  Specifies the port to use when uploading traces     default     4317     otel service name  Specifies the service name to use for any traces created     default     nginx      opentelemetry trust incoming span   true  Enables or disables using spans from incoming requests as parent for created ones     default     true      otel sampler parent based  Uses sampler implementation which by default will take a sample if parent Activity is sampled     default     false     otel sampler ratio  Specifies sample rate for any traces created     default     0 01     otel sampler  Specifies the sampler to be used when sampling traces  The available samplers are  AlwaysOff  AlwaysOn  TraceIdRatioBased  remote     default     AlwaysOff     main snippet  Adds custom configuration to the main section of the nginx configuration      http snippet  Adds custom configuration to the http section of the nginx configuration      server snippet  Adds custom configuration to all the servers in the nginx configuration      stream snippet  Adds custom configuration to the stream section of the nginx configuration      location snippet  Adds custom configuration to all the locations in the nginx configuration   You can not use this to add new locations that proxy to the Kubernetes pods  as the snippet does not have access to the Go template functions  If you want to add custom locations you will have to  provide your own nginx tmpl  https   kubernetes github io ingress nginx user guide nginx configuration custom template        custom http errors  Enables which HTTP codes 
should be passed for processing with the  error page directive  https   nginx org en docs http ngx http core module html error page   Setting at least one code also enables  proxy intercept errors  https   nginx org en docs http ngx http proxy module html proxy intercept errors  which are required to process error page   Example usage   custom http errors  404 415      proxy body size  Sets the maximum allowed size of the client request body  See NGINX  client max body size  https   nginx org en docs http ngx http core module html client max body size       proxy connect timeout  Sets the timeout for  establishing a connection with a proxied server  https   nginx org en docs http ngx http proxy module html proxy connect timeout   It should be noted that this timeout cannot usually exceed 75 seconds   It will also set the  grpc connect timeout  https   nginx org en docs http ngx http grpc module html grpc connect timeout  for gRPC connections      proxy read timeout  Sets the timeout in seconds for  reading a response from the proxied server  https   nginx org en docs http ngx http proxy module html proxy read timeout   The timeout is set only between two successive read operations  not for the transmission of the whole response   It will also set the  grpc read timeout  https   nginx org en docs http ngx http grpc module html grpc read timeout  for gRPC connections      proxy send timeout  Sets the timeout in seconds for  transmitting a request to the proxied server  https   nginx org en docs http ngx http proxy module html proxy send timeout   The timeout is set only between two successive write operations  not for the transmission of the whole request   It will also set the  grpc send timeout  https   nginx org en docs http ngx http grpc module html grpc send timeout  for gRPC connections      proxy buffers number  Sets the number of the buffer used for  reading the first part of the response  https   nginx org en docs http ngx http proxy module html proxy 
buffers  received from the proxied server  This part usually contains a small response header      proxy buffer size  Sets the size of the buffer used for  reading the first part of the response  https   nginx org en docs http ngx http proxy module html proxy buffer size  received from the proxied server  This part usually contains a small response header      proxy busy buffers size   Limits the total size of buffers that can be busy  https   nginx org en docs http ngx http proxy module html proxy busy buffers size  sending a response to the client while the response is not yet fully read      proxy cookie path  Sets a text that  should be changed in the path attribute  https   nginx org en docs http ngx http proxy module html proxy cookie path  of the  Set Cookie  header fields of a proxied server response      proxy cookie domain  Sets a text that  should be changed in the domain attribute  https   nginx org en docs http ngx http proxy module html proxy cookie domain  of the  Set Cookie  header fields of a proxied server response      proxy next upstream  Specifies in  which cases  https   nginx org en docs http ngx http proxy module html proxy next upstream  a request should be passed to the next server      proxy next upstream timeout   Limits the time  https   nginx org en docs http ngx http proxy module html proxy next upstream timeout  in seconds during which a request can be passed to the next server      proxy next upstream tries  Limit the number of  possible tries  https   nginx org en docs http ngx http proxy module html proxy next upstream tries  a request should be passed to the next server      proxy redirect from  Sets the original text that should be changed in the  Location  and  Refresh  header fields of a proxied server response     default     off   References    https   nginx org en docs http ngx http proxy module html proxy redirect  https   nginx org en docs http ngx http proxy module html proxy redirect      proxy request buffering  
Enables or disables  buffering of a client request body  https   nginx org en docs http ngx http proxy module html proxy request buffering       ssl redirect  Sets the global value of redirects  301  to HTTPS if the server has a TLS certificate  defined in an Ingress rule      default      true      force ssl redirect Sets the global value of redirects  308  to HTTPS if the server has a default TLS certificate  defined in extra args      default      false      denylist source range  Sets the default denylisted IPs for each  server  block  This can be overwritten by an annotation on an Ingress rule  See  ngx http access module  https   nginx org en docs http ngx http access module html       whitelist source range  Sets the default whitelisted IPs for each  server  block  This can be overwritten by an annotation on an Ingress rule  See  ngx http access module  https   nginx org en docs http ngx http access module html       skip access log urls  Sets a list of URLs that should not appear in the NGINX access log  This is useful with urls like   health  or  health check  that make  complex  reading the logs     default     is empty     limit rate  Limits the rate of response transmission to a client  The rate is specified in bytes per second  The zero value disables rate limiting  The limit is set per a request  and so if a client simultaneously opens two connections  the overall rate will be twice as much as the specified limit    References    https   nginx org en docs http ngx http core module html limit rate  https   nginx org en docs http ngx http core module html limit rate      limit rate after  Sets the initial amount after which the further transmission of a response to a client will be rate limited    References    https   nginx org en docs http ngx http core module html limit rate after  https   nginx org en docs http ngx http core module html limit rate after      lua shared dicts  Customize default Lua shared dictionaries or define more  You can use the 
following syntax to do so       lua shared dicts    my dict name    my dict size     my dict name    my dict size              For example following will set default  certificate data  dictionary to  100M  and will introduce a new dictionary called  my custom plugin        lua shared dicts   certificate data  100  my custom plugin  5       You can optionally set a size unit to allow for kilobyte granularity  Allowed units are  m  or  k   case insensitive   and it defaults to MB if no unit is provided  Here is a similar example  but the  my custom plugin  dict is only 512KB       lua shared dicts   certificate data  100  my custom plugin  512k          http redirect code  Sets the HTTP status code to be used in redirects  Supported codes are  301  https   developer mozilla org docs Web HTTP Status 301   302  https   developer mozilla org docs Web HTTP Status 302   307  https   developer mozilla org docs Web HTTP Status 307  and  308  https   developer mozilla org docs Web HTTP Status 308     default     308      Why the default code is 308        RFC 7238  https   tools ietf org html rfc7238  was created to define the 308  Permanent Redirect  status code that is similar to 301  Moved Permanently  but it keeps the payload in the redirect  This is important if we send a redirect in methods like POST      proxy buffering  Enables or disables  buffering of responses from the proxied server  https   nginx org en docs http ngx http proxy module html proxy buffering       limit req status code  Sets the  status code to return in response to rejected requests  https   nginx org en docs http ngx http limit req module html limit req status      default     503     limit conn status code  Sets the  status code to return in response to rejected connections  https   nginx org en docs http ngx http limit conn module html limit conn status      default     503     enable syslog  Enable  syslog  https   nginx org en docs syslog html  feature for access log and error log     default 
    false     syslog host  Sets the address of syslog server  The address can be specified as a domain name or IP address      syslog port  Sets the port of syslog server     default     514     no tls redirect locations  A comma separated list of locations on which http requests will never get redirected to their https counterpart     default        well known acme challenge      global allowed response headers  A comma separated list of allowed response headers inside the  custom headers annotations  https   github com kubernetes ingress nginx blob main docs user guide nginx configuration annotations md custom headers      global auth url  A url to an existing service that provides authentication for all the locations  Similar to the Ingress rule annotation  nginx ingress kubernetes io auth url   Locations that should not get authenticated can be listed using  no auth locations  See  no auth locations   no auth locations   In addition  each service can be excluded from authentication via annotation  enable global auth  set to  false      default          References    https   github com kubernetes ingress nginx blob main docs user guide nginx configuration annotations md external authentication  https   github com kubernetes ingress nginx blob main docs user guide nginx configuration annotations md external authentication      global auth method  A HTTP method to use for an existing service that provides authentication for all the locations  Similar to the Ingress rule annotation  nginx ingress kubernetes io auth method      default            global auth signin  Sets the location of the error page for an existing service that provides authentication for all the locations  Similar to the Ingress rule annotation  nginx ingress kubernetes io auth signin      default            global auth signin redirect param  Sets the query parameter in the error page signin URL which contains the original URL of the request that failed authentication  Similar to the Ingress rule 
annotation  nginx ingress kubernetes io auth signin redirect param      default      rd      global auth response headers  Sets the headers to pass to backend once authentication request completes  Applied to all the locations  Similar to the Ingress rule annotation  nginx ingress kubernetes io auth response headers      default            global auth request redirect  Sets the X Auth Request Redirect header value  Applied to all the locations  Similar to the Ingress rule annotation  nginx ingress kubernetes io auth request redirect      default            global auth snippet  Sets a custom snippet to use with external authentication  Applied to all the locations  Similar to the Ingress rule annotation  nginx ingress kubernetes io auth snippet      default            global auth cache key  Enables caching for global auth requests  Specify a lookup key for auth responses  e g    remote user http authorization       global auth cache duration  Set a caching time for auth responses based on their response codes  e g   200 202 30m   See  proxy cache valid  https   nginx org en docs http ngx http proxy module html proxy cache valid  for details  You may specify multiple  comma separated values   200 202 10m  401 5m   defaults to  200 202 401 5m       global auth always set cookie  Always set a cookie returned by auth request  By default  the cookie will be set only if an upstream reports with the code 200  201  204  206  301  302  303  304  307  or 308     default     false     no auth locations  A comma separated list of locations that should not get authenticated     default        well known acme challenge      block cidrs  A comma separated list of IP addresses  or subnets   request from which have to be blocked globally    References    https   nginx org en docs http ngx http access module html deny  https   nginx org en docs http ngx http access module html deny      block user agents  A comma separated list of User Agent  request from which have to be blocked 
globally  It s possible to use here full strings and regular expressions  More details about valid patterns can be found at  map  Nginx directive documentation    References    https   nginx org en docs http ngx http map module html map  https   nginx org en docs http ngx http map module html map      block referers  A comma separated list of Referers  request from which have to be blocked globally  It s possible to use here full strings and regular expressions  More details about valid patterns can be found at  map  Nginx directive documentation    References    https   nginx org en docs http ngx http map module html map  https   nginx org en docs http ngx http map module html map      proxy ssl location only  Set if proxy ssl parameters should be applied only on locations and not on servers     default     is disabled     default type  Sets the default MIME type of a response     default     text html   References    https   nginx org en docs http ngx http core module html default type  https   nginx org en docs http ngx http core module html default type      service upstream  Set if the service s Cluster IP and port should be used instead of a list of all endpoints  This can be overwritten by an annotation on an Ingress rule     default      false      ssl reject handshake  Set to reject SSL handshake to an unknown virtualhost  This parameter helps to mitigate the fingerprinting using default certificate of ingress     default      false    References    https   nginx org en docs http ngx http ssl module html ssl reject handshake  https   nginx org en docs http ngx http ssl module html ssl reject handshake      debug connections Enables debugging log for selected client connections     default          References    http   nginx org en docs ngx core module html debug connection  http   nginx org en docs ngx core module html debug connection      strict validate path type  Ingress objects contains a field called pathType that defines the proxy behavior  It can 
be  Exact    Prefix  and  ImplementationSpecific    When pathType is configured as  Exact  or  Prefix   there should be a more strict validation  allowing only paths starting with     and containing only alphanumeric characters and          and additional       When this option is enabled  the validation will happen on the Admission Webhook  making any Ingress not using pathType  ImplementationSpecific  and containing invalid characters to be denied   This means that Ingress objects that rely on paths containing regex characters should use  ImplementationSpecific  pathType   The cluster admin should establish validation rules using mechanisms like  Open Policy Agent  https   www openpolicyagent org   to  validate that only authorized users can use  ImplementationSpecific  pathType and that only the authorized characters can be used      default      true      grpc buffer size kb  Sets the configuration for the GRPC Buffer Size parameter  If not set it will use the default from NGINX    References    https   nginx org en docs http ngx http grpc module html grpc buffer size  https   nginx org en docs http ngx http grpc module html grpc buffer size      relative redirects  Use relative redirects instead of absolute redirects  Absolute redirects are the default in nginx  RFC7231 allows relative redirects since 2014  Similar to the Ingress rule annotation  nginx ingress kubernetes io relative redirects       default      false    References      https   nginx org en docs http ngx http core module html absolute redirect  https   nginx org en docs http ngx http core module html absolute redirect     https   datatracker ietf org doc html rfc7231 section 7 1 2  https   datatracker ietf org doc html rfc7231 section 7 1 2 "}
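All of the options above are plain string key/value pairs in the controller's ConfigMap. As a minimal sketch of how a few of them fit together (the ConfigMap name and namespace are assumptions; use whatever your controller's `--configmap` flag points at):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Assumed name/namespace; match your ingress-nginx deployment.
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # All values must be strings, even numeric ones.
  worker-shutdown-timeout: "240s"
  load-balance: "ewma"
  upstream-keepalive-connections: "320"
  custom-http-errors: "404,415"
  lua-shared-dicts: "certificate_data: 100, my_custom_plugin: 512k"
  http-redirect-code: "308"
```

The controller watches this ConfigMap and re-renders `nginx.conf` on change, so no pod restart is needed for most of these keys.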
{"questions":"ingress nginx BasicDigestAuth auth realm Medium location BasicDigestAuth auth secret Medium location Aliases server alias High ingress BasicDigestAuth auth secret type Low location Allowlist allowlist source range Medium location Group Annotation Risk Scope BackendProtocol backend protocol Low location Annotations Scope and Risk","answers":"# Annotations Scope and Risk\n\n|Group   |Annotation        | Risk | Scope |\n|--------|------------------|------|-------|\n| Aliases | server-alias | High | ingress |\n| Allowlist | allowlist-source-range | Medium | location |\n| BackendProtocol | backend-protocol | Low | location |\n| BasicDigestAuth | auth-realm | Medium | location |\n| BasicDigestAuth | auth-secret | Medium | location |\n| BasicDigestAuth | auth-secret-type | Low | location |\n| BasicDigestAuth | auth-type | Low | location |\n| Canary | canary | Low | ingress |\n| Canary | canary-by-cookie | Medium | ingress |\n| Canary | canary-by-header | Medium | ingress |\n| Canary | canary-by-header-pattern | Medium | ingress |\n| Canary | canary-by-header-value | Medium | ingress |\n| Canary | canary-weight | Low | ingress |\n| Canary | canary-weight-total | Low | ingress |\n| CertificateAuth | auth-tls-error-page | High | location |\n| CertificateAuth | auth-tls-match-cn | High | location |\n| CertificateAuth | auth-tls-pass-certificate-to-upstream | Low | location |\n| CertificateAuth | auth-tls-secret | Medium | location |\n| CertificateAuth | auth-tls-verify-client | Medium | location |\n| CertificateAuth | auth-tls-verify-depth | Low | location |\n| ClientBodyBufferSize | client-body-buffer-size | Low | location |\n| ConfigurationSnippet | configuration-snippet | Critical | location |\n| Connection | connection-proxy-header | Low | location |\n| CorsConfig | cors-allow-credentials | Low | ingress |\n| CorsConfig | cors-allow-headers | Medium | ingress |\n| CorsConfig | cors-allow-methods | Medium | ingress |\n| CorsConfig | cors-allow-origin | Medium 
| ingress |\n| CorsConfig | cors-expose-headers | Medium | ingress |\n| CorsConfig | cors-max-age | Low | ingress |\n| CorsConfig | enable-cors | Low | ingress |\n| CustomHTTPErrors | custom-http-errors | Low | location |\n| CustomHeaders | custom-headers | Medium | location |\n| DefaultBackend | default-backend | Low | location |\n| Denylist | denylist-source-range | Medium | location |\n| DisableProxyInterceptErrors | disable-proxy-intercept-errors | Low | location |\n| EnableGlobalAuth | enable-global-auth | Low | location |\n| ExternalAuth | auth-always-set-cookie | Low | location |\n| ExternalAuth | auth-cache-duration | Medium | location |\n| ExternalAuth | auth-cache-key | Medium | location |\n| ExternalAuth | auth-keepalive | Low | location |\n| ExternalAuth | auth-keepalive-requests | Low | location |\n| ExternalAuth | auth-keepalive-share-vars | Low | location |\n| ExternalAuth | auth-keepalive-timeout | Low | location |\n| ExternalAuth | auth-method | Low | location |\n| ExternalAuth | auth-proxy-set-headers | Medium | location |\n| ExternalAuth | auth-request-redirect | Medium | location |\n| ExternalAuth | auth-response-headers | Medium | location |\n| ExternalAuth | auth-signin | High | location |\n| ExternalAuth | auth-signin-redirect-param | Medium | location |\n| ExternalAuth | auth-snippet | Critical | location |\n| ExternalAuth | auth-url | High | location |\n| FastCGI | fastcgi-index | Medium | location |\n| FastCGI | fastcgi-params-configmap | Medium | location |\n| HTTP2PushPreload | http2-push-preload | Low | location |\n| LoadBalancing | load-balance | Low | location |\n| Logs | enable-access-log | Low | location |\n| Logs | enable-rewrite-log | Low | location |\n| Mirror | mirror-host | High | ingress |\n| Mirror | mirror-request-body | Low | ingress |\n| Mirror | mirror-target | High | ingress |\n| ModSecurity | enable-modsecurity | Low | ingress |\n| ModSecurity | enable-owasp-core-rules | Low | ingress |\n| ModSecurity | 
modsecurity-snippet | Critical | ingress |\n| ModSecurity | modsecurity-transaction-id | High | ingress |\n| Opentelemetry | enable-opentelemetry | Low | location |\n| Opentelemetry | opentelemetry-operation-name | Medium | location |\n| Opentelemetry | opentelemetry-trust-incoming-span | Low | location |\n| Proxy | proxy-body-size | Medium | location |\n| Proxy | proxy-buffer-size | Low | location |\n| Proxy | proxy-buffering | Low | location |\n| Proxy | proxy-buffers-number | Low | location |\n| Proxy | proxy-busy-buffers-size | Low | location |\n| Proxy | proxy-connect-timeout | Low | location |\n| Proxy | proxy-cookie-domain | Medium | location |\n| Proxy | proxy-cookie-path | Medium | location |\n| Proxy | proxy-http-version | Low | location |\n| Proxy | proxy-max-temp-file-size | Low | location |\n| Proxy | proxy-next-upstream | Medium | location |\n| Proxy | proxy-next-upstream-timeout | Low | location |\n| Proxy | proxy-next-upstream-tries | Low | location |\n| Proxy | proxy-read-timeout | Low | location |\n| Proxy | proxy-redirect-from | Medium | location |\n| Proxy | proxy-redirect-to | Medium | location |\n| Proxy | proxy-request-buffering | Low | location |\n| Proxy | proxy-send-timeout | Low | location |\n| ProxySSL | proxy-ssl-ciphers | Medium | ingress |\n| ProxySSL | proxy-ssl-name | High | ingress |\n| ProxySSL | proxy-ssl-protocols | Low | ingress |\n| ProxySSL | proxy-ssl-secret | Medium | ingress |\n| ProxySSL | proxy-ssl-server-name | Low | ingress |\n| ProxySSL | proxy-ssl-verify | Low | ingress |\n| ProxySSL | proxy-ssl-verify-depth | Low | ingress |\n| RateLimit | limit-allowlist | Low | location |\n| RateLimit | limit-burst-multiplier | Low | location |\n| RateLimit | limit-connections | Low | location |\n| RateLimit | limit-rate | Low | location |\n| RateLimit | limit-rate-after | Low | location |\n| RateLimit | limit-rpm | Low | location |\n| RateLimit | limit-rps | Low | location |\n| Redirect | from-to-www-redirect | Low | location 
|\n| Redirect | permanent-redirect | Medium | location |\n| Redirect | permanent-redirect-code | Low | location |\n| Redirect | relative-redirects | Low | location |\n| Redirect | temporal-redirect | Medium | location |\n| Redirect | temporal-redirect-code | Low | location |\n| Rewrite | app-root | Medium | location |\n| Rewrite | force-ssl-redirect | Medium | location |\n| Rewrite | preserve-trailing-slash | Medium | location |\n| Rewrite | rewrite-target | Medium | ingress |\n| Rewrite | ssl-redirect | Low | location |\n| Rewrite | use-regex | Low | location |\n| SSLCipher | ssl-ciphers | Low | ingress |\n| SSLCipher | ssl-prefer-server-ciphers | Low | ingress |\n| SSLPassthrough | ssl-passthrough | Low | ingress |\n| Satisfy | satisfy | Low | location |\n| ServerSnippet | server-snippet | Critical | ingress |\n| ServiceUpstream | service-upstream | Low | ingress |\n| SessionAffinity | affinity | Low | ingress |\n| SessionAffinity | affinity-canary-behavior | Low | ingress |\n| SessionAffinity | affinity-mode | Medium | ingress |\n| SessionAffinity | session-cookie-change-on-failure | Low | ingress |\n| SessionAffinity | session-cookie-conditional-samesite-none | Low | ingress |\n| SessionAffinity | session-cookie-domain | Medium | ingress |\n| SessionAffinity | session-cookie-expires | Medium | ingress |\n| SessionAffinity | session-cookie-max-age | Medium | ingress |\n| SessionAffinity | session-cookie-name | Medium | ingress |\n| SessionAffinity | session-cookie-path | Medium | ingress |\n| SessionAffinity | session-cookie-samesite | Low | ingress |\n| SessionAffinity | session-cookie-secure | Low | ingress |\n| StreamSnippet | stream-snippet | Critical | ingress |\n| UpstreamHashBy | upstream-hash-by | High | location |\n| UpstreamHashBy | upstream-hash-by-subset | Low | location |\n| UpstreamHashBy | upstream-hash-by-subset-size | Low | location |\n| UpstreamVhost | upstream-vhost | Low | location |\n| UsePortInRedirects | use-port-in-redirects | Low | 
location |\n| XForwardedPrefix | x-forwarded-prefix | Medium | location |\n","site":"ingress nginx"}
{"questions":"ingress nginx Annotations note You can add these Kubernetes annotations to specific Ingress objects to customize their behavior Annotation keys and values can only be strings Other types such as boolean or numeric values must be quoted tip i e","answers":"# Annotations\n\nYou can add these Kubernetes annotations to specific Ingress objects to customize their behavior.\n\n!!! tip\n    Annotation keys and values can only be strings.\n    Other types, such as boolean or numeric values must be quoted,\n    i.e. `\"true\"`, `\"false\"`, `\"100\"`.\n\n!!! note\n    The annotation prefix can be changed using the\n    [`--annotations-prefix` command line argument](..\/cli-arguments.md),\n    but the default is `nginx.ingress.kubernetes.io`, as described in the\n    table below.\n\n|Name                       | type |\n|---------------------------|------|\n|[nginx.ingress.kubernetes.io\/app-root](#rewrite)|string|\n|[nginx.ingress.kubernetes.io\/affinity](#session-affinity)|cookie|\n|[nginx.ingress.kubernetes.io\/affinity-mode](#session-affinity)|\"balanced\" or \"persistent\"|\n|[nginx.ingress.kubernetes.io\/affinity-canary-behavior](#session-affinity)|\"sticky\" or \"legacy\"|\n|[nginx.ingress.kubernetes.io\/auth-realm](#authentication)|string|\n|[nginx.ingress.kubernetes.io\/auth-secret](#authentication)|string|\n|[nginx.ingress.kubernetes.io\/auth-secret-type](#authentication)|string|\n|[nginx.ingress.kubernetes.io\/auth-type](#authentication)|\"basic\" or 
\"digest\"|\n|[nginx.ingress.kubernetes.io\/auth-tls-secret](#client-certificate-authentication)|string|\n|[nginx.ingress.kubernetes.io\/auth-tls-verify-depth](#client-certificate-authentication)|number|\n|[nginx.ingress.kubernetes.io\/auth-tls-verify-client](#client-certificate-authentication)|string|\n|[nginx.ingress.kubernetes.io\/auth-tls-error-page](#client-certificate-authentication)|string|\n|[nginx.ingress.kubernetes.io\/auth-tls-pass-certificate-to-upstream](#client-certificate-authentication)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/auth-tls-match-cn](#client-certificate-authentication)|string|\n|[nginx.ingress.kubernetes.io\/auth-url](#external-authentication)|string|\n|[nginx.ingress.kubernetes.io\/auth-cache-key](#external-authentication)|string|\n|[nginx.ingress.kubernetes.io\/auth-cache-duration](#external-authentication)|string|\n|[nginx.ingress.kubernetes.io\/auth-keepalive](#external-authentication)|number|\n|[nginx.ingress.kubernetes.io\/auth-keepalive-share-vars](#external-authentication)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/auth-keepalive-requests](#external-authentication)|number|\n|[nginx.ingress.kubernetes.io\/auth-keepalive-timeout](#external-authentication)|number|\n|[nginx.ingress.kubernetes.io\/auth-proxy-set-headers](#external-authentication)|string|\n|[nginx.ingress.kubernetes.io\/auth-snippet](#external-authentication)|string|\n|[nginx.ingress.kubernetes.io\/enable-global-auth](#external-authentication)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/backend-protocol](#backend-protocol)|string|\n|[nginx.ingress.kubernetes.io\/canary](#canary)|\"true\" or 
\"false\"|\n|[nginx.ingress.kubernetes.io\/canary-by-header](#canary)|string|\n|[nginx.ingress.kubernetes.io\/canary-by-header-value](#canary)|string|\n|[nginx.ingress.kubernetes.io\/canary-by-header-pattern](#canary)|string|\n|[nginx.ingress.kubernetes.io\/canary-by-cookie](#canary)|string|\n|[nginx.ingress.kubernetes.io\/canary-weight](#canary)|number|\n|[nginx.ingress.kubernetes.io\/canary-weight-total](#canary)|number|\n|[nginx.ingress.kubernetes.io\/client-body-buffer-size](#client-body-buffer-size)|string|\n|[nginx.ingress.kubernetes.io\/configuration-snippet](#configuration-snippet)|string|\n|[nginx.ingress.kubernetes.io\/custom-http-errors](#custom-http-errors)|[]int|\n|[nginx.ingress.kubernetes.io\/custom-headers](#custom-headers)|string|\n|[nginx.ingress.kubernetes.io\/default-backend](#default-backend)|string|\n|[nginx.ingress.kubernetes.io\/enable-cors](#enable-cors)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/cors-allow-origin](#enable-cors)|string|\n|[nginx.ingress.kubernetes.io\/cors-allow-methods](#enable-cors)|string|\n|[nginx.ingress.kubernetes.io\/cors-allow-headers](#enable-cors)|string|\n|[nginx.ingress.kubernetes.io\/cors-expose-headers](#enable-cors)|string|\n|[nginx.ingress.kubernetes.io\/cors-allow-credentials](#enable-cors)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/cors-max-age](#enable-cors)|number|\n|[nginx.ingress.kubernetes.io\/force-ssl-redirect](#server-side-https-enforcement-through-redirect)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/from-to-www-redirect](#redirect-fromto-www)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/http2-push-preload](#http2-push-preload)|\"true\" or 
\"false\"|\n|[nginx.ingress.kubernetes.io\/limit-connections](#rate-limiting)|number|\n|[nginx.ingress.kubernetes.io\/limit-rps](#rate-limiting)|number|\n|[nginx.ingress.kubernetes.io\/permanent-redirect](#permanent-redirect)|string|\n|[nginx.ingress.kubernetes.io\/permanent-redirect-code](#permanent-redirect-code)|number|\n|[nginx.ingress.kubernetes.io\/temporal-redirect](#temporal-redirect)|string|\n|[nginx.ingress.kubernetes.io\/temporal-redirect-code](#temporal-redirect-code)|number|\n|[nginx.ingress.kubernetes.io\/preserve-trailing-slash](#server-side-https-enforcement-through-redirect)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/proxy-body-size](#custom-max-body-size)|string|\n|[nginx.ingress.kubernetes.io\/proxy-cookie-domain](#proxy-cookie-domain)|string|\n|[nginx.ingress.kubernetes.io\/proxy-cookie-path](#proxy-cookie-path)|string|\n|[nginx.ingress.kubernetes.io\/proxy-connect-timeout](#custom-timeouts)|number|\n|[nginx.ingress.kubernetes.io\/proxy-send-timeout](#custom-timeouts)|number|\n|[nginx.ingress.kubernetes.io\/proxy-read-timeout](#custom-timeouts)|number|\n|[nginx.ingress.kubernetes.io\/proxy-next-upstream](#custom-timeouts)|string|\n|[nginx.ingress.kubernetes.io\/proxy-next-upstream-timeout](#custom-timeouts)|number|\n|[nginx.ingress.kubernetes.io\/proxy-next-upstream-tries](#custom-timeouts)|number|\n|[nginx.ingress.kubernetes.io\/proxy-request-buffering](#custom-timeouts)|string|\n|[nginx.ingress.kubernetes.io\/proxy-redirect-from](#proxy-redirect)|string|\n|[nginx.ingress.kubernetes.io\/proxy-redirect-to](#proxy-redirect)|string|\n|[nginx.ingress.kubernetes.io\/proxy-http-version](#proxy-http-version)|\"1.0\" or 
\"1.1\"|\n|[nginx.ingress.kubernetes.io\/proxy-ssl-secret](#backend-certificate-authentication)|string|\n|[nginx.ingress.kubernetes.io\/proxy-ssl-ciphers](#backend-certificate-authentication)|string|\n|[nginx.ingress.kubernetes.io\/proxy-ssl-name](#backend-certificate-authentication)|string|\n|[nginx.ingress.kubernetes.io\/proxy-ssl-protocols](#backend-certificate-authentication)|string|\n|[nginx.ingress.kubernetes.io\/proxy-ssl-verify](#backend-certificate-authentication)|string|\n|[nginx.ingress.kubernetes.io\/proxy-ssl-verify-depth](#backend-certificate-authentication)|number|\n|[nginx.ingress.kubernetes.io\/proxy-ssl-server-name](#backend-certificate-authentication)|string|\n|[nginx.ingress.kubernetes.io\/enable-rewrite-log](#enable-rewrite-log)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/rewrite-target](#rewrite)|URI|\n|[nginx.ingress.kubernetes.io\/satisfy](#satisfy)|string|\n|[nginx.ingress.kubernetes.io\/server-alias](#server-alias)|string|\n|[nginx.ingress.kubernetes.io\/server-snippet](#server-snippet)|string|\n|[nginx.ingress.kubernetes.io\/service-upstream](#service-upstream)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/session-cookie-change-on-failure](#cookie-affinity)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/session-cookie-conditional-samesite-none](#cookie-affinity)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/session-cookie-domain](#cookie-affinity)|string|\n|[nginx.ingress.kubernetes.io\/session-cookie-expires](#cookie-affinity)|string|\n|[nginx.ingress.kubernetes.io\/session-cookie-max-age](#cookie-affinity)|string|\n|[nginx.ingress.kubernetes.io\/session-cookie-name](#cookie-affinity)|string|default \"INGRESSCOOKIE\"|\n|[nginx.ingress.kubernetes.io\/session-cookie-path](#cookie-affinity)|string|\n|[nginx.ingress.kubernetes.io\/session-cookie-samesite](#cookie-affinity)|string|\"None\", \"Lax\" or 
\"Strict\"|\n|[nginx.ingress.kubernetes.io\/session-cookie-secure](#cookie-affinity)|string|\n|[nginx.ingress.kubernetes.io\/ssl-redirect](#server-side-https-enforcement-through-redirect)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/ssl-passthrough](#ssl-passthrough)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/stream-snippet](#stream-snippet)|string|\n|[nginx.ingress.kubernetes.io\/upstream-hash-by](#custom-nginx-upstream-hashing)|string|\n|[nginx.ingress.kubernetes.io\/x-forwarded-prefix](#x-forwarded-prefix-header)|string|\n|[nginx.ingress.kubernetes.io\/load-balance](#custom-nginx-load-balancing)|string|\n|[nginx.ingress.kubernetes.io\/upstream-vhost](#custom-nginx-upstream-vhost)|string|\n|[nginx.ingress.kubernetes.io\/denylist-source-range](#denylist-source-range)|CIDR|\n|[nginx.ingress.kubernetes.io\/whitelist-source-range](#whitelist-source-range)|CIDR|\n|[nginx.ingress.kubernetes.io\/proxy-buffering](#proxy-buffering)|string|\n|[nginx.ingress.kubernetes.io\/proxy-buffers-number](#proxy-buffers-number)|number|\n|[nginx.ingress.kubernetes.io\/proxy-buffer-size](#proxy-buffer-size)|string|\n|[nginx.ingress.kubernetes.io\/proxy-busy-buffers-size](#proxy-busy-buffers-size)|string|\n|[nginx.ingress.kubernetes.io\/proxy-max-temp-file-size](#proxy-max-temp-file-size)|string|\n|[nginx.ingress.kubernetes.io\/ssl-ciphers](#ssl-ciphers)|string|\n|[nginx.ingress.kubernetes.io\/ssl-prefer-server-ciphers](#ssl-ciphers)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/connection-proxy-header](#connection-proxy-header)|string|\n|[nginx.ingress.kubernetes.io\/enable-access-log](#enable-access-log)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/enable-opentelemetry](#enable-opentelemetry)|\"true\" or \"false\"|\n|[nginx.ingress.kubernetes.io\/opentelemetry-trust-incoming-span](#opentelemetry-trust-incoming-spans)|\"true\" or 
\"false\"|\n|[nginx.ingress.kubernetes.io\/use-regex](#use-regex)|bool|\n|[nginx.ingress.kubernetes.io\/enable-modsecurity](#modsecurity)|bool|\n|[nginx.ingress.kubernetes.io\/enable-owasp-core-rules](#modsecurity)|bool|\n|[nginx.ingress.kubernetes.io\/modsecurity-transaction-id](#modsecurity)|string|\n|[nginx.ingress.kubernetes.io\/modsecurity-snippet](#modsecurity)|string|\n|[nginx.ingress.kubernetes.io\/mirror-request-body](#mirror)|string|\n|[nginx.ingress.kubernetes.io\/mirror-target](#mirror)|string|\n|[nginx.ingress.kubernetes.io\/mirror-host](#mirror)|string|\n\n### Canary\n\nIn some cases, you may want to \"canary\" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied. The following annotations to configure canary can be enabled after `nginx.ingress.kubernetes.io\/canary: \"true\"` is set:\n\n* `nginx.ingress.kubernetes.io\/canary-by-header`: The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to `always`, it will be routed to the canary. When the header is set to `never`, it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence.\n\n* `nginx.ingress.kubernetes.io\/canary-by-header-value`: The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with `nginx.ingress.kubernetes.io\/canary-by-header`. 
The annotation is an extension of the `nginx.ingress.kubernetes.io\/canary-by-header` to allow customizing the header value instead of using hardcoded values. It doesn't have any effect if the `nginx.ingress.kubernetes.io\/canary-by-header` annotation is not defined.\n\n* `nginx.ingress.kubernetes.io\/canary-by-header-pattern`: This works the same way as `canary-by-header-value` except it does PCRE Regex matching. Note that when `canary-by-header-value` is set this annotation will be ignored. When the given Regex causes an error during request processing, the request will be considered as not matching.\n\n* `nginx.ingress.kubernetes.io\/canary-by-cookie`: The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to `always`, it will be routed to the canary. When the cookie is set to `never`, it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence.\n\n* `nginx.ingress.kubernetes.io\/canary-weight`: The integer-based (0 - <weight-total>) percentage of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of `<weight-total>` implies all requests will be sent to the alternative service specified in the Ingress. `<weight-total>` defaults to 100, and can be increased via `nginx.ingress.kubernetes.io\/canary-weight-total`.\n\n* `nginx.ingress.kubernetes.io\/canary-weight-total`: The total weight of traffic. If unspecified, it defaults to 100.\n\nCanary rules are evaluated in order of precedence. 
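As a minimal sketch of combining these annotations (the Ingress name, host, and Service name below are hypothetical), a canary Ingress that receives every request carrying a chosen header value `always`, plus roughly 10% of the remaining traffic, could look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-canary                # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    # Requests with "X-Canary: always" are always routed to the canary.
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    # About 10% of the remaining traffic is routed to the canary.
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: echo.example.com       # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo-canary  # hypothetical Service
                port:
                  number: 80
```

A non-canary Ingress for the same host and path must already exist; this one only describes the alternative backend.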
Precedence is as follows:\n`canary-by-header -> canary-by-cookie -> canary-weight`\n\n**Note** that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except `nginx.ingress.kubernetes.io\/load-balance`, `nginx.ingress.kubernetes.io\/upstream-hash-by`, and [annotations related to session affinity](#session-affinity). If you want to restore the original behavior of canaries when session affinity was ignored, set `nginx.ingress.kubernetes.io\/affinity-canary-behavior` annotation with value `legacy` on the canary ingress definition.\n\n**Known Limitations**\n\nCurrently a maximum of one canary ingress can be applied per Ingress rule.\n\n### Rewrite\n\nIn some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404.\nSet the annotation `nginx.ingress.kubernetes.io\/rewrite-target` to the path expected by the service.\n\nIf the Application Root is exposed in a different path and needs to be redirected, set the annotation `nginx.ingress.kubernetes.io\/app-root` to redirect requests for `\/`.\n\n!!! example\n    Please check the [rewrite](..\/..\/examples\/rewrite\/README.md) example.\n\n### Session Affinity\n\nThe annotation `nginx.ingress.kubernetes.io\/affinity` enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server.\nThe only affinity type available for NGINX is `cookie`.\n\nThe annotation `nginx.ingress.kubernetes.io\/affinity-mode` defines the stickiness of a session. Setting this to `balanced` (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. 
Setting this to `persistent` will not rebalance sessions to new servers, therefore providing maximum stickiness.\n\nThe annotation `nginx.ingress.kubernetes.io\/affinity-canary-behavior` defines the behavior of canaries when session affinity is enabled. Setting this to `sticky` (default) will ensure that users that were served by canaries, will continue to be served by canaries. Setting this to `legacy` will restore original canary behavior, when session affinity was ignored.\n\n!!! attention\n    If more than one Ingress is defined for a host and at least one Ingress uses `nginx.ingress.kubernetes.io\/affinity: cookie`, then only paths on the Ingress using `nginx.ingress.kubernetes.io\/affinity` will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server.\n\n!!! example\n    Please check the [affinity](..\/..\/examples\/affinity\/cookie\/README.md) example.\n\n#### Cookie affinity\n\nIf you use the ``cookie`` affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation `nginx.ingress.kubernetes.io\/session-cookie-name`. The default is to create a cookie named 'INGRESSCOOKIE'.\n\nThe NGINX annotation `nginx.ingress.kubernetes.io\/session-cookie-path` defines the path that will be set on the cookie. This is optional unless the annotation `nginx.ingress.kubernetes.io\/use-regex` is set to true; Session cookie paths do not support regex.\n\nUse `nginx.ingress.kubernetes.io\/session-cookie-domain` to set the `Domain` attribute of the sticky cookie.\n\nUse `nginx.ingress.kubernetes.io\/session-cookie-samesite` to apply a `SameSite` attribute to the sticky cookie. Browser accepted values are `None`, `Lax`, and `Strict`. Some browsers reject cookies with `SameSite=None`, including those created before the `SameSite=None` specification (e.g. Chrome 5X). 
Other browsers mistakenly treat `SameSite=None` cookies as `SameSite=Strict` (e.g. Safari running on OSX 14). To omit `SameSite=None` from browsers with these incompatibilities, add the annotation `nginx.ingress.kubernetes.io\/session-cookie-conditional-samesite-none: \"true\"`.\n\nUse `nginx.ingress.kubernetes.io\/session-cookie-expires` to control when the cookie expires; its value is the number of seconds until the cookie expires.\n\nUse `nginx.ingress.kubernetes.io\/session-cookie-path` to control the cookie path when use-regex is set to true.\n\nUse `nginx.ingress.kubernetes.io\/session-cookie-change-on-failure` to control whether the cookie is changed after a request failure.\n\n### Authentication\n\nIt is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords.\n\nThe annotations are:\n```\nnginx.ingress.kubernetes.io\/auth-type: [basic|digest]\n```\n\nIndicates the [HTTP Authentication Type: Basic or Digest Access Authentication](https:\/\/tools.ietf.org\/html\/rfc2617).\n\n```\nnginx.ingress.kubernetes.io\/auth-secret: secretName\n```\n\nThe name of the Secret that contains the usernames and passwords which are granted access to the `path`s defined in the Ingress rules.\nThis annotation also accepts the alternative form \"namespace\/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.\n\n```\nnginx.ingress.kubernetes.io\/auth-secret-type: [auth-file|auth-map]\n```\n\nThe `auth-secret` can have two forms:\n\n- `auth-file` - default, an htpasswd file in the key `auth` within the secret\n- `auth-map` - the keys of the secret are the usernames, and the values are the hashed passwords\n\n```\nnginx.ingress.kubernetes.io\/auth-realm: \"realm string\"\n```\n\n!!! 
example\n    Please check the [auth](..\/..\/examples\/auth\/basic\/README.md) example.\n\n### Custom NGINX upstream hashing\n\nNGINX supports load balancing by client-server mapping based on [consistent hashing](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#hash) for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The [ketama](https:\/\/www.last.fm\/user\/RJ\/journal\/2007\/04\/10\/rz_libketama_-_a_consistent_hashing_algo_for_memcache_clients) consistent hashing method will be used, which ensures only a few keys would be remapped to different servers on upstream group changes.\n\nThere is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. A specific server is chosen uniformly at random from the selected sticky subset. It provides a balance between stickiness and load distribution.\n\nTo enable consistent hashing for a backend:\n\n`nginx.ingress.kubernetes.io\/upstream-hash-by`: the nginx variable, text value or any combination thereof to use for consistent hashing. For example: `nginx.ingress.kubernetes.io\/upstream-hash-by: \"$request_uri\"` or `nginx.ingress.kubernetes.io\/upstream-hash-by: \"$request_uri$host\"` or `nginx.ingress.kubernetes.io\/upstream-hash-by: \"${request_uri}-text-value\"` to consistently hash upstream requests by the current request URI.\n\n\"subset\" hashing can be enabled by setting `nginx.ingress.kubernetes.io\/upstream-hash-by-subset`: \"true\". This maps requests to a subset of nodes instead of a single one. 
`nginx.ingress.kubernetes.io\/upstream-hash-by-subset-size` determines the size of each subset (default 3).\n\nPlease check the [chashsubset](..\/..\/examples\/chashsubset\/deployment.yaml) example.\n\n### Custom NGINX load balancing\n\nThis is similar to [`load-balance` in ConfigMap](.\/configmap.md#load-balance), but configures the load balancing algorithm per ingress.\n>Note that `nginx.ingress.kubernetes.io\/upstream-hash-by` takes precedence over this. If neither this nor `nginx.ingress.kubernetes.io\/upstream-hash-by` is set, then we fall back to using the globally configured load balancing algorithm.\n\n### Custom NGINX upstream vhost\n\nThis configuration setting allows you to control the value for host in the following statement: `proxy_set_header Host $host`, which forms part of the location block. This is useful if you need to call the upstream server by something other than `$host`.\n\n### Client Certificate Authentication\n\nIt is possible to enable Client Certificate Authentication using additional annotations in the Ingress rule.\n\nClient Certificate Authentication is applied per host and it is not possible to specify rules that differ for individual paths.\n\nTo enable, add the annotation `nginx.ingress.kubernetes.io\/auth-tls-secret: namespace\/secretName`. This secret must have a file named `ca.crt` containing the full Certificate Authority chain that is enabled to authenticate against this Ingress.\n\nYou can further customize client certificate authentication and behavior with these annotations:\n\n* `nginx.ingress.kubernetes.io\/auth-tls-verify-depth`: The validation depth between the provided client certificate and the Certification Authority chain. (default: 1)\n* `nginx.ingress.kubernetes.io\/auth-tls-verify-client`: Enables verification of client certificates. 
Possible values are:\n    * `on`: Request a client certificate that must be signed by a certificate that is included in the secret key `ca.crt` of the secret specified by `nginx.ingress.kubernetes.io\/auth-tls-secret: namespace\/secretName`. Failed certificate verification will result in a status code 400 (Bad Request) (default)\n    * `off`: Don't request client certificates and don't do client certificate verification.\n    * `optional`: Do optional client certificate validation against the CAs from `auth-tls-secret`. The request fails with status code 400 (Bad Request) when a certificate is provided that is not signed by the CA. When no or an otherwise invalid certificate is provided, the request does not fail, but instead the verification result is sent to the upstream service.\n    * `optional_no_ca`: Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from `auth-tls-secret`. The certificate verification result is sent to the upstream service.\n* `nginx.ingress.kubernetes.io\/auth-tls-error-page`: The URL\/page the user should be redirected to in case of a Certificate Authentication Error\n* `nginx.ingress.kubernetes.io\/auth-tls-pass-certificate-to-upstream`: Indicates if the received certificates should be passed to the upstream server in the header `ssl-client-cert`. Possible values are \"true\" or \"false\" (default).\n* `nginx.ingress.kubernetes.io\/auth-tls-match-cn`: Adds a sanity check for the CN of the client certificate that is sent over, using a string \/ regex starting with \"CN=\", for example: `\"CN=myvalidclient\"`. If the certificate CN sent during mTLS does not match your string \/ regex, it will fail with status code 403. Another way of using this is by adding multiple options in your regex, for example: `\"CN=(option1|option2|myvalidclient)\"`. In this case, as long as one of the options in the brackets matches the certificate CN, you will receive a 200 status code. 
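Put together, a minimal sketch of a client-certificate-protected Ingress might use annotations like the following (the secret name `default/ca-secret`, host, and Service name are illustrative assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-example               # hypothetical name
  annotations:
    # Secret "ca-secret" in namespace "default" must contain ca.crt
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    # Forward the client certificate to the backend in ssl-client-cert
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: mtls.example.com       # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend      # hypothetical Service
                port:
                  number: 443
```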
\n\nThe following headers are sent to the upstream service according to the `auth-tls-*` annotations:\n\n* `ssl-client-issuer-dn`: The issuer information of the client certificate. Example: \"CN=My CA\"\n* `ssl-client-subject-dn`: The subject information of the client certificate. Example: \"CN=My Client\"\n* `ssl-client-verify`: The result of the client verification. Possible values: \"SUCCESS\", \"FAILED: <description, why the verification failed>\"\n* `ssl-client-cert`: The full client certificate in PEM format. Will only be sent when `nginx.ingress.kubernetes.io\/auth-tls-pass-certificate-to-upstream` is set to \"true\". Example: `-----BEGIN%20CERTIFICATE-----%0A...---END%20CERTIFICATE-----%0A`\n\n!!! example\n    Please check the [client-certs](..\/..\/examples\/auth\/client-certs\/README.md) example.\n\n!!! attention\n    TLS with Client Authentication is **not** possible in Cloudflare and might result in unexpected behavior.\n\n    Cloudflare only allows Authenticated Origin Pulls and is required to use their own certificate: [https:\/\/blog.cloudflare.com\/protecting-the-origin-with-tls-authenticated-origin-pulls\/](https:\/\/blog.cloudflare.com\/protecting-the-origin-with-tls-authenticated-origin-pulls\/)\n\n    Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: [https:\/\/support.cloudflare.com\/hc\/en-us\/articles\/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls](https:\/\/web.archive.org\/web\/20200907143649\/https:\/\/support.cloudflare.com\/hc\/en-us\/articles\/204899617-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls#section5)\n\n### Backend Certificate Authentication\n\nIt is possible to authenticate to a proxied HTTPS backend with certificate using additional annotations in Ingress Rule.\n\n* `nginx.ingress.kubernetes.io\/proxy-ssl-secret: secretName`:\n  Specifies a Secret with the certificate `tls.crt`, key `tls.key` in PEM format used for authentication to a proxied HTTPS 
server. It should also contain trusted CA certificates `ca.crt` in PEM format used to verify the certificate of the proxied HTTPS server.\n  This annotation expects the Secret name in the form \"namespace\/secretName\".\n* `nginx.ingress.kubernetes.io\/proxy-ssl-verify`:\n  Enables or disables verification of the proxied HTTPS server certificate. (default: off)\n* `nginx.ingress.kubernetes.io\/proxy-ssl-verify-depth`:\n  Sets the verification depth in the proxied HTTPS server certificates chain. (default: 1)\n* `nginx.ingress.kubernetes.io\/proxy-ssl-ciphers`:\n  Specifies the enabled [ciphers](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_ssl_ciphers) for requests to a proxied HTTPS server. The ciphers are specified in the format understood by the OpenSSL library.\n* `nginx.ingress.kubernetes.io\/proxy-ssl-name`:\n  Allows to set [proxy_ssl_name](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_ssl_name). This allows overriding the server name used to verify the certificate of the proxied HTTPS server. This value is also passed through SNI when a connection is established to the proxied HTTPS server.\n* `nginx.ingress.kubernetes.io\/proxy-ssl-protocols`:\n  Enables the specified [protocols](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_ssl_protocols) for requests to a proxied HTTPS server.\n* `nginx.ingress.kubernetes.io\/proxy-ssl-server-name`:\n  Enables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066) when establishing a connection with the proxied HTTPS server.\n\n### Configuration snippet\n\nUsing this annotation you can add additional configuration to the NGINX location. 
For example:\n\n```yaml\nnginx.ingress.kubernetes.io\/configuration-snippet: |\n  more_set_headers \"Request-Id: $req_id\";\n```\n\nBe aware this can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. The recommended mitigation for this threat is to disable this feature, so it may not work for you. See CVE-2021-25742 and the [related issue on github](https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/7837) for more information.\n\n### Custom HTTP Errors\n\nLike the [`custom-http-errors`](.\/configmap.md#custom-http-errors) value in the ConfigMap, this annotation will set NGINX `proxy-intercept-errors`, but only for the NGINX location associated with this ingress. If a [default backend annotation](#default-backend) is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend).\nDifferent ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress).\nIf `custom-http-errors` is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path.\n\nExample usage:\n```\nnginx.ingress.kubernetes.io\/custom-http-errors: \"404,415\"\n```\n\n### Custom Headers\nThis annotation is of the form `nginx.ingress.kubernetes.io\/custom-headers: <namespace>\/<custom headers configmap>` to specify a namespace and configmap name that contains custom headers. 
This annotation uses the `more_set_headers` nginx directive.\n\nExample annotation for the following example ConfigMap:\n\n```yaml\nnginx.ingress.kubernetes.io\/custom-headers: default\/custom-headers-configmap\n```\n\nExample ConfigMap:\n```yaml\napiVersion: v1\ndata:\n  Content-Type: application\/json\nkind: ConfigMap\nmetadata:\n  name: custom-headers-configmap\n  namespace: default\n```\n\n!!! attention\n    First define the allowed response headers in [global-allowed-response-headers](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/docs\/user-guide\/nginx-configuration\/configmap.md#global-allowed-response-headers).\n\n### Default Backend\n\nThis annotation is of the form `nginx.ingress.kubernetes.io\/default-backend: <svc name>` to specify a custom default backend.  This `<svc name>` is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. In case the service has [multiple ports](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#multi-port-services), the first one will receive the backend traffic. \n\nThis service will be used to handle the response when the configured service in the Ingress rule does not have any active endpoints. It will also be used to handle the error responses if both this annotation and the [custom-http-errors annotation](#custom-http-errors) are set.\n\n### Enable CORS\n\nTo enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation\n`nginx.ingress.kubernetes.io\/enable-cors: \"true\"`. 
This will add a section in the server\nlocation enabling this functionality.\n\nCORS can be controlled with the following annotations:\n\n* `nginx.ingress.kubernetes.io\/cors-allow-methods`: Controls which methods are accepted.\n\n    This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).\n\n    - Default: `GET, PUT, POST, DELETE, PATCH, OPTIONS`\n    - Example: `nginx.ingress.kubernetes.io\/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"`\n\n* `nginx.ingress.kubernetes.io\/cors-allow-headers`: Controls which headers are accepted.\n\n    This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.\n\n    - Default: `DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization`\n    - Example: `nginx.ingress.kubernetes.io\/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\"`\n\n* `nginx.ingress.kubernetes.io\/cors-expose-headers`: Controls which headers are exposed to response.\n\n    This is a multi-valued field, separated by ',' and accepts letters, numbers, _, - and *.\n\n    - Default: *empty*\n    - Example: `nginx.ingress.kubernetes.io\/cors-expose-headers: \"*, X-CustomResponseHeader\"`\n\n* `nginx.ingress.kubernetes.io\/cors-allow-origin`: Controls what's the accepted Origin for CORS.\n\n    This is a multi-valued field, separated by ','. 
It must follow this format: `protocol:\/\/origin-site.com` or `protocol:\/\/origin-site.com:port`\n\n    - Default: `*`\n    - Example: `nginx.ingress.kubernetes.io\/cors-allow-origin: \"https:\/\/origin-site.com:4443, http:\/\/origin-site.com, myprotocol:\/\/example.org:1199\"`\n\n    It also supports single level wildcard subdomains and follows this format: `protocol:\/\/*.foo.bar`, `protocol:\/\/*.bar.foo:8080` or `protocol:\/\/*.abc.bar.foo:9000`\n    - Example: `nginx.ingress.kubernetes.io\/cors-allow-origin: \"https:\/\/*.origin-site.com:4443, http:\/\/*.origin-site.com, myprotocol:\/\/example.org:1199\"`\n\n* `nginx.ingress.kubernetes.io\/cors-allow-credentials`: Controls if credentials can be passed during CORS operations.\n\n    - Default: `true`\n    - Example: `nginx.ingress.kubernetes.io\/cors-allow-credentials: \"false\"`\n\n* `nginx.ingress.kubernetes.io\/cors-max-age`: Controls how long preflight requests can be cached.\n\n    - Default: `1728000`\n    - Example: `nginx.ingress.kubernetes.io\/cors-max-age: 600`\n\n!!! note\n    For more information please see [https:\/\/enable-cors.org](https:\/\/enable-cors.org\/server_nginx.html)\n\n### HTTP2 Push Preload.\n\nEnables automatic conversion of preload links specified in the \u201cLink\u201d response header fields into push requests.\n\n!!! example\n\n    * `nginx.ingress.kubernetes.io\/http2-push-preload: \"true\"`\n\n### Server Alias\n\nAllows the definition of one or more aliases in the server definition of the NGINX configuration using the annotation `nginx.ingress.kubernetes.io\/server-alias: \"<alias 1>,<alias 2>\"`.\nThis will create a server with the same configuration, but adding new values to the `server_name` directive.\n\n!!! note\n\t  A server-alias name cannot conflict with the hostname of an existing server. 
If it does, the server-alias annotation will be ignored.\n    If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take\n    place over the alias configuration.\n\nFor more information please see [the `server_name` documentation](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#server_name).\n\n### Server snippet\n\nUsing the annotation `nginx.ingress.kubernetes.io\/server-snippet` it is possible to add custom configuration in the server configuration block.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io\/server-snippet: |\n        set $agentflag 0;\n\n        if ($http_user_agent ~* \"(Mobile)\" ){\n          set $agentflag 1;\n        }\n\n        if ( $agentflag = 1 ) {\n          return 301 https:\/\/m.example.com;\n        }\n```\n\n!!! attention\n    This annotation can be used only once per host.\n\n### Client Body Buffer Size\n\nSets buffer size for reading client request body per location. In case the request body is larger than the buffer,\nthe whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages.\nThis is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is\napplied to each location provided in the ingress rule.\n\n!!! note\n    The annotation value must be given in a format understood by Nginx.\n\n!!! 
example\n\n    * `nginx.ingress.kubernetes.io\/client-body-buffer-size: \"1000\"` # 1000 bytes\n    * `nginx.ingress.kubernetes.io\/client-body-buffer-size: 1k` # 1 kilobyte\n    * `nginx.ingress.kubernetes.io\/client-body-buffer-size: 1K` # 1 kilobyte\n    * `nginx.ingress.kubernetes.io\/client-body-buffer-size: 1m` # 1 megabyte\n    * `nginx.ingress.kubernetes.io\/client-body-buffer-size: 1M` # 1 megabyte\n\nFor more information please see [https:\/\/nginx.org](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#client_body_buffer_size)\n\n### External Authentication\n\nTo use an existing service that provides authentication the Ingress rule can be annotated with `nginx.ingress.kubernetes.io\/auth-url` to indicate the URL where the HTTP request should be sent.\n\n```yaml\nnginx.ingress.kubernetes.io\/auth-url: \"URL to the authentication service\"\n```\n\nAdditionally it is possible to set:\n\n* `nginx.ingress.kubernetes.io\/auth-keepalive`:\n  `<Connections>` to specify the maximum number of keepalive connections to `auth-url`. Only takes effect\n   when no variables are used in the host part of the URL. Defaults to `0` (keepalive disabled).\n\n> Note: does not work with HTTP\/2 listener because of a limitation in Lua [subrequests](https:\/\/github.com\/openresty\/lua-nginx-module#spdy-mode-not-fully-supported).\n> [UseHTTP2](.\/configmap.md#use-http2) configuration should be disabled!\n\n* `nginx.ingress.kubernetes.io\/auth-keepalive-share-vars`:\n  Whether to share Nginx variables among the current request and the auth request. 
An example use case is request tracking: when set to \"true\", the X-Request-ID HTTP header will be the same for the backend and the auth request.\n  Defaults to \"false\".\n* `nginx.ingress.kubernetes.io\/auth-keepalive-requests`:\n  `<Requests>` to specify the maximum number of requests that can be served through one keepalive connection.\n  Defaults to `1000` and only applied if `auth-keepalive` is set to higher than `0`.\n* `nginx.ingress.kubernetes.io\/auth-keepalive-timeout`:\n  `<Timeout>` to specify a duration in seconds for which an idle keepalive connection to an upstream server will stay open.\n  Defaults to `60` and only applied if `auth-keepalive` is set to higher than `0`.\n* `nginx.ingress.kubernetes.io\/auth-method`:\n  `<Method>` to specify the HTTP method to use.\n* `nginx.ingress.kubernetes.io\/auth-signin`:\n  `<SignIn_URL>` to specify the location of the error page.\n* `nginx.ingress.kubernetes.io\/auth-signin-redirect-param`:\n  `<SignIn_URL>` to specify the URL parameter in the error page which should contain the original URL for a failed signin request.\n* `nginx.ingress.kubernetes.io\/auth-response-headers`:\n  `<Response_Header_1, ..., Response_Header_n>` to specify headers to pass to the backend once the authentication request completes.\n* `nginx.ingress.kubernetes.io\/auth-proxy-set-headers`:\n  `<ConfigMap>` the name of a ConfigMap that specifies headers to pass to the authentication service.\n* `nginx.ingress.kubernetes.io\/auth-request-redirect`:\n  `<Request_Redirect_URL>` to specify the X-Auth-Request-Redirect header value.\n* `nginx.ingress.kubernetes.io\/auth-cache-key`:\n  `<Cache_Key>` this enables caching for auth requests. Specify a lookup key for auth responses, e.g. `$remote_user$http_authorization`. Each server and location has its own keyspace. 
Hence a cached response is only valid on a per-server and per-location basis.\n* `nginx.ingress.kubernetes.io\/auth-cache-duration`:\n  `<Cache_duration>` to specify a caching time for auth responses based on their response codes, e.g. `200 202 30m`. See [proxy_cache_valid](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_cache_valid) for details. You may specify multiple, comma-separated values: `200 202 10m, 401 5m`. Defaults to `200 202 401 5m`.\n* `nginx.ingress.kubernetes.io\/auth-always-set-cookie`:\n  `<Boolean_Flag>` to set a cookie returned by the auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308.\n* `nginx.ingress.kubernetes.io\/auth-snippet`:\n  `<Auth_Snippet>` to specify a custom snippet to use with external authentication, e.g.\n\n```yaml\nnginx.ingress.kubernetes.io\/auth-url: http:\/\/foo.com\/external-auth\nnginx.ingress.kubernetes.io\/auth-snippet: |\n    proxy_set_header Foo-Header 42;\n```\n> Note: `nginx.ingress.kubernetes.io\/auth-snippet` is an optional annotation. However, it may only be used in conjunction with `nginx.ingress.kubernetes.io\/auth-url` and will be ignored if `nginx.ingress.kubernetes.io\/auth-url` is not set.\n\n!!! example\n    Please check the [external-auth](..\/..\/examples\/auth\/external-auth\/README.md) example.\n\n#### Global External Authentication\n\nBy default the controller redirects all requests to an existing service that provides authentication if `global-auth-url` is set in the NGINX ConfigMap. If you want to disable this behavior for that ingress, you can use `enable-global-auth: \"false\"` in the NGINX ConfigMap.\n`nginx.ingress.kubernetes.io\/enable-global-auth`:\n   indicates if the GlobalExternalAuth configuration should be applied or not to this Ingress rule. The default value is `\"true\"`.\n\n!!! 
note\n    For more information please see [global-auth-url](.\/configmap.md#global-auth-url).\n\n### Rate Limiting\n\nThese annotations define limits on connections and transmission rates.  These can be used to mitigate [DDoS Attacks](https:\/\/www.nginx.com\/blog\/mitigating-ddos-attacks-with-nginx-and-nginx-plus).\n\n* `nginx.ingress.kubernetes.io\/limit-connections`: number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.\n* `nginx.ingress.kubernetes.io\/limit-rps`: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier; the default multiplier is 5. When clients exceed this limit, [limit-req-status-code](https:\/\/kubernetes.github.io\/ingress-nginx\/user-guide\/nginx-configuration\/configmap\/#limit-req-status-code) ***default:*** 503 is returned.\n* `nginx.ingress.kubernetes.io\/limit-rpm`: number of requests accepted from a given IP each minute. The burst limit is set to this limit multiplied by the burst multiplier; the default multiplier is 5. When clients exceed this limit, [limit-req-status-code](https:\/\/kubernetes.github.io\/ingress-nginx\/user-guide\/nginx-configuration\/configmap\/#limit-req-status-code) ***default:*** 503 is returned.\n* `nginx.ingress.kubernetes.io\/limit-burst-multiplier`: multiplier of the limit rate for burst size. The default burst multiplier is 5; this annotation overrides the default multiplier. When clients exceed this limit, [limit-req-status-code](https:\/\/kubernetes.github.io\/ingress-nginx\/user-guide\/nginx-configuration\/configmap\/#limit-req-status-code) ***default:*** 503 is returned.\n* `nginx.ingress.kubernetes.io\/limit-rate-after`: initial number of kilobytes after which the further transmission of a response to a given connection will be rate limited. 
This feature must be used with [proxy-buffering](#proxy-buffering) enabled.\n* `nginx.ingress.kubernetes.io\/limit-rate`: number of kilobytes per second allowed to be sent to a given connection. The zero value disables rate limiting. This feature must be used with [proxy-buffering](#proxy-buffering) enabled.\n* `nginx.ingress.kubernetes.io\/limit-whitelist`: client IP source ranges to be excluded from rate-limiting. The value is a comma separated list of CIDRs.\n\nIf you specify multiple annotations in a single Ingress rule, limits are applied in the order `limit-connections`, `limit-rpm`, `limit-rps`.\n\nTo configure settings globally for all Ingress rules, the `limit-rate-after` and `limit-rate` values may be set in the [NGINX ConfigMap](.\/configmap.md#limit-rate).  The value set in an Ingress annotation will override the global setting.\n\nThe client IP address will be set based on the use of [PROXY protocol](.\/configmap.md#use-proxy-protocol) or from the `X-Forwarded-For` header value when [use-forwarded-headers](.\/configmap.md#use-forwarded-headers) is enabled.\n\n### Permanent Redirect\n\nThis annotation allows you to return a permanent redirect (Return Code 301) instead of sending data to the upstream.  For example `nginx.ingress.kubernetes.io\/permanent-redirect: https:\/\/www.google.com` would redirect everything to Google.\n\n### Permanent Redirect Code\n\nThis annotation allows you to modify the status code used for permanent redirects.  For example `nginx.ingress.kubernetes.io\/permanent-redirect-code: '308'` would return your permanent-redirect with a 308.\n\n### Temporal Redirect\nThis annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. 
For example `nginx.ingress.kubernetes.io\/temporal-redirect: https:\/\/www.google.com` would redirect everything to Google with a Return Code of 302 (Moved Temporarily).\n\n### Temporal Redirect Code\n\nThis annotation allows you to modify the status code used for temporal redirects.  For example `nginx.ingress.kubernetes.io\/temporal-redirect-code: '307'` would return your temporal-redirect with a 307.\n\n### SSL Passthrough\n\nThe annotation `nginx.ingress.kubernetes.io\/ssl-passthrough` instructs the controller to send TLS connections directly\nto the backend instead of letting NGINX decrypt the communication. See also [TLS\/HTTPS](..\/tls.md#ssl-passthrough) in\nthe User guide.\n\n!!! note\n    SSL Passthrough is **disabled by default** and requires starting the controller with the\n    [`--enable-ssl-passthrough`](..\/cli-arguments.md) flag.\n\n!!! attention\n    Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on layer 7 (HTTP), using SSL Passthrough\n    invalidates all the other annotations set on an Ingress object.\n\n### Service Upstream\n\nBy default the Ingress-Nginx Controller uses a list of all endpoints (Pod IP\/port) in the NGINX upstream configuration.\n\nThe `nginx.ingress.kubernetes.io\/service-upstream` annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.\n\nThis can be desirable for things like zero-downtime deployments. 
See issue [#257](https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/257).\n\n#### Known Issues\n\nIf the `service-upstream` annotation is specified the following things should be taken into consideration:\n\n* Sticky Sessions will not work as only round-robin load balancing is supported.\n* The `proxy_next_upstream` directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.\n\n### Server-side HTTPS enforcement through redirect\n\nBy default the controller redirects (308) to HTTPS if TLS is enabled for that ingress.\nIf you want to disable this behavior globally, you can use `ssl-redirect: \"false\"` in the NGINX [ConfigMap](.\/configmap.md#ssl-redirect).\n\nTo configure this feature for specific ingress resources, you can use the `nginx.ingress.kubernetes.io\/ssl-redirect: \"false\"`\nannotation in the particular resource.\n\nWhen using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS\neven when there is no TLS certificate available.\nThis can be achieved by using the `nginx.ingress.kubernetes.io\/force-ssl-redirect: \"true\"` annotation in the particular resource.\n\nTo preserve the trailing slash in the URI with `ssl-redirect`, set the `nginx.ingress.kubernetes.io\/preserve-trailing-slash: \"true\"` annotation for that particular resource.\n\n### Redirect from\/to www\n\nIn some scenarios, it is required to redirect from `www.domain.com` to `domain.com` or vice versa. Which way the redirect is performed depends on the configured `host` value in the Ingress object.\n\nFor example, if `.spec.rules.host` is configured with a value like `www.example.com`, then this annotation will redirect from `example.com` to `www.example.com`. 
If `.spec.rules.host` is configured with a value like `example.com`, so without a `www`, then this annotation will redirect from `www.example.com` to `example.com` instead.\n\nTo enable this feature use the annotation `nginx.ingress.kubernetes.io\/from-to-www-redirect: \"true\"`.\n\n!!! attention\n    If at some point a new Ingress is created with a host equal to one of the options (like `domain.com`), the annotation will be omitted.\n\n!!! attention\n    For HTTPS to HTTPS redirects, it is mandatory that the SSL certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.\n\n### Denylist source range\n\nYou can specify blocked client IP source ranges through the `nginx.ingress.kubernetes.io\/denylist-source-range` annotation.\nThe value is a comma separated list of [CIDRs](https:\/\/en.wikipedia.org\/wiki\/Classless_Inter-Domain_Routing), e.g.  `10.0.0.0\/24,172.10.0.1`.\n\nTo configure this setting globally for all Ingress rules, the `denylist-source-range` value may be set in the [NGINX ConfigMap](.\/configmap.md#denylist-source-range).\n\n!!! note\n    Adding an annotation to an Ingress rule overrides any global restriction.\n\n\n### Whitelist source range\n\nYou can specify allowed client IP source ranges through the `nginx.ingress.kubernetes.io\/whitelist-source-range` annotation.\nThe value is a comma separated list of [CIDRs](https:\/\/en.wikipedia.org\/wiki\/Classless_Inter-Domain_Routing), e.g.  `10.0.0.0\/24,172.10.0.1`.\n\nTo configure this setting globally for all Ingress rules, the `whitelist-source-range` value may be set in the [NGINX ConfigMap](.\/configmap.md#whitelist-source-range).\n\n!!! note\n    Adding an annotation to an Ingress rule overrides any global restriction.\n\n### Custom timeouts\n\nUsing the configuration configmap it is possible to set the default global timeout for connections to the upstream servers.\nIn some scenarios it is required to have different values. 
To allow this, we provide annotations that allow this customization:\n\n- `nginx.ingress.kubernetes.io\/proxy-connect-timeout`\n- `nginx.ingress.kubernetes.io\/proxy-send-timeout`\n- `nginx.ingress.kubernetes.io\/proxy-read-timeout`\n- `nginx.ingress.kubernetes.io\/proxy-next-upstream`\n- `nginx.ingress.kubernetes.io\/proxy-next-upstream-timeout`\n- `nginx.ingress.kubernetes.io\/proxy-next-upstream-tries`\n- `nginx.ingress.kubernetes.io\/proxy-request-buffering`\n\nIf you indicate [Backend Protocol](#backend-protocol) as `GRPC` or `GRPCS`, the following grpc values will be set and inherited from proxy timeouts:\n\n- [`grpc_connect_timeout=5s`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_grpc_module.html#grpc_connect_timeout), from `nginx.ingress.kubernetes.io\/proxy-connect-timeout`\n- [`grpc_send_timeout=60s`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_grpc_module.html#grpc_send_timeout), from `nginx.ingress.kubernetes.io\/proxy-send-timeout`\n- [`grpc_read_timeout=60s`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_grpc_module.html#grpc_read_timeout), from `nginx.ingress.kubernetes.io\/proxy-read-timeout`\n\nNote: All timeout values are unitless and in seconds, e.g. `nginx.ingress.kubernetes.io\/proxy-read-timeout: \"120\"` sets a valid 120 seconds proxy read timeout.\n\n### Proxy redirect\n\nThe annotations `nginx.ingress.kubernetes.io\/proxy-redirect-from` and `nginx.ingress.kubernetes.io\/proxy-redirect-to` will set the first and second parameters of NGINX's proxy_redirect directive respectively. It is possible to\nset the text that should be changed in the `Location` and `Refresh` header fields of a [proxied server response](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_redirect).\n\nSetting \"off\" or \"default\" in the annotation `nginx.ingress.kubernetes.io\/proxy-redirect-from` disables `nginx.ingress.kubernetes.io\/proxy-redirect-to`;\notherwise, both annotations must be used in unison. 
Note that each annotation must be a string without spaces.\n\nBy default the value of each annotation is \"off\".\n\n### Custom max body size\n\nFor NGINX, a 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter [`client_max_body_size`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#client_max_body_size).\n\nTo configure this setting globally for all Ingress rules, the `proxy-body-size` value may be set in the [NGINX ConfigMap](.\/configmap.md#proxy-body-size).\nTo use custom values in an Ingress rule define this annotation:\n\n```yaml\nnginx.ingress.kubernetes.io\/proxy-body-size: 8m\n```\n\n### Proxy cookie domain\n\nSets a text that [should be changed in the domain attribute](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_cookie_domain) of the \"Set-Cookie\" header fields of a proxied server response.\n\nTo configure this setting globally for all Ingress rules, the `proxy-cookie-domain` value may be set in the [NGINX ConfigMap](.\/configmap.md#proxy-cookie-domain).\n\n### Proxy cookie path\n\nSets a text that [should be changed in the path attribute](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_cookie_path) of the \"Set-Cookie\" header fields of a proxied server response.\n\nTo configure this setting globally for all Ingress rules, the `proxy-cookie-path` value may be set in the [NGINX ConfigMap](.\/configmap.md#proxy-cookie-path).\n\n### Proxy buffering\n\nEnable or disable proxy buffering [`proxy_buffering`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_buffering).\nBy default proxy buffering is disabled in the NGINX config.\n\nTo configure this setting globally for all Ingress rules, the `proxy-buffering` value may be set in the [NGINX ConfigMap](.\/configmap.md#proxy-buffering).\nTo use custom values in an Ingress rule define this 
annotation:\n\n```yaml\nnginx.ingress.kubernetes.io\/proxy-buffering: \"on\"\n```\n\n### Proxy buffers Number\n\nSets the number of the buffers in [`proxy_buffers`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_buffers) used for reading the first part of the response received from the proxied server.\nBy default the number of proxy buffers is set to 4.\n\nTo configure this setting globally, set `proxy-buffers-number` in [NGINX ConfigMap](.\/configmap.md#proxy-buffers-number). To use custom values in an Ingress rule, define this annotation:\n```yaml\nnginx.ingress.kubernetes.io\/proxy-buffers-number: \"4\"\n```\n\n### Proxy buffer size\n\nSets the size of the buffer [`proxy_buffer_size`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_buffer_size) used for reading the first part of the response received from the proxied server.\nBy default the proxy buffer size is set to \"4k\".\n\nTo configure this setting globally, set `proxy-buffer-size` in [NGINX ConfigMap](.\/configmap.md#proxy-buffer-size). To use custom values in an Ingress rule, define this annotation:\n```yaml\nnginx.ingress.kubernetes.io\/proxy-buffer-size: \"8k\"\n```\n\n### Proxy busy buffers size\n\n[Limits the total size of buffers that can be busy](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_busy_buffers_size) sending a response to the client while the response is not yet fully read.\n\nBy default the proxy busy buffers size is set to \"8k\".\n\nTo configure this setting globally, set `proxy-busy-buffers-size` in the [ConfigMap](.\/configmap.md#proxy-busy-buffers-size). 
To use custom values in an Ingress rule, define this annotation:\n\n```yaml\nnginx.ingress.kubernetes.io\/proxy-busy-buffers-size: \"16k\"\n```\n\n### Proxy max temp file size\n\nWhen [`buffering`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_buffering) of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the [`proxy_buffer_size`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_buffer_size) and [`proxy_buffers`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_buffers) directives, a part of the response can be saved to a temporary file. This directive sets the maximum `size` of the temporary file setting the [`proxy_max_temp_file_size`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_max_temp_file_size). The size of data written to the temporary file at a time is set by the [`proxy_temp_file_write_size`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_temp_file_write_size) directive.\n\nThe zero value disables buffering of responses to temporary files.\n\nTo use custom values in an Ingress rule, define this annotation:\n```yaml\nnginx.ingress.kubernetes.io\/proxy-max-temp-file-size: \"1024m\"\n```\n\n### Proxy HTTP version\n\nUsing this annotation sets the [`proxy_http_version`](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_http_version) that the Nginx reverse proxy will use to communicate with the backend.\nBy default this is set to \"1.1\".\n\n```yaml\nnginx.ingress.kubernetes.io\/proxy-http-version: \"1.0\"\n```\n\n### SSL ciphers\n\nSpecifies the [enabled ciphers](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_ciphers).\n\nUsing this annotation will set the `ssl_ciphers` directive at the server level. 
This configuration is active for all the paths in the host.\n\n```yaml\nnginx.ingress.kubernetes.io\/ssl-ciphers: \"ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP\"\n```\n\nThe following annotation will set the `ssl_prefer_server_ciphers` directive at the server level. This configuration specifies that server ciphers should be preferred over client ciphers when using the SSLv3 and TLS protocols.\n\n```yaml\nnginx.ingress.kubernetes.io\/ssl-prefer-server-ciphers: \"true\"\n```\n\n### Connection proxy header\n\nUsing this annotation will override the default connection header set by NGINX.\nTo use custom values in an Ingress rule, define the annotation:\n\n```yaml\nnginx.ingress.kubernetes.io\/connection-proxy-header: \"keep-alive\"\n```\n\n### Enable Access Log\n\nAccess logs are enabled by default, but in some scenarios access logs might be required to be disabled for a given\ningress. To do this, use the annotation:\n\n```yaml\nnginx.ingress.kubernetes.io\/enable-access-log: \"false\"\n```\n\n### Enable Rewrite Log\n\nRewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs.\nNote that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:\n\n```yaml\nnginx.ingress.kubernetes.io\/enable-rewrite-log: \"true\"\n```\n\n### Enable Opentelemetry\n\nOpentelemetry can be enabled or disabled globally through the ConfigMap but this will sometimes need to be overridden\nto enable it or disable it for a specific ingress (e.g. to turn off telemetry of external health check endpoints)\n\n```yaml\nnginx.ingress.kubernetes.io\/enable-opentelemetry: \"true\"\n```\n\n### Opentelemetry Trust Incoming Span\n\nThe option to trust incoming trace spans can be enabled or disabled globally through the ConfigMap but this will\nsometimes need to be overridden to enable it or disable it for a specific ingress (e.g. 
only enable it on a private endpoint).\n\n```yaml\nnginx.ingress.kubernetes.io\/opentelemetry-trust-incoming-spans: \"true\"\n```\n\n### X-Forwarded-Prefix Header\nTo add the non-standard `X-Forwarded-Prefix` header to the upstream request with a string value, the following annotation can be used:\n\n```yaml\nnginx.ingress.kubernetes.io\/x-forwarded-prefix: \"\/path\"\n```\n\n### ModSecurity\n\n[ModSecurity](http:\/\/modsecurity.org\/) is an open-source web application firewall. It can be enabled for a particular set\nof ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the\n[ConfigMap](.\/configmap.md#enable-modsecurity). Note that this will enable ModSecurity for all paths, and each path\nmust be disabled manually where it is not wanted.\n\nIt can be enabled using the following annotation:\n```yaml\nnginx.ingress.kubernetes.io\/enable-modsecurity: \"true\"\n```\nModSecurity will run in \"Detection-Only\" mode using the [recommended configuration](https:\/\/github.com\/SpiderLabs\/ModSecurity\/blob\/v3\/master\/modsecurity.conf-recommended).\n\nYou can enable the [OWASP Core Rule Set](https:\/\/www.modsecurity.org\/CRS\/Documentation\/) by\nsetting the following annotation:\n```yaml\nnginx.ingress.kubernetes.io\/enable-owasp-core-rules: \"true\"\n```\n\nYou can pass transaction IDs from NGINX by setting up the following:\n```yaml\nnginx.ingress.kubernetes.io\/modsecurity-transaction-id: \"$request_id\"\n```\n\nYou can also add your own set of ModSecurity rules via a snippet (note that the snippet lines must be indented under the YAML block scalar `|`):\n```yaml\nnginx.ingress.kubernetes.io\/modsecurity-snippet: |\n  SecRuleEngine On\n  SecDebugLog \/tmp\/modsec_debug.log\n```\n\nNote: If you use both the `enable-owasp-core-rules` and `modsecurity-snippet` annotations together, only the\n`modsecurity-snippet` will take effect. 
If you wish to include the [OWASP Core Rule Set](https:\/\/www.modsecurity.org\/CRS\/Documentation\/) or the\n[recommended configuration](https:\/\/github.com\/SpiderLabs\/ModSecurity\/blob\/v3\/master\/modsecurity.conf-recommended), simply use an include\nstatement:\n\nnginx 0.24.1 and below\n```yaml\nnginx.ingress.kubernetes.io\/modsecurity-snippet: |\n  Include \/etc\/nginx\/owasp-modsecurity-crs\/nginx-modsecurity.conf\n  Include \/etc\/nginx\/modsecurity\/modsecurity.conf\n```\nnginx 0.25.0 and above\n```yaml\nnginx.ingress.kubernetes.io\/modsecurity-snippet: |\n  Include \/etc\/nginx\/owasp-modsecurity-crs\/nginx-modsecurity.conf\n```\n\n### Backend Protocol\n\nUsing the `backend-protocol` annotation it is possible to indicate how NGINX should communicate with the backend service. (This replaces the `secure-backends` annotation used in older versions.)\nValid values: `HTTP`, `HTTPS`, `AUTO_HTTP`, `GRPC`, `GRPCS` and `FCGI`.\n\nBy default NGINX uses `HTTP`.\n\nExample:\n\n```yaml\nnginx.ingress.kubernetes.io\/backend-protocol: \"HTTPS\"\n```\n\n### Use Regex\n\n!!! attention\n    When using this annotation with the NGINX annotation `nginx.ingress.kubernetes.io\/affinity` of type `cookie`, `nginx.ingress.kubernetes.io\/session-cookie-path` must also be set; session cookie paths do not support regex.\n\nUsing the `nginx.ingress.kubernetes.io\/use-regex` annotation will indicate whether or not the paths defined on an Ingress use regular expressions. 
The default value is `false`.\n\nThe following will indicate that regular expression paths are being used:\n```yaml\nnginx.ingress.kubernetes.io\/use-regex: \"true\"\n```\n\nThe following will indicate that regular expression paths are __not__ being used:\n```yaml\nnginx.ingress.kubernetes.io\/use-regex: \"false\"\n```\n\nWhen this annotation is set to `true`, the case-insensitive regular expression [location modifier](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#location) will be enforced on ALL paths for a given host, regardless of what Ingress they are defined on.\n\nAdditionally, if the [`rewrite-target` annotation](#rewrite) is used on any Ingress for a given host, then the case-insensitive regular expression [location modifier](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#location) will be enforced on ALL paths for that host, regardless of what Ingress they are defined on.\n\nPlease read about [ingress path matching](..\/ingress-path-matching.md) before using this modifier.\n\n### Satisfy\n\nBy default, a request must satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value.\n\n```yaml\nnginx.ingress.kubernetes.io\/satisfy: \"any\"\n```\n\n### Mirror\n\nEnables a request to be mirrored to a mirror backend. Responses from mirror backends are ignored. This feature is useful for seeing how requests behave in \"test\" backends.\n\nThe mirror backend can be set by applying:\n\n```yaml\nnginx.ingress.kubernetes.io\/mirror-target: https:\/\/test.env.com$request_uri\n```\n\nBy default the request body is sent to the mirror backend, but this can be turned off by applying:\n\n```yaml\nnginx.ingress.kubernetes.io\/mirror-request-body: \"off\"\n```\n\nBy default, the `Host` header for mirrored requests is set to the host part of the URI in the `mirror-target` annotation. 
You can override it with the `mirror-host` annotation:\n\n```yaml\nnginx.ingress.kubernetes.io\/mirror-target: https:\/\/1.2.3.4$request_uri\nnginx.ingress.kubernetes.io\/mirror-host: \"test.env.com\"\n```\n\n**Note:** The mirror directive will be applied to all paths within the ingress resource.\n\nThe request sent to the mirror is linked to the original request. If you have a slow mirror backend, the original request will be throttled.\n\nFor more information on the mirror module see [ngx_http_mirror_module](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_mirror_module.html).\n\n\n### Stream snippet\n\nUsing the annotation `nginx.ingress.kubernetes.io\/stream-snippet` it is possible to add custom stream configuration.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io\/stream-snippet: |\n      server {\n        listen 8000;\n        proxy_pass 127.0.0.1:80;\n      }\n```
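The snippets in the sections above show only the annotation values themselves. As an illustration, a complete Ingress manifest combining the mirror annotations might look like the following sketch; the resource name, host, Service name, and port are assumptions for the example, not values from this guide.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: mirrored-ingress            # hypothetical name\n  annotations:\n    # mirror every request to a test backend, without the request body\n    nginx.ingress.kubernetes.io\/mirror-target: https:\/\/test.env.com$request_uri\n    nginx.ingress.kubernetes.io\/mirror-request-body: \"off\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: prod.env.com              # assumed production host\n    http:\n      paths:\n      - path: \/\n        pathType: Prefix\n        backend:\n          service:\n            name: prod-service      # assumed Service name\n            port:\n              number: 80\n```\n\nAll annotations in this guide are attached the same way, under `metadata.annotations` of the Ingress object.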
authentication  string    nginx ingress kubernetes io auth secret type   authentication  string    nginx ingress kubernetes io auth type   authentication   basic  or  digest     nginx ingress kubernetes io auth tls secret   client certificate authentication  string    nginx ingress kubernetes io auth tls verify depth   client certificate authentication  number    nginx ingress kubernetes io auth tls verify client   client certificate authentication  string    nginx ingress kubernetes io auth tls error page   client certificate authentication  string    nginx ingress kubernetes io auth tls pass certificate to upstream   client certificate authentication   true  or  false     nginx ingress kubernetes io auth tls match cn   client certificate authentication  string    nginx ingress kubernetes io auth url   external authentication  string    nginx ingress kubernetes io auth cache key   external authentication  string    nginx ingress kubernetes io auth cache duration   external authentication  string    nginx ingress kubernetes io auth keepalive   external authentication  number    nginx ingress kubernetes io auth keepalive share vars   external authentication   true  or  false     nginx ingress kubernetes io auth keepalive requests   external authentication  number    nginx ingress kubernetes io auth keepalive timeout   external authentication  number    nginx ingress kubernetes io auth proxy set headers   external authentication  string    nginx ingress kubernetes io auth snippet   external authentication  string    nginx ingress kubernetes io enable global auth   external authentication   true  or  false     nginx ingress kubernetes io backend protocol   backend protocol  string    nginx ingress kubernetes io canary   canary   true  or  false     nginx ingress kubernetes io canary by header   canary  string    nginx ingress kubernetes io canary by header value   canary  string    nginx ingress kubernetes io canary by header pattern   canary  string    nginx ingress 
kubernetes io canary by cookie   canary  string    nginx ingress kubernetes io canary weight   canary  number    nginx ingress kubernetes io canary weight total   canary  number    nginx ingress kubernetes io client body buffer size   client body buffer size  string    nginx ingress kubernetes io configuration snippet   configuration snippet  string    nginx ingress kubernetes io custom http errors   custom http errors    int    nginx ingress kubernetes io custom headers   custom headers  string    nginx ingress kubernetes io default backend   default backend  string    nginx ingress kubernetes io enable cors   enable cors   true  or  false     nginx ingress kubernetes io cors allow origin   enable cors  string    nginx ingress kubernetes io cors allow methods   enable cors  string    nginx ingress kubernetes io cors allow headers   enable cors  string    nginx ingress kubernetes io cors expose headers   enable cors  string    nginx ingress kubernetes io cors allow credentials   enable cors   true  or  false     nginx ingress kubernetes io cors max age   enable cors  number    nginx ingress kubernetes io force ssl redirect   server side https enforcement through redirect   true  or  false     nginx ingress kubernetes io from to www redirect   redirect fromto www   true  or  false     nginx ingress kubernetes io http2 push preload   http2 push preload   true  or  false     nginx ingress kubernetes io limit connections   rate limiting  number    nginx ingress kubernetes io limit rps   rate limiting  number    nginx ingress kubernetes io permanent redirect   permanent redirect  string    nginx ingress kubernetes io permanent redirect code   permanent redirect code  number    nginx ingress kubernetes io temporal redirect   temporal redirect  string    nginx ingress kubernetes io temporal redirect code   temporal redirect code  number    nginx ingress kubernetes io preserve trailing slash   server side https enforcement through redirect   true  or  false     nginx 
ingress kubernetes io proxy body size   custom max body size  string    nginx ingress kubernetes io proxy cookie domain   proxy cookie domain  string    nginx ingress kubernetes io proxy cookie path   proxy cookie path  string    nginx ingress kubernetes io proxy connect timeout   custom timeouts  number    nginx ingress kubernetes io proxy send timeout   custom timeouts  number    nginx ingress kubernetes io proxy read timeout   custom timeouts  number    nginx ingress kubernetes io proxy next upstream   custom timeouts  string    nginx ingress kubernetes io proxy next upstream timeout   custom timeouts  number    nginx ingress kubernetes io proxy next upstream tries   custom timeouts  number    nginx ingress kubernetes io proxy request buffering   custom timeouts  string    nginx ingress kubernetes io proxy redirect from   proxy redirect  string    nginx ingress kubernetes io proxy redirect to   proxy redirect  string    nginx ingress kubernetes io proxy http version   proxy http version   1 0  or  1 1     nginx ingress kubernetes io proxy ssl secret   backend certificate authentication  string    nginx ingress kubernetes io proxy ssl ciphers   backend certificate authentication  string    nginx ingress kubernetes io proxy ssl name   backend certificate authentication  string    nginx ingress kubernetes io proxy ssl protocols   backend certificate authentication  string    nginx ingress kubernetes io proxy ssl verify   backend certificate authentication  string    nginx ingress kubernetes io proxy ssl verify depth   backend certificate authentication  number    nginx ingress kubernetes io proxy ssl server name   backend certificate authentication  string    nginx ingress kubernetes io enable rewrite log   enable rewrite log   true  or  false     nginx ingress kubernetes io rewrite target   rewrite  URI    nginx ingress kubernetes io satisfy   satisfy  string    nginx ingress kubernetes io server alias   server alias  string    nginx ingress kubernetes io server 
snippet   server snippet  string    nginx ingress kubernetes io service upstream   service upstream   true  or  false     nginx ingress kubernetes io session cookie change on failure   cookie affinity   true  or  false     nginx ingress kubernetes io session cookie conditional samesite none   cookie affinity   true  or  false     nginx ingress kubernetes io session cookie domain   cookie affinity  string    nginx ingress kubernetes io session cookie expires   cookie affinity  string    nginx ingress kubernetes io session cookie max age   cookie affinity  string    nginx ingress kubernetes io session cookie name   cookie affinity  string default  INGRESSCOOKIE     nginx ingress kubernetes io session cookie path   cookie affinity  string    nginx ingress kubernetes io session cookie samesite   cookie affinity  string  None    Lax  or  Strict     nginx ingress kubernetes io session cookie secure   cookie affinity  string    nginx ingress kubernetes io ssl redirect   server side https enforcement through redirect   true  or  false     nginx ingress kubernetes io ssl passthrough   ssl passthrough   true  or  false     nginx ingress kubernetes io stream snippet   stream snippet  string    nginx ingress kubernetes io upstream hash by   custom nginx upstream hashing  string    nginx ingress kubernetes io x forwarded prefix   x forwarded prefix header  string    nginx ingress kubernetes io load balance   custom nginx load balancing  string    nginx ingress kubernetes io upstream vhost   custom nginx upstream vhost  string    nginx ingress kubernetes io denylist source range   denylist source range  CIDR    nginx ingress kubernetes io whitelist source range   whitelist source range  CIDR    nginx ingress kubernetes io proxy buffering   proxy buffering  string    nginx ingress kubernetes io proxy buffers number   proxy buffers number  number    nginx ingress kubernetes io proxy buffer size   proxy buffer size  string    nginx ingress kubernetes io proxy busy buffers size   
proxy busy buffers size  string    nginx ingress kubernetes io proxy max temp file size   proxy max temp file size  string    nginx ingress kubernetes io ssl ciphers   ssl ciphers  string    nginx ingress kubernetes io ssl prefer server ciphers   ssl ciphers   true  or  false     nginx ingress kubernetes io connection proxy header   connection proxy header  string    nginx ingress kubernetes io enable access log   enable access log   true  or  false     nginx ingress kubernetes io enable opentelemetry   enable opentelemetry   true  or  false     nginx ingress kubernetes io opentelemetry trust incoming span   opentelemetry trust incoming spans   true  or  false     nginx ingress kubernetes io use regex   use regex  bool    nginx ingress kubernetes io enable modsecurity   modsecurity  bool    nginx ingress kubernetes io enable owasp core rules   modsecurity  bool    nginx ingress kubernetes io modsecurity transaction id   modsecurity  string    nginx ingress kubernetes io modsecurity snippet   modsecurity  string    nginx ingress kubernetes io mirror request body   mirror  string    nginx ingress kubernetes io mirror target   mirror  string    nginx ingress kubernetes io mirror host   mirror  string       Canary  In some cases  you may want to  canary  a new set of changes by sending a small number of requests to a different service than the production service  The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied  The following annotations to configure canary can be enabled after  nginx ingress kubernetes io canary   true   is set      nginx ingress kubernetes io canary by header   The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress  When the request header is set to  always   it will be routed to the canary  When the header is set to  never   it will never be routed to the canary  For any other value  the header will be 
ignored and the request compared against the other canary rules by precedence      nginx ingress kubernetes io canary by header value   The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress  When the request header is set to this value  it will be routed to the canary  For any other header value  the header will be ignored and the request compared against the other canary rules by precedence  This annotation has to be used together with  nginx ingress kubernetes io canary by header   The annotation is an extension of the  nginx ingress kubernetes io canary by header  to allow customizing the header value instead of using hardcoded values  It doesn t have any effect if the  nginx ingress kubernetes io canary by header  annotation is not defined      nginx ingress kubernetes io canary by header pattern   This works the same way as  canary by header value  except it does PCRE Regex matching  Note that when  canary by header value  is set this annotation will be ignored  When the given Regex causes error during request processing  the request will be considered as not matching      nginx ingress kubernetes io canary by cookie   The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress  When the cookie value is set to  always   it will be routed to the canary  When the cookie is set to  never   it will never be routed to the canary  For any other value  the cookie will be ignored and the request compared against the other canary rules by precedence      nginx ingress kubernetes io canary weight   The integer based  0    weight total   percent of random requests that should be routed to the service specified in the canary Ingress  A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule  A weight of   weight total   means implies all requests will be sent to the alternative service specified in the 
Ingress    weight total   defaults to 100  and can be increased via  nginx ingress kubernetes io canary weight total       nginx ingress kubernetes io canary weight total   The total weight of traffic  If unspecified  it defaults to 100   Canary rules are evaluated in order of precedence  Precedence is as follows   canary by header    canary by cookie    canary weight     Note   that when you mark an ingress as canary  then all the other non canary annotations will be ignored  inherited from the corresponding main ingress  except  nginx ingress kubernetes io load balance    nginx ingress kubernetes io upstream hash by   and  annotations related to session affinity   session affinity   If you want to restore the original behavior of canaries when session affinity was ignored  set  nginx ingress kubernetes io affinity canary behavior  annotation with value  legacy  on the canary ingress definition     Known Limitations    Currently a maximum of one canary ingress can be applied per Ingress rule       Rewrite  In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule  Without a rewrite any request will return 404  Set the annotation  nginx ingress kubernetes io rewrite target  to the path expected by the service   If the Application Root is exposed in a different path and needs to be redirected  set the annotation  nginx ingress kubernetes io app root  to redirect requests for           example     Please check the  rewrite        examples rewrite README md  example       Session Affinity  The annotation  nginx ingress kubernetes io affinity  enables and sets the affinity type in all Upstreams of an Ingress  This way  a request will always be directed to the same upstream server  The only affinity type available for NGINX is  cookie    The annotation  nginx ingress kubernetes io affinity mode  defines the stickiness of a session  Setting this to  balanced   default  will redistribute some sessions if a deployment gets 
scaled up  therefore rebalancing the load on the servers  Setting this to  persistent  will not rebalance sessions to new servers  therefore providing maximum stickiness   The annotation  nginx ingress kubernetes io affinity canary behavior  defines the behavior of canaries when session affinity is enabled  Setting this to  sticky   default  will ensure that users that were served by canaries  will continue to be served by canaries  Setting this to  legacy  will restore original canary behavior  when session affinity was ignored       attention     If more than one Ingress is defined for a host and at least one Ingress uses  nginx ingress kubernetes io affinity  cookie   then only paths on the Ingress using  nginx ingress kubernetes io affinity  will use session cookie affinity  All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server       example     Please check the  affinity        examples affinity cookie README md  example        Cookie affinity  If you use the   cookie   affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation  nginx ingress kubernetes io session cookie name   The default is to create a cookie named  INGRESSCOOKIE    The NGINX annotation  nginx ingress kubernetes io session cookie path  defines the path that will be set on the cookie  This is optional unless the annotation  nginx ingress kubernetes io use regex  is set to true  Session cookie paths do not support regex   Use  nginx ingress kubernetes io session cookie domain  to set the  Domain  attribute of the sticky cookie   Use  nginx ingress kubernetes io session cookie samesite  to apply a  SameSite  attribute to the sticky cookie  Browser accepted values are  None    Lax   and  Strict   Some browsers reject cookies with  SameSite None   including those created before the  SameSite None  specification  e g  Chrome 5X   Other browsers mistakenly treat  SameSite 
None  cookies as  SameSite Strict   e g  Safari running on OSX 14   To omit  SameSite None  from browsers with these incompatibilities  add the annotation  nginx ingress kubernetes io session cookie conditional samesite none   true     Use  nginx ingress kubernetes io session cookie expires  to control the cookie expires  its value is a number of seconds until the cookie expires   Use  nginx ingress kubernetes io session cookie path  to control the cookie path when use regex is set to true   Use  nginx ingress kubernetes io session cookie change on failure  to control the cookie change after request failure       Authentication  It is possible to add authentication by adding additional annotations in the Ingress rule  The source of the authentication is a secret that contains usernames and passwords   The annotations are      nginx ingress kubernetes io auth type   basic digest       Indicates the  HTTP Authentication Type  Basic or Digest Access Authentication  https   tools ietf org html rfc2617        nginx ingress kubernetes io auth secret  secretName      The name of the Secret that contains the usernames and passwords which are granted access to the  path s defined in the Ingress rules  This annotation also accepts the alternative form  namespace secretName   in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace       nginx ingress kubernetes io auth secret type   auth file auth map       The  auth secret  can have two forms      auth file    default  an htpasswd file in the key  auth  within the secret    auth map    the keys of the secret are the usernames  and the values are the hashed passwords      nginx ingress kubernetes io auth realm   realm string           example     Please check the  auth        examples auth basic README md  example       Custom NGINX upstream hashing  NGINX supports load balancing by client server mapping based on  consistent hashing  https   nginx org en docs http ngx http 
upstream module html hash  for a given key  The key can contain text  variables or any combination thereof  This feature allows for request stickiness other than client IP or cookies  The  ketama  https   www last fm user RJ journal 2007 04 10 rz libketama   a consistent hashing algo for memcache clients  consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes   There is a special mode of upstream hashing called subset  In this mode  upstream servers are grouped into subsets  and stickiness works by mapping keys to a subset instead of individual upstream servers  Specific server is chosen uniformly at random from the selected sticky subset  It provides a balance between stickiness and load distribution   To enable consistent hashing for a backend    nginx ingress kubernetes io upstream hash by   the nginx variable  text value or any combination thereof to use for consistent hashing  For example   nginx ingress kubernetes io upstream hash by    request uri   or  nginx ingress kubernetes io upstream hash by    request uri host   or  nginx ingress kubernetes io upstream hash by     request uri  text value   to consistently hash upstream requests by the current request URI    subset  hashing can be enabled setting  nginx ingress kubernetes io upstream hash by subset    true   This maps requests to subset of nodes instead of a single one   nginx ingress kubernetes io upstream hash by subset size  determines the size of each subset  default 3    Please check the  chashsubset        examples chashsubset deployment yaml  example       Custom NGINX load balancing  This is similar to   load balance  in ConfigMap    configmap md load balance   but configures load balancing algorithm per ingress   Note that  nginx ingress kubernetes io upstream hash by  takes preference over this  If this and  nginx ingress kubernetes io upstream hash by  are not set then we fallback to using globally configured load 
balancing algorithm       Custom NGINX upstream vhost  This configuration setting allows you to control the value for host in the following statement   proxy set header Host  host   which forms part of the location block   This is useful if you need to call the upstream server by something other than   host        Client Certificate Authentication  It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule   Client Certificate Authentication is applied per host and it is not possible to specify rules that differ for individual paths   To enable  add the annotation  nginx ingress kubernetes io auth tls secret  namespace secretName   This secret must have a file named  ca crt  containing the full Certificate Authority chain  ca crt  that is enabled to authenticate against this Ingress   You can further customize client certificate authentication and behavior with these annotations      nginx ingress kubernetes io auth tls verify depth   The validation depth between the provided client certificate and the Certification Authority chain   default  1     nginx ingress kubernetes io auth tls verify client   Enables verification of client certificates  Possible values are         on   Request a client certificate that must be signed by a certificate that is included in the secret key  ca crt  of the secret specified by  nginx ingress kubernetes io auth tls secret  namespace secretName   Failed certificate verification will result in a status code 400  Bad Request   default         off   Don t request client certificates and don t do client certificate verification         optional   Do optional client certificate validation against the CAs from  auth tls secret   The request fails with status code 400  Bad Request  when a certificate is provided that is not signed by the CA  When no or an otherwise invalid certificate is provided  the request does not fail  but instead the verification result is sent to the upstream service      
* `optional_no_ca`: Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from `auth-tls-secret`. Certificate verification result is sent to the upstream service.

* `nginx.ingress.kubernetes.io/auth-tls-error-page`: The URL/Page that user should be redirected in case of a Certificate Authentication Error
* `nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream`: Indicates if the received certificates should be passed or not to the upstream server in the header `ssl-client-cert`. Possible values are "true" or "false" (default).
* `nginx.ingress.kubernetes.io/auth-tls-match-cn`: Adds a sanity check for the CN of the client certificate that is sent over using a string / regex starting with `CN=`, example: `CN=myvalidclient`. If the certificate CN sent during mTLS does not match your string / regex it will fail with status code 403. Another way of using this is by adding multiple options in your regex, example: `CN=(option1|option2|myvalidclient)`. In this case, as long as one of the options in the brackets matches the certificate CN then you will receive a 200 status code.

The following headers are sent to the upstream service according to the `auth-tls-*` annotations:

* `ssl-client-issuer-dn`: The issuer information of the client certificate. Example: "CN=My CA"
* `ssl-client-subject-dn`: The subject information of the client certificate. Example: "CN=My Client"
* `ssl-client-verify`: The result of the client verification. Possible values: "SUCCESS", "FAILED: <description, why the verification failed>"
* `ssl-client-cert`: The full client certificate in PEM format. Will only be sent when `nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream` is set to "true". Example: `-----BEGIN%20CERTIFICATE-----%0A...%0A-----END%20CERTIFICATE-----%0A`

!!! example
    Please check the [client-certs](../../examples/auth/client-certs/README.md) example.

!!! attention
    TLS with Client Authentication is **not** possible in Cloudflare and might result in unexpected behavior.

    Cloudflare only allows Authenticated Origin Pulls and is required to use their own certificate: [https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls](https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls)

    Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: [https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls](https://web.archive.org/web/20200907143649/https://support.cloudflare.com/hc/en-us/articles/204899617-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls#section5)

### Backend Certificate Authentication

It is possible to authenticate to a proxied HTTPS backend with certificate using additional annotations in the Ingress rule.

* `nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName`: Specifies a Secret with the certificate `tls.crt`, key `tls.key` in PEM format used for authentication to a proxied HTTPS server. It should also contain trusted CA certificates `ca.crt` in PEM format used to verify the certificate of the proxied HTTPS server. This annotation expects the Secret name in the form "namespace/secretName".
* `nginx.ingress.kubernetes.io/proxy-ssl-verify`: Enables or disables verification of the proxied HTTPS server certificate. (default: off)
* `nginx.ingress.kubernetes.io/proxy-ssl-verify-depth`: Sets the verification depth in the proxied HTTPS server certificates chain. (default: 1)
* `nginx.ingress.kubernetes.io/proxy-ssl-ciphers`: Specifies the enabled [ciphers](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_ciphers) for requests to a proxied HTTPS server. The ciphers are specified in the format understood by the OpenSSL library.
* `nginx.ingress.kubernetes.io/proxy-ssl-name`: Allows to set [proxy_ssl_name](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_name). This allows overriding the server name used to verify the certificate of the proxied HTTPS server. This value is also passed through SNI when a connection is established to the proxied HTTPS server.
* `nginx.ingress.kubernetes.io/proxy-ssl-protocols`: Enables the specified [protocols](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_protocols) for requests to a proxied HTTPS server.
* `nginx.ingress.kubernetes.io/proxy-ssl-server-name`: Enables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066) when establishing a connection with the proxied HTTPS server.

### Configuration snippet

Using this annotation you can add additional configuration to the NGINX location. For example:

```yaml
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Request-Id: $req_id";
```

Be aware this can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. The recommended mitigation for this threat is to disable this feature, so it may not work for you. See CVE-2021-25742 and the [related issue on github](https://github.com/kubernetes/ingress-nginx/issues/7837) for more information.

### Custom HTTP Errors

Like the [`custom-http-errors`](./configmap.md#custom-http-errors) value in the ConfigMap, this annotation will set NGINX `proxy-intercept-errors`, but only for the NGINX location associated with this ingress. If a [default backend annotation](#default-backend) is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If `custom-http-errors` is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path.

Example usage:

```yaml
nginx.ingress.kubernetes.io/custom-http-errors: "404,415"
```
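Intercepted error codes are commonly paired with a custom default backend so the errors land on a dedicated error-page service. A minimal sketch (the service name `error-pages-svc` is hypothetical):

```yaml
nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
# hypothetical service in the same namespace serving the error pages
nginx.ingress.kubernetes.io/default-backend: error-pages-svc
```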
### Custom Headers

This annotation is of the form `nginx.ingress.kubernetes.io/custom-headers: namespace/custom-headers-configmap` to specify a namespace and configmap name that contains custom headers. This annotation uses the `more_set_headers` nginx directive.

Example annotation for the following example configmap:

```yaml
nginx.ingress.kubernetes.io/custom-headers: default/custom-headers-configmap
```

Example configmap:

```yaml
apiVersion: v1
data:
  Content-Type: application/json
kind: ConfigMap
metadata:
  name: custom-headers-configmap
  namespace: default
```

!!! attention
    First define the allowed response headers in [global-allowed-response-headers](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#global-allowed-response-headers).

### Default Backend

This annotation is of the form `nginx.ingress.kubernetes.io/default-backend: <svc name>` to specify a custom default backend. This `<svc name>` is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. In case the service has [multiple ports](https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services), the first one is the one which will receive the backend traffic.

This service will be used to handle the response when the configured service in the Ingress rule does not have any active endpoints. It will also be used to handle the error responses if both this annotation and the [custom-http-errors annotation](#custom-http-errors) are set.

### Enable CORS

To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation `nginx.ingress.kubernetes.io/enable-cors: "true"`. This will add a section in the server location enabling this functionality.

CORS can be controlled with the following annotations:

* `nginx.ingress.kubernetes.io/cors-allow-methods`: Controls which methods are accepted. This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).
    - Default: `GET, PUT, POST, DELETE, PATCH, OPTIONS`
    - Example: `nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"`

* `nginx.ingress.kubernetes.io/cors-allow-headers`: Controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, `_` and `-`.
    - Default: `DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization`
    - Example: `nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-app123-XPTO"`

* `nginx.ingress.kubernetes.io/cors-expose-headers`: Controls which headers are exposed to response. This is a multi-valued field, separated by ',' and accepts letters, numbers, `_`, `-` and `*`.
    - Default: *empty*
    - Example: `nginx.ingress.kubernetes.io/cors-expose-headers: "*, X-CustomResponseHeader"`

* `nginx.ingress.kubernetes.io/cors-allow-origin`: Controls what's the accepted Origin for CORS. This is a multi-valued field, separated by ','. It must follow this format: `protocol://origin-site.com` or `protocol://origin-site.com:port`
    - Default: `*`
    - Example: `nginx.ingress.kubernetes.io/cors-allow-origin: "https://origin-site.com:4443, http://origin-site.com, myprotocol://example.org:1199"`

    It also supports single level wildcard subdomains and follows this format: `protocol://*.foo.bar`, `protocol://*.bar.foo:8080` or `protocol://*.abc.bar.foo:9000`
    - Example: `nginx.ingress.kubernetes.io/cors-allow-origin: "https://*.origin-site.com:4443, http://*.origin-site.com, myprotocol://example.org:1199"`
* `nginx.ingress.kubernetes.io/cors-allow-credentials`: Controls if credentials can be passed during CORS operations.
    - Default: `true`
    - Example: `nginx.ingress.kubernetes.io/cors-allow-credentials: "false"`

* `nginx.ingress.kubernetes.io/cors-max-age`: Controls how long preflight requests can be cached.
    - Default: `1728000`
    - Example: `nginx.ingress.kubernetes.io/cors-max-age: 600`

!!! note
    For more information please see [https://enable-cors.org](https://enable-cors.org/server_nginx.html).

### HTTP2 Push Preload

Enables automatic conversion of preload links specified in the "Link" response header fields into push requests.

!!! example
    * `nginx.ingress.kubernetes.io/http2-push-preload: "true"`

### Server Alias

Allows the definition of one or more aliases in the server definition of the NGINX configuration using the annotation `nginx.ingress.kubernetes.io/server-alias: "<alias 1>,<alias 2>"`. This will create a server with the same configuration, but adding new values to the `server_name` directive.

!!! note
    A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored.
    If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take place over the alias configuration.

For more information please see the `server_name` [documentation](https://nginx.org/en/docs/http/ngx_http_core_module.html#server_name).

### Server snippet

Using the annotation `nginx.ingress.kubernetes.io/server-snippet` it is possible to add custom configuration in the server configuration block.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      set $agentflag 0;
      if ($http_user_agent ~* "(Mobile)" ){
        set $agentflag 1;
      }
      if ( $agentflag = 1 ) {
        return 301 https://m.example.com;
      }
```
!!! attention
    This annotation can be used only once per host.

### Client Body Buffer Size

Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule.

!!! note
    The annotation value must be given in a format understood by Nginx.

!!! example
    * `nginx.ingress.kubernetes.io/client-body-buffer-size: "1000"` # 1000 bytes
    * `nginx.ingress.kubernetes.io/client-body-buffer-size: 1k` # 1 kilobyte
    * `nginx.ingress.kubernetes.io/client-body-buffer-size: 1K` # 1 kilobyte
    * `nginx.ingress.kubernetes.io/client-body-buffer-size: 1m` # 1 megabyte
    * `nginx.ingress.kubernetes.io/client-body-buffer-size: 1M` # 1 megabyte

For more information please see [https://nginx.org](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size).

### External Authentication

To use an existing service that provides authentication the Ingress rule can be annotated with `nginx.ingress.kubernetes.io/auth-url` to indicate the URL where the HTTP request should be sent.

```yaml
nginx.ingress.kubernetes.io/auth-url: "URL to the authentication service"
```

Additionally it is possible to set:

* `nginx.ingress.kubernetes.io/auth-keepalive`: `<Connections>` to specify the maximum number of keepalive connections to `auth-url`. Only takes effect when no variables are used in the host part of the URL. Defaults to `0` (keepalive disabled).

    > Note: does not work with HTTP/2 listener because of a limitation in Lua [subrequests](https://github.com/openresty/lua-nginx-module#spdy-mode-not-fully-supported). [UseHTTP2](./configmap.md#use-http2) configuration should be disabled!

* `nginx.ingress.kubernetes.io/auth-keepalive-share-vars`: Whether to share Nginx variables among the current request and the auth request. Example use case is to track requests: when set to "true" the X-Request-ID HTTP header will be the same for the backend and the auth request. Defaults to "false".
* `nginx.ingress.kubernetes.io/auth-keepalive-requests`: `<Requests>` to specify the maximum number of requests that can be served through one keepalive connection. Defaults to `1000` and only applied if `auth-keepalive` is set to higher than `0`.
* `nginx.ingress.kubernetes.io/auth-keepalive-timeout`: `<Timeout>` to specify a duration in seconds which an idle keepalive connection to an upstream server will stay open. Defaults to `60` and only applied if `auth-keepalive` is set to higher than `0`.
* `nginx.ingress.kubernetes.io/auth-method`: `<Method>` to specify the HTTP method to use.
* `nginx.ingress.kubernetes.io/auth-signin`: `<SignIn_URL>` to specify the location of the error page.
* `nginx.ingress.kubernetes.io/auth-signin-redirect-param`: `<SignIn_URL>` to specify the URL parameter in the error page which should contain the original URL for a failed signin request.
* `nginx.ingress.kubernetes.io/auth-response-headers`: `<Response_Header_1, ..., Response_Header_n>` to specify headers to pass to backend once authentication request completes.
* `nginx.ingress.kubernetes.io/auth-proxy-set-headers`: `<ConfigMap>` the name of a ConfigMap that specifies headers to pass to the authentication service.
* `nginx.ingress.kubernetes.io/auth-request-redirect`: `<Request_Redirect_URL>` to specify the X-Auth-Request-Redirect header value.
* `nginx.ingress.kubernetes.io/auth-cache-key`: `<Cache_Key>` this enables caching for auth requests. Specify a lookup key for auth responses, e.g. `$remote_user$http_authorization`. Each server and location has its own keyspace. Hence a cached response is only valid on a per-server and per-location basis.
* `nginx.ingress.kubernetes.io/auth-cache-duration`: `<Cache_duration>` to specify a caching time for auth responses based on their response codes, e.g. `200 202 30m`. See [proxy_cache_valid](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid) for details. You may specify multiple, comma-separated values: `200 202 10m, 401 5m`. Defaults to `200 202 401 5m`.
* `nginx.ingress.kubernetes.io/auth-always-set-cookie`: `<Boolean_Flag>` to set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308.
* `nginx.ingress.kubernetes.io/auth-snippet`: `<Auth_Snippet>` to specify a custom snippet to use with external authentication, e.g.

```yaml
nginx.ingress.kubernetes.io/auth-url: http://foo.com/external-auth
nginx.ingress.kubernetes.io/auth-snippet: |
    proxy_set_header Foo-Header 42;
```

> Note: `nginx.ingress.kubernetes.io/auth-snippet` is an optional annotation. However, it may only be used in conjunction with `nginx.ingress.kubernetes.io/auth-url` and will be ignored if `nginx.ingress.kubernetes.io/auth-url` is not set.

!!! example
    Please check the [external-auth](../../examples/auth/external-auth/README.md) example.

#### Global External Authentication

By default the controller redirects all requests to an existing service that provides authentication if `global-auth-url` is set in the NGINX ConfigMap. If you want to disable this behavior for that ingress, you can use `enable-global-auth: "false"` in the NGINX ConfigMap.

`nginx.ingress.kubernetes.io/enable-global-auth`: indicates if GlobalExternalAuth configuration should be applied or not to this Ingress rule. Default value is set to `"true"`.

!!! note
    For more information please see [global-auth-url](./configmap.md#global-auth-url).
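The external authentication annotations above are commonly combined to delegate sign-in to an external provider. A minimal sketch, assuming a hypothetical oauth2-proxy style service at `auth.example.com` (the URLs and header names are illustrative, not prescribed by the controller):

```yaml
# hypothetical authentication service endpoints
nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$request_uri"
# headers the (hypothetical) auth service sets and the backend should receive
nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-User, X-Auth-Email"
```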
### Rate Limiting

These annotations define limits on connections and transmission rates. These can be used to mitigate [DDoS Attacks](https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus).

* `nginx.ingress.kubernetes.io/limit-connections`: number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.
* `nginx.ingress.kubernetes.io/limit-rps`: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, [limit-req-status-code](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#limit-req-status-code) (**default:** 503) is returned.
* `nginx.ingress.kubernetes.io/limit-rpm`: number of requests accepted from a given IP each minute. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, [limit-req-status-code](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#limit-req-status-code) (**default:** 503) is returned.
* `nginx.ingress.kubernetes.io/limit-burst-multiplier`: multiplier of the limit rate for burst size. The default burst multiplier is 5; this annotation overrides the default multiplier. When clients exceed this limit, [limit-req-status-code](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#limit-req-status-code) (**default:** 503) is returned.
* `nginx.ingress.kubernetes.io/limit-rate-after`: initial number of kilobytes after which the further transmission of a response to a given connection will be rate limited. This feature must be used with [proxy-buffering](#proxy-buffering) enabled.
* `nginx.ingress.kubernetes.io/limit-rate`: number of kilobytes per second allowed to send to a given connection. The zero value disables rate limiting. This feature must be used with [proxy-buffering](#proxy-buffering) enabled.
* `nginx.ingress.kubernetes.io/limit-whitelist`: client IP source ranges to be excluded from rate limiting. The value is a comma separated list of CIDRs.

If you specify multiple annotations in a single Ingress rule, limits are applied in the order `limit-connections`, `limit-rpm`, `limit-rps`.

To configure settings globally for all Ingress rules, the `limit-rate-after` and `limit-rate` values may be set in the [NGINX ConfigMap](./configmap.md#limit-rate). The value set in an Ingress annotation will override the global setting.

The client IP address will be set based on the use of [PROXY protocol](./configmap.md#use-proxy-protocol) or from the `X-Forwarded-For` header value when [use-forwarded-headers](./configmap.md#use-forwarded-headers) is enabled.

### Permanent Redirect

This annotation allows to return a permanent redirect (Return Code 301) instead of sending data to the upstream. For example `nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com` would redirect everything to Google.

### Permanent Redirect Code

This annotation allows you to modify the status code used for permanent redirects. For example `nginx.ingress.kubernetes.io/permanent-redirect-code: '308'` would return your permanent-redirect with a 308.

### Temporal Redirect

This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. For example `nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com` would redirect everything to Google with a Return Code of 302 (Moved Temporarily).

### Temporal Redirect Code

This annotation allows you to modify the status code used for temporal redirects. For example `nginx.ingress.kubernetes.io/temporal-redirect-code: '307'` would return your temporal-redirect with a 307.
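The rate-limiting annotations described above are often combined on a single Ingress. A minimal sketch (the numbers and CIDR are illustrative, not recommendations):

```yaml
# allow 5 requests/second per client IP, burst = 5 * 3 = 15
nginx.ingress.kubernetes.io/limit-rps: "5"
nginx.ingress.kubernetes.io/limit-burst-multiplier: "3"
# hypothetical internal range exempt from the limits
nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/24"
```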
### SSL Passthrough

The annotation `nginx.ingress.kubernetes.io/ssl-passthrough` instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also [TLS/HTTPS](../tls.md#ssl-passthrough) in the User guide.

!!! note
    SSL Passthrough is **disabled by default** and requires starting the controller with the [`--enable-ssl-passthrough`](../cli-arguments.md) flag.

!!! attention
    Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object.

### Service Upstream

By default the Ingress-Nginx Controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.

The `nginx.ingress.kubernetes.io/service-upstream` annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.

This can be desirable for things like zero-downtime deployments. See issue [#257](https://github.com/kubernetes/ingress-nginx/issues/257).

#### Known Issues

If the `service-upstream` annotation is specified the following things should be taken into consideration:

* Sticky Sessions will not work as only round-robin load balancing is supported.
* The `proxy_next_upstream` directive will not have any effect, meaning on error the request will not be dispatched to another upstream.

### Server-side HTTPS enforcement through redirect

By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use `ssl-redirect: "false"` in the NGINX [ConfigMap](./configmap.md#ssl-redirect).

To configure this feature for specific ingress resources, you can use the `nginx.ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource.

When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the `nginx.ingress.kubernetes.io/force-ssl-redirect: "true"` annotation in the particular resource.

To preserve the trailing slash in the URI with `ssl-redirect`, set the `nginx.ingress.kubernetes.io/preserve-trailing-slash: "true"` annotation for that particular resource.

### Redirect from-to www

In some scenarios, it is required to redirect from `www.domain.com` to `domain.com` or vice versa. Which way the redirect is performed depends on the configured `host` value in the Ingress object.

For example, if `.spec.rules.host` is configured with a value like `www.example.com`, then this annotation will redirect from `example.com` to `www.example.com`. If `.spec.rules.host` is configured with a value like `example.com`, so without a `www`, then this annotation will redirect from `www.example.com` to `example.com` instead.

To enable this feature use the annotation `nginx.ingress.kubernetes.io/from-to-www-redirect: "true"`

!!! attention
    If at some point a new Ingress is created with a host equal to one of the options (like `domain.com`), the annotation will be omitted.

!!! attention
    For HTTPS to HTTPS redirects it is mandatory that the SSL Certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.

### Denylist source range

You can specify blocked client IP source ranges through the `nginx.ingress.kubernetes.io/denylist-source-range` annotation. The value is a comma separated list of [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), e.g. `10.0.0.0/24,172.10.0.1`.

To configure this setting globally for all Ingress rules, the `denylist-source-range` value may be set in the [NGINX ConfigMap](./configmap.md#denylist-source-range).

!!! note
    Adding an annotation to an Ingress rule overrides any global restriction.

### Whitelist source range

You can specify allowed client IP source ranges through the `nginx.ingress.kubernetes.io/whitelist-source-range` annotation. The value is a comma separated list of [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), e.g. `10.0.0.0/24,172.10.0.1`.

To configure this setting globally for all Ingress rules, the `whitelist-source-range` value may be set in the [NGINX ConfigMap](./configmap.md#whitelist-source-range).

!!! note
    Adding an annotation to an Ingress rule overrides any global restriction.
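Source-range filtering is frequently combined with the redirect enforcement above, for example for an internal-only Ingress behind an external load balancer. A minimal sketch (the CIDRs are hypothetical):

```yaml
# enforce HTTPS even when TLS is terminated before NGINX (e.g. at a cloud LB)
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
# hypothetical internal ranges allowed to reach this Ingress
nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,172.10.0.1"
```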
### Custom timeouts

Using the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios it is required to have different values. To allow this we provide annotations that allow this customization:

* `nginx.ingress.kubernetes.io/proxy-connect-timeout`
* `nginx.ingress.kubernetes.io/proxy-send-timeout`
* `nginx.ingress.kubernetes.io/proxy-read-timeout`
* `nginx.ingress.kubernetes.io/proxy-next-upstream`
* `nginx.ingress.kubernetes.io/proxy-next-upstream-timeout`
* `nginx.ingress.kubernetes.io/proxy-next-upstream-tries`
* `nginx.ingress.kubernetes.io/proxy-request-buffering`

If you indicate [Backend Protocol](#backend-protocol) as `GRPC` or `GRPCS`, the following grpc values will be set and inherited from proxy timeouts:

* [grpc_connect_timeout=5s](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_connect_timeout), from `nginx.ingress.kubernetes.io/proxy-connect-timeout`
* [grpc_send_timeout=60s](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_send_timeout), from `nginx.ingress.kubernetes.io/proxy-send-timeout`
* [grpc_read_timeout=60s](https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_read_timeout), from `nginx.ingress.kubernetes.io/proxy-read-timeout`

Note: All timeout values are unitless and in seconds, e.g. `nginx.ingress.kubernetes.io/proxy-read-timeout: "120"` sets a valid 120 seconds proxy read timeout.

### Proxy redirect

The annotations `nginx.ingress.kubernetes.io/proxy-redirect-from` and `nginx.ingress.kubernetes.io/proxy-redirect-to` will set the first and second parameters of NGINX's proxy_redirect directive respectively. It is possible to set the text that should be changed in the `Location` and `Refresh` header fields of a [proxied server response](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect).

Setting "off" or "default" in the annotation `nginx.ingress.kubernetes.io/proxy-redirect-from` disables `nginx.ingress.kubernetes.io/proxy-redirect-to`; otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces.

By default the value of each annotation is "off".

### Custom max body size

For NGINX, a 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter [`client_max_body_size`](https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).

To configure this setting globally for all Ingress rules, the `proxy-body-size` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-body-size). To use custom values in an Ingress rule, define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-body-size: 8m
```

### Proxy cookie domain

Sets a text that [should be changed in the domain attribute](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain) of the "Set-Cookie" header fields of a proxied server response.

To configure this setting globally for all Ingress rules, the `proxy-cookie-domain` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-cookie-domain).

### Proxy cookie path

Sets a text that [should be changed in the path attribute](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path) of the "Set-Cookie" header fields of a proxied server response.

To configure this setting globally for all Ingress rules, the `proxy-cookie-path` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-cookie-path).

### Proxy buffering

Enable or disable proxy buffering [proxy_buffering](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering). By default proxy buffering is disabled in the NGINX config.

To configure this setting globally for all Ingress rules, the `proxy-buffering` value may be set in the [NGINX ConfigMap](./configmap.md#proxy-buffering). To use custom values in an Ingress rule, define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-buffering: "on"
```
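The custom timeout annotations above are typically set together for slow upstreams (long-running requests, streaming responses). A minimal sketch; the values are illustrative only:

```yaml
# fail connection attempts quickly, but allow slow responses
nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
```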
### Proxy buffers Number

Sets the number of the buffers in [proxy_buffers](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) used for reading the first part of the response received from the proxied server. By default proxy buffers number is set as 4.

To configure this setting globally, set `proxy-buffers-number` in the [NGINX ConfigMap](./configmap.md#proxy-buffers-number). To use custom values in an Ingress rule, define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
```

### Proxy buffer size

Sets the size of the buffer [proxy_buffer_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) used for reading the first part of the response received from the proxied server. By default proxy buffer size is set as "4k".

To configure this setting globally, set `proxy-buffer-size` in the [NGINX ConfigMap](./configmap.md#proxy-buffer-size). To use custom values in an Ingress rule, define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
```

### Proxy busy buffers size

[Limits the total size of buffers that can be busy](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_busy_buffers_size) sending a response to the client while the response is not yet fully read. By default proxy busy buffers size is set as "8k".

To configure this setting globally, set `proxy-busy-buffers-size` in the [ConfigMap](./configmap.md#proxy-busy-buffers-size). To use custom values in an Ingress rule, define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-busy-buffers-size: "16k"
```

### Proxy max temp file size

When [buffering](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering) of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the [proxy_buffer_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) and [proxy_buffers](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers) directives, a part of the response can be saved to a temporary file. This directive sets the maximum size of the temporary file setting the [proxy_max_temp_file_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_max_temp_file_size). The size of data written to the temporary file at a time is set by the [proxy_temp_file_write_size](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_temp_file_write_size) directive.

The zero value disables buffering of responses to temporary files.

To use custom values in an Ingress rule, define this annotation:

```yaml
nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "1024m"
```

### Proxy HTTP version

Using this annotation sets the [proxy_http_version](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version) that the Nginx reverse proxy will use to communicate with the backend. By default this is set to "1.1".

```yaml
nginx.ingress.kubernetes.io/proxy-http-version: "1.0"
```

### SSL ciphers

Specifies the [enabled ciphers](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers).

Using this annotation will set the `ssl_ciphers` directive at the server level. This configuration is active for all the paths in the host.

```yaml
nginx.ingress.kubernetes.io/ssl-ciphers: "ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP"
```

The following annotation will set the `ssl_prefer_server_ciphers` directive at the server level. This configuration specifies that server ciphers should be preferred over client ciphers when using the SSLv3 and TLS protocols.

```yaml
nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers: "true"
```

### Connection proxy header

Using this annotation will override the default connection header set by NGINX. To use custom values in an Ingress rule, define the annotation:

```yaml
nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
```
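The buffering annotations above are usually tuned together, since `proxy-buffer-size` and `proxy-buffers-number` interact with `proxy-buffering` being enabled. A minimal sketch; the sizes are illustrative, not recommendations:

```yaml
nginx.ingress.kubernetes.io/proxy-buffering: "on"
# 8 buffers of 8k each for reading the upstream response
nginx.ingress.kubernetes.io/proxy-buffers-number: "8"
nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
```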
### Enable Access Log

Access logs are enabled by default, but in some scenarios access logs might be required to be disabled for a given ingress. To do this, use the annotation:

```yaml
nginx.ingress.kubernetes.io/enable-access-log: "false"
```

### Enable Rewrite Log

Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:

```yaml
nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
```

### Enable Opentelemetry

Opentelemetry can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable it or disable it for a specific ingress (e.g. to turn off telemetry of external health check endpoints).

```yaml
nginx.ingress.kubernetes.io/enable-opentelemetry: "true"
```

### Opentelemetry Trust Incoming Span

The option to trust incoming trace spans can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable it or disable it for a specific ingress (e.g. only enable on a private endpoint).

```yaml
nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-spans: "true"
```

### X-Forwarded-Prefix Header

To add the non-standard `X-Forwarded-Prefix` header to the upstream request with a string value, the following annotation can be used:

```yaml
nginx.ingress.kubernetes.io/x-forwarded-prefix: "/path"
```

### ModSecurity

[ModSecurity](https://modsecurity.org/) is an OpenSource Web Application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the [ConfigMap](./configmap.md#enable-modsecurity). Note this will enable ModSecurity for all paths, and each path must be disabled manually.

It can be enabled using the following annotation:

```yaml
nginx.ingress.kubernetes.io/enable-modsecurity: "true"
```

ModSecurity will run in "Detection-Only" mode using the [recommended configuration](https://github.com/SpiderLabs/ModSecurity/blob/v3/master/modsecurity.conf-recommended).

You can enable the [OWASP Core Rule Set](https://www.modsecurity.org/CRS/Documentation/) by setting the following annotation:

```yaml
nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"
```

You can pass transactionIDs from nginx by setting up the following:

```yaml
nginx.ingress.kubernetes.io/modsecurity-transaction-id: "$request_id"
```

You can also add your own set of modsecurity rules via a snippet:

```yaml
nginx.ingress.kubernetes.io/modsecurity-snippet: |
  SecRuleEngine On
  SecDebugLog /tmp/modsec_debug.log
```

Note: If you use both `enable-owasp-core-rules` and `modsecurity-snippet` annotations together, only the `modsecurity-snippet` will take effect. If you wish to include the [OWASP Core Rule Set](https://www.modsecurity.org/CRS/Documentation/) or [recommended configuration](https://github.com/SpiderLabs/ModSecurity/blob/v3/master/modsecurity.conf-recommended) simply use the include statement:

nginx 0.24.1 and below

```yaml
nginx.ingress.kubernetes.io/modsecurity-snippet: |
  Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
  Include /etc/nginx/modsecurity/modsecurity.conf
```

nginx 0.25.0 and above

```yaml
nginx.ingress.kubernetes.io/modsecurity-snippet: |
  Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
```

### Backend Protocol

Using `backend-protocol` annotations it is possible to indicate how NGINX should communicate with the backend service. (Replaces `secure-backends` in older versions.) Valid Values: HTTP, HTTPS, AUTO_HTTP, GRPC, GRPCS and FCGI.

By default NGINX uses `HTTP`.

Example:

```yaml
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
```
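For a gRPC backend, `backend-protocol` is commonly paired with a longer read timeout, since gRPC streams are frequently long-lived. A minimal sketch; the timeout value is illustrative:

```yaml
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
# keep long-lived gRPC streams open (value in seconds, illustrative)
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
```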
Regex      attention     When using this annotation with the NGINX annotation  nginx ingress kubernetes io affinity  of type  cookie     nginx ingress kubernetes io session cookie path  must be also set  Session cookie paths do not support regex   Using the  nginx ingress kubernetes io use regex  annotation will indicate whether or not the paths defined on an Ingress use regular expressions   The default value is  false    The following will indicate that regular expression paths are being used     yaml nginx ingress kubernetes io use regex   true       The following will indicate that regular expression paths are   not   being used     yaml nginx ingress kubernetes io use regex   false       When this annotation is set to  true   the case insensitive regular expression  location modifier  https   nginx org en docs http ngx http core module html location  will be enforced on ALL paths for a given host regardless of what Ingress they are defined on   Additionally  if the   rewrite target  annotation   rewrite  is used on any Ingress for a given host  then the case insensitive regular expression  location modifier  https   nginx org en docs http ngx http core module html location  will be enforced on ALL paths for a given host regardless of what Ingress they are defined on   Please read about  ingress path matching     ingress path matching md  before using this modifier       Satisfy  By default  a request would need to satisfy all authentication requirements in order to be allowed  By using this annotation  requests that satisfy either any or all authentication requirements are allowed  based on the configuration value      yaml nginx ingress kubernetes io satisfy   any           Mirror  Enables a request to be mirrored to a mirror backend  Responses by mirror backends are ignored  This feature is useful  to see how requests will react in  test  backends   The mirror backend can be set by applying      yaml nginx ingress kubernetes io mirror target  https   test 
env com request uri      By default the request body is sent to the mirror backend  but can be turned off by applying      yaml nginx ingress kubernetes io mirror request body   off       Also by default header Host for mirrored requests will be set the same as a host part of uri in the  mirror target  annotation  You can override it by  mirror host  annotation      yaml nginx ingress kubernetes io mirror target  https   1 2 3 4 request uri nginx ingress kubernetes io mirror host   test env com         Note    The mirror directive will be applied to all paths within the ingress resource   The request sent to the mirror is linked to the original request  If you have a slow mirror backend  then the original request will throttle   For more information on the mirror module see  ngx http mirror module  https   nginx org en docs http ngx http mirror module html        Stream snippet  Using the annotation  nginx ingress kubernetes io stream snippet  it is possible to add custom stream configuration      yaml apiVersion  networking k8s io v1 kind  Ingress metadata    annotations      nginx ingress kubernetes io stream snippet          server           listen 8000          proxy pass 127 0 0 1 80             "}
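For illustration, the mirror annotations above can be combined on a single Ingress resource. This is a minimal sketch: the host names, the `main-with-mirror` name, and the `main-svc` Service are hypothetical placeholders, not part of the original docs.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-with-mirror
  annotations:
    # every request is also sent, fire-and-forget, to the mirror backend;
    # responses from the mirror are ignored
    nginx.ingress.kubernetes.io/mirror-target: https://test.env.com$request_uri
    # skip sending the (possibly large) request body to the mirror
    nginx.ingress.kubernetes.io/mirror-request-body: "off"
spec:
  ingressClassName: nginx
  rules:
    - host: prod.env.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: main-svc
                port:
                  number: 80
```

Because the mirror directive applies to all paths in the resource, keeping mirrored traffic in its own Ingress (as here) makes it easier to reason about which requests are duplicated.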
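As a sketch of the `use-regex` annotation in practice, the following Ingress matches `/api/v1`, `/api/v2`, etc. with one regex path; the host and the `api-svc` Service are hypothetical. Note that regex paths should use `pathType: ImplementationSpecific`, and that enabling this switches ALL paths on the host to case insensitive regex matching.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-regex
  annotations:
    # treat spec.rules[].http.paths[].path as a regular expression
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: myhost.example.com
      http:
        paths:
          - path: /api/v[0-9]+
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-svc
                port:
                  number: 8080
```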
{"questions":"ingress nginx ModSecurity is an open source cross platform web application firewall WAF engine for Apache IIS and Nginx that is developed by Trustwave s SpiderLabs It has a robust event based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring logging and real time analysis The connector is the connection point between NGINX and libmodsecurity ModSecurity v3 To enable the ModSecurity feature we need to specify in the configuration configmap The default ModSecurity configuration file is located in This is the only file located in this directory and contains the default recommended configuration Using a volume we can replace this file with the desired configuration ModSecurity Web Application Firewall Note the default configuration use detection only because that minimizes the chances of post installation disruption","answers":"# ModSecurity Web Application Firewall\n\nModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - [https:\/\/www.modsecurity.org](https:\/\/www.modsecurity.org)\n\nThe [ModSecurity-nginx](https:\/\/github.com\/SpiderLabs\/ModSecurity-nginx) connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).\n\nThe default ModSecurity configuration file is located in `\/etc\/nginx\/modsecurity\/modsecurity.conf`. This is the only file located in this directory and contains the default recommended configuration. 
Using a volume we can replace this file with the desired configuration.\nTo enable the ModSecurity feature we need to specify `enable-modsecurity: \"true\"` in the configuration configmap.\n\n>__Note:__ the default configuration use detection only, because that minimizes the chances of post-installation disruption.\nDue to the value of the setting [SecAuditLogType=Concurrent](https:\/\/github.com\/SpiderLabs\/ModSecurity\/wiki\/Reference-Manual-(v2.x)#secauditlogtype) the ModSecurity log is stored in multiple files inside the directory `\/var\/log\/audit`.\nThe default `Serial` value in SecAuditLogType can impact performance.\n\nThe OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts.\nThe directory `\/etc\/nginx\/owasp-modsecurity-crs` contains the [OWASP ModSecurity Core Rule Set repository](https:\/\/github.com\/coreruleset\/coreruleset).\nUsing `enable-owasp-modsecurity-crs: \"true\"` we enable the use of the rules.\n\n## Supported annotations\n\nFor more info on supported annotations, please see [annotations\/#modsecurity](https:\/\/kubernetes.github.io\/ingress-nginx\/user-guide\/nginx-configuration\/annotations\/#modsecurity)\n\n## Example of using ModSecurity with plugins via the helm chart\n\nSuppose you have a ConfigMap that contains the contents of the [nextcloud-rule-exclusions plugin](https:\/\/github.com\/coreruleset\/nextcloud-rule-exclusions-plugin\/blob\/main\/plugins\/nextcloud-rule-exclusions-before.conf) like this:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: modsecurity-plugins\ndata:\n  empty-after.conf: |\n    # no data\n  empty-before.conf: |\n    # no data\n  empty-config.conf: |\n    # no data\n  nextcloud-rule-exclusions-before.conf:\n    # this is just a snippet\n    # find the full file at 
https:\/\/github.com\/coreruleset\/nextcloud-rule-exclusions-plugin\n    #\n    # [ File Manager ]\n    # The web interface uploads files, and interacts with the user.\n    SecRule REQUEST_FILENAME \"@contains \/remote.php\/webdav\" \\\n        \"id:9508102,\\\n        phase:1,\\\n        pass,\\\n        t:none,\\\n        nolog,\\\n        ver:'nextcloud-rule-exclusions-plugin\/1.2.0',\\\n        ctl:ruleRemoveById=920420,\\\n        ctl:ruleRemoveById=920440,\\\n        ctl:ruleRemoveById=941000-942999,\\\n        ctl:ruleRemoveById=951000-951999,\\\n        ctl:ruleRemoveById=953100-953130,\\\n        ctl:ruleRemoveByTag=attack-injection-php\"\n```\n\nIf you're using the helm chart, you can pass in the following parameters in your `values.yaml`:\n\n```yaml\ncontroller:\n  config:\n    # Enables Modsecurity\n    enable-modsecurity: \"true\"\n\n    # Update ModSecurity config and rules\n    modsecurity-snippet: |\n      # this enables the mod security nextcloud plugin\n      Include \/etc\/nginx\/owasp-modsecurity-crs\/plugins\/nextcloud-rule-exclusions-before.conf\n\n      # this enables the default OWASP Core Rule Set\n      Include \/etc\/nginx\/owasp-modsecurity-crs\/nginx-modsecurity.conf\n\n      # Enable prevention mode. 
Options: DetectionOnly,On,Off (default is DetectionOnly)\n      SecRuleEngine On\n\n      # Enable scanning of the request body\n      SecRequestBodyAccess On\n\n      # Enable XML and JSON parsing\n      SecRule REQUEST_HEADERS:Content-Type \"(?:text|application(?:\/soap\\+|\/)|application\/xml)\/\" \\\n        \"id:200000,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=XML\"\n\n      SecRule REQUEST_HEADERS:Content-Type \"application\/json\" \\\n        \"id:200001,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=JSON\"\n\n      # Reject if larger (we could also let it pass with ProcessPartial)\n      SecRequestBodyLimitAction Reject\n\n      # Send ModSecurity audit logs to the stdout (only for rejected requests)\n      SecAuditLog \/dev\/stdout\n\n      # format the logs in JSON\n      SecAuditLogFormat JSON\n\n      # could be On\/Off\/RelevantOnly\n      SecAuditEngine RelevantOnly\n\n  # Add a volume for the plugins directory\n  extraVolumes:\n    - name: plugins\n      configMap:\n        name: modsecurity-plugins\n\n  # override the \/etc\/nginx\/enable-owasp-modsecurity-crs\/plugins with your ConfigMap\n  extraVolumeMounts:\n    - name: plugins\n      mountPath: \/etc\/nginx\/owasp-modsecurity-crs\/plugins\n```","site":"ingress nginx"}
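As a small illustration of the per-ingress annotations referenced under "Supported annotations" above, once `enable-modsecurity: "true"` is set in the controller ConfigMap, ModSecurity and the OWASP CRS can be switched on for a single Ingress. The host and the `app-svc` Service below are hypothetical placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: waf-protected
  annotations:
    # run libmodsecurity for the locations generated from this ingress
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
    # load the OWASP Core Rule Set on top of the recommended configuration
    nginx.ingress.kubernetes.io/enable-owasp-core-rules: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-svc
                port:
                  number: 80
```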
{"questions":"ingress nginx Using the third party module the Ingress Nginx Controller can configure NGINX to enable instrumentation and monitoring purposes OpenTelemetry practical demonstration of how OpenTelemetry can be utilized in Ingress NGINX for observability By default this feature is disabled Check out this demo showcasing OpenTelemetry in Ingress NGINX The video provides an overview and Enables requests served by NGINX for distributed telemetry via The OpenTelemetry Project","answers":"# OpenTelemetry\n\nEnables requests served by NGINX for distributed telemetry via The OpenTelemetry Project.\n\nUsing the third party module [opentelemetry-cpp-contrib\/nginx](https:\/\/github.com\/open-telemetry\/opentelemetry-cpp-contrib\/tree\/main\/instrumentation\/nginx) the Ingress-Nginx Controller can configure NGINX to enable [OpenTelemetry](http:\/\/opentelemetry.io) instrumentation.\nBy default this feature is disabled.\n\nCheck out this demo showcasing OpenTelemetry in Ingress NGINX. The video provides an overview and\npractical demonstration of how OpenTelemetry can be utilized in Ingress NGINX for observability\nand monitoring purposes.\n\n<p align=\"center\">\n  <a href=\"https:\/\/www.youtube.com\/watch?v=jpBfgJpTcfw&t=129\" target=\"_blank\" rel=\"noopener noreferrer\">\n    <img src=\"https:\/\/img.youtube.com\/vi\/jpBfgJpTcfw\/0.jpg\" alt=\"Video Thumbnail\" \/>\n  <\/a>\n<\/p>\n\n<p align=\"center\">Demo: OpenTelemetry in Ingress NGINX.<\/p>\n\n## Usage\n\nTo enable the instrumentation we must enable OpenTelemetry in the configuration ConfigMap:\n```yaml\ndata:\n  enable-opentelemetry: \"true\"\n```\n\nTo enable or disable instrumentation for a single Ingress, use\nthe `enable-opentelemetry` annotation:\n```yaml\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io\/enable-opentelemetry: \"true\"\n```\n\nWe must also set the host to use when uploading traces:\n\n```yaml\notlp-collector-host: \"otel-coll-collector.otel.svc\"\n```\nNOTE: 
While the option is called `otlp-collector-host`, you will need to point this to any backend that receives otlp-grpc.\n\nNext you will need to deploy a distributed telemetry system which uses OpenTelemetry.\n[opentelemetry-collector](https:\/\/github.com\/open-telemetry\/opentelemetry-collector), [Jaeger](https:\/\/www.jaegertracing.io\/)\n[Tempo](https:\/\/github.com\/grafana\/tempo), and [zipkin](https:\/\/zipkin.io\/)\nhave been tested.\n\nOther optional configuration options:\n```yaml\n# specifies the name to use for the server span\nopentelemetry-operation-name\n\n# sets whether or not to trust incoming telemetry spans\nopentelemetry-trust-incoming-span\n\n# specifies the port to use when uploading traces, Default: 4317\notlp-collector-port\n\n# specifies the service name to use for any traces created, Default: nginx\notel-service-name\n\n# The maximum queue size. After the size is reached data are dropped.\notel-max-queuesize\n\n# The delay interval in milliseconds between two consecutive exports.\notel-schedule-delay-millis\n\n# How long the export can run before it is cancelled.\notel-schedule-delay-millis\n\n# The maximum batch size of every export. 
It must be smaller or equal to maxQueueSize.\notel-max-export-batch-size\n\n# specifies sample rate for any traces created, Default: 0.01\notel-sampler-ratio\n\n# specifies the sampler to be used when sampling traces.\n# The available samplers are: AlwaysOn,  AlwaysOff, TraceIdRatioBased, Default: AlwaysOff\notel-sampler\n\n# Uses sampler implementation which by default will take a sample if parent Activity is sampled, Default: false\notel-sampler-parent-based\n```\n\nNote that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following:\n```yaml\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io\/opentelemetry-trust-incoming-span: \"true\"\n```\n\n## Examples\n\nThe following examples show how to deploy and test different distributed telemetry systems. These example can be performed using Docker Desktop.\n\nIn the [esigo\/nginx-example](https:\/\/github.com\/esigo\/nginx-example)\nGitHub repository is an example of a simple hello service:\n\n```mermaid\ngraph TB\n    subgraph Browser\n    start[\"http:\/\/esigo.dev\/hello\/nginx\"]\n    end\n\n    subgraph app\n        sa[service-a]\n        sb[service-b]\n        sa --> |name: nginx| sb\n        sb --> |hello nginx!| sa\n    end\n\n    subgraph otel\n        otc[\"Otel Collector\"]\n    end\n\n    subgraph observability\n        tempo[\"Tempo\"]\n        grafana[\"Grafana\"]\n        backend[\"Jaeger\"]\n        zipkin[\"Zipkin\"]\n    end\n\n    subgraph ingress-nginx\n        ngx[nginx]\n    end\n\n    subgraph ngx[nginx]\n        ng[nginx]\n        om[OpenTelemetry module]\n    end\n\n    subgraph Node\n        app\n        otel\n        observability\n        ingress-nginx\n        om --> |otlp-gRPC| otc --> |jaeger| backend\n        otc --> |zipkin| zipkin\n        otc --> |otlp-gRPC| tempo --> grafana\n        sa --> |otlp-gRPC| otc\n        sb --> |otlp-gRPC| otc\n        start --> ng --> sa\n    end\n```\n\nTo install 
the example and collectors run:\n\n1. Enable OpenTelemetry and set the otlp-collector-host:\n\n    ```yaml\n    $ echo '\n      apiVersion: v1\n      kind: ConfigMap\n      data:\n        enable-opentelemetry: \"true\"\n        opentelemetry-config: \"\/etc\/nginx\/opentelemetry.toml\"\n        opentelemetry-operation-name: \"HTTP $request_method $service_name $uri\"\n        opentelemetry-trust-incoming-span: \"true\"\n        otlp-collector-host: \"otel-coll-collector.otel.svc\"\n        otlp-collector-port: \"4317\"\n        otel-max-queuesize: \"2048\"\n        otel-schedule-delay-millis: \"5000\"\n        otel-max-export-batch-size: \"512\"\n        otel-service-name: \"nginx-proxy\" # Opentelemetry resource name\n        otel-sampler: \"AlwaysOn\" # Also: AlwaysOff, TraceIdRatioBased\n        otel-sampler-ratio: \"1.0\"\n        otel-sampler-parent-based: \"false\"\n      metadata:\n        name: ingress-nginx-controller\n        namespace: ingress-nginx\n      ' | kubectl replace -f -\n    ```\n\n2. 
Deploy otel-collector, grafana and Jaeger backend:\n\n    ```bash\n    # add helm charts needed for grafana and OpenTelemetry collector\n    helm repo add open-telemetry https:\/\/open-telemetry.github.io\/opentelemetry-helm-charts\n    helm repo add grafana https:\/\/grafana.github.io\/helm-charts\n    helm repo update\n    # deploy cert-manager needed for OpenTelemetry collector operator\n    kubectl apply -f https:\/\/github.com\/cert-manager\/cert-manager\/releases\/download\/v1.15.3\/cert-manager.yaml\n    # create observability namespace\n    kubectl apply -f https:\/\/raw.githubusercontent.com\/esigo\/nginx-example\/main\/observability\/namespace.yaml\n    # install OpenTelemetry collector operator\n    helm upgrade --install otel-collector-operator -n otel --create-namespace open-telemetry\/opentelemetry-operator\n    # deploy OpenTelemetry collector\n    kubectl apply -f https:\/\/raw.githubusercontent.com\/esigo\/nginx-example\/main\/observability\/collector.yaml\n    # deploy Jaeger all-in-one\n    kubectl apply -f https:\/\/github.com\/jaegertracing\/jaeger-operator\/releases\/download\/v1.37.0\/jaeger-operator.yaml -n observability\n    kubectl apply -f https:\/\/raw.githubusercontent.com\/esigo\/nginx-example\/main\/observability\/jaeger.yaml -n observability\n    # deploy zipkin\n    kubectl apply -f https:\/\/raw.githubusercontent.com\/esigo\/nginx-example\/main\/observability\/zipkin.yaml -n observability\n    # deploy tempo and grafana\n\thelm upgrade --install tempo grafana\/tempo --create-namespace -n observability\n\thelm upgrade -f https:\/\/raw.githubusercontent.com\/esigo\/nginx-example\/main\/observability\/grafana\/grafana-values.yaml --install grafana grafana\/grafana --create-namespace -n observability\n    ```\n\n3. Build and deploy demo app:\n\n    ```bash\n    # build images\n    make images\n\n    # deploy demo app:\n    make deploy-app\n    ```\n\n4. 
Make a few requests to the Service:\n\n    ```bash\n    kubectl port-forward --namespace=ingress-nginx service\/ingress-nginx-controller 8090:80\n    curl http:\/\/esigo.dev:8090\/hello\/nginx\n\n\n    StatusCode        : 200\n    StatusDescription : OK\n    Content           : {\"v\":\"hello nginx!\"}\n\n    RawContent        : HTTP\/1.1 200 OK\n                        Connection: keep-alive\n                        Content-Length: 21\n                        Content-Type: text\/plain; charset=utf-8\n                        Date: Mon, 10 Oct 2022 17:43:33 GMT\n\n                        {\"v\":\"hello nginx!\"}\n\n    Forms             : {}\n    Headers           : {[Connection, keep-alive], [Content-Length, 21], [Content-Type, text\/plain; charset=utf-8], [Date,\n                        Mon, 10 Oct 2022 17:43:33 GMT]}\n    Images            : {}\n    InputFields       : {}\n    Links             : {}\n    ParsedHtml        : System.__ComObject\n    RawContentLength  : 21\n    ```\n\n5. View the Grafana UI:\n\n    ```bash\n    kubectl port-forward --namespace=observability service\/grafana 3000:80\n    ```\n    In the Grafana interface we can see the details:\n    ![grafana screenshot](..\/..\/images\/otel-grafana-demo.png \"grafana screenshot\")\n\n6. View the Jaeger UI:\n\n    ```bash\n    kubectl port-forward --namespace=observability service\/jaeger-all-in-one-query 16686:16686\n    ```\n    In the Jaeger interface we can see the details:\n    ![Jaeger screenshot](..\/..\/images\/otel-jaeger-demo.png \"Jaeger screenshot\")\n\n7. 
View the Zipkin UI:\n\n    ```bash\n    kubectl port-forward --namespace=observability service\/zipkin 9411:9411\n    ```\n    In the Zipkin interface we can see the details:\n    ![zipkin screenshot](..\/..\/images\/otel-zipkin-demo.png \"zipkin screenshot\")\n\n## Migration from OpenTracing, Jaeger, Zipkin and Datadog\n\nIf you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry,\nyou may need to update various annotations and configurations. Here are the mappings\nfor common annotations and configurations:\n\n### Annotations\n\n| Legacy                                                        | OpenTelemetry                                                   |\n|---------------------------------------------------------------|-----------------------------------------------------------------|\n| `nginx.ingress.kubernetes.io\/enable-opentracing`              | `nginx.ingress.kubernetes.io\/enable-opentelemetry`              |\n| `nginx.ingress.kubernetes.io\/opentracing-trust-incoming-span` | `nginx.ingress.kubernetes.io\/opentelemetry-trust-incoming-span` |\n\n### Configs\n\n| Legacy                                | OpenTelemetry                                |\n|---------------------------------------|----------------------------------------------|\n| `opentracing-operation-name`          | `opentelemetry-operation-name`               |\n| `opentracing-location-operation-name` | `opentelemetry-operation-name`               |\n| `opentracing-trust-incoming-span`     | `opentelemetry-trust-incoming-span`          |\n| `zipkin-collector-port`               | `otlp-collector-port`                        |\n| `zipkin-service-name`                 | `otel-service-name`                          |\n| `zipkin-sample-rate`                  | `otel-sampler-ratio`                         |\n| `jaeger-collector-port`               | `otlp-collector-port`                        |\n| `jaeger-endpoint`                     | `otlp-collector-port`, 
`otlp-collector-host` |\n| `jaeger-service-name`                 | `otel-service-name`                          |\n| `jaeger-propagation-format`           | `N\/A`                                        |\n| `jaeger-sampler-type`                 | `otel-sampler`                               |\n| `jaeger-sampler-param`                | `otel-sampler`                               |\n| `jaeger-sampler-host`                 | `N\/A`                                        |\n| `jaeger-sampler-port`                 | `N\/A`                                        |\n| `jaeger-trace-context-header-name`    | `N\/A`                                        |\n| `jaeger-debug-header`                 | `N\/A`                                        |\n| `jaeger-baggage-header`               | `N\/A`                                        |\n| `jaeger-tracer-baggage-header-prefix` | `N\/A`                                        |\n| `datadog-collector-port`              | `otlp-collector-port`                        |\n| `datadog-service-name`                | `otel-service-name`                          |\n| `datadog-environment`                 | `N\/A`                                        |\n| `datadog-operation-name-override`     | `N\/A`                                        |\n| `datadog-priority-sampling`           | `otel-sampler`                               |\n| `datadog-sample-rate`                 | `otel-sampler-ratio`                         |","site":"ingress nginx","answers_cleaned":"  OpenTelemetry  Enables requests served by NGINX for distributed telemetry via The OpenTelemetry Project   Using the third party module  opentelemetry cpp contrib nginx  https   github com open telemetry opentelemetry cpp contrib tree main instrumentation nginx  the Ingress Nginx Controller can configure NGINX to enable  OpenTelemetry  http   opentelemetry io  instrumentation  By default this feature is disabled   Check out this demo showcasing OpenTelemetry in Ingress NGINX  
The video provides an overview and practical demonstration of how OpenTelemetry can be utilized in Ingress NGINX for observability and monitoring purposes    p align  center      a href  https   www youtube com watch v jpBfgJpTcfw t 129  target   blank  rel  noopener noreferrer        img src  https   img youtube com vi jpBfgJpTcfw 0 jpg  alt  Video Thumbnail         a    p    p align  center  Demo  OpenTelemetry in Ingress NGINX   p      Usage  To enable the instrumentation we must enable OpenTelemetry in the configuration ConfigMap     yaml data    enable opentelemetry   true       To enable or disable instrumentation for a single Ingress  use the  enable opentelemetry  annotation     yaml kind  Ingress metadata    annotations      nginx ingress kubernetes io enable opentelemetry   true       We must also set the host to use when uploading traces      yaml otlp collector host   otel coll collector otel svc      NOTE  While the option is called  otlp collector host   you will need to point this to any backend that receives otlp grpc   Next you will need to deploy a distributed telemetry system which uses OpenTelemetry   opentelemetry collector  https   github com open telemetry opentelemetry collector    Jaeger  https   www jaegertracing io    Tempo  https   github com grafana tempo   and  zipkin  https   zipkin io   have been tested   Other optional configuration options     yaml   specifies the name to use for the server span opentelemetry operation name    sets whether or not to trust incoming telemetry spans opentelemetry trust incoming span    specifies the port to use when uploading traces  Default  4317 otlp collector port    specifies the service name to use for any traces created  Default  nginx otel service name    The maximum queue size  After the size is reached data are dropped  otel max queuesize    The delay interval in milliseconds between two consecutive exports  otel schedule delay millis    How long the export can run before it is cancelled  
otel schedule delay millis    The maximum batch size of every export  It must be smaller or equal to maxQueueSize  otel max export batch size    specifies sample rate for any traces created  Default  0 01 otel sampler ratio    specifies the sampler to be used when sampling traces    The available samplers are  AlwaysOn   AlwaysOff  TraceIdRatioBased  Default  AlwaysOff otel sampler    Uses sampler implementation which by default will take a sample if parent Activity is sampled  Default  false otel sampler parent based      Note that you can also set whether to trust incoming spans  global default is true  per location using annotations like the following     yaml kind  Ingress metadata    annotations      nginx ingress kubernetes io opentelemetry trust incoming span   true          Examples  The following examples show how to deploy and test different distributed telemetry systems  These example can be performed using Docker Desktop   In the  esigo nginx example  https   github com esigo nginx example  GitHub repository is an example of a simple hello service      mermaid graph TB     subgraph Browser     start  http   esigo dev hello nginx       end      subgraph app         sa service a          sb service b          sa      name  nginx  sb         sb      hello nginx   sa     end      subgraph otel         otc  Otel Collector       end      subgraph observability         tempo  Tempo           grafana  Grafana           backend  Jaeger           zipkin  Zipkin       end      subgraph ingress nginx         ngx nginx      end      subgraph ngx nginx          ng nginx          om OpenTelemetry module      end      subgraph Node         app         otel         observability         ingress nginx         om      otlp gRPC  otc      jaeger  backend         otc      zipkin  zipkin         otc      otlp gRPC  tempo     grafana         sa      otlp gRPC  otc         sb      otlp gRPC  otc         start     ng     sa     end      To install the example and collectors run 
1. Enable OpenTelemetry and set the otlp-collector host:

    ```yaml
    echo '
    apiVersion: v1
    kind: ConfigMap
    data:
      enable-opentelemetry: "true"
      opentelemetry-config: "/etc/nginx/opentelemetry.toml"
      opentelemetry-operation-name: "HTTP $request_method $service_name $uri"
      opentelemetry-trust-incoming-span: "true"
      otlp-collector-host: "otel-coll-collector.otel.svc"
      otlp-collector-port: "4317"
      otel-max-queuesize: "2048"
      otel-schedule-delay-millis: "5000"
      otel-max-export-batch-size: "512"
      otel-service-name: "nginx-proxy" # Opentelemetry resource name
      otel-sampler: "AlwaysOn" # Also: AlwaysOff, TraceIdRatioBased
      otel-sampler-ratio: "1.0"
      otel-sampler-parent-based: "false"
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    ' | kubectl replace -f -
    ```

2. Deploy otel-collector, grafana and Jaeger backend:

    ```bash
    # add helm charts needed for grafana and OpenTelemetry collector
    helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update
    # deploy cert-manager needed for OpenTelemetry collector operator
    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.3/cert-manager.yaml
    # create observability namespace
    kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/namespace.yaml
    # install OpenTelemetry collector operator
    helm upgrade --install otel-collector-operator -n otel --create-namespace open-telemetry/opentelemetry-operator
    # deploy OpenTelemetry collector
    kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/collector.yaml
    # deploy Jaeger all-in-one
    kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml -n observability
    kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/jaeger.yaml -n observability
    # deploy zipkin
    kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/zipkin.yaml -n observability
    # deploy tempo and grafana
    helm upgrade --install tempo grafana/tempo --create-namespace -n observability
    helm upgrade -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/grafana/grafana-values.yaml --install grafana grafana/grafana --create-namespace -n observability
    ```

3. Build and deploy demo app:

    ```bash
    # build images
    make images

    # deploy demo app
    make deploy-app
    ```

4. Make a few requests to the Service:

    ```bash
    kubectl port-forward --namespace ingress-nginx service/ingress-nginx-controller 8090:80
    curl http://esigo.dev:8090/hello/nginx

    StatusCode        : 200
    StatusDescription : OK
    Content           : {"v":"hello nginx!"}
    RawContent        : HTTP/1.1 200 OK
                        Connection: keep-alive
                        Content-Length: 21
                        Content-Type: text/plain; charset=utf-8
                        Date: Mon, 10 Oct 2022 17:43:33 GMT

                        {"v":"hello nginx!"}
    Forms             : {}
    Headers           : {[Connection, keep-alive], [Content-Length, 21],
                        [Content-Type, text/plain; charset=utf-8],
                        [Date, Mon, 10 Oct 2022 17:43:33 GMT]}
    Images            : {}
    InputFields       : {}
    Links             : {}
    ParsedHtml        : System.__ComObject
    RawContentLength  : 21
    ```

5. View the Grafana UI:

    ```bash
    kubectl port-forward --namespace observability service/grafana 3000:80
    ```

    In the Grafana interface we can see the details:
    ![grafana screenshot](images/otel-grafana-demo.png "grafana screenshot")

6. View the Jaeger UI:
    ```bash
    kubectl port-forward --namespace observability service/jaeger-all-in-one-query 16686:16686
    ```

    In the Jaeger interface we can see the details:
    ![Jaeger screenshot](images/otel-jaeger-demo.png "Jaeger screenshot")

7. View the Zipkin UI:

    ```bash
    kubectl port-forward --namespace observability service/zipkin 9411:9411
    ```

    In the Zipkin interface we can see the details:
    ![zipkin screenshot](images/otel-zipkin-demo.png "zipkin screenshot")

## Migration from OpenTracing, Jaeger, Zipkin and Datadog

If you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry, you may need to update various annotations and configurations. Here are the mappings for common annotations and configurations:

### Annotations

| Legacy | OpenTelemetry |
|--------|---------------|
| `nginx.ingress.kubernetes.io/enable-opentracing` | `nginx.ingress.kubernetes.io/enable-opentelemetry` |
| `nginx.ingress.kubernetes.io/opentracing-trust-incoming-span` | `nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span` |

### Configs

| Legacy | OpenTelemetry |
|--------|---------------|
| `opentracing-operation-name` | `opentelemetry-operation-name` |
| `opentracing-location-operation-name` | `opentelemetry-operation-name` |
| `opentracing-trust-incoming-span` | `opentelemetry-trust-incoming-span` |
| `zipkin-collector-port` | `otlp-collector-port` |
| `zipkin-service-name` | `otel-service-name` |
| `zipkin-sample-rate` | `otel-sampler-ratio` |
| `jaeger-collector-port` | `otlp-collector-port` |
| `jaeger-endpoint` | `otlp-collector-port`, `otlp-collector-host` |
| `jaeger-service-name` | `otel-service-name` |
| `jaeger-propagation-format` | N/A |
| `jaeger-sampler-type` | `otel-sampler` |
| `jaeger-sampler-param` | `otel-sampler` |
| `jaeger-sampler-host` | N/A |
| `jaeger-sampler-port` | N/A |
| `jaeger-trace-context-header-name` | N/A |
| `jaeger-debug-header` | N/A |
| `jaeger-baggage-header` | N/A |
| `jaeger-tracer-baggage-header-prefix` | N/A |
| `datadog-collector-port` | `otlp-collector-port` |
| `datadog-service-name` | `otel-service-name` |
| `datadog-environment` | N/A |
| `datadog-operation-name-override` | N/A |
| `datadog-priority-sampling` | `otel-sampler` |
| `datadog-sample-rate` | `otel-sampler-ratio` |
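As a quick sanity check when migrating, the Configs table above can be expressed as a lookup so a legacy tracing ConfigMap can be translated mechanically. The key names come straight from the table; `migrate_config()` itself is only an illustrative sketch, not an ingress-nginx tool:

```python
# Sketch: legacy tracing ConfigMap keys -> OpenTelemetry equivalents,
# following the migration table above. Keys listed as N/A in the table
# have no counterpart and are collected for manual review instead.
LEGACY_TO_OTEL = {
    "opentracing-operation-name": "opentelemetry-operation-name",
    "opentracing-location-operation-name": "opentelemetry-operation-name",
    "opentracing-trust-incoming-span": "opentelemetry-trust-incoming-span",
    "zipkin-collector-port": "otlp-collector-port",
    "zipkin-service-name": "otel-service-name",
    "zipkin-sample-rate": "otel-sampler-ratio",
    "jaeger-collector-port": "otlp-collector-port",
    "jaeger-service-name": "otel-service-name",
    "jaeger-sampler-type": "otel-sampler",
    "jaeger-sampler-param": "otel-sampler",
    "datadog-collector-port": "otlp-collector-port",
    "datadog-service-name": "otel-service-name",
    "datadog-priority-sampling": "otel-sampler",
    "datadog-sample-rate": "otel-sampler-ratio",
}
LEGACY_PREFIXES = ("opentracing-", "zipkin-", "jaeger-", "datadog-")

def migrate_config(data):
    """Rename legacy keys; collect N/A keys that need manual review."""
    migrated, needs_review = {}, []
    for key, value in data.items():
        if key in LEGACY_TO_OTEL:
            migrated[LEGACY_TO_OTEL[key]] = value
        elif key.startswith(LEGACY_PREFIXES):
            needs_review.append(key)   # N/A in the table above
        else:
            migrated[key] = value      # unrelated keys pass through
    return migrated, needs_review
```

For example, `migrate_config({"zipkin-service-name": "shop"})` yields `({"otel-service-name": "shop"}, [])`, while a key such as `jaeger-debug-header` lands in the review list because the table maps it to N/A.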
# Hardening Guide

Do not use in multi-tenant Kubernetes production installations. This project assumes that users that can create Ingress objects are administrators of the cluster.

## Overview
There are several ways to do hardening and securing of nginx. In this documentation two guides are used; the guides overlap in some points:

- [nginx CIS Benchmark](https://www.cisecurity.org/benchmark/nginx/)
- [cipherlist.eu](https://cipherlist.eu/) (one of many forks of the now dead project cipherli.st)

This guide describes which of the different configurations described in those guides is already implemented as default in the nginx implementation of kubernetes ingress, what needs to be configured, what is obsolete because nginx runs as a container (the CIS benchmark relates to a non-containerized installation), and what is difficult or not possible.

Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may leave specific clients unable to reach your site, or have similar consequences.

This guide refers to chapters in the CIS Benchmark.
For full explanation you should refer to the benchmark document itself.

## Configuration Guide

| Chapter in CIS benchmark | Status | Default | Action to do if not default|
|:-------------------------|:-------|:--------|:---------------------------|
| __1 Initial Setup__ ||| |
| ||| |
| __1.1 Installation__||| |
| 1.1.1 Ensure NGINX is installed (Scored)| OK | done through helm charts / following documentation to deploy nginx ingress | |
| 1.1.2 Ensure NGINX is installed from source (Not Scored)| OK | done through helm charts / following documentation to deploy nginx ingress | |
| ||| |
| __1.2 Configure Software Updates__||| |
| 1.2.1 Ensure package manager repositories are properly configured (Not Scored) | OK | done via helm, nginx version could be overwritten, however compatibility is not ensured then| |
| 1.2.2 Ensure the latest software package is installed (Not Scored)| ACTION NEEDED | done via helm, nginx version could be overwritten, however compatibility is not ensured then| Plan for periodic updates |
| ||| |
| __2 Basic Configuration__ ||| |
| ||| |
| __2.1 Minimize NGINX Modules__||| |
| 2.1.1 Ensure only required modules are installed (Not Scored) | OK | Already only needed modules are installed, however proposals for further reduction are welcome | |
| 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) | OK | | |
| 2.1.3 Ensure modules with gzip functionality are disabled (Scored)| OK | | |
| 2.1.4 Ensure the autoindex module is disabled (Scored)| OK | No autoindex configs so far in ingress defaults| |
| ||| |
| __2.2 Account Security__||| |
| 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) | OK | Pod configured as user www-data: [See this line in helm chart values](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L10). Compiled with user www-data: [See this line in build script](https://github.com/kubernetes/ingress-nginx/blob/5d67794f4fbf38ec6575476de46201b068eabf87/images/nginx/rootfs/build.sh#L529) | |
| 2.2.2 Ensure the NGINX service account is locked (Scored) | OK | Docker design ensures this | |
| 2.2.3 Ensure the NGINX service account has an invalid shell (Scored)| OK | Shell is nologin: [see this line in build script](https://github.com/kubernetes/ingress-nginx/blob/5d67794f4fbf38ec6575476de46201b068eabf87/images/nginx/rootfs/build.sh#L613)| |
| ||| |
| __2.3 Permissions and Ownership__ ||| |
| 2.3.1 Ensure NGINX directories and files are owned by root (Scored) | OK | Obsolete through docker-design and ingress controller needs to update the configs dynamically| |
| 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) | OK | See previous answer| |
| 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored)| OK | No PID-File due to docker design | |
| 2.3.4 Ensure the core dump directory is secured (Not Scored)| OK | No working_directory configured by default | |
| ||| |
| __2.4 Network Configuration__ ||| |
| 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored)| OK | Ensured by automatic nginx.conf configuration| |
| 2.4.2 Ensure requests for unknown host names are rejected (Not Scored)| OK | They are not rejected but sent to the "default backend" delivering appropriate errors (mostly 404)| |
| 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored)| ACTION NEEDED| Default is 75s | configure keep-alive to 10 seconds [according to this documentation](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#keep-alive) |
| 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored)| RISK TO BE ACCEPTED| Not configured, however the nginx default is 60s| Not configurable|
| ||| |
| __2.5 Information Disclosure__||| |
| 2.5.1 Ensure server_tokens directive is set to `off` (Scored) | OK | server_tokens is configured to off by default| |
| 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) | ACTION NEEDED| 404 shows no version at all, 503 and 403 show "nginx", which is hardcoded [see this line in nginx source code](https://github.com/nginx/nginx/blob/master/src/http/ngx_http_special_response.c#L36) | configure custom error pages at least for 403, 404, 500 and 503|
| 2.5.3 Ensure hidden file serving is disabled (Not Scored) | ACTION NEEDED | config not set | configure a config.server-snippet Snippet, but beware of .well-known challenges or similar. Refer to the benchmark here please |
| 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored)| ACTION NEEDED| hide not configured| configure hide-headers with array of "X-Powered-By" and "Server": [according to this documentation](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#hide-headers) |
| ||| |
| __3 Logging__ ||| |
| ||| |
| 3.1 Ensure detailed logging is enabled (Not Scored) | OK | nginx ingress has a very detailed log format by default | |
| 3.2 Ensure access logging is enabled (Scored) | OK | Access log is enabled by default | |
| 3.3 Ensure error logging is enabled and set to the info logging level (Scored)| OK | Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway | |
| 3.4 Ensure log files are rotated (Scored) | OBSOLETE | Log file handling is not part of the nginx ingress and should be handled separately | |
| 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) | OBSOLETE | See previous answer| |
| 3.6 Ensure access logs are sent to a remote syslog server (Not Scored)| OBSOLETE | See previous answer| |
| 3.7 Ensure proxies pass source IP information (Scored)| OK | Headers are set by default | |
| ||| |
| __4 Encryption__ ||| |
| ||| |
| __4.1 TLS / SSL Configuration__ ||| |
| 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) | OK | Redirect to TLS is default | |
| 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored)| ACTION NEEDED| For installing certs there are enough manuals in the web. A good way is to use Let's Encrypt through cert-manager | Install proper certificates or use Let's Encrypt with cert-manager |
| 4.1.3 Ensure private key permissions are restricted (Scored)| ACTION NEEDED| See previous answer| |
| 4.1.4 Ensure only modern TLS protocols are used (Scored)| OK/ACTION NEEDED | Default is TLS 1.2 + 1.3, while this is okay for CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off old OS's | Set controller.config.ssl-protocols to "TLSv1.3"|
| 4.1.5 Disable weak ciphers (Scored) | ACTION NEEDED| Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers | Set controller.config.ssl-ciphers to "EECDH+AESGCM:EDH+AESGCM"|
| 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) | ACTION NEEDED| No custom DH parameters are generated| Generate dh parameters for each ingress deployment you use - [see here for a how to](https://kubernetes.github.io/ingress-nginx/examples/customization/ssl-dh-param/) |
| 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) | ACTION NEEDED | Not enabled | set via [this configuration parameter](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#enable-ocsp) |
| 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored)| OK | HSTS is enabled by default | |
| 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored)| ACTION NEEDED / RISK TO BE ACCEPTED | HPKP not enabled by default | If Let's Encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. If Let's Encrypt is used, this is complicated, a solution here is yet unknown |
| 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) | DEPENDS ON BACKEND | Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh| If backend allows it, [manual is here](https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/)|
| 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) | DEPENDS ON BACKEND | Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh| If backend allows it, [see configuration here](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#backend-certificate-authentication) |
| 4.1.12 Ensure your domain is preloaded (Not Scored) | ACTION NEEDED| Preload is not active by default | Set controller.config.hsts-preload to true|
| 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored)| OK | Session tickets are disabled by default | |
| 4.1.14 Ensure HTTP/2.0 is used (Not Scored) | OK | http2 is set by default| |
| ||| |
| __5 Request Filtering and Restrictions__||| |
| ||| |
| __5.1 Access Control__||| |
| 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored)| OK/ACTION NEEDED | Depends on use case, geo ip module is compiled into Ingress-Nginx Controller, there are several ways to use it | If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) |
| 5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) | OK/ACTION NEEDED | Depends on use case| If required it can be set via config snippet|
| ||| |
| __5.2 Request Limits__||| |
| 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) | ACTION NEEDED| Default timeout is 60s | Set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#client-header-timeout) and respective body equivalent|
| 5.2.2 Ensure the maximum request body size is set correctly (Scored)| ACTION NEEDED| Default is 1m| set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#proxy-body-size)|
| 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) | ACTION NEEDED| Default is 4 8k| Set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md#large-client-header-buffers)|
| 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) | OK/ACTION NEEDED| No limit set| Depends on use case, limit can be set via [these annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting)|
| 5.2.5 Ensure rate limits by IP address are set (Not Scored) | OK/ACTION NEEDED| No limit set| Depends on use case, limit can be set via [these annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting)|
| ||| |
| __5.3 Browser Security__||| |
| 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored)| ACTION NEEDED| Header not set by default| Several ways to implement this - with the helm charts it works via controller.add-headers |
| 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) | ACTION NEEDED| See previous answer| See previous answer |
| 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored)| ACTION NEEDED| See previous answer| See previous answer |
| 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) | ACTION NEEDED| See previous answer| See previous answer |
| 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored)| ACTION NEEDED | Depends on application. It should be handled in the applications webserver itself, not in the load balancing ingress | check backend webserver |
| ||| |
| __6 Mandatory Access Control__| n/a| too high level, depends on backends | |

<style type="text/css" rel="stylesheet">
@media only screen and (min-width: 768px) {
	td:nth-child(1){
		white-space:normal !important;
    }

    .md-typeset table:not([class]) td {
        padding: .2rem .3rem;
    }
}
</style>
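For the browser-security items (5.3.1-5.3.4) a quick way to see what still needs to be added via controller.add-headers is to inspect the response headers of your site. A minimal sketch: only the header names are taken from the benchmark items above, the checker itself is a hypothetical helper, not part of ingress-nginx:

```python
# Headers recommended by CIS items 5.3.1-5.3.4 (names from the table above).
REQUIRED_HEADERS = {
    "X-Frame-Options",          # 5.3.1
    "X-Content-Type-Options",   # 5.3.2
    "X-XSS-Protection",         # 5.3.3
    "Content-Security-Policy",  # 5.3.4
}

def missing_security_headers(headers):
    """Return which recommended headers a response is missing.

    Header names are compared case-insensitively, as HTTP requires.
    """
    present = {name.lower() for name in headers}
    return {h for h in REQUIRED_HEADERS if h.lower() not in present}
```

Feed it the headers of a response (for example the output of `curl -sI https://your-site/` parsed into a dict); anything it returns is a candidate for controller.add-headers.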
should be handled in the applications webserver itself  not in the load balancing ingress   check backend webserver               6 Mandatory Access Control    n a  too high level  depends on backends       style type  text css  rel  stylesheet    media only screen and  min width  768px     td nth child 1     white space normal  important              md typeset table not  class   td           padding   2rem  3rem            style "}
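The Helm values recommended across the checklist items above can be gathered into a single values file. A minimal sketch, assuming the ingress-nginx chart layout (`controller.config` feeds the NGINX ConfigMap and `controller.addHeaders` sets response headers); verify the keys against your chart version before applying:

```yaml
# Hardening-related Helm values for the ingress-nginx chart (sketch, not a
# complete or authoritative configuration).
controller:
  config:
    ssl-protocols: "TLSv1.3"                 # 4.1.4
    ssl-ciphers: "EECDH+AESGCM:EDH+AESGCM"   # 4.1.5
    enable-ocsp: "true"                      # 4.1.7
    hsts-preload: "true"                     # 4.1.12
  addHeaders:                                # 5.3.x browser security headers
    X-Frame-Options: "deny"
    X-Content-Type-Options: "nosniff"
```

Such a file would be applied with e.g. `helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -f values.yaml`.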
{"questions":"ingress nginx with specific addons e g for or There are multiple ways to install the Ingress Nginx Controller get started as fast as possible you can check the instructions However in many Installation Guide On most Kubernetes clusters the ingress controller will work without requiring any extra configuration If you want to with using the project repository chart with using YAML manifests","answers":"# Installation Guide\n\nThere are multiple ways to install the Ingress-Nginx Controller:\n\n- with [Helm](https:\/\/helm.sh), using the project repository chart;\n- with `kubectl apply`, using YAML manifests;\n- with specific addons (e.g. for [minikube](#minikube) or [MicroK8s](#microk8s)).\n\nOn most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to\nget started as fast as possible, you can check the [quick start](#quick-start) instructions. However, in many\nenvironments, you can improve the performance or get better logs by enabling extra features. We recommend that you\ncheck the [environment-specific instructions](#environment-specific-instructions) for details about optimizing the\ningress controller for your particular environment or cloud provider.\n\n## Contents\n\n<!-- Quick tip: run `grep '^##' index.md` to check that the table of contents is up-to-date. -->\n\n- [Quick start](#quick-start)\n\n- [Environment-specific instructions](#environment-specific-instructions)\n  - ... [Docker Desktop](#docker-desktop)\n  - ... [Rancher Desktop](#rancher-desktop)\n  - ... [minikube](#minikube)\n  - ... [MicroK8s](#microk8s)\n  - ... [AWS](#aws)\n  - ... [GCE - GKE](#gce-gke)\n  - ... [Azure](#azure)\n  - ... [Digital Ocean](#digital-ocean)\n  - ... [Scaleway](#scaleway)\n  - ... [Exoscale](#exoscale)\n  - ... [Oracle Cloud Infrastructure](#oracle-cloud-infrastructure)\n  - ... [OVHcloud](#ovhcloud)\n  - ... 
[Bare-metal](#bare-metal-clusters)\n- [Miscellaneous](#miscellaneous)\n\n<!-- TODO: We have subdirectories for kubernetes versions now because of a PR\nhttps:\/\/github.com\/kubernetes\/ingress-nginx\/pull\/8162 . You can see this here\nhttps:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/deploy\/static\/provider\/cloud .\nWe need to add documentation here that is clear and unambiguous in guiding users to pick the deployment manifest\nunder a subdirectory, based on the K8S version being used. But until the explicit clear docs land here, users are\nfree to use those subdirectories and get the manifest(s) related to their K8S version. -->\n\n## Quick start\n\n**If you have Helm,** you can deploy the ingress controller with the following command:\n\n```console\nhelm upgrade --install ingress-nginx ingress-nginx \\\n  --repo https:\/\/kubernetes.github.io\/ingress-nginx \\\n  --namespace ingress-nginx --create-namespace\n```\n\nIt will install the controller in the `ingress-nginx` namespace, creating that namespace if it doesn't already exist.\n\n!!! info\n    This command is *idempotent*:\n\n    - if the ingress controller is not installed, it will install it,\n    - if the ingress controller is already installed, it will upgrade it.\n\n**If you want a full list of values that you can set, while installing with Helm,** then run:\n\n```console\nhelm show values ingress-nginx --repo https:\/\/kubernetes.github.io\/ingress-nginx\n```\n\n!!! attention \"Helm install on AWS\/GCP\/Azure\/Other providers\"\n    The *ingress-nginx-controller helm-chart is a generic install out of the box*. The default set of helm values is **not** configured for installation on any infra provider. 
The annotations that are applicable to the cloud provider must be customized by the users.<br\/>\n    See [AWS LB Controller](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/v2.2\/guide\/service\/annotations\/).<br\/>\n    Examples of some recommended annotations (the healthcheck ones are required for target-type IP) for the service resource of `--type LoadBalancer` on AWS are below:\n    ```yaml\n      annotations:\n        service.beta.kubernetes.io\/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=270\n        service.beta.kubernetes.io\/aws-load-balancer-nlb-target-type: ip\n        service.beta.kubernetes.io\/aws-load-balancer-healthcheck-path: \/healthz\n        service.beta.kubernetes.io\/aws-load-balancer-healthcheck-port: \"10254\"\n        service.beta.kubernetes.io\/aws-load-balancer-healthcheck-protocol: http\n        service.beta.kubernetes.io\/aws-load-balancer-healthcheck-success-codes: 200-299\n        service.beta.kubernetes.io\/aws-load-balancer-scheme: \"internet-facing\"\n        service.beta.kubernetes.io\/aws-load-balancer-backend-protocol: tcp\n        service.beta.kubernetes.io\/aws-load-balancer-cross-zone-load-balancing-enabled: \"true\"\n        service.beta.kubernetes.io\/aws-load-balancer-type: nlb\n        service.beta.kubernetes.io\/aws-load-balancer-manage-backend-security-group-rules: \"true\"\n        service.beta.kubernetes.io\/aws-load-balancer-access-log-enabled: \"true\"\n        service.beta.kubernetes.io\/aws-load-balancer-security-groups: \"sg-something1 sg-something2\"\n        service.beta.kubernetes.io\/aws-load-balancer-access-log-s3-bucket-name: \"somebucket\"\n        service.beta.kubernetes.io\/aws-load-balancer-access-log-s3-bucket-prefix: \"ingress-nginx\"\n        service.beta.kubernetes.io\/aws-load-balancer-access-log-emit-interval: \"5\"\n    ```\n\n**If you don't have Helm** or if you prefer to use a YAML manifest, you can run the following command 
instead:\n\n```console\nkubectl apply -f https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/controller-v1.12.0-beta.0\/deploy\/static\/provider\/cloud\/deploy.yaml\n```\n\n!!! info\n    The YAML manifest in the command above was generated with `helm template`, so you will end up with almost the same\n    resources as if you had used Helm to install the controller.\n\n!!! attention\n    If you are running an old version of Kubernetes (1.18 or earlier), please read [this paragraph](#running-on-kubernetes-versions-older-than-119) for specific instructions.\n    Because of API deprecations, the default manifest may not work on your cluster.\n    Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.\n\n### Firewall configuration\n\nTo check which ports are used by your installation of ingress-nginx, look at the output of `kubectl -n ingress-nginx get pod -o yaml`. In general, you need:\n\n- Port 8443 open between all hosts on which the kubernetes nodes are running. This is used for the ingress-nginx [admission controller](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/admission-controllers\/).\n- Port 80 (for HTTP) and\/or 443 (for HTTPS) open to the public on the kubernetes nodes to which the DNS of your apps are pointing.\n\n### Pre-flight check\n\nA few pods should start in the `ingress-nginx` namespace:\n\n```console\nkubectl get pods --namespace=ingress-nginx\n```\n\nAfter a while, they should all be running. 
The following command will wait for the ingress controller pod to be up,\nrunning, and ready:\n\n```console\nkubectl wait --namespace ingress-nginx \\\n  --for=condition=ready pod \\\n  --selector=app.kubernetes.io\/component=controller \\\n  --timeout=120s\n```\n\n### Local testing\n\nLet's create a simple web server and the associated service:\n\n```console\nkubectl create deployment demo --image=httpd --port=80\nkubectl expose deployment demo\n```\n\nThen create an ingress resource. The following example uses a host that maps to `localhost`:\n\n```console\nkubectl create ingress demo-localhost --class=nginx \\\n  --rule=\"demo.localdev.me\/*=demo:80\"\n```\n\nNow, forward a local port to the ingress controller:\n\n```console\nkubectl port-forward --namespace=ingress-nginx service\/ingress-nginx-controller 8080:80\n```\n\n!!! info\n    A note on DNS & network connection.\n    This documentation assumes that a user has awareness of the DNS and the network routing aspects involved in using ingress.\n    The port-forwarding mentioned above is the easiest way to demo the working of ingress. The \"kubectl port-forward...\" command above has forwarded port 8080 on the localhost's TCP\/IP stack, where the command was typed, to port 80 of the service created by the installation of the ingress-nginx controller. So now, traffic sent to port 8080 on localhost will reach port 80 of the ingress controller's service.\n    Port-forwarding is not for a production environment use-case. 
But here we use port-forwarding to simulate an HTTP request originating from outside the cluster, reaching the service of the ingress-nginx controller that is exposed to receive traffic from outside the cluster.\n  [This issue](https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/10014#issuecomment-1567791549) shows a typical DNS problem and its solution.\n\nAt this point, you can access your deployment using curl:\n\n```console\ncurl --resolve demo.localdev.me:8080:127.0.0.1 http:\/\/demo.localdev.me:8080\n```\n\nYou should see an HTML response containing text like **\"It works!\"**.\n\n### Online testing\n\nIf your Kubernetes cluster is a \"real\" cluster that supports services of type `LoadBalancer`, it will have allocated an\nexternal IP address or FQDN to the ingress controller.\n\nYou can see that IP address or FQDN with the following command:\n\n```console\nkubectl get service ingress-nginx-controller --namespace=ingress-nginx\n```\n\nIt will be the `EXTERNAL-IP` field. If that field shows `<pending>`, this means that your Kubernetes cluster wasn't\nable to provision the load balancer (generally, this is because it doesn't support services of type `LoadBalancer`).\n\nOnce you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress\nresource. The following example assumes that you have set up a DNS record for `www.demo.io`:\n\n```console\nkubectl create ingress demo --class=nginx \\\n  --rule=\"www.demo.io\/*=demo:80\"\n```\n\nAlternatively, the above command can be rewritten with the `--rule` argument as follows:\n\n```console\nkubectl create ingress demo --class=nginx \\\n  --rule www.demo.io\/=demo:80\n```\n\nYou should then be able to see the \"It works!\" page when you connect to <http:\/\/www.demo.io\/>. Congratulations,\nyou are serving a public website hosted on a Kubernetes cluster! 
\ud83c\udf89\n\n## Environment-specific instructions\n\n### Local development clusters\n\n#### minikube\n\nThe ingress controller can be installed through minikube's addons system:\n\n```console\nminikube addons enable ingress\n```\n\n#### MicroK8s\n\nThe ingress controller can be installed through MicroK8s's addons system:\n\n```console\nmicrok8s enable ingress\n```\n\nPlease check the MicroK8s [documentation page](https:\/\/microk8s.io\/docs\/addon-ingress) for details.\n\n#### Docker Desktop\n\nKubernetes is available in Docker Desktop:\n\n- Mac, from [version 18.06.0-ce](https:\/\/docs.docker.com\/docker-for-mac\/release-notes\/#stable-releases-of-2018)\n- Windows, from [version 18.06.0-ce](https:\/\/docs.docker.com\/docker-for-windows\/release-notes\/#docker-community-edition-18060-ce-win70-2018-07-25)\n\nFirst, make sure that Kubernetes is enabled in the Docker settings. The command `kubectl get nodes` should show a\nsingle node called `docker-desktop`.\n\nThe ingress controller can be installed on Docker Desktop using the default [quick start](#quick-start) instructions.\n\nOn most systems, if you don't have any other service of type `LoadBalancer` bound to port 80, the ingress controller\nwill be assigned the `EXTERNAL-IP` of `localhost`, which means that it will be reachable on localhost:80. If that\ndoesn't work, you might have to fall back to the `kubectl port-forward` method described in the\n[local testing section](#local-testing).\n\n#### Rancher Desktop\n\nRancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.\n\nRancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. 
To use Ingress-Nginx Controller in place of the default Traefik, disable Traefik from the Preferences > Kubernetes menu.\n\nOnce Traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default [quick start](#quick-start) instructions. Follow the instructions described in the [local testing section](#local-testing) to try a sample.\n\n### Cloud deployments\n\nIf the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the\n`externalTrafficPolicy` of the ingress controller Service to `Local` (instead of the default `Cluster`) to save an\nextra hop in some cases. If you're installing with Helm, this can be done by adding\n`--set controller.service.externalTrafficPolicy=Local` to the `helm install` or `helm upgrade` command.\n\nFurthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will\nlet the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of\nthe upstream load balancer. This must be done both in the ingress controller\n(with e.g. `--set controller.config.use-proxy-protocol=true`) and in the cloud provider's load balancer configuration\nto function correctly.\n\nIn the following sections, we provide YAML manifests that enable these options when possible, using the specific\noptions of various cloud providers.\n\n#### AWS\n\nIn AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of `Type=LoadBalancer`.\n\n!!! 
info\n    The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB.\n    AWS provides the documentation on how to use\n    [Network load balancing on Amazon EKS](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/network-load-balancing.html)\n    with [AWS Load Balancer Controller](https:\/\/github.com\/kubernetes-sigs\/aws-load-balancer-controller).\n\n##### Network Load Balancer (NLB)\n\n```console\nkubectl apply -f https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/controller-v1.12.0-beta.0\/deploy\/static\/provider\/aws\/deploy.yaml\n```\n\n##### TLS termination in AWS Load Balancer (NLB)\n\nBy default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer.\nThis section explains how to do that on AWS using an NLB.\n\n1. Download the [deploy.yaml](https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/controller-v1.12.0-beta.0\/deploy\/static\/provider\/aws\/nlb-with-tls-termination\/deploy.yaml) template\n\n  ```console\n  wget https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/controller-v1.12.0-beta.0\/deploy\/static\/provider\/aws\/nlb-with-tls-termination\/deploy.yaml\n  ```\n\n2. Edit the file and change the VPC CIDR in use for the Kubernetes cluster:\n\n   ```\n   proxy-real-ip-cidr: XXX.XXX.XXX\/XX\n   ```\n\n3. Change the AWS Certificate Manager (ACM) ID as well:\n\n   ```\n   arn:aws:acm:us-west-2:XXXXXXXX:certificate\/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX\n   ```\n\n4. 
Deploy the manifest:\n\n   ```console\n   kubectl apply -f deploy.yaml\n   ```\n\n##### NLB Idle Timeouts\n\nThe idle timeout value for TCP flows is 350 seconds and\n[cannot be modified](https:\/\/docs.aws.amazon.com\/elasticloadbalancing\/latest\/network\/network-load-balancers.html#connection-idle-timeout).\n\nFor this reason, you need to ensure that the\n[keepalive_timeout](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#keepalive_timeout)\nvalue is configured to less than 350 seconds to work as expected.\n\nBy default, NGINX `keepalive_timeout` is set to `75s`.\n\nMore information about timeouts can be found in the\n[official AWS documentation](https:\/\/docs.aws.amazon.com\/elasticloadbalancing\/latest\/network\/network-load-balancers.html#connection-idle-timeout).\n\n#### GCE-GKE\n\nFirst, your user needs to have `cluster-admin` permissions on the cluster. This can be done with the following command:\n\n```console\nkubectl create clusterrolebinding cluster-admin-binding \\\n  --clusterrole cluster-admin \\\n  --user $(gcloud config get-value account)\n```\n\nThen, the ingress controller can be installed like this:\n\n```console\nkubectl apply -f https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/controller-v1.12.0-beta.0\/deploy\/static\/provider\/cloud\/deploy.yaml\n```\n\n!!! warning\n    For private clusters, you will need to either add a firewall rule that allows master nodes access to\n    port `8443\/tcp` on worker nodes, or change the existing rule that allows access to port `80\/tcp`, `443\/tcp` and\n    `10254\/tcp` to also allow access to port `8443\/tcp`. 
More information can be found in the\n    [Official GCP Documentation](https:\/\/cloud.google.com\/load-balancing\/docs\/tcp\/setting-up-tcp#config-hc-firewall).\n\n    See the [GKE documentation](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/private-clusters#add_firewall_rules)\n    on adding rules and the [Kubernetes issue](https:\/\/github.com\/kubernetes\/kubernetes\/issues\/79739) for more detail.\n\nProxy protocol is supported in GCE; check the [official documentation on how to enable it](https:\/\/cloud.google.com\/load-balancing\/docs\/tcp\/setting-up-tcp#proxy-protocol).\n\n#### Azure\n\n```console\nkubectl apply -f https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/controller-v1.12.0-beta.0\/deploy\/static\/provider\/cloud\/deploy.yaml\n```\n\nMore information about Azure annotations for the ingress controller can be found in the [official AKS documentation](https:\/\/docs.microsoft.com\/en-us\/azure\/aks\/ingress-internal-ip#create-an-ingress-controller).\n\n#### Digital Ocean\n\n```console\nkubectl apply -f https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/controller-v1.12.0-beta.0\/deploy\/static\/provider\/do\/deploy.yaml\n```\n\n- By default, the service object of the ingress-nginx-controller for Digital-Ocean only configures one annotation: `service.beta.kubernetes.io\/do-loadbalancer-enable-proxy-protocol: \"true\"`. While this makes the service functional, it was reported that the Digital-Ocean LoadBalancer graphs show `no data` unless a few other annotations are also configured. Some of these other annotations require values that cannot be generic and hence are not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in [this issue](https:\/\/github.com\/kubernetes\/ingress-nginx\/issues\/8965). 
Please refer to that issue for the annotations, with values specific to the user, needed to get the graphs of the DO-LB populated with data.\n\n#### Scaleway\n\n```console\nkubectl apply -f https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/controller-v1.12.0-beta.0\/deploy\/static\/provider\/scw\/deploy.yaml\n```\n\nRefer to the [dedicated tutorial](https:\/\/www.scaleway.com\/en\/docs\/tutorials\/proxy-protocol-v2-load-balancer\/#configuring-proxy-protocol-for-ingress-nginx) in the Scaleway documentation for configuring the proxy protocol for ingress-nginx with the Scaleway load balancer.\n\n#### Exoscale\n\n```console\nkubectl apply -f https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/main\/deploy\/static\/provider\/exoscale\/deploy.yaml\n```\n\nThe full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager\n[documentation](https:\/\/github.com\/exoscale\/exoscale-cloud-controller-manager\/blob\/master\/docs\/service-loadbalancer.md).\n\n#### Oracle Cloud Infrastructure\n\n```console\nkubectl apply -f https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/controller-v1.12.0-beta.0\/deploy\/static\/provider\/cloud\/deploy.yaml\n```\n\nA\n[complete list of available annotations for Oracle Cloud Infrastructure](https:\/\/github.com\/oracle\/oci-cloud-controller-manager\/blob\/master\/docs\/load-balancer-annotations.md)\ncan be found in the [OCI Cloud Controller Manager](https:\/\/github.com\/oracle\/oci-cloud-controller-manager) documentation.\n\n#### OVHcloud\n\n```console\nhelm repo add ingress-nginx https:\/\/kubernetes.github.io\/ingress-nginx\nhelm repo update\nhelm -n ingress-nginx install ingress-nginx ingress-nginx\/ingress-nginx --create-namespace\n```\n\nYou can find the [complete tutorial](https:\/\/docs.ovh.com\/gb\/en\/kubernetes\/installing-nginx-ingress\/) in the OVHcloud documentation.\n\n### Bare metal clusters\n\nThis section is applicable to Kubernetes clusters deployed on bare metal servers, as well as \"raw\" VMs 
where Kubernetes\nwas installed manually, using generic Linux distros (like CentOS, Ubuntu...).\n\nFor quick testing, you can use a\n[NodePort](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#type-nodeport).\nThis should work on almost every cluster, but it will typically use a port in the range 30000-32767.\n\n```console\nkubectl apply -f https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/controller-v1.12.0-beta.0\/deploy\/static\/provider\/baremetal\/deploy.yaml\n```\n\nFor more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range),\nsee [bare-metal considerations](.\/baremetal.md).\n\n## Miscellaneous\n\n### Checking ingress controller version\n\nRun `\/nginx-ingress-controller --version` within the pod, for instance with `kubectl exec`:\n\n```console\nPOD_NAMESPACE=ingress-nginx\nPOD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io\/name=ingress-nginx --field-selector=status.phase=Running -o name)\nkubectl exec $POD_NAME -n $POD_NAMESPACE -- \/nginx-ingress-controller --version\n```\n\n### Scope\n\nBy default, the controller watches Ingress objects from all namespaces. If you want to change this behavior,\nuse the flag `--watch-namespace` or check the Helm chart value `controller.scope` to limit the controller to a single\nnamespace. Although the use of this flag is not popular, one important fact to note is that the secret containing the default-ssl-certificate needs to also be present in the watched namespace(s).\n\nSee also\n[\u201cHow to easily install multiple instances of the Ingress NGINX controller in the same cluster\u201d](https:\/\/kubernetes.github.io\/ingress-nginx\/#how-to-easily-install-multiple-instances-of-the-ingress-nginx-controller-in-the-same-cluster)\nfor more details.\n\n### Webhook network access\n\n!!! 
warning\n    The controller uses an [admission webhook](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/extensible-admission-controllers\/)\n    to validate Ingress definitions. Make sure that you don't have\n    [Network policies](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/network-policies\/)\n    or additional firewalls preventing connections from the API server to the `ingress-nginx-controller-admission` service.\n\n### Certificate generation\n\n!!! attention\n    The first time the ingress controller starts, two [Jobs](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/jobs-run-to-completion\/) create the SSL Certificate used by the admission webhook.\n\nThis can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions.\n\nYou can wait until it is ready with the following command:\n\n```console\nkubectl wait --namespace ingress-nginx \\\n  --for=condition=ready pod \\\n  --selector=app.kubernetes.io\/component=controller \\\n  --timeout=120s\n```\n\n### Running on Kubernetes versions older than 1.19\n\nIngress resources evolved over time. 
They started with `apiVersion: extensions\/v1beta1`,\nthen moved to `apiVersion: networking.k8s.io\/v1beta1` and more recently to `apiVersion: networking.k8s.io\/v1`.\n\nHere is how these Ingress versions are supported in Kubernetes:\n\n- before Kubernetes 1.19, only `v1beta1` Ingress resources are supported\n- from Kubernetes 1.19 to 1.21, both `v1beta1` and `v1` Ingress resources are supported\n- in Kubernetes 1.22 and above, only `v1` Ingress resources are supported\n\nAnd here is how these Ingress versions are supported in Ingress-Nginx Controller:\n\n- before version 1.0, only `v1beta1` Ingress resources are supported\n- in version 1.0 and above, only `v1` Ingress resources are supported\n\nAs a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX\nIngress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X\nof the Ingress-Nginx Controller (e.g. version 0.49).\n\nThe Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. 
In other words, if\nyou're running Kubernetes 1.18 or earlier, you should use version 3.X of the chart (this can be done by adding\n`--version='<4'` to the `helm install` command).","site":"ingress nginx","answers_cleaned":"  Installation Guide  There are multiple ways to install the Ingress Nginx Controller     with  Helm  https   helm sh   using the project repository chart    with  kubectl apply   using YAML manifests    with specific addons  e g  for  minikube   minikube  or  MicroK8s   microk8s     On most Kubernetes clusters  the ingress controller will work without requiring any extra configuration  If you want to get started as fast as possible  you can check the  quick start   quick start  instructions  However  in many environments  you can improve the performance or get better logs by enabling extra features  We recommend that you check the  environment specific instructions   environment specific instructions  for details about optimizing the ingress controller for your particular environment or cloud provider      Contents       Quick tip  run  grep       index md  to check that the table of contents is up to date          Quick start   quick start      Environment specific instructions   environment specific instructions           Docker Desktop   docker desktop           Rancher Desktop   rancher desktop           minikube   minikube           MicroK8s   microk8s           AWS   aws           GCE   GKE   gce gke           Azure   azure           Digital Ocean   digital ocean           Scaleway   scaleway           Exoscale   exoscale           Oracle Cloud Infrastructure   oracle cloud infrastructure           OVHcloud   ovhcloud           Bare metal   bare metal clusters     Miscellaneous   miscellaneous        TODO  We have subdirectories for kubernetes versions now because of a PR https   github com kubernetes ingress nginx pull 8162   You can see this here https   github com kubernetes ingress nginx tree main deploy static provider cloud   We
need to add documentation here that is clear and unambiguous in guiding users to pick the deployment manifest under a subdirectory  based on the K8S version being used  But until the explicit clear docs land here  users are free to use those subdirectories and get the manifest s  related to their K8S version          Quick start    If you have Helm    you can deploy the ingress controller with the following command      console helm upgrade   install ingress nginx ingress nginx       repo https   kubernetes github io ingress nginx       namespace ingress nginx   create namespace      It will install the controller in the  ingress nginx  namespace  creating that namespace if it doesn t already exist       info     This command is  idempotent          if the ingress controller is not installed  it will install it        if the ingress controller is already installed  it will upgrade it     If you want a full list of values that you can set  while installing with Helm    then run      console helm show values ingress nginx   repo https   kubernetes github io ingress nginx          attention  Helm install on AWS GCP Azure Other providers      The  ingress nginx controller helm chart is a generic install out of the box   The default set of helm values is   not   configured for installation on any infra provider  The annotations that are applicable to the cloud provider must be customized by the users  br       See  AWS LB Controller  https   kubernetes sigs github io aws load balancer controller v2 2 guide service annotations    br       Examples of some annotations recommended  healthecheck ones are required for target type IP  for the service resource of    type LoadBalancer  on AWS are below         yaml       annotations          service beta kubernetes io aws load balancer target group attributes  deregistration delay timeout seconds 270         service beta kubernetes io aws load balancer nlb target type  ip         service beta kubernetes io aws load balancer 
healthcheck path   healthz         service beta kubernetes io aws load balancer healthcheck port   10254          service beta kubernetes io aws load balancer healthcheck protocol  http         service beta kubernetes io aws load balancer healthcheck success codes  200 299         service beta kubernetes io aws load balancer scheme   internet facing          service beta kubernetes io aws load balancer backend protocol  tcp         service beta kubernetes io aws load balancer cross zone load balancing enabled   true          service beta kubernetes io aws load balancer type  nlb         service beta kubernetes io aws load balancer manage backend security group rules   true          service beta kubernetes io aws load balancer access log enabled   true          service beta kubernetes io aws load balancer security groups   sg something1 sg something2          service beta kubernetes io aws load balancer access log s3 bucket name   somebucket          service beta kubernetes io aws load balancer access log s3 bucket prefix   ingress nginx          service beta kubernetes io aws load balancer access log emit interval   5             If you don t have Helm   or if you prefer to use a YAML manifest  you can run the following command instead      console kubectl apply  f https   raw githubusercontent com kubernetes ingress nginx controller v1 12 0 beta 0 deploy static provider cloud deploy yaml          info     The YAML manifest in the command above was generated with  helm template   so you will end up with almost the same     resources as if you had used Helm to install the controller       attention     If you are running an old version of Kubernetes  1 18 or earlier   please read  this paragraph   running on Kubernetes versions older than 1 19  for specific instructions      Because of api deprecations  the default manifest may not work on your cluster      Specific manifests for supported Kubernetes versions are available within a sub folder of each provider       
## Firewall configuration

To check which ports are used by your installation of ingress-nginx, look at the output of `kubectl -n ingress-nginx get pod -o yaml`. In general, you need:

- Port 8443 open between all hosts on which the kubernetes nodes are running. This is used for the ingress-nginx [admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/).
- Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the kubernetes nodes to which the DNS of your apps are pointing.

## Pre-flight check

A few pods should start in the `ingress-nginx` namespace:

```console
kubectl get pods --namespace=ingress-nginx
```

After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:

```console
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```

## Local testing

Let's create a simple web server and the associated service:

```console
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
```

Then create an ingress resource. The following example uses a host that maps to `localhost`:

```console
kubectl create ingress demo-localhost --class=nginx \
  --rule="demo.localdev.me/*=demo:80"
```

Now, forward a local port to the ingress controller:

```console
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
```

!!! info
    A note on DNS & network connection.
    This documentation assumes that a user has awareness of the DNS and the network routing aspects involved in using ingress.
    The port-forwarding mentioned above is the easiest way to demo the working of ingress. The `kubectl port-forward ...` command above has forwarded port number 8080, on the localhost's tcp/ip stack, where the command was typed, to port number 80 of the service created by the installation of the ingress-nginx controller. So now, the traffic sent to port number 8080 on localhost will reach port number 80 of the ingress controller's service.
    Port-forwarding is not for a production environment use-case. But here we use port-forwarding to simulate a HTTP request, originating from outside the cluster, to reach the service of the ingress-nginx controller that is exposed to receive traffic from outside the cluster.
    [This issue](https://github.com/kubernetes/ingress-nginx/issues/10014#issuecomment-1567791549) shows a typical DNS problem and its solution.

At this point, you can access your deployment using curl:

```console
curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080
```

You should see a HTML response containing text like **"It works!"**.

## Online testing

If your Kubernetes cluster is a "real" cluster that supports services of type `LoadBalancer`, it will have allocated an external IP address or FQDN to the ingress controller.

You can see that IP address or FQDN with the following command:

```console
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
```

It will be the `EXTERNAL-IP` field. If that field shows `<pending>`, this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type `LoadBalancer`).

Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for `www.demo.io`:

```console
kubectl create ingress demo --class=nginx \
  --rule="www.demo.io/*=demo:80"
```

Alternatively, the above command can be rewritten as follows for the `--rule` command and below:

```console
kubectl create ingress demo --class=nginx \
  --rule www.demo.io/=demo:80
```

You should then be able to see the "It works!" page when you connect to http://www.demo.io/. Congratulations, you are
serving a public website hosted on a Kubernetes cluster!

## Environment-specific instructions

### Local development clusters

#### minikube

The ingress controller can be installed through minikube's addons system:

```console
minikube addons enable ingress
```

#### MicroK8s

The ingress controller can be installed through MicroK8s's addons system:

```console
microk8s enable ingress
```

Please check the MicroK8s [documentation page](https://microk8s.io/docs/addon-ingress) for details.

#### Docker Desktop

Kubernetes is available in Docker Desktop:

- Mac, from [version 18.06.0-ce](https://docs.docker.com/docker-for-mac/release-notes/#stable-releases-of-2018)
- Windows, from [version 18.06.0-ce](https://docs.docker.com/docker-for-windows/release-notes/#docker-community-edition-18060-ce-win70-2018-07-25)

First, make sure that Kubernetes is enabled in the Docker settings. The command `kubectl get nodes` should show a single node called `docker-desktop`.

The ingress controller can be installed on Docker Desktop using the default [quick start](#quick-start) instructions.

On most systems, if you don't have any other service of type `LoadBalancer` bound to port 80, the ingress controller will be assigned the `EXTERNAL-IP` of `localhost`, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the `kubectl port-forward` method described in the [local testing section](#local-testing).

#### Rancher Desktop

Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.

Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use Ingress-Nginx Controller in place of the default Traefik, disable Traefik from the Preferences > Kubernetes menu.

Once Traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default [quick start](#quick-start) instructions. Follow the instructions described in the [local testing section](#local-testing) to try a sample.

### Cloud deployments

If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the `externalTrafficPolicy` of the ingress controller Service to `Local` (instead of the default `Cluster`) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding `--set controller.service.externalTrafficPolicy=Local` to the `helm install` or `helm upgrade` command.

Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. `--set controller.config.use-proxy-protocol=true`) and in the cloud provider's load balancer configuration to function correctly.

In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.

#### AWS

In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of `Type=LoadBalancer`.

!!! info
    The provided templates illustrate the setup for the legacy in-tree service load balancer for AWS NLB.
    AWS provides the documentation on how to use
    [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html)
    with the [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller).

##### Network Load Balancer (NLB)

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/aws/deploy.yaml
```

##### TLS termination in AWS Load Balancer (NLB)

By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.

1. Download the [deploy.yaml](https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml) template

    ```console
    wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
    ```

2. Edit the file and change the VPC CIDR in use for the Kubernetes cluster:

    ```
    proxy-real-ip-cidr: XXX.XXX.XXX/XX
    ```

3. Change the AWS Certificate Manager (ACM) ID as well:

    ```
    arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
    ```

4. Deploy the manifest:

    ```console
    kubectl apply -f deploy.yaml
    ```

##### NLB Idle Timeouts

The idle timeout value for TCP flows is 350 seconds and [cannot be modified](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout).

For this reason, you need to ensure the [keepalive_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) value is configured to less than 350 seconds to work as expected.

By default, NGINX `keepalive_timeout` is set to `75s`.

More information on timeouts can be found in the [official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout).

#### GCE-GKE

First, your user needs to have `cluster-admin` permissions on the cluster. This can be done with the following command:

```console
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)
```

Then, the ingress controller can be installed like this:

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/cloud/deploy.yaml
```

!!! warning
    For private clusters, you will need to either add a firewall rule that allows master nodes access to
    port `8443/tcp` on worker nodes, or change the existing rule that allows access to ports `80/tcp`, `443/tcp` and
    `10254/tcp` to also allow access to port `8443/tcp`. More information can be found in the
    [Official GCP Documentation](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#config-hc-firewall).

See the [GKE documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) on adding rules and the [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/79739) for more detail.

Proxy-protocol is supported in GCE; check the [official documentation on how to enable it](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#proxy-protocol).

#### Azure

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/cloud/deploy.yaml
```

More information on Azure annotations for the ingress controller can be found in the [official AKS documentation](https://docs.microsoft.com/en-us/azure/aks/ingress-internal-ip#create-an-ingress-controller).

#### Digital Ocean

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/do/deploy.yaml
```

By default, the service object of the ingress-nginx-controller for Digital Ocean only configures one annotation: `service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"`. While this makes the service functional, it was reported that the Digital Ocean LoadBalancer graphs show "no data" unless a few other annotations are also configured. Some of these other annotations require values that cannot be generic and hence are not forced in an out-of-the-box installation. These annotations and a discussion on them is well
documented in [this issue](https://github.com/kubernetes/ingress-nginx/issues/8965). Please refer to the issue to add annotations, with values specific to the user, to get graphs of the DO-LB populated with data.

#### Scaleway

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/scw/deploy.yaml
```

Refer to the [dedicated tutorial](https://www.scaleway.com/en/docs/tutorials/proxy-protocol-v2-load-balancer/#configuring-proxy-protocol-for-ingress-nginx) in the Scaleway documentation for configuring the proxy protocol for ingress-nginx with the Scaleway load balancer.

#### Exoscale

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
```

The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager [documentation](https://github.com/exoscale/exoscale-cloud-controller-manager/blob/master/docs/service-loadbalancer.md).

#### Oracle Cloud Infrastructure

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/cloud/deploy.yaml
```

A [complete list of available annotations for Oracle Cloud Infrastructure](https://github.com/oracle/oci-cloud-controller-manager/blob/master/docs/load-balancer-annotations.md) can be found in the [OCI Cloud Controller Manager](https://github.com/oracle/oci-cloud-controller-manager) documentation.

#### OVHcloud

```console
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace
```

You can find the [complete tutorial](https://docs.ovh.com/gb/en/kubernetes/installing-nginx-ingress/).

### Bare metal clusters

This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as "raw" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...).

For quick testing, you can use a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport). This should work on almost every cluster, but it will typically use a port in the range 30000-32767:

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/baremetal/deploy.yaml
```

For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see [bare-metal considerations](./baremetal.md).

## Miscellaneous

### Checking ingress controller version

Run `/nginx-ingress-controller --version` within the pod, for instance with `kubectl exec`:

```console
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)
kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
```

### Scope

By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag `--watch-namespace` or check the Helm chart value `controller.scope` to limit the controller to a single namespace. Although the use of this flag is not popular, one important fact to note is that the secret containing the default-ssl-certificate needs to also be present in the watched namespace(s).

See also ["How to easily install multiple instances of the Ingress-NGINX controller in the same cluster"](https://kubernetes.github.io/ingress-nginx/#how-to-easily-install-multiple-instances-of-the-ingress-nginx-controller-in-the-same-cluster) for more details.

### Webhook network access

!!! warning
    The controller uses an [admission webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) to validate Ingress definitions. Make sure that you don't have
    [Network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) or additional firewalls preventing connections from the API server to the `ingress-nginx-controller-admission` service.

### Certificate generation

!!! attention
    The first time the ingress controller starts, two [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) create the SSL Certificate used by the admission webhook.
    This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions.

    You can wait until it is ready to run the next command:

    ```console
    kubectl wait --namespace ingress-nginx \
      --for=condition=ready pod \
      --selector=app.kubernetes.io/component=controller \
      --timeout=120s
    ```

### Running on Kubernetes versions older than 1.19

Ingress resources evolved over time. They started with `apiVersion: extensions/v1beta1`, then moved to `apiVersion: networking.k8s.io/v1beta1`, and more recently to `apiVersion: networking.k8s.io/v1`.

Here is how these Ingress versions are supported in Kubernetes:

- before Kubernetes 1.19, only `v1beta1` Ingress resources are supported
- from Kubernetes 1.19 to 1.21, both `v1beta1` and `v1` Ingress resources are supported
- in Kubernetes 1.22 and above, only `v1` Ingress resources are supported

And here is how these Ingress versions are supported in the Ingress-Nginx Controller:

- before version 1.0, only `v1beta1` Ingress resources are supported
- in version 1.0 and above, only `v1` Ingress resources are supported

As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier), you will have to use version 0.X of the Ingress-Nginx Controller (e.g. version 0.49).

The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.18 or earlier, you should use version 3.X of the chart (this can be done by adding `--version='<4'` to the `helm install` command).
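To make the API evolution above concrete, here is a sketch of a minimal Ingress in the current `networking.k8s.io/v1` form; the `demo` name, host and backend service are placeholders taken from the examples earlier in this guide:

```yaml
# Hypothetical example: a minimal v1 Ingress; name, host and backend are placeholders
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx
  rules:
  - host: www.demo.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 80
```

The older `v1beta1` form of the same resource had no `pathType` and expressed the backend as `serviceName`/`servicePort`, which is why manifests written for one API version do not apply unchanged on clusters that only support the other.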
# Bare-metal considerations

In traditional *cloud* environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to any application running inside the cluster. *Bare-metal* environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers.

![Cloud environment](../images/baremetal/cloud_overview.jpg)
![Bare-metal environment](../images/baremetal/baremetal_overview.jpg)

The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal.

## A pure software solution: MetalLB

[MetalLB][metallb] provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.

This section demonstrates how to use the [Layer 2 configuration mode][metallb-l2] of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has **publicly accessible nodes**. In this mode, one node attracts all the traffic for the `ingress-nginx` Service IP. See [Traffic policies][metallb-trafficpolicies] for more details.

![MetalLB in L2 mode](../images/baremetal/metallb.jpg)

!!! note
    The description of other supported configuration modes is off-scope for this document.

!!! warning
    MetalLB is currently in *beta*. Read about the [Project maturity][metallb-maturity] and make sure you inform yourself by reading the official documentation thoroughly.

MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the [Installation][metallb-install] instructions, and that the Ingress-Nginx Controller was installed using the steps described in the [quickstart section of the installation guide][install-quickstart].

MetalLB requires a pool of IP addresses in order to be able to take ownership of the `ingress-nginx` Service. This pool can be defined through `IPAddressPool` objects in the same namespace as the MetalLB controller. This pool of IPs **must** be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.

!!! example
    Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is `<None>`)

    ```console
    $ kubectl get node
    NAME     STATUS   ROLES    EXTERNAL-IP
    host-1   Ready    master   203.0.113.1
    host-2   Ready    node     203.0.113.2
    host-3   Ready    node     203.0.113.3
    ```

    After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the *loadBalancer* IP field of the `ingress-nginx` Service accordingly.

    ```yaml
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: default
      namespace: metallb-system
    spec:
      addresses:
      - 203.0.113.10-203.0.113.15
      autoAssign: true
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: default
      namespace: metallb-system
    spec:
      ipAddressPools:
      - default
    ```

    ```console
    $ kubectl -n ingress-nginx get svc
    NAME                   TYPE          CLUSTER-IP     EXTERNAL-IP   PORT(S)
    default-http-backend   ClusterIP     10.0.64.249    <none>        80/TCP
    ingress-nginx          LoadBalancer  10.0.220.217   203.0.113.10  80:30100/TCP,443:30101/TCP
    ```

As soon as MetalLB sets the external IP address of the `ingress-nginx` LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service:

```console
$ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com'
HTTP/1.1 200 OK
Server: nginx/1.15.2
```

!!! tip
    In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the `Local` traffic policy. Traffic policies are described in more detail in [Traffic policies][metallb-trafficpolicies] as well as in the next section.

[metallb]: https://metallb.universe.tf/
[metallb-maturity]: https://metallb.universe.tf/concepts/maturity/
[metallb-l2]: https://metallb.universe.tf/concepts/layer2/
[metallb-install]: https://metallb.universe.tf/installation/
[metallb-trafficpolicies]: https://metallb.universe.tf/usage/#traffic-policies

## Over a NodePort Service

Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the [installation guide][install-baremetal].

!!! info
    A Service of type `NodePort` exposes, via the `kube-proxy` component, the **same unprivileged** port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see [Services][nodeport-def].

In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the `ingress-nginx` Service to HTTP requests.

![NodePort request flow](../images/baremetal/nodeport.jpg)
!!! example
    Given the NodePort `30100` allocated to the `ingress-nginx` Service

    ```console
    $ kubectl -n ingress-nginx get svc
    NAME                   TYPE        CLUSTER-IP     PORT(S)
    default-http-backend   ClusterIP   10.0.64.249    80/TCP
    ingress-nginx          NodePort    10.0.220.217   80:30100/TCP,443:30101/TCP
    ```

    and a Kubernetes node with the public IP address `203.0.113.2` (the external IP is added as an example, in most bare-metal environments this value is `<None>`)

    ```console
    $ kubectl get node
    NAME     STATUS   ROLES    EXTERNAL-IP
    host-1   Ready    master   203.0.113.1
    host-2   Ready    node     203.0.113.2
    host-3   Ready    node     203.0.113.3
    ```

    a client would reach an Ingress with `host: myapp.example.com` at `http://myapp.example.com:30100`, where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address.

!!! danger "Impact on the host system"
    While it may sound tempting to reconfigure the NodePort range using the `--service-node-port-range` API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant `kube-proxy` privileges it may otherwise not require.

    This practice is therefore **discouraged**. See the other approaches proposed in this page for alternatives.

This approach has a few other limitations one ought to be aware of:

* **Source IP address**

Services of type NodePort perform [source address translation][nodeport-nat] by default. This means the source IP of a HTTP request is always **the IP address of the Kubernetes node that received the request** from the perspective of NGINX.

The recommended way to preserve the source IP in a NodePort setup is to set the value of the `externalTrafficPolicy` field of the `ingress-nginx` Service spec to `Local` ([example][preserve-ip]).

!!! warning
    This setting effectively **drops packets** sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider [assigning NGINX Pods to specific nodes][pod-assign] in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled.

!!! example
    In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is `<None>`)

    ```console
    $ kubectl get node
    NAME     STATUS   ROLES    EXTERNAL-IP
    host-1   Ready    master   203.0.113.1
    host-2   Ready    node     203.0.113.2
    host-3   Ready    node     203.0.113.3
    ```

    with an `ingress-nginx-controller` Deployment composed of 2 replicas

    ```console
    $ kubectl -n ingress-nginx get pod -o wide
    NAME                                       READY   STATUS    IP           NODE
    default-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1   host-2
    ingress-nginx-controller-cf9ff8c96-8vvf8   1/1     Running   172.17.0.3   host-3
    ingress-nginx-controller-cf9ff8c96-pxsds   1/1     Running   172.17.1.4   host-2
    ```

    Requests sent to `host-2` and `host-3` would be forwarded to NGINX and the original client's IP would be preserved, while requests to `host-1` would get dropped because there is no NGINX replica running on that node.

* **Ingress status**

Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller **does not update the status of Ingress objects it manages**.

```console
$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80
```

Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the `externalIPs` field of the `ingress-nginx` Service.

!!! warning
    There is more to setting `externalIPs` than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the [Services][external-ips] page of official Kubernetes documentation as well as the section about [External IPs](#external-ips) in this document for more information.

!!! example
    Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is `<None>`)

    ```console
    $ kubectl get node
    NAME     STATUS   ROLES    EXTERNAL-IP
    host-1   Ready    master   203.0.113.1
    host-2   Ready    node     203.0.113.2
    host-3   Ready    node     203.0.113.3
    ```

    one could edit the `ingress-nginx` Service and add the following field to the object spec

    ```yaml
    spec:
      externalIPs:
      - 203.0.113.1
      - 203.0.113.2
      - 203.0.113.3
    ```

    which would in turn be reflected on Ingress objects as follows:

    ```console
    $ kubectl get ingress -o wide
    NAME           HOSTS               ADDRESS                               PORTS
    test-ingress   myapp.example.com   203.0.113.1,203.0.113.2,203.0.113.3   80
    ```

* **Redirects**

As NGINX is **not aware of the port translation operated by the NodePort Service**, backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort.
example\n    Redirects generated by NGINX, for instance HTTP to HTTPS or `domain` to `www.domain`, are generated without\n    NodePort:\n\n    ```console\n    $ curl -D- http:\/\/myapp.example.com:30100`\n    HTTP\/1.1 308 Permanent Redirect\n    Server: nginx\/1.15.2\n    Location: https:\/\/myapp.example.com\/  #-> missing NodePort in HTTPS redirect\n    ```\n\n[install-baremetal]: .\/index.md#bare-metal\n[install-quickstart]: .\/index.md#quick-start\n[nodeport-def]: https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#type-nodeport\n[nodeport-nat]: https:\/\/kubernetes.io\/docs\/tutorials\/services\/source-ip\/#source-ip-for-services-with-type-nodeport\n[pod-assign]: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/assign-pod-node\/\n[preserve-ip]: https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/nginx-0.19.0\/deploy\/provider\/aws\/service-nlb.yaml#L12-L14\n\n## Via the host network\n\nIn a setup where there is no external load balancer available but using NodePorts is not an option, one can configure\n`ingress-nginx` Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of\nthis approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network\ninterfaces, without the extra network translation imposed by NodePort Services.\n\n!!! note\n    This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the `ingress-nginx`\n    Service exists in the target cluster, it is **recommended to delete it**.\n\nThis can be achieved by enabling the `hostNetwork` option in the Pods' spec.\n\n```yaml\ntemplate:\n  spec:\n    hostNetwork: true\n```\n\n!!! danger \"Security considerations\"\n    Enabling this option **exposes every system daemon to the Ingress-Nginx Controller** on any network interface,\n    including the host's loopback. 
Please evaluate the impact this may have on the security of your system carefully.\n\n!!! example\n    Consider this `ingress-nginx-controller` Deployment composed of 2 replicas, NGINX Pods inherit from the IP address\n    of their host instead of an internal Pod IP.\n\n    ```console\n    $ kubectl -n ingress-nginx get pod -o wide\n    NAME                                       READY   STATUS    IP            NODE\n    default-http-backend-7c5bc89cc9-p86md      1\/1     Running   172.17.1.1    host-2\n    ingress-nginx-controller-5b4cf5fc6-7lg6c   1\/1     Running   203.0.113.3   host-3\n    ingress-nginx-controller-5b4cf5fc6-lzrls   1\/1     Running   203.0.113.2   host-2\n    ```\n\nOne major limitation of this deployment approach is that only **a single Ingress-Nginx Controller Pod** may be scheduled\non each cluster node, because binding the same port multiple times on the same network interface is technically\nimpossible. Pods that are unschedulable due to such situation fail with the following event:\n\n```console\n$ kubectl -n ingress-nginx describe pod <unschedulable-ingress-nginx-controller-pod>\n...\nEvents:\n  Type     Reason            From               Message\n  ----     ------            ----               -------\n  Warning  FailedScheduling  default-scheduler  0\/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.\n```\n\nOne way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a *DaemonSet* instead\nof a traditional Deployment.\n\n!!! info\n    A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to\n    [repel those Pods][taints]. 
For more information, see [DaemonSet][daemonset].\n\nBecause most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the\nconfiguration of the corresponding manifest at the user's discretion.\n\n![DaemonSet with hostNetwork flow](..\/images\/baremetal\/hostnetwork.jpg)\n\nAs with NodePorts, this approach has a few quirks that are important to be aware of.\n\n* **DNS resolution**\n\nPods configured with `hostNetwork: true` do not use the internal DNS resolver (i.e. *kube-dns* or *CoreDNS*), unless\ntheir `dnsPolicy` spec field is set to [`ClusterFirstWithHostNet`][dnspolicy]. Consider using this setting if NGINX is\nexpected to resolve internal names for any reason.\n\n* **Ingress status**\n\nBecause there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default\n`--publish-service` flag used in standard cloud setups **does not apply** and the status of all Ingress objects remains\nblank.\n\n```console\n$ kubectl get ingress\nNAME           HOSTS               ADDRESS   PORTS\ntest-ingress   myapp.example.com             80\n```\n\nInstead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the\n[`--report-node-internal-ip-address`][cli-args] flag, which sets the status of all Ingress objects to the internal IP\naddress of all nodes running the Ingress-Nginx Controller.\n\n!!! 
example\n    Given an `ingress-nginx-controller` DaemonSet composed of 2 replicas\n\n    ```console\n    $ kubectl -n ingress-nginx get pod -o wide\n    NAME                                       READY   STATUS    IP            NODE\n    default-http-backend-7c5bc89cc9-p86md      1\/1     Running   172.17.1.1    host-2\n    ingress-nginx-controller-5b4cf5fc6-7lg6c   1\/1     Running   203.0.113.3   host-3\n    ingress-nginx-controller-5b4cf5fc6-lzrls   1\/1     Running   203.0.113.2   host-2\n    ```\n\n    the controller sets the status of all Ingress objects it manages to the following value:\n\n    ```console\n    $ kubectl get ingress -o wide\n    NAME           HOSTS               ADDRESS                   PORTS\n    test-ingress   myapp.example.com   203.0.113.2,203.0.113.3   80\n    ```\n\n!!! note\n    Alternatively, it is possible to override the address written to Ingress objects using the\n    `--publish-status-address` flag. See [Command line arguments][cli-args].\n\n[taints]: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/taint-and-toleration\/\n[daemonset]: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/daemonset\/\n[dnspolicy]: https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/dns-pod-service\/#pod-s-dns-policy\n[cli-args]: ..\/user-guide\/cli-arguments.md\n\n## Using a self-provisioned edge\n\nSimilarly to cloud environments, this deployment approach requires an edge network component providing a public\nentrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software\n(e.g. 
_HAProxy_) and is usually managed outside of the Kubernetes landscape by operations teams.\n\nSuch a deployment builds upon the NodePort Service described above in [Over a NodePort Service](#over-a-nodeport-service),\nwith one significant difference: external clients do not access cluster nodes directly, only the edge component does.\nThis is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.\n\nOn the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes\nnodes and\/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort\non the target nodes as shown in the diagram below:\n\n![User edge](..\/images\/baremetal\/user_edge.jpg)\n\n## External IPs\n\n!!! danger \"Source IP address\"\n    This method does not allow preserving the source IP of HTTP requests in any manner; it is therefore **not\n    recommended**, despite its apparent simplicity.\n\nThe `externalIPs` Service option was previously mentioned in the [NodePort](#over-a-nodeport-service) section.\n\nAs per the [Services][external-ips] page of the official Kubernetes documentation, the `externalIPs` option causes\n`kube-proxy` to route traffic sent to arbitrary IP addresses **and on the Service ports** to the endpoints of that\nService. These IP addresses **must belong to the target node**.\n\n!!! 
example\n    Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal\n    environments this value is <None\>)\n\n    ```console\n    $ kubectl get node\n    NAME     STATUS   ROLES    EXTERNAL-IP\n    host-1   Ready    master   203.0.113.1\n    host-2   Ready    node     203.0.113.2\n    host-3   Ready    node     203.0.113.3\n    ```\n\n    and the following `ingress-nginx` NodePort Service\n\n    ```console\n    $ kubectl -n ingress-nginx get svc\n    NAME                   TYPE        CLUSTER-IP     PORT(S)\n    ingress-nginx          NodePort    10.0.220.217   80:30100\/TCP,443:30101\/TCP\n    ```\n\n    One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort\n    and the Service port:\n\n    ```yaml\n    spec:\n      externalIPs:\n      - 203.0.113.2\n      - 203.0.113.3\n    ```\n\n    ```console\n    $ curl -D- http:\/\/myapp.example.com:30100\n    HTTP\/1.1 200 OK\n    Server: nginx\/1.15.2\n\n    $ curl -D- http:\/\/myapp.example.com\n    HTTP\/1.1 200 OK\n    Server: nginx\/1.15.2\n    ```\n\n    We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.\n\n[external-ips]: https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#external-ips","site":"ingress nginx"}
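The host-network section above deliberately leaves the DaemonSet manifest "at the user's discretion". As a rough sketch of how the pieces discussed there fit together — `hostNetwork: true`, the `ClusterFirstWithHostNet` DNS policy, and the `--report-node-internal-ip-address` flag — a minimal manifest might look like the following. The namespace, labels, image tag, and container name are illustrative placeholders, not values prescribed by this document:

```yaml
# Illustrative sketch only: metadata, labels, and image tag are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true                    # bind ports 80/443 on the node's interfaces directly
      dnsPolicy: ClusterFirstWithHostNet   # keep resolving cluster-internal names despite hostNetwork
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.0  # placeholder tag
        args:
        - /nginx-ingress-controller
        - --report-node-internal-ip-address   # populate Ingress status with node-internal IPs
```

Because the Pod shares the node's network namespace, at most one such Pod fits per node, which is exactly the scheduling behavior a DaemonSet provides.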
{"questions":"ingress nginx key cert pair with an arbitrarily chosen hostname created as follows Unless otherwise mentioned the TLS secret used in examples is a 2048 bit RSA console Many of the examples in this directory have common prerequisites Prerequisites TLS certificates","answers":"# Prerequisites\n\nMany of the examples in this directory have common prerequisites.\n\n## TLS certificates\n\nUnless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA\nkey\/cert pair with an arbitrarily chosen hostname, created as follows\n\n```console\n$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \"\/CN=nginxsvc\/O=nginxsvc\"\nGenerating a 2048 bit RSA private key\n................+++\n................+++\nwriting new private key to 'tls.key'\n-----\n\n$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt\nsecret \"tls-secret\" created\n```\n\nNote: If using CA Authentication, described below, you will need to sign the server certificate with the CA.\n\n## Client Certificate Authentication\n\nCA Authentication, also known as Mutual Authentication, allows both the server and client to verify each other's\nidentity via a common CA.\n\nWe have a CA Certificate which we usually obtain from a Certificate Authority and use that to sign\nboth our server certificate and client certificate. 
Then every time we want to access our backend, we must\npass the client certificate.\n\nThese instructions are based on the following [blog](https:\/\/medium.com\/@awkwardferny\/configuring-certificate-based-mutual-authentication-with-kubernetes-ingress-nginx-20e7e38fdfca)\n\n**Generate the CA Key and Certificate:**\n\n```console\nopenssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 365 -nodes -subj '\/CN=My Cert Authority'\n```\n\n**Generate the Server Key and Certificate, and Sign with the CA Certificate:**\n\n```console\nopenssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '\/CN=mydomain.com'\nopenssl x509 -req -sha256 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt\n```\n\n**Generate the Client Key and Certificate, and Sign with the CA Certificate:**\n\n```console\nopenssl req -new -newkey rsa:4096 -keyout client.key -out client.csr -nodes -subj '\/CN=My Client'\nopenssl x509 -req -sha256 -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt\n```\n\nOnce this is complete you can continue to follow the instructions [here](.\/auth\/client-certs\/README.md#creating-certificate-secrets)\n\n\n\n## Test HTTP Service\n\nAll examples that require a test HTTP Service use the standard http-svc pod,\nwhich you can deploy as follows\n\n```console\n$ kubectl create -f https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/main\/docs\/examples\/http-svc.yaml\nservice \"http-svc\" created\nreplicationcontroller \"http-svc\" created\n\n$ kubectl get po\nNAME             READY     STATUS    RESTARTS   AGE\nhttp-svc-p1t3t   1\/1       Running   0          1d\n\n$ kubectl get svc\nNAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE\nhttp-svc         10.0.122.116   <pending>     80:30301\/TCP       1d\n```\n\nYou can test that the HTTP Service works by exposing it temporarily\n\n```console\n$ kubectl patch svc http-svc -p '{\"spec\":{\"type\": 
\"LoadBalancer\"}}'\n\"http-svc\" patched\n\n$ kubectl get svc http-svc\nNAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE\nhttp-svc         10.0.122.116   <pending>     80:30301\/TCP       1d\n\n$ kubectl describe svc http-svc\nName:\t\t\t\t    http-svc\nNamespace:\t\t\t    default\nLabels:\t\t\t        app=http-svc\nSelector:\t\t        app=http-svc\nType:\t\t\t        LoadBalancer\nIP:\t\t\t            10.0.122.116\nLoadBalancer Ingress:\t108.59.87.136\nPort:\t\t\t        http\t80\/TCP\nNodePort:\t\t        http\t30301\/TCP\nEndpoints:\t\t        10.180.1.6:8080\nSession Affinity:\t    None\nEvents:\n  FirstSeen\tLastSeen\tCount\tFrom\t\t\tSubObjectPath\tType\t\tReason\t\t\tMessage\n  ---------\t--------\t-----\t----\t\t\t-------------\t--------\t------\t\t\t-------\n  1m\t\t1m\t\t1\t{service-controller }\t\t\tNormal\t\tType\t\t\tClusterIP -> LoadBalancer\n  1m\t\t1m\t\t1\t{service-controller }\t\t\tNormal\t\tCreatingLoadBalancer\tCreating load balancer\n  16s\t\t16s\t\t1\t{service-controller }\t\t\tNormal\t\tCreatedLoadBalancer\tCreated load balancer\n\n$ curl 108.59.87.136\nCLIENT VALUES:\nclient_address=10.240.0.3\ncommand=GET\nreal path=\/\nquery=nil\nrequest_version=1.1\nrequest_uri=http:\/\/108.59.87.136:8080\/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*\/*\nhost=108.59.87.136\nuser-agent=curl\/7.46.0\nBODY:\n-no body in request-\n\n$ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"NodePort\"}}'\n\"http-svc\" patched\n```","site":"ingress nginx"}
{"questions":"ingress nginx This example demonstrates how to use annotations and that you have an ingress controller in your cluster You will need to make sure your Ingress targets exactly one Ingress Prerequisites Rewrite controller by specifying the","answers":"# Rewrite\n\nThis example demonstrates how to use `Rewrite` annotations.\n\n## Prerequisites\n\nYou will need to make sure your Ingress targets exactly one Ingress\ncontroller by specifying the [ingress.class annotation](..\/..\/user-guide\/multiple-ingress.md),\nand that you have an ingress controller [running](..\/..\/deploy\/) in your cluster.\n\n## Deployment\n\nRewriting can be controlled using the following annotations:\n\n|Name|Description|Values|\n| --- | --- | --- |\n|nginx.ingress.kubernetes.io\/rewrite-target|Target URI where the traffic must be redirected|string|\n|nginx.ingress.kubernetes.io\/ssl-redirect|Indicates if the location section is only accessible via SSL (defaults to True when Ingress contains a Certificate)|bool|\n|nginx.ingress.kubernetes.io\/force-ssl-redirect|Forces the redirection to HTTPS even if the Ingress is not TLS Enabled|bool|\n|nginx.ingress.kubernetes.io\/app-root|Defines the Application Root that the Controller must redirect if it's in `\/` context|string|\n|nginx.ingress.kubernetes.io\/use-regex|Indicates if the paths defined on an Ingress use regular expressions|bool|\n\n## Examples\n\n### Rewrite Target\n\n!!! attention\n    Starting in Version 0.22.0, ingress definitions using the annotation `nginx.ingress.kubernetes.io\/rewrite-target` are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a [capture group](https:\/\/www.regular-expressions.info\/refcapture.html).\n\n!!! 
note\n    [Captured groups](https:\/\/www.regular-expressions.info\/refcapture.html) are saved in numbered placeholders, chronologically, in the form `$1`, `$2` ... `$n`. These placeholders can be used as parameters in the `rewrite-target` annotation.\n\n!!! note\n    Please see the [FAQ](..\/..\/faq.md#validation-of-path) for Validation Of __`path`__\n\nCreate an Ingress rule with a rewrite annotation:\n\n```console\n$ echo '\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io\/use-regex: \"true\"\n    nginx.ingress.kubernetes.io\/rewrite-target: \/$2\n  name: rewrite\n  namespace: default\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: rewrite.bar.com\n    http:\n      paths:\n      - path: \/something(\/|$)(.*)\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: http-svc\n            port: \n              number: 80\n' | kubectl create -f -\n```\n\nIn this ingress definition, any characters captured by `(.*)` will be assigned to the placeholder `$2`, which is then used as a parameter in the `rewrite-target` annotation.\n\nFor example, the ingress definition above will result in the following rewrites:\n\n- `rewrite.bar.com\/something` rewrites to `rewrite.bar.com\/`\n- `rewrite.bar.com\/something\/` rewrites to `rewrite.bar.com\/`\n- `rewrite.bar.com\/something\/new` rewrites to `rewrite.bar.com\/new`\n\n### App Root\n\nCreate an Ingress rule with an app-root annotation:\n```\n$ echo \"\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io\/app-root: \/app1\n  name: approot\n  namespace: default\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: approot.bar.com\n    http:\n      paths:\n      - path: \/\n        pathType: Prefix\n        backend:\n          service:\n            name: http-svc\n            port: \n              number: 80\n\" | kubectl create -f -\n```\n\nCheck the rewrite is 
working\n\n```\n$ curl -I -k http:\/\/approot.bar.com\/\nHTTP\/1.1 302 Moved Temporarily\nServer: nginx\/1.11.10\nDate: Mon, 13 Mar 2017 14:57:15 GMT\nContent-Type: text\/html\nContent-Length: 162\nLocation: http:\/\/approot.bar.com\/app1\nConnection: keep-alive\n```","site":"ingress nginx"}
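The capture-group rewrite in the record above can be sanity-checked outside the cluster. A minimal sketch, not part of the docs: `rewrite` is a hypothetical helper that uses POSIX sed to approximate how the annotation pair `path: /something(/|$)(.*)` plus `rewrite-target: /$2` maps request URIs.

```shell
# Illustrative only: a two-step sed approximation of the ingress pair
#   path: /something(/|$)(.*)   +   rewrite-target: /$2
# (rewrite() is a hypothetical helper, not an nginx tool.)
rewrite() {
  printf '%s\n' "$1" | sed -E -e 's#^/something/(.*)#/\1#' -e 's#^/something$#/#'
}

rewrite /something       # -> /
rewrite /something/      # -> /
rewrite /something/new   # -> /new
```

This mirrors the three rewrites listed in the record: everything after `/something/` is what `$2` captures, and a bare `/something` captures nothing.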
{"questions":"ingress nginx Ingress Nginx Has the ability to handle canary routing by setting specific Create your main deployment and service This is the main deployment of your application with the service that will be Canary annotations the following is an example of how to configure a canary deployment with weighted canary routing used to route to it","answers":"# Canary\n\nIngress-NGINX has the ability to handle canary routing by setting specific\nannotations; the following is an example of how to configure a canary\ndeployment with weighted canary routing.\n\n## Create your main deployment and service\n\nThis is the main deployment of your application with the service that will be\nused to route to it.\n\n```bash\necho \"\n---\n# Deployment\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: production\n  labels:\n    app: production\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: production\n  template:\n    metadata:\n      labels:\n        app: production\n    spec:\n      containers:\n      - name: production\n        image: registry.k8s.io\/ingress-nginx\/e2e-test-echo:v1.0.1@sha256:1cec65aa768720290d05d65ab1c297ca46b39930e56bc9488259f9114fcd30e2\n        ports:\n        - containerPort: 80\n        env:\n          - name: NODE_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: spec.nodeName\n          - name: POD_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.name\n          - name: POD_NAMESPACE\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.namespace\n          - name: POD_IP\n            valueFrom:\n              fieldRef:\n                fieldPath: status.podIP\n---\n# Service\napiVersion: v1\nkind: Service\nmetadata:\n  name: production\n  labels:\n    app: production\nspec:\n  ports:\n  - port: 80\n    targetPort: 80\n    protocol: TCP\n    name: http\n  selector:\n    app: production\n\" | kubectl apply -f 
-\n```\n\n## Create the canary deployment and service\n\nThis is the canary deployment that will take a weighted amount of requests\ninstead of the main deployment\n\n```bash\necho \"\n---\n# Deployment\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: canary\n  labels:\n    app: canary\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: canary\n  template:\n    metadata:\n      labels:\n        app: canary\n    spec:\n      containers:\n      - name: canary\n        image: registry.k8s.io\/ingress-nginx\/e2e-test-echo:v1.0.1@sha256:1cec65aa768720290d05d65ab1c297ca46b39930e56bc9488259f9114fcd30e2\n        ports:\n        - containerPort: 80\n        env:\n          - name: NODE_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: spec.nodeName\n          - name: POD_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.name\n          - name: POD_NAMESPACE\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.namespace\n          - name: POD_IP\n            valueFrom:\n              fieldRef:\n                fieldPath: status.podIP\n---\n# Service\napiVersion: v1\nkind: Service\nmetadata:\n  name: canary\n  labels:\n    app: canary\nspec:\n  ports:\n  - port: 80\n    targetPort: 80\n    protocol: TCP\n    name: http\n  selector:\n    app: canary\n\" | kubectl apply -f -\n```\n\n## Create Ingress Pointing To Your Main Deployment\n\nNext you will need to expose your main deployment with an ingress resource,\nnote there are no canary specific annotations on this ingress\n\n```bash\necho \"\n---\n# Ingress\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: production\n  annotations:\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: echo.prod.mydomain.com\n    http:\n      paths:\n      - pathType: Prefix\n        path: \/\n        backend:\n          service:\n            name: production\n            port:\n              
number: 80\n\" | kubectl apply -f -\n```\n\n## Create Ingress Pointing To Your Canary Deployment\n\nYou will then create an Ingress that has the canary-specific configuration;\npay special attention to the following:\n\n- The host name is identical to the main ingress host name\n- The `nginx.ingress.kubernetes.io\/canary: \"true\"` annotation is required and\n  defines this as a canary annotation (if you do not have this the Ingresses\n  will clash)\n- The `nginx.ingress.kubernetes.io\/canary-weight: \"50\"` annotation dictates the\n  weight of the routing, in this case there is a \"50%\" chance a request will\n  hit the canary deployment over the main deployment\n```bash\necho \"\n---\n# Ingress\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: canary\n  annotations:\n    nginx.ingress.kubernetes.io\/canary: \\\"true\\\"\n    nginx.ingress.kubernetes.io\/canary-weight: \\\"50\\\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: echo.prod.mydomain.com\n    http:\n      paths:\n      - pathType: Prefix\n        path: \/\n        backend:\n          service:\n            name: canary\n            port:\n              number: 80\n\" | kubectl apply -f -\n```\n\n## Testing your setup\n\nYou can use the following command to test your setup (replacing\nINGRESS_CONTROLLER_IP with your ingress controller's IP address):\n\n```bash\nfor i in $(seq 1 10); do curl -s --resolve echo.prod.mydomain.com:80:$INGRESS_CONTROLLER_IP echo.prod.mydomain.com | grep \"Hostname\"; done\n```\n\nYou should get output similar to the following, showing that your canary setup is working as\nexpected:\n\n```bash\nHostname: production-5c5f65d859-phqzc\nHostname: canary-6697778457-zkfjf\nHostname: canary-6697778457-zkfjf\nHostname: production-5c5f65d859-phqzc\nHostname: canary-6697778457-zkfjf\nHostname: production-5c5f65d859-phqzc\nHostname: production-5c5f65d859-phqzc\nHostname: production-5c5f65d859-phqzc\nHostname: canary-6697778457-zkfjf\nHostname: 
production-5c5f65d859-phqzc\n```","site":"ingress nginx"}
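The 10-request loop in the record above only eyeballs the 50/50 split. A small helper (an illustration, not part of the docs; `canary_share` is a hypothetical name) can count what fraction of the `Hostname:` lines came from the canary deployment:

```shell
# Count canary hits vs. total "Hostname:" lines from the curl loop's output.
# Every "Hostname: canary..." line also matches the second pattern, so n is the total.
canary_share() {
  awk '/^Hostname: canary/ {c++} /^Hostname:/ {n++} END {printf "%d/%d\n", c, n}'
}

printf 'Hostname: production-1\nHostname: canary-1\nHostname: production-2\nHostname: canary-2\n' | canary_share
# prints 2/4
```

Piping the loop's output through `canary_share` should report roughly half the requests landing on the canary, within the variance expected from only 10 samples.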
{"questions":"ingress nginx to a backend service This example demonstrates propagation of selected authentication service response headers Authentication logic is based on HTTP header requests with header containing string are considered authenticated Sample configuration includes Sample authentication service producing several response headers After successful authentication service generates response headers and External authentication authentication service response headers propagation","answers":"# External authentication, authentication service response headers propagation\n\nThis example demonstrates propagation of selected authentication service response headers\nto a backend service.\n\nSample configuration includes:\n\n* Sample authentication service producing several response headers\n  * Authentication logic is based on HTTP header: requests with header `User` containing string `internal` are considered authenticated\n  * After successful authentication, the service generates response headers `UserID` and `UserRole`\n* Sample echo service displaying header information\n* Two ingress objects pointing to echo service\n  * Public, which allows access from unauthenticated users\n  * Private, which allows access from authenticated users only\n\nYou can deploy the example as\nfollows:\n\n```console\n$ kubectl create -f deploy\/\ndeployment \"demo-auth-service\" created\nservice \"demo-auth-service\" created\ningress \"demo-auth-service\" created\ndeployment \"demo-echo-service\" created\nservice \"demo-echo-service\" created\ningress \"public-demo-echo-service\" created\ningress \"secure-demo-echo-service\" created\n\n$ kubectl get po\nNAME                                        READY     STATUS    RESTARTS   AGE\ndemo-auth-service-2769076528-7g9mh          1\/1       Running            0          30s\ndemo-echo-service-3636052215-3vw8c          1\/1       Running            0          29s\n\n$ kubectl get ing\nNAME                       HOSTS                      
           ADDRESS   PORTS     AGE\npublic-demo-echo-service   public-demo-echo-service.kube.local             80        1m\nsecure-demo-echo-service   secure-demo-echo-service.kube.local             80        1m\n```\n\n## Test 1: public service with no auth header\n\n```console\n$ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100\/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET \/ HTTP\/1.1\n> Host: public-demo-echo-service.kube.local\n> User-Agent: curl\/7.43.0\n> Accept: *\/*\n>\n< HTTP\/1.1 200 OK\n< Server: nginx\/1.11.10\n< Date: Mon, 13 Mar 2017 20:19:21 GMT\n< Content-Type: text\/plain; charset=utf-8\n< Content-Length: 20\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: , UserRole:\n```\n\n## Test 2: secure service with no auth header\n\n```console\n$ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100\/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET \/ HTTP\/1.1\n> Host: secure-demo-echo-service.kube.local\n> User-Agent: curl\/7.43.0\n> Accept: *\/*\n>\n< HTTP\/1.1 403 Forbidden\n< Server: nginx\/1.11.10\n< Date: Mon, 13 Mar 2017 20:18:48 GMT\n< Content-Type: text\/html\n< Content-Length: 170\n< Connection: keep-alive\n<\n<html>\n<head><title>403 Forbidden<\/title><\/head>\n<body bgcolor=\"white\">\n<center><h1>403 Forbidden<\/h1><\/center>\n<hr><center>nginx\/1.11.10<\/center>\n<\/body>\n<\/html>\n* Connection #0 to host 192.168.99.100 left intact\n```\n\n## Test 3: public service with valid auth header\n\n```console\n$ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100\/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET \/ HTTP\/1.1\n> Host: public-demo-echo-service.kube.local\n> User-Agent: 
curl\/7.43.0\n> Accept: *\/*\n> User:internal\n>\n< HTTP\/1.1 200 OK\n< Server: nginx\/1.11.10\n< Date: Mon, 13 Mar 2017 20:19:59 GMT\n< Content-Type: text\/plain; charset=utf-8\n< Content-Length: 44\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: 1443635317331776148, UserRole: admin\n```\n\n## Test 4: secure service with valid auth header\n\n```console\n$ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100\/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET \/ HTTP\/1.1\n> Host: secure-demo-echo-service.kube.local\n> User-Agent: curl\/7.43.0\n> Accept: *\/*\n> User:internal\n>\n< HTTP\/1.1 200 OK\n< Server: nginx\/1.11.10\n< Date: Mon, 13 Mar 2017 20:17:23 GMT\n< Content-Type: text\/plain; charset=utf-8\n< Content-Length: 43\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: 605394647632969758, UserRole: admin\n```","site":"ingress nginx"}
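The sample auth service's rule in the record above (a `User` header containing `internal` is authenticated, anything else gets 403, matching Tests 1-4) can be sketched in a few lines. `auth_status` is a hypothetical helper for illustration, not part of the example's deployment:

```shell
# Decision rule from the description: a User header value containing
# "internal" yields 200, any other value yields 403.
auth_status() {
  case "$1" in
    *internal*) echo 200 ;;
    *)          echo 403 ;;
  esac
}

auth_status 'internal'   # prints 200
auth_status 'guest'      # prints 403
```

This reproduces the behaviour seen in the transcripts: Test 2's `guest`-like request is rejected with 403, while Tests 3 and 4 with `User:internal` succeed.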
{"questions":"ingress nginx You will also need to make sure your Ingress targets exactly one Ingress and that you have an ingress controller in your cluster Static IPs You need a and a for this example Prerequisites This example demonstrates how to assign a static ip to an Ingress on through the Ingress NGINX controller controller by specifying the","answers":"# Static IPs\n\nThis example demonstrates how to assign a static IP to an Ingress through the Ingress-NGINX controller.\n\n## Prerequisites\n\nYou need a [TLS cert](..\/PREREQUISITES.md#tls-certificates) and a [test HTTP service](..\/PREREQUISITES.md#test-http-service) for this example.\nYou will also need to make sure your Ingress targets exactly one Ingress\ncontroller by specifying the [ingress.class annotation](..\/..\/user-guide\/multiple-ingress.md),\nand that you have an ingress controller [running](..\/..\/deploy\/) in your cluster.\n\n## Acquiring an IP\n\nSince instances of the ingress nginx controller actually run on nodes in your cluster,\nby default nginx Ingresses will only get static IPs if your cloudprovider\nsupports static IP assignments to nodes. 
On GKE\/GCE for example, even though\nnodes get static IPs, the IPs are not retained across upgrades.\n\nTo acquire a static IP for the ingress-nginx-controller, simply put it\nbehind a Service of `Type=LoadBalancer`.\n\nFirst, create a loadbalancer Service and wait for it to acquire an IP:\n\n```console\n$ kubectl create -f static-ip-svc.yaml\nservice \"ingress-nginx-lb\" created\n\n$ kubectl get svc ingress-nginx-lb\nNAME               CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE\ningress-nginx-lb   10.0.138.113   104.154.109.191   80:31457\/TCP,443:32240\/TCP   15m\n```\n\nThen, update the ingress controller so it adopts the static IP of the Service\nby passing the `--publish-service` flag (the example yaml used in the next step\nalready has it set to \"ingress-nginx-lb\").\n\n```console\n$ kubectl create -f ingress-nginx-controller.yaml\ndeployment \"ingress-nginx-controller\" created\n```\n\n## Assigning the IP to an Ingress\n\nFrom here on every Ingress created with the `ingress.class` annotation set to\n`nginx` will get the IP allocated in the previous step.\n\n```console\n$ kubectl create -f ingress-nginx.yaml\ningress \"ingress-nginx\" created\n\n$ kubectl get ing ingress-nginx\nNAME            HOSTS     ADDRESS           PORTS     AGE\ningress-nginx   *         104.154.109.191   80, 443   13m\n\n$ curl 104.154.109.191 -kL\nCLIENT VALUES:\nclient_address=10.180.1.25\ncommand=GET\nreal path=\/\nquery=nil\nrequest_version=1.1\nrequest_uri=http:\/\/104.154.109.191:8080\/\n...\n```\n\n## Retaining the IP\n\nYou can test retention by deleting the Ingress:\n\n```console\n$ kubectl delete ing ingress-nginx\ningress \"ingress-nginx\" deleted\n\n$ kubectl create -f ingress-nginx.yaml\ningress \"ingress-nginx\" created\n\n$ kubectl get ing ingress-nginx\nNAME            HOSTS     ADDRESS           PORTS     AGE\ningress-nginx   *         104.154.109.191   80, 443   13m\n```\n\n> Note that unlike the GCE Ingress, the same loadbalancer IP is 
shared amongst all\n> Ingresses, because all requests are proxied through the same set of nginx\n> controllers.\n\n## Promote ephemeral to static IP\n\nTo promote the allocated IP to static, you can update the Service manifest:\n\n```console\n$ kubectl patch svc ingress-nginx-lb -p '{\"spec\": {\"loadBalancerIP\": \"104.154.109.191\"}}'\n\"ingress-nginx-lb\" patched\n```\n\n... and promote the IP to static (promotion works differently for cloudproviders,\nprovided example is for GKE\/GCE):\n\n```console\n$ gcloud compute addresses create ingress-nginx-lb --addresses 104.154.109.191 --region us-central1\nCreated [https:\/\/www.googleapis.com\/compute\/v1\/projects\/kubernetesdev\/regions\/us-central1\/addresses\/ingress-nginx-lb].\n---\naddress: 104.154.109.191\ncreationTimestamp: '2017-01-31T16:34:50.089-08:00'\ndescription: ''\nid: '5208037144487826373'\nkind: compute#address\nname: ingress-nginx-lb\nregion: us-central1\nselfLink: https:\/\/www.googleapis.com\/compute\/v1\/projects\/kubernetesdev\/regions\/us-central1\/addresses\/ingress-nginx-lb\nstatus: IN_USE\nusers:\n- us-central1\/forwardingRules\/a09f6913ae80e11e6a8c542010af0000\n```\n\nNow even if the Service is deleted, the IP will persist, so you can recreate the\nService with `spec.loadBalancerIP` set to `104.154.109.191`.","site":"ingress nginx"}
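The static-IP walkthrough above references a `static-ip-svc.yaml` file without showing its contents. A minimal sketch of what that Service could look like, assuming the `ingress-nginx-lb` name and the standard 80/443 ports from the surrounding text; the namespace and selector labels are assumptions and must match your actual controller deployment:

```yaml
# Hypothetical static-ip-svc.yaml: a LoadBalancer Service fronting the
# ingress-nginx controller pods. Adjust namespace and selector to match
# your controller installation.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-lb
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

Once the cloud provider assigns an address, `kubectl get svc ingress-nginx-lb` shows it under `EXTERNAL-IP`, and the controller's `--publish-service` flag makes Ingresses report that address in their status.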
{"questions":"ingress nginx 1 You have a kubernetes cluster running This example demonstrates how to route traffic to a gRPC service through the Ingress NGINX controller 3 You have the ingress nginx controller installed as per docs 4 You have a backend application running a gRPC server listening for TCP traffic If you want you can use https github com grpc grpc go blob 91e0aeb192456225adf27966d04ada4cf8599915 examples features reflection server main goas an example Prerequisites gRPC 2 You have a domain name such as that is configured to route traffic to the Ingress NGINX controller","answers":"# gRPC\n\nThis example demonstrates how to route traffic to a gRPC service through the Ingress-NGINX controller.\n\n## Prerequisites\n\n1. You have a kubernetes cluster running.\n2. You have a domain name such as `example.com` that is configured to route traffic to the Ingress-NGINX controller.\n3. You have the ingress-nginx-controller installed as per docs.\n4. You have a backend application running a gRPC server listening for TCP traffic.  If you want, you can use <https:\/\/github.com\/grpc\/grpc-go\/blob\/91e0aeb192456225adf27966d04ada4cf8599915\/examples\/features\/reflection\/server\/main.go> as an example.\n5. You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type `tls`, in the same namespace as the gRPC application.\n\n### Step 1: Create a Kubernetes `Deployment` for gRPC app\n\n- Make sure your gRPC application pod is running and listening for connections. 
For example you can try a kubectl command like the one below:\n  ```console\n  $ kubectl get po -A -o wide | grep go-grpc-greeter-server\n  ```\n- If you have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below.\n\n- As an example gRPC application, we can use this app <https:\/\/github.com\/grpc\/grpc-go\/blob\/91e0aeb192456225adf27966d04ada4cf8599915\/examples\/features\/reflection\/server\/main.go>.\n\n- To create a container image for this app, you can use [this Dockerfile](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/images\/go-grpc-greeter-server\/rootfs\/Dockerfile). \n\n- If you use the Dockerfile mentioned above, to create an image, then you can use the following example Kubernetes manifest to create a deployment resource that uses that image. If necessary, edit this manifest to suit your needs.\n\n  ```\n  cat <<EOF | kubectl apply -f -\n  apiVersion: apps\/v1\n  kind: Deployment\n  metadata:\n    labels:\n      app: go-grpc-greeter-server\n    name: go-grpc-greeter-server\n  spec:\n    replicas: 1\n    selector:\n      matchLabels:\n        app: go-grpc-greeter-server\n    template:\n      metadata:\n        labels:\n          app: go-grpc-greeter-server\n      spec:\n        containers:\n        - image: <reponame>\/go-grpc-greeter-server   # Edit this for your reponame\n          resources:\n            limits:\n              cpu: 100m\n              memory: 100Mi\n            requests:\n              cpu: 50m\n              memory: 50Mi\n          name: go-grpc-greeter-server\n          ports:\n          - containerPort: 50051\n  EOF\n  ```\n\n### Step 2: Create the Kubernetes `Service` for the gRPC app\n\n- You can use the following example manifest to create a service of type ClusterIP. 
Edit the name\/namespace\/label\/port to match your deployment\/pod.\n  ```\n  cat <<EOF | kubectl apply -f -\n  apiVersion: v1\n  kind: Service\n  metadata:\n    labels:\n      app: go-grpc-greeter-server\n    name: go-grpc-greeter-server\n  spec:\n    ports:\n    - port: 80\n      protocol: TCP\n      targetPort: 50051\n    selector:\n      app: go-grpc-greeter-server\n    type: ClusterIP\n  EOF\n  ```\n- You can save the above example manifest to a file with name `service.go-grpc-greeter-server.yaml` and edit it to match your deployment\/pod, if required. You can create the service resource with a kubectl command like this:\n\n  ```\n  $ kubectl create -f service.go-grpc-greeter-server.yaml\n  ```\n\n### Step 3: Create the Kubernetes `Ingress` resource for the gRPC app\n\n- Use the following example manifest of an ingress resource to create an ingress for your gRPC app. If required, edit it to match your app's details like name, namespace, service, secret etc. Make sure the required SSL certificate exists in your Kubernetes cluster, in the same namespace as the gRPC app. The certificate must be available as a Kubernetes secret resource of type \"kubernetes.io\/tls\" (https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/#tls-secrets). 
This is because we are terminating TLS on the ingress.\n\n  ```\n  cat <<EOF | kubectl apply -f -\n  apiVersion: networking.k8s.io\/v1\n  kind: Ingress\n  metadata:\n    annotations:\n      nginx.ingress.kubernetes.io\/ssl-redirect: \"true\"\n      nginx.ingress.kubernetes.io\/backend-protocol: \"GRPC\"\n    name: fortune-ingress\n    namespace: default\n  spec:\n    ingressClassName: nginx\n    rules:\n    - host: grpctest.dev.mydomain.com\n      http:\n        paths:\n        - path: \/\n          pathType: Prefix\n          backend:\n            service:\n              name: go-grpc-greeter-server\n              port:\n                number: 80\n    tls:\n    # This secret must exist beforehand\n    # The cert must also contain the subj-name grpctest.dev.mydomain.com\n    # https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/master\/docs\/examples\/PREREQUISITES.md#tls-certificates\n    - secretName: wildcard.dev.mydomain.com\n      hosts:\n        - grpctest.dev.mydomain.com\n  EOF\n  ```\n\n- If you save the above example manifest as a file named `ingress.go-grpc-greeter-server.yaml` and edit it to match your deployment and service, you can create the ingress like this:\n\n  ```\n  $ kubectl create -f ingress.go-grpc-greeter-server.yaml\n  ```\n\n- The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, gRPC traffic will travel unencrypted inside the cluster and arrive \"insecure\").\n\n- For your own application you may or may not want to do this.  If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation `nginx.ingress.kubernetes.io\/backend-protocol: \"GRPCS\"`.\n\n- A few more things to note:\n\n  - We've tagged the ingress with the annotation `nginx.ingress.kubernetes.io\/backend-protocol: \"GRPC\"`.  
This is the magic ingredient that sets up the appropriate nginx configuration to route http\/2 traffic to our service.\n\n  - We're terminating TLS at the ingress and have configured an SSL certificate `wildcard.dev.mydomain.com`.  The ingress matches traffic arriving as `https:\/\/grpctest.dev.mydomain.com:443` and routes unencrypted messages to the backend Kubernetes service.\n\n### Step 4: test the connection\n\n- Once we've applied our configuration to Kubernetes, it's time to test that we can actually talk to the backend.  To do this, we'll use the [grpcurl](https:\/\/github.com\/fullstorydev\/grpcurl) utility:\n\n  ```\n  $ grpcurl grpctest.dev.mydomain.com:443 helloworld.Greeter\/SayHello\n  {\n    \"message\": \"Hello \"\n  }\n  ```\n\n### Debugging Hints\n\n1. Obviously, watch the logs on your app.\n2. Watch the logs for the ingress-nginx-controller (increasing verbosity as\n   needed).\n3. Double-check your address and ports.\n4. Set the `GODEBUG=http2debug=2` environment variable to get detailed http\/2\n   logging on the client and\/or server.\n5. Study RFC 7540 (http\/2) <https:\/\/tools.ietf.org\/html\/rfc7540>.\n\n> If you are developing public gRPC endpoints, check out\n> https:\/\/proto.stack.build, a protocol buffer \/ gRPC build service that you can use\n> to help make it easier for your users to consume your API.\n\n> See also the specific gRPC settings of NGINX: https:\/\/nginx.org\/en\/docs\/http\/ngx_http_grpc_module.html\n\n### Notes on using response\/request streams\n\n> `grpc_read_timeout` and `grpc_send_timeout` will be set as `proxy_read_timeout` and `proxy_send_timeout` when you set backend protocol to `GRPC` or `GRPCS`.\n\n1. If your server only does response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the `grpc_read_timeout` to accommodate this.\n2. 
If your service only does request streaming and you expect a stream to be open longer than 60 seconds, you have to change the\n`grpc_send_timeout` and the `client_body_timeout`.\n3. If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: `grpc_read_timeout`, `grpc_send_timeout` and `client_body_timeout`.","site":"ingress nginx"}
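Since `grpc_read_timeout` and `grpc_send_timeout` are derived from the proxy timeout settings once the backend protocol is `GRPC` or `GRPCS`, long-lived streams can be accommodated with per-Ingress annotations. A sketch under the assumption that a one-hour limit is wanted; the `3600` values are illustrative, not a recommendation:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    # These proxy timeouts become grpc_read_timeout / grpc_send_timeout
    # for GRPC/GRPCS backends; raise them for streams open longer than 60s.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```

`client_body_timeout` is not settable per Ingress this way; it is configured globally, e.g. via the `client-body-timeout` key in the controller ConfigMap.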
{"questions":"ingress nginx ingress external auth created Example 1 External Basic Authentication Use an external service Basic Auth located in kubectl create f ingress yaml","answers":"# External Basic Authentication\n\n### Example 1\n\nUse an external service (Basic Auth) located in `https:\/\/httpbin.org`\n\n```\n$ kubectl create -f ingress.yaml\ningress \"external-auth\" created\n\n$ kubectl get ing external-auth\nNAME            HOSTS                         ADDRESS       PORTS     AGE\nexternal-auth   external-auth-01.sample.com   172.17.4.99   80        13s\n\n$ kubectl get ing external-auth -o yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io\/auth-url: https:\/\/httpbin.org\/basic-auth\/user\/passwd\n  creationTimestamp: 2016-10-03T13:50:35Z\n  generation: 1\n  name: external-auth\n  namespace: default\n  resourceVersion: \"2068378\"\n  selfLink: \/apis\/networking\/v1\/namespaces\/default\/ingresses\/external-auth\n  uid: 5c388f1d-8970-11e6-9004-080027d2dc94\nspec:\n  rules:\n  - host: external-auth-01.sample.com\n    http:\n      paths:\n      - path: \/\n        pathType: Prefix\n        backend:\n          service: \n            name: http-svc\n            port: \n              number: 80\nstatus:\n  loadBalancer:\n    ingress:\n    - ip: 172.17.4.99\n$\n```\n\n## Test 1: no username\/password (expect code 401)\n\n```console\n$ curl -k http:\/\/172.17.4.99 -v -H 'Host: external-auth-01.sample.com'\n* Rebuilt URL to: http:\/\/172.17.4.99\/\n*   Trying 172.17.4.99...\n* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)\n> GET \/ HTTP\/1.1\n> Host: external-auth-01.sample.com\n> User-Agent: curl\/7.50.1\n> Accept: *\/*\n>\n< HTTP\/1.1 401 Unauthorized\n< Server: nginx\/1.11.3\n< Date: Mon, 03 Oct 2016 14:52:08 GMT\n< Content-Type: text\/html\n< Content-Length: 195\n< Connection: keep-alive\n< WWW-Authenticate: Basic realm=\"Fake Realm\"\n<\n<html>\n<head><title>401 Authorization 
Required<\/title><\/head>\n<body bgcolor=\"white\">\n<center><h1>401 Authorization Required<\/h1><\/center>\n<hr><center>nginx\/1.11.3<\/center>\n<\/body>\n<\/html>\n* Connection #0 to host 172.17.4.99 left intact\n```\n\n## Test 2: valid username\/password (expect code 200)\n\n```\n$ curl -k http:\/\/172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:passwd'\n* Rebuilt URL to: http:\/\/172.17.4.99\/\n*   Trying 172.17.4.99...\n* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)\n* Server auth using Basic with user 'user'\n> GET \/ HTTP\/1.1\n> Host: external-auth-01.sample.com\n> Authorization: Basic dXNlcjpwYXNzd2Q=\n> User-Agent: curl\/7.50.1\n> Accept: *\/*\n>\n< HTTP\/1.1 200 OK\n< Server: nginx\/1.11.3\n< Date: Mon, 03 Oct 2016 14:52:50 GMT\n< Content-Type: text\/plain\n< Transfer-Encoding: chunked\n< Connection: keep-alive\n<\nCLIENT VALUES:\nclient_address=10.2.60.2\ncommand=GET\nreal path=\/\nquery=nil\nrequest_version=1.1\nrequest_uri=http:\/\/external-auth-01.sample.com:8080\/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*\/*\nauthorization=Basic dXNlcjpwYXNzd2Q=\nconnection=close\nhost=external-auth-01.sample.com\nuser-agent=curl\/7.50.1\nx-forwarded-for=10.2.60.1\nx-forwarded-host=external-auth-01.sample.com\nx-forwarded-port=80\nx-forwarded-proto=http\nx-real-ip=10.2.60.1\nBODY:\n* Connection #0 to host 172.17.4.99 left intact\n-no body in request-\n```\n\n## Test 3: invalid username\/password (expect code 401)\n\n```\ncurl -k http:\/\/172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:user'\n* Rebuilt URL to: http:\/\/172.17.4.99\/\n*   Trying 172.17.4.99...\n* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)\n* Server auth using Basic with user 'user'\n> GET \/ HTTP\/1.1\n> Host: external-auth-01.sample.com\n> Authorization: Basic dXNlcjp1c2Vy\n> User-Agent: curl\/7.50.1\n> Accept: *\/*\n>\n< HTTP\/1.1 401 Unauthorized\n< Server: nginx\/1.11.3\n< Date: Mon, 03 Oct 2016 
14:53:04 GMT\n< Content-Type: text\/html\n< Content-Length: 195\n< Connection: keep-alive\n* Authentication problem. Ignoring this.\n< WWW-Authenticate: Basic realm=\"Fake Realm\"\n<\n<html>\n<head><title>401 Authorization Required<\/title><\/head>\n<body bgcolor=\"white\">\n<center><h1>401 Authorization Required<\/h1><\/center>\n<hr><center>nginx\/1.11.3<\/center>\n<\/body>\n<\/html>\n* Connection #0 to host 172.17.4.99 left intact\n```","site":"ingress nginx"}
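The `Authorization: Basic dXNlcjpwYXNzd2Q=` header that curl sends in the tests above is simply `base64("user:password")`. A quick way to reproduce the header values from Test 2 and Test 3 (assumes coreutils `base64`; `printf` avoids a trailing newline):

```shell
printf 'user:passwd' | base64   # -> dXNlcjpwYXNzd2Q=  (valid credentials, Test 2)
printf 'user:user'   | base64   # -> dXNlcjp1c2Vy      (invalid credentials, Test 3)
```

The same value is produced by `curl -u 'user:passwd'`, which builds the header for you.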
{"questions":"ingress nginx External OAUTH Authentication This annotation requires or greater authentication provider to protect your Ingress resources Important The and annotations allow you to use an external Overview","answers":"# External OAUTH Authentication\n\n### Overview\n\nThe `auth-url` and `auth-signin` annotations allow you to use an external\nauthentication provider to protect your Ingress resources.\n\n!!! Important\n    This annotation requires `ingress-nginx-controller v0.9.0` or greater.\n\n### Key Detail\n\nThis functionality is enabled by deploying multiple Ingress objects for a single host.\nOne Ingress object has no special annotations and handles authentication.\n\nOther Ingress objects can then be annotated in such a way that requires the user to\nauthenticate against the first Ingress's endpoint, and can redirect `401`s to the\nsame endpoint.\n\nSample:\n\n```yaml\n...\nmetadata:\n  name: application\n  annotations:\n    nginx.ingress.kubernetes.io\/auth-url: \"https:\/\/$host\/oauth2\/auth\"\n    nginx.ingress.kubernetes.io\/auth-signin: \"https:\/\/$host\/oauth2\/start?rd=$escaped_request_uri\"\n...\n```\n\n### Example: OAuth2 Proxy + Kubernetes-Dashboard\n\nThis example will show you how to deploy [`oauth2_proxy`](https:\/\/github.com\/pusher\/oauth2_proxy)\ninto a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.\n\n#### Prepare\n\n1. Install the kubernetes dashboard\n\n    ```console\n    kubectl create -f https:\/\/raw.githubusercontent.com\/kubernetes\/kops\/master\/addons\/kubernetes-dashboard\/v1.10.1.yaml\n    ```\n\n2. 
Create a [custom GitHub OAuth application](https:\/\/github.com\/settings\/applications\/new)\n\n    ![Register OAuth2 Application](images\/register-oauth-app.png)\n\n    - Homepage URL is the FQDN in the Ingress rule, like `https:\/\/foo.bar.com`\n    - Authorization callback URL is the same as the base FQDN plus `\/oauth2\/callback`, like `https:\/\/foo.bar.com\/oauth2\/callback`\n\n    ![Register OAuth2 Application](images\/register-oauth-app-2.png)\n\n3. Configure values in the file [`oauth2-proxy.yaml`](https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/main\/docs\/examples\/auth\/oauth-external-auth\/oauth2-proxy.yaml) with the values:\n\n    - OAUTH2_PROXY_CLIENT_ID with the github `<Client ID>`\n    - OAUTH2_PROXY_CLIENT_SECRET with the github `<Client Secret>`\n    - OAUTH2_PROXY_COOKIE_SECRET with value of `python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode(\"ascii\"))'`\n    - (optional, but recommended) OAUTH2_PROXY_GITHUB_USERS with GitHub usernames to allow to login\n    - `__INGRESS_HOST__` with a valid FQDN (e.g. `foo.bar.com`)\n    - `__INGRESS_SECRET__` with a Secret with a valid SSL certificate\n\n4. Deploy the oauth2 proxy and the ingress rules by running:\n\n    ```console\n    $ kubectl create -f oauth2-proxy.yaml\n    ```\n\n#### Test\n\nTest the integration by accessing the configured URL, e.g. `https:\/\/foo.bar.com`\n\n![Register OAuth2 Application](images\/github-auth.png)\n\n![GitHub authentication](images\/oauth-login.png)\n\n![Kubernetes dashboard](images\/dashboard.png)\n\n\n### Example: Vouch Proxy + Kubernetes-Dashboard\n\nThis example will show you how to deploy [`Vouch Proxy`](https:\/\/github.com\/vouch\/vouch-proxy)\ninto a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.\n\n#### Prepare\n\n1. 
Install the kubernetes dashboard\n\n    ```console\n    kubectl create -f https:\/\/raw.githubusercontent.com\/kubernetes\/kops\/master\/addons\/kubernetes-dashboard\/v1.10.1.yaml\n    ```\n\n2. Create a [custom GitHub OAuth application](https:\/\/github.com\/settings\/applications\/new)\n\n    ![Register OAuth2 Application](images\/register-oauth-app.png)\n\n    - Homepage URL is the FQDN in the Ingress rule, like `https:\/\/foo.bar.com`\n    - Authorization callback URL is the same as the base FQDN plus `\/oauth2\/auth`, like `https:\/\/foo.bar.com\/oauth2\/auth`\n\n    ![Register OAuth2 Application](images\/register-oauth-app-2.png)\n\n3. Configure Vouch Proxy values in the file [`vouch-proxy.yaml`](https:\/\/raw.githubusercontent.com\/kubernetes\/ingress-nginx\/main\/docs\/examples\/auth\/oauth-external-auth\/vouch-proxy.yaml) with the values:\n\n    - VOUCH_COOKIE_DOMAIN with value of `<Ingress Host>`\n    - OAUTH_CLIENT_ID with the github `<Client ID>`\n    - OAUTH_CLIENT_SECRET with the github `<Client Secret>`\n    - (optional, but recommended) VOUCH_WHITELIST with GitHub usernames to allow to login\n    - `__INGRESS_HOST__` with a valid FQDN (e.g. `foo.bar.com`)\n    - `__INGRESS_SECRET__` with a Secret with a valid SSL certificate\n\n4. Deploy Vouch Proxy and the ingress rules by running:\n\n    ```console\n    $ kubectl create -f vouch-proxy.yaml\n    ```\n\n#### Test\n\nTest the integration by accessing the configured URL, e.g. 
`https:\/\/foo.bar.com`\n\n![Register OAuth2 Application](images\/github-auth.png)\n\n![GitHub authentication](images\/oauth-login.png)\n\n![Kubernetes dashboard](images\/dashboard.png)","site":"ingress nginx"}
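A note on step 3 of the oauth2-proxy example above: `OAUTH2_PROXY_COOKIE_SECRET` is simply 16 random bytes, base64-encoded. A minimal sketch of an equivalent way to generate it, assuming `openssl` is available:

```shell
# Generate a 16-byte, base64-encoded cookie secret for oauth2-proxy
# (equivalent in spirit to the python one-liner shown in the example).
OAUTH2_PROXY_COOKIE_SECRET=$(openssl rand -base64 16)
echo "$OAUTH2_PROXY_COOKIE_SECRET"
```

Either command is fine; the only requirement is that the decoded secret is 16 (or 24/32) bytes long.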
{"questions":"ingress nginx Basic Authentication Create htpasswd file This example shows how to add authentication in a Ingress rule using a secret that contains a file generated with htpasswd c auth foo It s important the file generated is named actually that the secret has a key otherwise the ingress controller returns a 503 console New password bar","answers":"# Basic Authentication\n\nThis example shows how to add authentication in an Ingress rule using a secret that contains a file generated with `htpasswd`.\nIt's important that the generated file is named `auth` (actually, that the secret has a key `data.auth`); otherwise the ingress controller returns a 503.\n\n## Create htpasswd file\n\n```console\n$ htpasswd -c auth foo\nNew password: <bar>\nNew password:\nRe-type new password:\nAdding password for user foo\n```\n\n## Convert htpasswd into a secret\n\n```console\n$ kubectl create secret generic basic-auth --from-file=auth\nsecret \"basic-auth\" created\n```\n\n## Examine secret\n\n```console\n$ kubectl get secret basic-auth -o yaml\napiVersion: v1\ndata:\n  auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK\nkind: Secret\nmetadata:\n  name: basic-auth\n  namespace: default\ntype: Opaque\n```\n\n## Using kubectl, create an ingress tied to the basic-auth secret\n\n```console\n$ echo \"\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress-with-auth\n  annotations:\n    # type of authentication\n    nginx.ingress.kubernetes.io\/auth-type: basic\n    # name of the secret that contains the user\/password definitions\n    nginx.ingress.kubernetes.io\/auth-secret: basic-auth\n    # message to display with an appropriate context why the authentication is required\n    nginx.ingress.kubernetes.io\/auth-realm: 'Authentication Required - foo'\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: foo.bar.com\n    http:\n      paths:\n      - path: \/\n        pathType: Prefix\n        backend:\n          service: \n            name: 
http-svc\n            port: \n              number: 80\n\" | kubectl create -f -\n```\n\n## Use curl to confirm authorization is required by the ingress\n\n```\n$ curl -v http:\/\/10.2.29.4\/ -H 'Host: foo.bar.com'\n*   Trying 10.2.29.4...\n* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)\n> GET \/ HTTP\/1.1\n> Host: foo.bar.com\n> User-Agent: curl\/7.43.0\n> Accept: *\/*\n>\n< HTTP\/1.1 401 Unauthorized\n< Server: nginx\/1.10.0\n< Date: Wed, 11 May 2016 05:27:23 GMT\n< Content-Type: text\/html\n< Content-Length: 195\n< Connection: keep-alive\n< WWW-Authenticate: Basic realm=\"Authentication Required - foo\"\n<\n<html>\n<head><title>401 Authorization Required<\/title><\/head>\n<body bgcolor=\"white\">\n<center><h1>401 Authorization Required<\/h1><\/center>\n<hr><center>nginx\/1.10.0<\/center>\n<\/body>\n<\/html>\n* Connection #0 to host 10.2.29.4 left intact\n```\n\n## Use curl with the correct credentials to connect to the ingress\n\n```\n$ curl -v http:\/\/10.2.29.4\/ -H 'Host: foo.bar.com' -u 'foo:bar'\n*   Trying 10.2.29.4...\n* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)\n* Server auth using Basic with user 'foo'\n> GET \/ HTTP\/1.1\n> Host: foo.bar.com\n> Authorization: Basic Zm9vOmJhcg==\n> User-Agent: curl\/7.43.0\n> Accept: *\/*\n>\n< HTTP\/1.1 200 OK\n< Server: nginx\/1.10.0\n< Date: Wed, 11 May 2016 06:05:26 GMT\n< Content-Type: text\/plain\n< Transfer-Encoding: chunked\n< Connection: keep-alive\n< Vary: Accept-Encoding\n<\nCLIENT VALUES:\nclient_address=10.2.29.4\ncommand=GET\nreal path=\/\nquery=nil\nrequest_version=1.1\nrequest_uri=http:\/\/foo.bar.com:8080\/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*\/*\nconnection=close\nhost=foo.bar.com\nuser-agent=curl\/7.43.0\nx-request-id=e426c7829ef9f3b18d40730857c3eddb\nx-forwarded-for=10.2.29.1\nx-forwarded-host=foo.bar.com\nx-forwarded-port=80\nx-forwarded-proto=http\nx-real-ip=10.2.29.1\nx-scheme=http\nBODY:\n* Connection #0 to host 10.2.29.4 left 
intact\n-no body in request-\n```","site":"ingress nginx"}
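The interactive `htpasswd` prompt in the example above can be avoided in scripts. A minimal sketch, assuming `openssl` is available, that produces the same apr1 (MD5) entry format `htpasswd` uses by default:

```shell
# Build the htpasswd file non-interactively for user "foo" / password "bar".
# openssl's -apr1 produces the same hash scheme as htpasswd's default.
printf 'foo:%s\n' "$(openssl passwd -apr1 bar)" > auth
cat auth
```

The resulting `auth` file can then be fed to `kubectl create secret generic basic-auth --from-file=auth` exactly as in the example.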
{"questions":"ingress nginx TLS termination This example demonstrates how to terminate TLS through the Ingress Nginx Controller You need a and a for this example Prerequisites Deployment","answers":"# TLS termination\n\nThis example demonstrates how to terminate TLS through the Ingress-Nginx Controller.\n\n## Prerequisites\n\nYou need a [TLS cert](..\/PREREQUISITES.md#tls-certificates) and a [test HTTP service](..\/PREREQUISITES.md#test-http-service) for this example.\n\n## Deployment\n\nCreate a `ingress.yaml` file.\n\n```yaml\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: nginx-test\nspec:\n  tls:\n    - hosts:\n      - foo.bar.com\n      # This assumes tls-secret exists and the SSL\n      # certificate contains a CN for foo.bar.com\n      secretName: tls-secret\n  ingressClassName: nginx\n  rules:\n    - host: foo.bar.com\n      http:\n        paths:\n        - path: \/\n          pathType: Prefix\n          backend:\n            # This assumes http-svc exists and routes to healthy endpoints\n            service:\n              name: http-svc\n              port:\n                number: 80\n```\n\nThe following command instructs the controller to terminate traffic using the provided\nTLS cert, and forward un-encrypted HTTP traffic to the test HTTP service.\n\n```console\nkubectl apply -f ingress.yaml\n```\n\n## Validation\n\nYou can confirm that the Ingress works.\n\n```console\n$ kubectl describe ing nginx-test\nName:\t\t\tnginx-test\nNamespace:\t\tdefault\nAddress:\t\t104.198.183.6\nDefault backend:\tdefault-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080)\nTLS:\n  tls-secret terminates\nRules:\n  Host\tPath\tBackends\n  ----\t----\t--------\n  *\n    \t \thttp-svc:80 (<none>)\nAnnotations:\nEvents:\n  FirstSeen\tLastSeen\tCount\tFrom\t\t\t\tSubObjectPath\tType\t\tReason\tMessage\n  ---------\t--------\t-----\t----\t\t\t\t-------------\t--------\t------\t-------\n  7s\t\t7s\t\t1\t{ingress-nginx-controller 
}\t\t\tNormal\t\tCREATE\tdefault\/nginx-test\n  7s\t\t7s\t\t1\t{ingress-nginx-controller }\t\t\tNormal\t\tUPDATE\tdefault\/nginx-test\n  7s\t\t7s\t\t1\t{ingress-nginx-controller }\t\t\tNormal\t\tCREATE\tip: 104.198.183.6\n  7s\t\t7s\t\t1\t{ingress-nginx-controller }\t\t\tWarning\t\tMAPPING\tIngress rule 'default\/nginx-test' contains no path definition. Assuming \/\n\n$ curl 104.198.183.6 -L\ncurl: (60) SSL certificate problem: self signed certificate\nMore details here: http:\/\/curl.haxx.se\/docs\/sslcerts.html\n\n$ curl 104.198.183.6 -Lk\nCLIENT VALUES:\nclient_address=10.240.0.4\ncommand=GET\nreal path=\/\nquery=nil\nrequest_version=1.1\nrequest_uri=http:\/\/35.186.221.137:8080\/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*\/*\nconnection=Keep-Alive\nhost=35.186.221.137\nuser-agent=curl\/7.46.0\nvia=1.1 google\nx-cloud-trace-context=f708ea7e369d4514fc90d51d7e27e91d\/13322322294276298106\nx-forwarded-for=104.132.0.80, 35.186.221.137\nx-forwarded-proto=https\nBODY:\n\n```","site":"ingress nginx"}
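If you don't already have the `tls-secret` referenced in the TLS-termination example, a self-signed certificate is enough for testing. A minimal sketch (for testing only; the host name follows the example):

```shell
# Create a self-signed cert whose CN matches the Ingress host, then create
# the secret with: kubectl create secret tls tls-secret --key tls.key --cert tls.crt
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com/O=foo.bar.com"
openssl x509 -in tls.crt -noout -subject
```

Because the certificate is self-signed, clients such as `curl` will need `-k` to skip verification, as the validation output in the example shows.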
{"questions":"ingress nginx approvers title Availability zone aware routing editor TBD reviewers aledbf ElvinEfendi authors creation date 2019 08 15","answers":"---\ntitle: Availability zone aware routing\nauthors:\n  - \"@ElvinEfendi\"\nreviewers:\n  - \"@aledbf\"\napprovers:\n  - \"@aledbf\"\neditor: TBD\ncreation-date: 2019-08-15\nlast-updated: 2019-08-16\nstatus: implementable\n---\n\n# Availability zone aware routing\n\n## Table of Contents\n\n<!-- toc -->\n- [Availability zone aware routing](#availability-zone-aware-routing)\n  - [Table of Contents](#table-of-contents)\n  - [Summary](#summary)\n  - [Motivation](#motivation)\n    - [Goals](#goals)\n    - [Non-Goals](#non-goals)\n  - [Proposal](#proposal)\n  - [Implementation History](#implementation-history)\n  - [Drawbacks [optional]](#drawbacks-optional)\n<!-- \/toc -->\n\n## Summary\n\nTeach ingress-nginx about the availability zones that endpoints are running in. This way the ingress-nginx pod will do its best to proxy to a zone-local endpoint.\n\n## Motivation\n\nWhen users run their services across multiple availability zones they usually pay for egress traffic between zones. Providers such as GCP and Amazon EC2 usually charge extra for this traffic.\nWhen picking an endpoint to route a request to, ingress-nginx does not consider whether the endpoint is in a different zone or the same one. That means it's at least equally likely\nthat it will pick an endpoint from another zone and proxy the request to it. 
In this situation the response from the endpoint to the ingress-nginx pod is considered\ninter-zone traffic and usually costs extra money.\n\n\nAt the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https:\/\/cloud.google.com\/compute\/network-pricing.\nAccording to [https:\/\/datapath.io\/resources\/blog\/what-are-aws-data-transfer-costs-and-how-to-minimize-them\/](https:\/\/web.archive.org\/web\/20201008160149\/https:\/\/datapath.io\/resources\/blog\/what-are-aws-data-transfer-costs-and-how-to-minimize-them\/) Amazon charges the same amount as GCP for cross-zone egress traffic.\n\nThis can be a lot of money depending on one's traffic. By teaching ingress-nginx about zones we can eliminate or at least decrease this cost.\n\nArguably intra-zone network latency should also be better than inter-zone latency.\n\n### Goals\n\n* Given a regional cluster running ingress-nginx, ingress-nginx should do best-effort to pick a zone-local endpoint when proxying\n* This should not impact the canary feature\n* ingress-nginx should be able to operate successfully if there are no zonal endpoints\n\n### Non-Goals\n\n* This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone\n* This feature will be relying on https:\/\/kubernetes.io\/docs\/reference\/kubernetes-api\/labels-annotations-taints\/#failure-domainbetakubernetesiozone; it is not this KEP's goal to support other cases\n\n## Proposal\n\nThe idea here is to have the controller part of ingress-nginx\n(1) detect what zone its current pod is running in and\n(2) detect the zone for every endpoint it knows about.\nAfter that, it will post that data as part of the endpoints to Lua land.\nWhen picking an endpoint, the Lua balancer will try to pick a zone-local endpoint first and\nif there is no zone-local endpoint then it will fall back to the current behavior.\n\nInitially, this feature should be 
optional since it is going to make it harder to reason about the load balancing and not everyone might want that.\n\n**How does controller know what zone it runs in?**\nWe can have the pod spec pass the node name using downward API as an environment variable.\nUpon startup, the controller can get node details from the API based on the node name.\nOnce the node details are obtained\nwe can extract the zone from the `failure-domain.beta.kubernetes.io\/zone` annotation.\nThen we can pass that value to Lua land through Nginx configuration\nwhen loading `lua_ingress.lua` module in `init_by_lua` phase.\n\n**How do we extract zones for endpoints?**\nWe can have the controller watch create and update events on nodes in the entire cluster and based on that keep the map of nodes to zones in the memory.\nAnd when we generate endpoints list, we can access node name using `.subsets.addresses[i].nodeName`\nand based on that fetch zone from the map in memory and store it as a field on the endpoint.\n__This solution assumes `failure-domain.beta.kubernetes.io\/zone`__ annotation does not change until the end of the node's life. Otherwise, we have to\nwatch update events as well on the nodes and that'll add even more overhead.\n\nAlternatively, we can get the list of nodes only when there's no node in the memory for the given node name. This is probably a better solution\nbecause then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build node name to zone mapping on start.\nFrom there on,  it will sync during endpoint building in the main event loop if there's no existing entry for the node of an endpoint.\nThis means an extra API call in case cluster has expanded.\n\n**How do we make sure we do our best to choose zone-local endpoint?**\nThis will be done on the Lua side. 
For every backend, we will initialize two balancer instances:\n(1) with all endpoints\n(2) with all endpoints corresponding to the current zone for the backend.\nThen given the request once we choose what backend\nneeds to serve the request, we will first try to use a zonal balancer for that backend.\nIf a zonal balancer does not exist (i.e. there's no zonal endpoint)\nthen we will use a general balancer.\nIn case of zonal outages, we assume that the readiness probe will fail and the controller will\nsee no endpoints for the backend and therefore we will use a general balancer.\n\nWe can enable the feature using a configmap setting. Doing it this way makes it easier to rollback in case of a problem.\n\n## Implementation History\n\n- initial version of KEP is shipped\n- proposal and implementation details are done\n\n## Drawbacks [optional]\n\nMore load on the Kubernetes API server.","site":"ingress nginx","answers_cleaned":"    title  Availability zone aware routing authors        ElvinEfendi  reviewers        aledbf  approvers        aledbf  editor  TBD creation date  2019 08 15 last updated  2019 08 16 status  implementable        Availability zone aware routing     Table of Contents       toc        Availability zone aware routing   availability zone aware routing       Table of Contents   table of contents       Summary   summary       Motivation   motivation         Goals   goals         Non Goals   non goals       Proposal   proposal       Implementation History   implementation history       Drawbacks  optional    drawbacks optional        toc         Summary  Teach ingress nginx about availability zones where endpoints are running in  This way ingress nginx pod will do its best to proxy to zone local endpoint      Motivation  When users run their services across multiple availability zones they usually pay for egress traffic between zones  Providers such as GCP  and Amazon EC2 usually charge extra for this feature  ingress nginx when picking an endpoint to 
route request to does not consider whether the endpoint is in a different zone or the same one  That means it s at least equally likely that it will pick an endpoint from another zone and proxy the request to it  In this situation response from the endpoint to the ingress nginx pod is considered inter zone traffic and usually costs extra money    At the time of this writing  GCP charges  0 01 per GB of inter zone egress traffic according to https   cloud google com compute network pricing  According to  https   datapath io resources blog what are aws data transfer costs and how to minimize them   https   web archive org web 20201008160149 https   datapath io resources blog what are aws data transfer costs and how to minimize them   Amazon also charges the same amount of money as GCP for cross zone  egress traffic   This can be a lot of money depending on once s traffic  By teaching ingress nginx about zones we can eliminate or at least decrease this cost   Arguably inter zone network latency should also be better than cross zone       Goals    Given a regional cluster running ingress nginx  ingress nginx should do best effort to pick a zone local endpoint when proxying   This should not impact canary feature   ingress nginx should be able to operate successfully if there are no zonal endpoints      Non Goals    This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress nginx pod s  in that zone   This feature will be relying on https   kubernetes io docs reference kubernetes api labels annotations taints  failure domainbetakubernetesiozone  it is not this KEP s goal to support other cases     Proposal  The idea here is to have the controller part of ingress nginx  1  detect what zone its current pod is running in and  2  detect the zone for every endpoint it knows about  After that  it will post that data as part of endpoints to Lua land  When picking an endpoint  the Lua balancer will try 
to pick the zone-local endpoint first, and if there is no zone-local endpoint, fall back to the current behavior. Initially this feature should be optional, since it is going to make it harder to reason about the load balancing, and not everyone might want that.

### How does the controller know what zone it runs in?

We can have the pod spec pass the node name using the downward API as an environment variable. Upon startup, the controller can get the node details from the API based on the node name. Once the node details are obtained, we can extract the zone from the `failure-domain.beta.kubernetes.io/zone` annotation. Then we can pass that value to Lua land through the NGINX configuration when loading the `lua_ingress.lua` module in the `init_by_lua` phase.

### How do we extract zones for endpoints?

We can have the controller watch create and update events on nodes in the entire cluster and, based on that, keep a map of nodes to zones in memory. When we generate the endpoints list, we can access the node name using `.subsets.addresses[i].nodeName` and, based on that, fetch the zone from the map in memory and store it as a field on the endpoint.

This solution assumes the `failure-domain.beta.kubernetes.io/zone` annotation does not change until the end of the node's life. Otherwise, we would have to watch update events on the nodes as well, and that will add even more overhead.

Alternatively, we can get the list of nodes only when there is no entry in memory for the given node name. This is probably a better solution, because then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build the node-name-to-zone mapping on start. From there on, it will sync during endpoint building in the main event loop if there is no existing entry for the node of an endpoint. This means an extra API call in case the cluster has expanded.

### How do we make sure we do our best to choose the zone-local endpoint?

This will be done on the Lua side. For every backend we will initialize two balancer instances: (1) with all endpoints, (2) with all endpoints corresponding to the current zone for the backend. Then, once we choose which backend needs to serve a given request, we will first try to use the zonal balancer for that backend. If the zonal balancer does not exist (i.e. there is no zonal endpoint), we will use the general balancer. In case of zonal outages, we assume that the readiness probe will fail and the controller will see no endpoints for the backend, and therefore we will use the general balancer.

We can enable the feature using a ConfigMap setting. Doing it this way makes it easier to roll back in case of a problem.

## Implementation History

- initial version of the KEP is shipped
- proposal and implementation details are done

## Drawbacks [optional]

More load on the Kubernetes API server.
"}
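The zonal-fallback selection this KEP describes (try the zone-local balancer first, fall back to the general one when no zone-local endpoint exists) can be sketched as below. This is a minimal Go illustration with hypothetical `Endpoint` and `pickEndpoints` names, not the actual ingress-nginx Lua implementation:

```go
package main

import "fmt"

// Endpoint is a simplified, hypothetical view of an upstream endpoint
// annotated with the zone of the node it runs on.
type Endpoint struct {
	Address string
	Zone    string
}

// pickEndpoints mirrors the fallback described above: use the zone-local
// subset when it is non-empty (the "zonal balancer"), otherwise fall back
// to the full endpoint list (the "general balancer").
func pickEndpoints(all []Endpoint, currentZone string) []Endpoint {
	var zonal []Endpoint
	for _, ep := range all {
		if ep.Zone == currentZone {
			zonal = append(zonal, ep)
		}
	}
	if len(zonal) > 0 {
		return zonal
	}
	return all // e.g. zonal outage: no zone-local endpoints left
}

func main() {
	eps := []Endpoint{
		{Address: "10.0.1.5:8080", Zone: "zone-a"},
		{Address: "10.0.2.7:8080", Zone: "zone-b"},
	}
	fmt.Println(len(pickEndpoints(eps, "zone-a"))) // 1: zone-local subset
	fmt.Println(len(pickEndpoints(eps, "zone-c"))) // 2: fallback to all
}
```

Precomputing both lists once per backend, rather than filtering per request, matches the two-balancer-instances design described above.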
{"questions":"ingress nginx All the controller files should live on a different container Controller container should have bare minimum to work just go program No file other than NGINX files should exist on this container Inside nginx container there should be a really small http listener just able Proposal to split containers This includes not mounting the service account All the NGINX files should live on one container ServiceAccount should be mounted just on controller","answers":"# Proposal to split containers\n\n* All the NGINX files should live on one container\n  * No file other than NGINX files should exist on this container\n  * This includes not mounting the service account\n* All the controller files should live on a different container\n  * Controller container should have bare minimum to work (just go program)\n  * ServiceAccount should be mounted just on controller\n\n* Inside nginx container, there should be a really small http listener just able \nto start, stop and reload NGINX\n\n## Roadmap (what needs to be done)\n* Map what needs to be done to mount the SA just on controller container\n* Map all the required files for NGINX to work\n* Map all the required network calls between controller and NGINX\n  * eg.: Dynamic lua reconfiguration\n* Map problematic features that will need attention\n  * SSLPassthrough today happens on controller process and needs to happen on NGINX\n\n### Ports and endpoints on NGINX container\n* Public HTTP\/HTTPs port - 80 and 443\n* Lua configuration port - 10246 (HTTP) and 10247 (Stream)\n* 3333 (temp) - Dataplane controller http server\n  * \/reload - (POST) Reloads the configuration.\n    * \"config\" argument is the location of temporary file that should be used \/ moved to nginx.conf\n  * \/test - (POST) Test the configuration of a given file location\n    * \"config\" argument is the location of temporary file that should be tested\n\n### Mounting empty SA on controller container\n\n```yaml\nkind: Pod\napiVersion: 
v1\nmetadata:\n  name: test\nspec:\n  containers:\n  - name: nginx\n    image: nginx:latest\n    ports:\n    - containerPort: 80\n  - name: othernginx\n    image: alpine:latest\n    command: [\"\/bin\/sh\"]\n    args: [\"-c\", \"while true; do date; sleep 3; done\"]\n    volumeMounts:\n    - mountPath: \/var\/run\/secrets\/kubernetes.io\/serviceaccount\n      name: emptysecret\n  volumes:\n  - name: emptysecret\n    emptyDir:\n      sizeLimit: 1Mi\n```\n\n### Mapped folders on NGINX configuration\n**WARNING** We need to be aware of inter mount containers and inode problems. If we \nmount a file instead of a directory, it may take time to reflect the file value on \nthe target container\n\n*  \"\/etc\/nginx\/lua\/?.lua;\/etc\/nginx\/lua\/vendor\/?.lua;;\"; - Lua scripts\n* \"\/var\/log\/nginx\" - NGINX logs\n* \"\/tmp\/nginx (nginx.pid)\" - NGINX pid directory \/ file, fcgi socket, etc\n* \" \/etc\/nginx\/geoip\" - GeoIP database directory - OK - \/etc\/ingress-controller\/geoip\n* \/etc\/nginx\/mime.types - Mime types\n* \/etc\/ingress-controller\/ssl - SSL directory (fake cert, auth cert)\n* \/etc\/ingress-controller\/auth - Authentication files\n* \/etc\/nginx\/modsecurity - Modsecurity configuration\n* \/etc\/nginx\/owasp-modsecurity-crs - Modsecurity rules\n* \/etc\/nginx\/tickets.key - SSL tickets - OK - \/etc\/ingress-controller\/tickets.key\n* \/etc\/nginx\/opentelemetry.toml - OTEL config - OK - \/etc\/ingress-controller\/telemetry\n* \/etc\/nginx\/opentracing.json - Opentracing config - OK - \/etc\/ingress-controller\/telemetry\n* \/etc\/nginx\/modules - NGINX modules\n* \/etc\/nginx\/fastcgi_params (maybe) - fcgi params\n* \/etc\/nginx\/template - Template, may be used by controller only\n\n##### List of modules\n```\nngx_http_auth_digest_module.so    ngx_http_modsecurity_module.so\nngx_http_brotli_filter_module.so  ngx_http_opentracing_module.so\nngx_http_brotli_static_module.so  ngx_stream_geoip2_module.so\nngx_http_geoip2_module.so\n```\n\n##### List 
of files that may be removed\n```\n-rw-r--r--    1 www-data www-data      1077 Jun 23 19:44 fastcgi.conf\n-rw-r--r--    1 www-data www-data      1077 Jun 23 19:44 fastcgi.conf.default\n-rw-r--r--    1 www-data www-data      1007 Jun 23 19:44 fastcgi_params\n-rw-r--r--    1 www-data www-data      1007 Jun 23 19:44 fastcgi_params.default\ndrwxr-xr-x    2 www-data www-data      4096 Jun 23 19:34 geoip\n-rw-r--r--    1 www-data www-data      2837 Jun 23 19:44 koi-utf\n-rw-r--r--    1 www-data www-data      2223 Jun 23 19:44 koi-win\ndrwxr-xr-x    6 www-data www-data      4096 Sep 19 14:13 lua\n-rw-r--r--    1 www-data www-data      5349 Jun 23 19:44 mime.types\n-rw-r--r--    1 www-data www-data      5349 Jun 23 19:44 mime.types.default\ndrwxr-xr-x    2 www-data www-data      4096 Jun 23 19:44 modsecurity\ndrwxr-xr-x    2 www-data www-data      4096 Jun 23 19:44 modules\n-rw-r--r--    1 www-data www-data     18275 Oct  1 21:28 nginx.conf\n-rw-r--r--    1 www-data www-data      2656 Jun 23 19:44 nginx.conf.default\n-rwx------    1 www-data www-data       420 Oct  1 21:28 opentelemetry.toml\n-rw-r--r--    1 www-data www-data         2 Oct  1 21:28 opentracing.json\ndrwxr-xr-x    7 www-data www-data      4096 Jun 23 19:44 owasp-modsecurity-crs\n-rw-r--r--    1 www-data www-data       636 Jun 23 19:44 scgi_params\n-rw-r--r--    1 www-data www-data       636 Jun 23 19:44 scgi_params.default\ndrwxr-xr-x    2 www-data www-data      4096 Sep 19 14:13 template\n-rw-r--r--    1 www-data www-data       664 Jun 23 19:44 uwsgi_params\n-rw-r--r--    1 www-data www-data       664 Jun 23 19:44 uwsgi_params.default\n-rw-r--r--    1 www-data www-data      3610 Jun 23 19:44 win-utf\n```","site":"ingress nginx"}
{"questions":"ingress nginx oscardoe janedoe approvers title KEP Template alicedoe reviewers TBD authors","answers":"---\ntitle: KEP Template\nauthors:\n  - \"@janedoe\"\nreviewers:\n  - TBD\n  - \"@alicedoe\"\napprovers:\n  - TBD\n  - \"@oscardoe\"\neditor: TBD\ncreation-date: yyyy-mm-dd\nlast-updated: yyyy-mm-dd\nstatus: provisional|implementable|implemented|deferred|rejected|withdrawn|replaced\nsee-also:\n  - \"\/docs\/enhancements\/20190101-we-heard-you-like-keps.md\"\n  - \"\/docs\/enhancements\/20190102-everyone-gets-a-kep.md\"\nreplaces:\n  - \"\/docs\/enhancements\/20181231-replaced-kep.md\"\nsuperseded-by:\n  - \"\/docs\/enhancements\/20190104-superseding-kep.md\"\n---\n\n# Title\n\nThis is the title of the KEP.\nKeep it simple and descriptive.\nA good title can help communicate what the KEP is and should be considered as part of any review.\n\nThe title should be lowercased and spaces\/punctuation should be replaced with `-`.\n\nTo get started with this template:\n\n1. **Make a copy of this template.**\n  Create a copy of this template and name it `YYYYMMDD-my-title.md`, where `YYYYMMDD` is the date the KEP was first drafted.\n1. **Fill out the \"overview\" sections.**\n  This includes the Summary and Motivation sections.\n  These should be easy if you've preflighted the idea of the KEP in an issue.\n1. **Create a PR.**\n  Assign it to folks that are sponsoring this process.\n1. **Create an issue**\n  When filing an enhancement tracking issue, please ensure to complete all fields in the template.\n1. 
**Merge early.**\n  Avoid getting hung up on specific details and instead aim to get the goal of the KEP merged quickly.\n  The best way to do this is to just start with the \"Overview\" sections and fill out details incrementally in follow-on PRs.\n  View anything marked as a `provisional` as a working document and subject to change.\n  Aim for single-topic PRs to keep discussions focused.\n  If you disagree with what is already in a document, open a new PR with suggested changes.\n\nThe canonical place for the latest set of instructions (and the likely source of this file) is [here](YYYYMMDD-kep-template.md).\n\nThe `Metadata` section above is intended to support the creation of tooling around the KEP process.\nThis will be a YAML section that is fenced as a code block.\nSee the KEP process for details on each of these items.\n\n## Table of Contents\n\nA table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template.\n\nEnsure the TOC is wrapped with <code>&lt;!-- toc --&gt;&lt;!-- \/toc --&gt;<\/code> tags, and then generate with `hack\/update-toc.sh`.\n\n<!-- toc -->\n- [Summary](#summary)\n- [Motivation](#motivation)\n  - [Goals](#goals)\n  - [Non-Goals](#non-goals)\n- [Proposal](#proposal)\n  - [User Stories [optional]](#user-stories-optional)\n    - [Story 1](#story-1)\n    - [Story 2](#story-2)\n  - [Implementation Details\/Notes\/Constraints [optional]](#implementation-detailsnotesconstraints-optional)\n  - [Risks and Mitigations](#risks-and-mitigations)\n- [Design Details](#design-details)\n  - [Test Plan](#test-plan)\n    - [Removing a deprecated flag](#removing-a-deprecated-flag)\n- [Implementation History](#implementation-history)\n- [Drawbacks [optional]](#drawbacks-optional)\n- [Alternatives [optional]](#alternatives-optional)\n<!-- \/toc -->\n\n## Summary\n\nThe `Summary` section is incredibly important for producing high quality user-focused 
documentation such as release notes or a development roadmap.\nIt should be possible to collect this information before implementation begins in order to avoid requiring implementers to split their attention between writing release notes and implementing the feature itself.\n\nA good summary is probably at least a paragraph in length.\n\n## Motivation\n\nThis section is for explicitly listing the motivation, goals and non-goals of this KEP.\nDescribe why the change is important and the benefits to users.\nThe motivation section can optionally provide links to [experience reports][] to demonstrate the interest in a KEP within the wider Kubernetes community.\n\n[experience reports]: https:\/\/github.com\/golang\/go\/wiki\/ExperienceReports\n\n### Goals\n\nList the specific goals of the KEP.\nHow will we know that this has succeeded?\n\n### Non-Goals\n\nWhat is out of scope for this KEP?\nListing non-goals helps to focus discussion and make progress.\n\n## Proposal\n\nThis is where we get down to the nitty gritty of what the proposal actually is.\n\n### User Stories [optional]\n\nDetail the things that people will be able to do if this KEP is implemented.\nInclude as much detail as possible so that people can understand the \"how\" of the system.\nThe goal here is to make this feel real for users without getting bogged down.\n\n#### Story 1\n\n#### Story 2\n\n### Implementation Details\/Notes\/Constraints [optional]\n\nWhat are the caveats to the implementation?\nWhat are some important details that didn't come across above.\nGo in to as much detail as necessary here.\nThis might be a good place to talk about core concepts and how they relate.\n\n### Risks and Mitigations\n\nWhat are the risks of this proposal and how do we mitigate.\nThink broadly.\nFor example, consider both security and how this will impact the larger kubernetes ecosystem.\n\nHow will security be reviewed and by whom?\nHow will UX be reviewed and by whom?\n\nConsider including folks that also work 
outside project.\n\n## Design Details\n\n### Test Plan\n\n**Note:** *Section not required until targeted at a release.*\n\nConsider the following in developing a test plan for this enhancement:\n\n- Will there be e2e and integration tests, in addition to unit tests?\n- How will it be tested in isolation vs with other components?\n\nNo need to outline all of the test cases, just the general strategy.\nAnything that would count as tricky in the implementation and anything particularly challenging to test should be called out.\n\nAll code is expected to have adequate tests (eventually with coverage expectations).\nPlease adhere to the [Kubernetes testing guidelines][testing-guidelines] when drafting this test plan.\n\n[testing-guidelines]: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-testing\/testing.md\n\n#### Removing a deprecated flag\n\n- Announce deprecation and support policy of the existing flag\n- Two versions passed since introducing the functionality which deprecates the flag (to address version skew)\n- Address feedback on usage\/changed behavior, provided on GitHub issues\n- Deprecate the flag\n\n## Implementation History\n\nMajor milestones in the life cycle of a KEP should be tracked in `Implementation History`.\nMajor milestones might include\n\n- the `Summary` and `Motivation` sections being merged signaling acceptance\n- the `Proposal` section being merged signaling agreement on a proposed design\n- the date implementation started\n- the first Kubernetes release where an initial version of the KEP was available\n- the version of Kubernetes where the KEP graduated to general availability\n- when the KEP was retired or superseded\n\n## Drawbacks [optional]\n\nWhy should this KEP _not_ be implemented.\n\n## Alternatives [optional]\n\nSimilar to the `Drawbacks` section the `Alternatives` section is used to highlight and record other possible approaches to delivering the value proposed by a KEP.","site":"ingress nginx"}
{"questions":"ingress nginx Ingress NGINX Code Overview Core Golang code This document provides an overview of Ingress NGINX code This part of the code is responsible for the main logic of Ingress NGINX It contains all the logics that parses watches Endpoints and turn them into usable nginx conf configuration","answers":"# Ingress NGINX - Code Overview\n\nThis document provides an overview of Ingress NGINX code.\n\n\n## Core Golang code\n\nThis part of the code is responsible for the main logic of Ingress NGINX. It contains all the logic that parses [Ingress Objects](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/), \n[annotations](https:\/\/kubernetes.io\/docs\/reference\/glossary\/?fundamental=true#term-annotation), watches Endpoints, and turns them into a usable nginx.conf configuration.\n\n\n### Core Sync Logic\n\nIngress-nginx has an internal model of the ingresses, secrets and endpoints in a given cluster. It maintains two copies of that:\n\n1. One copy is the currently running configuration model\n2. The second copy is the one generated in response to changes in the cluster\n\nThe sync logic diffs the two models and, if there is a change, tries to converge the running configuration to the new one. \n\nThere are static and dynamic configuration changes. \n\nAll endpoints and certificate changes are handled dynamically by posting the payload to an internal NGINX endpoint that is handled by Lua.\n\n---\n\nThe following parts of the code can be found:\n\n### Entrypoint\n\nThe `main` package is responsible for starting the ingress-nginx program, and can be found in the [cmd\/nginx](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/cmd\/nginx) directory.\n\n### Version\n\nThis package is responsible for adding the `version` subcommand, and can be found in the [version](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/version) directory.\n\n### Internal code\n\nThis part of the code contains the internal logic that composes the Ingress NGINX Controller, and it is split into:\n\n#### Admission Controller\n\nContains the code of the [Kubernetes Admission Controller](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/admission-controllers\/), which validates the syntax of Ingress objects before accepting them.\n\nThis code can be found in the [internal\/admission\/controller](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/admission\/controller) directory.\n\n\n#### File functions\n\nContains auxiliary code that deals with files, such as generating the SHA1 checksum of a file or creating required directories.\n\nThis code can be found in the [internal\/file](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/internal\/file) directory.\n\n#### Ingress functions\n\nContains all the logic of the Ingress-Nginx Controller, with some examples being:\n\n* Expected Golang structures that will be used in templates and other parts of the code - [internal\/ingress\/types.go](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/internal\/ingress\/types.go).\n* supported annotations and their parsing logic - [internal\/ingress\/annotations](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/ingress\/annotations).\n* reconciliation loops and logic - 
[internal\/ingress\/controller](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/ingress\/controller)\n* defaults - define the default struct - [internal\/ingress\/defaults](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/ingress\/defaults).\n* Error interface and types implementation - [internal\/ingress\/errors](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/ingress\/errors)\n* Metrics collectors for Prometheus exporting - [internal\/ingress\/metric](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/ingress\/metric).\n* Resolver - Extracts information from a controller - [internal\/ingress\/resolver](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/ingress\/resolver).\n* Ingress Object status publisher - [internal\/ingress\/status](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/ingress\/status).\n\nOther parts of the code will be documented here in the future.\n\n#### K8s functions\n\nContains helper functions for parsing Kubernetes objects.\n\nThis part of the code can be found in the [internal\/k8s](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/k8s) directory.\n\n#### Networking functions\n\nContains helper functions for networking, such as IPv4 and IPv6 parsing, SSL certificate parsing, etc.\n\nThis part of the code can be found in the [internal\/net](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/net) directory.\n\n#### NGINX functions\n\nContains helper functions to deal with NGINX, such as verifying whether it is running and reading parts of its configuration file.\n\nThis part of the code can be found in the [internal\/nginx](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/nginx) directory.\n\n#### Tasks \/ Queue\n\nContains the functions responsible for the sync queue part of the controller.\n\nThis part of the code can be found in the [internal\/task](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/internal\/task) directory.\n\n#### Other parts of internal\n\nOther parts of the internal code, like runtime and watch, might not be covered here, but they can be added in the future.\n\n## E2E Test\n\nThe e2e test code is in the [test](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/test) directory.\n\n## Other programs\n\nDescribed here: the `kubectl plugin`, `dbg`, `waitshutdown`, and the hack scripts.\n\n### kubectl plugin\n\nIt contains the kubectl plugin for inspecting your ingress-nginx deployments.\nThis part of the code can be found in the [cmd\/plugin](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/cmd\/plugin) directory.\nThe detailed function flow and available functions can be found in [kubectl-plugin](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/docs\/kubectl-plugin.md).\n\n## Deploy files\n\nThis directory contains the `yaml` deploy files used as examples or references in the docs to deploy Ingress NGINX and other components.\n\nThose files are in the [deploy](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/deploy) directory.\n\n## Helm Chart\n\nUsed to generate the published Helm chart.\n\nCode is in [charts\/ingress-nginx](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/charts\/ingress-nginx).\n\n## Documentation\/Website\n\nThe documentation used to generate the website https:\/\/kubernetes.github.io\/ingress-nginx\/\n\nThis code is available in [docs](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/docs) and its main \"language\" is `Markdown`, used by the [mkdocs](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/mkdocs.yml) file to generate static pages.\n\n## Container Images\n\nContainer images used to run ingress-nginx, or to build the final image.\n\n### Base Images\n\nContains the `Dockerfiles` and scripts used to build base images that are used in other parts of the repo. 
They are present in the [images](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/images) directory. Some examples:\n* [nginx](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/images\/nginx) - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together and it is a job in itself to maintain that and keep things up-to-date.\n* [custom-error-pages](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/images\/custom-error-pages) - Used in the custom error page examples.\n\nThere are other images inside this directory.\n\n### Ingress Controller Image\n\nThe image used to build the final ingress controller, used in deploy scripts and Helm charts.\n\nThis is NGINX with some Lua enhancements. Dynamic certificate handling, endpoint updates, canary traffic splitting, custom load balancing, etc. are done in this component. New functionality can also be added using the Lua plugin system.\n\nThe files are in the [rootfs](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/rootfs) directory, which contains:\n\n* The Dockerfile\n* [nginx config](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/rootfs\/etc\/nginx)\n\n#### Ingress NGINX Lua Scripts\n\nIngress NGINX uses Lua scripts to enable features like hot reloading, rate limiting and monitoring. Some are written using the [OpenResty](https:\/\/openresty.org\/en\/) helper.\n\nThe directory containing Lua scripts is [rootfs\/etc\/nginx\/lua](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/rootfs\/etc\/nginx\/lua).\n\n#### Nginx Go template file\n\nOne of the functions of Ingress NGINX is to turn [Ingress](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/) objects into an nginx.conf file. 
\n\nTo do so, the final step is to apply those configurations in [nginx.tmpl](https:\/\/github.com\/kubernetes\/ingress-nginx\/tree\/main\/rootfs\/etc\/nginx\/template) turning it into a final nginx.conf file.\n","site":"ingress nginx"}
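The nginx.tmpl-to-nginx.conf step described above can be illustrated with a minimal stand-in. This sketch uses plain `sed` substitution instead of the controller's real Go `text/template` engine, and the template file, placeholder names, and values here are all hypothetical:

```shell
# Hypothetical miniature of the nginx.tmpl -> nginx.conf step: the real
# controller renders rootfs/etc/nginx/template/nginx.tmpl with Go's
# text/template; here we only substitute placeholders with sed.
cat > /tmp/mini-nginx.tmpl <<'EOF'
server {
    listen __PORT__;
    server_name __HOST__;
    location / {
        proxy_pass http://__UPSTREAM__;
    }
}
EOF

HOST="example.local" PORT=80 UPSTREAM="upstream-default-backend"
sed -e "s/__HOST__/${HOST}/" \
    -e "s/__PORT__/${PORT}/" \
    -e "s/__UPSTREAM__/${UPSTREAM}/" \
    /tmp/mini-nginx.tmpl > /tmp/mini-nginx.conf
```

The real rendering is driven by the internal model built from Ingress objects; this miniature only shows the substitution idea.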
{"questions":"ingress nginx that are needed to work with the Kubernetes ingress resource here is a link to the Developing for Ingress Nginx Controller http request termination of connection reverseproxy etc etc you can skip this and move on to the sections below For the really new contributors who want to contribute to the INGRESS NGINX project but need help with understanding some basic concepts This guide contains tips on how a http https request travels from a browser or a curl command to the webserver process running inside a container in a pod in a Kubernetes cluster but enters the cluster via a ingress resource For those who are familiar with those basic networking concepts like routing of a packet with regards to a This document explains how to get started with developing for Ingress Nginx Controller","answers":" Developing for Ingress-Nginx Controller\n\nThis document explains how to get started with developing for Ingress-Nginx Controller.\n\nFor really new contributors who want to contribute to the INGRESS-NGINX project but need help understanding some basic concepts\nneeded to work with the Kubernetes ingress resource, here is a link to the [New Contributors Guide](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/NEW_CONTRIBUTOR.md).\nThat guide contains tips on how an http\/https request travels, from a browser or a curl command,\nto the webserver process running inside a container, in a pod, in a Kubernetes cluster, entering the cluster via an ingress resource.\nIf you are familiar with basic networking concepts like packet routing for an\nhttp request, connection termination, reverse proxying, etc., you can skip this and move on to the sections below\n(or read it anyway for context, and provide feedback if any).\n\n## Prerequisites\n\nInstall [Go 1.14](https:\/\/golang.org\/dl\/) or later.\n\n!!! 
note\n    The project uses [Go Modules](https:\/\/github.com\/golang\/go\/wiki\/Modules)\n\nInstall [Docker](https:\/\/docs.docker.com\/engine\/install\/) (v19.03.0 or later with experimental feature on)\n\nInstall [kubectl](https:\/\/kubernetes.io\/docs\/tasks\/tools\/) (1.24.0 or higher)\n\nInstall [Kind](https:\/\/kind.sigs.k8s.io\/)\n\n!!! important\n    The majority of make tasks run as docker containers\n\n## Quick Start\n\n\n1. Fork the repository\n2. Clone the repository to any location in your work station\n3. Add a `GO111MODULE` environment variable with `export GO111MODULE=on`\n4. Run `go mod download` to install dependencies\n\n### Local build\n\nStart a local Kubernetes cluster using [kind](https:\/\/kind.sigs.k8s.io\/), build and deploy the ingress controller\n\n```console\nmake dev-env\n```\n- If you are working on the v1.x.x version of this controller, and you want to create a cluster with kubernetes version 1.22, then please visit the [documentation for kind](https:\/\/kind.sigs.k8s.io\/docs\/user\/configuration\/#a-note-on-cli-parameters-and-configuration-files), and look for how to set a custom image for the kind node (image: kindest\/node...), in the kind config file.\n\n### Testing\n\n**Run go unit tests**\n\n```console\nmake test\n```\n\n**Run unit-tests for lua code**\n\n```console\nmake lua-test\n```\n\nLua tests are located in the directory `rootfs\/etc\/nginx\/lua\/test`\n\n!!! important\n    Test files must follow the naming convention `<mytest>_test.lua` or it will be ignored\n\n\n**Run e2e test suite**\n\n```console\nmake kind-e2e-test\n```\n\nTo limit the scope of the tests to execute, we can use the environment variable `FOCUS`\n\n```console\nFOCUS=\"no-auth-locations\" make kind-e2e-test\n```\n\n!!! 
note\n    The variable `FOCUS` defines Ginkgo [Focused Specs](https:\/\/onsi.github.io\/ginkgo\/#focused-specs)\n\nValid values are defined in the describe definition of the e2e tests like [Default Backend](https:\/\/github.com\/kubernetes\/ingress-nginx\/blob\/main\/test\/e2e\/defaultbackend\/default_backend.go#L29)\n\nThe complete list of tests can be found [here](..\/e2e-tests.md)\n\n### Custom docker image\n\nIn some cases, it can be useful to build a docker image and publish such an image to a private or custom registry location.\n\nThis can be done by setting two environment variables, `REGISTRY` and `TAG`\n\n```console\nexport TAG=\"dev\"\nexport REGISTRY=\"$USER\"\n\nmake build image\n```\n\nand then publish that version with\n\n```console\ndocker push $REGISTRY\/controller:$TAG\n```","site":"ingress nginx"}
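The custom-image workflow above amounts to composing an image reference from `REGISTRY` and `TAG` and pushing it. A minimal sketch of that composition (the registry value here is an example, and the actual `make build image` / `docker push` invocations are printed rather than executed):

```shell
# Compose the image reference the docs' push command targets:
# $REGISTRY/controller:$TAG. Nothing is built or pushed here.
export TAG="dev"
export REGISTRY="myuser"   # example value; the docs use $USER

IMAGE="${REGISTRY}/controller:${TAG}"
echo "would run: make build image && docker push ${IMAGE}"
```

Keeping the reference in one variable makes it easy to point the deploy manifests or Helm values at the same image you pushed.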
{"questions":"istio Shows how to do health checking for Istio services help ops app health check docs ops app health check help ops setup app health check aliases title Health Checking of Istio Services docs tasks traffic management app health check docs ops security health checks and mtls weight 50","answers":"---\ntitle: Health Checking of Istio Services\ndescription: Shows how to do health checking for Istio services.\nweight: 50\naliases:\n  - \/docs\/tasks\/traffic-management\/app-health-check\/\n  - \/docs\/ops\/security\/health-checks-and-mtls\/\n  - \/help\/ops\/setup\/app-health-check\n  - \/help\/ops\/app-health-check\n  - \/docs\/ops\/app-health-check\n  - \/docs\/ops\/setup\/app-health-check\nkeywords: [security,health-check]\nowner: istio\/wg-user-experience-maintainers\ntest: yes\n---\n\n[Kubernetes liveness and readiness probes](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-liveness-readiness-probes\/)\ndescribes several ways to configure liveness and readiness probes:\n\n1. [Command](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-liveness-readiness-startup-probes\/#define-a-liveness-command)\n1. [HTTP request](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-liveness-readiness-startup-probes\/#define-a-liveness-http-request)\n1. [TCP probe](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-liveness-readiness-startup-probes\/#define-a-tcp-liveness-probe)\n1. 
[gRPC probe](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-liveness-readiness-startup-probes\/#define-a-grpc-liveness-probe)\n\nThe command approach works with no changes required, but HTTP requests, TCP probes, and gRPC probes require Istio to make changes to the pod configuration.\n\nThe health check requests to the `liveness-http` service are sent by Kubelet.\nThis becomes a problem when mutual TLS is enabled, because the Kubelet does not have an Istio issued certificate.\nTherefore the health check requests will fail.\n\nTCP probe checks need special handling, because Istio redirects all incoming traffic into the sidecar, and so all TCP ports appear open. The Kubelet simply checks if some process is listening on the specified port, and so the probe will always succeed as long as the sidecar is running.\n\nIstio solves both these problems by rewriting the application `PodSpec` readiness\/liveness probe,\nso that the probe request is sent to the [sidecar agent](\/docs\/reference\/commands\/pilot-agent\/).\n\n## Liveness probe rewrite example\n\nTo demonstrate how the readiness\/liveness probe is rewritten at the application `PodSpec` level, let us use the [liveness-http-same-port sample](\/samples\/health-check\/liveness-http-same-port.yaml).\n\nFirst create and label a namespace for the example:\n\n\n$ kubectl create namespace istio-io-health-rewrite\n$ kubectl label namespace istio-io-health-rewrite istio-injection=enabled\n\n\nAnd deploy the sample application:\n\n\n$ kubectl apply -f - <<EOF\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: liveness-http\n  namespace: istio-io-health-rewrite\nspec:\n  selector:\n    matchLabels:\n      app: liveness-http\n      version: v1\n  template:\n    metadata:\n      labels:\n        app: liveness-http\n        version: v1\n    spec:\n      containers:\n      - name: liveness-http\n        image: docker.io\/istio\/health:example\n        ports:\n        - containerPort: 8001\n       
 livenessProbe:\n          httpGet:\n            path: \/foo\n            port: 8001\n          initialDelaySeconds: 5\n          periodSeconds: 5\nEOF\n\n\nOnce deployed, you can inspect the pod's application container to see the changed path:\n\n\n$ kubectl get pod \"$LIVENESS_POD\" -n istio-io-health-rewrite -o json | jq '.spec.containers[0].livenessProbe.httpGet'\n{\n  \"path\": \"\/app-health\/liveness-http\/livez\",\n  \"port\": 15020,\n  \"scheme\": \"HTTP\"\n}\n\n\nThe original `livenessProbe` path is now mapped against the new path in the sidecar container environment variable `ISTIO_KUBE_APP_PROBERS`:\n\n\n$ kubectl get pod \"$LIVENESS_POD\" -n istio-io-health-rewrite -o=jsonpath=\"{.spec.containers[1].env[?(@.name=='ISTIO_KUBE_APP_PROBERS')]}\"\n{\n  \"name\":\"ISTIO_KUBE_APP_PROBERS\",\n  \"value\":\"{\\\"\/app-health\/liveness-http\/livez\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"\/foo\\\",\\\"port\\\":8001,\\\"scheme\\\":\\\"HTTP\\\"},\\\"timeoutSeconds\\\":1}}\"\n}\n\n\nFor HTTP and gRPC requests, the sidecar agent redirects the request to the application and strips the response body, only returning the response code. For TCP probes, the sidecar agent will then do the port check while avoiding the traffic redirection.\n\nThe rewriting of problematic probes is enabled by default in all built-in Istio\n[configuration profiles](\/docs\/setup\/additional-setup\/config-profiles\/) but can be disabled as described below.\n\n## Liveness and readiness probes using the command approach\n\nIstio provides a [liveness sample](\/samples\/health-check\/liveness-command.yaml) that\nimplements this approach. 
To demonstrate it working with mutual TLS enabled,\nfirst create a namespace for the example:\n\n\n$ kubectl create ns istio-io-health\n\n\nTo configure strict mutual TLS, run:\n\n\n$ kubectl apply -f - <<EOF\napiVersion: security.istio.io\/v1\nkind: PeerAuthentication\nmetadata:\n  name: \"default\"\n  namespace: \"istio-io-health\"\nspec:\n  mtls:\n    mode: STRICT\nEOF\n\n\nNext, change directory to the root of the Istio installation and run the following command to deploy the sample service:\n\n\n$ kubectl -n istio-io-health apply -f <(istioctl kube-inject -f @samples\/health-check\/liveness-command.yaml@)\n\n\nTo confirm that the liveness probes are working, check the status of the sample pod to verify that it is running.\n\n\n$ kubectl -n istio-io-health get pod\nNAME                             READY     STATUS    RESTARTS   AGE\nliveness-6857c8775f-zdv9r        2\/2       Running   0           4m\n\n\n## Liveness and readiness probes using the HTTP, TCP, and gRPC approach {#liveness-and-readiness-probes-using-the-http-request-approach}\n\nAs stated previously, Istio uses probe rewrite to implement HTTP, TCP, and gRPC probes by default. You can disable this\nfeature either for specific pods, or globally.\n\n### Disable the probe rewrite for a pod {#disable-the-http-probe-rewrite-for-a-pod}\n\nYou can [annotate the pod](\/docs\/reference\/config\/annotations\/) with `sidecar.istio.io\/rewriteAppHTTPProbers: \"false\"`\nto disable the probe rewrite option. 
Make sure you add the annotation to the\n[pod resource](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-overview\/) because it will be ignored\nanywhere else (for example, on an enclosing deployment resource).\n\n\n\n\n\n\nkubectl apply -f - <<EOF\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: liveness-http\nspec:\n  selector:\n    matchLabels:\n      app: liveness-http\n      version: v1\n  template:\n    metadata:\n      labels:\n        app: liveness-http\n        version: v1\n      annotations:\n        sidecar.istio.io\/rewriteAppHTTPProbers: \"false\"\n    spec:\n      containers:\n      - name: liveness-http\n        image: docker.io\/istio\/health:example\n        ports:\n        - containerPort: 8001\n        livenessProbe:\n          httpGet:\n            path: \/foo\n            port: 8001\n          initialDelaySeconds: 5\n          periodSeconds: 5\nEOF\n\n\n\n\n\n\n\nkubectl apply -f - <<EOF\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: liveness-grpc\nspec:\n  selector:\n    matchLabels:\n      app: liveness-grpc\n      version: v1\n  template:\n    metadata:\n      labels:\n        app: liveness-grpc\n        version: v1\n      annotations:\n        sidecar.istio.io\/rewriteAppHTTPProbers: \"false\"\n    spec:\n      containers:\n      - name: etcd\n        image: registry.k8s.io\/etcd:3.5.1-0\n        command: [\"--listen-client-urls\", \"http:\/\/0.0.0.0:2379\", \"--advertise-client-urls\", \"http:\/\/127.0.0.1:2379\", \"--log-level\", \"debug\"]\n        ports:\n        - containerPort: 2379\n        livenessProbe:\n          grpc:\n            port: 2379\n          initialDelaySeconds: 10\n          periodSeconds: 5\nEOF\n\n\n\n\n\n\nThis approach allows you to disable the health check probe rewrite gradually on individual deployments,\nwithout reinstalling Istio.\n\n### Disable the probe rewrite globally\n\n[Install Istio](\/docs\/setup\/install\/istioctl\/) using `--set 
values.sidecarInjectorWebhook.rewriteAppHTTPProbe=false`\nto disable the probe rewrite globally. **Alternatively**, update the configuration map for the Istio sidecar injector:\n\n\n$ kubectl get cm istio-sidecar-injector -n istio-system -o yaml | sed -e 's\/\"rewriteAppHTTPProbe\": true\/\"rewriteAppHTTPProbe\": false\/' | kubectl apply -f -\n\n\n## Cleanup\n\nRemove the namespaces used for the examples:\n\n\n$ kubectl delete ns istio-io-health istio-io-health-rewrite\n","site":"istio"}
        grpc              port  2379           initialDelaySeconds  10           periodSeconds  5 EOF       This approach allows you to disable the health check probe rewrite gradually on individual deployments  without reinstalling Istio       Disable the probe rewrite globally   Install Istio   docs setup install istioctl   using    set values sidecarInjectorWebhook rewriteAppHTTPProbe false  to disable the probe rewrite globally    Alternatively    update the configuration map for the Istio sidecar injector      kubectl get cm istio sidecar injector  n istio system  o yaml   sed  e  s  rewriteAppHTTPProbe   true  rewriteAppHTTPProbe   false     kubectl apply  f        Cleanup  Remove the namespaces used for the examples      kubectl delete ns istio io health istio io health rewrite "}
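The probe-rewrite behavior described in the record above can be sketched in code. Below is a minimal Python sketch — not Istio's implementation; the function name and data shapes are hypothetical — of how a sidecar-style rewrite maps an `httpGet` probe to the agent's status port 15020 under the path `/app-health/<container>/livez`, while preserving the original probe spec in a JSON map shaped like the `ISTIO_KUBE_APP_PROBERS` environment variable shown in the record:

```python
import json

def rewrite_probe(container_name: str, probe: dict) -> tuple[dict, str]:
    """Hypothetical sketch of the probe rewrite described above:
    the kubelet is pointed at /app-health/<container>/livez on the
    sidecar agent's port 15020, and the original httpGet spec is
    preserved for the agent in an ISTIO_KUBE_APP_PROBERS-style JSON map."""
    new_path = f"/app-health/{container_name}/livez"
    # What the kubelet now probes (matches the rewritten PodSpec in the record).
    rewritten = {"httpGet": {"path": new_path, "port": 15020, "scheme": "HTTP"}}
    # What the sidecar agent uses to forward the probe to the application.
    probers = json.dumps({new_path: {"httpGet": probe["httpGet"]}})
    return rewritten, probers

rewritten, probers = rewrite_probe(
    "liveness-http", {"httpGet": {"path": "/foo", "port": 8001, "scheme": "HTTP"}}
)
print(rewritten["httpGet"]["path"])  # /app-health/liveness-http/livez
print(json.loads(probers))
```

This mirrors the mapping shown by the `kubectl get pod ... -o jsonpath` inspection in the record: the kubelet sees only the agent endpoint, while the agent keeps the original `/foo:8001` target.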
{"questions":"istio test no owner istio wg networking maintainers keywords scalability title Configuration Scoping In order to program the service mesh the Istio control plane Istiod reads a variety of configurations including core Kubernetes types like and Shows how to scope configuration in Istio for operational and performance benefits weight 60","answers":"---\ntitle: Configuration Scoping\ndescription: Shows how to scope configuration in Istio, for operational and performance benefits.\nweight: 60\nkeywords: [scalability]\nowner: istio\/wg-networking-maintainers\ntest: no\n---\n\nIn order to program the service mesh, the Istio control plane (Istiod) reads a variety of configurations, including core Kubernetes types like `Service` and `Node`,\nand Istio's own types like `Gateway`.\nThese are then sent to the data plane (see [Architecture](\/docs\/ops\/deployment\/architecture\/) for more information).\n\nBy default, the control plane will read all configuration in all namespaces.\nEach proxy instance will receive configuration for all namespaces as well.\nThis includes information about workloads that are not enrolled in the mesh.\n\nThis default ensures correct behavior out of the box, but comes with a scalability cost.\nEach configuration has a cost (in CPU and memory, primarily) to maintain and keep up to date.\nAt large scales, it is critical to limit the configuration scope to avoid excessive resource consumption.\n\n## Scoping mechanisms\n\nIstio offers a few tools to help control the scope of a configuration to meet different use cases.\nDepending on your requirements, these can be used alone or together.\n\n* `Sidecar` provides a mechanism for specific workloads to _import_ a set of configurations\n* `exportTo` provides a mechanism to _export_ a configuration to a set of workloads\n* `discoverySelectors` provides a mechanism to let Istio completely ignore a set of configurations\n\n### `Sidecar` import\n\nThe 
[`egress.hosts`](\/docs\/reference\/config\/networking\/sidecar\/#IstioEgressListener) field in `Sidecar`\nallows specifying a list of configurations to import.\nOnly configurations matching the specified criteria will be seen by sidecars impacted by the `Sidecar` resource.\n\nFor example:\n\n\napiVersion: networking.istio.io\/v1\nkind: Sidecar\nmetadata:\n  name: default\nspec:\n  egress:\n  - hosts:\n    - \".\/*\" # Import all configuration from our own namespace\n    - \"bookinfo\/*\" # Import all configuration from the bookinfo namespace\n    - \"external-services\/example.com\" # Import only 'example.com' from the external-services namespace\n\n\n### `exportTo`\n\nIstio's `VirtualService`, `DestinationRule`, and `ServiceEntry` provide a `spec.exportTo` field.\nSimilarly, `Service` can be configured with the `networking.istio.io\/exportTo` annotation.\n\nUnlike `Sidecar`, which allows a workload owner to control what dependencies it has, `exportTo` works in the opposite way, and allows the service owners to control\ntheir own service's visibility.\n\nFor example, this configuration makes the `details` `Service` only visible to its own namespace, and the `client` namespace:\n\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: details\n  annotations:\n    networking.istio.io\/exportTo: \".,client\"\nspec: ...\n\n\n### `DiscoverySelectors`\n\nWhile the previous controls operate on a workload or service owner level, [`DiscoverySelectors`](\/docs\/reference\/config\/istio.mesh.v1alpha1\/#MeshConfig) provides mesh-wide control over configuration visibility.\nDiscovery selectors allow specifying criteria for which namespaces should be visible to the control plane.\nAny namespaces not matching are ignored by the control plane entirely.\n\nThis can be configured as part of `meshConfig` during installation. 
For example:\n\n\nmeshConfig:\n  discoverySelectors:\n    - matchLabels:\n        # Allow any namespaces with `istio-discovery=enabled`\n        istio-discovery: enabled\n    - matchLabels:\n        # Allow \"kube-system\"; Kubernetes automatically adds this label to each namespace\n        kubernetes.io\/metadata.name: kube-system\n\n\n\nIstiod will always open a watch to Kubernetes for all namespaces.\nHowever, discovery selectors will ignore objects that are not selected very early in its processing, minimizing costs.\n\n\n## Frequently asked questions\n\n### How can I understand the cost of a certain configuration?\n\nIn order to get the best return-on-investment for scoping down configuration, it can be helpful to understand the cost of each object.\nUnfortunately, there is not a straightforward answer; scalability depends on a large number of factors.\nHowever, there are a few general guidelines:\n\nConfiguration *changes* are expensive in Istio, as they require recomputation.\nWhile `Endpoints` changes (generally from a Pod scaling up or down) are heavily optimized, most other configurations are fairly expensive.\nThis can be especially harmful when controllers are constantly making changes to an object (sometimes this happens accidentally!).\nSome tools to detect which configurations are changing:\n* Istiod will log each change like: `Push debounce stable 1 for config Gateway\/default\/gateway: ..., full=true`.\n  This shows a `Gateway` object in the `default` namespace changed. 
`full=false` would represent an optimized update such as `Endpoint`.\n  Note: changes to `Service` and `Endpoints` will all show as `ServiceEntry`.\n* Istiod exposes metrics `pilot_k8s_cfg_events` and `pilot_k8s_reg_events` for each change.\n* `kubectl get <resource> --watch -oyaml --show-managed-fields` can show changes to an object (or objects) to help understand what is changing, and by whom.\n\n[Headless services](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#headless-services) (besides ones declared as [HTTP](\/docs\/ops\/configuration\/traffic-management\/protocol-selection\/#explicit-protocol-selection))\nscale with the number of instances. This makes large headless services expensive, and a good candidate for exclusion with `exportTo` or equivalent.\n\n### What happens if I connect to a service outside of my scope?\n\nWhen connecting to a service that has been excluded through one of the scoping mechanisms, the data plane will not know anything about the destination,\nso it will be treated as [Unmatched traffic](\/docs\/ops\/configuration\/traffic-management\/traffic-routing\/#unmatched-traffic).\n\n### What about Gateways?\n\nWhile [Gateways](\/docs\/setup\/additional-setup\/gateway\/) will respect `exportTo` and `DiscoverySelectors`, `Sidecar` objects do not impact Gateways.\nHowever, unlike sidecars, gateways do not have configuration for the entire cluster by default.\nInstead, each configuration is explicitly attached to the gateway, which mostly avoids this problem.\n\nHowever, [currently](https:\/\/github.com\/istio\/istio\/issues\/29131) part of the data plane configuration (a \"cluster\", in Envoy terms), is always sent for\nthe entire cluster, even if it is not referenced explicitly.","site":"istio","answers_cleaned":"    title  Configuration Scoping description  Shows how to scope configuration in Istio  for operational and performance benefits  weight  60 keywords   scalability  owner  istio wg networking maintainers 
test  no      In order to program the service mesh  the Istio control plane  Istiod  reads a variety of configurations  including core Kubernetes types like  Service  and  Node   and Istio s own types like  Gateway   These are then sent to the data plane  see  Architecture   docs ops deployment architecture   for more information    By default  the control plane will read all configuration in all namespaces  Each proxy instance will receive configuration for all namespaces as well  This includes information about workloads that are not enrolled in the mesh   This default ensures correct behavior out of the box  but comes with a scalability cost  Each configuration has a cost  in CPU and memory  primarily  to maintain and keep up to date  At large scales  it is critical to limit the configuration scope to avoid excessive resource consumption      Scoping mechanisms  Istio offers a few tools to help control the scope of a configuration to meet different use cases  Depending on your requirements  these can be used alone or together      Sidecar  provides a mechanism for specific workloads to  import  a set of configurations    exportTo  provides a mechanism to  export  a configuration to a set of workloads    discoverySelectors  provides a mechanism to let Istio completely ignore a set of configurations       Sidecar  import  The   egress hosts    docs reference config networking sidecar  IstioEgressListener  field in  Sidecar  allows specifying a list of configurations to import  Only configurations matching the specified criteria will be seen by sidecars impacted by the  Sidecar  resource   For example    apiVersion  networking istio io v1 kind  Sidecar metadata    name  default spec    egress      hosts                Import all configuration from our own namespace        bookinfo      Import all configuration from the bookinfo namespace        external services example com    Import only  example com  from the external services namespace        exportTo   Istio s  
VirtualService    DestinationRule   and  ServiceEntry  provide a  spec exportTo  field  Similarly   Service  can be configured with the  networking istio io exportTo  annotation   Unlike  Sidecar  which allows a workload owner to control what dependencies it has   exportTo  works in the opposite way  and allows the service owners to control their own service s visibility   For example  this configuration makes the  details   Service  only visible to its own namespace  and the  client  namespace    apiVersion  v1 kind  Service metadata    name  details   annotations      networking istio io exportTo     client  spec             DiscoverySelectors   While the previous controls operate on a workload or service owner level    DiscoverySelectors    docs reference config istio mesh v1alpha1  MeshConfig  provides mesh wide control over configuration visibility  Discovery selectors allows specifying criteria for which namespaces should be visible to the control plane  Any namespaces not matching are ignored by the control plane entirely   This can be configured as part of  meshConfig  during installation  For example    meshConfig    discoverySelectors        matchLabels            Allow any namespaces with  istio discovery enabled          istio discovery  enabled       matchLabels            Allow  kube system   Kubernetes automatically adds this label to each namespace         kubernetes io metadata name  kube system    Istiod will always open a watch to Kubernetes for all namespaces  However  discovery selectors will ignore objects that are not selected very early in its processing  minimizing costs       Frequently asked questions      How can I understand the cost of a certain configuration   In order to get the best return on investment for scoping down configuration  it can be helpful to understand the cost of each object  Unfortunately  there is not a straightforward answer  scalability depends on a large number of factors  However  there are a few general 
guidelines   Configuration  changes  are expensive in Istio  as they require recomputation  While  Endpoints  changes  generally from a Pod scaling up or down  are heavily optimized  most other configurations are fairly expensive  This can be especially harmful when controllers are constantly making changes to an object  sometimes this happens accidentally    Some tools to detect which configurations are changing    Istiod will log each change like   Push debounce stable 1 for config Gateway default gateway       full true     This shows a  Gateway  object in the  default  namespace changed   full false  would represent and optimized update such as  Endpoint     Note  changes to  Service  and  Endpoints  will all show as  ServiceEntry     Istiod exposes metrics  pilot k8s cfg events  and  pilot k8s reg events  for each change     kubectl get  resource    watch  oyaml   show managed fields  can show changes to an object  or objects  to help understand what is changing  and by whom    Headless services  https   kubernetes io docs concepts services networking service  headless services   besides ones declared as  HTTP   docs ops configuration traffic management protocol selection  explicit protocol selection   scale with the number of instances  This makes large headless services expensive  and a good candidate for exclusion with  exportTo  or equivalent       What happens if I connect to a service outside of my scope   When connecting to a service that has been excluded through one of the scoping mechanisms  the data plane will not know anything about the destination  so it will be treated as  Unmatched traffic   docs ops configuration traffic management traffic routing  unmatched traffic        What about Gateways   While  Gateways   docs setup additional setup gateway   will respect  exportTo  and  DiscoverySelectors    Sidecar  objects do not impact Gateways  However  unlike sidecars  gateways do not have configuration for the entire cluster by default  Instead  
each configuration is explicitly attached to the gateway  which mostly avoids this problem   However   currently  https   github com istio istio issues 29131  part of the data plane configuration  a  cluster   in Envoy terms   is always sent for the entire cluster  even if it is not referenced explicitly "}
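The `discoverySelectors` semantics in the record above can be illustrated with a small worked example. This is a minimal Python sketch (a hypothetical helper, not Istio code; it handles only `matchLabels`, while real selectors also support `matchExpressions`): a namespace stays visible to the control plane if any selector's `matchLabels` are all present on it, and is otherwise ignored entirely:

```python
def namespace_selected(ns_labels: dict, discovery_selectors: list) -> bool:
    """Simplified matchLabels-only sketch of discoverySelectors:
    a namespace matches if ANY selector's key/value pairs are all
    present on the namespace's labels."""
    for selector in discovery_selectors:
        match_labels = selector.get("matchLabels", {})
        if all(ns_labels.get(k) == v for k, v in match_labels.items()):
            return True
    return False

# The two selectors from the meshConfig example above.
selectors = [
    {"matchLabels": {"istio-discovery": "enabled"}},
    {"matchLabels": {"kubernetes.io/metadata.name": "kube-system"}},
]
print(namespace_selected({"istio-discovery": "enabled", "team": "a"}, selectors))   # True
print(namespace_selected({"kubernetes.io/metadata.name": "kube-system"}, selectors)) # True
print(namespace_selected({"team": "b"}, selectors))                                  # False
```

Note the OR-across-selectors, AND-within-a-selector shape: adding another `matchLabels` entry to the list widens visibility, while adding another key inside one entry narrows what that entry matches.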
{"questions":"istio Shows common examples of using Istio security policy title Security policy examples owner istio wg security maintainers test yes weight 60 Background","answers":"---\ntitle: Security policy examples\ndescription: Shows common examples of using Istio security policy.\nweight: 60\nowner: istio\/wg-security-maintainers\ntest: yes\n---\n\n## Background\n\nThis page shows common patterns of using Istio security policies. You may find them useful in your deployment or use this\nas a quick reference to example policies.\n\nThe policies demonstrated here are just examples and require changes to adapt to your actual environment\nbefore applying.\n\nAlso read the [authentication](\/docs\/tasks\/security\/authentication\/authn-policy) and\n[authorization](\/docs\/tasks\/security\/authorization) tasks for a hands-on tutorial of using the security policy in\nmore detail.\n\n## Require different JWT issuer per host\n\nJWT validation is common on the ingress gateway and you may want to require different JWT issuers for different\nhosts. You can use the authorization policy for fine grained JWT validation in addition to the\n[request authentication](\/docs\/tasks\/security\/authentication\/authn-policy\/#end-user-authentication) policy.\n\nUse the following policy if you want to allow access to the given hosts if JWT principal matches. 
Access to other hosts\nwill always be denied.\n\n\napiVersion: security.istio.io\/v1\nkind: AuthorizationPolicy\nmetadata:\n  name: jwt-per-host\n  namespace: istio-system\nspec:\n  selector:\n    matchLabels:\n      istio: ingressgateway\n  action: ALLOW\n  rules:\n  - from:\n    - source:\n        # the JWT token must have issuer with suffix \"@example.com\"\n        requestPrincipals: [\"*@example.com\"]\n    to:\n    - operation:\n        hosts: [\"example.com\", \"*.example.com\"]\n  - from:\n    - source:\n        # the JWT token must have issuer with suffix \"@another.org\"\n        requestPrincipals: [\"*@another.org\"]\n    to:\n    - operation:\n        hosts: [\"another.org\", \"*.another.org\"]\n\n\n## Namespace isolation\n\nThe following two policies enable strict mTLS on namespace `foo`, and allow traffic from the same namespace.\n\n\napiVersion: security.istio.io\/v1\nkind: PeerAuthentication\nmetadata:\n  name: default\n  namespace: foo\nspec:\n  mtls:\n    mode: STRICT\n---\napiVersion: security.istio.io\/v1\nkind: AuthorizationPolicy\nmetadata:\n  name: foo-isolation\n  namespace: foo\nspec:\n  action: ALLOW\n  rules:\n  - from:\n    - source:\n        namespaces: [\"foo\"]\n\n\n## Namespace isolation with ingress exception\n\nThe following two policies enable strict mTLS on namespace `foo`, and allow traffic from the same namespace and also\nfrom the ingress gateway.\n\n\napiVersion: security.istio.io\/v1\nkind: PeerAuthentication\nmetadata:\n  name: default\n  namespace: foo\nspec:\n  mtls:\n    mode: STRICT\n---\napiVersion: security.istio.io\/v1\nkind: AuthorizationPolicy\nmetadata:\n  name: ns-isolation-except-ingress\n  namespace: foo\nspec:\n  action: ALLOW\n  rules:\n  - from:\n    - source:\n        namespaces: [\"foo\"]\n    - source:\n        principals: [\"cluster.local\/ns\/istio-system\/sa\/istio-ingressgateway-service-account\"]\n\n\n## Require mTLS in authorization layer (defense in depth)\n\nYou have configured 
`PeerAuthentication` to `STRICT` but want to make sure the traffic is indeed protected by mTLS with\nan extra check in the authorization layer, i.e., defense in depth.\n\nThe following policy denies the request if the principal is empty. The principal will be empty if plain text is used.\nIn other words, the policy allows requests if the principal is non-empty.\n`\"*\"` means non-empty match and using with `notPrincipals` means matching on empty principal.\n\n\napiVersion: security.istio.io\/v1\nkind: AuthorizationPolicy\nmetadata:\n  name: require-mtls\n  namespace: foo\nspec:\n  action: DENY\n  rules:\n  - from:\n    - source:\n        notPrincipals: [\"*\"]\n\n\n## Require mandatory authorization check with `DENY` policy\n\nYou can use the `DENY` policy if you want to require mandatory authorization check that must be satisfied and cannot be\nbypassed by another more permissive `ALLOW` policy. This works because the `DENY` policy takes precedence over the\n`ALLOW` policy and could deny a request early before `ALLOW` policies.\n\nUse the following policy to enforce mandatory JWT validation in addition to the [request authentication](\/docs\/tasks\/security\/authentication\/authn-policy\/#end-user-authentication) policy.\nThe policy denies the request if the request principal is empty. 
The request principal will be empty if JWT validation failed.\nIn other words, the policy allows requests if the request principal is non-empty.\n`\"*\"` means non-empty match and using with `notRequestPrincipals` means matching on empty request principal.\n\n\napiVersion: security.istio.io\/v1\nkind: AuthorizationPolicy\nmetadata:\n  name: require-jwt\n  namespace: istio-system\nspec:\n  selector:\n    matchLabels:\n      istio: ingressgateway\n  action: DENY\n  rules:\n  - from:\n    - source:\n        notRequestPrincipals: [\"*\"]\n\n\nSimilarly, use the following policy to require mandatory namespace isolation and also allow requests from the ingress gateway.\nThe policy denies the request if the namespace is not `foo` and the principal is not `cluster.local\/ns\/istio-system\/sa\/istio-ingressgateway-service-account`.\nIn other words, the policy allows the request only if the namespace is `foo` or the principal is `cluster.local\/ns\/istio-system\/sa\/istio-ingressgateway-service-account`.\n\n\napiVersion: security.istio.io\/v1\nkind: AuthorizationPolicy\nmetadata:\n  name: ns-isolation-except-ingress\n  namespace: foo\nspec:\n  action: DENY\n  rules:\n  - from:\n    - source:\n        notNamespaces: [\"foo\"]\n        notPrincipals: [\"cluster.local\/ns\/istio-system\/sa\/istio-ingressgateway-service-account\"]\n","site":"istio","answers_cleaned":"    title  Security policy examples description  Shows common examples of using Istio security policy  weight  60 owner  istio wg security maintainers test  yes         Background  This page shows common patterns of using Istio security policies  You may find them useful in your deployment or use this as a quick reference to example policies   The policies demonstrated here are just examples and require changes to adapt to your actual environment before applying   Also read the  authentication   docs tasks security authentication authn policy  and  authorization   docs tasks security authorization  tasks for a hands on 
tutorial of using the security policy in more detail      Require different JWT issuer per host  JWT validation is common on the ingress gateway and you may want to require different JWT issuers for different hosts  You can use the authorization policy for fine grained JWT validation in addition to the  request authentication   docs tasks security authentication authn policy  end user authentication  policy   Use the following policy if you want to allow access to the given hosts if JWT principal matches  Access to other hosts will always be denied    apiVersion  security istio io v1 kind  AuthorizationPolicy metadata    name  jwt per host   namespace  istio system spec    selector      matchLabels        istio  ingressgateway   action  ALLOW   rules      from        source            the JWT token must have issuer with suffix   example com          requestPrincipals      example com       to        operation          hosts    example com      example com       from        source            the JWT token must have issuer with suffix   another org          requestPrincipals      another org       to        operation          hosts     another org      another org        Namespace isolation  The following two policies enable strict mTLS on namespace  foo   and allow traffic from the same namespace    apiVersion  security istio io v1 kind  PeerAuthentication metadata    name  default   namespace  foo spec    mtls      mode  STRICT     apiVersion  security istio io v1 kind  AuthorizationPolicy metadata    name  foo isolation   namespace  foo spec    action  ALLOW   rules      from        source          namespaces    foo        Namespace isolation with ingress exception  The following two policies enable strict mTLS on namespace  foo   and allow traffic from the same namespace and also from the ingress gateway    apiVersion  security istio io v1 kind  PeerAuthentication metadata    name  default   namespace  foo spec    mtls      mode  STRICT     apiVersion  security 
istio io v1 kind  AuthorizationPolicy metadata    name  ns isolation except ingress   namespace  foo spec    action  ALLOW   rules      from        source          namespaces    foo         source          principals    cluster local ns istio system sa istio ingressgateway service account        Require mTLS in authorization layer  defense in depth   You have configured  PeerAuthentication  to  STRICT  but want to make sure the traffic is indeed protected by mTLS with an extra check in the authorization layer  i e   defense in depth   The following policy denies the request if the principal is empty  The principal will be empty if plain text is used  In other words  the policy allows requests if the principal is non empty        means non empty match and using with  notPrincipals  means matching on empty principal    apiVersion  security istio io v1 kind  AuthorizationPolicy metadata    name  require mtls   namespace  foo spec    action  DENY   rules      from        source          notPrincipals             Require mandatory authorization check with  DENY  policy  You can use the  DENY  policy if you want to require mandatory authorization check that must be satisfied and cannot be bypassed by another more permissive  ALLOW  policy  This works because the  DENY  policy takes precedence over the  ALLOW  policy and could deny a request early before  ALLOW  policies   Use the following policy to enforce mandatory JWT validation in addition to the  request authentication   docs tasks security authentication authn policy  end user authentication  policy  The policy denies the request if the request principal is empty  The request principal will be empty if JWT validation failed  In other words  the policy allows requests if the request principal is non empty        means non empty match and using with  notRequestPrincipals  means matching on empty request principal    apiVersion  security istio io v1 kind  AuthorizationPolicy metadata    name  require jwt   namespace  
istio system spec    selector      matchLabels        istio  ingressgateway   action  DENY   rules      from        source          notRequestPrincipals          Similarly  Use the following policy to require mandatory namespace isolation and also allow requests from ingress gateway  The policy denies the request if the namespace is not  foo  and the principal is not  cluster local ns istio system sa istio ingressgateway service account   In other words  the policy allows the request only if the namespace is  foo  or the principal is  cluster local ns istio system sa istio ingressgateway service account     apiVersion  security istio io v1 kind  AuthorizationPolicy metadata    name  ns isolation except ingress   namespace  foo spec    action  DENY   rules      from        source          notNamespaces    foo           notPrincipals    cluster local ns istio system sa istio ingressgateway service account   "}
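The precedence rule in the record above ("the `DENY` policy takes precedence over the `ALLOW` policy") can be made concrete with a small sketch. This is a minimal Python model — the predicate-based `matches` simplification is hypothetical, not Istio's rule-matching code — of the documented evaluation order: matching `DENY` policies reject first; if none match, the request is allowed when no `ALLOW` policies apply to the workload, or when at least one `ALLOW` policy matches:

```python
def authorize(request: dict, deny_policies: list, allow_policies: list) -> bool:
    """Sketch of the evaluation order: DENY first, then ALLOW.
    Each policy is modeled as a predicate over the request."""
    if any(matches(request) for matches in deny_policies):
        return False          # a matching DENY cannot be bypassed by any ALLOW
    if not allow_policies:
        return True           # no ALLOW policies: traffic is allowed by default
    return any(matches(request) for matches in allow_policies)

# Model of the "require-mtls" example: DENY when the peer principal is empty
# (i.e. the connection was plain text rather than mTLS).
deny_empty_principal = lambda req: not req.get("principal")

print(authorize({"principal": ""}, [deny_empty_principal], []))  # False
print(authorize({"principal": "cluster.local/ns/foo/sa/default"},
                [deny_empty_principal], []))                     # True
```

The middle branch is why adding a first `ALLOW` policy to a namespace is significant: it flips the default from "allow everything not denied" to "allow only what some `ALLOW` rule matches".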
{"questions":"istio docs ops telemetry monitoring multicluster prometheus owner istio wg policies and telemetry maintainers test no aliases help ops telemetry monitoring multicluster prometheus title Monitoring Multicluster Istio with Prometheus weight 10 Configure Prometheus to monitor multicluster Istio","answers":"---\ntitle: Monitoring Multicluster Istio with Prometheus\ndescription: Configure Prometheus to monitor multicluster Istio.\nweight: 10\naliases:\n  - \/help\/ops\/telemetry\/monitoring-multicluster-prometheus\n  - \/docs\/ops\/telemetry\/monitoring-multicluster-prometheus\nowner: istio\/wg-policies-and-telemetry-maintainers\ntest: no\n---\n\n## Overview\n\nThis guide is meant to provide operational guidance on how to configure monitoring of Istio meshes comprised of two\nor more individual Kubernetes clusters. It is not meant to establish the *only* possible path forward, but rather\nto demonstrate a workable approach to multicluster telemetry with Prometheus.\n\nOur recommendation for multicluster monitoring of Istio with Prometheus is built upon the foundation of Prometheus\n[hierarchical federation](https:\/\/prometheus.io\/docs\/prometheus\/latest\/federation\/#hierarchical-federation).\nPrometheus instances that are deployed locally to each cluster by Istio act as initial collectors that then federate up\nto a production mesh-wide Prometheus instance. That mesh-wide Prometheus can either live outside of the mesh (external), or in one\nof the clusters within the mesh.\n\n## Multicluster Istio setup\n\nFollow the [multicluster installation](\/docs\/setup\/install\/multicluster\/) section to set up your Istio clusters in one of the\nsupported [multicluster deployment models](\/docs\/ops\/deployment\/deployment-models\/#multiple-clusters). 
For the purpose of\nthis guide, any of those approaches will work, with the following caveat:\n\n**Ensure that a cluster-local Istio Prometheus instance is installed in each cluster.**\n\nIndividual Istio deployment of Prometheus in each cluster is required to form the basis of cross-cluster monitoring by\nway of federation to a production-ready instance of Prometheus that runs externally or in one of the clusters.\n\nValidate that you have an instance of Prometheus running in each cluster:\n\n\n$ kubectl -n istio-system get services prometheus\nNAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE\nprometheus   ClusterIP   10.8.4.109   <none>        9090\/TCP   20h\n\n\n## Configure Prometheus federation\n\n### External production Prometheus\n\nThere are several reasons why you may want to have a Prometheus instance running outside of your Istio deployment.\nPerhaps you want long-term monitoring disjoint from the cluster being monitored. Perhaps you want to monitor multiple\nseparate meshes in a single place. Or maybe you have other motivations. Whatever your reason is, you\u2019ll need some special\nconfigurations to make it all work.\n\n\n\n\nThis guide demonstrates connectivity to cluster-local Prometheus instances, but does not address security considerations.\nFor production use, secure access to each Prometheus endpoint with HTTPS. 
In addition, take precautions, such as using an\ninternal load-balancer instead of a public endpoint and the appropriate configuration of firewall rules.\n\n\nIstio provides a way to expose cluster services externally via [Gateways](\/docs\/reference\/config\/networking\/gateway\/).\nYou can configure an ingress gateway for the cluster-local Prometheus, providing external connectivity to the in-cluster\nPrometheus endpoint.\n\nFor each cluster, follow the appropriate instructions from the [Remotely Accessing Telemetry Addons](\/docs\/tasks\/observability\/gateways\/#option-1-secure-access-https) task.\nAlso note that you **SHOULD** establish secure (HTTPS) access.\n\nNext, configure your external Prometheus instance to access the cluster-local Prometheus instances using a configuration\nlike the following (replacing the ingress domain and cluster name):\n\n\nscrape_configs:\n- job_name: 'federate-'\n  scrape_interval: 15s\n\n  honor_labels: true\n  metrics_path: '\/federate'\n\n  params:\n    'match[]':\n      - '{job=\"kubernetes-pods\"}'\n\n  static_configs:\n    - targets:\n      - 'prometheus.'\n      labels:\n        cluster: ''\n\n\nNotes:\n\n* `CLUSTER_NAME` should be set to the same value that you used to create the cluster (set via `values.global.multiCluster.clusterName`).\n\n* No authentication to the Prometheus endpoint(s) is provided. This means that anyone can query your\ncluster-local Prometheus instances. This may not be desirable.\n\n* Without proper HTTPS configuration of the gateway, everything is being transported via plaintext. This may not be\ndesirable.\n\n### Production Prometheus on an in-mesh cluster\n\nIf you prefer to run the production Prometheus in one of the clusters, you need to establish connectivity from it to\nthe other cluster-local Prometheus instances in the mesh.\n\nThis is really just a variation of the configuration for external federation. 
In this case the configuration on the\ncluster running the production Prometheus is different from the configuration for remote cluster Prometheus scraping.\n\n\n\nConfigure your production Prometheus to access both of the *local* and *remote* Prometheus instances.\n\nFirst execute the following command:\n\n\n$ kubectl -n istio-system edit cm prometheus -o yaml\n\n\nThen add configurations for the *remote* clusters (replacing the ingress domain and cluster name for each cluster) and\nadd one configuration for the *local* cluster:\n\n\nscrape_configs:\n- job_name: 'federate-'\n  scrape_interval: 15s\n\n  honor_labels: true\n  metrics_path: '\/federate'\n\n  params:\n    'match[]':\n      - '{job=\"kubernetes-pods\"}'\n\n  static_configs:\n    - targets:\n      - 'prometheus.'\n      labels:\n        cluster: ''\n\n- job_name: 'federate-local'\n\n  honor_labels: true\n  metrics_path: '\/federate'\n\n  metric_relabel_configs:\n  - replacement: ''\n    target_label: cluster\n\n  kubernetes_sd_configs:\n  - role: pod\n    namespaces:\n      names: ['istio-system']\n  params:\n    'match[]':\n    - '{__name__=~\"istio_(.*)\"}'\n    - '{__name__=~\"pilot(.*)\"}'\n","site":"istio","answers_cleaned":"    title  Monitoring Multicluster Istio with Prometheus description  Configure Prometheus to monitor multicluster Istio  weight  10 aliases       help ops telemetry monitoring multicluster prometheus      docs ops telemetry monitoring multicluster prometheus owner  istio wg policies and telemetry maintainers test  no         Overview  This guide is meant to provide operational guidance on how to configure monitoring of Istio meshes comprised of two or more individual Kubernetes clusters  It is not meant to establish the  only  possible path forward  but rather to demonstrate a workable approach to multicluster telemetry with Prometheus   Our recommendation for multicluster monitoring of Istio with Prometheus is built upon the foundation of Prometheus  hierarchical federation  
https   prometheus io docs prometheus latest federation  hierarchical federation   Prometheus instances that are deployed locally to each cluster by Istio act as initial collectors that then federate up to a production mesh wide Prometheus instance  That mesh wide Prometheus can either live outside of the mesh  external   or in one of the clusters within the mesh      Multicluster Istio setup  Follow the  multicluster installation   docs setup install multicluster   section to set up your Istio clusters in one of the supported  multicluster deployment models   docs ops deployment deployment models  multiple clusters   For the purpose of this guide  any of those approaches will work  with the following caveat     Ensure that a cluster local Istio Prometheus instance is installed in each cluster     Individual Istio deployment of Prometheus in each cluster is required to form the basis of cross cluster monitoring by way of federation to a production ready instance of Prometheus that runs externally or in one of the clusters   Validate that you have an instance of Prometheus running in each cluster      kubectl  n istio system get services prometheus NAME         TYPE        CLUSTER IP   EXTERNAL IP   PORT S     AGE prometheus   ClusterIP   10 8 4 109    none         9090 TCP   20h      Configure Prometheus federation      External production Prometheus  There are several reasons why you may want to have a Prometheus instance running outside of your Istio deployment  Perhaps you want long term monitoring disjoint from the cluster being monitored  Perhaps you want to monitor multiple separate meshes in a single place  Or maybe you have other motivations  Whatever your reason is  you ll need some special configurations to make it all work      This guide demonstrates connectivity to cluster local Prometheus instances  but does not address security considerations  For production use  secure access to each Prometheus endpoint with HTTPS  In addition  take precautions  
such as using an internal load balancer instead of a public endpoint and the appropriate configuration of firewall rules    Istio provides a way to expose cluster services externally via  Gateways   docs reference config networking gateway    You can configure an ingress gateway for the cluster local Prometheus  providing external connectivity to the in cluster Prometheus endpoint   For each cluster  follow the appropriate instructions from the  Remotely Accessing Telemetry Addons   docs tasks observability gateways  option 1 secure access https  task  Also note that you   SHOULD   establish secure  HTTPS  access   Next  configure your external Prometheus instance to access the cluster local Prometheus instances using a configuration like the following  replacing the ingress domain and cluster name     scrape configs    job name   federate     scrape interval  15s    honor labels  true   metrics path    federate     params       match               job  kubernetes pods       static configs        targets           prometheus         labels          cluster       Notes      CLUSTER NAME  should be set to the same value that you used to create the cluster  set via  values global multiCluster clusterName       No authentication to the Prometheus endpoint s  is provided  This means that anyone can query your cluster local Prometheus instances  This may not be desirable     Without proper HTTPS configuration of the gateway  everything is being transported via plaintext  This may not be desirable       Production Prometheus on an in mesh cluster  If you prefer to run the production Prometheus in one of the clusters  you need to establish connectivity from it to the other cluster local Prometheus instances in the mesh   This is really just a variation of the configuration for external federation  In this case the configuration on the cluster running the production Prometheus is different from the configuration for remote cluster Prometheus scraping     Configure your 
production Prometheus to access both of the  local  and  remote  Prometheus instances   First execute the following command      kubectl  n istio system edit cm prometheus  o yaml   Then add configurations for the  remote  clusters  replacing the ingress domain and cluster name for each cluster  and add one configuration for the  local  cluster    scrape configs    job name   federate     scrape interval  15s    honor labels  true   metrics path    federate     params       match               job  kubernetes pods       static configs        targets           prometheus         labels          cluster        job name   federate local     honor labels  true   metrics path    federate     metric relabel configs      replacement         target label  cluster    kubernetes sd configs      role  pod     namespaces        names    istio system     params       match               name     istio                   name     pilot        "}
{"questions":"istio owner istio wg networking maintainers weight 30 linktitle TLS Configuration keywords traffic management proxy How to configure TLS settings to secure network traffic test n a title Understanding TLS Configuration","answers":"---\ntitle: Understanding TLS Configuration\nlinktitle: TLS Configuration\ndescription: How to configure TLS settings to secure network traffic.\nweight: 30\nkeywords: [traffic-management,proxy]\nowner: istio\/wg-networking-maintainers\ntest: n\/a\n---\n\nOne of Istio's most important features is the ability to lock down and secure network traffic to, from,\nand within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration.\nThis document attempts to explain the various connections involved when sending requests in Istio and how\ntheir associated TLS settings are configured.\nRefer to [TLS configuration mistakes](\/docs\/ops\/common-problems\/network-issues\/#tls-configuration-mistakes)\nfor a summary of some of the most common TLS configuration problems.\n\n## Sidecars\n\nSidecar traffic has a variety of associated connections. Let's break them down one at a time.\n\n\n\n1. **External inbound traffic**\n    This is traffic coming from an outside client that is captured by the sidecar.\n    If the client is inside the mesh, this traffic may be encrypted with Istio mutual TLS.\n    By default, the sidecar will be configured to accept both mTLS and non-mTLS traffic, known as `PERMISSIVE` mode.\n    The mode can alternatively be configured to `STRICT`, where traffic must be mTLS, or `DISABLE`, where traffic must be plaintext.\n    The mTLS mode is configured using a [`PeerAuthentication` resource](\/docs\/reference\/config\/security\/peer_authentication\/).\n\n1. **Local inbound traffic**\n    This is traffic going to your application service, from the sidecar. 
This traffic will always be forwarded as-is.\n    Note that this does not mean it's always plaintext; the sidecar may pass a TLS connection through.\n    It just means that a new TLS connection will never be originated from the sidecar.\n\n1. **Local outbound traffic**\n    This is outgoing traffic from your application service that is intercepted by the sidecar.\n    Your application may be sending plaintext or TLS traffic.\n    If [automatic protocol selection](\/docs\/ops\/configuration\/traffic-management\/protocol-selection\/#automatic-protocol-selection)\n    is enabled, Istio will automatically detect the protocol. Otherwise you should use the port name in the destination service to\n    [manually specify the protocol](\/docs\/ops\/configuration\/traffic-management\/protocol-selection\/#explicit-protocol-selection).\n\n1. **External outbound traffic**\n    This is traffic leaving the sidecar to some external destination. Traffic can be forwarded as is, or a TLS connection can\n    be initiated (mTLS or standard TLS). This is controlled using the TLS mode setting in the `trafficPolicy` of a\n    [`DestinationRule` resource](\/docs\/reference\/config\/networking\/destination-rule\/).\n    A mode setting of `DISABLE` will send plaintext, while `SIMPLE`, `MUTUAL`, and `ISTIO_MUTUAL` will originate a TLS connection.\n\nThe key takeaways are:\n\n- `PeerAuthentication` is used to configure what type of mTLS traffic the sidecar will accept.\n- `DestinationRule` is used to configure what type of TLS traffic the sidecar will send.\n- Port names, or automatic protocol selection, determines which protocol the sidecar will parse traffic as.\n\n## Auto mTLS\n\nAs described above, a `DestinationRule` controls whether outgoing traffic uses mTLS or not.\nHowever, configuring this for every workload can be tedious. 
Typically, you want Istio to always use mTLS\nwherever possible, and only send plaintext to workloads that are not part of the mesh (i.e., ones without sidecars).\n\nIstio makes this easy with a feature called \"Auto mTLS\". Auto mTLS works by doing exactly that. If TLS settings are\nnot explicitly configured in a `DestinationRule`, the sidecar will automatically determine if\n[Istio mutual TLS](\/about\/faq\/#difference-between-mutual-and-istio-mutual) should be sent.\nThis means that without any configuration, all inter-mesh traffic will be mTLS encrypted.\n\n## Gateways\n\nAny given request to a gateway will have two connections.\n\n\n\n1. The inbound request, initiated by some client such as `curl` or a web browser. This is often called the \"downstream\" connection.\n\n1. The outbound request, initiated by the gateway to some backend. This is often called the \"upstream\" connection.\n\nBoth of these connections have independent TLS configurations.\n\nNote that the configuration of ingress and egress gateways are identical.\nThe `istio-ingress-gateway` and `istio-egress-gateway` are just two specialized gateway deployments.\nThe difference is that the client of an ingress gateway is running outside of the mesh while in the case of an egress gateway,\nthe destination is outside of the mesh.\n\n### Inbound\n\nAs part of the inbound request, the gateway must decode the traffic in order to apply routing rules.\nThis is done based on the server configuration in a [`Gateway` resource](\/docs\/reference\/config\/networking\/gateway\/).\nFor example, if an inbound connection is plaintext HTTP, the port protocol is configured as `HTTP`:\n\n\napiVersion: networking.istio.io\/v1\nkind: Gateway\n...\n  servers:\n  - port:\n      number: 80\n      name: http\n      protocol: HTTP\n\n\nSimilarly, for raw TCP traffic, the protocol would be set to `TCP`.\n\nFor TLS connections, there are a few more options:\n\n1. 
What protocol is encapsulated?\n    If the connection is HTTPS, the server protocol should be configured as `HTTPS`.\n    Otherwise, for a raw TCP connection encapsulated with TLS, the protocol should be set to `TLS`.\n\n1. Is the TLS connection terminated or passed through?\n    For passthrough traffic, configure the TLS mode field to `PASSTHROUGH`:\n\n    \n    apiVersion: networking.istio.io\/v1\n    kind: Gateway\n    ...\n      servers:\n      - port:\n          number: 443\n          name: https\n          protocol: HTTPS\n        tls:\n          mode: PASSTHROUGH\n    \n\n    In this mode, Istio will route based on SNI information and forward the connection as-is to the destination.\n\n1. Should mutual TLS be used?\n    Mutual TLS can be configured through the TLS mode `MUTUAL`. When this is configured, a client certificate will be\n    requested and verified against the configured `caCertificates` or `credentialName`:\n\n    \n    apiVersion: networking.istio.io\/v1\n    kind: Gateway\n    ...\n      servers:\n      - port:\n          number: 443\n          name: https\n          protocol: HTTPS\n        tls:\n          mode: MUTUAL\n          caCertificates: ...\n    \n\n### Outbound\n\nWhile the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls\nwhat type of traffic the gateway will send. 
This is configured by the TLS settings in a `DestinationRule`,\njust like external outbound traffic from [sidecars](#sidecars), or [auto mTLS](#auto-mtls) by default.\n\nThe only difference is that you should be careful to consider the `Gateway` settings when configuring this.\nFor example, if the `Gateway` is configured with TLS `PASSTHROUGH` while the `DestinationRule` configures TLS origination,\nyou will end up with [double encryption](\/docs\/ops\/common-problems\/network-issues\/#double-tls).\nThis works, but is often not the desired behavior.\n\nA `VirtualService` bound to the gateway needs care as well to\n[ensure it is consistent](\/docs\/ops\/common-problems\/network-issues\/#gateway-mismatch)\nwith the `Gateway` definition.","site":"istio","answers_cleaned":"    title  Understanding TLS Configuration linktitle  TLS Configuration description  How to configure TLS settings to secure network traffic  weight  30 keywords   traffic management proxy  owner  istio wg networking maintainers test  n a      One of Istio s most important features is the ability to lock down and secure network traffic to  from  and within the mesh  However  configuring TLS settings can be confusing and a common source of misconfiguration  This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured  Refer to  TLS configuration mistakes   docs ops common problems network issues  tls configuration mistakes  for a summary of some the most common TLS configuration problems      Sidecars  Sidecar traffic has a variety of associated connections  Let s break them down one at a time     1    External inbound traffic       This is traffic coming from an outside client that is captured by the sidecar      If the client is inside the mesh  this traffic may be encrypted with Istio mutual TLS      By default  the sidecar will be configured to accept both mTLS and non mTLS traffic  known as  PERMISSIVE  mode 
     The mode can alternatively be configured to  STRICT   where traffic must be mTLS  or  DISABLE   where traffic must be plaintext      The mTLS mode is configured using a   PeerAuthentication  resource   docs reference config security peer authentication     1    Local inbound traffic       This is traffic going to your application service  from the sidecar  This traffic will always be forwarded as is      Note that this does not mean it s always plaintext  the sidecar may pass a TLS connection through      It just means that a new TLS connection will never be originated from the sidecar   1    Local outbound traffic       This is outgoing traffic from your application service that is intercepted by the sidecar      Your application may be sending plaintext or TLS traffic      If  automatic protocol selection   docs ops configuration traffic management protocol selection  automatic protocol selection      is enabled  Istio will automatically detect the protocol  Otherwise you should use the port name in the destination service to      manually specify the protocol   docs ops configuration traffic management protocol selection  explicit protocol selection    1    External outbound traffic       This is traffic leaving the sidecar to some external destination  Traffic can be forwarded as is  or a TLS connection can     be initiated  mTLS or standard TLS   This is controlled using the TLS mode setting in the  trafficPolicy  of a       DestinationRule  resource   docs reference config networking destination rule        A mode setting of  DISABLE  will send plaintext  while  SIMPLE    MUTUAL   and  ISTIO MUTUAL  will originate a TLS connection   The key takeaways are      PeerAuthentication  is used to configure what type of mTLS traffic the sidecar will accept     DestinationRule  is used to configure what type of TLS traffic the sidecar will send    Port names  or automatic protocol selection  determines which protocol the sidecar will parse traffic as      Auto 
mTLS  As described above  a  DestinationRule  controls whether outgoing traffic uses mTLS or not  However  configuring this for every workload can be tedious  Typically  you want Istio to always use mTLS wherever possible  and only send plaintext to workloads that are not part of the mesh  i e   ones without sidecars    Istio makes this easy with a feature called  Auto mTLS   Auto mTLS works by doing exactly that  If TLS settings are not explicitly configured in a  DestinationRule   the sidecar will automatically determine if  Istio mutual TLS   about faq  difference between mutual and istio mutual  should be sent  This means that without any configuration  all inter mesh traffic will be mTLS encrypted      Gateways  Any given request to a gateway will have two connections     1  The inbound request  initiated by some client such as  curl  or a web browser  This is often called the  downstream  connection   1  The outbound request  initiated by the gateway to some backend  This is often called the  upstream  connection   Both of these connections have independent TLS configurations   Note that the configuration of ingress and egress gateways are identical  The  istio ingress gateway  and  istio egress gateway  are just two specialized gateway deployments  The difference is that the client of an ingress gateway is running outside of the mesh while in the case of an egress gateway  the destination is outside of the mesh       Inbound  As part of the inbound request  the gateway must decode the traffic in order to apply routing rules  This is done based on the server configuration in a   Gateway  resource   docs reference config networking gateway    For example  if an inbound connection is plaintext HTTP  the port protocol is configured as  HTTP     apiVersion  networking istio io v1 kind  Gateway       servers      port        number  80       name  http       protocol  HTTP   Similarly  for raw TCP traffic  the protocol would be set to  TCP    For TLS connections  
there are a few more options   1  What protocol is encapsulated      If the connection is HTTPS  the server protocol should be configured as  HTTPS       Otherwise  for a raw TCP connection encapsulated with TLS  the protocol should be set to  TLS    1  Is the TLS connection terminated or passed through      For passthrough traffic  configure the TLS mode field to  PASSTHROUGH             apiVersion  networking istio io v1     kind  Gateway               servers          port            number  443           name  https           protocol  HTTPS         tls            mode  PASSTHROUGH           In this mode  Istio will route based on SNI information and forward the connection as is to the destination   1  Should mutual TLS be used      Mutual TLS can be configured through the TLS mode  MUTUAL   When this is configured  a client certificate will be     requested and verified against the configured  caCertificates  or  credentialName             apiVersion  networking istio io v1     kind  Gateway               servers          port            number  443           name  https           protocol  HTTPS         tls            mode  MUTUAL           caCertificates                Outbound  While the inbound side configures what type of traffic to expect and how to process it  the outbound configuration controls what type of traffic the gateway will send  This is configured by the TLS settings in a  DestinationRule   just like external outbound traffic from  sidecars   sidecars   or  auto mTLS   auto mtls  by default   The only difference is that you should be careful to consider the  Gateway  settings when configuring this  For example  if the  Gateway  is configured with TLS  PASSTHROUGH  while the  DestinationRule  configures TLS origination  you will end up with  double encryption   docs ops common problems network issues  double tls   This works  but is often not the desired behavior   A  VirtualService  bound to the gateway needs care as well to  ensure it is 
consistent   docs ops common problems network issues  gateway mismatch  with the  Gateway  definition "}
{"questions":"istio owner istio wg networking maintainers How to configure gateway network topology status Alpha keywords traffic management ingress gateway test yes weight 60 title Configuring Gateway Network Topology","answers":"---\ntitle: Configuring Gateway Network Topology\ndescription: How to configure gateway network topology.\nweight: 60\nkeywords: [traffic-management,ingress,gateway]\nowner: istio\/wg-networking-maintainers\ntest: yes\nstatus: Alpha\n---\n\n\n\n\n\n## Forwarding external client attributes (IP address, certificate info) to destination workloads\n\nMany applications require knowing the client IP address and certificate information of the originating request to behave\nproperly. Notable cases include logging and audit tools that require the client IP be populated and security tools,\nsuch as Web Application Firewalls (WAF), that need this information to apply rule sets properly. The ability to\nprovide client attributes to services has long been a staple of reverse proxies. 
To forward these client\nattributes to destination workloads, proxies use the `X-Forwarded-For` (XFF) and `X-Forwarded-Client-Cert` (XFCC) headers.\n\nToday's networks vary widely in nature, but support for these attributes is a requirement no matter what the network topology is.\nThis information should be preserved\nand forwarded whether the network uses cloud-based Load Balancers, on-premise Load Balancers, gateways that are\nexposed directly to the internet, gateways that serve many intermediate proxies, and other deployment topologies not\nspecified.\n\nWhile Istio provides an [ingress gateway](\/docs\/tasks\/traffic-management\/ingress\/ingress-control\/), given the varieties\nof architectures mentioned above, it cannot ship reasonable defaults that support the proper forwarding of\nclient attributes to the destination workloads.\nThis becomes ever more vital as Istio multicluster deployment models become more common.\n\nFor more information on `X-Forwarded-For`, see the IETF's [RFC](https:\/\/tools.ietf.org\/html\/rfc7239).\n\n## Configuring network topologies\n\nConfiguration of XFF and XFCC headers can be set globally for all gateway workloads via `MeshConfig` or per gateway using\na pod annotation. For example, to configure globally during install or upgrade when using an `IstioOperator` custom resource:\n\n\nspec:\n  meshConfig:\n    defaultConfig:\n      gatewayTopology:\n        numTrustedProxies: <VALUE>\n        forwardClientCertDetails: <ENUM_VALUE>\n\n\nYou can also configure both of these settings by adding the `proxy.istio.io\/config` annotation to the Pod spec\nof your Istio ingress gateway.\n\n\n...\n  metadata:\n    annotations:\n      \"proxy.istio.io\/config\": '{\"gatewayTopology\" : { \"numTrustedProxies\": <VALUE>, \"forwardClientCertDetails\": <ENUM_VALUE> } }'\n\n\n### Configuring X-Forwarded-For Headers\n\nApplications rely on reverse proxies to forward client attributes in a request, such as the `X-Forwarded-For` header. 
However, due to the variety of network\ntopologies that Istio can be deployed in, you must set the `numTrustedProxies` to the number of trusted proxies deployed in front\nof the Istio gateway proxy, so that the client address can be extracted correctly.\nThis controls the value populated by the ingress gateway in the `X-Envoy-External-Address` header,\nwhich can be reliably used by the upstream services to access the client's original IP address.\n\nFor example, if you have a cloud-based Load Balancer and a reverse proxy in front of your Istio gateway, set `numTrustedProxies` to `2`.\n\n\nNote that all proxies in front of the Istio gateway proxy must parse HTTP traffic and append to the `X-Forwarded-For`\nheader at each hop. If the number of entries in the `X-Forwarded-For` header is less than the number of\ntrusted hops configured, Envoy falls back to using the immediate downstream address as the trusted\nclient address. Please refer to the [Envoy documentation](https:\/\/www.envoyproxy.io\/docs\/envoy\/latest\/configuration\/http\/http_conn_man\/headers#x-forwarded-for)\nto understand how `X-Forwarded-For` headers and trusted client addresses are determined.\n\n\n#### Example using X-Forwarded-For capability with httpbin\n\n1. Run the following command to create a file named `topology.yaml` with `numTrustedProxies` set to `2` and install Istio:\n\n    \n    $ cat <<EOF > topology.yaml\n    apiVersion: install.istio.io\/v1alpha1\n    kind: IstioOperator\n    spec:\n      meshConfig:\n        defaultConfig:\n          gatewayTopology:\n            numTrustedProxies: 2\n    EOF\n    $ istioctl install -f topology.yaml\n    \n\n    \n    If you previously installed an Istio ingress gateway, restart all ingress gateway pods after step 1.\n    \n\n1. Create an `httpbin` namespace:\n\n    \n    $ kubectl create namespace httpbin\n    namespace\/httpbin created\n    \n\n1. 
Set the `istio-injection` label to `enabled` for sidecar injection:\n\n    \n    $ kubectl label --overwrite namespace httpbin istio-injection=enabled\n    namespace\/httpbin labeled\n    \n\n1. Deploy `httpbin` in the `httpbin` namespace:\n\n    \n    $ kubectl apply -n httpbin -f @samples\/httpbin\/httpbin.yaml@\n    \n\n1. Deploy a gateway associated with `httpbin`:\n\n\n\n\n\n\n$ kubectl apply -n httpbin -f @samples\/httpbin\/httpbin-gateway.yaml@\n\n\n\n\n\n\n\n$ kubectl apply -n httpbin -f @samples\/httpbin\/gateway-api\/httpbin-gateway.yaml@\n$ kubectl wait --for=condition=programmed gtw -n httpbin httpbin-gateway\n\n\n\n\n\n\n6) Set a local `GATEWAY_URL` environmental variable based on your Istio ingress gateway's IP address:\n\n\n\n\n\n\n$ export GATEWAY_URL=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')\n\n\n\n\n\n\n\n$ export GATEWAY_URL=$(kubectl get gateways.gateway.networking.k8s.io httpbin-gateway -n httpbin -ojsonpath='{.status.addresses[0].value}')\n\n\n\n\n\n\n7) Run the following `curl` command to simulate a request with proxy addresses in the `X-Forwarded-For` header:\n\n    \n    $ curl -s -H 'X-Forwarded-For: 56.5.6.7, 72.9.5.6, 98.1.2.3' \"$GATEWAY_URL\/get?show_env=true\" | jq '.headers[\"X-Forwarded-For\"][0]'\n      \"56.5.6.7, 72.9.5.6, 98.1.2.3,10.244.0.1\"\n    \n\n\nIn the above example `$GATEWAY_URL` resolved to 10.244.0.1. This will not be the case in your environment.\n\n\nThe above output shows the request headers that the `httpbin` workload received. When the Istio gateway received this\nrequest, it set the `X-Envoy-External-Address` header to the second to last (`numTrustedProxies: 2`) address in the\n`X-Forwarded-For` header from your curl command. 
Additionally, the gateway appends its own IP to the `X-Forwarded-For`\nheader before forwarding it to the httpbin workload.\n\n### Configuring X-Forwarded-Client-Cert Headers\n\nFrom [Envoy's documentation](https:\/\/www.envoyproxy.io\/docs\/envoy\/latest\/configuration\/http\/http_conn_man\/headers#x-forwarded-client-cert)\nregarding XFCC:\n\n\nx-forwarded-client-cert (XFCC) is a proxy header which indicates certificate information of part or all of the clients\nor proxies that a request has flowed through, on its way from the client to the server. A proxy may choose to\nsanitize\/append\/forward the XFCC header before proxying the request.\n\n\nTo configure how XFCC headers are handled, set `forwardClientCertDetails` in your `IstioOperator`\n\n\napiVersion: install.istio.io\/v1alpha1\nkind: IstioOperator\nspec:\n  meshConfig:\n    defaultConfig:\n      gatewayTopology:\n        forwardClientCertDetails: <ENUM_VALUE>\n\n\nwhere `ENUM_VALUE` can be of the following type.\n\n| `ENUM_VALUE`          |                                                                                                                                                                         |\n|-----------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `UNDEFINED`           | Field is not set.                                                                                                                                                       |\n| `SANITIZE`            | Do not send the XFCC header to the next hop.                                                                                                                            |\n| `FORWARD_ONLY`        | When the client connection is mTLS (Mutual TLS), forward the XFCC header in the request.                                                                                
|\n| `APPEND_FORWARD`      | When the client connection is mTLS, append the client certificate information to the request\u2019s XFCC header and forward it.                                              |\n| `SANITIZE_SET`        | When the client connection is mTLS, reset the XFCC header with the client certificate information and send it to the next hop. This is the default value for a gateway. |\n| `ALWAYS_FORWARD_ONLY` | Always forward the XFCC header in the request, regardless of whether the client connection is mTLS.                                                                     |\n\nSee the [Envoy documentation](https:\/\/www.envoyproxy.io\/docs\/envoy\/latest\/configuration\/http\/http_conn_man\/headers#x-forwarded-client-cert)\nfor examples of using this capability.\n\n## PROXY Protocol\n\nThe [PROXY protocol](https:\/\/www.haproxy.org\/download\/1.8\/doc\/proxy-protocol.txt) allows for exchanging and preservation of client attributes between TCP proxies,\nwithout relying on L7 protocols such as HTTP and the `X-Forwarded-For` and `X-Envoy-External-Address` headers. It is intended for scenarios where an external TCP load balancer needs to proxy TCP traffic through an Istio gateway to a backend TCP service and still expose client attributes such as source IP to upstream TCP service endpoints. PROXY protocol can be enabled via `EnvoyFilter`.\n\n\nPROXY protocol is only supported for TCP traffic forwarding by Envoy. 
See the [Envoy documentation](https:\/\/www.envoyproxy.io\/docs\/envoy\/latest\/intro\/arch_overview\/other_features\/ip_transparency#proxy-protocol) for more details, along with some important performance caveats.\n\nPROXY protocol should not be used for L7 traffic, or for Istio gateways behind L7 load balancers.\n\n\nIf your external TCP load balancer is configured to forward TCP traffic and use the PROXY protocol, the Istio Gateway TCP listener must also be configured to accept the PROXY protocol.\nTo enable PROXY protocol on all TCP listeners on the gateways, set `proxyProtocol` in your `IstioOperator`. For example:\n\n\napiVersion: install.istio.io\/v1alpha1\nkind: IstioOperator\nspec:\n  meshConfig:\n    defaultConfig:\n      gatewayTopology:\n        proxyProtocol: {}\n\n\nAlternatively, deploy a gateway with the following pod annotation:\n\n\nmetadata:\n  annotations:\n    \"proxy.istio.io\/config\": '{\"gatewayTopology\" : { \"proxyProtocol\": {} }}'\n\n\nThe client IP is retrieved from the PROXY protocol by the gateway and set (or appended) in the `X-Forwarded-For` and `X-Envoy-External-Address` headers. Note that the PROXY protocol is mutually exclusive with L7 headers like `X-Forwarded-For` and `X-Envoy-External-Address`. 
When PROXY protocol is used in conjunction with the `gatewayTopology` configuration, the `numTrustedProxies` and the received `X-Forwarded-For` header take precedence in determining the trusted client addresses, and PROXY protocol client information will be ignored.\n\nNote that the above example only configures the Gateway to accept incoming PROXY protocol TCP traffic. See the [Envoy documentation](https:\/\/www.envoyproxy.io\/docs\/envoy\/latest\/intro\/arch_overview\/other_features\/ip_transparency#proxy-protocol) for examples of how to configure Envoy itself to communicate with upstream services using PROXY protocol.","site":"istio"}
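To make the PROXY protocol discussion above concrete, here is a minimal, hypothetical Python sketch (not Istio or Envoy code; Envoy uses its built-in PROXY protocol listener filter) of parsing a PROXY protocol v1 header line to recover the original client address that the gateway then reflects into `X-Forwarded-For`:

```python
def parse_proxy_v1(header: bytes):
    """Parse a PROXY protocol v1 header line (illustration only)."""
    line = header.decode("ascii").rstrip("\r\n")
    parts = line.split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 header")
    if parts[1] == "UNKNOWN":
        return None  # the proxy could not determine the original addresses
    proto, src_ip, dst_ip, src_port, dst_port = parts[1:6]
    return {
        "protocol": proto,             # TCP4 or TCP6
        "client_ip": src_ip,           # original client address, preserved
        "client_port": int(src_port),  # across the TCP proxy hop
        "server_ip": dst_ip,
        "server_port": int(dst_port),
    }

# Example header as sent by an external TCP load balancer (addresses are made up):
info = parse_proxy_v1(b"PROXY TCP4 203.0.113.7 10.0.0.5 51234 443\r\n")
print(info["client_ip"])  # 203.0.113.7
```

This only illustrates why PROXY protocol works for plain TCP: the client attributes ride in a small preamble before the stream, with no dependency on L7 headers.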
{"questions":"istio weight 60 In addition to capturing application traffic Istio can also capture DNS requests to improve the performance and usability of your mesh title DNS Proxying owner istio wg networking maintainers How to configure DNS proxying test yes keywords traffic management dns virtual machine","answers":"---\ntitle: DNS Proxying\ndescription: How to configure DNS proxying.\nweight: 60\nkeywords: [traffic-management,dns,virtual-machine]\nowner: istio\/wg-networking-maintainers\ntest: yes\n---\n\nIn addition to capturing application traffic, Istio can also capture DNS requests to improve the performance and usability of your mesh.\nWhen proxying DNS, all DNS requests from an application will be redirected to the sidecar, which stores a local mapping of domain names to IP addresses. If the request can be handled by the sidecar, it will directly return a response to the application, avoiding a roundtrip to the upstream DNS server. Otherwise, the request is forwarded upstream following the standard `\/etc\/resolv.conf` DNS configuration.\n\nWhile Kubernetes provides DNS resolution for Kubernetes `Service`s out of the box, any custom `ServiceEntry`s will not be recognized. With this feature, `ServiceEntry` addresses can be resolved without requiring custom configuration of a DNS server. For Kubernetes `Service`s, the DNS response will be the same, but with reduced load on `kube-dns` and increased performance.\n\nThis functionality is also available for services running outside of Kubernetes. This means that all internal services can be resolved without clunky workarounds to expose Kubernetes DNS entries outside of the cluster.\n\n## Getting started\n\nThis feature is not currently enabled by default. 
To enable it, install Istio with the following settings:\n\n\n$ cat <<EOF | istioctl install -y -f -\napiVersion: install.istio.io\/v1alpha1\nkind: IstioOperator\nspec:\n  meshConfig:\n    defaultConfig:\n      proxyMetadata:\n        # Enable basic DNS proxying\n        ISTIO_META_DNS_CAPTURE: \"true\"\nEOF\n\n\nThis can also be enabled on a per-pod basis with the [`proxy.istio.io\/config` annotation](\/docs\/reference\/config\/annotations\/):\n\n\nkind: Deployment\nmetadata:\n  name: curl\nspec:\n...\n  template:\n    metadata:\n      annotations:\n        proxy.istio.io\/config: |\n          proxyMetadata:\n            ISTIO_META_DNS_CAPTURE: \"true\"\n...\n\n\n\nWhen deploying to a VM using [`istioctl workload entry configure`](\/docs\/setup\/install\/virtual-machine\/), basic DNS proxying will be enabled by default.\n\n\n## DNS capture in action\n\nTo try out the DNS capture, first set up a `ServiceEntry` for some external service:\n\n\n$ kubectl apply -f - <<EOF\napiVersion: networking.istio.io\/v1\nkind: ServiceEntry\nmetadata:\n  name: external-address\nspec:\n  addresses:\n  - 198.51.100.1\n  hosts:\n  - address.internal\n  ports:\n  - name: http\n    number: 80\n    protocol: HTTP\nEOF\n\n\nBring up a client application to initiate the DNS request:\n\n\n$ kubectl label namespace default istio-injection=enabled --overwrite\n$ kubectl apply -f @samples\/curl\/curl.yaml@\n\n\nWithout the DNS capture, a request to `address.internal` would likely fail to resolve. 
Once this is enabled, you should instead get a response back based on the configured `address`:\n\n\n$ kubectl exec deploy\/curl -- curl -sS -v address.internal\n*   Trying 198.51.100.1:80...\n\n\n## Address auto allocation\n\nIn the above example, you had a predefined IP address for the service to which you sent the request. However, it's common to access external services that do not have stable addresses, and instead rely on DNS. In this case, the DNS proxy will not have enough information to return a response, and will need to forward DNS requests upstream.\n\nThis is especially problematic with TCP traffic. Unlike HTTP requests, which are routed based on `Host` headers, TCP carries much less information; you can only route on the destination IP and port number. Because you don't have a stable IP for the backend, you cannot route based on that either, leaving only port number, which leads to conflicts when multiple `ServiceEntry`s for TCP services share the same port. Refer\nto [the following section](#external-tcp-services-without-vips) for more details.\n\nTo work around these issues, the DNS proxy additionally supports automatically allocating addresses for `ServiceEntry`s that do not explicitly define one. The DNS response will include a distinct and automatically assigned address for each `ServiceEntry`. The proxy is then configured to match requests to this IP address, and forward the request to the corresponding `ServiceEntry`. Istio will automatically allocate non-routable VIPs (from the Class E subnet) to such services as long as they do not use a wildcard host. The Istio agent on the sidecar will use the VIPs as responses to the DNS lookup queries from the application. 
Envoy can now clearly distinguish traffic bound for each external TCP service and forward it to the right target.\n\n\nBecause this feature modifies DNS responses, it may not be compatible with all applications.\n\n\nTo try this out, configure another `ServiceEntry`:\n\n\n$ kubectl apply -f - <<EOF\napiVersion: networking.istio.io\/v1\nkind: ServiceEntry\nmetadata:\n  name: external-auto\nspec:\n  hosts:\n  - auto.internal\n  ports:\n  - name: http\n    number: 80\n    protocol: HTTP\n  resolution: DNS\nEOF\n\n\nNow, send a request:\n\n\n$ kubectl exec deploy\/curl -- curl -sS -v auto.internal\n*   Trying 240.240.0.1:80...\n\n\nAs you can see, the request is sent to an automatically allocated address, `240.240.0.1`. These addresses will be picked from the `240.240.0.0\/16` reserved IP address range to avoid conflicting with real services.\n\nUsers also have the flexibility for more granular configuration by adding the label `networking.istio.io\/enable-autoallocate-ip=\"true\/false\"` to their `ServiceEntry`. This label configures whether a `ServiceEntry` without any `spec.addresses` set should get an IP address automatically allocated for it.\n\nTo try this out, update the existing `ServiceEntry` with the opt-out label:\n\n\n$ kubectl apply -f - <<EOF\napiVersion: networking.istio.io\/v1\nkind: ServiceEntry\nmetadata:\n  name: external-auto\n  labels:\n    networking.istio.io\/enable-autoallocate-ip: \"false\"\nspec:\n  hosts:\n  - auto.internal\n  ports:\n  - name: http\n    number: 80\n    protocol: HTTP\n  resolution: DNS\nEOF\n\n\nNow, send a request and verify that the auto allocation is no longer happening:\n\n\n$ kubectl exec deploy\/curl -- curl -sS -v auto.internal\n* Could not resolve host: auto.internal\n* shutting down connection #0\n\n\n## External TCP services without VIPs\n\nBy default, Istio has a limitation when routing external TCP traffic because it is unable to distinguish between multiple TCP services on the same port. 
This limitation is particularly apparent when using third-party databases such as AWS Relational Database Service or any database setup with geographical redundancy. Similar but different external TCP services cannot be handled separately by default. For the sidecar to distinguish traffic between two different TCP services that are outside of the mesh, the services must be on different ports or they need to have globally unique VIPs.\n\nFor example, if you have two external database services, `mysql-instance1` and `mysql-instance2`, and you create service entries for both, client sidecars will still have a single listener on `0.0.0.0:{port}` that looks up the IP address of only `mysql-instance1` from public DNS servers and forwards traffic to it. It cannot route traffic to `mysql-instance2` because it has no way of distinguishing whether traffic arriving at `0.0.0.0:{port}` is bound for `mysql-instance1` or `mysql-instance2`.\n\nThe following example shows how DNS proxying can be used to solve this problem.\nA virtual IP address will be assigned to every service entry so that client sidecars can clearly distinguish traffic bound for each external TCP service.\n\n1.  Update the Istio configuration specified in the [Getting Started](#getting-started) section to also configure `discoverySelectors` that restrict the mesh to namespaces with `istio-injection` enabled. 
This will let us use any other namespaces in the cluster to run TCP services outside of the mesh.\n\n    \n    $ cat <<EOF | istioctl install -y -f -\n    apiVersion: install.istio.io\/v1alpha1\n    kind: IstioOperator\n    spec:\n      meshConfig:\n        defaultConfig:\n          proxyMetadata:\n            # Enable basic DNS proxying\n            ISTIO_META_DNS_CAPTURE: \"true\"\n        # discoverySelectors configuration below is just used for simulating the external service TCP scenario,\n        # so that we do not have to use an external site for testing.\n        discoverySelectors:\n        - matchLabels:\n            istio-injection: enabled\n    EOF\n    \n\n1.  Deploy the first external sample TCP application:\n\n    \n    $ kubectl create ns external-1\n    $ kubectl -n external-1 apply -f samples\/tcp-echo\/tcp-echo.yaml\n    \n\n1.  Deploy the second external sample TCP application:\n\n    \n    $ kubectl create ns external-2\n    $ kubectl -n external-2 apply -f samples\/tcp-echo\/tcp-echo.yaml\n    \n\n1.  Configure `ServiceEntry` to reach external services:\n\n    \n    $ kubectl apply -f - <<EOF\n    apiVersion: networking.istio.io\/v1\n    kind: ServiceEntry\n    metadata:\n      name: external-svc-1\n    spec:\n      hosts:\n      - tcp-echo.external-1.svc.cluster.local\n      ports:\n      - name: external-svc-1\n        number: 9000\n        protocol: TCP\n      resolution: DNS\n    ---\n    apiVersion: networking.istio.io\/v1\n    kind: ServiceEntry\n    metadata:\n      name: external-svc-2\n    spec:\n      hosts:\n      - tcp-echo.external-2.svc.cluster.local\n      ports:\n      - name: external-svc-2\n        number: 9000\n        protocol: TCP\n      resolution: DNS\n    EOF\n    \n\n1.  
Verify listeners are configured separately for each service at the client side:\n\n    \n    $ istioctl pc listener deploy\/curl | grep tcp-echo | awk '{printf \"ADDRESS=%s, DESTINATION=%s %s\\n\", $1, $4, $5}'\n    ADDRESS=240.240.105.94, DESTINATION=Cluster: outbound|9000||tcp-echo.external-2.svc.cluster.local\n    ADDRESS=240.240.69.138, DESTINATION=Cluster: outbound|9000||tcp-echo.external-1.svc.cluster.local\n    \n\n## Cleanup\n\n\n$ kubectl -n external-1 delete -f @samples\/tcp-echo\/tcp-echo.yaml@\n$ kubectl -n external-2 delete -f @samples\/tcp-echo\/tcp-echo.yaml@\n$ kubectl delete -f @samples\/curl\/curl.yaml@\n$ istioctl uninstall --purge -y\n$ kubectl delete ns istio-system external-1 external-2\n$ kubectl label namespace default istio-injection-\n","site":"istio","answers_cleaned":"    title  DNS Proxying description  How to configure DNS proxying  weight  60 keywords   traffic management dns virtual machine  owner  istio wg networking maintainers test  yes      In addition to capturing application traffic  Istio can also capture DNS requests to improve the performance and usability of your mesh  When proxying DNS  all DNS requests from an application will be redirected to the sidecar  which stores a local mapping of domain names to IP addresses  If the request can be handled by the sidecar  it will directly return a response to the application  avoiding a roundtrip to the upstream DNS server  Otherwise  the request is forwarded upstream following the standard   etc resolv conf  DNS configuration   While Kubernetes provides DNS resolution for Kubernetes  Service s out of the box  any custom  ServiceEntry s will not be recognized  With this feature   ServiceEntry  addresses can be resolved without requiring custom configuration of a DNS server  For Kubernetes  Service s  the DNS response will be the same  but with reduced load on  kube dns  and increased performance   This functionality is also available for services running outside of Kubernetes  This 
means that all internal services can be resolved without clunky workarounds to expose Kubernetes DNS entries outside of the cluster      Getting started  This feature is not currently enabled by default  To enable it  install Istio with the following settings      cat   EOF   istioctl install  y  f   apiVersion  install istio io v1alpha1 kind  IstioOperator spec    meshConfig      defaultConfig        proxyMetadata            Enable basic DNS proxying         ISTIO META DNS CAPTURE   true  EOF   This can also be enabled on a per pod basis with the   proxy istio io config  annotation   docs reference config annotations      kind  Deployment metadata    name  curl spec        template      metadata        annotations          proxy istio io config              proxyMetadata              ISTIO META DNS CAPTURE   true         When deploying to a VM using   istioctl workload entry configure    docs setup install virtual machine    basic DNS proxying will be enabled by default       DNS capture In action  To try out the DNS capture  first setup a  ServiceEntry  for some external service      kubectl apply  f     EOF apiVersion  networking istio io v1 kind  ServiceEntry metadata    name  external address spec    addresses      198 51 100 1   hosts      address internal   ports      name  http     number  80     protocol  HTTP EOF   Bring up a client application to initiate the DNS request      kubectl label namespace default istio injection enabled   overwrite   kubectl apply  f  samples curl curl yaml    Without the DNS capture  a request to  address internal  would likely fail to resolve  Once this is enabled  you should instead get a response back based on the configured  address       kubectl exec deploy curl    curl  sS  v address internal     Trying 198 51 100 1 80         Address auto allocation  In the above example  you had a predefined IP address for the service to which you sent the request  However  it s common to access external services that do not have 
stable addresses  and instead rely on DNS  In this case  the DNS proxy will not have enough information to return a response  and will need to forward DNS requests upstream   This is especially problematic with TCP traffic  Unlike HTTP requests  which are routed based on  Host  headers  TCP carries much less information  you can only route on the destination IP and port number  Because you don t have a stable IP for the backend  you cannot route based on that either  leaving only port number  which leads to conflicts when multiple  ServiceEntry s for TCP services share the same port  Refer to  the following section   external tcp services without vips  for more details   To work around these issues  the DNS proxy additionally supports automatically allocating addresses for  ServiceEntry s that do not explicitly define one  The DNS response will include a distinct and automatically assigned address for each  ServiceEntry   The proxy is then configured to match requests to this IP address  and forward the request to the corresponding  ServiceEntry   Istio will automatically allocate non routable VIPs  from the Class E subnet  to such services as long as they do not use a wildcard host  The Istio agent on the sidecar will use the VIPs as responses to the DNS lookup queries from the application  Envoy can now clearly distinguish traffic bound for each external TCP service and forward it to the right target    Because this feature modifies DNS responses  it may not be compatible with all applications    To try this out  configure another  ServiceEntry       kubectl apply  f     EOF apiVersion  networking istio io v1 kind  ServiceEntry metadata    name  external auto spec    hosts      auto internal   ports      name  http     number  80     protocol  HTTP   resolution  DNS EOF   Now  send a request      kubectl exec deploy curl    curl  sS  v auto internal     Trying 240 240 0 1 80      As you can see  the request is sent to an automatically allocated address   240 240 
These addresses will be picked from the `240.240.0.0/16` reserved IP address range to avoid conflicting with real services.

Users also have the flexibility for more granular configuration by adding the label `networking.istio.io/enable-autoallocate-ip="true"/"false"` to their `ServiceEntry`. This label configures whether a `ServiceEntry` without any `spec.addresses` set should get an IP address automatically allocated for it.

To try this out, update the existing `ServiceEntry` with the opt-out label:

```bash
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-auto
  labels:
    networking.istio.io/enable-autoallocate-ip: "false"
spec:
  hosts:
  - auto.internal
  ports:
  - name: http
    number: 80
    protocol: HTTP
  resolution: DNS
EOF
```

Now, send a request and verify that the auto allocation is no longer happening:

```bash
$ kubectl exec deploy/curl -- curl -sS -v auto.internal
* Could not resolve host: auto.internal
* shutting down connection #0
```

## External TCP services without VIPs

By default, Istio has a limitation when routing external TCP traffic because it is unable to distinguish between multiple TCP services on the same port. This limitation is particularly apparent when using third-party databases such as AWS Relational Database Service or any database setup with geographical redundancy. Similar, but different, external TCP services cannot be handled separately by default. For the sidecar to distinguish traffic between two different TCP services that are outside of the mesh, the services must be on different ports or they need to have globally unique VIPs.

For example, if you have two external database services, `mysql-instance1` and `mysql-instance2`, and you create service entries for both, client sidecars will still have a single listener on `0.0.0.0` on the shared port that looks up the IP address of only `mysql-instance1` from public DNS servers and forwards traffic to it. It cannot route traffic to `mysql-instance2` because it has no way of distinguishing whether traffic arriving at `0.0.0.0` on that port is bound for `mysql-instance1` or `mysql-instance2`.

The following example shows how DNS proxying can be used to solve this problem. A virtual IP address will be assigned to every service entry so that client sidecars can clearly distinguish traffic bound for each external TCP service.

1. Update the Istio configuration specified in the [Getting Started](#getting-started) section to also configure `discoverySelectors` that restrict the mesh to namespaces with `istio-injection` enabled. This will let us use any other namespaces in the cluster to run TCP services outside of the mesh.

    ```bash
    $ cat <<EOF | istioctl install -y -f -
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      meshConfig:
        defaultConfig:
          proxyMetadata:
            # Enable basic DNS proxying
            ISTIO_META_DNS_CAPTURE: "true"
        # discoverySelectors configuration below is just used for simulating the external service TCP scenario
        # so that we do not have to use an external site for testing
        discoverySelectors:
        - matchLabels:
            istio-injection: enabled
    EOF
    ```

1. Deploy the first external sample TCP application:

    ```bash
    $ kubectl create ns external-1
    $ kubectl -n external-1 apply -f samples/tcp-echo/tcp-echo.yaml
    ```

1. Deploy the second external sample TCP application:

    ```bash
    $ kubectl create ns external-2
    $ kubectl -n external-2 apply -f samples/tcp-echo/tcp-echo.yaml
    ```

1. Configure `ServiceEntry` to reach the external services:

    ```bash
    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1
    kind: ServiceEntry
    metadata:
      name: external-svc-1
    spec:
      hosts:
      - tcp-echo.external-1.svc.cluster.local
      ports:
      - name: external-svc-1
        number: 9000
        protocol: TCP
      resolution: DNS
    ---
    apiVersion: networking.istio.io/v1
    kind: ServiceEntry
    metadata:
      name: external-svc-2
    spec:
      hosts:
      - tcp-echo.external-2.svc.cluster.local
      ports:
      - name: external-svc-2
        number: 9000
        protocol: TCP
      resolution: DNS
    EOF
    ```

1. Verify that listeners are configured separately for each service at the client side:

    ```bash
    $ istioctl pc listener deploy/curl | grep tcp-echo | awk '{printf "ADDRESS=%s, DESTINATION=%s %s\n", $1, $4, $5}'
    ADDRESS=240.240.105.94, DESTINATION=Cluster: outbound|9000||tcp-echo.external-2.svc.cluster.local
    ADDRESS=240.240.69.138, DESTINATION=Cluster: outbound|9000||tcp-echo.external-1.svc.cluster.local
    ```

## Cleanup

```bash
$ kubectl -n external-1 delete -f samples/tcp-echo/tcp-echo.yaml
$ kubectl -n external-2 delete -f samples/tcp-echo/tcp-echo.yaml
$ kubectl delete -f samples/curl/curl.yaml
$ istioctl uninstall --purge -y
$ kubectl delete ns istio-system external-1 external-2
$ kubectl label namespace default istio-injection-
```
"}
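The record above notes that, to be routed separately, external TCP services need either distinct ports or globally unique VIPs. Besides automatic allocation, a VIP can also be pinned manually through the `ServiceEntry` `addresses` field. A minimal sketch, reusing the `tcp-echo.external-1.svc.cluster.local` host from the example; the entry name and the VIP `240.240.0.1` are illustrative choices, not values from the original walkthrough:

```yaml
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-svc-1-manual-vip   # hypothetical name
spec:
  hosts:
  - tcp-echo.external-1.svc.cluster.local
  addresses:
  - 240.240.0.1   # manually chosen VIP; must not collide with any real service
  ports:
  - name: external-svc-1
    number: 9000
    protocol: TCP
  resolution: DNS
```

Manually assigned addresses must be kept unique across the mesh by the operator, which is exactly the bookkeeping the auto-allocation feature is meant to avoid.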
{"questions":"istio weight 70 test no owner istio wg networking maintainers Within a multicluster mesh traffic rules specific to the cluster topology may be desirable This document describes How to configure how traffic is distributed among clusters in the mesh keywords traffic management multicluster title Multi cluster Traffic Management","answers":"---\ntitle: Multi-cluster Traffic Management\ndescription: How to configure how traffic is distributed among clusters in the mesh.\nweight: 70\nkeywords: [traffic-management,multicluster]\nowner: istio\/wg-networking-maintainers\ntest: no\n---\n\nWithin a multicluster mesh, traffic rules specific to the cluster topology may be desirable. This document describes\na few ways to manage traffic in a multicluster mesh. Before reading this guide:\n\n1. Read [Deployment Models](\/docs\/ops\/deployment\/deployment-models\/#multiple-clusters)\n1. Make sure your deployed services follow the concept of namespace sameness.\n\n## Keeping traffic in-cluster\n\nIn some cases the default cross-cluster load balancing behavior is not desirable. 
To keep traffic \"cluster-local\" (i.e.\ntraffic sent from `cluster-a` will only reach destinations in `cluster-a`), mark hostnames or wildcards as `clusterLocal`\nusing [`MeshConfig.serviceSettings`](\/docs\/reference\/config\/istio.mesh.v1alpha1\/#MeshConfig-ServiceSettings-Settings).\n\nFor example, you can enforce cluster-local traffic for an individual service, all services in a particular namespace, or globally for all services in the mesh, as follows:\n\n\n\n\n\n\nserviceSettings:\n- settings:\n    clusterLocal: true\n  hosts:\n  - \"mysvc.myns.svc.cluster.local\"\n\n\n\n\n\n\n\nserviceSettings:\n- settings:\n    clusterLocal: true\n  hosts:\n  - \"*.myns.svc.cluster.local\"\n\n\n\n\n\n\n\nserviceSettings:\n- settings:\n    clusterLocal: true\n  hosts:\n  - \"*\"\n\n\n\n\n\n\n## Partitioning Services {#partitioning-services}\n\n[`DestinationRule.subsets`](\/docs\/reference\/config\/networking\/destination-rule\/#Subset) allows partitioning a service\nby selecting labels. These labels can be the labels from Kubernetes metadata, or from [built-in labels](\/docs\/reference\/config\/labels\/).\nOne of these built-in labels, `topology.istio.io\/cluster`, in the subset selector for a `DestinationRule` allows\ncreating per-cluster subsets.\n\n\napiVersion: networking.istio.io\/v1\nkind: DestinationRule\nmetadata:\n  name: mysvc-per-cluster-dr\nspec:\n  host: mysvc.myns.svc.cluster.local\n  subsets:\n  - name: cluster-1\n    labels:\n      topology.istio.io\/cluster: cluster-1\n  - name: cluster-2\n    labels:\n      topology.istio.io\/cluster: cluster-2\n\n\nUsing these subsets you can create various routing rules based on the cluster such as [mirroring](\/docs\/tasks\/traffic-management\/mirroring\/)\nor [shifting](\/docs\/tasks\/traffic-management\/traffic-shifting\/).\n\nThis provides another option to create cluster-local traffic rules by restricting the destination subset in a `VirtualService`:\n\n\napiVersion: networking.istio.io\/v1\nkind: 
VirtualService\nmetadata:\n  name: mysvc-cluster-local-vs\nspec:\n  hosts:\n  - mysvc.myns.svc.cluster.local\n  http:\n  - name: \"cluster-1-local\"\n    match:\n    - sourceLabels:\n        topology.istio.io\/cluster: \"cluster-1\"\n    route:\n    - destination:\n        host: mysvc.myns.svc.cluster.local\n        subset: cluster-1\n  - name: \"cluster-2-local\"\n    match:\n    - sourceLabels:\n        topology.istio.io\/cluster: \"cluster-2\"\n    route:\n    - destination:\n        host: mysvc.myns.svc.cluster.local\n        subset: cluster-2\n\n\nUsing subset-based routing this way to control cluster-local traffic, as opposed to\n[`MeshConfig.serviceSettings`](\/docs\/reference\/config\/istio.mesh.v1alpha1\/#MeshConfig-ServiceSettings-Settings),\nhas the downside of mixing service-level policy with topology-level policy.\nFor example, a rule that sends 10% of traffic to `v2` of a service will need twice the\nnumber of subsets (e.g., `cluster-1-v2`, `cluster-2-v2`).\nThis approach is best limited to situations where more granular control of cluster-based routing is needed.","site":"istio"}
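The Multi-cluster Traffic Management record above mentions that per-cluster subsets enable rules such as traffic shifting. A hedged sketch of what such a `VirtualService` could look like, reusing the `cluster-1`/`cluster-2` subsets from the `mysvc-per-cluster-dr` example; the resource name and the 90/10 split are illustrative, not from the original document:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: mysvc-cluster-shift-vs   # hypothetical name
spec:
  hosts:
  - mysvc.myns.svc.cluster.local
  http:
  - route:
    - destination:
        host: mysvc.myns.svc.cluster.local
        subset: cluster-1   # keep most traffic in cluster-1
      weight: 90
    - destination:
        host: mysvc.myns.svc.cluster.local
        subset: cluster-2   # spill the remainder to cluster-2
      weight: 10
```

As the document warns, combining this with version-based subsets multiplies the number of subsets needed, so weight-based cross-cluster shifting is best reserved for cases that genuinely need topology-aware routing.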
{"questions":"istio linktitle Traffic Routing owner istio wg networking maintainers weight 30 How Istio routes traffic through the mesh keywords traffic management proxy title Understanding Traffic Routing test n a","answers":"---\ntitle: Understanding Traffic Routing\nlinktitle: Traffic Routing\ndescription: How Istio routes traffic through the mesh.\nweight: 30\nkeywords: [traffic-management,proxy]\nowner: istio\/wg-networking-maintainers\ntest: n\/a\n---\n\nOne of the goals of Istio is to act as a \"transparent proxy\" which can be dropped into an existing cluster, allowing traffic to continue to flow as before.\nHowever, there are powerful ways Istio can manage traffic differently than a typical Kubernetes cluster because of the additional features such as request load balancing.\nTo understand what is happening in your mesh, it is important to understand how Istio routes traffic.\n\n\nThis document describes low level implementation details. For a higher level overview, check out the traffic management [Concepts](\/docs\/concepts\/traffic-management\/) or [Tasks](\/docs\/tasks\/traffic-management\/).\n\n\n## Frontends and backends\n\nIn traffic routing in Istio, there are two primary phases:\n\n* The \"frontend\" refers to how we match the type of traffic we are handling.\n  This is necessary to identify which backend to route traffic to, and which policies to apply.\n  For example, we may read the `Host` header of `http.ns.svc.cluster.local` and identify the request is intended for the `http` Service.\n  More information on how this matching works can be found below.\n* The \"backend\" refers to where we send traffic once we have matched it.\n  Using the example above, after identifying the request as targeting the `http` Service, we would send it to an endpoint in that Service.\n  However, this selection is not always so simple; Istio allows customization of this logic, through `VirtualService` routing rules.\n\nStandard Kubernetes networking has these same 
concepts, too, but they are much simpler and generally hidden.\nWhen a `Service` is created, there is typically an associated frontend -- the automatically created DNS name (such as `http.ns.svc.cluster.local`),\nand an automatically created IP address to represent the service (the `ClusterIP`).\nSimilarly, a backend is also created - the `Endpoints` or `EndpointSlice` - which represents all of the pods selected by the service.\n\n## Protocols\n\nUnlike Kubernetes, Istio has the ability to process application level protocols such as HTTP and TLS.\nThis allows for different types of [frontend](#frontends-and-backends) matching than is available in Kubernetes.\n\nIn general, there are three classes of protocols Istio understands:\n\n* HTTP, which includes HTTP\/1.1, HTTP\/2, and gRPC. Note that this does not include TLS encrypted traffic (HTTPS).\n* TLS, which includes HTTPS.\n* Raw TCP bytes.\n\nThe [protocol selection](\/docs\/ops\/configuration\/traffic-management\/protocol-selection\/) document describes how Istio decides which protocol is used.\n\nThe use of \"TCP\" can be confusing, as in other contexts it is used to distinguish between other L4 protocols, such as UDP.\nWhen referring to the TCP protocol in Istio, this typically means we are treating it as a raw stream of bytes,\nand not parsing application level protocols such as TLS or HTTP.\n\n## Traffic Routing\n\nWhen an Envoy proxy receives a request, it must decide where, if anywhere, to forward it to.\nBy default, this will be to the original service that was requested, unless [customized](\/docs\/tasks\/traffic-management\/traffic-shifting\/).\nHow this works depends on the protocol used.\n\n### TCP\n\nWhen processing TCP traffic, Istio has a very small amount of useful information to route the connection - only the destination IP and Port.\nThese attributes are used to determine the intended Service; the proxy is configured to listen on each service IP (`<Kubernetes ClusterIP>:<Port>`) pair and 
forward traffic to the upstream service.\n\nFor customizations, a TCP `VirtualService` can be configured, which allows [matching on specific IPs and ports](\/docs\/reference\/config\/networking\/virtual-service\/#L4MatchAttributes) and routing it to different upstream services than requested.\n\n### TLS\n\nWhen processing TLS traffic, Istio has slightly more information available than raw TCP: we can inspect the [SNI](https:\/\/en.wikipedia.org\/wiki\/Server_Name_Indication) field presented during the TLS handshake.\n\nFor standard Services, the same IP:Port matching is used as for raw TCP.\nHowever, for services that do not have a Service IP defined, such as [ExternalName services](#externalname-services), the SNI field will be used for routing.\n\nAdditionally, custom routing can be configured with a TLS `VirtualService` to [match on SNI](\/docs\/reference\/config\/networking\/virtual-service\/#TLSMatchAttributes) and route requests to custom destinations.\n\n### HTTP\n\nHTTP allows much richer routing than TCP and TLS. With HTTP, you can route individual HTTP requests, rather than just connections.\nIn addition, a [number of rich attributes](\/docs\/reference\/config\/networking\/virtual-service\/#HTTPMatchRequest) are available, such as host, path, headers, query parameters, etc.\n\nWhile TCP and TLS traffic generally behave the same with or without Istio (assuming no configuration has been applied to customize the routing), HTTP has significant differences.\n\n* Istio will load balance individual requests. In general, this is highly desirable, especially in scenarios with long-lived connections such as gRPC and HTTP\/2, where connection level load balancing is ineffective.\n* Requests are routed based on the port and *`Host` header*, rather than port and IP. This means the destination IP address is effectively ignored. 
For example, `curl 8.8.8.8 -H \"Host: productpage.default.svc.cluster.local\"`, would be routed to the `productpage` Service.\n\n## Unmatched traffic\n\nIf traffic cannot be matched using one of the methods described above, it is treated as [passthrough traffic](\/docs\/tasks\/traffic-management\/egress\/egress-control\/#envoy-passthrough-to-external-services).\nBy default, these requests will be forwarded as-is, which ensures that traffic to services that Istio is not aware of (such as external services that do not have `ServiceEntry`s created) continues to function.\nNote that when these requests are forwarded, mutual TLS will not be used and telemetry collection is limited.\n\n## Service types\n\nAlong with standard `ClusterIP` Services, Istio supports the full range of Kubernetes Services, with some caveats.\n\n### `LoadBalancer` and `NodePort` Services\n\nThese Services are supersets of `ClusterIP` Services, and are mostly concerned with allowing access from external clients.\nThese service types are supported and behave exactly like standard `ClusterIP` Services.\n\n### Headless Services\n\nA [headless Service](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#headless-services) is a Service that does not have a `ClusterIP` assigned.\nInstead, the DNS response will contain the IP addresses of each endpoint (i.e. 
the Pod IP) that is a part of the Service.\n\nIn general, Istio does not configure listeners for each Pod IP, as it works at the Service level.\nHowever, to support headless services, listeners are set up for each IP:Port pair in the headless service.\nAn exception to this is for protocols declared as HTTP, which will match traffic by the `Host` header.\n\n\nWithout Istio, the `ports` field of a headless service is not strictly required because requests go directly to pod IPs, which can accept traffic on all ports.\nHowever, with Istio the port must be declared in the Service, or it will [not be matched](\/docs\/ops\/configuration\/traffic-management\/traffic-routing\/#unmatched-traffic).\n\n\n### ExternalName Services\n\nAn [ExternalName Service](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#externalname) is essentially just a DNS alias.\n\nTo make things more concrete, consider the following example:\n\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: alias\nspec:\n  type: ExternalName\n  externalName: concrete.example.com\n\n\nBecause there is no `ClusterIP` nor pod IPs to match on, for TCP traffic there are no changes at all to traffic matching in Istio.\nWhen Istio receives the request, it will see the IP for `concrete.example.com`.\nIf this is a service Istio knows about, it will be routed as described [above](#tcp).\nIf not, it will be handled as [unmatched traffic](#unmatched-traffic).\n\nFor HTTP and TLS, which match on hostname, things are a bit different.\nIf the target service (`concrete.example.com`) is a service Istio knows about, then the alias hostname (`alias.default.svc.cluster.local`) will be added\nas an _additional_ match to the [TLS](#tls) or [HTTP](#http) matching.\nIf not, there will be no changes, so it will be handled as [unmatched traffic](#unmatched-traffic).\n\nAn `ExternalName` service can never be a [backend](#frontends-and-backends) on its own.\nInstead, it is only ever used as additional 
[frontend](#frontends-and-backends) matches to existing Services.\nIf one is explicitly used as a backend, such as in a `VirtualService` destination, the same aliasing applies.\nThat is, if `alias.default.svc.cluster.local` is set as the destination, then requests will go to the `concrete.example.com`.\nIf that hostname is not known to Istio, the requests will fail; in this case, a `ServiceEntry` for `concrete.example.com` would make this configuration work.\n\n### ServiceEntry\n\nIn addition to Kubernetes Services, [Service Entries](\/docs\/reference\/config\/networking\/service-entry\/#ServiceEntry) can be created to extend the set of services known to Istio.\nThis can be useful to ensure that traffic to external services, such as `example.com`, get the functionality of Istio.\n\nA ServiceEntry with `addresses` set will perform routing just like a `ClusterIP` Service.\n\nHowever, for Service Entries without any `addresses`, all IPs on the port will be matched.\nThis may prevent [unmatched traffic](#unmatched-traffic) on the same port from being forwarded correctly.\nAs such, it is best to avoid these where possible, or use dedicated ports when needed.\nHTTP and TLS do not share this constraint, as routing is done based on the hostname\/SNI.\n\n\nThe `addresses` field and `endpoints` field are often confused.\n`addresses` refers to IPs that will be matched against, while endpoints refer to the set of IPs we will send traffic to.\n\nFor example, the Service entry below would match traffic for `1.1.1.1`, and send the request to `2.2.2.2` and `3.3.3.3` following the configured load balancing policy:\n\n\naddresses: [1.1.1.1]\nresolution: STATIC\nendpoints:\n- address: 2.2.2.2\n- address: 3.3.3.3\n\n\n","site":"istio"}
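The Protocols section of the Understanding Traffic Routing record above explains that Istio classifies traffic as HTTP, TLS, or raw TCP, with the linked protocol selection document describing how that decision is made. One way to make the decision explicit is to declare the protocol on the Kubernetes Service itself, either through a port-name prefix or the `appProtocol` field. A minimal sketch; the service and port names are illustrative, not from the original document:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysvc   # hypothetical service
spec:
  selector:
    app: mysvc
  ports:
  - name: http-web    # "http-" name prefix declares this port as HTTP
    port: 80
  - name: db
    port: 3306
    appProtocol: tcp  # appProtocol can also declare the protocol explicitly
```

Declaring the protocol this way means the proxy does not have to sniff the traffic, which matters for the routing behavior described above: HTTP ports are matched on the `Host` header, while TCP ports fall back to IP:Port matching.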
{"questions":"istio title Managing In Mesh Certificates linktitle Managing In Mesh Certificates weight 30 keywords traffic management proxy owner istio wg networking maintainers istio wg environments maintainers How to configure certificates within your mesh test n a","answers":"---\ntitle: Managing In-Mesh Certificates\nlinktitle: Managing In-Mesh Certificates\ndescription: How to configure certificates within your mesh.\nweight: 30\nkeywords: [traffic-management,proxy]\nowner: istio\/wg-networking-maintainers,istio\/wg-environments-maintainers\ntest: n\/a\n---\n\n\n\nMany users need to manage the types of certificates used within their environment. For example,\nsome users require the use of Elliptic Curve Cryptography (ECC) while others may need to use a\nstronger bit length for RSA certificates. Configuring certificates within your environment can be\na daunting task for most users.\n\nThis document is only intended to be used for in-mesh communication. For managing certificates at\nyour Gateway, see the [Secure Gateways](\/docs\/tasks\/traffic-management\/ingress\/secure-ingress\/) document.\nFor managing the CA used by istiod to generate workload certificates, see\nthe [Plugin CA Certificates](\/docs\/tasks\/security\/cert-management\/plugin-ca-cert\/) document.\n\n## istiod\n\nWhen Istio is installed without a root CA certificate, istiod will generate a self-signed\nCA certificate using RSA 2048.\n\nTo change the self-signed CA certificate's bit length, you will need to modify either the IstioOperator manifest provided to\n`istioctl` or the values file used during the Helm installation of the [istio-discovery](\/manifests\/charts\/istio-control\/istio-discovery) chart.\n\n\nWhile there are many environment variables that can be changed for\n[pilot-discovery](\/docs\/reference\/commands\/pilot-discovery\/), this document will only\noutline some of them.\n\n\n\n\n\n\n\napiVersion: install.istio.io\/v1alpha1\nkind: IstioOperator\nspec:\n  values:\n    
pilot:\n      env:\n        CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096\n\n\n\n\n\n\n\npilot:\n  env:\n    CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096\n\n\n\n\n\n\n## Sidecars\n\nSince sidecars manage their own certificates for in-mesh communication, the sidecars\nare responsible for managing their private keys and generated Certificate Signing Requests (CSRs). The sidecar\ninjector needs to be modified to inject the environment variables to be used for\nthis purpose.\n\n\nWhile there are many environment variables that can be changed for\n[pilot-agent](\/docs\/reference\/commands\/pilot-agent\/), this document will only\noutline some of them.\n\n\n\n\n\n\n\napiVersion: install.istio.io\/v1alpha1\nkind: IstioOperator\nspec:\n  meshConfig:\n    defaultConfig:\n      proxyMetadata:\n        CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096\n\n\n\n\n\n\n\nmeshConfig:\n  defaultConfig:\n    proxyMetadata:\n      CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096\n\n\n\n\n\n\n\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: curl\nspec:\n  ...\n  template:\n    metadata:\n      ...\n      annotations:\n        ...\n        proxy.istio.io\/config: |\n          CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096\n    spec:\n      ...\n\n\n\n\n\n\n### Signature Algorithm\n\nBy default, the sidecars will create RSA certificates. 
If you want to change it to\nECC, you need to set `ECC_SIGNATURE_ALGORITHM` to `ECDSA`.\n\n\n\n\n\n\napiVersion: install.istio.io\/v1alpha1\nkind: IstioOperator\nspec:\n  meshConfig:\n    defaultConfig:\n      proxyMetadata:\n        ECC_SIGNATURE_ALGORITHM: \"ECDSA\"\n\n\n\n\n\n\n\nmeshConfig:\n  defaultConfig:\n    proxyMetadata:\n      ECC_SIGNATURE_ALGORITHM: \"ECDSA\"\n\n\n\n\n\n\nOnly P256 and P384 are supported via `ECC_CURVE`.\n\nIf you prefer to retain RSA signature algorithms and want to modify the RSA key size,\nyou can change the value of `WORKLOAD_RSA_KEY_SIZE`.","site":"istio","answers_cleaned":"    title  Managing In Mesh Certificates linktitle  Managing In Mesh Certificates description  How to configure certificates within your mesh  weight  30 keywords   traffic management proxy  owner  istio wg networking maintainers istio wg environments maintainers test  n a        Many users need to manage the types of the certificates used within their environment  For example  some users require the use of Elliptical Curve Cryptography  ECC  while others may need to use a stronger bit length for RSA certificates  Configuring certificates within your environment can be a daunting task for most users   This document is only intended to be used for in mesh communication  For managing certificates at your Gateway  see the  Secure Gateways   docs tasks traffic management ingress secure ingress   document  For managing the CA used by istiod to generate workload certificates  see the  Plugin CA Certificates   docs tasks security cert management plugin ca cert   document      istiod  When Istio is installed without a root CA certificate  istiod will generate a self signed CA certificate using RSA 2048   To change the self signed CA certificate s bit length  you will need to modify either the IstioOperator manifest provided to  istioctl  or the values file used during the Helm installation of the  istio discovery   manifests charts istio control istio discovery  chart 
   While there are many environment variables that can be changed for  pilot discovery   docs reference commands pilot discovery    this document will only outline some of them         apiVersion  install istio io v1alpha1 kind  IstioOperator spec    values      pilot        env          CITADEL SELF SIGNED CA RSA KEY SIZE  4096        pilot    env      CITADEL SELF SIGNED CA RSA KEY SIZE  4096          Sidecars  Since sidecars manage their own certificates for in mesh communication  the sidecars are responsible for managing their private keys and generated Certificate Signing Request  CSRs   The sidecar injector needs to be modified to inject the environment variables to be used for this purpose    While there are many environment variables that can be changed for  pilot agent   docs reference commands pilot agent    this document will only outline some of them         apiVersion  install istio io v1alpha1 kind  IstioOperator spec    meshConfig      defaultConfig        proxyMetadata          CITADEL SELF SIGNED CA RSA KEY SIZE  4096        meshConfig    defaultConfig      proxyMetadata        CITADEL SELF SIGNED CA RSA KEY SIZE  4096        apiVersion  apps v1 kind  Deployment metadata    name  curl spec          template      metadata                  annotations                      proxy istio io config              CITADEL SELF SIGNED CA RSA KEY SIZE  4096     spec                      Signature Algorithm  By default  the sidecars will create RSA certificates  If you want to change it to ECC  you need to set  ECC SIGNATURE ALGORITHM  to  ECDSA         apiVersion  install istio io v1alpha1 kind  IstioOperator spec    meshConfig      defaultConfig        proxyMetadata          ECC SIGNATURE ALGORITHM   ECDSA         meshConfig    defaultConfig      proxyMetadata        ECC SIGNATURE ALGORITHM   ECDSA        Only P256 and P384 are supported via  ECC CURVE    If you prefer to retain RSA signature algorithms and want to modify the RSA key size  you can change the 
value of  WORKLOAD RSA KEY SIZE  "}
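The record above notes that only P256 and P384 are supported via `ECC_CURVE`. As a minimal sketch (not taken verbatim from the record), the curve can be set alongside `ECC_SIGNATURE_ALGORITHM` in the same `proxyMetadata` block; the choice of P384 here is purely illustrative:

```yaml
# Illustrative values file fragment; P384 is an arbitrary choice
# between the two curves the record says are supported.
meshConfig:
  defaultConfig:
    proxyMetadata:
      ECC_SIGNATURE_ALGORITHM: "ECDSA"
      ECC_CURVE: "P384"
```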
{"questions":"istio is a diagnostic tool that can detect potential issues with your owner istio wg user experience maintainers weight 40 title Diagnose your Configuration with Istioctl Analyze keywords istioctl debugging kubernetes Shows you how to use istioctl analyze to identify potential issues with your configuration test yes","answers":"---\ntitle: Diagnose your Configuration with Istioctl Analyze\ndescription: Shows you how to use istioctl analyze to identify potential issues with your configuration.\nweight: 40\nkeywords: [istioctl, debugging, kubernetes]\nowner: istio\/wg-user-experience-maintainers\ntest: yes\n---\n\n`istioctl analyze` is a diagnostic tool that can detect potential issues with your\nIstio configuration. It can run against a live cluster or a set of local configuration files.\nIt can also run against a combination of the two, allowing you to catch problems before you\napply changes to a cluster.\n\n## Getting started in under a minute\n\nYou can analyze your current live Kubernetes cluster by running:\n\n\n$ istioctl analyze --all-namespaces\n\n\nAnd that\u2019s it! It\u2019ll give you any recommendations that apply.\n\nFor example, if you forgot to enable Istio injection (a very common issue), you would get the following 'Info' message:\n\n\nInfo [IST0102] (Namespace default) The namespace is not enabled for Istio injection. 
Run 'kubectl label namespace default istio-injection=enabled' to enable it, or 'kubectl label namespace default istio-injection=disabled' to explicitly mark it as not needing injection.\n\n\nFix the issue:\n\n\n$ kubectl label namespace default istio-injection=enabled\n\n\nThen try again:\n\n\n$ istioctl analyze --namespace default\n\u2714 No validation issues found when analyzing namespace: default.\n\n\n## Analyzing live clusters, local files, or both\n\nAnalyze the current live cluster, simulating the effect of applying additional yaml files\nlike `bookinfo-gateway.yaml` and `destination-rule-all.yaml` in the `samples\/bookinfo\/networking` directory:\n\n\n$ istioctl analyze @samples\/bookinfo\/networking\/bookinfo-gateway.yaml@ @samples\/bookinfo\/networking\/destination-rule-all.yaml@\nError [IST0101] (Gateway default\/bookinfo-gateway samples\/bookinfo\/networking\/bookinfo-gateway.yaml:9) Referenced selector not found: \"istio=ingressgateway\"\nError [IST0101] (VirtualService default\/bookinfo samples\/bookinfo\/networking\/bookinfo-gateway.yaml:41) Referenced host not found: \"productpage\"\nError: Analyzers found issues when analyzing namespace: default.\nSee https:\/\/istio.io\/v\/docs\/reference\/config\/analysis for more information about causes and resolutions.\n\n\nAnalyze the entire `networking` folder:\n\n\n$ istioctl analyze samples\/bookinfo\/networking\/\n\n\nAnalyze all yaml files in the `networking` folder:\n\n\n$ istioctl analyze samples\/bookinfo\/networking\/*.yaml\n\n\nThe above examples are doing analysis on a live cluster. The tool also supports performing analysis\nof a set of local Kubernetes yaml configuration files, or on a combination of local files and a\nlive cluster. When analyzing a set of local files, the file-set is expected to be fully self-contained.\nTypically, this is used to analyze the entire set of configuration files that are intended to be deployed\nto a cluster. 
To use this feature, simply add the `--use-kube=false` flag.\n\nAnalyze all yaml files in the `networking` folder:\n\n\n$ istioctl analyze --use-kube=false samples\/bookinfo\/networking\/*.yaml\n\n\nYou can run `istioctl analyze --help` to see the full set of options.\n\n## Advanced\n\n### Enabling validation messages for resource status\n\n\n\nStarting with v1.5, Istio can be set up to perform configuration analysis alongside\nthe configuration distribution that it is primarily responsible for, via the `istiod.enableAnalysis` flag.\nThis analysis uses the same logic and error messages as when using `istioctl analyze`.\nValidation messages from the analysis are written to the status subresource of the affected Istio resource.\n\nFor example, if you have a misconfigured gateway on your \"ratings\" virtual service,\nrunning `kubectl get virtualservice ratings` would give you something like:\n\n\napiVersion: networking.istio.io\/v1\nkind: VirtualService\n...\nspec:\n  gateways:\n  - bogus-gateway\n  hosts:\n  - ratings\n...\nstatus:\n  observedGeneration: \"1\"\n  validationMessages:\n  - documentationUrl: https:\/\/istio.io\/v\/docs\/reference\/config\/analysis\/ist0101\/\n    level: ERROR\n    type:\n      code: IST0101\n\n\n`enableAnalysis` runs in the background, and will keep the status field of a resource up to date\nwith its current validation status. Note that this isn't a replacement for `istioctl analyze`:\n\n- Not all resources have a custom status field (e.g. 
Kubernetes `namespace` resources),\n  so messages attached to those resources won't show validation messages.\n- `enableAnalysis` only works on Istio versions starting with 1.5, while\n  `istioctl analyze` can be used with older versions.\n- While it makes it easy to see what's wrong with a particular resource,\n  it's harder to get a holistic view of validation status in the mesh.\n\nYou can enable this feature with:\n\n\n$ istioctl install --set values.global.istiod.enableAnalysis=true\n\n\n### Ignoring specific analyzer messages via CLI\n\nSometimes you might find it useful to hide or ignore analyzer messages in certain cases.\nFor example, imagine a situation where a message is emitted about a resource you don't have permissions to update:\n\n\n$ istioctl analyze -k --namespace frod\nInfo [IST0102] (Namespace frod) The namespace is not enabled for Istio injection. Run 'kubectl label namespace frod istio-injection=enabled' to enable it, or 'kubectl label namespace frod istio-injection=disabled' to explicitly mark it as not needing injection.\n\n\nBecause you don't have permissions to update the namespace, you cannot resolve the message\nby annotating the namespace. Instead, you can direct `istioctl analyze` to suppress the above message on the resource:\n\n\n$ istioctl analyze -k --namespace frod --suppress \"IST0102=Namespace frod\"\n\u2714 No validation issues found when analyzing namespace: frod.\n\n\nThe syntax used for suppression is the same syntax used throughout `istioctl` when referring to\nresources: `<kind> <name>.<namespace>`, or just `<kind> <name>` for cluster-scoped resources like\n`Namespace`. 
If you want to suppress multiple objects, you can either repeat the `--suppress` argument or use wildcards:\n\n\n$ # Suppress code IST0102 on namespace frod and IST0107 on all pods in namespace baz\n$ istioctl analyze -k --all-namespaces --suppress \"IST0102=Namespace frod\" --suppress \"IST0107=Pod *.baz\"\n\n\n### Ignoring specific analyzer messages via annotations\n\nYou can also ignore specific analyzer messages using an annotation on the resource.\nFor example, to ignore code IST0107 (`MisplacedAnnotation`) on resource `deployment\/my-deployment`:\n\n\n$ kubectl annotate deployment my-deployment galley.istio.io\/analyze-suppress=IST0107\n\n\nTo ignore multiple codes for a resource, separate each code with a comma:\n\n\n$ kubectl annotate deployment my-deployment galley.istio.io\/analyze-suppress=IST0107,IST0002\n\n\n## Helping us improve this tool\n\nWe're continuing to add more analysis capability and we'd love your help in identifying more use cases.\nIf you've discovered some Istio configuration \"gotcha\", some tricky situation that caused you some\nproblems, open an issue and let us know. We might be able to automatically flag this problem so that\nothers can discover and avoid the problem in the first place.\n\nTo do this, [open an issue](https:\/\/github.com\/istio\/istio\/issues) describing your scenario. 
For example:\n\n- Look at all the virtual services\n- For each, look at their list of gateways\n- If some of the gateways don\u2019t exist, produce an error\n\nWe already have an analyzer for this specific scenario, so this is just an example to illustrate what\nkind of information you should provide.\n\n## Q&A\n\n- **What Istio release does this tool target?**\n\n    Like other `istioctl` tools, we generally recommend using a downloaded version\n    that matches the version deployed in your cluster.\n\n    For the time being, analysis is generally backwards compatible, so that you can,\n    for example, run the  version of `istioctl analyze` against\n    a cluster running an older Istio 1.x version and expect to get useful feedback.\n    Analysis rules that are not meaningful with an older Istio release will be skipped.\n\n    If you decide to use the latest `istioctl` for analysis purposes on a cluster\n    running an older Istio version, we suggest that you keep it in a separate folder\n    from the version of the binary used to manage your deployed Istio release.\n\n- **What analyzers are supported today?**\n\n    We're still working on documenting the analyzers. In the meantime, you can see\n    all the analyzers in the [Istio source](\/pkg\/config\/analysis\/analyzers).\n\n    You can also see what [configuration analysis messages](\/docs\/reference\/config\/analysis\/)\n    are supported to get an idea of what is currently covered.\n\n- **Can analysis do anything harmful to my cluster?**\n\n    Analysis never changes configuration state. It is a completely read-only operation\n    that will never alter the state of a cluster.\n\n- **What about analysis that goes beyond configuration?**\n\n    Today, the analysis is purely based on Kubernetes configuration, but in the future\n    we\u2019d like to expand beyond that. 
For example, we could allow analyzers to also look\n    at logs to generate recommendations.\n\n- **Where can I find out how to fix the errors I'm getting?**\n\n    The set of [configuration analysis messages](\/docs\/reference\/config\/analysis\/)\n    contains descriptions of each message along with suggested fixes.","site":"istio","answers_cleaned":"    title  Diagnose your Configuration with Istioctl Analyze description  Shows you how to use istioctl analyze to identify potential issues with your configuration  weight  40 keywords   istioctl  debugging  kubernetes  owner  istio wg user experience maintainers test  yes       istioctl analyze  is a diagnostic tool that can detect potential issues with your Istio configuration  It can run against a live cluster or a set of local configuration files  It can also run against a combination of the two  allowing you to catch problems before you apply changes to a cluster      Getting started in under a minute  You can analyze your current live Kubernetes cluster by running      istioctl analyze   all namespaces   And that s it  It ll give you any recommendations that apply   For example  if you forgot to enable Istio injection  a very common issue   you would get the following  Info  message    Info  IST0102   Namespace default  The namespace is not enabled for Istio injection  Run  kubectl label namespace default istio injection enabled  to enable it  or  kubectl label namespace default istio injection disabled  to explicitly mark it as not needing injection    Fix the issue      kubectl label namespace default istio injection enabled   Then try again      istioctl analyze   namespace default   No validation issues found when analyzing namespace  default       Analyzing live clusters  local files  or both  Analyze the current live cluster  simulating the effect of applying additional yaml files like  bookinfo gateway yaml  and  destination rule all yaml  in the  samples bookinfo networking  directory      istioctl 
analyze  samples bookinfo networking bookinfo gateway yaml   samples bookinfo networking destination rule all yaml  Error  IST0101   Gateway default bookinfo gateway samples bookinfo networking bookinfo gateway yaml 9  Referenced selector not found   istio ingressgateway  Error  IST0101   VirtualService default bookinfo samples bookinfo networking bookinfo gateway yaml 41  Referenced host not found   productpage  Error  Analyzers found issues when analyzing namespace  default  See https   istio io v docs reference config analysis for more information about causes and resolutions    Analyze the entire  networking  folder      istioctl analyze samples bookinfo networking    Analyze all yaml files in the  networking  folder      istioctl analyze samples bookinfo networking   yaml   The above examples are doing analysis on a live cluster  The tool also supports performing analysis of a set of local Kubernetes yaml configuration files  or on a combination of local files and a live cluster  When analyzing a set of local files  the file set is expected to be fully self contained  Typically  this is used to analyze the entire set of configuration files that are intended to be deployed to a cluster  To use this feature  simply add the    use kube false  flag   Analyze all yaml files in the  networking  folder      istioctl analyze   use kube false samples bookinfo networking   yaml   You can run  istioctl analyze   help  to see the full set of options      Advanced      Enabling validation messages for resource status    Starting with v1 5  Istio can be set up to perform configuration analysis alongside the configuration distribution that it is primarily responsible for  via the  istiod enableAnalysis  flag  This analysis uses the same logic and error messages as when using  istioctl analyze   Validation messages from the analysis are written to the status subresource of the affected Istio resource   For example  if you have a misconfigured gateway on your  ratings  virtual 
service  running  kubectl get virtualservice ratings  would give you something like    apiVersion  networking istio io v1 kind  VirtualService     spec    gateways      bogus gateway   hosts      ratings     status    observedGeneration   1    validationMessages      documentationUrl  https   istio io v docs reference config analysis ist0101      level  ERROR     type        code  IST0101    enableAnalysis  runs in the background  and will keep the status field of a resource up to date with its current validation status  Note that this isn t a replacement for  istioctl analyze      Not all resources have a custom status field  e g  Kubernetes  namespace  resources     so messages attached to those resources won t show validation messages     enableAnalysis  only works on Istio versions starting with 1 5  while    istioctl analyze  can be used with older versions    While it makes it easy to see what s wrong with a particular resource    it s harder to get a holistic view of validation status in the mesh   You can enable this feature with      istioctl install   set values global istiod enableAnalysis true       Ignoring specific analyzer messages via CLI  Sometimes you might find it useful to hide or ignore analyzer messages in certain cases  For example  imagine a situation where a message is emitted about a resource you don t have permissions to update      istioctl analyze  k   namespace frod Info  IST0102   Namespace frod  The namespace is not enabled for Istio injection  Run  kubectl label namespace frod istio injection enabled  to enable it  or  kubectl label namespace frod istio injection disabled  to explicitly mark it as not needing injection    Because you don t have permissions to update the namespace  you cannot resolve the message by annotating the namespace  Instead  you can direct  istioctl analyze  to suppress the above message on the resource      istioctl analyze  k   namespace frod   suppress  IST0102 Namespace frod    No validation issues found 
when analyzing namespace  frod    The syntax used for suppression is the same syntax used throughout  istioctl  when referring to resources    kind   name   namespace    or just   kind   name   for cluster scoped resources like  Namespace   If you want to suppress multiple objects  you can either repeat the    suppress  argument or use wildcards        Suppress code IST0102 on namespace frod and IST0107 on all pods in namespace baz   istioctl analyze  k   all namespaces   suppress  IST0102 Namespace frod    suppress  IST0107 Pod   baz        Ignoring specific analyzer messages via annotations  You can also ignore specific analyzer messages using an annotation on the resource  For example  to ignore code IST0107   MisplacedAnnotation   on resource  deployment my deployment       kubectl annotate deployment my deployment galley istio io analyze suppress IST0107   To ignore multiple codes for a resource  separate each code with a comma      kubectl annotate deployment my deployment galley istio io analyze suppress IST0107 IST0002      Helping us improve this tool  We re continuing to add more analysis capability and we d love your help in identifying more use cases  If you ve discovered some Istio configuration  gotcha   some tricky situation that caused you some problems  open an issue and let us know  We might be able to automatically flag this problem so that others can discover and avoid the problem in the first place   To do this   open an issue  https   github com istio istio issues  describing your scenario  For example     Look at all the virtual services   For each  look at their list of gateways   If some of the gateways don t exist  produce an error  We already have an analyzer for this specific scenario  so this is just an example to illustrate what kind of information you should provide      Q A      What Istio release does this tool target         Like other  istioctl  tools  we generally recommend using a downloaded version     that matches the version 
deployed in your cluster       For the time being  analysis is generally backwards compatible  so that you can      for example  run the  version of  istioctl analyze  against     a cluster running an older Istio 1 x version and expect to get useful feedback      Analysis rules that are not meaningful with an older Istio release will be skipped       If you decide to use the latest  istioctl  for analysis purposes on a cluster     running an older Istio version  we suggest that you keep it in a separate folder     from the version of the binary used to manage your deployed Istio release       What analyzers are supported today         We re still working to documenting the analyzers  In the meantime  you can see     all the analyzers in the  Istio source   pkg config analysis analyzers        You can also see what  configuration analysis messages   docs reference config analysis       are supported to get an idea of what is currently covered       Can analysis do anything harmful to my cluster         Analysis never changes configuration state  It is a completely read only operation     that will never alter the state of a cluster       What about analysis that goes beyond configuration         Today  the analysis is purely based on Kubernetes configuration  but in the future     we d like to expand beyond that  For example  we could allow analyzers to also look     at logs to generate recommendations       Where can I find out how to fix the errors I m getting         The set of  configuration analysis messages   docs reference config analysis       contains descriptions of each message along with suggested fixes "}
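Drawing together flags that appear separately in the record above, a sketch of fully offline analysis with a suppressed finding; the file set and the suppressed resource are the record's own examples, and combining the flags in one invocation is an assumption, not something the record shows verbatim:

```shell
# Offline analysis of local manifests, hiding a finding that cannot be
# acted on; each flag appears individually in the record above.
istioctl analyze --use-kube=false \
  --suppress "IST0102=Namespace frod" \
  samples/bookinfo/networking/*.yaml
```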
{"questions":"istio weight 20 keywords debug proxy status config pilot envoy aliases owner istio wg user experience maintainers Describes tools and techniques to diagnose Envoy configuration issues related to traffic management title Debugging Envoy and Istiod help ops misc help ops troubleshooting proxy cmd help ops traffic management proxy cmd","answers":"---\ntitle: Debugging Envoy and Istiod\ndescription: Describes tools and techniques to diagnose Envoy configuration issues related to traffic management.\nweight: 20\nkeywords: [debug,proxy,status,config,pilot,envoy]\naliases:\n    - \/help\/ops\/traffic-management\/proxy-cmd\n    - \/help\/ops\/misc\n    - \/help\/ops\/troubleshooting\/proxy-cmd\nowner: istio\/wg-user-experience-maintainers\ntest: no\n---\n\nIstio provides two very valuable commands to help diagnose traffic management configuration problems,\nthe [`proxy-status`](\/docs\/reference\/commands\/istioctl\/#istioctl-proxy-status)\nand [`proxy-config`](\/docs\/reference\/commands\/istioctl\/#istioctl-proxy-config) commands. The `proxy-status` command\nallows you to get an overview of your mesh and identify the proxy causing the problem. Then `proxy-config` can be used\nto inspect Envoy configuration and diagnose the issue.\n\nIf you want to try the commands described below, you can either:\n\n* Have a Kubernetes cluster with Istio and Bookinfo installed (as described in\n[installation steps](\/docs\/setup\/getting-started\/) and\n[Bookinfo installation steps](\/docs\/examples\/bookinfo\/#deploying-the-application)).\n\nOR\n\n* Use similar commands against your own application running in a Kubernetes cluster.\n\n## Get an overview of your mesh\n\nThe `proxy-status` command allows you to get an overview of your mesh. 
If you suspect one of your sidecars isn't\nreceiving configuration or is out of sync then `proxy-status` will tell you this.\n\n\n$ istioctl proxy-status\nNAME                                                   CDS        LDS        EDS        RDS          ISTIOD                      VERSION\ndetails-v1-558b8b4b76-qzqsg.default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0\nistio-ingressgateway-66c994c45c-cmb7x.istio-system     SYNCED     SYNCED     SYNCED     NOT SENT     istiod-6cf8d4f9cb-wm7x6     1.7.0\nproductpage-v1-6987489c74-nc7tj.default                SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0\nprometheus-7bdc59c94d-hcp59.istio-system               SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0\nratings-v1-7dc98c7588-5m6xj.default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0\nreviews-v1-7f99cc4496-rtsqn.default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0\nreviews-v2-7d79d5bd5d-tj6kf.default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0\nreviews-v3-7dbcdcbc56-t8wrx.default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0\n\n\nIf a proxy is missing from this list it means that it is not currently connected to a Istiod instance so will not be\nreceiving any configuration.\n\n* `SYNCED` means that Envoy has acknowledged the last configuration Istiod has sent to it.\n* `NOT SENT` means that Istiod hasn't sent anything to Envoy. This usually is because Istiod has nothing to send.\n* `STALE` means that Istiod has sent an update to Envoy but has not received an acknowledgement. 
This usually indicates\na networking issue between Envoy and Istiod or a bug with Istio itself.\n\n## Retrieve diffs between Envoy and Istiod\n\nThe `proxy-status` command can also be used to retrieve a diff between the configuration Envoy has loaded and the\nconfiguration Istiod would send, by providing a proxy ID. This can help you determine exactly what is out of sync and\nwhere the issue may lie.\n\n\n$ istioctl proxy-status details-v1-6dcc6fbb9d-wsjz4.default\n--- Istiod Clusters\n+++ Envoy Clusters\n@@ -374,36 +374,14 @@\n             \"edsClusterConfig\": {\n                \"edsConfig\": {\n                   \"ads\": {\n\n                   }\n                },\n                \"serviceName\": \"outbound|443||public-cr0bdc785ce3f14722918080a97e1f26be-alb1.kube-system.svc.cluster.local\"\n-            },\n-            \"connectTimeout\": \"1.000s\",\n-            \"circuitBreakers\": {\n-               \"thresholds\": [\n-                  {\n-\n-                  }\n-               ]\n-            }\n-         }\n-      },\n-      {\n-         \"cluster\": {\n-            \"name\": \"outbound|53||kube-dns.kube-system.svc.cluster.local\",\n-            \"type\": \"EDS\",\n-            \"edsClusterConfig\": {\n-               \"edsConfig\": {\n-                  \"ads\": {\n-\n-                  }\n-               },\n-               \"serviceName\": \"outbound|53||kube-dns.kube-system.svc.cluster.local\"\n             },\n             \"connectTimeout\": \"1.000s\",\n             \"circuitBreakers\": {\n                \"thresholds\": [\n                   {\n\n                   }\n\nListeners Match\nRoutes Match (RDS last loaded at Tue, 04 Aug 2020 11:52:54 IST)\n\n\nHere you can see that the listeners and routes match but the clusters are out of sync.\n\n## Deep dive into Envoy configuration\n\nThe `proxy-config` command can be used to see how a given Envoy instance is configured. 
This can then be used to\npinpoint any issues you are unable to detect by just looking through your Istio configuration and custom resources.\nTo get a basic summary of clusters, listeners or routes for a given pod use the command as follows (changing clusters\nfor listeners or routes when required):\n\n\n$ istioctl proxy-config cluster -n istio-system istio-ingressgateway-7d6874b48f-qxhn5\nSERVICE FQDN                                                               PORT      SUBSET     DIRECTION     TYPE           DESTINATION RULE\nBlackHoleCluster                                                           -         -          -             STATIC\nagent                                                                      -         -          -             STATIC\ndetails.default.svc.cluster.local                                          9080      -          outbound      EDS            details.default\nistio-ingressgateway.istio-system.svc.cluster.local                        80        -          outbound      EDS\nistio-ingressgateway.istio-system.svc.cluster.local                        443       -          outbound      EDS\nistio-ingressgateway.istio-system.svc.cluster.local                        15021     -          outbound      EDS\nistio-ingressgateway.istio-system.svc.cluster.local                        15443     -          outbound      EDS\nistiod.istio-system.svc.cluster.local                                      443       -          outbound      EDS\nistiod.istio-system.svc.cluster.local                                      853       -          outbound      EDS\nistiod.istio-system.svc.cluster.local                                      15010     -          outbound      EDS\nistiod.istio-system.svc.cluster.local                                      15012     -          outbound      EDS\nistiod.istio-system.svc.cluster.local                                      15014     -          outbound      EDS\nkube-dns.kube-system.svc.cluster.local            
                         53        -          outbound      EDS\nkube-dns.kube-system.svc.cluster.local                                     9153      -          outbound      EDS\nkubernetes.default.svc.cluster.local                                       443       -          outbound      EDS\n...\nproductpage.default.svc.cluster.local                                      9080      -          outbound      EDS\nprometheus_stats                                                           -         -          -             STATIC\nratings.default.svc.cluster.local                                          9080      -          outbound      EDS\nreviews.default.svc.cluster.local                                          9080      -          outbound      EDS\nsds-grpc                                                                   -         -          -             STATIC\nxds-grpc                                                                   -         -          -             STRICT_DNS\nzipkin                                                                     -         -          -             STRICT_DNS\n\n\nIn order to debug Envoy you need to understand Envoy clusters\/listeners\/routes\/endpoints and how they all interact.\nWe will use the `proxy-config` command with the `-o json` and filtering flags to follow Envoy as it determines where\nto send a request from the `productpage` pod to the `reviews` pod at `reviews:9080`.\n\n1. 
If you query the listener summary on a pod you will notice Istio generates the following listeners:\n    * A listener on `0.0.0.0:15006` that receives all inbound traffic to the pod and a listener on `0.0.0.0:15001` that receives all outbound traffic to the pod, then hands the request over to a virtual listener.\n    * A virtual listener per service IP, per each non-HTTP for outbound TCP\/HTTPS traffic.\n    * A virtual listener on the pod IP for each exposed port for inbound traffic.\n    * A virtual listener on `0.0.0.0` per each HTTP port for outbound HTTP traffic.\n\n    \n    $ istioctl proxy-config listeners productpage-v1-6c886ff494-7vxhs\n    ADDRESS       PORT  MATCH                                            DESTINATION\n    10.96.0.10    53    ALL                                              Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local\n    0.0.0.0       80    App: HTTP                                        Route: 80\n    0.0.0.0       80    ALL                                              PassthroughCluster\n    10.100.93.102 443   ALL                                              Cluster: outbound|443||istiod.istio-system.svc.cluster.local\n    10.111.121.13 443   ALL                                              Cluster: outbound|443||istio-ingressgateway.istio-system.svc.cluster.local\n    10.96.0.1     443   ALL                                              Cluster: outbound|443||kubernetes.default.svc.cluster.local\n    10.100.93.102 853   App: HTTP                                        Route: istiod.istio-system.svc.cluster.local:853\n    10.100.93.102 853   ALL                                              Cluster: outbound|853||istiod.istio-system.svc.cluster.local\n    0.0.0.0       9080  App: HTTP                                        Route: 9080\n    0.0.0.0       9080  ALL                                              PassthroughCluster\n    0.0.0.0       9090  App: HTTP                                        Route: 9090\n    
0.0.0.0       9090  ALL                                              PassthroughCluster\n    10.96.0.10    9153  App: HTTP                                        Route: kube-dns.kube-system.svc.cluster.local:9153\n    10.96.0.10    9153  ALL                                              Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local\n    0.0.0.0       15001 ALL                                              PassthroughCluster\n    0.0.0.0       15006 Addr: 10.244.0.22\/32:15021                       inbound|15021|mgmt-15021|mgmtCluster\n    0.0.0.0       15006 Addr: 10.244.0.22\/32:9080                        Inline Route: \/*\n    0.0.0.0       15006 Trans: tls; App: HTTP TLS; Addr: 0.0.0.0\/0       Inline Route: \/*\n    0.0.0.0       15006 App: HTTP; Addr: 0.0.0.0\/0                       Inline Route: \/*\n    0.0.0.0       15006 App: Istio HTTP Plain; Addr: 10.244.0.22\/32:9080 Inline Route: \/*\n    0.0.0.0       15006 Addr: 0.0.0.0\/0                                  InboundPassthroughClusterIpv4\n    0.0.0.0       15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0\/0        InboundPassthroughClusterIpv4\n    0.0.0.0       15010 App: HTTP                                        Route: 15010\n    0.0.0.0       15010 ALL                                              PassthroughCluster\n    10.100.93.102 15012 ALL                                              Cluster: outbound|15012||istiod.istio-system.svc.cluster.local\n    0.0.0.0       15014 App: HTTP                                        Route: 15014\n    0.0.0.0       15014 ALL                                              PassthroughCluster\n    0.0.0.0       15021 ALL                                              Inline Route: \/healthz\/ready*\n    10.111.121.13 15021 App: HTTP                                        Route: istio-ingressgateway.istio-system.svc.cluster.local:15021\n    10.111.121.13 15021 ALL                                              Cluster: 
outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local\n    0.0.0.0       15090 ALL                                              Inline Route: \/stats\/prometheus*\n    10.111.121.13 15443 ALL                                              Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local\n    \n\n1. From the above summary you can see that every sidecar has a listener bound to `0.0.0.0:15006` which is where IP tables routes all inbound pod traffic to and a listener bound to `0.0.0.0:15001` which is where IP tables routes all outbound pod traffic to. The `0.0.0.0:15001` listener hands the request over to the virtual listener that best matches the original destination of the request, if it can find a matching one. Otherwise, it sends the request to the `PassthroughCluster` which connects to the destination directly.\n\n    \n    $ istioctl proxy-config listeners productpage-v1-6c886ff494-7vxhs --port 15001 -o json\n    [\n        {\n            \"name\": \"virtualOutbound\",\n            \"address\": {\n                \"socketAddress\": {\n                    \"address\": \"0.0.0.0\",\n                    \"portValue\": 15001\n                }\n            },\n            \"filterChains\": [\n                {\n                    \"filters\": [\n                        {\n                            \"name\": \"istio.stats\",\n                            \"typedConfig\": {\n                                \"@type\": \"type.googleapis.com\/udpa.type.v1.TypedStruct\",\n                                \"typeUrl\": \"type.googleapis.com\/envoy.extensions.filters.network.wasm.v3.Wasm\",\n                                \"value\": {\n                                    \"config\": {\n                                        \"configuration\": \"{\\n  \\\"debug\\\": \\\"false\\\",\\n  \\\"stat_prefix\\\": \\\"istio\\\"\\n}\\n\",\n                                        \"root_id\": \"stats_outbound\",\n                                 
       \"vm_config\": {\n                                            \"code\": {\n                                                \"local\": {\n                                                    \"inline_string\": \"envoy.wasm.stats\"\n                                                }\n                                            },\n                                            \"runtime\": \"envoy.wasm.runtime.null\",\n                                            \"vm_id\": \"tcp_stats_outbound\"\n                                        }\n                                    }\n                                }\n                            }\n                        },\n                        {\n                            \"name\": \"envoy.tcp_proxy\",\n                            \"typedConfig\": {\n                                \"@type\": \"type.googleapis.com\/envoy.config.filter.network.tcp_proxy.v2.TcpProxy\",\n                                \"statPrefix\": \"PassthroughCluster\",\n                                \"cluster\": \"PassthroughCluster\"\n                            }\n                        }\n                    ],\n                    \"name\": \"virtualOutbound-catchall-tcp\"\n                }\n            ],\n            \"trafficDirection\": \"OUTBOUND\",\n            \"hiddenEnvoyDeprecatedUseOriginalDst\": true\n        }\n    ]\n    \n\n1. Our request is an outbound HTTP request to port `9080` this means it gets handed off to the `0.0.0.0:9080` virtual\nlistener. This listener then looks up the route configuration in its configured RDS. 
In this case it will be looking\nup route `9080` in RDS configured by Istiod (via ADS).\n\n    \n    $ istioctl proxy-config listeners productpage-v1-6c886ff494-7vxhs -o json --address 0.0.0.0 --port 9080\n    ...\n    \"rds\": {\n        \"configSource\": {\n            \"ads\": {},\n            \"resourceApiVersion\": \"V3\"\n        },\n        \"routeConfigName\": \"9080\"\n    }\n    ...\n    \n\n1. The `9080` route configuration only has a virtual host for each service. Our request is heading to the reviews\nservice so Envoy will select the virtual host to which our request matches a domain. Once matched on domain Envoy\nlooks for the first route that matches the request. In this case we don't have any advanced routing so there is only\none route that matches on everything. This route tells Envoy to send the request to the\n`outbound|9080||reviews.default.svc.cluster.local` cluster.\n\n    \n    $ istioctl proxy-config routes productpage-v1-6c886ff494-7vxhs --name 9080 -o json\n    [\n        {\n            \"name\": \"9080\",\n            \"virtualHosts\": [\n                {\n                    \"name\": \"reviews.default.svc.cluster.local:9080\",\n                    \"domains\": [\n                        \"reviews.default.svc.cluster.local\",\n                        \"reviews\",\n                        \"reviews.default.svc\",\n                        \"reviews.default\",\n                        \"10.98.88.0\",\n                    ],\n                    \"routes\": [\n                        {\n                            \"name\": \"default\",\n                            \"match\": {\n                                \"prefix\": \"\/\"\n                            },\n                            \"route\": {\n                                \"cluster\": \"outbound|9080||reviews.default.svc.cluster.local\",\n                                \"timeout\": \"0s\",\n                            }\n                        }\n                    ]\n    
...\n    \n\n1. This cluster is configured to retrieve the associated endpoints from Istiod (via ADS). So Envoy will then use the\n`serviceName` field as a key to look up the list of Endpoints and proxy the request to one of them.\n\n    \n    $ istioctl proxy-config cluster productpage-v1-6c886ff494-7vxhs --fqdn reviews.default.svc.cluster.local -o json\n    [\n        {\n            \"name\": \"outbound|9080||reviews.default.svc.cluster.local\",\n            \"type\": \"EDS\",\n            \"edsClusterConfig\": {\n                \"edsConfig\": {\n                    \"ads\": {},\n                    \"resourceApiVersion\": \"V3\"\n                },\n                \"serviceName\": \"outbound|9080||reviews.default.svc.cluster.local\"\n            },\n            \"connectTimeout\": \"10s\",\n            \"circuitBreakers\": {\n                \"thresholds\": [\n                    {\n                        \"maxConnections\": 4294967295,\n                        \"maxPendingRequests\": 4294967295,\n                        \"maxRequests\": 4294967295,\n                        \"maxRetries\": 4294967295\n                    }\n                ]\n            },\n        }\n    ]\n    \n\n1. 
To see the endpoints currently available for this cluster use the `proxy-config` endpoints command.\n\n    \n    $ istioctl proxy-config endpoints productpage-v1-6c886ff494-7vxhs --cluster \"outbound|9080||reviews.default.svc.cluster.local\"\n    ENDPOINT            STATUS      OUTLIER CHECK     CLUSTER\n    172.17.0.7:9080     HEALTHY     OK                outbound|9080||reviews.default.svc.cluster.local\n    172.17.0.8:9080     HEALTHY     OK                outbound|9080||reviews.default.svc.cluster.local\n    172.17.0.9:9080     HEALTHY     OK                outbound|9080||reviews.default.svc.cluster.local\n    \n\n## Inspecting bootstrap configuration\n\nSo far we have looked at configuration retrieved (mostly) from Istiod, however Envoy requires some bootstrap configuration that\nincludes information like where Istiod can be found. To view this use the following command:\n\n\n$ istioctl proxy-config bootstrap -n istio-system istio-ingressgateway-7d6874b48f-qxhn5\n{\n    \"bootstrap\": {\n        \"node\": {\n            \"id\": \"router~172.30.86.14~istio-ingressgateway-7d6874b48f-qxhn5.istio-system~istio-system.svc.cluster.local\",\n            \"cluster\": \"istio-ingressgateway\",\n            \"metadata\": {\n                    \"CLUSTER_ID\": \"Kubernetes\",\n                    \"EXCHANGE_KEYS\": \"NAME,NAMESPACE,INSTANCE_IPS,LABELS,OWNER,PLATFORM_METADATA,WORKLOAD_NAME,MESH_ID,SERVICE_ACCOUNT,CLUSTER_ID\",\n                    \"INSTANCE_IPS\": \"10.244.0.7\",\n                    \"ISTIO_PROXY_SHA\": \"istio-proxy:f98b7e538920abc408fbc91c22a3b32bc854d9dc\",\n                    \"ISTIO_VERSION\": \"1.7.0\",\n                    \"LABELS\": {\n                                \"app\": \"istio-ingressgateway\",\n                                \"chart\": \"gateways\",\n                                \"heritage\": \"Tiller\",\n                                \"istio\": \"ingressgateway\",\n                                \"pod-template-hash\": 
\"68bf7d7f94\",\n                                \"release\": \"istio\",\n                                \"service.istio.io\/canonical-name\": \"istio-ingressgateway\",\n                                \"service.istio.io\/canonical-revision\": \"latest\"\n                            },\n                    \"MESH_ID\": \"cluster.local\",\n                    \"NAME\": \"istio-ingressgateway-68bf7d7f94-sp226\",\n                    \"NAMESPACE\": \"istio-system\",\n                    \"OWNER\": \"kubernetes:\/\/apis\/apps\/v1\/namespaces\/istio-system\/deployments\/istio-ingressgateway\",\n                    \"ROUTER_MODE\": \"sni-dnat\",\n                    \"SDS\": \"true\",\n                    \"SERVICE_ACCOUNT\": \"istio-ingressgateway-service-account\",\n                    \"WORKLOAD_NAME\": \"istio-ingressgateway\"\n                },\n            \"userAgentBuildVersion\": {\n                \"version\": {\n                    \"majorNumber\": 1,\n                    \"minorNumber\": 15\n                },\n                \"metadata\": {\n                        \"build.type\": \"RELEASE\",\n                        \"revision.sha\": \"f98b7e538920abc408fbc91c22a3b32bc854d9dc\",\n                        \"revision.status\": \"Clean\",\n                        \"ssl.version\": \"BoringSSL\"\n                    }\n            },\n        },\n...\n\n\n## Verifying connectivity to Istiod\n\nVerifying connectivity to Istiod is a useful troubleshooting step. Every proxy container in the service mesh should be able to communicate with Istiod. This can be accomplished in a few simple steps:\n\n1.  Create a `curl` pod:\n\n    \n    $ kubectl create namespace foo\n    $ kubectl apply -f <(istioctl kube-inject -f samples\/curl\/curl.yaml) -n foo\n    \n\n1.  Test connectivity to Istiod using `curl`. 
The following example invokes the v1 registration API using default Istiod configuration parameters and mutual TLS enabled:\n\n    \n    $ kubectl exec $(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name}) -c curl -n foo -- curl -sS istiod.istio-system:15014\/version\n    \n\nYou should receive a response listing the version of Istiod.\n\n## What Envoy version is Istio using?\n\nTo find out the Envoy version used in deployment, you can `exec` into the container and query the `server_info` endpoint:\n\n\n$ kubectl exec -it productpage-v1-6b746f74dc-9stvs -c istio-proxy -n default  -- pilot-agent request GET server_info --log_as_json | jq {version}\n{\n \"version\": \"2d4ec97f3ac7b3256d060e1bb8aa6c415f5cef63\/1.17.0\/Clean\/RELEASE\/BoringSSL\"\n}\n","site":"istio","answers_cleaned":"    title  Debugging Envoy and Istiod description  Describes tools and techniques to diagnose Envoy configuration issues related to traffic management  weight  20 keywords   debug proxy status config pilot envoy  aliases         help ops traffic management proxy cmd        help ops misc        help ops troubleshooting proxy cmd owner  istio wg user experience maintainers test  no      Istio provides two very valuable commands to help diagnose traffic management configuration problems  the   proxy status    docs reference commands istioctl  istioctl proxy status  and   proxy config    docs reference commands istioctl  istioctl proxy config  commands  The  proxy status  command allows you to get an overview of your mesh and identify the proxy causing the problem  Then  proxy config  can be used to inspect Envoy configuration and diagnose the issue   If you want to try the commands described below  you can either     Have a Kubernetes cluster with Istio and Bookinfo installed  as described in  installation steps   docs setup getting started   and  Bookinfo installation steps   docs examples bookinfo  deploying the application     OR    Use similar commands against your 
own application running in a Kubernetes cluster      Get an overview of your mesh  The  proxy status  command allows you to get an overview of your mesh  If you suspect one of your sidecars isn t receiving configuration or is out of sync then  proxy status  will tell you this      istioctl proxy status NAME                                                   CDS        LDS        EDS        RDS          ISTIOD                      VERSION details v1 558b8b4b76 qzqsg default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod 6cf8d4f9cb wm7x6     1 7 0 istio ingressgateway 66c994c45c cmb7x istio system     SYNCED     SYNCED     SYNCED     NOT SENT     istiod 6cf8d4f9cb wm7x6     1 7 0 productpage v1 6987489c74 nc7tj default                SYNCED     SYNCED     SYNCED     SYNCED       istiod 6cf8d4f9cb wm7x6     1 7 0 prometheus 7bdc59c94d hcp59 istio system               SYNCED     SYNCED     SYNCED     SYNCED       istiod 6cf8d4f9cb wm7x6     1 7 0 ratings v1 7dc98c7588 5m6xj default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod 6cf8d4f9cb wm7x6     1 7 0 reviews v1 7f99cc4496 rtsqn default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod 6cf8d4f9cb wm7x6     1 7 0 reviews v2 7d79d5bd5d tj6kf default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod 6cf8d4f9cb wm7x6     1 7 0 reviews v3 7dbcdcbc56 t8wrx default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod 6cf8d4f9cb wm7x6     1 7 0   If a proxy is missing from this list it means that it is not currently connected to a Istiod instance so will not be receiving any configuration      SYNCED  means that Envoy has acknowledged the last configuration Istiod has sent to it     NOT SENT  means that Istiod hasn t sent anything to Envoy  This usually is because Istiod has nothing to send     STALE  means that Istiod has sent an update to Envoy but has not received an acknowledgement  This usually indicates a 
networking issue between Envoy and Istiod or a bug with Istio itself      Retrieve diffs between Envoy and Istiod  The  proxy status  command can also be used to retrieve a diff between the configuration Envoy has loaded and the configuration Istiod would send  by providing a proxy ID  This can help you determine exactly what is out of sync and where the issue may lie      istioctl proxy status details v1 6dcc6fbb9d wsjz4 default     Istiod Clusters     Envoy Clusters     374 36  374 14                  edsClusterConfig                      edsConfig                         ads                                                               serviceName    outbound 443  public cr0bdc785ce3f14722918080a97e1f26be alb1 kube system svc cluster local                                connectTimeout    1 000s                 circuitBreakers                      thresholds                                                                                                                            cluster                   name    outbound 53  kube dns kube system svc cluster local                 type    EDS                 edsClusterConfig                      edsConfig                         ads                                                                serviceName    outbound 53  kube dns kube system svc cluster local                                connectTimeout    1 000s                 circuitBreakers                      thresholds                                                 Listeners Match Routes Match  RDS last loaded at Tue  04 Aug 2020 11 52 54 IST    Here you can see that the listeners and routes match but the clusters are out of sync      Deep dive into Envoy configuration  The  proxy config  command can be used to see how a given Envoy instance is configured  This can then be used to pinpoint any issues you are unable to detect by just looking through your Istio configuration and custom resources  To get a basic summary of clusters  listeners or routes for a 
given pod use the command as follows  changing clusters for listeners or routes when required       istioctl proxy config cluster  n istio system istio ingressgateway 7d6874b48f qxhn5 SERVICE FQDN                                                               PORT      SUBSET     DIRECTION     TYPE           DESTINATION RULE BlackHoleCluster                                                                                              STATIC agent                                                                                                         STATIC details default svc cluster local                                          9080                 outbound      EDS            details default istio ingressgateway istio system svc cluster local                        80                   outbound      EDS istio ingressgateway istio system svc cluster local                        443                  outbound      EDS istio ingressgateway istio system svc cluster local                        15021                outbound      EDS istio ingressgateway istio system svc cluster local                        15443                outbound      EDS istiod istio system svc cluster local                                      443                  outbound      EDS istiod istio system svc cluster local                                      853                  outbound      EDS istiod istio system svc cluster local                                      15010                outbound      EDS istiod istio system svc cluster local                                      15012                outbound      EDS istiod istio system svc cluster local                                      15014                outbound      EDS kube dns kube system svc cluster local                                     53                   outbound      EDS kube dns kube system svc cluster local                                     9153                 outbound      EDS kubernetes default svc cluster local         
                              443                  outbound      EDS     productpage default svc cluster local                                      9080                 outbound      EDS prometheus stats                                                                                              STATIC ratings default svc cluster local                                          9080                 outbound      EDS reviews default svc cluster local                                          9080                 outbound      EDS sds grpc                                                                                                      STATIC xds grpc                                                                                                      STRICT DNS zipkin                                                                                                        STRICT DNS   In order to debug Envoy you need to understand Envoy clusters listeners routes endpoints and how they all interact  We will use the  proxy config  command with the   o json  and filtering flags to follow Envoy as it determines where to send a request from the  productpage  pod to the  reviews  pod at  reviews 9080    1  If you query the listener summary on a pod you will notice Istio generates the following listeners        A listener on  0 0 0 0 15006  that receives all inbound traffic to the pod and a listener on  0 0 0 0 15001  that receives all outbound traffic to the pod  then hands the request over to a virtual listener        A virtual listener per service IP  per each non HTTP for outbound TCP HTTPS traffic        A virtual listener on the pod IP for each exposed port for inbound traffic        A virtual listener on  0 0 0 0  per each HTTP port for outbound HTTP traffic              istioctl proxy config listeners productpage v1 6c886ff494 7vxhs     ADDRESS       PORT  MATCH                                            DESTINATION     10 96 0 10    53    ALL                           
                   Cluster  outbound 53  kube dns kube system svc cluster local     0 0 0 0       80    App  HTTP                                        Route  80     0 0 0 0       80    ALL                                              PassthroughCluster     10 100 93 102 443   ALL                                              Cluster  outbound 443  istiod istio system svc cluster local     10 111 121 13 443   ALL                                              Cluster  outbound 443  istio ingressgateway istio system svc cluster local     10 96 0 1     443   ALL                                              Cluster  outbound 443  kubernetes default svc cluster local     10 100 93 102 853   App  HTTP                                        Route  istiod istio system svc cluster local 853     10 100 93 102 853   ALL                                              Cluster  outbound 853  istiod istio system svc cluster local     0 0 0 0       9080  App  HTTP                                        Route  9080     0 0 0 0       9080  ALL                                              PassthroughCluster     0 0 0 0       9090  App  HTTP                                        Route  9090     0 0 0 0       9090  ALL                                              PassthroughCluster     10 96 0 10    9153  App  HTTP                                        Route  kube dns kube system svc cluster local 9153     10 96 0 10    9153  ALL                                              Cluster  outbound 9153  kube dns kube system svc cluster local     0 0 0 0       15001 ALL                                              PassthroughCluster     0 0 0 0       15006 Addr  10 244 0 22 32 15021                       inbound 15021 mgmt 15021 mgmtCluster     0 0 0 0       15006 Addr  10 244 0 22 32 9080                        Inline Route         0 0 0 0       15006 Trans  tls  App  HTTP TLS  Addr  0 0 0 0 0       Inline Route         0 0 0 0       15006 App  HTTP  Addr  0 0 0 0 0                       
Inline Route         0 0 0 0       15006 App  Istio HTTP Plain  Addr  10 244 0 22 32 9080 Inline Route         0 0 0 0       15006 Addr  0 0 0 0 0                                  InboundPassthroughClusterIpv4     0 0 0 0       15006 Trans  tls  App  TCP TLS  Addr  0 0 0 0 0        InboundPassthroughClusterIpv4     0 0 0 0       15010 App  HTTP                                        Route  15010     0 0 0 0       15010 ALL                                              PassthroughCluster     10 100 93 102 15012 ALL                                              Cluster  outbound 15012  istiod istio system svc cluster local     0 0 0 0       15014 App  HTTP                                        Route  15014     0 0 0 0       15014 ALL                                              PassthroughCluster     0 0 0 0       15021 ALL                                              Inline Route   healthz ready      10 111 121 13 15021 App  HTTP                                        Route  istio ingressgateway istio system svc cluster local 15021     10 111 121 13 15021 ALL                                              Cluster  outbound 15021  istio ingressgateway istio system svc cluster local     0 0 0 0       15090 ALL                                              Inline Route   stats prometheus      10 111 121 13 15443 ALL                                              Cluster  outbound 15443  istio ingressgateway istio system svc cluster local       1  From the above summary you can see that every sidecar has a listener bound to  0 0 0 0 15006  which is where IP tables routes all inbound pod traffic to and a listener bound to  0 0 0 0 15001  which is where IP tables routes all outbound pod traffic to  The  0 0 0 0 15001  listener hands the request over to the virtual listener that best matches the original destination of the request  if it can find a matching one  Otherwise  it sends the request to the  PassthroughCluster  which connects to the destination directly              
$ istioctl proxy-config listeners productpage-v1-6c886ff494-7vxhs --port 15001 -o json\n[\n    {\n        \"name\": \"virtualOutbound\",\n        \"address\": {\n            \"socketAddress\": {\n                \"address\": \"0.0.0.0\",\n                \"portValue\": 15001\n            }\n        },\n        \"filterChains\": [\n            {\n                \"filters\": [\n                    {\n                        \"name\": \"istio.stats\",\n                        \"typedConfig\": {\n                            \"@type\": \"type.googleapis.com\/udpa.type.v1.TypedStruct\",\n                            \"typeUrl\": \"type.googleapis.com\/envoy.extensions.filters.network.wasm.v3.Wasm\",\n                            \"value\": {\n                                \"config\": {\n                                    \"configuration\": \"{\\n  \\\"debug\\\": \\\"false\\\",\\n  \\\"stat_prefix\\\": \\\"istio\\\"\\n}\\n\",\n                                    \"root_id\": \"stats_outbound\",\n                                    \"vm_config\": {\n                                        \"code\": {\n                                            \"local\": {\n                                                \"inline_string\": \"envoy.wasm.stats\"\n                                            }\n                                        },\n                                        \"runtime\": \"envoy.wasm.runtime.null\",\n                                        \"vm_id\": \"tcp_stats_outbound\"\n                                    }\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"name\": \"envoy.tcp_proxy\",\n                        \"typedConfig\": {\n                            \"@type\": \"type.googleapis.com\/envoy.config.filter.network.tcp_proxy.v2.TcpProxy\",\n                            \"statPrefix\": \"PassthroughCluster\",\n                            \"cluster\": \"PassthroughCluster\"\n                        }\n                    }\n                ],\n                \"name\": \"virtualOutbound-catchall-tcp\"\n            }\n        ],\n        \"trafficDirection\": \"OUTBOUND\",\n        \"hiddenEnvoyDeprecatedUseOriginalDst\": true\n    }\n]\n
\n1. Our request is an outbound HTTP request to port 9080; this means it gets handed off to the 0.0.0.0:9080 virtual listener. This listener then looks up the route configuration in its configured RDS. In this case it will be looking up route 9080 in RDS configured by Istiod (via ADS).\n\n\n$ istioctl proxy-config listeners productpage-v1-6c886ff494-7vxhs -o json --address 0.0.0.0 --port 9080\n...\n\"rds\": {\n    \"configSource\": {\n        \"ads\": {},\n        \"resourceApiVersion\": \"V3\"\n    },\n    \"routeConfigName\": \"9080\"\n}\n...\n
\n1. The 9080 route configuration only has a virtual host for each service. Our request is heading to the reviews service, so Envoy will select the virtual host to which our request matches a domain. Once matched on domain, Envoy looks for the first route that matches the request. In this case we don't have any advanced routing, so there is only one route that matches on everything. This route tells Envoy to send the request to the outbound|9080||reviews.default.svc.cluster.local cluster.\n\n\n$ istioctl proxy-config routes productpage-v1-6c886ff494-7vxhs --name 9080 -o json\n[\n    {\n        \"name\": \"9080\",\n        \"virtualHosts\": [\n            {\n                \"name\": \"reviews.default.svc.cluster.local:9080\",\n                \"domains\": [\n                    \"reviews.default.svc.cluster.local\",\n                    \"reviews\",\n                    \"reviews.default.svc\",\n                    \"reviews.default\",\n                    \"10.98.88.0\"\n                ],\n                \"routes\": [\n                    {\n                        \"name\": \"default\",\n                        \"match\": {\n                            \"prefix\": \"\/\"\n                        },\n                        \"route\": {\n                            \"cluster\": \"outbound|9080||reviews.default.svc.cluster.local\",\n                            \"timeout\": \"0s\"\n                        }\n                    }\n                ]\n            }\n        ]\n    }\n]\n
\n1. This cluster is configured to retrieve the associated endpoints from Istiod (via ADS). So Envoy will then use the serviceName field as a key to look up the list of Endpoints and proxy the request to one of them.\n\n\n$ istioctl proxy-config cluster productpage-v1-6c886ff494-7vxhs --fqdn reviews.default.svc.cluster.local -o json\n[\n    {\n        \"name\": \"outbound|9080||reviews.default.svc.cluster.local\",\n        \"type\": \"EDS\",\n        \"edsClusterConfig\": {\n            \"edsConfig\": {\n                \"ads\": {},\n                \"resourceApiVersion\": \"V3\"\n            },\n            \"serviceName\": \"outbound|9080||reviews.default.svc.cluster.local\"\n        },\n        \"connectTimeout\": \"10s\",\n        \"circuitBreakers\": {\n            \"thresholds\": [\n                {\n                    \"maxConnections\": 4294967295,\n                    \"maxPendingRequests\": 4294967295,\n                    \"maxRequests\": 4294967295,\n                    \"maxRetries\": 4294967295\n                }\n            ]\n        }\n    }\n]\n
\n1. To see the endpoints currently available for this cluster, use the proxy-config endpoints command:\n\n\n$ istioctl proxy-config endpoints productpage-v1-6c886ff494-7vxhs --cluster \"outbound|9080||reviews.default.svc.cluster.local\"\nENDPOINT            STATUS      OUTLIER CHECK     CLUSTER\n172.17.0.7:9080     HEALTHY     OK                outbound|9080||reviews.default.svc.cluster.local\n172.17.0.8:9080     HEALTHY     OK                outbound|9080||reviews.default.svc.cluster.local\n172.17.0.9:9080     HEALTHY     OK                outbound|9080||reviews.default.svc.cluster.local\n
\n## Inspecting bootstrap configuration\n\nSo far we have looked at configuration retrieved (mostly) from Istiod, however Envoy requires some bootstrap configuration that includes information like where Istiod can be found. To view this, use the following command:\n\n\n$ istioctl proxy-config bootstrap -n istio-system istio-ingressgateway-7d6874b48f-qxhn5\n{\n    \"bootstrap\": {\n        \"node\": {\n            \"id\": \"router~172.30.86.14~istio-ingressgateway-7d6874b48f-qxhn5.istio-system~istio-system.svc.cluster.local\",\n            \"cluster\": \"istio-ingressgateway\",\n            \"metadata\": {\n                \"CLUSTER_ID\": \"Kubernetes\",\n                \"EXCHANGE_KEYS\": \"NAME,NAMESPACE,INSTANCE_IPS,LABELS,OWNER,PLATFORM_METADATA,WORKLOAD_NAME,MESH_ID,SERVICE_ACCOUNT,CLUSTER_ID\",\n                \"INSTANCE_IPS\": \"10.244.0.7\",\n                \"ISTIO_PROXY_SHA\": \"istio-proxy:f98b7e538920abc408fbc91c22a3b32bc854d9dc\",\n                \"ISTIO_VERSION\": \"1.7.0\",\n                \"LABELS\": {\n                    \"app\": \"istio-ingressgateway\",\n                    \"chart\": \"gateways\",\n                    \"heritage\": \"Tiller\",\n                    \"istio\": \"ingressgateway\",\n                    \"pod-template-hash\": \"68bf7d7f94\",\n                    \"release\": \"istio\",\n                    \"service.istio.io\/canonical-name\": \"istio-ingressgateway\",\n                    \"service.istio.io\/canonical-revision\": \"latest\"\n                },\n                \"MESH_ID\": \"cluster.local\",\n                \"NAME\": \"istio-ingressgateway-68bf7d7f94-sp226\",\n                \"NAMESPACE\": \"istio-system\",\n                \"OWNER\": \"kubernetes:\/\/apis\/apps\/v1\/namespaces\/istio-system\/deployments\/istio-ingressgateway\",\n                \"ROUTER_MODE\": \"sni-dnat\",\n                \"SDS\": \"true\",\n                \"SERVICE_ACCOUNT\": \"istio-ingressgateway-service-account\",\n                \"WORKLOAD_NAME\": \"istio-ingressgateway\"\n            },\n            \"userAgentBuildVersion\": {\n                \"version\": {\n                    \"majorNumber\": 1,\n                    \"minorNumber\": 15\n                },\n                \"metadata\": {\n                    \"build.type\": \"RELEASE\",\n                    \"revision.sha\": \"f98b7e538920abc408fbc91c22a3b32bc854d9dc\",\n                    \"revision.status\": \"Clean\",\n                    \"ssl.version\": \"BoringSSL\"\n                }\n            }\n        }\n    }\n}\n
\n## Verifying connectivity to Istiod\n\nVerifying connectivity to Istiod is a useful troubleshooting step. Every proxy container in the service mesh should be able to communicate with Istiod. This can be accomplished in a few simple steps:\n\n1. Create a curl pod:\n\n\n$ kubectl create namespace foo\n$ kubectl apply -f <(istioctl kube-inject -f samples\/curl\/curl.yaml) -n foo\n\n\n1. Test connectivity to Istiod using curl. The following example invokes the v1 registration API using default Istiod configuration parameters and mutual TLS enabled:\n\n\n$ kubectl exec $(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name}) -c curl -n foo -- curl -sS istiod.istio-system:15014\/version\n\n\nYou should receive a response listing the version of Istiod.\n
\n## What Envoy version is Istio using?\n\nTo find out the Envoy version used in deployment, you can exec into the container and query the server_info endpoint:\n\n\n$ kubectl exec -it productpage-v1-6b746f74dc-9stvs -c istio-proxy -n default -- pilot-agent request GET server_info --log_as_json | jq {version}\n{\n  \"version\": \"2d4ec97f3ac7b3256d060e1bb8aa6c415f5cef63\/1.17.0\/Clean\/RELEASE\/BoringSSL\"\n}\n"}
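The record above walks through Envoy's outbound lookup chain: virtual listener → RDS route configuration → virtual host (domain match) → route (first prefix match) → cluster → EDS endpoint. As a reading aid, here is a minimal Python sketch of that lookup *order* only. It is a toy model, not Envoy's implementation (no wildcard domains, route ranking, weighted clusters, or load balancing); the dictionary literals mirror the `istioctl proxy-config` output shown above, and the `resolve` helper is invented for illustration.

```python
# Toy model of the outbound resolution chain described in the istioctl output above.
# NOT Envoy's real matching logic; data literals mirror the example JSON dumps.

LISTENERS = {9080: "9080"}  # virtual listener port -> RDS routeConfigName

ROUTE_CONFIGS = {
    "9080": [  # route configuration "9080": one virtual host per service
        {
            "name": "reviews.default.svc.cluster.local:9080",
            "domains": ["reviews.default.svc.cluster.local", "reviews",
                        "reviews.default.svc", "reviews.default", "10.98.88.0"],
            # a single catch-all route sending traffic to the reviews cluster
            "routes": [{"prefix": "/",
                        "cluster": "outbound|9080||reviews.default.svc.cluster.local"}],
        }
    ]
}

ENDPOINTS = {  # EDS: cluster serviceName -> healthy endpoints
    "outbound|9080||reviews.default.svc.cluster.local":
        ["172.17.0.7:9080", "172.17.0.8:9080", "172.17.0.9:9080"],
}

def resolve(host: str, port: int, path: str) -> str:
    """Walk listener -> route config -> virtual host -> route -> cluster -> endpoint."""
    route_name = LISTENERS[port]                 # 0.0.0.0:9080 listener consults RDS route "9080"
    for vhost in ROUTE_CONFIGS[route_name]:
        if host in vhost["domains"]:             # pick the virtual host matching the Host header
            for route in vhost["routes"]:
                if path.startswith(route["prefix"]):       # first matching route wins
                    return ENDPOINTS[route["cluster"]][0]  # EDS lookup (real Envoy load-balances)
    raise LookupError("no route: traffic would fall through to PassthroughCluster")

print(resolve("reviews", 9080, "/reviews/0"))  # -> 172.17.0.7:9080
```

Requests to an unknown host raise instead of matching, which corresponds to traffic falling through to the `virtualOutbound` listener's `PassthroughCluster`.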
{"questions":"istio title Understand your Mesh with Istioctl Describe test no weight 30 aliases keywords traffic management istioctl debugging kubernetes docs ops troubleshooting istioctl describe owner istio wg user experience maintainers Shows you how to use istioctl describe to verify the configurations of a pod in your mesh","answers":"---\ntitle: Understand your Mesh with Istioctl Describe\ndescription: Shows you how to use istioctl describe to verify the configurations of a pod in your mesh.\nweight: 30\nkeywords: [traffic-management, istioctl, debugging, kubernetes]\naliases:\n  - \/docs\/ops\/troubleshooting\/istioctl-describe\nowner: istio\/wg-user-experience-maintainers\ntest: no\n---\n\n\n\nIn Istio 1.3, we included the [`istioctl experimental describe`](\/docs\/reference\/commands\/istioctl\/#istioctl-experimental-describe-pod)\ncommand. This CLI command provides you with the information needed to understand\nthe configuration impacting a pod. This guide shows\nyou how to use this experimental sub-command to see if a pod is in the mesh and\nverify its configuration.\n\nThe basic usage of the command is as follows:\n\n\n$ istioctl experimental describe pod <pod-name>[.<namespace>]\n\n\nAppending a namespace to the pod name has the same effect as using the `-n` option\nof `istioctl` to specify a non-default namespace.\n\n\nJust like all other `istioctl` commands, you can replace `experimental`\nwith `x` for convenience.\n\n\nThis guide assumes you have deployed the [Bookinfo](\/docs\/examples\/bookinfo\/)\nsample in your mesh. If you haven't already done so,\n[start the application's services](\/docs\/examples\/bookinfo\/#start-the-application-services)\nand [determine the IP and port of the ingress](\/docs\/examples\/bookinfo\/#determine-the-ingress-ip-and-port)\nbefore continuing.\n\n## Verify a pod is in the mesh\n\nThe `istioctl describe` command returns a warning if the Envoy\nproxy is not present in a pod or if the proxy has not started. 
Additionally, the command warns\nif some of the [Istio requirements for pods](\/docs\/ops\/deployment\/application-requirements\/)\nare not met.\n\nFor example, the following command produces a warning indicating a `kube-dns`\npod is not part of the service mesh because it has no sidecar:\n\n\n$ export KUBE_POD=$(kubectl -n kube-system get pod -l k8s-app=kube-dns -o jsonpath='{.items[0].metadata.name}')\n$ istioctl x describe pod -n kube-system $KUBE_POD\nPod: coredns-f9fd979d6-2zsxk\n   Pod Ports: 53\/UDP (coredns), 53 (coredns), 9153 (coredns)\nWARNING: coredns-f9fd979d6-2zsxk is not part of mesh; no Istio sidecar\n--------------------\n2021-01-22T16:10:14.080091Z     error   klog    an error occurred forwarding 42785 -> 15000: error forwarding port 15000 to pod 692362a4fe313005439a873a1019a62f52ecd02c3de9a0957cd0af8f947866e5, uid : failed to execute portforward in network namespace \"\/var\/run\/netns\/cni-3c000d0a-fb1c-d9df-8af8-1403e6803c22\": failed to dial 15000: dial tcp4 127.0.0.1:15000: connect: connection refused[]\nError: failed to execute command on sidecar: failure running port forward process: Get \"http:\/\/localhost:42785\/config_dump\": EOF\n\n\nThe command will not produce such a warning for a pod that is part of the mesh,\nthe Bookinfo `ratings` service for example, but instead will output the Istio configuration applied to the pod:\n\n\n$ export RATINGS_POD=$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')\n$ istioctl experimental describe pod $RATINGS_POD\nPod: ratings-v1-7dc98c7588-8jsbw\n   Pod Ports: 9080 (ratings), 15090 (istio-proxy)\n--------------------\nService: ratings\n   Port: http 9080\/HTTP targets pod port 9080\n\n\nThe output shows the following information:\n\n- The ports of the service container in the pod, `9080` for the `ratings` container in this example.\n- The ports of the `istio-proxy` container in the pod, `15090` in this example.\n- The protocol used by the service in the pod, `HTTP` over port 
`9080` in this example.\n\n## Verify destination rule configurations\n\nYou can use `istioctl describe` to see what\n[destination rules](\/docs\/concepts\/traffic-management\/#destination-rules) apply to requests\nto a pod. For example, apply the Bookinfo\n[mutual TLS destination rules](\/samples\/bookinfo\/networking\/destination-rule-all-mtls.yaml):\n\n\n$ kubectl apply -f @samples\/bookinfo\/networking\/destination-rule-all-mtls.yaml@\n\n\nNow describe the `ratings` pod again:\n\n\n$ istioctl x describe pod $RATINGS_POD\nPod: ratings-v1-f745cf57b-qrxl2\n   Pod Ports: 9080 (ratings), 15090 (istio-proxy)\n--------------------\nService: ratings\n   Port: http 9080\/HTTP\nDestinationRule: ratings for \"ratings\"\n   Matching subsets: v1\n      (Non-matching subsets v2,v2-mysql,v2-mysql-vm)\n   Traffic Policy TLS Mode: ISTIO_MUTUAL\n\n\nThe command now shows additional output:\n\n- The `ratings` destination rule applies to requests to the `ratings` service.\n- The subset of the `ratings` destination rule that matches the pod, `v1` in this example.\n- The other subsets defined by the destination rule.\n- The pod accepts either HTTP or mutual TLS requests but clients use mutual TLS.\n\n## Verify virtual service configurations\n\nWhen [virtual services](\/docs\/concepts\/traffic-management\/#virtual-services) configure\nroutes to a pod, `istioctl describe` will also include the routes in its output.\nFor example, apply the\n[Bookinfo virtual services](\/samples\/bookinfo\/networking\/virtual-service-all-v1.yaml)\nthat route all requests to `v1` pods:\n\n\n$ kubectl apply -f @samples\/bookinfo\/networking\/virtual-service-all-v1.yaml@\n\n\nThen, describe a pod implementing `v1` of the `reviews` service:\n\n\n$ export REVIEWS_V1_POD=$(kubectl get pod -l app=reviews,version=v1 -o jsonpath='{.items[0].metadata.name}')\n$ istioctl x describe pod $REVIEWS_V1_POD\n...\nVirtualService: reviews\n   1 HTTP route(s)\n\n\nThe output contains similar information to that shown 
previously for the `ratings` pod,\nbut it also includes the virtual service's routes to the pod.\n\nThe `istioctl describe` command doesn't just show the virtual services impacting the pod.\nIf a virtual service configures the service host of a pod but no traffic will reach it,\nthe command's output includes a warning. This case can occur if the virtual service\nactually blocks traffic by never routing traffic to the pod's subset. For\nexample:\n\n\n$ export REVIEWS_V2_POD=$(kubectl get pod -l app=reviews,version=v2 -o jsonpath='{.items[0].metadata.name}')\n$ istioctl x describe pod $REVIEWS_V2_POD\n...\nVirtualService: reviews\n   WARNING: No destinations match pod subsets (checked 1 HTTP routes)\n      Route to non-matching subset v1 for (everything)\n\n\nThe warning includes the cause of the problem, how many routes were checked, and\neven gives you information about the other routes in place. In this example,\nno traffic arrives at the `v2` pod because the route in the virtual service directs all\ntraffic to the `v1` subset.\n\nIf you now delete the Bookinfo destination rules:\n\n\n$ kubectl delete -f @samples\/bookinfo\/networking\/destination-rule-all-mtls.yaml@\n\n\nYou can see another useful feature of `istioctl describe`:\n\n\n$ istioctl x describe pod $REVIEWS_V1_POD\n...\nVirtualService: reviews\n   WARNING: No destinations match pod subsets (checked 1 HTTP routes)\n      Warning: Route to subset v1 but NO DESTINATION RULE defining subsets!\n\n\nThe output shows you that you deleted the destination rule but not the virtual\nservice that depends on it. 
The virtual service routes traffic to the `v1`\nsubset, but there is no destination rule defining the `v1` subset.\nThus, traffic destined for version `v1` can't flow to the pod.\n\nIf you refresh the browser to send a new request to Bookinfo at this\npoint, you would see the following message: `Error fetching product reviews`.\nTo fix the problem, reapply the destination rule:\n\n\n$ kubectl apply -f @samples\/bookinfo\/networking\/destination-rule-all-mtls.yaml@\n\n\nReloading the browser shows the app working again and\nrunning `istioctl experimental describe pod $REVIEWS_V1_POD` no longer produces\nwarnings.\n\n## Verifying traffic routes\n\nThe `istioctl describe` command shows split traffic weights too.\nFor example, run the following command to route 90% of traffic to the `v1` subset\nand 10% to the `v2` subset of the `reviews` service:\n\n\n$ kubectl apply -f @samples\/bookinfo\/networking\/virtual-service-reviews-90-10.yaml@\n\n\nNow describe the `reviews v1` pod:\n\n\n$ istioctl x describe pod $REVIEWS_V1_POD\n...\nVirtualService: reviews\n   Weight 90%\n\n\nThe output shows that the `reviews` virtual service has a weight of 90% for the\n`v1` subset.\n\nThis function is also helpful for other types of routing. 
For example, you can deploy\nheader-specific routing:\n\n\n$ kubectl apply -f @samples\/bookinfo\/networking\/virtual-service-reviews-jason-v2-v3.yaml@\n\n\nThen, describe the pod again:\n\n\n$ istioctl x describe pod $REVIEWS_V1_POD\n...\nVirtualService: reviews\n   WARNING: No destinations match pod subsets (checked 2 HTTP routes)\n      Route to non-matching subset v2 for (when headers are end-user=jason)\n      Route to non-matching subset v3 for (everything)\n\n\nThe output produces a warning since you are describing a pod in the `v1` subset.\nHowever, the virtual service configuration you applied routes traffic to the `v2`\nsubset if the header contains `end-user=jason` and to the `v3` subset in all\nother cases.\n\n## Verifying strict mutual TLS\n\nFollowing the [mutual TLS migration](\/docs\/tasks\/security\/authentication\/mtls-migration\/)\ninstructions, you can enable strict mutual TLS for the `ratings` service:\n\n\n$ kubectl apply -f - <<EOF\napiVersion: security.istio.io\/v1\nkind: PeerAuthentication\nmetadata:\n  name: ratings-strict\nspec:\n  selector:\n    matchLabels:\n      app: ratings\n  mtls:\n    mode: STRICT\nEOF\n\n\nRun the following command to describe the `ratings` pod:\n\n\n$ istioctl x describe pod $RATINGS_POD\nPilot reports that pod enforces mTLS and clients speak mTLS\n\n\nThe output reports that requests to the `ratings` pod are now locked down and secure.\n\nSometimes, however, a deployment breaks when switching mutual TLS to `STRICT`.\nThe likely cause is that the destination rule didn't match the new configuration.\nFor example, if you configure the Bookinfo clients to not use mutual TLS using the\n[plain HTTP destination rules](\/samples\/bookinfo\/networking\/destination-rule-all.yaml):\n\n\n$ kubectl apply -f @samples\/bookinfo\/networking\/destination-rule-all.yaml@\n\n\nIf you open Bookinfo in your browser, you see `Ratings service is currently unavailable`.\nTo learn why, run the following command:\n\n\n$ istioctl x 
describe pod $RATINGS_POD\n...\nWARNING Pilot predicts TLS Conflict on ratings-v1-f745cf57b-qrxl2 port 9080 (pod enforces mTLS, clients speak HTTP)\n  Check DestinationRule ratings\/default and AuthenticationPolicy ratings-strict\/default\n\n\nThe output includes a warning describing the conflict\nbetween the destination rule and the authentication policy.\n\nYou can restore correct behavior by applying a destination rule that uses\nmutual TLS:\n\n\n$ kubectl apply -f @samples\/bookinfo\/networking\/destination-rule-all-mtls.yaml@\n\n\n## Conclusion and cleanup\n\nOur goal with the `istioctl x describe` command is to help you understand the\ntraffic and security configurations in your Istio mesh.\n\nWe would love to hear your ideas for improvements!\nPlease join us at [https:\/\/discuss.istio.io](https:\/\/discuss.istio.io).\n\nTo remove the Bookinfo pods and configurations used in this guide, run the\nfollowing commands:\n\n\n$ kubectl delete -f @samples\/bookinfo\/platform\/kube\/bookinfo.yaml@\n$ kubectl delete -f @samples\/bookinfo\/networking\/bookinfo-gateway.yaml@\n$ kubectl delete -f @samples\/bookinfo\/networking\/destination-rule-all-mtls.yaml@\n$ kubectl delete -f @samples\/bookinfo\/networking\/virtual-service-all-v1.yaml@\n","site":"istio"}
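The `istioctl describe` record above reports warnings such as `No destinations match pod subsets` by comparing a pod's labels against the subsets a virtual service routes to. The following Python sketch illustrates that check in toy form. This is an assumed reconstruction of the logic, not istioctl's actual code; the pod labels, subset selectors, and route list are illustrative values taken from the `reviews` example above, and `describe_warnings` is an invented helper.

```python
# Toy sketch (assumed logic, not istioctl's implementation) of the subset check
# behind the "No destinations match pod subsets" warning described above.

POD_LABELS = {"app": "reviews", "version": "v1"}  # the reviews v1 pod being described

DESTINATION_RULE_SUBSETS = {  # DestinationRule: subset name -> label selector
    "v1": {"version": "v1"},
    "v2": {"version": "v2"},
    "v3": {"version": "v3"},
}

VIRTUAL_SERVICE_ROUTES = [  # destinations, as in virtual-service-reviews-jason-v2-v3.yaml
    {"subset": "v2", "match": "when headers are end-user=jason"},
    {"subset": "v3", "match": "everything"},
]

def describe_warnings(pod_labels, subsets, routes):
    """Return describe-style warnings for a pod given VS routes and DR subsets."""
    warnings, matched = [], False
    for route in routes:
        selector = subsets.get(route["subset"])
        if selector is None:  # route targets a subset no destination rule defines
            warnings.append(f"Route to subset {route['subset']} but NO DESTINATION RULE defining subsets!")
        elif all(pod_labels.get(k) == v for k, v in selector.items()):
            matched = True    # at least one route can reach this pod: no warning
        else:
            warnings.append(f"Route to non-matching subset {route['subset']} for ({route['match']})")
    if not matched:           # no route reaches the pod at all: summarize up front
        warnings.insert(0, f"No destinations match pod subsets (checked {len(routes)} HTTP routes)")
    return warnings

for w in describe_warnings(POD_LABELS, DESTINATION_RULE_SUBSETS, VIRTUAL_SERVICE_ROUTES):
    print(w)
```

With the header-routing example applied, the sketch reproduces the shape of the warning shown above: both routes target `v2`/`v3`, so no destination matches the `v1` pod.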
{"questions":"istio docs ops troubleshooting istioctl test no Istio includes a supplemental tool that provides debugging and diagnosis for Istio service mesh deployments title Using the Istioctl Command line Tool aliases owner istio wg user experience maintainers weight 10 keywords istioctl bash zsh shell command line help ops component debugging","answers":"---\ntitle: Using the Istioctl Command-line Tool\ndescription: Istio includes a supplemental tool that provides debugging and diagnosis for Istio service mesh deployments.\nweight: 10\nkeywords: [istioctl,bash,zsh,shell,command-line]\naliases:\n  - \/help\/ops\/component-debugging\n  - \/docs\/ops\/troubleshooting\/istioctl\nowner: istio\/wg-user-experience-maintainers\ntest: no\n---\n\nYou can gain insights into what individual components are doing by inspecting their\n[logs](\/docs\/ops\/diagnostic-tools\/component-logging\/) or peering inside via\n[introspection](\/docs\/ops\/diagnostic-tools\/controlz\/). If that's insufficient,\nthe steps below explain how to get under the hood.\n\nThe [`istioctl`](\/docs\/reference\/commands\/istioctl) tool is a configuration command line utility\nthat allows service operators to debug and diagnose their Istio service mesh deployments.\nThe Istio project also includes two helpful scripts for `istioctl` that enable auto-completion\nfor Bash and Zsh. Both of these scripts provide support for the currently available `istioctl` commands.\n\n\n`istioctl` only has auto-completion enabled for non-deprecated commands.\n\n\n## Before you begin\n\nWe recommend you use an `istioctl` version that is the same version as your Istio control plane.\nUsing matching versions helps avoid unforeseen issues.\n\n\nIf you have already [downloaded the Istio release](\/docs\/setup\/additional-setup\/download-istio-release\/), you should\nalready have `istioctl` and do not need to install it again.\n\n\n## Install \n\nInstall the `istioctl` binary with `curl`:\n\n1. 
Download the latest release with the command:\n\n    \n    $ curl -sL https:\/\/istio.io\/downloadIstioctl | sh -\n    \n\n1. Add the `istioctl` client to your path, on a macOS or Linux system:\n\n    \n    $ export PATH=$HOME\/.istioctl\/bin:$PATH\n    \n\n1. You can optionally enable the [auto-completion option](#enabling-auto-completion) when working with a bash or Zsh console.\n\n## Get an overview of your mesh\n\nYou can get an overview of your mesh using the `proxy-status` or `ps` command:\n\n\n$ istioctl proxy-status\n\n\nIf a proxy is missing from the output list it means that it is not currently connected to an istiod instance and so it\nwill not receive any configuration. Additionally, if it is marked stale, it likely means there are networking issues or\nistiod needs to be scaled.\n\n## Get proxy configuration\n\n[`istioctl`](\/docs\/reference\/commands\/istioctl) allows you to retrieve information\nabout proxy configuration using the `proxy-config` or `pc` command.\n\nFor example, to retrieve information about cluster configuration for the Envoy instance in a specific pod:\n\n\n$ istioctl proxy-config cluster <pod-name> [flags]\n\n\nTo retrieve information about bootstrap configuration for the Envoy instance in a specific pod:\n\n\n$ istioctl proxy-config bootstrap <pod-name> [flags]\n\n\nTo retrieve information about listener configuration for the Envoy instance in a specific pod:\n\n\n$ istioctl proxy-config listener <pod-name> [flags]\n\n\nTo retrieve information about route configuration for the Envoy instance in a specific pod:\n\n\n$ istioctl proxy-config route <pod-name> [flags]\n\n\nTo retrieve information about endpoint configuration for the Envoy instance in a specific pod:\n\n\n$ istioctl proxy-config endpoints <pod-name> [flags]\n\n\nSee [Debugging Envoy and Istiod](\/docs\/ops\/diagnostic-tools\/proxy-cmd\/) for more advice on interpreting this information.\n\n## `istioctl` auto-completion\n\n\n\n\n\nIf you are using the macOS operating 
system with the Zsh terminal shell, make sure that\nthe `zsh-completions` package is installed. With the [brew](https:\/\/brew.sh) package manager\nfor macOS, you can check to see if the `zsh-completions` package is installed with the following command:\n\n\n$ brew list zsh-completions\n\/usr\/local\/Cellar\/zsh-completions\/0.34.0\/share\/zsh-completions\/ (147 files)\n\n\nIf you receive `Error: No such keg: \/usr\/local\/Cellar\/zsh-completion`,\nproceed with installing the `zsh-completions` package with the following command:\n\n\n$ brew install zsh-completions\n\n\nOnce the `zsh-completions` package has been installed on your macOS system, add the following to your `~\/.zshrc` file:\n\n\n    if type brew &>\/dev\/null; then\n      FPATH=$(brew --prefix)\/share\/zsh-completions:$FPATH\n\n      autoload -Uz compinit\n      compinit\n    fi\n\n\nYou may also need to force rebuild `zcompdump`:\n\n\n$ rm -f ~\/.zcompdump; compinit\n\n\nAdditionally, if you receive `Zsh compinit: insecure directories` warnings\nwhen attempting to load these completions, you may need to run this:\n\n\n$ chmod -R go-w \"$(brew --prefix)\/share\"\n\n\n\n\n\n\nIf you are using a Linux-based operating system, you can install the Bash completion package\nwith the `apt-get install bash-completion` command for Debian-based Linux distributions or\n`yum install bash-completion` for RPM-based Linux distributions, the two most common cases.\n\nOnce the `bash-completion` package has been installed on your Linux system,\nadd the following line to your `~\/.bash_profile` file:\n\n\n[[ -r \"\/usr\/local\/etc\/profile.d\/bash_completion.sh\" ]] && . 
\"\/usr\/local\/etc\/profile.d\/bash_completion.sh\"\n\n\n\n\n\n\n### Enabling auto-completion\n\nTo enable `istioctl` completion on your system, follow the steps for your preferred shell:\n\n\nYou will need to download the full Istio release containing the auto-completion files (in the `\/tools` directory).\nIf you haven't already done so, [download the full release](\/docs\/setup\/additional-setup\/download-istio-release\/) now.\n\n\n\n\n\n\nInstalling the bash auto-completion file\n\nIf you are using bash, the `istioctl` auto-completion file is located in the `tools` directory.\nTo use it, copy the `istioctl.bash` file to your home directory, then add the following line to\nsource the `istioctl` tab completion file from your `.bashrc` file:\n\n\n$ source ~\/istioctl.bash\n\n\n\n\n\n\nInstalling the Zsh auto-completion file\n\nFor Zsh users, the `istioctl` auto-completion file is located in the `tools` directory.\nCopy the `_istioctl` file to your home directory, or any directory of your choosing\n(update directory in script snippet below), and source the `istioctl` auto-completion file\nin your `.zshrc` file as follows:\n\n\nsource ~\/_istioctl\n\n\nYou may also add the `_istioctl` file to a directory listed in the `fpath` variable.\nTo achieve this, place the `_istioctl` file in an existing directory in the `fpath`,\nor create a new directory and add it to the `fpath` variable in your `~\/.zshrc` file.\n\n\n\nIf you get an error like `complete:13: command not found: compdef`,\nthen add the following to the beginning of your `~\/.zshrc` file:\n\n\n$ autoload -Uz compinit\n$ compinit\n\n\nIf your auto-completion is not working, try again after restarting your terminal.\nIf auto-completion still does not work, try resetting the completion cache using\nthe above commands in your terminal.\n\n\n\n\n\n\n\n### Using auto-completion\n\nIf the `istioctl` completion file has been installed correctly, press the Tab key\nwhile writing an `istioctl` command, and it should 
return a set of command suggestions\nfor you to choose from:\n\n\n$ istioctl proxy-<TAB>\nproxy-config proxy-status\n","site":"istio","answers_cleaned":"    title  Using the Istioctl Command line Tool description  Istio includes a supplemental tool that provides debugging and diagnosis for Istio service mesh deployments  weight  10 keywords   istioctl bash zsh shell command line  aliases       help ops component debugging      docs ops troubleshooting istioctl owner  istio wg user experience maintainers test  no      You can gain insights into what individual components are doing by inspecting their  logs   docs ops diagnostic tools component logging   or peering inside via  introspection   docs ops diagnostic tools controlz    If that s insufficient  the steps below explain how to get under the hood   The   istioctl    docs reference commands istioctl  tool is a configuration command line utility that allows service operators to debug and diagnose their Istio service mesh deployments  The Istio project also includes two helpful scripts for  istioctl  that enable auto completion for Bash and Zsh  Both of these scripts provide support for the currently available  istioctl  commands     istioctl  only has auto completion enabled for non deprecated commands       Before you begin  We recommend you use an  istioctl  version that is the same version as your Istio control plane  Using matching versions helps avoid unforeseen issues    If you have already  downloaded the Istio release   docs setup additional setup download istio release    you should already have  istioctl  and do not need to install it again       Install   Install the  istioctl  binary with  curl    1  Download the latest release with the command              curl  sL https   istio io downloadIstioctl   sh         1  Add the  istioctl  client to your path  on a macOS or Linux system              export PATH  HOME  istioctl bin  PATH       1  You can optionally enable the  auto completion option   
enabling auto completion  when working with a bash or Zsh console      Get an overview of your mesh  You can get an overview of your mesh using the  proxy status  or  ps  command      istioctl proxy status   If a proxy is missing from the output list it means that it is not currently connected to an istiod instance and so it will not receive any configuration  Additionally  if it is marked stale  it likely means there are networking issues or istiod needs to be scaled      Get proxy configuration    istioctl    docs reference commands istioctl  allows you to retrieve information about proxy configuration using the  proxy config  or  pc  command   For example  to retrieve information about cluster configuration for the Envoy instance in a specific pod      istioctl proxy config cluster  pod name   flags    To retrieve information about bootstrap configuration for the Envoy instance in a specific pod      istioctl proxy config bootstrap  pod name   flags    To retrieve information about listener configuration for the Envoy instance in a specific pod      istioctl proxy config listener  pod name   flags    To retrieve information about route configuration for the Envoy instance in a specific pod      istioctl proxy config route  pod name   flags    To retrieve information about endpoint configuration for the Envoy instance in a specific pod      istioctl proxy config endpoints  pod name   flags    See  Debugging Envoy and Istiod   docs ops diagnostic tools proxy cmd   for more advice on interpreting this information       istioctl  auto completion      If you are using the macOS operating system with the Zsh terminal shell  make sure that the  zsh completions  package is installed  With the  brew  https   brew sh  package manager for macOS  you can check to see if the  zsh completions  package is installed with the following command      brew list zsh completions  usr local Cellar zsh completions 0 34 0 share zsh completions   147 files    If you receive  Error  No 
such keg   usr local Cellar zsh completion   proceed with installing the  zsh completions  package with the following command      brew install zsh completions   Once the  zsh completions package  has been installed on your macOS system  add the following to your     zshrc  file        if type brew    dev null  then       FPATH   brew   prefix  share zsh completions  FPATH        autoload  Uz compinit       compinit     fi   You may also need to force rebuild  zcompdump       rm  f    zcompdump  compinit   Additionally  if you receive  Zsh compinit  insecure directories  warnings when attempting to load these completions  you may need to run this      chmod  R go w    brew   prefix  share        If you are using a Linux based operating system  you can install the Bash completion package with the  apt get install bash completion  command for Debian based Linux distributions or  yum install bash completion  for RPM based Linux distributions  the two most common occurrences   Once the  bash completion  package has been installed on your Linux system  add the following line to your     bash profile  file        r   usr local etc profile d bash completion sh            usr local etc profile d bash completion sh            Enabling auto completion  To enable  istioctl  completion on your system  follow the steps for your preferred shell    You will need to download the full Istio release containing the auto completion files  in the   tools  directory   If you haven t already done so   download the full release   docs setup additional setup download istio release   now        Installing the bash auto completion file  If you are using bash  the  istioctl  auto completion file is located in the  tools  directory  To use it  copy the  istioctl bash  file to your home directory  then add the following line to source the  istioctl  tab completion file from your   bashrc  file      source   istioctl bash       Installing the Zsh auto completion file  For Zsh users  the  
istioctl  auto completion file is located in the  tools  directory  Copy the   istioctl  file to your home directory  or any directory of your choosing  update directory in script snippet below   and source the  istioctl  auto completion file in your   zshrc  file as follows    source    istioctl   You may also add the   istioctl  file to a directory listed in the  fpath  variable  To achieve this  place the   istioctl  file in an existing directory in the  fpath   or create a new directory and add it to the  fpath  variable in your     zshrc  file     If you get an error like  complete 13  command not found  compdef   then add the following to the beginning of your     zshrc  file      autoload  Uz compinit   compinit   If your auto completion is not working  try again after restarting your terminal  If auto completion still does not work  try resetting the completion cache using the above commands in your terminal             Using auto completion  If the  istioctl  completion file has been installed correctly  press the Tab key while writing an  istioctl  command  and it should return a set of command suggestions for you to choose from      istioctl proxy  TAB  proxy config proxy status "}
{"questions":"istio title Troubleshooting Multicluster test no keywords debug multicluster multi network envoy weight 90 This page describes how to troubleshoot issues with Istio deployed to multiple clusters and or networks owner istio wg environments maintainers Describes tools and techniques to diagnose issues with multicluster and multi network installations","answers":"---\ntitle: Troubleshooting Multicluster\ndescription: Describes tools and techniques to diagnose issues with multicluster and multi-network installations.\nweight: 90\nkeywords: [debug,multicluster,multi-network,envoy]\nowner: istio\/wg-environments-maintainers\ntest: no\n---\n\nThis page describes how to troubleshoot issues with Istio deployed to multiple clusters and\/or networks.\nBefore reading this, you should take the steps in [Multicluster Installation](\/docs\/setup\/install\/multicluster\/)\nand read the [Deployment Models](\/docs\/ops\/deployment\/deployment-models\/) guide.\n\n## Cross-Cluster Load Balancing\n\nThe most common, but also broad problem with multi-network installations is that cross-cluster load balancing doesn\u2019t work. Usually this manifests itself as only seeing responses from the cluster-local instance of a Service:\n\n\n$ for i in $(seq 10); do kubectl --context=$CTX_CLUSTER1 -n sample exec curl-dd98b5f48-djwdw -c curl -- curl -s helloworld:5000\/hello; done\nHello version: v1, instance: helloworld-v1-578dd69f69-j69pf\nHello version: v1, instance: helloworld-v1-578dd69f69-j69pf\nHello version: v1, instance: helloworld-v1-578dd69f69-j69pf\n...\n\n\nWhen following the guide to [verify multicluster installation](\/docs\/setup\/install\/multicluster\/verify\/)\nwe would expect both `v1` and `v2` responses, indicating traffic is going to both clusters.\n\nThere are many possible causes to the problem:\n\n### Connectivity and firewall issues\n\nIn some environments it may not be apparent that a firewall is blocking traffic between your clusters. 
It's possible\nthat `ICMP` (ping) traffic may succeed, but HTTP and other types of traffic do not. This can appear as a timeout, or\nin some cases a more confusing error such as:\n\n\nupstream connect error or disconnect\/reset before headers. reset reason: local reset, transport failure reason: TLS error: 268435612:SSL routines:OPENSSL_internal:HTTP_REQUEST\n\n\nWhile Istio provides service discovery capabilities to make it easier, cross-cluster traffic should still succeed\nif pods in each cluster are on a single network without Istio. To rule out issues with TLS\/mTLS, you can do a manual\ntraffic test using pods without Istio sidecars.\n\nIn each cluster, create a new namespace for this test. Do _not_ enable sidecar injection:\n\n\n$ kubectl create --context=\"${CTX_CLUSTER1}\" namespace uninjected-sample\n$ kubectl create --context=\"${CTX_CLUSTER2}\" namespace uninjected-sample\n\n\nThen deploy the same apps used in [verify multicluster installation](\/docs\/setup\/install\/multicluster\/verify\/):\n\n\n$ kubectl apply --context=\"${CTX_CLUSTER1}\" \\\n    -f samples\/helloworld\/helloworld.yaml \\\n    -l service=helloworld -n uninjected-sample\n$ kubectl apply --context=\"${CTX_CLUSTER2}\" \\\n    -f samples\/helloworld\/helloworld.yaml \\\n    -l service=helloworld -n uninjected-sample\n$ kubectl apply --context=\"${CTX_CLUSTER1}\" \\\n    -f samples\/helloworld\/helloworld.yaml \\\n    -l version=v1 -n uninjected-sample\n$ kubectl apply --context=\"${CTX_CLUSTER2}\" \\\n    -f samples\/helloworld\/helloworld.yaml \\\n    -l version=v2 -n uninjected-sample\n$ kubectl apply --context=\"${CTX_CLUSTER1}\" \\\n    -f samples\/curl\/curl.yaml -n uninjected-sample\n$ kubectl apply --context=\"${CTX_CLUSTER2}\" \\\n    -f samples\/curl\/curl.yaml -n uninjected-sample\n\n\nVerify that there is a helloworld pod running in `cluster2`, using the `-o wide` flag, so we can get the Pod IP:\n\n\n$ kubectl --context=\"${CTX_CLUSTER2}\" -n uninjected-sample get pod -o 
wide\nNAME                             READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES\ncurl-557747455f-jdsd8            1\/1     Running   0          41s   10.100.0.2   node-2   <none>           <none>\nhelloworld-v2-54df5f84b-z28p5    1\/1     Running   0          43s   10.100.0.1   node-1   <none>           <none>\n\n\nTake note of the `IP` column for `helloworld`. In this case, it is `10.100.0.1`:\n\n\n$ REMOTE_POD_IP=10.100.0.1\n\n\nNext, attempt to send traffic from the `curl` pod in `cluster1` directly to this Pod IP:\n\n\n$ kubectl exec --context=\"${CTX_CLUSTER1}\" -n uninjected-sample -c curl \\\n    \"$(kubectl get pod --context=\"${CTX_CLUSTER1}\" -n uninjected-sample -l \\\n    app=curl -o jsonpath='{.items[0].metadata.name}')\" \\\n    -- curl -sS $REMOTE_POD_IP:5000\/hello\nHello version: v2, instance: helloworld-v2-54df5f84b-z28p5\n\n\nIf successful, there should be responses only from `helloworld-v2`. Repeat the steps, but send traffic from `cluster2`\nto `cluster1`.\n\nIf this succeeds, you can rule out connectivity issues. If it does not, the cause of the problem may lie outside your\nIstio configuration.\n\n### Locality Load Balancing\n\n[Locality load balancing](\/docs\/tasks\/traffic-management\/locality-load-balancing\/failover\/#configure-locality-failover)\ncan be used to make clients prefer that traffic go to the nearest destination. If the clusters\nare in different localities (region\/zone), locality load balancing will prefer the local-cluster and is working as\nintended. If locality load balancing is disabled, or the clusters are in the same locality, there may be another issue.\n\n### Trust Configuration\n\nCross-cluster traffic, as with intra-cluster traffic, relies on a common root of trust between the proxies. By default,\neach Istio installation uses its own individually generated root certificate authority. For multi-cluster, we\nmust manually configure a shared root of trust. 
Follow Plug-in Certs below or read [Identity and Trust Models](\/docs\/ops\/deployment\/deployment-models\/#identity-and-trust-models)\nto learn more.\n\n**Plug-in Certs:**\n\nTo verify certs are configured correctly, you can compare the root-cert in each cluster:\n\n\n$ diff \\\n   <(kubectl --context=\"${CTX_CLUSTER1}\" -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\\.pem}') \\\n   <(kubectl --context=\"${CTX_CLUSTER2}\" -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\\.pem}')\n\n\nIf the root-certs do not match or the secret does not exist at all, you can follow the [Plugin CA Certs](\/docs\/tasks\/security\/cert-management\/plugin-ca-cert\/)\nguide, ensuring to run the steps for every cluster.\n\n### Step-by-step Diagnosis\n\nIf you've gone through the sections above and are still having issues, then it's time to dig a little deeper.\n\nThe following steps assume you're following the [HelloWorld verification](\/docs\/setup\/install\/multicluster\/verify\/).\nBefore continuing, make sure both `helloworld` and `curl` are deployed in each cluster.\n\nFrom each cluster, find the endpoints the `curl` service has for `helloworld`:\n\n\n$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld\n\n\nTroubleshooting information differs based on the cluster that is the source of traffic:\n\n\n\n\n\n\n$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld\n10.0.0.11:5000                   HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local\n\n\nOnly one endpoint is shown, indicating the control plane cannot read endpoints from the remote cluster.\nVerify that remote secrets are configured properly.\n\n\n$ kubectl get secrets --context=$CTX_CLUSTER1 -n istio-system -l \"istio\/multiCluster=true\"\n\n\n* If the secret is missing, create it.\n* If the secret is present:\n    * Look at the config in the secret. 
Make sure the cluster name is used as the data key for the remote `kubeconfig`.\n    * If the secret looks correct, check the logs of `istiod` for connectivity or permissions issues reaching the\n     remote Kubernetes API server. Log messages may include `Failed to add remote cluster from secret` along with an\n     error reason.\n\n\n\n\n\n\n$ istioctl --context $CTX_CLUSTER2 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld\n10.0.1.11:5000                   HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local\n\n\nOnly one endpoint is shown, indicating the control plane cannot read endpoints from the remote cluster.\nVerify that remote secrets are configured properly.\n\n\n$ kubectl get secrets --context=$CTX_CLUSTER1 -n istio-system -l \"istio\/multiCluster=true\"\n\n\n* If the secret is missing, create it.\n* If the secret is present and the endpoint is a Pod in the **primary** cluster:\n    * Look at the config in the secret. Make sure the cluster name is used as the data key for the remote `kubeconfig`.\n    * If the secret looks correct, check the logs of `istiod` for connectivity or permissions issues reaching the\n     remote Kubernetes API server. Log messages may include `Failed to add remote cluster from secret` along with an\n     error reason.\n* If the secret is present and the endpoint is a Pod in the **remote** cluster:\n    * The proxy is reading configuration from an istiod inside the remote cluster. When a remote cluster has an\n     in-cluster istiod, it is only meant for sidecar injection and CA. You can verify this is the problem by looking\n     for a Service named `istiod-remote` in the `istio-system` namespace. 
If it's missing, reinstall making sure\n     `values.global.remotePilotAddress` is set.\n\n\n\n\n\nThe steps for Primary and Remote clusters still apply for multi-network, although multi-network has an additional case:\n\n\n$ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld\n10.0.5.11:5000                   HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local\n10.0.6.13:5000                   HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local\n\n\nIn multi-network, we expect one of the endpoint IPs to match the remote cluster's east-west gateway public IP. Seeing\nmultiple Pod IPs indicates one of two things:\n\n* The address of the gateway for the remote network cannot be determined.\n* The network of either the client or server pod cannot be determined.\n\n**The address of the gateway for the remote network cannot be determined:**\n\nIn the remote cluster that cannot be reached, check that the Service has an External IP:\n\n\n$ kubectl -n istio-system get service -l \"istio=eastwestgateway\"\nNAME                      TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                                                           AGE\nistio-eastwestgateway    LoadBalancer   10.8.17.119   <PENDING>        15021:31781\/TCP,15443:30498\/TCP,15012:30879\/TCP,15017:30336\/TCP   76m\n\n\nIf the `EXTERNAL-IP` is stuck in `<PENDING>`, the environment may not support `LoadBalancer` services. In this case, it\nmay be necessary to customize the `spec.externalIPs` section of the Service to manually give the Gateway an IP reachable\nfrom outside the cluster.\n\nIf the external IP is present, check that the Service includes a `topology.istio.io\/network` label with the correct\nvalue. 
If that is incorrect, reinstall the gateway and make sure to set the `--network` flag on the generation script.\n\n**The network of either the client or server pod cannot be determined:**\n\nOn the source pod, check the proxy metadata.\n\n\n$ kubectl get pod $CURL_POD_NAME \\\n  -o jsonpath=\"{.spec.containers[*].env[?(@.name=='ISTIO_META_NETWORK')].value}\"\n\n\n\n$ kubectl get pod $HELLOWORLD_POD_NAME \\\n  -o jsonpath=\"{.metadata.labels.topology\\.istio\\.io\/network}\"\n\n\nIf either of these values aren't set, or have the wrong value, istiod may treat the source and client proxies as being on the same network and send network-local endpoints.\nWhen these aren't set, check that `values.global.network` was set properly during install, or that the injection webhook is configured correctly.\n\nIstio determines the network of a Pod using the `topology.istio.io\/network` label which is set during injection. For\nnon-injected Pods, Istio relies on the `topology.istio.io\/network` label set on the system namespace in the cluster.\n\nIn each cluster, check the network:\n\n\n$ kubectl --context=\"${CTX_CLUSTER1}\" get ns istio-system -ojsonpath='{.metadata.labels.topology\\.istio\\.io\/network}'\n\n\nIf the above command doesn't output the expected network name, set the label:\n\n\n$ kubectl --context=\"${CTX_CLUSTER1}\" label namespace istio-system topology.istio.io\/network=network1\n\n\n\n\n","site":"istio","answers_cleaned":"    title  Troubleshooting Multicluster description  Describes tools and techniques to diagnose issues with multicluster and multi network installations  weight  90 keywords   debug multicluster multi network envoy  owner  istio wg environments maintainers test  no      This page describes how to troubleshoot issues with Istio deployed to multiple clusters and or networks  Before reading this  you should take the steps in  Multicluster Installation   docs setup install multicluster   and read the  Deployment Models   docs ops deployment deployment 
models   guide      Cross Cluster Load Balancing  The most common  but also broad problem with multi network installations is that cross cluster load balancing doesn t work  Usually this manifests itself as only seeing responses from the cluster local instance of a Service      for i in   seq 10   do kubectl   context  CTX CLUSTER1  n sample exec curl dd98b5f48 djwdw  c curl    curl  s helloworld 5000 hello  done Hello version  v1  instance  helloworld v1 578dd69f69 j69pf Hello version  v1  instance  helloworld v1 578dd69f69 j69pf Hello version  v1  instance  helloworld v1 578dd69f69 j69pf       When following the guide to  verify multicluster installation   docs setup install multicluster verify   we would expect both  v1  and  v2  responses  indicating traffic is going to both clusters   There are many possible causes to the problem       Connectivity and firewall issues  In some environments it may not be apparent that a firewall is blocking traffic between your clusters  It s possible that  ICMP   ping  traffic may succeed  but HTTP and other types of traffic do not  This can appear as a timeout  or in some cases a more confusing error such as    upstream connect error or disconnect reset before headers  reset reason  local reset  transport failure reason  TLS error  268435612 SSL routines OPENSSL internal HTTP REQUEST   While Istio provides service discovery capabilities to make it easier  cross cluster traffic should still succeed if pods in each cluster are on a single network without Istio  To rule out issues with TLS mTLS  you can do a manual traffic test using pods without Istio sidecars   In each cluster  create a new namespace for this test  Do  not  enable sidecar injection      kubectl create   context    CTX CLUSTER1   namespace uninjected sample   kubectl create   context    CTX CLUSTER2   namespace uninjected sample   Then deploy the same apps used in  verify multicluster installation   docs setup install multicluster verify        kubectl apply   
context    CTX CLUSTER1          f samples helloworld helloworld yaml        l service helloworld  n uninjected sample   kubectl apply   context    CTX CLUSTER2          f samples helloworld helloworld yaml        l service helloworld  n uninjected sample   kubectl apply   context    CTX CLUSTER1          f samples helloworld helloworld yaml        l version v1  n uninjected sample   kubectl apply   context    CTX CLUSTER2          f samples helloworld helloworld yaml        l version v2  n uninjected sample   kubectl apply   context    CTX CLUSTER1          f samples curl curl yaml  n uninjected sample   kubectl apply   context    CTX CLUSTER2          f samples curl curl yaml  n uninjected sample   Verify that there is a helloworld pod running in  cluster2   using the   o wide  flag  so we can get the Pod IP      kubectl   context    CTX CLUSTER2    n uninjected sample get pod  o wide NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES curl 557747455f jdsd8            1 1     Running   0          41s   10 100 0 2   node 2    none             none  helloworld v2 54df5f84b z28p5    1 1     Running   0          43s   10 100 0 1   node 1    none             none    Take note of the  IP  column for  helloworld   In this case  it is  10 100 0 1       REMOTE POD IP 10 100 0 1   Next  attempt to send traffic from the  curl  pod in  cluster1  directly to this Pod IP      kubectl exec   context    CTX CLUSTER1    n uninjected sample  c curl          kubectl get pod   context    CTX CLUSTER1    n uninjected sample  l       app curl  o jsonpath    items 0  metadata name              curl  sS  REMOTE POD IP 5000 hello Hello version  v2  instance  helloworld v2 54df5f84b z28p5   If successful  there should be responses only from  helloworld v2   Repeat the steps  but send traffic from  cluster2  to  cluster1    If this succeeds  you can rule out connectivity issues  If it does not  the cause of the problem 
may lie outside your Istio configuration       Locality Load Balancing   Locality load balancing   docs tasks traffic management locality load balancing failover  configure locality failover  can be used to make clients prefer that traffic go to the nearest destination  If the clusters are in different localities  region zone   locality load balancing will prefer the local cluster and is working as intended  If locality load balancing is disabled  or the clusters are in the same locality  there may be another issue       Trust Configuration  Cross cluster traffic  as with intra cluster traffic  relies on a common root of trust between the proxies  The default Istio installation will use their own individually generated root certificate authorities  For multi cluster  we must manually configure a shared root of trust  Follow Plug in Certs below or read  Identity and Trust Models   docs ops deployment deployment models  identity and trust models  to learn more     Plug in Certs     To verify certs are configured correctly  you can compare the root cert in each cluster      diff        kubectl   context    CTX CLUSTER1    n istio system get secret cacerts  ojsonpath    data root cert  pem           kubectl   context    CTX CLUSTER2    n istio system get secret cacerts  ojsonpath    data root cert  pem      If the root certs do not match or the secret does not exist at all  you can follow the  Plugin CA Certs   docs tasks security cert management plugin ca cert   guide  ensuring to run the steps for every cluster       Step by step Diagnosis  If you ve gone through the sections above and are still having issues  then it s time to dig a little deeper   The following steps assume you re following the  HelloWorld verification   docs setup install multicluster verify    Before continuing  make sure both  helloworld  and  curl  are deployed in each cluster   From each cluster  find the endpoints the  curl  service has for  helloworld       istioctl   context  CTX CLUSTER1 
proxy config endpoint curl dd98b5f48 djwdw sample   grep helloworld   Troubleshooting information differs based on the cluster that is the source of traffic          istioctl   context  CTX CLUSTER1 proxy config endpoint curl dd98b5f48 djwdw sample   grep helloworld 10 0 0 11 5000                   HEALTHY     OK                outbound 5000  helloworld sample svc cluster local   Only one endpoint is shown  indicating the control plane cannot read endpoints from the remote cluster  Verify that remote secrets are configured properly      kubectl get secrets   context  CTX CLUSTER1  n istio system  l  istio multiCluster true      If the secret is missing  create it    If the secret is present        Look at the config in the secret  Make sure the cluster name is used as the data key for the remote  kubeconfig         If the secret looks correct  check the logs of  istiod  for connectivity or permissions issues reaching the      remote Kubernetes API server  Log messages may include  Failed to add remote cluster from secret  along with an      error reason          istioctl   context  CTX CLUSTER2 proxy config endpoint curl dd98b5f48 djwdw sample   grep helloworld 10 0 1 11 5000                   HEALTHY     OK                outbound 5000  helloworld sample svc cluster local   Only one endpoint is shown  indicating the control plane cannot read endpoints from the remote cluster  Verify that remote secrets are configured properly      kubectl get secrets   context  CTX CLUSTER1  n istio system  l  istio multiCluster true      If the secret is missing  create it    If the secret is present and the endpoint is a Pod in the   primary   cluster        Look at the config in the secret  Make sure the cluster name is used as the data key for the remote  kubeconfig         If the secret looks correct  check the logs of  istiod  for connectivity or permissions issues reaching the      remote Kubernetes API server  Log messages may include  Failed to add remote cluster from 
secret  along with an      error reason    If the secret is present and the endpoint is a Pod in the   remote   cluster        The proxy is reading configuration from an istiod inside the remote cluster  When a remote cluster has an in       cluster istiod   it is only meant for sidecar injection and CA  You can verify this is the problem by looking      for a Service named  istiod remote  in the  istio system  namespace  If it s missing  reinstall making sure       values global remotePilotAddress  is set       The steps for Primary and Remote clusters still apply for multi network  although multi network has an additional case      istioctl   context  CTX CLUSTER1 proxy config endpoint curl dd98b5f48 djwdw sample   grep helloworld 10 0 5 11 5000                   HEALTHY     OK                outbound 5000  helloworld sample svc cluster local 10 0 6 13 5000                   HEALTHY     OK                outbound 5000  helloworld sample svc cluster local   In multi network  we expect one of the endpoint IPs to match the remote cluster s east west gateway public IP  Seeing multiple Pod IPs indicates one of two things     The address of the gateway for the remote network cannot be determined    The network of either the client or server pod cannot be determined     The address of the gateway for the remote network cannot be determined     In the remote cluster that cannot be reached  check that the Service has an External IP      kubectl  n istio system get service  l  istio eastwestgateway  NAME                      TYPE           CLUSTER IP    EXTERNAL IP      PORT S                                                            AGE istio eastwestgateway    LoadBalancer   10 8 17 119    PENDING         15021 31781 TCP 15443 30498 TCP 15012 30879 TCP 15017 30336 TCP   76m   If the  EXTERNAL IP  is stuck in   PENDING    the environment may not support  LoadBalancer  services  In this case  it may be necessary to customize the  spec externalIPs  section of the Service 
to manually give the Gateway an IP reachable from outside the cluster   If the external IP is present  check that the Service includes a  topology istio io network  label with the correct value  If that is incorrect  reinstall the gateway and make sure to set the   network flag on the generation script     The network of either the client or server cannot be determined     On the source pod  check the proxy metadata      kubectl get pod  CURL POD NAME      o jsonpath    spec containers    env     name   ISTIO META NETWORK    value        kubectl get pod  HELLOWORLD POD NAME      o jsonpath    metadata labels topology  istio  io network     If either of these values aren t set  or have the wrong value  istiod may treat the source and client proxies as being on the same network and send network local endpoints  When these aren t set  check that  values global network  was set properly during install  or that the injection webhook is configured correctly   Istio determines the network of a Pod using the  topology istio io network  label which is set during injection  For non injected Pods  Istio relies on the  topology istio io network  label set on the system namespace in the cluster   In each cluster  check the network      kubectl   context    CTX CLUSTER1   get ns istio system  ojsonpath    metadata labels topology  istio  io network     If the above command doesn t output the expected network name  set the label      kubectl   context    CTX CLUSTER1   label namespace istio system topology istio io network network1     "}
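The network-resolution rule described above (the pod's own `topology.istio.io/network` label first, falling back to the label on the system namespace for non-injected pods) can be sanity-checked offline. Below is a simplified Python sketch of that fallback logic, for illustration only — it is not istiod's actual implementation:

```python
from typing import Optional

def pod_network(pod_labels: dict, namespace_labels: dict) -> Optional[str]:
    """Resolve a pod's network the way the docs describe: prefer the pod's
    own topology.istio.io/network label (set during injection), falling back
    to the label on the cluster's system namespace for non-injected pods.
    Simplified illustration, not istiod's real implementation."""
    key = "topology.istio.io/network"
    return pod_labels.get(key) or namespace_labels.get(key)

# Client pod was injected into network1; the server pod has no label,
# so the istio-system namespace label on its cluster decides.
client = pod_network({"topology.istio.io/network": "network1"}, {})
server = pod_network({}, {"topology.istio.io/network": "network2"})
print(client, server)    # network1 network2
print(client == server)  # False
```

When the two resolved networks differ, istiod should hand the client the remote network's east-west gateway address rather than network-local Pod IPs; if either resolves to `None`, the proxies may be treated as same-network, matching the failure mode described above.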
{"questions":"istio keywords debug virtual machines envoy weight 80 This page describes how to troubleshoot issues with Istio deployed to Virtual Machines title Debugging Virtual Machines Describes tools and techniques to diagnose issues with Virtual Machines owner istio wg environments maintainers test n a","answers":"---\ntitle: Debugging Virtual Machines\ndescription: Describes tools and techniques to diagnose issues with Virtual Machines.\nweight: 80\nkeywords: [debug,virtual-machines,envoy]\nowner: istio\/wg-environments-maintainers\ntest: n\/a\n---\n\nThis page describes how to troubleshoot issues with Istio deployed to Virtual Machines.\nBefore reading this, you should take the steps in [Virtual Machine Installation](\/docs\/setup\/install\/virtual-machine\/).\nAdditionally, [Virtual Machine Architecture](\/docs\/ops\/deployment\/vm-architecture\/) can help you understand how the components interact.\n\nTroubleshooting an Istio Virtual Machine installation is similar to troubleshooting issues with proxies running inside Kubernetes, but there are some key differences to be aware of.\n\nWhile much of the same information is available on both platforms, accessing this information differs.\n\n## Monitoring health\n\nThe Istio sidecar is typically run as a `systemd` unit. 
To ensure it's running properly, you can check its status:\n\n\n$ systemctl status istio\n\n\nAdditionally, the sidecar health can be programmatically checked at its health endpoint:\n\n\n$ curl localhost:15021\/healthz\/ready -I\n\n\n## Logs\n\nLogs for the Istio proxy can be found in a few places.\n\nTo access the `systemd` logs, which have details about the initialization of the proxy:\n\n\n$ journalctl -f -u istio -n 1000\n\n\nThe proxy will redirect `stderr` and `stdout` to `\/var\/log\/istio\/istio.err.log` and `\/var\/log\/istio\/istio.log`, respectively.\nTo view these in a format similar to `kubectl`:\n\n\n$ tail \/var\/log\/istio\/istio.err.log \/var\/log\/istio\/istio.log -Fq -n 100\n\n\nLog levels can be modified by changing the `cluster.env` configuration file. Make sure to restart `istio` if it is already running:\n\n\n$ echo \"ISTIO_AGENT_FLAGS=\\\"--log_output_level=dns:debug --proxyLogLevel=debug\\\"\" >> \/var\/lib\/istio\/envoy\/cluster.env\n$ systemctl restart istio\n\n\n## Iptables\n\nTo ensure `iptables` rules have been successfully applied:\n\n\n$ sudo iptables-save\n...\n-A ISTIO_OUTPUT -d 127.0.0.1\/32 -j RETURN\n-A ISTIO_OUTPUT -j ISTIO_REDIRECT\n\n\n## Istioctl\n\nMost `istioctl` commands will function properly with virtual machines. For example, `istioctl proxy-status` can be used to view all connected proxies:\n\n\n$ istioctl proxy-status\nNAME           CDS        LDS        EDS        RDS      ISTIOD                    VERSION\nvm-1.default   SYNCED     SYNCED     SYNCED     SYNCED   istiod-789ffff8-f2fkt     \n\n\nHowever, `istioctl proxy-config` relies on functionality in Kubernetes to connect to a proxy, which will not work for virtual machines.\nInstead, a file containing the configuration dump from Envoy can be passed.
For example:\n\n\n$ curl -s localhost:15000\/config_dump | istioctl proxy-config clusters --file -\nSERVICE FQDN                            PORT      SUBSET  DIRECTION     TYPE\nistiod.istio-system.svc.cluster.local   443       -       outbound      EDS\nistiod.istio-system.svc.cluster.local   15010     -       outbound      EDS\nistiod.istio-system.svc.cluster.local   15012     -       outbound      EDS\nistiod.istio-system.svc.cluster.local   15014     -       outbound      EDS\n\n\n## Automatic registration\n\nWhen a virtual machine connects to Istiod, a `WorkloadEntry` will automatically be created. This enables\nthe virtual machine to become a part of a `Service`, similar to an `Endpoint` in Kubernetes.\n\nTo check these are created correctly:\n\n\n$ kubectl get workloadentries\nNAME             AGE   ADDRESS\nvm-10.128.0.50   14m   10.128.0.50\n\n\n## Certificates\n\nVirtual machines handle certificates differently than Kubernetes Pods, which use a Kubernetes-provided service account token\nto authenticate and renew mTLS certificates. 
Instead, existing mTLS credentials are used to authenticate with the certificate authority and\nrenew certificates.\n\nThe status of these certificates can be viewed in the same way as in Kubernetes:\n\n\n$ curl -s localhost:15000\/config_dump | .\/istioctl proxy-config secret --file -\nRESOURCE NAME     TYPE           STATUS     VALID CERT     SERIAL NUMBER                               NOT AFTER                NOT BEFORE\ndefault           Cert Chain     ACTIVE     true           251932493344649542420616421203546836446     2021-01-29T18:07:21Z     2021-01-28T18:07:21Z\nROOTCA            CA             ACTIVE     true           81663936513052336343895977765039160718      2031-01-26T17:54:44Z     2021-01-28T17:54:44Z\n\n\nAdditionally, these are persisted to disk to ensure downtime or restarts do not lose state.\n\n\n$ ls \/etc\/certs\ncert-chain.pem  key.pem  root-cert.pem\n","site":"istio","answers_cleaned":"    title  Debugging Virtual Machines description  Describes tools and techniques to diagnose issues with Virtual Machines  weight  80 keywords   debug virtual machines envoy  owner  istio wg environments maintainers test  n a      This page describes how to troubleshoot issues with Istio deployed to Virtual Machines  Before reading this  you should take the steps in  Virtual Machine Installation   docs setup install virtual machine    Additionally   Virtual Machine Architecture   docs ops deployment vm architecture   can help you understand how the components interact   Troubleshooting an Istio Virtual Machine installation is similar to troubleshooting issues with proxies running inside Kubernetes  but there are some key differences to be aware of   While much of the same information is available on both platforms  accessing this information differs      Monitoring health  The Istio sidecar is typically run as a  systemd  unit  To ensure its running properly  you can check that status      systemctl status istio   Additionally  the sidecar health can be 
programmatically check at its health endpoint      curl localhost 15021 healthz ready  I      Logs  Logs for the Istio proxy can be found in a few places   To access the  systemd  logs  which has details about the initialization of the proxy      journalctl  f  u istio  n 1000   The proxy will redirect  stderr  and  stdout  to   var log istio istio err log  and    var log istio istio log   respectively  To view these in a format similar to  kubectl       tail  var log istio istio err log  var log istio istio log  Fq  n 100   Log levels can be modified by changing the  cluster env  configuration file  Make sure to restart  istio  if it is already running      echo  ISTIO AGENT FLAGS     log output level dns debug   proxyLogLevel debug        var lib istio envoy cluster env   systemctl restart istio      Iptables  To ensure  iptables  rules have been successfully applied      sudo iptables save      A ISTIO OUTPUT  d 127 0 0 1 32  j RETURN  A ISTIO OUTPUT  j ISTIO REDIRECT      Istioctl  Most  istioctl  commands will function properly with virtual machines  For example   istioctl proxy status  can be used to view all connected proxies      istioctl proxy status NAME           CDS        LDS        EDS        RDS      ISTIOD                    VERSION vm 1 default   SYNCED     SYNCED     SYNCED     SYNCED   istiod 789ffff8 f2fkt        However   istioctl proxy config  relies on functionality in Kubernetes to connect to a proxy  which will not work for virtual machines  Instead  a file containing the configuration dump from Envoy can be passed  For example      curl  s localhost 15000 config dump   istioctl proxy config clusters   file   SERVICE FQDN                            PORT      SUBSET  DIRECTION     TYPE istiod istio system svc cluster local   443               outbound      EDS istiod istio system svc cluster local   15010             outbound      EDS istiod istio system svc cluster local   15012             outbound      EDS istiod istio system svc cluster 
local   15014             outbound      EDS      Automatic registration  When a virtual machine connects to Istiod  a  WorkloadEntry  will automatically be created  This enables the virtual machine to become a part of a  Service   similar to an  Endpoint  in Kubernetes   To check these are created correctly      kubectl get workloadentries NAME             AGE   ADDRESS vm 10 128 0 50   14m   10 128 0 50      Certificates  Virtual machines handle certificates differently than Kubernetes Pods  which use a Kubernetes provided service account token to authenticate and renew mTLS certificates  Instead  existing mTLS credentials are used to authenticate with the certificate authority and renew certificates   The status of these certificates can be viewed in the same way as in Kubernetes      curl  s localhost 15000 config dump     istioctl proxy config secret   file   RESOURCE NAME     TYPE           STATUS     VALID CERT     SERIAL NUMBER                               NOT AFTER                NOT BEFORE default           Cert Chain     ACTIVE     true           251932493344649542420616421203546836446     2021 01 29T18 07 21Z     2021 01 28T18 07 21Z ROOTCA            CA             ACTIVE     true           81663936513052336343895977765039160718      2031 01 26T17 54 44Z     2021 01 28T17 54 44Z   Additionally  these are persisted to disk to ensure downtime or restarts do not lose state      ls  etc certs cert chain pem  key pem  root cert pem "}
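On a virtual machine, where `istioctl proxy-config` reads a saved Envoy dump instead of reaching through the Kubernetes API, the same information can also be pulled straight out of the JSON. A minimal Python sketch (using an abbreviated, hypothetical `config_dump` payload rather than the full Envoy schema) that lists cluster names much as `proxy-config clusters --file -` does:

```python
import json

def cluster_names(config_dump: dict) -> list:
    """Collect cluster names from the ClustersConfigDump section of an
    Envoy /config_dump payload (abbreviated structure, for illustration)."""
    names = []
    for section in config_dump.get("configs", []):
        if not section.get("@type", "").endswith("ClustersConfigDump"):
            continue
        for entry in section.get("dynamic_active_clusters", []):
            names.append(entry["cluster"]["name"])
    return names

# Abbreviated sample standing in for `curl -s localhost:15000/config_dump`.
sample = {
    "configs": [{
        "@type": "type.googleapis.com/envoy.admin.v3.ClustersConfigDump",
        "dynamic_active_clusters": [
            {"cluster": {"name": "outbound|443||istiod.istio-system.svc.cluster.local"}},
            {"cluster": {"name": "outbound|15010||istiod.istio-system.svc.cluster.local"}},
        ],
    }]
}
print(cluster_names(sample))
```

This is only a convenience for quick inspection; `istioctl proxy-config ... --file -` remains the supported way to read a dump, and real dumps also carry `static_clusters` and other sections this sketch ignores.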
{"questions":"istio weight 31 How to configure Istio to integrate with SPIRE to get cryptographic identities through Envoy s SDS API title SPIRE owner istio wg networking maintainers aliases test yes keywords kubernetes spiffe spire","answers":"---\ntitle: SPIRE\ndescription: How to configure Istio to integrate with SPIRE to get cryptographic identities through Envoy's SDS API.\nweight: 31\nkeywords: [kubernetes,spiffe,spire]\naliases:\nowner: istio\/wg-networking-maintainers\ntest: yes\n---\n\n[SPIRE](https:\/\/spiffe.io\/docs\/latest\/spire-about\/spire-concepts\/) is a production-ready implementation of the SPIFFE specification that performs node and workload attestation in order to securely\nissue cryptographic identities to workloads running in heterogeneous environments. SPIRE can be configured as a source of cryptographic identities for Istio workloads through an integration with\n[Envoy's SDS API](https:\/\/www.envoyproxy.io\/docs\/envoy\/latest\/configuration\/security\/secret). 
Istio can detect the existence of a UNIX Domain Socket that implements the Envoy SDS API on a defined\nsocket path, allowing Envoy to communicate and fetch identities directly from it.\n\nThis integration with SPIRE provides flexible attestation options not available with the default Istio identity management while harnessing Istio's powerful service management.\nFor example, SPIRE's plugin architecture enables diverse workload attestation options beyond the Kubernetes namespace and service account attestation offered by Istio.\nSPIRE's node attestation extends attestation to the physical or virtual hardware on which workloads run.\n\nFor a quick demo of how this SPIRE integration with Istio works, see [Integrating SPIRE as a CA through Envoy's SDS API](\/samples\/security\/spire).\n\n## Install SPIRE\n\nWe recommend you follow SPIRE's installation instructions and best practices for installing SPIRE, and for deploying SPIRE in production environments.\n\nFor the examples in this guide, the [SPIRE Helm charts](https:\/\/artifacthub.io\/packages\/helm\/spiffe\/spire) will be used with upstream defaults, to focus on just the configuration necessary to integrate SPIRE and Istio.\n\n\n$ helm upgrade --install -n spire-server spire-crds spire-crds --repo https:\/\/spiffe.github.io\/helm-charts-hardened\/ --create-namespace\n\n\n\n$ helm upgrade --install -n spire-server spire spire --repo https:\/\/spiffe.github.io\/helm-charts-hardened\/ --wait --set global.spire.trustDomain=\"example.org\"\n\n\n\nSee the [SPIRE Helm chart](https:\/\/artifacthub.io\/packages\/helm\/spiffe\/spire) documentation for other values you can configure for your installation.\n\nIt is important that SPIRE and Istio are configured with the exact same trust domain, to prevent authentication and authorization errors, and that the [SPIFFE CSI driver](https:\/\/github.com\/spiffe\/spiffe-csi) is enabled and installed.\n\n\nBy default, the above will also install:\n\n- The [SPIFFE CSI 
driver](https:\/\/github.com\/spiffe\/spiffe-csi), which is used to mount an Envoy-compatible SDS socket into proxies. Using the SPIFFE CSI driver to mount SDS sockets is strongly recommended by both Istio and SPIRE, as `hostMounts` are a larger security risk and introduce operational hurdles. This guide assumes the use of the SPIFFE CSI driver.\n\n- The [SPIRE Controller Manager](https:\/\/github.com\/spiffe\/spire-controller-manager), which eases the creation of SPIFFE registrations for workloads.\n\n## Register workloads\n\nBy design, SPIRE only grants identities to workloads that have been registered with the SPIRE server; this includes user workloads, as well as Istio components. Istio sidecars and gateways, once configured for SPIRE integration, cannot get identities, and therefore cannot reach READY status, unless there is a preexisting, matching SPIRE registration created for them ahead of time.\n\nSee the [SPIRE docs on registering workloads](https:\/\/spiffe.io\/docs\/latest\/deploying\/registering\/) for more information on using multiple selectors to strengthen attestation criteria, and the selectors available.\n\nThis section describes the options available for registering Istio workloads in a SPIRE Server and provides some example workload registrations.\n\n\nIstio currently requires a specific SPIFFE ID format for workloads. 
All registrations must follow the Istio SPIFFE ID pattern: `spiffe:\/\/<trust.domain>\/ns\/<namespace>\/sa\/<service-account>`\n\n\n### Option 1: Auto-registration using the SPIRE Controller Manager\n\nNew entries will be automatically registered for each new pod that matches the selector defined in a [ClusterSPIFFEID](https:\/\/github.com\/spiffe\/spire-controller-manager\/blob\/main\/docs\/clusterspiffeid-crd.md) custom resource.\n\nBoth Istio sidecars and Istio gateways need to be registered with SPIRE, so that they can request identities.\n\n#### Istio Gateway `ClusterSPIFFEID`\n\nThe following will create a `ClusterSPIFFEID`, which will auto-register any Istio Ingress gateway pod with SPIRE if it is scheduled into the `istio-system` namespace, and has a service account named `istio-ingressgateway-service-account`. These selectors are used as a simple example; consult the [SPIRE Controller Manager documentation](https:\/\/github.com\/spiffe\/spire-controller-manager\/blob\/main\/docs\/clusterspiffeid-crd.md) for more details.\n\n\n$ kubectl apply -f - <<EOF\napiVersion: spire.spiffe.io\/v1alpha1\nkind: ClusterSPIFFEID\nmetadata:\n  name: istio-ingressgateway-reg\nspec:\n  spiffeIDTemplate: \"spiffe:\/\/{{ .TrustDomain }}\/ns\/{{ .PodMeta.Namespace }}\/sa\/{{ .PodSpec.ServiceAccountName }}\"\n  workloadSelectorTemplates:\n    - \"k8s:ns:istio-system\"\n    - \"k8s:sa:istio-ingressgateway-service-account\"\nEOF\n\n\n#### Istio Sidecar `ClusterSPIFFEID`\n\nThe following will create a `ClusterSPIFFEID` which will auto-register any pod with the `spiffe.io\/spire-managed-identity: true` label that is deployed into the `default` namespace with SPIRE.
These selectors are used as a simple example; consult the [SPIRE Controller Manager documentation](https:\/\/github.com\/spiffe\/spire-controller-manager\/blob\/main\/docs\/clusterspiffeid-crd.md) for more details.\n\n\n$ kubectl apply -f - <<EOF\napiVersion: spire.spiffe.io\/v1alpha1\nkind: ClusterSPIFFEID\nmetadata:\n  name: istio-sidecar-reg\nspec:\n  spiffeIDTemplate: \"spiffe:\/\/{{ .TrustDomain }}\/ns\/{{ .PodMeta.Namespace }}\/sa\/{{ .PodSpec.ServiceAccountName }}\"\n  podSelector:\n    matchLabels:\n      spiffe.io\/spire-managed-identity: \"true\"\n  workloadSelectorTemplates:\n    - \"k8s:ns:default\"\nEOF\n\n\n### Option 2: Manual Registration\n\nIf you wish to manually create your SPIRE registrations, rather than use the SPIRE Controller Manager mentioned in [the recommended option](#option-1-auto-registration-using-the-spire-controller-manager), refer to the [SPIRE documentation on manual registration](https:\/\/spiffe.io\/docs\/latest\/deploying\/registering\/).\n\nBelow are the equivalent manual registrations based on the automatic registrations in [Option 1](#option-1-auto-registration-using-the-spire-controller-manager). The following steps assume you have [already followed the SPIRE documentation to manually register your SPIRE agent and node attestation](https:\/\/spiffe.io\/docs\/latest\/deploying\/registering\/#1-defining-the-spiffe-id-of-the-agent) and that your SPIRE agent was registered with the SPIFFE identity `spiffe:\/\/example.org\/ns\/spire\/sa\/spire-agent`.\n\n1. Get the `spire-server` pod:\n\n    \n    $ SPIRE_SERVER_POD=$(kubectl get pod -l statefulset.kubernetes.io\/pod-name=spire-server-0 -n spire-server -o jsonpath=\"{.items[0].metadata.name}\")\n    \n\n1.
Register an entry for the Istio Ingress gateway pod:\n\n    \n    $ kubectl exec -n spire \"$SPIRE_SERVER_POD\" -- \\\n    \/opt\/spire\/bin\/spire-server entry create \\\n        -spiffeID spiffe:\/\/example.org\/ns\/istio-system\/sa\/istio-ingressgateway-service-account \\\n        -parentID spiffe:\/\/example.org\/ns\/spire\/sa\/spire-agent \\\n        -selector k8s:sa:istio-ingressgateway-service-account \\\n        -selector k8s:ns:istio-system \\\n        -socketPath \/run\/spire\/sockets\/server.sock\n\n    Entry ID         : 6f2fe370-5261-4361-ac36-10aae8d91ff7\n    SPIFFE ID        : spiffe:\/\/example.org\/ns\/istio-system\/sa\/istio-ingressgateway-service-account\n    Parent ID        : spiffe:\/\/example.org\/ns\/spire\/sa\/spire-agent\n    Revision         : 0\n    TTL              : default\n    Selector         : k8s:ns:istio-system\n    Selector         : k8s:sa:istio-ingressgateway-service-account\n    \n\n1. Register an entry for workloads injected with an Istio sidecar:\n\n    \n    $ kubectl exec -n spire \"$SPIRE_SERVER_POD\" -- \\\n    \/opt\/spire\/bin\/spire-server entry create \\\n        -spiffeID spiffe:\/\/example.org\/ns\/default\/sa\/curl \\\n        -parentID spiffe:\/\/example.org\/ns\/spire\/sa\/spire-agent \\\n        -selector k8s:ns:default \\\n        -selector k8s:pod-label:spiffe.io\/spire-managed-identity:true \\\n        -socketPath \/run\/spire\/sockets\/server.sock\n    \n\n## Install Istio\n\n1. [Download the Istio release](\/docs\/setup\/additional-setup\/download-istio-release\/).\n\n1. Create the Istio configuration with custom patches for the Ingress Gateway and `istio-proxy`. 
The Ingress Gateway component includes the `spiffe.io\/spire-managed-identity: \"true\"` label.\n\n    \n    $ cat <<EOF > .\/istio.yaml\n    apiVersion: install.istio.io\/v1alpha1\n    kind: IstioOperator\n    metadata:\n      namespace: istio-system\n    spec:\n      profile: default\n      meshConfig:\n        trustDomain: example.org\n      values:\n        # This is used to customize the sidecar template.\n        # It adds both the label to indicate that SPIRE should manage the\n        # identity of this pod, as well as the CSI driver mounts.\n        sidecarInjectorWebhook:\n          templates:\n            spire: |\n              labels:\n                spiffe.io\/spire-managed-identity: \"true\"\n              spec:\n                containers:\n                - name: istio-proxy\n                  volumeMounts:\n                  - name: workload-socket\n                    mountPath: \/run\/secrets\/workload-spiffe-uds\n                    readOnly: true\n                volumes:\n                  - name: workload-socket\n                    csi:\n                      driver: \"csi.spiffe.io\"\n                      readOnly: true\n      components:\n        ingressGateways:\n          - name: istio-ingressgateway\n            enabled: true\n            label:\n              istio: ingressgateway\n            k8s:\n              overlays:\n                # This is used to customize the ingress gateway template.\n                # It adds the CSI driver mounts, as well as an init container\n                # to stall gateway startup until the CSI driver mounts the socket.\n                - apiVersion: apps\/v1\n                  kind: Deployment\n                  name: istio-ingressgateway\n                  patches:\n                    - path: spec.template.spec.volumes.[name:workload-socket]\n                      value:\n                        name: workload-socket\n                        csi:\n                          driver: 
\"csi.spiffe.io\"\n                          readOnly: true\n                    - path: spec.template.spec.containers.[name:istio-proxy].volumeMounts.[name:workload-socket]\n                      value:\n                        name: workload-socket\n                        mountPath: \"\/run\/secrets\/workload-spiffe-uds\"\n                        readOnly: true\n                    - path: spec.template.spec.initContainers\n                      value:\n                        - name: wait-for-spire-socket\n                          image: busybox:1.36\n                          volumeMounts:\n                            - name: workload-socket\n                              mountPath: \/run\/secrets\/workload-spiffe-uds\n                              readOnly: true\n                          env:\n                            - name: CHECK_FILE\n                              value: \/run\/secrets\/workload-spiffe-uds\/socket\n                          command:\n                            - sh\n                            - \"-c\"\n                            - |-\n                              echo \"$(date -Iseconds)\" Waiting for: ${CHECK_FILE}\n                              while [[ ! -e ${CHECK_FILE} ]] ; do\n                                echo \"$(date -Iseconds)\" File does not exist: ${CHECK_FILE}\n                                sleep 15\n                              done\n                              ls -l ${CHECK_FILE}\n    EOF\n    \n\n1. Apply the configuration:\n\n    \n    $ istioctl install --skip-confirmation -f .\/istio.yaml\n    \n\n1. 
Check Ingress Gateway pod state:\n\n    \n    $ kubectl get pods -n istio-system\n    NAME                                    READY   STATUS    RESTARTS   AGE\n    istio-ingressgateway-5b45864fd4-lgrxs   1\/1     Running   0          17s\n    istiod-989f54d9c-sg7sn                  1\/1     Running   0          23s\n    \n\n    The Ingress Gateway pod is `Ready` since the corresponding registration entry is automatically created for it on the SPIRE Server. Envoy is able to fetch cryptographic identities from SPIRE.\n\n    This configuration also adds an `initContainer` to the gateway that will wait for SPIRE to create the UNIX Domain Socket before starting the `istio-proxy`. If the SPIRE agent is not ready, or has not been properly configured with the same socket path, the Ingress Gateway `initContainer` will wait forever.\n\n1. Deploy an example workload:\n\n    \n    $ istioctl kube-inject --filename @samples\/security\/spire\/curl-spire.yaml@ | kubectl apply -f -\n    \n\n    In addition to needing the `spiffe.io\/spire-managed-identity` label, the workload will need the SPIFFE CSI Driver volume to access the SPIRE Agent socket. To accomplish this,\n    you can leverage the `spire` pod annotation template from the [Install Istio](#install-istio) section or add the CSI volume to\n    the deployment spec of your workload.
Both of these alternatives are highlighted on the example snippet below:\n\n    \n    apiVersion: apps\/v1\n    kind: Deployment\n    metadata:\n      name: curl\n    spec:\n      replicas: 1\n      selector:\n          matchLabels:\n            app: curl\n      template:\n          metadata:\n            labels:\n              app: curl\n            # Injects custom sidecar template\n            annotations:\n                inject.istio.io\/templates: \"sidecar,spire\"\n          spec:\n            terminationGracePeriodSeconds: 0\n            serviceAccountName: curl\n            containers:\n            - name: curl\n              image: curlimages\/curl\n              command: [\"\/bin\/sleep\", \"3650d\"]\n              imagePullPolicy: IfNotPresent\n              volumeMounts:\n                - name: tmp\n                  mountPath: \/tmp\n              securityContext:\n                runAsUser: 1000\n            volumes:\n              - name: tmp\n                emptyDir: {}\n              # CSI volume\n              - name: workload-socket\n                csi:\n                  driver: \"csi.spiffe.io\"\n                  readOnly: true\n    \n\nThe Istio configuration shares the `spiffe-csi-driver` with the Ingress Gateway and the sidecars that are going to be injected on workload pods, granting them access to the SPIRE Agent's UNIX Domain Socket.\n\nSee [Verifying that identities were created for workloads](#verifying-that-identities-were-created-for-workloads)\nto check issued identities.\n\n## Verifying that identities were created for workloads\n\nUse the following command to confirm that identities were created for the workloads:\n\n\n$ kubectl exec -t \"$SPIRE_SERVER_POD\" -n spire-server -c spire-server -- .\/bin\/spire-server entry show\nFound 2 entries\nEntry ID         : c8dfccdc-9762-4762-80d3-5434e5388ae7\nSPIFFE ID        : spiffe:\/\/example.org\/ns\/istio-system\/sa\/istio-ingressgateway-service-account\nParent ID        : 
spiffe:\/\/example.org\/spire\/agent\/k8s_psat\/demo-cluster\/bea19580-ae04-4679-a22e-472e18ca4687\nRevision         : 0\nX509-SVID TTL    : default\nJWT-SVID TTL     : default\nSelector         : k8s:pod-uid:88b71387-4641-4d9c-9a89-989c88f7509d\n\nEntry ID         : af7b53dc-4cc9-40d3-aaeb-08abbddd8e54\nSPIFFE ID        : spiffe:\/\/example.org\/ns\/default\/sa\/curl\nParent ID        : spiffe:\/\/example.org\/spire\/agent\/k8s_psat\/demo-cluster\/bea19580-ae04-4679-a22e-472e18ca4687\nRevision         : 0\nX509-SVID TTL    : default\nJWT-SVID TTL     : default\nSelector         : k8s:pod-uid:ee490447-e502-46bd-8532-5a746b0871d6\n\n\nCheck the Ingress-gateway pod state:\n\n\n$ kubectl get pods -n istio-system\nNAME                                    READY   STATUS    RESTARTS   AGE\nistio-ingressgateway-5b45864fd4-lgrxs   1\/1     Running   0          60s\nistiod-989f54d9c-sg7sn                  1\/1     Running   0          45s\n\n\nAfter registering an entry for the Ingress-gateway pod, Envoy receives the identity issued by SPIRE and uses it for all TLS and mTLS communications.\n\n### Check that the workload identity was issued by SPIRE\n\n1. Get pod information:\n\n    \n    $ CURL_POD=$(kubectl get pod -l app=curl -o jsonpath=\"{.items[0].metadata.name}\")\n    \n\n1. Retrieve curl's SVID identity document using the istioctl proxy-config secret command:\n\n    \n    $ istioctl proxy-config secret \"$CURL_POD\" -o json | jq -r \\\n    '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode > chain.pem\n    \n\n1. Inspect the certificate and verify that SPIRE was the issuer:\n\n    \n    $ openssl x509 -in chain.pem -text | grep SPIRE\n        Subject: C = US, O = SPIRE, CN = curl-5f4d47c948-njvpk\n    \n\n## SPIFFE federation\n\nSPIRE Servers are able to authenticate SPIFFE identities originating from different trust domains. 
This is known as SPIFFE federation.\n\nSPIRE Agent can be configured to push federated bundles to Envoy through the Envoy SDS API, allowing Envoy to use [validation context](https:\/\/spiffe.io\/docs\/latest\/microservices\/envoy\/#validation-context)\nto verify peer certificates and trust a workload from another trust domain.\nTo enable Istio to federate SPIFFE identities through SPIRE integration, consult [SPIRE Agent SDS configuration](https:\/\/github.com\/spiffe\/spire\/blob\/main\/doc\/spire_agent.md#sds-configuration) and set the following\nSDS configuration values for your SPIRE Agent configuration file.\n\n| Configuration              | Description                                                                                      | Resource Name |\n|----------------------------|--------------------------------------------------------------------------------------------------|---------------|\n| `default_svid_name`        | The TLS Certificate resource name to use for the default `X509-SVID` with Envoy SDS              | default       |\n| `default_bundle_name`      | The Validation Context resource name to use for the default X.509 bundle with Envoy SDS          | null          |\n| `default_all_bundles_name` | The Validation Context resource name to use for all bundles (including federated) with Envoy SDS | ROOTCA        |\n\nThis will allow Envoy to get federated bundles directly from SPIRE.\n\n### Create federated registration entries\n\n- If using the SPIRE Controller Manager, create federated entries for workloads by setting the `federatesWith` field of the [ClusterSPIFFEID CR](https:\/\/github.com\/spiffe\/spire-controller-manager\/blob\/main\/docs\/clusterspiffeid-crd.md) to the trust domains you want the pod to federate with:\n\n    \n    apiVersion: spire.spiffe.io\/v1alpha1\n    kind: ClusterSPIFFEID\n    metadata:\n      name: federation\n    spec:\n      spiffeIDTemplate: \"spiffe:\/\/{{ .TrustDomain }}\/ns\/{{ .PodMeta.Namespace }}\/sa\/{{ .PodSpec.ServiceAccountName }}\"\n      podSelector:\n        matchLabels:\n 
         spiffe.io\/spire-managed-identity: \"true\"\n      federatesWith: [\"example.io\", \"example.ai\"]\n    \n\n- For manual registration see [Create Registration Entries for Federation](https:\/\/spiffe.io\/docs\/latest\/architecture\/federation\/readme\/#create-registration-entries-for-federation).\n\n## Cleanup SPIRE\n\nRemove SPIRE by uninstalling its Helm charts:\n\n\n$ helm delete -n spire-server spire\n\n\n\n$ helm delete -n spire-server spire-crds\n","site":"istio"}
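The three SDS values from the federation table above map directly onto the `sds` block of the SPIRE Agent configuration file. Below is a sketch of that block only; the surrounding `agent` settings (`trust_domain`, `server_address`, `socket_path`, and so on) are elided and must come from your existing agent configuration:

```hcl
agent {
    # ... other agent settings (trust_domain, server_address, socket_path) elided ...
    sds {
        # Resource names Envoy uses when requesting secrets over the SDS API
        default_svid_name        = "default"
        default_bundle_name      = "null"
        default_all_bundles_name = "ROOTCA"
    }
}
```

With `default_all_bundles_name` set to `ROOTCA`, the validation context Envoy fetches includes the local trust domain's bundle plus any federated bundles, which is what allows sidecars to verify peer certificates from another trust domain.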
{"questions":"istio test no docs examples advanced gateways ingress certmgr weight 26 title cert manager aliases keywords integration cert manager owner istio wg environments maintainers docs tasks traffic management ingress ingress certmgr Information on how to integrate with cert manager","answers":"---\ntitle: cert-manager\ndescription: Information on how to integrate with cert-manager.\nweight: 26\nkeywords: [integration,cert-manager]\naliases:\n  - \/docs\/tasks\/traffic-management\/ingress\/ingress-certmgr\/\n  - \/docs\/examples\/advanced-gateways\/ingress-certmgr\/\nowner: istio\/wg-environments-maintainers\ntest: no\n---\n\n[cert-manager](https:\/\/cert-manager.io\/) is a tool that automates certificate management.\nThis can be integrated with Istio gateways to manage TLS certificates.\n\n## Configuration\n\nConsult the [cert-manager installation documentation](https:\/\/cert-manager.io\/docs\/installation\/kubernetes\/)\nto get started. No special changes are needed to work with Istio.\n\n## Usage\n\n### Istio Gateway\n\ncert-manager can be used to write a secret to Kubernetes, which can then be referenced by a Gateway.\n\n1. To get started, configure an `Issuer` resource, following the [cert-manager issuer documentation](https:\/\/cert-manager.io\/docs\/configuration\/). `Issuer`s are Kubernetes resources that represent certificate authorities (CAs) that are able to generate signed certificates by honoring certificate signing requests. For example, an `Issuer` may look like:\n\n    \n    apiVersion: cert-manager.io\/v1\n    kind: Issuer\n    metadata:\n      name: ca-issuer\n      namespace: istio-system\n    spec:\n      ca:\n        secretName: ca-key-pair\n    \n\n    \n    For a common Issuer type, ACME, a pod and service are created to respond to challenge requests in order to verify the client owns the domain. To respond to those challenges, an endpoint at `http:\/\/<YOUR_DOMAIN>\/.well-known\/acme-challenge\/<TOKEN>` will need to be reachable. 
That configuration may be implementation specific.\n    \n\n1. Next, configure a `Certificate` resource, following the\n[cert-manager documentation](https:\/\/cert-manager.io\/docs\/usage\/certificate\/).\nThe `Certificate` should be created in the same namespace as the `istio-ingressgateway` deployment.\nFor example, a `Certificate` may look like:\n\n    \n    apiVersion: cert-manager.io\/v1\n    kind: Certificate\n    metadata:\n      name: ingress-cert\n      namespace: istio-system\n    spec:\n      secretName: ingress-cert\n      commonName: my.example.com\n      dnsNames:\n      - my.example.com\n      ...\n    \n\n1. Once we have the certificate created, we should see the secret created in the `istio-system` namespace.\n  This can then be referenced in the `tls` config for a Gateway under `credentialName`:\n\n    \n    apiVersion: networking.istio.io\/v1\n    kind: Gateway\n    metadata:\n      name: gateway\n    spec:\n      selector:\n        istio: ingressgateway\n      servers:\n      - port:\n          number: 443\n          name: https\n          protocol: HTTPS\n        tls:\n          mode: SIMPLE\n          credentialName: ingress-cert # This should match the Certificate secretName\n        hosts:\n        - my.example.com # This should match a DNS name in the Certificate\n    \n\n### Kubernetes Ingress\n\ncert-manager provides direct integration with Kubernetes Ingress by configuring an\n[annotation on the Ingress object](https:\/\/cert-manager.io\/docs\/usage\/ingress\/).\nIf this method is used, the Ingress must reside in the same namespace as the\n`istio-ingressgateway` deployment, as secrets will only be read within the same namespace.\n\nAlternatively, a `Certificate` can be created as described in [Istio Gateway](#istio-gateway),\nthen referenced in the `Ingress` object:\n\n\napiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: ingress\n  annotations:\n    kubernetes.io\/ingress.class: istio\nspec:\n  rules:\n  - host: 
my.example.com\n    http: ...\n  tls:\n  - hosts:\n    - my.example.com # This should match a DNS name in the Certificate\n    secretName: ingress-cert # This should match the Certificate secretName\n","site":"istio"}
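For the annotation-based Ingress method, a minimal sketch might look like the following, assuming the `ca-issuer` `Issuer` from the earlier example; the `cert-manager.io/issuer` annotation asks cert-manager's ingress-shim to issue a certificate for the hosts in `spec.tls`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: istio-system   # must match the istio-ingressgateway namespace
  annotations:
    kubernetes.io/ingress.class: istio
    cert-manager.io/issuer: ca-issuer   # cert-manager issues a certificate for spec.tls
spec:
  rules:
  - host: my.example.com
    http: ...
  tls:
  - hosts:
    - my.example.com
    secretName: ingress-cert   # cert-manager writes the signed certificate here
```

Compared to creating a `Certificate` by hand, this keeps the certificate request and the routing rule in one object, at the cost of tying issuance to the Ingress resource's namespace.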
{"questions":"istio is an open source monitoring system and time series database keywords integration prometheus weight 30 owner istio wg environments maintainers How to integrate with Prometheus test n a title Prometheus","answers":"---\ntitle: Prometheus\ndescription: How to integrate with Prometheus.\nweight: 30\nkeywords: [integration,prometheus]\nowner: istio\/wg-environments-maintainers\ntest: n\/a\n---\n\n[Prometheus](https:\/\/prometheus.io\/) is an open source monitoring system and time series database.\nYou can use Prometheus with Istio to record metrics that track the health of Istio and of\napplications within the service mesh. You can visualize metrics using tools like\n[Grafana](\/docs\/ops\/integrations\/grafana\/) and [Kiali](\/docs\/tasks\/observability\/kiali\/).\n\n## Installation\n\n### Option 1: Quick start\n\nIstio provides a basic sample installation to quickly get Prometheus up and running:\n\n\n$ kubectl apply -f \/samples\/addons\/prometheus.yaml\n\n\nThis will deploy Prometheus into your cluster. This is intended for demonstration only, and is not tuned for performance or security.\n\n\nWhile the quick-start configuration is well-suited for small clusters and monitoring for short time horizons,\nit is not suitable for large-scale meshes or monitoring over a period of days or weeks. In particular,\nthe introduced labels can increase metrics cardinality, requiring a large amount of storage. And, when trying\nto identify trends and differences in traffic over time, access to historical data can be paramount.\n\n\n### Option 2: Customizable install\n\nConsult the [Prometheus documentation](https:\/\/www.prometheus.io\/) to get started\ndeploying Prometheus into your environment. See [Configuration](#configuration)\nfor more information on configuring Prometheus to scrape Istio deployments.\n\n## Configuration\n\nIn an Istio mesh, each component exposes an endpoint that emits metrics. 
Prometheus works\nby scraping these endpoints and collecting the results. This is configured through the\n[Prometheus configuration file](https:\/\/prometheus.io\/docs\/prometheus\/latest\/configuration\/configuration\/)\nwhich controls settings for which endpoints to query, the port and path to query, TLS settings, and more.\n\nTo gather metrics for the entire mesh, configure Prometheus to scrape:\n\n1. The control plane (`istiod` deployment)\n1. Ingress and Egress gateways\n1. The Envoy sidecar\n1. The user applications (if they expose Prometheus metrics)\n\nTo simplify the configuration of metrics, Istio offers two modes of operation.\n\n### Option 1: Metrics merging\n\nTo simplify configuration, Istio has the ability to control scraping entirely by\n`prometheus.io` annotations. This allows Istio scraping to work out of the box with\nstandard configurations such as the ones provided by the\n[Helm `stable\/prometheus`](https:\/\/github.com\/helm\/charts\/tree\/master\/stable\/prometheus) charts.\n\n\nWhile `prometheus.io` annotations are not a core part of Prometheus,\nthey have become the de facto standard to configure scraping.\n\n\nThis option is enabled by default but can be disabled by passing\n`--set meshConfig.enablePrometheusMerge=false` during [installation](\/docs\/setup\/install\/istioctl\/).\nWhen enabled, appropriate `prometheus.io` annotations will be added to all data plane pods to set up scraping.\nIf these annotations already exist, they will be overwritten. With this option, the Envoy sidecar will\nmerge Istio's metrics with the application metrics. The merged metrics will be scraped from `:15020\/stats\/prometheus`.\n\nThis option exposes all the metrics in plain text.\n\nThis feature may not suit your needs in the following situations:\n\n* You need to scrape metrics using TLS.\n* Your application exposes metrics with the same names as Istio metrics. 
For example,\n  your application metrics expose an `istio_requests_total` metric.\n  This might happen if the application is itself running Envoy.\n* Your Prometheus deployment is not configured to scrape based on standard `prometheus.io` annotations.\n\nIf required, this feature can be disabled per workload by adding a `prometheus.istio.io\/merge-metrics: \"false\"` annotation on a pod.\n\n### Option 2: Customized scraping configurations\n\nTo configure an existing Prometheus instance to scrape stats generated by Istio, several jobs need to be added.\n\n* To scrape `Istiod` stats, the following example job can be added to scrape its `http-monitoring` port:\n\n\n- job_name: 'istiod'\n  kubernetes_sd_configs:\n  - role: endpoints\n    namespaces:\n      names:\n      - istio-system\n  relabel_configs:\n  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]\n    action: keep\n    regex: istiod;http-monitoring\n\n\n* To scrape Envoy stats, including sidecar proxies and gateway proxies,\n  the following job can be added to scrape ports that end with `-envoy-prom`:\n\n\n    - job_name: 'envoy-stats'\n      metrics_path: \/stats\/prometheus\n      kubernetes_sd_configs:\n      - role: pod\n\n      relabel_configs:\n      - source_labels: [__meta_kubernetes_pod_container_port_name]\n        action: keep\n        regex: '.*-envoy-prom'\n\n\n* For application stats, if [Strict mTLS](\/docs\/tasks\/security\/authentication\/authn-policy\/#globally-enabling-istio-mutual-tls-in-strict-mode)\n  is not enabled, your existing scraping configuration should still work. 
Otherwise,\n  Prometheus needs to be configured to [scrape with Istio certs](#tls-settings).\n\n#### TLS settings\n\nThe control plane, gateway, and Envoy sidecar metrics will all be scraped over cleartext.\nHowever, the application metrics will follow whatever [Istio authentication policy](\/docs\/tasks\/security\/authentication\/authn-policy) has been configured\nfor the workload.\n\n* If you use `STRICT` mode, then Prometheus will need to be configured to scrape using Istio certificates as described below.\n* If you use `PERMISSIVE` mode, the workload typically accepts TLS and cleartext. However, Prometheus cannot send the special variant of TLS Istio requires for `PERMISSIVE` mode. As a result, you must *not* configure TLS in Prometheus.\n* If you use `DISABLE` mode, no TLS configuration is required for Prometheus.\n\n\nNote this only applies to Istio-terminated TLS. If your application directly handles TLS:\n* `STRICT` mode is not supported, as Prometheus would need to send two layers of TLS which it cannot do.\n* `PERMISSIVE` mode and `DISABLE` mode should be configured the same as if Istio was not present.\n\nSee [Understanding TLS Configuration](\/docs\/ops\/configuration\/traffic-management\/tls-configuration\/) for more information.\n\n\nOne way to provision Istio certificates for Prometheus is by injecting a sidecar\nwhich will rotate SDS certificates and output them to a volume that can be shared with Prometheus.\nHowever, the sidecar should not intercept requests for Prometheus because Prometheus's\nmodel of direct endpoint access is incompatible with Istio's sidecar proxy model.\n\nTo achieve this, configure a cert volume mount on the Prometheus server container:\n\n\ncontainers:\n  - name: prometheus-server\n    ...\n    volumeMounts:\n      - mountPath: \/etc\/prom-certs\/\n        name: istio-certs\nvolumes:\n  - emptyDir:\n      medium: Memory\n    name: istio-certs\n\n\nThen add the following annotations to the Prometheus deployment pod 
template,\nand deploy it with [sidecar injection](\/docs\/setup\/additional-setup\/sidecar-injection\/).\nThis configures the sidecar to write a certificate to the shared volume, but without configuring traffic redirection:\n\n\nspec:\n  template:\n    metadata:\n      annotations:\n        traffic.sidecar.istio.io\/includeInboundPorts: \"\"   # do not intercept any inbound ports\n        traffic.sidecar.istio.io\/includeOutboundIPRanges: \"\"  # do not intercept any outbound traffic\n        proxy.istio.io\/config: |  # configure an env variable `OUTPUT_CERTS` to write certificates to the given folder\n          proxyMetadata:\n            OUTPUT_CERTS: \/etc\/istio-output-certs\n        sidecar.istio.io\/userVolumeMount: '[{\"name\": \"istio-certs\", \"mountPath\": \"\/etc\/istio-output-certs\"}]' # mount the shared volume at sidecar proxy\n\n\nFinally, set the scraping job TLS context as follows:\n\n\nscheme: https\ntls_config:\n  ca_file: \/etc\/prom-certs\/root-cert.pem\n  cert_file: \/etc\/prom-certs\/cert-chain.pem\n  key_file: \/etc\/prom-certs\/key.pem\n  insecure_skip_verify: true  # Prometheus does not support Istio security naming, thus skip verifying target pod certificate\n\n\n## Best practices\n\nFor larger meshes, advanced configuration might help Prometheus scale.\nSee [Using Prometheus for production-scale monitoring](\/docs\/ops\/best-practices\/observability\/#using-prometheus-for-production-scale-monitoring)\nfor more information.","site":"istio"}
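The Prometheus snippets above can be combined into a single annotation-driven application job that scrapes over Istio mTLS. A minimal sketch, assuming the cert volume layout from the sidecar-injection example; the `kubernetes-pods` job name and the `prometheus.io/scrape` keep-rule are illustrative assumptions, not taken from the Istio docs:

```yaml
# Illustrative application-metrics job scraping over Istio mTLS.
# Assumes the Prometheus pod mounts Istio certificates at /etc/prom-certs/
# as set up in the sidecar-injection example above.
- job_name: 'kubernetes-pods'            # hypothetical job name
  scheme: https
  tls_config:
    ca_file: /etc/prom-certs/root-cert.pem
    cert_file: /etc/prom-certs/cert-chain.pem
    key_file: /etc/prom-certs/key.pem
    insecure_skip_verify: true           # Prometheus cannot verify Istio security naming
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Keep only pods that opt in via the de facto prometheus.io annotation.
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"
```

With `STRICT` mTLS enabled for the workloads, this single job replaces a plaintext application-scraping job; the control-plane and Envoy jobs shown earlier remain cleartext.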
{"questions":"istio virtual machine owner istio wg environments maintainers keywords Describes Istio s high level architecture for virtual machines title Virtual Machine Architecture weight 25 test n a","answers":"---\ntitle: Virtual Machine Architecture\ndescription: Describes Istio's high-level architecture for virtual machines.\nweight: 25\nkeywords:\n- virtual-machine\ntest: n\/a\nowner: istio\/wg-environments-maintainers\n---\n\nBefore reading this document, be sure to review [Istio's architecture](\/docs\/ops\/deployment\/architecture\/) and [deployment models](\/docs\/ops\/deployment\/deployment-models\/).\nThis page builds on those documents to explain how Istio can be extended to support joining virtual machines into the mesh.\n\nIstio's virtual machine support allows connecting workloads outside of a Kubernetes cluster to the mesh.\nThis enables legacy applications, or applications not suitable to run in a containerized environment, to get all the benefits that Istio provides to applications running inside Kubernetes.\n\nFor workloads running on Kubernetes, the Kubernetes platform itself provides various features like service discovery, DNS resolution, and health checks which are often missing in virtual machine environments.\nIstio enables these features for workloads running on virtual machines, and in addition allows these workloads to utilize Istio functionality such as mutual TLS (mTLS), rich telemetry, and advanced traffic management capabilities.\n\nThe following diagram shows the architecture of a mesh with virtual machines:\n\nIn this mesh, there is a single [network](\/docs\/ops\/deployment\/deployment-models\/#network-models), where pods and virtual machines can communicate directly with each other.\n\nControl plane traffic, including XDS configuration and certificate signing, is sent through a Gateway in the cluster.\nThis ensures that the virtual machines have a stable address to connect to when they are bootstrapping. 
Pods and virtual machines can communicate directly with each other without requiring any intermediate Gateway.\n\nIn this mesh, there are multiple [networks](\/docs\/ops\/deployment\/deployment-models\/#network-models), where pods and virtual machines are not able to communicate directly with each other.\n\nControl plane traffic, including XDS configuration and certificate signing, is sent through a Gateway in the cluster.\nSimilarly, all communication between pods and virtual machines goes through the gateway, which acts as a bridge between the two networks.\n\n## Service association\n\nIstio provides two mechanisms to represent virtual machine workloads:\n\n* [`WorkloadGroup`](\/docs\/reference\/config\/networking\/workload-group\/) represents a logical group of virtual machine workloads that share common properties. This is similar to a `Deployment` in Kubernetes.\n* [`WorkloadEntry`](\/docs\/reference\/config\/networking\/workload-entry\/) represents a single instance of a virtual machine workload. 
This is similar to a `Pod` in Kubernetes.\n\nCreating these resources (`WorkloadGroup` and `WorkloadEntry`) does not result in provisioning of any resources or running any virtual machine workloads.\nRather, these resources just reference these workloads and inform Istio how to configure the mesh appropriately.\n\nWhen adding a virtual machine workload to the mesh, you will need to create a `WorkloadGroup` that acts as a template for each `WorkloadEntry` instance:\n\n\napiVersion: networking.istio.io\/v1\nkind: WorkloadGroup\nmetadata:\n  name: product-vm\nspec:\n  metadata:\n    labels:\n      app: product\n  template:\n    serviceAccount: default\n  probe:\n    httpGet:\n      port: 8080\n\n\nOnce a virtual machine has been [configured and added to the mesh](\/docs\/setup\/install\/virtual-machine\/#configure-the-virtual-machine), a corresponding `WorkloadEntry` will be automatically created by the Istio control plane.\nFor example:\n\n\napiVersion: networking.istio.io\/v1\nkind: WorkloadEntry\nmetadata:\n  annotations:\n    istio.io\/autoRegistrationGroup: product-vm\n  labels:\n    app: product\n  name: product-vm-1.2.3.4\nspec:\n  address: 1.2.3.4\n  labels:\n    app: product\n  serviceAccount: default\n\n\nThis `WorkloadEntry` resource describes a single instance of a workload, similar to a pod in Kubernetes. When the workload is removed from the mesh, the `WorkloadEntry` resource will\nbe automatically removed. Additionally, if any probes are configured in the `WorkloadGroup` resource, the Istio control plane automatically updates the health status of associated `WorkloadEntry` instances.\n\nIn order for consumers to reliably call your workload, it's recommended to declare a `Service` association. This allows clients to reach a stable hostname, like `product.default.svc.cluster.local`, rather than an ephemeral IP address. 
This also enables you to use advanced routing capabilities in Istio via the `DestinationRule` and `VirtualService` APIs.\n\nAny Kubernetes service can transparently select workloads across both pods and virtual machines via the selector fields which are matched with pod and `WorkloadEntry` labels respectively.\n\nFor example, a `Service` named `product` is composed of a `Pod` and a `WorkloadEntry`:\n\nWith this configuration, requests to `product` would be load-balanced across both the pod and virtual machine workload instances.\n\n## DNS\n\nKubernetes provides DNS resolution in pods for `Service` names allowing pods to easily communicate with one another by stable hostnames.\n\nFor virtual machine expansion, Istio provides similar functionality via a [DNS Proxy](\/docs\/ops\/configuration\/traffic-management\/dns-proxy\/).\nThis feature redirects all DNS queries from the virtual machine workload to the Istio proxy, which maintains a mapping of hostnames to IP addresses.\n\nAs a result, workloads running on virtual machines can transparently call `Service`s (similar to pods) without requiring any additional configuration.","site":"istio"}
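The `product` Service discussed in the virtual machine architecture document above can be sketched as an ordinary Kubernetes `Service`. This is a minimal illustration assuming the `app: product` label from the `WorkloadGroup` example; the port numbers are invented for the sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: product
  namespace: default
spec:
  selector:
    app: product        # matches both pod labels and WorkloadEntry labels
  ports:
    - name: http
      port: 80          # illustrative service port
      targetPort: 8080  # illustrative; the WorkloadGroup probe also targets 8080
```

Because the selector matches labels on pods and `WorkloadEntry` resources alike, no VM-specific configuration is needed in the Service itself.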
{"questions":"istio deployment models pods kubernetes weight 40 sidecar title Application Requirements keywords sidecar injection Requirements of applications deployed in an Istio enabled cluster","answers":"---\ntitle: Application Requirements\ndescription: Requirements of applications deployed in an Istio-enabled cluster.\nweight: 40\nkeywords:\n  - kubernetes\n  - sidecar\n  - sidecar-injection\n  - deployment-models\n  - pods\n  - setup\naliases:\n  - \/docs\/setup\/kubernetes\/spec-requirements\/\n  - \/docs\/setup\/kubernetes\/prepare\/spec-requirements\/\n  - \/docs\/setup\/kubernetes\/prepare\/requirements\/\n  - \/docs\/setup\/kubernetes\/additional-setup\/requirements\/\n  - \/docs\/setup\/additional-setup\/requirements\n  - \/docs\/ops\/setup\/required-pod-capabilities\n  - \/help\/ops\/setup\/required-pod-capabilities\n  - \/docs\/ops\/prep\/requirements\n  - \/docs\/ops\/deployment\/requirements\nowner: istio\/wg-environments-maintainers\ntest: n\/a\n---\n\nIstio provides a great deal of functionality to applications with little or no impact on the application code itself.\nMany Kubernetes applications can be deployed in an Istio-enabled cluster without any changes at all.\nHowever, there are some implications of Istio's sidecar model that may need special consideration when deploying\nan Istio-enabled application.\nThis document describes these application considerations and specific requirements of Istio enablement.\n\n## Pod requirements\n\nTo be part of a mesh, Kubernetes pods must satisfy the following requirements:\n\n- **Application UIDs**: Ensure your pods do **not** run applications as a user\n  with the user ID (UID) value of `1337` because `1337` is reserved for the sidecar proxy.\n\n- **`NET_ADMIN` and `NET_RAW` capabilities**: If [pod security policies](https:\/\/kubernetes.io\/docs\/concepts\/policy\/pod-security-policy\/)\n    are 
[enforced](https:\/\/kubernetes.io\/docs\/concepts\/policy\/pod-security-policy\/#enabling-pod-security-policies)\n    in your cluster and unless you use the [Istio CNI Plugin](\/docs\/setup\/additional-setup\/cni\/), your pods must have the\n    `NET_ADMIN` and `NET_RAW` capabilities allowed. The initialization containers of the Envoy\n    proxies require these capabilities.\n\n    To check if the `NET_ADMIN` and `NET_RAW` capabilities are allowed for your pods, you need to check if their\n    [service account](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/)\n    can use a pod security policy that allows the `NET_ADMIN` and `NET_RAW` capabilities.\n    If you haven't specified a service account in your pods' deployment, the pods run using\n    the `default` service account in their deployment's namespace.\n\n    To list the capabilities for a service account, replace `<your namespace>` and `<your service account>`\n    with your values in the following command:\n\n    \n    $ for psp in $(kubectl get psp -o jsonpath=\"{range .items[*]}{@.metadata.name}{'\\n'}{end}\"); do if [ $(kubectl auth can-i use psp\/$psp --as=system:serviceaccount:<your namespace>:<your service account>) = yes ]; then kubectl get psp\/$psp --no-headers -o=custom-columns=NAME:.metadata.name,CAPS:.spec.allowedCapabilities; fi; done\n    \n\n    For example, to check for the `default` service account in the `default` namespace, run the following command:\n\n    \n    $ for psp in $(kubectl get psp -o jsonpath=\"{range .items[*]}{@.metadata.name}{'\\n'}{end}\"); do if [ $(kubectl auth can-i use psp\/$psp --as=system:serviceaccount:default:default) = yes ]; then kubectl get psp\/$psp --no-headers -o=custom-columns=NAME:.metadata.name,CAPS:.spec.allowedCapabilities; fi; done\n    \n\n    If you see `NET_ADMIN` and `NET_RAW` or `*` in the list of capabilities of one of the allowed\n    policies for your service account, your pods have permission to run 
the Istio init containers.\n    Otherwise, you will need to [provide the permission](https:\/\/kubernetes.io\/docs\/concepts\/policy\/pod-security-policy\/#authorizing-policies).\n\n- **Pod labels**: We recommend explicitly declaring pods with an application identifier and version by using a pod label.\n  These labels add contextual information to the metrics and telemetry that Istio collects.\n  Each of these values is read from multiple labels ordered from highest to lowest precedence:\n\n    - Application name: `service.istio.io\/canonical-name`, `app.kubernetes.io\/name`, or `app`.\n    - Application version: `service.istio.io\/canonical-revision`, `app.kubernetes.io\/version`, or `version`.\n\n- **Named service ports**: Service ports may optionally be named to explicitly specify a protocol.\n  See [Protocol Selection](\/docs\/ops\/configuration\/traffic-management\/protocol-selection\/) for\n  more details. If a pod belongs to multiple [Kubernetes services](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/),\n  the services cannot use the same port number for different protocols, for\n  instance, HTTP and TCP.\n\n## Ports used by Istio\n\nThe following ports and protocols are used by the Istio sidecar proxy (Envoy).\n\n\nTo avoid port conflicts with sidecars, applications should not use any of the ports used by Envoy.\n\n\n| Port | Protocol | Description | Pod-internal only |\n|----|----|----|----|\n| 15000 | TCP | Envoy admin port (commands\/diagnostics) | Yes |\n| 15001 | TCP | Envoy outbound | No |\n| 15004 | HTTP | Debug port | Yes |\n| 15006 | TCP | Envoy inbound | No |\n| 15008 | HTTP2 | HBONE mTLS tunnel port | No |\n| 15020 | HTTP | Merged Prometheus telemetry from Istio agent, Envoy, and application | No |\n| 15021 | HTTP | Health checks | No |\n| 15053 | DNS  | DNS port, if capture is enabled | Yes |\n| 15090 | HTTP | Envoy Prometheus telemetry | No |\n\nThe following ports and protocols are used by the Istio control plane 
(istiod).\n\n| Port | Protocol | Description | Local host only |\n|----|----|----|----|\n| 443 | HTTPS | Webhooks service port | No |\n| 8080 | HTTP | Debug interface (deprecated, container port only) | No |\n| 15010 | GRPC | XDS and CA services (Plaintext, only for secure networks) | No |\n| 15012 | GRPC | XDS and CA services (TLS and mTLS, recommended for production use) | No |\n| 15014 | HTTP | Control plane monitoring | No |\n| 15017 | HTTPS | Webhook container port, forwarded from 443 | No |\n\n## Server First Protocols\n\nSome protocols are \"Server First\" protocols, which means the server will send the first bytes. This may have an impact on\n[`PERMISSIVE`](\/docs\/reference\/config\/security\/peer_authentication\/#PeerAuthentication-MutualTLS-Mode) mTLS and [Automatic protocol selection](\/docs\/ops\/configuration\/traffic-management\/protocol-selection\/#automatic-protocol-selection).\n\nBoth of these features work by inspecting the initial bytes of a connection to determine the protocol, which is incompatible with server first protocols.\n\nIn order to support these cases, follow the [Explicit protocol selection](\/docs\/ops\/configuration\/traffic-management\/protocol-selection\/#explicit-protocol-selection) steps to declare the protocol of the application as `TCP`.\n\nThe following ports are known to commonly carry server first protocols, and are automatically assumed to be `TCP`:\n\n|Protocol|Port|\n|--------|----|\n| SMTP   |25  |\n| DNS    |53  |\n| MySQL  |3306|\n| MongoDB|27017|\n\nBecause TLS communication is not server first, TLS encrypted server first traffic will work with automatic protocol detection as long as you make sure that all traffic subjected to TLS sniffing is encrypted:\n\n1. Configure `mTLS` mode `STRICT` for the server. This will enforce TLS encryption for all requests.\n1. Configure `mTLS` mode `DISABLE` for the server. This will disable the TLS sniffing, allowing server first protocols to be used.\n1. 
Configure all clients to send `TLS` traffic, generally through a [`DestinationRule`](\/docs\/reference\/config\/networking\/destination-rule\/#ClientTLSSettings) or by relying on auto mTLS.\n1. Configure your application to send TLS traffic directly.\n\n## Outbound traffic\n\nIn order to support Istio's traffic routing capabilities, traffic leaving a pod may be routed differently than\nwhen a sidecar is not deployed.\n\nFor HTTP-based traffic, traffic is routed based on the `Host` header. This may lead to unexpected behavior if the destination IP\nand `Host` header are not aligned. For example, a request like `curl 1.2.3.4 -H \"Host: httpbin.default\"` will be routed to the `httpbin` service,\nrather than `1.2.3.4`.\n\nFor non-HTTP based traffic (including HTTPS), Istio does not have access to a `Host` header, so routing decisions are based on the Service IP address.\n\nOne implication of this is that direct calls to pods (for example, `curl <POD_IP>`), rather than Services, will not be matched. 
While the traffic may\nbe [passed through](\/docs\/tasks\/traffic-management\/egress\/egress-control\/#envoy-passthrough-to-external-services), it will not get the full Istio functionality\nincluding mTLS encryption, traffic routing, and telemetry.\n\nSee the [Traffic Routing](\/docs\/ops\/configuration\/traffic-management\/traffic-routing) page for more information.","site":"istio","answers_cleaned":"    title  Application Requirements description  Requirements of applications deployed in an Istio enabled cluster  weight  40 keywords      kubernetes     sidecar     sidecar injection     deployment models     pods     setup aliases       docs setup kubernetes spec requirements       docs setup kubernetes prepare spec requirements       docs setup kubernetes prepare requirements       docs setup kubernetes additional setup requirements       docs setup additional setup requirements      docs ops setup required pod capabilities      help ops setup required pod capabilities      docs ops prep requirements      docs ops deployment requirements owner  istio wg environments maintainers test  n a      Istio provides a great deal of functionality to applications with little or no impact on the application code itself  Many Kubernetes applications can be deployed in an Istio enabled cluster without any changes at all  However  there are some implications of Istio s sidecar model that may need special consideration when deploying an Istio enabled application  This document describes these application considerations and specific requirements of Istio enablement      Pod requirements  To be part of a mesh  Kubernetes pods must satisfy the following requirements       Application UIDs    Ensure your pods do   not   run applications as a user   with the user ID  UID  value of  1337  because  1337  is reserved for the sidecar proxy        NET ADMIN  and  NET RAW  capabilities    If  pod security policies  https   kubernetes io docs concepts policy pod security policy       are  
enforced  https   kubernetes io docs concepts policy pod security policy  enabling pod security policies      in your cluster and unless you use the  Istio CNI Plugin   docs setup additional setup cni    your pods must have the      NET ADMIN  and  NET RAW  capabilities allowed  The initialization containers of the Envoy     proxies require these capabilities       To check if the  NET ADMIN  and  NET RAW  capabilities are allowed for your pods  you need to check if their      service account  https   kubernetes io docs tasks configure pod container configure service account       can use a pod security policy that allows the  NET ADMIN  and  NET RAW  capabilities      If you haven t specified a service account in your pods  deployment  the pods run using     the  default  service account in their deployment s namespace       To list the capabilities for a service account  replace   your namespace   and   your service account       with your values in the following command              for psp in   kubectl get psp  o jsonpath   range  items       metadata name    n   end     do if     kubectl auth can i use psp  psp   as system serviceaccount  your namespace   your service account     yes    then kubectl get psp  psp   no headers  o custom columns NAME  metadata name CAPS  spec allowedCapabilities  fi  done           For example  to check for the  default  service account in the  default  namespace  run the following command              for psp in   kubectl get psp  o jsonpath   range  items       metadata name    n   end     do if     kubectl auth can i use psp  psp   as system serviceaccount default default    yes    then kubectl get psp  psp   no headers  o custom columns NAME  metadata name CAPS  spec allowedCapabilities  fi  done           If you see  NET ADMIN  and  NET RAW  or     in the list of capabilities of one of the allowed     policies for your service account  your pods have permission to run the Istio init containers      Otherwise  you will need 
to  provide the permission  https   kubernetes io docs concepts policy pod security policy  authorizing policies        Pod labels    We recommend explicitly declaring pods with an application identifier and version by using a pod label    These labels add contextual information to the metrics and telemetry that Istio collects    Each of these values are read from multiple labels ordered from highest to lowest precedence         Application name   service istio io canonical name    app kubernetes io name   or  app         Application version   service istio io canonical revision    app kubernetes io version   or  version        Named service ports    Service ports may optionally be named to explicitly specify a protocol    See  Protocol Selection   docs ops configuration traffic management protocol selection   for   more details  If a pod belongs to multiple  Kubernetes services  https   kubernetes io docs concepts services networking service      the services cannot use the same port number for different protocols  for   instance HTTP and TCP      Ports used by Istio  The following ports and protocols are used by the Istio sidecar proxy  Envoy     To avoid port conflicts with sidecars  applications should not use any of the ports used by Envoy      Port   Protocol   Description   Pod internal only                           15000   TCP   Envoy admin port  commands diagnostics    Yes     15001   TCP   Envoy outbound   No     15004   HTTP   Debug port   Yes     15006   TCP   Envoy inbound   No     15008   HTTP2   HBONE mTLS tunnel port   No     15020   HTTP   Merged Prometheus telemetry from Istio agent  Envoy  and application   No     15021   HTTP   Health checks   No     15053   DNS    DNS port  if capture is enabled   Yes     15090   HTTP   Envoy Prometheus telemetry   No    The following ports and protocols are used by the Istio control plane  istiod      Port   Protocol   Description   Local host only                           443   HTTPS   Webhooks service port 
  No     8080   HTTP   Debug interface  deprecated  container port only    No     15010   GRPC   XDS and CA services  Plaintext  only for secure networks    No     15012   GRPC   XDS and CA services  TLS and mTLS  recommended for production use    No     15014   HTTP   Control plane monitoring   No     15017   HTTPS   Webhook container port  forwarded from 443   No       Server First Protocols  Some protocols are  Server First  protocols  which means the server will send the first bytes  This may have an impact on   PERMISSIVE    docs reference config security peer authentication  PeerAuthentication MutualTLS Mode  mTLS and  Automatic protocol selection   docs ops configuration traffic management protocol selection  automatic protocol selection    Both of these features work by inspecting the initial bytes of a connection to determine the protocol  which is incompatible with server first protocols   In order to support these cases  follow the  Explicit protocol selection   docs ops configuration traffic management protocol selection  explicit protocol selection  steps to declare the protocol of the application as  TCP    The following ports are known to commonly carry server first protocols  and are automatically assumed to be  TCP     Protocol Port                    SMTP    25      DNS     53      MySQL   3306    MongoDB 27017   Because TLS communication is not server first  TLS encrypted server first traffic will work with automatic protocol detection as long as you make sure that all traffic subjected to TLS sniffing is encrypted   1  Configure  mTLS  mode  STRICT  for the server  This will enforce TLS encryption for all requests  1  Configure  mTLS  mode  DISABLE  for the server  This will disable the TLS sniffing  allowing server first protocols to be used  1  Configure all clients to send  TLS  traffic  generally through a   DestinationRule    docs reference config networking destination rule  ClientTLSSettings  or by relying on auto mTLS  1  Configure your 
application to send TLS traffic directly      Outbound traffic  In order to support Istio s traffic routing capabilities  traffic leaving a pod may be routed differently than when a sidecar is not deployed   For HTTP based traffic  traffic is routed based on the  Host  header  This may lead to unexpected behavior if the destination IP and  Host  header are not aligned  For example  a request like  curl 1 2 3 4  H  Host  httpbin default   will be routed to the  httpbin  service  rather than  1 2 3 4    For Non HTTP based traffic  including HTTPS   Istio does not have access to an  Host  header  so routing decisions are based on the Service IP address   One implication of this is that direct calls to pods  for example   curl  POD IP     rather than Services  will not be matched  While the traffic may be  passed through   docs tasks traffic management egress egress control  envoy passthrough to external services   it will not get the full Istio functionality including mTLS encryption  traffic routing  and telemetry   See the  Traffic Routing   docs ops configuration traffic management traffic routing  page for more information "}
{"questions":"istio docs ops architecture aliases weight 10 title Architecture owner istio wg environments maintainers Describes Istio s high level architecture and design goals docs concepts architecture test n a","answers":"---\ntitle: Architecture\ndescription: Describes Istio's high-level architecture and design goals.\nweight: 10\naliases:\n  - \/docs\/concepts\/architecture\n  - \/docs\/ops\/architecture\nowner: istio\/wg-environments-maintainers\ntest: n\/a\n---\n\nAn Istio service mesh is logically split into a **data plane** and a **control\nplane**.\n\n* The **data plane** is composed of a set of intelligent proxies\n  ([Envoy](https:\/\/www.envoyproxy.io\/)) deployed as sidecars. These proxies\n  mediate and control all network communication between microservices. They\n  also collect and report telemetry on all mesh traffic.\n\n* The **control plane** manages and configures the proxies to route traffic.\n\nThe following diagram shows the different components that make up each plane:\n\n\n\n## Components\n\nThe following sections provide a brief overview of each of Istio's core components.\n\n### Envoy\n\nIstio uses an extended version of the\n[Envoy](https:\/\/www.envoyproxy.io\/) proxy. 
Envoy is a high-performance\nproxy developed in C++ to mediate all inbound and outbound traffic for all\nservices in the service mesh.\nEnvoy proxies are the only Istio components that interact with data plane\ntraffic.\n\nEnvoy proxies are deployed as sidecars to services, logically\naugmenting the services with Envoy\u2019s many built-in features,\nfor example:\n\n* Dynamic service discovery\n* Load balancing\n* TLS termination\n* HTTP\/2 and gRPC proxies\n* Circuit breakers\n* Health checks\n* Staged rollouts with %-based traffic split\n* Fault injection\n* Rich metrics\n\nThis sidecar deployment allows Istio to enforce policy decisions and extract\nrich telemetry which can be sent to monitoring systems to provide information\nabout the behavior of the entire mesh.\n\nThe sidecar proxy model also allows you to add Istio capabilities to an\nexisting deployment without requiring you to rearchitect or rewrite code.\n\nSome of the Istio features and tasks enabled by Envoy proxies include:\n\n* Traffic control features: enforce fine-grained traffic control with rich\n  routing rules for HTTP, gRPC, WebSocket, and TCP traffic.\n\n* Network resiliency features: setup retries, failovers, circuit breakers, and\n  fault injection.\n\n* Security and authentication features: enforce security policies and enforce\n  access control and rate limiting defined through the configuration API.\n\n* Pluggable extensions model based on WebAssembly that allows for custom policy\n  enforcement and telemetry generation for mesh traffic.\n\n### Istiod\n\nIstiod provides service discovery, configuration and certificate management.\n\nIstiod converts high level routing rules that control traffic behavior into\nEnvoy-specific configurations, and propagates them to the sidecars at runtime.\nIt abstracts platform-specific service discovery mechanisms and synthesizes\nthem into a standard format that any sidecar conforming with the\n[Envoy 
API](https:\/\/www.envoyproxy.io\/docs\/envoy\/latest\/api\/api) can consume.\n\nIstio can support discovery for multiple environments such as Kubernetes or VMs.\n\nYou can use Istio's\n[Traffic Management API](\/docs\/concepts\/traffic-management\/#introducing-istio-traffic-management)\nto instruct Istiod to refine the Envoy configuration to exercise more granular control\nover the traffic in your service mesh.\n\nIstiod [security](\/docs\/concepts\/security\/) enables strong service-to-service and\nend-user authentication with built-in identity and credential management. You\ncan use Istio to upgrade unencrypted traffic in the service mesh. Using\nIstio, operators can enforce policies based on service identity rather than\non relatively unstable layer 3 or layer 4 network identifiers.\nAdditionally, you can use [Istio's authorization feature](\/docs\/concepts\/security\/#authorization)\nto control who can access your services.\n\nIstiod acts as a Certificate Authority (CA) and generates certificates to allow\nsecure mTLS communication in the data plane.","site":"istio","answers_cleaned":"    title  Architecture description  Describes Istio s high level architecture and design goals  weight  10 aliases       docs concepts architecture      docs ops architecture owner  istio wg environments maintainers test  n a      An Istio service mesh is logically split into a   data plane   and a   control plane       The   data plane   is composed of a set of intelligent proxies     Envoy  https   www envoyproxy io    deployed as sidecars  These proxies   mediate and control all network communication between microservices  They   also collect and report telemetry on all mesh traffic     The   control plane   manages and configures the proxies to route traffic   The following diagram shows the different components that make up each plane        Components  The following sections provide a brief overview of each of Istio s core components       Envoy  Istio uses an extended 
version of the  Envoy  https   www envoyproxy io   proxy  Envoy is a high performance proxy developed in C   to mediate all inbound and outbound traffic for all services in the service mesh  Envoy proxies are the only Istio components that interact with data plane traffic   Envoy proxies are deployed as sidecars to services  logically augmenting the services with Envoy s many built in features  for example     Dynamic service discovery   Load balancing   TLS termination   HTTP 2 and gRPC proxies   Circuit breakers   Health checks   Staged rollouts with   based traffic split   Fault injection   Rich metrics  This sidecar deployment allows Istio to enforce policy decisions and extract rich telemetry which can be sent to monitoring systems to provide information about the behavior of the entire mesh   The sidecar proxy model also allows you to add Istio capabilities to an existing deployment without requiring you to rearchitect or rewrite code   Some of the Istio features and tasks enabled by Envoy proxies include     Traffic control features  enforce fine grained traffic control with rich   routing rules for HTTP  gRPC  WebSocket  and TCP traffic     Network resiliency features  setup retries  failovers  circuit breakers  and   fault injection     Security and authentication features  enforce security policies and enforce   access control and rate limiting defined through the configuration API     Pluggable extensions model based on WebAssembly that allows for custom policy   enforcement and telemetry generation for mesh traffic       Istiod  Istiod provides service discovery  configuration and certificate management   Istiod converts high level routing rules that control traffic behavior into Envoy specific configurations  and propagates them to the sidecars at runtime  It abstracts platform specific service discovery mechanisms and synthesizes them into a standard format that any sidecar conforming with the  Envoy API  https   www envoyproxy io docs envoy latest 
api api  can consume   Istio can support discovery for multiple environments such as Kubernetes or VMs   You can use Istio s  Traffic Management API   docs concepts traffic management  introducing istio traffic management  to instruct Istiod to refine the Envoy configuration to exercise more granular control over the traffic in your service mesh   Istiod  security   docs concepts security   enables strong service to service and end user authentication with built in identity and credential management  You can use Istio to upgrade unencrypted traffic in the service mesh  Using Istio  operators can enforce policies based on service identity rather than on relatively unstable layer 3 or layer 4 network identifiers  Additionally  you can use  Istio s authorization feature   docs concepts security  authorization  to control who can access your services   Istiod acts as a Certificate Authority  CA  and generates certificates to allow secure mTLS communication in the data plane "}
{"questions":"istio Istio performance and scalability summary performance title Performance and Scalability benchmarks weight 30 aliases scalability keywords scale","answers":"---\ntitle: Performance and Scalability\ndescription: Istio performance and scalability summary.\nweight: 30\nkeywords:\n  - performance\n  - scalability\n  - scale\n  - benchmarks\naliases:\n  - \/docs\/performance-and-scalability\/overview\n  - \/docs\/performance-and-scalability\/microbenchmarks\n  - \/docs\/performance-and-scalability\/performance-testing-automation\n  - \/docs\/performance-and-scalability\/realistic-app-benchmark\n  - \/docs\/performance-and-scalability\/scalability\n  - \/docs\/performance-and-scalability\/scenarios\n  - \/docs\/performance-and-scalability\/synthetic-benchmarks\n  - \/docs\/concepts\/performance-and-scalability\n  - \/docs\/ops\/performance-and-scalability\nowner: istio\/wg-environments-maintainers\ntest: n\/a\n---\n\nIstio makes it easy to create a network of deployed services with rich routing,\nload balancing, service-to-service authentication, monitoring, and more - all\nwithout any changes to the application code. Istio strives to provide\nthese benefits with minimal resource overhead and aims to support very\nlarge meshes with high request rates while adding minimal latency.\n\nThe Istio data plane components, the Envoy proxies, handle data flowing through\nthe system. The Istio control plane component, Istiod, configures\nthe data plane. The data plane and control plane have distinct performance concerns.\n\n## Performance summary for Istio 1.24\n\nThe [Istio load tests](https:\/\/github.com\/istio\/tools\/tree\/\/perf\/load) mesh consists\nof **1000** services and **2000** pods in an Istio mesh with 70,000 mesh-wide requests per second.\n\n## Control plane performance\n\nIstiod configures sidecar proxies based on user authored configuration files and the current\nstate of the system. 
In a Kubernetes environment, Custom Resource Definitions (CRDs) and deployments\nconstitute the configuration and state of the system. The Istio configuration objects, like gateways and virtual\nservices, provide the user-authored configuration.\nTo produce the configuration for the proxies, Istiod processes the combined configuration and system state\nfrom the Kubernetes environment and the user-authored configuration.\n\nThe control plane supports thousands of services, spread across thousands of pods with a\nsimilar number of user-authored virtual services and other configuration objects.\nIstiod's CPU and memory requirements scale with the number of configurations and possible system states.\nThe CPU consumption scales with the following factors:\n\n- The rate of deployment changes.\n- The rate of configuration changes.\n- The number of proxies connecting to Istiod.\n\nHowever, this part is inherently horizontally scalable.\n\nYou can increase the number of Istiod instances to reduce the amount of time it takes for the configuration\nto reach all proxies.\n\nAt large scale, [configuration scoping](\/docs\/ops\/configuration\/mesh\/configuration-scoping) is highly recommended.\n\n## Data plane performance\n\nData plane performance depends on many factors, for example:\n\n- Number of client connections\n- Target request rate\n- Request size and Response size\n- Number of proxy worker threads\n- Protocol\n- CPU cores\n- Various proxy features. In particular, telemetry filters (logging, tracing, and metrics) are known to have a moderate impact.\n\nThe latency, throughput, and the proxies' CPU and memory consumption are measured as a function of said factors.\n\n### Sidecar and ztunnel resource usage\n\nSince the sidecar proxy performs additional work on the data path, it consumes CPU\nand memory. 
In Istio 1.24, with 1000 http requests per second containing 1 KB of payload each:\n\n- a single sidecar proxy with 2 worker threads consumes about 0.20 vCPU and 60 MB of memory.\n- a single waypoint proxy with 2 worker threads consumes about 0.25 vCPU and 60 MB of memory.\n- a single ztunnel proxy consumes about 0.06 vCPU and 12 MB of memory.\n\nThe memory consumption of the proxy depends on the total configuration state the proxy holds.\nA large number of listeners, clusters, and routes can increase memory usage.\n\n### Latency\n\nSince Istio adds a sidecar proxy or ztunnel proxy on the data path, latency is an important\nconsideration.\nEvery feature Istio adds also adds to the path length inside the proxy and potentially affects latency.\n\nThe Envoy proxy collects raw telemetry data after a response is sent to the\nclient.\nThe time spent collecting raw telemetry for a request does not contribute to the total time taken to complete that request.\nHowever, since the worker is busy handling the request, the worker won't start handling the next request immediately.\nThis process adds to the queue wait time of the next request and affects average and tail latencies.\nThe actual tail latency depends on the traffic pattern.\n\n### Latency for Istio 1.24\n\nIn sidecar mode, a request will pass through the client sidecar proxy and then the server sidecar proxy before reaching the server, and vice versa.\nIn ambient mode, a request will pass through the client node ztunnel and then the server node ztunnel before reaching the server.\nWith waypoints configured, a request will go through a waypoint proxy between the ztunnels.\nThe following charts show the P90 and P99 latency of http\/1.1 requests traveling through various dataplane modes.\nTo run the tests, we used a bare-metal cluster of 5 [M3 Large](https:\/\/deploy.equinix.com\/product\/servers\/m3-large\/) machines and [Flannel](https:\/\/github.com\/flannel-io\/flannel) as the primary CNI.\nWe obtained these results 
using the [Istio benchmarks](https:\/\/github.com\/istio\/tools\/tree\/\/perf\/benchmark) for the `http\/1.1` protocol with a 1 KB payload at 500, 750, 1000, 1250, and 1500 requests per second using 4 client connections, 2 proxy workers and mutual TLS enabled.\n\nNote: This testing was performed on the [CNCF Community Infrastructure Lab](https:\/\/github.com\/cncf\/cluster).\nDifferent hardware will give different values.\n\n<img width=\"90%\" style=\"display: block; margin: auto;\"\n    src=\"istio-1.24.0-fortio-90.png\"\n    alt=\"P90 latency vs client connections\"\n    caption=\"P90 latency vs client connections\"\n\/>\n<br><br>\n<img width=\"90%\" style=\"display: block; margin: auto;\"\n    src=\"istio-1.24.0-fortio-99.png\"\n    alt=\"P99 latency vs client connections\"\n    caption=\"P99 latency vs client connections\"\n\/>\n<br>\n\n- `no mesh`: Client pod directly calls the server pod, no pods in Istio service mesh.\n- `ambient: L4`: Default ambient mode with the secure L4 overlay\n- `ambient: L4+L7` Default ambient mode with the secure L4 overlay and waypoints enabled for the namespace.\n- `sidecar` Client and server sidecars.\n\n### Benchmarking tools\n\nIstio uses the following tools for benchmarking\n\n- [`fortio.org`](https:\/\/fortio.org\/) - a constant throughput load testing tool.\n- [`nighthawk`](https:\/\/github.com\/envoyproxy\/nighthawk) - a load testing tool based on Envoy.\n- [`isotope`](https:\/\/github.com\/istio\/tools\/tree\/\/isotope) - a synthetic application with configurable topology.","site":"istio","answers_cleaned":"    title  Performance and Scalability description  Istio performance and scalability summary  weight  30 keywords      performance     scalability     scale     benchmarks aliases       docs performance and scalability overview      docs performance and scalability microbenchmarks      docs performance and scalability performance testing automation      docs performance and scalability realistic app benchmark      docs 
performance and scalability scalability      docs performance and scalability scenarios      docs performance and scalability synthetic benchmarks      docs concepts performance and scalability      docs ops performance and scalability owner  istio wg environments maintainers test  n a      Istio makes it easy to create a network of deployed services with rich routing  load balancing  service to service authentication  monitoring  and more   all without any changes to the application code  Istio strives to provide these benefits with minimal resource overhead and aims to support very large meshes with high request rates while adding minimal latency   The Istio data plane components  the Envoy proxies  handle data flowing through the system  The Istio control plane component  Istiod  configures the data plane  The data plane and control plane have distinct performance concerns      Performance summary for Istio 1 24  The  Istio load tests  https   github com istio tools tree  perf load  mesh consists of   1000   services and   2000   pods in an Istio mesh with 70 000 mesh wide requests per second      Control plane performance  Istiod configures sidecar proxies based on user authored configuration files and the current state of the system  In a Kubernetes environment  Custom Resource Definitions  CRDs  and deployments constitute the configuration and state of the system  The Istio configuration objects like gateways and virtual services  provide the user authored configuration  To produce the configuration for the proxies  Istiod processes the combined configuration and system state from the Kubernetes environment and the user authored configuration   The control plane supports thousands of services  spread across thousands of pods with a similar number of user authored virtual services and other configuration objects  Istiod s CPU and memory requirements scale with the amount of configurations and possible system states  The CPU consumption scales with the 
following factors     The rate of deployment changes    The rate of configuration changes    The number of proxies connecting to Istiod   However  this part is inherently horizontally scalable   You can increase the number of Istiod instances to reduce the amount of time it takes for the configuration to reach all proxies   At large scale   configuration scoping   docs ops configuration mesh configuration scoping  is highly recommended      Data plane performance  Data plane performance depends on many factors  for example     Number of client connections   Target request rate   Request size and Response size   Number of proxy worker threads   Protocol   CPU cores   Various proxy features  In particular  telemetry filters  logging  tracing  and metrics  are known to have a moderate impact   The latency  throughput  and the proxies  CPU and memory consumption are measured as a function of said factors       Sidecar and ztunnel resource usage  Since the sidecar proxy performs additional work on the data path  it consumes CPU and memory  In Istio 1 24  with 1000 http requests per second containing 1 KB of payload each   a single sidecar proxy with 2 worker threads consumes about 0 20 vCPU and 60 MB of memory    a single waypoint proxy with 2 worker threads consumes about 0 25 vCPU and 60 MB of memory   a single ztunnel proxy consumes about 0 06 vCPU and 12 MB of memory   The memory consumption of the proxy depends on the total configuration state the proxy holds  A large number of listeners  clusters  and routes can increase memory usage       Latency  Since Istio adds a sidecar proxy or ztunnel proxy on the data path  latency is an important consideration  Every feature Istio adds also adds to the path length inside the proxy and potentially affects latency   The Envoy proxy collects raw telemetry data after a response is sent to the client  The time spent collecting raw telemetry for a request does not contribute to the total time taken to complete that request  
However, since the worker is busy handling the request, the worker won't start handling the next request immediately. This process adds to the queue wait time of the next request and affects average and tail latencies. The actual tail latency depends on the traffic pattern.

## Latency for Istio 1.24

In sidecar mode, a request will pass through the client sidecar proxy and then the server sidecar proxy before reaching the server, and vice versa. In ambient mode, a request will pass through the client node ztunnel and then the server node ztunnel before reaching the server. With waypoints configured, a request will go through a waypoint proxy between the ztunnels. The following charts show the P90 and P99 latency of HTTP/1.1 requests traveling through the various dataplane modes.

To run the tests, we used a bare metal cluster of 5 [m3.large](https://deploy.equinix.com/product/servers/m3-large/) machines and [Flannel](https://github.com/flannel-io/flannel) as the primary CNI. We obtained these results using the [Istio benchmarks](https://github.com/istio/tools) (`perf/benchmark` directory) for the HTTP/1.1 protocol with a 1 KB payload at 500, 750, 1000, 1250, and 1500 requests per second, using 4 client connections, 2 proxy workers, and mutual TLS enabled.

Note: This testing was performed on the [CNCF Community Infrastructure Lab](https://github.com/cncf/cluster). Different hardware will give different values.

![P90 latency vs client connections](istio-1-24-0-fortio-90.png)

![P99 latency vs client connections](istio-1-24-0-fortio-99.png)

- `no mesh`: Client pod directly calls the server pod; no pods are in the Istio service mesh.
- `ambient: L4`: Default ambient mode with the secure L4 overlay.
- `ambient: L4+L7`: Default ambient mode with the secure L4 overlay and waypoints enabled for the namespace.
- `sidecar`: Client and server sidecars.

## Benchmarking tools

Istio uses the following tools for benchmarking:

- [fortio.org](https://fortio.org): a constant throughput load testing tool
- [nighthawk](https://github.com/envoyproxy/nighthawk): a load testing tool based on Envoy
- [isotope](https://github.com/istio/tools) (`isotope` directory): a synthetic application with configurable topology"}
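The benchmark parameters above (fixed request rate, 4 client connections, 1 KB payload) roughly correspond to a fortio invocation along these lines. This is a sketch only: the target URL and the 60-second duration are placeholders, not values stated in the text.

```bash
# Sketch: a fortio run mirroring the benchmark parameters described above.
# -qps:          fixed request rate (the tests used 500, 750, 1000, 1250, 1500)
# -c:            4 client connections
# -payload-size: 1 KB request payload
# Duration and target URL are assumptions for illustration.
fortio load -qps 1000 -c 4 -t 60s -payload-size 1024 \
  http://server.test.svc.cluster.local:8080/
```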
{"questions":"istio weight 20 networks multiple clusters tenancy title Deployment Models keywords Describes the options and considerations when configuring your Istio deployment single cluster control plane","answers":"---\ntitle: Deployment Models\ndescription: Describes the options and considerations when configuring your Istio deployment.\nweight: 20\nkeywords:\n  - single-cluster\n  - multiple-clusters\n  - control-plane\n  - tenancy\n  - networks\n  - identity\n  - trust\n  - single-mesh\n  - multiple-meshes\naliases:\n  - \/docs\/concepts\/multicluster-deployments\n  - \/docs\/concepts\/deployment-models\n  - \/docs\/ops\/prep\/deployment-models\nowner: istio\/wg-environments-maintainers\ntest: n\/a\n---\n\nWhen configuring a production deployment of Istio, you need to answer a number of questions.\nWill the mesh be confined to a single cluster or distributed across\nmultiple clusters? Will all the services be located in a single fully connected network, or will\ngateways be required to connect services across multiple networks? Is there a single\ncontrol plane, potentially shared across clusters,\nor are there multiple control planes deployed to ensure high availability (HA)?\nAre all clusters going to be connected into a single multicluster\nservice mesh or will they be federated into a multi-mesh deployment?\n\nAll of these questions, among others, represent independent dimensions of configuration for an Istio deployment.\n\n1. single or multiple cluster\n1. single or multiple network\n1. single or multiple control plane\n1. single or multiple mesh\n\nIn a production environment involving multiple clusters, you can use a mix\nof deployment models. For example, having more than one control plane is recommended for HA,\nbut you could achieve this for a 3 cluster deployment by deploying 2 clusters with\na single shared control plane and then adding the third cluster with a second\ncontrol plane in a different network. 
All three clusters could then be configured\nto share both control planes so that all the clusters have 2 sources of control\nto ensure HA.\n\nChoosing the right deployment model depends on the isolation, performance,\nand HA requirements for your use case. This guide describes the various options and\nconsiderations when configuring your Istio deployment.\n\n## Cluster models\n\nThe workload instances of your application run in one or more\nclusters. For isolation, performance, and\nhigh availability, you can confine clusters to availability zones and regions.\n\nProduction systems, depending on their requirements, can run across multiple\nclusters spanning a number of zones or regions, leveraging cloud load balancers\nto handle things like locality and zonal or regional fail over.\n\nIn most cases, clusters represent boundaries for configuration and endpoint\ndiscovery. For example, each Kubernetes cluster has an API Server which manages\nthe configuration for the cluster as well as serving\nservice endpoint information as pods are brought up\nor down. Since Kubernetes configures this behavior on a per-cluster basis, this\napproach helps limit the potential problems caused by incorrect configurations.\n\nIn Istio, you can configure a single service mesh to span any number of\nclusters.\n\n### Single cluster\n\nIn the simplest case, you can confine an Istio mesh to a single\ncluster. A cluster usually operates over a\n[single network](#single-network), but it varies between infrastructure\nproviders. A single cluster and single network model includes a control plane,\nwhich results in the simplest Istio deployment.\n\n\n\nSingle cluster deployments offer simplicity, but lack other features, for\nexample, fault isolation and fail over. If you need higher availability, you\nshould use multiple clusters.\n\n### Multiple clusters\n\nYou can configure a single mesh to include\nmultiple clusters. 
Using a\nmulticluster deployment within a single mesh affords\nthe following capabilities beyond that of a single cluster deployment:\n\n- Fault isolation and fail over: `cluster-1` goes down, fail over to `cluster-2`.\n- Location-aware routing and fail over: Send requests to the nearest service.\n- Various [control plane models](#control-plane-models): Support different\n  levels of availability.\n- Team or project isolation: Each team runs its own set of clusters.\n\n\n\nMulticluster deployments give you a greater degree of isolation and\navailability but increase complexity. If your systems have high availability\nrequirements, you likely need clusters across multiple zones and regions. You\ncan canary configuration changes or new binary releases in a single cluster,\nwhere the configuration changes only affect a small amount of user traffic.\nAdditionally, if a cluster has a problem, you can temporarily route traffic to\nnearby clusters until you address the issue.\n\nYou can configure inter-cluster communication based on the\n[network](#network-models) and the options supported by your cloud provider. 
For\nexample, if two clusters reside on the same underlying network, you can enable\ncross-cluster communication by simply configuring firewall rules.\n\nWithin a multicluster mesh, all services are shared by default, according to the\nconcept of namespace sameness.\n[Traffic management rules](\/docs\/ops\/configuration\/traffic-management\/multicluster)\nprovide fine-grained control over the behavior of multicluster traffic.\n\n### DNS with multiple clusters\n\nWhen a client application makes a request to some host, it must first perform a\nDNS lookup for the hostname to obtain an IP address before it can proceed with\nthe request.\nIn Kubernetes, the DNS server residing within the cluster typically handles\nthis DNS lookup, based on the configured `Service` definitions.\n\nIstio uses the virtual IP returned by the DNS lookup to load balance\nacross the list of active endpoints for the requested service, taking into account any\nIstio configured routing rules.\nIstio uses either Kubernetes `Service`\/`Endpoint` or Istio `ServiceEntry` to\nconfigure its internal mapping of hostname to workload IP addresses.\n\nThis two-tiered naming system becomes more complicated when you have multiple\nclusters. Istio is inherently multicluster-aware, but Kubernetes is not\n(today). Because of this, the client cluster must have a DNS entry for the\nservice in order for the DNS lookup to succeed, and a request to be\nsuccessfully sent. This is true even if there are no instances of that\nservice's pods running in the client cluster.\n\nTo ensure that DNS lookup succeeds, you must deploy a Kubernetes `Service` to\neach cluster that consumes that service. This ensures that regardless of\nwhere the request originates, it will pass DNS lookup and be handed to Istio\nfor proper routing.\nThis can also be achieved with Istio `ServiceEntry`, rather than Kubernetes\n`Service`. 
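As a sketch, the per-consuming-cluster stub `Service` described above might look like the following; the service name, namespace, and port are illustrative. No selector is needed in clusters that run none of the service's pods, because the stub only has to make the DNS lookup succeed; Istio, not Kubernetes, resolves the real endpoints.

```yaml
# Illustrative "stub" Service deployed to a client cluster so that DNS
# lookups for mysvc.foo.svc.cluster.local succeed there, even when the
# actual pods run only in another cluster.
apiVersion: v1
kind: Service
metadata:
  name: mysvc
  namespace: foo
spec:
  ports:
  - name: http
    port: 80
```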
However, a `ServiceEntry` does not configure the Kubernetes DNS server.\nThis means that DNS will need to be configured either manually or\nwith automated tooling such as the\n[Address auto allocation](\/docs\/ops\/configuration\/traffic-management\/dns-proxy\/#address-auto-allocation)\nfeature of [Istio DNS Proxying](\/docs\/ops\/configuration\/traffic-management\/dns-proxy\/).\n\n\nThere are a few efforts in progress that will help simplify the DNS story:\n\n- [Admiral](https:\/\/github.com\/istio-ecosystem\/admiral) is an Istio community\n  project that provides a number of multicluster capabilities. If you need to support multi-network\n  topologies, managing this configuration across multiple clusters at scale is challenging.\n  Admiral takes an opinionated view on this configuration and provides automatic provisioning and\n  synchronization across clusters.\n\n- [Kubernetes Multi-Cluster Services](https:\/\/github.com\/kubernetes\/enhancements\/tree\/master\/keps\/sig-multicluster\/1645-multi-cluster-services-api)\n  is a Kubernetes Enhancement Proposal (KEP) that defines an API for exporting\n  services to multiple clusters. This effectively pushes the responsibility of\n  service visibility and DNS resolution for the entire `clusterset` onto\n  Kubernetes. There is also work in progress to build layers of `MCS` support\n  into Istio, which would allow Istio to work with any cloud vendor `MCS`\n  controller or even act as the `MCS` controller for the entire mesh.\n\n\n## Network models\n\nIstio uses a simplified definition of network to\nrefer to workload instances that have direct\nreachability. For example, by default all workload instances in a single\ncluster are on the same network.\n\nMany production systems require multiple networks or subnets for isolation\nand high availability. Istio supports spanning a service mesh over a variety of\nnetwork topologies. 
This approach allows you to select the network model that\nfits your existing network topology.\n\n### Single network\n\nIn the simplest case, a service mesh operates over a single fully connected\nnetwork. In a single network model, all\nworkload instances\ncan reach each other directly without an Istio gateway.\n\nA single network allows Istio to configure service consumers in a uniform\nway across the mesh with the ability to directly address workload instances.\nNote, however, that for a single network across multiple clusters,\nservices and endpoints cannot have overlapping IP addresses.\n\n\n\n### Multiple networks\n\nYou can span a single service mesh across multiple networks; such a\nconfiguration is known as **multi-network**.\n\nMultiple networks afford the following capabilities beyond that of single networks:\n\n- Overlapping IP or VIP ranges for **service endpoints**\n- Crossing of administrative boundaries\n- Fault tolerance\n- Scaling of network addresses\n- Compliance with standards that require network segmentation\n\nIn this model, the workload instances in different networks can only reach each\nother through one or more [Istio gateways](\/docs\/concepts\/traffic-management\/#gateways).\nIstio uses **partitioned service discovery** to provide consumers a different\nview of service endpoints. The view depends on the\nnetwork of the consumers.\n\n\n\nThis solution requires exposing all services (or a subset) through the gateway.\nCloud vendors may provide options that will not require exposing services on\nthe public internet. Such an option, if it exists and meets your requirements,\nwill likely be the best choice.\n\n\nIn order to ensure secure communications in a multi-network scenario, Istio\nonly supports cross-network communication to workloads with an Istio proxy.\nThis is due to the fact that Istio exposes services at the Ingress Gateway with TLS\npass-through, which enables mTLS directly to the workload. 
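The TLS pass-through exposure just described is typically configured on a dedicated cross-network ("east-west") gateway. The following sketch follows the pattern used in Istio's multi-network install guides; the gateway name, namespace, and selector label are conventions assumed here, not requirements.

```yaml
# Sketch of a cross-network gateway. Port 15443 with AUTO_PASSTHROUGH
# forwards mTLS traffic from workloads in other networks directly to the
# target workload, without terminating TLS at the gateway.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: cross-network-gateway
  namespace: istio-system
spec:
  selector:
    istio: eastwestgateway
  servers:
  - port:
      number: 15443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
```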
A workload without\nan Istio proxy, however, will likely not be able to participate in mutual\nauthentication with other workloads. For this reason, Istio filters\nout-of-network endpoints for proxyless services.\n\n\n## Control plane models\n\nAn Istio mesh uses the control plane to configure all\ncommunication between workload instances within the mesh. Workload instances\nconnect to a control plane instance to get their configuration.\n\nIn the simplest case, you can run your mesh with a control plane on a single\ncluster.\n\n\n\nA cluster like this one, with its own local control plane, is referred to as\na primary cluster.\n\nMulticluster deployments can also share control plane instances. In this case,\nthe control plane instances can reside in one or more primary clusters.\nClusters without their own control plane are referred to as\nremote clusters.\n\n\n\nTo support remote clusters in a multicluster mesh, the control plane in\na primary cluster must be accessible via a stable IP (e.g., a cluster IP).\nFor clusters spanning networks,\nthis can be achieved by exposing the control plane through an Istio gateway.\nCloud vendors may provide options, such as internal load balancers, for\nproviding this capability without exposing the control plane on the\npublic internet. Such an option, if it exists and meets your requirements,\nwill likely be the best choice.\n\nIn multicluster deployments with more than one primary cluster, each primary\ncluster receives its configuration (i.e., `Service` and `ServiceEntry`,\n`DestinationRule`, etc.) from the Kubernetes API Server residing in the same\ncluster. Each primary cluster, therefore, has an independent source of\nconfiguration.\nThis duplication of configuration across primary clusters does require\nadditional steps when rolling out changes. 
Large production\nsystems may automate this process with tooling, such as CI\/CD systems, in\norder to manage configuration rollout.\n\nInstead of running control planes in primary clusters inside the mesh, a\nservice mesh composed entirely of remote clusters can be controlled by an\nexternal control plane. This provides isolated\nmanagement and complete separation of the control plane deployment from the\ndata plane services that comprise the mesh.\n\n\n\nA cloud vendor's managed control plane is a\ntypical example of an external control plane.\n\nFor high availability, you should deploy multiple control planes across\nclusters, zones, or regions.\n\n\n\nThis model affords the following benefits:\n\n- Improved availability: If a control plane becomes unavailable, the scope of\n  the outage is limited to only workloads in clusters managed by that control plane.\n\n- Configuration isolation: You can make configuration changes in one cluster,\n  zone, or region without impacting others.\n\n- Controlled rollout: You have more fine-grained control over configuration\n  rollout (e.g., one cluster at a time). You can also canary configuration changes in a sub-section of the mesh\n  controlled by a given primary cluster.\n\n- Selective service visibility: You can restrict service visibility to part\n  of the mesh, helping to establish service-level isolation. For example, an\n  administrator may choose to deploy the `HelloWorld` service to Cluster A,\n  but not Cluster B. 
Any attempt to call `HelloWorld` from Cluster B will\n  fail the DNS lookup.\n\nThe following list ranks control plane deployment examples by availability:\n\n- One cluster per region (**lowest availability**)\n- Multiple clusters per region\n- One cluster per zone\n- Multiple clusters per zone\n- Each cluster (**highest availability**)\n\n### Endpoint discovery with multiple control planes\n\nAn Istio control plane manages traffic within the mesh by providing each proxy\nwith the list of service endpoints. In order to make this work in a\nmulticluster scenario, each control plane must observe endpoints from the API\nServer in every cluster.\n\nTo enable endpoint discovery for a cluster, an administrator generates a\n`remote secret` and deploys it to each primary cluster in the mesh. The\n`remote secret` contains credentials, granting access to the API server in the\ncluster.\nThe control planes will then connect and discover the service endpoints for\nthe cluster, enabling cross-cluster load balancing for these services.\n\n\n\nBy default, Istio will load balance requests evenly between endpoints in\neach cluster. In large systems that span geographic regions, it may be\ndesirable to use [locality load balancing](\/docs\/tasks\/traffic-management\/locality-load-balancing)\nto prefer that traffic stay in the same zone or region.\n\nIn some advanced scenarios, load balancing across clusters may not be desired.\nFor example, in a blue\/green deployment, you may deploy different versions of\nthe system to different clusters. In this case, each cluster is effectively\noperating as an independent mesh. This behavior can be achieved in a couple of\nways:\n\n- Do not exchange remote secrets between the clusters. This offers the\n  strongest isolation between the clusters.\n\n- Use `VirtualService` and `DestinationRule` to disallow routing between two\n  versions of the services.\n\nIn either case, cross-cluster load balancing is prevented. 
External traffic\ncan be routed to one cluster or the other using an external load balancer.\n\n## Identity and trust models\n\nWhen a workload instance is created within a service mesh, Istio assigns the\nworkload an identity.\n\nThe Certificate Authority (CA) creates and signs the certificates used to verify\nthe identities used within the mesh. You can verify the identity of the message sender\nwith the public key of the CA that created and signed the certificate\nfor that identity. A **trust bundle** is the set of all CA public keys used by\nan Istio mesh. With a mesh's trust bundle, anyone can verify the sender of any\nmessage coming from that mesh.\n\n### Trust within a mesh\n\nWithin a single Istio mesh, Istio ensures each workload instance has an\nappropriate certificate representing its own identity, and the trust bundle\nnecessary to recognize all identities within the mesh and any federated meshes.\nThe CA creates and signs the certificates for those identities. This model\nallows workload instances in the mesh to authenticate each other when\ncommunicating.\n\n\n\n### Trust between meshes\n\nTo enable communication between two meshes with different CAs, you must\nexchange the trust bundles of the meshes. Istio does not provide any tooling\nto exchange trust bundles across meshes. You can exchange the trust bundles\neither manually or automatically using a protocol such as [SPIFFE Trust Domain Federation](https:\/\/github.com\/spiffe\/spiffe\/blob\/main\/standards\/SPIFFE_Federation.md).\nOnce you import a trust bundle to a mesh, you can configure local policies for\nthose identities.\n\n\n\n## Mesh models\n\nIstio supports having all of your services in a\nmesh, or federating multiple meshes\ntogether, which is also known as multi-mesh.\n\n### Single mesh\n\nThe simplest Istio deployment is a single mesh. Within a mesh, service names are\nunique. For example, only one service can have the name `mysvc` in the `foo`\nnamespace. 
Additionally, workload instances share a common identity since\nservice account names are unique within a namespace, just like service names.\n\nA single mesh can span [one or more clusters](#cluster-models) and\n[one or more networks](#network-models). Within a mesh,\n[namespaces](#namespace-tenancy) are used for [tenancy](#tenancy-models).\n\n### Multiple meshes\n\nMultiple mesh deployments result from mesh federation.\n\nMultiple meshes afford the following capabilities beyond that of a single mesh:\n\n- Organizational boundaries: lines of business\n- Service name or namespace reuse: multiple distinct uses of the `default`\n  namespace\n- Stronger isolation: isolating test workloads from production workloads\n\nYou can enable inter-mesh communication with mesh federation. When federating, each mesh can expose a set of services and\nidentities, which all participating meshes can recognize.\n\n\n\nTo avoid service naming collisions, you can give each mesh a globally unique\n**mesh ID**, to ensure that the fully qualified domain\nname (FQDN) for each service is distinct.\n\nWhen federating two meshes that do not share the same\ntrust domain, you must federate\nidentity and **trust bundles** between them. 
See the\nsection on [Trust between meshes](#trust-between-meshes) for more details.\n\n## Tenancy models\n\nIn Istio, a **tenant** is a group of users that share\ncommon access and privileges for a set of deployed workloads.\nTenants can be used to provide a level of isolation between different teams.\n\nYou can configure tenancy models to satisfy the following organizational\nrequirements for isolation:\n\n- Security\n- Policy\n- Capacity\n- Cost\n- Performance\n\nIstio supports three types of tenancy models:\n\n- [Namespace tenancy](#namespace-tenancy)\n- [Cluster tenancy](#cluster-tenancy)\n- [Mesh tenancy](#mesh-tenancy)\n\n### Namespace tenancy\n\nA cluster can be shared across multiple teams, each using a different namespace.\nYou can grant a team permission to deploy its workloads only to a given namespace\nor set of namespaces.\n\nBy default, services from multiple namespaces can communicate with each other,\nbut you can increase isolation by selectively choosing which services to expose to other\nnamespaces. You can configure authorization policies for exposed services to restrict\naccess to only the appropriate callers.\n\n\n\nNamespace tenancy can extend beyond a single cluster.\nWhen using [multiple clusters](#multiple-clusters), the namespaces in each\ncluster sharing the same name are considered the same namespace by default.\nFor example, `Service B` in the `Team-1` namespace of cluster `West` and `Service B` in the\n`Team-1` namespace of cluster `East` refer to the same service, and Istio merges their\nendpoints for service discovery and load balancing.\n\n\n\n### Cluster tenancy\n\nIstio supports using clusters as a unit of tenancy. In this case, you can give\neach team a dedicated cluster or set of clusters to deploy their\nworkloads. Permissions for a cluster are usually limited to the members of the\nteam that owns it. 
You can set various roles for finer grained control, for\nexample:\n\n- Cluster administrator\n- Developer\n\nTo use cluster tenancy with Istio, you configure each team's cluster with its\nown control plane, allowing each team to manage its own configuration.\nAlternatively, you can use Istio to implement a group of clusters as a single tenant\nusing remote clusters or multiple\nsynchronized primary clusters.\nRefer to [control plane models](#control-plane-models) for details.\n\n### Mesh tenancy\n\nIn a multi-mesh deployment with mesh federation, each mesh\ncan be used as the unit of isolation.\n\n\n\nSince a different team or organization operates each mesh, service naming\nis rarely distinct. For example, a `Service C` in the `foo` namespace of\ncluster `Team-1` and the `Service C` in the `foo` namespace of cluster\n`Team-2` will not refer to the same service. The most common example is the\nscenario in Kubernetes where many teams deploy their workloads to the `default`\nnamespace.\n\nWhen each team has its own mesh, cross-mesh communication follows the\nconcepts described in the [multiple meshes](#multiple-meshes) model.","site":"istio","answers_cleaned":"
stay in the same zone or region   In some advanced scenarios  load balancing across clusters may not be desired  For example  in a blue green deployment  you may deploy different versions of the system to different clusters  In this case  each cluster is effectively operating as an independent mesh  This behavior can be achieved in a couple of ways     Do not exchange remote secrets between the clusters  This offers the   strongest isolation between the clusters     Use  VirtualService  and  DestinationRule  to disallow routing between two   versions of the services   In either case  cross cluster load balancing is prevented  External traffic can be routed to one cluster or the other using an external load balancer      Identity and trust models  When a workload instance is created within a service mesh  Istio assigns the workload an identity   The Certificate Authority  CA  creates and signs the certificates used to verify the identities used within the mesh  You can verify the identity of the message sender with the public key of the CA that created and signed the certificate for that identity  A   trust bundle   is the set of all CA public keys used by an Istio mesh  With a mesh s trust bundle  anyone can verify the sender of any message coming from that mesh       Trust within a mesh  Within a single Istio mesh  Istio ensures each workload instance has an appropriate certificate representing its own identity  and the trust bundle necessary to recognize all identities within the mesh and any federated meshes  The CA creates and signs the certificates for those identities  This model allows workload instances in the mesh to authenticate each other when communicating         Trust between meshes  To enable communication between two meshes with different CAs  you must exchange the trust bundles of the meshes  Istio does not provide any tooling to exchange trust bundles across meshes  You can exchange the trust bundles either manually or automatically using a 
protocol such as  SPIFFE Trust Domain Federation  https   github com spiffe spiffe blob main standards SPIFFE Federation md   Once you import a trust bundle to a mesh  you can configure local policies for those identities        Mesh models  Istio supports having all of your services in a mesh  or federating multiple meshes together  which is also known as multi mesh       Single mesh  The simplest Istio deployment is a single mesh  Within a mesh  service names are unique  For example  only one service can have the name  mysvc  in the  foo  namespace  Additionally  workload instances share a common identity since service account names are unique within a namespace  just like service names   A single mesh can span  one or more clusters   cluster models  and  one or more networks   network models   Within a mesh   namespaces   namespace tenancy  are used for  tenancy   tenancy models        Multiple meshes  Multiple mesh deployments result from mesh federation   Multiple meshes afford the following capabilities beyond that of a single mesh     Organizational boundaries  lines of business   Service name or namespace reuse  multiple distinct uses of the  default    namespace   Stronger isolation  isolating test workloads from production workloads  You can enable inter mesh communication with mesh federation  When federating  each mesh can expose a set of services and identities  which all participating meshes can recognize     To avoid service naming collisions  you can give each mesh a globally unique   mesh ID    to ensure that the fully qualified domain name  FQDN  for each service is distinct   When federating two meshes that do not share the same trust domain  you must federate identity and   trust bundles   between them  See the section on  Trust between meshes   trust between meshes  for more details      Tenancy models  In Istio  a   tenant   is a group of users that share common access and privileges for a set of deployed workloads  Tenants can be used to 
provide a level of isolation between different teams   You can configure tenancy models to satisfy the following organizational requirements for isolation     Security   Policy   Capacity   Cost   Performance  Istio supports three types of tenancy models      Namespace tenancy   namespace tenancy     Cluster tenancy   cluster tenancy     Mesh tenancy   mesh tenancy       Namespace tenancy  A cluster can be shared across multiple teams  each using a different namespace  You can grant a team permission to deploy its workloads only to a given namespace or set of namespaces   By default  services from multiple namespaces can communicate with each other  but you can increase isolation by selectively choosing which services to expose to other namespaces  You can configure authorization policies for exposed services to restrict access to only the appropriate callers     Namespace tenancy can extend beyond a single cluster  When using  multiple clusters   multiple clusters   the namespaces in each cluster sharing the same name are considered the same namespace by default  For example   Service B  in the  Team 1  namespace of cluster  West  and  Service B  in the  Team 1  namespace of cluster  East  refer to the same service  and Istio merges their endpoints for service discovery and load balancing         Cluster tenancy  Istio supports using clusters as a unit of tenancy  In this case  you can give each team a dedicated cluster or set of clusters to deploy their workloads  Permissions for a cluster are usually limited to the members of the team that owns it  You can set various roles for finer grained control  for example     Cluster administrator   Developer  To use cluster tenancy with Istio  you configure each team s cluster with its own control plane  allowing each team to manage its own configuration  Alternatively  you can use Istio to implement a group of clusters as a single tenant using remote clusters or multiple synchronized primary clusters  Refer to  control 
plane models   control plane models  for details       Mesh Tenancy  In a multi mesh deployment with mesh federation  each mesh can be used as the unit of isolation     Since a different team or organization operates each mesh  service naming is rarely distinct  For example  a  Service C  in the  foo  namespace of cluster  Team 1  and the  Service C  service in the  foo  namespace of cluster  Team 2  will not refer to the same service  The most common example is the scenario in Kubernetes where many teams deploy their workloads to the  default  namespace   When each team has its own mesh  cross mesh communication follows the concepts described in the  multiple meshes   multiple meshes  model "}
{"questions":"istio Describes Istio s security model title Security Model owner istio wg security maintainers This document aims to describe the security posture of Istio s various components and how possible attacks can impact the system weight 10 test n a","answers":"---\ntitle: Security Model\ndescription: Describes Istio's security model.\nweight: 10\nowner: istio\/wg-security-maintainers\ntest: n\/a\n---\n\nThis document aims to describe the security posture of Istio's various components, and how possible attacks can impact the system.\n\n## Components\n\nIstio comes with a variety of optional components that will be covered here.\nFor a high level overview, see [Istio Architecture](\/docs\/ops\/deployment\/architecture\/).\nNote that Istio deployments are highly flexible; below, we will primarily assume the worst case scenarios.\n\n### Istiod\n\nIstiod serves as the core control plane component of Istio, often serving the role of the [XDS serving component](\/docs\/concepts\/traffic-management\/) as well\nas the mesh [mTLS Certificate Authority](\/docs\/concepts\/security\/).\n\nIstiod is considered a highly privileged component, similar to that of the Kubernetes API server itself.\n* It has high Kubernetes RBAC privileges, typically including `Secret` read access and webhook write access.\n* When acting as the CA, it can provision arbitrary certificates.\n* When acting as the XDS control plane, it can program proxies to perform arbitrary behavior.\n\nAs such, the security of the cluster is tightly coupled to the security of Istiod.\nFollowing [Kubernetes security best practices](https:\/\/kubernetes.io\/docs\/concepts\/security\/) around Istiod access is paramount.\n\n### Istio CNI plugin\n\nIstio can optionally be deployed with the [Istio CNI Plugin `DaemonSet`](\/docs\/setup\/additional-setup\/cni\/).\nThis `DaemonSet` is responsible for setting up networking rules in Istio to ensure traffic is transparently redirected as needed.\nThis is an alternative to 
the `istio-init` container discussed [below](#sidecar-proxies).\n\nBecause the CNI `DaemonSet` modifies networking rules on the node, it requires an elevated `securityContext`.\nHowever, unlike [Istiod](#istiod), this is a **node-local** privilege.\nThe implications of this are discussed [below](#node-compromise).\n\nBecause this consolidates the elevated privileges required to set up networking into a single pod, rather than *every* pod,\nthis option is generally recommended.\n\n### Sidecar Proxies\n\nIstio may [optionally](\/docs\/overview\/dataplane-modes\/) deploy a sidecar proxy next to an application.\n\nThe sidecar proxy needs the network to be programmed to direct all traffic through the proxy.\nThis can be done with the [Istio CNI plugin](#istio-cni-plugin) or by deploying an `initContainer` (`istio-init`) on the pod (this is done automatically if the CNI plugin is not deployed).\nThe `istio-init` container requires `NET_ADMIN` and `NET_RAW` capabilities.\nHowever, these capabilities are only present during the initialization - the primary sidecar container is completely unprivileged.\n\nAdditionally, the sidecar proxy does not require any associated Kubernetes RBAC privileges at all.\n\nEach sidecar proxy is authorized to request a certificate for the associated Pod Service Account.\n\n### Gateways and Waypoints\n\nGateways and Waypoints act as standalone proxy deployments.\nUnlike [sidecars](#sidecar-proxies), they do not require any networking modifications, and thus don't require any privilege.\n\nThese components run with their own service accounts, distinct from application identities.\n\n### Ztunnel\n\nZtunnel acts as a node-level proxy.\nThis task requires the `NET_ADMIN`, `SYS_ADMIN`, and `NET_RAW` capabilities.\nLike the [Istio CNI Plugin](#istio-cni-plugin), these are **node-local** privileges only.\nThe Ztunnel does not have any associated Kubernetes RBAC privileges.\n\nZtunnel is authorized to request certificates for any Service Accounts of 
pods running on the same node.\nSimilar to [kubelet](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/node\/), this explicitly does not allow requesting arbitrary\ncertificates.\nThis, again, ensures these privileges are **node-local** only.\n\n## Traffic Capture Properties\n\nWhen a pod is enrolled in the mesh, all incoming TCP traffic will be redirected to the proxy.\nThis includes both mTLS\/HBONE traffic and plaintext traffic.\nAny applicable [policies](\/docs\/tasks\/security\/authorization\/) for the workload will be enforced before forwarding the traffic to the workload.\n\nHowever, Istio does not currently guarantee that _outgoing_ traffic is redirected to the proxy.\nSee [traffic capture limitations](\/docs\/ops\/best-practices\/security\/#understand-traffic-capture-limitations).\nAs such, care must be taken to follow the [securing egress traffic](\/docs\/ops\/best-practices\/security\/#securing-egress-traffic) steps if outbound policies are required.\n\n## Mutual TLS Properties\n\n[Mutual TLS](\/docs\/concepts\/security\/#mutual-tls-authentication) provides the basis for much of Istio's security posture.\nThe sections below explain the properties mutual TLS provides for Istio's security posture.\n\n### Certificate Authority\n\nIstio comes out of the box with its own Certificate Authority.\n\nBy default, the CA allows authenticating clients based on either of the options below:\n* A Kubernetes JWT token, with an audience of `istio-ca`, verified with a Kubernetes `TokenReview`. 
This is the default method in Kubernetes Pods.\n* An existing mutual TLS certificate.\n* Custom JWT tokens, verified using OIDC (requires configuration).\n\nThe CA will only issue certificates that are requested for identities that a client is authenticated for.\n\nIstio can also integrate with a variety of third party CAs; please refer to any of their security documentation for more information on how they behave.\n\n### Client mTLS\n\n\n\nIn sidecar mode, the client sidecar will [automatically use TLS](\/docs\/ops\/configuration\/traffic-management\/tls-configuration\/#auto-mtls) when connecting to a service\nthat is detected to support mTLS. This can also be [explicitly configured](\/docs\/ops\/configuration\/traffic-management\/tls-configuration\/#sidecars).\nNote that this automatic detection relies on Istio associating the traffic to a Service.\n[Unsupported traffic types](\/docs\/ops\/configuration\/traffic-management\/traffic-routing\/#unmatched-traffic) or [configuration scoping](\/docs\/ops\/configuration\/mesh\/configuration-scoping\/) can prevent this.\n\nWhen [connecting to a backend](\/docs\/concepts\/security\/#secure-naming), the set of allowed identities is computed, at the Service level, based on the union of all backends' identities.\n\n\n\nIn ambient mode, Istio will automatically use mTLS when connecting to any backend that supports mTLS, and verify the identity of the destination matches the identity the workload is expected to be running as.\n\nThese properties differ from sidecar mode in that they are properties of individual workloads, rather than of the service.\nThis enables more fine-grained authentication checks, as well as supporting a wider variety of workloads.\n\n\n\n### Server mTLS\n\nBy default, Istio will accept mTLS and non-mTLS traffic (often called \"permissive mode\").\nUsers can opt in to strict enforcement by writing `PeerAuthentication` or `AuthorizationPolicy` rules requiring mTLS.\n\nWhen mTLS connections are 
established, the peer certificate is verified.\nAdditionally, the peer identity is verified to be within the same trust domain.\nTo verify only specific identities are allowed, an `AuthorizationPolicy` can be used.\n\n## Compromise types explored\n\nBased on the above overview, we will consider the impact on the cluster if various parts of the system are compromised.\nIn the real world, there are a variety of different variables around any security attack:\n\n* How easy it is to execute\n* What prior privileges are required\n* How often it can be exploited\n* What the impact is (total remote execution, denial of service, etc.).\n\nIn this document, we will primarily consider the worst case scenario: a compromised component means an attacker has complete remote code execution capabilities.\n\n### Workload compromise\n\nIn this scenario, an application workload (pod) is compromised.\n\nA pod [*may* have access](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/#opt-out-of-api-credential-automounting) to its service account token.\nIf so, a workload compromise can move laterally from a single pod to compromising the entire service account.\n\n\n\nIn the sidecar model, the proxy is co-located with the pod, and runs within the same trust boundary.\nA compromised application can tamper with the proxy through the admin API or other surfaces, including exfiltration of private key material, allowing another agent to impersonate the workload.\nIt should be assumed that a compromised workload also includes a compromise of the sidecar proxy.\n\nGiven this, a compromised workload may:\n* Send arbitrary traffic, with or without mutual TLS.\n  These may bypass any proxy configuration, or even the proxy entirely.\n  Note that Istio does not offer egress-based authorization policies, so there is no egress authorization policy bypass occurring.\n* Accept traffic that was already destined to the application. 
It may bypass policies that were configured in the sidecar proxy.\nThe key takeaway here is that while the compromised workload may behave maliciously, this does not give them the ability to bypass policies in _other_ workloads.\n\n\n\nIn ambient mode, the node proxy is not co-located within the pod, and runs in another trust boundary as part of an independent pod.\n\nA compromised application may send arbitrary traffic.\nHowever, they do not have control over the node proxy, which will choose how to handle incoming and outbound traffic.\n\nAdditionally, as the pod itself doesn't have access to a service account token to request a mutual TLS certificate, lateral movement possibilities are reduced.\n\n\n\nIstio offers a variety of features that can limit the impact of such a compromise:\n* [Observability](\/docs\/tasks\/observability\/) features can be used to identify the attack.\n* [Policies](\/docs\/tasks\/security\/authorization\/) can be used to restrict what type of traffic a workload can send or receive.\n\n### Proxy compromise - Sidecars\n\nIn this scenario, a sidecar proxy is compromised.\nBecause the sidecar and application reside in the same trust domain, this is functionally equivalent to the [Workload compromise](#workload-compromise).\n\n### Proxy compromise - Waypoint\n\nIn this scenario, a [waypoint proxy](#gateways-and-waypoints) is compromised.\nWhile waypoints do not have any privileges for a hacker to exploit, they do serve (potentially) many different services and workloads.\nA compromised waypoint will receive all traffic for these, which it can view, modify, or drop.\n\nIstio offers the flexibility of [configuring the granularity of a waypoint deployment](\/docs\/ambient\/usage\/waypoint\/#useawaypoint).\nUsers may consider deploying more isolated waypoints if they require stronger isolation.\n\nBecause waypoints run with a distinct identity from the applications they serve, a compromised waypoint does not imply the user's applications can be 
impersonated.\n\n### Proxy compromise - Ztunnel\n\nIn this scenario, a [ztunnel](#ztunnel) proxy is compromised.\n\nA compromised ztunnel gives the attacker control of the networking of the node.\n\nZtunnel has access to private key material for each application running on its node.\nA compromised ztunnel could have these exfiltrated and used elsewhere.\nHowever, lateral movement to identities beyond co-located workloads is not possible; each ztunnel is only authorized to access certificates for workloads running on its node, scoping the blast radius of a compromised ztunnel.\n\n### Node compromise\n\nIn this scenario, the Kubernetes Node is compromised.\nBoth [Kubernetes](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/node\/) and Istio are designed to limit the blast radius of a single node compromise, such that\nthe compromise of a single node does not lead to a [cluster-wide compromise](#cluster-api-server-compromise).\n\nHowever, the attacker does have complete control over any workloads running on that node.\nFor instance, it can compromise any co-located [waypoints](#proxy-compromise---waypoint), the local [ztunnel](#proxy-compromise---ztunnel), any [sidecars](#proxy-compromise---sidecars), any co-located [Istiod instances](#istiod-compromise), etc.\n\n### Cluster (API Server) compromise\n\nA compromise of the Kubernetes API Server effectively means the entire cluster and mesh are compromised.\nUnlike most other attack vectors, there isn't much Istio can do to control the blast radius of such an attack.\nA compromised API Server gives a hacker complete control over the cluster, including actions such as running `kubectl exec` on arbitrary pods,\nremoving any Istio `AuthorizationPolicies`, or even uninstalling Istio entirely.\n\n### Istiod compromise\n\nA compromise of Istiod generally leads to the same result as an [API Server compromise](#cluster-api-server-compromise).\nIstiod is a highly privileged component that should be strongly 
protected.\nFollowing the [security best practices](\/docs\/ops\/best-practices\/security) is crucial to maintaining a secure cluster.","site":"istio","answers_cleaned":"    title  Security Model description  Describes Istio s security model  weight  10 owner  istio wg security maintainers test  n a      This document aims to describe the security posture of Istio s various components  and how possible attacks can impact the system      Components  Istio comes with a variety of optional components that will be covered here  For a high level overview  see  Istio Architecture   docs ops deployment architecture    Note that Istio deployments are highly flexible  below  we will primarily assume the worst case scenarios       Istiod  Istiod serves as the core control plane component of Istio  often serving the role of the  XDS serving component   docs concepts traffic management   as well as the mesh  mTLS Certificate Authority   docs concepts security     Istiod is considered a highly privileged component  similar to that of the Kubernetes API server itself    It has high Kubernetes RBAC privileges  typically including  Secret  read access and webhook write access    When acting as the CA  it can provision arbitrary certificates    When acting as the XDS control plane  it can program proxies to perform arbitrary behavior   As such  the security of the cluster is tightly coupled to the security of Istiod  Following  Kubernetes security best practices  https   kubernetes io docs concepts security   around Istiod access is paramount       Istio CNI plugin  Istio can optionally be deployed with the  Istio CNI Plugin  DaemonSet    docs setup additional setup cni    This  DaemonSet  is responsible for setting up networking rules in Istio to ensure traffic is transparently redirected as needed  This is an alternative to the  istio init  container discussed  below   sidecar proxies    Because the CNI  DaemonSet  modifies networking rules on the node  it requires an elevated  
securityContext   However  unlike  Istiod   istiod   this is a   node local   privilege  The implications of this are discussed  below   node compromise    Because this consolidates the elevated privileges required to set up networking into a single pod  rather than  every  pod  this option is generally recommended       Sidecar Proxies  Istio may  optionally   docs overview dataplane modes   deploy a sidecar proxy next to an application   The sidecar proxy needs the network to be programmed to direct all traffic through the proxy  This can be done with the  Istio CNI plugin   istio cni plugin  or by deploying an  initContainer    istio init   on the pod  this is done automatically if the CNI plugin is not deployed   The  istio init  container requires  NET ADMIN  and  NET RAW  capabilities  However  these capabilities are only present during the initialization   the primary sidecar container is completely unprivileged   Additionally  the sidecar proxy does not require any associated Kubernetes RBAC privileges at all   Each sidecar proxy is authorized to request a certificate for the associated Pod Service Account       Gateways and Waypoints  Gateways and Waypoints act as standalone proxy deployments  Unlike  sidecars   sidecar proxies   they do not require any networking modifications  and thus don t require any privilege   These components run with their own service accounts  distinct from application identities       Ztunnel  Ztunnel acts as a node level proxy  This task requires the  NET ADMIN    SYS ADMIN   and  NET RAW  capabilities  Like the  Istio CNI Plugin   istio cni plugin   these are   node local   privileges only  The Ztunnel does not have any associated Kubernetes RBAC privileges   Ztunnel is authorized to request certificates for any Service Accounts of pods running on the same node  Similar to  kubelet  https   kubernetes io docs reference access authn authz node    this explicitly does not allow requesting arbitrary certificates  This  again  
ensures these privileges are   node local   only      Traffic Capture Properties  When a pod is enrolled in the mesh  all incoming TCP traffic will be redirected to the proxy  This includes both mTLS HBONE traffic and plaintext traffic  Any applicable  policies   docs tasks security authorization   for the workload will be enforced before forwarding the traffic to the workload   However  Istio does not currently guarantee that  outgoing  traffic is redirected to the proxy  See  traffic capture limitations   docs ops best practices security  understand traffic capture limitations   As such  care must be taken to follow the  securing egress traffic   docs ops best practices security  securing egress traffic  steps if outbound policies are required      Mutual TLS Properties   Mutual TLS   docs concepts security  mutual tls authentication  provides the basis for much of Istio s security posture  The sections below explain the properties mutual TLS provides for Istio s security posture       Certificate Authority  Istio comes out of the box with its own Certificate Authority   By default  the CA allows authenticating clients based on either of the options below    A Kubernetes JWT token  with an audience of  istio ca   verified with a Kubernetes  TokenReview   This is the default method in Kubernetes Pods    An existing mutual TLS certificate    Custom JWT tokens  verified using OIDC  requires configuration    The CA will only issue certificates that are requested for identities that a client is authenticated for   Istio can also integrate with a variety of third party CAs  please refer to any of their security documentation for more information on how they behave       Client mTLS    In sidecar mode  the client sidecar will  automatically use TLS   docs ops configuration traffic management tls configuration  auto mtls  when connecting to a service that is detected to support mTLS  This can also be  explicitly configured   docs ops configuration traffic management tls 
configuration  sidecars   Note that this automatic detection relies on Istio associating the traffic to a Service   Unsupported traffic types   docs ops configuration traffic management traffic routing  unmatched traffic  or  configuration scoping   docs ops configuration mesh configuration scoping   can prevent this   When  connecting to a backend   docs concepts security  secure naming   the set of allowed identities is computed  at the Service level  based on the union of all backends identities     In ambient mode  Istio will automatically use mTLS when connecting to any backend that supports mTLS  and verify the identity of the destination matches the identity the workload is expected to be running as   These properties differ from sidecar mode in that they are properties of individual workloads  rather than of the service  This enables more fine grained authentication checks  as well as supporting a wider variety of workloads         Server mTLS  By default  Istio will accept mTLS and non mTLS traffic  often called  permissive mode    Users can opt in to strict enforcement by writing  PeerAuthentication  or  AuthorizationPolicy  rules requiring mTLS   When mTLS connections are established  the peer certificate is verified  Additionally  the peer identity is verified to be within the same trust domain  To verify only specific identities are allowed  an  AuthorizationPolicy  can be used      Compromise types explored  Based on the above overview  we will consider the impact on the cluster if various parts of the system are compromised  In the real world  there are a variety of different variables around any security attack     How easy it is to execute   What prior privileges are required   How often it can be exploited   What the impact is  total remote execution  denial of service  etc    In this document  we will primarily consider the worst case scenario  a compromised component means an attacker has complete remote code execution capabilities       Workload 
compromise  In this scenario  an application workload  pod  is compromised   A pod   may  have access  https   kubernetes io docs tasks configure pod container configure service account  opt out of api credential automounting  to its service account token  If so  a workload compromise can move laterally from a single pod to compromising the entire service account     In the sidecar model  the proxy is co located with the pod  and runs within the same trust boundary  A compromised application can tamper with the proxy through the admin API or other surfaces  including exfiltration of private key material  allowing another agent to impersonate the workload  It should be assumed that a compromised workload also includes a compromise of the sidecar proxy   Given this  a compromised workload may    Send arbitrary traffic  with or without mutual TLS    These may bypass any proxy configuration  or even the proxy entirely    Note that Istio does not offer egress based authorization policies  so there is no egress authorization policy bypass occurring    Accept traffic that was already destined to the application  It may bypass policies that were configured in the sidecar proxy   The key takeaway here is that while the compromised workload may behave maliciously  this does not give them the ability to bypass policies in  other  workloads     In ambient mode  the node proxy is not co located within the pod  and runs in another trust boundary as part of an independent pod   A compromised application may send arbitrary traffic  However  they do not have control over the node proxy  which will chose how to handle incoming and outbound traffic   Additionally  as the pod itself doesn t have access to a service account token to request a mutual TLS certificate  lateral movement possibilities are reduced     Istio offers a variety of features that can limit the impact of such a compromise     Observability   docs tasks observability   features can be used to identify the attack     
- [Policies](/docs/tasks/security/authorization/) can be used to restrict what type of traffic a workload can send or receive.

### Proxy compromise - Sidecars

In this scenario, a sidecar proxy is compromised. Because the sidecar and application reside in the same trust domain, this is functionally equivalent to the [workload compromise](#workload-compromise) above.

### Proxy compromise - Waypoint

In this scenario, a [waypoint proxy](#gateways-and-waypoints) is compromised. While waypoints do not have any privileges for an attacker to exploit, they do serve (potentially) many different services and workloads. A compromised waypoint will receive all traffic for these, which it can view, modify, or drop.

Istio offers the flexibility of [configuring the granularity of a waypoint deployment](/docs/ambient/usage/waypoint/#useawaypoint). Users may consider deploying more isolated waypoints if they require stronger isolation.

Because waypoints run with a distinct identity from the applications they serve, a compromised waypoint does not imply the user's applications can be impersonated.

### Proxy compromise - Ztunnel

In this scenario, a [ztunnel](#ztunnel) proxy is compromised. A compromised ztunnel gives the attacker control of the networking of the node.

Ztunnel has access to private key material for each application running on its node. A compromised ztunnel could have these exfiltrated and used elsewhere. However, lateral movement to identities beyond co-located workloads is not possible: each ztunnel is only authorized to access certificates for workloads running on its node, scoping the blast radius of a compromised ztunnel.

### Node compromise

In this scenario, the Kubernetes node is compromised. Both [Kubernetes](https://kubernetes.io/docs/reference/access-authn-authz/node/) and Istio are designed to limit the blast radius of a single node compromise, such that the compromise of a single node does not lead to a [cluster-wide compromise](#cluster-api-server-compromise).

However, the attacker does have complete control over any workloads running on that node. For instance, it can compromise any co-located [waypoints](#proxy-compromise---waypoint), the local [ztunnel](#proxy-compromise---ztunnel), any [sidecars](#proxy-compromise---sidecars), any co-located [Istiod instances](#istiod-compromise), etc.

### Cluster (API server) compromise

A compromise of the Kubernetes API server effectively means the entire cluster and mesh are compromised. Unlike most other attack vectors, there isn't much Istio can do to control the blast radius of such an attack. A compromised API server gives an attacker complete control over the cluster, including actions such as running `kubectl exec` on arbitrary pods, removing any Istio `AuthorizationPolicies`, or even uninstalling Istio entirely.

### Istiod compromise

A compromise of Istiod generally leads to the same result as an [API server compromise](#cluster-api-server-compromise). Istiod is a highly privileged component that should be strongly protected. Following the [security best practices](/docs/ops/best-practices/security/) is crucial to maintaining a secure cluster."}
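The opt-in strict enforcement described above can be sketched with a `PeerAuthentication` plus an `AuthorizationPolicy`. This is an illustrative fragment, not from the original document: the `foo` namespace and `curl` service account are assumed names.

```yaml
# Reject all plaintext (non-mTLS) traffic to workloads in the namespace.
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: strict-mtls
  namespace: foo            # illustrative namespace
spec:
  mtls:
    mode: STRICT
---
# Additionally, allow only one specific client identity.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-curl-only
  namespace: foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        # illustrative service account identity
        principals: ["cluster.local/ns/foo/sa/curl"]
```

With `STRICT` mode the proxy rejects plaintext connections, and the `AuthorizationPolicy` narrows the accepted peer identities beyond the default same-trust-domain check.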
{"questions":"istio help ops traffic management troubleshooting owner istio wg networking maintainers aliases weight 10 docs ops troubleshooting network issues title Traffic Management Problems forceinlinetoc true help ops troubleshooting network issues Techniques to address common Istio traffic management and network problems","answers":"---\ntitle: Traffic Management Problems\ndescription: Techniques to address common Istio traffic management and network problems.\nforce_inline_toc: true\nweight: 10\naliases:\n  - \/help\/ops\/traffic-management\/troubleshooting\n  - \/help\/ops\/troubleshooting\/network-issues\n  - \/docs\/ops\/troubleshooting\/network-issues\nowner: istio\/wg-networking-maintainers\ntest: n\/a\n---\n\n## Requests are rejected by Envoy\n\nRequests may be rejected for various reasons. The best way to understand why requests are being rejected is\nby inspecting Envoy's access logs. By default, access logs are output to the standard output of the container.\nRun the following command to see the log:\n\n\n$ kubectl logs PODNAME -c istio-proxy -n NAMESPACE\n\n\nIn the default access log format, Envoy response flags are located after the response code,\nif you are using a custom log format, make sure to include `%RESPONSE_FLAGS%`.\n\nRefer to the [Envoy response flags](https:\/\/www.envoyproxy.io\/docs\/envoy\/latest\/configuration\/observability\/access_log\/usage#config-access-log-format-response-flags)\nfor details of response flags.\n\nCommon response flags are:\n\n- `NR`: No route configured, check your `DestinationRule` or `VirtualService`.\n- `UO`: Upstream overflow with circuit breaking, check your circuit breaker configuration in `DestinationRule`.\n- `UF`: Failed to connect to upstream, if you're using Istio authentication, check for a\n[mutual TLS configuration conflict](#503-errors-after-setting-destination-rule).\n\n## Route rules don't seem to affect traffic flow\n\nWith the current Envoy sidecar implementation, up to 100 requests may be 
required for weighted\nversion distribution to be observed.\n\nIf route rules are working perfectly for the [Bookinfo](\/docs\/examples\/bookinfo\/) sample,\nbut similar version routing rules have no effect on your own application, it may be that\nyour Kubernetes services need to be changed slightly.\nKubernetes services must adhere to certain restrictions in order to take advantage of\nIstio's L7 routing features.\nRefer to the [Requirements for Pods and Services](\/docs\/ops\/deployment\/application-requirements\/)\nfor details.\n\nAnother potential issue is that the route rules may simply be slow to take effect.\nThe Istio implementation on Kubernetes utilizes an eventually consistent\nalgorithm to ensure all Envoy sidecars have the correct configuration\nincluding all route rules. A configuration change will take some time\nto propagate to all the sidecars.  With large deployments the\npropagation will take longer and there may be a lag time on the\norder of seconds.\n\n## 503 errors after setting destination rule\n\n\nYou should only see this error if you disabled [automatic mutual TLS](\/docs\/tasks\/security\/authentication\/authn-policy\/#auto-mutual-tls) during install.\n\n\nIf requests to a service immediately start generating HTTP 503 errors after you applied a `DestinationRule`\nand the errors continue until you remove or revert the `DestinationRule`, then the `DestinationRule` is probably\ncausing a TLS conflict for the service.\n\nFor example, if you configure mutual TLS in the cluster globally, the `DestinationRule` must include the following `trafficPolicy`:\n\n\ntrafficPolicy:\n  tls:\n    mode: ISTIO_MUTUAL\n\n\nOtherwise, the mode defaults to `DISABLE` causing client proxy sidecars to make plain HTTP requests\ninstead of TLS encrypted requests. 
Thus, the requests conflict with the server proxy because the server proxy expects\nencrypted requests.\n\nWhenever you apply a `DestinationRule`, ensure the `trafficPolicy` TLS mode matches the global server configuration.\n\n## Route rules have no effect on ingress gateway requests\n\nLet's assume you are using an ingress `Gateway` and corresponding `VirtualService` to access an internal service.\nFor example, your `VirtualService` looks something like this:\n\n\napiVersion: networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: myapp\nspec:\n  hosts:\n  - \"myapp.com\" # or maybe \"*\" if you are testing without DNS using the ingress-gateway IP (e.g., http:\/\/1.2.3.4\/hello)\n  gateways:\n  - myapp-gateway\n  http:\n  - match:\n    - uri:\n        prefix: \/hello\n    route:\n    - destination:\n        host: helloworld.default.svc.cluster.local\n  - match:\n    ...\n\n\nYou also have a `VirtualService` which routes traffic for the helloworld service to a particular subset:\n\n\napiVersion: networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: helloworld\nspec:\n  hosts:\n  - helloworld.default.svc.cluster.local\n  http:\n  - route:\n    - destination:\n        host: helloworld.default.svc.cluster.local\n        subset: v1\n\n\nIn this situation you will notice that requests to the helloworld service via the ingress gateway will\nnot be directed to subset v1 but instead will continue to use default round-robin routing.\n\nThe ingress requests are using the gateway host (e.g., `myapp.com`)\nwhich will activate the rules in the myapp `VirtualService` that routes to any endpoint of the helloworld service.\nOnly internal requests with the host `helloworld.default.svc.cluster.local` will use the\nhelloworld `VirtualService` which directs traffic exclusively to subset v1.\n\nTo control the traffic from the gateway, you need to also include the subset rule in the myapp `VirtualService`:\n\n\napiVersion: networking.istio.io\/v1\nkind: 
VirtualService\nmetadata:\n  name: myapp\nspec:\n  hosts:\n  - \"myapp.com\" # or maybe \"*\" if you are testing without DNS using the ingress-gateway IP (e.g., http:\/\/1.2.3.4\/hello)\n  gateways:\n  - myapp-gateway\n  http:\n  - match:\n    - uri:\n        prefix: \/hello\n    route:\n    - destination:\n        host: helloworld.default.svc.cluster.local\n        subset: v1\n  - match:\n    ...\n\n\nAlternatively, you can combine both `VirtualServices` into one unit if possible:\n\n\napiVersion: networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: myapp\nspec:\n  hosts:\n  - myapp.com # cannot use \"*\" here since this is being combined with the mesh services\n  - helloworld.default.svc.cluster.local\n  gateways:\n  - mesh # applies internally as well as externally\n  - myapp-gateway\n  http:\n  - match:\n    - uri:\n        prefix: \/hello\n      gateways:\n      - myapp-gateway #restricts this rule to apply only to ingress gateway\n    route:\n    - destination:\n        host: helloworld.default.svc.cluster.local\n        subset: v1\n  - match:\n    - gateways:\n      - mesh # applies to all services inside the mesh\n    route:\n    - destination:\n        host: helloworld.default.svc.cluster.local\n        subset: v1\n\n\n## Envoy is crashing under load\n\nCheck your `ulimit -a`. Many systems have a 1024 open file descriptor limit by default which will cause Envoy to assert and crash with:\n\n\n[2017-05-17 03:00:52.735][14236][critical][assert] assert failure: fd_ != -1: external\/envoy\/source\/common\/network\/connection_impl.cc:58\n\n\nMake sure to raise your ulimit. Example: `ulimit -n 16384`\n\n## Envoy won't connect to my HTTP\/1.0 service\n\nEnvoy requires `HTTP\/1.1` or `HTTP\/2` traffic for upstream services. 
For example, when using [NGINX](https:\/\/www.nginx.com\/) for serving traffic behind Envoy, you\nwill need to set the [proxy_http_version](https:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_http_version) directive in your NGINX configuration to be \"1.1\", since the NGINX default is 1.0.\n\nExample configuration:\n\n\nupstream http_backend {\n    server 127.0.0.1:8080;\n\n    keepalive 16;\n}\n\nserver {\n    ...\n\n    location \/http\/ {\n        proxy_pass http:\/\/http_backend;\n        proxy_http_version 1.1;\n        proxy_set_header Connection \"\";\n        ...\n    }\n}\n\n\n## 503 error while accessing headless services\n\nAssume Istio is installed with the following configuration:\n\n- `mTLS mode` set to `STRICT` within the mesh\n- `meshConfig.outboundTrafficPolicy.mode` set to `ALLOW_ANY`\n\nConsider `nginx` is deployed as a `StatefulSet` in the default namespace and a corresponding `Headless Service` is defined as shown below:\n\n\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx\n  labels:\n    app: nginx\nspec:\n  ports:\n  - port: 80\n    name: http-web  # Explicitly defining an http port\n  clusterIP: None   # Creates a Headless Service\n  selector:\n    app: nginx\n---\napiVersion: apps\/v1\nkind: StatefulSet\nmetadata:\n  name: web\nspec:\n  selector:\n    matchLabels:\n      app: nginx\n  serviceName: \"nginx\"\n  replicas: 3\n  template:\n    metadata:\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - name: nginx\n        image: registry.k8s.io\/nginx-slim:0.8\n        ports:\n        - containerPort: 80\n          name: web\n\n\nThe port name `http-web` in the Service definition explicitly specifies the http protocol for that port.\n\nLet us assume we have a [curl](\/samples\/curl) pod `Deployment` as well in the default namespace.\nWhen `nginx` is accessed from this `curl` pod using its Pod IP (this is one of the common ways to access a headless service), the request goes via the 
`PassthroughCluster` to the server-side, but the sidecar proxy on the server-side fails to find the route entry to `nginx` and fails with `HTTP 503 UC`.\n\n\n$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')\n$ kubectl exec -it $SOURCE_POD -c curl -- curl 10.1.1.171 -s -o \/dev\/null -w \"%{http_code}\"\n  503\n\n\n`10.1.1.171` is the Pod IP of one of the replicas of `nginx` and the service is accessed on `containerPort` 80.\n\nHere are some of the ways to avoid this 503 error:\n\n1. Specify the correct Host header:\n\n    The Host header in the curl request above will be the Pod IP by default. Specifying the Host header as `nginx.default` in our request to `nginx` successfully returns `HTTP 200 OK`.\n\n    \n    $ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')\n    $ kubectl exec -it $SOURCE_POD -c curl -- curl -H \"Host: nginx.default\" 10.1.1.171 -s -o \/dev\/null -w \"%{http_code}\"\n      200\n    \n\n1. Set port name to `tcp` or `tcp-web` or `tcp-<custom_name>`:\n\n    Here the protocol is explicitly specified as `tcp`. In this case, only the `TCP Proxy` network filter on the sidecar proxy is used both on the client-side and server-side. HTTP Connection Manager is not used at all and therefore, any kind of header is not expected in the request.\n\n    A request to `nginx` with or without explicitly setting the Host header successfully returns `HTTP 200 OK`.\n\n    This is useful in certain scenarios where a client may not be able to include header information in the request.\n\n    \n    $ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')\n    $ kubectl exec -it $SOURCE_POD -c curl -- curl 10.1.1.171 -s -o \/dev\/null -w \"%{http_code}\"\n      200\n    \n\n    \n    $ kubectl exec -it $SOURCE_POD -c curl -- curl -H \"Host: nginx.default\" 10.1.1.171 -s -o \/dev\/null -w \"%{http_code}\"\n      200\n    \n\n1. 
Use domain name instead of Pod IP:\n\n    A specific instance of a headless service can also be accessed using just the domain name.\n\n    \n    $ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')\n    $ kubectl exec -it $SOURCE_POD -c curl -- curl web-0.nginx.default -s -o \/dev\/null -w \"%{http_code}\"\n      200\n    \n\n    Here `web-0` is the pod name of one of the 3 replicas of `nginx`.\n\nRefer to this [traffic routing](\/docs\/ops\/configuration\/traffic-management\/traffic-routing\/) page for some additional information on headless services and traffic routing behavior for different protocols.\n\n## TLS configuration mistakes\n\nMany traffic management problems\nare caused by incorrect [TLS configuration](\/docs\/ops\/configuration\/traffic-management\/tls-configuration\/).\nThe following sections describe some of the most common misconfigurations.\n\n### Sending HTTPS to an HTTP port\n\nIf your application sends an HTTPS request to a service declared to be HTTP,\nthe Envoy sidecar will attempt to parse the request as HTTP while forwarding the request,\nwhich will fail because the HTTP is unexpectedly encrypted.\n\n\napiVersion: networking.istio.io\/v1\nkind: ServiceEntry\nmetadata:\n  name: httpbin\nspec:\n  hosts:\n  - httpbin.org\n  ports:\n  - number: 443\n    name: http\n    protocol: HTTP\n  resolution: DNS\n\n\nAlthough the above configuration may be correct if you are intentionally sending plaintext on port 443 (e.g., `curl http:\/\/httpbin.org:443`),\ngenerally port 443 is dedicated for HTTPS traffic.\n\nSending an HTTPS request like `curl https:\/\/httpbin.org`, which defaults to port 443, will result in an error like\n`curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number`.\nThe access logs may also show an error like `400 DPE`.\n\nTo fix this, you should change the port protocol to HTTPS:\n\n\nspec:\n  ports:\n  - number: 443\n    name: https\n    protocol: HTTPS\n\n\n### Gateway to 
virtual service TLS mismatch {#gateway-mismatch}\n\nThere are two common TLS mismatches that can occur when binding a virtual service to a gateway.\n\n1. The gateway terminates TLS while the virtual service configures TLS routing.\n1. The gateway does TLS passthrough while the virtual service configures HTTP routing.\n\n#### Gateway with TLS termination\n\n\napiVersion: networking.istio.io\/v1\nkind: Gateway\nmetadata:\n  name: gateway\n  namespace: istio-system\nspec:\n  selector:\n    istio: ingressgateway\n  servers:\n  - port:\n      number: 443\n      name: https\n      protocol: HTTPS\n    hosts:\n      - \"*\"\n    tls:\n      mode: SIMPLE\n      credentialName: sds-credential\n---\napiVersion: networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: httpbin\nspec:\n  hosts:\n  - \"*.example.com\"\n  gateways:\n  - istio-system\/gateway\n  tls:\n  - match:\n    - sniHosts:\n      - \"*.example.com\"\n    route:\n    - destination:\n        host: httpbin.org\n\n\nIn this example, the gateway is terminating TLS (the `tls.mode` configuration of the gateway is `SIMPLE`,\nnot `PASSTHROUGH`) while the virtual service is using TLS-based routing. 
Evaluating routing rules\noccurs after the gateway terminates TLS, so the TLS rule will have no effect because the\nrequest is then HTTP rather than HTTPS.\n\nWith this misconfiguration, you will end up getting 404 responses because the requests will be\nsent to HTTP routing but there are no HTTP routes configured.\nYou can confirm this using the `istioctl proxy-config routes` command.\n\nTo fix this problem, you should switch the virtual service to specify `http` routing, instead of `tls`:\n\n\nspec:\n  ...\n  http:\n  - match:\n    - headers:\n        \":authority\":\n          regex: \"*.example.com\"\n\n\n#### Gateway with TLS passthrough\n\n\napiVersion: networking.istio.io\/v1\nkind: Gateway\nmetadata:\n  name: gateway\nspec:\n  selector:\n    istio: ingressgateway\n  servers:\n  - hosts:\n    - \"*\"\n    port:\n      name: https\n      number: 443\n      protocol: HTTPS\n    tls:\n      mode: PASSTHROUGH\n---\napiVersion: networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: virtual-service\nspec:\n  gateways:\n  - gateway\n  hosts:\n  - httpbin.example.com\n  http:\n  - route:\n    - destination:\n        host: httpbin.org\n\n\nIn this configuration, the virtual service is attempting to match HTTP traffic against TLS traffic passed through the gateway.\nThis will result in the virtual service configuration having no effect. 
You can observe that the HTTP route is not applied using\nthe `istioctl proxy-config listener` and `istioctl proxy-config route` commands.\n\nTo fix this, you should switch the virtual service to configure `tls` routing:\n\n\nspec:\n  tls:\n  - match:\n    - sniHosts: [\"httpbin.example.com\"]\n    route:\n    - destination:\n        host: httpbin.org\n\n\nAlternatively, you could terminate TLS, rather than passing it through, by switching the `tls` configuration in the gateway:\n\n\nspec:\n  ...\n    tls:\n      credentialName: sds-credential\n      mode: SIMPLE\n\n\n### Double TLS (TLS origination for a TLS request) {#double-tls}\n\nWhen configuring Istio to perform TLS origination, you need to make sure\nthat the application sends plaintext requests to the sidecar, which will then originate the TLS.\n\nThe following `DestinationRule` originates TLS for requests to the `httpbin.org` service,\nbut the corresponding `ServiceEntry` defines the protocol as HTTPS on port 443.\n\n\napiVersion: networking.istio.io\/v1\nkind: ServiceEntry\nmetadata:\n  name: httpbin\nspec:\n  hosts:\n  - httpbin.org\n  ports:\n  - number: 443\n    name: https\n    protocol: HTTPS\n  resolution: DNS\n---\napiVersion: networking.istio.io\/v1\nkind: DestinationRule\nmetadata:\n  name: originate-tls\nspec:\n  host: httpbin.org\n  trafficPolicy:\n    tls:\n      mode: SIMPLE\n\n\nWith this configuration, the sidecar expects the application to send TLS traffic on port 443\n(e.g., `curl https:\/\/httpbin.org`), but it will also perform TLS origination before forwarding requests.\nThis will cause the requests to be double encrypted.\n\nFor example, sending a request like `curl https:\/\/httpbin.org` will result in an error:\n`(35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number`.\n\nYou can fix this example by changing the port protocol in the `ServiceEntry` to HTTP:\n\n\nspec:\n  hosts:\n  - httpbin.org\n  ports:\n  - number: 443\n    name: http\n    protocol: HTTP\n\n\nNote 
that with this configuration your application will need to send plaintext requests to port 443,\nlike `curl http:\/\/httpbin.org:443`, because TLS origination does not change the port.\nHowever, starting in Istio 1.8, you can expose HTTP port 80 to the application (e.g., `curl http:\/\/httpbin.org`)\nand then redirect requests to `targetPort` 443 for the TLS origination:\n\n\nspec:\n  hosts:\n  - httpbin.org\n  ports:\n  - number: 80\n    name: http\n    protocol: HTTP\n    targetPort: 443\n\n\n### 404 errors occur when multiple gateways configured with same TLS certificate\n\nConfiguring more than one gateway using the same TLS certificate will cause browsers\nthat leverage [HTTP\/2 connection reuse](https:\/\/httpwg.org\/specs\/rfc7540.html#reuse)\n(i.e., most browsers) to produce 404 errors when accessing a second host after a\nconnection to another host has already been established.\n\nFor example, let's say you have 2 hosts that share the same TLS certificate like this:\n\n- Wildcard certificate `*.test.com` installed in `istio-ingressgateway`\n- `Gateway` configuration `gw1` with host `service1.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate\n- `Gateway` configuration `gw2` with host `service2.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate\n- `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw1`\n- `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw2`\n\nSince both gateways are served by the same workload (i.e., selector `istio: ingressgateway`) requests to both services\n(`service1.test.com` and `service2.test.com`) will resolve to the same IP. 
If `service1.test.com` is accessed first, it\nwill return the wildcard certificate (`*.test.com`) indicating that connections to `service2.test.com` can use the same certificate.\nBrowsers like Chrome and Firefox will consequently reuse the existing connection for requests to `service2.test.com`.\nSince the gateway (`gw1`) has no route for `service2.test.com`, it will then return a 404 (Not Found) response.\n\nYou can avoid this problem by configuring a single wildcard `Gateway`, instead of two (`gw1` and `gw2`).\nThen, simply bind both `VirtualServices` to it like this:\n\n- `Gateway` configuration `gw` with host `*.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate\n- `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw`\n- `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw`\n\n### Configuring SNI routing when not sending SNI\n\nAn HTTPS `Gateway` that specifies the `hosts` field will perform an [SNI](https:\/\/en.wikipedia.org\/wiki\/Server_Name_Indication) match on incoming requests.\nFor example, the following configuration would only allow requests that match `*.example.com` in the SNI:\n\n\nservers:\n- port:\n    number: 443\n    name: https\n    protocol: HTTPS\n  hosts:\n  - \"*.example.com\"\n\n\nThis may cause certain requests to fail.\n\nFor example, if you do not have DNS set up and are instead directly setting the host header, such as `curl 1.2.3.4 -H \"Host: app.example.com\"`, no SNI will be set, causing the request to fail.\nInstead, you can set up DNS or use the `--resolve` flag of `curl`. 
See the [Secure Gateways](\/docs\/tasks\/traffic-management\/ingress\/secure-ingress\/) task for more information.\n\nAnother common issue is load balancers in front of Istio.\nMost cloud load balancers will not forward the SNI, so if you are terminating TLS in your cloud load balancer you may need to do one of the following:\n\n- Configure the cloud load balancer to instead passthrough the TLS connection\n- Disable SNI matching in the `Gateway` by setting the hosts field to `*`\n\nA common symptom of this is for the load balancer health checks to succeed while real traffic fails.\n\n## Unchanged Envoy filter configuration suddenly stops working\n\nAn `EnvoyFilter` configuration that specifies an insert position relative to another filter can be very\nfragile because, by default, the order of evaluation is based on the creation time of the filters.\nConsider a filter with the following specification:\n\n\nspec:\n  configPatches:\n  - applyTo: NETWORK_FILTER\n    match:\n      context: SIDECAR_OUTBOUND\n      listener:\n        portNumber: 443\n        filterChain:\n          filter:\n            name: istio.stats\n    patch:\n      operation: INSERT_BEFORE\n      value:\n        ...\n\n\nTo work properly, this filter configuration depends on the `istio.stats` filter having an older creation time\nthan it. Otherwise, the `INSERT_BEFORE` operation will be silently ignored. There will be nothing in the\nerror log to indicate that this filter has not been added to the chain.\n\nThis is particularly problematic when matching filters, like `istio.stats`, that are version\nspecific (i.e., that include the `proxyVersion` field in their match criteria). Such filters may be removed\nor replaced by newer ones when upgrading Istio. 
As a result, an `EnvoyFilter` like the one above may initially\nbe working perfectly but after upgrading Istio to a newer version it will no longer be included in the network\nfilter chain of the sidecars.\n\nTo avoid this issue, you can either change the operation to one that does not depend on the presence of\nanother filter (e.g., `INSERT_FIRST`), or set an explicit priority in the `EnvoyFilter` to override the\ndefault creation time-based ordering. For example, adding `priority: 10` to the above filter will ensure\nthat it is processed after the `istio.stats` filter which has a default priority of 0.\n\n## Virtual service with fault injection and retry\/timeout policies not working as expected\n\nCurrently, Istio does not support configuring fault injections and retry or timeout policies on the\nsame `VirtualService`. Consider the following configuration:\n\n\napiVersion: networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: helloworld\nspec:\n  hosts:\n    - \"*\"\n  gateways:\n  - helloworld-gateway\n  http:\n  - match:\n    - uri:\n        exact: \/hello\n    fault:\n      abort:\n        httpStatus: 500\n        percentage:\n          value: 50\n    retries:\n      attempts: 5\n      retryOn: 5xx\n    route:\n    - destination:\n        host: helloworld\n        port:\n          number: 5000\n\n\nYou would expect that given the configured five retry attempts, the user would almost never see any\nerrors when calling the `helloworld` service. However since both fault and retries are configured on\nthe same `VirtualService`, the retry configuration does not take effect, resulting in a 50% failure\nrate. 
To work around this issue, you may remove the fault config from your `VirtualService` and\ninject the fault to the upstream Envoy proxy using `EnvoyFilter` instead:\n\n\napiVersion: networking.istio.io\/v1alpha3\nkind: EnvoyFilter\nmetadata:\n  name: hello-world-filter\nspec:\n  workloadSelector:\n    labels:\n      app: helloworld\n  configPatches:\n  - applyTo: HTTP_FILTER\n    match:\n      context: SIDECAR_INBOUND # will match outbound listeners in all sidecars\n      listener:\n        filterChain:\n          filter:\n            name: \"envoy.filters.network.http_connection_manager\"\n    patch:\n      operation: INSERT_BEFORE\n      value:\n        name: envoy.fault\n        typed_config:\n          \"@type\": \"type.googleapis.com\/envoy.extensions.filters.http.fault.v3.HTTPFault\"\n          abort:\n            http_status: 500\n            percentage:\n              numerator: 50\n              denominator: HUNDRED\n\n\nThis works because this way the retry policy is configured for the client proxy while the fault\ninjection is configured for the upstream proxy.","site":"istio","answers_cleaned":"    title  Traffic Management Problems description  Techniques to address common Istio traffic management and network problems  force inline toc  true weight  10 aliases       help ops traffic management troubleshooting      help ops troubleshooting network issues      docs ops troubleshooting network issues owner  istio wg networking maintainers test  n a         Requests are rejected by Envoy  Requests may be rejected for various reasons  The best way to understand why requests are being rejected is by inspecting Envoy s access logs  By default  access logs are output to the standard output of the container  Run the following command to see the log      kubectl logs PODNAME  c istio proxy  n NAMESPACE   In the default access log format  Envoy response flags are located after the response code  if you are using a custom log format  make sure to include   
RESPONSE FLAGS     Refer to the  Envoy response flags  https   www envoyproxy io docs envoy latest configuration observability access log usage config access log format response flags  for details of response flags   Common response flags are      NR   No route configured  check your  DestinationRule  or  VirtualService      UO   Upstream overflow with circuit breaking  check your circuit breaker configuration in  DestinationRule      UF   Failed to connect to upstream  if you re using Istio authentication  check for a  mutual TLS configuration conflict   503 errors after setting destination rule       Route rules don t seem to affect traffic flow  With the current Envoy sidecar implementation  up to 100 requests may be required for weighted version distribution to be observed   If route rules are working perfectly for the  Bookinfo   docs examples bookinfo   sample  but similar version routing rules have no effect on your own application  it may be that your Kubernetes services need to be changed slightly  Kubernetes services must adhere to certain restrictions in order to take advantage of Istio s L7 routing features  Refer to the  Requirements for Pods and Services   docs ops deployment application requirements   for details   Another potential issue is that the route rules may simply be slow to take effect  The Istio implementation on Kubernetes utilizes an eventually consistent algorithm to ensure all Envoy sidecars have the correct configuration including all route rules  A configuration change will take some time to propagate to all the sidecars   With large deployments the propagation will take longer and there may be a lag time on the order of seconds      503 errors after setting destination rule   You should only see this error if you disabled  automatic mutual TLS   docs tasks security authentication authn policy  auto mutual tls  during install    If requests to a service immediately start generating HTTP 503 errors after you applied a  DestinationRule 
## 503 errors after setting destination rule

You should only see this error if you disabled [automatic mutual TLS](/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls) during install.

If requests to a service immediately start generating HTTP 503 errors after you applied a `DestinationRule` and the errors continue until you remove or revert the `DestinationRule`, then the `DestinationRule` is probably causing a TLS conflict for the service.

For example, if you configure mutual TLS in the cluster globally, the `DestinationRule` must include the following `trafficPolicy`:

```yaml
trafficPolicy:
  tls:
    mode: ISTIO_MUTUAL
```

Otherwise, the mode defaults to `DISABLE` causing client proxy sidecars to make plain HTTP requests instead of TLS encrypted requests. Thus, the requests conflict with the server proxy because the server proxy expects encrypted requests.

Whenever you apply a `DestinationRule`, ensure the `trafficPolicy` TLS mode matches the global server configuration.

## Route rules have no effect on ingress gateway requests

Let's assume you are using an ingress `Gateway` and corresponding `VirtualService` to access an internal service. For example, your `VirtualService` looks something like this:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.com # or maybe "*" if you are testing without DNS using the ingress gateway IP (e.g., http://1.2.3.4/hello)
  gateways:
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: helloworld.default.svc.cluster.local
  - match:
    ...
```

You also have a `VirtualService` which routes traffic for the helloworld service to a particular subset:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: helloworld.default.svc.cluster.local
        subset: v1
```

In this situation you will notice that requests to the helloworld service via the ingress gateway will not be directed to subset v1 but instead will continue to use default round-robin routing.

The ingress requests are using the gateway host (e.g., `myapp.com`) which will activate the rules in the myapp
`VirtualService` that routes to any endpoint of the helloworld service. Only internal requests with the host `helloworld.default.svc.cluster.local` will use the helloworld `VirtualService` which directs traffic exclusively to subset v1.

To control the traffic from the gateway, you need to also include the subset rule in the myapp `VirtualService`:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.com # or maybe "*" if you are testing without DNS using the ingress gateway IP (e.g., http://1.2.3.4/hello)
  gateways:
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: helloworld.default.svc.cluster.local
        subset: v1
  - match:
    ...
```

Alternatively, you can combine both `VirtualServices` into one unit if possible:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.com # cannot use "*" here since this is being combined with the mesh services
  - helloworld.default.svc.cluster.local
  gateways:
  - mesh # applies internally as well as externally
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /hello
      gateways:
      - myapp-gateway # restricts this rule to apply only to ingress gateway
    route:
    - destination:
        host: helloworld.default.svc.cluster.local
        subset: v1
  - match:
    - gateways:
      - mesh # applies to all services inside the mesh
    route:
    - destination:
        host: helloworld.default.svc.cluster.local
        subset: v1
```

## Envoy is crashing under load

Check your `ulimit -a`. Many systems have a 1024 open file descriptor limit by default which will cause Envoy to assert and crash with:

```plain
[2017-05-17 03:00:52.735][14236][critical][assert] assert failure: fd_ != -1: external/envoy/source/common/network/connection_impl.cc:58
```

Make sure to raise your ulimit. Example: `ulimit -n 16384`
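Because limits are inherited per-process, it is worth confirming what limit Envoy's environment actually sees before tuning. A minimal sketch (bash; the 16384 threshold mirrors the example above):

```shell
#!/usr/bin/env bash
# Print the current soft limit on open file descriptors and warn if it is
# below the value suggested above. Run this in the same context that will
# launch Envoy, since ulimit values apply to the shell and its children.
current=$(ulimit -n)
echo "open file descriptor limit: ${current}"
if [ "${current}" != "unlimited" ] && [ "${current}" -lt 16384 ]; then
  echo "warning: limit below 16384; raise it with 'ulimit -n 16384'"
fi
```

Note that for containerized deployments the limit is usually set by the container runtime or systemd unit rather than an interactive shell.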
## Envoy won't connect to my HTTP/1.0 service

Envoy requires `HTTP/1.1` or `HTTP/2` traffic for upstream services. For example, when using [NGINX](https://www.nginx.com/) for serving traffic behind Envoy, you will need to set the [proxy_http_version](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version) directive in your NGINX configuration to be "1.1", since the NGINX default is 1.0.

Example configuration:

```plain
upstream http_backend {
    server 127.0.0.1:8080;

    keepalive 16;
}

server {
    ...

    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
```

## 503 error while accessing headless services

Assume Istio is installed with the following configuration:

- `mTLS mode` set to `STRICT` within the mesh
- `meshConfig.outboundTrafficPolicy.mode` set to `ALLOW_ANY`

Consider `nginx` is deployed as a `StatefulSet` in the default namespace and a corresponding `Headless Service` is defined as shown below:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: http-web  # Explicitly defining an http port
  clusterIP: None   # Creates a Headless Service
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
```

The port name `http-web` in the Service definition explicitly specifies the http protocol for that port.

Let us assume we have a [curl](/samples/curl) pod `Deployment` as well in the default namespace. When `nginx` is accessed from this `curl` pod using its Pod IP (this is one of the common ways to access a headless service), the request goes via the `PassthroughCluster` to the
server side, but the sidecar proxy on the server side fails to find the route entry to `nginx` and fails with `HTTP 503 UC`.

```bash
$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath={.items..metadata.name})
$ kubectl exec -it $SOURCE_POD -c curl -- curl 10.1.1.171 -s -o /dev/null -w "%{http_code}"
503
```

`10.1.1.171` is the Pod IP of one of the replicas of `nginx` and the service is accessed on `containerPort` 80.

Here are some of the ways to avoid this 503 error:

1. Specify the correct Host header:

    The Host header in the curl request above will be the Pod IP by default. Specifying the Host header as `nginx.default` in our request to `nginx` successfully returns `HTTP 200 OK`.

    ```bash
    $ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath={.items..metadata.name})
    $ kubectl exec -it $SOURCE_POD -c curl -- curl -H "Host: nginx.default" 10.1.1.171 -s -o /dev/null -w "%{http_code}"
    200
    ```

1. Set port name to `tcp`, `tcp-web`, or `tcp-<custom_name>`:

    Here the protocol is explicitly specified as `tcp`. In this case, only the `TCP Proxy` network filter on the sidecar proxy is used both on the client side and server side. HTTP Connection Manager is not used at all and therefore any kind of header is not expected in the request.

    A request to `nginx` with or without explicitly setting the Host header successfully returns `HTTP 200 OK`.

    This is useful in certain scenarios where a client may not be able to include header information in the request.

    ```bash
    $ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath={.items..metadata.name})
    $ kubectl exec -it $SOURCE_POD -c curl -- curl 10.1.1.171 -s -o /dev/null -w "%{http_code}"
    200
    $ kubectl exec -it $SOURCE_POD -c curl -- curl -H "Host: nginx.default" 10.1.1.171 -s -o /dev/null -w "%{http_code}"
    200
    ```

1. Use domain name instead of Pod IP:

    A specific instance of a headless service can also be accessed using just the domain
name.

    ```bash
    $ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath={.items..metadata.name})
    $ kubectl exec -it $SOURCE_POD -c curl -- curl web-0.nginx.default -s -o /dev/null -w "%{http_code}"
    200
    ```

    Here `web-0` is the pod name of one of the 3 replicas of `nginx`.

Refer to this [traffic routing](/docs/ops/configuration/traffic-management/traffic-routing/) page for some additional information on headless services and traffic routing behavior for different protocols.

## TLS configuration mistakes

Many traffic management problems are caused by incorrect [TLS configuration](/docs/ops/configuration/traffic-management/tls-configuration/). The following sections describe some of the most common misconfigurations.

### Sending HTTPS to an HTTP port

If your application sends an HTTPS request to a service declared to be HTTP, the Envoy sidecar will attempt to parse the request as HTTP while forwarding the request, which will fail because the HTTP is unexpectedly encrypted.

```yaml
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 443
    name: http
    protocol: HTTP
  resolution: DNS
```

Although the above configuration may be correct if you are intentionally sending plaintext on port 443 (e.g., `curl http://httpbin.org:443`), generally port 443 is dedicated for HTTPS traffic.

Sending an HTTPS request like `curl https://httpbin.org`, which defaults to port 443, will result in an error like `curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number`. The access logs may also show an error like `400 DPE`.

To fix this, you should change the port protocol to HTTPS:

```yaml
spec:
  ports:
  - number: 443
    name: https
    protocol: HTTPS
```

### Gateway to virtual service TLS mismatch {#gateway-mismatch}

There are two common TLS mismatches that can occur when binding a virtual service to a gateway.

1. The gateway terminates TLS while the virtual service
configures TLS routing.
1. The gateway does TLS passthrough while the virtual service configures HTTP routing.

#### Gateway with TLS termination

```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: sds-credential
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "*.example.com"
  gateways:
  - istio-system/gateway
  tls:
  - match:
    - sniHosts:
      - "*.example.com"
    route:
    - destination:
        host: httpbin.org
```

In this example, the gateway is terminating TLS (the `tls.mode` configuration of the gateway is `SIMPLE`, not `PASSTHROUGH`) while the virtual service is using TLS based routing. Evaluating routing rules occurs after the gateway terminates TLS, so the TLS rule will have no effect because the request is then HTTP rather than HTTPS.

With this misconfiguration, you will end up getting 404 responses because the requests will be sent to HTTP routing but there are no HTTP routes configured. You can confirm this using the `istioctl proxy-config routes` command.

To fix this problem, you should switch the virtual service to specify `http` routing, instead of `tls`:

```yaml
spec:
  ...
  http:
  - match:
    - headers:
        ":authority":
          regex: "*.example.com"
```

#### Gateway with TLS passthrough

```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*"
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: virtual-service
spec:
  gateways:
  - gateway
  hosts:
  - httpbin.example.com
  http:
  - route:
    - destination:
        host: httpbin.org
```

In this configuration, the virtual service is attempting to match HTTP traffic against TLS traffic passed through the gateway. This will result in the virtual service configuration having no effect. You can observe that the HTTP route is not applied using the `istioctl proxy-config listener` and `istioctl proxy-config route` commands.

To fix this, you should switch the virtual service to configure `tls` routing:

```yaml
spec:
  tls:
  - match:
    - sniHosts:
      - httpbin.example.com
    route:
    - destination:
        host: httpbin.org
```

Alternatively, you could terminate TLS, rather than passing it through, by switching the `tls` configuration in the gateway:

```yaml
spec:
  ...
  tls:
    credentialName: sds-credential
    mode: SIMPLE
```

### Double TLS (TLS origination for a TLS request) {#double-tls}

When configuring Istio to perform TLS origination, you need to make sure that the application sends plaintext requests to the sidecar, which will then originate the TLS.

The following `DestinationRule` originates TLS for requests to the `httpbin.org` service, but the corresponding `ServiceEntry` defines the protocol as HTTPS on port 443.

```yaml
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: originate-tls
spec:
  host: httpbin.org
  trafficPolicy:
    tls:
      mode: SIMPLE
```

With this configuration, the sidecar expects the application to send TLS traffic on port 443 (e.g., `curl https://httpbin.org`), but it will also perform TLS origination before forwarding requests. This will cause the requests to be double encrypted.

For example, sending a request like `curl https://httpbin.org` will result in an error: `(35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number`.

You can fix this example by changing the port
protocol in the `ServiceEntry` to HTTP:

```yaml
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 443
    name: http
    protocol: HTTP
```

Note that with this configuration your application will need to send plaintext requests to port 443, like `curl http://httpbin.org:443`, because TLS origination does not change the port. However, starting in Istio 1.8, you can expose HTTP port 80 to the application (e.g., `curl http://httpbin.org`) and then redirect requests to `targetPort` 443 for the TLS origination:

```yaml
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
    targetPort: 443
```

## 404 errors occur when multiple gateways configured with same TLS certificate

Configuring more than one gateway using the same TLS certificate will cause browsers that leverage [HTTP/2 connection reuse](https://httpwg.org/specs/rfc7540.html#reuse) (i.e., most browsers) to produce 404 errors when accessing a second host after a connection to another host has already been established.

For example, let's say you have 2 hosts that share the same TLS certificate like this:

- Wildcard certificate `*.test.com` installed in `istio-ingressgateway`
- `Gateway` configuration `gw1` with host `service1.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
- `Gateway` configuration `gw2` with host `service2.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
- `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw1`
- `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw2`

Since both gateways are served by the same workload (i.e., selector `istio: ingressgateway`), requests to both services (`service1.test.com` and `service2.test.com`) will resolve to the same IP. If `service1.test.com` is accessed first, it will return the wildcard certificate (`*.test.com`) indicating that connections to `service2.test.com` can use
the same certificate. Browsers like Chrome and Firefox will consequently reuse the existing connection for requests to `service2.test.com`. Since the gateway (`gw1`) has no route for `service2.test.com`, it will then return a 404 (Not Found) response.

You can avoid this problem by configuring a single wildcard `Gateway`, instead of two (`gw1` and `gw2`). Then, simply bind both `VirtualServices` to it like this:

- `Gateway` configuration `gw` with host `*.test.com`, selector `istio: ingressgateway`, and TLS using gateway's mounted (wildcard) certificate
- `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw`
- `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw`

## Configuring SNI routing when not sending SNI

An HTTPS `Gateway` that specifies the `hosts` field will perform an [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication) match on incoming requests. For example, the following configuration would only allow requests that match `*.example.com` in the SNI:

```yaml
servers:
- port:
    number: 443
    name: https
    protocol: HTTPS
  hosts:
  - "*.example.com"
```

This may cause certain requests to fail.

For example, if you do not have DNS set up and are instead directly setting the host header, such as `curl 1.2.3.4 -H "Host: app.example.com"`, no SNI will be set, causing the request to fail. Instead, you can set up DNS or use the `--resolve` flag of `curl`. See the [Secure Gateways](/docs/tasks/traffic-management/ingress/secure-ingress/) task for more information.

Another common issue is load balancers in front of Istio. Most cloud load balancers will not forward the SNI, so if you are terminating TLS in your cloud load balancer you may need to do one of the following:

- Configure the cloud load balancer to instead passthrough the TLS connection
- Disable SNI matching in the `Gateway` by setting the hosts field to `*`

A common symptom of this is for the load balancer health checks to succeed while real traffic fails.
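To illustrate the difference, the following sketch contrasts the two `curl` invocations; the gateway IP `203.0.113.10` and host `app.example.com` are hypothetical placeholders:

```shell
# No SNI: the URL names an IP address, so curl sends no server_name extension
# in the TLS handshake and an SNI-matching Gateway rejects the connection.
# (Hypothetical addresses; the Host header alone does not set SNI.)
curl -vk https://203.0.113.10/ -H "Host: app.example.com"

# SNI sent: --resolve maps the hostname to the gateway IP for this request,
# while the handshake still carries app.example.com as SNI.
curl -v --resolve app.example.com:443:203.0.113.10 https://app.example.com/
```

The `--resolve` form is also convenient for testing a gateway before DNS records exist.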
## Unchanged Envoy filter configuration suddenly stops working

An `EnvoyFilter` configuration that specifies an insert position relative to another filter can be very fragile because, by default, the order of evaluation is based on the creation time of the filters. Consider a filter with the following specification:

```yaml
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_OUTBOUND
      listener:
        portNumber: 443
        filterChain:
          filter:
            name: istio.stats
    patch:
      operation: INSERT_BEFORE
      value:
        ...
```

To work properly, this filter configuration depends on the `istio.stats` filter having an older creation time than it. Otherwise, the `INSERT_BEFORE` operation will be silently ignored. There will be nothing in the error log to indicate that this filter has not been added to the chain.

This is particularly problematic when matching filters, like `istio.stats`, that are version specific (i.e., that include the `proxyVersion` field in their match criteria). Such filters may be removed or replaced by newer ones when upgrading Istio. As a result, an `EnvoyFilter` like the one above may initially be working perfectly but after upgrading Istio to a newer version it will no longer be included in the network filter chain of the sidecars.

To avoid this issue, you can either change the operation to one that does not depend on the presence of another filter (e.g., `INSERT_FIRST`), or set an explicit priority in the `EnvoyFilter` to override the default creation time based ordering. For example, adding `priority: 10` to the above filter will ensure that it is processed after the `istio.stats` filter which has a default priority of 0.
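The priority fix described above can be sketched as follows; `priority` sits at the top level of the `EnvoyFilter` spec, and the patch body is elided just as in the example above:

```yaml
spec:
  priority: 10  # evaluated after default-priority (0) filters such as istio.stats
  configPatches:
  - applyTo: NETWORK_FILTER
    ...
```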
## Virtual service with fault injection and retry/timeout policies not working as expected

Currently, Istio does not support configuring fault injections and retry or timeout policies on the same `VirtualService`. Consider the following configuration:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    fault:
      abort:
        httpStatus: 500
        percentage:
          value: 50
    retries:
      attempts: 5
      retryOn: 5xx
    route:
    - destination:
        host: helloworld
        port:
          number: 5000
```

You would expect that given the configured five retry attempts, the user would almost never see any errors when calling the `helloworld` service. However since both fault and retries are configured on the same `VirtualService`, the retry configuration does not take effect, resulting in a 50% failure rate. To work around this issue, you may remove the fault config from your `VirtualService` and inject the fault to the upstream Envoy proxy using `EnvoyFilter` instead:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: hello-world-filter
spec:
  workloadSelector:
    labels:
      app: helloworld
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND # will match inbound listeners in all sidecars
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.fault
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.http.fault.v3.HTTPFault"
          abort:
            http_status: 500
            percentage:
              numerator: 50
              denominator: HUNDRED
```

This works because this way the retry policy is configured for the client proxy while the fault injection is configured for the upstream proxy.
{"questions":"istio aliases owner istio wg user experience maintainers weight 40 test n a forceinlinetoc true docs ops troubleshooting injection title Sidecar Injection Problems Resolve common problems with Istio s use of Kubernetes webhooks for automatic sidecar injection","answers":"---\ntitle: Sidecar Injection Problems\ndescription: Resolve common problems with Istio's use of Kubernetes webhooks for automatic sidecar injection.\nforce_inline_toc: true\nweight: 40\naliases:\n  - \/docs\/ops\/troubleshooting\/injection\nowner: istio\/wg-user-experience-maintainers\ntest: n\/a\n---\n\n## The result of sidecar injection was not what I expected\n\nThis includes an injected sidecar when it wasn't expected and a lack\nof injected sidecar when it was.\n\n1. Ensure your pod is not in the `kube-system` or `kube-public` namespace.\n   Automatic sidecar injection will be ignored for pods in these namespaces.\n\n1. Ensure your pod does not have `hostNetwork: true` in its pod spec.\n   Automatic sidecar injection will be ignored for pods that are on the host network.\n\n    The sidecar model assumes that the iptables changes required for Envoy to intercept\n    traffic are within the pod. For pods on the host network this assumption is violated,\n    and this can lead to routing failures at the host level.\n\n1. 
Check the webhook's `namespaceSelector` to determine whether the\n   webhook is scoped to opt-in or opt-out for the target namespace.\n\n    The `namespaceSelector` for opt-in will look like the following:\n\n    \n    $ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep \"namespaceSelector:\" -A5\n      namespaceSelector:\n        matchLabels:\n          istio-injection: enabled\n      rules:\n      - apiGroups:\n        - \"\"\n    \n\n    The injection webhook will be invoked for pods created\n    in namespaces with the `istio-injection=enabled` label.\n\n    \n    $ kubectl get namespace -L istio-injection\n    NAME           STATUS    AGE       ISTIO-INJECTION\n    default        Active    18d       enabled\n    istio-system   Active    3d\n    kube-public    Active    18d\n    kube-system    Active    18d\n    \n\n    The `namespaceSelector` for opt-out will look like the following:\n\n    \n    $ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep \"namespaceSelector:\" -A5\n      namespaceSelector:\n        matchExpressions:\n        - key: istio-injection\n          operator: NotIn\n          values:\n          - disabled\n      rules:\n      - apiGroups:\n        - \"\"\n    \n\n    The injection webhook will be invoked for pods created in namespaces\n    without the `istio-injection=disabled` label.\n\n    \n    $ kubectl get namespace -L istio-injection\n    NAME           STATUS    AGE       ISTIO-INJECTION\n    default        Active    18d\n    istio-system   Active    3d        disabled\n    kube-public    Active    18d       disabled\n    kube-system    Active    18d       disabled\n    \n\n    Verify the application pod's namespace is labeled properly and (re) label accordingly, e.g.\n\n    \n    $ kubectl label namespace istio-system istio-injection=disabled --overwrite\n    \n\n    (repeat for all namespaces in which the injection webhook should be invoked for new pods)\n\n    \n    $ kubectl 
label namespace default istio-injection=enabled --overwrite\n    \n\n1. Check default policy\n\n    Check the default injection policy in the `istio-sidecar-injector configmap`.\n\n    \n    $ kubectl -n istio-system get configmap istio-sidecar-injector -o jsonpath='{.data.config}' | grep policy:\n    policy: enabled\n    \n\n    Allowed policy values are `disabled` and `enabled`. The default policy\n    only applies if the webhook\u2019s `namespaceSelector` matches the target\n    namespace. Unrecognized policy causes injection to be disabled completely.\n\n1. Check the per-pod override annotation\n\n    The default policy can be overridden with the\n    `sidecar.istio.io\/inject` label in the _pod template spec\u2019s metadata_.\n    The deployment\u2019s metadata is ignored. Label value\n    of `true` forces the sidecar to be injected while a value of\n    `false` forces the sidecar to _not_ be injected.\n\n    The following label overrides whatever the default `policy` was\n    to force the sidecar to be injected:\n\n    \n    $ kubectl get deployment curl -o yaml | grep \"sidecar.istio.io\/inject:\" -B4\n    template:\n      metadata:\n        labels:\n          app: curl\n          sidecar.istio.io\/inject: \"true\"\n    \n\n## Pods cannot be created at all\n\nRun `kubectl describe -n namespace deployment name` on the failing\npod's deployment. 
Failure to invoke the injection webhook will\ntypically be captured in the event log.\n\n### x509 certificate related errors\n\n\nWarning  FailedCreate  3m (x17 over 8m)  replicaset-controller  Error creating: Internal error occurred: \\\n    failed calling admission webhook \"sidecar-injector.istio.io\": Post https:\/\/istiod.istio-system.svc:443\/inject: \\\n    x509: certificate signed by unknown authority (possibly because of \"crypto\/rsa: verification error\" while trying \\\n    to verify candidate authority certificate \"Kubernetes.cluster.local\")\n\n\n`x509: certificate signed by unknown authority` errors are typically\ncaused by an empty `caBundle` in the webhook configuration.\n\nVerify the `caBundle` in the `mutatingwebhookconfiguration` matches the\n   root certificate mounted in the `istiod` pod.\n\n\n$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | md5sum\n4b95d2ba22ce8971c7c92084da31faf0  -\n$ kubectl -n istio-system get configmap istio-ca-root-cert -o jsonpath='{.data.root-cert\\.pem}' | base64 -w 0 | md5sum\n4b95d2ba22ce8971c7c92084da31faf0  -\n\n\nThe CA certificate should match. If they do not, restart the\nistiod pods.\n\n\n$ kubectl -n istio-system patch deployment istiod \\\n    -p \"{\\\"spec\\\":{\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"date\\\":\\\"`date +'%s'`\\\"}}}}}\"\ndeployment.extensions \"istiod\" patched\n\n\n### Errors in deployment status\n\nWhen automatic sidecar injection is enabled for a pod, and the injection fails for any reason, the pod creation\nwill also fail. In such cases, you can check the deployment status of the pod to identify the error. 
The errors\nwill also appear in the events of the namespace associated with the deployment.\n\nFor example, if the `istiod` control plane pod was not running when you tried to deploy your pod, the events would show the following error:\n\n\n$ kubectl get events -n curl\n...\n23m Normal   SuccessfulCreate replicaset\/curl-9454cc476   Created pod: curl-9454cc476-khp45\n22m Warning  FailedCreate     replicaset\/curl-9454cc476   Error creating: Internal error occurred: failed calling webhook \"namespace.sidecar-injector.istio.io\": failed to call webhook: Post \"https:\/\/istiod.istio-system.svc:443\/inject?timeout=10s\": dial tcp 10.96.44.51:443: connect: connection refused\n\n\n\n$ kubectl -n istio-system get pod -lapp=istiod\nNAME                            READY     STATUS    RESTARTS   AGE\nistiod-7d46d8d9db-jz2mh         1\/1       Running     0         2d\n\n\n\n$ kubectl -n istio-system get endpoints istiod\nNAME           ENDPOINTS                                                  AGE\nistiod   10.244.2.8:15012,10.244.2.8:15010,10.244.2.8:15017 + 1 more...   
3h18m\n\n\nIf the istiod pod or endpoints aren't ready, check the pod logs and status\nfor any indication about why the webhook pod is failing to start and\nserve traffic.\n\n\n$ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o jsonpath='{.items[*].metadata.name}'); do \\\n    kubectl -n istio-system logs ${pod} \\\ndone\n\n\n$ for pod in $(kubectl -n istio-system get pod -l app=istiod -o name); do \\\nkubectl -n istio-system describe ${pod}; \\\ndone\n$\n\n\n## Automatic sidecar injection fails if the Kubernetes API server has proxy settings\n\nWhen the Kubernetes API server includes proxy settings such as:\n\n\nenv:\n  - name: http_proxy\n    value: http:\/\/proxy-wsa.esl.foo.com:80\n  - name: https_proxy\n    value: http:\/\/proxy-wsa.esl.foo.com:80\n  - name: no_proxy\n    value: 127.0.0.1,localhost,dockerhub.foo.com,devhub-docker.foo.com,10.84.100.125,10.84.100.126,10.84.100.127\n\n\nWith these settings, Sidecar injection fails. The only related failure log can be found in `kube-apiserver` log:\n\n\nW0227 21:51:03.156818       1 admission.go:257] Failed calling webhook, failing open sidecar-injector.istio.io: failed calling admission webhook \"sidecar-injector.istio.io\": Post https:\/\/istio-sidecar-injector.istio-system.svc:443\/inject: Service Unavailable\n\n\nMake sure both pod and service CIDRs are not proxied according to `*_proxy` variables.  Check the `kube-apiserver` files and logs to verify the configuration and whether any requests are being proxied.\n\nOne workaround is to remove the proxy settings from the `kube-apiserver` manifest, another workaround is to include `istio-sidecar-injector.istio-system.svc` or `.svc` in the `no_proxy` value. 
Make sure that `kube-apiserver` is restarted after each workaround.\n\nAn [issue](https:\/\/github.com\/kubernetes\/kubeadm\/issues\/666) was filed with Kubernetes related to this and has since been closed.\n[https:\/\/github.com\/kubernetes\/kubernetes\/pull\/58698#discussion_r163879443](https:\/\/github.com\/kubernetes\/kubernetes\/pull\/58698#discussion_r163879443)\n\n## Limitations for using Tcpdump in pods\n\nTcpdump doesn't work in the sidecar pod - the container doesn't run as root. However any other container in the same pod will see all the packets, since the\nnetwork namespace is shared. `iptables` will also see the pod-wide configuration.\n\nCommunication between Envoy and the app happens on 127.0.0.1, and is not encrypted.\n\n## Cluster is not scaled down automatically\n\nDue to the fact that the sidecar container mounts a local storage volume, the\nnode autoscaler is unable to evict nodes with the injected pods. This is\na [known issue](https:\/\/github.com\/kubernetes\/autoscaler\/issues\/3947). The workaround is\nto add a pod annotation `\"cluster-autoscaler.kubernetes.io\/safe-to-evict\":\n\"true\"` to the injected pods.\n\n## Pod or containers start with network issues if istio-proxy is not ready\n\nMany applications execute commands or checks during startup, which require network connectivity. This can cause application containers to hang or restart if the `istio-proxy` sidecar container is not ready.\n\nTo avoid this, set `holdApplicationUntilProxyStarts` to `true`. 
This causes the sidecar injector to inject the sidecar at the start of the pod\u2019s container list, and configures it to block the start of all other containers until the proxy is ready.\n\nThis can be added as a global config option:\n\n\nvalues.global.proxy.holdApplicationUntilProxyStarts: true\n\n\nor as a pod annotation:\n\n\nproxy.istio.io\/config: '{ \"holdApplicationUntilProxyStarts\": true }'\n","site":"istio","answers_cleaned":"    title  Sidecar Injection Problems description  Resolve common problems with Istio s use of Kubernetes webhooks for automatic sidecar injection  force inline toc  true weight  40 aliases       docs ops troubleshooting injection owner  istio wg user experience maintainers test  n a         The result of sidecar injection was not what I expected  This includes an injected sidecar when it wasn t expected and a lack of injected sidecar when it was   1  Ensure your pod is not in the  kube system  or  kube public  namespace     Automatic sidecar injection will be ignored for pods in these namespaces   1  Ensure your pod does not have  hostNetwork  true  in its pod spec     Automatic sidecar injection will be ignored for pods that are on the host network       The sidecar model assumes that the iptables changes required for Envoy to intercept     traffic are within the pod  For pods on the host network this assumption is violated      and this can lead to routing failures at the host level   1  Check the webhook s  namespaceSelector  to determine whether the    webhook is scoped to opt in or opt out for the target namespace       The  namespaceSelector  for opt in will look like the following              kubectl get mutatingwebhookconfiguration istio sidecar injector  o yaml   grep  namespaceSelector    A5       namespaceSelector          matchLabels            istio injection  enabled       rules          apiGroups                         The injection webhook will be invoked for pods created     in namespaces with the  istio 
injection enabled  label              kubectl get namespace  L istio injection     NAME           STATUS    AGE       ISTIO INJECTION     default        Active    18d       enabled     istio system   Active    3d     kube public    Active    18d     kube system    Active    18d           The  namespaceSelector  for opt out will look like the following              kubectl get mutatingwebhookconfiguration istio sidecar injector  o yaml   grep  namespaceSelector    A5       namespaceSelector          matchExpressions            key  istio injection           operator  NotIn           values              disabled       rules          apiGroups                         The injection webhook will be invoked for pods created in namespaces     without the  istio injection disabled  label              kubectl get namespace  L istio injection     NAME           STATUS    AGE       ISTIO INJECTION     default        Active    18d     istio system   Active    3d        disabled     kube public    Active    18d       disabled     kube system    Active    18d       disabled           Verify the application pod s namespace is labeled properly and  re  label accordingly  e g              kubectl label namespace istio system istio injection disabled   overwrite            repeat for all namespaces in which the injection webhook should be invoked for new pods              kubectl label namespace default istio injection enabled   overwrite       1  Check default policy      Check the default injection policy in the  istio sidecar injector configmap               kubectl  n istio system get configmap istio sidecar injector  o jsonpath    data config     grep policy      policy  enabled           Allowed policy values are  disabled  and  enabled   The default policy     only applies if the webhook s  namespaceSelector  matches the target     namespace  Unrecognized policy causes injection to be disabled completely   1  Check the per pod override annotation      The default policy can 
be overridden with the      sidecar istio io inject  label in the  pod template spec s metadata       The deployment s metadata is ignored  Label value     of  true  forces the sidecar to be injected while a value of      false  forces the sidecar to  not  be injected       The following label overrides whatever the default  policy  was     to force the sidecar to be injected              kubectl get deployment curl  o yaml   grep  sidecar istio io inject    B4     template        metadata          labels            app  curl           sidecar istio io inject   true           Pods cannot be created at all  Run  kubectl describe  n namespace deployment name  on the failing pod s deployment  Failure to invoke the injection webhook will typically be captured in the event log       x509 certificate related errors   Warning  FailedCreate  3m  x17 over 8m   replicaset controller  Error creating  Internal error occurred        failed calling admission webhook  sidecar injector istio io   Post https   istiod istio system svc 443 inject        x509  certificate signed by unknown authority  possibly because of  crypto rsa  verification error  while trying       to verify candidate authority certificate  Kubernetes cluster local      x509  certificate signed by unknown authority  errors are typically caused by an empty  caBundle  in the webhook configuration   Verify the  caBundle  in the  mutatingwebhookconfiguration  matches the    root certificate mounted in the  istiod  pod      kubectl get mutatingwebhookconfiguration istio sidecar injector  o yaml  o jsonpath    webhooks 0  clientConfig caBundle     md5sum 4b95d2ba22ce8971c7c92084da31faf0      kubectl  n istio system get configmap istio ca root cert  o jsonpath    data root cert  pem     base64  w 0   md5sum 4b95d2ba22ce8971c7c92084da31faf0      The CA certificate should match  If they do not  restart the istiod pods      kubectl  n istio system patch deployment istiod        p     spec      template      metadata      
labels      date      date    s           deployment extensions  istiod  patched       Errors in deployment status  When automatic sidecar injection is enabled for a pod  and the injection fails for any reason  the pod creation will also fail  In such cases  you can check the deployment status of the pod to identify the error  The errors will also appear in the events of the namespace associated with the deployment   For example  if the  istiod  control plane pod was not running when you tried to deploy your pod  the events would show the following error      kubectl get events  n curl     23m Normal   SuccessfulCreate replicaset curl 9454cc476   Created pod  curl 9454cc476 khp45 22m Warning  FailedCreate     replicaset curl 9454cc476   Error creating  Internal error occurred  failed calling webhook  namespace sidecar injector istio io   failed to call webhook  Post  https   istiod istio system svc 443 inject timeout 10s   dial tcp 10 96 44 51 443  connect  connection refused      kubectl  n istio system get pod  lapp istiod NAME                            READY     STATUS    RESTARTS   AGE istiod 7d46d8d9db jz2mh         1 1       Running     0         2d      kubectl  n istio system get endpoints istiod NAME           ENDPOINTS                                                  AGE istiod   10 244 2 8 15012 10 244 2 8 15010 10 244 2 8 15017   1 more      3h18m   If the istiod pod or endpoints aren t ready  check the pod logs and status for any indication about why the webhook pod is failing to start and serve traffic      for pod in   kubectl  n istio system get pod  lapp istiod  o jsonpath    items    metadata name     do       kubectl  n istio system logs   pod    done     for pod in   kubectl  n istio system get pod  l app istiod  o name   do   kubectl  n istio system describe   pod     done        Automatic sidecar injection fails if the Kubernetes API server has proxy settings  When the Kubernetes API server includes proxy settings such as    env      name  
http proxy     value  http   proxy wsa esl foo com 80     name  https proxy     value  http   proxy wsa esl foo com 80     name  no proxy     value  127 0 0 1 localhost dockerhub foo com devhub docker foo com 10 84 100 125 10 84 100 126 10 84 100 127   With these settings  Sidecar injection fails  The only related failure log can be found in  kube apiserver  log    W0227 21 51 03 156818       1 admission go 257  Failed calling webhook  failing open sidecar injector istio io  failed calling admission webhook  sidecar injector istio io   Post https   istio sidecar injector istio system svc 443 inject  Service Unavailable   Make sure both pod and service CIDRs are not proxied according to    proxy  variables   Check the  kube apiserver  files and logs to verify the configuration and whether any requests are being proxied   One workaround is to remove the proxy settings from the  kube apiserver  manifest  another workaround is to include  istio sidecar injector istio system svc  or   svc  in the  no proxy  value  Make sure that  kube apiserver  is restarted after each workaround   An  issue  https   github com kubernetes kubeadm issues 666  was filed with Kubernetes related to this and has since been closed   https   github com kubernetes kubernetes pull 58698 discussion r163879443  https   github com kubernetes kubernetes pull 58698 discussion r163879443      Limitations for using Tcpdump in pods  Tcpdump doesn t work in the sidecar pod   the container doesn t run as root  However any other container in the same pod will see all the packets  since the network namespace is shared   iptables  will also see the pod wide configuration   Communication between Envoy and the app happens on 127 0 0 1  and is not encrypted      Cluster is not scaled down automatically  Due to the fact that the sidecar container mounts a local storage volume  the node autoscaler is unable to evict nodes with the injected pods  This is a  known issue  https   github com kubernetes autoscaler 
issues 3947   The workaround is to add a pod annotation   cluster autoscaler kubernetes io safe to evict    true   to the injected pods      Pod or containers start with network issues if istio proxy is not ready  Many applications execute commands or checks during startup  which require network connectivity  This can cause application containers to hang or restart if the  istio proxy  sidecar container is not ready   To avoid this  set  holdApplicationUntilProxyStarts  to  true   This causes the sidecar injector to inject the sidecar at the start of the pod s container list  and configures it to block the start of all other containers until the proxy is ready   This can be added as a global config option    values global proxy holdApplicationUntilProxyStarts  true   or as a pod annotation    proxy istio io config      holdApplicationUntilProxyStarts   true    "}
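The pod annotation form of `holdApplicationUntilProxyStarts` described above sits in a workload's pod template; this is a minimal sketch, and the deployment name is hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp   # hypothetical workload name
spec:
  template:
    metadata:
      annotations:
        # Block the other containers in the pod until the istio-proxy sidecar is ready
        proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
```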
{"questions":"istio weight 20 keywords security citadel Techniques to address common Istio authentication authorization and general security related problems title Security Problems aliases help ops security repairing citadel docs ops troubleshooting repairing citadel help ops troubleshooting repairing citadel forceinlinetoc true","answers":"---\ntitle: Security Problems\ndescription: Techniques to address common Istio authentication, authorization, and general security-related problems.\nforce_inline_toc: true\nweight: 20\nkeywords: [security,citadel]\naliases:\n    - \/help\/ops\/security\/repairing-citadel\n    - \/help\/ops\/troubleshooting\/repairing-citadel\n    - \/docs\/ops\/troubleshooting\/repairing-citadel\nowner: istio\/wg-security-maintainers\ntest: n\/a\n---\n\n## End-user authentication fails\n\nWith Istio, you can enable authentication for end users through [request authentication policies](\/docs\/tasks\/security\/authentication\/authn-policy\/#end-user-authentication). Follow these steps to troubleshoot the policy specification.\n\n1. If `jwksUri` isn\u2019t set, make sure the JWT issuer is of url format and `url + \/.well-known\/openid-configuration` can be opened in browser; for example, if the JWT issuer is `https:\/\/accounts.google.com`, make sure `https:\/\/accounts.google.com\/.well-known\/openid-configuration` is a valid url and can be opened in a browser.\n\n    \n    apiVersion: security.istio.io\/v1\n    kind: RequestAuthentication\n    metadata:\n      name: \"example-3\"\n    spec:\n      selector:\n        matchLabels:\n          app: httpbin\n      jwtRules:\n      - issuer: \"testing@secure.istio.io\"\n        jwksUri: \"\/security\/tools\/jwt\/samples\/jwks.json\"\n    \n\n1. If the JWT token is placed in the Authorization header in http requests, make sure the JWT token is valid (not expired, etc). The fields in a JWT token can be decoded by using online JWT parsing tools, e.g., [jwt.io](https:\/\/jwt.io\/).\n\n1. 
Verify the Envoy proxy configuration of the target workload using `istioctl proxy-config` command.\n\n    With the example policy above applied, use the following command to check the `listener` configuration on the inbound port `80`. You should see `envoy.filters.http.jwt_authn` filter with settings matching the issuer and JWKS as specified in the policy.\n\n    \n    $ POD=$(kubectl get pod -l app=httpbin -n foo -o jsonpath={.items..metadata.name})\n    $ istioctl proxy-config listener ${POD} -n foo --port 80 --type HTTP -o json\n    <redacted>\n                                {\n                                    \"name\": \"envoy.filters.http.jwt_authn\",\n                                    \"typedConfig\": {\n                                        \"@type\": \"type.googleapis.com\/envoy.config.filter.http.jwt_authn.v2alpha.JwtAuthentication\",\n                                        \"providers\": {\n                                            \"origins-0\": {\n                                                \"issuer\": \"testing@secure.istio.io\",\n                                                \"localJwks\": {\n                                                    \"inlineString\": \"*redacted*\"\n                                                },\n                                                \"payloadInMetadata\": \"testing@secure.istio.io\"\n                                            }\n                                        },\n                                        \"rules\": [\n                                            {\n                                                \"match\": {\n                                                    \"prefix\": \"\/\"\n                                                },\n                                                \"requires\": {\n                                                    \"requiresAny\": {\n                                                        \"requirements\": [\n                             
                               {\n                                                                \"providerName\": \"origins-0\"\n                                                            },\n                                                            {\n                                                                \"allowMissing\": {}\n                                                            }\n                                                        ]\n                                                    }\n                                                }\n                                            }\n                                        ]\n                                    }\n                                },\n    <redacted>\n    \n\n## Authorization is too restrictive or permissive\n\n### Make sure there are no typos in the policy YAML file\n\nOne common mistake is specifying multiple items unintentionally in the YAML. Take the following policy as an example:\n\n\napiVersion: security.istio.io\/v1\nkind: AuthorizationPolicy\nmetadata:\n  name: example\n  namespace: foo\nspec:\n  action: ALLOW\n  rules:\n  - to:\n    - operation:\n        paths:\n        - \/foo\n  - from:\n    - source:\n        namespaces:\n        - foo\n\n\nYou may expect the policy to allow requests if the path is `\/foo` **and** the source namespace is `foo`.\nHowever, the policy actually allows requests if the path is `\/foo` **or** the source namespace is `foo`, which is\nmore permissive.\n\nIn the YAML syntax, the `-` in front of the `from:` means it's a new element in the list. This creates 2 rules in the\npolicy instead of 1. 
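A corrected sketch of the same policy, with the extra `-` removed so that the path and the source namespace belong to one rule:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: example
  namespace: foo
spec:
  action: ALLOW
  rules:
  - from:       # one rule: source namespace AND path must both match
    - source:
        namespaces:
        - foo
    to:
    - operation:
        paths:
        - /foo
```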
In authorization policy, multiple rules have the semantics of `OR`.\n\nTo fix the problem, remove the extra `-` so that the policy has only 1 rule that allows requests if the\npath is `\/foo` **and** the source namespace is `foo`, which is more restrictive.\n\n### Make sure you are NOT using HTTP-only fields on TCP ports\n\nThe authorization policy will be more restrictive because HTTP-only fields (e.g. `host`, `path`, `headers`, JWT, etc.)\ndo not exist in raw TCP connections.\n\nIn the case of an `ALLOW` policy, these fields are never matched. In the case of `DENY` and `CUSTOM` actions, these fields\nare always considered matched. The final effect is a more restrictive policy that could cause unexpected denies.\n\nCheck the Kubernetes service definition to verify that the port is [properly named with the correct protocol](\/docs\/ops\/configuration\/traffic-management\/protocol-selection\/#explicit-protocol-selection).\nIf you are using HTTP-only fields on the port, make sure the port name has the `http-` prefix.\n\n### Make sure the policy is applied to the correct target\n\nCheck the workload selector and namespace to confirm it's applied to the correct targets. You can determine the\nauthorization policy in effect by running `istioctl x authz check POD-NAME.POD-NAMESPACE`.\n\n### Pay attention to the action specified in the policy\n\n- If not specified, the policy defaults to the `ALLOW` action.\n\n- When a workload has multiple actions (`CUSTOM`, `ALLOW` and `DENY`) applied at the same time, all actions must be\n  satisfied to allow a request. 
In other words, a request is denied if any of the actions denies it, and is allowed only if\n  all actions allow it.\n\n- The `AUDIT` action does not enforce access control and will not deny the request in any case.\n\nRead [authorization implicit enablement](\/docs\/concepts\/security\/#implicit-enablement) for more details on the evaluation order.\n\n## Ensure Istiod accepts the policies\n\nIstiod converts and distributes your authorization policies to the proxies. The following steps help\nyou ensure Istiod is working as expected:\n\n1. Run the following command to enable debug logging in istiod:\n\n    \n    $ istioctl admin log --level authorization:debug\n    \n\n1. Get the Istiod log with the following command:\n\n    \n    You probably need to first delete and then re-apply your authorization policies so that\n    the debug output is generated for these policies.\n    \n\n    \n    $ kubectl logs $(kubectl -n istio-system get pods -l app=istiod -o jsonpath='{.items[0].metadata.name}') -c discovery -n istio-system\n    \n\n1. Check the output and verify there are no errors. 
For example, you might see something similar to the following:\n\n    \n    2021-04-23T20:53:29.507314Z info ads Push debounce stable[31] 1: 100.981865ms since last change, 100.981653ms since last push, full=true\n    2021-04-23T20:53:29.507641Z info ads XDS: Pushing:2021-04-23T20:53:29Z\/23 Services:15 ConnectedEndpoints:2  Version:2021-04-23T20:53:29Z\/23\n    2021-04-23T20:53:29.507911Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details:\n        * found 0 CUSTOM actions\n    2021-04-23T20:53:29.508077Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:\n        * found 0 CUSTOM actions\n    2021-04-23T20:53:29.508128Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details:\n        * found 1 DENY actions, 0 ALLOW actions, 0 AUDIT actions\n        * generated config from rule ns[foo]-policy[deny-path-headers]-rule[0] on HTTP filter chain successfully\n        * built 1 HTTP filters for DENY action\n        * added 1 HTTP filters to filter chain 0\n        * added 1 HTTP filters to filter chain 1\n    2021-04-23T20:53:29.508158Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:\n        * found 0 DENY actions, 0 ALLOW actions, 0 AUDIT actions\n    2021-04-23T20:53:29.509097Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:\n        * found 0 CUSTOM actions\n    2021-04-23T20:53:29.509167Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details:\n        * found 0 DENY actions, 0 ALLOW actions, 0 AUDIT actions\n    2021-04-23T20:53:29.509501Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details:\n        * found 0 CUSTOM actions\n    2021-04-23T20:53:29.509652Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details:\n       
 * found 1 DENY actions, 0 ALLOW actions, 0 AUDIT actions\n        * generated config from rule ns[foo]-policy[deny-path-headers]-rule[0] on HTTP filter chain successfully\n        * built 1 HTTP filters for DENY action\n        * added 1 HTTP filters to filter chain 0\n        * added 1 HTTP filters to filter chain 1\n        * generated config from rule ns[foo]-policy[deny-path-headers]-rule[0] on TCP filter chain successfully\n        * built 1 TCP filters for DENY action\n        * added 1 TCP filters to filter chain 2\n        * added 1 TCP filters to filter chain 3\n        * added 1 TCP filters to filter chain 4\n    2021-04-23T20:53:29.510903Z info ads LDS: PUSH for node:curl-557747455f-6dxbl.foo resources:18 size:85.0kB\n    2021-04-23T20:53:29.511487Z info ads LDS: PUSH for node:httpbin-74fb669cc6-lpscm.foo resources:18 size:86.4kB\n    \n\n    This shows that Istiod generated:\n\n    - An HTTP filter config with policy `ns[foo]-policy[deny-path-headers]-rule[0]` for workload `httpbin-74fb669cc6-lpscm.foo`.\n\n    - A TCP filter config with policy `ns[foo]-policy[deny-path-headers]-rule[0]` for workload `httpbin-74fb669cc6-lpscm.foo`.\n\n## Ensure Istiod distributes policies to proxies correctly\n\nIstiod distributes the authorization policies to proxies. The following steps help you ensure Istiod is working as expected:\n\n\nThe command below assumes you have deployed `httpbin`; if you are not using `httpbin`, replace `\"-l app=httpbin\"` with a selector for your actual pod.\n\n\n1. Run the following command to get the proxy configuration dump for the `httpbin` workload:\n\n    \n    $ kubectl exec $(kubectl get pods -l app=httpbin -o jsonpath='{.items[0].metadata.name}') -c istio-proxy -- pilot-agent request GET config_dump\n    \n\n1. 
Check the log and verify:\n\n    - The log includes an `envoy.filters.http.rbac` filter to enforce the authorization policy on each incoming request.\n    - Istio updates the filter accordingly after you update your authorization policy.\n\n1. The following output means the proxy of `httpbin` has enabled the `envoy.filters.http.rbac` filter with rules that reject\n   any access to the path `\/headers`.\n\n    \n    {\n     \"name\": \"envoy.filters.http.rbac\",\n     \"typed_config\": {\n      \"@type\": \"type.googleapis.com\/envoy.extensions.filters.http.rbac.v3.RBAC\",\n      \"rules\": {\n       \"action\": \"DENY\",\n       \"policies\": {\n        \"ns[foo]-policy[deny-path-headers]-rule[0]\": {\n         \"permissions\": [\n          {\n           \"and_rules\": {\n            \"rules\": [\n             {\n              \"or_rules\": {\n               \"rules\": [\n                {\n                 \"url_path\": {\n                  \"path\": {\n                   \"exact\": \"\/headers\"\n                  }\n                 }\n                }\n               ]\n              }\n             }\n            ]\n           }\n          }\n         ],\n         \"principals\": [\n          {\n           \"and_ids\": {\n            \"ids\": [\n             {\n              \"any\": true\n             }\n            ]\n           }\n          }\n         ]\n        }\n       }\n      },\n      \"shadow_rules_stat_prefix\": \"istio_dry_run_allow_\"\n     }\n    },\n    \n\n## Ensure proxies enforce policies correctly\n\nProxies eventually enforce the authorization policies. The following steps help you ensure the proxy is working as expected:\n\n\nThe command below assumes you have deployed `httpbin`; if you are not using `httpbin`, replace `\"-l app=httpbin\"` with a selector for your actual pod.\n\n\n1. 
Turn on the authorization debug logging in proxy with the following command:\n\n    \n    $ istioctl proxy-config log deploy\/httpbin --level \"rbac:debug\"\n    \n\n1. Verify you see the following output:\n\n    \n    active loggers:\n      ... ...\n      rbac: debug\n      ... ...\n    \n\n1. Send some requests to the `httpbin` workload to generate some logs.\n\n1. Print the proxy logs with the following command:\n\n    \n    $ kubectl logs $(kubectl get pods -l app=httpbin -o jsonpath='{.items[0].metadata.name}') -c istio-proxy\n    \n\n1. Check the output and verify:\n\n    - The output log shows either `enforced allowed` or `enforced denied` depending on whether the request\n      was allowed or denied respectively.\n\n    - Your authorization policy expects the data extracted from the request.\n\n1. The following is an example output for a request at path `\/httpbin`:\n\n    \n    ...\n    2021-04-23T20:43:18.552857Z debug envoy rbac checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:46180, directRemoteIP: 10.44.3.13:46180, remoteIP: 10.44.3.13:46180,localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe:\/\/cluster.local\/ns\/foo\/sa\/curl, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000'\n    ':path', '\/headers'\n    ':method', 'GET'\n    ':scheme', 'http'\n    'user-agent', 'curl\/7.76.1-DEV'\n    'accept', '*\/*'\n    'x-forwarded-proto', 'http'\n    'x-request-id', '672c9166-738c-4865-b541-128259cc65e5'\n    'x-envoy-attempt-count', '1'\n    'x-b3-traceid', '8a124905edf4291a21df326729b264e9'\n    'x-b3-spanid', '21df326729b264e9'\n    'x-b3-sampled', '0'\n    'x-forwarded-client-cert', 'By=spiffe:\/\/cluster.local\/ns\/foo\/sa\/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject=\"\";URI=spiffe:\/\/cluster.local\/ns\/foo\/sa\/curl'\n    , dynamicMetadata: filter_metadata {\n      key: \"istio_authn\"\n      value 
{\n        fields {\n          key: \"request.auth.principal\"\n          value {\n            string_value: \"cluster.local\/ns\/foo\/sa\/curl\"\n          }\n        }\n        fields {\n          key: \"source.namespace\"\n          value {\n            string_value: \"foo\"\n          }\n        }\n        fields {\n          key: \"source.principal\"\n          value {\n            string_value: \"cluster.local\/ns\/foo\/sa\/curl\"\n          }\n        }\n        fields {\n          key: \"source.user\"\n          value {\n            string_value: \"cluster.local\/ns\/foo\/sa\/curl\"\n          }\n        }\n      }\n    }\n\n    2021-04-23T20:43:18.552910Z debug envoy rbac enforced denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]\n    ...\n    \n\n    The log `enforced denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]` means the request is rejected by\n    the policy `ns[foo]-policy[deny-path-headers]-rule[0]`.\n\n1. The following is an example output for authorization policy in the [dry-run mode](\/docs\/tasks\/security\/authorization\/authz-dry-run):\n\n    \n    ...\n    2021-04-23T20:59:11.838468Z debug envoy rbac checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:49826, directRemoteIP: 10.44.3.13:49826, remoteIP: 10.44.3.13:49826,localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe:\/\/cluster.local\/ns\/foo\/sa\/curl, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000'\n    ':path', '\/headers'\n    ':method', 'GET'\n    ':scheme', 'http'\n    'user-agent', 'curl\/7.76.1-DEV'\n    'accept', '*\/*'\n    'x-forwarded-proto', 'http'\n    'x-request-id', 'e7b2fdb0-d2ea-4782-987c-7845939e6313'\n    'x-envoy-attempt-count', '1'\n    'x-b3-traceid', '696607fc4382b50017c1f7017054c751'\n    'x-b3-spanid', '17c1f7017054c751'\n    'x-b3-sampled', '0'\n    'x-forwarded-client-cert', 
'By=spiffe:\/\/cluster.local\/ns\/foo\/sa\/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject=\"\";URI=spiffe:\/\/cluster.local\/ns\/foo\/sa\/curl'\n    , dynamicMetadata: filter_metadata {\n      key: \"istio_authn\"\n      value {\n        fields {\n          key: \"request.auth.principal\"\n          value {\n            string_value: \"cluster.local\/ns\/foo\/sa\/curl\"\n          }\n        }\n        fields {\n          key: \"source.namespace\"\n          value {\n            string_value: \"foo\"\n          }\n        }\n        fields {\n          key: \"source.principal\"\n          value {\n            string_value: \"cluster.local\/ns\/foo\/sa\/curl\"\n          }\n        }\n        fields {\n          key: \"source.user\"\n          value {\n            string_value: \"cluster.local\/ns\/foo\/sa\/curl\"\n          }\n        }\n      }\n    }\n\n    2021-04-23T20:59:11.838529Z debug envoy rbac shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]\n    2021-04-23T20:59:11.838538Z debug envoy rbac no engine, allowed by default\n    ...\n    \n\n    The log `shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]` means the request would be rejected\n    by the **dry-run** policy `ns[foo]-policy[deny-path-headers]-rule[0]`.\n\n    The log `no engine, allowed by default` means the request is actually allowed because the dry-run policy is the\n    only policy on the workload.\n\n## Keys and certificates errors\n\nIf you suspect that some of the keys and\/or certificates used by Istio aren't correct, you can inspect the contents from any pod:\n\n\n$ istioctl proxy-config secret curl-8f795f47d-4s4t7\nRESOURCE NAME     TYPE           STATUS     VALID CERT     SERIAL NUMBER                               NOT AFTER                NOT BEFORE\ndefault           Cert Chain     ACTIVE     true           138092480869518152837211547060273851586     2020-11-11T16:39:48Z     
2020-11-10T16:39:48Z\nROOTCA            CA             ACTIVE     true           288553090258624301170355571152070165215     2030-11-08T16:34:52Z     2020-11-10T16:34:52Z\n\n\nBy passing the `-o json` flag, you can pass the full certificate content to `openssl` to analyze its contents:\n\n\n$ istioctl proxy-config secret curl-8f795f47d-4s4t7 -o json | jq '[.dynamicActiveSecrets[] | select(.name == \"default\")][0].secret.tlsCertificate.certificateChain.inlineBytes' -r | base64 -d | openssl x509 -noout -text\nCertificate:\n    Data:\n        Version: 3 (0x2)\n        Serial Number:\n            99:59:6b:a2:5a:f4:20:f4:03:d7:f0:bc:59:f5:d8:40\n    Signature Algorithm: sha256WithRSAEncryption\n        Issuer: O = k8s.cluster.local\n        Validity\n            Not Before: Jun  4 20:38:20 2018 GMT\n            Not After : Sep  2 20:38:20 2018 GMT\n...\n        X509v3 extensions:\n            X509v3 Key Usage: critical\n                Digital Signature, Key Encipherment\n            X509v3 Extended Key Usage:\n                TLS Web Server Authentication, TLS Web Client Authentication\n            X509v3 Basic Constraints: critical\n                CA:FALSE\n            X509v3 Subject Alternative Name:\n                URI:spiffe:\/\/cluster.local\/ns\/my-ns\/sa\/my-sa\n...\n\n\nMake sure the displayed certificate contains valid information. 
In particular, the `Subject Alternative Name` field should be `URI:spiffe:\/\/cluster.local\/ns\/my-ns\/sa\/my-sa`.\n\n## Mutual TLS errors\n\nIf you suspect problems with mutual TLS, first ensure that istiod is healthy, and\nsecond ensure that [keys and certificates are being delivered](#keys-and-certificates-errors) to sidecars properly.\n\nIf everything appears to be working so far, the next step is to verify that the right [authentication policy](\/docs\/tasks\/security\/authentication\/authn-policy\/)\nis applied and the right destination rules are in place.\n\nIf you suspect the client-side sidecar may send mutual TLS or plaintext traffic incorrectly, check the\n[Grafana Workload dashboard](\/docs\/ops\/integrations\/grafana\/). The outbound requests are annotated to indicate whether mTLS\nis used or not. After checking this, if you believe the client sidecars are misbehaving, report an issue on GitHub.","site":"istio","answers_cleaned":"    title  Security Problems description  Techniques to address common Istio authentication  authorization  and general security related problems  force inline toc  true weight  20 keywords   security citadel  aliases         help ops security repairing citadel        help ops troubleshooting repairing citadel        docs ops troubleshooting repairing citadel owner  istio wg security maintainers test  n a         End user authentication fails  With Istio  you can enable authentication for end users through  request authentication policies   docs tasks security authentication authn policy  end user authentication   Follow these steps to troubleshoot the policy specification   1  If  jwksUri  isn t set  make sure the JWT issuer is of url format and  url     well known openid configuration  can be opened in browser  for example  if the JWT issuer is  https   accounts google com   make sure  https   accounts google com  well known openid configuration  is a valid url and can be opened in a browser            apiVersion  security istio io v1   
  kind  RequestAuthentication     metadata        name   example 3      spec        selector          matchLabels            app  httpbin       jwtRules          issuer   testing secure istio io          jwksUri    security tools jwt samples jwks json        1  If the JWT token is placed in the Authorization header in http requests  make sure the JWT token is valid  not expired  etc   The fields in a JWT token can be decoded by using online JWT parsing tools  e g    jwt io  https   jwt io     1  Verify the Envoy proxy configuration of the target workload using  istioctl proxy config  command       With the example policy above applied  use the following command to check the  listener  configuration on the inbound port  80   You should see  envoy filters http jwt authn  filter with settings matching the issuer and JWKS as specified in the policy              POD   kubectl get pod  l app httpbin  n foo  o jsonpath   items  metadata name         istioctl proxy config listener   POD   n foo   port 80   type HTTP  o json      redacted                                                                         name    envoy filters http jwt authn                                        typedConfig                                               type    type googleapis com envoy config filter http jwt authn v2alpha JwtAuthentication                                            providers                                                  origins 0                                                      issuer    testing secure istio io                                                    localJwks                                                          inlineString     redacted                                                                                                       payloadInMetadata    testing secure istio io                                                                                                                                    rules                                 
                                                                   match                                                          prefix                                                                                                           requires                                                          requiresAny                                                              requirements                                                                                                                                    providerName    origins 0                                                                                                                                                                                                allowMissing                                                                                                                                                                                                                                                                                                                                                                                                            redacted           Authorization is too restrictive or permissive      Make sure there are no typos in the policy YAML file  One common mistake is specifying multiple items unintentionally in the YAML  Take the following policy as an example    apiVersion  security istio io v1 kind  AuthorizationPolicy metadata    name  example   namespace  foo spec    action  ALLOW   rules      to        operation          paths             foo     from        source          namespaces            foo   You may expect the policy to allow requests if the path is   foo    and   the source namespace is  foo   However  the policy actually allows requests if the path is   foo    or   the source namespace is  foo   which is more permissive   In the YAML syntax  the     in front of the  from   means it s a new element in the list  This creates 2 rules in the 
policy instead of 1  In authorization policy  multiple rules have the semantics of  OR    To fix the problem  just remove the extra     to make the policy have only 1 rule that allows requests if the path is   foo    and   the source namespace is  foo   which is more restrictive       Make sure you are NOT using HTTP only fields on TCP ports  The authorization policy will be more restrictive because HTTP only fields  e g   host    path    headers   JWT  etc   do not exist in the raw TCP connections   In the case of  ALLOW  policy  these fields are never matched  In the case of  DENY  and  CUSTOM  action  these fields are considered always matched  The final effect is a more restrictive policy that could cause unexpected denies   Check the Kubernetes service definition to verify that the port is  named with the correct protocol properly   docs ops configuration traffic management protocol selection  explicit protocol selection   If you are using HTTP only fields on the port  make sure the port name has the  http   prefix       Make sure the policy is applied to the correct target  Check the workload selector and namespace to confirm it s applied to the correct targets  You can determine the authorization policy in effect by running  istioctl x authz check POD NAME POD NAMESPACE        Pay attention to the action specified in the policy    If not specified  the policy defaults to use action  ALLOW      When a workload has multiple actions   CUSTOM    ALLOW  and  DENY   applied at the same time  all actions must be   satisfied to allow a request  In other words  a request is denied if any of the action denies and is allowed only if   all actions allow     The  AUDIT  action does not enforce access control and will not deny the request at any cases   Read  authorization implicit enablement   docs concepts security  implicit enablement  for more details of the evaluation order      Ensure Istiod accepts the policies  Istiod converts and distributes your authorization 
policies to the proxies  The following steps help you ensure Istiod is working as expected   1  Run the following command to enable the debug logging in istiod              istioctl admin log   level authorization debug       1  Get the Istiod log with the following command            You probably need to first delete and then re apply your authorization policies so that     the debug output is generated for these policies                   kubectl logs   kubectl  n istio system get pods  l app istiod  o jsonpath    items 0  metadata name     c discovery  n istio system       1  Check the output and verify there are no errors  For example  you might see something similar to the following            2021 04 23T20 53 29 507314Z info ads Push debounce stable 31  1  100 981865ms since last change  100 981653ms since last push  full true     2021 04 23T20 53 29 507641Z info ads XDS  Pushing 2021 04 23T20 53 29Z 23 Services 15 ConnectedEndpoints 2  Version 2021 04 23T20 53 29Z 23     2021 04 23T20 53 29 507911Z debug authorization Processed authorization policy for httpbin 74fb669cc6 lpscm foo with details            found 0 CUSTOM actions     2021 04 23T20 53 29 508077Z debug authorization Processed authorization policy for curl 557747455f 6dxbl foo with details            found 0 CUSTOM actions     2021 04 23T20 53 29 508128Z debug authorization Processed authorization policy for httpbin 74fb669cc6 lpscm foo with details            found 1 DENY actions  0 ALLOW actions  0 AUDIT actions           generated config from rule ns foo  policy deny path headers  rule 0  on HTTP filter chain successfully           built 1 HTTP filters for DENY action           added 1 HTTP filters to filter chain 0           added 1 HTTP filters to filter chain 1     2021 04 23T20 53 29 508158Z debug authorization Processed authorization policy for curl 557747455f 6dxbl foo with details            found 0 DENY actions  0 ALLOW actions  0 AUDIT actions     2021 04 23T20 53 29 509097Z debug 
authorization Processed authorization policy for curl 557747455f 6dxbl foo with details            found 0 CUSTOM actions     2021 04 23T20 53 29 509167Z debug authorization Processed authorization policy for curl 557747455f 6dxbl foo with details            found 0 DENY actions  0 ALLOW actions  0 AUDIT actions     2021 04 23T20 53 29 509501Z debug authorization Processed authorization policy for httpbin 74fb669cc6 lpscm foo with details            found 0 CUSTOM actions     2021 04 23T20 53 29 509652Z debug authorization Processed authorization policy for httpbin 74fb669cc6 lpscm foo with details            found 1 DENY actions  0 ALLOW actions  0 AUDIT actions           generated config from rule ns foo  policy deny path headers  rule 0  on HTTP filter chain successfully           built 1 HTTP filters for DENY action           added 1 HTTP filters to filter chain 0           added 1 HTTP filters to filter chain 1           generated config from rule ns foo  policy deny path headers  rule 0  on TCP filter chain successfully           built 1 TCP filters for DENY action           added 1 TCP filters to filter chain 2           added 1 TCP filters to filter chain 3           added 1 TCP filters to filter chain 4     2021 04 23T20 53 29 510903Z info ads LDS  PUSH for node curl 557747455f 6dxbl foo resources 18 size 85 0kB     2021 04 23T20 53 29 511487Z info ads LDS  PUSH for node httpbin 74fb669cc6 lpscm foo resources 18 size 86 4kB           This shows that Istiod generated         An HTTP filter config with policy  ns foo  policy deny path headers  rule 0   for workload  httpbin 74fb669cc6 lpscm foo          A TCP filter config with policy  ns foo  policy deny path headers  rule 0   for workload  httpbin 74fb669cc6 lpscm foo       Ensure Istiod distributes policies to proxies correctly  Istiod distributes the authorization policies to proxies  The following steps help you ensure istiod is working as expected    The command below assumes you have deployed  httpbin 
  you should replace    l app httpbin   with your actual pod if you are not using  httpbin     1  Run the following command to get the proxy configuration dump for the  httpbin  workload              kubectl exec    kubectl get pods  l app httpbin  o jsonpath    items 0  metadata name     c istio proxy    pilot agent request GET config dump       1  Check the log and verify         The log includes an  envoy filters http rbac  filter to enforce the authorization policy on each incoming request        Istio updates the filter accordingly after you update your authorization policy   1  The following output means the proxy of  httpbin  has enabled the  envoy filters http rbac  filter with rules that rejects    anyone to access path   headers                     name    envoy filters http rbac         typed config             type    type googleapis com envoy extensions filters http rbac v3 RBAC          rules             action    DENY           policies              ns foo  policy deny path headers  rule 0                permissions                             and rules                  rules                                   or rules                     rules                                         url path                        path                         exact     headers                                                                                                                                                                        principals                             and ids                  ids                                   any   true                                                                                                     shadow rules stat prefix    istio dry run allow                          Ensure proxies enforce policies correctly  Proxies eventually enforce the authorization policies  The following steps help you ensure the proxy is working as expected    The command below assumes you have deployed  httpbin   you should replace    l 
app httpbin   with your actual pod if you are not using  httpbin     1  Turn on the authorization debug logging in proxy with the following command              istioctl proxy config log deploy httpbin   level  rbac debug        1  Verify you see the following output            active loggers                      rbac  debug                     1  Send some requests to the  httpbin  workload to generate some logs   1  Print the proxy logs with the following command              kubectl logs   kubectl get pods  l app httpbin  o jsonpath    items 0  metadata name     c istio proxy       1  Check the output and verify         The output log shows either  enforced allowed  or  enforced denied  depending on whether the request       was allowed or denied respectively         Your authorization policy expects the data extracted from the request   1  The following is an example output for a request at path   httpbin                     2021 04 23T20 43 18 552857Z debug envoy rbac checking request  requestedServerName  outbound  8000    httpbin foo svc cluster local  sourceIP  10 44 3 13 46180  directRemoteIP  10 44 3 13 46180  remoteIP  10 44 3 13 46180 localAddress  10 44 1 18 80  ssl  uriSanPeerCertificate  spiffe   cluster local ns foo sa curl  dnsSanPeerCertificate    subjectPeerCertificate    headers    authority    httpbin 8000        path     headers        method    GET        scheme    http       user agent    curl 7 76 1 DEV       accept              x forwarded proto    http       x request id    672c9166 738c 4865 b541 128259cc65e5       x envoy attempt count    1       x b3 traceid    8a124905edf4291a21df326729b264e9       x b3 spanid    21df326729b264e9       x b3 sampled    0       x forwarded client cert    By spiffe   cluster local ns foo sa httpbin Hash d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d Subject    URI spiffe   cluster local ns foo sa curl        dynamicMetadata  filter metadata         key   istio authn        value         
  fields             key   request auth principal            value               string value   cluster local ns foo sa curl                                fields             key   source namespace            value               string value   foo                                fields             key   source principal            value               string value   cluster local ns foo sa curl                                fields             key   source user            value               string value   cluster local ns foo sa curl                                           2021 04 23T20 43 18 552910Z debug envoy rbac enforced denied  matched policy ns foo  policy deny path headers  rule 0                    The log  enforced denied  matched policy ns foo  policy deny path headers  rule 0   means the request is rejected by     the policy  ns foo  policy deny path headers  rule 0     1  The following is an example output for authorization policy in the  dry run mode   docs tasks security authorization authz dry run                     2021 04 23T20 59 11 838468Z debug envoy rbac checking request  requestedServerName  outbound  8000    httpbin foo svc cluster local  sourceIP  10 44 3 13 49826  directRemoteIP  10 44 3 13 49826  remoteIP  10 44 3 13 49826 localAddress  10 44 1 18 80  ssl  uriSanPeerCertificate  spiffe   cluster local ns foo sa curl  dnsSanPeerCertificate    subjectPeerCertificate    headers    authority    httpbin 8000        path     headers        method    GET        scheme    http       user agent    curl 7 76 1 DEV       accept              x forwarded proto    http       x request id    e7b2fdb0 d2ea 4782 987c 7845939e6313       x envoy attempt count    1       x b3 traceid    696607fc4382b50017c1f7017054c751       x b3 spanid    17c1f7017054c751       x b3 sampled    0       x forwarded client cert    By spiffe   cluster local ns foo sa httpbin Hash d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d Subject    URI spiffe   cluster 
local ns foo sa curl        dynamicMetadata  filter metadata         key   istio authn        value           fields             key   request auth principal            value               string value   cluster local ns foo sa curl                                fields             key   source namespace            value               string value   foo                                fields             key   source principal            value               string value   cluster local ns foo sa curl                                fields             key   source user            value               string value   cluster local ns foo sa curl                                           2021 04 23T20 59 11 838529Z debug envoy rbac shadow denied  matched policy ns foo  policy deny path headers  rule 0      2021 04 23T20 59 11 838538Z debug envoy rbac no engine  allowed by default                   The log  shadow denied  matched policy ns foo  policy deny path headers  rule 0   means the request would be rejected     by the   dry run   policy  ns foo  policy deny path headers  rule 0         The log  no engine  allowed by default  means the request is actually allowed because the dry run policy is the     only policy on the workload      Keys and certificates errors  If you suspect that some of the keys and or certificates used by Istio aren t correct  you can inspect the contents from any pod      istioctl proxy config secret curl 8f795f47d 4s4t7 RESOURCE NAME     TYPE           STATUS     VALID CERT     SERIAL NUMBER                               NOT AFTER                NOT BEFORE default           Cert Chain     ACTIVE     true           138092480869518152837211547060273851586     2020 11 11T16 39 48Z     2020 11 10T16 39 48Z ROOTCA            CA             ACTIVE     true           288553090258624301170355571152070165215     2030 11 08T16 34 52Z     2020 11 10T16 34 52Z   By passing the   o json  flag  you can pass the full certificate content to  openssl  to analyze 
its contents      istioctl proxy config secret curl 8f795f47d 4s4t7  o json   jq    dynamicActiveSecrets     select  name     default    0  secret tlsCertificate certificateChain inlineBytes   r   base64  d   openssl x509  noout  text Certificate      Data          Version  3  0x2          Serial Number              99 59 6b a2 5a f4 20 f4 03 d7 f0 bc 59 f5 d8 40     Signature Algorithm  sha256WithRSAEncryption         Issuer  O   k8s cluster local         Validity             Not Before  Jun  4 20 38 20 2018 GMT             Not After   Sep  2 20 38 20 2018 GMT             X509v3 extensions              X509v3 Key Usage  critical                 Digital Signature  Key Encipherment             X509v3 Extended Key Usage                  TLS Web Server Authentication  TLS Web Client Authentication             X509v3 Basic Constraints  critical                 CA FALSE             X509v3 Subject Alternative Name                  URI spiffe   cluster local ns my ns sa my sa       Make sure the displayed certificate contains valid information  In particular  the  Subject Alternative Name  field should be  URI spiffe   cluster local ns my ns sa my sa       Mutual TLS errors  If you suspect problems with mutual TLS  first ensure that istiod is healthy  and second ensure that  keys and certificates are being delivered   keys and certificates errors  to sidecars properly   If everything appears to be working so far  the next step is to verify that the right  authentication policy   docs tasks security authentication authn policy   is applied and the right destination rules are in place   If you suspect the client side sidecar may send mutual TLS or plaintext traffic incorrectly  check the  Grafana Workload dashboard   docs ops integrations grafana    The outbound requests are annotated whether mTLS  is used or not  After checking this if you believe the client sidecars are misbehaved  report an issue on GitHub "}
{"questions":"istio owner istio wg policies and telemetry maintainers EnvoyFilter migration Resolve common problems with Istio upgrades title Upgrade Problems test n a weight 60","answers":"---\ntitle: Upgrade Problems\ndescription: Resolve common problems with Istio upgrades.\nweight: 60\nowner: istio\/wg-policies-and-telemetry-maintainers\ntest: n\/a\n---\n\n## EnvoyFilter migration\n\n`EnvoyFilter` is an alpha API that is tightly coupled to the implementation\ndetails of Istio xDS configuration generation. Production use of the\n`EnvoyFilter` alpha API must be carefully curated during the upgrade of Istio's\ncontrol or data plane. In many instances, `EnvoyFilter` can be replaced with a\nfirst-class Istio API which carries substantially lower upgrade risks.\n\n### Use Telemetry API for metrics customization\n\nThe usage of `IstioOperator` to customize Prometheus metrics generation has been\nreplaced by the [Telemetry API](\/docs\/tasks\/observability\/metrics\/customize-metrics\/),\nbecause `IstioOperator` relies on a template `EnvoyFilter` to change the\nmetrics filter configuration. 
Note that the two methods are incompatible, and\nthe Telemetry API does not work with `EnvoyFilter` or `IstioOperator` metric\ncustomization configuration.\n\nAs an example, the following `IstioOperator` configuration adds a `destination_port` tag:\n\n\napiVersion: install.istio.io\/v1alpha1\nkind: IstioOperator\nspec:\n  values:\n    telemetry:\n      v2:\n        prometheus:\n          configOverride:\n            inboundSidecar:\n              metrics:\n                - name: requests_total\n                  dimensions:\n                    destination_port: string(destination.port)\n\n\nThe following `Telemetry` configuration replaces the above:\n\n\napiVersion: telemetry.istio.io\/v1\nkind: Telemetry\nmetadata:\n  name: namespace-metrics\nspec:\n  metrics:\n  - providers:\n    - name: prometheus\n    overrides:\n    - match:\n        metric: REQUEST_COUNT\n      mode: SERVER\n      tagOverrides:\n        destination_port:\n          value: \"string(destination.port)\"\n\n\n### Use the WasmPlugin API for Wasm data plane extensibility\n\nThe usage of `EnvoyFilter` to inject Wasm filters has been replaced by the\n[WasmPlugin API](\/docs\/tasks\/extensibility\/wasm-module-distribution).\nThe WasmPlugin API allows dynamic loading of plugins from artifact registries,\nURLs, or local files. The \"Null\" plugin runtime is no longer a recommended option\nfor deployment of Wasm code.\n\n### Use gateway topology to set the number of trusted hops\n\nThe usage of `EnvoyFilter` to configure the number of trusted hops in the\nHTTP connection manager has been replaced by the\n[`gatewayTopology`](\/docs\/reference\/config\/istio.mesh.v1alpha1\/#Topology)\nfield in\n[`ProxyConfig`](\/docs\/ops\/configuration\/traffic-management\/network-topologies).\nFor example, the following `EnvoyFilter` configuration should be replaced by an annotation\non the pod, or by the mesh default. 
Instead of:\n\n\napiVersion: networking.istio.io\/v1alpha3\nkind: EnvoyFilter\nmetadata:\n  name: ingressgateway-redirect-config\nspec:\n  configPatches:\n  - applyTo: NETWORK_FILTER\n    match:\n      context: GATEWAY\n      listener:\n        filterChain:\n          filter:\n            name: envoy.filters.network.http_connection_manager\n    patch:\n      operation: MERGE\n      value:\n        typed_config:\n          '@type': type.googleapis.com\/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager\n          xff_num_trusted_hops: 1\n  workloadSelector:\n    labels:\n      istio: ingress-gateway\n\n\nUse the equivalent ingress gateway pod proxy configuration annotation:\n\n\nmetadata:\n  annotations:\n    \"proxy.istio.io\/config\": '{\"gatewayTopology\" : { \"numTrustedProxies\": 1 }}'\n\n\n### Use gateway topology to enable PROXY protocol on the ingress gateways\n\nThe usage of `EnvoyFilter` to enable [PROXY\nprotocol](https:\/\/www.haproxy.org\/download\/1.8\/doc\/proxy-protocol.txt) on the\ningress gateways has been replaced by the\n[`gatewayTopology`](\/docs\/reference\/config\/istio.mesh.v1alpha1\/#Topology)\nfield in\n[`ProxyConfig`](\/docs\/ops\/configuration\/traffic-management\/network-topologies).\nFor example, the following `EnvoyFilter` configuration should be replaced by an annotation\non the pod, or by the mesh default. 
Instead of:\n\n\napiVersion: networking.istio.io\/v1alpha3\nkind: EnvoyFilter\nmetadata:\n  name: proxy-protocol\nspec:\n  configPatches:\n  - applyTo: LISTENER_FILTER\n    patch:\n      operation: INSERT_FIRST\n      value:\n        name: proxy_protocol\n        typed_config:\n          \"@type\": \"type.googleapis.com\/envoy.extensions.filters.listener.proxy_protocol.v3.ProxyProtocol\"\n  workloadSelector:\n    labels:\n      istio: ingress-gateway\n\n\nUse the equivalent ingress gateway pod proxy configuration annotation:\n\n\nmetadata:\n  annotations:\n    \"proxy.istio.io\/config\": '{\"gatewayTopology\" : { \"proxyProtocol\": {} }}'\n\n\n### Use a proxy annotation to customize the histogram bucket sizes\n\nThe usage of `EnvoyFilter` and the experimental bootstrap discovery service to\nconfigure the bucket sizes for the histogram metrics has been replaced by the\nproxy annotation `sidecar.istio.io\/statsHistogramBuckets`. For example, the\nfollowing `EnvoyFilter` configuration should use an annotation on the pod.\nInstead of:\n\n\napiVersion: networking.istio.io\/v1alpha3\nkind: EnvoyFilter\nmetadata:\n  name: envoy-stats-1\n  namespace: istio-system\nspec:\n  workloadSelector:\n    labels:\n      istio: ingressgateway\n  configPatches:\n  - applyTo: BOOTSTRAP\n    patch:\n      operation: MERGE\n      value:\n        stats_config:\n          histogram_bucket_settings:\n            - match:\n                prefix: istiocustom\n              buckets: [1,5,50,500,5000,10000]\n\n\nUse the equivalent pod annotation:\n\n\nmetadata:\n  annotations:\n    \"sidecar.istio.io\/statsHistogramBuckets\": '{\"istiocustom\":[1,5,50,500,5000,10000]}'\n","site":"istio","answers_cleaned":"    title  Upgrade Problems description  Resolve common problems with Istio upgrades  weight  60 owner  istio wg policies and telemetry maintainers test  n a         EnvoyFilter migration   EnvoyFilter  is an alpha API that is tightly coupled to the implementation details of Istio xDS 
configuration generation. Production use of the `EnvoyFilter` alpha API must be carefully curated during the upgrade of Istio's control or data plane. In many instances, `EnvoyFilter` can be replaced with a first-class Istio API which carries substantially lower upgrade risks.

## Use Telemetry API for metrics customization

The usage of `IstioOperator` to customize Prometheus metrics generation has been replaced by the [Telemetry API](/docs/tasks/observability/metrics/customize-metrics/), because `IstioOperator` relies on a template `EnvoyFilter` to change the metrics filter configuration. Note that the two methods are incompatible, and the Telemetry API does not work with `EnvoyFilter` or `IstioOperator` metric customization configuration.

As an example, the following `IstioOperator` configuration adds a `destination_port` tag:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    telemetry:
      v2:
        prometheus:
          configOverride:
            inboundSidecar:
              metrics:
              - name: requests_total
                dimensions:
                  destination_port: string(destination.port)
```

The following `Telemetry` configuration replaces the above:

```yaml
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: namespace-metrics
spec:
  metrics:
  - providers:
    - name: prometheus
    overrides:
    - match:
        metric: REQUEST_COUNT
        mode: SERVER
      tagOverrides:
        destination_port:
          value: string(destination.port)
```

## Use the WasmPlugin API for Wasm data plane extensibility

The usage of `EnvoyFilter` to inject Wasm filters has been replaced by the [WasmPlugin API](/docs/tasks/extensibility/wasm-module-distribution/). The WasmPlugin API allows dynamic loading of the plugins from artifact registries, URLs, or local files. The `Null` plugin runtime is no longer a recommended option for deployment of Wasm code.

## Use gateway topology to set the number of trusted hops

The usage of `EnvoyFilter` to configure the number of trusted hops in the HTTP connection manager has been replaced by the [`gatewayTopology`](/docs/reference/config/istio.mesh.v1alpha1/#Topology) field in [`ProxyConfig`](/docs/ops/configuration/traffic-management/network-topologies/). For example, the following `EnvoyFilter` configuration should be replaced with an annotation on the pod or the mesh default. Instead of:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: ingressgateway-redirect-config
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          xff_num_trusted_hops: 1
  workloadSelector:
    labels:
      istio: ingress-gateway
```

use the equivalent ingress gateway pod proxy configuration annotation:

```yaml
metadata:
  annotations:
    proxy.istio.io/config: '{"gatewayTopology" : { "numTrustedProxies": 1 } }'
```

## Use gateway topology to enable PROXY protocol on the ingress gateways

The usage of `EnvoyFilter` to enable the [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) on the ingress gateways has been replaced by the [`gatewayTopology`](/docs/reference/config/istio.mesh.v1alpha1/#Topology) field in [`ProxyConfig`](/docs/ops/configuration/traffic-management/network-topologies/). For example, the following `EnvoyFilter` configuration should be replaced with an annotation on the pod or the mesh default. Instead of:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: proxy-protocol
spec:
  configPatches:
  - applyTo: LISTENER_FILTER
    patch:
      operation: INSERT_FIRST
      value:
        name: proxy_protocol
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.listener.proxy_protocol.v3.ProxyProtocol
  workloadSelector:
    labels:
      istio: ingress-gateway
```

use the equivalent ingress gateway pod proxy configuration annotation:

```yaml
metadata:
  annotations:
    proxy.istio.io/config: '{"gatewayTopology" : { "proxyProtocol": {} } }'
```

## Use a proxy annotation to customize the histogram bucket sizes

The usage of `EnvoyFilter` and the experimental bootstrap discovery service to configure the bucket sizes for the histogram metrics has been replaced by the proxy annotation `sidecar.istio.io/statsHistogramBuckets`. For example, the following `EnvoyFilter` configuration should be replaced with an annotation on the pod. Instead of:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: envoy-stats-1
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: BOOTSTRAP
    patch:
      operation: MERGE
      value:
        stats_config:
          histogram_bucket_settings:
          - match:
              prefix: istiocustom
            buckets: [1,5,50,500,5000,10000]
```

use the equivalent pod annotation:

```yaml
metadata:
  annotations:
    sidecar.istio.io/statsHistogramBuckets: '{"istiocustom":[1,5,50,500,5000,10000]}'
```
{"questions":"istio help ops troubleshooting validation aliases help ops setup validation title Configuration Validation Problems owner istio wg user experience maintainers Describes how to resolve configuration validation problems forceinlinetoc true docs ops troubleshooting validation weight 50","answers":"---\ntitle: Configuration Validation Problems\ndescription: Describes how to resolve configuration validation problems.\nforce_inline_toc: true\nweight: 50\naliases:\n    - \/help\/ops\/setup\/validation\n    - \/help\/ops\/troubleshooting\/validation\n    - \/docs\/ops\/troubleshooting\/validation\nowner: istio\/wg-user-experience-maintainers\ntest: no\n---\n\n## Seemingly valid configuration is rejected\n\nUse [istioctl validate -f](\/docs\/reference\/commands\/istioctl\/#istioctl-validate) and [istioctl analyze](\/docs\/reference\/commands\/istioctl\/#istioctl-analyze) for more insight into why the configuration is rejected.  Use an _istioctl_ CLI with a similar version to the control plane version.\n\nThe most commonly reported problems with configuration are YAML indentation and array notation (`-`) mistakes.\n\nManually verify your configuration is correct, cross-referencing\n[Istio API reference](\/docs\/reference\/config) when\nnecessary.\n\n## Invalid configuration is accepted\n\nVerify that a `validatingwebhookconfiguration` named `istio-validator-` followed by\n`<revision>-`, if not the default revision, followed by the Istio system namespace\n(e.g., `istio-validator-myrev-istio-system`) exists and is correct.\nThe `apiVersion`, `apiGroup`, and `resource` of the\ninvalid configuration should be listed in the `webhooks` section of the `validatingwebhookconfiguration`.\n\n\n$ kubectl get validatingwebhookconfiguration istio-validator-istio-system -o yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: ValidatingWebhookConfiguration\nmetadata:\n  labels:\n    app: istiod\n    install.operator.istio.io\/owning-resource-namespace: istio-system\n    
istio: istiod\n    istio.io\/rev: default\n    operator.istio.io\/component: Pilot\n    operator.istio.io\/managed: Reconcile\n    operator.istio.io\/version: unknown\n    release: istio\n  name: istio-validator-istio-system\n  resourceVersion: \"615569\"\n  uid: 112fed62-93e7-41c9-8cb1-b2665f392dd7\nwebhooks:\n- admissionReviewVersions:\n  - v1beta1\n  - v1\n  clientConfig:\n    # caBundle should be non-empty. This is periodically (re)patched\n    # every second by the webhook service using the ca-cert\n    # from the mounted service account secret.\n    caBundle: LS0t...\n    # service corresponds to the Kubernetes service that implements the webhook\n    service:\n      name: istiod\n      namespace: istio-system\n      path: \/validate\n      port: 443\n  failurePolicy: Fail\n  matchPolicy: Equivalent\n  name: rev.validation.istio.io\n  namespaceSelector: {}\n  objectSelector:\n    matchExpressions:\n    - key: istio.io\/rev\n      operator: In\n      values:\n      - default\n  rules:\n  - apiGroups:\n    - security.istio.io\n    - networking.istio.io\n    - telemetry.istio.io\n    - extensions.istio.io\n    apiVersions:\n    - '*'\n    operations:\n    - CREATE\n    - UPDATE\n    resources:\n    - '*'\n    scope: '*'\n  sideEffects: None\n  timeoutSeconds: 10\n\n\nIf the `istio-validator-` webhook does not exist, verify\nthe `global.configValidation` installation option is\nset to `true`.\n\nThe validation configuration is fail-close. If\nconfiguration exists and is scoped properly, the webhook will be\ninvoked. A missing `caBundle`, bad certificate, or network connectivity\nproblem will produce an error message when the resource is\ncreated\/updated. 
If you don\u2019t see any error message and the webhook\nwasn\u2019t invoked and the webhook configuration is valid, your cluster is\nmisconfigured.\n\n## Creating configuration fails with x509 certificate errors\n\n`x509: certificate signed by unknown authority` related errors are\ntypically caused by an empty `caBundle` in the webhook\nconfiguration. Verify that it is not empty (see [verify webhook\nconfiguration](#invalid-configuration-is-accepted)). Istio continuously reconciles the webhook configuration\nusing the `istio-validation` `configmap` and root certificate.\n\n1. Verify the `istiod` pod(s) are running:\n\n    \n    $ kubectl -n istio-system get pod -lapp=istiod\n    NAME                            READY     STATUS    RESTARTS   AGE\n    istiod-5dbbbdb746-d676g   1\/1       Running   0          2d\n    \n\n1. Check the pod logs for errors. Failing to patch the\n       `caBundle` should print an error.\n\n    \n    $ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o jsonpath='{.items[*].metadata.name}'); do \\\n        kubectl -n istio-system logs ${pod}\n    done\n    \n\n1. If the patching failed, verify the RBAC configuration for Istiod:\n\n    \n    $ kubectl get clusterrole istiod-istio-system -o yaml\n    apiVersion: rbac.authorization.k8s.io\/v1\n    kind: ClusterRole\n    metadata:\n      name: istiod-istio-system\n    rules:\n    - apiGroups:\n      - admissionregistration.k8s.io\n      resources:\n      - validatingwebhookconfigurations\n      verbs:\n      - '*'\n    \n\n    Istio needs `validatingwebhookconfigurations` write access to\n    create and update the `validatingwebhookconfiguration`.\n\n## Creating configuration fails with `no such hosts` or `no endpoints available` errors\n\nValidation is fail-close. If the `istiod` pod is not ready,\nconfiguration cannot be created or updated. 
In such cases you\u2019ll see\nan error about `no endpoints available`.\n\nVerify the `istiod` pod(s) are running and endpoints are ready.\n\n\n$ kubectl -n istio-system get pod -lapp=istiod\nNAME                            READY     STATUS    RESTARTS   AGE\nistiod-5dbbbdb746-d676g   1\/1       Running   0          2d\n\n\n\n$ kubectl -n istio-system get endpoints istiod\nNAME           ENDPOINTS                          AGE\nistiod         10.48.6.108:15014,10.48.6.108:443   3d\n\n\nIf the pods or endpoints aren't ready, check the pod logs and\nstatus for any indication about why the webhook pod is failing to start\nand serve traffic.\n\n\n$ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o jsonpath='{.items[*].metadata.name}'); do \\\n    kubectl -n istio-system logs ${pod}\ndone\n\n\n\n$ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o name); do \\\n    kubectl -n istio-system describe ${pod}\ndone\n","site":"istio"}
{"questions":"istio weight 30 owner istio wg security maintainers Istio security features provide strong identity powerful policy transparent TLS encryption and authentication authorization and audit AAA tools to protect your services and data forceinlinetoc true title Security Best Practices test n a Best practices for securing applications using Istio","answers":"---\ntitle: Security Best Practices\ndescription: Best practices for securing applications using Istio.\nforce_inline_toc: true\nweight: 30\nowner: istio\/wg-security-maintainers\ntest: n\/a\n---\n\nIstio security features provide strong identity, powerful policy, transparent TLS encryption, and authentication, authorization and audit (AAA) tools to protect your services and data.\nHowever, to fully make use of these features securely, care must be taken to follow best practices. It is recommended to review the [Security overview](\/docs\/concepts\/security\/) before proceeding.\n\n## Mutual TLS\n\nIstio will [automatically](\/docs\/ops\/configuration\/traffic-management\/tls-configuration\/#auto-mtls) encrypt traffic using [Mutual TLS](\/docs\/concepts\/security\/#mutual-tls-authentication) whenever possible.\nHowever, proxies are configured in [permissive mode](\/docs\/concepts\/security\/#permissive-mode) by default, meaning they will accept both mutual TLS and plaintext traffic.\n\nWhile this is required for incremental adoption or allowing traffic from clients without an Istio sidecar, it also weakens the security stance.\nIt is recommended to [migrate to strict mode](\/docs\/tasks\/security\/authentication\/mtls-migration\/) when possible, to enforce that mutual TLS is used.\n\nMutual TLS alone is not always enough to fully secure traffic, however, as it provides only authentication, not authorization.\nThis means that anyone with a valid certificate can still access a service.\n\nTo fully lock down traffic, it is recommended to configure [authorization 
policies](\/docs\/tasks\/security\/authorization\/).\nThese allow creating fine-grained policies to allow or deny traffic. For example, you can allow only requests from the `app` namespace to access the `hello-world` service.\n\n## Authorization policies\n\nIstio [authorization](\/docs\/concepts\/security\/#authorization) plays a critical part in Istio security.\nIt takes effort to configure the correct authorization policies to best protect your clusters.\nIt is important to understand the implications of these configurations as Istio cannot determine the proper authorization for all users.\nPlease follow this section in its entirety.\n\n### Safer Authorization Policy Patterns\n\n#### Use default-deny patterns\n\nWe recommend you define your Istio authorization policies following the default-deny pattern to enhance your cluster's security posture.\nThe default-deny authorization pattern means your system denies all requests by default, and you define the conditions in which the requests are allowed.\nIf you miss some conditions, traffic will be unexpectedly denied instead of unexpectedly allowed.\nThe latter is typically a security incident, while the former may result in a poor user experience, a service outage, or a failure to meet your SLO\/SLA.\n\nFor example, in the [authorization for HTTP traffic task](\/docs\/tasks\/security\/authorization\/authz-http\/),\nthe authorization policy named `allow-nothing` makes sure all traffic is denied by default.\nFrom there, other authorization policies allow traffic based on specific conditions.\n\n#### Use `ALLOW-with-positive-matching` and `DENY-with-negative-matching` patterns\n\nUse the `ALLOW-with-positive-matching` or `DENY-with-negative-matching` patterns whenever possible. 
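For reference, the `allow-nothing` policy mentioned above is simply an `ALLOW` policy with an empty spec. A minimal sketch, assuming it is applied in the mesh root namespace (`istio-system` here) so it takes effect mesh-wide:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: istio-system  # mesh root namespace (assumed); applies mesh-wide
# An empty spec defaults to the ALLOW action with no rules,
# so no request ever matches and all traffic is denied by default.
spec: {}
```

With this in place, traffic is only admitted where a more specific `ALLOW` policy matches the request.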
These authorization policy\npatterns are safer because the worst result in the case of policy mismatch is an unexpected 403 rejection instead of\nan authorization policy bypass.\n\nThe `ALLOW-with-positive-matching` pattern is to use the `ALLOW` action only with **positive** matching fields (e.g. `paths`, `values`)\nand not to use any of the **negative** matching fields (e.g. `notPaths`, `notValues`).\n\nThe `DENY-with-negative-matching` pattern is to use the `DENY` action only with **negative** matching fields (e.g. `notPaths`, `notValues`)\nand not to use any of the **positive** matching fields (e.g. `paths`, `values`).\n\nFor example, the authorization policy below uses the `ALLOW-with-positive-matching` pattern to allow requests to path `\/public`:\n\n\napiVersion: security.istio.io\/v1\nkind: AuthorizationPolicy\nmetadata:\n  name: foo\nspec:\n  action: ALLOW\n  rules:\n  - to:\n    - operation:\n        paths: [\"\/public\"]\n\n\nThe above policy explicitly lists the allowed path (`\/public`). This means the request path must be exactly the same as\n`\/public` to allow the request. Any other requests will be rejected by default, eliminating the risk\nof unknown normalization behavior causing policy bypass.\n\nThe following is an example using the `DENY-with-negative-matching` pattern to achieve the same result:\n\n\napiVersion: security.istio.io\/v1\nkind: AuthorizationPolicy\nmetadata:\n  name: foo\nspec:\n  action: DENY\n  rules:\n  - to:\n    - operation:\n        notPaths: [\"\/public\"]\n\n\n### Understand path normalization in authorization policy\n\nThe enforcement point for authorization policies is the Envoy proxy instead of the usual resource access point in the backend application. A policy mismatch happens when the Envoy proxy and the backend application interpret the request\ndifferently.\n\nA mismatch can lead to either unexpected rejection or a policy bypass. 
The latter is usually a security incident that needs to be\nfixed immediately, and it's also why we need path normalization in the authorization policy.\n\nFor example, consider an authorization policy to reject requests with path `\/data\/secret`. A request with path `\/data\/\/secret` will\nnot be rejected because it does not match the path defined in the authorization policy due to the extra forward slash `\/` in the path.\n\nThe request goes through and later the backend application returns the same response that it returns for the path `\/data\/secret`\nbecause the backend application normalizes the path `\/data\/\/secret` to `\/data\/secret` as it considers the double forward slashes\n`\/\/` equivalent to a single forward slash `\/`.\n\nIn this example, the policy enforcement point (Envoy proxy) had a different understanding of the path than the resource access\npoint (backend application). The different understanding caused the mismatch and subsequently the bypass of the authorization policy.\n\nThis becomes a complicated problem because of the following factors:\n\n* Lack of a clear standard for the normalization.\n\n* Backends and frameworks in different layers have their own special normalization.\n\n* Applications can even have arbitrary normalizations for their own use cases.\n\nIstio authorization policy implements built-in support of various basic normalization options to help you to better address\nthe problem:\n\n* Refer to [Guideline on configuring the path normalization option](\/docs\/ops\/best-practices\/security\/#guideline-on-configuring-the-path-normalization-option)\n  to understand which normalization options you may want to use.\n\n* Refer to [Customize your system on path normalization](\/docs\/ops\/best-practices\/security\/#customize-your-system-on-path-normalization) to\n  understand the detail of each normalization option.\n\n* Refer to [Mitigation for unsupported 
normalization](\/docs\/ops\/best-practices\/security\/#mitigation-for-unsupported-normalization) for\n  alternative solutions in case you need any unsupported normalization options.\n\n### Guideline on configuring the path normalization option\n\n#### Case 1: You do not need normalization at all\n\nBefore diving into the details of configuring normalization, you should first make sure that normalization is needed.\n\nYou do not need normalization if you don't use authorization policies or if your authorization policies don't\nuse any `path` fields.\n\nYou may not need normalization if all your authorization policies follow the [safer authorization pattern](\/docs\/ops\/best-practices\/security\/#safer-authorization-policy-patterns)\nwhich, in the worst case, results in unexpected rejection instead of policy bypass.\n\n#### Case 2: You need normalization but are not sure which option to use\n\nYou need normalization but are unsure which option to use. The safest choice is the strictest normalization option\nthat provides the maximum level of normalization in the authorization policy.\n\nThis is often the case because complicated, multi-layered systems make it practically impossible to figure\nout what normalization is actually happening to a request beyond the enforcement point.\n\nYou could use a less strict normalization option if it already satisfies your requirements and you are sure of its implications.\n\nFor either option, make sure you write both positive and negative tests specifically for your requirements to verify the\nnormalization is working as expected. 
The tests are useful in catching potential bypass issues caused by a misunderstanding\nor incomplete knowledge of the normalization happening to your request.\n\nRefer to [Customize your system on path normalization](\/docs\/ops\/best-practices\/security\/#customize-your-system-on-path-normalization)\nfor more details on configuring the normalization option.\n\n#### Case 3: You need an unsupported normalization option\n\nIf you need a specific normalization option that is not supported by Istio yet, please follow\n[Mitigation for unsupported normalization](\/docs\/ops\/best-practices\/security\/#mitigation-for-unsupported-normalization)\nfor customized normalization support or create a feature request for the Istio community.\n\n### Customize your system on path normalization\n\nIstio authorization policies can be based on the URL paths in the HTTP request.\n[Path normalization (a.k.a., URI normalization)](https:\/\/en.wikipedia.org\/wiki\/URI_normalization) modifies and standardizes the incoming requests' paths,\nso that the normalized paths can be processed in a standard way.\nSyntactically different paths may be equivalent after path normalization.\n\nIstio supports the following normalization schemes on the request paths,\nbefore evaluating against the authorization policies and routing the requests:\n\n| Option | Description | Example |\n| --- | --- | --- |\n| `NONE` | No normalization is done. Anything received by Envoy will be forwarded exactly as-is to any backend service. | `..\/%2Fa..\/b` is evaluated by the authorization policies and sent to your service. |\n| `BASE` | This is currently the option used in the *default* installation of Istio. 
This applies the [`normalize_path`](https:\/\/www.envoyproxy.io\/docs\/envoy\/latest\/api-v3\/extensions\/filters\/network\/http_connection_manager\/v3\/http_connection_manager.proto#envoy-v3-api-field-extensions-filters-network-http-connection-manager-v3-httpconnectionmanager-normalize-path) option on Envoy proxies, which follows [RFC 3986](https:\/\/tools.ietf.org\/html\/rfc3986) with extra normalization to convert backslashes to forward slashes. | `\/a\/..\/b` is normalized to `\/b`. `\\da` is normalized to `\/da`. |\n| `MERGE_SLASHES` | Slashes are merged after the _BASE_ normalization. | `\/a\/\/b` is normalized to `\/a\/b`. |\n| `DECODE_AND_MERGE_SLASHES` | The most strict setting when you allow all traffic by default. This setting is recommended, with the caveat that you will need to thoroughly test your authorization policy routes. [Percent-encoded](https:\/\/tools.ietf.org\/html\/rfc3986#section-2.1) slash and backslash characters (`%2F`, `%2f`, `%5C` and `%5c`) are decoded to `\/` or `\\`, before the `MERGE_SLASHES` normalization. | `\/a%2fb` is normalized to `\/a\/b`. |\n\n\nThe configuration is specified via the [`pathNormalization`](\/docs\/reference\/config\/istio.mesh.v1alpha1\/#MeshConfig-ProxyPathNormalization)\nfield in the [mesh config](\/docs\/reference\/config\/istio.mesh.v1alpha1\/).\n\n\nTo emphasize, the normalization algorithms are conducted in the following order:\n\n1. Percent-decode `%2F`, `%2f`, `%5C` and `%5c`.\n1. The [RFC 3986](https:\/\/tools.ietf.org\/html\/rfc3986) and other normalization implemented by the [`normalize_path`](https:\/\/www.envoyproxy.io\/docs\/envoy\/latest\/api-v3\/extensions\/filters\/network\/http_connection_manager\/v3\/http_connection_manager.proto#envoy-v3-api-field-extensions-filters-network-http-connection-manager-v3-httpconnectionmanager-normalize-path) option in Envoy.\n1. 
Merge slashes\n\n\nWhile these normalization options represent recommendations from HTTP standards and common industry practices,\napplications may interpret a URL in any way they choose to. When using denial policies, ensure that you understand how your application behaves.\n\n\nFor a complete list of supported normalizations, please refer to [authorization policy normalization](\/docs\/reference\/config\/security\/normalization\/).\n\n#### Examples of configuration\n\nEnsuring Envoy normalizes request paths to match your backend services' expectations is critical to the security of your system.\nThe following examples can be used as a reference for configuring your system.\nThe normalized URL paths, or the original URL paths if _NONE_ is selected, will be:\n\n1. Used to check against the authorization policies\n1. Forwarded to the backend application\n\n| Your application... | Choose... |\n| --- | --- |\n| Relies on the proxy to do normalization | `BASE`, `MERGE_SLASHES` or `DECODE_AND_MERGE_SLASHES` |\n| Normalizes request paths based on [RFC 3986](https:\/\/tools.ietf.org\/html\/rfc3986) and does not merge slashes | `BASE` |\n| Normalizes request paths based on [RFC 3986](https:\/\/tools.ietf.org\/html\/rfc3986), merges slashes but does not decode [percent-encoded](https:\/\/tools.ietf.org\/html\/rfc3986#section-2.1) slashes | `MERGE_SLASHES` |\n| Normalizes request paths based on [RFC 3986](https:\/\/tools.ietf.org\/html\/rfc3986), decodes [percent-encoded](https:\/\/tools.ietf.org\/html\/rfc3986#section-2.1) slashes and merges slashes | `DECODE_AND_MERGE_SLASHES` |\n| Processes request paths in a way that is incompatible with [RFC 3986](https:\/\/tools.ietf.org\/html\/rfc3986) | `NONE` |\n\n#### How to configure\n\nYou can use `istioctl` to update the [mesh config](\/docs\/reference\/config\/istio.mesh.v1alpha1\/):\n\n\n$ istioctl upgrade --set meshConfig.pathNormalization.normalization=DECODE_AND_MERGE_SLASHES\n\n\nor you can alter your operator overrides 
file:\n\n\n$ cat <<EOF > iop.yaml\napiVersion: install.istio.io\/v1alpha1\nkind: IstioOperator\nspec:\n  meshConfig:\n    pathNormalization:\n      normalization: DECODE_AND_MERGE_SLASHES\nEOF\n$ istioctl install -f iop.yaml\n\n\nAlternatively, if you want to directly edit the mesh config,\nyou can add the [`pathNormalization`](\/docs\/reference\/config\/istio.mesh.v1alpha1\/#MeshConfig-ProxyPathNormalization)\nto the [mesh config](\/docs\/reference\/config\/istio.mesh.v1alpha1\/), which is the `istio-<REVISION_ID>` configmap in the `istio-system` namespace.\nFor example, if you choose the `DECODE_AND_MERGE_SLASHES` option, you modify the mesh config as follows:\n\n\napiVersion: v1\ndata:\n  mesh: |-\n    ...\n    pathNormalization:\n      normalization: DECODE_AND_MERGE_SLASHES\n    ...\n\n\n### Mitigation for unsupported normalization\n\nThis section describes various mitigations for unsupported normalization. These could be useful when you need a specific\nnormalization that is not supported by Istio.\n\nPlease make sure you understand each mitigation thoroughly and use it carefully, as some mitigations rely on things that are\noutside the scope of Istio and not supported by Istio.\n\n#### Custom normalization logic\n\nYou can apply custom normalization logic using the WASM or Lua filter. It is recommended to use the WASM filter because\nit's officially supported and also used by Istio. 
You could use the Lua filter for a quick proof-of-concept demo but we do\nnot recommend using the Lua filter in production because it is not supported by Istio.\n\n##### Example custom normalization (case normalization)\n\nIn some environments, it may be useful to have paths in authorization policies compared in a case-insensitive manner.\nFor example, treating `https:\/\/myurl\/get` and `https:\/\/myurl\/GeT` as equivalent.\n\nIn those cases, the `EnvoyFilter` shown below can be used to insert a Lua filter to normalize the path to lower case.\nThis filter will change both the path used for comparison and the path presented to the application.\n\n\napiVersion: networking.istio.io\/v1alpha3\nkind: EnvoyFilter\nmetadata:\n  name: ingress-case-insensitive\n  namespace: istio-system\nspec:\n  configPatches:\n  - applyTo: HTTP_FILTER\n    match:\n      context: GATEWAY\n      listener:\n        filterChain:\n          filter:\n            name: \"envoy.filters.network.http_connection_manager\"\n    patch:\n      operation: INSERT_FIRST\n      value:\n        name: envoy.lua\n        typed_config:\n          \"@type\": \"type.googleapis.com\/envoy.extensions.filters.http.lua.v3.Lua\"\n          inlineCode: |\n            function envoy_on_request(request_handle)\n              local path = request_handle:headers():get(\":path\")\n              request_handle:headers():replace(\":path\", string.lower(path))\n            end\n\n\n#### Writing Host Match Policies\n\nIstio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway\nfor a host of `example.com` generates a config matching `example.com` and `example.com:*`. 
However, exact match authorization\npolicies only match the exact string given for the `hosts` or `notHosts` fields.\n\n[Authorization policy rules](\/docs\/reference\/config\/security\/authorization-policy\/#Rule) matching hosts should be written using\nprefix matches instead of exact matches. For example, for an `AuthorizationPolicy` matching the Envoy configuration generated\nfor a hostname of `example.com`, you would use `hosts: [\"example.com\", \"example.com:*\"]` as shown in the below `AuthorizationPolicy`.\n\n\napiVersion: security.istio.io\/v1\nkind: AuthorizationPolicy\nmetadata:\n  name: ingress-host\n  namespace: istio-system\nspec:\n  selector:\n    matchLabels:\n      app: istio-ingressgateway\n  action: DENY\n  rules:\n  - to:\n    - operation:\n        hosts: [\"example.com\", \"example.com:*\"]\n\n\nAdditionally, the `host` and `notHosts` fields should generally only be used on gateways for external traffic entering the mesh\nand not on sidecars for traffic within the mesh. This is because the sidecar on the server side (where the authorization policy is enforced)\ndoes not use the `Host` header when redirecting the request to the application. This makes the `host` and `notHosts` fields meaningless\non sidecars because a client could reach the application using an explicit IP address and an arbitrary `Host` header instead of\nthe service name.\n\nIf you really need to enforce access control based on the `Host` header on sidecars for any reason, follow the [default-deny patterns](\/docs\/ops\/best-practices\/security\/#use-default-deny-patterns),\nwhich would reject the request if the client uses an arbitrary `Host` header.\n\n#### Specialized Web Application Firewall (WAF)\n\nMany specialized Web Application Firewall (WAF) products provide additional normalization options. They can be deployed in\nfront of the Istio ingress gateway to normalize requests entering the mesh. The authorization policy will then be enforced\non the normalized requests. 
Please refer to your specific WAF product's documentation for configuring the normalization options.

#### Feature request to Istio

If you believe Istio should officially support a specific normalization, you can follow the [reporting a vulnerability](/docs/releases/security-vulnerabilities/#reporting-a-vulnerability) page to send a feature request about the specific normalization to the Istio Product Security Work Group for initial evaluation.

Please do not open any issues in public without first contacting the Istio Product Security Work Group, because the issue might be considered a security vulnerability that needs to be fixed in private.

If the Istio Product Security Work Group evaluates the feature request as not a security vulnerability, an issue will be opened in public for further discussions of the feature request.

### Known limitations

This section lists known limitations of the authorization policy.

#### Server-first TCP protocols are not supported

In a server-first TCP protocol, the server application sends the first bytes right after accepting the TCP connection, before receiving any data from the client.

Currently, the authorization policy only supports enforcing access control on inbound traffic, not outbound traffic.

It also does not support server-first TCP protocols, because the first bytes are sent by the server application before it has received any data from the client.
In this case, the initial bytes sent by the server are returned to the client directly, without going through the access control check of the authorization policy.

You should not use the authorization policy if the first bytes sent by a server-first TCP protocol include any sensitive data that needs to be protected by proper authorization.

You could still use the authorization policy in this case if the first bytes do not include any sensitive data, for example, if the first bytes are used for negotiating the connection with data that are publicly accessible to any client. The authorization policy will work as usual for subsequent requests sent by the client after the first bytes.

## Understand traffic capture limitations

The Istio sidecar works by capturing both inbound and outbound traffic and directing it through the sidecar proxy.

However, not *all* traffic is captured:

* Redirection only handles TCP-based traffic. Any UDP or ICMP packets will not be captured or modified.
* Inbound capture is disabled on many [ports used by the sidecar](/docs/ops/deployment/application-requirements/#ports-used-by-istio) as well as port 22. This list can be expanded by options like `traffic.sidecar.istio.io/excludeInboundPorts`.
* Outbound capture may similarly be reduced through settings like `traffic.sidecar.istio.io/excludeOutboundPorts` or other means.

In general, there is a minimal security boundary between an application and its sidecar proxy.
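As a sketch of how the capture-exclusion annotations mentioned above are applied, they are set per pod; the pod name, image, and port numbers below are arbitrary examples, not values from this document:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app   # hypothetical workload
  annotations:
    # Inbound traffic to port 9999 bypasses sidecar capture (example port)
    traffic.sidecar.istio.io/excludeInboundPorts: "9999"
    # Outbound traffic to port 8200 bypasses sidecar capture (example port)
    traffic.sidecar.istio.io/excludeOutboundPorts: "8200"
spec:
  containers:
  - name: app
    image: example/app:latest  # hypothetical image
```

Note that traffic on excluded ports bypasses the sidecar entirely, so no mutual TLS or authorization policy applies to it.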
Configuration of the sidecar is allowed on a per-pod basis, and both run in the same network/process namespace. As such, the application may have the ability to remove redirection rules and remove, alter, terminate, or replace the sidecar proxy. This allows a pod to intentionally bypass its sidecar for outbound traffic or intentionally allow inbound traffic to bypass its sidecar.

As a result, it is not secure to rely on all traffic being captured unconditionally by Istio. Instead, the security boundary is that a client may not bypass *another* pod's sidecar.

For example, if I run the `reviews` application on port `9080`, I can assume that all traffic from the `productpage` application will be captured by the sidecar proxy, where Istio authentication and authorization policies may apply.

### Defense in depth with `NetworkPolicy`

To further secure traffic, Istio policies can be layered with Kubernetes [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/). This enables a strong [defense in depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)) strategy that can be used to further strengthen the security of your mesh.

For example, you may choose to only allow traffic to port `9080` of the `reviews` application. In the event of a compromised pod or security vulnerability in the cluster, this may limit or stop an attacker's progress.

Depending on the actual implementation, changes to network policy may not affect existing connections in the Istio proxies. You may need to restart the Istio proxies after applying the policy so that existing connections will be closed and new connections will be subject to the new policy.

### Securing egress traffic

A common misconception is that options like [`outboundTrafficPolicy: REGISTRY_ONLY`](/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services) act as a security policy preventing all access to
undeclared services. However, this is not a strong security boundary, as mentioned above, and should be considered best-effort.

While this is useful to prevent accidental dependencies, if you want to secure egress traffic and enforce that all outbound traffic goes through a proxy, you should instead rely on an [Egress Gateway](/docs/tasks/traffic-management/egress/egress-gateway/). When combined with a [Network Policy](/docs/tasks/traffic-management/egress/egress-gateway/#apply-kubernetes-network-policies), you can enforce that all traffic, or some subset, goes through the egress gateway. This ensures that even if a client accidentally or maliciously bypasses their sidecar, the request will be blocked.

## Configure TLS verification in Destination Rule when using TLS origination

Istio offers the ability to [originate TLS](/docs/tasks/traffic-management/egress/egress-tls-origination/) from a sidecar proxy or gateway. This enables applications that send plaintext HTTP traffic to be transparently "upgraded" to HTTPS.

Care must be taken when configuring the `DestinationRule`'s `tls` setting to specify the `caCertificates`, `subjectAltNames`, and `sni` fields. The `caCertificates` can be automatically set from the CA certificate in the system's certificate store by enabling the environment variable `VERIFY_CERTIFICATE_AT_CLIENT=true` on Istiod. If the Operating System CA certificate should only be used for select host(s), set the environment variable `VERIFY_CERTIFICATE_AT_CLIENT=false` on Istiod and set `caCertificates` to `system` in the desired `DestinationRule`(s). Specifying the `caCertificates` in a `DestinationRule` will take priority, and the OS CA certificate will not be used. By default, egress traffic does not send SNI during the TLS handshake. SNI must be set in the `DestinationRule` to ensure the host properly handles the request.

In order to verify the server's certificate, it is important that both `caCertificates`
and `subjectAltNames` be set.

Verification of the certificate presented by the server against a CA is not sufficient, as the Subject Alternative Names must also be validated.

If `VERIFY_CERTIFICATE_AT_CLIENT` is set but `subjectAltNames` is not, then you are not verifying all credentials.

If no CA certificate is being used, `subjectAltNames` will not be used, regardless of whether it is set.

For example:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: google-tls
spec:
  host: google.com
  trafficPolicy:
    tls:
      mode: SIMPLE
      caCertificates: /etc/ssl/certs/ca-certificates.crt
      subjectAltNames:
      - "google.com"
      sni: "google.com"
```

## Gateways

When running an Istio [gateway](/docs/tasks/traffic-management/ingress/), there are a few resources involved:

* `Gateway`s, which control the ports and TLS settings for the gateway.
* `VirtualService`s, which control the routing logic. These are associated with `Gateway`s by direct reference in the `gateways` field and a mutual agreement on the `hosts` field in the `Gateway` and `VirtualService`.

### Restrict `Gateway` creation privileges

It is recommended to restrict creation of Gateway resources to trusted cluster administrators.
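One possible way to restrict this is sketched below: since Kubernetes RBAC is additive, granting write access to `Gateway` objects only through a role bound to a trusted admin group leaves other users without any rule covering the resource. The role and group names are hypothetical, and this assumes no other role already grants `Gateway` write access:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-gateway-admin          # hypothetical name
rules:
- apiGroups: ["networking.istio.io"]
  resources: ["gateways"]
  verbs: ["create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istio-gateway-admin-binding  # hypothetical name
subjects:
- kind: Group
  name: cluster-gateway-admins       # hypothetical trusted admin group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: istio-gateway-admin
  apiGroup: rbac.authorization.k8s.io
```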
This can be achieved by [Kubernetes RBAC policies](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) or tools like [Open Policy Agent](https://www.openpolicyagent.org/).

### Avoid overly broad `hosts` configurations

When possible, avoid overly broad `hosts` settings in `Gateway`.

For example, this configuration will allow any `VirtualService` to bind to the `Gateway`, potentially exposing unexpected domains:

```yaml
servers:
- port:
    number: 80
    name: http
    protocol: HTTP
  hosts:
  - "*"
```

This should be locked down to allow only specific domains or specific namespaces:

```yaml
servers:
- port:
    number: 80
    name: http
    protocol: HTTP
  hosts:
  - "foo.example.com" # Allow only VirtualServices that are for foo.example.com
  - "default/bar.example.com" # Allow only VirtualServices in the default namespace that are for bar.example.com
  - "route-namespace/*" # Allow only VirtualServices in the route-namespace namespace for any host
```

### Isolate sensitive services

It may be desired to enforce stricter physical isolation for sensitive services.
For example, you may want to run a [dedicated gateway instance](/docs/setup/install/istioctl/#configure-gateways) for a sensitive `payments.example.com`, while utilizing a single shared gateway instance for less sensitive domains like `blog.example.com` and `store.example.com`. This can offer a stronger defense-in-depth and help meet certain regulatory compliance guidelines.

### Explicitly disable all sensitive http hosts under relaxed SNI host matching

It is reasonable to use multiple `Gateway`s to define mutual TLS and simple TLS on different hosts. For example, use mutual TLS for SNI host `admin.example.com` and simple TLS for SNI host `*.example.com`.

```yaml
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: guestgateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*.example.com"
    tls:
      mode: SIMPLE
---
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: admingateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - admin.example.com
    tls:
      mode: MUTUAL
```

If the above is necessary, it is highly recommended to explicitly disable the http host `admin.example.com` in the `VirtualService` that attaches to `*.example.com`. The reason is that currently the underlying [envoy proxy does not require](https://github.com/envoyproxy/envoy/issues/6767) the http 1 header `Host` or the http 2 pseudo header `:authority` to match the SNI constraints, so an attacker can reuse the guest-SNI TLS connection to access the admin `VirtualService`.
The http response code 421 is designed for this `Host`/SNI mismatch and can be used to implement the disable.

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: disable-sensitive
spec:
  hosts:
  - "admin.example.com"
  gateways:
  - guestgateway
  http:
  - match:
    - uri:
        prefix: /
    fault:
      abort:
        percentage:
          value: 100
        httpStatus: 421
    route:
    - destination:
        port:
          number: 8000
        host: dest.default.cluster.local
```

## Protocol detection

Istio will [automatically determine the protocol](/docs/ops/configuration/traffic-management/protocol-selection/#automatic-protocol-selection) of traffic it sees. To avoid accidental or intentional misdetection, which may result in unexpected traffic behavior, it is recommended to [explicitly declare the protocol](/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection) where possible.

## CNI

In order to transparently capture all traffic, Istio relies on `iptables` rules configured by the `istio-init` `initContainer`. This adds a [requirement](/docs/ops/deployment/application-requirements/) for the `NET_ADMIN` and `NET_RAW` [capabilities](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container) to be available to the pod.

To reduce privileges granted to pods, Istio offers a [CNI plugin](/docs/setup/additional-setup/cni/) which removes this requirement.

## Use hardened docker images

Istio's default docker images, including those run by the control plane, gateway, and sidecar proxies, are based on `ubuntu`. This provides various tools such as `bash` and `curl`, which trades off convenience for an increased attack surface.

Istio also offers a smaller image based on [distroless images](/docs/ops/configuration/security/harden-docker-images/) that reduces the
dependencies in the image.

Distroless images are currently an alpha feature.

## Release and security policy

In order to ensure your cluster has the latest security patches for known vulnerabilities, it is important to stay on the latest patch release of Istio and ensure that you are on a [supported release](/docs/releases/supported-releases) that is still receiving security patches.

## Detect invalid configurations

While Istio provides validation of resources when they are created, these checks cannot catch all issues preventing configuration from being distributed in the mesh. This could result in applying a policy that is unexpectedly ignored, leading to unexpected results.

* Run `istioctl analyze` before or after applying configuration to ensure it is valid.
* Monitor the control plane for rejected configurations. These are exposed by the `pilot_total_xds_rejects` metric, in addition to logs.
* Test your configuration to ensure it gives the expected results. For a security policy, it is useful to run positive and negative tests to ensure you do not accidentally restrict too much or too little traffic.

## Avoid alpha and experimental features

All Istio features and APIs are assigned a [feature status](/docs/releases/feature-stages/), defining their stability, deprecation policy, and security policy.

Because alpha and experimental features do not have as strong security guarantees, it is recommended to avoid them whenever possible. Security issues found in these features may not be fixed immediately or otherwise not follow our standard [security vulnerability](/docs/releases/security-vulnerabilities/) process.

To determine the feature status of features in use in your cluster, consult the [Istio features](/docs/releases/feature-stages/#istio-features) list.

<!-- In the future, we should document the `istioctl` command to check this when available.
-->

## Lock down ports

Istio configures a [variety of ports](/docs/ops/deployment/application-requirements/#ports-used-by-istio) that may be locked down to improve security.

### Control Plane

Istiod exposes a few unauthenticated plaintext ports for convenience by default. If desired, these can be closed:

* Port `8080` exposes the debug interface, which offers read access to a variety of details about the cluster's state. This can be disabled by setting the environment variable `ENABLE_DEBUG_ON_HTTP=false` on Istiod. Warning: many `istioctl` commands depend on this interface and will not function if it is disabled.
* Port `15010` exposes the XDS service over plaintext. This can be disabled by adding the `--grpcAddr=""` flag to the Istiod Deployment. Note: highly sensitive services, such as the certificate signing and distribution services, are never served over plaintext.

### Data Plane

The proxy exposes a variety of ports. Exposed externally are port `15090` (telemetry) and port `15021` (health check). Ports `15020` and `15000` provide debugging endpoints. These are exposed over `localhost` only. As a result, the applications running in the same pod as the proxy have access; there is no trust boundary between the sidecar and application.

## Configure third party service account tokens

To authenticate with the Istio control plane, the Istio proxy will use a Service Account token. Kubernetes supports two forms of these tokens:

* Third party tokens, which have a scoped audience and expiration.
* First party tokens, which have no expiration and are mounted into all pods.

Because the properties of the first party token are less secure, Istio will default to using third party tokens. However, this feature is not enabled on all Kubernetes platforms.

If you are using `istioctl` to install, support will be automatically detected.
This can also be configured manually by passing `--set values.global.jwtPolicy=third-party-jwt` or `--set values.global.jwtPolicy=first-party-jwt`.

To determine if your cluster supports third party tokens, look for the `TokenRequest` API. If this returns no response, then the feature is not supported:

```bash
$ kubectl get --raw /api/v1 | jq '.resources[] | select(.name | index("serviceaccounts/token"))'
{
    "name": "serviceaccounts/token",
    "singularName": "",
    "namespaced": true,
    "group": "authentication.k8s.io",
    "version": "v1",
    "kind": "TokenRequest",
    "verbs": [
        "create"
    ]
}
```

While most cloud providers support this feature now, many local development tools and custom installations may not prior to Kubernetes 1.20. To enable this feature, please refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection).

## Configure a limit on downstream connections

By default, Istio (and Envoy) have no limit on the number of downstream connections. This can be exploited by a malicious actor (see [security bulletin 2020-007](/news/security/istio-security-2020-007/)).
To work around this, you must configure an appropriate connection limit for your environment.

### Configure `global_downstream_max_connections` value

The following configuration can be supplied during installation:

```yaml
meshConfig:
  defaultConfig:
    runtimeValues:
      "overload.global_downstream_max_connections": "100000"
```
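If the limit should differ for a specific workload, the same `runtimeValues` override can, as a sketch, be set per pod through the `proxy.istio.io/config` annotation, which accepts `ProxyConfig` overrides. The value `50000` below is an arbitrary example, not a recommendation:

```yaml
metadata:
  annotations:
    proxy.istio.io/config: |
      runtimeValues:
        "overload.global_downstream_max_connections": "50000"
```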
grained policies to allow or deny traffic  For example  you can allow only requests from the  app  namespace to access the  hello world  service      Authorization policies  Istio  authorization   docs concepts security  authorization  plays a critical part in Istio security  It takes effort to configure the correct authorization policies to best protect your clusters  It is important to understand the implications of these configurations as Istio cannot determine the proper authorization for all users  Please follow this section in its entirety       Safer Authorization Policy Patterns       Use default deny patterns  We recommend you define your Istio authorization policies following the default deny pattern to enhance your cluster s security posture  The default deny authorization pattern means your system denies all requests by default  and you define the conditions in which the requests are allowed  In case you miss some conditions  traffic will be unexpectedly denied  instead of traffic being unexpectedly allowed  The latter typically being a security incident while the former may result in a poor user experience  a service outage or will not match your SLO SLA   For example  in the  authorization for HTTP traffic task   docs tasks security authorization authz http    the authorization policy named  allow nothing  makes sure all traffic is denied by default  From there  other authorization policies allow traffic based on specific conditions        Use  ALLOW with positive matching  and  DENY with negative match  patterns  Use the  ALLOW with positive matching  or  DENY with negative matching  patterns whenever possible  These authorization policy patterns are safer because the worst result in the case of policy mismatch is an unexpected 403 rejection instead of an authorization policy bypass   The  ALLOW with positive matching  pattern is to use the  ALLOW  action only with   positive   matching fields  e g   paths    values   and do not use any of the   
negative   matching fields  e g   notPaths    notValues     The  DENY with negative matching  pattern is to use the  DENY  action only with   negative   matching fields  e g   notPaths    notValues   and do not use any of the   positive   matching fields  e g   paths    values     For example  the authorization policy below uses the  ALLOW with positive matching  pattern to allow requests to path   public     apiVersion  security istio io v1 kind  AuthorizationPolicy metadata    name  foo spec    action  ALLOW   rules      to        operation          paths     public     The above policy explicitly lists the allowed path    public    This means the request path must be exactly the same as   public  to allow the request  Any other requests will be rejected by default eliminating the risk of unknown normalization behavior causing policy bypass   The following is an example using the  DENY with negative matching  pattern to achieve the same result    apiVersion  security istio io v1 kind  AuthorizationPolicy metadata    name  foo spec    action  DENY   rules      to        operation          notPaths     public         Understand path normalization in authorization policy  The enforcement point for authorization policies is the Envoy proxy instead of the usual resource access point in the backend application  A policy mismatch happens when the Envoy proxy and the backend application interpret the request differently   A mismatch can lead to either unexpected rejection or a policy bypass  The latter is usually a security incident that needs to be fixed immediately  and it s also why we need path normalization in the authorization policy   For example  consider an authorization policy to reject requests with path   data secret   A request with path   data  secret  will not be rejected because it does not match the path defined in the authorization policy due to the extra forward slash     in the path   The request goes through and later the backend application returns 
the same response that it returns for the path   data secret  because the backend application normalizes the path   data  secret  to   data secret  as it considers the double forward slashes      equivalent to a single forward slash       In this example  the policy enforcement point  Envoy proxy  had a different understanding of the path than the resource access point  backend application   The different understanding caused the mismatch and subsequently the bypass of the authorization policy   This becomes a complicated problem because of the following factors     Lack of a clear standard for the normalization     Backends and frameworks in different layers have their own special normalization     Applications can even have arbitrary normalizations for their own use cases   Istio authorization policy implements built in support of various basic normalization options to help you to better address the problem     Refer to  Guideline on configuring the path normalization option   docs ops best practices security  guideline on configuring the path normalization option    to understand which normalization options you may want to use     Refer to  Customize your system on path normalization   docs ops best practices security  customize your system on path normalization  to   understand the detail of each normalization option     Refer to  Mitigation for unsupported normalization   docs ops best practices security  mitigation for unsupported normalization  for   alternative solutions in case you need any unsupported normalization options       Guideline on configuring the path normalization option       Case 1  You do not need normalization at all  Before diving into the details of configuring normalization  you should first make sure that normalizations are needed   You do not need normalization if you don t use authorization policies or if your authorization policies don t use any  path  fields   You may not need normalization if all your authorization policies follow 
the  safer authorization pattern   docs ops best practices security  safer authorization policy patterns  which  in the worst case  results in unexpected rejection instead of policy bypass        Case 2  You need normalization but not sure which normalization option to use  You need normalization but you have no idea of which option to use  The safest choice is the strictest normalization option that provides the maximum level of normalization in the authorization policy   This is often the case due to the fact that complicated multi layered systems make it practically impossible to figure out what normalization is actually happening to a request beyond the enforcement point   You could use a less strict normalization option if it already satisfies your requirements and you are sure of its implications   For either option  make sure you write both positive and negative tests specifically for your requirements to verify the normalization is working as expected  The tests are useful in catching potential bypass issues caused by a misunderstanding or incomplete knowledge of the normalization happening to your request   Refer to  Customize your system on path normalization   docs ops best practices security  customize your system on path normalization  for more details on configuring the normalization option        Case 3  You need an unsupported normalization option  If you need a specific normalization option that is not supported by Istio yet  please follow  Mitigation for unsupported normalization   docs ops best practices security  mitigation for unsupported normalization  for customized normalization support or create a feature request for the Istio community       Customize your system on path normalization  Istio authorization policies can be based on the URL paths in the HTTP request   Path normalization  a k a   URI normalization   https   en wikipedia org wiki URI normalization  modifies and standardizes the incoming requests  paths  so that the normalized 
paths can be processed in a standard way  Syntactically different paths may be equivalent after path normalization   Istio supports the following normalization schemes on the request paths  before evaluating against the authorization policies and routing the requests     Option   Description   Example                          NONE    No normalization is done  Anything received by Envoy will be forwarded exactly as is to any backend service         2Fa   b  is evaluated by the authorization policies and sent to your service       BASE    This is currently the option used in the  default  installation of Istio  This applies the   normalize path   https   www envoyproxy io docs envoy latest api v3 extensions filters network http connection manager v3 http connection manager proto envoy v3 api field extensions filters network http connection manager v3 httpconnectionmanager normalize path  option on Envoy proxies  which follows  RFC 3986  https   tools ietf org html rfc3986  with extra normalization to convert backslashes to forward slashes      a    b  is normalized to   b     da  is normalized to   da        MERGE SLASHES    Slashes are merged after the  BASE  normalization      a  b  is normalized to   a b        DECODE AND MERGE SLASHES    The most strict setting when you allow all traffic by default  This setting is recommended  with the caveat that you will need to thoroughly test your authorization policies routes   Percent encoded  https   tools ietf org html rfc3986 section 2 1  slash and backslash characters    2F     2f     5C  and   5c   are decoded to     or      before the  MERGE SLASHES  normalization      a 2fb  is normalized to   a b       The configuration is specified via the   pathNormalization    docs reference config istio mesh v1alpha1  MeshConfig ProxyPathNormalization  field in the  mesh config   docs reference config istio mesh v1alpha1      To emphasize  the normalization algorithms are conducted in the following order   1  Percent decode   
2F     2f     5C  and   5c   1  The  RFC 3986  https   tools ietf org html rfc3986  and other normalization implemented by the   normalize path   https   www envoyproxy io docs envoy latest api v3 extensions filters network http connection manager v3 http connection manager proto envoy v3 api field extensions filters network http connection manager v3 httpconnectionmanager normalize path  option in Envoy  1  Merge slashes   While these normalization options represent recommendations from HTTP standards and common industry practices  applications may interpret a URL in any way it chooses to  When using denial policies  ensure that you understand how your application behaves    For a complete list of supported normalizations  please refer to  authorization policy normalization   docs reference config security normalization          Examples of configuration  Ensuring Envoy normalizes request paths to match your backend services  expectation is critical to the security of your system  The following examples can be used as reference for you to configure your system  The normalized URL paths  or the original URL paths if  NONE  is selected  will be   1  Used to check against the authorization policies 1  Forwarded to the backend application    Your application      Choose                      Relies on the proxy to do normalization    BASE    MERGE SLASHES  or  DECODE AND MERGE SLASHES      Normalizes request paths based on  RFC 3986  https   tools ietf org html rfc3986  and does not merge slashes    BASE      Normalizes request paths based on  RFC 3986  https   tools ietf org html rfc3986   merges slashes but does not decode  percent encoded  https   tools ietf org html rfc3986 section 2 1  slashes    MERGE SLASHES      Normalizes request paths based on  RFC 3986  https   tools ietf org html rfc3986   decodes  percent encoded  https   tools ietf org html rfc3986 section 2 1  slashes and merges slashes    DECODE AND MERGE SLASHES      Processes request paths in a way 
### How to configure

You can use `istioctl` to update the [mesh config](/docs/reference/config/istio.mesh.v1alpha1/):

```bash
istioctl upgrade --set meshConfig.pathNormalization.normalization=DECODE_AND_MERGE_SLASHES
```

or by altering your operator overrides file:

```bash
cat <<EOF > iop.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    pathNormalization:
      normalization: DECODE_AND_MERGE_SLASHES
EOF
istioctl install -f iop.yaml
```

Alternatively, if you want to directly edit the mesh config, you can add the [`pathNormalization`](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ProxyPathNormalization) field to the [mesh config](/docs/reference/config/istio.mesh.v1alpha1/), which is the `istio-<REVISION_ID>` configmap in the `istio-system` namespace. For example, if you choose the `DECODE_AND_MERGE_SLASHES` option, you modify the mesh config as follows:

```yaml
apiVersion: v1
data:
  mesh: |-
    ...
    pathNormalization:
      normalization: DECODE_AND_MERGE_SLASHES
    ...
```

### Mitigation for unsupported normalization

This section describes various mitigations for unsupported normalization. These could be useful when you need a specific normalization that is not supported by Istio.

Please make sure you understand the mitigation thoroughly and use it carefully, as some mitigations rely on things that are out of the scope of Istio and are not supported by Istio.

#### Custom normalization logic

You can apply custom normalization logic using the WASM or Lua filter. It is recommended to use the WASM filter because it is officially supported and also used by Istio. You could use the Lua filter for a quick proof-of-concept demo, but we do not recommend using the Lua filter in production because it is not supported by Istio.

#### Example custom normalization (case normalization)

In some environments, it may be useful to have paths in authorization policies compared in a case-insensitive manner.
For example, treating `https://myurl/get` and `https://myurl/GeT` as equivalent. In those cases, the `EnvoyFilter` shown below can be used to insert a Lua filter to normalize the path to lower case. This filter will change both the path used for comparison and the path presented to the application.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: ingress-case-insensitive
  namespace: istio-system
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: INSERT_FIRST
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
          inlineCode: |
            function envoy_on_request(request_handle)
              local path = request_handle:headers():get(":path")
              request_handle:headers():replace(":path", string.lower(path))
            end
```

### Writing Host Match Policies

Istio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway for a host of `example.com` generates a config matching `example.com` and `example.com:*`. However, exact-match authorization policies only match the exact string given for the `hosts` or `notHosts` fields.

[Authorization policy rules](/docs/reference/config/security/authorization-policy/#Rule) matching hosts should be written using prefix matches instead of exact matches. For example, for an `AuthorizationPolicy` matching the Envoy configuration generated for a hostname of `example.com`, you would use `hosts: ["example.com", "example.com:*"]` as shown in the below `AuthorizationPolicy`:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: ingress-host
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: DENY
  rules:
  - to:
    - operation:
        hosts: ["example.com", "example.com:*"]
```
Additionally, the `host` and `notHosts` fields should generally only be used on a gateway for external traffic entering the mesh, and not on sidecars for traffic within the mesh. This is because the sidecar on the server side (where the authorization policy is enforced) does not use the `Host` header when redirecting the request to the application. This makes `host` and `notHosts` meaningless on a sidecar, because a client could reach the application using an explicit IP address and an arbitrary `Host` header instead of the service name.

If you really need to enforce access control based on the `Host` header on sidecars for any reason, follow the [default-deny patterns](/docs/ops/best-practices/security/#use-default-deny-patterns), which would reject the request if the client uses an arbitrary `Host` header.

### Specialized Web Application Firewall (WAF)

Many specialized Web Application Firewall (WAF) products provide additional normalization options. They can be deployed in front of the Istio ingress gateway to normalize requests entering the mesh. The authorization policy will then be enforced on the normalized requests. Please refer to your specific WAF product for configuring the normalization options.

### Feature request to Istio

If you believe Istio should officially support a specific normalization, you can follow the [reporting a vulnerability](/docs/releases/security-vulnerabilities/#reporting-a-vulnerability) page to send a feature request about the specific normalization to the Istio Product Security Work Group for initial evaluation.

Please do not open any issues in public without first contacting the Istio Product Security Work Group, because the issue might be considered a security vulnerability that needs to be fixed in private.

If the Istio Product Security Work Group evaluates the feature request as not a security vulnerability, an issue will be opened in public for further discussion of the feature request.
## Known limitations

This section lists known limitations of the authorization policy.

### Server-first TCP protocols are not supported

Server-first TCP protocols mean the server application will send the first bytes right after accepting the TCP connection, before receiving any data from the client.

Currently, the authorization policy only supports enforcing access control on inbound traffic, not outbound traffic.

It also does not support server-first TCP protocols, because the first bytes are sent by the server application before it has received any data from the client. In this case, the initial bytes sent by the server are returned to the client directly, without going through the access control check of the authorization policy.

You should not use the authorization policy if the first bytes sent by a server-first TCP protocol include any sensitive data that needs to be protected by proper authorization.

You could still use the authorization policy in this case if the first bytes do not include any sensitive data, for example, if the first bytes are used for negotiating the connection with data that is publicly accessible to any client. The authorization policy will work as usual for the subsequent requests sent by the client after the first bytes.

### Understand traffic capture limitations

The Istio sidecar works by capturing both inbound and outbound traffic and directing it through the sidecar proxy.

However, not *all* traffic is captured:

* Redirection only handles TCP-based traffic. Any UDP or ICMP packets will not be captured or modified.
* Inbound capture is disabled on many [ports used by the sidecar](/docs/ops/deployment/application-requirements/#ports-used-by-istio) as well as port 22. This list can be expanded by options like `traffic.sidecar.istio.io/excludeInboundPorts`.
* Outbound capture may similarly be reduced through settings like `traffic.sidecar.istio.io/excludeOutboundPorts` or other means.
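As a quick illustration of the exclusion options mentioned above, the pod snippet below (workload name and ports are hypothetical) applies both annotations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app   # hypothetical workload
  annotations:
    # Inbound traffic on these ports bypasses the sidecar entirely,
    # so no Istio policy or telemetry applies to it.
    traffic.sidecar.istio.io/excludeInboundPorts: "8081"
    # Outbound connections to these ports also skip the proxy.
    traffic.sidecar.istio.io/excludeOutboundPorts: "5432"
spec:
  containers:
  - name: app
    image: legacy-app:latest
```

Keep in mind that traffic on excluded ports is invisible to Istio, so authorization policies and mutual TLS cannot protect it.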
In general, there is a minimal security boundary between an application and its sidecar proxy. Configuration of the sidecar is allowed on a per-pod basis, and both run in the same network/process namespace. As such, the application may have the ability to remove redirection rules and remove, alter, terminate, or replace the sidecar proxy. This allows a pod to intentionally bypass its sidecar for outbound traffic or intentionally allow inbound traffic to bypass its sidecar.

As a result, it is not secure to rely on all traffic being captured unconditionally by Istio. Instead, the security boundary is that a client may not bypass *another* pod's sidecar.

For example, if I run the `reviews` application on port `9080`, I can assume that all traffic from the `productpage` application will be captured by the sidecar proxy, where Istio authentication and authorization policies may apply.

### Defense in depth with `NetworkPolicy`

To further secure traffic, Istio policies can be layered with [Kubernetes Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/). This enables a strong [defense in depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)) strategy that can be used to further strengthen the security of your mesh.

For example, you may choose to only allow traffic to port `9080` of our `reviews` application. In the event of a compromised pod or security vulnerability in the cluster, this may limit or stop an attacker's progress.

Depending on the actual implementation, changes to network policy may not affect existing connections in the Istio proxies. You may need to restart the Istio proxies after applying the policy so that existing connections will be closed and new connections will be subject to the new policy.
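The `reviews` example above can be sketched as a Kubernetes `NetworkPolicy` (the policy name is hypothetical; the `app: reviews` label follows the Bookinfo sample's convention):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: reviews-ingress   # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: reviews
  policyTypes:
  - Ingress
  ingress:
  - ports:
    # Only port 9080 is reachable; other inbound traffic is dropped at the
    # network layer, even if the pod's sidecar is bypassed or removed.
    - protocol: TCP
      port: 9080
```

Because the network plugin enforces this independently of the sidecar, it holds even when the Istio traffic capture limitations described above apply.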
### Securing egress traffic

A common misconception is that options like [`outboundTrafficPolicy: REGISTRY_ONLY`](/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services) act as a security policy preventing all access to undeclared services. However, as mentioned above, this is not a strong security boundary and should be considered best-effort.

While this is useful to prevent accidental dependencies, if you want to secure egress traffic and enforce that all outbound traffic goes through a proxy, you should instead rely on an [Egress Gateway](/docs/tasks/traffic-management/egress/egress-gateway/). When combined with a [Network Policy](/docs/tasks/traffic-management/egress/egress-gateway/#apply-kubernetes-network-policies), you can enforce that all traffic, or some subset, goes through the egress gateway. This ensures that even if a client accidentally or maliciously bypasses its sidecar, the request will be blocked.

### Configure TLS verification in Destination Rule when using TLS origination

Istio offers the ability to [originate TLS](/docs/tasks/traffic-management/egress/egress-tls-origination/) from a sidecar proxy or gateway. This enables applications that send plaintext HTTP traffic to be transparently "upgraded" to HTTPS.

Care must be taken when configuring the `DestinationRule`'s `tls` setting to specify the `caCertificates`, `subjectAltNames`, and `sni` fields. The `caCertificates` can be automatically set from the system certificate store's CA certificate by enabling the environment variable `VERIFY_CERTIFICATE_AT_CLIENT=true` on Istiod. If the automatically-used Operating System CA certificate is only desired for select hosts, set the environment variable `VERIFY_CERTIFICATE_AT_CLIENT=false` on Istiod and set `caCertificates` to `system` in the desired `DestinationRule`s. Specifying `caCertificates` in a `DestinationRule` takes priority, and the OS CA certificate will not be used.

By default, egress traffic does not send SNI during the TLS handshake. SNI must be set in the `DestinationRule` to ensure the host can properly handle the request.
In order to verify the server's certificate, it is important that both `caCertificates` and `subjectAltNames` be set. Verification of the certificate presented by the server against a CA is not sufficient, as the Subject Alternative Names must also be validated. If `VERIFY_CERTIFICATE_AT_CLIENT` is set but `subjectAltNames` is not set, then you are not verifying all credentials. If no CA certificate is being used, `subjectAltNames` will not be used regardless of whether it is set.

For example:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: google-tls
spec:
  host: google.com
  trafficPolicy:
    tls:
      mode: SIMPLE
      caCertificates: /etc/ssl/certs/ca-certificates.crt
      subjectAltNames:
      - "google.com"
      sni: "google.com"
```

## Gateways

When running an Istio [gateway](/docs/tasks/traffic-management/ingress/), there are a few resources involved:

* `Gateway`s, which control the ports and TLS settings for the gateway.
* `VirtualService`s, which control the routing logic. These are associated with `Gateway`s by direct reference in the `gateways` field and a mutual agreement on the `hosts` field in the `Gateway` and `VirtualService`.

### Restrict `Gateway` creation privileges

It is recommended to restrict creation of Gateway resources to trusted cluster administrators. This can be achieved by [Kubernetes RBAC policies](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) or tools like [Open Policy Agent](https://www.openpolicyagent.org/).

### Avoid overly broad `hosts` configurations

When possible, avoid overly broad `hosts` settings in `Gateway`.

For example, this configuration will allow any `VirtualService` to bind to the `Gateway`, potentially exposing unexpected domains:

```yaml
servers:
- port:
    number: 80
    name: http
    protocol: HTTP
  hosts:
  - "*"
```

This should be locked down to allow only specific domains or specific namespaces:
```yaml
servers:
- port:
    number: 80
    name: http
    protocol: HTTP
  hosts:
  - "foo.example.com" # Allow only VirtualServices that are for foo.example.com
  - "default/bar.example.com" # Allow only VirtualServices in the default namespace that are for bar.example.com
  - "route-namespace/*" # Allow only VirtualServices in the route-namespace namespace for any host
```

### Isolate sensitive services

It may be desired to enforce stricter physical isolation for sensitive services. For example, you may want to run a [dedicated gateway instance](/docs/setup/install/istioctl/#configure-gateways) for a sensitive `payments.example.com`, while utilizing a single shared gateway instance for less sensitive domains like `blog.example.com` and `store.example.com`. This can offer a stronger defense-in-depth and help meet certain regulatory compliance guidelines.

### Explicitly disable all sensitive HTTP hosts under relaxed SNI host matching

It is reasonable to use multiple `Gateway`s to define mutual TLS and simple TLS on different hosts. For example, use mutual TLS for SNI host `admin.example.com` and simple TLS for SNI host `*.example.com`:

```yaml
kind: Gateway
metadata:
  name: guestgateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*.example.com"
    tls:
      mode: SIMPLE
---
kind: Gateway
metadata:
  name: admingateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - admin.example.com
    tls:
      mode: MUTUAL
```

If the above is necessary, it is highly recommended to explicitly disable the HTTP host `admin.example.com` in the `VirtualService` that attaches to `*.example.com`. The reason is that currently the underlying Envoy proxy [does not require](https://github.com/envoyproxy/envoy/issues/6767) that the HTTP/1 `Host` header or the HTTP/2 `:authority` pseudo-header follow the SNI constraints, so an attacker can reuse the guest SNI TLS connection to access the admin `VirtualService`.
The HTTP response code 421 is designed for this `Host`/SNI mismatch and can be used to implement the disable:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: disable-sensitive
spec:
  hosts:
  - "admin.example.com"
  gateways:
  - guestgateway
  http:
  - match:
    - uri:
        prefix: /
    fault:
      abort:
        percentage:
          value: 100
        httpStatus: 421
    route:
    - destination:
        port:
          number: 8000
        host: dest.default.cluster.local
```

## Protocol detection

Istio will [automatically determine the protocol](/docs/ops/configuration/traffic-management/protocol-selection/#automatic-protocol-selection) of traffic it sees. To avoid accidental or intentional mis-detection, which may result in unexpected traffic behavior, it is recommended to [explicitly declare the protocol](/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection) where possible.

## CNI

In order to transparently capture all traffic, Istio relies on `iptables` rules configured by the `istio-init` `initContainer`. This adds a [requirement](/docs/ops/deployment/application-requirements/) for the `NET_ADMIN` and `NET_RAW` [capabilities](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container) to be available to the pod.

To reduce privileges granted to pods, Istio offers a [CNI plugin](/docs/setup/additional-setup/cni/), which removes this requirement.

## Use hardened docker images

Istio's default docker images, including those run by the control plane, gateway, and sidecar proxies, are based on `ubuntu`. This provides various tools such as `bash` and `curl`, which trades off convenience for an increased attack surface. Istio also offers a smaller image based on [distroless images](/docs/ops/configuration/security/harden-docker-images/) that reduces the dependencies in the image.
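The hardening guide linked above covers switching to the distroless variant; a minimal operator-override sketch is shown below (the `values.global.variant` field is assumed from recent Istio releases, so check the guide for the exact option supported by your version):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      # Assumed field: selects the distroless image variant for all
      # Istio components instead of the default ubuntu-based images.
      variant: distroless
```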
Distroless images are currently an alpha feature.

## Release and security policy

In order to ensure your cluster has the latest security patches for known vulnerabilities, it is important to stay on the latest patch release of Istio and ensure that you are on a [supported release](/docs/releases/supported-releases/) that is still receiving security patches.

## Detect invalid configurations

While Istio provides validation of resources when they are created, these checks cannot catch all issues preventing configuration from being distributed in the mesh. This could result in applying a policy that is unexpectedly ignored, leading to unexpected results.

* Run `istioctl analyze` before or after applying configuration to ensure it is valid.
* Monitor the control plane for rejected configurations. These are exposed by the `pilot_total_xds_rejects` metric, in addition to logs.
* Test your configuration to ensure it gives the expected results. For a security policy, it is useful to run positive and negative tests to ensure you do not accidentally restrict too much or too little traffic.

## Avoid alpha and experimental features

All Istio features and APIs are assigned a [feature status](/docs/releases/feature-stages/), defining their stability, deprecation policy, and security policy.

Because alpha and experimental features do not have as strong security guarantees, it is recommended to avoid them whenever possible. Security issues found in these features may not be fixed immediately or may otherwise not follow our standard [security vulnerability](/docs/releases/security-vulnerabilities/) process.

To determine the feature status of features in use in your cluster, consult the [Istio features](/docs/releases/feature-stages/#istio-features) list.

## Lock down ports

Istio configures a [variety of ports](/docs/ops/deployment/application-requirements/#ports-used-by-istio) that may be locked down to improve security.
### Control Plane

Istiod exposes a few unauthenticated plaintext ports for convenience by default. If desired, these can be closed:

* Port `8080` exposes the debug interface, which offers read access to a variety of details about the cluster's state. This can be disabled by setting the environment variable `ENABLE_DEBUG_ON_HTTP=false` on Istiod. Warning: many `istioctl` commands depend on this interface and will not function if it is disabled.
* Port `15010` exposes the XDS service over plaintext. This can be disabled by adding the `--grpcAddr=""` flag to the Istiod Deployment. Note: highly sensitive services, such as the certificate signing and distribution services, are never served over plaintext.

### Data Plane

The proxy exposes a variety of ports. Exposed externally are port `15090` (telemetry) and port `15021` (health check). Ports `15020` and `15000` provide debugging endpoints. These are exposed over `localhost` only. As a result, the applications running in the same pod as the proxy have access; there is no trust boundary between the sidecar and application.

## Configure third party service account tokens

To authenticate with the Istio control plane, the Istio proxy will use a Service Account token. Kubernetes supports two forms of these tokens:

* Third party tokens, which have a scoped audience and expiration.
* First party tokens, which have no expiration and are mounted into all pods.

Because the properties of the first party token are less secure, Istio will default to using third party tokens. However, this feature is not enabled on all Kubernetes platforms.

If you are using `istioctl` to install, support will be automatically detected. This can also be done manually, and configured by passing `--set values.global.jwtPolicy=third-party-jwt` or `--set values.global.jwtPolicy=first-party-jwt`.

To determine if your cluster supports third party tokens, look for the `TokenRequest` API.
If this returns no response, then the feature is not supported:

```bash
$ kubectl get --raw /api/v1 | jq '.resources[] | select(.name | index("serviceaccounts/token"))'
{
    "name": "serviceaccounts/token",
    "singularName": "",
    "namespaced": true,
    "group": "authentication.k8s.io",
    "version": "v1",
    "kind": "TokenRequest",
    "verbs": [
        "create"
    ]
}
```

While most cloud providers support this feature now, many local development tools and custom installations may not prior to Kubernetes 1.20. To enable this feature, please refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection).

## Configure a limit on downstream connections

By default, Istio (and Envoy) have no limit on the number of downstream connections. This can be exploited by a malicious actor (see [security bulletin 2020-007](/news/security/istio-security-2020-007/)). To work around this, you must configure an appropriate connection limit for your environment.

### Configure `global_downstream_max_connections` value

The following configuration can be supplied during installation:

```yaml
meshConfig:
  defaultConfig:
    runtimeValues:
      "overload.global_downstream_max_connections": "100000"
```
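Since `defaultConfig` holds mesh-wide `ProxyConfig` defaults, the same limit can also be applied per workload through the `proxy.istio.io/config` annotation, which overrides `ProxyConfig` for a single pod. A sketch (pod name and the limit value are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: edge-workload   # hypothetical pod
  annotations:
    # Per-pod ProxyConfig override; only this proxy gets the tighter limit.
    proxy.istio.io/config: |
      runtimeValues:
        "overload.global_downstream_max_connections": "10000"
spec:
  containers:
  - name: app
    image: app:latest
```

A per-workload override like this is useful when an internet-facing gateway needs a much tighter limit than the rest of the mesh.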
{"questions":"istio owner istio wg policies and telemetry maintainers Using Prometheus for production scale monitoring test no title Observability Best Practices forceinlinetoc true Best practices for observing applications using Istio weight 50","answers":"---\ntitle: Observability Best Practices\ndescription: Best practices for observing applications using Istio.\nforce_inline_toc: true\nweight: 50\nowner: istio\/wg-policies-and-telemetry-maintainers\ntest: no\n---\n\n## Using Prometheus for production-scale monitoring\n\nThe recommended approach for production-scale monitoring of Istio meshes with Prometheus\nis to use [hierarchical federation](https:\/\/prometheus.io\/docs\/prometheus\/latest\/federation\/#hierarchical-federation)\nin combination with a collection of [recording rules](https:\/\/prometheus.io\/docs\/prometheus\/latest\/configuration\/recording_rules\/).\n\nAlthough installing Istio does not deploy [Prometheus](http:\/\/prometheus.io) by default, the\n[Getting Started](\/docs\/setup\/getting-started\/) instructions install the `Option 1: Quick Start` deployment\nof Prometheus described in the [Prometheus integration guide](\/docs\/ops\/integrations\/prometheus\/).\nThis deployment of Prometheus is intentionally configured with a very short retention window (6 hours). 
The\nquick-start Prometheus deployment is also configured to collect metrics from each Envoy proxy\nrunning in the mesh, augmenting each metric with a set of labels about their origin (`instance`,\n`pod`, and `namespace`).\n\n\n\n### Workload-level aggregation via recording rules\n\nIn order to aggregate metrics across instances and pods, update the default Prometheus configuration with\nthe following recording rules:\n\n\n\n\n\n\ngroups:\n- name: \"istio.recording-rules\"\n  interval: 5s\n  rules:\n  - record: \"workload:istio_requests_total\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_requests_total)\n\n  - record: \"workload:istio_request_duration_milliseconds_count\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_duration_milliseconds_count)\n\n  - record: \"workload:istio_request_duration_milliseconds_sum\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_duration_milliseconds_sum)\n\n  - record: \"workload:istio_request_duration_milliseconds_bucket\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_duration_milliseconds_bucket)\n\n  - record: \"workload:istio_request_bytes_count\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_bytes_count)\n\n  - record: \"workload:istio_request_bytes_sum\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_bytes_sum)\n\n  - record: \"workload:istio_request_bytes_bucket\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_bytes_bucket)\n\n  - record: \"workload:istio_response_bytes_count\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_response_bytes_count)\n\n  - record: \"workload:istio_response_bytes_sum\"\n    expr: |\n      sum 
without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_response_bytes_sum)\n\n  - record: \"workload:istio_response_bytes_bucket\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_response_bytes_bucket)\n\n  - record: \"workload:istio_tcp_sent_bytes_total\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_tcp_sent_bytes_total)\n\n  - record: \"workload:istio_tcp_received_bytes_total\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_tcp_received_bytes_total)\n\n  - record: \"workload:istio_tcp_connections_opened_total\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_tcp_connections_opened_total)\n\n  - record: \"workload:istio_tcp_connections_closed_total\"\n    expr: |\n      sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_tcp_connections_closed_total)\n\n\n\n\n\n\n\napiVersion: monitoring.coreos.com\/v1\nkind: PrometheusRule\nmetadata:\n  name: istio-metrics-aggregation\n  labels:\n    app.kubernetes.io\/name: istio-prometheus\nspec:\n  groups:\n  - name: \"istio.metricsAggregation-rules\"\n    interval: 5s\n    rules:\n    - record: \"workload:istio_requests_total\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_requests_total)\"\n\n    - record: \"workload:istio_request_duration_milliseconds_count\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_duration_milliseconds_count)\"\n    - record: \"workload:istio_request_duration_milliseconds_sum\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_duration_milliseconds_sum)\"\n    - record: \"workload:istio_request_duration_milliseconds_bucket\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_duration_milliseconds_bucket)\"\n\n    - 
record: \"workload:istio_request_bytes_count\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_bytes_count)\"\n    - record: \"workload:istio_request_bytes_sum\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_bytes_sum)\"\n    - record: \"workload:istio_request_bytes_bucket\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_request_bytes_bucket)\"\n\n    - record: \"workload:istio_response_bytes_count\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_response_bytes_count)\"\n    - record: \"workload:istio_response_bytes_sum\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_response_bytes_sum)\"\n    - record: \"workload:istio_response_bytes_bucket\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_response_bytes_bucket)\"\n\n    - record: \"workload:istio_tcp_sent_bytes_total\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_tcp_sent_bytes_total)\"\n    - record: \"workload:istio_tcp_received_bytes_total\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_tcp_received_bytes_total)\"\n    - record: \"workload:istio_tcp_connections_opened_total\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_tcp_connections_opened_total)\"\n    - record: \"workload:istio_tcp_connections_closed_total\"\n      expr: \"sum without(instance, kubernetes_namespace, kubernetes_pod_name) (istio_tcp_connections_closed_total)\"\n\n\n\n\n\n\n\nThe recording rules above only aggregate across pods and instances. They still preserve the full set of\n[Istio Standard Metrics](\/docs\/reference\/config\/metrics\/), including all Istio dimensions. 
While this\nwill help with controlling metrics cardinality via federation, you may want to further optimize the recording rules\nto match your existing dashboards, alerts, and ad-hoc queries.\n\nFor more information on tailoring your recording rules, see the section on\n[Optimizing metrics collection with recording rules](\/docs\/ops\/best-practices\/observability\/#optimizing-metrics-collection-with-recording-rules).\n\n\n### Federation using workload-level aggregated metrics\n\nTo establish Prometheus federation, modify the configuration of your production-ready deployment of Prometheus to\nscrape the federation endpoint of the Istio Prometheus.\n\nAdd the following job to your configuration:\n\n\n- job_name: 'istio-prometheus'\n  honor_labels: true\n  metrics_path: '\/federate'\n  kubernetes_sd_configs:\n  - role: pod\n    namespaces:\n      names: ['istio-system']\n  metric_relabel_configs:\n  - source_labels: [__name__]\n    regex: 'workload:(.*)'\n    target_label: __name__\n    action: replace\n  params:\n    'match[]':\n    - '{__name__=~\"workload:(.*)\"}'\n    - '{__name__=~\"pilot(.*)\"}'\n\n\nIf you are using the [Prometheus Operator](https:\/\/github.com\/coreos\/prometheus-operator), use the following configuration instead:\n\n\napiVersion: monitoring.coreos.com\/v1\nkind: ServiceMonitor\nmetadata:\n  name: istio-federation\n  labels:\n    app.kubernetes.io\/name: istio-prometheus\nspec:\n  namespaceSelector:\n    matchNames:\n    - istio-system\n  selector:\n    matchLabels:\n      app: prometheus\n  endpoints:\n  - interval: 30s\n    scrapeTimeout: 30s\n    params:\n      'match[]':\n      - '{__name__=~\"workload:(.*)\"}'\n      - '{__name__=~\"pilot(.*)\"}'\n    path: \/federate\n    targetPort: 9090\n    honorLabels: true\n    metricRelabelings:\n    - sourceLabels: [\"__name__\"]\n      regex: 'workload:(.*)'\n      targetLabel: \"__name__\"\n      action: replace\n\n\n\nThe key to the federation configuration is matching on the job in the 
Istio-deployed Prometheus that is collecting\n[Istio Standard Metrics](\/docs\/reference\/config\/metrics\/) and renaming any metrics collected by removing\nthe prefix used in the workload-level recording rules (`workload:`). This will allow existing dashboards and\nqueries to seamlessly continue working when pointed at the production Prometheus instance (and away from the Istio instance).\n\nYou can also include additional metrics (for example, envoy, go, etc.) when setting up federation.\n\nControl plane metrics are also collected and federated up to the production Prometheus.\n\n\n### Optimizing metrics collection with recording rules\n\nBeyond just using recording rules to [aggregate over pods and instances](#workload-level-aggregation-via-recording-rules), you may\nwant to use recording rules to generate aggregated metrics tailored specifically to your existing dashboards and alerts. Optimizing\nyour collection in this manner can result in large savings in resource consumption in your production instance of Prometheus, in\naddition to faster query performance.\n\nFor example, imagine a custom monitoring dashboard that used the following Prometheus queries:\n\n* Total rate of requests averaged over the past minute by destination service name and namespace\n\n    \n    sum(irate(istio_requests_total{reporter=\"source\"}[1m]))\n    by (\n        destination_canonical_service,\n        destination_workload_namespace\n    )\n    \n\n* P95 client latency averaged over the past minute by source and destination service names and namespace\n\n    \n    histogram_quantile(0.95,\n      sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"source\"}[1m]))\n      by (\n        destination_canonical_service,\n        destination_workload_namespace,\n        source_canonical_service,\n        source_workload_namespace,\n        le\n      )\n    )\n    \n\nThe following set of recording rules could be added to the Istio Prometheus configuration, using the `istio` 
prefix\nto make identifying these metrics for federation simple.\n\n\ngroups:\n- name: \"istio.recording-rules\"\n  interval: 5s\n  rules:\n  - record: \"istio:istio_requests:by_destination_service:rate1m\"\n    expr: |\n      sum(irate(istio_requests_total{reporter=\"destination\"}[1m]))\n      by (\n        destination_canonical_service,\n        destination_workload_namespace\n      )\n  - record: \"istio:istio_request_duration_milliseconds_bucket:p95:rate1m\"\n    expr: |\n      histogram_quantile(0.95,\n        sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"source\"}[1m]))\n        by (\n          destination_canonical_service,\n          destination_workload_namespace,\n          source_canonical_service,\n          source_workload_namespace,\n          le\n        )\n      )\n\n\nThe production instance of Prometheus would then be updated to federate from the Istio instance with:\n\n* match clause of `{__name__=~\"istio:(.*)\"}`\n\n* metric relabeling config with: `regex: \"istio:(.*)\"`\n\nThe original queries would then be replaced with:\n\n* `istio_requests:by_destination_service:rate1m`\n\n* `avg(istio_request_duration_milliseconds_bucket:p95:rate1m)`\n\n\nA detailed write-up on [metrics collection optimization in production at AutoTrader](https:\/\/karlstoney.com\/2020\/02\/25\/federated-prometheus-to-reduce-metric-cardinality\/)\nprovides a more fleshed out example of aggregating directly to the queries that power dashboards and alerts.\n","site":"istio","answers_cleaned":"    title  Observability Best Practices description  Best practices for observing applications using Istio  force inline toc  true weight  50 owner  istio wg policies and telemetry maintainers test  no         Using Prometheus for production scale monitoring  The recommended approach for production scale monitoring of Istio meshes with Prometheus is to use  hierarchical federation  https   prometheus io docs prometheus latest federation  hierarchical federation  in 
combination with a collection of  recording rules  https   prometheus io docs prometheus latest configuration recording rules     Although installing Istio does not deploy  Prometheus  http   prometheus io  by default  the  Getting Started   docs setup getting started   instructions install the  Option 1  Quick Start  deployment of Prometheus described in the  Prometheus integration guide   docs ops integrations prometheus    This deployment of Prometheus is intentionally configured with a very short retention window  6 hours   The quick start Prometheus deployment is also configured to collect metrics from each Envoy proxy running in the mesh  augmenting each metric with a set of labels about their origin   instance    pod   and  namespace           Workload level aggregation via recording rules  In order to aggregate metrics across instances and pods  update the default Prometheus configuration with the following recording rules        groups    name   istio recording rules    interval  5s   rules      record   workload istio requests total      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio requests total       record   workload istio request duration milliseconds count      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio request duration milliseconds count       record   workload istio request duration milliseconds sum      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio request duration milliseconds sum       record   workload istio request duration milliseconds bucket      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio request duration milliseconds bucket       record   workload istio request bytes count      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio request bytes count       record   workload istio request bytes sum      expr          sum without instance  
kubernetes namespace  kubernetes pod name   istio request bytes sum       record   workload istio request bytes bucket      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio request bytes bucket       record   workload istio response bytes count      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio response bytes count       record   workload istio response bytes sum      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio response bytes sum       record   workload istio response bytes bucket      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio response bytes bucket       record   workload istio tcp sent bytes total      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio tcp sent bytes total       record   workload istio tcp received bytes total      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio tcp received bytes total       record   workload istio tcp connections opened total      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio tcp connections opened total       record   workload istio tcp connections closed total      expr          sum without instance  kubernetes namespace  kubernetes pod name   istio tcp connections closed total         apiVersion  monitoring coreos com v1 kind  PrometheusRule metadata    name  istio metrics aggregation   labels      app kubernetes io name  istio prometheus spec    groups      name   istio metricsAggregation rules      interval  5s     rules        record   workload istio requests total        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio requests total          record   workload istio request duration milliseconds count        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio request duration milliseconds count  
       record   workload istio request duration milliseconds sum        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio request duration milliseconds sum         record   workload istio request duration milliseconds bucket        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio request duration milliseconds bucket          record   workload istio request bytes count        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio request bytes count         record   workload istio request bytes sum        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio request bytes sum         record   workload istio request bytes bucket        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio request bytes bucket          record   workload istio response bytes count        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio response bytes count         record   workload istio response bytes sum        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio response bytes sum         record   workload istio response bytes bucket        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio response bytes bucket          record   workload istio tcp sent bytes total        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio tcp sent bytes total         record   workload istio tcp received bytes total        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio tcp received bytes total         record   workload istio tcp connections opened total        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio tcp connections opened total         record   workload istio tcp connections closed total        expr   sum without instance  kubernetes namespace  kubernetes pod name   istio tcp connections 
closed total          The recording rules above only aggregate across pods and instances  They still preserve the full set of  Istio Standard Metrics   docs reference config metrics    including all Istio dimensions  While this will help with controlling metrics cardinality via federation  you may want to further optimize the recording rules to match your existing dashboards  alerts  and ad hoc queries   For more information on tailoring your recording rules  see the section on  Optimizing metrics collection with recording rules   docs ops best practices observability  optimizing metrics collection with recording rules         Federation using workload level aggregated metrics  To establish Prometheus federation  modify the configuration of your production ready deployment of Prometheus to scrape the federation endpoint of the Istio Prometheus   Add the following job to your configuration      job name   istio prometheus    honor labels  true   metrics path    federate    kubernetes sd configs      role  pod     namespaces        names    istio system     metric relabel configs      source labels     name        regex   workload           target label    name       action  replace   params       match               name     workload                   name     pilot          If you are using the  Prometheus Operator  https   github com coreos prometheus operator   use the following configuration instead    apiVersion  monitoring coreos com v1 kind  ServiceMonitor metadata    name  istio federation   labels      app kubernetes io name  istio prometheus spec    namespaceSelector      matchNames        istio system   selector      matchLabels        app  prometheus   endpoints      interval  30s     scrapeTimeout  30s     params         match                 name     workload                     name     pilot            path   federate     targetPort  9090     honorLabels  true     metricRelabelings        sourceLabels      name           regex   workload             
targetLabel     name          action  replace    The key to the federation configuration is matching on the job in the Istio deployed Prometheus that is collecting  Istio Standard Metrics   docs reference config metrics   and renaming any metrics collected by removing the prefix used in the workload level recording rules   workload     This will allow existing dashboards and queries to seamlessly continue working when pointed at the production Prometheus instance  and away from the Istio instance    You can also include additional metrics  for example  envoy  go  etc   when setting up federation   Control plane metrics are also collected and federated up to the production Prometheus        Optimizing metrics collection with recording rules  Beyond just using recording rules to  aggregate over pods and instances   workload level aggregation via recording rules   you may want to use recording rules to generate aggregated metrics tailored specifically to your existing dashboards and alerts  Optimizing your collection in this manner can result in large savings in resource consumption in your production instance of Prometheus  in addition to faster query performance   For example  imagine a custom monitoring dashboard that used the following Prometheus queries     Total rate of requests averaged over the past minute by destination service name and namespace           sum irate istio requests total reporter  source   1m        by           destination canonical service          destination workload namespace               P95 client latency averaged over the past minute by source and destination service names and namespace           histogram quantile 0 95        sum irate istio request duration milliseconds bucket reporter  source   1m          by           destination canonical service          destination workload namespace          source canonical service          source workload namespace          le                     The following set of recording rules could be 
added to the Istio Prometheus configuration  using the  istio  prefix to make identifying these metrics for federation simple    groups    name   istio recording rules    interval  5s   rules      record   istio istio requests by destination service rate1m      expr          sum irate istio requests total reporter  destination   1m          by           destination canonical service          destination workload namespace             record   istio istio request duration milliseconds bucket p95 rate1m      expr          histogram quantile 0 95          sum irate istio request duration milliseconds bucket reporter  source   1m            by             destination canonical service            destination workload namespace            source canonical service            source workload namespace            le                     The production instance of Prometheus would then be updated to federate from the Istio instance with     match clause of     name     istio            metric relabeling config with   regex   istio         The original queries would then be replaced with      istio requests by destination service rate1m      avg istio request duration milliseconds bucket p95 rate1m     A detailed write up on  metrics collection optimization in production at AutoTrader  https   karlstoney com 2020 02 25 federated prometheus to reduce metric cardinality   provides a more fleshed out example of aggregating directly to the queries that power dashboards and alerts  "}
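The federation configurations above rename metrics at scrape time by stripping the recording-rule prefix (`workload:` or `istio:`) from `__name__`. As a minimal illustrative sketch (Python here, not Prometheus code), the `metric_relabel_configs` step with `regex: 'workload:(.*)'`, `target_label: __name__`, and `action: replace` behaves like this — Prometheus anchors the regex against the full label value and, with the default replacement `$1`, rewrites the name to the first capture group:

```python
import re

def relabel_metric_name(name: str) -> str:
    """Sketch of the federation relabeling: strip the 'workload:' prefix
    from matching series names; leave non-matching names untouched."""
    # Prometheus relabel regexes are fully anchored, hence fullmatch.
    match = re.fullmatch(r"workload:(.*)", name)
    return match.group(1) if match else name

print(relabel_metric_name("workload:istio_requests_total"))  # -> istio_requests_total
print(relabel_metric_name("pilot_xds_pushes"))               # -> pilot_xds_pushes (unchanged)
```

This is why existing dashboards keep working against the production Prometheus: the federated series arrive under their original names, while control-plane (`pilot*`) series pass through the match clause without renaming.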
{"questions":"istio docs ops traffic management deploy guidelines title Traffic Management Best Practices weight 20 owner istio wg networking maintainers aliases help ops traffic management deploy guidelines forceinlinetoc true Configuration best practices to avoid networking or traffic management issues help ops deploy guidelines","answers":"---\ntitle: Traffic Management Best Practices\ndescription: Configuration best practices to avoid networking or traffic management issues.\nforce_inline_toc: true\nweight: 20\naliases:\n  - \/help\/ops\/traffic-management\/deploy-guidelines\n  - \/help\/ops\/deploy-guidelines\n  - \/docs\/ops\/traffic-management\/deploy-guidelines\nowner: istio\/wg-networking-maintainers\ntest: n\/a\n---\n\nThis section provides specific deployment or configuration guidelines to avoid networking or traffic management issues.\n\n## Set default routes for services\n\nAlthough the default Istio behavior conveniently sends traffic from any\nsource to all versions of a destination service without any rules being set,\ncreating a `VirtualService` with a default route for every service,\nright from the start, is generally considered a best practice in Istio.\n\nEven if you initially have only one version of a service, as soon as you decide\nto deploy a second version, you need to have a routing rule in place **before**\nthe new version is started, to prevent it from immediately receiving traffic\nin an uncontrolled way.\n\nAnother potential issue when relying on Istio's default round-robin routing is\ndue to a subtlety in Istio's destination rule evaluation algorithm.\nWhen routing a request, Envoy first evaluates route rules in virtual services\nto determine if a particular subset is being routed to.\nIf so, only then will it activate any destination rule policies corresponding to the subset.\nConsequently, Istio only applies the policies you define for specific subsets if\nyou **explicitly** routed traffic to the corresponding subset.\n\nFor 
example, consider the following destination rule as the one and only configuration defined for the\n*reviews* service, that is, there are no route rules in a corresponding `VirtualService` definition:\n\n\napiVersion: networking.istio.io\/v1\nkind: DestinationRule\nmetadata:\n  name: reviews\nspec:\n  host: reviews\n  subsets:\n  - name: v1\n    labels:\n      version: v1\n    trafficPolicy:\n      connectionPool:\n        tcp:\n          maxConnections: 100\n\n\nEven if Istio\u2019s default round-robin routing calls \"v1\" instances on occasion,\nmaybe even always if \"v1\" is the only running version, the above traffic policy\nwill never be invoked.\n\nYou can fix the above example in one of two ways. You can either move the\ntraffic policy up a level in the `DestinationRule` to make it apply to any version:\n\n\napiVersion: networking.istio.io\/v1\nkind: DestinationRule\nmetadata:\n  name: reviews\nspec:\n  host: reviews\n  trafficPolicy:\n    connectionPool:\n      tcp:\n        maxConnections: 100\n  subsets:\n  - name: v1\n    labels:\n      version: v1\n\n\nOr, better yet, define a proper route rule for the service in the `VirtualService` definition.\nFor example, add a simple route rule for \"reviews:v1\":\n\n\napiVersion: networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: reviews\nspec:\n  hosts:\n  - reviews\n  http:\n  - route:\n    - destination:\n        host: reviews\n        subset: v1\n\n\n## Control configuration sharing across namespaces {#cross-namespace-configuration}\n\nYou can define virtual services, destination rules, or service entries\nin one namespace and then reuse them in other namespaces, if they are exported\nto those namespaces.\nIstio exports all traffic management resources to all namespaces by default,\nbut you can override the visibility with the `exportTo` field.\nFor example, only requests from workloads in the same namespace can be affected by the following virtual service:\n\n\napiVersion: 
networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: myservice\nspec:\n  hosts:\n  - myservice.com\n  exportTo:\n  - \".\"\n  http:\n  - route:\n    - destination:\n        host: myservice\n\n\n\nYou can similarly control the visibility of a Kubernetes `Service` using the `networking.istio.io\/exportTo` annotation.\n\n\nSetting the visibility of destination rules in a particular namespace doesn't\nguarantee the rule is used. Exporting a destination rule to other namespaces enables you to use it\nin those namespaces, but to actually be applied during a request the namespace also needs to be\non the destination rule lookup path:\n\n1. client namespace\n1. service namespace\n1. the configured `meshconfig.rootNamespace` namespace (`istio-system` by default)\n\nFor example, consider the following destination rule:\n\n\napiVersion: networking.istio.io\/v1\nkind: DestinationRule\nmetadata:\n  name: myservice\nspec:\n  host: myservice.default.svc.cluster.local\n  trafficPolicy:\n    connectionPool:\n      tcp:\n        maxConnections: 100\n\n\nLet's assume you create this destination rule in namespace `ns1`.\n\nIf you send a request to the `myservice` service from a client in `ns1`, the destination\nrule would be applied, because it is in the first namespace on the lookup path, that is,\nin the client namespace.\n\nIf you now send the request from a different namespace, for example `ns2`,\nthe client is no longer in the same namespace as the destination rule, `ns1`.\nBecause the corresponding service, `myservice.default.svc.cluster.local`, is also not in `ns1`,\nbut rather in the `default` namespace, the destination rule will also not be found in\nthe second namespace of the lookup path, the service namespace.\n\nEven if the `myservice` service is exported to all namespaces and therefore visible\nin `ns2` and the destination rule is also exported to all namespaces, including `ns2`,\nit will not be applied during the request from `ns2` because it's not in 
any\nof the namespaces on the lookup path.\n\nYou can avoid this problem by creating the destination rule in the same namespace as\nthe corresponding service, `default` in this example. It would then get applied to requests\nfrom clients in any namespace.\nYou can also move the destination rule to the `istio-system` namespace, the third namespace on\nthe lookup path, although this isn't recommended unless the destination rule is really a global\nconfiguration that is applicable in all namespaces, and it would require administrator authority.\n\nIstio uses this restricted destination rule lookup path for two reasons:\n\n1. Prevent destination rules from being defined that can override the behavior of services\n   in completely unrelated namespaces.\n1. Have a clear lookup order in case there is more than one destination rule for\n   the same host.\n\n## Split large virtual services and destination rules into multiple resources {#split-virtual-services}\n\nIn situations where it is inconvenient to define the complete set of route rules or policies for a particular\nhost in a single `VirtualService` or `DestinationRule` resource, it may be preferable to incrementally specify\nthe configuration for the host in multiple resources.\nThe control plane will merge such destination rules\nand merge such virtual services if they are bound to a gateway.\n\nConsider the case of a `VirtualService` bound to an ingress gateway exposing an application host which uses\npath-based delegation to several implementation services, something like this:\n\n\napiVersion: networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: myapp\nspec:\n  hosts:\n  - myapp.com\n  gateways:\n  - myapp-gateway\n  http:\n  - match:\n    - uri:\n        prefix: \/service1\n    route:\n    - destination:\n        host: service1.default.svc.cluster.local\n  - match:\n    - uri:\n        prefix: \/service2\n    route:\n    - destination:\n        host: service2.default.svc.cluster.local\n  - match:\n  
  ...\n\n\nThe downside of this kind of configuration is that other configuration (e.g., route rules) for any of the\nunderlying microservices will also need to be included in this single configuration file, instead of\nin separate resources associated with, and potentially owned by, the individual service teams.\nSee [Route rules have no effect on ingress gateway requests](\/docs\/ops\/common-problems\/network-issues\/#route-rules-have-no-effect-on-ingress-gateway-requests)\nfor details.\n\nTo avoid this problem, it may be preferable to break up the configuration of `myapp.com` into several\n`VirtualService` fragments, one per backend service. For example:\n\n\napiVersion: networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: myapp-service1\nspec:\n  hosts:\n  - myapp.com\n  gateways:\n  - myapp-gateway\n  http:\n  - match:\n    - uri:\n        prefix: \/service1\n    route:\n    - destination:\n        host: service1.default.svc.cluster.local\n---\napiVersion: networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: myapp-service2\nspec:\n  hosts:\n  - myapp.com\n  gateways:\n  - myapp-gateway\n  http:\n  - match:\n    - uri:\n        prefix: \/service2\n    route:\n    - destination:\n        host: service2.default.svc.cluster.local\n---\napiVersion: networking.istio.io\/v1\nkind: VirtualService\nmetadata:\n  name: myapp-...\n\n\nWhen a second or subsequent `VirtualService` for an existing host is applied, `istiod` will merge\nthe additional route rules into the existing configuration of the host. There are, however, several\ncaveats with this feature that must be considered carefully when using it.\n\n1. Although the order of evaluation for rules in any given source `VirtualService` will be retained,\n   the cross-resource order is UNDEFINED. 
In other words, there is no guaranteed order of evaluation\n   for rules across the fragment configurations, so it will only have predictable behavior if there\n   are no conflicting rules or order dependency between rules across fragments.\n1. There should only be one \"catch-all\" rule (i.e., a rule without a `match` field) in the fragments.\n   All such \"catch-all\" rules will be moved to the end of the list in the merged configuration, but\n   since they catch all requests, whichever is applied first will essentially override and disable any others.\n1. A `VirtualService` can only be fragmented this way if it is bound to a gateway.\n   Host merging is not supported in sidecars.\n\nA `DestinationRule` can also be fragmented with similar merge semantics and restrictions.\n\n1. There should only be one definition of any given subset across multiple destination rules for the same host.\n   If there is more than one with the same name, the first definition is used and any following duplicates are discarded.\n   No merging of subset content is supported.\n1. There should only be one top-level `trafficPolicy` for the same host.\n   When top-level traffic policies are defined in multiple destination rules, the first one processed will be used.\n   Any following top-level `trafficPolicy` configuration is discarded.\n1. Unlike virtual service merging, destination rule merging works in both sidecars and gateways.\n\n## Avoid 503 errors while reconfiguring service routes\n\nWhen setting route rules to direct traffic to specific versions (subsets) of a service, care must be taken to ensure\nthat the subsets are available before they are used in the routes. 
Otherwise, calls to the service may return\n503 errors during a reconfiguration period.\n\nCreating both the `VirtualServices` and `DestinationRules` that define the corresponding subsets using a single `kubectl`\ncall (e.g., `kubectl apply -f myVirtualServiceAndDestinationRule.yaml`) is not sufficient because the\nresources propagate (from the configuration server, i.e., Kubernetes API server) to the istiod instances in an eventually consistent manner. If the\n`VirtualService` using the subsets arrives before the `DestinationRule` where the subsets are defined, the Envoy configuration generated by istiod would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to istiod.\n\nTo make sure services will have zero down-time when configuring routes with subsets, follow a \"make-before-break\" process as described below:\n\n* When adding new subsets:\n\n    1. Update `DestinationRules` to add a new subset first, before updating any `VirtualServices` that use it. Apply the rule using `kubectl` or any platform-specific tooling.\n\n    1. Wait a few seconds for the `DestinationRule` configuration to propagate to the Envoy sidecars.\n\n    1. Update the `VirtualService` to refer to the newly added subsets.\n\n* When removing subsets:\n\n    1. Update `VirtualServices` to remove any references to a subset, before removing the subset from a `DestinationRule`.\n\n    1. Wait a few seconds for the `VirtualService` configuration to propagate to the Envoy sidecars.\n\n    1. 
Update the `DestinationRule` to remove the unused subsets.","site":"istio","answers_cleaned":"
---
title: Traffic Management Best Practices
description: Configuration best practices to avoid networking or traffic management issues.
force_inline_toc: true
weight: 20
aliases:
    - /help/ops/traffic-management/deploy-guidelines
    - /help/ops/deploy-guidelines
    - /docs/ops/traffic-management/deploy-guidelines
owner: istio/wg-networking-maintainers
test: n/a
---

This section provides specific deployment or configuration guidelines to avoid networking or traffic management issues.

## Set default routes for services

Although the default Istio behavior conveniently sends traffic from any source to all versions of a destination service without any rules being set, creating a `VirtualService` with a default route for every service, right from the start, is generally considered a best practice in Istio.

Even if you initially have only one version of a service, as soon as you decide to deploy a second version, you need to have a routing rule in place *before* the new version is started, to prevent it from immediately receiving traffic in an uncontrolled way.

Another potential issue when relying on Istio's default round-robin routing is due to a subtlety in Istio's destination rule evaluation algorithm. When routing a request, Envoy first evaluates route rules in virtual services to determine if a particular subset is being routed to. If so, only then will it activate any destination rule policies corresponding to the subset. Consequently, Istio only applies the policies you define for specific subsets if you *explicitly* routed traffic to the corresponding subset.

For example, consider the following destination rule as the one and only configuration defined for the `reviews` service, that is, there are no route rules in a corresponding `VirtualService` definition:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
    trafficPolicy:
      connectionPool:
        tcp:
          maxConnections: 100
```

Even if Istio's default round-robin routing calls `v1` instances on occasion, maybe even always if `v1` is the only running version, the above traffic policy will never be invoked.

You can fix the above example in one of two ways. You can either move the traffic policy up a level in the `DestinationRule` to make it apply to any version:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
  subsets:
  - name: v1
    labels:
      version: v1
```

Or, better yet, define a proper route rule for the service in the `VirtualService` definition. For example, add a simple route rule for `reviews:v1`:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
```

## Control configuration sharing across namespaces {#cross-namespace-configuration}

You can define virtual services, destination rules, or service entries in one namespace and then reuse them in other namespaces, if they are exported to those namespaces. Istio exports all traffic management resources to all namespaces by default, but you can override the visibility with the `exportTo` field. For example, only requests from workloads in the same namespace can be affected by the following virtual service:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
  - myservice.com
  exportTo:
  - "."
  http:
  - route:
    - destination:
        host: myservice
```

You can similarly control the visibility of a Kubernetes `Service` using the `networking.istio.io/exportTo` annotation.

Setting the visibility of destination rules in a particular namespace doesn't guarantee the rule is used. Exporting a destination rule to other namespaces enables you to use it in those namespaces, but to actually be applied during a request the namespace also needs to be on the destination rule lookup path:

1. client namespace
1. service namespace
1. the configured `meshConfig.rootNamespace` namespace (`istio-system` by default)

For example, consider the following destination rule:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: myservice
spec:
  host: myservice.default.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
```

Let's assume you create this destination rule in namespace `ns1`.

If you send a request to the `myservice` service from a client in `ns1`, the destination rule would be applied, because it is in the first namespace on the lookup path, that is, in the client namespace.

If you now send the request from a different namespace, for example `ns2`, the client is no longer in the same namespace as the destination rule, `ns1`. Because the corresponding service, `myservice.default.svc.cluster.local`, is also not in `ns1`, but rather in the `default` namespace, the destination rule will also not be found in the second namespace of the lookup path, the service namespace.

Even if the `myservice` service is exported to all namespaces and therefore visible in `ns2`, and the destination rule is also exported to all namespaces, including `ns2`, it will not be applied during the request from `ns2` because it's not in any of the namespaces on the lookup path.

You can avoid this problem by creating the destination rule in the same namespace as the corresponding service, `default` in this example. It would then get applied to requests from clients in any namespace. You can also move the destination rule to the `istio-system` namespace, the third namespace on the lookup path, although this isn't recommended unless the destination rule is really a global configuration that is applicable in all namespaces, and it would require administrator authority.

Istio uses this restricted destination rule lookup path for two reasons:

1. Prevent destination rules from being defined that can override the behavior of services in completely unrelated namespaces.
1. Have a clear lookup order in case there is more than one destination rule for the same host.

## Split large virtual services and destination rules into multiple resources {#split-virtual-services}

In situations where it is inconvenient to define the complete set of route rules or policies for a particular host in a single `VirtualService` or `DestinationRule` resource, it may be preferable to incrementally specify the configuration for the host in multiple resources. The control plane will merge such destination rules, and merge such virtual services if they are bound to a gateway.

Consider the case of a `VirtualService` bound to an ingress gateway exposing an application host which uses path-based delegation to several implementation services, something like this:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.com
  gateways:
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /service1
    route:
    - destination:
        host: service1.default.svc.cluster.local
  - match:
    - uri:
        prefix: /service2
    route:
    - destination:
        host: service2.default.svc.cluster.local
  - match:
    ...
```

The downside of this kind of configuration is that other configuration (e.g., route rules) for any of the underlying microservices will need to also be included in this single configuration file, instead of in separate resources associated with, and potentially owned by, the individual service teams. See [Route rules have no effect on ingress gateway requests](/docs/ops/common-problems/network-issues/#route-rules-have-no-effect-on-ingress-gateway-requests) for details.

To avoid this problem, it may be preferable to break up the configuration of `myapp.com` into several `VirtualService` fragments, one per backend service. For example:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp-service1
spec:
  hosts:
  - myapp.com
  gateways:
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /service1
    route:
    - destination:
        host: service1.default.svc.cluster.local
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp-service2
spec:
  hosts:
  - myapp.com
  gateways:
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /service2
    route:
    - destination:
        host: service2.default.svc.cluster.local
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp-...
```

When a second and subsequent `VirtualService` for an existing host is applied, `istiod` will merge the additional route rules into the existing configuration of the host. There are, however, several caveats with this feature that must be considered carefully when using it.

1. Although the order of evaluation for rules in any given source `VirtualService` will be retained, the cross-resource order is UNDEFINED. In other words, there is no guaranteed order of evaluation for rules across the fragment configurations, so it will only have predictable behavior if there are no conflicting rules or order dependency between rules across fragments.
1. There should only be one "catch-all" rule (i.e., a rule without a `match` field) in the fragments. All such "catch-all" rules will be moved to the end of the list in the merged configuration, but since they catch all requests, whichever is applied first will essentially override and disable any others.
1. A `VirtualService` can only be fragmented this way if it is bound to a gateway. Host merging is not supported in sidecars.

A `DestinationRule` can also be fragmented with similar merge semantics and restrictions:

1. There should only be one definition of any given subset across multiple destination rules for the same host. If there is more than one with the same name, the first definition is used and any following duplicates are discarded. No merging of subset content is supported.
1. There should only be one top-level `trafficPolicy` for the same host. When top-level traffic policies are defined in multiple destination rules, the first one processed will be used. Any following top-level `trafficPolicy` configuration is discarded.
1. Unlike virtual service merging, destination rule merging works in both sidecars and gateways.

## Avoid 503 errors while reconfiguring service routes

When setting route rules to direct traffic to specific versions (subsets) of a service, care must be taken to ensure that the subsets are available before they are used in the routes. Otherwise, calls to the service may return 503 errors during a reconfiguration period.

Creating both the `VirtualServices` and `DestinationRules` that define the corresponding subsets using a single `kubectl` call (e.g., `kubectl apply -f myVirtualServiceAndDestinationRule.yaml`) is not sufficient because the resources propagate (from the configuration server, i.e., Kubernetes API server) to the istiod instances in an eventually consistent manner. If the `VirtualService` using the subsets arrives before the `DestinationRule` where the subsets are defined, the Envoy configuration generated by istiod would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to istiod.

To make sure services will have zero down-time when configuring routes with subsets, follow a "make before break" process as described below:

* When adding new subsets:

    1. Update `DestinationRules` to add a new subset first, before updating any `VirtualServices` that use it. Apply the rule using `kubectl` or any platform-specific tooling.

    1. Wait a few seconds for the `DestinationRule` configuration to propagate to the Envoy sidecars.

    1. Update the `VirtualService` to refer to the newly added subsets.

* When removing subsets:

    1. Update `VirtualServices` to remove any references to a subset, before removing the subset from a `DestinationRule`.

    1. Wait a few seconds for the `VirtualService` configuration to propagate to the Envoy sidecars.

    1. Update the `DestinationRule` to remove the unused subsets.
"}
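As a concrete sketch of the "make before break" steps above when adding a new subset (the `reviews` service and a hypothetical `v2` subset are used for illustration; the two documents must be applied as separate steps, not in one `kubectl apply` call, since a single call does not guarantee propagation order):

```yaml
# Step 1 (apply first): add the new subset to the DestinationRule.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2          # new subset, defined before any route refers to it
    labels:
      version: v2
---
# Step 2 (apply only after step 1 has propagated to the Envoy sidecars):
# point the VirtualService at the newly added subset.
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
```

Removing `v2` later follows the reverse order: first drop the `subset: v2` reference from the `VirtualService`, wait for propagation, then delete the subset from the `DestinationRule`.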
{"questions":"kubernetes reference weight 70 chenopis approvers nolist true contenttype concept Reference mainmenu true title Reference","answers":"---\ntitle: Reference\napprovers:\n- chenopis\nlinkTitle: \"Reference\"\nmain_menu: true\nweight: 70\ncontent_type: concept\nno_list: true\n---\n\n<!-- overview -->\n\nThis section of the Kubernetes documentation contains references.\n\n<!-- body -->\n\n## API Reference\n\n* [Glossary](\/docs\/reference\/glossary\/) -  a comprehensive, standardized list of Kubernetes terminology\n\n* [Kubernetes API Reference](\/docs\/reference\/kubernetes-api\/)\n* [One-page API Reference for Kubernetes ](\/docs\/reference\/generated\/kubernetes-api\/\/)\n* [Using The Kubernetes API](\/docs\/reference\/using-api\/) - overview of the API for Kubernetes.\n* [API access control](\/docs\/reference\/access-authn-authz\/) - details on how Kubernetes controls API access\n* [Well-Known Labels, Annotations and Taints](\/docs\/reference\/labels-annotations-taints\/)\n\n## Officially supported client libraries\n\nTo call the Kubernetes API from a programming language, you can use\n[client libraries](\/docs\/reference\/using-api\/client-libraries\/). 
Officially supported\nclient libraries:\n\n- [Kubernetes Go client library](https:\/\/github.com\/kubernetes\/client-go\/)\n- [Kubernetes Python client library](https:\/\/github.com\/kubernetes-client\/python)\n- [Kubernetes Java client library](https:\/\/github.com\/kubernetes-client\/java)\n- [Kubernetes JavaScript client library](https:\/\/github.com\/kubernetes-client\/javascript)\n- [Kubernetes C# client library](https:\/\/github.com\/kubernetes-client\/csharp)\n- [Kubernetes Haskell client library](https:\/\/github.com\/kubernetes-client\/haskell)\n\n## CLI\n\n* [kubectl](\/docs\/reference\/kubectl\/) - Main CLI tool for running commands and managing Kubernetes clusters.\n  * [JSONPath](\/docs\/reference\/kubectl\/jsonpath\/) - Syntax guide for using [JSONPath expressions](https:\/\/goessner.net\/articles\/JsonPath\/) with kubectl.\n* [kubeadm](\/docs\/reference\/setup-tools\/kubeadm\/) - CLI tool to easily provision a secure Kubernetes cluster.\n\n## Components\n\n* [kubelet](\/docs\/reference\/command-line-tools-reference\/kubelet\/) - The\n  primary agent that runs on each node. 
The kubelet takes a set of PodSpecs\n  and ensures that the described containers are running and healthy.\n* [kube-apiserver](\/docs\/reference\/command-line-tools-reference\/kube-apiserver\/) -\n  REST API that validates and configures data for API objects such as  pods,\n  services, replication controllers.\n* [kube-controller-manager](\/docs\/reference\/command-line-tools-reference\/kube-controller-manager\/) -\n  Daemon that embeds the core control loops shipped with Kubernetes.\n* [kube-proxy](\/docs\/reference\/command-line-tools-reference\/kube-proxy\/) - Can\n  do simple TCP\/UDP stream forwarding or round-robin TCP\/UDP forwarding across\n  a set of back-ends.\n* [kube-scheduler](\/docs\/reference\/command-line-tools-reference\/kube-scheduler\/) -\n  Scheduler that manages availability, performance, and capacity.\n  \n  * [Scheduler Policies](\/docs\/reference\/scheduling\/policies)\n  * [Scheduler Profiles](\/docs\/reference\/scheduling\/config#profiles)\n\n* List of [ports and protocols](\/docs\/reference\/networking\/ports-and-protocols\/) that\n  should be open on control plane and worker nodes\n\n## Config APIs\n\nThis section hosts the documentation for \"unpublished\" APIs which are used to\nconfigure  kubernetes components or tools. 
Most of these APIs are not exposed\nby the API server in a RESTful way though they are essential for a user or an\noperator to use or manage a cluster.\n\n\n* [kubeconfig (v1)](\/docs\/reference\/config-api\/kubeconfig.v1\/)\n* [kube-apiserver admission (v1)](\/docs\/reference\/config-api\/apiserver-admission.v1\/)\n* [kube-apiserver configuration (v1alpha1)](\/docs\/reference\/config-api\/apiserver-config.v1alpha1\/) and\n* [kube-apiserver configuration (v1beta1)](\/docs\/reference\/config-api\/apiserver-config.v1beta1\/) and\n  [kube-apiserver configuration (v1)](\/docs\/reference\/config-api\/apiserver-config.v1\/)\n* [kube-apiserver event rate limit (v1alpha1)](\/docs\/reference\/config-api\/apiserver-eventratelimit.v1alpha1\/)\n* [kubelet configuration (v1alpha1)](\/docs\/reference\/config-api\/kubelet-config.v1alpha1\/) and\n  [kubelet configuration (v1beta1)](\/docs\/reference\/config-api\/kubelet-config.v1beta1\/)\n  [kubelet configuration (v1)](\/docs\/reference\/config-api\/kubelet-config.v1\/)\n* [kubelet credential providers (v1)](\/docs\/reference\/config-api\/kubelet-credentialprovider.v1\/)\n* [kube-scheduler configuration (v1beta3)](\/docs\/reference\/config-api\/kube-scheduler-config.v1beta3\/) and\n  [kube-scheduler configuration (v1)](\/docs\/reference\/config-api\/kube-scheduler-config.v1\/)\n* [kube-controller-manager configuration (v1alpha1)](\/docs\/reference\/config-api\/kube-controller-manager-config.v1alpha1\/)\n* [kube-proxy configuration (v1alpha1)](\/docs\/reference\/config-api\/kube-proxy-config.v1alpha1\/)\n* [`audit.k8s.io\/v1` API](\/docs\/reference\/config-api\/apiserver-audit.v1\/)\n* [Client authentication API (v1beta1)](\/docs\/reference\/config-api\/client-authentication.v1beta1\/) and \n  [Client authentication API (v1)](\/docs\/reference\/config-api\/client-authentication.v1\/)\n* [WebhookAdmission configuration (v1)](\/docs\/reference\/config-api\/apiserver-webhookadmission.v1\/)\n* [ImagePolicy API 
(v1alpha1)](\/docs\/reference\/config-api\/imagepolicy.v1alpha1\/)\n\n## Config API for kubeadm\n\n* [v1beta3](\/docs\/reference\/config-api\/kubeadm-config.v1beta3\/)\n* [v1beta4](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/)\n\n## External APIs\n\nThese are the APIs defined by the Kubernetes project, but are not implemented\nby the core project:\n\n* [Metrics API (v1beta1)](\/docs\/reference\/external-api\/metrics.v1beta1\/)\n* [Custom Metrics API (v1beta2)](\/docs\/reference\/external-api\/custom-metrics.v1beta2)\n* [External Metrics API (v1beta1)](\/docs\/reference\/external-api\/external-metrics.v1beta1)\n\n## Design Docs\n\nAn archive of the design docs for Kubernetes functionality. Good starting points are\n[Kubernetes Architecture](https:\/\/git.k8s.io\/design-proposals-archive\/architecture\/architecture.md) and\n[Kubernetes Design Overview](https:\/\/git.k8s.io\/design-proposals-archive).\n","site":"kubernetes reference"}
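The CLI section of this reference links kubectl's JSONPath syntax guide. As a rough, self-contained illustration of what an expression like `{.items[*].metadata.name}` selects from a typical `kubectl get pods -o json` response (this is not kubectl's implementation, and the pod names are made up):

```python
# Sample of the JSON shape `kubectl get pods -o json` returns (a PodList).
pod_list = {
    "kind": "PodList",
    "items": [
        {"metadata": {"name": "web-0", "namespace": "default"}},
        {"metadata": {"name": "web-1", "namespace": "default"}},
    ],
}

# '.items[*].metadata.name' walks every element of .items and
# collects .metadata.name from each element.
names = [item["metadata"]["name"] for item in pod_list["items"]]

# kubectl prints the matched values space-separated.
print(" ".join(names))  # → web-0 web-1
```

The equivalent kubectl invocation would be `kubectl get pods -o jsonpath='{.items[*].metadata.name}'`.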
{"questions":"kubernetes reference title SubjectAccessReview import k8s io api authorization v1 contenttype apireference apiVersion authorization k8s io v1 kind SubjectAccessReview apimetadata weight 4 autogenerated true SubjectAccessReview checks whether or not a user or group can perform an action","answers":"---\napi_metadata:\n  apiVersion: \"authorization.k8s.io\/v1\"\n  import: \"k8s.io\/api\/authorization\/v1\"\n  kind: \"SubjectAccessReview\"\ncontent_type: \"api_reference\"\ndescription: \"SubjectAccessReview checks whether or not a user or group can perform an action.\"\ntitle: \"SubjectAccessReview\"\nweight: 4\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: authorization.k8s.io\/v1`\n\n`import \"k8s.io\/api\/authorization\/v1\"`\n\n\n## SubjectAccessReview {#SubjectAccessReview}\n\nSubjectAccessReview checks whether or not a user or group can perform an action.\n\n<hr>\n\n- **apiVersion**: authorization.k8s.io\/v1\n\n\n- **kind**: SubjectAccessReview\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">SubjectAccessReviewSpec<\/a>), required\n\n  Spec holds information about the request being evaluated\n\n- **status** (<a href=\"\">SubjectAccessReviewStatus<\/a>)\n\n  Status is filled in by the server and indicates whether the request is allowed or not\n\n\n\n\n\n## SubjectAccessReviewSpec {#SubjectAccessReviewSpec}\n\nSubjectAccessReviewSpec is a description of the access request.  Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set\n\n<hr>\n\n- **extra** (map[string][]string)\n\n  Extra corresponds to the user.Info.GetExtra() method from the authenticator.  Since that is input to the authorizer it needs a reflection here.\n\n- **groups** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  Groups is the groups you're testing for.\n\n- **nonResourceAttributes** (NonResourceAttributes)\n\n  NonResourceAttributes describes information for a non-resource access request\n\n  <a name=\"NonResourceAttributes\"><\/a>\n  *NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface*\n\n  - **nonResourceAttributes.path** (string)\n\n    Path is the URL path of the request\n\n  - **nonResourceAttributes.verb** (string)\n\n    Verb is the standard HTTP verb\n\n- **resourceAttributes** (ResourceAttributes)\n\n  ResourceAuthorizationAttributes describes information for a resource access request\n\n  <a name=\"ResourceAttributes\"><\/a>\n  *ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface*\n\n  - **resourceAttributes.fieldSelector** (FieldSelectorAttributes)\n\n    fieldSelector describes the limitation on access based on field.  It can only limit access, not broaden it.\n    \n    This field  is alpha-level. 
To use this field, you must enable the `AuthorizeWithSelectors` feature gate (disabled by default).\n\n    <a name=\"FieldSelectorAttributes\"><\/a>\n    *FieldSelectorAttributes indicates a field limited access. Webhook authors are encouraged to * ensure rawSelector and requirements are not both set * consider the requirements field if set * not try to parse or consider the rawSelector field if set. This is to avoid another CVE-2022-2880 (i.e. getting different systems to agree on how exactly to parse a query is not something we want), see https:\/\/www.oxeye.io\/resources\/golang-parameter-smuggling-attack for more details. For the *SubjectAccessReview endpoints of the kube-apiserver: * If rawSelector is empty and requirements are empty, the request is not limited. * If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds. * If rawSelector is empty and requirements are present, the requirements should be honored * If rawSelector is present and requirements are present, the request is invalid.*\n\n    - **resourceAttributes.fieldSelector.rawSelector** (string)\n\n      rawSelector is the serialization of a field selector that would be included in a query parameter. Webhook implementations are encouraged to ignore rawSelector. The kube-apiserver's *SubjectAccessReview will parse the rawSelector as long as the requirements are not present.\n\n    - **resourceAttributes.fieldSelector.requirements** ([]FieldSelectorRequirement)\n\n      *Atomic: will be replaced during a merge*\n      \n      requirements is the parsed interpretation of a field selector. All requirements must be met for a resource instance to match the selector. Webhook implementations should handle requirements, but how to handle them is up to the webhook. 
Since requirements can only limit the request, it is safe to authorize as unlimited request if the requirements are not understood.\n\n      <a name=\"FieldSelectorRequirement\"><\/a>\n      *FieldSelectorRequirement is a selector that contains values, a key, and an operator that relates the key and values.*\n\n      - **resourceAttributes.fieldSelector.requirements.key** (string), required\n\n        key is the field selector key that the requirement applies to.\n\n      - **resourceAttributes.fieldSelector.requirements.operator** (string), required\n\n        operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. The list of operators may grow in the future.\n\n      - **resourceAttributes.fieldSelector.requirements.values** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.\n\n  - **resourceAttributes.group** (string)\n\n    Group is the API Group of the Resource.  \"*\" means all.\n\n  - **resourceAttributes.labelSelector** (LabelSelectorAttributes)\n\n    labelSelector describes the limitation on access based on labels.  It can only limit access, not broaden it.\n    \n    This field  is alpha-level. To use this field, you must enable the `AuthorizeWithSelectors` feature gate (disabled by default).\n\n    <a name=\"LabelSelectorAttributes\"><\/a>\n    *LabelSelectorAttributes indicates a label limited access. Webhook authors are encouraged to * ensure rawSelector and requirements are not both set * consider the requirements field if set * not try to parse or consider the rawSelector field if set. This is to avoid another CVE-2022-2880 (i.e. 
getting different systems to agree on how exactly to parse a query is not something we want), see https:\/\/www.oxeye.io\/resources\/golang-parameter-smuggling-attack for more details. For the *SubjectAccessReview endpoints of the kube-apiserver: * If rawSelector is empty and requirements are empty, the request is not limited. * If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds. * If rawSelector is empty and requirements are present, the requirements should be honored * If rawSelector is present and requirements are present, the request is invalid.*\n\n    - **resourceAttributes.labelSelector.rawSelector** (string)\n\n      rawSelector is the serialization of a field selector that would be included in a query parameter. Webhook implementations are encouraged to ignore rawSelector. The kube-apiserver's *SubjectAccessReview will parse the rawSelector as long as the requirements are not present.\n\n    - **resourceAttributes.labelSelector.requirements** ([]LabelSelectorRequirement)\n\n      *Atomic: will be replaced during a merge*\n      \n      requirements is the parsed interpretation of a label selector. All requirements must be met for a resource instance to match the selector. Webhook implementations should handle requirements, but how to handle them is up to the webhook. Since requirements can only limit the request, it is safe to authorize as unlimited request if the requirements are not understood.\n\n      <a name=\"LabelSelectorRequirement\"><\/a>\n      *A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.*\n\n      - **resourceAttributes.labelSelector.requirements.key** (string), required\n\n        key is the label key that the selector applies to.\n\n      - **resourceAttributes.labelSelector.requirements.operator** (string), required\n\n        operator represents a key's relationship to a set of values. 
Valid operators are In, NotIn, Exists and DoesNotExist.\n\n      - **resourceAttributes.labelSelector.requirements.values** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n\n  - **resourceAttributes.name** (string)\n\n    Name is the name of the resource being requested for a \"get\" or deleted for a \"delete\". \"\" (empty) means all.\n\n  - **resourceAttributes.namespace** (string)\n\n    Namespace is the namespace of the action being requested.  Currently, there is no distinction between no namespace and all namespaces \"\" (empty) is defaulted for LocalSubjectAccessReviews \"\" (empty) is empty for cluster-scoped resources \"\" (empty) means \"all\" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview\n\n  - **resourceAttributes.resource** (string)\n\n    Resource is one of the existing resource types.  \"*\" means all.\n\n  - **resourceAttributes.subresource** (string)\n\n    Subresource is one of the existing resource types.  \"\" means none.\n\n  - **resourceAttributes.verb** (string)\n\n    Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy.  \"*\" means all.\n\n  - **resourceAttributes.version** (string)\n\n    Version is the API Version of the Resource.  \"*\" means all.\n\n- **uid** (string)\n\n  UID information about the requesting user.\n\n- **user** (string)\n\n  User is the user you're testing for. If you specify \"User\" but not \"Groups\", then it is interpreted as \"What if User were not a member of any groups?\"\n\n\n\n\n\n## SubjectAccessReviewStatus {#SubjectAccessReviewStatus}\n\nSubjectAccessReviewStatus\n\n<hr>\n\n- **allowed** (boolean), required\n\n  Allowed is required. 
True if the action would be allowed, false otherwise.\n\n- **denied** (boolean)\n\n  Denied is optional. True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true.\n\n- **evaluationError** (string)\n\n  EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and still be able to continue to determine the authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request.\n\n- **reason** (string)\n\n  Reason is optional.  It indicates why a request was allowed or denied.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `create` create a SubjectAccessReview\n\n#### HTTP Request\n\nPOST \/apis\/authorization.k8s.io\/v1\/subjectaccessreviews\n\n#### Parameters\n\n\n- **body**: <a href=\"\">SubjectAccessReview<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">SubjectAccessReview<\/a>): OK\n\n201 (<a href=\"\">SubjectAccessReview<\/a>): Created\n\n202 (<a href=\"\">SubjectAccessReview<\/a>): Accepted\n\n401: Unauthorized\n","site":"kubernetes reference"}
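As a worked illustration of the SubjectAccessReview record above, the spec fields and the allowed/denied decision rule from SubjectAccessReviewStatus can be sketched in a few lines of Python. The concrete `user`/`verb`/`resource` values are hypothetical, not taken from the reference; the field names and the API path follow the reference itself.

```python
def make_subject_access_review(user, verb, resource, namespace=""):
    """Body to POST to /apis/authorization.k8s.io/v1/subjectaccessreviews."""
    return {
        "apiVersion": "authorization.k8s.io/v1",
        "kind": "SubjectAccessReview",
        "spec": {
            # Exactly one of resourceAttributes / nonResourceAttributes is set.
            "resourceAttributes": {
                "namespace": namespace,  # "" means "all" for namespaced resources
                "verb": verb,            # get, list, watch, ...; "*" means all
                "resource": resource,    # "*" means all
            },
            "user": user,  # the user being tested
        },
    }

def decision(status):
    """Interpret a SubjectAccessReviewStatus returned by the server."""
    if status.get("allowed"):
        return "allowed"
    if status.get("denied"):
        return "denied"
    # allowed == false and denied == false: the authorizer has no opinion.
    return "no opinion"

# Illustrative values only:
review = make_subject_access_review("jane", "list", "pods", namespace="dev")
print(decision({"allowed": False, "denied": False}))  # no opinion
```

The server fills in `.status` on the created object; the client never sets it.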
{"questions":"kubernetes reference ClusterRoleBinding references a ClusterRole but not contain it weight 6 title ClusterRoleBinding import k8s io api rbac v1 apiVersion rbac authorization k8s io v1 contenttype apireference apimetadata kind ClusterRoleBinding autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"rbac.authorization.k8s.io\/v1\"\n  import: \"k8s.io\/api\/rbac\/v1\"\n  kind: \"ClusterRoleBinding\"\ncontent_type: \"api_reference\"\ndescription: \"ClusterRoleBinding references a ClusterRole, but does not contain it.\"\ntitle: \"ClusterRoleBinding\"\nweight: 6\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: rbac.authorization.k8s.io\/v1`\n\n`import \"k8s.io\/api\/rbac\/v1\"`\n\n\n## ClusterRoleBinding {#ClusterRoleBinding}\n\nClusterRoleBinding references a ClusterRole, but does not contain it.  It can reference a ClusterRole in the global namespace, and adds who information via Subject.\n\n<hr>\n\n- **apiVersion**: rbac.authorization.k8s.io\/v1\n\n\n- **kind**: ClusterRoleBinding\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata.\n\n- **roleRef** (RoleRef), required\n\n  RoleRef can only reference a ClusterRole in the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error. 
This field is immutable.\n\n  <a name=\"RoleRef\"><\/a>\n  *RoleRef contains information that points to the role being used*\n\n  - **roleRef.apiGroup** (string), required\n\n    APIGroup is the group for the resource being referenced\n\n  - **roleRef.kind** (string), required\n\n    Kind is the type of resource being referenced\n\n  - **roleRef.name** (string), required\n\n    Name is the name of resource being referenced\n\n- **subjects** ([]Subject)\n\n  *Atomic: will be replaced during a merge*\n  \n  Subjects holds references to the objects the role applies to.\n\n  <a name=\"Subject\"><\/a>\n  *Subject contains a reference to the object or user identities a role binding applies to.  This can either hold a direct API object reference, or a value for non-objects such as user and group names.*\n\n  - **subjects.kind** (string), required\n\n    Kind of object being referenced. Values defined by this API group are \"User\", \"Group\", and \"ServiceAccount\". If the Authorizer does not recognize the kind value, the Authorizer should report an error.\n\n  - **subjects.name** (string), required\n\n    Name of the object being referenced.\n\n  - **subjects.apiGroup** (string)\n\n    APIGroup holds the API group of the referenced subject. Defaults to \"\" for ServiceAccount subjects. Defaults to \"rbac.authorization.k8s.io\" for User and Group subjects.\n\n  - **subjects.namespace** (string)\n\n    Namespace of the referenced object.  
If the object kind is non-namespaced, such as \"User\" or \"Group\", and this value is not empty, the Authorizer should report an error.\n\n\n\n\n\n## ClusterRoleBindingList {#ClusterRoleBindingList}\n\nClusterRoleBindingList is a collection of ClusterRoleBindings\n\n<hr>\n\n- **apiVersion**: rbac.authorization.k8s.io\/v1\n\n\n- **kind**: ClusterRoleBindingList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard object's metadata.\n\n- **items** ([]<a href=\"\">ClusterRoleBinding<\/a>), required\n\n  Items is a list of ClusterRoleBindings\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ClusterRoleBinding\n\n#### HTTP Request\n\nGET \/apis\/rbac.authorization.k8s.io\/v1\/clusterrolebindings\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterRoleBinding\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterRoleBinding<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ClusterRoleBinding\n\n#### HTTP Request\n\nGET \/apis\/rbac.authorization.k8s.io\/v1\/clusterrolebindings\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a 
href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterRoleBindingList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ClusterRoleBinding\n\n#### HTTP Request\n\nPOST \/apis\/rbac.authorization.k8s.io\/v1\/clusterrolebindings\n\n#### Parameters\n\n\n- **body**: <a href=\"\">ClusterRoleBinding<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterRoleBinding<\/a>): OK\n\n201 (<a href=\"\">ClusterRoleBinding<\/a>): Created\n\n202 (<a href=\"\">ClusterRoleBinding<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ClusterRoleBinding\n\n#### HTTP Request\n\nPUT \/apis\/rbac.authorization.k8s.io\/v1\/clusterrolebindings\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterRoleBinding\n\n\n- **body**: <a href=\"\">ClusterRoleBinding<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterRoleBinding<\/a>): OK\n\n201 (<a href=\"\">ClusterRoleBinding<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ClusterRoleBinding\n\n#### HTTP Request\n\nPATCH \/apis\/rbac.authorization.k8s.io\/v1\/clusterrolebindings\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterRoleBinding\n\n\n- **body**: <a 
href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterRoleBinding<\/a>): OK\n\n201 (<a href=\"\">ClusterRoleBinding<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ClusterRoleBinding\n\n#### HTTP Request\n\nDELETE \/apis\/rbac.authorization.k8s.io\/v1\/clusterrolebindings\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterRoleBinding\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ClusterRoleBinding\n\n#### HTTP Request\n\nDELETE \/apis\/rbac.authorization.k8s.io\/v1\/clusterrolebindings\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a 
href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
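To make the roleRef/subjects schema in the ClusterRoleBinding record above concrete, a minimal manifest can be sketched as a plain dict. The binding, role, and group names here are hypothetical; the field names, required fields, and API path follow the reference.

```python
# Minimal ClusterRoleBinding per the schema documented above.
# roleRef is required and immutable and may only name a ClusterRole;
# subjects lists who the role is granted to.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRoleBinding",
    "metadata": {"name": "auditors-view"},            # hypothetical name
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",      # required
        "kind": "ClusterRole",                        # required; only ClusterRole
        "name": "view",                               # hypothetical ClusterRole
    },
    "subjects": [
        {
            "kind": "Group",                          # "User", "Group", or "ServiceAccount"
            "apiGroup": "rbac.authorization.k8s.io",  # default for User/Group subjects
            "name": "auditors",                       # hypothetical group
        }
    ],
}

# Created via the `create` operation above:
# POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings
```

Note that `subjects[].namespace` is omitted here: per the reference, it should be empty for non-namespaced subject kinds such as "User" or "Group".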
{"questions":"kubernetes reference weight 2 import k8s io api authorization v1 contenttype apireference SelfSubjectAccessReview checks whether or not the current user can perform an action apiVersion authorization k8s io v1 apimetadata title SelfSubjectAccessReview autogenerated true kind SelfSubjectAccessReview","answers":"---\napi_metadata:\n  apiVersion: \"authorization.k8s.io\/v1\"\n  import: \"k8s.io\/api\/authorization\/v1\"\n  kind: \"SelfSubjectAccessReview\"\ncontent_type: \"api_reference\"\ndescription: \"SelfSubjectAccessReview checks whether or not the current user can perform an action.\"\ntitle: \"SelfSubjectAccessReview\"\nweight: 2\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: authorization.k8s.io\/v1`\n\n`import \"k8s.io\/api\/authorization\/v1\"`\n\n\n## SelfSubjectAccessReview {#SelfSubjectAccessReview}\n\nSelfSubjectAccessReview checks whether or not the current user can perform an action.  Not filling in a spec.namespace means \"in all namespaces\".  Self is a special case, because users should always be able to check whether they can perform an action\n\n<hr>\n\n- **apiVersion**: authorization.k8s.io\/v1\n\n\n- **kind**: SelfSubjectAccessReview\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">SelfSubjectAccessReviewSpec<\/a>), required\n\n  Spec holds information about the request being evaluated.  user and groups must be empty\n\n- **status** (<a href=\"\">SubjectAccessReviewStatus<\/a>)\n\n  Status is filled in by the server and indicates whether the request is allowed or not\n\n\n\n\n\n## SelfSubjectAccessReviewSpec {#SelfSubjectAccessReviewSpec}\n\nSelfSubjectAccessReviewSpec is a description of the access request.  Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set\n\n<hr>\n\n- **nonResourceAttributes** (NonResourceAttributes)\n\n  NonResourceAttributes describes information for a non-resource access request\n\n  <a name=\"NonResourceAttributes\"><\/a>\n  *NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface*\n\n  - **nonResourceAttributes.path** (string)\n\n    Path is the URL path of the request\n\n  - **nonResourceAttributes.verb** (string)\n\n    Verb is the standard HTTP verb\n\n- **resourceAttributes** (ResourceAttributes)\n\n  ResourceAuthorizationAttributes describes information for a resource access request\n\n  <a name=\"ResourceAttributes\"><\/a>\n  *ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface*\n\n  - **resourceAttributes.fieldSelector** (FieldSelectorAttributes)\n\n    fieldSelector describes the limitation on access based on field.  It can only limit access, not broaden it.\n    \n    This field  is alpha-level. To use this field, you must enable the `AuthorizeWithSelectors` feature gate (disabled by default).\n\n    <a name=\"FieldSelectorAttributes\"><\/a>\n    *FieldSelectorAttributes indicates a field limited access. 
Webhook authors are encouraged to * ensure rawSelector and requirements are not both set * consider the requirements field if set * not try to parse or consider the rawSelector field if set. This is to avoid another CVE-2022-2880 (i.e. getting different systems to agree on how exactly to parse a query is not something we want), see https:\/\/www.oxeye.io\/resources\/golang-parameter-smuggling-attack for more details. For the *SubjectAccessReview endpoints of the kube-apiserver: * If rawSelector is empty and requirements are empty, the request is not limited. * If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds. * If rawSelector is empty and requirements are present, the requirements should be honored * If rawSelector is present and requirements are present, the request is invalid.*\n\n    - **resourceAttributes.fieldSelector.rawSelector** (string)\n\n      rawSelector is the serialization of a field selector that would be included in a query parameter. Webhook implementations are encouraged to ignore rawSelector. The kube-apiserver's *SubjectAccessReview will parse the rawSelector as long as the requirements are not present.\n\n    - **resourceAttributes.fieldSelector.requirements** ([]FieldSelectorRequirement)\n\n      *Atomic: will be replaced during a merge*\n      \n      requirements is the parsed interpretation of a field selector. All requirements must be met for a resource instance to match the selector. Webhook implementations should handle requirements, but how to handle them is up to the webhook. 
Since requirements can only limit the request, it is safe to authorize as unlimited request if the requirements are not understood.\n\n      <a name=\"FieldSelectorRequirement\"><\/a>\n      *FieldSelectorRequirement is a selector that contains values, a key, and an operator that relates the key and values.*\n\n      - **resourceAttributes.fieldSelector.requirements.key** (string), required\n\n        key is the field selector key that the requirement applies to.\n\n      - **resourceAttributes.fieldSelector.requirements.operator** (string), required\n\n        operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. The list of operators may grow in the future.\n\n      - **resourceAttributes.fieldSelector.requirements.values** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.\n\n  - **resourceAttributes.group** (string)\n\n    Group is the API Group of the Resource.  \"*\" means all.\n\n  - **resourceAttributes.labelSelector** (LabelSelectorAttributes)\n\n    labelSelector describes the limitation on access based on labels.  It can only limit access, not broaden it.\n    \n    This field  is alpha-level. To use this field, you must enable the `AuthorizeWithSelectors` feature gate (disabled by default).\n\n    <a name=\"LabelSelectorAttributes\"><\/a>\n    *LabelSelectorAttributes indicates a label limited access. Webhook authors are encouraged to * ensure rawSelector and requirements are not both set * consider the requirements field if set * not try to parse or consider the rawSelector field if set. This is to avoid another CVE-2022-2880 (i.e. 
getting different systems to agree on how exactly to parse a query is not something we want), see https:\/\/www.oxeye.io\/resources\/golang-parameter-smuggling-attack for more details. For the *SubjectAccessReview endpoints of the kube-apiserver: * If rawSelector is empty and requirements are empty, the request is not limited. * If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds. * If rawSelector is empty and requirements are present, the requirements should be honored * If rawSelector is present and requirements are present, the request is invalid.*\n\n    - **resourceAttributes.labelSelector.rawSelector** (string)\n\n      rawSelector is the serialization of a field selector that would be included in a query parameter. Webhook implementations are encouraged to ignore rawSelector. The kube-apiserver's *SubjectAccessReview will parse the rawSelector as long as the requirements are not present.\n\n    - **resourceAttributes.labelSelector.requirements** ([]LabelSelectorRequirement)\n\n      *Atomic: will be replaced during a merge*\n      \n      requirements is the parsed interpretation of a label selector. All requirements must be met for a resource instance to match the selector. Webhook implementations should handle requirements, but how to handle them is up to the webhook. Since requirements can only limit the request, it is safe to authorize as unlimited request if the requirements are not understood.\n\n      <a name=\"LabelSelectorRequirement\"><\/a>\n      *A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.*\n\n      - **resourceAttributes.labelSelector.requirements.key** (string), required\n\n        key is the label key that the selector applies to.\n\n      - **resourceAttributes.labelSelector.requirements.operator** (string), required\n\n        operator represents a key's relationship to a set of values. 
Valid operators are In, NotIn, Exists and DoesNotExist.\n\n      - **resourceAttributes.labelSelector.requirements.values** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.\n\n  - **resourceAttributes.name** (string)\n\n    Name is the name of the resource being requested for a \"get\" or deleted for a \"delete\". \"\" (empty) means all.\n\n  - **resourceAttributes.namespace** (string)\n\n    Namespace is the namespace of the action being requested.  Currently, there is no distinction between no namespace and all namespaces \"\" (empty) is defaulted for LocalSubjectAccessReviews \"\" (empty) is empty for cluster-scoped resources \"\" (empty) means \"all\" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview\n\n  - **resourceAttributes.resource** (string)\n\n    Resource is one of the existing resource types.  \"*\" means all.\n\n  - **resourceAttributes.subresource** (string)\n\n    Subresource is one of the existing resource types.  \"\" means none.\n\n  - **resourceAttributes.verb** (string)\n\n    Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy.  \"*\" means all.\n\n  - **resourceAttributes.version** (string)\n\n    Version is the API Version of the Resource.  
\"*\" means all.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `create` create a SelfSubjectAccessReview\n\n#### HTTP Request\n\nPOST \/apis\/authorization.k8s.io\/v1\/selfsubjectaccessreviews\n\n#### Parameters\n\n\n- **body**: <a href=\"\">SelfSubjectAccessReview<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">SelfSubjectAccessReview<\/a>): OK\n\n201 (<a href=\"\">SelfSubjectAccessReview<\/a>): Created\n\n202 (<a href=\"\">SelfSubjectAccessReview<\/a>): Accepted\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   authorization k8s io v1    import   k8s io api authorization v1    kind   SelfSubjectAccessReview  content type   api reference  description   SelfSubjectAccessReview checks whether or the current user can perform an action   title   SelfSubjectAccessReview  weight  2 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  authorization k8s io v1    import  k8s io api authorization v1        SelfSubjectAccessReview   SelfSubjectAccessReview   SelfSubjectAccessReview checks whether or the current user can perform 
an action   Not filling in a spec namespace means  in all namespaces    Self is a special case  because users should always be able to check whether they can perform an action   hr       apiVersion    authorization k8s io v1       kind    SelfSubjectAccessReview       metadata     a href    ObjectMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    SelfSubjectAccessReviewSpec  a    required    Spec holds information about the request being evaluated   user and groups must be empty      status     a href    SubjectAccessReviewStatus  a      Status is filled in by the server and indicates whether the request is allowed or not         SelfSubjectAccessReviewSpec   SelfSubjectAccessReviewSpec   SelfSubjectAccessReviewSpec is a description of the access request   Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set   hr       nonResourceAttributes    NonResourceAttributes     NonResourceAttributes describes information for a non resource access request     a name  NonResourceAttributes    a     NonResourceAttributes includes the authorization attributes available for non resource requests to the Authorizer interface         nonResourceAttributes path    string       Path is the URL path of the request        nonResourceAttributes verb    string       Verb is the standard HTTP verb      resourceAttributes    ResourceAttributes     ResourceAuthorizationAttributes describes information for a resource access request     a name  ResourceAttributes    a     ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface         resourceAttributes fieldSelector    FieldSelectorAttributes       fieldSelector describes the limitation on access based on field   It can only limit access  not broaden it           This field  is alpha level  To use this field  you must enable 
the  AuthorizeWithSelectors  feature gate  disabled by default         a name  FieldSelectorAttributes    a       FieldSelectorAttributes indicates a field limited access  Webhook authors are encouraged to   ensure rawSelector and requirements are not both set   consider the requirements field if set   not try to parse or consider the rawSelector field if set  This is to avoid another CVE 2022 2880  i e  getting different systems to agree on how exactly to parse a query is not something we want   see https   www oxeye io resources golang parameter smuggling attack for more details  For the  SubjectAccessReview endpoints of the kube apiserver    If rawSelector is empty and requirements are empty  the request is not limited    If rawSelector is present and requirements are empty  the rawSelector will be parsed and limited if the parsing succeeds    If rawSelector is empty and requirements are present  the requirements should be honored   If rawSelector is present and requirements are present  the request is invalid            resourceAttributes fieldSelector rawSelector    string         rawSelector is the serialization of a field selector that would be included in a query parameter  Webhook implementations are encouraged to ignore rawSelector  The kube apiserver s  SubjectAccessReview will parse the rawSelector as long as the requirements are not present           resourceAttributes fieldSelector requirements      FieldSelectorRequirement          Atomic  will be replaced during a merge               requirements is the parsed interpretation of a field selector  All requirements must be met for a resource instance to match the selector  Webhook implementations should handle requirements  but how to handle them is up to the webhook  Since requirements can only limit the request  it is safe to authorize as unlimited request if the requirements are not understood          a name  FieldSelectorRequirement    a         FieldSelectorRequirement is a selector that contains 
values  a key  and an operator that relates the key and values              resourceAttributes fieldSelector requirements key    string   required          key is the field selector key that the requirement applies to             resourceAttributes fieldSelector requirements operator    string   required          operator represents a key s relationship to a set of values  Valid operators are In  NotIn  Exists  DoesNotExist  The list of operators may grow in the future             resourceAttributes fieldSelector requirements values      string            Atomic  will be replaced during a merge                   values is an array of string values  If the operator is In or NotIn  the values array must be non empty  If the operator is Exists or DoesNotExist  the values array must be empty         resourceAttributes group    string       Group is the API Group of the Resource       means all         resourceAttributes labelSelector    LabelSelectorAttributes       labelSelector describes the limitation on access based on labels   It can only limit access  not broaden it           This field  is alpha level  To use this field  you must enable the  AuthorizeWithSelectors  feature gate  disabled by default         a name  LabelSelectorAttributes    a       LabelSelectorAttributes indicates a label limited access  Webhook authors are encouraged to   ensure rawSelector and requirements are not both set   consider the requirements field if set   not try to parse or consider the rawSelector field if set  This is to avoid another CVE 2022 2880  i e  getting different systems to agree on how exactly to parse a query is not something we want   see https   www oxeye io resources golang parameter smuggling attack for more details  For the  SubjectAccessReview endpoints of the kube apiserver    If rawSelector is empty and requirements are empty  the request is not limited    If rawSelector is present and requirements are empty  the rawSelector will be parsed and limited if the 
parsing succeeds    If rawSelector is empty and requirements are present  the requirements should be honored   If rawSelector is present and requirements are present  the request is invalid            resourceAttributes labelSelector rawSelector    string         rawSelector is the serialization of a field selector that would be included in a query parameter  Webhook implementations are encouraged to ignore rawSelector  The kube apiserver s  SubjectAccessReview will parse the rawSelector as long as the requirements are not present           resourceAttributes labelSelector requirements      LabelSelectorRequirement          Atomic  will be replaced during a merge               requirements is the parsed interpretation of a label selector  All requirements must be met for a resource instance to match the selector  Webhook implementations should handle requirements  but how to handle them is up to the webhook  Since requirements can only limit the request  it is safe to authorize as unlimited request if the requirements are not understood          a name  LabelSelectorRequirement    a         A label selector requirement is a selector that contains values  a key  and an operator that relates the key and values              resourceAttributes labelSelector requirements key    string   required          key is the label key that the selector applies to             resourceAttributes labelSelector requirements operator    string   required          operator represents a key s relationship to a set of values  Valid operators are In  NotIn  Exists and DoesNotExist             resourceAttributes labelSelector requirements values      string            Atomic  will be replaced during a merge                   values is an array of string values  If the operator is In or NotIn  the values array must be non empty  If the operator is Exists or DoesNotExist  the values array must be empty  This array is replaced during a strategic merge patch         resourceAttributes name    
string       Name is the name of the resource being requested for a  get  or deleted for a  delete       empty  means all         resourceAttributes namespace    string       Namespace is the namespace of the action being requested   Currently  there is no distinction between no namespace and all namespaces     empty  is defaulted for LocalSubjectAccessReviews     empty  is empty for cluster scoped resources     empty  means  all  for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview        resourceAttributes resource    string       Resource is one of the existing resource types       means all         resourceAttributes subresource    string       Subresource is one of the existing resource types      means none         resourceAttributes verb    string       Verb is a kubernetes resource API verb  like  get  list  watch  create  update  delete  proxy       means all         resourceAttributes version    string       Version is the API Version of the Resource       means all          Operations   Operations      hr             create  create a SelfSubjectAccessReview       HTTP Request  POST  apis authorization k8s io v1 selfsubjectaccessreviews       Parameters       body     a href    SelfSubjectAccessReview  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    SelfSubjectAccessReview  a    OK  201   a href    SelfSubjectAccessReview  a    Created  202   a href    SelfSubjectAccessReview  a    Accepted  401  Unauthorized "}
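The `create` operation above is the only way a SelfSubjectAccessReview is used: the client POSTs a review object whose spec sets exactly one of `resourceAttributes` or `nonResourceAttributes`, and the server fills in the status. A minimal sketch of composing that request body from the documented fields — not the official client library, and the attribute values (`apps`, `deployments`, `list`, `dev`) are only examples:

```python
import json

# Sketch: building the POST body for
# /apis/authorization.k8s.io/v1/selfsubjectaccessreviews
# from the SelfSubjectAccessReviewSpec fields documented above.
def self_subject_access_review(resource_attributes=None, non_resource_attributes=None):
    # Per the spec, exactly one of the two attribute sets must be provided.
    if (resource_attributes is None) == (non_resource_attributes is None):
        raise ValueError("set exactly one of resourceAttributes or nonResourceAttributes")
    spec = {}
    if resource_attributes is not None:
        spec["resourceAttributes"] = resource_attributes
    else:
        spec["nonResourceAttributes"] = non_resource_attributes
    return {
        "apiVersion": "authorization.k8s.io/v1",
        "kind": "SelfSubjectAccessReview",
        "spec": spec,
    }

# Example: "can the current user list Deployments in the dev namespace?"
body = self_subject_access_review(resource_attributes={
    "group": "apps",
    "resource": "deployments",
    "verb": "list",
    "namespace": "dev",
})
print(json.dumps(body, indent=2))
```

This is the same kind of request that `kubectl auth can-i` issues on the user's behalf; the server's answer comes back in `status.allowed`.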
{"questions":"kubernetes reference kind SelfSubjectRulesReview import k8s io api authorization v1 SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace contenttype apireference apiVersion authorization k8s io v1 apimetadata weight 3 title SelfSubjectRulesReview autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"authorization.k8s.io\/v1\"\n  import: \"k8s.io\/api\/authorization\/v1\"\n  kind: \"SelfSubjectRulesReview\"\ncontent_type: \"api_reference\"\ndescription: \"SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace.\"\ntitle: \"SelfSubjectRulesReview\"\nweight: 3\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: authorization.k8s.io\/v1`\n\n`import \"k8s.io\/api\/authorization\/v1\"`\n\n\n## SelfSubjectRulesReview {#SelfSubjectRulesReview}\n\nSelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show\/hide actions, or to quickly let an end user reason about their permissions. 
It should NOT be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime\/revocation, and correctness concerns. SubjectAccessReview and LocalSubjectAccessReview are the correct way to defer authorization decisions to the API server.\n\n<hr>\n\n- **apiVersion**: authorization.k8s.io\/v1\n\n\n- **kind**: SelfSubjectRulesReview\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">SelfSubjectRulesReviewSpec<\/a>), required\n\n  Spec holds information about the request being evaluated.\n\n- **status** (SubjectRulesReviewStatus)\n\n  Status is filled in by the server and indicates the set of actions a user can perform.\n\n  <a name=\"SubjectRulesReviewStatus\"><\/a>\n  *SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete.*\n\n  - **status.incomplete** (boolean), required\n\n    Incomplete is true when the rules returned by this call are incomplete. This is most commonly encountered when an authorizer, such as an external authorizer, doesn't support rules evaluation.\n\n  - **status.nonResourceRules** ([]NonResourceRule), required\n\n    *Atomic: will be replaced during a merge*\n    \n    NonResourceRules is the list of actions the subject is allowed to perform on non-resources. 
The list ordering isn't significant, may contain duplicates, and possibly be incomplete.\n\n    <a name=\"NonResourceRule\"><\/a>\n    *NonResourceRule holds information that describes a rule for the non-resource*\n\n    - **status.nonResourceRules.verbs** ([]string), required\n\n      *Atomic: will be replaced during a merge*\n      \n      Verb is a list of kubernetes non-resource API verbs, like: get, post, put, delete, patch, head, options.  \"*\" means all.\n\n    - **status.nonResourceRules.nonResourceURLs** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      NonResourceURLs is a set of partial urls that a user should have access to.  *s are allowed, but only as the full, final step in the path.  \"*\" means all.\n\n  - **status.resourceRules** ([]ResourceRule), required\n\n    *Atomic: will be replaced during a merge*\n    \n    ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete.\n\n    <a name=\"ResourceRule\"><\/a>\n    *ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete.*\n\n    - **status.resourceRules.verbs** ([]string), required\n\n      *Atomic: will be replaced during a merge*\n      \n      Verb is a list of kubernetes resource API verbs, like: get, list, watch, create, update, delete, proxy.  \"*\" means all.\n\n    - **status.resourceRules.apiGroups** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      APIGroups is the name of the APIGroup that contains the resources.  If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed.  
\"*\" means all.\n\n    - **status.resourceRules.resourceNames** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      ResourceNames is an optional white list of names that the rule applies to.  An empty set means that everything is allowed.  \"*\" means all.\n\n    - **status.resourceRules.resources** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      Resources is a list of resources this rule applies to.  \"*\" means all in the specified apiGroups.\n       \"*\/foo\" represents the subresource 'foo' for all resources in the specified apiGroups.\n\n  - **status.evaluationError** (string)\n\n    EvaluationError can appear in combination with Rules. It indicates an error occurred during rule evaluation, such as an authorizer that doesn't support rule evaluation, and that ResourceRules and\/or NonResourceRules may be incomplete.\n\n\n\n\n\n## SelfSubjectRulesReviewSpec {#SelfSubjectRulesReviewSpec}\n\nSelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview.\n\n<hr>\n\n- **namespace** (string)\n\n  Namespace to evaluate rules for. 
Required.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `create` create a SelfSubjectRulesReview\n\n#### HTTP Request\n\nPOST \/apis\/authorization.k8s.io\/v1\/selfsubjectrulesreviews\n\n#### Parameters\n\n\n- **body**: <a href=\"\">SelfSubjectRulesReview<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">SelfSubjectRulesReview<\/a>): OK\n\n201 (<a href=\"\">SelfSubjectRulesReview<\/a>): Created\n\n202 (<a href=\"\">SelfSubjectRulesReview<\/a>): Accepted\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   authorization k8s io v1    import   k8s io api authorization v1    kind   SelfSubjectRulesReview  content type   api reference  description   SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace   title   SelfSubjectRulesReview  weight  3 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  authorization k8s io v1    import  k8s io api authorization v1        SelfSubjectRulesReview   SelfSubjectRulesReview   SelfSubjectRulesReview enumerates the set of actions the current user 
can perform within a namespace  The returned list of actions may be incomplete depending on the server s authorization mode  and any errors experienced during the evaluation  SelfSubjectRulesReview should be used by UIs to show hide actions  or to quickly let an end user reason about their permissions  It should NOT Be used by external systems to drive authorization decisions as this raises confused deputy  cache lifetime revocation  and correctness concerns  SubjectAccessReview  and LocalAccessReview are the correct way to defer authorization decisions to the API server    hr       apiVersion    authorization k8s io v1       kind    SelfSubjectRulesReview       metadata     a href    ObjectMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    SelfSubjectRulesReviewSpec  a    required    Spec holds information about the request being evaluated       status    SubjectRulesReviewStatus     Status is filled in by the server and indicates the set of actions a user can perform      a name  SubjectRulesReviewStatus    a     SubjectRulesReviewStatus contains the result of a rules check  This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation  Because authorization rules are additive  if a rule appears in a list it s safe to assume the subject has that permission  even if that list is incomplete          status incomplete    boolean   required      Incomplete is true when the rules returned by this call are incomplete  This is most commonly encountered when an authorizer  such as an external authorizer  doesn t support rules evaluation         status nonResourceRules      NonResourceRule   required       Atomic  will be replaced during a merge           NonResourceRules is the list of actions the subject is allowed to perform on non resources  The list ordering isn t significant  
may contain duplicates  and possibly be incomplete        a name  NonResourceRule    a       NonResourceRule holds information that describes a rule for the non resource           status nonResourceRules verbs      string   required         Atomic  will be replaced during a merge               Verb is a list of kubernetes non resource API verbs  like  get  post  put  delete  patch  head  options       means all           status nonResourceRules nonResourceURLs      string          Atomic  will be replaced during a merge               NonResourceURLs is a set of partial urls that a user should have access to    s are allowed  but only as the full  final step in the path       means all         status resourceRules      ResourceRule   required       Atomic  will be replaced during a merge           ResourceRules is the list of actions the subject is allowed to perform on resources  The list ordering isn t significant  may contain duplicates  and possibly be incomplete        a name  ResourceRule    a       ResourceRule is the list of actions the subject is allowed to perform on resources  The list ordering isn t significant  may contain duplicates  and possibly be incomplete            status resourceRules verbs      string   required         Atomic  will be replaced during a merge               Verb is a list of kubernetes resource API verbs  like  get  list  watch  create  update  delete  proxy       means all           status resourceRules apiGroups      string          Atomic  will be replaced during a merge               APIGroups is the name of the APIGroup that contains the resources   If multiple API groups are specified  any action requested against one of the enumerated resources in any API group will be allowed       means all           status resourceRules resourceNames      string          Atomic  will be replaced during a merge               ResourceNames is an optional white list of names that the rule applies to   An empty set means that everything is 
allowed       means all           status resourceRules resources      string          Atomic  will be replaced during a merge               Resources is a list of resources this rule applies to       means all in the specified apiGroups            foo  represents the subresource  foo  for all resources in the specified apiGroups         status evaluationError    string       EvaluationError can appear in combination with Rules  It indicates an error occurred during rule evaluation  such as an authorizer that doesn t support rule evaluation  and that ResourceRules and or NonResourceRules may be incomplete          SelfSubjectRulesReviewSpec   SelfSubjectRulesReviewSpec   SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview    hr       namespace    string     Namespace to evaluate rules for  Required          Operations   Operations      hr             create  create a SelfSubjectRulesReview       HTTP Request  POST  apis authorization k8s io v1 selfsubjectrulesreviews       Parameters       body     a href    SelfSubjectRulesReview  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    SelfSubjectRulesReview  a    OK  201   a href    SelfSubjectRulesReview  a    Created  202   a href    SelfSubjectRulesReview  a    Accepted  401  Unauthorized "}
{"questions":"kubernetes reference kind LocalSubjectAccessReview LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace title LocalSubjectAccessReview import k8s io api authorization v1 contenttype apireference apiVersion authorization k8s io v1 apimetadata autogenerated true weight 1","answers":"---\napi_metadata:\n  apiVersion: \"authorization.k8s.io\/v1\"\n  import: \"k8s.io\/api\/authorization\/v1\"\n  kind: \"LocalSubjectAccessReview\"\ncontent_type: \"api_reference\"\ndescription: \"LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace.\"\ntitle: \"LocalSubjectAccessReview\"\nweight: 1\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: authorization.k8s.io\/v1`\n\n`import \"k8s.io\/api\/authorization\/v1\"`\n\n\n## LocalSubjectAccessReview {#LocalSubjectAccessReview}\n\nLocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking.\n\n<hr>\n\n- **apiVersion**: authorization.k8s.io\/v1\n\n\n- **kind**: LocalSubjectAccessReview\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">SubjectAccessReviewSpec<\/a>), required\n\n  Spec holds information about the request being evaluated.  spec.namespace must be equal to the namespace you made the request against.  If empty, it is defaulted.\n\n- **status** (<a href=\"\">SubjectAccessReviewStatus<\/a>)\n\n  Status is filled in by the server and indicates whether the request is allowed or not\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `create` create a LocalSubjectAccessReview\n\n#### HTTP Request\n\nPOST \/apis\/authorization.k8s.io\/v1\/namespaces\/{namespace}\/localsubjectaccessreviews\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">LocalSubjectAccessReview<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LocalSubjectAccessReview<\/a>): OK\n\n201 (<a href=\"\">LocalSubjectAccessReview<\/a>): Created\n\n202 (<a href=\"\">LocalSubjectAccessReview<\/a>): Accepted\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   authorization k8s io v1    import   k8s io api authorization v1    kind   LocalSubjectAccessReview  content type   api reference  description   LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace   title   LocalSubjectAccessReview  weight  1 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To 
learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  authorization k8s io v1    import  k8s io api authorization v1        LocalSubjectAccessReview   LocalSubjectAccessReview   LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace  Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking    hr       apiVersion    authorization k8s io v1       kind    LocalSubjectAccessReview       metadata     a href    ObjectMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    SubjectAccessReviewSpec  a    required    Spec holds information about the request being evaluated   spec namespace must be equal to the namespace you made the request against   If empty  it is defaulted       status     a href    SubjectAccessReviewStatus  a      Status is filled in by the server and indicates whether the request is allowed or not         Operations   Operations      hr             create  create a LocalSubjectAccessReview       HTTP Request  POST  apis authorization k8s io v1 namespaces  namespace  localsubjectaccessreviews       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    LocalSubjectAccessReview  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a  
      pretty     in query    string     a href    pretty  a          Response   200   a href    LocalSubjectAccessReview  a    OK  201   a href    LocalSubjectAccessReview  a    Created  202   a href    LocalSubjectAccessReview  a    Accepted  401  Unauthorized "}
{"questions":"kubernetes reference import k8s io api rbac v1 apiVersion rbac authorization k8s io v1 contenttype apireference title RoleBinding weight 8 apimetadata autogenerated true kind RoleBinding RoleBinding references a role but does not contain it","answers":"---\napi_metadata:\n  apiVersion: \"rbac.authorization.k8s.io\/v1\"\n  import: \"k8s.io\/api\/rbac\/v1\"\n  kind: \"RoleBinding\"\ncontent_type: \"api_reference\"\ndescription: \"RoleBinding references a role, but does not contain it.\"\ntitle: \"RoleBinding\"\nweight: 8\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: rbac.authorization.k8s.io\/v1`\n\n`import \"k8s.io\/api\/rbac\/v1\"`\n\n\n## RoleBinding {#RoleBinding}\n\nRoleBinding references a role, but does not contain it.  It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in.  RoleBindings in a given namespace only have effect in that namespace.\n\n<hr>\n\n- **apiVersion**: rbac.authorization.k8s.io\/v1\n\n\n- **kind**: RoleBinding\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata.\n\n- **roleRef** (RoleRef), required\n\n  RoleRef can reference a Role in the current namespace or a ClusterRole in the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error. 
This field is immutable.\n\n  <a name=\"RoleRef\"><\/a>\n  *RoleRef contains information that points to the role being used*\n\n  - **roleRef.apiGroup** (string), required\n\n    APIGroup is the group for the resource being referenced\n\n  - **roleRef.kind** (string), required\n\n    Kind is the type of resource being referenced\n\n  - **roleRef.name** (string), required\n\n    Name is the name of resource being referenced\n\n- **subjects** ([]Subject)\n\n  *Atomic: will be replaced during a merge*\n  \n  Subjects holds references to the objects the role applies to.\n\n  <a name=\"Subject\"><\/a>\n  *Subject contains a reference to the object or user identities a role binding applies to.  This can either hold a direct API object reference, or a value for non-objects such as user and group names.*\n\n  - **subjects.kind** (string), required\n\n    Kind of object being referenced. Values defined by this API group are \"User\", \"Group\", and \"ServiceAccount\". If the Authorizer does not recognize the kind value, the Authorizer should report an error.\n\n  - **subjects.name** (string), required\n\n    Name of the object being referenced.\n\n  - **subjects.apiGroup** (string)\n\n    APIGroup holds the API group of the referenced subject. Defaults to \"\" for ServiceAccount subjects. Defaults to \"rbac.authorization.k8s.io\" for User and Group subjects.\n\n  - **subjects.namespace** (string)\n\n    Namespace of the referenced object.  
If the object kind is non-namespace, such as \"User\" or \"Group\", and this value is not empty the Authorizer should report an error.\n\n\n\n\n\n## RoleBindingList {#RoleBindingList}\n\nRoleBindingList is a collection of RoleBindings\n\n<hr>\n\n- **apiVersion**: rbac.authorization.k8s.io\/v1\n\n\n- **kind**: RoleBindingList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard object's metadata.\n\n- **items** ([]<a href=\"\">RoleBinding<\/a>), required\n\n  Items is a list of RoleBindings\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified RoleBinding\n\n#### HTTP Request\n\nGET \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/rolebindings\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the RoleBinding\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RoleBinding<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind RoleBinding\n\n#### HTTP Request\n\nGET \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/rolebindings\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- 
**sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RoleBindingList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind RoleBinding\n\n#### HTTP Request\n\nGET \/apis\/rbac.authorization.k8s.io\/v1\/rolebindings\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RoleBindingList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a RoleBinding\n\n#### HTTP Request\n\nPOST \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/rolebindings\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">RoleBinding<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a 
href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RoleBinding<\/a>): OK\n\n201 (<a href=\"\">RoleBinding<\/a>): Created\n\n202 (<a href=\"\">RoleBinding<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified RoleBinding\n\n#### HTTP Request\n\nPUT \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/rolebindings\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the RoleBinding\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">RoleBinding<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RoleBinding<\/a>): OK\n\n201 (<a href=\"\">RoleBinding<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified RoleBinding\n\n#### HTTP Request\n\nPATCH \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/rolebindings\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the RoleBinding\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RoleBinding<\/a>): OK\n\n201 (<a 
href=\"\">RoleBinding<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a RoleBinding\n\n#### HTTP Request\n\nDELETE \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/rolebindings\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the RoleBinding\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of RoleBinding\n\n#### HTTP Request\n\nDELETE \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/rolebindings\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a 
href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   rbac authorization k8s io v1    import   k8s io api rbac v1    kind   RoleBinding  content type   api reference  description   RoleBinding references a role  but does not contain it   title   RoleBinding  weight  8 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  rbac authorization k8s io v1    import  k8s io api rbac v1        RoleBinding   RoleBinding   RoleBinding references a role  but does not contain it   It can reference a Role in the same namespace or a ClusterRole in the global namespace  It adds who information via Subjects and namespace information by which namespace it exists in   RoleBindings in a given namespace only have effect in that namespace    hr       apiVersion    rbac authorization k8s io v1       kind    RoleBinding       metadata     a href    ObjectMeta  a      Standard object s metadata       roleRef    RoleRef   required    RoleRef can reference a Role in the current namespace or a ClusterRole in the global namespace  If the RoleRef cannot be resolved  the Authorizer must return an error  This field is 
immutable      a name  RoleRef    a     RoleRef contains information that points to the role being used         roleRef apiGroup    string   required      APIGroup is the group for the resource being referenced        roleRef kind    string   required      Kind is the type of resource being referenced        roleRef name    string   required      Name is the name of resource being referenced      subjects      Subject      Atomic  will be replaced during a merge       Subjects holds references to the objects the role applies to      a name  Subject    a     Subject contains a reference to the object or user identities a role binding applies to   This can either hold a direct API object reference  or a value for non objects such as user and group names          subjects kind    string   required      Kind of object being referenced  Values defined by this API group are  User    Group   and  ServiceAccount   If the Authorizer does not recognized the kind value  the Authorizer should report an error         subjects name    string   required      Name of the object being referenced         subjects apiGroup    string       APIGroup holds the API group of the referenced subject  Defaults to    for ServiceAccount subjects  Defaults to  rbac authorization k8s io  for User and Group subjects         subjects namespace    string       Namespace of the referenced object   If the object kind is non namespace  such as  User  or  Group   and this value is not empty the Authorizer should report an error          RoleBindingList   RoleBindingList   RoleBindingList is a collection of RoleBindings   hr       apiVersion    rbac authorization k8s io v1       kind    RoleBindingList       metadata     a href    ListMeta  a      Standard object s metadata       items       a href    RoleBinding  a    required    Items is a list of RoleBindings         Operations   Operations      hr             get  read the specified RoleBinding       HTTP Request  GET  apis rbac authorization k8s io 
v1 namespaces  namespace  rolebindings  name        Parameters       name     in path    string  required    name of the RoleBinding       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    RoleBinding  a    OK  401  Unauthorized        list  list or watch objects of kind RoleBinding       HTTP Request  GET  apis rbac authorization k8s io v1 namespaces  namespace  rolebindings       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    RoleBindingList  a    OK  401  Unauthorized        list  list or watch objects of kind RoleBinding       HTTP Request  GET  apis rbac authorization k8s io v1 rolebindings       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        
pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    RoleBindingList  a    OK  401  Unauthorized        create  create a RoleBinding       HTTP Request  POST  apis rbac authorization k8s io v1 namespaces  namespace  rolebindings       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    RoleBinding  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    RoleBinding  a    OK  201   a href    RoleBinding  a    Created  202   a href    RoleBinding  a    Accepted  401  Unauthorized        update  replace the specified RoleBinding       HTTP Request  PUT  apis rbac authorization k8s io v1 namespaces  namespace  rolebindings  name        Parameters       name     in path    string  required    name of the RoleBinding       namespace     in path    string  required     a href    namespace  a        body     a href    RoleBinding  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    RoleBinding  a    OK  201   a href    RoleBinding  a    Created  401  
Unauthorized        patch  partially update the specified RoleBinding       HTTP Request  PATCH  apis rbac authorization k8s io v1 namespaces  namespace  rolebindings  name        Parameters       name     in path    string  required    name of the RoleBinding       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    RoleBinding  a    OK  201   a href    RoleBinding  a    Created  401  Unauthorized        delete  delete a RoleBinding       HTTP Request  DELETE  apis rbac authorization k8s io v1 namespaces  namespace  rolebindings  name        Parameters       name     in path    string  required    name of the RoleBinding       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of RoleBinding       HTTP Request  DELETE  apis rbac authorization k8s io v1 namespaces  namespace  rolebindings       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    
dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference weight 7 title Role import k8s io api rbac v1 apiVersion rbac authorization k8s io v1 contenttype apireference Role is a namespaced logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding apimetadata autogenerated true kind Role","answers":"---\napi_metadata:\n  apiVersion: \"rbac.authorization.k8s.io\/v1\"\n  import: \"k8s.io\/api\/rbac\/v1\"\n  kind: \"Role\"\ncontent_type: \"api_reference\"\ndescription: \"Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding.\"\ntitle: \"Role\"\nweight: 7\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. 
You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: rbac.authorization.k8s.io\/v1`\n\n`import \"k8s.io\/api\/rbac\/v1\"`\n\n\n## Role {#Role}\n\nRole is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding.\n\n<hr>\n\n- **apiVersion**: rbac.authorization.k8s.io\/v1\n\n\n- **kind**: Role\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata.\n\n- **rules** ([]PolicyRule)\n\n  *Atomic: will be replaced during a merge*\n  \n  Rules holds all the PolicyRules for this Role\n\n  <a name=\"PolicyRule\"><\/a>\n  *PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to.*\n\n  - **rules.apiGroups** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    APIGroups is the name of the APIGroup that contains the resources.  If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. \"\" represents the core API group and \"*\" represents all API groups.\n\n  - **rules.resources** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    Resources is a list of resources this rule applies to. '*' represents all resources.\n\n  - **rules.verbs** ([]string), required\n\n    *Atomic: will be replaced during a merge*\n    \n    Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs.\n\n  - **rules.resourceNames** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    ResourceNames is an optional white list of names that the rule applies to.  
An empty set means that everything is allowed.\n\n  - **rules.nonResourceURLs** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    NonResourceURLs is a set of partial URLs that a user should have access to.  *s are allowed, but only as the full, final step in the path. Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as \"pods\" or \"secrets\") or non-resource URL paths (such as \"\/api\"), but not both.\n\n\n\n\n\n## RoleList {#RoleList}\n\nRoleList is a collection of Roles\n\n<hr>\n\n- **apiVersion**: rbac.authorization.k8s.io\/v1\n\n\n- **kind**: RoleList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard object's metadata.\n\n- **items** ([]<a href=\"\">Role<\/a>), required\n\n  Items is a list of Roles\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified Role\n\n#### HTTP Request\n\nGET \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/roles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Role\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Role<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Role\n\n#### HTTP Request\n\nGET \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/roles\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- 
**limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RoleList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Role\n\n#### HTTP Request\n\nGET \/apis\/rbac.authorization.k8s.io\/v1\/roles\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RoleList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a Role\n\n#### HTTP Request\n\nPOST \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/roles\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a 
href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Role<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Role<\/a>): OK\n\n201 (<a href=\"\">Role<\/a>): Created\n\n202 (<a href=\"\">Role<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified Role\n\n#### HTTP Request\n\nPUT \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/roles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Role\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Role<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Role<\/a>): OK\n\n201 (<a href=\"\">Role<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified Role\n\n#### HTTP Request\n\nPATCH \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/roles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Role\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** 
(*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Role<\/a>): OK\n\n201 (<a href=\"\">Role<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a Role\n\n#### HTTP Request\n\nDELETE \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/roles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Role\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of Role\n\n#### HTTP Request\n\nDELETE \/apis\/rbac.authorization.k8s.io\/v1\/namespaces\/{namespace}\/roles\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a 
href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
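The Role record above documents the PolicyRule fields: `verbs`, `apiGroups`, and `resources` each accept `"*"` as a match-everything wildcard, and `""` names the core API group. A minimal sketch of those documented matching semantics follows — it mirrors only the field meanings stated in the reference, not the full evaluation the kube-apiserver performs (which also covers `resourceNames`, `nonResourceURLs`, subresources, and so on):

```python
# Sketch of PolicyRule matching per the documented field semantics:
# "*" in a list matches anything, and "" denotes the core API group.
def rule_allows(rule, verb, api_group, resource):
    def match(values, wanted):
        return "*" in values or wanted in values
    return (match(rule.get("verbs", []), verb)
            and match(rule.get("apiGroups", []), api_group)
            and match(rule.get("resources", []), resource))

# A typical read-only rule for pods in the core ("") API group.
pod_reader = {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]}
```

For example, `pod_reader` permits `get` on core-group `pods` but not `delete`, while a rule of all-wildcards permits any verb on any resource.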
{"questions":"kubernetes reference kind ClusterRole weight 5 title ClusterRole import k8s io api rbac v1 apiVersion rbac authorization k8s io v1 contenttype apireference apimetadata autogenerated true ClusterRole is a cluster level logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding","answers":"---\napi_metadata:\n  apiVersion: \"rbac.authorization.k8s.io\/v1\"\n  import: \"k8s.io\/api\/rbac\/v1\"\n  kind: \"ClusterRole\"\ncontent_type: \"api_reference\"\ndescription: \"ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.\"\ntitle: \"ClusterRole\"\nweight: 5\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: rbac.authorization.k8s.io\/v1`\n\n`import \"k8s.io\/api\/rbac\/v1\"`\n\n\n## ClusterRole {#ClusterRole}\n\nClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.\n\n<hr>\n\n- **apiVersion**: rbac.authorization.k8s.io\/v1\n\n\n- **kind**: ClusterRole\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata.\n\n- **aggregationRule** (AggregationRule)\n\n  AggregationRule is an optional field that describes how to build the Rules for this ClusterRole. 
If AggregationRule is set, then the Rules are controller managed and direct changes to Rules will be stomped by the controller.\n\n  <a name=\"AggregationRule\"><\/a>\n  *AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole*\n\n  - **aggregationRule.clusterRoleSelectors** ([]<a href=\"\">LabelSelector<\/a>)\n\n    *Atomic: will be replaced during a merge*\n    \n    ClusterRoleSelectors holds a list of selectors which will be used to find ClusterRoles and create the rules. If any of the selectors match, then the ClusterRole's permissions will be added\n\n- **rules** ([]PolicyRule)\n\n  *Atomic: will be replaced during a merge*\n  \n  Rules holds all the PolicyRules for this ClusterRole\n\n  <a name=\"PolicyRule\"><\/a>\n  *PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to.*\n\n  - **rules.apiGroups** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    APIGroups is the name of the APIGroup that contains the resources.  If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. \"\" represents the core API group and \"*\" represents all API groups.\n\n  - **rules.resources** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    Resources is a list of resources this rule applies to. '*' represents all resources.\n\n  - **rules.verbs** ([]string), required\n\n    *Atomic: will be replaced during a merge*\n    \n    Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs.\n\n  - **rules.resourceNames** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    ResourceNames is an optional white list of names that the rule applies to.  
An empty set means that everything is allowed.\n\n  - **rules.nonResourceURLs** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    NonResourceURLs is a set of partial URLs that a user should have access to.  *s are allowed, but only as the full, final step in the path. Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as \"pods\" or \"secrets\") or non-resource URL paths (such as \"\/api\"), but not both.\n\n\n\n\n\n## ClusterRoleList {#ClusterRoleList}\n\nClusterRoleList is a collection of ClusterRoles\n\n<hr>\n\n- **apiVersion**: rbac.authorization.k8s.io\/v1\n\n\n- **kind**: ClusterRoleList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard object's metadata.\n\n- **items** ([]<a href=\"\">ClusterRole<\/a>), required\n\n  Items is a list of ClusterRoles\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ClusterRole\n\n#### HTTP Request\n\nGET \/apis\/rbac.authorization.k8s.io\/v1\/clusterroles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterRole\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterRole<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ClusterRole\n\n#### HTTP Request\n\nGET \/apis\/rbac.authorization.k8s.io\/v1\/clusterroles\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a 
href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterRoleList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ClusterRole\n\n#### HTTP Request\n\nPOST \/apis\/rbac.authorization.k8s.io\/v1\/clusterroles\n\n#### Parameters\n\n\n- **body**: <a href=\"\">ClusterRole<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterRole<\/a>): OK\n\n201 (<a href=\"\">ClusterRole<\/a>): Created\n\n202 (<a href=\"\">ClusterRole<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ClusterRole\n\n#### HTTP Request\n\nPUT \/apis\/rbac.authorization.k8s.io\/v1\/clusterroles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterRole\n\n\n- **body**: <a href=\"\">ClusterRole<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterRole<\/a>): OK\n\n201 (<a href=\"\">ClusterRole<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially 
update the specified ClusterRole\n\n#### HTTP Request\n\nPATCH \/apis\/rbac.authorization.k8s.io\/v1\/clusterroles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterRole\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterRole<\/a>): OK\n\n201 (<a href=\"\">ClusterRole<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ClusterRole\n\n#### HTTP Request\n\nDELETE \/apis\/rbac.authorization.k8s.io\/v1\/clusterroles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterRole\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ClusterRole\n\n#### HTTP Request\n\nDELETE \/apis\/rbac.authorization.k8s.io\/v1\/clusterroles\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): 
integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
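The ClusterRole record above documents the `aggregationRule.clusterRoleSelectors` and `rules` fields. A minimal sketch of how the two interact — the names and the `rbac.example.com` aggregation label here are illustrative, not part of the reference:

```yaml
# Hypothetical aggregated ClusterRole: the controller fills in "rules"
# from every ClusterRole matching the selectors, so direct edits to
# "rules" on this object are stomped.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        rbac.example.com/aggregate-to-monitoring: "true"
rules: []   # controller-managed
---
# A component ClusterRole that opts in via the selector's label.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-endpoints
  labels:
    rbac.example.com/aggregate-to-monitoring: "true"
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
```

Because any selector match is sufficient, adding another labeled ClusterRole later extends `monitoring` without touching the parent object.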
{"questions":"kubernetes reference title ResourceClaim v1alpha3 weight 16 kind ResourceClaim contenttype apireference apiVersion resource k8s io v1alpha3 import k8s io api resource v1alpha3 apimetadata autogenerated true ResourceClaim describes a request for access to resources in the cluster for use by workloads","answers":"---\napi_metadata:\n  apiVersion: \"resource.k8s.io\/v1alpha3\"\n  import: \"k8s.io\/api\/resource\/v1alpha3\"\n  kind: \"ResourceClaim\"\ncontent_type: \"api_reference\"\ndescription: \"ResourceClaim describes a request for access to resources in the cluster, for use by workloads.\"\ntitle: \"ResourceClaim v1alpha3\"\nweight: 16\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: resource.k8s.io\/v1alpha3`\n\n`import \"k8s.io\/api\/resource\/v1alpha3\"`\n\n\n## ResourceClaim {#ResourceClaim}\n\nResourceClaim describes a request for access to resources in the cluster, for use by workloads. For example, if a workload needs an accelerator device with specific properties, this is how that request is expressed. 
The status stanza tracks whether this claim has been satisfied and what specific resources have been allocated.\n\nThis is an alpha type and requires enabling the DynamicResourceAllocation feature gate.\n\n<hr>\n\n- **apiVersion**: resource.k8s.io\/v1alpha3\n\n\n- **kind**: ResourceClaim\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata\n\n- **spec** (<a href=\"\">ResourceClaimSpec<\/a>), required\n\n  Spec describes what is being requested and how to configure it. The spec is immutable.\n\n- **status** (<a href=\"\">ResourceClaimStatus<\/a>)\n\n  Status describes whether the claim is ready to use and what has been allocated.\n\n\n\n\n\n## ResourceClaimSpec {#ResourceClaimSpec}\n\nResourceClaimSpec defines what is being requested in a ResourceClaim and how to configure it.\n\n<hr>\n\n- **controller** (string)\n\n  Controller is the name of the DRA driver that is meant to handle allocation of this claim. If empty, allocation is handled by the scheduler while scheduling a pod.\n  \n  Must be a DNS subdomain and should end with a DNS domain owned by the vendor of the driver.\n  \n  This is an alpha field and requires enabling the DRAControlPlaneController feature gate.\n\n- **devices** (DeviceClaim)\n\n  Devices defines how to request devices.\n\n  <a name=\"DeviceClaim\"><\/a>\n  *DeviceClaim defines how to request devices with a ResourceClaim.*\n\n  - **devices.config** ([]DeviceClaimConfiguration)\n\n    *Atomic: will be replaced during a merge*\n    \n    This field holds configuration for multiple potential drivers which could satisfy requests in this claim. 
It is ignored while allocating the claim.\n\n    <a name=\"DeviceClaimConfiguration\"><\/a>\n    *DeviceClaimConfiguration is used for configuration parameters in DeviceClaim.*\n\n    - **devices.config.opaque** (OpaqueDeviceConfiguration)\n\n      Opaque provides driver-specific configuration parameters.\n\n      <a name=\"OpaqueDeviceConfiguration\"><\/a>\n      *OpaqueDeviceConfiguration contains configuration parameters for a driver in a format defined by the driver vendor.*\n\n      - **devices.config.opaque.driver** (string), required\n\n        Driver is used to determine which kubelet plugin needs to be passed these configuration parameters.\n        \n        An admission policy provided by the driver developer could use this to decide whether it needs to validate them.\n        \n        Must be a DNS subdomain and should end with a DNS domain owned by the vendor of the driver.\n\n      - **devices.config.opaque.parameters** (RawExtension), required\n\n        Parameters can contain arbitrary data. It is the responsibility of the driver developer to handle validation and versioning. Typically this includes self-identification and a version (\"kind\" + \"apiVersion\" for Kubernetes types), with conversion between different versions.\n\n        <a name=\"RawExtension\"><\/a>\n        *RawExtension is used to hold extensions in external versions.\n        \n        To use this, make a field which has RawExtension as its type in your external, versioned struct, and Object in your internal struct. 
You also need to register your various plugin types.\n        \n        \/\/ Internal package:\n        \n        \ttype MyAPIObject struct {\n        \t\truntime.TypeMeta `json:\",inline\"`\n        \t\tMyPlugin runtime.Object `json:\"myPlugin\"`\n        \t}\n        \n        \ttype PluginA struct {\n        \t\tAOption string `json:\"aOption\"`\n        \t}\n        \n        \/\/ External package:\n        \n        \ttype MyAPIObject struct {\n        \t\truntime.TypeMeta `json:\",inline\"`\n        \t\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n        \t}\n        \n        \ttype PluginA struct {\n        \t\tAOption string `json:\"aOption\"`\n        \t}\n        \n        \/\/ On the wire, the JSON will look something like this:\n        \n        \t{\n        \t\t\"kind\":\"MyAPIObject\",\n        \t\t\"apiVersion\":\"v1\",\n        \t\t\"myPlugin\": {\n        \t\t\t\"kind\":\"PluginA\",\n        \t\t\t\"aOption\":\"foo\",\n        \t\t},\n        \t}\n        \n        So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (using pkg\/conversion) into the internal struct. The runtime package's DefaultScheme has conversion functions installed which will unpack the JSON stored in RawExtension, turning it into the correct object type, and storing it in the Object. (TODO: In the case where the object is of an unknown type, a runtime.Unknown object will be created and stored.)*\n\n    - **devices.config.requests** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      Requests lists the names of requests where the configuration applies. 
If empty, it applies to all requests.\n\n  - **devices.constraints** ([]DeviceConstraint)\n\n    *Atomic: will be replaced during a merge*\n    \n    These constraints must be satisfied by the set of devices that get allocated for the claim.\n\n    <a name=\"DeviceConstraint\"><\/a>\n    *DeviceConstraint must have exactly one field set besides Requests.*\n\n    - **devices.constraints.matchAttribute** (string)\n\n      MatchAttribute requires that all devices in question have this attribute and that its type and value are the same across those devices.\n      \n      For example, if you specified \"dra.example.com\/numa\" (a hypothetical example!), then only devices in the same NUMA node will be chosen. A device which does not have that attribute will not be chosen. All devices should use a value of the same type for this attribute because that is part of its specification, but if one device doesn't, then it also will not be chosen.\n      \n      Must include the domain qualifier.\n\n    - **devices.constraints.requests** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      Requests is a list of the one or more requests in this claim which must co-satisfy this constraint. If a request is fulfilled by multiple devices, then all of the devices must satisfy the constraint. If this is not specified, this constraint applies to all requests in this claim.\n\n  - **devices.requests** ([]DeviceRequest)\n\n    *Atomic: will be replaced during a merge*\n    \n    Requests represent individual requests for distinct devices which must all be satisfied. If empty, nothing needs to be allocated.\n\n    <a name=\"DeviceRequest\"><\/a>\n    *DeviceRequest is a request for devices required for a claim. This is typically a request for a single resource like a device, but can also ask for several identical devices.\n    \n    A DeviceClassName is currently required. Clients must check that it is indeed set. 
Its absence indicates that something changed in a way that is not supported by the client yet, in which case it must refuse to handle the request.*\n\n    - **devices.requests.deviceClassName** (string), required\n\n      DeviceClassName references a specific DeviceClass, which can define additional configuration and selectors to be inherited by this request.\n      \n      A class is required. Which classes are available depends on the cluster.\n      \n      Administrators may use this to restrict which devices may get requested by only installing classes with selectors for permitted devices. If users are free to request anything without restrictions, then administrators can create an empty DeviceClass for users to reference.\n\n    - **devices.requests.name** (string), required\n\n      Name can be used to reference this request in a pod.spec.containers[].resources.claims entry and in a constraint of the claim.\n      \n      Must be a DNS label.\n\n    - **devices.requests.adminAccess** (boolean)\n\n      AdminAccess indicates that this is a claim for administrative access to the device(s). Claims with AdminAccess are expected to be used for monitoring or other management services for a device. They ignore all ordinary claims to the device with respect to access modes and any resource allocations.\n\n    - **devices.requests.allocationMode** (string)\n\n      AllocationMode and its related fields define how devices are allocated to satisfy this request. Supported values are:\n      \n      - ExactCount: This request is for a specific number of devices.\n        This is the default. The exact number is provided in the\n        count field.\n      \n      - All: This request is for all of the matching devices in a pool.\n        Allocation will fail if some devices are already allocated,\n        unless adminAccess is requested.\n      \n      If AllocationMode is not specified, the default mode is ExactCount. 
If the mode is ExactCount and count is not specified, the default count is one. Any other requests must specify this field.\n      \n      More modes may get added in the future. Clients must refuse to handle requests with unknown modes.\n\n    - **devices.requests.count** (int64)\n\n      Count is used only when the count mode is \"ExactCount\". Must be greater than zero. If AllocationMode is ExactCount and this field is not specified, the default is one.\n\n    - **devices.requests.selectors** ([]DeviceSelector)\n\n      *Atomic: will be replaced during a merge*\n      \n      Selectors define criteria which must be satisfied by a specific device in order for that device to be considered for this request. All selectors must be satisfied for a device to be considered.\n\n      <a name=\"DeviceSelector\"><\/a>\n      *DeviceSelector must have exactly one field set.*\n\n      - **devices.requests.selectors.cel** (CELDeviceSelector)\n\n        CEL contains a CEL expression for selecting a device.\n\n        <a name=\"CELDeviceSelector\"><\/a>\n        *CELDeviceSelector contains a CEL expression for selecting a device.*\n\n        - **devices.requests.selectors.cel.expression** (string), required\n\n          Expression is a CEL expression which evaluates a single device. It must evaluate to true when the device under consideration satisfies the desired criteria, and false when it does not. Any other result is an error and causes allocation of devices to abort.\n          \n          The expression's input is an object named \"device\", which carries the following properties:\n           - driver (string): the name of the driver which defines this device.\n           - attributes (map[string]object): the device's attributes, grouped by prefix\n             (e.g. 
device.attributes[\"dra.example.com\"] evaluates to an object with all\n             of the attributes which were prefixed by \"dra.example.com\").\n           - capacity (map[string]object): the device's capacities, grouped by prefix.\n          \n          Example: Consider a device with driver=\"dra.example.com\", which exposes two attributes named \"model\" and \"ext.example.com\/family\" and which exposes one capacity named \"modules\". The input to this expression would have the following fields:\n          \n              device.driver\n              device.attributes[\"dra.example.com\"].model\n              device.attributes[\"ext.example.com\"].family\n              device.capacity[\"dra.example.com\"].modules\n          \n          The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expression that is meant to consider devices from different drivers.\n          \n          The value type of each attribute is defined by the device definition, and users who write these expressions must consult the documentation for their specific drivers. The value type of each capacity is Quantity.\n          \n          If an unknown prefix is used as a lookup in either device.attributes or device.capacity, an empty map will be returned. Any reference to an unknown field will cause an evaluation error and allocation to abort.\n          \n          A robust expression should check for the existence of attributes before referencing them.\n          \n          For ease of use, the cel.bind() function is enabled, and can be used to simplify expressions that access multiple attributes with the same domain. 
For example:\n          \n              cel.bind(dra, device.attributes[\"dra.example.com\"], dra.someBool && dra.anotherBool)\n\n\n\n\n\n## ResourceClaimStatus {#ResourceClaimStatus}\n\nResourceClaimStatus tracks whether the resource has been allocated and what the result of that was.\n\n<hr>\n\n- **allocation** (AllocationResult)\n\n  Allocation is set once the claim has been allocated successfully.\n\n  <a name=\"AllocationResult\"><\/a>\n  *AllocationResult contains attributes of an allocated resource.*\n\n  - **allocation.controller** (string)\n\n    Controller is the name of the DRA driver which handled the allocation. That driver is also responsible for deallocating the claim. It is empty when the claim can be deallocated without involving a driver.\n    \n    A driver may allocate devices provided by other drivers, so this driver name here can be different from the driver names listed for the results.\n    \n    This is an alpha field and requires enabling the DRAControlPlaneController feature gate.\n\n  - **allocation.devices** (DeviceAllocationResult)\n\n    Devices is the result of allocating devices.\n\n    <a name=\"DeviceAllocationResult\"><\/a>\n    *DeviceAllocationResult is the result of allocating devices.*\n\n    - **allocation.devices.config** ([]DeviceAllocationConfiguration)\n\n      *Atomic: will be replaced during a merge*\n      \n      This field is a combination of all the claim and class configuration parameters. Drivers can distinguish between those based on a flag.\n      \n      This includes configuration parameters for drivers which have no allocated devices in the result because it is up to the drivers which configuration parameters they support. 
They can silently ignore unknown configuration parameters.\n\n      <a name=\"DeviceAllocationConfiguration\"><\/a>\n      *DeviceAllocationConfiguration gets embedded in an AllocationResult.*\n\n      - **allocation.devices.config.source** (string), required\n\n        Source records whether the configuration comes from a class (and thus is not something that a normal user would have been able to set) or from a claim.\n\n      - **allocation.devices.config.opaque** (OpaqueDeviceConfiguration)\n\n        Opaque provides driver-specific configuration parameters.\n\n        <a name=\"OpaqueDeviceConfiguration\"><\/a>\n        *OpaqueDeviceConfiguration contains configuration parameters for a driver in a format defined by the driver vendor.*\n\n        - **allocation.devices.config.opaque.driver** (string), required\n\n          Driver is used to determine which kubelet plugin needs to be passed these configuration parameters.\n          \n          An admission policy provided by the driver developer could use this to decide whether it needs to validate them.\n          \n          Must be a DNS subdomain and should end with a DNS domain owned by the vendor of the driver.\n\n        - **allocation.devices.config.opaque.parameters** (RawExtension), required\n\n          Parameters can contain arbitrary data. It is the responsibility of the driver developer to handle validation and versioning. Typically this includes self-identification and a version (\"kind\" + \"apiVersion\" for Kubernetes types), with conversion between different versions.\n\n          <a name=\"RawExtension\"><\/a>\n          *RawExtension is used to hold extensions in external versions.\n          \n          To use this, make a field which has RawExtension as its type in your external, versioned struct, and Object in your internal struct. 
You also need to register your various plugin types.\n          \n          \/\/ Internal package:\n          \n          \ttype MyAPIObject struct {\n          \t\truntime.TypeMeta `json:\",inline\"`\n          \t\tMyPlugin runtime.Object `json:\"myPlugin\"`\n          \t}\n          \n          \ttype PluginA struct {\n          \t\tAOption string `json:\"aOption\"`\n          \t}\n          \n          \/\/ External package:\n          \n          \ttype MyAPIObject struct {\n          \t\truntime.TypeMeta `json:\",inline\"`\n          \t\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n          \t}\n          \n          \ttype PluginA struct {\n          \t\tAOption string `json:\"aOption\"`\n          \t}\n          \n          \/\/ On the wire, the JSON will look something like this:\n          \n          \t{\n          \t\t\"kind\":\"MyAPIObject\",\n          \t\t\"apiVersion\":\"v1\",\n          \t\t\"myPlugin\": {\n          \t\t\t\"kind\":\"PluginA\",\n          \t\t\t\"aOption\":\"foo\",\n          \t\t},\n          \t}\n          \n          So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (using pkg\/conversion) into the internal struct. The runtime package's DefaultScheme has conversion functions installed which will unpack the JSON stored in RawExtension, turning it into the correct object type, and storing it in the Object. (TODO: In the case where the object is of an unknown type, a runtime.Unknown object will be created and stored.)*\n\n      - **allocation.devices.config.requests** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Requests lists the names of requests where the configuration applies. 
If empty, it applies to all requests.\n\n    - **allocation.devices.results** ([]DeviceRequestAllocationResult)\n\n      *Atomic: will be replaced during a merge*\n      \n      Results lists all allocated devices.\n\n      <a name=\"DeviceRequestAllocationResult\"><\/a>\n      *DeviceRequestAllocationResult contains the allocation result for one request.*\n\n      - **allocation.devices.results.device** (string), required\n\n        Device references one device instance via its name in the driver's resource pool. It must be a DNS label.\n\n      - **allocation.devices.results.driver** (string), required\n\n        Driver specifies the name of the DRA driver whose kubelet plugin should be invoked to process the allocation once the claim is needed on a node.\n        \n        Must be a DNS subdomain and should end with a DNS domain owned by the vendor of the driver.\n\n      - **allocation.devices.results.pool** (string), required\n\n        This name together with the driver name and the device name field identify which device was allocated (`\\<driver name>\/\\<pool name>\/\\<device name>`).\n        \n        Must not be longer than 253 characters and may contain one or more DNS sub-domains separated by slashes.\n\n      - **allocation.devices.results.request** (string), required\n\n        Request is the name of the request in the claim which caused this device to be allocated. Multiple devices may have been allocated per request.\n\n  - **allocation.nodeSelector** (NodeSelector)\n\n    NodeSelector defines where the allocated resources are available. 
If unset, they are available everywhere.\n\n    <a name=\"NodeSelector\"><\/a>\n    *A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.*\n\n    - **allocation.nodeSelector.nodeSelectorTerms** ([]NodeSelectorTerm), required\n\n      *Atomic: will be replaced during a merge*\n      \n      Required. A list of node selector terms. The terms are ORed.\n\n      <a name=\"NodeSelectorTerm\"><\/a>\n      *A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.*\n\n      - **allocation.nodeSelector.nodeSelectorTerms.matchExpressions** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n        *Atomic: will be replaced during a merge*\n        \n        A list of node selector requirements by node's labels.\n\n      - **allocation.nodeSelector.nodeSelectorTerms.matchFields** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n        *Atomic: will be replaced during a merge*\n        \n        A list of node selector requirements by node's fields.\n\n- **deallocationRequested** (boolean)\n\n  Indicates that a claim is to be deallocated. While this is set, no new consumers may be added to ReservedFor.\n  \n  This is only used if the claim needs to be deallocated by a DRA driver. That driver then must deallocate this claim and reset the field together with clearing the Allocation field.\n  \n  This is an alpha field and requires enabling the DRAControlPlaneController feature gate.\n\n- **reservedFor** ([]ResourceClaimConsumerReference)\n\n  *Patch strategy: merge on key `uid`*\n  \n  *Map: unique values on key uid will be kept during a merge*\n  \n  ReservedFor indicates which entities are currently allowed to use the claim. A Pod which references a ResourceClaim which is not reserved for that Pod will not be started. 
A claim that is in use or might be in use because it has been reserved must not get deallocated.\n  \n  In a cluster with multiple scheduler instances, two pods might get scheduled concurrently by different schedulers. When they reference the same ResourceClaim which already has reached its maximum number of consumers, only one pod can be scheduled.\n  \n  Both schedulers try to add their pod to the claim.status.reservedFor field, but only the update that reaches the API server first gets stored. The other one fails with an error and the scheduler which issued it knows that it must put the pod back into the queue, waiting for the ResourceClaim to become usable again.\n  \n  There can be at most 32 such reservations. This may get increased in the future, but not reduced.\n\n  <a name=\"ResourceClaimConsumerReference\"><\/a>\n  *ResourceClaimConsumerReference contains enough information to let you locate the consumer of a ResourceClaim. The user must be a resource in the same namespace as the ResourceClaim.*\n\n  - **reservedFor.name** (string), required\n\n    Name is the name of the resource being referenced.\n\n  - **reservedFor.resource** (string), required\n\n    Resource is the type of resource being referenced, for example \"pods\".\n\n  - **reservedFor.uid** (string), required\n\n    UID identifies exactly one incarnation of the resource.\n\n  - **reservedFor.apiGroup** (string)\n\n    APIGroup is the group for the resource being referenced. It is empty for the core API. 
This matches the group in the APIVersion that is used when creating the resources.\n\n\n\n\n\n## ResourceClaimList {#ResourceClaimList}\n\nResourceClaimList is a collection of claims.\n\n<hr>\n\n- **apiVersion**: resource.k8s.io\/v1alpha3\n\n\n- **kind**: ResourceClaimList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata\n\n- **items** ([]<a href=\"\">ResourceClaim<\/a>), required\n\n  Items is the list of resource claims.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ResourceClaim\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaims\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaim<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified ResourceClaim\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaims\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaim<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ResourceClaim\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaims\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): 
string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaimList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ResourceClaim\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/resourceclaims\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaimList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ResourceClaim\n\n#### HTTP 
Request\n\nPOST \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaims\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ResourceClaim<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaim<\/a>): OK\n\n201 (<a href=\"\">ResourceClaim<\/a>): Created\n\n202 (<a href=\"\">ResourceClaim<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ResourceClaim\n\n#### HTTP Request\n\nPUT \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaims\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ResourceClaim<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaim<\/a>): OK\n\n201 (<a href=\"\">ResourceClaim<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified ResourceClaim\n\n#### HTTP Request\n\nPUT \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaims\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a 
href=\"\">ResourceClaim<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaim<\/a>): OK\n\n201 (<a href=\"\">ResourceClaim<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ResourceClaim\n\n#### HTTP Request\n\nPATCH \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaims\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaim<\/a>): OK\n\n201 (<a href=\"\">ResourceClaim<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified ResourceClaim\n\n#### HTTP Request\n\nPATCH \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaims\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a 
href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaim<\/a>): OK\n\n201 (<a href=\"\">ResourceClaim<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ResourceClaim\n\n#### HTTP Request\n\nDELETE \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaims\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaim<\/a>): OK\n\n202 (<a href=\"\">ResourceClaim<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ResourceClaim\n\n#### HTTP Request\n\nDELETE \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaims\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in 
query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"
  fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ResourceClaim  a    OK  201   a href    ResourceClaim  a    Created  202   a href    ResourceClaim  a    Accepted  401  Unauthorized        update  replace the specified ResourceClaim       HTTP Request  PUT  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaims  name        Parameters       name     in path    string  required    name of the ResourceClaim       namespace     in path    string  required     a href    namespace  a        body     a href    ResourceClaim  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ResourceClaim  a    OK  201   a href    ResourceClaim  a    Created  401  Unauthorized        update  replace status of the specified ResourceClaim       HTTP Request  PUT  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaims  name  status       Parameters       name     in path    string  required    name of the ResourceClaim       namespace     in path    string  required     a href    namespace  a        body     a href    ResourceClaim  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ResourceClaim  a    OK  201   a href    ResourceClaim  a    Created  401  Unauthorized        patch  partially update the specified ResourceClaim       HTTP Request  PATCH  apis resource k8s io v1alpha3 
namespaces  namespace  resourceclaims  name        Parameters       name     in path    string  required    name of the ResourceClaim       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ResourceClaim  a    OK  201   a href    ResourceClaim  a    Created  401  Unauthorized        patch  partially update status of the specified ResourceClaim       HTTP Request  PATCH  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaims  name  status       Parameters       name     in path    string  required    name of the ResourceClaim       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ResourceClaim  a    OK  201   a href    ResourceClaim  a    Created  401  Unauthorized        delete  delete a ResourceClaim       HTTP Request  DELETE  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaims  name        Parameters       name     in path    string  required    name of the ResourceClaim       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        
gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    ResourceClaim  a    OK  202   a href    ResourceClaim  a    Accepted  401  Unauthorized        deletecollection  delete collection of ResourceClaim       HTTP Request  DELETE  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaims       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
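The eleven ResourceClaim operations above all target the same two path families: a namespaced collection (`/apis/resource.k8s.io/v1alpha3/namespaces/{namespace}/resourceclaims`, optionally followed by `/{name}` and the `/status` subresource) and a cluster-wide read-only collection (`/apis/resource.k8s.io/v1alpha3/resourceclaims`). A minimal sketch of that path layout, assuming hypothetical helper names (real clients such as client-go or kubectl build these paths internally):

```python
# Sketch only: builds the request paths listed above for
# resource.k8s.io/v1alpha3 ResourceClaim operations.
# Helper names are illustrative, not part of any Kubernetes client library.
from urllib.parse import urlencode

API_PREFIX = "/apis/resource.k8s.io/v1alpha3"

def resourceclaim_path(namespace=None, name=None, status=False):
    """Path for a ResourceClaim get/list/create/update/patch/delete call."""
    parts = [API_PREFIX]
    if namespace is not None:            # namespaced collection or object
        parts += ["namespaces", namespace]
    parts.append("resourceclaims")
    if name is not None:                 # a single named object
        parts.append(name)
        if status:                       # the .../status subresource
            parts.append("status")
    return "/".join(parts)

def with_query(path, **params):
    """Append query parameters such as dryRun, fieldSelector, or watch."""
    return path + ("?" + urlencode(sorted(params.items())) if params else "")

# A namespaced get, its status subresource, and a cluster-wide list/watch:
print(resourceclaim_path("default", "my-claim"))
# -> /apis/resource.k8s.io/v1alpha3/namespaces/default/resourceclaims/my-claim
print(resourceclaim_path("default", "my-claim", status=True))
print(with_query(resourceclaim_path(), watch="true", limit=500))
```

The `status=True` branch mirrors the separate `get`/`update`/`patch` status operations in the table above: status is a distinct subresource with its own endpoint, so spec and status updates go through different PUT/PATCH requests.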
{"questions":"kubernetes reference autogenerated true contenttype apireference apiVersion resource k8s io v1alpha3 import k8s io api resource v1alpha3 apimetadata weight 15 kind PodSchedulingContext PodSchedulingContext objects hold information that is needed to schedule a Pod with ResourceClaims that use WaitForFirstConsumer allocation mode title PodSchedulingContext v1alpha3","answers":"---\napi_metadata:\n  apiVersion: \"resource.k8s.io\/v1alpha3\"\n  import: \"k8s.io\/api\/resource\/v1alpha3\"\n  kind: \"PodSchedulingContext\"\ncontent_type: \"api_reference\"\ndescription: \"PodSchedulingContext objects hold information that is needed to schedule a Pod with ResourceClaims that use \\\"WaitForFirstConsumer\\\" allocation mode.\"\ntitle: \"PodSchedulingContext v1alpha3\"\nweight: 15\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. 
You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: resource.k8s.io\/v1alpha3`\n\n`import \"k8s.io\/api\/resource\/v1alpha3\"`\n\n\n## PodSchedulingContext {#PodSchedulingContext}\n\nPodSchedulingContext objects hold information that is needed to schedule a Pod with ResourceClaims that use \"WaitForFirstConsumer\" allocation mode.\n\nThis is an alpha type and requires enabling the DRAControlPlaneController feature gate.\n\n<hr>\n\n- **apiVersion**: resource.k8s.io\/v1alpha3\n\n\n- **kind**: PodSchedulingContext\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata\n\n- **spec** (<a href=\"\">PodSchedulingContextSpec<\/a>), required\n\n  Spec describes where resources for the Pod are needed.\n\n- **status** (<a href=\"\">PodSchedulingContextStatus<\/a>)\n\n  Status describes where resources for the Pod can be allocated.\n\n\n\n\n\n## PodSchedulingContextSpec {#PodSchedulingContextSpec}\n\nPodSchedulingContextSpec describes where resources for the Pod are needed.\n\n<hr>\n\n- **potentialNodes** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  PotentialNodes lists nodes where the Pod might be able to run.\n  \n  The size of this field is limited to 128. This is large enough for many clusters. Larger clusters may need more attempts to find a node that suits all pending resources. 
This may get increased in the future, but not reduced.\n\n- **selectedNode** (string)\n\n  SelectedNode is the node for which allocation of ResourceClaims that are referenced by the Pod and that use \"WaitForFirstConsumer\" allocation is to be attempted.\n\n\n\n\n\n## PodSchedulingContextStatus {#PodSchedulingContextStatus}\n\nPodSchedulingContextStatus describes where resources for the Pod can be allocated.\n\n<hr>\n\n- **resourceClaims** ([]ResourceClaimSchedulingStatus)\n\n  *Map: unique values on key name will be kept during a merge*\n  \n  ResourceClaims describes resource availability for each pod.spec.resourceClaim entry where the corresponding ResourceClaim uses \"WaitForFirstConsumer\" allocation mode.\n\n  <a name=\"ResourceClaimSchedulingStatus\"><\/a>\n  *ResourceClaimSchedulingStatus contains information about one particular ResourceClaim with \"WaitForFirstConsumer\" allocation mode.*\n\n  - **resourceClaims.name** (string), required\n\n    Name matches the pod.spec.resourceClaims[*].Name field.\n\n  - **resourceClaims.unsuitableNodes** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    UnsuitableNodes lists nodes that the ResourceClaim cannot be allocated for.\n    \n    The size of this field is limited to 128, the same as for PodSchedulingSpec.PotentialNodes. 
This may get increased in the future, but not reduced.\n\n\n\n\n\n## PodSchedulingContextList {#PodSchedulingContextList}\n\nPodSchedulingContextList is a collection of Pod scheduling objects.\n\n<hr>\n\n- **apiVersion**: resource.k8s.io\/v1alpha3\n\n\n- **kind**: PodSchedulingContextList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata\n\n- **items** ([]<a href=\"\">PodSchedulingContext<\/a>), required\n\n  Items is the list of PodSchedulingContext objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified PodSchedulingContext\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/podschedulingcontexts\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodSchedulingContext\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodSchedulingContext<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified PodSchedulingContext\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/podschedulingcontexts\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodSchedulingContext\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodSchedulingContext<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind PodSchedulingContext\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/podschedulingcontexts\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a 
href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodSchedulingContextList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind PodSchedulingContext\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/podschedulingcontexts\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a 
href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodSchedulingContextList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a PodSchedulingContext\n\n#### HTTP Request\n\nPOST \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/podschedulingcontexts\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">PodSchedulingContext<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodSchedulingContext<\/a>): OK\n\n201 (<a href=\"\">PodSchedulingContext<\/a>): Created\n\n202 (<a href=\"\">PodSchedulingContext<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified PodSchedulingContext\n\n#### HTTP Request\n\nPUT \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/podschedulingcontexts\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodSchedulingContext\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">PodSchedulingContext<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodSchedulingContext<\/a>): OK\n\n201 (<a href=\"\">PodSchedulingContext<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified PodSchedulingContext\n\n#### HTTP Request\n\nPUT 
\/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/podschedulingcontexts\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodSchedulingContext\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">PodSchedulingContext<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodSchedulingContext<\/a>): OK\n\n201 (<a href=\"\">PodSchedulingContext<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified PodSchedulingContext\n\n#### HTTP Request\n\nPATCH \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/podschedulingcontexts\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodSchedulingContext\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodSchedulingContext<\/a>): OK\n\n201 (<a href=\"\">PodSchedulingContext<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified PodSchedulingContext\n\n#### HTTP Request\n\nPATCH \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/podschedulingcontexts\/{name}\/status\n\n#### 
Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodSchedulingContext\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodSchedulingContext<\/a>): OK\n\n201 (<a href=\"\">PodSchedulingContext<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a PodSchedulingContext\n\n#### HTTP Request\n\nDELETE \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/podschedulingcontexts\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodSchedulingContext\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodSchedulingContext<\/a>): OK\n\n202 (<a href=\"\">PodSchedulingContext<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of PodSchedulingContext\n\n#### HTTP Request\n\nDELETE \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/podschedulingcontexts\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- 
**continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"
      dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodSchedulingContext  a    OK  201   a href    PodSchedulingContext  a    Created  401  Unauthorized        delete  delete a PodSchedulingContext       HTTP Request  DELETE  apis resource k8s io v1alpha3 namespaces  namespace  podschedulingcontexts  name        Parameters       name     in path    string  required    name of the PodSchedulingContext       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    PodSchedulingContext  a    OK  202   a href    PodSchedulingContext  a    Accepted  401  Unauthorized        deletecollection  delete collection of PodSchedulingContext       HTTP Request  DELETE  apis resource k8s io v1alpha3 namespaces  namespace  podschedulingcontexts       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit 
 a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference kind ResourceClaimTemplate title ResourceClaimTemplate v1alpha3 weight 17 contenttype apireference apiVersion resource k8s io v1alpha3 import k8s io api resource v1alpha3 apimetadata autogenerated true ResourceClaimTemplate is used to produce ResourceClaim objects","answers":"---\napi_metadata:\n  apiVersion: \"resource.k8s.io\/v1alpha3\"\n  import: \"k8s.io\/api\/resource\/v1alpha3\"\n  kind: \"ResourceClaimTemplate\"\ncontent_type: \"api_reference\"\ndescription: \"ResourceClaimTemplate is used to produce ResourceClaim objects.\"\ntitle: \"ResourceClaimTemplate v1alpha3\"\nweight: 17\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: resource.k8s.io\/v1alpha3`\n\n`import \"k8s.io\/api\/resource\/v1alpha3\"`\n\n\n## ResourceClaimTemplate {#ResourceClaimTemplate}\n\nResourceClaimTemplate is used to produce ResourceClaim objects.\n\nThis is an alpha type and requires enabling the DynamicResourceAllocation feature gate.\n\n<hr>\n\n- **apiVersion**: resource.k8s.io\/v1alpha3\n\n\n- **kind**: ResourceClaimTemplate\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata\n\n- **spec** (<a href=\"\">ResourceClaimTemplateSpec<\/a>), required\n\n  Describes the ResourceClaim that is to be generated.\n  \n  This field is immutable. 
A ResourceClaim will get created by the control plane for a Pod when needed and then not get updated anymore.\n\n\n\n\n\n## ResourceClaimTemplateSpec {#ResourceClaimTemplateSpec}\n\nResourceClaimTemplateSpec contains the metadata and fields for a ResourceClaim.\n\n<hr>\n\n- **spec** (<a href=\"\">ResourceClaimSpec<\/a>), required\n\n  Spec for the ResourceClaim. The entire content is copied unchanged into the ResourceClaim that gets created from this template. The same fields as in a ResourceClaim are also valid here.\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  ObjectMeta may contain labels and annotations that will be copied into the ResourceClaim when creating it. No other fields are allowed and will be rejected during validation.\n\n\n\n\n\n## ResourceClaimTemplateList {#ResourceClaimTemplateList}\n\nResourceClaimTemplateList is a collection of claim templates.\n\n<hr>\n\n- **apiVersion**: resource.k8s.io\/v1alpha3\n\n\n- **kind**: ResourceClaimTemplateList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata\n\n- **items** ([]<a href=\"\">ResourceClaimTemplate<\/a>), required\n\n  Items is the list of resource claim templates.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ResourceClaimTemplate\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaimtemplates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceClaimTemplate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaimTemplate<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ResourceClaimTemplate\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaimtemplates\n\n#### Parameters\n\n\n- **namespace** (*in path*): 
string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaimTemplateList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ResourceClaimTemplate\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/resourceclaimtemplates\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** 
(*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaimTemplateList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ResourceClaimTemplate\n\n#### HTTP Request\n\nPOST \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaimtemplates\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ResourceClaimTemplate<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaimTemplate<\/a>): OK\n\n201 (<a href=\"\">ResourceClaimTemplate<\/a>): Created\n\n202 (<a href=\"\">ResourceClaimTemplate<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ResourceClaimTemplate\n\n#### HTTP Request\n\nPUT \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaimtemplates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceClaimTemplate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ResourceClaimTemplate<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaimTemplate<\/a>): OK\n\n201 (<a href=\"\">ResourceClaimTemplate<\/a>): Created\n\n401: Unauthorized\n\n\n### 
`patch` partially update the specified ResourceClaimTemplate\n\n#### HTTP Request\n\nPATCH \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaimtemplates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceClaimTemplate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaimTemplate<\/a>): OK\n\n201 (<a href=\"\">ResourceClaimTemplate<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ResourceClaimTemplate\n\n#### HTTP Request\n\nDELETE \/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaimtemplates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceClaimTemplate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceClaimTemplate<\/a>): OK\n\n202 (<a href=\"\">ResourceClaimTemplate<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ResourceClaimTemplate\n\n#### HTTP Request\n\nDELETE 
\/apis\/resource.k8s.io\/v1alpha3\/namespaces\/{namespace}\/resourceclaimtemplates\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   resource k8s io v1alpha3    import   k8s io api resource v1alpha3    kind   ResourceClaimTemplate  content type   api reference  description   ResourceClaimTemplate is used to produce ResourceClaim objects   title   ResourceClaimTemplate v1alpha3  weight  17 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update 
the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  resource k8s io v1alpha3    import  k8s io api resource v1alpha3        ResourceClaimTemplate   ResourceClaimTemplate   ResourceClaimTemplate is used to produce ResourceClaim objects   This is an alpha type and requires enabling the DynamicResourceAllocation feature gate    hr       apiVersion    resource k8s io v1alpha3       kind    ResourceClaimTemplate       metadata     a href    ObjectMeta  a      Standard object metadata      spec     a href    ResourceClaimTemplateSpec  a    required    Describes the ResourceClaim that is to be generated       This field is immutable  A ResourceClaim will get created by the control plane for a Pod when needed and then not get updated anymore          ResourceClaimTemplateSpec   ResourceClaimTemplateSpec   ResourceClaimTemplateSpec contains the metadata and fields for a ResourceClaim    hr       spec     a href    ResourceClaimSpec  a    required    Spec for the ResourceClaim  The entire content is copied unchanged into the ResourceClaim that gets created from this template  The same fields as in a ResourceClaim are also valid here       metadata     a href    ObjectMeta  a      ObjectMeta may contain labels and annotations that will be copied into the PVC when creating it  No other fields are allowed and will be rejected during validation          ResourceClaimTemplateList   ResourceClaimTemplateList   ResourceClaimTemplateList is a collection of claim templates    hr       apiVersion    resource k8s io v1alpha3       kind    ResourceClaimTemplateList       metadata     a href    ListMeta  a      Standard list metadata      items       a href    ResourceClaimTemplate  a    required    Items is the list of resource claim templates          Operations   
Operations      hr             get  read the specified ResourceClaimTemplate       HTTP Request  GET  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaimtemplates  name        Parameters       name     in path    string  required    name of the ResourceClaimTemplate       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ResourceClaimTemplate  a    OK  401  Unauthorized        list  list or watch objects of kind ResourceClaimTemplate       HTTP Request  GET  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaimtemplates       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ResourceClaimTemplateList  a    OK  401  Unauthorized        list  list or watch objects of kind ResourceClaimTemplate       HTTP Request  GET  apis resource k8s io v1alpha3 resourceclaimtemplates       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector  
   in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ResourceClaimTemplateList  a    OK  401  Unauthorized        create  create a ResourceClaimTemplate       HTTP Request  POST  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaimtemplates       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    ResourceClaimTemplate  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ResourceClaimTemplate  a    OK  201   a href    ResourceClaimTemplate  a    Created  202   a href    ResourceClaimTemplate  a    Accepted  401  Unauthorized        update  replace the specified ResourceClaimTemplate       HTTP Request  PUT  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaimtemplates  name        Parameters       name     in path    string  required    name of the ResourceClaimTemplate       namespace     in path    string  required     a href    namespace  a        body     a href    ResourceClaimTemplate  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query  
  string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ResourceClaimTemplate  a    OK  201   a href    ResourceClaimTemplate  a    Created  401  Unauthorized        patch  partially update the specified ResourceClaimTemplate       HTTP Request  PATCH  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaimtemplates  name        Parameters       name     in path    string  required    name of the ResourceClaimTemplate       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ResourceClaimTemplate  a    OK  201   a href    ResourceClaimTemplate  a    Created  401  Unauthorized        delete  delete a ResourceClaimTemplate       HTTP Request  DELETE  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaimtemplates  name        Parameters       name     in path    string  required    name of the ResourceClaimTemplate       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    ResourceClaimTemplate  a    OK  202   a href    ResourceClaimTemplate  a    Accepted  401  Unauthorized     
   deletecollection  delete collection of ResourceClaimTemplate       HTTP Request  DELETE  apis resource k8s io v1alpha3 namespaces  namespace  resourceclaimtemplates       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference apiVersion apps v1 contenttype apireference DaemonSet represents the configuration of a daemon set kind DaemonSet title DaemonSet apimetadata autogenerated true weight 9 import k8s io api apps v1","answers":"---\napi_metadata:\n  apiVersion: \"apps\/v1\"\n  import: \"k8s.io\/api\/apps\/v1\"\n  kind: \"DaemonSet\"\ncontent_type: \"api_reference\"\ndescription: \"DaemonSet represents the configuration of a daemon set.\"\ntitle: \"DaemonSet\"\nweight: 9\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: apps\/v1`\n\n`import \"k8s.io\/api\/apps\/v1\"`\n\n\n## DaemonSet {#DaemonSet}\n\nDaemonSet represents the configuration of a daemon set.\n\n<hr>\n\n- **apiVersion**: apps\/v1\n\n\n- **kind**: DaemonSet\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">DaemonSetSpec<\/a>)\n\n  The desired behavior of this daemon set. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">DaemonSetStatus<\/a>)\n\n  The current status of this daemon set. This data may be out of date by some window of time. Populated by the system. Read-only. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## DaemonSetSpec {#DaemonSetSpec}\n\nDaemonSetSpec is the specification of a daemon set.\n\n<hr>\n\n- **selector** (<a href=\"\">LabelSelector<\/a>), required\n\n  A label query over pods that are managed by the daemon set. Must match in order to be controlled. It must match the pod template's labels. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/#label-selectors\n\n- **template** (<a href=\"\">PodTemplateSpec<\/a>), required\n\n  An object that describes the pod that will be created. The DaemonSet will create exactly one copy of this pod on every node that matches the template's node selector (or on every node if no node selector is specified). The only allowed template.spec.restartPolicy value is \"Always\". More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/replicationcontroller#pod-template\n\n- **minReadySeconds** (int32)\n\n  The minimum number of seconds for which a newly created DaemonSet pod should be ready without any of its containers crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready).\n\n- **updateStrategy** (DaemonSetUpdateStrategy)\n\n  An update strategy to replace existing DaemonSet pods with new pods.\n\n  <a name=\"DaemonSetUpdateStrategy\"><\/a>\n  *DaemonSetUpdateStrategy is a struct used to control the update strategy for a DaemonSet.*\n\n  - **updateStrategy.type** (string)\n\n    Type of daemon set update. Can be \"RollingUpdate\" or \"OnDelete\". Default is RollingUpdate.\n\n  - **updateStrategy.rollingUpdate** (RollingUpdateDaemonSet)\n\n    Rolling update config params. 
Present only if type = \"RollingUpdate\".\n\n    <a name=\"RollingUpdateDaemonSet\"><\/a>\n    *Spec to control the desired behavior of daemon set rolling update.*\n\n    - **updateStrategy.rollingUpdate.maxSurge** (IntOrString)\n\n      The maximum number of nodes with an existing available DaemonSet pod that can have an updated DaemonSet pod during an update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This cannot be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up to a minimum of 1. Default value is 0. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have a new pod created before the old pod is marked as deleted. The update starts by launching new pods on 30% of nodes. Once an updated pod is available (Ready for at least minReadySeconds) the old DaemonSet pod on that node is marked deleted. If the old pod becomes unavailable for any reason (Ready transitions to false, is evicted, or is drained) an updated pod is immediately created on that node without considering surge limits. Allowing surge implies the possibility that the resources consumed by the daemonset on any given node can double if the readiness check fails, and so resource-intensive daemonsets should take into account that they may cause evictions during disruption.\n\n      <a name=\"IntOrString\"><\/a>\n      *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n    - **updateStrategy.rollingUpdate.maxUnavailable** (IntOrString)\n\n      The maximum number of DaemonSet pods that can be unavailable during the update. 
Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0 if MaxSurge is 0. Default value is 1. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their pods stopped for an update at any given time. The update starts by stopping at most 30% of those DaemonSet pods and then brings up new DaemonSet pods in their place. Once the new pods are available, it then proceeds onto other DaemonSet pods, thus ensuring that at least 70% of the original number of DaemonSet pods are available at all times during the update.\n\n      <a name=\"IntOrString\"><\/a>\n      *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n- **revisionHistoryLimit** (int32)\n\n  The number of old history entries to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. 
Defaults to 10.\n\n\n\n\n\n## DaemonSetStatus {#DaemonSetStatus}\n\nDaemonSetStatus represents the current status of a daemon set.\n\n<hr>\n\n- **numberReady** (int32), required\n\n  numberReady is the number of nodes that should be running the daemon pod and have one or more of the daemon pod running with a Ready Condition.\n\n- **numberAvailable** (int32)\n\n  The number of nodes that should be running the daemon pod and have one or more of the daemon pod running and available (ready for at least spec.minReadySeconds)\n\n- **numberUnavailable** (int32)\n\n  The number of nodes that should be running the daemon pod and have none of the daemon pod running and available (ready for at least spec.minReadySeconds)\n\n- **numberMisscheduled** (int32), required\n\n  The number of nodes that are running the daemon pod, but are not supposed to run the daemon pod. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/daemonset\/\n\n- **desiredNumberScheduled** (int32), required\n\n  The total number of nodes that should be running the daemon pod (including nodes correctly running the daemon pod). More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/daemonset\/\n\n- **currentNumberScheduled** (int32), required\n\n  The number of nodes that are running at least 1 daemon pod and are supposed to run the daemon pod. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/daemonset\/\n\n- **updatedNumberScheduled** (int32)\n\n  The total number of nodes that are running an updated daemon pod.\n\n- **collisionCount** (int32)\n\n  Count of hash collisions for the DaemonSet. 
The DaemonSet controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ControllerRevision.\n\n- **conditions** ([]DaemonSetCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  Represents the latest available observations of a DaemonSet's current state.\n\n  <a name=\"DaemonSetCondition\"><\/a>\n  *DaemonSetCondition describes the state of a DaemonSet at a certain point.*\n\n  - **conditions.status** (string), required\n\n    Status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    Type of DaemonSet condition.\n\n  - **conditions.lastTransitionTime** (Time)\n\n    Last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    A human readable message indicating details about the transition.\n\n  - **conditions.reason** (string)\n\n    The reason for the condition's last transition.\n\n- **observedGeneration** (int64)\n\n  The most recent generation observed by the daemon set controller.\n\n\n\n\n\n## DaemonSetList {#DaemonSetList}\n\nDaemonSetList is a collection of daemon sets.\n\n<hr>\n\n- **apiVersion**: apps\/v1\n\n\n- **kind**: DaemonSetList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">DaemonSet<\/a>), required\n\n  A list of daemon sets.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified DaemonSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/daemonsets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the DaemonSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DaemonSet<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified DaemonSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/daemonsets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the DaemonSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DaemonSet<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind DaemonSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/daemonsets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a 
href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DaemonSetList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind DaemonSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/daemonsets\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DaemonSetList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a DaemonSet\n\n#### HTTP Request\n\nPOST \/apis\/apps\/v1\/namespaces\/{namespace}\/daemonsets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DaemonSet<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a 
href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DaemonSet<\/a>): OK\n\n201 (<a href=\"\">DaemonSet<\/a>): Created\n\n202 (<a href=\"\">DaemonSet<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified DaemonSet\n\n#### HTTP Request\n\nPUT \/apis\/apps\/v1\/namespaces\/{namespace}\/daemonsets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the DaemonSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DaemonSet<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DaemonSet<\/a>): OK\n\n201 (<a href=\"\">DaemonSet<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified DaemonSet\n\n#### HTTP Request\n\nPUT \/apis\/apps\/v1\/namespaces\/{namespace}\/daemonsets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the DaemonSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DaemonSet<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DaemonSet<\/a>): OK\n\n201 (<a href=\"\">DaemonSet<\/a>): Created\n\n401: 
Unauthorized\n\n\n### `patch` partially update the specified DaemonSet\n\n#### HTTP Request\n\nPATCH \/apis\/apps\/v1\/namespaces\/{namespace}\/daemonsets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the DaemonSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DaemonSet<\/a>): OK\n\n201 (<a href=\"\">DaemonSet<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified DaemonSet\n\n#### HTTP Request\n\nPATCH \/apis\/apps\/v1\/namespaces\/{namespace}\/daemonsets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the DaemonSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DaemonSet<\/a>): OK\n\n201 (<a href=\"\">DaemonSet<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a DaemonSet\n\n#### HTTP Request\n\nDELETE \/apis\/apps\/v1\/namespaces\/{namespace}\/daemonsets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, 
required\n\n  name of the DaemonSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of DaemonSet\n\n#### HTTP Request\n\nDELETE \/apis\/apps\/v1\/namespaces\/{namespace}\/daemonsets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: 
Unauthorized\n","site":"kubernetes reference"}
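The rolling-update fields above (maxSurge, maxUnavailable) are IntOrString values: an absolute pod count, or a percentage of status.desiredNumberScheduled rounded up to an absolute count, with maxSurge's percentage form additionally rounded up to a minimum of 1. A minimal sketch of that resolution rule (a hypothetical helper for illustration only, not part of any Kubernetes client library):

```python
import math

def resolve_int_or_string(value, desired_number_scheduled, minimum=0):
    """Resolve an IntOrString field such as maxSurge or maxUnavailable
    to an absolute pod count. Percentages are rounded up; pass minimum=1
    to model maxSurge's "rounding up to a minimum of 1" behavior.
    Illustration only, not a Kubernetes client API."""
    if isinstance(value, int):
        return value
    if isinstance(value, str) and value.endswith("%"):
        percent = int(value.rstrip("%"))
        scaled = math.ceil(percent * desired_number_scheduled / 100)
        return max(scaled, minimum)
    raise ValueError(f"not an IntOrString: {value!r}")

# maxUnavailable of 30% against 10 desired pods rounds up to 3 pods.
print(resolve_int_or_string("30%", 10))
# maxSurge's percentage form never resolves below 1 (minimum=1).
print(resolve_int_or_string("0%", 10, minimum=1))
```

With a percentage like "10%" against 101 desired pods this yields 11 (10.1 rounded up), matching the "rounding up" wording in the field descriptions.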
{"questions":"kubernetes reference title Job kind Job contenttype apireference weight 10 apiVersion batch v1 apimetadata Job represents the configuration of a single job autogenerated true import k8s io api batch v1","answers":"---\napi_metadata:\n  apiVersion: \"batch\/v1\"\n  import: \"k8s.io\/api\/batch\/v1\"\n  kind: \"Job\"\ncontent_type: \"api_reference\"\ndescription: \"Job represents the configuration of a single job.\"\ntitle: \"Job\"\nweight: 10\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: batch\/v1`\n\n`import \"k8s.io\/api\/batch\/v1\"`\n\n\n## Job {#Job}\n\nJob represents the configuration of a single job.\n\n<hr>\n\n- **apiVersion**: batch\/v1\n\n\n- **kind**: Job\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">JobSpec<\/a>)\n\n  Specification of the desired behavior of a job. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">JobStatus<\/a>)\n\n  Current status of a job. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## JobSpec {#JobSpec}\n\nJobSpec describes how the job execution will look.\n\n<hr>\n\n\n\n### Replicas\n\n\n- **template** (<a href=\"\">PodTemplateSpec<\/a>), required\n\n  Describes the pod that will be created when executing a job. The only allowed template.spec.restartPolicy values are \"Never\" or \"OnFailure\". More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/jobs-run-to-completion\/\n\n- **parallelism** (int32)\n\n  Specifies the maximum desired number of pods the job should run at any given time. The actual number of pods running in steady state will be less than this number when ((.spec.completions - .status.successful) \\< .spec.parallelism), i.e. when the work left to do is less than max parallelism. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/jobs-run-to-completion\/\n\n### Lifecycle\n\n\n- **completions** (int32)\n\n  Specifies the desired number of successfully finished pods the job should be run with.  Setting to null means that the success of any pod signals the success of all pods, and allows parallelism to have any positive value.  Setting to 1 means that parallelism is limited to 1 and the success of that pod signals the success of the job. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/jobs-run-to-completion\/\n\n- **completionMode** (string)\n\n  completionMode specifies how Pod completions are tracked. It can be `NonIndexed` (default) or `Indexed`.\n  \n  `NonIndexed` means that the Job is considered complete when there have been .spec.completions successfully completed Pods. Each Pod completion is homologous to the others.\n  \n  `Indexed` means that the Pods of a Job get an associated completion index from 0 to (.spec.completions - 1), available in the annotation batch.kubernetes.io\/job-completion-index. 
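\n  \n  As an illustrative sketch (the name and image below are placeholders, not from this reference), an Indexed Job could be declared as:\n  \n  ```yaml\n  apiVersion: batch\/v1\n  kind: Job\n  metadata:\n    name: indexed-example    # hypothetical name\n  spec:\n    completionMode: Indexed\n    completions: 5           # indexes 0..4\n    parallelism: 2\n    template:\n      spec:\n        restartPolicy: Never\n        containers:\n        - name: worker\n          image: busybox     # placeholder image\n          command: [printenv, JOB_COMPLETION_INDEX]\n  ```\n  \n  Each Pod in an Indexed Job also receives its index in the `JOB_COMPLETION_INDEX` environment variable, in addition to the annotation mentioned above.\n  \n  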
The Job is considered complete when there is one successfully completed Pod for each index. When the value is `Indexed`, .spec.completions must be specified and `.spec.parallelism` must be less than or equal to 10^5. In addition, the Pod name takes the form `$(job-name)-$(index)-$(random-string)`, and the Pod hostname takes the form `$(job-name)-$(index)`.\n  \n  More completion modes can be added in the future. If the Job controller observes a mode that it doesn't recognize, which is possible during upgrades due to version skew, the controller skips updates for the Job.\n\n- **backoffLimit** (int32)\n\n  Specifies the number of retries before marking this job failed. Defaults to 6.\n\n- **activeDeadlineSeconds** (int64)\n\n  Specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it; the value must be a positive integer. If a Job is suspended (at creation or through an update), this timer will effectively be stopped and reset when the Job is resumed again.\n\n- **ttlSecondsAfterFinished** (int32)\n\n  ttlSecondsAfterFinished limits the lifetime of a Job that has finished execution (either Complete or Failed). If this field is set, ttlSecondsAfterFinished after the Job finishes, it is eligible to be automatically deleted. When the Job is being deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is unset, the Job won't be automatically deleted. If this field is set to zero, the Job becomes eligible to be deleted immediately after it finishes.\n\n- **suspend** (boolean)\n\n  suspend specifies whether the Job controller should create Pods or not. If a Job is created with suspend set to true, no Pods are created by the Job controller. If a Job is suspended after creation (i.e. the flag goes from false to true), the Job controller will delete all active Pods associated with this Job. Users must design their workload to gracefully handle this. 
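\n  \n  A minimal sketch of a Job created in the suspended state (the name and image are illustrative):\n  \n  ```yaml\n  apiVersion: batch\/v1\n  kind: Job\n  metadata:\n    name: suspended-example   # hypothetical name\n  spec:\n    suspend: true             # no Pods are created until this is set to false\n    template:\n      spec:\n        restartPolicy: Never\n        containers:\n        - name: worker\n          image: busybox      # placeholder image\n  ```\n  \n  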
Suspending a Job will reset the StartTime field of the Job, effectively resetting the ActiveDeadlineSeconds timer too. Defaults to false.\n\n### Selector\n\n\n- **selector** (<a href=\"\">LabelSelector<\/a>)\n\n  A label query over pods that should match the pod count. Normally, the system sets this field for you. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/#label-selectors\n\n- **manualSelector** (boolean)\n\n  manualSelector controls generation of pod labels and pod selectors. Leave `manualSelector` unset unless you are certain what you are doing. When false or unset, the system picks labels unique to this job and appends those labels to the pod template.  When true, the user is responsible for picking unique labels and specifying the selector.  Failure to pick a unique label may cause this and other jobs to not function correctly.  However, you may see `manualSelector=true` in jobs that were created with the old `extensions\/v1beta1` API. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/jobs-run-to-completion\/#specifying-your-own-pod-selector\n\n### Beta level\n\n\n- **podFailurePolicy** (PodFailurePolicy)\n\n  Specifies the policy of handling failed pods. In particular, it allows specifying the set of actions and conditions which need to be satisfied to take the associated action. If empty, the default behaviour applies - the counter of failed pods, represented by the job's .status.failed field, is incremented and it is checked against the backoffLimit. This field cannot be used in combination with restartPolicy=OnFailure.\n\n  <a name=\"PodFailurePolicy\"><\/a>\n  *PodFailurePolicy describes how failed pods influence the backoffLimit.*\n\n  - **podFailurePolicy.rules** ([]PodFailurePolicyRule), required\n\n    *Atomic: will be replaced during a merge*\n    \n    A list of pod failure policy rules. The rules are evaluated in order. 
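\n    \n    As an illustrative sketch (the container name is hypothetical), a rules list using one rule of each kind might look like:\n    \n    ```yaml\n    podFailurePolicy:\n      rules:\n      - action: FailJob          # exit code 42 fails the whole Job\n        onExitCodes:\n          containerName: main    # hypothetical container name\n          operator: In\n          values: [42]\n      - action: Ignore           # disruptions don't count towards backoffLimit\n        onPodConditions:\n        - type: DisruptionTarget\n          status: \"True\"\n    ```\n    \n    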
Once a rule matches a Pod failure, the remaining rules are ignored. When no rule matches the Pod failure, the default handling applies - the counter of pod failures is incremented and it is checked against the backoffLimit. At most 20 elements are allowed.\n\n    <a name=\"PodFailurePolicyRule\"><\/a>\n    *PodFailurePolicyRule describes how a pod failure is handled when the requirements are met. One of onExitCodes and onPodConditions, but not both, can be used in each rule.*\n\n    - **podFailurePolicy.rules.action** (string), required\n\n      Specifies the action taken on a pod failure when the requirements are satisfied. Possible values are:\n      \n      - FailJob: indicates that the pod's job is marked as Failed and all\n        running pods are terminated.\n      - FailIndex: indicates that the pod's index is marked as Failed and will\n        not be restarted.\n        This value is beta-level. It can be used when the\n        `JobBackoffLimitPerIndex` feature gate is enabled (enabled by default).\n      - Ignore: indicates that the counter towards the .backoffLimit is not\n        incremented and a replacement pod is created.\n      - Count: indicates that the pod is handled in the default way - the\n        counter towards the .backoffLimit is incremented.\n      Additional values may be added in the future. Clients should react to an unknown action by skipping the rule.\n\n    - **podFailurePolicy.rules.onExitCodes** (PodFailurePolicyOnExitCodesRequirement)\n\n      Represents the requirement on the container exit codes.\n\n      <a name=\"PodFailurePolicyOnExitCodesRequirement\"><\/a>\n      *PodFailurePolicyOnExitCodesRequirement describes the requirement for handling a failed pod based on its container exit codes. 
In particular, it looks up the .state.terminated.exitCode for each app container and init container status, represented by the .status.containerStatuses and .status.initContainerStatuses fields in the Pod status, respectively. Containers completed with success (exit code 0) are excluded from the requirement check.*\n\n      - **podFailurePolicy.rules.onExitCodes.operator** (string), required\n\n        Represents the relationship between the container exit code(s) and the specified values. Containers completed with success (exit code 0) are excluded from the requirement check. Possible values are:\n        \n        - In: the requirement is satisfied if at least one container exit code\n          (might be multiple if there are multiple containers not restricted\n          by the 'containerName' field) is in the set of specified values.\n        - NotIn: the requirement is satisfied if at least one container exit code\n          (might be multiple if there are multiple containers not restricted\n          by the 'containerName' field) is not in the set of specified values.\n        Additional values may be added in the future. Clients should react to an unknown operator by assuming the requirement is not satisfied.\n\n      - **podFailurePolicy.rules.onExitCodes.values** ([]int32), required\n\n        *Set: unique values will be kept during a merge*\n        \n        Specifies the set of values. Each returned container exit code (might be multiple in case of multiple containers) is checked against this set of values with respect to the operator. The list of values must be ordered and must not contain duplicates. Value '0' cannot be used for the In operator. At least one element is required. At most 255 elements are allowed.\n\n      - **podFailurePolicy.rules.onExitCodes.containerName** (string)\n\n        Restricts the check for exit codes to the container with the specified name. When null, the rule applies to all containers. 
When specified, it should match one of the container or initContainer names in the pod template.\n\n    - **podFailurePolicy.rules.onPodConditions** ([]PodFailurePolicyOnPodConditionsPattern)\n\n      *Atomic: will be replaced during a merge*\n      \n      Represents the requirement on the pod conditions. The requirement is represented as a list of pod condition patterns. The requirement is satisfied if at least one pattern matches an actual pod condition. At most 20 elements are allowed.\n\n      <a name=\"PodFailurePolicyOnPodConditionsPattern\"><\/a>\n      *PodFailurePolicyOnPodConditionsPattern describes a pattern for matching an actual pod condition type.*\n\n      - **podFailurePolicy.rules.onPodConditions.status** (string), required\n\n        Specifies the required Pod condition status. To match a pod condition it is required that the specified status equals the pod condition status. Defaults to True.\n\n      - **podFailurePolicy.rules.onPodConditions.type** (string), required\n\n        Specifies the required Pod condition type. To match a pod condition it is required that the specified type equals the pod condition type.\n\n- **successPolicy** (SuccessPolicy)\n\n  successPolicy specifies the policy when the Job can be declared as succeeded. If empty, the default behavior applies - the Job is declared as succeeded only when the number of succeeded pods equals the completions. When the field is specified, it must be immutable and works only for Indexed Jobs. Once the Job meets the SuccessPolicy, the lingering pods are terminated.\n  \n  This field is beta-level. 
To use this field, you must enable the `JobSuccessPolicy` feature gate (enabled by default).\n\n  <a name=\"SuccessPolicy\"><\/a>\n  *SuccessPolicy describes when a Job can be declared as succeeded based on the success of some indexes.*\n\n  - **successPolicy.rules** ([]SuccessPolicyRule), required\n\n    *Atomic: will be replaced during a merge*\n    \n    rules represents the list of alternative rules for declaring the Jobs as successful before `.status.succeeded >= .spec.completions`. Once any of the rules are met, the \"SucceededCriteriaMet\" condition is added, and the lingering pods are removed. The terminal state for such a Job has the \"Complete\" condition. Additionally, these rules are evaluated in order; once the Job meets one of the rules, the other rules are ignored. At most 20 elements are allowed.\n\n    <a name=\"SuccessPolicyRule\"><\/a>\n    *SuccessPolicyRule describes a rule for declaring a Job as succeeded. Each rule must have at least one of the \"succeededIndexes\" or \"succeededCount\" specified.*\n\n    - **successPolicy.rules.succeededCount** (int32)\n\n      succeededCount specifies the minimal required size of the actual set of the succeeded indexes for the Job. When succeededCount is used along with succeededIndexes, the check is constrained only to the set of indexes specified by succeededIndexes. For example, given that succeededIndexes is \"1-4\", succeededCount is \"3\", and completed indexes are \"1\", \"3\", and \"5\", the Job isn't declared as succeeded because only the \"1\" and \"3\" indexes are considered in those rules. When this field is null, it doesn't default to any value and is never evaluated at any time. When specified, it needs to be a positive integer.\n\n    - **successPolicy.rules.succeededIndexes** (string)\n\n      succeededIndexes specifies the set of indexes which need to be contained in the actual set of the succeeded indexes for the Job. 
The list of indexes must be within 0 to \".spec.completions-1\" and must not contain duplicates. At least one element is required. The indexes are represented as intervals separated by commas. The intervals can be a decimal integer or a pair of decimal integers separated by a hyphen. In a pair, the numbers represent the first and last element of the series, separated by a hyphen. For example, if the completed indexes are 1, 3, 4, 5 and 7, they are represented as \"1,3-5,7\". When this field is null, it doesn't default to any value and is never evaluated at any time.\n\n### Alpha level\n\n\n- **backoffLimitPerIndex** (int32)\n\n  Specifies the limit for the number of retries within an index before marking this index as failed. When enabled, the number of failures per index is kept in the pod's batch.kubernetes.io\/job-index-failure-count annotation. It can only be set when the Job's completionMode=Indexed, and the Pod's restart policy is Never. The field is immutable. This field is beta-level. It can be used when the `JobBackoffLimitPerIndex` feature gate is enabled (enabled by default).\n\n- **managedBy** (string)\n\n  ManagedBy field indicates the controller that manages a Job. The k8s Job controller reconciles jobs which don't have this field at all or the field value is the reserved string `kubernetes.io\/job-controller`, but skips reconciling Jobs with a custom value for this field. The value must be a valid domain-prefixed path (e.g. acme.io\/foo) - all characters before the first \"\/\" must be a valid subdomain as defined by RFC 1123. All characters trailing the first \"\/\" must be valid HTTP Path characters as defined by RFC 3986. The value cannot exceed 63 characters. This field is immutable.\n  \n  This field is alpha-level. 
The job controller accepts setting the field when the feature gate JobManagedBy is enabled (disabled by default).\n\n- **maxFailedIndexes** (int32)\n\n  Specifies the maximal number of failed indexes before marking the Job as failed, when backoffLimitPerIndex is set. Once the number of failed indexes exceeds this number, the entire Job is marked as Failed and its execution is terminated. When left as null, the job continues execution of all of its indexes and is marked with the `Complete` Job condition. It can only be specified when backoffLimitPerIndex is set. It can be null or up to completions. It is required and must be less than or equal to 10^4 when completions is greater than 10^5. This field is beta-level. It can be used when the `JobBackoffLimitPerIndex` feature gate is enabled (enabled by default).\n\n- **podReplacementPolicy** (string)\n\n  podReplacementPolicy specifies when to create replacement Pods. Possible values are:\n  - TerminatingOrFailed means that we recreate pods when they are terminating\n    (has a metadata.deletionTimestamp) or failed.\n  - Failed means to wait until a previously created Pod is fully terminated (has phase\n    Failed or Succeeded) before creating a replacement Pod.\n  \n  When using podFailurePolicy, Failed is the only allowed value. TerminatingOrFailed and Failed are allowed values when podFailurePolicy is not in use. This is a beta field. To use this, enable the JobPodReplacementPolicy feature toggle. This is on by default.\n\n\n\n## JobStatus {#JobStatus}\n\nJobStatus represents the current state of a Job.\n\n<hr>\n\n- **startTime** (Time)\n\n  Represents time when the job controller started processing a job. When a Job is created in the suspended state, this field is not set until the first time it is resumed. This field is reset every time a Job is resumed from suspension. It is represented in RFC3339 form and is in UTC.\n  \n  Once set, the field can only be removed when the job is suspended. 
The field cannot be modified while the job is unsuspended or finished.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **completionTime** (Time)\n\n  Represents time when the job was completed. It is not guaranteed to be set in happens-before order across separate operations. It is represented in RFC3339 form and is in UTC. The completion time is set when the job finishes successfully, and only then. The value cannot be updated or removed. The value indicates the same or later point in time as the startTime field.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **active** (int32)\n\n  The number of pending and running pods which are not terminating (without a deletionTimestamp). The value is zero for finished jobs.\n\n- **failed** (int32)\n\n  The number of pods which reached phase Failed. The value increases monotonically.\n\n- **succeeded** (int32)\n\n  The number of pods which reached phase Succeeded. The value increases monotonically for a given spec. However, it may decrease in reaction to scale down of elastic indexed jobs.\n\n- **completedIndexes** (string)\n\n  completedIndexes holds the completed indexes when .spec.completionMode = \"Indexed\" in a text format. The indexes are represented as decimal integers separated by commas. The numbers are listed in increasing order. Three or more consecutive numbers are compressed and represented by the first and last element of the series, separated by a hyphen. 
For example, if the completed indexes are 1, 3, 4, 5 and 7, they are represented as \"1,3-5,7\".\n\n- **conditions** ([]JobCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Atomic: will be replaced during a merge*\n  \n  The latest available observations of an object's current state. When a Job fails, one of the conditions will have type \"Failed\" and status true. When a Job is suspended, one of the conditions will have type \"Suspended\" and status true; when the Job is resumed, the status of this condition will become false. When a Job is completed, one of the conditions will have type \"Complete\" and status true.\n  \n  A job is considered finished when it is in a terminal condition, either \"Complete\" or \"Failed\". A Job cannot have both the \"Complete\" and \"Failed\" conditions. Additionally, it cannot be in the \"Complete\" and \"FailureTarget\" conditions. The \"Complete\", \"Failed\" and \"FailureTarget\" conditions cannot be disabled.\n  \n  More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/jobs-run-to-completion\/\n\n  <a name=\"JobCondition\"><\/a>\n  *JobCondition describes current state of a job.*\n\n  - **conditions.status** (string), required\n\n    Status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    Type of job condition, Complete or Failed.\n\n  - **conditions.lastProbeTime** (Time)\n\n    Last time the condition was checked.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.lastTransitionTime** (Time)\n\n    Last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    Human readable message indicating details about last transition.\n\n  - **conditions.reason** (string)\n\n    (brief) reason for the condition's last transition.\n\n- **uncountedTerminatedPods** (UncountedTerminatedPods)\n\n  uncountedTerminatedPods holds the UIDs of Pods that have terminated but the job controller hasn't yet accounted for in the status counters.\n  \n  The job controller creates pods with a finalizer. When a pod terminates (succeeded or failed), the controller does three steps to account for it in the job status:\n  \n  1. Add the pod UID to the arrays in this field.\n  2. Remove the pod finalizer.\n  3. Remove the pod UID from the arrays while increasing the corresponding counter.\n  \n  Old jobs might not be tracked using this field, in which case the field remains null. The structure is empty for finished jobs.\n\n  <a name=\"UncountedTerminatedPods\"><\/a>\n  *UncountedTerminatedPods holds UIDs of Pods that have terminated but haven't been accounted in Job status counters.*\n\n  - **uncountedTerminatedPods.failed** ([]string)\n\n    *Set: unique values will be kept during a merge*\n    \n    failed holds UIDs of failed Pods.\n\n  - **uncountedTerminatedPods.succeeded** ([]string)\n\n    *Set: unique values will be kept during a merge*\n    \n    succeeded holds UIDs of succeeded Pods.\n\n\n\n### Beta level\n\n\n- **ready** (int32)\n\n  The number of active pods which have a Ready condition and are not terminating (without a deletionTimestamp).\n\n### Alpha level\n\n\n- **failedIndexes** (string)\n\n  FailedIndexes holds the failed indexes when spec.backoffLimitPerIndex is set. The indexes are represented in a text format analogous to the `completedIndexes` field, i.e. they are kept as decimal integers separated by commas. The numbers are listed in increasing order. 
Three or more consecutive numbers are compressed and represented by the first and last element of the series, separated by a hyphen. For example, if the failed indexes are 1, 3, 4, 5 and 7, they are represented as \"1,3-5,7\". The set of failed indexes cannot overlap with the set of completed indexes.\n  \n  This field is beta-level. It can be used when the `JobBackoffLimitPerIndex` feature gate is enabled (enabled by default).\n\n- **terminating** (int32)\n\n  The number of pods which are terminating (in phase Pending or Running and have a deletionTimestamp).\n  \n  This field is beta-level. The job controller populates the field when the feature gate JobPodReplacementPolicy is enabled (enabled by default).\n\n\n\n## JobList {#JobList}\n\nJobList is a collection of jobs.\n\n<hr>\n\n- **apiVersion**: batch\/v1\n\n\n- **kind**: JobList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">Job<\/a>), required\n\n  items is the list of Jobs.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified Job\n\n#### HTTP Request\n\nGET \/apis\/batch\/v1\/namespaces\/{namespace}\/jobs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Job\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Job<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified Job\n\n#### HTTP Request\n\nGET \/apis\/batch\/v1\/namespaces\/{namespace}\/jobs\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Job\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a 
href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Job<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Job\n\n#### HTTP Request\n\nGET \/apis\/batch\/v1\/namespaces\/{namespace}\/jobs\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">JobList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Job\n\n#### HTTP Request\n\nGET \/apis\/batch\/v1\/jobs\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a 
href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">JobList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a Job\n\n#### HTTP Request\n\nPOST \/apis\/batch\/v1\/namespaces\/{namespace}\/jobs\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Job<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Job<\/a>): OK\n\n201 (<a href=\"\">Job<\/a>): Created\n\n202 (<a href=\"\">Job<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified Job\n\n#### HTTP Request\n\nPUT \/apis\/batch\/v1\/namespaces\/{namespace}\/jobs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Job\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Job<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Job<\/a>): OK\n\n201 (<a href=\"\">Job<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace 
status of the specified Job\n\n#### HTTP Request\n\nPUT \/apis\/batch\/v1\/namespaces\/{namespace}\/jobs\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Job\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Job<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Job<\/a>): OK\n\n201 (<a href=\"\">Job<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified Job\n\n#### HTTP Request\n\nPATCH \/apis\/batch\/v1\/namespaces\/{namespace}\/jobs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Job\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Job<\/a>): OK\n\n201 (<a href=\"\">Job<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified Job\n\n#### HTTP Request\n\nPATCH \/apis\/batch\/v1\/namespaces\/{namespace}\/jobs\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Job\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, 
required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Job<\/a>): OK\n\n201 (<a href=\"\">Job<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a Job\n\n#### HTTP Request\n\nDELETE \/apis\/batch\/v1\/namespaces\/{namespace}\/jobs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Job\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of Job\n\n#### HTTP Request\n\nDELETE \/apis\/batch\/v1\/namespaces\/{namespace}\/jobs\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a 
href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   batch v1    import   k8s io api batch v1    kind   Job  content type   api reference  description   Job represents the configuration of a single job   title   Job  weight  10 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  batch v1    import  k8s io api batch v1        Job   Job   Job represents the configuration of a single job    hr       apiVersion    batch v1       kind    Job       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    JobSpec  a      Specification of the desired behavior of 
a job  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status      status     a href    JobStatus  a      Current status of a job  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status         JobSpec   JobSpec   JobSpec describes how the job execution will look like    hr         Replicas       template     a href    PodTemplateSpec  a    required    Describes the pod that will be created when executing a job  The only allowed template spec restartPolicy values are  Never  or  OnFailure   More info  https   kubernetes io docs concepts workloads controllers jobs run to completion       parallelism    int32     Specifies the maximum desired number of pods the job should run at any given time  The actual number of pods running in steady state will be less than this number when    spec completions    status successful      spec parallelism   i e  when the work left to do is less than max parallelism  More info  https   kubernetes io docs concepts workloads controllers jobs run to completion       Lifecycle       completions    int32     Specifies the desired number of successfully finished pods the job should be run with   Setting to null means that the success of any pod signals the success of all pods  and allows parallelism to have any positive value   Setting to 1 means that parallelism is limited to 1 and the success of that pod signals the success of the job  More info  https   kubernetes io docs concepts workloads controllers jobs run to completion       completionMode    string     completionMode specifies how Pod completions are tracked  It can be  NonIndexed   default  or  Indexed         NonIndexed  means that the Job is considered complete when there have been  spec completions successfully completed Pods  Each Pod completion is homologous to each other        Indexed  means that the Pods of a Job get an associated completion index from 0 to   
spec completions   1   available in the annotation batch kubernetes io job completion index  The Job is considered complete when there is one successfully completed Pod for each index  When value is  Indexed    spec completions must be specified and   spec parallelism  must be less than or equal to 10 5  In addition  The Pod name takes the form    job name    index    random string    the Pod hostname takes the form    job name    index         More completion modes can be added in the future  If the Job controller observes a mode that it doesn t recognize  which is possible during upgrades due to version skew  the controller skips updates for the Job       backoffLimit    int32     Specifies the number of retries before marking this job failed  Defaults to 6      activeDeadlineSeconds    int64     Specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it  value must be positive integer  If a Job is suspended  at creation or through an update   this timer will effectively be stopped and reset when the Job is resumed again       ttlSecondsAfterFinished    int32     ttlSecondsAfterFinished limits the lifetime of a Job that has finished execution  either Complete or Failed   If this field is set  ttlSecondsAfterFinished after the Job finishes  it is eligible to be automatically deleted  When the Job is being deleted  its lifecycle guarantees  e g  finalizers  will be honored  If this field is unset  the Job won t be automatically deleted  If this field is set to zero  the Job becomes eligible to be deleted immediately after it finishes       suspend    boolean     suspend specifies whether the Job controller should create Pods or not  If a Job is created with suspend set to true  no Pods are created by the Job controller  If a Job is suspended after creation  i e  the flag goes from false to true   the Job controller will delete all active Pods associated with this Job  Users must design 
their workload to gracefully handle this  Suspending a Job will reset the StartTime field of the Job  effectively resetting the ActiveDeadlineSeconds timer too  Defaults to false       Selector       selector     a href    LabelSelector  a      A label query over pods that should match the pod count  Normally  the system sets this field for you  More info  https   kubernetes io docs concepts overview working with objects labels  label selectors      manualSelector    boolean     manualSelector controls generation of pod labels and pod selectors  Leave  manualSelector  unset unless you are certain what you are doing  When false or unset  the system picks labels unique to this job and appends those labels to the pod template   When true  the user is responsible for picking unique labels and specifying the selector   Failure to pick a unique label may cause this and other jobs to not function correctly   However  you may see  manualSelector true  in jobs that were created with the old  extensions v1beta1  API  More info  https   kubernetes io docs concepts workloads controllers jobs run to completion  specifying your own pod selector      Beta level       podFailurePolicy    PodFailurePolicy     Specifies the policy of handling failed pods  In particular  it allows specifying the set of actions and conditions which need to be satisfied to take the associated action  If empty  the default behaviour applies   the counter of failed pods  represented by the job s  status failed field  is incremented and it is checked against the backoffLimit  This field cannot be used in combination with restartPolicy OnFailure      a name  PodFailurePolicy    a     PodFailurePolicy describes how failed pods influence the backoffLimit          podFailurePolicy rules      PodFailurePolicyRule   required       Atomic  will be replaced during a merge           A list of pod failure policy rules  The rules are evaluated in order  Once a rule matches a Pod failure  the remaining rules 
are ignored  When no rule matches the Pod failure  the default handling applies   the counter of pod failures is incremented and it is checked against the backoffLimit  At most 20 elements are allowed        a name  PodFailurePolicyRule    a       PodFailurePolicyRule describes how a pod failure is handled when the requirements are met  One of onExitCodes and onPodConditions  but not both  can be used in each rule            podFailurePolicy rules action    string   required        Specifies the action taken on a pod failure when the requirements are satisfied  Possible values are                 FailJob  indicates that the pod s job is marked as Failed and all         running pods are terminated          FailIndex  indicates that the pod s index is marked as Failed and will         not be restarted          This value is beta level  It can be used when the          JobBackoffLimitPerIndex  feature gate is enabled  enabled by default           Ignore  indicates that the counter towards the  backoffLimit is not         incremented and a replacement pod is created          Count  indicates that the pod is handled in the default way   the         counter towards the  backoffLimit is incremented        Additional values may be added in the future  Clients should react to an unknown action by skipping the rule           podFailurePolicy rules onExitCodes    PodFailurePolicyOnExitCodesRequirement         Represents the requirement on the container exit codes          a name  PodFailurePolicyOnExitCodesRequirement    a         PodFailurePolicyOnExitCodesRequirement describes the requirement for handling a failed pod based on its container exit codes  In particular  it looks up the  state terminated exitCode for each app container and init container status  represented by the  status containerStatuses and  status initContainerStatuses fields in the Pod status  respectively  Containers completed with success  exit code 0  are excluded from the requirement check 
             podFailurePolicy rules onExitCodes operator    string   required          Represents the relationship between the container exit code s  and the specified values  Containers completed with success  exit code 0  are excluded from the requirement check  Possible values are                     In  the requirement is satisfied if at least one container exit code            might be multiple if there are multiple containers not restricted           by the  containerName  field  is in the set of specified values            NotIn  the requirement is satisfied if at least one container exit code            might be multiple if there are multiple containers not restricted           by the  containerName  field  is not in the set of specified values          Additional values may be added in the future  Clients should react to an unknown operator by assuming the requirement is not satisfied             podFailurePolicy rules onExitCodes values      int32   required           Set  unique values will be kept during a merge                   Specifies the set of values  Each returned container exit code  might be multiple in case of multiple containers  is checked against this set of values with respect to the operator  The list of values must be ordered and must not contain duplicates  Value  0  cannot be used for the In operator  At least one element is required  At most 255 elements are allowed             podFailurePolicy rules onExitCodes containerName    string           Restricts the check for exit codes to the container with the specified name  When null  the rule applies to all containers  When specified  it should match one of the container or initContainer names in the pod template           podFailurePolicy rules onPodConditions      PodFailurePolicyOnPodConditionsPattern          Atomic  will be replaced during a merge               Represents the requirement on the pod conditions  The requirement is represented as a list of pod condition 
patterns  The requirement is satisfied if at least one pattern matches an actual pod condition  At most 20 elements are allowed          a name  PodFailurePolicyOnPodConditionsPattern    a         PodFailurePolicyOnPodConditionsPattern describes a pattern for matching an actual pod condition type              podFailurePolicy rules onPodConditions status    string   required          Specifies the required Pod condition status  To match a pod condition it is required that the specified status equals the pod condition status  Defaults to True             podFailurePolicy rules onPodConditions type    string   required          Specifies the required Pod condition type  To match a pod condition it is required that the specified type equals the pod condition type       successPolicy    SuccessPolicy     successPolicy specifies the policy when the Job can be declared as succeeded  If empty  the default behavior applies   the Job is declared as succeeded only when the number of succeeded pods equals the completions  When the field is specified  it must be immutable and works only for the Indexed Jobs  Once the Job meets the SuccessPolicy  the lingering pods are terminated       This field is beta level  To use this field  you must enable the  JobSuccessPolicy  feature gate  enabled by default       a name  SuccessPolicy    a     SuccessPolicy describes when a Job can be declared as succeeded based on the success of some indexes          successPolicy rules      SuccessPolicyRule   required       Atomic  will be replaced during a merge           rules represents the list of alternative rules for declaring the Jobs as successful before   status succeeded     spec completions   Once any of the rules are met  the  SucceededCriteriaMet  condition is added  and the lingering pods are removed  The terminal state for such a Job has the  Complete  condition  Additionally  these rules are evaluated in order  Once the Job meets one of the rules  other rules are ignored  At most 
20 elements are allowed        a name  SuccessPolicyRule    a       SuccessPolicyRule describes a rule for declaring a Job as succeeded  Each rule must have at least one of the  succeededIndexes  or  succeededCount  specified            successPolicy rules succeededCount    int32         succeededCount specifies the minimal required size of the actual set of the succeeded indexes for the Job  When succeededCount is used along with succeededIndexes  the check is constrained only to the set of indexes specified by succeededIndexes  For example  given that succeededIndexes is  1 4   succeededCount is  3   and completed indexes are  1    3   and  5   the Job isn t declared as succeeded because only  1  and  3  indexes are considered in those rules  When this field is null  this doesn t default to any value and is never evaluated at any time  When specified it needs to be a positive integer           successPolicy rules succeededIndexes    string         succeededIndexes specifies the set of indexes which need to be contained in the actual set of the succeeded indexes for the Job  The list of indexes must be within 0 to   spec completions 1  and must not contain duplicates  At least one element is required  The indexes are represented as intervals separated by commas  The intervals can be a decimal integer or a pair of decimal integers separated by a hyphen  The numbers are listed in increasing order  Three or more consecutive numbers can be compressed and represented by the first and last element of the series  separated by a hyphen  For example  if the completed indexes are 1  3  4  5 and 7  they are represented as  1 3 5 7   When this field is null  this field doesn t default to any value and is never evaluated at any time       Alpha level       backoffLimitPerIndex    int32     Specifies the limit for the number of retries within an index before marking this index as failed  When enabled the number of failures per index is kept in the pod s batch kubernetes io job index failure count annotation  It can only be set when Job s completionMode 
Indexed  and the Pod s restart policy is Never  The field is immutable  This field is beta level  It can be used when the  JobBackoffLimitPerIndex  feature gate is enabled  enabled by default        managedBy    string     ManagedBy field indicates the controller that manages a Job  The k8s Job controller reconciles jobs which don t have this field at all or the field value is the reserved string  kubernetes io job controller   but skips reconciling Jobs with a custom value for this field  The value must be a valid domain prefixed path  e g  acme io foo    all characters before the first     must be a valid subdomain as defined by RFC 1123  All characters trailing the first     must be valid HTTP Path characters as defined by RFC 3986  The value cannot exceed 63 characters  This field is immutable       This field is alpha level  The job controller accepts setting the field when the feature gate JobManagedBy is enabled  disabled by default        maxFailedIndexes    int32     Specifies the maximal number of failed indexes before marking the Job as failed  when backoffLimitPerIndex is set  Once the number of failed indexes exceeds this number the entire Job is marked as Failed and its execution is terminated  When left as null the job continues execution of all of its indexes and is marked with the  Complete  Job condition  It can only be specified when backoffLimitPerIndex is set  It can be null or up to completions  It is required and must be less than or equal to 10 4 when completions is greater than 10 5  This field is beta level  It can be used when the  JobBackoffLimitPerIndex  feature gate is enabled  enabled by default        podReplacementPolicy    string     podReplacementPolicy specifies when to create replacement Pods  Possible values are    TerminatingOrFailed means that we recreate pods     when they are terminating  has a metadata deletionTimestamp  or failed      Failed means to wait until a previously created Pod is fully terminated  has phase     
Failed or Succeeded  before creating a replacement Pod       When using podFailurePolicy  Failed is the only allowed value  TerminatingOrFailed and Failed are allowed values when podFailurePolicy is not in use  This is a beta field  To use this  enable the JobPodReplacementPolicy feature toggle  This is on by default        JobStatus   JobStatus   JobStatus represents the current state of a Job    hr       startTime    Time     Represents time when the job controller started processing a job  When a Job is created in the suspended state  this field is not set until the first time it is resumed  This field is reset every time a Job is resumed from suspension  It is represented in RFC3339 form and is in UTC       Once set  the field can only be removed when the job is suspended  The field cannot be modified while the job is unsuspended or finished      a name  Time    a     Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers        completionTime    Time     Represents time when the job was completed  It is not guaranteed to be set in happens before order across separate operations  It is represented in RFC3339 form and is in UTC  The completion time is set when the job finishes successfully  and only then  The value cannot be updated or removed  The value indicates the same or later point in time as the startTime field      a name  Time    a     Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers        active    int32     The number of pending and running pods which are not terminating  without a deletionTimestamp   The value is zero for finished jobs       failed    int32     The number of pods which reached phase Failed  The value increases monotonically       succeeded    int32     The number of pods which reached phase 
Succeeded  The value increases monotonically for a given spec  However  it may decrease in reaction to scale down of elastic indexed jobs       completedIndexes    string     completedIndexes holds the completed indexes when  spec completionMode    Indexed  in a text format  The indexes are represented as decimal integers separated by commas  The numbers are listed in increasing order  Three or more consecutive numbers are compressed and represented by the first and last element of the series  separated by a hyphen  For example  if the completed indexes are 1  3  4  5 and 7  they are represented as  1 3 5 7        conditions      JobCondition      Patch strategy  merge on key  type         Atomic  will be replaced during a merge       The latest available observations of an object s current state  When a Job fails  one of the conditions will have type  Failed  and status true  When a Job is suspended  one of the conditions will have type  Suspended  and status true  when the Job is resumed  the status of this condition will become false  When a Job is completed  one of the conditions will have type  Complete  and status true       A job is considered finished when it is in a terminal condition  either  Complete  or  Failed   A Job cannot have both the  Complete  and  Failed  conditions  Additionally  it cannot be in the  Complete  and  FailureTarget  conditions  The  Complete    Failed  and  FailureTarget  conditions cannot be disabled       More info  https   kubernetes io docs concepts workloads controllers jobs run to completion      a name  JobCondition    a     JobCondition describes current state of a job          conditions status    string   required      Status of the condition  one of True  False  Unknown         conditions type    string   required      Type of job condition  Complete or Failed         conditions lastProbeTime    Time       Last time the condition was checked        a name  Time    a       Time is a wrapper around time Time which 
supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions lastTransitionTime    Time       Last time the condition transitioned from one status to another        a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions message    string       Human readable message indicating details about last transition         conditions reason    string        brief  reason for the condition s last transition       uncountedTerminatedPods    UncountedTerminatedPods     uncountedTerminatedPods holds the UIDs of Pods that have terminated but the job controller hasn t yet accounted for in the status counters       The job controller creates pods with a finalizer  When a pod terminates  succeeded or failed   the controller does three steps to account for it in the job status       1  Add the pod UID to the arrays in this field  2  Remove the pod finalizer  3  Remove the pod UID from the arrays while increasing the corresponding       counter       Old jobs might not be tracked using this field  in which case the field remains null  The structure is empty for finished jobs      a name  UncountedTerminatedPods    a     UncountedTerminatedPods holds UIDs of Pods that have terminated but haven t been accounted for in Job status counters          uncountedTerminatedPods failed      string        Set  unique values will be kept during a merge           failed holds UIDs of failed Pods         uncountedTerminatedPods succeeded      string        Set  unique values will be kept during a merge           succeeded holds UIDs of succeeded Pods         Beta level       ready    int32     The number of active pods which have a Ready condition and are not terminating  without a deletionTimestamp        Alpha level       failedIndexes    string     
FailedIndexes holds the failed indexes when spec backoffLimitPerIndex is set  The indexes are represented in a text format analogous to that of the  completedIndexes  field  i e   they are kept as decimal integers separated by commas  The numbers are listed in increasing order  Three or more consecutive numbers are compressed and represented by the first and last element of the series  separated by a hyphen  For example  if the failed indexes are 1  3  4  5 and 7  they are represented as  1 3 5 7   The set of failed indexes cannot overlap with the set of completed indexes       This field is beta level  It can be used when the  JobBackoffLimitPerIndex  feature gate is enabled  enabled by default        terminating    int32     The number of pods which are terminating  in phase Pending or Running and have a deletionTimestamp        This field is beta level  The job controller populates the field when the feature gate JobPodReplacementPolicy is enabled  enabled by default         JobList   JobList   JobList is a collection of jobs    hr       apiVersion    batch v1       kind    JobList       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      items       a href    Job  a    required    items is the list of Jobs          Operations   Operations      hr             get  read the specified Job       HTTP Request  GET  apis batch v1 namespaces  namespace  jobs  name        Parameters       name     in path    string  required    name of the Job       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Job  a    OK  401  Unauthorized        get  read status of the specified Job       HTTP Request  GET  apis batch v1 namespaces  namespace  jobs  name  status       Parameters       name     in path    string  required    name of the Job       
namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Job  a    OK  401  Unauthorized        list  list or watch objects of kind Job       HTTP Request  GET  apis batch v1 namespaces  namespace  jobs       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    JobList  a    OK  401  Unauthorized        list  list or watch objects of kind Job       HTTP Request  GET  apis batch v1 jobs       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        
sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    JobList  a    OK  401  Unauthorized        create  create a Job       HTTP Request  POST  apis batch v1 namespaces  namespace  jobs       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    Job  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Job  a    OK  201   a href    Job  a    Created  202   a href    Job  a    Accepted  401  Unauthorized        update  replace the specified Job       HTTP Request  PUT  apis batch v1 namespaces  namespace  jobs  name        Parameters       name     in path    string  required    name of the Job       namespace     in path    string  required     a href    namespace  a        body     a href    Job  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Job  a    OK  201   a href    Job  a    Created  401  Unauthorized        update  replace status of the specified Job       HTTP Request  PUT  apis batch v1 namespaces  namespace  jobs  name  status       Parameters       name     in path    string  required    name of the Job       namespace     in path    string  required     a href    namespace  a        body     a href    Job  a   required           dryRun     in query    string   
  a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Job  a    OK  201   a href    Job  a    Created  401  Unauthorized        patch  partially update the specified Job       HTTP Request  PATCH  apis batch v1 namespaces  namespace  jobs  name        Parameters       name     in path    string  required    name of the Job       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Job  a    OK  201   a href    Job  a    Created  401  Unauthorized        patch  partially update status of the specified Job       HTTP Request  PATCH  apis batch v1 namespaces  namespace  jobs  name  status       Parameters       name     in path    string  required    name of the Job       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Job  a    OK  201   a href    Job  a    Created  401  Unauthorized        delete  delete a Job       HTTP Request  DELETE  apis batch v1 namespaces  namespace  jobs  name        
Parameters       name     in path    string  required    name of the Job       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of Job       HTTP Request  DELETE  apis batch v1 namespaces  namespace  jobs       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
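The Job record above describes the compressed interval notation used by `status.completedIndexes`, `status.failedIndexes`, and `succeededIndexes`: decimal integers separated by commas, in increasing order, with runs of three or more consecutive numbers collapsed to first-last (e.g. indexes 1, 3, 4, 5, 7 become "1,3-5,7"). A minimal sketch of that encoding in Python — the helper names are ours for illustration, not part of any Kubernetes client library:

```python
def expand_indexes(s):
    """Expand a compressed index string like '1,3-5,7' into a sorted list of ints."""
    out = []
    if not s:
        return out
    for part in s.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            out.extend(range(int(lo), int(hi) + 1))
        else:
            out.append(int(part))
    return out

def compress_indexes(nums):
    """Compress a sorted list of indexes; runs of three or more consecutive
    numbers are represented by their first and last element joined by a hyphen,
    matching the format described for completedIndexes."""
    parts = []
    i = 0
    while i < len(nums):
        j = i
        while j + 1 < len(nums) and nums[j + 1] == nums[j] + 1:
            j += 1
        if j - i >= 2:  # three or more consecutive numbers
            parts.append(f"{nums[i]}-{nums[j]}")
        else:
            parts.extend(str(n) for n in nums[i:j + 1])
        i = j + 1
    return ",".join(parts)

print(compress_indexes([1, 3, 4, 5, 7]))  # → 1,3-5,7
print(expand_indexes("1,3-5,7"))          # → [1, 2, 3, 4, 5, 7][1:] is wrong; prints [1, 3, 4, 5, 7]
```

Note that per the field description a pair of consecutive indexes such as 1, 2 is left as "1,2" rather than "1-2", since only runs of three or more are compressed.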
{"questions":"kubernetes reference weight 5 kind ReplicaSet apiVersion apps v1 contenttype apireference ReplicaSet ensures that a specified number of pod replicas are running at any given time apimetadata title ReplicaSet autogenerated true import k8s io api apps v1","answers":"---\napi_metadata:\n  apiVersion: \"apps\/v1\"\n  import: \"k8s.io\/api\/apps\/v1\"\n  kind: \"ReplicaSet\"\ncontent_type: \"api_reference\"\ndescription: \"ReplicaSet ensures that a specified number of pod replicas are running at any given time.\"\ntitle: \"ReplicaSet\"\nweight: 5\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: apps\/v1`\n\n`import \"k8s.io\/api\/apps\/v1\"`\n\n\n## ReplicaSet {#ReplicaSet}\n\nReplicaSet ensures that a specified number of pod replicas are running at any given time.\n\n<hr>\n\n- **apiVersion**: apps\/v1\n\n\n- **kind**: ReplicaSet\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  If the Labels of a ReplicaSet are empty, they are defaulted to be the same as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">ReplicaSetSpec<\/a>)\n\n  Spec defines the specification of the desired behavior of the ReplicaSet. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">ReplicaSetStatus<\/a>)\n\n  Status is the most recently observed status of the ReplicaSet. This data may be out of date by some window of time. Populated by the system. Read-only. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## ReplicaSetSpec {#ReplicaSetSpec}\n\nReplicaSetSpec is the specification of a ReplicaSet.\n\n<hr>\n\n- **selector** (<a href=\"\">LabelSelector<\/a>), required\n\n  Selector is a label query over pods that should match the replica count. Label keys and values that must match in order to be controlled by this replica set. It must match the pod template's labels. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/#label-selectors\n\n- **template** (<a href=\"\">PodTemplateSpec<\/a>)\n\n  Template is the object that describes the pod that will be created if insufficient replicas are detected. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/replicationcontroller#pod-template\n\n- **replicas** (int32)\n\n  Replicas is the number of desired replicas. This is a pointer to distinguish between explicit zero and unspecified. Defaults to 1. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/replicationcontroller\/#what-is-a-replicationcontroller\n\n- **minReadySeconds** (int32)\n\n  Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready)\n\n\n\n\n\n## ReplicaSetStatus {#ReplicaSetStatus}\n\nReplicaSetStatus represents the current status of a ReplicaSet.\n\n<hr>\n\n- **replicas** (int32), required\n\n  Replicas is the most recently observed number of replicas. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/replicationcontroller\/#what-is-a-replicationcontroller\n\n- **availableReplicas** (int32)\n\n  The number of available replicas (ready for at least minReadySeconds) for this replica set.\n\n- **readyReplicas** (int32)\n\n  readyReplicas is the number of pods targeted by this ReplicaSet with a Ready Condition.\n\n- **fullyLabeledReplicas** (int32)\n\n  The number of pods that have labels matching the labels of the pod template of the replicaset.\n\n- **conditions** ([]ReplicaSetCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  Represents the latest available observations of a replica set's current state.\n\n  <a name=\"ReplicaSetCondition\"><\/a>\n  *ReplicaSetCondition describes the state of a replica set at a certain point.*\n\n  - **conditions.status** (string), required\n\n    Status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    Type of replica set condition.\n\n  - **conditions.lastTransitionTime** (Time)\n\n    The last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    A human readable message indicating details about the transition.\n\n  - **conditions.reason** (string)\n\n    The reason for the condition's last transition.\n\n- **observedGeneration** (int64)\n\n  ObservedGeneration reflects the generation of the most recently observed ReplicaSet.\n\n\n\n\n\n## ReplicaSetList {#ReplicaSetList}\n\nReplicaSetList is a collection of ReplicaSets.\n\n<hr>\n\n- **apiVersion**: apps\/v1\n\n\n- **kind**: ReplicaSetList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">ReplicaSet<\/a>), required\n\n  List of ReplicaSets. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/replicationcontroller\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ReplicaSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/replicasets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicaSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicaSet<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified ReplicaSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/replicasets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicaSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicaSet<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ReplicaSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/replicasets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  
<a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicaSetList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ReplicaSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/replicasets\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicaSetList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ReplicaSet\n\n#### HTTP Request\n\nPOST \/apis\/apps\/v1\/namespaces\/{namespace}\/replicasets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ReplicaSet<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): 
string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicaSet<\/a>): OK\n\n201 (<a href=\"\">ReplicaSet<\/a>): Created\n\n202 (<a href=\"\">ReplicaSet<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ReplicaSet\n\n#### HTTP Request\n\nPUT \/apis\/apps\/v1\/namespaces\/{namespace}\/replicasets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicaSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ReplicaSet<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicaSet<\/a>): OK\n\n201 (<a href=\"\">ReplicaSet<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified ReplicaSet\n\n#### HTTP Request\n\nPUT \/apis\/apps\/v1\/namespaces\/{namespace}\/replicasets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicaSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ReplicaSet<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### 
Response\n\n\n200 (<a href=\"\">ReplicaSet<\/a>): OK\n\n201 (<a href=\"\">ReplicaSet<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ReplicaSet\n\n#### HTTP Request\n\nPATCH \/apis\/apps\/v1\/namespaces\/{namespace}\/replicasets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicaSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicaSet<\/a>): OK\n\n201 (<a href=\"\">ReplicaSet<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified ReplicaSet\n\n#### HTTP Request\n\nPATCH \/apis\/apps\/v1\/namespaces\/{namespace}\/replicasets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicaSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicaSet<\/a>): OK\n\n201 (<a href=\"\">ReplicaSet<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ReplicaSet\n\n#### HTTP Request\n\nDELETE 
\/apis\/apps\/v1\/namespaces\/{namespace}\/replicasets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicaSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ReplicaSet\n\n#### HTTP Request\n\nDELETE \/apis\/apps\/v1\/namespaces\/{namespace}\/replicasets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): 
integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   apps v1    import   k8s io api apps v1    kind   ReplicaSet  content type   api reference  description   ReplicaSet ensures that a specified number of pod replicas are running at any given time   title   ReplicaSet  weight  5 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  apps v1    import  k8s io api apps v1        ReplicaSet   ReplicaSet   ReplicaSet ensures that a specified number of pod replicas are running at any given time    hr       apiVersion    apps v1       kind    ReplicaSet       metadata     a href    ObjectMeta  a      If the Labels of a ReplicaSet are empty  they are defaulted to be the same as the Pod s  that the ReplicaSet manages  Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    ReplicaSetSpec  a      Spec defines the specification of the desired behavior of the ReplicaSet  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status      status     a href    ReplicaSetStatus  a      Status is the most recently observed status of the ReplicaSet  This data may be out of date by some window of time  Populated by the system  
Read only  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status         ReplicaSetSpec   ReplicaSetSpec   ReplicaSetSpec is the specification of a ReplicaSet    hr       selector     a href    LabelSelector  a    required    Selector is a label query over pods that should match the replica count  Label keys and values that must match in order to be controlled by this replica set  It must match the pod template s labels  More info  https   kubernetes io docs concepts overview working with objects labels  label selectors      template     a href    PodTemplateSpec  a      Template is the object that describes the pod that will be created if insufficient replicas are detected  More info  https   kubernetes io docs concepts workloads controllers replicationcontroller pod template      replicas    int32     Replicas is the number of desired replicas  This is a pointer to distinguish between explicit zero and unspecified  Defaults to 1  More info  https   kubernetes io docs concepts workloads controllers replicationcontroller  what is a replicationcontroller      minReadySeconds    int32     Minimum number of seconds for which a newly created pod should be ready without any of its container crashing  for it to be considered available  Defaults to 0  pod will be considered available as soon as it is ready          ReplicaSetStatus   ReplicaSetStatus   ReplicaSetStatus represents the current status of a ReplicaSet    hr       replicas    int32   required    Replicas is the most recently observed number of replicas  More info  https   kubernetes io docs concepts workloads controllers replicationcontroller  what is a replicationcontroller      availableReplicas    int32     The number of available replicas  ready for at least minReadySeconds  for this replica set       readyReplicas    int32     readyReplicas is the number of pods targeted by this ReplicaSet with a Ready Condition       fullyLabeledReplicas    int32   
  The number of pods that have labels matching the labels of the pod template of the replicaset       conditions      ReplicaSetCondition      Patch strategy  merge on key  type         Map  unique values on key type will be kept during a merge       Represents the latest available observations of a replica set s current state      a name  ReplicaSetCondition    a     ReplicaSetCondition describes the state of a replica set at a certain point          conditions status    string   required      Status of the condition  one of True  False  Unknown         conditions type    string   required      Type of replica set condition         conditions lastTransitionTime    Time       The last time the condition transitioned from one status to another        a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions message    string       A human readable message indicating details about the transition         conditions reason    string       The reason for the condition s last transition       observedGeneration    int64     ObservedGeneration reflects the generation of the most recently observed ReplicaSet          ReplicaSetList   ReplicaSetList   ReplicaSetList is a collection of ReplicaSets    hr       apiVersion    apps v1       kind    ReplicaSetList       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      items       a href    ReplicaSet  a    required    List of ReplicaSets  More info  https   kubernetes io docs concepts workloads controllers replicationcontroller         Operations   Operations      hr             get  read the specified ReplicaSet       HTTP Request  GET  apis apps v1 namespaces  namespace  replicasets  name        Parameters       name     in path    string  required  
  name of the ReplicaSet       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicaSet  a    OK  401  Unauthorized        get  read status of the specified ReplicaSet       HTTP Request  GET  apis apps v1 namespaces  namespace  replicasets  name  status       Parameters       name     in path    string  required    name of the ReplicaSet       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicaSet  a    OK  401  Unauthorized        list  list or watch objects of kind ReplicaSet       HTTP Request  GET  apis apps v1 namespaces  namespace  replicasets       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ReplicaSetList  a    OK  401  Unauthorized        list  list or watch objects of kind ReplicaSet       HTTP Request  GET  apis apps v1 replicasets       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        
continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ReplicaSetList  a    OK  401  Unauthorized        create  create a ReplicaSet       HTTP Request  POST  apis apps v1 namespaces  namespace  replicasets       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    ReplicaSet  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicaSet  a    OK  201   a href    ReplicaSet  a    Created  202   a href    ReplicaSet  a    Accepted  401  Unauthorized        update  replace the specified ReplicaSet       HTTP Request  PUT  apis apps v1 namespaces  namespace  replicasets  name        Parameters       name     in path    string  required    name of the ReplicaSet       namespace     in path    string  required     a href    namespace  a        body     a href    ReplicaSet  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query 
   string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicaSet  a    OK  201   a href    ReplicaSet  a    Created  401  Unauthorized        update  replace status of the specified ReplicaSet       HTTP Request  PUT  apis apps v1 namespaces  namespace  replicasets  name  status       Parameters       name     in path    string  required    name of the ReplicaSet       namespace     in path    string  required     a href    namespace  a        body     a href    ReplicaSet  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicaSet  a    OK  201   a href    ReplicaSet  a    Created  401  Unauthorized        patch  partially update the specified ReplicaSet       HTTP Request  PATCH  apis apps v1 namespaces  namespace  replicasets  name        Parameters       name     in path    string  required    name of the ReplicaSet       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicaSet  a    OK  201   a href    ReplicaSet  a    Created  401  Unauthorized        patch  partially update status of the specified ReplicaSet       HTTP Request  PATCH  apis apps v1 namespaces  namespace  replicasets  name  status       Parameters       name     in path    string  required    name of 
the ReplicaSet       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicaSet  a    OK  201   a href    ReplicaSet  a    Created  401  Unauthorized        delete  delete a ReplicaSet       HTTP Request  DELETE  apis apps v1 namespaces  namespace  replicasets  name        Parameters       name     in path    string  required    name of the ReplicaSet       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of ReplicaSet       HTTP Request  DELETE  apis apps v1 namespaces  namespace  replicasets       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href  
  limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
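The ReplicaSet `selector` documented above must match the pod template's labels. A standalone sketch of that consistency rule (`selectorMatchesTemplate` is a hypothetical helper; the real validation is enforced by the API server, and full selectors may also carry `matchExpressions`, which this sketch ignores):

```go
package main

import "fmt"

// selectorMatchesTemplate reports whether every matchLabels entry in a
// ReplicaSet selector is also present, with the same value, in the pod
// template's labels -- the rule ReplicaSetSpec requires.
func selectorMatchesTemplate(matchLabels, templateLabels map[string]string) bool {
	for key, want := range matchLabels {
		if got, ok := templateLabels[key]; !ok || got != want {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"app": "web"}
	fmt.Println(selectorMatchesTemplate(selector,
		map[string]string{"app": "web", "tier": "frontend"})) // true
	fmt.Println(selectorMatchesTemplate(selector,
		map[string]string{"app": "api"})) // false
}
```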
{"questions":"kubernetes reference weight 11 contenttype apireference kind CronJob apiVersion batch v1 apimetadata autogenerated true import k8s io api batch v1 CronJob represents the configuration of a single cron job title CronJob","answers":"---\napi_metadata:\n  apiVersion: \"batch\/v1\"\n  import: \"k8s.io\/api\/batch\/v1\"\n  kind: \"CronJob\"\ncontent_type: \"api_reference\"\ndescription: \"CronJob represents the configuration of a single cron job.\"\ntitle: \"CronJob\"\nweight: 11\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: batch\/v1`\n\n`import \"k8s.io\/api\/batch\/v1\"`\n\n\n## CronJob {#CronJob}\n\nCronJob represents the configuration of a single cron job.\n\n<hr>\n\n- **apiVersion**: batch\/v1\n\n\n- **kind**: CronJob\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">CronJobSpec<\/a>)\n\n  Specification of the desired behavior of a cron job, including the schedule. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">CronJobStatus<\/a>)\n\n  Current status of a cron job. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## CronJobSpec {#CronJobSpec}\n\nCronJobSpec describes what the job execution will look like and when it will actually run.\n\n<hr>\n\n- **jobTemplate** (JobTemplateSpec), required\n\n  Specifies the job that will be created when executing a CronJob.\n\n  <a name=\"JobTemplateSpec\"><\/a>\n  *JobTemplateSpec describes the data a Job should have when created from a template*\n\n  - **jobTemplate.metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n    Standard object's metadata of the jobs created from this template. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n  - **jobTemplate.spec** (<a href=\"\">JobSpec<\/a>)\n\n    Specification of the desired behavior of the job. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **schedule** (string), required\n\n  The schedule in Cron format, see https:\/\/en.wikipedia.org\/wiki\/Cron.\n\n- **timeZone** (string)\n\n  The time zone name for the given schedule, see https:\/\/en.wikipedia.org\/wiki\/List_of_tz_database_time_zones. If not specified, this will default to the time zone of the kube-controller-manager process. The set of valid time zone names and the time zone offset is loaded from the system-wide time zone database by the API server during CronJob validation and the controller manager during execution. If no system-wide time zone database can be found, a bundled version of the database is used instead. If the time zone name becomes invalid during the lifetime of a CronJob or due to a change in host configuration, the controller will stop creating new Jobs and will create a system event with the reason UnknownTimeZone. 
More information can be found in https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/cron-jobs\/#time-zones\n\n- **concurrencyPolicy** (string)\n\n  Specifies how to treat concurrent executions of a Job. Valid values are:\n  \n  - \"Allow\" (default): allows CronJobs to run concurrently;\n  - \"Forbid\": forbids concurrent runs, skipping the next run if the previous run hasn't finished yet;\n  - \"Replace\": cancels the currently running job and replaces it with a new one\n\n- **startingDeadlineSeconds** (int64)\n\n  Optional deadline in seconds for starting the job if it misses its scheduled time for any reason. Missed job executions will be counted as failed ones.\n\n- **suspend** (boolean)\n\n  This flag tells the controller to suspend subsequent executions; it does not apply to already started executions. Defaults to false.\n\n- **successfulJobsHistoryLimit** (int32)\n\n  The number of successful finished jobs to retain. Value must be a non-negative integer. Defaults to 3.\n\n- **failedJobsHistoryLimit** (int32)\n\n  The number of failed finished jobs to retain. Value must be a non-negative integer. Defaults to 1.\n\n\n\n\n\n## CronJobStatus {#CronJobStatus}\n\nCronJobStatus represents the current state of a cron job.\n\n<hr>\n\n- **active** ([]<a href=\"\">ObjectReference<\/a>)\n\n  *Atomic: will be replaced during a merge*\n  \n  A list of pointers to currently running jobs.\n\n- **lastScheduleTime** (Time)\n\n  Information about when the job was last successfully scheduled.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **lastSuccessfulTime** (Time)\n\n  Information about when the job last successfully completed.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n\n\n\n\n## CronJobList {#CronJobList}\n\nCronJobList is a collection of cron jobs.\n\n<hr>\n\n- **apiVersion**: batch\/v1\n\n\n- **kind**: CronJobList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">CronJob<\/a>), required\n\n  items is the list of CronJobs.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified CronJob\n\n#### HTTP Request\n\nGET \/apis\/batch\/v1\/namespaces\/{namespace}\/cronjobs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CronJob\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CronJob<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified CronJob\n\n#### HTTP Request\n\nGET \/apis\/batch\/v1\/namespaces\/{namespace}\/cronjobs\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CronJob\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CronJob<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind CronJob\n\n#### HTTP Request\n\nGET \/apis\/batch\/v1\/namespaces\/{namespace}\/cronjobs\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a 
href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CronJobList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind CronJob\n\n#### HTTP Request\n\nGET \/apis\/batch\/v1\/cronjobs\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CronJobList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a CronJob\n\n#### HTTP Request\n\nPOST 
\/apis\/batch\/v1\/namespaces\/{namespace}\/cronjobs\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">CronJob<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CronJob<\/a>): OK\n\n201 (<a href=\"\">CronJob<\/a>): Created\n\n202 (<a href=\"\">CronJob<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified CronJob\n\n#### HTTP Request\n\nPUT \/apis\/batch\/v1\/namespaces\/{namespace}\/cronjobs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CronJob\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">CronJob<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CronJob<\/a>): OK\n\n201 (<a href=\"\">CronJob<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified CronJob\n\n#### HTTP Request\n\nPUT \/apis\/batch\/v1\/namespaces\/{namespace}\/cronjobs\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CronJob\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">CronJob<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): 
string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CronJob<\/a>): OK\n\n201 (<a href=\"\">CronJob<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified CronJob\n\n#### HTTP Request\n\nPATCH \/apis\/batch\/v1\/namespaces\/{namespace}\/cronjobs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CronJob\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CronJob<\/a>): OK\n\n201 (<a href=\"\">CronJob<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified CronJob\n\n#### HTTP Request\n\nPATCH \/apis\/batch\/v1\/namespaces\/{namespace}\/cronjobs\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CronJob\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### 
Response\n\n\n200 (<a href=\"\">CronJob<\/a>): OK\n\n201 (<a href=\"\">CronJob<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a CronJob\n\n#### HTTP Request\n\nDELETE \/apis\/batch\/v1\/namespaces\/{namespace}\/cronjobs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CronJob\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of CronJob\n\n#### HTTP Request\n\nDELETE \/apis\/batch\/v1\/namespaces\/{namespace}\/cronjobs\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a 
href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
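The CronJobSpec fields above map directly onto the JSON body accepted by the `create` operation (POST \/apis\/batch\/v1\/namespaces\/{namespace}\/cronjobs). A minimal sketch in Python of assembling and sanity-checking such a body; the `make_cronjob` helper, its defaults, and the example image are illustrative, not part of any Kubernetes client library:

```python
# Illustrative helper (not from any client library): build a dict matching
# the batch/v1 CronJob schema described above and enforce the documented
# constraints on concurrencyPolicy and the history limits.

def make_cronjob(name, schedule, image,
                 concurrency_policy="Allow",
                 successful_history=3, failed_history=1):
    """Return a dict shaped like the CronJob create request body."""
    if successful_history < 0 or failed_history < 0:
        # Both history limits "must be non-negative integer" per the spec.
        raise ValueError("history limits must be non-negative integers")
    if concurrency_policy not in ("Allow", "Forbid", "Replace"):
        raise ValueError("concurrencyPolicy must be Allow, Forbid or Replace")
    return {
        "apiVersion": "batch/v1",
        "kind": "CronJob",
        "metadata": {"name": name},
        "spec": {
            "schedule": schedule,              # required, Cron format
            "concurrencyPolicy": concurrency_policy,
            "successfulJobsHistoryLimit": successful_history,
            "failedJobsHistoryLimit": failed_history,
            "jobTemplate": {                   # required JobTemplateSpec
                "spec": {
                    "template": {
                        "spec": {
                            "containers": [{"name": name, "image": image}],
                            "restartPolicy": "OnFailure",
                        }
                    }
                }
            },
        },
    }

cj = make_cronjob("nightly-report", "0 2 * * *", "busybox:1.36")
print(cj["spec"]["schedule"])  # 0 2 * * *
```

Serialized with `json.dumps`, the resulting dict is the request body for the `create` operation; the same shape (wrapped as a Patch) applies to the `patch` operations.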
{"questions":"kubernetes reference kind HorizontalPodAutoscaler apiVersion autoscaling v1 title HorizontalPodAutoscaler configuration of a horizontal pod autoscaler contenttype apireference import k8s io api autoscaling v1 apimetadata weight 12 autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"autoscaling\/v1\"\n  import: \"k8s.io\/api\/autoscaling\/v1\"\n  kind: \"HorizontalPodAutoscaler\"\ncontent_type: \"api_reference\"\ndescription: \"configuration of a horizontal pod autoscaler.\"\ntitle: \"HorizontalPodAutoscaler\"\nweight: 12\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: autoscaling\/v1`\n\n`import \"k8s.io\/api\/autoscaling\/v1\"`\n\n\n## HorizontalPodAutoscaler {#HorizontalPodAutoscaler}\n\nconfiguration of a horizontal pod autoscaler.\n\n<hr>\n\n- **apiVersion**: autoscaling\/v1\n\n\n- **kind**: HorizontalPodAutoscaler\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">HorizontalPodAutoscalerSpec<\/a>)\n\n  spec defines the behaviour of autoscaler. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status.\n\n- **status** (<a href=\"\">HorizontalPodAutoscalerStatus<\/a>)\n\n  status is the current information about the autoscaler.\n\n\n\n\n\n## HorizontalPodAutoscalerSpec {#HorizontalPodAutoscalerSpec}\n\nspecification of a horizontal pod autoscaler.\n\n<hr>\n\n- **maxReplicas** (int32), required\n\n  maxReplicas is the upper limit for the number of pods that can be set by the autoscaler; cannot be smaller than MinReplicas.\n\n- **scaleTargetRef** (CrossVersionObjectReference), required\n\n  reference to scaled resource; horizontal pod autoscaler will learn the current resource consumption and will set the desired number of pods by using its Scale subresource.\n\n  <a name=\"CrossVersionObjectReference\"><\/a>\n  *CrossVersionObjectReference contains enough information to let you identify the referred resource.*\n\n  - **scaleTargetRef.kind** (string), required\n\n    kind is the kind of the referent; More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n  - **scaleTargetRef.name** (string), required\n\n    name is the name of the referent; More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n  - **scaleTargetRef.apiVersion** (string)\n\n    apiVersion is the API version of the referent\n\n- **minReplicas** (int32)\n\n  minReplicas is the lower limit for the number of replicas to which the autoscaler can scale down.  It defaults to 1 pod.  minReplicas is allowed to be 0 if the alpha feature gate HPAScaleToZero is enabled and at least one Object or External metric is configured.  
Scaling is active as long as at least one metric value is available.\n\n- **targetCPUUtilizationPercentage** (int32)\n\n  targetCPUUtilizationPercentage is the target average CPU utilization (represented as a percentage of requested CPU) over all the pods; if not specified, the default autoscaling policy will be used.\n\n\n\n\n\n## HorizontalPodAutoscalerStatus {#HorizontalPodAutoscalerStatus}\n\ncurrent status of a horizontal pod autoscaler\n\n<hr>\n\n- **currentReplicas** (int32), required\n\n  currentReplicas is the current number of replicas of pods managed by this autoscaler.\n\n- **desiredReplicas** (int32), required\n\n  desiredReplicas is the desired number of replicas of pods managed by this autoscaler.\n\n- **currentCPUUtilizationPercentage** (int32)\n\n  currentCPUUtilizationPercentage is the current average CPU utilization over all pods, represented as a percentage of requested CPU, e.g. 70 means that an average pod is now using 70% of its requested CPU.\n\n- **lastScaleTime** (Time)\n\n  lastScaleTime is the last time the HorizontalPodAutoscaler scaled the number of pods; used by the autoscaler to control how often the number of pods is changed.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **observedGeneration** (int64)\n\n  observedGeneration is the most recent generation observed by this autoscaler.\n\n\n\n\n\n## HorizontalPodAutoscalerList {#HorizontalPodAutoscalerList}\n\nlist of horizontal pod autoscaler objects.\n\n<hr>\n\n- **apiVersion**: autoscaling\/v1\n\n\n- **kind**: HorizontalPodAutoscalerList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata.\n\n- **items** ([]<a href=\"\">HorizontalPodAutoscaler<\/a>), required\n\n  items is the list of horizontal pod autoscaler objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nGET \/apis\/autoscaling\/v1\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nGET \/apis\/autoscaling\/v1\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind HorizontalPodAutoscaler\n\n#### HTTP Request\n\nGET \/apis\/autoscaling\/v1\/namespaces\/{namespace}\/horizontalpodautoscalers\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, 
required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscalerList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind HorizontalPodAutoscaler\n\n#### HTTP Request\n\nGET \/apis\/autoscaling\/v1\/horizontalpodautoscalers\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in 
query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscalerList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a HorizontalPodAutoscaler\n\n#### HTTP Request\n\nPOST \/apis\/autoscaling\/v1\/namespaces\/{namespace}\/horizontalpodautoscalers\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">HorizontalPodAutoscaler<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n201 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Created\n\n202 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nPUT \/apis\/autoscaling\/v1\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">HorizontalPodAutoscaler<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n201 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Created\n\n401: Unauthorized\n\n\n### 
`update` replace status of the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nPUT \/apis\/autoscaling\/v1\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">HorizontalPodAutoscaler<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n201 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nPATCH \/apis\/autoscaling\/v1\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n201 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nPATCH 
\/apis\/autoscaling\/v1\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n201 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a HorizontalPodAutoscaler\n\n#### HTTP Request\n\nDELETE \/apis\/autoscaling\/v1\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of HorizontalPodAutoscaler\n\n#### HTTP Request\n\nDELETE \/apis\/autoscaling\/v1\/namespaces\/{namespace}\/horizontalpodautoscalers\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a 
href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
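The spec fields documented above (required `scaleTargetRef` and `maxReplicas`, optional `minReplicas` and `targetCPUUtilizationPercentage`) can be combined into a complete `autoscaling/v1` HorizontalPodAutoscaler body. The following is a minimal sketch in Python; the target Deployment name `my-app` and the namespace `default` are hypothetical placeholders, and the helper function is not part of any official client library:

```python
# Sketch: build an autoscaling/v1 HorizontalPodAutoscaler body as a plain
# dict, mirroring the spec fields documented above. The names used here
# ("my-app", "default") are placeholder assumptions.
def make_hpa(name, target_name, max_replicas, min_replicas=1,
             target_cpu_percent=None, namespace="default"):
    spec = {
        # scaleTargetRef and maxReplicas are the only required spec fields.
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": target_name,
        },
        "maxReplicas": max_replicas,
        # minReplicas defaults to 1 pod if unset.
        "minReplicas": min_replicas,
    }
    if target_cpu_percent is not None:
        # Target average CPU utilization as a percentage of requested CPU.
        spec["targetCPUUtilizationPercentage"] = target_cpu_percent
    return {
        "apiVersion": "autoscaling/v1",
        "kind": "HorizontalPodAutoscaler",
        "metadata": {"name": name, "namespace": namespace},
        "spec": spec,
    }

hpa = make_hpa("my-app-hpa", "my-app", max_replicas=10, target_cpu_percent=70)
```

A body of this shape is what the `create` operation above accepts when POSTed to `/apis/autoscaling/v1/namespaces/{namespace}/horizontalpodautoscalers`.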
{"questions":"kubernetes reference weight 7 title StatefulSet StatefulSet represents a set of pods with consistent identities apiVersion apps v1 contenttype apireference apimetadata kind StatefulSet autogenerated true import k8s io api apps v1","answers":"---\napi_metadata:\n  apiVersion: \"apps\/v1\"\n  import: \"k8s.io\/api\/apps\/v1\"\n  kind: \"StatefulSet\"\ncontent_type: \"api_reference\"\ndescription: \"StatefulSet represents a set of pods with consistent identities.\"\ntitle: \"StatefulSet\"\nweight: 7\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: apps\/v1`\n\n`import \"k8s.io\/api\/apps\/v1\"`\n\n\n## StatefulSet {#StatefulSet}\n\nStatefulSet represents a set of pods with consistent identities. Identities are defined as:\n  - Network: A single stable DNS and hostname.\n  - Storage: As many VolumeClaims as requested.\n\nThe StatefulSet guarantees that a given network identity will always map to the same storage identity.\n\n<hr>\n\n- **apiVersion**: apps\/v1\n\n\n- **kind**: StatefulSet\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">StatefulSetSpec<\/a>)\n\n  Spec defines the desired identities of pods in this set.\n\n- **status** (<a href=\"\">StatefulSetStatus<\/a>)\n\n  Status is the current status of Pods in this StatefulSet. This data may be out of date by some window of time.\n\n\n\n\n\n## StatefulSetSpec {#StatefulSetSpec}\n\nA StatefulSetSpec is the specification of a StatefulSet.\n\n<hr>\n\n- **serviceName** (string), required\n\n  serviceName is the name of the service that governs this StatefulSet. This service must exist before the StatefulSet, and is responsible for the network identity of the set. Pods get DNS\/hostnames that follow the pattern: pod-specific-string.serviceName.default.svc.cluster.local where \"pod-specific-string\" is managed by the StatefulSet controller.\n\n- **selector** (<a href=\"\">LabelSelector<\/a>), required\n\n  selector is a label query over pods that should match the replica count. It must match the pod template's labels. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/#label-selectors\n\n- **template** (<a href=\"\">PodTemplateSpec<\/a>), required\n\n  template is the object that describes the pod that will be created if insufficient replicas are detected. Each pod stamped out by the StatefulSet will fulfill this Template, but have a unique identity from the rest of the StatefulSet. Each pod will be named with the format \\<statefulsetname>-\\<podindex>. For example, a pod in a StatefulSet named \"web\" with index number \"3\" would be named \"web-3\". The only allowed template.spec.restartPolicy value is \"Always\".\n\n- **replicas** (int32)\n\n  replicas is the desired number of replicas of the given Template. These are replicas in the sense that they are instantiations of the same Template, but individual replicas also have a consistent identity. 
If unspecified, defaults to 1.\n\n- **updateStrategy** (StatefulSetUpdateStrategy)\n\n  updateStrategy indicates the StatefulSetUpdateStrategy that will be employed to update Pods in the StatefulSet when a revision is made to Template.\n\n  <a name=\"StatefulSetUpdateStrategy\"><\/a>\n  *StatefulSetUpdateStrategy indicates the strategy that the StatefulSet controller will use to perform updates. It includes any additional parameters necessary to perform the update for the indicated strategy.*\n\n  - **updateStrategy.type** (string)\n\n    Type indicates the type of the StatefulSetUpdateStrategy. Default is RollingUpdate.\n\n  - **updateStrategy.rollingUpdate** (RollingUpdateStatefulSetStrategy)\n\n    RollingUpdate is used to communicate parameters when Type is RollingUpdateStatefulSetStrategyType.\n\n    <a name=\"RollingUpdateStatefulSetStrategy\"><\/a>\n    *RollingUpdateStatefulSetStrategy is used to communicate parameters for RollingUpdateStatefulSetStrategyType.*\n\n    - **updateStrategy.rollingUpdate.maxUnavailable** (IntOrString)\n\n      The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Defaults to 1. This field is alpha-level and is only honored by servers that enable the MaxUnavailableStatefulSet feature. The field applies to all pods in the range 0 to Replicas-1. That means if there is any unavailable pod in the range 0 to Replicas-1, it will be counted towards MaxUnavailable.\n\n      <a name=\"IntOrString\"><\/a>\n      *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  
This allows you to have, for example, a JSON field that can accept a name or number.*\n\n    - **updateStrategy.rollingUpdate.partition** (int32)\n\n      Partition indicates the ordinal at which the StatefulSet should be partitioned for updates. During a rolling update, all pods from ordinal Replicas-1 to Partition are updated. All pods from ordinal Partition-1 to 0 remain untouched. This is helpful in being able to do a canary based deployment. The default value is 0.\n\n- **podManagementPolicy** (string)\n\n  podManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is `OrderedReady`, where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is `Parallel` which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once.\n\n- **revisionHistoryLimit** (int32)\n\n  revisionHistoryLimit is the maximum number of revisions that will be maintained in the StatefulSet's revision history. The revision history consists of all revisions not represented by a currently applied StatefulSetSpec version. The default value is 10.\n\n- **volumeClaimTemplates** ([]<a href=\"\">PersistentVolumeClaim<\/a>)\n\n  *Atomic: will be replaced during a merge*\n  \n  volumeClaimTemplates is a list of claims that pods are allowed to reference. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. 
A claim in this list takes precedence over any volume in the template with the same name.\n\n- **minReadySeconds** (int32)\n\n  Minimum number of seconds for which a newly created pod should be ready without any of its containers crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready).\n\n- **persistentVolumeClaimRetentionPolicy** (StatefulSetPersistentVolumeClaimRetentionPolicy)\n\n  persistentVolumeClaimRetentionPolicy describes the lifecycle of persistent volume claims created from volumeClaimTemplates. By default, all persistent volume claims are created as needed and retained until manually deleted. This policy allows the lifecycle to be altered, for example by deleting persistent volume claims when their stateful set is deleted, or when their pod is scaled down. This requires the StatefulSetAutoDeletePVC feature gate to be enabled, which is beta.\n\n  <a name=\"StatefulSetPersistentVolumeClaimRetentionPolicy\"><\/a>\n  *StatefulSetPersistentVolumeClaimRetentionPolicy describes the policy used for PVCs created from the StatefulSet VolumeClaimTemplates.*\n\n  - **persistentVolumeClaimRetentionPolicy.whenDeleted** (string)\n\n    WhenDeleted specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is deleted. The default policy of `Retain` causes PVCs to not be affected by StatefulSet deletion. The `Delete` policy causes those PVCs to be deleted.\n\n  - **persistentVolumeClaimRetentionPolicy.whenScaled** (string)\n\n    WhenScaled specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is scaled down. The default policy of `Retain` causes PVCs to not be affected by a scaledown. The `Delete` policy causes the associated PVCs for any excess pods above the replica count to be deleted.\n\n- **ordinals** (StatefulSetOrdinals)\n\n  ordinals controls the numbering of replica indices in a StatefulSet. 
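Ordinal assignment is a simple arithmetic rule: replica indices run over `[.spec.ordinals.start, .spec.ordinals.start + .spec.replicas)`, and each pod is named `<statefulset-name>-<ordinal>`. A small illustrative sketch (plain Python; the helpers are assumptions for illustration, not part of any client library):

```python
def replica_indices(replicas: int, start: int = 0) -> list[int]:
    """Replica indices fall in the half-open range [start, start + replicas)."""
    return list(range(start, start + replicas))

def pod_names(sts_name: str, replicas: int, start: int = 0) -> list[str]:
    """Each pod is named <statefulset-name>-<ordinal>."""
    return [f"{sts_name}-{i}" for i in replica_indices(replicas, start)]
```

With `start: 1`, a 3-replica StatefulSet named `web` runs `web-1`, `web-2`, `web-3`, which is one way to stage progressive movement of replicas between two StatefulSets.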
The default ordinals behavior assigns a \"0\" index to the first replica and increments the index by one for each additional replica requested.\n\n  <a name=\"StatefulSetOrdinals\"><\/a>\n  *StatefulSetOrdinals describes the policy used for replica ordinal assignment in this StatefulSet.*\n\n  - **ordinals.start** (int32)\n\n    start is the number representing the first replica's index. It may be used to number replicas from an alternate index (e.g., 1-indexed) over the default 0-indexed names, or to orchestrate progressive movement of replicas from one StatefulSet to another. If set, replica indices will be in the range:\n      [.spec.ordinals.start, .spec.ordinals.start + .spec.replicas).\n    If unset, defaults to 0. Replica indices will be in the range:\n      [0, .spec.replicas).\n\n\n\n\n\n## StatefulSetStatus {#StatefulSetStatus}\n\nStatefulSetStatus represents the current state of a StatefulSet.\n\n<hr>\n\n- **replicas** (int32), required\n\n  replicas is the number of Pods created by the StatefulSet controller.\n\n- **readyReplicas** (int32)\n\n  readyReplicas is the number of pods created for this StatefulSet with a Ready Condition.\n\n- **currentReplicas** (int32)\n\n  currentReplicas is the number of Pods created by the StatefulSet controller from the StatefulSet version indicated by currentRevision.\n\n- **updatedReplicas** (int32)\n\n  updatedReplicas is the number of Pods created by the StatefulSet controller from the StatefulSet version indicated by updateRevision.\n\n- **availableReplicas** (int32)\n\n  Total number of available pods (ready for at least minReadySeconds) targeted by this statefulset.\n\n- **collisionCount** (int32)\n\n  collisionCount is the count of hash collisions for the StatefulSet. 
The StatefulSet controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ControllerRevision.\n\n- **conditions** ([]StatefulSetCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  Represents the latest available observations of a statefulset's current state.\n\n  <a name=\"StatefulSetCondition\"><\/a>\n  *StatefulSetCondition describes the state of a statefulset at a certain point.*\n\n  - **conditions.status** (string), required\n\n    Status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    Type of statefulset condition.\n\n  - **conditions.lastTransitionTime** (Time)\n\n    Last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    A human-readable message indicating details about the transition.\n\n  - **conditions.reason** (string)\n\n    The reason for the condition's last transition.\n\n- **currentRevision** (string)\n\n  currentRevision, if not empty, indicates the version of the StatefulSet used to generate Pods in the sequence [0,currentReplicas).\n\n- **updateRevision** (string)\n\n  updateRevision, if not empty, indicates the version of the StatefulSet used to generate Pods in the sequence [replicas-updatedReplicas,replicas).\n\n- **observedGeneration** (int64)\n\n  observedGeneration is the most recent generation observed for this StatefulSet. 
It corresponds to the StatefulSet's generation, which is updated on mutation by the API Server.\n\n\n\n\n\n## StatefulSetList {#StatefulSetList}\n\nStatefulSetList is a collection of StatefulSets.\n\n<hr>\n\n- **apiVersion**: apps\/v1\n\n\n- **kind**: StatefulSetList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">StatefulSet<\/a>), required\n\n  Items is the list of stateful sets.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified StatefulSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/statefulsets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StatefulSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StatefulSet<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified StatefulSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/statefulsets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StatefulSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StatefulSet<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind StatefulSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/statefulsets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a 
href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StatefulSetList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind StatefulSet\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/statefulsets\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StatefulSetList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` 
create a StatefulSet\n\n#### HTTP Request\n\nPOST \/apis\/apps\/v1\/namespaces\/{namespace}\/statefulsets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">StatefulSet<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StatefulSet<\/a>): OK\n\n201 (<a href=\"\">StatefulSet<\/a>): Created\n\n202 (<a href=\"\">StatefulSet<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified StatefulSet\n\n#### HTTP Request\n\nPUT \/apis\/apps\/v1\/namespaces\/{namespace}\/statefulsets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StatefulSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">StatefulSet<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StatefulSet<\/a>): OK\n\n201 (<a href=\"\">StatefulSet<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified StatefulSet\n\n#### HTTP Request\n\nPUT \/apis\/apps\/v1\/namespaces\/{namespace}\/statefulsets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StatefulSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">StatefulSet<\/a>, required\n\n  \n\n\n- 
**dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StatefulSet<\/a>): OK\n\n201 (<a href=\"\">StatefulSet<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified StatefulSet\n\n#### HTTP Request\n\nPATCH \/apis\/apps\/v1\/namespaces\/{namespace}\/statefulsets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StatefulSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StatefulSet<\/a>): OK\n\n201 (<a href=\"\">StatefulSet<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified StatefulSet\n\n#### HTTP Request\n\nPATCH \/apis\/apps\/v1\/namespaces\/{namespace}\/statefulsets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StatefulSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in 
query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StatefulSet<\/a>): OK\n\n201 (<a href=\"\">StatefulSet<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a StatefulSet\n\n#### HTTP Request\n\nDELETE \/apis\/apps\/v1\/namespaces\/{namespace}\/statefulsets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StatefulSet\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of StatefulSet\n\n#### HTTP Request\n\nDELETE \/apis\/apps\/v1\/namespaces\/{namespace}\/statefulsets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a 
href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n
   Created  401  Unauthorized        delete  delete a StatefulSet       HTTP Request  DELETE  apis apps v1 namespaces  namespace  statefulsets  name        Parameters       name     in path    string  required    name of the StatefulSet       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of StatefulSet       HTTP Request  DELETE  apis apps v1 namespaces  namespace  statefulsets       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  
Unauthorized "}
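The StatefulSet endpoints in the record above all follow the same URI pattern under `apis/apps/v1`. A minimal sketch of how those paths are composed and queried — the namespace, object name, `$APISERVER`, and `$TOKEN` below are illustrative placeholders, not values from the reference:

```shell
# Assumed placeholders (not from the reference): a namespace and a
# StatefulSet name; a reachable API server and bearer token for the
# commented curl call at the end.
NAMESPACE=default
NAME=web

# Endpoint paths, following the pattern shown in the reference:
# list/create under the namespaced collection, get/update/patch/delete
# under {name}, and the status subresource under {name}/status.
LIST_PATH="/apis/apps/v1/namespaces/${NAMESPACE}/statefulsets"
GET_PATH="${LIST_PATH}/${NAME}"
STATUS_PATH="${GET_PATH}/status"

echo "$LIST_PATH"
echo "$GET_PATH"
echo "$STATUS_PATH"

# Against a real cluster, a list request with query parameters from the
# reference (limit, labelSelector) would look like:
#   curl -sS -H "Authorization: Bearer $TOKEN" \
#     "${APISERVER}${LIST_PATH}?limit=10&labelSelector=app%3Dweb"
```

The same path-building scheme applies to cluster-scoped listings (`/apis/apps/v1/statefulsets`) by dropping the `namespaces/{namespace}` segment.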
{"questions":"kubernetes reference title ResourceSlice v1alpha3 contenttype apireference apiVersion resource k8s io v1alpha3 weight 18 import k8s io api resource v1alpha3 apimetadata ResourceSlice represents one or more resources in a pool of similar resources managed by a common driver autogenerated true kind ResourceSlice","answers":"---\napi_metadata:\n  apiVersion: \"resource.k8s.io\/v1alpha3\"\n  import: \"k8s.io\/api\/resource\/v1alpha3\"\n  kind: \"ResourceSlice\"\ncontent_type: \"api_reference\"\ndescription: \"ResourceSlice represents one or more resources in a pool of similar resources, managed by a common driver.\"\ntitle: \"ResourceSlice v1alpha3\"\nweight: 18\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: resource.k8s.io\/v1alpha3`\n\n`import \"k8s.io\/api\/resource\/v1alpha3\"`\n\n\n## ResourceSlice {#ResourceSlice}\n\nResourceSlice represents one or more resources in a pool of similar resources, managed by a common driver. A pool may span more than one ResourceSlice, and exactly how many ResourceSlices comprise a pool is determined by the driver.\n\nAt the moment, the only supported resources are devices with attributes and capacities. Each device in a given pool, regardless of how many ResourceSlices, must have a unique name. The ResourceSlice in which a device gets published may change over time. 
The unique identifier for a device is the tuple \\<driver name>, \\<pool name>, \\<device name>.\n\nWhenever a driver needs to update a pool, it increments the pool.Spec.Pool.Generation number and updates all ResourceSlices with that new number and new resource definitions. A consumer must only use ResourceSlices with the highest generation number and ignore all others.\n\nWhen allocating all resources in a pool matching certain criteria or when looking for the best solution among several different alternatives, a consumer should check the number of ResourceSlices in a pool (included in each ResourceSlice) to determine whether its view of a pool is complete and if not, should wait until the driver has completed updating the pool.\n\nFor resources that are not local to a node, the node name is not set. Instead, the driver may use a node selector to specify where the devices are available.\n\nThis is an alpha type and requires enabling the DynamicResourceAllocation feature gate.\n\n<hr>\n\n- **apiVersion**: resource.k8s.io\/v1alpha3\n\n\n- **kind**: ResourceSlice\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata\n\n- **spec** (<a href=\"\">ResourceSliceSpec<\/a>), required\n\n  Contains the information published by the driver.\n  \n  Changing the spec automatically increments the metadata.generation number.\n\n\n\n\n\n## ResourceSliceSpec {#ResourceSliceSpec}\n\nResourceSliceSpec contains the information published by the driver in one ResourceSlice.\n\n<hr>\n\n- **driver** (string), required\n\n  Driver identifies the DRA driver providing the capacity information. A field selector can be used to list only ResourceSlice objects with a certain driver name.\n  \n  Must be a DNS subdomain and should end with a DNS domain owned by the vendor of the driver. 
This field is immutable.\n\n- **pool** (ResourcePool), required\n\n  Pool describes the pool that this ResourceSlice belongs to.\n\n  <a name=\"ResourcePool\"><\/a>\n  *ResourcePool describes the pool that ResourceSlices belong to.*\n\n  - **pool.generation** (int64), required\n\n    Generation tracks the change in a pool over time. Whenever a driver changes something about one or more of the resources in a pool, it must change the generation in all ResourceSlices which are part of that pool. Consumers of ResourceSlices should only consider resources from the pool with the highest generation number. The generation may be reset by drivers, which should be fine for consumers, assuming that all ResourceSlices in a pool are updated to match or deleted.\n    \n    Combined with ResourceSliceCount, this mechanism enables consumers to detect pools which are comprised of multiple ResourceSlices and are in an incomplete state.\n\n  - **pool.name** (string), required\n\n    Name is used to identify the pool. For node-local devices, this is often the node name, but this is not required.\n    \n    It must not be longer than 253 characters and must consist of one or more DNS sub-domains separated by slashes. This field is immutable.\n\n  - **pool.resourceSliceCount** (int64), required\n\n    ResourceSliceCount is the total number of ResourceSlices in the pool at this generation number. 
Must be greater than zero.\n    \n    Consumers can use this to check whether they have seen all ResourceSlices belonging to the same pool.\n\n- **allNodes** (boolean)\n\n  AllNodes indicates that all nodes have access to the resources in the pool.\n  \n  Exactly one of NodeName, NodeSelector and AllNodes must be set.\n\n- **devices** ([]Device)\n\n  *Atomic: will be replaced during a merge*\n  \n  Devices lists some or all of the devices in this pool.\n  \n  Must not have more than 128 entries.\n\n  <a name=\"Device\"><\/a>\n  *Device represents one individual hardware instance that can be selected based on its attributes. Besides the name, exactly one field must be set.*\n\n  - **devices.name** (string), required\n\n    Name is unique identifier among all devices managed by the driver in the pool. It must be a DNS label.\n\n  - **devices.basic** (BasicDevice)\n\n    Basic defines one device instance.\n\n    <a name=\"BasicDevice\"><\/a>\n    *BasicDevice defines one device instance.*\n\n    - **devices.basic.attributes** (map[string]DeviceAttribute)\n\n      Attributes defines the set of attributes for this device. The name of each attribute must be unique in that set.\n      \n      The maximum number of attributes and capacities combined is 32.\n\n      <a name=\"DeviceAttribute\"><\/a>\n      *DeviceAttribute must have exactly one field set.*\n\n      - **devices.basic.attributes.bool** (boolean)\n\n        BoolValue is a true\/false value.\n\n      - **devices.basic.attributes.int** (int64)\n\n        IntValue is a number.\n\n      - **devices.basic.attributes.string** (string)\n\n        StringValue is a string. Must not be longer than 64 characters.\n\n      - **devices.basic.attributes.version** (string)\n\n        VersionValue is a semantic version according to semver.org spec 2.0.0. 
Must not be longer than 64 characters.\n\n    - **devices.basic.capacity** (map[string]<a href=\"\">Quantity<\/a>)\n\n      Capacity defines the set of capacities for this device. The name of each capacity must be unique in that set.\n      \n      The maximum number of attributes and capacities combined is 32.\n\n- **nodeName** (string)\n\n  NodeName identifies the node which provides the resources in this pool. A field selector can be used to list only ResourceSlice objects belonging to a certain node.\n  \n  This field can be used to limit access from nodes to ResourceSlices with the same node name. It also indicates to autoscalers that adding new nodes of the same type as some old node might also make new resources available.\n  \n  Exactly one of NodeName, NodeSelector and AllNodes must be set. This field is immutable.\n\n- **nodeSelector** (NodeSelector)\n\n  NodeSelector defines which nodes have access to the resources in the pool, when that pool is not limited to a single node.\n  \n  Must use exactly one term.\n  \n  Exactly one of NodeName, NodeSelector and AllNodes must be set.\n\n  <a name=\"NodeSelector\"><\/a>\n  *A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.*\n\n  - **nodeSelector.nodeSelectorTerms** ([]NodeSelectorTerm), required\n\n    *Atomic: will be replaced during a merge*\n    \n    Required. A list of node selector terms. The terms are ORed.\n\n    <a name=\"NodeSelectorTerm\"><\/a>\n    *A null or empty node selector term matches no objects. The requirements of them are ANDed. 
The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.*\n\n    - **nodeSelector.nodeSelectorTerms.matchExpressions** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n      *Atomic: will be replaced during a merge*\n      \n      A list of node selector requirements by node's labels.\n\n    - **nodeSelector.nodeSelectorTerms.matchFields** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n      *Atomic: will be replaced during a merge*\n      \n      A list of node selector requirements by node's fields.\n\n\n\n\n\n## ResourceSliceList {#ResourceSliceList}\n\nResourceSliceList is a collection of ResourceSlices.\n\n<hr>\n\n- **apiVersion**: resource.k8s.io\/v1alpha3\n\n\n- **kind**: ResourceSliceList\n\n\n- **items** ([]<a href=\"\">ResourceSlice<\/a>), required\n\n  Items is the list of resource ResourceSlices.\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ResourceSlice\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/resourceslices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceSlice\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceSlice<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ResourceSlice\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/resourceslices\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- 
**resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceSliceList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ResourceSlice\n\n#### HTTP Request\n\nPOST \/apis\/resource.k8s.io\/v1alpha3\/resourceslices\n\n#### Parameters\n\n\n- **body**: <a href=\"\">ResourceSlice<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceSlice<\/a>): OK\n\n201 (<a href=\"\">ResourceSlice<\/a>): Created\n\n202 (<a href=\"\">ResourceSlice<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ResourceSlice\n\n#### HTTP Request\n\nPUT \/apis\/resource.k8s.io\/v1alpha3\/resourceslices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceSlice\n\n\n- **body**: <a href=\"\">ResourceSlice<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceSlice<\/a>): OK\n\n201 (<a href=\"\">ResourceSlice<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the 
specified ResourceSlice\n\n#### HTTP Request\n\nPATCH \/apis\/resource.k8s.io\/v1alpha3\/resourceslices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceSlice\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceSlice<\/a>): OK\n\n201 (<a href=\"\">ResourceSlice<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ResourceSlice\n\n#### HTTP Request\n\nDELETE \/apis\/resource.k8s.io\/v1alpha3\/resourceslices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceSlice\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceSlice<\/a>): OK\n\n202 (<a href=\"\">ResourceSlice<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ResourceSlice\n\n#### HTTP Request\n\nDELETE \/apis\/resource.k8s.io\/v1alpha3\/resourceslices\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in 
query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   resource k8s io v1alpha3    import   k8s io api resource v1alpha3    kind   ResourceSlice  content type   api reference  description   ResourceSlice represents one or more resources in a pool of similar resources  managed by a common driver   title   ResourceSlice v1alpha3  weight  18 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  resource k8s io v1alpha3    import  k8s io api resource v1alpha3        ResourceSlice   ResourceSlice   ResourceSlice represents one or more resources in a pool of similar resources  managed by a 
common driver  A pool may span more than one ResourceSlice  and exactly how many ResourceSlices comprise a pool is determined by the driver   At the moment  the only supported resources are devices with attributes and capacities  Each device in a given pool  regardless of how many ResourceSlices  must have a unique name  The ResourceSlice in which a device gets published may change over time  The unique identifier for a device is the tuple   driver name     pool name     device name    Whenever a driver needs to update a pool  it increments the pool Spec Pool Generation number and updates all ResourceSlices with that new number and new resource definitions  A consumer must only use ResourceSlices with the highest generation number and ignore all others   When allocating all resources in a pool matching certain criteria or when looking for the best solution among several different alternatives  a consumer should check the number of ResourceSlices in a pool  included in each ResourceSlice  to determine whether its view of a pool is complete and if not  should wait until the driver has completed updating the pool   For resources that are not local to a node  the node name is not set  Instead  the driver may use a node selector to specify where the devices are available   This is an alpha type and requires enabling the DynamicResourceAllocation feature gate    hr       apiVersion    resource k8s io v1alpha3       kind    ResourceSlice       metadata     a href    ObjectMeta  a      Standard object metadata      spec     a href    ResourceSliceSpec  a    required    Contains the information published by the driver       Changing the spec automatically increments the metadata generation number          ResourceSliceSpec   ResourceSliceSpec   ResourceSliceSpec contains the information published by the driver in one ResourceSlice    hr       driver    string   required    Driver identifies the DRA driver providing the capacity information  A field selector can be used to 
list only ResourceSlice objects with a certain driver name       Must be a DNS subdomain and should end with a DNS domain owned by the vendor of the driver  This field is immutable       pool    ResourcePool   required    Pool describes the pool that this ResourceSlice belongs to      a name  ResourcePool    a     ResourcePool describes the pool that ResourceSlices belong to          pool generation    int64   required      Generation tracks the change in a pool over time  Whenever a driver changes something about one or more of the resources in a pool  it must change the generation in all ResourceSlices which are part of that pool  Consumers of ResourceSlices should only consider resources from the pool with the highest generation number  The generation may be reset by drivers  which should be fine for consumers  assuming that all ResourceSlices in a pool are updated to match or deleted           Combined with ResourceSliceCount  this mechanism enables consumers to detect pools which are comprised of multiple ResourceSlices and are in an incomplete state         pool name    string   required      Name is used to identify the pool  For node local devices  this is often the node name  but this is not required           It must not be longer than 253 characters and must consist of one or more DNS sub domains separated by slashes  This field is immutable         pool resourceSliceCount    int64   required      ResourceSliceCount is the total number of ResourceSlices in the pool at this generation number  Must be greater than zero           Consumers can use this to check whether they have seen all ResourceSlices belonging to the same pool       allNodes    boolean     AllNodes indicates that all nodes have access to the resources in the pool       Exactly one of NodeName  NodeSelector and AllNodes must be set       devices      Device      Atomic  will be replaced during a merge       Devices lists some or all of the devices in this pool       Must not have more than 
128 entries      a name  Device    a     Device represents one individual hardware instance that can be selected based on its attributes  Besides the name  exactly one field must be set          devices name    string   required      Name is unique identifier among all devices managed by the driver in the pool  It must be a DNS label         devices basic    BasicDevice       Basic defines one device instance        a name  BasicDevice    a       BasicDevice defines one device instance            devices basic attributes    map string DeviceAttribute         Attributes defines the set of attributes for this device  The name of each attribute must be unique in that set               The maximum number of attributes and capacities combined is 32          a name  DeviceAttribute    a         DeviceAttribute must have exactly one field set              devices basic attributes bool    boolean           BoolValue is a true false value             devices basic attributes int    int64           IntValue is a number             devices basic attributes string    string           StringValue is a string  Must not be longer than 64 characters             devices basic attributes version    string           VersionValue is a semantic version according to semver org spec 2 0 0  Must not be longer than 64 characters           devices basic capacity    map string  a href    Quantity  a          Capacity defines the set of capacities for this device  The name of each capacity must be unique in that set               The maximum number of attributes and capacities combined is 32       nodeName    string     NodeName identifies the node which provides the resources in this pool  A field selector can be used to list only ResourceSlice objects belonging to a certain node       This field can be used to limit access from nodes to ResourceSlices with the same node name  It also indicates to autoscalers that adding new nodes of the same type as some old node might also make new 
resources available       Exactly one of NodeName  NodeSelector and AllNodes must be set  This field is immutable       nodeSelector    NodeSelector     NodeSelector defines which nodes have access to the resources in the pool  when that pool is not limited to a single node       Must use exactly one term       Exactly one of NodeName  NodeSelector and AllNodes must be set      a name  NodeSelector    a     A node selector represents the union of the results of one or more label queries over a set of nodes  that is  it represents the OR of the selectors represented by the node selector terms          nodeSelector nodeSelectorTerms      NodeSelectorTerm   required       Atomic  will be replaced during a merge           Required  A list of node selector terms  The terms are ORed        a name  NodeSelectorTerm    a       A null or empty node selector term matches no objects  The requirements of them are ANDed  The TopologySelectorTerm type implements a subset of the NodeSelectorTerm            nodeSelector nodeSelectorTerms matchExpressions       a href    NodeSelectorRequirement  a           Atomic  will be replaced during a merge               A list of node selector requirements by node s labels           nodeSelector nodeSelectorTerms matchFields       a href    NodeSelectorRequirement  a           Atomic  will be replaced during a merge               A list of node selector requirements by node s fields          ResourceSliceList   ResourceSliceList   ResourceSliceList is a collection of ResourceSlices    hr       apiVersion    resource k8s io v1alpha3       kind    ResourceSliceList       items       a href    ResourceSlice  a    required    Items is the list of resource ResourceSlices       metadata     a href    ListMeta  a      Standard list metadata         Operations   Operations      hr             get  read the specified ResourceSlice       HTTP Request  GET  apis resource k8s io v1alpha3 resourceslices  name        Parameters       name     in path    
string  required    name of the ResourceSlice       pretty     in query    string     a href    pretty  a          Response   200   a href    ResourceSlice  a    OK  401  Unauthorized        list  list or watch objects of kind ResourceSlice       HTTP Request  GET  apis resource k8s io v1alpha3 resourceslices       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ResourceSliceList  a    OK  401  Unauthorized        create  create a ResourceSlice       HTTP Request  POST  apis resource k8s io v1alpha3 resourceslices       Parameters       body     a href    ResourceSlice  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ResourceSlice  a    OK  201   a href    ResourceSlice  a    Created  202   a href    ResourceSlice  a    Accepted  401  Unauthorized        update  replace the specified ResourceSlice       HTTP Request  PUT  apis resource k8s io v1alpha3 resourceslices  name        Parameters       
- **name** (*in path*): string, required\n\n  name of the ResourceSlice\n\n\n- **body**: <a href=\"\">ResourceSlice<\/a>, required\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceSlice<\/a>): OK\n\n201 (<a href=\"\">ResourceSlice<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ResourceSlice\n\n#### HTTP Request\n\nPATCH \/apis\/resource.k8s.io\/v1alpha3\/resourceslices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceSlice\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceSlice<\/a>): OK\n\n201 (<a href=\"\">ResourceSlice<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ResourceSlice\n\n#### HTTP Request\n\nDELETE \/apis\/resource.k8s.io\/v1alpha3\/resourceslices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceSlice\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceSlice<\/a>): OK\n\n202 (<a href=\"\">ResourceSlice<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ResourceSlice\n\n#### HTTP Request\n\nDELETE \/apis\/resource.k8s.io\/v1alpha3\/resourceslices\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n"}
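The query parameters listed for the `deletecollection` operation compose into an ordinary URL query string appended to the request path. A minimal sketch of that composition (the selector and parameter values below are hypothetical examples, not taken from the reference):

```python
from urllib.parse import urlencode

# Base path of the deletecollection endpoint (resource.k8s.io/v1alpha3).
base = "/apis/resource.k8s.io/v1alpha3/resourceslices"

# A few of the documented query parameters; the values are made up.
params = {
    "labelSelector": "driver=example.com/gpu",  # filter by label
    "limit": 50,                                # page size
    "dryRun": "All",                            # validate without persisting
}

# urlencode percent-encodes reserved characters like "=" and "/".
url = base + "?" + urlencode(params)
print(url)
```

The same composition applies to the list and watch endpoints, which accept the same selector and pagination parameters.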
{"questions":"kubernetes reference kind HorizontalPodAutoscaler title HorizontalPodAutoscaler weight 13 contenttype apireference apiVersion autoscaling v2 apimetadata import k8s io api autoscaling v2 HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"autoscaling\/v2\"\n  import: \"k8s.io\/api\/autoscaling\/v2\"\n  kind: \"HorizontalPodAutoscaler\"\ncontent_type: \"api_reference\"\ndescription: \"HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified.\"\ntitle: \"HorizontalPodAutoscaler\"\nweight: 13\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. 
You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: autoscaling\/v2`\n\n`import \"k8s.io\/api\/autoscaling\/v2\"`\n\n\n## HorizontalPodAutoscaler {#HorizontalPodAutoscaler}\n\nHorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified.\n\n<hr>\n\n- **apiVersion**: autoscaling\/v2\n\n\n- **kind**: HorizontalPodAutoscaler\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  metadata is the standard object metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">HorizontalPodAutoscalerSpec<\/a>)\n\n  spec is the specification for the behaviour of the autoscaler. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status.\n\n- **status** (<a href=\"\">HorizontalPodAutoscalerStatus<\/a>)\n\n  status is the current information about the autoscaler.\n\n\n\n\n\n## HorizontalPodAutoscalerSpec {#HorizontalPodAutoscalerSpec}\n\nHorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler.\n\n<hr>\n\n- **maxReplicas** (int32), required\n\n  maxReplicas is the upper limit for the number of replicas to which the autoscaler can scale up. 
It cannot be less than minReplicas.\n\n- **scaleTargetRef** (CrossVersionObjectReference), required\n\n  scaleTargetRef points to the target resource to scale, and is used to identify the pods for which metrics should be collected, as well as to actually change the replica count.\n\n  <a name=\"CrossVersionObjectReference\"><\/a>\n  *CrossVersionObjectReference contains enough information to let you identify the referred resource.*\n\n  - **scaleTargetRef.kind** (string), required\n\n    kind is the kind of the referent; More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n  - **scaleTargetRef.name** (string), required\n\n    name is the name of the referent; More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n  - **scaleTargetRef.apiVersion** (string)\n\n    apiVersion is the API version of the referent\n\n- **minReplicas** (int32)\n\n  minReplicas is the lower limit for the number of replicas to which the autoscaler can scale down.  It defaults to 1 pod.  minReplicas is allowed to be 0 if the alpha feature gate HPAScaleToZero is enabled and at least one Object or External metric is configured.  Scaling is active as long as at least one metric value is available.\n\n- **behavior** (HorizontalPodAutoscalerBehavior)\n\n  behavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). If not set, the default HPAScalingRules for scale up and scale down are used.\n\n  <a name=\"HorizontalPodAutoscalerBehavior\"><\/a>\n  *HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively).*\n\n  - **behavior.scaleDown** (HPAScalingRules)\n\n    scaleDown is scaling policy for scaling Down. 
If not set, the default value is to allow scaling down to minReplicas pods, with a 300 second stabilization window (i.e., the highest recommendation for the last 300sec is used).\n\n    <a name=\"HPAScalingRules\"><\/a>\n    *HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly; instead, the safest value from the stabilization window is chosen.*\n\n    - **behavior.scaleDown.policies** ([]HPAScalingPolicy)\n\n      *Atomic: will be replaced during a merge*\n      \n      policies is a list of potential scaling policies which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid.\n\n      <a name=\"HPAScalingPolicy\"><\/a>\n      *HPAScalingPolicy is a single policy which must hold true for a specified past interval.*\n\n      - **behavior.scaleDown.policies.type** (string), required\n\n        type is used to specify the scaling policy.\n\n      - **behavior.scaleDown.policies.value** (int32), required\n\n        value contains the amount of change which is permitted by the policy. It must be greater than zero.\n\n      - **behavior.scaleDown.policies.periodSeconds** (int32), required\n\n        periodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min).\n\n    - **behavior.scaleDown.selectPolicy** (string)\n\n      selectPolicy is used to specify which policy should be used. 
If not set, the default value Max is used.\n\n    - **behavior.scaleDown.stabilizationWindowSeconds** (int32)\n\n      stabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long).\n\n  - **behavior.scaleUp** (HPAScalingRules)\n\n    scaleUp is scaling policy for scaling Up. If not set, the default value is the higher of:\n      * increase no more than 4 pods per 60 seconds\n      * double the number of pods per 60 seconds\n    No stabilization is used.\n\n    <a name=\"HPAScalingRules\"><\/a>\n    *HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly; instead, the safest value from the stabilization window is chosen.*\n\n    - **behavior.scaleUp.policies** ([]HPAScalingPolicy)\n\n      *Atomic: will be replaced during a merge*\n      \n      policies is a list of potential scaling policies which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid.\n\n      <a name=\"HPAScalingPolicy\"><\/a>\n      *HPAScalingPolicy is a single policy which must hold true for a specified past interval.*\n\n      - **behavior.scaleUp.policies.type** (string), required\n\n        type is used to specify the scaling policy.\n\n      - **behavior.scaleUp.policies.value** (int32), required\n\n        value contains the amount of change which is permitted by the policy. 
It must be greater than zero.\n\n      - **behavior.scaleUp.policies.periodSeconds** (int32), required\n\n        periodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min).\n\n    - **behavior.scaleUp.selectPolicy** (string)\n\n      selectPolicy is used to specify which policy should be used. If not set, the default value Max is used.\n\n    - **behavior.scaleUp.stabilizationWindowSeconds** (int32)\n\n      stabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long).\n\n- **metrics** ([]MetricSpec)\n\n  *Atomic: will be replaced during a merge*\n  \n  metrics contains the specifications used to calculate the desired replica count (the maximum replica count across all metrics will be used).  The desired replica count is calculated by multiplying the ratio between the target value and the current value by the current number of pods.  Ergo, metrics used must decrease as the pod count is increased, and vice-versa.  See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization.\n\n  <a name=\"MetricSpec\"><\/a>\n  *MetricSpec specifies how to scale based on a single metric (only `type` and one other matching field should be set at once).*\n\n  - **metrics.type** (string), required\n\n    type is the type of metric source.  It should be one of \"ContainerResource\", \"External\", \"Object\", \"Pods\" or \"Resource\", each mapping to a matching field in the object. 
Note: \"ContainerResource\" type is available only when the feature-gate HPAContainerMetrics is enabled.\n\n  - **metrics.containerResource** (ContainerResourceMetricSource)\n\n    containerResource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing a single container in each pod of the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the \"pods\" source. This is an alpha feature and can be enabled by the HPAContainerMetrics feature flag.\n\n    <a name=\"ContainerResourceMetricSource\"><\/a>\n    *ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory).  The values will be averaged together before being compared to the target.  Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the \"pods\" source.  
Only one \"target\" type should be set.*\n\n    - **metrics.containerResource.container** (string), required\n\n      container is the name of the container in the pods of the scaling target\n\n    - **metrics.containerResource.name** (string), required\n\n      name is the name of the resource in question.\n\n    - **metrics.containerResource.target** (MetricTarget), required\n\n      target specifies the target value for the given metric\n\n      <a name=\"MetricTarget\"><\/a>\n      *MetricTarget defines the target value, average value, or average utilization of a specific metric*\n\n      - **metrics.containerResource.target.type** (string), required\n\n        type represents whether the metric type is Utilization, Value, or AverageValue\n\n      - **metrics.containerResource.target.averageUtilization** (int32)\n\n        averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type\n\n      - **metrics.containerResource.target.averageValue** (<a href=\"\">Quantity<\/a>)\n\n        averageValue is the target value of the average of the metric across all relevant pods (as a quantity)\n\n      - **metrics.containerResource.target.value** (<a href=\"\">Quantity<\/a>)\n\n        value is the target value of the metric (as a quantity).\n\n  - **metrics.external** (ExternalMetricSource)\n\n    external refers to a global metric that is not associated with any Kubernetes object. 
It allows autoscaling based on information coming from components running outside of cluster (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster).\n\n    <a name=\"ExternalMetricSource\"><\/a>\n    *ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster).*\n\n    - **metrics.external.metric** (MetricIdentifier), required\n\n      metric identifies the target metric by name and selector\n\n      <a name=\"MetricIdentifier\"><\/a>\n      *MetricIdentifier defines the name and optionally selector for a metric*\n\n      - **metrics.external.metric.name** (string), required\n\n        name is the name of the given metric\n\n      - **metrics.external.metric.selector** (<a href=\"\">LabelSelector<\/a>)\n\n        selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.\n\n    - **metrics.external.target** (MetricTarget), required\n\n      target specifies the target value for the given metric\n\n      <a name=\"MetricTarget\"><\/a>\n      *MetricTarget defines the target value, average value, or average utilization of a specific metric*\n\n      - **metrics.external.target.type** (string), required\n\n        type represents whether the metric type is Utilization, Value, or AverageValue\n\n      - **metrics.external.target.averageUtilization** (int32)\n\n        averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. 
Currently only valid for Resource metric source type.\n\n      - **metrics.external.target.averageValue** (<a href=\"\">Quantity<\/a>)\n\n        averageValue is the target value of the average of the metric across all relevant pods (as a quantity)\n\n      - **metrics.external.target.value** (<a href=\"\">Quantity<\/a>)\n\n        value is the target value of the metric (as a quantity).\n\n  - **metrics.object** (ObjectMetricSource)\n\n    object refers to a metric describing a single kubernetes object (for example, hits-per-second on an Ingress object).\n\n    <a name=\"ObjectMetricSource\"><\/a>\n    *ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object).*\n\n    - **metrics.object.describedObject** (CrossVersionObjectReference), required\n\n      describedObject specifies the description of the object, such as kind, name, and apiVersion\n\n      <a name=\"CrossVersionObjectReference\"><\/a>\n      *CrossVersionObjectReference contains enough information to let you identify the referred resource.*\n\n      - **metrics.object.describedObject.kind** (string), required\n\n        kind is the kind of the referent; More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n      - **metrics.object.describedObject.name** (string), required\n\n        name is the name of the referent; More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n      - **metrics.object.describedObject.apiVersion** (string)\n\n        apiVersion is the API version of the referent\n\n    - **metrics.object.metric** (MetricIdentifier), required\n\n      metric identifies the target metric by name and selector\n\n      <a name=\"MetricIdentifier\"><\/a>\n      *MetricIdentifier defines the name and optionally selector for a metric*\n\n      - **metrics.object.metric.name** (string), required\n\n        name is the name 
of the given metric\n\n      - **metrics.object.metric.selector** (<a href=\"\">LabelSelector<\/a>)\n\n        selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.\n\n    - **metrics.object.target** (MetricTarget), required\n\n      target specifies the target value for the given metric\n\n      <a name=\"MetricTarget\"><\/a>\n      *MetricTarget defines the target value, average value, or average utilization of a specific metric*\n\n      - **metrics.object.target.type** (string), required\n\n        type represents whether the metric type is Utilization, Value, or AverageValue\n\n      - **metrics.object.target.averageUtilization** (int32)\n\n        averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type\n\n      - **metrics.object.target.averageValue** (<a href=\"\">Quantity<\/a>)\n\n        averageValue is the target value of the average of the metric across all relevant pods (as a quantity)\n\n      - **metrics.object.target.value** (<a href=\"\">Quantity<\/a>)\n\n        value is the target value of the metric (as a quantity).\n\n  - **metrics.pods** (PodsMetricSource)\n\n    pods refers to a metric describing each pod in the current scale target (for example, transactions-processed-per-second).  The values will be averaged together before being compared to the target value.\n\n    <a name=\"PodsMetricSource\"><\/a>\n    *PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). 
The values will be averaged together before being compared to the target value.*\n\n    - **metrics.pods.metric** (MetricIdentifier), required\n\n      metric identifies the target metric by name and selector\n\n      <a name=\"MetricIdentifier\"><\/a>\n      *MetricIdentifier defines the name and optionally selector for a metric*\n\n      - **metrics.pods.metric.name** (string), required\n\n        name is the name of the given metric\n\n      - **metrics.pods.metric.selector** (<a href=\"\">LabelSelector<\/a>)\n\n        selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.\n\n    - **metrics.pods.target** (MetricTarget), required\n\n      target specifies the target value for the given metric\n\n      <a name=\"MetricTarget\"><\/a>\n      *MetricTarget defines the target value, average value, or average utilization of a specific metric*\n\n      - **metrics.pods.target.type** (string), required\n\n        type represents whether the metric type is Utilization, Value, or AverageValue\n\n      - **metrics.pods.target.averageUtilization** (int32)\n\n        averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. 
Currently only valid for Resource metric source type\n\n      - **metrics.pods.target.averageValue** (<a href=\"\">Quantity<\/a>)\n\n        averageValue is the target value of the average of the metric across all relevant pods (as a quantity)\n\n      - **metrics.pods.target.value** (<a href=\"\">Quantity<\/a>)\n\n        value is the target value of the metric (as a quantity).\n\n  - **metrics.resource** (ResourceMetricSource)\n\n    resource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the \"pods\" source.\n\n    <a name=\"ResourceMetricSource\"><\/a>\n    *ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory).  The values will be averaged together before being compared to the target.  Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the \"pods\" source.  
Only one \"target\" type should be set.*\n\n    - **metrics.resource.name** (string), required\n\n      name is the name of the resource in question.\n\n    - **metrics.resource.target** (MetricTarget), required\n\n      target specifies the target value for the given metric\n\n      <a name=\"MetricTarget\"><\/a>\n      *MetricTarget defines the target value, average value, or average utilization of a specific metric*\n\n      - **metrics.resource.target.type** (string), required\n\n        type represents whether the metric type is Utilization, Value, or AverageValue\n\n      - **metrics.resource.target.averageUtilization** (int32)\n\n        averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type\n\n      - **metrics.resource.target.averageValue** (<a href=\"\">Quantity<\/a>)\n\n        averageValue is the target value of the average of the metric across all relevant pods (as a quantity)\n\n      - **metrics.resource.target.value** (<a href=\"\">Quantity<\/a>)\n\n        value is the target value of the metric (as a quantity).\n\n\n\n\n\n## HorizontalPodAutoscalerStatus {#HorizontalPodAutoscalerStatus}\n\nHorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler.\n\n<hr>\n\n- **desiredReplicas** (int32), required\n\n  desiredReplicas is the desired number of replicas of pods managed by this autoscaler, as last calculated by the autoscaler.\n\n- **conditions** ([]HorizontalPodAutoscalerCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met.\n\n  <a name=\"HorizontalPodAutoscalerCondition\"><\/a>\n  *HorizontalPodAutoscalerCondition describes 
the state of a HorizontalPodAutoscaler at a certain point.*\n\n  - **conditions.status** (string), required\n\n    status is the status of the condition (True, False, Unknown)\n\n  - **conditions.type** (string), required\n\n    type describes the current condition\n\n  - **conditions.lastTransitionTime** (Time)\n\n    lastTransitionTime is the last time the condition transitioned from one status to another\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    message is a human-readable explanation containing details about the transition\n\n  - **conditions.reason** (string)\n\n    reason is the reason for the condition's last transition.\n\n- **currentMetrics** ([]MetricStatus)\n\n  *Atomic: will be replaced during a merge*\n  \n  currentMetrics is the last read state of the metrics used by this autoscaler.\n\n  <a name=\"MetricStatus\"><\/a>\n  *MetricStatus describes the last-read state of a single metric.*\n\n  - **currentMetrics.type** (string), required\n\n    type is the type of metric source.  It will be one of \"ContainerResource\", \"External\", \"Object\", \"Pods\" or \"Resource\", each corresponding to a matching field in the object. Note: \"ContainerResource\" type is available only when the feature-gate HPAContainerMetrics is enabled.\n\n  - **currentMetrics.containerResource** (ContainerResourceMetricStatus)\n\n    container resource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing a single container in each pod in the current scale target (e.g. CPU or memory). 
Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the \"pods\" source.\n\n    <a name=\"ContainerResourceMetricStatus\"><\/a>\n    *ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory).  Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the \"pods\" source.*\n\n    - **currentMetrics.containerResource.container** (string), required\n\n      container is the name of the container in the pods of the scaling target\n\n    - **currentMetrics.containerResource.current** (MetricValueStatus), required\n\n      current contains the current value for the given metric\n\n      <a name=\"MetricValueStatus\"><\/a>\n      *MetricValueStatus holds the current value for a metric*\n\n      - **currentMetrics.containerResource.current.averageUtilization** (int32)\n\n        currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods.\n\n      - **currentMetrics.containerResource.current.averageValue** (<a href=\"\">Quantity<\/a>)\n\n        averageValue is the current value of the average of the metric across all relevant pods (as a quantity)\n\n      - **currentMetrics.containerResource.current.value** (<a href=\"\">Quantity<\/a>)\n\n        value is the current value of the metric (as a quantity).\n\n    - **currentMetrics.containerResource.name** (string), required\n\n      name is the name of the resource in question.\n\n  - **currentMetrics.external** (ExternalMetricStatus)\n\n    external refers to a global metric that is not associated with any Kubernetes object. 
It allows autoscaling based on information coming from components running outside of cluster (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster).\n\n    <a name=\"ExternalMetricStatus\"><\/a>\n    *ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object.*\n\n    - **currentMetrics.external.current** (MetricValueStatus), required\n\n      current contains the current value for the given metric\n\n      <a name=\"MetricValueStatus\"><\/a>\n      *MetricValueStatus holds the current value for a metric*\n\n      - **currentMetrics.external.current.averageUtilization** (int32)\n\n        currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods.\n\n      - **currentMetrics.external.current.averageValue** (<a href=\"\">Quantity<\/a>)\n\n        averageValue is the current value of the average of the metric across all relevant pods (as a quantity)\n\n      - **currentMetrics.external.current.value** (<a href=\"\">Quantity<\/a>)\n\n        value is the current value of the metric (as a quantity).\n\n    - **currentMetrics.external.metric** (MetricIdentifier), required\n\n      metric identifies the target metric by name and selector\n\n      <a name=\"MetricIdentifier\"><\/a>\n      *MetricIdentifier defines the name and optionally selector for a metric*\n\n      - **currentMetrics.external.metric.name** (string), required\n\n        name is the name of the given metric\n\n      - **currentMetrics.external.metric.selector** (<a href=\"\">LabelSelector<\/a>)\n\n        selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. 
When unset, just the metricName will be used to gather metrics.\n\n  - **currentMetrics.object** (ObjectMetricStatus)\n\n    object refers to a metric describing a single kubernetes object (for example, hits-per-second on an Ingress object).\n\n    <a name=\"ObjectMetricStatus\"><\/a>\n    *ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object).*\n\n    - **currentMetrics.object.current** (MetricValueStatus), required\n\n      current contains the current value for the given metric\n\n      <a name=\"MetricValueStatus\"><\/a>\n      *MetricValueStatus holds the current value for a metric*\n\n      - **currentMetrics.object.current.averageUtilization** (int32)\n\n        currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods.\n\n      - **currentMetrics.object.current.averageValue** (<a href=\"\">Quantity<\/a>)\n\n        averageValue is the current value of the average of the metric across all relevant pods (as a quantity)\n\n      - **currentMetrics.object.current.value** (<a href=\"\">Quantity<\/a>)\n\n        value is the current value of the metric (as a quantity).\n\n    - **currentMetrics.object.describedObject** (CrossVersionObjectReference), required\n\n      describedObject specifies the description of the object, such as kind, name, and apiVersion\n\n      <a name=\"CrossVersionObjectReference\"><\/a>\n      *CrossVersionObjectReference contains enough information to let you identify the referred resource.*\n\n      - **currentMetrics.object.describedObject.kind** (string), required\n\n        kind is the kind of the referent; More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n      - **currentMetrics.object.describedObject.name** (string), required\n\n        name is the name of 
the referent; More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n      - **currentMetrics.object.describedObject.apiVersion** (string)\n\n        apiVersion is the API version of the referent\n\n    - **currentMetrics.object.metric** (MetricIdentifier), required\n\n      metric identifies the target metric by name and selector\n\n      <a name=\"MetricIdentifier\"><\/a>\n      *MetricIdentifier defines the name and optionally selector for a metric*\n\n      - **currentMetrics.object.metric.name** (string), required\n\n        name is the name of the given metric\n\n      - **currentMetrics.object.metric.selector** (<a href=\"\">LabelSelector<\/a>)\n\n        selector is the string-encoded form of a standard kubernetes label selector for the given metric. When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.\n\n  - **currentMetrics.pods** (PodsMetricStatus)\n\n    pods refers to a metric describing each pod in the current scale target (for example, transactions-processed-per-second).  
The values will be averaged together before being compared to the target value.\n\n    <a name=\"PodsMetricStatus\"><\/a>\n    *PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second).*\n\n    - **currentMetrics.pods.current** (MetricValueStatus), required\n\n      current contains the current value for the given metric\n\n      <a name=\"MetricValueStatus\"><\/a>\n      *MetricValueStatus holds the current value for a metric*\n\n      - **currentMetrics.pods.current.averageUtilization** (int32)\n\n        averageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods.\n\n      - **currentMetrics.pods.current.averageValue** (<a href=\"\">Quantity<\/a>)\n\n        averageValue is the current value of the average of the metric across all relevant pods (as a quantity)\n\n      - **currentMetrics.pods.current.value** (<a href=\"\">Quantity<\/a>)\n\n        value is the current value of the metric (as a quantity).\n\n    - **currentMetrics.pods.metric** (MetricIdentifier), required\n\n      metric identifies the target metric by name and selector\n\n      <a name=\"MetricIdentifier\"><\/a>\n      *MetricIdentifier defines the name and optionally selector for a metric*\n\n      - **currentMetrics.pods.metric.name** (string), required\n\n        name is the name of the given metric\n\n      - **currentMetrics.pods.metric.selector** (<a href=\"\">LabelSelector<\/a>)\n\n        selector is the string-encoded form of a standard kubernetes label selector for the given metric. When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. 
When unset, just the metricName will be used to gather metrics.\n\n  - **currentMetrics.resource** (ResourceMetricStatus)\n\n    resource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the \"pods\" source.\n\n    <a name=\"ResourceMetricStatus\"><\/a>\n    *ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory).  Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the \"pods\" source.*\n\n    - **currentMetrics.resource.current** (MetricValueStatus), required\n\n      current contains the current value for the given metric\n\n      <a name=\"MetricValueStatus\"><\/a>\n      *MetricValueStatus holds the current value for a metric*\n\n      - **currentMetrics.resource.current.averageUtilization** (int32)\n\n        averageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods.\n\n      - **currentMetrics.resource.current.averageValue** (<a href=\"\">Quantity<\/a>)\n\n        averageValue is the current value of the average of the metric across all relevant pods (as a quantity)\n\n      - **currentMetrics.resource.current.value** (<a href=\"\">Quantity<\/a>)\n\n        value is the current value of the metric (as a quantity).\n\n    - **currentMetrics.resource.name** (string), required\n\n      name is the name of the resource in question.\n\n- **currentReplicas** (int32)\n\n  currentReplicas is the current number of replicas of pods managed by this autoscaler, as last seen by the 
autoscaler.\n\n- **lastScaleTime** (Time)\n\n  lastScaleTime is the last time the HorizontalPodAutoscaler scaled the number of pods, used by the autoscaler to control how often the number of pods is changed.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **observedGeneration** (int64)\n\n  observedGeneration is the most recent generation observed by this autoscaler.\n\n\n\n\n\n## HorizontalPodAutoscalerList {#HorizontalPodAutoscalerList}\n\nHorizontalPodAutoscalerList is a list of horizontal pod autoscaler objects.\n\n<hr>\n\n- **apiVersion**: autoscaling\/v2\n\n\n- **kind**: HorizontalPodAutoscalerList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  metadata is the standard list metadata.\n\n- **items** ([]<a href=\"\">HorizontalPodAutoscaler<\/a>), required\n\n  items is the list of horizontal pod autoscaler objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nGET \/apis\/autoscaling\/v2\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nGET \/apis\/autoscaling\/v2\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): 
string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind HorizontalPodAutoscaler\n\n#### HTTP Request\n\nGET \/apis\/autoscaling\/v2\/namespaces\/{namespace}\/horizontalpodautoscalers\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscalerList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind HorizontalPodAutoscaler\n\n#### HTTP Request\n\nGET \/apis\/autoscaling\/v2\/horizontalpodautoscalers\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- 
**pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscalerList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a HorizontalPodAutoscaler\n\n#### HTTP Request\n\nPOST \/apis\/autoscaling\/v2\/namespaces\/{namespace}\/horizontalpodautoscalers\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">HorizontalPodAutoscaler<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n201 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Created\n\n202 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nPUT \/apis\/autoscaling\/v2\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">HorizontalPodAutoscaler<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in 
query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n201 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nPUT \/apis\/autoscaling\/v2\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">HorizontalPodAutoscaler<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n201 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nPATCH \/apis\/autoscaling\/v2\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): 
boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n201 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified HorizontalPodAutoscaler\n\n#### HTTP Request\n\nPATCH \/apis\/autoscaling\/v2\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">HorizontalPodAutoscaler<\/a>): OK\n\n201 (<a href=\"\">HorizontalPodAutoscaler<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a HorizontalPodAutoscaler\n\n#### HTTP Request\n\nDELETE \/apis\/autoscaling\/v2\/namespaces\/{namespace}\/horizontalpodautoscalers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the HorizontalPodAutoscaler\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### 
Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of HorizontalPodAutoscaler\n\n#### HTTP Request\n\nDELETE \/apis\/autoscaling\/v2\/namespaces\/{namespace}\/horizontalpodautoscalers\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n
   LabelSelector  a            selector is the string encoded form of a standard kubernetes label selector for the given metric When set  it is passed as an additional parameter to the metrics server for more specific metrics scoping  When unset  just the metricName will be used to gather metrics           metrics pods target    MetricTarget   required        target specifies the target value for the given metric         a name  MetricTarget    a         MetricTarget defines the target value  average value  or average utilization of a specific metric             metrics pods target type    string   required          type represents whether the metric type is Utilization  Value  or AverageValue            metrics pods target averageUtilization    int32           averageUtilization is the target value of the average of the resource metric across all relevant pods  represented as a percentage of the requested value of the resource for the pods  Currently only valid for Resource metric source type            metrics pods target averageValue     a href    Quantity  a            averageValue is the target value of the average of the metric across all relevant pods  as a quantity             metrics pods target value     a href    Quantity  a            value is the target value of the metric  as a quantity          metrics resource    ResourceMetricSource       resource refers to a resource metric  such as those specified in requests and limits  known to Kubernetes describing each pod in the current scale target  e g  CPU or memory   Such metrics are built in to Kubernetes  and have special scaling options on top of those available to normal per pod metrics using the  pods  source        a name  ResourceMetricSource    a       ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes  as specified in requests and limits  describing each pod in the current scale target  e g  CPU or memory    The values will be averaged together before being 
compared to the target   Such metrics are built in to Kubernetes  and have special scaling options on top of those available to normal per pod metrics using the  pods  source   Only one  target  type should be set            metrics resource name    string   required        name is the name of the resource in question           metrics resource target    MetricTarget   required        target specifies the target value for the given metric         a name  MetricTarget    a         MetricTarget defines the target value  average value  or average utilization of a specific metric             metrics resource target type    string   required          type represents whether the metric type is Utilization  Value  or AverageValue            metrics resource target averageUtilization    int32           averageUtilization is the target value of the average of the resource metric across all relevant pods  represented as a percentage of the requested value of the resource for the pods  Currently only valid for Resource metric source type            metrics resource target averageValue     a href    Quantity  a            averageValue is the target value of the average of the metric across all relevant pods  as a quantity             metrics resource target value     a href    Quantity  a            value is the target value of the metric  as a quantity           HorizontalPodAutoscalerStatus   HorizontalPodAutoscalerStatus   HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler    hr       desiredReplicas    int32   required    desiredReplicas is the desired number of replicas of pods managed by this autoscaler  as last calculated by the autoscaler       conditions      HorizontalPodAutoscalerCondition      Patch strategy  merge on key  type         Map  unique values on key type will be kept during a merge       conditions is the set of conditions required for this autoscaler to scale its target  and indicates whether or not those 
conditions are met      a name  HorizontalPodAutoscalerCondition    a     HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point          conditions status    string   required      status is the status of the condition  True  False  Unknown         conditions type    string   required      type describes the current condition        conditions lastTransitionTime    Time       lastTransitionTime is the last time the condition transitioned from one status to another       a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions message    string       message is a human readable explanation containing details about the transition        conditions reason    string       reason is the reason for the condition s last transition       currentMetrics      MetricStatus      Atomic  will be replaced during a merge       currentMetrics is the last read state of the metrics used by this autoscaler      a name  MetricStatus    a     MetricStatus describes the last read state of a single metric          currentMetrics type    string   required      type is the type of metric source   It will be one of  ContainerResource    External    Object    Pods  or  Resource   each corresponds to a matching field in the object  Note   ContainerResource  type is available on when the feature gate HPAContainerMetrics is enabled        currentMetrics containerResource    ContainerResourceMetricStatus       container resource refers to a resource metric  such as those specified in requests and limits  known to Kubernetes describing a single container in each pod in the current scale target  e g  CPU or memory   Such metrics are built in to Kubernetes  and have special scaling options on top of those available to normal per pod metrics using the  pods  source        a name  
ContainerResourceMetricStatus    a       ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes  as specified in requests and limits  describing a single container in each pod in the current scale target  e g  CPU or memory    Such metrics are built in to Kubernetes  and have special scaling options on top of those available to normal per pod metrics using the  pods  source            currentMetrics containerResource container    string   required        container is the name of the container in the pods of the scaling target          currentMetrics containerResource current    MetricValueStatus   required        current contains the current value for the given metric         a name  MetricValueStatus    a         MetricValueStatus holds the current value for a metric             currentMetrics containerResource current averageUtilization    int32           currentAverageUtilization is the current value of the average of the resource metric across all relevant pods  represented as a percentage of the requested value of the resource for the pods             currentMetrics containerResource current averageValue     a href    Quantity  a            averageValue is the current value of the average of the metric across all relevant pods  as a quantity             currentMetrics containerResource current value     a href    Quantity  a            value is the current value of the metric  as a quantity            currentMetrics containerResource name    string   required        name is the name of the resource in question         currentMetrics external    ExternalMetricStatus       external refers to a global metric that is not associated with any Kubernetes object  It allows autoscaling based on information coming from components running outside of cluster  for example length of queue in cloud messaging service  or QPS from loadbalancer running outside of cluster         a name  ExternalMetricStatus    a       
ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object            currentMetrics external current    MetricValueStatus   required        current contains the current value for the given metric         a name  MetricValueStatus    a         MetricValueStatus holds the current value for a metric             currentMetrics external current averageUtilization    int32           currentAverageUtilization is the current value of the average of the resource metric across all relevant pods  represented as a percentage of the requested value of the resource for the pods             currentMetrics external current averageValue     a href    Quantity  a            averageValue is the current value of the average of the metric across all relevant pods  as a quantity             currentMetrics external current value     a href    Quantity  a            value is the current value of the metric  as a quantity            currentMetrics external metric    MetricIdentifier   required        metric identifies the target metric by name and selector         a name  MetricIdentifier    a         MetricIdentifier defines the name and optionally selector for a metric             currentMetrics external metric name    string   required          name is the name of the given metric            currentMetrics external metric selector     a href    LabelSelector  a            selector is the string encoded form of a standard kubernetes label selector for the given metric When set  it is passed as an additional parameter to the metrics server for more specific metrics scoping  When unset  just the metricName will be used to gather metrics         currentMetrics object    ObjectMetricStatus       object refers to a metric describing a single kubernetes object  for example  hits per second on an Ingress object         a name  ObjectMetricStatus    a       ObjectMetricStatus indicates the current value of a metric describing a kubernetes 
object  for example  hits per second on an Ingress object             currentMetrics object current    MetricValueStatus   required        current contains the current value for the given metric         a name  MetricValueStatus    a         MetricValueStatus holds the current value for a metric             currentMetrics object current averageUtilization    int32           currentAverageUtilization is the current value of the average of the resource metric across all relevant pods  represented as a percentage of the requested value of the resource for the pods             currentMetrics object current averageValue     a href    Quantity  a            averageValue is the current value of the average of the metric across all relevant pods  as a quantity             currentMetrics object current value     a href    Quantity  a            value is the current value of the metric  as a quantity            currentMetrics object describedObject    CrossVersionObjectReference   required        DescribedObject specifies the descriptions of a object such as kind name apiVersion         a name  CrossVersionObjectReference    a         CrossVersionObjectReference contains enough information to let you identify the referred resource              currentMetrics object describedObject kind    string   required          kind is the kind of the referent  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds            currentMetrics object describedObject name    string   required          name is the name of the referent  More info  https   kubernetes io docs concepts overview working with objects names  names            currentMetrics object describedObject apiVersion    string           apiVersion is the API version of the referent          currentMetrics object metric    MetricIdentifier   required        metric identifies the target metric by name and selector         a name  MetricIdentifier    a         MetricIdentifier 
defines the name and optionally selector for a metric             currentMetrics object metric name    string   required          name is the name of the given metric            currentMetrics object metric selector     a href    LabelSelector  a            selector is the string encoded form of a standard kubernetes label selector for the given metric When set  it is passed as an additional parameter to the metrics server for more specific metrics scoping  When unset  just the metricName will be used to gather metrics         currentMetrics pods    PodsMetricStatus       pods refers to a metric describing each pod in the current scale target  for example  transactions processed per second    The values will be averaged together before being compared to the target value        a name  PodsMetricStatus    a       PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target  for example  transactions processed per second             currentMetrics pods current    MetricValueStatus   required        current contains the current value for the given metric         a name  MetricValueStatus    a         MetricValueStatus holds the current value for a metric             currentMetrics pods current averageUtilization    int32           currentAverageUtilization is the current value of the average of the resource metric across all relevant pods  represented as a percentage of the requested value of the resource for the pods             currentMetrics pods current averageValue     a href    Quantity  a            averageValue is the current value of the average of the metric across all relevant pods  as a quantity             currentMetrics pods current value     a href    Quantity  a            value is the current value of the metric  as a quantity            currentMetrics pods metric    MetricIdentifier   required        metric identifies the target metric by name and selector         a name  MetricIdentifier    a         
MetricIdentifier defines the name and optionally selector for a metric             currentMetrics pods metric name    string   required          name is the name of the given metric            currentMetrics pods metric selector     a href    LabelSelector  a            selector is the string encoded form of a standard kubernetes label selector for the given metric When set  it is passed as an additional parameter to the metrics server for more specific metrics scoping  When unset  just the metricName will be used to gather metrics         currentMetrics resource    ResourceMetricStatus       resource refers to a resource metric  such as those specified in requests and limits  known to Kubernetes describing each pod in the current scale target  e g  CPU or memory   Such metrics are built in to Kubernetes  and have special scaling options on top of those available to normal per pod metrics using the  pods  source        a name  ResourceMetricStatus    a       ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes  as specified in requests and limits  describing each pod in the current scale target  e g  CPU or memory    Such metrics are built in to Kubernetes  and have special scaling options on top of those available to normal per pod metrics using the  pods  source            currentMetrics resource current    MetricValueStatus   required        current contains the current value for the given metric         a name  MetricValueStatus    a         MetricValueStatus holds the current value for a metric             currentMetrics resource current averageUtilization    int32           currentAverageUtilization is the current value of the average of the resource metric across all relevant pods  represented as a percentage of the requested value of the resource for the pods             currentMetrics resource current averageValue     a href    Quantity  a            averageValue is the current value of the average of the metric across 
all relevant pods  as a quantity             currentMetrics resource current value     a href    Quantity  a            value is the current value of the metric  as a quantity            currentMetrics resource name    string   required        name is the name of the resource in question       currentReplicas    int32     currentReplicas is current number of replicas of pods managed by this autoscaler  as last seen by the autoscaler       lastScaleTime    Time     lastScaleTime is the last time the HorizontalPodAutoscaler scaled the number of pods  used by the autoscaler to control how often the number of pods is changed      a name  Time    a     Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers        observedGeneration    int64     observedGeneration is the most recent generation observed by this autoscaler          HorizontalPodAutoscalerList   HorizontalPodAutoscalerList   HorizontalPodAutoscalerList is a list of horizontal pod autoscaler objects    hr       apiVersion    autoscaling v2       kind    HorizontalPodAutoscalerList       metadata     a href    ListMeta  a      metadata is the standard list metadata       items       a href    HorizontalPodAutoscaler  a    required    items is the list of horizontal pod autoscaler objects          Operations   Operations      hr             get  read the specified HorizontalPodAutoscaler       HTTP Request  GET  apis autoscaling v2 namespaces  namespace  horizontalpodautoscalers  name        Parameters       name     in path    string  required    name of the HorizontalPodAutoscaler       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    HorizontalPodAutoscaler  a    OK  401  Unauthorized        get  read status of the specified HorizontalPodAutoscaler       HTTP Request  GET  apis 
autoscaling v2 namespaces  namespace  horizontalpodautoscalers  name  status       Parameters       name     in path    string  required    name of the HorizontalPodAutoscaler       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    HorizontalPodAutoscaler  a    OK  401  Unauthorized        list  list or watch objects of kind HorizontalPodAutoscaler       HTTP Request  GET  apis autoscaling v2 namespaces  namespace  horizontalpodautoscalers       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    HorizontalPodAutoscalerList  a    OK  401  Unauthorized        list  list or watch objects of kind HorizontalPodAutoscaler       HTTP Request  GET  apis autoscaling v2 horizontalpodautoscalers       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    
labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    HorizontalPodAutoscalerList  a    OK  401  Unauthorized        create  create a HorizontalPodAutoscaler       HTTP Request  POST  apis autoscaling v2 namespaces  namespace  horizontalpodautoscalers       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    HorizontalPodAutoscaler  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    HorizontalPodAutoscaler  a    OK  201   a href    HorizontalPodAutoscaler  a    Created  202   a href    HorizontalPodAutoscaler  a    Accepted  401  Unauthorized        update  replace the specified HorizontalPodAutoscaler       HTTP Request  PUT  apis autoscaling v2 namespaces  namespace  horizontalpodautoscalers  name        Parameters       name     in path    string  required    name of the HorizontalPodAutoscaler       namespace     in path    string  required     a href    namespace  a        body     a href    HorizontalPodAutoscaler  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    
fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    HorizontalPodAutoscaler  a    OK  201   a href    HorizontalPodAutoscaler  a    Created  401  Unauthorized        update  replace status of the specified HorizontalPodAutoscaler       HTTP Request  PUT  apis autoscaling v2 namespaces  namespace  horizontalpodautoscalers  name  status       Parameters       name     in path    string  required    name of the HorizontalPodAutoscaler       namespace     in path    string  required     a href    namespace  a        body     a href    HorizontalPodAutoscaler  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    HorizontalPodAutoscaler  a    OK  201   a href    HorizontalPodAutoscaler  a    Created  401  Unauthorized        patch  partially update the specified HorizontalPodAutoscaler       HTTP Request  PATCH  apis autoscaling v2 namespaces  namespace  horizontalpodautoscalers  name        Parameters       name     in path    string  required    name of the HorizontalPodAutoscaler       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    HorizontalPodAutoscaler  a    OK  201   a href    HorizontalPodAutoscaler  a    Created  401  Unauthorized        patch  partially update status of the specified 
HorizontalPodAutoscaler       HTTP Request  PATCH  apis autoscaling v2 namespaces  namespace  horizontalpodautoscalers  name  status       Parameters       name     in path    string  required    name of the HorizontalPodAutoscaler       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    HorizontalPodAutoscaler  a    OK  201   a href    HorizontalPodAutoscaler  a    Created  401  Unauthorized        delete  delete a HorizontalPodAutoscaler       HTTP Request  DELETE  apis autoscaling v2 namespaces  namespace  horizontalpodautoscalers  name        Parameters       name     in path    string  required    name of the HorizontalPodAutoscaler       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of HorizontalPodAutoscaler       HTTP Request  DELETE  apis autoscaling v2 namespaces  namespace  horizontalpodautoscalers       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in 
query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
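The Utilization, Value, and AverageValue target types above drive the replica count the controller reports in `desiredReplicas`. As an illustration only (this rule comes from the Kubernetes HPA documentation, not from this API reference), the core calculation is `desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)`; a minimal sketch, ignoring the tolerance, stabilization windows, and minReplicas/maxReplicas bounds the real controller applies:

```python
import math

def desired_replicas(current_replicas: int, current_value: float, target_value: float) -> int:
    """Sketch of the HPA scaling rule documented in the Kubernetes HPA docs:
    desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue).
    The real controller also applies a tolerance, stabilization windows, and
    the spec's minReplicas/maxReplicas bounds, which are omitted here."""
    return math.ceil(current_replicas * current_value / target_value)

# e.g. 4 pods averaging 90% CPU utilization against a 60% averageUtilization target
print(desired_replicas(4, 90, 60))  # -> 6
```

For an `averageUtilization` target, `currentMetricValue` corresponds to the pod-averaged utilization percentage reported in `currentMetrics[*].resource.current.averageUtilization`.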
{"questions":"kubernetes reference kind PodTemplate apiVersion v1 PodTemplate describes a template for creating copies of a predefined pod contenttype apireference apimetadata title PodTemplate weight 3 autogenerated true import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"PodTemplate\"\ncontent_type: \"api_reference\"\ndescription: \"PodTemplate describes a template for creating copies of a predefined pod.\"\ntitle: \"PodTemplate\"\nweight: 3\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## PodTemplate {#PodTemplate}\n\nPodTemplate describes a template for creating copies of a predefined pod.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: PodTemplate\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **template** (<a href=\"\">PodTemplateSpec<\/a>)\n\n  Template defines the pods that will be created from this pod template. 
https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## PodTemplateSpec {#PodTemplateSpec}\n\nPodTemplateSpec describes the data a pod should have when created from a template\n\n<hr>\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">PodSpec<\/a>)\n\n  Specification of the desired behavior of the pod. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## PodTemplateList {#PodTemplateList}\n\nPodTemplateList is a list of PodTemplates.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: PodTemplateList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">PodTemplate<\/a>), required\n\n  List of pod templates\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified PodTemplate\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/podtemplates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodTemplate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodTemplate<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind PodTemplate\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/podtemplates\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  
<a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodTemplateList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind PodTemplate\n\n#### HTTP Request\n\nGET \/api\/v1\/podtemplates\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodTemplateList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a 
PodTemplate\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/podtemplates\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">PodTemplate<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodTemplate<\/a>): OK\n\n201 (<a href=\"\">PodTemplate<\/a>): Created\n\n202 (<a href=\"\">PodTemplate<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified PodTemplate\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/podtemplates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodTemplate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">PodTemplate<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodTemplate<\/a>): OK\n\n201 (<a href=\"\">PodTemplate<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified PodTemplate\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/podtemplates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodTemplate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a 
href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodTemplate<\/a>): OK\n\n201 (<a href=\"\">PodTemplate<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a PodTemplate\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/podtemplates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodTemplate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodTemplate<\/a>): OK\n\n202 (<a href=\"\">PodTemplate<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of PodTemplate\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/podtemplates\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** 
(*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   v1    import   k8s io api core v1    kind   PodTemplate  content type   api reference  description   PodTemplate describes a template for creating copies of a predefined pod   title   PodTemplate  weight  3 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  v1    import  k8s io api core v1        PodTemplate   PodTemplate   PodTemplate describes a template for creating copies of a predefined pod    hr       apiVersion    v1       kind    PodTemplate       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      template     a href    PodTemplateSpec  a      Template 
defines the pods that will be created from this pod template  https   git k8s io community contributors devel sig architecture api conventions md spec and status         PodTemplateSpec   PodTemplateSpec   PodTemplateSpec describes the data a pod should have when created from a template   hr       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    PodSpec  a      Specification of the desired behavior of the pod  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status         PodTemplateList   PodTemplateList   PodTemplateList is a list of PodTemplates    hr       apiVersion    v1       kind    PodTemplateList       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      items       a href    PodTemplate  a    required    List of pod templates         Operations   Operations      hr             get  read the specified PodTemplate       HTTP Request  GET  api v1 namespaces  namespace  podtemplates  name        Parameters       name     in path    string  required    name of the PodTemplate       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodTemplate  a    OK  401  Unauthorized        list  list or watch objects of kind PodTemplate       HTTP Request  GET  api v1 namespaces  namespace  podtemplates       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        
labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    PodTemplateList  a    OK  401  Unauthorized        list  list or watch objects of kind PodTemplate       HTTP Request  GET  api v1 podtemplates       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    PodTemplateList  a    OK  401  Unauthorized        create  create a PodTemplate       HTTP Request  POST  api v1 namespaces  namespace  podtemplates       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    PodTemplate  a   required           dryRun     in query    string     a href    dryRun  a        
fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodTemplate  a    OK  201   a href    PodTemplate  a    Created  202   a href    PodTemplate  a    Accepted  401  Unauthorized        update  replace the specified PodTemplate       HTTP Request  PUT  api v1 namespaces  namespace  podtemplates  name        Parameters       name     in path    string  required    name of the PodTemplate       namespace     in path    string  required     a href    namespace  a        body     a href    PodTemplate  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodTemplate  a    OK  201   a href    PodTemplate  a    Created  401  Unauthorized        patch  partially update the specified PodTemplate       HTTP Request  PATCH  api v1 namespaces  namespace  podtemplates  name        Parameters       name     in path    string  required    name of the PodTemplate       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodTemplate  a    OK  201   a href    PodTemplate  a    Created  401  Unauthorized        delete  delete a PodTemplate       HTTP Request  DELETE  api v1 namespaces  namespace  
podtemplates  name        Parameters       name     in path    string  required    name of the PodTemplate       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    PodTemplate  a    OK  202   a href    PodTemplate  a    Accepted  401  Unauthorized        deletecollection  delete collection of PodTemplate       HTTP Request  DELETE  api v1 namespaces  namespace  podtemplates       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference Pod is a collection of containers that can run on a host apiVersion v1 contenttype apireference apimetadata kind Pod autogenerated true title Pod weight 1 import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"Pod\"\ncontent_type: \"api_reference\"\ndescription: \"Pod is a collection of containers that can run on a host.\"\ntitle: \"Pod\"\nweight: 1\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## Pod {#Pod}\n\nPod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: Pod\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">PodSpec<\/a>)\n\n  Specification of the desired behavior of the pod. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">PodStatus<\/a>)\n\n  Most recently observed status of the pod. This data may not be up to date. Populated by the system. Read-only. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## PodSpec {#PodSpec}\n\nPodSpec is a description of a pod.\n\n<hr>\n\n\n\n### Containers\n\n\n- **containers** ([]<a href=\"\">Container<\/a>), required\n\n  *Patch strategy: merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated.\n\n- **initContainers** ([]<a href=\"\">Container<\/a>)\n\n  *Patch strategy: merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request\/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/init-containers\/\n\n- **ephemeralContainers** ([]<a href=\"\">EphemeralContainer<\/a>)\n\n  *Patch strategy: merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. 
This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource.\n\n- **imagePullSecrets** ([]<a href=\"\">LocalObjectReference<\/a>)\n\n  *Patch strategy: merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#specifying-imagepullsecrets-on-a-pod\n\n- **enableServiceLinks** (boolean)\n\n  EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true.\n\n- **os** (PodOS)\n\n  Specifies the OS of the containers in the pod. 
Some pod and container fields are restricted if this is set.\n  \n  If the OS field is set to linux, the following fields must be unset: - securityContext.windowsOptions\n  \n  If the OS field is set to windows, the following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.appArmorProfile - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.securityContext.supplementalGroupsPolicy - spec.containers[*].securityContext.appArmorProfile - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup\n\n  <a name=\"PodOS\"><\/a>\n  *PodOS defines the OS parameters of a pod.*\n\n  - **os.name** (string), required\n\n    Name is the name of the operating system. The currently supported values are linux and windows. Additional values may be defined in the future and can be one of: https:\/\/github.com\/opencontainers\/runtime-spec\/blob\/master\/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null\n\n### Volumes\n\n\n- **volumes** ([]<a href=\"\">Volume<\/a>)\n\n  *Patch strategies: retainKeys, merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  List of volumes that can be mounted by containers belonging to the pod. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes\n\n### Scheduling\n\n\n- **nodeSelector** (map[string]string)\n\n  NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/assign-pod-node\/\n\n- **nodeName** (string)\n\n  NodeName indicates in which node this pod is scheduled. If empty, this pod is a candidate for scheduling by the scheduler defined in schedulerName. Once this field is set, the kubelet for this node becomes responsible for the lifecycle of this pod. This field should not be used to express a desire for the pod to be scheduled on a specific node. https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#nodename\n\n- **affinity** (Affinity)\n\n  If specified, the pod's scheduling constraints\n\n  <a name=\"Affinity\"><\/a>\n  *Affinity is a group of affinity scheduling rules.*\n\n  - **affinity.nodeAffinity** (<a href=\"\">NodeAffinity<\/a>)\n\n    Describes node affinity scheduling rules for the pod.\n\n  - **affinity.podAffinity** (<a href=\"\">PodAffinity<\/a>)\n\n    Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).\n\n  - **affinity.podAntiAffinity** (<a href=\"\">PodAntiAffinity<\/a>)\n\n    Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).\n\n- **tolerations** ([]Toleration)\n\n  *Atomic: will be replaced during a merge*\n  \n  If specified, the pod's tolerations.\n\n  <a name=\"Toleration\"><\/a>\n  *The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.*\n\n  - **tolerations.key** (string)\n\n    Key is the taint key that the toleration applies to. Empty means match all taint keys. 
If the key is empty, operator must be Exists; this combination means to match all values and all keys.\n\n  - **tolerations.operator** (string)\n\n    Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.\n\n  - **tolerations.value** (string)\n\n    Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.\n\n  - **tolerations.effect** (string)\n\n    Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.\n\n  - **tolerations.tolerationSeconds** (int64)\n\n    TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.\n\n- **schedulerName** (string)\n\n  If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler.\n\n- **runtimeClassName** (string)\n\n  RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod.  If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the \"legacy\" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https:\/\/git.k8s.io\/enhancements\/keps\/sig-node\/585-runtime-class\n\n- **priorityClassName** (string)\n\n  If specified, indicates the pod's priority. 
\"system-node-critical\" and \"system-cluster-critical\" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default.\n\n- **priority** (int32)\n\n  The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority.\n\n- **preemptionPolicy** (string)\n\n  PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset.\n\n- **topologySpreadConstraints** ([]TopologySpreadConstraint)\n\n  *Patch strategy: merge on key `topologyKey`*\n  \n  *Map: unique values on keys `topologyKey, whenUnsatisfiable` will be kept during a merge*\n  \n  TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed.\n\n  <a name=\"TopologySpreadConstraint\"><\/a>\n  *TopologySpreadConstraint specifies how to spread matching pods among the given topology.*\n\n  - **topologySpreadConstraints.maxSkew** (int32), required\n\n    MaxSkew describes the degree to which pods may be unevenly distributed. When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. 
For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2\/2\/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | |  P P  |  P P  |   P   | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2\/2\/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed.\n\n  - **topologySpreadConstraints.topologyKey** (string), required\n\n    TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each \\<key, value> as a \"bucket\", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is \"kubernetes.io\/hostname\", each Node is a domain of that topology. And, if TopologyKey is \"topology.kubernetes.io\/zone\", each zone is a domain of that topology. It's a required field.\n\n  - **topologySpreadConstraints.whenUnsatisfiable** (string), required\n\n    WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location,\n      but giving higher precedence to topologies that would help reduce the\n      skew.\n    A constraint is considered \"Unsatisfiable\" for an incoming pod if and only if every possible node assignment for that pod would violate \"MaxSkew\" on some topology. 
For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3\/1\/1: | zone1 | zone2 | zone3 | | P P P |   P   |   P   | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3\/2\/1(3\/1\/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it *more* imbalanced. It's a required field.\n\n  - **topologySpreadConstraints.labelSelector** (<a href=\"\">LabelSelector<\/a>)\n\n    LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.\n\n  - **topologySpreadConstraints.matchLabelKeys** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.\n    \n    This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default).\n\n  - **topologySpreadConstraints.minDomains** (int32)\n\n    MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed. 
And when the number of eligible domains with matching topology keys is equal to or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, the scheduler won't schedule more than maxSkew Pods to those domains. If the value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When the value is not nil, WhenUnsatisfiable must be DoNotSchedule.\n    \n    For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5, and pods with the same labelSelector spread as 2\/2\/2: | zone1 | zone2 | zone3 | |  P P  |  P P  |  P P  | The number of domains is less than 5 (MinDomains), so the \"global minimum\" is treated as 0. In this situation, a new pod with the same labelSelector cannot be scheduled: the computed skew would be 3 (3 - 0) if the new Pod were scheduled to any of the three zones, violating MaxSkew.\n\n  - **topologySpreadConstraints.nodeAffinityPolicy** (string)\n\n    NodeAffinityPolicy indicates how we will treat the Pod's nodeAffinity\/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity\/nodeSelector are included in the calculations. - Ignore: nodeAffinity\/nodeSelector are ignored. All nodes are included in the calculations.\n    \n    If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature, enabled by default via the NodeInclusionPolicyInPodTopologySpread feature flag.\n\n  - **topologySpreadConstraints.nodeTaintsPolicy** (string)\n\n    NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.\n    \n    If this value is nil, the behavior is equivalent to the Ignore policy. 
This is a beta-level feature, enabled by default via the NodeInclusionPolicyInPodTopologySpread feature flag.\n\n- **overhead** (map[string]<a href=\"\">Quantity<\/a>)\n\n  Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass; otherwise it will remain unset and be treated as zero. More info: https:\/\/git.k8s.io\/enhancements\/keps\/sig-node\/688-pod-overhead\/README.md\n\n### Lifecycle\n\n\n- **restartPolicy** (string)\n\n  Restart policy for all containers within the pod. One of Always, OnFailure, Never. In some contexts, only a subset of those values may be permitted. Defaults to Always. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle\/#restart-policy\n\n- **terminationGracePeriodSeconds** (int64)\n\n  Optional duration in seconds the pod needs to terminate gracefully. May be decreased in a delete request. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds between the time the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. 
Defaults to 30 seconds.\n\n- **activeDeadlineSeconds** (int64)\n\n  Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer.\n\n- **readinessGates** ([]PodReadinessGate)\n\n  *Atomic: will be replaced during a merge*\n  \n  If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to \"True\". More info: https:\/\/git.k8s.io\/enhancements\/keps\/sig-network\/580-pod-readiness-gates\n\n  <a name=\"PodReadinessGate\"><\/a>\n  *PodReadinessGate contains the reference to a pod condition*\n\n  - **readinessGates.conditionType** (string), required\n\n    ConditionType refers to a condition in the pod's condition list with matching type.\n\n### Hostname and Name resolution\n\n\n- **hostname** (string)\n\n  Specifies the hostname of the Pod. If not specified, the pod's hostname will be set to a system-defined value.\n\n- **setHostnameAsFQDN** (boolean)\n\n  If true, the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters to FQDN. If a pod does not have an FQDN, this has no effect. Defaults to false.\n\n- **subdomain** (string)\n\n  If specified, the fully qualified Pod hostname will be \"\\<hostname>.\\<subdomain>.\\<pod namespace>.svc.\\<cluster domain>\". 
If not specified, the pod will not have a domainname at all.\n\n- **hostAliases** ([]HostAlias)\n\n  *Patch strategy: merge on key `ip`*\n  \n  *Map: unique values on key ip will be kept during a merge*\n  \n  HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified.\n\n  <a name=\"HostAlias\"><\/a>\n  *HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file.*\n\n  - **hostAliases.ip** (string), required\n\n    IP address of the host file entry.\n\n  - **hostAliases.hostnames** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    Hostnames for the above IP address.\n\n- **dnsConfig** (PodDNSConfig)\n\n  Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy.\n\n  <a name=\"PodDNSConfig\"><\/a>\n  *PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy.*\n\n  - **dnsConfig.nameservers** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed.\n\n  - **dnsConfig.options** ([]PodDNSConfigOption)\n\n    *Atomic: will be replaced during a merge*\n    \n    A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy.\n\n    <a name=\"PodDNSConfigOption\"><\/a>\n    *PodDNSConfigOption defines DNS resolver options of a pod.*\n\n    - **dnsConfig.options.name** (string)\n\n      Required.\n\n    - **dnsConfig.options.value** (string)\n\n\n  - **dnsConfig.searches** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    A list of DNS search domains for host-name lookup. 
This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed.\n\n- **dnsPolicy** (string)\n\n  Set DNS policy for the pod. Defaults to \"ClusterFirst\". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to explicitly set the DNS policy to 'ClusterFirstWithHostNet'.\n\n### Host namespaces\n\n\n- **hostNetwork** (boolean)\n\n  Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Defaults to false.\n\n- **hostPID** (boolean)\n\n  Use the host's pid namespace. Optional: Defaults to false.\n\n- **hostIPC** (boolean)\n\n  Use the host's ipc namespace. Optional: Defaults to false.\n\n- **shareProcessNamespace** (boolean)\n\n  Share a single process namespace between all of the containers in a pod. When this is set, containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Defaults to false.\n\n### Service account\n\n\n- **serviceAccountName** (string)\n\n  ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/\n\n- **automountServiceAccountToken** (boolean)\n\n  AutomountServiceAccountToken indicates whether a service account token should be automatically mounted.\n\n### Security context\n\n\n- **securityContext** (PodSecurityContext)\n\n  SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty.  
See type description for default values of each field.\n\n  <a name=\"PodSecurityContext\"><\/a>\n  *PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext.  Field values of container.securityContext take precedence over field values of PodSecurityContext.*\n\n  - **securityContext.appArmorProfile** (AppArmorProfile)\n\n    appArmorProfile is the AppArmor options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"AppArmorProfile\"><\/a>\n    *AppArmorProfile defines a pod or container's AppArmor settings.*\n\n    - **securityContext.appArmorProfile.type** (string), required\n\n      type indicates which kind of AppArmor profile will be applied. Valid options are:\n        Localhost - a profile pre-loaded on the node.\n        RuntimeDefault - the container runtime's default profile.\n        Unconfined - no AppArmor enforcement.\n\n    - **securityContext.appArmorProfile.localhostProfile** (string)\n\n      localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is \"Localhost\".\n\n  - **securityContext.fsGroup** (int64)\n\n    A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod:\n    \n    1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw----\n    \n    If unset, the Kubelet will not modify the ownership and permissions of any volume. 
Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.fsGroupChangePolicy** (string)\n\n    fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are \"OnRootMismatch\" and \"Always\". If not specified, \"Always\" is used. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.runAsUser** (int64)\n\n    The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.runAsNonRoot** (boolean)\n\n    Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.\n\n  - **securityContext.runAsGroup** (int64)\n\n    The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.seccompProfile** (SeccompProfile)\n\n    The seccomp options to use by the containers in this pod. 
Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"SeccompProfile\"><\/a>\n    *SeccompProfile defines a pod\/container's seccomp profile settings. Only one profile source may be set.*\n\n    - **securityContext.seccompProfile.type** (string), required\n\n      type indicates which kind of seccomp profile will be applied. Valid options are:\n      \n      Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.\n\n    - **securityContext.seccompProfile.localhostProfile** (string)\n\n      localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is \"Localhost\". Must NOT be set for any other type.\n\n  - **securityContext.seLinuxOptions** (SELinuxOptions)\n\n    The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container.  May also be set in SecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. 
Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"SELinuxOptions\"><\/a>\n    *SELinuxOptions are the labels to be applied to the container*\n\n    - **securityContext.seLinuxOptions.level** (string)\n\n      Level is SELinux level label that applies to the container.\n\n    - **securityContext.seLinuxOptions.role** (string)\n\n      Role is a SELinux role label that applies to the container.\n\n    - **securityContext.seLinuxOptions.type** (string)\n\n      Type is a SELinux type label that applies to the container.\n\n    - **securityContext.seLinuxOptions.user** (string)\n\n      User is a SELinux user label that applies to the container.\n\n  - **securityContext.supplementalGroups** ([]int64)\n\n    *Atomic: will be replaced during a merge*\n    \n    A list of groups applied to the first process run in each container, in addition to the container's primary GID and fsGroup (if specified).  If the SupplementalGroupsPolicy feature is enabled, the supplementalGroupsPolicy field determines whether these are in addition to or instead of any group memberships defined in the container image. If unspecified, no additional groups are added, though group memberships defined in the container image may still be used, depending on the supplementalGroupsPolicy field. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.supplementalGroupsPolicy** (string)\n\n    Defines how supplemental groups of the first container processes are calculated. Valid values are \"Merge\" and \"Strict\". If not specified, \"Merge\" is used. (Alpha) Using the field requires the SupplementalGroupsPolicy feature gate to be enabled and the container runtime must implement support for this feature. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.sysctls** ([]Sysctl)\n\n    *Atomic: will be replaced during a merge*\n    \n    Sysctls hold a list of namespaced sysctls used for the pod. 
Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"Sysctl\"><\/a>\n    *Sysctl defines a kernel parameter to be set*\n\n    - **securityContext.sysctls.name** (string), required\n\n      Name of a property to set\n\n    - **securityContext.sysctls.value** (string), required\n\n      Value of a property to set\n\n  - **securityContext.windowsOptions** (WindowsSecurityContextOptions)\n\n    The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.\n\n    <a name=\"WindowsSecurityContextOptions\"><\/a>\n    *WindowsSecurityContextOptions contain Windows-specific options and credentials.*\n\n    - **securityContext.windowsOptions.gmsaCredentialSpec** (string)\n\n      GMSACredentialSpec is where the GMSA admission webhook (https:\/\/github.com\/kubernetes-sigs\/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.\n\n    - **securityContext.windowsOptions.gmsaCredentialSpecName** (string)\n\n      GMSACredentialSpecName is the name of the GMSA credential spec to use.\n\n    - **securityContext.windowsOptions.hostProcess** (boolean)\n\n      HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.\n\n    - **securityContext.windowsOptions.runAsUserName** (string)\n\n      The UserName in Windows to run the entrypoint of the container process. 
Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.\n\n### Alpha level\n\n\n- **hostUsers** (boolean)\n\n  Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.\n\n- **resourceClaims** ([]PodResourceClaim)\n\n  *Patch strategies: retainKeys, merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name.\n  \n  This is an alpha field and requires enabling the DynamicResourceAllocation feature gate.\n  \n  This field is immutable.\n\n  <a name=\"PodResourceClaim\"><\/a>\n  *PodResourceClaim references exactly one ResourceClaim, either directly or by naming a ResourceClaimTemplate which is then turned into a ResourceClaim for the pod.\n  \n  It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name.*\n\n  - **resourceClaims.name** (string), required\n\n    Name uniquely identifies this resource claim inside the pod. 
This must be a DNS_LABEL.\n\n  - **resourceClaims.resourceClaimName** (string)\n\n    ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod.\n    \n    Exactly one of ResourceClaimName and ResourceClaimTemplateName must be set.\n\n  - **resourceClaims.resourceClaimTemplateName** (string)\n\n    ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod.\n    \n    The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The pod name and resource name, along with a generated component, will be used to form a unique name for the ResourceClaim, which will be recorded in pod.status.resourceClaimStatuses.\n    \n    This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim.\n    \n    Exactly one of ResourceClaimName and ResourceClaimTemplateName must be set.\n\n- **schedulingGates** ([]PodSchedulingGate)\n\n  *Patch strategy: merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod.\n  \n  SchedulingGates can only be set at pod creation time, and be removed only afterwards.\n\n  <a name=\"PodSchedulingGate\"><\/a>\n  *PodSchedulingGate is associated to a Pod to guard its scheduling.*\n\n  - **schedulingGates.name** (string), required\n\n    Name of the scheduling gate. Each scheduling gate must have a unique name field.\n\n### Deprecated\n\n\n- **serviceAccount** (string)\n\n  DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. 
Deprecated: Use serviceAccountName instead.\n\n\n\n## Container {#Container}\n\nA single application container that you want to run within a pod.\n\n<hr>\n\n- **name** (string), required\n\n  Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.\n\n\n\n### Image\n\n\n- **image** (string)\n\n  Container image name. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.\n\n- **imagePullPolicy** (string)\n\n  Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images\n\n### Entrypoint\n\n\n- **command** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\n\n- **args** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. 
If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\n\n- **workingDir** (string)\n\n  Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.\n\n### Ports\n\n\n- **ports** ([]ContainerPort)\n\n  *Patch strategy: merge on key `containerPort`*\n  \n  *Map: unique values on keys `containerPort, protocol` will be kept during a merge*\n  \n  List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https:\/\/github.com\/kubernetes\/kubernetes\/issues\/108255. Cannot be updated.\n\n  <a name=\"ContainerPort\"><\/a>\n  *ContainerPort represents a network port in a single container.*\n\n  - **ports.containerPort** (int32), required\n\n    Number of port to expose on the pod's IP address. This must be a valid port number, 0 \\< x \\< 65536.\n\n  - **ports.hostIP** (string)\n\n    What host IP to bind the external port to.\n\n  - **ports.hostPort** (int32)\n\n    Number of port to expose on the host. If specified, this must be a valid port number, 0 \\< x \\< 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.\n\n  - **ports.name** (string)\n\n    If specified, this must be an IANA_SVC_NAME and unique within the pod. 
Each named port in a pod must have a unique name. Name for the port that can be referred to by services.\n\n  - **ports.protocol** (string)\n\n    Protocol for port. Must be UDP, TCP, or SCTP. Defaults to \"TCP\".\n\n### Environment variables\n\n\n- **env** ([]EnvVar)\n\n  *Patch strategy: merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  List of environment variables to set in the container. Cannot be updated.\n\n  <a name=\"EnvVar\"><\/a>\n  *EnvVar represents an environment variable present in a Container.*\n\n  - **env.name** (string), required\n\n    Name of the environment variable. Must be a C_IDENTIFIER.\n\n  - **env.value** (string)\n\n    Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to \"\".\n\n  - **env.valueFrom** (EnvVarSource)\n\n    Source for the environment variable's value. Cannot be used if value is not empty.\n\n    <a name=\"EnvVarSource\"><\/a>\n    *EnvVarSource represents a source for the value of an EnvVar.*\n\n    - **env.valueFrom.configMapKeyRef** (ConfigMapKeySelector)\n\n      Selects a key of a ConfigMap.\n\n      <a name=\"ConfigMapKeySelector\"><\/a>\n      *Selects a key from a ConfigMap.*\n\n      - **env.valueFrom.configMapKeyRef.key** (string), required\n\n        The key to select.\n\n      - **env.valueFrom.configMapKeyRef.name** (string)\n\n        Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n      - **env.valueFrom.configMapKeyRef.optional** (boolean)\n\n        Specify whether the ConfigMap or its key must be defined\n\n    - **env.valueFrom.fieldRef** (<a href=\"\">ObjectFieldSelector<\/a>)\n\n      Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['\\<KEY>']`, `metadata.annotations['\\<KEY>']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.\n\n    - **env.valueFrom.resourceFieldRef** (<a href=\"\">ResourceFieldSelector<\/a>)\n\n      Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.\n\n    - **env.valueFrom.secretKeyRef** (SecretKeySelector)\n\n      Selects a key of a secret in the pod's namespace\n\n      <a name=\"SecretKeySelector\"><\/a>\n      *SecretKeySelector selects a key of a Secret.*\n\n      - **env.valueFrom.secretKeyRef.key** (string), required\n\n        The key of the secret to select from.  Must be a valid secret key.\n\n      - **env.valueFrom.secretKeyRef.name** (string)\n\n        Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n      - **env.valueFrom.secretKeyRef.optional** (boolean)\n\n        Specify whether the Secret or its key must be defined\n\n- **envFrom** ([]EnvFromSource)\n\n  *Atomic: will be replaced during a merge*\n  \n  List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. 
All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.\n\n  <a name=\"EnvFromSource\"><\/a>\n  *EnvFromSource represents the source of a set of ConfigMaps*\n\n  - **envFrom.configMapRef** (ConfigMapEnvSource)\n\n    The ConfigMap to select from\n\n    <a name=\"ConfigMapEnvSource\"><\/a>\n    *ConfigMapEnvSource selects a ConfigMap to populate the environment variables with.\n    \n    The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables.*\n\n    - **envFrom.configMapRef.name** (string)\n\n      Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n    - **envFrom.configMapRef.optional** (boolean)\n\n      Specify whether the ConfigMap must be defined\n\n  - **envFrom.prefix** (string)\n\n    An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.\n\n  - **envFrom.secretRef** (SecretEnvSource)\n\n    The Secret to select from\n\n    <a name=\"SecretEnvSource\"><\/a>\n    *SecretEnvSource selects a Secret to populate the environment variables with.\n    \n    The contents of the target Secret's Data field will represent the key-value pairs as environment variables.*\n\n    - **envFrom.secretRef.name** (string)\n\n      Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n    - **envFrom.secretRef.optional** (boolean)\n\n      Specify whether the Secret must be defined\n\n### Volumes\n\n\n- **volumeMounts** ([]VolumeMount)\n\n  *Patch strategy: merge on key `mountPath`*\n  \n  *Map: unique values on key mountPath will be kept during a merge*\n  \n  Pod volumes to mount into the container's filesystem. Cannot be updated.\n\n  <a name=\"VolumeMount\"><\/a>\n  *VolumeMount describes a mounting of a Volume within a container.*\n\n  - **volumeMounts.mountPath** (string), required\n\n    Path within the container at which the volume should be mounted.  Must not contain ':'.\n\n  - **volumeMounts.name** (string), required\n\n    This must match the Name of a Volume.\n\n  - **volumeMounts.mountPropagation** (string)\n\n    mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None).\n\n  - **volumeMounts.readOnly** (boolean)\n\n    Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.\n\n  - **volumeMounts.recursiveReadOnly** (string)\n\n    RecursiveReadOnly specifies whether read-only mounts should be handled recursively.\n    \n    If ReadOnly is false, this field has no meaning and must be unspecified.\n    \n    If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only.  If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime.  
If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason.\n    \n    If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None).\n    \n    If this field is not specified, it is treated as an equivalent of Disabled.\n\n  - **volumeMounts.subPath** (string)\n\n    Path within the volume from which the container's volume should be mounted. Defaults to \"\" (volume's root).\n\n  - **volumeMounts.subPathExpr** (string)\n\n    Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive.\n\n- **volumeDevices** ([]VolumeDevice)\n\n  *Patch strategy: merge on key `devicePath`*\n  \n  *Map: unique values on key devicePath will be kept during a merge*\n  \n  volumeDevices is the list of block devices to be used by the container.\n\n  <a name=\"VolumeDevice\"><\/a>\n  *volumeDevice describes a mapping of a raw block device within a container.*\n\n  - **volumeDevices.devicePath** (string), required\n\n    devicePath is the path inside of the container that the device will be mapped to.\n\n  - **volumeDevices.name** (string), required\n\n    name must match the name of a persistentVolumeClaim in the pod\n\n### Resources\n\n\n- **resources** (ResourceRequirements)\n\n  Compute Resources required by this container. Cannot be updated. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n  <a name=\"ResourceRequirements\"><\/a>\n  *ResourceRequirements describes the compute resource requirements.*\n\n  - **resources.claims** ([]ResourceClaim)\n\n    *Map: unique values on key name will be kept during a merge*\n    \n    Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container.\n    \n    This is an alpha field and requires enabling the DynamicResourceAllocation feature gate.\n    \n    This field is immutable. It can only be set for containers.\n\n    <a name=\"ResourceClaim\"><\/a>\n    *ResourceClaim references one entry in PodSpec.ResourceClaims.*\n\n    - **resources.claims.name** (string), required\n\n      Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.\n\n    - **resources.claims.request** (string)\n\n      Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request.\n\n  - **resources.limits** (map[string]<a href=\"\">Quantity<\/a>)\n\n    Limits describes the maximum amount of compute resources allowed. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n  - **resources.requests** (map[string]<a href=\"\">Quantity<\/a>)\n\n    Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. 
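    A minimal sketch of `requests` and `limits` together (the quantities are illustrative, not recommendations):

    ```yaml
    resources:
      requests:
        cpu: 250m        # scheduler places the pod only where this is available
        memory: 64Mi
      limits:
        cpu: 500m        # usage beyond the limit is throttled (cpu) or OOM-killed (memory)
        memory: 128Mi
    ```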
More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n- **resizePolicy** ([]ContainerResizePolicy)\n\n  *Atomic: will be replaced during a merge*\n  \n  Resources resize policy for the container.\n\n  <a name=\"ContainerResizePolicy\"><\/a>\n  *ContainerResizePolicy represents resource resize policy for the container.*\n\n  - **resizePolicy.resourceName** (string), required\n\n    Name of the resource to which this resource resize policy applies. Supported values: cpu, memory.\n\n  - **resizePolicy.restartPolicy** (string), required\n\n    Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired.\n\n### Lifecycle\n\n\n- **lifecycle** (Lifecycle)\n\n  Actions that the management system should take in response to container lifecycle events. Cannot be updated.\n\n  <a name=\"Lifecycle\"><\/a>\n  *Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.*\n\n  - **lifecycle.postStart** (<a href=\"\">LifecycleHandler<\/a>)\n\n    PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/container-lifecycle-hooks\/#container-hooks\n\n  - **lifecycle.preStop** (<a href=\"\">LifecycleHandler<\/a>)\n\n    PreStop is called immediately before a container is terminated due to an API request or management event such as liveness\/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. 
The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/container-lifecycle-hooks\/#container-hooks\n\n- **terminationMessagePath** (string)\n\n  Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to \/dev\/termination-log. Cannot be updated.\n\n- **terminationMessagePolicy** (string)\n\n  Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.\n\n- **livenessProbe** (<a href=\"\">Probe<\/a>)\n\n  Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\n\n- **readinessProbe** (<a href=\"\">Probe<\/a>)\n\n  Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. 
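  A sketch combining the three probe fields above (the port and paths are assumptions about the workload):

  ```yaml
  # Illustrative only: /healthz, /ready and port 8080 are hypothetical endpoints.
  startupProbe:            # holds off the other probes until it succeeds
    httpGet:
      path: /healthz
      port: 8080
    failureThreshold: 30
    periodSeconds: 10
  livenessProbe:           # restarts the container on failure
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
  readinessProbe:          # removes the pod from service endpoints on failure
    httpGet:
      path: /ready
      port: 8080
    failureThreshold: 3
  ```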
More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\n\n- **startupProbe** (<a href=\"\">Probe<\/a>)\n\n  StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\n\n- **restartPolicy** (string)\n\n  RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is \"Always\". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as \"Always\" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy \"Always\" will be shut down. This lifecycle differs from normal init containers and is often referred to as a \"sidecar\" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed.\n\n### Security Context\n\n\n- **securityContext** (SecurityContext)\n\n  SecurityContext defines the security options the container should be run with. 
If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/\n\n  <a name=\"SecurityContext\"><\/a>\n  *SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext.  When both are set, the values in SecurityContext take precedence.*\n\n  - **securityContext.allowPrivilegeEscalation** (boolean)\n\n    AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.appArmorProfile** (AppArmorProfile)\n\n    appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"AppArmorProfile\"><\/a>\n    *AppArmorProfile defines a pod or container's AppArmor settings.*\n\n    - **securityContext.appArmorProfile.type** (string), required\n\n      type indicates which kind of AppArmor profile will be applied. Valid options are:\n        Localhost - a profile pre-loaded on the node.\n        RuntimeDefault - the container runtime's default profile.\n        Unconfined - no AppArmor enforcement.\n\n    - **securityContext.appArmorProfile.localhostProfile** (string)\n\n      localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. 
Must be set if and only if type is \"Localhost\".\n\n  - **securityContext.capabilities** (Capabilities)\n\n    The capabilities to add\/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"Capabilities\"><\/a>\n    *Adds and removes POSIX capabilities from running containers.*\n\n    - **securityContext.capabilities.add** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      Added capabilities\n\n    - **securityContext.capabilities.drop** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      Removed capabilities\n\n  - **securityContext.procMount** (string)\n\n    procMount denotes the type of proc mount to use for the containers. The default value is Default which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.privileged** (boolean)\n\n    Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.readOnlyRootFilesystem** (boolean)\n\n    Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.runAsUser** (int64)\n\n    The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.runAsNonRoot** (boolean)\n\n    Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.\n\n  - **securityContext.runAsGroup** (int64)\n\n    The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.seLinuxOptions** (SELinuxOptions)\n\n    The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container.  May also be set in PodSecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"SELinuxOptions\"><\/a>\n    *SELinuxOptions are the labels to be applied to the container*\n\n    - **securityContext.seLinuxOptions.level** (string)\n\n      Level is SELinux level label that applies to the container.\n\n    - **securityContext.seLinuxOptions.role** (string)\n\n      Role is a SELinux role label that applies to the container.\n\n    - **securityContext.seLinuxOptions.type** (string)\n\n      Type is a SELinux type label that applies to the container.\n\n    - **securityContext.seLinuxOptions.user** (string)\n\n      User is a SELinux user label that applies to the container.\n\n  - **securityContext.seccompProfile** (SeccompProfile)\n\n    The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"SeccompProfile\"><\/a>\n    *SeccompProfile defines a pod\/container's seccomp profile settings. Only one profile source may be set.*\n\n    - **securityContext.seccompProfile.type** (string), required\n\n      type indicates which kind of seccomp profile will be applied. Valid options are:\n      \n      Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.\n\n    - **securityContext.seccompProfile.localhostProfile** (string)\n\n      localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is \"Localhost\". Must NOT be set for any other type.\n\n  - **securityContext.windowsOptions** (WindowsSecurityContextOptions)\n\n    The Windows specific settings applied to all containers. 
If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.\n\n    <a name=\"WindowsSecurityContextOptions\"><\/a>\n    *WindowsSecurityContextOptions contain Windows-specific options and credentials.*\n\n    - **securityContext.windowsOptions.gmsaCredentialSpec** (string)\n\n      GMSACredentialSpec is where the GMSA admission webhook (https:\/\/github.com\/kubernetes-sigs\/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.\n\n    - **securityContext.windowsOptions.gmsaCredentialSpecName** (string)\n\n      GMSACredentialSpecName is the name of the GMSA credential spec to use.\n\n    - **securityContext.windowsOptions.hostProcess** (boolean)\n\n      HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.\n\n    - **securityContext.windowsOptions.runAsUserName** (string)\n\n      The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.\n\n### Debugging\n\n\n- **stdin** (boolean)\n\n  Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.\n\n- **stdinOnce** (boolean)\n\n  Whether the container runtime should close the stdin channel after it has been opened by a single attach. 
When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false.\n\n- **tty** (boolean)\n\n  Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false.\n\n\n\n## EphemeralContainer {#EphemeralContainer}\n\nAn EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation.\n\nTo add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted.\n\n<hr>\n\n- **name** (string), required\n\n  Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers.\n\n- **targetContainerName** (string)\n\n  If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec.\n  \n  The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined.\n\n\n\n### Image\n\n\n- **image** (string)\n\n  Container image name. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\n\n- **imagePullPolicy** (string)\n\n  Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images\n\n### Entrypoint\n\n\n- **command** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\n\n- **args** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\n\n- **workingDir** (string)\n\n  Container's working directory. 
If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.\n\n### Environment variables\n\n\n- **env** ([]EnvVar)\n\n  *Patch strategy: merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  List of environment variables to set in the container. Cannot be updated.\n\n  <a name=\"EnvVar\"><\/a>\n  *EnvVar represents an environment variable present in a Container.*\n\n  - **env.name** (string), required\n\n    Name of the environment variable. Must be a C_IDENTIFIER.\n\n  - **env.value** (string)\n\n    Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to \"\".\n\n  - **env.valueFrom** (EnvVarSource)\n\n    Source for the environment variable's value. Cannot be used if value is not empty.\n\n    <a name=\"EnvVarSource\"><\/a>\n    *EnvVarSource represents a source for the value of an EnvVar.*\n\n    - **env.valueFrom.configMapKeyRef** (ConfigMapKeySelector)\n\n      Selects a key of a ConfigMap.\n\n      <a name=\"ConfigMapKeySelector\"><\/a>\n      *Selects a key from a ConfigMap.*\n\n      - **env.valueFrom.configMapKeyRef.key** (string), required\n\n        The key to select.\n\n      - **env.valueFrom.configMapKeyRef.name** (string)\n\n        Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
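        A hedged example of `env` combining a literal `value` with `valueFrom` sources (the Secret name and keys are hypothetical):

        ```yaml
        env:
          - name: LOG_LEVEL
            value: debug
          - name: DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: db-credentials   # hypothetical Secret in the pod's namespace
                key: password
          - name: POD_NAME
            valueFrom:
              fieldRef:                # downward API: expose pod metadata
                fieldPath: metadata.name
        ```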
More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n      - **env.valueFrom.configMapKeyRef.optional** (boolean)\n\n        Specify whether the ConfigMap or its key must be defined\n\n    - **env.valueFrom.fieldRef** (<a href=\"\">ObjectFieldSelector<\/a>)\n\n      Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['\\<KEY>']`, `metadata.annotations['\\<KEY>']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.\n\n    - **env.valueFrom.resourceFieldRef** (<a href=\"\">ResourceFieldSelector<\/a>)\n\n      Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.\n\n    - **env.valueFrom.secretKeyRef** (SecretKeySelector)\n\n      Selects a key of a secret in the pod's namespace\n\n      <a name=\"SecretKeySelector\"><\/a>\n      *SecretKeySelector selects a key of a Secret.*\n\n      - **env.valueFrom.secretKeyRef.key** (string), required\n\n        The key of the secret to select from.  Must be a valid secret key.\n\n      - **env.valueFrom.secretKeyRef.name** (string)\n\n        Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n      - **env.valueFrom.secretKeyRef.optional** (boolean)\n\n        Specify whether the Secret or its key must be defined\n\n- **envFrom** ([]EnvFromSource)\n\n  *Atomic: will be replaced during a merge*\n  \n  List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. 
When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.\n\n  <a name=\"EnvFromSource\"><\/a>\n  *EnvFromSource represents the source of a set of ConfigMaps*\n\n  - **envFrom.configMapRef** (ConfigMapEnvSource)\n\n    The ConfigMap to select from\n\n    <a name=\"ConfigMapEnvSource\"><\/a>\n    *ConfigMapEnvSource selects a ConfigMap to populate the environment variables with.\n    \n    The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables.*\n\n    - **envFrom.configMapRef.name** (string)\n\n      Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n    - **envFrom.configMapRef.optional** (boolean)\n\n      Specify whether the ConfigMap must be defined\n\n  - **envFrom.prefix** (string)\n\n    An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.\n\n  - **envFrom.secretRef** (SecretEnvSource)\n\n    The Secret to select from\n\n    <a name=\"SecretEnvSource\"><\/a>\n    *SecretEnvSource selects a Secret to populate the environment variables with.\n    \n    The contents of the target Secret's Data field will represent the key-value pairs as environment variables.*\n\n    - **envFrom.secretRef.name** (string)\n\n      Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n    - **envFrom.secretRef.optional** (boolean)\n\n      Specify whether the Secret must be defined\n\n### Volumes\n\n\n- **volumeMounts** ([]VolumeMount)\n\n  *Patch strategy: merge on key `mountPath`*\n  \n  *Map: unique values on key mountPath will be kept during a merge*\n  \n  Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated.\n\n  <a name=\"VolumeMount\"><\/a>\n  *VolumeMount describes a mounting of a Volume within a container.*\n\n  - **volumeMounts.mountPath** (string), required\n\n    Path within the container at which the volume should be mounted.  Must not contain ':'.\n\n  - **volumeMounts.name** (string), required\n\n    This must match the Name of a Volume.\n\n  - **volumeMounts.mountPropagation** (string)\n\n    mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None).\n\n  - **volumeMounts.readOnly** (boolean)\n\n    Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.\n\n  - **volumeMounts.recursiveReadOnly** (string)\n\n    RecursiveReadOnly specifies whether read-only mounts should be handled recursively.\n    \n    If ReadOnly is false, this field has no meaning and must be unspecified.\n    \n    If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only.  If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime.  
If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason.\n    \n    If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None).\n    \n    If this field is not specified, it is treated as an equivalent of Disabled.\n\n  - **volumeMounts.subPath** (string)\n\n    Path within the volume from which the container's volume should be mounted. Defaults to \"\" (volume's root).\n\n  - **volumeMounts.subPathExpr** (string)\n\n    Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive.\n\n- **volumeDevices** ([]VolumeDevice)\n\n  *Patch strategy: merge on key `devicePath`*\n  \n  *Map: unique values on key devicePath will be kept during a merge*\n  \n  volumeDevices is the list of block devices to be used by the container.\n\n  <a name=\"VolumeDevice\"><\/a>\n  *volumeDevice describes a mapping of a raw block device within a container.*\n\n  - **volumeDevices.devicePath** (string), required\n\n    devicePath is the path inside of the container that the device will be mapped to.\n\n  - **volumeDevices.name** (string), required\n\n    name must match the name of a persistentVolumeClaim in the pod\n\n### Resources\n\n\n- **resizePolicy** ([]ContainerResizePolicy)\n\n  *Atomic: will be replaced during a merge*\n  \n  Resources resize policy for the container.\n\n  <a name=\"ContainerResizePolicy\"><\/a>\n  *ContainerResizePolicy represents resource resize policy for the container.*\n\n  - **resizePolicy.resourceName** (string), required\n\n    Name of the resource to which this resource resize policy applies. 
Supported values: cpu, memory.\n\n  - **resizePolicy.restartPolicy** (string), required\n\n    Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired.\n\n### Lifecycle\n\n\n- **terminationMessagePath** (string)\n\n  Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to \/dev\/termination-log. Cannot be updated.\n\n- **terminationMessagePolicy** (string)\n\n  Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.\n\n- **restartPolicy** (string)\n\n  Restart policy for the container to manage the restart behavior of each container within a pod. This may only be set for init containers. You cannot set this field on ephemeral containers.\n\n### Debugging\n\n\n- **stdin** (boolean)\n\n  Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.\n\n- **stdinOnce** (boolean)\n\n  Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. 
If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false.\n\n- **tty** (boolean)\n\n  Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false.\n\n### Security context\n\n\n- **securityContext** (SecurityContext)\n\n  Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.\n\n  <a name=\"SecurityContext\"><\/a>\n  *SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext.  When both are set, the values in SecurityContext take precedence.*\n\n  - **securityContext.allowPrivilegeEscalation** (boolean)\n\n    AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is always true when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.appArmorProfile** (AppArmorProfile)\n\n    appArmorProfile is the AppArmor options to be used by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"AppArmorProfile\"><\/a>\n    *AppArmorProfile defines a pod or container's AppArmor settings.*\n\n    - **securityContext.appArmorProfile.type** (string), required\n\n      type indicates which kind of AppArmor profile will be applied.
Valid options are:\n        Localhost - a profile pre-loaded on the node.\n        RuntimeDefault - the container runtime's default profile.\n        Unconfined - no AppArmor enforcement.\n\n    - **securityContext.appArmorProfile.localhostProfile** (string)\n\n      localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is \"Localhost\".\n\n  - **securityContext.capabilities** (Capabilities)\n\n    The capabilities to add\/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"Capabilities\"><\/a>\n    *Adds and removes POSIX capabilities from running containers.*\n\n    - **securityContext.capabilities.add** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      Added capabilities\n\n    - **securityContext.capabilities.drop** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      Removed capabilities\n\n  - **securityContext.procMount** (string)\n\n    procMount denotes the type of proc mount to use for the containers. The default value is Default which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.privileged** (boolean)\n\n    Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.readOnlyRootFilesystem** (boolean)\n\n    Whether this container has a read-only root filesystem. Default is false. 
Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.runAsUser** (int64)\n\n    The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.runAsNonRoot** (boolean)\n\n    Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.\n\n  - **securityContext.runAsGroup** (int64)\n\n    The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.\n\n  - **securityContext.seLinuxOptions** (SELinuxOptions)\n\n    The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container.  May also be set in PodSecurityContext.  If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"SELinuxOptions\"><\/a>\n    *SELinuxOptions are the labels to be applied to the container*\n\n    - **securityContext.seLinuxOptions.level** (string)\n\n      Level is SELinux level label that applies to the container.\n\n    - **securityContext.seLinuxOptions.role** (string)\n\n      Role is a SELinux role label that applies to the container.\n\n    - **securityContext.seLinuxOptions.type** (string)\n\n      Type is a SELinux type label that applies to the container.\n\n    - **securityContext.seLinuxOptions.user** (string)\n\n      User is a SELinux user label that applies to the container.\n\n  - **securityContext.seccompProfile** (SeccompProfile)\n\n    The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.\n\n    <a name=\"SeccompProfile\"><\/a>\n    *SeccompProfile defines a pod\/container's seccomp profile settings. Only one profile source may be set.*\n\n    - **securityContext.seccompProfile.type** (string), required\n\n      type indicates which kind of seccomp profile will be applied. Valid options are:\n      \n      Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.\n\n    - **securityContext.seccompProfile.localhostProfile** (string)\n\n      localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is \"Localhost\". Must NOT be set for any other type.\n\n  - **securityContext.windowsOptions** (WindowsSecurityContextOptions)\n\n    The Windows specific settings applied to all containers. 
If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.\n\n    <a name=\"WindowsSecurityContextOptions\"><\/a>\n    *WindowsSecurityContextOptions contain Windows-specific options and credentials.*\n\n    - **securityContext.windowsOptions.gmsaCredentialSpec** (string)\n\n      GMSACredentialSpec is where the GMSA admission webhook (https:\/\/github.com\/kubernetes-sigs\/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.\n\n    - **securityContext.windowsOptions.gmsaCredentialSpecName** (string)\n\n      GMSACredentialSpecName is the name of the GMSA credential spec to use.\n\n    - **securityContext.windowsOptions.hostProcess** (boolean)\n\n      HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.\n\n    - **securityContext.windowsOptions.runAsUserName** (string)\n\n      The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.\n\n### Not allowed\n\n\n- **ports** ([]ContainerPort)\n\n  *Patch strategy: merge on key `containerPort`*\n  \n  *Map: unique values on keys `containerPort, protocol` will be kept during a merge*\n  \n  Ports are not allowed for ephemeral containers.\n\n  <a name=\"ContainerPort\"><\/a>\n  *ContainerPort represents a network port in a single container.*\n\n  - **ports.containerPort** (int32), required\n\n    Number of port to expose on the pod's IP address. This must be a valid port number, 0 \\< x \\< 65536.\n\n  - **ports.hostIP** (string)\n\n    What host IP to bind the external port to.\n\n  - **ports.hostPort** (int32)\n\n    Number of port to expose on the host. If specified, this must be a valid port number, 0 \\< x \\< 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.\n\n  - **ports.name** (string)\n\n    If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.\n\n  - **ports.protocol** (string)\n\n    Protocol for port. Must be UDP, TCP, or SCTP. Defaults to \"TCP\".\n\n- **resources** (ResourceRequirements)\n\n  Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod.\n\n  <a name=\"ResourceRequirements\"><\/a>\n  *ResourceRequirements describes the compute resource requirements.*\n\n  - **resources.claims** ([]ResourceClaim)\n\n    *Map: unique values on key name will be kept during a merge*\n    \n    Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container.\n    \n    This is an alpha field and requires enabling the DynamicResourceAllocation feature gate.\n    \n    This field is immutable. 
It can only be set for containers.\n\n    <a name=\"ResourceClaim\"><\/a>\n    *ResourceClaim references one entry in PodSpec.ResourceClaims.*\n\n    - **resources.claims.name** (string), required\n\n      Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.\n\n    - **resources.claims.request** (string)\n\n      Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request.\n\n  - **resources.limits** (map[string]<a href=\"\">Quantity<\/a>)\n\n    Limits describes the maximum amount of compute resources allowed. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n  - **resources.requests** (map[string]<a href=\"\">Quantity<\/a>)\n\n    Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n- **lifecycle** (Lifecycle)\n\n  Lifecycle is not allowed for ephemeral containers.\n\n  <a name=\"Lifecycle\"><\/a>\n  *Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.*\n\n  - **lifecycle.postStart** (<a href=\"\">LifecycleHandler<\/a>)\n\n    PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/container-lifecycle-hooks\/#container-hooks\n\n  - **lifecycle.preStop** (<a href=\"\">LifecycleHandler<\/a>)\n\n    PreStop is called immediately before a container is terminated due to an API request or management event such as liveness\/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/container-lifecycle-hooks\/#container-hooks\n\n- **livenessProbe** (<a href=\"\">Probe<\/a>)\n\n  Probes are not allowed for ephemeral containers.\n\n- **readinessProbe** (<a href=\"\">Probe<\/a>)\n\n  Probes are not allowed for ephemeral containers.\n\n- **startupProbe** (<a href=\"\">Probe<\/a>)\n\n  Probes are not allowed for ephemeral containers.\n\n\n\n## LifecycleHandler {#LifecycleHandler}\n\nLifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket, must be specified.\n\n<hr>\n\n- **exec** (ExecAction)\n\n  Exec specifies the action to take.\n\n  <a name=\"ExecAction\"><\/a>\n  *ExecAction describes a \"run in container\" action.*\n\n  - **exec.command** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    Command is the command line to execute inside the container; the working directory for the command is root ('\/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell.
Exit status of 0 is treated as live\/healthy and non-zero is unhealthy.\n\n- **httpGet** (HTTPGetAction)\n\n  HTTPGet specifies the http request to perform.\n\n  <a name=\"HTTPGetAction\"><\/a>\n  *HTTPGetAction describes an action based on HTTP Get requests.*\n\n  - **httpGet.port** (IntOrString), required\n\n    Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.\n\n    <a name=\"IntOrString\"><\/a>\n    *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n  - **httpGet.host** (string)\n\n    Host name to connect to, defaults to the pod IP. You probably want to set \"Host\" in httpHeaders instead.\n\n  - **httpGet.httpHeaders** ([]HTTPHeader)\n\n    *Atomic: will be replaced during a merge*\n    \n    Custom headers to set in the request. HTTP allows repeated headers.\n\n    <a name=\"HTTPHeader\"><\/a>\n    *HTTPHeader describes a custom header to be used in HTTP probes*\n\n    - **httpGet.httpHeaders.name** (string), required\n\n      The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header.\n\n    - **httpGet.httpHeaders.value** (string), required\n\n      The header field value\n\n  - **httpGet.path** (string)\n\n    Path to access on the HTTP server.\n\n  - **httpGet.scheme** (string)\n\n    Scheme to use for connecting to the host. Defaults to HTTP.\n\n- **sleep** (SleepAction)\n\n  Sleep represents the duration that the container should sleep before being terminated.\n\n  <a name=\"SleepAction\"><\/a>\n  *SleepAction describes a \"sleep\" action.*\n\n  - **sleep.seconds** (int64), required\n\n    Seconds is the number of seconds to sleep.\n\n- **tcpSocket** (TCPSocketAction)\n\n  Deprecated. 
TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a TCP handler is specified.\n\n  <a name=\"TCPSocketAction\"><\/a>\n  *TCPSocketAction describes an action based on opening a socket*\n\n  - **tcpSocket.port** (IntOrString), required\n\n    Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.\n\n    <a name=\"IntOrString\"><\/a>\n    *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n  - **tcpSocket.host** (string)\n\n    Optional: Host name to connect to, defaults to the pod IP.\n\n\n\n\n\n## NodeAffinity {#NodeAffinity}\n\nNode affinity is a group of node affinity scheduling rules.\n\n<hr>\n\n- **preferredDuringSchedulingIgnoredDuringExecution** ([]PreferredSchedulingTerm)\n\n  *Atomic: will be replaced during a merge*\n  \n  The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.\n\n  <a name=\"PreferredSchedulingTerm\"><\/a>\n  *An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e.
is also a no-op).*\n\n  - **preferredDuringSchedulingIgnoredDuringExecution.preference** (NodeSelectorTerm), required\n\n    A node selector term, associated with the corresponding weight.\n\n    <a name=\"NodeSelectorTerm\"><\/a>\n    *A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.*\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.preference.matchExpressions** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n      *Atomic: will be replaced during a merge*\n      \n      A list of node selector requirements by node's labels.\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.preference.matchFields** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n      *Atomic: will be replaced during a merge*\n      \n      A list of node selector requirements by node's fields.\n\n  - **preferredDuringSchedulingIgnoredDuringExecution.weight** (int32), required\n\n    Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.\n\n- **requiredDuringSchedulingIgnoredDuringExecution** (NodeSelector)\n\n  If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.\n\n  <a name=\"NodeSelector\"><\/a>\n  *A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.*\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms** ([]NodeSelectorTerm), required\n\n    *Atomic: will be replaced during a merge*\n    \n    Required. A list of node selector terms. 
The terms are ORed.\n\n    <a name=\"NodeSelectorTerm\"><\/a>\n    *A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.*\n\n    - **requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n      *Atomic: will be replaced during a merge*\n      \n      A list of node selector requirements by node's labels.\n\n    - **requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n      *Atomic: will be replaced during a merge*\n      \n      A list of node selector requirements by node's fields.\n\n\n\n\n\n## PodAffinity {#PodAffinity}\n\nPod affinity is a group of inter pod affinity scheduling rules.\n\n<hr>\n\n- **preferredDuringSchedulingIgnoredDuringExecution** ([]WeightedPodAffinityTerm)\n\n  *Atomic: will be replaced during a merge*\n  \n  The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.\n\n  <a name=\"WeightedPodAffinityTerm\"><\/a>\n  *The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)*\n\n  - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm** (PodAffinityTerm), required\n\n    Required.
A pod affinity term, associated with the corresponding weight.\n\n    <a name=\"PodAffinityTerm\"><\/a>\n    *Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running*\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey** (string), required\n\n      This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.matchLabelKeys** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods that will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.mismatchLabelKeys** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods that will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means \"this pod's namespace\". An empty selector ({}) matches all namespaces.\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaces** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector.
null or empty namespaces list and null namespaceSelector means \"this pod's namespace\".\n\n  - **preferredDuringSchedulingIgnoredDuringExecution.weight** (int32), required\n\n    weight associated with matching the corresponding podAffinityTerm, in the range 1-100.\n\n- **requiredDuringSchedulingIgnoredDuringExecution** ([]PodAffinityTerm)\n\n  *Atomic: will be replaced during a merge*\n  \n  If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.\n\n  <a name=\"PodAffinityTerm\"><\/a>\n  *Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running*\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.topologyKey** (string), required\n\n    This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.labelSelector** (<a href=\"\">LabelSelector<\/a>)\n\n    A label query over a set of resources, in this case pods. 
If it's null, this PodAffinityTerm matches with no Pods.\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.matchLabelKeys** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods that will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.mismatchLabelKeys** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods that will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n    A label query over the set of namespaces that the term applies to.
The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means \"this pod's namespace\". An empty selector ({}) matches all namespaces.\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.namespaces** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means \"this pod's namespace\".\n\n\n\n\n\n## PodAntiAffinity {#PodAntiAffinity}\n\nPod anti affinity is a group of inter pod anti affinity scheduling rules.\n\n<hr>\n\n- **preferredDuringSchedulingIgnoredDuringExecution** ([]WeightedPodAffinityTerm)\n\n  *Atomic: will be replaced during a merge*\n  \n  The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding \"weight\" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.\n\n  <a name=\"WeightedPodAffinityTerm\"><\/a>\n  *The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)*\n\n  - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm** (PodAffinityTerm), required\n\n    Required.
A pod affinity term, associated with the corresponding weight.\n\n    <a name=\"PodAffinityTerm\"><\/a>\n    *Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running*\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey** (string), required\n\n      This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.matchLabelKeys** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels, and those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods that will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. 
This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.mismatchLabelKeys** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels, and those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods that will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means \"this pod's namespace\". An empty selector ({}) matches all namespaces.\n\n    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaces** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. 
null or empty namespaces list and null namespaceSelector means \"this pod's namespace\".\n\n  - **preferredDuringSchedulingIgnoredDuringExecution.weight** (int32), required\n\n    weight associated with matching the corresponding podAffinityTerm, in the range 1-100.\n\n- **requiredDuringSchedulingIgnoredDuringExecution** ([]PodAffinityTerm)\n\n  *Atomic: will be replaced during a merge*\n  \n  If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.\n\n  <a name=\"PodAffinityTerm\"><\/a>\n  *Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running*\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.topologyKey** (string), required\n\n    This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.labelSelector** (<a href=\"\">LabelSelector<\/a>)\n\n    A label query over a set of resources, in this case pods. 
If it's null, this PodAffinityTerm matches with no Pods.\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.matchLabelKeys** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels, and those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods that will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.mismatchLabelKeys** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels, and those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods that will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n    A label query over the set of namespaces that the term applies to. 
The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means \"this pod's namespace\". An empty selector ({}) matches all namespaces.\n\n  - **requiredDuringSchedulingIgnoredDuringExecution.namespaces** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means \"this pod's namespace\".\n\n\n\n\n\n## Probe {#Probe}\n\nProbe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.\n\n<hr>\n\n- **exec** (ExecAction)\n\n  Exec specifies the action to take.\n\n  <a name=\"ExecAction\"><\/a>\n  *ExecAction describes a \"run in container\" action.*\n\n  - **exec.command** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    Command is the command line to execute inside the container, the working directory for the command  is root ('\/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live\/healthy and non-zero is unhealthy.\n\n- **httpGet** (HTTPGetAction)\n\n  HTTPGet specifies the http request to perform.\n\n  <a name=\"HTTPGetAction\"><\/a>\n  *HTTPGetAction describes an action based on HTTP Get requests.*\n\n  - **httpGet.port** (IntOrString), required\n\n    Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.\n\n    <a name=\"IntOrString\"><\/a>\n    *IntOrString is a type that can hold an int32 or a string.  
When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n  - **httpGet.host** (string)\n\n    Host name to connect to, defaults to the pod IP. You probably want to set \"Host\" in httpHeaders instead.\n\n  - **httpGet.httpHeaders** ([]HTTPHeader)\n\n    *Atomic: will be replaced during a merge*\n    \n    Custom headers to set in the request. HTTP allows repeated headers.\n\n    <a name=\"HTTPHeader\"><\/a>\n    *HTTPHeader describes a custom header to be used in HTTP probes*\n\n    - **httpGet.httpHeaders.name** (string), required\n\n      The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header.\n\n    - **httpGet.httpHeaders.value** (string), required\n\n      The header field value\n\n  - **httpGet.path** (string)\n\n    Path to access on the HTTP server.\n\n  - **httpGet.scheme** (string)\n\n    Scheme to use for connecting to the host. Defaults to HTTP.\n\n- **tcpSocket** (TCPSocketAction)\n\n  TCPSocket specifies an action involving a TCP port.\n\n  <a name=\"TCPSocketAction\"><\/a>\n  *TCPSocketAction describes an action based on opening a socket*\n\n  - **tcpSocket.port** (IntOrString), required\n\n    Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.\n\n    <a name=\"IntOrString\"><\/a>\n    *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n  - **tcpSocket.host** (string)\n\n    Optional: Host name to connect to, defaults to the pod IP.\n\n- **initialDelaySeconds** (int32)\n\n  Number of seconds after the container has started before liveness probes are initiated. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\n\n- **terminationGracePeriodSeconds** (int64)\n\n  Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds between the time the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling the ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset.\n\n- **periodSeconds** (int32)\n\n  How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.\n\n- **timeoutSeconds** (int32)\n\n  Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\n\n- **failureThreshold** (int32)\n\n  Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.\n\n- **successThreshold** (int32)\n\n  Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.\n\n- **grpc** (GRPCAction)\n\n  GRPC specifies an action involving a GRPC port.\n\n  <a name=\"GRPCAction\"><\/a>\n  **\n\n  - **grpc.port** (int32), required\n\n    Port number of the gRPC service. 
Number must be in the range 1 to 65535.\n\n  - **grpc.service** (string)\n\n    Service is the name of the service to place in the gRPC HealthCheckRequest (see https:\/\/github.com\/grpc\/grpc\/blob\/master\/doc\/health-checking.md).\n    \n    If this is not specified, the default behavior is defined by gRPC.\n\n\n\n\n\n## PodStatus {#PodStatus}\n\nPodStatus represents information about the status of a pod. Status may trail the actual state of a system, especially if the node that hosts the pod cannot contact the control plane.\n\n<hr>\n\n- **nominatedNodeName** (string)\n\n  nominatedNodeName is set only when this pod preempts other pods on the node, but it cannot be scheduled right away as preemption victims receive their graceful termination periods. This field does not guarantee that the pod will be scheduled on this node. The scheduler may decide to place the pod elsewhere if other nodes become available sooner. The scheduler may also decide to give the resources on this node to a higher priority pod that is created after preemption. As a result, this field may be different from PodSpec.nodeName when the pod is scheduled.\n\n- **hostIP** (string)\n\n  hostIP holds the IP address of the host to which the pod is assigned. Empty if the pod has not started yet. A pod can be assigned to a node whose kubelet is having problems, which means that hostIP will not be updated even though a node is assigned to the pod.\n\n- **hostIPs** ([]HostIP)\n\n  *Patch strategy: merge on key `ip`*\n  \n  *Atomic: will be replaced during a merge*\n  \n  hostIPs holds the IP addresses allocated to the host. If this field is specified, the first entry must match the hostIP field. This list is empty if the pod has not started yet. 
A pod can be assigned to a node whose kubelet is having problems, which means that hostIPs will not be updated even though a node is assigned to this pod.\n\n  <a name=\"HostIP\"><\/a>\n  *HostIP represents a single IP address allocated to the host.*\n\n  - **hostIPs.ip** (string), required\n\n    IP is the IP address assigned to the host\n\n- **startTime** (Time)\n\n  RFC 3339 date and time at which the object was acknowledged by the Kubelet. This is before the Kubelet pulled the container image(s) for the pod.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **phase** (string)\n\n  The phase of a Pod is a simple, high-level summary of where the Pod is in its lifecycle. The conditions array, the reason and message fields, and the individual container status arrays contain more detail about the pod's status. There are five possible phase values:\n  \n  - Pending: The pod has been accepted by the Kubernetes system, but one or more of the container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while.\n  - Running: The pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.\n  - Succeeded: All containers in the pod have terminated in success, and will not be restarted.\n  - Failed: All containers in the pod have terminated, and at least one container has terminated in failure. The container either exited with non-zero status or was terminated by the system.\n  - Unknown: For some reason the state of the pod could not be obtained, typically due to an error in communicating with the host of the pod.\n  \n  More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#pod-phase\n\n- **message** (string)\n\n  A human-readable message indicating details about why the pod is in this condition.\n\n- **reason** (string)\n\n  A brief CamelCase message indicating details about why the pod is in this state, e.g. 'Evicted'.\n\n- **podIP** (string)\n\n  podIP address allocated to the pod. Routable at least within the cluster. Empty if not yet allocated.\n\n- **podIPs** ([]PodIP)\n\n  *Patch strategy: merge on key `ip`*\n  \n  *Map: unique values on key ip will be kept during a merge*\n  \n  podIPs holds the IP addresses allocated to the pod. If this field is specified, the 0th entry must match the podIP field. Pods may be allocated at most 1 value for each of IPv4 and IPv6. This list is empty if no IPs have been allocated yet.\n\n  <a name=\"PodIP\"><\/a>\n  *PodIP represents a single IP address allocated to the pod.*\n\n  - **podIPs.ip** (string), required\n\n    IP is the IP address assigned to the pod\n\n- **conditions** ([]PodCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  Current service state of pod. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#pod-conditions\n\n  <a name=\"PodCondition\"><\/a>\n  *PodCondition contains details for the current condition of this pod.*\n\n  - **conditions.status** (string), required\n\n    Status is the status of the condition. Can be True, False, Unknown. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#pod-conditions\n\n  - **conditions.type** (string), required\n\n    Type is the type of the condition. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#pod-conditions\n\n  - **conditions.lastProbeTime** (Time)\n\n    Last time we probed the condition.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.lastTransitionTime** (Time)\n\n    Last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    Human-readable message indicating details about last transition.\n\n  - **conditions.reason** (string)\n\n    Unique, one-word, CamelCase reason for the condition's last transition.\n\n- **qosClass** (string)\n\n  The Quality of Service (QOS) classification assigned to the pod based on resource requirements. See the PodQOSClass type for available QOS classes. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-qos\/#quality-of-service-classes\n\n- **initContainerStatuses** ([]ContainerStatus)\n\n  *Atomic: will be replaced during a merge*\n  \n  The list has one entry per init container in the manifest. The most recent successful init container will have ready = true, the most recently started container will have startTime set. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#pod-and-container-status\n\n  <a name=\"ContainerStatus\"><\/a>\n  *ContainerStatus contains details for the current status of this container.*\n\n  - **initContainerStatuses.allocatedResources** (map[string]<a href=\"\">Quantity<\/a>)\n\n    AllocatedResources represents the compute resources allocated for this container by the node. 
Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize.\n\n  - **initContainerStatuses.allocatedResourcesStatus** ([]ResourceStatus)\n\n    *Patch strategy: merge on key `name`*\n    \n    *Map: unique values on key name will be kept during a merge*\n    \n    AllocatedResourcesStatus represents the status of various resources allocated for this Pod.\n\n    <a name=\"ResourceStatus\"><\/a>\n    **\n\n    - **initContainerStatuses.allocatedResourcesStatus.name** (string), required\n\n      Name of the resource. Must be unique within the pod and match one of the resources from the pod spec.\n\n    - **initContainerStatuses.allocatedResourcesStatus.resources** ([]ResourceHealth)\n\n      *Map: unique values on key resourceID will be kept during a merge*\n      \n      List of unique resource health entries. Each element in the list contains a unique resource ID and resource health. At a minimum, ResourceID must uniquely identify the Resource allocated to the Pod on the Node for the lifetime of a Pod. See the ResourceID type for its definition.\n\n      <a name=\"ResourceHealth\"><\/a>\n      *ResourceHealth represents the health of a resource. It has the latest device health information. This is a part of KEP https:\/\/kep.k8s.io\/4680 and historical health changes are planned to be added in future iterations of a KEP.*\n\n      - **initContainerStatuses.allocatedResourcesStatus.resources.resourceID** (string), required\n\n        ResourceID is the unique identifier of the resource. See the ResourceID type for more information.\n\n      - **initContainerStatuses.allocatedResourcesStatus.resources.health** (string)\n\n        Health of the resource. Can be one of:\n         - Healthy: operates as normal.\n         - Unhealthy: reported unhealthy. We consider this a temporary health issue, since we do not have a mechanism today to distinguish temporary and permanent issues.\n         - Unknown: The status cannot be determined. For example, the Device Plugin got unregistered and hasn't been re-registered since.\n        \n        In the future we may want to introduce the PermanentlyUnhealthy Status.\n\n  - **initContainerStatuses.containerID** (string)\n\n    ContainerID is the ID of the container in the format '\\<type>:\/\/\\<container_id>', where type is a container runtime identifier returned from the Version call of the CRI API (for example \"containerd\").\n\n  - **initContainerStatuses.image** (string), required\n\n    Image is the name of the container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/images.\n\n  - **initContainerStatuses.imageID** (string), required\n\n    ImageID is the image ID of the container's image. The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime.\n\n  - **initContainerStatuses.lastState** (ContainerState)\n\n    LastTerminationState holds the last termination state of the container to help debug container crashes and restarts. This field is not populated if the container is still running and RestartCount is 0.\n\n    <a name=\"ContainerState\"><\/a>\n    *ContainerState holds a possible state of container. Only one of its members may be specified. 
If none of them is specified, the default one is ContainerStateWaiting.*\n\n    - **initContainerStatuses.lastState.running** (ContainerStateRunning)\n\n      Details about a running container\n\n      <a name=\"ContainerStateRunning\"><\/a>\n      *ContainerStateRunning is a running state of a container.*\n\n      - **initContainerStatuses.lastState.running.startedAt** (Time)\n\n        Time at which the container was last (re-)started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n    - **initContainerStatuses.lastState.terminated** (ContainerStateTerminated)\n\n      Details about a terminated container\n\n      <a name=\"ContainerStateTerminated\"><\/a>\n      *ContainerStateTerminated is a terminated state of a container.*\n\n      - **initContainerStatuses.lastState.terminated.containerID** (string)\n\n        Container's ID in the format '\\<type>:\/\/\\<container_id>'\n\n      - **initContainerStatuses.lastState.terminated.exitCode** (int32), required\n\n        Exit status from the last termination of the container\n\n      - **initContainerStatuses.lastState.terminated.startedAt** (Time)\n\n        Time at which previous execution of the container started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **initContainerStatuses.lastState.terminated.finishedAt** (Time)\n\n        Time at which the container last terminated\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **initContainerStatuses.lastState.terminated.message** (string)\n\n        Message regarding the last termination of the container\n\n      - **initContainerStatuses.lastState.terminated.reason** (string)\n\n        (brief) reason from the last termination of the container\n\n      - **initContainerStatuses.lastState.terminated.signal** (int32)\n\n        Signal from the last termination of the container\n\n    - **initContainerStatuses.lastState.waiting** (ContainerStateWaiting)\n\n      Details about a waiting container\n\n      <a name=\"ContainerStateWaiting\"><\/a>\n      *ContainerStateWaiting is a waiting state of a container.*\n\n      - **initContainerStatuses.lastState.waiting.message** (string)\n\n        Message regarding why the container is not yet running.\n\n      - **initContainerStatuses.lastState.waiting.reason** (string)\n\n        (brief) reason the container is not yet running.\n\n  - **initContainerStatuses.name** (string), required\n\n    Name is a DNS_LABEL representing the unique name of the container. Each container in a pod must have a unique name across all container types. Cannot be updated.\n\n  - **initContainerStatuses.ready** (boolean), required\n\n    Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. 
If no readiness probes are specified, this field defaults to true once the container is fully started (see Started field).\n    \n    The value is typically used to determine whether a container is ready to accept traffic.\n\n  - **initContainerStatuses.resources** (ResourceRequirements)\n\n    Resources represents the compute resource requests and limits that have been successfully enacted on the running container after it has been started or has been successfully resized.\n\n    <a name=\"ResourceRequirements\"><\/a>\n    *ResourceRequirements describes the compute resource requirements.*\n\n    - **initContainerStatuses.resources.claims** ([]ResourceClaim)\n\n      *Map: unique values on key name will be kept during a merge*\n      \n      Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container.\n      \n      This is an alpha field and requires enabling the DynamicResourceAllocation feature gate.\n      \n      This field is immutable. It can only be set for containers.\n\n      <a name=\"ResourceClaim\"><\/a>\n      *ResourceClaim references one entry in PodSpec.ResourceClaims.*\n\n      - **initContainerStatuses.resources.claims.name** (string), required\n\n        Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.\n\n      - **initContainerStatuses.resources.claims.request** (string)\n\n        Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request.\n\n    - **initContainerStatuses.resources.limits** (map[string]<a href=\"\">Quantity<\/a>)\n\n      Limits describes the maximum amount of compute resources allowed. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n    - **initContainerStatuses.resources.requests** (map[string]<a href=\"\">Quantity<\/a>)\n\n      Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n  - **initContainerStatuses.restartCount** (int32), required\n\n    RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts and then the value may be reset to 0. The value is never negative.\n\n  - **initContainerStatuses.started** (boolean)\n\n    Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false.\n\n  - **initContainerStatuses.state** (ContainerState)\n\n    State holds details about the container's current condition.\n\n    <a name=\"ContainerState\"><\/a>\n    *ContainerState holds a possible state of container. Only one of its members may be specified. 
If none of them is specified, the default one is ContainerStateWaiting.*\n\n    - **initContainerStatuses.state.running** (ContainerStateRunning)\n\n      Details about a running container\n\n      <a name=\"ContainerStateRunning\"><\/a>\n      *ContainerStateRunning is a running state of a container.*\n\n      - **initContainerStatuses.state.running.startedAt** (Time)\n\n        Time at which the container was last (re-)started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n    - **initContainerStatuses.state.terminated** (ContainerStateTerminated)\n\n      Details about a terminated container\n\n      <a name=\"ContainerStateTerminated\"><\/a>\n      *ContainerStateTerminated is a terminated state of a container.*\n\n      - **initContainerStatuses.state.terminated.containerID** (string)\n\n        Container's ID in the format '\\<type>:\/\/\\<container_id>'\n\n      - **initContainerStatuses.state.terminated.exitCode** (int32), required\n\n        Exit status from the last termination of the container\n\n      - **initContainerStatuses.state.terminated.startedAt** (Time)\n\n        Time at which previous execution of the container started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **initContainerStatuses.state.terminated.finishedAt** (Time)\n\n        Time at which the container last terminated\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **initContainerStatuses.state.terminated.message** (string)\n\n        Message regarding the last termination of the container\n\n      - **initContainerStatuses.state.terminated.reason** (string)\n\n        (brief) reason from the last termination of the container\n\n      - **initContainerStatuses.state.terminated.signal** (int32)\n\n        Signal from the last termination of the container\n\n    - **initContainerStatuses.state.waiting** (ContainerStateWaiting)\n\n      Details about a waiting container\n\n      <a name=\"ContainerStateWaiting\"><\/a>\n      *ContainerStateWaiting is a waiting state of a container.*\n\n      - **initContainerStatuses.state.waiting.message** (string)\n\n        Message regarding why the container is not yet running.\n\n      - **initContainerStatuses.state.waiting.reason** (string)\n\n        (brief) reason the container is not yet running.\n\n  - **initContainerStatuses.user** (ContainerUser)\n\n    User represents user identity information initially attached to the first process of the container\n\n    <a name=\"ContainerUser\"><\/a>\n    *ContainerUser represents user identity information*\n\n    - **initContainerStatuses.user.linux** (LinuxContainerUser)\n\n      Linux holds user identity information initially attached to the first process of the containers in Linux. 
Note that the actual running identity can be changed if the process has enough privilege to do so.\n\n      <a name=\"LinuxContainerUser\"><\/a>\n      *LinuxContainerUser represents user identity information in Linux containers*\n\n      - **initContainerStatuses.user.linux.gid** (int64), required\n\n        GID is the primary gid initially attached to the first process in the container\n\n      - **initContainerStatuses.user.linux.uid** (int64), required\n\n        UID is the primary uid initially attached to the first process in the container\n\n      - **initContainerStatuses.user.linux.supplementalGroups** ([]int64)\n\n        *Atomic: will be replaced during a merge*\n        \n        SupplementalGroups are the supplemental groups initially attached to the first process in the container\n\n  - **initContainerStatuses.volumeMounts** ([]VolumeMountStatus)\n\n    *Patch strategy: merge on key `mountPath`*\n    \n    *Map: unique values on key mountPath will be kept during a merge*\n    \n    Status of volume mounts.\n\n    <a name=\"VolumeMountStatus\"><\/a>\n    *VolumeMountStatus shows status of volume mounts.*\n\n    - **initContainerStatuses.volumeMounts.mountPath** (string), required\n\n      MountPath corresponds to the original VolumeMount.\n\n    - **initContainerStatuses.volumeMounts.name** (string), required\n\n      Name corresponds to the name of the original VolumeMount.\n\n    - **initContainerStatuses.volumeMounts.readOnly** (boolean)\n\n      ReadOnly corresponds to the original VolumeMount.\n\n    - **initContainerStatuses.volumeMounts.recursiveReadOnly** (string)\n\n      RecursiveReadOnly must be set to Disabled, Enabled, or unspecified (for non-readonly mounts). An IfPossible value in the original VolumeMount must be translated to Disabled or Enabled, depending on the mount result.\n\n- **containerStatuses** ([]ContainerStatus)\n\n  *Atomic: will be replaced during a merge*\n  \n  The list has one entry per container in the manifest. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#pod-and-container-status\n\n  <a name=\"ContainerStatus\"><\/a>\n  *ContainerStatus contains details for the current status of this container.*\n\n  - **containerStatuses.allocatedResources** (map[string]<a href=\"\">Quantity<\/a>)\n\n    AllocatedResources represents the compute resources allocated for this container by the node. Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize.\n\n  - **containerStatuses.allocatedResourcesStatus** ([]ResourceStatus)\n\n    *Patch strategy: merge on key `name`*\n    \n    *Map: unique values on key name will be kept during a merge*\n    \n    AllocatedResourcesStatus represents the status of various resources allocated for this Pod.\n\n    <a name=\"ResourceStatus\"><\/a>\n    *ResourceStatus represents the status of a single resource allocated to a Pod.*\n\n    - **containerStatuses.allocatedResourcesStatus.name** (string), required\n\n      Name of the resource. Must be unique within the pod and match one of the resources from the pod spec.\n\n    - **containerStatuses.allocatedResourcesStatus.resources** ([]ResourceHealth)\n\n      *Map: unique values on key resourceID will be kept during a merge*\n      \n      List of unique Resources health. Each element in the list contains a unique resource ID and resource health. At a minimum, ResourceID must uniquely identify the Resource allocated to the Pod on the Node for the lifetime of a Pod. See the ResourceID type for its definition.\n\n      <a name=\"ResourceHealth\"><\/a>\n      *ResourceHealth represents the health of a resource. It has the latest device health information. This is a part of KEP https:\/\/kep.k8s.io\/4680 and historical health changes are planned to be added in future iterations of a KEP.*\n\n      - **containerStatuses.allocatedResourcesStatus.resources.resourceID** (string), required\n\n        ResourceID is the unique identifier of the resource. 
See the ResourceID type for more information.\n\n      - **containerStatuses.allocatedResourcesStatus.resources.health** (string)\n\n        Health of the resource. Can be one of:\n         - Healthy: operates as normal\n         - Unhealthy: reported unhealthy. We consider this a temporary health issue\n                      since we do not have a mechanism today to distinguish\n                      temporary and permanent issues.\n         - Unknown: The status cannot be determined.\n                    For example, Device Plugin got unregistered and hasn't been re-registered since.\n        \n        In the future we may want to introduce the PermanentlyUnhealthy status.\n\n  - **containerStatuses.containerID** (string)\n\n    ContainerID is the ID of the container in the format '\\<type>:\/\/\\<container_id>', where type is a container runtime identifier returned from the Version call of the CRI API (for example, \"containerd\").\n\n  - **containerStatuses.image** (string), required\n\n    Image is the name of the container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/images.\n\n  - **containerStatuses.imageID** (string), required\n\n    ImageID is the image ID of the container's image. The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime.\n\n  - **containerStatuses.lastState** (ContainerState)\n\n    LastTerminationState holds the last termination state of the container to help debug container crashes and restarts. This field is not populated if the container is still running and RestartCount is 0.\n\n    <a name=\"ContainerState\"><\/a>\n    *ContainerState holds a possible state of container. Only one of its members may be specified. 
If none of them is specified, the default one is ContainerStateWaiting.*\n\n    - **containerStatuses.lastState.running** (ContainerStateRunning)\n\n      Details about a running container\n\n      <a name=\"ContainerStateRunning\"><\/a>\n      *ContainerStateRunning is a running state of a container.*\n\n      - **containerStatuses.lastState.running.startedAt** (Time)\n\n        Time at which the container was last (re-)started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n    - **containerStatuses.lastState.terminated** (ContainerStateTerminated)\n\n      Details about a terminated container\n\n      <a name=\"ContainerStateTerminated\"><\/a>\n      *ContainerStateTerminated is a terminated state of a container.*\n\n      - **containerStatuses.lastState.terminated.containerID** (string)\n\n        Container's ID in the format '\\<type>:\/\/\\<container_id>'\n\n      - **containerStatuses.lastState.terminated.exitCode** (int32), required\n\n        Exit status from the last termination of the container\n\n      - **containerStatuses.lastState.terminated.startedAt** (Time)\n\n        Time at which previous execution of the container started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **containerStatuses.lastState.terminated.finishedAt** (Time)\n\n        Time at which the container last terminated\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **containerStatuses.lastState.terminated.message** (string)\n\n        Message regarding the last termination of the container\n\n      - **containerStatuses.lastState.terminated.reason** (string)\n\n        (brief) reason from the last termination of the container\n\n      - **containerStatuses.lastState.terminated.signal** (int32)\n\n        Signal from the last termination of the container\n\n    - **containerStatuses.lastState.waiting** (ContainerStateWaiting)\n\n      Details about a waiting container\n\n      <a name=\"ContainerStateWaiting\"><\/a>\n      *ContainerStateWaiting is a waiting state of a container.*\n\n      - **containerStatuses.lastState.waiting.message** (string)\n\n        Message regarding why the container is not yet running.\n\n      - **containerStatuses.lastState.waiting.reason** (string)\n\n        (brief) reason the container is not yet running.\n\n  - **containerStatuses.name** (string), required\n\n    Name is a DNS_LABEL representing the unique name of the container. Each container in a pod must have a unique name across all container types. Cannot be updated.\n\n  - **containerStatuses.ready** (boolean), required\n\n    Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. 
If no readiness probes are specified, this field defaults to true once the container is fully started (see Started field).\n    \n    The value is typically used to determine whether a container is ready to accept traffic.\n\n  - **containerStatuses.resources** (ResourceRequirements)\n\n    Resources represents the compute resource requests and limits that have been successfully enacted on the running container after it has been started or has been successfully resized.\n\n    <a name=\"ResourceRequirements\"><\/a>\n    *ResourceRequirements describes the compute resource requirements.*\n\n    - **containerStatuses.resources.claims** ([]ResourceClaim)\n\n      *Map: unique values on key name will be kept during a merge*\n      \n      Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container.\n      \n      This is an alpha field and requires enabling the DynamicResourceAllocation feature gate.\n      \n      This field is immutable. It can only be set for containers.\n\n      <a name=\"ResourceClaim\"><\/a>\n      *ResourceClaim references one entry in PodSpec.ResourceClaims.*\n\n      - **containerStatuses.resources.claims.name** (string), required\n\n        Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.\n\n      - **containerStatuses.resources.claims.request** (string)\n\n        Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request.\n\n    - **containerStatuses.resources.limits** (map[string]<a href=\"\">Quantity<\/a>)\n\n      Limits describes the maximum amount of compute resources allowed. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n    - **containerStatuses.resources.requests** (map[string]<a href=\"\">Quantity<\/a>)\n\n      Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n  - **containerStatuses.restartCount** (int32), required\n\n    RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts and then the value may be reset to 0. The value is never negative.\n\n  - **containerStatuses.started** (boolean)\n\n    Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false.\n\n  - **containerStatuses.state** (ContainerState)\n\n    State holds details about the container's current condition.\n\n    <a name=\"ContainerState\"><\/a>\n    *ContainerState holds a possible state of container. Only one of its members may be specified. 
If none of them is specified, the default one is ContainerStateWaiting.*\n\n    - **containerStatuses.state.running** (ContainerStateRunning)\n\n      Details about a running container\n\n      <a name=\"ContainerStateRunning\"><\/a>\n      *ContainerStateRunning is a running state of a container.*\n\n      - **containerStatuses.state.running.startedAt** (Time)\n\n        Time at which the container was last (re-)started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n    - **containerStatuses.state.terminated** (ContainerStateTerminated)\n\n      Details about a terminated container\n\n      <a name=\"ContainerStateTerminated\"><\/a>\n      *ContainerStateTerminated is a terminated state of a container.*\n\n      - **containerStatuses.state.terminated.containerID** (string)\n\n        Container's ID in the format '\\<type>:\/\/\\<container_id>'\n\n      - **containerStatuses.state.terminated.exitCode** (int32), required\n\n        Exit status from the last termination of the container\n\n      - **containerStatuses.state.terminated.startedAt** (Time)\n\n        Time at which previous execution of the container started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **containerStatuses.state.terminated.finishedAt** (Time)\n\n        Time at which the container last terminated\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **containerStatuses.state.terminated.message** (string)\n\n        Message regarding the last termination of the container\n\n      - **containerStatuses.state.terminated.reason** (string)\n\n        (brief) reason from the last termination of the container\n\n      - **containerStatuses.state.terminated.signal** (int32)\n\n        Signal from the last termination of the container\n\n    - **containerStatuses.state.waiting** (ContainerStateWaiting)\n\n      Details about a waiting container\n\n      <a name=\"ContainerStateWaiting\"><\/a>\n      *ContainerStateWaiting is a waiting state of a container.*\n\n      - **containerStatuses.state.waiting.message** (string)\n\n        Message regarding why the container is not yet running.\n\n      - **containerStatuses.state.waiting.reason** (string)\n\n        (brief) reason the container is not yet running.\n\n  - **containerStatuses.user** (ContainerUser)\n\n    User represents user identity information initially attached to the first process of the container\n\n    <a name=\"ContainerUser\"><\/a>\n    *ContainerUser represents user identity information*\n\n    - **containerStatuses.user.linux** (LinuxContainerUser)\n\n      Linux holds user identity information initially attached to the first process of the containers in Linux. 
Note that the actual running identity can be changed if the process has enough privilege to do so.\n\n      <a name=\"LinuxContainerUser\"><\/a>\n      *LinuxContainerUser represents user identity information in Linux containers*\n\n      - **containerStatuses.user.linux.gid** (int64), required\n\n        GID is the primary gid initially attached to the first process in the container\n\n      - **containerStatuses.user.linux.uid** (int64), required\n\n        UID is the primary uid initially attached to the first process in the container\n\n      - **containerStatuses.user.linux.supplementalGroups** ([]int64)\n\n        *Atomic: will be replaced during a merge*\n        \n        SupplementalGroups are the supplemental groups initially attached to the first process in the container\n\n  - **containerStatuses.volumeMounts** ([]VolumeMountStatus)\n\n    *Patch strategy: merge on key `mountPath`*\n    \n    *Map: unique values on key mountPath will be kept during a merge*\n    \n    Status of volume mounts.\n\n    <a name=\"VolumeMountStatus\"><\/a>\n    *VolumeMountStatus shows status of volume mounts.*\n\n    - **containerStatuses.volumeMounts.mountPath** (string), required\n\n      MountPath corresponds to the original VolumeMount.\n\n    - **containerStatuses.volumeMounts.name** (string), required\n\n      Name corresponds to the name of the original VolumeMount.\n\n    - **containerStatuses.volumeMounts.readOnly** (boolean)\n\n      ReadOnly corresponds to the original VolumeMount.\n\n    - **containerStatuses.volumeMounts.recursiveReadOnly** (string)\n\n      RecursiveReadOnly must be set to Disabled, Enabled, or unspecified (for non-readonly mounts). 
An IfPossible value in the original VolumeMount must be translated to Disabled or Enabled, depending on the mount result.\n\n- **ephemeralContainerStatuses** ([]ContainerStatus)\n\n  *Atomic: will be replaced during a merge*\n  \n  Status for any ephemeral containers that have run in this pod.\n\n  <a name=\"ContainerStatus\"><\/a>\n  *ContainerStatus contains details for the current status of this container.*\n\n  - **ephemeralContainerStatuses.allocatedResources** (map[string]<a href=\"\">Quantity<\/a>)\n\n    AllocatedResources represents the compute resources allocated for this container by the node. Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize.\n\n  - **ephemeralContainerStatuses.allocatedResourcesStatus** ([]ResourceStatus)\n\n    *Patch strategy: merge on key `name`*\n    \n    *Map: unique values on key name will be kept during a merge*\n    \n    AllocatedResourcesStatus represents the status of various resources allocated for this Pod.\n\n    <a name=\"ResourceStatus\"><\/a>\n    *ResourceStatus represents the status of a single resource allocated to a Pod.*\n\n    - **ephemeralContainerStatuses.allocatedResourcesStatus.name** (string), required\n\n      Name of the resource. Must be unique within the pod and match one of the resources from the pod spec.\n\n    - **ephemeralContainerStatuses.allocatedResourcesStatus.resources** ([]ResourceHealth)\n\n      *Map: unique values on key resourceID will be kept during a merge*\n      \n      List of unique Resources health. Each element in the list contains a unique resource ID and resource health. At a minimum, ResourceID must uniquely identify the Resource allocated to the Pod on the Node for the lifetime of a Pod. See the ResourceID type for its definition.\n\n      <a name=\"ResourceHealth\"><\/a>\n      *ResourceHealth represents the health of a resource. It has the latest device health information. 
This is a part of KEP https:\/\/kep.k8s.io\/4680 and historical health changes are planned to be added in future iterations of a KEP.*\n\n      - **ephemeralContainerStatuses.allocatedResourcesStatus.resources.resourceID** (string), required\n\n        ResourceID is the unique identifier of the resource. See the ResourceID type for more information.\n\n      - **ephemeralContainerStatuses.allocatedResourcesStatus.resources.health** (string)\n\n        Health of the resource. Can be one of:\n         - Healthy: operates as normal\n         - Unhealthy: reported unhealthy. We consider this a temporary health issue\n                      since we do not have a mechanism today to distinguish\n                      temporary and permanent issues.\n         - Unknown: The status cannot be determined.\n                    For example, Device Plugin got unregistered and hasn't been re-registered since.\n        \n        In the future we may want to introduce the PermanentlyUnhealthy status.\n\n  - **ephemeralContainerStatuses.containerID** (string)\n\n    ContainerID is the ID of the container in the format '\\<type>:\/\/\\<container_id>', where type is a container runtime identifier returned from the Version call of the CRI API (for example, \"containerd\").\n\n  - **ephemeralContainerStatuses.image** (string), required\n\n    Image is the name of the container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/images.\n\n  - **ephemeralContainerStatuses.imageID** (string), required\n\n    ImageID is the image ID of the container's image. 
The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime.\n\n  - **ephemeralContainerStatuses.lastState** (ContainerState)\n\n    LastTerminationState holds the last termination state of the container to help debug container crashes and restarts. This field is not populated if the container is still running and RestartCount is 0.\n\n    <a name=\"ContainerState\"><\/a>\n    *ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting.*\n\n    - **ephemeralContainerStatuses.lastState.running** (ContainerStateRunning)\n\n      Details about a running container\n\n      <a name=\"ContainerStateRunning\"><\/a>\n      *ContainerStateRunning is a running state of a container.*\n\n      - **ephemeralContainerStatuses.lastState.running.startedAt** (Time)\n\n        Time at which the container was last (re-)started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n    - **ephemeralContainerStatuses.lastState.terminated** (ContainerStateTerminated)\n\n      Details about a terminated container\n\n      <a name=\"ContainerStateTerminated\"><\/a>\n      *ContainerStateTerminated is a terminated state of a container.*\n\n      - **ephemeralContainerStatuses.lastState.terminated.containerID** (string)\n\n        Container's ID in the format '\\<type>:\/\/\\<container_id>'\n\n      - **ephemeralContainerStatuses.lastState.terminated.exitCode** (int32), required\n\n        Exit status from the last termination of the container\n\n      - **ephemeralContainerStatuses.lastState.terminated.startedAt** (Time)\n\n        Time at which previous execution of the container started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **ephemeralContainerStatuses.lastState.terminated.finishedAt** (Time)\n\n        Time at which the container last terminated\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **ephemeralContainerStatuses.lastState.terminated.message** (string)\n\n        Message regarding the last termination of the container\n\n      - **ephemeralContainerStatuses.lastState.terminated.reason** (string)\n\n        (brief) reason from the last termination of the container\n\n      - **ephemeralContainerStatuses.lastState.terminated.signal** (int32)\n\n        Signal from the last termination of the container\n\n    - **ephemeralContainerStatuses.lastState.waiting** (ContainerStateWaiting)\n\n      Details about a waiting container\n\n      <a name=\"ContainerStateWaiting\"><\/a>\n      *ContainerStateWaiting is a waiting state of a container.*\n\n      - **ephemeralContainerStatuses.lastState.waiting.message** (string)\n\n        Message regarding why the container is not yet running.\n\n      - **ephemeralContainerStatuses.lastState.waiting.reason** (string)\n\n        (brief) reason the container is not yet running.\n\n  - **ephemeralContainerStatuses.name** (string), required\n\n    Name is a DNS_LABEL representing the unique name of the container. Each container in a pod must have a unique name across all container types. Cannot be updated.\n\n  - **ephemeralContainerStatuses.ready** (boolean), required\n\n    Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. 
If no readiness probes are specified, this field defaults to true once the container is fully started (see Started field).\n    \n    The value is typically used to determine whether a container is ready to accept traffic.\n\n  - **ephemeralContainerStatuses.resources** (ResourceRequirements)\n\n    Resources represents the compute resource requests and limits that have been successfully enacted on the running container after it has been started or has been successfully resized.\n\n    <a name=\"ResourceRequirements\"><\/a>\n    *ResourceRequirements describes the compute resource requirements.*\n\n    - **ephemeralContainerStatuses.resources.claims** ([]ResourceClaim)\n\n      *Map: unique values on key name will be kept during a merge*\n      \n      Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container.\n      \n      This is an alpha field and requires enabling the DynamicResourceAllocation feature gate.\n      \n      This field is immutable. It can only be set for containers.\n\n      <a name=\"ResourceClaim\"><\/a>\n      *ResourceClaim references one entry in PodSpec.ResourceClaims.*\n\n      - **ephemeralContainerStatuses.resources.claims.name** (string), required\n\n        Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.\n\n      - **ephemeralContainerStatuses.resources.claims.request** (string)\n\n        Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request.\n\n    - **ephemeralContainerStatuses.resources.limits** (map[string]<a href=\"\">Quantity<\/a>)\n\n      Limits describes the maximum amount of compute resources allowed. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n    - **ephemeralContainerStatuses.resources.requests** (map[string]<a href=\"\">Quantity<\/a>)\n\n      Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n  - **ephemeralContainerStatuses.restartCount** (int32), required\n\n    RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts and then the value may be reset to 0. The value is never negative.\n\n  - **ephemeralContainerStatuses.started** (boolean)\n\n    Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false.\n\n  - **ephemeralContainerStatuses.state** (ContainerState)\n\n    State holds details about the container's current condition.\n\n    <a name=\"ContainerState\"><\/a>\n    *ContainerState holds a possible state of container. Only one of its members may be specified. 
If none of them is specified, the default one is ContainerStateWaiting.*\n\n    - **ephemeralContainerStatuses.state.running** (ContainerStateRunning)\n\n      Details about a running container\n\n      <a name=\"ContainerStateRunning\"><\/a>\n      *ContainerStateRunning is a running state of a container.*\n\n      - **ephemeralContainerStatuses.state.running.startedAt** (Time)\n\n        Time at which the container was last (re-)started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n    - **ephemeralContainerStatuses.state.terminated** (ContainerStateTerminated)\n\n      Details about a terminated container\n\n      <a name=\"ContainerStateTerminated\"><\/a>\n      *ContainerStateTerminated is a terminated state of a container.*\n\n      - **ephemeralContainerStatuses.state.terminated.containerID** (string)\n\n        Container's ID in the format '\\<type>:\/\/\\<container_id>'\n\n      - **ephemeralContainerStatuses.state.terminated.exitCode** (int32), required\n\n        Exit status from the last termination of the container\n\n      - **ephemeralContainerStatuses.state.terminated.startedAt** (Time)\n\n        Time at which previous execution of the container started\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **ephemeralContainerStatuses.state.terminated.finishedAt** (Time)\n\n        Time at which the container last terminated\n\n        <a name=\"Time\"><\/a>\n        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n      - **ephemeralContainerStatuses.state.terminated.message** (string)\n\n        Message regarding the last termination of the container\n\n      - **ephemeralContainerStatuses.state.terminated.reason** (string)\n\n        (brief) reason from the last termination of the container\n\n      - **ephemeralContainerStatuses.state.terminated.signal** (int32)\n\n        Signal from the last termination of the container\n\n    - **ephemeralContainerStatuses.state.waiting** (ContainerStateWaiting)\n\n      Details about a waiting container\n\n      <a name=\"ContainerStateWaiting\"><\/a>\n      *ContainerStateWaiting is a waiting state of a container.*\n\n      - **ephemeralContainerStatuses.state.waiting.message** (string)\n\n        Message regarding why the container is not yet running.\n\n      - **ephemeralContainerStatuses.state.waiting.reason** (string)\n\n        (brief) reason the container is not yet running.\n\n  - **ephemeralContainerStatuses.user** (ContainerUser)\n\n    User represents user identity information initially attached to the first process of the container\n\n    <a name=\"ContainerUser\"><\/a>\n    *ContainerUser represents user identity information*\n\n    - **ephemeralContainerStatuses.user.linux** (LinuxContainerUser)\n\n      Linux holds user identity information initially attached to the first process of the containers in Linux. 
Note that the actual running identity can be changed if the process has enough privilege to do so.\n\n      <a name=\"LinuxContainerUser\"><\/a>\n      *LinuxContainerUser represents user identity information in Linux containers*\n\n      - **ephemeralContainerStatuses.user.linux.gid** (int64), required\n\n        GID is the primary gid initially attached to the first process in the container\n\n      - **ephemeralContainerStatuses.user.linux.uid** (int64), required\n\n        UID is the primary uid initially attached to the first process in the container\n\n      - **ephemeralContainerStatuses.user.linux.supplementalGroups** ([]int64)\n\n        *Atomic: will be replaced during a merge*\n        \n        SupplementalGroups are the supplemental groups initially attached to the first process in the container\n\n  - **ephemeralContainerStatuses.volumeMounts** ([]VolumeMountStatus)\n\n    *Patch strategy: merge on key `mountPath`*\n    \n    *Map: unique values on key mountPath will be kept during a merge*\n    \n    Status of volume mounts.\n\n    <a name=\"VolumeMountStatus\"><\/a>\n    *VolumeMountStatus shows status of volume mounts.*\n\n    - **ephemeralContainerStatuses.volumeMounts.mountPath** (string), required\n\n      MountPath corresponds to the original VolumeMount.\n\n    - **ephemeralContainerStatuses.volumeMounts.name** (string), required\n\n      Name corresponds to the name of the original VolumeMount.\n\n    - **ephemeralContainerStatuses.volumeMounts.readOnly** (boolean)\n\n      ReadOnly corresponds to the original VolumeMount.\n\n    - **ephemeralContainerStatuses.volumeMounts.recursiveReadOnly** (string)\n\n      RecursiveReadOnly must be set to Disabled, Enabled, or unspecified (for non-readonly mounts). 
An IfPossible value in the original VolumeMount must be translated to Disabled or Enabled, depending on the mount result.\n\n- **resourceClaimStatuses** ([]PodResourceClaimStatus)\n\n  *Patch strategies: retainKeys, merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  Status of resource claims.\n\n  <a name=\"PodResourceClaimStatus\"><\/a>\n  *PodResourceClaimStatus is stored in the PodStatus for each PodResourceClaim which references a ResourceClaimTemplate. It stores the generated name for the corresponding ResourceClaim.*\n\n  - **resourceClaimStatuses.name** (string), required\n\n    Name uniquely identifies this resource claim inside the pod. This must match the name of an entry in pod.spec.resourceClaims, which implies that the string must be a DNS_LABEL.\n\n  - **resourceClaimStatuses.resourceClaimName** (string)\n\n    ResourceClaimName is the name of the ResourceClaim that was generated for the Pod in the namespace of the Pod. If this is unset, then generating a ResourceClaim was not necessary. The pod.spec.resourceClaims entry can be ignored in this case.\n\n- **resize** (string)\n\n  Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to \"Proposed\"\n\n\n\n\n\n## PodList {#PodList}\n\nPodList is a list of Pods.\n\n<hr>\n\n- **items** ([]<a href=\"\">Pod<\/a>), required\n\n  List of pods. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md\n\n- **apiVersion** (string)\n\n  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources\n\n- **kind** (string)\n\n  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified Pod\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Pod\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Pod<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read ephemeralcontainers of the specified Pod\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\/ephemeralcontainers\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Pod\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Pod<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read log of the specified Pod\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\/log\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Pod\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **container** (*in query*): string\n\n  The container for which to stream logs. 
Defaults to only container if there is one container in the pod.\n\n\n- **follow** (*in query*): boolean\n\n  Follow the log stream of the pod. Defaults to false.\n\n\n- **insecureSkipTLSVerifyBackend** (*in query*): boolean\n\n  insecureSkipTLSVerifyBackend indicates that the apiserver should not confirm the validity of the serving certificate of the backend it is connecting to.  This will make the HTTPS connection between the apiserver and the backend insecure. This means the apiserver cannot verify the log data it is receiving came from the real kubelet.  If the kubelet is configured to verify the apiserver's TLS credentials, it does not mean the connection to the real kubelet is vulnerable to a man in the middle attack (e.g. an attacker could not intercept the actual log data coming from the real kubelet).\n\n\n- **limitBytes** (*in query*): integer\n\n  If set, the number of bytes to read from the server before terminating the log output. This may not display a complete final line of logging, and may return slightly more or slightly less than the specified limit.\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **previous** (*in query*): boolean\n\n  Return previous terminated container logs. Defaults to false.\n\n\n- **sinceSeconds** (*in query*): integer\n\n  A relative time in seconds before the current time from which to show logs. If this value precedes the time a pod was started, only logs since the pod start will be returned. If this value is in the future, no logs will be returned. Only one of sinceSeconds or sinceTime may be specified.\n\n\n- **tailLines** (*in query*): integer\n\n  If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime\n\n\n- **timestamps** (*in query*): boolean\n\n  If true, add an RFC3339 or RFC3339Nano timestamp at the beginning of every line of log output. 
Defaults to false.\n\n\n\n#### Response\n\n\n200 (string): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified Pod\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Pod\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Pod<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Pod\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/pods\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Pod\n\n#### HTTP Request\n\nGET \/api\/v1\/pods\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a 
href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a Pod\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/pods\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Pod<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Pod<\/a>): OK\n\n201 (<a href=\"\">Pod<\/a>): Created\n\n202 (<a href=\"\">Pod<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified Pod\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Pod\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Pod<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- 
**fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Pod<\/a>): OK\n\n201 (<a href=\"\">Pod<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace ephemeralcontainers of the specified Pod\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\/ephemeralcontainers\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Pod\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Pod<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Pod<\/a>): OK\n\n201 (<a href=\"\">Pod<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified Pod\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Pod\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Pod<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Pod<\/a>): OK\n\n201 (<a href=\"\">Pod<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the 
specified Pod\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Pod\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Pod<\/a>): OK\n\n201 (<a href=\"\">Pod<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update ephemeralcontainers of the specified Pod\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\/ephemeralcontainers\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Pod\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Pod<\/a>): OK\n\n201 (<a href=\"\">Pod<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified Pod\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Pod\n\n\n- **namespace** (*in path*): string, required\n\n  
<a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Pod<\/a>): OK\n\n201 (<a href=\"\">Pod<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a Pod\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Pod\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Pod<\/a>): OK\n\n202 (<a href=\"\">Pod<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of Pod\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/pods\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): 
string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   v1    import   k8s io api core v1    kind   Pod  content type   api reference  description   Pod is a collection of containers that can run on a host   title   Pod  weight  1 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  v1    import  k8s io api core v1        Pod   Pod   Pod is a collection of containers that can run on a host  This resource is created by clients and scheduled onto hosts    hr       apiVersion    v1       kind    Pod       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec 
    a href    PodSpec  a      Specification of the desired behavior of the pod  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status      status     a href    PodStatus  a      Most recently observed status of the pod  This data may not be up to date  Populated by the system  Read only  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status         PodSpec   PodSpec   PodSpec is a description of a pod    hr         Containers       containers       a href    Container  a    required     Patch strategy  merge on key  name         Map  unique values on key name will be kept during a merge       List of containers belonging to the pod  Containers cannot currently be added or removed  There must be at least one container in a Pod  Cannot be updated       initContainers       a href    Container  a       Patch strategy  merge on key  name         Map  unique values on key name will be kept during a merge       List of initialization containers belonging to the pod  Init containers are executed in order prior to containers being started  If any init container fails  the pod is considered to have failed and is handled according to its restartPolicy  The name for an init container or normal container must be unique among all containers  Init containers may not have Lifecycle actions  Readiness probes  Liveness probes  or Startup probes  The resourceRequirements of an init container are taken into account during scheduling by finding the highest request limit for each resource type  and then using the max of of that value or the sum of the normal containers  Limits are applied to init containers in a similar fashion  Init containers cannot currently be added or removed  Cannot be updated  More info  https   kubernetes io docs concepts workloads pods init containers       ephemeralContainers       a href    EphemeralContainer  a       Patch strategy  merge on key 
 name         Map  unique values on key name will be kept during a merge       List of ephemeral containers run in this pod  Ephemeral containers may be run in an existing pod to perform user initiated actions such as debugging  This list cannot be specified when creating a pod  and it cannot be modified by updating the pod spec  In order to add an ephemeral container to an existing pod  use the pod s ephemeralcontainers subresource       imagePullSecrets       a href    LocalObjectReference  a       Patch strategy  merge on key  name         Map  unique values on key name will be kept during a merge       ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec  If specified  these secrets will be passed to individual puller implementations for them to use  More info  https   kubernetes io docs concepts containers images specifying imagepullsecrets on a pod      enableServiceLinks    boolean     EnableServiceLinks indicates whether information about services should be injected into pod s environment variables  matching the syntax of Docker links  Optional  Defaults to true       os    PodOS     Specifies the OS of the containers in the pod  Some pod and container fields are restricted if this is set       If the OS field is set to linux  the following fields must be unset   securityContext windowsOptions      If the OS field is set to windows  following fields must be unset    spec hostPID   spec hostIPC   spec hostUsers   spec securityContext appArmorProfile   spec securityContext seLinuxOptions   spec securityContext seccompProfile   spec securityContext fsGroup   spec securityContext fsGroupChangePolicy   spec securityContext sysctls   spec shareProcessNamespace   spec securityContext runAsUser   spec securityContext runAsGroup   spec securityContext supplementalGroups   spec securityContext supplementalGroupsPolicy   spec containers    securityContext appArmorProfile   spec 
containers    securityContext seLinuxOptions   spec containers    securityContext seccompProfile   spec containers    securityContext capabilities   spec containers    securityContext readOnlyRootFilesystem   spec containers    securityContext privileged   spec containers    securityContext allowPrivilegeEscalation   spec containers    securityContext procMount   spec containers    securityContext runAsUser   spec containers    securityContext runAsGroup     a name  PodOS    a     PodOS defines the OS parameters of a pod          os name    string   required      Name is the name of the operating system  The currently supported values are linux and windows  Additional value may be defined in future and can be one of  https   github com opencontainers runtime spec blob master config md platform specific configuration Clients should expect to handle additional values and treat unrecognized values in this field as os  null      Volumes       volumes       a href    Volume  a       Patch strategies  retainKeys  merge on key  name         Map  unique values on key name will be kept during a merge       List of volumes that can be mounted by containers belonging to the pod  More info  https   kubernetes io docs concepts storage volumes      Scheduling       nodeSelector    map string string     NodeSelector is a selector which must be true for the pod to fit on a node  Selector which must match a node s labels for the pod to be scheduled on that node  More info  https   kubernetes io docs concepts configuration assign pod node       nodeName    string     NodeName indicates in which node this pod is scheduled  If empty  this pod is a candidate for scheduling by the scheduler defined in schedulerName  Once this field is set  the kubelet for this node becomes responsible for the lifecycle of this pod  This field should not be used to express a desire for the pod to be scheduled on a specific node  https   kubernetes io docs concepts scheduling eviction assign pod node  
nodename      affinity    Affinity     If specified  the pod s scheduling constraints     a name  Affinity    a     Affinity is a group of affinity scheduling rules          affinity nodeAffinity     a href    NodeAffinity  a        Describes node affinity scheduling rules for the pod         affinity podAffinity     a href    PodAffinity  a        Describes pod affinity scheduling rules  e g  co locate this pod in the same node  zone  etc  as some other pod s           affinity podAntiAffinity     a href    PodAntiAffinity  a        Describes pod anti affinity scheduling rules  e g  avoid putting this pod in the same node  zone  etc  as some other pod s         tolerations      Toleration      Atomic  will be replaced during a merge       If specified  the pod s tolerations      a name  Toleration    a     The pod this Toleration is attached to tolerates any taint that matches the triple  key value effect  using the matching operator  operator           tolerations key    string       Key is the taint key that the toleration applies to  Empty means match all taint keys  If the key is empty  operator must be Exists  this combination means to match all values and all keys         tolerations operator    string       Operator represents a key s relationship to the value  Valid operators are Exists and Equal  Defaults to Equal  Exists is equivalent to wildcard for value  so that a pod can tolerate all taints of a particular category         tolerations value    string       Value is the taint value the toleration matches to  If the operator is Exists  the value should be empty  otherwise just a regular string         tolerations effect    string       Effect indicates the taint effect to match  Empty means match all taint effects  When specified  allowed values are NoSchedule  PreferNoSchedule and NoExecute         tolerations tolerationSeconds    int64       TolerationSeconds represents the period of time the toleration  which must be of effect NoExecute  otherwise 
this field is ignored  tolerates the taint  By default  it is not set  which means tolerate the taint forever  do not evict   Zero and negative values will be treated as 0  evict immediately  by the system       schedulerName    string     If specified  the pod will be dispatched by specified scheduler  If not specified  the pod will be dispatched by default scheduler       runtimeClassName    string     RuntimeClassName refers to a RuntimeClass object in the node k8s io group  which should be used to run this pod   If no RuntimeClass resource matches the named class  the pod will not be run  If unset or empty  the  legacy  RuntimeClass will be used  which is an implicit class with an empty definition that uses the default runtime handler  More info  https   git k8s io enhancements keps sig node 585 runtime class      priorityClassName    string     If specified  indicates the pod s priority   system node critical  and  system cluster critical  are two special keywords which indicate the highest priorities with the former being the highest priority  Any other name must be defined by creating a PriorityClass object with that name  If not specified  the pod priority will be default or zero if there is no default       priority    int32     The priority value  Various system components use this field to find the priority of the pod  When Priority Admission Controller is enabled  it prevents users from setting this field  The admission controller populates this field from PriorityClassName  The higher the value  the higher the priority       preemptionPolicy    string     PreemptionPolicy is the Policy for preempting pods with lower priority  One of Never  PreemptLowerPriority  Defaults to PreemptLowerPriority if unset       topologySpreadConstraints      TopologySpreadConstraint      Patch strategy  merge on key  topologyKey         Map  unique values on keys  topologyKey  whenUnsatisfiable  will be kept during a merge       TopologySpreadConstraints describes how a 
group of pods ought to spread across topology domains  Scheduler will schedule pods in a way which abides by the constraints  All topologySpreadConstraints are ANDed      a name  TopologySpreadConstraint    a     TopologySpreadConstraint specifies how to spread matching pods among the given topology          topologySpreadConstraints maxSkew    int32   required      MaxSkew describes the degree to which pods may be unevenly distributed  When  whenUnsatisfiable DoNotSchedule   it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum  The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains  For example  in a 3 zone cluster  MaxSkew is set to 1  and pods with the same labelSelector spread as 2 2 1  In this case  the global minimum is 1    zone1   zone2   zone3      P P     P P      P       if MaxSkew is 1  incoming pod can only be scheduled to zone3 to become 2 2 2  scheduling it onto zone1 zone2  would make the ActualSkew 3 1  on zone1 zone2  violate MaxSkew 1     if MaxSkew is 2  incoming pod can be scheduled onto any zone  When  whenUnsatisfiable ScheduleAnyway   it is used to give higher precedence to topologies that satisfy it  It s a required field  Default value is 1 and 0 is not allowed         topologySpreadConstraints topologyKey    string   required      TopologyKey is the key of node labels  Nodes that have a label with this key and identical values are considered to be in the same topology  We consider each   key  value  as a  bucket   and try to put balanced number of pods into each bucket  We define a domain as a particular instance of a topology  Also  we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy  e g  If TopologyKey is  kubernetes io hostname   each Node is a domain of that topology  And  if TopologyKey is  topology kubernetes 
io/zone", each zone is a domain of that topology. It's a required field.

- **topologySpreadConstraints.whenUnsatisfiable** (string), required

  WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint.
  - DoNotSchedule (default) tells the scheduler not to schedule it.
  - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew.

  A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1:

  | zone1 | zone2 | zone3 |
  |-------|-------|-------|
  | P P P |   P   |   P   |

  If WhenUnsatisfiable is set to DoNotSchedule, the incoming pod can only be scheduled to zone2 (zone3) to become 3/2/1 (3/1/2), as ActualSkew (2 - 1) on zone2 (zone3) satisfies MaxSkew (1). In other words, the cluster can still be imbalanced, but the scheduler won't make it *more* imbalanced. It's a required field.

- **topologySpreadConstraints.labelSelector** (LabelSelector)

  LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.

- **topologySpreadConstraints.matchLabelKeys** ([]string)

  *Atomic: will be replaced during a merge*

  MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to look up values from the incoming pod labels; those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.

  This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default).

- **topologySpreadConstraints.minDomains** (int32)

  MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats the "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or is greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, the scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule.

  For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2:

  | zone1 | zone2 | zone3 |
  |-------|-------|-------|
  |  P P  |  P P  |  P P  |

  The number of domains is less than 5 (MinDomains), so the "global minimum" is treated as 0. In this situation, a new pod with the same labelSelector cannot be scheduled, because the computed skew will be 3 (3 - 0) if the new Pod is scheduled to any of the three zones; it will violate MaxSkew.

- **topologySpreadConstraints.nodeAffinityPolicy** (string)

  NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are:
  - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
  - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.

  If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
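Taken together, the spread fields above can be sketched in a Pod manifest like this (the pod name, `app: web` label, and image are illustrative assumptions, not values from this reference):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo            # hypothetical name
  labels:
    app: web                   # hypothetical label counted by the selector
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule   # or ScheduleAnyway
      labelSelector:
        matchLabels:
          app: web
      minDomains: 3                      # only valid with DoNotSchedule
      nodeAffinityPolicy: Honor
  containers:
    - name: web
      image: nginx             # illustrative image
```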
- **topologySpreadConstraints.nodeTaintsPolicy** (string)

  NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are:
  - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included.
  - Ignore: node taints are ignored. All nodes are included.

  If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.

- **overhead** (map[string]Quantity)

  Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass; otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md

### Lifecycle

- **restartPolicy** (string)

  Restart policy for all containers within the pod. One of Always, OnFailure, Never. In some contexts, only a subset of those values may be permitted. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy

- **terminationGracePeriodSeconds** (int64)

  Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds.

- **activeDeadlineSeconds** (int64)

  Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer.

- **readinessGates** ([]PodReadinessGate)

  *Atomic: will be replaced during a merge*

  If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True". More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates

  *PodReadinessGate contains the reference to a pod condition.*

  - **readinessGates.conditionType** (string), required

    ConditionType refers to a condition in the pod's condition list with matching type.

### Hostname and Name resolution

- **hostname** (string)

  Specifies the hostname of the Pod. If not specified, the pod's hostname will be set to a system-defined value.

- **setHostnameAsFQDN** (boolean)

  If true the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false.

- **subdomain** (string)

  If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all.

- **hostAliases** ([]HostAlias)
  *Patch strategy: merge on key `ip`*

  *Map: unique values on key ip will be kept during a merge*

  HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified.

  *HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file.*

  - **hostAliases.ip** (string), required

    IP address of the host file entry.

  - **hostAliases.hostnames** ([]string)

    *Atomic: will be replaced during a merge*

    Hostnames for the above IP address.

- **dnsConfig** (PodDNSConfig)

  Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy.

  *PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy.*

  - **dnsConfig.nameservers** ([]string)

    *Atomic: will be replaced during a merge*

    A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed.

  - **dnsConfig.options** ([]PodDNSConfigOption)

    *Atomic: will be replaced during a merge*

    A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy.

    *PodDNSConfigOption defines DNS resolver options of a pod.*

    - **dnsConfig.options.name** (string)

      Required.

    - **dnsConfig.options.value** (string)

  - **dnsConfig.searches** ([]string)

    *Atomic: will be replaced during a merge*

    A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed.

- **dnsPolicy** (string)

  Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are "ClusterFirstWithHostNet", "ClusterFirst", "Default" or "None". DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to "ClusterFirstWithHostNet".

### Hosts namespaces

- **hostNetwork** (boolean)

  Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false.

- **hostPID** (boolean)

  Use the host's pid namespace. Optional: Default to false.

- **hostIPC** (boolean)

  Use the host's ipc namespace. Optional: Default to false.

- **shareProcessNamespace** (boolean)

  Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false.

### Service account

- **serviceAccountName** (string)

  ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/

- **automountServiceAccountToken** (boolean)

  AutomountServiceAccountToken indicates whether a service account token should be automatically mounted.

### Security context

- **securityContext** (PodSecurityContext)

  SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field.

  *PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext.*
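The name-resolution fields above compose like this in a Pod spec (the IP, hostname, and resolver values are illustrative assumptions; note that `dnsPolicy: "None"` requires `dnsConfig.nameservers`):

```yaml
spec:
  hostAliases:
    - ip: "10.0.0.10"                  # illustrative address
      hostnames: ["db.internal"]       # injected into /etc/hosts
  dnsPolicy: "None"                    # ignore cluster DNS; use dnsConfig only
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["svc.cluster.local"]
    options:
      - name: ndots
        value: "2"
```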
  - **securityContext.appArmorProfile** (AppArmorProfile)

    appArmorProfile is the AppArmor options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows.

    *AppArmorProfile defines a pod or container's AppArmor settings.*

    - **securityContext.appArmorProfile.type** (string), required

      type indicates which kind of AppArmor profile will be applied. Valid options are:
      - Localhost: a profile pre-loaded on the node.
      - RuntimeDefault: the container runtime's default profile.
      - Unconfined: no AppArmor enforcement.

    - **securityContext.appArmorProfile.localhostProfile** (string)

      localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost".

  - **securityContext.fsGroup** (int64)

    A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod:

    1. The owning GID will be the FSGroup.
    2. The setgid bit is set (new files created in the volume will be owned by FSGroup).
    3. The permission bits are OR'd with rw-rw----.

    If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows.

  - **securityContext.fsGroupChangePolicy** (string)

    fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership (and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows.
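A minimal sketch of the pod-level volume-ownership and AppArmor fields just described (the pod name, image, and GID are hypothetical, and the volume type must support fsGroup-based ownership for the policy to matter):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo               # hypothetical name
spec:
  securityContext:
    fsGroup: 2000                  # volume files become group-owned by GID 2000
    fsGroupChangePolicy: "OnRootMismatch"
    appArmorProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: busybox               # illustrative image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      emptyDir: {}
```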
  - **securityContext.runAsUser** (int64)

    The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.

  - **securityContext.runAsNonRoot** (boolean)

    Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

  - **securityContext.runAsGroup** (int64)

    The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.

  - **securityContext.seccompProfile** (SeccompProfile)

    The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows.

    *SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set.*

    - **securityContext.seccompProfile.type** (string), required

      type indicates which kind of seccomp profile will be applied. Valid options are:
      - Localhost: a profile defined in a file on the node should be used.
      - RuntimeDefault: the container runtime default profile should be used.
      - Unconfined: no profile should be applied.

    - **securityContext.seccompProfile.localhostProfile** (string)

      localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type.

  - **securityContext.seLinuxOptions** (SELinuxOptions)

    The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.

    *SELinuxOptions are the labels to be applied to the container.*

    - **securityContext.seLinuxOptions.level** (string)

      Level is SELinux level label that applies to the container.

    - **securityContext.seLinuxOptions.role** (string)

      Role is a SELinux role label that applies to the container.

    - **securityContext.seLinuxOptions.type** (string)

      Type is a SELinux type label that applies to the container.

    - **securityContext.seLinuxOptions.user** (string)

      User is a SELinux user label that applies to the container.

  - **securityContext.supplementalGroups** ([]int64)

    *Atomic: will be replaced during a merge*

    A list of groups applied to the first process run in each container, in addition to the container's primary GID and fsGroup (if specified). If the SupplementalGroupsPolicy feature is enabled, the supplementalGroupsPolicy field determines whether these are in addition to or instead of any group memberships defined in the container image. If unspecified, no additional groups are added, though group memberships defined in the container image may still be used, depending on the supplementalGroupsPolicy field. Note that this field cannot be set when spec.os.name is windows.
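A fragment of a Pod spec combining the user/group and seccomp fields above (the specific UID/GID values are illustrative assumptions):

```yaml
spec:
  securityContext:
    runAsUser: 1000              # illustrative UID; must be non-zero here
    runAsGroup: 3000             # illustrative GID
    runAsNonRoot: true           # kubelet rejects images that resolve to UID 0
    supplementalGroups: [4000]   # extra group for the first process
    seccompProfile:
      type: RuntimeDefault       # use the container runtime's default profile
```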
  - **securityContext.supplementalGroupsPolicy** (string)

    Defines how supplemental groups of the first container processes are calculated. Valid values are "Merge" and "Strict". If not specified, "Merge" is used. (Alpha) Using the field requires the SupplementalGroupsPolicy feature gate to be enabled and the container runtime must implement support for this feature. Note that this field cannot be set when spec.os.name is windows.

  - **securityContext.sysctls** ([]Sysctl)

    *Atomic: will be replaced during a merge*

    Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows.

    *Sysctl defines a kernel parameter to be set.*

    - **securityContext.sysctls.name** (string), required

      Name of a property to set.

    - **securityContext.sysctls.value** (string), required

      Value of a property to set.

  - **securityContext.windowsOptions** (WindowsSecurityContextOptions)

    The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.

    *WindowsSecurityContextOptions contain Windows-specific options and credentials.*

    - **securityContext.windowsOptions.gmsaCredentialSpec** (string)

      GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.

    - **securityContext.windowsOptions.gmsaCredentialSpecName** (string)

      GMSACredentialSpecName is the name of the GMSA credential spec to use.

    - **securityContext.windowsOptions.hostProcess** (boolean)

      HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.

    - **securityContext.windowsOptions.runAsUserName** (string)

      The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

### Alpha level

- **hostUsers** (boolean)

  Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities, even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.

- **resourceClaims** ([]PodResourceClaim)

  *Patch strategies: retainKeys, merge on key `name`*

  *Map: unique values on key name will be kept during a merge*

  ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name.

  This is an alpha field and requires enabling the DynamicResourceAllocation feature gate.

  This field is immutable.

  *PodResourceClaim references exactly one ResourceClaim, either directly or by naming a ResourceClaimTemplate which is then turned into a ResourceClaim for the pod.*
  *It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name.*

  - **resourceClaims.name** (string), required

    Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL.

  - **resourceClaims.resourceClaimName** (string)

    ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod.

    Exactly one of ResourceClaimName and ResourceClaimTemplateName must be set.

  - **resourceClaims.resourceClaimTemplateName** (string)

    ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod.

    The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The pod name and resource name, along with a generated component, will be used to form a unique name for the ResourceClaim, which will be recorded in pod.status.resourceClaimStatuses.

    This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim.

    Exactly one of ResourceClaimName and ResourceClaimTemplateName must be set.

- **schedulingGates** ([]PodSchedulingGate)

  *Patch strategy: merge on key `name`*

  *Map: unique values on key name will be kept during a merge*

  SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod.

  SchedulingGates can only be set at pod creation time, and be removed only afterwards.

  *PodSchedulingGate is associated to a Pod to guard its scheduling.*

  - **schedulingGates.name** (string), required

    Name of the scheduling gate. Each scheduling gate must have a unique name field.

### Deprecated

- **serviceAccount** (string)

  DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead.

## Container

A single application container that you want to run within a pod.

---

- **name** (string), required

  Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.

### Image

- **image** (string)

  Container image name. More info: https://kubernetes.io/docs/concepts/containers/images. This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.

- **imagePullPolicy** (string)

  Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images

### Entrypoint

- **command** ([]string)

  *Atomic: will be replaced during a merge*

  Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

- **args** ([]string)

  *Atomic: will be replaced during a merge*

  Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
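The command/args expansion rules above can be sketched as follows (the container name, image, and `GREETING` variable are hypothetical; `$(GREETING)` is expanded by the kubelet from the container's env before the shell runs):

```yaml
spec:
  containers:
    - name: app
      image: busybox                    # illustrative image
      command: ["sh", "-c"]             # overrides the image ENTRYPOINT
      args: ["echo $(GREETING)"]        # $(VAR_NAME) resolved from env below
      env:
        - name: GREETING
          value: hello
```

Writing `$$(GREETING)` instead would pass the literal string `$(GREETING)` through unexpanded.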
- **workingDir** (string)

  Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.

### Ports

- **ports** ([]ContainerPort)

  *Patch strategy: merge on key `containerPort`*

  *Map: unique values on keys `containerPort, protocol` will be kept during a merge*

  List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.

  *ContainerPort represents a network port in a single container.*

  - **ports.containerPort** (int32), required

    Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536.

  - **ports.hostIP** (string)

    What host IP to bind the external port to.

  - **ports.hostPort** (int32)

    Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.

  - **ports.name** (string)

    If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.

  - **ports.protocol** (string)

    Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".

### Environment variables

- **env** ([]EnvVar)

  *Patch strategy: merge on key `name`*

  *Map: unique values on key name will be kept during a merge*

  List of environment variables to set in the container. Cannot be updated.

  *EnvVar represents an environment variable present in a Container.*

  - **env.name** (string), required

    Name of the environment variable. Must be a C_IDENTIFIER.

  - **env.value** (string)

    Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".

  - **env.valueFrom** (EnvVarSource)

    Source for the environment variable's value. Cannot be used if value is not empty.

    *EnvVarSource represents a source for the value of an EnvVar.*

    - **env.valueFrom.configMapKeyRef** (ConfigMapKeySelector)

      Selects a key of a ConfigMap.

      *Selects a key from a ConfigMap.*

      - **env.valueFrom.configMapKeyRef.key** (string), required

        The key to select.

      - **env.valueFrom.configMapKeyRef.name** (string)

        Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
      - **env.valueFrom.configMapKeyRef.optional** (boolean)

        Specify whether the ConfigMap or its key must be defined.

    - **env.valueFrom.fieldRef** (ObjectFieldSelector)

      Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.

    - **env.valueFrom.resourceFieldRef** (ResourceFieldSelector)

      Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.

    - **env.valueFrom.secretKeyRef** (SecretKeySelector)

      Selects a key of a secret in the pod's namespace.

      *SecretKeySelector selects a key of a Secret.*

      - **env.valueFrom.secretKeyRef.key** (string), required

        The key of the secret to select from. Must be a valid secret key.

      - **env.valueFrom.secretKeyRef.name** (string)

        Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

      - **env.valueFrom.secretKeyRef.optional** (boolean)

        Specify whether the Secret or its key must be defined.

- **envFrom** ([]EnvFromSource)

  *Atomic: will be replaced during a merge*

  List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.

  *EnvFromSource represents the source of a set of ConfigMaps.*

  - **envFrom.configMapRef** (ConfigMapEnvSource)

    The ConfigMap to select from.

    *ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables.*

    - **envFrom.configMapRef.name** (string)

      Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

    - **envFrom.configMapRef.optional** (boolean)

      Specify whether the ConfigMap must be defined.

  - **envFrom.prefix** (string)

    An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.

  - **envFrom.secretRef** (SecretEnvSource)

    The Secret to select from.

    *SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables.*

    - **envFrom.secretRef.name** (string)

      Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

    - **envFrom.secretRef.optional** (boolean)

      Specify whether the Secret must be defined.

### Volumes

- **volumeMounts** ([]VolumeMount)

  *Patch strategy: merge on key `mountPath`*

  *Map: unique values on key mountPath will be kept during a merge*

  Pod volumes to mount into the container's filesystem. Cannot be updated.
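A container fragment tying together the ports, env, and envFrom fields described above (the ConfigMap and Secret names, port numbers, and prefix are illustrative assumptions referencing objects that would have to exist in the same namespace):

```yaml
spec:
  containers:
    - name: app
      image: busybox                     # illustrative image
      ports:
        - name: http                     # must be an IANA_SVC_NAME
          containerPort: 8080
          protocol: TCP
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials       # hypothetical Secret
              key: password
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name   # downward API field
      envFrom:
        - prefix: CFG_                   # each ConfigMap key becomes CFG_<key>
          configMapRef:
            name: app-config             # hypothetical ConfigMap
            optional: true
```

Per the precedence rules above, if `app-config` defined a `CFG_`-prefixed key that collided with an entry in `env`, the `env` value would win.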
### Volumes

- **volumeMounts** ([]VolumeMount)

  *Patch strategy: merge on key `mountPath`*

  *Map: unique values on key mountPath will be kept during a merge*

  Pod volumes to mount into the container's filesystem. Cannot be updated.

  <a name="VolumeMount"></a>
  *VolumeMount describes a mounting of a Volume within a container.*

  - **volumeMounts.mountPath** (string), required
    Path within the container at which the volume should be mounted. Must not contain ':'.
  - **volumeMounts.name** (string), required
    This must match the Name of a Volume.
  - **volumeMounts.mountPropagation** (string)
    mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None).
  - **volumeMounts.readOnly** (boolean)
    Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.
  - **volumeMounts.recursiveReadOnly** (string)
    RecursiveReadOnly specifies whether read-only mounts should be handled recursively.

    If ReadOnly is false, this field has no meaning and must be unspecified.

    If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason.

    If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None).

    If this field is not specified, it is treated as an equivalent of Disabled.
  - **volumeMounts.subPath** (string)
    Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root).
  - **volumeMounts.subPathExpr** (string)
    Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.

- **volumeDevices** ([]VolumeDevice)

  *Patch strategy: merge on key `devicePath`*

  *Map: unique values on key devicePath will be kept during a merge*

  volumeDevices is the list of block devices to be used by the container.

  <a name="VolumeDevice"></a>
  *volumeDevice describes a mapping of a raw block device within a container.*

  - **volumeDevices.devicePath** (string), required
    devicePath is the path inside of the container that the device will be mapped to.
  - **volumeDevices.name** (string), required
    name must match the name of a persistentVolumeClaim in the pod.
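As a sketch of the `volumeMounts` fields, the following fragment (volume name `data` and the image are hypothetical) mounts one subdirectory of a volume read-only:

```yaml
# Sketch only: "data" must match an entry in spec.volumes.
spec:
  volumes:
    - name: data
      emptyDir: {}
  containers:
    - name: app
      image: busybox
      volumeMounts:
        - name: data            # must match the volume name above
          mountPath: /data      # merge key under the patch strategy
          readOnly: true
          subPath: logs         # mount only the "logs" subdirectory, not the root
```

Since the patch strategy merges on `mountPath`, two mounts of the same volume at different paths are kept as distinct entries.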
### Resources

- **resources** (ResourceRequirements)

  Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

  <a name="ResourceRequirements"></a>
  *ResourceRequirements describes the compute resource requirements.*

  - **resources.claims** ([]ResourceClaim)

    *Map: unique values on key name will be kept during a merge*

    Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container.

    This is an alpha field and requires enabling the DynamicResourceAllocation feature gate.

    This field is immutable. It can only be set for containers.

    <a name="ResourceClaim"></a>
    *ResourceClaim references one entry in PodSpec.ResourceClaims.*

    - **resources.claims.name** (string), required
      Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
    - **resources.claims.request** (string)
      Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request.

  - **resources.limits** (map[string]Quantity)
    Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
  - **resources.requests** (map[string]Quantity)
    Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

- **resizePolicy** ([]ContainerResizePolicy)

  *Atomic: will be replaced during a merge*

  Resources resize policy for the container.

  <a name="ContainerResizePolicy"></a>
  *ContainerResizePolicy represents resource resize policy for the container.*

  - **resizePolicy.resourceName** (string), required
    Name of the resource to which this resource resize policy applies. Supported values: cpu, memory.
  - **resizePolicy.restartPolicy** (string), required
    Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired.
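The `resources` and `resizePolicy` fields above can be sketched together as follows (quantities chosen arbitrarily for illustration):

```yaml
# Sketch only: requests must not exceed limits.
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: 250m           # Quantity strings: milli-CPU
          memory: 64Mi        # binary-suffix memory
        limits:
          cpu: 500m
          memory: 128Mi
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired   # allow in-place CPU resize without restart
```

If `requests` were omitted here, it would default to the stated `limits`.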
### Lifecycle

- **lifecycle** (Lifecycle)

  Actions that the management system should take in response to container lifecycle events. Cannot be updated.

  <a name="Lifecycle"></a>
  *Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.*

  - **lifecycle.postStart** (LifecycleHandler)
    PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
  - **lifecycle.preStop** (LifecycleHandler)
    PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks

- **terminationMessagePath** (string)

  Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.

- **terminationMessagePolicy** (string)

  Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.

- **livenessProbe** (Probe)

  Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes

- **readinessProbe** (Probe)

  Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes

- **startupProbe** (Probe)

  StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes

- **restartPolicy** (string)

  RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed.
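The probe and lifecycle-hook fields above interact: a startupProbe suppresses the other probes until it first succeeds. A sketch (the endpoint paths `/healthz` and `/ready` are hypothetical):

```yaml
# Sketch only: startupProbe allows up to 30 * 2s = 60s for slow startup
# before liveness/readiness probing begins.
spec:
  containers:
    - name: web
      image: nginx
      startupProbe:
        httpGet: {path: /healthz, port: 80}
        failureThreshold: 30
        periodSeconds: 2
      livenessProbe:
        httpGet: {path: /healthz, port: 80}   # failure restarts the container
      readinessProbe:
        httpGet: {path: /ready, port: 80}     # failure removes it from endpoints
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 5"]  # brief drain before SIGTERM
```

Note the preStop sleep counts against the Pod's termination grace period, as described above.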
work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost".

  - **securityContext.capabilities** (Capabilities)
    The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows.

    <a name="Capabilities"></a>
    *Adds and removes POSIX capabilities from running containers.*

    - **securityContext.capabilities.add** ([]string)
      *Atomic: will be replaced during a merge*
      Added capabilities
    - **securityContext.capabilities.drop** ([]string)
      *Atomic: will be replaced during a merge*
      Removed capabilities

  - **securityContext.procMount** (string)
    procMount denotes the type of proc mount to use for the containers. The default value is Default which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows.
  - **securityContext.privileged** (boolean)
    Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows.
  - **securityContext.readOnlyRootFilesystem** (boolean)
    Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows.
  - **securityContext.runAsUser** (int64)
    The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
  - **securityContext.runAsNonRoot** (boolean)
    Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
  - **securityContext.runAsGroup** (int64)
    The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.
  - **securityContext.seLinuxOptions** (SELinuxOptions)
    The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

    <a name="SELinuxOptions"></a>
    *SELinuxOptions are the labels to be applied to the container*

    - **securityContext.seLinuxOptions.level** (string)
      Level is SELinux level label that applies to the container.
    - **securityContext.seLinuxOptions.role** (string)
      Role is a SELinux role label that applies to the container.
    - **securityContext.seLinuxOptions.type** (string)
      Type is a SELinux type label that applies to the container.
    - **securityContext.seLinuxOptions.user** (string)
      User is a SELinux user label that applies to the container.

  - **securityContext.seccompProfile** (SeccompProfile)
    The seccomp options to use by this container. If seccomp options are provided at both the pod and container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.

    <a name="SeccompProfile"></a>
    *SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set.*

    - **securityContext.seccompProfile.type** (string), required
      type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.
    - **securityContext.seccompProfile.localhostProfile** (string)
      localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type.

  - **securityContext.windowsOptions** (WindowsSecurityContextOptions)
    The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.

    <a name="WindowsSecurityContextOptions"></a>
    *WindowsSecurityContextOptions contain Windows-specific options and credentials.*

    - **securityContext.windowsOptions.gmsaCredentialSpec** (string)
      GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.
    - **securityContext.windowsOptions.gmsaCredentialSpecName** (string)
      GMSACredentialSpecName is the name of the GMSA credential spec to use.
    - **securityContext.windowsOptions.hostProcess** (boolean)
      HostProcess determines if a container should be run as a "Host Process" container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.
    - **securityContext.windowsOptions.runAsUserName** (string)
      The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

### Debugging

- **stdin** (boolean)

  Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.

- **stdinOnce** (boolean)

  Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, container processes that read from stdin will never receive an EOF. Default is false.

- **tty** (boolean)

  Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false.
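A commonly used restrictive combination of the container-level `securityContext` fields above can be sketched as follows (all values illustrative):

```yaml
# Sketch only: container-level settings override the equivalent
# PodSecurityContext fields, as noted above.
spec:
  containers:
    - name: app
      image: nginx
      securityContext:
        runAsNonRoot: true              # kubelet rejects images that run as UID 0
        runAsUser: 1000
        allowPrivilegeEscalation: false # sets no_new_privs on the process
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                 # atomic list: replaced wholesale on merge
        seccompProfile:
          type: RuntimeDefault
```

Because `capabilities.add`/`drop` are atomic, a patch cannot append a single capability; it must supply the full list.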
## EphemeralContainer

An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation.

To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted.

***

- **name** (string), required

  Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers.

- **targetContainerName** (string)

  If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec.

  The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined.

### Image

- **image** (string)

  Container image name. More info: https://kubernetes.io/docs/concepts/containers/images

- **imagePullPolicy** (string)

  Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images

### Entrypoint

- **command** ([]string)

  *Atomic: will be replaced during a merge*

  Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

- **args** ([]string)

  *Atomic: will be replaced during a merge*

  Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

- **workingDir** (string)

  Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
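In practice, an entry under the ephemeralcontainers subresource is usually created with `kubectl debug` rather than written by hand. A sketch of the resulting entry (the container name, image, and target are hypothetical):

```yaml
# Sketch of an entry the ephemeralcontainers subresource might hold,
# e.g. after: kubectl debug -it mypod --image=busybox --target=app
ephemeralContainers:
  - name: debugger-abc12              # must be unique among all containers
    image: busybox
    targetContainerName: app          # share the target's namespaces (PID, IPC, ...)
    stdin: true
    tty: true
```

Because ephemeral containers may not be removed or restarted, a finished debug session leaves a terminated entry in the Pod's status.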
### Environment variables

- **env** ([]EnvVar)

  *Patch strategy: merge on key `name`*

  *Map: unique values on key name will be kept during a merge*

  List of environment variables to set in the container. Cannot be updated.

  <a name="EnvVar"></a>
  *EnvVar represents an environment variable present in a Container.*

  - **env.name** (string), required
    Name of the environment variable. Must be a C_IDENTIFIER.
  - **env.value** (string)
    Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".
  - **env.valueFrom** (EnvVarSource)
    Source for the environment variable's value. Cannot be used if value is not empty.

    <a name="EnvVarSource"></a>
    *EnvVarSource represents a source for the value of an EnvVar.*

    - **env.valueFrom.configMapKeyRef** (ConfigMapKeySelector)
      Selects a key of a ConfigMap.

      <a name="ConfigMapKeySelector"></a>
      *Selects a key from a ConfigMap.*

      - **env.valueFrom.configMapKeyRef.key** (string), required
        The key to select.
      - **env.valueFrom.configMapKeyRef.name** (string)
        Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
      - **env.valueFrom.configMapKeyRef.optional** (boolean)
        Specify whether the ConfigMap or its key must be defined

    - **env.valueFrom.fieldRef** (ObjectFieldSelector)
      Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
    - **env.valueFrom.resourceFieldRef** (ResourceFieldSelector)
      Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
    - **env.valueFrom.secretKeyRef** (SecretKeySelector)
      Selects a key of a secret in the pod's namespace

      <a name="SecretKeySelector"></a>
      *SecretKeySelector selects a key of a Secret.*

      - **env.valueFrom.secretKeyRef.key** (string), required
        The key of the secret to select from. Must be a valid secret key.
      - **env.valueFrom.secretKeyRef.name** (string)
        Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
      - **env.valueFrom.secretKeyRef.optional** (boolean)
        Specify whether the Secret or its key must be defined

- **envFrom** ([]EnvFromSource)

  *Atomic: will be replaced during a merge*

  List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.

  <a name="EnvFromSource"></a>
  *EnvFromSource represents the source of a set of ConfigMaps*

  - **envFrom.configMapRef** (ConfigMapEnvSource)
    The ConfigMap to select from

    <a name="ConfigMapEnvSource"></a>
    *ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables.*

    - **envFrom.configMapRef.name** (string)
      Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
    - **envFrom.configMapRef.optional** (boolean)
      Specify whether the ConfigMap must be defined

  - **envFrom.prefix** (string)
    An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.

  - **envFrom.secretRef** (SecretEnvSource)
    The Secret to select from

    <a name="SecretEnvSource"></a>
    *SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables.*

    - **envFrom.secretRef.name** (string)
      Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
    - **envFrom.secretRef.optional** (boolean)
      Specify whether the Secret must be defined
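The `env` source kinds above can be shown side by side in one sketch (the Secret name `db-secret` and its `password` key are hypothetical):

```yaml
# Sketch only: literal value, downward-API fieldRef, secretKeyRef,
# and $(VAR_NAME) expansion of a previously defined variable.
spec:
  containers:
    - name: app
      image: busybox
      env:
        - name: MODE
          value: "production"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP      # one of the supported pod fields
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
        - name: GREETING
          value: "$(MODE)-ready"           # expands to "production-ready"
```

Order matters for expansion: `$(MODE)` resolves only because MODE is defined earlier in the list.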
### Volumes

- **volumeMounts** ([]VolumeMount)

  *Patch strategy: merge on key `mountPath`*

  *Map: unique values on key mountPath will be kept during a merge*

  Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated.

  <a name="VolumeMount"></a>
  *VolumeMount describes a mounting of a Volume within a container.*

  - **volumeMounts.mountPath** (string), required
    Path within the container at which the volume should be mounted. Must not contain ':'.
  - **volumeMounts.name** (string), required
    This must match the Name of a Volume.
  - **volumeMounts.mountPropagation** (string)
    mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None).
  - **volumeMounts.readOnly** (boolean)
    Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.
  - **volumeMounts.recursiveReadOnly** (string)
    RecursiveReadOnly specifies whether read-only mounts should be handled recursively.

    If ReadOnly is false, this field has no meaning and must be unspecified.

    If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason.

    If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None).

    If this field is not specified, it is treated as an equivalent of Disabled.
  - **volumeMounts.subPath** (string)
    Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root).
  - **volumeMounts.subPathExpr** (string)
    Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.

- **volumeDevices** ([]VolumeDevice)

  *Patch strategy: merge on key `devicePath`*

  *Map: unique values on key devicePath will be kept during a merge*

  volumeDevices is the list of block devices to be used by the container.

  <a name="VolumeDevice"></a>
  *volumeDevice describes a mapping of a raw block device within a container.*

  - **volumeDevices.devicePath** (string), required
    devicePath is the path inside of the container that the device will be mapped to.
  - **volumeDevices.name** (string), required
    name must match the name of a persistentVolumeClaim in the pod.

### Resources

- **resizePolicy** ([]ContainerResizePolicy)

  *Atomic: will be replaced during a merge*

  Resources resize policy for the container.

  <a name="ContainerResizePolicy"></a>
  *ContainerResizePolicy represents resource resize policy for the container.*

  - **resizePolicy.resourceName** (string), required
    Name of the resource to which this resource resize policy applies. Supported values: cpu, memory.
  - **resizePolicy.restartPolicy** (string), required
    Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired.

### Lifecycle

- **terminationMessagePath** (string)

  Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.

- **terminationMessagePolicy** (string)

  Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.

- **restartPolicy** (string)

  Restart policy for the container to manage the restart behavior of each container within a pod. This may only be set for init containers. You cannot set this field on ephemeral containers.

### Debugging

- **stdin** (boolean)

  Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.

- **stdinOnce** (boolean)

  Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, container processes that read from stdin will never receive an EOF. Default is false.

- **tty** (boolean)

  Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false.
localhostProfile    string         localhostProfile indicates a profile loaded on the node that should be used  The profile must be preconfigured on the node to work  Must match the loaded name of the profile  Must be set if and only if type is  Localhost          securityContext capabilities    Capabilities       The capabilities to add drop when running containers  Defaults to the default set of capabilities granted by the container runtime  Note that this field cannot be set when spec os name is windows        a name  Capabilities    a       Adds and removes POSIX capabilities from running containers            securityContext capabilities add      string          Atomic  will be replaced during a merge               Added capabilities          securityContext capabilities drop      string          Atomic  will be replaced during a merge               Removed capabilities        securityContext procMount    string       procMount denotes the type of proc mount to use for the containers  The default value is Default which uses the container runtime defaults for readonly paths and masked paths  This requires the ProcMountType feature flag to be enabled  Note that this field cannot be set when spec os name is windows         securityContext privileged    boolean       Run container in privileged mode  Processes in privileged containers are essentially equivalent to root on the host  Defaults to false  Note that this field cannot be set when spec os name is windows         securityContext readOnlyRootFilesystem    boolean       Whether this container has a read only root filesystem  Default is false  Note that this field cannot be set when spec os name is windows         securityContext runAsUser    int64       The UID to run the entrypoint of the container process  Defaults to user specified in image metadata if unspecified  May also be set in PodSecurityContext   If set in both SecurityContext and PodSecurityContext  the value specified in SecurityContext takes 
precedence. Note that this field cannot be set when spec.os.name is windows.

  - **securityContext.runAsNonRoot** (boolean)

    Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

  - **securityContext.runAsGroup** (int64)

    The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

  - **securityContext.seLinuxOptions** (SELinuxOptions)

    The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows.

    <a name="SELinuxOptions"></a>
    SELinuxOptions are the labels to be applied to the container.

    - **securityContext.seLinuxOptions.level** (string)

      Level is SELinux level label that applies to the container.

    - **securityContext.seLinuxOptions.role** (string)

      Role is a SELinux role label that applies to the container.

    - **securityContext.seLinuxOptions.type** (string)

      Type is a SELinux type label that applies to the container.

    - **securityContext.seLinuxOptions.user** (string)

      User is a SELinux user label that applies to the container.

  - **securityContext.seccompProfile** (SeccompProfile)

    The seccomp options to use by this container. If seccomp options are provided at both the pod and container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows.

    <a name="SeccompProfile"></a>
    SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set.

    - **securityContext.seccompProfile.type** (string), required

      type indicates which kind of seccomp profile will be applied. Valid options are:
      - Localhost - a profile defined in a file on the node should be used.
      - RuntimeDefault - the container runtime default profile should be used.
      - Unconfined - no profile should be applied.

    - **securityContext.seccompProfile.localhostProfile** (string)

      localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type.

  - **securityContext.windowsOptions** (WindowsSecurityContextOptions)

    The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.

    <a name="WindowsSecurityContextOptions"></a>
    WindowsSecurityContextOptions contain Windows-specific options and credentials.

    - **securityContext.windowsOptions.gmsaCredentialSpec** (string)

      GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.

    - **securityContext.windowsOptions.gmsaCredentialSpecName** (string)

      GMSACredentialSpecName is the name of the GMSA credential spec to use.

    - **securityContext.windowsOptions.hostProcess** (boolean)

      HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.

    - **securityContext.windowsOptions.runAsUserName** (string)

      The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.

### Not allowed

- **ports** ([]ContainerPort)

  *Patch strategy: merge on key `containerPort`*

  *Map: unique values on keys `containerPort, protocol` will be kept during a merge*

  Ports are not allowed for ephemeral containers.

  <a name="ContainerPort"></a>
  ContainerPort represents a network port in a single container.

  - **ports.containerPort** (int32), required

    Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536.

  - **ports.hostIP** (string)

    What host IP to bind the external port to.

  - **ports.hostPort** (int32)

    Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this.

  - **ports.name** (string)

    If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services.

  - **ports.protocol** (string)

    Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".

- **resources** (ResourceRequirements)

  Resources are not allowed for ephemeral containers.
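As a minimal sketch of the ephemeral-container fields described above (the pod name and image are illustrative, not from this reference):

```yaml
# Hypothetical Pod fragment: an ephemeral debug container with a
# restricted security context. Field names follow the reference above;
# "example-pod", "debugger", and the image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  ephemeralContainers:
  - name: debugger
    image: busybox:1.36
    stdin: true            # allocate a stdin buffer for interactive use
    tty: true              # requires stdin: true
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1000
      capabilities:
        drop: ["ALL"]      # Atomic: replaced wholesale during a merge
      seccompProfile:
        type: RuntimeDefault
```

Note that ephemeral containers cannot be added by editing the pod spec directly; they are added through the pod's `ephemeralcontainers` subresource, for example by `kubectl debug`.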
Ephemeral containers use spare resources already allocated to the pod.

  <a name="ResourceRequirements"></a>
  ResourceRequirements describes the compute resource requirements.

  - **resources.claims** ([]ResourceClaim)

    *Map: unique values on key `name` will be kept during a merge*

    Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container.

    This is an alpha field and requires enabling the DynamicResourceAllocation feature gate.

    This field is immutable. It can only be set for containers.

    <a name="ResourceClaim"></a>
    ResourceClaim references one entry in PodSpec.ResourceClaims.

    - **resources.claims.name** (string), required

      Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.

    - **resources.claims.request** (string)

      Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available; otherwise only the result of this request.

  - **resources.limits** (map[string]Quantity)

    Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

  - **resources.requests** (map[string]Quantity)

    Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

- **lifecycle** (Lifecycle)

  Lifecycle is not allowed for ephemeral containers.

  <a name="Lifecycle"></a>
  Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.

  - **lifecycle.postStart** (LifecycleHandler)

    PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks

  - **lifecycle.preStop** (LifecycleHandler)

    PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks

- **livenessProbe** (Probe)

  Probes are not allowed for ephemeral containers.

- **readinessProbe** (Probe)

  Probes are not allowed for ephemeral containers.

- **startupProbe** (Probe)

  Probes are not allowed for ephemeral containers.

## LifecycleHandler

LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket, must be specified.

---

- **exec** (ExecAction)

  Exec specifies the action to take.

  <a name="ExecAction"></a>
  ExecAction describes a "run in container" action.

  - **exec.command** ([]string)

    *Atomic: will be replaced during a merge*

    Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy.

- **httpGet** (HTTPGetAction)

  HTTPGet specifies the http request to perform.

  <a name="HTTPGetAction"></a>
  HTTPGetAction describes an action based on HTTP Get requests.

  - **httpGet.port** (IntOrString), required

    Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

    <a name="IntOrString"></a>
    IntOrString is a type that can hold an int32 or a string. When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type. This allows you to have, for example, a JSON field that can accept a name or number.

  - **httpGet.host** (string)

    Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.

  - **httpGet.httpHeaders** ([]HTTPHeader)

    *Atomic: will be replaced during a merge*

    Custom headers to set in the request. HTTP allows repeated headers.

    <a name="HTTPHeader"></a>
    HTTPHeader describes a custom header to be used in HTTP probes.

    - **httpGet.httpHeaders.name** (string), required

      The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header.

    - **httpGet.httpHeaders.value** (string), required

      The header field value.

  - **httpGet.path** (string)

    Path to access on the HTTP server.

  - **httpGet.scheme** (string)

    Scheme to use for connecting to the host. Defaults to HTTP.

- **sleep** (SleepAction)

  Sleep represents the duration that the container should sleep before being terminated.

  <a name="SleepAction"></a>
  SleepAction describes a "sleep" action.

  - **sleep.seconds** (int64), required

    Seconds is the number of seconds to sleep.

- **tcpSocket** (TCPSocketAction)

  Deprecated. TCPSocket is NOT supported as a LifecycleHandler and is kept for backward compatibility. There is no validation of this field and lifecycle hooks will fail at runtime when a tcp handler is specified.

  <a name="TCPSocketAction"></a>
  TCPSocketAction describes an action based on opening a socket.

  - **tcpSocket.port** (IntOrString), required

    Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.

    <a name="IntOrString"></a>
    IntOrString is a type that can hold an int32 or a string. When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type. This allows you to have, for example, a JSON field that can accept a name or number.

  - **tcpSocket.host** (string)

    Optional: Host name to connect to, defaults to the pod IP.

## NodeAffinity

Node affinity is a group of node affinity scheduling rules.

---

- **preferredDuringSchedulingIgnoredDuringExecution** ([]PreferredSchedulingTerm)

  *Atomic: will be replaced during a merge*

  The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred.

  <a name="PreferredSchedulingTerm"></a>
  An
empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).

  - **preferredDuringSchedulingIgnoredDuringExecution.preference** (NodeSelectorTerm), required

    A node selector term, associated with the corresponding weight.

    <a name="NodeSelectorTerm"></a>
    A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.

    - **preferredDuringSchedulingIgnoredDuringExecution.preference.matchExpressions** ([]NodeSelectorRequirement)

      *Atomic: will be replaced during a merge*

      A list of node selector requirements by node's labels.

    - **preferredDuringSchedulingIgnoredDuringExecution.preference.matchFields** ([]NodeSelectorRequirement)

      *Atomic: will be replaced during a merge*

      A list of node selector requirements by node's fields.

  - **preferredDuringSchedulingIgnoredDuringExecution.weight** (int32), required

    Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.

- **requiredDuringSchedulingIgnoredDuringExecution** (NodeSelector)

  If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node.

  <a name="NodeSelector"></a>
  A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.

  - **requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms** ([]NodeSelectorTerm), required

    *Atomic: will be replaced during a merge*

    Required. A list of node selector terms. The terms are ORed.

    <a name="NodeSelectorTerm"></a>
    A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.

    - **requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions** ([]NodeSelectorRequirement)

      *Atomic: will be replaced during a merge*

      A list of node selector requirements by node's labels.

    - **requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields** ([]NodeSelectorRequirement)

      *Atomic: will be replaced during a merge*

      A list of node selector requirements by node's fields.

## PodAffinity

Pod affinity is a group of inter pod affinity scheduling rules.

---

- **preferredDuringSchedulingIgnoredDuringExecution** ([]WeightedPodAffinityTerm)

  *Atomic: will be replaced during a merge*

  The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.

  <a name="WeightedPodAffinityTerm"></a>
  The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s).

  - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm** (PodAffinityTerm), required

    Required. A pod affinity term, associated with the corresponding weight.

    <a name="PodAffinityTerm"></a>
    Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key `topologyKey` matches that of any node on which a pod of the set of pods is running.

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey** (string), required

      This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector** (LabelSelector)

      A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.matchLabelKeys** ([]string)

      *Atomic: will be replaced during a merge*

      MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods to be taken into consideration for the incoming pod's pod (anti-)affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.mismatchLabelKeys** ([]string)

      *Atomic: will be replaced during a merge*

      MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods to be taken into consideration for the incoming pod's pod (anti-)affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector** (LabelSelector)

      A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaces** ([]string)

      *Atomic: will be replaced during a merge*

      namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".

  - **preferredDuringSchedulingIgnoredDuringExecution.weight** (int32), required

    weight associated with matching the corresponding podAffinityTerm, in the range 1-100.

- **requiredDuringSchedulingIgnoredDuringExecution**
 ([]PodAffinityTerm)

  *Atomic: will be replaced during a merge*

  If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.

  <a name="PodAffinityTerm"></a>
  Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key `topologyKey` matches that of any node on which a pod of the set of pods is running.

  - **requiredDuringSchedulingIgnoredDuringExecution.topologyKey** (string), required

    This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.

  - **requiredDuringSchedulingIgnoredDuringExecution.labelSelector** (LabelSelector)

    A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.

  - **requiredDuringSchedulingIgnoredDuringExecution.matchLabelKeys** ([]string)

    *Atomic: will be replaced during a merge*

    MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods to be taken into consideration for the incoming pod's pod (anti-)affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).

  - **requiredDuringSchedulingIgnoredDuringExecution.mismatchLabelKeys** ([]string)

    *Atomic: will be replaced during a merge*

    MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods to be taken into consideration for the incoming pod's pod (anti-)affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).

  - **requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector** (LabelSelector)

    A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.

  - **requiredDuringSchedulingIgnoredDuringExecution.namespaces** ([]string)

    *Atomic: will be replaced during a merge*

    namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".

## PodAntiAffinity

Pod anti affinity is a group of inter pod anti affinity scheduling rules.

---

- **preferredDuringSchedulingIgnoredDuringExecution** ([]WeightedPodAffinityTerm)

  *Atomic: will be replaced during a merge*

  The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which match the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred.

  <a name="WeightedPodAffinityTerm"></a>
  The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s).

  - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm** (PodAffinityTerm), required

    Required. A pod affinity term, associated with the corresponding weight.

    <a name="PodAffinityTerm"></a>
    Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key `topologyKey` matches that of any node on which a pod of the set of pods is running.

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.topologyKey** (string), required

      This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in
the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.labelSelector** (LabelSelector)

      A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.matchLabelKeys** ([]string)

      *Atomic: will be replaced during a merge*

      MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods to be taken into consideration for the incoming pod's pod (anti-)affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.mismatchLabelKeys** ([]string)

      *Atomic: will be replaced during a merge*

      MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods to be taken into consideration for the incoming pod's pod (anti-)affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaceSelector** (LabelSelector)

      A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.

    - **preferredDuringSchedulingIgnoredDuringExecution.podAffinityTerm.namespaces** ([]string)

      *Atomic: will be replaced during a merge*

      namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".

  - **preferredDuringSchedulingIgnoredDuringExecution.weight** (int32), required

    weight associated with matching the corresponding podAffinityTerm, in the range 1-100.

- **requiredDuringSchedulingIgnoredDuringExecution** ([]PodAffinityTerm)

  *Atomic: will be replaced during a merge*

  If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied.

  <a name="PodAffinityTerm"></a>
  Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key `topologyKey` matches that of any node on which a pod of the set of pods is running.

  - **requiredDuringSchedulingIgnoredDuringExecution.topologyKey** (string), required

    This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.

  - **requiredDuringSchedulingIgnoredDuringExecution.labelSelector** (LabelSelector)

    A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods.

  - **requiredDuringSchedulingIgnoredDuringExecution.matchLabelKeys** ([]string)

    *Atomic: will be replaced during a merge*

    MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key in (value)` to select the group of existing pods to be taken into consideration for the incoming pod's pod (anti-)affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).

  - **requiredDuringSchedulingIgnoredDuringExecution.mismatchLabelKeys** ([]string)

    *Atomic: will be replaced during a merge*

    MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to look up values from the incoming pod labels; those key-value labels are merged with `labelSelector` as `key notin (value)` to select the group of existing pods to be taken into consideration for the incoming pod's pod (anti-)affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling the MatchLabelKeysInPodAffinity feature gate (enabled by default).

  - **requiredDuringSchedulingIgnoredDuringExecution.namespaceSelector** (LabelSelector)

    A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces.

  - **requiredDuringSchedulingIgnoredDuringExecution.namespaces** ([]string)

    *Atomic: will be replaced during a merge*

    namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace".

## Probe

Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.

---

- **exec** (ExecAction)

  Exec specifies the action to take.

  <a name="ExecAction"></a>
  ExecAction describes a "run in container" action.

  - **exec.command** ([]string)

    *Atomic: will be replaced during a merge*

    Command is the command line to execute inside the container, the working directory for the command is root ('/') in
the container s filesystem  The command is simply exec d  it is not run inside a shell  so traditional shell instructions       etc  won t work  To use a shell  you need to explicitly call out to that shell  Exit status of 0 is treated as live healthy and non zero is unhealthy       httpGet    HTTPGetAction     HTTPGet specifies the http request to perform      a name  HTTPGetAction    a     HTTPGetAction describes an action based on HTTP Get requests          httpGet port    IntOrString   required      Name or number of the port to access on the container  Number must be in the range 1 to 65535  Name must be an IANA SVC NAME        a name  IntOrString    a       IntOrString is a type that can hold an int32 or a string   When used in JSON or YAML marshalling and unmarshalling  it produces or consumes the inner type   This allows you to have  for example  a JSON field that can accept a name or number          httpGet host    string       Host name to connect to  defaults to the pod IP  You probably want to set  Host  in httpHeaders instead         httpGet httpHeaders      HTTPHeader        Atomic  will be replaced during a merge           Custom headers to set in the request  HTTP allows repeated headers        a name  HTTPHeader    a       HTTPHeader describes a custom header to be used in HTTP probes           httpGet httpHeaders name    string   required        The header field name  This will be canonicalized upon output  so case variant names will be understood as the same header           httpGet httpHeaders value    string   required        The header field value        httpGet path    string       Path to access on the HTTP server         httpGet scheme    string       Scheme to use for connecting to the host  Defaults to HTTP       tcpSocket    TCPSocketAction     TCPSocket specifies an action involving a TCP port      a name  TCPSocketAction    a     TCPSocketAction describes an action based on opening a socket         tcpSocket port    IntOrString   
required      Number or name of the port to access on the container  Number must be in the range 1 to 65535  Name must be an IANA SVC NAME        a name  IntOrString    a       IntOrString is a type that can hold an int32 or a string   When used in JSON or YAML marshalling and unmarshalling  it produces or consumes the inner type   This allows you to have  for example  a JSON field that can accept a name or number          tcpSocket host    string       Optional  Host name to connect to  defaults to the pod IP       initialDelaySeconds    int32     Number of seconds after the container has started before liveness probes are initiated  More info  https   kubernetes io docs concepts workloads pods pod lifecycle container probes      terminationGracePeriodSeconds    int64     Optional duration in seconds the pod needs to terminate gracefully upon probe failure  The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal  Set this value longer than the expected cleanup time for your process  If this value is nil  the pod s terminationGracePeriodSeconds will be used  Otherwise  this value overrides the value provided by the pod spec  Value must be non negative integer  The value zero indicates stop immediately via the kill signal  no opportunity to shut down   This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate  Minimum value is 1  spec terminationGracePeriodSeconds is used if unset       periodSeconds    int32     How often  in seconds  to perform the probe  Default to 10 seconds  Minimum value is 1       timeoutSeconds    int32     Number of seconds after which the probe times out  Defaults to 1 second  Minimum value is 1  More info  https   kubernetes io docs concepts workloads pods pod lifecycle container probes      failureThreshold    int32     Minimum consecutive failures for the probe to be considered failed 
after having succeeded  Defaults to 3  Minimum value is 1       successThreshold    int32     Minimum consecutive successes for the probe to be considered successful after having failed  Defaults to 1  Must be 1 for liveness and startup  Minimum value is 1       grpc    GRPCAction     GRPC specifies an action involving a GRPC port      a name  GRPCAction    a              grpc port    int32   required      Port number of the gRPC service  Number must be in the range 1 to 65535         grpc service    string       Service is the name of the service to place in the gRPC HealthCheckRequest  see https   github com grpc grpc blob master doc health checking md            If this is not specified  the default behavior is defined by gRPC          PodStatus   PodStatus   PodStatus represents information about the status of a pod  Status may trail the actual state of a system  especially if the node that hosts the pod cannot contact the control plane    hr       nominatedNodeName    string     nominatedNodeName is set only when this pod preempts other pods on the node  but it cannot be scheduled right away as preemption victims receive their graceful termination periods  This field does not guarantee that the pod will be scheduled on this node  Scheduler may decide to place the pod elsewhere if other nodes become available sooner  Scheduler may also decide to give the resources on this node to a higher priority pod that is created after preemption  As a result  this field may be different than PodSpec nodeName when the pod is scheduled       hostIP    string     hostIP holds the IP address of the host to which the pod is assigned  Empty if the pod has not started yet  A pod can be assigned to a node that has a problem in kubelet which in turns mean that HostIP will not be updated even if there is a node is assigned to pod      hostIPs      HostIP      Patch strategy  merge on key  ip         Atomic  will be replaced during a merge       hostIPs holds the IP addresses 
allocated to the host  If this field is specified  the first entry must match the hostIP field  This list is empty if the pod has not started yet  A pod can be assigned to a node that has a problem in kubelet which in turns means that HostIPs will not be updated even if there is a node is assigned to this pod      a name  HostIP    a     HostIP represents a single IP address allocated to the host          hostIPs ip    string   required      IP is the IP address assigned to the host      startTime    Time     RFC 3339 date and time at which the object was acknowledged by the Kubelet  This is before the Kubelet pulled the container image s  for the pod      a name  Time    a     Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers        phase    string     The phase of a Pod is a simple  high level summary of where the Pod is in its lifecycle  The conditions array  the reason and message fields  and the individual container status arrays contain more detail about the pod s status  There are five possible phase values       Pending  The pod has been accepted by the Kubernetes system  but one or more of the container images has not been created  This includes time before being scheduled as well as time spent downloading images over the network  which could take a while  Running  The pod has been bound to a node  and all of the containers have been created  At least one container is still running  or is in the process of starting or restarting  Succeeded  All containers in the pod have terminated in success  and will not be restarted  Failed  All containers in the pod have terminated  and at least one container has terminated in failure  The container either exited with non zero status or was terminated by the system  Unknown  For some reason the state of the pod could not be obtained  typically due to an error in communicating with the host of the pod 
  More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-phase

- **message** (string)

  A human-readable message indicating details about why the pod is in this condition.

- **reason** (string)

  A brief CamelCase message indicating details about why the pod is in this state, e.g. 'Evicted'.

- **podIP** (string)

  podIP address allocated to the pod. Routable at least within the cluster. Empty if not yet allocated.

- **podIPs** ([]PodIP)

  *Patch strategy: merge on key `ip`*

  *Map: unique values on key ip will be kept during a merge*

  podIPs holds the IP addresses allocated to the pod. If this field is specified, the 0th entry must match the podIP field. Pods may be allocated at most 1 value for each of IPv4 and IPv6. This list is empty if no IPs have been allocated yet.

  <a name="PodIP"></a>
  *PodIP represents a single IP address allocated to the pod.*

  - **podIPs.ip** (string), required

    IP is the IP address assigned to the pod.

- **conditions** ([]PodCondition)

  *Patch strategy: merge on key `type`*

  *Map: unique values on key type will be kept during a merge*

  Current service state of pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions

  <a name="PodCondition"></a>
  *PodCondition contains details for the current condition of this pod.*

  - **conditions.status** (string), required

    Status is the status of the condition. Can be True, False, Unknown. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions

  - **conditions.type** (string), required

    Type is the type of the condition. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-conditions

  - **conditions.lastProbeTime** (Time)

    Last time we probed the condition.

    <a name="Time"></a>
    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*

  - **conditions.lastTransitionTime** (Time)

    Last time the condition transitioned from one status to another.

    <a name="Time"></a>
    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*

  - **conditions.message** (string)

    Human-readable message indicating details about last transition.

  - **conditions.reason** (string)

    Unique, one-word, CamelCase reason for the condition's last transition.

- **qosClass** (string)

  The Quality of Service (QOS) classification assigned to the pod based on resource requirements. See PodQOSClass type for available QOS classes. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#quality-of-service-classes

- **initContainerStatuses** ([]ContainerStatus)

  *Atomic: will be replaced during a merge*

  The list has one entry per init container in the manifest. The most recent successful init container will have ready = true, the most recently started container will have startTime set. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status

  <a name="ContainerStatus"></a>
  *ContainerStatus contains details for the current status of this container.*

  - **initContainerStatuses.allocatedResources** (map[string]Quantity)

    AllocatedResources represents the compute resources allocated for this container by the node. Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize.

  - **initContainerStatuses.allocatedResourcesStatus** ([]ResourceStatus)

    *Patch strategy: merge on key `name`*

    *Map: unique values on key name will be kept during a merge*

    AllocatedResourcesStatus represents the status of various resources allocated for this Pod.

    <a name="ResourceStatus"></a>
    - **initContainerStatuses.allocatedResourcesStatus.name** (string), required

      Name of the resource. Must be unique within the pod and match one of the resources from the pod spec.

    - **initContainerStatuses.allocatedResourcesStatus.resources** ([]ResourceHealth)

      *Map: unique values on key resourceID will be kept during a merge*

      List of unique resources' health. Each element in the list contains a unique resource ID and resource health. At a minimum, ResourceID must uniquely identify the Resource allocated to the Pod on the Node for the lifetime of a Pod. See ResourceID type for its definition.

      <a name="ResourceHealth"></a>
      *ResourceHealth represents the health of a resource. It has the latest device health information. This is a part of KEP https://kep.k8s.io/4680 and historical health changes are planned to be added in future iterations of a KEP.*

      - **initContainerStatuses.allocatedResourcesStatus.resources.resourceID** (string), required

        ResourceID is the unique identifier of the resource. See the ResourceID type for more information.

      - **initContainerStatuses.allocatedResourcesStatus.resources.health** (string)

        Health of the resource, can be one of:
        - Healthy: operates as normal
        - Unhealthy: reported unhealthy. We consider this a temporary health issue since we do not have a mechanism today to distinguish temporary and permanent issues.
        - Unknown: The status cannot be determined. For example, Device Plugin got unregistered and hasn't been re-registered since.

        In the future we may want to introduce the PermanentlyUnhealthy Status.

  - **initContainerStatuses.containerID** (string)

    ContainerID is the ID of the container in the format '<type>://<container_id>', where type is a container runtime identifier, returned from the Version call of the CRI API (for example "containerd").

  - **initContainerStatuses.image** (string), required

    Image is the name of the container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https://kubernetes.io/docs/concepts/containers/images

  - **initContainerStatuses.imageID** (string), required

    ImageID is the image ID of the container's image. The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime.

  - **initContainerStatuses.lastState** (ContainerState)

    LastTerminationState holds the last termination state of the container to help debug container crashes and restarts. This field is not populated if the container is still running and RestartCount is 0.

    <a name="ContainerState"></a>
    *ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting.*

    - **initContainerStatuses.lastState.running** (ContainerStateRunning)

      Details about a running container.

      <a name="ContainerStateRunning"></a>
      *ContainerStateRunning is a running state of a container.*

      - **initContainerStatuses.lastState.running.startedAt** (Time)

        Time at which the container was last (re-)started.

        <a name="Time"></a>
        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*

    - **initContainerStatuses.lastState.terminated** (ContainerStateTerminated)

      Details about a terminated container.

      <a name="ContainerStateTerminated"></a>
      *ContainerStateTerminated is a terminated state of a container.*

      - **initContainerStatuses.lastState.terminated.containerID** (string)

        Container's ID in the format '<type>://<container_id>'.

      - **initContainerStatuses.lastState.terminated.exitCode** (int32), required

        Exit status from the last termination of the container.

      - **initContainerStatuses.lastState.terminated.startedAt** (Time)

        Time at which previous execution of the container started.

        <a name="Time"></a>
        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*

      - **initContainerStatuses.lastState.terminated.finishedAt** (Time)

        Time at which the container last terminated.

        <a name="Time"></a>
        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*

      - **initContainerStatuses.lastState.terminated.message** (string)

        Message regarding the last termination of the container.

      - **initContainerStatuses.lastState.terminated.reason** (string)

        (brief) reason from the last termination of the container.

      - **initContainerStatuses.lastState.terminated.signal** (int32)

        Signal from the last termination of the container.

    - **initContainerStatuses.lastState.waiting** (ContainerStateWaiting)

      Details about a waiting container.

      <a name="ContainerStateWaiting"></a>
      *ContainerStateWaiting is a waiting state of a container.*

      - **initContainerStatuses.lastState.waiting.message** (string)

        Message regarding why the container is not yet running.

      - **initContainerStatuses.lastState.waiting.reason** (string)

        (brief) reason the container is not yet running.

  - **initContainerStatuses.name** (string), required

    Name is a DNS_LABEL representing the unique name of the container. Each container in a pod must have a unique name across all container types. Cannot be updated.

  - **initContainerStatuses.ready** (boolean), required

    Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. If no readiness probes are specified, this field defaults to true once the container is fully started (see Started field).

    The value is typically used to determine whether a container is ready to accept traffic.

  - **initContainerStatuses.resources** (ResourceRequirements)

    Resources represents the compute resource requests and limits that have been successfully enacted on the running container after it has been started or has been successfully resized.

    <a name="ResourceRequirements"></a>
    *ResourceRequirements describes the compute resource requirements.*

    - **initContainerStatuses.resources.claims** ([]ResourceClaim)

      *Map: unique values on key name will be kept during a merge*

      Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container.

      This is an alpha field and requires enabling the DynamicResourceAllocation feature gate.

      This field is immutable. It can only be set for containers.

      <a name="ResourceClaim"></a>
      *ResourceClaim references one entry in PodSpec.ResourceClaims.*

      - **initContainerStatuses.resources.claims.name** (string), required

        Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.

      - **initContainerStatuses.resources.claims.request** (string)

        Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request.

    - **initContainerStatuses.resources.limits** (map[string]Quantity)

      Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

    - **initContainerStatuses.resources.requests** (map[string]Quantity)

      Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

  - **initContainerStatuses.restartCount** (int32), required

    RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts and then the value may be reset to 0. The value is never negative.

  - **initContainerStatuses.started** (boolean)

    Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. In both cases, startup probes will run again. Is always true when no startupProbe is defined and the container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false.

  - **initContainerStatuses.state** (ContainerState)

    State holds details about the container's current condition.

    <a name="ContainerState"></a>
    *ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting.*

    - **initContainerStatuses.state.running** (ContainerStateRunning)

      Details about a running container.

      <a name="ContainerStateRunning"></a>
      *ContainerStateRunning is a running state of a container.*

      - **initContainerStatuses.state.running.startedAt** (Time)

        Time at which the container was last (re-)started.

        <a name="Time"></a>
        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*

    - **initContainerStatuses.state.terminated** (ContainerStateTerminated)

      Details about a terminated container.

      <a name="ContainerStateTerminated"></a>
      *ContainerStateTerminated is a terminated state of a container.*

      - **initContainerStatuses.state.terminated.containerID** (string)

        Container's ID in the format '<type>://<container_id>'.

      - **initContainerStatuses.state.terminated.exitCode** (int32), required

        Exit status from the last termination of the container.

      - **initContainerStatuses.state.terminated.startedAt** (Time)

        Time at which previous execution of the container started.

        <a name="Time"></a>
        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*

      - **initContainerStatuses.state.terminated.finishedAt** (Time)

        Time at which the container last terminated.

        <a name="Time"></a>
        *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*

      - **initContainerStatuses.state.terminated.message** (string)

        Message regarding the last termination of the container.

      - **initContainerStatuses.state.terminated.reason** (string)

        (brief) reason from the last termination of the container.

      - **initContainerStatuses.state.terminated.signal** (int32)

        Signal from the last termination of the container.

    - **initContainerStatuses.state.waiting** (ContainerStateWaiting)

      Details about a waiting container.

      <a name="ContainerStateWaiting"></a>
      *ContainerStateWaiting is a waiting state of a container.*
      - **initContainerStatuses.state.waiting.message** (string)

        Message regarding why the container is not yet running.

      - **initContainerStatuses.state.waiting.reason** (string)

        (brief) reason the container is not yet running.

  - **initContainerStatuses.user** (ContainerUser)

    User represents user identity information initially attached to the first process of the container.

    <a name="ContainerUser"></a>
    *ContainerUser represents user identity information.*

    - **initContainerStatuses.user.linux** (LinuxContainerUser)

      Linux holds user identity information initially attached to the first process of the containers in Linux. Note that the actual running identity can be changed if the process has enough privilege to do so.

      <a name="LinuxContainerUser"></a>
      *LinuxContainerUser represents user identity information in Linux containers.*

      - **initContainerStatuses.user.linux.gid** (int64), required

        GID is the primary gid initially attached to the first process in the container.

      - **initContainerStatuses.user.linux.uid** (int64), required

        UID is the primary uid initially attached to the first process in the container.

      - **initContainerStatuses.user.linux.supplementalGroups** ([]int64)

        *Atomic: will be replaced during a merge*

        SupplementalGroups are the supplemental groups initially attached to the first process in the container.

  - **initContainerStatuses.volumeMounts** ([]VolumeMountStatus)

    *Patch strategy: merge on key `mountPath`*

    *Map: unique values on key mountPath will be kept during a merge*

    Status of volume mounts.

    <a name="VolumeMountStatus"></a>
    *VolumeMountStatus shows status of volume mounts.*

    - **initContainerStatuses.volumeMounts.mountPath** (string), required

      MountPath corresponds to the original VolumeMount.

    - **initContainerStatuses.volumeMounts.name** (string), required

      Name corresponds to the name of the original VolumeMount.

    - **initContainerStatuses.volumeMounts.readOnly** (boolean)

      ReadOnly corresponds to the original VolumeMount.

    - **initContainerStatuses.volumeMounts.recursiveReadOnly** (string)

      RecursiveReadOnly must be set to Disabled, Enabled, or unspecified (for non-readonly mounts). An IfPossible value in the original VolumeMount must be translated to Disabled or Enabled, depending on the mount result.

- **containerStatuses** ([]ContainerStatus)

  *Atomic: will be replaced during a merge*

  The list has one entry per container in the manifest. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status

  <a name="ContainerStatus"></a>
  *ContainerStatus contains details for the current status of this container.*

  - **containerStatuses.allocatedResources** (map[string]Quantity)

    AllocatedResources represents the compute resources allocated for this container by the node. Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize.

  - **containerStatuses.allocatedResourcesStatus** ([]ResourceStatus)

    *Patch strategy: merge on key `name`*

    *Map: unique values on key name will be kept during a merge*

    AllocatedResourcesStatus represents the status of various resources allocated for this Pod.

    <a name="ResourceStatus"></a>

    - **containerStatuses.allocatedResourcesStatus.name** (string), required

      Name of the resource. Must be unique within the pod and match one of the resources from the pod spec.

    - **containerStatuses.allocatedResourcesStatus.resources** ([]ResourceHealth)

      *Map: unique values on key resourceID will be kept during a merge*

      List of unique resources' health. Each element in the list contains a unique resource ID and resource health. At a minimum, ResourceID must uniquely identify the Resource allocated to the Pod on the Node for the lifetime of a Pod. See ResourceID type for its definition.

      <a name="ResourceHealth"></a>
      *ResourceHealth represents the health of a resource. It has the latest device health information. This is a part of KEP https://kep.k8s.io/4680 and historical health changes are planned to be added in future iterations of a KEP.*

      - **containerStatuses.allocatedResourcesStatus.resources.resourceID** (string), required

        ResourceID is the unique identifier of the resource. See the ResourceID type for more information.

      - **containerStatuses.allocatedResourcesStatus.resources.health** (string)

        Health of the resource, can be one of:
        - Healthy: operates as normal
        - Unhealthy: reported unhealthy. We consider this a temporary health issue since we do not have a mechanism today to distinguish temporary and permanent issues.
        - Unknown: The status cannot be determined. For example, Device Plugin got unregistered and hasn't been re-registered since.

        In the future we may want to introduce the PermanentlyUnhealthy Status.

  - **containerStatuses.containerID** (string)

    ContainerID is the ID of the container in the format '<type>://<container_id>', where type is a container runtime identifier, returned from the Version call of the CRI API (for example "containerd").

  - **containerStatuses.image** (string), required

    Image is the name of the container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https://kubernetes.io/docs/concepts/containers/images

  - **containerStatuses.imageID** (string), required

    ImageID is the image ID of the container's image. The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime.
- **containerStatuses.lastState** ([ContainerState](#ContainerState)): LastTerminationState holds the last termination state of the container to help debug container crashes and restarts. This field is not populated if the container is still running and RestartCount is 0.

  <a name="ContainerState"></a>*ContainerState holds a possible state of container. Only one of its members may be specified. If none of them is specified, the default one is ContainerStateWaiting.*

- **containerStatuses.lastState.running** ([ContainerStateRunning](#ContainerStateRunning)): Details about a running container.

  <a name="ContainerStateRunning"></a>*ContainerStateRunning is a running state of a container.*

- **containerStatuses.lastState.running.startedAt** ([Time](#Time)): Time at which the container was last (re-)started.

  <a name="Time"></a>*Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*

- **containerStatuses.lastState.terminated** ([ContainerStateTerminated](#ContainerStateTerminated)): Details about a terminated container.

  <a name="ContainerStateTerminated"></a>*ContainerStateTerminated is a terminated state of a container.*

- **containerStatuses.lastState.terminated.containerID** (string): Container's ID in the format `<type>://<container_id>`.
- **containerStatuses.lastState.terminated.exitCode** (int32, required): Exit status from the last termination of the container.
- **containerStatuses.lastState.terminated.startedAt** ([Time](#Time)): Time at which previous execution of the container started.
- **containerStatuses.lastState.terminated.finishedAt** ([Time](#Time)): Time at which the container last terminated.
- **containerStatuses.lastState.terminated.message** (string): Message regarding the last termination of the container.
- **containerStatuses.lastState.terminated.reason** (string): (brief) reason from the last termination of the container.
- **containerStatuses.lastState.terminated.signal** (int32): Signal from the last termination of the container.
- **containerStatuses.lastState.waiting** ([ContainerStateWaiting](#ContainerStateWaiting)): Details about a waiting container.

  <a name="ContainerStateWaiting"></a>*ContainerStateWaiting is a waiting state of a container.*

- **containerStatuses.lastState.waiting.message** (string): Message regarding why the container is not yet running.
- **containerStatuses.lastState.waiting.reason** (string): (brief) reason the container is not yet running.
- **containerStatuses.name** (string, required): Name is a DNS_LABEL representing the unique name of the container. Each container in a pod must have a unique name across all container types. Cannot be updated.
- **containerStatuses.ready** (boolean, required): Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. If no readiness probes are specified, this field defaults to true once the container is fully started (see the Started field). The value is typically used to determine whether a container is ready to accept traffic.
- **containerStatuses.resources** ([ResourceRequirements](#ResourceRequirements)): Resources represents the compute resource requests and limits that have been successfully enacted on the running container after it has been started or has been successfully resized.

  <a name="ResourceRequirements"></a>*ResourceRequirements describes the compute resource requirements.*

- **containerStatuses.resources.claims** ([][ResourceClaim](#ResourceClaim)): *Map: unique values on key `name` will be kept during a merge.* Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers.

  <a name="ResourceClaim"></a>*ResourceClaim references one entry in PodSpec.ResourceClaims.*

- **containerStatuses.resources.claims.name** (string, required): Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
- **containerStatuses.resources.claims.request** (string): Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available; otherwise only the result of this request.
- **containerStatuses.resources.limits** (map[string][Quantity](#Quantity)): Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- **containerStatuses.resources.requests** (map[string][Quantity](#Quantity)): Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- **containerStatuses.restartCount** (int32, required): RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts, and then the value may be reset to 0. The value is never negative.
- **containerStatuses.started** (boolean): Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false; becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily; in both cases, startup probes will run again. Is always true when no startupProbe is defined and the container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false.
- **containerStatuses.state** ([ContainerState](#ContainerState)): State holds details about the container's current condition.
- **containerStatuses.state.running** ([ContainerStateRunning](#ContainerStateRunning)): Details about a running container.
- **containerStatuses.state.running.startedAt** ([Time](#Time)): Time at which the container was last (re-)started.
- **containerStatuses.state.terminated** ([ContainerStateTerminated](#ContainerStateTerminated)): Details about a terminated container.
- **containerStatuses.state.terminated.containerID** (string): Container's ID in the format `<type>://<container_id>`.
- **containerStatuses.state.terminated.exitCode** (int32, required): Exit status from the last termination of the container.
- **containerStatuses.state.terminated.startedAt** ([Time](#Time)): Time at which previous execution of the container started.
- **containerStatuses.state.terminated.finishedAt** ([Time](#Time)): Time at which the container last terminated.
- **containerStatuses.state.terminated.message** (string): Message regarding the last termination of the container.
- **containerStatuses.state.terminated.reason** (string): (brief) reason from the last termination of the container.
- **containerStatuses.state.terminated.signal** (int32): Signal from the last termination of the container.
- **containerStatuses.state.waiting** ([ContainerStateWaiting](#ContainerStateWaiting)): Details about a waiting container.
- **containerStatuses.state.waiting.message** (string): Message regarding why the container is not yet running.
- **containerStatuses.state.waiting.reason** (string): (brief) reason the container is not yet running.
- **containerStatuses.user** ([ContainerUser](#ContainerUser)): User represents user identity information initially attached to the first process of the container.

  <a name="ContainerUser"></a>*ContainerUser represents user identity information.*

- **containerStatuses.user.linux** ([LinuxContainerUser](#LinuxContainerUser)): Linux holds user identity information initially attached to the first process of the containers in Linux. Note that the actual running identity can be changed if the process has enough privilege to do so.

  <a name="LinuxContainerUser"></a>*LinuxContainerUser represents user identity information in Linux containers.*

- **containerStatuses.user.linux.gid** (int64, required): GID is the primary gid initially attached to the first process in the container.
- **containerStatuses.user.linux.uid** (int64, required): UID is the primary uid initially attached to the first process in the container.
- **containerStatuses.user.linux.supplementalGroups** ([]int64): *Atomic: will be replaced during a merge.* SupplementalGroups are the supplemental groups initially attached to the first process in the container.
- **containerStatuses.volumeMounts** ([][VolumeMountStatus](#VolumeMountStatus)): *Patch strategy: merge on key `mountPath`. Map: unique values on key `mountPath` will be kept during a merge.* Status of volume mounts.

  <a name="VolumeMountStatus"></a>*VolumeMountStatus shows status of volume mounts.*

- **containerStatuses.volumeMounts.mountPath** (string, required): MountPath corresponds to the original VolumeMount.
- **containerStatuses.volumeMounts.name** (string, required): Name corresponds to the name of the original VolumeMount.
- **containerStatuses.volumeMounts.readOnly** (boolean): ReadOnly corresponds to the original VolumeMount.
- **containerStatuses.volumeMounts.recursiveReadOnly** (string): RecursiveReadOnly must be set to Disabled, Enabled, or unspecified (for non-readonly mounts). An IfPossible value in the original VolumeMount must be translated to Disabled or Enabled, depending on the mount result.
- **ephemeralContainerStatuses** ([][ContainerStatus](#ContainerStatus)): *Atomic: will be replaced during a merge.* Status for any ephemeral containers that have run in this pod.
- **ephemeralContainerStatuses.allocatedResources** (map[string][Quantity](#Quantity)): AllocatedResources represents the compute resources allocated for this container by the node. Kubelet sets this value to Container.Resources.Requests upon successful pod admission and after successfully admitting desired pod resize.
- **ephemeralContainerStatuses.allocatedResourcesStatus** ([][ResourceStatus](#ResourceStatus)): *Patch strategy: merge on key `name`. Map: unique values on key `name` will be kept during a merge.* AllocatedResourcesStatus represents the status of various resources allocated for this Pod.
- **ephemeralContainerStatuses.allocatedResourcesStatus.name** (string, required): Name of the resource. Must be unique within the pod and match one of the resources from the pod spec.
- **ephemeralContainerStatuses.allocatedResourcesStatus.resources** ([][ResourceHealth](#ResourceHealth)): *Map: unique values on key `resourceID` will be kept during a merge.* List of unique resources' health. Each element in the list contains a unique resource ID and resource health. At a minimum, ResourceID must uniquely identify the Resource allocated to the Pod on the Node for the lifetime of a Pod. See the ResourceID type for its definition.
- **ephemeralContainerStatuses.allocatedResourcesStatus.resources.resourceID** (string, required): ResourceID is the unique identifier of the resource. See the ResourceID type for more information.
- **ephemeralContainerStatuses.allocatedResourcesStatus.resources.health** (string): Health of the resource; can be one of Healthy (operates as normal), Unhealthy (reported unhealthy; we consider this a temporary health issue, since we do not have a mechanism today to distinguish temporary and permanent issues), or Unknown (the status cannot be determined; for example, the Device Plugin got unregistered and hasn't been re-registered since). In future we may want to introduce the PermanentlyUnhealthy status.
- **ephemeralContainerStatuses.containerID** (string): ContainerID is the ID of the container in the format `<type>://<container_id>`, where type is a container runtime identifier returned from the Version call of the CRI API (for example, "containerd").
- **ephemeralContainerStatuses.image** (string, required): Image is the name of the container image that the container is running. The container image may not match the image used in the PodSpec, as it may have been resolved by the runtime. More info: https://kubernetes.io/docs/concepts/containers/images
- **ephemeralContainerStatuses.imageID** (string, required): ImageID is the image ID of the container's image. The image ID may not match the image ID of the image used in the PodSpec, as it may have been resolved by the runtime.
- **ephemeralContainerStatuses.lastState** ([ContainerState](#ContainerState)): LastTerminationState holds the last termination state of the container to help debug container crashes and restarts. This field is not populated if the container is still running and RestartCount is 0.
- **ephemeralContainerStatuses.lastState.running** ([ContainerStateRunning](#ContainerStateRunning)): Details about a running container.
- **ephemeralContainerStatuses.lastState.running.startedAt** ([Time](#Time)): Time at which the container was last (re-)started.
- **ephemeralContainerStatuses.lastState.terminated** ([ContainerStateTerminated](#ContainerStateTerminated)): Details about a terminated container.
- **ephemeralContainerStatuses.lastState.terminated.containerID** (string): Container's ID in the format `<type>://<container_id>`.
- **ephemeralContainerStatuses.lastState.terminated.exitCode** (int32, required): Exit status from the last termination of the container.
- **ephemeralContainerStatuses.lastState.terminated.startedAt** ([Time](#Time)): Time at which previous execution of the container started.
- **ephemeralContainerStatuses.lastState.terminated.finishedAt** ([Time](#Time)): Time at which the container last terminated.
- **ephemeralContainerStatuses.lastState.terminated.message** (string): Message regarding the last termination of the container.
- **ephemeralContainerStatuses.lastState.terminated.reason** (string): (brief) reason from the last termination of the container.
- **ephemeralContainerStatuses.lastState.terminated.signal** (int32): Signal from the last termination of the container.
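As a sketch of the debugging use of `lastState.terminated` described above: a small helper that summarizes why a container last died, using a hypothetical status dict (field names follow the reference; the sample values are illustrative only).

```python
def last_termination_summary(cs):
    """Summarize lastState.terminated of a container status dict, or
    return None when the field is not populated (e.g. restartCount == 0
    and the container is still running)."""
    term = cs.get("lastState", {}).get("terminated")
    if term is None:
        return None
    parts = ["exitCode=%d" % term["exitCode"]]  # exitCode is required
    if term.get("reason"):                      # optional (brief) reason
        parts.append("reason=%s" % term["reason"])
    if term.get("signal"):                      # optional signal
        parts.append("signal=%d" % term["signal"])
    return ", ".join(parts)

# Hypothetical entry from status.containerStatuses.
cs = {
    "name": "app",
    "restartCount": 3,
    "lastState": {"terminated": {"exitCode": 137, "reason": "OOMKilled", "signal": 9}},
}
print(last_termination_summary(cs))  # exitCode=137, reason=OOMKilled, signal=9
```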
- **ephemeralContainerStatuses.lastState.waiting** ([ContainerStateWaiting](#ContainerStateWaiting)): Details about a waiting container.
- **ephemeralContainerStatuses.lastState.waiting.message** (string): Message regarding why the container is not yet running.
- **ephemeralContainerStatuses.lastState.waiting.reason** (string): (brief) reason the container is not yet running.
- **ephemeralContainerStatuses.name** (string, required): Name is a DNS_LABEL representing the unique name of the container. Each container in a pod must have a unique name across all container types. Cannot be updated.
- **ephemeralContainerStatuses.ready** (boolean, required): Ready specifies whether the container is currently passing its readiness check. The value will change as readiness probes keep executing. If no readiness probes are specified, this field defaults to true once the container is fully started (see the Started field). The value is typically used to determine whether a container is ready to accept traffic.
- **ephemeralContainerStatuses.resources** ([ResourceRequirements](#ResourceRequirements)): Resources represents the compute resource requests and limits that have been successfully enacted on the running container after it has been started or has been successfully resized.
- **ephemeralContainerStatuses.resources.claims** ([][ResourceClaim](#ResourceClaim)): *Map: unique values on key `name` will be kept during a merge.* Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers.
- **ephemeralContainerStatuses.resources.claims.name** (string, required): Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container.
- **ephemeralContainerStatuses.resources.claims.request** (string): Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available; otherwise only the result of this request.
- **ephemeralContainerStatuses.resources.limits** (map[string][Quantity](#Quantity)): Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- **ephemeralContainerStatuses.resources.requests** (map[string][Quantity](#Quantity)): Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- **ephemeralContainerStatuses.restartCount** (int32, required): RestartCount holds the number of times the container has been restarted. Kubelet makes an effort to always increment the value, but there are cases when the state may be lost due to node restarts, and then the value may be reset to 0. The value is never negative.
- **ephemeralContainerStatuses.started** (boolean): Started indicates whether the container has finished its postStart lifecycle hook and passed its startup probe. Initialized as false; becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily; in both cases, startup probes will run again. Is always true when no startupProbe is defined and the container is running and has passed the postStart lifecycle hook. The null value must be treated the same as false.
- **ephemeralContainerStatuses.state** ([ContainerState](#ContainerState)): State holds details about the container's current condition.
- **ephemeralContainerStatuses.state.running** ([ContainerStateRunning](#ContainerStateRunning)): Details about a running container.
- **ephemeralContainerStatuses.state.running.startedAt** ([Time](#Time)): Time at which the container was last (re-)started.
- **ephemeralContainerStatuses.state.terminated** ([ContainerStateTerminated](#ContainerStateTerminated)): Details about a terminated container.
- **ephemeralContainerStatuses.state.terminated.containerID** (string): Container's ID in the format `<type>://<container_id>`.
- **ephemeralContainerStatuses.state.terminated.exitCode** (int32, required): Exit status from the last termination of the container.
- **ephemeralContainerStatuses.state.terminated.startedAt** ([Time](#Time)): Time at which previous execution of the container started.
- **ephemeralContainerStatuses.state.terminated.finishedAt** ([Time](#Time)): Time at which the container last terminated.
- **ephemeralContainerStatuses.state.terminated.message** (string): Message regarding the last termination of the container.
- **ephemeralContainerStatuses.state.terminated.reason** (string): (brief) reason from the last termination of the container.
- **ephemeralContainerStatuses.state.terminated.signal** (int32): Signal from the last termination of the container.
- **ephemeralContainerStatuses.state.waiting** ([ContainerStateWaiting](#ContainerStateWaiting)): Details about a waiting container.
- **ephemeralContainerStatuses.state.waiting.message** (string): Message regarding why the container is not yet running.
- **ephemeralContainerStatuses.state.waiting.reason** (string): (brief) reason the container is not yet running.
- **ephemeralContainerStatuses.user** ([ContainerUser](#ContainerUser)): User represents user identity information initially attached to the first process of the container.
- **ephemeralContainerStatuses.user.linux** ([LinuxContainerUser](#LinuxContainerUser)): Linux holds user identity information initially attached to the first process of the containers in Linux. Note that the actual running identity can be changed if the process has enough privilege to do so.
- **ephemeralContainerStatuses.user.linux.gid** (int64, required): GID is the primary gid initially attached to the first process in the container.
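The `resources.requests` entries above state that an omitted request defaults to the corresponding limit when one is explicitly specified. A minimal sketch of that defaulting rule, operating on hypothetical ResourceRequirements-shaped dicts:

```python
def effective_requests(resources):
    """Fill in omitted entries of resources['requests'] from
    resources['limits'], per the documented defaulting rule.
    (The 'implementation-defined value' fallback for resources with no
    limit is out of scope here.)"""
    limits = resources.get("limits", {})
    requests = dict(resources.get("requests", {}))
    for name, quantity in limits.items():
        requests.setdefault(name, quantity)  # explicit requests win
    return requests

# Hypothetical input: cpu request set explicitly, memory request omitted.
res = {"limits": {"cpu": "500m", "memory": "256Mi"}, "requests": {"cpu": "100m"}}
print(effective_requests(res))  # {'cpu': '100m', 'memory': '256Mi'}
```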
- **ephemeralContainerStatuses.user.linux.uid** (int64, required): UID is the primary uid initially attached to the first process in the container.
- **ephemeralContainerStatuses.user.linux.supplementalGroups** ([]int64): *Atomic: will be replaced during a merge.* SupplementalGroups are the supplemental groups initially attached to the first process in the container.
- **ephemeralContainerStatuses.volumeMounts** ([][VolumeMountStatus](#VolumeMountStatus)): *Patch strategy: merge on key `mountPath`. Map: unique values on key `mountPath` will be kept during a merge.* Status of volume mounts.
- **ephemeralContainerStatuses.volumeMounts.mountPath** (string, required): MountPath corresponds to the original VolumeMount.
- **ephemeralContainerStatuses.volumeMounts.name** (string, required): Name corresponds to the name of the original VolumeMount.
- **ephemeralContainerStatuses.volumeMounts.readOnly** (boolean): ReadOnly corresponds to the original VolumeMount.
- **ephemeralContainerStatuses.volumeMounts.recursiveReadOnly** (string): RecursiveReadOnly must be set to Disabled, Enabled, or unspecified (for non-readonly mounts). An IfPossible value in the original VolumeMount must be translated to Disabled or Enabled, depending on the mount result.
- **resourceClaimStatuses** ([][PodResourceClaimStatus](#PodResourceClaimStatus)): *Patch strategies: retainKeys, merge on key `name`. Map: unique values on key `name` will be kept during a merge.* Status of resource claims.

  <a name="PodResourceClaimStatus"></a>*PodResourceClaimStatus is stored in the PodStatus for each PodResourceClaim which references a ResourceClaimTemplate. It stores the generated name for the corresponding ResourceClaim.*

- **resourceClaimStatuses.name** (string, required): Name uniquely identifies this resource claim inside the pod. This must match the name of an entry in pod.spec.resourceClaims, which implies that the string must be a DNS_LABEL.
- **resourceClaimStatuses.resourceClaimName** (string): ResourceClaimName is the name of the ResourceClaim that was generated for the Pod in the namespace of the Pod. If this is unset, then generating a ResourceClaim was not necessary. The pod.spec.resourceClaims entry can be ignored in this case.
- **resize** (string): Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to "Proposed".

## PodList

PodList is a list of Pods.

---

- **items** ([][Pod](#Pod)), required: List of pods. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
- **apiVersion** (string): APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
- **kind** (string): Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
- **metadata** ([ListMeta](#ListMeta)): Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

## Operations

---

### `get`: read the specified Pod

**HTTP Request**

`GET /api/v1/namespaces/{namespace}/pods/{name}`

**Parameters**

- **name** (*in path*): string, required. Name of the Pod.
- **namespace** (*in path*): string, required. [namespace](#namespace)
- **pretty** (*in query*): string. [pretty](#pretty)

**Response**

200 ([Pod](#Pod)): OK

401: Unauthorized

### `get`: read ephemeralcontainers of the specified Pod

**HTTP Request**

`GET /api/v1/namespaces/{namespace}/pods/{name}/ephemeralcontainers`

**Parameters**

- **name** (*in path*): string, required. Name of the Pod.
- **namespace** (*in path*): string, required. [namespace](#namespace)
- **pretty** (*in query*): string. [pretty](#pretty)

**Response**

200 ([Pod](#Pod)): OK

401: Unauthorized

### `get`: read log of the specified Pod

**HTTP Request**

`GET /api/v1/namespaces/{namespace}/pods/{name}/log`

**Parameters**

- **name** (*in path*): string, required. Name of the Pod.
- **namespace** (*in path*): string, required. [namespace](#namespace)
- **container** (*in query*): string. The container for which to stream logs. Defaults to only container if there is one container in the pod.
- **follow** (*in query*): boolean. Follow the log stream of the pod. Defaults to false.
- **insecureSkipTLSVerifyBackend** (*in query*): boolean. insecureSkipTLSVerifyBackend indicates that the apiserver should not confirm the validity of the serving certificate of the backend it is connecting to. This will make the HTTPS connection between the apiserver and the backend insecure. This means the apiserver cannot verify the log data it is receiving came from the real kubelet. If the kubelet is configured to verify the apiserver's TLS credentials, it does not mean the connection to the real kubelet is vulnerable to a man-in-the-middle attack (e.g. an attacker could not intercept the actual log data coming from the real kubelet).
- **limitBytes** (*in query*): integer. If set, the number of bytes to read from the server before terminating the log output. This may not display a complete final line of logging, and may return slightly more or slightly less than the specified limit.
- **pretty** (*in query*): string. [pretty](#pretty)
- **previous** (*in query*): boolean. Return previous terminated container logs. Defaults to false.
- **sinceSeconds** (*in query*): integer. A relative time in seconds before the current time from which to show logs. If this value precedes the time a pod was started, only logs since the pod start will be returned. If this value is in the future, no logs will be returned. Only one of sinceSeconds or sinceTime may be specified.
- **tailLines** (*in query*): integer. If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime.
- **timestamps** (*in query*): boolean. If true, add an RFC3339 or RFC3339Nano timestamp at the beginning of every line of log output. Defaults to false.

**Response**

200 (string): OK

401: Unauthorized

### `get`: read status of the specified Pod

**HTTP Request**

`GET /api/v1/namespaces/{namespace}/pods/{name}/status`

**Parameters**

- **name** (*in path*): string, required. Name of the Pod.
- **namespace** (*in path*): string, required. [namespace](#namespace)
- **pretty** (*in query*): string. [pretty](#pretty)

**Response**

200 ([Pod](#Pod)): OK

401: Unauthorized

### `list`: list or watch objects of kind Pod

**HTTP Request**

`GET /api/v1/namespaces/{namespace}/pods`

**Parameters**

- **namespace** (*in path*): string, required. [namespace](#namespace)
- **allowWatchBookmarks** (*in query*): boolean. [allowWatchBookmarks](#allowWatchBookmarks)
- **continue** (*in query*): string. [continue](#continue)
- **fieldSelector** (*in query*): string. [fieldSelector](#fieldSelector)
- **labelSelector** (*in query*): string. [labelSelector](#labelSelector)
- **limit** (*in query*): integer. [limit](#limit)
- **pretty** (*in query*): string. [pretty](#pretty)
- **resourceVersion** (*in query*): string. [resourceVersion](#resourceVersion)
- **resourceVersionMatch** (*in query*): string. [resourceVersionMatch](#resourceVersionMatch)
- **sendInitialEvents** (*in query*): boolean. [sendInitialEvents](#sendInitialEvents)
- **timeoutSeconds** (*in query*): integer. [timeoutSeconds](#timeoutSeconds)
- **watch** (*in query*): boolean. [watch](#watch)

**Response**

200 ([PodList](#PodList)): OK

401: Unauthorized

### `list`: list or watch objects of kind Pod

**HTTP Request**

`GET /api/v1/pods`

**Parameters**

- **allowWatchBookmarks** (*in query*): boolean. [allowWatchBookmarks](#allowWatchBookmarks)
- **continue** (*in query*): string. [continue](#continue)
- **fieldSelector** (*in query*): string. [fieldSelector](#fieldSelector)
- **labelSelector** (*in query*): string. [labelSelector](#labelSelector)
- **limit** (*in query*): integer. [limit](#limit)
- **pretty** (*in query*): string. [pretty](#pretty)
- **resourceVersion** (*in query*): string. [resourceVersion](#resourceVersion)
- **resourceVersionMatch** (*in query*): string. [resourceVersionMatch](#resourceVersionMatch)
- **sendInitialEvents** (*in query*): boolean. [sendInitialEvents](#sendInitialEvents)
- **timeoutSeconds** (*in query*): integer. [timeoutSeconds](#timeoutSeconds)
- **watch** (*in query*): boolean. [watch](#watch)

**Response**

200 ([PodList](#PodList)): OK

401: Unauthorized

### `create`: create a Pod

**HTTP Request**

`POST /api/v1/namespaces/{namespace}/pods`

**Parameters**

- **namespace** (*in path*): string, required. [namespace](#namespace)
- **body**: [Pod](#Pod), required.
- **dryRun** (*in query*): string. [dryRun](#dryRun)
- **fieldManager** (*in query*): string. [fieldManager](#fieldManager)
- **fieldValidation** (*in query*): string. [fieldValidation](#fieldValidation)
- **pretty** (*in query*): string. [pretty](#pretty)

**Response**

200 ([Pod](#Pod)): OK

201 ([Pod](#Pod)): Created

202 ([Pod](#Pod)): Accepted

401: Unauthorized

### `update`: replace the specified Pod

**HTTP Request**

`PUT /api/v1/namespaces/{namespace}/pods/{name}`

**Parameters**

- **name** (*in path*): string, required. Name of the Pod.
- **namespace** (*in path*): string, required. [namespace](#namespace)
- **body**: [Pod](#Pod), required.
- **dryRun** (*in query*): string. [dryRun](#dryRun)
- **fieldManager** (*in query*): string. [fieldManager](#fieldManager)
- **fieldValidation** (*in query*): string. [fieldValidation](#fieldValidation)
- **pretty** (*in query*): string. [pretty](#pretty)

**Response**

200 ([Pod](#Pod)): OK

201 ([Pod](#Pod)): Created

401: Unauthorized

### `update`: replace ephemeralcontainers of the specified Pod

**HTTP Request**

`PUT /api/v1/namespaces/{namespace}/pods/{name}/ephemeralcontainers`

**Parameters**

- **name** (*in path*): string, required. Name of the Pod.
- **namespace** (*in path*): string, required. [namespace](#namespace)
- **body**: [Pod](#Pod), required.
- **dryRun** (*in query*): string. [dryRun](#dryRun)
- **fieldManager** (*in query*): string. [fieldManager](#fieldManager)
- **fieldValidation** (*in query*): string. [fieldValidation](#fieldValidation)
- **pretty** (*in query*): string. [pretty](#pretty)

**Response**

200 ([Pod](#Pod)): OK

201 ([Pod](#Pod)): Created

401: Unauthorized

### `update`: replace status of the specified Pod

**HTTP Request**

`PUT /api/v1/namespaces/{namespace}/pods/{name}/status`

**Parameters**

- **name** (*in path*): string, required. Name of the Pod.
- **namespace** (*in path*): string, required. [namespace](#namespace)
- **body**: [Pod](#Pod), required.
- **dryRun** (*in query*): string. [dryRun](#dryRun)
- **fieldManager** (*in query*): string. [fieldManager](#fieldManager)
- **fieldValidation** (*in query*): string. [fieldValidation](#fieldValidation)
- **pretty** (*in query*): string. [pretty](#pretty)
Response   200   a href    Pod  a    OK  201   a href    Pod  a    Created  401  Unauthorized        patch  partially update the specified Pod       HTTP Request  PATCH  api v1 namespaces  namespace  pods  name        Parameters       name     in path    string  required    name of the Pod       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Pod  a    OK  201   a href    Pod  a    Created  401  Unauthorized        patch  partially update ephemeralcontainers of the specified Pod       HTTP Request  PATCH  api v1 namespaces  namespace  pods  name  ephemeralcontainers       Parameters       name     in path    string  required    name of the Pod       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Pod  a    OK  201   a href    Pod  a    Created  401  Unauthorized        patch  partially update status of the specified Pod       HTTP Request  PATCH  api v1 namespaces  namespace  pods  name  status       Parameters       name     in path    string  required    name of the Pod       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           
dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Pod  a    OK  201   a href    Pod  a    Created  401  Unauthorized        delete  delete a Pod       HTTP Request  DELETE  api v1 namespaces  namespace  pods  name        Parameters       name     in path    string  required    name of the Pod       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Pod  a    OK  202   a href    Pod  a    Accepted  401  Unauthorized        deletecollection  delete collection of Pod       HTTP Request  DELETE  api v1 namespaces  namespace  pods       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href  
  resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
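The log endpoint described above takes all of its options as URL query parameters on a plain GET request. As a minimal sketch of how such a request URL is assembled (the host, pod name, and the `pod_log_url` helper are hypothetical, for illustration only; a real client such as kubectl also handles authentication and TLS):

```python
from urllib.parse import urlencode

def pod_log_url(host, namespace, name, **params):
    """Build the URL for GET /api/v1/namespaces/{namespace}/pods/{name}/log.

    Keyword arguments map directly onto the documented query parameters
    (container, follow, tailLines, sinceSeconds, timestamps, and so on).
    """
    base = f"{host}/api/v1/namespaces/{namespace}/pods/{name}/log"
    # Drop unset options so only explicitly requested parameters appear.
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{base}?{query}" if query else base

# Hypothetical API server address and pod name.
url = pod_log_url("https://127.0.0.1:6443", "default", "web-0",
                  container="nginx", tailLines=100, timestamps="true")
```

Because only one of `sinceSeconds` or `sinceTime` may be specified, a fuller client would reject calls that set both before building the query string.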
{"questions":"kubernetes reference apiVersion v1 contenttype apireference title ReplicationController apimetadata weight 4 kind ReplicationController autogenerated true ReplicationController represents the configuration of a replication controller import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"ReplicationController\"\ncontent_type: \"api_reference\"\ndescription: \"ReplicationController represents the configuration of a replication controller.\"\ntitle: \"ReplicationController\"\nweight: 4\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## ReplicationController {#ReplicationController}\n\nReplicationController represents the configuration of a replication controller.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: ReplicationController\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  If the Labels of a ReplicationController are empty, they are defaulted to be the same as the Pod(s) that the replication controller manages. Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">ReplicationControllerSpec<\/a>)\n\n  Spec defines the specification of the desired behavior of the replication controller. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">ReplicationControllerStatus<\/a>)\n\n  Status is the most recently observed status of the replication controller. This data may be out of date by some window of time. Populated by the system. Read-only. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## ReplicationControllerSpec {#ReplicationControllerSpec}\n\nReplicationControllerSpec is the specification of a replication controller.\n\n<hr>\n\n- **selector** (map[string]string)\n\n  Selector is a label query over pods that should match the Replicas count. If Selector is empty, it is defaulted to the labels present on the Pod template. Label keys and values that must match in order to be controlled by this replication controller, if empty defaulted to labels on Pod template. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/#label-selectors\n\n- **template** (<a href=\"\">PodTemplateSpec<\/a>)\n\n  Template is the object that describes the pod that will be created if insufficient replicas are detected. This takes precedence over a TemplateRef. The only allowed template.spec.restartPolicy value is \"Always\". More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/replicationcontroller#pod-template\n\n- **replicas** (int32)\n\n  Replicas is the number of desired replicas. This is a pointer to distinguish between explicit zero and unspecified. Defaults to 1. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/replicationcontroller#what-is-a-replicationcontroller\n\n- **minReadySeconds** (int32)\n\n  Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. 
Defaults to 0 (pod will be considered available as soon as it is ready)\n\n\n\n\n\n## ReplicationControllerStatus {#ReplicationControllerStatus}\n\nReplicationControllerStatus represents the current status of a replication controller.\n\n<hr>\n\n- **replicas** (int32), required\n\n  Replicas is the most recently observed number of replicas. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/replicationcontroller#what-is-a-replicationcontroller\n\n- **availableReplicas** (int32)\n\n  The number of available replicas (ready for at least minReadySeconds) for this replication controller.\n\n- **readyReplicas** (int32)\n\n  The number of ready replicas for this replication controller.\n\n- **fullyLabeledReplicas** (int32)\n\n  The number of pods that have labels matching the labels of the pod template of the replication controller.\n\n- **conditions** ([]ReplicationControllerCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  Represents the latest available observations of a replication controller's current state.\n\n  <a name=\"ReplicationControllerCondition\"><\/a>\n  *ReplicationControllerCondition describes the state of a replication controller at a certain point.*\n\n  - **conditions.status** (string), required\n\n    Status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    Type of replication controller condition.\n\n  - **conditions.lastTransitionTime** (Time)\n\n    The last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    A human readable message indicating details about the transition.\n\n  - **conditions.reason** (string)\n\n    The reason for the condition's last transition.\n\n- **observedGeneration** (int64)\n\n  ObservedGeneration reflects the generation of the most recently observed replication controller.\n\n\n\n\n\n## ReplicationControllerList {#ReplicationControllerList}\n\nReplicationControllerList is a collection of replication controllers.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: ReplicationControllerList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">ReplicationController<\/a>), required\n\n  List of replication controllers. More info: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/replicationcontroller\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ReplicationController\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/replicationcontrollers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicationController\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicationController<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified ReplicationController\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/replicationcontrollers\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicationController\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): 
string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicationController<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ReplicationController\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/replicationcontrollers\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicationControllerList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ReplicationController\n\n#### HTTP Request\n\nGET \/api\/v1\/replicationcontrollers\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a 
href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicationControllerList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ReplicationController\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/replicationcontrollers\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ReplicationController<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicationController<\/a>): OK\n\n201 (<a href=\"\">ReplicationController<\/a>): Created\n\n202 (<a href=\"\">ReplicationController<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ReplicationController\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/replicationcontrollers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicationController\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ReplicationController<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): 
string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicationController<\/a>): OK\n\n201 (<a href=\"\">ReplicationController<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified ReplicationController\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/replicationcontrollers\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicationController\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ReplicationController<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicationController<\/a>): OK\n\n201 (<a href=\"\">ReplicationController<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ReplicationController\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/replicationcontrollers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicationController\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a 
href=\"\">ReplicationController<\/a>): OK\n\n201 (<a href=\"\">ReplicationController<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified ReplicationController\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/replicationcontrollers\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicationController\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ReplicationController<\/a>): OK\n\n201 (<a href=\"\">ReplicationController<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ReplicationController\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/replicationcontrollers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ReplicationController\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ReplicationController\n\n#### 
HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/replicationcontrollers\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   v1    import   k8s io api core v1    kind   ReplicationController  content type   api reference  description   ReplicationController represents the configuration of a replication controller   title   ReplicationController  weight  4 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  
please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  v1    import  k8s io api core v1        ReplicationController   ReplicationController   ReplicationController represents the configuration of a replication controller    hr       apiVersion    v1       kind    ReplicationController       metadata     a href    ObjectMeta  a      If the Labels of a ReplicationController are empty  they are defaulted to be the same as the Pod s  that the replication controller manages  Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    ReplicationControllerSpec  a      Spec defines the specification of the desired behavior of the replication controller  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status      status     a href    ReplicationControllerStatus  a      Status is the most recently observed status of the replication controller  This data may be out of date by some window of time  Populated by the system  Read only  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status         ReplicationControllerSpec   ReplicationControllerSpec   ReplicationControllerSpec is the specification of a replication controller    hr       selector    map string string     Selector is a label query over pods that should match the Replicas count  If Selector is empty  it is defaulted to the labels present on the Pod template  Label keys and values that must match in order to be controlled by this replication controller  if empty defaulted to labels on Pod template  More info  https   kubernetes io docs concepts overview working with objects labels  label selectors      template     
a href    PodTemplateSpec  a      Template is the object that describes the pod that will be created if insufficient replicas are detected  This takes precedence over a TemplateRef  The only allowed template spec restartPolicy value is  Always   More info  https   kubernetes io docs concepts workloads controllers replicationcontroller pod template      replicas    int32     Replicas is the number of desired replicas  This is a pointer to distinguish between explicit zero and unspecified  Defaults to 1  More info  https   kubernetes io docs concepts workloads controllers replicationcontroller what is a replicationcontroller      minReadySeconds    int32     Minimum number of seconds for which a newly created pod should be ready without any of its container crashing  for it to be considered available  Defaults to 0  pod will be considered available as soon as it is ready          ReplicationControllerStatus   ReplicationControllerStatus   ReplicationControllerStatus represents the current status of a replication controller    hr       replicas    int32   required    Replicas is the most recently observed number of replicas  More info  https   kubernetes io docs concepts workloads controllers replicationcontroller what is a replicationcontroller      availableReplicas    int32     The number of available replicas  ready for at least minReadySeconds  for this replication controller       readyReplicas    int32     The number of ready replicas for this replication controller       fullyLabeledReplicas    int32     The number of pods that have labels matching the labels of the pod template of the replication controller       conditions      ReplicationControllerCondition      Patch strategy  merge on key  type         Map  unique values on key type will be kept during a merge       Represents the latest available observations of a replication controller s current state      a name  ReplicationControllerCondition    a     ReplicationControllerCondition describes the state 
of a replication controller at a certain point          conditions status    string   required      Status of the condition  one of True  False  Unknown         conditions type    string   required      Type of replication controller condition         conditions lastTransitionTime    Time       The last time the condition transitioned from one status to another        a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions message    string       A human readable message indicating details about the transition         conditions reason    string       The reason for the condition s last transition       observedGeneration    int64     ObservedGeneration reflects the generation of the most recently observed replication controller          ReplicationControllerList   ReplicationControllerList   ReplicationControllerList is a collection of replication controllers    hr       apiVersion    v1       kind    ReplicationControllerList       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      items       a href    ReplicationController  a    required    List of replication controllers  More info  https   kubernetes io docs concepts workloads controllers replicationcontroller         Operations   Operations      hr             get  read the specified ReplicationController       HTTP Request  GET  api v1 namespaces  namespace  replicationcontrollers  name        Parameters       name     in path    string  required    name of the ReplicationController       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicationController  a    OK  401  Unauthorized        get  read status of 
the specified ReplicationController       HTTP Request  GET  api v1 namespaces  namespace  replicationcontrollers  name  status       Parameters       name     in path    string  required    name of the ReplicationController       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicationController  a    OK  401  Unauthorized        list  list or watch objects of kind ReplicationController       HTTP Request  GET  api v1 namespaces  namespace  replicationcontrollers       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ReplicationControllerList  a    OK  401  Unauthorized        list  list or watch objects of kind ReplicationController       HTTP Request  GET  api v1 replicationcontrollers       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a 
href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ReplicationControllerList  a    OK  401  Unauthorized        create  create a ReplicationController       HTTP Request  POST  api v1 namespaces  namespace  replicationcontrollers       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    ReplicationController  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicationController  a    OK  201   a href    ReplicationController  a    Created  202   a href    ReplicationController  a    Accepted  401  Unauthorized        update  replace the specified ReplicationController       HTTP Request  PUT  api v1 namespaces  namespace  replicationcontrollers  name        Parameters       name     in path    string  required    name of the ReplicationController       namespace     in path    string  required     a href    namespace  a        body     a href    ReplicationController  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    
string     a href    pretty  a          Response   200   a href    ReplicationController  a    OK  201   a href    ReplicationController  a    Created  401  Unauthorized        update  replace status of the specified ReplicationController       HTTP Request  PUT  api v1 namespaces  namespace  replicationcontrollers  name  status       Parameters       name     in path    string  required    name of the ReplicationController       namespace     in path    string  required     a href    namespace  a        body     a href    ReplicationController  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicationController  a    OK  201   a href    ReplicationController  a    Created  401  Unauthorized        patch  partially update the specified ReplicationController       HTTP Request  PATCH  api v1 namespaces  namespace  replicationcontrollers  name        Parameters       name     in path    string  required    name of the ReplicationController       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicationController  a    OK  201   a href    ReplicationController  a    Created  401  Unauthorized        patch  partially update status of the specified ReplicationController       HTTP Request  PATCH  api v1 namespaces  namespace  replicationcontrollers  name  status 
      Parameters       name     in path    string  required    name of the ReplicationController       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ReplicationController  a    OK  201   a href    ReplicationController  a    Created  401  Unauthorized        delete  delete a ReplicationController       HTTP Request  DELETE  api v1 namespaces  namespace  replicationcontrollers  name        Parameters       name     in path    string  required    name of the ReplicationController       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of ReplicationController       HTTP Request  DELETE  api v1 namespaces  namespace  replicationcontrollers       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href   
 gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
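The ReplicationController operations above all share the same core/v1 URL shape: `/api/v1/namespaces/{namespace}/replicationcontrollers[/{name}[/{subresource}]]`, plus query parameters such as `labelSelector`, `limit`, and `watch`. A minimal stdlib-only sketch of building those request URLs (the helper name `rc_path` and the example namespace/name are illustrative, not part of any client library):

```python
from typing import Optional
from urllib.parse import urlencode

# Base path for the core/v1 API group, as listed in the reference above.
API_ROOT = "/api/v1"

def rc_path(namespace: str, name: Optional[str] = None,
            subresource: Optional[str] = None) -> str:
    """Build the URL path for a namespaced ReplicationController request."""
    path = f"{API_ROOT}/namespaces/{namespace}/replicationcontrollers"
    if name:
        path += f"/{name}"
    if subresource:
        path += f"/{subresource}"
    return path

# `get` read status of the specified ReplicationController:
status_url = rc_path("default", "my-rc", "status")

# `list` with common query parameters from the reference (labelSelector, limit, watch):
query = urlencode({"labelSelector": "app=web", "limit": 10, "watch": "true"})
list_url = f"{rc_path('default')}?{query}"

print(status_url)  # /api/v1/namespaces/default/replicationcontrollers/my-rc/status
print(list_url)
```

The same path-building pattern covers `create` (POST to the collection path), `update`/`patch` (PUT/PATCH to the named path), and `deletecollection` (DELETE on the collection path with selector query parameters).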
{"questions":"kubernetes reference apiVersion apps v1 contenttype apireference kind ControllerRevision weight 8 apimetadata title ControllerRevision autogenerated true import k8s io api apps v1 ControllerRevision implements an immutable snapshot of state data","answers":"---\napi_metadata:\n  apiVersion: \"apps\/v1\"\n  import: \"k8s.io\/api\/apps\/v1\"\n  kind: \"ControllerRevision\"\ncontent_type: \"api_reference\"\ndescription: \"ControllerRevision implements an immutable snapshot of state data.\"\ntitle: \"ControllerRevision\"\nweight: 8\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: apps\/v1`\n\n`import \"k8s.io\/api\/apps\/v1\"`\n\n\n## ControllerRevision {#ControllerRevision}\n\nControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. 
It is primarily for internal use by controllers.\n\n<hr>\n\n- **apiVersion**: apps\/v1\n\n\n- **kind**: ControllerRevision\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **revision** (int64), required\n\n  Revision indicates the revision of the state represented by Data.\n\n- **data** (RawExtension)\n\n  Data is the serialized representation of the state.\n\n  <a name=\"RawExtension\"><\/a>\n  *RawExtension is used to hold extensions in external versions.\n  \n  To use this, make a field which has RawExtension as its type in your external, versioned struct, and Object in your internal struct. You also need to register your various plugin types.\n  \n  \/\/ Internal package:\n  \n  \ttype MyAPIObject struct {\n  \t\truntime.TypeMeta `json:\",inline\"`\n  \t\tMyPlugin runtime.Object `json:\"myPlugin\"`\n  \t}\n  \n  \ttype PluginA struct {\n  \t\tAOption string `json:\"aOption\"`\n  \t}\n  \n  \/\/ External package:\n  \n  \ttype MyAPIObject struct {\n  \t\truntime.TypeMeta `json:\",inline\"`\n  \t\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n  \t}\n  \n  \ttype PluginA struct {\n  \t\tAOption string `json:\"aOption\"`\n  \t}\n  \n  \/\/ On the wire, the JSON will look something like this:\n  \n  \t{\n  \t\t\"kind\":\"MyAPIObject\",\n  \t\t\"apiVersion\":\"v1\",\n  \t\t\"myPlugin\": {\n  \t\t\t\"kind\":\"PluginA\",\n  \t\t\t\"aOption\":\"foo\",\n  \t\t},\n  \t}\n  \n  So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (using pkg\/conversion) into the internal struct. The runtime package's DefaultScheme has conversion functions installed which will unpack the JSON stored in RawExtension, turning it into the correct object type, and storing it in the Object. 
(TODO: In the case where the object is of an unknown type, a runtime.Unknown object will be created and stored.)*\n\n\n\n\n\n## ControllerRevisionList {#ControllerRevisionList}\n\nControllerRevisionList is a resource containing a list of ControllerRevision objects.\n\n<hr>\n\n- **apiVersion**: apps\/v1\n\n\n- **kind**: ControllerRevisionList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">ControllerRevision<\/a>), required\n\n  Items is the list of ControllerRevisions\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ControllerRevision\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/controllerrevisions\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ControllerRevision\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ControllerRevision<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ControllerRevision\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/controllerrevisions\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a 
href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ControllerRevisionList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ControllerRevision\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/controllerrevisions\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ControllerRevisionList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ControllerRevision\n\n#### HTTP Request\n\nPOST \/apis\/apps\/v1\/namespaces\/{namespace}\/controllerrevisions\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ControllerRevision<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a 
href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ControllerRevision<\/a>): OK\n\n201 (<a href=\"\">ControllerRevision<\/a>): Created\n\n202 (<a href=\"\">ControllerRevision<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ControllerRevision\n\n#### HTTP Request\n\nPUT \/apis\/apps\/v1\/namespaces\/{namespace}\/controllerrevisions\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ControllerRevision\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ControllerRevision<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ControllerRevision<\/a>): OK\n\n201 (<a href=\"\">ControllerRevision<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ControllerRevision\n\n#### HTTP Request\n\nPATCH \/apis\/apps\/v1\/namespaces\/{namespace}\/controllerrevisions\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ControllerRevision\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** 
(*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ControllerRevision<\/a>): OK\n\n201 (<a href=\"\">ControllerRevision<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ControllerRevision\n\n#### HTTP Request\n\nDELETE \/apis\/apps\/v1\/namespaces\/{namespace}\/controllerrevisions\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ControllerRevision\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ControllerRevision\n\n#### HTTP Request\n\nDELETE \/apis\/apps\/v1\/namespaces\/{namespace}\/controllerrevisions\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in 
query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   apps v1    import   k8s io api apps v1    kind   ControllerRevision  content type   api reference  description   ControllerRevision implements an immutable snapshot of state data   title   ControllerRevision  weight  8 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  apps v1    import  k8s io api apps v1        ControllerRevision   ControllerRevision   ControllerRevision implements an immutable snapshot of state data  Clients are responsible for serializing and deserializing the objects that contain their internal state  Once a ControllerRevision has been successfully created  it can not be updated  The API Server will fail validation of all requests that attempt to mutate the Data field  ControllerRevisions may  however  be deleted  Note that  due to its use by both the DaemonSet and StatefulSet controllers for update and rollback  
this object is beta  However  it may be subject to name and representation changes in future releases  and clients should not depend on its stability  It is primarily for internal use by controllers    hr       apiVersion    apps v1       kind    ControllerRevision       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      revision    int64   required    Revision indicates the revision of the state represented by Data       data    RawExtension     Data is the serialized representation of the state      a name  RawExtension    a     RawExtension is used to hold extensions in external versions       To use this  make a field which has RawExtension as its type in your external  versioned struct  and Object in your internal struct  You also need to register your various plugin types          Internal package        type MyAPIObject struct       runtime TypeMeta  json   inline       MyPlugin runtime Object  json  myPlugin              type PluginA struct       AOption string  json  aOption                External package        type MyAPIObject struct       runtime TypeMeta  json   inline       MyPlugin runtime RawExtension  json  myPlugin              type PluginA struct       AOption string  json  aOption                On the wire  the JSON will look something like this               kind   MyAPIObject        apiVersion   v1        myPlugin           kind   PluginA         aOption   foo                    So what happens  Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject  That causes the raw JSON to be stored  but not unpacked  The next step is to copy  using pkg conversion  into the internal struct  The runtime package s DefaultScheme has conversion functions installed which will unpack the JSON stored in RawExtension  turning it into the correct object type  and storing it in the Object   TODO  
In the case where the object is of an unknown type  a runtime Unknown object will be created and stored            ControllerRevisionList   ControllerRevisionList   ControllerRevisionList is a resource containing a list of ControllerRevision objects    hr       apiVersion    apps v1       kind    ControllerRevisionList       metadata     a href    ListMeta  a      More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      items       a href    ControllerRevision  a    required    Items is the list of ControllerRevisions         Operations   Operations      hr             get  read the specified ControllerRevision       HTTP Request  GET  apis apps v1 namespaces  namespace  controllerrevisions  name        Parameters       name     in path    string  required    name of the ControllerRevision       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ControllerRevision  a    OK  401  Unauthorized        list  list or watch objects of kind ControllerRevision       HTTP Request  GET  apis apps v1 namespaces  namespace  controllerrevisions       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        
timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ControllerRevisionList  a    OK  401  Unauthorized        list  list or watch objects of kind ControllerRevision       HTTP Request  GET  apis apps v1 controllerrevisions       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ControllerRevisionList  a    OK  401  Unauthorized        create  create a ControllerRevision       HTTP Request  POST  apis apps v1 namespaces  namespace  controllerrevisions       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    ControllerRevision  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ControllerRevision  a    OK  201   a href    ControllerRevision  a    Created  202   a href    ControllerRevision  a    Accepted  401  Unauthorized        
update  replace the specified ControllerRevision       HTTP Request  PUT  apis apps v1 namespaces  namespace  controllerrevisions  name        Parameters       name     in path    string  required    name of the ControllerRevision       namespace     in path    string  required     a href    namespace  a        body     a href    ControllerRevision  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ControllerRevision  a    OK  201   a href    ControllerRevision  a    Created  401  Unauthorized        patch  partially update the specified ControllerRevision       HTTP Request  PATCH  apis apps v1 namespaces  namespace  controllerrevisions  name        Parameters       name     in path    string  required    name of the ControllerRevision       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ControllerRevision  a    OK  201   a href    ControllerRevision  a    Created  401  Unauthorized        delete  delete a ControllerRevision       HTTP Request  DELETE  apis apps v1 namespaces  namespace  controllerrevisions  name        Parameters       name     in path    string  required    name of the ControllerRevision       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    
string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of ControllerRevision       HTTP Request  DELETE  apis apps v1 namespaces  namespace  controllerrevisions       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
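The ControllerRevision record above describes an immutable snapshot whose `data` field is a RawExtension: arbitrary serialized state that the API server stores without unpacking. A minimal sketch of what such an object looks like on the wire, assuming an illustrative StatefulSet-style pod-template snapshot (the names and payload here are hypothetical):

```python
import json

# A minimal ControllerRevision per the apps/v1 schema above: `revision` (int64)
# is required, and `data` is a RawExtension -- an opaque serialized snapshot that
# only the owning controller (DaemonSet/StatefulSet) knows how to interpret.
revision_obj = {
    "apiVersion": "apps/v1",
    "kind": "ControllerRevision",
    "metadata": {"name": "web-5d4f8", "namespace": "default"},
    "revision": 3,
    # RawExtension payload: stored as-is; the API server never unpacks it.
    "data": {"spec": {"template": {"metadata": {"labels": {"app": "web"}}}}},
}

body = json.dumps(revision_obj)
# Once created, the Data field is immutable: requests that mutate it fail
# validation, so a rollback reads an old revision rather than editing one.
restored = json.loads(body)
print(restored["kind"], restored["revision"])
```

This mirrors the RawExtension wire example in the reference: the `data` value round-trips as raw JSON, and decoding it into a typed object is the client's responsibility, not the server's.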
{"questions":"kubernetes reference Binding ties one object to another for example a pod is bound to a node by a scheduler apiVersion v1 weight 2 contenttype apireference apimetadata title Binding autogenerated true kind Binding import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"Binding\"\ncontent_type: \"api_reference\"\ndescription: \"Binding ties one object to another; for example, a pod is bound to a node by a scheduler.\"\ntitle: \"Binding\"\nweight: 2\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## Binding {#Binding}\n\nBinding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7, please use the bindings subresource of pods instead.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: Binding\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **target** (<a href=\"\">ObjectReference<\/a>), required\n\n  The target object that you want to bind to the standard object.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `create` create a Binding\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/bindings\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Binding<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Binding<\/a>): OK\n\n201 (<a href=\"\">Binding<\/a>): Created\n\n202 (<a href=\"\">Binding<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `create` create binding of a Pod\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\/binding\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Binding\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Binding<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Binding<\/a>): OK\n\n201 (<a href=\"\">Binding<\/a>): Created\n\n202 (<a href=\"\">Binding<\/a>): Accepted\n\n401: Unauthorized\n","site":"kubernetes reference"}
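The Binding record above documents `POST /api/v1/namespaces/{namespace}/pods/{name}/binding`, the pods `binding` subresource a scheduler uses to bind a pod to a node. As a minimal illustrative sketch (the helper name and the pod/node names are hypothetical, not part of the dataset), the required request body — `apiVersion`, `kind`, `metadata`, and the required `target` ObjectReference — could be assembled like this:

```python
import json

def pod_binding_body(pod_name: str, namespace: str, node_name: str) -> dict:
    """Build the Binding request body documented above.

    Sketch only: mirrors the Binding schema (apiVersion/kind/metadata plus a
    required `target` ObjectReference); names are illustrative.
    """
    return {
        "apiVersion": "v1",
        "kind": "Binding",
        # metadata names the pod being bound
        "metadata": {"name": pod_name, "namespace": namespace},
        # target is the object the pod is bound to (here, a Node)
        "target": {
            "apiVersion": "v1",
            "kind": "Node",
            "name": node_name,
        },
    }

body = pod_binding_body("my-pod", "default", "worker-1")
print(json.dumps(body, indent=2))
```

A custom scheduler would POST this body to the pod's `binding` subresource; per the record above, the API server replies 200/201/202 with a Binding on success.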
{"questions":"kubernetes reference weight 6 apiVersion apps v1 contenttype apireference kind Deployment apimetadata autogenerated true import k8s io api apps v1 title Deployment Deployment enables declarative updates for Pods and ReplicaSets","answers":"---\napi_metadata:\n  apiVersion: \"apps\/v1\"\n  import: \"k8s.io\/api\/apps\/v1\"\n  kind: \"Deployment\"\ncontent_type: \"api_reference\"\ndescription: \"Deployment enables declarative updates for Pods and ReplicaSets.\"\ntitle: \"Deployment\"\nweight: 6\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: apps\/v1`\n\n`import \"k8s.io\/api\/apps\/v1\"`\n\n\n## Deployment {#Deployment}\n\nDeployment enables declarative updates for Pods and ReplicaSets.\n\n<hr>\n\n- **apiVersion**: apps\/v1\n\n\n- **kind**: Deployment\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">DeploymentSpec<\/a>)\n\n  Specification of the desired behavior of the Deployment.\n\n- **status** (<a href=\"\">DeploymentStatus<\/a>)\n\n  Most recently observed status of the Deployment.\n\n\n\n\n\n## DeploymentSpec {#DeploymentSpec}\n\nDeploymentSpec is the specification of the desired behavior of the Deployment.\n\n<hr>\n\n- **selector** (<a href=\"\">LabelSelector<\/a>), required\n\n  Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels.\n\n- **template** (<a href=\"\">PodTemplateSpec<\/a>), required\n\n  Template describes the pods that will be created. The only allowed template.spec.restartPolicy value is \"Always\".\n\n- **replicas** (int32)\n\n  Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1.\n\n- **minReadySeconds** (int32)\n\n  Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready)\n\n- **strategy** (DeploymentStrategy)\n\n  *Patch strategy: retainKeys*\n  \n  The deployment strategy to use to replace existing pods with new ones.\n\n  <a name=\"DeploymentStrategy\"><\/a>\n  *DeploymentStrategy describes how to replace existing pods with new ones.*\n\n  - **strategy.type** (string)\n\n    Type of deployment. Can be \"Recreate\" or \"RollingUpdate\". Default is RollingUpdate.\n\n  - **strategy.rollingUpdate** (RollingUpdateDeployment)\n\n    Rolling update config params. 
Present only if DeploymentStrategyType = RollingUpdate.\n\n    <a name=\"RollingUpdateDeployment\"><\/a>\n    *Spec to control the desired behavior of rolling update.*\n\n    - **strategy.rollingUpdate.maxSurge** (IntOrString)\n\n      The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods.\n\n      <a name=\"IntOrString\"><\/a>\n      *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n    - **strategy.rollingUpdate.maxUnavailable** (IntOrString)\n\n      The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. 
Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods.\n\n      <a name=\"IntOrString\"><\/a>\n      *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n- **revisionHistoryLimit** (int32)\n\n  The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10.\n\n- **progressDeadlineSeconds** (int32)\n\n  The maximum time in seconds for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. Defaults to 600s.\n\n- **paused** (boolean)\n\n  Indicates that the deployment is paused.\n\n\n\n\n\n## DeploymentStatus {#DeploymentStatus}\n\nDeploymentStatus is the most recently observed status of the Deployment.\n\n<hr>\n\n- **replicas** (int32)\n\n  Total number of non-terminated pods targeted by this deployment (their labels match the selector).\n\n- **availableReplicas** (int32)\n\n  Total number of available pods (ready for at least minReadySeconds) targeted by this deployment.\n\n- **readyReplicas** (int32)\n\n  readyReplicas is the number of pods targeted by this Deployment with a Ready Condition.\n\n- **unavailableReplicas** (int32)\n\n  Total number of unavailable pods targeted by this deployment. This is the total number of pods that are still required for the deployment to have 100% available capacity. 
They may either be pods that are running but not yet available or pods that still have not been created.\n\n- **updatedReplicas** (int32)\n\n  Total number of non-terminated pods targeted by this deployment that have the desired template spec.\n\n- **collisionCount** (int32)\n\n  Count of hash collisions for the Deployment. The Deployment controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ReplicaSet.\n\n- **conditions** ([]DeploymentCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  Represents the latest available observations of a deployment's current state.\n\n  <a name=\"DeploymentCondition\"><\/a>\n  *DeploymentCondition describes the state of a deployment at a certain point.*\n\n  - **conditions.status** (string), required\n\n    Status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    Type of deployment condition.\n\n  - **conditions.lastTransitionTime** (Time)\n\n    Last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.lastUpdateTime** (Time)\n\n    The last time this condition was updated.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    A human readable message indicating details about the transition.\n\n  - **conditions.reason** (string)\n\n    The reason for the condition's last transition.\n\n- **observedGeneration** (int64)\n\n  The generation observed by the deployment controller.\n\n\n\n\n\n## DeploymentList {#DeploymentList}\n\nDeploymentList is a list of Deployments.\n\n<hr>\n\n- **apiVersion**: apps\/v1\n\n\n- **kind**: DeploymentList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata.\n\n- **items** ([]<a href=\"\">Deployment<\/a>), required\n\n  Items is the list of Deployments.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified Deployment\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/deployments\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Deployment\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Deployment<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified Deployment\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/deployments\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Deployment\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Deployment<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Deployment\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/namespaces\/{namespace}\/deployments\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- 
**allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DeploymentList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Deployment\n\n#### HTTP Request\n\nGET \/apis\/apps\/v1\/deployments\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a 
href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DeploymentList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a Deployment\n\n#### HTTP Request\n\nPOST \/apis\/apps\/v1\/namespaces\/{namespace}\/deployments\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Deployment<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Deployment<\/a>): OK\n\n201 (<a href=\"\">Deployment<\/a>): Created\n\n202 (<a href=\"\">Deployment<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified Deployment\n\n#### HTTP Request\n\nPUT \/apis\/apps\/v1\/namespaces\/{namespace}\/deployments\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Deployment\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Deployment<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Deployment<\/a>): OK\n\n201 (<a href=\"\">Deployment<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified Deployment\n\n#### HTTP Request\n\nPUT \/apis\/apps\/v1\/namespaces\/{namespace}\/deployments\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Deployment\n\n\n- **namespace** (*in path*): string, 
required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Deployment<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Deployment<\/a>): OK\n\n201 (<a href=\"\">Deployment<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified Deployment\n\n#### HTTP Request\n\nPATCH \/apis\/apps\/v1\/namespaces\/{namespace}\/deployments\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Deployment\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Deployment<\/a>): OK\n\n201 (<a href=\"\">Deployment<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified Deployment\n\n#### HTTP Request\n\nPATCH \/apis\/apps\/v1\/namespaces\/{namespace}\/deployments\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Deployment\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- 
**fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Deployment<\/a>): OK\n\n201 (<a href=\"\">Deployment<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a Deployment\n\n#### HTTP Request\n\nDELETE \/apis\/apps\/v1\/namespaces\/{namespace}\/deployments\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Deployment\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of Deployment\n\n#### HTTP Request\n\nDELETE \/apis\/apps\/v1\/namespaces\/{namespace}\/deployments\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a 
href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   apps v1    import   k8s io api apps v1    kind   Deployment  content type   api reference  description   Deployment enables declarative updates for Pods and ReplicaSets   title   Deployment  weight  6 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  apps v1    import  k8s io api apps v1        Deployment   Deployment   Deployment enables declarative updates for Pods and ReplicaSets    hr       apiVersion    apps v1       kind    Deployment       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    DeploymentSpec  a      Specification of the desired behavior of the Deployment       status     a href    DeploymentStatus  a      Most 
recently observed status of the Deployment          DeploymentSpec   DeploymentSpec   DeploymentSpec is the specification of the desired behavior of the Deployment    hr       selector     a href    LabelSelector  a    required    Label selector for pods  Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment  It must match the pod template s labels       template     a href    PodTemplateSpec  a    required    Template describes the pods that will be created  The only allowed template spec restartPolicy value is  Always        replicas    int32     Number of desired pods  This is a pointer to distinguish between explicit zero and not specified  Defaults to 1       minReadySeconds    int32     Minimum number of seconds for which a newly created pod should be ready without any of its container crashing  for it to be considered available  Defaults to 0  pod will be considered available as soon as it is ready       strategy    DeploymentStrategy      Patch strategy  retainKeys       The deployment strategy to use to replace existing pods with new ones      a name  DeploymentStrategy    a     DeploymentStrategy describes how to replace existing pods with new ones          strategy type    string       Type of deployment  Can be  Recreate  or  RollingUpdate   Default is RollingUpdate         strategy rollingUpdate    RollingUpdateDeployment       Rolling update config params  Present only if DeploymentStrategyType   RollingUpdate        a name  RollingUpdateDeployment    a       Spec to control the desired behavior of rolling update            strategy rollingUpdate maxSurge    IntOrString         The maximum number of pods that can be scheduled above the desired number of pods  Value can be an absolute number  ex  5  or a percentage of desired pods  ex  10    This can not be 0 if MaxUnavailable is 0  Absolute number is calculated from percentage by rounding up  Defaults to 25   Example  when this is set to 30   the new 
ReplicaSet can be scaled up immediately when the rolling update starts  such that the total number of old and new pods do not exceed 130  of desired pods  Once old pods have been killed  new ReplicaSet can be scaled up further  ensuring that total number of pods running at any time during the update is at most 130  of desired pods          a name  IntOrString    a         IntOrString is a type that can hold an int32 or a string   When used in JSON or YAML marshalling and unmarshalling  it produces or consumes the inner type   This allows you to have  for example  a JSON field that can accept a name or number            strategy rollingUpdate maxUnavailable    IntOrString         The maximum number of pods that can be unavailable during the update  Value can be an absolute number  ex  5  or a percentage of desired pods  ex  10    Absolute number is calculated from percentage by rounding down  This can not be 0 if MaxSurge is 0  Defaults to 25   Example  when this is set to 30   the old ReplicaSet can be scaled down to 70  of desired pods immediately when the rolling update starts  Once new pods are ready  old ReplicaSet can be scaled down further  followed by scaling up the new ReplicaSet  ensuring that the total number of pods available at all times during the update is at least 70  of desired pods          a name  IntOrString    a         IntOrString is a type that can hold an int32 or a string   When used in JSON or YAML marshalling and unmarshalling  it produces or consumes the inner type   This allows you to have  for example  a JSON field that can accept a name or number        revisionHistoryLimit    int32     The number of old ReplicaSets to retain to allow rollback  This is a pointer to distinguish between explicit zero and not specified  Defaults to 10       progressDeadlineSeconds    int32     The maximum time in seconds for a deployment to make progress before it is considered to be failed  The deployment controller will continue to process failed 
deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status  Note that progress will not be estimated during the time a deployment is paused  Defaults to 600s       paused    boolean     Indicates that the deployment is paused          DeploymentStatus   DeploymentStatus   DeploymentStatus is the most recently observed status of the Deployment    hr       replicas    int32     Total number of non terminated pods targeted by this deployment  their labels match the selector        availableReplicas    int32     Total number of available pods  ready for at least minReadySeconds  targeted by this deployment       readyReplicas    int32     readyReplicas is the number of pods targeted by this Deployment with a Ready Condition       unavailableReplicas    int32     Total number of unavailable pods targeted by this deployment  This is the total number of pods that are still required for the deployment to have 100  available capacity  They may either be pods that are running but not yet available or pods that still have not been created       updatedReplicas    int32     Total number of non terminated pods targeted by this deployment that have the desired template spec       collisionCount    int32     Count of hash collisions for the Deployment  The Deployment controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ReplicaSet       conditions      DeploymentCondition      Patch strategy  merge on key  type         Map  unique values on key type will be kept during a merge       Represents the latest available observations of a deployment s current state      a name  DeploymentCondition    a     DeploymentCondition describes the state of a deployment at a certain point          conditions status    string   required      Status of the condition  one of True  False  Unknown         conditions type    string   required      Type of deployment condition         conditions 
lastTransitionTime    Time       Last time the condition transitioned from one status to another        a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions lastUpdateTime    Time       The last time this condition was updated        a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions message    string       A human readable message indicating details about the transition         conditions reason    string       The reason for the condition s last transition       observedGeneration    int64     The generation observed by the deployment controller          DeploymentList   DeploymentList   DeploymentList is a list of Deployments    hr       apiVersion    apps v1       kind    DeploymentList       metadata     a href    ListMeta  a      Standard list metadata       items       a href    Deployment  a    required    Items is the list of Deployments          Operations   Operations      hr             get  read the specified Deployment       HTTP Request  GET  apis apps v1 namespaces  namespace  deployments  name        Parameters       name     in path    string  required    name of the Deployment       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Deployment  a    OK  401  Unauthorized        get  read status of the specified Deployment       HTTP Request  GET  apis apps v1 namespaces  namespace  deployments  name  status       Parameters       name     in path    string  required    name of the Deployment       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href   
"}
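The record above enumerates the `apps/v1` Deployment endpoints (list, create, update, patch, delete) and their query parameters. As a minimal sketch of how a client assembles one of those request paths — the namespace `default` and the label-selector value are illustrative, not taken from the reference:

```python
from urllib.parse import urlencode

def deployments_list_path(namespace: str, **query) -> str:
    """Build the GET path for the namespaced Deployment `list` operation (apps/v1)."""
    base = f"/apis/apps/v1/namespaces/{namespace}/deployments"
    return base + ("?" + urlencode(query) if query else "")

# Page size 5, filtered by label, using the limit/labelSelector query
# parameters documented above (the values here are made up):
path = deployments_list_path("default", limit=5, labelSelector="app=nginx")
```

The resulting path can be issued against a live cluster with `kubectl get --raw "<path>"`, which sends the request through the configured API server credentials.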
{"questions":"kubernetes reference PriorityClass defines mapping from a priority class name to the priority integer value title PriorityClass contenttype apireference import k8s io api scheduling v1 apimetadata kind PriorityClass autogenerated true weight 14 apiVersion scheduling k8s io v1","answers":"---\napi_metadata:\n  apiVersion: \"scheduling.k8s.io\/v1\"\n  import: \"k8s.io\/api\/scheduling\/v1\"\n  kind: \"PriorityClass\"\ncontent_type: \"api_reference\"\ndescription: \"PriorityClass defines a mapping from a priority class name to the priority integer value.\"\ntitle: \"PriorityClass\"\nweight: 14\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: scheduling.k8s.io\/v1`\n\n`import \"k8s.io\/api\/scheduling\/v1\"`\n\n\n## PriorityClass {#PriorityClass}\n\nPriorityClass defines a mapping from a priority class name to the priority integer value. The value can be any valid integer.\n\n<hr>\n\n- **apiVersion**: scheduling.k8s.io\/v1\n\n\n- **kind**: PriorityClass\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **value** (int32), required\n\n  value represents the integer value of this priority class. 
This is the actual priority that pods receive when they have the name of this class in their pod spec.\n\n- **description** (string)\n\n  description is an arbitrary string that usually provides guidelines on when this priority class should be used.\n\n- **globalDefault** (boolean)\n\n  globalDefault specifies whether this PriorityClass should be considered as the default priority for pods that do not have any priority class. Only one PriorityClass can be marked as `globalDefault`. However, if more than one PriorityClass exists with its `globalDefault` field set to true, the smallest value of such global default PriorityClasses will be used as the default priority.\n\n- **preemptionPolicy** (string)\n\n  preemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset.\n\n\n\n\n\n## PriorityClassList {#PriorityClassList}\n\nPriorityClassList is a collection of priority classes.\n\n<hr>\n\n- **apiVersion**: scheduling.k8s.io\/v1\n\n\n- **kind**: PriorityClassList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">PriorityClass<\/a>), required\n\n  items is the list of PriorityClasses\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified PriorityClass\n\n#### HTTP Request\n\nGET \/apis\/scheduling.k8s.io\/v1\/priorityclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PriorityClass\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityClass<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind PriorityClass\n\n#### HTTP Request\n\nGET \/apis\/scheduling.k8s.io\/v1\/priorityclasses\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): 
boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityClassList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a PriorityClass\n\n#### HTTP Request\n\nPOST \/apis\/scheduling.k8s.io\/v1\/priorityclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">PriorityClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityClass<\/a>): OK\n\n201 (<a href=\"\">PriorityClass<\/a>): Created\n\n202 (<a href=\"\">PriorityClass<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified PriorityClass\n\n#### HTTP Request\n\nPUT \/apis\/scheduling.k8s.io\/v1\/priorityclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PriorityClass\n\n\n- **body**: <a href=\"\">PriorityClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a 
href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityClass<\/a>): OK\n\n201 (<a href=\"\">PriorityClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified PriorityClass\n\n#### HTTP Request\n\nPATCH \/apis\/scheduling.k8s.io\/v1\/priorityclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PriorityClass\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityClass<\/a>): OK\n\n201 (<a href=\"\">PriorityClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a PriorityClass\n\n#### HTTP Request\n\nDELETE \/apis\/scheduling.k8s.io\/v1\/priorityclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PriorityClass\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of 
PriorityClass\n\n#### HTTP Request\n\nDELETE \/apis\/scheduling.k8s.io\/v1\/priorityclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
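Tying the PriorityClass fields documented above together, a minimal object can be sketched as a plain mapping. The class name `high-priority`, the value, and the description are invented for illustration, not taken from the reference:

```python
# A minimal PriorityClass mirroring the required and optional fields above.
# Name, value, and description are illustrative only.
priority_class = {
    "apiVersion": "scheduling.k8s.io/v1",
    "kind": "PriorityClass",
    "metadata": {"name": "high-priority"},
    "value": 1000000,  # required int32: the priority pods in this class receive
    "globalDefault": False,  # at most one class should be the global default
    "preemptionPolicy": "PreemptLowerPriority",  # one of: Never, PreemptLowerPriority
    "description": "Illustrative class for latency-critical pods.",
}

# The value field is an int32, so it must fit in 32 bits.
assert -2**31 <= priority_class["value"] < 2**31
```

Serialized to YAML or JSON, this is exactly the body a `create` (POST) call to `/apis/scheduling.k8s.io/v1/priorityclasses` expects.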
{"questions":"kubernetes reference title DeviceClass v1alpha3 kind DeviceClass weight 2 DeviceClass is a vendor or admin provided resource that contains device configuration and selectors contenttype apireference apiVersion resource k8s io v1alpha3 import k8s io api resource v1alpha3 apimetadata autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"resource.k8s.io\/v1alpha3\"\n  import: \"k8s.io\/api\/resource\/v1alpha3\"\n  kind: \"DeviceClass\"\ncontent_type: \"api_reference\"\ndescription: \"DeviceClass is a vendor- or admin-provided resource that contains device configuration and selectors.\"\ntitle: \"DeviceClass v1alpha3\"\nweight: 2\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: resource.k8s.io\/v1alpha3`\n\n`import \"k8s.io\/api\/resource\/v1alpha3\"`\n\n\n## DeviceClass {#DeviceClass}\n\nDeviceClass is a vendor- or admin-provided resource that contains device configuration and selectors. It can be referenced in the device requests of a claim to apply these presets. 
Cluster scoped.\n\nThis is an alpha type and requires enabling the DynamicResourceAllocation feature gate.\n\n<hr>\n\n- **apiVersion**: resource.k8s.io\/v1alpha3\n\n\n- **kind**: DeviceClass\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata\n\n- **spec** (<a href=\"\">DeviceClassSpec<\/a>), required\n\n  Spec defines what can be allocated and how to configure it.\n  \n  This is mutable. Consumers have to be prepared for classes changing at any time, either because they get updated or replaced. Claim allocations are done once based on whatever was set in classes at the time of allocation.\n  \n  Changing the spec automatically increments the metadata.generation number.\n\n\n\n\n\n## DeviceClassSpec {#DeviceClassSpec}\n\nDeviceClassSpec is used in a [DeviceClass] to define what can be allocated and how to configure it.\n\n<hr>\n\n- **config** ([]DeviceClassConfiguration)\n\n  *Atomic: will be replaced during a merge*\n  \n  Config defines configuration parameters that apply to each device that is claimed via this class. 
Some classes may potentially be satisfied by multiple drivers, so each instance of a vendor configuration applies to exactly one driver.\n  \n  They are passed to the driver, but are not considered while allocating the claim.\n\n  <a name=\"DeviceClassConfiguration\"><\/a>\n  *DeviceClassConfiguration is used in DeviceClass.*\n\n  - **config.opaque** (OpaqueDeviceConfiguration)\n\n    Opaque provides driver-specific configuration parameters.\n\n    <a name=\"OpaqueDeviceConfiguration\"><\/a>\n    *OpaqueDeviceConfiguration contains configuration parameters for a driver in a format defined by the driver vendor.*\n\n    - **config.opaque.driver** (string), required\n\n      Driver is used to determine which kubelet plugin needs to be passed these configuration parameters.\n      \n      An admission policy provided by the driver developer could use this to decide whether it needs to validate them.\n      \n      Must be a DNS subdomain and should end with a DNS domain owned by the vendor of the driver.\n\n    - **config.opaque.parameters** (RawExtension), required\n\n      Parameters can contain arbitrary data. It is the responsibility of the driver developer to handle validation and versioning. Typically this includes self-identification and a version (\"kind\" + \"apiVersion\" for Kubernetes types), with conversion between different versions.\n\n      <a name=\"RawExtension\"><\/a>\n      *RawExtension is used to hold extensions in external versions.\n      \n      To use this, make a field which has RawExtension as its type in your external, versioned struct, and Object in your internal struct. 
You also need to register your various plugin types.\n      \n      \/\/ Internal package:\n      \n      \ttype MyAPIObject struct {\n      \t\truntime.TypeMeta `json:\",inline\"`\n      \t\tMyPlugin runtime.Object `json:\"myPlugin\"`\n      \t}\n      \n      \ttype PluginA struct {\n      \t\tAOption string `json:\"aOption\"`\n      \t}\n      \n      \/\/ External package:\n      \n      \ttype MyAPIObject struct {\n      \t\truntime.TypeMeta `json:\",inline\"`\n      \t\tMyPlugin runtime.RawExtension `json:\"myPlugin\"`\n      \t}\n      \n      \ttype PluginA struct {\n      \t\tAOption string `json:\"aOption\"`\n      \t}\n      \n      \/\/ On the wire, the JSON will look something like this:\n      \n      \t{\n      \t\t\"kind\":\"MyAPIObject\",\n      \t\t\"apiVersion\":\"v1\",\n      \t\t\"myPlugin\": {\n      \t\t\t\"kind\":\"PluginA\",\n      \t\t\t\"aOption\":\"foo\",\n      \t\t},\n      \t}\n      \n      So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (using pkg\/conversion) into the internal struct. The runtime package's DefaultScheme has conversion functions installed which will unpack the JSON stored in RawExtension, turning it into the correct object type, and storing it in the Object. 
(TODO: In the case where the object is of an unknown type, a runtime.Unknown object will be created and stored.)*\n\n- **selectors** ([]DeviceSelector)\n\n  *Atomic: will be replaced during a merge*\n  \n  Each selector must be satisfied by a device which is claimed via this class.\n\n  <a name=\"DeviceSelector\"><\/a>\n  *DeviceSelector must have exactly one field set.*\n\n  - **selectors.cel** (CELDeviceSelector)\n\n    CEL contains a CEL expression for selecting a device.\n\n    <a name=\"CELDeviceSelector\"><\/a>\n    *CELDeviceSelector contains a CEL expression for selecting a device.*\n\n    - **selectors.cel.expression** (string), required\n\n      Expression is a CEL expression which evaluates a single device. It must evaluate to true when the device under consideration satisfies the desired criteria, and false when it does not. Any other result is an error and causes allocation of devices to abort.\n      \n      The expression's input is an object named \"device\", which carries the following properties:\n       - driver (string): the name of the driver which defines this device.\n       - attributes (map[string]object): the device's attributes, grouped by prefix\n         (e.g. device.attributes[\"dra.example.com\"] evaluates to an object with all\n         of the attributes which were prefixed by \"dra.example.com\").\n       - capacity (map[string]object): the device's capacities, grouped by prefix.\n      \n      Example: Consider a device with driver=\"dra.example.com\", which exposes two attributes named \"model\" and \"ext.example.com\/family\" and which exposes one capacity named \"modules\". 
The input to this expression would have the following fields:\n      \n          device.driver\n          device.attributes[\"dra.example.com\"].model\n          device.attributes[\"ext.example.com\"].family\n          device.capacity[\"dra.example.com\"].modules\n      \n      The device.driver field can be used to check for a specific driver, either as a high-level precondition (i.e. you only want to consider devices from this driver) or as part of a multi-clause expression that is meant to consider devices from different drivers.\n      \n      The value type of each attribute is defined by the device definition, and users who write these expressions must consult the documentation for their specific drivers. The value type of each capacity is Quantity.\n      \n      If an unknown prefix is used as a lookup in either device.attributes or device.capacity, an empty map will be returned. Any reference to an unknown field will cause an evaluation error and allocation to abort.\n      \n      A robust expression should check for the existence of attributes before referencing them.\n      \n      For ease of use, the cel.bind() function is enabled, and can be used to simplify expressions that access multiple attributes with the same domain. For example:\n      \n          cel.bind(dra, device.attributes[\"dra.example.com\"], dra.someBool && dra.anotherBool)\n\n- **suitableNodes** (NodeSelector)\n\n  Only nodes matching the selector will be considered by the scheduler when trying to find a Node that fits a Pod when that Pod uses a claim that has not been allocated yet *and* that claim gets allocated through a control plane controller. It is ignored when the claim does not use a control plane controller for allocation.\n  \n  Setting this field is optional. 
If unset, all Nodes are candidates.\n  \n  This is an alpha field and requires enabling the DRAControlPlaneController feature gate.\n\n  <a name=\"NodeSelector\"><\/a>\n  *A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.*\n\n  - **suitableNodes.nodeSelectorTerms** ([]NodeSelectorTerm), required\n\n    *Atomic: will be replaced during a merge*\n    \n    Required. A list of node selector terms. The terms are ORed.\n\n    <a name=\"NodeSelectorTerm\"><\/a>\n    *A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.*\n\n    - **suitableNodes.nodeSelectorTerms.matchExpressions** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n      *Atomic: will be replaced during a merge*\n      \n      A list of node selector requirements by node's labels.\n\n    - **suitableNodes.nodeSelectorTerms.matchFields** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n      *Atomic: will be replaced during a merge*\n      \n      A list of node selector requirements by node's fields.\n\n\n\n\n\n## DeviceClassList {#DeviceClassList}\n\nDeviceClassList is a collection of classes.\n\n<hr>\n\n- **apiVersion**: resource.k8s.io\/v1alpha3\n\n\n- **kind**: DeviceClassList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata\n\n- **items** ([]<a href=\"\">DeviceClass<\/a>), required\n\n  Items is the list of resource classes.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified DeviceClass\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/deviceclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the DeviceClass\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a 
href=\"\">DeviceClass<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind DeviceClass\n\n#### HTTP Request\n\nGET \/apis\/resource.k8s.io\/v1alpha3\/deviceclasses\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DeviceClassList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a DeviceClass\n\n#### HTTP Request\n\nPOST \/apis\/resource.k8s.io\/v1alpha3\/deviceclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeviceClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DeviceClass<\/a>): OK\n\n201 (<a href=\"\">DeviceClass<\/a>): Created\n\n202 (<a href=\"\">DeviceClass<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified DeviceClass\n\n#### HTTP Request\n\nPUT 
\/apis\/resource.k8s.io\/v1alpha3\/deviceclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the DeviceClass\n\n\n- **body**: <a href=\"\">DeviceClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DeviceClass<\/a>): OK\n\n201 (<a href=\"\">DeviceClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified DeviceClass\n\n#### HTTP Request\n\nPATCH \/apis\/resource.k8s.io\/v1alpha3\/deviceclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the DeviceClass\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DeviceClass<\/a>): OK\n\n201 (<a href=\"\">DeviceClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a DeviceClass\n\n#### HTTP Request\n\nDELETE \/apis\/resource.k8s.io\/v1alpha3\/deviceclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the DeviceClass\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** 
(*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">DeviceClass<\/a>): OK\n\n202 (<a href=\"\">DeviceClass<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of DeviceClass\n\n#### HTTP Request\n\nDELETE \/apis\/resource.k8s.io\/v1alpha3\/deviceclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
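The DeviceClassSpec fields documented in the record above (config.opaque, selectors.cel) can be combined into a single manifest. The following sketch is illustrative only: the driver name `dra.example.com`, the attribute names, and the opaque parameter payload (`GpuConfig`, `sharing`) are hypothetical placeholders, not real drivers or schemas.

```yaml
# Hypothetical DeviceClass combining the fields described above.
# Requires the alpha DynamicResourceAllocation feature gate.
apiVersion: resource.k8s.io/v1alpha3
kind: DeviceClass
metadata:
  name: example-gpu-class        # cluster-scoped, no namespace
spec:
  selectors:
  # Each selector must be satisfied by any device claimed via this class.
  - cel:
      expression: |
        device.driver == "dra.example.com" &&
        cel.bind(dra, device.attributes["dra.example.com"],
                 dra.someBool && dra.anotherBool)
  config:
  # Opaque configuration is passed through to the named driver,
  # but is not considered during allocation.
  - opaque:
      driver: dra.example.com
      parameters:                # arbitrary RawExtension payload
        apiVersion: dra.example.com/v1
        kind: GpuConfig
        sharing: timeSliced
```

A claim that references this class inherits both the selector and the opaque configuration as presets at allocation time.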
{"questions":"kubernetes reference kind CustomResourceDefinition apiVersion apiextensions k8s io v1 import k8s io apiextensions apiserver pkg apis apiextensions v1 title CustomResourceDefinition contenttype apireference CustomResourceDefinition represents a resource that should be exposed on the API server apimetadata autogenerated true weight 1","answers":"---\napi_metadata:\n  apiVersion: \"apiextensions.k8s.io\/v1\"\n  import: \"k8s.io\/apiextensions-apiserver\/pkg\/apis\/apiextensions\/v1\"\n  kind: \"CustomResourceDefinition\"\ncontent_type: \"api_reference\"\ndescription: \"CustomResourceDefinition represents a resource that should be exposed on the API server.\"\ntitle: \"CustomResourceDefinition\"\nweight: 1\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: apiextensions.k8s.io\/v1`\n\n`import \"k8s.io\/apiextensions-apiserver\/pkg\/apis\/apiextensions\/v1\"`\n\n\n## CustomResourceDefinition {#CustomResourceDefinition}\n\nCustomResourceDefinition represents a resource that should be exposed on the API server.  
Its name MUST be in the format \\<.spec.name>.\\<.spec.group>.\n\n<hr>\n\n- **apiVersion**: apiextensions.k8s.io\/v1\n\n\n- **kind**: CustomResourceDefinition\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">CustomResourceDefinitionSpec<\/a>), required\n\n  spec describes how the user wants the resources to appear\n\n- **status** (<a href=\"\">CustomResourceDefinitionStatus<\/a>)\n\n  status indicates the actual state of the CustomResourceDefinition\n\n\n\n\n\n## CustomResourceDefinitionSpec {#CustomResourceDefinitionSpec}\n\nCustomResourceDefinitionSpec describes how a user wants their resource to appear\n\n<hr>\n\n- **group** (string), required\n\n  group is the API group of the defined custom resource. The custom resources are served under `\/apis\/\\<group>\/...`. Must match the name of the CustomResourceDefinition (in the form `\\<names.plural>.\\<group>`).\n\n- **names** (CustomResourceDefinitionNames), required\n\n  names specify the resource and kind names for the custom resource.\n\n  <a name=\"CustomResourceDefinitionNames\"><\/a>\n  *CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition*\n\n  - **names.kind** (string), required\n\n    kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the `kind` attribute in API calls.\n\n  - **names.plural** (string), required\n\n    plural is the plural name of the resource to serve. The custom resources are served under `\/apis\/\\<group>\/\\<version>\/...\/\\<plural>`. Must match the name of the CustomResourceDefinition (in the form `\\<names.plural>.\\<group>`). 
Must be all lowercase.\n\n  - **names.categories** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like `kubectl get all`.\n\n  - **names.listKind** (string)\n\n    listKind is the serialized kind of the list for this resource. Defaults to \"`kind`List\".\n\n  - **names.shortNames** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like `kubectl get \\<shortname>`. It must be all lowercase.\n\n  - **names.singular** (string)\n\n    singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased `kind`.\n\n- **scope** (string), required\n\n  scope indicates whether the defined custom resource is cluster- or namespace-scoped. Allowed values are `Cluster` and `Namespaced`.\n\n- **versions** ([]CustomResourceDefinitionVersion), required\n\n  *Atomic: will be replaced during a merge*\n  \n  versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is \"kube-like\", it will sort above non \"kube-like\" version strings, which are ordered lexicographically. \"Kube-like\" versions start with a \"v\", then are followed by a number (the major version), then optionally the string \"alpha\" or \"beta\" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. 
An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10.\n\n  <a name=\"CustomResourceDefinitionVersion\"><\/a>\n  *CustomResourceDefinitionVersion describes a version for CRD.*\n\n  - **versions.name** (string), required\n\n    name is the version name, e.g. \u201cv1\u201d, \u201cv2beta1\u201d, etc. The custom resources are served under this version at `\/apis\/\\<group>\/\\<version>\/...` if `served` is true.\n\n  - **versions.served** (boolean), required\n\n    served is a flag enabling\/disabling this version from being served via REST APIs\n\n  - **versions.storage** (boolean), required\n\n    storage indicates this version should be used when persisting custom resources to storage. There must be exactly one version with storage=true.\n\n  - **versions.additionalPrinterColumns** ([]CustomResourceColumnDefinition)\n\n    *Atomic: will be replaced during a merge*\n    \n    additionalPrinterColumns specifies additional columns returned in Table output. See https:\/\/kubernetes.io\/docs\/reference\/using-api\/api-concepts\/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used.\n\n    <a name=\"CustomResourceColumnDefinition\"><\/a>\n    *CustomResourceColumnDefinition specifies a column for server side printing.*\n\n    - **versions.additionalPrinterColumns.jsonPath** (string), required\n\n      jsonPath is a simple JSON path (i.e. with array notation) which is evaluated against each custom resource to produce the value for this column.\n\n    - **versions.additionalPrinterColumns.name** (string), required\n\n      name is a human readable name for the column.\n\n    - **versions.additionalPrinterColumns.type** (string), required\n\n      type is an OpenAPI type definition for this column. 
See https:\/\/github.com\/OAI\/OpenAPI-Specification\/blob\/master\/versions\/2.0.md#data-types for details.\n\n    - **versions.additionalPrinterColumns.description** (string)\n\n      description is a human readable description of this column.\n\n    - **versions.additionalPrinterColumns.format** (string)\n\n      format is an optional OpenAPI type definition for this column. The 'name' format is applied to the primary identifier column to assist clients in identifying when the column is the resource name. See https:\/\/github.com\/OAI\/OpenAPI-Specification\/blob\/master\/versions\/2.0.md#data-types for details.\n\n    - **versions.additionalPrinterColumns.priority** (int32)\n\n      priority is an integer defining the relative importance of this column compared to others. Lower numbers are considered higher priority. Columns that may be omitted in limited space scenarios should be given a priority greater than 0.\n\n  - **versions.deprecated** (boolean)\n\n    deprecated indicates this version of the custom resource API is deprecated. When set to true, API requests to this version receive a warning header in the server response. Defaults to false.\n\n  - **versions.deprecationWarning** (string)\n\n    deprecationWarning overrides the default warning returned to API clients. May only be set when `deprecated` is true. 
The default warning indicates this version is deprecated and recommends use of the newest served version of equal or greater stability, if one exists.\n\n  - **versions.schema** (CustomResourceValidation)\n\n    schema describes the schema used for validation, pruning, and defaulting of this version of the custom resource.\n\n    <a name=\"CustomResourceValidation\"><\/a>\n    *CustomResourceValidation is a list of validation methods for CustomResources.*\n\n    - **versions.schema.openAPIV3Schema** (<a href=\"\">JSONSchemaProps<\/a>)\n\n      openAPIV3Schema is the OpenAPI v3 schema to use for validation and pruning.\n\n  - **versions.selectableFields** ([]SelectableField)\n\n    *Atomic: will be replaced during a merge*\n    \n    selectableFields specifies paths to fields that may be used as field selectors. A maximum of 8 selectable fields are allowed. See https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/field-selectors\n\n    <a name=\"SelectableField\"><\/a>\n    *SelectableField specifies the JSON path of a field that may be used with field selectors.*\n\n    - **versions.selectableFields.jsonPath** (string), required\n\n      jsonPath is a simple JSON path which is evaluated against each custom resource to produce a field selector value. Only JSON paths without the array notation are allowed. Must point to a field of type string, boolean or integer. Types with enum values and strings with formats are allowed. If jsonPath refers to an absent field in a resource, the jsonPath evaluates to an empty string. Must not point to metadata fields. 
Required.\n\n  - **versions.subresources** (CustomResourceSubresources)\n\n    subresources specify what subresources this version of the defined custom resource has.\n\n    <a name=\"CustomResourceSubresources\"><\/a>\n    *CustomResourceSubresources defines the status and scale subresources for CustomResources.*\n\n    - **versions.subresources.scale** (CustomResourceSubresourceScale)\n\n      scale indicates the custom resource should serve a `\/scale` subresource that returns an `autoscaling\/v1` Scale object.\n\n      <a name=\"CustomResourceSubresourceScale\"><\/a>\n      *CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources.*\n\n      - **versions.subresources.scale.specReplicasPath** (string), required\n\n        specReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale `spec.replicas`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.spec`. If there is no value under the given path in the custom resource, the `\/scale` subresource will return an error on GET.\n\n      - **versions.subresources.scale.statusReplicasPath** (string), required\n\n        statusReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale `status.replicas`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.status`. If there is no value under the given path in the custom resource, the `status.replicas` value in the `\/scale` subresource will default to 0.\n\n      - **versions.subresources.scale.labelSelectorPath** (string)\n\n        labelSelectorPath defines the JSON path inside of a custom resource that corresponds to Scale `status.selector`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.status` or `.spec`. Must be set to work with HorizontalPodAutoscaler. 
The field pointed to by this JSON path must be a string field (not a complex selector struct) which contains a serialized label selector in string form. More info: https:\/\/kubernetes.io\/docs\/tasks\/access-kubernetes-api\/custom-resources\/custom-resource-definitions#scale-subresource If there is no value under the given path in the custom resource, the `status.selector` value in the `\/scale` subresource will default to the empty string.\n\n    - **versions.subresources.status** (CustomResourceSubresourceStatus)\n\n      status indicates the custom resource should serve a `\/status` subresource. When enabled: 1. requests to the custom resource primary endpoint ignore changes to the `status` stanza of the object. 2. requests to the custom resource `\/status` subresource ignore changes to anything other than the `status` stanza of the object.\n\n      <a name=\"CustomResourceSubresourceStatus\"><\/a>\n      *CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the `.status` JSON path inside of a CustomResource. When set, * exposes a \/status subresource for the custom resource * PUT requests to the \/status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT\/POST\/PATCH requests to the custom resource ignore changes to the status stanza*\n\n- **conversion** (CustomResourceConversion)\n\n  conversion defines conversion settings for the CRD.\n\n  <a name=\"CustomResourceConversion\"><\/a>\n  *CustomResourceConversion describes how to convert different versions of a CR.*\n\n  - **conversion.strategy** (string), required\n\n    strategy specifies how custom resources are converted between versions. Allowed values are: - `\"None\"`: The converter only changes the apiVersion and does not touch any other field in the custom resource. - `\"Webhook\"`: the API Server will call an external webhook to do the conversion. 
Additional information\n      is needed for this option. This requires spec.preserveUnknownFields to be false, and spec.conversion.webhook to be set.\n\n  - **conversion.webhook** (WebhookConversion)\n\n    webhook describes how to call the conversion webhook. Required when `strategy` is set to `\"Webhook\"`.\n\n    <a name=\"WebhookConversion\"><\/a>\n    *WebhookConversion describes how to call a conversion webhook*\n\n    - **conversion.webhook.conversionReviewVersions** ([]string), required\n\n      *Atomic: will be replaced during a merge*\n      \n      conversionReviewVersions is an ordered list of preferred `ConversionReview` versions the Webhook expects. The API server will use the first version in the list which it supports. If none of the versions specified in this list are supported by API server, conversion will fail for the custom resource. If a persisted Webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail.\n\n    - **conversion.webhook.clientConfig** (WebhookClientConfig)\n\n      clientConfig is the instructions for how to call the webhook if strategy is `Webhook`.\n\n      <a name=\"WebhookClientConfig\"><\/a>\n      *WebhookClientConfig contains the information to make a TLS connection with the webhook.*\n\n      - **conversion.webhook.clientConfig.caBundle** ([]byte)\n\n        caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used.\n\n      - **conversion.webhook.clientConfig.service** (ServiceReference)\n\n        service is a reference to the service for this webhook. 
Either service or url must be specified.\n        \n        If the webhook is running within the cluster, then you should use `service`.\n\n        <a name=\"ServiceReference\"><\/a>\n        *ServiceReference holds a reference to Service.legacy.k8s.io*\n\n        - **conversion.webhook.clientConfig.service.name** (string), required\n\n          name is the name of the service. Required\n\n        - **conversion.webhook.clientConfig.service.namespace** (string), required\n\n          namespace is the namespace of the service. Required\n\n        - **conversion.webhook.clientConfig.service.path** (string)\n\n          path is an optional URL path at which the webhook will be contacted.\n\n        - **conversion.webhook.clientConfig.service.port** (int32)\n\n          port is an optional service port at which the webhook will be contacted. `port` should be a valid port number (1-65535, inclusive). Defaults to 443 for backward compatibility.\n\n      - **conversion.webhook.clientConfig.url** (string)\n\n        url gives the location of the webhook, in standard URL form (`scheme:\/\/host:port\/path`). Exactly one of `url` or `service` must be specified.\n        \n        The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.\n        \n        Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster.\n        \n        The scheme must be \"https\"; the URL must begin with \"https:\/\/\".\n        \n        A path is optional, and if present may be any string permissible in a URL. 
You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.\n        \n        Attempting to use a user or basic auth e.g. \"user:password@\" is not allowed. Fragments (\"#...\") and query parameters (\"?...\") are not allowed, either.\n\n- **preserveUnknownFields** (boolean)\n\n  preserveUnknownFields indicates that object fields which are not specified in the OpenAPI schema should be preserved when persisting to storage. apiVersion, kind, metadata and known fields inside metadata are always preserved. This field is deprecated in favor of setting `x-preserve-unknown-fields` to true in `spec.versions[*].schema.openAPIV3Schema`. See https:\/\/kubernetes.io\/docs\/tasks\/extend-kubernetes\/custom-resources\/custom-resource-definitions\/#field-pruning for details.\n\n\n\n\n\n## JSONSchemaProps {#JSONSchemaProps}\n\nJSONSchemaProps is a JSON-Schema following Specification Draft 4 (http:\/\/json-schema.org\/).\n\n<hr>\n\n- **$ref** (string)\n\n\n- **$schema** (string)\n\n\n- **additionalItems** (JSONSchemaPropsOrBool)\n\n\n  <a name=\"JSONSchemaPropsOrBool\"><\/a>\n  *JSONSchemaPropsOrBool represents JSONSchemaProps or a boolean value. Defaults to true for the boolean property.*\n\n- **additionalProperties** (JSONSchemaPropsOrBool)\n\n\n  <a name=\"JSONSchemaPropsOrBool\"><\/a>\n  *JSONSchemaPropsOrBool represents JSONSchemaProps or a boolean value. Defaults to true for the boolean property.*\n\n- **allOf** ([]<a href=\"\">JSONSchemaProps<\/a>)\n\n  *Atomic: will be replaced during a merge*\n  \n  \n\n- **anyOf** ([]<a href=\"\">JSONSchemaProps<\/a>)\n\n  *Atomic: will be replaced during a merge*\n  \n  \n\n- **default** (JSON)\n\n  default is a default value for undefined object fields. Defaulting is a beta feature under the CustomResourceDefaulting feature gate. Defaulting requires spec.preserveUnknownFields to be false.\n\n  <a name=\"JSON\"><\/a>\n  *JSON represents any valid JSON value. 
These types are supported: bool, int64, float64, string, []interface{}, map[string]interface{} and nil.*\n\n- **definitions** (map[string]<a href=\"\">JSONSchemaProps<\/a>)\n\n\n- **dependencies** (map[string]JSONSchemaPropsOrStringArray)\n\n\n  <a name=\"JSONSchemaPropsOrStringArray\"><\/a>\n  *JSONSchemaPropsOrStringArray represents a JSONSchemaProps or a string array.*\n\n- **description** (string)\n\n\n- **enum** ([]JSON)\n\n  *Atomic: will be replaced during a merge*\n  \n  \n\n  <a name=\"JSON\"><\/a>\n  *JSON represents any valid JSON value. These types are supported: bool, int64, float64, string, []interface{}, map[string]interface{} and nil.*\n\n- **example** (JSON)\n\n\n  <a name=\"JSON\"><\/a>\n  *JSON represents any valid JSON value. These types are supported: bool, int64, float64, string, []interface{}, map[string]interface{} and nil.*\n\n- **exclusiveMaximum** (boolean)\n\n\n- **exclusiveMinimum** (boolean)\n\n\n- **externalDocs** (ExternalDocumentation)\n\n\n  <a name=\"ExternalDocumentation\"><\/a>\n  *ExternalDocumentation allows referencing an external resource for extended documentation.*\n\n  - **externalDocs.description** (string)\n\n\n  - **externalDocs.url** (string)\n\n\n- **format** (string)\n\n  format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated:\n  \n  - bsonobjectid: a bson object ID, i.e. a 24 characters hex string - uri: an URI as parsed by Golang net\/url.ParseRequestURI - email: an email address as parsed by Golang net\/mail.ParseAddress - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]. 
- ipv4: an IPv4 IP as parsed by Golang net.ParseIP - ipv6: an IPv6 IP as parsed by Golang net.ParseIP - cidr: a CIDR as parsed by Golang net.ParseCIDR - mac: a MAC address as parsed by Golang net.ParseMAC - uuid: an UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid3: an UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid4: an UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - uuid5: an UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - isbn: an ISBN10 or ISBN13 number string like \"0321751043\" or \"978-0321751041\" - isbn10: an ISBN10 number string like \"0321751043\" - isbn13: an ISBN13 number string like \"978-0321751041\" - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\\d{3})\\d{11})$ with any non digit characters mixed in - ssn: a U.S. 
social security number following the regex ^\\d{3}[- ]?\\d{2}[- ]?\\d{4}$ - hexcolor: a hexadecimal color code like \"#FFFFFF\" following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$ - rgbcolor: an RGB color code like \"rgb(255,255,255)\" - byte: base64 encoded binary data - password: any kind of string - date: a date string like \"2006-01-02\" as defined by full-date in RFC3339 - duration: a duration string like \"22 ns\" as parsed by Golang time.ParseDuration or compatible with Scala duration format - datetime: a date time string like \"2014-12-15T19:30:20.000Z\" as defined by date-time in RFC3339.\n\n- **id** (string)\n\n\n- **items** (JSONSchemaPropsOrArray)\n\n\n  <a name=\"JSONSchemaPropsOrArray\"><\/a>\n  *JSONSchemaPropsOrArray represents a value that can either be a JSONSchemaProps or an array of JSONSchemaProps. Mainly here for serialization purposes.*\n\n- **maxItems** (int64)\n\n\n- **maxLength** (int64)\n\n\n- **maxProperties** (int64)\n\n\n- **maximum** (double)\n\n\n- **minItems** (int64)\n\n\n- **minLength** (int64)\n\n\n- **minProperties** (int64)\n\n\n- **minimum** (double)\n\n\n- **multipleOf** (double)\n\n\n- **not** (<a href=\"\">JSONSchemaProps<\/a>)\n\n\n- **nullable** (boolean)\n\n\n- **oneOf** ([]<a href=\"\">JSONSchemaProps<\/a>)\n\n  *Atomic: will be replaced during a merge*\n  \n  \n\n- **pattern** (string)\n\n\n- **patternProperties** (map[string]<a href=\"\">JSONSchemaProps<\/a>)\n\n\n- **properties** (map[string]<a href=\"\">JSONSchemaProps<\/a>)\n\n\n- **required** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  \n\n- **title** (string)\n\n\n- **type** (string)\n\n\n- **uniqueItems** (boolean)\n\n\n- **x-kubernetes-embedded-resource** (boolean)\n\n  x-kubernetes-embedded-resource defines that the value is an embedded Kubernetes runtime.Object, with TypeMeta and ObjectMeta. The type must be object. It is allowed to further restrict the embedded object. 
kind, apiVersion and metadata are validated automatically. x-kubernetes-preserve-unknown-fields is allowed to be true, but does not have to be if the object is fully specified (up to kind, apiVersion, metadata).\n\n- **x-kubernetes-int-or-string** (boolean)\n\n  x-kubernetes-int-or-string specifies that this value is either an integer or a string. If this is true, an empty type is allowed and type as child of anyOf is permitted if following one of the following patterns:\n  \n  1) anyOf:\n     - type: integer\n     - type: string\n  2) allOf:\n     - anyOf:\n       - type: integer\n       - type: string\n     - ... zero or more\n\n- **x-kubernetes-list-map-keys** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  x-kubernetes-list-map-keys annotates an array with the x-kubernetes-list-type `map` by specifying the keys used as the index of the map.\n  \n  This tag MUST only be used on lists that have the \"x-kubernetes-list-type\" extension set to \"map\". Also, the values specified for this attribute must be a scalar typed field of the child structure (no nesting is supported).\n  \n  The properties specified must either be required or have a default value, to ensure those properties are present for all list items.\n\n- **x-kubernetes-list-type** (string)\n\n  x-kubernetes-list-type annotates an array to further describe its topology. This extension must only be used on lists and may have 3 possible values:\n  \n  1) `atomic`: the list is treated as a single entity, like a scalar.\n       Atomic lists will be entirely replaced when updated. This extension\n       may be used on any type of list (struct, scalar, ...).\n  2) `set`:\n       Sets are lists that must not have multiple items with the same value. 
Each\n       value must be a scalar, an object with x-kubernetes-map-type `atomic` or an\n       array with x-kubernetes-list-type `atomic`.\n  3) `map`:\n       These lists are like maps in that their elements have a non-index key\n       used to identify them. Order is preserved upon merge. The map tag\n       must only be used on a list with elements of type object.\n  Defaults to atomic for arrays.\n\n- **x-kubernetes-map-type** (string)\n\n  x-kubernetes-map-type annotates an object to further describe its topology. This extension must only be used when type is object and may have 2 possible values:\n  \n  1) `granular`:\n       These maps are actual maps (key-value pairs) and each fields are independent\n       from each other (they can each be manipulated by separate actors). This is\n       the default behaviour for all maps.\n  2) `atomic`: the list is treated as a single entity, like a scalar.\n       Atomic maps will be entirely replaced when updated.\n\n- **x-kubernetes-preserve-unknown-fields** (boolean)\n\n  x-kubernetes-preserve-unknown-fields stops the API server decoding step from pruning fields which are not specified in the validation schema. This affects fields recursively, but switches back to normal pruning behaviour if nested properties or additionalProperties are specified in the schema. This can either be true or undefined. False is forbidden.\n\n- **x-kubernetes-validations** ([]ValidationRule)\n\n  *Patch strategy: merge on key `rule`*\n  \n  *Map: unique values on key rule will be kept during a merge*\n  \n  x-kubernetes-validations describes a list of validation rules written in the CEL expression language.\n\n  <a name=\"ValidationRule\"><\/a>\n  *ValidationRule describes a validation rule written in the CEL expression language.*\n\n  - **x-kubernetes-validations.rule** (string), required\n\n    Rule represents the expression which will be evaluated by CEL. 
ref: https:\/\/github.com\/google\/cel-spec The Rule is scoped to the location of the x-kubernetes-validations extension in the schema. The `self` variable in the CEL expression is bound to the scoped value. Example: - Rule scoped to the root of a resource with a status subresource: {\"rule\": \"self.status.actual \\<= self.spec.maxDesired\"}\n    \n    If the Rule is scoped to an object with properties, the accessible properties of the object are field selectable via `self.field` and field presence can be checked via `has(self.field)`. Null valued fields are treated as absent fields in CEL expressions. If the Rule is scoped to an object with additionalProperties (i.e. a map) the value of the map are accessible via `self[mapKey]`, map containment can be checked via `mapKey in self` and all entries of the map are accessible via CEL macros and functions such as `self.all(...)`. If the Rule is scoped to an array, the elements of the array are accessible via `self[i]` and also by macros and functions. If the Rule is scoped to a scalar, `self` is bound to the scalar value. Examples: - Rule scoped to a map of objects: {\"rule\": \"self.components['Widget'].priority \\< 10\"} - Rule scoped to a list of integers: {\"rule\": \"self.values.all(value, value >= 0 && value \\< 100)\"} - Rule scoped to a string value: {\"rule\": \"self.startsWith('kube')\"}\n    \n    The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the object and from any x-kubernetes-embedded-resource annotated objects. No other metadata properties are accessible.\n    \n    Unknown data preserved in custom resources via x-kubernetes-preserve-unknown-fields is not accessible in CEL expressions. This includes: - Unknown field values that are preserved by object schemas with x-kubernetes-preserve-unknown-fields. - Object properties where the property schema is of an \"unknown type\". 
An \"unknown type\" is recursively defined as:\n      - A schema with no type and x-kubernetes-preserve-unknown-fields set to true\n      - An array where the items schema is of an \"unknown type\"\n      - An object where the additionalProperties schema is of an \"unknown type\"\n    \n    Only property names of the form `[a-zA-Z_.-\/][a-zA-Z0-9_.-\/]*` are accessible. Accessible property names are escaped according to the following rules when accessed in the expression: - '__' escapes to '__underscores__' - '.' escapes to '__dot__' - '-' escapes to '__dash__' - '\/' escapes to '__slash__' - Property names that exactly match a CEL RESERVED keyword escape to '__{keyword}__'. The keywords are:\n    \t  \"true\", \"false\", \"null\", \"in\", \"as\", \"break\", \"const\", \"continue\", \"else\", \"for\", \"function\", \"if\",\n    \t  \"import\", \"let\", \"loop\", \"package\", \"namespace\", \"return\".\n    Examples:\n      - Rule accessing a property named \"namespace\": {\"rule\": \"self.__namespace__ > 0\"}\n      - Rule accessing a property named \"x-prop\": {\"rule\": \"self.x__dash__prop > 0\"}\n      - Rule accessing a property named \"redact__d\": {\"rule\": \"self.redact__underscores__d > 0\"}\n    \n    Equality on arrays with x-kubernetes-list-type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type:\n      - 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and\n        non-intersecting elements in `Y` are appended, retaining their partial order.\n      - 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values\n        are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. 
Elements in `Y` with\n        non-intersecting keys are appended, retaining their partial order.\n    \n    If `rule` makes use of the `oldSelf` variable it is implicitly a `transition rule`.\n    \n    By default, the `oldSelf` variable is the same type as `self`. When `optionalOldSelf` is true, the `oldSelf` variable is a CEL optional\n     variable whose value() is the same type as `self`.\n    See the documentation for the `optionalOldSelf` field for details.\n    \n    Transition rules by default are applied only on UPDATE requests and are skipped if an old value could not be found. You can opt a transition rule into unconditional evaluation by setting `optionalOldSelf` to true.\n\n  - **x-kubernetes-validations.fieldPath** (string)\n\n    fieldPath represents the field path returned when the validation fails. It must be a relative JSON path (i.e. with array notation) scoped to the location of this x-kubernetes-validations extension in the schema and refer to an existing field. e.g. when validation checks if a specific attribute `foo` under a map `testMap`, the fieldPath could be set to `.testMap.foo` If the validation checks two lists must have unique attributes, the fieldPath could be set to either of the list: e.g. `.testList` It does not support list numeric index. It supports child operation to refer to an existing field currently. Refer to [JSONPath support in Kubernetes](https:\/\/kubernetes.io\/docs\/reference\/kubectl\/jsonpath\/) for more info. Numeric index of array is not supported. For field name which contains special characters, use `['specialName']` to refer the field name. e.g. for attribute `foo.34$` appears in a list `testList`, the fieldPath could be set to `.testList['foo.34$']`\n\n  - **x-kubernetes-validations.message** (string)\n\n    Message represents the message displayed when validation fails. The message is required if the Rule contains line breaks. The message must not contain line breaks. 
If unset, the message is \"failed rule: {Rule}\". e.g. \"must be a URL with the host matching spec.host\"\n\n  - **x-kubernetes-validations.messageExpression** (string)\n\n    MessageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. Since messageExpression is used as a failure message, it must evaluate to a string. If both message and messageExpression are present on a rule, then messageExpression will be used if validation fails. If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced as if the messageExpression field were unset. If messageExpression evaluates to an empty string, a string with only spaces, or a string that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and the fact that messageExpression produced an empty string\/string with only spaces\/string with line breaks will be logged. messageExpression has access to all the same variables as the rule; the only difference is the return type. Example: \"x must be less than max (\"+string(self.max)+\")\"\n\n  - **x-kubernetes-validations.optionalOldSelf** (boolean)\n\n    optionalOldSelf is used to opt a transition rule into evaluation even when the object is first created, or if the old object is missing the value.\n    \n    When enabled `oldSelf` will be a CEL optional whose value will be `None` if there is no old value, or when the object is initially created.\n    \n    You may check for presence of oldSelf using `oldSelf.hasValue()` and unwrap it after checking using `oldSelf.value()`. 
Check the CEL documentation for Optional types for more information: https:\/\/pkg.go.dev\/github.com\/google\/cel-go\/cel#OptionalTypes\n    \n    May not be set unless `oldSelf` is used in `rule`.\n\n  - **x-kubernetes-validations.reason** (string)\n\n    reason provides a machine-readable validation failure reason that is returned to the caller when a request fails this validation rule. The HTTP status code returned to the caller will match the reason of the first failed validation rule. The currently supported reasons are: \"FieldValueInvalid\", \"FieldValueForbidden\", \"FieldValueRequired\", \"FieldValueDuplicate\". If not set, defaults to \"FieldValueInvalid\". All future added reasons must be accepted by clients when reading this value and unknown reasons should be treated as FieldValueInvalid.\n\n\n\n\n\n## CustomResourceDefinitionStatus {#CustomResourceDefinitionStatus}\n\nCustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition\n\n<hr>\n\n- **acceptedNames** (CustomResourceDefinitionNames)\n\n  acceptedNames are the names that are actually being used to serve discovery. They may be different from the names in spec.\n\n  <a name=\"CustomResourceDefinitionNames\"><\/a>\n  *CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition*\n\n  - **acceptedNames.kind** (string), required\n\n    kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the `kind` attribute in API calls.\n\n  - **acceptedNames.plural** (string), required\n\n    plural is the plural name of the resource to serve. The custom resources are served under `\/apis\/\\<group>\/\\<version>\/...\/\\<plural>`. Must match the name of the CustomResourceDefinition (in the form `\\<names.plural>.\\<group>`). 
Must be all lowercase.\n\n  - **acceptedNames.categories** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like `kubectl get all`.\n\n  - **acceptedNames.listKind** (string)\n\n    listKind is the serialized kind of the list for this resource. Defaults to \"`kind`List\".\n\n  - **acceptedNames.shortNames** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like `kubectl get \\<shortname>`. They must be all lowercase.\n\n  - **acceptedNames.singular** (string)\n\n    singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased `kind`.\n\n- **conditions** ([]CustomResourceDefinitionCondition)\n\n  *Map: unique values on key type will be kept during a merge*\n  \n  conditions indicate state for particular aspects of a CustomResourceDefinition\n\n  <a name=\"CustomResourceDefinitionCondition\"><\/a>\n  *CustomResourceDefinitionCondition contains details for the current condition of this CustomResourceDefinition.*\n\n  - **conditions.status** (string), required\n\n    status is the status of the condition. Can be True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    type is the type of the condition. Types include Established, NamesAccepted and Terminating.\n\n  - **conditions.lastTransitionTime** (Time)\n\n    lastTransitionTime is the last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    message is a human-readable message indicating details about last transition.\n\n  - **conditions.reason** (string)\n\n    reason is a unique, one-word, CamelCase reason for the condition's last transition.\n\n- **storedVersions** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  storedVersions lists all versions of CustomResources that were ever persisted. Tracking these versions allows a migration path for stored versions in etcd. The field is mutable so a migration controller can finish a migration to another version (ensuring no old objects are left in storage), and then remove the rest of the versions from this list. Versions may not be removed from `spec.versions` while they exist in this list.\n\n\n\n\n\n## CustomResourceDefinitionList {#CustomResourceDefinitionList}\n\nCustomResourceDefinitionList is a list of CustomResourceDefinition objects.\n\n<hr>\n\n- **items** ([]<a href=\"\">CustomResourceDefinition<\/a>), required\n\n  items list individual CustomResourceDefinition objects\n\n- **apiVersion** (string)\n\n  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources\n\n- **kind** (string)\n\n  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard object's metadata More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified CustomResourceDefinition\n\n#### HTTP Request\n\nGET \/apis\/apiextensions.k8s.io\/v1\/customresourcedefinitions\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CustomResourceDefinition\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CustomResourceDefinition<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified CustomResourceDefinition\n\n#### HTTP Request\n\nGET \/apis\/apiextensions.k8s.io\/v1\/customresourcedefinitions\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CustomResourceDefinition\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CustomResourceDefinition<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind CustomResourceDefinition\n\n#### HTTP Request\n\nGET \/apis\/apiextensions.k8s.io\/v1\/customresourcedefinitions\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- 
- **resourceVersionMatch** (*in query*): string

  <a href="">resourceVersionMatch</a>

- **sendInitialEvents** (*in query*): boolean

  <a href="">sendInitialEvents</a>

- **timeoutSeconds** (*in query*): integer

  <a href="">timeoutSeconds</a>

- **watch** (*in query*): boolean

  <a href="">watch</a>

#### Response

200 (<a href="">CustomResourceDefinitionList</a>): OK

401: Unauthorized

### `create` create a CustomResourceDefinition

#### HTTP Request

POST /apis/apiextensions.k8s.io/v1/customresourcedefinitions

#### Parameters

- **body**: <a href="">CustomResourceDefinition</a>, required

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **fieldManager** (*in query*): string

  <a href="">fieldManager</a>

- **fieldValidation** (*in query*): string

  <a href="">fieldValidation</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

#### Response

200 (<a href="">CustomResourceDefinition</a>): OK

201 (<a href="">CustomResourceDefinition</a>): Created

202 (<a href="">CustomResourceDefinition</a>): Accepted

401: Unauthorized

### `update` replace the specified CustomResourceDefinition

#### HTTP Request

PUT /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the CustomResourceDefinition

- **body**: <a href="">CustomResourceDefinition</a>, required

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **fieldManager** (*in query*): string

  <a href="">fieldManager</a>

- **fieldValidation** (*in query*): string

  <a href="">fieldValidation</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

#### Response

200 (<a href="">CustomResourceDefinition</a>): OK

201 (<a href="">CustomResourceDefinition</a>):
Created

401: Unauthorized

### `update` replace status of the specified CustomResourceDefinition

#### HTTP Request

PUT /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status

#### Parameters

- **name** (*in path*): string, required

  name of the CustomResourceDefinition

- **body**: <a href="">CustomResourceDefinition</a>, required

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **fieldManager** (*in query*): string

  <a href="">fieldManager</a>

- **fieldValidation** (*in query*): string

  <a href="">fieldValidation</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

#### Response

200 (<a href="">CustomResourceDefinition</a>): OK

201 (<a href="">CustomResourceDefinition</a>): Created

401: Unauthorized

### `patch` partially update the specified CustomResourceDefinition

#### HTTP Request

PATCH /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the CustomResourceDefinition

- **body**: <a href="">Patch</a>, required

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **fieldManager** (*in query*): string

  <a href="">fieldManager</a>

- **fieldValidation** (*in query*): string

  <a href="">fieldValidation</a>

- **force** (*in query*): boolean

  <a href="">force</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

#### Response

200 (<a href="">CustomResourceDefinition</a>): OK

201 (<a href="">CustomResourceDefinition</a>): Created

401: Unauthorized

### `patch` partially update status of the specified CustomResourceDefinition

#### HTTP Request

PATCH /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status

#### Parameters

- **name** (*in path*): string, required

  name of the
CustomResourceDefinition

- **body**: <a href="">Patch</a>, required

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **fieldManager** (*in query*): string

  <a href="">fieldManager</a>

- **fieldValidation** (*in query*): string

  <a href="">fieldValidation</a>

- **force** (*in query*): boolean

  <a href="">force</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

#### Response

200 (<a href="">CustomResourceDefinition</a>): OK

201 (<a href="">CustomResourceDefinition</a>): Created

401: Unauthorized

### `delete` delete a CustomResourceDefinition

#### HTTP Request

DELETE /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the CustomResourceDefinition

- **body**: <a href="">DeleteOptions</a>

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **gracePeriodSeconds** (*in query*): integer

  <a href="">gracePeriodSeconds</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

- **propagationPolicy** (*in query*): string

  <a href="">propagationPolicy</a>

#### Response

200 (<a href="">Status</a>): OK

202 (<a href="">Status</a>): Accepted

401: Unauthorized

### `deletecollection` delete collection of CustomResourceDefinition

#### HTTP Request

DELETE /apis/apiextensions.k8s.io/v1/customresourcedefinitions

#### Parameters

- **body**: <a href="">DeleteOptions</a>

- **continue** (*in query*): string

  <a href="">continue</a>

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **fieldSelector** (*in query*): string

  <a href="">fieldSelector</a>

- **gracePeriodSeconds** (*in query*): integer

  <a href="">gracePeriodSeconds</a>

- **labelSelector** (*in query*): string

  <a href="">labelSelector</a>

- **limit** (*in query*): integer

  <a href="">limit</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

- **propagationPolicy** (*in query*): string

  <a href="">propagationPolicy</a>

- **resourceVersion** (*in query*): string

  <a href="">resourceVersion</a>

- **resourceVersionMatch** (*in query*): string

  <a href="">resourceVersionMatch</a>

- **sendInitialEvents** (*in query*): boolean

  <a href="">sendInitialEvents</a>

- **timeoutSeconds** (*in query*): integer

  <a href="">timeoutSeconds</a>

#### Response

200 (<a href="">Status</a>): OK

401: Unauthorized

---
api_metadata:
  apiVersion: "apiextensions.k8s.io/v1"
  import: "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
  kind: "CustomResourceDefinition"
content_type: "api_reference"
description: "CustomResourceDefinition represents a resource that should be exposed on the API server."
title: "CustomResourceDefinition"
weight: 1
auto_generated: true
---

<!--
The file is auto-generated from the Go source code of the component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->

`apiVersion: apiextensions.k8s.io/v1`

`import "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"`

## CustomResourceDefinition {#CustomResourceDefinition}

CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format `<.spec.name>.<.spec.group>`.

<hr>

- **apiVersion**: apiextensions.k8s.io/v1

- **kind**: CustomResourceDefinition

- **metadata** (<a href="">ObjectMeta</a>)

  Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (<a href="">CustomResourceDefinitionSpec</a>), required

  spec describes how the user wants the resources to appear

- **status** (<a href="">CustomResourceDefinitionStatus</a>)

  status indicates the actual state of the CustomResourceDefinition


## CustomResourceDefinitionSpec {#CustomResourceDefinitionSpec}

CustomResourceDefinitionSpec describes how a user wants their resource to appear.

<hr>

- **group** (string), required

  group is the API group of the defined custom resource. The custom resources are served under `/apis/<group>/...`. Must match the name of the CustomResourceDefinition (in the form `<names.plural>.<group>`).

- **names** (CustomResourceDefinitionNames), required

  names specify the resource and kind names for the custom resource.

  <a name="CustomResourceDefinitionNames"></a>
  *CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition*

  - **names.kind** (string), required

    kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the `kind` attribute in API calls.

  - **names.plural** (string), required

    plural is the plural name of the resource to serve. The custom resources are served under `/apis/<group>/<version>/.../<plural>`. Must match the name of the CustomResourceDefinition (in the form `<names.plural>.<group>`). Must be all lowercase.

  - **names.categories** ([]string)

    *Atomic: will be replaced during a merge*

    categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like `kubectl get all`.

  - **names.listKind** (string)

    listKind is the serialized kind of the list for this resource. Defaults to "`kind`List".

  - **names.shortNames** ([]string)

    *Atomic: will be replaced during a merge*

    shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like `kubectl get <shortname>`. It must be all lowercase.

  - **names.singular** (string)

    singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased `kind`.

- **scope** (string), required

  scope indicates whether the defined custom resource is cluster- or namespace-scoped. Allowed values are `Cluster` and `Namespaced`.

- **versions** ([]CustomResourceDefinitionVersion), required

  *Atomic: will be replaced during a merge*

  versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10.

  <a name="CustomResourceDefinitionVersion"></a>
  *CustomResourceDefinitionVersion describes a version for CRD.*

  - **versions.name** (string), required

    name is the version name, e.g. "v1", "v2beta1", etc. The custom resources are served under this version at `/apis/<group>/<version>/...` if `served` is true.

  - **versions.served** (boolean), required

    served is a flag enabling/disabling this
version from being served via REST APIs.

  - **versions.storage** (boolean), required

    storage indicates this version should be used when persisting custom resources to storage. There must be exactly one version with storage=true.

  - **versions.additionalPrinterColumns** ([]CustomResourceColumnDefinition)

    *Atomic: will be replaced during a merge*

    additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used.

    <a name="CustomResourceColumnDefinition"></a>
    *CustomResourceColumnDefinition specifies a column for server side printing.*

    - **versions.additionalPrinterColumns.jsonPath** (string), required

      jsonPath is a simple JSON path (i.e. with array notation) which is evaluated against each custom resource to produce the value for this column.

    - **versions.additionalPrinterColumns.name** (string), required

      name is a human readable name for the column.

    - **versions.additionalPrinterColumns.type** (string), required

      type is an OpenAPI type definition for this column. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details.

    - **versions.additionalPrinterColumns.description** (string)

      description is a human readable description of this column.

    - **versions.additionalPrinterColumns.format** (string)

      format is an optional OpenAPI type definition for this column. The 'name' format is applied to the primary identifier column to assist in clients identifying column is the resource name. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details.

    - **versions.additionalPrinterColumns.priority** (int32)

      priority is an integer defining the relative importance of this column compared
to others. Lower numbers are considered higher priority. Columns that may be omitted in limited space scenarios should be given a priority greater than 0.

  - **versions.deprecated** (boolean)

    deprecated indicates this version of the custom resource API is deprecated. When set to true, API requests to this version receive a warning header in the server response. Defaults to false.

  - **versions.deprecationWarning** (string)

    deprecationWarning overrides the default warning returned to API clients. May only be set when `deprecated` is true. The default warning indicates this version is deprecated and recommends use of the newest served version of equal or greater stability, if one exists.

  - **versions.schema** (CustomResourceValidation)

    schema describes the schema used for validation, pruning, and defaulting of this version of the custom resource.

    <a name="CustomResourceValidation"></a>
    *CustomResourceValidation is a list of validation methods for CustomResources.*

    - **versions.schema.openAPIV3Schema** (<a href="">JSONSchemaProps</a>)

      openAPIV3Schema is the OpenAPI v3 schema to use for validation and pruning.

  - **versions.selectableFields** ([]SelectableField)

    *Atomic: will be replaced during a merge*

    selectableFields specifies paths to fields that may be used as field selectors. A maximum of 8 selectable fields are allowed. See https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors

    <a name="SelectableField"></a>
    *SelectableField specifies the JSON path of a field that may be used with field selectors.*

    - **versions.selectableFields.jsonPath** (string), required

      jsonPath is a simple JSON path which is evaluated against each custom resource to produce a field selector value. Only JSON paths without the array notation are allowed. Must point to a field of type string, boolean or integer. Types with enum values and strings with formats are allowed. If jsonPath
refers to absent field in a resource, the jsonPath evaluates to an empty string. Must not point to metadata fields. Required.

  - **versions.subresources** (CustomResourceSubresources)

    subresources specify what subresources this version of the defined custom resource have.

    <a name="CustomResourceSubresources"></a>
    *CustomResourceSubresources defines the status and scale subresources for CustomResources.*

    - **versions.subresources.scale** (CustomResourceSubresourceScale)

      scale indicates the custom resource should serve a `/scale` subresource that returns an `autoscaling/v1` Scale object.

      <a name="CustomResourceSubresourceScale"></a>
      *CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources.*

      - **versions.subresources.scale.specReplicasPath** (string), required

        specReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale `spec.replicas`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.spec`. If there is no value under the given path in the custom resource, the `/scale` subresource will return an error on GET.

      - **versions.subresources.scale.statusReplicasPath** (string), required

        statusReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale `status.replicas`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.status`. If there is no value under the given path in the custom resource, the `status.replicas` value in the `/scale` subresource will default to 0.

      - **versions.subresources.scale.labelSelectorPath** (string)

        labelSelectorPath defines the JSON path inside of a custom resource that corresponds to Scale `status.selector`. Only JSON paths without the array notation are allowed. Must be a JSON Path under `.status` or `.spec`. Must be set to work with HorizontalPodAutoscaler. The field pointed by this JSON path
must be a string field (not a complex selector struct) which contains a serialized label selector in string form. More info: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions#scale-subresource If there is no value under the given path in the custom resource, the `status.selector` value in the `/scale` subresource will default to the empty string.

    - **versions.subresources.status** (CustomResourceSubresourceStatus)

      status indicates the custom resource should serve a `/status` subresource. When enabled: 1. requests to the custom resource primary endpoint ignore changes to the `status` stanza of the object. 2. requests to the custom resource `/status` subresource ignore changes to anything other than the `status` stanza of the object.

      <a name="CustomResourceSubresourceStatus"></a>
      *CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the `.status` JSON path inside of a CustomResource. When set, * exposes a /status subresource for the custom resource * PUT requests to the /status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza*

- **conversion** (CustomResourceConversion)

  conversion defines conversion settings for the CRD.

  <a name="CustomResourceConversion"></a>
  *CustomResourceConversion describes how to convert different versions of a CR.*

  - **conversion.strategy** (string), required

    strategy specifies how custom resources are converted between versions. Allowed values are: - `"None"`: The converter only change the apiVersion and would not touch any other field in the custom resource. - `"Webhook"`: API Server will call to an external webhook to do the conversion. Additional information is needed for this option. This requires spec.preserveUnknownFields to be false, and
spec.conversion.webhook to be set.

  - **conversion.webhook** (WebhookConversion)

    webhook describes how to call the conversion webhook. Required when `strategy` is set to `"Webhook"`.

    <a name="WebhookConversion"></a>
    *WebhookConversion describes how to call a conversion webhook.*

    - **conversion.webhook.conversionReviewVersions** ([]string), required

      *Atomic: will be replaced during a merge*

      conversionReviewVersions is an ordered list of preferred `ConversionReview` versions the Webhook expects. The API server will use the first version in the list which it supports. If none of the versions specified in this list are supported by API server, conversion will fail for the custom resource. If a persisted Webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail.

    - **conversion.webhook.clientConfig** (WebhookClientConfig)

      clientConfig is the instructions for how to call the webhook if strategy is `Webhook`.

      <a name="WebhookClientConfig"></a>
      *WebhookClientConfig contains the information to make a TLS connection with the webhook.*

      - **conversion.webhook.clientConfig.caBundle** ([]byte)

        caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used.

      - **conversion.webhook.clientConfig.service** (ServiceReference)

        service is a reference to the service for this webhook. Either service or url must be specified.

        If the webhook is running within the cluster, then you should use `service`.

        <a name="ServiceReference"></a>
        *ServiceReference holds a reference to Service.legacy.k8s.io*

        - **conversion.webhook.clientConfig.service.name** (string), required

          name is the name of the service. Required

        - **conversion.webhook.clientConfig.service.namespace**
(string), required

          namespace is the namespace of the service. Required

        - **conversion.webhook.clientConfig.service.path** (string)

          path is an optional URL path at which the webhook will be contacted.

        - **conversion.webhook.clientConfig.service.port** (int32)

          port is an optional service port at which the webhook will be contacted. `port` should be a valid port number (1-65535, inclusive). Defaults to 443 for backward compatibility.

      - **conversion.webhook.clientConfig.url** (string)

        url gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.

        The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.

        Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster.

        The scheme must be "https"; the URL must begin with "https://".

        A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.

        Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#...") and query parameters ("?...") are not allowed, either.

- **preserveUnknownFields** (boolean)

  preserveUnknownFields indicates that object fields which are not specified in the OpenAPI schema should be preserved when persisting to storage. apiVersion, kind, metadata and known fields inside metadata
are always preserved. This field is deprecated in favor of setting `x-preserve-unknown-fields` to true in `spec.versions[*].schema.openAPIV3Schema`. See https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#field-pruning for details.


## JSONSchemaProps {#JSONSchemaProps}

JSONSchemaProps is a JSON-Schema following Specification Draft 4 (http://json-schema.org/).

<hr>

- **$ref** (string)

- **$schema** (string)

- **additionalItems** (JSONSchemaPropsOrBool)

  <a name="JSONSchemaPropsOrBool"></a>
  *JSONSchemaPropsOrBool represents JSONSchemaProps or a boolean value. Defaults to true for the boolean property.*

- **additionalProperties** (JSONSchemaPropsOrBool)

  <a name="JSONSchemaPropsOrBool"></a>
  *JSONSchemaPropsOrBool represents JSONSchemaProps or a boolean value. Defaults to true for the boolean property.*

- **allOf** ([]<a href="">JSONSchemaProps</a>)

  *Atomic: will be replaced during a merge*

- **anyOf** ([]<a href="">JSONSchemaProps</a>)

  *Atomic: will be replaced during a merge*

- **default** (JSON)

  default is a default value for undefined object fields. Defaulting is a beta feature under the CustomResourceDefaulting feature gate. Defaulting requires spec.preserveUnknownFields to be false.

  <a name="JSON"></a>
  *JSON represents any valid JSON value. These types are supported: bool, int64, float64, string, []interface{}, map[string]interface{} and nil.*

- **definitions** (map[string]<a href="">JSONSchemaProps</a>)

- **dependencies** (map[string]JSONSchemaPropsOrStringArray)

  <a name="JSONSchemaPropsOrStringArray"></a>
  *JSONSchemaPropsOrStringArray represents a JSONSchemaProps or a string array.*

- **description** (string)

- **enum** ([]JSON)

  *Atomic: will be replaced during a merge*

  <a name="JSON"></a>
  *JSON represents any valid JSON value. These types are supported: bool, int64, float64, string, []interface{}, map[string]interface{} and nil.*
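As an aside, schema fields such as `type`, `properties`, `default`, and `enum` nest just as they do in OpenAPI v3. A minimal sketch in Python (plain dicts; the field names `replicas` and `logLevel` and their values are illustrative, not from this reference) of how a property schema carrying `default` and `enum` composes:

```python
# Sketch of JSONSchemaProps fragments as plain dicts (illustrative names).
# Note: per the docs above, "default" requires spec.preserveUnknownFields
# to be false on the CustomResourceDefinition.
replicas_schema = {
    "type": "integer",
    "minimum": 1,
    "default": 3,          # "default" holds any valid JSON value
}
log_level_schema = {
    "type": "string",
    "enum": ["debug", "info", "warn", "error"],  # "enum" is a list of JSON values
    "default": "info",
}
open_api_v3_schema = {
    "type": "object",
    "properties": {
        "spec": {
            "type": "object",
            "properties": {
                "replicas": replicas_schema,
                "logLevel": log_level_schema,
            },
        },
    },
}
# Walk the nested "properties" maps the same way the apiserver would.
print(open_api_v3_schema["properties"]["spec"]["properties"]["logLevel"]["default"])  # → info
```

This dict would sit under `spec.versions[*].schema.openAPIV3Schema` in a CRD manifest.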
- **example** (JSON)

  <a name="JSON"></a>
  *JSON represents any valid JSON value. These types are supported: bool, int64, float64, string, []interface{}, map[string]interface{} and nil.*

- **exclusiveMaximum** (boolean)

- **exclusiveMinimum** (boolean)

- **externalDocs** (ExternalDocumentation)

  <a name="ExternalDocumentation"></a>
  *ExternalDocumentation allows referencing an external resource for extended documentation.*

  - **externalDocs.description** (string)

  - **externalDocs.url** (string)

- **format** (string)

  format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated:

  - bsonobjectid: a bson object ID, i.e. a 24 characters hex string
  - uri: an URI as parsed by Golang net/url.ParseRequestURI
  - email: an email address as parsed by Golang net/mail.ParseAddress
  - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]
  - ipv4: an IPv4 IP as parsed by Golang net.ParseIP
  - ipv6: an IPv6 IP as parsed by Golang net.ParseIP
  - cidr: a CIDR as parsed by Golang net.ParseCIDR
  - mac: a MAC address as parsed by Golang net.ParseMAC
  - uuid: an UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$
  - uuid3: an UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$
  - uuid4: an UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$
  - uuid5: an UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$
  - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041"
  - isbn10: an ISBN10 number string like "0321751043"
  - isbn13: an ISBN13 number string like "978-0321751041"
  - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$ with any non digit characters mixed in
  - ssn: a U.S. social security number following the regex ^\d{3}[- ]?\d{2}[- ]?\d{4}$
  - hexcolor: an hexadecimal color code like "#FFFFFF", following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$
  - rgbcolor: an RGB color code like "rgb(255,255,255)"
  - byte: base64 encoded binary data
  - password: any kind of string
  - date: a date string like "2006-01-02" as defined by full-date in RFC3339
  - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format
  - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339

- **id** (string)

- **items** (JSONSchemaPropsOrArray)

  <a name="JSONSchemaPropsOrArray"></a>
  *JSONSchemaPropsOrArray represents a value that can either be a JSONSchemaProps or an array of JSONSchemaProps. Mainly here for serialization purposes.*

- **maxItems** (int64)

- **maxLength** (int64)

- **maxProperties** (int64)

- **maximum** (double)

- **minItems** (int64)

- **minLength** (int64)

- **minProperties** (int64)

- **minimum** (double)

- **multipleOf** (double)

- **not** (<a href="">JSONSchemaProps</a>)

- **nullable** (boolean)

- **oneOf** ([]<a href="">JSONSchemaProps</a>)

  *Atomic: will be replaced during a merge*

- **pattern** (string)

- **patternProperties** (map[string]<a href="">JSONSchemaProps</a>)

- **properties** (map[string]<a href="">JSONSchemaProps</a>)

- **required** ([]string)

  *Atomic: will be replaced during a merge*

- **title** (string)

- **type** (string)

- **uniqueItems** (boolean)

- **x-kubernetes-embedded-resource** (boolean)

  x-kubernetes-embedded-resource defines that the value is an embedded Kubernetes runtime.Object, with TypeMeta and ObjectMeta. The type must be object. It is allowed to further restrict the embedded object. kind,
apiVersion and metadata are validated automatically. x-kubernetes-preserve-unknown-fields is allowed to be true, but does not have to be if the object is fully specified (up to kind, apiVersion, metadata).

- **x-kubernetes-int-or-string** (boolean)

  x-kubernetes-int-or-string specifies that this value is either an integer or a string. If this is true, an empty type is allowed and type as child of anyOf is permitted if following one of the following patterns:

  1) anyOf: - type: integer - type: string 2) allOf: - anyOf: - type: integer - type: string - ... zero or more

- **x-kubernetes-list-map-keys** ([]string)

  *Atomic: will be replaced during a merge*

  x-kubernetes-list-map-keys annotates an array with the x-kubernetes-list-type `map` by specifying the keys used as the index of the map.

  This tag MUST only be used on lists that have the "x-kubernetes-list-type" extension set to "map". Also, the values specified for this attribute must be a scalar typed field of the child structure (no nesting is supported).

  The properties specified must either be required or have a default value, to ensure those properties are present for all list items.

- **x-kubernetes-list-type** (string)

  x-kubernetes-list-type annotates an array to further describe its topology. This extension must only be used on lists and may have 3 possible values:

  1) `atomic`: the list is treated as a single entity, like a scalar. Atomic lists will be entirely replaced when updated. This extension may be used on any type of list (struct, scalar, ...). 2) `set`: Sets are lists that must not have multiple items with the same value. Each value must be a scalar, an object with x-kubernetes-map-type `atomic` or an array with x-kubernetes-list-type `atomic`. 3) `map`: These lists are like maps in that their elements have a non-index key used to identify them. Order is preserved upon merge. The
map tag must only be used on a list with elements of type object. Defaults to atomic for arrays.

- **x-kubernetes-map-type** (string)

  x-kubernetes-map-type annotates an object to further describe its topology. This extension must only be used when type is object and may have 2 possible values:

  1) `granular`: These maps are actual maps (key-value pairs) and each fields are independent from each other (they can each be manipulated by separate actors). This is the default behaviour for all maps. 2) `atomic`: the list is treated as a single entity, like a scalar. Atomic maps will be entirely replaced when updated.

- **x-kubernetes-preserve-unknown-fields** (boolean)

  x-kubernetes-preserve-unknown-fields stops the API server decoding step from pruning fields which are not specified in the validation schema. This affects fields recursively, but switches back to normal pruning behaviour if nested properties or additionalProperties are specified in the schema. This can either be true or undefined. False is forbidden.

- **x-kubernetes-validations** ([]ValidationRule)

  *Patch strategy: merge on key `rule`*

  *Map: unique values on key rule will be kept during a merge*

  x-kubernetes-validations describes a list of validation rules written in the CEL expression language.

  <a name="ValidationRule"></a>
  *ValidationRule describes a validation rule written in the CEL expression language.*

  - **x-kubernetes-validations.rule** (string), required

    Rule represents the expression which will be evaluated by CEL. ref: https://github.com/google/cel-spec The Rule is scoped to the location of the x-kubernetes-validations extension in the schema. The `self` variable in the CEL expression is bound to the scoped value. Example: - Rule scoped to the root of a resource with a status subresource: {"rule": "self.status.actual <= self.spec.maxDesired"}

    If the Rule is scoped to an object with properties, the accessible
properties of the object are field selectable via  self field  and field presence can be checked via  has self field    Null valued fields are treated as absent fields in CEL expressions  If the Rule is scoped to an object with additionalProperties  i e  a map  the value of the map are accessible via  self mapKey    map containment can be checked via  mapKey in self  and all entries of the map are accessible via CEL macros and functions such as  self all        If the Rule is scoped to an array  the elements of the array are accessible via  self i   and also by macros and functions  If the Rule is scoped to a scalar   self  is bound to the scalar value  Examples    Rule scoped to a map of objects    rule    self components  Widget   priority    10     Rule scoped to a list of integers    rule    self values all value  value    0    value    100      Rule scoped to a string value    rule    self startsWith  kube              The  apiVersion    kind    metadata name  and  metadata generateName  are always accessible from the root of the object and from any x kubernetes embedded resource annotated objects  No other metadata properties are accessible           Unknown data preserved in custom resources via x kubernetes preserve unknown fields is not accessible in CEL expressions  This includes    Unknown field values that are preserved by object schemas with x kubernetes preserve unknown fields    Object properties where the property schema is of an  unknown type   An  unknown type  is recursively defined as          A schema with no type and x kubernetes preserve unknown fields set to true         An array where the items schema is of an  unknown type          An object where the additionalProperties schema is of an  unknown type           Only property names of the form   a zA Z      a zA Z0 9        are accessible  Accessible property names are escaped according to the following rules when accessed in the expression         escapes to    underscores          escapes 
to    dot          escapes to    dash          escapes to    slash      Property names that exactly match a CEL RESERVED keyword escape to     keyword      The keywords are          true    false    null    in    as    break    const    continue    else    for    function    if           import    let    loop    package    namespace    return       Examples          Rule accessing a property named  namespace     rule    self   namespace     0           Rule accessing a property named  x prop     rule    self x  dash  prop   0           Rule accessing a property named  redact  d     rule    self redact  underscores  d   0            Equality on arrays with x kubernetes list type of  set  or  map  ignores element order  i e   1  2      2  1   Concatenation on arrays with x kubernetes list type use the semantics of the list type           set    X   Y  performs a union where the array positions of all elements in  X  are preserved and         non intersecting elements in  Y  are appended  retaining their partial order           map    X   Y  performs a merge where the array positions of all keys in  X  are preserved but the values         are overwritten by values in  Y  when the key sets of  X  and  Y  intersect  Elements in  Y  with         non intersecting keys are appended  retaining their partial order           If  rule  makes use of the  oldSelf  variable it is implicitly a  transition rule            By default  the  oldSelf  variable is the same type as  self   When  optionalOldSelf  is true  the  oldSelf  variable is a CEL optional      variable whose value   is the same type as  self       See the documentation for the  optionalOldSelf  field for details           Transition rules by default are applied only on UPDATE requests and are skipped if an old value could not be found  You can opt a transition rule into unconditional evaluation by setting  optionalOldSelf  to true         x kubernetes validations fieldPath    string       fieldPath represents the 
field path returned when the validation fails  It must be a relative JSON path  i e  with array notation  scoped to the location of this x kubernetes validations extension in the schema and refer to an existing field  e g  when validation checks if a specific attribute  foo  under a map  testMap   the fieldPath could be set to   testMap foo  If the validation checks two lists must have unique attributes  the fieldPath could be set to either of the list  e g    testList  It does not support list numeric index  It supports child operation to refer to an existing field currently  Refer to  JSONPath support in Kubernetes  https   kubernetes io docs reference kubectl jsonpath   for more info  Numeric index of array is not supported  For field name which contains special characters  use    specialName    to refer the field name  e g  for attribute  foo 34   appears in a list  testList   the fieldPath could be set to   testList  foo 34            x kubernetes validations message    string       Message represents the message displayed when validation fails  The message is required if the Rule contains line breaks  The message must not contain line breaks  If unset  the message is  failed rule   Rule    e g   must be a URL with the host matching spec host         x kubernetes validations messageExpression    string       MessageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails  Since messageExpression is used as a failure message  it must evaluate to a string  If both message and messageExpression are present on a rule  then messageExpression will be used if validation fails  If messageExpression results in a runtime error  the runtime error is logged  and the validation failure message is produced as if the messageExpression field were unset  If messageExpression evaluates to an empty string  a string with only spaces  or a string that contains line breaks  then the validation failure message will 
also be produced as if the messageExpression field were unset  and the fact that messageExpression produced an empty string string with only spaces string with line breaks will be logged  messageExpression has access to all the same variables as the rule  the only difference is the return type  Example   x must be less than max    string self max             x kubernetes validations optionalOldSelf    boolean       optionalOldSelf is used to opt a transition rule into evaluation even when the object is first created  or if the old object is missing the value           When enabled  oldSelf  will be a CEL optional whose value will be  None  if there is no old value  or when the object is initially created           You may check for presence of oldSelf using  oldSelf hasValue    and unwrap it after checking using  oldSelf value     Check the CEL documentation for Optional types for more information  https   pkg go dev github com google cel go cel OptionalTypes          May not be set unless  oldSelf  is used in  rule          x kubernetes validations reason    string       reason provides a machine readable validation failure reason that is returned to the caller when a request fails this validation rule  The HTTP status code returned to the caller will match the reason of the reason of the first failed validation rule  The currently supported reasons are   FieldValueInvalid    FieldValueForbidden    FieldValueRequired    FieldValueDuplicate   If not set  default to use  FieldValueInvalid   All future added reasons must be accepted by clients when reading this value and unknown reasons should be treated as FieldValueInvalid          CustomResourceDefinitionStatus   CustomResourceDefinitionStatus   CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition   hr       acceptedNames    CustomResourceDefinitionNames     acceptedNames are the names that are actually being used to serve discovery  They may be different than the names in spec      a 
name  CustomResourceDefinitionNames    a     CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition         acceptedNames kind    string   required      kind is the serialized kind of the resource  It is normally CamelCase and singular  Custom resource instances will use this value as the  kind  attribute in API calls         acceptedNames plural    string   required      plural is the plural name of the resource to serve  The custom resources are served under   apis   group    version        plural    Must match the name of the CustomResourceDefinition  in the form    names plural    group     Must be all lowercase         acceptedNames categories      string        Atomic  will be replaced during a merge           categories is a list of grouped resources this custom resource belongs to  e g   all    This is published in API discovery documents  and used by clients to support invocations like  kubectl get all          acceptedNames listKind    string       listKind is the serialized kind of the list for this resource  Defaults to   kind List          acceptedNames shortNames      string        Atomic  will be replaced during a merge           shortNames are short names for the resource  exposed in API discovery documents  and used by clients to support invocations like  kubectl get   shortname    It must be all lowercase         acceptedNames singular    string       singular is the singular name of the resource  It must be all lowercase  Defaults to lowercased  kind        conditions      CustomResourceDefinitionCondition      Map  unique values on key type will be kept during a merge       conditions indicate state for particular aspects of a CustomResourceDefinition     a name  CustomResourceDefinitionCondition    a     CustomResourceDefinitionCondition contains details for the current condition of this pod          conditions status    string   required      status is the status of the condition  Can be True  False  Unknown   
      conditions type    string   required      type is the type of the condition  Types include Established  NamesAccepted and Terminating         conditions lastTransitionTime    Time       lastTransitionTime last time the condition transitioned from one status to another        a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions message    string       message is a human readable message indicating details about last transition         conditions reason    string       reason is a unique  one word  CamelCase reason for the condition s last transition       storedVersions      string      Atomic  will be replaced during a merge       storedVersions lists all versions of CustomResources that were ever persisted  Tracking these versions allows a migration path for stored versions in etcd  The field is mutable so a migration controller can finish a migration to another version  ensuring no old objects are left in storage   and then remove the rest of the versions from this list  Versions may not be removed from  spec versions  while they exist in this list          CustomResourceDefinitionList   CustomResourceDefinitionList   CustomResourceDefinitionList is a list of CustomResourceDefinition objects    hr       items       a href    CustomResourceDefinition  a    required    items list individual CustomResourceDefinition objects      apiVersion    string     APIVersion defines the versioned schema of this representation of an object  Servers should convert recognized schemas to the latest internal value  and may reject unrecognized values  More info  https   git k8s io community contributors devel sig architecture api conventions md resources      kind    string     Kind is a string value representing the REST resource this object represents  Servers may infer this from the endpoint the client submits 
requests to  Cannot be updated  In CamelCase  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      metadata     a href    ListMeta  a      Standard object s metadata More info  https   git k8s io community contributors devel sig architecture api conventions md metadata         Operations   Operations      hr             get  read the specified CustomResourceDefinition       HTTP Request  GET  apis apiextensions k8s io v1 customresourcedefinitions  name        Parameters       name     in path    string  required    name of the CustomResourceDefinition       pretty     in query    string     a href    pretty  a          Response   200   a href    CustomResourceDefinition  a    OK  401  Unauthorized        get  read status of the specified CustomResourceDefinition       HTTP Request  GET  apis apiextensions k8s io v1 customresourcedefinitions  name  status       Parameters       name     in path    string  required    name of the CustomResourceDefinition       pretty     in query    string     a href    pretty  a          Response   200   a href    CustomResourceDefinition  a    OK  401  Unauthorized        list  list or watch objects of kind CustomResourceDefinition       HTTP Request  GET  apis apiextensions k8s io v1 customresourcedefinitions       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    
sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    CustomResourceDefinitionList  a    OK  401  Unauthorized        create  create a CustomResourceDefinition       HTTP Request  POST  apis apiextensions k8s io v1 customresourcedefinitions       Parameters       body     a href    CustomResourceDefinition  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CustomResourceDefinition  a    OK  201   a href    CustomResourceDefinition  a    Created  202   a href    CustomResourceDefinition  a    Accepted  401  Unauthorized        update  replace the specified CustomResourceDefinition       HTTP Request  PUT  apis apiextensions k8s io v1 customresourcedefinitions  name        Parameters       name     in path    string  required    name of the CustomResourceDefinition       body     a href    CustomResourceDefinition  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CustomResourceDefinition  a    OK  201   a href    CustomResourceDefinition  a    Created  401  Unauthorized        update  replace status of the specified CustomResourceDefinition       HTTP Request  PUT  apis apiextensions k8s io v1 customresourcedefinitions  name  status       Parameters       name     in path    string  required    name of the CustomResourceDefinition       body     a href    CustomResourceDefinition  a   
required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CustomResourceDefinition  a    OK  201   a href    CustomResourceDefinition  a    Created  401  Unauthorized        patch  partially update the specified CustomResourceDefinition       HTTP Request  PATCH  apis apiextensions k8s io v1 customresourcedefinitions  name        Parameters       name     in path    string  required    name of the CustomResourceDefinition       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CustomResourceDefinition  a    OK  201   a href    CustomResourceDefinition  a    Created  401  Unauthorized        patch  partially update status of the specified CustomResourceDefinition       HTTP Request  PATCH  apis apiextensions k8s io v1 customresourcedefinitions  name  status       Parameters       name     in path    string  required    name of the CustomResourceDefinition       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CustomResourceDefinition  a    OK  201   a href    CustomResourceDefinition  a    
Created  401  Unauthorized        delete  delete a CustomResourceDefinition       HTTP Request  DELETE  apis apiextensions k8s io v1 customresourcedefinitions  name        Parameters       name     in path    string  required    name of the CustomResourceDefinition       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of CustomResourceDefinition       HTTP Request  DELETE  apis apiextensions k8s io v1 customresourcedefinitions       Parameters       body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
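The `x-kubernetes-validations` rules described in the record above can be illustrated with a small CRD schema fragment. This is a minimal sketch: the `example.com` group, the `Widget` kind, and the `replicas`/`maxReplicas` fields are hypothetical, chosen only to show rule scoping, `fieldPath`, and a transition rule using `oldSelf`:

```yaml
# Illustrative CRD excerpt (hypothetical group/kind) showing CEL validation rules.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              required: ["replicas", "maxReplicas"]
              properties:
                replicas:
                  type: integer
                maxReplicas:
                  type: integer
              x-kubernetes-validations:
                # Rule scoped to spec; `self` is the spec object.
                - rule: "self.replicas <= self.maxReplicas"
                  message: "replicas must not exceed maxReplicas"
                  fieldPath: ".replicas"
                # Transition rule: referencing `oldSelf` means it only
                # applies on UPDATE, when an old value exists.
                - rule: "self.maxReplicas >= oldSelf.maxReplicas"
                  message: "maxReplicas may not be decreased"
                  reason: "FieldValueForbidden"
```

Because the second rule references `oldSelf`, it is a transition rule: it is skipped on create, so objects can be created with any `maxReplicas`, but subsequent updates that lower the value would be rejected with the given message and reason.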
{"questions":"kubernetes reference contenttype apireference apiVersion admissionregistration k8s io v1 kind ValidatingWebhookConfiguration apimetadata title ValidatingWebhookConfiguration weight 4 autogenerated true import k8s io api admissionregistration v1 ValidatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and object without changing it","answers":"---\napi_metadata:\n  apiVersion: \"admissionregistration.k8s.io\/v1\"\n  import: \"k8s.io\/api\/admissionregistration\/v1\"\n  kind: \"ValidatingWebhookConfiguration\"\ncontent_type: \"api_reference\"\ndescription: \"ValidatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects an object without changing it.\"\ntitle: \"ValidatingWebhookConfiguration\"\nweight: 4\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. 
You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: admissionregistration.k8s.io\/v1`\n\n`import \"k8s.io\/api\/admissionregistration\/v1\"`\n\n\n## ValidatingWebhookConfiguration {#ValidatingWebhookConfiguration}\n\nValidatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects an object without changing it.\n\n<hr>\n\n- **apiVersion**: admissionregistration.k8s.io\/v1\n\n\n- **kind**: ValidatingWebhookConfiguration\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata; More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata.\n\n- **webhooks** ([]ValidatingWebhook)\n\n  *Patch strategy: merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  Webhooks is a list of webhooks and the affected resources and operations.\n\n  <a name=\"ValidatingWebhook\"><\/a>\n  *ValidatingWebhook describes an admission webhook and the resources and operations it applies to.*\n\n  - **webhooks.admissionReviewVersions** ([]string), required\n\n    *Atomic: will be replaced during a merge*\n    \n    AdmissionReviewVersions is an ordered list of preferred `AdmissionReview` versions the Webhook expects. API server will try to use the first version in the list which it supports. If none of the versions specified in this list are supported by the API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy.\n\n  - **webhooks.clientConfig** (WebhookClientConfig), required\n\n    ClientConfig defines how to communicate with the hook. 
Required\n\n    <a name=\"WebhookClientConfig\"><\/a>\n    *WebhookClientConfig contains the information to make a TLS connection with the webhook*\n\n    - **webhooks.clientConfig.caBundle** ([]byte)\n\n      `caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used.\n\n    - **webhooks.clientConfig.service** (ServiceReference)\n\n      `service` is a reference to the service for this webhook. Either `service` or `url` must be specified.\n      \n      If the webhook is running within the cluster, then you should use `service`.\n\n      <a name=\"ServiceReference\"><\/a>\n      *ServiceReference holds a reference to Service.legacy.k8s.io*\n\n      - **webhooks.clientConfig.service.name** (string), required\n\n        `name` is the name of the service. Required\n\n      - **webhooks.clientConfig.service.namespace** (string), required\n\n        `namespace` is the namespace of the service. Required\n\n      - **webhooks.clientConfig.service.path** (string)\n\n        `path` is an optional URL path which will be sent in any request to this service.\n\n      - **webhooks.clientConfig.service.port** (int32)\n\n        If specified, the port on the service that is hosting the webhook. Defaults to 443 for backward compatibility. `port` should be a valid port number (1-65535, inclusive).\n\n    - **webhooks.clientConfig.url** (string)\n\n      `url` gives the location of the webhook, in standard URL form (`scheme:\/\/host:port\/path`). Exactly one of `url` or `service` must be specified.\n      \n      The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). 
`host` may also be an IP address.\n      \n      Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster.\n      \n      The scheme must be \"https\"; the URL must begin with \"https:\/\/\".\n      \n      A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.\n      \n      Attempting to use a user or basic auth e.g. \"user:password@\" is not allowed. Fragments (\"#...\") and query parameters (\"?...\") are not allowed, either.\n\n  - **webhooks.name** (string), required\n\n    The name of the admission webhook. Name should be fully qualified, e.g., imagepolicy.kubernetes.io, where \"imagepolicy\" is the name of the webhook, and kubernetes.io is the name of the organization. Required.\n\n  - **webhooks.sideEffects** (string), required\n\n    SideEffects states whether this webhook has side effects. Acceptable values are: None, NoneOnDryRun (webhooks created via v1beta1 may also specify Some or Unknown). Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some.\n\n  - **webhooks.failurePolicy** (string)\n\n    FailurePolicy defines how unrecognized errors from the admission endpoint are handled - allowed values are Ignore or Fail. 
Defaults to Fail.\n\n  - **webhooks.matchConditions** ([]MatchCondition)\n\n    *Patch strategy: merge on key `name`*\n    \n    *Map: unique values on key name will be kept during a merge*\n    \n    MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n    \n    The exact matching logic is (in order):\n      1. If ANY matchCondition evaluates to FALSE, the webhook is skipped.\n      2. If ALL matchConditions evaluate to TRUE, the webhook is called.\n      3. If any matchCondition evaluates to an error (but none are FALSE):\n         - If failurePolicy=Fail, reject the request\n         - If failurePolicy=Ignore, the error is ignored and the webhook is skipped\n\n    <a name=\"MatchCondition\"><\/a>\n    *MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook.*\n\n    - **webhooks.matchConditions.expression** (string), required\n\n      Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables:\n      \n      'object' - The object from the incoming request. The value is null for DELETE requests. 'oldObject' - The existing object. The value is null for CREATE requests. 'request' - Attributes of the admission request(\/pkg\/apis\/admission\/types.go#AdmissionRequest). 'authorizer' - A CEL Authorizer. 
May be used to perform authorization checks for the principal (user or service account) of the request.\n        See https:\/\/pkg.go.dev\/k8s.io\/apiserver\/pkg\/cel\/library#Authz\n      'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the\n        request resource.\n      Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/\n      \n      Required.\n\n    - **webhooks.matchConditions.name** (string), required\n\n      Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName',  or 'my.name',  or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '\/' (e.g. 'example.com\/MyName')\n      \n      Required.\n\n  - **webhooks.matchPolicy** (string)\n\n    matchPolicy defines how the \"rules\" list is used to match incoming requests. Allowed values are \"Exact\" or \"Equivalent\".\n    \n    - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, but \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would not be sent to the webhook.\n    \n    - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, and \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would be converted to apps\/v1 and sent to the webhook.\n    \n    Defaults to \"Equivalent\"\n\n  - **webhooks.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n    NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the webhook.\n    \n    For example, to run the webhook on any objects whose namespace is not associated with \"runlevel\" of \"0\" or \"1\";  you will set the selector as follows: \"namespaceSelector\": {\n      \"matchExpressions\": [\n        {\n          \"key\": \"runlevel\",\n          \"operator\": \"NotIn\",\n          \"values\": [\n            \"0\",\n            \"1\"\n          ]\n        }\n      ]\n    }\n    \n    If instead you want to only run the webhook on any objects whose namespace is associated with the \"environment\" of \"prod\" or \"staging\"; you will set the selector as follows: \"namespaceSelector\": {\n      \"matchExpressions\": [\n        {\n          \"key\": \"environment\",\n          \"operator\": \"In\",\n          \"values\": [\n            \"prod\",\n            \"staging\"\n          ]\n        }\n      ]\n    }\n    \n    See https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels for more examples of label selectors.\n    \n    Default to the empty LabelSelector, which matches everything.\n\n  - **webhooks.objectSelector** (<a href=\"\">LabelSelector<\/a>)\n\n    ObjectSelector decides whether to run the webhook based on if the object has matching labels. 
objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything.\n\n  - **webhooks.rules** ([]RuleWithOperations)\n\n    *Atomic: will be replaced during a merge*\n    \n    Rules describes what operations on what resources\/subresources the webhook cares about. The webhook cares about an operation if it matches _any_ Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects.\n\n    <a name=\"RuleWithOperations\"><\/a>\n    *RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid.*\n\n    - **webhooks.rules.apiGroups** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.\n\n    - **webhooks.rules.apiVersions** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. 
Required.\n\n    - **webhooks.rules.operations** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.\n\n    - **webhooks.rules.resources** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      Resources is a list of resources this rule applies to.\n      \n      For example: 'pods' means pods. 'pods\/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods\/*' means all subresources of pods. '*\/scale' means all scale subresources. '*\/*' means all resources and their subresources.\n      \n      If a wildcard is present, the validation rule will ensure resources do not overlap with each other.\n      \n      Depending on the enclosing object, subresources might not be allowed. Required.\n\n    - **webhooks.rules.scope** (string)\n\n      scope specifies the scope of this rule. Valid values are \"Cluster\", \"Namespaced\", and \"*\". \"Cluster\" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. \"Namespaced\" means that only namespaced resources will match this rule. \"*\" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is \"*\".\n\n  - **webhooks.timeoutSeconds** (int32)\n\n    TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. 
Default to 10 seconds.\n\n\n\n\n\n## ValidatingWebhookConfigurationList {#ValidatingWebhookConfigurationList}\n\nValidatingWebhookConfigurationList is a list of ValidatingWebhookConfiguration.\n\n<hr>\n\n- **items** ([]<a href=\"\">ValidatingWebhookConfiguration<\/a>), required\n\n  List of ValidatingWebhookConfiguration.\n\n- **apiVersion** (string)\n\n  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources\n\n- **kind** (string)\n\n  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ValidatingWebhookConfiguration\n\n#### HTTP Request\n\nGET \/apis\/admissionregistration.k8s.io\/v1\/validatingwebhookconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ValidatingWebhookConfiguration\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingWebhookConfiguration<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ValidatingWebhookConfiguration\n\n#### HTTP Request\n\nGET \/apis\/admissionregistration.k8s.io\/v1\/validatingwebhookconfigurations\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingWebhookConfigurationList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ValidatingWebhookConfiguration\n\n#### HTTP Request\n\nPOST 
\/apis\/admissionregistration.k8s.io\/v1\/validatingwebhookconfigurations\n\n#### Parameters\n\n\n- **body**: <a href=\"\">ValidatingWebhookConfiguration<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingWebhookConfiguration<\/a>): OK\n\n201 (<a href=\"\">ValidatingWebhookConfiguration<\/a>): Created\n\n202 (<a href=\"\">ValidatingWebhookConfiguration<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ValidatingWebhookConfiguration\n\n#### HTTP Request\n\nPUT \/apis\/admissionregistration.k8s.io\/v1\/validatingwebhookconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ValidatingWebhookConfiguration\n\n\n- **body**: <a href=\"\">ValidatingWebhookConfiguration<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingWebhookConfiguration<\/a>): OK\n\n201 (<a href=\"\">ValidatingWebhookConfiguration<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ValidatingWebhookConfiguration\n\n#### HTTP Request\n\nPATCH \/apis\/admissionregistration.k8s.io\/v1\/validatingwebhookconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ValidatingWebhookConfiguration\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a 
href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingWebhookConfiguration<\/a>): OK\n\n201 (<a href=\"\">ValidatingWebhookConfiguration<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ValidatingWebhookConfiguration\n\n#### HTTP Request\n\nDELETE \/apis\/admissionregistration.k8s.io\/v1\/validatingwebhookconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ValidatingWebhookConfiguration\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ValidatingWebhookConfiguration\n\n#### HTTP Request\n\nDELETE \/apis\/admissionregistration.k8s.io\/v1\/validatingwebhookconfigurations\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a 
href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   admissionregistration k8s io v1    import   k8s io api admissionregistration v1    kind   ValidatingWebhookConfiguration  content type   api reference  description   ValidatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and object without changing it   title   ValidatingWebhookConfiguration  weight  4 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  admissionregistration k8s io v1    import  k8s io api admissionregistration v1        ValidatingWebhookConfiguration   ValidatingWebhookConfiguration   ValidatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and object without changing it    hr       apiVersion    
admissionregistration k8s io v1       kind    ValidatingWebhookConfiguration       metadata     a href    ObjectMeta  a      Standard object metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata       webhooks      ValidatingWebhook      Patch strategy  merge on key  name         Map  unique values on key name will be kept during a merge       Webhooks is a list of webhooks and the affected resources and operations      a name  ValidatingWebhook    a     ValidatingWebhook describes an admission webhook and the resources and operations it applies to          webhooks admissionReviewVersions      string   required       Atomic  will be replaced during a merge           AdmissionReviewVersions is an ordered list of preferred  AdmissionReview  versions the Webhook expects  API server will try to use first version in the list which it supports  If none of the versions specified in this list supported by API server  validation will fail for this object  If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server  calls to the webhook will fail and be subject to the failure policy         webhooks clientConfig    WebhookClientConfig   required      ClientConfig defines how to communicate with the hook  Required       a name  WebhookClientConfig    a       WebhookClientConfig contains the information to make a TLS connection with the webhook           webhooks clientConfig caBundle      byte          caBundle  is a PEM encoded CA bundle which will be used to validate the webhook s server certificate  If unspecified  system trust roots on the apiserver are used           webhooks clientConfig service    ServiceReference          service  is a reference to the service for this webhook  Either  service  or  url  must be specified               If the webhook is running within the cluster  then you should use  service           a name  ServiceReference    a 
        ServiceReference holds a reference to Service legacy k8s io             webhooks clientConfig service name    string   required           name  is the name of the service  Required            webhooks clientConfig service namespace    string   required           namespace  is the namespace of the service  Required            webhooks clientConfig service path    string            path  is an optional URL path which will be sent in any request to this service             webhooks clientConfig service port    int32           If specified  the port on the service that hosting webhook  Default to 443 for backward compatibility   port  should be a valid port number  1 65535  inclusive            webhooks clientConfig url    string          url  gives the location of the webhook  in standard URL form   scheme   host port path    Exactly one of  url  or  service  must be specified               The  host  should not refer to a service running in the cluster  use the  service  field instead  The host might be resolved via external DNS in some apiservers  e g    kube apiserver  cannot resolve in cluster DNS as that would be a layering violation    host  may also be an IP address               Please note that using  localhost  or  127 0 0 1  as a  host  is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook  Such installs are likely to be non portable  i e   not easy to turn up in a new cluster               The scheme must be  https   the URL must begin with  https                   A path is optional  and if present may be any string permissible in a URL  You may use the path to pass an arbitrary string to the webhook  for example  a cluster identifier               Attempting to use a user or basic auth e g   user password   is not allowed  Fragments          and query parameters          are not allowed  either         webhooks name    string   required      The name of the 
admission webhook  Name should be fully qualified  e g   imagepolicy kubernetes io  where  imagepolicy  is the name of the webhook  and kubernetes io is the name of the organization  Required         webhooks sideEffects    string   required      SideEffects states whether this webhook has side effects  Acceptable values are  None  NoneOnDryRun  webhooks created via v1beta1 may also specify Some or Unknown   Webhooks with side effects MUST implement a reconciliation system  since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone  Requests with the dryRun attribute will be auto rejected if they match a webhook with sideEffects    Unknown or Some         webhooks failurePolicy    string       FailurePolicy defines how unrecognized errors from the admission endpoint are handled   allowed values are Ignore or Fail  Defaults to Fail         webhooks matchConditions      MatchCondition        Patch strategy  merge on key  name             Map  unique values on key name will be kept during a merge           MatchConditions is a list of conditions that must be met for a request to be sent to this webhook  Match conditions filter requests that have already been matched by the rules  namespaceSelector  and objectSelector  An empty list of matchConditions matches all requests  There are a maximum of 64 match conditions allowed           The exact matching logic is  in order         1  If ANY matchCondition evaluates to FALSE  the webhook is skipped        2  If ALL matchConditions evaluate to TRUE  the webhook is called        3  If any matchCondition evaluates to an error  but none are FALSE              If failurePolicy Fail  reject the request            If failurePolicy Ignore  the error is ignored and the webhook is skipped       a name  MatchCondition    a       MatchCondition represents a condition which must by fulfilled for a request to be sent to a webhook            webhooks matchConditions expression 
   string   required        Expression represents the expression which will be evaluated by CEL  Must evaluate to bool  CEL expressions have access to the contents of the AdmissionRequest and Authorizer  organized into CEL variables                object    The object from the incoming request  The value is null for DELETE requests   oldObject    The existing object  The value is null for CREATE requests   request    Attributes of the admission request  pkg apis admission types go AdmissionRequest    authorizer    A CEL Authorizer  May be used to perform authorization checks for the principal  user or service account  of the request          See https   pkg go dev k8s io apiserver pkg cel library Authz        authorizer requestResource    A CEL ResourceCheck constructed from the  authorizer  and configured with the         request resource        Documentation on CEL  https   kubernetes io docs reference using api cel               Required           webhooks matchConditions name    string   required        Name is an identifier for this match condition  used for strategic merging of MatchConditions  as well as providing an identifier for logging purposes  A good name should be descriptive of the associated expression  Name must be a qualified name consisting of alphanumeric characters           or      and must start and end with an alphanumeric character  e g   MyName    or  my name    or  123 abc   regex used for validation is    A Za z0 9   A Za z0 9       A Za z0 9    with an optional DNS subdomain prefix and      e g   example com MyName                Required         webhooks matchPolicy    string       matchPolicy defines how the  rules  list is used to match incoming requests  Allowed values are  Exact  or  Equivalent              Exact  match a request only if it exactly matches a specified rule  For example  if deployments can be modified via apps v1  apps v1beta1  and extensions v1beta1  but  rules  only included  apiGroups   apps    apiVersions   v1   
 resources    deployments     a request to apps v1beta1 or extensions v1beta1 would not be sent to the webhook             Equivalent  match a request if modifies a resource listed in rules  even via another API group or version  For example  if deployments can be modified via apps v1  apps v1beta1  and extensions v1beta1  and  rules  only included  apiGroups   apps    apiVersions   v1    resources    deployments     a request to apps v1beta1 or extensions v1beta1 would be converted to apps v1 and sent to the webhook           Defaults to  Equivalent         webhooks namespaceSelector     a href    LabelSelector  a        NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector  If the object itself is a namespace  the matching is performed on object metadata labels  If the object is another cluster scoped resource  it never skips the webhook           For example  to run the webhook on any objects whose namespace is not associated with  runlevel  of  0  or  1    you will set the selector as follows   namespaceSelector            matchExpressions                          key    runlevel              operator    NotIn              values                  0                1                                               If instead you want to only run the webhook on any objects whose namespace is associated with the  environment  of  prod  or  staging   you will set the selector as follows   namespaceSelector            matchExpressions                          key    environment              operator    In              values                  prod                staging                                               See https   kubernetes io docs concepts overview working with objects labels for more examples of label selectors           Default to the empty LabelSelector  which matches everything         webhooks objectSelector     a href    LabelSelector  a        ObjectSelector decides 
whether to run the webhook based on if the object has matching labels  objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook  and is considered to match if either object matches the selector  A null object  oldObject in the case of create  or newObject in the case of delete  or an object that cannot have labels  like a DeploymentRollback or a PodProxyOptions object  is not considered to match  Use the object selector only if the webhook is opt in  because end users may skip the admission webhook by setting the labels  Default to the empty LabelSelector  which matches everything         webhooks rules      RuleWithOperations        Atomic  will be replaced during a merge           Rules describes what operations on what resources subresources the webhook cares about  The webhook cares about an operation if it matches  any  Rule  However  in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin  ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects        a name  RuleWithOperations    a       RuleWithOperations is a tuple of Operations and Resources  It is recommended to make sure that all the tuple expansions are valid            webhooks rules apiGroups      string          Atomic  will be replaced during a merge               APIGroups is the API groups the resources belong to      is all groups  If     is present  the length of the slice must be one  Required           webhooks rules apiVersions      string          Atomic  will be replaced during a merge               APIVersions is the API versions the resources belong to      is all versions  If     is present  the length of the slice must be one  Required           webhooks rules operations      string          Atomic  
will be replaced during a merge               Operations is the operations the admission hook cares about   CREATE  UPDATE  DELETE  CONNECT or   for all of those operations and any future admission operations that are added  If     is present  the length of the slice must be one  Required           webhooks rules resources      string          Atomic  will be replaced during a merge               Resources is a list of resources this rule applies to               For example   pods  means pods   pods log  means the log subresource of pods      means all resources  but not subresources   pods    means all subresources of pods     scale  means all scale subresources        means all resources and their subresources               If wildcard is present  the validation rule will ensure resources do not overlap with each other               Depending on the enclosing object  subresources might not be allowed  Required           webhooks rules scope    string         scope specifies the scope of this rule  Valid values are  Cluster    Namespaced   and      Cluster  means that only cluster scoped resources will match this rule  Namespace API objects are cluster scoped   Namespaced  means that only namespaced resources will match this rule      means that there are no scope restrictions  Subresources match the scope of their parent resource  Default is             webhooks timeoutSeconds    int32       TimeoutSeconds specifies the timeout for this webhook  After the timeout passes  the webhook call will be ignored or the API call will fail based on the failure policy  The timeout value must be between 1 and 30 seconds  Default to 10 seconds          ValidatingWebhookConfigurationList   ValidatingWebhookConfigurationList   ValidatingWebhookConfigurationList is a list of ValidatingWebhookConfiguration    hr       items       a href    ValidatingWebhookConfiguration  a    required    List of ValidatingWebhookConfiguration       apiVersion    string     APIVersion defines the 
versioned schema of this representation of an object  Servers should convert recognized schemas to the latest internal value  and may reject unrecognized values  More info  https   git k8s io community contributors devel sig architecture api conventions md resources      kind    string     Kind is a string value representing the REST resource this object represents  Servers may infer this from the endpoint the client submits requests to  Cannot be updated  In CamelCase  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds         Operations   Operations      hr             get  read the specified ValidatingWebhookConfiguration       HTTP Request  GET  apis admissionregistration k8s io v1 validatingwebhookconfigurations  name        Parameters       name     in path    string  required    name of the ValidatingWebhookConfiguration       pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingWebhookConfiguration  a    OK  401  Unauthorized        list  list or watch objects of kind ValidatingWebhookConfiguration       HTTP Request  GET  apis admissionregistration k8s io v1 validatingwebhookconfigurations       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch 
 a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ValidatingWebhookConfigurationList  a    OK  401  Unauthorized        create  create a ValidatingWebhookConfiguration       HTTP Request  POST  apis admissionregistration k8s io v1 validatingwebhookconfigurations       Parameters       body     a href    ValidatingWebhookConfiguration  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingWebhookConfiguration  a    OK  201   a href    ValidatingWebhookConfiguration  a    Created  202   a href    ValidatingWebhookConfiguration  a    Accepted  401  Unauthorized        update  replace the specified ValidatingWebhookConfiguration       HTTP Request  PUT  apis admissionregistration k8s io v1 validatingwebhookconfigurations  name        Parameters       name     in path    string  required    name of the ValidatingWebhookConfiguration       body     a href    ValidatingWebhookConfiguration  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingWebhookConfiguration  a    OK  201   a href    ValidatingWebhookConfiguration  a    Created  401  Unauthorized        patch  partially update the specified ValidatingWebhookConfiguration       HTTP Request  PATCH  apis admissionregistration k8s io v1 
validatingwebhookconfigurations  name        Parameters       name     in path    string  required    name of the ValidatingWebhookConfiguration       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingWebhookConfiguration  a    OK  201   a href    ValidatingWebhookConfiguration  a    Created  401  Unauthorized        delete  delete a ValidatingWebhookConfiguration       HTTP Request  DELETE  apis admissionregistration k8s io v1 validatingwebhookconfigurations  name        Parameters       name     in path    string  required    name of the ValidatingWebhookConfiguration       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of ValidatingWebhookConfiguration       HTTP Request  DELETE  apis admissionregistration k8s io v1 validatingwebhookconfigurations       Parameters       body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a      
  limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
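The `deletecollection` operation above is an ordinary HTTP DELETE against the collection path, with the filters (`labelSelector`, `dryRun`, `gracePeriodSeconds`, ...) carried as query parameters. A minimal sketch of assembling such a request path, assuming illustrative selector and grace-period values not taken from the reference (a real client such as client-go or kubectl additionally handles auth and TLS):

```python
from urllib.parse import urlencode

def collection_url(group: str, version: str, resource: str, **params) -> str:
    """Build the request path for a collection-level call against the
    Kubernetes API, e.g. DELETE /apis/{group}/{version}/{resource}."""
    path = f"/apis/{group}/{version}/{resource}"
    # Drop unset parameters rather than sending empty values.
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{path}?{query}" if query else path

url = collection_url(
    "admissionregistration.k8s.io", "v1", "validatingwebhookconfigurations",
    labelSelector="env=dev",   # illustrative selector
    gracePeriodSeconds=0,
    dryRun="All",
)
# url == "/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?labelSelector=env%3Ddev&gracePeriodSeconds=0&dryRun=All"
```

Note that the `=` inside the label selector is percent-encoded; the API server decodes it before evaluating the selector.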
{"questions":"kubernetes reference MutatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and may change the object kind MutatingWebhookConfiguration title MutatingWebhookConfiguration contenttype apireference apiVersion admissionregistration k8s io v1 apimetadata weight 3 autogenerated true import k8s io api admissionregistration v1","answers":"---\napi_metadata:\n  apiVersion: \"admissionregistration.k8s.io\/v1\"\n  import: \"k8s.io\/api\/admissionregistration\/v1\"\n  kind: \"MutatingWebhookConfiguration\"\ncontent_type: \"api_reference\"\ndescription: \"MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects, and may change, the object.\"\ntitle: \"MutatingWebhookConfiguration\"\nweight: 3\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. 
You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: admissionregistration.k8s.io\/v1`\n\n`import \"k8s.io\/api\/admissionregistration\/v1\"`\n\n\n## MutatingWebhookConfiguration {#MutatingWebhookConfiguration}\n\nMutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects, and may change, the object.\n\n<hr>\n\n- **apiVersion**: admissionregistration.k8s.io\/v1\n\n\n- **kind**: MutatingWebhookConfiguration\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata; More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata.\n\n- **webhooks** ([]MutatingWebhook)\n\n  *Patch strategy: merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  Webhooks is a list of webhooks and the affected resources and operations.\n\n  <a name=\"MutatingWebhook\"><\/a>\n  *MutatingWebhook describes an admission webhook and the resources and operations it applies to.*\n\n  - **webhooks.admissionReviewVersions** ([]string), required\n\n    *Atomic: will be replaced during a merge*\n    \n    AdmissionReviewVersions is an ordered list of preferred `AdmissionReview` versions the Webhook expects. The API server will try to use the first version in the list which it supports. If none of the versions specified in this list is supported by the API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy.\n\n  - **webhooks.clientConfig** (WebhookClientConfig), required\n\n    ClientConfig defines how to communicate with the hook. 
Required\n\n    <a name=\"WebhookClientConfig\"><\/a>\n    *WebhookClientConfig contains the information to make a TLS connection with the webhook*\n\n    - **webhooks.clientConfig.caBundle** ([]byte)\n\n      `caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used.\n\n    - **webhooks.clientConfig.service** (ServiceReference)\n\n      `service` is a reference to the service for this webhook. Either `service` or `url` must be specified.\n      \n      If the webhook is running within the cluster, then you should use `service`.\n\n      <a name=\"ServiceReference\"><\/a>\n      *ServiceReference holds a reference to Service.legacy.k8s.io*\n\n      - **webhooks.clientConfig.service.name** (string), required\n\n        `name` is the name of the service. Required\n\n      - **webhooks.clientConfig.service.namespace** (string), required\n\n        `namespace` is the namespace of the service. Required\n\n      - **webhooks.clientConfig.service.path** (string)\n\n        `path` is an optional URL path which will be sent in any request to this service.\n\n      - **webhooks.clientConfig.service.port** (int32)\n\n        If specified, the port on the service that is hosting the webhook. Defaults to 443 for backward compatibility. `port` should be a valid port number (1-65535, inclusive).\n\n    - **webhooks.clientConfig.url** (string)\n\n      `url` gives the location of the webhook, in standard URL form (`scheme:\/\/host:port\/path`). Exactly one of `url` or `service` must be specified.\n      \n      The `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). 
`host` may also be an IP address.\n      \n      Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster.\n      \n      The scheme must be \"https\"; the URL must begin with \"https:\/\/\".\n      \n      A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.\n      \n      Attempting to use a user or basic auth e.g. \"user:password@\" is not allowed. Fragments (\"#...\") and query parameters (\"?...\") are not allowed, either.\n\n  - **webhooks.name** (string), required\n\n    The name of the admission webhook. Name should be fully qualified, e.g., imagepolicy.kubernetes.io, where \"imagepolicy\" is the name of the webhook, and kubernetes.io is the name of the organization. Required.\n\n  - **webhooks.sideEffects** (string), required\n\n    SideEffects states whether this webhook has side effects. Acceptable values are: None, NoneOnDryRun (webhooks created via v1beta1 may also specify Some or Unknown). Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some.\n\n  - **webhooks.failurePolicy** (string)\n\n    FailurePolicy defines how unrecognized errors from the admission endpoint are handled - allowed values are Ignore or Fail. 
Defaults to Fail.\n\n  - **webhooks.matchConditions** ([]MatchCondition)\n\n    *Patch strategy: merge on key `name`*\n    \n    *Map: unique values on key name will be kept during a merge*\n    \n    MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. A maximum of 64 match conditions is allowed.\n    \n    The exact matching logic is (in order):\n      1. If ANY matchCondition evaluates to FALSE, the webhook is skipped.\n      2. If ALL matchConditions evaluate to TRUE, the webhook is called.\n      3. If any matchCondition evaluates to an error (but none are FALSE):\n         - If failurePolicy=Fail, reject the request\n         - If failurePolicy=Ignore, the error is ignored and the webhook is skipped\n\n    <a name=\"MatchCondition\"><\/a>\n    *MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook.*\n\n    - **webhooks.matchConditions.expression** (string), required\n\n      Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables:\n      \n      'object' - The object from the incoming request. The value is null for DELETE requests. 'oldObject' - The existing object. The value is null for CREATE requests. 'request' - Attributes of the admission request(\/pkg\/apis\/admission\/types.go#AdmissionRequest). 'authorizer' - A CEL Authorizer. 
May be used to perform authorization checks for the principal (user or service account) of the request.\n        See https:\/\/pkg.go.dev\/k8s.io\/apiserver\/pkg\/cel\/library#Authz\n      'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the\n        request resource.\n      Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/\n      \n      Required.\n\n    - **webhooks.matchConditions.name** (string), required\n\n      Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', 'my.name', or '123-abc'; the regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '\/' (e.g. 'example.com\/MyName').\n      \n      Required.\n\n  - **webhooks.matchPolicy** (string)\n\n    matchPolicy defines how the \"rules\" list is used to match incoming requests. Allowed values are \"Exact\" or \"Equivalent\".\n    \n    - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, but \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would not be sent to the webhook.\n    \n    - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, and \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would be converted to apps\/v1 and sent to the webhook.\n    \n    Defaults to \"Equivalent\".\n\n  - **webhooks.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n    NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster-scoped resource, it never skips the webhook.\n    \n    For example, to run the webhook on any objects whose namespace is not associated with \"runlevel\" of \"0\" or \"1\", you would set the selector as follows: \"namespaceSelector\": {\n      \"matchExpressions\": [\n        {\n          \"key\": \"runlevel\",\n          \"operator\": \"NotIn\",\n          \"values\": [\n            \"0\",\n            \"1\"\n          ]\n        }\n      ]\n    }\n    \n    If instead you want to only run the webhook on any objects whose namespace is associated with the \"environment\" of \"prod\" or \"staging\", you would set the selector as follows: \"namespaceSelector\": {\n      \"matchExpressions\": [\n        {\n          \"key\": \"environment\",\n          \"operator\": \"In\",\n          \"values\": [\n            \"prod\",\n            \"staging\"\n          ]\n        }\n      ]\n    }\n    \n    See https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/ for more examples of label selectors.\n    \n    Defaults to the empty LabelSelector, which matches everything.\n\n  - **webhooks.objectSelector** (<a href=\"\">LabelSelector<\/a>)\n\n    ObjectSelector decides whether to run the webhook based on whether the object has matching labels. 
objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Defaults to the empty LabelSelector, which matches everything.\n\n  - **webhooks.reinvocationPolicy** (string)\n\n    reinvocationPolicy indicates whether this webhook should be called multiple times as part of a single admission evaluation. Allowed values are \"Never\" and \"IfNeeded\".\n    \n    Never: the webhook will not be called more than once in a single admission evaluation.\n    \n    IfNeeded: the webhook will be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. Webhooks that specify this option *must* be idempotent, able to process objects they previously admitted. Note:\n    * the number of additional invocations is not guaranteed to be exactly one.\n    * if additional invocations result in further modifications to the object, webhooks are not guaranteed to be invoked again.\n    * webhooks that use this option may be reordered to minimize the number of additional invocations.\n    * to validate an object after all mutations are guaranteed complete, use a validating admission webhook instead.\n    \n    Defaults to \"Never\".\n\n  - **webhooks.rules** ([]RuleWithOperations)\n\n    *Atomic: will be replaced during a merge*\n    \n    Rules describes what operations on what resources\/subresources the webhook cares about. The webhook cares about an operation if it matches _any_ Rule. 
However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects.\n\n    <a name=\"RuleWithOperations\"><\/a>\n    *RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid.*\n\n    - **webhooks.rules.apiGroups** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.\n\n    - **webhooks.rules.apiVersions** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.\n\n    - **webhooks.rules.operations** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.\n\n    - **webhooks.rules.resources** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      Resources is a list of resources this rule applies to.\n      \n      For example: 'pods' means pods. 'pods\/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods\/*' means all subresources of pods. '*\/scale' means all scale subresources. 
'*\/*' means all resources and their subresources.\n      \n      If a wildcard is present, the validation rule will ensure resources do not overlap with each other.\n      \n      Depending on the enclosing object, subresources might not be allowed. Required.\n\n    - **webhooks.rules.scope** (string)\n\n      scope specifies the scope of this rule. Valid values are \"Cluster\", \"Namespaced\", and \"*\". \"Cluster\" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. \"Namespaced\" means that only namespaced resources will match this rule. \"*\" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is \"*\".\n\n  - **webhooks.timeoutSeconds** (int32)\n\n    TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Defaults to 10 seconds.\n\n\n\n\n\n## MutatingWebhookConfigurationList {#MutatingWebhookConfigurationList}\n\nMutatingWebhookConfigurationList is a list of MutatingWebhookConfiguration.\n\n<hr>\n\n- **apiVersion**: admissionregistration.k8s.io\/v1\n\n\n- **kind**: MutatingWebhookConfigurationList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">MutatingWebhookConfiguration<\/a>), required\n\n  List of MutatingWebhookConfiguration.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified MutatingWebhookConfiguration\n\n#### HTTP Request\n\nGET \/apis\/admissionregistration.k8s.io\/v1\/mutatingwebhookconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the MutatingWebhookConfiguration\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">MutatingWebhookConfiguration<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind MutatingWebhookConfiguration\n\n#### HTTP Request\n\nGET \/apis\/admissionregistration.k8s.io\/v1\/mutatingwebhookconfigurations\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">MutatingWebhookConfigurationList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a 
MutatingWebhookConfiguration\n\n#### HTTP Request\n\nPOST \/apis\/admissionregistration.k8s.io\/v1\/mutatingwebhookconfigurations\n\n#### Parameters\n\n\n- **body**: <a href=\"\">MutatingWebhookConfiguration<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">MutatingWebhookConfiguration<\/a>): OK\n\n201 (<a href=\"\">MutatingWebhookConfiguration<\/a>): Created\n\n202 (<a href=\"\">MutatingWebhookConfiguration<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified MutatingWebhookConfiguration\n\n#### HTTP Request\n\nPUT \/apis\/admissionregistration.k8s.io\/v1\/mutatingwebhookconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the MutatingWebhookConfiguration\n\n\n- **body**: <a href=\"\">MutatingWebhookConfiguration<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">MutatingWebhookConfiguration<\/a>): OK\n\n201 (<a href=\"\">MutatingWebhookConfiguration<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified MutatingWebhookConfiguration\n\n#### HTTP Request\n\nPATCH \/apis\/admissionregistration.k8s.io\/v1\/mutatingwebhookconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the MutatingWebhookConfiguration\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): 
string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">MutatingWebhookConfiguration<\/a>): OK\n\n201 (<a href=\"\">MutatingWebhookConfiguration<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a MutatingWebhookConfiguration\n\n#### HTTP Request\n\nDELETE \/apis\/admissionregistration.k8s.io\/v1\/mutatingwebhookconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the MutatingWebhookConfiguration\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of MutatingWebhookConfiguration\n\n#### HTTP Request\n\nDELETE \/apis\/admissionregistration.k8s.io\/v1\/mutatingwebhookconfigurations\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a 
href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   admissionregistration k8s io v1    import   k8s io api admissionregistration v1    kind   MutatingWebhookConfiguration  content type   api reference  description   MutatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and may change the object   title   MutatingWebhookConfiguration  weight  3 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  admissionregistration k8s io v1    import  k8s io api admissionregistration v1        MutatingWebhookConfiguration   MutatingWebhookConfiguration   MutatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and may change the object    hr       apiVersion    admissionregistration k8s 
and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin  ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects        a name  RuleWithOperations    a       RuleWithOperations is a tuple of Operations and Resources  It is recommended to make sure that all the tuple expansions are valid            webhooks rules apiGroups      string          Atomic  will be replaced during a merge               APIGroups is the API groups the resources belong to      is all groups  If     is present  the length of the slice must be one  Required           webhooks rules apiVersions      string          Atomic  will be replaced during a merge               APIVersions is the API versions the resources belong to      is all versions  If     is present  the length of the slice must be one  Required           webhooks rules operations      string          Atomic  will be replaced during a merge               Operations is the operations the admission hook cares about   CREATE  UPDATE  DELETE  CONNECT or   for all of those operations and any future admission operations that are added  If     is present  the length of the slice must be one  Required           webhooks rules resources      string          Atomic  will be replaced during a merge               Resources is a list of resources this rule applies to               For example   pods  means pods   pods log  means the log subresource of pods      means all resources  but not subresources   pods    means all subresources of pods     scale  means all scale subresources        means all resources and their subresources               If wildcard is present  the validation rule will ensure resources do not overlap with each other               Depending on the enclosing object  subresources might not be allowed  Required           webhooks 
rules scope    string         scope specifies the scope of this rule  Valid values are  Cluster    Namespaced   and      Cluster  means that only cluster scoped resources will match this rule  Namespace API objects are cluster scoped   Namespaced  means that only namespaced resources will match this rule      means that there are no scope restrictions  Subresources match the scope of their parent resource  Default is             webhooks timeoutSeconds    int32       TimeoutSeconds specifies the timeout for this webhook  After the timeout passes  the webhook call will be ignored or the API call will fail based on the failure policy  The timeout value must be between 1 and 30 seconds  Default to 10 seconds          MutatingWebhookConfigurationList   MutatingWebhookConfigurationList   MutatingWebhookConfigurationList is a list of MutatingWebhookConfiguration    hr       apiVersion    admissionregistration k8s io v1       kind    MutatingWebhookConfigurationList       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      items       a href    MutatingWebhookConfiguration  a    required    List of MutatingWebhookConfiguration          Operations   Operations      hr             get  read the specified MutatingWebhookConfiguration       HTTP Request  GET  apis admissionregistration k8s io v1 mutatingwebhookconfigurations  name        Parameters       name     in path    string  required    name of the MutatingWebhookConfiguration       pretty     in query    string     a href    pretty  a          Response   200   a href    MutatingWebhookConfiguration  a    OK  401  Unauthorized        list  list or watch objects of kind MutatingWebhookConfiguration       HTTP Request  GET  apis admissionregistration k8s io v1 mutatingwebhookconfigurations       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        
continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    MutatingWebhookConfigurationList  a    OK  401  Unauthorized        create  create a MutatingWebhookConfiguration       HTTP Request  POST  apis admissionregistration k8s io v1 mutatingwebhookconfigurations       Parameters       body     a href    MutatingWebhookConfiguration  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    MutatingWebhookConfiguration  a    OK  201   a href    MutatingWebhookConfiguration  a    Created  202   a href    MutatingWebhookConfiguration  a    Accepted  401  Unauthorized        update  replace the specified MutatingWebhookConfiguration       HTTP Request  PUT  apis admissionregistration k8s io v1 mutatingwebhookconfigurations  name        Parameters       name     in path    string  required    name of the MutatingWebhookConfiguration       body     a href    MutatingWebhookConfiguration  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    
fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    MutatingWebhookConfiguration  a    OK  201   a href    MutatingWebhookConfiguration  a    Created  401  Unauthorized        patch  partially update the specified MutatingWebhookConfiguration       HTTP Request  PATCH  apis admissionregistration k8s io v1 mutatingwebhookconfigurations  name        Parameters       name     in path    string  required    name of the MutatingWebhookConfiguration       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    MutatingWebhookConfiguration  a    OK  201   a href    MutatingWebhookConfiguration  a    Created  401  Unauthorized        delete  delete a MutatingWebhookConfiguration       HTTP Request  DELETE  apis admissionregistration k8s io v1 mutatingwebhookconfigurations  name        Parameters       name     in path    string  required    name of the MutatingWebhookConfiguration       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of MutatingWebhookConfiguration       HTTP Request  DELETE  apis admissionregistration k8s io v1 mutatingwebhookconfigurations  
     Parameters       body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
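The webhook fields described in the record above (rules, matchPolicy, namespaceSelector, matchConditions, reinvocationPolicy, timeoutSeconds) combine into a single manifest. The following is a minimal sketch only — the webhook name, service reference, and CA bundle are illustrative placeholders, not values from the source:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook        # illustrative name
webhooks:
  - name: sidecar-injector.example.com  # hypothetical webhook identifier
    clientConfig:
      service:
        name: webhook-service           # assumed Service serving the webhook
        namespace: webhook-system
        path: /mutate
      caBundle: <base64-encoded-CA>     # placeholder
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
        scope: "Namespaced"
    matchPolicy: Equivalent      # also matches equivalent apps/v1beta1, extensions/v1beta1 requests
    namespaceSelector:           # skip namespaces labeled runlevel 0 or 1
      matchExpressions:
        - key: runlevel
          operator: NotIn
          values: ["0", "1"]
    matchConditions:             # CEL pre-filter; requests failing it skip the webhook
      - name: exclude-kubelet-requests
        expression: '!("system:nodes" in request.userInfo.groups)'
    reinvocationPolicy: IfNeeded # webhook must then be idempotent
    timeoutSeconds: 10           # default; must be between 1 and 30
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
```

Note the interaction called out in the reference text: with `reinvocationPolicy: IfNeeded`, the webhook may be called again after other admission plugins modify the object, so it must be able to process objects it has already admitted.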
{"questions":"kubernetes reference autogenerated true import k8s io api certificates v1alpha1 weight 5 apiVersion certificates k8s io v1alpha1 contenttype apireference ClusterTrustBundle is a cluster scoped container for X apimetadata title ClusterTrustBundle v1alpha1 kind ClusterTrustBundle","answers":"---\napi_metadata:\n  apiVersion: \"certificates.k8s.io\/v1alpha1\"\n  import: \"k8s.io\/api\/certificates\/v1alpha1\"\n  kind: \"ClusterTrustBundle\"\ncontent_type: \"api_reference\"\ndescription: \"ClusterTrustBundle is a cluster-scoped container for X.\"\ntitle: \"ClusterTrustBundle v1alpha1\"\nweight: 5\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: certificates.k8s.io\/v1alpha1`\n\n`import \"k8s.io\/api\/certificates\/v1alpha1\"`\n\n\n## ClusterTrustBundle {#ClusterTrustBundle}\n\nClusterTrustBundle is a cluster-scoped container for X.509 trust anchors (root certificates).\n\nClusterTrustBundle objects are considered to be readable by any authenticated user in the cluster, because they can be mounted by pods using the `clusterTrustBundle` projection.  All service accounts have read access to ClusterTrustBundles by default.  
Users who only have namespace-level access to a cluster can read ClusterTrustBundles by impersonating a serviceaccount that they have access to.\n\nIt can be optionally associated with a particular assigner, in which case it contains one valid set of trust anchors for that signer. Signers may have multiple associated ClusterTrustBundles; each is an independent set of trust anchors for that signer. Admission control is used to enforce that only users with permissions on the signer can create or modify the corresponding bundle.\n\n<hr>\n\n- **apiVersion**: certificates.k8s.io\/v1alpha1\n\n\n- **kind**: ClusterTrustBundle\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  metadata contains the object metadata.\n\n- **spec** (<a href=\"\">ClusterTrustBundleSpec<\/a>), required\n\n  spec contains the signer (if any) and trust anchors.\n\n\n\n\n\n## ClusterTrustBundleSpec {#ClusterTrustBundleSpec}\n\nClusterTrustBundleSpec contains the signer and trust anchors.\n\n<hr>\n\n- **trustBundle** (string), required\n\n  trustBundle contains the individual X.509 trust anchors for this bundle, as PEM bundle of PEM-wrapped, DER-formatted X.509 certificates.\n  \n  The data must consist only of PEM certificate blocks that parse as valid X.509 certificates.  Each certificate must include a basic constraints extension with the CA bit set.  
The API server will reject objects that contain duplicate certificates, or that use PEM block headers.\n  \n  Users of ClusterTrustBundles, including Kubelet, are free to reorder and deduplicate certificate blocks in this file according to their own logic, as well as to drop PEM block headers and inter-block data.\n\n- **signerName** (string)\n\n  signerName indicates the associated signer, if any.\n  \n  In order to create or update a ClusterTrustBundle that sets signerName, you must have the following cluster-scoped permission: group=certificates.k8s.io resource=signers resourceName=\\<the signer name> verb=attest.\n  \n  If signerName is not empty, then the ClusterTrustBundle object must be named with the signer name as a prefix (translating slashes to colons). For example, for the signer name `example.com\/foo`, valid ClusterTrustBundle object names include `example.com:foo:abc` and `example.com:foo:v1`.\n  \n  If signerName is empty, then the ClusterTrustBundle object's name must not have such a prefix.\n  \n  List\/watch requests for ClusterTrustBundles can filter on this field using a `spec.signerName=NAME` field selector.\n\n\n\n\n\n## ClusterTrustBundleList {#ClusterTrustBundleList}\n\nClusterTrustBundleList is a collection of ClusterTrustBundle objects\n\n<hr>\n\n- **apiVersion**: certificates.k8s.io\/v1alpha1\n\n\n- **kind**: ClusterTrustBundleList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  metadata contains the list metadata.\n\n- **items** ([]<a href=\"\">ClusterTrustBundle<\/a>), required\n\n  items is a collection of ClusterTrustBundle objects\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ClusterTrustBundle\n\n#### HTTP Request\n\nGET \/apis\/certificates.k8s.io\/v1alpha1\/clustertrustbundles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterTrustBundle\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### 
Response\n\n\n200 (<a href=\"\">ClusterTrustBundle<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ClusterTrustBundle\n\n#### HTTP Request\n\nGET \/apis\/certificates.k8s.io\/v1alpha1\/clustertrustbundles\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterTrustBundleList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ClusterTrustBundle\n\n#### HTTP Request\n\nPOST \/apis\/certificates.k8s.io\/v1alpha1\/clustertrustbundles\n\n#### Parameters\n\n\n- **body**: <a href=\"\">ClusterTrustBundle<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterTrustBundle<\/a>): OK\n\n201 (<a href=\"\">ClusterTrustBundle<\/a>): Created\n\n202 (<a href=\"\">ClusterTrustBundle<\/a>): Accepted\n\n401: Unauthorized\n\n\n### 
`update` replace the specified ClusterTrustBundle\n\n#### HTTP Request\n\nPUT \/apis\/certificates.k8s.io\/v1alpha1\/clustertrustbundles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterTrustBundle\n\n\n- **body**: <a href=\"\">ClusterTrustBundle<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterTrustBundle<\/a>): OK\n\n201 (<a href=\"\">ClusterTrustBundle<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ClusterTrustBundle\n\n#### HTTP Request\n\nPATCH \/apis\/certificates.k8s.io\/v1alpha1\/clustertrustbundles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterTrustBundle\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ClusterTrustBundle<\/a>): OK\n\n201 (<a href=\"\">ClusterTrustBundle<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ClusterTrustBundle\n\n#### HTTP Request\n\nDELETE \/apis\/certificates.k8s.io\/v1alpha1\/clustertrustbundles\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ClusterTrustBundle\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- 
**gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ClusterTrustBundle\n\n#### HTTP Request\n\nDELETE \/apis\/certificates.k8s.io\/v1alpha1\/clustertrustbundles\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   certificates k8s io v1alpha1    import   k8s io api certificates v1alpha1    kind   ClusterTrustBundle  content type   api reference  description   ClusterTrustBundle is a cluster scoped container for X   title   ClusterTrustBundle 
v1alpha1  weight  5 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  certificates k8s io v1alpha1    import  k8s io api certificates v1alpha1        ClusterTrustBundle   ClusterTrustBundle   ClusterTrustBundle is a cluster scoped container for X 509 trust anchors  root certificates    ClusterTrustBundle objects are considered to be readable by any authenticated user in the cluster  because they can be mounted by pods using the  clusterTrustBundle  projection   All service accounts have read access to ClusterTrustBundles by default   Users who only have namespace level access to a cluster can read ClusterTrustBundles by impersonating a serviceaccount that they have access to   It can be optionally associated with a particular assigner  in which case it contains one valid set of trust anchors for that signer  Signers may have multiple associated ClusterTrustBundles  each is an independent set of trust anchors for that signer  Admission control is used to enforce that only users with permissions on the signer can create or modify the corresponding bundle    hr       apiVersion    certificates k8s io v1alpha1       kind    ClusterTrustBundle       metadata     a href    ObjectMeta  a      metadata contains the object metadata       spec     a href    ClusterTrustBundleSpec  a    required    spec contains the signer  if any  and trust anchors          ClusterTrustBundleSpec   ClusterTrustBundleSpec   
ClusterTrustBundleSpec contains the signer and trust anchors    hr       trustBundle    string   required    trustBundle contains the individual X 509 trust anchors for this bundle  as PEM bundle of PEM wrapped  DER formatted X 509 certificates       The data must consist only of PEM certificate blocks that parse as valid X 509 certificates   Each certificate must include a basic constraints extension with the CA bit set   The API server will reject objects that contain duplicate certificates  or that use PEM block headers       Users of ClusterTrustBundles  including Kubelet  are free to reorder and deduplicate certificate blocks in this file according to their own logic  as well as to drop PEM block headers and inter block data       signerName    string     signerName indicates the associated signer  if any       In order to create or update a ClusterTrustBundle that sets signerName  you must have the following cluster scoped permission  group certificates k8s io resource signers resourceName   the signer name  verb attest       If signerName is not empty  then the ClusterTrustBundle object must be named with the signer name as a prefix  translating slashes to colons   For example  for the signer name  example com foo   valid ClusterTrustBundle object names include  example com foo abc  and  example com foo v1        If signerName is empty  then the ClusterTrustBundle object s name must not have such a prefix       List watch requests for ClusterTrustBundles can filter on this field using a  spec signerName NAME  field selector          ClusterTrustBundleList   ClusterTrustBundleList   ClusterTrustBundleList is a collection of ClusterTrustBundle objects   hr       apiVersion    certificates k8s io v1alpha1       kind    ClusterTrustBundleList       metadata     a href    ListMeta  a      metadata contains the list metadata       items       a href    ClusterTrustBundle  a    required    items is a collection of ClusterTrustBundle objects         Operations   
Operations      hr             get  read the specified ClusterTrustBundle       HTTP Request  GET  apis certificates k8s io v1alpha1 clustertrustbundles  name        Parameters       name     in path    string  required    name of the ClusterTrustBundle       pretty     in query    string     a href    pretty  a          Response   200   a href    ClusterTrustBundle  a    OK  401  Unauthorized        list  list or watch objects of kind ClusterTrustBundle       HTTP Request  GET  apis certificates k8s io v1alpha1 clustertrustbundles       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ClusterTrustBundleList  a    OK  401  Unauthorized        create  create a ClusterTrustBundle       HTTP Request  POST  apis certificates k8s io v1alpha1 clustertrustbundles       Parameters       body     a href    ClusterTrustBundle  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ClusterTrustBundle  a    OK  201  
 a href    ClusterTrustBundle  a    Created  202   a href    ClusterTrustBundle  a    Accepted  401  Unauthorized        update  replace the specified ClusterTrustBundle       HTTP Request  PUT  apis certificates k8s io v1alpha1 clustertrustbundles  name        Parameters       name     in path    string  required    name of the ClusterTrustBundle       body     a href    ClusterTrustBundle  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ClusterTrustBundle  a    OK  201   a href    ClusterTrustBundle  a    Created  401  Unauthorized        patch  partially update the specified ClusterTrustBundle       HTTP Request  PATCH  apis certificates k8s io v1alpha1 clustertrustbundles  name        Parameters       name     in path    string  required    name of the ClusterTrustBundle       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ClusterTrustBundle  a    OK  201   a href    ClusterTrustBundle  a    Created  401  Unauthorized        delete  delete a ClusterTrustBundle       HTTP Request  DELETE  apis certificates k8s io v1alpha1 clustertrustbundles  name        Parameters       name     in path    string  required    name of the ClusterTrustBundle       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a   
     pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of ClusterTrustBundle       HTTP Request  DELETE  apis certificates k8s io v1alpha1 clustertrustbundles       Parameters       body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
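The ClusterTrustBundle record above lists per-object operations under `/apis/certificates.k8s.io/v1alpha1/clustertrustbundles/{name}` and collection operations under the bare collection path, each taking optional query parameters such as `dryRun` and `labelSelector`. A minimal sketch of how those request paths compose (the helper name and query handling are illustrative assumptions, not part of the generated reference):

```python
# Sketch: assemble REST paths for the ClusterTrustBundle endpoints listed
# above. Endpoint shapes come from the "HTTP Request" lines; the helper
# function itself is a hypothetical convenience, not a real client API.
from urllib.parse import urlencode

API_PREFIX = "/apis/certificates.k8s.io/v1alpha1"

def clustertrustbundle_path(name=None, **query):
    """Return the REST path for a ClusterTrustBundle request.

    With `name`, targets a single object (PUT/PATCH/DELETE);
    without it, targets the collection (deletecollection/list).
    """
    path = f"{API_PREFIX}/clustertrustbundles"
    if name is not None:
        path += f"/{name}"
    if query:
        path += "?" + urlencode(query)
    return path

# Replace a named bundle (PUT) with a server-side dry run:
print(clustertrustbundle_path("example-bundle", dryRun="All"))
# Delete the collection filtered by label (DELETE):
print(clustertrustbundle_path(labelSelector="app=demo"))
```

The same path-plus-query pattern applies to every operation in the record; only the HTTP verb and body type (`ClusterTrustBundle`, `Patch`, `DeleteOptions`) change.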
{"questions":"kubernetes reference ServiceAccount binds together a name understood by users and perhaps by peripheral systems for an identity a principal that can be authenticated and authorized a set of secrets apiVersion v1 kind ServiceAccount contenttype apireference apimetadata title ServiceAccount autogenerated true weight 1 import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"ServiceAccount\"\ncontent_type: \"api_reference\"\ndescription: \"ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets.\"\ntitle: \"ServiceAccount\"\nweight: 1\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## ServiceAccount {#ServiceAccount}\n\nServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: ServiceAccount\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **automountServiceAccountToken** (boolean)\n\n  AutomountServiceAccountToken indicates whether pods running as this service account should have an API token automatically mounted. Can be overridden at the pod level.\n\n- **imagePullSecrets** ([]<a href=\"\">LocalObjectReference<\/a>)\n\n  *Atomic: will be replaced during a merge*\n  \n  ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\/#specifying-imagepullsecrets-on-a-pod\n\n- **secrets** ([]<a href=\"\">ObjectReference<\/a>)\n\n  *Patch strategy: merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  Secrets is a list of the secrets in the same namespace that pods running using this ServiceAccount are allowed to use. Pods are only limited to this list if this service account has a \"kubernetes.io\/enforce-mountable-secrets\" annotation set to \"true\". This field should not be used to find auto-generated service account token secrets for use outside of pods. Instead, tokens can be requested directly using the TokenRequest API, or service account token secrets can be manually created. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\n\n\n\n\n\n## ServiceAccountList {#ServiceAccountList}\n\nServiceAccountList is a list of ServiceAccount objects\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: ServiceAccountList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">ServiceAccount<\/a>), required\n\n  List of ServiceAccounts. More info: https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ServiceAccount\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/serviceaccounts\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ServiceAccount\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceAccount<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ServiceAccount\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/serviceaccounts\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a 
href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceAccountList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ServiceAccount\n\n#### HTTP Request\n\nGET \/api\/v1\/serviceaccounts\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceAccountList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ServiceAccount\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/serviceaccounts\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ServiceAccount<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceAccount<\/a>): OK\n\n201 (<a href=\"\">ServiceAccount<\/a>): Created\n\n202 (<a 
href=\"\">ServiceAccount<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ServiceAccount\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/serviceaccounts\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ServiceAccount\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ServiceAccount<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceAccount<\/a>): OK\n\n201 (<a href=\"\">ServiceAccount<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ServiceAccount\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/serviceaccounts\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ServiceAccount\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceAccount<\/a>): OK\n\n201 (<a href=\"\">ServiceAccount<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ServiceAccount\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/serviceaccounts\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, 
required\n\n  name of the ServiceAccount\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceAccount<\/a>): OK\n\n202 (<a href=\"\">ServiceAccount<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ServiceAccount\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/serviceaccounts\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): 
OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   v1    import   k8s io api core v1    kind   ServiceAccount  content type   api reference  description   ServiceAccount binds together    a name  understood by users  and perhaps by peripheral systems  for an identity   a principal that can be authenticated and authorized   a set of secrets   title   ServiceAccount  weight  1 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  v1    import  k8s io api core v1        ServiceAccount   ServiceAccount   ServiceAccount binds together    a name  understood by users  and perhaps by peripheral systems  for an identity   a principal that can be authenticated and authorized   a set of secrets   hr       apiVersion    v1       kind    ServiceAccount       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      automountServiceAccountToken    boolean     AutomountServiceAccountToken indicates whether pods running as this service account should have an API token automatically mounted  Can be overridden at the pod level       imagePullSecrets       a href    LocalObjectReference  a       Atomic  will be replaced during a merge       ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that 
reference this ServiceAccount  ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod  but ImagePullSecrets are only accessed by the kubelet  More info  https   kubernetes io docs concepts containers images  specifying imagepullsecrets on a pod      secrets       a href    ObjectReference  a       Patch strategy  merge on key  name         Map  unique values on key name will be kept during a merge       Secrets is a list of the secrets in the same namespace that pods running using this ServiceAccount are allowed to use  Pods are only limited to this list if this service account has a  kubernetes io enforce mountable secrets  annotation set to  true   This field should not be used to find auto generated service account token secrets for use outside of pods  Instead  tokens can be requested directly using the TokenRequest API  or service account token secrets can be manually created  More info  https   kubernetes io docs concepts configuration secret         ServiceAccountList   ServiceAccountList   ServiceAccountList is a list of ServiceAccount objects   hr       apiVersion    v1       kind    ServiceAccountList       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      items       a href    ServiceAccount  a    required    List of ServiceAccounts  More info  https   kubernetes io docs tasks configure pod container configure service account          Operations   Operations      hr             get  read the specified ServiceAccount       HTTP Request  GET  api v1 namespaces  namespace  serviceaccounts  name        Parameters       name     in path    string  required    name of the ServiceAccount       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ServiceAccount  a    OK  401  Unauthorized        
list  list or watch objects of kind ServiceAccount       HTTP Request  GET  api v1 namespaces  namespace  serviceaccounts       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ServiceAccountList  a    OK  401  Unauthorized        list  list or watch objects of kind ServiceAccount       HTTP Request  GET  api v1 serviceaccounts       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch 
    in query    boolean     a href    watch  a          Response   200   a href    ServiceAccountList  a    OK  401  Unauthorized        create  create a ServiceAccount       HTTP Request  POST  api v1 namespaces  namespace  serviceaccounts       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    ServiceAccount  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ServiceAccount  a    OK  201   a href    ServiceAccount  a    Created  202   a href    ServiceAccount  a    Accepted  401  Unauthorized        update  replace the specified ServiceAccount       HTTP Request  PUT  api v1 namespaces  namespace  serviceaccounts  name        Parameters       name     in path    string  required    name of the ServiceAccount       namespace     in path    string  required     a href    namespace  a        body     a href    ServiceAccount  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ServiceAccount  a    OK  201   a href    ServiceAccount  a    Created  401  Unauthorized        patch  partially update the specified ServiceAccount       HTTP Request  PATCH  api v1 namespaces  namespace  serviceaccounts  name        Parameters       name     in path    string  required    name of the ServiceAccount       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    
dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ServiceAccount  a    OK  201   a href    ServiceAccount  a    Created  401  Unauthorized        delete  delete a ServiceAccount       HTTP Request  DELETE  api v1 namespaces  namespace  serviceaccounts  name        Parameters       name     in path    string  required    name of the ServiceAccount       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    ServiceAccount  a    OK  202   a href    ServiceAccount  a    Accepted  401  Unauthorized        deletecollection  delete collection of ServiceAccount       HTTP Request  DELETE  api v1 namespaces  namespace  serviceaccounts       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a    
    resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
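The ServiceAccount record above documents the object's fields (`automountServiceAccountToken`, `imagePullSecrets` as a list of LocalObjectReference) and a `create` operation that POSTs the object to `/api/v1/namespaces/{namespace}/serviceaccounts`. A minimal sketch of such a manifest, using only the documented fields (the names `builder`, `ci`, and `registry-creds` are placeholders, not values from the reference):

```python
# Sketch: a minimal ServiceAccount body built from the fields documented
# above. Metadata values are hypothetical placeholders.
import json

service_account = {
    "apiVersion": "v1",
    "kind": "ServiceAccount",
    "metadata": {"name": "builder", "namespace": "ci"},
    # Disable automatic API token mounting; overridable at the pod level.
    "automountServiceAccountToken": False,
    # LocalObjectReference list consulted by the kubelet when pulling images.
    "imagePullSecrets": [{"name": "registry-creds"}],
}

# Per the `create` operation, this body would be POSTed to
# /api/v1/namespaces/{namespace}/serviceaccounts
print(json.dumps(service_account, indent=2))
```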
{"questions":"kubernetes reference TokenRequest requests a token for a given service account weight 2 title TokenRequest contenttype apireference import k8s io api authentication v1 apimetadata apiVersion authentication k8s io v1 autogenerated true kind TokenRequest","answers":"---\napi_metadata:\n  apiVersion: \"authentication.k8s.io\/v1\"\n  import: \"k8s.io\/api\/authentication\/v1\"\n  kind: \"TokenRequest\"\ncontent_type: \"api_reference\"\ndescription: \"TokenRequest requests a token for a given service account.\"\ntitle: \"TokenRequest\"\nweight: 2\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: authentication.k8s.io\/v1`\n\n`import \"k8s.io\/api\/authentication\/v1\"`\n\n\n## TokenRequest {#TokenRequest}\n\nTokenRequest requests a token for a given service account.\n\n<hr>\n\n- **apiVersion**: authentication.k8s.io\/v1\n\n\n- **kind**: TokenRequest\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">TokenRequestSpec<\/a>), required\n\n  Spec holds information about the request being evaluated\n\n- **status** (<a href=\"\">TokenRequestStatus<\/a>)\n\n  Status is filled in by the server and indicates whether the token can be authenticated.\n\n\n\n\n\n## TokenRequestSpec {#TokenRequestSpec}\n\nTokenRequestSpec contains client provided parameters of a token request.\n\n<hr>\n\n- **audiences** ([]string), required\n\n  *Atomic: will be replaced during a merge*\n  \n  Audiences are the intended audiences of the token. A recipient of a token must identify itself with an identifier in the list of audiences of the token, and otherwise should reject the token. A token issued for multiple audiences may be used to authenticate against any of the audiences listed but implies a high degree of trust between the target audiences.\n\n- **boundObjectRef** (BoundObjectReference)\n\n  BoundObjectRef is a reference to an object that the token will be bound to. The token will only be valid for as long as the bound object exists. NOTE: The API server's TokenReview endpoint will validate the BoundObjectRef, but other audiences may not. Keep ExpirationSeconds small if you want prompt revocation.\n\n  <a name=\"BoundObjectReference\"><\/a>\n  *BoundObjectReference is a reference to an object that a token is bound to.*\n\n  - **boundObjectRef.apiVersion** (string)\n\n    API version of the referent.\n\n  - **boundObjectRef.kind** (string)\n\n    Kind of the referent. Valid kinds are 'Pod' and 'Secret'.\n\n  - **boundObjectRef.name** (string)\n\n    Name of the referent.\n\n  - **boundObjectRef.uid** (string)\n\n    UID of the referent.\n\n- **expirationSeconds** (int64)\n\n  ExpirationSeconds is the requested duration of validity of the request. 
The token issuer may return a token with a different validity duration so a client needs to check the 'expiration' field in a response.\n\n\n\n\n\n## TokenRequestStatus {#TokenRequestStatus}\n\nTokenRequestStatus is the result of a token request.\n\n<hr>\n\n- **expirationTimestamp** (Time), required\n\n  ExpirationTimestamp is the time of expiration of the returned token.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **token** (string), required\n\n  Token is the opaque bearer token.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `create` create token of a ServiceAccount\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/serviceaccounts\/{name}\/token\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the TokenRequest\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">TokenRequest<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">TokenRequest<\/a>): OK\n\n201 (<a href=\"\">TokenRequest<\/a>): Created\n\n202 (<a href=\"\">TokenRequest<\/a>): Accepted\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   authentication k8s io v1    import   k8s io api authentication v1    kind   TokenRequest  content type   api reference  description   TokenRequest requests a token for a given service account   title   TokenRequest  weight  2 auto generated  true           The file is auto generated from the Go source code of the 
component using a generic
[generator](https://github.com/kubernetes-sigs/reference-docs/). To learn how
to generate the reference documentation, please read
[Contributing to the reference documentation](/docs/contribute/generate-ref-docs/).
To update the reference content, please follow the
[Contributing upstream](/docs/contribute/generate-ref-docs/contribute-upstream/)
guide. You can file document formatting bugs against the
[reference-docs](https://github.com/kubernetes-sigs/reference-docs/) project.
-->

`apiVersion: authentication.k8s.io/v1`

`import "k8s.io/api/authentication/v1"`


## TokenRequest {#TokenRequest}

TokenRequest requests a token for a given service account.

<hr>

- **apiVersion**: authentication.k8s.io/v1


- **kind**: TokenRequest


- **metadata** (<a href="">ObjectMeta</a>)

  Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (<a href="">TokenRequestSpec</a>), required

  Spec holds information about the request being evaluated

- **status** (<a href="">TokenRequestStatus</a>)

  Status is filled in by the server and indicates whether the token can be authenticated.


## TokenRequestSpec {#TokenRequestSpec}

TokenRequestSpec contains client-provided parameters of a token request.

<hr>

- **audiences** ([]string), required

  *Atomic: will be replaced during a merge*

  Audiences are the intended audiences of the token. A recipient of a token must identify itself with an identifier in the list of audiences of the token, and otherwise should reject the token. A token issued for multiple audiences may be used to authenticate against any of the audiences listed but implies a high degree of trust between the target audiences.

- **boundObjectRef** (BoundObjectReference)

  BoundObjectRef is a reference to an object that the token will be bound to. The token will only be valid for as long as the bound object exists. NOTE: The API server's TokenReview endpoint will validate the BoundObjectRef, but other audiences may not. Keep ExpirationSeconds small if you want prompt revocation.

  <a name="BoundObjectReference"></a>
  *BoundObjectReference is a reference to an object that a token is bound to.*

  - **boundObjectRef.apiVersion** (string)

    API version of the referent.

  - **boundObjectRef.kind** (string)

    Kind of the referent. Valid kinds are 'Pod' and 'Secret'.

  - **boundObjectRef.name** (string)

    Name of the referent.

  - **boundObjectRef.uid** (string)

    UID of the referent.

- **expirationSeconds** (int64)

  ExpirationSeconds is the requested duration of validity of the request. The token issuer may return a token with a different validity duration so a client needs to check the 'expiration' field in a response.


## TokenRequestStatus {#TokenRequestStatus}

TokenRequestStatus is the result of a token request.

<hr>

- **expirationTimestamp** (Time), required

  ExpirationTimestamp is the time of expiration of the returned token.

  <a name="Time"></a>
  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*

- **token** (string), required

  Token is the opaque bearer token.


## Operations {#Operations}

<hr>

### `create` create token of a ServiceAccount

#### HTTP Request

POST /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token

#### Parameters

- **name** (*in path*): string, required

  name of the TokenRequest

- **namespace** (*in path*): string, required

  <a href="">namespace</a>

- **body**: <a href="">TokenRequest</a>, required

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **fieldManager** (*in query*): string

  <a href="">fieldManager</a>

- **fieldValidation** (*in query*): string

  <a href="">fieldValidation</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

#### Response

200 (<a href="">TokenRequest</a>): OK

201 (<a href="">TokenRequest</a>): Created

202 (<a href="">TokenRequest</a>): Accepted

401: Unauthorized
"}
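The `create token` operation above takes a TokenRequest body with `audiences`, an optional `expirationSeconds`, and an optional `boundObjectRef`. As a minimal sketch (no network calls; the helper names and the `builder` service account are illustrative, not part of any API), constructing the request path and body could look like:

```python
import json

def token_request_body(audiences, expiration_seconds=3600, bound_pod=None):
    """Build a TokenRequest manifest; bound_pod is an optional (name, uid) tuple."""
    spec = {"audiences": list(audiences), "expirationSeconds": expiration_seconds}
    if bound_pod is not None:
        name, uid = bound_pod
        # Bind the token's lifetime to a Pod so it stops validating once the
        # Pod is deleted (keep expirationSeconds small for prompt revocation).
        spec["boundObjectRef"] = {
            "apiVersion": "v1", "kind": "Pod", "name": name, "uid": uid,
        }
    return {
        "apiVersion": "authentication.k8s.io/v1",
        "kind": "TokenRequest",
        "spec": spec,
    }

def create_token_path(namespace, serviceaccount):
    """Path of the `create token` subresource shown above."""
    return f"/api/v1/namespaces/{namespace}/serviceaccounts/{serviceaccount}/token"

body = token_request_body(["https://kubernetes.default.svc"], 600)
print(create_token_path("default", "builder"))
print(json.dumps(body, indent=2))
```

The server may return a token with a different validity than requested, so a caller should read `status.expirationTimestamp` from the response rather than trusting the requested `expirationSeconds`.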
{"questions":"kubernetes reference import k8s io api certificates v1 kind CertificateSigningRequest contenttype apireference CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request and having it asynchronously approved and issued title CertificateSigningRequest weight 4 apimetadata apiVersion certificates k8s io v1 autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"certificates.k8s.io\/v1\"\n  import: \"k8s.io\/api\/certificates\/v1\"\n  kind: \"CertificateSigningRequest\"\ncontent_type: \"api_reference\"\ndescription: \"CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued.\"\ntitle: \"CertificateSigningRequest\"\nweight: 4\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: certificates.k8s.io\/v1`\n\n`import \"k8s.io\/api\/certificates\/v1\"`\n\n\n## CertificateSigningRequest {#CertificateSigningRequest}\n\nCertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued.\n\nKubelets use this API to obtain:\n 1. client certificates to authenticate to kube-apiserver (with the \"kubernetes.io\/kube-apiserver-client-kubelet\" signerName).\n 2. 
serving certificates for TLS endpoints kube-apiserver can connect to securely (with the \"kubernetes.io\/kubelet-serving\" signerName).\n\nThis API can be used to request client certificates to authenticate to kube-apiserver (with the \"kubernetes.io\/kube-apiserver-client\" signerName), or to obtain certificates from custom non-Kubernetes signers.\n\n<hr>\n\n- **apiVersion**: certificates.k8s.io\/v1\n\n\n- **kind**: CertificateSigningRequest\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n\n- **spec** (<a href=\"\">CertificateSigningRequestSpec<\/a>), required\n\n  spec contains the certificate request, and is immutable after creation. Only the request, signerName, expirationSeconds, and usages fields can be set on creation. Other fields are derived by Kubernetes and cannot be modified by users.\n\n- **status** (<a href=\"\">CertificateSigningRequestStatus<\/a>)\n\n  status contains information about whether the request is approved or denied, and the certificate issued by the signer, or the failure condition indicating signer failure.\n\n\n\n\n\n## CertificateSigningRequestSpec {#CertificateSigningRequestSpec}\n\nCertificateSigningRequestSpec contains the certificate request.\n\n<hr>\n\n- **request** ([]byte), required\n\n  *Atomic: will be replaced during a merge*\n  \n  request contains an x509 certificate signing request encoded in a \"CERTIFICATE REQUEST\" PEM block. When serialized as JSON or YAML, the data is additionally base64-encoded.\n\n- **signerName** (string), required\n\n  signerName indicates the requested signer, and is a qualified name.\n  \n  List\/watch requests for CertificateSigningRequests can filter on this field using a \"spec.signerName=NAME\" fieldSelector.\n  \n  Well-known Kubernetes signers are:\n   1. 
\"kubernetes.io\/kube-apiserver-client\": issues client certificates that can be used to authenticate to kube-apiserver.\n    Requests for this signer are never auto-approved by kube-controller-manager, can be issued by the \"csrsigning\" controller in kube-controller-manager.\n   2. \"kubernetes.io\/kube-apiserver-client-kubelet\": issues client certificates that kubelets use to authenticate to kube-apiserver.\n    Requests for this signer can be auto-approved by the \"csrapproving\" controller in kube-controller-manager, and can be issued by the \"csrsigning\" controller in kube-controller-manager.\n   3. \"kubernetes.io\/kubelet-serving\" issues serving certificates that kubelets use to serve TLS endpoints, which kube-apiserver can connect to securely.\n    Requests for this signer are never auto-approved by kube-controller-manager, and can be issued by the \"csrsigning\" controller in kube-controller-manager.\n  \n  More details are available at https:\/\/k8s.io\/docs\/reference\/access-authn-authz\/certificate-signing-requests\/#kubernetes-signers\n  \n  Custom signerNames can also be specified. The signer defines:\n   1. Trust distribution: how trust (CA bundles) are distributed.\n   2. Permitted subjects: and behavior when a disallowed subject is requested.\n   3. Required, permitted, or forbidden x509 extensions in the request (including whether subjectAltNames are allowed, which types, restrictions on allowed values) and behavior when a disallowed extension is requested.\n   4. Required, permitted, or forbidden key usages \/ extended key usages.\n   5. Expiration\/certificate lifetime: whether it is fixed by the signer, configurable by the admin.\n   6. Whether or not requests for CA certificates are allowed.\n\n- **expirationSeconds** (int32)\n\n  expirationSeconds is the requested duration of validity of the issued certificate. 
The certificate signer may issue a certificate with a different validity duration so a client must check the delta between the notBefore and and notAfter fields in the issued certificate to determine the actual duration.\n  \n  The v1.22+ in-tree implementations of the well-known Kubernetes signers will honor this field as long as the requested duration is not greater than the maximum duration they will honor per the --cluster-signing-duration CLI flag to the Kubernetes controller manager.\n  \n  Certificate signers may not honor this field for various reasons:\n  \n    1. Old signer that is unaware of the field (such as the in-tree\n       implementations prior to v1.22)\n    2. Signer whose configured maximum is shorter than the requested duration\n    3. Signer whose configured minimum is longer than the requested duration\n  \n  The minimum valid value for expirationSeconds is 600, i.e. 10 minutes.\n\n- **extra** (map[string][]string)\n\n  extra contains extra attributes of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable.\n\n- **groups** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  groups contains group membership of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable.\n\n- **uid** (string)\n\n  uid contains the uid of the user that created the CertificateSigningRequest. 
Populated by the API server on creation and immutable.\n\n- **usages** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  usages specifies a set of key usages requested in the issued certificate.\n  \n  Requests for TLS client certificates typically request: \"digital signature\", \"key encipherment\", \"client auth\".\n  \n  Requests for TLS serving certificates typically request: \"key encipherment\", \"digital signature\", \"server auth\".\n  \n  Valid values are:\n   \"signing\", \"digital signature\", \"content commitment\",\n   \"key encipherment\", \"key agreement\", \"data encipherment\",\n   \"cert sign\", \"crl sign\", \"encipher only\", \"decipher only\", \"any\",\n   \"server auth\", \"client auth\",\n   \"code signing\", \"email protection\", \"s\/mime\",\n   \"ipsec end system\", \"ipsec tunnel\", \"ipsec user\",\n   \"timestamping\", \"ocsp signing\", \"microsoft sgc\", \"netscape sgc\"\n\n- **username** (string)\n\n  username contains the name of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable.\n\n\n\n\n\n## CertificateSigningRequestStatus {#CertificateSigningRequestStatus}\n\nCertificateSigningRequestStatus contains conditions used to indicate approved\/denied\/failed status of the request, and the issued certificate.\n\n<hr>\n\n- **certificate** ([]byte)\n\n  *Atomic: will be replaced during a merge*\n  \n  certificate is populated with an issued certificate by the signer after an Approved condition is present. This field is set via the \/status subresource. Once populated, this field is immutable.\n  \n  If the certificate signing request is denied, a condition of type \"Denied\" is added and this field remains empty. If the signer cannot issue the certificate, a condition of type \"Failed\" is added and this field remains empty.\n  \n  Validation requirements:\n   1. certificate must contain one or more PEM blocks.\n   2. 
All PEM blocks must have the \"CERTIFICATE\" label, contain no headers, and the encoded data\n    must be a BER-encoded ASN.1 Certificate structure as described in section 4 of RFC5280.\n   3. Non-PEM content may appear before or after the \"CERTIFICATE\" PEM blocks and is unvalidated,\n    to allow for explanatory text as described in section 5.2 of RFC7468.\n  \n  If more than one PEM block is present, and the definition of the requested spec.signerName does not indicate otherwise, the first block is the issued certificate, and subsequent blocks should be treated as intermediate certificates and presented in TLS handshakes.\n  \n  The certificate is encoded in PEM format.\n  \n  When serialized as JSON or YAML, the data is additionally base64-encoded, so it consists of:\n  \n      base64(\n      -----BEGIN CERTIFICATE-----\n      ...\n      -----END CERTIFICATE-----\n      )\n\n- **conditions** ([]CertificateSigningRequestCondition)\n\n  *Map: unique values on key type will be kept during a merge*\n  \n  conditions applied to the request. Known conditions are \"Approved\", \"Denied\", and \"Failed\".\n\n  <a name=\"CertificateSigningRequestCondition\"><\/a>\n  *CertificateSigningRequestCondition describes a condition of a CertificateSigningRequest object*\n\n  - **conditions.status** (string), required\n\n    status of the condition, one of True, False, Unknown. Approved, Denied, and Failed conditions may not be \"False\" or \"Unknown\".\n\n  - **conditions.type** (string), required\n\n    type of the condition. 
Known conditions are \"Approved\", \"Denied\", and \"Failed\".\n    \n    An \"Approved\" condition is added via the \/approval subresource, indicating the request was approved and should be issued by the signer.\n    \n    A \"Denied\" condition is added via the \/approval subresource, indicating the request was denied and should not be issued by the signer.\n    \n    A \"Failed\" condition is added via the \/status subresource, indicating the signer failed to issue the certificate.\n    \n    Approved and Denied conditions are mutually exclusive. Approved, Denied, and Failed conditions cannot be removed once added.\n    \n    Only one condition of a given type is allowed.\n\n  - **conditions.lastTransitionTime** (Time)\n\n    lastTransitionTime is the time the condition last transitioned from one status to another. If unset, when a new condition type is added or an existing condition's status is changed, the server defaults this to the current time.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.lastUpdateTime** (Time)\n\n    lastUpdateTime is the time of the last update to this condition\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    message contains a human readable message with details about the request state\n\n  - **conditions.reason** (string)\n\n    reason indicates a brief reason for the request state\n\n\n\n\n\n## CertificateSigningRequestList {#CertificateSigningRequestList}\n\nCertificateSigningRequestList is a collection of CertificateSigningRequest objects\n\n<hr>\n\n- **apiVersion**: certificates.k8s.io\/v1\n\n\n- **kind**: CertificateSigningRequestList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n\n- **items** ([]<a href=\"\">CertificateSigningRequest<\/a>), required\n\n  items is a collection of CertificateSigningRequest objects\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified CertificateSigningRequest\n\n#### HTTP Request\n\nGET \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CertificateSigningRequest\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CertificateSigningRequest<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read approval of the specified CertificateSigningRequest\n\n#### HTTP Request\n\nGET \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\/{name}\/approval\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CertificateSigningRequest\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CertificateSigningRequest<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified CertificateSigningRequest\n\n#### HTTP Request\n\nGET \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CertificateSigningRequest\n\n\n- 
**pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CertificateSigningRequest<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind CertificateSigningRequest\n\n#### HTTP Request\n\nGET \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CertificateSigningRequestList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a CertificateSigningRequest\n\n#### HTTP Request\n\nPOST \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\n\n#### Parameters\n\n\n- **body**: <a href=\"\">CertificateSigningRequest<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CertificateSigningRequest<\/a>): OK\n\n201 (<a 
href=\"\">CertificateSigningRequest<\/a>): Created\n\n202 (<a href=\"\">CertificateSigningRequest<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified CertificateSigningRequest\n\n#### HTTP Request\n\nPUT \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CertificateSigningRequest\n\n\n- **body**: <a href=\"\">CertificateSigningRequest<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CertificateSigningRequest<\/a>): OK\n\n201 (<a href=\"\">CertificateSigningRequest<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace approval of the specified CertificateSigningRequest\n\n#### HTTP Request\n\nPUT \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\/{name}\/approval\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CertificateSigningRequest\n\n\n- **body**: <a href=\"\">CertificateSigningRequest<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CertificateSigningRequest<\/a>): OK\n\n201 (<a href=\"\">CertificateSigningRequest<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified CertificateSigningRequest\n\n#### HTTP Request\n\nPUT \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\/{name}\/status\n\n#### Parameters\n\n\n- 
**name** (*in path*): string, required\n\n  name of the CertificateSigningRequest\n\n\n- **body**: <a href=\"\">CertificateSigningRequest<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CertificateSigningRequest<\/a>): OK\n\n201 (<a href=\"\">CertificateSigningRequest<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified CertificateSigningRequest\n\n#### HTTP Request\n\nPATCH \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CertificateSigningRequest\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CertificateSigningRequest<\/a>): OK\n\n201 (<a href=\"\">CertificateSigningRequest<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update approval of the specified CertificateSigningRequest\n\n#### HTTP Request\n\nPATCH \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\/{name}\/approval\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CertificateSigningRequest\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a 
href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CertificateSigningRequest<\/a>): OK\n\n201 (<a href=\"\">CertificateSigningRequest<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified CertificateSigningRequest\n\n#### HTTP Request\n\nPATCH \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CertificateSigningRequest\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CertificateSigningRequest<\/a>): OK\n\n201 (<a href=\"\">CertificateSigningRequest<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a CertificateSigningRequest\n\n#### HTTP Request\n\nDELETE \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CertificateSigningRequest\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a 
href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of CertificateSigningRequest\n\n#### HTTP Request\n\nDELETE \/apis\/certificates.k8s.io\/v1\/certificatesigningrequests\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
CertificateSigningRequest       pretty     in query    string     a href    pretty  a          Response   200   a href    CertificateSigningRequest  a    OK  401  Unauthorized        get  read status of the specified CertificateSigningRequest       HTTP Request  GET  apis certificates k8s io v1 certificatesigningrequests  name  status       Parameters       name     in path    string  required    name of the CertificateSigningRequest       pretty     in query    string     a href    pretty  a          Response   200   a href    CertificateSigningRequest  a    OK  401  Unauthorized        list  list or watch objects of kind CertificateSigningRequest       HTTP Request  GET  apis certificates k8s io v1 certificatesigningrequests       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    CertificateSigningRequestList  a    OK  401  Unauthorized        create  create a CertificateSigningRequest       HTTP Request  POST  apis certificates k8s io v1 certificatesigningrequests       Parameters       body     a href    CertificateSigningRequest  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    
fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CertificateSigningRequest  a    OK  201   a href    CertificateSigningRequest  a    Created  202   a href    CertificateSigningRequest  a    Accepted  401  Unauthorized        update  replace the specified CertificateSigningRequest       HTTP Request  PUT  apis certificates k8s io v1 certificatesigningrequests  name        Parameters       name     in path    string  required    name of the CertificateSigningRequest       body     a href    CertificateSigningRequest  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CertificateSigningRequest  a    OK  201   a href    CertificateSigningRequest  a    Created  401  Unauthorized        update  replace approval of the specified CertificateSigningRequest       HTTP Request  PUT  apis certificates k8s io v1 certificatesigningrequests  name  approval       Parameters       name     in path    string  required    name of the CertificateSigningRequest       body     a href    CertificateSigningRequest  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CertificateSigningRequest  a    OK  201   a href    CertificateSigningRequest  a    Created  401  Unauthorized        update  replace status of the specified CertificateSigningRequest       HTTP Request  PUT  apis certificates k8s io v1 
certificatesigningrequests  name  status       Parameters       name     in path    string  required    name of the CertificateSigningRequest       body     a href    CertificateSigningRequest  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CertificateSigningRequest  a    OK  201   a href    CertificateSigningRequest  a    Created  401  Unauthorized        patch  partially update the specified CertificateSigningRequest       HTTP Request  PATCH  apis certificates k8s io v1 certificatesigningrequests  name        Parameters       name     in path    string  required    name of the CertificateSigningRequest       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CertificateSigningRequest  a    OK  201   a href    CertificateSigningRequest  a    Created  401  Unauthorized        patch  partially update approval of the specified CertificateSigningRequest       HTTP Request  PATCH  apis certificates k8s io v1 certificatesigningrequests  name  approval       Parameters       name     in path    string  required    name of the CertificateSigningRequest       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    
boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CertificateSigningRequest  a    OK  201   a href    CertificateSigningRequest  a    Created  401  Unauthorized        patch  partially update status of the specified CertificateSigningRequest       HTTP Request  PATCH  apis certificates k8s io v1 certificatesigningrequests  name  status       Parameters       name     in path    string  required    name of the CertificateSigningRequest       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CertificateSigningRequest  a    OK  201   a href    CertificateSigningRequest  a    Created  401  Unauthorized        delete  delete a CertificateSigningRequest       HTTP Request  DELETE  apis certificates k8s io v1 certificatesigningrequests  name        Parameters       name     in path    string  required    name of the CertificateSigningRequest       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of CertificateSigningRequest       HTTP Request  DELETE  apis certificates k8s io v1 certificatesigningrequests       Parameters       body     a href    DeleteOptions  a            continue     in query    string     a 
href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
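The `create` operation in the CertificateSigningRequest entry above accepts a CertificateSigningRequest object whose `spec.request` is a base64-encoded PEM PKCS#10 request. As a minimal sketch (not part of the reference itself), the JSON body for `POST /apis/certificates.k8s.io/v1/certificatesigningrequests` can be assembled like this; the PEM content is a placeholder and the name `example-csr` is assumed:

```python
import base64
import json

# Placeholder PEM-encoded PKCS#10 request; a real one would be generated
# with openssl or a crypto library, not hard-coded.
pem_csr = (
    "-----BEGIN CERTIFICATE REQUEST-----\n"
    "...placeholder...\n"
    "-----END CERTIFICATE REQUEST-----\n"
)

# spec.request is a []byte field, so it is serialized as base64 in JSON.
body = {
    "apiVersion": "certificates.k8s.io/v1",
    "kind": "CertificateSigningRequest",
    "metadata": {"name": "example-csr"},  # assumed name
    "spec": {
        "request": base64.b64encode(pem_csr.encode()).decode(),
        "signerName": "kubernetes.io/kube-apiserver-client",
        # Typical request for a TLS client certificate, per the usages
        # list documented above.
        "usages": ["digital signature", "key encipherment", "client auth"],
    },
}

payload = json.dumps(body)
```

Once the request carries an `Approved` condition (added via the `approval` subresource), the signer populates `status.certificate` with the issued PEM.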
{"questions":"kubernetes reference kind SelfSubjectReview weight 6 contenttype apireference import k8s io api authentication v1 SelfSubjectReview contains the user information that the kube apiserver has about the user making this request apimetadata title SelfSubjectReview apiVersion authentication k8s io v1 autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"authentication.k8s.io\/v1\"\n  import: \"k8s.io\/api\/authentication\/v1\"\n  kind: \"SelfSubjectReview\"\ncontent_type: \"api_reference\"\ndescription: \"SelfSubjectReview contains the user information that the kube-apiserver has about the user making this request.\"\ntitle: \"SelfSubjectReview\"\nweight: 6\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: authentication.k8s.io\/v1`\n\n`import \"k8s.io\/api\/authentication\/v1\"`\n\n\n## SelfSubjectReview {#SelfSubjectReview}\n\nSelfSubjectReview contains the user information that the kube-apiserver has about the user making this request. When using impersonation, users will receive the user info of the user being impersonated.  If impersonation or request header authentication is used, any extra keys will have their case ignored and returned as lowercase.\n\n<hr>\n\n- **apiVersion**: authentication.k8s.io\/v1\n\n\n- **kind**: SelfSubjectReview\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **status** (<a href=\"\">SelfSubjectReviewStatus<\/a>)\n\n  Status is filled in by the server with the user attributes.\n\n\n\n\n\n## SelfSubjectReviewStatus {#SelfSubjectReviewStatus}\n\nSelfSubjectReviewStatus is filled by the kube-apiserver and sent back to a user.\n\n<hr>\n\n- **userInfo** (UserInfo)\n\n  User attributes of the user making this request.\n\n  <a name=\"UserInfo\"><\/a>\n  *UserInfo holds the information about the user needed to implement the user.Info interface.*\n\n  - **userInfo.extra** (map[string][]string)\n\n    Any additional information provided by the authenticator.\n\n  - **userInfo.groups** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    The names of groups this user is a part of.\n\n  - **userInfo.uid** (string)\n\n    A unique value that identifies this user across time. If this user is deleted and another user by the same name is added, they will have different UIDs.\n\n  - **userInfo.username** (string)\n\n    The name that uniquely identifies this user among all active users.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `create` create a SelfSubjectReview\n\n#### HTTP Request\n\nPOST \/apis\/authentication.k8s.io\/v1\/selfsubjectreviews\n\n#### Parameters\n\n\n- **body**: <a href=\"\">SelfSubjectReview<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">SelfSubjectReview<\/a>): OK\n\n201 (<a href=\"\">SelfSubjectReview<\/a>): Created\n\n202 (<a href=\"\">SelfSubjectReview<\/a>): Accepted\n\n401: Unauthorized\n","site":"kubernetes 
reference","answers_cleaned":"    api metadata    apiVersion   authentication k8s io v1    import   k8s io api authentication v1    kind   SelfSubjectReview  content type   api reference  description   SelfSubjectReview contains the user information that the kube apiserver has about the user making this request   title   SelfSubjectReview  weight  6 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  authentication k8s io v1    import  k8s io api authentication v1        SelfSubjectReview   SelfSubjectReview   SelfSubjectReview contains the user information that the kube apiserver has about the user making this request  When using impersonation  users will receive the user info of the user being impersonated   If impersonation or request header authentication is used  any extra keys will have their case ignored and returned as lowercase    hr       apiVersion    authentication k8s io v1       kind    SelfSubjectReview       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      status     a href    SelfSubjectReviewStatus  a      Status is filled in by the server with the user attributes          SelfSubjectReviewStatus   SelfSubjectReviewStatus   SelfSubjectReviewStatus is filled by the kube apiserver and sent back to a user    hr       userInfo    UserInfo     User attributes of the user 
making this request      a name  UserInfo    a     UserInfo holds the information about the user needed to implement the user Info interface          userInfo extra    map string   string       Any additional information provided by the authenticator         userInfo groups      string        Atomic  will be replaced during a merge           The names of groups this user is a part of         userInfo uid    string       A unique value that identifies this user across time  If this user is deleted and another user by the same name is added  they will have different UIDs         userInfo username    string       The name that uniquely identifies this user among all active users          Operations   Operations      hr             create  create a SelfSubjectReview       HTTP Request  POST  apis authentication k8s io v1 selfsubjectreviews       Parameters       body     a href    SelfSubjectReview  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    SelfSubjectReview  a    OK  201   a href    SelfSubjectReview  a    Created  202   a href    SelfSubjectReview  a    Accepted  401  Unauthorized "}
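As the SelfSubjectReview entry above documents, the request body carries no spec; the server fills in `status.userInfo` for the caller. A minimal sketch of building the `create` body and reading a response shaped like `SelfSubjectReviewStatus` (the user attributes below are invented for illustration):

```python
# Request body for POST /apis/authentication.k8s.io/v1/selfsubjectreviews.
# SelfSubjectReview has no spec; the server returns status.userInfo.
request_body = {
    "apiVersion": "authentication.k8s.io/v1",
    "kind": "SelfSubjectReview",
}

# A response shaped like the SelfSubjectReviewStatus fields documented
# above; username, groups, and extra values here are invented.
response = {
    "apiVersion": "authentication.k8s.io/v1",
    "kind": "SelfSubjectReview",
    "status": {
        "userInfo": {
            "username": "jane.doe",
            "groups": ["system:authenticated", "dev-team"],
            "extra": {"sessionName": ["cli-session"]},
        }
    },
}

user = response["status"]["userInfo"]
summary = f'{user["username"]} ({", ".join(user["groups"])})'
```

This is the same round trip `kubectl auth whoami` performs.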
{"questions":"kubernetes reference kind TokenReview contenttype apireference title TokenReview import k8s io api authentication v1 weight 3 apimetadata apiVersion authentication k8s io v1 TokenReview attempts to authenticate a token to a known user autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"authentication.k8s.io\/v1\"\n  import: \"k8s.io\/api\/authentication\/v1\"\n  kind: \"TokenReview\"\ncontent_type: \"api_reference\"\ndescription: \"TokenReview attempts to authenticate a token to a known user.\"\ntitle: \"TokenReview\"\nweight: 3\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: authentication.k8s.io\/v1`\n\n`import \"k8s.io\/api\/authentication\/v1\"`\n\n\n## TokenReview {#TokenReview}\n\nTokenReview attempts to authenticate a token to a known user. Note: TokenReview requests may be cached by the webhook token authenticator plugin in the kube-apiserver.\n\n<hr>\n\n- **apiVersion**: authentication.k8s.io\/v1\n\n\n- **kind**: TokenReview\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">TokenReviewSpec<\/a>), required\n\n  Spec holds information about the request being evaluated\n\n- **status** (<a href=\"\">TokenReviewStatus<\/a>)\n\n  Status is filled in by the server and indicates whether the request can be authenticated.\n\n\n\n\n\n## TokenReviewSpec {#TokenReviewSpec}\n\nTokenReviewSpec is a description of the token authentication request.\n\n<hr>\n\n- **audiences** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  Audiences is a list of the identifiers that the resource server presented with the token identifies as. Audience-aware token authenticators will verify that the token was intended for at least one of the audiences in this list. If no audiences are provided, the audience will default to the audience of the Kubernetes apiserver.\n\n- **token** (string)\n\n  Token is the opaque bearer token.\n\n\n\n\n\n## TokenReviewStatus {#TokenReviewStatus}\n\nTokenReviewStatus is the result of the token authentication request.\n\n<hr>\n\n- **audiences** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  Audiences are audience identifiers chosen by the authenticator that are compatible with both the TokenReview and token. An identifier is any identifier in the intersection of the TokenReviewSpec audiences and the token's audiences. A client of the TokenReview API that sets the spec.audiences field should validate that a compatible audience identifier is returned in the status.audiences field to ensure that the TokenReview server is audience aware. 
If a TokenReview returns an empty status.audiences field where status.authenticated is \"true\", the token is valid against the audience of the Kubernetes API server.\n\n- **authenticated** (boolean)\n\n  Authenticated indicates that the token was associated with a known user.\n\n- **error** (string)\n\n  Error indicates that the token couldn't be checked.\n\n- **user** (UserInfo)\n\n  User is the UserInfo associated with the provided token.\n\n  <a name=\"UserInfo\"><\/a>\n  *UserInfo holds the information about the user needed to implement the user.Info interface.*\n\n  - **user.extra** (map[string][]string)\n\n    Any additional information provided by the authenticator.\n\n  - **user.groups** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    The names of groups this user is a part of.\n\n  - **user.uid** (string)\n\n    A unique value that identifies this user across time. If this user is deleted and another user by the same name is added, they will have different UIDs.\n\n  - **user.username** (string)\n\n    The name that uniquely identifies this user among all active users.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `create` create a TokenReview\n\n#### HTTP Request\n\nPOST \/apis\/authentication.k8s.io\/v1\/tokenreviews\n\n#### Parameters\n\n\n- **body**: <a href=\"\">TokenReview<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">TokenReview<\/a>): OK\n\n201 (<a href=\"\">TokenReview<\/a>): Created\n\n202 (<a href=\"\">TokenReview<\/a>): Accepted\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   authentication k8s io v1    import   k8s io api 
authentication v1    kind   TokenReview  content type   api reference  description   TokenReview attempts to authenticate a token to a known user   title   TokenReview  weight  3 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  authentication k8s io v1    import  k8s io api authentication v1        TokenReview   TokenReview   TokenReview attempts to authenticate a token to a known user  Note  TokenReview requests may be cached by the webhook token authenticator plugin in the kube apiserver    hr       apiVersion    authentication k8s io v1       kind    TokenReview       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    TokenReviewSpec  a    required    Spec holds information about the request being evaluated      status     a href    TokenReviewStatus  a      Status is filled in by the server and indicates whether the request can be authenticated          TokenReviewSpec   TokenReviewSpec   TokenReviewSpec is a description of the token authentication request    hr       audiences      string      Atomic  will be replaced during a merge       Audiences is a list of the identifiers that the resource server presented with the token identifies as  Audience aware token authenticators will verify that the token was intended for at least one of the audiences in this 
list  If no audiences are provided  the audience will default to the audience of the Kubernetes apiserver       token    string     Token is the opaque bearer token          TokenReviewStatus   TokenReviewStatus   TokenReviewStatus is the result of the token authentication request    hr       audiences      string      Atomic  will be replaced during a merge       Audiences are audience identifiers chosen by the authenticator that are compatible with both the TokenReview and token  An identifier is any identifier in the intersection of the TokenReviewSpec audiences and the token s audiences  A client of the TokenReview API that sets the spec audiences field should validate that a compatible audience identifier is returned in the status audiences field to ensure that the TokenReview server is audience aware  If a TokenReview returns an empty status audience field where status authenticated is  true   the token is valid against the audience of the Kubernetes API server       authenticated    boolean     Authenticated indicates that the token was associated with a known user       error    string     Error indicates that the token couldn t be checked      user    UserInfo     User is the UserInfo associated with the provided token      a name  UserInfo    a     UserInfo holds the information about the user needed to implement the user Info interface          user extra    map string   string       Any additional information provided by the authenticator         user groups      string        Atomic  will be replaced during a merge           The names of groups this user is a part of         user uid    string       A unique value that identifies this user across time  If this user is deleted and another user by the same name is added  they will have different UIDs         user username    string       The name that uniquely identifies this user among all active users          Operations   Operations      hr             create  create a TokenReview       HTTP Request  
POST  apis authentication k8s io v1 tokenreviews       Parameters       body     a href    TokenReview  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    TokenReview  a    OK  201   a href    TokenReview  a    Created  202   a href    TokenReview  a    Accepted  401  Unauthorized "}
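The TokenReview entry above says an audience-aware client that sets `spec.audiences` should confirm a compatible audience comes back in `status.audiences`. A minimal sketch of that check (the token value, audience URL, and status values below are placeholders, not real cluster data):

```python
# Request body for POST /apis/authentication.k8s.io/v1/tokenreviews.
# The token and audience values are placeholders.
review = {
    "apiVersion": "authentication.k8s.io/v1",
    "kind": "TokenReview",
    "spec": {
        "token": "<opaque-bearer-token>",
        "audiences": ["https://kubernetes.default.svc"],
    },
}

def audience_aware_ok(status: dict, requested: list) -> bool:
    """Apply the client-side check described above: the token must be
    authenticated AND at least one returned audience must match one
    the client requested."""
    if not status.get("authenticated", False):
        return False
    returned = status.get("audiences", [])
    return any(aud in requested for aud in returned)

# A status shaped like TokenReviewStatus, with invented values.
sample_status = {
    "authenticated": True,
    "audiences": ["https://kubernetes.default.svc"],
}
ok = audience_aware_ok(sample_status, review["spec"]["audiences"])
```

An empty `status.audiences` with `authenticated: true` is the special case noted above (valid against the API server's own audience), which this strict sketch deliberately rejects for non-apiserver resource servers.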
{"questions":"kubernetes reference apiVersion node k8s io v1 title RuntimeClass RuntimeClass defines a class of container runtime supported in the cluster contenttype apireference kind RuntimeClass apimetadata autogenerated true weight 9 import k8s io api node v1","answers":"---\napi_metadata:\n  apiVersion: \"node.k8s.io\/v1\"\n  import: \"k8s.io\/api\/node\/v1\"\n  kind: \"RuntimeClass\"\ncontent_type: \"api_reference\"\ndescription: \"RuntimeClass defines a class of container runtime supported in the cluster.\"\ntitle: \"RuntimeClass\"\nweight: 9\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: node.k8s.io\/v1`\n\n`import \"k8s.io\/api\/node\/v1\"`\n\n\n## RuntimeClass {#RuntimeClass}\n\nRuntimeClass defines a class of container runtime supported in the cluster. The RuntimeClass is used to determine which container runtime is used to run all containers in a pod. RuntimeClasses are manually defined by a user or cluster provisioner, and referenced in the PodSpec. The Kubelet is responsible for resolving the RuntimeClassName reference before running the pod.  
For more details, see https:\/\/kubernetes.io\/docs\/concepts\/containers\/runtime-class\/\n\n<hr>\n\n- **apiVersion**: node.k8s.io\/v1\n\n\n- **kind**: RuntimeClass\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **handler** (string), required\n\n  handler specifies the underlying runtime and configuration that the CRI implementation will use to handle pods of this class. The possible values are specific to the node & CRI configuration.  It is assumed that all handlers are available on every node, and handlers of the same name are equivalent on every node. For example, a handler called \"runc\" might specify that the runc OCI runtime (using native Linux containers) will be used to run the containers in a pod. The Handler must be lowercase, conform to the DNS Label (RFC 1123) requirements, and is immutable.\n\n- **overhead** (Overhead)\n\n  overhead represents the resource overhead associated with running a pod for a given RuntimeClass. For more details, see\n   https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/pod-overhead\/\n\n  <a name=\"Overhead\"><\/a>\n  *Overhead structure represents the resource overhead associated with running a pod.*\n\n  - **overhead.podFixed** (map[string]<a href=\"\">Quantity<\/a>)\n\n    podFixed represents the fixed resource overhead associated with running a pod.\n\n- **scheduling** (Scheduling)\n\n  scheduling holds the scheduling constraints to ensure that pods running with this RuntimeClass are scheduled to nodes that support it. If scheduling is nil, this RuntimeClass is assumed to be supported by all nodes.\n\n  <a name=\"Scheduling\"><\/a>\n  *Scheduling specifies the scheduling constraints for nodes supporting a RuntimeClass.*\n\n  - **scheduling.nodeSelector** (map[string]string)\n\n    nodeSelector lists labels that must be present on nodes that support this RuntimeClass. 
Pods using this RuntimeClass can only be scheduled to a node matched by this selector. The RuntimeClass nodeSelector is merged with a pod's existing nodeSelector. Any conflicts will cause the pod to be rejected in admission.\n\n  - **scheduling.tolerations** ([]Toleration)\n\n    *Atomic: will be replaced during a merge*\n    \n    tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission, effectively unioning the set of nodes tolerated by the pod and the RuntimeClass.\n\n    <a name=\"Toleration\"><\/a>\n    *The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.*\n\n    - **scheduling.tolerations.key** (string)\n\n      Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.\n\n    - **scheduling.tolerations.operator** (string)\n\n      Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.\n\n    - **scheduling.tolerations.value** (string)\n\n      Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.\n\n    - **scheduling.tolerations.effect** (string)\n\n      Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.\n\n    - **scheduling.tolerations.tolerationSeconds** (int64)\n\n      TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). 
Zero and negative values will be treated as 0 (evict immediately) by the system.\n\n\n\n\n\n## RuntimeClassList {#RuntimeClassList}\n\nRuntimeClassList is a list of RuntimeClass objects.\n\n<hr>\n\n- **apiVersion**: node.k8s.io\/v1\n\n\n- **kind**: RuntimeClassList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">RuntimeClass<\/a>), required\n\n  items is a list of schema objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified RuntimeClass\n\n#### HTTP Request\n\nGET \/apis\/node.k8s.io\/v1\/runtimeclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the RuntimeClass\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RuntimeClass<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind RuntimeClass\n\n#### HTTP Request\n\nGET \/apis\/node.k8s.io\/v1\/runtimeclasses\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): 
boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RuntimeClassList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a RuntimeClass\n\n#### HTTP Request\n\nPOST \/apis\/node.k8s.io\/v1\/runtimeclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">RuntimeClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RuntimeClass<\/a>): OK\n\n201 (<a href=\"\">RuntimeClass<\/a>): Created\n\n202 (<a href=\"\">RuntimeClass<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified RuntimeClass\n\n#### HTTP Request\n\nPUT \/apis\/node.k8s.io\/v1\/runtimeclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the RuntimeClass\n\n\n- **body**: <a href=\"\">RuntimeClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RuntimeClass<\/a>): OK\n\n201 (<a href=\"\">RuntimeClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified RuntimeClass\n\n#### HTTP Request\n\nPATCH \/apis\/node.k8s.io\/v1\/runtimeclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the RuntimeClass\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- 
**fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">RuntimeClass<\/a>): OK\n\n201 (<a href=\"\">RuntimeClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a RuntimeClass\n\n#### HTTP Request\n\nDELETE \/apis\/node.k8s.io\/v1\/runtimeclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the RuntimeClass\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of RuntimeClass\n\n#### HTTP Request\n\nDELETE \/apis\/node.k8s.io\/v1\/runtimeclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a 
href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
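Two behaviors described in the RuntimeClass reference above lend themselves to a small sketch: the `handler` field must be a lowercase DNS label (RFC 1123), and at admission time the RuntimeClass `scheduling.nodeSelector` is merged with the pod's existing nodeSelector (any conflict rejects the pod) while `scheduling.tolerations` are appended excluding duplicates. The stdlib-only Python below is an illustrative sketch of those rules, not the real Kubernetes implementation (which lives in the API server and kubelet); the function names are ours.

```python
import re

# RFC 1123 DNS label: lowercase alphanumerics and '-', 1-63 characters,
# starting and ending with an alphanumeric character.
DNS_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")


def valid_handler(handler: str) -> bool:
    """Check the RuntimeClass 'handler' constraint described above."""
    return bool(DNS_LABEL.match(handler))


def merge_node_selector(pod_sel: dict, rc_sel: dict) -> dict:
    """Merge a RuntimeClass nodeSelector into a pod's nodeSelector.

    Per the reference text, any conflicting value causes the pod to be
    rejected in admission; here we model that with an exception.
    """
    merged = dict(pod_sel)
    for key, value in rc_sel.items():
        if key in merged and merged[key] != value:
            raise ValueError(f"nodeSelector conflict on {key!r}: pod rejected in admission")
        merged[key] = value
    return merged


def append_tolerations(pod_tols: list, rc_tols: list) -> list:
    """Append RuntimeClass tolerations to the pod's, excluding duplicates,
    effectively unioning the sets of tolerated taints."""
    return pod_tols + [t for t in rc_tols if t not in pod_tols]
```

For example, `valid_handler("runc")` passes while `valid_handler("Runc")` fails the lowercase requirement, and merging `{"runtime": "runc"}` with `{"runtime": "gvisor"}` raises, mirroring admission rejection.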
{"questions":"kubernetes reference title Node apiVersion v1 Node is a worker node in Kubernetes contenttype apireference weight 8 apimetadata kind Node autogenerated true import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"Node\"\ncontent_type: \"api_reference\"\ndescription: \"Node is a worker node in Kubernetes.\"\ntitle: \"Node\"\nweight: 8\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## Node {#Node}\n\nNode is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd).\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: Node\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">NodeSpec<\/a>)\n\n  Spec defines the behavior of a node. https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">NodeStatus<\/a>)\n\n  Most recently observed status of the node. Populated by the system. Read-only. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## NodeSpec {#NodeSpec}\n\nNodeSpec describes the attributes that a node is created with.\n\n<hr>\n\n- **configSource** (NodeConfigSource)\n\n  Deprecated: Previously used to specify the source of the node's configuration for the DynamicKubeletConfig feature. This feature is removed.\n\n  <a name=\"NodeConfigSource\"><\/a>\n  *NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22*\n\n  - **configSource.configMap** (ConfigMapNodeConfigSource)\n\n    ConfigMap is a reference to a Node's ConfigMap.\n\n    <a name=\"ConfigMapNodeConfigSource\"><\/a>\n    *ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https:\/\/git.k8s.io\/enhancements\/keps\/sig-node\/281-dynamic-kubelet-configuration*\n\n    - **configSource.configMap.kubeletConfigKey** (string), required\n\n      KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure. This field is required in all cases.\n\n    - **configSource.configMap.name** (string), required\n\n      Name is the metadata.name of the referenced ConfigMap. This field is required in all cases.\n\n    - **configSource.configMap.namespace** (string), required\n\n      Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases.\n\n    - **configSource.configMap.resourceVersion** (string)\n\n      ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status.\n\n    - **configSource.configMap.uid** (string)\n\n      UID is the metadata.UID of the referenced ConfigMap. 
This field is forbidden in Node.Spec, and required in Node.Status.\n\n- **externalID** (string)\n\n  Deprecated. Not all kubelets will set this field. Remove field after 1.13. see: https:\/\/issues.k8s.io\/61966\n\n- **podCIDR** (string)\n\n  PodCIDR represents the pod IP range assigned to the node.\n\n- **podCIDRs** ([]string)\n\n  *Set: unique values will be kept during a merge*\n  \n  podCIDRs represents the IP ranges assigned to the node for usage by Pods on that node. If this field is specified, the 0th entry must match the podCIDR field. It may contain at most 1 value for each of IPv4 and IPv6.\n\n- **providerID** (string)\n\n  ID of the node assigned by the cloud provider in the format: \\<ProviderName>:\/\/\\<ProviderSpecificNodeID>\n\n- **taints** ([]Taint)\n\n  *Atomic: will be replaced during a merge*\n  \n  If specified, the node's taints.\n\n  <a name=\"Taint\"><\/a>\n  *The node this Taint is attached to has the \"effect\" on any pod that does not tolerate the Taint.*\n\n  - **taints.effect** (string), required\n\n    Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute.\n\n  - **taints.key** (string), required\n\n    Required. The taint key to be applied to a node.\n\n  - **taints.timeAdded** (Time)\n\n    TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **taints.value** (string)\n\n    The taint value corresponding to the taint key.\n\n- **unschedulable** (boolean)\n\n  Unschedulable controls node schedulability of new pods. By default, node is schedulable. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/nodes\/node\/#manual-node-administration\n\n\n\n\n\n## NodeStatus {#NodeStatus}\n\nNodeStatus is information about the current status of a node.\n\n<hr>\n\n- **addresses** ([]NodeAddress)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  List of addresses reachable to the node. Queried from cloud provider, if available. More info: https:\/\/kubernetes.io\/docs\/concepts\/nodes\/node\/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https:\/\/pr.k8s.io\/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP).\n\n  <a name=\"NodeAddress\"><\/a>\n  *NodeAddress contains information for the node's address.*\n\n  - **addresses.address** (string), required\n\n    The node address.\n\n  - **addresses.type** (string), required\n\n    Node address type, one of Hostname, ExternalIP or InternalIP.\n\n- **allocatable** (map[string]<a href=\"\">Quantity<\/a>)\n\n  Allocatable represents the resources of a node that are available for scheduling. Defaults to Capacity.\n\n- **capacity** (map[string]<a href=\"\">Quantity<\/a>)\n\n  Capacity represents the total resources of a node. More info: https:\/\/kubernetes.io\/docs\/reference\/node\/node-status\/#capacity\n\n- **conditions** ([]NodeCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  Conditions is an array of current observed node conditions. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/nodes\/node\/#condition\n\n  <a name=\"NodeCondition\"><\/a>\n  *NodeCondition contains condition information for a node.*\n\n  - **conditions.status** (string), required\n\n    Status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    Type of node condition.\n\n  - **conditions.lastHeartbeatTime** (Time)\n\n    Last time we got an update on a given condition.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.lastTransitionTime** (Time)\n\n    Last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    Human-readable message indicating details about the last transition.\n\n  - **conditions.reason** (string)\n\n    (brief) reason for the condition's last transition.\n\n- **config** (NodeConfigStatus)\n\n  Status of the config assigned to the node via the dynamic Kubelet config feature.\n\n  <a name=\"NodeConfigStatus\"><\/a>\n  *NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource.*\n\n  - **config.active** (NodeConfigSource)\n\n    Active reports the checkpointed config the node is actively using. Active will represent either the current version of the Assigned config, or the current LastKnownGood config, depending on whether attempting to use the Assigned config results in an error.\n\n    <a name=\"NodeConfigSource\"><\/a>\n    *NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. 
This API is deprecated since 1.22*\n\n    - **config.active.configMap** (ConfigMapNodeConfigSource)\n\n      ConfigMap is a reference to a Node's ConfigMap.\n\n      <a name=\"ConfigMapNodeConfigSource\"><\/a>\n      *ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https:\/\/git.k8s.io\/enhancements\/keps\/sig-node\/281-dynamic-kubelet-configuration*\n\n      - **config.active.configMap.kubeletConfigKey** (string), required\n\n        KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure. This field is required in all cases.\n\n      - **config.active.configMap.name** (string), required\n\n        Name is the metadata.name of the referenced ConfigMap. This field is required in all cases.\n\n      - **config.active.configMap.namespace** (string), required\n\n        Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases.\n\n      - **config.active.configMap.resourceVersion** (string)\n\n        ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status.\n\n      - **config.active.configMap.uid** (string)\n\n        UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status.\n\n  - **config.assigned** (NodeConfigSource)\n\n    Assigned reports the checkpointed config the node will try to use. When Node.Spec.ConfigSource is updated, the node checkpoints the associated config payload to local disk, along with a record indicating intended config. The node refers to this record to choose its config checkpoint, and reports this record in Assigned. Assigned only updates in the status after the record has been checkpointed to disk. 
When the Kubelet is restarted, it tries to make the Assigned config the Active config by loading and validating the checkpointed payload identified by Assigned.\n\n    <a name=\"NodeConfigSource\"><\/a>\n    *NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22*\n\n    - **config.assigned.configMap** (ConfigMapNodeConfigSource)\n\n      ConfigMap is a reference to a Node's ConfigMap\n\n      <a name=\"ConfigMapNodeConfigSource\"><\/a>\n      *ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https:\/\/git.k8s.io\/enhancements\/keps\/sig-node\/281-dynamic-kubelet-configuration*\n\n      - **config.assigned.configMap.kubeletConfigKey** (string), required\n\n        KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases.\n\n      - **config.assigned.configMap.name** (string), required\n\n        Name is the metadata.name of the referenced ConfigMap. This field is required in all cases.\n\n      - **config.assigned.configMap.namespace** (string), required\n\n        Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases.\n\n      - **config.assigned.configMap.resourceVersion** (string)\n\n        ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status.\n\n      - **config.assigned.configMap.uid** (string)\n\n        UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status.\n\n  - **config.error** (string)\n\n    Error describes any problems reconciling the Spec.ConfigSource to the Active config. 
Errors may occur, for example, attempting to checkpoint Spec.ConfigSource to the local Assigned record, attempting to checkpoint the payload associated with Spec.ConfigSource, attempting to load or validate the Assigned config, etc. Errors may occur at different points while syncing config. Earlier errors (e.g. download or checkpointing errors) will not result in a rollback to LastKnownGood, and may resolve across Kubelet retries. Later errors (e.g. loading or validating a checkpointed config) will result in a rollback to LastKnownGood. In the latter case, it is usually possible to resolve the error by fixing the config assigned in Spec.ConfigSource. You can find additional information for debugging by searching the error message in the Kubelet log. Error is a human-readable description of the error state; machines can check whether or not Error is empty, but should not rely on the stability of the Error text across Kubelet versions.\n\n  - **config.lastKnownGood** (NodeConfigSource)\n\n    LastKnownGood reports the checkpointed config the node will fall back to when it encounters an error attempting to use the Assigned config. The Assigned config becomes the LastKnownGood config when the node determines that the Assigned config is stable and correct. This is currently implemented as a 10-minute soak period starting when the local record of Assigned config is updated. If the Assigned config is Active at the end of this period, it becomes the LastKnownGood. Note that if Spec.ConfigSource is reset to nil (use local defaults), the LastKnownGood is also immediately reset to nil, because the local default config is always assumed good. You should not make assumptions about the node's method of determining config stability and correctness, as this may change or become configurable in the future.\n\n    <a name=\"NodeConfigSource\"><\/a>\n    *NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. 
This API is deprecated since 1.22*\n\n    - **config.lastKnownGood.configMap** (ConfigMapNodeConfigSource)\n\n      ConfigMap is a reference to a Node's ConfigMap\n\n      <a name=\"ConfigMapNodeConfigSource\"><\/a>\n      *ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https:\/\/git.k8s.io\/enhancements\/keps\/sig-node\/281-dynamic-kubelet-configuration*\n\n      - **config.lastKnownGood.configMap.kubeletConfigKey** (string), required\n\n        KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure. This field is required in all cases.\n\n      - **config.lastKnownGood.configMap.name** (string), required\n\n        Name is the metadata.name of the referenced ConfigMap. This field is required in all cases.\n\n      - **config.lastKnownGood.configMap.namespace** (string), required\n\n        Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases.\n\n      - **config.lastKnownGood.configMap.resourceVersion** (string)\n\n        ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status.\n\n      - **config.lastKnownGood.configMap.uid** (string)\n\n        UID is the metadata.UID of the referenced ConfigMap.
This field is forbidden in Node.Spec, and required in Node.Status.\n\n- **daemonEndpoints** (NodeDaemonEndpoints)\n\n  Endpoints of daemons running on the Node.\n\n  <a name=\"NodeDaemonEndpoints\"><\/a>\n  *NodeDaemonEndpoints lists ports opened by daemons running on the Node.*\n\n  - **daemonEndpoints.kubeletEndpoint** (DaemonEndpoint)\n\n    Endpoint on which Kubelet is listening.\n\n    <a name=\"DaemonEndpoint\"><\/a>\n    *DaemonEndpoint contains information about a single Daemon endpoint.*\n\n    - **daemonEndpoints.kubeletEndpoint.Port** (int32), required\n\n      Port number of the given endpoint.\n\n- **features** (NodeFeatures)\n\n  Features describes the set of features implemented by the CRI implementation.\n\n  <a name=\"NodeFeatures\"><\/a>\n  *NodeFeatures describes the set of features implemented by the CRI implementation. The features contained in the NodeFeatures should depend only on the cri implementation independent of runtime handlers.*\n\n  - **features.supplementalGroupsPolicy** (boolean)\n\n    SupplementalGroupsPolicy is set to true if the runtime supports SupplementalGroupsPolicy and ContainerUser.\n\n- **images** ([]ContainerImage)\n\n  *Atomic: will be replaced during a merge*\n  \n  List of container images on this node\n\n  <a name=\"ContainerImage\"><\/a>\n  *Describe a container image*\n\n  - **images.names** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    Names by which this image is known. e.g. [\"kubernetes.example\/hyperkube:v1.0.7\", \"cloud-vendor.registry.example\/cloud-vendor\/hyperkube:v1.0.7\"]\n\n  - **images.sizeBytes** (int64)\n\n    The size of the image in bytes.\n\n- **nodeInfo** (NodeSystemInfo)\n\n  Set of ids\/uuids to uniquely identify the node. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/nodes\/node\/#info\n\n  <a name=\"NodeSystemInfo\"><\/a>\n  *NodeSystemInfo is a set of ids\/uuids to uniquely identify the node.*\n\n  - **nodeInfo.architecture** (string), required\n\n    The Architecture reported by the node\n\n  - **nodeInfo.bootID** (string), required\n\n    Boot ID reported by the node.\n\n  - **nodeInfo.containerRuntimeVersion** (string), required\n\n    ContainerRuntime Version reported by the node through runtime remote API (e.g. containerd:\/\/1.4.2).\n\n  - **nodeInfo.kernelVersion** (string), required\n\n    Kernel Version reported by the node from 'uname -r' (e.g. 3.16.0-0.bpo.4-amd64).\n\n  - **nodeInfo.kubeProxyVersion** (string), required\n\n    Deprecated: KubeProxy Version reported by the node.\n\n  - **nodeInfo.kubeletVersion** (string), required\n\n    Kubelet Version reported by the node.\n\n  - **nodeInfo.machineID** (string), required\n\n    MachineID reported by the node. For unique machine identification in the cluster this field is preferred. Learn more from man(5) machine-id: http:\/\/man7.org\/linux\/man-pages\/man5\/machine-id.5.html\n\n  - **nodeInfo.operatingSystem** (string), required\n\n    The Operating System reported by the node\n\n  - **nodeInfo.osImage** (string), required\n\n    OS Image reported by the node from \/etc\/os-release (e.g. Debian GNU\/Linux 7 (wheezy)).\n\n  - **nodeInfo.systemUUID** (string), required\n\n    SystemUUID reported by the node. For unique machine identification MachineID is preferred. This field is specific to Red Hat hosts https:\/\/access.redhat.com\/documentation\/en-us\/red_hat_subscription_management\/1\/html\/rhsm\/uuid\n\n- **phase** (string)\n\n  NodePhase is the recently observed lifecycle phase of the node. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/nodes\/node\/#phase The field is never populated, and now is deprecated.\n\n- **runtimeHandlers** ([]NodeRuntimeHandler)\n\n  *Atomic: will be replaced during a merge*\n  \n  The available runtime handlers.\n\n  <a name=\"NodeRuntimeHandler\"><\/a>\n  *NodeRuntimeHandler is a set of runtime handler information.*\n\n  - **runtimeHandlers.features** (NodeRuntimeHandlerFeatures)\n\n    Supported features.\n\n    <a name=\"NodeRuntimeHandlerFeatures\"><\/a>\n    *NodeRuntimeHandlerFeatures is a set of features implemented by the runtime handler.*\n\n    - **runtimeHandlers.features.recursiveReadOnlyMounts** (boolean)\n\n      RecursiveReadOnlyMounts is set to true if the runtime handler supports RecursiveReadOnlyMounts.\n\n    - **runtimeHandlers.features.userNamespaces** (boolean)\n\n      UserNamespaces is set to true if the runtime handler supports UserNamespaces, including for volumes.\n\n  - **runtimeHandlers.name** (string)\n\n    Runtime handler name. Empty for the default runtime handler.\n\n- **volumesAttached** ([]AttachedVolume)\n\n  *Atomic: will be replaced during a merge*\n  \n  List of volumes that are attached to the node.\n\n  <a name=\"AttachedVolume\"><\/a>\n  *AttachedVolume describes a volume attached to a node*\n\n  - **volumesAttached.devicePath** (string), required\n\n    DevicePath represents the device path where the volume should be available\n\n  - **volumesAttached.name** (string), required\n\n    Name of the attached volume\n\n- **volumesInUse** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  List of attachable volumes in use (mounted) by the node.\n\n\n\n\n\n## NodeList {#NodeList}\n\nNodeList is the whole list of all Nodes which have been registered with master.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: NodeList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">Node<\/a>), required\n\n  List of nodes\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified Node\n\n#### HTTP Request\n\nGET \/api\/v1\/nodes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Node\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Node<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified Node\n\n#### HTTP Request\n\nGET \/api\/v1\/nodes\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Node\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Node<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Node\n\n#### HTTP Request\n\nGET \/api\/v1\/nodes\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">NodeList<\/a>): 
OK\n\n401: Unauthorized\n\n\n### `create` create a Node\n\n#### HTTP Request\n\nPOST \/api\/v1\/nodes\n\n#### Parameters\n\n\n- **body**: <a href=\"\">Node<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Node<\/a>): OK\n\n201 (<a href=\"\">Node<\/a>): Created\n\n202 (<a href=\"\">Node<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified Node\n\n#### HTTP Request\n\nPUT \/api\/v1\/nodes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Node\n\n\n- **body**: <a href=\"\">Node<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Node<\/a>): OK\n\n201 (<a href=\"\">Node<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified Node\n\n#### HTTP Request\n\nPUT \/api\/v1\/nodes\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Node\n\n\n- **body**: <a href=\"\">Node<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Node<\/a>): OK\n\n201 (<a href=\"\">Node<\/a>): Created\n\n401: 
Unauthorized\n\n\n### `patch` partially update the specified Node\n\n#### HTTP Request\n\nPATCH \/api\/v1\/nodes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Node\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Node<\/a>): OK\n\n201 (<a href=\"\">Node<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified Node\n\n#### HTTP Request\n\nPATCH \/api\/v1\/nodes\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Node\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Node<\/a>): OK\n\n201 (<a href=\"\">Node<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a Node\n\n#### HTTP Request\n\nDELETE \/api\/v1\/nodes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Node\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- 
**propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of Node\n\n#### HTTP Request\n\nDELETE \/api\/v1\/nodes\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n
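The operation tables above map directly onto URL paths and query parameters. As a minimal sketch (the API server address, node name, and selector value below are placeholder assumptions, not part of this reference), a client might assemble request URLs like this:

```python
from urllib.parse import urlencode

API_SERVER = "https://127.0.0.1:6443"  # placeholder address (assumption)

def node_url(name=None, subresource=None, **query):
    """Build a URL for the /api/v1/nodes endpoints listed above."""
    path = "/api/v1/nodes"
    if name is not None:
        path += f"/{name}"          # per-Node operations: get, update, patch, delete
    if subresource is not None:
        path += f"/{subresource}"   # e.g. the status subresource
    qs = urlencode(query)           # query parameters such as labelSelector, limit
    return API_SERVER + path + (f"?{qs}" if qs else "")

# `get` read the specified Node
print(node_url("worker-1"))
# `get` read status of the specified Node
print(node_url("worker-1", "status"))
# `list` list or watch objects of kind Node, with query parameters;
# note that "/" and "=" inside the selector are percent-encoded by urlencode
print(node_url(labelSelector="topology.kubernetes.io/zone=us-east-1a", limit=50))
```

The HTTP verb (GET, PUT, PATCH, DELETE) is chosen per the operation tables; the path and query-string construction is the same for all of them.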
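Once retrieved, a Node's `status` is plain JSON, and the field descriptions above translate directly into client-side checks. The fragment below is invented sample data (not taken from this reference), used to sketch two such checks against the `images`, `volumesAttached`, and `volumesInUse` fields:

```python
# Hand-written NodeStatus fragment with illustrative values only (assumption).
status = {
    "nodeInfo": {"kubeletVersion": "v1.31.0", "operatingSystem": "linux"},
    "images": [
        {"names": ["kubernetes.example/hyperkube:v1.0.7"], "sizeBytes": 120_000_000},
        {"names": ["registry.example/pause:3.9"], "sizeBytes": 700_000},
    ],
    "volumesAttached": [
        {"name": "kubernetes.io/csi/example-driver^vol-1", "devicePath": ""},
    ],
    "volumesInUse": ["kubernetes.io/csi/example-driver^vol-1"],
}

def total_image_bytes(status):
    # images.sizeBytes is optional (int64), so default missing entries to 0.
    return sum(img.get("sizeBytes", 0) for img in status.get("images", []))

def attached_but_unused(status):
    # volumesAttached lists attached volumes; volumesInUse lists the mounted
    # ones. The difference is attached-but-not-mounted volumes.
    in_use = set(status.get("volumesInUse", []))
    return [v["name"] for v in status.get("volumesAttached", []) if v["name"] not in in_use]

print(total_image_bytes(status))    # 120700000
print(attached_but_unused(status))  # []
```

Because `images`, `volumesAttached`, and `volumesInUse` are all atomic lists (replaced wholesale during a merge), a client re-reading the status always sees a complete, consistent snapshot of each list.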
AttachedVolume describes a volume attached to a node         volumesAttached devicePath    string   required      DevicePath represents the device path where the volume should be available        volumesAttached name    string   required      Name of the attached volume      volumesInUse      string      Atomic  will be replaced during a merge       List of attachable volumes in use  mounted  by the node          NodeList   NodeList   NodeList is the whole list of all Nodes which have been registered with master    hr       apiVersion    v1       kind    NodeList       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      items       a href    Node  a    required    List of nodes         Operations   Operations      hr             get  read the specified Node       HTTP Request  GET  api v1 nodes  name        Parameters       name     in path    string  required    name of the Node       pretty     in query    string     a href    pretty  a          Response   200   a href    Node  a    OK  401  Unauthorized        get  read status of the specified Node       HTTP Request  GET  api v1 nodes  name  status       Parameters       name     in path    string  required    name of the Node       pretty     in query    string     a href    pretty  a          Response   200   a href    Node  a    OK  401  Unauthorized        list  list or watch objects of kind Node       HTTP Request  GET  api v1 nodes       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        
resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    NodeList  a    OK  401  Unauthorized        create  create a Node       HTTP Request  POST  api v1 nodes       Parameters       body     a href    Node  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Node  a    OK  201   a href    Node  a    Created  202   a href    Node  a    Accepted  401  Unauthorized        update  replace the specified Node       HTTP Request  PUT  api v1 nodes  name        Parameters       name     in path    string  required    name of the Node       body     a href    Node  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Node  a    OK  201   a href    Node  a    Created  401  Unauthorized        update  replace status of the specified Node       HTTP Request  PUT  api v1 nodes  name  status       Parameters       name     in path    string  required    name of the Node       body     a href    Node  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query   
 string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Node  a    OK  201   a href    Node  a    Created  401  Unauthorized        patch  partially update the specified Node       HTTP Request  PATCH  api v1 nodes  name        Parameters       name     in path    string  required    name of the Node       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Node  a    OK  201   a href    Node  a    Created  401  Unauthorized        patch  partially update status of the specified Node       HTTP Request  PATCH  api v1 nodes  name  status       Parameters       name     in path    string  required    name of the Node       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Node  a    OK  201   a href    Node  a    Created  401  Unauthorized        delete  delete a Node       HTTP Request  DELETE  api v1 nodes  name        Parameters       name     in path    string  required    name of the Node       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in 
query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of Node       HTTP Request  DELETE  api v1 nodes       Parameters       body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
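The Node reference above lists `NodeSystemInfo` as a set of required identification fields returned under `status.nodeInfo` by `GET /api/v1/nodes/{name}`. A minimal sketch of validating those required fields on a Node object — the object below is a hand-written illustrative fragment, not real cluster output, and all values are assumptions:

```python
# Illustrative Node status fragment; field names follow the NodeSystemInfo
# section above (all marked required). Values are made up for the example.
node = {
    "status": {
        "nodeInfo": {
            "architecture": "amd64",
            "bootID": "4f1c0000-0000-0000-0000-000000000000",
            "containerRuntimeVersion": "containerd://1.4.2",
            "kernelVersion": "3.16.0-0.bpo.4-amd64",
            "kubeProxyVersion": "v1.30.0",  # deprecated per the reference
            "kubeletVersion": "v1.30.0",
            "machineID": "ec2a0000000000000000000000000000",
            "operatingSystem": "linux",
            "osImage": "Debian GNU/Linux 7 (wheezy)",
            "systemUUID": "EC2A0000-0000-0000-0000-000000000000",
        }
    }
}

# Required keys, copied from the NodeSystemInfo field list above.
REQUIRED = {
    "architecture", "bootID", "containerRuntimeVersion", "kernelVersion",
    "kubeProxyVersion", "kubeletVersion", "machineID", "operatingSystem",
    "osImage", "systemUUID",
}

info = node["status"]["nodeInfo"]
missing = REQUIRED - info.keys()
assert not missing, f"missing required NodeSystemInfo fields: {missing}"
print(info["kubeletVersion"], info["containerRuntimeVersion"])
```

The same check can be pointed at a real Node object fetched from the API; only the `node` dict here is fabricated.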
{"questions":"kubernetes reference weight 7 apiVersion v1 contenttype apireference title Namespace apimetadata kind Namespace autogenerated true Namespace provides a scope for Names import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"Namespace\"\ncontent_type: \"api_reference\"\ndescription: \"Namespace provides a scope for Names.\"\ntitle: \"Namespace\"\nweight: 7\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## Namespace {#Namespace}\n\nNamespace provides a scope for Names. Use of multiple namespaces is optional.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: Namespace\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">NamespaceSpec<\/a>)\n\n  Spec defines the behavior of the Namespace. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">NamespaceStatus<\/a>)\n\n  Status describes the current status of a Namespace. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## NamespaceSpec {#NamespaceSpec}\n\nNamespaceSpec describes the attributes on a Namespace.\n\n<hr>\n\n- **finalizers** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  Finalizers is an opaque list of values that must be empty to permanently remove object from storage. More info: https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/namespaces\/\n\n\n\n\n\n## NamespaceStatus {#NamespaceStatus}\n\nNamespaceStatus is information about the current status of a Namespace.\n\n<hr>\n\n- **conditions** ([]NamespaceCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  Represents the latest available observations of a namespace's current state.\n\n  <a name=\"NamespaceCondition\"><\/a>\n  *NamespaceCondition contains details about state of namespace.*\n\n  - **conditions.status** (string), required\n\n    Status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    Type of namespace controller condition.\n\n  - **conditions.lastTransitionTime** (Time)\n\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n\n  - **conditions.reason** (string)\n\n\n- **phase** (string)\n\n  Phase is the current lifecycle phase of the namespace. More info: https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/namespaces\/\n\n\n\n\n\n## NamespaceList {#NamespaceList}\n\nNamespaceList is a list of Namespaces.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: NamespaceList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">Namespace<\/a>), required\n\n  Items is the list of Namespace objects in the list. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/namespaces\/\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified Namespace\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Namespace\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Namespace<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified Namespace\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Namespace\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Namespace<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Namespace\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** 
(*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">NamespaceList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a Namespace\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\n\n#### Parameters\n\n\n- **body**: <a href=\"\">Namespace<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Namespace<\/a>): OK\n\n201 (<a href=\"\">Namespace<\/a>): Created\n\n202 (<a href=\"\">Namespace<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified Namespace\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Namespace\n\n\n- **body**: <a href=\"\">Namespace<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Namespace<\/a>): OK\n\n201 (<a href=\"\">Namespace<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace finalize of the specified Namespace\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{name}\/finalize\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Namespace\n\n\n- **body**: <a href=\"\">Namespace<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a 
href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Namespace<\/a>): OK\n\n201 (<a href=\"\">Namespace<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified Namespace\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Namespace\n\n\n- **body**: <a href=\"\">Namespace<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Namespace<\/a>): OK\n\n201 (<a href=\"\">Namespace<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified Namespace\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Namespace\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Namespace<\/a>): OK\n\n201 (<a href=\"\">Namespace<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified Namespace\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, 
required\n\n  name of the Namespace\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Namespace<\/a>): OK\n\n201 (<a href=\"\">Namespace<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a Namespace\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Namespace\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   v1    import   k8s io api core v1    kind   Namespace  content type   api reference  description   Namespace provides a scope for Names   title   Namespace  weight  7 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute 
upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  v1    import  k8s io api core v1        Namespace   Namespace   Namespace provides a scope for Names  Use of multiple namespaces is optional    hr       apiVersion    v1       kind    Namespace       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    NamespaceSpec  a      Spec defines the behavior of the Namespace  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status      status     a href    NamespaceStatus  a      Status describes the current status of a Namespace  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status         NamespaceSpec   NamespaceSpec   NamespaceSpec describes the attributes on a Namespace    hr       finalizers      string      Atomic  will be replaced during a merge       Finalizers is an opaque list of values that must be empty to permanently remove object from storage  More info  https   kubernetes io docs tasks administer cluster namespaces          NamespaceStatus   NamespaceStatus   NamespaceStatus is information about the current status of a Namespace    hr       conditions      NamespaceCondition      Patch strategy  merge on key  type         Map  unique values on key type will be kept during a merge       Represents the latest available observations of a namespace s current state      a name  NamespaceCondition    a     NamespaceCondition contains details about state of namespace          conditions status    string   required      Status of the condition  one of True  False  Unknown         conditions type    string   required      Type of namespace controller condition         conditions lastTransitionTime    
Time         a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions message    string          conditions reason    string        phase    string     Phase is the current lifecycle phase of the namespace  More info  https   kubernetes io docs tasks administer cluster namespaces          NamespaceList   NamespaceList   NamespaceList is a list of Namespaces    hr       apiVersion    v1       kind    NamespaceList       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      items       a href    Namespace  a    required    Items is the list of Namespace objects in the list  More info  https   kubernetes io docs concepts overview working with objects namespaces          Operations   Operations      hr             get  read the specified Namespace       HTTP Request  GET  api v1 namespaces  name        Parameters       name     in path    string  required    name of the Namespace       pretty     in query    string     a href    pretty  a          Response   200   a href    Namespace  a    OK  401  Unauthorized        get  read status of the specified Namespace       HTTP Request  GET  api v1 namespaces  name  status       Parameters       name     in path    string  required    name of the Namespace       pretty     in query    string     a href    pretty  a          Response   200   a href    Namespace  a    OK  401  Unauthorized        list  list or watch objects of kind Namespace       HTTP Request  GET  api v1 namespaces       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in 
query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    NamespaceList  a    OK  401  Unauthorized        create  create a Namespace       HTTP Request  POST  api v1 namespaces       Parameters       body     a href    Namespace  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Namespace  a    OK  201   a href    Namespace  a    Created  202   a href    Namespace  a    Accepted  401  Unauthorized        update  replace the specified Namespace       HTTP Request  PUT  api v1 namespaces  name        Parameters       name     in path    string  required    name of the Namespace       body     a href    Namespace  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Namespace  a    OK  201   a href    Namespace  a    Created  401  Unauthorized        update  replace finalize of the specified Namespace       HTTP Request  PUT  api v1 namespaces  name  finalize       Parameters       name     in path    
string  required    name of the Namespace       body     a href    Namespace  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Namespace  a    OK  201   a href    Namespace  a    Created  401  Unauthorized        update  replace status of the specified Namespace       HTTP Request  PUT  api v1 namespaces  name  status       Parameters       name     in path    string  required    name of the Namespace       body     a href    Namespace  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Namespace  a    OK  201   a href    Namespace  a    Created  401  Unauthorized        patch  partially update the specified Namespace       HTTP Request  PATCH  api v1 namespaces  name        Parameters       name     in path    string  required    name of the Namespace       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Namespace  a    OK  201   a href    Namespace  a    Created  401  Unauthorized        patch  partially update status of the specified Namespace       HTTP Request  PATCH  api v1 namespaces  name  status       Parameters       name     in path    string  required    
name of the Namespace       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Namespace  a    OK  201   a href    Namespace  a    Created  401  Unauthorized        delete  delete a Namespace       HTTP Request  DELETE  api v1 namespaces  name        Parameters       name     in path    string  required    name of the Namespace       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized "}
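The Namespace operations above are plain REST calls built from a fixed path (`/api/v1/namespaces`, optional `{name}` and subresource) plus the documented query parameters (`dryRun`, `fieldManager`, `propagationPolicy`, ...). A minimal sketch of assembling those request URLs — the API server address is a hypothetical placeholder, and `namespace_url` is a helper invented for this example:

```python
from urllib.parse import urlencode

API = "https://127.0.0.1:6443"  # hypothetical API server address


def namespace_url(name=None, subresource=None, **query):
    """Build a /api/v1/namespaces request URL from the path segments and
    query parameters documented in the Operations section above."""
    path = f"{API}/api/v1/namespaces"
    if name:
        path += f"/{name}"
    if subresource:
        path += f"/{subresource}"
    if query:
        path += "?" + urlencode(query)
    return path


# POST /api/v1/namespaces with a server-side dry run
create = namespace_url(dryRun="All", fieldManager="example")
# DELETE /api/v1/namespaces/{name} with foreground cascading deletion
delete = namespace_url("demo", propagationPolicy="Foreground")
print(create)
print(delete)
```

The helper only constructs URLs; issuing the requests additionally needs the cluster's credentials and the HTTP verb listed for each operation.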
{"questions":"kubernetes reference import k8s io api networking v1beta1 IPAddress represents a single IP of a single IP Family contenttype apireference title IPAddress v1beta1 apiVersion networking k8s io v1beta1 apimetadata kind IPAddress weight 4 autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"networking.k8s.io\/v1beta1\"\n  import: \"k8s.io\/api\/networking\/v1beta1\"\n  kind: \"IPAddress\"\ncontent_type: \"api_reference\"\ndescription: \"IPAddress represents a single IP of a single IP Family.\"\ntitle: \"IPAddress v1beta1\"\nweight: 4\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: networking.k8s.io\/v1beta1`\n\n`import \"k8s.io\/api\/networking\/v1beta1\"`\n\n\n## IPAddress {#IPAddress}\n\nIPAddress represents a single IP of a single IP Family. The object is designed to be used by APIs that operate on IP addresses. The object is used by the Service core API for allocation of IP addresses. An IP address can be represented in different formats, to guarantee the uniqueness of the IP, the name of the object is the IP address in canonical format, four decimal digits separated by dots suppressing leading zeros for IPv4 and the representation defined by RFC 5952 for IPv6. 
Valid: 192.168.1.5 or 2001:db8::1 or 2001:db8:aaaa:bbbb:cccc:dddd:eeee:1 Invalid: 10.01.2.3 or 2001:db8:0:0:0::1\n\n<hr>\n\n- **apiVersion**: networking.k8s.io\/v1beta1\n\n\n- **kind**: IPAddress\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">IPAddressSpec<\/a>)\n\n  spec is the desired state of the IPAddress. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## IPAddressSpec {#IPAddressSpec}\n\nIPAddressSpec describe the attributes in an IP Address.\n\n<hr>\n\n- **parentRef** (ParentReference), required\n\n  ParentRef references the resource that an IPAddress is attached to. An IPAddress must reference a parent object.\n\n  <a name=\"ParentReference\"><\/a>\n  *ParentReference describes a reference to a parent object.*\n\n  - **parentRef.name** (string), required\n\n    Name is the name of the object being referenced.\n\n  - **parentRef.resource** (string), required\n\n    Resource is the resource of the object being referenced.\n\n  - **parentRef.group** (string)\n\n    Group is the group of the object being referenced.\n\n  - **parentRef.namespace** (string)\n\n    Namespace is the namespace of the object being referenced.\n\n\n\n\n\n## IPAddressList {#IPAddressList}\n\nIPAddressList contains a list of IPAddress.\n\n<hr>\n\n- **apiVersion**: networking.k8s.io\/v1beta1\n\n\n- **kind**: IPAddressList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">IPAddress<\/a>), required\n\n  items is the list of IPAddresses.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified IPAddress\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1beta1\/ipaddresses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the IPAddress\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IPAddress<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind IPAddress\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1beta1\/ipaddresses\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IPAddressList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create an IPAddress\n\n#### HTTP Request\n\nPOST \/apis\/networking.k8s.io\/v1beta1\/ipaddresses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">IPAddress<\/a>, required\n\n  \n\n\n- **dryRun** (*in 
query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IPAddress<\/a>): OK\n\n201 (<a href=\"\">IPAddress<\/a>): Created\n\n202 (<a href=\"\">IPAddress<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified IPAddress\n\n#### HTTP Request\n\nPUT \/apis\/networking.k8s.io\/v1beta1\/ipaddresses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the IPAddress\n\n\n- **body**: <a href=\"\">IPAddress<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IPAddress<\/a>): OK\n\n201 (<a href=\"\">IPAddress<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified IPAddress\n\n#### HTTP Request\n\nPATCH \/apis\/networking.k8s.io\/v1beta1\/ipaddresses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the IPAddress\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IPAddress<\/a>): OK\n\n201 (<a href=\"\">IPAddress<\/a>): Created\n\n401: Unauthorized\n\n\n### 
`delete` delete an IPAddress\n\n#### HTTP Request\n\nDELETE \/apis\/networking.k8s.io\/v1beta1\/ipaddresses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the IPAddress\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of IPAddress\n\n#### HTTP Request\n\nDELETE \/apis\/networking.k8s.io\/v1beta1\/ipaddresses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: 
Unauthorized\n","site":"kubernetes reference"}
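The IPAddress reference above requires the object's name to be the IP address in canonical form: dotted decimal without leading zeros for IPv4, and the RFC 5952 shortest form for IPv6 (valid: `192.168.1.5`, `2001:db8::1`; invalid: `10.01.2.3`, `2001:db8:0:0:0::1`). Python's standard `ipaddress` module applies the same canonicalization, so a round-trip check can illustrate the rule — a sketch only; the API server performs its own validation:

```python
import ipaddress

def is_canonical(name: str) -> bool:
    """True if `name` is an IP address in canonical form,
    i.e. it round-trips unchanged through parsing."""
    try:
        return str(ipaddress.ip_address(name)) == name
    except ValueError:
        return False

print(is_canonical("192.168.1.5"))        # True  - canonical IPv4
print(is_canonical("2001:db8::1"))        # True  - RFC 5952 IPv6
print(is_canonical("10.01.2.3"))          # False - leading zero in an octet
print(is_canonical("2001:db8:0:0:0::1"))  # False - not the shortest IPv6 form
```

Round-tripping through the parser works because `ipaddress` always renders addresses in exactly the canonical format the IPAddress API mandates.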
{"questions":"kubernetes reference weight 6 apiVersion coordination k8s io v1alpha1 contenttype apireference import k8s io api coordination v1alpha1 kind LeaseCandidate apimetadata LeaseCandidate defines a candidate for a Lease object title LeaseCandidate v1alpha1 autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"coordination.k8s.io\/v1alpha1\"\n  import: \"k8s.io\/api\/coordination\/v1alpha1\"\n  kind: \"LeaseCandidate\"\ncontent_type: \"api_reference\"\ndescription: \"LeaseCandidate defines a candidate for a Lease object.\"\ntitle: \"LeaseCandidate v1alpha1\"\nweight: 6\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: coordination.k8s.io\/v1alpha1`\n\n`import \"k8s.io\/api\/coordination\/v1alpha1\"`\n\n\n## LeaseCandidate {#LeaseCandidate}\n\nLeaseCandidate defines a candidate for a Lease object. Candidates are created such that coordinated leader election will pick the best leader from the list of candidates.\n\n<hr>\n\n- **apiVersion**: coordination.k8s.io\/v1alpha1\n\n\n- **kind**: LeaseCandidate\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">LeaseCandidateSpec<\/a>)\n\n  spec contains the specification of the Lease. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## LeaseCandidateSpec {#LeaseCandidateSpec}\n\nLeaseCandidateSpec is a specification of a Lease.\n\n<hr>\n\n- **leaseName** (string), required\n\n  LeaseName is the name of the lease for which this candidate is contending. This field is immutable.\n\n- **preferredStrategies** ([]string), required\n\n  *Atomic: will be replaced during a merge*\n  \n  PreferredStrategies indicates the list of strategies for picking the leader for coordinated leader election. The list is ordered, and the first strategy supersedes all other strategies. The list is used by coordinated leader election to make a decision about the final election strategy. This follows as - If all clients have strategy X as the first element in this list, strategy X will be used. - If a candidate has strategy [X] and another candidate has strategy [Y, X], Y supersedes X and strategy Y\n    will be used.\n  - If a candidate has strategy [X, Y] and another candidate has strategy [Y, X], this is a user error and leader\n    election will not operate the Lease until resolved.\n  (Alpha) Using this field requires the CoordinatedLeaderElection feature gate to be enabled.\n\n- **binaryVersion** (string)\n\n  BinaryVersion is the binary version. It must be in a semver format without leading `v`. This field is required when strategy is \"OldestEmulationVersion\"\n\n- **emulationVersion** (string)\n\n  EmulationVersion is the emulation version. It must be in a semver format without leading `v`. EmulationVersion must be less than or equal to BinaryVersion. This field is required when strategy is \"OldestEmulationVersion\"\n\n- **pingTime** (MicroTime)\n\n  PingTime is the last time that the server has requested the LeaseCandidate to renew. It is only done during leader election to check if any LeaseCandidates have become ineligible. 
When PingTime is updated, the LeaseCandidate will respond by updating RenewTime.\n\n  <a name=\"MicroTime\"><\/a>\n  *MicroTime is version of Time with microsecond level precision.*\n\n- **renewTime** (MicroTime)\n\n  RenewTime is the time that the LeaseCandidate was last updated. Any time a Lease needs to do leader election, the PingTime field is updated to signal to the LeaseCandidate that they should update the RenewTime. Old LeaseCandidate objects are also garbage collected if it has been hours since the last renew. The PingTime field is updated regularly to prevent garbage collection for still active LeaseCandidates.\n\n  <a name=\"MicroTime\"><\/a>\n  *MicroTime is version of Time with microsecond level precision.*\n\n\n\n\n\n## LeaseCandidateList {#LeaseCandidateList}\n\nLeaseCandidateList is a list of Lease objects.\n\n<hr>\n\n- **apiVersion**: coordination.k8s.io\/v1alpha1\n\n\n- **kind**: LeaseCandidateList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">LeaseCandidate<\/a>), required\n\n  items is a list of schema objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified LeaseCandidate\n\n#### HTTP Request\n\nGET \/apis\/coordination.k8s.io\/v1alpha1\/namespaces\/{namespace}\/leasecandidates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the LeaseCandidate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LeaseCandidate<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind LeaseCandidate\n\n#### HTTP Request\n\nGET \/apis\/coordination.k8s.io\/v1alpha1\/namespaces\/{namespace}\/leasecandidates\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a 
href=\"\">LeaseCandidateList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind LeaseCandidate\n\n#### HTTP Request\n\nGET \/apis\/coordination.k8s.io\/v1alpha1\/leasecandidates\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LeaseCandidateList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a LeaseCandidate\n\n#### HTTP Request\n\nPOST \/apis\/coordination.k8s.io\/v1alpha1\/namespaces\/{namespace}\/leasecandidates\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">LeaseCandidate<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LeaseCandidate<\/a>): OK\n\n201 (<a href=\"\">LeaseCandidate<\/a>): Created\n\n202 (<a 
href=\"\">LeaseCandidate<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified LeaseCandidate\n\n#### HTTP Request\n\nPUT \/apis\/coordination.k8s.io\/v1alpha1\/namespaces\/{namespace}\/leasecandidates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the LeaseCandidate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">LeaseCandidate<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LeaseCandidate<\/a>): OK\n\n201 (<a href=\"\">LeaseCandidate<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified LeaseCandidate\n\n#### HTTP Request\n\nPATCH \/apis\/coordination.k8s.io\/v1alpha1\/namespaces\/{namespace}\/leasecandidates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the LeaseCandidate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LeaseCandidate<\/a>): OK\n\n201 (<a href=\"\">LeaseCandidate<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a LeaseCandidate\n\n#### HTTP Request\n\nDELETE 
\/apis\/coordination.k8s.io\/v1alpha1\/namespaces\/{namespace}\/leasecandidates\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the LeaseCandidate\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of LeaseCandidate\n\n#### HTTP Request\n\nDELETE \/apis\/coordination.k8s.io\/v1alpha1\/namespaces\/{namespace}\/leasecandidates\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a 
href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
updated to signal to the LeaseCandidate that they should update the RenewTime  Old LeaseCandidate objects are also garbage collected if it has been hours since the last renew  The PingTime field is updated regularly to prevent garbage collection for still active LeaseCandidates      a name  MicroTime    a     MicroTime is version of Time with microsecond level precision           LeaseCandidateList   LeaseCandidateList   LeaseCandidateList is a list of Lease objects    hr       apiVersion    coordination k8s io v1alpha1       kind    LeaseCandidateList       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      items       a href    LeaseCandidate  a    required    items is a list of schema objects          Operations   Operations      hr             get  read the specified LeaseCandidate       HTTP Request  GET  apis coordination k8s io v1alpha1 namespaces  namespace  leasecandidates  name        Parameters       name     in path    string  required    name of the LeaseCandidate       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    LeaseCandidate  a    OK  401  Unauthorized        list  list or watch objects of kind LeaseCandidate       HTTP Request  GET  apis coordination k8s io v1alpha1 namespaces  namespace  leasecandidates       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  
a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    LeaseCandidateList  a    OK  401  Unauthorized        list  list or watch objects of kind LeaseCandidate       HTTP Request  GET  apis coordination k8s io v1alpha1 leasecandidates       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    LeaseCandidateList  a    OK  401  Unauthorized        create  create a LeaseCandidate       HTTP Request  POST  apis coordination k8s io v1alpha1 namespaces  namespace  leasecandidates       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    LeaseCandidate  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href 
   fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    LeaseCandidate  a    OK  201   a href    LeaseCandidate  a    Created  202   a href    LeaseCandidate  a    Accepted  401  Unauthorized        update  replace the specified LeaseCandidate       HTTP Request  PUT  apis coordination k8s io v1alpha1 namespaces  namespace  leasecandidates  name        Parameters       name     in path    string  required    name of the LeaseCandidate       namespace     in path    string  required     a href    namespace  a        body     a href    LeaseCandidate  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    LeaseCandidate  a    OK  201   a href    LeaseCandidate  a    Created  401  Unauthorized        patch  partially update the specified LeaseCandidate       HTTP Request  PATCH  apis coordination k8s io v1alpha1 namespaces  namespace  leasecandidates  name        Parameters       name     in path    string  required    name of the LeaseCandidate       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    LeaseCandidate  a    OK  201   a href    LeaseCandidate  a    Created  401  Unauthorized        delete  delete a LeaseCandidate       HTTP Request  DELETE  apis coordination k8s io v1alpha1 namespaces  namespace  
leasecandidates  name        Parameters       name     in path    string  required    name of the LeaseCandidate       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of LeaseCandidate       HTTP Request  DELETE  apis coordination k8s io v1alpha1 namespaces  namespace  leasecandidates       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference title ServiceCIDR v1beta1 kind ServiceCIDR import k8s io api networking v1beta1 contenttype apireference apiVersion networking k8s io v1beta1 weight 10 ServiceCIDR defines a range of IP addresses using CIDR format e apimetadata autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"networking.k8s.io\/v1beta1\"\n  import: \"k8s.io\/api\/networking\/v1beta1\"\n  kind: \"ServiceCIDR\"\ncontent_type: \"api_reference\"\ndescription: \"ServiceCIDR defines a range of IP addresses using CIDR format (e.\"\ntitle: \"ServiceCIDR v1beta1\"\nweight: 10\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: networking.k8s.io\/v1beta1`\n\n`import \"k8s.io\/api\/networking\/v1beta1\"`\n\n\n## ServiceCIDR {#ServiceCIDR}\n\nServiceCIDR defines a range of IP addresses using CIDR format (e.g. 192.168.0.0\/24 or 2001:db2::\/64). This range is used to allocate ClusterIPs to Service objects.\n\n<hr>\n\n- **apiVersion**: networking.k8s.io\/v1beta1\n\n\n- **kind**: ServiceCIDR\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">ServiceCIDRSpec<\/a>)\n\n  spec is the desired state of the ServiceCIDR. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">ServiceCIDRStatus<\/a>)\n\n  status represents the current state of the ServiceCIDR. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## ServiceCIDRSpec {#ServiceCIDRSpec}\n\nServiceCIDRSpec define the CIDRs the user wants to use for allocating ClusterIPs for Services.\n\n<hr>\n\n- **cidrs** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  CIDRs defines the IP blocks in CIDR notation (e.g. \"192.168.0.0\/24\" or \"2001:db8::\/64\") from which to assign service cluster IPs. Max of two CIDRs is allowed, one of each IP family. This field is immutable.\n\n\n\n\n\n## ServiceCIDRStatus {#ServiceCIDRStatus}\n\nServiceCIDRStatus describes the current state of the ServiceCIDR.\n\n<hr>\n\n- **conditions** ([]Condition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  conditions holds an array of metav1.Condition that describe the state of the ServiceCIDR. Current service state\n\n  <a name=\"Condition\"><\/a>\n  *Condition contains details for one aspect of the current state of this API Resource.*\n\n  - **conditions.lastTransitionTime** (Time), required\n\n    lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed.  If that is not known, then using the time when the API field changed is acceptable.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string), required\n\n    message is a human readable message indicating details about the transition. 
This may be an empty string.\n\n  - **conditions.reason** (string), required\n\n    reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty.\n\n  - **conditions.status** (string), required\n\n    status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    type of condition in CamelCase or in foo.example.com\/CamelCase.\n\n  - **conditions.observedGeneration** (int64)\n\n    observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.\n\n\n\n\n\n## ServiceCIDRList {#ServiceCIDRList}\n\nServiceCIDRList contains a list of ServiceCIDR objects.\n\n<hr>\n\n- **apiVersion**: networking.k8s.io\/v1beta1\n\n\n- **kind**: ServiceCIDRList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">ServiceCIDR<\/a>), required\n\n  items is the list of ServiceCIDRs.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ServiceCIDR\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1beta1\/servicecidrs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ServiceCIDR\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceCIDR<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified ServiceCIDR\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1beta1\/servicecidrs\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ServiceCIDR\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceCIDR<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ServiceCIDR\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1beta1\/servicecidrs\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): 
integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceCIDRList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ServiceCIDR\n\n#### HTTP Request\n\nPOST \/apis\/networking.k8s.io\/v1beta1\/servicecidrs\n\n#### Parameters\n\n\n- **body**: <a href=\"\">ServiceCIDR<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceCIDR<\/a>): OK\n\n201 (<a href=\"\">ServiceCIDR<\/a>): Created\n\n202 (<a href=\"\">ServiceCIDR<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ServiceCIDR\n\n#### HTTP Request\n\nPUT \/apis\/networking.k8s.io\/v1beta1\/servicecidrs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ServiceCIDR\n\n\n- **body**: <a href=\"\">ServiceCIDR<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceCIDR<\/a>): OK\n\n201 (<a href=\"\">ServiceCIDR<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified ServiceCIDR\n\n#### HTTP Request\n\nPUT \/apis\/networking.k8s.io\/v1beta1\/servicecidrs\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ServiceCIDR\n\n\n- **body**: <a href=\"\">ServiceCIDR<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a 
href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceCIDR<\/a>): OK\n\n201 (<a href=\"\">ServiceCIDR<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ServiceCIDR\n\n#### HTTP Request\n\nPATCH \/apis\/networking.k8s.io\/v1beta1\/servicecidrs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ServiceCIDR\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceCIDR<\/a>): OK\n\n201 (<a href=\"\">ServiceCIDR<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified ServiceCIDR\n\n#### HTTP Request\n\nPATCH \/apis\/networking.k8s.io\/v1beta1\/servicecidrs\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ServiceCIDR\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ServiceCIDR<\/a>): OK\n\n201 (<a href=\"\">ServiceCIDR<\/a>): 
Created\n\n401: Unauthorized\n\n\n### `delete` delete a ServiceCIDR\n\n#### HTTP Request\n\nDELETE \/apis\/networking.k8s.io\/v1beta1\/servicecidrs\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ServiceCIDR\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ServiceCIDR\n\n#### HTTP Request\n\nDELETE \/apis\/networking.k8s.io\/v1beta1\/servicecidrs\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a 
href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   networking k8s io v1beta1    import   k8s io api networking v1beta1    kind   ServiceCIDR  content type   api reference  description   ServiceCIDR defines a range of IP addresses using CIDR format  e   title   ServiceCIDR v1beta1  weight  10 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  networking k8s io v1beta1    import  k8s io api networking v1beta1        ServiceCIDR   ServiceCIDR   ServiceCIDR defines a range of IP addresses using CIDR format  e g  192 168 0 0 24 or 2001 db2   64   This range is used to allocate ClusterIPs to Service objects    hr       apiVersion    networking k8s io v1beta1       kind    ServiceCIDR       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    ServiceCIDRSpec  a      spec is the desired state of the ServiceCIDR  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status      status     a href    ServiceCIDRStatus  a      status represents the current state of the ServiceCIDR  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status         ServiceCIDRSpec   ServiceCIDRSpec   ServiceCIDRSpec 
define the CIDRs the user wants to use for allocating ClusterIPs for Services    hr       cidrs      string      Atomic  will be replaced during a merge       CIDRs defines the IP blocks in CIDR notation  e g   192 168 0 0 24  or  2001 db8   64   from which to assign service cluster IPs  Max of two CIDRs is allowed  one of each IP family  This field is immutable          ServiceCIDRStatus   ServiceCIDRStatus   ServiceCIDRStatus describes the current state of the ServiceCIDR    hr       conditions      Condition      Patch strategy  merge on key  type         Map  unique values on key type will be kept during a merge       conditions holds an array of metav1 Condition that describe the state of the ServiceCIDR  Current service state     a name  Condition    a     Condition contains details for one aspect of the current state of this API Resource          conditions lastTransitionTime    Time   required      lastTransitionTime is the last time the condition transitioned from one status to another  This should be when the underlying condition changed   If that is not known  then using the time when the API field changed is acceptable        a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions message    string   required      message is a human readable message indicating details about the transition  This may be an empty string         conditions reason    string   required      reason contains a programmatic identifier indicating the reason for the condition s last transition  Producers of specific condition types may define expected values and meanings for this field  and whether the values are considered a guaranteed API  The value should be a CamelCase string  This field may not be empty         conditions status    string   required      status of the condition  one of True  False  Unknown         
conditions type    string   required      type of condition in CamelCase or in foo example com CamelCase         conditions observedGeneration    int64       observedGeneration represents the  metadata generation that the condition was set based upon  For instance  if  metadata generation is currently 12  but the  status conditions x  observedGeneration is 9  the condition is out of date with respect to the current state of the instance          ServiceCIDRList   ServiceCIDRList   ServiceCIDRList contains a list of ServiceCIDR objects    hr       apiVersion    networking k8s io v1beta1       kind    ServiceCIDRList       metadata     a href    ListMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      items       a href    ServiceCIDR  a    required    items is the list of ServiceCIDRs          Operations   Operations      hr             get  read the specified ServiceCIDR       HTTP Request  GET  apis networking k8s io v1beta1 servicecidrs  name        Parameters       name     in path    string  required    name of the ServiceCIDR       pretty     in query    string     a href    pretty  a          Response   200   a href    ServiceCIDR  a    OK  401  Unauthorized        get  read status of the specified ServiceCIDR       HTTP Request  GET  apis networking k8s io v1beta1 servicecidrs  name  status       Parameters       name     in path    string  required    name of the ServiceCIDR       pretty     in query    string     a href    pretty  a          Response   200   a href    ServiceCIDR  a    OK  401  Unauthorized        list  list or watch objects of kind ServiceCIDR       HTTP Request  GET  apis networking k8s io v1beta1 servicecidrs       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    
"}
{"questions":"kubernetes reference APIService represents a server for a particular GroupVersion title APIService kind APIService contenttype apireference import k8s io kube aggregator pkg apis apiregistration v1 apimetadata autogenerated true apiVersion apiregistration k8s io v1 weight 1","answers":"---\napi_metadata:\n  apiVersion: \"apiregistration.k8s.io\/v1\"\n  import: \"k8s.io\/kube-aggregator\/pkg\/apis\/apiregistration\/v1\"\n  kind: \"APIService\"\ncontent_type: \"api_reference\"\ndescription: \"APIService represents a server for a particular GroupVersion.\"\ntitle: \"APIService\"\nweight: 1\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: apiregistration.k8s.io\/v1`\n\n`import \"k8s.io\/kube-aggregator\/pkg\/apis\/apiregistration\/v1\"`\n\n\n## APIService {#APIService}\n\nAPIService represents a server for a particular GroupVersion. Name must be \"version.group\".\n\n<hr>\n\n- **apiVersion**: apiregistration.k8s.io\/v1\n\n\n- **kind**: APIService\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">APIServiceSpec<\/a>)\n\n  Spec contains information for locating and communicating with a server\n\n- **status** (<a href=\"\">APIServiceStatus<\/a>)\n\n  Status contains derived information about an API server\n\n\n\n\n\n## APIServiceSpec {#APIServiceSpec}\n\nAPIServiceSpec contains information for locating and communicating with a server. Only https is supported, though you are able to disable certificate verification.\n\n<hr>\n\n- **groupPriorityMinimum** (int32), required\n\n  GroupPriorityMinimum is the priority this group should have at least. Higher priority means that the group is preferred by clients over lower priority ones. Note that other versions of this group might specify even higher GroupPriorityMinimum values such that the whole group gets a higher priority. The primary sort is based on GroupPriorityMinimum, ordered highest number to lowest (20 before 10). The secondary sort is based on the alphabetical comparison of the name of the object.  (v1.bar before v1.foo) We'd recommend something like: *.k8s.io (except extensions) at 18000 and PaaSes (OpenShift, Deis) are recommended to be in the 2000s\n\n- **versionPriority** (int32), required\n\n  VersionPriority controls the ordering of this API version inside of its group.  Must be greater than zero. The primary sort is based on VersionPriority, ordered highest to lowest (20 before 10). Since it's inside of a group, the number can be small, probably in the 10s. In case of equal version priorities, the version string will be used to compute the order inside a group. If the version string is \"kube-like\", it will sort above non \"kube-like\" version strings, which are ordered lexicographically. 
\"Kube-like\" versions start with a \"v\", then are followed by a number (the major version), then optionally the string \"alpha\" or \"beta\" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10.\n\n- **caBundle** ([]byte)\n\n  *Atomic: will be replaced during a merge*\n  \n  CABundle is a PEM encoded CA bundle which will be used to validate an API server's serving certificate. If unspecified, system trust roots on the apiserver are used.\n\n- **group** (string)\n\n  Group is the API group name this server hosts\n\n- **insecureSkipTLSVerify** (boolean)\n\n  InsecureSkipTLSVerify disables TLS certificate verification when communicating with this server. This is strongly discouraged.  You should use the CABundle instead.\n\n- **service** (ServiceReference)\n\n  Service is a reference to the service for this API server.  It must communicate on port 443. If the Service is nil, that means the handling for the API groupversion is handled locally on this server. The call will simply delegate to the normal handler chain to be fulfilled.\n\n  <a name=\"ServiceReference\"><\/a>\n  *ServiceReference holds a reference to Service.legacy.k8s.io*\n\n  - **service.name** (string)\n\n    Name is the name of the service\n\n  - **service.namespace** (string)\n\n    Namespace is the namespace of the service\n\n  - **service.port** (int32)\n\n    If specified, the port on the service that hosting webhook. Default to 443 for backward compatibility. `port` should be a valid port number (1-65535, inclusive).\n\n- **version** (string)\n\n  Version is the API version this server hosts.  
For example, \"v1\"\n\n\n\n\n\n## APIServiceStatus {#APIServiceStatus}\n\nAPIServiceStatus contains derived information about an API server\n\n<hr>\n\n- **conditions** ([]APIServiceCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  Current service state of apiService.\n\n  <a name=\"APIServiceCondition\"><\/a>\n  *APIServiceCondition describes the state of an APIService at a particular point*\n\n  - **conditions.status** (string), required\n\n    Status is the status of the condition. Can be True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    Type is the type of the condition.\n\n  - **conditions.lastTransitionTime** (Time)\n\n    Last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    Human-readable message indicating details about last transition.\n\n  - **conditions.reason** (string)\n\n    Unique, one-word, CamelCase reason for the condition's last transition.\n\n\n\n\n\n## APIServiceList {#APIServiceList}\n\nAPIServiceList is a list of APIService objects.\n\n<hr>\n\n- **apiVersion**: apiregistration.k8s.io\/v1\n\n\n- **kind**: APIServiceList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">APIService<\/a>), required\n\n  Items is the list of APIService\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified APIService\n\n#### HTTP Request\n\nGET \/apis\/apiregistration.k8s.io\/v1\/apiservices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the APIService\n\n\n- 
**pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">APIService<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified APIService\n\n#### HTTP Request\n\nGET \/apis\/apiregistration.k8s.io\/v1\/apiservices\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the APIService\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">APIService<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind APIService\n\n#### HTTP Request\n\nGET \/apis\/apiregistration.k8s.io\/v1\/apiservices\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">APIServiceList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create an APIService\n\n#### HTTP Request\n\nPOST \/apis\/apiregistration.k8s.io\/v1\/apiservices\n\n#### Parameters\n\n\n- **body**: <a href=\"\">APIService<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): 
string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">APIService<\/a>): OK\n\n201 (<a href=\"\">APIService<\/a>): Created\n\n202 (<a href=\"\">APIService<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified APIService\n\n#### HTTP Request\n\nPUT \/apis\/apiregistration.k8s.io\/v1\/apiservices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the APIService\n\n\n- **body**: <a href=\"\">APIService<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">APIService<\/a>): OK\n\n201 (<a href=\"\">APIService<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified APIService\n\n#### HTTP Request\n\nPUT \/apis\/apiregistration.k8s.io\/v1\/apiservices\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the APIService\n\n\n- **body**: <a href=\"\">APIService<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">APIService<\/a>): OK\n\n201 (<a href=\"\">APIService<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified APIService\n\n#### HTTP Request\n\nPATCH 
\/apis\/apiregistration.k8s.io\/v1\/apiservices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the APIService\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">APIService<\/a>): OK\n\n201 (<a href=\"\">APIService<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified APIService\n\n#### HTTP Request\n\nPATCH \/apis\/apiregistration.k8s.io\/v1\/apiservices\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the APIService\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">APIService<\/a>): OK\n\n201 (<a href=\"\">APIService<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete an APIService\n\n#### HTTP Request\n\nDELETE \/apis\/apiregistration.k8s.io\/v1\/apiservices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the APIService\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in 
query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of APIService\n\n#### HTTP Request\n\nDELETE \/apis\/apiregistration.k8s.io\/v1\/apiservices\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
{"questions":"kubernetes reference autogenerated true weight 5 import k8s io api coordination v1 contenttype apireference Lease defines a lease concept apiVersion coordination k8s io v1 apimetadata kind Lease title Lease","answers":"---\napi_metadata:\n  apiVersion: \"coordination.k8s.io\/v1\"\n  import: \"k8s.io\/api\/coordination\/v1\"\n  kind: \"Lease\"\ncontent_type: \"api_reference\"\ndescription: \"Lease defines a lease concept.\"\ntitle: \"Lease\"\nweight: 5\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: coordination.k8s.io\/v1`\n\n`import \"k8s.io\/api\/coordination\/v1\"`\n\n\n## Lease {#Lease}\n\nLease defines a lease concept.\n\n<hr>\n\n- **apiVersion**: coordination.k8s.io\/v1\n\n\n- **kind**: Lease\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">LeaseSpec<\/a>)\n\n  spec contains the specification of the Lease. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## LeaseSpec {#LeaseSpec}\n\nLeaseSpec is a specification of a Lease.\n\n<hr>\n\n- **acquireTime** (MicroTime)\n\n  acquireTime is a time when the current lease was acquired.\n\n  <a name=\"MicroTime\"><\/a>\n  *MicroTime is version of Time with microsecond level precision.*\n\n- **holderIdentity** (string)\n\n  holderIdentity contains the identity of the holder of a current lease. If Coordinated Leader Election is used, the holder identity must be equal to the elected LeaseCandidate.metadata.name field.\n\n- **leaseDurationSeconds** (int32)\n\n  leaseDurationSeconds is a duration that candidates for a lease need to wait to force acquire it. This is measured against the time of last observed renewTime.\n\n- **leaseTransitions** (int32)\n\n  leaseTransitions is the number of transitions of a lease between holders.\n\n- **preferredHolder** (string)\n\n  PreferredHolder signals to a lease holder that the lease has a more optimal holder and should be given up. This field can only be set if Strategy is also set.\n\n- **renewTime** (MicroTime)\n\n  renewTime is a time when the current holder of a lease has last updated the lease.\n\n  <a name=\"MicroTime\"><\/a>\n  *MicroTime is version of Time with microsecond level precision.*\n\n- **strategy** (string)\n\n  Strategy indicates the strategy for picking the leader for coordinated leader election. If the field is not specified, there is no active coordination for this lease. (Alpha) Using this field requires the CoordinatedLeaderElection feature gate to be enabled.\n\n\n\n\n\n## LeaseList {#LeaseList}\n\nLeaseList is a list of Lease objects.\n\n<hr>\n\n- **apiVersion**: coordination.k8s.io\/v1\n\n\n- **kind**: LeaseList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">Lease<\/a>), required\n\n  items is a list of schema objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified Lease\n\n#### HTTP Request\n\nGET \/apis\/coordination.k8s.io\/v1\/namespaces\/{namespace}\/leases\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Lease\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Lease<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Lease\n\n#### HTTP Request\n\nGET \/apis\/coordination.k8s.io\/v1\/namespaces\/{namespace}\/leases\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LeaseList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects 
of kind Lease\n\n#### HTTP Request\n\nGET \/apis\/coordination.k8s.io\/v1\/leases\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LeaseList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a Lease\n\n#### HTTP Request\n\nPOST \/apis\/coordination.k8s.io\/v1\/namespaces\/{namespace}\/leases\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Lease<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Lease<\/a>): OK\n\n201 (<a href=\"\">Lease<\/a>): Created\n\n202 (<a href=\"\">Lease<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified Lease\n\n#### HTTP Request\n\nPUT 
\/apis\/coordination.k8s.io\/v1\/namespaces\/{namespace}\/leases\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Lease\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Lease<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Lease<\/a>): OK\n\n201 (<a href=\"\">Lease<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified Lease\n\n#### HTTP Request\n\nPATCH \/apis\/coordination.k8s.io\/v1\/namespaces\/{namespace}\/leases\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Lease\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Lease<\/a>): OK\n\n201 (<a href=\"\">Lease<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a Lease\n\n#### HTTP Request\n\nDELETE \/apis\/coordination.k8s.io\/v1\/namespaces\/{namespace}\/leases\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Lease\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** 
(*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of Lease\n\n#### HTTP Request\n\nDELETE \/apis\/coordination.k8s.io\/v1\/namespaces\/{namespace}\/leases\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
{"questions":"kubernetes reference Event is a report of an event somewhere in the cluster contenttype apireference import k8s io api events v1 title Event kind Event apiVersion events k8s io v1 apimetadata weight 3 autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"events.k8s.io\/v1\"\n  import: \"k8s.io\/api\/events\/v1\"\n  kind: \"Event\"\ncontent_type: \"api_reference\"\ndescription: \"Event is a report of an event somewhere in the cluster.\"\ntitle: \"Event\"\nweight: 3\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: events.k8s.io\/v1`\n\n`import \"k8s.io\/api\/events\/v1\"`\n\n\n## Event {#Event}\n\nEvent is a report of an event somewhere in the cluster. It generally denotes some state change in the system. Events have a limited retention time and triggers and messages may evolve with time.  Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason.  Events should be treated as informative, best-effort, supplemental data.\n\n<hr>\n\n- **apiVersion**: events.k8s.io\/v1\n\n\n- **kind**: Event\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **eventTime** (MicroTime), required\n\n  eventTime is the time when this Event was first observed. It is required.\n\n  <a name=\"MicroTime\"><\/a>\n  *MicroTime is version of Time with microsecond level precision.*\n\n- **action** (string)\n\n  action is what action was taken\/failed regarding to the regarding object. It is machine-readable. This field cannot be empty for new Events and it can have at most 128 characters.\n\n- **deprecatedCount** (int32)\n\n  deprecatedCount is the deprecated field assuring backward compatibility with core.v1 Event type.\n\n- **deprecatedFirstTimestamp** (Time)\n\n  deprecatedFirstTimestamp is the deprecated field assuring backward compatibility with core.v1 Event type.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **deprecatedLastTimestamp** (Time)\n\n  deprecatedLastTimestamp is the deprecated field assuring backward compatibility with core.v1 Event type.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **deprecatedSource** (EventSource)\n\n  deprecatedSource is the deprecated field assuring backward compatibility with core.v1 Event type.\n\n  <a name=\"EventSource\"><\/a>\n  *EventSource contains information for an event.*\n\n  - **deprecatedSource.component** (string)\n\n    Component from which the event is generated.\n\n  - **deprecatedSource.host** (string)\n\n    Node name on which the event is generated.\n\n- **note** (string)\n\n  note is a human-readable description of the status of this operation. 
Maximal length of the note is 1kB, but libraries should be prepared to handle values up to 64kB.\n\n- **reason** (string)\n\n  reason is why the action was taken. It is human-readable. This field cannot be empty for new Events and it can have at most 128 characters.\n\n- **regarding** (<a href=\"\">ObjectReference<\/a>)\n\n  regarding contains the object this Event is about. In most cases it's an Object reporting controller implements, e.g. ReplicaSetController implements ReplicaSets and this event is emitted because it acts on some changes in a ReplicaSet object.\n\n- **related** (<a href=\"\">ObjectReference<\/a>)\n\n  related is the optional secondary object for more complex actions. E.g. when regarding object triggers a creation or deletion of related object.\n\n- **reportingController** (string)\n\n  reportingController is the name of the controller that emitted this Event, e.g. `kubernetes.io\/kubelet`. This field cannot be empty for new Events.\n\n- **reportingInstance** (string)\n\n  reportingInstance is the ID of the controller instance, e.g. `kubelet-xyzf`. This field cannot be empty for new Events and it can have at most 128 characters.\n\n- **series** (EventSeries)\n\n  series is data about the Event series this event represents or nil if it's a singleton Event.\n\n  <a name=\"EventSeries\"><\/a>\n  *EventSeries contain information on series of events, i.e. thing that was\/is happening continuously for some time. How often to update the EventSeries is up to the event reporters. 
The default event reporter in \"k8s.io\/client-go\/tools\/events\/event_broadcaster.go\" shows how this struct is updated on heartbeats and can guide customized reporter implementations.*\n\n  - **series.count** (int32), required\n\n    count is the number of occurrences in this series up to the last heartbeat time.\n\n  - **series.lastObservedTime** (MicroTime), required\n\n    lastObservedTime is the time when last Event from the series was seen before last heartbeat.\n\n    <a name=\"MicroTime\"><\/a>\n    *MicroTime is version of Time with microsecond level precision.*\n\n- **type** (string)\n\n  type is the type of this event (Normal, Warning), new types could be added in the future. It is machine-readable. This field cannot be empty for new Events.\n\n\n\n\n\n## EventList {#EventList}\n\nEventList is a list of Event objects.\n\n<hr>\n\n- **apiVersion**: events.k8s.io\/v1\n\n\n- **kind**: EventList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">Event<\/a>), required\n\n  items is a list of schema objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified Event\n\n#### HTTP Request\n\nGET \/apis\/events.k8s.io\/v1\/namespaces\/{namespace}\/events\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Event\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Event<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Event\n\n#### HTTP Request\n\nGET \/apis\/events.k8s.io\/v1\/namespaces\/{namespace}\/events\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in 
query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">EventList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Event\n\n#### HTTP Request\n\nGET \/apis\/events.k8s.io\/v1\/events\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### 
Response\n\n\n200 (<a href=\"\">EventList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create an Event\n\n#### HTTP Request\n\nPOST \/apis\/events.k8s.io\/v1\/namespaces\/{namespace}\/events\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Event<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Event<\/a>): OK\n\n201 (<a href=\"\">Event<\/a>): Created\n\n202 (<a href=\"\">Event<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified Event\n\n#### HTTP Request\n\nPUT \/apis\/events.k8s.io\/v1\/namespaces\/{namespace}\/events\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Event\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Event<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Event<\/a>): OK\n\n201 (<a href=\"\">Event<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified Event\n\n#### HTTP Request\n\nPATCH \/apis\/events.k8s.io\/v1\/namespaces\/{namespace}\/events\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Event\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, 
required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Event<\/a>): OK\n\n201 (<a href=\"\">Event<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete an Event\n\n#### HTTP Request\n\nDELETE \/apis\/events.k8s.io\/v1\/namespaces\/{namespace}\/events\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Event\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of Event\n\n#### HTTP Request\n\nDELETE \/apis\/events.k8s.io\/v1\/namespaces\/{namespace}\/events\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a 
href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
the series was seen before last heartbeat        a name  MicroTime    a       MicroTime is version of Time with microsecond level precision        type    string     type is the type of this event  Normal  Warning   new types could be added in the future  It is machine readable  This field cannot be empty for new Events          EventList   EventList   EventList is a list of Event objects    hr       apiVersion    events k8s io v1       kind    EventList       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      items       a href    Event  a    required    items is a list of schema objects          Operations   Operations      hr             get  read the specified Event       HTTP Request  GET  apis events k8s io v1 namespaces  namespace  events  name        Parameters       name     in path    string  required    name of the Event       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Event  a    OK  401  Unauthorized        list  list or watch objects of kind Event       HTTP Request  GET  apis events k8s io v1 namespaces  namespace  events       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        
sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    EventList  a    OK  401  Unauthorized        list  list or watch objects of kind Event       HTTP Request  GET  apis events k8s io v1 events       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    EventList  a    OK  401  Unauthorized        create  create an Event       HTTP Request  POST  apis events k8s io v1 namespaces  namespace  events       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    Event  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Event  a    OK  201   a href    Event  a    Created  202   a href    Event  a    Accepted  401  Unauthorized        update  replace the 
specified Event       HTTP Request  PUT  apis events k8s io v1 namespaces  namespace  events  name        Parameters       name     in path    string  required    name of the Event       namespace     in path    string  required     a href    namespace  a        body     a href    Event  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Event  a    OK  201   a href    Event  a    Created  401  Unauthorized        patch  partially update the specified Event       HTTP Request  PATCH  apis events k8s io v1 namespaces  namespace  events  name        Parameters       name     in path    string  required    name of the Event       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Event  a    OK  201   a href    Event  a    Created  401  Unauthorized        delete  delete an Event       HTTP Request  DELETE  apis events k8s io v1 namespaces  namespace  events  name        Parameters       name     in path    string  required    name of the Event       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a   
     propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of Event       HTTP Request  DELETE  apis events k8s io v1 namespaces  namespace  events       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference apiVersion v1 weight 2 kind ComponentStatus title ComponentStatus contenttype apireference apimetadata autogenerated true ComponentStatus and ComponentStatusList holds the cluster validation info import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"ComponentStatus\"\ncontent_type: \"api_reference\"\ndescription: \"ComponentStatus (and ComponentStatusList) holds the cluster validation info.\"\ntitle: \"ComponentStatus\"\nweight: 2\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## ComponentStatus {#ComponentStatus}\n\nComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: ComponentStatus\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **conditions** ([]ComponentCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  List of component conditions observed\n\n  <a name=\"ComponentCondition\"><\/a>\n  *Information about the condition of a component.*\n\n  - **conditions.status** (string), required\n\n    Status of the condition for a component. Valid values for \"Healthy\": \"True\", \"False\", or \"Unknown\".\n\n  - **conditions.type** (string), required\n\n    Type of condition for a component. Valid value: \"Healthy\"\n\n  - **conditions.error** (string)\n\n    Condition error code for a component. For example, a health check error code.\n\n  - **conditions.message** (string)\n\n    Message about the condition for a component. For example, information about a health check.\n\n\n\n\n\n## ComponentStatusList {#ComponentStatusList}\n\nStatus of all the conditions for the component as a list of ComponentStatus objects. Deprecated: This API is deprecated in v1.19+\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: ComponentStatusList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">ComponentStatus<\/a>), required\n\n  List of ComponentStatus objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ComponentStatus\n\n#### HTTP Request\n\nGET \/api\/v1\/componentstatuses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ComponentStatus\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ComponentStatus<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list objects of kind ComponentStatus\n\n#### HTTP Request\n\nGET \/api\/v1\/componentstatuses\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ComponentStatusList<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   v1    import   k8s io api core v1    kind   ComponentStatus  content type   api reference  description   ComponentStatus  and 
ComponentStatusList  holds the cluster validation info   title   ComponentStatus  weight  2 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  v1    import  k8s io api core v1        ComponentStatus   ComponentStatus   ComponentStatus  and ComponentStatusList  holds the cluster validation info  Deprecated  This API is deprecated in v1 19    hr       apiVersion    v1       kind    ComponentStatus       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      conditions      ComponentCondition      Patch strategy  merge on key  type         Map  unique values on key type will be kept during a merge       List of component conditions observed     a name  ComponentCondition    a     Information about the condition of a component          conditions status    string   required      Status of the condition for a component  Valid values for  Healthy    True    False   or  Unknown          conditions type    string   required      Type of condition for a component  Valid value   Healthy         conditions error    string       Condition error code for a component  For example  a health check error code         conditions message    string       Message about the condition for a component  For example  information about a health check          ComponentStatusList   ComponentStatusList   Status of all 
the conditions for the component as a list of ComponentStatus objects  Deprecated  This API is deprecated in v1 19    hr       apiVersion    v1       kind    ComponentStatusList       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      items       a href    ComponentStatus  a    required    List of ComponentStatus objects          Operations   Operations      hr             get  read the specified ComponentStatus       HTTP Request  GET  api v1 componentstatuses  name        Parameters       name     in path    string  required    name of the ComponentStatus       pretty     in query    string     a href    pretty  a          Response   200   a href    ComponentStatus  a    OK  401  Unauthorized        list  list objects of kind ComponentStatus       HTTP Request  GET  api v1 componentstatuses       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ComponentStatusList  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference kind Volume apiVersion contenttype apireference title Volume weight 10 Volume represents a named volume in a pod that may be accessed by any container in the pod apimetadata autogenerated true import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"Volume\"\ncontent_type: \"api_reference\"\ndescription: \"Volume represents a named volume in a pod that may be accessed by any container in the pod.\"\ntitle: \"Volume\"\nweight: 10\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## Volume {#Volume}\n\nVolume represents a named volume in a pod that may be accessed by any container in the pod.\n\n<hr>\n\n- **name** (string), required\n\n  name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n\n\n### Exposed Persistent volumes\n\n\n- **persistentVolumeClaim** (PersistentVolumeClaimVolumeSource)\n\n  persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#persistentvolumeclaims\n\n  <a name=\"PersistentVolumeClaimVolumeSource\"><\/a>\n  *PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system).*\n\n  - **persistentVolumeClaim.claimName** (string), required\n\n    claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#persistentvolumeclaims\n\n  - **persistentVolumeClaim.readOnly** (boolean)\n\n    readOnly will force the ReadOnly setting in VolumeMounts. Default false.\n\n### Projections\n\n\n- **configMap** (ConfigMapVolumeSource)\n\n  configMap represents a configMap that should populate this volume\n\n  <a name=\"ConfigMapVolumeSource\"><\/a>\n  *Adapts a ConfigMap into a volume.\n  \n  The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling.*\n\n  - **configMap.name** (string)\n\n    Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n  - **configMap.optional** (boolean)\n\n    optional specify whether the ConfigMap or its keys must be defined\n\n  - **configMap.defaultMode** (int32)\n\n    defaultMode is optional: mode bits used to set permissions on created files by default. 
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.\n\n  - **configMap.items** ([]<a href=\"\">KeyToPath<\/a>)\n\n    *Atomic: will be replaced during a merge*\n    \n    items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.\n\n- **secret** (SecretVolumeSource)\n\n  secret represents a secret that should populate this volume. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#secret\n\n  <a name=\"SecretVolumeSource\"><\/a>\n  *Adapts a Secret into a volume.\n  \n  The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling.*\n\n  - **secret.secretName** (string)\n\n    secretName is the name of the secret in the pod's namespace to use. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#secret\n\n  - **secret.optional** (boolean)\n\n    optional field specify whether the Secret or its keys must be defined\n\n  - **secret.defaultMode** (int32)\n\n    defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. 
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.\n\n  - **secret.items** ([]<a href=\"\">KeyToPath<\/a>)\n\n    *Atomic: will be replaced during a merge*\n    \n    items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.\n\n- **downwardAPI** (DownwardAPIVolumeSource)\n\n  downwardAPI represents downward API about the pod that should populate this volume\n\n  <a name=\"DownwardAPIVolumeSource\"><\/a>\n  *DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling.*\n\n  - **downwardAPI.defaultMode** (int32)\n\n    Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.\n\n  - **downwardAPI.items** ([]<a href=\"\">DownwardAPIVolumeFile<\/a>)\n\n    *Atomic: will be replaced during a merge*\n    \n    Items is a list of downward API volume file\n\n- **projected** (ProjectedVolumeSource)\n\n  projected items for all in one resources secrets, configmaps, and downward API\n\n  <a name=\"ProjectedVolumeSource\"><\/a>\n  *Represents a projected volume source*\n\n  - **projected.defaultMode** (int32)\n\n    defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.\n\n  - **projected.sources** ([]VolumeProjection)\n\n    *Atomic: will be replaced during a merge*\n    \n    sources is the list of volume projections. Each entry in this list handles one source.\n\n    <a name=\"VolumeProjection\"><\/a>\n    *Projection that may be projected along with other supported volume types. Exactly one of these fields must be set.*\n\n    - **projected.sources.clusterTrustBundle** (ClusterTrustBundleProjection)\n\n      ClusterTrustBundle allows a pod to access the `.spec.trustBundle` field of ClusterTrustBundle objects in an auto-updating file.\n      \n      Alpha, gated by the ClusterTrustBundleProjection feature gate.\n      \n      ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector.\n      \n      Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem.  Esoteric PEM features such as inter-block comments and block headers are stripped.  
Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time.\n\n      <a name=\"ClusterTrustBundleProjection\"><\/a>\n      *ClusterTrustBundleProjection describes how to select a set of ClusterTrustBundle objects and project their contents into the pod filesystem.*\n\n      - **projected.sources.clusterTrustBundle.path** (string), required\n\n        Relative path from the volume root to write the bundle.\n\n      - **projected.sources.clusterTrustBundle.labelSelector** (<a href=\"\">LabelSelector<\/a>)\n\n        Select all ClusterTrustBundles that match this label selector.  Only has effect if signerName is set.  Mutually-exclusive with name.  If unset, interpreted as \"match nothing\".  If set but empty, interpreted as \"match everything\".\n\n      - **projected.sources.clusterTrustBundle.name** (string)\n\n        Select a single ClusterTrustBundle by object name.  Mutually-exclusive with signerName and labelSelector.\n\n      - **projected.sources.clusterTrustBundle.optional** (boolean)\n\n        If true, don't block pod startup if the referenced ClusterTrustBundle(s) aren't available.  If using name, then the named ClusterTrustBundle is allowed not to exist.  If using signerName, then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles.\n\n      - **projected.sources.clusterTrustBundle.signerName** (string)\n\n        Select all ClusterTrustBundles that match this signer name. Mutually-exclusive with name.  
The contents of all selected ClusterTrustBundles will be unified and deduplicated.\n\n    - **projected.sources.configMap** (ConfigMapProjection)\n\n      configMap information about the configMap data to project\n\n      <a name=\"ConfigMapProjection\"><\/a>\n      *Adapts a ConfigMap into a projected volume.\n      \n      The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode.*\n\n      - **projected.sources.configMap.name** (string)\n\n        Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n      - **projected.sources.configMap.optional** (boolean)\n\n        optional specify whether the ConfigMap or its keys must be defined\n\n      - **projected.sources.configMap.items** ([]<a href=\"\">KeyToPath<\/a>)\n\n        *Atomic: will be replaced during a merge*\n        \n        items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' 
path or start with '..'.\n\n    - **projected.sources.downwardAPI** (DownwardAPIProjection)\n\n      downwardAPI information about the downwardAPI data to project\n\n      <a name=\"DownwardAPIProjection\"><\/a>\n      *Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode.*\n\n      - **projected.sources.downwardAPI.items** ([]<a href=\"\">DownwardAPIVolumeFile<\/a>)\n\n        *Atomic: will be replaced during a merge*\n        \n        Items is a list of DownwardAPIVolume file\n\n    - **projected.sources.secret** (SecretProjection)\n\n      secret information about the secret data to project\n\n      <a name=\"SecretProjection\"><\/a>\n      *Adapts a secret into a projected volume.\n      \n      The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode.*\n\n      - **projected.sources.secret.name** (string)\n\n        Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names\n\n      - **projected.sources.secret.optional** (boolean)\n\n        optional field specify whether the Secret or its key must be defined\n\n      - **projected.sources.secret.items** ([]<a href=\"\">KeyToPath<\/a>)\n\n        *Atomic: will be replaced during a merge*\n        \n        items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. 
If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.\n\n    - **projected.sources.serviceAccountToken** (ServiceAccountTokenProjection)\n\n      serviceAccountToken is information about the serviceAccountToken data to project\n\n      <a name=\"ServiceAccountTokenProjection\"><\/a>\n      *ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pod's runtime filesystem for use against APIs (Kubernetes API Server or otherwise).*\n\n      - **projected.sources.serviceAccountToken.path** (string), required\n\n        path is the path relative to the mount point of the file to project the token into.\n\n      - **projected.sources.serviceAccountToken.audience** (string)\n\n        audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver.\n\n      - **projected.sources.serviceAccountToken.expirationSeconds** (int64)\n\n        expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes.\n\n### Local \/ Temporary Directory\n\n\n- **emptyDir** (EmptyDirVolumeSource)\n\n  emptyDir represents a temporary directory that shares a pod's lifetime. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#emptydir\n\n  <a name=\"EmptyDirVolumeSource\"><\/a>\n  *Represents an empty directory for a pod. 
Empty directory volumes support ownership management and SELinux relabeling.*\n\n  - **emptyDir.medium** (string)\n\n    medium represents what type of storage medium should back this directory. The default is \"\" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#emptydir\n\n  - **emptyDir.sizeLimit** (<a href=\"\">Quantity<\/a>)\n\n    sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#emptydir\n\n- **hostPath** (HostPathVolumeSource)\n\n  hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#hostpath\n\n  <a name=\"HostPathVolumeSource\"><\/a>\n  *Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling.*\n\n  - **hostPath.path** (string), required\n\n    path of the directory on the host. If the path is a symlink, it will follow the link to the real path. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#hostpath\n\n  - **hostPath.type** (string)\n\n    type for HostPath Volume. Defaults to \"\". More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#hostpath\n\n### Persistent volumes\n\n\n- **awsElasticBlockStore** (AWSElasticBlockStoreVolumeSource)\n\n  awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#awselasticblockstore\n\n  <a name=\"AWSElasticBlockStoreVolumeSource\"><\/a>\n  *Represents a Persistent Disk resource in AWS.\n  \n  An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read\/write once. AWS EBS volumes support ownership management and SELinux relabeling.*\n\n  - **awsElasticBlockStore.volumeID** (string), required\n\n    volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#awselasticblockstore\n\n  - **awsElasticBlockStore.fsType** (string)\n\n    fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#awselasticblockstore\n\n  - **awsElasticBlockStore.partition** (int32)\n\n    partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume \/dev\/sda1, you specify the partition as \"1\". Similarly, the volume partition for \/dev\/sda is \"0\" (or you can leave the property empty).\n\n  - **awsElasticBlockStore.readOnly** (boolean)\n\n    readOnly value true will force the readOnly setting in VolumeMounts. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#awselasticblockstore\n\n- **azureDisk** (AzureDiskVolumeSource)\n\n  azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.\n\n  <a name=\"AzureDiskVolumeSource\"><\/a>\n  *AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.*\n\n  - **azureDisk.diskName** (string), required\n\n    diskName is the Name of the data disk in the blob storage\n\n  - **azureDisk.diskURI** (string), required\n\n    diskURI is the URI of data disk in the blob storage\n\n  - **azureDisk.cachingMode** (string)\n\n    cachingMode is the Host Caching mode: None, Read Only, Read Write.\n\n  - **azureDisk.fsType** (string)\n\n    fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified.\n\n  - **azureDisk.kind** (string)\n\n    kind expected values are: Shared (multiple blob disks per storage account), Dedicated (single blob disk per storage account), Managed (azure managed data disk, only in managed availability set). Defaults to Shared.\n\n  - **azureDisk.readOnly** (boolean)\n\n    readOnly Defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n- **azureFile** (AzureFileVolumeSource)\n\n  azureFile represents an Azure File Service mount on the host and bind mount to the pod.\n\n  <a name=\"AzureFileVolumeSource\"><\/a>\n  *AzureFile represents an Azure File Service mount on the host and bind mount to the pod.*\n\n  - **azureFile.secretName** (string), required\n\n    secretName is the name of the secret that contains the Azure Storage Account Name and Key\n\n  - **azureFile.shareName** (string), required\n\n    shareName is the azure share Name\n\n  - **azureFile.readOnly** (boolean)\n\n    readOnly defaults to false (read\/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n- **cephfs** (CephFSVolumeSource)\n\n  cephFS represents a Ceph FS mount on the host that shares a pod's lifetime\n\n  <a name=\"CephFSVolumeSource\"><\/a>\n  *Represents a Ceph Filesystem mount that lasts the lifetime of a pod. Cephfs volumes do not support ownership management or SELinux relabeling.*\n\n  - **cephfs.monitors** ([]string), required\n\n    *Atomic: will be replaced during a merge*\n    \n    monitors is Required: Monitors is a collection of Ceph monitors More info: https:\/\/examples.k8s.io\/volumes\/cephfs\/README.md#how-to-use-it\n\n  - **cephfs.path** (string)\n\n    path is Optional: Used as the mounted root, rather than the full Ceph tree, default is \/\n\n  - **cephfs.readOnly** (boolean)\n\n    readOnly is Optional: Defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https:\/\/examples.k8s.io\/volumes\/cephfs\/README.md#how-to-use-it\n\n  - **cephfs.secretFile** (string)\n\n    secretFile is Optional: SecretFile is the path to key ring for User, default is \/etc\/ceph\/user.secret More info: https:\/\/examples.k8s.io\/volumes\/cephfs\/README.md#how-to-use-it\n\n  - **cephfs.secretRef** (<a href=\"\">LocalObjectReference<\/a>)\n\n    secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https:\/\/examples.k8s.io\/volumes\/cephfs\/README.md#how-to-use-it\n\n  - **cephfs.user** (string)\n\n    user is optional: User is the rados user name, default is admin More info: https:\/\/examples.k8s.io\/volumes\/cephfs\/README.md#how-to-use-it\n\n- **cinder** (CinderVolumeSource)\n\n  cinder represents a cinder volume attached and mounted on the kubelet's host machine. More info: https:\/\/examples.k8s.io\/mysql-cinder-pd\/README.md\n\n  <a name=\"CinderVolumeSource\"><\/a>\n  *Represents a cinder volume resource in OpenStack. 
A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling.*\n\n  - **cinder.volumeID** (string), required\n\n    volumeID used to identify the volume in cinder. More info: https:\/\/examples.k8s.io\/mysql-cinder-pd\/README.md\n\n  - **cinder.fsType** (string)\n\n    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https:\/\/examples.k8s.io\/mysql-cinder-pd\/README.md\n\n  - **cinder.readOnly** (boolean)\n\n    readOnly defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https:\/\/examples.k8s.io\/mysql-cinder-pd\/README.md\n\n  - **cinder.secretRef** (<a href=\"\">LocalObjectReference<\/a>)\n\n    secretRef is optional: points to a secret object containing parameters used to connect to OpenStack.\n\n- **csi** (CSIVolumeSource)\n\n  csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).\n\n  <a name=\"CSIVolumeSource\"><\/a>\n  *Represents a source location of a volume to mount, managed by an external CSI driver*\n\n  - **csi.driver** (string), required\n\n    driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster.\n\n  - **csi.fsType** (string)\n\n    fsType to mount. Ex. \"ext4\", \"xfs\", \"ntfs\". 
If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply.\n\n  - **csi.nodePublishSecretRef** (<a href=\"\">LocalObjectReference<\/a>)\n\n    nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and  may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed.\n\n  - **csi.readOnly** (boolean)\n\n    readOnly specifies a read-only configuration for the volume. Defaults to false (read\/write).\n\n  - **csi.volumeAttributes** (map[string]string)\n\n    volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values.\n\n- **ephemeral** (EphemeralVolumeSource)\n\n  ephemeral represents a volume that is handled by a cluster storage driver. 
The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed.\n  \n  Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim).\n  \n  Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod.\n  \n  Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information.\n  \n  A pod can use both types of ephemeral volumes and persistent volumes at the same time.\n\n  <a name=\"EphemeralVolumeSource\"><\/a>\n  *Represents an ephemeral volume that is handled by a normal storage driver.*\n\n  - **ephemeral.volumeClaimTemplate** (PersistentVolumeClaimTemplate)\n\n    Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod.  The name of the PVC will be `\\<pod name>-\\<volume name>` where `\\<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long).\n    \n    An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. 
If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.\n    \n    This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created.\n    \n    Required, must not be nil.\n\n    <a name=\"PersistentVolumeClaimTemplate\"><\/a>\n    *PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource.*\n\n    - **ephemeral.volumeClaimTemplate.spec** (<a href=\"\">PersistentVolumeClaimSpec<\/a>), required\n\n      The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.\n\n    - **ephemeral.volumeClaimTemplate.metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n      May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation.\n\n- **fc** (FCVolumeSource)\n\n  fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod.\n\n  <a name=\"FCVolumeSource\"><\/a>\n  *Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read\/write once. Fibre Channel volumes support ownership management and SELinux relabeling.*\n\n  - **fc.fsType** (string)\n\n    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified.\n\n  - **fc.lun** (int32)\n\n    lun is Optional: FC target lun number\n\n  - **fc.readOnly** (boolean)\n\n    readOnly is Optional: Defaults to false (read\/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n  - **fc.targetWWNs** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    targetWWNs is Optional: FC target worldwide names (WWNs)\n\n  - **fc.wwids** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.\n\n- **flexVolume** (FlexVolumeSource)\n\n  flexVolume represents a generic volume resource that is provisioned\/attached using an exec based plugin.\n\n  <a name=\"FlexVolumeSource\"><\/a>\n  *FlexVolume represents a generic volume resource that is provisioned\/attached using an exec based plugin.*\n\n  - **flexVolume.driver** (string), required\n\n    driver is the name of the driver to use for this volume.\n\n  - **flexVolume.fsType** (string)\n\n    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". The default filesystem depends on FlexVolume script.\n\n  - **flexVolume.options** (map[string]string)\n\n    options is Optional: this field holds extra command options if any.\n\n  - **flexVolume.readOnly** (boolean)\n\n    readOnly is Optional: defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n  - **flexVolume.secretRef** (<a href=\"\">LocalObjectReference<\/a>)\n\n    secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.\n\n- **flocker** (FlockerVolumeSource)\n\n  flocker represents a Flocker volume attached to a kubelet's host machine. 
This depends on the Flocker control service being running\n\n  <a name=\"FlockerVolumeSource\"><\/a>\n  *Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling.*\n\n  - **flocker.datasetName** (string)\n\n    datasetName is the name of the dataset, stored as metadata -> name on the dataset for Flocker. It should be considered deprecated.\n\n  - **flocker.datasetUUID** (string)\n\n    datasetUUID is the UUID of the dataset. This is the unique identifier of a Flocker dataset\n\n- **gcePersistentDisk** (GCEPersistentDiskVolumeSource)\n\n  gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#gcepersistentdisk\n\n  <a name=\"GCEPersistentDiskVolumeSource\"><\/a>\n  *Represents a Persistent Disk resource in Google Compute Engine.\n  \n  A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read\/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling.*\n\n  - **gcePersistentDisk.pdName** (string), required\n\n    pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#gcepersistentdisk\n\n  - **gcePersistentDisk.fsType** (string)\n\n    fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#gcepersistentdisk\n\n  - **gcePersistentDisk.partition** (int32)\n\n    partition is the partition in the volume that you want to mount. 
If omitted, the default is to mount by volume name. Examples: For volume \/dev\/sda1, you specify the partition as \"1\". Similarly, the volume partition for \/dev\/sda is \"0\" (or you can leave the property empty). More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#gcepersistentdisk\n\n  - **gcePersistentDisk.readOnly** (boolean)\n\n    readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#gcepersistentdisk\n\n- **glusterfs** (GlusterfsVolumeSource)\n\n  glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https:\/\/examples.k8s.io\/volumes\/glusterfs\/README.md\n\n  <a name=\"GlusterfsVolumeSource\"><\/a>\n  *Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling.*\n\n  - **glusterfs.endpoints** (string), required\n\n    endpoints is the endpoint name that details Glusterfs topology. More info: https:\/\/examples.k8s.io\/volumes\/glusterfs\/README.md#create-a-pod\n\n  - **glusterfs.path** (string), required\n\n    path is the Glusterfs volume path. More info: https:\/\/examples.k8s.io\/volumes\/glusterfs\/README.md#create-a-pod\n\n  - **glusterfs.readOnly** (boolean)\n\n    readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https:\/\/examples.k8s.io\/volumes\/glusterfs\/README.md#create-a-pod\n\n- **iscsi** (ISCSIVolumeSource)\n\n  iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https:\/\/examples.k8s.io\/volumes\/iscsi\/README.md\n\n  <a name=\"ISCSIVolumeSource\"><\/a>\n  *Represents an ISCSI disk. ISCSI volumes can only be mounted as read\/write once. 
ISCSI volumes support ownership management and SELinux relabeling.*\n\n  - **iscsi.iqn** (string), required\n\n    iqn is the target iSCSI Qualified Name.\n\n  - **iscsi.lun** (int32), required\n\n    lun represents iSCSI Target Lun number.\n\n  - **iscsi.targetPortal** (string), required\n\n    targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).\n\n  - **iscsi.chapAuthDiscovery** (boolean)\n\n    chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication\n\n  - **iscsi.chapAuthSession** (boolean)\n\n    chapAuthSession defines whether support iSCSI Session CHAP authentication\n\n  - **iscsi.fsType** (string)\n\n    fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#iscsi\n\n  - **iscsi.initiatorName** (string)\n\n    initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface \\<target portal>:\\<volume name> will be created for the connection.\n\n  - **iscsi.iscsiInterface** (string)\n\n    iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp).\n\n  - **iscsi.portals** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).\n\n  - **iscsi.readOnly** (boolean)\n\n    readOnly here will force the ReadOnly setting in VolumeMounts. 
Defaults to false.\n\n  - **iscsi.secretRef** (<a href=\"\">LocalObjectReference<\/a>)\n\n    secretRef is the CHAP Secret for iSCSI target and initiator authentication\n\n- **image** (ImageVolumeSource)\n\n  image represents an OCI object (a container image or artifact) pulled and mounted on the kubelet's host machine. The volume is resolved at pod startup depending on which PullPolicy value is provided:\n  \n  - Always: the kubelet always attempts to pull the reference. Container creation will fail if the pull fails.\n  - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present.\n  - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.\n  \n  The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and with non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). 
The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.\n\n  <a name=\"ImageVolumeSource\"><\/a>\n  *ImageVolumeSource represents an image volume resource.*\n\n  - **image.pullPolicy** (string)\n\n    Policy for pulling OCI objects. Possible values are: Always: the kubelet always attempts to pull the reference. Container creation will fail if the pull fails. Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise.\n\n  - **image.reference** (string)\n\n    Required: Image or artifact reference to be used. Behaves in the same way as pod.spec.containers[*].image. Pull secrets will be assembled in the same way as for the container image by looking up node credentials, SA image pull secrets, and pod spec image pull secrets. More info: https:\/\/kubernetes.io\/docs\/concepts\/containers\/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.\n\n- **nfs** (NFSVolumeSource)\n\n  nfs represents an NFS mount on the host that shares a pod's lifetime. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#nfs\n\n  <a name=\"NFSVolumeSource\"><\/a>\n  *Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling.*\n\n  - **nfs.path** (string), required\n\n    path that is exported by the NFS server. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#nfs\n\n  - **nfs.server** (string), required\n\n    server is the hostname or IP address of the NFS server. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#nfs\n\n  - **nfs.readOnly** (boolean)\n\n    readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#nfs\n\n- **photonPersistentDisk** (PhotonPersistentDiskVolumeSource)\n\n  photonPersistentDisk represents a PhotonController persistent disk attached and mounted on the kubelet's host machine\n\n  <a name=\"PhotonPersistentDiskVolumeSource\"><\/a>\n  *Represents a Photon Controller persistent disk resource.*\n\n  - **photonPersistentDisk.pdID** (string), required\n\n    pdID is the ID that identifies Photon Controller persistent disk\n\n  - **photonPersistentDisk.fsType** (string)\n\n    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified.\n\n- **portworxVolume** (PortworxVolumeSource)\n\n  portworxVolume represents a portworx volume attached and mounted on the kubelet's host machine\n\n  <a name=\"PortworxVolumeSource\"><\/a>\n  *PortworxVolumeSource represents a Portworx volume resource.*\n\n  - **portworxVolume.volumeID** (string), required\n\n    volumeID uniquely identifies a Portworx volume\n\n  - **portworxVolume.fsType** (string)\n\n    fSType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\". Implicitly inferred to be \"ext4\" if unspecified.\n\n  - **portworxVolume.readOnly** (boolean)\n\n    readOnly defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n- **quobyte** (QuobyteVolumeSource)\n\n  quobyte represents a Quobyte mount on the host that shares a pod's lifetime\n\n  <a name=\"QuobyteVolumeSource\"><\/a>\n  *Represents a Quobyte mount that lasts the lifetime of a pod. 
Quobyte volumes do not support ownership management or SELinux relabeling.*\n\n  - **quobyte.registry** (string), required\n\n    registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes\n\n  - **quobyte.volume** (string), required\n\n    volume is a string that references an already created Quobyte volume by name.\n\n  - **quobyte.group** (string)\n\n    group to map volume access to. Default is no group\n\n  - **quobyte.readOnly** (boolean)\n\n    readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.\n\n  - **quobyte.tenant** (string)\n\n    tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes, value is set by the plugin\n\n  - **quobyte.user** (string)\n\n    user to map volume access to. Defaults to serviceaccount user\n\n- **rbd** (RBDVolumeSource)\n\n  rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md\n\n  <a name=\"RBDVolumeSource\"><\/a>\n  *Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling.*\n\n  - **rbd.image** (string), required\n\n    image is the rados image name. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md#how-to-use-it\n\n  - **rbd.monitors** ([]string), required\n\n    *Atomic: will be replaced during a merge*\n    \n    monitors is a collection of Ceph monitors. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md#how-to-use-it\n\n  - **rbd.fsType** (string)\n\n    fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd

  - **rbd.keyring** (string)

    keyring is the path to the key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.pool** (string)

    pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.readOnly** (boolean)

    readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.secretRef** (<a href="">LocalObjectReference</a>)

    secretRef is the name of the authentication secret for RBDUser. If provided, it overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.user** (string)

    user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

- **scaleIO** (ScaleIOVolumeSource)

  scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.

  <a name="ScaleIOVolumeSource"></a>
  *ScaleIOVolumeSource represents a persistent ScaleIO volume*

  - **scaleIO.gateway** (string), required

    gateway is the host address of the ScaleIO API Gateway.

  - **scaleIO.secretRef** (<a href="">LocalObjectReference</a>), required

    secretRef references the secret for the ScaleIO user and other sensitive information. If this is not provided, the Login operation will fail.

  - **scaleIO.system** (string), required

    system is the name of the storage system as configured in ScaleIO.

  - **scaleIO.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs".
Default is "xfs".

  - **scaleIO.protectionDomain** (string)

    protectionDomain is the name of the ScaleIO Protection Domain for the configured storage.

  - **scaleIO.readOnly** (boolean)

    readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

  - **scaleIO.sslEnabled** (boolean)

    sslEnabled is a flag to enable/disable SSL communication with the Gateway; default false.

  - **scaleIO.storageMode** (string)

    storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned.

  - **scaleIO.storagePool** (string)

    storagePool is the ScaleIO Storage Pool associated with the protection domain.

  - **scaleIO.volumeName** (string)

    volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source.

- **storageos** (StorageOSVolumeSource)

  storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes.

  <a name="StorageOSVolumeSource"></a>
  *Represents a StorageOS persistent volume resource.*

  - **storageos.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

  - **storageos.readOnly** (boolean)

    readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

  - **storageos.secretRef** (<a href="">LocalObjectReference</a>)

    secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.

  - **storageos.volumeName** (string)

    volumeName is the human-readable name of the StorageOS volume.
Volume names are only unique within a namespace.

  - **storageos.volumeNamespace** (string)

    volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.

- **vsphereVolume** (VsphereVirtualDiskVolumeSource)

  vsphereVolume represents a vSphere volume attached and mounted on the kubelet's host machine

  <a name="VsphereVirtualDiskVolumeSource"></a>
  *Represents a vSphere volume resource.*

  - **vsphereVolume.volumePath** (string), required

    volumePath is the path that identifies the vSphere volume vmdk

  - **vsphereVolume.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

  - **vsphereVolume.storagePolicyID** (string)

    storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName.

  - **vsphereVolume.storagePolicyName** (string)

    storagePolicyName is the storage Policy Based Management (SPBM) profile name.

### Deprecated

- **gitRepo** (GitRepoVolumeSource)

  gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.

  <a name="GitRepoVolumeSource"></a>
  *Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management.
Git repo volumes support SELinux relabeling.

  DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.*

  - **gitRepo.repository** (string), required

    repository is the URL

  - **gitRepo.directory** (string)

    directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name.

  - **gitRepo.revision** (string)

    revision is the commit hash for the specified revision.

## DownwardAPIVolumeFile {#DownwardAPIVolumeFile}

DownwardAPIVolumeFile represents information to create the file containing the pod field

<hr>

- **path** (string), required

  Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..'

- **fieldRef** (<a href="">ObjectFieldSelector</a>)

  Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported.

- **mode** (int32)

  Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values; JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used.
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

- **resourceFieldRef** (<a href="">ResourceFieldSelector</a>)

  Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.

## KeyToPath {#KeyToPath}

Maps a string key to a path within a volume.

<hr>

- **key** (string), required

  key is the key to project.

- **path** (string), required

  path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.

- **mode** (int32)

  mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values; JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used.
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
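The two structures above are easiest to see in context. Below is a minimal sketch of a Pod that uses a `KeyToPath` item to project a single ConfigMap key, and a `DownwardAPIVolumeFile` to project the pod's labels; the resource names, keys, and paths are illustrative, and the ConfigMap `app-config` is assumed to exist in the same namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projection-demo        # illustrative name
  labels:
    app: demo
spec:
  containers:
  - name: main
    image: registry.k8s.io/busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: config
    configMap:
      name: app-config              # assumed to exist in this namespace
      items:                        # KeyToPath: project only the listed key
      - key: app.properties         # key in the ConfigMap's Data field
        path: conf/app.properties   # relative path inside the volume
        mode: 0444                  # octal in YAML; JSON would need 292
  - name: podinfo
    downwardAPI:
      items:                        # DownwardAPIVolumeFile entries
      - path: labels                # relative path of the created file
        fieldRef:
          fieldPath: metadata.labels
        mode: 0644
```

Keys of `app-config` not listed under `items` are not projected; a listed key missing from the ConfigMap fails volume setup unless the ConfigMap is marked optional.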
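The deprecation note for `gitRepo` describes its replacement pattern: clone into an emptyDir from an init container, then mount that emptyDir into the Pod's container. A sketch of that pattern, with the repository URL and images as placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: git-clone-demo          # illustrative name
spec:
  initContainers:
  - name: clone
    image: alpine/git           # any image that provides git; an assumption
    args: ["clone", "--single-branch", "https://example.com/repo.git", "/repo"]
    volumeMounts:
    - name: repo
      mountPath: /repo
  containers:
  - name: main
    image: registry.k8s.io/busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: repo
      mountPath: /repo          # sees the cloned repository
  volumes:
  - name: repo
    emptyDir: {}                # replaces the deprecated gitRepo volume
```

Unlike `gitRepo`, this approach lets you control credentials, depth, and the git version used by choosing the init container image.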
owned by someone else  the system           persistentVolumeClaim claimName    string   required      claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume  More info  https   kubernetes io docs concepts storage persistent volumes persistentvolumeclaims        persistentVolumeClaim readOnly    boolean       readOnly Will force the ReadOnly setting in VolumeMounts  Default false       Projections       configMap    ConfigMapVolumeSource     configMap represents a configMap that should populate this volume     a name  ConfigMapVolumeSource    a     Adapts a ConfigMap into a volume       The contents of the target ConfigMap s Data field will be presented in a volume as files using the keys in the Data field as the file names  unless the items element is populated with specific mappings of keys to paths  ConfigMap volumes support ownership management and SELinux relabeling          configMap name    string       Name of the referent  This field is effectively required  but due to backwards compatibility is allowed to be empty  Instances of this type with an empty value here are almost certainly wrong  More info  https   kubernetes io docs concepts overview working with objects names  names        configMap optional    boolean       optional specify whether the ConfigMap or its keys must be defined        configMap defaultMode    int32       defaultMode is optional  mode bits used to set permissions on created files by default  Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511  YAML accepts both octal and decimal values  JSON requires decimal values for mode bits  Defaults to 0644  Directories within the path are not affected by this setting  This might be in conflict with other options that affect the file mode  like fsGroup  and the result can be other mode bits set         configMap items       a href    KeyToPath  a         Atomic  will be replaced during a merge           items if 
unspecified  each key value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value  If specified  the listed keys will be projected into the specified paths  and unlisted keys will not be present  If a key is specified which is not present in the ConfigMap  the volume setup will error unless it is marked optional  Paths must be relative and may not contain the      path or start with            secret    SecretVolumeSource     secret represents a secret that should populate this volume  More info  https   kubernetes io docs concepts storage volumes secret     a name  SecretVolumeSource    a     Adapts a Secret into a volume       The contents of the target Secret s Data field will be presented in a volume as files using the keys in the Data field as the file names  Secret volumes support ownership management and SELinux relabeling          secret secretName    string       secretName is the name of the secret in the pod s namespace to use  More info  https   kubernetes io docs concepts storage volumes secret        secret optional    boolean       optional field specify whether the Secret or its keys must be defined        secret defaultMode    int32       defaultMode is Optional  mode bits used to set permissions on created files by default  Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511  YAML accepts both octal and decimal values  JSON requires decimal values for mode bits  Defaults to 0644  Directories within the path are not affected by this setting  This might be in conflict with other options that affect the file mode  like fsGroup  and the result can be other mode bits set         secret items       a href    KeyToPath  a         Atomic  will be replaced during a merge           items If unspecified  each key value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content 
is the value  If specified  the listed keys will be projected into the specified paths  and unlisted keys will not be present  If a key is specified which is not present in the Secret  the volume setup will error unless it is marked optional  Paths must be relative and may not contain the      path or start with            downwardAPI    DownwardAPIVolumeSource     downwardAPI represents downward API about the pod that should populate this volume     a name  DownwardAPIVolumeSource    a     DownwardAPIVolumeSource represents a volume containing downward API info  Downward API volumes support ownership management and SELinux relabeling          downwardAPI defaultMode    int32       Optional  mode bits to use on created files by default  Must be a Optional  mode bits used to set permissions on created files by default  Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511  YAML accepts both octal and decimal values  JSON requires decimal values for mode bits  Defaults to 0644  Directories within the path are not affected by this setting  This might be in conflict with other options that affect the file mode  like fsGroup  and the result can be other mode bits set         downwardAPI items       a href    DownwardAPIVolumeFile  a         Atomic  will be replaced during a merge           Items is a list of downward API volume file      projected    ProjectedVolumeSource     projected items for all in one resources secrets  configmaps  and downward API     a name  ProjectedVolumeSource    a     Represents a projected volume source         projected defaultMode    int32       defaultMode are the mode bits used to set permissions on created files by default  Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511  YAML accepts both octal and decimal values  JSON requires decimal values for mode bits  Directories within the path are not affected by this setting  This might be in conflict with other options that affect 
the file mode  like fsGroup  and the result can be other mode bits set         projected sources      VolumeProjection        Atomic  will be replaced during a merge           sources is the list of volume projections  Each entry in this list handles one source        a name  VolumeProjection    a       Projection that may be projected along with other supported volume types  Exactly one of these fields must be set            projected sources clusterTrustBundle    ClusterTrustBundleProjection         ClusterTrustBundle allows a pod to access the   spec trustBundle  field of ClusterTrustBundle objects in an auto updating file               Alpha  gated by the ClusterTrustBundleProjection feature gate               ClusterTrustBundle objects can either be selected by name  or by the combination of signer name and a label selector               Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem   Esoteric PEM features such as inter block comments and block headers are stripped   Certificates are deduplicated  The ordering of certificates within the file is arbitrary  and Kubelet may change the order over time          a name  ClusterTrustBundleProjection    a         ClusterTrustBundleProjection describes how to select a set of ClusterTrustBundle objects and project their contents into the pod filesystem              projected sources clusterTrustBundle path    string   required          Relative path from the volume root to write the bundle             projected sources clusterTrustBundle labelSelector     a href    LabelSelector  a            Select all ClusterTrustBundles that match this label selector   Only has effect if signerName is set   Mutually exclusive with name   If unset  interpreted as  match nothing    If set but empty  interpreted as  match everything              projected sources clusterTrustBundle name    string           Select a single ClusterTrustBundle by object name   Mutually exclusive with 
signerName and labelSelector             projected sources clusterTrustBundle optional    boolean           If true  don t block pod startup if the referenced ClusterTrustBundle s  aren t available   If using name  then the named ClusterTrustBundle is allowed not to exist   If using signerName  then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles             projected sources clusterTrustBundle signerName    string           Select all ClusterTrustBundles that match this signer name  Mutually exclusive with name   The contents of all selected ClusterTrustBundles will be unified and deduplicated           projected sources configMap    ConfigMapProjection         configMap information about the configMap data to project         a name  ConfigMapProjection    a         Adapts a ConfigMap into a projected volume               The contents of the target ConfigMap s Data field will be presented in a projected volume as files using the keys in the Data field as the file names  unless the items element is populated with specific mappings of keys to paths  Note that this is identical to a configmap volume source without the default mode              projected sources configMap name    string           Name of the referent  This field is effectively required  but due to backwards compatibility is allowed to be empty  Instances of this type with an empty value here are almost certainly wrong  More info  https   kubernetes io docs concepts overview working with objects names  names            projected sources configMap optional    boolean           optional specify whether the ConfigMap or its keys must be defined            projected sources configMap items       a href    KeyToPath  a             Atomic  will be replaced during a merge                   items if unspecified  each key value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value 
 If specified  the listed keys will be projected into the specified paths  and unlisted keys will not be present  If a key is specified which is not present in the ConfigMap  the volume setup will error unless it is marked optional  Paths must be relative and may not contain the      path or start with                projected sources downwardAPI    DownwardAPIProjection         downwardAPI information about the downwardAPI data to project         a name  DownwardAPIProjection    a         Represents downward API info for projecting into a projected volume  Note that this is identical to a downwardAPI volume source without the default mode              projected sources downwardAPI items       a href    DownwardAPIVolumeFile  a             Atomic  will be replaced during a merge                   Items is a list of DownwardAPIVolume file          projected sources secret    SecretProjection         secret information about the secret data to project         a name  SecretProjection    a         Adapts a secret into a projected volume               The contents of the target Secret s Data field will be presented in a projected volume as files using the keys in the Data field as the file names  Note that this is identical to a secret volume source without the default mode              projected sources secret name    string           Name of the referent  This field is effectively required  but due to backwards compatibility is allowed to be empty  Instances of this type with an empty value here are almost certainly wrong  More info  https   kubernetes io docs concepts overview working with objects names  names            projected sources secret optional    boolean           optional field specify whether the Secret or its key must be defined            projected sources secret items       a href    KeyToPath  a             Atomic  will be replaced during a merge                   items if unspecified  each key value pair in the Data field of the referenced Secret 
will be projected into the volume as a file whose name is the key and content is the value  If specified  the listed keys will be projected into the specified paths  and unlisted keys will not be present  If a key is specified which is not present in the Secret  the volume setup will error unless it is marked optional  Paths must be relative and may not contain the      path or start with                projected sources serviceAccountToken    ServiceAccountTokenProjection         serviceAccountToken is information about the serviceAccountToken data to project         a name  ServiceAccountTokenProjection    a         ServiceAccountTokenProjection represents a projected service account token volume  This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs  Kubernetes API Server or otherwise               projected sources serviceAccountToken path    string   required          path is the path relative to the mount point of the file to project the token into             projected sources serviceAccountToken audience    string           audience is the intended audience of the token  A recipient of a token must identify itself with an identifier specified in the audience of the token  and otherwise should reject the token  The audience defaults to the identifier of the apiserver             projected sources serviceAccountToken expirationSeconds    int64           expirationSeconds is the requested duration of validity of the service account token  As the token approaches expiration  the kubelet volume plugin will proactively rotate the service account token  The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours Defaults to 1 hour and must be at least 10 minutes       Local   Temporary Directory       emptyDir    EmptyDirVolumeSource     emptyDir represents a temporary directory that shares a pod s lifetime  More info 
 https   kubernetes io docs concepts storage volumes emptydir     a name  EmptyDirVolumeSource    a     Represents an empty directory for a pod  Empty directory volumes support ownership management and SELinux relabeling          emptyDir medium    string       medium represents what type of storage medium should back this directory  The default is    which means to use the node s default medium  Must be an empty string  default  or Memory  More info  https   kubernetes io docs concepts storage volumes emptydir        emptyDir sizeLimit     a href    Quantity  a        sizeLimit is the total amount of local storage required for this EmptyDir volume  The size limit is also applicable for memory medium  The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod  The default is nil which means that the limit is undefined  More info  https   kubernetes io docs concepts storage volumes emptydir      hostPath    HostPathVolumeSource     hostPath represents a pre existing file or directory on the host machine that is directly exposed to the container  This is generally used for system agents or other privileged things that are allowed to see the host machine  Most containers will NOT need this  More info  https   kubernetes io docs concepts storage volumes hostpath     a name  HostPathVolumeSource    a     Represents a host path mapped into a pod  Host path volumes do not support ownership management or SELinux relabeling          hostPath path    string   required      path of the directory on the host  If the path is a symlink  it will follow the link to the real path  More info  https   kubernetes io docs concepts storage volumes hostpath        hostPath type    string       type for HostPath Volume Defaults to    More info  https   kubernetes io docs concepts storage volumes hostpath      Persistent volumes       awsElasticBlockStore    
AWSElasticBlockStoreVolumeSource     awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet s host machine and then exposed to the pod  More info  https   kubernetes io docs concepts storage volumes awselasticblockstore     a name  AWSElasticBlockStoreVolumeSource    a     Represents a Persistent Disk resource in AWS       An AWS EBS disk must exist before mounting to a container  The disk must also be in the same AWS zone as the kubelet  An AWS EBS disk can only be mounted as read write once  AWS EBS volumes support ownership management and SELinux relabeling          awsElasticBlockStore volumeID    string   required      volumeID is unique ID of the persistent disk resource in AWS  Amazon EBS volume   More info  https   kubernetes io docs concepts storage volumes awselasticblockstore        awsElasticBlockStore fsType    string       fsType is the filesystem type of the volume that you want to mount  Tip  Ensure that the filesystem type is supported by the host operating system  Examples   ext4    xfs    ntfs   Implicitly inferred to be  ext4  if unspecified  More info  https   kubernetes io docs concepts storage volumes awselasticblockstore        awsElasticBlockStore partition    int32       partition is the partition in the volume that you want to mount  If omitted  the default is to mount by volume name  Examples  For volume  dev sda1  you specify the partition as  1   Similarly  the volume partition for  dev sda is  0   or you can leave the property empty          awsElasticBlockStore readOnly    boolean       readOnly value true will force the readOnly setting in VolumeMounts  More info  https   kubernetes io docs concepts storage volumes awselasticblockstore      azureDisk    AzureDiskVolumeSource     azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod      a name  AzureDiskVolumeSource    a     AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod          azureDisk 
diskName    string   required      diskName is the Name of the data disk in the blob storage        azureDisk diskURI    string   required      diskURI is the URI of data disk in the blob storage        azureDisk cachingMode    string       cachingMode is the Host Caching mode  None  Read Only  Read Write         azureDisk fsType    string       fsType is Filesystem type to mount  Must be a filesystem type supported by the host operating system  Ex   ext4    xfs    ntfs   Implicitly inferred to be  ext4  if unspecified         azureDisk kind    string       kind expected values are Shared  multiple blob disks per storage account  Dedicated  single blob disk per storage account  Managed  azure managed data disk  only in managed availability set   defaults to shared        azureDisk readOnly    boolean       readOnly Defaults to false  read write   ReadOnly here will force the ReadOnly setting in VolumeMounts       azureFile    AzureFileVolumeSource     azureFile represents an Azure File Service mount on the host and bind mount to the pod      a name  AzureFileVolumeSource    a     AzureFile represents an Azure File Service mount on the host and bind mount to the pod          azureFile secretName    string   required      secretName is the  name of secret that contains Azure Storage Account Name and Key        azureFile shareName    string   required      shareName is the azure share Name        azureFile readOnly    boolean       readOnly defaults to false  read write   ReadOnly here will force the ReadOnly setting in VolumeMounts       cephfs    CephFSVolumeSource     cephFS represents a Ceph FS mount on the host that shares a pod s lifetime     a name  CephFSVolumeSource    a     Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling          cephfs monitors      string   required       Atomic  will be replaced during a merge           monitors is Required  Monitors is a 
collection of Ceph monitors More info  https   examples k8s io volumes cephfs README md how to use it        cephfs path    string       path is Optional  Used as the mounted root  rather than the full Ceph tree  default is          cephfs readOnly    boolean       readOnly is Optional  Defaults to false  read write   ReadOnly here will force the ReadOnly setting in VolumeMounts  More info  https   examples k8s io volumes cephfs README md how to use it        cephfs secretFile    string       secretFile is Optional  SecretFile is the path to key ring for User  default is  etc ceph user secret More info  https   examples k8s io volumes cephfs README md how to use it        cephfs secretRef     a href    LocalObjectReference  a        secretRef is Optional  SecretRef is reference to the authentication secret for User  default is empty  More info  https   examples k8s io volumes cephfs README md how to use it        cephfs user    string       user is optional  User is the rados user name  default is admin More info  https   examples k8s io volumes cephfs README md how to use it      cinder    CinderVolumeSource     cinder represents a cinder volume attached and mounted on kubelets host machine  More info  https   examples k8s io mysql cinder pd README md     a name  CinderVolumeSource    a     Represents a cinder volume resource in Openstack  A Cinder volume must exist before mounting to a container  The volume must also be in the same region as the kubelet  Cinder volumes support ownership management and SELinux relabeling          cinder volumeID    string   required      volumeID used to identify the volume in cinder  More info  https   examples k8s io mysql cinder pd README md        cinder fsType    string       fsType is the filesystem type to mount  Must be a filesystem type supported by the host operating system  Examples   ext4    xfs    ntfs   Implicitly inferred to be  ext4  if unspecified  More info  https   examples k8s io mysql cinder pd README md       
  - **cinder.readOnly** (boolean)

    readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

  - **cinder.secretRef** (<a href="">LocalObjectReference</a>)

    secretRef is optional: points to a secret object containing parameters used to connect to OpenStack.

- **csi** (CSIVolumeSource)

  csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).

  <a name="CSIVolumeSource"></a>
  *Represents a source location of a volume to mount, managed by an external CSI driver.*

  - **csi.driver** (string), required

    driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster.

  - **csi.fsType** (string)

    fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply.

  - **csi.nodePublishSecretRef** (<a href="">LocalObjectReference</a>)

    nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed.

  - **csi.readOnly** (boolean)

    readOnly specifies a read-only configuration for the volume. Defaults to false (read/write).

  - **csi.volumeAttributes** (map[string]string)

    volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values.

- **ephemeral** (EphemeralVolumeSource)

  ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed.
  
  Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim).
  
  Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod.
  
  Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information.
  
  A pod can use both types of ephemeral volumes and persistent volumes at the same time.

  <a name="EphemeralVolumeSource"></a>
  *Represents an ephemeral volume that is handled by a normal storage driver.*

  - **ephemeral.volumeClaimTemplate** (PersistentVolumeClaimTemplate)

    Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `<pod name>-<volume name>` where `<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long).
    
    An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.
    
    This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created.
    
    Required, must not be nil.

    <a name="PersistentVolumeClaimTemplate"></a>
    *PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource.*

    - **ephemeral.volumeClaimTemplate.spec** (<a href="">PersistentVolumeClaimSpec</a>), required

      The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.

    - **ephemeral.volumeClaimTemplate.metadata** (<a href="">ObjectMeta</a>)

      May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation.

- **fc** (FCVolumeSource)

  fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod.

  <a name="FCVolumeSource"></a>
  *Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling.*

  - **fc.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

  - **fc.lun** (int32)

    lun is Optional: FC target lun number

  - **fc.readOnly** (boolean)

    readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

  - **fc.targetWWNs** ([]string)

    *Atomic: will be replaced during a merge*
    
    targetWWNs is Optional: FC target worldwide names (WWNs)

  - **fc.wwids** ([]string)

    *Atomic: will be replaced during a merge*
    
    wwids Optional: FC volume world wide identifiers (wwids). Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.

- **flexVolume** (FlexVolumeSource)

  flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.

  <a name="FlexVolumeSource"></a>
  *FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.*

  - **flexVolume.driver** (string), required

    driver is the name of the driver to use for this volume.

  - **flexVolume.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script.

  - **flexVolume.options** (map[string]string)

    options is Optional: this field holds extra command options if any.

  - **flexVolume.readOnly** (boolean)

    readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

  - **flexVolume.secretRef** (<a href="">LocalObjectReference</a>)

    secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.

- **flocker** (FlockerVolumeSource)

  flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running.

  <a name="FlockerVolumeSource"></a>
  *Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling.*

  - **flocker.datasetName** (string)

    datasetName is Name of the dataset stored as metadata -> name on the dataset for Flocker; should be considered as deprecated

  - **flocker.datasetUUID** (string)

    datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset.

- **gcePersistentDisk** (GCEPersistentDiskVolumeSource)

  gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

  <a name="GCEPersistentDiskVolumeSource"></a>
  *Represents a Persistent Disk resource in Google Compute Engine.
  
  A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling.*

  - **gcePersistentDisk.pdName** (string), required

    pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

  - **gcePersistentDisk.fsType** (string)

    fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

  - **gcePersistentDisk.partition** (int32)

    partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

  - **gcePersistentDisk.readOnly** (boolean)

    readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

- **glusterfs** (GlusterfsVolumeSource)

  glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md

  <a name="GlusterfsVolumeSource"></a>
  *Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling.*

  - **glusterfs.endpoints** (string), required

    endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

  - **glusterfs.path** (string), required

    path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

  - **glusterfs.readOnly** (boolean)

    readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

- **iscsi** (ISCSIVolumeSource)

  iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md

  <a name="ISCSIVolumeSource"></a>
  *Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling.*

  - **iscsi.iqn** (string), required

    iqn is the target iSCSI Qualified Name.

  - **iscsi.lun** (int32), required

    lun represents iSCSI Target Lun number.

  - **iscsi.targetPortal** (string), required

    targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).

  - **iscsi.chapAuthDiscovery** (boolean)

    chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication

  - **iscsi.chapAuthSession** (boolean)

    chapAuthSession defines whether support iSCSI Session CHAP authentication

  - **iscsi.fsType** (string)

    fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi

  - **iscsi.initiatorName** (string)

    initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface \<target portal>:\<volume name> will be created for the connection.

  - **iscsi.iscsiInterface** (string)

    iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp).

  - **iscsi.portals** ([]string)

    *Atomic: will be replaced during a merge*
    
    portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).

  - **iscsi.readOnly** (boolean)

    readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false.

  - **iscsi.secretRef** (<a href="">LocalObjectReference</a>)

    secretRef is the CHAP Secret for iSCSI target and initiator authentication

- **image** (ImageVolumeSource)

  image represents an OCI object (a container image or artifact) pulled and mounted on the kubelet's host machine. The volume is resolved at pod startup depending on which PullPolicy value is provided:
  
  - Always: the kubelet always attempts to pull the reference. Container creation will fail if the pull fails.
  - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present.
  - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails.
  
  The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.

  <a name="ImageVolumeSource"></a>
  *ImageVolumeSource represents an image volume resource.*

  - **image.pullPolicy** (string)

    Policy for pulling OCI objects. Possible values are: Always: the kubelet always attempts to pull the reference; container creation will fail if the pull fails. Never: the kubelet never pulls the reference and only uses a local image or artifact; container creation will fail if the reference isn't present. IfNotPresent: the kubelet pulls if the reference isn't already present on disk; container creation will fail if the reference isn't present and the pull fails. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise.

  - **image.reference** (string)

    Required: Image or artifact reference to be used. Behaves in the same way as pod.spec.containers[*].image. Pull secrets will be assembled in the same way as for the container image by looking up node credentials, SA image pull secrets, and pod spec image pull secrets. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.

- **nfs** (NFSVolumeSource)

  nfs represents an NFS mount on the host that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

  <a name="NFSVolumeSource"></a>
  *Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling.*

  - **nfs.path** (string), required

    path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

  - **nfs.server** (string), required

    server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

  - **nfs.readOnly** (boolean)

    readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

- **photonPersistentDisk** (PhotonPersistentDiskVolumeSource)

  photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine

  <a name="PhotonPersistentDiskVolumeSource"></a>
  *Represents a Photon Controller persistent disk resource.*

  - **photonPersistentDisk.pdID** (string), required

    pdID is the ID that identifies Photon Controller persistent disk

  - **photonPersistentDisk.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

- **portworxVolume** (PortworxVolumeSource)

  portworxVolume represents a portworx volume attached and mounted on kubelets host machine

  <a name="PortworxVolumeSource"></a>
  *PortworxVolumeSource represents a Portworx volume resource.*

  - **portworxVolume.volumeID** (string), required

    volumeID uniquely identifies a Portworx volume

  - **portworxVolume.fsType** (string)

    fSType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified.
  - **portworxVolume.readOnly** (boolean)

    readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

- **quobyte** (QuobyteVolumeSource)

  quobyte represents a Quobyte mount on the host that shares a pod's lifetime

  <a name="QuobyteVolumeSource"></a>
  *Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling.*

  - **quobyte.registry** (string), required

    registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes

  - **quobyte.volume** (string), required

    volume is a string that references an already created Quobyte volume by name.

  - **quobyte.group** (string)

    group to map volume access to. Default is no group

  - **quobyte.readOnly** (boolean)

    readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.

  - **quobyte.tenant** (string)

    tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes, value is set by the plugin

  - **quobyte.user** (string)

    user to map volume access to. Defaults to serviceaccount user

- **rbd** (RBDVolumeSource)

  rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md

  <a name="RBDVolumeSource"></a>
  *Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling.*

  - **rbd.image** (string), required

    image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.monitors** ([]string), required

    *Atomic: will be replaced during a merge*
    
    monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.fsType** (string)

    fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd

  - **rbd.keyring** (string)

    keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.pool** (string)

    pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.readOnly** (boolean)

    readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.secretRef** (<a href="">LocalObjectReference</a>)

    secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.user** (string)

    user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

- **scaleIO** (ScaleIOVolumeSource)

  scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.

  <a name="ScaleIOVolumeSource"></a>
  *ScaleIOVolumeSource represents a persistent ScaleIO volume*

  - **scaleIO.gateway** (string), required

    gateway is the host address of the ScaleIO API Gateway.

  - **scaleIO.secretRef** (<a href="">LocalObjectReference</a>), required

    secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail.

  - **scaleIO.system** (string), required

    system is the name of the storage system as configured in ScaleIO.

  - **scaleIO.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs".

  - **scaleIO.protectionDomain** (string)

    protectionDomain is the name of the ScaleIO Protection Domain for the configured storage.

  - **scaleIO.readOnly** (boolean)

    readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

  - **scaleIO.sslEnabled** (boolean)

    sslEnabled Flag enable/disable SSL communication with Gateway, default false

  - **scaleIO.storageMode** (string)

    storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned.

  - **scaleIO.storagePool** (string)

    storagePool is the ScaleIO Storage Pool associated with the protection domain.

  - **scaleIO.volumeName** (string)

    volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source.

- **storageos** (StorageOSVolumeSource)

  storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes.

  <a name="StorageOSVolumeSource"></a>
  *Represents a StorageOS persistent volume resource.*

  - **storageos.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

  - **storageos.readOnly** (boolean)

    readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

  - **storageos.secretRef** (<a href="">LocalObjectReference</a>)

    secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.

  - **storageos.volumeName** (string)

    volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.

  - **storageos.volumeNamespace** (string)

    volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.

- **vsphereVolume** (VsphereVirtualDiskVolumeSource)

  vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine

  <a name="VsphereVirtualDiskVolumeSource"></a>
  *Represents a vSphere volume resource.*

  - **vsphereVolume.volumePath** (string), required

    volumePath is the path that identifies vSphere volume vmdk

  - **vsphereVolume.fsType** (string)

    fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

  - **vsphereVolume.storagePolicyID** (string)

    storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName.

  - **vsphereVolume.storagePolicyName** (string)

    storagePolicyName is the storage Policy Based Management (SPBM) profile name.

### Deprecated

- **gitRepo** (GitRepoVolumeSource)

  gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.

  <a name="GitRepoVolumeSource"></a>
  *Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling.
  
  DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.*

  - **gitRepo.repository** (string), required

    repository is the URL

  - **gitRepo.directory** (string)

    directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name.

  - **gitRepo.revision** (string)

    revision is the commit hash for the specified revision.

## DownwardAPIVolumeFile {#DownwardAPIVolumeFile}

DownwardAPIVolumeFile represents information to create the file containing the pod field

<hr>

- **path** (string), required

  Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..'

- **fieldRef** (<a href="">ObjectFieldSelector</a>)

  Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported.

- **mode** (int32)

  Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.

- **resourceFieldRef** (<a href="">ResourceFieldSelector</a>)

  Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.

## KeyToPath {#KeyToPath}

Maps a string key to a path within a volume.

<hr>

- **key** (string), required

  key is the key to project.

- **path** (string), required

  path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.

- **mode** (int32)

  mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
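The generic ephemeral volume fields described above (`ephemeral.volumeClaimTemplate`, with only `metadata` labels/annotations and a full `spec` allowed) can be sketched as a minimal Pod manifest. This example is an illustration, not part of the generated reference; the pod name, image, and StorageClass name are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                            # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image for illustration
    volumeMounts:
    - mountPath: /scratch
      name: scratch-volume
  volumes:
  - name: scratch-volume
    ephemeral:
      volumeClaimTemplate:
        metadata:
          labels:                      # only labels and annotations are allowed here
            type: ephemeral
        spec:                          # same fields as a regular PersistentVolumeClaim
          accessModes: ["ReadWriteOnce"]
          storageClassName: "scratch"  # hypothetical StorageClass
          resources:
            requests:
              storage: 1Gi
```

Following the naming rule above (`<pod name>-<volume name>`), the PVC provisioned for this pod would be named `app-scratch-volume`, owned by the pod, and deleted together with it.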
{"questions":"kubernetes reference title CSIStorageCapacity weight 5 contenttype apireference kind CSIStorageCapacity apimetadata apiVersion storage k8s io v1 CSIStorageCapacity stores the result of one CSI GetCapacity call autogenerated true import k8s io api storage v1","answers":"---\napi_metadata:\n  apiVersion: \"storage.k8s.io\/v1\"\n  import: \"k8s.io\/api\/storage\/v1\"\n  kind: \"CSIStorageCapacity\"\ncontent_type: \"api_reference\"\ndescription: \"CSIStorageCapacity stores the result of one CSI GetCapacity call.\"\ntitle: \"CSIStorageCapacity\"\nweight: 5\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: storage.k8s.io\/v1`\n\n`import \"k8s.io\/api\/storage\/v1\"`\n\n\n## CSIStorageCapacity {#CSIStorageCapacity}\n\nCSIStorageCapacity stores the result of one CSI GetCapacity call. For a given StorageClass, this describes the available capacity in a particular topology segment.  
This can be used when considering where to instantiate new PersistentVolumes.\n\nFor example this can express things like: - StorageClass \"standard\" has \"1234 GiB\" available in \"topology.kubernetes.io\/zone=us-east1\" - StorageClass \"localssd\" has \"10 GiB\" available in \"kubernetes.io\/hostname=knode-abc123\"\n\nThe following three cases all imply that no capacity is available for a certain combination: - no object exists with suitable topology and storage class name - such an object exists, but the capacity is unset - such an object exists, but the capacity is zero\n\nThe producer of these objects can decide which approach is more suitable.\n\nThey are consumed by the kube-scheduler when a CSI driver opts into capacity-aware scheduling with CSIDriverSpec.StorageCapacity. The scheduler compares the MaximumVolumeSize against the requested size of pending volumes to filter out unsuitable nodes. If MaximumVolumeSize is unset, it falls back to a comparison against the less precise Capacity. If that is also unset, the scheduler assumes that capacity is insufficient and tries some other node.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1\n\n\n- **kind**: CSIStorageCapacity\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. The name has no particular meaning. It must be a DNS subdomain (dots allowed, 253 characters). To ensure that there are no conflicts with other CSI drivers on the cluster, the recommendation is to use csisc-\\<uuid>, a generated name, or a reverse-domain name which ends with the unique CSI driver name.\n  \n  Objects are namespaced.\n  \n  More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **storageClassName** (string), required\n\n  storageClassName represents the name of the StorageClass that the reported capacity applies to. It must meet the same requirements as the name of a StorageClass object (non-empty, DNS subdomain). 
If that object no longer exists, the CSIStorageCapacity object is obsolete and should be removed by its creator. This field is immutable.\n\n- **capacity** (<a href=\"\">Quantity<\/a>)\n\n  capacity is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the previous fields.\n  \n  The semantic is currently (CSI spec 1.2) defined as: The available capacity, in bytes, of the storage that can be used to provision volumes. If not set, that information is currently unavailable.\n\n- **maximumVolumeSize** (<a href=\"\">Quantity<\/a>)\n\n  maximumVolumeSize is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the previous fields.\n  \n  This is defined since CSI spec 1.4.0 as the largest size that may be used in a CreateVolumeRequest.capacity_range.required_bytes field to create a volume with the same parameters as those in GetCapacityRequest. The corresponding value in the Kubernetes API is ResourceRequirements.Requests in a volume claim.\n\n- **nodeTopology** (<a href=\"\">LabelSelector<\/a>)\n\n  nodeTopology defines which nodes have access to the storage for which capacity was reported. If not set, the storage is not accessible from any node in the cluster. If empty, the storage is accessible from all nodes. 
This field is immutable.\n\n\n\n\n\n## CSIStorageCapacityList {#CSIStorageCapacityList}\n\nCSIStorageCapacityList is a collection of CSIStorageCapacity objects.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1\n\n\n- **kind**: CSIStorageCapacityList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">CSIStorageCapacity<\/a>), required\n\n  items is the list of CSIStorageCapacity objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified CSIStorageCapacity\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/namespaces\/{namespace}\/csistoragecapacities\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSIStorageCapacity\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIStorageCapacity<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind CSIStorageCapacity\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/namespaces\/{namespace}\/csistoragecapacities\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in 
query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIStorageCapacityList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind CSIStorageCapacity\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/csistoragecapacities\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIStorageCapacityList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a CSIStorageCapacity\n\n#### HTTP Request\n\nPOST \/apis\/storage.k8s.io\/v1\/namespaces\/{namespace}\/csistoragecapacities\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">CSIStorageCapacity<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): 
string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIStorageCapacity<\/a>): OK\n\n201 (<a href=\"\">CSIStorageCapacity<\/a>): Created\n\n202 (<a href=\"\">CSIStorageCapacity<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified CSIStorageCapacity\n\n#### HTTP Request\n\nPUT \/apis\/storage.k8s.io\/v1\/namespaces\/{namespace}\/csistoragecapacities\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSIStorageCapacity\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">CSIStorageCapacity<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIStorageCapacity<\/a>): OK\n\n201 (<a href=\"\">CSIStorageCapacity<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified CSIStorageCapacity\n\n#### HTTP Request\n\nPATCH \/apis\/storage.k8s.io\/v1\/namespaces\/{namespace}\/csistoragecapacities\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSIStorageCapacity\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a 
href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIStorageCapacity<\/a>): OK\n\n201 (<a href=\"\">CSIStorageCapacity<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a CSIStorageCapacity\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1\/namespaces\/{namespace}\/csistoragecapacities\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSIStorageCapacity\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of CSIStorageCapacity\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1\/namespaces\/{namespace}\/csistoragecapacities\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): 
string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   storage k8s io v1    import   k8s io api storage v1    kind   CSIStorageCapacity  content type   api reference  description   CSIStorageCapacity stores the result of one CSI GetCapacity call   title   CSIStorageCapacity  weight  5 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  storage k8s io v1    import  k8s io api storage v1        CSIStorageCapacity   CSIStorageCapacity   CSIStorageCapacity stores the result of one CSI GetCapacity call  For a given StorageClass  this describes the available capacity in a particular topology segment   This can be used when considering where to instantiate new PersistentVolumes   For example this can express things like    StorageClass  standard  has  1234 GiB  available in  topology kubernetes io zone us east1    StorageClass  localssd  has  10 GiB  available in  kubernetes io hostname knode abc123   The 
following three cases all imply that no capacity is available for a certain combination    no object exists with suitable topology and storage class name   such an object exists  but the capacity is unset   such an object exists  but the capacity is zero  The producer of these objects can decide which approach is more suitable   They are consumed by the kube scheduler when a CSI driver opts into capacity aware scheduling with CSIDriverSpec StorageCapacity  The scheduler compares the MaximumVolumeSize against the requested size of pending volumes to filter out unsuitable nodes  If MaximumVolumeSize is unset  it falls back to a comparison against the less precise Capacity  If that is also unset  the scheduler assumes that capacity is insufficient and tries some other node    hr       apiVersion    storage k8s io v1       kind    CSIStorageCapacity       metadata     a href    ObjectMeta  a      Standard object s metadata  The name has no particular meaning  It must be a DNS subdomain  dots allowed  253 characters   To ensure that there are no conflicts with other CSI drivers on the cluster  the recommendation is to use csisc   uuid   a generated name  or a reverse domain name which ends with the unique CSI driver name       Objects are namespaced       More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      storageClassName    string   required    storageClassName represents the name of the StorageClass that the reported capacity applies to  It must meet the same requirements as the name of a StorageClass object  non empty  DNS subdomain   If that object no longer exists  the CSIStorageCapacity object is obsolete and should be removed by its creator  This field is immutable       capacity     a href    Quantity  a      capacity is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the previous fields       The semantic is currently  CSI 
spec 1 2  defined as  The available capacity  in bytes  of the storage that can be used to provision volumes  If not set  that information is currently unavailable       maximumVolumeSize     a href    Quantity  a      maximumVolumeSize is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the previous fields       This is defined since CSI spec 1 4 0 as the largest size that may be used in a CreateVolumeRequest capacity range required bytes field to create a volume with the same parameters as those in GetCapacityRequest  The corresponding value in the Kubernetes API is ResourceRequirements Requests in a volume claim       nodeTopology     a href    LabelSelector  a      nodeTopology defines which nodes have access to the storage for which capacity was reported  If not set  the storage is not accessible from any node in the cluster  If empty  the storage is accessible from all nodes  This field is immutable          CSIStorageCapacityList   CSIStorageCapacityList   CSIStorageCapacityList is a collection of CSIStorageCapacity objects    hr       apiVersion    storage k8s io v1       kind    CSIStorageCapacityList       metadata     a href    ListMeta  a      Standard list metadata More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      items       a href    CSIStorageCapacity  a    required    items is the list of CSIStorageCapacity objects          Operations   Operations      hr             get  read the specified CSIStorageCapacity       HTTP Request  GET  apis storage k8s io v1 namespaces  namespace  csistoragecapacities  name        Parameters       name     in path    string  required    name of the CSIStorageCapacity       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CSIStorageCapacity  a    OK  401  
Unauthorized        list  list or watch objects of kind CSIStorageCapacity       HTTP Request  GET  apis storage k8s io v1 namespaces  namespace  csistoragecapacities       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    CSIStorageCapacityList  a    OK  401  Unauthorized        list  list or watch objects of kind CSIStorageCapacity       HTTP Request  GET  apis storage k8s io v1 csistoragecapacities       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        
timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    CSIStorageCapacityList  a    OK  401  Unauthorized        create  create a CSIStorageCapacity       HTTP Request  POST  apis storage k8s io v1 namespaces  namespace  csistoragecapacities       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    CSIStorageCapacity  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CSIStorageCapacity  a    OK  201   a href    CSIStorageCapacity  a    Created  202   a href    CSIStorageCapacity  a    Accepted  401  Unauthorized        update  replace the specified CSIStorageCapacity       HTTP Request  PUT  apis storage k8s io v1 namespaces  namespace  csistoragecapacities  name        Parameters       name     in path    string  required    name of the CSIStorageCapacity       namespace     in path    string  required     a href    namespace  a        body     a href    CSIStorageCapacity  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CSIStorageCapacity  a    OK  201   a href    CSIStorageCapacity  a    Created  401  Unauthorized        patch  partially update the specified CSIStorageCapacity       HTTP Request  PATCH  apis storage k8s io v1 namespaces  namespace  csistoragecapacities  name        Parameters       name     in path    string  required    name of the 
CSIStorageCapacity       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CSIStorageCapacity  a    OK  201   a href    CSIStorageCapacity  a    Created  401  Unauthorized        delete  delete a CSIStorageCapacity       HTTP Request  DELETE  apis storage k8s io v1 namespaces  namespace  csistoragecapacities  name        Parameters       name     in path    string  required    name of the CSIStorageCapacity       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of CSIStorageCapacity       HTTP Request  DELETE  apis storage k8s io v1 namespaces  namespace  csistoragecapacities       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    
string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
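The record above describes the kube-scheduler's capacity-aware filtering rule in prose: prefer the precise MaximumVolumeSize, fall back to the less precise Capacity, and treat missing information as insufficient. A minimal Python sketch of that decision rule, for illustration only — the function name and byte-size arguments are invented here, and this is not the actual kube-scheduler implementation:

```python
# Illustrative sketch of the fallback rule a CSIStorageCapacity object
# implies for one (storage class, topology segment) combination.
from typing import Optional

def node_has_capacity(requested_bytes: int,
                      maximum_volume_size: Optional[int],
                      capacity: Optional[int]) -> bool:
    """Return True if the reported capacity suggests the volume fits."""
    if maximum_volume_size is not None:
        # Precise per-volume limit (defined since CSI spec 1.4.0).
        return requested_bytes <= maximum_volume_size
    if capacity is not None:
        # Less precise aggregate capacity (CSI spec 1.2 semantics);
        # zero capacity means nothing can be provisioned.
        return requested_bytes <= capacity
    # Both unset: the scheduler assumes capacity is insufficient.
    return False

# A requested 5 GiB volume against the examples from the record:
GIB = 1024 ** 3
print(node_has_capacity(5 * GIB, None, 1234 * GIB))  # True  ("standard" in us-east1)
print(node_has_capacity(5 * GIB, None, 10 * GIB))    # True  ("localssd" on knode-abc123)
print(node_has_capacity(5 * GIB, None, None))        # False (no capacity information)
```

Note the ordering: when both fields are set, the precise maximumVolumeSize is authoritative and the coarse capacity is never consulted.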
{"questions":"kubernetes reference apiVersion storage k8s io v1beta1 kind VolumeAttributesClass import k8s io api storage v1beta1 contenttype apireference VolumeAttributesClass represents a specification of mutable volume attributes defined by the CSI driver title VolumeAttributesClass v1beta1 apimetadata weight 12 autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"storage.k8s.io\/v1beta1\"\n  import: \"k8s.io\/api\/storage\/v1beta1\"\n  kind: \"VolumeAttributesClass\"\ncontent_type: \"api_reference\"\ndescription: \"VolumeAttributesClass represents a specification of mutable volume attributes defined by the CSI driver.\"\ntitle: \"VolumeAttributesClass v1beta1\"\nweight: 12\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: storage.k8s.io\/v1beta1`\n\n`import \"k8s.io\/api\/storage\/v1beta1\"`\n\n\n## VolumeAttributesClass {#VolumeAttributesClass}\n\nVolumeAttributesClass represents a specification of mutable volume attributes defined by the CSI driver. The class can be specified during dynamic provisioning of PersistentVolumeClaims, and changed in the PersistentVolumeClaim spec after provisioning.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1beta1\n\n\n- **kind**: VolumeAttributesClass\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **driverName** (string), required\n\n  Name of the CSI driver. This field is immutable.\n\n- **parameters** (map[string]string)\n\n  parameters hold volume attributes defined by the CSI driver. These values are opaque to Kubernetes and are passed directly to the CSI driver. The underlying storage provider supports changing these attributes on an existing volume; however, the parameters field itself is immutable. To invoke a volume update, a new VolumeAttributesClass should be created with new parameters, and the PersistentVolumeClaim should be updated to reference the new VolumeAttributesClass.\n  \n  This field is required and must contain at least one key\/value pair. The keys cannot be empty, and the maximum number of parameters is 512, with a cumulative max size of 256K. If the CSI driver rejects invalid parameters, the target PersistentVolumeClaim will be set to an \"Infeasible\" state in the modifyVolumeStatus field.\n\n\n\n\n\n## VolumeAttributesClassList {#VolumeAttributesClassList}\n\nVolumeAttributesClassList is a collection of VolumeAttributesClass objects.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1beta1\n\n\n- **kind**: VolumeAttributesClassList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">VolumeAttributesClass<\/a>), required\n\n  items is the list of VolumeAttributesClass objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified VolumeAttributesClass\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttributesClass\n\n\n- **pretty** (*in query*): string\n\n  <a 
href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClass<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind VolumeAttributesClass\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClassList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a VolumeAttributesClass\n\n#### HTTP Request\n\nPOST \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">VolumeAttributesClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClass<\/a>): OK\n\n201 (<a href=\"\">VolumeAttributesClass<\/a>): Created\n\n202 (<a 
href=\"\">VolumeAttributesClass<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified VolumeAttributesClass\n\n#### HTTP Request\n\nPUT \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttributesClass\n\n\n- **body**: <a href=\"\">VolumeAttributesClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClass<\/a>): OK\n\n201 (<a href=\"\">VolumeAttributesClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified VolumeAttributesClass\n\n#### HTTP Request\n\nPATCH \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttributesClass\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClass<\/a>): OK\n\n201 (<a href=\"\">VolumeAttributesClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a VolumeAttributesClass\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttributesClass\n\n\n- **body**: <a 
href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClass<\/a>): OK\n\n202 (<a href=\"\">VolumeAttributesClass<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of VolumeAttributesClass\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   storage k8s io v1beta1    import   k8s io api storage v1beta1    kind   VolumeAttributesClass 
{"questions":"content type api reference description VolumeAttributesClass represents a specification of mutable volume attributes defined by the CSI driver title VolumeAttributesClass v1beta1 weight 12 auto generated true","answers":"---\napi_metadata:\n  apiVersion: \"storage.k8s.io\/v1beta1\"\n  import: \"k8s.io\/api\/storage\/v1beta1\"\n  kind: \"VolumeAttributesClass\"\ncontent_type: \"api_reference\"\ndescription: \"VolumeAttributesClass represents a specification of mutable volume attributes defined by the CSI driver.\"\ntitle: \"VolumeAttributesClass v1beta1\"\nweight: 12\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: storage.k8s.io\/v1beta1`\n\n`import \"k8s.io\/api\/storage\/v1beta1\"`\n\n\n## VolumeAttributesClass {#VolumeAttributesClass}\n\nVolumeAttributesClass represents a specification of mutable volume attributes defined by the CSI driver. The class can be specified during dynamic provisioning of PersistentVolumeClaims, and changed in the PersistentVolumeClaim spec after provisioning.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1beta1\n\n\n- **kind**: VolumeAttributesClass\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **driverName** (string), required\n\n  Name of the CSI driver. This field is immutable.\n\n- **parameters** (map[string]string)\n\n  parameters hold volume attributes defined by the CSI driver. These values are opaque to the Kubernetes and are passed directly to the CSI driver. The underlying storage provider supports changing these attributes on an existing volume, however the parameters field itself is immutable. To invoke a volume update, a new VolumeAttributesClass should be created with new parameters, and the PersistentVolumeClaim should be 
updated to reference the new VolumeAttributesClass.\n  \n  This field is required and must contain at least one key\/value pair. The keys cannot be empty, and the maximum number of parameters is 512, with a cumulative max size of 256K. If the CSI driver rejects invalid parameters, the target PersistentVolumeClaim will be set to an \"Infeasible\" state in the modifyVolumeStatus field.\n\n\n\n\n\n## VolumeAttributesClassList {#VolumeAttributesClassList}\n\nVolumeAttributesClassList is a collection of VolumeAttributesClass objects.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1beta1\n\n\n- **kind**: VolumeAttributesClassList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">VolumeAttributesClass<\/a>), required\n\n  items is the list of VolumeAttributesClass objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified VolumeAttributesClass\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttributesClass\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClass<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind VolumeAttributesClass\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- 
**resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClassList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a VolumeAttributesClass\n\n#### HTTP Request\n\nPOST \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">VolumeAttributesClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClass<\/a>): OK\n\n201 (<a href=\"\">VolumeAttributesClass<\/a>): Created\n\n202 (<a href=\"\">VolumeAttributesClass<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified VolumeAttributesClass\n\n#### HTTP Request\n\nPUT \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttributesClass\n\n\n- **body**: <a href=\"\">VolumeAttributesClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClass<\/a>): OK\n\n201 (<a href=\"\">VolumeAttributesClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified VolumeAttributesClass\n\n#### HTTP Request\n\nPATCH \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  
name of the VolumeAttributesClass\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClass<\/a>): OK\n\n201 (<a href=\"\">VolumeAttributesClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a VolumeAttributesClass\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttributesClass\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttributesClass<\/a>): OK\n\n202 (<a href=\"\">VolumeAttributesClass<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of VolumeAttributesClass\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1beta1\/volumeattributesclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): 
string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n"}
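The VolumeAttributesClass schema and operation paths above can be sketched in a few lines. This is a minimal illustration only: the `gold-tier` class name, the `example.csi.vendor.com` driver, and the helper functions are assumptions, not part of any Kubernetes client library, and the API server performs the authoritative validation.

```python
MAX_PARAMETERS = 512  # documented cap on the number of parameters


def check_parameters(parameters: dict) -> list:
    """Client-side echo of the documented constraints on `parameters`."""
    problems = []
    if not parameters:
        problems.append("parameters must contain at least one key/value pair")
    if any(key == "" for key in parameters):
        problems.append("parameter keys cannot be empty")
    if len(parameters) > MAX_PARAMETERS:
        problems.append("at most %d parameters are allowed" % MAX_PARAMETERS)
    return problems


def item_path(name: str) -> str:
    """HTTP path used by the get/update/patch/delete operations above."""
    return "/apis/storage.k8s.io/v1beta1/volumeattributesclasses/" + name


vac = {
    "apiVersion": "storage.k8s.io/v1beta1",
    "kind": "VolumeAttributesClass",
    "metadata": {"name": "gold-tier"},       # illustrative class name
    "driverName": "example.csi.vendor.com",  # illustrative CSI driver
    "parameters": {"iops": "8000"},          # opaque, driver-defined keys
}

print(check_parameters(vac["parameters"]))  # → []
print(item_path(vac["metadata"]["name"]))
```

Because `parameters` is immutable, an update is expressed by creating a second class with the new values and patching the PersistentVolumeClaim's `volumeAttributesClassName` to reference it, as described above.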
{"questions":"kubernetes reference kind PersistentVolume weight 7 apiVersion v1 contenttype apireference apimetadata autogenerated true PersistentVolume PV is a storage resource provisioned by an administrator title PersistentVolume import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"PersistentVolume\"\ncontent_type: \"api_reference\"\ndescription: \"PersistentVolume (PV) is a storage resource provisioned by an administrator.\"\ntitle: \"PersistentVolume\"\nweight: 7\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## PersistentVolume {#PersistentVolume}\n\nPersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: PersistentVolume\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">PersistentVolumeSpec<\/a>)\n\n  spec defines a specification of a persistent volume owned by the cluster. Provisioned by an administrator. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#persistent-volumes\n\n- **status** (<a href=\"\">PersistentVolumeStatus<\/a>)\n\n  status represents the current information\/status for the persistent volume. Populated by the system. Read-only. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#persistent-volumes\n\n\n\n\n\n## PersistentVolumeSpec {#PersistentVolumeSpec}\n\nPersistentVolumeSpec is the specification of a persistent volume.\n\n<hr>\n\n- **accessModes** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  accessModes contains all ways the volume can be mounted. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#access-modes\n\n- **capacity** (map[string]<a href=\"\">Quantity<\/a>)\n\n  capacity is the description of the persistent volume's resources and capacity. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#capacity\n\n- **claimRef** (<a href=\"\">ObjectReference<\/a>)\n\n  claimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#binding\n\n- **mountOptions** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  mountOptions is the list of mount options, e.g. [\"ro\", \"soft\"]. Not validated - mount will simply fail if one is invalid. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes\/#mount-options\n\n- **nodeAffinity** (VolumeNodeAffinity)\n\n  nodeAffinity defines constraints that limit what nodes this volume can be accessed from. 
This field influences the scheduling of pods that use this volume.\n\n  <a name=\"VolumeNodeAffinity\"><\/a>\n  *VolumeNodeAffinity defines constraints that limit what nodes this volume can be accessed from.*\n\n  - **nodeAffinity.required** (NodeSelector)\n\n    required specifies hard node constraints that must be met.\n\n    <a name=\"NodeSelector\"><\/a>\n    *A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.*\n\n    - **nodeAffinity.required.nodeSelectorTerms** ([]NodeSelectorTerm), required\n\n      *Atomic: will be replaced during a merge*\n      \n      Required. A list of node selector terms. The terms are ORed.\n\n      <a name=\"NodeSelectorTerm\"><\/a>\n      *A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.*\n\n      - **nodeAffinity.required.nodeSelectorTerms.matchExpressions** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n        *Atomic: will be replaced during a merge*\n        \n        A list of node selector requirements by node's labels.\n\n      - **nodeAffinity.required.nodeSelectorTerms.matchFields** ([]<a href=\"\">NodeSelectorRequirement<\/a>)\n\n        *Atomic: will be replaced during a merge*\n        \n        A list of node selector requirements by node's fields.\n\n- **persistentVolumeReclaimPolicy** (string)\n\n  persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim. Valid options are Retain (default for manually created PersistentVolumes), Delete (default for dynamically provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be supported by the volume plugin underlying this PersistentVolume. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#reclaiming\n\n- **storageClassName** (string)\n\n  storageClassName is the name of StorageClass to which this persistent volume belongs. Empty value means that this volume does not belong to any StorageClass.\n\n- **volumeAttributesClassName** (string)\n\n  Name of VolumeAttributesClass to which this persistent volume belongs. Empty value is not allowed. When this field is not set, it indicates that this volume does not belong to any VolumeAttributesClass. This field is mutable and can be changed by the CSI driver after a volume has been updated successfully to a new class. For an unbound PersistentVolume, the volumeAttributesClassName will be matched with unbound PersistentVolumeClaims during the binding process. This is a beta field and requires enabling VolumeAttributesClass feature (off by default).\n\n- **volumeMode** (string)\n\n  volumeMode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. Value of Filesystem is implied when not included in spec.\n\n\n\n### Local\n\n\n- **hostPath** (HostPathVolumeSource)\n\n  hostPath represents a directory on the host. Provisioned by a developer or tester. This is useful for single-node development and testing only! On-host storage is not supported in any way and WILL NOT WORK in a multi-node cluster. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#hostpath\n\n  <a name=\"HostPathVolumeSource\"><\/a>\n  *Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling.*\n\n  - **hostPath.path** (string), required\n\n    path of the directory on the host. If the path is a symlink, it will follow the link to the real path. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#hostpath\n\n  - **hostPath.type** (string)\n\n    type for HostPath Volume Defaults to \"\" More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#hostpath\n\n- **local** (LocalVolumeSource)\n\n  local represents directly-attached storage with node affinity\n\n  <a name=\"LocalVolumeSource\"><\/a>\n  *Local represents directly-attached storage with node affinity (Beta feature)*\n\n  - **local.path** (string), required\n\n    path of the full path to the volume on the node. It can be either a directory or block device (disk, partition, ...).\n\n  - **local.fsType** (string)\n\n    fsType is the filesystem type to mount. It applies only when the Path is a block device. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". The default value is to auto-select a filesystem if unspecified.\n\n### Persistent volumes\n\n\n- **awsElasticBlockStore** (AWSElasticBlockStoreVolumeSource)\n\n  awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#awselasticblockstore\n\n  <a name=\"AWSElasticBlockStoreVolumeSource\"><\/a>\n  *Represents a Persistent Disk resource in AWS.\n  \n  An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read\/write once. AWS EBS volumes support ownership management and SELinux relabeling.*\n\n  - **awsElasticBlockStore.volumeID** (string), required\n\n    volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#awselasticblockstore\n\n  - **awsElasticBlockStore.fsType** (string)\n\n    fsType is the filesystem type of the volume that you want to mount. 
Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#awselasticblockstore\n\n  - **awsElasticBlockStore.partition** (int32)\n\n    partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume \/dev\/sda1, you specify the partition as \"1\". Similarly, the volume partition for \/dev\/sda is \"0\" (or you can leave the property empty).\n\n  - **awsElasticBlockStore.readOnly** (boolean)\n\n    readOnly value true will force the readOnly setting in VolumeMounts. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#awselasticblockstore\n\n- **azureDisk** (AzureDiskVolumeSource)\n\n  azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.\n\n  <a name=\"AzureDiskVolumeSource\"><\/a>\n  *AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.*\n\n  - **azureDisk.diskName** (string), required\n\n    diskName is the Name of the data disk in the blob storage\n\n  - **azureDisk.diskURI** (string), required\n\n    diskURI is the URI of data disk in the blob storage\n\n  - **azureDisk.cachingMode** (string)\n\n    cachingMode is the Host Caching mode: None, Read Only, Read Write.\n\n  - **azureDisk.fsType** (string)\n\n    fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified.\n\n  - **azureDisk.kind** (string)\n\n    kind expected values are Shared: multiple blob disks per storage account  Dedicated: single blob disk per storage account  Managed: azure managed data disk (only in managed availability set). defaults to shared\n\n  - **azureDisk.readOnly** (boolean)\n\n    readOnly Defaults to false (read\/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n- **azureFile** (AzureFilePersistentVolumeSource)\n\n  azureFile represents an Azure File Service mount on the host and bind mount to the pod.\n\n  <a name=\"AzureFilePersistentVolumeSource\"><\/a>\n  *AzureFile represents an Azure File Service mount on the host and bind mount to the pod.*\n\n  - **azureFile.secretName** (string), required\n\n    secretName is the name of secret that contains Azure Storage Account Name and Key\n\n  - **azureFile.shareName** (string), required\n\n    shareName is the azure Share Name\n\n  - **azureFile.readOnly** (boolean)\n\n    readOnly defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n  - **azureFile.secretNamespace** (string)\n\n    secretNamespace is the namespace of the secret that contains Azure Storage Account Name and Key default is the same as the Pod\n\n- **cephfs** (CephFSPersistentVolumeSource)\n\n  cephFS represents a Ceph FS mount on the host that shares a pod's lifetime\n\n  <a name=\"CephFSPersistentVolumeSource\"><\/a>\n  *Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling.*\n\n  - **cephfs.monitors** ([]string), required\n\n    *Atomic: will be replaced during a merge*\n    \n    monitors is Required: Monitors is a collection of Ceph monitors More info: https:\/\/examples.k8s.io\/volumes\/cephfs\/README.md#how-to-use-it\n\n  - **cephfs.path** (string)\n\n    path is Optional: Used as the mounted root, rather than the full Ceph tree, default is \/\n\n  - **cephfs.readOnly** (boolean)\n\n    readOnly is Optional: Defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
More info: https:\/\/examples.k8s.io\/volumes\/cephfs\/README.md#how-to-use-it\n\n  - **cephfs.secretFile** (string)\n\n    secretFile is Optional: SecretFile is the path to key ring for User, default is \/etc\/ceph\/user.secret More info: https:\/\/examples.k8s.io\/volumes\/cephfs\/README.md#how-to-use-it\n\n  - **cephfs.secretRef** (SecretReference)\n\n    secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https:\/\/examples.k8s.io\/volumes\/cephfs\/README.md#how-to-use-it\n\n    <a name=\"SecretReference\"><\/a>\n    *SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace*\n\n    - **cephfs.secretRef.name** (string)\n\n      name is unique within a namespace to reference a secret resource.\n\n    - **cephfs.secretRef.namespace** (string)\n\n      namespace defines the space within which the secret name must be unique.\n\n  - **cephfs.user** (string)\n\n    user is Optional: User is the rados user name, default is admin More info: https:\/\/examples.k8s.io\/volumes\/cephfs\/README.md#how-to-use-it\n\n- **cinder** (CinderPersistentVolumeSource)\n\n  cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https:\/\/examples.k8s.io\/mysql-cinder-pd\/README.md\n\n  <a name=\"CinderPersistentVolumeSource\"><\/a>\n  *Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling.*\n\n  - **cinder.volumeID** (string), required\n\n    volumeID used to identify the volume in cinder. More info: https:\/\/examples.k8s.io\/mysql-cinder-pd\/README.md\n\n  - **cinder.fsType** (string)\n\n    fsType Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". 
Implicitly inferred to be \"ext4\" if unspecified. More info: https:\/\/examples.k8s.io\/mysql-cinder-pd\/README.md\n\n  - **cinder.readOnly** (boolean)\n\n    readOnly is Optional: Defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https:\/\/examples.k8s.io\/mysql-cinder-pd\/README.md\n\n  - **cinder.secretRef** (SecretReference)\n\n    secretRef is Optional: points to a secret object containing parameters used to connect to OpenStack.\n\n    <a name=\"SecretReference\"><\/a>\n    *SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace*\n\n    - **cinder.secretRef.name** (string)\n\n      name is unique within a namespace to reference a secret resource.\n\n    - **cinder.secretRef.namespace** (string)\n\n      namespace defines the space within which the secret name must be unique.\n\n- **csi** (CSIPersistentVolumeSource)\n\n  csi represents storage that is handled by an external CSI driver (Beta feature).\n\n  <a name=\"CSIPersistentVolumeSource\"><\/a>\n  *Represents storage that is managed by an external CSI volume driver (Beta feature)*\n\n  - **csi.driver** (string), required\n\n    driver is the name of the driver to use for this volume. Required.\n\n  - **csi.volumeHandle** (string), required\n\n    volumeHandle is the unique volume name returned by the CSI volume plugin\u2019s CreateVolume to refer to the volume on all subsequent calls. Required.\n\n  - **csi.controllerExpandSecretRef** (SecretReference)\n\n    controllerExpandSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerExpandVolume call. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.\n\n    <a name=\"SecretReference\"><\/a>\n    *SecretReference represents a Secret Reference. 
It has enough information to retrieve secret in any namespace*\n\n    - **csi.controllerExpandSecretRef.name** (string)\n\n      name is unique within a namespace to reference a secret resource.\n\n    - **csi.controllerExpandSecretRef.namespace** (string)\n\n      namespace defines the space within which the secret name must be unique.\n\n  - **csi.controllerPublishSecretRef** (SecretReference)\n\n    controllerPublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerPublishVolume and ControllerUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.\n\n    <a name=\"SecretReference\"><\/a>\n    *SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace*\n\n    - **csi.controllerPublishSecretRef.name** (string)\n\n      name is unique within a namespace to reference a secret resource.\n\n    - **csi.controllerPublishSecretRef.namespace** (string)\n\n      namespace defines the space within which the secret name must be unique.\n\n  - **csi.fsType** (string)\n\n    fsType to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\".\n\n  - **csi.nodeExpandSecretRef** (SecretReference)\n\n    nodeExpandSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeExpandVolume call. This field is optional, may be omitted if no secret is required. If the secret object contains more than one secret, all secrets are passed.\n\n    <a name=\"SecretReference\"><\/a>\n    *SecretReference represents a Secret Reference. 
It has enough information to retrieve secret in any namespace*\n\n    - **csi.nodeExpandSecretRef.name** (string)\n\n      name is unique within a namespace to reference a secret resource.\n\n    - **csi.nodeExpandSecretRef.namespace** (string)\n\n      namespace defines the space within which the secret name must be unique.\n\n  - **csi.nodePublishSecretRef** (SecretReference)\n\n    nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.\n\n    <a name=\"SecretReference\"><\/a>\n    *SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace*\n\n    - **csi.nodePublishSecretRef.name** (string)\n\n      name is unique within a namespace to reference a secret resource.\n\n    - **csi.nodePublishSecretRef.namespace** (string)\n\n      namespace defines the space within which the secret name must be unique.\n\n  - **csi.nodeStageSecretRef** (SecretReference)\n\n    nodeStageSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeStageVolume and NodeStageVolume and NodeUnstageVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.\n\n    <a name=\"SecretReference\"><\/a>\n    *SecretReference represents a Secret Reference. 
It has enough information to retrieve secret in any namespace*\n\n    - **csi.nodeStageSecretRef.name** (string)\n\n      name is unique within a namespace to reference a secret resource.\n\n    - **csi.nodeStageSecretRef.namespace** (string)\n\n      namespace defines the space within which the secret name must be unique.\n\n  - **csi.readOnly** (boolean)\n\n    readOnly value to pass to ControllerPublishVolumeRequest. Defaults to false (read\/write).\n\n  - **csi.volumeAttributes** (map[string]string)\n\n    volumeAttributes of the volume to publish.\n\n- **fc** (FCVolumeSource)\n\n  fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod.\n\n  <a name=\"FCVolumeSource\"><\/a>\n  *Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read\/write once. Fibre Channel volumes support ownership management and SELinux relabeling.*\n\n  - **fc.fsType** (string)\n\n    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified.\n\n  - **fc.lun** (int32)\n\n    lun is Optional: FC target lun number\n\n  - **fc.readOnly** (boolean)\n\n    readOnly is Optional: Defaults to false (read\/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n  - **fc.targetWWNs** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    targetWWNs is Optional: FC target worldwide names (WWNs)\n\n  - **fc.wwids** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.\n\n- **flexVolume** (FlexPersistentVolumeSource)\n\n  flexVolume represents a generic volume resource that is provisioned\/attached using an exec based plugin.\n\n  <a name=\"FlexPersistentVolumeSource\"><\/a>\n  *FlexPersistentVolumeSource represents a generic persistent volume resource that is provisioned\/attached using an exec based plugin.*\n\n  - **flexVolume.driver** (string), required\n\n    driver is the name of the driver to use for this volume.\n\n  - **flexVolume.fsType** (string)\n\n    fsType is the Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". The default filesystem depends on FlexVolume script.\n\n  - **flexVolume.options** (map[string]string)\n\n    options is Optional: this field holds extra command options if any.\n\n  - **flexVolume.readOnly** (boolean)\n\n    readOnly is Optional: defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n  - **flexVolume.secretRef** (SecretReference)\n\n    secretRef is Optional: SecretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.\n\n    <a name=\"SecretReference\"><\/a>\n    *SecretReference represents a Secret Reference. 
It has enough information to retrieve secret in any namespace*\n\n    - **flexVolume.secretRef.name** (string)\n\n      name is unique within a namespace to reference a secret resource.\n\n    - **flexVolume.secretRef.namespace** (string)\n\n      namespace defines the space within which the secret name must be unique.\n\n- **flocker** (FlockerVolumeSource)\n\n  flocker represents a Flocker volume attached to a kubelet's host machine and exposed to the pod for its usage. This depends on the Flocker control service being running\n\n  <a name=\"FlockerVolumeSource\"><\/a>\n  *Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling.*\n\n  - **flocker.datasetName** (string)\n\n    datasetName is Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated\n\n  - **flocker.datasetUUID** (string)\n\n    datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset\n\n- **gcePersistentDisk** (GCEPersistentDiskVolumeSource)\n\n  gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#gcepersistentdisk\n\n  <a name=\"GCEPersistentDiskVolumeSource\"><\/a>\n  *Represents a Persistent Disk resource in Google Compute Engine.\n  \n  A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read\/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling.*\n\n  - **gcePersistentDisk.pdName** (string), required\n\n    pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#gcepersistentdisk\n\n  - **gcePersistentDisk.fsType** (string)\n\n    fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#gcepersistentdisk\n\n  - **gcePersistentDisk.partition** (int32)\n\n    partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume \/dev\/sda1, you specify the partition as \"1\". Similarly, the volume partition for \/dev\/sda is \"0\" (or you can leave the property empty). More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#gcepersistentdisk\n\n  - **gcePersistentDisk.readOnly** (boolean)\n\n    readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#gcepersistentdisk\n\n- **glusterfs** (GlusterfsPersistentVolumeSource)\n\n  glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod. Provisioned by an admin. More info: https:\/\/examples.k8s.io\/volumes\/glusterfs\/README.md\n\n  <a name=\"GlusterfsPersistentVolumeSource\"><\/a>\n  *Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling.*\n\n  - **glusterfs.endpoints** (string), required\n\n    endpoints is the endpoint name that details Glusterfs topology. More info: https:\/\/examples.k8s.io\/volumes\/glusterfs\/README.md#create-a-pod\n\n  - **glusterfs.path** (string), required\n\n    path is the Glusterfs volume path. 
More info: https:\/\/examples.k8s.io\/volumes\/glusterfs\/README.md#create-a-pod\n\n  - **glusterfs.endpointsNamespace** (string)\n\n    endpointsNamespace is the namespace that contains the Glusterfs endpoint. If this field is empty, the EndpointNamespace defaults to the same namespace as the bound PVC. More info: https:\/\/examples.k8s.io\/volumes\/glusterfs\/README.md#create-a-pod\n\n  - **glusterfs.readOnly** (boolean)\n\n    readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https:\/\/examples.k8s.io\/volumes\/glusterfs\/README.md#create-a-pod\n\n- **iscsi** (ISCSIPersistentVolumeSource)\n\n  iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin.\n\n  <a name=\"ISCSIPersistentVolumeSource\"><\/a>\n  *ISCSIPersistentVolumeSource represents an ISCSI disk. ISCSI volumes can only be mounted as read\/write once. ISCSI volumes support ownership management and SELinux relabeling.*\n\n  - **iscsi.iqn** (string), required\n\n    iqn is the Target iSCSI Qualified Name.\n\n  - **iscsi.lun** (int32), required\n\n    lun is the iSCSI Target Lun number.\n\n  - **iscsi.targetPortal** (string), required\n\n    targetPortal is the iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).\n\n  - **iscsi.chapAuthDiscovery** (boolean)\n\n    chapAuthDiscovery defines whether to support iSCSI Discovery CHAP authentication\n\n  - **iscsi.chapAuthSession** (boolean)\n\n    chapAuthSession defines whether to support iSCSI Session CHAP authentication\n\n  - **iscsi.fsType** (string)\n\n    fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#iscsi\n\n  - **iscsi.initiatorName** (string)\n\n    initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, a new iSCSI interface \<target portal>:\<volume name> will be created for the connection.\n\n  - **iscsi.iscsiInterface** (string)\n\n    iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp).\n\n  - **iscsi.portals** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    portals is the iSCSI Target Portal List. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).\n\n  - **iscsi.readOnly** (boolean)\n\n    readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false.\n\n  - **iscsi.secretRef** (SecretReference)\n\n    secretRef is the CHAP Secret for iSCSI target and initiator authentication\n\n    <a name=\"SecretReference\"><\/a>\n    *SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace*\n\n    - **iscsi.secretRef.name** (string)\n\n      name is unique within a namespace to reference a secret resource.\n\n    - **iscsi.secretRef.namespace** (string)\n\n      namespace defines the space within which the secret name must be unique.\n\n- **nfs** (NFSVolumeSource)\n\n  nfs represents an NFS mount on the host. Provisioned by an admin. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#nfs\n\n  <a name=\"NFSVolumeSource\"><\/a>\n  *Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling.*\n\n  - **nfs.path** (string), required\n\n    path is the path that is exported by the NFS server. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#nfs\n\n  - **nfs.server** (string), required\n\n    server is the hostname or IP address of the NFS server. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#nfs\n\n  - **nfs.readOnly** (boolean)\n\n    readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#nfs\n\n- **photonPersistentDisk** (PhotonPersistentDiskVolumeSource)\n\n  photonPersistentDisk represents a PhotonController persistent disk attached and mounted on the kubelet's host machine\n\n  <a name=\"PhotonPersistentDiskVolumeSource\"><\/a>\n  *Represents a Photon Controller persistent disk resource.*\n\n  - **photonPersistentDisk.pdID** (string), required\n\n    pdID is the ID that identifies the Photon Controller persistent disk\n\n  - **photonPersistentDisk.fsType** (string)\n\n    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified.\n\n- **portworxVolume** (PortworxVolumeSource)\n\n  portworxVolume represents a Portworx volume attached and mounted on the kubelet's host machine\n\n  <a name=\"PortworxVolumeSource\"><\/a>\n  *PortworxVolumeSource represents a Portworx volume resource.*\n\n  - **portworxVolume.volumeID** (string), required\n\n    volumeID uniquely identifies a Portworx volume\n\n  - **portworxVolume.fsType** (string)\n\n    fsType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\". Implicitly inferred to be \"ext4\" if unspecified.\n\n  - **portworxVolume.readOnly** (boolean)\n\n    readOnly defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n- **quobyte** (QuobyteVolumeSource)\n\n  quobyte represents a Quobyte mount on the host that shares a pod's lifetime\n\n  <a name=\"QuobyteVolumeSource\"><\/a>\n  *Represents a Quobyte mount that lasts the lifetime of a pod. 
Quobyte volumes do not support ownership management or SELinux relabeling.*\n\n  - **quobyte.registry** (string), required\n\n    registry represents a single or multiple Quobyte Registry services specified as a string of host:port pairs (multiple entries are separated with commas) which acts as the central registry for volumes\n\n  - **quobyte.volume** (string), required\n\n    volume is a string that references an already created Quobyte volume by name.\n\n  - **quobyte.group** (string)\n\n    group to map volume access to. Default is no group.\n\n  - **quobyte.readOnly** (boolean)\n\n    readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.\n\n  - **quobyte.tenant** (string)\n\n    tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes; the value is set by the plugin.\n\n  - **quobyte.user** (string)\n\n    user to map volume access to. Defaults to serviceaccount user.\n\n- **rbd** (RBDPersistentVolumeSource)\n\n  rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md\n\n  <a name=\"RBDPersistentVolumeSource\"><\/a>\n  *Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling.*\n\n  - **rbd.image** (string), required\n\n    image is the rados image name. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md#how-to-use-it\n\n  - **rbd.monitors** ([]string), required\n\n    *Atomic: will be replaced during a merge*\n    \n    monitors is a collection of Ceph monitors. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md#how-to-use-it\n\n  - **rbd.fsType** (string)\n\n    fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: \"ext4\", \"xfs\", \"ntfs\". 
Implicitly inferred to be \"ext4\" if unspecified. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#rbd\n\n  - **rbd.keyring** (string)\n\n    keyring is the path to the key ring for RBDUser. Default is \/etc\/ceph\/keyring. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md#how-to-use-it\n\n  - **rbd.pool** (string)\n\n    pool is the rados pool name. Default is rbd. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md#how-to-use-it\n\n  - **rbd.readOnly** (boolean)\n\n    readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md#how-to-use-it\n\n  - **rbd.secretRef** (SecretReference)\n\n    secretRef is the name of the authentication secret for RBDUser. If provided, it overrides keyring. Default is nil. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md#how-to-use-it\n\n    <a name=\"SecretReference\"><\/a>\n    *SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace*\n\n    - **rbd.secretRef.name** (string)\n\n      name is unique within a namespace to reference a secret resource.\n\n    - **rbd.secretRef.namespace** (string)\n\n      namespace defines the space within which the secret name must be unique.\n\n  - **rbd.user** (string)\n\n    user is the rados user name. Default is admin. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md#how-to-use-it\n\n- **scaleIO** (ScaleIOPersistentVolumeSource)\n\n  scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.\n\n  <a name=\"ScaleIOPersistentVolumeSource\"><\/a>\n  *ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume*\n\n  - **scaleIO.gateway** (string), required\n\n    gateway is the host address of the ScaleIO API Gateway.\n\n  - **scaleIO.secretRef** (SecretReference), required\n\n    secretRef references the secret for the ScaleIO user and other sensitive information. 
If this is not provided, the Login operation will fail.\n\n    <a name=\"SecretReference\"><\/a>\n    *SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace*\n\n    - **scaleIO.secretRef.name** (string)\n\n      name is unique within a namespace to reference a secret resource.\n\n    - **scaleIO.secretRef.namespace** (string)\n\n      namespace defines the space within which the secret name must be unique.\n\n  - **scaleIO.system** (string), required\n\n    system is the name of the storage system as configured in ScaleIO.\n\n  - **scaleIO.fsType** (string)\n\n    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Default is \"xfs\".\n\n  - **scaleIO.protectionDomain** (string)\n\n    protectionDomain is the name of the ScaleIO Protection Domain for the configured storage.\n\n  - **scaleIO.readOnly** (boolean)\n\n    readOnly defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n  - **scaleIO.sslEnabled** (boolean)\n\n    sslEnabled is the flag to enable\/disable SSL communication with the Gateway; default false.\n\n  - **scaleIO.storageMode** (string)\n\n    storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. 
Default is ThinProvisioned.\n\n  - **scaleIO.storagePool** (string)\n\n    storagePool is the ScaleIO Storage Pool associated with the protection domain.\n\n  - **scaleIO.volumeName** (string)\n\n    volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source.\n\n- **storageos** (StorageOSPersistentVolumeSource)\n\n  storageOS represents a StorageOS volume that is attached to the kubelet's host machine and mounted into the pod. More info: https:\/\/examples.k8s.io\/volumes\/storageos\/README.md\n\n  <a name=\"StorageOSPersistentVolumeSource\"><\/a>\n  *Represents a StorageOS persistent volume resource.*\n\n  - **storageos.fsType** (string)\n\n    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified.\n\n  - **storageos.readOnly** (boolean)\n\n    readOnly defaults to false (read\/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.\n\n  - **storageos.secretRef** (<a href=\"\">ObjectReference<\/a>)\n\n    secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.\n\n  - **storageos.volumeName** (string)\n\n    volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.\n\n  - **storageos.volumeNamespace** (string)\n\n    volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to \"default\" if you are not using namespaces within StorageOS. 
Namespaces that do not pre-exist within StorageOS will be created.\n\n- **vsphereVolume** (VsphereVirtualDiskVolumeSource)\n\n  vsphereVolume represents a vSphere volume attached and mounted on the kubelet's host machine\n\n  <a name=\"VsphereVirtualDiskVolumeSource\"><\/a>\n  *Represents a vSphere volume resource.*\n\n  - **vsphereVolume.volumePath** (string), required\n\n    volumePath is the path that identifies the vSphere volume vmdk\n\n  - **vsphereVolume.fsType** (string)\n\n    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. \"ext4\", \"xfs\", \"ntfs\". Implicitly inferred to be \"ext4\" if unspecified.\n\n  - **vsphereVolume.storagePolicyID** (string)\n\n    storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName.\n\n  - **vsphereVolume.storagePolicyName** (string)\n\n    storagePolicyName is the storage Policy Based Management (SPBM) profile name.\n\n\n\n## PersistentVolumeStatus {#PersistentVolumeStatus}\n\nPersistentVolumeStatus is the current status of a persistent volume.\n\n<hr>\n\n- **lastPhaseTransitionTime** (Time)\n\n  lastPhaseTransitionTime is the time the phase transitioned from one to another and automatically resets to the current time every time a volume phase transitions.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **message** (string)\n\n  message is a human-readable message indicating details about why the volume is in this state.\n\n- **phase** (string)\n\n  phase indicates if a volume is available, bound to a claim, or released by a claim. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#phase\n\n- **reason** (string)\n\n  reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI.\n\n\n\n\n\n## PersistentVolumeList {#PersistentVolumeList}\n\nPersistentVolumeList is a list of PersistentVolume items.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: PersistentVolumeList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">PersistentVolume<\/a>), required\n\n  items is a list of persistent volumes. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified PersistentVolume\n\n#### HTTP Request\n\nGET \/api\/v1\/persistentvolumes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolume\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolume<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified PersistentVolume\n\n#### HTTP Request\n\nGET \/api\/v1\/persistentvolumes\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolume\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolume<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind PersistentVolume\n\n#### HTTP Request\n\nGET \/api\/v1\/persistentvolumes\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  
<a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolumeList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a PersistentVolume\n\n#### HTTP Request\n\nPOST \/api\/v1\/persistentvolumes\n\n#### Parameters\n\n\n- **body**: <a href=\"\">PersistentVolume<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolume<\/a>): OK\n\n201 (<a href=\"\">PersistentVolume<\/a>): Created\n\n202 (<a href=\"\">PersistentVolume<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified PersistentVolume\n\n#### HTTP Request\n\nPUT \/api\/v1\/persistentvolumes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolume\n\n\n- **body**: <a href=\"\">PersistentVolume<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a 
href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolume<\/a>): OK\n\n201 (<a href=\"\">PersistentVolume<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified PersistentVolume\n\n#### HTTP Request\n\nPUT \/api\/v1\/persistentvolumes\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolume\n\n\n- **body**: <a href=\"\">PersistentVolume<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolume<\/a>): OK\n\n201 (<a href=\"\">PersistentVolume<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified PersistentVolume\n\n#### HTTP Request\n\nPATCH \/api\/v1\/persistentvolumes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolume\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolume<\/a>): OK\n\n201 (<a href=\"\">PersistentVolume<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified PersistentVolume\n\n#### HTTP Request\n\nPATCH \/api\/v1\/persistentvolumes\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in 
path*): string, required\n\n  name of the PersistentVolume\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolume<\/a>): OK\n\n201 (<a href=\"\">PersistentVolume<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a PersistentVolume\n\n#### HTTP Request\n\nDELETE \/api\/v1\/persistentvolumes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolume\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolume<\/a>): OK\n\n202 (<a href=\"\">PersistentVolume<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of PersistentVolume\n\n#### HTTP Request\n\nDELETE \/api\/v1\/persistentvolumes\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in 
query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n
 User is the rados user name  default is admin More info  https   examples k8s io volumes cephfs README md how to use it      cinder    CinderPersistentVolumeSource     cinder represents a cinder volume attached and mounted on kubelets host machine  More info  https   examples k8s io mysql cinder pd README md     a name  CinderPersistentVolumeSource    a     Represents a cinder volume resource in Openstack  A Cinder volume must exist before mounting to a container  The volume must also be in the same region as the kubelet  Cinder volumes support ownership management and SELinux relabeling          cinder volumeID    string   required      volumeID used to identify the volume in cinder  More info  https   examples k8s io mysql cinder pd README md        cinder fsType    string       fsType Filesystem type to mount  Must be a filesystem type supported by the host operating system  Examples   ext4    xfs    ntfs   Implicitly inferred to be  ext4  if unspecified  More info  https   examples k8s io mysql cinder pd README md        cinder readOnly    boolean       readOnly is Optional  Defaults to false  read write   ReadOnly here will force the ReadOnly setting in VolumeMounts  More info  https   examples k8s io mysql cinder pd README md        cinder secretRef    SecretReference       secretRef is Optional  points to a secret object containing parameters used to connect to OpenStack        a name  SecretReference    a       SecretReference represents a Secret Reference  It has enough information to retrieve secret in any namespace           cinder secretRef name    string         name is unique within a namespace to reference a secret resource           cinder secretRef namespace    string         namespace defines the space within which the secret name must be unique       csi    CSIPersistentVolumeSource     csi represents storage that is handled by an external CSI driver  Beta feature       a name  CSIPersistentVolumeSource    a     Represents storage that is 
managed by an external CSI volume driver  Beta feature          csi driver    string   required      driver is the name of the driver to use for this volume  Required         csi volumeHandle    string   required      volumeHandle is the unique volume name returned by the CSI volume plugin s CreateVolume to refer to the volume on all subsequent calls  Required         csi controllerExpandSecretRef    SecretReference       controllerExpandSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerExpandVolume call  This field is optional  and may be empty if no secret is required  If the secret object contains more than one secret  all secrets are passed        a name  SecretReference    a       SecretReference represents a Secret Reference  It has enough information to retrieve secret in any namespace           csi controllerExpandSecretRef name    string         name is unique within a namespace to reference a secret resource           csi controllerExpandSecretRef namespace    string         namespace defines the space within which the secret name must be unique         csi controllerPublishSecretRef    SecretReference       controllerPublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerPublishVolume and ControllerUnpublishVolume calls  This field is optional  and may be empty if no secret is required  If the secret object contains more than one secret  all secrets are passed        a name  SecretReference    a       SecretReference represents a Secret Reference  It has enough information to retrieve secret in any namespace           csi controllerPublishSecretRef name    string         name is unique within a namespace to reference a secret resource           csi controllerPublishSecretRef namespace    string         namespace defines the space within which the secret name must be unique         csi 
fsType    string       fsType to mount  Must be a filesystem type supported by the host operating system  Ex   ext4    xfs    ntfs          csi nodeExpandSecretRef    SecretReference       nodeExpandSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeExpandVolume call  This field is optional  may be omitted if no secret is required  If the secret object contains more than one secret  all secrets are passed        a name  SecretReference    a       SecretReference represents a Secret Reference  It has enough information to retrieve secret in any namespace           csi nodeExpandSecretRef name    string         name is unique within a namespace to reference a secret resource           csi nodeExpandSecretRef namespace    string         namespace defines the space within which the secret name must be unique         csi nodePublishSecretRef    SecretReference       nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls  This field is optional  and may be empty if no secret is required  If the secret object contains more than one secret  all secrets are passed        a name  SecretReference    a       SecretReference represents a Secret Reference  It has enough information to retrieve secret in any namespace           csi nodePublishSecretRef name    string         name is unique within a namespace to reference a secret resource           csi nodePublishSecretRef namespace    string         namespace defines the space within which the secret name must be unique         csi nodeStageSecretRef    SecretReference       nodeStageSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeStageVolume and NodeStageVolume and NodeUnstageVolume calls  This field is optional  and may be empty if no secret is 
required  If the secret object contains more than one secret  all secrets are passed        a name  SecretReference    a       SecretReference represents a Secret Reference  It has enough information to retrieve secret in any namespace           csi nodeStageSecretRef name    string         name is unique within a namespace to reference a secret resource           csi nodeStageSecretRef namespace    string         namespace defines the space within which the secret name must be unique         csi readOnly    boolean       readOnly value to pass to ControllerPublishVolumeRequest  Defaults to false  read write          csi volumeAttributes    map string string       volumeAttributes of the volume to publish       fc    FCVolumeSource     fc represents a Fibre Channel resource that is attached to a kubelet s host machine and then exposed to the pod      a name  FCVolumeSource    a     Represents a Fibre Channel volume  Fibre Channel volumes can only be mounted as read write once  Fibre Channel volumes support ownership management and SELinux relabeling          fc fsType    string       fsType is the filesystem type to mount  Must be a filesystem type supported by the host operating system  Ex   ext4    xfs    ntfs   Implicitly inferred to be  ext4  if unspecified         fc lun    int32       lun is Optional  FC target lun number        fc readOnly    boolean       readOnly is Optional  Defaults to false  read write   ReadOnly here will force the ReadOnly setting in VolumeMounts         fc targetWWNs      string        Atomic  will be replaced during a merge           targetWWNs is Optional  FC target worldwide names  WWNs         fc wwids      string        Atomic  will be replaced during a merge           wwids Optional  FC volume world wide identifiers  wwids  Either wwids or combination of targetWWNs and lun must be set  but not both simultaneously       flexVolume    FlexPersistentVolumeSource     flexVolume represents a generic volume resource that is 
provisioned attached using an exec based plugin      a name  FlexPersistentVolumeSource    a     FlexPersistentVolumeSource represents a generic persistent volume resource that is provisioned attached using an exec based plugin          flexVolume driver    string   required      driver is the name of the driver to use for this volume         flexVolume fsType    string       fsType is the Filesystem type to mount  Must be a filesystem type supported by the host operating system  Ex   ext4    xfs    ntfs   The default filesystem depends on FlexVolume script         flexVolume options    map string string       options is Optional  this field holds extra command options if any         flexVolume readOnly    boolean       readOnly is Optional  defaults to false  read write   ReadOnly here will force the ReadOnly setting in VolumeMounts         flexVolume secretRef    SecretReference       secretRef is Optional  SecretRef is reference to the secret object containing sensitive information to pass to the plugin scripts  This may be empty if no secret object is specified  If the secret object contains more than one secret  all secrets are passed to the plugin scripts        a name  SecretReference    a       SecretReference represents a Secret Reference  It has enough information to retrieve secret in any namespace           flexVolume secretRef name    string         name is unique within a namespace to reference a secret resource           flexVolume secretRef namespace    string         namespace defines the space within which the secret name must be unique       flocker    FlockerVolumeSource     flocker represents a Flocker volume attached to a kubelet s host machine and exposed to the pod for its usage  This depends on the Flocker control service being running     a name  FlockerVolumeSource    a     Represents a Flocker volume mounted by the Flocker agent  One and only one of datasetName and datasetUUID should be set  Flocker volumes do not support ownership 
management or SELinux relabeling          flocker datasetName    string       datasetName is Name of the dataset stored as metadata    name on the dataset for Flocker should be considered as deprecated        flocker datasetUUID    string       datasetUUID is the UUID of the dataset  This is unique identifier of a Flocker dataset      gcePersistentDisk    GCEPersistentDiskVolumeSource     gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet s host machine and then exposed to the pod  Provisioned by an admin  More info  https   kubernetes io docs concepts storage volumes gcepersistentdisk     a name  GCEPersistentDiskVolumeSource    a     Represents a Persistent Disk resource in Google Compute Engine       A GCE PD must exist before mounting to a container  The disk must also be in the same GCE project and zone as the kubelet  A GCE PD can only be mounted as read write once or read only many times  GCE PDs support ownership management and SELinux relabeling          gcePersistentDisk pdName    string   required      pdName is unique name of the PD resource in GCE  Used to identify the disk in GCE  More info  https   kubernetes io docs concepts storage volumes gcepersistentdisk        gcePersistentDisk fsType    string       fsType is filesystem type of the volume that you want to mount  Tip  Ensure that the filesystem type is supported by the host operating system  Examples   ext4    xfs    ntfs   Implicitly inferred to be  ext4  if unspecified  More info  https   kubernetes io docs concepts storage volumes gcepersistentdisk        gcePersistentDisk partition    int32       partition is the partition in the volume that you want to mount  If omitted  the default is to mount by volume name  Examples  For volume  dev sda1  you specify the partition as  1   Similarly  the volume partition for  dev sda is  0   or you can leave the property empty   More info  https   kubernetes io docs concepts storage volumes gcepersistentdisk        
  - **gcePersistentDisk.readOnly** (boolean)

    readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

- **glusterfs** (GlusterfsPersistentVolumeSource)

  glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod. Provisioned by an admin. More info: https://examples.k8s.io/volumes/glusterfs/README.md

  <a name="GlusterfsPersistentVolumeSource"></a>
  *Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling.*

  - **glusterfs.endpoints** (string), required

    endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

  - **glusterfs.path** (string), required

    path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

  - **glusterfs.endpointsNamespace** (string)

    endpointsNamespace is the namespace that contains the Glusterfs endpoint. If this field is empty, the EndpointNamespace defaults to the same namespace as the bound PVC. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

  - **glusterfs.readOnly** (boolean)

    readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod

- **iscsi** (ISCSIPersistentVolumeSource)

  iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin.

  <a name="ISCSIPersistentVolumeSource"></a>
  *ISCSIPersistentVolumeSource represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling.*

  - **iscsi.iqn** (string), required

    iqn is the Target iSCSI Qualified Name.

  - **iscsi.lun** (int32), required

    lun is the iSCSI Target Lun number.

  - **iscsi.targetPortal** (string), required

    targetPortal is the iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).

  - **iscsi.chapAuthDiscovery** (boolean)

    chapAuthDiscovery defines whether to support iSCSI Discovery CHAP authentication.

  - **iscsi.chapAuthSession** (boolean)

    chapAuthSession defines whether to support iSCSI Session CHAP authentication.

  - **iscsi.fsType** (string)

    fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi

  - **iscsi.initiatorName** (string)

    initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, a new iSCSI interface <target portal>:<volume name> will be created for the connection.

  - **iscsi.iscsiInterface** (string)

    iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp).

  - **iscsi.portals** ([]string)

    *Atomic: will be replaced during a merge*

    portals is the iSCSI Target Portal List. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260).

  - **iscsi.readOnly** (boolean)

    readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false.

  - **iscsi.secretRef** (SecretReference)

    secretRef is the CHAP Secret for iSCSI target and initiator authentication.

    - **iscsi.secretRef.name** (string)

      name is unique within a namespace to reference a secret resource.
    - **iscsi.secretRef.namespace** (string)

      namespace defines the space within which the secret name must be unique.

- **nfs** (NFSVolumeSource)

  nfs represents an NFS mount on the host. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

  <a name="NFSVolumeSource"></a>
  *Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling.*

  - **nfs.path** (string), required

    path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

  - **nfs.server** (string), required

    server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

  - **nfs.readOnly** (boolean)

    readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

- **photonPersistentDisk** (PhotonPersistentDiskVolumeSource)

  photonPersistentDisk represents a PhotonController persistent disk attached and mounted on the kubelet's host machine.

  <a name="PhotonPersistentDiskVolumeSource"></a>
  *Represents a Photon Controller persistent disk resource.*

  - **photonPersistentDisk.pdID** (string), required

    pdID is the ID that identifies the Photon Controller persistent disk.

  - **photonPersistentDisk.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

- **portworxVolume** (PortworxVolumeSource)

  portworxVolume represents a portworx volume attached and mounted on the kubelet's host machine.

  <a name="PortworxVolumeSource"></a>
  *PortworxVolumeSource represents a Portworx volume resource.*

  - **portworxVolume.volumeID** (string), required

    volumeID uniquely identifies a Portworx volume.

  - **portworxVolume.fsType** (string)

    fSType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified.

  - **portworxVolume.readOnly** (boolean)

    readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

- **quobyte** (QuobyteVolumeSource)

  quobyte represents a Quobyte mount on the host that shares a pod's lifetime.

  <a name="QuobyteVolumeSource"></a>
  *Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling.*

  - **quobyte.registry** (string), required

    registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes.

  - **quobyte.volume** (string), required

    volume is a string that references an already created Quobyte volume by name.

  - **quobyte.group** (string)

    group to map volume access to. Default is no group.

  - **quobyte.readOnly** (boolean)

    readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false.

  - **quobyte.tenant** (string)

    tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes; value is set by the plugin.

  - **quobyte.user** (string)

    user to map volume access to. Defaults to serviceaccount user.

- **rbd** (RBDPersistentVolumeSource)

  rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md

  <a name="RBDPersistentVolumeSource"></a>
  *Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling.*

  - **rbd.image** (string), required

    image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.monitors** ([]string), required

    *Atomic: will be replaced during a merge*

    monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.fsType** (string)

    fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd

  - **rbd.keyring** (string)

    keyring is the path to the key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.pool** (string)

    pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.readOnly** (boolean)

    readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

  - **rbd.secretRef** (SecretReference)

    secretRef is the name of the authentication secret for RBDUser. If provided, overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

    - **rbd.secretRef.name** (string)

      name is unique within a namespace to reference a secret resource.

    - **rbd.secretRef.namespace** (string)

      namespace defines the space within which the secret name must be unique.

  - **rbd.user** (string)

    user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it

- **scaleIO** (ScaleIOPersistentVolumeSource)

  scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.

  <a name="ScaleIOPersistentVolumeSource"></a>
  *ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume.*

  - **scaleIO.gateway** (string), required

    gateway is the host address of the ScaleIO API Gateway.

  - **scaleIO.secretRef** (SecretReference), required

    secretRef references the secret for the ScaleIO user and other sensitive information. If this is not provided, the Login operation will fail.

    - **scaleIO.secretRef.name** (string)

      name is unique within a namespace to reference a secret resource.

    - **scaleIO.secretRef.namespace** (string)

      namespace defines the space within which the secret name must be unique.

  - **scaleIO.system** (string), required

    system is the name of the storage system as configured in ScaleIO.

  - **scaleIO.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs".

  - **scaleIO.protectionDomain** (string)

    protectionDomain is the name of the ScaleIO Protection Domain for the configured storage.

  - **scaleIO.readOnly** (boolean)

    readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

  - **scaleIO.sslEnabled** (boolean)

    sslEnabled is the flag to enable/disable SSL communication with the Gateway. Default false.

  - **scaleIO.storageMode** (string)

    storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned.

  - **scaleIO.storagePool** (string)

    storagePool is the ScaleIO Storage Pool associated with the protection domain.

  - **scaleIO.volumeName** (string)

    volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source.

- **storageos** (StorageOSPersistentVolumeSource)

  storageOS represents a StorageOS volume that is attached to the kubelet's host machine and mounted into the pod. More info: https://examples.k8s.io/volumes/storageos/README.md

  <a name="StorageOSPersistentVolumeSource"></a>
  *Represents a StorageOS persistent volume resource.*

  - **storageos.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

  - **storageos.readOnly** (boolean)

    readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.

  - **storageos.secretRef** (ObjectReference)

    secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.

  - **storageos.volumeName** (string)

    volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace.

  - **storageos.volumeNamespace** (string)

    volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created.

- **vsphereVolume** (VsphereVirtualDiskVolumeSource)

  vsphereVolume represents a vSphere volume attached and mounted on the kubelet's host machine.

  <a name="VsphereVirtualDiskVolumeSource"></a>
  *Represents a vSphere volume resource.*

  - **vsphereVolume.volumePath** (string), required

    volumePath is the path that identifies the vSphere volume vmdk.

  - **vsphereVolume.fsType** (string)

    fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.

  - **vsphereVolume.storagePolicyID** (string)

    storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName.

  - **vsphereVolume.storagePolicyName** (string)

    storagePolicyName is the storage Policy Based Management (SPBM) profile name.

## PersistentVolumeStatus

PersistentVolumeStatus is the current status of a persistent volume.

---

- **lastPhaseTransitionTime** (Time)

  lastPhaseTransitionTime is the time the phase transitioned from one to another and automatically resets to the current time every time a volume phase transitions.

  <a name="Time"></a>
  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*

- **message** (string)

  message is a human-readable message indicating details about why the volume is in this state.

- **phase** (string)

  phase indicates if a volume is available, bound to a claim, or released by a claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#phase

- **reason** (string)

  reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI.

## PersistentVolumeList

PersistentVolumeList is a list of PersistentVolume items.

---

- **apiVersion**: v1

- **kind**: PersistentVolumeList

- **metadata** (ListMeta)

  Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

- **items** ([]PersistentVolume), required

  items is a list of persistent volumes. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes

## Operations

---

### `get` read the specified PersistentVolume

#### HTTP Request

GET /api/v1/persistentvolumes/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the PersistentVolume

- **pretty** (*in query*): string

#### Response

200 (PersistentVolume): OK

401: Unauthorized

### `get` read status of the specified PersistentVolume

#### HTTP Request

GET /api/v1/persistentvolumes/{name}/status

#### Parameters

- **name** (*in path*): string, required

  name of the PersistentVolume

- **pretty** (*in query*): string

#### Response

200 (PersistentVolume): OK

401: Unauthorized

### `list` list or watch objects of kind PersistentVolume

#### HTTP Request

GET /api/v1/persistentvolumes

#### Parameters

- **allowWatchBookmarks** (*in query*): boolean

- **continue** (*in query*): string

- **fieldSelector** (*in query*): string

- **labelSelector** (*in query*): string

- **limit** (*in query*): integer

- **pretty** (*in query*): string

- **resourceVersion** (*in query*): string

- **resourceVersionMatch** (*in query*): string

- **sendInitialEvents** (*in query*): boolean

- **timeoutSeconds** (*in query*): integer

- **watch** (*in query*): boolean

#### Response

200 (PersistentVolumeList): OK

401: Unauthorized

### `create` create a PersistentVolume

#### HTTP Request

POST /api/v1/persistentvolumes

#### Parameters

- **body** (PersistentVolume), required

- **dryRun** (*in query*): string

- **fieldManager** (*in query*): string

- **fieldValidation** (*in query*): string

- **pretty** (*in query*): string

#### Response

200 (PersistentVolume): OK

201 (PersistentVolume): Created

202 (PersistentVolume): Accepted

401: Unauthorized

### `update` replace the specified PersistentVolume

#### HTTP Request

PUT /api/v1/persistentvolumes/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the PersistentVolume

- **body** (PersistentVolume), required

- **dryRun** (*in query*): string

- **fieldManager** (*in query*): string

- **fieldValidation** (*in query*): string

- **pretty** (*in query*): string

#### Response

200 (PersistentVolume): OK

201 (PersistentVolume): Created

401: Unauthorized

### `update` replace status of the specified PersistentVolume

#### HTTP Request

PUT /api/v1/persistentvolumes/{name}/status

#### Parameters

- **name** (*in path*): string, required

  name of the PersistentVolume

- **body** (PersistentVolume), required

- **dryRun** (*in query*): string

- **fieldManager** (*in query*): string

- **fieldValidation** (*in query*): string

- **pretty** (*in query*): string

#### Response

200 (PersistentVolume): OK

201 (PersistentVolume): Created

401: Unauthorized

### `patch` partially update the specified PersistentVolume

#### HTTP Request

PATCH /api/v1/persistentvolumes/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the PersistentVolume

- **body** (Patch), required

- **dryRun** (*in query*): string

- **fieldManager**
    in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PersistentVolume  a    OK  201   a href    PersistentVolume  a    Created  401  Unauthorized        patch  partially update status of the specified PersistentVolume       HTTP Request  PATCH  api v1 persistentvolumes  name  status       Parameters       name     in path    string  required    name of the PersistentVolume       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PersistentVolume  a    OK  201   a href    PersistentVolume  a    Created  401  Unauthorized        delete  delete a PersistentVolume       HTTP Request  DELETE  api v1 persistentvolumes  name        Parameters       name     in path    string  required    name of the PersistentVolume       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    PersistentVolume  a    OK  202   a href    PersistentVolume  a    Accepted  401  Unauthorized        deletecollection  delete collection of PersistentVolume       HTTP Request  DELETE  api v1 persistentvolumes       Parameters       body     a href    DeleteOptions  a            continue     in query    
string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference apiVersion v1 kind ConfigMap title ConfigMap contenttype apireference ConfigMap holds configuration data for pods to consume apimetadata autogenerated true weight 1 import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"ConfigMap\"\ncontent_type: \"api_reference\"\ndescription: \"ConfigMap holds configuration data for pods to consume.\"\ntitle: \"ConfigMap\"\nweight: 1\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## ConfigMap {#ConfigMap}\n\nConfigMap holds configuration data for pods to consume.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: ConfigMap\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **binaryData** (map[string][]byte)\n\n  BinaryData contains the binary data. Each key must consist of alphanumeric characters, '-', '_' or '.'. BinaryData can contain byte sequences that are not in the UTF-8 range. The keys stored in BinaryData must not overlap with the ones in the Data field, this is enforced during validation process. Using this field will require 1.10+ apiserver and kubelet.\n\n- **data** (map[string]string)\n\n  Data contains the configuration data. 
Each key must consist of alphanumeric characters, '-', '_' or '.'. Values with non-UTF-8 byte sequences must use the BinaryData field. The keys stored in Data must not overlap with the keys in the BinaryData field, this is enforced during validation process.\n\n- **immutable** (boolean)\n\n  Immutable, if set to true, ensures that data stored in the ConfigMap cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil.\n\n\n\n\n\n## ConfigMapList {#ConfigMapList}\n\nConfigMapList is a resource containing a list of ConfigMap objects.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: ConfigMapList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">ConfigMap<\/a>), required\n\n  Items is the list of ConfigMaps.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ConfigMap\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/configmaps\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ConfigMap\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ConfigMap<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ConfigMap\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/configmaps\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a 
href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ConfigMapList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ConfigMap\n\n#### HTTP Request\n\nGET \/api\/v1\/configmaps\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ConfigMapList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ConfigMap\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/configmaps\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a 
href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ConfigMap<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ConfigMap<\/a>): OK\n\n201 (<a href=\"\">ConfigMap<\/a>): Created\n\n202 (<a href=\"\">ConfigMap<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ConfigMap\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/configmaps\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ConfigMap\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ConfigMap<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ConfigMap<\/a>): OK\n\n201 (<a href=\"\">ConfigMap<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ConfigMap\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/configmaps\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ConfigMap\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- 
**force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ConfigMap<\/a>): OK\n\n201 (<a href=\"\">ConfigMap<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ConfigMap\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/configmaps\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ConfigMap\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ConfigMap\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/configmaps\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- 
**resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   v1    import   k8s io api core v1    kind   ConfigMap  content type   api reference  description   ConfigMap holds configuration data for pods to consume   title   ConfigMap  weight  1 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  v1    import  k8s io api core v1        ConfigMap   ConfigMap   ConfigMap holds configuration data for pods to consume    hr       apiVersion    v1       kind    ConfigMap       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      binaryData    map string   byte     BinaryData contains the binary data  Each key must consist of alphanumeric characters           or      BinaryData can contain byte sequences that are not in the UTF 8 range  The keys stored in BinaryData must not overlap with the ones in the Data field  this is enforced during 
validation process  Using this field will require 1 10  apiserver and kubelet       data    map string string     Data contains the configuration data  Each key must consist of alphanumeric characters           or      Values with non UTF 8 byte sequences must use the BinaryData field  The keys stored in Data must not overlap with the keys in the BinaryData field  this is enforced during validation process       immutable    boolean     Immutable  if set to true  ensures that data stored in the ConfigMap cannot be updated  only object metadata can be modified   If not set to true  the field can be modified at any time  Defaulted to nil          ConfigMapList   ConfigMapList   ConfigMapList is a resource containing a list of ConfigMap objects    hr       apiVersion    v1       kind    ConfigMapList       metadata     a href    ListMeta  a      More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      items       a href    ConfigMap  a    required    Items is the list of ConfigMaps          Operations   Operations      hr             get  read the specified ConfigMap       HTTP Request  GET  api v1 namespaces  namespace  configmaps  name        Parameters       name     in path    string  required    name of the ConfigMap       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ConfigMap  a    OK  401  Unauthorized        list  list or watch objects of kind ConfigMap       HTTP Request  GET  api v1 namespaces  namespace  configmaps       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    
labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ConfigMapList  a    OK  401  Unauthorized        list  list or watch objects of kind ConfigMap       HTTP Request  GET  api v1 configmaps       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ConfigMapList  a    OK  401  Unauthorized        create  create a ConfigMap       HTTP Request  POST  api v1 namespaces  namespace  configmaps       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    ConfigMap  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        
fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ConfigMap  a    OK  201   a href    ConfigMap  a    Created  202   a href    ConfigMap  a    Accepted  401  Unauthorized        update  replace the specified ConfigMap       HTTP Request  PUT  api v1 namespaces  namespace  configmaps  name        Parameters       name     in path    string  required    name of the ConfigMap       namespace     in path    string  required     a href    namespace  a        body     a href    ConfigMap  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ConfigMap  a    OK  201   a href    ConfigMap  a    Created  401  Unauthorized        patch  partially update the specified ConfigMap       HTTP Request  PATCH  api v1 namespaces  namespace  configmaps  name        Parameters       name     in path    string  required    name of the ConfigMap       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ConfigMap  a    OK  201   a href    ConfigMap  a    Created  401  Unauthorized        delete  delete a ConfigMap       HTTP Request  DELETE  api v1 namespaces  namespace  configmaps  name        Parameters       name     in path    string  required    name of the ConfigMap      
 namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of ConfigMap       HTTP Request  DELETE  api v1 namespaces  namespace  configmaps       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference kind Secret apiVersion v1 weight 2 contenttype apireference Secret holds secret data of a certain type apimetadata title Secret autogenerated true import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"Secret\"\ncontent_type: \"api_reference\"\ndescription: \"Secret holds secret data of a certain type.\"\ntitle: \"Secret\"\nweight: 2\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## Secret {#Secret}\n\nSecret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: Secret\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **data** (map[string][]byte)\n\n  Data contains the secret data. Each key must consist of alphanumeric characters, '-', '_' or '.'. The serialized form of the secret data is a base64 encoded string, representing the arbitrary (possibly non-string) data value here. 
Described in https:\/\/tools.ietf.org\/html\/rfc4648#section-4\n\n- **immutable** (boolean)\n\n  Immutable, if set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil.\n\n- **stringData** (map[string]string)\n\n  stringData allows specifying non-binary secret data in string form. It is provided as a write-only input field for convenience. All keys and values are merged into the data field on write, overwriting any existing values. The stringData field is never output when reading from the API.\n\n- **type** (string)\n\n  Used to facilitate programmatic handling of secret data. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/#secret-types\n\n\n\n\n\n## SecretList {#SecretList}\n\nSecretList is a list of Secret.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: SecretList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">Secret<\/a>), required\n\n  Items is a list of secret objects. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified Secret\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/secrets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Secret\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Secret<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Secret\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/secrets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">SecretList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Secret\n\n#### HTTP Request\n\nGET \/api\/v1\/secrets\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a 
href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">SecretList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a Secret\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/secrets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Secret<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Secret<\/a>): OK\n\n201 (<a href=\"\">Secret<\/a>): Created\n\n202 (<a href=\"\">Secret<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified Secret\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/secrets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Secret\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a 
href=\"\">Secret<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Secret<\/a>): OK\n\n201 (<a href=\"\">Secret<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified Secret\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/secrets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Secret\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Secret<\/a>): OK\n\n201 (<a href=\"\">Secret<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a Secret\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/secrets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Secret\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a 
href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of Secret\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/secrets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   v1    import   k8s io api core v1    kind   Secret  content type   api reference  description   Secret holds secret data of a certain type   title   Secret  weight  2 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference 
documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  v1    import  k8s io api core v1        Secret   Secret   Secret holds secret data of a certain type  The total bytes of the values in the Data field must be less than MaxSecretSize bytes    hr       apiVersion    v1       kind    Secret       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      data    map string   byte     Data contains the secret data  Each key must consist of alphanumeric characters           or      The serialized form of the secret data is a base64 encoded string  representing the arbitrary  possibly non string  data value here  Described in https   tools ietf org html rfc4648 section 4      immutable    boolean     Immutable  if set to true  ensures that data stored in the Secret cannot be updated  only object metadata can be modified   If not set to true  the field can be modified at any time  Defaulted to nil       stringData    map string string     stringData allows specifying non binary secret data in string form  It is provided as a write only input field for convenience  All keys and values are merged into the data field on write  overwriting any existing values  The stringData field is never output when reading from the API       type    string     Used to facilitate programmatic handling of secret data  More info  https   kubernetes io docs concepts configuration secret  secret types         SecretList   SecretList   SecretList is a list of Secret    hr       apiVersion    v1       kind    SecretList       metadata     
a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      items       a href    Secret  a    required    Items is a list of secret objects  More info  https   kubernetes io docs concepts configuration secret         Operations   Operations      hr             get  read the specified Secret       HTTP Request  GET  api v1 namespaces  namespace  secrets  name        Parameters       name     in path    string  required    name of the Secret       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Secret  a    OK  401  Unauthorized        list  list or watch objects of kind Secret       HTTP Request  GET  api v1 namespaces  namespace  secrets       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    SecretList  a    OK  401  Unauthorized        list  list or watch objects of kind Secret       HTTP Request  GET  api v1 secrets       Parameters       allowWatchBookmarks     in query    
boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    SecretList  a    OK  401  Unauthorized        create  create a Secret       HTTP Request  POST  api v1 namespaces  namespace  secrets       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    Secret  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Secret  a    OK  201   a href    Secret  a    Created  202   a href    Secret  a    Accepted  401  Unauthorized        update  replace the specified Secret       HTTP Request  PUT  api v1 namespaces  namespace  secrets  name        Parameters       name     in path    string  required    name of the Secret       namespace     in path    string  required     a href    namespace  a        body     a href    Secret  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    
string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Secret  a    OK  201   a href    Secret  a    Created  401  Unauthorized        patch  partially update the specified Secret       HTTP Request  PATCH  api v1 namespaces  namespace  secrets  name        Parameters       name     in path    string  required    name of the Secret       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    Secret  a    OK  201   a href    Secret  a    Created  401  Unauthorized        delete  delete a Secret       HTTP Request  DELETE  api v1 namespaces  namespace  secrets  name        Parameters       name     in path    string  required    name of the Secret       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of Secret       HTTP Request  DELETE  api v1 namespaces  namespace  secrets       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href 
   continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference autogenerated true CSINode holds information about all CSI drivers installed on a node contenttype apireference apimetadata apiVersion storage k8s io v1 weight 4 title CSINode import k8s io api storage v1 kind CSINode","answers":"---\napi_metadata:\n  apiVersion: \"storage.k8s.io\/v1\"\n  import: \"k8s.io\/api\/storage\/v1\"\n  kind: \"CSINode\"\ncontent_type: \"api_reference\"\ndescription: \"CSINode holds information about all CSI drivers installed on a node.\"\ntitle: \"CSINode\"\nweight: 4\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: storage.k8s.io\/v1`\n\n`import \"k8s.io\/api\/storage\/v1\"`\n\n\n## CSINode {#CSINode}\n\nCSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI Drivers available on the node, or the Kubelet version is low enough that it doesn't create this object. 
CSINode has an OwnerReference that points to the corresponding node object.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1\n\n\n- **kind**: CSINode\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. metadata.name must be the Kubernetes node name.\n\n- **spec** (<a href=\"\">CSINodeSpec<\/a>), required\n\n  spec is the specification of CSINode\n\n\n\n\n\n## CSINodeSpec {#CSINodeSpec}\n\nCSINodeSpec holds information about the specification of all CSI drivers installed on a node\n\n<hr>\n\n- **drivers** ([]CSINodeDriver), required\n\n  *Patch strategy: merge on key `name`*\n  \n  *Map: unique values on key name will be kept during a merge*\n  \n  drivers is a list of information of all CSI Drivers existing on a node. If all drivers in the list are uninstalled, this can become empty.\n\n  <a name=\"CSINodeDriver\"><\/a>\n  *CSINodeDriver holds information about the specification of one CSI driver installed on a node*\n\n  - **drivers.name** (string), required\n\n    name represents the name of the CSI driver that this object refers to. This MUST be the same name returned by the CSI GetPluginName() call for that driver.\n\n  - **drivers.nodeID** (string), required\n\n    nodeID of the node from the driver point of view. This field enables Kubernetes to communicate with storage systems that do not share the same nomenclature for nodes. For example, Kubernetes may refer to a given node as \"node1\", but the storage system may refer to the same node as \"nodeA\". When Kubernetes issues a command to the storage system to attach a volume to a specific node, it can use this field to refer to the node name using the ID that the storage system will understand, e.g. \"nodeA\" instead of \"node1\". This field is required.\n\n  - **drivers.allocatable** (VolumeNodeResources)\n\n    allocatable represents the volume resources of a node that are available for scheduling. 
This field is beta.\n\n    <a name=\"VolumeNodeResources\"><\/a>\n    *VolumeNodeResources is a set of resource limits for scheduling of volumes.*\n\n    - **drivers.allocatable.count** (int32)\n\n      count indicates the maximum number of unique volumes managed by the CSI driver that can be used on a node. A volume that is both attached and mounted on a node is considered to be used once, not twice. The same rule applies for a unique volume that is shared among multiple pods on the same node. If this field is not specified, then the supported number of volumes on this node is unbounded.\n\n  - **drivers.topologyKeys** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    topologyKeys is the list of keys supported by the driver. When a driver is initialized on a cluster, it provides a set of topology keys that it understands (e.g. \"company.com\/zone\", \"company.com\/region\"). When a driver is initialized on a node, it provides the same topology keys along with values. Kubelet will expose these topology keys as labels on its own node object. When Kubernetes does topology aware provisioning, it can use this list to determine which labels it should retrieve from the node object and pass back to the driver. It is possible for different nodes to use different topology keys. 
This can be empty if driver does not support topology.\n\n\n\n\n\n## CSINodeList {#CSINodeList}\n\nCSINodeList is a collection of CSINode objects.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1\n\n\n- **kind**: CSINodeList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">CSINode<\/a>), required\n\n  items is the list of CSINode\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified CSINode\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/csinodes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSINode\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSINode<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind CSINode\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/csinodes\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a 
href=\"\">CSINodeList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a CSINode\n\n#### HTTP Request\n\nPOST \/apis\/storage.k8s.io\/v1\/csinodes\n\n#### Parameters\n\n\n- **body**: <a href=\"\">CSINode<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSINode<\/a>): OK\n\n201 (<a href=\"\">CSINode<\/a>): Created\n\n202 (<a href=\"\">CSINode<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified CSINode\n\n#### HTTP Request\n\nPUT \/apis\/storage.k8s.io\/v1\/csinodes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSINode\n\n\n- **body**: <a href=\"\">CSINode<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSINode<\/a>): OK\n\n201 (<a href=\"\">CSINode<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified CSINode\n\n#### HTTP Request\n\nPATCH \/apis\/storage.k8s.io\/v1\/csinodes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSINode\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a 
href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSINode<\/a>): OK\n\n201 (<a href=\"\">CSINode<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a CSINode\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1\/csinodes\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSINode\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSINode<\/a>): OK\n\n202 (<a href=\"\">CSINode<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of CSINode\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1\/csinodes\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a 
href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   storage k8s io v1    import   k8s io api storage v1    kind   CSINode  content type   api reference  description   CSINode holds information about all CSI drivers installed on a node   title   CSINode  weight  4 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  storage k8s io v1    import  k8s io api storage v1        CSINode   CSINode   CSINode holds information about all CSI drivers installed on a node  CSI drivers do not need to create the CSINode object directly  As long as they use the node driver registrar sidecar container  the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration  CSINode has the same name as a node  If the object is missing  it means either there are no CSI Drivers available on the node  or the Kubelet version is low enough that it doesn t create this object  CSINode has an OwnerReference that points to the corresponding node object    hr       apiVersion    storage k8s io v1       kind    CSINode       metadata     a href    ObjectMeta  a      Standard object s metadata  metadata name must be the Kubernetes node name       spec     a href    
CSINodeSpec  a    required    spec is the specification of CSINode         CSINodeSpec   CSINodeSpec   CSINodeSpec holds information about the specification of all CSI drivers installed on a node   hr       drivers      CSINodeDriver   required     Patch strategy  merge on key  name         Map  unique values on key name will be kept during a merge       drivers is a list of information of all CSI Drivers existing on a node  If all drivers in the list are uninstalled  this can become empty      a name  CSINodeDriver    a     CSINodeDriver holds information about the specification of one CSI driver installed on a node         drivers name    string   required      name represents the name of the CSI driver that this object refers to  This MUST be the same name returned by the CSI GetPluginName   call for that driver         drivers nodeID    string   required      nodeID of the node from the driver point of view  This field enables Kubernetes to communicate with storage systems that do not share the same nomenclature for nodes  For example  Kubernetes may refer to a given node as  node1   but the storage system may refer to the same node as  nodeA   When Kubernetes issues a command to the storage system to attach a volume to a specific node  it can use this field to refer to the node name using the ID that the storage system will understand  e g   nodeA  instead of  node1   This field is required         drivers allocatable    VolumeNodeResources       allocatable represents the volume resources of a node that are available for scheduling  This field is beta        a name  VolumeNodeResources    a       VolumeNodeResources is a set of resource limits for scheduling of volumes            drivers allocatable count    int32         count indicates the maximum number of unique volumes managed by the CSI driver that can be used on a node  A volume that is both attached and mounted on a node is considered to be used once  not twice  The same rule applies for a unique 
volume that is shared among multiple pods on the same node  If this field is not specified  then the supported number of volumes on this node is unbounded         drivers topologyKeys      string        Atomic  will be replaced during a merge           topologyKeys is the list of keys supported by the driver  When a driver is initialized on a cluster  it provides a set of topology keys that it understands  e g   company com zone    company com region    When a driver is initialized on a node  it provides the same topology keys along with values  Kubelet will expose these topology keys as labels on its own node object  When Kubernetes does topology aware provisioning  it can use this list to determine which labels it should retrieve from the node object and pass back to the driver  It is possible for different nodes to use different topology keys  This can be empty if driver does not support topology          CSINodeList   CSINodeList   CSINodeList is a collection of CSINode objects    hr       apiVersion    storage k8s io v1       kind    CSINodeList       metadata     a href    ListMeta  a      Standard list metadata More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      items       a href    CSINode  a    required    items is the list of CSINode         Operations   Operations      hr             get  read the specified CSINode       HTTP Request  GET  apis storage k8s io v1 csinodes  name        Parameters       name     in path    string  required    name of the CSINode       pretty     in query    string     a href    pretty  a          Response   200   a href    CSINode  a    OK  401  Unauthorized        list  list or watch objects of kind CSINode       HTTP Request  GET  apis storage k8s io v1 csinodes       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    
string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    CSINodeList  a    OK  401  Unauthorized        create  create a CSINode       HTTP Request  POST  apis storage k8s io v1 csinodes       Parameters       body     a href    CSINode  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CSINode  a    OK  201   a href    CSINode  a    Created  202   a href    CSINode  a    Accepted  401  Unauthorized        update  replace the specified CSINode       HTTP Request  PUT  apis storage k8s io v1 csinodes  name        Parameters       name     in path    string  required    name of the CSINode       body     a href    CSINode  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CSINode  a    OK  201   a href    CSINode  a    Created  401  Unauthorized        patch  partially update the specified CSINode       HTTP Request  PATCH  apis 
storage k8s io v1 csinodes  name        Parameters       name     in path    string  required    name of the CSINode       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    CSINode  a    OK  201   a href    CSINode  a    Created  401  Unauthorized        delete  delete a CSINode       HTTP Request  DELETE  apis storage k8s io v1 csinodes  name        Parameters       name     in path    string  required    name of the CSINode       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    CSINode  a    OK  202   a href    CSINode  a    Accepted  401  Unauthorized        deletecollection  delete collection of CSINode       HTTP Request  DELETE  apis storage k8s io v1 csinodes       Parameters       body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        
resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference title StorageClass autogenerated true contenttype apireference kind StorageClass weight 8 apimetadata apiVersion storage k8s io v1 StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned import k8s io api storage v1","answers":"---\napi_metadata:\n  apiVersion: \"storage.k8s.io\/v1\"\n  import: \"k8s.io\/api\/storage\/v1\"\n  kind: \"StorageClass\"\ncontent_type: \"api_reference\"\ndescription: \"StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned.\"\ntitle: \"StorageClass\"\nweight: 8\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: storage.k8s.io\/v1`\n\n`import \"k8s.io\/api\/storage\/v1\"`\n\n\n## StorageClass {#StorageClass}\n\nStorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned.\n\nStorageClasses are non-namespaced; the name of the storage class according to etcd is in ObjectMeta.Name.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1\n\n\n- **kind**: StorageClass\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **provisioner** (string), required\n\n  provisioner indicates the type of the provisioner.\n\n- **allowVolumeExpansion** (boolean)\n\n  allowVolumeExpansion shows whether the storage class allows volume expansion.\n\n- **allowedTopologies** ([]TopologySelectorTerm)\n\n  *Atomic: will be replaced during a merge*\n  \n  allowedTopologies restrict the node topologies where volumes can be dynamically provisioned. Each volume plugin defines its own supported topology specifications. An empty TopologySelectorTerm list means there is no topology restriction. This field is only honored by servers that enable the VolumeScheduling feature.\n\n  <a name=\"TopologySelectorTerm\"><\/a>\n  *A topology selector term represents the result of label queries. A null or empty topology selector term matches no objects. The requirements of them are ANDed. It provides a subset of functionality as NodeSelectorTerm. This is an alpha feature and may change in the future.*\n\n  - **allowedTopologies.matchLabelExpressions** ([]TopologySelectorLabelRequirement)\n\n    *Atomic: will be replaced during a merge*\n    \n    A list of topology selector requirements by labels.\n\n    <a name=\"TopologySelectorLabelRequirement\"><\/a>\n    *A topology selector requirement is a selector that matches a given label. This is an alpha feature and may change in the future.*\n\n    - **allowedTopologies.matchLabelExpressions.key** (string), required\n\n      The label key that the selector applies to.\n\n    - **allowedTopologies.matchLabelExpressions.values** ([]string), required\n\n      *Atomic: will be replaced during a merge*\n      \n      An array of string values. One value must match the label to be selected. 
Each entry in Values is ORed.\n\n- **mountOptions** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  mountOptions controls the mountOptions for dynamically provisioned PersistentVolumes of this storage class. e.g. [\"ro\", \"soft\"]. Not validated - mount of the PVs will simply fail if one is invalid.\n\n- **parameters** (map[string]string)\n\n  parameters holds the parameters for the provisioner that should create volumes of this storage class.\n\n- **reclaimPolicy** (string)\n\n  reclaimPolicy controls the reclaimPolicy for dynamically provisioned PersistentVolumes of this storage class. Defaults to Delete.\n\n- **volumeBindingMode** (string)\n\n  volumeBindingMode indicates how PersistentVolumeClaims should be provisioned and bound.  When unset, VolumeBindingImmediate is used. This field is only honored by servers that enable the VolumeScheduling feature.\n\n\n\n\n\n## StorageClassList {#StorageClassList}\n\nStorageClassList is a collection of storage classes.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1\n\n\n- **kind**: StorageClassList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">StorageClass<\/a>), required\n\n  items is the list of StorageClasses\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified StorageClass\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/storageclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StorageClass\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageClass<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind StorageClass\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/storageclasses\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): 
boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageClassList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a StorageClass\n\n#### HTTP Request\n\nPOST \/apis\/storage.k8s.io\/v1\/storageclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">StorageClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageClass<\/a>): OK\n\n201 (<a href=\"\">StorageClass<\/a>): Created\n\n202 (<a href=\"\">StorageClass<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified StorageClass\n\n#### HTTP Request\n\nPUT \/apis\/storage.k8s.io\/v1\/storageclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StorageClass\n\n\n- **body**: <a href=\"\">StorageClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a 
href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageClass<\/a>): OK\n\n201 (<a href=\"\">StorageClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified StorageClass\n\n#### HTTP Request\n\nPATCH \/apis\/storage.k8s.io\/v1\/storageclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StorageClass\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageClass<\/a>): OK\n\n201 (<a href=\"\">StorageClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a StorageClass\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1\/storageclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StorageClass\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageClass<\/a>): OK\n\n202 (<a href=\"\">StorageClass<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of 
StorageClass\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1\/storageclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   storage k8s io v1    import   k8s io api storage v1    kind   StorageClass  content type   api reference  description   StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned   title   StorageClass  weight  8 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream  
 docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  storage k8s io v1    import  k8s io api storage v1        StorageClass   StorageClass   StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned   StorageClasses are non namespaced  the name of the storage class according to etcd is in ObjectMeta Name    hr       apiVersion    storage k8s io v1       kind    StorageClass       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      provisioner    string   required    provisioner indicates the type of the provisioner       allowVolumeExpansion    boolean     allowVolumeExpansion shows whether the storage class allow volume expand       allowedTopologies      TopologySelectorTerm      Atomic  will be replaced during a merge       allowedTopologies restrict the node topologies where volumes can be dynamically provisioned  Each volume plugin defines its own supported topology specifications  An empty TopologySelectorTerm list means there is no topology restriction  This field is only honored by servers that enable the VolumeScheduling feature      a name  TopologySelectorTerm    a     A topology selector term represents the result of label queries  A null or empty topology selector term matches no objects  The requirements of them are ANDed  It provides a subset of functionality as NodeSelectorTerm  This is an alpha feature and may change in the future          allowedTopologies matchLabelExpressions      TopologySelectorLabelRequirement        Atomic  will be replaced during a merge           A list of topology selector requirements by labels        a name  TopologySelectorLabelRequirement    a       A topology selector 
requirement is a selector that matches given label  This is an alpha feature and may change in the future            allowedTopologies matchLabelExpressions key    string   required        The label key that the selector applies to           allowedTopologies matchLabelExpressions values      string   required         Atomic  will be replaced during a merge               An array of string values  One value must match the label to be selected  Each entry in Values is ORed       mountOptions      string      Atomic  will be replaced during a merge       mountOptions controls the mountOptions for dynamically provisioned PersistentVolumes of this storage class  e g    ro    soft    Not validated   mount of the PVs will simply fail if one is invalid       parameters    map string string     parameters holds the parameters for the provisioner that should create volumes of this storage class       reclaimPolicy    string     reclaimPolicy controls the reclaimPolicy for dynamically provisioned PersistentVolumes of this storage class  Defaults to Delete       volumeBindingMode    string     volumeBindingMode indicates how PersistentVolumeClaims should be provisioned and bound   When unset  VolumeBindingImmediate is used  This field is only honored by servers that enable the VolumeScheduling feature          StorageClassList   StorageClassList   StorageClassList is a collection of storage classes    hr       apiVersion    storage k8s io v1       kind    StorageClassList       metadata     a href    ListMeta  a      Standard list metadata More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      items       a href    StorageClass  a    required    items is the list of StorageClasses         Operations   Operations      hr             get  read the specified StorageClass       HTTP Request  GET  apis storage k8s io v1 storageclasses  name        Parameters       name     in path    string  required    name of the StorageClass 
      pretty     in query    string     a href    pretty  a          Response   200   a href    StorageClass  a    OK  401  Unauthorized        list  list or watch objects of kind StorageClass       HTTP Request  GET  apis storage k8s io v1 storageclasses       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    StorageClassList  a    OK  401  Unauthorized        create  create a StorageClass       HTTP Request  POST  apis storage k8s io v1 storageclasses       Parameters       body     a href    StorageClass  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    StorageClass  a    OK  201   a href    StorageClass  a    Created  202   a href    StorageClass  a    Accepted  401  Unauthorized        update  replace the specified StorageClass       HTTP Request  PUT  apis storage k8s io v1 storageclasses  name        Parameters       name     in path    string  required    name of the StorageClass       body  
   a href    StorageClass  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    StorageClass  a    OK  201   a href    StorageClass  a    Created  401  Unauthorized        patch  partially update the specified StorageClass       HTTP Request  PATCH  apis storage k8s io v1 storageclasses  name        Parameters       name     in path    string  required    name of the StorageClass       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    StorageClass  a    OK  201   a href    StorageClass  a    Created  401  Unauthorized        delete  delete a StorageClass       HTTP Request  DELETE  apis storage k8s io v1 storageclasses  name        Parameters       name     in path    string  required    name of the StorageClass       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    StorageClass  a    OK  202   a href    StorageClass  a    Accepted  401  Unauthorized        deletecollection  delete collection of StorageClass       HTTP Request  DELETE  apis storage k8s io v1 storageclasses       Parameters       body     a href    DeleteOptions  a 
           continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference apiVersion v1 weight 6 title PersistentVolumeClaim contenttype apireference apimetadata import k8s io api core v1 autogenerated true PersistentVolumeClaim is a user s request for and claim to a persistent volume kind PersistentVolumeClaim","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"PersistentVolumeClaim\"\ncontent_type: \"api_reference\"\ndescription: \"PersistentVolumeClaim is a user's request for and claim to a persistent volume.\"\ntitle: \"PersistentVolumeClaim\"\nweight: 6\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## PersistentVolumeClaim {#PersistentVolumeClaim}\n\nPersistentVolumeClaim is a user's request for and claim to a persistent volume\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: PersistentVolumeClaim\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">PersistentVolumeClaimSpec<\/a>)\n\n  spec defines the desired characteristics of a volume requested by a pod author. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#persistentvolumeclaims\n\n- **status** (<a href=\"\">PersistentVolumeClaimStatus<\/a>)\n\n  status represents the current information\/status of a persistent volume claim. Read-only. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#persistentvolumeclaims\n\n\n\n\n\n## PersistentVolumeClaimSpec {#PersistentVolumeClaimSpec}\n\nPersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes\n\n<hr>\n\n- **accessModes** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  accessModes contains the desired access modes the volume should have. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#access-modes-1\n\n- **selector** (<a href=\"\">LabelSelector<\/a>)\n\n  selector is a label query over volumes to consider for binding.\n\n- **resources** (VolumeResourceRequirements)\n\n  resources represents the minimum resources the volume should have. If the RecoverVolumeExpansionFailure feature is enabled, users are allowed to specify resource requirements that are lower than the previous value but must still be higher than the capacity recorded in the status field of the claim. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#resources\n\n  <a name=\"VolumeResourceRequirements\"><\/a>\n  *VolumeResourceRequirements describes the storage resource requirements for a volume.*\n\n  - **resources.limits** (map[string]<a href=\"\">Quantity<\/a>)\n\n    Limits describes the maximum amount of compute resources allowed. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n  - **resources.requests** (map[string]<a href=\"\">Quantity<\/a>)\n\n    Requests describes the minimum amount of compute resources required. 
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n- **volumeName** (string)\n\n  volumeName is the binding reference to the PersistentVolume backing this claim.\n\n- **storageClassName** (string)\n\n  storageClassName is the name of the StorageClass required by the claim. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#class-1\n\n- **volumeMode** (string)\n\n  volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.\n\n\n\n### Beta level\n\n\n- **dataSource** (<a href=\"\">TypedLocalObjectReference<\/a>)\n\n  dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io\/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.\n\n- **dataSourceRef** (TypedObjectReference)\n\n  dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. 
This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef\n    allows any non-core object, as well as PersistentVolumeClaim objects.\n  * While dataSource ignores disallowed values (dropping them), dataSourceRef\n    preserves all values, and generates an error if a disallowed value is\n    specified.\n  * While dataSource only allows local objects, dataSourceRef allows objects\n    in any namespaces.\n  (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.\n\n  <a name=\"TypedObjectReference\"><\/a>\n  **\n\n  - **dataSourceRef.kind** (string), required\n\n    Kind is the type of resource being referenced\n\n  - **dataSourceRef.name** (string), required\n\n    Name is the name of resource being referenced\n\n  - **dataSourceRef.apiGroup** (string)\n\n    APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.\n\n  - **dataSourceRef.namespace** (string)\n\n    Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io\/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. 
See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.\n\n- **volumeAttributesClassName** (string)\n\n  volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName; it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim, but it's not allowed to reset this field to an empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volume-attributes-classes\/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).\n\n\n\n## PersistentVolumeClaimStatus {#PersistentVolumeClaimStatus}\n\nPersistentVolumeClaimStatus is the current status of a persistent volume claim.\n\n<hr>\n\n- **accessModes** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  accessModes contains the actual access modes the volume backing the PVC has. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#access-modes-1\n\n- **allocatedResourceStatuses** (map[string]string)\n\n  allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. 
Valid values are either:\n  \t* Un-prefixed keys:\n  \t\t- storage - the capacity of the volume.\n  \t* Custom resources must use implementation-defined prefixed names such as \"example.com\/my-custom-resource\"\n  Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used.\n  \n  ClaimResourceStatus can be in any of following states:\n  \t- ControllerResizeInProgress:\n  \t\tState set when resize controller starts resizing the volume in control-plane.\n  \t- ControllerResizeFailed:\n  \t\tState set when resize has failed in resize controller with a terminal error.\n  \t- NodeResizePending:\n  \t\tState set when resize controller has finished resizing the volume but further resizing of\n  \t\tvolume is needed on the node.\n  \t- NodeResizeInProgress:\n  \t\tState set when kubelet starts resizing the volume.\n  \t- NodeResizeFailed:\n  \t\tState set when resizing has failed in kubelet with a terminal error. Transient errors don't set\n  \t\tNodeResizeFailed.\n  For example: if expanding a PVC for more capacity - this field can be one of the following states:\n  \t- pvc.status.allocatedResourceStatus['storage'] = \"ControllerResizeInProgress\"\n       - pvc.status.allocatedResourceStatus['storage'] = \"ControllerResizeFailed\"\n       - pvc.status.allocatedResourceStatus['storage'] = \"NodeResizePending\"\n       - pvc.status.allocatedResourceStatus['storage'] = \"NodeResizeInProgress\"\n       - pvc.status.allocatedResourceStatus['storage'] = \"NodeResizeFailed\"\n  When this field is not set, it means that no resize operation is in progress for the given PVC.\n  \n  A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. 
For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC.\n  \n  This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.\n\n- **allocatedResources** (map[string]<a href=\"\">Quantity<\/a>)\n\n  allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either:\n  \t* Un-prefixed keys:\n  \t\t- storage - the capacity of the volume.\n  \t* Custom resources must use implementation-defined prefixed names such as \"example.com\/my-custom-resource\"\n  Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used.\n  \n  Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity.\n  \n  A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. 
For example, a controller that is only responsible for resizing volume capacity should ignore PVC updates that change other valid resources associated with the PVC.\n  \n  This is an alpha field and requires enabling the RecoverVolumeExpansionFailure feature.\n\n- **capacity** (map[string]<a href=\"\">Quantity<\/a>)\n\n  capacity represents the actual resources of the underlying volume.\n\n- **conditions** ([]PersistentVolumeClaimCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  conditions lists the current conditions of the persistent volume claim. If the underlying persistent volume is being resized, the condition will be set to 'Resizing'.\n\n  <a name=\"PersistentVolumeClaimCondition\"><\/a>\n  *PersistentVolumeClaimCondition contains details about the state of a PVC*\n\n  - **conditions.status** (string), required\n\n\n  - **conditions.type** (string), required\n\n\n  - **conditions.lastProbeTime** (Time)\n\n    lastProbeTime is the time we probed the condition.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.lastTransitionTime** (Time)\n\n    lastTransitionTime is the time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    message is the human-readable message indicating details about the last transition.\n\n  - **conditions.reason** (string)\n\n    reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. 
If it reports \"Resizing\", that means the underlying persistent volume is being resized.\n\n- **currentVolumeAttributesClassName** (string)\n\n  currentVolumeAttributesClassName is the current name of the VolumeAttributesClass the PVC is using. When unset, there is no VolumeAttributesClass applied to this PersistentVolumeClaim. This is a beta field and requires enabling the VolumeAttributesClass feature (off by default).\n\n- **modifyVolumeStatus** (ModifyVolumeStatus)\n\n  ModifyVolumeStatus represents the status object of the ControllerModifyVolume operation. When this is unset, there is no ModifyVolume operation being attempted. This is a beta field and requires enabling the VolumeAttributesClass feature (off by default).\n\n  <a name=\"ModifyVolumeStatus\"><\/a>\n  *ModifyVolumeStatus represents the status object of the ControllerModifyVolume operation*\n\n  - **modifyVolumeStatus.status** (string), required\n\n    status is the status of the ControllerModifyVolume operation. It can be in any of the following states:\n     - Pending\n       Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as\n       the specified VolumeAttributesClass not existing.\n     - InProgress\n       InProgress indicates that the volume is being modified.\n     - Infeasible\n       Infeasible indicates that the request has been rejected as invalid by the CSI driver. To\n       resolve the error, a valid VolumeAttributesClass needs to be specified.\n    Note: New statuses can be added in the future. 
Consumers should check for unknown statuses and fail appropriately.\n\n  - **modifyVolumeStatus.targetVolumeAttributesClassName** (string)\n\n    targetVolumeAttributesClassName is the name of the VolumeAttributesClass the PVC is currently being reconciled with.\n\n- **phase** (string)\n\n  phase represents the current phase of the PersistentVolumeClaim.\n\n\n\n\n\n## PersistentVolumeClaimList {#PersistentVolumeClaimList}\n\nPersistentVolumeClaimList is a list of PersistentVolumeClaim items.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: PersistentVolumeClaimList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">PersistentVolumeClaim<\/a>), required\n\n  items is a list of persistent volume claims. More info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#persistentvolumeclaims\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified PersistentVolumeClaim\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/persistentvolumeclaims\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolumeClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolumeClaim<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified PersistentVolumeClaim\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/persistentvolumeclaims\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolumeClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a 
href=\"\">PersistentVolumeClaim<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind PersistentVolumeClaim\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/persistentvolumeclaims\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolumeClaimList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind PersistentVolumeClaim\n\n#### HTTP Request\n\nGET \/api\/v1\/persistentvolumeclaims\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  
<a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolumeClaimList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a PersistentVolumeClaim\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/persistentvolumeclaims\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">PersistentVolumeClaim<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolumeClaim<\/a>): OK\n\n201 (<a href=\"\">PersistentVolumeClaim<\/a>): Created\n\n202 (<a href=\"\">PersistentVolumeClaim<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified PersistentVolumeClaim\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/persistentvolumeclaims\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolumeClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">PersistentVolumeClaim<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): 
string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolumeClaim<\/a>): OK\n\n201 (<a href=\"\">PersistentVolumeClaim<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified PersistentVolumeClaim\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/persistentvolumeclaims\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolumeClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">PersistentVolumeClaim<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolumeClaim<\/a>): OK\n\n201 (<a href=\"\">PersistentVolumeClaim<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified PersistentVolumeClaim\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/persistentvolumeclaims\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolumeClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolumeClaim<\/a>): OK\n\n201 (<a 
href=\"\">PersistentVolumeClaim<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified PersistentVolumeClaim\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/persistentvolumeclaims\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolumeClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolumeClaim<\/a>): OK\n\n201 (<a href=\"\">PersistentVolumeClaim<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a PersistentVolumeClaim\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/persistentvolumeclaims\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PersistentVolumeClaim\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PersistentVolumeClaim<\/a>): OK\n\n202 (<a href=\"\">PersistentVolumeClaim<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of PersistentVolumeClaim\n\n#### HTTP Request\n\nDELETE 
\/api\/v1\/namespaces\/{namespace}\/persistentvolumeclaims\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n
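All of the namespaced operations listed above share one path scheme: the collection path serves `list`, `create`, and `deletecollection`, appending `/{name}` serves the per-object verbs, and appending `/status` addresses the status subresource. As a small illustrative sketch (the helper name is hypothetical, not part of any Kubernetes client library), the paths can be built as:

```python
from typing import Optional

def pvc_path(namespace: str, name: Optional[str] = None, status: bool = False) -> str:
    """Build the request path for the PersistentVolumeClaim endpoints
    listed above. Omitting `name` yields the collection path used by
    `list`, `create`, and `deletecollection`."""
    path = f"/api/v1/namespaces/{namespace}/persistentvolumeclaims"
    if name is not None:
        path += f"/{name}"
        if status:
            # Address the /status subresource (the `get`/`update`/`patch`
            # status operations above).
            path += "/status"
    return path
```

For example, `pvc_path("default", "my-claim", status=True)` yields the path used by `update` replace status of the specified PersistentVolumeClaim.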
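The resize-related status fields documented above (`allocatedResourceStatuses` and `allocatedResources`) lend themselves to programmatic interpretation. The sketch below is illustrative only and not part of any Kubernetes client library: it assumes the PVC `spec` and `status` are plain dicts in the JSON wire shape, and the quantity parser deliberately handles only the binary suffixes (real `Quantity` values also allow decimal suffixes and exponents).

```python
from typing import Optional

# Binary (power-of-two) Quantity suffixes; a real parser covers more forms.
BINARY_SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def parse_quantity(q: str) -> int:
    """Parse a small subset of Kubernetes Quantity strings into bytes."""
    for suffix, factor in BINARY_SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain byte count

def resize_state(status: dict) -> Optional[str]:
    """Return the ClaimResourceStatus for 'storage', or None when the
    field is unset, i.e. no resize operation is in progress."""
    return (status.get("allocatedResourceStatuses") or {}).get("storage")

def storage_for_quota(spec: dict, status: dict) -> int:
    """Per the allocatedResources description: storage quota uses the
    larger of allocatedResources and PVC.spec.resources; when
    allocatedResources is unset, spec alone is used."""
    requested = parse_quantity(spec["resources"]["requests"]["storage"])
    allocated = (status.get("allocatedResources") or {}).get("storage")
    if allocated is None:
        return requested
    return max(requested, parse_quantity(allocated))
```

A consumer following the guidance above would additionally ignore resourceName keys and ClaimResourceStatus values it does not recognize, rather than failing on them.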
a href    PersistentVolumeClaim  a    Created  401  Unauthorized        patch  partially update status of the specified PersistentVolumeClaim       HTTP Request  PATCH  api v1 namespaces  namespace  persistentvolumeclaims  name  status       Parameters       name     in path    string  required    name of the PersistentVolumeClaim       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PersistentVolumeClaim  a    OK  201   a href    PersistentVolumeClaim  a    Created  401  Unauthorized        delete  delete a PersistentVolumeClaim       HTTP Request  DELETE  api v1 namespaces  namespace  persistentvolumeclaims  name        Parameters       name     in path    string  required    name of the PersistentVolumeClaim       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    PersistentVolumeClaim  a    OK  202   a href    PersistentVolumeClaim  a    Accepted  401  Unauthorized        deletecollection  delete collection of PersistentVolumeClaim       HTTP Request  DELETE  api v1 namespaces  namespace  persistentvolumeclaims       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a   
         continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference kind VolumeAttachment title VolumeAttachment autogenerated true weight 11 contenttype apireference apimetadata apiVersion storage k8s io v1 VolumeAttachment captures the intent to attach or detach the specified volume to from the specified node import k8s io api storage v1","answers":"---\napi_metadata:\n  apiVersion: \"storage.k8s.io\/v1\"\n  import: \"k8s.io\/api\/storage\/v1\"\n  kind: \"VolumeAttachment\"\ncontent_type: \"api_reference\"\ndescription: \"VolumeAttachment captures the intent to attach or detach the specified volume to\/from the specified node.\"\ntitle: \"VolumeAttachment\"\nweight: 11\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: storage.k8s.io\/v1`\n\n`import \"k8s.io\/api\/storage\/v1\"`\n\n\n## VolumeAttachment {#VolumeAttachment}\n\nVolumeAttachment captures the intent to attach or detach the specified volume to\/from the specified node.\n\nVolumeAttachment objects are non-namespaced.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1\n\n\n- **kind**: VolumeAttachment\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">VolumeAttachmentSpec<\/a>), required\n\n  spec represents specification of the desired attach\/detach volume behavior. 
Populated by the Kubernetes system.\n\n- **status** (<a href=\"\">VolumeAttachmentStatus<\/a>)\n\n  status represents status of the VolumeAttachment request. Populated by the entity completing the attach or detach operation, i.e. the external-attacher.\n\n\n\n\n\n## VolumeAttachmentSpec {#VolumeAttachmentSpec}\n\nVolumeAttachmentSpec is the specification of a VolumeAttachment request.\n\n<hr>\n\n- **attacher** (string), required\n\n  attacher indicates the name of the volume driver that MUST handle this request. This is the name returned by GetPluginName().\n\n- **nodeName** (string), required\n\n  nodeName represents the node that the volume should be attached to.\n\n- **source** (VolumeAttachmentSource), required\n\n  source represents the volume that should be attached.\n\n  <a name=\"VolumeAttachmentSource\"><\/a>\n  *VolumeAttachmentSource represents a volume that should be attached. Right now only PersistentVolumes can be attached via external attacher, in future we may allow also inline volumes in pods. Exactly one member can be set.*\n\n  - **source.inlineVolumeSpec** (<a href=\"\">PersistentVolumeSpec<\/a>)\n\n    inlineVolumeSpec contains all the information necessary to attach a persistent volume defined by a pod's inline VolumeSource. This field is populated only for the CSIMigration feature. It contains translated fields from a pod's inline VolumeSource to a PersistentVolumeSpec. This field is beta-level and is only honored by servers that enabled the CSIMigration feature.\n\n  - **source.persistentVolumeName** (string)\n\n    persistentVolumeName represents the name of the persistent volume to attach.\n\n\n\n\n\n## VolumeAttachmentStatus {#VolumeAttachmentStatus}\n\nVolumeAttachmentStatus is the status of a VolumeAttachment request.\n\n<hr>\n\n- **attached** (boolean), required\n\n  attached indicates the volume is successfully attached. This field must only be set by the entity completing the attach operation, i.e. 
the external-attacher.\n\n- **attachError** (VolumeError)\n\n  attachError represents the last error encountered during attach operation, if any. This field must only be set by the entity completing the attach operation, i.e. the external-attacher.\n\n  <a name=\"VolumeError\"><\/a>\n  *VolumeError captures an error encountered during a volume operation.*\n\n  - **attachError.message** (string)\n\n    message represents the error encountered during Attach or Detach operation. This string may be logged, so it should not contain sensitive information.\n\n  - **attachError.time** (Time)\n\n    time represents the time the error was encountered.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **attachmentMetadata** (map[string]string)\n\n  attachmentMetadata is populated with any information returned by the attach operation, upon successful attach, that must be passed into subsequent WaitForAttach or Mount calls. This field must only be set by the entity completing the attach operation, i.e. the external-attacher.\n\n- **detachError** (VolumeError)\n\n  detachError represents the last error encountered during detach operation, if any. This field must only be set by the entity completing the detach operation, i.e. the external-attacher.\n\n  <a name=\"VolumeError\"><\/a>\n  *VolumeError captures an error encountered during a volume operation.*\n\n  - **detachError.message** (string)\n\n    message represents the error encountered during Attach or Detach operation. This string may be logged, so it should not contain sensitive information.\n\n  - **detachError.time** (Time)\n\n    time represents the time the error was encountered.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  
Wrappers are provided for many of the factory methods that the time package offers.*\n\n\n\n\n\n## VolumeAttachmentList {#VolumeAttachmentList}\n\nVolumeAttachmentList is a collection of VolumeAttachment objects.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1\n\n\n- **kind**: VolumeAttachmentList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">VolumeAttachment<\/a>), required\n\n  items is the list of VolumeAttachments\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified VolumeAttachment\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/volumeattachments\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttachment\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttachment<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified VolumeAttachment\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/volumeattachments\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttachment\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttachment<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind VolumeAttachment\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/volumeattachments\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a 
href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttachmentList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a VolumeAttachment\n\n#### HTTP Request\n\nPOST \/apis\/storage.k8s.io\/v1\/volumeattachments\n\n#### Parameters\n\n\n- **body**: <a href=\"\">VolumeAttachment<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttachment<\/a>): OK\n\n201 (<a href=\"\">VolumeAttachment<\/a>): Created\n\n202 (<a href=\"\">VolumeAttachment<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified VolumeAttachment\n\n#### HTTP Request\n\nPUT \/apis\/storage.k8s.io\/v1\/volumeattachments\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttachment\n\n\n- **body**: <a href=\"\">VolumeAttachment<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a 
href=\"\">VolumeAttachment<\/a>): OK\n\n201 (<a href=\"\">VolumeAttachment<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified VolumeAttachment\n\n#### HTTP Request\n\nPUT \/apis\/storage.k8s.io\/v1\/volumeattachments\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttachment\n\n\n- **body**: <a href=\"\">VolumeAttachment<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttachment<\/a>): OK\n\n201 (<a href=\"\">VolumeAttachment<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified VolumeAttachment\n\n#### HTTP Request\n\nPATCH \/apis\/storage.k8s.io\/v1\/volumeattachments\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttachment\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttachment<\/a>): OK\n\n201 (<a href=\"\">VolumeAttachment<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified VolumeAttachment\n\n#### HTTP Request\n\nPATCH \/apis\/storage.k8s.io\/v1\/volumeattachments\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttachment\n\n\n- **body**: <a 
href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttachment<\/a>): OK\n\n201 (<a href=\"\">VolumeAttachment<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a VolumeAttachment\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1\/volumeattachments\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the VolumeAttachment\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">VolumeAttachment<\/a>): OK\n\n202 (<a href=\"\">VolumeAttachment<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of VolumeAttachment\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1\/volumeattachments\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a 
href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   storage k8s io v1    import   k8s io api storage v1    kind   VolumeAttachment  content type   api reference  description   VolumeAttachment captures the intent to attach or detach the specified volume to from the specified node   title   VolumeAttachment  weight  11 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  storage k8s io v1    import  k8s io api storage v1        VolumeAttachment   VolumeAttachment   VolumeAttachment captures the intent to attach or detach the specified volume to from the specified node   VolumeAttachment objects are non namespaced    hr       apiVersion    storage k8s io v1       kind    VolumeAttachment       metadata     a href    ObjectMeta  a      Standard object metadata  More info 
 https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    VolumeAttachmentSpec  a    required    spec represents specification of the desired attach detach volume behavior  Populated by the Kubernetes system       status     a href    VolumeAttachmentStatus  a      status represents status of the VolumeAttachment request  Populated by the entity completing the attach or detach operation  i e  the external attacher          VolumeAttachmentSpec   VolumeAttachmentSpec   VolumeAttachmentSpec is the specification of a VolumeAttachment request    hr       attacher    string   required    attacher indicates the name of the volume driver that MUST handle this request  This is the name returned by GetPluginName         nodeName    string   required    nodeName represents the node that the volume should be attached to       source    VolumeAttachmentSource   required    source represents the volume that should be attached      a name  VolumeAttachmentSource    a     VolumeAttachmentSource represents a volume that should be attached  Right now only PersistenVolumes can be attached via external attacher  in future we may allow also inline volumes in pods  Exactly one member can be set          source inlineVolumeSpec     a href    PersistentVolumeSpec  a        inlineVolumeSpec contains all the information necessary to attach a persistent volume defined by a pod s inline VolumeSource  This field is populated only for the CSIMigration feature  It contains translated fields from a pod s inline VolumeSource to a PersistentVolumeSpec  This field is beta level and is only honored by servers that enabled the CSIMigration feature         source persistentVolumeName    string       persistentVolumeName represents the name of the persistent volume to attach          VolumeAttachmentStatus   VolumeAttachmentStatus   VolumeAttachmentStatus is the status of a VolumeAttachment request    hr       attached    boolean   required    
attached indicates the volume is successfully attached  This field must only be set by the entity completing the attach operation  i e  the external attacher       attachError    VolumeError     attachError represents the last error encountered during attach operation  if any  This field must only be set by the entity completing the attach operation  i e  the external attacher      a name  VolumeError    a     VolumeError captures an error encountered during a volume operation          attachError message    string       message represents the error encountered during Attach or Detach operation  This string may be logged  so it should not contain sensitive information         attachError time    Time       time represents the time the error was encountered        a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers        attachmentMetadata    map string string     attachmentMetadata is populated with any information returned by the attach operation  upon successful attach  that must be passed into subsequent WaitForAttach or Mount calls  This field must only be set by the entity completing the attach operation  i e  the external attacher       detachError    VolumeError     detachError represents the last error encountered during detach operation  if any  This field must only be set by the entity completing the detach operation  i e  the external attacher      a name  VolumeError    a     VolumeError captures an error encountered during a volume operation          detachError message    string       message represents the error encountered during Attach or Detach operation  This string may be logged  so it should not contain sensitive information         detachError time    Time       time represents the time the error was encountered        a name  Time    a       Time is a wrapper around time Time which supports correct 
marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers           VolumeAttachmentList   VolumeAttachmentList   VolumeAttachmentList is a collection of VolumeAttachment objects    hr       apiVersion    storage k8s io v1       kind    VolumeAttachmentList       metadata     a href    ListMeta  a      Standard list metadata More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      items       a href    VolumeAttachment  a    required    items is the list of VolumeAttachments         Operations   Operations      hr             get  read the specified VolumeAttachment       HTTP Request  GET  apis storage k8s io v1 volumeattachments  name        Parameters       name     in path    string  required    name of the VolumeAttachment       pretty     in query    string     a href    pretty  a          Response   200   a href    VolumeAttachment  a    OK  401  Unauthorized        get  read status of the specified VolumeAttachment       HTTP Request  GET  apis storage k8s io v1 volumeattachments  name  status       Parameters       name     in path    string  required    name of the VolumeAttachment       pretty     in query    string     a href    pretty  a          Response   200   a href    VolumeAttachment  a    OK  401  Unauthorized        list  list or watch objects of kind VolumeAttachment       HTTP Request  GET  apis storage k8s io v1 volumeattachments       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  
a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    VolumeAttachmentList  a    OK  401  Unauthorized        create  create a VolumeAttachment       HTTP Request  POST  apis storage k8s io v1 volumeattachments       Parameters       body     a href    VolumeAttachment  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    VolumeAttachment  a    OK  201   a href    VolumeAttachment  a    Created  202   a href    VolumeAttachment  a    Accepted  401  Unauthorized        update  replace the specified VolumeAttachment       HTTP Request  PUT  apis storage k8s io v1 volumeattachments  name        Parameters       name     in path    string  required    name of the VolumeAttachment       body     a href    VolumeAttachment  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    VolumeAttachment  a    OK  201   a href    VolumeAttachment  a    Created  401  Unauthorized        update  replace status of the specified VolumeAttachment       HTTP Request  PUT  apis storage k8s io v1 volumeattachments  name  status       Parameters       name     in path    string  required    name of the VolumeAttachment       body     a href    VolumeAttachment  a   
required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    VolumeAttachment  a    OK  201   a href    VolumeAttachment  a    Created  401  Unauthorized        patch  partially update the specified VolumeAttachment       HTTP Request  PATCH  apis storage k8s io v1 volumeattachments  name        Parameters       name     in path    string  required    name of the VolumeAttachment       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    VolumeAttachment  a    OK  201   a href    VolumeAttachment  a    Created  401  Unauthorized        patch  partially update status of the specified VolumeAttachment       HTTP Request  PATCH  apis storage k8s io v1 volumeattachments  name  status       Parameters       name     in path    string  required    name of the VolumeAttachment       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    VolumeAttachment  a    OK  201   a href    VolumeAttachment  a    Created  401  Unauthorized        delete  delete a VolumeAttachment       HTTP Request  DELETE  apis storage k8s 
io v1 volumeattachments  name        Parameters       name     in path    string  required    name of the VolumeAttachment       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    VolumeAttachment  a    OK  202   a href    VolumeAttachment  a    Accepted  401  Unauthorized        deletecollection  delete collection of VolumeAttachment       HTTP Request  DELETE  apis storage k8s io v1 volumeattachments       Parameters       body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference autogenerated true apiVersion storagemigration k8s io v1alpha1 StorageVersionMigration represents a migration of stored data to the latest storage version contenttype apireference title StorageVersionMigration v1alpha1 import k8s io api storagemigration v1alpha1 apimetadata kind StorageVersionMigration weight 9","answers":"---\napi_metadata:\n  apiVersion: \"storagemigration.k8s.io\/v1alpha1\"\n  import: \"k8s.io\/api\/storagemigration\/v1alpha1\"\n  kind: \"StorageVersionMigration\"\ncontent_type: \"api_reference\"\ndescription: \"StorageVersionMigration represents a migration of stored data to the latest storage version.\"\ntitle: \"StorageVersionMigration v1alpha1\"\nweight: 9\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: storagemigration.k8s.io\/v1alpha1`\n\n`import \"k8s.io\/api\/storagemigration\/v1alpha1\"`\n\n\n## StorageVersionMigration {#StorageVersionMigration}\n\nStorageVersionMigration represents a migration of stored data to the latest storage version.\n\n<hr>\n\n- **apiVersion**: storagemigration.k8s.io\/v1alpha1\n\n\n- **kind**: StorageVersionMigration\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">StorageVersionMigrationSpec<\/a>)\n\n  Specification of the migration.\n\n- **status** (<a href=\"\">StorageVersionMigrationStatus<\/a>)\n\n  Status of the migration.\n\n\n\n\n\n## StorageVersionMigrationSpec {#StorageVersionMigrationSpec}\n\nSpec of the storage version migration.\n\n<hr>\n\n- **continueToken** (string)\n\n  The token used in the list options to get the next chunk of objects to migrate. When the .status.conditions indicates the migration is \"Running\", users can use this token to check the progress of the migration.\n\n- **resource** (GroupVersionResource), required\n\n  The resource that is being migrated. The migrator sends requests to the endpoint serving the resource. Immutable.\n\n  <a name=\"GroupVersionResource\"><\/a>\n  *The names of the group, the version, and the resource.*\n\n  - **resource.group** (string)\n\n    The name of the group.\n\n  - **resource.resource** (string)\n\n    The name of the resource.\n\n  - **resource.version** (string)\n\n    The name of the version.\n\n\n\n\n\n## StorageVersionMigrationStatus {#StorageVersionMigrationStatus}\n\nStatus of the storage version migration.\n\n<hr>\n\n- **conditions** ([]MigrationCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  The latest available observations of the migration's current state.\n\n  <a name=\"MigrationCondition\"><\/a>\n  *Describes the state of a migration at a certain point.*\n\n  - **conditions.status** (string), required\n\n    Status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    Type of the condition.\n\n  - **conditions.lastUpdateTime** (Time)\n\n    The last time this condition was updated.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct 
marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    A human readable message indicating details about the transition.\n\n  - **conditions.reason** (string)\n\n    The reason for the condition's last transition.\n\n- **resourceVersion** (string)\n\n  ResourceVersion to compare with the GC cache for performing the migration. This is the current resource version of given group, version and resource when kube-controller-manager first observes this StorageVersionMigration resource.\n\n\n\n\n\n## StorageVersionMigrationList {#StorageVersionMigrationList}\n\nStorageVersionMigrationList is a collection of storage version migrations.\n\n<hr>\n\n- **apiVersion**: storagemigration.k8s.io\/v1alpha1\n\n\n- **kind**: StorageVersionMigrationList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">StorageVersionMigration<\/a>), required\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  Items is the list of StorageVersionMigration\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified StorageVersionMigration\n\n#### HTTP Request\n\nGET \/apis\/storagemigration.k8s.io\/v1alpha1\/storageversionmigrations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StorageVersionMigration\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageVersionMigration<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified StorageVersionMigration\n\n#### HTTP Request\n\nGET \/apis\/storagemigration.k8s.io\/v1alpha1\/storageversionmigrations\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): 
string, required\n\n  name of the StorageVersionMigration\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageVersionMigration<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind StorageVersionMigration\n\n#### HTTP Request\n\nGET \/apis\/storagemigration.k8s.io\/v1alpha1\/storageversionmigrations\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageVersionMigrationList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a StorageVersionMigration\n\n#### HTTP Request\n\nPOST \/apis\/storagemigration.k8s.io\/v1alpha1\/storageversionmigrations\n\n#### Parameters\n\n\n- **body**: <a href=\"\">StorageVersionMigration<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a 
href=\"\">StorageVersionMigration<\/a>): OK\n\n201 (<a href=\"\">StorageVersionMigration<\/a>): Created\n\n202 (<a href=\"\">StorageVersionMigration<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified StorageVersionMigration\n\n#### HTTP Request\n\nPUT \/apis\/storagemigration.k8s.io\/v1alpha1\/storageversionmigrations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StorageVersionMigration\n\n\n- **body**: <a href=\"\">StorageVersionMigration<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageVersionMigration<\/a>): OK\n\n201 (<a href=\"\">StorageVersionMigration<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified StorageVersionMigration\n\n#### HTTP Request\n\nPUT \/apis\/storagemigration.k8s.io\/v1alpha1\/storageversionmigrations\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StorageVersionMigration\n\n\n- **body**: <a href=\"\">StorageVersionMigration<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageVersionMigration<\/a>): OK\n\n201 (<a href=\"\">StorageVersionMigration<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified StorageVersionMigration\n\n#### HTTP Request\n\nPATCH 
\/apis\/storagemigration.k8s.io\/v1alpha1\/storageversionmigrations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StorageVersionMigration\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageVersionMigration<\/a>): OK\n\n201 (<a href=\"\">StorageVersionMigration<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified StorageVersionMigration\n\n#### HTTP Request\n\nPATCH \/apis\/storagemigration.k8s.io\/v1alpha1\/storageversionmigrations\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StorageVersionMigration\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">StorageVersionMigration<\/a>): OK\n\n201 (<a href=\"\">StorageVersionMigration<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a StorageVersionMigration\n\n#### HTTP Request\n\nDELETE \/apis\/storagemigration.k8s.io\/v1alpha1\/storageversionmigrations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the StorageVersionMigration\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- 
**dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of StorageVersionMigration\n\n#### HTTP Request\n\nDELETE \/apis\/storagemigration.k8s.io\/v1alpha1\/storageversionmigrations\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   storagemigration k8s io v1alpha1    import   k8s io api storagemigration v1alpha1    kind   StorageVersionMigration  content type   api reference  
description   StorageVersionMigration represents a migration of stored data to the latest storage version   title   StorageVersionMigration v1alpha1  weight  9 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  storagemigration k8s io v1alpha1    import  k8s io api storagemigration v1alpha1        StorageVersionMigration   StorageVersionMigration   StorageVersionMigration represents a migration of stored data to the latest storage version    hr       apiVersion    storagemigration k8s io v1alpha1       kind    StorageVersionMigration       metadata     a href    ObjectMeta  a      Standard object metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    StorageVersionMigrationSpec  a      Specification of the migration       status     a href    StorageVersionMigrationStatus  a      Status of the migration          StorageVersionMigrationSpec   StorageVersionMigrationSpec   Spec of the storage version migration    hr       continueToken    string     The token used in the list options to get the next chunk of objects to migrate  When the  status conditions indicates the migration is  Running   users can use this token to check the progress of the migration       resource    GroupVersionResource   required    The resource that is being migrated  The migrator sends requests to the endpoint serving the resource  Immutable      a name  
GroupVersionResource    a     The names of the group  the version  and the resource          resource group    string       The name of the group         resource resource    string       The name of the resource         resource version    string       The name of the version          StorageVersionMigrationStatus   StorageVersionMigrationStatus   Status of the storage version migration    hr       conditions      MigrationCondition      Patch strategy  merge on key  type         Map  unique values on key type will be kept during a merge       The latest available observations of the migration s current state      a name  MigrationCondition    a     Describes the state of a migration at a certain point          conditions status    string   required      Status of the condition  one of True  False  Unknown         conditions type    string   required      Type of the condition         conditions lastUpdateTime    Time       The last time this condition was updated        a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions message    string       A human readable message indicating details about the transition         conditions reason    string       The reason for the condition s last transition       resourceVersion    string     ResourceVersion to compare with the GC cache for performing the migration  This is the current resource version of given group  version and resource when kube controller manager first observes this StorageVersionMigration resource          StorageVersionMigrationList   StorageVersionMigrationList   StorageVersionMigrationList is a collection of storage version migrations    hr       apiVersion    storagemigration k8s io v1alpha1       kind    StorageVersionMigrationList       metadata     a href    ListMeta  a      Standard list metadata More info  https   git k8s io community 
contributors devel sig architecture api conventions md metadata      items       a href    StorageVersionMigration  a    required     Patch strategy  merge on key  type         Map  unique values on key type will be kept during a merge       Items is the list of StorageVersionMigration         Operations   Operations      hr             get  read the specified StorageVersionMigration       HTTP Request  GET  apis storagemigration k8s io v1alpha1 storageversionmigrations  name        Parameters       name     in path    string  required    name of the StorageVersionMigration       pretty     in query    string     a href    pretty  a          Response   200   a href    StorageVersionMigration  a    OK  401  Unauthorized        get  read status of the specified StorageVersionMigration       HTTP Request  GET  apis storagemigration k8s io v1alpha1 storageversionmigrations  name  status       Parameters       name     in path    string  required    name of the StorageVersionMigration       pretty     in query    string     a href    pretty  a          Response   200   a href    StorageVersionMigration  a    OK  401  Unauthorized        list  list or watch objects of kind StorageVersionMigration       HTTP Request  GET  apis storagemigration k8s io v1alpha1 storageversionmigrations       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        
timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    StorageVersionMigrationList  a    OK  401  Unauthorized        create  create a StorageVersionMigration       HTTP Request  POST  apis storagemigration k8s io v1alpha1 storageversionmigrations       Parameters       body     a href    StorageVersionMigration  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    StorageVersionMigration  a    OK  201   a href    StorageVersionMigration  a    Created  202   a href    StorageVersionMigration  a    Accepted  401  Unauthorized        update  replace the specified StorageVersionMigration       HTTP Request  PUT  apis storagemigration k8s io v1alpha1 storageversionmigrations  name        Parameters       name     in path    string  required    name of the StorageVersionMigration       body     a href    StorageVersionMigration  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    StorageVersionMigration  a    OK  201   a href    StorageVersionMigration  a    Created  401  Unauthorized        update  replace status of the specified StorageVersionMigration       HTTP Request  PUT  apis storagemigration k8s io v1alpha1 storageversionmigrations  name  status       Parameters       name     in path    string  required    name of the StorageVersionMigration       body     a href    StorageVersionMigration  a   required           
dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    StorageVersionMigration  a    OK  201   a href    StorageVersionMigration  a    Created  401  Unauthorized        patch  partially update the specified StorageVersionMigration       HTTP Request  PATCH  apis storagemigration k8s io v1alpha1 storageversionmigrations  name        Parameters       name     in path    string  required    name of the StorageVersionMigration       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    StorageVersionMigration  a    OK  201   a href    StorageVersionMigration  a    Created  401  Unauthorized        patch  partially update status of the specified StorageVersionMigration       HTTP Request  PATCH  apis storagemigration k8s io v1alpha1 storageversionmigrations  name  status       Parameters       name     in path    string  required    name of the StorageVersionMigration       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    StorageVersionMigration  a    OK  201   a href    StorageVersionMigration  a    Created  401  
Unauthorized        delete  delete a StorageVersionMigration       HTTP Request  DELETE  apis storagemigration k8s io v1alpha1 storageversionmigrations  name        Parameters       name     in path    string  required    name of the StorageVersionMigration       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of StorageVersionMigration       HTTP Request  DELETE  apis storagemigration k8s io v1alpha1 storageversionmigrations       Parameters       body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference CSIDriver captures information about a Container Storage Interface CSI volume driver deployed on the cluster autogenerated true kind CSIDriver contenttype apireference apimetadata apiVersion storage k8s io v1 weight 3 title CSIDriver import k8s io api storage v1","answers":"---\napi_metadata:\n  apiVersion: \"storage.k8s.io\/v1\"\n  import: \"k8s.io\/api\/storage\/v1\"\n  kind: \"CSIDriver\"\ncontent_type: \"api_reference\"\ndescription: \"CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster.\"\ntitle: \"CSIDriver\"\nweight: 3\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: storage.k8s.io\/v1`\n\n`import \"k8s.io\/api\/storage\/v1\"`\n\n\n## CSIDriver {#CSIDriver}\n\nCSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster. Kubernetes attach detach controller uses this object to determine whether attach is required. Kubelet uses this object to determine whether pod information needs to be passed on mount. CSIDriver objects are non-namespaced.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1\n\n\n- **kind**: CSIDriver\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata. 
metadata.Name indicates the name of the CSI driver that this object refers to; it MUST be the same name returned by the CSI GetPluginName() call for that driver. The driver name must be 63 characters or less, beginning and ending with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), dots (.), and alphanumerics between. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">CSIDriverSpec<\/a>), required\n\n  spec represents the specification of the CSI Driver.\n\n\n\n\n\n## CSIDriverSpec {#CSIDriverSpec}\n\nCSIDriverSpec is the specification of a CSIDriver.\n\n<hr>\n\n- **attachRequired** (boolean)\n\n  attachRequired indicates this CSI volume driver requires an attach operation (because it implements the CSI ControllerPublishVolume() method), and that the Kubernetes attach detach controller should call the attach volume interface which checks the volumeattachment status and waits until the volume is attached before proceeding to mounting. The CSI external-attacher coordinates with CSI volume driver and updates the volumeattachment status when the attach operation is complete. If the CSIDriverRegistry feature gate is enabled and the value is specified to false, the attach operation will be skipped. Otherwise the attach operation will be called.\n  \n  This field is immutable.\n\n- **fsGroupPolicy** (string)\n\n  fsGroupPolicy defines if the underlying volume supports changing ownership and permission of the volume before being mounted. Refer to the specific FSGroupPolicy values for additional details.\n  \n  This field was immutable in Kubernetes \\< 1.29 and now is mutable.\n  \n  Defaults to ReadWriteOnceWithFSType, which will examine each volume to determine if Kubernetes should modify ownership and permissions of the volume. 
With the default policy the defined fsGroup will only be applied if a fstype is defined and the volume's access mode contains ReadWriteOnce.\n\n- **podInfoOnMount** (boolean)\n\n  podInfoOnMount indicates this CSI volume driver requires additional pod information (like podName, podUID, etc.) during mount operations, if set to true. If set to false, pod information will not be passed on mount. Default is false.\n  \n  The CSI driver specifies podInfoOnMount as part of driver deployment. If true, Kubelet will pass pod information as VolumeContext in the CSI NodePublishVolume() calls. The CSI driver is responsible for parsing and validating the information passed in as VolumeContext.\n  \n  The following VolumeContext will be passed if podInfoOnMount is set to true. This list might grow, but the prefix will be used. \"csi.storage.k8s.io\/pod.name\": pod.Name \"csi.storage.k8s.io\/pod.namespace\": pod.Namespace \"csi.storage.k8s.io\/pod.uid\": string(pod.UID) \"csi.storage.k8s.io\/ephemeral\": \"true\" if the volume is an ephemeral inline volume\n                                  defined by a CSIVolumeSource, otherwise \"false\"\n  \n  \"csi.storage.k8s.io\/ephemeral\" is a new feature in Kubernetes 1.16. It is only required for drivers which support both the \"Persistent\" and \"Ephemeral\" VolumeLifecycleMode. Other drivers can leave pod info disabled and\/or ignore this field. As Kubernetes 1.15 doesn't support this field, drivers can only support one mode when deployed on such a cluster and the deployment determines which mode that is, for example via a command line parameter of the driver.\n  \n  This field was immutable in Kubernetes \\< 1.29 and now is mutable.\n\n- **requiresRepublish** (boolean)\n\n  requiresRepublish indicates the CSI driver wants `NodePublishVolume` being periodically called to reflect any possible change in the mounted volume. 
This field defaults to false.\n  \n  Note: After a successful initial NodePublishVolume call, subsequent calls to NodePublishVolume should only update the contents of the volume. New mount points will not be seen by a running container.\n\n- **seLinuxMount** (boolean)\n\n  seLinuxMount specifies whether the CSI driver supports the \"-o context\" mount option.\n  \n  When \"true\", the CSI driver must ensure that all volumes provided by this CSI driver can be mounted separately with different `-o context` options. This is typical for storage backends that provide volumes as filesystems on block devices or as independent shared volumes. Kubernetes will call NodeStage \/ NodePublish with the \"-o context=xyz\" mount option when mounting a ReadWriteOncePod volume used in a Pod that has explicitly set an SELinux context. In the future, this may be expanded to other volume AccessModes. In any case, Kubernetes will ensure that the volume is mounted only with a single SELinux context.\n  \n  When \"false\", Kubernetes won't pass any special SELinux mount options to the driver. This is typical for volumes that represent subdirectories of a bigger shared filesystem.\n  \n  Default is \"false\".\n\n- **storageCapacity** (boolean)\n\n  storageCapacity indicates that the CSI volume driver wants pod scheduling to consider the storage capacity that the driver deployment will report by creating CSIStorageCapacity objects with capacity information, if set to true.\n  \n  The check can be enabled immediately when deploying a driver. 
In that case, provisioning new volumes with late binding will pause until the driver deployment has published some suitable CSIStorageCapacity object.\n  \n  Alternatively, the driver can be deployed with the field unset or false and it can be flipped later when storage capacity information has been published.\n  \n  This field was immutable in Kubernetes \\<= 1.22 and now is mutable.\n\n- **tokenRequests** ([]TokenRequest)\n\n  *Atomic: will be replaced during a merge*\n  \n  tokenRequests indicates the CSI driver needs pods' service account tokens it is mounting volume for to do necessary authentication. Kubelet will pass the tokens in VolumeContext in the CSI NodePublishVolume calls. The CSI driver should parse and validate the following VolumeContext: \"csi.storage.k8s.io\/serviceAccount.tokens\": {\n    \"\\<audience>\": {\n      \"token\": \\<token>,\n      \"expirationTimestamp\": \\<expiration timestamp in RFC3339>,\n    },\n    ...\n  }\n  \n  Note: Audience in each TokenRequest should be different and at most one token is empty string. To receive a new token after expiry, RequiresRepublish can be used to trigger NodePublishVolume periodically.\n\n  <a name=\"TokenRequest\"><\/a>\n  *TokenRequest contains parameters of a service account token.*\n\n  - **tokenRequests.audience** (string), required\n\n    audience is the intended audience of the token in \"TokenRequestSpec\". It will default to the audiences of kube apiserver.\n\n  - **tokenRequests.expirationSeconds** (int64)\n\n    expirationSeconds is the duration of validity of the token in \"TokenRequestSpec\". It has the same default value of \"ExpirationSeconds\" in \"TokenRequestSpec\".\n\n- **volumeLifecycleModes** ([]string)\n\n  *Set: unique values will be kept during a merge*\n  \n  volumeLifecycleModes defines what kind of volumes this CSI volume driver supports. 
The default if the list is empty is \"Persistent\", which is the usage defined by the CSI specification and implemented in Kubernetes via the usual PV\/PVC mechanism.\n  \n  The other mode is \"Ephemeral\". In this mode, volumes are defined inline inside the pod spec with CSIVolumeSource and their lifecycle is tied to the lifecycle of that pod. A driver has to be aware of this because it is only going to get a NodePublishVolume call for such a volume.\n  \n  For more information about implementing this mode, see https:\/\/kubernetes-csi.github.io\/docs\/ephemeral-local-volumes.html A driver can support one or more of these modes and more modes may be added in the future.\n  \n  This field is beta. This field is immutable.\n\n\n\n\n\n## CSIDriverList {#CSIDriverList}\n\nCSIDriverList is a collection of CSIDriver objects.\n\n<hr>\n\n- **apiVersion**: storage.k8s.io\/v1\n\n\n- **kind**: CSIDriverList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">CSIDriver<\/a>), required\n\n  items is the list of CSIDriver\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified CSIDriver\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/csidrivers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSIDriver\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIDriver<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind CSIDriver\n\n#### HTTP Request\n\nGET \/apis\/storage.k8s.io\/v1\/csidrivers\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a 
href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIDriverList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a CSIDriver\n\n#### HTTP Request\n\nPOST \/apis\/storage.k8s.io\/v1\/csidrivers\n\n#### Parameters\n\n\n- **body**: <a href=\"\">CSIDriver<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIDriver<\/a>): OK\n\n201 (<a href=\"\">CSIDriver<\/a>): Created\n\n202 (<a href=\"\">CSIDriver<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified CSIDriver\n\n#### HTTP Request\n\nPUT \/apis\/storage.k8s.io\/v1\/csidrivers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSIDriver\n\n\n- **body**: <a href=\"\">CSIDriver<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a 
href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIDriver<\/a>): OK\n\n201 (<a href=\"\">CSIDriver<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified CSIDriver\n\n#### HTTP Request\n\nPATCH \/apis\/storage.k8s.io\/v1\/csidrivers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSIDriver\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIDriver<\/a>): OK\n\n201 (<a href=\"\">CSIDriver<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a CSIDriver\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1\/csidrivers\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the CSIDriver\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">CSIDriver<\/a>): OK\n\n202 (<a href=\"\">CSIDriver<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of CSIDriver\n\n#### HTTP Request\n\nDELETE \/apis\/storage.k8s.io\/v1\/csidrivers\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a 
href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
{"questions":"kubernetes reference apiVersion v1 weight 2 kind LimitRange contenttype apireference title LimitRange apimetadata autogenerated true LimitRange sets resource usage limits for each kind of resource in a Namespace import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"LimitRange\"\ncontent_type: \"api_reference\"\ndescription: \"LimitRange sets resource usage limits for each kind of resource in a Namespace.\"\ntitle: \"LimitRange\"\nweight: 2\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## LimitRange {#LimitRange}\n\nLimitRange sets resource usage limits for each kind of resource in a Namespace.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: LimitRange\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">LimitRangeSpec<\/a>)\n\n  Spec defines the limits enforced. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## LimitRangeSpec {#LimitRangeSpec}\n\nLimitRangeSpec defines a min\/max usage limit for resources that match on kind.\n\n<hr>\n\n- **limits** ([]LimitRangeItem), required\n\n  *Atomic: will be replaced during a merge*\n  \n  Limits is the list of LimitRangeItem objects that are enforced.\n\n  <a name=\"LimitRangeItem\"><\/a>\n  *LimitRangeItem defines a min\/max usage limit for any resource that matches on kind.*\n\n  - **limits.type** (string), required\n\n    Type of resource that this limit applies to.\n\n  - **limits.default** (map[string]<a href=\"\">Quantity<\/a>)\n\n    Default resource requirement limit value by resource name if resource limit is omitted.\n\n  - **limits.defaultRequest** (map[string]<a href=\"\">Quantity<\/a>)\n\n    DefaultRequest is the default resource requirement request value by resource name if resource request is omitted.\n\n  - **limits.max** (map[string]<a href=\"\">Quantity<\/a>)\n\n    Max usage constraints on this kind by resource name.\n\n  - **limits.maxLimitRequestRatio** (map[string]<a href=\"\">Quantity<\/a>)\n\n    MaxLimitRequestRatio if specified, the named resource must have a request and limit that are both non-zero where limit divided by request is less than or equal to the enumerated value; this represents the max burst for the named resource.\n\n  - **limits.min** (map[string]<a href=\"\">Quantity<\/a>)\n\n    Min usage constraints on this kind by resource name.\n\n\n\n\n\n## LimitRangeList {#LimitRangeList}\n\nLimitRangeList is a list of LimitRange items.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: LimitRangeList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">LimitRange<\/a>), required\n\n  Items is a list of LimitRange objects. More info: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified LimitRange\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/limitranges\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the LimitRange\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LimitRange<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind LimitRange\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/limitranges\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### 
Response\n\n\n200 (<a href=\"\">LimitRangeList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind LimitRange\n\n#### HTTP Request\n\nGET \/api\/v1\/limitranges\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LimitRangeList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a LimitRange\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/limitranges\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">LimitRange<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LimitRange<\/a>): OK\n\n201 (<a href=\"\">LimitRange<\/a>): Created\n\n202 (<a href=\"\">LimitRange<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the 
specified LimitRange\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/limitranges\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the LimitRange\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">LimitRange<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LimitRange<\/a>): OK\n\n201 (<a href=\"\">LimitRange<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified LimitRange\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/limitranges\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the LimitRange\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">LimitRange<\/a>): OK\n\n201 (<a href=\"\">LimitRange<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a LimitRange\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/limitranges\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the LimitRange\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a 
href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of LimitRange\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/limitranges\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
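The LimitRange record above documents fixed URL templates under `/api/v1` for the `get` and `list` operations, plus the query parameters each accepts. As a minimal sketch (plain Python, no Kubernetes client library assumed; the helper name and example namespace/name values are illustrative, not from the source), this is how those request paths are assembled:

```python
from urllib.parse import urlencode

# URL templates as documented for the core/v1 LimitRange resource.
GET_TMPL = "/api/v1/namespaces/{namespace}/limitranges/{name}"
LIST_TMPL = "/api/v1/namespaces/{namespace}/limitranges"

def limitrange_url(namespace, name=None, **query):
    """Build the request path for `get` (with name) or `list` (without),
    appending any documented query parameters (pretty, labelSelector, ...)."""
    path = (GET_TMPL if name else LIST_TMPL).format(namespace=namespace, name=name)
    return path + ("?" + urlencode(query) if query else "")

# `get` read the specified LimitRange, with pretty-printing requested:
print(limitrange_url("default", "cpu-limits", pretty="true"))
# -> /api/v1/namespaces/default/limitranges/cpu-limits?pretty=true

# `list` with a label selector and page limit, per the query parameters above:
print(limitrange_url("default", labelSelector="tier=backend", limit=50))
```

The same pattern applies to the other verbs in the table: `create` POSTs to the list path, while `update`, `patch`, and `delete` target the named path.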
{"questions":"kubernetes reference kind NetworkPolicy NetworkPolicy describes what network traffic is allowed for a set of Pods title NetworkPolicy contenttype apireference apimetadata weight 4 autogenerated true import k8s io api networking v1 apiVersion networking k8s io v1","answers":"---\napi_metadata:\n  apiVersion: \"networking.k8s.io\/v1\"\n  import: \"k8s.io\/api\/networking\/v1\"\n  kind: \"NetworkPolicy\"\ncontent_type: \"api_reference\"\ndescription: \"NetworkPolicy describes what network traffic is allowed for a set of Pods.\"\ntitle: \"NetworkPolicy\"\nweight: 4\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: networking.k8s.io\/v1`\n\n`import \"k8s.io\/api\/networking\/v1\"`\n\n\n## NetworkPolicy {#NetworkPolicy}\n\nNetworkPolicy describes what network traffic is allowed for a set of Pods\n\n<hr>\n\n- **apiVersion**: networking.k8s.io\/v1\n\n\n- **kind**: NetworkPolicy\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">NetworkPolicySpec<\/a>)\n\n  spec represents the specification of the desired behavior for this NetworkPolicy.\n\n\n\n\n\n## NetworkPolicySpec {#NetworkPolicySpec}\n\nNetworkPolicySpec provides the specification of a NetworkPolicy\n\n<hr>\n\n- **podSelector** (<a href=\"\">LabelSelector<\/a>), required\n\n  podSelector selects the pods to which this NetworkPolicy object applies. The array of ingress rules is applied to any pods selected by this field. Multiple network policies can select the same set of pods. In this case, the ingress rules for each are combined additively. This field is NOT optional and follows standard label selector semantics. An empty podSelector matches all pods in this namespace.\n\n- **policyTypes** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  policyTypes is a list of rule types that the NetworkPolicy relates to. Valid options are [\"Ingress\"], [\"Egress\"], or [\"Ingress\", \"Egress\"]. If this field is not specified, it will default based on the existence of ingress or egress rules; policies that contain an egress section are assumed to affect egress, and all policies (whether or not they contain an ingress section) are assumed to affect ingress. If you want to write an egress-only policy, you must explicitly specify policyTypes [ \"Egress\" ]. Likewise, if you want to write a policy that specifies that no egress is allowed, you must specify a policyTypes value that include \"Egress\" (since such a policy would not include an egress section and would otherwise default to just [ \"Ingress\" ]). This field is beta-level in 1.8\n\n- **ingress** ([]NetworkPolicyIngressRule)\n\n  *Atomic: will be replaced during a merge*\n  \n  ingress is a list of ingress rules to be applied to the selected pods. 
Traffic is allowed to a pod if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic source is the pod's local node, OR if the traffic matches at least one ingress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy does not allow any traffic (and serves solely to ensure that the pods it selects are isolated by default)\n\n  <a name=\"NetworkPolicyIngressRule\"><\/a>\n  *NetworkPolicyIngressRule describes a particular set of traffic that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and from.*\n\n  - **ingress.from** ([]NetworkPolicyPeer)\n\n    *Atomic: will be replaced during a merge*\n    \n    from is a list of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list.\n\n    <a name=\"NetworkPolicyPeer\"><\/a>\n    *NetworkPolicyPeer describes a peer to allow traffic to\/from. Only certain combinations of fields are allowed*\n\n    - **ingress.from.ipBlock** (IPBlock)\n\n      ipBlock defines policy on a particular IPBlock. If this field is set then neither of the other fields can be.\n\n      <a name=\"IPBlock\"><\/a>\n      *IPBlock describes a particular CIDR (Ex. \"192.168.1.0\/24\",\"2001:db8::\/64\") that is allowed to the pods matched by a NetworkPolicySpec's podSelector. 
The except entry describes CIDRs that should not be included within this rule.*\n\n      - **ingress.from.ipBlock.cidr** (string), required\n\n        cidr is a string representing the IPBlock Valid examples are \"192.168.1.0\/24\" or \"2001:db8::\/64\"\n\n      - **ingress.from.ipBlock.except** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        except is a slice of CIDRs that should not be included within an IPBlock Valid examples are \"192.168.1.0\/24\" or \"2001:db8::\/64\" Except values will be rejected if they are outside the cidr range\n\n    - **ingress.from.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      namespaceSelector selects namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces.\n      \n      If podSelector is also set, then the NetworkPolicyPeer as a whole selects the pods matching podSelector in the namespaces selected by namespaceSelector. Otherwise it selects all pods in the namespaces selected by namespaceSelector.\n\n    - **ingress.from.podSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      podSelector is a label selector which selects pods. This field follows standard label selector semantics; if present but empty, it selects all pods.\n      \n      If namespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the pods matching podSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the pods matching podSelector in the policy's own namespace.\n\n  - **ingress.ports** ([]NetworkPolicyPort)\n\n    *Atomic: will be replaced during a merge*\n    \n    ports is a list of ports which should be made accessible on the pods selected for this rule. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). 
If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list.\n\n    <a name=\"NetworkPolicyPort\"><\/a>\n    *NetworkPolicyPort describes a port to allow traffic on*\n\n    - **ingress.ports.port** (IntOrString)\n\n      port represents the port on the given protocol. This can either be a numerical or named port on a pod. If this field is not provided, this matches all port names and numbers. If present, only traffic on the specified protocol AND port will be matched.\n\n      <a name=\"IntOrString\"><\/a>\n      *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n    - **ingress.ports.endPort** (int32)\n\n      endPort indicates that the range of ports from port to endPort if set, inclusive, should be allowed by the policy. This field cannot be defined if the port field is not defined or if the port field is defined as a named (string) port. The endPort must be equal or greater than port.\n\n    - **ingress.ports.protocol** (string)\n\n      protocol represents the protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP.\n\n- **egress** ([]NetworkPolicyEgressRule)\n\n  *Atomic: will be replaced during a merge*\n  \n  egress is a list of egress rules to be applied to the selected pods. Outgoing traffic is allowed if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic matches at least one egress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy limits all outgoing traffic (and serves solely to ensure that the pods it selects are isolated by default). 
This field is beta-level in 1.8\n\n  <a name=\"NetworkPolicyEgressRule\"><\/a>\n  *NetworkPolicyEgressRule describes a particular set of traffic that is allowed out of pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and to. This type is beta-level in 1.8*\n\n  - **egress.to** ([]NetworkPolicyPeer)\n\n    *Atomic: will be replaced during a merge*\n    \n    to is a list of destinations for outgoing traffic of pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all destinations (traffic not restricted by destination). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the to list.\n\n    <a name=\"NetworkPolicyPeer\"><\/a>\n    *NetworkPolicyPeer describes a peer to allow traffic to\/from. Only certain combinations of fields are allowed*\n\n    - **egress.to.ipBlock** (IPBlock)\n\n      ipBlock defines policy on a particular IPBlock. If this field is set then neither of the other fields can be.\n\n      <a name=\"IPBlock\"><\/a>\n      *IPBlock describes a particular CIDR (Ex. \"192.168.1.0\/24\",\"2001:db8::\/64\") that is allowed to the pods matched by a NetworkPolicySpec's podSelector. 
The except entry describes CIDRs that should not be included within this rule.*\n\n      - **egress.to.ipBlock.cidr** (string), required\n\n        cidr is a string representing the IPBlock Valid examples are \"192.168.1.0\/24\" or \"2001:db8::\/64\"\n\n      - **egress.to.ipBlock.except** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        except is a slice of CIDRs that should not be included within an IPBlock Valid examples are \"192.168.1.0\/24\" or \"2001:db8::\/64\" Except values will be rejected if they are outside the cidr range\n\n    - **egress.to.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      namespaceSelector selects namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces.\n      \n      If podSelector is also set, then the NetworkPolicyPeer as a whole selects the pods matching podSelector in the namespaces selected by namespaceSelector. Otherwise it selects all pods in the namespaces selected by namespaceSelector.\n\n    - **egress.to.podSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      podSelector is a label selector which selects pods. This field follows standard label selector semantics; if present but empty, it selects all pods.\n      \n      If namespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the pods matching podSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the pods matching podSelector in the policy's own namespace.\n\n  - **egress.ports** ([]NetworkPolicyPort)\n\n    *Atomic: will be replaced during a merge*\n    \n    ports is a list of destination ports for outgoing traffic. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). 
If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list.\n\n    <a name=\"NetworkPolicyPort\"><\/a>\n    *NetworkPolicyPort describes a port to allow traffic on*\n\n    - **egress.ports.port** (IntOrString)\n\n      port represents the port on the given protocol. This can either be a numerical or named port on a pod. If this field is not provided, this matches all port names and numbers. If present, only traffic on the specified protocol AND port will be matched.\n\n      <a name=\"IntOrString\"><\/a>\n      *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n    - **egress.ports.endPort** (int32)\n\n      endPort indicates that the range of ports from port to endPort if set, inclusive, should be allowed by the policy. This field cannot be defined if the port field is not defined or if the port field is defined as a named (string) port. The endPort must be equal or greater than port.\n\n    - **egress.ports.protocol** (string)\n\n      protocol represents the protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP.\n\n\n\n\n\n## NetworkPolicyList {#NetworkPolicyList}\n\nNetworkPolicyList is a list of NetworkPolicy objects.\n\n<hr>\n\n- **apiVersion**: networking.k8s.io\/v1\n\n\n- **kind**: NetworkPolicyList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">NetworkPolicy<\/a>), required\n\n  items is a list of schema objects.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified NetworkPolicy\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/networkpolicies\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the NetworkPolicy\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">NetworkPolicy<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind NetworkPolicy\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/networkpolicies\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">NetworkPolicyList<\/a>): 
OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind NetworkPolicy\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1\/networkpolicies\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">NetworkPolicyList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a NetworkPolicy\n\n#### HTTP Request\n\nPOST \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/networkpolicies\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">NetworkPolicy<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">NetworkPolicy<\/a>): OK\n\n201 (<a href=\"\">NetworkPolicy<\/a>): Created\n\n202 (<a href=\"\">NetworkPolicy<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` 
replace the specified NetworkPolicy\n\n#### HTTP Request\n\nPUT \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/networkpolicies\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the NetworkPolicy\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">NetworkPolicy<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">NetworkPolicy<\/a>): OK\n\n201 (<a href=\"\">NetworkPolicy<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified NetworkPolicy\n\n#### HTTP Request\n\nPATCH \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/networkpolicies\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the NetworkPolicy\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">NetworkPolicy<\/a>): OK\n\n201 (<a href=\"\">NetworkPolicy<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a NetworkPolicy\n\n#### HTTP Request\n\nDELETE \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/networkpolicies\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the 
NetworkPolicy\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of NetworkPolicy\n\n#### HTTP Request\n\nDELETE \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/networkpolicies\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: 
Unauthorized\n","site":"kubernetes reference"}
{"questions":"kubernetes reference title PodDisruptionBudget import k8s io api policy v1 weight 5 PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods kind PodDisruptionBudget contenttype apireference apimetadata autogenerated true apiVersion policy v1","answers":"---\napi_metadata:\n  apiVersion: \"policy\/v1\"\n  import: \"k8s.io\/api\/policy\/v1\"\n  kind: \"PodDisruptionBudget\"\ncontent_type: \"api_reference\"\ndescription: \"PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods.\"\ntitle: \"PodDisruptionBudget\"\nweight: 5\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: policy\/v1`\n\n`import \"k8s.io\/api\/policy\/v1\"`\n\n\n## PodDisruptionBudget {#PodDisruptionBudget}\n\nPodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods\n\n<hr>\n\n- **apiVersion**: policy\/v1\n\n\n- **kind**: PodDisruptionBudget\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">PodDisruptionBudgetSpec<\/a>)\n\n  Specification of the desired behavior of the PodDisruptionBudget.\n\n- **status** (<a href=\"\">PodDisruptionBudgetStatus<\/a>)\n\n  Most recently observed status of the PodDisruptionBudget.\n\n\n\n\n\n## PodDisruptionBudgetSpec {#PodDisruptionBudgetSpec}\n\nPodDisruptionBudgetSpec is a description of a PodDisruptionBudget.\n\n<hr>\n\n- **maxUnavailable** (IntOrString)\n\n  An eviction is allowed if at most \"maxUnavailable\" pods selected by \"selector\" are unavailable after the eviction, i.e. even in absence of the evicted pod. For example, one can prevent all voluntary evictions by specifying 0. This is a mutually exclusive setting with \"minAvailable\".\n\n  <a name=\"IntOrString\"><\/a>\n  *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n- **minAvailable** (IntOrString)\n\n  An eviction is allowed if at least \"minAvailable\" pods selected by \"selector\" will still be available after the eviction, i.e. even in the absence of the evicted pod.  So for example you can prevent all voluntary evictions by specifying \"100%\".\n\n  <a name=\"IntOrString\"><\/a>\n  *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n- **selector** (<a href=\"\">LabelSelector<\/a>)\n\n  Label query over pods whose evictions are managed by the disruption budget. 
A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace.\n\n- **unhealthyPodEvictionPolicy** (string)\n\n  UnhealthyPodEvictionPolicy defines the criteria for when unhealthy pods should be considered for eviction. Current implementation considers healthy pods, as pods that have status.conditions item with type=\"Ready\",status=\"True\".\n  \n  Valid policies are IfHealthyBudget and AlwaysAllow. If no policy is specified, the default behavior will be used, which corresponds to the IfHealthyBudget policy.\n  \n  IfHealthyBudget policy means that running pods (status.phase=\"Running\"), but not yet healthy can be evicted only if the guarded application is not disrupted (status.currentHealthy is at least equal to status.desiredHealthy). Healthy pods will be subject to the PDB for eviction.\n  \n  AlwaysAllow policy means that all running pods (status.phase=\"Running\"), but not yet healthy are considered disrupted and can be evicted regardless of whether the criteria in a PDB are met. This means prospective running pods of a disrupted application might not get a chance to become healthy. Healthy pods will be subject to the PDB for eviction.\n  \n  Additional policies may be added in the future. Clients making eviction decisions should disallow eviction of unhealthy pods if they encounter an unrecognized policy in this field.\n  \n  This field is beta-level. The eviction API uses this field when the feature gate PDBUnhealthyPodEvictionPolicy is enabled (enabled by default).\n\n\n\n\n\n## PodDisruptionBudgetStatus {#PodDisruptionBudgetStatus}\n\nPodDisruptionBudgetStatus represents information about the status of a PodDisruptionBudget. 
Status may trail the actual state of a system.\n\n<hr>\n\n- **currentHealthy** (int32), required\n\n  current number of healthy pods\n\n- **desiredHealthy** (int32), required\n\n  minimum desired number of healthy pods\n\n- **disruptionsAllowed** (int32), required\n\n  Number of pod disruptions that are currently allowed.\n\n- **expectedPods** (int32), required\n\n  total number of pods counted by this disruption budget\n\n- **conditions** ([]Condition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  Conditions contain conditions for PDB. The disruption controller sets the DisruptionAllowed condition. The following are known values for the reason field (additional reasons could be added in the future): - SyncFailed: The controller encountered an error and wasn't able to compute\n                the number of allowed disruptions. Therefore no disruptions are\n                allowed and the status of the condition will be False.\n  - InsufficientPods: The number of pods are either at or below the number\n                      required by the PodDisruptionBudget. No disruptions are\n                      allowed and the status of the condition will be False.\n  - SufficientPods: There are more pods than required by the PodDisruptionBudget.\n                    The condition will be True, and the number of allowed\n                    disruptions are provided by the disruptionsAllowed property.\n\n  <a name=\"Condition\"><\/a>\n  *Condition contains details for one aspect of the current state of this API Resource.*\n\n  - **conditions.lastTransitionTime** (Time), required\n\n    lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed.  
If that is not known, then using the time when the API field changed is acceptable.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string), required\n\n    message is a human readable message indicating details about the transition. This may be an empty string.\n\n  - **conditions.reason** (string), required\n\n    reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty.\n\n  - **conditions.status** (string), required\n\n    status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    type of condition in CamelCase or in foo.example.com\/CamelCase.\n\n  - **conditions.observedGeneration** (int64)\n\n    observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.\n\n- **disruptedPods** (map[string]Time)\n\n  DisruptedPods contains information about pods whose eviction was processed by the API server eviction subresource handler but has not yet been observed by the PodDisruptionBudget controller. A pod will be in this map from the time when the API server processed the eviction request to the time when the pod is seen by PDB controller as having been marked for deletion (or after a timeout). The key in the map is the name of the pod and the value is the time when the API server processed the eviction request. 
If the deletion didn't occur and a pod is still there, it will be removed from the list automatically by the PodDisruptionBudget controller after some time. If everything goes smoothly, this map should be empty most of the time. A large number of entries in the map may indicate problems with pod deletions.\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **observedGeneration** (int64)\n\n  Most recent generation observed when updating this PDB status. DisruptionsAllowed and other status information is valid only if observedGeneration equals the PDB's object generation.\n\n\n\n\n\n## PodDisruptionBudgetList {#PodDisruptionBudgetList}\n\nPodDisruptionBudgetList is a collection of PodDisruptionBudgets.\n\n<hr>\n\n- **apiVersion**: policy\/v1\n\n\n- **kind**: PodDisruptionBudgetList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">PodDisruptionBudget<\/a>), required\n\n  Items is a list of PodDisruptionBudgets\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified PodDisruptionBudget\n\n#### HTTP Request\n\nGET \/apis\/policy\/v1\/namespaces\/{namespace}\/poddisruptionbudgets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodDisruptionBudget\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodDisruptionBudget<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified PodDisruptionBudget\n\n#### HTTP Request\n\nGET \/apis\/policy\/v1\/namespaces\/{namespace}\/poddisruptionbudgets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodDisruptionBudget\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodDisruptionBudget<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind PodDisruptionBudget\n\n#### HTTP Request\n\nGET \/apis\/policy\/v1\/namespaces\/{namespace}\/poddisruptionbudgets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** 
(*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodDisruptionBudgetList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind PodDisruptionBudget\n\n#### HTTP Request\n\nGET \/apis\/policy\/v1\/poddisruptionbudgets\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodDisruptionBudgetList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a PodDisruptionBudget\n\n#### HTTP Request\n\nPOST \/apis\/policy\/v1\/namespaces\/{namespace}\/poddisruptionbudgets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a 
href=\"\">PodDisruptionBudget<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodDisruptionBudget<\/a>): OK\n\n201 (<a href=\"\">PodDisruptionBudget<\/a>): Created\n\n202 (<a href=\"\">PodDisruptionBudget<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified PodDisruptionBudget\n\n#### HTTP Request\n\nPUT \/apis\/policy\/v1\/namespaces\/{namespace}\/poddisruptionbudgets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodDisruptionBudget\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">PodDisruptionBudget<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodDisruptionBudget<\/a>): OK\n\n201 (<a href=\"\">PodDisruptionBudget<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified PodDisruptionBudget\n\n#### HTTP Request\n\nPUT \/apis\/policy\/v1\/namespaces\/{namespace}\/poddisruptionbudgets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodDisruptionBudget\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">PodDisruptionBudget<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a 
href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodDisruptionBudget<\/a>): OK\n\n201 (<a href=\"\">PodDisruptionBudget<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified PodDisruptionBudget\n\n#### HTTP Request\n\nPATCH \/apis\/policy\/v1\/namespaces\/{namespace}\/poddisruptionbudgets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodDisruptionBudget\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodDisruptionBudget<\/a>): OK\n\n201 (<a href=\"\">PodDisruptionBudget<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified PodDisruptionBudget\n\n#### HTTP Request\n\nPATCH \/apis\/policy\/v1\/namespaces\/{namespace}\/poddisruptionbudgets\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodDisruptionBudget\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a 
href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PodDisruptionBudget<\/a>): OK\n\n201 (<a href=\"\">PodDisruptionBudget<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a PodDisruptionBudget\n\n#### HTTP Request\n\nDELETE \/apis\/policy\/v1\/namespaces\/{namespace}\/poddisruptionbudgets\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PodDisruptionBudget\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of PodDisruptionBudget\n\n#### HTTP Request\n\nDELETE \/apis\/policy\/v1\/namespaces\/{namespace}\/poddisruptionbudgets\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n 
 <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   policy v1    import   k8s io api policy v1    kind   PodDisruptionBudget  content type   api reference  description   PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods   title   PodDisruptionBudget  weight  5 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  policy v1    import  k8s io api policy v1        PodDisruptionBudget   PodDisruptionBudget   PodDisruptionBudget is an object to define the max disruption that can be caused to a collection of pods   hr       apiVersion    policy v1       kind    PodDisruptionBudget       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    PodDisruptionBudgetSpec  a      Specification of the desired behavior of the 
PodDisruptionBudget       status     a href    PodDisruptionBudgetStatus  a      Most recently observed status of the PodDisruptionBudget          PodDisruptionBudgetSpec   PodDisruptionBudgetSpec   PodDisruptionBudgetSpec is a description of a PodDisruptionBudget    hr       maxUnavailable    IntOrString     An eviction is allowed if at most  maxUnavailable  pods selected by  selector  are unavailable after the eviction  i e  even in absence of the evicted pod  For example  one can prevent all voluntary evictions by specifying 0  This is a mutually exclusive setting with  minAvailable       a name  IntOrString    a     IntOrString is a type that can hold an int32 or a string   When used in JSON or YAML marshalling and unmarshalling  it produces or consumes the inner type   This allows you to have  for example  a JSON field that can accept a name or number        minAvailable    IntOrString     An eviction is allowed if at least  minAvailable  pods selected by  selector  will still be available after the eviction  i e  even in the absence of the evicted pod   So for example you can prevent all voluntary evictions by specifying  100        a name  IntOrString    a     IntOrString is a type that can hold an int32 or a string   When used in JSON or YAML marshalling and unmarshalling  it produces or consumes the inner type   This allows you to have  for example  a JSON field that can accept a name or number        selector     a href    LabelSelector  a      Label query over pods whose evictions are managed by the disruption budget  A null selector will match no pods  while an empty      selector will select all pods within the namespace       unhealthyPodEvictionPolicy    string     UnhealthyPodEvictionPolicy defines the criteria for when unhealthy pods should be considered for eviction  Current implementation considers healthy pods  as pods that have status conditions item with type  Ready  status  True        Valid policies are IfHealthyBudget and AlwaysAllow  If no 
policy is specified  the default behavior will be used  which corresponds to the IfHealthyBudget policy       IfHealthyBudget policy means that running pods  status phase  Running    but not yet healthy can be evicted only if the guarded application is not disrupted  status currentHealthy is at least equal to status desiredHealthy   Healthy pods will be subject to the PDB for eviction       AlwaysAllow policy means that all running pods  status phase  Running    but not yet healthy are considered disrupted and can be evicted regardless of whether the criteria in a PDB is met  This means perspective running pods of a disrupted application might not get a chance to become healthy  Healthy pods will be subject to the PDB for eviction       Additional policies may be added in the future  Clients making eviction decisions should disallow eviction of unhealthy pods if they encounter an unrecognized policy in this field       This field is beta level  The eviction API uses this field when the feature gate PDBUnhealthyPodEvictionPolicy is enabled  enabled by default           PodDisruptionBudgetStatus   PodDisruptionBudgetStatus   PodDisruptionBudgetStatus represents information about the status of a PodDisruptionBudget  Status may trail the actual state of a system    hr       currentHealthy    int32   required    current number of healthy pods      desiredHealthy    int32   required    minimum desired number of healthy pods      disruptionsAllowed    int32   required    Number of pod disruptions that are currently allowed       expectedPods    int32   required    total number of pods counted by this disruption budget      conditions      Condition      Patch strategy  merge on key  type         Map  unique values on key type will be kept during a merge       Conditions contain conditions for PDB  The disruption controller sets the DisruptionAllowed condition  The following are known values for the reason field  additional reasons could be added in the future     
SyncFailed  The controller encountered an error and wasn t able to compute                 the number of allowed disruptions  Therefore no disruptions are                 allowed and the status of the condition will be False      InsufficientPods  The number of pods are either at or below the number                       required by the PodDisruptionBudget  No disruptions are                       allowed and the status of the condition will be False      SufficientPods  There are more pods than required by the PodDisruptionBudget                      The condition will be True  and the number of allowed                     disruptions are provided by the disruptionsAllowed property      a name  Condition    a     Condition contains details for one aspect of the current state of this API Resource          conditions lastTransitionTime    Time   required      lastTransitionTime is the last time the condition transitioned from one status to another  This should be when the underlying condition changed   If that is not known  then using the time when the API field changed is acceptable        a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions message    string   required      message is a human readable message indicating details about the transition  This may be an empty string         conditions reason    string   required      reason contains a programmatic identifier indicating the reason for the condition s last transition  Producers of specific condition types may define expected values and meanings for this field  and whether the values are considered a guaranteed API  The value should be a CamelCase string  This field may not be empty         conditions status    string   required      status of the condition  one of True  False  Unknown         conditions type    string   required      type of 
condition in CamelCase or in foo example com CamelCase         conditions observedGeneration    int64       observedGeneration represents the  metadata generation that the condition was set based upon  For instance  if  metadata generation is currently 12  but the  status conditions x  observedGeneration is 9  the condition is out of date with respect to the current state of the instance       disruptedPods    map string Time     DisruptedPods contains information about pods whose eviction was processed by the API server eviction subresource handler but has not yet been observed by the PodDisruptionBudget controller  A pod will be in this map from the time when the API server processed the eviction request to the time when the pod is seen by PDB controller as having been marked for deletion  or after a timeout   The key in the map is the name of the pod and the value is the time when the API server processed the eviction request  If the deletion didn t occur and a pod is still there it will be removed from the list automatically by PodDisruptionBudget controller after some time  If everything goes smooth this map should be empty for the most of the time  Large number of entries in the map may indicate problems with pod deletions      a name  Time    a     Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers        observedGeneration    int64     Most recent generation observed when updating this PDB status  DisruptionsAllowed and other status information is valid only if observedGeneration equals to PDB s object generation          PodDisruptionBudgetList   PodDisruptionBudgetList   PodDisruptionBudgetList is a collection of PodDisruptionBudgets    hr       apiVersion    policy v1       kind    PodDisruptionBudgetList       metadata     a href    ListMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel 
sig architecture api conventions md metadata      items       a href    PodDisruptionBudget  a    required    Items is a list of PodDisruptionBudgets         Operations   Operations      hr             get  read the specified PodDisruptionBudget       HTTP Request  GET  apis policy v1 namespaces  namespace  poddisruptionbudgets  name        Parameters       name     in path    string  required    name of the PodDisruptionBudget       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodDisruptionBudget  a    OK  401  Unauthorized        get  read status of the specified PodDisruptionBudget       HTTP Request  GET  apis policy v1 namespaces  namespace  poddisruptionbudgets  name  status       Parameters       name     in path    string  required    name of the PodDisruptionBudget       namespace     in path    string  required     a href    namespace  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodDisruptionBudget  a    OK  401  Unauthorized        list  list or watch objects of kind PodDisruptionBudget       HTTP Request  GET  apis policy v1 namespaces  namespace  poddisruptionbudgets       Parameters       namespace     in path    string  required     a href    namespace  a        allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    
boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    PodDisruptionBudgetList  a    OK  401  Unauthorized        list  list or watch objects of kind PodDisruptionBudget       HTTP Request  GET  apis policy v1 poddisruptionbudgets       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    PodDisruptionBudgetList  a    OK  401  Unauthorized        create  create a PodDisruptionBudget       HTTP Request  POST  apis policy v1 namespaces  namespace  poddisruptionbudgets       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    PodDisruptionBudget  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodDisruptionBudget  a    OK  201   a href    PodDisruptionBudget  a    Created  202   a href  
  PodDisruptionBudget  a    Accepted  401  Unauthorized        update  replace the specified PodDisruptionBudget       HTTP Request  PUT  apis policy v1 namespaces  namespace  poddisruptionbudgets  name        Parameters       name     in path    string  required    name of the PodDisruptionBudget       namespace     in path    string  required     a href    namespace  a        body     a href    PodDisruptionBudget  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodDisruptionBudget  a    OK  201   a href    PodDisruptionBudget  a    Created  401  Unauthorized        update  replace status of the specified PodDisruptionBudget       HTTP Request  PUT  apis policy v1 namespaces  namespace  poddisruptionbudgets  name  status       Parameters       name     in path    string  required    name of the PodDisruptionBudget       namespace     in path    string  required     a href    namespace  a        body     a href    PodDisruptionBudget  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodDisruptionBudget  a    OK  201   a href    PodDisruptionBudget  a    Created  401  Unauthorized        patch  partially update the specified PodDisruptionBudget       HTTP Request  PATCH  apis policy v1 namespaces  namespace  poddisruptionbudgets  name        Parameters       name     in path    string  required    name of the PodDisruptionBudget       namespace     in path    string  required     a href    namespace  a        body     a 
href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodDisruptionBudget  a    OK  201   a href    PodDisruptionBudget  a    Created  401  Unauthorized        patch  partially update status of the specified PodDisruptionBudget       HTTP Request  PATCH  apis policy v1 namespaces  namespace  poddisruptionbudgets  name  status       Parameters       name     in path    string  required    name of the PodDisruptionBudget       namespace     in path    string  required     a href    namespace  a        body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    PodDisruptionBudget  a    OK  201   a href    PodDisruptionBudget  a    Created  401  Unauthorized        delete  delete a PodDisruptionBudget       HTTP Request  DELETE  apis policy v1 namespaces  namespace  poddisruptionbudgets  name        Parameters       name     in path    string  required    name of the PodDisruptionBudget       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    
propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of PodDisruptionBudget       HTTP Request  DELETE  apis policy v1 namespaces  namespace  poddisruptionbudgets       Parameters       namespace     in path    string  required     a href    namespace  a        body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference import k8s io api flowcontrol v1 apiVersion flowcontrol apiserver k8s io v1 contenttype apireference title FlowSchema apimetadata FlowSchema defines the schema of a group of flows autogenerated true weight 1 kind FlowSchema","answers":"---\napi_metadata:\n  apiVersion: \"flowcontrol.apiserver.k8s.io\/v1\"\n  import: \"k8s.io\/api\/flowcontrol\/v1\"\n  kind: \"FlowSchema\"\ncontent_type: \"api_reference\"\ndescription: \"FlowSchema defines the schema of a group of flows.\"\ntitle: \"FlowSchema\"\nweight: 1\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: flowcontrol.apiserver.k8s.io\/v1`\n\n`import \"k8s.io\/api\/flowcontrol\/v1\"`\n\n\n## FlowSchema {#FlowSchema}\n\nFlowSchema defines the schema of a group of flows. Note that a flow is made up of a set of inbound API requests with similar attributes and is identified by a pair of strings: the name of the FlowSchema and a \"flow distinguisher\".\n\n<hr>\n\n- **apiVersion**: flowcontrol.apiserver.k8s.io\/v1\n\n\n- **kind**: FlowSchema\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  `metadata` is the standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">FlowSchemaSpec<\/a>)\n\n  `spec` is the specification of the desired behavior of a FlowSchema. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">FlowSchemaStatus<\/a>)\n\n  `status` is the current status of a FlowSchema. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## FlowSchemaSpec {#FlowSchemaSpec}\n\nFlowSchemaSpec describes what the FlowSchema's specification looks like.\n\n<hr>\n\n- **distinguisherMethod** (FlowDistinguisherMethod)\n\n  `distinguisherMethod` defines how to compute the flow distinguisher for requests that match this schema. `nil` specifies that the distinguisher is disabled and thus will always be the empty string.\n\n  <a name=\"FlowDistinguisherMethod\"><\/a>\n  *FlowDistinguisherMethod specifies the method of a flow distinguisher.*\n\n  - **distinguisherMethod.type** (string), required\n\n    `type` is the type of flow distinguisher method. The supported types are \"ByUser\" and \"ByNamespace\". Required.\n\n- **matchingPrecedence** (int32)\n\n  `matchingPrecedence` is used to choose among the FlowSchemas that match a given request. The chosen FlowSchema is among those with the numerically lowest (which we take to be logically highest) MatchingPrecedence.  Each MatchingPrecedence value must be in the range [1,10000]. Note that if the precedence is not specified, it will be set to 1000 by default.\n\n- **priorityLevelConfiguration** (PriorityLevelConfigurationReference), required\n\n  `priorityLevelConfiguration` should reference a PriorityLevelConfiguration in the cluster. If the reference cannot be resolved, the FlowSchema will be ignored and marked as invalid in its status. 
Required.\n\n  <a name=\"PriorityLevelConfigurationReference\"><\/a>\n  *PriorityLevelConfigurationReference contains information that points to the \"request-priority\" being used.*\n\n  - **priorityLevelConfiguration.name** (string), required\n\n    `name` is the name of the priority level configuration being referenced. Required.\n\n- **rules** ([]PolicyRulesWithSubjects)\n\n  *Atomic: will be replaced during a merge*\n  \n  `rules` describes which requests will match this flow schema. This FlowSchema matches a request if and only if at least one member of rules matches the request. If it is an empty slice, there will be no requests matching the FlowSchema.\n\n  <a name=\"PolicyRulesWithSubjects\"><\/a>\n  *PolicyRulesWithSubjects prescribes a test that applies to a request to an apiserver. The test considers the subject making the request, the verb being requested, and the resource to be acted upon. This PolicyRulesWithSubjects matches a request if and only if both (a) at least one member of subjects matches the request and (b) at least one member of resourceRules or nonResourceRules matches the request.*\n\n  - **rules.subjects** ([]Subject), required\n\n    *Atomic: will be replaced during a merge*\n    \n    `subjects` is the list of normal users, service accounts, or groups that this rule cares about. There must be at least one member in this slice. A slice that includes both the system:authenticated and system:unauthenticated user groups matches every request. Required.\n\n    <a name=\"Subject\"><\/a>\n    *Subject matches the originator of a request, as identified by the request authentication system. There are three ways of matching an originator: by user, group, or service account.*\n\n    - **rules.subjects.kind** (string), required\n\n      `kind` indicates which one of the other fields is non-empty. 
Required.\n\n    - **rules.subjects.group** (GroupSubject)\n\n      `group` matches based on user group name.\n\n      <a name=\"GroupSubject\"><\/a>\n      *GroupSubject holds detailed information for group-kind subject.*\n\n      - **rules.subjects.group.name** (string), required\n\n        `name` is the user group that matches, or \"*\" to match all user groups. See https:\/\/github.com\/kubernetes\/apiserver\/blob\/master\/pkg\/authentication\/user\/user.go for some well-known group names. Required.\n\n    - **rules.subjects.serviceAccount** (ServiceAccountSubject)\n\n      `serviceAccount` matches ServiceAccounts.\n\n      <a name=\"ServiceAccountSubject\"><\/a>\n      *ServiceAccountSubject holds detailed information for service-account-kind subject.*\n\n      - **rules.subjects.serviceAccount.name** (string), required\n\n        `name` is the name of matching ServiceAccount objects, or \"*\" to match regardless of name. Required.\n\n      - **rules.subjects.serviceAccount.namespace** (string), required\n\n        `namespace` is the namespace of matching ServiceAccount objects. Required.\n\n    - **rules.subjects.user** (UserSubject)\n\n      `user` matches based on username.\n\n      <a name=\"UserSubject\"><\/a>\n      *UserSubject holds detailed information for user-kind subject.*\n\n      - **rules.subjects.user.name** (string), required\n\n        `name` is the username that matches, or \"*\" to match all usernames. Required.\n\n  - **rules.nonResourceRules** ([]NonResourcePolicyRule)\n\n    *Atomic: will be replaced during a merge*\n    \n    `nonResourceRules` is a list of NonResourcePolicyRules that identify matching requests according to their verb and the target non-resource URL.\n\n    <a name=\"NonResourcePolicyRule\"><\/a>\n    *NonResourcePolicyRule is a predicate that matches non-resource requests according to their verb and the target non-resource URL. 
A NonResourcePolicyRule matches a request if and only if both (a) at least one member of verbs matches the request and (b) at least one member of nonResourceURLs matches the request.*\n\n    - **rules.nonResourceRules.nonResourceURLs** ([]string), required\n\n      *Set: unique values will be kept during a merge*\n      \n      `nonResourceURLs` is a set of URL prefixes that a user should have access to and may not be empty. For example:\n        - \"\/healthz\" is legal\n        - \"\/hea*\" is illegal\n        - \"\/hea\" is legal but matches nothing\n        - \"\/hea\/*\" also matches nothing\n        - \"\/healthz\/*\" matches all per-component health checks.\n      \"*\" matches all non-resource URLs. If it is present, it must be the only entry. Required.\n\n    - **rules.nonResourceRules.verbs** ([]string), required\n\n      *Set: unique values will be kept during a merge*\n      \n      `verbs` is a list of matching verbs and may not be empty. \"*\" matches all verbs. If it is present, it must be the only entry. Required.\n\n  - **rules.resourceRules** ([]ResourcePolicyRule)\n\n    *Atomic: will be replaced during a merge*\n    \n    `resourceRules` is a slice of ResourcePolicyRules that identify matching requests according to their verb and the target resource. At least one of `resourceRules` and `nonResourceRules` has to be non-empty.\n\n    <a name=\"ResourcePolicyRule\"><\/a>\n    *ResourcePolicyRule is a predicate that matches some resource requests, testing the request's verb and the target resource. 
A ResourcePolicyRule matches a resource request if and only if: (a) at least one member of verbs matches the request, (b) at least one member of apiGroups matches the request, (c) at least one member of resources matches the request, and (d) either (d1) the request does not specify a namespace (i.e., `Namespace==\"\"`) and clusterScope is true or (d2) the request specifies a namespace and at least one member of namespaces matches the request's namespace.*\n\n    - **rules.resourceRules.apiGroups** ([]string), required\n\n      *Set: unique values will be kept during a merge*\n      \n      `apiGroups` is a list of matching API groups and may not be empty. \"*\" matches all API groups and, if present, must be the only entry. Required.\n\n    - **rules.resourceRules.resources** ([]string), required\n\n      *Set: unique values will be kept during a merge*\n      \n      `resources` is a list of matching resources (i.e., lowercase and plural) with, if desired, a subresource.  For example, [ \"services\", \"nodes\/status\" ].  This list may not be empty. \"*\" matches all resources and, if present, must be the only entry. Required.\n\n    - **rules.resourceRules.verbs** ([]string), required\n\n      *Set: unique values will be kept during a merge*\n      \n      `verbs` is a list of matching verbs and may not be empty. \"*\" matches all verbs and, if present, must be the only entry. Required.\n\n    - **rules.resourceRules.clusterScope** (boolean)\n\n      `clusterScope` indicates whether to match requests that do not specify a namespace (which happens either because the resource is not namespaced or the request targets all namespaces). If this field is omitted or false then the `namespaces` field must contain a non-empty list.\n\n    - **rules.resourceRules.namespaces** ([]string)\n\n      *Set: unique values will be kept during a merge*\n      \n      `namespaces` is a list of target namespaces that restricts matches.  
A request that specifies a target namespace matches only if either (a) this list contains that target namespace or (b) this list contains \"*\".  Note that \"*\" matches any specified namespace but does not match a request that _does not specify_ a namespace (see the `clusterScope` field for that). This list may be empty, but only if `clusterScope` is true.\n\n\n\n\n\n## FlowSchemaStatus {#FlowSchemaStatus}\n\nFlowSchemaStatus represents the current state of a FlowSchema.\n\n<hr>\n\n- **conditions** ([]FlowSchemaCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  `conditions` is a list of the current states of FlowSchema.\n\n  <a name=\"FlowSchemaCondition\"><\/a>\n  *FlowSchemaCondition describes conditions for a FlowSchema.*\n\n  - **conditions.lastTransitionTime** (Time)\n\n    `lastTransitionTime` is the last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    `message` is a human-readable message indicating details about last transition.\n\n  - **conditions.reason** (string)\n\n    `reason` is a unique, one-word, CamelCase reason for the condition's last transition.\n\n  - **conditions.status** (string)\n\n    `status` is the status of the condition. Can be True, False, Unknown. Required.\n\n  - **conditions.type** (string)\n\n    `type` is the type of the condition. Required.\n\n\n\n\n\n## FlowSchemaList {#FlowSchemaList}\n\nFlowSchemaList is a list of FlowSchema objects.\n\n<hr>\n\n- **apiVersion**: flowcontrol.apiserver.k8s.io\/v1\n\n\n- **kind**: FlowSchemaList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  `metadata` is the standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">FlowSchema<\/a>), required\n\n  `items` is a list of FlowSchemas.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified FlowSchema\n\n#### HTTP Request\n\nGET \/apis\/flowcontrol.apiserver.k8s.io\/v1\/flowschemas\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the FlowSchema\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">FlowSchema<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified FlowSchema\n\n#### HTTP Request\n\nGET \/apis\/flowcontrol.apiserver.k8s.io\/v1\/flowschemas\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the FlowSchema\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">FlowSchema<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind FlowSchema\n\n#### HTTP Request\n\nGET \/apis\/flowcontrol.apiserver.k8s.io\/v1\/flowschemas\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): 
integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">FlowSchemaList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a FlowSchema\n\n#### HTTP Request\n\nPOST \/apis\/flowcontrol.apiserver.k8s.io\/v1\/flowschemas\n\n#### Parameters\n\n\n- **body**: <a href=\"\">FlowSchema<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">FlowSchema<\/a>): OK\n\n201 (<a href=\"\">FlowSchema<\/a>): Created\n\n202 (<a href=\"\">FlowSchema<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified FlowSchema\n\n#### HTTP Request\n\nPUT \/apis\/flowcontrol.apiserver.k8s.io\/v1\/flowschemas\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the FlowSchema\n\n\n- **body**: <a href=\"\">FlowSchema<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">FlowSchema<\/a>): OK\n\n201 (<a href=\"\">FlowSchema<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified FlowSchema\n\n#### HTTP Request\n\nPUT \/apis\/flowcontrol.apiserver.k8s.io\/v1\/flowschemas\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the FlowSchema\n\n\n- **body**: <a href=\"\">FlowSchema<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a 
href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">FlowSchema<\/a>): OK\n\n201 (<a href=\"\">FlowSchema<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified FlowSchema\n\n#### HTTP Request\n\nPATCH \/apis\/flowcontrol.apiserver.k8s.io\/v1\/flowschemas\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the FlowSchema\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">FlowSchema<\/a>): OK\n\n201 (<a href=\"\">FlowSchema<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified FlowSchema\n\n#### HTTP Request\n\nPATCH \/apis\/flowcontrol.apiserver.k8s.io\/v1\/flowschemas\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the FlowSchema\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">FlowSchema<\/a>): OK\n\n201 (<a href=\"\">FlowSchema<\/a>): 
Created\n\n401: Unauthorized\n\n\n### `delete` delete a FlowSchema\n\n#### HTTP Request\n\nDELETE \/apis\/flowcontrol.apiserver.k8s.io\/v1\/flowschemas\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the FlowSchema\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of FlowSchema\n\n#### HTTP Request\n\nDELETE \/apis\/flowcontrol.apiserver.k8s.io\/v1\/flowschemas\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 
(<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   flowcontrol apiserver k8s io v1    import   k8s io api flowcontrol v1    kind   FlowSchema  content type   api reference  description   FlowSchema defines the schema of a group of flows   title   FlowSchema  weight  1 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  flowcontrol apiserver k8s io v1    import  k8s io api flowcontrol v1        FlowSchema   FlowSchema   FlowSchema defines the schema of a group of flows  Note that a flow is made up of a set of inbound API requests with similar attributes and is identified by a pair of strings  the name of the FlowSchema and a  flow distinguisher     hr       apiVersion    flowcontrol apiserver k8s io v1       kind    FlowSchema       metadata     a href    ObjectMeta  a       metadata  is the standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    FlowSchemaSpec  a       spec  is the specification of the desired behavior of a FlowSchema  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status      status     a href    FlowSchemaStatus  a       status  is the current status of a FlowSchema  More info  https   git k8s io community contributors devel sig architecture api 
conventions md spec and status         FlowSchemaSpec   FlowSchemaSpec   FlowSchemaSpec describes how the FlowSchema s specification looks like    hr       distinguisherMethod    FlowDistinguisherMethod      distinguisherMethod  defines how to compute the flow distinguisher for requests that match this schema   nil  specifies that the distinguisher is disabled and thus will always be the empty string      a name  FlowDistinguisherMethod    a     FlowDistinguisherMethod specifies the method of a flow distinguisher          distinguisherMethod type    string   required       type  is the type of flow distinguisher method The supported types are  ByUser  and  ByNamespace   Required       matchingPrecedence    int32      matchingPrecedence  is used to choose among the FlowSchemas that match a given request  The chosen FlowSchema is among those with the numerically lowest  which we take to be logically highest  MatchingPrecedence   Each MatchingPrecedence value must be ranged in  1 10000   Note that if the precedence is not specified  it will be set to 1000 as default       priorityLevelConfiguration    PriorityLevelConfigurationReference   required     priorityLevelConfiguration  should reference a PriorityLevelConfiguration in the cluster  If the reference cannot be resolved  the FlowSchema will be ignored and marked as invalid in its status  Required      a name  PriorityLevelConfigurationReference    a     PriorityLevelConfigurationReference contains information that points to the  request priority  being used          priorityLevelConfiguration name    string   required       name  is the name of the priority level configuration being referenced Required       rules      PolicyRulesWithSubjects      Atomic  will be replaced during a merge        rules  describes which requests will match this flow schema  This FlowSchema matches a request if and only if at least one member of rules matches the request  if it is an empty slice  there will be no requests matching the 
FlowSchema      a name  PolicyRulesWithSubjects    a     PolicyRulesWithSubjects prescribes a test that applies to a request to an apiserver  The test considers the subject making the request  the verb being requested  and the resource to be acted upon  This PolicyRulesWithSubjects matches a request if and only if both  a  at least one member of subjects matches the request and  b  at least one member of resourceRules or nonResourceRules matches the request          rules subjects      Subject   required       Atomic  will be replaced during a merge           subjects is the list of normal user  serviceaccount  or group that this rule cares about  There must be at least one member in this slice  A slice that includes both the system authenticated and system unauthenticated user groups matches every request  Required        a name  Subject    a       Subject matches the originator of a request  as identified by the request authentication system  There are three ways of matching an originator  by user  group  or service account            rules subjects kind    string   required         kind  indicates which one of the other fields is non empty  Required          rules subjects group    GroupSubject          group  matches based on user group name          a name  GroupSubject    a         GroupSubject holds detailed information for group kind subject              rules subjects group name    string   required          name is the user group that matches  or     to match all user groups  See https   github com kubernetes apiserver blob master pkg authentication user user go for some well known group names  Required           rules subjects serviceAccount    ServiceAccountSubject          serviceAccount  matches ServiceAccounts          a name  ServiceAccountSubject    a         ServiceAccountSubject holds detailed information for service account kind subject              rules subjects serviceAccount name    string   required           name  is the name of matching 
ServiceAccount objects  or     to match regardless of name  Required             rules subjects serviceAccount namespace    string   required           namespace  is the namespace of matching ServiceAccount objects  Required           rules subjects user    UserSubject          user  matches based on username          a name  UserSubject    a         UserSubject holds detailed information for user kind subject              rules subjects user name    string   required           name  is the username that matches  or     to match all usernames  Required         rules nonResourceRules      NonResourcePolicyRule        Atomic  will be replaced during a merge            nonResourceRules  is a list of NonResourcePolicyRules that identify matching requests according to their verb and the target non resource URL        a name  NonResourcePolicyRule    a       NonResourcePolicyRule is a predicate that matches non resource requests according to their verb and the target non resource URL  A NonResourcePolicyRule matches a request if and only if both  a  at least one member of verbs matches the request and  b  at least one member of nonResourceURLs matches the request            rules nonResourceRules nonResourceURLs      string   required         Set  unique values will be kept during a merge                nonResourceURLs  is a set of url prefixes that a user should have access to and may not be empty  For example              healthz  is legal             hea   is illegal             hea  is legal but matches nothing             hea    also matches nothing             healthz    matches all per component health checks            matches all non resource urls  if it is present  it must be the only entry  Required           rules nonResourceRules verbs      string   required         Set  unique values will be kept during a merge                verbs  is a list of matching verbs and may not be empty      matches all verbs  If it is present  it must be the only entry  Required 
        rules resourceRules      ResourcePolicyRule        Atomic  will be replaced during a merge            resourceRules  is a slice of ResourcePolicyRules that identify matching requests according to their verb and the target resource  At least one of  resourceRules  and  nonResourceRules  has to be non empty        a name  ResourcePolicyRule    a       ResourcePolicyRule is a predicate that matches some resource requests  testing the request s verb and the target resource  A ResourcePolicyRule matches a resource request if and only if   a  at least one member of verbs matches the request   b  at least one member of apiGroups matches the request   c  at least one member of resources matches the request  and  d  either  d1  the request does not specify a namespace  i e    Namespace       and clusterScope is true or  d2  the request specifies a namespace and least one member of namespaces matches the request s namespace            rules resourceRules apiGroups      string   required         Set  unique values will be kept during a merge                apiGroups  is a list of matching API groups and may not be empty      matches all API groups and  if present  must be the only entry  Required           rules resourceRules resources      string   required         Set  unique values will be kept during a merge                resources  is a list of matching resources  i e   lowercase and plural  with  if desired  subresource   For example     services    nodes status      This list may not be empty      matches all resources and  if present  must be the only entry  Required           rules resourceRules verbs      string   required         Set  unique values will be kept during a merge                verbs  is a list of matching verbs and may not be empty      matches all verbs and  if present  must be the only entry  Required           rules resourceRules clusterScope    boolean          clusterScope  indicates whether to match requests that do not specify a 
namespace  which happens either because the resource is not namespaced or the request targets all namespaces   If this field is omitted or false then the  namespaces  field must contain a non empty list           rules resourceRules namespaces      string          Set  unique values will be kept during a merge                namespaces  is a list of target namespaces that restricts matches   A request that specifies a target namespace matches only if either  a  this list contains that target namespace or  b  this list contains       Note that     matches any specified namespace but does not match a request that  does not specify  a namespace  see the  clusterScope  field for that   This list may be empty  but only if  clusterScope  is true          FlowSchemaStatus   FlowSchemaStatus   FlowSchemaStatus represents the current state of a FlowSchema    hr       conditions      FlowSchemaCondition      Patch strategy  merge on key  type         Map  unique values on key type will be kept during a merge        conditions  is a list of the current states of FlowSchema      a name  FlowSchemaCondition    a     FlowSchemaCondition describes conditions for a FlowSchema          conditions lastTransitionTime    Time        lastTransitionTime  is the last time the condition transitioned from one status to another        a name  Time    a       Time is a wrapper around time Time which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers          conditions message    string        message  is a human readable message indicating details about last transition         conditions reason    string        reason  is a unique  one word  CamelCase reason for the condition s last transition         conditions status    string        status  is the status of the condition  Can be True  False  Unknown  Required         conditions type    string        type  is the type of the condition  Required          
FlowSchemaList   FlowSchemaList   FlowSchemaList is a list of FlowSchema objects    hr       apiVersion    flowcontrol apiserver k8s io v1       kind    FlowSchemaList       metadata     a href    ListMeta  a       metadata  is the standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      items       a href    FlowSchema  a    required     items  is a list of FlowSchemas          Operations   Operations      hr             get  read the specified FlowSchema       HTTP Request  GET  apis flowcontrol apiserver k8s io v1 flowschemas  name        Parameters       name     in path    string  required    name of the FlowSchema       pretty     in query    string     a href    pretty  a          Response   200   a href    FlowSchema  a    OK  401  Unauthorized        get  read status of the specified FlowSchema       HTTP Request  GET  apis flowcontrol apiserver k8s io v1 flowschemas  name  status       Parameters       name     in path    string  required    name of the FlowSchema       pretty     in query    string     a href    pretty  a          Response   200   a href    FlowSchema  a    OK  401  Unauthorized        list  list or watch objects of kind FlowSchema       HTTP Request  GET  apis flowcontrol apiserver k8s io v1 flowschemas       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    
sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    FlowSchemaList  a    OK  401  Unauthorized        create  create a FlowSchema       HTTP Request  POST  apis flowcontrol apiserver k8s io v1 flowschemas       Parameters       body     a href    FlowSchema  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    FlowSchema  a    OK  201   a href    FlowSchema  a    Created  202   a href    FlowSchema  a    Accepted  401  Unauthorized        update  replace the specified FlowSchema       HTTP Request  PUT  apis flowcontrol apiserver k8s io v1 flowschemas  name        Parameters       name     in path    string  required    name of the FlowSchema       body     a href    FlowSchema  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    FlowSchema  a    OK  201   a href    FlowSchema  a    Created  401  Unauthorized        update  replace status of the specified FlowSchema       HTTP Request  PUT  apis flowcontrol apiserver k8s io v1 flowschemas  name  status       Parameters       name     in path    string  required    name of the FlowSchema       body     a href    FlowSchema  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    
fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    FlowSchema  a    OK  201   a href    FlowSchema  a    Created  401  Unauthorized        patch  partially update the specified FlowSchema       HTTP Request  PATCH  apis flowcontrol apiserver k8s io v1 flowschemas  name        Parameters       name     in path    string  required    name of the FlowSchema       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    FlowSchema  a    OK  201   a href    FlowSchema  a    Created  401  Unauthorized        patch  partially update status of the specified FlowSchema       HTTP Request  PATCH  apis flowcontrol apiserver k8s io v1 flowschemas  name  status       Parameters       name     in path    string  required    name of the FlowSchema       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    FlowSchema  a    OK  201   a href    FlowSchema  a    Created  401  Unauthorized        delete  delete a FlowSchema       HTTP Request  DELETE  apis flowcontrol apiserver k8s io v1 flowschemas  name        Parameters       name     in path    string  required    name of the FlowSchema       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds 
    in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of FlowSchema       HTTP Request  DELETE  apis flowcontrol apiserver k8s io v1 flowschemas       Parameters       body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
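The NominalCL / LendableCL / BorrowingCL formulas that the PriorityLevelConfiguration record below documents can be sketched in a few lines. This is a minimal illustration of the documented arithmetic only, not part of any Kubernetes API: the function names and the ServerCL value of 600 are our own assumptions for the example.

```python
import math

def nominal_cl(server_cl, ncs, all_ncs):
    # NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs )
    return math.ceil(server_cl * ncs / sum(all_ncs))

def lendable_cl(nominal, lendable_percent):
    # LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 )
    return round(nominal * lendable_percent / 100.0)

def borrowing_cl(nominal, borrowing_limit_percent):
    # BorrowingCL(i) = round( NominalCL(i) * borrowingLimitPercent(i)/100.0 )
    # borrowingLimitPercent may exceed 100; a nil (None) value means the
    # borrowing limit is effectively infinite.
    if borrowing_limit_percent is None:
        return float("inf")
    return round(nominal * borrowing_limit_percent / 100.0)

# Example: three Limited levels with the default NCS of 30 each and a
# hypothetical server concurrency limit of 600 seats.
levels = [30, 30, 30]
print(nominal_cl(600, 30, levels))  # ceil(600 * 30 / 90) = 200 seats per level
```

Note that Python's built-in `round()` uses banker's rounding, which may differ from the apiserver's rounding exactly at `.5` boundaries; this sketch is for intuition about the proportions, not a bit-exact reimplementation.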
{"questions":"kubernetes reference title PriorityLevelConfiguration weight 6 import k8s io api flowcontrol v1 apiVersion flowcontrol apiserver k8s io v1 contenttype apireference kind PriorityLevelConfiguration PriorityLevelConfiguration represents the configuration of a priority level apimetadata autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"flowcontrol.apiserver.k8s.io\/v1\"\n  import: \"k8s.io\/api\/flowcontrol\/v1\"\n  kind: \"PriorityLevelConfiguration\"\ncontent_type: \"api_reference\"\ndescription: \"PriorityLevelConfiguration represents the configuration of a priority level.\"\ntitle: \"PriorityLevelConfiguration\"\nweight: 6\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: flowcontrol.apiserver.k8s.io\/v1`\n\n`import \"k8s.io\/api\/flowcontrol\/v1\"`\n\n\n## PriorityLevelConfiguration {#PriorityLevelConfiguration}\n\nPriorityLevelConfiguration represents the configuration of a priority level.\n\n<hr>\n\n- **apiVersion**: flowcontrol.apiserver.k8s.io\/v1\n\n\n- **kind**: PriorityLevelConfiguration\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  `metadata` is the standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">PriorityLevelConfigurationSpec<\/a>)\n\n  `spec` is the specification of the desired behavior of a \"request-priority\". More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">PriorityLevelConfigurationStatus<\/a>)\n\n  `status` is the current status of a \"request-priority\". More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## PriorityLevelConfigurationSpec {#PriorityLevelConfigurationSpec}\n\nPriorityLevelConfigurationSpec specifies the configuration of a priority level.\n\n<hr>\n\n- **exempt** (ExemptPriorityLevelConfiguration)\n\n  `exempt` specifies how requests are handled for an exempt priority level. This field MUST be empty if `type` is `\"Limited\"`. This field MAY be non-empty if `type` is `\"Exempt\"`. If empty and `type` is `\"Exempt\"` then the default values for `ExemptPriorityLevelConfiguration` apply.\n\n  <a name=\"ExemptPriorityLevelConfiguration\"><\/a>\n  *ExemptPriorityLevelConfiguration describes the configurable aspects of the handling of exempt requests. In the mandatory exempt configuration object the values in the fields here can be modified by authorized users, unlike the rest of the `spec`.*\n\n  - **exempt.lendablePercent** (int32)\n\n    `lendablePercent` prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels.  The value of this field must be between 0 and 100, inclusive, and it defaults to 0. 
The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows.\n    \n    LendableCL(i) = round( NominalCL(i) * lendablePercent(i)\/100.0 )\n\n  - **exempt.nominalConcurrencyShares** (int32)\n\n    `nominalConcurrencyShares` (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats nominally reserved for this priority level. This DOES NOT limit the dispatching from this priority level but affects the other priority levels through the borrowing mechanism. The server's concurrency limit (ServerCL) is divided among all the priority levels in proportion to their NCS values:\n    \n    NominalCL(i) = ceil( ServerCL * NCS(i) \/ sum_ncs ), where sum_ncs = sum[priority level k] NCS(k)\n    \n    Bigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. This field has a default value of zero.\n\n- **limited** (LimitedPriorityLevelConfiguration)\n\n  `limited` specifies how requests are handled for a Limited priority level. This field must be non-empty if and only if `type` is `\"Limited\"`.\n\n  <a name=\"LimitedPriorityLevelConfiguration\"><\/a>\n  *LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues:\n    - How are requests for this priority level limited?\n    - What should be done with requests that exceed the limit?*\n\n  - **limited.borrowingLimitPercent** (int32)\n\n    `borrowingLimitPercent`, if present, configures a limit on how many seats this priority level can borrow from other priority levels. The limit is known as this level's BorrowingConcurrencyLimit (BorrowingCL) and is a limit on the total number of seats that this level may borrow at any one time. This field holds the ratio of that limit to the level's nominal concurrency limit. 
When this field is non-nil, it must hold a non-negative integer and the limit is calculated as follows.\n    \n    BorrowingCL(i) = round( NominalCL(i) * borrowingLimitPercent(i)\/100.0 )\n    \n    The value of this field can be more than 100, implying that this priority level can borrow a number of seats that is greater than its own nominal concurrency limit (NominalCL). When this field is left `nil`, the limit is effectively infinite.\n\n  - **limited.lendablePercent** (int32)\n\n    `lendablePercent` prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. The value of this field must be between 0 and 100, inclusive, and it defaults to 0. The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows.\n    \n    LendableCL(i) = round( NominalCL(i) * lendablePercent(i)\/100.0 )\n\n  - **limited.limitResponse** (LimitResponse)\n\n    `limitResponse` indicates what to do with requests that can not be executed right now\n\n    <a name=\"LimitResponse\"><\/a>\n    *LimitResponse defines how to handle requests that can not be executed right now.*\n\n    - **limited.limitResponse.type** (string), required\n\n      `type` is \"Queue\" or \"Reject\". \"Queue\" means that requests that can not be executed upon arrival are held in a queue until they can be executed or a queuing limit is reached. \"Reject\" means that requests that can not be executed upon arrival are rejected. Required.\n\n    - **limited.limitResponse.queuing** (QueuingConfiguration)\n\n      `queuing` holds the configuration parameters for queuing. 
This field may be non-empty only if `type` is `\"Queue\"`.\n\n      <a name=\"QueuingConfiguration\"><\/a>\n      *QueuingConfiguration holds the configuration parameters for queuing*\n\n      - **limited.limitResponse.queuing.handSize** (int32)\n\n        `handSize` is a small positive number that configures the shuffle sharding of requests into queues.  When enqueuing a request at this priority level the request's flow identifier (a string pair) is hashed and the hash value is used to shuffle the list of queues and deal a hand of the size specified here.  The request is put into one of the shortest queues in that hand. `handSize` must be no larger than `queues`, and should be significantly smaller (so that a few heavy flows do not saturate most of the queues).  See the user-facing documentation for more extensive guidance on setting this field.  This field has a default value of 8.\n\n      - **limited.limitResponse.queuing.queueLengthLimit** (int32)\n\n        `queueLengthLimit` is the maximum number of requests allowed to be waiting in a given queue of this priority level at a time; excess requests are rejected.  This value must be positive.  If not specified, it will be defaulted to 50.\n\n      - **limited.limitResponse.queuing.queues** (int32)\n\n        `queues` is the number of queues for this priority level. The queues exist independently at each apiserver. The value must be positive.  Setting it to 1 effectively precludes shuffle sharding and thus makes the distinguisher method of associated flow schemas irrelevant.  This field has a default value of 64.\n\n  - **limited.nominalConcurrencyShares** (int32)\n\n    `nominalConcurrencyShares` (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats available at this priority level. 
This is used both for requests dispatched from this priority level as well as requests dispatched from other priority levels borrowing seats from this level. The server's concurrency limit (ServerCL) is divided among the Limited priority levels in proportion to their NCS values:\n    \n    NominalCL(i) = ceil( ServerCL * NCS(i) \/ sum_ncs ), where sum_ncs = sum[priority level k] NCS(k)\n    \n    Bigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level.\n    \n    If not specified, this field defaults to a value of 30.\n    \n    Setting this field to zero supports the construction of a \"jail\" for this priority level that is used to hold some request(s).\n\n- **type** (string), required\n\n  `type` indicates whether this priority level is subject to limitation on request execution.  A value of `\"Exempt\"` means that requests of this priority level are not subject to a limit (and thus are never queued) and do not detract from the capacity made available to other priority levels.  A value of `\"Limited\"` means that (a) requests of this priority level _are_ subject to limits and (b) some of the server's limited capacity is made available exclusively to this priority level. 
Required.\n\n\n\n\n\n## PriorityLevelConfigurationStatus {#PriorityLevelConfigurationStatus}\n\nPriorityLevelConfigurationStatus represents the current state of a \"request-priority\".\n\n<hr>\n\n- **conditions** ([]PriorityLevelConfigurationCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  `conditions` is the current state of \"request-priority\".\n\n  <a name=\"PriorityLevelConfigurationCondition\"><\/a>\n  *PriorityLevelConfigurationCondition defines the condition of priority level.*\n\n  - **conditions.lastTransitionTime** (Time)\n\n    `lastTransitionTime` is the last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    `message` is a human-readable message indicating details about last transition.\n\n  - **conditions.reason** (string)\n\n    `reason` is a unique, one-word, CamelCase reason for the condition's last transition.\n\n  - **conditions.status** (string)\n\n    `status` is the status of the condition. Can be True, False, Unknown. Required.\n\n  - **conditions.type** (string)\n\n    `type` is the type of the condition. Required.\n\n\n\n\n\n## PriorityLevelConfigurationList {#PriorityLevelConfigurationList}\n\nPriorityLevelConfigurationList is a list of PriorityLevelConfiguration objects.\n\n<hr>\n\n- **apiVersion**: flowcontrol.apiserver.k8s.io\/v1\n\n\n- **kind**: PriorityLevelConfigurationList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  `metadata` is the standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">PriorityLevelConfiguration<\/a>), required\n\n  `items` is a list of request-priorities.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nGET \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nGET \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind PriorityLevelConfiguration\n\n#### HTTP Request\n\nGET \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): 
string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityLevelConfigurationList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a PriorityLevelConfiguration\n\n#### HTTP Request\n\nPOST \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\n\n#### Parameters\n\n\n- **body**: <a href=\"\">PriorityLevelConfiguration<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n201 (<a href=\"\">PriorityLevelConfiguration<\/a>): Created\n\n202 (<a href=\"\">PriorityLevelConfiguration<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nPUT \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n\n- **body**: <a href=\"\">PriorityLevelConfiguration<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n201 (<a href=\"\">PriorityLevelConfiguration<\/a>): 
Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nPUT \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n\n- **body**: <a href=\"\">PriorityLevelConfiguration<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n201 (<a href=\"\">PriorityLevelConfiguration<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nPATCH \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n201 (<a href=\"\">PriorityLevelConfiguration<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nPATCH \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\/status\n\n#### Parameters\n\n\n- 
**name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n201 (<a href=\"\">PriorityLevelConfiguration<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a PriorityLevelConfiguration\n\n#### HTTP Request\n\nDELETE \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of PriorityLevelConfiguration\n\n#### HTTP Request\n\nDELETE \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a 
href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion   flowcontrol apiserver k8s io v1    import   k8s io api flowcontrol v1    kind   PriorityLevelConfiguration  content type   api reference  description   PriorityLevelConfiguration represents the configuration of a priority level   title   PriorityLevelConfiguration  weight  6 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  flowcontrol apiserver k8s io v1    import  k8s io api flowcontrol v1        PriorityLevelConfiguration   PriorityLevelConfiguration   PriorityLevelConfiguration represents the configuration of a priority level    hr       apiVersion  
  flowcontrol apiserver k8s io v1       kind    PriorityLevelConfiguration       metadata     a href    ObjectMeta  a       metadata  is the standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    PriorityLevelConfigurationSpec  a       spec  is the specification of the desired behavior of a  request priority   More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status      status     a href    PriorityLevelConfigurationStatus  a       status  is the current status of a  request priority   More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status         PriorityLevelConfigurationSpec   PriorityLevelConfigurationSpec   PriorityLevelConfigurationSpec specifies the configuration of a priority level    hr       exempt    ExemptPriorityLevelConfiguration      exempt  specifies how requests are handled for an exempt priority level  This field MUST be empty if  type  is   Limited    This field MAY be non empty if  type  is   Exempt    If empty and  type  is   Exempt   then the default values for  ExemptPriorityLevelConfiguration  apply      a name  ExemptPriorityLevelConfiguration    a     ExemptPriorityLevelConfiguration describes the configurable aspects of the handling of exempt requests  In the mandatory exempt configuration object the values in the fields here can be modified by authorized users  unlike the rest of the  spec           exempt lendablePercent    int32        lendablePercent  prescribes the fraction of the level s NominalCL that can be borrowed by other priority levels   This value of this field must be between 0 and 100  inclusive  and it defaults to 0  The number of seats that other levels can borrow from this level  known as this level s LendableConcurrencyLimit  LendableCL   is defined as follows           LendableCL i    round  NominalCL i    
lendablePercent i  100 0          exempt nominalConcurrencyShares    int32        nominalConcurrencyShares   NCS  contributes to the computation of the NominalConcurrencyLimit  NominalCL  of this level  This is the number of execution seats nominally reserved for this priority level  This DOES NOT limit the dispatching from this priority level but affects the other priority levels through the borrowing mechanism  The server s concurrency limit  ServerCL  is divided among all the priority levels in proportion to their NCS values           NominalCL i     ceil  ServerCL   NCS i    sum ncs   sum ncs   sum priority level k  NCS k           Bigger numbers mean a larger nominal concurrency limit  at the expense of every other priority level  This field has a default value of zero       limited    LimitedPriorityLevelConfiguration      limited  specifies how requests are handled for a Limited priority level  This field must be non empty if and only if  type  is   Limited        a name  LimitedPriorityLevelConfiguration    a     LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits  It addresses two issues        How are requests for this priority level limited        What should be done with requests that exceed the limit          limited borrowingLimitPercent    int32        borrowingLimitPercent   if present  configures a limit on how many seats this priority level can borrow from other priority levels  The limit is known as this level s BorrowingConcurrencyLimit  BorrowingCL  and is a limit on the total number of seats that this level may borrow at any one time  This field holds the ratio of that limit to the level s nominal concurrency limit  When this field is non nil  it must hold a non negative integer and the limit is calculated as follows           BorrowingCL i    round  NominalCL i    borrowingLimitPercent i  100 0            The value of this field can be more than 100  implying that this priority level can borrow a 
number of seats that is greater than its own nominal concurrency limit  NominalCL   When this field is left  nil   the limit is effectively infinite         limited lendablePercent    int32        lendablePercent  prescribes the fraction of the level s NominalCL that can be borrowed by other priority levels  The value of this field must be between 0 and 100  inclusive  and it defaults to 0  The number of seats that other levels can borrow from this level  known as this level s LendableConcurrencyLimit  LendableCL   is defined as follows           LendableCL i    round  NominalCL i    lendablePercent i  100 0          limited limitResponse    LimitResponse        limitResponse  indicates what to do with requests that can not be executed right now       a name  LimitResponse    a       LimitResponse defines how to handle requests that can not be executed right now            limited limitResponse type    string   required         type  is  Queue  or  Reject    Queue  means that requests that can not be executed upon arrival are held in a queue until they can be executed or a queuing limit is reached   Reject  means that requests that can not be executed upon arrival are rejected  Required           limited limitResponse queuing    QueuingConfiguration          queuing  holds the configuration parameters for queuing  This field may be non empty only if  type  is   Queue            a name  QueuingConfiguration    a         QueuingConfiguration holds the configuration parameters for queuing             limited limitResponse queuing handSize    int32            handSize  is a small positive number that configures the shuffle sharding of requests into queues   When enqueuing a request at this priority level the request s flow identifier  a string pair  is hashed and the hash value is used to shuffle the list of queues and deal a hand of the size specified here   The request is put into one of the shortest queues in that hand   handSize  must be no larger than  queues   
and should be significantly smaller (so that a few heavy flows do not saturate most of the queues). See the user-facing documentation for more extensive guidance on setting this field. This field has a default value of 8.\n\n      - **limited.limitResponse.queuing.queueLengthLimit** (int32)\n\n        `queueLengthLimit` is the maximum number of requests allowed to be waiting in a given queue of this priority level at a time; excess requests are rejected. This value must be positive. If not specified, it will be defaulted to 50.\n\n      - **limited.limitResponse.queuing.queues** (int32)\n\n        `queues` is the number of queues for this priority level. The queues exist independently at each apiserver. The value must be positive. Setting it to 1 effectively precludes shufflesharding and thus makes the distinguisher method of associated flow schemas irrelevant. This field has a default value of 64.\n\n  - **limited.nominalConcurrencyShares** (int32)\n\n    `nominalConcurrencyShares` (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats available at this priority level. This is used both for requests dispatched from this priority level as well as requests dispatched from other priority levels borrowing seats from this level. The server's concurrency limit (ServerCL) is divided among the Limited priority levels in proportion to their NCS values:\n    \n    NominalCL(i)  = ceil( ServerCL * NCS(i) \/ sum_ncs )\n    sum_ncs = sum[priority level k] NCS(k)\n    \n    Bigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level.\n    \n    If not specified, this field defaults to a value of 30.\n    \n    Setting this field to zero supports the construction of a \"jail\" for this priority level that is used to hold some request(s).\n\n- **type** (string), required\n\n  `type` indicates whether this priority level is subject to limitation on request execution. A value of `\"Exempt\"` means that requests of this priority level are not subject to a limit (and thus are never queued) and do not detract from the capacity made available to other priority levels. A value of `\"Limited\"` means that (a) requests of this priority level *are* subject to limits and (b) some of the server's limited capacity is made available exclusively to this priority level. Required.\n\n## PriorityLevelConfigurationStatus {#PriorityLevelConfigurationStatus}\n\nPriorityLevelConfigurationStatus represents the current state of a \"request-priority\".\n\n<hr>\n\n- **conditions** ([]PriorityLevelConfigurationCondition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key type will be kept during a merge*\n  \n  `conditions` is the current state of \"request-priority\".\n\n  <a name=\"PriorityLevelConfigurationCondition\"><\/a>\n  *PriorityLevelConfigurationCondition defines the condition of priority level.*\n\n  - **conditions.lastTransitionTime** (Time)\n\n    `lastTransitionTime` is the last time the condition transitioned from one status to another.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string)\n\n    `message` is a human-readable message indicating details about last transition.\n\n  - **conditions.reason** (string)\n\n    `reason` is a unique, one-word, CamelCase reason for the condition's last transition.\n\n  - **conditions.status** (string)\n\n    `status` is the status of the condition. Can be True, False, Unknown. Required.\n\n  - **conditions.type** (string)\n\n    `type` is the type of the condition. Required.\n\n## PriorityLevelConfigurationList {#PriorityLevelConfigurationList}\n\nPriorityLevelConfigurationList is a list of PriorityLevelConfiguration objects.\n\n<hr>\n\n- **apiVersion**: flowcontrol.apiserver.k8s.io\/v1\n\n- **kind**: PriorityLevelConfigurationList\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  `metadata` is the standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **items** ([]<a href=\"\">PriorityLevelConfiguration<\/a>), required\n\n  `items` is a list of request-priorities.\n\n## Operations {#Operations}\n\n<hr>\n\n### `get` read the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nGET \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\n\n#### Parameters\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n#### Response\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n401: Unauthorized\n\n### `get` read status of the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nGET \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\/status\n\n#### Parameters\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n#### Response\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n401: Unauthorized\n\n### `list` list or watch objects of kind PriorityLevelConfiguration\n\n#### HTTP Request\n\nGET \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\n\n#### Parameters\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n#### Response\n\n200 (<a href=\"\">PriorityLevelConfigurationList<\/a>): OK\n\n401: Unauthorized\n\n### `create` create a PriorityLevelConfiguration\n\n#### HTTP Request\n\nPOST \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\n\n#### Parameters\n\n- **body**: <a href=\"\">PriorityLevelConfiguration<\/a>, required\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n#### Response\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n201 (<a href=\"\">PriorityLevelConfiguration<\/a>): Created\n\n202 (<a href=\"\">PriorityLevelConfiguration<\/a>): Accepted\n\n401: Unauthorized\n\n### `update` replace the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nPUT \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\n\n#### Parameters\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n- **body**: <a href=\"\">PriorityLevelConfiguration<\/a>, required\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n#### Response\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n201 (<a href=\"\">PriorityLevelConfiguration<\/a>): Created\n\n401: Unauthorized\n\n### `update` replace status of the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nPUT \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\/status\n\n#### Parameters\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n- **body**: <a href=\"\">PriorityLevelConfiguration<\/a>, required\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n#### Response\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n201 (<a href=\"\">PriorityLevelConfiguration<\/a>): Created\n\n401: Unauthorized\n\n### `patch` partially update the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nPATCH \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\n\n#### Parameters\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n#### Response\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n201 (<a href=\"\">PriorityLevelConfiguration<\/a>): Created\n\n401: Unauthorized\n\n### `patch` partially update status of the specified PriorityLevelConfiguration\n\n#### HTTP Request\n\nPATCH \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\/status\n\n#### Parameters\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n#### Response\n\n200 (<a href=\"\">PriorityLevelConfiguration<\/a>): OK\n\n201 (<a href=\"\">PriorityLevelConfiguration<\/a>): Created\n\n401: Unauthorized\n\n### `delete` delete a PriorityLevelConfiguration\n\n#### HTTP Request\n\nDELETE \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\/{name}\n\n#### Parameters\n\n- **name** (*in path*): string, required\n\n  name of the PriorityLevelConfiguration\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n#### Response\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n### `deletecollection` delete collection of PriorityLevelConfiguration\n\n#### HTTP Request\n\nDELETE \/apis\/flowcontrol.apiserver.k8s.io\/v1\/prioritylevelconfigurations\n\n#### Parameters\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n#### Response\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized"}
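The NominalConcurrencyLimit computation described under `nominalConcurrencyShares` above can be sketched in a few lines of Python. The priority-level names and the ServerCL value below are illustrative, not taken from any real cluster configuration:

```python
import math

def nominal_concurrency_limits(server_cl: int, shares: dict) -> dict:
    """Split a server's concurrency limit (ServerCL) among Limited priority
    levels in proportion to their nominalConcurrencyShares (NCS), per
    NominalCL(i) = ceil(ServerCL * NCS(i) / sum_ncs)."""
    sum_ncs = sum(shares.values())
    return {name: math.ceil(server_cl * ncs / sum_ncs)
            for name, ncs in shares.items()}

# Illustrative values: a ServerCL of 600 split across three levels.
limits = nominal_concurrency_limits(
    600, {"workload-high": 40, "workload-low": 100, "global-default": 20})
print(limits)  # {'workload-high': 150, 'workload-low': 375, 'global-default': 75}
```

Because levels can borrow unused seats from each other at run time, the nominal limit is a proportional baseline rather than a hard per-level cap.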
{"questions":"kubernetes reference weight 7 title ValidatingAdmissionPolicy contenttype apireference apiVersion admissionregistration k8s io v1 apimetadata kind ValidatingAdmissionPolicy ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it autogenerated true import k8s io api admissionregistration v1","answers":"---\napi_metadata:\n  apiVersion: \"admissionregistration.k8s.io\/v1\"\n  import: \"k8s.io\/api\/admissionregistration\/v1\"\n  kind: \"ValidatingAdmissionPolicy\"\ncontent_type: \"api_reference\"\ndescription: \"ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it.\"\ntitle: \"ValidatingAdmissionPolicy\"\nweight: 7\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. 
You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: admissionregistration.k8s.io\/v1`\n\n`import \"k8s.io\/api\/admissionregistration\/v1\"`\n\n\n## ValidatingAdmissionPolicy {#ValidatingAdmissionPolicy}\n\nValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it.\n\n<hr>\n\n- **apiVersion**: admissionregistration.k8s.io\/v1\n\n\n- **kind**: ValidatingAdmissionPolicy\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata; More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata.\n\n- **spec** (ValidatingAdmissionPolicySpec)\n\n  Specification of the desired behavior of the ValidatingAdmissionPolicy.\n\n  <a name=\"ValidatingAdmissionPolicySpec\"><\/a>\n  *ValidatingAdmissionPolicySpec is the specification of the desired behavior of the AdmissionPolicy.*\n\n  - **spec.auditAnnotations** ([]AuditAnnotation)\n\n    *Atomic: will be replaced during a merge*\n    \n    auditAnnotations contains CEL expressions which are used to produce audit annotations for the audit event of the API request. validations and auditAnnotations may not both be empty; at least one of validations or auditAnnotations is required.\n\n    <a name=\"AuditAnnotation\"><\/a>\n    *AuditAnnotation describes how to produce an audit annotation for an API request.*\n\n    - **spec.auditAnnotations.key** (string), required\n\n      key specifies the audit annotation key. The audit annotation keys of a ValidatingAdmissionPolicy must be unique. 
The key must be a qualified name ([A-Za-z0-9][-A-Za-z0-9_.]*) no more than 63 bytes in length.\n      \n      The key is combined with the resource name of the ValidatingAdmissionPolicy to construct an audit annotation key: \"{ValidatingAdmissionPolicy name}\/{key}\".\n      \n      If an admission webhook uses the same resource name as this ValidatingAdmissionPolicy and the same audit annotation key, the annotation key will be identical. In this case, the first annotation written with the key will be included in the audit event and all subsequent annotations with the same key will be discarded.\n      \n      Required.\n\n    - **spec.auditAnnotations.valueExpression** (string), required\n\n      valueExpression represents the expression which is evaluated by CEL to produce an audit annotation value. The expression must evaluate to either a string or null value. If the expression evaluates to a string, the audit annotation is included with the string value. If the expression evaluates to null or empty string the audit annotation will be omitted. The valueExpression may be no longer than 5kb in length. If the result of the valueExpression is more than 10kb in length, it will be truncated to 10kb.\n      \n      If multiple ValidatingAdmissionPolicyBinding resources match an API request, then the valueExpression will be evaluated for each binding. All unique values produced by the valueExpressions will be joined together in a comma-separated list.\n      \n      Required.\n\n  - **spec.failurePolicy** (string)\n\n    failurePolicy defines how to handle failures for the admission policy. Failures can occur from CEL expression parse errors, type check errors, runtime errors and invalid or mis-configured policy definitions or bindings.\n    \n    A policy is invalid if spec.paramKind refers to a non-existent Kind. 
A binding is invalid if spec.paramRef.name refers to a non-existent resource.\n    \n    failurePolicy does not define how validations that evaluate to false are handled.\n    \n    When failurePolicy is set to Fail, ValidatingAdmissionPolicyBinding validationActions define how failures are enforced.\n    \n    Allowed values are Ignore or Fail. Defaults to Fail.\n\n  - **spec.matchConditions** ([]MatchCondition)\n\n    *Patch strategy: merge on key `name`*\n    \n    *Map: unique values on key name will be kept during a merge*\n    \n    MatchConditions is a list of conditions that must be met for a request to be validated. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n    \n    If a parameter object is provided, it can be accessed via the `params` handle in the same manner as validation expressions.\n    \n    The exact matching logic is (in order):\n      1. If ANY matchCondition evaluates to FALSE, the policy is skipped.\n      2. If ALL matchConditions evaluate to TRUE, the policy is evaluated.\n      3. If any matchCondition evaluates to an error (but none are FALSE):\n         - If failurePolicy=Fail, reject the request\n         - If failurePolicy=Ignore, the policy is skipped\n\n    <a name=\"MatchCondition\"><\/a>\n    *MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook.*\n\n    - **spec.matchConditions.expression** (string), required\n\n      Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables:\n      \n      'object' - The object from the incoming request. The value is null for DELETE requests. 'oldObject' - The existing object. The value is null for CREATE requests. 
'request' - Attributes of the admission request(\/pkg\/apis\/admission\/types.go#AdmissionRequest). 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request.\n        See https:\/\/pkg.go.dev\/k8s.io\/apiserver\/pkg\/cel\/library#Authz\n      'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the\n        request resource.\n      Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/\n      \n      Required.\n\n    - **spec.matchConditions.name** (string), required\n\n      Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName',  or 'my.name',  or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '\/' (e.g. 'example.com\/MyName')\n      \n      Required.\n\n  - **spec.matchConstraints** (MatchResources)\n\n    MatchConstraints specifies what resources this policy is designed to validate. The AdmissionPolicy cares about a request if it matches _all_ Constraints. However, in order to prevent clusters from being put into an unstable state that cannot be recovered from via the API ValidatingAdmissionPolicy cannot match ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBinding. Required.\n\n    <a name=\"MatchResources\"><\/a>\n    *MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. 
The exclude rules take precedence over include rules (if a resource matches both, it is excluded)*\n\n    - **spec.matchConstraints.excludeResourceRules** ([]NamedRuleWithOperations)\n\n      *Atomic: will be replaced during a merge*\n      \n      ExcludeResourceRules describes what operations on what resources\/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded)\n\n      <a name=\"NamedRuleWithOperations\"><\/a>\n      *NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.*\n\n      - **spec.matchConstraints.excludeResourceRules.apiGroups** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchConstraints.excludeResourceRules.apiVersions** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchConstraints.excludeResourceRules.operations** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchConstraints.excludeResourceRules.resourceNames** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        ResourceNames is an optional white list of names that the rule applies to.  
An empty set means that everything is allowed.\n\n      - **spec.matchConstraints.excludeResourceRules.resources** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Resources is a list of resources this rule applies to.\n        \n        For example: 'pods' means pods. 'pods\/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods\/*' means all subresources of pods. '*\/scale' means all scale subresources. '*\/*' means all resources and their subresources.\n        \n        If wildcard is present, the validation rule will ensure resources do not overlap with each other.\n        \n        Depending on the enclosing object, subresources might not be allowed. Required.\n\n      - **spec.matchConstraints.excludeResourceRules.scope** (string)\n\n        scope specifies the scope of this rule. Valid values are \"Cluster\", \"Namespaced\", and \"*\". \"Cluster\" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. \"Namespaced\" means that only namespaced resources will match this rule. \"*\" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is \"*\".\n\n    - **spec.matchConstraints.matchPolicy** (string)\n\n      matchPolicy defines how the \"MatchResources\" list is used to match incoming requests. Allowed values are \"Exact\" or \"Equivalent\".\n      \n      - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, but \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would not be sent to the ValidatingAdmissionPolicy.\n      \n      - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, and \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would be converted to apps\/v1 and sent to the ValidatingAdmissionPolicy.\n      \n      Defaults to \"Equivalent\"\n\n    - **spec.matchConstraints.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      NamespaceSelector decides whether to run the admission control policy on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the policy.\n      \n      For example, to run the webhook on any objects whose namespace is not associated with \"runlevel\" of \"0\" or \"1\";  you will set the selector as follows: \"namespaceSelector\": {\n        \"matchExpressions\": [\n          {\n            \"key\": \"runlevel\",\n            \"operator\": \"NotIn\",\n            \"values\": [\n              \"0\",\n              \"1\"\n            ]\n          }\n        ]\n      }\n      \n      If instead you want to only run the policy on any objects whose namespace is associated with the \"environment\" of \"prod\" or \"staging\"; you will set the selector as follows: \"namespaceSelector\": {\n        \"matchExpressions\": [\n          {\n            \"key\": \"environment\",\n            \"operator\": \"In\",\n            \"values\": [\n              \"prod\",\n              \"staging\"\n            ]\n          }\n        ]\n      }\n      \n      See https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/ for more examples of label selectors.\n      \n      Default to the empty LabelSelector, which matches everything.\n\n    - **spec.matchConstraints.objectSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      
ObjectSelector decides whether to run the validation based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the cel validation, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything.\n\n    - **spec.matchConstraints.resourceRules** ([]NamedRuleWithOperations)\n\n      *Atomic: will be replaced during a merge*\n      \n      ResourceRules describes what operations on what resources\/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches _any_ Rule.\n\n      <a name=\"NamedRuleWithOperations\"><\/a>\n      *NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.*\n\n      - **spec.matchConstraints.resourceRules.apiGroups** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchConstraints.resourceRules.apiVersions** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. 
Required.\n\n      - **spec.matchConstraints.resourceRules.operations** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchConstraints.resourceRules.resourceNames** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        ResourceNames is an optional white list of names that the rule applies to.  An empty set means that everything is allowed.\n\n      - **spec.matchConstraints.resourceRules.resources** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Resources is a list of resources this rule applies to.\n        \n        For example: 'pods' means pods. 'pods\/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods\/*' means all subresources of pods. '*\/scale' means all scale subresources. '*\/*' means all resources and their subresources.\n        \n        If wildcard is present, the validation rule will ensure resources do not overlap with each other.\n        \n        Depending on the enclosing object, subresources might not be allowed. Required.\n\n      - **spec.matchConstraints.resourceRules.scope** (string)\n\n        scope specifies the scope of this rule. Valid values are \"Cluster\", \"Namespaced\", and \"*\" \"Cluster\" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. \"Namespaced\" means that only namespaced resources will match this rule. \"*\" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is \"*\".\n\n  - **spec.paramKind** (ParamKind)\n\n    ParamKind specifies the kind of resources used to parameterize this policy. 
If absent, there are no parameters for this policy and the param CEL variable will not be provided to validation expressions. If ParamKind refers to a non-existent kind, this policy definition is mis-configured and the FailurePolicy is applied. If paramKind is specified but paramRef is unset in ValidatingAdmissionPolicyBinding, the params variable will be null.\n\n    <a name=\"ParamKind\"><\/a>\n    *ParamKind is a tuple of Group Kind and Version.*\n\n    - **spec.paramKind.apiVersion** (string)\n\n      APIVersion is the API group version the resources belong to. In format of \"group\/version\". Required.\n\n    - **spec.paramKind.kind** (string)\n\n      Kind is the API kind the resources belong to. Required.\n\n  - **spec.validations** ([]Validation)\n\n    *Atomic: will be replaced during a merge*\n    \n    Validations contain CEL expressions which are used to apply the validation. Validations and AuditAnnotations may not both be empty; at least one of Validations or AuditAnnotations is required.\n\n    <a name=\"Validation\"><\/a>\n    *Validation specifies the CEL expression which is used to apply the validation.*\n\n    - **spec.validations.expression** (string), required\n\n      Expression represents the expression which will be evaluated by CEL. ref: https:\/\/github.com\/google\/cel-spec CEL expressions have access to the contents of the API request\/response, organized into CEL variables as well as some other useful variables:\n      \n      - 'object' - The object from the incoming request. The value is null for DELETE requests. - 'oldObject' - The existing object. The value is null for CREATE requests. - 'request' - Attributes of the API request([ref](\/pkg\/apis\/admission\/types.go#AdmissionRequest)). - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. - 'namespaceObject' - The namespace object that the incoming object belongs to. 
The value is null for cluster-scoped resources. - 'variables' - Map of composited variables, from its name to its lazily evaluated value.\n        For example, a variable named 'foo' can be accessed as 'variables.foo'.\n      - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request.\n        See https:\/\/pkg.go.dev\/k8s.io\/apiserver\/pkg\/cel\/library#Authz\n      - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the\n        request resource.\n      \n      The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the object. No other metadata properties are accessible.\n      \n      Only property names of the form `[a-zA-Z_.-\/][a-zA-Z0-9_.-\/]*` are accessible. Accessible property names are escaped according to the following rules when accessed in the expression: - '__' escapes to '__underscores__' - '.' escapes to '__dot__' - '-' escapes to '__dash__' - '\/' escapes to '__slash__' - Property names that exactly match a CEL RESERVED keyword escape to '__{keyword}__'. The keywords are:\n      \t  \"true\", \"false\", \"null\", \"in\", \"as\", \"break\", \"const\", \"continue\", \"else\", \"for\", \"function\", \"if\",\n      \t  \"import\", \"let\", \"loop\", \"package\", \"namespace\", \"return\".\n      Examples:\n        - Expression accessing a property named \"namespace\": {\"Expression\": \"object.__namespace__ > 0\"}\n        - Expression accessing a property named \"x-prop\": {\"Expression\": \"object.x__dash__prop > 0\"}\n        - Expression accessing a property named \"redact__d\": {\"Expression\": \"object.redact__underscores__d > 0\"}\n      \n      Equality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. 
Concatenation on arrays with x-kubernetes-list-type uses the semantics of the list type:\n        - 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and\n          non-intersecting elements in `Y` are appended, retaining their partial order.\n        - 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values\n          are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. Elements in `Y` with\n          non-intersecting keys are appended, retaining their partial order.\n      Required.\n\n    - **spec.validations.message** (string)\n\n      Message represents the message displayed when validation fails. If the Expression contains line breaks, Message is required; the message itself must not contain line breaks. If unset, the message is \"failed Expression: {Expression}\". e.g. \"must be a URL with the host matching spec.host\"\n\n    - **spec.validations.messageExpression** (string)\n\n      messageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. Since messageExpression is used as a failure message, it must evaluate to a string. If both message and messageExpression are present on a validation, then messageExpression will be used if validation fails. If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced as if the messageExpression field were unset. 
If messageExpression evaluates to an empty string, a string with only spaces, or a string that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and the fact that messageExpression produced an empty string\/string with only spaces\/string with line breaks will be logged. messageExpression has access to all the same variables as the `expression` except for 'authorizer' and 'authorizer.requestResource'. Example: \"object.x must be less than max (\"+string(params.max)+\")\"\n\n    - **spec.validations.reason** (string)\n\n      Reason represents a machine-readable description of why this validation failed. If this is the first validation in the list to fail, this reason, as well as the corresponding HTTP response code, are used in the HTTP response to the client. The currently supported reasons are: \"Unauthorized\", \"Forbidden\", \"Invalid\", \"RequestEntityTooLarge\". If not set, StatusReasonInvalid is used in the response to the client.\n\n  - **spec.variables** ([]Variable)\n\n    *Patch strategy: merge on key `name`*\n    \n    *Map: unique values on key name will be kept during a merge*\n    \n    Variables contain definitions of variables that can be used in composition of other expressions. Each variable is defined as a named CEL expression. The variables defined here will be available under `variables` in other expressions of the policy except MatchConditions because MatchConditions are evaluated before the rest of the policy.\n    \n    The expression of a variable can refer to other variables defined earlier in the list but not those after. Thus, Variables must be sorted by the order of first appearance and acyclic.\n\n    <a name=\"Variable\"><\/a>\n    *Variable is the definition of a variable that is used for composition. 
A variable is defined as a named expression.*\n\n    - **spec.variables.expression** (string), required\n\n      Expression is the expression that will be evaluated as the value of the variable. The CEL expression has access to the same identifiers as the CEL expressions in Validation.\n\n    - **spec.variables.name** (string), required\n\n      Name is the name of the variable. The name must be a valid CEL identifier and unique among all variables. The variable can be accessed in other expressions through `variables`. For example, if name is \"foo\", the variable will be available as `variables.foo`.\n\n- **status** (ValidatingAdmissionPolicyStatus)\n\n  The status of the ValidatingAdmissionPolicy, including warnings that are useful to determine if the policy behaves in the expected way. Populated by the system. Read-only.\n\n  <a name=\"ValidatingAdmissionPolicyStatus\"><\/a>\n  *ValidatingAdmissionPolicyStatus represents the status of an admission validation policy.*\n\n  - **status.conditions** ([]Condition)\n\n    *Map: unique values on key type will be kept during a merge*\n    \n    The conditions represent the latest available observations of a policy's current state.\n\n    <a name=\"Condition\"><\/a>\n    *Condition contains details for one aspect of the current state of this API Resource.*\n\n    - **status.conditions.lastTransitionTime** (Time), required\n\n      lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed.  If that is not known, then using the time when the API field changed is acceptable.\n\n      <a name=\"Time\"><\/a>\n      *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n    - **status.conditions.message** (string), required\n\n      message is a human readable message indicating details about the transition.
This may be an empty string.\n\n    - **status.conditions.reason** (string), required\n\n      reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty.\n\n    - **status.conditions.status** (string), required\n\n      status of the condition, one of True, False, Unknown.\n\n    - **status.conditions.type** (string), required\n\n      type of condition in CamelCase or in foo.example.com\/CamelCase.\n\n    - **status.conditions.observedGeneration** (int64)\n\n      observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.\n\n  - **status.observedGeneration** (int64)\n\n    The generation observed by the controller.\n\n  - **status.typeChecking** (TypeChecking)\n\n    The results of type checking for each expression. Presence of this field indicates the completion of the type checking.\n\n    <a name=\"TypeChecking\"><\/a>\n    *TypeChecking contains results of type checking the expressions in the ValidatingAdmissionPolicy*\n\n    - **status.typeChecking.expressionWarnings** ([]ExpressionWarning)\n\n      *Atomic: will be replaced during a merge*\n      \n      The type checking warnings for each expression.\n\n      <a name=\"ExpressionWarning\"><\/a>\n      *ExpressionWarning is a warning information that targets a specific expression.*\n\n      - **status.typeChecking.expressionWarnings.fieldRef** (string), required\n\n        The path to the field that refers the expression. 
For example, the reference to the expression of the first item of validations is \"spec.validations[0].expression\"\n\n      - **status.typeChecking.expressionWarnings.warning** (string), required\n\n        The content of type checking information in a human-readable form. Each line of the warning contains the type that the expression is checked against, followed by the type check error from the compiler.\n\n\n\n\n\n## ValidatingAdmissionPolicyList {#ValidatingAdmissionPolicyList}\n\nValidatingAdmissionPolicyList is a list of ValidatingAdmissionPolicy.\n\n<hr>\n\n- **items** ([]<a href=\"\">ValidatingAdmissionPolicy<\/a>), required\n\n  List of ValidatingAdmissionPolicy.\n\n- **apiVersion** (string)\n\n  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources\n\n- **kind** (string)\n\n  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n\n\n\n\n## ValidatingAdmissionPolicyBinding {#ValidatingAdmissionPolicyBinding}\n\nValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with parameterized resources.
ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters.\n\nFor a given admission request, each binding will cause its policy to be evaluated N times, where N is 1 for policies\/bindings that don't use params, otherwise N is the number of parameters selected by the binding.\n\nThe CEL expressions of a policy must have a computed CEL cost below the maximum CEL budget. Each evaluation of the policy is given an independent CEL cost budget. Adding\/removing policies, bindings, or params can not affect whether a given (policy, binding, param) combination is within its own CEL budget.\n\n<hr>\n\n- **apiVersion** (string)\n\n  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources\n\n- **kind** (string)\n\n  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata; More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata.\n\n- **spec** (ValidatingAdmissionPolicyBindingSpec)\n\n  Specification of the desired behavior of the ValidatingAdmissionPolicyBinding.\n\n  <a name=\"ValidatingAdmissionPolicyBindingSpec\"><\/a>\n  *ValidatingAdmissionPolicyBindingSpec is the specification of the ValidatingAdmissionPolicyBinding.*\n\n  - **spec.matchResources** (MatchResources)\n\n    MatchResources declares what resources match this binding and will be validated by it. 
Note that this is intersected with the policy's matchConstraints, so only requests that are matched by the policy can be selected by this. If this is unset, all resources matched by the policy are validated by this binding. When resourceRules is unset, it does not constrain resource matching. If a resource is matched by the other fields of this object, it will be validated. Note that this differs from ValidatingAdmissionPolicy matchConstraints, where resourceRules are required.\n\n    <a name=\"MatchResources\"><\/a>\n    *MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. The exclude rules take precedence over include rules (if a resource matches both, it is excluded)*\n\n    - **spec.matchResources.excludeResourceRules** ([]NamedRuleWithOperations)\n\n      *Atomic: will be replaced during a merge*\n      \n      ExcludeResourceRules describes what operations on what resources\/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded)\n\n      <a name=\"NamedRuleWithOperations\"><\/a>\n      *NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.*\n\n      - **spec.matchResources.excludeResourceRules.apiGroups** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchResources.excludeResourceRules.apiVersions** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one.
Required.\n\n      - **spec.matchResources.excludeResourceRules.operations** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchResources.excludeResourceRules.resourceNames** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        ResourceNames is an optional white list of names that the rule applies to.  An empty set means that everything is allowed.\n\n      - **spec.matchResources.excludeResourceRules.resources** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Resources is a list of resources this rule applies to.\n        \n        For example: 'pods' means pods. 'pods\/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods\/*' means all subresources of pods. '*\/scale' means all scale subresources. '*\/*' means all resources and their subresources.\n        \n        If wildcard is present, the validation rule will ensure resources do not overlap with each other.\n        \n        Depending on the enclosing object, subresources might not be allowed. Required.\n\n      - **spec.matchResources.excludeResourceRules.scope** (string)\n\n        scope specifies the scope of this rule. Valid values are \"Cluster\", \"Namespaced\", and \"*\". \"Cluster\" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. \"Namespaced\" means that only namespaced resources will match this rule. \"*\" means that there are no scope restrictions. Subresources match the scope of their parent resource.
Default is \"*\".\n\n    - **spec.matchResources.matchPolicy** (string)\n\n      matchPolicy defines how the \"MatchResources\" list is used to match incoming requests. Allowed values are \"Exact\" or \"Equivalent\".\n      \n      - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, but \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would not be sent to the ValidatingAdmissionPolicy.\n      \n      - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, and \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would be converted to apps\/v1 and sent to the ValidatingAdmissionPolicy.\n      \n      Defaults to \"Equivalent\".\n\n    - **spec.matchResources.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      NamespaceSelector decides whether to run the admission control policy on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels.
If the object is another cluster scoped resource, it never skips the policy.\n      \n      For example, to run the webhook on any objects whose namespace is not associated with \"runlevel\" of \"0\" or \"1\";  you will set the selector as follows: \"namespaceSelector\": {\n        \"matchExpressions\": [\n          {\n            \"key\": \"runlevel\",\n            \"operator\": \"NotIn\",\n            \"values\": [\n              \"0\",\n              \"1\"\n            ]\n          }\n        ]\n      }\n      \n      If instead you want to only run the policy on any objects whose namespace is associated with the \"environment\" of \"prod\" or \"staging\"; you will set the selector as follows: \"namespaceSelector\": {\n        \"matchExpressions\": [\n          {\n            \"key\": \"environment\",\n            \"operator\": \"In\",\n            \"values\": [\n              \"prod\",\n              \"staging\"\n            ]\n          }\n        ]\n      }\n      \n      See https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/ for more examples of label selectors.\n      \n      Default to the empty LabelSelector, which matches everything.\n\n    - **spec.matchResources.objectSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      ObjectSelector decides whether to run the validation based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the cel validation, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. 
Default to the empty LabelSelector, which matches everything.\n\n    - **spec.matchResources.resourceRules** ([]NamedRuleWithOperations)\n\n      *Atomic: will be replaced during a merge*\n      \n      ResourceRules describes what operations on what resources\/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches _any_ Rule.\n\n      <a name=\"NamedRuleWithOperations\"><\/a>\n      *NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.*\n\n      - **spec.matchResources.resourceRules.apiGroups** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchResources.resourceRules.apiVersions** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchResources.resourceRules.operations** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchResources.resourceRules.resourceNames** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        ResourceNames is an optional white list of names that the rule applies to.  An empty set means that everything is allowed.\n\n      - **spec.matchResources.resourceRules.resources** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Resources is a list of resources this rule applies to.\n        \n        For example: 'pods' means pods. 
'pods\/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods\/*' means all subresources of pods. '*\/scale' means all scale subresources. '*\/*' means all resources and their subresources.\n        \n        If wildcard is present, the validation rule will ensure resources do not overlap with each other.\n        \n        Depending on the enclosing object, subresources might not be allowed. Required.\n\n      - **spec.matchResources.resourceRules.scope** (string)\n\n        scope specifies the scope of this rule. Valid values are \"Cluster\", \"Namespaced\", and \"*\". \"Cluster\" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. \"Namespaced\" means that only namespaced resources will match this rule. \"*\" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is \"*\".\n\n  - **spec.paramRef** (ParamRef)\n\n    paramRef specifies the parameter resource used to configure the admission control policy. It should point to a resource of the type specified in ParamKind of the bound ValidatingAdmissionPolicy. If the policy specifies a ParamKind and the resource referred to by ParamRef does not exist, this binding is considered mis-configured and the FailurePolicy of the ValidatingAdmissionPolicy is applied. If the policy does not specify a ParamKind then this field is ignored, and the rules are evaluated without a param.\n\n    <a name=\"ParamRef\"><\/a>\n    *ParamRef describes how to locate the params to be used as input to expressions of rules applied by a policy binding.*\n\n    - **spec.paramRef.name** (string)\n\n      name is the name of the resource being referenced.\n      \n      One of `name` or `selector` must be set, but `name` and `selector` are mutually exclusive properties.
If one is set, the other must be unset.\n      \n      A single parameter used for all admission requests can be configured by setting the `name` field, leaving `selector` blank, and setting namespace if `paramKind` is namespace-scoped.\n\n    - **spec.paramRef.namespace** (string)\n\n      namespace is the namespace of the referenced resource. Allows limiting the search for params to a specific namespace. Applies to both `name` and `selector` fields.\n      \n      A per-namespace parameter may be used by specifying a namespace-scoped `paramKind` in the policy and leaving this field empty.\n      \n      - If `paramKind` is cluster-scoped, this field MUST be unset. Setting this field results in a configuration error.\n      \n      - If `paramKind` is namespace-scoped, the namespace of the object being evaluated for admission will be used when this field is left unset. Take care that if this is left empty the binding must not match any cluster-scoped resources, which will result in an error.\n\n    - **spec.paramRef.parameterNotFoundAction** (string)\n\n      `parameterNotFoundAction` controls the behavior of the binding when the resource exists, and name or selector is valid, but there are no parameters matched by the binding. If the value is set to `Allow`, then no matched parameters will be treated as successful validation by the binding. If set to `Deny`, then no matched parameters will be subject to the `failurePolicy` of the policy.\n      \n      Allowed values are `Allow` or `Deny`\n      \n      Required\n\n    - **spec.paramRef.selector** (<a href=\"\">LabelSelector<\/a>)\n\n      selector can be used to match multiple param objects based on their labels. 
Supply selector: {} to match all resources of the ParamKind.\n      \n      If multiple params are found, they are all evaluated with the policy expressions and the results are ANDed together.\n      \n      One of `name` or `selector` must be set, but `name` and `selector` are mutually exclusive properties. If one is set, the other must be unset.\n\n  - **spec.policyName** (string)\n\n    PolicyName references a ValidatingAdmissionPolicy name which the ValidatingAdmissionPolicyBinding binds to. If the referenced resource does not exist, this binding is considered invalid and will be ignored. Required.\n\n  - **spec.validationActions** ([]string)\n\n    *Set: unique values will be kept during a merge*\n    \n    validationActions declares how Validations of the referenced ValidatingAdmissionPolicy are enforced. If a validation evaluates to false it is always enforced according to these actions.\n    \n    Failures defined by the ValidatingAdmissionPolicy's FailurePolicy are enforced according to these actions only if the FailurePolicy is set to Fail, otherwise the failures are ignored. This includes compilation errors, runtime errors and misconfigurations of the policy.\n    \n    validationActions is declared as a set of action values. Order does not matter. validationActions may not contain duplicates of the same action.\n    \n    The supported action values are:\n    \n    \"Deny\" specifies that a validation failure results in a denied request.\n    \n    \"Warn\" specifies that a validation failure is reported to the request client in HTTP Warning headers, with a warning code of 299. Warnings can be sent both for allowed or denied admission responses.\n    \n    \"Audit\" specifies that a validation failure is included in the published audit event for the request.
The audit event will contain a `validation.policy.admission.k8s.io\/validation_failure` audit annotation with a value containing the details of the validation failures, formatted as a JSON list of objects, each with the following fields: - message: The validation failure message string - policy: The resource name of the ValidatingAdmissionPolicy - binding: The resource name of the ValidatingAdmissionPolicyBinding - expressionIndex: The index of the failed validations in the ValidatingAdmissionPolicy - validationActions: The enforcement actions enacted for the validation failure Example audit annotation: `\"validation.policy.admission.k8s.io\/validation_failure\": \"[{\"message\": \"Invalid value\", \"policy\": \"policy.example.com\", \"binding\": \"policybinding.example.com\", \"expressionIndex\": \"1\", \"validationActions\": [\"Audit\"]}]\"`\n    \n    Clients should expect to handle additional values by ignoring any values not recognized.\n    \n    \"Deny\" and \"Warn\" may not be used together since this combination needlessly duplicates the validation failure both in the API response body and the HTTP warning headers.\n    \n    Required.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ValidatingAdmissionPolicy\n\n#### HTTP Request\n\nGET \/apis\/admissionregistration.k8s.io\/v1\/validatingadmissionpolicies\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ValidatingAdmissionPolicy\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified ValidatingAdmissionPolicy\n\n#### HTTP Request\n\nGET \/apis\/admissionregistration.k8s.io\/v1\/validatingadmissionpolicies\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ValidatingAdmissionPolicy\n\n\n- **pretty** (*in query*):
string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ValidatingAdmissionPolicy\n\n#### HTTP Request\n\nGET \/apis\/admissionregistration.k8s.io\/v1\/validatingadmissionpolicies\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingAdmissionPolicyList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ValidatingAdmissionPolicy\n\n#### HTTP Request\n\nPOST \/apis\/admissionregistration.k8s.io\/v1\/validatingadmissionpolicies\n\n#### Parameters\n\n\n- **body**: <a href=\"\">ValidatingAdmissionPolicy<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): OK\n\n201 (<a 
href=\"\">ValidatingAdmissionPolicy<\/a>): Created\n\n202 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ValidatingAdmissionPolicy\n\n#### HTTP Request\n\nPUT \/apis\/admissionregistration.k8s.io\/v1\/validatingadmissionpolicies\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ValidatingAdmissionPolicy\n\n\n- **body**: <a href=\"\">ValidatingAdmissionPolicy<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): OK\n\n201 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified ValidatingAdmissionPolicy\n\n#### HTTP Request\n\nPUT \/apis\/admissionregistration.k8s.io\/v1\/validatingadmissionpolicies\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ValidatingAdmissionPolicy\n\n\n- **body**: <a href=\"\">ValidatingAdmissionPolicy<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): OK\n\n201 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified ValidatingAdmissionPolicy\n\n#### HTTP Request\n\nPATCH \/apis\/admissionregistration.k8s.io\/v1\/validatingadmissionpolicies\/{name}\n\n#### 
Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ValidatingAdmissionPolicy\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): OK\n\n201 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified ValidatingAdmissionPolicy\n\n#### HTTP Request\n\nPATCH \/apis\/admissionregistration.k8s.io\/v1\/validatingadmissionpolicies\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ValidatingAdmissionPolicy\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): OK\n\n201 (<a href=\"\">ValidatingAdmissionPolicy<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ValidatingAdmissionPolicy\n\n#### HTTP Request\n\nDELETE \/apis\/admissionregistration.k8s.io\/v1\/validatingadmissionpolicies\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ValidatingAdmissionPolicy\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- 
**gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ValidatingAdmissionPolicy\n\n#### HTTP Request\n\nDELETE \/apis\/admissionregistration.k8s.io\/v1\/validatingadmissionpolicies\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n
admission validation policy that accepts or rejects an object without changing it   title   ValidatingAdmissionPolicy  weight  7 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  admissionregistration k8s io v1    import  k8s io api admissionregistration v1        ValidatingAdmissionPolicy   ValidatingAdmissionPolicy   ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it    hr       apiVersion    admissionregistration k8s io v1       kind    ValidatingAdmissionPolicy       metadata     a href    ObjectMeta  a      Standard object metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata       spec    ValidatingAdmissionPolicySpec     Specification of the desired behavior of the ValidatingAdmissionPolicy      a name  ValidatingAdmissionPolicySpec    a     ValidatingAdmissionPolicySpec is the specification of the desired behavior of the AdmissionPolicy          spec auditAnnotations      AuditAnnotation        Atomic  will be replaced during a merge           auditAnnotations contains CEL expressions which are used to produce audit annotations for the audit event of the API request  validations and auditAnnotations may not both be empty  a least one of validations or auditAnnotations is required        a name  AuditAnnotation    a       AuditAnnotation describes how to 
produce an audit annotation for an API request.

    - **spec.auditAnnotations.key** (string), required

      key specifies the audit annotation key. The audit annotation keys of a ValidatingAdmissionPolicy must be unique. The key must be a qualified name ([A-Za-z0-9][-A-Za-z0-9_.]*) no more than 63 bytes in length.

      The key is combined with the resource name of the ValidatingAdmissionPolicy to construct an audit annotation key: "{ValidatingAdmissionPolicy name}/{key}".

      If an admission webhook uses the same resource name as this ValidatingAdmissionPolicy and the same audit annotation key, the annotation key will be identical. In this case, the first annotation written with the key will be included in the audit event and all subsequent annotations with the same key will be discarded.

      Required.

    - **spec.auditAnnotations.valueExpression** (string), required

      valueExpression represents the expression which is evaluated by CEL to produce an audit annotation value. The expression must evaluate to either a string or null value. If the expression evaluates to a string, the audit annotation is included with the string value. If the expression evaluates to null or empty string, the audit annotation will be omitted. The valueExpression may be no longer than 5kb in length. If the result of the valueExpression is more than 10kb in length, it will be truncated to 10kb.

      If multiple ValidatingAdmissionPolicyBinding resources match an API request, then the valueExpression will be evaluated for each binding. All unique values produced by the valueExpressions will be joined together in a comma-separated list.

      Required.
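As a concrete illustration of the two fields above, here is a minimal, hypothetical policy (the name, match rule, and expression are invented for this sketch) that records an audit annotation when a Deployment requests a high replica count:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-audit-policy            # hypothetical name
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  auditAnnotations:
      # the audit event key becomes "demo-audit-policy/high-replica-count"
    - key: high-replica-count
      # must evaluate to a string or null; null omits the annotation
      valueExpression: >-
        object.spec.replicas > 50 ? 'replicas=' + string(object.spec.replicas) : null
```

A policy carrying only auditAnnotations and no validations is valid, since at least one of the two is required.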
  - **spec.failurePolicy** (string)

    failurePolicy defines how to handle failures for the admission policy. Failures can occur from CEL expression parse errors, type check errors, runtime errors and invalid or mis-configured policy definitions or bindings.

    A policy is invalid if spec.paramKind refers to a non-existent Kind. A binding is invalid if spec.paramRef.name refers to a non-existent resource.

    failurePolicy does not define how validations that evaluate to false are handled.

    When failurePolicy is set to Fail, ValidatingAdmissionPolicyBinding validationActions define how failures are enforced.

    Allowed values are Ignore or Fail. Defaults to Fail.

  - **spec.matchConditions** ([]MatchCondition)

    *Patch strategy: merge on key `name`*

    *Map: unique values on key name will be kept during a merge*

    MatchConditions is a list of conditions that must be met for a request to be validated. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.

    If a parameter object is provided, it can be accessed via the `params` handle in the same manner as validation expressions.

    The exact matching logic is (in order):
    1. If ANY matchCondition evaluates to FALSE, the policy is skipped.
    2. If ALL matchConditions evaluate to TRUE, the policy is evaluated.
    3. If any matchCondition evaluates to an error (but none are FALSE):
       - If failurePolicy=Fail, reject the request
       - If failurePolicy=Ignore, the policy is skipped

    <a name="MatchCondition"></a>
    MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook.

    - **spec.matchConditions.expression** (string), required

      Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables:

      - 'object' - The object from the incoming request. The value is null for DELETE requests.
      - 'oldObject' - The existing object. The value is null for CREATE requests.
      - 'request' - Attributes of the admission request (/pkg/apis/admission/types.go#AdmissionRequest).
      - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz
      - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource.

      Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/

      Required.

    - **spec.matchConditions.name** (string), required

      Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc'; regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName').

      Required.

  - **spec.matchConstraints** (MatchResources)

    MatchConstraints specifies what resources this policy is designed to validate. The AdmissionPolicy cares about a request if it matches _all_ Constraints. However, in order to prevent clusters from being put into an unstable state that cannot be recovered from via the API, ValidatingAdmissionPolicy cannot match ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBinding. Required.

    <a name="MatchResources"></a>
    MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. The exclude rules take precedence over include rules (if a resource matches both, it is excluded).

    - **spec.matchConstraints.excludeResourceRules** ([]NamedRuleWithOperations)

      *Atomic: will be replaced during a merge*

      ExcludeResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded).

      <a name="NamedRuleWithOperations"></a>
      NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.

      - **spec.matchConstraints.excludeResourceRules.apiGroups** ([]string)

        *Atomic: will be replaced during a merge*

        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.excludeResourceRules.apiVersions** ([]string)

        *Atomic: will be replaced during a merge*

        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.excludeResourceRules.operations** ([]string)

        *Atomic: will be replaced during a merge*

        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.excludeResourceRules.resourceNames** ([]string)

        *Atomic: will be replaced during a merge*

        ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.

      - **spec.matchConstraints.excludeResourceRules.resources** ([]string)

        *Atomic: will be replaced during a merge*

        Resources is a list of resources this rule applies to.

        For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all
subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources.

        If wildcard is present, the validation rule will ensure resources do not overlap with each other.

        Depending on the enclosing object, subresources might not be allowed. Required.

      - **spec.matchConstraints.excludeResourceRules.scope** (string)

        scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*".

    - **spec.matchConstraints.matchPolicy** (string)

      matchPolicy defines how the "MatchResources" list is used to match incoming requests. Allowed values are "Exact" or "Equivalent".

      - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included `apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"]`, a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the ValidatingAdmissionPolicy.

      - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included `apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"]`, a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the ValidatingAdmissionPolicy.

      Defaults to "Equivalent".

    - **spec.matchConstraints.namespaceSelector** (<a href="">LabelSelector</a>)

      NamespaceSelector decides whether to run the admission control policy on
an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the policy.

      For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows:
      "namespaceSelector": {
        "matchExpressions": [
          {
            "key": "runlevel",
            "operator": "NotIn",
            "values": [
              "0",
              "1"
            ]
          }
        ]
      }

      If instead you want to only run the policy on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows:
      "namespaceSelector": {
        "matchExpressions": [
          {
            "key": "environment",
            "operator": "In",
            "values": [
              "prod",
              "staging"
            ]
          }
        ]
      }

      See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors.

      Default to the empty LabelSelector, which matches everything.

    - **spec.matchConstraints.objectSelector** (<a href="">LabelSelector</a>)

      ObjectSelector decides whether to run the validation based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the cel validation, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty
LabelSelector, which matches everything.

    - **spec.matchConstraints.resourceRules** ([]NamedRuleWithOperations)

      *Atomic: will be replaced during a merge*

      ResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches _any_ Rule.

      <a name="NamedRuleWithOperations"></a>
      NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.

      - **spec.matchConstraints.resourceRules.apiGroups** ([]string)

        *Atomic: will be replaced during a merge*

        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.resourceRules.apiVersions** ([]string)

        *Atomic: will be replaced during a merge*

        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.resourceRules.operations** ([]string)

        *Atomic: will be replaced during a merge*

        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.resourceRules.resourceNames** ([]string)

        *Atomic: will be replaced during a merge*

        ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.

      - **spec.matchConstraints.resourceRules.resources** ([]string)

        *Atomic: will be replaced during a merge*

        Resources is a list of resources this rule applies to.

        For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all
resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources.

        If wildcard is present, the validation rule will ensure resources do not overlap with each other.

        Depending on the enclosing object, subresources might not be allowed. Required.

      - **spec.matchConstraints.resourceRules.scope** (string)

        scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*".

  - **spec.paramKind** (ParamKind)

    ParamKind specifies the kind of resources used to parameterize this policy. If absent, there are no parameters for this policy and the param CEL variable will not be provided to validation expressions. If ParamKind refers to a non-existent kind, this policy definition is mis-configured and the FailurePolicy is applied. If paramKind is specified but paramRef is unset in ValidatingAdmissionPolicyBinding, the params variable will be null.

    <a name="ParamKind"></a>
    ParamKind is a tuple of Group Kind and Version.

    - **spec.paramKind.apiVersion** (string)

      APIVersion is the API group version the resources belong to. In format of "group/version". Required.

    - **spec.paramKind.kind** (string)

      Kind is the API kind the resources belong to. Required.

  - **spec.validations** ([]Validation)

    *Atomic: will be replaced during a merge*

    Validations contain CEL expressions which are used to apply the validation. Validations and AuditAnnotations may not both be empty; a minimum of one Validations or AuditAnnotations is required.

    <a name="Validation"></a>
    Validation specifies the CEL expression which is used to apply the validation.

    - **spec.validations.expression** (string), required

      Expression represents the expression which will be evaluated by CEL (ref: https://github.com/google/cel-spec). CEL expressions have access to the contents of the API request/response, organized into CEL variables as well as some other useful variables:

      - 'object' - The object from the incoming request. The value is null for DELETE requests.
      - 'oldObject' - The existing object. The value is null for CREATE requests.
      - 'request' - Attributes of the API request (ref: /pkg/apis/admission/types.go#AdmissionRequest).
      - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind.
      - 'namespaceObject' - The namespace object that the incoming object belongs to. The value is null for cluster-scoped resources.
      - 'variables' - Map of composited variables, from its name to its lazily evaluated value. For example, a variable named 'foo' can be accessed as 'variables.foo'.
      - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz
      - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource.

      The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the object. No other metadata properties are accessible.

      Only property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible. Accessible property names are escaped according to the following rules when accessed in the expression:

      - '__' escapes to '__underscores__'
      - '.' escapes to '__dot__'
      - '-' escapes to '__dash__'
      - '/' escapes to '__slash__'
      - Property names that exactly match a CEL
RESERVED keyword escape to '__{keyword}__'. The keywords are: "true", "false", "null", "in", "as", "break", "const", "continue", "else", "for", "function", "if", "import", "let", "loop", "package", "namespace", "return".

      Examples:

      - Expression accessing a property named "namespace": {"Expression": "object.__namespace__ > 0"}
      - Expression accessing a property named "x-prop": {"Expression": "object.x__dash__prop > 0"}
      - Expression accessing a property named "redact__d": {"Expression": "object.redact__underscores__d > 0"}

      Equality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type:

      - 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and non-intersecting elements in `Y` are appended, retaining their partial order.
      - 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. Elements in `Y` with non-intersecting keys are appended, retaining their partial order.

      Required.

    - **spec.validations.message** (string)

      Message represents the message displayed when validation fails. The message is required if the Expression contains line breaks. The message must not contain line breaks. If unset, the message is "failed rule: {Rule}", e.g. "must be a URL with the host matching spec.host". If the Expression contains line breaks, Message is required. The message must not contain line breaks. If unset, the message is "failed Expression: {Expression}".
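To make the escaping rules above concrete, here is a hypothetical policy (the CRD group, resource, and field names are invented for this sketch) that validates a field whose name contains a dash:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-escaping-policy        # hypothetical name
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["example.com"]  # hypothetical CRD group
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["widgets"]
  validations:
      # the object's field is named "x-prop"; '-' escapes to __dash__
      # when the property is accessed inside the CEL expression
    - expression: "object.spec.x__dash__prop > 0"
      message: "spec.x-prop must be greater than 0"
```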
    - **spec.validations.messageExpression** (string)

      messageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. Since messageExpression is used as a failure message, it must evaluate to a string. If both message and messageExpression are present on a validation, then messageExpression will be used if validation fails. If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced as if the messageExpression field were unset. If messageExpression evaluates to an empty string, a string with only spaces, or a string that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and the fact that messageExpression produced an empty string/string with only spaces/string with line breaks will be logged. messageExpression has access to all the same variables as the `expression` except for 'authorizer' and 'authorizer.requestResource'. Example: "object.x must be less than max ("+string(params.max)+")"

    - **spec.validations.reason** (string)

      Reason represents a machine-readable description of why this validation failed. If this is the first validation in the list to fail, this reason, as well as the corresponding HTTP response code, are used in the HTTP response to the client. The currently supported reasons are: "Unauthorized", "Forbidden", "Invalid", "RequestEntityTooLarge". If not set, StatusReasonInvalid is used in the response to the client.

  - **spec.variables** ([]Variable)

    *Patch strategy: merge on key `name`*

    *Map: unique values on key name will be kept during a merge*

    Variables contain definitions of variables that can be used in composition of other expressions. Each variable is defined as a named CEL expression. The variables defined here will be available under `variables` in other expressions of the policy except MatchConditions because MatchConditions are evaluated before the rest of the policy.

    The expression of a variable can refer to other variables defined earlier in the list but not those after. Thus, Variables must be
sorted by the order of first appearance and acyclic.

    <a name="Variable"></a>
    Variable is the definition of a variable that is used for composition. A variable is defined as a named expression.

    - **spec.variables.expression** (string), required

      Expression is the expression that will be evaluated as the value of the variable. The CEL expression has access to the same identifiers as the CEL expressions in Validation.

    - **spec.variables.name** (string), required

      Name is the name of the variable. The name must be a valid CEL identifier and unique among all variables. The variable can be accessed in other expressions through `variables`. For example, if name is "foo", the variable will be available as `variables.foo`.

- **status** (ValidatingAdmissionPolicyStatus)

  The status of the ValidatingAdmissionPolicy, including warnings that are useful to determine if the policy behaves in the expected way. Populated by the system. Read-only.

  <a name="ValidatingAdmissionPolicyStatus"></a>
  ValidatingAdmissionPolicyStatus represents the status of an admission validation policy.

  - **status.conditions** ([]Condition)

    *Map: unique values on key type will be kept during a merge*

    The conditions represent the latest available observations of a policy's current state.

    <a name="Condition"></a>
    Condition contains details for one aspect of the current state of this API Resource.

    - **status.conditions.lastTransitionTime** (Time), required

      lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.

      <a name="Time"></a>
      Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.

    - **status.conditions.message** (string),
required

      message is a human-readable message indicating details about the transition. This may be an empty string.

    - **status.conditions.reason** (string), required

      reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty.

    - **status.conditions.status** (string), required

      status of the condition, one of True, False, Unknown.

    - **status.conditions.type** (string), required

      type of condition in CamelCase or in foo.example.com/CamelCase.

    - **status.conditions.observedGeneration** (int64)

      observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.

  - **status.observedGeneration** (int64)

    The generation observed by the controller.

  - **status.typeChecking** (TypeChecking)

    The results of type checking for each expression. Presence of this field indicates the completion of the type checking.

    <a name="TypeChecking"></a>
    TypeChecking contains results of type checking the expressions in the ValidatingAdmissionPolicy.

    - **status.typeChecking.expressionWarnings** ([]ExpressionWarning)

      *Atomic: will be replaced during a merge*

      The type checking warnings for each expression.

      <a name="ExpressionWarning"></a>
      ExpressionWarning is a warning information that targets a specific expression.

      - **status.typeChecking.expressionWarnings.fieldRef** (string), required

        The path to the field that refers the expression. For example, the reference to the expression of the first item
of validations is "spec.validations[0].expression".

      - **status.typeChecking.expressionWarnings.warning** (string), required

        The content of type checking information in a human-readable form. Each line of the warning contains the type that the expression is checked against, followed by the type check error from the compiler.

## ValidatingAdmissionPolicyList {#ValidatingAdmissionPolicyList}

ValidatingAdmissionPolicyList is a list of ValidatingAdmissionPolicy.

<hr>

- **items** ([]<a href="">ValidatingAdmissionPolicy</a>), required

  List of ValidatingAdmissionPolicy.

- **apiVersion** (string)

  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

- **kind** (string)

  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

- **metadata** (<a href="">ListMeta</a>)

  Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

## ValidatingAdmissionPolicyBinding {#ValidatingAdmissionPolicyBinding}

ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with parameterized resources. ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters.

For a given admission request, each binding will cause its policy to be evaluated N times, where N is 1 for policies/bindings that don't use params, otherwise N is the number of parameters selected by the binding.
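A binding that puts an existing policy into effect might look like the following sketch (all names are hypothetical, and it assumes a ValidatingAdmissionPolicy named demo-audit-policy already exists in the cluster):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: demo-binding                 # hypothetical name
spec:
  policyName: demo-audit-policy      # references an existing ValidatingAdmissionPolicy
  validationActions: ["Deny"]        # enforce failures by denying the request
  matchResources:
    namespaceSelector:
      matchExpressions:
        - key: environment
          operator: In
          values: ["prod", "staging"]
```

Because this binding selects namespaces by label and uses no params, the policy is evaluated once (N = 1) per matching request.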
The CEL expressions of a policy must have a computed CEL cost below the maximum CEL budget. Each evaluation of the policy is given an independent CEL cost budget. Adding/removing policies, bindings, or params can not affect whether a given (policy, binding, param) combination is within its own CEL budget.

<hr>

- **apiVersion** (string)

  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

- **kind** (string)

  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

- **metadata** (<a href="">ObjectMeta</a>)

  Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (ValidatingAdmissionPolicyBindingSpec)

  Specification of the desired behavior of the ValidatingAdmissionPolicyBinding.

  <a name="ValidatingAdmissionPolicyBindingSpec"></a>
  ValidatingAdmissionPolicyBindingSpec is the specification of the ValidatingAdmissionPolicyBinding.

  - **spec.matchResources** (MatchResources)

    MatchResources declares what resources match this binding and will be validated by it. Note that this is intersected with the policy's matchConstraints, so only requests that are matched by the policy can be selected by this. If this is unset, all resources matched by the policy are validated by this binding. When resourceRules is unset, it does not constrain resource matching. If a resource is matched by the other fields of this object, it will be validated. Note that this differs from ValidatingAdmissionPolicy matchConstraints, where resourceRules are required.

    <a name="MatchResources"></a>
    MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. The exclude rules take precedence over include rules (if a resource matches both, it is excluded).

    - **spec.matchResources.excludeResourceRules** ([]NamedRuleWithOperations)

      *Atomic: will be replaced during a merge*

      ExcludeResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded).

      <a name="NamedRuleWithOperations"></a>
      NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.

      - **spec.matchResources.excludeResourceRules.apiGroups** ([]string)

        *Atomic: will be replaced during a merge*

        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchResources.excludeResourceRules.apiVersions** ([]string)

        *Atomic: will be replaced during a merge*

        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchResources.excludeResourceRules.operations** ([]string)

        *Atomic: will be replaced during a merge*

        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchResources.excludeResourceRules.resourceNames** ([]string)

        *Atomic: will be replaced during a merge*

        ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.

      - **spec.matchResources.excludeResourceRules.resources** ([]string)

        *Atomic: will be replaced during a merge*

        Resources is a list of resources this rule applies to.

        For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources.

        If wildcard is present, the validation rule will ensure resources do not overlap with each other.

        Depending on the enclosing object, subresources might not be allowed. Required.

      - **spec.matchResources.excludeResourceRules.scope** (string)

        scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*".

    - **spec.matchResources.matchPolicy** (string)

      matchPolicy defines how the "MatchResources" list is used to match incoming requests. Allowed values are "Exact" or "Equivalent".

      - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included `apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"]`, a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the ValidatingAdmissionPolicy.

      - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources:
 deployments     a request to apps v1beta1 or extensions v1beta1 would be converted to apps v1 and sent to the ValidatingAdmissionPolicy               Defaults to  Equivalent           spec matchResources namespaceSelector     a href    LabelSelector  a          NamespaceSelector decides whether to run the admission control policy on an object based on whether the namespace for that object matches the selector  If the object itself is a namespace  the matching is performed on object metadata labels  If the object is another cluster scoped resource  it never skips the policy               For example  to run the webhook on any objects whose namespace is not associated with  runlevel  of  0  or  1    you will set the selector as follows   namespaceSelector              matchExpressions                              key    runlevel                operator    NotIn                values                    0                  1                                                           If instead you want to only run the policy on any objects whose namespace is associated with the  environment  of  prod  or  staging   you will set the selector as follows   namespaceSelector              matchExpressions                              key    environment                operator    In                values                    prod                  staging                                                           See https   kubernetes io docs concepts overview working with objects labels  for more examples of label selectors               Default to the empty LabelSelector  which matches everything           spec matchResources objectSelector     a href    LabelSelector  a          ObjectSelector decides whether to run the validation based on if the object has matching labels  objectSelector is evaluated against both the oldObject and newObject that would be sent to the cel validation  and is considered to match if either object matches the selector  A null object  oldObject in 
the case of create  or newObject in the case of delete  or an object that cannot have labels  like a DeploymentRollback or a PodProxyOptions object  is not considered to match  Use the object selector only if the webhook is opt in  because end users may skip the admission webhook by setting the labels  Default to the empty LabelSelector  which matches everything           spec matchResources resourceRules      NamedRuleWithOperations          Atomic  will be replaced during a merge               ResourceRules describes what operations on what resources subresources the ValidatingAdmissionPolicy matches  The policy cares about an operation if it matches  any  Rule          a name  NamedRuleWithOperations    a         NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames              spec matchResources resourceRules apiGroups      string            Atomic  will be replaced during a merge                   APIGroups is the API groups the resources belong to      is all groups  If     is present  the length of the slice must be one  Required             spec matchResources resourceRules apiVersions      string            Atomic  will be replaced during a merge                   APIVersions is the API versions the resources belong to      is all versions  If     is present  the length of the slice must be one  Required             spec matchResources resourceRules operations      string            Atomic  will be replaced during a merge                   Operations is the operations the admission hook cares about   CREATE  UPDATE  DELETE  CONNECT or   for all of those operations and any future admission operations that are added  If     is present  the length of the slice must be one  Required             spec matchResources resourceRules resourceNames      string            Atomic  will be replaced during a merge                   ResourceNames is an optional white list of names that the rule applies to   An empty set means that everything 
is allowed             spec matchResources resourceRules resources      string            Atomic  will be replaced during a merge                   Resources is a list of resources this rule applies to                   For example   pods  means pods   pods log  means the log subresource of pods      means all resources  but not subresources   pods    means all subresources of pods     scale  means all scale subresources        means all resources and their subresources                   If wildcard is present  the validation rule will ensure resources do not overlap with each other                   Depending on the enclosing object  subresources might not be allowed  Required             spec matchResources resourceRules scope    string           scope specifies the scope of this rule  Valid values are  Cluster    Namespaced   and      Cluster  means that only cluster scoped resources will match this rule  Namespace API objects are cluster scoped   Namespaced  means that only namespaced resources will match this rule      means that there are no scope restrictions  Subresources match the scope of their parent resource  Default is             spec paramRef    ParamRef       paramRef specifies the parameter resource used to configure the admission control policy  It should point to a resource of the type specified in ParamKind of the bound ValidatingAdmissionPolicy  If the policy specifies a ParamKind and the resource referred to by ParamRef does not exist  this binding is considered mis configured and the FailurePolicy of the ValidatingAdmissionPolicy applied  If the policy does not specify a ParamKind then this field is ignored  and the rules are evaluated without a param        a name  ParamRef    a       ParamRef describes how to locate the params to be used as input to expressions of rules applied by a policy binding            spec paramRef name    string         name is the name of the resource being referenced               One of  name  or  selector  must 
be set  but  name  and  selector  are mutually exclusive properties  If one is set  the other must be unset               A single parameter used for all admission requests can be configured by setting the  name  field  leaving  selector  blank  and setting namespace if  paramKind  is namespace scoped           spec paramRef namespace    string         namespace is the namespace of the referenced resource  Allows limiting the search for params to a specific namespace  Applies to both  name  and  selector  fields               A per namespace parameter may be used by specifying a namespace scoped  paramKind  in the policy and leaving this field empty                 If  paramKind  is cluster scoped  this field MUST be unset  Setting this field results in a configuration error                 If  paramKind  is namespace scoped  the namespace of the object being evaluated for admission will be used when this field is left unset  Take care that if this is left empty the binding must not match any cluster scoped resources  which will result in an error           spec paramRef parameterNotFoundAction    string          parameterNotFoundAction  controls the behavior of the binding when the resource exists  and name or selector is valid  but there are no parameters matched by the binding  If the value is set to  Allow   then no matched parameters will be treated as successful validation by the binding  If set to  Deny   then no matched parameters will be subject to the  failurePolicy  of the policy               Allowed values are  Allow  or  Deny               Required          spec paramRef selector     a href    LabelSelector  a          selector can be used to match multiple param objects based on their labels  Supply selector     to match all resources of the ParamKind               If multiple params are found  they are all evaluated with the policy expressions and the results are ANDed together               One of  name  or  selector  must be set  but  name  and  
selector  are mutually exclusive properties  If one is set  the other must be unset         spec policyName    string       PolicyName references a ValidatingAdmissionPolicy name which the ValidatingAdmissionPolicyBinding binds to  If the referenced resource does not exist  this binding is considered invalid and will be ignored Required         spec validationActions      string        Set  unique values will be kept during a merge           validationActions declares how Validations of the referenced ValidatingAdmissionPolicy are enforced  If a validation evaluates to false it is always enforced according to these actions           Failures defined by the ValidatingAdmissionPolicy s FailurePolicy are enforced according to these actions only if the FailurePolicy is set to Fail  otherwise the failures are ignored  This includes compilation errors  runtime errors and misconfigurations of the policy           validationActions is declared as a set of action values  Order does not matter  validationActions may not contain duplicates of the same action           The supported actions values are            Deny  specifies that a validation failure results in a denied request            Warn  specifies that a validation failure is reported to the request client in HTTP Warning headers  with a warning code of 299  Warnings can be sent both for allowed or denied admission responses            Audit  specifies that a validation failure is included in the published audit event for the request  The audit event will contain a  validation policy admission k8s io validation failure  audit annotation with a value containing the details of the validation failures  formatted as a JSON list of objects  each with the following fields    message  The validation failure message string   policy  The resource name of the ValidatingAdmissionPolicy   binding  The resource name of the ValidatingAdmissionPolicyBinding   expressionIndex  The index of the failed validations in the 
ValidatingAdmissionPolicy   validationActions  The enforcement actions enacted for the validation failure Example audit annotation    validation policy admission k8s io validation failure       message    Invalid value     policy    policy example com     binding    policybinding example com     expressionIndex    1     validationActions     Audit                Clients should expect to handle additional values by ignoring any values not recognized            Deny  and  Warn  may not be used together since this combination needlessly duplicates the validation failure both in the API response body and the HTTP warning headers           Required          Operations   Operations      hr             get  read the specified ValidatingAdmissionPolicy       HTTP Request  GET  apis admissionregistration k8s io v1 validatingadmissionpolicies  name        Parameters       name     in path    string  required    name of the ValidatingAdmissionPolicy       pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingAdmissionPolicy  a    OK  401  Unauthorized        get  read status of the specified ValidatingAdmissionPolicy       HTTP Request  GET  apis admissionregistration k8s io v1 validatingadmissionpolicies  name  status       Parameters       name     in path    string  required    name of the ValidatingAdmissionPolicy       pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingAdmissionPolicy  a    OK  401  Unauthorized        list  list or watch objects of kind ValidatingAdmissionPolicy       HTTP Request  GET  apis admissionregistration k8s io v1 validatingadmissionpolicies       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector 
 a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ValidatingAdmissionPolicyList  a    OK  401  Unauthorized        create  create a ValidatingAdmissionPolicy       HTTP Request  POST  apis admissionregistration k8s io v1 validatingadmissionpolicies       Parameters       body     a href    ValidatingAdmissionPolicy  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingAdmissionPolicy  a    OK  201   a href    ValidatingAdmissionPolicy  a    Created  202   a href    ValidatingAdmissionPolicy  a    Accepted  401  Unauthorized        update  replace the specified ValidatingAdmissionPolicy       HTTP Request  PUT  apis admissionregistration k8s io v1 validatingadmissionpolicies  name        Parameters       name     in path    string  required    name of the ValidatingAdmissionPolicy       body     a href    ValidatingAdmissionPolicy  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingAdmissionPolicy  a    OK  201   a 
href    ValidatingAdmissionPolicy  a    Created  401  Unauthorized        update  replace status of the specified ValidatingAdmissionPolicy       HTTP Request  PUT  apis admissionregistration k8s io v1 validatingadmissionpolicies  name  status       Parameters       name     in path    string  required    name of the ValidatingAdmissionPolicy       body     a href    ValidatingAdmissionPolicy  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingAdmissionPolicy  a    OK  201   a href    ValidatingAdmissionPolicy  a    Created  401  Unauthorized        patch  partially update the specified ValidatingAdmissionPolicy       HTTP Request  PATCH  apis admissionregistration k8s io v1 validatingadmissionpolicies  name        Parameters       name     in path    string  required    name of the ValidatingAdmissionPolicy       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingAdmissionPolicy  a    OK  201   a href    ValidatingAdmissionPolicy  a    Created  401  Unauthorized        patch  partially update status of the specified ValidatingAdmissionPolicy       HTTP Request  PATCH  apis admissionregistration k8s io v1 validatingadmissionpolicies  name  status       Parameters       name     in path    string  required    name of the ValidatingAdmissionPolicy       body     a href    Patch  a   required           dryRun     in query    
string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingAdmissionPolicy  a    OK  201   a href    ValidatingAdmissionPolicy  a    Created  401  Unauthorized        delete  delete a ValidatingAdmissionPolicy       HTTP Request  DELETE  apis admissionregistration k8s io v1 validatingadmissionpolicies  name        Parameters       name     in path    string  required    name of the ValidatingAdmissionPolicy       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of ValidatingAdmissionPolicy       HTTP Request  DELETE  apis admissionregistration k8s io v1 validatingadmissionpolicies       Parameters       body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href 
   resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference apiVersion v1 contenttype apireference kind ResourceQuota title ResourceQuota apimetadata ResourceQuota sets aggregate quota restrictions enforced per namespace weight 3 autogenerated true import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"ResourceQuota\"\ncontent_type: \"api_reference\"\ndescription: \"ResourceQuota sets aggregate quota restrictions enforced per namespace.\"\ntitle: \"ResourceQuota\"\nweight: 3\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## ResourceQuota {#ResourceQuota}\n\nResourceQuota sets aggregate quota restrictions enforced per namespace\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: ResourceQuota\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">ResourceQuotaSpec<\/a>)\n\n  Spec defines the desired quota. https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">ResourceQuotaStatus<\/a>)\n\n  Status defines the actual enforced quota and its current usage. 
https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## ResourceQuotaSpec {#ResourceQuotaSpec}\n\nResourceQuotaSpec defines the desired hard limits to enforce for Quota.\n\n<hr>\n\n- **hard** (map[string]<a href=\"\">Quantity<\/a>)\n\n  hard is the set of desired hard limits for each named resource. More info: https:\/\/kubernetes.io\/docs\/concepts\/policy\/resource-quotas\/\n\n- **scopeSelector** (ScopeSelector)\n\n  scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched.\n\n  <a name=\"ScopeSelector\"><\/a>\n  *A scope selector represents the AND of the selectors represented by the scoped-resource selector requirements.*\n\n  - **scopeSelector.matchExpressions** ([]ScopedResourceSelectorRequirement)\n\n    *Atomic: will be replaced during a merge*\n    \n    A list of scope selector requirements by scope of the resources.\n\n    <a name=\"ScopedResourceSelectorRequirement\"><\/a>\n    *A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator that relates the scope name and values.*\n\n    - **scopeSelector.matchExpressions.operator** (string), required\n\n      Represents a scope's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist.\n\n    - **scopeSelector.matchExpressions.scopeName** (string), required\n\n      The name of the scope that the selector applies to.\n\n    - **scopeSelector.matchExpressions.values** ([]string)\n\n      *Atomic: will be replaced during a merge*\n      \n      An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
This array is replaced during a strategic merge patch.\n\n- **scopes** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects.\n\n\n\n\n\n## ResourceQuotaStatus {#ResourceQuotaStatus}\n\nResourceQuotaStatus defines the enforced hard limits and observed use.\n\n<hr>\n\n- **hard** (map[string]<a href=\"\">Quantity<\/a>)\n\n  Hard is the set of enforced hard limits for each named resource. More info: https:\/\/kubernetes.io\/docs\/concepts\/policy\/resource-quotas\/\n\n- **used** (map[string]<a href=\"\">Quantity<\/a>)\n\n  Used is the current observed total usage of the resource in the namespace.\n\n\n\n\n\n## ResourceQuotaList {#ResourceQuotaList}\n\nResourceQuotaList is a list of ResourceQuota items.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: ResourceQuotaList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">ResourceQuota<\/a>), required\n\n  Items is a list of ResourceQuota objects. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/policy\/resource-quotas\/\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified ResourceQuota\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/resourcequotas\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceQuota\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceQuota<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified ResourceQuota\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/resourcequotas\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceQuota\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceQuota<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ResourceQuota\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/resourcequotas\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a 
href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceQuotaList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind ResourceQuota\n\n#### HTTP Request\n\nGET \/api\/v1\/resourcequotas\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceQuotaList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create a ResourceQuota\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/resourcequotas\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ResourceQuota<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a 
href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceQuota<\/a>): OK\n\n201 (<a href=\"\">ResourceQuota<\/a>): Created\n\n202 (<a href=\"\">ResourceQuota<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified ResourceQuota\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/resourcequotas\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceQuota\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ResourceQuota<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceQuota<\/a>): OK\n\n201 (<a href=\"\">ResourceQuota<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified ResourceQuota\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/resourcequotas\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceQuota\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">ResourceQuota<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceQuota<\/a>): OK\n\n201 (<a href=\"\">ResourceQuota<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially 
update the specified ResourceQuota\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/resourcequotas\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceQuota\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceQuota<\/a>): OK\n\n201 (<a href=\"\">ResourceQuota<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified ResourceQuota\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/resourcequotas\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the ResourceQuota\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceQuota<\/a>): OK\n\n201 (<a href=\"\">ResourceQuota<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete a ResourceQuota\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/resourcequotas\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name 
of the ResourceQuota\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">ResourceQuota<\/a>): OK\n\n202 (<a href=\"\">ResourceQuota<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of ResourceQuota\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/resourcequotas\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: 
Unauthorized\n","site":"kubernetes reference"}
{"questions":"kubernetes reference title ValidatingAdmissionPolicyBinding contenttype apireference apiVersion admissionregistration k8s io v1 weight 8 apimetadata autogenerated true kind ValidatingAdmissionPolicyBinding import k8s io api admissionregistration v1 ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with parameterized resources","answers":"---\napi_metadata:\n  apiVersion: \"admissionregistration.k8s.io\/v1\"\n  import: \"k8s.io\/api\/admissionregistration\/v1\"\n  kind: \"ValidatingAdmissionPolicyBinding\"\ncontent_type: \"api_reference\"\ndescription: \"ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with parameterized resources.\"\ntitle: \"ValidatingAdmissionPolicyBinding\"\nweight: 8\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: admissionregistration.k8s.io\/v1`\n\n`import \"k8s.io\/api\/admissionregistration\/v1\"`\n\n\n## ValidatingAdmissionPolicyBinding {#ValidatingAdmissionPolicyBinding}\n\nValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with parameterized resources. 
ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters.\n\nFor a given admission request, each binding will cause its policy to be evaluated N times, where N is 1 for policies\/bindings that don't use params, otherwise N is the number of parameters selected by the binding.\n\nThe CEL expressions of a policy must have a computed CEL cost below the maximum CEL budget. Each evaluation of the policy is given an independent CEL cost budget. Adding\/removing policies, bindings, or params cannot affect whether a given (policy, binding, param) combination is within its own CEL budget.\n\n<hr>\n\n- **apiVersion**: admissionregistration.k8s.io\/v1\n\n\n- **kind**: ValidatingAdmissionPolicyBinding\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata; More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata.\n\n- **spec** (ValidatingAdmissionPolicyBindingSpec)\n\n  Specification of the desired behavior of the ValidatingAdmissionPolicyBinding.\n\n  <a name=\"ValidatingAdmissionPolicyBindingSpec\"><\/a>\n  *ValidatingAdmissionPolicyBindingSpec is the specification of the ValidatingAdmissionPolicyBinding.*\n\n  - **spec.matchResources** (MatchResources)\n\n    MatchResources declares what resources match this binding and will be validated by it. Note that this is intersected with the policy's matchConstraints, so only requests that are matched by the policy can be selected by this. If this is unset, all resources matched by the policy are validated by this binding. When resourceRules is unset, it does not constrain resource matching. If a resource is matched by the other fields of this object, it will be validated. 
Note that this differs from ValidatingAdmissionPolicy matchConstraints, where resourceRules are required.\n\n    <a name=\"MatchResources\"><\/a>\n    *MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. The exclude rules take precedence over include rules (if a resource matches both, it is excluded)*\n\n    - **spec.matchResources.excludeResourceRules** ([]NamedRuleWithOperations)\n\n      *Atomic: will be replaced during a merge*\n      \n      ExcludeResourceRules describes what operations on what resources\/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded)\n\n      <a name=\"NamedRuleWithOperations\"><\/a>\n      *NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.*\n\n      - **spec.matchResources.excludeResourceRules.apiGroups** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchResources.excludeResourceRules.apiVersions** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchResources.excludeResourceRules.operations** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. 
Required.\n\n      - **spec.matchResources.excludeResourceRules.resourceNames** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        ResourceNames is an optional white list of names that the rule applies to.  An empty set means that everything is allowed.\n\n      - **spec.matchResources.excludeResourceRules.resources** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Resources is a list of resources this rule applies to.\n        \n        For example: 'pods' means pods. 'pods\/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods\/*' means all subresources of pods. '*\/scale' means all scale subresources. '*\/*' means all resources and their subresources.\n        \n        If wildcard is present, the validation rule will ensure resources do not overlap with each other.\n        \n        Depending on the enclosing object, subresources might not be allowed. Required.\n\n      - **spec.matchResources.excludeResourceRules.scope** (string)\n\n        scope specifies the scope of this rule. Valid values are \"Cluster\", \"Namespaced\", and \"*\". \"Cluster\" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. \"Namespaced\" means that only namespaced resources will match this rule. \"*\" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is \"*\".\n\n    - **spec.matchResources.matchPolicy** (string)\n\n      matchPolicy defines how the \"MatchResources\" list is used to match incoming requests. Allowed values are \"Exact\" or \"Equivalent\".\n      \n      - Exact: match a request only if it exactly matches a specified rule. 
For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, but \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would not be sent to the ValidatingAdmissionPolicy.\n      \n      - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, and \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would be converted to apps\/v1 and sent to the ValidatingAdmissionPolicy.\n      \n      Defaults to \"Equivalent\".\n\n    - **spec.matchResources.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      NamespaceSelector decides whether to run the admission control policy on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. 
If the object is another cluster scoped resource, it never skips the policy.\n      \n      For example, to run the webhook on any objects whose namespace is not associated with \"runlevel\" of \"0\" or \"1\";  you will set the selector as follows: \"namespaceSelector\": {\n        \"matchExpressions\": [\n          {\n            \"key\": \"runlevel\",\n            \"operator\": \"NotIn\",\n            \"values\": [\n              \"0\",\n              \"1\"\n            ]\n          }\n        ]\n      }\n      \n      If instead you want to only run the policy on any objects whose namespace is associated with the \"environment\" of \"prod\" or \"staging\"; you will set the selector as follows: \"namespaceSelector\": {\n        \"matchExpressions\": [\n          {\n            \"key\": \"environment\",\n            \"operator\": \"In\",\n            \"values\": [\n              \"prod\",\n              \"staging\"\n            ]\n          }\n        ]\n      }\n      \n      See https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/ for more examples of label selectors.\n      \n      Default to the empty LabelSelector, which matches everything.\n\n    - **spec.matchResources.objectSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      ObjectSelector decides whether to run the validation based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the cel validation, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. 
Default to the empty LabelSelector, which matches everything.\n\n    - **spec.matchResources.resourceRules** ([]NamedRuleWithOperations)\n\n      *Atomic: will be replaced during a merge*\n      \n      ResourceRules describes what operations on what resources\/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches _any_ Rule.\n\n      <a name=\"NamedRuleWithOperations\"><\/a>\n      *NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.*\n\n      - **spec.matchResources.resourceRules.apiGroups** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchResources.resourceRules.apiVersions** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchResources.resourceRules.operations** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchResources.resourceRules.resourceNames** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        ResourceNames is an optional white list of names that the rule applies to.  An empty set means that everything is allowed.\n\n      - **spec.matchResources.resourceRules.resources** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Resources is a list of resources this rule applies to.\n        \n        For example: 'pods' means pods. 
'pods\/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods\/*' means all subresources of pods. '*\/scale' means all scale subresources. '*\/*' means all resources and their subresources.\n        \n        If wildcard is present, the validation rule will ensure resources do not overlap with each other.\n        \n        Depending on the enclosing object, subresources might not be allowed. Required.\n\n      - **spec.matchResources.resourceRules.scope** (string)\n\n        scope specifies the scope of this rule. Valid values are \"Cluster\", \"Namespaced\", and \"*\". \"Cluster\" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. \"Namespaced\" means that only namespaced resources will match this rule. \"*\" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is \"*\".\n\n  - **spec.paramRef** (ParamRef)\n\n    paramRef specifies the parameter resource used to configure the admission control policy. It should point to a resource of the type specified in ParamKind of the bound ValidatingAdmissionPolicy. If the policy specifies a ParamKind and the resource referred to by ParamRef does not exist, this binding is considered mis-configured and the FailurePolicy of the ValidatingAdmissionPolicy is applied. If the policy does not specify a ParamKind then this field is ignored, and the rules are evaluated without a param.\n\n    <a name=\"ParamRef\"><\/a>\n    *ParamRef describes how to locate the params to be used as input to expressions of rules applied by a policy binding.*\n\n    - **spec.paramRef.name** (string)\n\n      name is the name of the resource being referenced.\n      \n      One of `name` or `selector` must be set, but `name` and `selector` are mutually exclusive properties. 
If one is set, the other must be unset.\n      \n      A single parameter used for all admission requests can be configured by setting the `name` field, leaving `selector` blank, and setting namespace if `paramKind` is namespace-scoped.\n\n    - **spec.paramRef.namespace** (string)\n\n      namespace is the namespace of the referenced resource. Allows limiting the search for params to a specific namespace. Applies to both `name` and `selector` fields.\n      \n      A per-namespace parameter may be used by specifying a namespace-scoped `paramKind` in the policy and leaving this field empty.\n      \n      - If `paramKind` is cluster-scoped, this field MUST be unset. Setting this field results in a configuration error.\n      \n      - If `paramKind` is namespace-scoped, the namespace of the object being evaluated for admission will be used when this field is left unset. Take care that if this is left empty the binding must not match any cluster-scoped resources, which will result in an error.\n\n    - **spec.paramRef.parameterNotFoundAction** (string)\n\n      `parameterNotFoundAction` controls the behavior of the binding when the resource exists, and name or selector is valid, but there are no parameters matched by the binding. If the value is set to `Allow`, then no matched parameters will be treated as successful validation by the binding. If set to `Deny`, then no matched parameters will be subject to the `failurePolicy` of the policy.\n      \n      Allowed values are `Allow` or `Deny`\n      \n      Required\n\n    - **spec.paramRef.selector** (<a href=\"\">LabelSelector<\/a>)\n\n      selector can be used to match multiple param objects based on their labels. 
Supply selector: {} to match all resources of the ParamKind.\n      \n      If multiple params are found, they are all evaluated with the policy expressions and the results are ANDed together.\n      \n      One of `name` or `selector` must be set, but `name` and `selector` are mutually exclusive properties. If one is set, the other must be unset.\n\n  - **spec.policyName** (string)\n\n    PolicyName references a ValidatingAdmissionPolicy name which the ValidatingAdmissionPolicyBinding binds to. If the referenced resource does not exist, this binding is considered invalid and will be ignored. Required.\n\n  - **spec.validationActions** ([]string)\n\n    *Set: unique values will be kept during a merge*\n    \n    validationActions declares how Validations of the referenced ValidatingAdmissionPolicy are enforced. If a validation evaluates to false it is always enforced according to these actions.\n    \n    Failures defined by the ValidatingAdmissionPolicy's FailurePolicy are enforced according to these actions only if the FailurePolicy is set to Fail, otherwise the failures are ignored. This includes compilation errors, runtime errors and misconfigurations of the policy.\n    \n    validationActions is declared as a set of action values. Order does not matter. validationActions may not contain duplicates of the same action.\n    \n    The supported actions values are:\n    \n    \"Deny\" specifies that a validation failure results in a denied request.\n    \n    \"Warn\" specifies that a validation failure is reported to the request client in HTTP Warning headers, with a warning code of 299. Warnings can be sent both for allowed or denied admission responses.\n    \n    \"Audit\" specifies that a validation failure is included in the published audit event for the request. 
The audit event will contain a `validation.policy.admission.k8s.io\/validation_failure` audit annotation with a value containing the details of the validation failures, formatted as a JSON list of objects, each with the following fields:\n    \n    - message: The validation failure message string\n    - policy: The resource name of the ValidatingAdmissionPolicy\n    - binding: The resource name of the ValidatingAdmissionPolicyBinding\n    - expressionIndex: The index of the failed validation in the ValidatingAdmissionPolicy\n    - validationActions: The enforcement actions enacted for the validation failure\n    \n    Example audit annotation: `\"validation.policy.admission.k8s.io\/validation_failure\": \"[{\"message\": \"Invalid value\", \"policy\": \"policy.example.com\", \"binding\": \"policybinding.example.com\", \"expressionIndex\": \"1\", \"validationActions\": [\"Audit\"]}]\"`\n    \n    Clients should expect to handle additional values by ignoring any values not recognized.\n    \n    \"Deny\" and \"Warn\" may not be used together since this combination needlessly duplicates the validation failure both in the API response body and the HTTP warning headers.\n    \n    Required.\n\n\n\n\n\n## ValidatingAdmissionPolicy {#ValidatingAdmissionPolicy}\n\nValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it.\n\n<hr>\n\n- **apiVersion** (string)\n\n  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources\n\n- **kind** (string)\n\n  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object metadata; More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata.\n\n- **spec** (ValidatingAdmissionPolicySpec)\n\n  Specification of the desired behavior of the ValidatingAdmissionPolicy.\n\n  <a name=\"ValidatingAdmissionPolicySpec\"><\/a>\n  *ValidatingAdmissionPolicySpec is the specification of the desired behavior of the AdmissionPolicy.*\n\n  - **spec.auditAnnotations** ([]AuditAnnotation)\n\n    *Atomic: will be replaced during a merge*\n    \n    auditAnnotations contains CEL expressions which are used to produce audit annotations for the audit event of the API request. validations and auditAnnotations may not both be empty; at least one of validations or auditAnnotations is required.\n\n    <a name=\"AuditAnnotation\"><\/a>\n    *AuditAnnotation describes how to produce an audit annotation for an API request.*\n\n    - **spec.auditAnnotations.key** (string), required\n\n      key specifies the audit annotation key. The audit annotation keys of a ValidatingAdmissionPolicy must be unique. The key must be a qualified name ([A-Za-z0-9][-A-Za-z0-9_.]*) no more than 63 bytes in length.\n      \n      The key is combined with the resource name of the ValidatingAdmissionPolicy to construct an audit annotation key: \"{ValidatingAdmissionPolicy name}\/{key}\".\n      \n      If an admission webhook uses the same resource name as this ValidatingAdmissionPolicy and the same audit annotation key, the annotation key will be identical. 
In this case, the first annotation written with the key will be included in the audit event and all subsequent annotations with the same key will be discarded.\n      \n      Required.\n\n    - **spec.auditAnnotations.valueExpression** (string), required\n\n      valueExpression represents the expression which is evaluated by CEL to produce an audit annotation value. The expression must evaluate to either a string or null value. If the expression evaluates to a string, the audit annotation is included with the string value. If the expression evaluates to null or empty string the audit annotation will be omitted. The valueExpression may be no longer than 5kb in length. If the result of the valueExpression is more than 10kb in length, it will be truncated to 10kb.\n      \n      If multiple ValidatingAdmissionPolicyBinding resources match an API request, then the valueExpression will be evaluated for each binding. All unique values produced by the valueExpressions will be joined together in a comma-separated list.\n      \n      Required.\n\n  - **spec.failurePolicy** (string)\n\n    failurePolicy defines how to handle failures for the admission policy. Failures can occur from CEL expression parse errors, type check errors, runtime errors and invalid or mis-configured policy definitions or bindings.\n    \n    A policy is invalid if spec.paramKind refers to a non-existent Kind. A binding is invalid if spec.paramRef.name refers to a non-existent resource.\n    \n    failurePolicy does not define how validations that evaluate to false are handled.\n    \n    When failurePolicy is set to Fail, ValidatingAdmissionPolicyBinding validationActions define how failures are enforced.\n    \n    Allowed values are Ignore or Fail. 
Defaults to Fail.\n\n  - **spec.matchConditions** ([]MatchCondition)\n\n    *Patch strategy: merge on key `name`*\n    \n    *Map: unique values on key name will be kept during a merge*\n    \n    MatchConditions is a list of conditions that must be met for a request to be validated. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. A maximum of 64 match conditions are allowed.\n    \n    If a parameter object is provided, it can be accessed via the `params` handle in the same manner as validation expressions.\n    \n    The exact matching logic is (in order):\n      1. If ANY matchCondition evaluates to FALSE, the policy is skipped.\n      2. If ALL matchConditions evaluate to TRUE, the policy is evaluated.\n      3. If any matchCondition evaluates to an error (but none are FALSE):\n         - If failurePolicy=Fail, reject the request\n         - If failurePolicy=Ignore, the policy is skipped\n\n    <a name=\"MatchCondition\"><\/a>\n    *MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook.*\n\n    - **spec.matchConditions.expression** (string), required\n\n      Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables:\n      \n      'object' - The object from the incoming request. The value is null for DELETE requests. 'oldObject' - The existing object. The value is null for CREATE requests. 'request' - Attributes of the admission request(\/pkg\/apis\/admission\/types.go#AdmissionRequest). 'authorizer' - A CEL Authorizer. 
May be used to perform authorization checks for the principal (user or service account) of the request.\n        See https:\/\/pkg.go.dev\/k8s.io\/apiserver\/pkg\/cel\/library#Authz\n      'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the\n        request resource.\n      Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/\n      \n      Required.\n\n    - **spec.matchConditions.name** (string), required\n\n      Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', 'my.name', or '123-abc'; the regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '\/' (e.g. 'example.com\/MyName').\n      \n      Required.\n\n  - **spec.matchConstraints** (MatchResources)\n\n    MatchConstraints specifies what resources this policy is designed to validate. The AdmissionPolicy cares about a request if it matches _all_ Constraints. However, in order to prevent clusters from being put into an unstable state that cannot be recovered from via the API, ValidatingAdmissionPolicy cannot match ValidatingAdmissionPolicy or ValidatingAdmissionPolicyBinding. Required.\n\n    <a name=\"MatchResources\"><\/a>\n    *MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. 
The exclude rules take precedence over include rules (if a resource matches both, it is excluded)*\n\n    - **spec.matchConstraints.excludeResourceRules** ([]NamedRuleWithOperations)\n\n      *Atomic: will be replaced during a merge*\n      \n      ExcludeResourceRules describes what operations on what resources\/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded)\n\n      <a name=\"NamedRuleWithOperations\"><\/a>\n      *NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.*\n\n      - **spec.matchConstraints.excludeResourceRules.apiGroups** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchConstraints.excludeResourceRules.apiVersions** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchConstraints.excludeResourceRules.operations** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchConstraints.excludeResourceRules.resourceNames** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        ResourceNames is an optional white list of names that the rule applies to.  
An empty set means that everything is allowed.\n\n      - **spec.matchConstraints.excludeResourceRules.resources** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Resources is a list of resources this rule applies to.\n        \n        For example: 'pods' means pods. 'pods\/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods\/*' means all subresources of pods. '*\/scale' means all scale subresources. '*\/*' means all resources and their subresources.\n        \n        If a wildcard is present, the validation rule will ensure resources do not overlap with each other.\n        \n        Depending on the enclosing object, subresources might not be allowed. Required.\n\n      - **spec.matchConstraints.excludeResourceRules.scope** (string)\n\n        scope specifies the scope of this rule. Valid values are \"Cluster\", \"Namespaced\", and \"*\". \"Cluster\" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. \"Namespaced\" means that only namespaced resources will match this rule. \"*\" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is \"*\".\n\n    - **spec.matchConstraints.matchPolicy** (string)\n\n      matchPolicy defines how the \"MatchResources\" list is used to match incoming requests. Allowed values are \"Exact\" or \"Equivalent\".\n      \n      - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, but \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would not be sent to the ValidatingAdmissionPolicy.\n      \n      - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. 
For example, if deployments can be modified via apps\/v1, apps\/v1beta1, and extensions\/v1beta1, and \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps\/v1beta1 or extensions\/v1beta1 would be converted to apps\/v1 and sent to the ValidatingAdmissionPolicy.\n      \n      Defaults to \"Equivalent\"\n\n    - **spec.matchConstraints.namespaceSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      NamespaceSelector decides whether to run the admission control policy on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the policy.\n      \n      For example, to run the webhook on any objects whose namespace is not associated with \"runlevel\" of \"0\" or \"1\";  you will set the selector as follows: \"namespaceSelector\": {\n        \"matchExpressions\": [\n          {\n            \"key\": \"runlevel\",\n            \"operator\": \"NotIn\",\n            \"values\": [\n              \"0\",\n              \"1\"\n            ]\n          }\n        ]\n      }\n      \n      If instead you want to only run the policy on any objects whose namespace is associated with the \"environment\" of \"prod\" or \"staging\"; you will set the selector as follows: \"namespaceSelector\": {\n        \"matchExpressions\": [\n          {\n            \"key\": \"environment\",\n            \"operator\": \"In\",\n            \"values\": [\n              \"prod\",\n              \"staging\"\n            ]\n          }\n        ]\n      }\n      \n      See https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/ for more examples of label selectors.\n      \n      Default to the empty LabelSelector, which matches everything.\n\n    - **spec.matchConstraints.objectSelector** (<a href=\"\">LabelSelector<\/a>)\n\n      
ObjectSelector decides whether to run the validation based on whether the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the CEL validation, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the policy is opt-in, because end users may skip the admission policy by setting the labels. Defaults to the empty LabelSelector, which matches everything.\n\n    - **spec.matchConstraints.resourceRules** ([]NamedRuleWithOperations)\n\n      *Atomic: will be replaced during a merge*\n      \n      ResourceRules describes what operations on what resources\/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches _any_ Rule.\n\n      <a name=\"NamedRuleWithOperations\"><\/a>\n      *NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.*\n\n      - **spec.matchConstraints.resourceRules.apiGroups** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchConstraints.resourceRules.apiVersions** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. 
Required.\n\n      - **spec.matchConstraints.resourceRules.operations** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.\n\n      - **spec.matchConstraints.resourceRules.resourceNames** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        ResourceNames is an optional white list of names that the rule applies to.  An empty set means that everything is allowed.\n\n      - **spec.matchConstraints.resourceRules.resources** ([]string)\n\n        *Atomic: will be replaced during a merge*\n        \n        Resources is a list of resources this rule applies to.\n        \n        For example: 'pods' means pods. 'pods\/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods\/*' means all subresources of pods. '*\/scale' means all scale subresources. '*\/*' means all resources and their subresources.\n        \n        If wildcard is present, the validation rule will ensure resources do not overlap with each other.\n        \n        Depending on the enclosing object, subresources might not be allowed. Required.\n\n      - **spec.matchConstraints.resourceRules.scope** (string)\n\n        scope specifies the scope of this rule. Valid values are \"Cluster\", \"Namespaced\", and \"*\" \"Cluster\" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. \"Namespaced\" means that only namespaced resources will match this rule. \"*\" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is \"*\".\n\n  - **spec.paramKind** (ParamKind)\n\n    ParamKind specifies the kind of resources used to parameterize this policy. 
If absent, there are no parameters for this policy and the `params` CEL variable will not be provided to validation expressions. If ParamKind refers to a non-existent kind, this policy definition is mis-configured and the FailurePolicy is applied. If paramKind is specified but paramRef is unset in ValidatingAdmissionPolicyBinding, the `params` variable will be null.\n\n    <a name=\"ParamKind\"><\/a>\n    *ParamKind is a tuple of Group, Kind, and Version.*\n\n    - **spec.paramKind.apiVersion** (string)\n\n      APIVersion is the API group version the resources belong to. In the format \"group\/version\". Required.\n\n    - **spec.paramKind.kind** (string)\n\n      Kind is the API kind the resources belong to. Required.\n\n  - **spec.validations** ([]Validation)\n\n    *Atomic: will be replaced during a merge*\n    \n    Validations contain CEL expressions which are used to apply the validation. Validations and AuditAnnotations may not both be empty; at least one of Validations or AuditAnnotations is required.\n\n    <a name=\"Validation\"><\/a>\n    *Validation specifies the CEL expression which is used to apply the validation.*\n\n    - **spec.validations.expression** (string), required\n\n      Expression represents the expression which will be evaluated by CEL. ref: https:\/\/github.com\/google\/cel-spec CEL expressions have access to the contents of the API request\/response, organized into CEL variables as well as some other useful variables:\n      \n      - 'object' - The object from the incoming request. The value is null for DELETE requests.\n      - 'oldObject' - The existing object. The value is null for CREATE requests.\n      - 'request' - Attributes of the API request([ref](\/pkg\/apis\/admission\/types.go#AdmissionRequest)).\n      - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind.\n      - 'namespaceObject' - The namespace object that the incoming object belongs to. 
The value is null for cluster-scoped resources.\n      - 'variables' - Map of composited variables, from its name to its lazily evaluated value.\n        For example, a variable named 'foo' can be accessed as 'variables.foo'.\n      - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request.\n        See https:\/\/pkg.go.dev\/k8s.io\/apiserver\/pkg\/cel\/library#Authz\n      - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the\n        request resource.\n      \n      The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the object. No other metadata properties are accessible.\n      \n      Only property names of the form `[a-zA-Z_.-\/][a-zA-Z0-9_.-\/]*` are accessible. Accessible property names are escaped according to the following rules when accessed in the expression:\n      - '__' escapes to '__underscores__'\n      - '.' escapes to '__dot__'\n      - '-' escapes to '__dash__'\n      - '\/' escapes to '__slash__'\n      - Property names that exactly match a CEL RESERVED keyword escape to '__{keyword}__'. The keywords are:\n      \t  \"true\", \"false\", \"null\", \"in\", \"as\", \"break\", \"const\", \"continue\", \"else\", \"for\", \"function\", \"if\",\n      \t  \"import\", \"let\", \"loop\", \"package\", \"namespace\", \"return\".\n      Examples:\n        - Expression accessing a property named \"namespace\": {\"Expression\": \"object.__namespace__ > 0\"}\n        - Expression accessing a property named \"x-prop\": {\"Expression\": \"object.x__dash__prop > 0\"}\n        - Expression accessing a property named \"redact__d\": {\"Expression\": \"object.redact__underscores__d > 0\"}\n      \n      Equality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. 
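As an aside, the property-name escaping rules quoted above can be sketched in Python. This is an illustrative approximation of the stated rules only, not the actual Kubernetes API server implementation:

```python
# Sketch of the CEL property-name escaping rules described above.
# Illustrative only; not the actual Kubernetes apiserver code.

CEL_RESERVED = {
    "true", "false", "null", "in", "as", "break", "const", "continue",
    "else", "for", "function", "if", "import", "let", "loop", "package",
    "namespace", "return",
}

def escape_cel_property(name: str) -> str:
    # A property that exactly matches a reserved keyword is wrapped whole.
    if name in CEL_RESERVED:
        return f"__{name}__"
    # Replace '__' first so the markers introduced by the later rules
    # are not themselves re-escaped.
    out = name.replace("__", "__underscores__")
    out = out.replace(".", "__dot__")
    out = out.replace("-", "__dash__")
    out = out.replace("/", "__slash__")
    return out

# The three examples from the documentation:
# escape_cel_property("namespace")  -> "__namespace__"
# escape_cel_property("x-prop")     -> "x__dash__prop"
# escape_cel_property("redact__d")  -> "redact__underscores__d"
```

Note the ordering: the double-underscore rule must run before the others, since `__dot__`, `__dash__`, and `__slash__` themselves contain double underscores.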
Concatenation on arrays with x-kubernetes-list-type uses the semantics of the list type:\n        - 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and\n          non-intersecting elements in `Y` are appended, retaining their partial order.\n        - 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values\n          are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. Elements in `Y` with\n          non-intersecting keys are appended, retaining their partial order.\n      Required.\n\n    - **spec.validations.message** (string)\n\n      Message represents the message displayed when validation fails. The message is required if the Expression contains line breaks, and the message must not itself contain line breaks. If unset, the message is \"failed Expression: {Expression}\". e.g. \"must be a URL with the host matching spec.host\".\n\n    - **spec.validations.messageExpression** (string)\n\n      messageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. Since messageExpression is used as a failure message, it must evaluate to a string. If both message and messageExpression are present on a validation, then messageExpression will be used if validation fails. If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced as if the messageExpression field were unset. 
If messageExpression evaluates to an empty string, a string with only spaces, or a string that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and the fact that messageExpression produced an empty string\/string with only spaces\/string with line breaks will be logged. messageExpression has access to all the same variables as the `expression` except for 'authorizer' and 'authorizer.requestResource'. Example: \"object.x must be less than max (\"+string(params.max)+\")\"\n\n    - **spec.validations.reason** (string)\n\n      Reason represents a machine-readable description of why this validation failed. If this is the first validation in the list to fail, this reason, as well as the corresponding HTTP response code, are used in the HTTP response to the client. The currently supported reasons are: \"Unauthorized\", \"Forbidden\", \"Invalid\", \"RequestEntityTooLarge\". If not set, StatusReasonInvalid is used in the response to the client.\n\n  - **spec.variables** ([]Variable)\n\n    *Patch strategy: merge on key `name`*\n    \n    *Map: unique values on key name will be kept during a merge*\n    \n    Variables contain definitions of variables that can be used in composition of other expressions. Each variable is defined as a named CEL expression. The variables defined here will be available under `variables` in other expressions of the policy except MatchConditions because MatchConditions are evaluated before the rest of the policy.\n    \n    The expression of a variable can refer to other variables defined earlier in the list but not those after. Thus, Variables must be sorted by the order of first appearance and acyclic.\n\n    <a name=\"Variable\"><\/a>\n    *Variable is the definition of a variable that is used for composition. 
A variable is defined as a named expression.*\n\n    - **spec.variables.expression** (string), required\n\n      Expression is the expression that will be evaluated as the value of the variable. The CEL expression has access to the same identifiers as the CEL expressions in Validation.\n\n    - **spec.variables.name** (string), required\n\n      Name is the name of the variable. The name must be a valid CEL identifier and unique among all variables. The variable can be accessed in other expressions through `variables`. For example, if name is \"foo\", the variable will be available as `variables.foo`.\n\n- **status** (ValidatingAdmissionPolicyStatus)\n\n  The status of the ValidatingAdmissionPolicy, including warnings that are useful to determine if the policy behaves in the expected way. Populated by the system. Read-only.\n\n  <a name=\"ValidatingAdmissionPolicyStatus\"><\/a>\n  *ValidatingAdmissionPolicyStatus represents the status of an admission validation policy.*\n\n  - **status.conditions** ([]Condition)\n\n    *Map: unique values on key type will be kept during a merge*\n    \n    The conditions represent the latest available observations of a policy's current state.\n\n    <a name=\"Condition\"><\/a>\n    *Condition contains details for one aspect of the current state of this API Resource.*\n\n    - **status.conditions.lastTransitionTime** (Time), required\n\n      lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.\n\n      <a name=\"Time\"><\/a>\n      *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*\n\n    - **status.conditions.message** (string), required\n\n      message is a human readable message indicating details about the transition. 
This may be an empty string.\n\n    - **status.conditions.reason** (string), required\n\n      reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty.\n\n    - **status.conditions.status** (string), required\n\n      status of the condition, one of True, False, Unknown.\n\n    - **status.conditions.type** (string), required\n\n      type of condition in CamelCase or in foo.example.com\/CamelCase.\n\n    - **status.conditions.observedGeneration** (int64)\n\n      observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.\n\n  - **status.observedGeneration** (int64)\n\n    The generation observed by the controller.\n\n  - **status.typeChecking** (TypeChecking)\n\n    The results of type checking for each expression. Presence of this field indicates the completion of the type checking.\n\n    <a name=\"TypeChecking\"><\/a>\n    *TypeChecking contains results of type checking the expressions in the ValidatingAdmissionPolicy*\n\n    - **status.typeChecking.expressionWarnings** ([]ExpressionWarning)\n\n      *Atomic: will be replaced during a merge*\n      \n      The type checking warnings for each expression.\n\n      <a name=\"ExpressionWarning\"><\/a>\n      *ExpressionWarning is a warning information that targets a specific expression.*\n\n      - **status.typeChecking.expressionWarnings.fieldRef** (string), required\n\n        The path to the field that refers the expression. 
For example, the reference to the expression of the first item of validations is \"spec.validations[0].expression\"\n\n      - **status.typeChecking.expressionWarnings.warning** (string), required\n\n        The content of type checking information in a human-readable form. Each line of the warning contains the type that the expression is checked against, followed by the type check error from the compiler.\n\n\n\n\n\n## ValidatingAdmissionPolicyBindingList {#ValidatingAdmissionPolicyBindingList}\n\nValidatingAdmissionPolicyBindingList is a list of ValidatingAdmissionPolicyBinding.\n\n<hr>\n\n- **items** ([]<a href=\"\">ValidatingAdmissionPolicyBinding<\/a>), required\n\n  List of PolicyBinding.\n\n- **apiVersion** (string)\n\n  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources\n\n- **kind** (string)\n\n  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

## Operations {#Operations}

<hr>

### `get` read the specified ValidatingAdmissionPolicyBinding

#### HTTP Request

GET /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicybindings/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the ValidatingAdmissionPolicyBinding

- **pretty** (*in query*): string

  <a href="">pretty</a>

#### Response

200 (<a href="">ValidatingAdmissionPolicyBinding</a>): OK

401: Unauthorized

### `list` list or watch objects of kind ValidatingAdmissionPolicyBinding

#### HTTP Request

GET /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicybindings

#### Parameters

- **allowWatchBookmarks** (*in query*): boolean

  <a href="">allowWatchBookmarks</a>

- **continue** (*in query*): string

  <a href="">continue</a>

- **fieldSelector** (*in query*): string

  <a href="">fieldSelector</a>

- **labelSelector** (*in query*): string

  <a href="">labelSelector</a>

- **limit** (*in query*): integer

  <a href="">limit</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

- **resourceVersion** (*in query*): string

  <a href="">resourceVersion</a>

- **resourceVersionMatch** (*in query*): string

  <a href="">resourceVersionMatch</a>

- **sendInitialEvents** (*in query*): boolean

  <a href="">sendInitialEvents</a>

- **timeoutSeconds** (*in query*): integer

  <a href="">timeoutSeconds</a>

- **watch** (*in query*): boolean

  <a href="">watch</a>

#### Response

200 (<a href="">ValidatingAdmissionPolicyBindingList</a>): OK

401: Unauthorized

### `create` create a ValidatingAdmissionPolicyBinding

#### HTTP Request

POST /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicybindings

#### Parameters

- **body**: <a href="">ValidatingAdmissionPolicyBinding</a>, required

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **fieldManager** (*in query*): string

  <a href="">fieldManager</a>

- **fieldValidation** (*in query*): string

  <a href="">fieldValidation</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

#### Response

200 (<a href="">ValidatingAdmissionPolicyBinding</a>): OK

201 (<a href="">ValidatingAdmissionPolicyBinding</a>): Created

202 (<a href="">ValidatingAdmissionPolicyBinding</a>): Accepted

401: Unauthorized

### `update` replace the specified ValidatingAdmissionPolicyBinding

#### HTTP Request

PUT /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicybindings/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the ValidatingAdmissionPolicyBinding

- **body**: <a href="">ValidatingAdmissionPolicyBinding</a>, required

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **fieldManager** (*in query*): string

  <a href="">fieldManager</a>

- **fieldValidation** (*in query*): string

  <a href="">fieldValidation</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

#### Response

200 (<a href="">ValidatingAdmissionPolicyBinding</a>): OK

201 (<a href="">ValidatingAdmissionPolicyBinding</a>): Created

401: Unauthorized

### `patch` partially update the specified ValidatingAdmissionPolicyBinding

#### HTTP Request

PATCH /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicybindings/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the ValidatingAdmissionPolicyBinding

- **body**: <a href="">Patch</a>, required

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **fieldManager** (*in query*): string

  <a href="">fieldManager</a>

- **fieldValidation** (*in query*): string

  <a href="">fieldValidation</a>

- **force** (*in query*): boolean

  <a href="">force</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

#### Response

200 (<a href="">ValidatingAdmissionPolicyBinding</a>): OK

201 (<a href="">ValidatingAdmissionPolicyBinding</a>): Created

401: Unauthorized

### `delete` delete a ValidatingAdmissionPolicyBinding

#### HTTP Request

DELETE /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicybindings/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the ValidatingAdmissionPolicyBinding

- **body**: <a href="">DeleteOptions</a>

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **gracePeriodSeconds** (*in query*): integer

  <a href="">gracePeriodSeconds</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

- **propagationPolicy** (*in query*): string

  <a href="">propagationPolicy</a>

#### Response

200 (<a href="">Status</a>): OK

202 (<a href="">Status</a>): Accepted

401: Unauthorized

### `deletecollection` delete collection of ValidatingAdmissionPolicyBinding

#### HTTP Request

DELETE /apis/admissionregistration.k8s.io/v1/validatingadmissionpolicybindings

#### Parameters

- **body**: <a href="">DeleteOptions</a>

- **continue** (*in query*): string

  <a href="">continue</a>

- **dryRun** (*in query*): string

  <a href="">dryRun</a>

- **fieldSelector** (*in query*): string

  <a href="">fieldSelector</a>

- **gracePeriodSeconds** (*in query*): integer

  <a href="">gracePeriodSeconds</a>

- **labelSelector** (*in query*): string

  <a href="">labelSelector</a>

- **limit** (*in
query*): integer

  <a href="">limit</a>

- **pretty** (*in query*): string

  <a href="">pretty</a>

- **propagationPolicy** (*in query*): string

  <a href="">propagationPolicy</a>

- **resourceVersion** (*in query*): string

  <a href="">resourceVersion</a>

- **resourceVersionMatch** (*in query*): string

  <a href="">resourceVersionMatch</a>

- **sendInitialEvents** (*in query*): boolean

  <a href="">sendInitialEvents</a>

- **timeoutSeconds** (*in query*): integer

  <a href="">timeoutSeconds</a>

#### Response

200 (<a href="">Status</a>): OK

401: Unauthorized

# ValidatingAdmissionPolicyBinding

ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with parameterized resources.

`apiVersion: admissionregistration.k8s.io/v1`

`import "k8s.io/api/admissionregistration/v1"`

This page is auto-generated from the Go source code of the component using a generic [generator](https://github.com/kubernetes-sigs/reference-docs).

## ValidatingAdmissionPolicyBinding {#ValidatingAdmissionPolicyBinding}

ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with parameterized resources. ValidatingAdmissionPolicyBinding and parameter CRDs together define how
cluster administrators configure policies for clusters.

For a given admission request, each binding will cause its policy to be evaluated N times, where N is 1 for policies/bindings that don't use params, otherwise N is the number of parameters selected by the binding.

The CEL expressions of a policy must have a computed CEL cost below the maximum CEL budget. Each evaluation of the policy is given an independent CEL cost budget. Adding/removing policies, bindings, or params can not affect whether a given (policy, binding, param) combination is within its own CEL budget.

<hr>

- **apiVersion**: admissionregistration.k8s.io/v1

- **kind**: ValidatingAdmissionPolicyBinding

- **metadata** (<a href="">ObjectMeta</a>)

  Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (ValidatingAdmissionPolicyBindingSpec)

  Specification of the desired behavior of the ValidatingAdmissionPolicyBinding.

  <a name="ValidatingAdmissionPolicyBindingSpec"></a>

  ValidatingAdmissionPolicyBindingSpec is the specification of the ValidatingAdmissionPolicyBinding.

  - **spec.matchResources** (MatchResources)

    MatchResources declares what resources match this binding and will be validated by it. Note that this is intersected with the policy's matchConstraints, so only requests that are matched by the policy can be selected by this. If this is unset, all resources matched by the policy are validated by this binding. When resourceRules is unset, it does not constrain resource matching. If a resource is matched by the other fields of this object, it will be validated. Note that this differs from ValidatingAdmissionPolicy matchConstraints, where resourceRules are required.

    <a name="MatchResources"></a>

    MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. The exclude rules take precedence over include rules (if
a resource matches both, it is excluded).

    - **spec.matchResources.excludeResourceRules** ([]NamedRuleWithOperations)

      *Atomic: will be replaced during a merge*

      ExcludeResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded).

      <a name="NamedRuleWithOperations"></a>

      NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.

      - **spec.matchResources.excludeResourceRules.apiGroups** ([]string)

        *Atomic: will be replaced during a merge*

        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchResources.excludeResourceRules.apiVersions** ([]string)

        *Atomic: will be replaced during a merge*

        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchResources.excludeResourceRules.operations** ([]string)

        *Atomic: will be replaced during a merge*

        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or '*' for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchResources.excludeResourceRules.resourceNames** ([]string)

        *Atomic: will be replaced during a merge*

        ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.

      - **spec.matchResources.excludeResourceRules.resources** ([]string)

        *Atomic: will be replaced during a merge*

        Resources is a list of resources this rule applies to.

        For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources.

        If wildcard is present, the validation rule will ensure resources do not overlap with each other.

        Depending on the enclosing object, subresources might not be allowed. Required.

      - **spec.matchResources.excludeResourceRules.scope** (string)

        scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*".

    - **spec.matchResources.matchPolicy** (string)

      matchPolicy defines how the "MatchResources" list is used to match incoming requests. Allowed values are "Exact" or "Equivalent".

      - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included `apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"]`, a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the ValidatingAdmissionPolicy.

      - Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included `apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"]`, a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the ValidatingAdmissionPolicy.

      Defaults to "Equivalent"

    - **spec.
matchResources.namespaceSelector** (<a href="">LabelSelector</a>)

      NamespaceSelector decides whether to run the admission control policy on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the policy.

      For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows:

      ```json
      "namespaceSelector": {
        "matchExpressions": [
          {
            "key": "runlevel",
            "operator": "NotIn",
            "values": [
              "0",
              "1"
            ]
          }
        ]
      }
      ```

      If instead you want to only run the policy on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows:

      ```json
      "namespaceSelector": {
        "matchExpressions": [
          {
            "key": "environment",
            "operator": "In",
            "values": [
              "prod",
              "staging"
            ]
          }
        ]
      }
      ```

      See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors.

      Default to the empty LabelSelector, which matches everything.

    - **spec.matchResources.objectSelector** (<a href="">LabelSelector</a>)

      ObjectSelector decides whether to run the validation based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the cel validation, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the
object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything.

    - **spec.matchResources.resourceRules** ([]NamedRuleWithOperations)

      *Atomic: will be replaced during a merge*

      ResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches *any* Rule.

      <a name="NamedRuleWithOperations"></a>

      NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.

      - **spec.matchResources.resourceRules.apiGroups** ([]string)

        *Atomic: will be replaced during a merge*

        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchResources.resourceRules.apiVersions** ([]string)

        *Atomic: will be replaced during a merge*

        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchResources.resourceRules.operations** ([]string)

        *Atomic: will be replaced during a merge*

        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or '*' for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchResources.resourceRules.resourceNames** ([]string)

        *Atomic: will be replaced during a merge*

        ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.

      - **spec.matchResources.resourceRules.resources** ([]string)

        *Atomic: will be replaced during a merge*

        Resources is a list of resources this rule applies to.

        For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources.

        If wildcard is present, the validation rule will ensure resources do not overlap with each other.

        Depending on the enclosing object, subresources might not be allowed. Required.

      - **spec.matchResources.resourceRules.scope** (string)

        scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*".

  - **spec.paramRef** (ParamRef)

    paramRef specifies the parameter resource used to configure the admission control policy. It should point to a resource of the type specified in ParamKind of the bound ValidatingAdmissionPolicy. If the policy specifies a ParamKind and the resource referred to by ParamRef does not exist, this binding is considered mis-configured and the FailurePolicy of the ValidatingAdmissionPolicy applied. If the policy does not specify a ParamKind then this field is ignored, and the rules are evaluated without a param.

    <a name="ParamRef"></a>

    ParamRef describes how to locate the params to be used as input to expressions of rules applied by a policy binding.

    - **spec.paramRef.name** (string)

      name is the name of the resource being referenced.

      One of `name` or `selector` must be set, but `name` and `selector` are mutually exclusive properties. If one is set, the other must be unset.

      A single parameter used for all admission requests can be configured by setting the `name` field, leaving `selector` blank, and setting namespace if `paramKind` is namespace-scoped.

    - **spec.paramRef.namespace** (string)

      namespace is the namespace of the referenced resource. Allows limiting the search for params to a specific namespace. Applies to both `name` and `selector` fields.

      A per-namespace parameter may be used by specifying a namespace-scoped `paramKind` in the policy and leaving this field empty.

      - If `paramKind` is cluster-scoped, this field MUST be unset. Setting this field results in a configuration error.

      - If `paramKind` is namespace-scoped, the namespace of the object being evaluated for admission will be used when this field is left unset. Take care that if this is left empty the binding must not match any cluster-scoped resources, which will result in an error.

    - **spec.paramRef.parameterNotFoundAction** (string)

      `parameterNotFoundAction` controls the behavior of the binding when the resource exists, and name or selector is valid, but there are no parameters matched by the binding. If the value is set to `Allow`, then no matched parameters will be treated as successful validation by the binding. If set to `Deny`, then no matched parameters will be subject to the `failurePolicy` of the policy.

      Allowed values are `Allow` or `Deny`

      Required

    - **spec.paramRef.selector** (<a href="">LabelSelector</a>)

      selector can be used to match multiple param objects based on their labels. Supply selector: {} to match all resources of the ParamKind.

      If multiple params are found, they are all evaluated with the policy expressions and the results are ANDed together.

      One of `name` or `selector` must be set, but `name` and `selector` are mutually exclusive properties. If one is set, the other must be unset.

  - **spec.policyName** (string)

    PolicyName references a ValidatingAdmissionPolicy name
which the ValidatingAdmissionPolicyBinding binds to. If the referenced resource does not exist, this binding is considered invalid and will be ignored. Required.

  - **spec.validationActions** ([]string)

    *Set: unique values will be kept during a merge*

    validationActions declares how Validations of the referenced ValidatingAdmissionPolicy are enforced. If a validation evaluates to false it is always enforced according to these actions.

    Failures defined by the ValidatingAdmissionPolicy's FailurePolicy are enforced according to these actions only if the FailurePolicy is set to Fail, otherwise the failures are ignored. This includes compilation errors, runtime errors and misconfigurations of the policy.

    validationActions is declared as a set of action values. Order does not matter. validationActions may not contain duplicates of the same action.

    The supported actions values are:

    "Deny" specifies that a validation failure results in a denied request.

    "Warn" specifies that a validation failure is reported to the request client in HTTP Warning headers, with a warning code of 299. Warnings can be sent both for allowed or denied admission responses.

    "Audit" specifies that a validation failure is included in the published audit event for the request. The audit event will contain a `validation.policy.admission.k8s.io/validation_failure` audit annotation with a value containing the details of the validation failures, formatted as a JSON list of objects, each with the following fields:

    - message: The validation failure message string
    - policy: The resource name of the ValidatingAdmissionPolicy
    - binding: The resource name of the ValidatingAdmissionPolicyBinding
    - expressionIndex: The index of the failed validations in the ValidatingAdmissionPolicy
    - validationActions: The enforcement actions enacted for the validation failure

    Example audit annotation:

    ```json
    "validation.policy.admission.k8s.io/validation_failure": "[{\"message\": \"Invalid value\", \"policy\": \"policy.example.com\", \"binding\": \"policybinding.example.com\", \"expressionIndex\": \"1\", \"validationActions\": [\"Audit\"]}]"
    ```

    Clients should expect to handle additional values by ignoring any values not recognized.

    "Deny" and "Warn" may not be used together since this combination needlessly duplicates the validation failure both in the API response body and the HTTP warning headers.

    Required.

## ValidatingAdmissionPolicy {#ValidatingAdmissionPolicy}

ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it.

<hr>

- **apiVersion** (string)

  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

- **kind** (string)

  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

- **metadata** (<a href="">ObjectMeta</a>)

  Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

- **spec** (ValidatingAdmissionPolicySpec)

  Specification of the desired behavior of the ValidatingAdmissionPolicy.

  <a name="ValidatingAdmissionPolicySpec"></a>

  ValidatingAdmissionPolicySpec is the specification of the desired behavior of the AdmissionPolicy.

  - **spec.auditAnnotations** ([]AuditAnnotation)

    *Atomic: will be replaced during a merge*

    auditAnnotations contains CEL expressions which are used to produce audit annotations for the audit event of the API request. validations
and auditAnnotations may not both be empty; at least one of validations or auditAnnotations is required.

    <a name="AuditAnnotation"></a>

    AuditAnnotation describes how to produce an audit annotation for an API request.

    - **spec.auditAnnotations.key** (string), required

      key specifies the audit annotation key. The audit annotation keys of a ValidatingAdmissionPolicy must be unique. The key must be a qualified name ([A-Za-z0-9][-A-Za-z0-9_.]*) no more than 63 bytes in length.

      The key is combined with the resource name of the ValidatingAdmissionPolicy to construct an audit annotation key: "{ValidatingAdmissionPolicy name}/{key}".

      If an admission webhook uses the same resource name as this ValidatingAdmissionPolicy and the same audit annotation key, the annotation key will be identical. In this case, the first annotation written with the key will be included in the audit event and all subsequent annotations with the same key will be discarded.

      Required.

    - **spec.auditAnnotations.valueExpression** (string), required

      valueExpression represents the expression which is evaluated by CEL to produce an audit annotation value. The expression must evaluate to either a string or null value. If the expression evaluates to a string, the audit annotation is included with the string value. If the expression evaluates to null or empty string the audit annotation will be omitted. The valueExpression may be no longer than 5kb in length. If the result of the valueExpression is more than 10kb in length, it will be truncated to 10kb.

      If multiple ValidatingAdmissionPolicyBinding resources match an API request, then the valueExpression will be evaluated for each binding. All unique values produced by the valueExpressions will be joined together in a comma-separated list.

      Required.

  - **spec.failurePolicy** (string)

    failurePolicy defines how to handle failures for the admission policy. Failures can occur from CEL expression parse errors, type check errors, runtime errors and invalid or mis-configured policy definitions or bindings.

    A policy is invalid if spec.paramKind refers to a non-existent Kind. A binding is invalid if spec.paramRef.name refers to a non-existent resource.

    failurePolicy does not define how validations that evaluate to false are handled.

    When failurePolicy is set to Fail, ValidatingAdmissionPolicyBinding validationActions define how failures are enforced.

    Allowed values are Ignore or Fail. Defaults to Fail.

  - **spec.matchConditions** ([]MatchCondition)

    *Patch strategy: merge on key `name`*

    *Map: unique values on key name will be kept during a merge*

    MatchConditions is a list of conditions that must be met for a request to be validated. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.

    If a parameter object is provided, it can be accessed via the `params` handle in the same manner as validation expressions.

    The exact matching logic is (in order):

    1. If ANY matchCondition evaluates to FALSE, the policy is skipped.
    2. If ALL matchConditions evaluate to TRUE, the policy is evaluated.
    3. If any matchCondition evaluates to an error (but none are FALSE):
       - If failurePolicy=Fail, reject the request
       - If failurePolicy=Ignore, the policy is skipped

    <a name="MatchCondition"></a>

    MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook.

    - **spec.matchConditions.expression** (string), required

      Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables:

      - 'object' - The object from the incoming request. The value is null for DELETE requests.
      - 'oldObject' - The existing object. The value is null for CREATE requests.
      - 'request' - Attributes of the admission request (/pkg/apis/admission/types.go#AdmissionRequest).
      - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz
      - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource.

      Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/

      Required.

    - **spec.matchConditions.name** (string), required

      Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName').

      Required.

  - **spec.matchConstraints** (MatchResources)

    MatchConstraints specifies what resources this policy is designed to validate. The AdmissionPolicy cares about a request if it matches *all* Constraints. However, in order to prevent clusters from being put into an unstable state that cannot be recovered from via the API ValidatingAdmissionPolicy cannot match ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBinding. Required.

    <a name="MatchResources"></a>

    MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. The exclude rules take
precedence over include rules (if a resource matches both, it is excluded).

    - **spec.matchConstraints.excludeResourceRules** ([]NamedRuleWithOperations)

      *Atomic: will be replaced during a merge*

      ExcludeResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy should not care about. The exclude rules take precedence over include rules (if a resource matches both, it is excluded).

      <a name="NamedRuleWithOperations"></a>

      NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.

      - **spec.matchConstraints.excludeResourceRules.apiGroups** ([]string)

        *Atomic: will be replaced during a merge*

        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.excludeResourceRules.apiVersions** ([]string)

        *Atomic: will be replaced during a merge*

        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.excludeResourceRules.operations** ([]string)

        *Atomic: will be replaced during a merge*

        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or '*' for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.excludeResourceRules.resourceNames** ([]string)

        *Atomic: will be replaced during a merge*

        ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.

      - **spec.matchConstraints.excludeResourceRules.resources** ([]string)

        *Atomic: will be replaced during a merge*

        Resources is a list of resources this rule applies to.

        For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources.

        If wildcard is present, the validation rule will ensure resources do not overlap with each other.

        Depending on the enclosing object, subresources might not be allowed. Required.

      - **spec.matchConstraints.excludeResourceRules.scope** (string)

        scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*".

    - **spec.matchConstraints.matchPolicy** (string)

      matchPolicy defines how the "MatchResources" list is used to match incoming requests. Allowed values are "Exact" or "Equivalent".

      - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included `apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"]`, a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the ValidatingAdmissionPolicy.

      - Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included `apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"]`, a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the ValidatingAdmissionPolicy.

      Defaults to "Equivalent"

    - **spec.matchConstraints.namespaceSelector** (<a href="">LabelSelector</a>)

      NamespaceSelector decides whether to run the admission control policy on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the policy.

      For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows:

      ```json
      "namespaceSelector": {
        "matchExpressions": [
          {
            "key": "runlevel",
            "operator": "NotIn",
            "values": [
              "0",
              "1"
            ]
          }
        ]
      }
      ```

      If instead you want to only run the policy on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows:

      ```json
      "namespaceSelector": {
        "matchExpressions": [
          {
            "key": "environment",
            "operator": "In",
            "values": [
              "prod",
              "staging"
            ]
          }
        ]
      }
      ```

      See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors.

      Default to the empty LabelSelector, which matches everything.

    - **spec.matchConstraints.objectSelector** (<a href="">LabelSelector</a>)

      ObjectSelector decides whether to run the validation based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the cel validation, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything.

    - **spec.matchConstraints.resourceRules** ([]NamedRuleWithOperations)

      *Atomic: will be replaced during a merge*

      ResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches *any* Rule.

      <a name="NamedRuleWithOperations"></a>

      NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.

      - **spec.matchConstraints.resourceRules.apiGroups** ([]string)

        *Atomic: will be replaced during a merge*

        APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.resourceRules.apiVersions** ([]string)

        *Atomic: will be replaced during a merge*

        APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.resourceRules.operations** ([]string)

        *Atomic: will be replaced during a merge*

        Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or '*' for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required.

      - **spec.matchConstraints.resourceRules.resourceNames** ([]string)

        *Atomic: will be replaced during a merge*

        ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.

      - **spec.matchConstraints.resourceRules.resources** ([]string)

        *Atomic: will be replaced during a merge*
                  Resources is a list of resources this rule applies to                   For example   pods  means pods   pods log  means the log subresource of pods      means all resources  but not subresources   pods    means all subresources of pods     scale  means all scale subresources        means all resources and their subresources                   If wildcard is present  the validation rule will ensure resources do not overlap with each other                   Depending on the enclosing object  subresources might not be allowed  Required             spec matchConstraints resourceRules scope    string           scope specifies the scope of this rule  Valid values are  Cluster    Namespaced   and      Cluster  means that only cluster scoped resources will match this rule  Namespace API objects are cluster scoped   Namespaced  means that only namespaced resources will match this rule      means that there are no scope restrictions  Subresources match the scope of their parent resource  Default is             spec paramKind    ParamKind       ParamKind specifies the kind of resources used to parameterize this policy  If absent  there are no parameters for this policy and the param CEL variable will not be provided to validation expressions  If ParamKind refers to a non existent kind  this policy definition is mis configured and the FailurePolicy is applied  If paramKind is specified but paramRef is unset in ValidatingAdmissionPolicyBinding  the params variable will be null        a name  ParamKind    a       ParamKind is a tuple of Group Kind and Version            spec paramKind apiVersion    string         APIVersion is the API group version the resources belong to  In format of  group version   Required           spec paramKind kind    string         Kind is the API kind the resources belong to  Required         spec validations      Validation        Atomic  will be replaced during a merge           Validations contain CEL expressions which is used to 
apply the validation  Validations and AuditAnnotations may not both be empty  a minimum of one Validations or AuditAnnotations is required        a name  Validation    a       Validation specifies the CEL expression which is used to apply the validation            spec validations expression    string   required        Expression represents the expression which will be evaluated by CEL  ref  https   github com google cel spec CEL expressions have access to the contents of the API request response  organized into CEL variables as well as some other useful variables                  object    The object from the incoming request  The value is null for DELETE requests     oldObject    The existing object  The value is null for CREATE requests     request    Attributes of the API request  ref   pkg apis admission types go AdmissionRequest       params    Parameter resource referred to by the policy binding being evaluated  Only populated if the policy has a ParamKind     namespaceObject    The namespace object that the incoming object belongs to  The value is null for cluster scoped resources     variables    Map of composited variables  from its name to its lazily evaluated value          For example  a variable named  foo  can be accessed as  variables foo            authorizer    A CEL Authorizer  May be used to perform authorization checks for the principal  user or service account  of the request          See https   pkg go dev k8s io apiserver pkg cel library Authz          authorizer requestResource    A CEL ResourceCheck constructed from the  authorizer  and configured with the         request resource               The  apiVersion    kind    metadata name  and  metadata generateName  are always accessible from the root of the object  No other metadata properties are accessible               Only property names of the form   a zA Z      a zA Z0 9        are accessible  Accessible property names are escaped according to the following rules when accessed in the 
expression         escapes to    underscores          escapes to    dot          escapes to    dash          escapes to    slash      Property names that exactly match a CEL RESERVED keyword escape to     keyword      The keywords are            true    false    null    in    as    break    const    continue    else    for    function    if             import    let    loop    package    namespace    return         Examples            Expression accessing a property named  namespace     Expression    object   namespace     0             Expression accessing a property named  x prop     Expression    object x  dash  prop   0             Expression accessing a property named  redact  d     Expression    object redact  underscores  d   0                Equality on arrays with list type of  set  or  map  ignores element order  i e   1  2      2  1   Concatenation on arrays with x kubernetes list type use the semantics of the list type             set    X   Y  performs a union where the array positions of all elements in  X  are preserved and           non intersecting elements in  Y  are appended  retaining their partial order             map    X   Y  performs a merge where the array positions of all keys in  X  are preserved but the values           are overwritten by values in  Y  when the key sets of  X  and  Y  intersect  Elements in  Y  with           non intersecting keys are appended  retaining their partial order        Required           spec validations message    string         Message represents the message displayed when validation fails  The message is required if the Expression contains line breaks  The message must not contain line breaks  If unset  the message is  failed rule   Rule    e g   must be a URL with the host matching spec host  If the Expression contains line breaks  Message is required  The message must not contain line breaks  If unset  the message is  failed Expression   Expression             spec validations messageExpression    
string         messageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails  Since messageExpression is used as a failure message  it must evaluate to a string  If both message and messageExpression are present on a validation  then messageExpression will be used if validation fails  If messageExpression results in a runtime error  the runtime error is logged  and the validation failure message is produced as if the messageExpression field were unset  If messageExpression evaluates to an empty string  a string with only spaces  or a string that contains line breaks  then the validation failure message will also be produced as if the messageExpression field were unset  and the fact that messageExpression produced an empty string string with only spaces string with line breaks will be logged  messageExpression has access to all the same variables as the  expression  except for  authorizer  and  authorizer requestResource   Example   object x must be less than max    string params max               spec validations reason    string         Reason represents a machine readable description of why this validation failed  If this is the first validation in the list to fail  this reason  as well as the corresponding HTTP response code  are used in the HTTP response to the client  The currently supported reasons are   Unauthorized    Forbidden    Invalid    RequestEntityTooLarge   If not set  StatusReasonInvalid is used in the response to the client         spec variables      Variable        Patch strategy  merge on key  name             Map  unique values on key name will be kept during a merge           Variables contain definitions of variables that can be used in composition of other expressions  Each variable is defined as a named CEL expression  The variables defined here will be available under  variables  in other expressions of the policy except MatchConditions because MatchConditions are 
evaluated before the rest of the policy           The expression of a variable can refer to other variables defined earlier in the list but not those after  Thus  Variables must be sorted by the order of first appearance and acyclic        a name  Variable    a       Variable is the definition of a variable that is used for composition  A variable is defined as a named expression            spec variables expression    string   required        Expression is the expression that will be evaluated as the value of the variable  The CEL expression has access to the same identifiers as the CEL expressions in Validation           spec variables name    string   required        Name is the name of the variable  The name must be a valid CEL identifier and unique among all variables  The variable can be accessed in other expressions through  variables  For example  if name is  foo   the variable will be available as  variables foo       status    ValidatingAdmissionPolicyStatus     The status of the ValidatingAdmissionPolicy  including warnings that are useful to determine if the policy behaves in the expected way  Populated by the system  Read only      a name  ValidatingAdmissionPolicyStatus    a     ValidatingAdmissionPolicyStatus represents the status of an admission validation policy          status conditions      Condition        Map  unique values on key type will be kept during a merge           The conditions represent the latest available observations of a policy s current state        a name  Condition    a       Condition contains details for one aspect of the current state of this API Resource            status conditions lastTransitionTime    Time   required        lastTransitionTime is the last time the condition transitioned from one status to another  This should be when the underlying condition changed   If that is not known  then using the time when the API field changed is acceptable          a name  Time    a         Time is a wrapper around time Time 
which supports correct marshaling to YAML and JSON   Wrappers are provided for many of the factory methods that the time package offers            status conditions message    string   required        message is a human readable message indicating details about the transition  This may be an empty string           status conditions reason    string   required        reason contains a programmatic identifier indicating the reason for the condition s last transition  Producers of specific condition types may define expected values and meanings for this field  and whether the values are considered a guaranteed API  The value should be a CamelCase string  This field may not be empty           status conditions status    string   required        status of the condition  one of True  False  Unknown           status conditions type    string   required        type of condition in CamelCase or in foo example com CamelCase           status conditions observedGeneration    int64         observedGeneration represents the  metadata generation that the condition was set based upon  For instance  if  metadata generation is currently 12  but the  status conditions x  observedGeneration is 9  the condition is out of date with respect to the current state of the instance         status observedGeneration    int64       The generation observed by the controller         status typeChecking    TypeChecking       The results of type checking for each expression  Presence of this field indicates the completion of the type checking        a name  TypeChecking    a       TypeChecking contains results of type checking the expressions in the ValidatingAdmissionPolicy           status typeChecking expressionWarnings      ExpressionWarning          Atomic  will be replaced during a merge               The type checking warnings for each expression          a name  ExpressionWarning    a         ExpressionWarning is a warning information that targets a specific expression              status 
typeChecking expressionWarnings fieldRef    string   required          The path to the field that refers the expression  For example  the reference to the expression of the first item of validations is  spec validations 0  expression             status typeChecking expressionWarnings warning    string   required          The content of type checking information in a human readable form  Each line of the warning contains the type that the expression is checked against  followed by the type check error from the compiler          ValidatingAdmissionPolicyBindingList   ValidatingAdmissionPolicyBindingList   ValidatingAdmissionPolicyBindingList is a list of ValidatingAdmissionPolicyBinding    hr       items       a href    ValidatingAdmissionPolicyBinding  a    required    List of PolicyBinding       apiVersion    string     APIVersion defines the versioned schema of this representation of an object  Servers should convert recognized schemas to the latest internal value  and may reject unrecognized values  More info  https   git k8s io community contributors devel sig architecture api conventions md resources      kind    string     Kind is a string value representing the REST resource this object represents  Servers may infer this from the endpoint the client submits requests to  Cannot be updated  In CamelCase  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds         Operations   Operations      hr             get  read the specified ValidatingAdmissionPolicyBinding       HTTP Request  GET  apis admissionregistration k8s io v1 validatingadmissionpolicybindings  name        Parameters       name     in path    string  required    name of the ValidatingAdmissionPolicyBinding       pretty     in query    string     a href    pretty  a        
  Response   200   a href    ValidatingAdmissionPolicyBinding  a    OK  401  Unauthorized        list  list or watch objects of kind ValidatingAdmissionPolicyBinding       HTTP Request  GET  apis admissionregistration k8s io v1 validatingadmissionpolicybindings       Parameters       allowWatchBookmarks     in query    boolean     a href    allowWatchBookmarks  a        continue     in query    string     a href    continue  a        fieldSelector     in query    string     a href    fieldSelector  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a        watch     in query    boolean     a href    watch  a          Response   200   a href    ValidatingAdmissionPolicyBindingList  a    OK  401  Unauthorized        create  create a ValidatingAdmissionPolicyBinding       HTTP Request  POST  apis admissionregistration k8s io v1 validatingadmissionpolicybindings       Parameters       body     a href    ValidatingAdmissionPolicyBinding  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingAdmissionPolicyBinding  a    OK  201   a href    ValidatingAdmissionPolicyBinding  a    Created  202   a href    ValidatingAdmissionPolicyBinding  a    Accepted  401  Unauthorized        update  replace the specified 
ValidatingAdmissionPolicyBinding       HTTP Request  PUT  apis admissionregistration k8s io v1 validatingadmissionpolicybindings  name        Parameters       name     in path    string  required    name of the ValidatingAdmissionPolicyBinding       body     a href    ValidatingAdmissionPolicyBinding  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingAdmissionPolicyBinding  a    OK  201   a href    ValidatingAdmissionPolicyBinding  a    Created  401  Unauthorized        patch  partially update the specified ValidatingAdmissionPolicyBinding       HTTP Request  PATCH  apis admissionregistration k8s io v1 validatingadmissionpolicybindings  name        Parameters       name     in path    string  required    name of the ValidatingAdmissionPolicyBinding       body     a href    Patch  a   required           dryRun     in query    string     a href    dryRun  a        fieldManager     in query    string     a href    fieldManager  a        fieldValidation     in query    string     a href    fieldValidation  a        force     in query    boolean     a href    force  a        pretty     in query    string     a href    pretty  a          Response   200   a href    ValidatingAdmissionPolicyBinding  a    OK  201   a href    ValidatingAdmissionPolicyBinding  a    Created  401  Unauthorized        delete  delete a ValidatingAdmissionPolicyBinding       HTTP Request  DELETE  apis admissionregistration k8s io v1 validatingadmissionpolicybindings  name        Parameters       name     in path    string  required    name of the ValidatingAdmissionPolicyBinding       body     a href    DeleteOptions  a            dryRun     in query    string     a href    dryRun  a        gracePeriodSeconds     in 
query    integer     a href    gracePeriodSeconds  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a          Response   200   a href    Status  a    OK  202   a href    Status  a    Accepted  401  Unauthorized        deletecollection  delete collection of ValidatingAdmissionPolicyBinding       HTTP Request  DELETE  apis admissionregistration k8s io v1 validatingadmissionpolicybindings       Parameters       body     a href    DeleteOptions  a            continue     in query    string     a href    continue  a        dryRun     in query    string     a href    dryRun  a        fieldSelector     in query    string     a href    fieldSelector  a        gracePeriodSeconds     in query    integer     a href    gracePeriodSeconds  a        labelSelector     in query    string     a href    labelSelector  a        limit     in query    integer     a href    limit  a        pretty     in query    string     a href    pretty  a        propagationPolicy     in query    string     a href    propagationPolicy  a        resourceVersion     in query    string     a href    resourceVersion  a        resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
{"questions":"kubernetes reference import k8s io api discovery v1 EndpointSlice represents a subset of the endpoints that implement a service apiVersion discovery k8s io v1 title EndpointSlice kind EndpointSlice contenttype apireference apimetadata weight 3 autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"discovery.k8s.io\/v1\"\n  import: \"k8s.io\/api\/discovery\/v1\"\n  kind: \"EndpointSlice\"\ncontent_type: \"api_reference\"\ndescription: \"EndpointSlice represents a subset of the endpoints that implement a service.\"\ntitle: \"EndpointSlice\"\nweight: 3\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: discovery.k8s.io\/v1`\n\n`import \"k8s.io\/api\/discovery\/v1\"`\n\n\n## EndpointSlice {#EndpointSlice}\n\nEndpointSlice represents a subset of the endpoints that implement a service. For a given service there may be multiple EndpointSlice objects, selected by labels, which must be joined to produce the full set of endpoints.\n\n<hr>\n\n- **apiVersion**: discovery.k8s.io\/v1\n\n\n- **kind**: EndpointSlice\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata.\n\n- **addressType** (string), required\n\n  addressType specifies the type of address carried by this EndpointSlice. All addresses in this slice must be the same type. This field is immutable after creation. 
The following address types are currently supported: * IPv4: Represents an IPv4 Address. * IPv6: Represents an IPv6 Address. * FQDN: Represents a Fully Qualified Domain Name.\n\n- **endpoints** ([]Endpoint), required\n\n  *Atomic: will be replaced during a merge*\n  \n  endpoints is a list of unique endpoints in this slice. Each slice may include a maximum of 1000 endpoints.\n\n  <a name=\"Endpoint\"><\/a>\n  *Endpoint represents a single logical \"backend\" implementing a service.*\n\n  - **endpoints.addresses** ([]string), required\n\n    *Set: unique values will be kept during a merge*\n    \n    addresses of this endpoint. The contents of this field are interpreted according to the corresponding EndpointSlice addressType field. Consumers must handle different types of addresses in the context of their own capabilities. This must contain at least one address but no more than 100. These are all assumed to be fungible and clients may choose to only use the first element. Refer to: https:\/\/issue.k8s.io\/106267\n\n  - **endpoints.conditions** (EndpointConditions)\n\n    conditions contains information about the current status of the endpoint.\n\n    <a name=\"EndpointConditions\"><\/a>\n    *EndpointConditions represents the current condition of an endpoint.*\n\n    - **endpoints.conditions.ready** (boolean)\n\n      ready indicates that this endpoint is prepared to receive traffic, according to whatever system is managing the endpoint. A nil value indicates an unknown state. In most cases consumers should interpret this unknown state as ready. For compatibility reasons, ready should never be \"true\" for terminating endpoints, except when the normal readiness behavior is being explicitly overridden, for example when the associated Service has set the publishNotReadyAddresses flag.\n\n    - **endpoints.conditions.serving** (boolean)\n\n      serving is identical to ready except that it is set regardless of the terminating state of endpoints. 
This condition should be set to true for a ready endpoint that is terminating. If nil, consumers should defer to the ready condition.\n\n    - **endpoints.conditions.terminating** (boolean)\n\n      terminating indicates that this endpoint is terminating. A nil value indicates an unknown state. Consumers should interpret this unknown state to mean that the endpoint is not terminating.\n\n  - **endpoints.deprecatedTopology** (map[string]string)\n\n    deprecatedTopology contains topology information part of the v1beta1 API. This field is deprecated, and will be removed when the v1beta1 API is removed (no sooner than kubernetes v1.24).  While this field can hold values, it is not writable through the v1 API, and any attempts to write to it will be silently ignored. Topology information can be found in the zone and nodeName fields instead.\n\n  - **endpoints.hints** (EndpointHints)\n\n    hints contains information associated with how an endpoint should be consumed.\n\n    <a name=\"EndpointHints\"><\/a>\n    *EndpointHints provides hints describing how an endpoint should be consumed.*\n\n    - **endpoints.hints.forZones** ([]ForZone)\n\n      *Atomic: will be replaced during a merge*\n      \n      forZones indicates the zone(s) this endpoint should be consumed by to enable topology aware routing.\n\n      <a name=\"ForZone\"><\/a>\n      *ForZone provides information about which zones should consume this endpoint.*\n\n      - **endpoints.hints.forZones.name** (string), required\n\n        name represents the name of the zone.\n\n  - **endpoints.hostname** (string)\n\n    hostname of this endpoint. This field may be used by consumers of endpoints to distinguish endpoints from each other (e.g. in DNS names). Multiple endpoints which use the same hostname should be considered fungible (e.g. multiple A values in DNS). 
Must be lowercase and pass DNS Label (RFC 1123) validation.\n\n  - **endpoints.nodeName** (string)\n\n    nodeName represents the name of the Node hosting this endpoint. This can be used to determine endpoints local to a Node.\n\n  - **endpoints.targetRef** (<a href=\"\">ObjectReference<\/a>)\n\n    targetRef is a reference to a Kubernetes object that represents this endpoint.\n\n  - **endpoints.zone** (string)\n\n    zone is the name of the Zone this endpoint exists in.\n\n- **ports** ([]EndpointPort)\n\n  *Atomic: will be replaced during a merge*\n  \n  ports specifies the list of network ports exposed by each endpoint in this slice. Each port must have a unique name. When ports is empty, it indicates that there are no defined ports. When a port is defined with a nil port value, it indicates \"all ports\". Each slice may include a maximum of 100 ports.\n\n  <a name=\"EndpointPort\"><\/a>\n  *EndpointPort represents a Port used by an EndpointSlice*\n\n  - **ports.port** (int32)\n\n    port represents the port number of the endpoint. If this is not specified, ports are not restricted and must be interpreted in the context of the specific consumer.\n\n  - **ports.protocol** (string)\n\n    protocol represents the IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP.\n\n  - **ports.name** (string)\n\n    name represents the name of this port. All ports in an EndpointSlice must have a unique name. If the EndpointSlice is derived from a Kubernetes service, this corresponds to the Service.ports[].name. Name must either be an empty string or pass DNS_LABEL validation: * must be no more than 63 characters long. * must consist of lower case alphanumeric characters or '-'. * must start and end with an alphanumeric character. Default is empty string.\n\n  - **ports.appProtocol** (string)\n\n    The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. 
This field follows standard Kubernetes label syntax. Valid values are either:\n    \n    * Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https:\/\/www.iana.org\/assignments\/service-names).\n    \n    * Kubernetes-defined prefixed names:\n      * 'kubernetes.io\/h2c' - HTTP\/2 prior knowledge over cleartext as described in https:\/\/www.rfc-editor.org\/rfc\/rfc9113.html#name-starting-http-2-with-prior-\n      * 'kubernetes.io\/ws'  - WebSocket over cleartext as described in https:\/\/www.rfc-editor.org\/rfc\/rfc6455\n      * 'kubernetes.io\/wss' - WebSocket over TLS as described in https:\/\/www.rfc-editor.org\/rfc\/rfc6455\n    \n    * Other protocols should use implementation-defined prefixed names such as mycompany.com\/my-custom-protocol.\n\n\n\n\n\n## EndpointSliceList {#EndpointSliceList}\n\nEndpointSliceList represents a list of endpoint slices\n\n<hr>\n\n- **apiVersion**: discovery.k8s.io\/v1\n\n\n- **kind**: EndpointSliceList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata.\n\n- **items** ([]<a href=\"\">EndpointSlice<\/a>), required\n\n  items is the list of endpoint slices\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified EndpointSlice\n\n#### HTTP Request\n\nGET \/apis\/discovery.k8s.io\/v1\/namespaces\/{namespace}\/endpointslices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the EndpointSlice\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">EndpointSlice<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind EndpointSlice\n\n#### HTTP Request\n\nGET \/apis\/discovery.k8s.io\/v1\/namespaces\/{namespace}\/endpointslices\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- 
**allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">EndpointSliceList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind EndpointSlice\n\n#### HTTP Request\n\nGET \/apis\/discovery.k8s.io\/v1\/endpointslices\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in 
query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">EndpointSliceList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create an EndpointSlice\n\n#### HTTP Request\n\nPOST \/apis\/discovery.k8s.io\/v1\/namespaces\/{namespace}\/endpointslices\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">EndpointSlice<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">EndpointSlice<\/a>): OK\n\n201 (<a href=\"\">EndpointSlice<\/a>): Created\n\n202 (<a href=\"\">EndpointSlice<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified EndpointSlice\n\n#### HTTP Request\n\nPUT \/apis\/discovery.k8s.io\/v1\/namespaces\/{namespace}\/endpointslices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the EndpointSlice\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">EndpointSlice<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">EndpointSlice<\/a>): OK\n\n201 (<a href=\"\">EndpointSlice<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified EndpointSlice\n\n#### HTTP Request\n\nPATCH \/apis\/discovery.k8s.io\/v1\/namespaces\/{namespace}\/endpointslices\/{name}\n\n#### Parameters\n\n\n- **name** 
(*in path*): string, required\n\n  name of the EndpointSlice\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">EndpointSlice<\/a>): OK\n\n201 (<a href=\"\">EndpointSlice<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete an EndpointSlice\n\n#### HTTP Request\n\nDELETE \/apis\/discovery.k8s.io\/v1\/namespaces\/{namespace}\/endpointslices\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the EndpointSlice\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of EndpointSlice\n\n#### HTTP Request\n\nDELETE \/apis\/discovery.k8s.io\/v1\/namespaces\/{namespace}\/endpointslices\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a 
href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
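The EndpointSlice record above states that `ports.name` must either be an empty string or pass DNS_LABEL validation: no more than 63 characters, lowercase alphanumerics or '-', starting and ending with an alphanumeric character. A minimal sketch of that check in Python (the regex and function name are illustrative, not the validator kube-apiserver itself runs, which lives in `k8s.io/apimachinery`):

```python
import re

# DNS_LABEL shape from the ports.name field description: lowercase
# alphanumerics or '-', starting and ending with an alphanumeric.
_DNS_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def is_valid_port_name(name: str) -> bool:
    """Check a port name against the EndpointSlice ports.name constraint.

    "Name must either be an empty string or pass DNS_LABEL validation:
    * must be no more than 63 characters long. * must consist of lower
    case alphanumeric characters or '-'. * must start and end with an
    alphanumeric character."
    """
    if name == "":
        return True  # empty string is explicitly allowed
    return len(name) <= 63 and _DNS_LABEL.match(name) is not None
```

The same DNS_LABEL rule (RFC 1123) also governs the `endpoints.hostname` field described earlier in the record.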
{"questions":"kubernetes reference apiVersion v1 kind Endpoints weight 2 Endpoints is a collection of endpoints that implement the actual service contenttype apireference apimetadata title Endpoints autogenerated true import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"Endpoints\"\ncontent_type: \"api_reference\"\ndescription: \"Endpoints is a collection of endpoints that implement the actual service.\"\ntitle: \"Endpoints\"\nweight: 2\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## Endpoints {#Endpoints}\n\nEndpoints is a collection of endpoints that implement the actual service. Example:\n\n   Name: \"mysvc\",\n   Subsets: [\n     {\n       Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}],\n       Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}]\n     },\n     {\n       Addresses: [{\"ip\": \"10.10.3.3\"}],\n       Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}]\n     },\n  ]\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: Endpoints\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **subsets** ([]EndpointSubset)\n\n  *Atomic: will be replaced during a merge*\n  \n  The set of all endpoints is the union of all subsets. Addresses are placed into subsets according to the IPs they share. A single address with multiple ports, some of which are ready and some of which are not (because they come from different containers) will result in the address being displayed in different subsets for the different ports. No address will appear in both Addresses and NotReadyAddresses in the same subset. Sets of addresses and ports that comprise a service.\n\n  <a name=\"EndpointSubset\"><\/a>\n  *EndpointSubset is a group of addresses with a common set of ports. The expanded set of endpoints is the Cartesian product of Addresses x Ports. For example, given:\n  \n  \t{\n  \t  Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}],\n  \t  Ports:     [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}]\n  \t}\n  \n  The resulting set of endpoints can be viewed as:\n  \n  \ta: [ 10.10.1.1:8675, 10.10.2.2:8675 ],\n  \tb: [ 10.10.1.1:309, 10.10.2.2:309 ]*\n\n  - **subsets.addresses** ([]EndpointAddress)\n\n    *Atomic: will be replaced during a merge*\n    \n    IP addresses which offer the related ports that are marked as ready. These endpoints should be considered safe for load balancers and clients to utilize.\n\n    <a name=\"EndpointAddress\"><\/a>\n    *EndpointAddress is a tuple that describes single IP address.*\n\n    - **subsets.addresses.ip** (string), required\n\n      The IP of this endpoint. May not be loopback (127.0.0.0\/8 or ::1), link-local (169.254.0.0\/16 or fe80::\/10), or link-local multicast (224.0.0.0\/24 or ff02::\/16).\n\n    - **subsets.addresses.hostname** (string)\n\n      The Hostname of this endpoint\n\n    - **subsets.addresses.nodeName** (string)\n\n      Optional: Node hosting this endpoint. 
This can be used to determine endpoints local to a node.\n\n    - **subsets.addresses.targetRef** (<a href=\"\">ObjectReference<\/a>)\n\n      Reference to object providing the endpoint.\n\n  - **subsets.notReadyAddresses** ([]EndpointAddress)\n\n    *Atomic: will be replaced during a merge*\n    \n    IP addresses which offer the related ports but are not currently marked as ready because they have not yet finished starting, have recently failed a readiness check, or have recently failed a liveness check.\n\n    <a name=\"EndpointAddress\"><\/a>\n    *EndpointAddress is a tuple that describes single IP address.*\n\n    - **subsets.notReadyAddresses.ip** (string), required\n\n      The IP of this endpoint. May not be loopback (127.0.0.0\/8 or ::1), link-local (169.254.0.0\/16 or fe80::\/10), or link-local multicast (224.0.0.0\/24 or ff02::\/16).\n\n    - **subsets.notReadyAddresses.hostname** (string)\n\n      The Hostname of this endpoint\n\n    - **subsets.notReadyAddresses.nodeName** (string)\n\n      Optional: Node hosting this endpoint. This can be used to determine endpoints local to a node.\n\n    - **subsets.notReadyAddresses.targetRef** (<a href=\"\">ObjectReference<\/a>)\n\n      Reference to object providing the endpoint.\n\n  - **subsets.ports** ([]EndpointPort)\n\n    *Atomic: will be replaced during a merge*\n    \n    Port numbers available on the related IP addresses.\n\n    <a name=\"EndpointPort\"><\/a>\n    *EndpointPort is a tuple that describes a single port.*\n\n    - **subsets.ports.port** (int32), required\n\n      The port number of the endpoint.\n\n    - **subsets.ports.protocol** (string)\n\n      The IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP.\n\n    - **subsets.ports.name** (string)\n\n      The name of this port.  This must match the 'name' field in the corresponding ServicePort. Must be a DNS_LABEL. 
Optional only if one port is defined.\n\n    - **subsets.ports.appProtocol** (string)\n\n      The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either:\n      \n      * Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https:\/\/www.iana.org\/assignments\/service-names).\n      \n      * Kubernetes-defined prefixed names:\n        * 'kubernetes.io\/h2c' - HTTP\/2 prior knowledge over cleartext as described in https:\/\/www.rfc-editor.org\/rfc\/rfc9113.html#name-starting-http-2-with-prior-\n        * 'kubernetes.io\/ws'  - WebSocket over cleartext as described in https:\/\/www.rfc-editor.org\/rfc\/rfc6455\n        * 'kubernetes.io\/wss' - WebSocket over TLS as described in https:\/\/www.rfc-editor.org\/rfc\/rfc6455\n      \n      * Other protocols should use implementation-defined prefixed names such as mycompany.com\/my-custom-protocol.\n\n\n\n\n\n## EndpointsList {#EndpointsList}\n\nEndpointsList is a list of endpoints.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: EndpointsList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **items** ([]<a href=\"\">Endpoints<\/a>), required\n\n  List of endpoints.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified Endpoints\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/endpoints\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Endpoints\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Endpoints<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Endpoints\n\n#### HTTP Request\n\nGET \/api\/v1\/namespaces\/{namespace}\/endpoints\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">EndpointsList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Endpoints\n\n#### 
HTTP Request\n\nGET \/api\/v1\/endpoints\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">EndpointsList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create Endpoints\n\n#### HTTP Request\n\nPOST \/api\/v1\/namespaces\/{namespace}\/endpoints\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Endpoints<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Endpoints<\/a>): OK\n\n201 (<a href=\"\">Endpoints<\/a>): Created\n\n202 (<a href=\"\">Endpoints<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified Endpoints\n\n#### HTTP Request\n\nPUT \/api\/v1\/namespaces\/{namespace}\/endpoints\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, 
required\n\n  name of the Endpoints\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Endpoints<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Endpoints<\/a>): OK\n\n201 (<a href=\"\">Endpoints<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified Endpoints\n\n#### HTTP Request\n\nPATCH \/api\/v1\/namespaces\/{namespace}\/endpoints\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Endpoints\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Endpoints<\/a>): OK\n\n201 (<a href=\"\">Endpoints<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete Endpoints\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/endpoints\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Endpoints\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a 
href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of Endpoints\n\n#### HTTP Request\n\nDELETE \/api\/v1\/namespaces\/{namespace}\/endpoints\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
{"questions":"kubernetes reference weight 5 kind IngressClass IngressClass represents the class of the Ingress referenced by the Ingress Spec contenttype apireference apimetadata title IngressClass autogenerated true import k8s io api networking v1 apiVersion networking k8s io v1","answers":"---\napi_metadata:\n  apiVersion: \"networking.k8s.io\/v1\"\n  import: \"k8s.io\/api\/networking\/v1\"\n  kind: \"IngressClass\"\ncontent_type: \"api_reference\"\ndescription: \"IngressClass represents the class of the Ingress, referenced by the Ingress Spec.\"\ntitle: \"IngressClass\"\nweight: 5\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: networking.k8s.io\/v1`\n\n`import \"k8s.io\/api\/networking\/v1\"`\n\n\n## IngressClass {#IngressClass}\n\nIngressClass represents the class of the Ingress, referenced by the Ingress Spec. The `ingressclass.kubernetes.io\/is-default-class` annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class.\n\n<hr>\n\n- **apiVersion**: networking.k8s.io\/v1\n\n\n- **kind**: IngressClass\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">IngressClassSpec<\/a>)\n\n  spec is the desired state of the IngressClass. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## IngressClassSpec {#IngressClassSpec}\n\nIngressClassSpec provides information about the class of an Ingress.\n\n<hr>\n\n- **controller** (string)\n\n  controller refers to the name of the controller that should handle this class. This allows for different \"flavors\" that are controlled by the same controller. For example, you may have different parameters for the same implementing controller. This should be specified as a domain-prefixed path no more than 250 characters in length, e.g. \"acme.io\/ingress-controller\". This field is immutable.\n\n- **parameters** (IngressClassParametersReference)\n\n  parameters is a link to a custom resource containing additional configuration for the controller. This is optional if the controller does not require extra parameters.\n\n  <a name=\"IngressClassParametersReference\"><\/a>\n  *IngressClassParametersReference identifies an API object. This can be used to specify a cluster or namespace-scoped resource.*\n\n  - **parameters.kind** (string), required\n\n    kind is the type of resource being referenced.\n\n  - **parameters.name** (string), required\n\n    name is the name of resource being referenced.\n\n  - **parameters.apiGroup** (string)\n\n    apiGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.\n\n  - **parameters.namespace** (string)\n\n    namespace is the namespace of the resource being referenced. 
This field is required when scope is set to \"Namespace\" and must be unset when scope is set to \"Cluster\".\n\n  - **parameters.scope** (string)\n\n    scope represents if this refers to a cluster or namespace scoped resource. This may be set to \"Cluster\" (default) or \"Namespace\".\n\n\n\n\n\n## IngressClassList {#IngressClassList}\n\nIngressClassList is a collection of IngressClasses.\n\n<hr>\n\n- **apiVersion**: networking.k8s.io\/v1\n\n\n- **kind**: IngressClassList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata.\n\n- **items** ([]<a href=\"\">IngressClass<\/a>), required\n\n  items is the list of IngressClasses.\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified IngressClass\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1\/ingressclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the IngressClass\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IngressClass<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind IngressClass\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1\/ingressclasses\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a 
href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IngressClassList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create an IngressClass\n\n#### HTTP Request\n\nPOST \/apis\/networking.k8s.io\/v1\/ingressclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">IngressClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IngressClass<\/a>): OK\n\n201 (<a href=\"\">IngressClass<\/a>): Created\n\n202 (<a href=\"\">IngressClass<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified IngressClass\n\n#### HTTP Request\n\nPUT \/apis\/networking.k8s.io\/v1\/ingressclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the IngressClass\n\n\n- **body**: <a href=\"\">IngressClass<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IngressClass<\/a>): OK\n\n201 (<a href=\"\">IngressClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified IngressClass\n\n#### HTTP Request\n\nPATCH \/apis\/networking.k8s.io\/v1\/ingressclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the IngressClass\n\n\n- **body**: <a href=\"\">Patch<\/a>, 
required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IngressClass<\/a>): OK\n\n201 (<a href=\"\">IngressClass<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete an IngressClass\n\n#### HTTP Request\n\nDELETE \/apis\/networking.k8s.io\/v1\/ingressclasses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the IngressClass\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of IngressClass\n\n#### HTTP Request\n\nDELETE \/apis\/networking.k8s.io\/v1\/ingressclasses\n\n#### Parameters\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a 
href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference"}
{"questions":"kubernetes reference kind Service apiVersion v1 weight 1 contenttype apireference title Service apimetadata autogenerated true Service is a named abstraction of software service for example mysql consisting of local port for example 3306 that the proxy listens on and the selector that determines which pods will answer requests sent through the proxy import k8s io api core v1","answers":"---\napi_metadata:\n  apiVersion: \"v1\"\n  import: \"k8s.io\/api\/core\/v1\"\n  kind: \"Service\"\ncontent_type: \"api_reference\"\ndescription: \"Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy.\"\ntitle: \"Service\"\nweight: 1\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: v1`\n\n`import \"k8s.io\/api\/core\/v1\"`\n\n\n## Service {#Service}\n\nService is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: Service\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">ServiceSpec<\/a>)\n\n  Spec defines the behavior of a service. https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">ServiceStatus<\/a>)\n\n  Most recently observed status of the service. Populated by the system. Read-only. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## ServiceSpec {#ServiceSpec}\n\nServiceSpec describes the attributes that a user creates on a service.\n\n<hr>\n\n- **selector** (map[string]string)\n\n  Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/\n\n- **ports** ([]ServicePort)\n\n  *Patch strategy: merge on key `port`*\n  \n  *Map: unique values on keys `port, protocol` will be kept during a merge*\n  \n  The list of ports that are exposed by this service. More info: https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#virtual-ips-and-service-proxies\n\n  <a name=\"ServicePort\"><\/a>\n  *ServicePort contains information on service's port.*\n\n  - **ports.port** (int32), required\n\n    The port that will be exposed by this service.\n\n  - **ports.targetPort** (IntOrString)\n\n    Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. 
If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#defining-a-service\n\n    <a name=\"IntOrString\"><\/a>\n    *IntOrString is a type that can hold an int32 or a string.  When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type.  This allows you to have, for example, a JSON field that can accept a name or number.*\n\n  - **ports.protocol** (string)\n\n    The IP protocol for this port. Supports \"TCP\", \"UDP\", and \"SCTP\". Default is TCP.\n\n  - **ports.name** (string)\n\n    The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.\n\n  - **ports.nodePort** (int32)\n\n    The port on each node on which this service is exposed when type is NodePort or LoadBalancer.  Usually assigned by the system. If a value is specified, in-range, and not in use it will be used, otherwise the operation will fail.  If not specified, a port will be allocated if this Service requires one.  If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type from NodePort to ClusterIP). More info: https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#type-nodeport\n\n  - **ports.appProtocol** (string)\n\n    The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. 
Valid values are either:\n    \n    * Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https:\/\/www.iana.org\/assignments\/service-names).\n    \n    * Kubernetes-defined prefixed names:\n      * 'kubernetes.io\/h2c' - HTTP\/2 prior knowledge over cleartext as described in https:\/\/www.rfc-editor.org\/rfc\/rfc9113.html#name-starting-http-2-with-prior-\n      * 'kubernetes.io\/ws'  - WebSocket over cleartext as described in https:\/\/www.rfc-editor.org\/rfc\/rfc6455\n      * 'kubernetes.io\/wss' - WebSocket over TLS as described in https:\/\/www.rfc-editor.org\/rfc\/rfc6455\n    \n    * Other protocols should use implementation-defined prefixed names such as mycompany.com\/my-custom-protocol.\n\n- **type** (string)\n\n  type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. \"ClusterIP\" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is \"None\", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. \"NodePort\" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. \"LoadBalancer\" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. \"ExternalName\" aliases this service to the specified externalName. Several other fields do not apply to ExternalName services. More info: https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#publishing-services-service-types\n\n- **ipFamilies** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. 
This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are \"IPv4\" and \"IPv6\".  This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to \"headless\" services. This field will be wiped when updating a Service to type ExternalName.\n  \n  This field may hold a maximum of two entries (dual-stack families, in either order).  These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field.\n\n- **ipFamilyPolicy** (string)\n\n  IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be \"SingleStack\" (a single IP family), \"PreferDualStack\" (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or \"RequireDualStack\" (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName.\n\n- **clusterIP** (string)\n\n  clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. 
This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above).  Valid values are \"None\", empty string (\"\"), or a valid IP address. Setting this to \"None\" makes a \"headless service\" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required.  Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#virtual-ips-and-service-proxies\n\n- **clusterIPs** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  ClusterIPs is a list of IP addresses assigned to this service, and are usually assigned randomly.  If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be empty) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above).  Valid values are \"None\", empty string (\"\"), or a valid IP address.  Setting this to \"None\" makes a \"headless service\" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required.  Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName.  
If this field is not specified, it will be initialized from the clusterIP field.  If this field is specified, clients must ensure that clusterIPs[0] and clusterIP have the same value.\n  \n  This field may hold a maximum of two entries (dual-stack IPs, in either order). These IPs must correspond to the values of the ipFamilies field. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. More info: https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#virtual-ips-and-service-proxies\n\n- **externalIPs** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service.  These IPs are not managed by Kubernetes.  The user is responsible for ensuring that traffic arrives at a node with this IP.  A common example is external load-balancers that are not part of the Kubernetes system.\n\n- **sessionAffinity** (string)\n\n  Supports \"ClientIP\" and \"None\". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#virtual-ips-and-service-proxies\n\n- **loadBalancerIP** (string)\n\n  Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations. Using it is non-portable and it may not support dual-stack. 
Users are encouraged to use implementation-specific annotations when available.\n\n- **loadBalancerSourceRanges** ([]string)\n\n  *Atomic: will be replaced during a merge*\n  \n  If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/create-external-load-balancer\/\n\n- **loadBalancerClass** (string)\n\n  loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. \"internal-vip\" or \"example.com\/internal-vip\". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it cannot be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.\n\n- **externalName** (string)\n\n  externalName is the external reference that discovery mechanisms will return as an alias for this service (e.g. a DNS CNAME record). No proxying will be involved.  
Must be a lowercase RFC-1123 hostname (https:\/\/tools.ietf.org\/html\/rfc1123) and requires `type` to be \"ExternalName\".\n\n- **externalTrafficPolicy** (string)\n\n  externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service's \"externally-facing\" addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to \"Local\", the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, \"Cluster\", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get \"Cluster\" semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node.\n\n- **internalTrafficPolicy** (string)\n\n  InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to \"Local\", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, \"Cluster\", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features).\n\n- **healthCheckNodePort** (int32)\n\n  healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used.  If not specified, a value will be automatically allocated.  External systems (e.g. 
load-balancers) can use this port to determine if a given node holds endpoints for this service or not.  If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set.\n\n- **publishNotReadyAddresses** (boolean)\n\n  publishNotReadyAddresses indicates that any agent which deals with endpoints for this Service should disregard any indications of ready\/not-ready. The primary use case for setting this field is for a StatefulSet's Headless Service to propagate SRV DNS records for its Pods for the purpose of peer discovery. The Kubernetes controllers that generate Endpoints and EndpointSlice resources for Services interpret this to mean that all endpoints are considered \"ready\" even if the Pods themselves are not. Agents which consume only Kubernetes generated endpoints through the Endpoints or EndpointSlice resources can safely assume this behavior.\n\n- **sessionAffinityConfig** (SessionAffinityConfig)\n\n  sessionAffinityConfig contains the configurations of session affinity.\n\n  <a name=\"SessionAffinityConfig\"><\/a>\n  *SessionAffinityConfig represents the configurations of session affinity.*\n\n  - **sessionAffinityConfig.clientIP** (ClientIPConfig)\n\n    clientIP contains the configurations of Client IP based session affinity.\n\n    <a name=\"ClientIPConfig\"><\/a>\n    *ClientIPConfig represents the configurations of Client IP based session affinity.*\n\n    - **sessionAffinityConfig.clientIP.timeoutSeconds** (int32)\n\n      timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be >0 && \\<=86400(for 1 day) if ServiceAffinity == \"ClientIP\". Default value is 10800(for 3 hours).\n\n- **allocateLoadBalancerNodePorts** (boolean)\n\n  allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer.  
Default is \"true\". It may be set to \"false\" if the cluster load-balancer does not rely on NodePorts.  If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.\n\n- **trafficDistribution** (string)\n\n  TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to \"PreferClose\", implementations should prioritize endpoints that are topologically close (e.g., same zone). This is a beta field and requires enabling the ServiceTrafficDistribution feature.\n\n\n\n\n\n## ServiceStatus {#ServiceStatus}\n\nServiceStatus represents the current status of a service.\n\n<hr>\n\n- **conditions** ([]Condition)\n\n  *Patch strategy: merge on key `type`*\n  \n  *Map: unique values on key `type` will be kept during a merge*\n  \n  Current service state.\n\n  <a name=\"Condition\"><\/a>\n  *Condition contains details for one aspect of the current state of this API Resource.*\n\n  - **conditions.lastTransitionTime** (Time), required\n\n    lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed.  If that is not known, then using the time when the API field changed is acceptable.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n  - **conditions.message** (string), required\n\n    message is a human readable message indicating details about the transition. 
This may be an empty string.\n\n  - **conditions.reason** (string), required\n\n    reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty.\n\n  - **conditions.status** (string), required\n\n    status of the condition, one of True, False, Unknown.\n\n  - **conditions.type** (string), required\n\n    type of condition in CamelCase or in foo.example.com\/CamelCase.\n\n  - **conditions.observedGeneration** (int64)\n\n    observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.\n\n- **loadBalancer** (LoadBalancerStatus)\n\n  LoadBalancer contains the current status of the load-balancer, if one is present.\n\n  <a name=\"LoadBalancerStatus\"><\/a>\n  *LoadBalancerStatus represents the status of a load-balancer.*\n\n  - **loadBalancer.ingress** ([]LoadBalancerIngress)\n\n    *Atomic: will be replaced during a merge*\n    \n    Ingress is a list containing ingress points for the load-balancer. 
Traffic intended for the service should be sent to these ingress points.\n\n    <a name=\"LoadBalancerIngress\"><\/a>\n    *LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point.*\n\n    - **loadBalancer.ingress.hostname** (string)\n\n      Hostname is set for load-balancer ingress points that are DNS based (typically AWS load-balancers)\n\n    - **loadBalancer.ingress.ip** (string)\n\n      IP is set for load-balancer ingress points that are IP based (typically GCE or OpenStack load-balancers)\n\n    - **loadBalancer.ingress.ipMode** (string)\n\n      IPMode specifies how the load-balancer IP behaves, and may only be specified when the ip field is specified. Setting this to \"VIP\" indicates that traffic is delivered to the node with the destination set to the load-balancer's IP and port. Setting this to \"Proxy\" indicates that traffic is delivered to the node or pod with the destination set to the node's IP and node port or the pod's IP and port. 
Service implementations may use this information to adjust traffic routing.\n\n    - **loadBalancer.ingress.ports** ([]PortStatus)\n\n      *Atomic: will be replaced during a merge*\n      \n      Ports is a list of records of service ports. If used, every port defined in the service should have an entry in it.\n\n      <a name=\"PortStatus\"><\/a>\n      *PortStatus represents the error condition of a service port.*\n\n      - **loadBalancer.ingress.ports.port** (int32), required\n\n        Port is the port number of the service port of which status is recorded here.\n\n      - **loadBalancer.ingress.ports.protocol** (string), required\n\n        Protocol is the protocol of the service port of which status is recorded here. The supported values are: \"TCP\", \"UDP\", \"SCTP\".\n\n      - **loadBalancer.ingress.ports.error** (string)\n\n        Error is to record the problem with the service port. The format of the error shall comply with the following rules:\n        - built-in error values shall be specified in this file and those shall use CamelCase names\n        - cloud provider specific error values must have names that comply with the format foo.example.com\/CamelCase.\n\n\n\n\n\n## ServiceList {#ServiceList}\n\nServiceList holds a list of services.\n\n<hr>\n\n- **apiVersion**: v1\n\n\n- **kind**: ServiceList\n\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

- **items** ([]Service), required

  List of services

## Operations {#Operations}

<hr>

### `get` read the specified Service

#### HTTP Request

GET /api/v1/namespaces/{namespace}/services/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the Service

- **namespace** (*in path*): string, required
- **pretty** (*in query*): string

#### Response

200 (Service): OK

401: Unauthorized

### `get` read status of the specified Service

#### HTTP Request

GET /api/v1/namespaces/{namespace}/services/{name}/status

#### Parameters

- **name** (*in path*): string, required

  name of the Service

- **namespace** (*in path*): string, required
- **pretty** (*in query*): string

#### Response

200 (Service): OK

401: Unauthorized

### `list` list or watch objects of kind Service

#### HTTP Request

GET /api/v1/namespaces/{namespace}/services

#### Parameters

- **namespace** (*in path*): string, required
- **allowWatchBookmarks** (*in query*): boolean
- **continue** (*in query*): string
- **fieldSelector** (*in query*): string
- **labelSelector** (*in query*): string
- **limit** (*in query*): integer
- **pretty** (*in query*): string
- **resourceVersion** (*in query*): string
- **resourceVersionMatch** (*in query*): string
- **sendInitialEvents** (*in query*): boolean
- **timeoutSeconds** (*in query*): integer
- **watch** (*in query*): boolean

#### Response

200 (ServiceList): OK

401: Unauthorized

### `list` list or watch objects of kind Service

#### HTTP Request

GET /api/v1/services

#### Parameters

- **allowWatchBookmarks** (*in query*): boolean
- **continue** (*in query*): string
- **fieldSelector** (*in query*): string
- **labelSelector** (*in query*): string
- **limit** (*in query*): integer
- **pretty** (*in query*): string
- **resourceVersion** (*in query*): string
- **resourceVersionMatch** (*in query*): string
- **sendInitialEvents** (*in query*): boolean
- **timeoutSeconds** (*in query*): integer
- **watch** (*in query*): boolean

#### Response

200 (ServiceList): OK

401: Unauthorized

### `create` create a Service

#### HTTP Request

POST /api/v1/namespaces/{namespace}/services

#### Parameters

- **namespace** (*in path*): string, required
- **body**: Service, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **pretty** (*in query*): string

#### Response

200 (Service): OK

201 (Service): Created

202 (Service): Accepted

401: Unauthorized

### `update` replace the specified Service

#### HTTP Request

PUT /api/v1/namespaces/{namespace}/services/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the Service

- **namespace** (*in path*): string, required
- **body**: Service, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **pretty** (*in query*): string

#### Response

200 (Service): OK

201 (Service): Created

401: Unauthorized

### `update` replace status of the specified Service

#### HTTP Request

PUT /api/v1/namespaces/{namespace}/services/{name}/status

#### Parameters

- **name** (*in path*): string, required

  name of the Service

- **namespace** (*in path*): string, required
- **body**: Service, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **pretty** (*in query*): string

#### Response

200 (Service): OK

201 (Service): Created

401: Unauthorized

### `patch` partially update the specified Service

#### HTTP Request

PATCH /api/v1/namespaces/{namespace}/services/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the Service

- **namespace** (*in path*): string, required
- **body**: Patch, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **force** (*in query*): boolean
- **pretty** (*in query*): string

#### Response

200 (Service): OK

201 (Service): Created

401: Unauthorized

### `patch` partially update status of the specified Service

#### HTTP Request

PATCH /api/v1/namespaces/{namespace}/services/{name}/status

#### Parameters

- **name** (*in path*): string, required

  name of the Service

- **namespace** (*in path*): string, required
- **body**: Patch, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **force** (*in query*): boolean
- **pretty** (*in query*): string

#### Response

200 (Service): OK

201 (Service): Created

401: Unauthorized

### `delete` delete a Service

#### HTTP Request

DELETE /api/v1/namespaces/{namespace}/services/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the Service

- **namespace** (*in path*): string, required
- **body**: DeleteOptions
- **dryRun** (*in query*): string
- **gracePeriodSeconds** (*in query*): integer
- **pretty** (*in query*): string
- **propagationPolicy** (*in query*): string

#### Response

200 (Service): OK

202 (Service): Accepted

401: Unauthorized

### `deletecollection` delete collection of Service

#### HTTP Request

DELETE /api/v1/namespaces/{namespace}/services

#### Parameters

- **namespace** (*in path*): string, required
- **body**: DeleteOptions
- **continue** (*in query*): string
- **dryRun** (*in query*): string
- **fieldSelector** (*in query*): string
- **gracePeriodSeconds** (*in query*): integer
- **labelSelector** (*in query*): string
- **limit** (*in query*): integer
- **pretty** (*in query*): string
- **propagationPolicy** (*in query*): string
- **resourceVersion** (*in query*): string
- **resourceVersionMatch** (*in query*): string
- **sendInitialEvents** (*in query*): boolean
- **timeoutSeconds** (*in query*): integer

#### Response

200 (Status): OK

401: Unauthorized
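The `list` operations take their filters (`labelSelector`, `fieldSelector`, `limit`, and so on) as query parameters on the request URL. A minimal sketch of how such a URL is assembled, using only the Python standard library; the API server address is a placeholder, and a real request would also carry the credentials from your kubeconfig:

```python
from urllib.parse import urlencode


def build_list_url(host, namespace, label_selector=None, field_selector=None, limit=None):
    """Build a namespaced Service list URL with optional query filters."""
    path = f"/api/v1/namespaces/{namespace}/services"
    params = {}
    if label_selector:
        params["labelSelector"] = label_selector
    if field_selector:
        params["fieldSelector"] = field_selector
    if limit:
        params["limit"] = str(limit)
    query = urlencode(params)  # percent-encodes, e.g. "=" -> "%3D"
    return f"{host}{path}" + (f"?{query}" if query else "")


# Hypothetical API server address for illustration only.
url = build_list_url("https://cluster.example:6443", "default",
                     label_selector="app=mysql", limit=50)
```

Note that selector values must be percent-encoded, since `labelSelector=app=mysql` contains a literal `=` inside the parameter value.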
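The `patch` operations accept several patch encodings, selected by the request's `Content-Type` header. A sketch of a strategic merge patch body for `PATCH /api/v1/namespaces/{namespace}/services/{name}`; the selector change shown is hypothetical:

```python
import json

# Strategic merge patch body; sent with
# Content-Type: application/strategic-merge-patch+json.
# Other supported patch types: application/merge-patch+json,
# application/json-patch+json, and application/apply-patch+yaml
# (server-side apply).
patch = {
    "spec": {
        # Hypothetical change: route this Service to pods labeled
        # app=mysql,tier=backend.
        "selector": {"app": "mysql", "tier": "backend"},
    }
}

body = json.dumps(patch)      # serialized request body
decoded = json.loads(body)    # round-trip check
```

Unlike `update` (PUT), which replaces the whole object, a patch sends only the fields being changed; the `force` query parameter applies only to server-side apply.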
   resourceVersionMatch     in query    string     a href    resourceVersionMatch  a        sendInitialEvents     in query    boolean     a href    sendInitialEvents  a        timeoutSeconds     in query    integer     a href    timeoutSeconds  a          Response   200   a href    Status  a    OK  401  Unauthorized "}
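The `list`, `watch`, and `deletecollection` operations above all take the same family of query parameters (`labelSelector`, `fieldSelector`, `limit`, `resourceVersion`, `watch`, and so on). As a minimal sketch of how a client assembles such a request, the following Python snippet builds the query string for a collection call; the function name `build_list_query` is hypothetical, while the parameter names are the ones documented above:

```python
from urllib.parse import urlencode

def build_list_query(label_selector=None, field_selector=None,
                     limit=None, resource_version=None, watch=False):
    """Assemble the query string for a list/watch call such as
    GET /api/v1/namespaces/{namespace}/services.

    Parameter names mirror the reference above; only parameters the
    caller actually sets are included in the query string."""
    params = {}
    if label_selector:
        params["labelSelector"] = label_selector
    if field_selector:
        params["fieldSelector"] = field_selector
    if limit is not None:
        params["limit"] = limit
    if resource_version is not None:
        params["resourceVersion"] = resource_version
    if watch:
        params["watch"] = "true"
    return urlencode(params)

# e.g. list at most 10 Services labelled app=web:
print(build_list_query(label_selector="app=web", limit=10))
# -> labelSelector=app%3Dweb&limit=10
```

Note that `urlencode` percent-escapes the `=` inside the selector value (`app%3Dweb`); the API server decodes it back into the `key=value` selector expression.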
{"questions":"kubernetes reference Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend kind Ingress contenttype apireference title Ingress apimetadata weight 4 autogenerated true import k8s io api networking v1 apiVersion networking k8s io v1","answers":"---\napi_metadata:\n  apiVersion: \"networking.k8s.io\/v1\"\n  import: \"k8s.io\/api\/networking\/v1\"\n  kind: \"Ingress\"\ncontent_type: \"api_reference\"\ndescription: \"Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend.\"\ntitle: \"Ingress\"\nweight: 4\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n`apiVersion: networking.k8s.io\/v1`\n\n`import \"k8s.io\/api\/networking\/v1\"`\n\n\n## Ingress {#Ingress}\n\nIngress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.\n\n<hr>\n\n- **apiVersion**: networking.k8s.io\/v1\n\n\n- **kind**: Ingress\n\n\n- **metadata** (<a href=\"\">ObjectMeta<\/a>)\n\n  Standard object's metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n- **spec** (<a href=\"\">IngressSpec<\/a>)\n\n  spec is the desired state of the Ingress. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n- **status** (<a href=\"\">IngressStatus<\/a>)\n\n  status is the current state of the Ingress. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n\n## IngressSpec {#IngressSpec}\n\nIngressSpec describes the Ingress the user wishes to exist.\n\n<hr>\n\n- **defaultBackend** (<a href=\"\">IngressBackend<\/a>)\n\n  defaultBackend is the backend that should handle requests that don't match any rule. If Rules are not specified, DefaultBackend must be specified. If DefaultBackend is not set, the handling of requests that do not match any of the rules will be up to the Ingress controller.\n\n- **ingressClassName** (string)\n\n  ingressClassName is the name of an IngressClass cluster resource. Ingress controller implementations use this field to know whether they should be serving this Ingress resource, by a transitive connection (controller -> IngressClass -> Ingress resource). Although the `kubernetes.io\/ingress.class` annotation (simple constant name) was never formally defined, it was widely supported by Ingress controllers to create a direct binding between Ingress controller and Ingress resources. Newly created Ingress resources should prefer using the field. However, even though the annotation is officially deprecated, for backwards compatibility reasons, ingress controllers should still honor that annotation if present.\n\n- **rules** ([]IngressRule)\n\n  *Atomic: will be replaced during a merge*\n  \n  rules is a list of host rules used to configure the Ingress. If unspecified, or no rule matches, all traffic is sent to the default backend.\n\n  <a name=\"IngressRule\"><\/a>\n  *IngressRule represents the rules mapping the paths under a specified host to the related backend services. 
Incoming requests are first evaluated for a host match, then routed to the backend associated with the matching IngressRuleValue.*\n\n  - **rules.host** (string)\n\n    host is the fully qualified domain name of a network host, as defined by RFC 3986. Note the following deviations from the \"host\" part of the URI as defined in RFC 3986: 1. IPs are not allowed. Currently an IngressRuleValue can only apply to\n       the IP in the Spec of the parent Ingress.\n    2. The `:` delimiter is not respected because ports are not allowed.\n    \t  Currently the port of an Ingress is implicitly :80 for http and\n    \t  :443 for https.\n    Both these may change in the future. Incoming requests are matched against the host before the IngressRuleValue. If the host is unspecified, the Ingress routes all traffic based on the specified IngressRuleValue.\n    \n    host can be \"precise\" which is a domain name without the terminating dot of a network host (e.g. \"foo.bar.com\") or \"wildcard\", which is a domain name prefixed with a single wildcard label (e.g. \"*.foo.com\"). The wildcard character '*' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == \"*\"). Requests will be matched against the Host field in the following way: 1. If host is precise, the request matches this rule if the http host header is equal to Host. 2. If host is a wildcard, then the request matches this rule if the http host header is equal to the suffix (removing the first label) of the wildcard rule.\n\n  - **rules.http** (HTTPIngressRuleValue)\n\n\n    <a name=\"HTTPIngressRuleValue\"><\/a>\n    *HTTPIngressRuleValue is a list of http selectors pointing to backends. In the example: http:\/\/<host>\/<path>?<searchpart> -> backend where parts of the url correspond to RFC 3986, this resource will be used to match against everything after the last '\/' and before the first '?' 
or '#'.*\n\n    - **rules.http.paths** ([]HTTPIngressPath), required\n\n      *Atomic: will be replaced during a merge*\n      \n      paths is a collection of paths that map requests to backends.\n\n      <a name=\"HTTPIngressPath\"><\/a>\n      *HTTPIngressPath associates a path with a backend. Incoming urls matching the path are forwarded to the backend.*\n\n      - **rules.http.paths.backend** (<a href=\"\">IngressBackend<\/a>), required\n\n        backend defines the referenced service endpoint to which the traffic will be forwarded.\n\n      - **rules.http.paths.pathType** (string), required\n\n        pathType determines the interpretation of the path matching. PathType can be one of the following values: * Exact: Matches the URL path exactly. * Prefix: Matches based on a URL path prefix split by '\/'. Matching is\n          done on a path element by element basis. A path element refers to the\n          list of labels in the path split by the '\/' separator. A request is a\n          match for path p if every element of p is an element-wise prefix of the\n          request path. Note that if the last element of the path is a substring\n          of the last element in request path, it is not a match (e.g. \/foo\/bar\n          matches \/foo\/bar\/baz, but does not match \/foo\/barbaz).\n        * ImplementationSpecific: Interpretation of the Path matching is up to\n          the IngressClass. Implementations can treat this as a separate PathType\n          or treat it identically to Prefix or Exact path types.\n        Implementations are required to support all path types.\n\n      - **rules.http.paths.path** (string)\n\n        path is matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional \"path\" part of a URL as defined by RFC 3986. 
Paths must begin with a '\/' and must be present when using PathType with value \"Exact\" or \"Prefix\".\n\n- **tls** ([]IngressTLS)\n\n  *Atomic: will be replaced during a merge*\n  \n  tls represents the TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI.\n\n  <a name=\"IngressTLS\"><\/a>\n  *IngressTLS describes the transport layer security associated with an ingress.*\n\n  - **tls.hosts** ([]string)\n\n    *Atomic: will be replaced during a merge*\n    \n    hosts is a list of hosts included in the TLS certificate. The values in this list must match the name\/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified.\n\n  - **tls.secretName** (string)\n\n    secretName is the name of the secret used to terminate TLS traffic on port 443. Field is left optional to allow TLS routing based on SNI hostname alone. If the SNI host in a listener conflicts with the \"Host\" header field used by an IngressRule, the SNI host is used for termination and value of the \"Host\" header is used for routing.\n\n\n\n\n\n## IngressBackend {#IngressBackend}\n\nIngressBackend describes all endpoints for a given service and port.\n\n<hr>\n\n- **resource** (<a href=\"\">TypedLocalObjectReference<\/a>)\n\n  resource is an ObjectRef to another Kubernetes resource in the namespace of the Ingress object. If resource is specified, a service.Name and service.Port must not be specified. This is a mutually exclusive setting with \"Service\".\n\n- **service** (IngressServiceBackend)\n\n  service references a service as a backend. 
This is a mutually exclusive setting with \"Resource\".\n\n  <a name=\"IngressServiceBackend\"><\/a>\n  *IngressServiceBackend references a Kubernetes Service as a Backend.*\n\n  - **service.name** (string), required\n\n    name is the referenced service. The service must exist in the same namespace as the Ingress object.\n\n  - **service.port** (ServiceBackendPort)\n\n    port of the referenced service. A port name or port number is required for an IngressServiceBackend.\n\n    <a name=\"ServiceBackendPort\"><\/a>\n    *ServiceBackendPort is the service port being referenced.*\n\n    - **service.port.name** (string)\n\n      name is the name of the port on the Service. This is a mutually exclusive setting with \"Number\".\n\n    - **service.port.number** (int32)\n\n      number is the numerical port number (e.g. 80) on the Service. This is a mutually exclusive setting with \"Name\".\n\n\n\n\n\n## IngressStatus {#IngressStatus}\n\nIngressStatus describes the current state of the Ingress.\n\n<hr>\n\n- **loadBalancer** (IngressLoadBalancerStatus)\n\n  loadBalancer contains the current status of the load-balancer.\n\n  <a name=\"IngressLoadBalancerStatus\"><\/a>\n  *IngressLoadBalancerStatus represents the status of a load-balancer.*\n\n  - **loadBalancer.ingress** ([]IngressLoadBalancerIngress)\n\n    *Atomic: will be replaced during a merge*\n    \n    ingress is a list containing ingress points for the load-balancer.\n\n    <a name=\"IngressLoadBalancerIngress\"><\/a>\n    *IngressLoadBalancerIngress represents the status of a load-balancer ingress point.*\n\n    - **loadBalancer.ingress.hostname** (string)\n\n      hostname is set for load-balancer ingress points that are DNS based.\n\n    - **loadBalancer.ingress.ip** (string)\n\n      ip is set for load-balancer ingress points that are IP based.\n\n    - **loadBalancer.ingress.ports** ([]IngressPortStatus)\n\n      *Atomic: will be replaced during a merge*\n      \n      ports provides information about the ports 
exposed by this LoadBalancer.\n\n      <a name=\"IngressPortStatus\"><\/a>\n      *IngressPortStatus represents the error condition of a service port*\n\n      - **loadBalancer.ingress.ports.port** (int32), required\n\n        port is the port number of the ingress port.\n\n      - **loadBalancer.ingress.ports.protocol** (string), required\n\n        protocol is the protocol of the ingress port. The supported values are: \"TCP\", \"UDP\", \"SCTP\"\n\n      - **loadBalancer.ingress.ports.error** (string)\n\n        error is to record the problem with the service port. The format of the error shall comply with the following rules: - built-in error values shall be specified in this file and those shall use\n          CamelCase names\n        - cloud provider specific error values must have names that comply with the\n          format foo.example.com\/CamelCase.\n\n\n\n\n\n## IngressList {#IngressList}\n\nIngressList is a collection of Ingress.\n\n<hr>\n\n- **items** ([]<a href=\"\">Ingress<\/a>), required\n\n  items is the list of Ingress.\n\n- **apiVersion** (string)\n\n  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources\n\n- **kind** (string)\n\n  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard object's metadata. 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n\n\n\n\n## Operations {#Operations}\n\n\n\n<hr>\n\n\n\n\n\n\n### `get` read the specified Ingress\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/ingresses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Ingress\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Ingress<\/a>): OK\n\n401: Unauthorized\n\n\n### `get` read status of the specified Ingress\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/ingresses\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Ingress\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Ingress<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Ingress\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/ingresses\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a 
href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IngressList<\/a>): OK\n\n401: Unauthorized\n\n\n### `list` list or watch objects of kind Ingress\n\n#### HTTP Request\n\nGET \/apis\/networking.k8s.io\/v1\/ingresses\n\n#### Parameters\n\n\n- **allowWatchBookmarks** (*in query*): boolean\n\n  <a href=\"\">allowWatchBookmarks<\/a>\n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n- **watch** (*in query*): boolean\n\n  <a href=\"\">watch<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">IngressList<\/a>): OK\n\n401: Unauthorized\n\n\n### `create` create an Ingress\n\n#### HTTP Request\n\nPOST \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/ingresses\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Ingress<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a 
href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Ingress<\/a>): OK\n\n201 (<a href=\"\">Ingress<\/a>): Created\n\n202 (<a href=\"\">Ingress<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `update` replace the specified Ingress\n\n#### HTTP Request\n\nPUT \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/ingresses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Ingress\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Ingress<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Ingress<\/a>): OK\n\n201 (<a href=\"\">Ingress<\/a>): Created\n\n401: Unauthorized\n\n\n### `update` replace status of the specified Ingress\n\n#### HTTP Request\n\nPUT \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/ingresses\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Ingress\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Ingress<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Ingress<\/a>): OK\n\n201 (<a href=\"\">Ingress<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update the specified Ingress\n\n#### HTTP 
Request\n\nPATCH \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/ingresses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Ingress\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Ingress<\/a>): OK\n\n201 (<a href=\"\">Ingress<\/a>): Created\n\n401: Unauthorized\n\n\n### `patch` partially update status of the specified Ingress\n\n#### HTTP Request\n\nPATCH \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/ingresses\/{name}\/status\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Ingress\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">Patch<\/a>, required\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldManager** (*in query*): string\n\n  <a href=\"\">fieldManager<\/a>\n\n\n- **fieldValidation** (*in query*): string\n\n  <a href=\"\">fieldValidation<\/a>\n\n\n- **force** (*in query*): boolean\n\n  <a href=\"\">force<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Ingress<\/a>): OK\n\n201 (<a href=\"\">Ingress<\/a>): Created\n\n401: Unauthorized\n\n\n### `delete` delete an Ingress\n\n#### HTTP Request\n\nDELETE \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/ingresses\/{name}\n\n#### Parameters\n\n\n- **name** (*in path*): string, required\n\n  name of the Ingress\n\n\n- **namespace** (*in path*): 
string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n202 (<a href=\"\">Status<\/a>): Accepted\n\n401: Unauthorized\n\n\n### `deletecollection` delete collection of Ingress\n\n#### HTTP Request\n\nDELETE \/apis\/networking.k8s.io\/v1\/namespaces\/{namespace}\/ingresses\n\n#### Parameters\n\n\n- **namespace** (*in path*): string, required\n\n  <a href=\"\">namespace<\/a>\n\n\n- **body**: <a href=\"\">DeleteOptions<\/a>\n\n  \n\n\n- **continue** (*in query*): string\n\n  <a href=\"\">continue<\/a>\n\n\n- **dryRun** (*in query*): string\n\n  <a href=\"\">dryRun<\/a>\n\n\n- **fieldSelector** (*in query*): string\n\n  <a href=\"\">fieldSelector<\/a>\n\n\n- **gracePeriodSeconds** (*in query*): integer\n\n  <a href=\"\">gracePeriodSeconds<\/a>\n\n\n- **labelSelector** (*in query*): string\n\n  <a href=\"\">labelSelector<\/a>\n\n\n- **limit** (*in query*): integer\n\n  <a href=\"\">limit<\/a>\n\n\n- **pretty** (*in query*): string\n\n  <a href=\"\">pretty<\/a>\n\n\n- **propagationPolicy** (*in query*): string\n\n  <a href=\"\">propagationPolicy<\/a>\n\n\n- **resourceVersion** (*in query*): string\n\n  <a href=\"\">resourceVersion<\/a>\n\n\n- **resourceVersionMatch** (*in query*): string\n\n  <a href=\"\">resourceVersionMatch<\/a>\n\n\n- **sendInitialEvents** (*in query*): boolean\n\n  <a href=\"\">sendInitialEvents<\/a>\n\n\n- **timeoutSeconds** (*in query*): integer\n\n  <a href=\"\">timeoutSeconds<\/a>\n\n\n\n#### Response\n\n\n200 (<a href=\"\">Status<\/a>): OK\n\n401: Unauthorized\n","site":"kubernetes reference","answers_cleaned":"    api 
metadata    apiVersion   networking k8s io v1    import   k8s io api networking v1    kind   Ingress  content type   api reference  description   Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend   title   Ingress  weight  4 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project        apiVersion  networking k8s io v1    import  k8s io api networking v1        Ingress   Ingress   Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend  An Ingress can be configured to give services externally reachable urls  load balance traffic  terminate SSL  offer name based virtual hosting etc    hr       apiVersion    networking k8s io v1       kind    Ingress       metadata     a href    ObjectMeta  a      Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata      spec     a href    IngressSpec  a      spec is the desired state of the Ingress  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status      status     a href    IngressStatus  a      status is the current state of the Ingress  More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status         IngressSpec   IngressSpec   IngressSpec describes the Ingress the user wishes to exist    hr       
- **defaultBackend** (IngressBackend)

  defaultBackend is the backend that should handle requests that don't match any rule. If Rules are not specified, DefaultBackend must be specified. If DefaultBackend is not set, the handling of requests that do not match any of the rules will be up to the Ingress controller.

- **ingressClassName** (string)

  ingressClassName is the name of an IngressClass cluster resource. Ingress controller implementations use this field to know whether they should be serving this Ingress resource, by a transitive connection (controller -> IngressClass -> Ingress resource). Although the `kubernetes.io/ingress.class` annotation (simple constant name) was never formally defined, it was widely supported by Ingress controllers to create a direct binding between Ingress controller and Ingress resources. Newly created Ingress resources should prefer using the field. However, even though the annotation is officially deprecated, for backwards compatibility reasons, ingress controllers should still honor that annotation if present.

- **rules** ([]IngressRule)

  *Atomic: will be replaced during a merge*

  rules is a list of host rules used to configure the Ingress. If unspecified, or no rule matches, all traffic is sent to the default backend.

  <a name="IngressRule"></a>
  *IngressRule represents the rules mapping the paths under a specified host to the related backend services. Incoming requests are first evaluated for a host match, then routed to the backend associated with the matching IngressRuleValue.*

  - **rules.host** (string)

    host is the fully qualified domain name of a network host, as defined by RFC 3986. Note the following deviations from the "host" part of the URI as defined in RFC 3986: 1. IPs are not allowed. Currently an IngressRuleValue can only apply to the IP in the Spec of the parent Ingress. 2. The `:` delimiter is not respected because ports are not allowed. Currently the port of an Ingress is implicitly :80 for http and :443 for https. Both these may change in the future. Incoming requests are matched against the host before the IngressRuleValue. If the host is unspecified, the Ingress routes all traffic based on the specified IngressRuleValue.

    host can be "precise", which is a domain name without the terminating dot of a network host (e.g. "foo.bar.com"), or "wildcard", which is a domain name prefixed with a single wildcard label (e.g. "*.foo.com"). The wildcard character '*' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == "*"). Requests will be matched against the Host field in the following way: 1. If host is precise, the request matches this rule if the http host header is equal to Host. 2. If host is a wildcard, then the request matches this rule if the http host header is equal to the suffix (removing the first label) of the wildcard rule.

  - **rules.http** (HTTPIngressRuleValue)

    <a name="HTTPIngressRuleValue"></a>
    *HTTPIngressRuleValue is a list of http selectors pointing to backends. In the example: http://\<host\>/\<path\>?\<searchpart\> -> backend, where parts of the url correspond to RFC 3986, this resource will be used to match against everything after the last '/' and before the first '?' or '#'.*

    - **rules.http.paths** ([]HTTPIngressPath), required

      *Atomic: will be replaced during a merge*

      paths is a collection of paths that map requests to backends.

      <a name="HTTPIngressPath"></a>
      *HTTPIngressPath associates a path with a backend. Incoming urls matching the path are forwarded to the backend.*

      - **rules.http.paths.backend** (IngressBackend), required

        backend defines the referenced service endpoint to which the traffic will be forwarded to.

      - **rules.http.paths.pathType** (string), required

        pathType determines the interpretation of the path matching. PathType can be one of the following values: * Exact: Matches the URL path exactly. * Prefix: Matches based on a URL path prefix split by '/'. Matching is done on a path element by element basis. A path element refers to the list of labels in the path split by the '/' separator. A request is a match for path p if every p is an element-wise prefix of p of the request path. Note that if the last element of the path is a substring of the last element in request path, it is not a match (e.g. /foo/bar matches /foo/bar/baz, but does not match /foo/barbaz). * ImplementationSpecific: Interpretation of the Path matching is up to the IngressClass. Implementations can treat this as a separate PathType or treat it identically to Prefix or Exact path types. Implementations are required to support all path types.

      - **rules.http.paths.path** (string)

        path is matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional "path" part of a URL as defined by RFC 3986. Paths must begin with a '/' and must be present when using PathType with value "Exact" or "Prefix".

- **tls** ([]IngressTLS)

  *Atomic: will be replaced during a merge*

  tls represents the TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI.

  <a name="IngressTLS"></a>
  *IngressTLS describes the transport layer security associated with an ingress.*

  - **tls.hosts** ([]string)

    *Atomic: will be replaced during a merge*

    hosts is a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified.

  - **tls.secretName** (string)

    secretName is the name of the secret used to terminate TLS traffic on port 443. Field is left optional to allow TLS routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and value of the "Host" header is used for routing.

## IngressBackend {#IngressBackend}

IngressBackend describes all endpoints for a given service and port.

<hr>

- **resource** (TypedLocalObjectReference)

  resource is an ObjectRef to another Kubernetes resource in the namespace of the Ingress object. If resource is specified, a service.Name and service.Port must not be specified. This is a mutually exclusive setting with "Service".

- **service** (IngressServiceBackend)

  service references a service as a backend. This is a mutually exclusive setting with "Resource".

  <a name="IngressServiceBackend"></a>
  *IngressServiceBackend references a Kubernetes Service as a Backend.*

  - **service.name** (string), required

    name is the referenced service. The service must exist in the same namespace as the Ingress object.

  - **service.port** (ServiceBackendPort)

    port of the referenced service. A port name or port number is required for a IngressServiceBackend.

    <a name="ServiceBackendPort"></a>
    *ServiceBackendPort is the service port being referenced.*

    - **service.port.name** (string)

      name is the name of the port on the Service. This is a mutually exclusive setting with "Number".

    - **service.port.number** (int32)

      number is the numerical port number (e.g. 80) on the Service. This is a mutually exclusive setting with "Name".

## IngressStatus {#IngressStatus}

IngressStatus describe the current state of the Ingress.

<hr>

- **loadBalancer** (IngressLoadBalancerStatus)

  loadBalancer contains the current status of the load-balancer.

  <a name="IngressLoadBalancerStatus"></a>
  *IngressLoadBalancerStatus represents the status of a load-balancer.*

  - **loadBalancer.ingress** ([]IngressLoadBalancerIngress)

    *Atomic: will be replaced during a merge*

    ingress is a list containing ingress points for the load-balancer.

    <a name="IngressLoadBalancerIngress"></a>
    *IngressLoadBalancerIngress represents the status of a load-balancer ingress point.*

    - **loadBalancer.ingress.hostname** (string)

      hostname is set for load-balancer ingress points that are DNS based.

    - **loadBalancer.ingress.ip** (string)

      ip is set for load-balancer ingress points that are IP based.

    - **loadBalancer.ingress.ports** ([]IngressPortStatus)

      *Atomic: will be replaced during a merge*

      ports provides information about the ports exposed by this LoadBalancer.

      <a name="IngressPortStatus"></a>
      *IngressPortStatus represents the error condition of a service port.*

      - **loadBalancer.ingress.ports.port** (int32), required

        port is the port number of the ingress port.

      - **loadBalancer.ingress.ports.protocol** (string), required

        protocol is the protocol of the ingress port. The supported values are: "TCP", "UDP", "SCTP".

      - **loadBalancer.ingress.ports.error** (string)

        error is to record the problem with the service port. The format of the error shall comply with the following rules: - built-in error values shall be specified in this file and those shall use CamelCase names - cloud provider specific error values must have names that comply with the format foo.example.com/CamelCase.

## IngressList {#IngressList}

IngressList is a collection of Ingress.

<hr>

- **items** ([]Ingress), required

  items is the list of Ingress.

- **apiVersion** (string)

  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

- **kind** (string)

  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

- **metadata** (ListMeta)

  Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

## Operations {#Operations}

<hr>

### `get` read the specified Ingress

#### HTTP Request

GET /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the Ingress

- **namespace** (*in path*): string, required
- **pretty** (*in query*): string

#### Response

200 (Ingress): OK

401: Unauthorized

### `get` read status of the specified Ingress

#### HTTP Request

GET /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status

#### Parameters

- **name** (*in path*): string, required

  name of the Ingress

- **namespace** (*in path*): string, required
- **pretty** (*in query*): string

#### Response

200 (Ingress): OK

401: Unauthorized

### `list` list or watch objects of kind Ingress

#### HTTP Request

GET /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses

#### Parameters

- **namespace** (*in path*): string, required
- **allowWatchBookmarks** (*in query*): boolean
- **continue** (*in query*): string
- **fieldSelector** (*in query*): string
- **labelSelector** (*in query*): string
- **limit** (*in query*): integer
- **pretty** (*in query*): string
- **resourceVersion** (*in query*): string
- **resourceVersionMatch** (*in query*): string
- **sendInitialEvents** (*in query*): boolean
- **timeoutSeconds** (*in query*): integer
- **watch** (*in query*): boolean

#### Response

200 (IngressList): OK

401: Unauthorized

### `list` list or watch objects of kind Ingress

#### HTTP Request

GET /apis/networking.k8s.io/v1/ingresses

#### Parameters

- **allowWatchBookmarks** (*in query*): boolean
- **continue** (*in query*): string
- **fieldSelector** (*in query*): string
- **labelSelector** (*in query*): string
- **limit** (*in query*): integer
- **pretty** (*in query*): string
- **resourceVersion** (*in query*): string
- **resourceVersionMatch** (*in query*): string
- **sendInitialEvents** (*in query*): boolean
- **timeoutSeconds** (*in query*): integer
- **watch** (*in query*): boolean

#### Response

200 (IngressList): OK

401: Unauthorized

### `create` create an Ingress

#### HTTP Request

POST /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses

#### Parameters

- **namespace** (*in path*): string, required
- **body**: Ingress, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **pretty** (*in query*): string

#### Response

200 (Ingress): OK

201 (Ingress): Created

202 (Ingress): Accepted

401: Unauthorized

### `update` replace the specified Ingress

#### HTTP Request

PUT /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the Ingress

- **namespace** (*in path*): string, required
- **body**: Ingress, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **pretty** (*in query*): string

#### Response

200 (Ingress): OK

201 (Ingress): Created

401: Unauthorized

### `update` replace status of the specified Ingress

#### HTTP Request

PUT /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status

#### Parameters

- **name** (*in path*): string, required

  name of the Ingress

- **namespace** (*in path*): string, required
- **body**: Ingress, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **pretty** (*in query*): string

#### Response

200 (Ingress): OK

201 (Ingress): Created

401: Unauthorized

### `patch` partially update the specified Ingress

#### HTTP Request

PATCH /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the Ingress

- **namespace** (*in path*): string, required
- **body**: Patch, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **force** (*in query*): boolean
- **pretty** (*in query*): string

#### Response

200 (Ingress): OK

201 (Ingress): Created

401: Unauthorized

### `patch` partially update status of the specified Ingress

#### HTTP Request

PATCH /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status

#### Parameters

- **name** (*in path*): string, required

  name of the Ingress

- **namespace** (*in path*): string, required
- **body**: Patch, required
- **dryRun** (*in query*): string
- **fieldManager** (*in query*): string
- **fieldValidation** (*in query*): string
- **force** (*in query*): boolean
- **pretty** (*in query*): string

#### Response

200 (Ingress): OK

201 (Ingress): Created

401: Unauthorized

### `delete` delete an Ingress

#### HTTP Request

DELETE /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}

#### Parameters

- **name** (*in path*): string, required

  name of the Ingress

- **namespace** (*in path*): string, required
- **body**: DeleteOptions
- **dryRun** (*in query*): string
- **gracePeriodSeconds** (*in query*): integer
- **pretty** (*in query*): string
- **propagationPolicy** (*in query*): string

#### Response

200 (Status): OK

202 (Status): Accepted

401: Unauthorized

### `deletecollection` delete collection of Ingress

#### HTTP Request

DELETE /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses

#### Parameters

- **namespace** (*in path*): string, required
- **body**: DeleteOptions
- **continue** (*in query*): string
- **dryRun** (*in query*): string
- **fieldSelector** (*in query*): string
- **gracePeriodSeconds** (*in query*): integer
- **labelSelector** (*in query*): string
- **limit** (*in query*): integer
- **pretty** (*in query*): string
- **propagationPolicy** (*in query*): string
- **resourceVersion** (*in query*): string
- **resourceVersionMatch** (*in query*): string
- **sendInitialEvents** (*in query*): boolean
- **timeoutSeconds** (*in query*): integer

#### Response

200 (Status): OK

401: Unauthorized
"}
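The spec fields above (host rules, `pathType`, `defaultBackend`, `tls`) can be tied together in a minimal manifest sketch; the hostname, service names, and secret name below are hypothetical placeholders, not values from the reference:

```yaml
# Minimal Ingress sketch; example.com, example-service, fallback-service,
# and example-tls are hypothetical placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx          # binds this Ingress to a controller via the field, not the deprecated annotation
  defaultBackend:                  # handles requests that match no rule
    service:
      name: fallback-service
      port:
        number: 80
  tls:
  - hosts:
    - example.com
    secretName: example-tls        # Secret used to terminate TLS on port 443
  rules:
  - host: example.com              # "precise" host: matched exactly against the Host header
    http:
      paths:
      - path: /app                 # Prefix: /app and /app/... match, /appx does not
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 8080
```

Because `pathType: Prefix` matches element by element, a request for `/app/login` is routed to `example-service`, while `/application` falls through to the default backend.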
{"questions":"kubernetes reference kind Common Parameters title Common Parameters apiVersion contenttype apireference import weight 10 apimetadata autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"\"\n  import: \"\"\n  kind: \"Common Parameters\"\ncontent_type: \"api_reference\"\ndescription: \"\"\ntitle: \"Common Parameters\"\nweight: 10\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n\n\n\n\n## allowWatchBookmarks {#allowWatchBookmarks}\n\nallowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.\n\n<hr>\n\n\n\n\n\n## continue {#continue}\n\nThe continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the \"next key\".\n\nThis field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.\n\n<hr>\n\n\n\n\n\n## dryRun {#dryRun}\n\nWhen present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed\n\n<hr>\n\n\n\n\n\n## fieldManager {#fieldManager}\n\nfieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https:\/\/golang.org\/pkg\/unicode\/#IsPrint.\n\n<hr>\n\n\n\n\n\n## fieldSelector {#fieldSelector}\n\nA selector to restrict the list of returned objects by their fields. Defaults to everything.\n\n<hr>\n\n\n\n\n\n## fieldValidation {#fieldValidation}\n\nfieldValidation instructs the server on how to handle objects in the request (POST\/PUT\/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.\n\n<hr>\n\n\n\n\n\n## force {#force}\n\nForce is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.\n\n<hr>\n\n\n\n\n\n## gracePeriodSeconds {#gracePeriodSeconds}\n\nThe duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.\n\n<hr>\n\n\n\n\n\n## labelSelector {#labelSelector}\n\nA selector to restrict the list of returned objects by their labels. Defaults to everything.\n\n<hr>\n\n\n\n\n\n## limit {#limit}\n\nlimit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.\n\n<hr>\n\n\n\n\n\n## namespace {#namespace}\n\nobject name and auth scope, such as for teams and projects\n\n<hr>\n\n\n\n\n\n## pretty {#pretty}\n\nIf 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget).\n\n<hr>\n\n\n\n\n\n## propagationPolicy {#propagationPolicy}\n\nWhether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.\n\n<hr>\n\n\n\n\n\n## resourceVersion {#resourceVersion}\n\nresourceVersion sets a constraint on what resource versions a request may be served from. See https:\/\/kubernetes.io\/docs\/reference\/using-api\/api-concepts\/#resource-versions for details.\n\nDefaults to unset\n\n<hr>\n\n\n\n\n\n## resourceVersionMatch {#resourceVersionMatch}\n\nresourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https:\/\/kubernetes.io\/docs\/reference\/using-api\/api-concepts\/#resource-versions for details.\n\nDefaults to unset\n\n<hr>\n\n\n\n\n\n## sendInitialEvents {#sendInitialEvents}\n\n`sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic \"Bookmark\" event  will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `\"k8s.io\/initial-events-end\": \"true\"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.\n\nWhen `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan\n  is interpreted as \"data at least as new as the provided `resourceVersion`\"\n  and the bookmark event is sent when the state is synced\n  to a `resourceVersion` at least as fresh as the one provided by the ListOptions.\n  If `resourceVersion` is unset, this is interpreted as \"consistent read\" and the\n  bookmark event is sent when the state is synced at least to the moment\n  when request started being processed.\n- `resourceVersionMatch` set to any other value or unset\n  Invalid error is returned.\n\nDefaults to true if `resourceVersion=\"\"` or `resourceVersion=\"0\"` (for backward compatibility reasons) and to false otherwise.\n\n<hr>\n\n\n\n\n\n## timeoutSeconds {#timeoutSeconds}\n\nTimeout for the list\/watch call. This limits the duration of the call, regardless of any activity or inactivity.\n\n<hr>\n\n\n\n\n\n## watch {#watch}\n\nWatch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion.\n\n<hr>\n\n\n\n\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion       import       kind   Common Parameters  content type   api reference  description     title   Common Parameters  weight  10 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project               allowWatchBookmarks   allowWatchBookmarks   allowWatchBookmarks requests watch events with type  BOOKMARK   Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server s discretion  Clients should not assume bookmarks are returned at any specific interval  nor may they assume the server will send any BOOKMARK event during a session  If this is not a watch  this field is ignored    hr          continue   continue   The continue option should be set when retrieving more results from the server  Since this value is server defined  clients may only use the continue value from a previous query result with identical query parameters  except for the value of continue  and the server may reject a continue value it does not recognize  If the specified continue value is no longer valid whether due to expiration  generally five to fifteen minutes  or a configuration change on the server  the server will respond with a 410 ResourceExpired error together with a continue token  If the client needs a consistent list  it must restart their list without the continue field  Otherwise  the client 
may send another list request with the token received with the 410 error  the server will respond with a list starting from the next key  but from the latest snapshot  which is inconsistent from the previous list results   objects that are created  modified  or deleted after the first list request will be included in the response  as long as their keys are after the  next key    This field is not supported when watch is true  Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications    hr          dryRun   dryRun   When present  indicates that modifications should not be persisted  An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request  Valid values are    All  all dry run stages will be processed   hr          fieldManager   fieldManager   fieldManager is a name associated with the actor or entity that is making these changes  The value must be less than or 128 characters long  and only contain printable characters  as defined by https   golang org pkg unicode  IsPrint    hr          fieldSelector   fieldSelector   A selector to restrict the list of returned objects by their fields  Defaults to everything    hr          fieldValidation   fieldValidation   fieldValidation instructs the server on how to handle objects in the request  POST PUT PATCH  containing unknown or duplicate fields  Valid values are    Ignore  This will ignore any unknown fields that are silently dropped from the object  and will ignore all but the last duplicate field that the decoder encounters  This is the default behavior prior to v1 23    Warn  This will send a warning via the standard warning response header for each unknown field that is dropped from the object  and for each duplicate field that is encountered  The request will still succeed if there are no other errors  and will only persist the last of any duplicate fields  This is the default in v1 23    Strict  
This will fail the request with a BadRequest error if any unknown fields would be dropped from the object  or if any duplicate fields are present  The error returned from the server will contain all unknown and duplicate fields encountered    hr          force   force   Force is going to  force  Apply requests  It means user will re acquire conflicting fields owned by other people  Force flag must be unset for non apply patch requests    hr          gracePeriodSeconds   gracePeriodSeconds   The duration in seconds before the object should be deleted  Value must be non negative integer  The value zero indicates delete immediately  If this value is nil  the default grace period for the specified type will be used  Defaults to a per object value if not specified  zero means delete immediately    hr          labelSelector   labelSelector   A selector to restrict the list of returned objects by their labels  Defaults to everything    hr          limit   limit   limit is a maximum number of responses to return for a list call  If more items exist  the server will set the  continue  field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results  Setting a limit may return fewer than the requested amount of items  up to zero items  in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available  Servers may choose not to support the limit argument and will return all of the available results  If limit is specified and the continue field is empty  clients may assume that no more results are available  This field is not supported if watch is true   The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit   that is  no objects created  modified  or deleted after the first request is issued will be included in any subsequent continued requests  This 
is sometimes referred to as a consistent snapshot  and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects  If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned    hr          namespace   namespace   object name and auth scope  such as for teams and projects   hr          pretty   pretty   If  true   then the output is pretty printed  Defaults to  false  unless the user agent indicates a browser or command line HTTP tool  curl and wget     hr          propagationPolicy   propagationPolicy   Whether and how garbage collection will be performed  Either this field or OrphanDependents may be set  but not both  The default policy is decided by the existing finalizer set in the metadata finalizers and the resource specific default policy  Acceptable values are   Orphan    orphan the dependents   Background    allow the garbage collector to delete the dependents in the background   Foreground    a cascading policy that deletes all dependents in the foreground    hr          resourceVersion   resourceVersion   resourceVersion sets a constraint on what resource versions a request may be served from  See https   kubernetes io docs reference using api api concepts  resource versions for details   Defaults to unset   hr          resourceVersionMatch   resourceVersionMatch   resourceVersionMatch determines how resourceVersion is applied to list calls  It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https   kubernetes io docs reference using api api concepts  resource versions for details   Defaults to unset   hr          sendInitialEvents   sendInitialEvents    sendInitialEvents true  may be set together with  watch true   In that case  the watch stream will begin with synthetic events to produce the current state of objects in the collection  Once 
all such events have been sent  a synthetic  Bookmark  event  will be sent  The bookmark will report the ResourceVersion  RV  corresponding to the set of objects  and be marked with   k8s io initial events end    true   annotation  Afterwards  the watch stream will proceed as usual  sending watch events corresponding to changes  subsequent to the RV  to objects watched   When  sendInitialEvents  option is set  we require  resourceVersionMatch  option to also be set  The semantics of the watch request are as follows     resourceVersionMatch    NotOlderThan   is interpreted as  data at least as new as the provided  resourceVersion     and the bookmark event is sent when the state is synced   to a  resourceVersion  at least as fresh as the one provided by the ListOptions    If  resourceVersion  is unset  this is interpreted as  consistent read  and the   bookmark event is sent when the state is synced at least to the moment   when request started being processed     resourceVersionMatch  set to any other value or unset   Invalid error is returned   Defaults to true if  resourceVersion     or  resourceVersion  0    for backward compatibility reasons  and to false otherwise    hr          timeoutSeconds   timeoutSeconds   Timeout for the list watch call  This limits the duration of the call  regardless of any activity or inactivity    hr          watch   watch   Watch for changes to the described resources and return them as a stream of add  update  and remove notifications  Specify resourceVersion    hr      "}
{"questions":"kubernetes reference kind Status import k8s io apimachinery pkg apis meta v1 title Status Status is a return value for calls that don t return other objects apiVersion contenttype apireference apimetadata weight 12 autogenerated true","answers":"---\napi_metadata:\n  apiVersion: \"\"\n  import: \"k8s.io\/apimachinery\/pkg\/apis\/meta\/v1\"\n  kind: \"Status\"\ncontent_type: \"api_reference\"\ndescription: \"Status is a return value for calls that don't return other objects.\"\ntitle: \"Status\"\nweight: 12\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n\n`import \"k8s.io\/apimachinery\/pkg\/apis\/meta\/v1\"`\n\n\nStatus is a return value for calls that don't return other objects.\n\n<hr>\n\n- **apiVersion** (string)\n\n  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources\n\n- **code** (int32)\n\n  Suggested HTTP return code for this status, 0 if not set.\n\n- **details** (StatusDetails)\n\n  *Atomic: will be replaced during a merge*\n  \n  Extended data associated with the reason.  Each reason may define its own extended details. 
This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type.\n\n  <a name=\"StatusDetails\"><\/a>\n  *StatusDetails is a set of additional properties that MAY be set by the server to provide additional information about a response. The Reason field of a Status object defines what attributes will be set. Clients must ignore fields that do not match the defined type of each attribute, and should assume that any attribute may be empty, invalid, or under defined.*\n\n  - **details.causes** ([]StatusCause)\n\n    *Atomic: will be replaced during a merge*\n    \n    The Causes array includes more details associated with the StatusReason failure. Not all StatusReasons may provide detailed causes.\n\n    <a name=\"StatusCause\"><\/a>\n    *StatusCause provides more information about an api.Status failure, including cases when multiple errors are encountered.*\n\n    - **details.causes.field** (string)\n\n      The field of the resource that has caused this error, as named by its JSON serialization. May include dot and postfix notation for nested attributes. Arrays are zero-indexed.  Fields may appear more than once in an array of causes due to fields having multiple errors. Optional.\n      \n      Examples:\n        \"name\" - the field \"name\" on the current resource\n        \"items[0].name\" - the field \"name\" on the first array entry in \"items\"\n\n    - **details.causes.message** (string)\n\n      A human-readable description of the cause of the error.  This field may be presented as-is to a reader.\n\n    - **details.causes.reason** (string)\n\n      A machine-readable description of the cause of the error. If this value is empty there is no information available.\n\n  - **details.group** (string)\n\n    The group attribute of the resource associated with the status StatusReason.\n\n  - **details.kind** (string)\n\n    The kind attribute of the resource associated with the status StatusReason. 
On some operations may differ from the requested resource Kind. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n  - **details.name** (string)\n\n    The name attribute of the resource associated with the status StatusReason (when there is a single name which can be described).\n\n  - **details.retryAfterSeconds** (int32)\n\n    If specified, the time in seconds before the operation should be retried. Some errors may indicate the client must take an alternate action - for those errors this field may indicate how long to wait before taking the alternate action.\n\n  - **details.uid** (string)\n\n    UID of the resource. (when there is a single resource which can be described). More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names#uids\n\n- **kind** (string)\n\n  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **message** (string)\n\n  A human-readable description of the status of this operation.\n\n- **metadata** (<a href=\"\">ListMeta<\/a>)\n\n  Standard list metadata. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n- **reason** (string)\n\n  A machine-readable description of why this operation is in the \"Failure\" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it.\n\n- **status** (string)\n\n  Status of the operation. One of: \"Success\" or \"Failure\". 
More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#spec-and-status\n\n\n\n\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion       import   k8s io apimachinery pkg apis meta v1    kind   Status  content type   api reference  description   Status is a return value for calls that don t return other objects   title   Status  weight  12 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project          import  k8s io apimachinery pkg apis meta v1     Status is a return value for calls that don t return other objects    hr       apiVersion    string     APIVersion defines the versioned schema of this representation of an object  Servers should convert recognized schemas to the latest internal value  and may reject unrecognized values  More info  https   git k8s io community contributors devel sig architecture api conventions md resources      code    int32     Suggested HTTP return code for this status  0 if not set       details    StatusDetails      Atomic  will be replaced during a merge       Extended data associated with the reason   Each reason may define its own extended details  This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type      a name  StatusDetails    a     StatusDetails is a set of additional properties that MAY be set by the server to provide additional information 
about a response  The Reason field of a Status object defines what attributes will be set  Clients must ignore fields that do not match the defined type of each attribute  and should assume that any attribute may be empty  invalid  or under defined          details causes      StatusCause        Atomic  will be replaced during a merge           The Causes array includes more details associated with the StatusReason failure  Not all StatusReasons may provide detailed causes        a name  StatusCause    a       StatusCause provides more information about an api Status failure  including cases when multiple errors are encountered            details causes field    string         The field of the resource that has caused this error  as named by its JSON serialization  May include dot and postfix notation for nested attributes  Arrays are zero indexed   Fields may appear more than once in an array of causes due to fields having multiple errors  Optional               Examples           name    the field  name  on the current resource          items 0  name    the field  name  on the first array entry in  items           details causes message    string         A human readable description of the cause of the error   This field may be presented as is to a reader           details causes reason    string         A machine readable description of the cause of the error  If this value is empty there is no information available         details group    string       The group attribute of the resource associated with the status StatusReason         details kind    string       The kind attribute of the resource associated with the status StatusReason  On some operations may differ from the requested resource Kind  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds        details name    string       The name attribute of the resource associated with the status StatusReason  when there is a single name which can be 
described          details retryAfterSeconds    int32       If specified  the time in seconds before the operation should be retried  Some errors may indicate the client must take an alternate action   for those errors this field may indicate how long to wait before taking the alternate action         details uid    string       UID of the resource   when there is a single resource which can be described   More info  https   kubernetes io docs concepts overview working with objects names uids      kind    string     Kind is a string value representing the REST resource this object represents  Servers may infer this from the endpoint the client submits requests to  Cannot be updated  In CamelCase  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      message    string     A human readable description of the status of this operation       metadata     a href    ListMeta  a      Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds      reason    string     A machine readable description of why this operation is in the  Failure  status  If this value is empty there is no information available  A Reason clarifies an HTTP status code but does not override it       status    string     Status of the operation  One of   Success  or  Failure   More info  https   git k8s io community contributors devel sig architecture api conventions md spec and status     "}
{"questions":"kubernetes reference weight 7 import k8s io apimachinery pkg apis meta v1 apiVersion contenttype apireference kind ObjectMeta apimetadata autogenerated true ObjectMeta is metadata that all persisted resources must have which includes all objects users must create title ObjectMeta","answers":"---\napi_metadata:\n  apiVersion: \"\"\n  import: \"k8s.io\/apimachinery\/pkg\/apis\/meta\/v1\"\n  kind: \"ObjectMeta\"\ncontent_type: \"api_reference\"\ndescription: \"ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.\"\ntitle: \"ObjectMeta\"\nweight: 7\nauto_generated: true\n---\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the \n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n\n`import \"k8s.io\/apimachinery\/pkg\/apis\/meta\/v1\"`\n\n\nObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.\n\n<hr>\n\n- **name** (string)\n\n  Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names#names\n\n- **generateName** (string)\n\n  GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n  \n  If this field is specified and the generated name exists, the server will return a 409.\n  \n  Applied only if Name is not specified. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#idempotency\n\n- **namespace** (string)\n\n  Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.\n  \n  Must be a DNS_LABEL. Cannot be updated. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/namespaces\n\n- **labels** (map[string]string)\n\n  Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\n\n- **annotations** (map[string]string)\n\n  Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/annotations\n\n\n\n### System {#System}\n\n\n- **finalizers** ([]string)\n\n  *Set: unique values will be kept during a merge*\n  \n  Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order.  Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list.\n\n- **managedFields** ([]ManagedFieldsEntry)\n\n  *Atomic: will be replaced during a merge*\n  \n  ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object.\n\n  <a name=\"ManagedFieldsEntry\"><\/a>\n  *ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to.*\n\n  - **managedFields.apiVersion** (string)\n\n    APIVersion defines the version of this resource that this field set applies to. 
The format is \"group\/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted.\n\n  - **managedFields.fieldsType** (string)\n\n    FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\"\n\n  - **managedFields.fieldsV1** (FieldsV1)\n\n    FieldsV1 holds the first JSON version format as described in the \"FieldsV1\" type.\n\n    <a name=\"FieldsV1\"><\/a>\n    *FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format.\n    \n    Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f:<name>', where <name> is the name of a field in a struct, or key in a map 'v:<value>', where <value> is the exact json formatted value of a list item 'i:<index>', where <index> is position of a item in a list 'k:<keys>', where <keys> is a map of  a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set.\n    \n    The exact format is defined in sigs.k8s.io\/structured-merge-diff*\n\n  - **managedFields.manager** (string)\n\n    Manager is an identifier of the workflow managing these fields.\n\n  - **managedFields.operation** (string)\n\n    Operation is the type of operation which lead to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'.\n\n  - **managedFields.subresource** (string)\n\n    Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. 
Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource.\n\n  - **managedFields.time** (Time)\n\n    Time is the timestamp of when the ManagedFields entry was added. The timestamp will also be updated if a field is added, the manager changes any of the owned fields value or removes a field. The timestamp does not update when a field is removed from the entry because another manager took it over.\n\n    <a name=\"Time\"><\/a>\n    *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **ownerReferences** ([]OwnerReference)\n\n  *Patch strategy: merge on key `uid`*\n  \n  *Map: unique values on key uid will be kept during a merge*\n  \n  List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller.\n\n  <a name=\"OwnerReference\"><\/a>\n  *OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field.*\n\n  - **ownerReferences.apiVersion** (string), required\n\n    API version of the referent.\n\n  - **ownerReferences.kind** (string), required\n\n    Kind of the referent. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n\n  - **ownerReferences.name** (string), required\n\n    Name of the referent. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names#names\n\n  - **ownerReferences.uid** (string), required\n\n    UID of the referent. 
More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names#uids\n\n  - **ownerReferences.blockOwnerDeletion** (boolean)\n\n    If true, AND if the owner has the \"foregroundDeletion\" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https:\/\/kubernetes.io\/docs\/concepts\/architecture\/garbage-collection\/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs \"delete\" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned.\n\n  - **ownerReferences.controller** (boolean)\n\n    If true, this reference points to the managing controller.\n\n### Read-only {#Read-only}\n\n\n- **creationTimestamp** (Time)\n\n  CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n  \n  Populated by the system. Read-only. Null for lists. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **deletionGracePeriodSeconds** (int64)\n\n  Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.\n\n- **deletionTimestamp** (Time)\n\n  DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. 
The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested.\n  \n  Populated by the system when a graceful deletion is requested. Read-only. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\n  <a name=\"Time\"><\/a>\n  *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON.  Wrappers are provided for many of the factory methods that the time package offers.*\n\n- **generation** (int64)\n\n  A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.\n\n- **resourceVersion** (string)\n\n  An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n  \n  Populated by the system. 
Read-only. Value must be treated as opaque by clients. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#concurrency-control-and-consistency\n\n- **selfLink** (string)\n\n  Deprecated: selfLink is a legacy read-only field that is no longer populated by the system.\n\n- **uid** (string)\n\n  UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.\n  \n  Populated by the system. Read-only. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names#uids\n\n\n","site":"kubernetes reference","answers_cleaned":"    api metadata    apiVersion       import   k8s io apimachinery pkg apis meta v1    kind   ObjectMeta  content type   api reference  description   ObjectMeta is metadata that all persisted resources must have  which includes all objects users must create   title   ObjectMeta  weight  7 auto generated  true           The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the   Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project          import  k8s io apimachinery pkg apis meta v1     ObjectMeta is metadata that all persisted resources must have  which includes all objects users must create    hr       name    string     Name must be unique within a namespace  Is required when creating resources  although some resources may allow a client to request the generation of an appropriate name automatically  Name
is primarily intended for creation idempotence and configuration definition  Cannot be updated  More info  https   kubernetes io docs concepts overview working with objects names names      generateName    string     GenerateName is an optional prefix  used by the server  to generate a unique name ONLY IF the Name field has not been provided  If this field is used  the name returned to the client will be different than the name passed  This value will also be combined with a unique suffix  The provided value has the same validation rules as the Name field  and may be truncated by the length of the suffix required to make the value unique on the server       If this field is specified and the generated name exists  the server will return a 409       Applied only if Name is not specified  More info  https   git k8s io community contributors devel sig architecture api conventions md idempotency      namespace    string     Namespace defines the space within which each name must be unique  An empty namespace is equivalent to the  default  namespace  but  default  is the canonical representation  Not all objects are required to be scoped to a namespace   the value of this field for those objects will be empty       Must be a DNS LABEL  Cannot be updated  More info  https   kubernetes io docs concepts overview working with objects namespaces      labels    map string string     Map of string keys and values that can be used to organize and categorize  scope and select  objects  May match selectors of replication controllers and services  More info  https   kubernetes io docs concepts overview working with objects labels      annotations    map string string     Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata  They are not queryable and should be preserved when modifying objects  More info  https   kubernetes io docs concepts overview working with objects annotations        
finalizers (string; Set: unique values will be kept during a merge): Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers: finalizers is a shared field, and any actor with permission can reorder it. If the finalizer list were processed in order, this could lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering, finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list.\n\nmanagedFields ([]ManagedFieldsEntry; Atomic: will be replaced during a merge): ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like `ci-cd`. The set of fields is always in the version that the workflow used when modifying the object. A ManagedFieldsEntry is a workflow-id, a FieldSet, and the group version of the resource that the fieldset applies to.\n\n- managedFields.apiVersion (string): APIVersion defines the version of this resource that this field set applies to. The format is `group\/version`, just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted.\n- managedFields.fieldsType (string): FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: `FieldsV1`.\n- managedFields.fieldsV1 (FieldsV1): FieldsV1 holds the first JSON version format as described in the FieldsV1 type. FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format. Each key is either a `.` representing the field itself (and will always map to an empty set), or a string representing a sub-field or item. The string will follow one of these four formats: `f:<name>`, where `<name>` is the name of a field in a struct or key in a map; `v:<value>`, where `<value>` is the exact JSON-formatted value of a list item; `i:<index>`, where `<index>` is the position of an item in a list; `k:<keys>`, where `<keys>` is a map of a list item's key fields to their unique values. If a key maps to an empty Fields value, the field that key represents is part of the set. The exact format is defined in sigs.k8s.io\/structured-merge-diff.\n- managedFields.manager (string): Manager is an identifier of the workflow managing these fields.\n- managedFields.operation (string): Operation is the type of operation which led to this ManagedFieldsEntry being created. The only valid values for this field are `Apply` and `Update`.\n- managedFields.subresource (string): Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource.\n- managedFields.time (Time): Time is the timestamp of when the ManagedFields entry was added. The timestamp will also be updated if a field is added, the manager changes any of the owned fields' values, or removes a field. The timestamp does not update when a field is removed from the entry because another manager took it over. Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON; wrappers are provided for many of the factory methods that the time package offers.\n\nownerReferences ([]OwnerReference; patch strategy: merge on key `uid`; Map: unique values on key `uid` will be kept during a merge): List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field.\n\n- ownerReferences.apiVersion (string), required: API version of the referent.\n- ownerReferences.kind (string), required: Kind of the referent. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds\n- ownerReferences.name (string), required: Name of the referent. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names#names\n- ownerReferences.uid (string), required: UID of the referent. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names#uids\n- ownerReferences.blockOwnerDeletion (boolean): If true, AND if the owner has the `foregroundDeletion` finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https:\/\/kubernetes.io\/docs\/concepts\/architecture\/garbage-collection\/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs `delete` permission of the owner, otherwise 422 (Unprocessable Entity) will be returned.\n- ownerReferences.controller (boolean): If true, this reference points to the managing controller. Read-only.\n\ncreationTimestamp (Time): CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC 3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\ndeletionGracePeriodSeconds (int64): Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.\n\ndeletionTimestamp (Time): DeletionTimestamp is the RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod be deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and, after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata\n\ngeneration (int64): A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.\n\nresourceVersion (string): An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and pass them unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. More info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#concurrency-control-and-consistency\n\nselfLink (string): Deprecated: selfLink is a legacy read-only field that is no longer populated by the system.\n\nuid (string): UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names#uids"}
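The order-independent finalizer semantics described above (each component removes only its own entry, and deletion is unblocked once the list is empty) can be illustrated with a small self-contained sketch. This is not Kubernetes code: the metadata layout mirrors the fields documented above, but the finalizer names and the `remove_finalizer` helper are invented for illustration.

```python
# Illustrative sketch of order-independent finalizer removal.
# Finalizer names and this helper are hypothetical, not Kubernetes APIs.
def remove_finalizer(obj, finalizer):
    """Drop one component's finalizer; deletion can proceed once the list is empty."""
    meta = obj["metadata"]
    meta["finalizers"] = [f for f in meta["finalizers"] if f != finalizer]
    return obj

obj = {
    "metadata": {
        "name": "example",
        # Set by the server when graceful deletion is requested.
        "deletionTimestamp": "2024-01-01T00:00:00Z",
        "finalizers": ["example.com/cleanup-storage", "example.com/release-ip"],
    }
}

# Each responsible component removes only its own entry, in any order.
remove_finalizer(obj, "example.com/release-ip")
remove_finalizer(obj, "example.com/cleanup-storage")

# Garbage collection is unblocked only once no finalizers remain.
print(obj["metadata"]["finalizers"])  # → []
```

Because removal is a pure "delete my own entry" operation, no component ever waits on another component's position in the list, which is exactly why enforced ordering is unnecessary.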
{"questions":"kubernetes reference weight 10 title Audit Annotations namespace These annotations apply to object from API group This page serves as a reference for the audit annotations of the kubernetes io overview","answers":"---\ntitle: \"Audit Annotations\"\nweight: 10\n---\n\n<!-- overview -->\n\nThis page serves as a reference for the audit annotations of the kubernetes.io\nnamespace. These annotations apply to `Event` object from API group\n`audit.k8s.io`.\n\n\nThe following annotations are not used within the Kubernetes API. When you\n[enable auditing](\/docs\/tasks\/debug\/debug-cluster\/audit\/) in your cluster,\naudit event data is written using `Event` from API group `audit.k8s.io`.\nThe annotations apply to audit events. Audit events are different from objects in the\n[Event API](\/docs\/reference\/kubernetes-api\/cluster-resources\/event-v1\/) (API group\n`events.k8s.io`).\n\n\n<!-- body -->\n\n## k8s.io\/deprecated\n\nExample: `k8s.io\/deprecated: \"true\"`\n\nValue **must** be \"true\" or \"false\". The value \"true\" indicates that the\nrequest used a deprecated API version.\n\n## k8s.io\/removed-release\n\nExample: `k8s.io\/removed-release: \"1.22\"`\n\nValue **must** be in the format \"<major>.<minor>\". It is set to target the removal release\non requests made to deprecated API versions with a target removal release.\n\n## pod-security.kubernetes.io\/exempt\n\nExample: `pod-security.kubernetes.io\/exempt: namespace`\n\nValue **must** be one of `user`, `namespace`, or `runtimeClass` which correspond to\n[Pod Security Exemption](\/docs\/concepts\/security\/pod-security-admission\/#exemptions)\ndimensions. 
This annotation indicates which dimension the exemption\nfrom the PodSecurity enforcement was based on.\n\n\n## pod-security.kubernetes.io\/enforce-policy\n\nExample: `pod-security.kubernetes.io\/enforce-policy: restricted:latest`\n\nValue **must** be `privileged:<version>`, `baseline:<version>`, or\n`restricted:<version>`, which correspond to [Pod Security\nStandard](\/docs\/concepts\/security\/pod-security-standards) levels accompanied by\na version which **must** be `latest` or a valid Kubernetes version in the format\n`v<MAJOR>.<MINOR>`. This annotation informs about the enforcement level that\nallowed or denied the pod during PodSecurity admission.\n\nSee [Pod Security Standards](\/docs\/concepts\/security\/pod-security-standards\/)\nfor more information.\n\n## pod-security.kubernetes.io\/audit-violations\n\nExample:  `pod-security.kubernetes.io\/audit-violations: would violate\nPodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container\n\"example\" must set securityContext.allowPrivilegeEscalation=false), ...`\n\nValue details an audit policy violation; it contains the\n[Pod Security Standard](\/docs\/concepts\/security\/pod-security-standards\/) level\nthat was transgressed as well as the specific policies on the fields that were\nviolated from the PodSecurity enforcement.\n\nSee [Pod Security Standards](\/docs\/concepts\/security\/pod-security-standards\/)\nfor more information.\n\n## authorization.k8s.io\/decision\n\nExample: `authorization.k8s.io\/decision: \"forbid\"`\n\nValue must be **forbid** or **allow**. 
This annotation indicates whether or not a request\nwas authorized in Kubernetes audit logs.\n\nSee [Auditing](\/docs\/tasks\/debug\/debug-cluster\/audit\/) for more information.\n\n## authorization.k8s.io\/reason\n\nExample: `authorization.k8s.io\/reason: \"Human-readable reason for the decision\"`\n\nThis annotation gives reason for the [decision](#authorization-k8s-io-decision) in Kubernetes audit logs.\n\nSee [Auditing](\/docs\/tasks\/debug\/debug-cluster\/audit\/) for more information.\n\n## missing-san.invalid-cert.kubernetes.io\/$hostname\n\nExample: `missing-san.invalid-cert.kubernetes.io\/example-svc.example-namespace.svc: \"relies on a legacy Common Name field instead of the SAN extension for subject validation\"`\n\nUsed by Kubernetes version v1.24 and later\n\nThis annotation indicates a webhook or aggregated API server\nis using an invalid certificate that is missing `subjectAltNames`.\nSupport for these certificates was disabled by default in Kubernetes 1.19,\nand removed in Kubernetes 1.23.\n\nRequests to endpoints using these certificates will fail.\nServices using these certificates should replace them as soon as possible\nto avoid disruption when running in Kubernetes 1.23+ environments.\n\nThere's more information about this in the Go documentation:\n[X.509 CommonName deprecation](https:\/\/go.dev\/doc\/go1.15#commonname).\n\n## insecure-sha1.invalid-cert.kubernetes.io\/$hostname\n\nExample: `insecure-sha1.invalid-cert.kubernetes.io\/example-svc.example-namespace.svc: \"uses an insecure SHA-1 signature\"`\n\nUsed by Kubernetes version v1.24 and later\n\nThis annotation indicates a webhook or aggregated API server\nis using an insecure certificate signed with a SHA-1 hash.\nSupport for these insecure certificates is disabled by default in Kubernetes 1.24,\nand will be removed in a future release.\n\nServices using these certificates should replace them as soon as possible,\nto ensure connections are secured properly and to avoid disruption in 
future releases.\n\nThere's more information about this in the Go documentation:\n[Rejecting SHA-1 certificates](https:\/\/go.dev\/doc\/go1.18#sha1).\n\n## validation.policy.admission.k8s.io\/validation_failure\n\nExample: `validation.policy.admission.k8s.io\/validation_failure: '[{\"message\": \"Invalid value\", \"policy\": \"policy.example.com\", \"binding\": \"policybinding.example.com\", \"expressionIndex\": \"1\", \"validationActions\": [\"Audit\"]}]'`\n\nUsed by Kubernetes version v1.27 and later.\n\nThis annotation indicates that an admission policy validation evaluated to false\nfor an API request, or that the validation resulted in an error while the policy\nwas configured with `failurePolicy: Fail`.\n\nThe value of the annotation is a JSON list. The `message` in the JSON\nprovides the message about the validation failure.\n\nThe `policy`, `binding` and `expressionIndex` in the JSON identify the\nname of the `ValidatingAdmissionPolicy`, the name of the\n`ValidatingAdmissionPolicyBinding` and the index in the policy `validations` of\nthe CEL expressions that failed, respectively.\n\nThe `validationActions` field shows what actions were taken for this validation failure.\nSee [Validating Admission Policy](\/docs\/reference\/access-authn-authz\/validating-admission-policy\/)\nfor more details about `validationActions`.","site":"kubernetes reference"}
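As a rough illustration of how a consumer might read the annotations documented above off an audit event, here is a short sketch. The event content is invented, and the `validation_failure` value is shown in valid-JSON form; only the annotation keys themselves come from the reference text.

```python
import json

# Hypothetical audit-event fragment. The annotation keys are the documented
# ones; every value here is invented for illustration.
event = {
    "annotations": {
        "authorization.k8s.io/decision": "allow",
        "authorization.k8s.io/reason": "RBAC: allowed by ClusterRoleBinding",
        "validation.policy.admission.k8s.io/validation_failure": json.dumps(
            [{"message": "Invalid value",
              "policy": "policy.example.com",
              "binding": "policybinding.example.com",
              "expressionIndex": "1",
              "validationActions": ["Audit"]}]
        ),
    }
}

ann = event["annotations"]
decision = ann["authorization.k8s.io/decision"]

# The validation_failure annotation carries a JSON list of failure records.
failures = json.loads(
    ann.get("validation.policy.admission.k8s.io/validation_failure", "[]")
)
for f in failures:
    print(f"{decision}: {f['policy']} -> {f['message']} ({f['validationActions']})")
```

Parsing the annotation with `json.loads` (rather than string matching) keeps the consumer robust if additional fields are added to the failure records.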
{"questions":"kubernetes reference name reference anchor labels annotations and taints used on api objects title Well Known Labels Annotations and Taints weight 30 contenttype concept weight 40 card nolist true anchors","answers":"---\ntitle: Well-Known Labels, Annotations and Taints\ncontent_type: concept\nweight: 40\nno_list: true\ncard:\n  name: reference\n  weight: 30\n  anchors:\n  - anchor: \"#labels-annotations-and-taints-used-on-api-objects\"\n    title: Labels, annotations and taints\n---\n\n<!-- overview -->\n\nKubernetes reserves all labels, annotations and taints in the `kubernetes.io` and `k8s.io` namespaces.\n\nThis document serves both as a reference to the values and as a coordination point for assigning values.\n\n<!-- body -->\n\n## Labels, annotations and taints used on API objects\n\n\n### apf.kubernetes.io\/autoupdate-spec\n\nType: Annotation\n\nExample: `apf.kubernetes.io\/autoupdate-spec: \"true\"`\n\nUsed on: [`FlowSchema` and `PriorityLevelConfiguration` Objects](\/docs\/concepts\/cluster-administration\/flow-control\/#defaults)\n\nIf this annotation is set to true on a FlowSchema or PriorityLevelConfiguration, the `spec` for that object\nis managed by the kube-apiserver. If the API server does not recognize an APF object, and you annotate it\nfor automatic update, the API server deletes the entire object. 
Otherwise, the API server does not manage the\nobject spec.\nFor more details, read  [Maintenance of the Mandatory and Suggested Configuration Objects](\/docs\/concepts\/cluster-administration\/flow-control\/#maintenance-of-the-mandatory-and-suggested-configuration-objects).\n\n### app.kubernetes.io\/component\n\nType: Label\n\nExample: `app.kubernetes.io\/component: \"database\"`\n\nUsed on: All Objects (typically used on [workload resources](\/docs\/reference\/kubernetes-api\/workload-resources\/)).\n\nThe component within the application architecture.\n\nOne of the [recommended labels](\/docs\/concepts\/overview\/working-with-objects\/common-labels\/#labels).\n\n### app.kubernetes.io\/created-by (deprecated)\n\nType: Label\n\nExample: `app.kubernetes.io\/created-by: \"controller-manager\"`\n\nUsed on: All Objects (typically used on [workload resources](\/docs\/reference\/kubernetes-api\/workload-resources\/)).\n\nThe controller\/user who created this resource.\n\n\nStarting from v1.9, this label is deprecated.\n\n\n### app.kubernetes.io\/instance\n\nType: Label\n\nExample: `app.kubernetes.io\/instance: \"mysql-abcxyz\"`\n\nUsed on: All Objects (typically used on\n[workload resources](\/docs\/reference\/kubernetes-api\/workload-resources\/)).\n\nA unique name identifying the instance of an application.\nTo assign a non-unique name, use [app.kubernetes.io\/name](#app-kubernetes-io-name).\n\nOne of the [recommended labels](\/docs\/concepts\/overview\/working-with-objects\/common-labels\/#labels).\n\n### app.kubernetes.io\/managed-by\n\nType: Label\n\nExample: `app.kubernetes.io\/managed-by: \"helm\"`\n\nUsed on: All Objects (typically used on\n[workload resources](\/docs\/reference\/kubernetes-api\/workload-resources\/)).\n\nThe tool being used to manage the operation of an application.\n\nOne of the [recommended labels](\/docs\/concepts\/overview\/working-with-objects\/common-labels\/#labels).\n\n### app.kubernetes.io\/name\n\nType: Label\n\nExample: 
`app.kubernetes.io\/name: \"mysql\"`\n\nUsed on: All Objects (typically used on\n[workload resources](\/docs\/reference\/kubernetes-api\/workload-resources\/)).\n\nThe name of the application.\n\nOne of the [recommended labels](\/docs\/concepts\/overview\/working-with-objects\/common-labels\/#labels).\n\n### app.kubernetes.io\/part-of\n\nType: Label\n\nExample: `app.kubernetes.io\/part-of: \"wordpress\"`\n\nUsed on: All Objects (typically used on\n[workload resources](\/docs\/reference\/kubernetes-api\/workload-resources\/)).\n\nThe name of a higher-level application this object is part of.\n\nOne of the [recommended labels](\/docs\/concepts\/overview\/working-with-objects\/common-labels\/#labels).\n\n### app.kubernetes.io\/version\n\nType: Label\n\nExample: `app.kubernetes.io\/version: \"5.7.21\"`\n\nUsed on: All Objects (typically used on\n[workload resources](\/docs\/reference\/kubernetes-api\/workload-resources\/)).\n\nThe current version of the application.\n\nCommon forms of values include:\n\n- [semantic version](https:\/\/semver.org\/spec\/v1.0.0.html)\n- the Git [revision hash](https:\/\/git-scm.com\/book\/en\/v2\/Git-Tools-Revision-Selection#_single_revisions)\n  for the source code.\n\nOne of the [recommended labels](\/docs\/concepts\/overview\/working-with-objects\/common-labels\/#labels).\n\n### applyset.kubernetes.io\/additional-namespaces (alpha) {#applyset-kubernetes-io-additional-namespaces}\n\nType: Annotation\n\nExample: `applyset.kubernetes.io\/additional-namespaces: \"namespace1,namespace2\"`\n\nUsed on: Objects being used as ApplySet parents.\n\nUse of this annotation is Alpha.\nFor Kubernetes version , you can use this annotation on Secrets,\nConfigMaps, or custom resources if the CustomResourceDefinition\ndefining them has the `applyset.kubernetes.io\/is-parent-type` label.\n\nPart of the specification used to implement\n[ApplySet-based pruning in 
kubectl](\/docs\/tasks\/manage-kubernetes-objects\/declarative-config\/#alternative-kubectl-apply-f-directory-prune).\nThis annotation is applied to the parent object used to track an ApplySet to extend the scope of\nthe ApplySet beyond the parent object's own namespace (if any).\nThe value is a comma-separated list of the names of namespaces other than the parent's namespace\nin which objects are found.\n\n### applyset.kubernetes.io\/contains-group-kinds (alpha) {#applyset-kubernetes-io-contains-group-kinds}\n\nType: Annotation\n\nExample: `applyset.kubernetes.io\/contains-group-kinds: \"certificates.cert-manager.io,configmaps,deployments.apps,secrets,services\"`\n\nUsed on: Objects being used as ApplySet parents.\n\nUse of this annotation is Alpha.\nFor Kubernetes version , you can use this annotation on Secrets, ConfigMaps,\nor custom resources if the CustomResourceDefinition\ndefining them has the `applyset.kubernetes.io\/is-parent-type` label.\n\nPart of the specification used to implement\n[ApplySet-based pruning in kubectl](\/docs\/tasks\/manage-kubernetes-objects\/declarative-config\/#alternative-kubectl-apply-f-directory-prune).\nThis annotation is applied to the parent object used to track an ApplySet to optimize listing of\nApplySet member objects. It is optional in the ApplySet specification, as tools can perform discovery\nor use a different optimization. However, as of Kubernetes version ,\nit is required by kubectl. When present, the value of this annotation must be a comma separated list\nof the group-kinds, in the fully-qualified name format, i.e. 
`<resource>.<group>`.\n\n### applyset.kubernetes.io\/contains-group-resources (deprecated) {#applyset-kubernetes-io-contains-group-resources}\n\nType: Annotation\n\nExample: `applyset.kubernetes.io\/contains-group-resources: \"certificates.cert-manager.io,configmaps,deployments.apps,secrets,services\"`\n\nUsed on: Objects being used as ApplySet parents.\n\nFor Kubernetes version , you can use this annotation on Secrets, ConfigMaps,\nor custom resources if the CustomResourceDefinition\ndefining them has the `applyset.kubernetes.io\/is-parent-type` label.\n\nPart of the specification used to implement\n[ApplySet-based pruning in kubectl](\/docs\/tasks\/manage-kubernetes-objects\/declarative-config\/#alternative-kubectl-apply-f-directory-prune).\nThis annotation is applied to the parent object used to track an ApplySet to optimize listing of\nApplySet member objects. It is optional in the ApplySet specification, as tools can perform discovery\nor use a different optimization. However, in Kubernetes version ,\nit is required by kubectl. When present, the value of this annotation must be a comma separated list\nof the group-kinds, in the fully-qualified name format, i.e. 
`<resource>.<group>`.\n\n\nThis annotation is currently deprecated and replaced by [`applyset.kubernetes.io\/contains-group-kinds`](#applyset-kubernetes-io-contains-group-kinds),\nsupport for this will be removed in applyset beta or GA.\n\n\n### applyset.kubernetes.io\/id (alpha) {#applyset-kubernetes-io-id}\n\nType: Label\n\nExample: `applyset.kubernetes.io\/id: \"applyset-0eFHV8ySqp7XoShsGvyWFQD3s96yqwHmzc4e0HR1dsY-v1\"`\n\nUsed on: Objects being used as ApplySet parents.\n\nUse of this label is Alpha.\nFor Kubernetes version , you can use this label on Secrets, ConfigMaps,\nor custom resources if the CustomResourceDefinition\ndefining them has the `applyset.kubernetes.io\/is-parent-type` label.\n\nPart of the specification used to implement\n[ApplySet-based pruning in kubectl](\/docs\/tasks\/manage-kubernetes-objects\/declarative-config\/#alternative-kubectl-apply-f-directory-prune).\nThis label is what makes an object an ApplySet parent object.\nIts value is the unique ID of the ApplySet, which is derived from the identity of the parent\nobject itself. 
This ID **must** be the base64 encoding (using the URL safe encoding of RFC4648) of\nthe hash of the group-kind-name-namespace of the object it is on, in the form:\n`<base64(sha256(<name>.<namespace>.<kind>.<group>))>`.\nThere is no relation between the value of this label and object UID.\n\n### applyset.kubernetes.io\/is-parent-type (alpha) {#applyset-kubernetes-io-is-parent-type}\n\nType: Label\n\nExample: `applyset.kubernetes.io\/is-parent-type: \"true\"`\n\nUsed on: Custom Resource Definition (CRD)\n\nUse of this label is Alpha.\nPart of the specification used to implement\n[ApplySet-based pruning in kubectl](\/docs\/tasks\/manage-kubernetes-objects\/declarative-config\/#alternative-kubectl-apply-f-directory-prune).\nYou can set this label on a CustomResourceDefinition (CRD) to identify the custom resource type it\ndefines (not the CRD itself) as an allowed parent for an ApplySet.\nThe only permitted value for this label is `\"true\"`; if you want to mark a CRD as\nnot being a valid parent for ApplySets, omit this label.\n\n### applyset.kubernetes.io\/part-of (alpha) {#applyset-kubernetes-io-part-of}\n\nType: Label\n\nExample: `applyset.kubernetes.io\/part-of: \"applyset-0eFHV8ySqp7XoShsGvyWFQD3s96yqwHmzc4e0HR1dsY-v1\"`\n\nUsed on: All objects.\n\nUse of this label is Alpha.\nPart of the specification used to implement\n[ApplySet-based pruning in kubectl](\/docs\/tasks\/manage-kubernetes-objects\/declarative-config\/#alternative-kubectl-apply-f-directory-prune).\nThis label is what makes an object a member of an ApplySet.\nThe value of the label **must** match the value of the `applyset.kubernetes.io\/id`\nlabel on the parent object.\n\n### applyset.kubernetes.io\/tooling (alpha) {#applyset-kubernetes-io-tooling}\n\nType: Annotation\n\nExample: `applyset.kubernetes.io\/tooling: \"kubectl\/v\"`\n\nUsed on: Objects being used as ApplySet parents.\n\nUse of this annotation is Alpha.\nFor Kubernetes version , you can use this annotation on Secrets,\nConfigMaps, or 
custom resources if the CustomResourceDefinition\ndefining them has the\n`applyset.kubernetes.io\/is-parent-type` label.\n\nPart of the specification used to implement\n[ApplySet-based pruning in kubectl](\/docs\/tasks\/manage-kubernetes-objects\/declarative-config\/#alternative-kubectl-apply-f-directory-prune).\nThis annotation is applied to the parent object used to track an ApplySet to indicate which\ntooling manages that ApplySet. Tooling should refuse to mutate ApplySets belonging to other tools.\nThe value must be in the format `<toolname>\/<semver>`.\n\n### apps.kubernetes.io\/pod-index (beta) {#apps-kubernetes.io-pod-index}\n\nType: Label\n\nExample: `apps.kubernetes.io\/pod-index: \"0\"`\n\nUsed on: Pod\n\nWhen a StatefulSet controller creates a Pod for the StatefulSet, it sets this label on that Pod. \nThe value of the label is the ordinal index of the pod being created.\n\nSee [Pod Index Label](\/docs\/concepts\/workloads\/controllers\/statefulset\/#pod-index-label)\nin the StatefulSet topic for more details.\nNote that the [PodIndexLabel](\/docs\/reference\/command-line-tools-reference\/feature-gates\/)\nfeature gate must be enabled for this label to be added to pods.\n\n### resource.kubernetes.io\/pod-claim-name\n\nType: Annotation\n\nExample: `resource.kubernetes.io\/pod-claim-name: \"my-pod-claim\"`\n\nUsed on: ResourceClaim\n\nThis annotation is assigned to generated ResourceClaims. 
\nIts value corresponds to the name of the resource claim in the `.spec` of any Pod(s) for which the ResourceClaim was created.\nThis annotation is an internal implementation detail of [dynamic resource allocation](\/docs\/concepts\/scheduling-eviction\/dynamic-resource-allocation\/).\nYou should not need to read or modify the value of this annotation.\n\n### cluster-autoscaler.kubernetes.io\/safe-to-evict\n\nType: Annotation\n\nExample: `cluster-autoscaler.kubernetes.io\/safe-to-evict: \"true\"`\n\nUsed on: Pod\n\nWhen this annotation is set to `\"true\"`, the cluster autoscaler is allowed to evict a Pod\neven if other rules would normally prevent that.\nThe cluster autoscaler never evicts Pods that have this annotation explicitly set to\n`\"false\"`; you could set that on an important Pod that you want to keep running.\nIf this annotation is not set then the cluster autoscaler follows its Pod-level behavior.\n\n### config.kubernetes.io\/local-config\n\nType: Annotation\n\nExample: `config.kubernetes.io\/local-config: \"true\"`\n\nUsed on: All objects\n\nThis annotation is used in manifests to mark an object as local configuration that\nshould not be submitted to the Kubernetes API.\n\nA value of `\"true\"` for this annotation declares that the object is only consumed by\nclient-side tooling and should not be submitted to the API server.\n\nA value of `\"false\"` can be used to declare that the object should be submitted to\nthe API server even when it would otherwise be assumed to be local.\n\nThis annotation is part of the Kubernetes Resource Model (KRM) Functions Specification,\nwhich is used by Kustomize and similar third-party tools.\nFor example, Kustomize removes objects with this annotation from its final build output.\n\n\n### container.apparmor.security.beta.kubernetes.io\/* (deprecated) {#container-apparmor-security-beta-kubernetes-io}\n\nType: Annotation\n\nExample: `container.apparmor.security.beta.kubernetes.io\/my-container: 
my-custom-profile`\n\nUsed on: Pods\n\nThis annotation allows you to specify the AppArmor security profile for a container within a\nKubernetes pod. As of Kubernetes v1.30, this should be set with the `appArmorProfile` field instead.\nTo learn more, see the [AppArmor](\/docs\/tutorials\/security\/apparmor\/) tutorial.\nThe tutorial illustrates using AppArmor to restrict a container's abilities and access.\n\nThe profile specified dictates the set of rules and restrictions that the containerized process must\nadhere to. This helps enforce security policies and isolation for your containers.\n\n### internal.config.kubernetes.io\/* (reserved prefix) {#internal.config.kubernetes.io-reserved-wildcard}\n\nType: Annotation\n\nUsed on: All objects\n\nThis prefix is reserved for internal use by tools that act as orchestrators in accordance\nwith the Kubernetes Resource Model (KRM) Functions Specification.\nAnnotations with this prefix are internal to the orchestration process and are not persisted to\nthe manifests on the filesystem. In other words, the orchestrator tool should set these\nannotations when reading files from the local filesystem and remove them when writing the output\nof functions back to the filesystem.\n\nA KRM function **must not** modify annotations with this prefix, unless otherwise specified for a\ngiven annotation. This enables orchestrator tools to add additional internal annotations, without\nrequiring changes to existing functions.\n\n### internal.config.kubernetes.io\/path\n\nType: Annotation\n\nExample: `internal.config.kubernetes.io\/path: \"relative\/file\/path.yaml\"`\n\nUsed on: All objects\n\nThis annotation records the slash-delimited, OS-agnostic, relative path to the manifest file the\nobject was loaded from. 
The path is relative to a fixed location on the filesystem, determined by\nthe orchestrator tool.\n\nThis annotation is part of the Kubernetes Resource Model (KRM) Functions Specification, which is\nused by Kustomize and similar third-party tools.\n\nA KRM Function **should not** modify this annotation on input objects unless it is modifying the\nreferenced files. A KRM Function **may** include this annotation on objects it generates.\n\n### internal.config.kubernetes.io\/index\n\nType: Annotation\n\nExample: `internal.config.kubernetes.io\/index: \"2\"`\n\nUsed on: All objects\n\nThis annotation records the zero-indexed position of the YAML document that contains the object\nwithin the manifest file the object was loaded from. Note that YAML documents are separated by\nthree dashes (`---`) and can each contain one object. When this annotation is not specified, a\nvalue of 0 is implied.\n\nThis annotation is part of the Kubernetes Resource Model (KRM) Functions Specification,\nwhich is used by Kustomize and similar third-party tools.\n\nA KRM Function **should not** modify this annotation on input objects unless it is modifying the\nreferenced files. 
A KRM Function **may** include this annotation on objects it generates.\n\n### kube-scheduler-simulator.sigs.k8s.io\/bind-result\n\nType: Annotation\n\nExample: `kube-scheduler-simulator.sigs.k8s.io\/bind-result: '{\"DefaultBinder\":\"success\"}'`\n\nUsed on: Pod\n\nThis annotation records the result of bind scheduler plugins, used by https:\/\/sigs.k8s.io\/kube-scheduler-simulator.\n\n### kube-scheduler-simulator.sigs.k8s.io\/filter-result\n\nType: Annotation\n\nExample: \n\n```yaml\nkube-scheduler-simulator.sigs.k8s.io\/filter-result: >-\n      {\"node-282x7\":{\"AzureDiskLimits\":\"passed\",\"EBSLimits\":\"passed\",\"GCEPDLimits\":\"passed\",\"InterPodAffinity\":\"passed\",\"NodeAffinity\":\"passed\",\"NodeName\":\"passed\",\"NodePorts\":\"passed\",\"NodeResourcesFit\":\"passed\",\"NodeUnschedulable\":\"passed\",\"NodeVolumeLimits\":\"passed\",\"PodTopologySpread\":\"passed\",\"TaintToleration\":\"passed\",\"VolumeBinding\":\"passed\",\"VolumeRestrictions\":\"passed\",\"VolumeZone\":\"passed\"},\"node-gp9t4\":{\"AzureDiskLimits\":\"passed\",\"EBSLimits\":\"passed\",\"GCEPDLimits\":\"passed\",\"InterPodAffinity\":\"passed\",\"NodeAffinity\":\"passed\",\"NodeName\":\"passed\",\"NodePorts\":\"passed\",\"NodeResourcesFit\":\"passed\",\"NodeUnschedulable\":\"passed\",\"NodeVolumeLimits\":\"passed\",\"PodTopologySpread\":\"passed\",\"TaintToleration\":\"passed\",\"VolumeBinding\":\"passed\",\"VolumeRestrictions\":\"passed\",\"VolumeZone\":\"passed\"}}\n```\n\nUsed on: Pod\n\nThis annotation records the result of filter scheduler plugins, used by https:\/\/sigs.k8s.io\/kube-scheduler-simulator.\n\n### kube-scheduler-simulator.sigs.k8s.io\/finalscore-result\n\nType: Annotation\n\nExample: \n\n```yaml\nkube-scheduler-simulator.sigs.k8s.io\/finalscore-result: >-\n      
{\"node-282x7\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"200\",\"TaintToleration\":\"300\",\"VolumeBinding\":\"0\"},\"node-gp9t4\":{\"ImageLocality\":\"0\",\"InterPodAffinity\":\"0\",\"NodeAffinity\":\"0\",\"NodeNumber\":\"0\",\"NodeResourcesBalancedAllocation\":\"76\",\"NodeResourcesFit\":\"73\",\"PodTopologySpread\":\"200\",\"TaintToleration\":\"300\",\"VolumeBinding\":\"0\"}}\n```\n\nUsed on: Pod\n\nThis annotation records the final scores that the scheduler calculates from the scores from score scheduler plugins,\nused by https:\/\/sigs.k8s.io\/kube-scheduler-simulator.\n\n### kube-scheduler-simulator.sigs.k8s.io\/permit-result\n\nType: Annotation\n\nExample: `kube-scheduler-simulator.sigs.k8s.io\/permit-result: '{\"CustomPermitPlugin\":\"success\"}'`\n\nUsed on: Pod\n\nThis annotation records the result of permit scheduler plugins, used by https:\/\/sigs.k8s.io\/kube-scheduler-simulator.\n\n### kube-scheduler-simulator.sigs.k8s.io\/permit-result-timeout\n\nType: Annotation\n\nExample: `kube-scheduler-simulator.sigs.k8s.io\/permit-result-timeout: '{\"CustomPermitPlugin\":\"10s\"}'`\n\nUsed on: Pod\n\nThis annotation records the timeouts returned from permit scheduler plugins, used by https:\/\/sigs.k8s.io\/kube-scheduler-simulator.\n\n### kube-scheduler-simulator.sigs.k8s.io\/postfilter-result\n\nType: Annotation\n\nExample: `kube-scheduler-simulator.sigs.k8s.io\/postfilter-result: '{\"DefaultPreemption\":\"success\"}'`\n\nUsed on: Pod\n\nThis annotation records the result of postfilter scheduler plugins, used by https:\/\/sigs.k8s.io\/kube-scheduler-simulator.\n\n### kube-scheduler-simulator.sigs.k8s.io\/prebind-result\n\nType: Annotation\n\nExample: `kube-scheduler-simulator.sigs.k8s.io\/prebind-result: '{\"VolumeBinding\":\"success\"}'`\n\nUsed on: Pod\n\nThis annotation records the result of prebind scheduler 
plugins, used by https://sigs.k8s.io/kube-scheduler-simulator.

### kube-scheduler-simulator.sigs.k8s.io/prefilter-result

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/prefilter-result: '{"NodeAffinity":"[\"node-a\"]"}'`

Used on: Pod

This annotation records the PreFilter result of prefilter scheduler plugins, used by https://sigs.k8s.io/kube-scheduler-simulator.

### kube-scheduler-simulator.sigs.k8s.io/prefilter-result-status

Type: Annotation

Example: 

```yaml
kube-scheduler-simulator.sigs.k8s.io/prefilter-result-status: >-
      {"InterPodAffinity":"success","NodeAffinity":"success","NodePorts":"success","NodeResourcesFit":"success","PodTopologySpread":"success","VolumeBinding":"success","VolumeRestrictions":"success"}
```

Used on: Pod

This annotation records the result of prefilter scheduler plugins, used by https://sigs.k8s.io/kube-scheduler-simulator.

### kube-scheduler-simulator.sigs.k8s.io/prescore-result

Type: Annotation

Example: 

```yaml
    kube-scheduler-simulator.sigs.k8s.io/prescore-result: >-
      {"InterPodAffinity":"success","NodeAffinity":"success","NodeNumber":"success","PodTopologySpread":"success","TaintToleration":"success"}
```

Used on: Pod

This annotation records the result of prescore scheduler plugins, used by https://sigs.k8s.io/kube-scheduler-simulator.

### kube-scheduler-simulator.sigs.k8s.io/reserve-result

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/reserve-result: '{"VolumeBinding":"success"}'`

Used on: Pod

This annotation records the result of reserve scheduler plugins, used by https://sigs.k8s.io/kube-scheduler-simulator.

### kube-scheduler-simulator.sigs.k8s.io/result-history

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/result-history: '[]'`

Used on: Pod

This annotation records all the past scheduling results
from scheduler plugins, used by https://sigs.k8s.io/kube-scheduler-simulator.

### kube-scheduler-simulator.sigs.k8s.io/score-result

Type: Annotation

Example: 

```yaml
    kube-scheduler-simulator.sigs.k8s.io/score-result: >-
      {"node-282x7":{"ImageLocality":"0","InterPodAffinity":"0","NodeAffinity":"0","NodeNumber":"0","NodeResourcesBalancedAllocation":"76","NodeResourcesFit":"73","PodTopologySpread":"0","TaintToleration":"0","VolumeBinding":"0"},"node-gp9t4":{"ImageLocality":"0","InterPodAffinity":"0","NodeAffinity":"0","NodeNumber":"0","NodeResourcesBalancedAllocation":"76","NodeResourcesFit":"73","PodTopologySpread":"0","TaintToleration":"0","VolumeBinding":"0"}}
```

Used on: Pod

This annotation records the result of score scheduler plugins, used by https://sigs.k8s.io/kube-scheduler-simulator.

### kube-scheduler-simulator.sigs.k8s.io/selected-node

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/selected-node: node-282x7`

Used on: Pod

This annotation records the node that is selected by the scheduling cycle, used by https://sigs.k8s.io/kube-scheduler-simulator.

### kubernetes.io/arch

Type: Label

Example: `kubernetes.io/arch: "amd64"`

Used on: Node

The Kubelet populates this with `runtime.GOARCH` as defined by Go.
This can be handy if you are mixing ARM and x86 nodes.

### kubernetes.io/os

Type: Label

Example: `kubernetes.io/os: "linux"`

Used on: Node, Pod

For nodes, the kubelet populates this with `runtime.GOOS` as defined by Go. This can be handy if you are
mixing operating systems in your cluster (for example: mixing Linux and Windows nodes).

You can also set this label on a Pod.
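A minimal sketch of a Pod that declares this label (the Pod name and image are illustrative placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-app                 # illustrative name
  labels:
    kubernetes.io/os: linux       # should match runtime.GOOS of the intended node OS
spec:
  containers:
  - name: app
    image: registry.example/app:latest   # placeholder image
```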
Kubernetes allows you to set any value for this label;
if you use this label, you should nevertheless set it to the Go `runtime.GOOS` string for the operating
system that this Pod actually works with.

When the `kubernetes.io/os` label value for a Pod does not match the label value on a Node,
the kubelet on the node will not admit the Pod. However, this is not taken into account by
the kube-scheduler. Similarly, the kubelet refuses to run a Pod where you have specified a Pod OS, if
this isn't the same as the operating system for the node where that kubelet is running.
See [Pods OS](/docs/concepts/workloads/pods/#pod-os) for more details.

### kubernetes.io/metadata.name

Type: Label

Example: `kubernetes.io/metadata.name: "mynamespace"`

Used on: Namespaces

The Kubernetes API server (part of the control plane)
sets this label on all namespaces. The label value is set
to the name of the namespace. You can't change this label's value.

This is useful if you want to target a specific namespace with a label
selector.

### kubernetes.io/limit-ranger

Type: Annotation

Example: `kubernetes.io/limit-ranger: "LimitRanger plugin set: cpu, memory request for container nginx; cpu, memory limit for container nginx"`

Used on: Pod

Kubernetes by default doesn't provide any resource limits; this means that unless you explicitly define
limits, your container can consume unlimited CPU and memory.
You can define a default request or default limit for pods. You do this by creating a LimitRange
in the relevant namespace.
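A minimal LimitRange sketch that sets container defaults (the name, namespace, and values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits        # illustrative name
  namespace: mynamespace      # illustrative namespace
spec:
  limits:
  - type: Container
    defaultRequest:           # default resource requests applied to containers
      cpu: 100m
      memory: 128Mi
    default:                  # default resource limits applied to containers
      cpu: 500m
      memory: 256Mi
```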
Pods deployed after you define a LimitRange will have these limits
applied to them.
The annotation `kubernetes.io/limit-ranger` records that resource defaults were specified for the Pod,
and they were applied successfully.
For more details, read about [LimitRanges](/docs/concepts/policy/limit-range).

### kubernetes.io/config.hash

Type: Annotation

Example: `kubernetes.io/config.hash: "df7cc47f8477b6b1226d7d23a904867b"`

Used on: Pod

When the kubelet creates a static Pod based on a given manifest, it attaches this annotation
to the static Pod. The value of the annotation is the UID of the Pod.
Note that the kubelet also sets the `.spec.nodeName` to the current node name as if the Pod
was scheduled to the node.

### kubernetes.io/config.mirror

Type: Annotation

Example: `kubernetes.io/config.mirror: "df7cc47f8477b6b1226d7d23a904867b"`

Used on: Pod

For a static Pod created by the kubelet on a node, a mirror Pod
is created on the API server. The kubelet adds an annotation to indicate that this Pod is
actually a mirror Pod. The annotation value is copied from the [`kubernetes.io/config.hash`](#kubernetes-io-config-hash)
annotation, which is the UID of the Pod.

When updating a Pod with this annotation set, the annotation cannot be changed or removed.
If a Pod doesn't have this annotation, it cannot be added during a Pod update.

### kubernetes.io/config.source

Type: Annotation

Example: `kubernetes.io/config.source: "file"`

Used on: Pod

This annotation is added by the kubelet to indicate where the Pod comes from.
For static Pods, the annotation value could be one of `file` or `http` depending
on where the Pod manifest is located.
For a Pod created on the API server and then\nscheduled to the current node, the annotation value is `api`.\n\n### kubernetes.io\/config.seen\n\nType: Annotation\n\nExample: `kubernetes.io\/config.seen: \"2023-10-27T04:04:56.011314488Z\"`\n\nUsed on: Pod\n\nWhen the kubelet sees a Pod for the first time, it may add this annotation to\nthe Pod with a value of current timestamp in the RFC3339 format.\n\n### addonmanager.kubernetes.io\/mode\n\nType: Label\n\nExample: `addonmanager.kubernetes.io\/mode: \"Reconcile\"`\n\nUsed on: All objects\n\nTo specify how an add-on should be managed, you can use the `addonmanager.kubernetes.io\/mode` label.\nThis label can have one of three values: `Reconcile`, `EnsureExists`, or `Ignore`.\n\n- `Reconcile`: Addon resources will be periodically reconciled with the expected state.\n  If there are any differences, the add-on manager will recreate, reconfigure or delete\n  the resources as needed. This is the default mode if no label is specified.\n- `EnsureExists`: Addon resources will be checked for existence only but will not be modified\n  after creation. The add-on manager will create or re-create the resources when there is\n  no instance of the resource with that name.\n- `Ignore`: Addon resources will be ignored. This mode is useful for add-ons that are not\n  compatible with the add-on manager or that are managed by another controller.\n\nFor more details, see [Addon-manager](https:\/\/github.com\/kubernetes\/kubernetes\/blob\/master\/cluster\/addons\/addon-manager\/README.md).\n\n### beta.kubernetes.io\/arch (deprecated)\n\nType: Label\n\nThis label has been deprecated. Please use [`kubernetes.io\/arch`](#kubernetes-io-arch) instead.\n\n### beta.kubernetes.io\/os (deprecated)\n\nType: Label\n\nThis label has been deprecated. 
Please use [`kubernetes.io\/os`](#kubernetes-io-os) instead.\n\n### kube-aggregator.kubernetes.io\/automanaged {#kube-aggregator-kubernetesio-automanaged}\n\nType: Label\n\nExample: `kube-aggregator.kubernetes.io\/automanaged: \"onstart\"`\n\nUsed on: APIService\n\nThe `kube-apiserver` sets this label on any APIService object that the API server\nhas created automatically. The label marks how the control plane should manage that\nAPIService. You should not add, modify, or remove this label by yourself.\n\n\nAutomanaged APIService objects are deleted by kube-apiserver when it has no built-in\nor custom resource API corresponding to the API group\/version of the APIService.\n\n\nThere are two possible values:\n\n- `onstart`: The APIService should be reconciled when an API server starts up, but not otherwise.\n- `true`: The API server should reconcile this APIService continuously.\n\n### service.alpha.kubernetes.io\/tolerate-unready-endpoints (deprecated)\n\nType: Annotation\n\nUsed on: StatefulSet\n\nThis annotation on a Service denotes if the Endpoints controller should go ahead and create\nEndpoints for unready Pods. 
Endpoints of these Services retain their DNS records and continue
receiving traffic for the Service from the moment the kubelet starts all containers in the pod
and marks it _Running_, until the kubelet stops all containers and deletes the pod from
the API server.

### autoscaling.alpha.kubernetes.io/behavior (deprecated) {#autoscaling-alpha-kubernetes-io-behavior}

Type: Annotation

Used on: HorizontalPodAutoscaler

This annotation was used to configure the scaling behavior for a HorizontalPodAutoscaler (HPA) in earlier Kubernetes versions.
It allowed you to specify how the HPA should scale pods up or down, including setting stabilization windows and scaling policies.
Setting this annotation has no effect in any supported release of Kubernetes.

### kubernetes.io/hostname {#kubernetesiohostname}

Type: Label

Example: `kubernetes.io/hostname: "ip-172-20-114-199.ec2.internal"`

Used on: Node

The Kubelet populates this label with the hostname of the node. Note that the hostname
can be changed from the "actual" hostname by passing the `--hostname-override` flag to
the `kubelet`.

This label is also used as part of the topology hierarchy.
See [topology.kubernetes.io/zone](#topologykubernetesiozone) for more information.

### kubernetes.io/change-cause {#change-cause}

Type: Annotation

Example: `kubernetes.io/change-cause: "kubectl edit --record deployment foo"`

Used on: All Objects

This annotation is a best guess at why something was changed.

It is populated when adding `--record` to a `kubectl` command that may change an object.

### kubernetes.io/description {#description}

Type: Annotation

Example: `kubernetes.io/description: "Description of K8s object."`

Used on: All Objects

This annotation is used for describing the specific behaviour of a given object.

### kubernetes.io/enforce-mountable-secrets {#enforce-mountable-secrets}

Type: Annotation

Example: 
`kubernetes.io/enforce-mountable-secrets: "true"`

Used on: ServiceAccount

The value for this annotation must be **true** to take effect.
When you set this annotation to "true", Kubernetes enforces the following rules for
Pods running as this ServiceAccount:

1. Secrets mounted as volumes must be listed in the ServiceAccount's `secrets` field.
1. Secrets referenced in `envFrom` for containers (including sidecar containers and init containers)
   must also be listed in the ServiceAccount's secrets field.
   If any container in a Pod references a Secret not listed in the ServiceAccount's `secrets` field
   (and even if the reference is marked as `optional`), then the Pod will fail to start,
   and an error indicating the non-compliant secret reference will be generated.
1. Secrets referenced in a Pod's `imagePullSecrets` must be present in the
   ServiceAccount's `imagePullSecrets` field; otherwise, the Pod will fail to start,
   and an error indicating the non-compliant image pull secret reference will be generated.

When you create or update a Pod, these rules are checked.
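A sketch of a ServiceAccount that opts into this enforcement (the ServiceAccount and Secret names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: restricted-sa                  # illustrative name
  annotations:
    kubernetes.io/enforce-mountable-secrets: "true"
secrets:
- name: allowed-secret                 # Secrets that Pods may mount or reference
imagePullSecrets:
- name: allowed-pull-secret            # image pull Secrets that Pods may use
```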
If a Pod doesn't follow them, it won't start and you'll see an error message.
If a Pod is already running and you change the `kubernetes.io/enforce-mountable-secrets` annotation
to true, or you edit the associated ServiceAccount to remove the reference to a Secret
that the Pod is already using, the Pod continues to run.

### node.kubernetes.io/exclude-from-external-load-balancers

Type: Label

Example: `node.kubernetes.io/exclude-from-external-load-balancers`

Used on: Node

You can add labels to particular worker nodes to exclude them from the list of backend servers used by external load balancers.
The following command can be used to exclude a worker node from the list of backend servers in a
backend set:

```shell
kubectl label nodes <node-name> node.kubernetes.io/exclude-from-external-load-balancers=true
```

### controller.kubernetes.io/pod-deletion-cost {#pod-deletion-cost}

Type: Annotation

Example: `controller.kubernetes.io/pod-deletion-cost: "10"`

Used on: Pod

This annotation is used to set [Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost)
which allows users to influence ReplicaSet downscaling order.
The annotation value parses into an `int32` type.

### cluster-autoscaler.kubernetes.io/enable-ds-eviction

Type: Annotation

Example: `cluster-autoscaler.kubernetes.io/enable-ds-eviction: "true"`

Used on: Pod

This annotation controls whether a DaemonSet pod should be evicted by a ClusterAutoscaler.
This annotation needs to be specified on DaemonSet pods in a DaemonSet manifest.
When this annotation is set to `"true"`, the ClusterAutoscaler is allowed to evict
a DaemonSet Pod, even if other rules would normally prevent that.
To disallow the ClusterAutoscaler from evicting DaemonSet pods,
you can set this annotation to `"false"` for important DaemonSet pods.
If this annotation is not set, then the ClusterAutoscaler follows its overall behavior
(i.e.,
evicting DaemonSet Pods based on its configuration).


This annotation only impacts DaemonSet Pods.


### kubernetes.io/ingress-bandwidth

Type: Annotation

Example: `kubernetes.io/ingress-bandwidth: 10M`

Used on: Pod

You can apply quality-of-service traffic shaping to a pod and effectively limit its available
bandwidth. Ingress traffic to a Pod is handled by shaping queued packets to effectively
handle data. To limit the bandwidth on a Pod, write an object definition JSON file and specify
the data traffic speed using the `kubernetes.io/ingress-bandwidth` annotation. The unit used for
specifying ingress rate is bits per second, as a
[Quantity](/docs/reference/kubernetes-api/common-definitions/quantity/).
For example, `10M` means 10 megabits per second.


The ingress traffic shaping annotation is an experimental feature.
If you want to enable traffic shaping support, you must add the `bandwidth` plugin to your CNI
configuration file (default `/etc/cni/net.d`) and ensure that the binary is included in your CNI
bin dir (default `/opt/cni/bin`).


### kubernetes.io/egress-bandwidth

Type: Annotation

Example: `kubernetes.io/egress-bandwidth: 10M`

Used on: Pod

Egress traffic from a Pod is handled by policing, which simply drops packets in excess of the
configured rate. The limits you place on a Pod do not affect the bandwidth of other Pods.
To limit the bandwidth on a Pod, write an object definition JSON file and specify the data traffic
speed using the `kubernetes.io/egress-bandwidth` annotation.
The unit used for specifying egress rate
is bits per second, as a [Quantity](/docs/reference/kubernetes-api/common-definitions/quantity/).
For example, `10M` means 10 megabits per second.


The egress traffic shaping annotation is an experimental feature.
If you want to enable traffic shaping support, you must add the `bandwidth` plugin to your CNI
configuration file (default `/etc/cni/net.d`) and ensure that the binary is included in your CNI
bin dir (default `/opt/cni/bin`).


### beta.kubernetes.io/instance-type (deprecated)

Type: Label


Starting in v1.17, this label is deprecated in favor of
[node.kubernetes.io/instance-type](#nodekubernetesioinstance-type).


### node.kubernetes.io/instance-type {#nodekubernetesioinstance-type}

Type: Label

Example: `node.kubernetes.io/instance-type: "m3.medium"`

Used on: Node

The Kubelet populates this with the instance type as defined by the cloud provider.
This will be set only if you are using a cloud provider.
This setting is handy\nif you want to target certain workloads to certain instance types, but typically you want\nto rely on the Kubernetes scheduler to perform resource-based scheduling.\nYou should aim to schedule based on properties rather than on instance types\n(for example: require a GPU, instead of requiring a `g2.2xlarge`).\n\n### failure-domain.beta.kubernetes.io\/region (deprecated) {#failure-domainbetakubernetesioregion}\n\nType: Label\n\n\nStarting in v1.17, this label is deprecated in favor of\n[topology.kubernetes.io\/region](#topologykubernetesioregion).\n\n\n### failure-domain.beta.kubernetes.io\/zone (deprecated) {#failure-domainbetakubernetesiozone}\n\nType: Label\n\n\nStarting in v1.17, this label is deprecated in favor of\n[topology.kubernetes.io\/zone](#topologykubernetesiozone).\n\n\n### pv.kubernetes.io\/bind-completed {#pv-kubernetesiobind-completed}\n\nType: Annotation\n\nExample: `pv.kubernetes.io\/bind-completed: \"yes\"`\n\nUsed on: PersistentVolumeClaim\n\nWhen this annotation is set on a PersistentVolumeClaim (PVC), that indicates that the lifecycle\nof the PVC has passed through initial binding setup. 
When present, that information changes
how the control plane interprets the state of PVC objects.
The value of this annotation does not matter to Kubernetes.

### pv.kubernetes.io/bound-by-controller {#pv-kubernetesioboundby-controller}

Type: Annotation

Example: `pv.kubernetes.io/bound-by-controller: "yes"`

Used on: PersistentVolume, PersistentVolumeClaim

If this annotation is set on a PersistentVolume or PersistentVolumeClaim, it indicates that a
storage binding (PersistentVolume → PersistentVolumeClaim, or PersistentVolumeClaim → PersistentVolume)
was installed by the controller.
If the annotation isn't set, and there is a storage binding in place, the absence of that
annotation means that the binding was done manually.
The value of this annotation does not matter.

### pv.kubernetes.io/provisioned-by {#pv-kubernetesiodynamically-provisioned}

Type: Annotation

Example: `pv.kubernetes.io/provisioned-by: "kubernetes.io/rbd"`

Used on: PersistentVolume

This annotation is added to a PersistentVolume (PV) that has been dynamically provisioned by Kubernetes.
Its value is the name of the volume plugin that created the volume.
It serves both users (to show where a PV\ncomes from) and Kubernetes (to recognize dynamically provisioned PVs in its decisions).\n\n### pv.kubernetes.io\/migrated-to {#pv-kubernetesio-migratedto}\n\nType: Annotation\n\nExample: `pv.kubernetes.io\/migrated-to: pd.csi.storage.gke.io`\n\nUsed on: PersistentVolume, PersistentVolumeClaim\n\nIt is added to a PersistentVolume(PV) and PersistentVolumeClaim(PVC) that is supposed to be\ndynamically provisioned\/deleted by its corresponding CSI driver through the `CSIMigration` feature gate.\nWhen this annotation is set, the Kubernetes components will \"stand-down\" and the\n`external-provisioner` will act on the objects.\n\n### statefulset.kubernetes.io\/pod-name {#statefulsetkubernetesiopod-name}\n\nType: Label\n\nExample: `statefulset.kubernetes.io\/pod-name: \"mystatefulset-7\"`\n\nUsed on: Pod\n\nWhen a StatefulSet controller creates a Pod for the StatefulSet, the control plane\nsets this label on that Pod. The value of the label is the name of the Pod being created.\n\nSee [Pod Name Label](\/docs\/concepts\/workloads\/controllers\/statefulset\/#pod-name-label)\nin the StatefulSet topic for more details.\n\n### scheduler.alpha.kubernetes.io\/node-selector {#schedulerkubernetesnode-selector}\n\nType: Annotation\n\nExample: `scheduler.alpha.kubernetes.io\/node-selector: \"name-of-node-selector\"`\n\nUsed on: Namespace\n\nThe [PodNodeSelector](\/docs\/reference\/access-authn-authz\/admission-controllers\/#podnodeselector)\nuses this annotation key to assign node selectors to pods in namespaces.\n\n### topology.kubernetes.io\/region {#topologykubernetesioregion}\n\nType: Label\n\nExample: `topology.kubernetes.io\/region: \"us-east-1\"`\n\nUsed on: Node, PersistentVolume\n\nSee [topology.kubernetes.io\/zone](#topologykubernetesiozone).\n\n### topology.kubernetes.io\/zone {#topologykubernetesiozone}\n\nType: Label\n\nExample: `topology.kubernetes.io\/zone: \"us-east-1c\"`\n\nUsed on: Node, PersistentVolume\n\n**On Node**: The 
`kubelet` or the external `cloud-controller-manager` populates this
with the information from the cloud provider. This will be set only if you are using
a cloud provider. However, you can consider setting this on nodes if it makes sense
in your topology.

**On PersistentVolume**: topology-aware volume provisioners will automatically set
node affinity constraints on a `PersistentVolume`.

A zone represents a logical failure domain. It is common for Kubernetes clusters to
span multiple zones for increased availability. While the exact definition of a zone
is left to infrastructure implementations, common properties of a zone include
very low network latency within a zone, no-cost network traffic within a zone, and
failure independence from other zones.
For example, nodes within a zone might share a network switch, but nodes in different
zones should not.

A region represents a larger domain, made up of one or more zones.
It is uncommon for Kubernetes clusters to span multiple regions.
While the exact definition of a zone or region is left to infrastructure implementations,
common properties of a region include higher network latency between them than within them,
non-zero cost for network traffic between them, and failure independence from other zones or regions.
For example, nodes within a region might share power infrastructure (e.g. a UPS or generator),
but nodes in different regions typically would not.

Kubernetes makes a few assumptions about the structure of zones and regions:

1. regions and zones are hierarchical: zones are strict subsets of regions and
   no zone can be in 2 regions
2. 
zone names are unique across regions; for example region "africa-east-1" might be comprised
   of zones "africa-east-1a" and "africa-east-1b"

It should be safe to assume that topology labels do not change.
Even though labels are strictly mutable, consumers of them can assume that a given node
is not going to be moved between zones without being destroyed and recreated.

Kubernetes can use this information in various ways.
For example, the scheduler automatically tries to spread the Pods in a ReplicaSet across nodes
in a single-zone cluster (to reduce the impact of node failures, see
[kubernetes.io/hostname](#kubernetesiohostname)).
With multiple-zone clusters, this spreading behavior also applies to zones (to reduce the impact of zone failures).
This is achieved via _SelectorSpreadPriority_.

_SelectorSpreadPriority_ is a best-effort placement. If the zones in your cluster are
heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod
resource requirements), this placement might prevent equal spreading of your Pods across zones.
If desired, you can use homogeneous zones (same number and types of nodes) to reduce the probability
of unequal spreading.

The scheduler (through the _VolumeZonePredicate_ predicate) will also ensure that Pods
that claim a given volume are only placed into the same zone as that volume.
Volumes cannot be attached across zones.

If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes,
you should consider adding the labels manually (or adding support for `PersistentVolumeLabel`).
With `PersistentVolumeLabel`, the scheduler prevents Pods from mounting volumes in a different zone.
If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.

### volume.beta.kubernetes.io/storage-provisioner (deprecated)

Type: Annotation

Example: 
`volume.beta.kubernetes.io/storage-provisioner: "k8s.io/minikube-hostpath"`

Used on: PersistentVolumeClaim

This annotation has been deprecated since v1.23.
See [volume.kubernetes.io/storage-provisioner](#volume-kubernetes-io-storage-provisioner).

### volume.beta.kubernetes.io/storage-class (deprecated)

Type: Annotation

Example: `volume.beta.kubernetes.io/storage-class: "example-class"`

Used on: PersistentVolume, PersistentVolumeClaim

This annotation can be used for a PersistentVolume (PV) or PersistentVolumeClaim (PVC)
to specify the name of a [StorageClass](/docs/concepts/storage/storage-classes/).
When both the `storageClassName` attribute and the `volume.beta.kubernetes.io/storage-class`
annotation are specified, the annotation `volume.beta.kubernetes.io/storage-class`
takes precedence over the `storageClassName` attribute.

This annotation has been deprecated. Instead, set the
[`storageClassName` field](/docs/concepts/storage/persistent-volumes/#class)
for the PersistentVolumeClaim or PersistentVolume.

### volume.beta.kubernetes.io/mount-options (deprecated) {#mount-options}

Type: Annotation

Example: `volume.beta.kubernetes.io/mount-options: "ro,soft"`

Used on: PersistentVolume

A Kubernetes administrator can specify additional
[mount options](/docs/concepts/storage/persistent-volumes/#mount-options)
for when a PersistentVolume is mounted on a node.

### volume.kubernetes.io/storage-provisioner {#volume-kubernetes-io-storage-provisioner}

Type: Annotation

Used on: PersistentVolumeClaim

This annotation is added to a PVC that is supposed to be dynamically provisioned.
Its value is the name of a volume plugin that is supposed to provision a volume
for this PVC.

### volume.kubernetes.io/selected-node

Type: Annotation

Used on: PersistentVolumeClaim

This annotation is added to a PVC that is triggered by a scheduler to be
dynamically provisioned. 
Its value is the name of the selected node.

### volumes.kubernetes.io/controller-managed-attach-detach

Type: Annotation

Used on: Node

If a node has the annotation `volumes.kubernetes.io/controller-managed-attach-detach`,
its storage attach and detach operations are being managed by the _volume attach/detach_
controller.

The value of the annotation isn't important.

### node.kubernetes.io/windows-build {#nodekubernetesiowindows-build}

Type: Label

Example: `node.kubernetes.io/windows-build: "10.0.17763"`

Used on: Node

When the kubelet is running on Microsoft Windows, it automatically labels its Node
to record the version of Windows Server in use.

The label's value is in the format "MajorVersion.MinorVersion.BuildNumber".

### storage.alpha.kubernetes.io/migrated-plugins {#storagealphakubernetesiomigrated-plugins}

Type: Annotation

Example: `storage.alpha.kubernetes.io/migrated-plugins: "kubernetes.io/cinder"`

Used on: CSINode (an extension API)

This annotation is automatically added for the CSINode object that maps to a node that
installs CSIDriver. This annotation shows the in-tree plugin name of the migrated plugin. 
Its
value depends on your cluster's in-tree cloud provider storage type.

For example, if the in-tree cloud provider storage type is `CSIMigrationvSphere`, the CSINodes instance for the node should be updated with:
`storage.alpha.kubernetes.io/migrated-plugins: "kubernetes.io/vsphere-volume"`

### service.kubernetes.io/headless {#servicekubernetesioheadless}

Type: Label

Example: `service.kubernetes.io/headless: ""`

Used on: Endpoints

The control plane adds this label to an Endpoints object when the owning Service is headless.
To learn more, read [Headless Services](/docs/concepts/services-networking/service/#headless-services).

### service.kubernetes.io/topology-aware-hints (deprecated) {#servicekubernetesiotopology-aware-hints}

Type: Annotation

Example: `service.kubernetes.io/topology-aware-hints: "Auto"`

Used on: Service

This annotation was used for enabling _topology aware hints_ on Services. Topology aware
hints have since been renamed: the concept is now called
[topology aware routing](/docs/concepts/services-networking/topology-aware-routing/).
Setting the annotation to `Auto`, on a Service, configured the Kubernetes control plane to
add topology hints on EndpointSlices associated with that Service. You can also explicitly
set the annotation to `Disabled`.

If you are running an older version of Kubernetes,
check the documentation for that Kubernetes version to see how topology aware routing
works in that release.

There are no other valid values for this annotation. 
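
On clusters that still honor it, the deprecated annotation was set on the Service's metadata; a minimal sketch (the Service name and selector are hypothetical):

```yaml
# Sketch of the deprecated annotation; "my-service" and its selector are
# hypothetical. Newer clusters use service.kubernetes.io/topology-mode instead.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.kubernetes.io/topology-aware-hints: "Auto"
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - port: 80
```
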
If you don't want topology aware hints
for a Service, don't add this annotation.

### service.kubernetes.io/topology-mode

Type: Annotation

Example: `service.kubernetes.io/topology-mode: Auto`

Used on: Service

This annotation provides a way to define how Services handle network topology;
for example, you can configure a Service so that Kubernetes prefers keeping traffic between
a client and server within a single topology zone.
In some cases this can help reduce costs or improve network performance.

See [Topology Aware Routing](/docs/concepts/services-networking/topology-aware-routing/)
for more details.

### kubernetes.io/service-name {#kubernetesioservice-name}

Type: Label

Example: `kubernetes.io/service-name: "my-website"`

Used on: EndpointSlice

Kubernetes associates [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/) with
[Services](/docs/concepts/services-networking/service/) using this label.

This label records the name of the
Service that the EndpointSlice is backing. 
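
For illustration, an EndpointSlice that the control plane creates for a Service named `my-website` might look like this (the slice name and address are hypothetical):

```yaml
# Hedged sketch of a control-plane-managed EndpointSlice; the generated
# name suffix and the endpoint address are hypothetical.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-website-abc12
  labels:
    kubernetes.io/service-name: my-website
addressType: IPv4
ports:
  - name: http
    port: 80
    protocol: TCP
endpoints:
  - addresses:
      - "10.0.0.5"
```
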
All EndpointSlices should have this label set to
the name of their associated Service.

### kubernetes.io/service-account.name

Type: Annotation

Example: `kubernetes.io/service-account.name: "sa-name"`

Used on: Secret

This annotation records the name of the
ServiceAccount that the token (stored in the Secret of type `kubernetes.io/service-account-token`)
represents.

### kubernetes.io/service-account.uid

Type: Annotation

Example: `kubernetes.io/service-account.uid: da68f9c6-9d26-11e7-b84e-002dc52800da`

Used on: Secret

This annotation records the unique ID of the
ServiceAccount that the token (stored in the Secret of type `kubernetes.io/service-account-token`)
represents.

### kubernetes.io/legacy-token-last-used

Type: Label

Example: `kubernetes.io/legacy-token-last-used: 2022-10-24`

Used on: Secret

The control plane only adds this label to Secrets that have the type
`kubernetes.io/service-account-token`.
The value of this label records the date (ISO 8601 format, UTC time zone) when the control plane
last saw a request where the client authenticated using the service account token.

If a legacy token was last used before the cluster gained the feature (added in Kubernetes v1.26),
then the label isn't set.

### kubernetes.io/legacy-token-invalid-since

Type: Label

Example: `kubernetes.io/legacy-token-invalid-since: 2023-10-27`

Used on: Secret

The control plane automatically adds this label to auto-generated Secrets that
have the type `kubernetes.io/service-account-token`. This label marks the
Secret-based token as invalid for authentication. 
The value of this label
records the date (ISO 8601 format, UTC time zone) when the control plane detects
that the auto-generated Secret has not been used for a specified duration
(defaults to one year).

### endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by}

Type: Label

Example: `endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io`

Used on: EndpointSlices

The label is used to indicate the controller or entity that manages the EndpointSlice. This label
aims to enable different EndpointSlice objects to be managed by different controllers or entities
within the same cluster.

### endpointslice.kubernetes.io/skip-mirror {#endpointslicekubernetesioskip-mirror}

Type: Label

Example: `endpointslice.kubernetes.io/skip-mirror: "true"`

Used on: Endpoints

The label can be set to `"true"` on an Endpoints resource to indicate that the
EndpointSliceMirroring controller should not mirror this resource with EndpointSlices.

### service.kubernetes.io/service-proxy-name {#servicekubernetesioservice-proxy-name}

Type: Label

Example: `service.kubernetes.io/service-proxy-name: "foo-bar"`

Used on: Service

Setting this label on a Service tells kube-proxy to ignore that Service, delegating
its handling to the named custom proxy.

### experimental.windows.kubernetes.io/isolation-type (deprecated) {#experimental-windows-kubernetes-io-isolation-type}

Type: Annotation

Example: `experimental.windows.kubernetes.io/isolation-type: "hyperv"`

Used on: Pod

The annotation is used to run Windows containers with Hyper-V isolation.


Starting from v1.20, this annotation is deprecated.
Experimental Hyper-V support was removed in 1.21.


### ingressclass.kubernetes.io/is-default-class

Type: Annotation

Example: `ingressclass.kubernetes.io/is-default-class: "true"`

Used on: IngressClass

When an IngressClass resource has this annotation set to `"true"`, new Ingress resources
without a 
class specified will be assigned this default class.

### nginx.ingress.kubernetes.io/configuration-snippet

Type: Annotation

Example: `nginx.ingress.kubernetes.io/configuration-snippet: "  more_set_headers \"Request-Id: $req_id\";\nmore_set_headers \"Example: 42\";\n"`

Used on: Ingress

You can use this annotation to set extra configuration on an Ingress that
uses the [NGINX Ingress Controller](https://github.com/kubernetes/ingress-nginx/).
The `configuration-snippet` annotation is ignored
by default since version 1.9.0 of the ingress controller.
The NGINX ingress controller setting `allow-snippet-annotations`
has to be explicitly enabled to use this annotation.
Enabling the annotation can be dangerous in a multi-tenant cluster, as it can lead to people with
otherwise limited permissions being able to retrieve all Secrets in the cluster.

### kubernetes.io/ingress.class (deprecated)

Type: Annotation

Used on: Ingress


Starting in v1.18, this annotation is deprecated in favor of `spec.ingressClassName`.


### storageclass.kubernetes.io/is-default-class

Type: Annotation

Example: `storageclass.kubernetes.io/is-default-class: "true"`

Used on: StorageClass

When a single StorageClass resource has this annotation set to `"true"`, new PersistentVolumeClaim
resources without a class specified will be assigned this default class.

### alpha.kubernetes.io/provided-node-ip (alpha) {#alpha-kubernetes-io-provided-node-ip}

Type: Annotation

Example: `alpha.kubernetes.io/provided-node-ip: "10.0.0.1"`

Used on: Node

The kubelet can set this annotation on a Node to denote its configured IPv4 and/or IPv6 address.

When kubelet is started with the `--cloud-provider` flag set to any value (includes both external
and legacy in-tree cloud providers), it sets this annotation on the Node to denote an IP address
set from the command line flag (`--node-ip`). 
This IP is verified with the cloud provider as valid
by the cloud-controller-manager.

### batch.kubernetes.io/job-completion-index

Type: Annotation, Label

Example: `batch.kubernetes.io/job-completion-index: "3"`

Used on: Pod

The Job controller in the kube-controller-manager sets this as a label and annotation for Pods
created with Indexed [completion mode](/docs/concepts/workloads/controllers/job/#completion-mode).

Note the [PodIndexLabel](/docs/reference/command-line-tools-reference/feature-gates/)
feature gate must be enabled for this to be added as a pod **label**,
otherwise it will just be an annotation.

### batch.kubernetes.io/cronjob-scheduled-timestamp

Type: Annotation

Example: `batch.kubernetes.io/cronjob-scheduled-timestamp: "2016-05-19T03:00:00-07:00"`

Used on: Jobs and Pods controlled by CronJobs

This annotation is used to record the original (expected) creation timestamp for a Job,
when that Job is part of a CronJob.
The control plane sets the value to that timestamp in RFC3339 format. If the Job belongs to a CronJob
with a timezone specified, then the timestamp is in that timezone. Otherwise, the timestamp is in the controller-manager's local time.

### kubectl.kubernetes.io/default-container

Type: Annotation

Example: `kubectl.kubernetes.io/default-container: "front-end-app"`

Used on: Pod

The value of the annotation is the container name that is default for this Pod.
For example, `kubectl logs` or `kubectl exec` without `-c` or `--container` flag
will use this default container.

### kubectl.kubernetes.io/default-logs-container (deprecated)

Type: Annotation

Example: `kubectl.kubernetes.io/default-logs-container: "front-end-app"`

Used on: Pod

The value of the annotation is the container name that is the default logging container for this
Pod. For example, `kubectl logs` without `-c` or `--container` flag will use this default
container.


This annotation is deprecated. 
You should use the
[`kubectl.kubernetes.io/default-container`](#kubectl-kubernetes-io-default-container)
annotation instead. Kubernetes versions 1.25 and newer ignore this annotation.


### kubectl.kubernetes.io/last-applied-configuration

Type: Annotation

Example: _see following snippet_
```yaml
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"example","namespace":"default"},"spec":{"selector":{"matchLabels":{"app.kubernetes.io/name":"foo"}},"template":{"metadata":{"labels":{"app.kubernetes.io/name":"foo"}},"spec":{"containers":[{"image":"container-registry.example/foo-bar:1.42","name":"foo-bar","ports":[{"containerPort":42}]}]}}}}
```

Used on: all objects

The kubectl command line tool uses this annotation as a legacy mechanism
to track changes. That mechanism has been superseded by
[Server-side apply](/docs/reference/using-api/server-side-apply/).

### kubectl.kubernetes.io/restartedAt {#kubectl-k8s-io-restart-at}

Type: Annotation

Example: `kubectl.kubernetes.io/restartedAt: "2024-06-21T17:27:41Z"`

Used on: Deployment, ReplicaSet, StatefulSet, DaemonSet, Pod

This annotation contains the latest restart time of a resource (Deployment, ReplicaSet, StatefulSet or DaemonSet),
where kubectl triggered a rollout in order to force creation of new Pods.
The command `kubectl rollout restart <RESOURCE>` triggers a restart by patching the template
metadata of all the pods of the resource with this annotation. In the above example, the latest restart time is shown as 21st June 2024 at 17:27:41 UTC.

You should not assume that this annotation represents the date / time of the most recent update;
a separate change could have been made since the last manually triggered rollout.

If you manually set this annotation on a Pod, nothing happens. 
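
After a `kubectl rollout restart`, the annotation ends up in the workload's Pod template, roughly as in this sketch (the Deployment name, image, and timestamp are hypothetical):

```yaml
# Sketch of where `kubectl rollout restart deployment/front-end` places the
# annotation; changing the Pod template metadata is what triggers the rollout.
# The name, image, and timestamp here are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: front-end
  template:
    metadata:
      labels:
        app.kubernetes.io/name: front-end
      annotations:
        kubectl.kubernetes.io/restartedAt: "2024-06-21T17:27:41Z"
    spec:
      containers:
        - name: front-end
          image: registry.example/front-end:1.0
```
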
The restarting side effect comes from
how workload management and Pod templating works.

### endpoints.kubernetes.io/over-capacity

Type: Annotation

Example: `endpoints.kubernetes.io/over-capacity: truncated`

Used on: Endpoints

The control plane adds this annotation to
an [Endpoints](/docs/concepts/services-networking/service/#endpoints) object if the associated
Service has more than 1000 backing endpoints.
The annotation indicates that the Endpoints object is over capacity and the number of endpoints
has been truncated to 1000.

If the number of backend endpoints falls below 1000, the control plane removes this annotation.

### endpoints.kubernetes.io/last-change-trigger-time

Type: Annotation

Example: `endpoints.kubernetes.io/last-change-trigger-time: "2023-07-20T04:45:21Z"`

Used on: Endpoints

This annotation is set on an [Endpoints](/docs/concepts/services-networking/service/#endpoints) object and
represents the timestamp (stored in RFC 3339 date-time format, for example '2018-10-22T19:32:52.1Z')
of the last change in some Pod or Service object that triggered the change to the Endpoints object.

### control-plane.alpha.kubernetes.io/leader (deprecated) {#control-plane-alpha-kubernetes-io-leader}

Type: Annotation

Example: `control-plane.alpha.kubernetes.io/leader={"holderIdentity":"controller-0","leaseDurationSeconds":15,"acquireTime":"2023-01-19T13:12:57Z","renewTime":"2023-01-19T13:13:54Z","leaderTransitions":1}`

Used on: Endpoints

The control plane previously set this annotation on
an [Endpoints](/docs/concepts/services-networking/service/#endpoints) object. 
This annotation provided
the following detail:

- Who is the current leader.
- The time when the current leadership was acquired.
- The duration of the lease (of the leadership) in seconds.
- The time the current lease (the current leadership) should be renewed.
- The number of leadership transitions that happened in the past.

Kubernetes now uses [Leases](/docs/concepts/architecture/leases/) to
manage leader assignment for the Kubernetes control plane.

### batch.kubernetes.io/job-tracking (deprecated) {#batch-kubernetes-io-job-tracking}

Type: Annotation

Example: `batch.kubernetes.io/job-tracking: ""`

Used on: Jobs

The presence of this annotation on a Job used to indicate that the control plane is
[tracking the Job status using finalizers](/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers).
Adding or removing this annotation no longer has an effect (Kubernetes v1.27 and later).
All Jobs are tracked with finalizers.

### job-name (deprecated) {#job-name}

Type: Label

Example: `job-name: "pi"`

Used on: Jobs and Pods controlled by Jobs


Starting from Kubernetes 1.27, this label is deprecated.
Kubernetes 1.27 and newer ignore this label and use the prefixed
`batch.kubernetes.io/job-name` label instead.


### controller-uid (deprecated) {#controller-uid}

Type: Label

Example: `controller-uid: "$UID"`

Used on: Jobs and Pods controlled by Jobs


Starting from Kubernetes 1.27, this label is deprecated.
Kubernetes 1.27 and newer ignore this label and use the prefixed
`batch.kubernetes.io/controller-uid` label instead.


### batch.kubernetes.io/job-name {#batchkubernetesio-job-name}

Type: Label

Example: `batch.kubernetes.io/job-name: "pi"`

Used on: Jobs and Pods controlled by Jobs

This label is used as a user-friendly way to get Pods corresponding to a Job.
The `job-name` comes from the `name` of the Job and allows for an easy way to
get Pods corresponding to the Job.

### batch.kubernetes.io/controller-uid 
{#batchkubernetesio-controller-uid}

Type: Label

Example: `batch.kubernetes.io/controller-uid: "$UID"`

Used on: Jobs and Pods controlled by Jobs

This label is used as a programmatic way to get all Pods corresponding to a Job.
The `controller-uid` is a unique identifier that gets set in the `selector` field so the Job
controller can get all the corresponding Pods.

### scheduler.alpha.kubernetes.io/defaultTolerations {#scheduleralphakubernetesio-defaulttolerations}

Type: Annotation

Example: `scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Equal", "value": "value1", "effect": "NoSchedule", "key": "dedicated-node"}]'`

Used on: Namespace

This annotation requires the [PodTolerationRestriction](/docs/reference/access-authn-authz/admission-controllers/#podtolerationrestriction)
admission controller to be enabled. This annotation key allows assigning tolerations to a
namespace, and any new Pods created in this namespace get these tolerations added.

### scheduler.alpha.kubernetes.io/tolerationsWhitelist {#schedulerkubernetestolerations-whitelist}

Type: Annotation

Example: `scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Exists", "effect": "NoSchedule", "key": "dedicated-node"}]'`

Used on: Namespace

This annotation is only useful when the (Alpha)
[PodTolerationRestriction](/docs/reference/access-authn-authz/admission-controllers/#podtolerationrestriction)
admission controller is enabled. The annotation value is a JSON document that defines a list of
allowed tolerations for the namespace it annotates. 
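
The two PodTolerationRestriction annotations are commonly combined on one Namespace; a minimal sketch (the namespace name is hypothetical):

```yaml
# Sketch of a Namespace carrying both PodTolerationRestriction annotations;
# the namespace name is hypothetical and the values follow the examples above.
apiVersion: v1
kind: Namespace
metadata:
  name: dedicated-workloads
  annotations:
    scheduler.alpha.kubernetes.io/defaultTolerations: >-
      [{"operator": "Equal", "value": "value1", "effect": "NoSchedule", "key": "dedicated-node"}]
    scheduler.alpha.kubernetes.io/tolerationsWhitelist: >-
      [{"operator": "Exists", "effect": "NoSchedule", "key": "dedicated-node"}]
```
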
When you create a Pod or modify its
tolerations, the API server checks the tolerations to see if they are mentioned in the allow list.
The pod is admitted only if the check succeeds.

### scheduler.alpha.kubernetes.io/preferAvoidPods (deprecated) {#scheduleralphakubernetesio-preferavoidpods}

Type: Annotation

Used on: Node

This annotation requires the [NodePreferAvoidPods scheduling plugin](/docs/reference/scheduling/config/#scheduling-plugins)
to be enabled. The plugin is deprecated since Kubernetes 1.22.
Use [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/) instead.

### node.kubernetes.io/not-ready

Type: Taint

Example: `node.kubernetes.io/not-ready: "NoExecute"`

Used on: Node

The Node controller detects whether a Node is ready by monitoring its health
and adds or removes this taint accordingly.

### node.kubernetes.io/unreachable

Type: Taint

Example: `node.kubernetes.io/unreachable: "NoExecute"`

Used on: Node

The Node controller adds the taint to a Node corresponding to the
[NodeCondition](/docs/concepts/architecture/nodes/#condition) `Ready` being `Unknown`.

### node.kubernetes.io/unschedulable

Type: Taint

Example: `node.kubernetes.io/unschedulable: "NoSchedule"`

Used on: Node

The taint will be added to a node when initializing the node, to avoid a race condition.

### node.kubernetes.io/memory-pressure

Type: Taint

Example: `node.kubernetes.io/memory-pressure: "NoSchedule"`

Used on: Node

The kubelet detects memory pressure based on `memory.available` and `allocatableMemory.available`
observed on a Node. 
The observed values are then compared to the corresponding thresholds that can
be set on the kubelet to determine if the Node condition and taint should be added/removed.

### node.kubernetes.io/disk-pressure

Type: Taint

Example: `node.kubernetes.io/disk-pressure: "NoSchedule"`

Used on: Node

The kubelet detects disk pressure based on `imagefs.available`, `imagefs.inodesFree`,
`nodefs.available` and `nodefs.inodesFree` (Linux only) observed on a Node.
The observed values are then compared to the corresponding thresholds that can be set on the
kubelet to determine if the Node condition and taint should be added/removed.

### node.kubernetes.io/network-unavailable

Type: Taint

Example: `node.kubernetes.io/network-unavailable: "NoSchedule"`

Used on: Node

This is initially set by the kubelet when the cloud provider used indicates a requirement for
additional network configuration. Only when the route on the cloud is configured properly will the
taint be removed by the cloud provider.

### node.kubernetes.io/pid-pressure

Type: Taint

Example: `node.kubernetes.io/pid-pressure: "NoSchedule"`

Used on: Node

The kubelet checks the difference between the size of `/proc/sys/kernel/pid_max` and the PIDs
consumed by Kubernetes on a node to get the number of available PIDs, referred to as the
`pid.available` metric. 
The metric is then compared to the corresponding threshold that can be set on the kubelet
to determine if the node condition and taint should be added/removed.

### node.kubernetes.io/out-of-service

Type: Taint

Example: `node.kubernetes.io/out-of-service: "NoExecute"`

Used on: Node

A user can manually add the taint to a Node marking it out-of-service.
If the `NodeOutOfServiceVolumeDetach`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint,
the Pods on the node will be forcefully deleted if there are no matching tolerations on it and
volume detach operations for the Pods terminating on the node will happen immediately.
This allows the Pods on the out-of-service node to recover quickly on a different node.


Refer to [Non-graceful node shutdown](/docs/concepts/architecture/nodes/#non-graceful-node-shutdown)
for further details about when and how to use this taint.


### node.cloudprovider.kubernetes.io/uninitialized

Type: Taint

Example: `node.cloudprovider.kubernetes.io/uninitialized: "NoSchedule"`

Used on: Node

When the kubelet is started with an "external" cloud provider, this taint is set on a Node to
mark it as unusable until a controller from the cloud-controller-manager initializes the Node,
at which point it removes the taint.

### node.cloudprovider.kubernetes.io/shutdown

Type: Taint

Example: `node.cloudprovider.kubernetes.io/shutdown: "NoSchedule"`

Used on: Node

If a Node is in a cloud provider specified shutdown state, the Node gets tainted accordingly
with `node.cloudprovider.kubernetes.io/shutdown` and the taint effect of `NoSchedule`.

### feature.node.kubernetes.io/*

Type: Label

Example: `feature.node.kubernetes.io/network-sriov.capable: "true"`

Used on: Node

These labels are used by the Node Feature Discovery (NFD) component to advertise
features on a 
node. All built-in labels use the `feature.node.kubernetes.io` label
namespace and have the format `feature.node.kubernetes.io/<feature-name>: "true"`.
NFD has many extension points for creating vendor and application-specific labels.
For details, see the [customization guide](https://kubernetes-sigs.github.io/node-feature-discovery/v0.12/usage/customization-guide).

### nfd.node.kubernetes.io/master.version

Type: Annotation

Example: `nfd.node.kubernetes.io/master.version: "v0.6.0"`

Used on: Node

For node(s) where the Node Feature Discovery (NFD)
[master](https://kubernetes-sigs.github.io/node-feature-discovery/stable/usage/nfd-master.html)
is scheduled, this annotation records the version of the NFD master.
It is used for informational purposes only.

### nfd.node.kubernetes.io/worker.version

Type: Annotation

Example: `nfd.node.kubernetes.io/worker.version: "v0.4.0"`

Used on: Nodes

This annotation records the version of a Node Feature Discovery
[worker](https://kubernetes-sigs.github.io/node-feature-discovery/stable/usage/nfd-worker.html)
if there is one running on a node. It is used for informational purposes only.

### nfd.node.kubernetes.io/feature-labels

Type: Annotation

Example: `nfd.node.kubernetes.io/feature-labels: "cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-hardware_multithreading,kernel-version.full"`

Used on: Nodes

This annotation records a comma-separated list of node feature labels managed by
[Node Feature Discovery](https://kubernetes-sigs.github.io/node-feature-discovery/) (NFD).
NFD uses this for an internal mechanism. 
You should not edit this annotation yourself.\n\n### nfd.node.kubernetes.io\/extended-resources\n\nType: Annotation\n\nExample: `nfd.node.kubernetes.io\/extended-resources: \"accelerator.acme.example\/q500,example.com\/coprocessor-fx5\"`\n\nUsed on: Nodes\n\nThis annotation records a comma-separated list of\n[extended resources](\/docs\/concepts\/configuration\/manage-resources-containers\/#extended-resources)\nmanaged by [Node Feature Discovery](https:\/\/kubernetes-sigs.github.io\/node-feature-discovery\/) (NFD).\nNFD uses this for an internal mechanism. You should not edit this annotation yourself.\n\n### nfd.node.kubernetes.io\/node-name\n\nType: Label\n\nExample: `nfd.node.kubernetes.io\/node-name: node-1`\n\nUsed on: Nodes\n\nIt specifies which node the NodeFeature object is targeting.\nCreators of NodeFeature objects must set this label and \nconsumers of the objects are supposed to use the label for \nfiltering features designated for a certain node.\n\n\nThese Node Feature Discovery (NFD) labels or annotations only apply to \nthe nodes where NFD is running. To learn more about NFD and \nits components go to its official [documentation](https:\/\/kubernetes-sigs.github.io\/node-feature-discovery\/stable\/get-started\/).\n\n\n### service.beta.kubernetes.io\/aws-load-balancer-access-log-emit-interval (beta) {#service-beta-kubernetes-io-aws-load-balancer-access-log-emit-interval}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-access-log-emit-interval: \"5\"`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\nthe load balancer for a Service based on this annotation. The value determines\nhow often the load balancer writes log entries. 
For example, if you set the value
to 5, the log writes occur 5 seconds apart.

### service.beta.kubernetes.io/aws-load-balancer-access-log-enabled (beta) {#service-beta-kubernetes-io-aws-load-balancer-access-log-enabled}

Example: `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "false"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures
the load balancer for a Service based on this annotation. Access logging is enabled
if you set the annotation to "true".

### service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name (beta) {#service-beta-kubernetes-io-aws-load-balancer-access-log-s3-bucket-name}

Example: `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: example`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures
the load balancer for a Service based on this annotation. The load balancer
writes logs to an S3 bucket with the name you specify.

### service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix (beta) {#service-beta-kubernetes-io-aws-load-balancer-access-log-s3-bucket-prefix}

Example: `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "/example"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures
the load balancer for a Service based on this annotation. 
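
The access-log annotations in this section are typically set together on a single `LoadBalancer` Service; a minimal sketch (the Service name, selector, bucket, and prefix are hypothetical):

```yaml
# Hedged sketch combining the AWS access-log annotations on one Service;
# all names, the bucket, and the prefix here are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: example
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "/example"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - port: 80
```
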
The load balancer\nwrites log objects with the prefix that you specify.\n\n### service.beta.kubernetes.io\/aws-load-balancer-additional-resource-tags (beta) {#service-beta-kubernetes-io-aws-load-balancer-additional-resource-tags}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-additional-resource-tags: \"Environment=demo,Project=example\"`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\ntags (an AWS concept) for a load balancer based on the comma-separated key\/value\npairs in the value of this annotation.\n\n### service.beta.kubernetes.io\/aws-load-balancer-alpn-policy (beta) {#service-beta-kubernetes-io-aws-load-balancer-alpn-policy}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-alpn-policy: HTTP2Optional`\n\nUsed on: Service\n\nThe [AWS load balancer controller](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/)\nuses this annotation.\nSee [annotations](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/latest\/guide\/service\/annotations\/)\nin the AWS load balancer controller documentation.\n\n### service.beta.kubernetes.io\/aws-load-balancer-attributes (beta) {#service-beta-kubernetes-io-aws-load-balancer-attributes}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-attributes: \"deletion_protection.enabled=true\"`\n\nUsed on: Service\n\nThe [AWS load balancer controller](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/)\nuses this annotation.\nSee [annotations](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/latest\/guide\/service\/annotations\/)\nin the AWS load balancer controller documentation.\n\n### service.beta.kubernetes.io\/aws-load-balancer-backend-protocol (beta) {#service-beta-kubernetes-io-aws-load-balancer-backend-protocol}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-backend-protocol: tcp`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic 
load balancing configures\nthe load balancer listener based on the value of this annotation.\n\n### service.beta.kubernetes.io\/aws-load-balancer-connection-draining-enabled (beta) {#service-beta-kubernetes-io-aws-load-balancer-connection-draining-enabled}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-connection-draining-enabled: \"false\"`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\nthe load balancer based on this annotation. The load balancer's connection draining\nsetting depends on the value you set.\n\n### service.beta.kubernetes.io\/aws-load-balancer-connection-draining-timeout (beta) {#service-beta-kubernetes-io-aws-load-balancer-connection-draining-timeout}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-connection-draining-timeout: \"60\"`\n\nUsed on: Service\n\nIf you configure [connection draining](#service-beta-kubernetes-io-aws-load-balancer-connection-draining-enabled)\nfor a Service of `type: LoadBalancer`, and you use the AWS cloud, the integration configures\nthe draining period based on this annotation. 
The value you set determines the draining\ntimeout in seconds.\n\n### service.beta.kubernetes.io\/aws-load-balancer-ip-address-type (beta) {#service-beta-kubernetes-io-aws-load-balancer-ip-address-type}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-ip-address-type: ipv4`\n\nUsed on: Service\n\nThe [AWS load balancer controller](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/)\nuses this annotation.\nSee [annotations](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/latest\/guide\/service\/annotations\/)\nin the AWS load balancer controller documentation.\n\n### service.beta.kubernetes.io\/aws-load-balancer-connection-idle-timeout (beta) {#service-beta-kubernetes-io-aws-load-balancer-connection-idle-timeout}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-connection-idle-timeout: \"60\"`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\na load balancer based on this annotation. The load balancer has a configured idle\ntimeout period (in seconds) that applies to its connections. If no data has been\nsent or received by the time that the idle timeout period elapses, the load balancer\ncloses the connection.\n\n### service.beta.kubernetes.io\/aws-load-balancer-cross-zone-load-balancing-enabled (beta) {#service-beta-kubernetes-io-aws-load-balancer-cross-zone-load-balancing-enabled}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-cross-zone-load-balancing-enabled: \"true\"`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\na load balancer based on this annotation. 
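As a sketch, the idle-timeout and cross-zone annotations can be combined on a single Service (the name, selector, and values shown are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service          # illustrative name
  annotations:
    # close idle connections after 60 seconds
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    # spread requests across registered targets in all enabled availability zones
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
```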
If you set this annotation to \"true\",\neach load balancer node distributes requests evenly across the registered targets\nin all enabled [availability zones](https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/using-regions-availability-zones.html#concepts-availability-zones).\nIf you disable cross-zone load balancing, each load balancer node distributes requests\nevenly across the registered targets in its availability zone only.\n\n### service.beta.kubernetes.io\/aws-load-balancer-eip-allocations (beta) {#service-beta-kubernetes-io-aws-load-balancer-eip-allocations}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-eip-allocations: \"eipalloc-01bcdef23bcdef456,eipalloc-def1234abc4567890\"`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\na load balancer based on this annotation. The value is a comma-separated list\nof elastic IP address allocation IDs.\n\nThis annotation is only relevant for Services of `type: LoadBalancer`, where\nthe load balancer is an AWS Network Load Balancer.\n\n### service.beta.kubernetes.io\/aws-load-balancer-extra-security-groups (beta) {#service-beta-kubernetes-io-aws-load-balancer-extra-security-groups}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-extra-security-groups: \"sg-12abcd3456,sg-34dcba6543\"`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\na load balancer based on this annotation. 
The annotation value is a comma-separated
list of extra AWS VPC security groups to configure for the load balancer.

### service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-healthy-threshold}

Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures
a load balancer based on this annotation. The annotation value specifies the number of
successive successful health checks required for a backend to be considered healthy
for traffic.

### service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-interval}

Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "30"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures
a load balancer based on this annotation. The annotation value specifies the interval,
in seconds, between health check probes made by the load balancer.

### service.beta.kubernetes.io/aws-load-balancer-healthcheck-path (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-path}

Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthcheck`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures
a load balancer based on this annotation.
The annotation value determines the\npath part of the URL that is used for HTTP health checks.\n\n### service.beta.kubernetes.io\/aws-load-balancer-healthcheck-port (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-port}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-healthcheck-port: \"24\"`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\na load balancer based on this annotation. The annotation value determines which\nport the load balancer connects to when performing health checks.\n\n### service.beta.kubernetes.io\/aws-load-balancer-healthcheck-protocol (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-protocol}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-healthcheck-protocol: TCP`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\na load balancer based on this annotation. The annotation value determines how the\nload balancer checks the health of backend targets.\n\n### service.beta.kubernetes.io\/aws-load-balancer-healthcheck-timeout (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-timeout}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-healthcheck-timeout: \"3\"`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\na load balancer based on this annotation. 
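The health-check annotations are usually tuned as a set. A minimal sketch of a Service carrying them (the thresholds, path, and port are illustrative values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service          # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "30"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: HTTP
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthcheck
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8080"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
```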
The annotation value specifies the number\nof seconds before a probe that hasn't yet succeeded is automatically treated as\nhaving failed.\n\n### service.beta.kubernetes.io\/aws-load-balancer-healthcheck-unhealthy-threshold (beta) {#service-beta-kubernetes-io-aws-load-balancer-healthcheck-unhealthy-threshold}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-healthcheck-unhealthy-threshold: \"3\"`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\na load balancer based on this annotation. The annotation value specifies the number of\nsuccessive unsuccessful health checks required for a backend to be considered unhealthy\nfor traffic.\n\n### service.beta.kubernetes.io\/aws-load-balancer-internal (beta) {#service-beta-kubernetes-io-aws-load-balancer-internal}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-internal: \"true\"`\n\nUsed on: Service\n\nThe cloud controller manager integration with AWS elastic load balancing configures\na load balancer based on this annotation. 
When you set this annotation to "true",
the integration configures an internal load balancer.

If you use the [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/),
see [`service.beta.kubernetes.io/aws-load-balancer-scheme`](#service-beta-kubernetes-io-aws-load-balancer-scheme).

### service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules (beta) {#service-beta-kubernetes-io-aws-load-balancer-manage-backend-security-group-rules}

Example: `service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "true"`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
uses this annotation.
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-name (beta) {#service-beta-kubernetes-io-aws-load-balancer-name}

Example: `service.beta.kubernetes.io/aws-load-balancer-name: my-elb`

Used on: Service

If you set this annotation on a Service, and you also annotate that Service with
`service.beta.kubernetes.io/aws-load-balancer-type: "external"`, and you use the
[AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
in your cluster, then the AWS load balancer controller sets the name of that load
balancer to the value you set for _this_ annotation.

See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-nlb-target-type (beta) {#service-beta-kubernetes-io-aws-load-balancer-nlb-target-type}

Example: `service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance`

Used on: Service

The
[AWS load balancer controller](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/)\nuses this annotation.\nSee [annotations](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/latest\/guide\/service\/annotations\/)\nin the AWS load balancer controller documentation.\n\n### service.beta.kubernetes.io\/aws-load-balancer-private-ipv4-addresses (beta) {#service-beta-kubernetes-io-aws-load-balancer-private-ipv4-addresses}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-private-ipv4-addresses: \"198.51.100.0,198.51.100.64\"`\n\nUsed on: Service\n\nThe [AWS load balancer controller](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/)\nuses this annotation.\nSee [annotations](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/latest\/guide\/service\/annotations\/)\nin the AWS load balancer controller documentation.\n\n### service.beta.kubernetes.io\/aws-load-balancer-proxy-protocol (beta) {#service-beta-kubernetes-io-aws-load-balancer-proxy-protocol}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-proxy-protocol: \"*\"`\n\nUsed on: Service\n\nThe official Kubernetes integration with AWS elastic load balancing configures\na load balancer based on this annotation. 
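A minimal sketch of a Service enabling the PROXY protocol for its backends (the name and selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service          # illustrative name
  annotations:
    # wrap TCP connections to backend Pods with the PROXY protocol
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
```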
The only permitted value is `"*"`,
which indicates that the load balancer should wrap TCP connections to the backend
Pod with the PROXY protocol.

### service.beta.kubernetes.io/aws-load-balancer-scheme (beta) {#service-beta-kubernetes-io-aws-load-balancer-scheme}

Example: `service.beta.kubernetes.io/aws-load-balancer-scheme: internal`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
uses this annotation.
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-security-groups (deprecated) {#service-beta-kubernetes-io-aws-load-balancer-security-groups}

Example: `service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f,sg-8725gr62r"`

Used on: Service

The AWS load balancer controller uses this annotation to specify a comma-separated list
of security groups you want to attach to an AWS load balancer. Both the name and the ID of
a security group are supported, where the name matches a `Name` tag, not the `groupName` attribute.

When this annotation is added to a Service, the load-balancer controller attaches the security groups
referenced by the annotation to the load balancer. If you omit this annotation, the AWS load balancer
controller automatically creates a new security group and attaches it to the load balancer.


Kubernetes v1.27 and later do not directly set or read this annotation.
However, the AWS
load balancer controller (part of the Kubernetes project) does still use the
`service.beta.kubernetes.io/aws-load-balancer-security-groups` annotation.


### service.beta.kubernetes.io/load-balancer-source-ranges (deprecated) {#service-beta-kubernetes-io-load-balancer-source-ranges}

Example: `service.beta.kubernetes.io/load-balancer-source-ranges: "192.0.2.0/25"`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
uses this annotation. You should set `.spec.loadBalancerSourceRanges` for the Service instead.

### service.beta.kubernetes.io/aws-load-balancer-ssl-cert (beta) {#service-beta-kubernetes-io-aws-load-balancer-ssl-cert}

Example: `service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"`

Used on: Service

The official integration with AWS elastic load balancing configures TLS for a Service of
`type: LoadBalancer` based on this annotation. The value of the annotation is the
Amazon Resource Name (ARN) of the X.509 certificate that the load balancer listener should
use.

(The TLS protocol is based on an older technology that abbreviates to SSL.)

### service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy (beta) {#service-beta-kubernetes-io-aws-load-balancer-ssl-negotiation-policy}

Example: `service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01`

The official integration with AWS elastic load balancing configures TLS for a Service of
`type: LoadBalancer` based on this annotation.
The value of the annotation is the name\nof an AWS policy for negotiating TLS with a client peer.\n\n### service.beta.kubernetes.io\/aws-load-balancer-ssl-ports (beta) {#service-beta-kubernetes-io-aws-load-balancer-ssl-ports}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-ssl-ports: \"*\"`\n\nThe official integration with AWS elastic load balancing configures TLS for a Service of\n`type: LoadBalancer` based on this annotation. The value of the annotation is either `\"*\"`,\nwhich means that all the load balancer's ports should use TLS, or it is a comma separated\nlist of port numbers.\n\n### service.beta.kubernetes.io\/aws-load-balancer-subnets (beta) {#service-beta-kubernetes-io-aws-load-balancer-subnets}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-subnets: \"private-a,private-b\"`\n\nKubernetes' official integration with AWS uses this annotation to configure a\nload balancer and determine in which AWS availability zones to deploy the managed\nload balancing service. 
The value is either a comma separated list of subnet names, or a\ncomma separated list of subnet IDs.\n\n### service.beta.kubernetes.io\/aws-load-balancer-target-group-attributes (beta) {#service-beta-kubernetes-io-aws-load-balancer-target-group-attributes}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-target-group-attributes: \"stickiness.enabled=true,stickiness.type=source_ip\"`\n\nUsed on: Service\n\nThe [AWS load balancer controller](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/)\nuses this annotation.\nSee [annotations](https:\/\/kubernetes-sigs.github.io\/aws-load-balancer-controller\/latest\/guide\/service\/annotations\/)\nin the AWS load balancer controller documentation.\n\n### service.beta.kubernetes.io\/aws-load-balancer-target-node-labels (beta) {#service-beta-kubernetes-io-aws-target-node-labels}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-target-node-labels: \"kubernetes.io\/os=Linux,topology.kubernetes.io\/region=us-east-2\"`\n\nKubernetes' official integration with AWS uses this annotation to determine which\nnodes in your cluster should be considered as valid targets for the load balancer.\n\n### service.beta.kubernetes.io\/aws-load-balancer-type (beta) {#service-beta-kubernetes-io-aws-load-balancer-type}\n\nExample: `service.beta.kubernetes.io\/aws-load-balancer-type: external`\n\nKubernetes' official integrations with AWS use this annotation to determine\nwhether the AWS cloud provider integration should manage a Service of\n`type: LoadBalancer`.\n\nThere are two permitted values:\n\n`nlb`\n: the cloud controller manager configures a Network Load Balancer\n\n`external`\n: the cloud controller manager does not configure any load balancer\n\nIf you deploy a Service of `type: LoadBalancer` on AWS, and you don't set any\n`service.beta.kubernetes.io\/aws-load-balancer-type` annotation,\nthe AWS integration deploys a classic Elastic Load Balancer. 
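To hand a Service over to the AWS Load Balancer controller instead, the `external` value is set when the Service is created. A sketch (the target-type and scheme values are annotations of the AWS load balancer controller; the values shown are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service          # illustrative name
  annotations:
    # opt out of the in-tree integration; the AWS Load Balancer controller takes over
    service.beta.kubernetes.io/aws-load-balancer-type: external
    # AWS Load Balancer controller annotations (illustrative values)
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
```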
With no annotation present, this is the default behavior.

When you set this annotation to `external` on a Service of `type: LoadBalancer`,
and your cluster has a working deployment of the AWS Load Balancer controller,
then the AWS Load Balancer controller attempts to deploy a load balancer based
on the Service specification.


Do not modify or add the `service.beta.kubernetes.io/aws-load-balancer-type` annotation
on an existing Service object. See the AWS documentation on this topic for more
details.


### service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset (deprecated) {#service-beta-kubernetes-io-azure-load-balancer-disable-tcp-reset}

Example: `service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset: "false"`

Used on: Service

This annotation only works for Services backed by an Azure standard load balancer.
It specifies whether the load balancer
should disable or enable TCP reset on idle timeout. If enabled, it helps
applications to behave more predictably, to detect the termination of a connection,
remove expired connections and initiate new connections.
You can set the value to either `true` or `false`.

See [Load Balancer TCP Reset](https://learn.microsoft.com/en-gb/azure/load-balancer/load-balancer-tcp-reset) for more information.


This annotation is deprecated.


### pod-security.kubernetes.io/enforce

Type: Label

Example: `pod-security.kubernetes.io/enforce: "baseline"`

Used on: Namespace

Value **must** be one of `privileged`, `baseline`, or `restricted`, which correspond to
[Pod Security Standard](/docs/concepts/security/pod-security-standards) levels.
Specifically, the `enforce` label _prohibits_ the creation of any Pod in the labeled
Namespace which does not meet the requirements outlined in the indicated level.

See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission)
for more information.

### pod-security.kubernetes.io/enforce-version

Type: Label

Example: `pod-security.kubernetes.io/enforce-version: ""`

Used on: Namespace

Value **must** be `latest` or a valid Kubernetes version in the format `v<major>.<minor>`.
This determines the version of the
[Pod Security Standard](/docs/concepts/security/pod-security-standards)
policies to apply when validating a Pod.

See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission)
for more information.

### pod-security.kubernetes.io/audit

Type: Label

Example: `pod-security.kubernetes.io/audit: "baseline"`

Used on: Namespace

Value **must** be one of `privileged`, `baseline`, or `restricted`, which correspond to
[Pod Security Standard](/docs/concepts/security/pod-security-standards) levels.
Specifically, the `audit` label does not prevent the creation of a Pod in the labeled
Namespace which does not meet the requirements outlined in the indicated level,
but adds an audit annotation to the Pod.

See [Enforcing Pod Security at the Namespace
Level](\/docs\/concepts\/security\/pod-security-admission)\nfor more information.\n\n### pod-security.kubernetes.io\/audit-version\n\nType: Label\n\nExample: `pod-security.kubernetes.io\/audit-version: \"\"`\n\nUsed on: Namespace\n\nValue **must** be `latest` or a valid Kubernetes version in the format `v<major>.<minor>`.\nThis determines the version of the\n[Pod Security Standard](\/docs\/concepts\/security\/pod-security-standards)\npolicies to apply when validating a Pod.\n\nSee [Enforcing Pod Security at the Namespace Level](\/docs\/concepts\/security\/pod-security-admission)\nfor more information.\n\n### pod-security.kubernetes.io\/warn\n\nType: Label\n\nExample: `pod-security.kubernetes.io\/warn: \"baseline\"`\n\nUsed on: Namespace\n\nValue **must** be one of `privileged`, `baseline`, or `restricted` which correspond to\n[Pod Security Standard](\/docs\/concepts\/security\/pod-security-standards) levels.\nSpecifically, the `warn` label does not prevent the creation of a Pod in the labeled\nNamespace which does not meet the requirements outlined in the indicated level,\nbut returns a warning to the user after doing so.\nNote that warnings are also displayed when creating or updating objects that contain\nPod templates, such as Deployments, Jobs, StatefulSets, etc.\n\nSee [Enforcing Pod Security at the Namespace Level](\/docs\/concepts\/security\/pod-security-admission)\nfor more information.\n\n### pod-security.kubernetes.io\/warn-version\n\nType: Label\n\nExample: `pod-security.kubernetes.io\/warn-version: \"\"`\n\nUsed on: Namespace\n\nValue **must** be `latest` or a valid Kubernetes version in the format `v<major>.<minor>`.\nThis determines the version of the [Pod Security Standard](\/docs\/concepts\/security\/pod-security-standards)\npolicies to apply when validating a submitted Pod.\nNote that warnings are also displayed when creating or updating objects that contain\nPod templates, such as Deployments, Jobs, StatefulSets, etc.\n\nSee [Enforcing Pod 
Security at the Namespace Level](/docs/concepts/security/pod-security-admission)
for more information.

### rbac.authorization.kubernetes.io/autoupdate

Type: Annotation

Example: `rbac.authorization.kubernetes.io/autoupdate: "false"`

Used on: ClusterRole, ClusterRoleBinding, Role, RoleBinding

When this annotation is set to `"true"` on default RBAC objects created by the API server,
they are automatically updated at server start to add missing permissions and subjects
(extra permissions and subjects are left in place).
To prevent autoupdating a particular role or rolebinding, set this annotation to `"false"`.
If you create your own RBAC objects and set this annotation to `"false"`, `kubectl auth reconcile`
(which allows reconciling arbitrary RBAC objects in a manifest)
respects this annotation and does not automatically add missing permissions and subjects.

### kubernetes.io/psp (deprecated) {#kubernetes-io-psp}

Type: Annotation

Example: `kubernetes.io/psp: restricted`

Used on: Pod

This annotation was only relevant if you were using
[PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) objects.
Kubernetes v1.25 and later do not support the PodSecurityPolicy API.

When the PodSecurityPolicy admission controller admitted a Pod, the admission controller
modified the Pod to have this annotation.
The value of the annotation was the name of the PodSecurityPolicy that was used for validation.

### seccomp.security.alpha.kubernetes.io/pod (non-functional) {#seccomp-security-alpha-kubernetes-io-pod}

Type: Annotation

Used on: Pod

Kubernetes before v1.25 allowed you to configure seccomp behavior using this annotation.
See [Restrict a Container's Syscalls with seccomp](/docs/tutorials/security/seccomp/) to
learn the supported way to specify seccomp restrictions for a Pod.

### container.seccomp.security.alpha.kubernetes.io/[NAME] (non-functional)
{#container-seccomp-security-alpha-kubernetes-io}

Type: Annotation

Used on: Pod

Kubernetes before v1.25 allowed you to configure seccomp behavior using this annotation.
See [Restrict a Container's Syscalls with seccomp](/docs/tutorials/security/seccomp/) to
learn the supported way to specify seccomp restrictions for a Pod.

### snapshot.storage.kubernetes.io/allow-volume-mode-change

Type: Annotation

Example: `snapshot.storage.kubernetes.io/allow-volume-mode-change: "true"`

Used on: VolumeSnapshotContent

Value can either be `true` or `false`. This determines whether a user can modify
the mode of the source volume when a PersistentVolumeClaim is being created from
a VolumeSnapshot.

Refer to [Converting the volume mode of a Snapshot](/docs/concepts/storage/volume-snapshots/#convert-volume-mode)
and the [Kubernetes CSI Developer Documentation](https://kubernetes-csi.github.io/docs/)
for more information.

### scheduler.alpha.kubernetes.io/critical-pod (deprecated)

Type: Annotation

Example: `scheduler.alpha.kubernetes.io/critical-pod: ""`

Used on: Pod

This annotation lets the Kubernetes control plane know that a Pod is critical, so
that the descheduler does not remove it.


Starting in v1.16, this annotation was removed in favor of
[Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/).


## Annotations used for audit

<!-- sorted by annotation -->
- [`authorization.k8s.io/decision`](/docs/reference/labels-annotations-taints/audit-annotations/#authorization-k8s-io-decision)
- [`authorization.k8s.io/reason`](/docs/reference/labels-annotations-taints/audit-annotations/#authorization-k8s-io-reason)
- [`insecure-sha1.invalid-cert.kubernetes.io/$hostname`](/docs/reference/labels-annotations-taints/audit-annotations/#insecure-sha1-invalid-cert-kubernetes-io-hostname)
- [`missing-san.invalid-cert.kubernetes.io/$hostname`](/docs/reference/labels-annotations-taints/audit-annotations/#missing-san-invalid-cert-kubernetes-io-hostname)
- [`pod-security.kubernetes.io/audit-violations`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-audit-violations)
- [`pod-security.kubernetes.io/enforce-policy`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-enforce-policy)
- [`pod-security.kubernetes.io/exempt`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-exempt)
- [`validation.policy.admission.k8s.io/validation_failure`](/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation-failure)

See more details on [Audit Annotations](/docs/reference/labels-annotations-taints/audit-annotations/).

## kubeadm

### kubeadm.alpha.kubernetes.io/cri-socket

Type: Annotation

Example: `kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock`

Used on: Node

Annotation that kubeadm uses to preserve the CRI socket information given to kubeadm at
`init`/`join` time for later use.
kubeadm annotates the Node object with this information.
The annotation remains "alpha", since ideally this should be a field in KubeletConfiguration
instead.

### kubeadm.kubernetes.io/etcd.advertise-client-urls

Type: Annotation

Example: `kubeadm.kubernetes.io/etcd.advertise-client-urls: https://172.17.0.18:2379`

Used on: Pod

Annotation that kubeadm places on locally managed etcd Pods to keep track of
a list of URLs where etcd clients should connect.
This is used mainly for etcd cluster health check purposes.

### kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint

Type: Annotation

Example: `kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: https://172.17.0.18:6443`

Used on: Pod

Annotation that kubeadm places on locally managed `kube-apiserver` Pods to keep track
of the exposed advertise address/port endpoint for that API server instance.

### kubeadm.kubernetes.io/component-config.hash

Type: Annotation

Example: `kubeadm.kubernetes.io/component-config.hash: 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae`

Used on: ConfigMap

Annotation that kubeadm places on ConfigMaps that it manages for configuring components.
It contains a hash (SHA-256) used to determine if the user has applied settings different
from the kubeadm defaults for a particular component.

### node-role.kubernetes.io/control-plane

Type: Label

Used on: Node

A marker label to indicate that the node is used to run control plane components.
The kubeadm tool applies this label to the control plane nodes that it manages.
Other cluster management tools typically also set this label.

You can label control plane nodes with this label to make it easier to schedule Pods
only onto these nodes, or to avoid running Pods on the control plane.
If this label is set, the [EndpointSlice
controller](\/docs\/concepts\/services-networking\/topology-aware-routing\/#implementation-control-plane)\nignores that node while calculating Topology Aware Hints.\n\n### node-role.kubernetes.io\/control-plane {#node-role-kubernetes-io-control-plane-taint}\n\nType: Taint\n\nExample: `node-role.kubernetes.io\/control-plane:NoSchedule`\n\nUsed on: Node\n\nTaint that kubeadm applies on control plane nodes to restrict placing Pods and\nallow only specific pods to schedule on them.\n\nIf this Taint is applied, control plane nodes allow only critical workloads to\nbe scheduled onto them. You can manually remove this taint with the following\ncommand on a specific node.\n\n```shell\nkubectl taint nodes <node-name> node-role.kubernetes.io\/control-plane:NoSchedule-\n```\n\n### node-role.kubernetes.io\/master (deprecated) {#node-role-kubernetes-io-master-taint}\n\nType: Taint\n\nUsed on: Node\n\nExample: `node-role.kubernetes.io\/master:NoSchedule`\n\nTaint that kubeadm previously applied on control plane nodes to allow only critical\nworkloads to schedule on them. Replaced by the\n[`node-role.kubernetes.io\/control-plane`](#node-role-kubernetes-io-control-plane-taint)\ntaint. 
kubeadm no longer sets or uses this deprecated taint.

# Well-Known Labels, Annotations and Taints

Kubernetes reserves all labels, annotations and taints in the `kubernetes.io` and `k8s.io`
namespaces. This document serves both as a reference to the values and as a coordination
point for assigning values.

## Labels, annotations and taints used on API objects

### apf.kubernetes.io/autoupdate-spec

Type: Annotation

Example: `apf.kubernetes.io/autoupdate-spec: "true"`

Used on: [FlowSchema and PriorityLevelConfiguration objects](/docs/concepts/cluster-administration/flow-control/#defaults)

If this annotation is set to true on a FlowSchema or PriorityLevelConfiguration, the `spec`
for that object is managed by the kube-apiserver. If the API server does not recognize an
APF object, and you annotate it for automatic update, the API server deletes the entire
object. Otherwise, the API server does not manage the object spec. For more details, read
[Maintenance of the Mandatory and Suggested Configuration Objects](/docs/concepts/cluster-administration/flow-control/#maintenance-of-the-mandatory-and-suggested-configuration-objects).

### app.kubernetes.io/component

Type: Label

Example: `app.kubernetes.io/component: "database"`

Used on: All Objects (typically used on [workload resources](/docs/reference/kubernetes-api/workload-resources/)).

The component within the application architecture.

One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).

### app.kubernetes.io/created-by (deprecated)

Type: Label

Example: `app.kubernetes.io/created-by: "controller-manager"`

Used on: All Objects (typically used on [workload resources](/docs/reference/kubernetes-api/workload-resources/)).

The controller/user who created this resource.

Starting from v1.9, this label is deprecated.

### app.kubernetes.io/instance

Type: Label

Example: `app.kubernetes.io/instance: "mysql-abcxyz"`

Used on: All Objects (typically used on [workload resources](/docs/reference/kubernetes-api/workload-resources/)).

A unique name identifying the instance of an application. To assign a non-unique name, use
[app.kubernetes.io/name](#app-kubernetes-io-name).

One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).

### app.kubernetes.io/managed-by

Type: Label

Example: `app.kubernetes.io/managed-by: "helm"`

Used on: All Objects (typically used on [workload resources](/docs/reference/kubernetes-api/workload-resources/)).

The tool being used to manage the operation of an application.

One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).

### app.kubernetes.io/name

Type: Label

Example: `app.kubernetes.io/name: "mysql"`

Used on: All Objects (typically used on [workload resources](/docs/reference/kubernetes-api/workload-resources/)).

The name of the application.

One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).

### app.kubernetes.io/part-of

Type: Label

Example: `app.kubernetes.io/part-of: "wordpress"`

Used on: All Objects (typically used on [workload resources](/docs/reference/kubernetes-api/workload-resources/)).

The name of a higher-level application this object is part of.

One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).

### app.kubernetes.io/version

Type: Label

Example: `app.kubernetes.io/version: "5.7.21"`

Used on: All Objects (typically used on [workload resources](/docs/reference/kubernetes-api/workload-resources/)).

The current version of the application.

Common forms of values include:

- [semantic version](https://semver.org/spec/v1.0.0.html)
- the Git [revision hash](https://git-scm.com/book/en/v2/Git-Tools-Revision-Selection#_single_revisions) for the source code

One of the [recommended labels](/docs/concepts/overview/working-with-objects/common-labels/#labels).

### applyset.kubernetes.io/additional-namespaces (alpha) {#applyset-kubernetes-io-additional-namespaces}

Type: Annotation

Example: `applyset.kubernetes.io/additional-namespaces: "namespace1,namespace2"`

Used on: Objects being used as ApplySet parents.

Use of this annotation is Alpha. For Kubernetes version {{< skew currentVersion >}}, you can
use this annotation on Secrets, ConfigMaps, or custom resources if the CustomResourceDefinition
defining them has the `applyset.kubernetes.io/is-parent-type` label.

Part of the specification used to implement
[ApplySet-based pruning in kubectl](/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune).
This annotation is applied to the parent object used to track an ApplySet to extend the
scope of the ApplySet beyond the parent object's own namespace (if any). The value is a
comma-separated list of the names of namespaces other than the parent's namespace in which
objects are found.

### applyset.kubernetes.io/contains-group-kinds (alpha) {#applyset-kubernetes-io-contains-group-kinds}

Type: Annotation

Example: `applyset.kubernetes.io/contains-group-kinds: "certificates.cert-manager.io,configmaps,deployments.apps,secrets,services"`

Used on: Objects being used as ApplySet parents.

Use of this annotation is Alpha. For Kubernetes version {{< skew currentVersion >}}, you can
use this annotation on Secrets, ConfigMaps, or custom resources if the CustomResourceDefinition
defining them has the `applyset.kubernetes.io/is-parent-type` label.

Part of the specification used to implement
[ApplySet-based pruning in kubectl](/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune).
This annotation is applied to the parent object used to track an ApplySet to optimize
listing of ApplySet member objects. It is optional in the ApplySet specification, as tools
can perform discovery or use a different optimization. However, as of Kubernetes version
{{< skew currentVersion >}}, it is required by kubectl. When present, the value of this
annotation must be a comma-separated list of the group-kinds, in the fully-qualified name
format, i.e. `<resource>.<group>`.

### applyset.kubernetes.io/contains-group-resources (deprecated) {#applyset-kubernetes-io-contains-group-resources}

Type: Annotation

Example: `applyset.kubernetes.io/contains-group-resources: "certificates.cert-manager.io,configmaps,deployments.apps,secrets,services"`

Used on: Objects being used as ApplySet parents.

For Kubernetes version {{< skew currentVersion >}}, you can use this annotation on Secrets,
ConfigMaps, or custom resources if the CustomResourceDefinition defining them has the
`applyset.kubernetes.io/is-parent-type` label.

Part of the specification used to implement
[ApplySet-based pruning in kubectl](/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune).
This annotation is applied to the parent object used to track an ApplySet to optimize
listing of ApplySet member objects. It is optional in the ApplySet specification, as tools
can perform discovery or use a different optimization. However, in Kubernetes version
{{< skew currentVersion >}}, it is required by kubectl. When present, the value of this
annotation must be a comma-separated list of the group-kinds, in the fully-qualified name
format, i.e. `<resource>.<group>`.

This annotation is currently deprecated and replaced by
[`applyset.kubernetes.io/contains-group-kinds`](#applyset-kubernetes-io-contains-group-kinds);
support for this will be removed in ApplySet beta or GA.

### applyset.kubernetes.io/id (alpha) {#applyset-kubernetes-io-id}

Type: Label

Example: `applyset.kubernetes.io/id: "applyset-0eFHV8ySqp7XoShsGvyWFQD3s96yqwHmzc4e0HR1dsY-v1"`

Used on: Objects being used as ApplySet parents.

Use of this label is Alpha. For Kubernetes version {{< skew currentVersion >}}, you can use
this label on Secrets, ConfigMaps, or custom resources if the CustomResourceDefinition
defining them has the `applyset.kubernetes.io/is-parent-type` label.

Part of the specification used to implement
[ApplySet-based pruning in kubectl](/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune).
This label is what makes an object an ApplySet parent object. Its value is the unique ID of
the ApplySet, which is derived from the identity of the parent object itself. This ID
**must** be the base64 encoding (using the URL safe encoding of RFC4648) of the hash of the
group-kind-name-namespace of the object it is on, in the form:
`<base64(sha256(<name>.<namespace>.<kind>.<group>))>`.
There is no relation between the value of this label and object UID.

### applyset.kubernetes.io/is-parent-type (alpha) {#applyset-kubernetes-io-is-parent-type}

Type: Label

Example: `applyset.kubernetes.io/is-parent-type: "true"`

Used on: Custom Resource Definition (CRD)

Use of this label is Alpha. Part of the specification used to implement
[ApplySet-based pruning in kubectl](/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune).
You can set this label on a CustomResourceDefinition (CRD) to identify the custom resource
type it defines (not the CRD itself) as an allowed parent for an ApplySet. The only
permitted value for this label is `"true"`; if you want to mark a CRD as not being a valid
parent for ApplySets, omit this label.

### applyset.kubernetes.io/part-of (alpha) {#applyset-kubernetes-io-part-of}

Type: Label

Example: `applyset.kubernetes.io/part-of: "applyset-0eFHV8ySqp7XoShsGvyWFQD3s96yqwHmzc4e0HR1dsY-v1"`

Used on: All objects.

Use of this label is Alpha. Part of the specification used to implement
[ApplySet-based pruning in kubectl](/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune).
This label is what makes an object a member of an ApplySet. The value of the label **must**
match the value of the `applyset.kubernetes.io/id` label on the parent object.

### applyset.kubernetes.io/tooling (alpha) {#applyset-kubernetes-io-tooling}

Type: Annotation

Example: `applyset.kubernetes.io/tooling: "kubectl/v{{< skew currentVersion >}}"`

Used on: Objects being used as ApplySet parents.

Use of this annotation is Alpha. For Kubernetes version {{< skew currentVersion >}}, you can
use this annotation on Secrets, ConfigMaps, or custom resources if the CustomResourceDefinition
defining them has the `applyset.kubernetes.io/is-parent-type` label.

Part of the specification used to implement
[ApplySet-based pruning in kubectl](/docs/tasks/manage-kubernetes-objects/declarative-config/#alternative-kubectl-apply-f-directory-prune).
This annotation is applied to the parent object used to track an ApplySet to indicate which
tooling manages that ApplySet. Tooling should refuse to mutate ApplySets belonging to other
tools. The value must be in the format `<toolname>/<semver>`.

### apps.kubernetes.io/pod-index (beta) {#apps-kubernetes-io-pod-index}

Type: Label

Example: `apps.kubernetes.io/pod-index: "0"`

Used on: Pod

When a StatefulSet controller creates a Pod for the StatefulSet, it sets this label on that
Pod. The value of the label is the ordinal index of the pod being created.

See [Pod Index Label](/docs/concepts/workloads/controllers/statefulset/#pod-index-label)
in the StatefulSet topic for more details. Note the
[PodIndexLabel](/docs/reference/command-line-tools-reference/feature-gates/) feature gate
must be enabled for this label to be added to pods.

### resource.kubernetes.io/pod-claim-name

Type: Annotation

Example: `resource.kubernetes.io/pod-claim-name: "my-pod-claim"`

Used on: ResourceClaim

This annotation is assigned to generated ResourceClaims. Its value corresponds to the name
of the resource claim in the `.spec` of any Pod(s) for which the ResourceClaim was created.
This annotation is an internal implementation detail of
[dynamic resource allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/).
You should not need to read or modify the value of this annotation.

### cluster-autoscaler.kubernetes.io/safe-to-evict

Type: Annotation

Example: `cluster-autoscaler.kubernetes.io/safe-to-evict: "true"`

Used on: Pod

When this annotation is set to `"true"`, the cluster autoscaler is allowed to evict a Pod
even if other rules would normally prevent that. The cluster autoscaler never evicts Pods
that have this annotation explicitly set to `"false"`; you could set that on an important
Pod that you want to keep running. If this annotation is not set, then the cluster
autoscaler follows its Pod-level behavior.

### config.kubernetes.io/local-config

Type: Annotation

Example: `config.kubernetes.io/local-config: "true"`

Used on: All objects

This annotation is used in manifests to mark an object as local configuration that should
not be submitted to the Kubernetes API.

A value of `"true"` for this annotation declares that the object is only consumed by
client-side tooling and should not be submitted to the API server.

A value of `"false"` can be used to declare that the object should be submitted to the API
server even when it would otherwise be assumed to be local.

This annotation is part of the Kubernetes Resource Model (KRM) Functions Specification,
which is used by Kustomize and similar third-party tools. For example, Kustomize removes
objects with this annotation from its final build output.

### container.apparmor.security.beta.kubernetes.io/* (deprecated) {#container-apparmor-security-beta-kubernetes-io}

Type: Annotation

Example: `container.apparmor.security.beta.kubernetes.io/my-container: my-custom-profile`

Used on: Pods

This annotation allows you to specify the AppArmor security profile for a container within
a Kubernetes pod. As of Kubernetes v1.30, this should be set with the `appArmorProfile`
field instead. To learn more, see the [AppArmor](/docs/tutorials/security/apparmor/)
tutorial. The tutorial illustrates using AppArmor to restrict a container's abilities and
access.

The profile specified dictates the set of rules and restrictions that the containerized
process must adhere to. This helps enforce security policies and isolation for your
containers.

### internal.config.kubernetes.io/* (reserved prefix) {#internal-config-kubernetes-io-reserved-wildcard}

Type: Annotation

Used on: All objects

This prefix is reserved for internal use by tools that act as orchestrators in accordance
with the Kubernetes Resource Model (KRM) Functions Specification. Annotations with this
prefix are internal to the orchestration process and are not persisted to the manifests on
the filesystem. In other words, the orchestrator tool should set these annotations when
reading files from the local filesystem and remove them when writing the output of
functions back to the filesystem.

A KRM function **must not** modify annotations with this prefix, unless otherwise specified
for a given annotation. This enables orchestrator tools to add additional internal
annotations, without requiring changes to existing functions.

### internal.config.kubernetes.io/path

Type: Annotation

Example: `internal.config.kubernetes.io/path: "relative/file/path.yaml"`

Used on: All objects

This annotation records the slash-delimited, OS-agnostic, relative path to the manifest
file the object was loaded from. The path is relative to a fixed location on the
filesystem, determined by the orchestrator tool.

This annotation is part of the Kubernetes Resource Model (KRM) Functions Specification,
which is used by Kustomize and similar third-party tools.

A KRM Function **should not** modify this annotation on input objects unless it is
modifying the referenced files. A KRM Function **may** include this annotation on objects
it generates.

### internal.config.kubernetes.io/index

Type: Annotation

Example: `internal.config.kubernetes.io/index: "2"`

Used on: All objects

This annotation records the zero-indexed position of the YAML document that contains the
object within the manifest file the object was loaded from. Note that YAML documents are
separated by three dashes (`---`) and can each contain one object. When this annotation is
not specified, a value of 0 is implied.

This annotation is part of the Kubernetes Resource Model (KRM) Functions Specification,
which is used by Kustomize and similar third-party tools.

A KRM Function **should not** modify this annotation on input objects unless it is
modifying the referenced files. A KRM Function **may** include this annotation on objects
it generates.

### kube-scheduler-simulator.sigs.k8s.io/bind-result

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/bind-result: '{"DefaultBinder":"success"}'`

Used on: Pod

This annotation records the result of bind scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/filter-result

Type: Annotation

Example:

```yaml
kube-scheduler-simulator.sigs.k8s.io/filter-result: >-
  {"node-282x7":{"AzureDiskLimits":"passed","EBSLimits":"passed","GCEPDLimits":"passed","InterPodAffinity":"passed","NodeAffinity":"passed","NodeName":"passed","NodePorts":"passed","NodeResourcesFit":"passed","NodeUnschedulable":"passed","NodeVolumeLimits":"passed","PodTopologySpread":"passed","TaintToleration":"passed","VolumeBinding":"passed","VolumeRestrictions":"passed","VolumeZone":"passed"},
   "node-gp9t4":{"AzureDiskLimits":"passed","EBSLimits":"passed","GCEPDLimits":"passed","InterPodAffinity":"passed","NodeAffinity":"passed","NodeName":"passed","NodePorts":"passed","NodeResourcesFit":"passed","NodeUnschedulable":"passed","NodeVolumeLimits":"passed","PodTopologySpread":"passed","TaintToleration":"passed","VolumeBinding":"passed","VolumeRestrictions":"passed","VolumeZone":"passed"}}
```

Used on: Pod

This annotation records the result of filter scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/finalscore-result

Type: Annotation

Example:

```yaml
kube-scheduler-simulator.sigs.k8s.io/finalscore-result: >-
  {"node-282x7":{"ImageLocality":"0","InterPodAffinity":"0","NodeAffinity":"0","NodeNumber":"0","NodeResourcesBalancedAllocation":"76","NodeResourcesFit":"73","PodTopologySpread":"200","TaintToleration":"300","VolumeBinding":"0"},
   "node-gp9t4":{"ImageLocality":"0","InterPodAffinity":"0","NodeAffinity":"0","NodeNumber":"0","NodeResourcesBalancedAllocation":"76","NodeResourcesFit":"73","PodTopologySpread":"200","TaintToleration":"300","VolumeBinding":"0"}}
```

Used on: Pod

This annotation records the final scores that the scheduler calculates from the scores
returned by score scheduler plugins (used by https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/permit-result

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/permit-result: '{"CustomPermitPlugin":"success"}'`

Used on: Pod

This annotation records the result of permit scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/permit-result-timeout

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/permit-result-timeout: '{"CustomPermitPlugin":"10s"}'`

Used on: Pod

This annotation records the timeouts returned from permit scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/postfilter-result

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/postfilter-result: '{"DefaultPreemption":"success"}'`

Used on: Pod

This annotation records the result of postfilter scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/prebind-result

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/prebind-result: '{"VolumeBinding":"success"}'`

Used on: Pod

This annotation records the result of prebind scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/prefilter-result

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/prefilter-result: '{"NodeAffinity":["node-a"]}'`

Used on: Pod

This annotation records the PreFilter result of prefilter scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/prefilter-result-status

Type: Annotation

Example:

```yaml
kube-scheduler-simulator.sigs.k8s.io/prefilter-result-status: >-
  {"InterPodAffinity":"success","NodeAffinity":"success","NodePorts":"success","NodeResourcesFit":"success","PodTopologySpread":"success","VolumeBinding":"success","VolumeRestrictions":"success"}
```

Used on: Pod

This annotation records the result of prefilter scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/prescore-result

Type: Annotation

Example:

```yaml
kube-scheduler-simulator.sigs.k8s.io/prescore-result: >-
  {"InterPodAffinity":"success","NodeAffinity":"success","NodeNumber":"success","PodTopologySpread":"success","TaintToleration":"success"}
```

Used on: Pod

This annotation records the result of prescore scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/reserve-result

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/reserve-result: '{"VolumeBinding":"success"}'`

Used on: Pod

This annotation records the result of reserve scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/result-history

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/result-history: …`

Used on: Pod

This annotation records all the past scheduling results from scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/score-result

Type: Annotation

Example:

```yaml
kube-scheduler-simulator.sigs.k8s.io/score-result: >-
  {"node-282x7":{"ImageLocality":"0","InterPodAffinity":"0","NodeAffinity":"0","NodeNumber":"0","NodeResourcesBalancedAllocation":"76","NodeResourcesFit":"73","PodTopologySpread":"0","TaintToleration":"0","VolumeBinding":"0"},
   "node-gp9t4":{"ImageLocality":"0","InterPodAffinity":"0","NodeAffinity":"0","NodeNumber":"0","NodeResourcesBalancedAllocation":"76","NodeResourcesFit":"73","PodTopologySpread":"0","TaintToleration":"0","VolumeBinding":"0"}}
```

Used on: Pod

This annotation records the result of score scheduler plugins (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kube-scheduler-simulator.sigs.k8s.io/selected-node

Type: Annotation

Example: `kube-scheduler-simulator.sigs.k8s.io/selected-node: node-282x7`

Used on: Pod

This annotation records the node that is selected by the scheduling cycle (used by
https://sigs.k8s.io/kube-scheduler-simulator).

### kubernetes.io/arch

Type: Label

Example: `kubernetes.io/arch: "amd64"`

Used on: Node

The Kubelet populates this with `runtime.GOARCH` as defined by Go. This can be handy if you
are mixing ARM and x86 nodes.

### kubernetes.io/os

Type: Label

Example: `kubernetes.io/os: "linux"`

Used on: Node, Pod

For nodes, the kubelet populates this with `runtime.GOOS` as defined by Go. This can be
handy if you are mixing operating systems in your cluster (for example: mixing Linux and
Windows nodes).

You can also set this label on a Pod. Kubernetes allows you to set any value for this
label; if you use this label, you should nevertheless set it to the Go `runtime.GOOS`
string for the operating system that this Pod actually works with.

When the `kubernetes.io/os` label value for a Pod does not match the label value on a Node,
the kubelet on the node will not admit the Pod. However, this is not taken into account by
the kube-scheduler. Alternatively, the kubelet refuses to run a Pod where you have
specified a Pod OS, if this isn't the same as the operating system for the node where that
kubelet is running. See [Pod OS](/docs/concepts/workloads/pods/#pod-os) for more details.

### kubernetes.io/metadata.name

Type: Label

Example: `kubernetes.io/metadata.name: "mynamespace"`

Used on: Namespaces

The Kubernetes API server (part of the control plane) sets this label on all namespaces.
The label value is set to the name of the namespace. You can't change this label's value.

This is useful if you want to target a specific namespace with a label selector.

### kubernetes.io/limit-ranger

Type: Annotation

Example: `kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu, memory request for container nginx; cpu, memory limit for container nginx'`

Used on: Pod

Kubernetes by default doesn't provide any resource limit; that means unless you explicitly
define limits, your container can consume unlimited CPU and memory. You can define a
default request or default limit for pods. You do this by creating a LimitRange in the
relevant namespace. Pods deployed after you define a LimitRange will have these limits
applied to them. The annotation `kubernetes.io/limit-ranger` records that resource defaults
were specified for the Pod, and they were applied successfully. For more details, read
about [LimitRanges](/docs/concepts/policy/limit-range).

### kubernetes.io/config.hash

Type: Annotation

Example: `kubernetes.io/config.hash: "df7cc47f8477b6b1226d7d23a904867b"`

Used on: Pod

When the kubelet creates a static Pod based on a given manifest, it attaches this
annotation to the static Pod. The value of the annotation is the UID of the Pod. Note that
the kubelet also sets the `.spec.nodeName` to the current node name as if the Pod was
scheduled to the node.

### kubernetes.io/config.mirror

Type: Annotation

Example: `kubernetes.io/config.mirror: "df7cc47f8477b6b1226d7d23a904867b"`

Used on: Pod

For a static Pod created by the kubelet on a node, a mirror Pod is created on the API
server. The kubelet adds an annotation to indicate that this Pod is actually a mirror Pod.
The annotation value is copied from the
[`kubernetes.io/config.hash`](#kubernetes-io-config-hash) annotation, which is the UID of
the Pod.

When updating a Pod with this annotation set, the annotation cannot be changed or removed.
If a Pod doesn't have this annotation, it cannot be added during a Pod update.

### kubernetes.io/config.source

Type: Annotation

Example: `kubernetes.io/config.source: "file"`

Used on: Pod

This annotation is added by the kubelet to indicate where the Pod comes from. For static
Pods, the annotation value could be one of `file` or `http` depending on where the Pod
manifest is located. For a Pod created on the API server and then scheduled to the current
node, the annotation value is `api`.

### kubernetes.io/config.seen

Type: Annotation

Example: `kubernetes.io/config.seen: "2023-10-27T04:04:56.011314488Z"`

Used on: Pod

When the kubelet sees a Pod for the first time, it may add this annotation to the Pod with
a value of the current timestamp in the RFC3339 format.

### addonmanager.kubernetes.io/mode

Type: Label

Example: `addonmanager.kubernetes.io/mode: "Reconcile"`

Used on: All objects

To specify how an add-on should be managed, you can use the `addonmanager.kubernetes.io/mode`
label. This label can have one of three values: `Reconcile`, `EnsureExists`, or `Ignore`.

- `Reconcile`: Addon resources will be periodically reconciled with the expected state. If
  there are any differences, the add-on manager will recreate, reconfigure or delete the
  resources as needed. This is the default mode if no label is specified.
- `EnsureExists`: Addon resources will be checked for existence only but will not be
  modified after creation. The add-on manager will create or re-create the resources when
  there is no instance of the resource with that name.
- `Ignore`: Addon resources will be ignored. This mode is useful for add-ons that are not
  compatible with the add-on manager or that are managed by another controller.

For more details, see
[Addon-manager](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md).

### beta.kubernetes.io/arch (deprecated)

Type: Label

This label has been deprecated. Please use [`kubernetes.io/arch`](#kubernetes-io-arch)
instead.

### beta.kubernetes.io/os (deprecated)

Type: Label

This label has been deprecated. Please use [`kubernetes.io/os`](#kubernetes-io-os) instead.

### kube-aggregator.kubernetes.io/automanaged {#kube-aggregator-kubernetesio-automanaged}

Type: Label

Example: `kube-aggregator.kubernetes.io/automanaged: "onstart"`

Used on: APIService

The `kube-apiserver` sets this label on any APIService object that the API server has
created automatically. The label marks how the control plane should manage that APIService.
You should not add, modify, or remove this label by yourself.

Automanaged APIService objects are deleted by kube-apiserver when it has no built-in or
custom resource API corresponding to the API group/version of the APIService.

There are two possible values:

- `onstart`: The APIService should be reconciled when an API server starts up, but not
  otherwise.
- `true`: The API server should reconcile this APIService continuously.

### service.alpha.kubernetes.io/tolerate-unready-endpoints (deprecated)

Type: Annotation

Used on: StatefulSet

This annotation on a Service denotes if the Endpoints controller should go ahead and create
Endpoints for unready Pods. Endpoints of these Services retain their DNS records and
continue receiving traffic for the Service from the moment the kubelet starts all
containers in the pod and marks it _Running_, until the kubelet stops all containers and
deletes the pod from the API server.

### autoscaling.alpha.kubernetes.io/behavior (deprecated) {#autoscaling-alpha-kubernetes-io-behavior}

Type: Annotation

Used on: HorizontalPodAutoscaler

This annotation was used to configure the scaling behavior for a HorizontalPodAutoscaler
(HPA) in earlier Kubernetes versions. It allowed you to specify how the HPA should scale
pods up or down, including setting stabilization windows and scaling policies. Setting this
annotation has no effect in any supported release of Kubernetes.

### kubernetes.io/hostname {#kubernetesiohostname}

Type: Label

Example: `kubernetes.io/hostname: "ip-172-20-114-199.ec2.internal"`

Used on: Node

The Kubelet populates this label with the hostname of the node. Note that the hostname can
be changed from the "actual" hostname by passing the `--hostname-override` flag to the
`kubelet`.

This label is also used as part of the topology hierarchy. See
[topology.kubernetes.io/zone](#topologykubernetesiozone) for more information.

### kubernetes.io/change-cause {#change-cause}

Type: Annotation

Example: `kubernetes.io/change-cause: "kubectl edit --record deployment foo"`

Used on: All Objects

This annotation is a best guess at why something was changed.

It is populated when adding `--record` to a `kubectl` command that may change an object.

### kubernetes.io/description {#description}

Type: Annotation

Example: `kubernetes.io/description: "Description of K8s object."`

Used on: All Objects

This annotation is used for describing specific behaviour of a given object.

### kubernetes.io/enforce-mountable-secrets {#enforce-mountable-secrets}

Type: Annotation

Example: `kubernetes.io/enforce-mountable-secrets: "true"`

Used on: ServiceAccount

The value for this annotation must be **true** to take effect. When you set this annotation
to "true", Kubernetes enforces the following rules for Pods running as this ServiceAccount:

1. Secrets mounted as volumes must be listed in the ServiceAccount's `secrets` field.
1. Secrets referenced in `envFrom` for containers (including sidecar containers and init
   containers) must also be listed in the ServiceAccount's secrets field. If any container
   in a Pod references a Secret not listed in the ServiceAccount's `secrets` field (and
   even if the reference is marked as `optional`), then the Pod will fail to start, and an
   error indicating the non-compliant secret reference will be generated.
1. Secrets referenced in a Pod's `imagePullSecrets` must be present in the ServiceAccount's
   `imagePullSecrets` field; otherwise the Pod will fail to start, and an error indicating
   the non-compliant image pull secret reference will be generated.

When you create or update a Pod, these rules are checked. If a Pod doesn't follow them, it
won't start and you'll see an error message. If a Pod is already running and you change the
`kubernetes.io/enforce-mountable-secrets` annotation to true, or you edit the associated
ServiceAccount to remove the reference to a Secret that the Pod is already using, the Pod
continues to run.

### node.kubernetes.io/exclude-from-external-load-balancers

Type: Label

Example: `node.kubernetes.io/exclude-from-external-load-balancers`

Used on: Node

You can add labels to particular worker nodes to exclude them from the list of backend
servers used by external load balancers. The following command can be used to exclude a
worker node from the list of backend servers in a backend set:

```shell
kubectl label nodes <node-name> node.kubernetes.io/exclude-from-external-load-balancers=true
```

### controller.kubernetes.io/pod-deletion-cost {#pod-deletion-cost}

Type: Annotation

Example: `controller.kubernetes.io/pod-deletion-cost: "10"`

Used on: Pod

This annotation is used to set
[Pod Deletion Cost](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost),
which allows users to influence ReplicaSet downscaling order. The annotation value parses
into an `int32` type.

### cluster-autoscaler.kubernetes.io/enable-ds-eviction

Type: Annotation

Example: `cluster-autoscaler.kubernetes.io/enable-ds-eviction: "true"`

Used on: Pod

This annotation controls whether a DaemonSet pod should be evicted by a ClusterAutoscaler.
This annotation needs to be specified on DaemonSet pods in a DaemonSet manifest. When this
annotation is set to `"true"`, the ClusterAutoscaler is allowed to evict a DaemonSet Pod,
even if other rules would normally prevent that. To disallow the ClusterAutoscaler from
evicting DaemonSet pods, you can set this annotation to `"false"` for important DaemonSet
pods. If this annotation is not set, then the ClusterAutoscaler follows its overall
behavior (i.e. evict the DaemonSets based on its configuration).

This annotation only impacts DaemonSet Pods.

### kubernetes.io/ingress-bandwidth

Type: Annotation

Example: `kubernetes.io/ingress-bandwidth: 10M`

Used on: Pod

You can apply quality-of-service traffic shaping to a pod and effectively limit its
available bandwidth. Ingress traffic to a Pod is handled by shaping queued packets to
effectively handle data. To limit the bandwidth on a Pod, write an object definition JSON
file and specify the data traffic speed using the `kubernetes.io/ingress-bandwidth`
annotation. The unit used for specifying ingress rate is bits per second, as a
[Quantity](/docs/reference/kubernetes-api/common-definitions/quantity/). For example, `10M`
means 10 megabits per second.

Ingress traffic shaping annotation is an experimental feature. If you want to enable
traffic shaping support, you must add the `bandwidth` plugin to your CNI configuration file
(default `/etc/cni/net.d`) and ensure that the binary is included in your CNI bin dir
(default `/opt/cni/bin`).

### kubernetes.io/egress-bandwidth

Type: Annotation

Example: `kubernetes.io/egress-bandwidth: 10M`

Used on: Pod

Egress traffic from a Pod is handled by policing, which simply drops packets in excess of
the configured rate. The limits you place on a Pod do not affect the bandwidth of other
Pods. To limit the bandwidth on a Pod, write an object definition JSON file and specify the
data traffic speed using the `kubernetes.io/egress-bandwidth` annotation.
bandwidth  annotation  The unit used for specifying egress rate is bits per second  as a  Quantity   docs reference kubernetes api common definitions quantity    For example   10M  means 10 megabits per second    Egress traffic shaping annotation is an experimental feature  If you want to enable traffic shaping support  you must add the  bandwidth  plugin to your CNI configuration file  default   etc cni net d   and ensure that the binary is included in your CNI bin dir  default   opt cni bin          beta kubernetes io instance type  deprecated   Type  Label   Starting in v1 17  this label is deprecated in favor of  node kubernetes io instance type   nodekubernetesioinstance type         node kubernetes io instance type   nodekubernetesioinstance type   Type  Label  Example   node kubernetes io instance type   m3 medium    Used on  Node  The Kubelet populates this with the instance type as defined by the cloud provider  This will be set only if you are using a cloud provider  This setting is handy if you want to target certain workloads to certain instance types  but typically you want to rely on the Kubernetes scheduler to perform resource based scheduling  You should aim to schedule based on properties rather than on instance types  for example  require a GPU  instead of requiring a  g2 2xlarge         failure domain beta kubernetes io region  deprecated    failure domainbetakubernetesioregion   Type  Label   Starting in v1 17  this label is deprecated in favor of  topology kubernetes io region   topologykubernetesioregion         failure domain beta kubernetes io zone  deprecated    failure domainbetakubernetesiozone   Type  Label   Starting in v1 17  this label is deprecated in favor of  topology kubernetes io zone   topologykubernetesiozone         pv kubernetes io bind completed   pv kubernetesiobind completed   Type  Annotation  Example   pv kubernetes io bind completed   yes    Used on  PersistentVolumeClaim  When this annotation is set on a 
PersistentVolumeClaim (PVC), it indicates that the lifecycle of the PVC has passed through initial binding setup. When present, that information changes how the control plane interprets the state of PVC objects. The value of this annotation does not matter to Kubernetes.

### pv.kubernetes.io/bound-by-controller

Type: Annotation

Example: `pv.kubernetes.io/bound-by-controller: "yes"`

Used on: PersistentVolume, PersistentVolumeClaim

If this annotation is set on a PersistentVolume or PersistentVolumeClaim, it indicates that a storage binding (PersistentVolume → PersistentVolumeClaim, or PersistentVolumeClaim → PersistentVolume) was installed by the controller. If the annotation isn't set and there is a storage binding in place, the absence of this annotation means that the binding was done manually. The value of this annotation does not matter.

### pv.kubernetes.io/provisioned-by

Type: Annotation

Example: `pv.kubernetes.io/provisioned-by: kubernetes.io/rbd`

Used on: PersistentVolume

This annotation is added to a PersistentVolume (PV) that has been dynamically provisioned by Kubernetes. Its value is the name of the volume plugin that created the volume. It serves both users (to show where a PV comes from) and Kubernetes (to recognize dynamically provisioned PVs in its decisions).

### pv.kubernetes.io/migrated-to

Type: Annotation

Example: `pv.kubernetes.io/migrated-to: pd.csi.storage.gke.io`

Used on: PersistentVolume, PersistentVolumeClaim

This annotation is added to a PersistentVolume (PV) or PersistentVolumeClaim (PVC) that is supposed to be dynamically provisioned or deleted by its corresponding CSI driver through the `CSIMigration` feature gate. When this annotation is set, the Kubernetes components will "stand down" and the external-provisioner will act on the objects.

### statefulset.kubernetes.io/pod-name

Type: Label

Example:
statefulset kubernetes io pod name   mystatefulset 7    Used on  Pod  When a StatefulSet controller creates a Pod for the StatefulSet  the control plane sets this label on that Pod  The value of the label is the name of the Pod being created   See  Pod Name Label   docs concepts workloads controllers statefulset  pod name label  in the StatefulSet topic for more details       scheduler alpha kubernetes io node selector   schedulerkubernetesnode selector   Type  Annotation  Example   scheduler alpha kubernetes io node selector   name of node selector    Used on  Namespace  The  PodNodeSelector   docs reference access authn authz admission controllers  podnodeselector  uses this annotation key to assign node selectors to pods in namespaces       topology kubernetes io region   topologykubernetesioregion   Type  Label  Example   topology kubernetes io region   us east 1    Used on  Node  PersistentVolume  See  topology kubernetes io zone   topologykubernetesiozone        topology kubernetes io zone   topologykubernetesiozone   Type  Label  Example   topology kubernetes io zone   us east 1c    Used on  Node  PersistentVolume    On Node    The  kubelet  or the external  cloud controller manager  populates this with the information from the cloud provider  This will be set only if you are using a cloud provider  However  you can consider setting this on nodes if it makes sense in your topology     On PersistentVolume    topology aware volume provisioners will automatically set node affinity constraints on a  PersistentVolume    A zone represents a logical failure domain  It is common for Kubernetes clusters to span multiple zones for increased availability  While the exact definition of a zone is left to infrastructure implementations  common properties of a zone include very low network latency within a zone  no cost network traffic within a zone  and failure independence from other zones  For example  nodes within a zone might share a network switch  but nodes in 
different zones should not   A region represents a larger domain  made up of one or more zones  It is uncommon for Kubernetes clusters to span multiple regions  While the exact definition of a zone or region is left to infrastructure implementations  common properties of a region include higher network latency between them than within them  non zero cost for network traffic between them  and failure independence from other zones or regions  For example  nodes within a region might share power infrastructure  e g  a UPS or generator   but nodes in different regions typically would not   Kubernetes makes a few assumptions about the structure of zones and regions   1  regions and zones are hierarchical  zones are strict subsets of regions and    no zone can be in 2 regions 2  zone names are unique across regions  for example region  africa east 1  might be comprised    of zones  africa east 1a  and  africa east 1b   It should be safe to assume that topology labels do not change  Even though labels are strictly mutable  consumers of them can assume that a given node is not going to be moved between zones without being destroyed and recreated   Kubernetes can use this information in various ways  For example  the scheduler automatically tries to spread the Pods in a ReplicaSet across nodes in a single zone cluster  to reduce the impact of node failures  see  kubernetes io hostname   kubernetesiohostname    With multiple zone clusters  this spreading behavior also applies to zones  to reduce the impact of zone failures   This is achieved via  SelectorSpreadPriority     SelectorSpreadPriority  is a best effort placement  If the zones in your cluster are heterogeneous  for example  different numbers of nodes  different types of nodes  or different pod resource requirements   this placement might prevent equal spreading of your Pods across zones  If desired  you can use homogeneous zones  same number and types of nodes  to reduce the probability of unequal spreading   The 
scheduler  through the  VolumeZonePredicate  predicate  also will ensure that Pods  that claim a given volume  are only placed into the same zone as that volume  Volumes cannot be attached across zones   If  PersistentVolumeLabel  does not support automatic labeling of your PersistentVolumes  you should consider adding the labels manually  or adding support for  PersistentVolumeLabel    With  PersistentVolumeLabel   the scheduler prevents Pods from mounting volumes in a different zone  If your infrastructure doesn t have this constraint  you don t need to add the zone labels to the volumes at all       volume beta kubernetes io storage provisioner  deprecated   Type  Annotation  Example   volume beta kubernetes io storage provisioner   k8s io minikube hostpath    Used on  PersistentVolumeClaim  This annotation has been deprecated since v1 23  See  volume kubernetes io storage provisioner   volume kubernetes io storage provisioner        volume beta kubernetes io storage class  deprecated   Type  Annotation  Example   volume beta kubernetes io storage class   example class    Used on  PersistentVolume  PersistentVolumeClaim  This annotation can be used for PersistentVolume PV  or PersistentVolumeClaim PVC  to specify the name of  StorageClass   docs concepts storage storage classes    When both the  storageClassName  attribute and the  volume beta kubernetes io storage class  annotation are specified  the annotation  volume beta kubernetes io storage class  takes precedence over the  storageClassName  attribute   This annotation has been deprecated  Instead  set the   storageClassName  field   docs concepts storage persistent volumes  class  for the PersistentVolumeClaim or PersistentVolume       volume beta kubernetes io mount options  deprecated    mount options   Type  Annotation  Example    volume beta kubernetes io mount options   ro soft    Used on  PersistentVolume  A Kubernetes administrator can specify additional  mount options   docs concepts storage 
persistent volumes mount options, for when a PersistentVolume is mounted on a node.

### volume.kubernetes.io/storage-provisioner

Type: Annotation

Used on: PersistentVolumeClaim

This annotation is added to a PVC that is supposed to be dynamically provisioned. Its value is the name of the volume plugin that is supposed to provision a volume for this PVC.

### volume.kubernetes.io/selected-node

Type: Annotation

Used on: PersistentVolumeClaim

This annotation is added to a PVC that is triggered by the scheduler to be dynamically provisioned. Its value is the name of the selected node.

### volumes.kubernetes.io/controller-managed-attach-detach

Type: Annotation

Used on: Node

If a node has the annotation `volumes.kubernetes.io/controller-managed-attach-detach`, its storage attach and detach operations are being managed by the volume attach/detach controller. The value of the annotation isn't important.

### node.kubernetes.io/windows-build

Type: Label

Example: `node.kubernetes.io/windows-build: 10.0.17763`

Used on: Node

When the kubelet is running on Microsoft Windows, it automatically labels its Node to record the version of Windows Server in use. The label's value is in the format "MajorVersion.MinorVersion.BuildNumber".

### storage.alpha.kubernetes.io/migrated-plugins

Type: Annotation

Example: `storage.alpha.kubernetes.io/migrated-plugins: "kubernetes.io/cinder"`

Used on: CSINode (an extension API)

This annotation is automatically added for the CSINode object that maps to a node that installs a CSIDriver. It shows the in-tree plugin name of the migrated plugin. Its value depends on your cluster's in-tree cloud-provider storage type. For example, if the in-tree cloud-provider storage type is `CSIMigrationvSphere`, the CSINode instance for the node should be updated with `storage.alpha.kubernetes.io/migrated-plugins:`
kubernetes io vsphere volume        service kubernetes io headless   servicekubernetesioheadless   Type  Label  Example   service kubernetes io headless       Used on  Endpoints  The control plane adds this label to an Endpoints object when the owning Service is headless  To learn more  read  Headless Services   docs concepts services networking service  headless services        service kubernetes io topology aware hints  deprecated    servicekubernetesiotopology aware hints   Example   service kubernetes io topology aware hints   Auto    Used on  Service  This annotation was used for enabling  topology aware hints  on Services  Topology aware hints have since been renamed  the concept is now called  topology aware routing   docs concepts services networking topology aware routing    Setting the annotation to  Auto   on a Service  configured the Kubernetes control plane to add topology hints on EndpointSlices associated with that Service  You can also explicitly set the annotation to  Disabled    If you are running a version of Kubernetes older than   check the documentation for that Kubernetes version to see how topology aware routing works in that release   There are no other valid values for this annotation  If you don t want topology aware hints for a Service  don t add this annotation       service kubernetes io topology mode  Type  Annotation  Example   service kubernetes io topology mode  Auto   Used on  Service  This annotation provides a way to define how Services handle network topology  for example  you can configure a Service so that Kubernetes prefers keeping traffic between a client and server within a single topology zone  In some cases this can help reduce costs or improve network performance   See  Topology Aware Routing   docs concepts services networking topology aware routing   for more details       kubernetes io service name   kubernetesioservice name   Type  Label  Example   kubernetes io service name   my website    Used on  EndpointSlice  
Kubernetes associates EndpointSlices with Services using this label.

This label records the name of the Service that the EndpointSlice is backing. All EndpointSlices should have this label set to the name of their associated Service.

### kubernetes.io/service-account.name

Type: Annotation

Example: `kubernetes.io/service-account.name: "sa-name"`

Used on: Secret

This annotation records the name of the ServiceAccount that the token (stored in the Secret of type `kubernetes.io/service-account-token`) represents.

### kubernetes.io/service-account.uid

Type: Annotation

Example: `kubernetes.io/service-account.uid: da68f9c6-9d26-11e7-b84e-002dc52800da`

Used on: Secret

This annotation records the unique ID of the ServiceAccount that the token (stored in the Secret of type `kubernetes.io/service-account-token`) represents.

### kubernetes.io/legacy-token-last-used

Type: Label

Example: `kubernetes.io/legacy-token-last-used: 2022-10-24`

Used on: Secret

The control plane only adds this label to Secrets that have the type `kubernetes.io/service-account-token`. The value of this label records the date (ISO 8601 format, UTC time zone) when the control plane last saw a request where the client authenticated using the service account token.

If a legacy token was last used before the cluster gained the feature (added in Kubernetes v1.26), then the label isn't set.

### kubernetes.io/legacy-token-invalid-since

Type: Label

Example: `kubernetes.io/legacy-token-invalid-since: 2023-10-27`

Used on: Secret

The control plane automatically adds this label to auto-generated Secrets that have the type `kubernetes.io/service-account-token`. This label marks the Secret-based token as invalid for authentication. The value of this label records the date (ISO 8601 format, UTC time zone) when the control plane detects that the auto-generated Secret has not been used for a specified duration
(defaults to one year).

### endpointslice.kubernetes.io/managed-by

Type: Label

Example: `endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io`

Used on: EndpointSlices

This label is used to indicate the controller or entity that manages the EndpointSlice. It aims to enable different EndpointSlice objects to be managed by different controllers or entities within the same cluster.

### endpointslice.kubernetes.io/skip-mirror

Type: Label

Example: `endpointslice.kubernetes.io/skip-mirror: "true"`

Used on: Endpoints

This label can be set to `"true"` on an Endpoints resource to indicate that the EndpointSliceMirroring controller should not mirror this resource with EndpointSlices.

### service.kubernetes.io/service-proxy-name

Type: Label

Example: `service.kubernetes.io/service-proxy-name: "foo-bar"`

Used on: Service

The kube-proxy checks for this label; a Service carrying it is skipped by kube-proxy, which delegates service control to the named custom proxy.

### experimental.windows.kubernetes.io/isolation-type (deprecated)

Type: Annotation

Example: `experimental.windows.kubernetes.io/isolation-type: "hyperv"`

Used on: Pod

This annotation is used to run Windows containers with Hyper-V isolation.

Starting from v1.20, this annotation is deprecated. Experimental Hyper-V support was removed in 1.21.

### ingressclass.kubernetes.io/is-default-class

Type: Annotation

Example: `ingressclass.kubernetes.io/is-default-class: "true"`

Used on: IngressClass

When an IngressClass resource has this annotation set to `"true"`, a new Ingress resource without a class specified will be assigned this default class.

### nginx.ingress.kubernetes.io/configuration-snippet

Type: Annotation

Example:

```
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Request-Id: $req_id";
  more_set_headers "Example: 42";
```

Used on: Ingress

You can use this annotation to set extra configuration on an Ingress that uses the [NGINX Ingress Controller](https://github.com/kubernetes/ingress-nginx). The `configuration-snippet` annotation is ignored by default since version 1.9.0 of the ingress controller; the NGINX ingress controller setting `allow-snippet-annotations` has to be explicitly enabled to use this annotation. Enabling the annotation can be dangerous in a multi-tenant cluster, as it can lead to people with otherwise limited permissions being able to retrieve all Secrets in the cluster.

### kubernetes.io/ingress.class (deprecated)

Type: Annotation

Used on: Ingress

Starting in v1.18, this annotation is deprecated in favor of `spec.ingressClassName`.

### storageclass.kubernetes.io/is-default-class

Type: Annotation

Example: `storageclass.kubernetes.io/is-default-class: "true"`

Used on: StorageClass

When a single StorageClass resource has this annotation set to `"true"`, a new PersistentVolumeClaim resource without a class specified will be assigned this default class.

### alpha.kubernetes.io/provided-node-ip (alpha)

Type: Annotation

Example: `alpha.kubernetes.io/provided-node-ip: "10.0.0.1"`

Used on: Node

The kubelet can set this annotation on a Node to denote its configured IPv4 and/or IPv6 address.

When the kubelet is started with the `--cloud-provider` flag set to any value (this includes both external and legacy in-tree cloud providers), it sets this annotation on the Node to denote an IP address set from the command line flag (`--node-ip`). This IP is verified with the cloud provider as valid by the cloud-controller-manager.

### batch.kubernetes.io/job-completion-index

Type: Annotation, Label

Example: `batch.kubernetes.io/job-completion-index: "3"`

Used on: Pod

The Job controller in the kube-controller-manager sets this as a label and annotation for Pods created with Indexed completion mode, docs concepts workloads
controllers job  completion mode    Note the  PodIndexLabel   docs reference command line tools reference feature gates   feature gate must be enabled for this to be added as a pod   label    otherwise it will just be an annotation       batch kubernetes io cronjob scheduled timestamp  Type  Annotation  Example   batch kubernetes io cronjob scheduled timestamp   2016 05 19T03 00 00 07 00    Used on  Jobs and Pods controlled by CronJobs  This annotation is used to record the original  expected  creation timestamp for a Job  when that Job is part of a CronJob  The control plane sets the value to that timestamp in RFC3339 format  If the Job belongs to a CronJob with a timezone specified  then the timestamp is in that timezone  Otherwise  the timestamp is in controller manager s local time       kubectl kubernetes io default container  Type  Annotation  Example   kubectl kubernetes io default container   front end app    The value of the annotation is the container name that is default for this Pod  For example   kubectl logs  or  kubectl exec  without   c  or    container  flag will use this default container       kubectl kubernetes io default logs container  deprecated   Type  Annotation  Example   kubectl kubernetes io default logs container   front end app    The value of the annotation is the container name that is the default logging container for this Pod  For example   kubectl logs  without   c  or    container  flag will use this default container    This annotation is deprecated  You should use the   kubectl kubernetes io default container    kubectl kubernetes io default container  annotation instead  Kubernetes versions 1 25 and newer ignore this annotation        kubectl kubernetes io last applied configuration  Type  Annotation  Example   see following snippet     yaml     kubectl kubernetes io last applied configuration            apiVersion   apps v1   kind   Deployment   metadata    annotations      name   example   namespace   default    spec    
selector    matchLabels    app kubernetes io name  foo    template    metadata    labels    app kubernetes io name   foo     spec    containers     image   container registry example foo bar 1 42   name   foo bar   ports     containerPort  42

Used on: all objects

The kubectl command-line tool uses this annotation as a legacy mechanism to track changes. That mechanism has been superseded by [Server-side apply](/docs/reference/using-api/server-side-apply/).

### kubectl.kubernetes.io/restartedAt

Type: Annotation

Example: `kubectl.kubernetes.io/restartedAt: "2024-06-21T17:27:41Z"`

Used on: Deployment, ReplicaSet, StatefulSet, DaemonSet, Pod

This annotation contains the latest restart time of a resource (Deployment, ReplicaSet, StatefulSet or DaemonSet) where kubectl triggered a rollout in order to force creation of new Pods. The command `kubectl rollout restart <RESOURCE>` triggers a restart by patching the template metadata of all the Pods of that resource with this annotation. In the above example, the latest restart time is shown as 21 June 2024 at 17:27:41 UTC.

You should not assume that this annotation represents the date/time of the most recent update; a separate change could have been made since the last manually triggered rollout.

If you manually set this annotation on a Pod, nothing happens; the restarting side effect comes from how workload management and Pod templating works.

### endpoints.kubernetes.io/over-capacity

Type: Annotation

Example: `endpoints.kubernetes.io/over-capacity: truncated`

Used on: Endpoints

The control plane adds this annotation to an [Endpoints](/docs/concepts/services-networking/service/#endpoints) object if the associated Service has more than 1000 backing endpoints. The annotation indicates that the Endpoints object is over capacity and the number of endpoints has been truncated to 1000.

If the number of backend endpoints falls below 1000, the control plane removes this annotation.

### endpoints.kubernetes.io/last-change-trigger-time

Type: Annotation

Example: `endpoints.kubernetes.io/last-change-trigger-time: "2023-07-20T04:45:21Z"`

Used on: Endpoints

This annotation is set on an [Endpoints](/docs/concepts/services-networking/service/#endpoints) object and represents a timestamp, stored in RFC 3339 date-time string format (for example, `2018-10-22T19:32:52.1Z`). This is the timestamp of the last change in some Pod or Service object that triggered the change to the Endpoints object.

### control-plane.alpha.kubernetes.io/leader (deprecated)

Type: Annotation

Example: `control-plane.alpha.kubernetes.io/leader={"holderIdentity":"controller-0","leaseDurationSeconds":15,"acquireTime":"2023-01-19T13:12:57Z","renewTime":"2023-01-19T13:13:54Z","leaderTransitions":1}`

Used on: Endpoints

The control plane previously set this annotation on an [Endpoints](/docs/concepts/services-networking/service/#endpoints) object. This annotation provided the following detail:

- Who is the current leader.
- The time when the current leadership was acquired.
- The duration of the lease (of the leadership), in seconds.
- The time the current lease (the current leadership) should be renewed.
- The number of leadership transitions that happened in the past.

Kubernetes now uses [Leases](/docs/concepts/architecture/leases/) to manage leader assignment for the Kubernetes control plane.

### batch.kubernetes.io/job-tracking (deprecated)

Type: Annotation

Example: `batch.kubernetes.io/job-tracking: ""`

Used on: Jobs

The presence of this annotation on a Job used to indicate that the control plane is tracking the Job status using finalizers. Adding or removing this annotation no longer has an effect (Kubernetes v1.27 and later); all Jobs are tracked with finalizers.

### job-name (deprecated)

Type: Label

Example: `job-name: "pi"`

Used on: Jobs and Pods controlled by Jobs

Starting from Kubernetes 1.27, this label is deprecated. Kubernetes 1.27 and newer ignore this label and use the prefixed `batch.kubernetes.io/job-name` label.

### controller-uid (deprecated)

Type: Label

Example: `controller-uid: "<UID>"`

Used on: Jobs and Pods controlled by Jobs

Starting from Kubernetes 1.27, this label is deprecated. Kubernetes 1.27 and newer ignore this label and use the prefixed `batch.kubernetes.io/controller-uid` label.

### batch.kubernetes.io/job-name

Type: Label

Example: `batch.kubernetes.io/job-name: "pi"`

Used on: Jobs and Pods controlled by Jobs

This label is used as a user-friendly way to get Pods corresponding to a Job. The `job-name` comes from the `name` of the Job and allows for an easy way to get the Pods corresponding to the Job.

### batch.kubernetes.io/controller-uid

Type: Label

Example: `batch.kubernetes.io/controller-uid: "<UID>"`

Used on: Jobs and Pods controlled by Jobs

This label is used as a programmatic way to get all Pods corresponding to a Job.

The `controller-uid` is a unique identifier that gets set in the `selector` field so the Job controller can get all the corresponding Pods.

### scheduler.alpha.kubernetes.io/defaultTolerations

Type: Annotation

Example: `scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Equal", "value": "value1", "effect": "NoSchedule", "key": "dedicated-node"}]'`

Used on: Namespace

This annotation requires the [PodTolerationRestriction](/docs/reference/access-authn-authz/admission-controllers/#podtolerationrestriction) admission controller to be enabled. It allows assigning tolerations to a namespace; any new pods created in this namespace would get these tolerations added.

### scheduler.alpha.kubernetes.io/tolerationsWhitelist

Type: Annotation
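As a sketch of how the `scheduler.alpha.kubernetes.io/defaultTolerations` annotation described above is typically applied, a Namespace manifest might look like the following (the namespace name is hypothetical, and the `PodTolerationRestriction` admission controller must be enabled for the annotation to have any effect):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dedicated-workloads   # hypothetical namespace name
  annotations:
    # Every new Pod created in this namespace gets this toleration added.
    scheduler.alpha.kubernetes.io/defaultTolerations: >-
      [{"operator": "Equal", "value": "value1",
        "effect": "NoSchedule", "key": "dedicated-node"}]
```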
Example   scheduler alpha kubernetes io tolerationsWhitelist      operator    Exists    effect    NoSchedule    key    dedicated node       Used on  Namespace  This annotation is only useful when the  Alpha   PodTolerationRestriction   docs reference access authn authz admission controllers  podtolerationrestriction  admission controller is enabled  The annotation value is a JSON document that defines a list of allowed tolerations for the namespace it annotates  When you create a Pod or modify its tolerations  the API server checks the tolerations to see if they are mentioned in the allow list  The pod is admitted only if the check succeeds       scheduler alpha kubernetes io preferAvoidPods  deprecated    scheduleralphakubernetesio preferavoidpods   Type  Annotation  Used on  Node  This annotation requires the  NodePreferAvoidPods scheduling plugin   docs reference scheduling config  scheduling plugins  to be enabled  The plugin is deprecated since Kubernetes 1 22  Use  Taints and Tolerations   docs concepts scheduling eviction taint and toleration   instead       node kubernetes io not ready  Type  Taint  Example   node kubernetes io not ready   NoExecute    Used on  Node  The Node controller detects whether a Node is ready by monitoring its health and adds or removes this taint accordingly       node kubernetes io unreachable  Type  Taint  Example   node kubernetes io unreachable   NoExecute    Used on  Node  The Node controller adds the taint to a Node corresponding to the  NodeCondition   docs concepts architecture nodes  condition   Ready  being  Unknown        node kubernetes io unschedulable  Type  Taint  Example   node kubernetes io unschedulable   NoSchedule    Used on  Node  The taint will be added to a node when initializing the node to avoid race condition       node kubernetes io memory pressure  Type  Taint  Example   node kubernetes io memory pressure   NoSchedule    Used on  Node  The kubelet detects memory pressure based on  memory available  and  
`allocatableMemory.available` observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine whether the Node condition and taint should be added or removed.

### node.kubernetes.io/disk-pressure

Type: Taint

Example: `node.kubernetes.io/disk-pressure: "NoSchedule"`

Used on: Node

The kubelet detects disk pressure based on `imagefs.available`, `imagefs.inodesFree`, `nodefs.available` and `nodefs.inodesFree` (Linux only) observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the kubelet to determine whether the Node condition and taint should be added or removed.

### node.kubernetes.io/network-unavailable

Type: Taint

Example: `node.kubernetes.io/network-unavailable: "NoSchedule"`

Used on: Node

This is initially set by the kubelet when the cloud provider in use indicates a requirement for additional network configuration. Only when the route on the cloud is configured properly is the taint removed by the cloud provider.

### node.kubernetes.io/pid-pressure

Type: Taint

Example: `node.kubernetes.io/pid-pressure: "NoSchedule"`

Used on: Node

The kubelet compares the size of `/proc/sys/kernel/pid_max` with the PIDs consumed by Kubernetes on a node to derive the number of available PIDs, which is reported as the `pid.available` metric. The metric is then compared to the corresponding threshold that can be set on the kubelet to determine whether the Node condition and taint should be added or removed.

### node.kubernetes.io/out-of-service

Type: Taint

Example: `node.kubernetes.io/out-of-service:NoExecute`

Used on: Node

A user can manually add this taint to a Node to mark it out-of-service. If the `NodeOutOfServiceVolumeDetach` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the Pods on the node will be forcefully deleted if there are no
matching tolerations on it, and volume detach operations for the Pods terminating on the node will happen immediately. This allows the Pods on the out-of-service node to recover quickly on a different node.

Refer to [Non-graceful node shutdown](/docs/concepts/architecture/nodes/#non-graceful-node-shutdown) for further details about when and how to use this taint.

### node.cloudprovider.kubernetes.io/uninitialized

Type: Taint

Example: `node.cloudprovider.kubernetes.io/uninitialized: "NoSchedule"`

Used on: Node

When the kubelet is started with the "external" cloud provider, this taint is set on a Node to mark it as unusable, until a controller from the cloud-controller-manager initializes the Node and then removes the taint.

### node.cloudprovider.kubernetes.io/shutdown

Type: Taint

Example: `node.cloudprovider.kubernetes.io/shutdown: "NoSchedule"`

Used on: Node

If a Node is in a cloud-provider-specified shutdown state, the Node gets tainted accordingly with `node.cloudprovider.kubernetes.io/shutdown` and the taint effect of `NoSchedule`.

### feature.node.kubernetes.io/*

Type: Label

Example: `feature.node.kubernetes.io/network-sriov.capable: "true"`

Used on: Node

These labels are used by the Node Feature Discovery (NFD) component to advertise features on a node. All built-in labels use the `feature.node.kubernetes.io` label namespace and have the format `feature.node.kubernetes.io/<feature-name>: "true"`. NFD has many extension points for creating vendor- and application-specific labels. For details, see the [customization guide](https://kubernetes-sigs.github.io/node-feature-discovery/v0.12/usage/customization-guide).

### nfd.node.kubernetes.io/master.version

Type: Annotation

Example: `nfd.node.kubernetes.io/master.version: "v0.6.0"`

Used on: Node

For node(s) where the Node Feature Discovery (NFD) [master](https://kubernetes-sigs.github.io/node-feature-discovery/stable/usage/nfd-master.html) is scheduled, this annotation records the version of the
NFD master. It is used for informative purposes only.

### nfd.node.kubernetes.io/worker.version

Type: Annotation

Example: `nfd.node.kubernetes.io/worker.version: "v0.4.0"`

Used on: Nodes

This annotation records the version of the Node Feature Discovery [worker](https://kubernetes-sigs.github.io/node-feature-discovery/stable/usage/nfd-worker.html) if there is one running on a node. It is used for informative purposes only.

### nfd.node.kubernetes.io/feature-labels

Type: Annotation

Example: `nfd.node.kubernetes.io/feature-labels: "cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-hardware_multithreading,kernel-version.full"`

Used on: Nodes

This annotation records a comma-separated list of node feature labels managed by [Node Feature Discovery](https://kubernetes-sigs.github.io/node-feature-discovery/) (NFD). NFD uses this for an internal mechanism. You should not edit this annotation yourself.

### nfd.node.kubernetes.io/extended-resources

Type: Annotation

Example: `nfd.node.kubernetes.io/extended-resources: "accelerator.acme.example/q500,example.com/coprocessor-fx5"`

Used on: Nodes

This annotation records a comma-separated list of [extended resources](/docs/concepts/configuration/manage-resources-containers/#extended-resources) managed by [Node Feature Discovery](https://kubernetes-sigs.github.io/node-feature-discovery/) (NFD). NFD uses this for an internal mechanism. You should not edit this annotation yourself.

### nfd.node.kubernetes.io/node-name

Type: Label

Example: `nfd.node.kubernetes.io/node-name: node-1`

Used on: Nodes

It specifies which node the NodeFeature object is targeting. Creators of NodeFeature objects must set this label, and consumers of the objects are supposed to use the label for filtering features designated for a certain node.

These Node Feature Discovery (NFD) labels or annotations only apply to the nodes where NFD is running. To learn more about NFD and its components, go to its official [documentation](https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/).
### service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures the load balancer for a Service based on this annotation. The value determines how often the load balancer writes log entries. For example, if you set the value to 5, the log writes occur 5 seconds apart.

### service.beta.kubernetes.io/aws-load-balancer-access-log-enabled (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "false"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures the load balancer for a Service based on this annotation. Access logging is enabled if you set the annotation to "true".

### service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: example`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures the load balancer for a Service based on this annotation. The load balancer writes logs to an S3 bucket with the name you specify.

### service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: example`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures the load balancer for a Service based on this annotation. The load balancer writes log objects with the prefix that you specify.
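Taken together, the access-log annotations above are set in the `metadata.annotations` of a Service of `type: LoadBalancer`. A minimal sketch (the Service name, selector, ports, bucket name, and prefix below are placeholder values, not from this reference):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service            # hypothetical Service name
  annotations:
    # enable access logging and flush entries every 5 seconds
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
    # placeholder S3 destination for the log objects
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: example-bucket
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: example-prefix
spec:
  type: LoadBalancer
  selector:
    app: example                   # hypothetical selector
  ports:
  - port: 80
    targetPort: 8080
```

Note that annotation values are always strings, so numeric or boolean settings such as the emit interval must be quoted.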
### service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Environment=demo,Project=example"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures tags (an AWS concept) for a load balancer based on the comma-separated key/value pairs in the value of this annotation.

### service.beta.kubernetes.io/aws-load-balancer-alpn-policy (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Optional`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) uses this annotation. See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-attributes (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-attributes: "deletion_protection.enabled=true"`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) uses this annotation. See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-backend-protocol (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures the load balancer listener based on the value of this annotation.
### service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "false"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures the load balancer based on this annotation. The load balancer's connection draining setting depends on the value you set.

### service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"`

Used on: Service

If you configure connection draining (`service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled`) for a Service of `type: LoadBalancer`, and you use the AWS cloud, the integration configures the draining period based on this annotation. The value you set determines the draining timeout in seconds.

### service.beta.kubernetes.io/aws-load-balancer-ip-address-type (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) uses this annotation. See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer based on this annotation. The load balancer has a configured idle timeout period (in seconds) that
applies to its connections. If no data has been sent or received by the time that the idle timeout period elapses, the load balancer closes the connection.

### service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer based on this annotation. If you set this annotation to "true", each load balancer node distributes requests evenly across the registered targets in all enabled [availability zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones). If you disable cross-zone load balancing, each load balancer node distributes requests evenly across the registered targets in its availability zone only.

### service.beta.kubernetes.io/aws-load-balancer-eip-allocations (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-01bcdef23bcdef456,eipalloc-def1234abc4567890"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer based on this annotation. The value is a comma-separated list of elastic IP address allocation IDs.

This annotation is only relevant for Services of `type: LoadBalancer`, where the load balancer is an AWS Network Load Balancer.

### service.beta.kubernetes.io/aws-load-balancer-extra-security-groups (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-12abcd3456,sg-34dcba6543"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer
based on this annotation. The annotation value is a comma-separated list of extra AWS VPC security groups to configure for the load balancer.

### service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer based on this annotation. The annotation value specifies the number of successive successful health checks required for a backend to be considered healthy for traffic.

### service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "30"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer based on this annotation. The annotation value specifies the interval, in seconds, between health check probes made by the load balancer.

### service.beta.kubernetes.io/aws-load-balancer-healthcheck-path (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthcheck`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer based on this annotation. The annotation value determines the path component of the URL that is used for HTTP health checks.

### service.beta.kubernetes.io/aws-load-balancer-healthcheck-port (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "24"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer based on this
annotation. The annotation value determines which port the load balancer connects to when performing health checks.

### service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer based on this annotation. The annotation value determines how the load balancer checks the health of backend targets.

### service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "3"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer based on this annotation. The annotation value specifies the number of seconds before a probe that hasn't yet succeeded is automatically treated as having failed.

### service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer based on this annotation. The annotation value specifies the number of successive unsuccessful health checks required for a backend to be considered unhealthy for traffic.

### service.beta.kubernetes.io/aws-load-balancer-internal (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-internal: "true"`

Used on: Service

The cloud controller manager integration with AWS elastic load balancing configures a load balancer based on this annotation. When you
set this annotation to "true", the integration configures an internal load balancer.

If you use the [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/), see `service.beta.kubernetes.io/aws-load-balancer-scheme` instead.

### service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "true"`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) uses this annotation. See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-name (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-name: my-elb`

Used on: Service

If you set this annotation on a Service, and you also annotate that Service with `service.beta.kubernetes.io/aws-load-balancer-type: "external"`, and you use the [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) in your cluster, then the AWS load balancer controller sets the name of that load balancer to the value you set for *this* annotation.

See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-nlb-target-type (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "true"`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/)
uses this annotation. See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: "198.51.100.0,198.51.100.64"`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) uses this annotation. See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-proxy-protocol (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"`

Used on: Service

The official Kubernetes integration with AWS elastic load balancing configures a load balancer based on this annotation. The only permitted value is `"*"`, which indicates that the load balancer should wrap TCP connections to the backend Pod with the PROXY protocol.

### service.beta.kubernetes.io/aws-load-balancer-scheme (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-scheme: internal`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) uses this annotation. See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) in the AWS load balancer controller documentation.

### service.beta.kubernetes.io/aws-load-balancer-security-groups (deprecated)

Example: `service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f,sg-8725gr62r"`
Used on: Service

The AWS load balancer controller uses this annotation to specify a comma-separated list of security groups you want to attach to an AWS load balancer. Both names and IDs of security groups are supported, where a name matches a `Name` tag, not the `groupName` attribute.

When this annotation is added to a Service, the load balancer controller attaches the security groups referenced by the annotation to the load balancer. If you omit this annotation, the AWS load balancer controller automatically creates a new security group and attaches it to the load balancer.

Kubernetes v1.27 and later do not directly set or read this annotation. However, the AWS load balancer controller (part of the Kubernetes project) does still use the `service.beta.kubernetes.io/aws-load-balancer-security-groups` annotation.

### service.beta.kubernetes.io/load-balancer-source-ranges (deprecated)

Example: `service.beta.kubernetes.io/load-balancer-source-ranges: "192.0.2.0/25"`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) uses this annotation. You should set `spec.loadBalancerSourceRanges` for the Service instead.

### service.beta.kubernetes.io/aws-load-balancer-ssl-cert (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"`

Used on: Service

The official integration with AWS elastic load balancing configures TLS for a Service of `type: LoadBalancer` based on this annotation. The value of the annotation is the AWS Resource Name (ARN) of the X.509 certificate that the load balancer listener should use.

(The TLS protocol is based on an older technology that abbreviates to SSL.)

### service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy (beta)
Example: `service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01`

The official integration with AWS elastic load balancing configures TLS for a Service of `type: LoadBalancer` based on this annotation. The value of the annotation is the name of an AWS policy for negotiating TLS with a client peer.

### service.beta.kubernetes.io/aws-load-balancer-ssl-ports (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "*"`

The official integration with AWS elastic load balancing configures TLS for a Service of `type: LoadBalancer` based on this annotation. The value of the annotation is either `"*"`, which means that all of the load balancer's ports should use TLS, or a comma-separated list of port numbers.

### service.beta.kubernetes.io/aws-load-balancer-subnets (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-subnets: "private-a,private-b"`

Kubernetes' official integration with AWS uses this annotation to configure a load balancer and determine in which AWS availability zones to deploy the managed load balancing service. The value is either a comma-separated list of subnet names, or a comma-separated list of subnet IDs.

### service.beta.kubernetes.io/aws-load-balancer-target-group-attributes (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "stickiness.enabled=true,stickiness.type=source_ip"`

Used on: Service

The [AWS load balancer controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) uses this annotation. See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) in the AWS load balancer controller documentation.
### service.beta.kubernetes.io/aws-load-balancer-target-node-labels (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "kubernetes.io/os=Linux,topology.kubernetes.io/region=us-east-2"`

Kubernetes' official integration with AWS uses this annotation to determine which nodes in your cluster should be considered as valid targets for the load balancer.

### service.beta.kubernetes.io/aws-load-balancer-type (beta)

Example: `service.beta.kubernetes.io/aws-load-balancer-type: external`

Kubernetes' official integrations with AWS use this annotation to determine whether the AWS cloud provider integration should manage a Service of `type: LoadBalancer`.

There are two permitted values:

- `nlb`: the cloud controller manager configures a Network Load Balancer
- `external`: the cloud controller manager does not configure any load balancer

If you deploy a Service of `type: LoadBalancer` on AWS, and you don't set any `service.beta.kubernetes.io/aws-load-balancer-type` annotation, the AWS integration deploys a classic Elastic Load Balancer. This behavior, with no annotation present, is the default unless you specify otherwise.

When you set this annotation to `external` on a Service of `type: LoadBalancer`, and your cluster has a working deployment of the AWS Load Balancer controller, then the AWS Load Balancer controller attempts to deploy a load balancer based on the Service specification.

Do not modify or add the `service.beta.kubernetes.io/aws-load-balancer-type` annotation on an existing Service object. See the AWS documentation on this topic for more details.

### service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset (deprecated)

Example: `service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset: "false"`

Used on: Service

This annotation only works for Azure standard load
balancer backed Services. This annotation is used on the Service to specify whether the load balancer should disable or enable TCP reset on idle timeout. If enabled, it helps applications behave more predictably: detect the termination of a connection, remove expired connections, and initiate new connections. You can set the value to either true or false.

See [Load Balancer TCP Reset](https://learn.microsoft.com/en-gb/azure/load-balancer/load-balancer-tcp-reset) for more information.

This annotation is deprecated.

### pod-security.kubernetes.io/enforce

Type: Label

Example: `pod-security.kubernetes.io/enforce: baseline`

Used on: Namespace

Value **must** be one of `privileged`, `baseline`, or `restricted`, which correspond to [Pod Security Standard](/docs/concepts/security/pod-security-standards) levels. Specifically, the `enforce` label *prohibits* the creation of any Pod in the labeled Namespace which does not meet the requirements outlined in the indicated level.

See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission) for more information.

### pod-security.kubernetes.io/enforce-version

Type: Label

Example: `pod-security.kubernetes.io/enforce-version: v<major>.<minor>`

Used on: Namespace

Value **must** be `latest` or a valid Kubernetes version in the format `v<major>.<minor>`. This determines the version of the [Pod Security Standard](/docs/concepts/security/pod-security-standards) policies to apply when validating a Pod.

See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission) for more information.

### pod-security.kubernetes.io/audit

Type: Label

Example: `pod-security.kubernetes.io/audit: baseline`

Used on: Namespace

Value **must** be one of `privileged`, `baseline`, or `restricted`, which correspond to [Pod Security Standard](/docs/concepts/security/pod-security-standards) levels. Specifically, the `audit` label does not prevent the creation of a Pod in the
labeled Namespace which does not meet the requirements outlined in the indicated level, but adds an audit annotation to the Pod.

See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission) for more information.

### pod-security.kubernetes.io/audit-version

Type: Label

Example: `pod-security.kubernetes.io/audit-version: v<major>.<minor>`

Used on: Namespace

Value **must** be `latest` or a valid Kubernetes version in the format `v<major>.<minor>`. This determines the version of the [Pod Security Standard](/docs/concepts/security/pod-security-standards) policies to apply when validating a Pod.

See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission) for more information.

### pod-security.kubernetes.io/warn

Type: Label

Example: `pod-security.kubernetes.io/warn: baseline`

Used on: Namespace

Value **must** be one of `privileged`, `baseline`, or `restricted`, which correspond to [Pod Security Standard](/docs/concepts/security/pod-security-standards) levels. Specifically, the `warn` label does not prevent the creation of a Pod in the labeled Namespace which does not meet the requirements outlined in the indicated level, but returns a warning to the user after doing so. Note that warnings are also displayed when creating or updating objects that contain Pod templates, such as Deployments, Jobs, StatefulSets, etc.

See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission) for more information.

### pod-security.kubernetes.io/warn-version

Type: Label

Example: `pod-security.kubernetes.io/warn-version: v<major>.<minor>`

Used on: Namespace

Value **must** be `latest` or a valid Kubernetes version in the format `v<major>.<minor>`. This determines the version of the [Pod Security Standard](/docs/concepts/security/pod-security-standards) policies to apply when validating a submitted Pod. Note that warnings are also displayed when creating or updating objects that contain
Pod templates, such as Deployments, Jobs, StatefulSets, etc.

See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission) for more information.

### rbac.authorization.kubernetes.io/autoupdate

Type: Annotation

Example: `rbac.authorization.kubernetes.io/autoupdate: "false"`

Used on: ClusterRole, ClusterRoleBinding, Role, RoleBinding

When this annotation is set to `"true"` on default RBAC objects created by the API server, they are automatically updated at server start to add missing permissions and subjects (extra permissions and subjects are left in place). To prevent autoupdating a particular role or rolebinding, set this annotation to `"false"`. If you create your own RBAC objects and set this annotation to `"false"`, `kubectl auth reconcile` (which allows reconciling arbitrary RBAC objects in a manifest) respects this annotation and does not automatically add missing permissions and subjects.

### kubernetes.io/psp (deprecated)

Type: Annotation

Example: `kubernetes.io/psp: restricted`

Used on: Pod

This annotation was only relevant if you were using [PodSecurityPolicy](/docs/concepts/security/pod-security-policy/) objects. Kubernetes v1.25 and later does not support the PodSecurityPolicy API.

When the PodSecurityPolicy admission controller admitted a Pod, the admission controller modified the Pod to have this annotation. The value of the annotation was the name of the PodSecurityPolicy that was used for validation.

### seccomp.security.alpha.kubernetes.io/pod (non-functional)

Type: Annotation

Used on: Pod

Kubernetes before v1.25 allowed you to configure seccomp behavior using this annotation. See [Restrict a Container's Syscalls with seccomp](/docs/tutorials/security/seccomp/) to learn the supported way to specify seccomp restrictions for a Pod.

### container.seccomp.security.alpha.kubernetes.io/[NAME] (non-functional)
 Type  Annotation  Used on  Pod  Kubernetes before v1 25 allowed you to configure seccomp behavior using this annotation  See  Restrict a Container s Syscalls with seccomp   docs tutorials security seccomp   to learn the supported way to specify seccomp restrictions for a Pod       snapshot storage kubernetes io allow volume mode change  Type  Annotation  Example   snapshot storage kubernetes io allow volume mode change   true    Used on  VolumeSnapshotContent  Value can either be  true  or  false   This determines whether a user can modify the mode of the source volume when a PersistentVolumeClaim is being created from a VolumeSnapshot   Refer to  Converting the volume mode of a Snapshot   docs concepts storage volume snapshots  convert volume mode  and the  Kubernetes CSI Developer Documentation  https   kubernetes csi github io docs   for more information       scheduler alpha kubernetes io critical pod  deprecated   Type  Annotation  Example   scheduler alpha kubernetes io critical pod       Used on  Pod  This annotation lets Kubernetes control plane know about a Pod being a critical Pod so that the descheduler will not remove this Pod    Starting in v1 16  this annotation was removed in favor of  Pod Priority   docs concepts scheduling eviction pod priority preemption         Annotations used for audit       sorted by annotation         authorization k8s io decision    docs reference labels annotations taints audit annotations  authorization k8s io decision      authorization k8s io reason    docs reference labels annotations taints audit annotations  authorization k8s io reason      insecure sha1 invalid cert kubernetes io  hostname    docs reference labels annotations taints audit annotations  insecure sha1 invalid cert kubernetes io hostname      missing san invalid cert kubernetes io  hostname    docs reference labels annotations taints audit annotations  missing san invalid cert kubernetes io hostname      pod security kubernetes io audit violations    
docs reference labels annotations taints audit annotations  pod security kubernetes io audit violations      pod security kubernetes io enforce policy    docs reference labels annotations taints audit annotations  pod security kubernetes io enforce policy      pod security kubernetes io exempt    docs reference labels annotations taints audit annotations  pod security kubernetes io exempt      validation policy admission k8s io validation failure    docs reference labels annotations taints audit annotations  validation policy admission k8s io validation failure     See more details on  Audit Annotations   docs reference labels annotations taints audit annotations        kubeadm      kubeadm alpha kubernetes io cri socket  Type  Annotation  Example   kubeadm alpha kubernetes io cri socket  unix    run containerd container sock   Used on  Node  Annotation that kubeadm uses to preserve the CRI socket information given to kubeadm at  init   join  time for later use  kubeadm annotates the Node object with this information  The annotation remains  alpha   since ideally this should be a field in KubeletConfiguration instead       kubeadm kubernetes io etcd advertise client urls  Type  Annotation  Example   kubeadm kubernetes io etcd advertise client urls  https   172 17 0 18 2379   Used on  Pod  Annotation that kubeadm places on locally managed etcd Pods to keep track of a list of URLs where etcd clients should connect to  This is used mainly for etcd cluster health check purposes       kubeadm kubernetes io kube apiserver advertise address endpoint  Type  Annotation  Example   kubeadm kubernetes io kube apiserver advertise address endpoint  https   172 17 0 18 6443   Used on  Pod  Annotation that kubeadm places on locally managed  kube apiserver  Pods to keep track of the exposed advertise address port endpoint for that API server instance       kubeadm kubernetes io component config hash  Type  Annotation  Example   kubeadm kubernetes io component config hash  
2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae   Used on  ConfigMap  Annotation that kubeadm places on ConfigMaps that it manages for configuring components  It contains a hash  SHA 256  used to determine if the user has applied settings different from the kubeadm defaults for a particular component       node role kubernetes io control plane  Type  Label  Used on  Node  A marker label to indicate that the node is used to run control plane components  The kubeadm tool applies this label to the control plane nodes that it manages  Other cluster management tools typically also set this taint   You can label control plane nodes with this label to make it easier to schedule Pods only onto these nodes  or to avoid running Pods on the control plane  If this label is set  the  EndpointSlice controller   docs concepts services networking topology aware routing  implementation control plane  ignores that node while calculating Topology Aware Hints       node role kubernetes io control plane   node role kubernetes io control plane taint   Type  Taint  Example   node role kubernetes io control plane NoSchedule   Used on  Node  Taint that kubeadm applies on control plane nodes to restrict placing Pods and allow only specific pods to schedule on them   If this Taint is applied  control plane nodes allow only critical workloads to be scheduled onto them  You can manually remove this taint with the following command on a specific node      shell kubectl taint nodes  node name  node role kubernetes io control plane NoSchedule           node role kubernetes io master  deprecated    node role kubernetes io master taint   Type  Taint  Used on  Node  Example   node role kubernetes io master NoSchedule   Taint that kubeadm previously applied on control plane nodes to allow only critical workloads to schedule on them  Replaced by the   node role kubernetes io control plane    node role kubernetes io control plane taint  taint  kubeadm no longer sets or uses this 
deprecated taint "}
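The record above ends with the `node-role.kubernetes.io/control-plane:NoSchedule` taint that kubeadm applies so that only critical workloads schedule onto control plane nodes. As an illustration of how a toleration admits a Pod past that taint, here is a minimal Python sketch of the core matching rules (operator `Exists` ignores the value, `Equal` compares it, an empty effect matches all effects); the helper is hypothetical, not part of kubeadm or any client library.

```python
# Illustrative sketch (hypothetical helper, not a Kubernetes client API):
# does any toleration in the list tolerate the control-plane taint?

CONTROL_PLANE_TAINT = {
    "key": "node-role.kubernetes.io/control-plane",
    "effect": "NoSchedule",
}

def tolerates(taint: dict, tolerations: list) -> bool:
    """Return True if any toleration matches the taint."""
    for t in tolerations:
        # An empty key with operator Exists tolerates every taint.
        if t.get("operator") == "Exists" and not t.get("key"):
            key_match = True
        else:
            key_match = t.get("key") == taint["key"]
        # An empty effect on the toleration matches all effects.
        effect_match = not t.get("effect") or t.get("effect") == taint.get("effect")
        if not (key_match and effect_match):
            continue
        if t.get("operator", "Equal") == "Exists":
            return True
        if t.get("value", "") == taint.get("value", ""):
            return True
    return False

# The kind of toleration critical system workloads carry:
critical = [{"key": "node-role.kubernetes.io/control-plane",
             "operator": "Exists", "effect": "NoSchedule"}]
print(tolerates(CONTROL_PLANE_TAINT, critical))  # True
print(tolerates(CONTROL_PLANE_TAINT, []))        # False
```

This mirrors why `kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane:NoSchedule-` (removing the taint) and adding a toleration are the two ways shown above to let ordinary Pods onto a control plane node.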
{"questions":"kubernetes reference contenttype tool reference Resource Types title Kubernetes Custom Metrics v1beta2 package custom metrics k8s io v1beta2 p Package v1beta2 is the v1beta2 version of the custommetrics API p autogenerated true","answers":"---\ntitle: Kubernetes Custom Metrics (v1beta2)\ncontent_type: tool-reference\npackage: custom.metrics.k8s.io\/v1beta2\nauto_generated: true\n---\n<p>Package v1beta2 is the v1beta2 version of the custom_metrics API.<\/p>\n\n\n## Resource Types \n\n\n- [MetricListOptions](#custom-metrics-k8s-io-v1beta2-MetricListOptions)\n- [MetricValue](#custom-metrics-k8s-io-v1beta2-MetricValue)\n- [MetricValueList](#custom-metrics-k8s-io-v1beta2-MetricValueList)\n  \n    \n\n## `MetricListOptions`     {#custom-metrics-k8s-io-v1beta2-MetricListOptions}\n    \n\n\n<p>MetricListOptions is used to select metrics by their label selectors<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>custom.metrics.k8s.io\/v1beta2<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>MetricListOptions<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>labelSelector<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>A selector to restrict the list of returned objects by their labels.\nDefaults to everything.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>metricLabelSelector<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>A selector to restrict the list of returned metrics by their labels<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `MetricValue`     {#custom-metrics-k8s-io-v1beta2-MetricValue}\n    \n\n**Appears in:**\n\n- [MetricValueList](#custom-metrics-k8s-io-v1beta2-MetricValueList)\n\n\n<p>MetricValue is the metric value for some object<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    
\n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>custom.metrics.k8s.io\/v1beta2<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>MetricValue<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>describedObject<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#objectreference-v1-core\"><code>core\/v1.ObjectReference<\/code><\/a>\n<\/td>\n<td>\n   <p>a reference to the described object<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>metric<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#custom-metrics-k8s-io-v1beta2-MetricIdentifier\"><code>MetricIdentifier<\/code><\/a>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>timestamp<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#time-v1-meta\"><code>meta\/v1.Time<\/code><\/a>\n<\/td>\n<td>\n   <p>indicates the time at which the metrics were produced<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>windowSeconds<\/code> <B>[Required]<\/B><br\/>\n<code>int64<\/code>\n<\/td>\n<td>\n   <p>indicates the window ([Timestamp-Window, Timestamp]) from\nwhich these metrics were calculated, when returning rate\nmetrics calculated from cumulative metrics (or zero for\nnon-calculated instantaneous metrics).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>value<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/api\/resource#Quantity\"><code>k8s.io\/apimachinery\/pkg\/api\/resource.Quantity<\/code><\/a>\n<\/td>\n<td>\n   <p>the value of the metric for this<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `MetricValueList`     {#custom-metrics-k8s-io-v1beta2-MetricValueList}\n    \n\n\n<p>MetricValueList is a list of values for a given metric for some set of objects<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    
\n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>custom.metrics.k8s.io\/v1beta2<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>MetricValueList<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>metadata<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#listmeta-v1-meta\"><code>meta\/v1.ListMeta<\/code><\/a>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>items<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#custom-metrics-k8s-io-v1beta2-MetricValue\"><code>[]MetricValue<\/code><\/a>\n<\/td>\n<td>\n   <p>the value of the metric across the described objects<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `MetricIdentifier`     {#custom-metrics-k8s-io-v1beta2-MetricIdentifier}\n    \n\n**Appears in:**\n\n- [MetricValue](#custom-metrics-k8s-io-v1beta2-MetricValue)\n\n\n<p>MetricIdentifier identifies a metric by name and, optionally, selector<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>name is the name of the given metric<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>selector<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#labelselector-v1-meta\"><code>meta\/v1.LabelSelector<\/code><\/a>\n<\/td>\n<td>\n   <p>selector represents the label selector that could be used to select\nthis metric, and will generally just be the selector passed in to\nthe query used to fetch this metric.\nWhen left blank, only the metric's Name will be used to gather metrics.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n ","site":"kubernetes reference","answers_cleaned":"    title  Kubernetes Custom Metrics  v1beta2  content type  tool reference package  custom metrics k8s io v1beta2 auto generated  true      p 
Package v1beta2 is the v1beta2 version of the custom metrics API   p       Resource Types       MetricListOptions   custom metrics k8s io v1beta2 MetricListOptions     MetricValue   custom metrics k8s io v1beta2 MetricValue     MetricValueList   custom metrics k8s io v1beta2 MetricValueList               MetricListOptions        custom metrics k8s io v1beta2 MetricListOptions          p MetricListOptions is used to select metrics by their label selectors  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code custom metrics k8s io v1beta2  code   td   tr   tr  td  code kind  code  br  string  td  td  code MetricListOptions  code   td   tr           tr  td  code labelSelector  code  br    code string  code    td   td      p A selector to restrict the list of returned objects by their labels  Defaults to everything   p    td    tr   tr  td  code metricLabelSelector  code  br    code string  code    td   td      p A selector to restrict the list of returned metrics by their labels  p    td    tr    tbody    table       MetricValue        custom metrics k8s io v1beta2 MetricValue          Appears in        MetricValueList   custom metrics k8s io v1beta2 MetricValueList     p MetricValue is the metric value for some object  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code custom metrics k8s io v1beta2  code   td   tr   tr  td  code kind  code  br  string  td  td  code MetricValue  code   td   tr           tr  td  code describedObject  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  objectreference v1 core   code core v1 ObjectReference  code   a    td   td      p a reference to the described object  p    td    tr   tr  td  code metric  code   B  Required   B  br    a href   custom 
metrics k8s io v1beta2 MetricIdentifier   code MetricIdentifier  code   a    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code timestamp  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  time v1 meta   code meta v1 Time  code   a    td   td      p indicates the time at which the metrics were produced  p    td    tr   tr  td  code windowSeconds  code   B  Required   B  br    code int64  code    td   td      p indicates the window   Timestamp Window  Timestamp   from which these metrics were calculated  when returning rate metrics calculated from cumulative metrics  or zero for non calculated instantaneous metrics    p    td    tr   tr  td  code value  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg api resource Quantity   code k8s io apimachinery pkg api resource Quantity  code   a    td   td      p the value of the metric for this  p    td    tr    tbody    table       MetricValueList        custom metrics k8s io v1beta2 MetricValueList          p MetricValueList is a list of values for a given metric for some set of objects  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code custom metrics k8s io v1beta2  code   td   tr   tr  td  code kind  code  br  string  td  td  code MetricValueList  code   td   tr           tr  td  code metadata  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  listmeta v1 meta   code meta v1 ListMeta  code   a    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code items  code   B  Required   B  br    a href   custom metrics k8s io v1beta2 MetricValue   code   MetricValue  code   a    td   td      p the value of the metric across the described objects  p    td    tr    tbody    table       
MetricIdentifier        custom metrics k8s io v1beta2 MetricIdentifier          Appears in        MetricValue   custom metrics k8s io v1beta2 MetricValue     p MetricIdentifier identifies a metric by name and  optionally  selector  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p name is the name of the given metric  p    td    tr   tr  td  code selector  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  labelselector v1 meta   code meta v1 LabelSelector  code   a    td   td      p selector represents the label selector that could be used to select this metric  and will generally just be the selector passed in to the query used to fetch this metric  When left blank  only the metric s Name will be used to gather metrics   p    td    tr    tbody    table   "}
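The v1beta2 record above defines `MetricValue` with a `windowSeconds` field that marks the `[timestamp-window, timestamp]` interval used when a rate metric is derived from a cumulative counter. As a sketch of what a conforming object looks like, the hypothetical helper below (not part of any Kubernetes client library) builds a `MetricValue`-shaped dict from two counter samples, using only field names taken from the schema table; the metric name and object reference are made-up examples.

```python
# Hypothetical helper: construct a MetricValue-shaped dict in the
# custom.metrics.k8s.io/v1beta2 schema, deriving a rate from two
# cumulative counter samples taken at t0 and t1.
from datetime import datetime, timezone

def rate_metric_value(described_object: dict, name: str,
                      earlier: float, later: float,
                      t0: datetime, t1: datetime) -> dict:
    window = int((t1 - t0).total_seconds())
    rate = (later - earlier) / window            # e.g. requests per second
    return {
        "apiVersion": "custom.metrics.k8s.io/v1beta2",
        "kind": "MetricValue",
        "describedObject": described_object,     # core/v1.ObjectReference
        "metric": {"name": name},                # MetricIdentifier; selector omitted
        "timestamp": t1.isoformat(),
        "windowSeconds": window,                 # [timestamp-window, timestamp]
        "value": f"{rate:g}",                    # Quantity serializes as a string
    }

mv = rate_metric_value(
    {"kind": "Service", "name": "frontend", "namespace": "default"},
    "http_requests_per_second",
    earlier=1000.0, later=1600.0,
    t0=datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc),
    t1=datetime(2024, 1, 1, 12, 1, 0, tzinfo=timezone.utc),
)
print(mv["windowSeconds"], mv["value"])  # 60 10
```

A `MetricValueList` response would carry such objects in its `items` array, one per described object.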
{"questions":"kubernetes reference contenttype tool reference package metrics k8s io v1beta1 title Kubernetes Metrics v1beta1 Resource Types p Package v1beta1 is the v1beta1 version of the metrics API p autogenerated true","answers":"---\ntitle: Kubernetes Metrics (v1beta1)\ncontent_type: tool-reference\npackage: metrics.k8s.io\/v1beta1\nauto_generated: true\n---\n<p>Package v1beta1 is the v1beta1 version of the metrics API.<\/p>\n\n\n## Resource Types \n\n\n- [NodeMetrics](#metrics-k8s-io-v1beta1-NodeMetrics)\n- [NodeMetricsList](#metrics-k8s-io-v1beta1-NodeMetricsList)\n- [PodMetrics](#metrics-k8s-io-v1beta1-PodMetrics)\n- [PodMetricsList](#metrics-k8s-io-v1beta1-PodMetricsList)\n  \n    \n\n## `NodeMetrics`     {#metrics-k8s-io-v1beta1-NodeMetrics}\n    \n\n**Appears in:**\n\n- [NodeMetricsList](#metrics-k8s-io-v1beta1-NodeMetricsList)\n\n\n<p>NodeMetrics sets resource usage metrics of a node.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>metrics.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>NodeMetrics<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>metadata<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#objectmeta-v1-meta\"><code>meta\/v1.ObjectMeta<\/code><\/a>\n<\/td>\n<td>\n   <p>Standard object's metadata.\nMore info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata<\/p>\nRefer to the Kubernetes API documentation for the fields of the <code>metadata<\/code> field.<\/td>\n<\/tr>\n<tr><td><code>timestamp<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#time-v1-meta\"><code>meta\/v1.Time<\/code><\/a>\n<\/td>\n<td>\n   <p>The following fields define time interval from which metrics were\ncollected from 
the interval [Timestamp-Window, Timestamp].<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>window<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>usage<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#resourcelist-v1-core\"><code>core\/v1.ResourceList<\/code><\/a>\n<\/td>\n<td>\n   <p>The memory usage is the memory working set.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NodeMetricsList`     {#metrics-k8s-io-v1beta1-NodeMetricsList}\n    \n\n\n<p>NodeMetricsList is a list of NodeMetrics.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>metrics.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>NodeMetricsList<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>metadata<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#listmeta-v1-meta\"><code>meta\/v1.ListMeta<\/code><\/a>\n<\/td>\n<td>\n   <p>Standard list metadata.\nMore info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>items<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#metrics-k8s-io-v1beta1-NodeMetrics\"><code>[]NodeMetrics<\/code><\/a>\n<\/td>\n<td>\n   <p>List of node metrics.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PodMetrics`     {#metrics-k8s-io-v1beta1-PodMetrics}\n    \n\n**Appears in:**\n\n- [PodMetricsList](#metrics-k8s-io-v1beta1-PodMetricsList)\n\n\n<p>PodMetrics sets resource usage metrics of a pod.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th 
width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>metrics.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>PodMetrics<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>metadata<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#objectmeta-v1-meta\"><code>meta\/v1.ObjectMeta<\/code><\/a>\n<\/td>\n<td>\n   <p>Standard object's metadata.\nMore info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#metadata<\/p>\nRefer to the Kubernetes API documentation for the fields of the <code>metadata<\/code> field.<\/td>\n<\/tr>\n<tr><td><code>timestamp<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#time-v1-meta\"><code>meta\/v1.Time<\/code><\/a>\n<\/td>\n<td>\n   <p>The following fields define time interval from which metrics were\ncollected from the interval [Timestamp-Window, Timestamp].<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>window<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>containers<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#metrics-k8s-io-v1beta1-ContainerMetrics\"><code>[]ContainerMetrics<\/code><\/a>\n<\/td>\n<td>\n   <p>Metrics for all containers are collected within the same time window.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PodMetricsList`     {#metrics-k8s-io-v1beta1-PodMetricsList}\n    \n\n\n<p>PodMetricsList is a list of PodMetrics.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    
\n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>metrics.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>PodMetricsList<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>metadata<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#listmeta-v1-meta\"><code>meta\/v1.ListMeta<\/code><\/a>\n<\/td>\n<td>\n   <p>Standard list metadata.\nMore info: https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>items<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#metrics-k8s-io-v1beta1-PodMetrics\"><code>[]PodMetrics<\/code><\/a>\n<\/td>\n<td>\n   <p>List of pod metrics.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ContainerMetrics`     {#metrics-k8s-io-v1beta1-ContainerMetrics}\n    \n\n**Appears in:**\n\n- [PodMetrics](#metrics-k8s-io-v1beta1-PodMetrics)\n\n\n<p>ContainerMetrics sets resource usage metrics of a container.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Container name corresponding to the one from pod.spec.containers.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>usage<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#resourcelist-v1-core\"><code>core\/v1.ResourceList<\/code><\/a>\n<\/td>\n<td>\n   <p>The memory usage is the memory working set.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n ","site":"kubernetes reference","answers_cleaned":"    title  Kubernetes Metrics  v1beta1  content type  tool reference package  metrics k8s io v1beta1 auto generated  true      p Package v1beta1 is the v1beta1 version of the metrics API   p       Resource Types       NodeMetrics   metrics k8s io v1beta1 NodeMetrics     NodeMetricsList   
metrics k8s io v1beta1 NodeMetricsList     PodMetrics   metrics k8s io v1beta1 PodMetrics     PodMetricsList   metrics k8s io v1beta1 PodMetricsList               NodeMetrics        metrics k8s io v1beta1 NodeMetrics          Appears in        NodeMetricsList   metrics k8s io v1beta1 NodeMetricsList     p NodeMetrics sets resource usage metrics of a node   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code metrics k8s io v1beta1  code   td   tr   tr  td  code kind  code  br  string  td  td  code NodeMetrics  code   td   tr           tr  td  code metadata  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  objectmeta v1 meta   code meta v1 ObjectMeta  code   a    td   td      p Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata  p  Refer to the Kubernetes API documentation for the fields of the  code metadata  code  field   td    tr   tr  td  code timestamp  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  time v1 meta   code meta v1 Time  code   a    td   td      p The following fields define time interval from which metrics were collected from the interval  Timestamp Window  Timestamp    p    td    tr   tr  td  code window  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code usage  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  resourcelist v1 core   code core v1 ResourceList  code   a    td   td      p The memory usage is the memory working set   p    td    tr    tbody    table       NodeMetricsList        metrics k8s io v1beta1 
NodeMetricsList          p NodeMetricsList is a list of NodeMetrics   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code metrics k8s io v1beta1  code   td   tr   tr  td  code kind  code  br  string  td  td  code NodeMetricsList  code   td   tr           tr  td  code metadata  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  listmeta v1 meta   code meta v1 ListMeta  code   a    td   td      p Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds  p    td    tr   tr  td  code items  code   B  Required   B  br    a href   metrics k8s io v1beta1 NodeMetrics   code   NodeMetrics  code   a    td   td      p List of node metrics   p    td    tr    tbody    table       PodMetrics        metrics k8s io v1beta1 PodMetrics          Appears in        PodMetricsList   metrics k8s io v1beta1 PodMetricsList     p PodMetrics sets resource usage metrics of a pod   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code metrics k8s io v1beta1  code   td   tr   tr  td  code kind  code  br  string  td  td  code PodMetrics  code   td   tr           tr  td  code metadata  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  objectmeta v1 meta   code meta v1 ObjectMeta  code   a    td   td      p Standard object s metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md metadata  p  Refer to the Kubernetes API documentation for the fields of the  code metadata  code  field   td    tr   tr  td  code timestamp  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  time v1 meta   code meta v1 Time  
code   a    td   td      p The following fields define time interval from which metrics were collected from the interval  Timestamp Window  Timestamp    p    td    tr   tr  td  code window  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code containers  code   B  Required   B  br    a href   metrics k8s io v1beta1 ContainerMetrics   code   ContainerMetrics  code   a    td   td      p Metrics for all containers are collected within the same time window   p    td    tr    tbody    table       PodMetricsList        metrics k8s io v1beta1 PodMetricsList          p PodMetricsList is a list of PodMetrics   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code metrics k8s io v1beta1  code   td   tr   tr  td  code kind  code  br  string  td  td  code PodMetricsList  code   td   tr           tr  td  code metadata  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  listmeta v1 meta   code meta v1 ListMeta  code   a    td   td      p Standard list metadata  More info  https   git k8s io community contributors devel sig architecture api conventions md types kinds  p    td    tr   tr  td  code items  code   B  Required   B  br    a href   metrics k8s io v1beta1 PodMetrics   code   PodMetrics  code   a    td   td      p List of pod metrics   p    td    tr    tbody    table       ContainerMetrics        metrics k8s io v1beta1 ContainerMetrics          Appears in        PodMetrics   metrics k8s io v1beta1 PodMetrics     p ContainerMetrics sets resource usage metrics of a container   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   
B  br    code string  code    td   td      p Container name corresponding to the one from pod spec containers   p    td    tr   tr  td  code usage  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  resourcelist v1 core   code core v1 ResourceList  code   a    td   td      p The memory usage is the memory working set   p    td    tr    tbody    table   "}
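The metrics.k8s.io/v1beta1 record above shows that `PodMetrics.containers` carries per-container `usage` as a `core/v1.ResourceList`, whose quantities serialize as strings such as `250m` or `128Mi`. A minimal sketch of consuming that shape, assuming input that follows the schema and handling only the quantity suffixes used in the sample (`m` millicores; `Ki`/`Mi`/`Gi` bytes) rather than the full Quantity grammar:

```python
# Minimal sketch (assumed input shape follows the PodMetrics schema):
# aggregate per-container usage into pod totals. Only a subset of the
# Kubernetes Quantity suffixes is parsed here.

def parse_cpu_millicores(q: str) -> int:
    # "250m" -> 250; a bare number like "1.5" means whole cores.
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def parse_memory_bytes(q: str) -> int:
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, mult in units.items():
        if q.endswith(suffix):
            return int(q[:-len(suffix)]) * mult
    return int(q)  # plain bytes

def pod_totals(pod_metrics: dict) -> dict:
    cpu = sum(parse_cpu_millicores(c["usage"]["cpu"])
              for c in pod_metrics["containers"])
    mem = sum(parse_memory_bytes(c["usage"]["memory"])
              for c in pod_metrics["containers"])
    return {"cpu_millicores": cpu, "memory_bytes": mem}

sample = {
    "kind": "PodMetrics",
    "apiVersion": "metrics.k8s.io/v1beta1",
    "containers": [
        {"name": "app",     "usage": {"cpu": "250m", "memory": "128Mi"}},
        {"name": "sidecar", "usage": {"cpu": "50m",  "memory": "32Mi"}},
    ],
}
print(pod_totals(sample))  # {'cpu_millicores': 300, 'memory_bytes': 167772160}
```

Because the schema states that all containers in a `PodMetrics` are collected within the same time window, summing their usage is a meaningful pod-level total; the memory figure is the working set, per the field description.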
{"questions":"kubernetes reference title Kubernetes External Metrics v1beta1 contenttype tool reference p Package v1beta1 is the v1beta1 version of the external metrics API p Resource Types package external metrics k8s io v1beta1 autogenerated true","answers":"---\ntitle: Kubernetes External Metrics (v1beta1)\ncontent_type: tool-reference\npackage: external.metrics.k8s.io\/v1beta1\nauto_generated: true\n---\n<p>Package v1beta1 is the v1beta1 version of the external metrics API.<\/p>\n\n\n## Resource Types \n\n\n- [ExternalMetricValue](#external-metrics-k8s-io-v1beta1-ExternalMetricValue)\n- [ExternalMetricValueList](#external-metrics-k8s-io-v1beta1-ExternalMetricValueList)\n  \n    \n\n## `ExternalMetricValue`     {#external-metrics-k8s-io-v1beta1-ExternalMetricValue}\n    \n\n**Appears in:**\n\n- [ExternalMetricValueList](#external-metrics-k8s-io-v1beta1-ExternalMetricValueList)\n\n\n<p>ExternalMetricValue is a metric value for external metric\nA single metric value is identified by metric name and a set of string labels.\nFor one metric there can be multiple values with different sets of labels.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>external.metrics.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>ExternalMetricValue<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>metricName<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>the name of the metric<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>metricLabels<\/code> <B>[Required]<\/B><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>a set of labels that identify a single time series for the metric<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>timestamp<\/code> <B>[Required]<\/B><br\/>\n<a 
href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#time-v1-meta\"><code>meta\/v1.Time<\/code><\/a>\n<\/td>\n<td>\n   <p>indicates the time at which the metrics were produced<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>window<\/code> <B>[Required]<\/B><br\/>\n<code>int64<\/code>\n<\/td>\n<td>\n   <p>indicates the window ([Timestamp-Window, Timestamp]) from\nwhich these metrics were calculated, when returning rate\nmetrics calculated from cumulative metrics (or zero for\nnon-calculated instantaneous metrics).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>value<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/api\/resource#Quantity\"><code>k8s.io\/apimachinery\/pkg\/api\/resource.Quantity<\/code><\/a>\n<\/td>\n<td>\n   <p>the value of the metric<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExternalMetricValueList`     {#external-metrics-k8s-io-v1beta1-ExternalMetricValueList}\n    \n\n\n<p>ExternalMetricValueList is a list of values for a given metric for some set labels<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>external.metrics.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>ExternalMetricValueList<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>metadata<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#listmeta-v1-meta\"><code>meta\/v1.ListMeta<\/code><\/a>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>items<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#external-metrics-k8s-io-v1beta1-ExternalMetricValue\"><code>[]ExternalMetricValue<\/code><\/a>\n<\/td>\n<td>\n   <p>value of the metric matching a given set of labels<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n ","site":"kubernetes 
reference","answers_cleaned":"    title  Kubernetes External Metrics  v1beta1  content type  tool reference package  external metrics k8s io v1beta1 auto generated  true      p Package v1beta1 is the v1beta1 version of the external metrics API   p       Resource Types       ExternalMetricValue   external metrics k8s io v1beta1 ExternalMetricValue     ExternalMetricValueList   external metrics k8s io v1beta1 ExternalMetricValueList               ExternalMetricValue        external metrics k8s io v1beta1 ExternalMetricValue          Appears in        ExternalMetricValueList   external metrics k8s io v1beta1 ExternalMetricValueList     p ExternalMetricValue is a metric value for external metric A single metric value is identified by metric name and a set of string labels  For one metric there can be multiple values with different sets of labels   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code external metrics k8s io v1beta1  code   td   tr   tr  td  code kind  code  br  string  td  td  code ExternalMetricValue  code   td   tr           tr  td  code metricName  code   B  Required   B  br    code string  code    td   td      p the name of the metric  p    td    tr   tr  td  code metricLabels  code   B  Required   B  br    code map string string  code    td   td      p a set of labels that identify a single time series for the metric  p    td    tr   tr  td  code timestamp  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  time v1 meta   code meta v1 Time  code   a    td   td      p indicates the time at which the metrics were produced  p    td    tr   tr  td  code window  code   B  Required   B  br    code int64  code    td   td      p indicates the window   Timestamp Window  Timestamp   from which these metrics were calculated  when returning rate metrics calculated from cumulative metrics  or 
zero for non calculated instantaneous metrics    p    td    tr   tr  td  code value  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg api resource Quantity   code k8s io apimachinery pkg api resource Quantity  code   a    td   td      p the value of the metric  p    td    tr    tbody    table       ExternalMetricValueList        external metrics k8s io v1beta1 ExternalMetricValueList          p ExternalMetricValueList is a list of values for a given metric for some set labels  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code external metrics k8s io v1beta1  code   td   tr   tr  td  code kind  code  br  string  td  td  code ExternalMetricValueList  code   td   tr           tr  td  code metadata  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 28  listmeta v1 meta   code meta v1 ListMeta  code   a    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code items  code   B  Required   B  br    a href   external metrics k8s io v1beta1 ExternalMetricValue   code   ExternalMetricValue  code   a    td   td      p value of the metric matching a given set of labels  p    td    tr    tbody    table   "}
{"questions":"kubernetes reference weight 20 file and passing its path as a command line argument feature state fork8sversion v1 25 state stable You can customize the behavior of the by writing a configuration contenttype concept title Scheduler Configuration","answers":"---\ntitle: Scheduler Configuration\ncontent_type: concept\nweight: 20\n---\n\n\n\nYou can customize the behavior of the `kube-scheduler` by writing a configuration\nfile and passing its path as a command line argument.\n\n<!-- overview -->\n\n<!-- body -->\n\nA scheduling Profile allows you to configure the different stages of scheduling\nin the kube-scheduler.\nEach stage is exposed in an extension point. Plugins provide scheduling behaviors\nby implementing one or more of these extension points.\n\nYou can specify scheduling profiles by running `kube-scheduler --config <filename>`,\nusing the\nKubeSchedulerConfiguration [v1](\/docs\/reference\/config-api\/kube-scheduler-config.v1\/)\nstruct.\n\nA minimal configuration looks as follows:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nclientConnection:\n  kubeconfig: \/etc\/srv\/kubernetes\/kube-scheduler\/kubeconfig\n```\n\n\nKubeSchedulerConfiguration v1beta3 is deprecated in v1.26 and is removed in v1.29.\nPlease migrate KubeSchedulerConfiguration to [v1](\/docs\/reference\/config-api\/kube-scheduler-config.v1\/).\n\n\n## Profiles\n\nA scheduling Profile allows you to configure the different stages of scheduling\nin the kube-scheduler.\nEach stage is exposed in an [extension point](#extension-points).\n[Plugins](#scheduling-plugins) provide scheduling behaviors by implementing one\nor more of these extension points.\n\nYou can configure a single instance of `kube-scheduler` to run\n[multiple profiles](#multiple-profiles).\n\n### Extension points\n\nScheduling happens in a series of stages that are exposed through the following\nextension points:\n\n1. 
`queueSort`: These plugins provide an ordering function that is used to\n   sort pending Pods in the scheduling queue. Exactly one queue sort plugin\n   may be enabled at a time.\n1. `preFilter`: These plugins are used to pre-process or check information\n   about a Pod or the cluster before filtering. They can mark a pod as\n   unschedulable.\n1. `filter`: These plugins are the equivalent of Predicates in a scheduling\n   Policy and are used to filter out nodes that can not run the Pod. Filters\n   are called in the configured order. A pod is marked as unschedulable if no\n   nodes pass all the filters.\n1. `postFilter`: These plugins are called in their configured order when no\n   feasible nodes were found for the pod. If any `postFilter` plugin marks the\n   Pod _schedulable_, the remaining plugins are not called.\n1. `preScore`: This is an informational extension point that can be used\n   for doing pre-scoring work.\n1. `score`: These plugins provide a score to each node that has passed the\n   filtering phase. The scheduler will then select the node with the highest\n   weighted scores sum.\n1. `reserve`: This is an informational extension point that notifies plugins\n   when resources have been reserved for a given Pod. Plugins also implement an\n   `Unreserve` call that gets called in the case of failure during or after\n   `Reserve`.\n1. `permit`: These plugins can prevent or delay the binding of a Pod.\n1. `preBind`: These plugins perform any work required before a Pod is bound.\n1. `bind`: The plugins bind a Pod to a Node. `bind` plugins are called in order\n   and once one has done the binding, the remaining plugins are skipped. At\n   least one bind plugin is required.\n1. `postBind`: This is an informational extension point that is called after\n   a Pod has been bound.\n1. 
`multiPoint`: This is a config-only field that allows plugins to be enabled\n   or disabled for all of their applicable extension points simultaneously.\n\nFor each extension point, you could disable specific [default plugins](#scheduling-plugins)\nor enable your own. For example:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - plugins:\n      score:\n        disabled:\n        - name: PodTopologySpread\n        enabled:\n        - name: MyCustomPluginA\n          weight: 2\n        - name: MyCustomPluginB\n          weight: 1\n```\n\nYou can use `*` as name in the disabled array to disable all default plugins\nfor that extension point. This can also be used to rearrange plugins order, if\ndesired.\n\n### Scheduling plugins\n\nThe following plugins, enabled by default, implement one or more of these\nextension points:\n\n- `ImageLocality`: Favors nodes that already have the container images that the\n  Pod runs.\n  Extension points: `score`.\n- `TaintToleration`: Implements\n  [taints and tolerations](\/docs\/concepts\/scheduling-eviction\/taint-and-toleration\/).\n  Implements extension points: `filter`, `preScore`, `score`.\n- `NodeName`: Checks if a Pod spec node name matches the current node.\n  Extension points: `filter`.\n- `NodePorts`: Checks if a node has free ports for the requested Pod ports.\n  Extension points: `preFilter`, `filter`.\n- `NodeAffinity`: Implements\n  [node selectors](\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#nodeselector)\n  and [node affinity](\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#node-affinity).\n  Extension points: `filter`, `score`.\n- `PodTopologySpread`: Implements\n  [Pod topology spread](\/docs\/concepts\/scheduling-eviction\/topology-spread-constraints\/).\n  Extension points: `preFilter`, `filter`, `preScore`, `score`.\n- `NodeUnschedulable`: Filters out nodes that have `.spec.unschedulable` set to\n  true.\n  Extension points: 
`filter`.\n- `NodeResourcesFit`: Checks if the node has all the resources that the Pod is\n  requesting. The score can use one of three strategies: `LeastAllocated`\n  (default), `MostAllocated` and `RequestedToCapacityRatio`.\n  Extension points: `preFilter`, `filter`, `score`.\n- `NodeResourcesBalancedAllocation`: Favors nodes that would obtain a more\n  balanced resource usage if the Pod is scheduled there.\n  Extension points: `score`.\n- `VolumeBinding`: Checks if the node has or if it can bind the requested\n  volumes.\n  Extension points: `preFilter`, `filter`, `reserve`, `preBind`, `score`.\n  \n  `score` extension point is enabled when `VolumeCapacityPriority` feature is\n  enabled. It prioritizes the smallest PVs that can fit the requested volume\n  size.\n  \n- `VolumeRestrictions`: Checks that volumes mounted in the node satisfy\n  restrictions that are specific to the volume provider.\n  Extension points: `filter`.\n- `VolumeZone`: Checks that volumes requested satisfy any zone requirements they\n  might have.\n  Extension points: `filter`.\n- `NodeVolumeLimits`: Checks that CSI volume limits can be satisfied for the\n  node.\n  Extension points: `filter`.\n- `EBSLimits`: Checks that AWS EBS volume limits can be satisfied for the node.\n  Extension points: `filter`.\n- `GCEPDLimits`: Checks that GCP-PD volume limits can be satisfied for the node.\n  Extension points: `filter`.\n- `AzureDiskLimits`: Checks that Azure disk volume limits can be satisfied for\n  the node.\n  Extension points: `filter`.\n- `InterPodAffinity`: Implements\n  [inter-Pod affinity and anti-affinity](\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#inter-pod-affinity-and-anti-affinity).\n  Extension points: `preFilter`, `filter`, `preScore`, `score`.\n- `PrioritySort`: Provides the default priority based sorting.\n  Extension points: `queueSort`.\n- `DefaultBinder`: Provides the default binding mechanism.\n  Extension points: `bind`.\n- `DefaultPreemption`: Provides the default 
preemption mechanism.\n  Extension points: `postFilter`.\n\nYou can also enable the following plugins, through the component config APIs,\nthat are not enabled by default:\n\n- `CinderLimits`: Checks that [OpenStack Cinder](https:\/\/docs.openstack.org\/cinder\/)\n  volume limits can be satisfied for the node.\n  Extension points: `filter`.\n\n### Multiple profiles\n\nYou can configure `kube-scheduler` to run more than one profile.\nEach profile has an associated scheduler name and can have a different set of\nplugins configured in its [extension points](#extension-points).\n\nWith the following sample configuration, the scheduler will run with two\nprofiles: one with the default plugins and one with all scoring plugins\ndisabled.\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: default-scheduler\n  - schedulerName: no-scoring-scheduler\n    plugins:\n      preScore:\n        disabled:\n        - name: '*'\n      score:\n        disabled:\n        - name: '*'\n```\n\nPods that want to be scheduled according to a specific profile can include\nthe corresponding scheduler name in their `.spec.schedulerName`.\n\nBy default, one profile with the scheduler name `default-scheduler` is created.\nThis profile includes the default plugins described above. When declaring more\nthan one profile, a unique scheduler name for each of them is required.\n\nIf a Pod doesn't specify a scheduler name, kube-apiserver will set it to\n`default-scheduler`. 
Therefore, a profile with this scheduler name should exist\nto get those pods scheduled.\n\n\nPod's scheduling events have `.spec.schedulerName` as their `reportingController`.\nEvents for leader election use the scheduler name of the first profile in the list.\n\nFor more information, please refer to the `reportingController` section under\n[Event API Reference](\/docs\/reference\/kubernetes-api\/cluster-resources\/event-v1\/).\n\n\n\nAll profiles must use the same plugin in the `queueSort` extension point and have\nthe same configuration parameters (if applicable). This is because the scheduler\nonly has one pending pods queue.\n\n\n### Plugins that apply to multiple extension points {#multipoint}\n\nStarting from `kubescheduler.config.k8s.io\/v1beta3`, there is an additional field in the\nprofile config, `multiPoint`, which allows for easily enabling or disabling a plugin\nacross several extension points. The intent of `multiPoint` config is to simplify the\nconfiguration needed for users and administrators when using custom profiles.\n\nConsider a plugin, `MyPlugin`, which implements the `preScore`, `score`, `preFilter`,\nand `filter` extension points. 
To enable `MyPlugin` for all its available extension\npoints, the profile config looks like:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: multipoint-scheduler\n    plugins:\n      multiPoint:\n        enabled:\n        - name: MyPlugin\n```\n\nThis would equate to manually enabling `MyPlugin` for all of its extension\npoints, like so:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: non-multipoint-scheduler\n    plugins:\n      preScore:\n        enabled:\n        - name: MyPlugin\n      score:\n        enabled:\n        - name: MyPlugin\n      preFilter:\n        enabled:\n        - name: MyPlugin\n      filter:\n        enabled:\n        - name: MyPlugin\n```\n\nOne benefit of using `multiPoint` here is that if `MyPlugin` implements another\nextension point in the future, the `multiPoint` config will automatically enable it\nfor the new extension.\n\nSpecific extension points can be excluded from `MultiPoint` expansion using\nthe `disabled` field for that extension point. This works with disabling default\nplugins, non-default plugins, or with the wildcard (`'*'`) to disable all plugins.\nAn example of this, disabling `Score` and `PreScore`, would be:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: non-multipoint-scheduler\n    plugins:\n      multiPoint:\n        enabled:\n        - name: 'MyPlugin'\n      preScore:\n        disabled:\n        - name: '*'\n      score:\n        disabled:\n        - name: '*'\n```\n\nStarting from `kubescheduler.config.k8s.io\/v1beta3`, all [default plugins](#scheduling-plugins)\nare enabled internally through `MultiPoint`.\nHowever, individual extension points are still available to allow flexible\nreconfiguration of the default values (such as ordering and Score weights). 
For\nexample, consider two Score plugins `DefaultScore1` and `DefaultScore2`, each with\na weight of `1`. They can be reordered with different weights like so:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: multipoint-scheduler\n    plugins:\n      score:\n        enabled:\n        - name: 'DefaultScore2'\n          weight: 5\n```\n\nIn this example, it's unnecessary to specify the plugins in `MultiPoint` explicitly\nbecause they are default plugins. And the only plugin specified in `Score` is `DefaultScore2`.\nThis is because plugins set through specific extension points will always take precedence\nover `MultiPoint` plugins. So, this snippet essentially re-orders the two plugins\nwithout needing to specify both of them.\n\nThe general hierarchy for precedence when configuring `MultiPoint` plugins is as follows:\n1. Specific extension points run first, and their settings override whatever is set elsewhere\n2. Plugins manually configured through `MultiPoint` and their settings\n3. 
Default plugins and their default settings\n\nTo demonstrate the above hierarchy, the following example is based on these plugins:\n|Plugin|Extension Points|\n|---|---|\n|`DefaultQueueSort`|`QueueSort`|\n|`CustomQueueSort`|`QueueSort`|\n|`DefaultPlugin1`|`Score`, `Filter`|\n|`DefaultPlugin2`|`Score`|\n|`CustomPlugin1`|`Score`, `Filter`|\n|`CustomPlugin2`|`Score`, `Filter`|\n\nA valid sample configuration for these plugins would be:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: multipoint-scheduler\n    plugins:\n      multiPoint:\n        enabled:\n        - name: 'CustomQueueSort'\n        - name: 'CustomPlugin1'\n          weight: 3\n        - name: 'CustomPlugin2'\n        disabled:\n        - name: 'DefaultQueueSort'\n      filter:\n        disabled:\n        - name: 'DefaultPlugin1'\n      score:\n        enabled:\n        - name: 'DefaultPlugin2'\n```\n\nNote that there is no error for re-declaring a `MultiPoint` plugin in a specific\nextension point. 
The re-declaration is ignored (and logged), as specific extension points\ntake precedence.\n\nBesides keeping most of the config in one spot, this sample does a few things:\n* Enables the custom `queueSort` plugin and disables the default one\n* Enables `CustomPlugin1` and `CustomPlugin2`, which will run first for all of their extension points\n* Disables `DefaultPlugin1`, but only for `filter`\n* Reorders `DefaultPlugin2` to run first in `score` (even before the custom plugins)\n\nIn versions of the config before `v1beta3`, without `multiPoint`, the above snippet would equate to this:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: multipoint-scheduler\n    plugins:\n\n      # Disable the default QueueSort plugin\n      queueSort:\n        enabled:\n        - name: 'CustomQueueSort'\n        disabled:\n        - name: 'DefaultQueueSort'\n\n      # Enable custom Filter plugins\n      filter:\n        enabled:\n        - name: 'CustomPlugin1'\n        - name: 'CustomPlugin2'\n        - name: 'DefaultPlugin2'\n        disabled:\n        - name: 'DefaultPlugin1'\n\n      # Enable and reorder custom score plugins\n      score:\n        enabled:\n        - name: 'DefaultPlugin2'\n          weight: 1\n        - name: 'DefaultPlugin1'\n          weight: 3\n```\n\nWhile this is a complicated example, it demonstrates the flexibility of `MultiPoint` config\nas well as its seamless integration with the existing methods for configuring extension points.\n\n## Scheduler configuration migrations\n\n\n\n* With the v1beta2 configuration version, you can use a new score extension for the\n  `NodeResourcesFit` plugin.\n  The new extension combines the functionalities of the `NodeResourcesLeastAllocated`,\n  `NodeResourcesMostAllocated` and `RequestedToCapacityRatio` plugins.\n  For example, if you previously used the `NodeResourcesMostAllocated` plugin, you\n  would instead use `NodeResourcesFit` (enabled 
by default) and add a `pluginConfig`\n  with a `scoringStrategy` that is similar to:\n  ```yaml\n  apiVersion: kubescheduler.config.k8s.io\/v1beta2\n  kind: KubeSchedulerConfiguration\n  profiles:\n  - pluginConfig:\n    - args:\n        scoringStrategy:\n          resources:\n          - name: cpu\n            weight: 1\n          type: MostAllocated\n      name: NodeResourcesFit\n  ```\n\n* The scheduler plugin `NodeLabel` is deprecated; instead, use the [`NodeAffinity`](\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#affinity-and-anti-affinity) plugin (enabled by default) to achieve similar behavior.\n\n* The scheduler plugin `ServiceAffinity` is deprecated; instead, use the [`InterPodAffinity`](\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#inter-pod-affinity-and-anti-affinity) plugin (enabled by default) to achieve similar behavior.\n\n* The scheduler plugin `NodePreferAvoidPods` is deprecated; instead, use [node taints](\/docs\/concepts\/scheduling-eviction\/taint-and-toleration\/) to achieve similar behavior.\n\n* A plugin enabled in a v1beta2 configuration file takes precedence over the default configuration for that plugin.\n\n* Invalid `host` or `port` configured for scheduler healthz and metrics bind address will cause validation failure.\n\n\n\n* Three plugins' weights are increased by default:\n  * `InterPodAffinity` from 1 to 2\n  * `NodeAffinity` from 1 to 2\n  * `TaintToleration` from 1 to 3\n\n\n\n* The scheduler plugin `SelectorSpread` is removed; instead, use the `PodTopologySpread` plugin (enabled by default)\nto achieve similar behavior.\n\n\n\n## What's next\n\n* Read the [kube-scheduler reference](\/docs\/reference\/command-line-tools-reference\/kube-scheduler\/)\n* Learn about [scheduling](\/docs\/concepts\/scheduling-eviction\/kube-scheduler\/)\n* Read the [kube-scheduler configuration (v1)](\/docs\/reference\/config-api\/kube-scheduler-config.v1\/) reference","site":"kubernetes reference","answers_cleaned":"    title  Scheduler 
Configuration content type  concept weight  20        You can customize the behavior of the  kube scheduler  by writing a configuration file and passing its path as a command line argument        overview           body      A scheduling Profile allows you to configure the different stages of scheduling in the   Each stage is exposed in an extension point  Plugins provide scheduling behaviors by implementing one or more of these extension points   You can specify scheduling profiles by running  kube scheduler   config  filename    using the KubeSchedulerConfiguration  v1   docs reference config api kube scheduler config v1   struct   A minimal configuration looks as follows      yaml apiVersion  kubescheduler config k8s io v1 kind  KubeSchedulerConfiguration clientConnection    kubeconfig   etc srv kubernetes kube scheduler kubeconfig       KubeSchedulerConfiguration v1beta3 is deprecated in v1 26 and is removed in v1 29  Please migrate KubeSchedulerConfiguration to  v1   docs reference config api kube scheduler config v1         Profiles  A scheduling Profile allows you to configure the different stages of scheduling in the   Each stage is exposed in an  extension point   extension points    Plugins   scheduling plugins  provide scheduling behaviors by implementing one or more of these extension points   You can configure a single instance of  kube scheduler  to run  multiple profiles   multiple profiles        Extension points  Scheduling happens in a series of stages that are exposed through the following extension points   1   queueSort   These plugins provide an ordering function that is used to    sort pending Pods in the scheduling queue  Exactly one queue sort plugin    may be enabled at a time  1   preFilter   These plugins are used to pre process or check information    about a Pod or the cluster before filtering  They can mark a pod as    unschedulable  1   filter   These plugins are the equivalent of Predicates in a scheduling    Policy and are used to 
filter out nodes that can not run the Pod  Filters    are called in the configured order  A pod is marked as unschedulable if no    nodes pass all the filters  1   postFilter   These plugins are called in their configured order when no    feasible nodes were found for the pod  If any  postFilter  plugin marks the    Pod  schedulable   the remaining plugins are not called  1   preScore   This is an informational extension point that can be used    for doing pre scoring work  1   score   These plugins provide a score to each node that has passed the    filtering phase  The scheduler will then select the node with the highest    weighted scores sum  1   reserve   This is an informational extension point that notifies plugins    when resources have been reserved for a given Pod  Plugins also implement an     Unreserve  call that gets called in the case of failure during or after     Reserve   1   permit   These plugins can prevent or delay the binding of a Pod  1   preBind   These plugins perform any work required before a Pod is bound  1   bind   The plugins bind a Pod to a Node   bind  plugins are called in order    and once one has done the binding  the remaining plugins are skipped  At    least one bind plugin is required  1   postBind   This is an informational extension point that is called after    a Pod has been bound  1   multiPoint   This is a config only field that allows plugins to be enabled    or disabled for all of their applicable extension points simultaneously   For each extension point  you could disable specific  default plugins   scheduling plugins  or enable your own  For example      yaml apiVersion  kubescheduler config k8s io v1 kind  KubeSchedulerConfiguration profiles      plugins        score          disabled            name  PodTopologySpread         enabled            name  MyCustomPluginA           weight  2           name  MyCustomPluginB           weight  1      You can use     as name in the disabled array to disable all default 
plugins for that extension point  This can also be used to rearrange plugins order  if desired       Scheduling plugins  The following plugins  enabled by default  implement one or more of these extension points      ImageLocality   Favors nodes that already have the container images that the   Pod runs    Extension points   score      TaintToleration   Implements    taints and tolerations   docs concepts scheduling eviction taint and toleration      Implements extension points   filter    preScore    score      NodeName   Checks if a Pod spec node name matches the current node    Extension points   filter      NodePorts   Checks if a node has free ports for the requested Pod ports    Extension points   preFilter    filter      NodeAffinity   Implements    node selectors   docs concepts scheduling eviction assign pod node  nodeselector    and  node affinity   docs concepts scheduling eviction assign pod node  node affinity     Extension points   filter    score      PodTopologySpread   Implements    Pod topology spread   docs concepts scheduling eviction topology spread constraints      Extension points   preFilter    filter    preScore    score      NodeUnschedulable   Filters out nodes that have   spec unschedulable  set to   true    Extension points   filter      NodeResourcesFit   Checks if the node has all the resources that the Pod is   requesting  The score can use one of three strategies   LeastAllocated     default    MostAllocated  and  RequestedToCapacityRatio     Extension points   preFilter    filter    score      NodeResourcesBalancedAllocation   Favors nodes that would obtain a more   balanced resource usage if the Pod is scheduled there    Extension points   score      VolumeBinding   Checks if the node has or if it can bind the requested       Extension points   preFilter    filter    reserve    preBind    score         score  extension point is enabled when  VolumeCapacityPriority  feature is   enabled  It prioritizes the smallest PVs that can fit 
the requested volume   size        VolumeRestrictions   Checks that volumes mounted in the node satisfy   restrictions that are specific to the volume provider    Extension points   filter      VolumeZone   Checks that volumes requested satisfy any zone requirements they   might have    Extension points   filter      NodeVolumeLimits   Checks that CSI volume limits can be satisfied for the   node    Extension points   filter      EBSLimits   Checks that AWS EBS volume limits can be satisfied for the node    Extension points   filter      GCEPDLimits   Checks that GCP PD volume limits can be satisfied for the node    Extension points   filter      AzureDiskLimits   Checks that Azure disk volume limits can be satisfied for   the node    Extension points   filter      InterPodAffinity   Implements    inter Pod affinity and anti affinity   docs concepts scheduling eviction assign pod node  inter pod affinity and anti affinity     Extension points   preFilter    filter    preScore    score      PrioritySort   Provides the default priority based sorting    Extension points   queueSort      DefaultBinder   Provides the default binding mechanism    Extension points   bind      DefaultPreemption   Provides the default preemption mechanism    Extension points   postFilter    You can also enable the following plugins  through the component config APIs  that are not enabled by default      CinderLimits   Checks that  OpenStack Cinder  https   docs openstack org cinder     volume limits can be satisfied for the node    Extension points   filter        Multiple profiles  You can configure  kube scheduler  to run more than one profile  Each profile has an associated scheduler name and can have a different set of plugins configured in its  extension points   extension points    With the following sample configuration  the scheduler will run with two profiles  one with the default plugins and one with all scoring plugins disabled      yaml apiVersion  kubescheduler config k8s io v1 
kind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: default-scheduler\n  - schedulerName: no-scoring-scheduler\n    plugins:\n      preScore:\n        disabled:\n        - name: '*'\n      score:\n        disabled:\n        - name: '*'\n```\n\nPods that want to be scheduled according to a specific profile can include\nthe corresponding scheduler name in its `.spec.schedulerName`.\n\nBy default, one profile with the scheduler name `default-scheduler` is created.\nThis profile includes the default plugins described above. When declaring more\nthan one profile, a unique scheduler name for each of them is required.\n\nIf a Pod doesn't specify a scheduler name, kube-apiserver will set it to\n`default-scheduler`. Therefore, a profile with this scheduler name should exist\nto get those pods scheduled.\n\nPod's scheduling events have `.spec.schedulerName` as their `reportingController`.\nEvents for leader election use the scheduler name of the first profile in the list.\nFor more information, please refer to the `reportingController` section under\n[Event API Reference](\/docs\/reference\/kubernetes-api\/cluster-resources\/event-v1\/).\n\nAll profiles must use the same plugin in the `queueSort` extension point and have\nthe same configuration parameters (if applicable). This is because the scheduler\nonly has one pending pods queue.\n\n### Plugins that apply to multiple extension points {#multipoint}\n\nStarting from `kubescheduler.config.k8s.io\/v1beta3`, there is an additional field in the\nprofile config, `multiPoint`, which allows for easily enabling or disabling a plugin\nacross several extension points. The intent of `multiPoint` config is to simplify the\nconfiguration needed for users and administrators when using custom profiles.\n\nConsider a plugin, `MyPlugin`, which implements the `preScore`, `score`, `preFilter`,\nand `filter` extension points. To enable `MyPlugin` for all its available extension\npoints, the profile config looks like:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: multipoint-scheduler\n    plugins:\n      multiPoint:\n        enabled:\n        - name: MyPlugin\n```\n\nThis would equate to manually enabling `MyPlugin` for all of its extension points, like so:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: non-multipoint-scheduler\n    plugins:\n      preScore:\n        enabled:\n        - name: MyPlugin\n      score:\n        enabled:\n        - name: MyPlugin\n      preFilter:\n        enabled:\n        - name: MyPlugin\n      filter:\n        enabled:\n        - name: MyPlugin\n```\n\nOne benefit of using `multiPoint` here is that if `MyPlugin` implements another extension\npoint in the future, the `multiPoint` config will automatically enable it for the new extension.\n\nSpecific extension points can be excluded from `MultiPoint` expansion using the `disabled`\nfield for that extension point. This works with disabling default plugins, non-default\nplugins, or with the wildcard (`'*'`) to disable all plugins. An example of this, disabling\n`Score` and `PreScore`, would be:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: non-multipoint-scheduler\n    plugins:\n      multiPoint:\n        enabled:\n        - name: 'MyPlugin'\n      preScore:\n        disabled:\n        - name: '*'\n      score:\n        disabled:\n        - name: '*'\n```\n\nStarting from `kubescheduler.config.k8s.io\/v1beta3`, all [default plugins](#scheduling-plugins)\nare enabled internally through `MultiPoint`. However, individual extension points are still\navailable to allow flexible reconfiguration of the default values (such as ordering and Score\nweights). For example, consider two Score plugins, `DefaultScore1` and `DefaultScore2`, each\nwith a weight of `1`. They can be reordered with different weights like so:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: multipoint-scheduler\n    plugins:\n      score:\n        enabled:\n        - name: 'DefaultScore2'\n          weight: 5\n```\n\nIn this example, it's unnecessary to specify the plugins in `MultiPoint` explicitly because\nthey are default plugins. And the only plugin specified in `Score` is `DefaultScore2`. This is\nbecause plugins set through specific extension points will always take precedence over\n`MultiPoint` plugins. So, this snippet essentially re-orders the two plugins without needing\nto specify both of them.\n\nThe general hierarchy for precedence when configuring `MultiPoint` plugins is as follows:\n\n1. Specific extension points run first, and their settings override whatever is set elsewhere\n2. Plugins manually configured through `MultiPoint` and their settings\n3. Default plugins and their default settings\n\nTo demonstrate the above hierarchy, the following example is based on these plugins:\n\n| Plugin             | Extension Points  |\n|--------------------|-------------------|\n| `DefaultQueueSort` | `QueueSort`       |\n| `CustomQueueSort`  | `QueueSort`       |\n| `DefaultPlugin1`   | `Score`, `Filter` |\n| `DefaultPlugin2`   | `Score`           |\n| `CustomPlugin1`    | `Score`, `Filter` |\n| `CustomPlugin2`    | `Score`, `Filter` |\n\nA valid sample configuration for these plugins would be:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: multipoint-scheduler\n    plugins:\n      multiPoint:\n        enabled:\n        - name: 'CustomQueueSort'\n        - name: 'CustomPlugin1'\n          weight: 3\n        - name: 'CustomPlugin2'\n        disabled:\n        - name: 'DefaultQueueSort'\n      filter:\n        disabled:\n        - name: 'DefaultPlugin1'\n      score:\n        enabled:\n        - name: 'DefaultPlugin2'\n```\n\nNote that there is no error for re-declaring a `MultiPoint` plugin in a specific extension\npoint. The re-declaration is ignored (and logged), as specific extension points take precedence.\n\nBesides keeping most of the config in one spot, this sample does a few things:\n\n* Enables the custom `queueSort` plugin and disables the default one\n* Enables `CustomPlugin1` and `CustomPlugin2`, which will run first for all of their extension points\n* Disables `DefaultPlugin1`, but only for `filter`\n* Reorders `DefaultPlugin2` to run first in `score` (even before the custom plugins)\n\nIn versions of the config before `v1beta3`, without `multiPoint`, the above snippet would\nequate to this:\n\n```yaml\napiVersion: kubescheduler.config.k8s.io\/v1beta2\nkind: KubeSchedulerConfiguration\nprofiles:\n  - schedulerName: multipoint-scheduler\n    plugins:\n\n      # Disable the default QueueSort plugin\n      queueSort:\n        enabled:\n        - name: 'CustomQueueSort'\n        disabled:\n        - name: 'DefaultQueueSort'\n\n      # Enable custom Filter plugins\n      filter:\n        enabled:\n        - name: 'CustomPlugin1'\n        - name: 'CustomPlugin2'\n        - name: 'DefaultPlugin2'\n        disabled:\n        - name: 'DefaultPlugin1'\n\n      # Enable and reorder custom score plugins\n      score:\n        enabled:\n        - name: 'DefaultPlugin2'\n          weight: 1\n        - name: 'DefaultPlugin1'\n          weight: 3\n```\n\nWhile this is a complicated example, it demonstrates the flexibility of `MultiPoint` config\nas well as its seamless integration with the existing methods for configuring extension points.\n\n## Scheduler configuration migrations\n\n* With the v1beta2 configuration version, you can use a new score extension for the\n  `NodeResourcesFit` plugin. The new extension combines the functionalities of the\n  `NodeResourcesLeastAllocated`, `NodeResourcesMostAllocated` and `RequestedToCapacityRatio`\n  plugins. For example, if you previously used the `NodeResourcesMostAllocated` plugin, you\n  would instead use `NodeResourcesFit` (enabled by default) and add a `pluginConfig` with a\n  `scoreStrategy` that is similar to:\n\n  ```yaml\n  apiVersion: kubescheduler.config.k8s.io\/v1beta2\n  kind: KubeSchedulerConfiguration\n  profiles:\n  - pluginConfig:\n    - args:\n        scoringStrategy:\n          resources:\n          - name: cpu\n            weight: 1\n          type: MostAllocated\n      name: NodeResourcesFit\n  ```\n\n* The scheduler plugin `NodeLabel` is deprecated; instead, use the\n  [`NodeAffinity`](\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#affinity-and-anti-affinity)\n  plugin (enabled by default) to achieve similar behavior.\n\n* The scheduler plugin `ServiceAffinity` is deprecated; instead, use the\n  [`InterPodAffinity`](\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#inter-pod-affinity-and-anti-affinity)\n  plugin (enabled by default) to achieve similar behavior.\n\n* The scheduler plugin `NodePreferAvoidPods` is deprecated; instead, use\n  [node taints](\/docs\/concepts\/scheduling-eviction\/taint-and-toleration\/) to achieve similar behavior.\n\n* A plugin enabled in a v1beta2 configuration file takes precedence over the default\n  configuration for that plugin.\n\n* Invalid `host` or `port` configured for scheduler healthz and metrics bind address will\n  cause validation failure.\n\n* Three plugins' weight are increased by default:\n  * `InterPodAffinity` from 1 to 2\n  * `NodeAffinity` from 1 to 2\n  * `TaintToleration` from 1 to 3\n\n* The scheduler plugin `SelectorSpread` is removed; instead, use the `PodTopologySpread`\n  plugin (enabled by default) to achieve similar behavior.\n\n## What's next\n\n* Read the [kube-scheduler reference](\/docs\/reference\/command-line-tools-reference\/kube-scheduler\/)\n* Learn about [scheduling](\/docs\/concepts\/scheduling-eviction\/kube-scheduler\/)\n* Read the [kube-scheduler configuration (v1)](\/docs\/reference\/config-api\/kube-scheduler-config.v1\/) reference"}
{"questions":"kubernetes reference contenttype reference title Virtual IPs and Service Proxies glossarytooltip termid cluster text cluster runs a overview weight 50 Every glossarytooltip termid node text node in a Kubernetes","answers":"---\ntitle: Virtual IPs and Service Proxies\ncontent_type: reference\nweight: 50\n---\n\n<!-- overview -->\nEvery node in a Kubernetes\ncluster runs a\n[kube-proxy](\/docs\/reference\/command-line-tools-reference\/kube-proxy\/)\n(unless you have deployed your own alternative component in place of `kube-proxy`).\n\nThe `kube-proxy` component is responsible for implementing a _virtual IP_\nmechanism for Services\nof `type` other than\n[`ExternalName`](\/docs\/concepts\/services-networking\/service\/#externalname).\nEach instance of kube-proxy watches the Kubernetes API server for the addition and\nremoval of Service and EndpointSlice objects. For each Service, kube-proxy\ncalls appropriate APIs (depending on the kube-proxy mode) to configure\nthe node to capture traffic to the Service's `clusterIP` and `port`,\nand redirect that traffic to one of the Service's endpoints\n(usually a Pod, but possibly an arbitrary user-provided IP address). A control\nloop ensures that the rules on each node are reliably synchronized with\nthe Service and EndpointSlice state as indicated by the API server.\n\n\n\nA question that pops up every now and then is why Kubernetes relies on\nproxying to forward inbound traffic to backends. What about other\napproaches? 
For example, would it be possible to configure DNS records that\nhave multiple A values (or AAAA for IPv6), and rely on round-robin name\nresolution?\n\nThere are a few reasons for using proxying for Services:\n\n* There is a long history of DNS implementations not respecting record TTLs,\n  and caching the results of name lookups after they should have expired.\n* Some apps do DNS lookups only once and cache the results indefinitely.\n* Even if apps and libraries did proper re-resolution, the low or zero TTLs\n  on the DNS records could impose a high load on DNS that then becomes\n  difficult to manage.\n\nLater in this page you can read about how various kube-proxy implementations work.\nOverall, you should note that, when running `kube-proxy`, kernel level rules may be modified\n(for example, iptables rules might get created), which won't get cleaned up, in some\ncases until you reboot.  Thus, running kube-proxy is something that should only be done\nby an administrator who understands the consequences of having a low level, privileged\nnetwork proxying service on a computer.  Although the `kube-proxy` executable supports a\n`cleanup` function, this function is not an official feature and thus is only available\nto use as-is.\n\n<a id=\"example\"><\/a>\nSome of the details in this reference refer to an example: the backend\nService for a stateless\nimage-processing workload, running with\nthree replicas. Those replicas are\nfungible&mdash;frontends do not care which backend they use.  
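Such a backend could be exposed with a Service like the following minimal sketch (the name, labels, and ports here are hypothetical, not part of the original example):

```yaml
# Hypothetical Service fronting the three image-processing backend Pods.
apiVersion: v1
kind: Service
metadata:
  name: image-processor      # hypothetical name
spec:
  selector:
    app: image-processor     # assumed label on the backend Pods
  ports:
  - port: 80                 # port clients connect to on the virtual IP
    targetPort: 8080         # hypothetical container port on the Pods
```

Clients address the Service, and kube-proxy (in whichever mode) forwards each connection to one of the matching Pods.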
While the actual Pods that\ncompose the backend set may change, the frontend clients should not need to be aware of that,\nnor should they need to keep track of the set of backends themselves.\n\n<!-- body -->\n\n## Proxy modes\n\nThe kube-proxy starts up in different modes, which are determined by its configuration.\n\nOn Linux nodes, the available modes for kube-proxy are:\n\n[`iptables`](#proxy-mode-iptables)\n: A mode where the kube-proxy configures packet forwarding rules using iptables.\n\n[`ipvs`](#proxy-mode-ipvs)\n: A mode where the kube-proxy configures packet forwarding rules using IPVS.\n\n[`nftables`](#proxy-mode-nftables)\n: A mode where the kube-proxy configures packet forwarding rules using nftables.\n\nThere is only one mode available for kube-proxy on Windows:\n\n[`kernelspace`](#proxy-mode-kernelspace)\n: A mode where the kube-proxy configures packet forwarding rules in the Windows kernel.\n\n### `iptables` proxy mode {#proxy-mode-iptables}\n\n_This proxy mode is only available on Linux nodes._\n\nIn this mode, kube-proxy configures packet forwarding rules using the\niptables API of the kernel netfilter subsystem. For each endpoint, it\ninstalls iptables rules which, by default, select a backend Pod at\nrandom.\n\n#### Example {#packet-processing-iptables}\n\nAs an example, consider the image processing application described [earlier](#example)\nin the page.\nWhen the backend Service is created, the Kubernetes control plane assigns a virtual\nIP address, for example 10.0.0.1.  
For this example, assume that the\nService port is 1234.\nAll of the kube-proxy instances in the cluster observe the creation of the new\nService.\n\nWhen kube-proxy on a node sees a new Service, it installs a series of iptables rules\nwhich redirect from the virtual IP address to more iptables rules, defined per Service.\nThe per-Service rules link to further rules for each backend endpoint, and the per-\nendpoint rules redirect traffic (using destination NAT) to the backends.\n\nWhen a client connects to the Service's virtual IP address, the iptables rule kicks in.\nA backend is chosen (either based on session affinity or randomly) and packets are\nredirected to the backend without rewriting the client IP address.\n\nThis same basic flow executes when traffic comes in through a node-port or\nthrough a load-balancer, though in those cases the client IP address does get altered.\n\n#### Optimizing iptables mode performance\n\nIn iptables mode, kube-proxy creates a few iptables rules for every\nService, and a few iptables rules for each endpoint IP address. In\nclusters with tens of thousands of Pods and Services, this means tens\nof thousands of iptables rules, and kube-proxy may take a long time to update the rules\nin the kernel when Services (or their EndpointSlices) change. You can adjust the syncing\nbehavior of kube-proxy via options in the [`iptables` section](\/docs\/reference\/config-api\/kube-proxy-config.v1alpha1\/#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPTablesConfiguration)\nof the\nkube-proxy [configuration file](\/docs\/reference\/config-api\/kube-proxy-config.v1alpha1\/)\n(which you specify via `kube-proxy --config <path>`):\n\n```yaml\n...\niptables:\n  minSyncPeriod: 1s\n  syncPeriod: 30s\n...\n```\n\n##### `minSyncPeriod`\n\nThe `minSyncPeriod` parameter sets the minimum duration between\nattempts to resynchronize iptables rules with the kernel. 
If it is\n`0s`, then kube-proxy will always immediately synchronize the rules\nevery time any Service or Endpoint changes. This works fine in very\nsmall clusters, but it results in a lot of redundant work when lots of\nthings change in a small time period. For example, if you have a\nService backed by a Deployment\nwith 100 pods, and you delete the\nDeployment, then with `minSyncPeriod: 0s`, kube-proxy would end up\nremoving the Service's endpoints from the iptables rules one by one,\nfor a total of 100 updates. With a larger `minSyncPeriod`, multiple\nPod deletion events would get aggregated\ntogether, so kube-proxy might\ninstead end up making, say, 5 updates, each removing 20 endpoints,\nwhich will be much more efficient in terms of CPU, and result in the\nfull set of changes being synchronized faster.\n\nThe larger the value of `minSyncPeriod`, the more work that can be\naggregated, but the downside is that each individual change may end up\nwaiting up to the full `minSyncPeriod` before being processed, meaning\nthat the iptables rules spend more time being out-of-sync with the\ncurrent API server state.\n\nThe default value of `1s` should work well in most clusters, but in very\nlarge clusters it may be necessary to set it to a larger value.\nIn particular, if kube-proxy's `sync_proxy_rules_duration_seconds` metric\nindicates an average time much larger than 1 second, then bumping up\n`minSyncPeriod` may make updates more efficient.\n\n##### Updating legacy `minSyncPeriod` configuration {#minimize-iptables-restore}\n\nOlder versions of kube-proxy updated all the rules for all Services on\nevery sync; this led to performance issues (update lag) in large\nclusters, and the recommended solution was to set a larger\n`minSyncPeriod`. 
Since Kubernetes v1.28, the iptables mode of\nkube-proxy uses a more minimal approach, only making updates where\nServices or EndpointSlices have actually changed.\n\nIf you were previously overriding `minSyncPeriod`, you should try\nremoving that override and letting kube-proxy use the default value\n(`1s`) or at least a smaller value than you were using before upgrading.\n\nIf you are not running the version of kube-proxy described here, check\nthe behavior and associated advice for the version that you are actually running.\n\n##### `syncPeriod`\n\nThe `syncPeriod` parameter controls a handful of synchronization\noperations that are not directly related to changes in individual\nServices and EndpointSlices. In particular, it controls how quickly\nkube-proxy notices if an external component has interfered with\nkube-proxy's iptables rules. In large clusters, kube-proxy also only\nperforms certain cleanup operations once every `syncPeriod` to avoid\nunnecessary work.\n\nFor the most part, increasing `syncPeriod` is not expected to have much\nimpact on performance, but in the past, it was sometimes useful to set\nit to a very large value (e.g., `1h`). This is no longer recommended,\nand is likely to hurt functionality more than it improves performance.\n\n### IPVS proxy mode {#proxy-mode-ipvs}\n\n_This proxy mode is only available on Linux nodes._\n\nIn `ipvs` mode, kube-proxy uses the kernel IPVS and iptables APIs to\ncreate rules to redirect traffic from Service IPs to endpoint IPs.\n\nThe IPVS proxy mode is based on a netfilter hook function that is similar to\nthe iptables mode, but uses a hash table as the underlying data structure and works\nin the kernel space.\nThat means kube-proxy in IPVS mode redirects traffic with lower latency than\nkube-proxy in iptables mode, with much better performance when synchronizing\nproxy rules. 
Compared to the iptables proxy mode, IPVS mode also supports a\nhigher throughput of network traffic.\n\nIPVS provides more options for balancing traffic to backend Pods;\nthese are:\n\n* `rr` (Round Robin): Traffic is equally distributed amongst the backing servers.\n\n* `wrr` (Weighted Round Robin): Traffic is routed to the backing servers based on\n  the weights of the servers. Servers with higher weights receive new connections\n  and get more requests than servers with lower weights.\n\n* `lc` (Least Connection): More traffic is assigned to servers with fewer active connections.\n\n* `wlc` (Weighted Least Connection): More traffic is routed to servers with fewer connections\n  relative to their weights, that is, connections divided by weight.\n\n* `lblc` (Locality based Least Connection): Traffic for the same IP address is sent to the\n  same backing server if the server is not overloaded and available; otherwise the traffic\n  is sent to servers with fewer connections, and that assignment is kept for the future.\n\n* `lblcr` (Locality Based Least Connection with Replication): Traffic for the same IP\n  address is sent to the server with the least connections. If all the backing servers are\n  overloaded, it picks up one with fewer connections and adds it to the target set.\n  If the target set has not changed for the specified time, the most loaded server\n  is removed from the set, in order to avoid a high degree of replication.\n\n* `sh` (Source Hashing): Traffic is sent to a backing server by looking up a statically\n  assigned hash table based on the source IP addresses.\n\n* `dh` (Destination Hashing): Traffic is sent to a backing server by looking up a\n  statically assigned hash table based on their destination addresses.\n\n* `sed` (Shortest Expected Delay): Traffic forwarded to a backing server with the shortest\n  expected delay. 
The expected delay is `(C + 1) \/ U` if sent to a server, where `C` is\n  the number of connections on the server and `U` is the fixed service rate (weight) of\n  the server.\n\n* `nq` (Never Queue): Traffic is sent to an idle server if there is one, instead of\n  waiting for a fast one; if all servers are busy, the algorithm falls back to the `sed`\n  behavior.\n\n* `mh` (Maglev Hashing): Assigns incoming jobs based on\n  [Google's Maglev hashing algorithm](https:\/\/static.googleusercontent.com\/media\/research.google.com\/en\/\/pubs\/archive\/44824.pdf).\n  This scheduler has two flags: `mh-fallback`, which enables fallback to a different\n  server if the selected server is unavailable, and `mh-port`, which adds the source port number to\n  the hash computation. When using `mh`, kube-proxy always sets the `mh-port` flag and does not\n  enable the `mh-fallback` flag.\n  With kube-proxy in IPVS mode, `mh` works like source hashing (`sh`), but with ports.\n\nThese scheduling algorithms are configured through the\n[`ipvs.scheduler`](\/docs\/reference\/config-api\/kube-proxy-config.v1alpha1\/#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPVSConfiguration)\nfield in the kube-proxy configuration.\n\n\nTo run kube-proxy in IPVS mode, you must make IPVS available on\nthe node before starting kube-proxy.\n\nWhen kube-proxy starts in IPVS proxy mode, it verifies whether IPVS\nkernel modules are available. If the IPVS kernel modules are not detected, then kube-proxy\nexits with an error.\n\n\n\n\n### `nftables` proxy mode {#proxy-mode-nftables}\n\n\n\n_This proxy mode is only available on Linux nodes, and requires kernel\n5.13 or later._\n\nIn this mode, kube-proxy configures packet forwarding rules using the\nnftables API of the kernel netfilter subsystem. 
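Selecting a proxy mode is done in the kube-proxy configuration file, passed via `kube-proxy --config <path>` as shown earlier; a minimal sketch (assuming the `kubeproxy.config.k8s.io/v1alpha1` configuration API) looks like:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Select the nftables backend instead of the default iptables mode.
mode: "nftables"
```

The same `mode` field accepts `iptables`, `ipvs`, or `nftables` on Linux.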
For each endpoint, it\ninstalls nftables rules which, by default, select a backend Pod at\nrandom.\n\nThe nftables API is the successor to the iptables API and is designed\nto provide better performance and scalability than iptables. The\n`nftables` proxy mode is able to process changes to service endpoints\nfaster and more efficiently than the `iptables` mode, and is also able\nto more efficiently process packets in the kernel (though this only\nbecomes noticeable in clusters with tens of thousands of services).\n\nThe `nftables` mode is\nstill relatively new, and may not be compatible with all network\nplugins; consult the documentation for your network plugin.\n\n#### Migrating from `iptables` mode to `nftables`\n\nUsers who want to switch from the default `iptables` mode to the\n`nftables` mode should be aware that some features work slightly\ndifferently in the `nftables` mode:\n\n - **NodePort interfaces**: In `iptables` mode, by default,\n   [NodePort services](\/docs\/concepts\/services-networking\/service\/#type-nodeport)\n   are reachable on all local IP addresses. This is usually not what\n   users want, so the `nftables` mode defaults to\n   `--nodeport-addresses primary`, meaning NodePort services are only\n   reachable on the node's primary IPv4 and\/or IPv6 addresses. You can\n   override this by specifying an explicit value for that option:\n   e.g., `--nodeport-addresses 0.0.0.0\/0` to listen on all (local)\n   IPv4 IPs.\n\n - **NodePort services on `127.0.0.1`**: In `iptables` mode, if the\n   `--nodeport-addresses` range includes `127.0.0.1` (and the\n   `--iptables-localhost-nodeports false` option is not passed), then\n   NodePort services are reachable even on \"localhost\" (`127.0.0.1`).\n   In `nftables` mode (and `ipvs` mode), this will not work. 
If you\n   are not sure if you are depending on this functionality, you can\n   check kube-proxy's\n   `iptables_localhost_nodeports_accepted_packets_total` metric; if it\n   is non-0, that means that some client has connected to a NodePort\n   service via `127.0.0.1`.\n\n - **NodePort interaction with firewalls**: The `iptables` mode of\n   kube-proxy tries to be compatible with overly aggressive firewalls;\n   for each NodePort service, it will add rules to accept inbound\n   traffic on that port, in case that traffic would otherwise be\n   blocked by a firewall. This approach will not work with firewalls\n   based on nftables, so kube-proxy's `nftables` mode does not do\n   anything here; if you have a local firewall, you must ensure that\n   it is properly configured to allow Kubernetes traffic through\n   (e.g., by allowing inbound traffic on the entire NodePort range).\n\n - **Conntrack bug workarounds**: Linux kernels prior to 6.1 have a\n   bug that can result in long-lived TCP connections to service IPs\n   being closed with the error \"Connection reset by peer\". The\n   `iptables` mode of kube-proxy installs a workaround for this bug,\n   but this workaround was later found to cause other problems in some\n   clusters. The `nftables` mode does not install any workaround by\n   default, but you can check kube-proxy's\n   `iptables_ct_state_invalid_dropped_packets_total` metric to see if\n   your cluster is depending on the workaround, and if so, you can run\n   kube-proxy with the option `--conntrack-tcp-be-liberal` to work\n   around the problem in `nftables` mode.\n\n### `kernelspace` proxy mode {#proxy-mode-kernelspace}\n\n_This proxy mode is only available on Windows nodes._\n\nThe kube-proxy configures packet filtering rules in the Windows _Virtual Filtering Platform_ (VFP),\nan extension to Windows vSwitch. 
These rules process encapsulated packets within the node-level\nvirtual networks, and rewrite packets so that the destination IP address (and layer 2 information)\nis correct for getting the packet routed to the correct destination.\nThe Windows VFP is analogous to tools such as Linux `nftables` or `iptables`. The Windows VFP extends\nthe _Hyper-V Switch_, which was initially implemented to support virtual machine networking.\n\nWhen a Pod on a node sends traffic to a virtual IP address, and the kube-proxy selects a Pod on\na different node as the load balancing target, the `kernelspace` proxy mode rewrites that packet\nto be destined to the target backend Pod. The Windows _Host Networking Service_ (HNS) ensures that\npacket rewriting rules are configured so that the return traffic appears to come from the virtual\nIP address and not the specific backend Pod.\n\n#### Direct server return for `kernelspace` mode {#windows-direct-server-return}\n\n\n\nAs an alternative to the basic operation, a node that hosts the backend Pod for a Service can\napply the packet rewriting directly, rather than placing this burden on the node where the client\nPod is running. 
This is called _direct server return_.\n\nTo use this, you must run kube-proxy with the `--enable-dsr` command line argument **and**\nenable the `WinDSR` [feature gate](\/docs\/reference\/command-line-tools-reference\/feature-gates\/).\n\nDirect server return also optimizes the case for Pod return traffic even when both Pods\nare running on the same node.\n\n## Session affinity\n\nIn these proxy models, the traffic bound for the Service's IP:Port is\nproxied to an appropriate backend without the clients knowing anything\nabout Kubernetes or Services or Pods.\n\nIf you want to make sure that connections from a particular client\nare passed to the same Pod each time, you can select the session affinity based\non the client's IP address by setting `.spec.sessionAffinity` to `ClientIP`\nfor a Service (the default is `None`).\n\n### Session stickiness timeout\n\nYou can also set the maximum session sticky time by setting\n`.spec.sessionAffinityConfig.clientIP.timeoutSeconds` appropriately for a Service\n(the default value is 10800, which works out to be 3 hours).\n\n\nOn Windows, setting the maximum session sticky time for Services is not supported.\n\n\n## IP address assignment to Services\n\nUnlike Pod IP addresses, which actually route to a fixed destination,\nService IPs are not actually answered by a single host.  Instead, kube-proxy\nuses packet processing logic (such as Linux iptables) to define _virtual_ IP\naddresses which are transparently redirected as needed.\n\nWhen clients connect to the VIP, their traffic is automatically transported to an\nappropriate endpoint. The environment variables and DNS for Services are actually\npopulated in terms of the Service's virtual IP address (and port).\n\n### Avoiding collisions\n\nOne of the primary philosophies of Kubernetes is that you should not be\nexposed to situations that could cause your actions to fail through no fault\nof your own. 
For the design of the Service resource, this means not making\nyou choose your own IP address if that choice might collide with\nsomeone else's choice.  That is an isolation failure.\n\nIn order to allow you to choose an IP address for your Services, we must\nensure that no two Services can collide. Kubernetes does that by allocating each\nService its own IP address from within the `service-cluster-ip-range`\nCIDR range that is configured for the API server.\n\n### IP address allocation tracking\n\nTo ensure each Service receives a unique IP address, an internal allocator atomically\nupdates a global allocation map in etcd\nprior to creating each Service. The map object must exist in the registry for\nServices to get IP address assignments, otherwise creations will\nfail with a message indicating an IP address could not be allocated.\n\nIn the control plane, a background controller is responsible for creating that\nmap (needed to support migrating from older versions of Kubernetes that used\nin-memory locking). 
Kubernetes also uses controllers to check for invalid\nassignments (for example: due to administrator intervention) and for cleaning up allocated\nIP addresses that are no longer used by any Services.\n\n#### IP address allocation tracking using the Kubernetes API {#ip-address-objects}\n\n\n\nIf you enable the `MultiCIDRServiceAllocator`\n[feature gate](\/docs\/reference\/command-line-tools-reference\/feature-gates\/) and the\n[`networking.k8s.io\/v1alpha1` API group](\/docs\/tasks\/administer-cluster\/enable-disable-api\/),\nthe control plane replaces the existing etcd allocator with a revised implementation\nthat uses IPAddress and ServiceCIDR objects instead of an internal global allocation map.\nEach cluster IP address associated with a Service then references an IPAddress object.\n\nEnabling the feature gate also replaces a background controller with an alternative\nthat handles the IPAddress objects and supports migration from the old allocator model.\nKubernetes does not support migrating from IPAddress\nobjects back to the internal allocation map.\n\nOne of the main benefits of the revised allocator is that it removes the size limitations\nfor the IP address range that can be used for the cluster IP address of Services.\nWith `MultiCIDRServiceAllocator` enabled, there are no limitations for IPv4, and for IPv6\nyou can use IP address netmasks that are a \/64 or smaller (as opposed to \/108 with the\nlegacy implementation).\n\nMaking IP address allocations available via the API means that you as a cluster administrator\ncan allow users to inspect the IP addresses assigned to their Services.\nKubernetes extensions, such as the [Gateway API](\/docs\/concepts\/services-networking\/gateway\/),\ncan use the IPAddress API to extend Kubernetes' inherent networking capabilities.\n\nHere is a brief example of a user querying for IP addresses:\n\n```shell\nkubectl get services\n```\n```\nNAME         TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE\nkubernetes   
ClusterIP   2001:db8:1:2::1   <none>        443\/TCP   3d1h\n```\n```shell\nkubectl get ipaddresses\n```\n```\nNAME              PARENTREF\n2001:db8:1:2::1   services\/default\/kubernetes\n2001:db8:1:2::a   services\/kube-system\/kube-dns\n```\n\nKubernetes also allows users to dynamically define the available IP ranges for Services using\nServiceCIDR objects. During bootstrap, a default ServiceCIDR object named `kubernetes` is created\nfrom the value of the `--service-cluster-ip-range` command line argument to kube-apiserver:\n\n```shell\nkubectl get servicecidrs\n```\n```\nNAME         CIDRS         AGE\nkubernetes   10.96.0.0\/28  17m\n```\n\nUsers can create or delete new ServiceCIDR objects to manage the available IP ranges for Services:\n\n```shell\ncat <<'EOF' | kubectl apply -f -\napiVersion: networking.k8s.io\/v1beta1\nkind: ServiceCIDR\nmetadata:\n  name: newservicecidr\nspec:\n  cidrs:\n  - 10.96.0.0\/24\nEOF\n```\n```\nservicecidr.networking.k8s.io\/newservicecidr created\n```\n\n```shell\nkubectl get servicecidrs\n```\n```\nNAME             CIDRS         AGE\nkubernetes       10.96.0.0\/28  17m\nnewservicecidr   10.96.0.0\/24  7m\n```\n\n### IP address ranges for Service virtual IP addresses {#service-ip-static-sub-range}\n\n\n\nKubernetes divides the `ClusterIP` range into two bands, based on\nthe size of the configured `service-cluster-ip-range` by using the following formula\n`min(max(16, cidrSize \/ 16), 256)`. That formula paraphrases as _never less than 16 or\nmore than 256, with a graduated step function between them_.\n\nKubernetes prefers to allocate dynamic IP addresses to Services by choosing from the upper band,\nwhich means that if you want to assign a specific IP address to a `type: ClusterIP`\nService, you should manually assign an IP address from the **lower** band. 
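As an illustration, here is a minimal sketch of such a Service (the name, selector, and address are hypothetical; it assumes a `service-cluster-ip-range` of 10.96.0.0/16, whose lower band by the formula above is the first 256 addresses, 10.96.0.0–10.96.0.255):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-fixed-ip-service   # hypothetical name
spec:
  selector:
    app: example              # hypothetical selector
  clusterIP: 10.96.0.10       # manually chosen from the lower band of the range
  ports:
  - port: 80
```

If the requested `clusterIP` is outside the configured range or already allocated, the API server rejects the Service creation.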
Assigning from the lower band in this way reduces the risk of a conflict over allocation.

## Traffic policies

You can set the `.spec.internalTrafficPolicy` and `.spec.externalTrafficPolicy` fields
to control how Kubernetes routes traffic to healthy ("ready") backends.

### Internal traffic policy

You can set the `.spec.internalTrafficPolicy` field to control how traffic from
internal sources is routed. Valid values are `Cluster` and `Local`. Set the field to
`Cluster` to route internal traffic to all ready endpoints and `Local` to only route
to ready node-local endpoints. If the traffic policy is `Local` and there are no
node-local endpoints, traffic is dropped by kube-proxy.

### External traffic policy

You can set the `.spec.externalTrafficPolicy` field to control how traffic from
external sources is routed. Valid values are `Cluster` and `Local`. Set the field
to `Cluster` to route external traffic to all ready endpoints and `Local` to only
route to ready node-local endpoints. If the traffic policy is `Local` and there
are no node-local endpoints, kube-proxy does not forward any traffic for the
relevant Service.

If `Cluster` is specified, all nodes are eligible load balancing targets _as long as_
the node is not being deleted and kube-proxy is healthy. In this mode, load balancer
health checks are configured to target the service proxy's readiness port and path.
In the case of kube-proxy this evaluates to: `${NODE_IP}:10256/healthz`. kube-proxy
will return either an HTTP code 200 or 503. kube-proxy's load balancer health check
endpoint returns 200 if:

1. kube-proxy is healthy, meaning:
   - it's able to progress programming the network and isn't timing out while doing
     so (the timeout is defined to be: **2 × `iptables.syncPeriod`**); and
2. the node is not being deleted (there is no deletion timestamp set for the Node).

kube-proxy returns 503 and marks the node as not eligible while it is being
deleted, because kube-proxy supports connection draining for terminating nodes.
A couple of important things occur from the point of view of a Kubernetes-managed
load balancer when a node _is being_ / _is_ deleted.

While deleting:

* kube-proxy will start failing its readiness probe and essentially mark the
  node as not eligible for load balancer traffic. The load balancer health
  check failing causes load balancers which support connection draining to
  allow existing connections to terminate, and block new connections from
  establishing.

When deleted:

* The service controller in the Kubernetes cloud controller manager removes the
  node from the referenced set of eligible targets. Removing any instance from
  the load balancer's set of backend targets immediately terminates all
  connections. This is also the reason kube-proxy first fails the health check
  while the node is deleting.

It's important for Kubernetes vendors to note that if a vendor configures the
kube-proxy readiness probe as a liveness probe, kube-proxy will restart
continuously while a node is being deleted, until the node has been fully
deleted. kube-proxy exposes a `/livez` path which, as opposed to the `/healthz`
one, does **not** consider the Node's deleting state and only its progress
programming the network. `/livez` is therefore the recommended path for anyone
looking to define a livenessProbe for kube-proxy.

Users deploying kube-proxy can inspect both the readiness / liveness state by
evaluating the metrics: `proxy_livez_total` / `proxy_healthz_total`. Both
metrics publish two series, one labeled 200 and one labeled 503.

For `Local` Services, kube-proxy will return 200 if:

1. kube-proxy is healthy/ready, and
2. it has a local endpoint on the node in question.

Node deletion does **not** have an impact on kube-proxy's return code for load
balancer health checks. The reason for this is that deleting nodes could end up
causing an ingress outage if all endpoints were simultaneously running on those
nodes.

The Kubernetes project recommends that cloud provider integration code
configures load balancer health checks that target the service proxy's healthz
port. If you are implementing your own virtual IP solution that people can use
instead of kube-proxy, you should set up a similar health checking port with
logic that matches the kube-proxy implementation.

### Traffic to terminating endpoints

If the `ProxyTerminatingEndpoints`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
is enabled in kube-proxy and the traffic policy is `Local`, that node's
kube-proxy uses a more complicated algorithm to select endpoints for a Service.
With the feature enabled, kube-proxy checks if the node
has local endpoints and whether or not all the local endpoints are marked as terminating.
If there are local endpoints and **all** of them are terminating, then kube-proxy
will forward traffic to those terminating endpoints. Otherwise, kube-proxy will always
prefer forwarding traffic to endpoints that are not terminating.

This forwarding behavior for terminating endpoints exists to allow `NodePort` and `LoadBalancer`
Services to gracefully drain connections when using `externalTrafficPolicy: Local`.

As a deployment goes through a rolling update, nodes backing a load balancer may transition from
N to 0 replicas of that deployment. In some cases, external load balancers can send traffic to
a node with 0 replicas in between health check probes. Routing traffic to terminating endpoints
ensures that nodes that are scaling down Pods can gracefully receive and drain traffic to
those terminating Pods.
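As a sketch of the kind of Service this graceful-drain behavior supports, here is a hypothetical `LoadBalancer` Service that opts into node-local external routing, following the `kubectl apply` pattern used elsewhere on this page (the name, selector, and ports are illustrative):

```shell
# Hypothetical example: the name, selector, and ports are illustrative.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: image-processor
spec:
  type: LoadBalancer
  # Route external traffic only to ready endpoints on the receiving node.
  # With the ProxyTerminatingEndpoints feature, a node whose local endpoints
  # are all terminating can still drain in-flight connections gracefully.
  externalTrafficPolicy: Local
  selector:
    app: image-processor
  ports:
  - port: 80
    targetPort: 8080
EOF
```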
By the time the Pod completes termination, the external load balancer
should have seen the node's health check failing and fully removed the node from the backend
pool.

## Traffic Distribution

The `spec.trafficDistribution` field within a Kubernetes Service allows you to
express preferences for how traffic should be routed to Service endpoints.
Implementations like kube-proxy use the `spec.trafficDistribution` field as a
guideline. The behavior associated with a given preference may subtly differ
between implementations.

`PreferClose` with kube-proxy
: For kube-proxy, this means prioritizing sending traffic to endpoints within
  the same zone as the client. The EndpointSlice controller updates
  EndpointSlices with `hints` to communicate this preference, which kube-proxy
  then uses for routing decisions. If a client's zone does not have any
  available endpoints, traffic will be routed cluster-wide for that client.

In the absence of any value for `trafficDistribution`, the default routing
strategy for kube-proxy is to distribute traffic to any endpoint in the cluster.

### Comparison with `service.kubernetes.io/topology-mode: Auto`

The `trafficDistribution` field with `PreferClose` and the
`service.kubernetes.io/topology-mode: Auto` annotation both aim to prioritize
same-zone traffic. However, there are key differences in their approaches:

* `service.kubernetes.io/topology-mode: Auto`: Attempts to distribute traffic
  proportionally across zones based on allocatable CPU resources. This heuristic
  includes safeguards (such as the [fallback
  behavior](/docs/concepts/services-networking/topology-aware-routing/#three-or-more-endpoints-per-zone)
  for small numbers of endpoints) and could lead to the feature being disabled
  in certain scenarios for load-balancing reasons. This approach sacrifices some
  predictability in favor of potential load balancing.

* `trafficDistribution: PreferClose`: This approach aims to be slightly simpler
  and more predictable: "If there are endpoints in the zone, they will receive
  all traffic for that zone; if there are no endpoints in a zone, the traffic
  will be distributed to other zones." While the approach may offer more
  predictability, it does mean that you are in control of managing a [potential
  overload](#considerations-for-using-traffic-distribution-control).

If the `service.kubernetes.io/topology-mode` annotation is set to `Auto`, it
will take precedence over `trafficDistribution`. (The annotation may be deprecated
in the future in favour of the `trafficDistribution` field.)

### Interaction with Traffic Policies

When compared to the `trafficDistribution` field, the traffic policy fields
(`externalTrafficPolicy` and `internalTrafficPolicy`) are meant to offer
stricter traffic locality requirements. Here's how `trafficDistribution`
interacts with them:

* Precedence of Traffic Policies: For a given Service, if a traffic policy
  (`externalTrafficPolicy` or `internalTrafficPolicy`) is set to `Local`, it
  takes precedence over `trafficDistribution: PreferClose` for the corresponding
  traffic type (external or internal, respectively).

* `trafficDistribution` Influence: For a given Service, if a traffic policy
  (`externalTrafficPolicy` or `internalTrafficPolicy`) is set to `Cluster` (the
  default), or if the fields are not set, then `trafficDistribution:
  PreferClose` guides the routing behavior for the corresponding traffic type
  (external or internal, respectively). This means that an attempt will be made
  to route traffic to an endpoint that is in the same zone as the client.

### Considerations for using traffic distribution control

* **Increased Probability of Overloaded Endpoints:** The `PreferClose`
  heuristic will attempt to route traffic to the closest healthy endpoints
  instead of spreading that traffic evenly across all endpoints. If you do not
  have a sufficient number of endpoints within a zone, they may become
  overloaded. This is especially likely if incoming traffic is not
  proportionally distributed across zones. To mitigate this, consider the
  following strategies:

    * [Pod Topology Spread
      Constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/):
      Use Pod Topology Spread Constraints to distribute your pods more evenly
      across zones.

    * Zone-specific Deployments: If you expect to see skewed traffic patterns,
      create a separate Deployment for each zone. This approach allows the
      separate workloads to scale independently. There are also workload
      management addons available from the ecosystem, outside the Kubernetes
      project itself, that can help here.

* **Implementation-specific behavior:** Each dataplane implementation may
  handle this field slightly differently.
If you're using an implementation other than\n  kube-proxy, refer the documentation specific to that implementation to\n  understand how this field is being handled.\n\n## \n\nTo learn more about Services,\nread [Connecting Applications with Services](\/docs\/tutorials\/services\/connect-applications-service\/).\n\nYou can also:\n\n* Read about [Services](\/docs\/concepts\/services-networking\/service\/) as a concept\n* Read about [Ingresses](\/docs\/concepts\/services-networking\/ingress\/) as a concept\n* Read the [API reference](\/docs\/reference\/kubernetes-api\/service-resources\/service-v1\/) for the Service API\n","site":"kubernetes reference","answers_cleaned":"    title  Virtual IPs and Service Proxies content type  reference weight  50           overview     Every  in a Kubernetes  runs a  kube proxy   docs reference command line tools reference kube proxy    unless you have deployed your own alternative component in place of  kube proxy     The  kube proxy  component is responsible for implementing a  virtual IP  mechanism for  of  type  other than   ExternalName    docs concepts services networking service  externalname   Each instance of kube proxy watches the Kubernetes  for the addition and removal of Service and EndpointSlice   For each Service  kube proxy calls appropriate APIs  depending on the kube proxy mode  to configure the node to capture traffic to the Service s  clusterIP  and  port   and redirect that traffic to one of the Service s endpoints  usually a Pod  but possibly an arbitrary user provided IP address   A control loop ensures that the rules on each node are reliably synchronized with the Service and EndpointSlice state as indicated by the API server     A question that pops up every now and then is why Kubernetes relies on proxying to forward inbound traffic to backends  What about other approaches  For example  would it be possible to configure DNS records that have multiple A values  or AAAA for IPv6   and rely on round robin name 
resolution   There are a few reasons for using proxying for Services     There is a long history of DNS implementations not respecting record TTLs    and caching the results of name lookups after they should have expired    Some apps do DNS lookups only once and cache the results indefinitely    Even if apps and libraries did proper re resolution  the low or zero TTLs   on the DNS records could impose a high load on DNS that then becomes   difficult to manage   Later in this page you can read about how various kube proxy implementations work  Overall  you should note that  when running  kube proxy   kernel level rules may be modified  for example  iptables rules might get created   which won t get cleaned up  in some cases until you reboot   Thus  running kube proxy is something that should only be done by an administrator which understands the consequences of having a low level  privileged network proxying service on a computer   Although the  kube proxy  executable supports a  cleanup  function  this function is not an official feature and thus is only available to use as is    a id  example    a  Some of the details in this reference refer to an example  the backend  for a stateless image processing workloads  running with three replicas  Those replicas are fungible mdash frontends do not care which backend they use   While the actual Pods that compose the backend set may change  the frontend clients should not need to be aware of that  nor should they need to keep track of the set of backends themselves        body         Proxy modes  The kube proxy starts up in different modes  which are determined by its configuration   On Linux nodes  the available modes for kube proxy are     iptables    proxy mode iptables    A mode where the kube proxy configures packet forwarding rules using iptables     ipvs    proxy mode ipvs    a mode where the kube proxy configures packet forwarding rules using ipvs     nftables    proxy mode nftables    a mode where the kube proxy 
configures packet forwarding rules using nftables   There is only one mode available for kube proxy on Windows     kernelspace    proxy mode kernelspace    a mode where the kube proxy configures packet forwarding rules in the Windows kernel       iptables  proxy mode   proxy mode iptables    This proxy mode is only available on Linux nodes    In this mode  kube proxy configures packet forwarding rules using the iptables API of the kernel netfilter subsystem  For each endpoint  it installs iptables rules which  by default  select a backend Pod at random        Example   packet processing iptables   As an example  consider the image processing application described  earlier   example  in the page  When the backend Service is created  the Kubernetes control plane assigns a virtual IP address  for example 10 0 0 1   For this example  assume that the Service port is 1234  All of the kube proxy instances in the cluster observe the creation of the new Service   When kube proxy on a node sees a new Service  it installs a series of iptables rules which redirect from the virtual IP address to more iptables rules  defined per Service  The per Service rules link to further rules for each backend endpoint  and the per  endpoint rules redirect traffic  using destination NAT  to the backends   When a client connects to the Service s virtual IP address the iptables rule kicks in  A backend is chosen  either based on session affinity or randomly  and packets are redirected to the backend without rewriting the client IP address   This same basic flow executes when traffic comes in through a node port or through a load balancer  though in those cases the client IP address does get altered        Optimizing iptables mode performance  In iptables mode  kube proxy creates a few iptables rules for every Service  and a few iptables rules for each endpoint IP address  In clusters with tens of thousands of Pods and Services  this means tens of thousands of iptables rules  and kube proxy may 
take a long time to update the rules in the kernel when Services  or their EndpointSlices  change  You can adjust the syncing behavior of kube proxy via options in the   iptables  section   docs reference config api kube proxy config v1alpha1  kubeproxy config k8s io v1alpha1 KubeProxyIPTablesConfiguration  of the kube proxy  configuration file   docs reference config api kube proxy config v1alpha1    which you specify via  kube proxy   config  path         yaml     iptables    minSyncPeriod  1s   syncPeriod  30s                 minSyncPeriod   The  minSyncPeriod  parameter sets the minimum duration between attempts to resynchronize iptables rules with the kernel  If it is  0s   then kube proxy will always immediately synchronize the rules every time any Service or Endpoint changes  This works fine in very small clusters  but it results in a lot of redundant work when lots of things change in a small time period  For example  if you have a Service backed by a  with 100 pods  and you delete the Deployment  then with  minSyncPeriod  0s   kube proxy would end up removing the Service s endpoints from the iptables rules one by one  for a total of 100 updates  With a larger  minSyncPeriod   multiple Pod deletion events would get aggregated together  so kube proxy might instead end up making  say  5 updates  each removing 20 endpoints  which will be much more efficient in terms of CPU  and result in the full set of changes being synchronized faster   The larger the value of  minSyncPeriod   the more work that can be aggregated  but the downside is that each individual change may end up waiting up to the full  minSyncPeriod  before being processed  meaning that the iptables rules spend more time being out of sync with the current API server state   The default value of  1s  should work well in most clusters  but in very large clusters it may be necessary to set it to a larger value  Especially  if kube proxy s  sync proxy rules duration seconds  metric indicates an average 
time much larger than 1 second  then bumping up  minSyncPeriod  may make updates more efficient         Updating legacy  minSyncPeriod  configuration   minimize iptables restore   Older versions of kube proxy updated all the rules for all Services on every sync  this led to performance issues  update lag  in large clusters  and the recommended solution was to set a larger  minSyncPeriod   Since Kubernetes v1 28  the iptables mode of kube proxy uses a more minimal approach  only making updates where Services or EndpointSlices have actually changed   If you were previously overriding  minSyncPeriod   you should try removing that override and letting kube proxy use the default value   1s   or at least a smaller value than you were using before upgrading   If you are not running kube proxy from Kubernetes   check the behavior and associated advice for the version that you are actually running          syncPeriod   The  syncPeriod  parameter controls a handful of synchronization operations that are not directly related to changes in individual Services and EndpointSlices  In particular  it controls how quickly kube proxy notices if an external component has interfered with kube proxy s iptables rules  In large clusters  kube proxy also only performs certain cleanup operations once every  syncPeriod  to avoid unnecessary work   For the most part  increasing  syncPeriod  is not expected to have much impact on performance  but in the past  it was sometimes useful to set it to a very large value  eg   1h    This is no longer recommended  and is likely to hurt functionality more than it improves performance       IPVS proxy mode   proxy mode ipvs    This proxy mode is only available on Linux nodes    In  ipvs  mode  kube proxy uses the kernel IPVS and iptables APIs to create rules to redirect traffic from Service IPs to endpoint IPs   The IPVS proxy mode is based on netfilter hook function that is similar to iptables mode  but uses a hash table as the underlying data 
structure and works in the kernel space  That means kube proxy in IPVS mode redirects traffic with lower latency than kube proxy in iptables mode  with much better performance when synchronizing proxy rules  Compared to the iptables proxy mode  IPVS mode also supports a higher throughput of network traffic   IPVS provides more options for balancing traffic to backend Pods  these are      rr   Round Robin   Traffic is equally distributed amongst the backing servers      wrr   Weighted Round Robin   Traffic is routed to the backing servers based on   the weights of the servers  Servers with higher weights receive new connections   and get more requests than servers with lower weights      lc   Least Connection   More traffic is assigned to servers with fewer active connections      wlc   Weighted Least Connection   More traffic is routed to servers with fewer connections   relative to their weights  that is  connections divided by weight      lblc   Locality based Least Connection   Traffic for the same IP address is sent to the   same backing server if the server is not overloaded and available  otherwise the traffic   is sent to servers with fewer connections  and keep it for future assignment      lblcr   Locality Based Least Connection with Replication   Traffic for the same IP   address is sent to the server with least connections  If all the backing servers are   overloaded  it picks up one with fewer connections and add it to the target set    If the target set has not changed for the specified time  the most loaded server   is removed from the set  in order to avoid high degree of replication      sh   Source Hashing   Traffic is sent to a backing server by looking up a statically   assigned hash table based on the source IP addresses      dh   Destination Hashing   Traffic is sent to a backing server by looking up a   statically assigned hash table based on their destination addresses      sed   Shortest Expected Delay   Traffic forwarded to a backing server 
with the shortest   expected delay  The expected delay is   C   1    U  if sent to a server  where  C  is   the number of connections on the server and  U  is the fixed service rate  weight  of   the server      nq   Never Queue   Traffic is sent to an idle server if there is one  instead of   waiting for a fast one  if all servers are busy  the algorithm falls back to the  sed    behavior      mh   Maglev Hashing   Assigns incoming jobs based on    Google s Maglev hashing algorithm  https   static googleusercontent com media research google com en  pubs archive 44824 pdf     This scheduler has two flags   mh fallback   which enables fallback to a different   server if the selected server is unavailable  and  mh port   which adds the source port number to   the hash computation  When using  mh   kube proxy always sets the  mh port  flag and does not   enable the  mh fallback  flag    In proxy mode ipvs  mh  will work as source hashing   sh    but with ports   These scheduling algorithms are configured through the   ipvs scheduler    docs reference config api kube proxy config v1alpha1  kubeproxy config k8s io v1alpha1 KubeProxyIPVSConfiguration  field in the kube proxy configuration    To run kube proxy in IPVS mode  you must make IPVS available on the node before starting kube proxy   When kube proxy starts in IPVS proxy mode  it verifies whether IPVS kernel modules are available  If the IPVS kernel modules are not detected  then kube proxy exits with an error           nftables  proxy mode   proxy mode nftables      This proxy mode is only available on Linux nodes  and requires kernel 5 13 or later    In this mode  kube proxy configures packet forwarding rules using the nftables API of the kernel netfilter subsystem  For each endpoint  it installs nftables rules which  by default  select a backend Pod at random   The nftables API is the successor to the iptables API and is designed to provide better performance and scalability than iptables  The  nftables  proxy 
mode is able to process changes to service endpoints faster and more efficiently than the  iptables  mode  and is also able to more efficiently process packets in the kernel  though this only becomes noticeable in clusters with tens of thousands of services    As of Kubernetes   the  nftables  mode is still relatively new  and may not be compatible with all network plugins  consult the documentation for your network plugin        Migrating from  iptables  mode to  nftables   Users who want to switch from the default  iptables  mode to the  nftables  mode should be aware that some features work slightly differently the  nftables  mode        NodePort interfaces    In  iptables  mode  by default      NodePort services   docs concepts services networking service  type nodeport     are reachable on all local IP addresses  This is usually not what    users want  so the  nftables  mode defaults to       nodeport addresses primary   meaning NodePort services are only    reachable on the node s primary IPv4 and or IPv6 addresses  You can    override this by specifying an explicit value for that option     e g      nodeport addresses 0 0 0 0 0  to listen on all  local     IPv4 IPs        NodePort services on  127 0 0 1     In  iptables  mode  if the       nodeport addresses  range includes  127 0 0 1   and the option       iptables localhost nodeports false  option is not passed   then    NodePort services are reachable even on  localhost    127 0 0 1       In  nftables  mode  and  ipvs  mode   this will not work  If you    are not sure if you are depending on this functionality  you can    check kube proxy s     iptables localhost nodeports accepted packets total  metric  if it    is non 0  that means that some client has connected to a NodePort    service via  127 0 0 1         NodePort interaction with firewalls    The  iptables  mode of    kube proxy tries to be compatible with overly agressive firewalls     for each NodePort service  it will add rules to accept inbound 
   traffic on that port  in case that traffic would otherwise be    blocked by a firewall  This approach will not work with firewalls    based on nftables  so kube proxy s  nftables  mode does not do    anything here  if you have a local firewall  you must ensure that    it is properly configured to allow Kubernetes traffic through     e g   by allowing inbound traffic on the entire NodePort range         Conntrack bug workarounds    Linux kernels prior to 6 1 have a    bug that can result in long lived TCP connections to service IPs    being closed with the error  Connection reset by peer   The     iptables  mode of kube proxy installs a workaround for this bug     but this workaround was later found to cause other problems in some    clusters  The  nftables  mode does not install any workaround by    default  but you can check kube proxy s     iptables ct state invalid dropped packets total  metric to see if    your cluster is depending on the workaround  and if so  you can run    kube proxy with the option    conntrack tcp be liberal  to work    around the problem in  nftables  mode        kernelspace  proxy mode   proxy mode kernelspace    This proxy mode is only available on Windows nodes    The kube proxy configures packet filtering rules in the Windows  Virtual Filtering Platform   VFP   an extension to Windows vSwitch  These rules process encapsulated packets within the node level virtual networks  and rewrite packets so that the destination IP address  and layer 2 information  is correct for getting the packet routed to the correct destination  The Windows VFP is analogous to tools such as Linux  nftables  or  iptables   The Windows VFP extends the  Hyper V Switch   which was initially implemented to support virtual machine networking   When a Pod on a node sends traffic to a virtual IP address  and the kube proxy selects a Pod on a different node as the load balancing target  the  kernelspace  proxy mode rewrites that packet to be destined to the target 
backend Pod  The Windows  Host Networking Service   HNS  ensures that packet rewriting rules are configured so that the return traffic appears to come from the virtual IP address and not the specific backend Pod        Direct server return for  kernelspace  mode   windows direct server return     As an alternative to the basic operation  a node that hosts the backend Pod for a Service can apply the packet rewriting directly  rather than placing this burden on the node where the client Pod is running  This is called  direct server return    To use this  you must run kube proxy with the    enable dsr  command line argument   and   enable the  WinDSR   feature gate   docs reference command line tools reference feature gates     Direct server return also optimizes the case for Pod return traffic even when both Pods are running on the same node      Session affinity  In these proxy models  the traffic bound for the Service s IP Port is proxied to an appropriate backend without the clients knowing anything about Kubernetes or Services or Pods   If you want to make sure that connections from a particular client are passed to the same Pod each time  you can select the session affinity based on the client s IP addresses by setting   spec sessionAffinity  to  ClientIP  for a Service  the default is  None         Session stickiness timeout  You can also set the maximum session sticky time by setting   spec sessionAffinityConfig clientIP timeoutSeconds  appropriately for a Service   the default value is 10800  which works out to be 3 hours     On Windows  setting the maximum session sticky time for Services is not supported       IP address assignment to Services  Unlike Pod IP addresses  which actually route to a fixed destination  Service IPs are not actually answered by a single host   Instead  kube proxy uses packet processing logic  such as Linux iptables  to define  virtual  IP addresses which are transparently redirected as needed   When clients connect to the VIP  
their traffic is automatically transported to an appropriate endpoint  The environment variables and DNS for Services are actually populated in terms of the Service s virtual IP address  and port        Avoiding collisions  One of the primary philosophies of Kubernetes is that you should not be exposed to situations that could cause your actions to fail through no fault of your own  For the design of the Service resource  this means not making you choose your own IP address if that choice might collide with someone else s choice   That is an isolation failure   In order to allow you to choose an IP address for your Services  we must ensure that no two Services can collide  Kubernetes does that by allocating each Service its own IP address from within the  service cluster ip range  CIDR range that is configured for the        IP address allocation tracking  To ensure each Service receives a unique IP address  an internal allocator atomically updates a global allocation map in  prior to creating each Service  The map object must exist in the registry for Services to get IP address assignments  otherwise creations will fail with a message indicating an IP address could not be allocated   In the control plane  a background controller is responsible for creating that map  needed to support migrating from older versions of Kubernetes that used in memory locking   Kubernetes also uses controllers to check for invalid assignments  for example  due to administrator intervention  and for cleaning up allocated IP addresses that are no longer used by any Services        IP address allocation tracking using the Kubernetes API   ip address objects     If you enable the  MultiCIDRServiceAllocator   feature gate   docs reference command line tools reference feature gates   and the   networking k8s io v1alpha1  API group   docs tasks administer cluster enable disable api    the control plane replaces the existing etcd allocator with a revised implementation that uses IPAddress and 
ServiceCIDR objects instead of an internal global allocation map  Each cluster IP address associated with a Service then references an IPAddress object   Enabling the feature gate also replaces a background controller with an alternative that handles the IPAddress objects and supports migration from the old allocator model  Kubernetes  does not support migrating from IPAddress objects to the internal allocation map   One of the main benefits of the revised allocator is that it removes the size limitations for the IP address range that can be used for the cluster IP address of Services  With  MultiCIDRServiceAllocator  enabled  there are no limitations for IPv4  and for IPv6 you can use IP address netmasks that are a  64 or smaller  as opposed to  108 with the legacy implementation    Making IP address allocations available via the API means that you as a cluster administrator can allow users to inspect the IP addresses assigned to their Services  Kubernetes extensions  such as the  Gateway API   docs concepts services networking gateway    can use the IPAddress API to extend Kubernetes  inherent networking capabilities   Here is a brief example of a user querying for IP addresses      shell kubectl get services         NAME         TYPE        CLUSTER IP        EXTERNAL IP   PORT S    AGE kubernetes   ClusterIP   2001 db8 1 2  1    none         443 TCP   3d1h        shell kubectl get ipaddresses         NAME              PARENTREF 2001 db8 1 2  1   services default kubernetes 2001 db8 1 2  a   services kube system kube dns      Kubernetes also allows users to dynamically define the available IP ranges for Services using ServiceCIDR objects  During bootstrap  a default ServiceCIDR object named  kubernetes  is created from the value of the    service cluster ip range  command line argument to kube apiserver      shell kubectl get servicecidrs         NAME         CIDRS         AGE kubernetes   10 96 0 0 28  17m      Users can create or delete new ServiceCIDR objects to
manage the available IP ranges for Services      shell cat    EOF    kubectl apply  f   apiVersion  networking k8s io v1beta1 kind  ServiceCIDR metadata    name  newservicecidr spec    cidrs      10 96 0 0 24 EOF         servicecidr networking k8s io newservicecidr created         shell kubectl get servicecidrs         NAME             CIDRS         AGE kubernetes       10 96 0 0 28  17m newservicecidr   10 96 0 0 24  7m          IP address ranges for Service virtual IP addresses   service ip static sub range     Kubernetes divides the  ClusterIP  range into two bands  based on the size of the configured  service cluster ip range  by using the following formula  min(max(16, cidrSize / 16), 256)   That formula paraphrases as  never less than 16 or more than 256  with a graduated step function between them    Kubernetes prefers to allocate dynamic IP addresses to Services by choosing from the upper band  which means that if you want to assign a specific IP address to a  type  ClusterIP  Service  you should manually assign an IP address from the   lower   band  That approach reduces the risk of a conflict over allocation      Traffic policies  You can set the   spec internalTrafficPolicy  and   spec externalTrafficPolicy  fields to control how Kubernetes routes traffic to healthy   ready   backends       Internal traffic policy    You can set the   spec internalTrafficPolicy  field to control how traffic from internal sources is routed  Valid values are  Cluster  and  Local   Set the field to  Cluster  to route internal traffic to all ready endpoints and  Local  to only route to ready node local endpoints  If the traffic policy is  Local  and there are no node local endpoints  traffic is dropped by kube proxy       External traffic policy  You can set the   spec externalTrafficPolicy  field to control how traffic from external sources is routed  Valid values are  Cluster  and  Local   Set the field to  Cluster  to route external traffic to all ready endpoints and  Local  to
only route to ready node local endpoints  If the traffic policy is  Local  and there are no node local endpoints  the kube proxy does not forward any traffic for the relevant Service   If  Cluster  is specified all nodes are eligible load balancing targets  as long as  the node is not being deleted and kube proxy is healthy  In this mode  load balancer  health checks are configured to target the service proxy s readiness port and path  In the case of kube proxy this evaluates to     NODE IP  10256 healthz   kube proxy will return either an HTTP code 200 or 503  kube proxy s load balancer health check endpoint returns 200 if   1  kube proxy is healthy  meaning       it s able to progress programming the network and isn t timing out while doing      so  the timeout is defined to be  2 x iptables syncPeriod      and 2  the node is not being deleted  there is no deletion timestamp set for the Node    The reason why kube proxy returns 503 and marks the node as not eligible when it s being deleted  is because kube proxy supports connection draining for terminating nodes  A couple of important things occur from the point of view of a Kubernetes managed load balancer when a node is being deleted or is deleted   While deleting     kube proxy will start failing its readiness probe and essentially mark the    node as not eligible for load balancer traffic  The load balancer health    check failing causes load balancers which support connection draining to    allow existing connections to terminate  and block new connections from    establishing   When deleted     The service controller in the Kubernetes cloud controller manager removes the   node from the referenced set of eligible targets  Removing any instance from   the load balancer s set of backend targets immediately terminates all   connections  This is also the reason kube proxy first fails the health check   while the node is deleting   It s important to note for Kubernetes vendors that if any vendor configures the
kube proxy readiness probe as a liveness probe  that kube proxy will start restarting continuously when a node is deleting until it has been fully deleted  kube proxy exposes a   livez  path which  as opposed to the   healthz  one  does   not   consider the Node s deleting state and only its progress programming the network    livez  is therefore the recommended path for anyone looking to define a livenessProbe for kube proxy   Users deploying kube proxy can inspect both the readiness   liveness state by evaluating the metrics   proxy livez total     proxy healthz total   Both metrics publish two series  one with the 200 label and one with the 503 one   For  Local  Services  kube proxy will return 200 if  1  kube proxy is healthy ready  and 2  has a local endpoint on the node in question   Node deletion does   not   have an impact on kube proxy s return code for what concerns load balancer health checks  The reason for this is  deleting nodes could end up causing an ingress outage should all endpoints simultaneously be running on said nodes   The Kubernetes project recommends that cloud provider integration code configures load balancer health checks that target the service proxy s healthz port  If you are using or implementing your own virtual IP implementation  that people can use instead of kube proxy  you should set up a similar health checking port with logic that matches the kube proxy implementation       Traffic to terminating endpoints    If the  ProxyTerminatingEndpoints   feature gate   docs reference command line tools reference feature gates   is enabled in kube proxy and the traffic policy is  Local   that node s kube proxy uses a more complicated algorithm to select endpoints for a Service  With the feature enabled  kube proxy checks if the node has local endpoints and whether or not all the local endpoints are marked as terminating  If there are local endpoints and   all   of them are terminating  then kube proxy will forward traffic to those 
terminating endpoints  Otherwise  kube proxy will always prefer forwarding traffic to endpoints that are not terminating   This forwarding behavior for terminating endpoints exists to allow  NodePort  and  LoadBalancer  Services to gracefully drain connections when using  externalTrafficPolicy  Local    As a deployment goes through a rolling update  nodes backing a load balancer may transition from N to 0 replicas of that deployment  In some cases  external load balancers can send traffic to a node with 0 replicas in between health check probes  Routing traffic to terminating endpoints ensures that Nodes that are scaling down Pods can gracefully receive and drain traffic to those terminating Pods  By the time the Pod completes termination  the external load balancer should have seen the node s health check failing and fully removed the node from the backend pool      Traffic Distribution    The  spec trafficDistribution  field within a Kubernetes Service allows you to express preferences for how traffic should be routed to Service endpoints  Implementations like kube proxy use the  spec trafficDistribution  field as a guideline  The behavior associated with a given preference may subtly differ between implementations    PreferClose  with kube proxy   For kube proxy  this means prioritizing sending traffic to endpoints within   the same zone as the client  The EndpointSlice controller updates   EndpointSlices with  hints  to communicate this preference  which kube proxy   then uses for routing decisions  If a client s zone does not have any   available endpoints  traffic will be routed cluster wide for that client   In the absence of any value for  trafficDistribution   the default routing strategy for kube proxy is to distribute traffic to any endpoint in the cluster       Comparison with  service kubernetes io topology mode  Auto   The  trafficDistribution  field with  PreferClose  and the  service kubernetes io topology mode  Auto  annotation both aim to
prioritize same zone traffic  However  there are key differences in their approaches      service kubernetes io topology mode  Auto   Attempts to distribute traffic   proportionally across zones based on allocatable CPU resources  This heuristic   includes safeguards  such as the  fallback   behavior   docs concepts services networking topology aware routing  three or more endpoints per zone    for small numbers of endpoints  and could lead to the feature being disabled   in certain scenarios for load balancing reasons  This approach sacrifices some   predictability in favor of potential load balancing      trafficDistribution  PreferClose   This approach aims to be slightly simpler   and more predictable   If there are endpoints in the zone  they will receive   all traffic for that zone  if there are no endpoints in a zone  the traffic   will be distributed to other zones   While the approach may offer more   predictability  it does mean that you are in control of managing a  potential   overload   considerations for using traffic distribution control    If the  service kubernetes io topology mode  annotation is set to  Auto   it will take precedence over  trafficDistribution    The annotation may be deprecated in the future in favor of the  trafficDistribution  field        Interaction with Traffic Policies  When compared to the  trafficDistribution  field  the traffic policy fields   externalTrafficPolicy  and  internalTrafficPolicy   are meant to offer stricter traffic locality requirements  Here s how  trafficDistribution  interacts with them     Precedence of Traffic Policies  For a given Service  if a traffic policy     externalTrafficPolicy  or  internalTrafficPolicy   is set to  Local   it   takes precedence over  trafficDistribution  PreferClose  for the corresponding   traffic type  external or internal  respectively       trafficDistribution  Influence  For a given Service  if a traffic policy     externalTrafficPolicy  or  internalTrafficPolicy   is
set to  Cluster   the   default   or if the fields are not set  then  trafficDistribution    PreferClose  guides the routing behavior for the corresponding traffic type    external or internal  respectively   This means that an attempt will be made   to route traffic to an endpoint that is in the same zone as the client       Considerations for using traffic distribution control        Increased Probability of Overloaded Endpoints    The  PreferClose    heuristic will attempt to route traffic to the closest healthy endpoints   instead of spreading that traffic evenly across all endpoints  If you do not   have a sufficient number of endpoints within a zone  they may become   overloaded  This is especially likely if incoming traffic is not   proportionally distributed across zones  To mitigate this  consider the   following strategies          Pod Topology Spread       Constraints   docs concepts scheduling eviction topology spread constraints          Use Pod Topology Spread Constraints to distribute your pods more evenly       across zones         Zone specific Deployments  If you expect to see skewed traffic patterns        create a separate Deployment for each zone  This approach allows the       separate workloads to scale independently  There are also workload       management addons available from the ecosystem  outside the Kubernetes       project itself  that can help here       Implementation specific behavior    Each dataplane implementation may handle   this field slightly differently  If you re using an implementation other than   kube proxy  refer to the documentation specific to that implementation to   understand how this field is being handled        To learn more about Services  read  Connecting Applications with Services   docs tutorials services connect applications service     You can also     Read about  Services   docs concepts services networking service   as a concept   Read about  Ingresses   docs concepts services networking ingress   as a
concept   Read the  API reference   docs reference kubernetes api service resources service v1   for the Service API "}
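The internal and external traffic policies and the trafficDistribution preference described in the record above are all plain fields on the Service spec and can be combined in one manifest. A minimal illustrative sketch (the Service name, selector, and port numbers are hypothetical, not taken from the text):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app             # hypothetical selector
  ports:
    - port: 443
      targetPort: 8443
  # External (load balancer) traffic goes only to ready node-local endpoints;
  # on nodes without local endpoints, kube-proxy forwards no traffic and the
  # load balancer health check takes the node out of rotation.
  externalTrafficPolicy: Local
  # In-cluster traffic may go to any ready endpoint.
  internalTrafficPolicy: Cluster
  # A routing preference, not a guarantee: implementations such as kube-proxy
  # prefer same-zone endpoints and route cluster-wide when a zone has none.
  trafficDistribution: PreferClose
```

Per the precedence rules above, `externalTrafficPolicy: Local` overrides the `PreferClose` preference for external traffic, so in this sketch `trafficDistribution` only guides internal traffic.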
{"questions":"kubernetes reference title Protocols for Services weight 10 you can select from any network protocol that Kubernetes supports If you configure a glossarytooltip text Service termid service contenttype reference overview","answers":"---\ntitle: Protocols for Services\ncontent_type: reference\nweight: 10\n---\n\n<!-- overview -->\nIf you configure a ,\nyou can select from any network protocol that Kubernetes supports.\n\nKubernetes supports the following protocols with Services:\n\n- [`SCTP`](#protocol-sctp)\n- [`TCP`](#protocol-tcp) _(the default)_\n- [`UDP`](#protocol-udp)\n\nWhen you define a Service, you can also specify the\n[application protocol](\/docs\/concepts\/services-networking\/service\/#application-protocol)\nthat it uses.\n\nThis document details some special cases, all of them typically using TCP\nas a transport protocol:\n\n- [HTTP](#protocol-http-special) and [HTTPS](#protocol-http-special)\n- [PROXY protocol](#protocol-proxy-special)\n- [TLS](#protocol-tls-special) termination at the load balancer\n\n<!-- body -->\n## Supported protocols {#protocol-support}\n\nThere are 3 valid values for the `protocol` of a port for a Service:\n\n### `SCTP` {#protocol-sctp}\n\n\n\nWhen using a network plugin that supports SCTP traffic, you can use SCTP for\nmost Services. For `type: LoadBalancer` Services, SCTP support depends on the cloud\nprovider offering this facility. (Most do not).\n\nSCTP is not supported on nodes that run Windows.\n\n#### Support for multihomed SCTP associations {#caveat-sctp-multihomed}\n\nThe support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod.\n\nNAT for multihomed SCTP associations requires special logic in the corresponding kernel modules.\n\n### `TCP` {#protocol-tcp}\n\nYou can use TCP for any kind of Service, and it's the default network protocol.\n\n### `UDP` {#protocol-udp}\n\nYou can use UDP for most Services. 
For `type: LoadBalancer` Services,\nUDP support depends on the cloud provider offering this facility.\n\n\n## Special cases\n\n### HTTP {#protocol-http-special}\n\nIf your cloud provider supports it, you can use a Service in LoadBalancer mode to\nconfigure a load balancer outside of your Kubernetes cluster, in a special mode\nwhere your cloud provider's load balancer implements HTTP \/ HTTPS reverse proxying,\nwith traffic forwarded to the backend endpoints for that Service.\n\nTypically, you set the protocol for the Service to `TCP` and add an\n\n(usually specific to your cloud provider) that configures the load balancer\nto handle traffic at the HTTP level.\nThis configuration might also include serving HTTPS (HTTP over TLS) and\nreverse-proxying plain HTTP to your workload.\n\n\nYou can also use an  to expose\nHTTP\/HTTPS Services.\n\n\nYou might additionally want to specify that the\n[application protocol](\/docs\/concepts\/services-networking\/service\/#application-protocol)\nof the connection is `http` or `https`. Use `http` if the session from the\nload balancer to your workload is HTTP without TLS, and use `https` if the\nsession from the load balancer to your workload uses TLS encryption.\n\n### PROXY protocol {#protocol-proxy-special}\n\nIf your cloud provider supports it, you can use a Service set to `type: LoadBalancer`\nto configure a load balancer outside of Kubernetes itself, that will forward connections\nwrapped with the\n[PROXY protocol](https:\/\/www.haproxy.org\/download\/2.5\/doc\/proxy-protocol.txt).\n\nThe load balancer then sends an initial series of octets describing the\nincoming connection, similar to this example (PROXY protocol v1):\n\n```\nPROXY TCP4 192.0.2.202 10.0.42.7 12345 7\\r\\n\n```\n\nThe data after the proxy protocol preamble are the original\ndata from the client. 
When either side closes the connection,\nthe load balancer also triggers a connection close and sends\nany remaining data where feasible.\n\nTypically, you define a Service with the protocol set to `TCP`.\nYou also set an annotation, specific to your\ncloud provider, that configures the load balancer to wrap each incoming connection in the PROXY protocol.\n\n### TLS {#protocol-tls-special}\n\nIf your cloud provider supports it, you can use a Service set to `type: LoadBalancer` as\na way to set up external reverse proxying, where the connection from client to load\nbalancer is TLS encrypted and the load balancer is the TLS server peer.\nThe connection from the load balancer to your workload can also be TLS,\nor might be plain text. The exact options available to you depend on your\ncloud provider or custom Service implementation.\n\nTypically, you set the protocol to `TCP` and set an annotation\n(usually specific to your cloud provider) that configures the load balancer\nto act as a TLS server. 
You would configure the TLS identity (as server,\nand possibly also as a client that connects to your workload) using\nmechanisms that are specific to your cloud provider.","site":"kubernetes reference","answers_cleaned":"    title  Protocols for Services content type  reference weight  10           overview     If you configure a   you can select from any network protocol that Kubernetes supports   Kubernetes supports the following protocols with Services       SCTP    protocol sctp      TCP    protocol tcp    the default       UDP    protocol udp   When you define a Service  you can also specify the  application protocol   docs concepts services networking service  application protocol  that it uses   This document details some special cases  all of them typically using TCP as a transport protocol      HTTP   protocol http special  and  HTTPS   protocol http special     PROXY protocol   protocol proxy special     TLS   protocol tls special  termination at the load balancer       body        Supported protocols   protocol support   There are 3 valid values for the  protocol  of a port for a Service        SCTP    protocol sctp     When using a network plugin that supports SCTP traffic  you can use SCTP for most Services  For  type  LoadBalancer  Services  SCTP support depends on the cloud provider offering this facility   Most do not    SCTP is not supported on nodes that run Windows        Support for multihomed SCTP associations   caveat sctp multihomed   The support of multihomed SCTP associations requires that the CNI plugin can support the assignment of multiple interfaces and IP addresses to a Pod   NAT for multihomed SCTP associations requires special logic in the corresponding kernel modules        TCP    protocol tcp   You can use TCP for any kind of Service  and it s the default network protocol        UDP    protocol udp   You can use UDP for most Services  For  type  LoadBalancer  Services  UDP support depends on the cloud provider offering this 
facility       Special cases      HTTP   protocol http special   If your cloud provider supports it  you can use a Service in LoadBalancer mode to configure a load balancer outside of your Kubernetes cluster  in a special mode where your cloud provider s load balancer implements HTTP   HTTPS reverse proxying  with traffic forwarded to the backend endpoints for that Service   Typically  you set the protocol for the Service to  TCP  and add an   usually specific to your cloud provider  that configures the load balancer to handle traffic at the HTTP level  This configuration might also include serving HTTPS  HTTP over TLS  and reverse proxying plain HTTP to your workload    You can also use an  to expose HTTP HTTPS Services    You might additionally want to specify that the  application protocol   docs concepts services networking service  application protocol  of the connection is  http  or  https   Use  http  if the session from the load balancer to your workload is HTTP without TLS  and use  https  if the session from the load balancer to your workload uses TLS encryption       PROXY protocol   protocol proxy special   If your cloud provider supports it  you can use a Service set to  type  LoadBalancer  to configure a load balancer outside of Kubernetes itself  that will forward connections wrapped with the  PROXY protocol  https   www haproxy org download 2 5 doc proxy protocol txt    The load balancer then sends an initial series of octets describing the incoming connection  similar to this example  PROXY protocol v1        PROXY TCP4 192 0 2 202 10 0 42 7 12345 7 r n      The data after the proxy protocol preamble are the original data from the client  When either side closes the connection  the load balancer also triggers a connection close and sends any remaining data where feasible   Typically  you define a Service with the protocol to  TCP   You also set an annotation  specific to your cloud provider  that configures the load balancer to wrap each incoming 
connection in the PROXY protocol       TLS   protocol tls special   If your cloud provider supports it  you can use a Service set to  type  LoadBalancer  as a way to set up external reverse proxying  where the connection from client to load balancer is TLS encrypted and the load balancer is the TLS server peer  The connection from the load balancer to your workload can also be TLS  or might be plain text  The exact options available to you depend on your cloud provider or custom Service implementation   Typically  you set the protocol to  TCP  and set an annotation  usually specific to your cloud provider  that configures the load balancer to act as a TLS server  You would configure the TLS identity  as server  and possibly also as a client that connects to your workload  using mechanisms that are specific to your cloud provider "}
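The transport and application protocol settings described in this record can be sketched in a single Service manifest; `protocol` and `appProtocol` are the Service port fields the text refers to, while the name, selector, and port numbers are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web             # hypothetical name
  annotations: {}       # cloud-provider-specific load balancer annotations go here
spec:
  type: LoadBalancer
  selector:
    app: web            # hypothetical selector
  ports:
    - name: https
      protocol: TCP     # transport protocol; TCP is the default (UDP and SCTP are also valid)
      appProtocol: https  # the session from load balancer to workload uses TLS
      port: 443
      targetPort: 8443
```

Using `appProtocol: http` instead would indicate that the load-balancer-to-workload session is HTTP without TLS, matching the guidance in the HTTP special case above.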
{"questions":"kubernetes reference package kubeadm k8s io v1beta3 p Package v1beta3 defines the v1beta3 version of the kubeadm configuration file format contenttype tool reference h2 Overview h2 title kubeadm Configuration v1beta3 p A list of changes since v1beta2 p This version improves on the v1beta2 format by fixing some minor issues and adding a few new fields p autogenerated true","answers":"---\ntitle: kubeadm Configuration (v1beta3)\ncontent_type: tool-reference\npackage: kubeadm.k8s.io\/v1beta3\nauto_generated: true\n---\n<h2>Overview<\/h2>\n<p>Package v1beta3 defines the v1beta3 version of the kubeadm configuration file format.\nThis version improves on the v1beta2 format by fixing some minor issues and adding a few new fields.<\/p>\n<p>A list of changes since v1beta2:<\/p>\n<ul>\n<li>The deprecated &quot;ClusterConfiguration.useHyperKubeImage&quot; field has been removed.\nKubeadm no longer supports the hyperkube image.<\/li>\n<li>The &quot;ClusterConfiguration.dns.type&quot; field has been removed since CoreDNS is the only supported\nDNS server type by kubeadm.<\/li>\n<li>Include &quot;datapolicy&quot; tags on the fields that hold secrets.\nThis would result in the field values to be omitted when API structures are printed with klog.<\/li>\n<li>Add &quot;InitConfiguration.skipPhases&quot;, &quot;JoinConfiguration.skipPhases&quot; to allow skipping\na list of phases during kubeadm init\/join command execution.<\/li>\n<li>Add &quot;InitConfiguration.nodeRegistration.imagePullPolicy&quot; and &quot;JoinConfiguration.nodeRegistration.imagePullPolicy&quot;\nto allow specifying the images pull policy during kubeadm &quot;init&quot; and &quot;join&quot;.\nThe value must be one of &quot;Always&quot;, &quot;Never&quot; or &quot;IfNotPresent&quot;.\n&quot;IfNotPresent&quot; is the default, which has been the existing behavior prior to this addition.<\/li>\n<li>Add &quot;InitConfiguration.patches.directory&quot;, &quot;JoinConfiguration.patches.directory&quot; to 
allow\nthe user to configure a directory from which to take patches for components deployed by kubeadm.<\/li>\n<li>Move the BootstrapToken* API and related utilities out of the &quot;kubeadm&quot; API group to a new group\n&quot;bootstraptoken&quot;. The kubeadm API version v1beta3 no longer contains the BootstrapToken* structures.<\/li>\n<\/ul>\n<p>Migration from old kubeadm config versions<\/p>\n<ul>\n<li>kubeadm v1.15.x and newer can be used to migrate from v1beta1 to v1beta2.<\/li>\n<li>kubeadm v1.22.x and newer no longer support v1beta1 and older APIs, but can be used to migrate v1beta2 to v1beta3.<\/li>\n<li>kubeadm v1.27.x and newer no longer support v1beta2 and older APIs,<\/li>\n<\/ul>\n<h2>Basics<\/h2>\n<p>The preferred way to configure kubeadm is to pass an YAML configuration file with the <code>--config<\/code> option. Some of the\nconfiguration options defined in the kubeadm config file are also available as command line flags, but only\nthe most common\/simple use case are supported with this approach.<\/p>\n<p>A kubeadm config file could contain multiple configuration types separated using three dashes (<code>---<\/code>).<\/p>\n<p>kubeadm supports the following configuration types:<\/p>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta3<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>InitConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta3<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>ClusterConfiguration<span 
style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubelet.config.k8s.io\/v1beta1<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>KubeletConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeproxy.config.k8s.io\/v1alpha1<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>KubeProxyConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta3<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>JoinConfiguration<span style=\"color:#bbb\">\n<\/span><\/pre><p>To print the defaults for &quot;init&quot; and &quot;join&quot; actions use the following commands:<\/p>\n<pre style=\"background-color:#fff\">kubeadm config print init-defaults\nkubeadm config print join-defaults\n<\/pre><p>The list of configuration types that must be included in a configuration file depends by the action you are\nperforming (<code>init<\/code> or <code>join<\/code>) and by the configuration options you are going to use (defaults or advanced\ncustomization).<\/p>\n<p>If some configuration types are not provided, or provided only partially, kubeadm will use default values; defaults\nprovided by kubeadm includes also enforcing consistency of values across components 
when required (e.g.\n<code>--cluster-cidr<\/code> flag on controller manager and <code>clusterCIDR<\/code> on kube-proxy).<\/p>\n<p>Users are always allowed to override default values, with the only exception of a small subset of setting with\nrelevance for security (e.g. enforce authorization-mode Node and RBAC on api server).<\/p>\n<p>If the user provides a configuration types that is not expected for the action you are performing, kubeadm will\nignore those types and print a warning.<\/p>\n<h2>Kubeadm init configuration types<\/h2>\n<p>When executing kubeadm init with the <code>--config<\/code> option, the following configuration types could be used:\nInitConfiguration, ClusterConfiguration, KubeProxyConfiguration, KubeletConfiguration, but only one\nbetween InitConfiguration and ClusterConfiguration is mandatory.<\/p>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta3<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>InitConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">bootstrapTokens<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">nodeRegistration<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><\/pre><p>The InitConfiguration type should be used to configure runtime settings, that in case of kubeadm init\nare the configuration of the bootstrap token and all the setting which are specific to the node where\nkubeadm is executed, including:<\/p>\n<ul>\n<li>\n<p>NodeRegistration, that holds fields that relate to registering the new node to the 
cluster;\nuse it to customize the node name, the CRI socket to use or any other settings that should apply to this\nnode only (e.g. the node ip).<\/p>\n<\/li>\n<li>\n<p>LocalAPIEndpoint, that represents the endpoint of the instance of the API server to be deployed on this node;\nuse it e.g. to customize the API server advertise address.<\/p>\n<\/li>\n<\/ul>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta3<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>ClusterConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">networking<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">etcd<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiServer<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraVolumes<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>...<span style=\"color:#bbb\">\n<\/span><\/pre><p>The ClusterConfiguration type should be used to configure cluster-wide settings,\nincluding settings for:<\/p>\n<ul>\n<li>\n<p><code>networking<\/code> that holds configuration for the 
networking topology of the cluster; use it e.g. to customize\nPod subnet or services subnet.<\/p>\n<\/li>\n<li>\n<p><code>etcd<\/code>: use it e.g. to customize the local etcd or to configure the API server\nfor using an external etcd cluster.<\/p>\n<\/li>\n<li>\n<p>kube-apiserver, kube-scheduler, kube-controller-manager configurations; use it to customize control-plane\ncomponents by adding customized settings or overriding kubeadm default settings.<\/p>\n<\/li>\n<\/ul>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeproxy.config.k8s.io\/v1alpha1<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>KubeProxyConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><\/pre><p>The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances\ndeployed in the cluster. 
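As an illustrative sketch (the <code>mode</code> and <code>clusterCIDR</code> values below are assumptions for demonstration, not recommendations), a KubeProxyConfiguration placed in the same file as the kubeadm types might look like:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Illustrative values only; check the kube-proxy reference before use.
mode: "ipvs"
clusterCIDR: "10.244.0.0/24"
```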
If this object is not provided or provided only partially, kubeadm applies defaults.<\/p>\n<p>See https:\/\/kubernetes.io\/docs\/reference\/command-line-tools-reference\/kube-proxy\/ or\nhttps:\/\/pkg.go.dev\/k8s.io\/kube-proxy\/config\/v1alpha1#KubeProxyConfiguration\nfor the official kube-proxy documentation.<\/p>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubelet.config.k8s.io\/v1beta1<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>KubeletConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><\/pre><p>The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances\ndeployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.<\/p>\n<p>See https:\/\/kubernetes.io\/docs\/reference\/command-line-tools-reference\/kubelet\/ or\nhttps:\/\/pkg.go.dev\/k8s.io\/kubelet\/config\/v1beta1#KubeletConfiguration\nfor the official kubelet documentation.<\/p>\n<p>Here is a fully populated example of a single YAML file containing multiple\nconfiguration types to be used during a <code>kubeadm init<\/code> run.<\/p>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta3<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>InitConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">bootstrapTokens<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>- <span
style=\"color:#000;font-weight:bold\">token<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;9a08jv.c0izixklcxtmnze7&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">description<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;kubeadm bootstrap token&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">ttl<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;24h&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>- <span style=\"color:#000;font-weight:bold\">token<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;783bde.3f89s0fje9f38fhf&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">description<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;another bootstrap token&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">usages<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span>- authentication<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span>- signing<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">groups<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span>- system:bootstrappers:kubeadm:default-node-token<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">nodeRegistration<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span><span 
style=\"color:#d14\">&#34;ec2-10-100-0-1&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">criSocket<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/var\/run\/dockershim.sock&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">taints<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">key<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;kubeadmNode&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">value<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;someValue&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">effect<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;NoSchedule&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">kubeletExtraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">v<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#099\">4<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">ignorePreflightErrors<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- IsPrivilegedUser<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">imagePullPolicy<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;IfNotPresent&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span 
style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">localAPIEndpoint<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">advertiseAddress<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;10.100.0.1&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">bindPort<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#099\">6443<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">certificateKey<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">skipPhases<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>- addon\/kube-proxy<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>---<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta3<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>ClusterConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">etcd<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#998;font-style:italic\"># one of local or external<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">local<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    
<\/span><span style=\"color:#000;font-weight:bold\">imageRepository<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;registry.k8s.io&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">imageTag<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;3.2.24&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">dataDir<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/var\/lib\/etcd&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">extraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">listen-client-urls<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;http:\/\/10.100.0.1:2379&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">serverCertSANs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span>- <span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;ec2-10-100-0-1.compute-1.amazonaws.com&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">peerCertSANs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span>- <span style=\"color:#d14\">&#34;10.100.0.1&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#998;font-style:italic\"># external:<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#998;font-style:italic\"># endpoints:<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span 
style=\"color:#998;font-style:italic\"># - &#34;10.100.0.1:2379&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#998;font-style:italic\"># - &#34;10.100.0.2:2379&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#998;font-style:italic\"># caFile: &#34;\/etcd\/kubernetes\/pki\/etcd\/etcd-ca.crt&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#998;font-style:italic\"># certFile: &#34;\/etcd\/kubernetes\/pki\/etcd\/etcd.crt&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#998;font-style:italic\"># keyFile: &#34;\/etcd\/kubernetes\/pki\/etcd\/etcd.key&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">networking<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">serviceSubnet<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;10.96.0.0\/16&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">podSubnet<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;10.244.0.0\/24&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">dnsDomain<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;cluster.local&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kubernetesVersion<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;v1.21.0&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span 
style=\"color:#000;font-weight:bold\">controlPlaneEndpoint<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;10.100.0.1:6443&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiServer<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">authorization-mode<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;Node,RBAC&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraVolumes<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;some-volume&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">hostPath<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">mountPath<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-pod-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">readOnly<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#000;font-weight:bold\">false<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">pathType<\/span>:<span style=\"color:#bbb\"> <\/span>File<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  
<\/span><span style=\"color:#000;font-weight:bold\">certSANs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#d14\">&#34;10.100.1.1&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#d14\">&#34;ec2-10-100-0-1.compute-1.amazonaws.com&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">timeoutForControlPlane<\/span>:<span style=\"color:#bbb\"> <\/span>4m0s<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">controllerManager<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">&#34;node-cidr-mask-size&#34;: <\/span><span style=\"color:#d14\">&#34;20&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraVolumes<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;some-volume&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">hostPath<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">mountPath<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-pod-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span 
style=\"color:#000;font-weight:bold\">readOnly<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#000;font-weight:bold\">false<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">pathType<\/span>:<span style=\"color:#bbb\"> <\/span>File<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">scheduler<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">bind-address<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;10.100.0.1&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraVolumes<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;some-volume&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">hostPath<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">mountPath<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-pod-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">readOnly<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#000;font-weight:bold\">false<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span 
style=\"color:#000;font-weight:bold\">pathType<\/span>:<span style=\"color:#bbb\"> <\/span>File<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">certificatesDir<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/kubernetes\/pki&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">imageRepository<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;registry.k8s.io&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">clusterName<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;example-cluster&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>---<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubelet.config.k8s.io\/v1beta1<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>KubeletConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#998;font-style:italic\"># kubelet specific options here<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>---<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeproxy.config.k8s.io\/v1alpha1<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>KubeProxyConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span 
# kube-proxy">
style=\"color:#998;font-style:italic\"># kube-proxy specific options here<\/span><span style=\"color:#bbb\">\n<\/span><\/pre><h2>Kubeadm join configuration types<\/h2>\n<p>When executing <code>kubeadm join<\/code> with the <code>--config<\/code> option, the JoinConfiguration type should be provided.<\/p>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta3<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>JoinConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><\/pre><p>The JoinConfiguration type should be used to configure runtime settings that, in the case of <code>kubeadm join<\/code>,\nare the discovery method used for accessing the cluster info and all the settings which are specific\nto the node where kubeadm is executed, including:<\/p>\n<ul>\n<li>\n<p><code>nodeRegistration<\/code>, that holds fields that relate to registering the new node to the cluster;\nuse it to customize the node name, the CRI socket to use or any other settings that should apply to this\nnode only (e.g.
the node ip).<\/p>\n<\/li>\n<li>\n<p><code>apiEndpoint<\/code>, that represents the endpoint of the instance of the API server to be eventually deployed on this node.<\/p>\n<\/li>\n<\/ul>\n\n\n## Resource Types \n\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta3-ClusterConfiguration)\n- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)\n- [JoinConfiguration](#kubeadm-k8s-io-v1beta3-JoinConfiguration)\n  \n    \n    \n\n## `BootstrapToken`     {#BootstrapToken}\n    \n\n**Appears in:**\n\n- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)\n\n\n<p>BootstrapToken describes one bootstrap token, stored as a Secret in the cluster<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>token<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#BootstrapTokenString\"><code>BootstrapTokenString<\/code><\/a>\n<\/td>\n<td>\n   <p><code>token<\/code> is used for establishing bidirectional trust between nodes and control-planes.\nUsed for joining nodes in the cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>description<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>description<\/code> sets a human-friendly message why this token exists and what it's used\nfor, so other administrators can know its purpose.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ttl<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>ttl<\/code> defines the time to live for this token. Defaults to <code>24h<\/code>.\n<code>expires<\/code> and <code>ttl<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>expires<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#time-v1-meta\"><code>meta\/v1.Time<\/code><\/a>\n<\/td>\n<td>\n   <p><code>expires<\/code> specifies the timestamp when this token expires. 
Defaults to being set\ndynamically at runtime based on the <code>ttl<\/code>. <code>expires<\/code> and <code>ttl<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>usages<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>usages<\/code> describes the ways in which this token can be used. By default, it can be used\nfor establishing bidirectional trust, but that can be changed here.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>groups<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>groups<\/code> specifies the extra groups that this token will authenticate as when\/if\nused for authentication.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `BootstrapTokenString`     {#BootstrapTokenString}\n    \n\n**Appears in:**\n\n- [BootstrapToken](#BootstrapToken)\n\n\n<p>BootstrapTokenString is a token of the format <code>abcdef.abcdef0123456789<\/code> that is used\nfor both validation of the practical reachability of the API server from a joining node's point\nof view and as an authentication method for the node in the bootstrap phase of\n&quot;kubeadm join&quot;.
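The token format above is strict enough to check mechanically; here is a minimal sketch (the `isValidBootstrapToken` helper is ours for illustration, not part of kubeadm):

```go
package main

import (
	"fmt"
	"regexp"
)

// Bootstrap tokens have the form "<6-char token ID>.<16-char token secret>",
// each part drawn from [a-z0-9].
var bootstrapTokenRe = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

func isValidBootstrapToken(token string) bool {
	return bootstrapTokenRe.MatchString(token)
}

func main() {
	fmt.Println(isValidBootstrapToken("abcdef.abcdef0123456789")) // true
	fmt.Println(isValidBootstrapToken("not-a-valid-token"))       // false
}
```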
This token is, and should be, short-lived.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>-<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>-<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n  \n\n## `ClusterConfiguration`     {#kubeadm-k8s-io-v1beta3-ClusterConfiguration}\n    \n\n\n<p>ClusterConfiguration contains cluster-wide configuration for a kubeadm cluster.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubeadm.k8s.io\/v1beta3<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>ClusterConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>etcd<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-Etcd\"><code>Etcd<\/code><\/a>\n<\/td>\n<td>\n   <p><code>etcd<\/code> holds the configuration for etcd.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>networking<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-Networking\"><code>Networking<\/code><\/a>\n<\/td>\n<td>\n   <p><code>networking<\/code> holds configuration for the networking topology of the cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kubernetesVersion<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>kubernetesVersion<\/code> is the target version of the control plane.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>controlPlaneEndpoint<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>controlPlaneEndpoint<\/code> sets a stable IP address or DNS name for the control plane.\nIt can be a valid IP address or an RFC-1123 DNS subdomain, both with optional TCP port.\nIn case the <code>controlPlaneEndpoint<\/code> is not
specified, the <code>advertiseAddress<\/code> + <code>bindPort<\/code>\nare used; in case the <code>controlPlaneEndpoint<\/code> is specified but without a TCP port,\nthe <code>bindPort<\/code> is used.\nPossible usages are:<\/p>\n<ul>\n<li>In a cluster with more than one control plane instances, this field should be\nassigned the address of the external load balancer in front of the\ncontrol plane instances.<\/li>\n<li>In environments with enforced node recycling, the <code>controlPlaneEndpoint<\/code> could\nbe used for assigning a stable DNS to the control plane.<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>apiServer<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-APIServer\"><code>APIServer<\/code><\/a>\n<\/td>\n<td>\n   <p><code>apiServer<\/code> contains extra settings for the API server.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>controllerManager<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-ControlPlaneComponent\"><code>ControlPlaneComponent<\/code><\/a>\n<\/td>\n<td>\n   <p><code>controllerManager<\/code> contains extra settings for the controller manager.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>scheduler<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-ControlPlaneComponent\"><code>ControlPlaneComponent<\/code><\/a>\n<\/td>\n<td>\n   <p><code>scheduler<\/code> contains extra settings for the scheduler.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dns<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-DNS\"><code>DNS<\/code><\/a>\n<\/td>\n<td>\n   <p><code>dns<\/code> defines the options for the DNS add-on installed in the cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificatesDir<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>certificatesDir<\/code> specifies where to store or look for all required certificates.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imageRepository<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>imageRepository<\/code> sets the container registry to pull images from.\nIf empty, <code>registry.k8s.io<\/code> will be used by 
default.\nIn case the Kubernetes version is a CI build (the Kubernetes version starts with <code>ci\/<\/code>),\n<code>gcr.io\/k8s-staging-ci-images<\/code> will be used as a default for control plane components\nand for kube-proxy, while <code>registry.k8s.io<\/code> will be used for all the other images.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>featureGates<\/code><br\/>\n<code>map[string]bool<\/code>\n<\/td>\n<td>\n   <p><code>featureGates<\/code> contains the feature gates enabled by the user.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clusterName<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>The cluster name.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `InitConfiguration`     {#kubeadm-k8s-io-v1beta3-InitConfiguration}\n    \n\n\n<p>InitConfiguration contains a list of fields that are specific to &quot;kubeadm init&quot;-only runtime\ninformation. These fields are solely used the first time <code>kubeadm init<\/code> runs.\nAfter that, the information in the fields IS NOT uploaded to the <code>kubeadm-config<\/code> ConfigMap\nthat is used, for instance, by <code>kubeadm upgrade<\/code>.
These fields must be omitempty.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubeadm.k8s.io\/v1beta3<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>InitConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>bootstrapTokens<\/code><br\/>\n<a href=\"#BootstrapToken\"><code>[]BootstrapToken<\/code><\/a>\n<\/td>\n<td>\n   <p><code>bootstrapTokens<\/code> is respected at <code>kubeadm init<\/code> time and describes a set of Bootstrap Tokens to create.\nThis information IS NOT uploaded to the kubeadm cluster configmap, partly because of its sensitive nature<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>nodeRegistration<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-NodeRegistrationOptions\"><code>NodeRegistrationOptions<\/code><\/a>\n<\/td>\n<td>\n   <p><code>nodeRegistration<\/code> holds fields that relate to registering the new control-plane node\nto the cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>localAPIEndpoint<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-APIEndpoint\"><code>APIEndpoint<\/code><\/a>\n<\/td>\n<td>\n   <p><code>localAPIEndpoint<\/code> represents the endpoint of the API server instance that's deployed on this\ncontrol plane node. In HA setups, this differs from <code>ClusterConfiguration.controlPlaneEndpoint<\/code>\nin the sense that <code>controlPlaneEndpoint<\/code> is the global endpoint for the cluster, which then\nload-balances the requests to each individual API server.\nThis configuration object lets you customize what IP\/DNS name and port the local API server\nadvertises it's accessible on. 
By default, kubeadm tries to auto-detect the IP of the default\ninterface and use that, but in case that process fails you may set the desired value here.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificateKey<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>certificateKey<\/code> sets the key with which certificates and keys are encrypted prior to being\nuploaded in a Secret in the cluster during the <code>uploadcerts init<\/code> phase.\nThe certificate key is a hex encoded string that is an AES key of size 32 bytes.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>skipPhases<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>skipPhases<\/code> is a list of phases to skip during command execution.\nThe list of phases can be obtained with the <code>kubeadm init --help<\/code> command.\nThe flag &quot;--skip-phases&quot; takes precedence over this field.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>patches<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-Patches\"><code>Patches<\/code><\/a>\n<\/td>\n<td>\n   <p><code>patches<\/code> contains options related to applying patches to components deployed by kubeadm during\n<code>kubeadm init<\/code>.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `JoinConfiguration`     {#kubeadm-k8s-io-v1beta3-JoinConfiguration}\n    \n\n\n<p>JoinConfiguration contains elements describing a particular node.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubeadm.k8s.io\/v1beta3<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>JoinConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>nodeRegistration<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-NodeRegistrationOptions\"><code>NodeRegistrationOptions<\/code><\/a>\n<\/td>\n<td>\n   <p><code>nodeRegistration<\/code> holds fields that relate to registering the new\ncontrol-plane node to the 
cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>caCertPath<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>caCertPath<\/code> is the path to the SSL certificate authority used to secure\ncommunications between a node and the control-plane.\nDefaults to &quot;\/etc\/kubernetes\/pki\/ca.crt&quot;.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>discovery<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-Discovery\"><code>Discovery<\/code><\/a>\n<\/td>\n<td>\n   <p><code>discovery<\/code> specifies the options for the kubelet to use during the TLS\nbootstrap process.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>controlPlane<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-JoinControlPlane\"><code>JoinControlPlane<\/code><\/a>\n<\/td>\n<td>\n   <p><code>controlPlane<\/code> defines the additional control plane instance to be deployed\non the joining node. If nil, no additional control plane instance will be deployed.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>skipPhases<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>skipPhases<\/code> is a list of phases to skip during command execution.\nThe list of phases can be obtained with the <code>kubeadm join --help<\/code> command.\nThe flag <code>--skip-phases<\/code> takes precedence over this field.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>patches<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-Patches\"><code>Patches<\/code><\/a>\n<\/td>\n<td>\n   <p><code>patches<\/code> contains options related to applying patches to components deployed\nby kubeadm during <code>kubeadm join<\/code>.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `APIEndpoint`     {#kubeadm-k8s-io-v1beta3-APIEndpoint}\n    \n\n**Appears in:**\n\n- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)\n\n- [JoinControlPlane](#kubeadm-k8s-io-v1beta3-JoinControlPlane)\n\n\n<p>APIEndpoint struct contains elements of the API server instance deployed on a node.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th 
width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>advertiseAddress<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>advertiseAddress<\/code> sets the IP address for the API server to advertise.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>bindPort<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p><code>bindPort<\/code> sets the secure port for the API Server to bind to.\nDefaults to 6443.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `APIServer`     {#kubeadm-k8s-io-v1beta3-APIServer}\n    \n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta3-ClusterConfiguration)\n\n\n<p>APIServer holds settings necessary for API server deployments in the cluster<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ControlPlaneComponent<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-ControlPlaneComponent\"><code>ControlPlaneComponent<\/code><\/a>\n<\/td>\n<td>(Members of <code>ControlPlaneComponent<\/code> are embedded into this type.)\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>certSANs<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>certSANs<\/code> sets extra Subject Alternative Names (SANs) for the API Server signing\ncertificate.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>timeoutForControlPlane<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>timeoutForControlPlane<\/code> controls the timeout that we wait for API server to appear.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `BootstrapTokenDiscovery`     {#kubeadm-k8s-io-v1beta3-BootstrapTokenDiscovery}\n    \n\n**Appears in:**\n\n- [Discovery](#kubeadm-k8s-io-v1beta3-Discovery)\n\n\n<p>BootstrapTokenDiscovery is used to set the options for bootstrap token based 
discovery.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>token<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>token<\/code> is a token used to validate cluster information fetched from the\ncontrol-plane.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>apiServerEndpoint<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>apiServerEndpoint<\/code> is an IP or domain name to the API server from which\ninformation will be fetched.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>caCertHashes<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>caCertHashes<\/code> specifies a set of public key pins to verify when token-based discovery\nis used. The root CA found during discovery must match one of these values.\nSpecifying an empty set disables root CA pinning, which can be unsafe.\nEach hash is specified as <code>&lt;type&gt;:&lt;value&gt;<\/code>, where the only currently supported type is\n&quot;sha256&quot;. This is a hex-encoded SHA-256 hash of the Subject Public Key Info (SPKI)\nobject in DER-encoded ASN.1. These hashes can be calculated using, for example, OpenSSL.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>unsafeSkipCAVerification<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>unsafeSkipCAVerification<\/code> allows token-based discovery without CA verification\nvia <code>caCertHashes<\/code>. 
This can weaken the security of kubeadm since other nodes can\nimpersonate the control-plane.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ControlPlaneComponent`     {#kubeadm-k8s-io-v1beta3-ControlPlaneComponent}\n    \n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta3-ClusterConfiguration)\n\n- [APIServer](#kubeadm-k8s-io-v1beta3-APIServer)\n\n\n<p>ControlPlaneComponent holds settings common to control plane components of the cluster<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>extraArgs<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p><code>extraArgs<\/code> is an extra set of flags to pass to the control plane component.\nA key in this map is the flag name as it appears on the command line except\nwithout leading dash(es).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extraVolumes<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-HostPathMount\"><code>[]HostPathMount<\/code><\/a>\n<\/td>\n<td>\n   <p><code>extraVolumes<\/code> is an extra set of host volumes, mounted to the control plane component.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `DNS`     {#kubeadm-k8s-io-v1beta3-DNS}\n    \n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta3-ClusterConfiguration)\n\n\n<p>DNS defines the DNS addon that should be used in the cluster<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ImageMeta<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-ImageMeta\"><code>ImageMeta<\/code><\/a>\n<\/td>\n<td>(Members of <code>ImageMeta<\/code> are embedded into this type.)\n   <p><code>imageMeta<\/code> allows customizing the image used for the DNS component.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Discovery`     {#kubeadm-k8s-io-v1beta3-Discovery}\n    \n\n**Appears in:**\n\n- 
[JoinConfiguration](#kubeadm-k8s-io-v1beta3-JoinConfiguration)\n\n\n<p>Discovery specifies the options for the kubelet to use during the TLS Bootstrap process.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>bootstrapToken<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-BootstrapTokenDiscovery\"><code>BootstrapTokenDiscovery<\/code><\/a>\n<\/td>\n<td>\n   <p><code>bootstrapToken<\/code> is used to set the options for bootstrap token based discovery.\n<code>bootstrapToken<\/code> and <code>file<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>file<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-FileDiscovery\"><code>FileDiscovery<\/code><\/a>\n<\/td>\n<td>\n   <p><code>file<\/code> is used to specify a file or URL to a kubeconfig file from which to load\ncluster information.\n<code>bootstrapToken<\/code> and <code>file<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tlsBootstrapToken<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>tlsBootstrapToken<\/code> is a token used for TLS bootstrapping.\nIf <code>bootstrapToken<\/code> is set, this field is defaulted to <code>.bootstrapToken.token<\/code>, but\ncan be overridden. 
If <code>file<\/code> is set, this field <strong>must be set<\/strong> in case the KubeConfigFile\ndoes not contain any other authentication information<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>timeout<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>timeout<\/code> modifies the discovery timeout.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Etcd`     {#kubeadm-k8s-io-v1beta3-Etcd}\n    \n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta3-ClusterConfiguration)\n\n\n<p>Etcd contains elements describing Etcd configuration.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>local<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-LocalEtcd\"><code>LocalEtcd<\/code><\/a>\n<\/td>\n<td>\n   <p><code>local<\/code> provides configuration knobs for configuring the local etcd instance.\n<code>local<\/code> and <code>external<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>external<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-ExternalEtcd\"><code>ExternalEtcd<\/code><\/a>\n<\/td>\n<td>\n   <p><code>external<\/code> describes how to connect to an external etcd cluster.\n<code>local<\/code> and <code>external<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExternalEtcd`     {#kubeadm-k8s-io-v1beta3-ExternalEtcd}\n    \n\n**Appears in:**\n\n- [Etcd](#kubeadm-k8s-io-v1beta3-Etcd)\n\n\n<p>ExternalEtcd describes an external etcd cluster.\nKubeadm has no knowledge of where certificate files live and they must be supplied.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>endpoints<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>endpoints<\/code> contains the list of etcd 
members.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>caFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>caFile<\/code> is an SSL Certificate Authority (CA) file used to secure etcd communication.\nRequired if using a TLS connection.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>certFile<\/code> is an SSL certification file used to secure etcd communication.\nRequired if using a TLS connection.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>keyFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>keyFile<\/code> is an SSL key file used to secure etcd communication.\nRequired if using a TLS connection.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `FileDiscovery`     {#kubeadm-k8s-io-v1beta3-FileDiscovery}\n    \n\n**Appears in:**\n\n- [Discovery](#kubeadm-k8s-io-v1beta3-Discovery)\n\n\n<p>FileDiscovery is used to specify a file or URL to a kubeconfig file from which to load\ncluster information.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>kubeConfigPath<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>kubeConfigPath<\/code> is used to specify the actual file path or URL to the kubeconfig\nfile from which to load cluster information.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `HostPathMount`     {#kubeadm-k8s-io-v1beta3-HostPathMount}\n    \n\n**Appears in:**\n\n- [ControlPlaneComponent](#kubeadm-k8s-io-v1beta3-ControlPlaneComponent)\n\n\n<p>HostPathMount contains elements describing volumes that are mounted from the host.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>name<\/code> is the name of the volume 
inside the Pod template.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>hostPath<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>hostPath<\/code> is the path in the host that will be mounted inside the Pod.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>mountPath<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>mountPath<\/code> is the path inside the Pod where <code>hostPath<\/code> will be mounted.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>readOnly<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>readOnly<\/code> controls write access to the volume.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>pathType<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#hostpathtype-v1-core\"><code>core\/v1.HostPathType<\/code><\/a>\n<\/td>\n<td>\n   <p><code>pathType<\/code> is the type of the <code>hostPath<\/code>.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ImageMeta`     {#kubeadm-k8s-io-v1beta3-ImageMeta}\n    \n\n**Appears in:**\n\n- [DNS](#kubeadm-k8s-io-v1beta3-DNS)\n\n- [LocalEtcd](#kubeadm-k8s-io-v1beta3-LocalEtcd)\n\n\n<p>ImageMeta allows customizing the image used for components that do not\noriginate from the Kubernetes\/Kubernetes release process<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>imageRepository<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>imageRepository<\/code> sets the container registry to pull images from.\nIf not set, the <code>imageRepository<\/code> defined in ClusterConfiguration will be used instead.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imageTag<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>imageTag<\/code> allows specifying a tag for the image.\nIn case this value is set, kubeadm does not automatically change the version of\nthe above components during upgrades.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## 
`JoinControlPlane`     {#kubeadm-k8s-io-v1beta3-JoinControlPlane}\n    \n\n**Appears in:**\n\n- [JoinConfiguration](#kubeadm-k8s-io-v1beta3-JoinConfiguration)\n\n\n<p>JoinControlPlane contains elements describing an additional control plane instance\nto be deployed on the joining node.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>localAPIEndpoint<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-APIEndpoint\"><code>APIEndpoint<\/code><\/a>\n<\/td>\n<td>\n   <p><code>localAPIEndpoint<\/code> represents the endpoint of the API server instance to be\ndeployed on this node.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificateKey<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>certificateKey<\/code> is the key that is used for decryption of certificates after\nthey are downloaded from the secret upon joining a new control plane node.\nThe corresponding encryption key is in the InitConfiguration.\nThe certificate key is a hex encoded string that is an AES key of size 32 bytes.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `LocalEtcd`     {#kubeadm-k8s-io-v1beta3-LocalEtcd}\n    \n\n**Appears in:**\n\n- [Etcd](#kubeadm-k8s-io-v1beta3-Etcd)\n\n\n<p>LocalEtcd describes that kubeadm should run an etcd cluster locally.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ImageMeta<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta3-ImageMeta\"><code>ImageMeta<\/code><\/a>\n<\/td>\n<td>(Members of <code>ImageMeta<\/code> are embedded into this type.)\n   <p>ImageMeta allows customizing the container used for etcd.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dataDir<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>dataDir<\/code> is the directory where etcd will place its data.\nDefaults to 
&quot;\/var\/lib\/etcd&quot;.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extraArgs<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p><code>extraArgs<\/code> are extra arguments provided to the etcd binary when run\ninside a static Pod. A key in this map is the flag name as it appears on the\ncommand line except without leading dash(es).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>serverCertSANs<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>serverCertSANs<\/code> sets extra Subject Alternative Names (SANs) for the etcd\nserver signing certificate.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>peerCertSANs<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>peerCertSANs<\/code> sets extra Subject Alternative Names (SANs) for the etcd peer\nsigning certificate.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Networking`     {#kubeadm-k8s-io-v1beta3-Networking}\n    \n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta3-ClusterConfiguration)\n\n\n<p>Networking contains elements describing cluster's networking configuration.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>serviceSubnet<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>serviceSubnet<\/code> is the subnet used by Kubernetes Services. Defaults to &quot;10.96.0.0\/12&quot;.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>podSubnet<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>podSubnet<\/code> is the subnet used by Pods.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dnsDomain<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>dnsDomain<\/code> is the DNS domain used by Kubernetes Services. 
Defaults to &quot;cluster.local&quot;.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NodeRegistrationOptions`     {#kubeadm-k8s-io-v1beta3-NodeRegistrationOptions}\n    \n\n**Appears in:**\n\n- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)\n\n- [JoinConfiguration](#kubeadm-k8s-io-v1beta3-JoinConfiguration)\n\n\n<p>NodeRegistrationOptions holds fields that relate to registering a new control-plane or\nnode to the cluster, either via <code>kubeadm init<\/code> or <code>kubeadm join<\/code>.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>name<\/code> is the <code>.metadata.name<\/code> field of the Node API object that will be created in this\n<code>kubeadm init<\/code> or <code>kubeadm join<\/code> operation.\nThis field is also used in the <code>CommonName<\/code> field of the kubelet's client certificate to\nthe API server.\nDefaults to the hostname of the node if not provided.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>criSocket<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>criSocket<\/code> is used to retrieve container runtime info.\nThis information will be annotated to the Node API object, for later re-use.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>taints<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#taint-v1-core\"><code>[]core\/v1.Taint<\/code><\/a>\n<\/td>\n<td>\n   <p><code>taints<\/code> specifies the taints the Node API object should be registered with.\nIf this field is unset, i.e. nil, it will be defaulted with a control-plane taint for control-plane nodes.\nIf you don't want to taint your control-plane node, set this field to an empty list,\ni.e. <code>taints: []<\/code> in the YAML file. 
This field is solely used for Node registration.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kubeletExtraArgs<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p><code>kubeletExtraArgs<\/code> passes through extra arguments to the kubelet.\nThe arguments here are passed to the kubelet command line via the environment file\nkubeadm writes at runtime for the kubelet to source.\nThis overrides the generic base-level configuration in the <code>kubelet-config<\/code> ConfigMap.\nFlags have higher priority when parsing. These values are local and specific to the node\nkubeadm is executing on. A key in this map is the flag name as it appears on the\ncommand line except without leading dash(es).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ignorePreflightErrors<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>ignorePreflightErrors<\/code> provides a list of pre-flight errors to be ignored when\nthe current node is registered, e.g. <code>IsPrivilegedUser,Swap<\/code>.\nValue <code>all<\/code> ignores errors from all checks.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imagePullPolicy<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#pullpolicy-v1-core\"><code>core\/v1.PullPolicy<\/code><\/a>\n<\/td>\n<td>\n   <p><code>imagePullPolicy<\/code> specifies the policy for image pulling during kubeadm &quot;init&quot; and\n&quot;join&quot; operations.\nThe value of this field must be one of &quot;Always&quot;, &quot;IfNotPresent&quot; or &quot;Never&quot;.\nIf this field is not set, kubeadm will default it to &quot;IfNotPresent&quot;, or pull the required\nimages if not present on the host.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Patches`     {#kubeadm-k8s-io-v1beta3-Patches}\n    \n\n**Appears in:**\n\n- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)\n\n- [JoinConfiguration](#kubeadm-k8s-io-v1beta3-JoinConfiguration)\n\n\n<p>Patches contains options related to applying patches to components 
deployed by kubeadm.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>directory<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>directory<\/code> is a path to a directory that contains files named\n&quot;target[suffix][+patchtype].extension&quot;.\nFor example, &quot;kube-apiserver0+merge.yaml&quot; or just &quot;etcd.json&quot;. &quot;target&quot; can be one of\n&quot;kube-apiserver&quot;, &quot;kube-controller-manager&quot;, &quot;kube-scheduler&quot;, &quot;etcd&quot;. &quot;patchtype&quot; can\nbe one of &quot;strategic&quot; &quot;merge&quot; or &quot;json&quot; and they match the patch formats supported by\nkubectl.\nThe default &quot;patchtype&quot; is &quot;strategic&quot;. &quot;extension&quot; must be either &quot;json&quot; or &quot;yaml&quot;.\n&quot;suffix&quot; is an optional string that can be used to determine which patches are applied\nfirst alpha-numerically.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n ","site":"kubernetes reference","answers_cleaned":"    title  kubeadm Configuration  v1beta3  content type  tool reference package  kubeadm k8s io v1beta3 auto generated  true      h2 Overview  h2   p Package v1beta3 defines the v1beta3 version of the kubeadm configuration file format  This version improves on the v1beta2 format by fixing some minor issues and adding a few new fields   p   p A list of changes since v1beta2   p   ul   li The deprecated  quot ClusterConfiguration useHyperKubeImage quot  field has been removed  Kubeadm no longer supports the hyperkube image   li   li The  quot ClusterConfiguration dns type quot  field has been removed since CoreDNS is the only supported DNS server type by kubeadm   li   li Include  quot datapolicy quot  tags on the fields that hold secrets  This would result in the field values to be omitted when API structures are printed with klog   li   li Add  quot InitConfiguration skipPhases quot    
quot JoinConfiguration skipPhases quot  to allow skipping a list of phases during kubeadm init join command execution   li   li Add  quot InitConfiguration nodeRegistration imagePullPolicy quot  and  quot JoinConfiguration nodeRegistration imagePullPolicy quot  to allow specifying the images pull policy during kubeadm  quot init quot  and  quot join quot   The value must be one of  quot Always quot    quot Never quot  or  quot IfNotPresent quot    quot IfNotPresent quot  is the default  which has been the existing behavior prior to this addition   li   li Add  quot InitConfiguration patches directory quot    quot JoinConfiguration patches directory quot  to allow the user to configure a directory from which to take patches for components deployed by kubeadm   li   li Move the BootstrapToken  API and related utilities out of the  quot kubeadm quot  API group to a new group  quot bootstraptoken quot   The kubeadm API version v1beta3 no longer contains the BootstrapToken  structures   li    ul   p Migration from old kubeadm config versions  p   ul   li kubeadm v1 15 x and newer can be used to migrate from v1beta1 to v1beta2   li   li kubeadm v1 22 x and newer no longer support v1beta1 and older APIs  but can be used to migrate v1beta2 to v1beta3   li   li kubeadm v1 27 x and newer no longer support v1beta2 and older APIs   li    ul   h2 Basics  h2   p The preferred way to configure kubeadm is to pass an YAML configuration file with the  code   config  code  option  Some of the configuration options defined in the kubeadm config file are also available as command line flags  but only the most common simple use case are supported with this approach   p   p A kubeadm config file could contain multiple configuration types separated using three dashes   code      code     p   p kubeadm supports the following configuration types   p   pre style  background color  fff   span style  color  000 font weight bold  apiVersion  span   span style  color  bbb     span kubeadm k8s io 
v1beta3 span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  kind  span   span style  color  bbb     span InitConfiguration span style  color  bbb     span  span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  apiVersion  span   span style  color  bbb     span kubeadm k8s io v1beta3 span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  kind  span   span style  color  bbb     span ClusterConfiguration span style  color  bbb     span  span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  apiVersion  span   span style  color  bbb     span kubelet config k8s io v1beta1 span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  kind  span   span style  color  bbb     span KubeletConfiguration span style  color  bbb     span  span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  apiVersion  span   span style  color  bbb     span kubeproxy config k8s io v1alpha1 span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  kind  span   span style  color  bbb     span KubeProxyConfiguration span style  color  bbb     span  span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  apiVersion  span   span style  color  bbb     span kubeadm k8s io v1beta3 span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  kind  span   span style  color  bbb     span JoinConfiguration span style  color  bbb     span   pre  p To print the defaults for  quot init quot  and  quot join quot  actions use the following commands   p   pre style  background color  fff  kubeadm config print init defaults kubeadm config print join defaults   pre  p The 
list of configuration types that must be included in a configuration file depends by the action you are performing   code init  code  or  code join  code   and by the configuration options you are going to use  defaults or advanced customization    p   p If some configuration types are not provided  or provided only partially  kubeadm will use default values  defaults provided by kubeadm includes also enforcing consistency of values across components when required  e g   code   cluster cidr  code  flag on controller manager and  code clusterCIDR  code  on kube proxy    p   p Users are always allowed to override default values  with the only exception of a small subset of setting with relevance for security  e g  enforce authorization mode Node and RBAC on api server    p   p If the user provides a configuration types that is not expected for the action you are performing  kubeadm will ignore those types and print a warning   p   h2 Kubeadm init configuration types  h2   p When executing kubeadm init with the  code   config  code  option  the following configuration types could be used  InitConfiguration  ClusterConfiguration  KubeProxyConfiguration  KubeletConfiguration  but only one between InitConfiguration and ClusterConfiguration is mandatory   p   pre style  background color  fff   span style  color  000 font weight bold  apiVersion  span   span style  color  bbb     span kubeadm k8s io v1beta3 span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  kind  span   span style  color  bbb     span InitConfiguration span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  bootstrapTokens  span   span style  color  bbb     span  span style  color  bbb      span     span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  nodeRegistration  span   span style  color  bbb     span  span style  color  bbb      span     span 
The InitConfiguration type should be used to configure runtime settings, that in the case of kubeadm init are the configuration of the bootstrap token and all the settings which are specific to the node where kubeadm is executed, including:

- NodeRegistration, that holds fields that relate to registering the new node to the cluster; use it to customize the node name, the CRI socket to use or any other settings that should apply to this node only (e.g. the node IP).
- LocalAPIEndpoint, that represents the endpoint of the instance of the API server to be deployed on this node; use it e.g. to customize the API server advertise address.

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  ...
etcd:
  ...
apiServer:
  extraArgs:
    ...
  extraVolumes:
    ...
  ...
```
The ClusterConfiguration type should be used to configure cluster-wide settings, including settings for:

- `networking`, that holds configuration for the networking topology of the cluster; use it e.g. to customize the Pod subnet or the services subnet.
- `etcd`; use it e.g. to customize the local etcd or to configure the API server for using an external etcd cluster.
- kube-apiserver, kube-scheduler, kube-controller-manager configurations; use them to customize control-plane components by adding customized settings or overriding kubeadm default settings.

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
...
```

The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.

See https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ or https://pkg.go.dev/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration for the official kube-proxy documentation.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
...
```
The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.

See https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or https://pkg.go.dev/k8s.io/kubelet/config/v1beta1#KubeletConfiguration for the official kubelet documentation.

Here is a fully populated example of a single YAML file containing multiple configuration types to be used during a `kubeadm init` run.

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
  - token: "9a08jv.c0izixklcxtmnze7"
    description: "kubeadm bootstrap token"
    ttl: "24h"
  - token: "783bde.3f89s0fje9f38fhf"
    description: "another bootstrap token"
    usages:
      - authentication
      - signing
    groups:
      - system:bootstrappers:kubeadm:default-node-token
nodeRegistration:
  name: "ec2-10-100-0-1"
  criSocket: "/var/run/dockershim.sock"
  taints:
    - key: "kubeadmNode"
      value: "someValue"
      effect: "NoSchedule"
  kubeletExtraArgs:
    v: 4
  ignorePreflightErrors:
    - IsPrivilegedUser
  imagePullPolicy: "IfNotPresent"
localAPIEndpoint:
  advertiseAddress: "10.100.0.1"
  bindPort: 6443
certificateKey: "e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204"
skipPhases:
  - addon/kube-proxy
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  # one of local or external
  local:
    imageRepository: "registry.k8s.io"
    imageTag: "3.2.24"
    dataDir: "/var/lib/etcd"
    extraArgs:
      listen-client-urls: "http://10.100.0.1:2379"
    serverCertSANs:
      - "ec2-10-100-0-1.compute-1.amazonaws.com"
    peerCertSANs:
      - "10.100.0.1"
  # external:
  #   endpoints:
  #     - "10.100.0.1:2379"
  #     - "10.100.0.2:2379"
  #   caFile: "/etcd/kubernetes/pki/etcd/etcd-ca.crt"
  #   certFile: "/etcd/kubernetes/pki/etcd/etcd.crt"
  #   keyFile: "/etcd/kubernetes/pki/etcd/etcd.key"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "10.244.0.0/24"
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.21.0"
controlPlaneEndpoint: "10.100.0.1:6443"
apiServer:
  extraArgs:
    authorization-mode: "Node,RBAC"
  extraVolumes:
    - name: "some-volume"
      hostPath: "/etc/some-path"
      mountPath: "/etc/some-pod-path"
      readOnly: false
      pathType: File
  certSANs:
    - "10.100.1.1"
    - "ec2-10-100-0-1.compute-1.amazonaws.com"
  timeoutForControlPlane: 4m0s
controllerManager:
  extraArgs:
    "node-cidr-mask-size": "20"
  extraVolumes:
    - name: "some-volume"
      hostPath: "/etc/some-path"
      mountPath: "/etc/some-pod-path"
      readOnly: false
      pathType: File
scheduler:
  extraArgs:
    bind-address: "10.100.0.1"
  extraVolumes:
    - name: "some-volume"
      hostPath: "/etc/some-path"
      mountPath: "/etc/some-pod-path"
      readOnly: false
      pathType: File
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "registry.k8s.io"
clusterName: "example-cluster"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# kubelet specific options here
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# kube-proxy specific options here
```
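Two constraints implicit in an example like the one above are that the Service and Pod subnets must not overlap, and that `certificateKey` must decode to a 32-byte AES key. A small stdlib-only Python sketch of such pre-flight checks (illustrative, not part of kubeadm's own validation):

```python
import ipaddress

def check_subnets(service_subnet: str, pod_subnet: str) -> bool:
    """Return True if the Service and Pod subnets are disjoint."""
    svc = ipaddress.ip_network(service_subnet)
    pod = ipaddress.ip_network(pod_subnet)
    return not svc.overlaps(pod)

def check_certificate_key(key: str) -> bool:
    """certificateKey must be a hex-encoded string that is a 32-byte AES key."""
    try:
        return len(bytes.fromhex(key)) == 32
    except ValueError:
        return False

# Values taken from the example configuration above
print(check_subnets("10.96.0.0/16", "10.244.0.0/24"))  # True: subnets do not overlap
print(check_certificate_key(
    "e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204"))  # True
```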
## Kubeadm join configuration types

When executing `kubeadm join` with the `--config` option, the JoinConfiguration type should be provided.

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
...
```

The JoinConfiguration type should be used to configure runtime settings, that in the case of `kubeadm join` are the discovery method used for accessing the cluster info and all the settings which are specific to the node where kubeadm is executed, including:

- `nodeRegistration`, that holds fields that relate to registering the new node to the cluster; use it to customize the node name, the CRI socket to use or any other settings that should apply to this node only (e.g. the node IP).
- `apiEndpoint`, that represents the endpoint of the instance of the API server to be eventually deployed on this node.

## Resource Types

- ClusterConfiguration
- InitConfiguration
- JoinConfiguration

## BootstrapToken

**Appears in:** InitConfiguration

BootstrapToken describes one bootstrap token, stored as a Secret in the cluster.
| Field | Description |
|---|---|
| `token` [Required]<br>`BootstrapTokenString` | `token` is used for establishing bidirectional trust between nodes and control planes. Used for joining nodes in the cluster. |
| `description`<br>`string` | `description` sets a human-friendly message why this token exists and what it's used for, so other administrators can know its purpose. |
| `ttl`<br>`meta/v1.Duration` | `ttl` defines the time to live for this token. Defaults to `24h`. `expires` and `ttl` are mutually exclusive. |
| `expires`<br>`meta/v1.Time` | `expires` specifies the timestamp when this token expires. Defaults to being set dynamically at runtime based on the `ttl`. `expires` and `ttl` are mutually exclusive. |
| `usages`<br>`[]string` | `usages` describes the ways in which this token can be used. Can by default be used for establishing bidirectional trust, but that can be changed here. |
| `groups`<br>`[]string` | `groups` specifies the extra groups that this token will authenticate as when/if used for authentication. |

## BootstrapTokenString

**Appears in:** BootstrapToken

BootstrapTokenString is a token of the format `abcdef.abcdef0123456789` that is used for both validation of the practical availability of the API server from a joining node's point of view and as an authentication method for the node in the bootstrap phase of "kubeadm join". This token is and should be short-lived.

| Field | Description |
|---|---|
| `-` [Required]<br>`string` | No description provided. |
| `-` [Required]<br>`string` | No description provided. |
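The `abcdef.abcdef0123456789` format is a 6-character token ID and a 16-character token secret, both lowercase alphanumeric. A small illustrative Python sketch (not kubeadm code) that generates and validates strings of this shape:

```python
import re
import secrets

# Bootstrap token shape: <6-char id>.<16-char secret>, lowercase alphanumeric
TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def generate_token() -> str:
    """Generate a random bootstrap-token-shaped string."""
    token_id = "".join(secrets.choice(ALPHABET) for _ in range(6))
    token_secret = "".join(secrets.choice(ALPHABET) for _ in range(16))
    return f"{token_id}.{token_secret}"

def is_valid_token(token: str) -> bool:
    """Check whether a string matches the bootstrap token format."""
    return TOKEN_RE.fullmatch(token) is not None

print(is_valid_token("9a08jv.c0izixklcxtmnze7"))  # True (token from the example above)
print(is_valid_token("not-a-token"))              # False
```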
## ClusterConfiguration

ClusterConfiguration contains cluster-wide configuration for a kubeadm cluster.

| Field | Description |
|---|---|
| `apiVersion`<br>`string` | `kubeadm.k8s.io/v1beta3` |
| `kind`<br>`string` | `ClusterConfiguration` |
| `etcd`<br>`Etcd` | `etcd` holds the configuration for etcd. |
| `networking`<br>`Networking` | `networking` holds configuration for the networking topology of the cluster. |
| `kubernetesVersion`<br>`string` | `kubernetesVersion` is the target version of the control plane. |
| `controlPlaneEndpoint`<br>`string` | `controlPlaneEndpoint` sets a stable IP address or DNS name for the control plane. It can be a valid IP address or a RFC 1123 DNS subdomain, both with optional TCP port. In case the `controlPlaneEndpoint` is not specified, the `advertiseAddress` + `bindPort` are used; in case the `controlPlaneEndpoint` is specified but without a TCP port, the `bindPort` is used. Possible usages are: in a cluster with more than one control plane instance, this field should be assigned the address of the external load balancer in front of the control plane instances; in environments with enforced node recycling, the `controlPlaneEndpoint` could be used for assigning a stable DNS to the control plane. |
| `apiServer`<br>`APIServer` | `apiServer` contains extra settings for the API server. |
| `controllerManager`<br>`ControlPlaneComponent` | `controllerManager` contains extra settings for the controller manager. |
| `scheduler`<br>`ControlPlaneComponent` | `scheduler` contains extra settings for the scheduler. |
| `dns`<br>`DNS` | `dns` defines the options for the DNS add-on installed in the cluster. |
| `certificatesDir`<br>`string` | `certificatesDir` specifies where to store or look for all required certificates. |
| `imageRepository`<br>`string` | `imageRepository` sets the container registry to pull images from. If empty, `registry.k8s.io` will be used by default. In case the Kubernetes version is a CI build (Kubernetes version starts with `ci/`), `gcr.io/k8s-staging-ci-images` will be used as a default for control plane components and for kube-proxy, while `registry.k8s.io` will be used for all the other images. |
| `featureGates`<br>`map[string]bool` | `featureGates` contains the feature gates enabled by the user. |
| `clusterName`<br>`string` | The cluster name. |

## InitConfiguration

InitConfiguration contains a list of fields that are specifically `kubeadm init`-only runtime information. These fields are solely used the first time `kubeadm init` runs. After that, the information in the fields IS NOT uploaded to the `kubeadm-config` ConfigMap that is used by `kubeadm upgrade` for instance. These fields must be "omitempty".

| Field | Description |
|---|---|
| `apiVersion`<br>`string` | `kubeadm.k8s.io/v1beta3` |
| `kind`<br>`string` | `InitConfiguration` |
| `bootstrapTokens`<br>`[]BootstrapToken` | `bootstrapTokens` is respected at `kubeadm init` time and describes a set of Bootstrap Tokens to create. This information IS NOT uploaded to the kubeadm cluster configmap, partly because of its sensitive nature. |
| `nodeRegistration`<br>`NodeRegistrationOptions` | `nodeRegistration` holds fields that relate to registering the new control-plane node to the cluster. |
| `localAPIEndpoint`<br>`APIEndpoint` | `localAPIEndpoint` represents the endpoint of the API server instance that's deployed on this control plane node. In HA setups, this differs from `ClusterConfiguration.controlPlaneEndpoint` in the sense that `controlPlaneEndpoint` is the global endpoint for the cluster, which then load-balances the requests to each individual API server. This configuration object lets you customize what IP/DNS name and port the local API server advertises it's accessible on. By default, kubeadm tries to auto-detect the IP of the default interface and use that, but in case that process fails you may set the desired value here. |
| `certificateKey`<br>`string` | `certificateKey` sets the key with which certificates and keys are encrypted prior to being uploaded in a Secret in the cluster during the `upload-certs` init phase. The certificate key is a hex-encoded string that is an AES key of size 32 bytes. |
| `skipPhases`<br>`[]string` | `skipPhases` is a list of phases to skip during command execution. The list of phases can be obtained with the `kubeadm init --help` command. The flag `--skip-phases` takes precedence over this field. |
| `patches`<br>`Patches` | `patches` contains options related to applying patches to components deployed by kubeadm during `kubeadm init`. |

## JoinConfiguration

JoinConfiguration contains elements describing a particular node.

| Field | Description |
|---|---|
| `apiVersion`<br>`string` | `kubeadm.k8s.io/v1beta3` |
| `kind`<br>`string` | `JoinConfiguration` |
| `nodeRegistration`<br>`NodeRegistrationOptions` | `nodeRegistration` holds fields that relate to registering the new control-plane node to the cluster. |
| `caCertPath`<br>`string` | `caCertPath` is the path to the SSL certificate authority used to secure communications between a node and the control plane. Defaults to "/etc/kubernetes/pki/ca.crt". |
| `discovery` [Required]<br>`Discovery` | `discovery` specifies the options for the kubelet to use during the TLS bootstrap process. |
| `controlPlane`<br>`JoinControlPlane` | `controlPlane` defines the additional control plane instance to be deployed on the joining node. If nil, no additional control plane instance will be deployed. |
| `skipPhases`<br>`[]string` | `skipPhases` is a list of phases to skip during command execution. The list of phases can be obtained with the `kubeadm join --help` command. The flag `--skip-phases` takes precedence over this field. |
| `patches`<br>`Patches` | `patches` contains options related to applying patches to components deployed by kubeadm during `kubeadm join`. |

## APIEndpoint

**Appears in:** InitConfiguration, JoinControlPlane

APIEndpoint struct contains elements of API server instance deployed on a node.

| Field | Description |
|---|---|
code advertiseAddress  code  br    code string  code    td   td      p  code advertiseAddress  code  sets the IP address for the API server to advertise   p    td    tr   tr  td  code bindPort  code  br    code int32  code    td   td      p  code bindPort  code  sets the secure port for the API Server to bind to  Defaults to 6443   p    td    tr    tbody    table       APIServer        kubeadm k8s io v1beta3 APIServer          Appears in        ClusterConfiguration   kubeadm k8s io v1beta3 ClusterConfiguration     p APIServer holds settings necessary for API server deployments in the cluster  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ControlPlaneComponent  code   B  Required   B  br    a href   kubeadm k8s io v1beta3 ControlPlaneComponent   code ControlPlaneComponent  code   a    td   td  Members of  code ControlPlaneComponent  code  are embedded into this type       span class  text muted  No description provided   span   td    tr   tr  td  code certSANs  code  br    code   string  code    td   td      p  code certSANs  code  sets extra Subject Alternative Names  SANs  for the API Server signing certificate   p    td    tr   tr  td  code timeoutForControlPlane  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p  code timeoutForControlPlane  code  controls the timeout that we wait for API server to appear   p    td    tr    tbody    table       BootstrapTokenDiscovery        kubeadm k8s io v1beta3 BootstrapTokenDiscovery          Appears in        Discovery   kubeadm k8s io v1beta3 Discovery     p BootstrapTokenDiscovery is used to set the options for bootstrap token based discovery   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code token  code   B  Required   B  br    code string  code    td   td      p  code token  
code  is a token used to validate cluster information fetched from the control plane   p    td    tr   tr  td  code apiServerEndpoint  code  br    code string  code    td   td      p  code apiServerEndpoint  code  is an IP or domain name to the API server from which information will be fetched   p    td    tr   tr  td  code caCertHashes  code  br    code   string  code    td   td      p  code caCertHashes  code  specifies a set of public key pins to verify when token based discovery is used  The root CA found during discovery must match one of these values  Specifying an empty set disables root CA pinning  which can be unsafe  Each hash is specified as  code  lt type gt   lt value gt   code   where the only currently supported type is  quot sha256 quot   This is a hex encoded SHA 256 hash of the Subject Public Key Info  SPKI  object in DER encoded ASN 1  These hashes can be calculated using  for example  OpenSSL   p    td    tr   tr  td  code unsafeSkipCAVerification  code  br    code bool  code    td   td      p  code unsafeSkipCAVerification  code  allows token based discovery without CA verification via  code caCertHashes  code   This can weaken the security of kubeadm since other nodes can impersonate the control plane   p    td    tr    tbody    table       ControlPlaneComponent        kubeadm k8s io v1beta3 ControlPlaneComponent          Appears in        ClusterConfiguration   kubeadm k8s io v1beta3 ClusterConfiguration      APIServer   kubeadm k8s io v1beta3 APIServer     p ControlPlaneComponent holds settings common to control plane component of the cluster  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code extraArgs  code  br    code map string string  code    td   td      p  code extraArgs  code  is an extra set of flags to pass to the control plane component  A key in this map is the flag name as it appears on the command line except without leading dash es    p    td    tr   
tr  td  code extraVolumes  code  br    a href   kubeadm k8s io v1beta3 HostPathMount   code   HostPathMount  code   a    td   td      p  code extraVolumes  code  is an extra set of host volumes  mounted to the control plane component   p    td    tr    tbody    table       DNS        kubeadm k8s io v1beta3 DNS          Appears in        ClusterConfiguration   kubeadm k8s io v1beta3 ClusterConfiguration     p DNS defines the DNS addon that should be used in the cluster  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ImageMeta  code   B  Required   B  br    a href   kubeadm k8s io v1beta3 ImageMeta   code ImageMeta  code   a    td   td  Members of  code ImageMeta  code  are embedded into this type       p  code imageMeta  code  allows to customize the image used for the DNS component   p    td    tr    tbody    table       Discovery        kubeadm k8s io v1beta3 Discovery          Appears in        JoinConfiguration   kubeadm k8s io v1beta3 JoinConfiguration     p Discovery specifies the options for the kubelet to use during the TLS Bootstrap process   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code bootstrapToken  code  br    a href   kubeadm k8s io v1beta3 BootstrapTokenDiscovery   code BootstrapTokenDiscovery  code   a    td   td      p  code bootstrapToken  code  is used to set the options for bootstrap token based discovery   code bootstrapToken  code  and  code file  code  are mutually exclusive   p    td    tr   tr  td  code file  code  br    a href   kubeadm k8s io v1beta3 FileDiscovery   code FileDiscovery  code   a    td   td      p  code file  code  is used to specify a file or URL to a kubeconfig file from which to load cluster information   code bootstrapToken  code  and  code file  code  are mutually exclusive   p    td    tr   tr  td  code tlsBootstrapToken  code  br    code string  code    td  
 td      p  code tlsBootstrapToken  code  is a token used for TLS bootstrapping  If  code bootstrapToken  code  is set  this field is defaulted to  code  bootstrapToken token  code   but can be overridden  If  code file  code  is set  this field  strong must be set  strong  in case the KubeConfigFile does not contain any other authentication information  p    td    tr   tr  td  code timeout  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p  code timeout  code  modifies the discovery timeout   p    td    tr    tbody    table       Etcd        kubeadm k8s io v1beta3 Etcd          Appears in        ClusterConfiguration   kubeadm k8s io v1beta3 ClusterConfiguration     p Etcd contains elements describing Etcd configuration   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code local  code  br    a href   kubeadm k8s io v1beta3 LocalEtcd   code LocalEtcd  code   a    td   td      p  code local  code  provides configuration knobs for configuring the local etcd instance   code local  code  and  code external  code  are mutually exclusive   p    td    tr   tr  td  code external  code  br    a href   kubeadm k8s io v1beta3 ExternalEtcd   code ExternalEtcd  code   a    td   td      p  code external  code  describes how to connect to an external etcd cluster   code local  code  and  code external  code  are mutually exclusive   p    td    tr    tbody    table       ExternalEtcd        kubeadm k8s io v1beta3 ExternalEtcd          Appears in        Etcd   kubeadm k8s io v1beta3 Etcd     p ExternalEtcd describes an external etcd cluster  Kubeadm has no knowledge of where certificate files live and they must be supplied   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code endpoints  code   B  Required   B  br    code   string  code    td   td   
   p  code endpoints  code  contains the list of etcd members   p    td    tr   tr  td  code caFile  code   B  Required   B  br    code string  code    td   td      p  code caFile  code  is an SSL Certificate Authority  CA  file used to secure etcd communication  Required if using a TLS connection   p    td    tr   tr  td  code certFile  code   B  Required   B  br    code string  code    td   td      p  code certFile  code  is an SSL certification file used to secure etcd communication  Required if using a TLS connection   p    td    tr   tr  td  code keyFile  code   B  Required   B  br    code string  code    td   td      p  code keyFile  code  is an SSL key file used to secure etcd communication  Required if using a TLS connection   p    td    tr    tbody    table       FileDiscovery        kubeadm k8s io v1beta3 FileDiscovery          Appears in        Discovery   kubeadm k8s io v1beta3 Discovery     p FileDiscovery is used to specify a file or URL to a kubeconfig file from which to load cluster information   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code kubeConfigPath  code   B  Required   B  br    code string  code    td   td      p  code kubeConfigPath  code  is used to specify the actual file path or URL to the kubeconfig file from which to load cluster information   p    td    tr    tbody    table       HostPathMount        kubeadm k8s io v1beta3 HostPathMount          Appears in        ControlPlaneComponent   kubeadm k8s io v1beta3 ControlPlaneComponent     p HostPathMount contains elements describing volumes that are mounted from the host   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p  code name  code  is the name of the volume inside the Pod template   p    td    tr   tr  td  code hostPath  code   B  Required   B  br    
code string  code    td   td      p  code hostPath  code  is the path in the host that will be mounted inside the Pod   p    td    tr   tr  td  code mountPath  code   B  Required   B  br    code string  code    td   td      p  code mountPath  code  is the path inside the Pod where  code hostPath  code  will be mounted   p    td    tr   tr  td  code readOnly  code  br    code bool  code    td   td      p  code readOnly  code  controls write access to the volume   p    td    tr   tr  td  code pathType  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  hostpathtype v1 core   code core v1 HostPathType  code   a    td   td      p  code pathType  code  is the type of the  code hostPath  code    p    td    tr    tbody    table       ImageMeta        kubeadm k8s io v1beta3 ImageMeta          Appears in        DNS   kubeadm k8s io v1beta3 DNS      LocalEtcd   kubeadm k8s io v1beta3 LocalEtcd     p ImageMeta allows to customize the image used for components that are not originated from the Kubernetes Kubernetes release process  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code imageRepository  code  br    code string  code    td   td      p  code imageRepository  code  sets the container registry to pull images from  If not set  the  code imageRepository  code  defined in ClusterConfiguration will be used instead   p    td    tr   tr  td  code imageTag  code  br    code string  code    td   td      p  code imageTag  code  allows to specify a tag for the image  In case this value is set  kubeadm does not change automatically the version of the above components during upgrades   p    td    tr    tbody    table       JoinControlPlane        kubeadm k8s io v1beta3 JoinControlPlane          Appears in        JoinConfiguration   kubeadm k8s io v1beta3 JoinConfiguration     p JoinControlPlane contains elements describing an additional control plane instance to be 
deployed on the joining node   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code localAPIEndpoint  code  br    a href   kubeadm k8s io v1beta3 APIEndpoint   code APIEndpoint  code   a    td   td      p  code localAPIEndpoint  code  represents the endpoint of the API server instance to be deployed on this node   p    td    tr   tr  td  code certificateKey  code  br    code string  code    td   td      p  code certificateKey  code  is the key that is used for decryption of certificates after they are downloaded from the secret upon joining a new control plane node  The corresponding encryption key is in the InitConfiguration  The certificate key is a hex encoded string that is an AES key of size 32 bytes   p    td    tr    tbody    table       LocalEtcd        kubeadm k8s io v1beta3 LocalEtcd          Appears in        Etcd   kubeadm k8s io v1beta3 Etcd     p LocalEtcd describes that kubeadm should run an etcd cluster locally   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ImageMeta  code   B  Required   B  br    a href   kubeadm k8s io v1beta3 ImageMeta   code ImageMeta  code   a    td   td  Members of  code ImageMeta  code  are embedded into this type       p ImageMeta allows to customize the container used for etcd   p    td    tr   tr  td  code dataDir  code   B  Required   B  br    code string  code    td   td      p  code dataDir  code  is the directory etcd will place its data  Defaults to  quot  var lib etcd quot    p    td    tr   tr  td  code extraArgs  code  br    code map string string  code    td   td      p  code extraArgs  code  are extra arguments provided to the etcd binary when run inside a static Pod  A key in this map is the flag name as it appears on the command line except without leading dash es    p    td    tr   tr  td  code serverCertSANs  code  br    code   string  code    td   td    
  p  code serverCertSANs  code  sets extra Subject Alternative Names  SANs  for the etcd server signing certificate   p    td    tr   tr  td  code peerCertSANs  code  br    code   string  code    td   td      p  code peerCertSANs  code  sets extra Subject Alternative Names  SANs  for the etcd peer signing certificate   p    td    tr    tbody    table       Networking        kubeadm k8s io v1beta3 Networking          Appears in        ClusterConfiguration   kubeadm k8s io v1beta3 ClusterConfiguration     p Networking contains elements describing cluster s networking configuration   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code serviceSubnet  code  br    code string  code    td   td      p  code serviceSubnet  code  is the subnet used by Kubernetes Services  Defaults to  quot 10 96 0 0 12 quot    p    td    tr   tr  td  code podSubnet  code  br    code string  code    td   td      p  code podSubnet  code  is the subnet used by Pods   p    td    tr   tr  td  code dnsDomain  code  br    code string  code    td   td      p  code dnsDomain  code  is the DNS domain used by Kubernetes Services  Defaults to  quot cluster local quot    p    td    tr    tbody    table       NodeRegistrationOptions        kubeadm k8s io v1beta3 NodeRegistrationOptions          Appears in        InitConfiguration   kubeadm k8s io v1beta3 InitConfiguration      JoinConfiguration   kubeadm k8s io v1beta3 JoinConfiguration     p NodeRegistrationOptions holds fields that relate to registering a new control plane or node to the cluster  either via  code kubeadm init  code  or  code kubeadm join  code    p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code  br    code string  code    td   td      p  code name  code  is the  code  metadata name  code  field of the Node API object that will be created in this  code kubeadm init  code  or 
 code kubeadm join  code  operation  This field is also used in the  code CommonName  code  field of the kubelet s client certificate to the API server  Defaults to the hostname of the node if not provided   p    td    tr   tr  td  code criSocket  code  br    code string  code    td   td      p  code criSocket  code  is used to retrieve container runtime info  This information will be annotated to the Node API object  for later re use   p    td    tr   tr  td  code taints  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  taint v1 core   code   core v1 Taint  code   a    td   td      p  code taints  code  specifies the taints the Node API object should be registered with  If this field is unset  i e  nil  it will be defaulted with a control plane taint for control plane nodes  If you don t want to taint your control plane node  set this field to an empty list  i e   code taints      code  in the YAML file  This field is solely used for Node registration   p    td    tr   tr  td  code kubeletExtraArgs  code  br    code map string string  code    td   td      p  code kubeletExtraArgs  code  passes through extra arguments to the kubelet  The arguments here are passed to the kubelet command line via the environment file kubeadm writes at runtime for the kubelet to source  This overrides the generic base level configuration in the  code kubelet config  code  ConfigMap  Flags have higher priority when parsing  These values are local and specific to the node kubeadm is executing on  A key in this map is the flag name as it appears on the command line except without leading dash es    p    td    tr   tr  td  code ignorePreflightErrors  code  br    code   string  code    td   td      p  code ignorePreflightErrors  code  provides a list of pre flight errors to be ignored when the current node is registered  e g   code IsPrevilegedUser Swap  code   Value  code all  code  ignores errors from all checks   p    td    tr   
tr  td  code imagePullPolicy  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  pullpolicy v1 core   code core v1 PullPolicy  code   a    td   td      p  code imagePullPolicy  code  specifies the policy for image pulling during kubeadm  quot init quot  and  quot join quot  operations  The value of this field must be one of  quot Always quot    quot IfNotPresent quot  or  quot Never quot   If this field is not set  kubeadm will default it to  quot IfNotPresent quot   or pull the required images if not present on the host   p    td    tr    tbody    table       Patches        kubeadm k8s io v1beta3 Patches          Appears in        InitConfiguration   kubeadm k8s io v1beta3 InitConfiguration      JoinConfiguration   kubeadm k8s io v1beta3 JoinConfiguration     p Patches contains options related to applying patches to components deployed by kubeadm   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code directory  code  br    code string  code    td   td      p  code directory  code  is a path to a directory that contains files named  quot target suffix   patchtype  extension quot   For example   quot kube apiserver0 merge yaml quot  or just  quot etcd json quot    quot target quot  can be one of  quot kube apiserver quot    quot kube controller manager quot    quot kube scheduler quot    quot etcd quot    quot patchtype quot  can be one of  quot strategic quot   quot merge quot  or  quot json quot  and they match the patch formats supported by kubectl  The default  quot patchtype quot  is  quot strategic quot    quot extension quot  must be either  quot json quot  or  quot yaml quot    quot suffix quot  is an optional string that can be used to determine which patches are applied first alpha numerically   p    td    tr    tbody    table   "}
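Read together, the field tables above describe a single manifest. As a minimal sketch (every
endpoint, token, hash, and key below is a placeholder, not real cluster data), a
`JoinConfiguration` for adding a control plane node could look like:

```yaml
# Illustrative sketch only: joining a node as an additional control plane instance.
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
  name: cp-node-2                        # becomes .metadata.name of the Node object
  kubeletExtraArgs:
    node-labels: "example.com/role=control-plane"
  ignorePreflightErrors:
    - IsPrivilegedUser
discovery:
  bootstrapToken:                        # mutually exclusive with 'file'
    apiServerEndpoint: "192.0.2.10:6443"
    token: "abcdef.0123456789abcdef"
    caCertHashes:
      - "sha256:<hex-encoded-SPKI-hash>" # root CA pin; omitting pins is unsafe
controlPlane:                            # drop this block to join as a worker node
  localAPIEndpoint:
    advertiseAddress: "192.0.2.11"
    bindPort: 6443
  certificateKey: "<hex-encoded-32-byte-AES-key>"
patches:
  directory: "/etc/kubeadm/patches"      # files named "target[suffix][+patchtype].extension"
```

The manifest would be passed via `kubeadm join --config <file>`; note that `bootstrapToken` and
`file` discovery, like `local` and `external` etcd, are mutually exclusive pairs.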
{"questions":"kubernetes reference title kube apiserver Audit Configuration v1 contenttype tool reference Resource Types package audit k8s io v1 autogenerated true","answers":"---\ntitle: kube-apiserver Audit Configuration (v1)\ncontent_type: tool-reference\npackage: audit.k8s.io\/v1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [Event](#audit-k8s-io-v1-Event)\n- [EventList](#audit-k8s-io-v1-EventList)\n- [Policy](#audit-k8s-io-v1-Policy)\n- [PolicyList](#audit-k8s-io-v1-PolicyList)\n  \n\n## `Event`     {#audit-k8s-io-v1-Event}\n    \n\n**Appears in:**\n\n- [EventList](#audit-k8s-io-v1-EventList)\n\n\n<p>Event captures all the information that can be included in an API audit log.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>audit.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>Event<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>level<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#audit-k8s-io-v1-Level\"><code>Level<\/code><\/a>\n<\/td>\n<td>\n   <p>AuditLevel at which event was generated<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>auditID<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/types#UID\"><code>k8s.io\/apimachinery\/pkg\/types.UID<\/code><\/a>\n<\/td>\n<td>\n   <p>Unique audit ID, generated for each request.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>stage<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#audit-k8s-io-v1-Stage\"><code>Stage<\/code><\/a>\n<\/td>\n<td>\n   <p>Stage of the request handling when this event instance was generated.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>requestURI<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>RequestURI is the request URI as sent by the client to a server.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>verb<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   
<p>Verb is the kubernetes verb associated with the request.\nFor non-resource requests, this is the lower-cased HTTP method.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>user<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#userinfo-v1-authentication-k8s-io\"><code>authentication\/v1.UserInfo<\/code><\/a>\n<\/td>\n<td>\n   <p>Authenticated user information.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>impersonatedUser<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#userinfo-v1-authentication-k8s-io\"><code>authentication\/v1.UserInfo<\/code><\/a>\n<\/td>\n<td>\n   <p>Impersonated user information.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>sourceIPs<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>Source IPs, from where the request originated and intermediate proxies.\nThe source IPs are listed from (in order):<\/p>\n<ol>\n<li>X-Forwarded-For request header IPs<\/li>\n<li>X-Real-Ip header, if not present in the X-Forwarded-For list<\/li>\n<li>The remote address for the connection, if it doesn't match the last\nIP in the list up to here (X-Forwarded-For or X-Real-Ip).\nNote: All but the last IP can be arbitrarily set by the client.<\/li>\n<\/ol>\n<\/td>\n<\/tr>\n<tr><td><code>userAgent<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>UserAgent records the user agent string reported by the client.\nNote that the UserAgent is provided by the client, and must not be trusted.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>objectRef<\/code><br\/>\n<a href=\"#audit-k8s-io-v1-ObjectReference\"><code>ObjectReference<\/code><\/a>\n<\/td>\n<td>\n   <p>Object reference this request is targeted at.\nDoes not apply for List-type requests, or non-resource requests.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>responseStatus<\/code><br\/>\n<a 
href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#status-v1-meta\"><code>meta\/v1.Status<\/code><\/a>\n<\/td>\n<td>\n   <p>The response status, populated even when the ResponseObject is not a Status type.\nFor successful responses, this will only include the Code and StatusSuccess.\nFor non-status type error responses, this will be auto-populated with the error Message.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>requestObject<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/runtime#Unknown\"><code>k8s.io\/apimachinery\/pkg\/runtime.Unknown<\/code><\/a>\n<\/td>\n<td>\n   <p>API object from the request, in JSON format. The RequestObject is recorded as-is in the request\n(possibly re-encoded as JSON), prior to version conversion, defaulting, admission or\nmerging. It is an external versioned object type, and may not be a valid object on its own.\nOmitted for non-resource requests.  Only logged at Request Level and higher.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>responseObject<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/runtime#Unknown\"><code>k8s.io\/apimachinery\/pkg\/runtime.Unknown<\/code><\/a>\n<\/td>\n<td>\n   <p>API object returned in the response, in JSON. The ResponseObject is recorded after conversion\nto the external type, and serialized as JSON.  Omitted for non-resource requests.  
Only logged\nat Response Level.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>requestReceivedTimestamp<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#microtime-v1-meta\"><code>meta\/v1.MicroTime<\/code><\/a>\n<\/td>\n<td>\n   <p>Time the request reached the apiserver.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>stageTimestamp<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#microtime-v1-meta\"><code>meta\/v1.MicroTime<\/code><\/a>\n<\/td>\n<td>\n   <p>Time the request reached current audit stage.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>annotations<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>Annotations is an unstructured key value map stored with an audit event that may be set by\nplugins invoked in the request serving chain, including authentication, authorization and\nadmission plugins. Note that these annotations are for the audit event, and do not correspond\nto the metadata.annotations of the submitted object. Keys should uniquely identify the informing\ncomponent to avoid name collisions (e.g. podsecuritypolicy.admission.k8s.io\/policy). Values\nshould be short. 
Annotations are included in the Metadata level.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `EventList`     {#audit-k8s-io-v1-EventList}\n    \n\n\n<p>EventList is a list of audit Events.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>audit.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>EventList<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>metadata<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#listmeta-v1-meta\"><code>meta\/v1.ListMeta<\/code><\/a>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>items<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#audit-k8s-io-v1-Event\"><code>[]Event<\/code><\/a>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Policy`     {#audit-k8s-io-v1-Policy}\n    \n\n**Appears in:**\n\n- [PolicyList](#audit-k8s-io-v1-PolicyList)\n\n\n<p>Policy defines the configuration of audit logging, and the rules for how different request\ncategories are logged.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>audit.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>Policy<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>metadata<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#objectmeta-v1-meta\"><code>meta\/v1.ObjectMeta<\/code><\/a>\n<\/td>\n<td>\n   <p>ObjectMeta is included for interoperability with API infrastructure.<\/p>\nRefer to the Kubernetes API documentation for the fields of the <code>metadata<\/code> field.<\/td>\n<\/tr>\n<tr><td><code>rules<\/code> 
<B>[Required]<\/B><br\/>\n<a href=\"#audit-k8s-io-v1-PolicyRule\"><code>[]PolicyRule<\/code><\/a>\n<\/td>\n<td>\n   <p>Rules specify the audit Level a request should be recorded at.\nA request may match multiple rules, in which case the FIRST matching rule is used.\nThe default audit level is None, but can be overridden by a catch-all rule at the end of the list.\nPolicyRules are strictly ordered.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>omitStages<\/code><br\/>\n<a href=\"#audit-k8s-io-v1-Stage\"><code>[]Stage<\/code><\/a>\n<\/td>\n<td>\n   <p>OmitStages is a list of stages for which no events are created. Note that this can also\nbe specified per rule in which case the union of both are omitted.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>omitManagedFields<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>OmitManagedFields indicates whether to omit the managed fields of the request\nand response bodies from being written to the API audit log.\nThis is used as a global default - a value of 'true' will omit the managed fields,\notherwise the managed fields will be included in the API audit log.\nNote that this can also be specified per rule in which case the value specified\nin a rule will override the global default.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PolicyList`     {#audit-k8s-io-v1-PolicyList}\n    \n\n\n<p>PolicyList is a list of audit Policies.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>audit.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>PolicyList<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>metadata<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#listmeta-v1-meta\"><code>meta\/v1.ListMeta<\/code><\/a>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description 
provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>items<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#audit-k8s-io-v1-Policy\"><code>[]Policy<\/code><\/a>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `GroupResources`     {#audit-k8s-io-v1-GroupResources}\n    \n\n**Appears in:**\n\n- [PolicyRule](#audit-k8s-io-v1-PolicyRule)\n\n\n<p>GroupResources represents resource kinds in an API group.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>group<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Group is the name of the API group that contains the resources.\nThe empty string represents the core API group.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resources<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>Resources is a list of resources this rule applies to.<\/p>\n<p>For example:<\/p>\n<ul>\n<li><code>pods<\/code> matches pods.<\/li>\n<li><code>pods\/log<\/code> matches the log subresource of pods.<\/li>\n<li><code>*<\/code> matches all resources and their subresources.<\/li>\n<li><code>pods\/*<\/code> matches all subresources of pods.<\/li>\n<li><code>*\/scale<\/code> matches all scale subresources.<\/li>\n<\/ul>\n<p>If wildcard is present, the validation rule will ensure resources do not\noverlap with each other.<\/p>\n<p>An empty list implies all resources and subresources in this API group apply.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resourceNames<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>ResourceNames is a list of resource instance names that the policy matches.\nUsing this field requires Resources to be specified.\nAn empty list implies that every instance of the resource is matched.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Level`     {#audit-k8s-io-v1-Level}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [Event](#audit-k8s-io-v1-Event)\n\n- 
[PolicyRule](#audit-k8s-io-v1-PolicyRule)\n\n\n<p>Level defines the amount of information logged during auditing<\/p>\n\n\n\n\n## `ObjectReference`     {#audit-k8s-io-v1-ObjectReference}\n    \n\n**Appears in:**\n\n- [Event](#audit-k8s-io-v1-Event)\n\n\n<p>ObjectReference contains enough information to let you inspect or modify the referred object.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>resource<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>namespace<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>name<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>uid<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/types#UID\"><code>k8s.io\/apimachinery\/pkg\/types.UID<\/code><\/a>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>apiGroup<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>APIGroup is the name of the API group that contains the referred object.\nThe empty string represents the core API group.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>apiVersion<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>APIVersion is the version of the API group that contains the referred object.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resourceVersion<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>subresource<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PolicyRule`     {#audit-k8s-io-v1-PolicyRule}\n    
\n\n**Appears in:**\n\n- [Policy](#audit-k8s-io-v1-Policy)\n\n\n<p>PolicyRule maps requests based off metadata to an audit Level.\nRequests must match the rules of every field (an intersection of rules).<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>level<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#audit-k8s-io-v1-Level\"><code>Level<\/code><\/a>\n<\/td>\n<td>\n   <p>The Level that requests matching this rule are recorded at.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>users<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>The users (by authenticated user name) this rule applies to.\nAn empty list implies every user.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>userGroups<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>The user groups this rule applies to. A user is considered matching\nif it is a member of any of the UserGroups.\nAn empty list implies every user group.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>verbs<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>The verbs that match this rule.\nAn empty list implies every verb.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resources<\/code><br\/>\n<a href=\"#audit-k8s-io-v1-GroupResources\"><code>[]GroupResources<\/code><\/a>\n<\/td>\n<td>\n   <p>Resources that this rule matches. 
An empty list implies all kinds in all API groups.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>namespaces<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>Namespaces that this rule matches.\nThe empty string &quot;&quot; matches non-namespaced resources.\nAn empty list implies every namespace.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>nonResourceURLs<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>NonResourceURLs is a set of URL paths that should be audited.\n<code>*<\/code>s are allowed, but only as the full, final step in the path.\nExamples:<\/p>\n<ul>\n<li><code>\/metrics<\/code> - Log requests for apiserver metrics<\/li>\n<li><code>\/healthz*<\/code> - Log all health checks<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>omitStages<\/code><br\/>\n<a href=\"#audit-k8s-io-v1-Stage\"><code>[]Stage<\/code><\/a>\n<\/td>\n<td>\n   <p>OmitStages is a list of stages for which no events are created. Note that this can also\nbe specified policy wide in which case the union of both are omitted.\nAn empty list means no restrictions will apply.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>omitManagedFields<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>OmitManagedFields indicates whether to omit the managed fields of the request\nand response bodies from being written to the API audit log.<\/p>\n<ul>\n<li>a value of 'true' will drop the managed fields from the API audit log<\/li>\n<li>a value of 'false' indicates that the managed fields should be included\nin the API audit log.\nNote that the value, if specified, in this rule will override the global default.\nIf a value is not specified then the global default specified in\nPolicy.OmitManagedFields will stand.<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Stage`     {#audit-k8s-io-v1-Stage}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [Event](#audit-k8s-io-v1-Event)\n\n- [Policy](#audit-k8s-io-v1-Policy)\n\n- [PolicyRule](#audit-k8s-io-v1-PolicyRule)\n\n\n<p>Stage defines the stages in request handling 
that audit events may be generated.<\/p>\n\n\n\n ","site":"kubernetes reference","answers_cleaned":"    title  kube apiserver Audit Configuration  v1  content type  tool reference package  audit k8s io v1 auto generated  true          Resource Types       Event   audit k8s io v1 Event     EventList   audit k8s io v1 EventList     Policy   audit k8s io v1 Policy     PolicyList   audit k8s io v1 PolicyList          Event        audit k8s io v1 Event          Appears in        EventList   audit k8s io v1 EventList     p Event captures all the information that can be included in an API audit log   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code audit k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code Event  code   td   tr           tr  td  code level  code   B  Required   B  br    a href   audit k8s io v1 Level   code Level  code   a    td   td      p AuditLevel at which event was generated  p    td    tr   tr  td  code auditID  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg types UID   code k8s io apimachinery pkg types UID  code   a    td   td      p Unique audit ID  generated for each request   p    td    tr   tr  td  code stage  code   B  Required   B  br    a href   audit k8s io v1 Stage   code Stage  code   a    td   td      p Stage of the request handling when this event instance was generated   p    td    tr   tr  td  code requestURI  code   B  Required   B  br    code string  code    td   td      p RequestURI is the request URI as sent by the client to a server   p    td    tr   tr  td  code verb  code   B  Required   B  br    code string  code    td   td      p Verb is the kubernetes verb associated with the request  For non resource requests  this is the lower cased HTTP method   p    td    tr   tr  td  code user  code   B  Required   B  br    a href  https   kubernetes io docs 
reference generated kubernetes api v1 31  userinfo v1 authentication k8s io   code authentication v1 UserInfo  code   a    td   td      p Authenticated user information   p    td    tr   tr  td  code impersonatedUser  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  userinfo v1 authentication k8s io   code authentication v1 UserInfo  code   a    td   td      p Impersonated user information   p    td    tr   tr  td  code sourceIPs  code  br    code   string  code    td   td      p Source IPs  from where the request originated and intermediate proxies  The source IPs are listed from  in order    p   ol   li X Forwarded For request header IPs  li   li X Real Ip header  if not present in the X Forwarded For list  li   li The remote address for the connection  if it doesn t match the last IP in the list up to here  X Forwarded For or X Real Ip   Note  All but the last IP can be arbitrarily set by the client   li    ol    td    tr   tr  td  code userAgent  code  br    code string  code    td   td      p UserAgent records the user agent string reported by the client  Note that the UserAgent is provided by the client  and must not be trusted   p    td    tr   tr  td  code objectRef  code  br    a href   audit k8s io v1 ObjectReference   code ObjectReference  code   a    td   td      p Object reference this request is targeted at  Does not apply for List type requests  or non resource requests   p    td    tr   tr  td  code responseStatus  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  status v1 meta   code meta v1 Status  code   a    td   td      p The response status  populated even when the ResponseObject is not a Status type  For successful responses  this will only include the Code and StatusSuccess  For non status type error responses  this will be auto populated with the error Message   p    td    tr   tr  td  code requestObject  code  br    a href  https   pkg go dev k8s io 
apimachinery pkg runtime Unknown   code k8s io apimachinery pkg runtime Unknown  code   a    td   td      p API object from the request  in JSON format  The RequestObject is recorded as is in the request  possibly re encoded as JSON   prior to version conversion  defaulting  admission or merging  It is an external versioned object type  and may not be a valid object on its own  Omitted for non resource requests   Only logged at Request Level and higher   p    td    tr   tr  td  code responseObject  code  br    a href  https   pkg go dev k8s io apimachinery pkg runtime Unknown   code k8s io apimachinery pkg runtime Unknown  code   a    td   td      p API object returned in the response  in JSON  The ResponseObject is recorded after conversion to the external type  and serialized as JSON   Omitted for non resource requests   Only logged at Response Level   p    td    tr   tr  td  code requestReceivedTimestamp  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  microtime v1 meta   code meta v1 MicroTime  code   a    td   td      p Time the request reached the apiserver   p    td    tr   tr  td  code stageTimestamp  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  microtime v1 meta   code meta v1 MicroTime  code   a    td   td      p Time the request reached current audit stage   p    td    tr   tr  td  code annotations  code  br    code map string string  code    td   td      p Annotations is an unstructured key value map stored with an audit event that may be set by plugins invoked in the request serving chain  including authentication  authorization and admission plugins  Note that these annotations are for the audit event  and do not correspond to the metadata annotations of the submitted object  Keys should uniquely identify the informing component to avoid name collisions  e g  podsecuritypolicy admission k8s io policy   Values should be short  Annotations are included in the Metadata 
level   p    td    tr    tbody    table       EventList        audit k8s io v1 EventList          p EventList is a list of audit Events   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code audit k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code EventList  code   td   tr           tr  td  code metadata  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  listmeta v1 meta   code meta v1 ListMeta  code   a    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code items  code   B  Required   B  br    a href   audit k8s io v1 Event   code   Event  code   a    td   td      span class  text muted  No description provided   span   td    tr    tbody    table       Policy        audit k8s io v1 Policy          Appears in        PolicyList   audit k8s io v1 PolicyList     p Policy defines the configuration of audit logging  and the rules for how different request categories are logged   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code audit k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code Policy  code   td   tr           tr  td  code metadata  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  objectmeta v1 meta   code meta v1 ObjectMeta  code   a    td   td      p ObjectMeta is included for interoperability with API infrastructure   p  Refer to the Kubernetes API documentation for the fields of the  code metadata  code  field   td    tr   tr  td  code rules  code   B  Required   B  br    a href   audit k8s io v1 PolicyRule   code   PolicyRule  code   a    td   td      p Rules specify the audit Level a request should be recorded at  A request may match multiple rules  in which 
case the FIRST matching rule is used  The default audit level is None  but can be overridden by a catch all rule at the end of the list  PolicyRules are strictly ordered   p    td    tr   tr  td  code omitStages  code  br    a href   audit k8s io v1 Stage   code   Stage  code   a    td   td      p OmitStages is a list of stages for which no events are created  Note that this can also be specified per rule in which case the union of both are omitted   p    td    tr   tr  td  code omitManagedFields  code  br    code bool  code    td   td      p OmitManagedFields indicates whether to omit the managed fields of the request and response bodies from being written to the API audit log  This is used as a global default   a value of  true  will omit the managed fields  otherwise the managed fields will be included in the API audit log  Note that this can also be specified per rule in which case the value specified in a rule will override the global default   p    td    tr    tbody    table       PolicyList        audit k8s io v1 PolicyList          p PolicyList is a list of audit Policies   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code audit k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code PolicyList  code   td   tr           tr  td  code metadata  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  listmeta v1 meta   code meta v1 ListMeta  code   a    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code items  code   B  Required   B  br    a href   audit k8s io v1 Policy   code   Policy  code   a    td   td      span class  text muted  No description provided   span   td    tr    tbody    table       GroupResources        audit k8s io v1 GroupResources          Appears in        PolicyRule   audit k8s io v1 PolicyRule     p GroupResources 
represents resource kinds in an API group   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code group  code  br    code string  code    td   td      p Group is the name of the API group that contains the resources  The empty string represents the core API group   p    td    tr   tr  td  code resources  code  br    code   string  code    td   td      p Resources is a list of resources this rule applies to   p   p For example   p   ul   li  code pods  code  matches pods   li   li  code pods log  code  matches the log subresource of pods   li   li  code    code  matches all resources and their subresources   li   li  code pods    code  matches all subresources of pods   li   li  code   scale  code  matches all scale subresources   li    ul   p If wildcard is present  the validation rule will ensure resources do not overlap with each other   p   p An empty list implies all resources and subresources in this API group apply   p    td    tr   tr  td  code resourceNames  code  br    code   string  code    td   td      p ResourceNames is a list of resource instance names that the policy matches  Using this field requires Resources to be specified  An empty list implies that every instance of the resource is matched   p    td    tr    tbody    table       Level        audit k8s io v1 Level        Alias of  string      Appears in        Event   audit k8s io v1 Event      PolicyRule   audit k8s io v1 PolicyRule     p Level defines the amount of information logged during auditing  p          ObjectReference        audit k8s io v1 ObjectReference          Appears in        Event   audit k8s io v1 Event     p ObjectReference contains enough information to let you inspect or modify the referred object   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code resource  code  br    code string  code    td   td      span class  text 
muted  No description provided   span   td    tr   tr  td  code namespace  code  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code name  code  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code uid  code  br    a href  https   pkg go dev k8s io apimachinery pkg types UID   code k8s io apimachinery pkg types UID  code   a    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code apiGroup  code  br    code string  code    td   td      p APIGroup is the name of the API group that contains the referred object  The empty string represents the core API group   p    td    tr   tr  td  code apiVersion  code  br    code string  code    td   td      p APIVersion is the version of the API group that contains the referred object   p    td    tr   tr  td  code resourceVersion  code  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code subresource  code  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr    tbody    table       PolicyRule        audit k8s io v1 PolicyRule          Appears in        Policy   audit k8s io v1 Policy     p PolicyRule maps requests based off metadata to an audit Level  Requests must match the rules of every field  an intersection of rules    p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code level  code   B  Required   B  br    a href   audit k8s io v1 Level   code Level  code   a    td   td      p The Level that requests matching this rule are recorded at   p    td    tr   tr  td  code users  code  br    code   string  code    td   td      p The users  by authenticated user name  this rule applies to  An empty list implies every user   p    td    tr   tr  td  code userGroups  
code  br    code   string  code    td   td      p The user groups this rule applies to  A user is considered matching if it is a member of any of the UserGroups  An empty list implies every user group   p    td    tr   tr  td  code verbs  code  br    code   string  code    td   td      p The verbs that match this rule  An empty list implies every verb   p    td    tr   tr  td  code resources  code  br    a href   audit k8s io v1 GroupResources   code   GroupResources  code   a    td   td      p Resources that this rule matches  An empty list implies all kinds in all API groups   p    td    tr   tr  td  code namespaces  code  br    code   string  code    td   td      p Namespaces that this rule matches  The empty string  quot  quot  matches non namespaced resources  An empty list implies every namespace   p    td    tr   tr  td  code nonResourceURLs  code  br    code   string  code    td   td      p NonResourceURLs is a set of URL paths that should be audited   code    code s are allowed  but only as the full  final step in the path  Examples   p   ul   li  code  metrics  code    Log requests for apiserver metrics  li   li  code  healthz   code    Log all health checks  li    ul    td    tr   tr  td  code omitStages  code  br    a href   audit k8s io v1 Stage   code   Stage  code   a    td   td      p OmitStages is a list of stages for which no events are created  Note that this can also be specified policy wide in which case the union of both are omitted  An empty list means no restrictions will apply   p    td    tr   tr  td  code omitManagedFields  code  br    code bool  code    td   td      p OmitManagedFields indicates whether to omit the managed fields of the request and response bodies from being written to the API audit log   p   ul   li a value of  true  will drop the managed fields from the API audit log  li   li a value of  false  indicates that the managed fields should be included in the API audit log Note that the value  if specified  in this rule will 
override the global default If a value is not specified then the global default specified in Policy OmitManagedFields will stand   li    ul    td    tr    tbody    table       Stage        audit k8s io v1 Stage        Alias of  string      Appears in        Event   audit k8s io v1 Event      Policy   audit k8s io v1 Policy      PolicyRule   audit k8s io v1 PolicyRule     p Stage defines the stages in request handling that audit events may be generated   p      "}
{"questions":"kubernetes reference contenttype tool reference Resource Types title Client Authentication v1beta1 package client authentication k8s io v1beta1 autogenerated true","answers":"---\ntitle: Client Authentication (v1beta1)\ncontent_type: tool-reference\npackage: client.authentication.k8s.io\/v1beta1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)\n  \n\n## `ExecCredential`     {#client-authentication-k8s-io-v1beta1-ExecCredential}\n    \n\n\n<p>ExecCredential is used by exec-based plugins to communicate credentials to\nHTTP transports.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>client.authentication.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>ExecCredential<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>spec<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#client-authentication-k8s-io-v1beta1-ExecCredentialSpec\"><code>ExecCredentialSpec<\/code><\/a>\n<\/td>\n<td>\n   <p>Spec holds information passed to the plugin by the transport.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>status<\/code><br\/>\n<a href=\"#client-authentication-k8s-io-v1beta1-ExecCredentialStatus\"><code>ExecCredentialStatus<\/code><\/a>\n<\/td>\n<td>\n   <p>Status is filled in by the plugin and holds the credentials that the transport\nshould use to contact the API.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Cluster`     {#client-authentication-k8s-io-v1beta1-Cluster}\n    \n\n**Appears in:**\n\n- [ExecCredentialSpec](#client-authentication-k8s-io-v1beta1-ExecCredentialSpec)\n\n\n<p>Cluster contains information to allow an exec plugin to communicate\nwith the kubernetes cluster being authenticated to.<\/p>\n<p>To ensure that this struct contains everything someone would need to communicate\nwith a kubernetes cluster (just 
like they would via a kubeconfig), the fields\nshould shadow &quot;k8s.io\/client-go\/tools\/clientcmd\/api\/v1&quot;.Cluster, with the exception\nof CertificateAuthority, since CA data will always be passed to the plugin as bytes.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>server<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Server is the address of the kubernetes cluster (https:\/\/hostname:port).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tls-server-name<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>TLSServerName is passed to the server for SNI and is used in the client to\ncheck server certificates against. If ServerName is empty, the hostname\nused to contact the server is used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>insecure-skip-tls-verify<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>InsecureSkipTLSVerify skips the validity check for the server's certificate.\nThis will make your HTTPS connections insecure.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificate-authority-data<\/code><br\/>\n<code>[]byte<\/code>\n<\/td>\n<td>\n   <p>CAData contains PEM-encoded certificate authority certificates.\nIf empty, system roots should be used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>proxy-url<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>ProxyURL is the URL to the proxy to be used for all requests to this\ncluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>disable-compression<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>DisableCompression allows the client to opt out of response compression for all requests to the server. 
This is useful\nto speed up requests (specifically lists) when client-server network bandwidth is ample, by saving time on\ncompression (server-side) and decompression (client-side): https:\/\/github.com\/kubernetes\/kubernetes\/issues\/112296.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>config<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/runtime\/#RawExtension\"><code>k8s.io\/apimachinery\/pkg\/runtime.RawExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>Config holds additional config data that is specific to the exec\nplugin with regards to the cluster being authenticated to.<\/p>\n<p>This data is sourced from the clientcmd Cluster object's\nextensions[client.authentication.k8s.io\/exec] field:<\/p>\n<p>clusters:<\/p>\n<ul>\n<li>name: my-cluster\ncluster:\n...\nextensions:\n<ul>\n<li>name: client.authentication.k8s.io\/exec  # reserved extension name for per cluster exec config\nextension:\naudience: 06e3fbd18de8  # arbitrary config<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>In some environments, the user config may be exactly the same across many clusters\n(i.e. call this exec plugin) minus some details that are specific to each cluster\nsuch as the audience.  This field allows the per cluster config to be directly\nspecified with the cluster info.  
Using this field to store secret data is not\nrecommended as one of the prime benefits of exec plugins is that no secrets need\nto be stored directly in the kubeconfig.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExecCredentialSpec`     {#client-authentication-k8s-io-v1beta1-ExecCredentialSpec}\n    \n\n**Appears in:**\n\n- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)\n\n\n<p>ExecCredentialSpec holds request and runtime specific information provided by\nthe transport.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>cluster<\/code><br\/>\n<a href=\"#client-authentication-k8s-io-v1beta1-Cluster\"><code>Cluster<\/code><\/a>\n<\/td>\n<td>\n   <p>Cluster contains information to allow an exec plugin to communicate with the\nkubernetes cluster being authenticated to. Note that Cluster is non-nil only\nwhen provideClusterInfo is set to true in the exec provider config (i.e.,\nExecConfig.ProvideClusterInfo).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>interactive<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>Interactive declares whether stdin has been passed to this exec plugin.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExecCredentialStatus`     {#client-authentication-k8s-io-v1beta1-ExecCredentialStatus}\n    \n\n**Appears in:**\n\n- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)\n\n\n<p>ExecCredentialStatus holds credentials for the transport to use.<\/p>\n<p>Token and ClientKeyData are sensitive fields. This data should only be\ntransmitted in-memory between client and exec plugin process. 
Exec plugin\nitself should at least be protected via file permissions.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>expirationTimestamp<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#time-v1-meta\"><code>meta\/v1.Time<\/code><\/a>\n<\/td>\n<td>\n   <p>ExpirationTimestamp indicates a time when the provided credentials expire.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>token<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Token is a bearer token used by the client for request authentication.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clientCertificateData<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>PEM-encoded client TLS certificates (including intermediates, if any).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clientKeyData<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>PEM-encoded private key for the above certificate.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n ","site":"kubernetes reference","answers_cleaned":"    title  Client Authentication  v1beta1  content type  tool reference package  client authentication k8s io v1beta1 auto generated  true          Resource Types       ExecCredential   client authentication k8s io v1beta1 ExecCredential          ExecCredential        client authentication k8s io v1beta1 ExecCredential          p ExecCredential is used by exec based plugins to communicate credentials to HTTP transports   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code client authentication k8s io v1beta1  code   td   tr   tr  td  code kind  code  br  string  td  td  code ExecCredential  code   td   tr           tr  td  code spec  code   B  Required   B  br    a href   client authentication k8s io v1beta1 
ExecCredentialSpec   code ExecCredentialSpec  code   a    td   td      p Spec holds information passed to the plugin by the transport   p    td    tr   tr  td  code status  code  br    a href   client authentication k8s io v1beta1 ExecCredentialStatus   code ExecCredentialStatus  code   a    td   td      p Status is filled in by the plugin and holds the credentials that the transport should use to contact the API   p    td    tr    tbody    table       Cluster        client authentication k8s io v1beta1 Cluster          Appears in        ExecCredentialSpec   client authentication k8s io v1beta1 ExecCredentialSpec     p Cluster contains information to allow an exec plugin to communicate with the kubernetes cluster being authenticated to   p   p To ensure that this struct contains everything someone would need to communicate with a kubernetes cluster  just like they would via a kubeconfig   the fields should shadow  quot k8s io client go tools clientcmd api v1 quot  Cluster  with the exception of CertificateAuthority  since CA data will always be passed to the plugin as bytes   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code server  code   B  Required   B  br    code string  code    td   td      p Server is the address of the kubernetes cluster  https   hostname port    p    td    tr   tr  td  code tls server name  code  br    code string  code    td   td      p TLSServerName is passed to the server for SNI and is used in the client to check server certificates against  If ServerName is empty  the hostname used to contact the server is used   p    td    tr   tr  td  code insecure skip tls verify  code  br    code bool  code    td   td      p InsecureSkipTLSVerify skips the validity check for the server s certificate  This will make your HTTPS connections insecure   p    td    tr   tr  td  code certificate authority data  code  br    code   byte  code    td   td      p CAData contains PEM 
encoded certificate authority certificates  If empty  system roots should be used   p    td    tr   tr  td  code proxy url  code  br    code string  code    td   td      p ProxyURL is the URL to the proxy to be used for all requests to this cluster   p    td    tr   tr  td  code disable compression  code  br    code bool  code    td   td      p DisableCompression allows client to opt out of response compression for all requests to the server  This is useful to speed up requests  specifically lists  when client server network bandwidth is ample  by saving time on compression  server side  and decompression  client side   https   github com kubernetes kubernetes issues 112296   p    td    tr   tr  td  code config  code  br    a href  https   pkg go dev k8s io apimachinery pkg runtime  RawExtension   code k8s io apimachinery pkg runtime RawExtension  code   a    td   td      p Config holds additional config data that is specific to the exec plugin with regards to the cluster being authenticated to   p   p This data is sourced from the clientcmd Cluster object s extensions client authentication k8s io exec  field   p   p clusters   p   ul   li name  my cluster cluster      extensions   ul   li name  client authentication k8s io exec    reserved extension name for per cluster exec config extension  audience  06e3fbd18de8    arbitrary config  li    ul    li    ul   p In some environments  the user config may be exactly the same across many clusters  i e  call this exec plugin  minus some details that are specific to each cluster such as the audience   This field allows the per cluster config to be directly specified with the cluster info   Using this field to store secret data is not recommended as one of the prime benefits of exec plugins is that no secrets need to be stored directly in the kubeconfig   p    td    tr    tbody    table       ExecCredentialSpec        client authentication k8s io v1beta1 ExecCredentialSpec          Appears in        ExecCredential   
client authentication k8s io v1beta1 ExecCredential     p ExecCredentialSpec holds request and runtime specific information provided by the transport   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code cluster  code  br    a href   client authentication k8s io v1beta1 Cluster   code Cluster  code   a    td   td      p Cluster contains information to allow an exec plugin to communicate with the kubernetes cluster being authenticated to  Note that Cluster is non nil only when provideClusterInfo is set to true in the exec provider config  i e   ExecConfig ProvideClusterInfo    p    td    tr   tr  td  code interactive  code   B  Required   B  br    code bool  code    td   td      p Interactive declares whether stdin has been passed to this exec plugin   p    td    tr    tbody    table       ExecCredentialStatus        client authentication k8s io v1beta1 ExecCredentialStatus          Appears in        ExecCredential   client authentication k8s io v1beta1 ExecCredential     p ExecCredentialStatus holds credentials for the transport to use   p   p Token and ClientKeyData are sensitive fields  This data should only be transmitted in memory between client and exec plugin process  Exec plugin itself should at least be protected via file permissions   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code expirationTimestamp  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  time v1 meta   code meta v1 Time  code   a    td   td      p ExpirationTimestamp indicates a time when the provided credentials expire   p    td    tr   tr  td  code token  code   B  Required   B  br    code string  code    td   td      p Token is a bearer token used by the client for request authentication   p    td    tr   tr  td  code clientCertificateData  code   B  Required   B  br    code string  code    td   td 
     p PEM encoded client TLS certificates  including intermediates  if any    p    td    tr   tr  td  code clientKeyData  code   B  Required   B  br    code string  code    td   td      p PEM encoded private key for the above certificate   p    td    tr    tbody    table   "}
{"questions":"kubernetes reference contenttype tool reference Resource Types title Kubelet Configuration v1beta1 package kubelet config k8s io v1beta1 autogenerated true","answers":"---\ntitle: Kubelet Configuration (v1beta1)\ncontent_type: tool-reference\npackage: kubelet.config.k8s.io\/v1beta1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [CredentialProviderConfig](#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig)\n- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)\n- [SerializedNodeConfigSource](#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource)\n  \n    \n    \n\n## `FormatOptions`     {#FormatOptions}\n    \n\n**Appears in:**\n\n- [LoggingConfiguration](#LoggingConfiguration)\n\n\n<p>FormatOptions contains options for the different logging formats.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>text<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#TextOptions\"><code>TextOptions<\/code><\/a>\n<\/td>\n<td>\n   <p>[Alpha] Text contains options for logging format &quot;text&quot;.\nOnly available when the LoggingAlphaOptions feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>json<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#JSONOptions\"><code>JSONOptions<\/code><\/a>\n<\/td>\n<td>\n   <p>[Alpha] JSON contains options for logging format &quot;json&quot;.\nOnly available when the LoggingAlphaOptions feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `JSONOptions`     {#JSONOptions}\n    \n\n**Appears in:**\n\n- [FormatOptions](#FormatOptions)\n\n\n<p>JSONOptions contains options for logging format &quot;json&quot;.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>OutputRoutingOptions<\/code> <B>[Required]<\/B><br\/>\n<a 
href=\"#OutputRoutingOptions\"><code>OutputRoutingOptions<\/code><\/a>\n<\/td>\n<td>(Members of <code>OutputRoutingOptions<\/code> are embedded into this type.)\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `LogFormatFactory`     {#LogFormatFactory}\n    \n\n\n<p>LogFormatFactory provides support for a certain additional,\nnon-default log format.<\/p>\n\n\n\n\n## `LoggingConfiguration`     {#LoggingConfiguration}\n    \n\n**Appears in:**\n\n- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)\n\n\n<p>LoggingConfiguration contains logging options.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>format<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Format Flag specifies the structure of log messages.\ndefault value of format is <code>text<\/code><\/p>\n<\/td>\n<\/tr>\n<tr><td><code>flushFrequency<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#TimeOrMetaDuration\"><code>TimeOrMetaDuration<\/code><\/a>\n<\/td>\n<td>\n   <p>Maximum time between log flushes.\nIf a string, parsed as a duration (i.e. &quot;1s&quot;)\nIf an int, the maximum number of nanoseconds (i.e. 1s = 1000000000).\nIgnored if the selected logging backend writes log messages without buffering.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>verbosity<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#VerbosityLevel\"><code>VerbosityLevel<\/code><\/a>\n<\/td>\n<td>\n   <p>Verbosity is the threshold that determines which log messages are\nlogged. Default is zero which logs only the most important\nmessages. Higher values enable additional messages. 
Error messages\nare always logged.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>vmodule<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#VModuleConfiguration\"><code>VModuleConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>VModule overrides the verbosity threshold for individual files.\nOnly supported for &quot;text&quot; log format.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>options<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#FormatOptions\"><code>FormatOptions<\/code><\/a>\n<\/td>\n<td>\n   <p>[Alpha] Options holds additional parameters that are specific\nto the different logging formats. Only the options for the selected\nformat get used, but all of them get validated.\nOnly available when the LoggingAlphaOptions feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `LoggingOptions`     {#LoggingOptions}\n    \n\n\n<p>LoggingOptions can be used with ValidateAndApplyWithOptions to override\ncertain global defaults.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ErrorStream<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/io#Writer\"><code>io.Writer<\/code><\/a>\n<\/td>\n<td>\n   <p>ErrorStream can be used to override the os.Stderr default.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>InfoStream<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/io#Writer\"><code>io.Writer<\/code><\/a>\n<\/td>\n<td>\n   <p>InfoStream can be used to override the os.Stdout default.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `OutputRoutingOptions`     {#OutputRoutingOptions}\n    \n\n**Appears in:**\n\n- [JSONOptions](#JSONOptions)\n\n- [TextOptions](#TextOptions)\n\n\n<p>OutputRoutingOptions contains options that are supported by both &quot;text&quot; and &quot;json&quot;.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>splitStream<\/code> 
<B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>[Alpha] SplitStream redirects error messages to stderr while\ninfo messages go to stdout, with buffering. The default is to write\nboth to stdout, without buffering. Only available when\nthe LoggingAlphaOptions feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>infoBufferSize<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/api\/resource#QuantityValue\"><code>k8s.io\/apimachinery\/pkg\/api\/resource.QuantityValue<\/code><\/a>\n<\/td>\n<td>\n   <p>[Alpha] InfoBufferSize sets the size of the info stream when\nusing split streams. The default is zero, which disables buffering.\nOnly available when the LoggingAlphaOptions feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `TextOptions`     {#TextOptions}\n    \n\n**Appears in:**\n\n- [FormatOptions](#FormatOptions)\n\n\n<p>TextOptions contains options for logging format &quot;text&quot;.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>OutputRoutingOptions<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#OutputRoutingOptions\"><code>OutputRoutingOptions<\/code><\/a>\n<\/td>\n<td>(Members of <code>OutputRoutingOptions<\/code> are embedded into this type.)\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `TimeOrMetaDuration`     {#TimeOrMetaDuration}\n    \n\n**Appears in:**\n\n- [LoggingConfiguration](#LoggingConfiguration)\n\n\n<p>TimeOrMetaDuration is present only for backwards compatibility for the\nflushFrequency field, and new fields should use metav1.Duration.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>Duration<\/code> <B>[Required]<\/B><br\/>\n<a 
href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>Duration holds the duration<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>-<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>SerializeAsString controls whether the value is serialized as a string or an integer<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `TracingConfiguration`     {#TracingConfiguration}\n    \n\n**Appears in:**\n\n- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)\n\n\n<p>TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>endpoint<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Endpoint of the collector this component will report traces to.\nThe connection is insecure, and does not currently support TLS.\nRecommended is unset, and endpoint is the otlp grpc default, localhost:4317.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>samplingRatePerMillion<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>SamplingRatePerMillion is the number of samples to collect per million spans.\nRecommended is unset. 
If unset, sampler respects its parent span's sampling\nrate, but otherwise never samples.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `VModuleConfiguration`     {#VModuleConfiguration}\n    \n(Alias of `[]k8s.io\/component-base\/logs\/api\/v1.VModuleItem`)\n\n**Appears in:**\n\n- [LoggingConfiguration](#LoggingConfiguration)\n\n\n<p>VModuleConfiguration is a collection of individual file names or patterns\nand the corresponding verbosity threshold.<\/p>\n\n\n\n\n## `VerbosityLevel`     {#VerbosityLevel}\n    \n(Alias of `uint32`)\n\n**Appears in:**\n\n- [LoggingConfiguration](#LoggingConfiguration)\n\n\n\n<p>VerbosityLevel represents a klog or logr verbosity threshold.<\/p>\n\n\n\n  \n\n## `CredentialProviderConfig`     {#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig}\n    \n\n\n<p>CredentialProviderConfig is the configuration containing information about\neach exec credential provider. Kubelet reads this configuration from disk and enables\neach provider as specified by the CredentialProvider type.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubelet.config.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>CredentialProviderConfig<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>providers<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-CredentialProvider\"><code>[]CredentialProvider<\/code><\/a>\n<\/td>\n<td>\n   <p>providers is a list of credential provider plugins that will be enabled by the kubelet.\nMultiple providers may match against a single image, in which case credentials\nfrom all providers will be returned to the kubelet. If multiple providers are called\nfor a single image, the results are combined. 
If providers return overlapping\nauth keys, the value from the provider earlier in this list is used.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeletConfiguration`     {#kubelet-config-k8s-io-v1beta1-KubeletConfiguration}\n    \n\n\n<p>KubeletConfiguration contains the configuration for the Kubelet<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubelet.config.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>KubeletConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>enableServer<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableServer enables Kubelet's secured server.\nNote: Kubelet's insecure port is controlled by the readOnlyPort option.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>staticPodPath<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>staticPodPath is the path to the directory containing local (static) pods to\nrun, or the path to a single static pod file.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>podLogsDir<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>podLogsDir is a custom root directory path kubelet will use to place pod's log files.\nDefault: &quot;\/var\/log\/pods\/&quot;\nNote: it is not recommended to use the temp folder as a log directory as it may cause\nunexpected behavior in many places.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>syncFrequency<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>syncFrequency is the max period between synchronizing running\ncontainers and config.\nDefault: &quot;1m&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>fileCheckFrequency<\/code><br\/>\n<a 
href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>fileCheckFrequency is the duration between checking config files for\nnew data.\nDefault: &quot;20s&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>httpCheckFrequency<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>httpCheckFrequency is the duration between checking http for new data.\nDefault: &quot;20s&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>staticPodURL<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>staticPodURL is the URL for accessing static pods to run.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>staticPodURLHeader<\/code><br\/>\n<code>map[string][]string<\/code>\n<\/td>\n<td>\n   <p>staticPodURLHeader is a map of slices with HTTP headers to use when accessing the podURL.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>address<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>address is the IP address for the Kubelet to serve on (set to 0.0.0.0\nfor all interfaces).\nDefault: &quot;0.0.0.0&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>port<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>port is the port for the Kubelet to serve on.\nThe port number must be between 1 and 65535, inclusive.\nDefault: 10250<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>readOnlyPort<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>readOnlyPort is the read-only port for the Kubelet to serve on with\nno authentication\/authorization.\nThe port number must be between 1 and 65535, inclusive.\nSetting this field to 0 disables the read-only service.\nDefault: 0 (disabled)<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tlsCertFile<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>tlsCertFile is the file containing x509 Certificate for HTTPS. (CA cert,\nif any, concatenated after server cert). 
If tlsCertFile and\ntlsPrivateKeyFile are not provided, a self-signed certificate\nand key are generated for the public address and saved to the directory\npassed to the Kubelet's --cert-dir flag.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tlsPrivateKeyFile<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>tlsPrivateKeyFile is the file containing x509 private key matching tlsCertFile.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tlsCipherSuites<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>tlsCipherSuites is the list of allowed cipher suites for the server.\nNote that TLS 1.3 ciphersuites are not configurable.\nValues are from tls package constants (https:\/\/golang.org\/pkg\/crypto\/tls\/#pkg-constants).\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tlsMinVersion<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>tlsMinVersion is the minimum TLS version supported.\nValues are from tls package constants (https:\/\/golang.org\/pkg\/crypto\/tls\/#pkg-constants).\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>rotateCertificates<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>rotateCertificates enables client certificate rotation. The Kubelet will request a\nnew certificate from the certificates.k8s.io API. This requires an approver to approve the\ncertificate signing requests.\nDefault: false<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>serverTLSBootstrap<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>serverTLSBootstrap enables server certificate bootstrap. Instead of self\nsigning a serving certificate, the Kubelet will request a certificate from\nthe 'certificates.k8s.io' API. This requires an approver to approve the\ncertificate signing requests (CSR). 
The RotateKubeletServerCertificate feature\nmust be enabled when setting this field.\nDefault: false<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>authentication<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-KubeletAuthentication\"><code>KubeletAuthentication<\/code><\/a>\n<\/td>\n<td>\n   <p>authentication specifies how requests to the Kubelet's server are authenticated.\nDefaults:\nanonymous:\nenabled: false\nwebhook:\nenabled: true\ncacheTTL: &quot;2m&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>authorization<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-KubeletAuthorization\"><code>KubeletAuthorization<\/code><\/a>\n<\/td>\n<td>\n   <p>authorization specifies how requests to the Kubelet's server are authorized.\nDefaults:\nmode: Webhook\nwebhook:\ncacheAuthorizedTTL: &quot;5m&quot;\ncacheUnauthorizedTTL: &quot;30s&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>registryPullQPS<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>registryPullQPS is the limit of registry pulls per second.\nThe value must not be a negative number.\nSetting it to 0 means no limit.\nDefault: 5<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>registryBurst<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>registryBurst is the maximum size of bursty pulls, temporarily allows\npulls to burst to this number, while still not exceeding registryPullQPS.\nThe value must not be a negative number.\nOnly used if registryPullQPS is greater than 0.\nDefault: 10<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>eventRecordQPS<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>eventRecordQPS is the maximum event creations per second. If 0, there\nis no limit enforced. The value cannot be a negative number.\nDefault: 50<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>eventBurst<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>eventBurst is the maximum size of a burst of event creations, temporarily\nallows event creations to burst to this number, while still not exceeding\neventRecordQPS. 
This field cannot be a negative number and it is only used\nwhen eventRecordQPS &gt; 0.\nDefault: 100<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableDebuggingHandlers<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableDebuggingHandlers enables server endpoints for log access\nand local running of containers and commands, including the exec,\nattach, logs, and portforward features.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableContentionProfiling<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableContentionProfiling enables block profiling, if enableDebuggingHandlers is true.\nDefault: false<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>healthzPort<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>healthzPort is the port of the localhost healthz endpoint (set to 0 to disable).\nA valid number is between 1 and 65535.\nDefault: 10248<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>healthzBindAddress<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>healthzBindAddress is the IP address for the healthz server to serve on.\nDefault: &quot;127.0.0.1&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>oomScoreAdj<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>oomScoreAdj is the oom-score-adj value for the kubelet process. Values\nmust be within the range [-1000, 1000].\nDefault: -999<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clusterDomain<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>clusterDomain is the DNS domain for this cluster. If set, kubelet will\nconfigure all containers to search this domain in addition to the\nhost's search domains.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clusterDNS<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>clusterDNS is a list of IP addresses for the cluster DNS server. 
If set,\nkubelet will configure all containers to use this for DNS resolution\ninstead of the host's DNS servers.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>streamingConnectionIdleTimeout<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>streamingConnectionIdleTimeout is the maximum time a streaming connection\ncan be idle before the connection is automatically closed.\nDefault: &quot;4h&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>nodeStatusUpdateFrequency<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>nodeStatusUpdateFrequency is the frequency that kubelet computes node\nstatus. If node lease feature is not enabled, it is also the frequency that\nkubelet posts node status to master.\nNote: When node lease feature is not enabled, be cautious when changing the\nconstant, it must work with nodeMonitorGracePeriod in nodecontroller.\nDefault: &quot;10s&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>nodeStatusReportFrequency<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>nodeStatusReportFrequency is the frequency that kubelet posts node\nstatus to master if node status does not change. Kubelet will ignore this\nfrequency and post node status immediately if any change is detected. It is\nonly used when node lease feature is enabled. nodeStatusReportFrequency's\ndefault value is 5m. 
But if nodeStatusUpdateFrequency is set explicitly,\nnodeStatusReportFrequency's default value will be set to\nnodeStatusUpdateFrequency for backward compatibility.\nDefault: &quot;5m&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>nodeLeaseDurationSeconds<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>nodeLeaseDurationSeconds is the duration the Kubelet will set on its corresponding Lease.\nNodeLease provides an indicator of node health by having the Kubelet create and\nperiodically renew a lease, named after the node, in the kube-node-lease namespace.\nIf the lease expires, the node can be considered unhealthy.\nThe lease is currently renewed every 10s, per KEP-0009. In the future, the lease renewal\ninterval may be set based on the lease duration.\nThe field value must be greater than 0.\nDefault: 40<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imageMinimumGCAge<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>imageMinimumGCAge is the minimum age for an unused image before it is\ngarbage collected.\nDefault: &quot;2m&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imageMaximumGCAge<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>imageMaximumGCAge is the maximum age an image can be unused before it is garbage collected.\nThe default of this field is &quot;0s&quot;, which disables this field--meaning images won't be garbage\ncollected based on being unused for too long.\nDefault: &quot;0s&quot; (disabled)<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imageGCHighThresholdPercent<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>imageGCHighThresholdPercent is the percent of disk usage after which\nimage garbage collection is always run. The percent is calculated by\ndividing this field value by 100, so this field must be between 0 and\n100, inclusive. 
When specified, the value must be greater than\nimageGCLowThresholdPercent.\nDefault: 85<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imageGCLowThresholdPercent<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>imageGCLowThresholdPercent is the percent of disk usage before which\nimage garbage collection is never run. Lowest disk usage to garbage\ncollect to. The percent is calculated by dividing this field value by 100,\nso the field value must be between 0 and 100, inclusive. When specified, the\nvalue must be less than imageGCHighThresholdPercent.\nDefault: 80<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>volumeStatsAggPeriod<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>volumeStatsAggPeriod is the frequency for calculating and caching volume\ndisk usage for all pods.\nDefault: &quot;1m&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kubeletCgroups<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>kubeletCgroups is the absolute name of cgroups to isolate the kubelet in.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>systemCgroups<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>systemCgroups is the absolute name of cgroups in which to place\nall non-kernel processes that are not already in a container. Empty\nfor no container. Rolling back the flag requires a reboot.\nThe cgroupRoot must be specified if this field is not empty.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cgroupRoot<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>cgroupRoot is the root cgroup to use for pods. 
This is handled by the\ncontainer runtime on a best effort basis.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cgroupsPerQOS<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>cgroupsPerQOS enables QoS based CGroup hierarchy: top level CGroups for QoS classes\nand all Burstable and BestEffort Pods are brought up under their specific top level\nQoS CGroup.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cgroupDriver<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>cgroupDriver is the driver kubelet uses to manipulate CGroups on the host (cgroupfs\nor systemd).\nDefault: &quot;cgroupfs&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cpuManagerPolicy<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>cpuManagerPolicy is the name of the policy to use.\nRequires the CPUManager feature gate to be enabled.\nDefault: &quot;None&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cpuManagerPolicyOptions<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>cpuManagerPolicyOptions is a set of key=value pairs which allows setting extra options\nto fine-tune the behaviour of the cpu manager policies.\nRequires both the &quot;CPUManager&quot; and &quot;CPUManagerPolicyOptions&quot; feature gates to be enabled.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cpuManagerReconcilePeriod<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>cpuManagerReconcilePeriod is the reconciliation period for the CPU Manager.\nRequires the CPUManager feature gate to be enabled.\nDefault: &quot;10s&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>memoryManagerPolicy<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>memoryManagerPolicy is the name of the policy used by the memory manager.\nRequires the MemoryManager feature gate to be enabled.\nDefault: &quot;none&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>topologyManagerPolicy<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   
<p>topologyManagerPolicy is the name of the topology manager policy to use.\nValid values include:<\/p>\n<ul>\n<li><code>restricted<\/code>: kubelet only allows pods with optimal NUMA node alignment for\nrequested resources;<\/li>\n<li><code>best-effort<\/code>: kubelet will favor pods with NUMA alignment of CPU and device\nresources;<\/li>\n<li><code>none<\/code>: kubelet has no knowledge of NUMA alignment of a pod's CPU and device resources;<\/li>\n<li><code>single-numa-node<\/code>: kubelet only allows pods with a single NUMA alignment\nof CPU and device resources.<\/li>\n<\/ul>\n<p>Default: &quot;none&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>topologyManagerScope<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>topologyManagerScope represents the scope of topology hint generation\nthat topology manager requests and hint providers generate. Valid values include:<\/p>\n<ul>\n<li><code>container<\/code>: topology policy is applied on a per-container basis.<\/li>\n<li><code>pod<\/code>: topology policy is applied on a per-pod basis.<\/li>\n<\/ul>\n<p>Default: &quot;container&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>topologyManagerPolicyOptions<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>topologyManagerPolicyOptions is a set of key=value pairs that allows setting extra options\nto fine-tune the behaviour of the topology manager policies.\nRequires both the &quot;TopologyManager&quot; and &quot;TopologyManagerPolicyOptions&quot; feature gates to be enabled.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>qosReserved<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>qosReserved is a set of resource name to percentage pairs that specify\nthe minimum percentage of a resource reserved for exclusive use by the\nguaranteed QoS tier.\nCurrently supported resources: &quot;memory&quot;.\nRequires the QOSReserved feature gate to be enabled.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>runtimeRequestTimeout<\/code><br\/>\n<a
href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>runtimeRequestTimeout is the timeout for all runtime requests except long-running\nrequests: pull, logs, exec and attach.\nDefault: &quot;2m&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>hairpinMode<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>hairpinMode specifies how the Kubelet should configure the container\nbridge for hairpin packets.\nSetting this flag allows endpoints in a Service to load-balance back to\nthemselves if they should try to access their own Service. Values:<\/p>\n<ul>\n<li>&quot;promiscuous-bridge&quot;: make the container bridge promiscuous.<\/li>\n<li>&quot;hairpin-veth&quot;:       set the hairpin flag on container veth interfaces.<\/li>\n<li>&quot;none&quot;:               do nothing.<\/li>\n<\/ul>\n<p>Generally, one must set <code>--hairpin-mode=hairpin-veth<\/code> to achieve hairpin NAT,\nbecause promiscuous-bridge assumes the existence of a container bridge named cbr0.\nDefault: &quot;promiscuous-bridge&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>maxPods<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>maxPods is the maximum number of Pods that can run on this Kubelet.\nThe value must be a non-negative integer.\nDefault: 110<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>podCIDR<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>podCIDR is the CIDR to use for pod IP addresses, only used in standalone mode.\nIn cluster mode, this is obtained from the control plane.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>podPidsLimit<\/code><br\/>\n<code>int64<\/code>\n<\/td>\n<td>\n   <p>podPidsLimit is the maximum number of PIDs in any pod.\nDefault: -1<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resolvConf<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>resolvConf is the resolver configuration file used as the basis\nfor the container DNS resolution configuration.\nIf set to the empty
string, will override the default and effectively disable DNS lookups.\nDefault: &quot;\/etc\/resolv.conf&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>runOnce<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>runOnce causes the Kubelet to check the API server once for pods,\nrun those in addition to the pods specified by static pod files, and exit.\nDefault: false<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cpuCFSQuota<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>cpuCFSQuota enables CPU CFS quota enforcement for containers that\nspecify CPU limits.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cpuCFSQuotaPeriod<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>cpuCFSQuotaPeriod is the CPU CFS quota period value, <code>cpu.cfs_period_us<\/code>.\nThe value must be between 1 ms and 1 second, inclusive.\nRequires the CustomCPUCFSQuotaPeriod feature gate to be enabled.\nDefault: &quot;100ms&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>nodeStatusMaxImages<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>nodeStatusMaxImages caps the number of images reported in Node.status.images.\nThe value must be greater than -2.\nNote: If -1 is specified, no cap will be applied. 
If 0 is specified, no image is returned.\nDefault: 50<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>maxOpenFiles<\/code><br\/>\n<code>int64<\/code>\n<\/td>\n<td>\n   <p>maxOpenFiles is the number of files that can be opened by the kubelet process.\nThe value must be a non-negative number.\nDefault: 1000000<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>contentType<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>contentType is the content type of requests sent to the apiserver.\nDefault: &quot;application\/vnd.kubernetes.protobuf&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kubeAPIQPS<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>kubeAPIQPS is the QPS to use while talking with the Kubernetes apiserver.\nDefault: 50<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kubeAPIBurst<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>kubeAPIBurst is the burst to allow while talking with the Kubernetes API server.\nThis field cannot be a negative number.\nDefault: 100<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>serializeImagePulls<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>serializeImagePulls, when enabled, tells the Kubelet to pull images one\nat a time. We recommend <em>not<\/em> changing the default value on nodes that\nrun a docker daemon with version &lt; 1.9 or an Aufs storage backend.\nIssue #10959 has more details.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>maxParallelImagePulls<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>maxParallelImagePulls sets the maximum number of image pulls in parallel.\nThis field cannot be set if serializeImagePulls is true.\nSetting it to nil means no limit.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>evictionHard<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>evictionHard is a map of signal names to quantities that defines hard eviction\nthresholds.
For example: <code>{&quot;memory.available&quot;: &quot;300Mi&quot;}<\/code>.\nTo explicitly disable, pass a 0% or 100% threshold on an arbitrary resource.\nDefault:\nmemory.available:  &quot;100Mi&quot;\nnodefs.available:  &quot;10%&quot;\nnodefs.inodesFree: &quot;5%&quot;\nimagefs.available: &quot;15%&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>evictionSoft<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>evictionSoft is a map of signal names to quantities that defines soft eviction thresholds.\nFor example: <code>{&quot;memory.available&quot;: &quot;300Mi&quot;}<\/code>.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>evictionSoftGracePeriod<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>evictionSoftGracePeriod is a map of signal names to quantities that defines grace\nperiods for each soft eviction signal. For example: <code>{&quot;memory.available&quot;: &quot;30s&quot;}<\/code>.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>evictionPressureTransitionPeriod<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>evictionPressureTransitionPeriod is the duration for which the kubelet has to wait\nbefore transitioning out of an eviction pressure condition.\nDefault: &quot;5m&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>evictionMaxPodGracePeriod<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>evictionMaxPodGracePeriod is the maximum allowed grace period (in seconds) to use\nwhen terminating pods in response to a soft eviction threshold being met. This value\neffectively caps the Pod's terminationGracePeriodSeconds value during soft evictions.\nNote: Due to issue #64530, the behavior has a bug where this value currently just\noverrides the grace period during soft eviction, which can increase the grace\nperiod from what is set on the Pod. 
This bug will be fixed in a future release.\nDefault: 0<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>evictionMinimumReclaim<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>evictionMinimumReclaim is a map of signal names to quantities that defines minimum reclaims,\nwhich describe the minimum amount of a given resource the kubelet will reclaim when\nperforming a pod eviction while that resource is under pressure.\nFor example: <code>{&quot;imagefs.available&quot;: &quot;2Gi&quot;}<\/code>.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>podsPerCore<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>podsPerCore is the maximum number of pods per core. Cannot exceed maxPods.\nThe value must be a non-negative integer.\nIf 0, there is no limit on the number of Pods.\nDefault: 0<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableControllerAttachDetach<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableControllerAttachDetach enables the Attach\/Detach controller to\nmanage attachment\/detachment of volumes scheduled to this node, and\ndisables kubelet from executing any attach\/detach operations.\nNote: attaching\/detaching CSI volumes is not supported by the kubelet,\nso this option needs to be true for that use case.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>protectKernelDefaults<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>protectKernelDefaults, if true, causes the Kubelet to error if kernel\nflags are not as it expects. 
Otherwise the Kubelet will attempt to modify\nkernel flags to match its expectation.\nDefault: false<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>makeIPTablesUtilChains<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>makeIPTablesUtilChains, if true, causes the Kubelet to create the\nKUBE-IPTABLES-HINT chain in iptables as a hint to other components about the\nconfiguration of iptables on the system.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>iptablesMasqueradeBit<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>iptablesMasqueradeBit formerly controlled the creation of the KUBE-MARK-MASQ\nchain.\nDeprecated: no longer has any effect.\nDefault: 14<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>iptablesDropBit<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>iptablesDropBit formerly controlled the creation of the KUBE-MARK-DROP chain.\nDeprecated: no longer has any effect.\nDefault: 15<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>featureGates<\/code><br\/>\n<code>map[string]bool<\/code>\n<\/td>\n<td>\n   <p>featureGates is a map of feature names to bools that enable or disable experimental\nfeatures. This field modifies piecemeal the built-in default values from\n&quot;k8s.io\/kubernetes\/pkg\/features\/kube_features.go&quot;.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>failSwapOn<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>failSwapOn tells the Kubelet to fail to start if swap is enabled on the node.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>memorySwap<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-MemorySwapConfiguration\"><code>MemorySwapConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>memorySwap configures swap memory available to container workloads.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>containerLogMaxSize<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>containerLogMaxSize is a quantity defining the maximum size of the container log\nfile before it is rotated. 
For example: &quot;5Mi&quot; or &quot;256Ki&quot;.\nDefault: &quot;10Mi&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>containerLogMaxFiles<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>containerLogMaxFiles specifies the maximum number of container log files that can\nbe present for a container.\nDefault: 5<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>containerLogMaxWorkers<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>containerLogMaxWorkers specifies the maximum number of concurrent workers to spawn\nfor performing the log rotate operations. Set this count to 1 to disable the\nconcurrent log rotation workflows.\nDefault: 1<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>containerLogMonitorInterval<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>containerLogMonitorInterval specifies the duration at which the container logs are monitored\nfor performing the log rotate operation. This defaults to 10 seconds, but can be\ncustomized to a smaller value based on the log generation rate and the size at which logs\nmust be rotated.\nDefault: 10s<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>configMapAndSecretChangeDetectionStrategy<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy\"><code>ResourceChangeDetectionStrategy<\/code><\/a>\n<\/td>\n<td>\n   <p>configMapAndSecretChangeDetectionStrategy is a mode in which ConfigMap and Secret\nmanagers are running.
Valid values include:<\/p>\n<ul>\n<li><code>Get<\/code>: kubelet fetches necessary objects directly from the API server;<\/li>\n<li><code>Cache<\/code>: kubelet uses TTL cache for object fetched from the API server;<\/li>\n<li><code>Watch<\/code>: kubelet uses watches to observe changes to objects that are in its interest.<\/li>\n<\/ul>\n<p>Default: &quot;Watch&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>systemReserved<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>systemReserved is a set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=150G)\npairs that describe resources reserved for non-kubernetes components.\nCurrently only cpu and memory are supported.\nSee http:\/\/kubernetes.io\/docs\/user-guide\/compute-resources for more detail.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kubeReserved<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>kubeReserved is a set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=150G) pairs\nthat describe resources reserved for kubernetes system components.\nCurrently cpu, memory and local storage for root file system are supported.\nSee https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\nfor more details.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>reservedSystemCPUs<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>The reservedSystemCPUs option specifies the CPU list reserved for the host\nlevel system threads and kubernetes related threads. 
This provides a &quot;static&quot;\nCPU list rather than the &quot;dynamic&quot; list computed from systemReserved and kubeReserved.\nThis option does not support systemReservedCgroup or kubeReservedCgroup.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>showHiddenMetricsForVersion<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>showHiddenMetricsForVersion is the previous version for which you want to show\nhidden metrics.\nOnly the previous minor version is meaningful; other values will not be allowed.\nThe format is <code>&lt;major&gt;.&lt;minor&gt;<\/code>, e.g.: <code>1.16<\/code>.\nThe purpose of this format is to make sure you have the opportunity to notice\nif the next release hides additional metrics, rather than being surprised\nwhen they are permanently removed in the release after that.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>systemReservedCgroup<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>systemReservedCgroup helps the kubelet identify the absolute name of the top-level cgroup used\nto enforce the <code>systemReserved<\/code> compute resource reservation for OS system daemons.\nRefer to the <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/reserve-compute-resources\/#node-allocatable\">Node Allocatable<\/a>\ndoc for more information.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kubeReservedCgroup<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>kubeReservedCgroup helps the kubelet identify the absolute name of the top-level cgroup used\nto enforce the <code>KubeReserved<\/code> compute resource reservation for Kubernetes node system daemons.\nRefer to the <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/reserve-compute-resources\/#node-allocatable\">Node Allocatable<\/a>\ndoc for more information.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enforceNodeAllocatable<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>This flag specifies the various Node Allocatable enforcements that the kubelet
needs to perform.\nThis flag accepts a list of options. Acceptable options are <code>none<\/code>, <code>pods<\/code>,\n<code>system-reserved<\/code> and <code>kube-reserved<\/code>.\nIf <code>none<\/code> is specified, no other options may be specified.\nWhen <code>system-reserved<\/code> is in the list, systemReservedCgroup must be specified.\nWhen <code>kube-reserved<\/code> is in the list, kubeReservedCgroup must be specified.\nThis field is supported only when <code>cgroupsPerQOS<\/code> is set to true.\nRefer to <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/reserve-compute-resources\/#node-allocatable\">Node Allocatable<\/a>\nfor more information.\nDefault: [&quot;pods&quot;]<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>allowedUnsafeSysctls<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>A comma separated whitelist of unsafe sysctls or sysctl patterns (ending in <code>*<\/code>).\nUnsafe sysctl groups are <code>kernel.shm*<\/code>, <code>kernel.msg*<\/code>, <code>kernel.sem<\/code>, <code>fs.mqueue.*<\/code>,\nand <code>net.*<\/code>. For example: &quot;<code>kernel.msg*,net.ipv4.route.min_pmtu<\/code>&quot;\nDefault: []<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>volumePluginDir<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>volumePluginDir is the full path of the directory in which to search\nfor additional third party volume plugins.\nDefault: &quot;\/usr\/libexec\/kubernetes\/kubelet-plugins\/volume\/exec\/&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>providerID<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>providerID, if set, sets the unique ID of the instance that an external\nprovider (i.e. 
cloudprovider) can use to identify a specific node.\nDefault: &quot;&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kernelMemcgNotification<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>kernelMemcgNotification, if set, instructs the kubelet to integrate with the\nkernel memcg notification for determining if memory eviction thresholds are\nexceeded rather than polling.\nDefault: false<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>logging<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#LoggingConfiguration\"><code>LoggingConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>logging specifies the options of logging.\nRefer to <a href=\"https:\/\/github.com\/kubernetes\/component-base\/blob\/master\/logs\/options.go\">Logs Options<\/a>\nfor more information.\nDefault:\nFormat: text<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableSystemLogHandler<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableSystemLogHandler enables system logs via web interface host:port\/logs\/\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableSystemLogQuery<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableSystemLogQuery enables the node log query feature on the \/logs endpoint.\nEnableSystemLogHandler has to be enabled in addition for this feature to work.\nDefault: false<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>shutdownGracePeriod<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>shutdownGracePeriod specifies the total duration that the node should delay the\nshutdown and total grace period for pod termination during a node shutdown.\nDefault: &quot;0s&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>shutdownGracePeriodCriticalPods<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>shutdownGracePeriodCriticalPods specifies the duration used to terminate critical\npods during a node 
shutdown. This should be less than shutdownGracePeriod.\nFor example, if shutdownGracePeriod=30s, and shutdownGracePeriodCriticalPods=10s,\nduring a node shutdown the first 20 seconds would be reserved for gracefully\nterminating normal pods, and the last 10 seconds would be reserved for terminating\ncritical pods.\nDefault: &quot;0s&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>shutdownGracePeriodByPodPriority<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-ShutdownGracePeriodByPodPriority\"><code>[]ShutdownGracePeriodByPodPriority<\/code><\/a>\n<\/td>\n<td>\n   <p>shutdownGracePeriodByPodPriority specifies the shutdown grace period for Pods based\non their associated priority class value.\nWhen a shutdown request is received, the Kubelet will initiate shutdown on all pods\nrunning on the node with a grace period that depends on the priority of the pod,\nand then wait for all pods to exit.\nEach entry in the array represents the graceful shutdown time a pod with a priority\nclass value that lies in the range of that value and the next higher entry in the\nlist when the node is shutting down.\nFor example, to allow critical pods 10s to shutdown, priority&gt;=10000 pods 20s to\nshutdown, and all remaining pods 30s to shutdown.<\/p>\n<p>shutdownGracePeriodByPodPriority:<\/p>\n<ul>\n<li>priority: 2000000000\nshutdownGracePeriodSeconds: 10<\/li>\n<li>priority: 10000\nshutdownGracePeriodSeconds: 20<\/li>\n<li>priority: 0\nshutdownGracePeriodSeconds: 30<\/li>\n<\/ul>\n<p>The time the Kubelet will wait before exiting will at most be the maximum of all\nshutdownGracePeriodSeconds for each priority class range represented on the node.\nWhen all pods have exited or reached their grace periods, the Kubelet will release\nthe shutdown inhibit lock.\nRequires the GracefulNodeShutdown feature gate to be enabled.\nThis configuration must be empty if either ShutdownGracePeriod or ShutdownGracePeriodCriticalPods is set.\nDefault: 
nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>reservedMemory<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-MemoryReservation\"><code>[]MemoryReservation<\/code><\/a>\n<\/td>\n<td>\n   <p>reservedMemory specifies a comma-separated list of memory reservations for NUMA nodes.\nThe parameter makes sense only in the context of the memory manager feature.\nThe memory manager will not allocate reserved memory for container workloads.\nFor example, if you have NUMA node 0 with 10Gi of memory and reservedMemory was\nspecified to reserve 1Gi of memory at NUMA node 0, the memory manager will assume that\nonly 9Gi is available for allocation.\nYou can specify reservations for multiple NUMA nodes and memory types.\nYou can omit this parameter entirely, but you should be aware that the amount of\nreserved memory from all NUMA nodes should be equal to the amount of memory specified\nby the <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/reserve-compute-resources\/#node-allocatable\">node allocatable<\/a>.\nIf at least one node allocatable parameter has a non-zero value, you will need\nto specify at least one NUMA node.\nAlso, avoid specifying:<\/p>\n<ol>\n<li>duplicates, i.e. the same NUMA node and memory type, but with a different value;<\/li>\n<li>zero limits for any memory type;<\/li>\n<li>NUMA node IDs that do not exist on the machine;<\/li>\n<li>memory types other than memory and hugepages-&lt;size&gt;.<\/li>\n<\/ol>\n<p>Default: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableProfilingHandler<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableProfilingHandler enables profiling via web interface host:port\/debug\/pprof\/.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableDebugFlagsHandler<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableDebugFlagsHandler enables the flags endpoint via web interface host:port\/debug\/flags\/v.\nDefault:
true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>seccompDefault<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>SeccompDefault enables the use of <code>RuntimeDefault<\/code> as the default seccomp profile for all workloads.\nDefault: false<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>memoryThrottlingFactor<\/code><br\/>\n<code>float64<\/code>\n<\/td>\n<td>\n   <p>MemoryThrottlingFactor specifies the factor multiplied by the memory limit or node allocatable memory\nwhen setting the cgroupv2 memory.high value to enforce MemoryQoS.\nDecreasing this factor will set lower high limit for container cgroups and put heavier reclaim pressure\nwhile increasing will put less reclaim pressure.\nSee https:\/\/kep.k8s.io\/2570 for more details.\nDefault: 0.9<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>registerWithTaints<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#taint-v1-core\"><code>[]core\/v1.Taint<\/code><\/a>\n<\/td>\n<td>\n   <p>registerWithTaints are an array of taints to add to a node object when\nthe kubelet registers itself. This only takes effect when registerNode\nis true and upon the initial registration of the node.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>registerNode<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>registerNode enables automatic registration with the apiserver.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tracing<\/code><br\/>\n<a href=\"#TracingConfiguration\"><code>TracingConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>Tracing specifies the versioned configuration for OpenTelemetry tracing clients.\nSee https:\/\/kep.k8s.io\/2832 for more details.\nDefault: nil<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>localStorageCapacityIsolation<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>LocalStorageCapacityIsolation enables local ephemeral storage isolation feature. 
The default setting is true.\nThis feature allows users to set request\/limit for a container's ephemeral storage and manage it in a similar way\nas CPU and memory. It also allows setting sizeLimit for an emptyDir volume, which will trigger pod eviction if disk\nusage from the volume exceeds the limit.\nThis feature depends on the capability of detecting correct root file system disk usage. For certain systems,\nsuch as kind rootless, if this capability cannot be supported, the feature LocalStorageCapacityIsolation should be\ndisabled. Once disabled, users should not set request\/limit for a container's ephemeral storage, or sizeLimit for emptyDir.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>containerRuntimeEndpoint<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>containerRuntimeEndpoint is the endpoint of the container runtime.\nUnix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows.\nExamples: 'unix:\/\/\/path\/to\/runtime.sock', 'npipe:\/\/\/\/.\/pipe\/runtime'<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imageServiceEndpoint<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>imageServiceEndpoint is the endpoint of the container image service.\nUnix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows.\nExamples: 'unix:\/\/\/path\/to\/runtime.sock', 'npipe:\/\/\/\/.\/pipe\/runtime'.\nIf not specified, the value in containerRuntimeEndpoint is used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>failCgroupV1<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>failCgroupV1 prevents the kubelet from starting on hosts\nthat use cgroup v1.
By default, this is set to 'false', meaning\nthe kubelet is allowed to start on cgroup v1 hosts unless this\noption is explicitly enabled.\nDefault: false<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `SerializedNodeConfigSource`     {#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource}\n    \n\n\n<p>SerializedNodeConfigSource allows us to serialize v1.NodeConfigSource.\nThis type is used internally by the Kubelet for tracking checkpointed dynamic configs.\nIt exists in the kubeletconfig API group because it is classified as a versioned input to the Kubelet.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubelet.config.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>SerializedNodeConfigSource<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>source<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#nodeconfigsource-v1-core\"><code>core\/v1.NodeConfigSource<\/code><\/a>\n<\/td>\n<td>\n   <p>source is the source that we are serializing.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `CredentialProvider`     {#kubelet-config-k8s-io-v1beta1-CredentialProvider}\n    \n\n**Appears in:**\n\n- [CredentialProviderConfig](#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig)\n\n\n<p>CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only\ninvoked when an image being pulled matches the images handled by the plugin (see matchImages).<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>name is the required name of the credential provider. It must match the name of the\nprovider executable as seen by the kubelet. 
The executable must be in the kubelet's\nbin directory (set by the --image-credential-provider-bin-dir flag).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>matchImages<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>matchImages is a required list of strings used to match against images in order to\ndetermine if this provider should be invoked. If one of the strings matches the\nrequested image from the kubelet, the plugin will be invoked and given a chance\nto provide credentials. Images are expected to contain the registry domain\nand URL path.<\/p>\n<p>Each entry in matchImages is a pattern which can optionally contain a port and a path.\nGlobs can be used in the domain, but not in the port or the path. Globs are supported\nas subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.\nMatching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match\na single subdomain segment, so *.io does not match *.k8s.io.<\/p>\n<p>A match exists between an image and a matchImage when all of the below are true:<\/p>\n<ul>\n<li>Both contain the same number of domain parts and each part matches.<\/li>\n<li>The URL path of an imageMatch must be a prefix of the target image URL path.<\/li>\n<li>If the imageMatch contains a port, then the port must match in the image as well.<\/li>\n<\/ul>\n<p>Example values of matchImages:<\/p>\n<ul>\n<li>123456789.dkr.ecr.us-east-1.amazonaws.com<\/li>\n<li>*.azurecr.io<\/li>\n<li>gcr.io<\/li>\n<li>*.*.registry.io<\/li>\n<li>registry.io:8080\/path<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>defaultCacheDuration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>defaultCacheDuration is the default duration the plugin will cache credentials in-memory\nif a cache duration is not provided in the plugin response.
This field is required.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>apiVersion<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse\nMUST use the same encoding version as the input. Current supported values are:<\/p>\n<ul>\n<li>credentialprovider.kubelet.k8s.io\/v1beta1<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>args<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>Arguments to pass to the command when executing it.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>env<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-ExecEnvVar\"><code>[]ExecEnvVar<\/code><\/a>\n<\/td>\n<td>\n   <p>Env defines additional environment variables to expose to the process. These\nare unioned with the host's environment, as well as variables client-go uses\nto pass argument to the plugin.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExecEnvVar`     {#kubelet-config-k8s-io-v1beta1-ExecEnvVar}\n    \n\n**Appears in:**\n\n- [CredentialProvider](#kubelet-config-k8s-io-v1beta1-CredentialProvider)\n\n\n<p>ExecEnvVar is used for setting environment variables when executing an exec-based\ncredential plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>value<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeletAnonymousAuthentication`     {#kubelet-config-k8s-io-v1beta1-KubeletAnonymousAuthentication}\n    \n\n**Appears in:**\n\n- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication)\n\n\n\n<table class=\"table\">\n<thead><tr><th 
width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>enabled<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enabled allows anonymous requests to the kubelet server.\nRequests that are not rejected by another authentication method are treated as\nanonymous requests.\nAnonymous requests have a username of <code>system:anonymous<\/code>, and a group name of\n<code>system:unauthenticated<\/code>.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeletAuthentication`     {#kubelet-config-k8s-io-v1beta1-KubeletAuthentication}\n    \n\n**Appears in:**\n\n- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>x509<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-KubeletX509Authentication\"><code>KubeletX509Authentication<\/code><\/a>\n<\/td>\n<td>\n   <p>x509 contains settings related to x509 client certificate authentication.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>webhook<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthentication\"><code>KubeletWebhookAuthentication<\/code><\/a>\n<\/td>\n<td>\n   <p>webhook contains settings related to webhook bearer token authentication.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>anonymous<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-KubeletAnonymousAuthentication\"><code>KubeletAnonymousAuthentication<\/code><\/a>\n<\/td>\n<td>\n   <p>anonymous contains settings related to anonymous authentication.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeletAuthorization`     {#kubelet-config-k8s-io-v1beta1-KubeletAuthorization}\n    \n\n**Appears in:**\n\n- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  
\n<tr><td><code>mode<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-KubeletAuthorizationMode\"><code>KubeletAuthorizationMode<\/code><\/a>\n<\/td>\n<td>\n   <p>mode is the authorization mode to apply to requests to the kubelet server.\nValid values are <code>AlwaysAllow<\/code> and <code>Webhook<\/code>.\nWebhook mode uses the SubjectAccessReview API to determine authorization.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>webhook<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthorization\"><code>KubeletWebhookAuthorization<\/code><\/a>\n<\/td>\n<td>\n   <p>webhook contains settings related to Webhook authorization.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeletAuthorizationMode`     {#kubelet-config-k8s-io-v1beta1-KubeletAuthorizationMode}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization)\n\n\n\n\n\n## `KubeletWebhookAuthentication`     {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthentication}\n    \n\n**Appears in:**\n\n- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>enabled<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enabled allows bearer token authentication backed by the\ntokenreviews.authentication.k8s.io API.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cacheTTL<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>cacheTTL enables caching of authentication results<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeletWebhookAuthorization`     {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthorization}\n    \n\n**Appears in:**\n\n- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization)\n\n\n\n<table class=\"table\">\n<thead><tr><th 
width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>cacheAuthorizedTTL<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>cacheAuthorizedTTL is the duration to cache 'authorized' responses from the\nwebhook authorizer.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cacheUnauthorizedTTL<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>cacheUnauthorizedTTL is the duration to cache 'unauthorized' responses from\nthe webhook authorizer.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeletX509Authentication`     {#kubelet-config-k8s-io-v1beta1-KubeletX509Authentication}\n    \n\n**Appears in:**\n\n- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>clientCAFile<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>clientCAFile is the path to a PEM-encoded certificate bundle. 
If set, any request\npresenting a client certificate signed by one of the authorities in the bundle\nis authenticated with a username corresponding to the CommonName,\nand groups corresponding to the Organization in the client certificate.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `MemoryReservation`     {#kubelet-config-k8s-io-v1beta1-MemoryReservation}\n    \n\n**Appears in:**\n\n- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)\n\n\n<p>MemoryReservation specifies the memory reservation of different types for each NUMA node<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>numaNode<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>limits<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#resourcelist-v1-core\"><code>core\/v1.ResourceList<\/code><\/a>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `MemorySwapConfiguration`     {#kubelet-config-k8s-io-v1beta1-MemorySwapConfiguration}\n    \n\n**Appears in:**\n\n- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>swapBehavior<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>swapBehavior configures swap memory available to container workloads. May be one of\n&quot;&quot;, &quot;NoSwap&quot;: workloads can not use swap, default option.\n&quot;LimitedSwap&quot;: workload swap usage is limited. 
The swap limit is proportionate to the container's memory request.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ResourceChangeDetectionStrategy`     {#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)\n\n\n<p>ResourceChangeDetectionStrategy denotes a mode in which internal\nmanagers (secret, configmap) are discovering object changes.<\/p>\n\n\n\n\n## `ShutdownGracePeriodByPodPriority`     {#kubelet-config-k8s-io-v1beta1-ShutdownGracePeriodByPodPriority}\n    \n\n**Appears in:**\n\n- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)\n\n\n<p>ShutdownGracePeriodByPodPriority specifies the shutdown grace period for Pods based on their associated priority class value<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>priority<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>priority is the priority value associated with the shutdown grace period<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>shutdownGracePeriodSeconds<\/code> <B>[Required]<\/B><br\/>\n<code>int64<\/code>\n<\/td>\n<td>\n   <p>shutdownGracePeriodSeconds is the shutdown grace period in seconds<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n ","site":"kubernetes reference","answers_cleaned":"    title  Kubelet Configuration  v1beta1  content type  tool reference package  kubelet config k8s io v1beta1 auto generated  true          Resource Types       CredentialProviderConfig   kubelet config k8s io v1beta1 CredentialProviderConfig     KubeletConfiguration   kubelet config k8s io v1beta1 KubeletConfiguration     SerializedNodeConfigSource   kubelet config k8s io v1beta1 SerializedNodeConfigSource                    FormatOptions        FormatOptions          Appears in        LoggingConfiguration   LoggingConfiguration     p 
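The matchImages rules described under `CredentialProvider` above are concrete enough to sketch in code. The following is an illustrative Python approximation only (using `fnmatch` per domain segment), not the kubelet's actual matcher:

```python
import fnmatch


def match_image(pattern: str, image: str) -> bool:
    """Rough sketch of the kubelet's matchImages semantics (illustrative only)."""
    def split(ref):
        # registry[:port][/path...] -> (host, port, path)
        host_port, _, path = ref.partition("/")
        host, _, port = host_port.partition(":")
        return host, port, path

    p_host, p_port, p_path = split(pattern)
    i_host, i_port, i_path = split(image)

    # Both must contain the same number of domain parts, and each glob can
    # only match a single subdomain segment ("*.io" never matches "foo.k8s.io").
    p_parts, i_parts = p_host.split("."), i_host.split(".")
    if len(p_parts) != len(i_parts):
        return False
    if not all(fnmatch.fnmatchcase(i, p) for p, i in zip(p_parts, i_parts)):
        return False

    # If the pattern names a port, the image must use the same port.
    if p_port and p_port != i_port:
        return False

    # The pattern's URL path must be a prefix of the image's URL path.
    return i_path.startswith(p_path)
```

With the example values from the table, `*.azurecr.io` matches `myregistry.azurecr.io`, while `registry.io:8080/path` only matches images on that port under that path.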
## `FormatOptions`     {#FormatOptions}

**Appears in:**

- [LoggingConfiguration](#LoggingConfiguration)

<p>FormatOptions contains options for the different logging formats.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>text</code> <B>[Required]</B><br/><a href="#TextOptions"><code>TextOptions</code></a></td><td><p>[Alpha] Text contains options for logging format &quot;text&quot;. Only available when the LoggingAlphaOptions feature gate is enabled.</p></td></tr>
<tr><td><code>json</code> <B>[Required]</B><br/><a href="#JSONOptions"><code>JSONOptions</code></a></td><td><p>[Alpha] JSON contains options for logging format &quot;json&quot;. Only available when the LoggingAlphaOptions feature gate is enabled.</p></td></tr>
</tbody>
</table>

## `JSONOptions`     {#JSONOptions}

**Appears in:**

- [FormatOptions](#FormatOptions)

<p>JSONOptions contains options for logging format &quot;json&quot;.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>OutputRoutingOptions</code> <B>[Required]</B><br/><a href="#OutputRoutingOptions"><code>OutputRoutingOptions</code></a></td><td>(Members of <code>OutputRoutingOptions</code> are embedded into this type.) <span class="text-muted">No description provided.</span></td></tr>
</tbody>
</table>

## `LogFormatFactory`     {#LogFormatFactory}

<p>LogFormatFactory provides support for a certain additional, non-default log format.</p>

## `LoggingConfiguration`     {#LoggingConfiguration}

**Appears in:**

- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)

<p>LoggingConfiguration contains logging options.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>format</code> <B>[Required]</B><br/><code>string</code></td><td><p>Format flag specifies the structure of log messages. The default value of format is <code>text</code>.</p></td></tr>
<tr><td><code>flushFrequency</code> <B>[Required]</B><br/><a href="#TimeOrMetaDuration"><code>TimeOrMetaDuration</code></a></td><td><p>Maximum time between log flushes. If a string, parsed as a duration (i.e. &quot;1s&quot;). If an int, the maximum number of nanoseconds (i.e. 1s = 1000000000). Ignored if the selected logging backend writes log messages without buffering.</p></td></tr>
<tr><td><code>verbosity</code> <B>[Required]</B><br/><a href="#VerbosityLevel"><code>VerbosityLevel</code></a></td><td><p>Verbosity is the threshold that determines which log messages are logged. Default is zero, which logs only the most important messages. Higher values enable additional messages. Error messages are always logged.</p></td></tr>
<tr><td><code>vmodule</code> <B>[Required]</B><br/><a href="#VModuleConfiguration"><code>VModuleConfiguration</code></a></td><td><p>VModule overrides the verbosity threshold for individual files. Only supported for &quot;text&quot; log format.</p></td></tr>
<tr><td><code>options</code> <B>[Required]</B><br/><a href="#FormatOptions"><code>FormatOptions</code></a></td><td><p>[Alpha] Options holds additional parameters that are specific to the different logging formats. Only the options for the selected format get used, but all of them get validated. Only available when the LoggingAlphaOptions feature gate is enabled.</p></td></tr>
</tbody>
</table>

## `LoggingOptions`     {#LoggingOptions}

<p>LoggingOptions can be used with ValidateAndApplyWithOptions to override certain global defaults.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>ErrorStream</code> <B>[Required]</B><br/><a href="https://pkg.go.dev/io#Writer"><code>io.Writer</code></a></td><td><p>ErrorStream can be used to override the os.Stderr default.</p></td></tr>
<tr><td><code>InfoStream</code> <B>[Required]</B><br/><a href="https://pkg.go.dev/io#Writer"><code>io.Writer</code></a></td><td><p>InfoStream can be used to override the os.Stdout default.</p></td></tr>
</tbody>
</table>

## `OutputRoutingOptions`     {#OutputRoutingOptions}

**Appears in:**

- [JSONOptions](#JSONOptions)
- [TextOptions](#TextOptions)

<p>OutputRoutingOptions contains options that are supported by both &quot;text&quot; and &quot;json&quot;.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>splitStream</code> <B>[Required]</B><br/><code>bool</code></td><td><p>[Alpha] SplitStream redirects error messages to stderr while info messages go to stdout, with buffering. The default is to write both to stdout, without buffering. Only available when the LoggingAlphaOptions feature gate is enabled.</p></td></tr>
<tr><td><code>infoBufferSize</code> <B>[Required]</B><br/><a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#QuantityValue"><code>k8s.io/apimachinery/pkg/api/resource.QuantityValue</code></a></td><td><p>[Alpha] InfoBufferSize sets the size of the info stream when using split streams. The default is zero, which disables buffering. Only available when the LoggingAlphaOptions feature gate is enabled.</p></td></tr>
</tbody>
</table>

## `TextOptions`     {#TextOptions}

**Appears in:**

- [FormatOptions](#FormatOptions)

<p>TextOptions contains options for logging format &quot;text&quot;.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>OutputRoutingOptions</code> <B>[Required]</B><br/><a href="#OutputRoutingOptions"><code>OutputRoutingOptions</code></a></td><td>(Members of <code>OutputRoutingOptions</code> are embedded into this type.) <span class="text-muted">No description provided.</span></td></tr>
</tbody>
</table>

## `TimeOrMetaDuration`     {#TimeOrMetaDuration}

**Appears in:**

- [LoggingConfiguration](#LoggingConfiguration)

<p>TimeOrMetaDuration is present only for backwards compatibility for the flushFrequency field, and new fields should use metav1.Duration.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>Duration</code> <B>[Required]</B><br/><a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a></td><td><p>Duration holds the duration.</p></td></tr>
<tr><td><code>-</code> <B>[Required]</B><br/><code>bool</code></td><td><p>SerializeAsString controls whether the value is serialized as a string or an integer.</p></td></tr>
</tbody>
</table>

## `TracingConfiguration`     {#TracingConfiguration}

**Appears in:**

- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)

<p>TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>endpoint</code><br/><code>string</code></td><td><p>Endpoint of the collector this component will report traces to. The connection is insecure, and does not currently support TLS. Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.</p></td></tr>
<tr><td><code>samplingRatePerMillion</code><br/><code>int32</code></td><td><p>SamplingRatePerMillion is the number of samples to collect per million spans. Recommended is unset. If unset, the sampler respects its parent span's sampling rate, but otherwise never samples.</p></td></tr>
</tbody>
</table>

## `VModuleConfiguration`     {#VModuleConfiguration}

(Alias of `[]k8s.io/component-base/logs/api/v1.VModuleItem`)

**Appears in:**

- [LoggingConfiguration](#LoggingConfiguration)

<p>VModuleConfiguration is a collection of individual file names or patterns and the corresponding verbosity threshold.</p>
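Taken together, the logging and tracing fields above map onto a kubelet configuration file roughly like the following sketch (values are illustrative, not recommendations; the vmodule item shape, `filePattern`/`verbosity`, comes from the component-base logging API):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  format: text          # default format
  verbosity: 3          # higher values enable additional messages
  flushFrequency: 5s    # TimeOrMetaDuration: duration string or nanoseconds
  vmodule:              # per-file overrides; "text" format only
    - filePattern: "kubelet*"
      verbosity: 5
tracing:
  endpoint: localhost:4317       # otlp grpc default collector endpoint
  samplingRatePerMillion: 10000  # sample 1% of spans
```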
VerbosityLevel        VerbosityLevel        Alias of  uint32      Appears in        LoggingConfiguration   LoggingConfiguration      p VerbosityLevel represents a klog or logr verbosity threshold   p             CredentialProviderConfig        kubelet config k8s io v1beta1 CredentialProviderConfig          p CredentialProviderConfig is the configuration containing information about each exec credential provider  Kubelet reads this configuration from disk and enables each provider as specified by the CredentialProvider type   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubelet config k8s io v1beta1  code   td   tr   tr  td  code kind  code  br  string  td  td  code CredentialProviderConfig  code   td   tr           tr  td  code providers  code   B  Required   B  br    a href   kubelet config k8s io v1beta1 CredentialProvider   code   CredentialProvider  code   a    td   td      p providers is a list of credential provider plugins that will be enabled by the kubelet  Multiple providers may match against a single image  in which case credentials from all providers will be returned to the kubelet  If multiple providers are called for a single image  the results are combined  If providers return overlapping auth keys  the value from the provider earlier in this list is used   p    td    tr    tbody    table       KubeletConfiguration        kubelet config k8s io v1beta1 KubeletConfiguration          p KubeletConfiguration contains the configuration for the Kubelet  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubelet config k8s io v1beta1  code   td   tr   tr  td  code kind  code  br  string  td  td  code KubeletConfiguration  code   td   tr           tr  td  code enableServer  code   B  Required   B  br    code bool  code    
td   td      p enableServer enables Kubelet s secured server  Note  Kubelet s insecure port is controlled by the readOnlyPort option  Default  true  p    td    tr   tr  td  code staticPodPath  code  br    code string  code    td   td      p staticPodPath is the path to the directory containing local  static  pods to run  or the path to a single static pod file  Default   quot  quot   p    td    tr   tr  td  code podLogsDir  code  br    code string  code    td   td      p podLogsDir is a custom root directory path kubelet will use to place pod s log files  Default   quot  var log pods  quot  Note  it is not recommended to use the temp folder as a log directory as it may cause unexpected behavior in many places   p    td    tr   tr  td  code syncFrequency  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p syncFrequency is the max period between synchronizing running containers and config  Default   quot 1m quot   p    td    tr   tr  td  code fileCheckFrequency  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p fileCheckFrequency is the duration between checking config files for new data  Default   quot 20s quot   p    td    tr   tr  td  code httpCheckFrequency  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p httpCheckFrequency is the duration between checking http for new data  Default   quot 20s quot   p    td    tr   tr  td  code staticPodURL  code  br    code string  code    td   td      p staticPodURL is the URL for accessing static pods to run  Default   quot  quot   p    td    tr   tr  td  code staticPodURLHeader  code  br    code map string   string  code    td   td      p staticPodURLHeader is a map of slices with HTTP headers to use when accessing the podURL  Default  nil  p    td    tr   tr  td  
code address  code  br    code string  code    td   td      p address is the IP address for the Kubelet to serve on  set to 0 0 0 0 for all interfaces   Default   quot 0 0 0 0 quot   p    td    tr   tr  td  code port  code  br    code int32  code    td   td      p port is the port for the Kubelet to serve on  The port number must be between 1 and 65535  inclusive  Default  10250  p    td    tr   tr  td  code readOnlyPort  code  br    code int32  code    td   td      p readOnlyPort is the read only port for the Kubelet to serve on with no authentication authorization  The port number must be between 1 and 65535  inclusive  Setting this field to 0 disables the read only service  Default  0  disabled   p    td    tr   tr  td  code tlsCertFile  code  br    code string  code    td   td      p tlsCertFile is the file containing x509 Certificate for HTTPS   CA cert  if any  concatenated after server cert   If tlsCertFile and tlsPrivateKeyFile are not provided  a self signed certificate and key are generated for the public address and saved to the directory passed to the Kubelet s   cert dir flag  Default   quot  quot   p    td    tr   tr  td  code tlsPrivateKeyFile  code  br    code string  code    td   td      p tlsPrivateKeyFile is the file containing x509 private key matching tlsCertFile  Default   quot  quot   p    td    tr   tr  td  code tlsCipherSuites  code  br    code   string  code    td   td      p tlsCipherSuites is the list of allowed cipher suites for the server  Note that TLS 1 3 ciphersuites are not configurable  Values are from tls package constants  https   golang org pkg crypto tls  pkg constants   Default  nil  p    td    tr   tr  td  code tlsMinVersion  code  br    code string  code    td   td      p tlsMinVersion is the minimum TLS version supported  Values are from tls package constants  https   golang org pkg crypto tls  pkg constants   Default   quot  quot   p    td    tr   tr  td  code rotateCertificates  code  br    code bool  code    td   td     
 p rotateCertificates enables client certificate rotation  The Kubelet will request a new certificate from the certificates k8s io API  This requires an approver to approve the certificate signing requests  Default  false  p    td    tr   tr  td  code serverTLSBootstrap  code  br    code bool  code    td   td      p serverTLSBootstrap enables server certificate bootstrap  Instead of self signing a serving certificate  the Kubelet will request a certificate from the  certificates k8s io  API  This requires an approver to approve the certificate signing requests  CSR   The RotateKubeletServerCertificate feature must be enabled when setting this field  Default  false  p    td    tr   tr  td  code authentication  code  br    a href   kubelet config k8s io v1beta1 KubeletAuthentication   code KubeletAuthentication  code   a    td   td      p authentication specifies how requests to the Kubelet s server are authenticated  Defaults  anonymous  enabled  false webhook  enabled  true cacheTTL   quot 2m quot   p    td    tr   tr  td  code authorization  code  br    a href   kubelet config k8s io v1beta1 KubeletAuthorization   code KubeletAuthorization  code   a    td   td      p authorization specifies how requests to the Kubelet s server are authorized  Defaults  mode  Webhook webhook  cacheAuthorizedTTL   quot 5m quot  cacheUnauthorizedTTL   quot 30s quot   p    td    tr   tr  td  code registryPullQPS  code  br    code int32  code    td   td      p registryPullQPS is the limit of registry pulls per second  The value must not be a negative number  Setting it to 0 means no limit  Default  5  p    td    tr   tr  td  code registryBurst  code  br    code int32  code    td   td      p registryBurst is the maximum size of bursty pulls  temporarily allows pulls to burst to this number  while still not exceeding registryPullQPS  The value must not be a negative number  Only used if registryPullQPS is greater than 0  Default  10  p    td    tr   tr  td  code eventRecordQPS  code  br  
  code int32  code    td   td      p eventRecordQPS is the maximum event creations per second  If 0  there is no limit enforced  The value cannot be a negative number  Default  50  p    td    tr   tr  td  code eventBurst  code  br    code int32  code    td   td      p eventBurst is the maximum size of a burst of event creations  temporarily allows event creations to burst to this number  while still not exceeding eventRecordQPS  This field canot be a negative number and it is only used when eventRecordQPS  gt  0  Default  100  p    td    tr   tr  td  code enableDebuggingHandlers  code  br    code bool  code    td   td      p enableDebuggingHandlers enables server endpoints for log access and local running of containers and commands  including the exec  attach  logs  and portforward features  Default  true  p    td    tr   tr  td  code enableContentionProfiling  code  br    code bool  code    td   td      p enableContentionProfiling enables block profiling  if enableDebuggingHandlers is true  Default  false  p    td    tr   tr  td  code healthzPort  code  br    code int32  code    td   td      p healthzPort is the port of the localhost healthz endpoint  set to 0 to disable   A valid number is between 1 and 65535  Default  10248  p    td    tr   tr  td  code healthzBindAddress  code  br    code string  code    td   td      p healthzBindAddress is the IP address for the healthz server to serve on  Default   quot 127 0 0 1 quot   p    td    tr   tr  td  code oomScoreAdj  code  br    code int32  code    td   td      p oomScoreAdj is The oom score adj value for kubelet process  Values must be within the range   1000  1000   Default   999  p    td    tr   tr  td  code clusterDomain  code  br    code string  code    td   td      p clusterDomain is the DNS domain for this cluster  If set  kubelet will configure all containers to search this domain in addition to the host s search domains  Default   quot  quot   p    td    tr   tr  td  code clusterDNS  code  br    code   
string  code    td   td      p clusterDNS is a list of IP addresses for the cluster DNS server  If set  kubelet will configure all containers to use this for DNS resolution instead of the host s DNS servers  Default  nil  p    td    tr   tr  td  code streamingConnectionIdleTimeout  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p streamingConnectionIdleTimeout is the maximum time a streaming connection can be idle before the connection is automatically closed  Default   quot 4h quot   p    td    tr   tr  td  code nodeStatusUpdateFrequency  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p nodeStatusUpdateFrequency is the frequency that kubelet computes node status  If node lease feature is not enabled  it is also the frequency that kubelet posts node status to master  Note  When node lease feature is not enabled  be cautious when changing the constant  it must work with nodeMonitorGracePeriod in nodecontroller  Default   quot 10s quot   p    td    tr   tr  td  code nodeStatusReportFrequency  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p nodeStatusReportFrequency is the frequency that kubelet posts node status to master if node status does not change  Kubelet will ignore this frequency and post node status immediately if any change is detected  It is only used when node lease feature is enabled  nodeStatusReportFrequency s default value is 5m  But if nodeStatusUpdateFrequency is set explicitly  nodeStatusReportFrequency s default value will be set to nodeStatusUpdateFrequency for backward compatibility  Default   quot 5m quot   p    td    tr   tr  td  code nodeLeaseDurationSeconds  code  br    code int32  code    td   td      p nodeLeaseDurationSeconds is the duration the Kubelet will set on its 
corresponding Lease  NodeLease provides an indicator of node health by having the Kubelet create and periodically renew a lease  named after the node  in the kube node lease namespace  If the lease expires  the node can be considered unhealthy  The lease is currently renewed every 10s  per KEP 0009  In the future  the lease renewal interval may be set based on the lease duration  The field value must be greater than 0  Default  40  p    td    tr   tr  td  code imageMinimumGCAge  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p imageMinimumGCAge is the minimum age for an unused image before it is garbage collected  Default   quot 2m quot   p    td    tr   tr  td  code imageMaximumGCAge  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p imageMaximumGCAge is the maximum age an image can be unused before it is garbage collected  The default of this field is  quot 0s quot   which disables this field  meaning images won t be garbage collected based on being unused for too long  Default   quot 0s quot   disabled   p    td    tr   tr  td  code imageGCHighThresholdPercent  code  br    code int32  code    td   td      p imageGCHighThresholdPercent is the percent of disk usage after which image garbage collection is always run  The percent is calculated by dividing this field value by 100  so this field must be between 0 and 100  inclusive  When specified  the value must be greater than imageGCLowThresholdPercent  Default  85  p    td    tr   tr  td  code imageGCLowThresholdPercent  code  br    code int32  code    td   td      p imageGCLowThresholdPercent is the percent of disk usage before which image garbage collection is never run  Lowest disk usage to garbage collect to  The percent is calculated by dividing this field value by 100  so the field value must be between 0 and 100  inclusive  When 
specified, the value must be less than imageGCHighThresholdPercent. Default: 80</p></td></tr>
<tr><td><code>volumeStatsAggPeriod</code><br/><a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a></td><td><p>volumeStatsAggPeriod is the frequency for calculating and caching volume disk usage for all pods. Default: "1m"</p></td></tr>
<tr><td><code>kubeletCgroups</code><br/><code>string</code></td><td><p>kubeletCgroups is the absolute name of cgroups to isolate the kubelet in. Default: ""</p></td></tr>
<tr><td><code>systemCgroups</code><br/><code>string</code></td><td><p>systemCgroups is absolute name of cgroups in which to place all non-kernel processes that are not already in a container. Empty for no container. Rolling back the flag requires a reboot. The cgroupRoot must be specified if this field is not empty. Default: ""</p></td></tr>
<tr><td><code>cgroupRoot</code><br/><code>string</code></td><td><p>cgroupRoot is the root cgroup to use for pods. This is handled by the container runtime on a best effort basis.</p></td></tr>
<tr><td><code>cgroupsPerQOS</code><br/><code>bool</code></td><td><p>cgroupsPerQOS enables QoS based CGroup hierarchy: top level CGroups for QoS classes and all Burstable and BestEffort Pods are brought up under their specific top level QoS CGroup. Default: true</p></td></tr>
<tr><td><code>cgroupDriver</code><br/><code>string</code></td><td><p>cgroupDriver is the driver kubelet uses to manipulate CGroups on the host (cgroupfs or systemd). Default: "cgroupfs"</p></td></tr>
<tr><td><code>cpuManagerPolicy</code><br/><code>string</code></td><td><p>cpuManagerPolicy is the name of the policy to use. Requires the CPUManager feature gate to be enabled. Default: "None"</p></td></tr>
<tr><td><code>cpuManagerPolicyOptions</code><br/><code>map[string]string</code></td><td><p>
cpuManagerPolicyOptions is a set of key=value pairs which allows to set extra options to fine tune the behaviour of the cpu manager policies. Requires both the "CPUManager" and "CPUManagerPolicyOptions" feature gates to be enabled. Default: nil</p></td></tr>
<tr><td><code>cpuManagerReconcilePeriod</code><br/><a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a></td><td><p>cpuManagerReconcilePeriod is the reconciliation period for the CPU Manager. Requires the CPUManager feature gate to be enabled. Default: "10s"</p></td></tr>
<tr><td><code>memoryManagerPolicy</code><br/><code>string</code></td><td><p>memoryManagerPolicy is the name of the policy to use by memory manager. Requires the MemoryManager feature gate to be enabled. Default: "none"</p></td></tr>
<tr><td><code>topologyManagerPolicy</code><br/><code>string</code></td><td><p>topologyManagerPolicy is the name of the topology manager policy to use. Valid values include:</p><ul><li><code>restricted</code>: kubelet only allows pods with optimal NUMA node alignment for requested resources;</li><li><code>best-effort</code>: kubelet will favor pods with NUMA alignment of CPU and device resources;</li><li><code>none</code>: kubelet has no knowledge of NUMA alignment of a pod's CPU and device resources;</li><li><code>single-numa-node</code>: kubelet only allows pods with a single NUMA alignment of CPU and device resources.</li></ul><p>Default: "none"</p></td></tr>
<tr><td><code>topologyManagerScope</code><br/><code>string</code></td><td><p>topologyManagerScope represents the scope of topology hint generation that topology manager requests and hint providers generate. Valid values include:</p><ul><li><code>container</code>: topology policy is applied on a per-container basis;</li><li><code>pod</code>: topology policy is applied on a per-pod basis.</li></ul><p>Default:
"container"</p></td></tr>
<tr><td><code>topologyManagerPolicyOptions</code><br/><code>map[string]string</code></td><td><p>TopologyManagerPolicyOptions is a set of key=value pairs which allows to set extra options to fine tune the behaviour of the topology manager policies. Requires both the "TopologyManager" and "TopologyManagerPolicyOptions" feature gates to be enabled. Default: nil</p></td></tr>
<tr><td><code>qosReserved</code><br/><code>map[string]string</code></td><td><p>qosReserved is a set of resource name to percentage pairs that specify the minimum percentage of a resource reserved for exclusive use by the guaranteed QoS tier. Currently supported resources: "memory". Requires the QOSReserved feature gate to be enabled. Default: nil</p></td></tr>
<tr><td><code>runtimeRequestTimeout</code><br/><a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a></td><td><p>runtimeRequestTimeout is the timeout for all runtime requests except long running requests (pull, logs, exec and attach). Default: "2m"</p></td></tr>
<tr><td><code>hairpinMode</code><br/><code>string</code></td><td><p>hairpinMode specifies how the Kubelet should configure the container bridge for hairpin packets. Setting this flag allows endpoints in a Service to loadbalance back to themselves if they should try to access their own Service. Values:</p><ul><li>"promiscuous-bridge": make the container bridge promiscuous;</li><li>"hairpin-veth": set the hairpin flag on container veth interfaces;</li><li>"none": do nothing.</li></ul><p>Generally, one must set <code>--hairpin-mode=hairpin-veth</code> to achieve hairpin NAT, because promiscuous-bridge assumes the existence of a container bridge named cbr0. Default: "promiscuous-bridge"</p></td></tr>
<tr><td><code>maxPods</code><br/><code>int32</code></td><td>
<p>maxPods is the maximum number of Pods that can run on this Kubelet. The value must be a non-negative integer. Default: 110</p></td></tr>
<tr><td><code>podCIDR</code><br/><code>string</code></td><td><p>podCIDR is the CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the control plane. Default: ""</p></td></tr>
<tr><td><code>podPidsLimit</code><br/><code>int64</code></td><td><p>podPidsLimit is the maximum number of PIDs in any pod. Default: -1</p></td></tr>
<tr><td><code>resolvConf</code><br/><code>string</code></td><td><p>resolvConf is the resolver configuration file used as the basis for the container DNS resolution configuration. If set to the empty string, will override the default and effectively disable DNS lookups. Default: "/etc/resolv.conf"</p></td></tr>
<tr><td><code>runOnce</code><br/><code>bool</code></td><td><p>runOnce causes the Kubelet to check the API server once for pods, run those in addition to the pods specified by static pod files, and exit. Default: false</p></td></tr>
<tr><td><code>cpuCFSQuota</code><br/><code>bool</code></td><td><p>cpuCFSQuota enables CPU CFS quota enforcement for containers that specify CPU limits. Default: true</p></td></tr>
<tr><td><code>cpuCFSQuotaPeriod</code><br/><a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a></td><td><p>cpuCFSQuotaPeriod is the CPU CFS quota period value, <code>cpu.cfs_period_us</code>. The value must be between 1 ms and 1 second, inclusive. Requires the CustomCPUCFSQuotaPeriod feature gate to be enabled. Default: "100ms"</p></td></tr>
<tr><td><code>nodeStatusMaxImages</code><br/><code>int32</code></td><td><p>nodeStatusMaxImages caps the number of images reported in Node.status.images. The value must be greater than -2. Note: if -1 is specified, no cap will be applied. If 0 is specified, no image
is returned. Default: 50</p></td></tr>
<tr><td><code>maxOpenFiles</code><br/><code>int64</code></td><td><p>maxOpenFiles is the number of files that can be opened by the Kubelet process. The value must be a non-negative number. Default: 1000000</p></td></tr>
<tr><td><code>contentType</code><br/><code>string</code></td><td><p>contentType is contentType of requests sent to apiserver. Default: "application/vnd.kubernetes.protobuf"</p></td></tr>
<tr><td><code>kubeAPIQPS</code><br/><code>int32</code></td><td><p>kubeAPIQPS is the QPS to use while talking with kubernetes apiserver. Default: 50</p></td></tr>
<tr><td><code>kubeAPIBurst</code><br/><code>int32</code></td><td><p>kubeAPIBurst is the burst to allow while talking with kubernetes API server. This field cannot be a negative number. Default: 100</p></td></tr>
<tr><td><code>serializeImagePulls</code><br/><code>bool</code></td><td><p>serializeImagePulls, when enabled, tells the Kubelet to pull images one at a time. We recommend <em>not</em> changing the default value on nodes that run docker daemon with version &lt; 1.9 or an Aufs storage backend. Issue #10959 has more details. Default: true</p></td></tr>
<tr><td><code>maxParallelImagePulls</code><br/><code>int32</code></td><td><p>MaxParallelImagePulls sets the maximum number of image pulls in parallel. This field cannot be set if SerializeImagePulls is true. Setting it to nil means no limit. Default: nil</p></td></tr>
<tr><td><code>evictionHard</code><br/><code>map[string]string</code></td><td><p>evictionHard is a map of signal names to quantities that defines hard eviction thresholds. For example: <code>{"memory.available": "300Mi"}</code>. To explicitly disable, pass a 0% or 100% threshold on an arbitrary resource. Default: memory.available: "100Mi", nodefs.available: "10%", nodefs.inodesFree: "5%", imagefs.available: "15%"</p></td></tr>
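<tr><td colspan="2"><p>As an illustrative sketch only (the threshold and grace-period values here are arbitrary examples, not recommendations), the eviction-related fields above might be combined in a KubeletConfiguration file like this:</p>

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Hard thresholds: the kubelet evicts pods immediately once crossed.
evictionHard:
  memory.available: "200Mi"
  nodefs.available: "10%"
# Soft thresholds: evictions honor the matching grace period below.
evictionSoft:
  memory.available: "500Mi"
evictionSoftGracePeriod:
  memory.available: "90s"
```

</td></tr>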
<tr><td><code>evictionSoft</code><br/><code>map[string]string</code></td><td><p>evictionSoft is a map of signal names to quantities that defines soft eviction thresholds. For example: <code>{"memory.available": "300Mi"}</code>. Default: nil</p></td></tr>
<tr><td><code>evictionSoftGracePeriod</code><br/><code>map[string]string</code></td><td><p>evictionSoftGracePeriod is a map of signal names to quantities that defines grace periods for each soft eviction signal. For example: <code>{"memory.available": "30s"}</code>. Default: nil</p></td></tr>
<tr><td><code>evictionPressureTransitionPeriod</code><br/><a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a></td><td><p>evictionPressureTransitionPeriod is the duration for which the kubelet has to wait before transitioning out of an eviction pressure condition. Default: "5m"</p></td></tr>
<tr><td><code>evictionMaxPodGracePeriod</code><br/><code>int32</code></td><td><p>evictionMaxPodGracePeriod is the maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met. This value effectively caps the Pod's terminationGracePeriodSeconds value during soft evictions. Note: due to issue #64530, the behavior has a bug where this value currently just overrides the grace period during soft eviction, which can increase the grace period from what is set on the Pod. This bug will be fixed in a future release. Default: 0</p></td></tr>
<tr><td><code>evictionMinimumReclaim</code><br/><code>map[string]string</code></td><td><p>evictionMinimumReclaim is a map of signal names to quantities that defines minimum reclaims, which describe the minimum amount of a given resource the kubelet will reclaim when performing a pod eviction while that resource is under pressure. For example: <code>{"imagefs.available": "2Gi"}</code>.
Default: nil</p></td></tr>
<tr><td><code>podsPerCore</code><br/><code>int32</code></td><td><p>podsPerCore is the maximum number of pods per core. Cannot exceed maxPods. The value must be a non-negative integer. If 0, there is no limit on the number of Pods. Default: 0</p></td></tr>
<tr><td><code>enableControllerAttachDetach</code><br/><code>bool</code></td><td><p>enableControllerAttachDetach enables the Attach/Detach controller to manage attachment/detachment of volumes scheduled to this node, and disables kubelet from executing any attach/detach operations. Note: attaching/detaching CSI volumes is not supported by the kubelet, so this option needs to be true for that use case. Default: true</p></td></tr>
<tr><td><code>protectKernelDefaults</code><br/><code>bool</code></td><td><p>protectKernelDefaults, if true, causes the Kubelet to error if kernel flags are not as it expects. Otherwise the Kubelet will attempt to modify kernel flags to match its expectation. Default: false</p></td></tr>
<tr><td><code>makeIPTablesUtilChains</code><br/><code>bool</code></td><td><p>makeIPTablesUtilChains, if true, causes the Kubelet to create the KUBE-IPTABLES-HINT chain in iptables as a hint to other components about the configuration of iptables on the system. Default: true</p></td></tr>
<tr><td><code>iptablesMasqueradeBit</code><br/><code>int32</code></td><td><p>iptablesMasqueradeBit formerly controlled the creation of the KUBE-MARK-MASQ chain. Deprecated: no longer has any effect. Default: 14</p></td></tr>
<tr><td><code>iptablesDropBit</code><br/><code>int32</code></td><td><p>iptablesDropBit formerly controlled the creation of the KUBE-MARK-DROP chain. Deprecated: no longer has any effect. Default: 15</p></td></tr>
<tr><td><code>featureGates</code><br/><code>map[string]bool</code></td><td><p>featureGates is a map of feature names to bools that enable or disable experimental features. This field modifies piecemeal the
built-in default values from "k8s.io/kubernetes/pkg/features/kube_features.go". Default: nil</p></td></tr>
<tr><td><code>failSwapOn</code><br/><code>bool</code></td><td><p>failSwapOn tells the Kubelet to fail to start if swap is enabled on the node. Default: true</p></td></tr>
<tr><td><code>memorySwap</code><br/><a href="#kubelet-config-k8s-io-v1beta1-MemorySwapConfiguration"><code>MemorySwapConfiguration</code></a></td><td><p>memorySwap configures swap memory available to container workloads.</p></td></tr>
<tr><td><code>containerLogMaxSize</code><br/><code>string</code></td><td><p>containerLogMaxSize is a quantity defining the maximum size of the container log file before it is rotated. For example: "5Mi" or "256Ki". Default: "10Mi"</p></td></tr>
<tr><td><code>containerLogMaxFiles</code><br/><code>int32</code></td><td><p>containerLogMaxFiles specifies the maximum number of container log files that can be present for a container. Default: 5</p></td></tr>
<tr><td><code>containerLogMaxWorkers</code><br/><code>int32</code></td><td><p>ContainerLogMaxWorkers specifies the maximum number of concurrent workers to spawn for performing the log rotate operations. Set this count to 1 for disabling the concurrent log rotation workflows. Default: 1</p></td></tr>
<tr><td><code>containerLogMonitorInterval</code><br/><a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a></td><td><p>ContainerLogMonitorInterval specifies the duration at which the container logs are monitored for performing the log rotate operation. This defaults to 10 * time.Second. But can be customized to a smaller value based on the log generation rate and the size required to be rotated against. Default: 10s</p></td></tr>
<tr><td><code>configMapAndSecretChangeDetectionStrategy</code><br/><a href="#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy"><code>
ResourceChangeDetectionStrategy</code></a></td><td><p>configMapAndSecretChangeDetectionStrategy is a mode in which ConfigMap and Secret managers are running. Valid values include:</p><ul><li><code>Get</code>: kubelet fetches necessary objects directly from the API server;</li><li><code>Cache</code>: kubelet uses TTL cache for object fetched from the API server;</li><li><code>Watch</code>: kubelet uses watches to observe changes to objects that are in its interest.</li></ul><p>Default: "Watch"</p></td></tr>
<tr><td><code>systemReserved</code><br/><code>map[string]string</code></td><td><p>systemReserved is a set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=150G) pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. Default: nil</p></td></tr>
<tr><td><code>kubeReserved</code><br/><code>map[string]string</code></td><td><p>kubeReserved is a set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=150G) pairs that describe resources reserved for kubernetes system components. Currently cpu, memory and local storage for root file system are supported. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more details. Default: nil</p></td></tr>
<tr><td><code>reservedSystemCPUs</code> <b>[Required]</b><br/><code>string</code></td><td><p>The reservedSystemCPUs option specifies the CPU list reserved for the host level system threads and kubernetes related threads. This provides a "static" CPU list rather than the "dynamic" list by systemReserved and kubeReserved. This option does not support systemReservedCgroup or kubeReservedCgroup.</p></td></tr>
<tr><td><code>showHiddenMetricsForVersion</code><br/><code>string</code></td><td><p>showHiddenMetricsForVersion is the previous version for which you want to show hidden
metrics. Only the previous minor version is meaningful, other values will not be allowed. The format is <code>&lt;major&gt;.&lt;minor&gt;</code>, e.g.: <code>1.16</code>. The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that. Default: ""</p></td></tr>
<tr><td><code>systemReservedCgroup</code><br/><code>string</code></td><td><p>systemReservedCgroup helps the kubelet identify absolute name of top level CGroup used to enforce <code>systemReserved</code> compute resource reservation for OS system daemons. Refer to the <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">Node Allocatable</a> doc for more information. Default: ""</p></td></tr>
<tr><td><code>kubeReservedCgroup</code><br/><code>string</code></td><td><p>kubeReservedCgroup helps the kubelet identify absolute name of top level CGroup used to enforce <code>KubeReserved</code> compute resource reservation for Kubernetes node system daemons. Refer to the <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">Node Allocatable</a> doc for more information. Default: ""</p></td></tr>
<tr><td><code>enforceNodeAllocatable</code><br/><code>[]string</code></td><td><p>This flag specifies the various Node Allocatable enforcements that Kubelet needs to perform. This flag accepts a list of options. Acceptable options are <code>none</code>, <code>pods</code>, <code>system-reserved</code> and <code>kube-reserved</code>. If <code>none</code> is specified, no other options may be specified. When <code>system-reserved</code> is in the list, systemReservedCgroup must be specified. When <code>kube-reserved</code> is in the list, kubeReservedCgroup must be specified. This field is supported only when <code>cgroupsPerQOS</code> is
set to true. Refer to <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">Node Allocatable</a> for more information. Default: ["pods"]</p></td></tr>
<tr><td><code>allowedUnsafeSysctls</code><br/><code>[]string</code></td><td><p>A comma separated whitelist of unsafe sysctls or sysctl patterns (ending in <code>*</code>). Unsafe sysctl groups are <code>kernel.shm*</code>, <code>kernel.msg*</code>, <code>kernel.sem</code>, <code>fs.mqueue.*</code>, and <code>net.*</code>. For example: "<code>kernel.msg*,net.ipv4.route.min_pmtu</code>". Default: []</p></td></tr>
<tr><td><code>volumePluginDir</code><br/><code>string</code></td><td><p>volumePluginDir is the full path of the directory in which to search for additional third party volume plugins. Default: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"</p></td></tr>
<tr><td><code>providerID</code><br/><code>string</code></td><td><p>providerID, if set, sets the unique ID of the instance that an external provider (i.e. cloudprovider) can use to identify a specific node. Default: ""</p></td></tr>
<tr><td><code>kernelMemcgNotification</code><br/><code>bool</code></td><td><p>kernelMemcgNotification, if set, instructs the kubelet to integrate with the kernel memcg notification for determining if memory eviction thresholds are exceeded rather than polling. Default: false</p></td></tr>
<tr><td><code>logging</code> <b>[Required]</b><br/><a href="#LoggingConfiguration"><code>LoggingConfiguration</code></a></td><td><p>logging specifies the options of logging. Refer to <a href="https://github.com/kubernetes/component-base/blob/master/logs/options.go">Logs Options</a> for more information. Default: Format: text</p></td></tr>
<tr><td><code>enableSystemLogHandler</code><br/><code>bool</code></td><td><p>enableSystemLogHandler enables system logs via web interface host:port/logs/. Default:
true</p></td></tr>
<tr><td><code>enableSystemLogQuery</code><br/><code>bool</code></td><td><p>enableSystemLogQuery enables the node log query feature on the /logs endpoint. EnableSystemLogHandler has to be enabled in addition for this feature to work. Default: false</p></td></tr>
<tr><td><code>shutdownGracePeriod</code><br/><a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a></td><td><p>shutdownGracePeriod specifies the total duration that the node should delay the shutdown and total grace period for pod termination during a node shutdown. Default: "0s"</p></td></tr>
<tr><td><code>shutdownGracePeriodCriticalPods</code><br/><a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a></td><td><p>shutdownGracePeriodCriticalPods specifies the duration used to terminate critical pods during a node shutdown. This should be less than shutdownGracePeriod. For example, if shutdownGracePeriod=30s and shutdownGracePeriodCriticalPods=10s, during a node shutdown the first 20 seconds would be reserved for gracefully terminating normal pods, and the last 10 seconds would be reserved for terminating critical pods. Default: "0s"</p></td></tr>
<tr><td><code>shutdownGracePeriodByPodPriority</code><br/><a href="#kubelet-config-k8s-io-v1beta1-ShutdownGracePeriodByPodPriority"><code>[]ShutdownGracePeriodByPodPriority</code></a></td><td><p>shutdownGracePeriodByPodPriority specifies the shutdown grace period for Pods based on their associated priority class value. When a shutdown request is received, the Kubelet will initiate shutdown on all pods running on the node with a grace period that depends on the priority of the pod, and then wait for all pods to exit. Each entry in the array represents the graceful shutdown time a pod with a priority class value that lies in the range of that value and the next higher entry in the list
when the node is shutting down. For example, to allow critical pods 10s to shutdown, priority&gt;=10000 pods 20s to shutdown, and all remaining pods 30s to shutdown:</p><p>shutdownGracePeriodByPodPriority:</p><ul><li>priority: 2000000000, shutdownGracePeriodSeconds: 10</li><li>priority: 10000, shutdownGracePeriodSeconds: 20</li><li>priority: 0, shutdownGracePeriodSeconds: 30</li></ul><p>The time the Kubelet will wait before exiting will at most be the maximum of all shutdownGracePeriodSeconds for each priority class range represented on the node. When all pods have exited or reached their grace periods, the Kubelet will release the shutdown inhibit lock. Requires the GracefulNodeShutdown feature gate to be enabled. This configuration must be empty if either ShutdownGracePeriod or ShutdownGracePeriodCriticalPods is set. Default: nil</p></td></tr>
<tr><td><code>reservedMemory</code><br/><a href="#kubelet-config-k8s-io-v1beta1-MemoryReservation"><code>[]MemoryReservation</code></a></td><td><p>reservedMemory specifies a comma-separated list of memory reservations for NUMA nodes. The parameter makes sense only in the context of the memory manager feature. The memory manager will not allocate reserved memory for container workloads. For example, if you have a NUMA0 with 10Gi of memory and the reservedMemory was specified to reserve 1Gi of memory at NUMA0, the memory manager will assume that only 9Gi is available for allocation. You can specify a different amount of NUMA node and memory types. You can omit this parameter at all, but you should be aware that the amount of reserved memory from all NUMA nodes should be equal to the amount of memory specified by the <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable">node allocatable</a>. If at least one node allocatable parameter has a non-zero value, you will need to specify at least one NUMA node. Also, avoid specifying:</p><ol><li>duplicates (the same
NUMA node and memory type, but with a different value);</li><li>zero limits for any memory type;</li><li>NUMA node IDs that do not exist under the machine;</li><li>memory types except for memory and hugepages-&lt;size&gt;.</li></ol><p>Default: nil</p></td></tr>
<tr><td><code>enableProfilingHandler</code><br/><code>bool</code></td><td><p>enableProfilingHandler enables profiling via web interface host:port/debug/pprof/. Default: true</p></td></tr>
<tr><td><code>enableDebugFlagsHandler</code><br/><code>bool</code></td><td><p>enableDebugFlagsHandler enables flags endpoint via web interface host:port/debug/flags/v. Default: true</p></td></tr>
<tr><td><code>seccompDefault</code><br/><code>bool</code></td><td><p>SeccompDefault enables the use of <code>RuntimeDefault</code> as the default seccomp profile for all workloads. Default: false</p></td></tr>
<tr><td><code>memoryThrottlingFactor</code><br/><code>float64</code></td><td><p>MemoryThrottlingFactor specifies the factor multiplied by the memory limit or node allocatable memory when setting the cgroupv2 memory.high value to enforce MemoryQoS. Decreasing this factor will set lower high limit for container cgroups and put heavier reclaim pressure, while increasing will put less reclaim pressure. See https://kep.k8s.io/2570 for more details. Default: 0.9</p></td></tr>
<tr><td><code>registerWithTaints</code><br/><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#taint-v1-core"><code>[]core/v1.Taint</code></a></td><td><p>registerWithTaints are an array of taints to add to a node object when the kubelet registers itself. This only takes effect when registerNode is true and upon the initial registration of the node. Default: nil</p></td></tr>
<tr><td><code>registerNode</code><br/><code>bool</code></td><td><p>registerNode enables automatic registration with the apiserver. Default: true</p></td></tr>
<tr><td><code>tracing</code><br/>
<a href="#TracingConfiguration"><code>TracingConfiguration</code></a></td><td><p>Tracing specifies the versioned configuration for OpenTelemetry tracing clients. See https://kep.k8s.io/2832 for more details. Default: nil</p></td></tr>
<tr><td><code>localStorageCapacityIsolation</code><br/><code>bool</code></td><td><p>LocalStorageCapacityIsolation enables local ephemeral storage isolation feature. The default setting is true. This feature allows users to set request/limit for container's ephemeral storage and manage it in a similar way as cpu and memory. It also allows setting sizeLimit for emptyDir volume, which will trigger pod eviction if disk usage from the volume exceeds the limit. This feature depends on the capability of detecting correct root file system disk usage. For certain systems, such as kind rootless, if this capability cannot be supported, the feature LocalStorageCapacityIsolation should be disabled. Once disabled, user should not set request/limit for container's ephemeral storage, or sizeLimit for emptyDir. Default: true</p></td></tr>
<tr><td><code>containerRuntimeEndpoint</code> <b>[Required]</b><br/><code>string</code></td><td><p>ContainerRuntimeEndpoint is the endpoint of container runtime. Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows. Examples: <code>unix:///path/to/runtime.sock</code>, <code>npipe:////./pipe/runtime</code>.</p></td></tr>
<tr><td><code>imageServiceEndpoint</code><br/><code>string</code></td><td><p>ImageServiceEndpoint is the endpoint of container image service. Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows. Examples: <code>unix:///path/to/runtime.sock</code>, <code>npipe:////./pipe/runtime</code>. If not specified, the value in containerRuntimeEndpoint is used.</p></td></tr>
<tr><td><code>failCgroupV1</code><br/><code>bool</code></td><td><p>FailCgroupV1 prevents the kubelet from starting on hosts that use cgroup v1. By default
, this is set to <code>false</code>, meaning the kubelet is allowed to start on cgroup v1 hosts unless this option is explicitly enabled. Default: false</p></td></tr>
</tbody></table>

## `SerializedNodeConfigSource` {#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource}

<p>SerializedNodeConfigSource allows us to serialize v1.NodeConfigSource. This type is used internally by the Kubelet for tracking checkpointed dynamic configs. It exists in the kubeletconfig API group because it is classified as a versioned input to the Kubelet.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>kubelet.config.k8s.io/v1beta1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>SerializedNodeConfigSource</code></td></tr>
<tr><td><code>source</code><br/><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#nodeconfigsource-v1-core"><code>core/v1.NodeConfigSource</code></a></td><td><p>source is the source that we are serializing.</p></td></tr>
</tbody>
</table>

## `CredentialProvider` {#kubelet-config-k8s-io-v1beta1-CredentialProvider}

**Appears in:**

- [CredentialProviderConfig](#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig)

<p>CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only invoked when an image being pulled matches the images handled by the plugin (see matchImages).</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>name</code> <b>[Required]</b><br/><code>string</code></td><td><p>name is the required name of the credential provider. It must match the name of the provider executable as seen by the kubelet. The executable must be in the kubelet's bin directory (set by the <code>--image-credential-provider-bin-dir</code> flag).</p></td></tr>
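<tr><td colspan="2"><p>For orientation, a hedged sketch of how a provider entry built from the fields in this table might appear inside a CredentialProviderConfig file; the provider name, image pattern, and cache duration below are illustrative assumptions, not prescribed values:</p>

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
    # Hypothetical plugin binary name; must match an executable in the
    # directory given by --image-credential-provider-bin-dir.
  - name: example-credential-provider
    matchImages:
      - "123456789.dkr.ecr.us-east-1.amazonaws.com"
    defaultCacheDuration: "12h"   # illustrative in-memory cache duration
    apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
```

</td></tr>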
<tr><td><code>matchImages</code> <b>[Required]</b><br/><code>[]string</code></td><td><p>matchImages is a required list of strings used to match against images in order to determine if this provider should be invoked. If one of the strings matches the requested image from the kubelet, the plugin will be invoked and given a chance to provide credentials. Images are expected to contain the registry domain and URL path.</p>
<p>Each entry in matchImages is a pattern which can optionally contain a port and a path. Globs can be used in the domain, but not in the port or the path. Globs are supported as subdomains like <code>*.k8s.io</code> or <code>k8s.*.io</code>, and top-level domains such as <code>k8s.*</code>. Matching partial subdomains like <code>app*.k8s.io</code> is also supported. Each glob can only match a single subdomain segment, so <code>*.io</code> does not match <code>*.k8s.io</code>.</p>
<p>A match exists between an image and a matchImage when all of the below are true:</p>
<ul><li>Both contain the same number of domain parts and each part matches.</li><li>The URL path of an imageMatch must be a prefix of the target image URL path.</li><li>If the imageMatch contains a port, then the port must match in the image as well.</li></ul>
<p>Example values of matchImages:</p>
<ul><li><code>123456789.dkr.ecr.us-east-1.amazonaws.com</code></li><li><code>*.azurecr.io</code></li><li><code>gcr.io</code></li><li><code>*.*.registry.io</code></li><li><code>registry.io:8080/path</code></li></ul></td></tr>
<tr><td><code>defaultCacheDuration</code> <b>[Required]</b><br/><a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"><code>meta/v1.Duration</code></a></td><td><p>defaultCacheDuration is the default duration the plugin will cache credentials in-memory if a cache duration is not provided in the plugin response. This field is required.</p></td></tr>
<tr><td><code>apiVersion</code> <b>[Required]</b><br/><code>string</code></td><td><p>Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse MUST use the
same encoding version as the input  Current supported values are   p   ul   li credentialprovider kubelet k8s io v1beta1  li    ul    td    tr   tr  td  code args  code  br    code   string  code    td   td      p Arguments to pass to the command when executing it   p    td    tr   tr  td  code env  code  br    a href   kubelet config k8s io v1beta1 ExecEnvVar   code   ExecEnvVar  code   a    td   td      p Env defines additional environment variables to expose to the process  These are unioned with the host s environment  as well as variables client go uses to pass argument to the plugin   p    td    tr    tbody    table       ExecEnvVar        kubelet config k8s io v1beta1 ExecEnvVar          Appears in        CredentialProvider   kubelet config k8s io v1beta1 CredentialProvider     p ExecEnvVar is used for setting environment variables when executing an exec based credential plugin   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code value  code   B  Required   B  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr    tbody    table       KubeletAnonymousAuthentication        kubelet config k8s io v1beta1 KubeletAnonymousAuthentication          Appears in        KubeletAuthentication   kubelet config k8s io v1beta1 KubeletAuthentication      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code enabled  code  br    code bool  code    td   td      p enabled allows anonymous requests to the kubelet server  Requests that are not rejected by another authentication method are treated as anonymous requests  Anonymous requests have a username of  code system anonymous  code   and a group name of  code system unauthenticated  
code    p    td    tr    tbody    table       KubeletAuthentication        kubelet config k8s io v1beta1 KubeletAuthentication          Appears in        KubeletConfiguration   kubelet config k8s io v1beta1 KubeletConfiguration      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code x509  code  br    a href   kubelet config k8s io v1beta1 KubeletX509Authentication   code KubeletX509Authentication  code   a    td   td      p x509 contains settings related to x509 client certificate authentication   p    td    tr   tr  td  code webhook  code  br    a href   kubelet config k8s io v1beta1 KubeletWebhookAuthentication   code KubeletWebhookAuthentication  code   a    td   td      p webhook contains settings related to webhook bearer token authentication   p    td    tr   tr  td  code anonymous  code  br    a href   kubelet config k8s io v1beta1 KubeletAnonymousAuthentication   code KubeletAnonymousAuthentication  code   a    td   td      p anonymous contains settings related to anonymous authentication   p    td    tr    tbody    table       KubeletAuthorization        kubelet config k8s io v1beta1 KubeletAuthorization          Appears in        KubeletConfiguration   kubelet config k8s io v1beta1 KubeletConfiguration      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code mode  code  br    a href   kubelet config k8s io v1beta1 KubeletAuthorizationMode   code KubeletAuthorizationMode  code   a    td   td      p mode is the authorization mode to apply to requests to the kubelet server  Valid values are  code AlwaysAllow  code  and  code Webhook  code   Webhook mode uses the SubjectAccessReview API to determine authorization   p    td    tr   tr  td  code webhook  code  br    a href   kubelet config k8s io v1beta1 KubeletWebhookAuthorization   code KubeletWebhookAuthorization  code   a    td   td      p webhook contains settings 
related to Webhook authorization   p    td    tr    tbody    table       KubeletAuthorizationMode        kubelet config k8s io v1beta1 KubeletAuthorizationMode        Alias of  string      Appears in        KubeletAuthorization   kubelet config k8s io v1beta1 KubeletAuthorization           KubeletWebhookAuthentication        kubelet config k8s io v1beta1 KubeletWebhookAuthentication          Appears in        KubeletAuthentication   kubelet config k8s io v1beta1 KubeletAuthentication      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code enabled  code  br    code bool  code    td   td      p enabled allows bearer token authentication backed by the tokenreviews authentication k8s io API   p    td    tr   tr  td  code cacheTTL  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p cacheTTL enables caching of authentication results  p    td    tr    tbody    table       KubeletWebhookAuthorization        kubelet config k8s io v1beta1 KubeletWebhookAuthorization          Appears in        KubeletAuthorization   kubelet config k8s io v1beta1 KubeletAuthorization      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code cacheAuthorizedTTL  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p cacheAuthorizedTTL is the duration to cache  authorized  responses from the webhook authorizer   p    td    tr   tr  td  code cacheUnauthorizedTTL  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p cacheUnauthorizedTTL is the duration to cache  unauthorized  responses from the webhook authorizer   p    td    tr    tbody    table       KubeletX509Authentication        kubelet config k8s io v1beta1 
KubeletX509Authentication          Appears in        KubeletAuthentication   kubelet config k8s io v1beta1 KubeletAuthentication      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code clientCAFile  code  br    code string  code    td   td      p clientCAFile is the path to a PEM encoded certificate bundle  If set  any request presenting a client certificate signed by one of the authorities in the bundle is authenticated with a username corresponding to the CommonName  and groups corresponding to the Organization in the client certificate   p    td    tr    tbody    table       MemoryReservation        kubelet config k8s io v1beta1 MemoryReservation          Appears in        KubeletConfiguration   kubelet config k8s io v1beta1 KubeletConfiguration     p MemoryReservation specifies the memory reservation of different types for each NUMA node  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code numaNode  code   B  Required   B  br    code int32  code    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code limits  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  resourcelist v1 core   code core v1 ResourceList  code   a    td   td      span class  text muted  No description provided   span   td    tr    tbody    table       MemorySwapConfiguration        kubelet config k8s io v1beta1 MemorySwapConfiguration          Appears in        KubeletConfiguration   kubelet config k8s io v1beta1 KubeletConfiguration      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code swapBehavior  code  br    code string  code    td   td      p swapBehavior configures swap memory available to container workloads  May be one of  quot  quot    quot NoSwap quot   workloads can not 
use swap  default option   quot LimitedSwap quot   workload swap usage is limited  The swap limit is proportionate to the container s memory request   p    td    tr    tbody    table       ResourceChangeDetectionStrategy        kubelet config k8s io v1beta1 ResourceChangeDetectionStrategy        Alias of  string      Appears in        KubeletConfiguration   kubelet config k8s io v1beta1 KubeletConfiguration     p ResourceChangeDetectionStrategy denotes a mode in which internal managers  secret  configmap  are discovering object changes   p          ShutdownGracePeriodByPodPriority        kubelet config k8s io v1beta1 ShutdownGracePeriodByPodPriority          Appears in        KubeletConfiguration   kubelet config k8s io v1beta1 KubeletConfiguration     p ShutdownGracePeriodByPodPriority specifies the shutdown grace period for Pods based on their associated priority class value  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code priority  code   B  Required   B  br    code int32  code    td   td      p priority is the priority value associated with the shutdown grace period  p    td    tr   tr  td  code shutdownGracePeriodSeconds  code   B  Required   B  br    code int64  code    td   td      p shutdownGracePeriodSeconds is the shutdown grace period in seconds  p    td    tr    tbody    table   "}
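The kubelet types above correspond to two on-disk configuration files. A minimal sketch of both, assuming the kubelet is started with `--config` and `--image-credential-provider-config` pointing at them; the provider name, file paths, and durations are illustrative:

```yaml
# CredentialProviderConfig: read via the kubelet's
# --image-credential-provider-config flag (values illustrative).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider       # plugin binary name, illustrative
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"     # each glob matches one subdomain segment
      - "registry.io:8080/path"         # port and path must match exactly
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
    args:
      - get-credentials
    env:
      - name: AWS_PROFILE
        value: default
---
# KubeletConfiguration: exercises the authentication, authorization,
# swap, reserved-memory, and graceful-shutdown types documented above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # illustrative path
  webhook:
    enabled: true
    cacheTTL: "2m"
  anonymous:
    enabled: false
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: "5m"
    cacheUnauthorizedTTL: "30s"
memorySwap:
  swapBehavior: LimitedSwap
reservedMemory:                # []MemoryReservation, one entry per NUMA node
  - numaNode: 0
    limits:
      memory: 1Gi
shutdownGracePeriodByPodPriority:
  - priority: 0
    shutdownGracePeriodSeconds: 60
  - priority: 100000
    shutdownGracePeriodSeconds: 20
```

The duration fields (`defaultCacheDuration`, `cacheTTL`, `cacheAuthorizedTTL`, `cacheUnauthorizedTTL`) are `meta/v1.Duration` values and accept Go-style duration strings such as `"30s"`, `"2m"`, or `"12h"`.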
{"questions":"kubernetes reference package v1 title kubeconfig v1 contenttype tool reference Resource Types autogenerated true","answers":"---\ntitle: kubeconfig (v1)\ncontent_type: tool-reference\npackage: v1\nauto_generated: true\n---\n\n## Resource Types \n\n\n- [Config](#Config)\n  \n    \n    \n\n## `Config`     {#Config}\n    \n\n\n<p>Config holds the information needed to build connections to remote kubernetes clusters as a given user<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>Config<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>kind<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Legacy field from pkg\/api\/types.go TypeMeta.\nTODO(jlowdermilk): remove this after eliminating downstream dependencies.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>apiVersion<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Legacy field from pkg\/api\/types.go TypeMeta.\nTODO(jlowdermilk): remove this after eliminating downstream dependencies.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>preferences<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#Preferences\"><code>Preferences<\/code><\/a>\n<\/td>\n<td>\n   <p>Preferences holds general information to be used for cli interactions<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clusters<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#NamedCluster\"><code>[]NamedCluster<\/code><\/a>\n<\/td>\n<td>\n   <p>Clusters is a map of referencable names to cluster configs<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>users<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#NamedAuthInfo\"><code>[]NamedAuthInfo<\/code><\/a>\n<\/td>\n<td>\n   <p>AuthInfos is a map of referencable names to user configs<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>contexts<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#NamedContext\"><code>[]NamedContext<\/code><\/a>\n<\/td>\n<td>\n   
<p>Contexts is a map of referencable names to context configs<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>current-context<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>CurrentContext is the name of the context that you would like to use by default<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extensions<\/code><br\/>\n<a href=\"#NamedExtension\"><code>[]NamedExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AuthInfo`     {#AuthInfo}\n    \n\n**Appears in:**\n\n- [NamedAuthInfo](#NamedAuthInfo)\n\n\n<p>AuthInfo contains information that describes identity information.  This is used to tell the kubernetes cluster who you are.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>client-certificate<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>ClientCertificate is the path to a client cert file for TLS.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>client-certificate-data<\/code><br\/>\n<code>[]byte<\/code>\n<\/td>\n<td>\n   <p>ClientCertificateData contains PEM-encoded data from a client cert file for TLS. Overrides ClientCertificate<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>client-key<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>ClientKey is the path to a client key file for TLS.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>client-key-data<\/code><br\/>\n<code>[]byte<\/code>\n<\/td>\n<td>\n   <p>ClientKeyData contains PEM-encoded data from a client key file for TLS. 
Overrides ClientKey<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>token<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Token is the bearer token for authentication to the kubernetes cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tokenFile<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>TokenFile is a pointer to a file that contains a bearer token (as described above).  If both Token and TokenFile are present, Token takes precedence.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>as<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Impersonate is the username to impersonate.  The name matches the flag.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>as-uid<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>ImpersonateUID is the uid to impersonate.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>as-groups<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>ImpersonateGroups is the groups to impersonate.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>as-user-extra<\/code><br\/>\n<code>map[string][]string<\/code>\n<\/td>\n<td>\n   <p>ImpersonateUserExtra contains additional information for impersonated user.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>username<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Username is the username for basic authentication to the kubernetes cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>password<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Password is the password for basic authentication to the kubernetes cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>auth-provider<\/code><br\/>\n<a href=\"#AuthProviderConfig\"><code>AuthProviderConfig<\/code><\/a>\n<\/td>\n<td>\n   <p>AuthProvider specifies a custom authentication plugin for the kubernetes cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>exec<\/code><br\/>\n<a href=\"#ExecConfig\"><code>ExecConfig<\/code><\/a>\n<\/td>\n<td>\n   <p>Exec specifies a custom exec-based authentication plugin for the kubernetes cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extensions<\/code><br\/>\n<a 
href=\"#NamedExtension\"><code>[]NamedExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AuthProviderConfig`     {#AuthProviderConfig}\n    \n\n**Appears in:**\n\n- [AuthInfo](#AuthInfo)\n\n\n<p>AuthProviderConfig holds the configuration for a specified auth provider.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>config<\/code> <B>[Required]<\/B><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Cluster`     {#Cluster}\n    \n\n**Appears in:**\n\n- [NamedCluster](#NamedCluster)\n\n\n<p>Cluster contains information about how to communicate with a kubernetes cluster<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>server<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Server is the address of the kubernetes cluster (https:\/\/hostname:port).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tls-server-name<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>TLSServerName is used to check server certificate. If TLSServerName is empty, the hostname used to contact the server is used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>insecure-skip-tls-verify<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>InsecureSkipTLSVerify skips the validity check for the server's certificate. 
This will make your HTTPS connections insecure.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificate-authority<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>CertificateAuthority is the path to a cert file for the certificate authority.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificate-authority-data<\/code><br\/>\n<code>[]byte<\/code>\n<\/td>\n<td>\n   <p>CertificateAuthorityData contains PEM-encoded certificate authority certificates. Overrides CertificateAuthority<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>proxy-url<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>ProxyURL is the URL to the proxy to be used for all requests made by this\nclient. URLs with &quot;http&quot;, &quot;https&quot;, and &quot;socks5&quot; schemes are supported.  If\nthis configuration is not provided or the empty string, the client\nattempts to construct a proxy configuration from http_proxy and\nhttps_proxy environment variables. If these environment variables are not\nset, the client does not attempt to proxy requests.<\/p>\n<p>socks5 proxying does not currently support spdy streaming endpoints (exec,\nattach, port forward).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>disable-compression<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>DisableCompression allows client to opt-out of response compression for all requests to the server. This is useful\nto speed up requests (specifically lists) when client-server network bandwidth is ample, by saving time on\ncompression (server-side) and decompression (client-side): https:\/\/github.com\/kubernetes\/kubernetes\/issues\/112296.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extensions<\/code><br\/>\n<a href=\"#NamedExtension\"><code>[]NamedExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>Extensions holds additional information. 
This is useful for extenders so that reads and writes don't clobber unknown fields<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Context`     {#Context}\n    \n\n**Appears in:**\n\n- [NamedContext](#NamedContext)\n\n\n<p>Context is a tuple of references to a cluster (how do I communicate with a kubernetes cluster), a user (how do I identify myself), and a namespace (what subset of resources do I want to work with)<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>cluster<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Cluster is the name of the cluster for this context<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>user<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>AuthInfo is the name of the authInfo for this context<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>namespace<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Namespace is the default namespace to use on unspecified requests<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extensions<\/code><br\/>\n<a href=\"#NamedExtension\"><code>[]NamedExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExecConfig`     {#ExecConfig}\n    \n\n**Appears in:**\n\n- [AuthInfo](#AuthInfo)\n\n\n<p>ExecConfig specifies a command to provide client credentials. 
The command is exec'd\nand outputs structured stdout holding credentials.<\/p>\n<p>See the client.authentication.k8s.io API group for specifications of the exact input\nand output format<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>command<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Command to execute.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>args<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>Arguments to pass to the command when executing it.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>env<\/code><br\/>\n<a href=\"#ExecEnvVar\"><code>[]ExecEnvVar<\/code><\/a>\n<\/td>\n<td>\n   <p>Env defines additional environment variables to expose to the process. These\nare unioned with the host's environment, as well as variables client-go uses\nto pass argument to the plugin.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>apiVersion<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Preferred input version of the ExecInfo. The returned ExecCredentials MUST use\nthe same encoding version as the input.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>installHint<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>This text is shown to the user when the executable doesn't seem to be\npresent. For example, <code>brew install foo-cli<\/code> might be a good InstallHint for\nfoo-cli on Mac OS systems.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>provideClusterInfo<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>ProvideClusterInfo determines whether or not to provide cluster information,\nwhich could potentially contain very large CA data, to this exec plugin as a\npart of the KUBERNETES_EXEC_INFO environment variable. By default, it is set\nto false. 
Package k8s.io\/client-go\/tools\/auth\/exec provides helper methods for\nreading this environment variable.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>interactiveMode<\/code><br\/>\n<a href=\"#ExecInteractiveMode\"><code>ExecInteractiveMode<\/code><\/a>\n<\/td>\n<td>\n   <p>InteractiveMode determines this plugin's relationship with standard input. Valid\nvalues are &quot;Never&quot; (this exec plugin never uses standard input), &quot;IfAvailable&quot; (this\nexec plugin wants to use standard input if it is available), or &quot;Always&quot; (this exec\nplugin requires standard input to function). See ExecInteractiveMode values for more\ndetails.<\/p>\n<p>If APIVersion is client.authentication.k8s.io\/v1alpha1 or\nclient.authentication.k8s.io\/v1beta1, then this field is optional and defaults\nto &quot;IfAvailable&quot; when unset. Otherwise, this field is required.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExecEnvVar`     {#ExecEnvVar}\n    \n\n**Appears in:**\n\n- [ExecConfig](#ExecConfig)\n\n\n<p>ExecEnvVar is used for setting environment variables when executing an exec-based\ncredential plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>value<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExecInteractiveMode`     {#ExecInteractiveMode}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [ExecConfig](#ExecConfig)\n\n\n<p>ExecInteractiveMode is a string that describes an exec plugin's relationship with standard input.<\/p>\n\n\n\n\n## `NamedAuthInfo`     {#NamedAuthInfo}\n    \n\n**Appears in:**\n\n- [Config](#Config)\n\n\n<p>NamedAuthInfo relates nicknames to auth 
information<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name is the nickname for this AuthInfo<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>user<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#AuthInfo\"><code>AuthInfo<\/code><\/a>\n<\/td>\n<td>\n   <p>AuthInfo holds the auth information<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NamedCluster`     {#NamedCluster}\n    \n\n**Appears in:**\n\n- [Config](#Config)\n\n\n<p>NamedCluster relates nicknames to cluster information<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name is the nickname for this Cluster<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cluster<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#Cluster\"><code>Cluster<\/code><\/a>\n<\/td>\n<td>\n   <p>Cluster holds the cluster information<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NamedContext`     {#NamedContext}\n    \n\n**Appears in:**\n\n- [Config](#Config)\n\n\n<p>NamedContext relates nicknames to context information<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name is the nickname for this Context<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>context<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#Context\"><code>Context<\/code><\/a>\n<\/td>\n<td>\n   <p>Context holds the context information<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NamedExtension`     {#NamedExtension}\n    \n\n**Appears in:**\n\n- [Config](#Config)\n\n- [AuthInfo](#AuthInfo)\n\n- [Cluster](#Cluster)\n\n- [Context](#Context)\n\n- 
[Preferences](#Preferences)\n\n\n<p>NamedExtension relates nicknames to extension information<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name is the nickname for this Extension<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extension<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/runtime\/#RawExtension\"><code>k8s.io\/apimachinery\/pkg\/runtime.RawExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>Extension holds the extension information<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Preferences`     {#Preferences}\n    \n\n**Appears in:**\n\n- [Config](#Config)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>colors<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>extensions<\/code><br\/>\n<a href=\"#NamedExtension\"><code>[]NamedExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>Extensions holds additional information. 
This is useful for extenders so that reads and writes don't clobber unknown fields<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>","site":"kubernetes reference","answers_cleaned":"
exec  attach  port forward    p    td    tr   tr  td  code disable compression  code  br    code bool  code    td   td      p DisableCompression allows client to opt out of response compression for all requests to the server  This is useful to speed up requests  specifically lists  when client server network bandwidth is ample  by saving time on compression  server side  and decompression  client side   https   github com kubernetes kubernetes issues 112296   p    td    tr   tr  td  code extensions  code  br    a href   NamedExtension   code   NamedExtension  code   a    td   td      p Extensions holds additional information  This is useful for extenders so that reads and writes don t clobber unknown fields  p    td    tr    tbody    table       Context        Context          Appears in        NamedContext   NamedContext     p Context is a tuple of references to a cluster  how do I communicate with a kubernetes cluster   a user  how do I identify myself   and a namespace  what subset of resources do I want to work with   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code cluster  code   B  Required   B  br    code string  code    td   td      p Cluster is the name of the cluster for this context  p    td    tr   tr  td  code user  code   B  Required   B  br    code string  code    td   td      p AuthInfo is the name of the authInfo for this context  p    td    tr   tr  td  code namespace  code  br    code string  code    td   td      p Namespace is the default namespace to use on unspecified requests  p    td    tr   tr  td  code extensions  code  br    a href   NamedExtension   code   NamedExtension  code   a    td   td      p Extensions holds additional information  This is useful for extenders so that reads and writes don t clobber unknown fields  p    td    tr    tbody    table       ExecConfig        ExecConfig          Appears in        AuthInfo   AuthInfo     p ExecConfig specifies 
a command to provide client credentials  The command is exec d and outputs structured stdout holding credentials   p   p See the client authentication k8s io API group for specifications of the exact input and output format  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code command  code   B  Required   B  br    code string  code    td   td      p Command to execute   p    td    tr   tr  td  code args  code  br    code   string  code    td   td      p Arguments to pass to the command when executing it   p    td    tr   tr  td  code env  code  br    a href   ExecEnvVar   code   ExecEnvVar  code   a    td   td      p Env defines additional environment variables to expose to the process  These are unioned with the host s environment  as well as variables client go uses to pass argument to the plugin   p    td    tr   tr  td  code apiVersion  code   B  Required   B  br    code string  code    td   td      p Preferred input version of the ExecInfo  The returned ExecCredentials MUST use the same encoding version as the input   p    td    tr   tr  td  code installHint  code   B  Required   B  br    code string  code    td   td      p This text is shown to the user when the executable doesn t seem to be present  For example   code brew install foo cli  code  might be a good InstallHint for foo cli on Mac OS systems   p    td    tr   tr  td  code provideClusterInfo  code   B  Required   B  br    code bool  code    td   td      p ProvideClusterInfo determines whether or not to provide cluster information  which could potentially contain very large CA data  to this exec plugin as a part of the KUBERNETES EXEC INFO environment variable  By default  it is set to false  Package k8s io client go tools auth exec provides helper methods for reading this environment variable   p    td    tr   tr  td  code interactiveMode  code  br    a href   ExecInteractiveMode   code ExecInteractiveMode  code   a    td   
td      p InteractiveMode determines this plugin s relationship with standard input  Valid values are  quot Never quot   this exec plugin never uses standard input    quot IfAvailable quot   this exec plugin wants to use standard input if it is available   or  quot Always quot   this exec plugin requires standard input to function   See ExecInteractiveMode values for more details   p   p If APIVersion is client authentication k8s io v1alpha1 or client authentication k8s io v1beta1  then this field is optional and defaults to  quot IfAvailable quot  when unset  Otherwise  this field is required   p    td    tr    tbody    table       ExecEnvVar        ExecEnvVar          Appears in        ExecConfig   ExecConfig     p ExecEnvVar is used for setting environment variables when executing an exec based credential plugin   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code value  code   B  Required   B  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr    tbody    table       ExecInteractiveMode        ExecInteractiveMode        Alias of  string      Appears in        ExecConfig   ExecConfig     p ExecInteractiveMode is a string that describes an exec plugin s relationship with standard input   p          NamedAuthInfo        NamedAuthInfo          Appears in        Config   Config     p NamedAuthInfo relates nicknames to auth information  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p Name is the nickname for this AuthInfo  p    td    tr   tr  td  code user  code   B  Required   B  br    a href   AuthInfo   code AuthInfo  code   a    td   td 
     p AuthInfo holds the auth information  p    td    tr    tbody    table       NamedCluster        NamedCluster          Appears in        Config   Config     p NamedCluster relates nicknames to cluster information  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p Name is the nickname for this Cluster  p    td    tr   tr  td  code cluster  code   B  Required   B  br    a href   Cluster   code Cluster  code   a    td   td      p Cluster holds the cluster information  p    td    tr    tbody    table       NamedContext        NamedContext          Appears in        Config   Config     p NamedContext relates nicknames to context information  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p Name is the nickname for this Context  p    td    tr   tr  td  code context  code   B  Required   B  br    a href   Context   code Context  code   a    td   td      p Context holds the context information  p    td    tr    tbody    table       NamedExtension        NamedExtension          Appears in        Config   Config      AuthInfo   AuthInfo      Cluster   Cluster      Context   Context      Preferences   Preferences     p NamedExtension relates nicknames to extension information  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p Name is the nickname for this Extension  p    td    tr   tr  td  code extension  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg runtime  RawExtension   code k8s io apimachinery pkg runtime RawExtension  code   a    td   td      p Extension holds the extension information  p 
   td    tr    tbody    table       Preferences        Preferences          Appears in        Config   Config      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code colors  code  br    code bool  code    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code extensions  code  br    a href   NamedExtension   code   NamedExtension  code   a    td   td      p Extensions holds additional information  This is useful for extenders so that reads and writes don t clobber unknown fields  p    td    tr    tbody    table"}
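The kubeconfig types documented above compose into a single file. A minimal sketch of a kubeconfig that exercises Config, Cluster, Context, AuthInfo, and ExecConfig (all names, the server address, and the plugin command are illustrative, not prescribed by the reference):

```yaml
apiVersion: v1
kind: Config
current-context: dev                 # Config.CurrentContext
clusters:
- name: dev-cluster                  # NamedCluster.Name
  cluster:                           # Cluster
    server: https://192.0.2.10:6443
    certificate-authority: /etc/kubernetes/ca.crt
contexts:
- name: dev                          # NamedContext.Name
  context:                           # Context
    cluster: dev-cluster
    user: dev-user
    namespace: default
users:
- name: dev-user                     # NamedAuthInfo.Name
  user:                              # AuthInfo
    exec:                            # ExecConfig (exec-based credential plugin)
      apiVersion: client.authentication.k8s.io/v1
      command: my-credential-plugin  # hypothetical plugin binary
      args: ["get-token"]
      installHint: Install my-credential-plugin from your provider.
      provideClusterInfo: true
      interactiveMode: IfAvailable
```

Note how each `Named*` wrapper pairs a nickname with the actual object, and `current-context` selects one of the named contexts by that nickname.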
{"questions":"kubernetes reference contenttype tool reference Resource Types package client authentication k8s io v1 autogenerated true title Client Authentication v1","answers":"---\ntitle: Client Authentication (v1)\ncontent_type: tool-reference\npackage: client.authentication.k8s.io\/v1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [ExecCredential](#client-authentication-k8s-io-v1-ExecCredential)\n  \n\n## `ExecCredential`     {#client-authentication-k8s-io-v1-ExecCredential}\n    \n\n\n<p>ExecCredential is used by exec-based plugins to communicate credentials to\nHTTP transports.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>client.authentication.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>ExecCredential<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>spec<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#client-authentication-k8s-io-v1-ExecCredentialSpec\"><code>ExecCredentialSpec<\/code><\/a>\n<\/td>\n<td>\n   <p>Spec holds information passed to the plugin by the transport.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>status<\/code><br\/>\n<a href=\"#client-authentication-k8s-io-v1-ExecCredentialStatus\"><code>ExecCredentialStatus<\/code><\/a>\n<\/td>\n<td>\n   <p>Status is filled in by the plugin and holds the credentials that the transport\nshould use to contact the API.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Cluster`     {#client-authentication-k8s-io-v1-Cluster}\n    \n\n**Appears in:**\n\n- [ExecCredentialSpec](#client-authentication-k8s-io-v1-ExecCredentialSpec)\n\n\n<p>Cluster contains information to allow an exec plugin to communicate\nwith the kubernetes cluster being authenticated to.<\/p>\n<p>To ensure that this struct contains everything someone would need to communicate\nwith a kubernetes cluster (just like they would via a kubeconfig), the fields\nshould 
shadow &quot;k8s.io\/client-go\/tools\/clientcmd\/api\/v1&quot;.Cluster, with the exception\nof CertificateAuthority, since CA data will always be passed to the plugin as bytes.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>server<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Server is the address of the kubernetes cluster (https:\/\/hostname:port).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tls-server-name<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>TLSServerName is passed to the server for SNI and is used in the client to\ncheck server certificates against. If ServerName is empty, the hostname\nused to contact the server is used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>insecure-skip-tls-verify<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>InsecureSkipTLSVerify skips the validity check for the server's certificate.\nThis will make your HTTPS connections insecure.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificate-authority-data<\/code><br\/>\n<code>[]byte<\/code>\n<\/td>\n<td>\n   <p>CAData contains PEM-encoded certificate authority certificates.\nIf empty, system roots should be used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>proxy-url<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>ProxyURL is the URL to the proxy to be used for all requests to this\ncluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>disable-compression<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>DisableCompression allows client to opt-out of response compression for all requests to the server. 
This is useful\nto speed up requests (specifically lists) when client-server network bandwidth is ample, by saving time on\ncompression (server-side) and decompression (client-side): https:\/\/github.com\/kubernetes\/kubernetes\/issues\/112296.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>config<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/runtime\/#RawExtension\"><code>k8s.io\/apimachinery\/pkg\/runtime.RawExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>Config holds additional config data that is specific to the exec\nplugin with regards to the cluster being authenticated to.<\/p>\n<p>This data is sourced from the clientcmd Cluster object's\nextensions[client.authentication.k8s.io\/exec] field:<\/p>\n<p>clusters:<\/p>\n<ul>\n<li>name: my-cluster\ncluster:\n...\nextensions:\n<ul>\n<li>name: client.authentication.k8s.io\/exec  # reserved extension name for per cluster exec config\nextension:\naudience: 06e3fbd18de8  # arbitrary config<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>In some environments, the user config may be exactly the same across many clusters\n(i.e. call this exec plugin) minus some details that are specific to each cluster\nsuch as the audience.  This field allows the per cluster config to be directly\nspecified with the cluster info.  
Using this field to store secret data is not\nrecommended as one of the prime benefits of exec plugins is that no secrets need\nto be stored directly in the kubeconfig.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExecCredentialSpec`     {#client-authentication-k8s-io-v1-ExecCredentialSpec}\n    \n\n**Appears in:**\n\n- [ExecCredential](#client-authentication-k8s-io-v1-ExecCredential)\n\n\n<p>ExecCredentialSpec holds request and runtime specific information provided by\nthe transport.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>cluster<\/code><br\/>\n<a href=\"#client-authentication-k8s-io-v1-Cluster\"><code>Cluster<\/code><\/a>\n<\/td>\n<td>\n   <p>Cluster contains information to allow an exec plugin to communicate with the\nkubernetes cluster being authenticated to. Note that Cluster is non-nil only\nwhen provideClusterInfo is set to true in the exec provider config (i.e.,\nExecConfig.ProvideClusterInfo).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>interactive<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>Interactive declares whether stdin has been passed to this exec plugin.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExecCredentialStatus`     {#client-authentication-k8s-io-v1-ExecCredentialStatus}\n    \n\n**Appears in:**\n\n- [ExecCredential](#client-authentication-k8s-io-v1-ExecCredential)\n\n\n<p>ExecCredentialStatus holds credentials for the transport to use.<\/p>\n<p>Token and ClientKeyData are sensitive fields. This data should only be\ntransmitted in-memory between client and exec plugin process. 
Exec plugin\nitself should at least be protected via file permissions.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>expirationTimestamp<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#time-v1-meta\"><code>meta\/v1.Time<\/code><\/a>\n<\/td>\n<td>\n   <p>ExpirationTimestamp indicates a time when the provided credentials expire.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>token<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Token is a bearer token used by the client for request authentication.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clientCertificateData<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>PEM-encoded client TLS certificates (including intermediates, if any).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clientKeyData<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>PEM-encoded private key for the above certificate.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n ","site":"kubernetes reference"}
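As the ExecCredentialStatus reference above describes, an exec plugin hands credentials back by writing an ExecCredential object to its standard output. A minimal sketch of such output for a token-based plugin (the token value and timestamp are illustrative placeholders):

```json
{
  "apiVersion": "client.authentication.k8s.io/v1",
  "kind": "ExecCredential",
  "status": {
    "token": "example-opaque-bearer-token",
    "expirationTimestamp": "2025-01-01T00:00:00Z"
  }
}
```

The transport then sends `status.token` as a bearer token on each request and re-invokes the plugin once `expirationTimestamp` passes; a certificate-based plugin would instead populate `clientCertificateData` and `clientKeyData`.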
{"questions":"kubernetes reference contenttype tool reference Resource Types package kubescheduler config k8s io v1 title kube scheduler Configuration v1 autogenerated true","answers":"---\ntitle: kube-scheduler Configuration (v1)\ncontent_type: tool-reference\npackage: kubescheduler.config.k8s.io\/v1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [DefaultPreemptionArgs](#kubescheduler-config-k8s-io-v1-DefaultPreemptionArgs)\n- [InterPodAffinityArgs](#kubescheduler-config-k8s-io-v1-InterPodAffinityArgs)\n- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)\n- [NodeAffinityArgs](#kubescheduler-config-k8s-io-v1-NodeAffinityArgs)\n- [NodeResourcesBalancedAllocationArgs](#kubescheduler-config-k8s-io-v1-NodeResourcesBalancedAllocationArgs)\n- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1-NodeResourcesFitArgs)\n- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1-PodTopologySpreadArgs)\n- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1-VolumeBindingArgs)\n  \n    \n    \n\n## `ClientConnectionConfiguration`     {#ClientConnectionConfiguration}\n    \n\n**Appears in:**\n\n- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)\n\n\n<p>ClientConnectionConfiguration contains details for constructing a client.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>kubeconfig<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>kubeconfig is the path to a KubeConfig file.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>acceptContentTypes<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the\ndefault value of 'application\/json'. 
This field will control all connections to the server used by a particular\nclient.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>contentType<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>contentType is the content type used when sending data to the server from this client.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>qps<\/code> <B>[Required]<\/B><br\/>\n<code>float32<\/code>\n<\/td>\n<td>\n   <p>qps controls the number of queries per second allowed for this connection.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>burst<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>burst allows extra queries to accumulate when a client is exceeding its rate.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `DebuggingConfiguration`     {#DebuggingConfiguration}\n    \n\n**Appears in:**\n\n- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)\n\n\n<p>DebuggingConfiguration holds configuration for Debugging related features.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>enableProfiling<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableProfiling enables profiling via web interface host:port\/debug\/pprof\/<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableContentionProfiling<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableContentionProfiling enables block profiling, if\nenableProfiling is true.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `LeaderElectionConfiguration`     {#LeaderElectionConfiguration}\n    \n\n**Appears in:**\n\n- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)\n\n\n<p>LeaderElectionConfiguration defines the configuration of leader election\nclients for components that can run with leader election enabled.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th 
width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>leaderElect<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>leaderElect enables a leader election client to gain leadership\nbefore executing the main loop. Enable this when running replicated\ncomponents for high availability.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>leaseDuration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>leaseDuration is the duration that non-leader candidates will wait\nafter observing a leadership renewal until attempting to acquire\nleadership of a led but unrenewed leader slot. This is effectively the\nmaximum duration that a leader can be stopped before it is replaced\nby another candidate. This is only applicable if leader election is\nenabled.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>renewDeadline<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>renewDeadline is the interval between attempts by the acting master to\nrenew a leadership slot before it stops leading. This must be less\nthan or equal to the lease duration. This is only applicable if leader\nelection is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>retryPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>retryPeriod is the duration the clients should wait between attempting\nacquisition and renewal of a leadership. 
This is only applicable if\nleader election is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resourceLock<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>resourceLock indicates the resource object type that will be used to lock\nduring leader election cycles.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resourceName<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>resourceName indicates the name of resource object that will be used to lock\nduring leader election cycles.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resourceNamespace<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>resourceNamespace indicates the namespace of the resource object that will be used to lock\nduring leader election cycles.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n  \n\n## `DefaultPreemptionArgs`     {#kubescheduler-config-k8s-io-v1-DefaultPreemptionArgs}\n    \n\n\n<p>DefaultPreemptionArgs holds arguments used to configure the\nDefaultPreemption plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubescheduler.config.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>DefaultPreemptionArgs<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>minCandidateNodesPercentage<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>MinCandidateNodesPercentage is the minimum number of candidates to\nshortlist when dry running preemption as a percentage of number of nodes.\nMust be in the range [0, 100]. Defaults to 10% of the cluster size if\nunspecified.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>minCandidateNodesAbsolute<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>MinCandidateNodesAbsolute is the absolute minimum number of candidates to\nshortlist. 
The likely number of candidates enumerated for dry running\npreemption is given by the formula:\nnumCandidates = max(numNodes * minCandidateNodesPercentage, minCandidateNodesAbsolute)\nWe say &quot;likely&quot; because there are other factors such as PDB violations\nthat play a role in the number of candidates shortlisted. Must be at least\n0 nodes. Defaults to 100 nodes if unspecified.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `InterPodAffinityArgs`     {#kubescheduler-config-k8s-io-v1-InterPodAffinityArgs}\n    \n\n\n<p>InterPodAffinityArgs holds arguments used to configure the InterPodAffinity plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubescheduler.config.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>InterPodAffinityArgs<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>hardPodAffinityWeight<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>HardPodAffinityWeight is the scoring weight for existing pods with a\nmatching hard affinity to the incoming pod.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ignorePreferredTermsOfExistingPods<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>IgnorePreferredTermsOfExistingPods configures the scheduler to ignore existing pods' preferred affinity\nrules when scoring candidate nodes, unless the incoming pod has inter-pod affinities.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeSchedulerConfiguration`     {#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration}\n    \n\n\n<p>KubeSchedulerConfiguration configures a scheduler<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    
\n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubescheduler.config.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>KubeSchedulerConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>parallelism<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>Parallelism defines the amount of parallelism in algorithms for scheduling Pods. Must be greater than 0. Defaults to 16.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>leaderElection<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#LeaderElectionConfiguration\"><code>LeaderElectionConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>LeaderElection defines the configuration of the leader election client.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clientConnection<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#ClientConnectionConfiguration\"><code>ClientConnectionConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>ClientConnection specifies the kubeconfig file and client connection\nsettings for the scheduler to use when communicating with the apiserver.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>DebuggingConfiguration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#DebuggingConfiguration\"><code>DebuggingConfiguration<\/code><\/a>\n<\/td>\n<td>(Members of <code>DebuggingConfiguration<\/code> are embedded into this type.)\n   <p>DebuggingConfiguration holds configuration for debugging-related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>percentageOfNodesToScore<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>PercentageOfNodesToScore is the percentage of all nodes that, once found feasible\nfor running a pod, causes the scheduler to stop searching for more feasible nodes in\nthe cluster. This helps improve the scheduler's performance. 
The scheduler always tries to find\nat least &quot;minFeasibleNodesToFind&quot; feasible nodes no matter what the value of this flag is.\nExample: if the cluster size is 500 nodes and the value of this flag is 30,\nthe scheduler stops finding further feasible nodes once it finds 150 feasible ones.\nWhen the value is 0, a default percentage (5%--50%, based on the size of the cluster) of the\nnodes will be scored. It is overridden by the profile-level PercentageOfNodesToScore.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>podInitialBackoffSeconds<\/code> <B>[Required]<\/B><br\/>\n<code>int64<\/code>\n<\/td>\n<td>\n   <p>PodInitialBackoffSeconds is the initial backoff for unschedulable pods.\nIf specified, it must be greater than 0. If this value is null, the default value (1s)\nwill be used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>podMaxBackoffSeconds<\/code> <B>[Required]<\/B><br\/>\n<code>int64<\/code>\n<\/td>\n<td>\n   <p>PodMaxBackoffSeconds is the max backoff for unschedulable pods.\nIf specified, it must be greater than podInitialBackoffSeconds. If this value is null,\nthe default value (10s) will be used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>profiles<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-KubeSchedulerProfile\"><code>[]KubeSchedulerProfile<\/code><\/a>\n<\/td>\n<td>\n   <p>Profiles are scheduling profiles that kube-scheduler supports. Pods can\nchoose to be scheduled under a particular profile by setting its associated\nscheduler name. Pods that don't specify any scheduler name are scheduled\nwith the &quot;default-scheduler&quot; profile, if present here.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extenders<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-Extender\"><code>[]Extender<\/code><\/a>\n<\/td>\n<td>\n   <p>Extenders are the list of scheduler extenders, each holding the values of how to communicate\nwith the extender. 
These extenders are shared by all scheduler profiles.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>delayCacheUntilActive<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>DelayCacheUntilActive specifies when to start caching. If this is true and leader election is enabled,\nthe scheduler will wait to fill informer caches until it is the leader. Doing so will have slower\nfailover with the benefit of lower memory overhead while waiting to become leader.\nDefaults to false.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NodeAffinityArgs`     {#kubescheduler-config-k8s-io-v1-NodeAffinityArgs}\n    \n\n\n<p>NodeAffinityArgs holds arguments to configure the NodeAffinity plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubescheduler.config.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>NodeAffinityArgs<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>addedAffinity<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#nodeaffinity-v1-core\"><code>core\/v1.NodeAffinity<\/code><\/a>\n<\/td>\n<td>\n   <p>AddedAffinity is applied to all Pods additionally to the NodeAffinity\nspecified in the PodSpec. That is, Nodes need to satisfy AddedAffinity\nAND .spec.NodeAffinity. 
AddedAffinity is empty by default (all Nodes\nmatch).\nWhen AddedAffinity is used, some Pods with affinity requirements that match\na specific Node (such as Daemonset Pods) might remain unschedulable.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NodeResourcesBalancedAllocationArgs`     {#kubescheduler-config-k8s-io-v1-NodeResourcesBalancedAllocationArgs}\n    \n\n\n<p>NodeResourcesBalancedAllocationArgs holds arguments used to configure NodeResourcesBalancedAllocation plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubescheduler.config.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>NodeResourcesBalancedAllocationArgs<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>resources<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-ResourceSpec\"><code>[]ResourceSpec<\/code><\/a>\n<\/td>\n<td>\n   <p>Resources to be managed, the default is &quot;cpu&quot; and &quot;memory&quot; if not specified.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NodeResourcesFitArgs`     {#kubescheduler-config-k8s-io-v1-NodeResourcesFitArgs}\n    \n\n\n<p>NodeResourcesFitArgs holds arguments used to configure the NodeResourcesFit plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubescheduler.config.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>NodeResourcesFitArgs<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>ignoredResources<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>IgnoredResources is the list of resources that NodeResources fit filter\nshould ignore. 
This doesn't apply to scoring.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ignoredResourceGroups<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>IgnoredResourceGroups defines the list of resource groups that NodeResources fit filter should ignore.\ne.g. if group is [&quot;example.com&quot;], it will ignore all resource names that begin\nwith &quot;example.com&quot;, such as &quot;example.com\/aaa&quot; and &quot;example.com\/bbb&quot;.\nA resource group name can't contain '\/'. This doesn't apply to scoring.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>scoringStrategy<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-ScoringStrategy\"><code>ScoringStrategy<\/code><\/a>\n<\/td>\n<td>\n   <p>ScoringStrategy selects the node resource scoring strategy.\nThe default strategy is LeastAllocated with an equal &quot;cpu&quot; and &quot;memory&quot; weight.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PodTopologySpreadArgs`     {#kubescheduler-config-k8s-io-v1-PodTopologySpreadArgs}\n    \n\n\n<p>PodTopologySpreadArgs holds arguments used to configure the PodTopologySpread plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubescheduler.config.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>PodTopologySpreadArgs<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>defaultConstraints<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#topologyspreadconstraint-v1-core\"><code>[]core\/v1.TopologySpreadConstraint<\/code><\/a>\n<\/td>\n<td>\n   <p>DefaultConstraints defines topology spread constraints to be applied to\nPods that don't define any in <code>pod.spec.topologySpreadConstraints<\/code>.\n<code>.defaultConstraints[*].labelSelectors<\/code> must be empty, as they are\ndeduced from the Pod's membership to 
Services, ReplicationControllers,\nReplicaSets or StatefulSets.\nWhen not empty, .defaultingType must be &quot;List&quot;.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>defaultingType<\/code><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PodTopologySpreadConstraintsDefaulting\"><code>PodTopologySpreadConstraintsDefaulting<\/code><\/a>\n<\/td>\n<td>\n   <p>DefaultingType determines how .defaultConstraints are deduced. Can be one\nof &quot;System&quot; or &quot;List&quot;.<\/p>\n<ul>\n<li>&quot;System&quot;: Use kubernetes defined constraints that spread Pods among\nNodes and Zones.<\/li>\n<li>&quot;List&quot;: Use constraints defined in .defaultConstraints.<\/li>\n<\/ul>\n<p>Defaults to &quot;System&quot;.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `VolumeBindingArgs`     {#kubescheduler-config-k8s-io-v1-VolumeBindingArgs}\n    \n\n\n<p>VolumeBindingArgs holds arguments used to configure the VolumeBinding plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubescheduler.config.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>VolumeBindingArgs<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>bindTimeoutSeconds<\/code> <B>[Required]<\/B><br\/>\n<code>int64<\/code>\n<\/td>\n<td>\n   <p>BindTimeoutSeconds is the timeout in seconds in volume binding operation.\nValue must be non-negative integer. The value zero indicates no waiting.\nIf this value is nil, the default value (600) will be used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>shape<\/code><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-UtilizationShapePoint\"><code>[]UtilizationShapePoint<\/code><\/a>\n<\/td>\n<td>\n   <p>Shape specifies the points defining the score function shape, which is\nused to score nodes based on the utilization of statically provisioned\nPVs. 
The utilization is calculated by dividing the total requested\nstorage of the pod by the total capacity of feasible PVs on each node.\nEach point contains utilization (ranges from 0 to 100) and its\nassociated score (ranges from 0 to 10). You can tune the priority by\nspecifying different scores for different utilization numbers.\nThe default shape points are:<\/p>\n<ol>\n<li>0 for 0 utilization<\/li>\n<li>10 for 100 utilization<\/li>\n<\/ol>\n<p>All points must be sorted in increasing order by utilization.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Extender`     {#kubescheduler-config-k8s-io-v1-Extender}\n    \n\n**Appears in:**\n\n- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)\n\n\n<p>Extender holds the parameters used to communicate with the extender. If a verb is unspecified\/empty,\nit is assumed that the extender chose not to provide that extension.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>urlPrefix<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>URLPrefix at which the extender is available.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>filterVerb<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Verb for the filter call, empty if not supported. This verb is appended to the URLPrefix when issuing the filter call to the extender.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>preemptVerb<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Verb for the preempt call, empty if not supported. This verb is appended to the URLPrefix when issuing the preempt call to the extender.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>prioritizeVerb<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Verb for the prioritize call, empty if not supported. 
This verb is appended to the URLPrefix when issuing the prioritize call to extender.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>weight<\/code> <B>[Required]<\/B><br\/>\n<code>int64<\/code>\n<\/td>\n<td>\n   <p>The numeric multiplier for the node scores that the prioritize call generates.\nThe weight should be a positive integer<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>bindVerb<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Verb for the bind call, empty if not supported. This verb is appended to the URLPrefix when issuing the bind call to extender.\nIf this method is implemented by the extender, it is the extender's responsibility to bind the pod to apiserver. Only one extender\ncan implement this function.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableHTTPS<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>EnableHTTPS specifies whether https should be used to communicate with the extender<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tlsConfig<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-ExtenderTLSConfig\"><code>ExtenderTLSConfig<\/code><\/a>\n<\/td>\n<td>\n   <p>TLSConfig specifies the transport layer security config<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>httpTimeout<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>HTTPTimeout specifies the timeout duration for a call to the extender. Filter timeout fails the scheduling of the pod. 
Prioritize\ntimeout is ignored, k8s\/other extenders priorities are used to select the node.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>nodeCacheCapable<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>NodeCacheCapable specifies that the extender is capable of caching node information,\nso the scheduler should only send minimal information about the eligible nodes\nassuming that the extender already cached full details of all nodes in the cluster<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>managedResources<\/code><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-ExtenderManagedResource\"><code>[]ExtenderManagedResource<\/code><\/a>\n<\/td>\n<td>\n   <p>ManagedResources is a list of extended resources that are managed by\nthis extender.<\/p>\n<ul>\n<li>A pod will be sent to the extender on the Filter, Prioritize and Bind\n(if the extender is the binder) phases iff the pod requests at least\none of the extended resources in this list. If empty or unspecified,\nall pods will be sent to this extender.<\/li>\n<li>If IgnoredByScheduler is set to true for a resource, kube-scheduler\nwill skip checking the resource in predicates.<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>ignorable<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>Ignorable specifies if the extender is ignorable, i.e. 
scheduling should not\nfail when the extender returns an error or is not reachable.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExtenderManagedResource`     {#kubescheduler-config-k8s-io-v1-ExtenderManagedResource}\n    \n\n**Appears in:**\n\n- [Extender](#kubescheduler-config-k8s-io-v1-Extender)\n\n\n<p>ExtenderManagedResource describes the arguments of extended resources\nmanaged by an extender.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name is the extended resource name.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ignoredByScheduler<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>IgnoredByScheduler indicates whether kube-scheduler should ignore this\nresource when applying predicates.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExtenderTLSConfig`     {#kubescheduler-config-k8s-io-v1-ExtenderTLSConfig}\n    \n\n**Appears in:**\n\n- [Extender](#kubescheduler-config-k8s-io-v1-Extender)\n\n\n<p>ExtenderTLSConfig contains settings to enable TLS with extender<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>insecure<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>Server should be accessed without verifying the TLS certificate. For testing only.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>serverName<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>ServerName is passed to the server for SNI and is used in the client to check server\ncertificates against. 
If ServerName is empty, the hostname used to contact the\nserver is used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Server requires TLS client certificate authentication<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>keyFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Server requires TLS client certificate authentication<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>caFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Trusted root certificates for server<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certData<\/code> <B>[Required]<\/B><br\/>\n<code>[]byte<\/code>\n<\/td>\n<td>\n   <p>CertData holds PEM-encoded bytes (typically read from a client certificate file).\nCertData takes precedence over CertFile<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>keyData<\/code> <B>[Required]<\/B><br\/>\n<code>[]byte<\/code>\n<\/td>\n<td>\n   <p>KeyData holds PEM-encoded bytes (typically read from a client certificate key file).\nKeyData takes precedence over KeyFile<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>caData<\/code> <B>[Required]<\/B><br\/>\n<code>[]byte<\/code>\n<\/td>\n<td>\n   <p>CAData holds PEM-encoded bytes (typically read from a root certificates bundle).\nCAData takes precedence over CAFile<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeSchedulerProfile`     {#kubescheduler-config-k8s-io-v1-KubeSchedulerProfile}\n    \n\n**Appears in:**\n\n- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)\n\n\n<p>KubeSchedulerProfile is a scheduling profile.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>schedulerName<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>SchedulerName is the name of the scheduler associated to this profile.\nIf SchedulerName matches with the pod's &quot;spec.schedulerName&quot;, then the 
pod\nis scheduled with this profile.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>percentageOfNodesToScore<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>PercentageOfNodesToScore is the percentage of all nodes that, once found feasible\nfor running a pod, causes the scheduler to stop searching for more feasible nodes in\nthe cluster. This helps improve the scheduler's performance. The scheduler always tries to find\nat least &quot;minFeasibleNodesToFind&quot; feasible nodes no matter what the value of this flag is.\nExample: if the cluster size is 500 nodes and the value of this flag is 30,\nthe scheduler stops finding further feasible nodes once it finds 150 feasible ones.\nWhen the value is 0, a default percentage (5%--50%, based on the size of the cluster) of the\nnodes will be scored. It overrides the global PercentageOfNodesToScore. If it is empty,\nthe global PercentageOfNodesToScore will be used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>plugins<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-Plugins\"><code>Plugins<\/code><\/a>\n<\/td>\n<td>\n   <p>Plugins specify the set of plugins that should be enabled or disabled.\nEnabled plugins are the ones that should be enabled in addition to the\ndefault plugins. 
Disabled plugins are any of the default plugins that\nshould be disabled.\nWhen no enabled or disabled plugin is specified for an extension point,\ndefault plugins for that extension point will be used if there is any.\nIf a QueueSort plugin is specified, the same QueueSort Plugin and\nPluginConfig must be specified for all profiles.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>pluginConfig<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginConfig\"><code>[]PluginConfig<\/code><\/a>\n<\/td>\n<td>\n   <p>PluginConfig is an optional set of custom plugin arguments for each plugin.\nOmitting config args for a plugin is equivalent to using the default config\nfor that plugin.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Plugin`     {#kubescheduler-config-k8s-io-v1-Plugin}\n    \n\n**Appears in:**\n\n- [PluginSet](#kubescheduler-config-k8s-io-v1-PluginSet)\n\n\n<p>Plugin specifies a plugin name and its weight when applicable. Weight is used only for Score plugins.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name defines the name of plugin<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>weight<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>Weight defines the weight of plugin, only used for Score plugins.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PluginConfig`     {#kubescheduler-config-k8s-io-v1-PluginConfig}\n    \n\n**Appears in:**\n\n- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1-KubeSchedulerProfile)\n\n\n<p>PluginConfig specifies arguments that should be passed to a plugin at the time of initialization.\nA plugin that is invoked at multiple extension points is initialized once. 
Args can have arbitrary structure.\nIt is up to the plugin to process these Args.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name defines the name of plugin being configured<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>args<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/runtime\/#RawExtension\"><code>k8s.io\/apimachinery\/pkg\/runtime.RawExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>Args defines the arguments passed to the plugins at the time of initialization. Args can have arbitrary structure.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PluginSet`     {#kubescheduler-config-k8s-io-v1-PluginSet}\n    \n\n**Appears in:**\n\n- [Plugins](#kubescheduler-config-k8s-io-v1-Plugins)\n\n\n<p>PluginSet specifies enabled and disabled plugins for an extension point.\nIf an array is empty, missing, or nil, default plugins at that extension point will be used.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>enabled<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-Plugin\"><code>[]Plugin<\/code><\/a>\n<\/td>\n<td>\n   <p>Enabled specifies plugins that should be enabled in addition to default plugins.\nIf the default plugin is also configured in the scheduler config file, the weight of plugin will\nbe overridden accordingly.\nThese are called after default plugins and in the same order specified here.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>disabled<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-Plugin\"><code>[]Plugin<\/code><\/a>\n<\/td>\n<td>\n   <p>Disabled specifies default plugins that should be disabled.\nWhen all default plugins need to be disabled, an array containing only one &quot;*&quot; should be 
provided.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Plugins`     {#kubescheduler-config-k8s-io-v1-Plugins}\n    \n\n**Appears in:**\n\n- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1-KubeSchedulerProfile)\n\n\n<p>Plugins include multiple extension points. When specified, the list of plugins for\na particular extension point are the only ones enabled. If an extension point is\nomitted from the config, then the default set of plugins is used for that extension point.\nEnabled plugins are called in the order specified here, after default plugins. If they need to\nbe invoked before default plugins, default plugins must be disabled and re-enabled here in desired order.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>preEnqueue<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>PreEnqueue is a list of plugins that should be invoked before adding pods to the scheduling queue.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>queueSort<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>QueueSort is a list of plugins that should be invoked when sorting pods in the scheduling queue.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>preFilter<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>PreFilter is a list of plugins that should be invoked at &quot;PreFilter&quot; extension point of the scheduling framework.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>filter<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>Filter is a list of plugins that should be invoked when filtering out nodes that cannot run the 
Pod.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>postFilter<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>PostFilter is a list of plugins that are invoked after filtering phase, but only when no feasible nodes were found for the pod.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>preScore<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>PreScore is a list of plugins that are invoked before scoring.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>score<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>Score is a list of plugins that should be invoked when ranking nodes that have passed the filtering phase.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>reserve<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>Reserve is a list of plugins invoked when reserving\/unreserving resources\nafter a node is assigned to run the pod.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>permit<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>Permit is a list of plugins that control binding of a Pod. 
These plugins can prevent or delay binding of a Pod.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>preBind<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>PreBind is a list of plugins that should be invoked before a pod is bound.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>bind<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>Bind is a list of plugins that should be invoked at the &quot;Bind&quot; extension point of the scheduling framework.\nThe scheduler calls these plugins in order. The scheduler skips the rest of these plugins as soon as one returns success.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>postBind<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>PostBind is a list of plugins that should be invoked after a pod is successfully bound.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>multiPoint<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-PluginSet\"><code>PluginSet<\/code><\/a>\n<\/td>\n<td>\n   <p>MultiPoint is a simplified config section to enable plugins for all valid extension points.\nPlugins enabled through MultiPoint will automatically register for every individual extension\npoint the plugin has implemented. 
Disabling a plugin through MultiPoint disables that behavior.\nThe same is true for disabling &quot;*&quot; through MultiPoint (no default plugins will be automatically registered).\nPlugins can still be disabled through their individual extension points.<\/p>\n<p>In terms of precedence, plugin config follows this basic hierarchy<\/p>\n<ol>\n<li>Specific extension points<\/li>\n<li>Explicitly configured MultiPoint plugins<\/li>\n<li>The set of default plugins, as MultiPoint plugins\nThis implies that a higher precedence plugin will run first and overwrite any settings within MultiPoint.\nExplicitly user-configured plugins also take a higher precedence over default plugins.\nWithin this hierarchy, an Enabled setting takes precedence over Disabled. For example, if a plugin is\nset in both <code>multiPoint.Enabled<\/code> and <code>multiPoint.Disabled<\/code>, the plugin will be enabled. Similarly,\nincluding <code>multiPoint.Disabled = '*'<\/code> and <code>multiPoint.Enabled = pluginA<\/code> will still register that specific\nplugin through MultiPoint. 
This follows the same behavior as all other extension point configurations.<\/li>\n<\/ol>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PodTopologySpreadConstraintsDefaulting`     {#kubescheduler-config-k8s-io-v1-PodTopologySpreadConstraintsDefaulting}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1-PodTopologySpreadArgs)\n\n\n<p>PodTopologySpreadConstraintsDefaulting defines how to set default constraints\nfor the PodTopologySpread plugin.<\/p>\n\n\n\n\n## `RequestedToCapacityRatioParam`     {#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioParam}\n    \n\n**Appears in:**\n\n- [ScoringStrategy](#kubescheduler-config-k8s-io-v1-ScoringStrategy)\n\n\n<p>RequestedToCapacityRatioParam defines the RequestedToCapacityRatio parameters.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>shape<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-UtilizationShapePoint\"><code>[]UtilizationShapePoint<\/code><\/a>\n<\/td>\n<td>\n   <p>Shape is a list of points defining the scoring function shape.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ResourceSpec`     {#kubescheduler-config-k8s-io-v1-ResourceSpec}\n    \n\n**Appears in:**\n\n- [NodeResourcesBalancedAllocationArgs](#kubescheduler-config-k8s-io-v1-NodeResourcesBalancedAllocationArgs)\n\n- [ScoringStrategy](#kubescheduler-config-k8s-io-v1-ScoringStrategy)\n\n\n<p>ResourceSpec represents a single resource.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name of the resource.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>weight<\/code> <B>[Required]<\/B><br\/>\n<code>int64<\/code>\n<\/td>\n<td>\n   <p>Weight of the resource.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## 
`ScoringStrategy`     {#kubescheduler-config-k8s-io-v1-ScoringStrategy}\n    \n\n**Appears in:**\n\n- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1-NodeResourcesFitArgs)\n\n\n<p>ScoringStrategy defines a ScoringStrategyType for the node resource plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>type<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-ScoringStrategyType\"><code>ScoringStrategyType<\/code><\/a>\n<\/td>\n<td>\n   <p>Type selects which strategy to run.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resources<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-ResourceSpec\"><code>[]ResourceSpec<\/code><\/a>\n<\/td>\n<td>\n   <p>Resources to consider when scoring.\nThe default resource set includes &quot;cpu&quot; and &quot;memory&quot; with an equal weight.\nAllowed weights go from 1 to 100.\nWeight defaults to 1 if not specified or explicitly set to 0.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>requestedToCapacityRatio<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioParam\"><code>RequestedToCapacityRatioParam<\/code><\/a>\n<\/td>\n<td>\n   <p>Arguments specific to the RequestedToCapacityRatio strategy.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ScoringStrategyType`     {#kubescheduler-config-k8s-io-v1-ScoringStrategyType}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [ScoringStrategy](#kubescheduler-config-k8s-io-v1-ScoringStrategy)\n\n\n<p>ScoringStrategyType is the type of scoring strategy used by the NodeResourcesFit plugin.<\/p>\n\n\n\n\n## `UtilizationShapePoint`     {#kubescheduler-config-k8s-io-v1-UtilizationShapePoint}\n    \n\n**Appears in:**\n\n- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1-VolumeBindingArgs)\n\n- [RequestedToCapacityRatioParam](#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioParam)\n\n\n<p>UtilizationShapePoint represents 
a single point of a priority function shape.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>utilization<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>Utilization (x axis). Valid values are 0 to 100. A fully utilized node maps to 100.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>score<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>Score assigned to a given utilization (y axis). Valid values are 0 to 10.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n 
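Taken together, `ScoringStrategy`, `RequestedToCapacityRatioParam`, and `UtilizationShapePoint` are consumed through `NodeResourcesFitArgs`. The following is a minimal illustrative sketch (not generated from the API); the profile name, equal weights, and bin-packing shape are assumptions chosen for the example:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: NodeResourcesFit
        args:
          apiVersion: kubescheduler.config.k8s.io/v1
          kind: NodeResourcesFitArgs
          scoringStrategy:
            type: RequestedToCapacityRatio
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
            requestedToCapacityRatio:
              shape:
                # Points must be sorted in increasing order by utilization.
                # Score 0 at 0% utilization, score 10 at 100% utilization.
                - utilization: 0
                  score: 0
                - utilization: 100
                  score: 10
```

With this shape, nodes score higher as their requested-to-capacity ratio increases, which favors bin packing; swapping the two scores would instead favor spreading pods across less-utilized nodes.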
p    td    tr   tr  td  code acceptContentTypes  code   B  Required   B  br    code string  code    td   td      p acceptContentTypes defines the Accept header sent by clients when connecting to a server  overriding the default value of  application json   This field will control all connections to the server used by a particular client   p    td    tr   tr  td  code contentType  code   B  Required   B  br    code string  code    td   td      p contentType is the content type used when sending data to the server from this client   p    td    tr   tr  td  code qps  code   B  Required   B  br    code float32  code    td   td      p qps controls the number of queries per second allowed for this connection   p    td    tr   tr  td  code burst  code   B  Required   B  br    code int32  code    td   td      p burst allows extra queries to accumulate when a client is exceeding its rate   p    td    tr    tbody    table       DebuggingConfiguration        DebuggingConfiguration          Appears in        KubeSchedulerConfiguration   kubescheduler config k8s io v1 KubeSchedulerConfiguration     p DebuggingConfiguration holds configuration for Debugging related features   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code enableProfiling  code   B  Required   B  br    code bool  code    td   td      p enableProfiling enables profiling via web interface host port debug pprof   p    td    tr   tr  td  code enableContentionProfiling  code   B  Required   B  br    code bool  code    td   td      p enableContentionProfiling enables block profiling  if enableProfiling is true   p    td    tr    tbody    table       LeaderElectionConfiguration        LeaderElectionConfiguration          Appears in        KubeSchedulerConfiguration   kubescheduler config k8s io v1 KubeSchedulerConfiguration     p LeaderElectionConfiguration defines the configuration of leader election clients for components that can run 
with leader election enabled   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code leaderElect  code   B  Required   B  br    code bool  code    td   td      p leaderElect enables a leader election client to gain leadership before executing the main loop  Enable this when running replicated components for high availability   p    td    tr   tr  td  code leaseDuration  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p leaseDuration is the duration that non leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot  This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate  This is only applicable if leader election is enabled   p    td    tr   tr  td  code renewDeadline  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p renewDeadline is the interval between attempts by the acting master to renew a leadership slot before it stops leading  This must be less than or equal to the lease duration  This is only applicable if leader election is enabled   p    td    tr   tr  td  code retryPeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p retryPeriod is the duration the clients should wait between attempting acquisition and renewal of a leadership  This is only applicable if leader election is enabled   p    td    tr   tr  td  code resourceLock  code   B  Required   B  br    code string  code    td   td      p resourceLock indicates the resource object type that will be used to lock during leader election cycles   p    td    tr   tr  td  code 
resourceName  code   B  Required   B  br    code string  code    td   td      p resourceName indicates the name of resource object that will be used to lock during leader election cycles   p    td    tr   tr  td  code resourceNamespace  code   B  Required   B  br    code string  code    td   td      p resourceName indicates the namespace of resource object that will be used to lock during leader election cycles   p    td    tr    tbody    table          DefaultPreemptionArgs        kubescheduler config k8s io v1 DefaultPreemptionArgs          p DefaultPreemptionArgs holds arguments used to configure the DefaultPreemption plugin   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubescheduler config k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code DefaultPreemptionArgs  code   td   tr           tr  td  code minCandidateNodesPercentage  code   B  Required   B  br    code int32  code    td   td      p MinCandidateNodesPercentage is the minimum number of candidates to shortlist when dry running preemption as a percentage of number of nodes  Must be in the range  0  100   Defaults to 10  of the cluster size if unspecified   p    td    tr   tr  td  code minCandidateNodesAbsolute  code   B  Required   B  br    code int32  code    td   td      p MinCandidateNodesAbsolute is the absolute minimum number of candidates to shortlist  The likely number of candidates enumerated for dry running preemption is given by the formula  numCandidates   max numNodes   minCandidateNodesPercentage  minCandidateNodesAbsolute  We say  quot likely quot  because there are other factors such as PDB violations that play a role in the number of candidates shortlisted  Must be at least 0 nodes  Defaults to 100 nodes if unspecified   p    td    tr    tbody    table       InterPodAffinityArgs        kubescheduler config k8s io v1 InterPodAffinityArgs      
    p InterPodAffinityArgs holds arguments used to configure the InterPodAffinity plugin   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubescheduler config k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code InterPodAffinityArgs  code   td   tr           tr  td  code hardPodAffinityWeight  code   B  Required   B  br    code int32  code    td   td      p HardPodAffinityWeight is the scoring weight for existing pods with a matching hard affinity to the incoming pod   p    td    tr   tr  td  code ignorePreferredTermsOfExistingPods  code   B  Required   B  br    code bool  code    td   td      p IgnorePreferredTermsOfExistingPods configures the scheduler to ignore existing pods  preferred affinity rules when scoring candidate nodes  unless the incoming pod has inter pod affinities   p    td    tr    tbody    table       KubeSchedulerConfiguration        kubescheduler config k8s io v1 KubeSchedulerConfiguration          p KubeSchedulerConfiguration configures a scheduler  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubescheduler config k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code KubeSchedulerConfiguration  code   td   tr           tr  td  code parallelism  code   B  Required   B  br    code int32  code    td   td      p Parallelism defines the amount of parallelism in algorithms for scheduling a Pods  Must be greater than 0  Defaults to 16  p    td    tr   tr  td  code leaderElection  code   B  Required   B  br    a href   LeaderElectionConfiguration   code LeaderElectionConfiguration  code   a    td   td      p LeaderElection defines the configuration of leader election client   p    td    tr   tr  td  code clientConnection  code   B  Required   B  br    a href   
ClientConnectionConfiguration   code ClientConnectionConfiguration  code   a    td   td      p ClientConnection specifies the kubeconfig file and client connection settings for the proxy server to use when communicating with the apiserver   p    td    tr   tr  td  code DebuggingConfiguration  code   B  Required   B  br    a href   DebuggingConfiguration   code DebuggingConfiguration  code   a    td   td  Members of  code DebuggingConfiguration  code  are embedded into this type       p DebuggingConfiguration holds configuration for Debugging related features TODO  We might wanna make this a substruct like Debugging componentbaseconfigv1alpha1 DebuggingConfiguration  p    td    tr   tr  td  code percentageOfNodesToScore  code   B  Required   B  br    code int32  code    td   td      p PercentageOfNodesToScore is the percentage of all nodes that once found feasible for running a pod  the scheduler stops its search for more feasible nodes in the cluster  This helps improve scheduler s performance  Scheduler always tries to find at least  quot minFeasibleNodesToFind quot  feasible nodes no matter what the value of this flag is  Example  if the cluster size is 500 nodes and the value of this flag is 30  then scheduler stops finding further feasible nodes once it finds 150 feasible ones  When the value is 0  default percentage  5   50  based on the size of the cluster  of the nodes will be scored  It is overridden by profile level PercentageOfNodesToScore   p    td    tr   tr  td  code podInitialBackoffSeconds  code   B  Required   B  br    code int64  code    td   td      p PodInitialBackoffSeconds is the initial backoff for unschedulable pods  If specified  it must be greater than 0  If this value is null  the default value  1s  will be used   p    td    tr   tr  td  code podMaxBackoffSeconds  code   B  Required   B  br    code int64  code    td   td      p PodMaxBackoffSeconds is the max backoff for unschedulable pods  If specified  it must be greater than 
podInitialBackoffSeconds  If this value is null  the default value  10s  will be used   p    td    tr   tr  td  code profiles  code   B  Required   B  br    a href   kubescheduler config k8s io v1 KubeSchedulerProfile   code   KubeSchedulerProfile  code   a    td   td      p Profiles are scheduling profiles that kube scheduler supports  Pods can choose to be scheduled under a particular profile by setting its associated scheduler name  Pods that don t specify any scheduler name are scheduled with the  quot default scheduler quot  profile  if present here   p    td    tr   tr  td  code extenders  code   B  Required   B  br    a href   kubescheduler config k8s io v1 Extender   code   Extender  code   a    td   td      p Extenders are the list of scheduler extenders  each holding the values of how to communicate with the extender  These extenders are shared by all scheduler profiles   p    td    tr   tr  td  code delayCacheUntilActive  code   B  Required   B  br    code bool  code    td   td      p DelayCacheUntilActive specifies when to start caching  If this is true and leader election is enabled  the scheduler will wait to fill informer caches until it is the leader  Doing so will have slower failover with the benefit of lower memory overhead while waiting to become leader  Defaults to false   p    td    tr    tbody    table       NodeAffinityArgs        kubescheduler config k8s io v1 NodeAffinityArgs          p NodeAffinityArgs holds arguments to configure the NodeAffinity plugin   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubescheduler config k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code NodeAffinityArgs  code   td   tr           tr  td  code addedAffinity  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  nodeaffinity v1 core   code core v1 NodeAffinity  code   a    td 
  td      p AddedAffinity is applied to all Pods additionally to the NodeAffinity specified in the PodSpec  That is  Nodes need to satisfy AddedAffinity AND  spec NodeAffinity  AddedAffinity is empty by default  all Nodes match   When AddedAffinity is used  some Pods with affinity requirements that match a specific Node  such as Daemonset Pods  might remain unschedulable   p    td    tr    tbody    table       NodeResourcesBalancedAllocationArgs        kubescheduler config k8s io v1 NodeResourcesBalancedAllocationArgs          p NodeResourcesBalancedAllocationArgs holds arguments used to configure NodeResourcesBalancedAllocation plugin   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubescheduler config k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code NodeResourcesBalancedAllocationArgs  code   td   tr           tr  td  code resources  code   B  Required   B  br    a href   kubescheduler config k8s io v1 ResourceSpec   code   ResourceSpec  code   a    td   td      p Resources to be managed  the default is  quot cpu quot  and  quot memory quot  if not specified   p    td    tr    tbody    table       NodeResourcesFitArgs        kubescheduler config k8s io v1 NodeResourcesFitArgs          p NodeResourcesFitArgs holds arguments used to configure the NodeResourcesFit plugin   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubescheduler config k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code NodeResourcesFitArgs  code   td   tr           tr  td  code ignoredResources  code   B  Required   B  br    code   string  code    td   td      p IgnoredResources is the list of resources that NodeResources fit filter should ignore  This doesn t apply to scoring   p    td    tr   tr  td  
code ignoredResourceGroups  code   B  Required   B  br    code   string  code    td   td      p IgnoredResourceGroups defines the list of resource groups that NodeResources fit filter should ignore  e g  if group is   quot example com quot    it will ignore all resource names that begin with  quot example com quot   such as  quot example com aaa quot  and  quot example com bbb quot   A resource group name can t contain      This doesn t apply to scoring   p    td    tr   tr  td  code scoringStrategy  code   B  Required   B  br    a href   kubescheduler config k8s io v1 ScoringStrategy   code ScoringStrategy  code   a    td   td      p ScoringStrategy selects the node resource scoring strategy  The default strategy is LeastAllocated with an equal  quot cpu quot  and  quot memory quot  weight   p    td    tr    tbody    table       PodTopologySpreadArgs        kubescheduler config k8s io v1 PodTopologySpreadArgs          p PodTopologySpreadArgs holds arguments used to configure the PodTopologySpread plugin   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubescheduler config k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code PodTopologySpreadArgs  code   td   tr           tr  td  code defaultConstraints  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  topologyspreadconstraint v1 core   code   core v1 TopologySpreadConstraint  code   a    td   td      p DefaultConstraints defines topology spread constraints to be applied to Pods that don t define any in  code pod spec topologySpreadConstraints  code    code  defaultConstraints    labelSelectors  code  must be empty  as they are deduced from the Pod s membership to Services  ReplicationControllers  ReplicaSets or StatefulSets  When not empty   defaultingType must be  quot List quot    p    td    tr   tr  td  code defaultingType  code 
 br    a href   kubescheduler config k8s io v1 PodTopologySpreadConstraintsDefaulting   code PodTopologySpreadConstraintsDefaulting  code   a    td   td      p DefaultingType determines how  defaultConstraints are deduced  Can be one of  quot System quot  or  quot List quot    p   ul   li  quot System quot   Use kubernetes defined constraints that spread Pods among Nodes and Zones   li   li  quot List quot   Use constraints defined in  defaultConstraints   li    ul   p Defaults to  quot System quot    p    td    tr    tbody    table       VolumeBindingArgs        kubescheduler config k8s io v1 VolumeBindingArgs          p VolumeBindingArgs holds arguments used to configure the VolumeBinding plugin   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubescheduler config k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code VolumeBindingArgs  code   td   tr           tr  td  code bindTimeoutSeconds  code   B  Required   B  br    code int64  code    td   td      p BindTimeoutSeconds is the timeout in seconds in volume binding operation  Value must be non negative integer  The value zero indicates no waiting  If this value is nil  the default value  600  will be used   p    td    tr   tr  td  code shape  code  br    a href   kubescheduler config k8s io v1 UtilizationShapePoint   code   UtilizationShapePoint  code   a    td   td      p Shape specifies the points defining the score function shape  which is used to score nodes based on the utilization of statically provisioned PVs  The utilization is calculated by dividing the total requested storage of the pod by the total capacity of feasible PVs on each node  Each point contains utilization  ranges from 0 to 100  and its associated score  ranges from 0 to 10   You can turn the priority by specifying different scores for different utilization numbers  The default shape points are  
 p   ol   li 0 for 0 utilization  li   li 10 for 100 utilization All points must be sorted in increasing order by utilization   li    ol    td    tr    tbody    table       Extender        kubescheduler config k8s io v1 Extender          Appears in        KubeSchedulerConfiguration   kubescheduler config k8s io v1 KubeSchedulerConfiguration     p Extender holds the parameters used to communicate with the extender  If a verb is unspecified empty  it is assumed that the extender chose not to provide that extension   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code urlPrefix  code   B  Required   B  br    code string  code    td   td      p URLPrefix at which the extender is available  p    td    tr   tr  td  code filterVerb  code   B  Required   B  br    code string  code    td   td      p Verb for the filter call  empty if not supported  This verb is appended to the URLPrefix when issuing the filter call to extender   p    td    tr   tr  td  code preemptVerb  code   B  Required   B  br    code string  code    td   td      p Verb for the preempt call  empty if not supported  This verb is appended to the URLPrefix when issuing the preempt call to extender   p    td    tr   tr  td  code prioritizeVerb  code   B  Required   B  br    code string  code    td   td      p Verb for the prioritize call  empty if not supported  This verb is appended to the URLPrefix when issuing the prioritize call to extender   p    td    tr   tr  td  code weight  code   B  Required   B  br    code int64  code    td   td      p The numeric multiplier for the node scores that the prioritize call generates  The weight should be a positive integer  p    td    tr   tr  td  code bindVerb  code   B  Required   B  br    code string  code    td   td      p Verb for the bind call  empty if not supported  This verb is appended to the URLPrefix when issuing the bind call to extender  If this method is implemented by the 
extender  it is the extender s responsibility to bind the pod to apiserver  Only one extender can implement this function   p    td    tr   tr  td  code enableHTTPS  code   B  Required   B  br    code bool  code    td   td      p EnableHTTPS specifies whether https should be used to communicate with the extender  p    td    tr   tr  td  code tlsConfig  code   B  Required   B  br    a href   kubescheduler config k8s io v1 ExtenderTLSConfig   code ExtenderTLSConfig  code   a    td   td      p TLSConfig specifies the transport layer security config  p    td    tr   tr  td  code httpTimeout  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p HTTPTimeout specifies the timeout duration for a call to the extender  Filter timeout fails the scheduling of the pod  Prioritize timeout is ignored  k8s other extenders priorities are used to select the node   p    td    tr   tr  td  code nodeCacheCapable  code   B  Required   B  br    code bool  code    td   td      p NodeCacheCapable specifies that the extender is capable of caching node information  so the scheduler should only send minimal information about the eligible nodes assuming that the extender already cached full details of all nodes in the cluster  p    td    tr   tr  td  code managedResources  code  br    a href   kubescheduler config k8s io v1 ExtenderManagedResource   code   ExtenderManagedResource  code   a    td   td      p ManagedResources is a list of extended resources that are managed by this extender   p   ul   li A pod will be sent to the extender on the Filter  Prioritize and Bind  if the extender is the binder  phases iff the pod requests at least one of the extended resources in this list  If empty or unspecified  all pods will be sent to this extender   li   li If IgnoredByScheduler is set to true for a resource  kube scheduler will skip checking the resource in predicates   li    ul    td    tr   tr 
 td  code ignorable  code   B  Required   B  br    code bool  code    td   td      p Ignorable specifies if the extender is ignorable  i e  scheduling should not fail when the extender returns an error or is not reachable   p    td    tr    tbody    table       ExtenderManagedResource        kubescheduler config k8s io v1 ExtenderManagedResource          Appears in        Extender   kubescheduler config k8s io v1 Extender     p ExtenderManagedResource describes the arguments of extended resources managed by an extender   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p Name is the extended resource name   p    td    tr   tr  td  code ignoredByScheduler  code   B  Required   B  br    code bool  code    td   td      p IgnoredByScheduler indicates whether kube scheduler should ignore this resource when applying predicates   p    td    tr    tbody    table       ExtenderTLSConfig        kubescheduler config k8s io v1 ExtenderTLSConfig          Appears in        Extender   kubescheduler config k8s io v1 Extender     p ExtenderTLSConfig contains settings to enable TLS with extender  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code insecure  code   B  Required   B  br    code bool  code    td   td      p Server should be accessed without verifying the TLS certificate  For testing only   p    td    tr   tr  td  code serverName  code   B  Required   B  br    code string  code    td   td      p ServerName is passed to the server for SNI and is used in the client to check server certificates against  If ServerName is empty  the hostname used to contact the server is used   p    td    tr   tr  td  code certFile  code   B  Required   B  br    code string  code    td   td      p Server requires TLS client certificate authentication  p    td    tr   tr  
| `keyFile` [Required] (`string`) | Server requires TLS client certificate authentication |
| `caFile` [Required] (`string`) | Trusted root certificates for server |
| `certData` [Required] (`[]byte`) | CertData holds PEM encoded bytes (typically read from a client certificate file). CertData takes precedence over CertFile. |
| `keyData` [Required] (`[]byte`) | KeyData holds PEM encoded bytes (typically read from a client certificate key file). KeyData takes precedence over KeyFile. |
| `caData` [Required] (`[]byte`) | CAData holds PEM encoded bytes (typically read from a root certificates bundle). CAData takes precedence over CAFile. |

## `KubeSchedulerProfile`     {#kubescheduler-config-k8s-io-v1-KubeSchedulerProfile}

**Appears in:**

- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)

KubeSchedulerProfile is a scheduling profile.

| Field | Description |
|-------|-------------|
| `schedulerName` [Required] (`string`) | SchedulerName is the name of the scheduler associated to this profile. If SchedulerName matches with the pod's `spec.schedulerName`, then the pod is scheduled with this profile. |
| `percentageOfNodesToScore` [Required] (`int32`) | PercentageOfNodesToScore is the percentage of all nodes that, once found feasible for running a pod, the scheduler stops its search for more feasible nodes in the cluster. This helps improve the scheduler's performance. The scheduler always tries to find at least 'minFeasibleNodesToFind' feasible nodes no matter what the value of this flag is. Example: if the cluster size is 500 nodes and the value of this flag is 30, then the scheduler stops finding further feasible nodes once it finds 150 feasible ones. When the value is 0, a default percentage (5%-50%, based on the size of the cluster) of the nodes will be scored. It overrides the global PercentageOfNodesToScore; if it is empty, the global PercentageOfNodesToScore will be used. |
| `plugins` [Required] ([`Plugins`](#kubescheduler-config-k8s-io-v1-Plugins)) | Plugins specify the set of plugins that should be enabled or disabled. Enabled plugins are the ones that should be enabled in addition to the default plugins. Disabled plugins are any of the default plugins that should be disabled. When no enabled or disabled plugin is specified for an extension point, default plugins for that extension point will be used if there is any. If a QueueSort plugin is specified, the same QueueSort Plugin and PluginConfig must be specified for all profiles. |
| `pluginConfig` [Required] ([`[]PluginConfig`](#kubescheduler-config-k8s-io-v1-PluginConfig)) | PluginConfig is an optional set of custom plugin arguments for each plugin. Omitting config args for a plugin is equivalent to using the default config for that plugin. |

## `Plugin`     {#kubescheduler-config-k8s-io-v1-Plugin}

**Appears in:**

- [PluginSet](#kubescheduler-config-k8s-io-v1-PluginSet)

Plugin specifies a plugin name and its weight when applicable. Weight is used only for Score plugins.

| Field | Description |
|-------|-------------|
| `name` [Required] (`string`) | Name defines the name of the plugin. |
| `weight` [Required] (`int32`) | Weight defines the weight of the plugin, only used for Score plugins. |

## `PluginConfig`     {#kubescheduler-config-k8s-io-v1-PluginConfig}

**Appears in:**

- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1-KubeSchedulerProfile)

PluginConfig specifies arguments that should be passed to a plugin at the time of initialization. A plugin that is invoked at multiple extension points is initialized once. Args can have arbitrary structure. It is up to the plugin to process these Args.

| Field | Description |
|-------|-------------|
| `name` [Required] (`string`) | Name defines the name of the plugin being configured. |
| `args` [Required] ([`k8s.io/apimachinery/pkg/runtime.RawExtension`](https://pkg.go.dev/k8s.io/apimachinery/pkg/runtime#RawExtension)) | Args defines the arguments passed to the plugins at the time of initialization. Args can have arbitrary structure. |

## `PluginSet`     {#kubescheduler-config-k8s-io-v1-PluginSet}

**Appears in:**

- [Plugins](#kubescheduler-config-k8s-io-v1-Plugins)

PluginSet specifies enabled and disabled plugins for an extension point. If an array is empty, missing, or nil, default plugins at that extension point will be used.

| Field | Description |
|-------|-------------|
| `enabled` [Required] ([`[]Plugin`](#kubescheduler-config-k8s-io-v1-Plugin)) | Enabled specifies plugins that should be enabled in addition to default plugins. If a default plugin is also configured in the scheduler config file, the weight of the plugin will be overridden accordingly. These are called after default plugins and in the same order specified here. |
| `disabled` [Required] ([`[]Plugin`](#kubescheduler-config-k8s-io-v1-Plugin)) | Disabled specifies default plugins that should be disabled. When all default plugins need to be disabled, an array containing only one '*' should be provided. |

## `Plugins`     {#kubescheduler-config-k8s-io-v1-Plugins}

**Appears in:**

- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1-KubeSchedulerProfile)

Plugins include multiple extension points. When specified, the list of plugins for a particular extension point are the only ones enabled. If an extension point is omitted from the config, then the default set of plugins is used for that extension point. Enabled plugins are called in the order specified here, after default plugins. If they need to be invoked before default plugins, default plugins must be disabled and re-enabled here in the desired order.

Each of the following fields is a [`PluginSet`](#kubescheduler-config-k8s-io-v1-PluginSet):

- `preEnqueue`: a list of plugins that should be invoked before adding pods to the scheduling queue.
- `queueSort`: a list of plugins that should be invoked when sorting pods in the scheduling queue.
- `preFilter`: a list of plugins that should be invoked at the 'PreFilter' extension point of the scheduling framework.
- `filter`: a list of plugins that should be invoked when filtering out nodes that cannot run the Pod.
- `postFilter`: a list of plugins that are invoked after the filtering phase, but only when no feasible nodes were found for the pod.
- `preScore`: a list of plugins that are invoked before scoring.
- `score`: a list of plugins that should be invoked when ranking nodes that have passed the filtering phase.
- `reserve`: a list of plugins invoked when reserving/unreserving resources after a node is assigned to run the pod.
- `permit`: a list of plugins that control binding of a Pod. These plugins can prevent or delay binding of a Pod.
- `preBind`: a list of plugins that should be invoked before a pod is bound.
- `bind`: a list of plugins that should be invoked at the 'Bind' extension point of the scheduling framework. The scheduler calls these plugins in order, and skips the rest as soon as one returns success.
- `postBind`: a list of plugins that should be invoked after a pod is successfully bound.
- `multiPoint`: a simplified config section to enable plugins for all valid extension points. Plugins enabled through MultiPoint will automatically register for every individual extension point the plugin has implemented. Disabling a plugin through MultiPoint disables that behavior. The same is true for disabling '*' through MultiPoint (no default plugins will be automatically registered). Plugins can still be disabled through their individual extension points. In terms of precedence, plugin config follows this basic hierarchy:
  1. Specific extension points
  2. Explicitly configured MultiPoint plugins
  3. The set of default plugins, as MultiPoint plugins

  This implies that a higher precedence plugin will run first and overwrite any settings within MultiPoint. Explicitly user-configured plugins also take a higher precedence over default plugins. Within this hierarchy, an Enabled setting takes precedence over Disabled. For example, if a plugin is set in both `multiPoint.Enabled` and `multiPoint.Disabled`, the plugin will be enabled. Similarly, including `multiPoint.Disabled = '*'` and `multiPoint.Enabled = pluginA` will still register that specific plugin through MultiPoint. This follows the same behavior as all other extension point configurations.

## `PodTopologySpreadConstraintsDefaulting`     {#kubescheduler-config-k8s-io-v1-PodTopologySpreadConstraintsDefaulting}

(Alias of `string`)

**Appears in:**

- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1-PodTopologySpreadArgs)

PodTopologySpreadConstraintsDefaulting defines how to set default constraints for the PodTopologySpread plugin.

## `RequestedToCapacityRatioParam`     {#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioParam}

**Appears in:**

- [ScoringStrategy](#kubescheduler-config-k8s-io-v1-ScoringStrategy)

RequestedToCapacityRatioParam defines RequestedToCapacityRatio parameters.

| Field | Description |
|-------|-------------|
| `shape` [Required] ([`[]UtilizationShapePoint`](#kubescheduler-config-k8s-io-v1-UtilizationShapePoint)) | Shape is a list of points defining the scoring function shape. |

## `ResourceSpec`     {#kubescheduler-config-k8s-io-v1-ResourceSpec}

**Appears in:**

- [NodeResourcesBalancedAllocationArgs](#kubescheduler-config-k8s-io-v1-NodeResourcesBalancedAllocationArgs)
- [ScoringStrategy](#kubescheduler-config-k8s-io-v1-ScoringStrategy)

ResourceSpec represents a single resource.

| Field | Description |
|-------|-------------|
| `name` [Required] (`string`) | Name of the resource. |
| `weight` [Required] (`int64`) | Weight of the resource. |

## `ScoringStrategy`     {#kubescheduler-config-k8s-io-v1-ScoringStrategy}

**Appears in:**

- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1-NodeResourcesFitArgs)

ScoringStrategy defines a ScoringStrategyType for the node resource plugin.

| Field | Description |
|-------|-------------|
| `type` [Required] ([`ScoringStrategyType`](#kubescheduler-config-k8s-io-v1-ScoringStrategyType)) | Type selects which strategy to run. |
| `resources` [Required] ([`[]ResourceSpec`](#kubescheduler-config-k8s-io-v1-ResourceSpec)) | Resources to consider when scoring. The default resource set includes 'cpu' and 'memory' with an equal weight. Allowed weights go from 1 to 100. Weight defaults to 1 if not specified or explicitly set to 0. |
| `requestedToCapacityRatio` [Required] ([`RequestedToCapacityRatioParam`](#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioParam)) | Arguments specific to the RequestedToCapacityRatio strategy. |

## `ScoringStrategyType`     {#kubescheduler-config-k8s-io-v1-ScoringStrategyType}

(Alias of `string`)

**Appears in:**

- [ScoringStrategy](#kubescheduler-config-k8s-io-v1-ScoringStrategy)

ScoringStrategyType is the type of scoring strategy used in the NodeResourcesFit plugin.

## `UtilizationShapePoint`     {#kubescheduler-config-k8s-io-v1-UtilizationShapePoint}

**Appears in:**

- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1-VolumeBindingArgs)
- [RequestedToCapacityRatioParam](#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioParam)

UtilizationShapePoint represents a single point of a priority function shape.

| Field | Description |
|-------|-------------|
| `utilization` [Required] (`int32`) | Utilization (x axis). Valid values are 0 to 100. A fully utilized node maps to 100. |
| `score` [Required] (`int32`) | Score assigned to a given utilization (y axis). Valid values are 0 to 10. |
"}
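The PluginSet semantics in the record above (disabled names remove default plugins, a single '*' entry removes them all, and enabled plugins run after the surviving defaults) can be sketched as a small resolver. This is a minimal sketch of the documented behavior, not the kube-scheduler implementation; the function name and the default plugin lists are illustrative only.

```python
def resolve_extension_point(defaults, enabled=None, disabled=None):
    """Sketch of PluginSet resolution for one extension point:
    names in `disabled` (or the wildcard '*') remove default plugins,
    and `enabled` plugins are appended after the surviving defaults."""
    enabled = enabled or []
    disabled = disabled or []
    surviving = [] if "*" in disabled else [p for p in defaults if p not in disabled]
    # Enabled plugins are called after default plugins, in the order given here.
    return surviving + [p for p in enabled if p not in surviving]

# Disable every default Score plugin and enable a custom one instead.
print(resolve_extension_point(
    ["NodeResourcesFit", "ImageLocality"], enabled=["MyScorer"], disabled=["*"]))
# prints ['MyScorer']
```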
{"questions":"kubernetes reference package credentialprovider kubelet k8s io v1 contenttype tool reference title Kubelet CredentialProvider v1 Resource Types autogenerated true","answers":"---\ntitle: Kubelet CredentialProvider (v1)\ncontent_type: tool-reference\npackage: credentialprovider.kubelet.k8s.io\/v1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [CredentialProviderRequest](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest)\n- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse)\n  \n\n## `CredentialProviderRequest`     {#credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest}\n    \n\n\n<p>CredentialProviderRequest includes the image that the kubelet requires authentication for.\nKubelet will pass this request object to the plugin via stdin. In general, plugins should\nprefer responding with the same apiVersion they were sent.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>credentialprovider.kubelet.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>CredentialProviderRequest<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>image<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>image is the container image that is being pulled as part of the\ncredential provider plugin request. Plugins may optionally parse the image\nto extract any information required to fetch credentials.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `CredentialProviderResponse`     {#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse}\n    \n\n\n<p>CredentialProviderResponse holds credentials that the kubelet should use for the specified\nimage provided in the original request. 
Kubelet will read the response from the plugin via stdout.\nThis response should be set to the same apiVersion as CredentialProviderRequest.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>credentialprovider.kubelet.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>CredentialProviderResponse<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>cacheKeyType<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#credentialprovider-kubelet-k8s-io-v1-PluginCacheKeyType\"><code>PluginCacheKeyType<\/code><\/a>\n<\/td>\n<td>\n   <p>cacheKeyType indicates the type of caching key to use based on the image provided\nin the request. There are three valid values for the cache key type: Image, Registry, and\nGlobal. If an invalid value is specified, the response will NOT be used by the kubelet.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cacheDuration<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>cacheDuration indicates the duration the provided credentials should be cached for.\nThe kubelet will use this field to set the in-memory cache duration for credentials\nin the AuthConfig. If null, the kubelet will use defaultCacheDuration provided in\nCredentialProviderConfig. If set to 0, the kubelet will not cache the provided AuthConfig.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>auth<\/code><br\/>\n<a href=\"#credentialprovider-kubelet-k8s-io-v1-AuthConfig\"><code>map[string]AuthConfig<\/code><\/a>\n<\/td>\n<td>\n   <p>auth is a map containing authentication information passed into the kubelet.\nEach key is a match image string (more on this below). The corresponding authConfig value\nshould be valid for all images that match against this key. 
A plugin should set\nthis field to null if no valid credentials can be returned for the requested image.<\/p>\n<p>Each key in the map is a pattern which can optionally contain a port and a path.\nGlobs can be used in the domain, but not in the port or the path. Globs are supported\nas subdomains like '<em>.k8s.io' or 'k8s.<\/em>.io', and top-level-domains such as 'k8s.<em>'.\nMatching partial subdomains like 'app<\/em>.k8s.io' is also supported. Each glob can only match\na single subdomain segment, so *.io does not match *.k8s.io.<\/p>\n<p>The kubelet will match images against the key when all of the below are true:<\/p>\n<ul>\n<li>Both contain the same number of domain parts and each part matches.<\/li>\n<li>The URL path of an imageMatch must be a prefix of the target image URL path.<\/li>\n<li>If the imageMatch contains a port, then the port must match in the image as well.<\/li>\n<\/ul>\n<p>When multiple keys are returned, the kubelet will traverse all keys in reverse order so that:<\/p>\n<ul>\n<li>longer keys come before shorter keys with the same prefix<\/li>\n<li>non-wildcard keys come before wildcard keys with the same prefix.<\/li>\n<\/ul>\n<p>For any given match, the kubelet will attempt an image pull with the provided credentials,\nstopping after the first successfully authenticated pull.<\/p>\n<p>Example keys:<\/p>\n<ul>\n<li>123456789.dkr.ecr.us-east-1.amazonaws.com<\/li>\n<li>*.azurecr.io<\/li>\n<li>gcr.io<\/li>\n<li><em>.<\/em>.registry.io<\/li>\n<li>registry.io:8080\/path<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AuthConfig`     {#credentialprovider-kubelet-k8s-io-v1-AuthConfig}\n    \n\n**Appears in:**\n\n- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse)\n\n\n<p>AuthConfig contains authentication information for a container registry.\nOnly username\/password based authentication is supported today, but more authentication\nmechanisms may be added in the future.<\/p>\n\n\n<table 
class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>username<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>username is the username used for authenticating to the container registry\nAn empty username is valid.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>password<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>password is the password used for authenticating to the container registry\nAn empty password is valid.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PluginCacheKeyType`     {#credentialprovider-kubelet-k8s-io-v1-PluginCacheKeyType}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse)\n\n\n\n\n ","site":"kubernetes reference"}
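The stdin/stdout exchange described in the CredentialProviderRequest/Response record above can be sketched as a minimal provider plugin. This is a hedged sketch of the documented protocol shape, not a real provider: the registry key, the credentials, and the cache duration below are placeholder values.

```python
import json
import sys


def handle(request_json: str) -> str:
    """Build a CredentialProviderResponse for a CredentialProviderRequest,
    echoing the request's apiVersion as the docs recommend."""
    req = json.loads(request_json)
    resp = {
        "apiVersion": req["apiVersion"],
        "kind": "CredentialProviderResponse",
        "cacheKeyType": "Registry",  # valid values: Image, Registry, Global
        "cacheDuration": "6h",       # kubelet caches the AuthConfig this long
        "auth": {
            # Match-image key: these credentials apply to every image that
            # matches this (placeholder) registry pattern.
            "example.registry.local": {"username": "robot", "password": "s3cret"},
        },
    }
    return json.dumps(resp)


# In a real plugin binary the kubelet drives the exchange:
#   sys.stdout.write(handle(sys.stdin.read()))
```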
{"questions":"kubernetes reference contenttype tool reference package apiserver k8s io v1beta1 Resource Types p Package v1beta1 is the v1beta1 version of the API p title kube apiserver Configuration v1beta1 autogenerated true","answers":"---\ntitle: kube-apiserver Configuration (v1beta1)\ncontent_type: tool-reference\npackage: apiserver.k8s.io\/v1beta1\nauto_generated: true\n---\n<p>Package v1beta1 is the v1beta1 version of the API.<\/p>\n\n\n## Resource Types \n\n\n- [AuthenticationConfiguration](#apiserver-k8s-io-v1beta1-AuthenticationConfiguration)\n- [AuthorizationConfiguration](#apiserver-k8s-io-v1beta1-AuthorizationConfiguration)\n- [EgressSelectorConfiguration](#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration)\n- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration)\n  \n    \n    \n\n## `TracingConfiguration`     {#TracingConfiguration}\n    \n\n**Appears in:**\n\n- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)\n\n- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration)\n\n- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration)\n\n\n<p>TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>endpoint<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Endpoint of the collector this component will report traces to.\nThe connection is insecure, and does not currently support TLS.\nRecommended is unset, and endpoint is the otlp grpc default, localhost:4317.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>samplingRatePerMillion<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>SamplingRatePerMillion is the number of samples to collect per million spans.\nRecommended is unset. 
If unset, sampler respects its parent span's sampling\nrate, but otherwise never samples.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n  \n\n## `AuthenticationConfiguration`     {#apiserver-k8s-io-v1beta1-AuthenticationConfiguration}\n    \n\n\n<p>AuthenticationConfiguration provides versioned configuration for authentication.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>apiserver.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>AuthenticationConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>jwt<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-JWTAuthenticator\"><code>[]JWTAuthenticator<\/code><\/a>\n<\/td>\n<td>\n   <p>jwt is a list of authenticators to authenticate Kubernetes users using\nJWT compliant tokens. The authenticator will attempt to parse a raw ID token,\nverify it's been signed by the configured issuer. The public key to verify the\nsignature is discovered from the issuer's public endpoint using OIDC discovery.\nFor an incoming token, each JWT authenticator will be attempted in\nthe order in which it is specified in this list.  Note however that\nother authenticators may run before or after the JWT authenticators.\nThe specific position of JWT authenticators in relation to other\nauthenticators is neither defined nor stable across releases.  
Since\neach JWT authenticator must have a unique issuer URL, at most one\nJWT authenticator will attempt to cryptographically validate the token.<\/p>\n<p>The minimum valid JWT payload must contain the following claims:\n{\n&quot;iss&quot;: &quot;https:\/\/issuer.example.com&quot;,\n&quot;aud&quot;: [&quot;audience&quot;],\n&quot;exp&quot;: 1234567890,\n&quot;<!-- raw HTML omitted -->&quot;: &quot;username&quot;\n}<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>anonymous<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-AnonymousAuthConfig\"><code>AnonymousAuthConfig<\/code><\/a>\n<\/td>\n<td>\n   <p>If present --anonymous-auth must not be set<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AuthorizationConfiguration`     {#apiserver-k8s-io-v1beta1-AuthorizationConfiguration}\n    \n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>apiserver.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>AuthorizationConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>authorizers<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-AuthorizerConfiguration\"><code>[]AuthorizerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>Authorizers is an ordered list of authorizers to\nauthorize requests against.\nThis is similar to the --authorization-modes kube-apiserver flag\nMust be at least one.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `EgressSelectorConfiguration`     {#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration}\n    \n\n\n<p>EgressSelectorConfiguration provides versioned configuration for egress selector clients.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    
\n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>apiserver.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>EgressSelectorConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>egressSelections<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-EgressSelection\"><code>[]EgressSelection<\/code><\/a>\n<\/td>\n<td>\n   <p>connectionServices contains a list of egress selection client configurations<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `TracingConfiguration`     {#apiserver-k8s-io-v1beta1-TracingConfiguration}\n    \n\n\n<p>TracingConfiguration provides versioned configuration for tracing clients.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>apiserver.k8s.io\/v1beta1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>TracingConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>TracingConfiguration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#TracingConfiguration\"><code>TracingConfiguration<\/code><\/a>\n<\/td>\n<td>(Members of <code>TracingConfiguration<\/code> are embedded into this type.)\n   <p>Embed the component config tracing configuration struct<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AnonymousAuthCondition`     {#apiserver-k8s-io-v1beta1-AnonymousAuthCondition}\n    \n\n**Appears in:**\n\n- [AnonymousAuthConfig](#apiserver-k8s-io-v1beta1-AnonymousAuthConfig)\n\n\n<p>AnonymousAuthCondition describes the condition under which anonymous auth\nshould be enabled.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>path<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Path for which anonymous auth is enabled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AnonymousAuthConfig`    
 {#apiserver-k8s-io-v1beta1-AnonymousAuthConfig}\n    \n\n**Appears in:**\n\n- [AuthenticationConfiguration](#apiserver-k8s-io-v1beta1-AuthenticationConfiguration)\n\n\n<p>AnonymousAuthConfig provides the configuration for the anonymous authenticator.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>enabled<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>conditions<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-AnonymousAuthCondition\"><code>[]AnonymousAuthCondition<\/code><\/a>\n<\/td>\n<td>\n   <p>If set, anonymous auth is only allowed if the request meets one of the\nconditions.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AudienceMatchPolicyType`     {#apiserver-k8s-io-v1beta1-AudienceMatchPolicyType}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [Issuer](#apiserver-k8s-io-v1beta1-Issuer)\n\n\n<p>AudienceMatchPolicyType is a set of valid values for issuer.audienceMatchPolicy<\/p>\n\n\n\n\n## `AuthorizerConfiguration`     {#apiserver-k8s-io-v1beta1-AuthorizerConfiguration}\n    \n\n**Appears in:**\n\n- [AuthorizationConfiguration](#apiserver-k8s-io-v1beta1-AuthorizationConfiguration)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>type<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Type refers to the type of the authorizer\n&quot;Webhook&quot; is supported in the generic API server\nOther API servers may support additional authorizer\ntypes like Node, RBAC, ABAC, etc.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name used to describe the webhook\nThis is explicitly used in monitoring machinery for metrics\nNote: Names must be 
DNS1123 labels like <code>myauthorizername<\/code> or\nsubdomains like <code>myauthorizer.example.domain<\/code>\nRequired, with no default<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>webhook<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-WebhookConfiguration\"><code>WebhookConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>Webhook defines the configuration for a Webhook authorizer\nMust be defined when Type=Webhook\nMust not be defined when Type!=Webhook<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ClaimMappings`     {#apiserver-k8s-io-v1beta1-ClaimMappings}\n    \n\n**Appears in:**\n\n- [JWTAuthenticator](#apiserver-k8s-io-v1beta1-JWTAuthenticator)\n\n\n<p>ClaimMappings provides the configuration for claim mapping<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>username<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-PrefixedClaimOrExpression\"><code>PrefixedClaimOrExpression<\/code><\/a>\n<\/td>\n<td>\n   <p>username represents an option for the username attribute.\nThe claim's value must be a singular string.\nSame as the --oidc-username-claim and --oidc-username-prefix flags.\nIf username.expression is set, the expression must produce a string value.\nIf username.expression uses 'claims.email', then 'claims.email_verified' must be used in\nusername.expression or extra[<em>].valueExpression or claimValidationRules[<\/em>].expression.\nAn example claim validation rule expression that matches the validation automatically\napplied when username.claim is set to 'email' is 'claims.?email_verified.orValue(true)'.<\/p>\n<p>In the flag based approach, the --oidc-username-claim and --oidc-username-prefix are optional. If --oidc-username-claim is not set,\nthe default value is &quot;sub&quot;. For the authentication config, there is no defaulting for claim or prefix. 
The claim and prefix must be set explicitly.\nFor claim, if --oidc-username-claim was not set with legacy flag approach, configure username.claim=&quot;sub&quot; in the authentication config.\nFor prefix:\n(1) --oidc-username-prefix=&quot;-&quot;, no prefix was added to the username. For the same behavior using authentication config,\nset username.prefix=&quot;&quot;\n(2) --oidc-username-prefix=&quot;&quot; and  --oidc-username-claim != &quot;email&quot;, prefix was &quot;&lt;value of --oidc-issuer-url&gt;#&quot;. For the same\nbehavior using authentication config, set username.prefix=&quot;<!-- raw HTML omitted -->#&quot;\n(3) --oidc-username-prefix=&quot;<!-- raw HTML omitted -->&quot;. For the same behavior using authentication config, set username.prefix=&quot;<!-- raw HTML omitted -->&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>groups<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-PrefixedClaimOrExpression\"><code>PrefixedClaimOrExpression<\/code><\/a>\n<\/td>\n<td>\n   <p>groups represents an option for the groups attribute.\nThe claim's value must be a string or string array claim.\nIf groups.claim is set, the prefix must be specified (and can be the empty string).\nIf groups.expression is set, the expression must produce a string or string array value.\n&quot;&quot;, [], and null values are treated as the group mapping not being present.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>uid<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-ClaimOrExpression\"><code>ClaimOrExpression<\/code><\/a>\n<\/td>\n<td>\n   <p>uid represents an option for the uid attribute.\nClaim must be a singular string claim.\nIf uid.expression is set, the expression must produce a string value.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extra<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-ExtraMapping\"><code>[]ExtraMapping<\/code><\/a>\n<\/td>\n<td>\n   <p>extra represents an option for the extra attribute.\nexpression must produce a string or string array value.\nIf the value is empty, the 
extra mapping will not be present.<\/p>\n<p>hard-coded extra key\/value<\/p>\n<ul>\n<li>key: &quot;foo&quot;\nvalueExpression: &quot;'bar'&quot;\nThis will result in an extra attribute - foo: [&quot;bar&quot;]<\/li>\n<\/ul>\n<p>hard-coded key, value copying claim value<\/p>\n<ul>\n<li>key: &quot;foo&quot;\nvalueExpression: &quot;claims.some_claim&quot;\nThis will result in an extra attribute - foo: [value of some_claim]<\/li>\n<\/ul>\n<p>hard-coded key, value derived from claim value<\/p>\n<ul>\n<li>key: &quot;admin&quot;\nvalueExpression: '(has(claims.is_admin) &amp;&amp; claims.is_admin) ? &quot;true&quot;:&quot;&quot;'\nThis will result in:<\/li>\n<li>if is_admin claim is present and true, extra attribute - admin: [&quot;true&quot;]<\/li>\n<li>if is_admin claim is present and false or is_admin claim is not present, no extra attribute will be added<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ClaimOrExpression`     {#apiserver-k8s-io-v1beta1-ClaimOrExpression}\n    \n\n**Appears in:**\n\n- [ClaimMappings](#apiserver-k8s-io-v1beta1-ClaimMappings)\n\n\n<p>ClaimOrExpression provides the configuration for a single claim or expression.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>claim<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>claim is the JWT claim to use.\nEither claim or expression must be set.\nMutually exclusive with expression.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>expression<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>expression represents the expression which will be evaluated by CEL.<\/p>\n<p>CEL expressions have access to the contents of the token claims, organized into CEL variable:<\/p>\n<ul>\n<li>'claims' is a map of claim names to claim values.\nFor example, a variable named 'sub' can be accessed as 'claims.sub'.\nNested claims can be accessed using dot notation, e.g. 
'claims.foo.bar'.<\/li>\n<\/ul>\n<p>Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/<\/p>\n<p>Mutually exclusive with claim.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ClaimValidationRule`     {#apiserver-k8s-io-v1beta1-ClaimValidationRule}\n    \n\n**Appears in:**\n\n- [JWTAuthenticator](#apiserver-k8s-io-v1beta1-JWTAuthenticator)\n\n\n<p>ClaimValidationRule provides the configuration for a single claim validation rule.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>claim<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>claim is the name of a required claim.\nSame as --oidc-required-claim flag.\nOnly string claim keys are supported.\nMutually exclusive with expression and message.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>requiredValue<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>requiredValue is the value of a required claim.\nSame as --oidc-required-claim flag.\nOnly string claim values are supported.\nIf claim is set and requiredValue is not set, the claim must be present with a value set to the empty string.\nMutually exclusive with expression and message.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>expression<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>expression represents the expression which will be evaluated by CEL.\nMust produce a boolean.<\/p>\n<p>CEL expressions have access to the contents of the token claims, organized into CEL variable:<\/p>\n<ul>\n<li>'claims' is a map of claim names to claim values.\nFor example, a variable named 'sub' can be accessed as 'claims.sub'.\nNested claims can be accessed using dot notation, e.g. 
'claims.foo.bar'.\nMust return true for the validation to pass.<\/li>\n<\/ul>\n<p>Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/<\/p>\n<p>Mutually exclusive with claim and requiredValue.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>message<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>message customizes the returned error message when expression returns false.\nmessage is a literal string.\nMutually exclusive with claim and requiredValue.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Connection`     {#apiserver-k8s-io-v1beta1-Connection}\n    \n\n**Appears in:**\n\n- [EgressSelection](#apiserver-k8s-io-v1beta1-EgressSelection)\n\n\n<p>Connection provides the configuration for a single egress selection client.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>proxyProtocol<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-ProtocolType\"><code>ProtocolType<\/code><\/a>\n<\/td>\n<td>\n   <p>Protocol is the protocol used to connect from client to the konnectivity server.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>transport<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-Transport\"><code>Transport<\/code><\/a>\n<\/td>\n<td>\n   <p>Transport defines the transport configurations we use to dial to the konnectivity server.\nThis is required if ProxyProtocol is HTTPConnect or GRPC.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `EgressSelection`     {#apiserver-k8s-io-v1beta1-EgressSelection}\n    \n\n**Appears in:**\n\n- [EgressSelectorConfiguration](#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration)\n\n\n<p>EgressSelection provides the configuration for a single egress selection client.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>name is 
the name of the egress selection.\nCurrently supported values are &quot;controlplane&quot;, &quot;master&quot;, &quot;etcd&quot; and &quot;cluster&quot;\nThe &quot;master&quot; egress selector is deprecated in favor of &quot;controlplane&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>connection<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-Connection\"><code>Connection<\/code><\/a>\n<\/td>\n<td>\n   <p>connection is the exact information used to configure the egress selection<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExtraMapping`     {#apiserver-k8s-io-v1beta1-ExtraMapping}\n    \n\n**Appears in:**\n\n- [ClaimMappings](#apiserver-k8s-io-v1beta1-ClaimMappings)\n\n\n<p>ExtraMapping provides the configuration for a single extra mapping.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>key<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>key is a string to use as the extra attribute key.\nkey must be a domain-prefix path (e.g. example.org\/foo). All characters before the first &quot;\/&quot; must be a valid\nsubdomain as defined by RFC 1123. 
All characters trailing the first &quot;\/&quot; must\nbe valid HTTP Path characters as defined by RFC 3986.\nkey must be lowercase.\nRequired to be unique.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>valueExpression<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>valueExpression is a CEL expression to extract extra attribute value.\nvalueExpression must produce a string or string array value.\n&quot;&quot;, [], and null values are treated as the extra mapping not being present.\nEmpty string values contained within a string array are filtered out.<\/p>\n<p>CEL expressions have access to the contents of the token claims, organized into CEL variable:<\/p>\n<ul>\n<li>'claims' is a map of claim names to claim values.\nFor example, a variable named 'sub' can be accessed as 'claims.sub'.\nNested claims can be accessed using dot notation, e.g. 'claims.foo.bar'.<\/li>\n<\/ul>\n<p>Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Issuer`     {#apiserver-k8s-io-v1beta1-Issuer}\n    \n\n**Appears in:**\n\n- [JWTAuthenticator](#apiserver-k8s-io-v1beta1-JWTAuthenticator)\n\n\n<p>Issuer provides the configuration for an external provider's specific settings.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>url<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>url points to the issuer URL in a format https:\/\/url or https:\/\/url\/path.\nThis must match the &quot;iss&quot; claim in the presented JWT, and the issuer returned from discovery.\nSame value as the --oidc-issuer-url flag.\nDiscovery information is fetched from &quot;{url}\/.well-known\/openid-configuration&quot; unless overridden by discoveryURL.\nRequired to be unique across all JWT authenticators.\nNote that egress selection configuration is not used for this network 
connection.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>discoveryURL<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>discoveryURL, if specified, overrides the URL used to fetch discovery\ninformation instead of using &quot;{url}\/.well-known\/openid-configuration&quot;.\nThe exact value specified is used, so &quot;\/.well-known\/openid-configuration&quot;\nmust be included in discoveryURL if needed.<\/p>\n<p>The &quot;issuer&quot; field in the fetched discovery information must match the &quot;issuer.url&quot; field\nin the AuthenticationConfiguration and will be used to validate the &quot;iss&quot; claim in the presented JWT.\nThis is for scenarios where the well-known and jwks endpoints are hosted at a different\nlocation than the issuer (such as locally in the cluster).<\/p>\n<p>Example:\nA discovery url that is exposed using kubernetes service 'oidc' in namespace 'oidc-namespace'\nand discovery information is available at '\/.well-known\/openid-configuration'.\ndiscoveryURL: &quot;https:\/\/oidc.oidc-namespace\/.well-known\/openid-configuration&quot;\ncertificateAuthority is used to verify the TLS connection and the hostname on the leaf certificate\nmust be set to 'oidc.oidc-namespace'.<\/p>\n<p>curl https:\/\/oidc.oidc-namespace\/.well-known\/openid-configuration (.discoveryURL field)\n{\nissuer: &quot;https:\/\/oidc.example.com&quot; (.url field)\n}<\/p>\n<p>discoveryURL must be different from url.\nRequired to be unique across all JWT authenticators.\nNote that egress selection configuration is not used for this network connection.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificateAuthority<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>certificateAuthority contains PEM-encoded certificate authority certificates\nused to validate the connection when fetching discovery information.\nIf unset, the system verifier is used.\nSame value as the content of the file referenced by the --oidc-ca-file flag.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>audiences<\/code> 
<B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>audiences is the set of acceptable audiences the JWT must be issued to.\nAt least one of the entries must match the &quot;aud&quot; claim in presented JWTs.\nSame value as the --oidc-client-id flag (though this field supports an array).\nRequired to be non-empty.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>audienceMatchPolicy<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-AudienceMatchPolicyType\"><code>AudienceMatchPolicyType<\/code><\/a>\n<\/td>\n<td>\n   <p>audienceMatchPolicy defines how the &quot;audiences&quot; field is used to match the &quot;aud&quot; claim in the presented JWT.\nAllowed values are:<\/p>\n<ol>\n<li>&quot;MatchAny&quot; when multiple audiences are specified and<\/li>\n<li>empty (or unset) or &quot;MatchAny&quot; when a single audience is specified.<\/li>\n<\/ol>\n<ul>\n<li>\n<p>MatchAny: the &quot;aud&quot; claim in the presented JWT must match at least one of the entries in the &quot;audiences&quot; field.\nFor example, if &quot;audiences&quot; is [&quot;foo&quot;, &quot;bar&quot;], the &quot;aud&quot; claim in the presented JWT must contain either &quot;foo&quot; or &quot;bar&quot; (and may contain both).<\/p>\n<\/li>\n<li>\n<p>&quot;&quot;: The match policy can be empty (or unset) when a single audience is specified in the &quot;audiences&quot; field. 
The &quot;aud&quot; claim in the presented JWT must contain the single audience (and may contain others).<\/p>\n<\/li>\n<\/ul>\n<p>For more nuanced audience validation, use claimValidationRules.\nexample: claimValidationRule[].expression: 'sets.equivalent(claims.aud, [&quot;bar&quot;, &quot;foo&quot;, &quot;baz&quot;])' to require an exact match.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `JWTAuthenticator`     {#apiserver-k8s-io-v1beta1-JWTAuthenticator}\n    \n\n**Appears in:**\n\n- [AuthenticationConfiguration](#apiserver-k8s-io-v1beta1-AuthenticationConfiguration)\n\n\n<p>JWTAuthenticator provides the configuration for a single JWT authenticator.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>issuer<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-Issuer\"><code>Issuer<\/code><\/a>\n<\/td>\n<td>\n   <p>issuer contains the basic OIDC provider connection options.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>claimValidationRules<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-ClaimValidationRule\"><code>[]ClaimValidationRule<\/code><\/a>\n<\/td>\n<td>\n   <p>claimValidationRules are rules that are applied to validate token claims to authenticate users.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>claimMappings<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-ClaimMappings\"><code>ClaimMappings<\/code><\/a>\n<\/td>\n<td>\n   <p>claimMappings points claims of a token to be treated as user attributes.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>userValidationRules<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-UserValidationRule\"><code>[]UserValidationRule<\/code><\/a>\n<\/td>\n<td>\n   <p>userValidationRules are rules that are applied to final user before completing authentication.\nThese allow invariants to be applied to incoming identities such as preventing the\nuse of the system: prefix that is commonly used by Kubernetes 
components.\nThe validation rules are logically ANDed together and must all return true for the validation to pass.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PrefixedClaimOrExpression`     {#apiserver-k8s-io-v1beta1-PrefixedClaimOrExpression}\n    \n\n**Appears in:**\n\n- [ClaimMappings](#apiserver-k8s-io-v1beta1-ClaimMappings)\n\n\n<p>PrefixedClaimOrExpression provides the configuration for a single prefixed claim or expression.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>claim<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>claim is the JWT claim to use.\nMutually exclusive with expression.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>prefix<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>prefix is prepended to claim's value to prevent clashes with existing names.\nprefix needs to be set if claim is set and can be the empty string.\nMutually exclusive with expression.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>expression<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>expression represents the expression which will be evaluated by CEL.<\/p>\n<p>CEL expressions have access to the contents of the token claims, organized into CEL variable:<\/p>\n<ul>\n<li>'claims' is a map of claim names to claim values.\nFor example, a variable named 'sub' can be accessed as 'claims.sub'.\nNested claims can be accessed using dot notation, e.g. 
'claims.foo.bar'.<\/li>\n<\/ul>\n<p>Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/<\/p>\n<p>Mutually exclusive with claim and prefix.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ProtocolType`     {#apiserver-k8s-io-v1beta1-ProtocolType}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [Connection](#apiserver-k8s-io-v1beta1-Connection)\n\n\n<p>ProtocolType is a set of valid values for Connection.ProtocolType<\/p>\n\n\n\n\n## `TCPTransport`     {#apiserver-k8s-io-v1beta1-TCPTransport}\n    \n\n**Appears in:**\n\n- [Transport](#apiserver-k8s-io-v1beta1-Transport)\n\n\n<p>TCPTransport provides the information to connect to konnectivity server via TCP<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>url<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>URL is the location of the konnectivity server to connect to.\nAs an example it might be &quot;https:\/\/127.0.0.1:8131&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tlsConfig<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-TLSConfig\"><code>TLSConfig<\/code><\/a>\n<\/td>\n<td>\n   <p>TLSConfig is the config needed to use TLS when connecting to konnectivity server<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `TLSConfig`     {#apiserver-k8s-io-v1beta1-TLSConfig}\n    \n\n**Appears in:**\n\n- [TCPTransport](#apiserver-k8s-io-v1beta1-TCPTransport)\n\n\n<p>TLSConfig provides the authentication information to connect to konnectivity server\nOnly used with TCPTransport<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>caBundle<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>caBundle is the file location of the CA to be used to determine trust with the konnectivity server.\nMust be absent\/empty if TCPTransport.URL is prefixed with http:\/\/\nIf absent while 
TCPTransport.URL is prefixed with https:\/\/, default to system trust roots.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clientKey<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>clientKey is the file location of the client key to be used in mtls handshakes with the konnectivity server.\nMust be absent\/empty if TCPTransport.URL is prefixed with http:\/\/\nMust be configured if TCPTransport.URL is prefixed with https:\/\/<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clientCert<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>clientCert is the file location of the client certificate to be used in mtls handshakes with the konnectivity server.\nMust be absent\/empty if TCPTransport.URL is prefixed with http:\/\/\nMust be configured if TCPTransport.URL is prefixed with https:\/\/<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Transport`     {#apiserver-k8s-io-v1beta1-Transport}\n    \n\n**Appears in:**\n\n- [Connection](#apiserver-k8s-io-v1beta1-Connection)\n\n\n<p>Transport defines the transport configurations we use to dial to the konnectivity server<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>tcp<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-TCPTransport\"><code>TCPTransport<\/code><\/a>\n<\/td>\n<td>\n   <p>TCP is the TCP configuration for communicating with the konnectivity server via TCP\nProxyProtocol of GRPC is not supported with TCP transport at the moment\nRequires at least one of TCP or UDS to be set<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>uds<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1beta1-UDSTransport\"><code>UDSTransport<\/code><\/a>\n<\/td>\n<td>\n   <p>UDS is the UDS configuration for communicating with the konnectivity server via UDS\nRequires at least one of TCP or UDS to be set<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `UDSTransport`     {#apiserver-k8s-io-v1beta1-UDSTransport}\n    \n\n**Appears in:**\n\n- 
[Transport](#apiserver-k8s-io-v1beta1-Transport)\n\n\n<p>UDSTransport provides the information to connect to konnectivity server via UDS<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>udsName<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>UDSName is the name of the unix domain socket to connect to konnectivity server\nThis does not use a unix:\/\/ prefix. (Eg: \/etc\/srv\/kubernetes\/konnectivity-server\/konnectivity-server.socket)<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `UserValidationRule`     {#apiserver-k8s-io-v1beta1-UserValidationRule}\n    \n\n**Appears in:**\n\n- [JWTAuthenticator](#apiserver-k8s-io-v1beta1-JWTAuthenticator)\n\n\n<p>UserValidationRule provides the configuration for a single user info validation rule.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>expression<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>expression represents the expression which will be evaluated by CEL.\nMust return true for the validation to pass.<\/p>\n<p>CEL expressions have access to the contents of UserInfo, organized into CEL variable:<\/p>\n<ul>\n<li>'user' - authentication.k8s.io\/v1, Kind=UserInfo object\nRefer to https:\/\/github.com\/kubernetes\/api\/blob\/release-1.28\/authentication\/v1\/types.go#L105-L122 for the definition.\nAPI documentation: https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#userinfo-v1-authentication-k8s-io<\/li>\n<\/ul>\n<p>Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>message<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>message customizes the returned error message when rule returns false.\nmessage is a literal string.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## 
`WebhookConfiguration`     {#apiserver-k8s-io-v1beta1-WebhookConfiguration}\n    \n\n**Appears in:**\n\n- [AuthorizerConfiguration](#apiserver-k8s-io-v1beta1-AuthorizerConfiguration)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>authorizedTTL<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>The duration to cache 'authorized' responses from the webhook\nauthorizer.\nSame as setting <code>--authorization-webhook-cache-authorized-ttl<\/code> flag\nDefault: 5m0s<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>unauthorizedTTL<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>The duration to cache 'unauthorized' responses from the webhook\nauthorizer.\nSame as setting <code>--authorization-webhook-cache-unauthorized-ttl<\/code> flag\nDefault: 30s<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>timeout<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>Timeout for the webhook request\nMaximum allowed value is 30s.\nRequired, no default value.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>subjectAccessReviewVersion<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>The API version of the authorization.k8s.io SubjectAccessReview to\nsend to and expect from the webhook.\nSame as setting <code>--authorization-webhook-version<\/code> flag\nValid values: v1beta1, v1\nRequired, no default value<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>matchConditionSubjectAccessReviewVersion<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>MatchConditionSubjectAccessReviewVersion specifies the 
SubjectAccessReview
version the CEL expressions are evaluated against.
Valid values: v1.
Required, no default value.</p>
</td>
</tr>
<tr><td><code>failurePolicy</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
   <p>Controls the authorization decision when a webhook request fails to
complete, returns a malformed response, or errors while evaluating
matchConditions.
Valid values:</p>
<ul>
<li>NoOpinion: continue to subsequent authorizers to see if one of
them allows the request.</li>
<li>Deny: reject the request without consulting subsequent authorizers.</li>
</ul>
<p>Required, with no default.</p>
</td>
</tr>
<tr><td><code>connectionInfo</code> <B>[Required]</B><br/>
<a href="#apiserver-k8s-io-v1beta1-WebhookConnectionInfo"><code>WebhookConnectionInfo</code></a>
</td>
<td>
   <p>ConnectionInfo defines how we talk to the webhook.</p>
</td>
</tr>
<tr><td><code>matchConditions</code> <B>[Required]</B><br/>
<a href="#apiserver-k8s-io-v1beta1-WebhookMatchCondition"><code>[]WebhookMatchCondition</code></a>
</td>
<td>
   <p>matchConditions is a list of conditions that must be met for a request to be sent to this
webhook. An empty list of matchConditions matches all requests.
A maximum of 64 match conditions are allowed.</p>
<p>The exact matching logic is (in order):</p>
<ol>
<li>If at least one matchCondition evaluates to FALSE, then the webhook is skipped.</li>
<li>If ALL matchConditions evaluate to TRUE, then the webhook is called.</li>
<li>If at least one matchCondition evaluates to an error (but none are FALSE):
<ul>
<li>If failurePolicy=Deny, then the webhook rejects the request.</li>
<li>If failurePolicy=NoOpinion, then the error is ignored and the webhook is skipped.</li>
</ul>
</li>
</ol>
</td>
</tr>
</tbody>
</table>

## `WebhookConnectionInfo`     {#apiserver-k8s-io-v1beta1-WebhookConnectionInfo}

**Appears in:**

- [WebhookConfiguration](#apiserver-k8s-io-v1beta1-WebhookConfiguration)

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>type</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
   <p>Controls how the webhook should communicate with the server.
Valid values:</p>
<ul>
<li>KubeConfigFile: use the file specified in kubeConfigFile to locate the
server.</li>
<li>InClusterConfig: use the in-cluster configuration to call the
SubjectAccessReview API hosted by kube-apiserver. This mode is not
allowed for kube-apiserver.</li>
</ul>
</td>
</tr>
<tr><td><code>kubeConfigFile</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
   <p>Path to the kubeconfig file used for connection info.
Required if connectionInfo.type is KubeConfigFile.</p>
</td>
</tr>
</tbody>
</table>

## `WebhookMatchCondition`     {#apiserver-k8s-io-v1beta1-WebhookMatchCondition}

**Appears in:**

- [WebhookConfiguration](#apiserver-k8s-io-v1beta1-WebhookConfiguration)

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>expression</code> <B>[Required]</B><br/>
<code>string</code>
</td>
<td>
   <p>expression represents the expression which will be evaluated by CEL. Must evaluate to bool.
CEL expressions have access to the contents of the SubjectAccessReview in version v1.
If the version specified by subjectAccessReviewVersion in the request variable is v1beta1,
the contents would be converted to the v1 version before evaluating the CEL expression.</p>
<p>Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/</p>
</td>
</tr>
</tbody>
</table>
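Putting the fields above together, a webhook authorizer entry in an AuthorizationConfiguration file might look like the following sketch. The authorizer name, kubeconfig path, TTL/timeout values, and the CEL match condition are illustrative assumptions, not values prescribed by this reference; check the apiVersion your kube-apiserver release expects before using it.

```yaml
apiVersion: apiserver.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    name: example-webhook          # illustrative name; must be a DNS1123 label or subdomain
    webhook:
      authorizedTTL: 5m0s          # cache duration for "authorized" responses
      unauthorizedTTL: 30s         # cache duration for "unauthorized" responses
      timeout: 3s                  # webhook request timeout (maximum allowed is 30s)
      subjectAccessReviewVersion: v1
      matchConditionSubjectAccessReviewVersion: v1
      failurePolicy: NoOpinion     # on webhook failure, fall through to later authorizers
      connectionInfo:
        type: KubeConfigFile
        kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig  # illustrative path
      matchConditions:
        # Illustrative CEL expression: skip this webhook for system users.
        - expression: "!request.user.startsWith('system:')"
```

Because the matchConditions above can only skip or call the webhook, a `failurePolicy` of `NoOpinion` lets any later authorizers in the list still decide the request if the webhook is unreachable.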
KubeConfigFile  use the file specified in kubeConfigFile to locate the server   li   li InClusterConfig  use the in cluster configuration to call the SubjectAccessReview API hosted by kube apiserver  This mode is not allowed for kube apiserver   li    ul    td    tr   tr  td  code kubeConfigFile  code   B  Required   B  br    code string  code    td   td      p Path to KubeConfigFile for connection info Required  if connectionInfo Type is KubeConfig  p    td    tr    tbody    table       WebhookMatchCondition        apiserver k8s io v1beta1 WebhookMatchCondition          Appears in        WebhookConfiguration   apiserver k8s io v1beta1 WebhookConfiguration      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code expression  code   B  Required   B  br    code string  code    td   td      p expression represents the expression which will be evaluated by CEL  Must evaluate to bool  CEL expressions have access to the contents of the SubjectAccessReview in v1 version  If version specified by subjectAccessReviewVersion in the request variable is v1beta1  the contents would be converted to the v1 version before evaluating the CEL expression   p   p Documentation on CEL  https   kubernetes io docs reference using api cel   p    td    tr    tbody    table   "}
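The webhook `matchConditions` ordering spelled out in the record above (any FALSE skips the webhook; all TRUE calls it; evaluation errors with no FALSE fall through to `failurePolicy`) can be modeled in a few lines of Python. This is an illustrative sketch, not Kubernetes source code; the function name and the `"skip"`/`"call"`/`"deny"` outcome labels are invented here.

```python
# Sketch of the webhook matchConditions decision logic described above.
# Each condition is modeled as a callable that returns True/False or
# raises on a CEL evaluation error. Illustrative only.

def evaluate_webhook(match_conditions, failure_policy, request):
    saw_error = False
    for cond in match_conditions:       # an empty list matches all requests
        try:
            if cond(request) is False:  # any FALSE -> the webhook is skipped
                return "skip"
        except Exception:
            saw_error = True            # remember the error, keep checking
    if saw_error:
        # At least one condition errored and none were FALSE:
        # the outcome is governed by failurePolicy.
        return "deny" if failure_policy == "Deny" else "skip"
    return "call"                       # all conditions TRUE -> call webhook
```

Note that a FALSE condition wins over an error, matching the "but none are FALSE" qualifier in step 3 of the documented ordering.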
{"questions":"kubernetes reference package kubelet config k8s io v1 contenttype tool reference Resource Types title Kubelet Configuration v1 autogenerated true","answers":"---\ntitle: Kubelet Configuration (v1)\ncontent_type: tool-reference\npackage: kubelet.config.k8s.io\/v1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [CredentialProviderConfig](#kubelet-config-k8s-io-v1-CredentialProviderConfig)\n  \n\n## `CredentialProviderConfig`     {#kubelet-config-k8s-io-v1-CredentialProviderConfig}\n    \n\n\n<p>CredentialProviderConfig is the configuration containing information about\neach exec credential provider. Kubelet reads this configuration from disk and enables\neach provider as specified by the CredentialProvider type.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubelet.config.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>CredentialProviderConfig<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>providers<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubelet-config-k8s-io-v1-CredentialProvider\"><code>[]CredentialProvider<\/code><\/a>\n<\/td>\n<td>\n   <p>providers is a list of credential provider plugins that will be enabled by the kubelet.\nMultiple providers may match against a single image, in which case credentials\nfrom all providers will be returned to the kubelet. If multiple providers are called\nfor a single image, the results are combined. If providers return overlapping\nauth keys, the value from the provider earlier in this list is used.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `CredentialProvider`     {#kubelet-config-k8s-io-v1-CredentialProvider}\n    \n\n**Appears in:**\n\n- [CredentialProviderConfig](#kubelet-config-k8s-io-v1-CredentialProviderConfig)\n\n\n<p>CredentialProvider represents an exec plugin to be invoked by the kubelet. 
The plugin is only\ninvoked when an image being pulled matches the images handled by the plugin (see matchImages).<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>name is the required name of the credential provider. It must match the name of the\nprovider executable as seen by the kubelet. The executable must be in the kubelet's\nbin directory (set by the --image-credential-provider-bin-dir flag).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>matchImages<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>matchImages is a required list of strings used to match against images in order to\ndetermine if this provider should be invoked. If one of the strings matches the\nrequested image from the kubelet, the plugin will be invoked and given a chance\nto provide credentials. Images are expected to contain the registry domain\nand URL path.<\/p>\n<p>Each entry in matchImages is a pattern which can optionally contain a port and a path.\nGlobs can be used in the domain, but not in the port or the path. Globs are supported\nas subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.\nMatching partial subdomains like 'app*.k8s.io' is also supported. 
Each glob can only match\na single subdomain segment, so *.io does not match *.k8s.io.<\/p>\n<p>A match exists between an image and a matchImage when all of the below are true:<\/p>\n<ul>\n<li>Both contain the same number of domain parts and each part matches.<\/li>\n<li>The URL path of an imageMatch must be a prefix of the target image URL path.<\/li>\n<li>If the imageMatch contains a port, then the port must match in the image as well.<\/li>\n<\/ul>\n<p>Example values of matchImages:<\/p>\n<ul>\n<li>123456789.dkr.ecr.us-east-1.amazonaws.com<\/li>\n<li>*.azurecr.io<\/li>\n<li>gcr.io<\/li>\n<li>*.*.registry.io<\/li>\n<li>registry.io:8080\/path<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>defaultCacheDuration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>defaultCacheDuration is the default duration the plugin will cache credentials in-memory\nif a cache duration is not provided in the plugin response. This field is required.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>apiVersion<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse\nMUST use the same encoding version as the input. Current supported values are:<\/p>\n<ul>\n<li>credentialprovider.kubelet.k8s.io\/v1<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>args<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>Arguments to pass to the command when executing it.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>env<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1-ExecEnvVar\"><code>[]ExecEnvVar<\/code><\/a>\n<\/td>\n<td>\n   <p>Env defines additional environment variables to expose to the process. 
These\nare unioned with the host's environment, as well as variables client-go uses\nto pass argument to the plugin.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExecEnvVar`     {#kubelet-config-k8s-io-v1-ExecEnvVar}\n    \n\n**Appears in:**\n\n- [CredentialProvider](#kubelet-config-k8s-io-v1-CredentialProvider)\n\n\n<p>ExecEnvVar is used for setting environment variables when executing an exec-based\ncredential plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>value<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n ","site":"kubernetes reference","answers_cleaned":"    title  Kubelet Configuration  v1  content type  tool reference package  kubelet config k8s io v1 auto generated  true          Resource Types       CredentialProviderConfig   kubelet config k8s io v1 CredentialProviderConfig          CredentialProviderConfig        kubelet config k8s io v1 CredentialProviderConfig          p CredentialProviderConfig is the configuration containing information about each exec credential provider  Kubelet reads this configuration from disk and enables each provider as specified by the CredentialProvider type   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubelet config k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code CredentialProviderConfig  code   td   tr           tr  td  code providers  code   B  Required   B  br    a href   kubelet config k8s io v1 CredentialProvider   code   CredentialProvider  code   a    td   td      p 
providers is a list of credential provider plugins that will be enabled by the kubelet  Multiple providers may match against a single image  in which case credentials from all providers will be returned to the kubelet  If multiple providers are called for a single image  the results are combined  If providers return overlapping auth keys  the value from the provider earlier in this list is used   p    td    tr    tbody    table       CredentialProvider        kubelet config k8s io v1 CredentialProvider          Appears in        CredentialProviderConfig   kubelet config k8s io v1 CredentialProviderConfig     p CredentialProvider represents an exec plugin to be invoked by the kubelet  The plugin is only invoked when an image being pulled matches the images handled by the plugin  see matchImages    p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p name is the required name of the credential provider  It must match the name of the provider executable as seen by the kubelet  The executable must be in the kubelet s bin directory  set by the   image credential provider bin dir flag    p    td    tr   tr  td  code matchImages  code   B  Required   B  br    code   string  code    td   td      p matchImages is a required list of strings used to match against images in order to determine if this provider should be invoked  If one of the strings matches the requested image from the kubelet  the plugin will be invoked and given a chance to provide credentials  Images are expected to contain the registry domain and URL path   p   p Each entry in matchImages is a pattern which can optionally contain a port and a path  Globs can be used in the domain  but not in the port or the path  Globs are supported as subdomains like   em  k8s io  or  k8s   em  io   and top level domains such as  k8s  em    Matching partial subdomains like  app 
 em  k8s io  is also supported  Each glob can only match a single subdomain segment  so   io does not match   k8s io   p   p A match exists between an image and a matchImage when all of the below are true   p   ul   li Both contain the same number of domain parts and each part matches   li   li The URL path of an imageMatch must be a prefix of the target image URL path   li   li If the imageMatch contains a port  then the port must match in the image as well   li    ul   p Example values of matchImages   p   ul   li 123456789 dkr ecr us east 1 amazonaws com  li   li   azurecr io  li   li gcr io  li   li  em    em  registry io  li   li registry io 8080 path  li    ul    td    tr   tr  td  code defaultCacheDuration  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p defaultCacheDuration is the default duration the plugin will cache credentials in memory if a cache duration is not provided in the plugin response  This field is required   p    td    tr   tr  td  code apiVersion  code   B  Required   B  br    code string  code    td   td      p Required input version of the exec CredentialProviderRequest  The returned CredentialProviderResponse MUST use the same encoding version as the input  Current supported values are   p   ul   li credentialprovider kubelet k8s io v1  li    ul    td    tr   tr  td  code args  code  br    code   string  code    td   td      p Arguments to pass to the command when executing it   p    td    tr   tr  td  code env  code  br    a href   kubelet config k8s io v1 ExecEnvVar   code   ExecEnvVar  code   a    td   td      p Env defines additional environment variables to expose to the process  These are unioned with the host s environment  as well as variables client go uses to pass argument to the plugin   p    td    tr    tbody    table       ExecEnvVar        kubelet config k8s io v1 ExecEnvVar          Appears in        
CredentialProvider   kubelet config k8s io v1 CredentialProvider     p ExecEnvVar is used for setting environment variables when executing an exec based credential plugin   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code value  code   B  Required   B  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr    tbody    table   "}
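The matchImages pattern rules in the record above (globs confined to a single domain segment, equal label counts, path-prefix matching, exact port matching) can be approximated in Python. This is a rough sketch of the documented rules, not the kubelet's implementation: `match_image` and `_split` are invented names, and the path check is a plain string prefix rather than a segment-aware one.

```python
from fnmatch import fnmatchcase
from urllib.parse import urlsplit

def _split(ref):
    # Split "host[:port]/path" into (host, port, path); image references
    # carry no scheme, so prepend "//" to let urlsplit parse the authority.
    parts = urlsplit("//" + ref)
    return parts.hostname, parts.port, parts.path.lstrip("/")

def match_image(pattern, image):
    p_host, p_port, p_path = _split(pattern)
    i_host, i_port, i_path = _split(image)
    p_labels, i_labels = p_host.split("."), i_host.split(".")
    if len(p_labels) != len(i_labels):   # same number of domain parts
        return False
    # A glob matches only within its own subdomain segment: splitting on
    # "." first means "*" can never cross a dot, so *.io != *.k8s.io.
    if not all(fnmatchcase(i, p) for p, i in zip(p_labels, i_labels)):
        return False
    if p_port is not None and p_port != i_port:  # port must match if given
        return False
    return i_path.startswith(p_path)     # pattern path must be a prefix
```

For example, `match_image("*.azurecr.io", "myregistry.azurecr.io/team/app")` holds, while `match_image("*.io", "quay.k8s.io/img")` does not.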
{"questions":"kubernetes reference title Kubelet Configuration v1alpha1 contenttype tool reference Resource Types package kubelet config k8s io v1alpha1 autogenerated true","answers":"---\ntitle: Kubelet Configuration (v1alpha1)\ncontent_type: tool-reference\npackage: kubelet.config.k8s.io\/v1alpha1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [CredentialProviderConfig](#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig)\n  \n\n## `CredentialProviderConfig`     {#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig}\n    \n\n\n<p>CredentialProviderConfig is the configuration containing information about\neach exec credential provider. Kubelet reads this configuration from disk and enables\neach provider as specified by the CredentialProvider type.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubelet.config.k8s.io\/v1alpha1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>CredentialProviderConfig<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>providers<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubelet-config-k8s-io-v1alpha1-CredentialProvider\"><code>[]CredentialProvider<\/code><\/a>\n<\/td>\n<td>\n   <p>providers is a list of credential provider plugins that will be enabled by the kubelet.\nMultiple providers may match against a single image, in which case credentials\nfrom all providers will be returned to the kubelet. If multiple providers are called\nfor a single image, the results are combined. 
If providers return overlapping\nauth keys, the value from the provider earlier in this list is used.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `CredentialProvider`     {#kubelet-config-k8s-io-v1alpha1-CredentialProvider}\n    \n\n**Appears in:**\n\n- [CredentialProviderConfig](#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig)\n\n\n<p>CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only\ninvoked when an image being pulled matches the images handled by the plugin (see matchImages).<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>name is the required name of the credential provider. It must match the name of the\nprovider executable as seen by the kubelet. The executable must be in the kubelet's\nbin directory (set by the --image-credential-provider-bin-dir flag).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>matchImages<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>matchImages is a required list of strings used to match against images in order to\ndetermine if this provider should be invoked. If one of the strings matches the\nrequested image from the kubelet, the plugin will be invoked and given a chance\nto provide credentials. Images are expected to contain the registry domain\nand URL path.<\/p>\n<p>Each entry in matchImages is a pattern which can optionally contain a port and a path.\nGlobs can be used in the domain, but not in the port or the path. Globs are supported\nas subdomains like <code>*.k8s.io<\/code> or <code>k8s.*.io<\/code>, and top-level-domains such as <code>k8s.*<\/code>.\nMatching partial subdomains like <code>app*.k8s.io<\/code> is also supported. 
Each glob can only match\na single subdomain segment, so <code>*.io<\/code> does not match <code>*.k8s.io<\/code>.<\/p>\n<p>A match exists between an image and a matchImage when all of the below are true:<\/p>\n<ul>\n<li>Both contain the same number of domain parts and each part matches.<\/li>\n<li>The URL path of an imageMatch must be a prefix of the target image URL path.<\/li>\n<li>If the imageMatch contains a port, then the port must match in the image as well.<\/li>\n<\/ul>\n<p>Example values of matchImages:<\/p>\n<ul>\n<li><code>123456789.dkr.ecr.us-east-1.amazonaws.com<\/code><\/li>\n<li><code>*.azurecr.io<\/code><\/li>\n<li><code>gcr.io<\/code><\/li>\n<li><code>*.*.registry.io<\/code><\/li>\n<li><code>registry.io:8080\/path<\/code><\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>defaultCacheDuration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>defaultCacheDuration is the default duration the plugin will cache credentials in-memory\nif a cache duration is not provided in the plugin response. This field is required.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>apiVersion<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse\nMUST use the same encoding version as the input. Current supported values are:<\/p>\n<ul>\n<li>credentialprovider.kubelet.k8s.io\/v1alpha1<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>args<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>Arguments to pass to the command when executing it.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>env<\/code><br\/>\n<a href=\"#kubelet-config-k8s-io-v1alpha1-ExecEnvVar\"><code>[]ExecEnvVar<\/code><\/a>\n<\/td>\n<td>\n   <p>Env defines additional environment variables to expose to the process. 
These\nare unioned with the host's environment, as well as variables client-go uses\nto pass argument to the plugin.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExecEnvVar`     {#kubelet-config-k8s-io-v1alpha1-ExecEnvVar}\n    \n\n**Appears in:**\n\n- [CredentialProvider](#kubelet-config-k8s-io-v1alpha1-CredentialProvider)\n\n\n<p>ExecEnvVar is used for setting environment variables when executing an exec-based\ncredential plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>value<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n ","site":"kubernetes reference","answers_cleaned":"    title  Kubelet Configuration  v1alpha1  content type  tool reference package  kubelet config k8s io v1alpha1 auto generated  true          Resource Types       CredentialProviderConfig   kubelet config k8s io v1alpha1 CredentialProviderConfig          CredentialProviderConfig        kubelet config k8s io v1alpha1 CredentialProviderConfig          p CredentialProviderConfig is the configuration containing information about each exec credential provider  Kubelet reads this configuration from disk and enables each provider as specified by the CredentialProvider type   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code kubelet config k8s io v1alpha1  code   td   tr   tr  td  code kind  code  br  string  td  td  code CredentialProviderConfig  code   td   tr           tr  td  code providers  code   B  Required   B  br    a href   kubelet config k8s io v1alpha1 CredentialProvider   code   
CredentialProvider  code   a    td   td      p providers is a list of credential provider plugins that will be enabled by the kubelet  Multiple providers may match against a single image  in which case credentials from all providers will be returned to the kubelet  If multiple providers are called for a single image  the results are combined  If providers return overlapping auth keys  the value from the provider earlier in this list is used   p    td    tr    tbody    table       CredentialProvider        kubelet config k8s io v1alpha1 CredentialProvider          Appears in        CredentialProviderConfig   kubelet config k8s io v1alpha1 CredentialProviderConfig     p CredentialProvider represents an exec plugin to be invoked by the kubelet  The plugin is only invoked when an image being pulled matches the images handled by the plugin  see matchImages    p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p name is the required name of the credential provider  It must match the name of the provider executable as seen by the kubelet  The executable must be in the kubelet s bin directory  set by the   image credential provider bin dir flag    p    td    tr   tr  td  code matchImages  code   B  Required   B  br    code   string  code    td   td      p matchImages is a required list of strings used to match against images in order to determine if this provider should be invoked  If one of the strings matches the requested image from the kubelet  the plugin will be invoked and given a chance to provide credentials  Images are expected to contain the registry domain and URL path   p   p Each entry in matchImages is a pattern which can optionally contain a port and a path  Globs can be used in the domain  but not in the port or the path  Globs are supported as subdomains like  code   k8s io  code  or  code k8s   io  code   and 
top level domains such as  code k8s    code   Matching partial subdomains like  code app  k8s io  code  is also supported  Each glob can only match a single subdomain segment  so  code   io  code  does not match  code   k8s io  code    p   p A match exists between an image and a matchImage when all of the below are true   p   ul   li Both contain the same number of domain parts and each part matches   li   li The URL path of an imageMatch must be a prefix of the target image URL path   li   li If the imageMatch contains a port  then the port must match in the image as well   li    ul   p Example values of matchImages   p   ul   li  code 123456789 dkr ecr us east 1 amazonaws com  code   li   li  code   azurecr io  code   li   li  code gcr io  code   li   li  code     registry io  code   li   li  code registry io 8080 path  code   li    ul    td    tr   tr  td  code defaultCacheDuration  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p defaultCacheDuration is the default duration the plugin will cache credentials in memory if a cache duration is not provided in the plugin response  This field is required   p    td    tr   tr  td  code apiVersion  code   B  Required   B  br    code string  code    td   td      p Required input version of the exec CredentialProviderRequest  The returned CredentialProviderResponse MUST use the same encoding version as the input  Current supported values are   p   ul   li credentialprovider kubelet k8s io v1alpha1  li    ul    td    tr   tr  td  code args  code  br    code   string  code    td   td      p Arguments to pass to the command when executing it   p    td    tr   tr  td  code env  code  br    a href   kubelet config k8s io v1alpha1 ExecEnvVar   code   ExecEnvVar  code   a    td   td      p Env defines additional environment variables to expose to the process  These are unioned with the host s environment  as well as 
variables client go uses to pass argument to the plugin   p    td    tr    tbody    table       ExecEnvVar        kubelet config k8s io v1alpha1 ExecEnvVar          Appears in        CredentialProvider   kubelet config k8s io v1alpha1 CredentialProvider     p ExecEnvVar is used for setting environment variables when executing an exec based credential plugin   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr   tr  td  code value  code   B  Required   B  br    code string  code    td   td      span class  text muted  No description provided   span   td    tr    tbody    table   "}
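The exec protocol implied by the `apiVersion` field above — the kubelet writes a CredentialProviderRequest to the plugin's stdin and expects a CredentialProviderResponse using the same encoding version on stdout — can be sketched as a minimal plugin. The response field names (`cacheKeyType`, `cacheDuration`, `auth`) follow the kubelet credential provider response schema as commonly documented; treat the exact shape as an assumption, and the credentials as placeholders.

```python
# Minimal sketch of an exec credential provider plugin: read a
# CredentialProviderRequest from stdin, answer with a
# CredentialProviderResponse on stdout. Illustrative only; the
# credentials and cache settings below are placeholders.
import json
import sys

def run(stdin=sys.stdin, stdout=sys.stdout):
    request = json.load(stdin)  # CredentialProviderRequest from the kubelet
    response = {
        "kind": "CredentialProviderResponse",
        # The response MUST use the same encoding version as the input.
        "apiVersion": request["apiVersion"],
        "cacheKeyType": "Registry",
        "cacheDuration": "10m",  # overrides the config's defaultCacheDuration
        "auth": {
            request["image"]: {"username": "user", "password": "secret"},
        },
    }
    json.dump(response, stdout)

if __name__ == "__main__":
    run()
```

The kubelet would invoke this executable from its `--image-credential-provider-bin-dir` whenever a pulled image matches one of the plugin's matchImages patterns.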
{"questions":"kubernetes reference contenttype tool reference title Event Rate Limit Configuration v1alpha1 package eventratelimit admission k8s io v1alpha1 Resource Types autogenerated true","answers":"---\ntitle: Event Rate Limit Configuration (v1alpha1)\ncontent_type: tool-reference\npackage: eventratelimit.admission.k8s.io\/v1alpha1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [Configuration](#eventratelimit-admission-k8s-io-v1alpha1-Configuration)\n  \n\n## `Configuration`     {#eventratelimit-admission-k8s-io-v1alpha1-Configuration}\n    \n\n\n<p>Configuration provides configuration for the EventRateLimit admission\ncontroller.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>eventratelimit.admission.k8s.io\/v1alpha1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>Configuration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>limits<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#eventratelimit-admission-k8s-io-v1alpha1-Limit\"><code>[]Limit<\/code><\/a>\n<\/td>\n<td>\n   <p>limits are the limits to place on event queries received.\nLimits can be placed on events received server-wide, per namespace,\nper user, and per source+object.\nAt least one limit is required.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Limit`     {#eventratelimit-admission-k8s-io-v1alpha1-Limit}\n    \n\n**Appears in:**\n\n- [Configuration](#eventratelimit-admission-k8s-io-v1alpha1-Configuration)\n\n\n<p>Limit is the configuration for a particular limit type<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>type<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#eventratelimit-admission-k8s-io-v1alpha1-LimitType\"><code>LimitType<\/code><\/a>\n<\/td>\n<td>\n   <p>type is the type of limit to which this configuration 
applies<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>qps<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>qps is the number of event queries per second that are allowed for this\ntype of limit. The qps and burst fields are used together to determine if\na particular event query is accepted. The qps determines how many queries\nare accepted once the burst amount of queries has been exhausted.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>burst<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>burst is the burst number of event queries that are allowed for this type\nof limit. The qps and burst fields are used together to determine if a\nparticular event query is accepted. The burst determines the maximum size\nof the allowance granted for a particular bucket. For example, if the burst\nis 10 and the qps is 3, then the admission control will accept 10 queries\nbefore blocking any queries. Every second, 3 more queries will be allowed.\nIf some of that allowance is not used, then it will roll over to the next\nsecond, until the maximum allowance of 10 is reached.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cacheSize<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>cacheSize is the size of the LRU cache for this type of limit. If a bucket\nis evicted from the cache, then the allowance for that bucket is reset. 
If\nmore queries are later received for an evicted bucket, then that bucket\nwill re-enter the cache with a clean slate, giving that bucket a full\nallowance of burst queries.<\/p>\n<p>The default cache size is 4096.<\/p>\n<p>If limitType is 'server', then cacheSize is ignored.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `LimitType`     {#eventratelimit-admission-k8s-io-v1alpha1-LimitType}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [Limit](#eventratelimit-admission-k8s-io-v1alpha1-Limit)\n\n\n<p>LimitType is the type of the limit (e.g., per-namespace)<\/p>\n\n\n\n ","site":"kubernetes reference"}
{"questions":"kubernetes reference contenttype tool reference Resource Types title kube controller manager Configuration v1alpha1 package kubecontrollermanager config k8s io v1alpha1 autogenerated true","answers":"---\ntitle: kube-controller-manager Configuration (v1alpha1)\ncontent_type: tool-reference\npackage: kubecontrollermanager.config.k8s.io\/v1alpha1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)\n- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration)\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n  \n    \n    \n\n## `NodeControllerConfiguration`     {#NodeControllerConfiguration}\n    \n\n**Appears in:**\n\n- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)\n\n\n<p>NodeControllerConfiguration contains elements describing NodeController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentNodeSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>ConcurrentNodeSyncs is the number of workers\nconcurrently synchronizing nodes<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ServiceControllerConfiguration`     {#ServiceControllerConfiguration}\n    \n\n**Appears in:**\n\n- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>ServiceControllerConfiguration contains elements describing ServiceController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th 
width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentServiceSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentServiceSyncs is the number of services that are\nallowed to sync concurrently. Larger number = more responsive service\nmanagement, but more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n  \n\n## `CloudControllerManagerConfiguration`     {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration}\n    \n\n\n<p>CloudControllerManagerConfiguration contains elements describing cloud-controller manager.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>cloudcontrollermanager.config.k8s.io\/v1alpha1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>CloudControllerManagerConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>Generic<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration\"><code>GenericControllerManagerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>Generic holds configuration for a generic controller-manager<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>KubeCloudShared<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration\"><code>KubeCloudSharedConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>KubeCloudSharedConfiguration holds configuration for shared related features\nboth in cloud controller manager and kube-controller manager.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NodeController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#NodeControllerConfiguration\"><code>NodeControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>NodeController holds configuration for node controller\nrelated 
features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ServiceController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#ServiceControllerConfiguration\"><code>ServiceControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>ServiceControllerConfiguration holds configuration for ServiceController\nrelated features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NodeStatusUpdateFrequency<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>Webhook<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration\"><code>WebhookConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>Webhook is the configuration for cloud-controller-manager hosted webhooks<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `CloudProviderConfiguration`     {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration}\n    \n\n**Appears in:**\n\n- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration)\n\n\n<p>CloudProviderConfiguration contains basically elements about cloud provider.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>Name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name is the provider for cloud services.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>CloudConfigFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>cloudConfigFile is the path to the cloud provider configuration file.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeCloudSharedConfiguration`     {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration}\n    \n\n**Appears in:**\n\n- 
[CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>KubeCloudSharedConfiguration contains elements shared by both kube-controller manager\nand cloud-controller manager, but not genericconfig.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>CloudProvider<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration\"><code>CloudProviderConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>CloudProviderConfiguration holds configuration for CloudProvider related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ExternalCloudVolumePlugin<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>externalCloudVolumePlugin specifies the plugin to use when cloudProvider is &quot;external&quot;.\nIt is currently used by the in repo cloud providers to handle node and volume control in the KCM.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>UseServiceAccountCredentials<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>useServiceAccountCredentials indicates whether controllers should be run with\nindividual service account credentials.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>AllowUntaggedCloud<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>run with untagged cloud instances<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>RouteReconciliationPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>routeReconciliationPeriod is the period for reconciling routes created for Nodes by cloud provider.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NodeMonitorPeriod<\/code> 
<B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ClusterName<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>clusterName is the instance prefix for the cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ClusterCIDR<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>clusterCIDR is CIDR Range for Pods in cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>AllocateNodeCIDRs<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if\nConfigureCloudRoutes is true, to be set on the cloud provider.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>CIDRAllocatorType<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>CIDRAllocatorType determines what kind of pod CIDR allocator will be used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ConfigureCloudRoutes<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs\nto be configured on the cloud provider.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NodeSyncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>nodeSyncPeriod is the period for syncing nodes from cloud provider. 
Longer\nperiods will result in fewer calls to cloud provider, but may delay addition\nof new nodes to cluster.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `WebhookConfiguration`     {#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration}\n    \n\n**Appears in:**\n\n- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)\n\n\n<p>WebhookConfiguration contains configuration related to\ncloud-controller-manager hosted webhooks<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>Webhooks<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>Webhooks is the list of webhooks to enable or disable\n'*' means &quot;all enabled by default webhooks&quot;\n'foo' means &quot;enable 'foo'&quot;\n'-foo' means &quot;disable 'foo'&quot;\nfirst item for a particular name wins<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n  \n  \n\n## `LeaderMigrationConfiguration`     {#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration}\n    \n\n**Appears in:**\n\n- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)\n\n\n<p>LeaderMigrationConfiguration provides versioned configuration for all migrating leader locks.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>controllermanager.config.k8s.io\/v1alpha1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>LeaderMigrationConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>leaderName<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>LeaderName is the name of the leader election resource that protects the migration\nE.g. 
1-20-KCM-to-1-21-CCM<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resourceLock<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>ResourceLock indicates the resource object type that will be used to lock\nShould be &quot;leases&quot; or &quot;endpoints&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>controllerLeaders<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#controllermanager-config-k8s-io-v1alpha1-ControllerLeaderConfiguration\"><code>[]ControllerLeaderConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>ControllerLeaders contains a list of migrating leader lock configurations<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ControllerLeaderConfiguration`     {#controllermanager-config-k8s-io-v1alpha1-ControllerLeaderConfiguration}\n    \n\n**Appears in:**\n\n- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration)\n\n\n<p>ControllerLeaderConfiguration provides the configuration for a migrating leader lock.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name is the name of the controller being migrated\nE.g. service-controller, route-controller, cloud-node-controller, etc<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>component<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Component is the name of the component in which the controller should be running.\nE.g. 
kube-controller-manager, cloud-controller-manager, etc\nOr '*' meaning the controller can be run under any component that participates in the migration<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `GenericControllerManagerConfiguration`     {#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration}\n    \n\n**Appears in:**\n\n- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>GenericControllerManagerConfiguration holds configuration for a generic controller-manager.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>Port<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>port is the port that the controller-manager's http service runs on.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>Address<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>address is the IP address to serve on (set to 0.0.0.0 for all interfaces).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>MinResyncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>minResyncPeriod is the resync period in reflectors; will be random between\nminResyncPeriod and 2*minResyncPeriod.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ClientConnection<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#ClientConnectionConfiguration\"><code>ClientConnectionConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>ClientConnection specifies the kubeconfig file and client connection\nsettings for the proxy server to use when communicating with the apiserver.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ControllerStartInterval<\/code> <B>[Required]<\/B><br\/>\n<a 
href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>How long to wait between starting controller managers<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>LeaderElection<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#LeaderElectionConfiguration\"><code>LeaderElectionConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>leaderElection defines the configuration of leader election client.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>Controllers<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>Controllers is the list of controllers to enable or disable\n'*' means &quot;all enabled by default controllers&quot;\n'foo' means &quot;enable 'foo'&quot;\n'-foo' means &quot;disable 'foo'&quot;\nfirst item for a particular name wins<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>Debugging<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#DebuggingConfiguration\"><code>DebuggingConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>DebuggingConfiguration holds configuration for Debugging related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>LeaderMigrationEnabled<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>LeaderMigrationEnabled indicates whether Leader Migration should be enabled for the controller manager.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>LeaderMigration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration\"><code>LeaderMigrationConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>LeaderMigration holds the configuration for Leader Migration.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n  \n  \n\n## `KubeControllerManagerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration}\n    \n\n\n<p>KubeControllerManagerConfiguration contains elements describing kube-controller manager.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th 
width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubecontrollermanager.config.k8s.io\/v1alpha1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>KubeControllerManagerConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>Generic<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration\"><code>GenericControllerManagerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>Generic holds configuration for a generic controller-manager<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>KubeCloudShared<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration\"><code>KubeCloudSharedConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>KubeCloudSharedConfiguration holds configuration for shared related features\nboth in cloud controller manager and kube-controller manager.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>AttachDetachController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-AttachDetachControllerConfiguration\"><code>AttachDetachControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>AttachDetachControllerConfiguration holds configuration for\nAttachDetachController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>CSRSigningController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration\"><code>CSRSigningControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>CSRSigningControllerConfiguration holds configuration for\nCSRSigningController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>DaemonSetController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-DaemonSetControllerConfiguration\"><code>DaemonSetControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   
<p>DaemonSetControllerConfiguration holds configuration for DaemonSetController\nrelated features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>DeploymentController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-DeploymentControllerConfiguration\"><code>DeploymentControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>DeploymentControllerConfiguration holds configuration for\nDeploymentController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>StatefulSetController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-StatefulSetControllerConfiguration\"><code>StatefulSetControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>StatefulSetControllerConfiguration holds configuration for\nStatefulSetController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>DeprecatedController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-DeprecatedControllerConfiguration\"><code>DeprecatedControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>DeprecatedControllerConfiguration holds configuration for some deprecated\nfeatures.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>EndpointController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointControllerConfiguration\"><code>EndpointControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>EndpointControllerConfiguration holds configuration for EndpointController\nrelated features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>EndpointSliceController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceControllerConfiguration\"><code>EndpointSliceControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>EndpointSliceControllerConfiguration holds configuration for\nEndpointSliceController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>EndpointSliceMirroringController<\/code> <B>[Required]<\/B><br\/>\n<a 
href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceMirroringControllerConfiguration\"><code>EndpointSliceMirroringControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>EndpointSliceMirroringControllerConfiguration holds configuration for\nEndpointSliceMirroringController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>EphemeralVolumeController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-EphemeralVolumeControllerConfiguration\"><code>EphemeralVolumeControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>EphemeralVolumeControllerConfiguration holds configuration for EphemeralVolumeController\nrelated features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>GarbageCollectorController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration\"><code>GarbageCollectorControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>GarbageCollectorControllerConfiguration holds configuration for\nGarbageCollectorController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>HPAController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-HPAControllerConfiguration\"><code>HPAControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>HPAControllerConfiguration holds configuration for HPAController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>JobController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-JobControllerConfiguration\"><code>JobControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>JobControllerConfiguration holds configuration for JobController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>CronJobController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-CronJobControllerConfiguration\"><code>CronJobControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>CronJobControllerConfiguration holds 
configuration for CronJobController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>LegacySATokenCleaner<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-LegacySATokenCleanerConfiguration\"><code>LegacySATokenCleanerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>LegacySATokenCleanerConfiguration holds configuration for LegacySATokenCleaner related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NamespaceController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-NamespaceControllerConfiguration\"><code>NamespaceControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>NamespaceControllerConfiguration holds configuration for NamespaceController\nrelated features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NodeIPAMController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-NodeIPAMControllerConfiguration\"><code>NodeIPAMControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>NodeIPAMControllerConfiguration holds configuration for NodeIPAMController\nrelated features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NodeLifecycleController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-NodeLifecycleControllerConfiguration\"><code>NodeLifecycleControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>NodeLifecycleControllerConfiguration holds configuration for\nNodeLifecycleController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>PersistentVolumeBinderController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration\"><code>PersistentVolumeBinderControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>PersistentVolumeBinderControllerConfiguration holds configuration for\nPersistentVolumeBinderController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>PodGCController<\/code> <B>[Required]<\/B><br\/>\n<a 
href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-PodGCControllerConfiguration\"><code>PodGCControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>PodGCControllerConfiguration holds configuration for PodGCController\nrelated features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ReplicaSetController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicaSetControllerConfiguration\"><code>ReplicaSetControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>ReplicaSetControllerConfiguration holds configuration for ReplicaSet related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ReplicationController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicationControllerConfiguration\"><code>ReplicationControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>ReplicationControllerConfiguration holds configuration for\nReplicationController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ResourceQuotaController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-ResourceQuotaControllerConfiguration\"><code>ResourceQuotaControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>ResourceQuotaControllerConfiguration holds configuration for\nResourceQuotaController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>SAController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-SAControllerConfiguration\"><code>SAControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>SAControllerConfiguration holds configuration for ServiceAccountController\nrelated features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ServiceController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#ServiceControllerConfiguration\"><code>ServiceControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>ServiceControllerConfiguration holds configuration for ServiceController\nrelated 
features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>TTLAfterFinishedController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-TTLAfterFinishedControllerConfiguration\"><code>TTLAfterFinishedControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>TTLAfterFinishedControllerConfiguration holds configuration for\nTTLAfterFinishedController related features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ValidatingAdmissionPolicyStatusController<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-ValidatingAdmissionPolicyStatusControllerConfiguration\"><code>ValidatingAdmissionPolicyStatusControllerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>ValidatingAdmissionPolicyStatusControllerConfiguration holds configuration for\nValidatingAdmissionPolicyStatusController related features.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AttachDetachControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-AttachDetachControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>AttachDetachControllerConfiguration contains elements describing AttachDetachController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>DisableAttachDetachReconcilerSync<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>Reconciler runs a periodic loop to reconcile the desired state of the world with\nthe actual state of the world by triggering attach detach operations.\nThis flag enables or disables reconcile.  
It is false by default, and thus the reconciler is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ReconcilerSyncLoopPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>ReconcilerSyncLoopPeriod is the amount of time the reconciler sync loop\nwaits between successive executions. It is set to 60 seconds by default.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>disableForceDetachOnTimeout<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>DisableForceDetachOnTimeout disables force detach when the maximum unmount\ntime is exceeded. It is false by default, and thus force detach on unmount is\nenabled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `CSRSigningConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningConfiguration}\n    \n\n**Appears in:**\n\n- [CSRSigningControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration)\n\n\n<p>CSRSigningConfiguration holds information about a particular CSR signer.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>CertFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>certFile is the filename containing a PEM-encoded\nX509 CA certificate used to issue certificates<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>KeyFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>keyFile is the filename containing a PEM-encoded\nRSA or ECDSA private key used to issue certificates<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `CSRSigningControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration}\n    \n\n**Appears in:**\n\n- 
[KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>CSRSigningControllerConfiguration contains elements describing CSRSigningController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ClusterSigningCertFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>clusterSigningCertFile is the filename containing a PEM-encoded\nX509 CA certificate used to issue cluster-scoped certificates<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ClusterSigningKeyFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>clusterSigningKeyFile is the filename containing a PEM-encoded\nRSA or ECDSA private key used to issue cluster-scoped certificates<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>KubeletServingSignerConfiguration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningConfiguration\"><code>CSRSigningConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>kubeletServingSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io\/kubelet-serving signer<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>KubeletClientSignerConfiguration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningConfiguration\"><code>CSRSigningConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>kubeletClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io\/kube-apiserver-client-kubelet<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>KubeAPIServerClientSignerConfiguration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningConfiguration\"><code>CSRSigningConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>kubeAPIServerClientSignerConfiguration holds the certificate and key used to issue certificates for the 
kubernetes.io\/kube-apiserver-client<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>LegacyUnknownSignerConfiguration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningConfiguration\"><code>CSRSigningConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>legacyUnknownSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io\/legacy-unknown<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ClusterSigningDuration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>clusterSigningDuration is the max length of duration signed certificates will be given.\nIndividual CSRs may request shorter certs by setting spec.expirationSeconds.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `CronJobControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-CronJobControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>CronJobControllerConfiguration contains elements describing CronJobController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentCronJobSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentCronJobSyncs is the number of cron job objects that are\nallowed to sync concurrently. 
Larger number = more responsive jobs,\nbut more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `DaemonSetControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-DaemonSetControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>DaemonSetControllerConfiguration contains elements describing DaemonSetController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentDaemonSetSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentDaemonSetSyncs is the number of daemonset objects that are\nallowed to sync concurrently. Larger number = more responsive daemonset,\nbut more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `DeploymentControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-DeploymentControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>DeploymentControllerConfiguration contains elements describing DeploymentController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentDeploymentSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentDeploymentSyncs is the number of deployment objects that are\nallowed to sync concurrently. 
Larger number = more responsive deployments,\nbut more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `DeprecatedControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-DeprecatedControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>DeprecatedControllerConfiguration contains elements that are deprecated.<\/p>\n\n\n\n\n## `EndpointControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>EndpointControllerConfiguration contains elements describing EndpointController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentEndpointSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentEndpointSyncs is the number of endpoint syncing operations\nthat will be done concurrently. 
Larger number = faster endpoint updating,\nbut more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>EndpointUpdatesBatchPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period.\nProcessing of pod changes will be delayed by this duration to join them with potential\nupcoming updates and reduce the overall number of endpoints updates.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `EndpointSliceControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>EndpointSliceControllerConfiguration contains elements describing\nEndpointSliceController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentServiceEndpointSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentServiceEndpointSyncs is the number of service endpoint syncing\noperations that will be done concurrently. Larger number = faster\nendpoint slice updating, but more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>MaxEndpointsPerSlice<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>maxEndpointsPerSlice is the maximum number of endpoints that will be\nadded to an EndpointSlice. 
More endpoints per slice will result in fewer\nand larger endpoint slices, but larger resources.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>EndpointUpdatesBatchPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period.\nProcessing of pod changes will be delayed by this duration to join them with potential\nupcoming updates and reduce the overall number of endpoints updates.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `EndpointSliceMirroringControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceMirroringControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>EndpointSliceMirroringControllerConfiguration contains elements describing\nEndpointSliceMirroringController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>MirroringConcurrentServiceEndpointSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>mirroringConcurrentServiceEndpointSyncs is the number of service endpoint\nsyncing operations that will be done concurrently. 
Larger number = faster\nendpoint slice updating, but more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>MirroringMaxEndpointsPerSubset<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>mirroringMaxEndpointsPerSubset is the maximum number of endpoints that\nwill be mirrored to an EndpointSlice for an EndpointSubset.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>MirroringEndpointUpdatesBatchPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>mirroringEndpointUpdatesBatchPeriod can be used to batch EndpointSlice\nupdates. All updates triggered by EndpointSlice changes will be delayed\nby up to 'mirroringEndpointUpdatesBatchPeriod'. If other addresses in the\nsame Endpoints resource change in that period, they will be batched to a\nsingle EndpointSlice update. Default 0 value means that each Endpoints\nupdate triggers an EndpointSlice update.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `EphemeralVolumeControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-EphemeralVolumeControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>EphemeralVolumeControllerConfiguration contains elements describing EphemeralVolumeController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentEphemeralVolumeSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>ConcurrentEphemeralVolumeSyncs is the number of ephemeral volume syncing operations\nthat will be done concurrently. 
Larger number = faster ephemeral volume updating,\nbut more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `GarbageCollectorControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>GarbageCollectorControllerConfiguration contains elements describing GarbageCollectorController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>EnableGarbageCollector<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enables the generic garbage collector. MUST be synced with the\ncorresponding flag of the kube-apiserver. WARNING: the generic garbage\ncollector is an alpha feature.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ConcurrentGCSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentGCSyncs is the number of garbage collector workers that are\nallowed to sync concurrently.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>GCIgnoredResources<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-GroupResource\"><code>[]GroupResource<\/code><\/a>\n<\/td>\n<td>\n   <p>gcIgnoredResources is the list of GroupResources that garbage collection should ignore.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `GroupResource`     {#kubecontrollermanager-config-k8s-io-v1alpha1-GroupResource}\n    \n\n**Appears in:**\n\n- [GarbageCollectorControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration)\n\n\n<p>GroupResource describes a group resource.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>Group<\/code> 
<B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>group is the group portion of the GroupResource.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>Resource<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>resource is the resource portion of the GroupResource.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `HPAControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-HPAControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>HPAControllerConfiguration contains elements describing HPAController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentHorizontalPodAutoscalerSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>ConcurrentHorizontalPodAutoscalerSyncs is the number of HPA objects that are allowed to sync concurrently.\nLarger number = more responsive HPA processing, but more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>HorizontalPodAutoscalerSyncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>HorizontalPodAutoscalerSyncPeriod is the period for syncing the number of\npods in horizontal pod autoscaler.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>HorizontalPodAutoscalerDownscaleStabilizationWindow<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>HorizontalPodAutoscalerDownscaleStabilizationWindow is a period for which autoscaler will look\nbackwards and not scale down below any recommendation it made during that 
period.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>HorizontalPodAutoscalerTolerance<\/code> <B>[Required]<\/B><br\/>\n<code>float64<\/code>\n<\/td>\n<td>\n   <p>HorizontalPodAutoscalerTolerance is the tolerance for when\nresource usage suggests upscaling\/downscaling.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>HorizontalPodAutoscalerCPUInitializationPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>HorizontalPodAutoscalerCPUInitializationPeriod is the period after pod start when CPU samples\nmight be skipped.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>HorizontalPodAutoscalerInitialReadinessDelay<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>HorizontalPodAutoscalerInitialReadinessDelay is the period after pod start during which readiness\nchanges are treated as readiness being set for the first time. The only effect of this is that\nHPA will disregard CPU samples from unready pods that had their last readiness change during that\nperiod.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `JobControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-JobControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>JobControllerConfiguration contains elements describing JobController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentJobSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentJobSyncs is the number of job objects that are\nallowed to sync concurrently. 
Larger number = more responsive jobs,\nbut more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `LegacySATokenCleanerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-LegacySATokenCleanerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>LegacySATokenCleanerConfiguration contains elements describing LegacySATokenCleaner<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>CleanUpPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>CleanUpPeriod is the period of time since the last usage of an\nauto-generated service account token before it can be deleted.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NamespaceControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-NamespaceControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>NamespaceControllerConfiguration contains elements describing NamespaceController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>NamespaceSyncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>namespaceSyncPeriod is the period for syncing namespace life-cycle\nupdates.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ConcurrentNamespaceSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentNamespaceSyncs is the number of namespace objects that are\nallowed to sync 
concurrently.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NodeIPAMControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeIPAMControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>NodeIPAMControllerConfiguration contains elements describing NodeIpamController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ServiceCIDR<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>serviceCIDR is CIDR Range for Services in cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>SecondaryServiceCIDR<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>secondaryServiceCIDR is CIDR Range for Services in cluster. This is used in dual stack clusters. SecondaryServiceCIDR must be of different IP family than ServiceCIDR<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NodeCIDRMaskSize<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>NodeCIDRMaskSize is the mask size for node cidr in cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NodeCIDRMaskSizeIPv4<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>NodeCIDRMaskSizeIPv4 is the mask size for node cidr in dual-stack cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NodeCIDRMaskSizeIPv6<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>NodeCIDRMaskSizeIPv6 is the mask size for node cidr in dual-stack cluster.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NodeLifecycleControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeLifecycleControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>NodeLifecycleControllerConfiguration contains elements 
describing NodeLifecycleController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>NodeEvictionRate<\/code> <B>[Required]<\/B><br\/>\n<code>float32<\/code>\n<\/td>\n<td>\n   <p>nodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is healthy<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>SecondaryNodeEvictionRate<\/code> <B>[Required]<\/B><br\/>\n<code>float32<\/code>\n<\/td>\n<td>\n   <p>secondaryNodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NodeStartupGracePeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>nodeStartupGracePeriod is the amount of time which we allow starting a node to\nbe unresponsive before marking it unhealthy.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>NodeMonitorGracePeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>nodeMonitorGracePeriod is the amount of time which we allow a running node to be\nunresponsive before marking it unhealthy. 
Must be N times more than kubelet's\nnodeStatusUpdateFrequency, where N means number of retries allowed for kubelet\nto post node status.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>PodEvictionTimeout<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>podEvictionTimeout is the grace period for deleting pods on failed nodes.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>LargeClusterSizeThreshold<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>secondaryNodeEvictionRate is implicitly overridden to 0 for clusters smaller than or equal to largeClusterSizeThreshold<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>UnhealthyZoneThreshold<\/code> <B>[Required]<\/B><br\/>\n<code>float32<\/code>\n<\/td>\n<td>\n   <p>Zone is treated as unhealthy in nodeEvictionRate and secondaryNodeEvictionRate when at least\nunhealthyZoneThreshold (no less than 3) of Nodes in the zone are NotReady<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PersistentVolumeBinderControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>PersistentVolumeBinderControllerConfiguration contains elements describing\nPersistentVolumeBinderController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>PVClaimBinderSyncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>pvClaimBinderSyncPeriod is the period for syncing persistent volumes\nand persistent volume claims.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>VolumeConfiguration<\/code> 
<B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration\"><code>VolumeConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>volumeConfiguration holds configuration for volume related features.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PersistentVolumeRecyclerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeRecyclerConfiguration}\n    \n\n**Appears in:**\n\n- [VolumeConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration)\n\n\n<p>PersistentVolumeRecyclerConfiguration contains elements describing persistent volume plugins.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>MaximumRetry<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>maximumRetry is number of retries the PV recycler will execute on failure to recycle\nPV.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>MinimumTimeoutNFS<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>minimumTimeoutNFS is the minimum ActiveDeadlineSeconds to use for an NFS Recycler\npod.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>PodTemplateFilePathNFS<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>podTemplateFilePathNFS is the file path to a pod definition used as a template for\nNFS persistent volume recycling<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>IncrementTimeoutNFS<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>incrementTimeoutNFS is the increment of time added per Gi to ActiveDeadlineSeconds\nfor an NFS scrubber pod.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>PodTemplateFilePathHostPath<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>podTemplateFilePathHostPath is the file path to a pod definition used as a template for\nHostPath persistent volume recycling. 
This is for development and testing only and\nwill not work in a multi-node cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>MinimumTimeoutHostPath<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>minimumTimeoutHostPath is the minimum ActiveDeadlineSeconds to use for a HostPath\nRecycler pod.  This is for development and testing only and will not work in a multi-node\ncluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>IncrementTimeoutHostPath<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>incrementTimeoutHostPath is the increment of time added per Gi to ActiveDeadlineSeconds\nfor a HostPath scrubber pod.  This is for development and testing only and will not work\nin a multi-node cluster.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PodGCControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-PodGCControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>PodGCControllerConfiguration contains elements describing PodGCController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>TerminatedPodGCThreshold<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>terminatedPodGCThreshold is the number of terminated pods that can exist\nbefore the terminated pod garbage collector starts deleting terminated pods.\nIf &lt;= 0, the terminated pod garbage collector is disabled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ReplicaSetControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicaSetControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>ReplicaSetControllerConfiguration contains elements describing 
ReplicaSetController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentRSSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentRSSyncs is the number of replica sets that are allowed to sync\nconcurrently. Larger number = more responsive replica management, but more\nCPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ReplicationControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicationControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>ReplicationControllerConfiguration contains elements describing ReplicationController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentRCSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentRCSyncs is the number of replication controllers that are\nallowed to sync concurrently. 
Larger number = more responsive replica\nmanagement, but more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ResourceQuotaControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-ResourceQuotaControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>ResourceQuotaControllerConfiguration contains elements describing ResourceQuotaController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ResourceQuotaSyncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>resourceQuotaSyncPeriod is the period for syncing quota usage status\nin the system.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ConcurrentResourceQuotaSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentResourceQuotaSyncs is the number of resource quotas that are\nallowed to sync concurrently. 
Larger number = more responsive quota\nmanagement, but more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `SAControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-SAControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>SAControllerConfiguration contains elements describing ServiceAccountController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ServiceAccountKeyFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>serviceAccountKeyFile is the filename containing a PEM-encoded private RSA key\nused to sign service account tokens.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ConcurrentSATokenSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentSATokenSyncs is the number of service account token syncing operations\nthat will be done concurrently.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>RootCAFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>rootCAFile is the root certificate authority will be included in service\naccount's token secret. 
This must be a valid PEM-encoded CA bundle.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `StatefulSetControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-StatefulSetControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>StatefulSetControllerConfiguration contains elements describing StatefulSetController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentStatefulSetSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentStatefulSetSyncs is the number of statefulset objects that are\nallowed to sync concurrently. Larger number = more responsive statefulsets,\nbut more CPU (and network) load.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `TTLAfterFinishedControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-TTLAfterFinishedControllerConfiguration}\n    \n\n**Appears in:**\n\n- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>TTLAfterFinishedControllerConfiguration contains elements describing TTLAfterFinishedController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentTTLSyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>concurrentTTLSyncs is the number of TTL-after-finished collector workers that are\nallowed to sync concurrently.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ValidatingAdmissionPolicyStatusControllerConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-ValidatingAdmissionPolicyStatusControllerConfiguration}\n    \n\n**Appears in:**\n\n- 
[KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)\n\n\n<p>ValidatingAdmissionPolicyStatusControllerConfiguration contains elements describing ValidatingAdmissionPolicyStatusController.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ConcurrentPolicySyncs<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>ConcurrentPolicySyncs is the number of policy objects that are\nallowed to sync concurrently. Larger number = quicker type checking,\nbut more CPU (and network) load.\nThe default value is 5.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `VolumeConfiguration`     {#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration}\n    \n\n**Appears in:**\n\n- [PersistentVolumeBinderControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration)\n\n\n<p>VolumeConfiguration contains <em>all<\/em> enumerated flags meant to configure all volume\nplugins. From this config, the controller-manager binary will create many instances of\nvolume.VolumeConfig, each containing only the configuration needed for that plugin which\nare then passed to the appropriate plugin. The ControllerManager binary is the only part\nof the code which knows what plugins are supported and which flags correspond to each plugin.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>EnableHostPathProvisioning<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableHostPathProvisioning enables HostPath PV provisioning when running without a\ncloud provider. This allows testing and development of provisioning features. 
HostPath\nprovisioning is not supported in any way, won't work in a multi-node cluster, and\nshould not be used for anything other than testing or development.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>EnableDynamicProvisioning<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableDynamicProvisioning enables the provisioning of volumes when running within an environment\nthat supports dynamic provisioning. Defaults to true.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>PersistentVolumeRecyclerConfiguration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeRecyclerConfiguration\"><code>PersistentVolumeRecyclerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>persistentVolumeRecyclerConfiguration holds configuration for persistent volume plugins.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>FlexVolumePluginDir<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>volumePluginDir is the full path of the directory in which the flex\nvolume plugin should search for additional third party volume plugins<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n ","site":"kubernetes reference","answers_cleaned":"    title  kube controller manager Configuration  v1alpha1  content type  tool reference package  kubecontrollermanager config k8s io v1alpha1 auto generated  true          Resource Types       CloudControllerManagerConfiguration   cloudcontrollermanager config k8s io v1alpha1 CloudControllerManagerConfiguration     LeaderMigrationConfiguration   controllermanager config k8s io v1alpha1 LeaderMigrationConfiguration     KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration                    NodeControllerConfiguration        NodeControllerConfiguration          Appears in        CloudControllerManagerConfiguration   cloudcontrollermanager config k8s io v1alpha1 CloudControllerManagerConfiguration     p NodeControllerConfiguration contains 
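The various <code>Concurrent*Syncs</code> fields above (<code>ConcurrentRSSyncs</code>, <code>ConcurrentResourceQuotaSyncs</code>, <code>ConcurrentStatefulSetSyncs</code>, and so on) all express the same trade-off: a bounded pool of sync workers, where a larger bound gives more responsive reconciliation at the cost of more CPU and network load. A minimal sketch of that pattern, with a placeholder `sync_object` standing in for the controller's real (Go) reconcile work:

```python
from concurrent.futures import ThreadPoolExecutor


def sync_all(objects, concurrent_syncs):
    """Sync objects with at most `concurrent_syncs` in flight at once,
    mirroring fields like concurrentRSSyncs or concurrentStatefulSetSyncs."""

    def sync_object(obj):
        # Placeholder for real reconcile work (API reads/writes).
        return f"synced {obj}"

    # The pool size is the concurrency bound; queued objects wait for a
    # free worker, which is why a larger bound is "more responsive".
    with ThreadPoolExecutor(max_workers=concurrent_syncs) as pool:
        return list(pool.map(sync_object, objects))


print(sync_all(["rs/a", "rs/b", "rs/c"], concurrent_syncs=2))
```

`sync_object` and `sync_all` are illustrative names, not part of any Kubernetes API; only the concurrency-bound shape corresponds to the configuration fields.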
filename containing a PEM encoded X509 CA certificate used to issue cluster scoped certificates  p    td    tr   tr  td  code ClusterSigningKeyFile  code   B  Required   B  br    code string  code    td   td      p clusterSigningCertFile is the filename containing a PEM encoded RSA or ECDSA private key used to issue cluster scoped certificates  p    td    tr   tr  td  code KubeletServingSignerConfiguration  code   B  Required   B  br    a href   kubecontrollermanager config k8s io v1alpha1 CSRSigningConfiguration   code CSRSigningConfiguration  code   a    td   td      p kubeletServingSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes io kubelet serving signer  p    td    tr   tr  td  code KubeletClientSignerConfiguration  code   B  Required   B  br    a href   kubecontrollermanager config k8s io v1alpha1 CSRSigningConfiguration   code CSRSigningConfiguration  code   a    td   td      p kubeletClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes io kube apiserver client kubelet  p    td    tr   tr  td  code KubeAPIServerClientSignerConfiguration  code   B  Required   B  br    a href   kubecontrollermanager config k8s io v1alpha1 CSRSigningConfiguration   code CSRSigningConfiguration  code   a    td   td      p kubeAPIServerClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes io kube apiserver client  p    td    tr   tr  td  code LegacyUnknownSignerConfiguration  code   B  Required   B  br    a href   kubecontrollermanager config k8s io v1alpha1 CSRSigningConfiguration   code CSRSigningConfiguration  code   a    td   td      p legacyUnknownSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes io legacy unknown  p    td    tr   tr  td  code ClusterSigningDuration  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 
Duration  code   a    td   td      p clusterSigningDuration is the max length of duration signed certificates will be given  Individual CSRs may request shorter certs by setting spec expirationSeconds   p    td    tr    tbody    table       CronJobControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 CronJobControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p CronJobControllerConfiguration contains elements describing CrongJob2Controller   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ConcurrentCronJobSyncs  code   B  Required   B  br    code int32  code    td   td      p concurrentCronJobSyncs is the number of job objects that are allowed to sync concurrently  Larger number   more responsive jobs  but more CPU  and network  load   p    td    tr    tbody    table       DaemonSetControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 DaemonSetControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p DaemonSetControllerConfiguration contains elements describing DaemonSetController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ConcurrentDaemonSetSyncs  code   B  Required   B  br    code int32  code    td   td      p concurrentDaemonSetSyncs is the number of daemonset objects that are allowed to sync concurrently  Larger number   more responsive daemonset  but more CPU  and network  load   p    td    tr    tbody    table       DeploymentControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 DeploymentControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager 
config k8s io v1alpha1 KubeControllerManagerConfiguration     p DeploymentControllerConfiguration contains elements describing DeploymentController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ConcurrentDeploymentSyncs  code   B  Required   B  br    code int32  code    td   td      p concurrentDeploymentSyncs is the number of deployment objects that are allowed to sync concurrently  Larger number   more responsive deployments  but more CPU  and network  load   p    td    tr    tbody    table       DeprecatedControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 DeprecatedControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p DeprecatedControllerConfiguration contains elements be deprecated   p          EndpointControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 EndpointControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p EndpointControllerConfiguration contains elements describing EndpointController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ConcurrentEndpointSyncs  code   B  Required   B  br    code int32  code    td   td      p concurrentEndpointSyncs is the number of endpoint syncing operations that will be done concurrently  Larger number   faster endpoint updating  but more CPU  and network  load   p    td    tr   tr  td  code EndpointUpdatesBatchPeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period  Processing of pod 
changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates   p    td    tr    tbody    table       EndpointSliceControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 EndpointSliceControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p EndpointSliceControllerConfiguration contains elements describing EndpointSliceController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ConcurrentServiceEndpointSyncs  code   B  Required   B  br    code int32  code    td   td      p concurrentServiceEndpointSyncs is the number of service endpoint syncing operations that will be done concurrently  Larger number   faster endpoint slice updating  but more CPU  and network  load   p    td    tr   tr  td  code MaxEndpointsPerSlice  code   B  Required   B  br    code int32  code    td   td      p maxEndpointsPerSlice is the maximum number of endpoints that will be added to an EndpointSlice  More endpoints per slice will result in fewer and larger endpoint slices  but larger resources   p    td    tr   tr  td  code EndpointUpdatesBatchPeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period  Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates   p    td    tr    tbody    table       EndpointSliceMirroringControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 EndpointSliceMirroringControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager 
config k8s io v1alpha1 KubeControllerManagerConfiguration     p EndpointSliceMirroringControllerConfiguration contains elements describing EndpointSliceMirroringController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code MirroringConcurrentServiceEndpointSyncs  code   B  Required   B  br    code int32  code    td   td      p mirroringConcurrentServiceEndpointSyncs is the number of service endpoint syncing operations that will be done concurrently  Larger number   faster endpoint slice updating  but more CPU  and network  load   p    td    tr   tr  td  code MirroringMaxEndpointsPerSubset  code   B  Required   B  br    code int32  code    td   td      p mirroringMaxEndpointsPerSubset is the maximum number of endpoints that will be mirrored to an EndpointSlice for an EndpointSubset   p    td    tr   tr  td  code MirroringEndpointUpdatesBatchPeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p mirroringEndpointUpdatesBatchPeriod can be used to batch EndpointSlice updates  All updates triggered by EndpointSlice changes will be delayed by up to  mirroringEndpointUpdatesBatchPeriod   If other addresses in the same Endpoints resource change in that period  they will be batched to a single EndpointSlice update  Default 0 value means that each Endpoints update triggers an EndpointSlice update   p    td    tr    tbody    table       EphemeralVolumeControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 EphemeralVolumeControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p EphemeralVolumeControllerConfiguration contains elements describing EphemeralVolumeController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr 
  thead   tbody           tr  td  code ConcurrentEphemeralVolumeSyncs  code   B  Required   B  br    code int32  code    td   td      p ConcurrentEphemeralVolumeSyncseSyncs is the number of ephemeral volume syncing operations that will be done concurrently  Larger number   faster ephemeral volume updating  but more CPU  and network  load   p    td    tr    tbody    table       GarbageCollectorControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 GarbageCollectorControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p GarbageCollectorControllerConfiguration contains elements describing GarbageCollectorController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code EnableGarbageCollector  code   B  Required   B  br    code bool  code    td   td      p enables the generic garbage collector  MUST be synced with the corresponding flag of the kube apiserver  WARNING  the generic garbage collector is an alpha feature   p    td    tr   tr  td  code ConcurrentGCSyncs  code   B  Required   B  br    code int32  code    td   td      p concurrentGCSyncs is the number of garbage collector workers that are allowed to sync concurrently   p    td    tr   tr  td  code GCIgnoredResources  code   B  Required   B  br    a href   kubecontrollermanager config k8s io v1alpha1 GroupResource   code   GroupResource  code   a    td   td      p gcIgnoredResources is the list of GroupResources that garbage collection should ignore   p    td    tr    tbody    table       GroupResource        kubecontrollermanager config k8s io v1alpha1 GroupResource          Appears in        GarbageCollectorControllerConfiguration   kubecontrollermanager config k8s io v1alpha1 GarbageCollectorControllerConfiguration     p GroupResource describes an group resource   p     table class  table    thead  
tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code Group  code   B  Required   B  br    code string  code    td   td      p group is the group portion of the GroupResource   p    td    tr   tr  td  code Resource  code   B  Required   B  br    code string  code    td   td      p resource is the resource portion of the GroupResource   p    td    tr    tbody    table       HPAControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 HPAControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p HPAControllerConfiguration contains elements describing HPAController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ConcurrentHorizontalPodAutoscalerSyncs  code   B  Required   B  br    code int32  code    td   td      p ConcurrentHorizontalPodAutoscalerSyncs is the number of HPA objects that are allowed to sync concurrently  Larger number   more responsive HPA processing  but more CPU  and network  load   p    td    tr   tr  td  code HorizontalPodAutoscalerSyncPeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p HorizontalPodAutoscalerSyncPeriod is the period for syncing the number of pods in horizontal pod autoscaler   p    td    tr   tr  td  code HorizontalPodAutoscalerDownscaleStabilizationWindow  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p HorizontalPodAutoscalerDowncaleStabilizationWindow is a period for which autoscaler will look backwards and not scale down below any recommendation it made during that period   p    td    tr   tr  td  code HorizontalPodAutoscalerTolerance  code   B  Required   B  br   
 code float64  code    td   td      p HorizontalPodAutoscalerTolerance is the tolerance for when resource usage suggests upscaling downscaling  p    td    tr   tr  td  code HorizontalPodAutoscalerCPUInitializationPeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p HorizontalPodAutoscalerCPUInitializationPeriod is the period after pod start when CPU samples might be skipped   p    td    tr   tr  td  code HorizontalPodAutoscalerInitialReadinessDelay  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p HorizontalPodAutoscalerInitialReadinessDelay is period after pod start during which readiness changes are treated as readiness being set for the first time  The only effect of this is that HPA will disregard CPU samples from unready pods that had last readiness change during that period   p    td    tr    tbody    table       JobControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 JobControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p JobControllerConfiguration contains elements describing JobController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ConcurrentJobSyncs  code   B  Required   B  br    code int32  code    td   td      p concurrentJobSyncs is the number of job objects that are allowed to sync concurrently  Larger number   more responsive jobs  but more CPU  and network  load   p    td    tr    tbody    table       LegacySATokenCleanerConfiguration        kubecontrollermanager config k8s io v1alpha1 LegacySATokenCleanerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config 
k8s io v1alpha1 KubeControllerManagerConfiguration     p LegacySATokenCleanerConfiguration contains elements describing LegacySATokenCleaner  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code CleanUpPeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p CleanUpPeriod is the period of time since the last usage of an auto generated service account token before it can be deleted   p    td    tr    tbody    table       NamespaceControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 NamespaceControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p NamespaceControllerConfiguration contains elements describing NamespaceController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code NamespaceSyncPeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p namespaceSyncPeriod is the period for syncing namespace life cycle updates   p    td    tr   tr  td  code ConcurrentNamespaceSyncs  code   B  Required   B  br    code int32  code    td   td      p concurrentNamespaceSyncs is the number of namespace objects that are allowed to sync concurrently   p    td    tr    tbody    table       NodeIPAMControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 NodeIPAMControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p NodeIPAMControllerConfiguration contains elements describing NodeIpamController   p     table class  table    thead  tr  th width  30   
Field  th  th Description  th   tr   thead   tbody           tr  td  code ServiceCIDR  code   B  Required   B  br    code string  code    td   td      p serviceCIDR is CIDR Range for Services in cluster   p    td    tr   tr  td  code SecondaryServiceCIDR  code   B  Required   B  br    code string  code    td   td      p secondaryServiceCIDR is CIDR Range for Services in cluster  This is used in dual stack clusters  SecondaryServiceCIDR must be of different IP family than ServiceCIDR  p    td    tr   tr  td  code NodeCIDRMaskSize  code   B  Required   B  br    code int32  code    td   td      p NodeCIDRMaskSize is the mask size for node cidr in cluster   p    td    tr   tr  td  code NodeCIDRMaskSizeIPv4  code   B  Required   B  br    code int32  code    td   td      p NodeCIDRMaskSizeIPv4 is the mask size for node cidr in dual stack cluster   p    td    tr   tr  td  code NodeCIDRMaskSizeIPv6  code   B  Required   B  br    code int32  code    td   td      p NodeCIDRMaskSizeIPv6 is the mask size for node cidr in dual stack cluster   p    td    tr    tbody    table       NodeLifecycleControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 NodeLifecycleControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p NodeLifecycleControllerConfiguration contains elements describing NodeLifecycleController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code NodeEvictionRate  code   B  Required   B  br    code float32  code    td   td      p nodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is healthy  p    td    tr   tr  td  code SecondaryNodeEvictionRate  code   B  Required   B  br    code float32  code    td   td      p secondaryNodeEvictionRate is the number of nodes per second on which pods are deleted 
in case of node failure when a zone is unhealthy  p    td    tr   tr  td  code NodeStartupGracePeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p nodeStartupGracePeriod is the amount of time which we allow starting a node to be unresponsive before marking it unhealthy   p    td    tr   tr  td  code NodeMonitorGracePeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p nodeMontiorGracePeriod is the amount of time which we allow a running node to be unresponsive before marking it unhealthy  Must be N times more than kubelet s nodeStatusUpdateFrequency  where N means number of retries allowed for kubelet to post node status   p    td    tr   tr  td  code PodEvictionTimeout  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p podEvictionTimeout is the grace period for deleting pods on failed nodes   p    td    tr   tr  td  code LargeClusterSizeThreshold  code   B  Required   B  br    code int32  code    td   td      p secondaryNodeEvictionRate is implicitly overridden to 0 for clusters smaller than or equal to largeClusterSizeThreshold  p    td    tr   tr  td  code UnhealthyZoneThreshold  code   B  Required   B  br    code float32  code    td   td      p Zone is treated as unhealthy in nodeEvictionRate and secondaryNodeEvictionRate when at least unhealthyZoneThreshold  no less than 3  of Nodes in the zone are NotReady  p    td    tr    tbody    table       PersistentVolumeBinderControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 PersistentVolumeBinderControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p 
PersistentVolumeBinderControllerConfiguration contains elements describing PersistentVolumeBinderController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code PVClaimBinderSyncPeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p pvClaimBinderSyncPeriod is the period for syncing persistent volumes and persistent volume claims   p    td    tr   tr  td  code VolumeConfiguration  code   B  Required   B  br    a href   kubecontrollermanager config k8s io v1alpha1 VolumeConfiguration   code VolumeConfiguration  code   a    td   td      p volumeConfiguration holds configuration for volume related features   p    td    tr    tbody    table       PersistentVolumeRecyclerConfiguration        kubecontrollermanager config k8s io v1alpha1 PersistentVolumeRecyclerConfiguration          Appears in        VolumeConfiguration   kubecontrollermanager config k8s io v1alpha1 VolumeConfiguration     p PersistentVolumeRecyclerConfiguration contains elements describing persistent volume plugins   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code MaximumRetry  code   B  Required   B  br    code int32  code    td   td      p maximumRetry is number of retries the PV recycler will execute on failure to recycle PV   p    td    tr   tr  td  code MinimumTimeoutNFS  code   B  Required   B  br    code int32  code    td   td      p minimumTimeoutNFS is the minimum ActiveDeadlineSeconds to use for an NFS Recycler pod   p    td    tr   tr  td  code PodTemplateFilePathNFS  code   B  Required   B  br    code string  code    td   td      p podTemplateFilePathNFS is the file path to a pod definition used as a template for NFS persistent volume recycling  p    td    tr   tr  td  code IncrementTimeoutNFS  code   B  Required   B  br    code 
int32  code    td   td      p incrementTimeoutNFS is the increment of time added per Gi to ActiveDeadlineSeconds for an NFS scrubber pod   p    td    tr   tr  td  code PodTemplateFilePathHostPath  code   B  Required   B  br    code string  code    td   td      p podTemplateFilePathHostPath is the file path to a pod definition used as a template for HostPath persistent volume recycling  This is for development and testing only and will not work in a multi node cluster   p    td    tr   tr  td  code MinimumTimeoutHostPath  code   B  Required   B  br    code int32  code    td   td      p minimumTimeoutHostPath is the minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod   This is for development and testing only and will not work in a multi node cluster   p    td    tr   tr  td  code IncrementTimeoutHostPath  code   B  Required   B  br    code int32  code    td   td      p incrementTimeoutHostPath is the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod   This is for development and testing only and will not work in a multi node cluster   p    td    tr    tbody    table       PodGCControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 PodGCControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p PodGCControllerConfiguration contains elements describing PodGCController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code TerminatedPodGCThreshold  code   B  Required   B  br    code int32  code    td   td      p terminatedPodGCThreshold is the number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods  If  lt   0  the terminated pod garbage collector is disabled   p    td    tr    tbody    table       ReplicaSetControllerConfiguration        
kubecontrollermanager config k8s io v1alpha1 ReplicaSetControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p ReplicaSetControllerConfiguration contains elements describing ReplicaSetController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ConcurrentRSSyncs  code   B  Required   B  br    code int32  code    td   td      p concurrentRSSyncs is the number of replica sets that are  allowed to sync concurrently  Larger number   more responsive replica  management  but more CPU  and network  load   p    td    tr    tbody    table       ReplicationControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 ReplicationControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p ReplicationControllerConfiguration contains elements describing ReplicationController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ConcurrentRCSyncs  code   B  Required   B  br    code int32  code    td   td      p concurrentRCSyncs is the number of replication controllers that are allowed to sync concurrently  Larger number   more responsive replica management  but more CPU  and network  load   p    td    tr    tbody    table       ResourceQuotaControllerConfiguration        kubecontrollermanager config k8s io v1alpha1 ResourceQuotaControllerConfiguration          Appears in        KubeControllerManagerConfiguration   kubecontrollermanager config k8s io v1alpha1 KubeControllerManagerConfiguration     p ResourceQuotaControllerConfiguration contains elements describing ResourceQuotaController   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead  
- ResourceQuotaSyncPeriod [Required] (meta/v1.Duration, https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration): resourceQuotaSyncPeriod is the period for syncing quota usage status in the system.
- ConcurrentResourceQuotaSyncs [Required] (int32): concurrentResourceQuotaSyncs is the number of resource quotas that are allowed to sync concurrently. Larger number = more responsive quota management, but more CPU (and network) load.

## SAControllerConfiguration {#kubecontrollermanager-config-k8s-io-v1alpha1-SAControllerConfiguration}

Appears in: KubeControllerManagerConfiguration

SAControllerConfiguration contains elements describing ServiceAccountController.

- ServiceAccountKeyFile [Required] (string): serviceAccountKeyFile is the filename containing a PEM-encoded private RSA key used to sign service account tokens.
- ConcurrentSATokenSyncs [Required] (int32): concurrentSATokenSyncs is the number of service account token syncing operations that will be done concurrently.
- RootCAFile [Required] (string): rootCAFile is the root certificate authority that will be included in the service account's token secret. This must be a valid PEM-encoded CA bundle.

## StatefulSetControllerConfiguration {#kubecontrollermanager-config-k8s-io-v1alpha1-StatefulSetControllerConfiguration}

Appears in: KubeControllerManagerConfiguration

StatefulSetControllerConfiguration contains elements describing StatefulSetController.

- ConcurrentStatefulSetSyncs [Required] (int32): concurrentStatefulSetSyncs is the number of statefulset objects that are allowed to sync concurrently. Larger number = more responsive statefulsets, but more CPU (and network) load.

## TTLAfterFinishedControllerConfiguration {#kubecontrollermanager-config-k8s-io-v1alpha1-TTLAfterFinishedControllerConfiguration}

Appears in: KubeControllerManagerConfiguration

TTLAfterFinishedControllerConfiguration contains elements describing TTLAfterFinishedController.

- ConcurrentTTLSyncs [Required] (int32): concurrentTTLSyncs is the number of TTL-after-finished collector workers that are allowed to sync concurrently.

## ValidatingAdmissionPolicyStatusControllerConfiguration {#kubecontrollermanager-config-k8s-io-v1alpha1-ValidatingAdmissionPolicyStatusControllerConfiguration}

Appears in: KubeControllerManagerConfiguration

ValidatingAdmissionPolicyStatusControllerConfiguration contains elements describing ValidatingAdmissionPolicyStatusController.

- ConcurrentPolicySyncs [Required] (int32): ConcurrentPolicySyncs is the number of policy objects that are allowed to sync concurrently. Larger number = quicker type checking, but more CPU (and network) load. The default value is 5.

## VolumeConfiguration {#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration}

Appears in: PersistentVolumeBinderControllerConfiguration

VolumeConfiguration contains *all* enumerated flags meant to configure all volume plugins. From this config, the controller manager binary will create many instances of volume.VolumeConfig, each containing only the configuration needed for that plugin, which are then passed to the appropriate plugin. The ControllerManager binary is the only part of the code which knows what plugins are supported and which flags correspond to each plugin.

- EnableHostPathProvisioning [Required] (bool): enableHostPathProvisioning enables HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.
- EnableDynamicProvisioning [Required] (bool): enableDynamicProvisioning enables the provisioning of volumes when running within an environment that supports dynamic provisioning. Defaults to true.
- PersistentVolumeRecyclerConfiguration [Required] (PersistentVolumeRecyclerConfiguration): persistentVolumeRecyclerConfiguration holds configuration for persistent volume plugins.
- FlexVolumePluginDir [Required] (string): volumePluginDir is the full path of the directory in which the flex volume plugin should search for additional third party volume plugins."}
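The controller settings above each pair a meta/v1.Duration sync period with an int32 concurrency count. As a rough sketch only (this is not Kubernetes source code; `parse_duration` and `validate_quota_controller` are hypothetical helper names), a consumer of such a configuration could parse Go-style duration strings like '5s' or '2h22m' and enforce the documented constraints:

```python
import re

# Go-style duration suffixes accepted by meta/v1.Duration fields
# in these configurations, e.g. '5s', '1m', '2h22m'.
_UNIT_SECONDS = {"h": 3600.0, "m": 60.0, "s": 1.0}

def parse_duration(text):
    """Parse a Go-style duration string into seconds (hypothetical helper)."""
    parts = re.findall(r"(\d+(?:\.\d+)?)(h|m|s)", text)
    # Reject strings with leftover characters the pattern did not consume.
    if not parts or "".join(num + unit for num, unit in parts) != text:
        raise ValueError(f"invalid duration: {text!r}")
    return sum(float(num) * _UNIT_SECONDS[unit] for num, unit in parts)

def validate_quota_controller(cfg):
    """Check the two ResourceQuotaControllerConfiguration fields above."""
    if parse_duration(cfg["resourceQuotaSyncPeriod"]) <= 0:
        raise ValueError("resourceQuotaSyncPeriod must be greater than 0")
    if cfg["concurrentResourceQuotaSyncs"] < 1:
        raise ValueError("concurrentResourceQuotaSyncs must be at least 1")

validate_quota_controller({
    "resourceQuotaSyncPeriod": "5m",    # quota status synced every 5 minutes
    "concurrentResourceQuotaSyncs": 5,  # five quotas may sync in parallel
})
```

A higher concurrency count makes quota status converge faster at the cost of extra CPU and network load, mirroring the trade-off stated in the field descriptions.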
{"questions":"kubernetes reference contenttype tool reference Resource Types package kubeproxy config k8s io v1alpha1 autogenerated true title kube proxy Configuration v1alpha1","answers":"---\ntitle: kube-proxy Configuration (v1alpha1)\ncontent_type: tool-reference\npackage: kubeproxy.config.k8s.io\/v1alpha1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)\n  \n    \n    \n\n## `ClientConnectionConfiguration`     {#ClientConnectionConfiguration}\n    \n\n**Appears in:**\n\n- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)\n\n- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)\n\n- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)\n\n\n<p>ClientConnectionConfiguration contains details for constructing a client.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>kubeconfig<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>kubeconfig is the path to a KubeConfig file.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>acceptContentTypes<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the\ndefault value of 'application\/json'. 
This field will control all connections to the server used by a particular\nclient.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>contentType<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>contentType is the content type used when sending data to the server from this client.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>qps<\/code> <B>[Required]<\/B><br\/>\n<code>float32<\/code>\n<\/td>\n<td>\n   <p>qps controls the number of queries per second allowed for this connection.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>burst<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>burst allows extra queries to accumulate when a client is exceeding its rate.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `DebuggingConfiguration`     {#DebuggingConfiguration}\n    \n\n**Appears in:**\n\n- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)\n\n- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)\n\n\n<p>DebuggingConfiguration holds configuration for Debugging related features.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>enableProfiling<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableProfiling enables profiling via web interface host:port\/debug\/pprof\/<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableContentionProfiling<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableContentionProfiling enables block profiling, if\nenableProfiling is true.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `LeaderElectionConfiguration`     {#LeaderElectionConfiguration}\n    \n\n**Appears in:**\n\n- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration)\n\n- 
[GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration)\n\n\n<p>LeaderElectionConfiguration defines the configuration of leader election\nclients for components that can run with leader election enabled.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>leaderElect<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>leaderElect enables a leader election client to gain leadership\nbefore executing the main loop. Enable this when running replicated\ncomponents for high availability.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>leaseDuration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>leaseDuration is the duration that non-leader candidates will wait\nafter observing a leadership renewal until attempting to acquire\nleadership of a led but unrenewed leader slot. This is effectively the\nmaximum duration that a leader can be stopped before it is replaced\nby another candidate. This is only applicable if leader election is\nenabled.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>renewDeadline<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>renewDeadline is the interval between attempts by the acting master to\nrenew a leadership slot before it stops leading. This must be less\nthan or equal to the lease duration. 
This is only applicable if leader\nelection is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>retryPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>retryPeriod is the duration the clients should wait between attempting\nacquisition and renewal of a leadership. This is only applicable if\nleader election is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resourceLock<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>resourceLock indicates the resource object type that will be used to lock\nduring leader election cycles.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resourceName<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>resourceName indicates the name of resource object that will be used to lock\nduring leader election cycles.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resourceNamespace<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>resourceNamespace indicates the namespace of resource object that will be used to lock\nduring leader election cycles.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n  \n\n## `KubeProxyConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration}\n    \n\n\n<p>KubeProxyConfiguration contains everything necessary to configure the\nKubernetes proxy server.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubeproxy.config.k8s.io\/v1alpha1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>KubeProxyConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>featureGates<\/code> <B>[Required]<\/B><br\/>\n<code>map[string]bool<\/code>\n<\/td>\n<td>\n   <p>featureGates is a map of feature names to bools that enable or disable alpha\/experimental 
features.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clientConnection<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#ClientConnectionConfiguration\"><code>ClientConnectionConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>clientConnection specifies the kubeconfig file and client connection settings for the proxy\nserver to use when communicating with the apiserver.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>logging<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#LoggingConfiguration\"><code>LoggingConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>logging specifies the options of logging.\nRefer to <a href=\"https:\/\/github.com\/kubernetes\/component-base\/blob\/master\/logs\/options.go\">Logs Options<\/a>\nfor more information.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>hostnameOverride<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>hostnameOverride, if non-empty, will be used as the name of the Node that\nkube-proxy is running on. If unset, the node name is assumed to be the same as\nthe node's hostname.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>bindAddress<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>bindAddress can be used to override kube-proxy's idea of what its node's\nprimary IP is. 
Note that the name is a historical artifact, and kube-proxy does\nnot actually bind any sockets to this IP.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>healthzBindAddress<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>healthzBindAddress is the IP address and port for the health check server to\nserve on, defaulting to &quot;0.0.0.0:10256&quot; (if bindAddress is unset or IPv4), or\n&quot;[::]:10256&quot; (if bindAddress is IPv6).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>metricsBindAddress<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>metricsBindAddress is the IP address and port for the metrics server to serve\non, defaulting to &quot;127.0.0.1:10249&quot; (if bindAddress is unset or IPv4), or\n&quot;[::1]:10249&quot; (if bindAddress is IPv6). (Set to &quot;0.0.0.0:10249&quot; \/ &quot;[::]:10249&quot;\nto bind on all interfaces.)<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>bindAddressHardFail<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>bindAddressHardFail, if true, tells kube-proxy to treat failure to bind to a\nport as fatal and exit<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableProfiling<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableProfiling enables profiling via web interface on \/debug\/pprof handler.\nProfiling handlers will be handled by metrics server.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>showHiddenMetricsForVersion<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>showHiddenMetricsForVersion is the version for which you want to show hidden metrics.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>mode<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeproxy-config-k8s-io-v1alpha1-ProxyMode\"><code>ProxyMode<\/code><\/a>\n<\/td>\n<td>\n   <p>mode specifies which proxy mode to use.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>iptables<\/code> <B>[Required]<\/B><br\/>\n<a 
href=\"#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPTablesConfiguration\"><code>KubeProxyIPTablesConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>iptables contains iptables-related configuration options.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ipvs<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPVSConfiguration\"><code>KubeProxyIPVSConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>ipvs contains ipvs-related configuration options.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>nftables<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeproxy-config-k8s-io-v1alpha1-KubeProxyNFTablesConfiguration\"><code>KubeProxyNFTablesConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>nftables contains nftables-related configuration options.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>winkernel<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeproxy-config-k8s-io-v1alpha1-KubeProxyWinkernelConfiguration\"><code>KubeProxyWinkernelConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>winkernel contains winkernel-related configuration options.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>detectLocalMode<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeproxy-config-k8s-io-v1alpha1-LocalMode\"><code>LocalMode<\/code><\/a>\n<\/td>\n<td>\n   <p>detectLocalMode determines mode to use for detecting local traffic, defaults to ClusterCIDR<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>detectLocal<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeproxy-config-k8s-io-v1alpha1-DetectLocalConfiguration\"><code>DetectLocalConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>detectLocal contains optional configuration settings related to DetectLocalMode.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clusterCIDR<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>clusterCIDR is the CIDR range of the pods in the cluster. (For dual-stack\nclusters, this can be a comma-separated dual-stack pair of CIDR ranges.). 
When\nDetectLocalMode is set to ClusterCIDR, kube-proxy will consider\ntraffic to be local if its source IP is in this range. (Otherwise it is not\nused.)<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>nodePortAddresses<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>nodePortAddresses is a list of CIDR ranges that contain valid node IPs, or\nalternatively, the single string 'primary'. If set to a list of CIDRs,\nconnections to NodePort services will only be accepted on node IPs in one of\nthe indicated ranges. If set to 'primary', NodePort services will only be\naccepted on the node's primary IPv4 and\/or IPv6 address according to the Node\nobject. If unset, NodePort connections will be accepted on all local IPs.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>oomScoreAdj<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>oomScoreAdj is the oom-score-adj value for kube-proxy process. Values must be within\nthe range [-1000, 1000]<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>conntrack<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConntrackConfiguration\"><code>KubeProxyConntrackConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>conntrack contains conntrack-related configuration options.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>configSyncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>configSyncPeriod is how often configuration from the apiserver is refreshed. 
Must be greater\nthan 0.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>portRange<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>portRange was previously used to configure the userspace proxy, but is now unused.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>windowsRunAsService<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>windowsRunAsService, if true, enables Windows service control manager API integration.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `DetectLocalConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-DetectLocalConfiguration}\n    \n\n**Appears in:**\n\n- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)\n\n\n<p>DetectLocalConfiguration contains optional settings related to DetectLocalMode option<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>bridgeInterface<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>bridgeInterface is a bridge interface name. When DetectLocalMode is set to\nLocalModeBridgeInterface, kube-proxy will consider traffic to be local if\nit originates from this bridge.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>interfaceNamePrefix<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>interfaceNamePrefix is an interface name prefix. 
When DetectLocalMode is set to\nLocalModeInterfaceNamePrefix, kube-proxy will consider traffic to be local if\nit originates from any interface whose name begins with this prefix.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeProxyConntrackConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConntrackConfiguration}\n    \n\n**Appears in:**\n\n- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)\n\n\n<p>KubeProxyConntrackConfiguration contains conntrack settings for\nthe Kubernetes proxy server.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>maxPerCore<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>maxPerCore is the maximum number of NAT connections to track\nper CPU core (0 to leave the limit as-is and ignore min).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>min<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>min is the minimum value of connect-tracking records to allocate,\nregardless of maxPerCore (set maxPerCore=0 to leave the limit as-is).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tcpEstablishedTimeout<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>tcpEstablishedTimeout is how long an idle TCP connection will be kept open\n(e.g. '2s').  Must be greater than 0 to set.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tcpCloseWaitTimeout<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>tcpCloseWaitTimeout is how long an idle conntrack entry\nin CLOSE_WAIT state will remain in the conntrack\ntable. (e.g. '60s'). 
Must be greater than 0 to set.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tcpBeLiberal<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>tcpBeLiberal, if true, kube-proxy will configure conntrack\nto run in liberal mode for TCP connections and packets with\nout-of-window sequence numbers won't be marked INVALID.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>udpTimeout<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>udpTimeout is how long an idle UDP conntrack entry in\nUNREPLIED state will remain in the conntrack table\n(e.g. '30s'). Must be greater than 0 to set.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>udpStreamTimeout<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>udpStreamTimeout is how long an idle UDP conntrack entry in\nASSURED state will remain in the conntrack table\n(e.g. '300s'). Must be greater than 0 to set.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeProxyIPTablesConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPTablesConfiguration}\n    \n\n**Appears in:**\n\n- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)\n\n\n<p>KubeProxyIPTablesConfiguration contains iptables-related configuration\ndetails for the Kubernetes proxy server.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>masqueradeBit<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>masqueradeBit is the bit of the iptables fwmark space to use for SNAT if using\nthe iptables or ipvs proxy mode. 
Values must be within the range [0, 31].<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>masqueradeAll<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>masqueradeAll tells kube-proxy to SNAT all traffic sent to Service cluster IPs,\nwhen using the iptables or ipvs proxy mode. This may be required with some CNI\nplugins.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>localhostNodePorts<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>localhostNodePorts, if false, tells kube-proxy to disable the legacy behavior\nof allowing NodePort services to be accessed via localhost. (Applies only to\niptables mode and IPv4; localhost NodePorts are never allowed with other proxy\nmodes or with IPv6.)<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>syncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>syncPeriod is an interval (e.g. '5s', '1m', '2h22m') indicating how frequently\nvarious re-synchronizing and cleanup operations are performed. Must be greater\nthan 0.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>minSyncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>minSyncPeriod is the minimum period between iptables rule resyncs (e.g. '5s',\n'1m', '2h22m'). 
A value of 0 means every Service or EndpointSlice change will\nresult in an immediate iptables resync.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeProxyIPVSConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPVSConfiguration}\n    \n\n**Appears in:**\n\n- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)\n\n\n<p>KubeProxyIPVSConfiguration contains ipvs-related configuration\ndetails for the Kubernetes proxy server.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>syncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>syncPeriod is an interval (e.g. '5s', '1m', '2h22m') indicating how frequently\nvarious re-synchronizing and cleanup operations are performed. Must be greater\nthan 0.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>minSyncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>minSyncPeriod is the minimum period between IPVS rule resyncs (e.g. '5s', '1m',\n'2h22m'). 
A value of 0 means every Service or EndpointSlice change will result\nin an immediate IPVS resync.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>scheduler<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>scheduler is the IPVS scheduler to use<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>excludeCIDRs<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>excludeCIDRs is a list of CIDRs which the ipvs proxier should not touch\nwhen cleaning up ipvs services.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>strictARP<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>strictARP configures arp_ignore and arp_announce to avoid answering ARP queries\nfrom kube-ipvs0 interface<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tcpTimeout<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>tcpTimeout is the timeout value used for idle IPVS TCP sessions.\nThe default value is 0, which preserves the current timeout value on the system.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tcpFinTimeout<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>tcpFinTimeout is the timeout value used for IPVS TCP sessions after receiving a FIN.\nThe default value is 0, which preserves the current timeout value on the system.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>udpTimeout<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>udpTimeout is the timeout value used for IPVS UDP packets.\nThe default value is 0, which preserves the current timeout value on the system.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeProxyNFTablesConfiguration`     
{#kubeproxy-config-k8s-io-v1alpha1-KubeProxyNFTablesConfiguration}\n    \n\n**Appears in:**\n\n- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)\n\n\n<p>KubeProxyNFTablesConfiguration contains nftables-related configuration\ndetails for the Kubernetes proxy server.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>masqueradeBit<\/code> <B>[Required]<\/B><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>masqueradeBit is the bit of the iptables fwmark space to use for SNAT if using\nthe nftables proxy mode. Values must be within the range [0, 31].<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>masqueradeAll<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>masqueradeAll tells kube-proxy to SNAT all traffic sent to Service cluster IPs,\nwhen using the nftables mode. This may be required with some CNI plugins.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>syncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>syncPeriod is an interval (e.g. '5s', '1m', '2h22m') indicating how frequently\nvarious re-synchronizing and cleanup operations are performed. Must be greater\nthan 0.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>minSyncPeriod<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>minSyncPeriod is the minimum period between iptables rule resyncs (e.g. '5s',\n'1m', '2h22m'). 
A value of 0 means every Service or EndpointSlice change will\nresult in an immediate iptables resync.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `KubeProxyWinkernelConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyWinkernelConfiguration}\n    \n\n**Appears in:**\n\n- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)\n\n\n<p>KubeProxyWinkernelConfiguration contains Windows\/HNS settings for\nthe Kubernetes proxy server.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>networkName<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>networkName is the name of the network kube-proxy will use\nto create endpoints and policies<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>sourceVip<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>sourceVip is the IP address of the source VIP endpoint used for\nNAT when loadbalancing<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>enableDSR<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>enableDSR tells kube-proxy whether HNS policies should be created\nwith DSR<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>rootHnsEndpointName<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>rootHnsEndpointName is the name of hnsendpoint that is attached to\nl2bridge for root network namespace<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>forwardHealthCheckVip<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>forwardHealthCheckVip forwards service VIP for health check port on\nWindows<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `LocalMode`     {#kubeproxy-config-k8s-io-v1alpha1-LocalMode}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)\n\n\n<p>LocalMode represents modes to detect local traffic from the node<\/p>\n\n\n\n\n## 
`ProxyMode`     {#kubeproxy-config-k8s-io-v1alpha1-ProxyMode}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)\n\n\n<p>ProxyMode represents modes used by the Kubernetes proxy server.<\/p>\n<p>Currently, two modes of proxy are available on Linux platforms: 'iptables' and 'ipvs'.\nOne mode of proxy is available on Windows platforms: 'kernelspace'.<\/p>\n<p>If the proxy mode is unspecified, the best-available proxy mode will be used (currently this\nis <code>iptables<\/code> on Linux and <code>kernelspace<\/code> on Windows). If the selected proxy mode cannot be\nused (due to lack of kernel support, missing userspace components, etc) then kube-proxy\nwill exit with an error.<\/p>\n\n\n\n ","site":"kubernetes reference","answers_cleaned":"
title: kube-proxy Configuration (v1alpha1); content_type: tool-reference; package: kubeproxy.config.k8s.io/v1alpha1; auto_generated: true

## Resource Types

- KubeProxyConfiguration

## ClientConnectionConfiguration {#ClientConnectionConfiguration}

Appears in: KubeProxyConfiguration, KubeSchedulerConfiguration, GenericControllerManagerConfiguration

ClientConnectionConfiguration contains details for constructing a client.

- kubeconfig [Required] (string): kubeconfig is the path to a KubeConfig file.
- acceptContentTypes [Required] (string): acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the default value of application/json. This field will control all connections to the server used by a particular client.
- contentType [Required] (string): contentType is the content type used when sending data to the server from this client.
- qps [Required] (float32): qps controls the number of queries per second allowed for this connection.
- burst [Required] (int32): burst allows extra queries to accumulate when a client is exceeding its rate.

## DebuggingConfiguration {#DebuggingConfiguration}

Appears in: KubeSchedulerConfiguration, GenericControllerManagerConfiguration

DebuggingConfiguration holds configuration for Debugging related features.

- enableProfiling [Required] (bool): enableProfiling enables profiling via web interface host:port/debug/pprof/
- enableContentionProfiling [Required] (bool): enableContentionProfiling enables block profiling, if enableProfiling is true.

## LeaderElectionConfiguration {#LeaderElectionConfiguration}

Appears in: KubeSchedulerConfiguration, GenericControllerManagerConfiguration

LeaderElectionConfiguration defines the configuration of leader election clients for components that can run with leader election enabled.

- leaderElect [Required] (bool): leaderElect enables a leader election client to gain leadership before executing the main loop. Enable this when running replicated components for high availability.
- leaseDuration [Required] (meta/v1.Duration): leaseDuration is the duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
- renewDeadline [Required] (meta/v1.Duration): renewDeadline is the interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
- retryPeriod [Required] (meta/v1.Duration): retryPeriod is the duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.
- resourceLock [Required] (string): resourceLock indicates the resource object type that will be used to lock during leader election cycles.
- resourceName [Required] (string): resourceName indicates the name of resource object that will be used to lock during leader election cycles.
- resourceNamespace [Required] (string): resourceNamespace indicates the namespace of resource object that will be used to lock during leader election cycles.

## KubeProxyConfiguration {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration}

KubeProxyConfiguration contains everything necessary to configure the Kubernetes proxy server.

- apiVersion (string): kubeproxy.config.k8s.io/v1alpha1
- kind (string): KubeProxyConfiguration
- featureGates [Required] (map[string]bool): featureGates is a map of feature names to bools that enable or disable alpha/experimental features.
- clientConnection [Required] (ClientConnectionConfiguration): clientConnection specifies the kubeconfig file and client connection settings for the proxy server to use when communicating with the apiserver.
- logging [Required] (LoggingConfiguration): logging specifies the options of logging. Refer to Logs Options (https://github.com/kubernetes/component-base/blob/master/logs/options.go) for more information.
- hostnameOverride [Required] (string): 
p hostnameOverride  if non empty  will be used as the name of the Node that kube proxy is running on  If unset  the node name is assumed to be the same as the node s hostname   p    td    tr   tr  td  code bindAddress  code   B  Required   B  br    code string  code    td   td      p bindAddress can be used to override kube proxy s idea of what its node s primary IP is  Note that the name is a historical artifact  and kube proxy does not actually bind any sockets to this IP   p    td    tr   tr  td  code healthzBindAddress  code   B  Required   B  br    code string  code    td   td      p healthzBindAddress is the IP address and port for the health check server to serve on  defaulting to  quot 0 0 0 0 10256 quot   if bindAddress is unset or IPv4   or  quot      10256 quot   if bindAddress is IPv6    p    td    tr   tr  td  code metricsBindAddress  code   B  Required   B  br    code string  code    td   td      p metricsBindAddress is the IP address and port for the metrics server to serve on  defaulting to  quot 127 0 0 1 10249 quot   if bindAddress is unset or IPv4   or  quot    1  10249 quot   if bindAddress is IPv6    Set to  quot 0 0 0 0 10249 quot     quot      10249 quot  to bind on all interfaces    p    td    tr   tr  td  code bindAddressHardFail  code   B  Required   B  br    code bool  code    td   td      p bindAddressHardFail  if true  tells kube proxy to treat failure to bind to a port as fatal and exit  p    td    tr   tr  td  code enableProfiling  code   B  Required   B  br    code bool  code    td   td      p enableProfiling enables profiling via web interface on  debug pprof handler  Profiling handlers will be handled by metrics server   p    td    tr   tr  td  code showHiddenMetricsForVersion  code   B  Required   B  br    code string  code    td   td      p showHiddenMetricsForVersion is the version for which you want to show hidden metrics   p    td    tr   tr  td  code mode  code   B  Required   B  br    a href   kubeproxy config k8s io 
v1alpha1 ProxyMode   code ProxyMode  code   a    td   td      p mode specifies which proxy mode to use   p    td    tr   tr  td  code iptables  code   B  Required   B  br    a href   kubeproxy config k8s io v1alpha1 KubeProxyIPTablesConfiguration   code KubeProxyIPTablesConfiguration  code   a    td   td      p iptables contains iptables related configuration options   p    td    tr   tr  td  code ipvs  code   B  Required   B  br    a href   kubeproxy config k8s io v1alpha1 KubeProxyIPVSConfiguration   code KubeProxyIPVSConfiguration  code   a    td   td      p ipvs contains ipvs related configuration options   p    td    tr   tr  td  code nftables  code   B  Required   B  br    a href   kubeproxy config k8s io v1alpha1 KubeProxyNFTablesConfiguration   code KubeProxyNFTablesConfiguration  code   a    td   td      p nftables contains nftables related configuration options   p    td    tr   tr  td  code winkernel  code   B  Required   B  br    a href   kubeproxy config k8s io v1alpha1 KubeProxyWinkernelConfiguration   code KubeProxyWinkernelConfiguration  code   a    td   td      p winkernel contains winkernel related configuration options   p    td    tr   tr  td  code detectLocalMode  code   B  Required   B  br    a href   kubeproxy config k8s io v1alpha1 LocalMode   code LocalMode  code   a    td   td      p detectLocalMode determines mode to use for detecting local traffic  defaults to ClusterCIDR  p    td    tr   tr  td  code detectLocal  code   B  Required   B  br    a href   kubeproxy config k8s io v1alpha1 DetectLocalConfiguration   code DetectLocalConfiguration  code   a    td   td      p detectLocal contains optional configuration settings related to DetectLocalMode   p    td    tr   tr  td  code clusterCIDR  code   B  Required   B  br    code string  code    td   td      p clusterCIDR is the CIDR range of the pods in the cluster   For dual stack clusters  this can be a comma separated dual stack pair of CIDR ranges    When DetectLocalMode is set to 
ClusterCIDR  kube proxy will consider traffic to be local if its source IP is in this range   Otherwise it is not used    p    td    tr   tr  td  code nodePortAddresses  code   B  Required   B  br    code   string  code    td   td      p nodePortAddresses is a list of CIDR ranges that contain valid node IPs  or alternatively  the single string  primary   If set to a list of CIDRs  connections to NodePort services will only be accepted on node IPs in one of the indicated ranges  If set to  primary   NodePort services will only be accepted on the node s primary IPv4 and or IPv6 address according to the Node object  If unset  NodePort connections will be accepted on all local IPs   p    td    tr   tr  td  code oomScoreAdj  code   B  Required   B  br    code int32  code    td   td      p oomScoreAdj is the oom score adj value for kube proxy process  Values must be within the range   1000  1000   p    td    tr   tr  td  code conntrack  code   B  Required   B  br    a href   kubeproxy config k8s io v1alpha1 KubeProxyConntrackConfiguration   code KubeProxyConntrackConfiguration  code   a    td   td      p conntrack contains conntrack related configuration options   p    td    tr   tr  td  code configSyncPeriod  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p configSyncPeriod is how often configuration from the apiserver is refreshed  Must be greater than 0   p    td    tr   tr  td  code portRange  code   B  Required   B  br    code string  code    td   td      p portRange was previously used to configure the userspace proxy  but is now unused   p    td    tr   tr  td  code windowsRunAsService  code   B  Required   B  br    code bool  code    td   td      p windowsRunAsService  if true  enables Windows service control manager API integration   p    td    tr    tbody    table       DetectLocalConfiguration        kubeproxy config k8s io v1alpha1 
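To show how the KubeProxyConfiguration fields above fit together, here is a minimal illustrative config file sketch. The field names come from the reference; the concrete values (paths, CIDRs, timeouts) are example assumptions only, not recommended defaults.

```yaml
# Illustrative KubeProxyConfiguration sketch.
# Field names are from the reference above; all values are example assumptions.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clientConnection:
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf   # hypothetical path
mode: iptables                    # or "ipvs" / "nftables"; "kernelspace" on Windows
bindAddress: 0.0.0.0
healthzBindAddress: 0.0.0.0:10256 # the documented IPv4 default
metricsBindAddress: 127.0.0.1:10249
detectLocalMode: ClusterCIDR      # the default local-traffic detection mode
clusterCIDR: 10.244.0.0/16        # example pod CIDR; used by ClusterCIDR mode
oomScoreAdj: -999
configSyncPeriod: 15m0s           # must be greater than 0
```

A file like this is typically passed to kube-proxy via its `--config` flag.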
## `DetectLocalConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-DetectLocalConfiguration}

**Appears in:**

- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)

DetectLocalConfiguration contains optional settings related to the DetectLocalMode option.

| Field | Description |
|-------|-------------|
| `bridgeInterface` **[Required]**<br/>`string` | bridgeInterface is a bridge interface name. When DetectLocalMode is set to LocalModeBridgeInterface, kube-proxy will consider traffic to be local if it originates from this bridge. |
| `interfaceNamePrefix` **[Required]**<br/>`string` | interfaceNamePrefix is an interface name prefix. When DetectLocalMode is set to LocalModeInterfaceNamePrefix, kube-proxy will consider traffic to be local if it originates from any interface whose name begins with this prefix. |

## `KubeProxyConntrackConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConntrackConfiguration}

**Appears in:**

- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)

KubeProxyConntrackConfiguration contains conntrack settings for the Kubernetes proxy server.

| Field | Description |
|-------|-------------|
| `maxPerCore` **[Required]**<br/>`int32` | maxPerCore is the maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore min). |
| `min` **[Required]**<br/>`int32` | min is the minimum value of connect-tracking records to allocate, regardless of maxPerCore (set maxPerCore=0 to leave the limit as-is). |
| `tcpEstablishedTimeout` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | tcpEstablishedTimeout is how long an idle TCP connection will be kept open (e.g. '2s'). Must be greater than 0 to set. |
| `tcpCloseWaitTimeout` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | tcpCloseWaitTimeout is how long an idle conntrack entry in CLOSE_WAIT state will remain in the conntrack table (e.g. '60s'). Must be greater than 0 to set. |
| `tcpBeLiberal` **[Required]**<br/>`bool` | tcpBeLiberal, if true, kube-proxy will configure conntrack to run in liberal mode for TCP connections, and packets with out-of-window sequence numbers won't be marked INVALID. |
| `udpTimeout` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | udpTimeout is how long an idle UDP conntrack entry in UNREPLIED state will remain in the conntrack table (e.g. '30s'). Must be greater than 0 to set. |
| `udpStreamTimeout` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | udpStreamTimeout is how long an idle UDP conntrack entry in ASSURED state will remain in the conntrack table (e.g. '300s'). Must be greater than 0 to set. |

## `KubeProxyIPTablesConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPTablesConfiguration}

**Appears in:**

- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)

KubeProxyIPTablesConfiguration contains iptables-related configuration details for the Kubernetes proxy server.

| Field | Description |
|-------|-------------|
| `masqueradeBit` **[Required]**<br/>`int32` | masqueradeBit is the bit of the iptables fwmark space to use for SNAT if using the iptables or ipvs proxy mode. Values must be within the range [0, 31]. |
| `masqueradeAll` **[Required]**<br/>`bool` | masqueradeAll tells kube-proxy to SNAT all traffic sent to Service cluster IPs, when using the iptables or ipvs proxy mode. This may be required with some CNI plugins. |
| `localhostNodePorts` **[Required]**<br/>`bool` | localhostNodePorts, if false, tells kube-proxy to disable the legacy behavior of allowing NodePort services to be accessed via localhost. (Applies only to iptables mode and IPv4; localhost NodePorts are never allowed with other proxy modes or with IPv6.) |
| `syncPeriod` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | syncPeriod is an interval (e.g. '5s', '1m', '2h22m') indicating how frequently various re-synchronizing and cleanup operations are performed. Must be greater than 0. |
| `minSyncPeriod` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | minSyncPeriod is the minimum period between iptables rule resyncs (e.g. '5s', '1m', '2h22m'). A value of 0 means every Service or EndpointSlice change will result in an immediate iptables resync. |

## `KubeProxyIPVSConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyIPVSConfiguration}

**Appears in:**

- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)

KubeProxyIPVSConfiguration contains ipvs-related configuration details for the Kubernetes proxy server.

| Field | Description |
|-------|-------------|
| `syncPeriod` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | syncPeriod is an interval (e.g. '5s', '1m', '2h22m') indicating how frequently various re-synchronizing and cleanup operations are performed. Must be greater than 0. |
| `minSyncPeriod` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | minSyncPeriod is the minimum period between IPVS rule resyncs (e.g. '5s', '1m', '2h22m'). A value of 0 means every Service or EndpointSlice change will result in an immediate IPVS resync. |
| `scheduler` **[Required]**<br/>`string` | scheduler is the IPVS scheduler to use. |
| `excludeCIDRs` **[Required]**<br/>`[]string` | excludeCIDRs is a list of CIDRs which the ipvs proxier should not touch when cleaning up ipvs services. |
| `strictARP` **[Required]**<br/>`bool` | strictARP configures arp_ignore and arp_announce to avoid answering ARP queries from the kube-ipvs0 interface. |
| `tcpTimeout` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | tcpTimeout is the timeout value used for idle IPVS TCP sessions. The default value is 0, which preserves the current timeout value on the system. |
| `tcpFinTimeout` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | tcpFinTimeout is the timeout value used for IPVS TCP sessions after receiving a FIN. The default value is 0, which preserves the current timeout value on the system. |
| `udpTimeout` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | udpTimeout is the timeout value used for IPVS UDP packets. The default value is 0, which preserves the current timeout value on the system. |

## `KubeProxyNFTablesConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyNFTablesConfiguration}

**Appears in:**

- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)

KubeProxyNFTablesConfiguration contains nftables-related configuration details for the Kubernetes proxy server.

| Field | Description |
|-------|-------------|
| `masqueradeBit` **[Required]**<br/>`int32` | masqueradeBit is the bit of the iptables fwmark space to use for SNAT if using the nftables proxy mode. Values must be within the range [0, 31]. |
| `masqueradeAll` **[Required]**<br/>`bool` | masqueradeAll tells kube-proxy to SNAT all traffic sent to Service cluster IPs, when using the nftables mode. This may be required with some CNI plugins. |
| `syncPeriod` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | syncPeriod is an interval (e.g. '5s', '1m', '2h22m') indicating how frequently various re-synchronizing and cleanup operations are performed. Must be greater than 0. |
| `minSyncPeriod` **[Required]**<br/>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | minSyncPeriod is the minimum period between iptables rule resyncs (e.g. '5s', '1m', '2h22m'). A value of 0 means every Service or EndpointSlice change will result in an immediate iptables resync. |

## `KubeProxyWinkernelConfiguration`     {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyWinkernelConfiguration}

**Appears in:**

- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)

KubeProxyWinkernelConfiguration contains Windows/HNS settings for the Kubernetes proxy server.

| Field | Description |
|-------|-------------|
| `networkName` **[Required]**<br/>`string` | networkName is the name of the network kube-proxy will use to create endpoints and policies. |
| `sourceVip` **[Required]**<br/>`string` | sourceVip is the IP address of the source VIP endpoint used for NAT when loadbalancing. |
| `enableDSR` **[Required]**<br/>`bool` | enableDSR tells kube-proxy whether HNS policies should be created with DSR. |
| `rootHnsEndpointName` **[Required]**<br/>`string` | rootHnsEndpointName is the name of the hnsendpoint that is attached to the l2bridge for the root network namespace. |
| `forwardHealthCheckVip` **[Required]**<br/>`bool` | forwardHealthCheckVip forwards the service VIP for the health check port on Windows. |

## `LocalMode`     {#kubeproxy-config-k8s-io-v1alpha1-LocalMode}

(Alias of `string`)

**Appears in:**

- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)

LocalMode represents modes to detect local traffic from the node.

## `ProxyMode`     {#kubeproxy-config-k8s-io-v1alpha1-ProxyMode}

(Alias of `string`)

**Appears in:**

- [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration)

ProxyMode represents modes used by the Kubernetes proxy server.

Currently, two modes of proxy are available on Linux platforms: 'iptables' and 'ipvs'. One mode of proxy is available on Windows platforms: 'kernelspace'.

If the proxy mode is unspecified, the best-available proxy mode will be used (currently this is `iptables` on Linux and `kernelspace` on Windows). If the selected proxy mode cannot be used (due to lack of kernel support, missing userspace components, etc.) then kube-proxy will exit with an error.
"}
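To make the conntrack and IPVS sub-structures concrete, the following fragment sketches how they nest inside a KubeProxyConfiguration. The field names are from the reference; every value is an illustrative assumption chosen for the example, not a recommendation.

```yaml
# Example conntrack/ipvs tuning inside KubeProxyConfiguration.
# All values below are illustrative assumptions.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
conntrack:
  maxPerCore: 32768          # 0 would leave the limit as-is and ignore "min"
  min: 131072
  tcpEstablishedTimeout: 24h0m0s
  tcpCloseWaitTimeout: 1h0m0s
ipvs:
  scheduler: rr              # IPVS scheduler to use
  strictARP: true            # avoid answering ARP queries from kube-ipvs0
  syncPeriod: 30s            # must be greater than 0
  minSyncPeriod: 5s          # 0 would resync on every Service/EndpointSlice change
  excludeCIDRs:
    - 192.0.2.0/24           # example CIDR the ipvs proxier should not clean up
```

Note that a 0 value for the IPVS timeouts (`tcpTimeout`, `tcpFinTimeout`, `udpTimeout`, omitted here) preserves whatever timeout is currently set on the system.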
{"questions":"kubernetes reference package admission k8s io v1 contenttype tool reference title kube apiserver Admission v1 Resource Types autogenerated true","answers":"---\ntitle: kube-apiserver Admission (v1)\ncontent_type: tool-reference\npackage: admission.k8s.io\/v1\nauto_generated: true\n---\n\n\n## Resource Types \n\n\n- [AdmissionReview](#admission-k8s-io-v1-AdmissionReview)\n  \n\n## `AdmissionReview`     {#admission-k8s-io-v1-AdmissionReview}\n    \n\n\n<p>AdmissionReview describes an admission review request\/response.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>admission.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>AdmissionReview<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>request<\/code><br\/>\n<a href=\"#admission-k8s-io-v1-AdmissionRequest\"><code>AdmissionRequest<\/code><\/a>\n<\/td>\n<td>\n   <p>Request describes the attributes for the admission request.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>response<\/code><br\/>\n<a href=\"#admission-k8s-io-v1-AdmissionResponse\"><code>AdmissionResponse<\/code><\/a>\n<\/td>\n<td>\n   <p>Response describes the attributes for the admission response.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AdmissionRequest`     {#admission-k8s-io-v1-AdmissionRequest}\n    \n\n**Appears in:**\n\n- [AdmissionReview](#admission-k8s-io-v1-AdmissionReview)\n\n\n<p>AdmissionRequest describes the admission.Attributes for the admission request.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>uid<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/types#UID\"><code>k8s.io\/apimachinery\/pkg\/types.UID<\/code><\/a>\n<\/td>\n<td>\n   <p>UID is an identifier for the individual request\/response. 
It allows us to distinguish instances of requests which are\notherwise identical (parallel requests, requests when earlier requests did not modify etc)\nThe UID is meant to track the round trip (request\/response) between the KAS and the WebHook, not the user request.\nIt is suitable for correlating log entries between the webhook and apiserver, for either auditing or debugging.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kind<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#GroupVersionKind\"><code>meta\/v1.GroupVersionKind<\/code><\/a>\n<\/td>\n<td>\n   <p>Kind is the fully-qualified type of object being submitted (for example, v1.Pod or autoscaling.v1.Scale)<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>resource<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#GroupVersionResource\"><code>meta\/v1.GroupVersionResource<\/code><\/a>\n<\/td>\n<td>\n   <p>Resource is the fully-qualified resource being requested (for example, v1.pods)<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>subResource<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>SubResource is the subresource being requested, if any (for example, &quot;status&quot; or &quot;scale&quot;)<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>requestKind<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#GroupVersionKind\"><code>meta\/v1.GroupVersionKind<\/code><\/a>\n<\/td>\n<td>\n   <p>RequestKind is the fully-qualified type of the original API request (for example, v1.Pod or autoscaling.v1.Scale).\nIf this is specified and differs from the value in &quot;kind&quot;, an equivalent match and conversion was performed.<\/p>\n<p>For example, if deployments can be modified via apps\/v1 and apps\/v1beta1, and a webhook registered a rule of\n<code>apiGroups:[&quot;apps&quot;], apiVersions:[&quot;v1&quot;], resources: [&quot;deployments&quot;]<\/code> and <code>matchPolicy: Equivalent<\/code>,\nan 
API request to apps\/v1beta1 deployments would be converted and sent to the webhook\nwith <code>kind: {group:&quot;apps&quot;, version:&quot;v1&quot;, kind:&quot;Deployment&quot;}<\/code> (matching the rule the webhook registered for),\nand <code>requestKind: {group:&quot;apps&quot;, version:&quot;v1beta1&quot;, kind:&quot;Deployment&quot;}<\/code> (indicating the kind of the original API request).<\/p>\n<p>See documentation for the &quot;matchPolicy&quot; field in the webhook configuration type for more details.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>requestResource<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#GroupVersionResource\"><code>meta\/v1.GroupVersionResource<\/code><\/a>\n<\/td>\n<td>\n   <p>RequestResource is the fully-qualified resource of the original API request (for example, v1.pods).\nIf this is specified and differs from the value in &quot;resource&quot;, an equivalent match and conversion was performed.<\/p>\n<p>For example, if deployments can be modified via apps\/v1 and apps\/v1beta1, and a webhook registered a rule of\n<code>apiGroups:[&quot;apps&quot;], apiVersions:[&quot;v1&quot;], resources: [&quot;deployments&quot;]<\/code> and <code>matchPolicy: Equivalent<\/code>,\nan API request to apps\/v1beta1 deployments would be converted and sent to the webhook\nwith <code>resource: {group:&quot;apps&quot;, version:&quot;v1&quot;, resource:&quot;deployments&quot;}<\/code> (matching the resource the webhook registered for),\nand <code>requestResource: {group:&quot;apps&quot;, version:&quot;v1beta1&quot;, resource:&quot;deployments&quot;}<\/code> (indicating the resource of the original API request).<\/p>\n<p>See documentation for the &quot;matchPolicy&quot; field in the webhook configuration type.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>requestSubResource<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>RequestSubResource is the name of the subresource of the original API request, if any (for example, 
&quot;status&quot; or &quot;scale&quot;)\nIf this is specified and differs from the value in &quot;subResource&quot;, an equivalent match and conversion was performed.\nSee documentation for the &quot;matchPolicy&quot; field in the webhook configuration type.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>name<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name is the name of the object as presented in the request.  On a CREATE operation, the client may omit name and\nrely on the server to generate the name.  If that is the case, this field will contain an empty string.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>namespace<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Namespace is the namespace associated with the request (if any).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>operation<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#admission-k8s-io-v1-Operation\"><code>Operation<\/code><\/a>\n<\/td>\n<td>\n   <p>Operation is the operation being performed. This may be different than the operation\nrequested. e.g. a patch can result in either a CREATE or UPDATE Operation.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>userInfo<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#userinfo-v1-authentication-k8s-io\"><code>authentication\/v1.UserInfo<\/code><\/a>\n<\/td>\n<td>\n   <p>UserInfo is information about the requesting user<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>object<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/runtime\/#RawExtension\"><code>k8s.io\/apimachinery\/pkg\/runtime.RawExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>Object is the object from the incoming request.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>oldObject<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/runtime\/#RawExtension\"><code>k8s.io\/apimachinery\/pkg\/runtime.RawExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>OldObject is the existing object. 
Only populated for DELETE and UPDATE requests.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dryRun<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>DryRun indicates that modifications will definitely not be persisted for this request.\nDefaults to false.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>options<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/runtime\/#RawExtension\"><code>k8s.io\/apimachinery\/pkg\/runtime.RawExtension<\/code><\/a>\n<\/td>\n<td>\n   <p>Options is the operation option structure of the operation being performed.\ne.g. <code>meta.k8s.io\/v1.DeleteOptions<\/code> or <code>meta.k8s.io\/v1.CreateOptions<\/code>. This may be\ndifferent than the options the caller provided. e.g. for a patch request the performed\nOperation might be a CREATE, in which case the Options will a\n<code>meta.k8s.io\/v1.CreateOptions<\/code> even though the caller provided <code>meta.k8s.io\/v1.PatchOptions<\/code>.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AdmissionResponse`     {#admission-k8s-io-v1-AdmissionResponse}\n    \n\n**Appears in:**\n\n- [AdmissionReview](#admission-k8s-io-v1-AdmissionReview)\n\n\n<p>AdmissionResponse describes an admission response.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>uid<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/types#UID\"><code>k8s.io\/apimachinery\/pkg\/types.UID<\/code><\/a>\n<\/td>\n<td>\n   <p>UID is an identifier for the individual request\/response.\nThis must be copied over from the corresponding AdmissionRequest.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>allowed<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>Allowed indicates whether or not the admission request was permitted.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>status<\/code><br\/>\n<a 
href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#status-v1-meta\"><code>meta\/v1.Status<\/code><\/a>\n<\/td>\n<td>\n   <p>Result contains extra details into why an admission request was denied.\nThis field IS NOT consulted in any way if &quot;Allowed&quot; is &quot;true&quot;.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>patch<\/code><br\/>\n<code>[]byte<\/code>\n<\/td>\n<td>\n   <p>The patch body. Currently we only support &quot;JSONPatch&quot; which implements RFC 6902.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>patchType<\/code><br\/>\n<a href=\"#admission-k8s-io-v1-PatchType\"><code>PatchType<\/code><\/a>\n<\/td>\n<td>\n   <p>The type of Patch. Currently we only allow &quot;JSONPatch&quot;.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>auditAnnotations<\/code><br\/>\n<code>map[string]string<\/code>\n<\/td>\n<td>\n   <p>AuditAnnotations is an unstructured key value map set by remote admission controller (e.g. error=image-blacklisted).\nMutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controller will prefix the keys with\nadmission webhook name (e.g. imagepolicy.example.com\/error=image-blacklisted). 
AuditAnnotations will be provided by\nthe admission webhook to add additional context to the audit log for this request.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>warnings<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>warnings is a list of warning messages to return to the requesting API client.\nWarning messages describe a problem the client making the API request should correct or be aware of.\nLimit warnings to 120 characters if possible.\nWarnings over 256 characters and large numbers of warnings may be truncated.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Operation`     {#admission-k8s-io-v1-Operation}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [AdmissionRequest](#admission-k8s-io-v1-AdmissionRequest)\n\n\n<p>Operation is the type of resource operation being checked for admission control<\/p>\n\n\n\n\n## `PatchType`     {#admission-k8s-io-v1-PatchType}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [AdmissionResponse](#admission-k8s-io-v1-AdmissionResponse)\n\n\n<p>PatchType is the type of patch being used to represent the mutated object<\/p>\n\n\n\n ","site":"kubernetes reference"}
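The AdmissionResponse reference above describes the JSON body a mutating webhook returns: `uid` copied from the request, `allowed`, and an optional RFC 6902 JSONPatch carried in the `[]byte` `patch` field, which Kubernetes serializes as base64. A minimal sketch of assembling that body (the uid value and label key here are placeholders, not from the reference):

```python
import base64
import json

def make_review_response(uid, patch_ops):
    """Build the AdmissionReview response body for a mutating webhook."""
    # The []byte `patch` field is base64-encoded JSON on the wire.
    patch_bytes = json.dumps(patch_ops).encode("utf-8")
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,  # must be copied from the incoming AdmissionRequest
            "allowed": True,
            "patchType": "JSONPatch",  # currently the only allowed patch type
            "patch": base64.b64encode(patch_bytes).decode("ascii"),
        },
    }

# RFC 6902 JSONPatch adding a (hypothetical) label to the admitted object.
ops = [{"op": "add", "path": "/metadata/labels/reviewed", "value": "true"}]
body = make_review_response("705ab4f5-6393-11e8-b7cc-42010a800002", ops)
print(json.dumps(body, indent=2))
```

Omitting `patch` and `patchType` entirely yields a pure validating response; only `uid` and `allowed` are required.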
{"questions":"kubernetes reference contenttype tool reference Resource Types package apiserver k8s io v1alpha1 title kube apiserver Configuration v1alpha1 autogenerated true p Package v1alpha1 is the v1alpha1 version of the API p","answers":"---\ntitle: kube-apiserver Configuration (v1alpha1)\ncontent_type: tool-reference\npackage: apiserver.k8s.io\/v1alpha1\nauto_generated: true\n---\n<p>Package v1alpha1 is the v1alpha1 version of the API.<\/p>\n\n\n## Resource Types \n\n\n- [AdmissionConfiguration](#apiserver-k8s-io-v1alpha1-AdmissionConfiguration)\n- [AuthenticationConfiguration](#apiserver-k8s-io-v1alpha1-AuthenticationConfiguration)\n- [AuthorizationConfiguration](#apiserver-k8s-io-v1alpha1-AuthorizationConfiguration)\n- [EgressSelectorConfiguration](#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration)\n- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration)\n  \n    \n    \n\n## `TracingConfiguration`     {#TracingConfiguration}\n    \n\n**Appears in:**\n\n- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)\n\n- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration)\n\n\n<p>TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>endpoint<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Endpoint of the collector this component will report traces to.\nThe connection is insecure, and does not currently support TLS.\nRecommended is unset, and endpoint is the otlp grpc default, localhost:4317.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>samplingRatePerMillion<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>SamplingRatePerMillion is the number of samples to collect per million spans.\nRecommended is unset. 
If unset, sampler respects its parent span's sampling\nrate, but otherwise never samples.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n  \n\n## `AdmissionConfiguration`     {#apiserver-k8s-io-v1alpha1-AdmissionConfiguration}\n    \n\n\n<p>AdmissionConfiguration provides versioned configuration for admission controllers.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>apiserver.k8s.io\/v1alpha1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>AdmissionConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>plugins<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-AdmissionPluginConfiguration\"><code>[]AdmissionPluginConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>Plugins allows specifying a configuration per admission control plugin.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AuthenticationConfiguration`     {#apiserver-k8s-io-v1alpha1-AuthenticationConfiguration}\n    \n\n\n<p>AuthenticationConfiguration provides versioned configuration for authentication.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>apiserver.k8s.io\/v1alpha1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>AuthenticationConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>jwt<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-JWTAuthenticator\"><code>[]JWTAuthenticator<\/code><\/a>\n<\/td>\n<td>\n   <p>jwt is a list of authenticators to authenticate Kubernetes users using\nJWT compliant tokens. The authenticator will attempt to parse a raw ID token,\nverify it has been signed by the configured issuer. 
The public key to verify the\nsignature is discovered from the issuer's public endpoint using OIDC discovery.\nFor an incoming token, each JWT authenticator will be attempted in\nthe order in which it is specified in this list.  Note however that\nother authenticators may run before or after the JWT authenticators.\nThe specific position of JWT authenticators in relation to other\nauthenticators is neither defined nor stable across releases.  Since\neach JWT authenticator must have a unique issuer URL, at most one\nJWT authenticator will attempt to cryptographically validate the token.<\/p>\n<p>The minimum valid JWT payload must contain the following claims:\n{\n&quot;iss&quot;: &quot;https:\/\/issuer.example.com&quot;,\n&quot;aud&quot;: [&quot;audience&quot;],\n&quot;exp&quot;: 1234567890,\n&quot;<!-- raw HTML omitted -->&quot;: &quot;username&quot;\n}<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>anonymous<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-AnonymousAuthConfig\"><code>AnonymousAuthConfig<\/code><\/a>\n<\/td>\n<td>\n   <p>If present --anonymous-auth must not be set<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AuthorizationConfiguration`     {#apiserver-k8s-io-v1alpha1-AuthorizationConfiguration}\n    \n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>apiserver.k8s.io\/v1alpha1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>AuthorizationConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>authorizers<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-AuthorizerConfiguration\"><code>[]AuthorizerConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>Authorizers is an ordered list of authorizers to\nauthorize requests against.\nThis is similar to the --authorization-modes kube-apiserver flag\nMust be at least 
one.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `EgressSelectorConfiguration`     {#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration}\n    \n\n\n<p>EgressSelectorConfiguration provides versioned configuration for egress selector clients.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>apiserver.k8s.io\/v1alpha1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>EgressSelectorConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>egressSelections<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-EgressSelection\"><code>[]EgressSelection<\/code><\/a>\n<\/td>\n<td>\n   <p>connectionServices contains a list of egress selection client configurations<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `TracingConfiguration`     {#apiserver-k8s-io-v1alpha1-TracingConfiguration}\n    \n\n\n<p>TracingConfiguration provides versioned configuration for tracing clients.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>apiserver.k8s.io\/v1alpha1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>TracingConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>TracingConfiguration<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#TracingConfiguration\"><code>TracingConfiguration<\/code><\/a>\n<\/td>\n<td>(Members of <code>TracingConfiguration<\/code> are embedded into this type.)\n   <p>Embed the component config tracing configuration struct<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AdmissionPluginConfiguration`     {#apiserver-k8s-io-v1alpha1-AdmissionPluginConfiguration}\n    \n\n**Appears in:**\n\n- [AdmissionConfiguration](#apiserver-k8s-io-v1alpha1-AdmissionConfiguration)\n\n\n<p>AdmissionPluginConfiguration provides 
the configuration for a single plug-in.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name is the name of the admission controller.\nIt must match the registered admission plugin name.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>path<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Path is the path to a configuration file that contains the plugin's\nconfiguration<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>configuration<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/runtime#Unknown\"><code>k8s.io\/apimachinery\/pkg\/runtime.Unknown<\/code><\/a>\n<\/td>\n<td>\n   <p>Configuration is an embedded configuration object to be used as the plugin's\nconfiguration. If present, it will be used instead of the path to the configuration file.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AnonymousAuthCondition`     {#apiserver-k8s-io-v1alpha1-AnonymousAuthCondition}\n    \n\n**Appears in:**\n\n- [AnonymousAuthConfig](#apiserver-k8s-io-v1alpha1-AnonymousAuthConfig)\n\n\n<p>AnonymousAuthCondition describes the condition under which anonymous auth\nshould be enabled.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>path<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Path for which anonymous auth is enabled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AnonymousAuthConfig`     {#apiserver-k8s-io-v1alpha1-AnonymousAuthConfig}\n    \n\n**Appears in:**\n\n- [AuthenticationConfiguration](#apiserver-k8s-io-v1alpha1-AuthenticationConfiguration)\n\n\n<p>AnonymousAuthConfig provides the configuration for the anonymous authenticator.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  
\n<tr><td><code>enabled<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>conditions<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-AnonymousAuthCondition\"><code>[]AnonymousAuthCondition<\/code><\/a>\n<\/td>\n<td>\n   <p>If set, anonymous auth is only allowed if the request meets one of the\nconditions.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AudienceMatchPolicyType`     {#apiserver-k8s-io-v1alpha1-AudienceMatchPolicyType}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [Issuer](#apiserver-k8s-io-v1alpha1-Issuer)\n\n\n<p>AudienceMatchPolicyType is a set of valid values for issuer.audienceMatchPolicy<\/p>\n\n\n\n\n## `AuthorizerConfiguration`     {#apiserver-k8s-io-v1alpha1-AuthorizerConfiguration}\n    \n\n**Appears in:**\n\n- [AuthorizationConfiguration](#apiserver-k8s-io-v1alpha1-AuthorizationConfiguration)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>type<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Type refers to the type of the authorizer\n&quot;Webhook&quot; is supported in the generic API server\nOther API servers may support additional authorizer\ntypes like Node, RBAC, ABAC, etc.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name used to describe the webhook\nThis is explicitly used in monitoring machinery for metrics\nNote: Names must be DNS1123 labels like <code>myauthorizername<\/code> or\nsubdomains like <code>myauthorizer.example.domain<\/code>\nRequired, with no default<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>webhook<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-WebhookConfiguration\"><code>WebhookConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>Webhook defines the configuration for a 
Webhook authorizer\nMust be defined when Type=Webhook\nMust not be defined when Type!=Webhook<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ClaimMappings`     {#apiserver-k8s-io-v1alpha1-ClaimMappings}\n    \n\n**Appears in:**\n\n- [JWTAuthenticator](#apiserver-k8s-io-v1alpha1-JWTAuthenticator)\n\n\n<p>ClaimMappings provides the configuration for claim mapping<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>username<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-PrefixedClaimOrExpression\"><code>PrefixedClaimOrExpression<\/code><\/a>\n<\/td>\n<td>\n   <p>username represents an option for the username attribute.\nThe claim's value must be a singular string.\nSame as the --oidc-username-claim and --oidc-username-prefix flags.\nIf username.expression is set, the expression must produce a string value.\nIf username.expression uses 'claims.email', then 'claims.email_verified' must be used in\nusername.expression or extra[*].valueExpression or claimValidationRules[*].expression.\nAn example claim validation rule expression that matches the validation automatically\napplied when username.claim is set to 'email' is 'claims.?email_verified.orValue(true)'.<\/p>\n<p>In the flag based approach, the --oidc-username-claim and --oidc-username-prefix are optional. If --oidc-username-claim is not set,\nthe default value is &quot;sub&quot;. For the authentication config, there is no defaulting for claim or prefix. The claim and prefix must be set explicitly.\nFor claim, if --oidc-username-claim was not set with legacy flag approach, configure username.claim=&quot;sub&quot; in the authentication config.\nFor prefix:\n(1) --oidc-username-prefix=&quot;-&quot;, no prefix was added to the username. 
For the same behavior using authentication config,\nset username.prefix=&quot;&quot;\n(2) --oidc-username-prefix=&quot;&quot; and  --oidc-username-claim != &quot;email&quot;, prefix was &quot;&lt;value of --oidc-issuer-url&gt;#&quot;. For the same\nbehavior using authentication config, set username.prefix=&quot;<!-- raw HTML omitted -->#&quot;\n(3) --oidc-username-prefix=&quot;<!-- raw HTML omitted -->&quot;. For the same behavior using authentication config, set username.prefix=&quot;<!-- raw HTML omitted -->&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>groups<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-PrefixedClaimOrExpression\"><code>PrefixedClaimOrExpression<\/code><\/a>\n<\/td>\n<td>\n   <p>groups represents an option for the groups attribute.\nThe claim's value must be a string or string array claim.\nIf groups.claim is set, the prefix must be specified (and can be the empty string).\nIf groups.expression is set, the expression must produce a string or string array value.\n&quot;&quot;, [], and null values are treated as the group mapping not being present.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>uid<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-ClaimOrExpression\"><code>ClaimOrExpression<\/code><\/a>\n<\/td>\n<td>\n   <p>uid represents an option for the uid attribute.\nClaim must be a singular string claim.\nIf uid.expression is set, the expression must produce a string value.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extra<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-ExtraMapping\"><code>[]ExtraMapping<\/code><\/a>\n<\/td>\n<td>\n   <p>extra represents an option for the extra attribute.\nexpression must produce a string or string array value.\nIf the value is empty, the extra mapping will not be present.<\/p>\n<p>hard-coded extra key\/value<\/p>\n<ul>\n<li>key: &quot;foo&quot;\nvalueExpression: &quot;'bar'&quot;\nThis will result in an extra attribute - foo: [&quot;bar&quot;]<\/li>\n<\/ul>\n<p>hard-coded key, value copying claim 
value<\/p>\n<ul>\n<li>key: &quot;foo&quot;\nvalueExpression: &quot;claims.some_claim&quot;\nThis will result in an extra attribute - foo: [value of some_claim]<\/li>\n<\/ul>\n<p>hard-coded key, value derived from claim value<\/p>\n<ul>\n<li>key: &quot;admin&quot;\nvalueExpression: '(has(claims.is_admin) &amp;&amp; claims.is_admin) ? &quot;true&quot;:&quot;&quot;'\nThis will result in:<\/li>\n<li>if is_admin claim is present and true, extra attribute - admin: [&quot;true&quot;]<\/li>\n<li>if is_admin claim is present and false or is_admin claim is not present, no extra attribute will be added<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ClaimOrExpression`     {#apiserver-k8s-io-v1alpha1-ClaimOrExpression}\n    \n\n**Appears in:**\n\n- [ClaimMappings](#apiserver-k8s-io-v1alpha1-ClaimMappings)\n\n\n<p>ClaimOrExpression provides the configuration for a single claim or expression.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>claim<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>claim is the JWT claim to use.\nEither claim or expression must be set.\nMutually exclusive with expression.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>expression<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>expression represents the expression which will be evaluated by CEL.<\/p>\n<p>CEL expressions have access to the contents of the token claims, organized into CEL variable:<\/p>\n<ul>\n<li>'claims' is a map of claim names to claim values.\nFor example, a variable named 'sub' can be accessed as 'claims.sub'.\nNested claims can be accessed using dot notation, e.g. 
'claims.foo.bar'.<\/li>\n<\/ul>\n<p>Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/<\/p>\n<p>Mutually exclusive with claim.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ClaimValidationRule`     {#apiserver-k8s-io-v1alpha1-ClaimValidationRule}\n    \n\n**Appears in:**\n\n- [JWTAuthenticator](#apiserver-k8s-io-v1alpha1-JWTAuthenticator)\n\n\n<p>ClaimValidationRule provides the configuration for a single claim validation rule.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>claim<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>claim is the name of a required claim.\nSame as --oidc-required-claim flag.\nOnly string claim keys are supported.\nMutually exclusive with expression and message.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>requiredValue<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>requiredValue is the value of a required claim.\nSame as --oidc-required-claim flag.\nOnly string claim values are supported.\nIf claim is set and requiredValue is not set, the claim must be present with a value set to the empty string.\nMutually exclusive with expression and message.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>expression<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>expression represents the expression which will be evaluated by CEL.\nMust produce a boolean.<\/p>\n<p>CEL expressions have access to the contents of the token claims, organized into CEL variable:<\/p>\n<ul>\n<li>'claims' is a map of claim names to claim values.\nFor example, a variable named 'sub' can be accessed as 'claims.sub'.\nNested claims can be accessed using dot notation, e.g. 
'claims.foo.bar'.\nMust return true for the validation to pass.<\/li>\n<\/ul>\n<p>Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/<\/p>\n<p>Mutually exclusive with claim and requiredValue.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>message<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>message customizes the returned error message when expression returns false.\nmessage is a literal string.\nMutually exclusive with claim and requiredValue.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Connection`     {#apiserver-k8s-io-v1alpha1-Connection}\n    \n\n**Appears in:**\n\n- [EgressSelection](#apiserver-k8s-io-v1alpha1-EgressSelection)\n\n\n<p>Connection provides the configuration for a single egress selection client.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>proxyProtocol<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-ProtocolType\"><code>ProtocolType<\/code><\/a>\n<\/td>\n<td>\n   <p>Protocol is the protocol used to connect from client to the konnectivity server.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>transport<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-Transport\"><code>Transport<\/code><\/a>\n<\/td>\n<td>\n   <p>Transport defines the transport configurations we use to dial to the konnectivity server.\nThis is required if ProxyProtocol is HTTPConnect or GRPC.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `EgressSelection`     {#apiserver-k8s-io-v1alpha1-EgressSelection}\n    \n\n**Appears in:**\n\n- [EgressSelectorConfiguration](#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration)\n\n\n<p>EgressSelection provides the configuration for a single egress selection client.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   
<p>name is the name of the egress selection.\nCurrently supported values are &quot;controlplane&quot;, &quot;master&quot;, &quot;etcd&quot;, and &quot;cluster&quot;.\nThe &quot;master&quot; egress selector is deprecated in favor of &quot;controlplane&quot;.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>connection<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-Connection\"><code>Connection<\/code><\/a>\n<\/td>\n<td>\n   <p>connection is the exact information used to configure the egress selection.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExtraMapping`     {#apiserver-k8s-io-v1alpha1-ExtraMapping}\n    \n\n**Appears in:**\n\n- [ClaimMappings](#apiserver-k8s-io-v1alpha1-ClaimMappings)\n\n\n<p>ExtraMapping provides the configuration for a single extra mapping.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>key<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>key is a string to use as the extra attribute key.\nkey must be a domain-prefix path (e.g. example.org\/foo). All characters before the first &quot;\/&quot; must be a valid\nsubdomain as defined by RFC 1123. 
All characters trailing the first &quot;\/&quot; must\nbe valid HTTP Path characters as defined by RFC 3986.\nkey must be lowercase.\nRequired to be unique.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>valueExpression<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>valueExpression is a CEL expression to extract the extra attribute value.\nvalueExpression must produce a string or string array value.\n&quot;&quot;, [], and null values are treated as the extra mapping not being present.\nEmpty string values contained within a string array are filtered out.<\/p>\n<p>CEL expressions have access to the contents of the token claims, organized into CEL variables:<\/p>\n<ul>\n<li>'claims' is a map of claim names to claim values.\nFor example, a variable named 'sub' can be accessed as 'claims.sub'.\nNested claims can be accessed using dot notation, e.g. 'claims.foo.bar'.<\/li>\n<\/ul>\n<p>Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Issuer`     {#apiserver-k8s-io-v1alpha1-Issuer}\n    \n\n**Appears in:**\n\n- [JWTAuthenticator](#apiserver-k8s-io-v1alpha1-JWTAuthenticator)\n\n\n<p>Issuer provides the configuration for an external provider's specific settings.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>url<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>url points to the issuer URL in a format https:\/\/url or https:\/\/url\/path.\nThis must match the &quot;iss&quot; claim in the presented JWT, and the issuer returned from discovery.\nSame value as the --oidc-issuer-url flag.\nDiscovery information is fetched from &quot;{url}\/.well-known\/openid-configuration&quot; unless overridden by discoveryURL.\nRequired to be unique across all JWT authenticators.\nNote that egress selection configuration is not used for this network 
connection.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>discoveryURL<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>discoveryURL, if specified, overrides the URL used to fetch discovery\ninformation instead of using &quot;{url}\/.well-known\/openid-configuration&quot;.\nThe exact value specified is used, so &quot;\/.well-known\/openid-configuration&quot;\nmust be included in discoveryURL if needed.<\/p>\n<p>The &quot;issuer&quot; field in the fetched discovery information must match the &quot;issuer.url&quot; field\nin the AuthenticationConfiguration and will be used to validate the &quot;iss&quot; claim in the presented JWT.\nThis is for scenarios where the well-known and jwks endpoints are hosted at a different\nlocation than the issuer (such as locally in the cluster).<\/p>\n<p>Example:\nA discovery url that is exposed using kubernetes service 'oidc' in namespace 'oidc-namespace'\nand discovery information is available at '\/.well-known\/openid-configuration'.\ndiscoveryURL: &quot;https:\/\/oidc.oidc-namespace\/.well-known\/openid-configuration&quot;\ncertificateAuthority is used to verify the TLS connection and the hostname on the leaf certificate\nmust be set to 'oidc.oidc-namespace'.<\/p>\n<p>curl https:\/\/oidc.oidc-namespace\/.well-known\/openid-configuration (.discoveryURL field)\n{\nissuer: &quot;https:\/\/oidc.example.com&quot; (.url field)\n}<\/p>\n<p>discoveryURL must be different from url.\nRequired to be unique across all JWT authenticators.\nNote that egress selection configuration is not used for this network connection.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificateAuthority<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>certificateAuthority contains PEM-encoded certificate authority certificates\nused to validate the connection when fetching discovery information.\nIf unset, the system verifier is used.\nSame value as the content of the file referenced by the --oidc-ca-file flag.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>audiences<\/code> 
<B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>audiences is the set of acceptable audiences the JWT must be issued to.\nAt least one of the entries must match the &quot;aud&quot; claim in presented JWTs.\nSame value as the --oidc-client-id flag (though this field supports an array).\nRequired to be non-empty.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>audienceMatchPolicy<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-AudienceMatchPolicyType\"><code>AudienceMatchPolicyType<\/code><\/a>\n<\/td>\n<td>\n   <p>audienceMatchPolicy defines how the &quot;audiences&quot; field is used to match the &quot;aud&quot; claim in the presented JWT.\nAllowed values are:<\/p>\n<ol>\n<li>&quot;MatchAny&quot; when multiple audiences are specified and<\/li>\n<li>empty (or unset) or &quot;MatchAny&quot; when a single audience is specified.<\/li>\n<\/ol>\n<ul>\n<li>\n<p>MatchAny: the &quot;aud&quot; claim in the presented JWT must match at least one of the entries in the &quot;audiences&quot; field.\nFor example, if &quot;audiences&quot; is [&quot;foo&quot;, &quot;bar&quot;], the &quot;aud&quot; claim in the presented JWT must contain either &quot;foo&quot; or &quot;bar&quot; (and may contain both).<\/p>\n<\/li>\n<li>\n<p>&quot;&quot;: The match policy can be empty (or unset) when a single audience is specified in the &quot;audiences&quot; field. 
The &quot;aud&quot; claim in the presented JWT must contain the single audience (and may contain others).<\/p>\n<\/li>\n<\/ul>\n<p>For more nuanced audience validation, use claimValidationRules.\nExample: claimValidationRule[].expression: 'sets.equivalent(claims.aud, [&quot;bar&quot;, &quot;foo&quot;, &quot;baz&quot;])' to require an exact match.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `JWTAuthenticator`     {#apiserver-k8s-io-v1alpha1-JWTAuthenticator}\n    \n\n**Appears in:**\n\n- [AuthenticationConfiguration](#apiserver-k8s-io-v1alpha1-AuthenticationConfiguration)\n\n\n<p>JWTAuthenticator provides the configuration for a single JWT authenticator.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>issuer<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-Issuer\"><code>Issuer<\/code><\/a>\n<\/td>\n<td>\n   <p>issuer contains the basic OIDC provider connection options.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>claimValidationRules<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-ClaimValidationRule\"><code>[]ClaimValidationRule<\/code><\/a>\n<\/td>\n<td>\n   <p>claimValidationRules are rules that are applied to validate token claims to authenticate users.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>claimMappings<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-ClaimMappings\"><code>ClaimMappings<\/code><\/a>\n<\/td>\n<td>\n   <p>claimMappings specifies how token claims are mapped to user attributes.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>userValidationRules<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-UserValidationRule\"><code>[]UserValidationRule<\/code><\/a>\n<\/td>\n<td>\n   <p>userValidationRules are rules that are applied to the final user before completing authentication.\nThese allow invariants to be applied to incoming identities such as preventing the\nuse of the system: prefix that is commonly used by Kubernetes 
components.\nThe validation rules are logically ANDed together and must all return true for the validation to pass.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `PrefixedClaimOrExpression`     {#apiserver-k8s-io-v1alpha1-PrefixedClaimOrExpression}\n    \n\n**Appears in:**\n\n- [ClaimMappings](#apiserver-k8s-io-v1alpha1-ClaimMappings)\n\n\n<p>PrefixedClaimOrExpression provides the configuration for a single prefixed claim or expression.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>claim<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>claim is the JWT claim to use.\nMutually exclusive with expression.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>prefix<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>prefix is prepended to the claim's value to prevent clashes with existing names.\nprefix needs to be set if claim is set, and can be the empty string.\nMutually exclusive with expression.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>expression<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>expression represents the expression which will be evaluated by CEL.<\/p>\n<p>CEL expressions have access to the contents of the token claims, organized into CEL variables:<\/p>\n<ul>\n<li>'claims' is a map of claim names to claim values.\nFor example, a variable named 'sub' can be accessed as 'claims.sub'.\nNested claims can be accessed using dot notation, e.g. 
'claims.foo.bar'.<\/li>\n<\/ul>\n<p>Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/<\/p>\n<p>Mutually exclusive with claim and prefix.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ProtocolType`     {#apiserver-k8s-io-v1alpha1-ProtocolType}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [Connection](#apiserver-k8s-io-v1alpha1-Connection)\n\n\n<p>ProtocolType is a set of valid values for Connection.ProtocolType<\/p>\n\n\n\n\n## `TCPTransport`     {#apiserver-k8s-io-v1alpha1-TCPTransport}\n    \n\n**Appears in:**\n\n- [Transport](#apiserver-k8s-io-v1alpha1-Transport)\n\n\n<p>TCPTransport provides the information to connect to konnectivity server via TCP<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>url<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>URL is the location of the konnectivity server to connect to.\nAs an example it might be &quot;https:\/\/127.0.0.1:8131&quot;<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tlsConfig<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-TLSConfig\"><code>TLSConfig<\/code><\/a>\n<\/td>\n<td>\n   <p>TLSConfig is the config needed to use TLS when connecting to konnectivity server<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `TLSConfig`     {#apiserver-k8s-io-v1alpha1-TLSConfig}\n    \n\n**Appears in:**\n\n- [TCPTransport](#apiserver-k8s-io-v1alpha1-TCPTransport)\n\n\n<p>TLSConfig provides the authentication information to connect to konnectivity server\nOnly used with TCPTransport<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>caBundle<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>caBundle is the file location of the CA to be used to determine trust with the konnectivity server.\nMust be absent\/empty if TCPTransport.URL is prefixed with http:\/\/\nIf absent 
while TCPTransport.URL is prefixed with https:\/\/, default to system trust roots.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clientKey<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>clientKey is the file location of the client key to be used in mtls handshakes with the konnectivity server.\nMust be absent\/empty if TCPTransport.URL is prefixed with http:\/\/\nMust be configured if TCPTransport.URL is prefixed with https:\/\/<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clientCert<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>clientCert is the file location of the client certificate to be used in mtls handshakes with the konnectivity server.\nMust be absent\/empty if TCPTransport.URL is prefixed with http:\/\/\nMust be configured if TCPTransport.URL is prefixed with https:\/\/<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Transport`     {#apiserver-k8s-io-v1alpha1-Transport}\n    \n\n**Appears in:**\n\n- [Connection](#apiserver-k8s-io-v1alpha1-Connection)\n\n\n<p>Transport defines the transport configurations we use to dial to the konnectivity server<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>tcp<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-TCPTransport\"><code>TCPTransport<\/code><\/a>\n<\/td>\n<td>\n   <p>TCP is the TCP configuration for communicating with the konnectivity server via TCP\nProxyProtocol of GRPC is not supported with TCP transport at the moment\nRequires at least one of TCP or UDS to be set<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>uds<\/code><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-UDSTransport\"><code>UDSTransport<\/code><\/a>\n<\/td>\n<td>\n   <p>UDS is the UDS configuration for communicating with the konnectivity server via UDS\nRequires at least one of TCP or UDS to be set<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `UDSTransport`     {#apiserver-k8s-io-v1alpha1-UDSTransport}\n    \n\n**Appears in:**\n\n- 
[Transport](#apiserver-k8s-io-v1alpha1-Transport)\n\n\n<p>UDSTransport provides the information to connect to the konnectivity server via UDS.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>udsName<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>UDSName is the name of the unix domain socket to connect to the konnectivity server.\nThis does not use a unix:\/\/ prefix. (e.g. \/etc\/srv\/kubernetes\/konnectivity-server\/konnectivity-server.socket)<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `UserValidationRule`     {#apiserver-k8s-io-v1alpha1-UserValidationRule}\n    \n\n**Appears in:**\n\n- [JWTAuthenticator](#apiserver-k8s-io-v1alpha1-JWTAuthenticator)\n\n\n<p>UserValidationRule provides the configuration for a single user info validation rule.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>expression<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>expression represents the expression which will be evaluated by CEL.\nMust return true for the validation to pass.<\/p>\n<p>CEL expressions have access to the contents of UserInfo, organized into CEL variables:<\/p>\n<ul>\n<li>'user' - authentication.k8s.io\/v1, Kind=UserInfo object\nRefer to https:\/\/github.com\/kubernetes\/api\/blob\/release-1.28\/authentication\/v1\/types.go#L105-L122 for the definition.\nAPI documentation: https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.28\/#userinfo-v1-authentication-k8s-io<\/li>\n<\/ul>\n<p>Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>message<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>message customizes the returned error message when the rule returns false.\nmessage is a literal string.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## 
`WebhookConfiguration`     {#apiserver-k8s-io-v1alpha1-WebhookConfiguration}\n    \n\n**Appears in:**\n\n- [AuthorizerConfiguration](#apiserver-k8s-io-v1alpha1-AuthorizerConfiguration)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>authorizedTTL<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>The duration to cache 'authorized' responses from the webhook\nauthorizer.\nSame as setting <code>--authorization-webhook-cache-authorized-ttl<\/code> flag\nDefault: 5m0s<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>unauthorizedTTL<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>The duration to cache 'unauthorized' responses from the webhook\nauthorizer.\nSame as setting <code>--authorization-webhook-cache-unauthorized-ttl<\/code> flag\nDefault: 30s<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>timeout<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>Timeout for the webhook request\nMaximum allowed value is 30s.\nRequired, no default value.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>subjectAccessReviewVersion<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>The API version of the authorization.k8s.io SubjectAccessReview to\nsend to and expect from the webhook.\nSame as setting <code>--authorization-webhook-version<\/code> flag\nValid values: v1beta1, v1\nRequired, no default value<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>matchConditionSubjectAccessReviewVersion<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>MatchConditionSubjectAccessReviewVersion specifies the 
SubjectAccessReview\nversion the CEL expressions are evaluated against\nValid values: v1\nRequired, no default value<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>failurePolicy<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Controls the authorization decision when a webhook request fails to\ncomplete or returns a malformed response or errors evaluating\nmatchConditions.\nValid values:<\/p>\n<ul>\n<li>NoOpinion: continue to subsequent authorizers to see if one of\nthem allows the request<\/li>\n<li>Deny: reject the request without consulting subsequent authorizers\nRequired, with no default.<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>connectionInfo<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-WebhookConnectionInfo\"><code>WebhookConnectionInfo<\/code><\/a>\n<\/td>\n<td>\n   <p>ConnectionInfo defines how we talk to the webhook<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>matchConditions<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-k8s-io-v1alpha1-WebhookMatchCondition\"><code>[]WebhookMatchCondition<\/code><\/a>\n<\/td>\n<td>\n   <p>matchConditions is a list of conditions that must be met for a request to be sent to this\nwebhook. 
An empty list of matchConditions matches all requests.\nThere are a maximum of 64 match conditions allowed.<\/p>\n<p>The exact matching logic is (in order):<\/p>\n<ol>\n<li>If at least one matchCondition evaluates to FALSE, then the webhook is skipped.<\/li>\n<li>If ALL matchConditions evaluate to TRUE, then the webhook is called.<\/li>\n<li>If at least one matchCondition evaluates to an error (but none are FALSE):\n<ul>\n<li>If failurePolicy=Deny, then the webhook rejects the request<\/li>\n<li>If failurePolicy=NoOpinion, then the error is ignored and the webhook is skipped<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `WebhookConnectionInfo`     {#apiserver-k8s-io-v1alpha1-WebhookConnectionInfo}\n    \n\n**Appears in:**\n\n- [WebhookConfiguration](#apiserver-k8s-io-v1alpha1-WebhookConfiguration)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>type<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Controls how the webhook should communicate with the server.\nValid values:<\/p>\n<ul>\n<li>KubeConfigFile: use the file specified in kubeConfigFile to locate the\nserver.<\/li>\n<li>InClusterConfig: use the in-cluster configuration to call the\nSubjectAccessReview API hosted by kube-apiserver. 
This mode is not\nallowed for kube-apiserver.<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>kubeConfigFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Path to KubeConfigFile for connection info\nRequired, if connectionInfo.Type is KubeConfig<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `WebhookMatchCondition`     {#apiserver-k8s-io-v1alpha1-WebhookMatchCondition}\n    \n\n**Appears in:**\n\n- [WebhookConfiguration](#apiserver-k8s-io-v1alpha1-WebhookConfiguration)\n\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>expression<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>expression represents the expression which will be evaluated by CEL. Must evaluate to bool.\nCEL expressions have access to the contents of the SubjectAccessReview in v1 version.\nIf version specified by subjectAccessReviewVersion in the request variable is v1beta1,\nthe contents would be converted to the v1 version before evaluating the CEL expression.<\/p>\n<p>Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n
lt value of   oidc issuer url gt   quot   For the same behavior using authentication config  set username prefix  quot      raw HTML omitted      quot   3    oidc username prefix  quot      raw HTML omitted     quot   For the same behavior using authentication config  set username prefix  quot      raw HTML omitted     quot   p    td    tr   tr  td  code groups  code  br    a href   apiserver k8s io v1alpha1 PrefixedClaimOrExpression   code PrefixedClaimOrExpression  code   a    td   td      p groups represents an option for the groups attribute  The claim s value must be a string or string array claim  If groups claim is set  the prefix must be specified  and can be the empty string   If groups expression is set  the expression must produce a string or string array value   quot  quot       and null values are treated as the group mapping not being present   p    td    tr   tr  td  code uid  code  br    a href   apiserver k8s io v1alpha1 ClaimOrExpression   code ClaimOrExpression  code   a    td   td      p uid represents an option for the uid attribute  Claim must be a singular string claim  If uid expression is set  the expression must produce a string value   p    td    tr   tr  td  code extra  code  br    a href   apiserver k8s io v1alpha1 ExtraMapping   code   ExtraMapping  code   a    td   td      p extra represents an option for the extra attribute  expression must produce a string or string array value  If the value is empty  the extra mapping will not be present   p   p hard coded extra key value  p   ul   li key   quot foo quot  valueExpression   quot  bar  quot  This will result in an extra attribute   foo    quot bar quot    li    ul   p hard coded key  value copying claim value  p   ul   li key   quot foo quot  valueExpression   quot claims some claim quot  This will result in an extra attribute   foo   value of some claim   li    ul   p hard coded key  value derived from claim value  p   ul   li key   quot admin quot  valueExpression    has claims is 
admin   amp  amp  claims is admin     quot true quot   quot  quot   This will result in   li   li if is admin claim is present and true  extra attribute   admin    quot true quot    li   li if is admin claim is present and false or is admin claim is not present  no extra attribute will be added  li    ul    td    tr    tbody    table       ClaimOrExpression        apiserver k8s io v1alpha1 ClaimOrExpression          Appears in        ClaimMappings   apiserver k8s io v1alpha1 ClaimMappings     p ClaimOrExpression provides the configuration for a single claim or expression   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code claim  code  br    code string  code    td   td      p claim is the JWT claim to use  Either claim or expression must be set  Mutually exclusive with expression   p    td    tr   tr  td  code expression  code  br    code string  code    td   td      p expression represents the expression which will be evaluated by CEL   p   p CEL expressions have access to the contents of the token claims  organized into CEL variable   p   ul   li  claims  is a map of claim names to claim values  For example  a variable named  sub  can be accessed as  claims sub   Nested claims can be accessed using dot notation  e g   claims foo bar    li    ul   p Documentation on CEL  https   kubernetes io docs reference using api cel   p   p Mutually exclusive with claim   p    td    tr    tbody    table       ClaimValidationRule        apiserver k8s io v1alpha1 ClaimValidationRule          Appears in        JWTAuthenticator   apiserver k8s io v1alpha1 JWTAuthenticator     p ClaimValidationRule provides the configuration for a single claim validation rule   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code claim  code  br    code string  code    td   td      p claim is the name of a required claim  Same as   oidc required 
claim flag  Only string claim keys are supported  Mutually exclusive with expression and message   p    td    tr   tr  td  code requiredValue  code  br    code string  code    td   td      p requiredValue is the value of a required claim  Same as   oidc required claim flag  Only string claim values are supported  If claim is set and requiredValue is not set  the claim must be present with a value set to the empty string  Mutually exclusive with expression and message   p    td    tr   tr  td  code expression  code  br    code string  code    td   td      p expression represents the expression which will be evaluated by CEL  Must produce a boolean   p   p CEL expressions have access to the contents of the token claims  organized into CEL variable   p   ul   li  claims  is a map of claim names to claim values  For example  a variable named  sub  can be accessed as  claims sub   Nested claims can be accessed using dot notation  e g   claims foo bar   Must return true for the validation to pass   li    ul   p Documentation on CEL  https   kubernetes io docs reference using api cel   p   p Mutually exclusive with claim and requiredValue   p    td    tr   tr  td  code message  code  br    code string  code    td   td      p message customizes the returned error message when expression returns false  message is a literal string  Mutually exclusive with claim and requiredValue   p    td    tr    tbody    table       Connection        apiserver k8s io v1alpha1 Connection          Appears in        EgressSelection   apiserver k8s io v1alpha1 EgressSelection     p Connection provides the configuration for a single egress selection client   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code proxyProtocol  code   B  Required   B  br    a href   apiserver k8s io v1alpha1 ProtocolType   code ProtocolType  code   a    td   td      p Protocol is the protocol used to connect from client to the konnectivity 
server   p    td    tr   tr  td  code transport  code  br    a href   apiserver k8s io v1alpha1 Transport   code Transport  code   a    td   td      p Transport defines the transport configurations we use to dial to the konnectivity server  This is required if ProxyProtocol is HTTPConnect or GRPC   p    td    tr    tbody    table       EgressSelection        apiserver k8s io v1alpha1 EgressSelection          Appears in        EgressSelectorConfiguration   apiserver k8s io v1alpha1 EgressSelectorConfiguration     p EgressSelection provides the configuration for a single egress selection client   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p name is the name of the egress selection  Currently supported values are  quot controlplane quot    quot master quot    quot etcd quot  and  quot cluster quot  The  quot master quot  egress selector is deprecated in favor of  quot controlplane quot   p    td    tr   tr  td  code connection  code   B  Required   B  br    a href   apiserver k8s io v1alpha1 Connection   code Connection  code   a    td   td      p connection is the exact information used to configure the egress selection  p    td    tr    tbody    table       ExtraMapping        apiserver k8s io v1alpha1 ExtraMapping          Appears in        ClaimMappings   apiserver k8s io v1alpha1 ClaimMappings     p ExtraMapping provides the configuration for a single extra mapping   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code key  code   B  Required   B  br    code string  code    td   td      p key is a string to use as the extra attribute key  key must be a domain prefix path  e g  example org foo   All characters before the first  quot   quot  must be a valid subdomain as defined by RFC 1123  All characters trailing the first  quot   quot  
must be valid HTTP Path characters as defined by RFC 3986  key must be lowercase  Required to be unique   p    td    tr   tr  td  code valueExpression  code   B  Required   B  br    code string  code    td   td      p valueExpression is a CEL expression to extract extra attribute value  valueExpression must produce a string or string array value   quot  quot       and null values are treated as the extra mapping not being present  Empty string values contained within a string array are filtered out   p   p CEL expressions have access to the contents of the token claims  organized into CEL variable   p   ul   li  claims  is a map of claim names to claim values  For example  a variable named  sub  can be accessed as  claims sub   Nested claims can be accessed using dot notation  e g   claims foo bar    li    ul   p Documentation on CEL  https   kubernetes io docs reference using api cel   p    td    tr    tbody    table       Issuer        apiserver k8s io v1alpha1 Issuer          Appears in        JWTAuthenticator   apiserver k8s io v1alpha1 JWTAuthenticator     p Issuer provides the configuration for an external provider s specific settings   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code url  code   B  Required   B  br    code string  code    td   td      p url points to the issuer URL in a format https   url or https   url path  This must match the  quot iss quot  claim in the presented JWT  and the issuer returned from discovery  Same value as the   oidc issuer url flag  Discovery information is fetched from  quot  url   well known openid configuration quot  unless overridden by discoveryURL  Required to be unique across all JWT authenticators  Note that egress selection configuration is not used for this network connection   p    td    tr   tr  td  code discoveryURL  code  br    code string  code    td   td      p discoveryURL  if specified  overrides the URL used to fetch 
discovery information instead of using  quot  url   well known openid configuration quot   The exact value specified is used  so  quot   well known openid configuration quot  must be included in discoveryURL if needed   p   p The  quot issuer quot  field in the fetched discovery information must match the  quot issuer url quot  field in the AuthenticationConfiguration and will be used to validate the  quot iss quot  claim in the presented JWT  This is for scenarios where the well known and jwks endpoints are hosted at a different location than the issuer  such as locally in the cluster    p   p Example  A discovery url that is exposed using kubernetes service  oidc  in namespace  oidc namespace  and discovery information is available at    well known openid configuration   discoveryURL   quot https   oidc oidc namespace  well known openid configuration quot  certificateAuthority is used to verify the TLS connection and the hostname on the leaf certificate must be set to  oidc oidc namespace    p   p curl https   oidc oidc namespace  well known openid configuration   discoveryURL field    issuer   quot https   oidc example com quot    url field     p   p discoveryURL must be different from url  Required to be unique across all JWT authenticators  Note that egress selection configuration is not used for this network connection   p    td    tr   tr  td  code certificateAuthority  code  br    code string  code    td   td      p certificateAuthority contains PEM encoded certificate authority certificates used to validate the connection when fetching discovery information  If unset  the system verifier is used  Same value as the content of the file referenced by the   oidc ca file flag   p    td    tr   tr  td  code audiences  code   B  Required   B  br    code   string  code    td   td      p audiences is the set of acceptable audiences the JWT must be issued to  At least one of the entries must match the  quot aud quot  claim in presented JWTs  Same value as the   oidc 
client id flag  though this field supports an array   Required to be non empty   p    td    tr   tr  td  code audienceMatchPolicy  code  br    a href   apiserver k8s io v1alpha1 AudienceMatchPolicyType   code AudienceMatchPolicyType  code   a    td   td      p audienceMatchPolicy defines how the  quot audiences quot  field is used to match the  quot aud quot  claim in the presented JWT  Allowed values are   p   ol   li  quot MatchAny quot  when multiple audiences are specified and  li   li empty  or unset  or  quot MatchAny quot  when a single audience is specified   li    ol   ul   li   p MatchAny  the  quot aud quot  claim in the presented JWT must match at least one of the entries in the  quot audiences quot  field  For example  if  quot audiences quot  is   quot foo quot    quot bar quot    the  quot aud quot  claim in the presented JWT must contain either  quot foo quot  or  quot bar quot   and may contain both    p    li   li   p  quot  quot   The match policy can be empty  or unset  when a single audience is specified in the  quot audiences quot  field  The  quot aud quot  claim in the presented JWT must contain the single audience  and may contain others    p    li    ul   p For more nuanced audience validation  use claimValidationRules  example  claimValidationRule   expression   sets equivalent claims aud    quot bar quot    quot foo quot    quot baz quot     to require an exact match   p    td    tr    tbody    table       JWTAuthenticator        apiserver k8s io v1alpha1 JWTAuthenticator          Appears in        AuthenticationConfiguration   apiserver k8s io v1alpha1 AuthenticationConfiguration     p JWTAuthenticator provides the configuration for a single JWT authenticator   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code issuer  code   B  Required   B  br    a href   apiserver k8s io v1alpha1 Issuer   code Issuer  code   a    td   td      p issuer contains the basic OIDC 
provider connection options   p    td    tr   tr  td  code claimValidationRules  code  br    a href   apiserver k8s io v1alpha1 ClaimValidationRule   code   ClaimValidationRule  code   a    td   td      p claimValidationRules are rules that are applied to validate token claims to authenticate users   p    td    tr   tr  td  code claimMappings  code   B  Required   B  br    a href   apiserver k8s io v1alpha1 ClaimMappings   code ClaimMappings  code   a    td   td      p claimMappings points claims of a token to be treated as user attributes   p    td    tr   tr  td  code userValidationRules  code  br    a href   apiserver k8s io v1alpha1 UserValidationRule   code   UserValidationRule  code   a    td   td      p userValidationRules are rules that are applied to final user before completing authentication  These allow invariants to be applied to incoming identities such as preventing the use of the system  prefix that is commonly used by Kubernetes components  The validation rules are logically ANDed together and must all return true for the validation to pass   p    td    tr    tbody    table       PrefixedClaimOrExpression        apiserver k8s io v1alpha1 PrefixedClaimOrExpression          Appears in        ClaimMappings   apiserver k8s io v1alpha1 ClaimMappings     p PrefixedClaimOrExpression provides the configuration for a single prefixed claim or expression   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code claim  code  br    code string  code    td   td      p claim is the JWT claim to use  Mutually exclusive with expression   p    td    tr   tr  td  code prefix  code  br    code string  code    td   td      p prefix is prepended to claim s value to prevent clashes with existing names  prefix needs to be set if claim is set and can be the empty string  Mutually exclusive with expression   p    td    tr   tr  td  code expression  code  br    code string  code    td   td      p 
expression represents the expression which will be evaluated by CEL   p   p CEL expressions have access to the contents of the token claims  organized into CEL variable   p   ul   li  claims  is a map of claim names to claim values  For example  a variable named  sub  can be accessed as  claims sub   Nested claims can be accessed using dot notation  e g   claims foo bar    li    ul   p Documentation on CEL  https   kubernetes io docs reference using api cel   p   p Mutually exclusive with claim and prefix   p    td    tr    tbody    table       ProtocolType        apiserver k8s io v1alpha1 ProtocolType        Alias of  string      Appears in        Connection   apiserver k8s io v1alpha1 Connection     p ProtocolType is a set of valid values for Connection ProtocolType  p          TCPTransport        apiserver k8s io v1alpha1 TCPTransport          Appears in        Transport   apiserver k8s io v1alpha1 Transport     p TCPTransport provides the information to connect to konnectivity server via TCP  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code url  code   B  Required   B  br    code string  code    td   td      p URL is the location of the konnectivity server to connect to  As an example it might be  quot https   127 0 0 1 8131 quot   p    td    tr   tr  td  code tlsConfig  code  br    a href   apiserver k8s io v1alpha1 TLSConfig   code TLSConfig  code   a    td   td      p TLSConfig is the config needed to use TLS when connecting to konnectivity server  p    td    tr    tbody    table       TLSConfig        apiserver k8s io v1alpha1 TLSConfig          Appears in        TCPTransport   apiserver k8s io v1alpha1 TCPTransport     p TLSConfig provides the authentication information to connect to konnectivity server Only used with TCPTransport  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code caBundle  code  br    
code string  code    td   td      p caBundle is the file location of the CA to be used to determine trust with the konnectivity server  Must be absent empty if TCPTransport URL is prefixed with http    If absent while TCPTransport URL is prefixed with https     default to system trust roots   p    td    tr   tr  td  code clientKey  code  br    code string  code    td   td      p clientKey is the file location of the client key to be used in mtls handshakes with the konnectivity server  Must be absent empty if TCPTransport URL is prefixed with http    Must be configured if TCPTransport URL is prefixed with https     p    td    tr   tr  td  code clientCert  code  br    code string  code    td   td      p clientCert is the file location of the client certificate to be used in mtls handshakes with the konnectivity server  Must be absent empty if TCPTransport URL is prefixed with http    Must be configured if TCPTransport URL is prefixed with https     p    td    tr    tbody    table       Transport        apiserver k8s io v1alpha1 Transport          Appears in        Connection   apiserver k8s io v1alpha1 Connection     p Transport defines the transport configurations we use to dial to the konnectivity server  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code tcp  code  br    a href   apiserver k8s io v1alpha1 TCPTransport   code TCPTransport  code   a    td   td      p TCP is the TCP configuration for communicating with the konnectivity server via TCP ProxyProtocol of GRPC is not supported with TCP transport at the moment Requires at least one of TCP or UDS to be set  p    td    tr   tr  td  code uds  code  br    a href   apiserver k8s io v1alpha1 UDSTransport   code UDSTransport  code   a    td   td      p UDS is the UDS configuration for communicating with the konnectivity server via UDS Requires at least one of TCP or UDS to be set  p    td    tr    tbody    table       UDSTransport       
 apiserver k8s io v1alpha1 UDSTransport          Appears in        Transport   apiserver k8s io v1alpha1 Transport     p UDSTransport provides the information to connect to konnectivity server via UDS  p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code udsName  code   B  Required   B  br    code string  code    td   td      p UDSName is the name of the unix domain socket to connect to konnectivity server This does not use a unix    prefix   Eg   etc srv kubernetes konnectivity server konnectivity server socket   p    td    tr    tbody    table       UserValidationRule        apiserver k8s io v1alpha1 UserValidationRule          Appears in        JWTAuthenticator   apiserver k8s io v1alpha1 JWTAuthenticator     p UserValidationRule provides the configuration for a single user info validation rule   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code expression  code   B  Required   B  br    code string  code    td   td      p expression represents the expression which will be evaluated by CEL  Must return true for the validation to pass   p   p CEL expressions have access to the contents of UserInfo  organized into CEL variable   p   ul   li  user    authentication k8s io v1  Kind UserInfo object Refer to https   github com kubernetes api blob release 1 28 authentication v1 types go L105 L122 for the definition  API documentation  https   kubernetes io docs reference generated kubernetes api v1 28  userinfo v1 authentication k8s io  li    ul   p Documentation on CEL  https   kubernetes io docs reference using api cel   p    td    tr   tr  td  code message  code  br    code string  code    td   td      p message customizes the returned error message when rule returns false  message is a literal string   p    td    tr    tbody    table       WebhookConfiguration        apiserver k8s io v1alpha1 WebhookConfiguration    
      Appears in        AuthorizerConfiguration   apiserver k8s io v1alpha1 AuthorizerConfiguration      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code authorizedTTL  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p The duration to cache  authorized  responses from the webhook authorizer  Same as setting  code   authorization webhook cache authorized ttl  code  flag Default  5m0s  p    td    tr   tr  td  code unauthorizedTTL  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p The duration to cache  unauthorized  responses from the webhook authorizer  Same as setting  code   authorization webhook cache unauthorized ttl  code  flag Default  30s  p    td    tr   tr  td  code timeout  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p Timeout for the webhook request Maximum allowed value is 30s  Required  no default value   p    td    tr   tr  td  code subjectAccessReviewVersion  code   B  Required   B  br    code string  code    td   td      p The API version of the authorization k8s io SubjectAccessReview to send to and expect from the webhook  Same as setting  code   authorization webhook version  code  flag Valid values  v1beta1  v1 Required  no default value  p    td    tr   tr  td  code matchConditionSubjectAccessReviewVersion  code   B  Required   B  br    code string  code    td   td      p MatchConditionSubjectAccessReviewVersion specifies the SubjectAccessReview version the CEL expressions are evaluated against Valid values  v1 Required  no default value  p    td    tr   tr  td  code failurePolicy  code   B  Required   B  br    code string  code    td   td      p Controls the 
authorization decision when a webhook request fails to complete or returns a malformed response or errors evaluating matchConditions  Valid values   p   ul   li NoOpinion  continue to subsequent authorizers to see if one of them allows the request  li   li Deny  reject the request without consulting subsequent authorizers Required  with no default   li    ul    td    tr   tr  td  code connectionInfo  code   B  Required   B  br    a href   apiserver k8s io v1alpha1 WebhookConnectionInfo   code WebhookConnectionInfo  code   a    td   td      p ConnectionInfo defines how we talk to the webhook  p    td    tr   tr  td  code matchConditions  code   B  Required   B  br    a href   apiserver k8s io v1alpha1 WebhookMatchCondition   code   WebhookMatchCondition  code   a    td   td      p matchConditions is a list of conditions that must be met for a request to be sent to this webhook  An empty list of matchConditions matches all requests  There are a maximum of 64 match conditions allowed   p   p The exact matching logic is  in order    p   ol   li If at least one matchCondition evaluates to FALSE  then the webhook is skipped   li   li If ALL matchConditions evaluate to TRUE  then the webhook is called   li   li If at least one matchCondition evaluates to an error  but none are FALSE    ul   li If failurePolicy Deny  then the webhook rejects the request  li   li If failurePolicy NoOpinion  then the error is ignored and the webhook is skipped  li    ul    li    ol    td    tr    tbody    table       WebhookConnectionInfo        apiserver k8s io v1alpha1 WebhookConnectionInfo          Appears in        WebhookConfiguration   apiserver k8s io v1alpha1 WebhookConfiguration      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code type  code   B  Required   B  br    code string  code    td   td      p Controls how the webhook should communicate with the server  Valid values   p   ul   li KubeConfigFile  use 
the file specified in kubeConfigFile to locate the server   li   li InClusterConfig  use the in cluster configuration to call the SubjectAccessReview API hosted by kube apiserver  This mode is not allowed for kube apiserver   li    ul    td    tr   tr  td  code kubeConfigFile  code   B  Required   B  br    code string  code    td   td      p Path to KubeConfigFile for connection info Required  if connectionInfo Type is KubeConfig  p    td    tr    tbody    table       WebhookMatchCondition        apiserver k8s io v1alpha1 WebhookMatchCondition          Appears in        WebhookConfiguration   apiserver k8s io v1alpha1 WebhookConfiguration      table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code expression  code   B  Required   B  br    code string  code    td   td      p expression represents the expression which will be evaluated by CEL  Must evaluate to bool  CEL expressions have access to the contents of the SubjectAccessReview in v1 version  If version specified by subjectAccessReviewVersion in the request variable is v1beta1  the contents would be converted to the v1 version before evaluating the CEL expression   p   p Documentation on CEL  https   kubernetes io docs reference using api cel   p    td    tr    tbody    table   "}
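As an illustrative sketch of how the JWT authentication types fit together (the issuer URL, audience, and claim names are placeholder values, and the `apiVersion` follows the tables above; your Kubernetes version may expect a different config group or version), an `AuthenticationConfiguration` wiring a single `JWTAuthenticator` might look like:

```yaml
apiVersion: apiserver.k8s.io/v1alpha1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://oidc.example.com        # placeholder issuer; must match the "iss" claim
    audiences:
    - my-app                             # placeholder; matches the "aud" claim
    audienceMatchPolicy: MatchAny
  claimValidationRules:
  - expression: 'claims.hd == "example.com"'   # CEL; must produce a boolean
    message: the hd claim must be set to example.com
  claimMappings:
    username:
      claim: sub        # no defaulting: claim and prefix must be set explicitly
      prefix: ""
    groups:
      claim: groups
      prefix: ""
    uid:
      expression: claims.sub             # expression and claim are mutually exclusive
    extra:
    - key: example.com/tenant            # key must be a lowercase domain-prefix path
      valueExpression: claims.tenant
  userValidationRules:
  - expression: "!user.username.startsWith('system:')"   # rules are ANDed together
    message: username cannot use reserved system: prefix
```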
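Similarly, a hedged sketch of an `AuthorizationConfiguration` with a single webhook authorizer, combining `AuthorizerConfiguration`, `WebhookConfiguration`, `WebhookConnectionInfo`, and `WebhookMatchCondition` (the name, kubeconfig path, and match-condition expression are placeholders):

```yaml
apiVersion: apiserver.k8s.io/v1alpha1
kind: AuthorizationConfiguration
authorizers:                 # ordered list; must contain at least one authorizer
- type: Webhook
  name: example-webhook      # must be a DNS1123 label or subdomain
  webhook:
    authorizedTTL: 5m0s      # cache TTL for 'authorized' responses
    unauthorizedTTL: 30s
    timeout: 3s              # maximum allowed value is 30s
    subjectAccessReviewVersion: v1
    matchConditionSubjectAccessReviewVersion: v1
    failurePolicy: Deny      # or NoOpinion to fall through to later authorizers
    connectionInfo:
      type: KubeConfigFile
      kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig   # placeholder path
    matchConditions:
    - expression: "request.resourceAttributes.namespace != 'kube-system'"
```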
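And an illustrative `EgressSelectorConfiguration` routing the &quot;cluster&quot; egress selection through a konnectivity server over a unix domain socket (the socket path is a placeholder; note `udsName` does not take a `unix://` prefix):

```yaml
apiVersion: apiserver.k8s.io/v1alpha1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster              # one of "controlplane", "etcd", "cluster"
  connection:
    proxyProtocol: GRPC      # transport is required for HTTPConnect or GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```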
---
title: Image Policy API (v1alpha1)
content_type: tool-reference
package: imagepolicy.k8s.io/v1alpha1
auto_generated: true
---

## Resource Types 

- [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview)

## `ImageReview`     {#imagepolicy-k8s-io-v1alpha1-ImageReview}

<p>ImageReview checks if the set of images in a pod are allowed.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>apiVersion</code><br/>string</td><td><code>imagepolicy.k8s.io/v1alpha1</code></td></tr>
<tr><td><code>kind</code><br/>string</td><td><code>ImageReview</code></td></tr>
<tr><td><code>metadata</code><br/>
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#objectmeta-v1-meta"><code>meta/v1.ObjectMeta</code></a>
</td>
<td>
   <p>Standard object's metadata.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata</p>
Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field.</td>
</tr>
<tr><td><code>spec</code> <B>[Required]</B><br/>
<a href="#imagepolicy-k8s-io-v1alpha1-ImageReviewSpec"><code>ImageReviewSpec</code></a>
</td>
<td>
   <p>Spec holds information about the pod being evaluated</p>
</td>
</tr>
<tr><td><code>status</code><br/>
<a href="#imagepolicy-k8s-io-v1alpha1-ImageReviewStatus"><code>ImageReviewStatus</code></a>
</td>
<td>
   <p>Status is filled in by the backend and indicates whether the pod should be allowed.</p>
</td>
</tr>
</tbody>
</table>

## `ImageReviewContainerSpec`     {#imagepolicy-k8s-io-v1alpha1-ImageReviewContainerSpec}

**Appears in:**

- [ImageReviewSpec](#imagepolicy-k8s-io-v1alpha1-ImageReviewSpec)

<p>ImageReviewContainerSpec is a description of a container within the pod creation request.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>image</code><br/><code>string</code></td>
<td><p>This can be in the form image:tag or image@SHA:012345679abcdef.</p></td></tr>
</tbody>
</table>

## `ImageReviewSpec`     {#imagepolicy-k8s-io-v1alpha1-ImageReviewSpec}

**Appears in:**

- [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview)

<p>ImageReviewSpec is a description of the pod creation request.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>containers</code><br/>
<a href="#imagepolicy-k8s-io-v1alpha1-ImageReviewContainerSpec"><code>[]ImageReviewContainerSpec</code></a></td>
<td><p>Containers is a list of a subset of the information in each container of the Pod being created.</p></td></tr>
<tr><td><code>annotations</code><br/><code>map[string]string</code></td>
<td><p>Annotations is a list of key-value pairs extracted from the Pod's annotations.
It only includes keys which match the pattern <code>*.image-policy.k8s.io/*</code>.
It is up to each webhook backend to determine how to interpret these annotations, if at all.</p></td></tr>
<tr><td><code>namespace</code><br/><code>string</code></td>
<td><p>Namespace is the namespace the pod is being created in.</p></td></tr>
</tbody>
</table>

## `ImageReviewStatus`     {#imagepolicy-k8s-io-v1alpha1-ImageReviewStatus}

**Appears in:**

- [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview)

<p>ImageReviewStatus is the result of the review for the pod creation request.</p>

<table class="table">
<thead><tr><th width="30%">Field</th><th>Description</th></tr></thead>
<tbody>
<tr><td><code>allowed</code> <B>[Required]</B><br/><code>bool</code></td>
<td><p>Allowed indicates that all images were allowed to be run.</p></td></tr>
<tr><td><code>reason</code><br/><code>string</code></td>
<td><p>Reason should be empty unless Allowed is false in which case it
may contain a short description of what is wrong.  Kubernetes
may truncate excessively long errors when displaying to the user.</p></td></tr>
<tr><td><code>auditAnnotations</code><br/><code>map[string]string</code></td>
<td><p>AuditAnnotations will be added to the attributes object of the
admission controller request using 'AddAnnotation'.  The keys should
be prefix-less (i.e., the admission controller will add an
appropriate prefix).</p></td></tr>
</tbody>
</table>
the  code metadata  code  field   td    tr   tr  td  code spec  code   B  Required   B  br    a href   imagepolicy k8s io v1alpha1 ImageReviewSpec   code ImageReviewSpec  code   a    td   td      p Spec holds information about the pod being evaluated  p    td    tr   tr  td  code status  code  br    a href   imagepolicy k8s io v1alpha1 ImageReviewStatus   code ImageReviewStatus  code   a    td   td      p Status is filled in by the backend and indicates whether the pod should be allowed   p    td    tr    tbody    table       ImageReviewContainerSpec        imagepolicy k8s io v1alpha1 ImageReviewContainerSpec          Appears in        ImageReviewSpec   imagepolicy k8s io v1alpha1 ImageReviewSpec     p ImageReviewContainerSpec is a description of a container within the pod creation request   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code image  code  br    code string  code    td   td      p This can be in the form image tag or image SHA 012345679abcdef   p    td    tr    tbody    table       ImageReviewSpec        imagepolicy k8s io v1alpha1 ImageReviewSpec          Appears in        ImageReview   imagepolicy k8s io v1alpha1 ImageReview     p ImageReviewSpec is a description of the pod creation request   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code containers  code  br    a href   imagepolicy k8s io v1alpha1 ImageReviewContainerSpec   code   ImageReviewContainerSpec  code   a    td   td      p Containers is a list of a subset of the information in each container of the Pod being created   p    td    tr   tr  td  code annotations  code  br    code map string string  code    td   td      p Annotations is a list of key value pairs extracted from the Pod s annotations  It only includes keys which match the pattern  code   image policy k8s io    code   It is up to each webhook backend to determine how to 
interpret these annotations  if at all   p    td    tr   tr  td  code namespace  code  br    code string  code    td   td      p Namespace is the namespace the pod is being created in   p    td    tr    tbody    table       ImageReviewStatus        imagepolicy k8s io v1alpha1 ImageReviewStatus          Appears in        ImageReview   imagepolicy k8s io v1alpha1 ImageReview     p ImageReviewStatus is the result of the review for the pod creation request   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code allowed  code   B  Required   B  br    code bool  code    td   td      p Allowed indicates that all images were allowed to be run   p    td    tr   tr  td  code reason  code  br    code string  code    td   td      p Reason should be empty unless Allowed is false in which case it may contain a short description of what is wrong   Kubernetes may truncate excessively long errors when displaying to the user   p    td    tr   tr  td  code auditAnnotations  code  br    code map string string  code    td   td      p AuditAnnotations will be added to the attributes object of the admission controller request using  AddAnnotation    The keys should be prefix less  i e   the admission controller will add an appropriate prefix    p    td    tr    tbody    table   "}
{"questions":"kubernetes reference p Package v1 is the v1 version of the API p contenttype tool reference title kube apiserver Configuration v1 Resource Types package apiserver config k8s io v1 autogenerated true","answers":"---\ntitle: kube-apiserver Configuration (v1)\ncontent_type: tool-reference\npackage: apiserver.config.k8s.io\/v1\nauto_generated: true\n---\n<p>Package v1 is the v1 version of the API.<\/p>\n\n\n## Resource Types \n\n\n- [AdmissionConfiguration](#apiserver-config-k8s-io-v1-AdmissionConfiguration)\n- [EncryptionConfiguration](#apiserver-config-k8s-io-v1-EncryptionConfiguration)\n  \n\n## `AdmissionConfiguration`     {#apiserver-config-k8s-io-v1-AdmissionConfiguration}\n    \n\n\n<p>AdmissionConfiguration provides versioned configuration for admission controllers.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>apiserver.config.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>AdmissionConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>plugins<\/code><br\/>\n<a href=\"#apiserver-config-k8s-io-v1-AdmissionPluginConfiguration\"><code>[]AdmissionPluginConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>Plugins allows specifying a configuration per admission control plugin.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `EncryptionConfiguration`     {#apiserver-config-k8s-io-v1-EncryptionConfiguration}\n    \n\n\n<p>EncryptionConfiguration stores the complete configuration for encryption providers.\nIt also allows the use of wildcards to specify the resources that should be encrypted.\nUse '&ast;.&lt;group&gt;' to encrypt all resources within a group or '&ast;.&ast;' to encrypt all resources.\n'&ast;.' can be used to encrypt all resource in the core group.  
'&ast;.&ast;' will encrypt all\nresources, even custom resources that are added after API server start.\nUse of wildcards that overlap within the same resource list or across multiple\nentries is not allowed since part of the configuration would be ineffective.\nResource lists are processed in order, with earlier lists taking precedence.<\/p>\n<p>Example:<\/p>\n<pre><code>kind: EncryptionConfiguration\napiVersion: apiserver.config.k8s.io\/v1\nresources:\n- resources:\n  - events\n  providers:\n  - identity: {}  # do not encrypt events even though *.* is specified below\n- resources:\n  - secrets\n  - configmaps\n  - pandas.awesome.bears.example\n  providers:\n  - aescbc:\n      keys:\n      - name: key1\n        secret: c2VjcmV0IGlzIHNlY3VyZQ==\n- resources:\n  - '*.apps'\n  providers:\n  - aescbc:\n      keys:\n      - name: key2\n        secret: c2VjcmV0IGlzIHNlY3VyZSwgb3IgaXMgaXQ\/Cg==\n- resources:\n  - '*.*'\n  providers:\n  - aescbc:\n      keys:\n      - name: key3\n        secret: c2VjcmV0IGlzIHNlY3VyZSwgSSB0aGluaw==<\/code><\/pre>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>apiserver.config.k8s.io\/v1<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>EncryptionConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>resources<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-config-k8s-io-v1-ResourceConfiguration\"><code>[]ResourceConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>resources is a list containing resources, and their corresponding encryption providers.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AESConfiguration`     {#apiserver-config-k8s-io-v1-AESConfiguration}\n    \n\n**Appears in:**\n\n- [ProviderConfiguration](#apiserver-config-k8s-io-v1-ProviderConfiguration)\n\n\n<p>AESConfiguration contains the API configuration for an AES transformer.<\/p>\n\n\n<table 
class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>keys<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-config-k8s-io-v1-Key\"><code>[]Key<\/code><\/a>\n<\/td>\n<td>\n   <p>keys is a list of keys to be used for creating the AES transformer.\nEach key has to be 32 bytes long for AES-CBC and 16, 24 or 32 bytes for AES-GCM.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `AdmissionPluginConfiguration`     {#apiserver-config-k8s-io-v1-AdmissionPluginConfiguration}\n    \n\n**Appears in:**\n\n- [AdmissionConfiguration](#apiserver-config-k8s-io-v1-AdmissionConfiguration)\n\n\n<p>AdmissionPluginConfiguration provides the configuration for a single plug-in.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Name is the name of the admission controller.\nIt must match the registered admission plugin name.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>path<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>Path is the path to a configuration file that contains the plugin's\nconfiguration<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>configuration<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/runtime#Unknown\"><code>k8s.io\/apimachinery\/pkg\/runtime.Unknown<\/code><\/a>\n<\/td>\n<td>\n   <p>Configuration is an embedded configuration object to be used as the plugin's\nconfiguration. 
If present, it will be used instead of the path to the configuration file.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `IdentityConfiguration`     {#apiserver-config-k8s-io-v1-IdentityConfiguration}\n    \n\n**Appears in:**\n\n- [ProviderConfiguration](#apiserver-config-k8s-io-v1-ProviderConfiguration)\n\n\n<p>IdentityConfiguration is an empty struct to allow identity transformer in provider configuration.<\/p>\n\n\n\n\n## `KMSConfiguration`     {#apiserver-config-k8s-io-v1-KMSConfiguration}\n    \n\n**Appears in:**\n\n- [ProviderConfiguration](#apiserver-config-k8s-io-v1-ProviderConfiguration)\n\n\n<p>KMSConfiguration contains the name, cache size and path to configuration file for a KMS based envelope transformer.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>apiVersion<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>apiVersion of KeyManagementService<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>name is the name of the KMS plugin to be used.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>cachesize<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p>cachesize is the maximum number of secrets which are cached in memory. The default value is 1000.\nSet to a negative value to disable caching. This field is only allowed for KMS v1 providers.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>endpoint<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>endpoint is the gRPC server listening address, for example &quot;unix:\/\/\/var\/run\/kms-provider.sock&quot;.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>timeout<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p>timeout for gRPC calls to kms-plugin (ex. 5s). 
The default is 3 seconds.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Key`     {#apiserver-config-k8s-io-v1-Key}\n    \n\n**Appears in:**\n\n- [AESConfiguration](#apiserver-config-k8s-io-v1-AESConfiguration)\n\n- [SecretboxConfiguration](#apiserver-config-k8s-io-v1-SecretboxConfiguration)\n\n\n<p>Key contains name and secret of the provided key for a transformer.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>name is the name of the key to be used while storing data to disk.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>secret<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>secret is the actual key, encoded in base64.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ProviderConfiguration`     {#apiserver-config-k8s-io-v1-ProviderConfiguration}\n    \n\n**Appears in:**\n\n- [ResourceConfiguration](#apiserver-config-k8s-io-v1-ResourceConfiguration)\n\n\n<p>ProviderConfiguration stores the provided configuration for an encryption provider.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>aesgcm<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-config-k8s-io-v1-AESConfiguration\"><code>AESConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>aesgcm is the configuration for the AES-GCM transformer.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>aescbc<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-config-k8s-io-v1-AESConfiguration\"><code>AESConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>aescbc is the configuration for the AES-CBC transformer.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>secretbox<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-config-k8s-io-v1-SecretboxConfiguration\"><code>SecretboxConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>secretbox is the configuration 
for the Secretbox based transformer.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>identity<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-config-k8s-io-v1-IdentityConfiguration\"><code>IdentityConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>identity is the (empty) configuration for the identity transformer.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kms<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-config-k8s-io-v1-KMSConfiguration\"><code>KMSConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>kms contains the name, cache size and path to configuration file for a KMS based envelope transformer.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ResourceConfiguration`     {#apiserver-config-k8s-io-v1-ResourceConfiguration}\n    \n\n**Appears in:**\n\n- [EncryptionConfiguration](#apiserver-config-k8s-io-v1-EncryptionConfiguration)\n\n\n<p>ResourceConfiguration stores per resource configuration.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>resources<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p>resources is a list of kubernetes resources which have to be encrypted. The resource names are derived from <code>resource<\/code> or <code>resource.group<\/code> of the group\/version\/resource.\neg: pandas.awesome.bears.example is a custom resource with 'group': awesome.bears.example, 'resource': pandas.\nUse '&ast;.&ast;' to encrypt all resources and '&ast;.&lt;group&gt;' to encrypt all resources in a specific group.\neg: '&ast;.awesome.bears.example' will encrypt all resources in the group 'awesome.bears.example'.\neg: '&ast;.' 
will encrypt all resources in the core group (such as pods, configmaps, etc).<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>providers<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-config-k8s-io-v1-ProviderConfiguration\"><code>[]ProviderConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p>providers is a list of transformers to be used for reading and writing the resources to disk.\neg: aesgcm, aescbc, secretbox, identity, kms.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `SecretboxConfiguration`     {#apiserver-config-k8s-io-v1-SecretboxConfiguration}\n    \n\n**Appears in:**\n\n- [ProviderConfiguration](#apiserver-config-k8s-io-v1-ProviderConfiguration)\n\n\n<p>SecretboxConfiguration contains the API configuration for a Secretbox transformer.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>keys<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#apiserver-config-k8s-io-v1-Key\"><code>[]Key<\/code><\/a>\n<\/td>\n<td>\n   <p>keys is a list of keys to be used for creating the Secretbox transformer.\nEach key has to be 32 bytes long.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n  ","site":"kubernetes reference","answers_cleaned":"    title  kube apiserver Configuration  v1  content type  tool reference package  apiserver config k8s io v1 auto generated  true      p Package v1 is the v1 version of the API   p       Resource Types       AdmissionConfiguration   apiserver config k8s io v1 AdmissionConfiguration     EncryptionConfiguration   apiserver config k8s io v1 EncryptionConfiguration          AdmissionConfiguration        apiserver config k8s io v1 AdmissionConfiguration          p AdmissionConfiguration provides versioned configuration for admission controllers   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code apiserver config k8s io v1  code   td   tr   tr  
td  code kind  code  br  string  td  td  code AdmissionConfiguration  code   td   tr           tr  td  code plugins  code  br    a href   apiserver config k8s io v1 AdmissionPluginConfiguration   code   AdmissionPluginConfiguration  code   a    td   td      p Plugins allows specifying a configuration per admission control plugin   p    td    tr    tbody    table       EncryptionConfiguration        apiserver config k8s io v1 EncryptionConfiguration          p EncryptionConfiguration stores the complete configuration for encryption providers  It also allows the use of wildcards to specify the resources that should be encrypted  Use   ast   lt group gt   to encrypt all resources within a group or   ast   ast   to encrypt all resources    ast    can be used to encrypt all resource in the core group     ast   ast   will encrypt all resources  even custom resources that are added after API server start  Use of wildcards that overlap within the same resource list or across multiple entries are not allowed since part of the configuration would be ineffective  Resource lists are processed in order  with earlier lists taking precedence   p   p Example   p   pre  code kind  EncryptionConfiguration apiVersion  apiserver config k8s io v1 resources    resources      events   providers      identity        do not encrypt events even though     is specified below   resources      secrets     configmaps     pandas awesome bears example   providers      aescbc        keys          name  key1         secret  c2VjcmV0IGlzIHNlY3VyZQ     resources         apps    providers      aescbc        keys          name  key2         secret  c2VjcmV0IGlzIHNlY3VyZSwgb3IgaXMgaXQ Cg     resources              providers      aescbc        keys          name  key3         secret  c2VjcmV0IGlzIHNlY3VyZSwgSSB0aGluaw    code   pre     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody        tr  td  code apiVersion  code  br  string  td  td  code apiserver 
config k8s io v1  code   td   tr   tr  td  code kind  code  br  string  td  td  code EncryptionConfiguration  code   td   tr           tr  td  code resources  code   B  Required   B  br    a href   apiserver config k8s io v1 ResourceConfiguration   code   ResourceConfiguration  code   a    td   td      p resources is a list containing resources  and their corresponding encryption providers   p    td    tr    tbody    table       AESConfiguration        apiserver config k8s io v1 AESConfiguration          Appears in        ProviderConfiguration   apiserver config k8s io v1 ProviderConfiguration     p AESConfiguration contains the API configuration for an AES transformer   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code keys  code   B  Required   B  br    a href   apiserver config k8s io v1 Key   code   Key  code   a    td   td      p keys is a list of keys to be used for creating the AES transformer  Each key has to be 32 bytes long for AES CBC and 16  24 or 32 bytes for AES GCM   p    td    tr    tbody    table       AdmissionPluginConfiguration        apiserver config k8s io v1 AdmissionPluginConfiguration          Appears in        AdmissionConfiguration   apiserver config k8s io v1 AdmissionConfiguration     p AdmissionPluginConfiguration provides the configuration for a single plug in   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p Name is the name of the admission controller  It must match the registered admission plugin name   p    td    tr   tr  td  code path  code  br    code string  code    td   td      p Path is the path to a configuration file that contains the plugin s configuration  p    td    tr   tr  td  code configuration  code  br    a href  https   pkg go dev k8s io apimachinery pkg runtime Unknown   code k8s io 
apimachinery pkg runtime Unknown  code   a    td   td      p Configuration is an embedded configuration object to be used as the plugin s configuration  If present  it will be used instead of the path to the configuration file   p    td    tr    tbody    table       IdentityConfiguration        apiserver config k8s io v1 IdentityConfiguration          Appears in        ProviderConfiguration   apiserver config k8s io v1 ProviderConfiguration     p IdentityConfiguration is an empty struct to allow identity transformer in provider configuration   p          KMSConfiguration        apiserver config k8s io v1 KMSConfiguration          Appears in        ProviderConfiguration   apiserver config k8s io v1 ProviderConfiguration     p KMSConfiguration contains the name  cache size and path to configuration file for a KMS based envelope transformer   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code apiVersion  code  br    code string  code    td   td      p apiVersion of KeyManagementService  p    td    tr   tr  td  code name  code   B  Required   B  br    code string  code    td   td      p name is the name of the KMS plugin to be used   p    td    tr   tr  td  code cachesize  code  br    code int32  code    td   td      p cachesize is the maximum number of secrets which are cached in memory  The default value is 1000  Set to a negative value to disable caching  This field is only allowed for KMS v1 providers   p    td    tr   tr  td  code endpoint  code   B  Required   B  br    code string  code    td   td      p endpoint is the gRPC server listening address  for example  quot unix    var run kms provider sock quot    p    td    tr   tr  td  code timeout  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p timeout for gRPC calls to kms plugin  ex  5s   The default is 3 seconds   p    td    tr    tbody    table    
   Key        apiserver config k8s io v1 Key          Appears in        AESConfiguration   apiserver config k8s io v1 AESConfiguration      SecretboxConfiguration   apiserver config k8s io v1 SecretboxConfiguration     p Key contains name and secret of the provided key for a transformer   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code   B  Required   B  br    code string  code    td   td      p name is the name of the key to be used while storing data to disk   p    td    tr   tr  td  code secret  code   B  Required   B  br    code string  code    td   td      p secret is the actual key  encoded in base64   p    td    tr    tbody    table       ProviderConfiguration        apiserver config k8s io v1 ProviderConfiguration          Appears in        ResourceConfiguration   apiserver config k8s io v1 ResourceConfiguration     p ProviderConfiguration stores the provided configuration for an encryption provider   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code aesgcm  code   B  Required   B  br    a href   apiserver config k8s io v1 AESConfiguration   code AESConfiguration  code   a    td   td      p aesgcm is the configuration for the AES GCM transformer   p    td    tr   tr  td  code aescbc  code   B  Required   B  br    a href   apiserver config k8s io v1 AESConfiguration   code AESConfiguration  code   a    td   td      p aescbc is the configuration for the AES CBC transformer   p    td    tr   tr  td  code secretbox  code   B  Required   B  br    a href   apiserver config k8s io v1 SecretboxConfiguration   code SecretboxConfiguration  code   a    td   td      p secretbox is the configuration for the Secretbox based transformer   p    td    tr   tr  td  code identity  code   B  Required   B  br    a href   apiserver config k8s io v1 IdentityConfiguration   code IdentityConfiguration  code   a    
td   td      p identity is the  empty  configuration for the identity transformer   p    td    tr   tr  td  code kms  code   B  Required   B  br    a href   apiserver config k8s io v1 KMSConfiguration   code KMSConfiguration  code   a    td   td      p kms contains the name  cache size and path to configuration file for a KMS based envelope transformer   p    td    tr    tbody    table       ResourceConfiguration        apiserver config k8s io v1 ResourceConfiguration          Appears in        EncryptionConfiguration   apiserver config k8s io v1 EncryptionConfiguration     p ResourceConfiguration stores per resource configuration   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code resources  code   B  Required   B  br    code   string  code    td   td      p resources is a list of kubernetes resources which have to be encrypted  The resource names are derived from  code resource  code  or  code resource group  code  of the group version resource  eg  pandas awesome bears example is a custom resource with  group   awesome bears example   resource   pandas  Use   ast   ast   to encrypt all resources and   ast   lt group gt   to encrypt all resources in a specific group  eg    ast  awesome bears example  will encrypt all resources in the group  awesome bears example   eg    ast    will encrypt all resources in the core group  such as pods  configmaps  etc    p    td    tr   tr  td  code providers  code   B  Required   B  br    a href   apiserver config k8s io v1 ProviderConfiguration   code   ProviderConfiguration  code   a    td   td      p providers is a list of transformers to be used for reading and writing the resources to disk  eg  aesgcm  aescbc  secretbox  identity  kms   p    td    tr    tbody    table       SecretboxConfiguration        apiserver config k8s io v1 SecretboxConfiguration          Appears in        ProviderConfiguration   apiserver config k8s io v1 
ProviderConfiguration     p SecretboxConfiguration contains the API configuration for a Secretbox transformer   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code keys  code   B  Required   B  br    a href   apiserver config k8s io v1 Key   code   Key  code   a    td   td      p keys is a list of keys to be used for creating the Secretbox transformer  Each key has to be 32 bytes long   p    td    tr    tbody    table    "}
{"questions":"kubernetes reference title kubeadm Configuration v1beta4 package kubeadm k8s io v1beta4 contenttype tool reference h2 Overview h2 p Package v1beta4 defines the v1beta4 version of the kubeadm configuration file format This version improves on the v1beta3 format by fixing some minor issues and adding a few new fields p p A list of changes since v1beta3 p autogenerated true","answers":"---\ntitle: kubeadm Configuration (v1beta4)\ncontent_type: tool-reference\npackage: kubeadm.k8s.io\/v1beta4\nauto_generated: true\n---\n<h2>Overview<\/h2>\n<p>Package v1beta4 defines the v1beta4 version of the kubeadm configuration file format.\nThis version improves on the v1beta3 format by fixing some minor issues and adding a few new fields.<\/p>\n<p>A list of changes since v1beta3:<\/p>\n<ul>\n<li>TODO https:\/\/github.com\/kubernetes\/kubeadm\/issues\/2890<\/li>\n<li>Support custom environment variables in control plane components under <code>ClusterConfiguration<\/code>.\nUse <code>apiServer.extraEnvs<\/code>, <code>controllerManager.extraEnvs<\/code>, <code>scheduler.extraEnvs<\/code>,\n<code>etcd.local.extraEnvs<\/code>.<\/li>\n<li>The <code>ResetConfiguration<\/code> API type is now supported in v1beta4.\nUsers are able to reset a node by passing a <code>--config<\/code> file to <code>kubeadm reset<\/code>.<\/li>\n<li>Dry run mode is now configureable in InitConfiguration and JoinConfiguration.<\/li>\n<li>Replace the existing string\/string extra argument maps with structured extra arguments\nthat support duplicates. The change applies to <code>ClusterConfiguration<\/code> - <code>apiServer.extraArgs<\/code>,\n<code>controllerManager.extraArgs<\/code>, <code>scheduler.extraArgs<\/code>, <code>etcd.local.extraArgs<\/code>.\nAlso to <code>nodeRegistration.kubeletExtraArgs<\/code>.<\/li>\n<li>Add <code>ClusterConfiguration.encryptionAlgorithm<\/code> that can be used to set the asymmetric\nencryption algorithm used for this cluster's keys and certificates. 
Can be one of\n<code>&quot;RSA-2048&quot;<\/code> (default), <code>&quot;RSA-3072&quot;<\/code>, <code>&quot;RSA-4096&quot;<\/code> or <code>&quot;ECDSA-P256&quot;<\/code>.<\/li>\n<li>Add <code>ClusterConfiguration.dns.disabled<\/code> and <code>ClusterConfiguration.proxy.disabled<\/code>\nthat can be used to disable the CoreDNS and kube-proxy addons during cluster\ninitialization. Skipping the related addon phases during cluster creation will\nset the same fields to <code>false<\/code>.<\/li>\n<li>Add the <code>nodeRegistration.imagePullSerial<\/code> field in <code>InitConfiguration<\/code> and <code>JoinConfiguration<\/code>, which\ncan be used to control whether kubeadm pulls images serially or in parallel.<\/li>\n<li>The <code>UpgradeConfiguration<\/code> kubeadm API is now supported in v1beta4 when passing\n<code>--config<\/code> to <code>kubeadm upgrade<\/code> subcommands. Usage of component configuration for <code>kubelet<\/code> and <code>kube-proxy<\/code>,\n<code>InitConfiguration<\/code> and <code>ClusterConfiguration<\/code> is deprecated and will be ignored when passing <code>--config<\/code> to\n<code>upgrade<\/code> subcommands.<\/li>\n<li>Add a <code>Timeouts<\/code> structure to <code>InitConfiguration<\/code>, <code>JoinConfiguration<\/code>, <code>ResetConfiguration<\/code> and <code>UpgradeConfiguration<\/code>\nthat can be used to configure various timeouts.<\/li>\n<li>Add <code>certificateValidityPeriod<\/code> and <code>caCertificateValidityPeriod<\/code> fields to <code>ClusterConfiguration<\/code>. These fields\ncan be used to control the validity period of certificates generated by kubeadm during sub-commands such as <code>init<\/code>,\n<code>join<\/code>, <code>upgrade<\/code> and <code>certs<\/code>. Default values continue to be 1 year for non-CA certificates and 10 years for CA\ncertificates. 
Only non-CA certificates continue to be renewable by <code>kubeadm certs renew<\/code>.<\/li>\n<\/ul>\n<h2>Migration from old kubeadm config versions<\/h2>\n<ul>\n<li>kubeadm v1.15.x and newer can be used to migrate from v1beta1 to v1beta2.<\/li>\n<li>kubeadm v1.22.x and newer no longer support v1beta1 and older APIs, but can be used to migrate v1beta2 to v1beta3.<\/li>\n<li>kubeadm v1.27.x and newer no longer support v1beta2 and older APIs.<\/li>\n<li>TODO: https:\/\/github.com\/kubernetes\/kubeadm\/issues\/2890\nadd version that can be used to convert to v1beta4<\/li>\n<\/ul>\n<h2>Basics<\/h2>\n<p>The preferred way to configure kubeadm is to pass a YAML configuration file with\nthe <code>--config<\/code> option. Some of the configuration options defined in the kubeadm\nconfig file are also available as command line flags, but only the most\ncommon\/simple use cases are supported with this approach.<\/p>\n<p>A kubeadm config file can contain multiple configuration types separated using three dashes (<code>---<\/code>).<\/p>\n<p>kubeadm supports the following configuration types:<\/p>\n<pre><code>apiVersion: kubeadm.k8s.io\/v1beta4\nkind: InitConfiguration\n\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: ClusterConfiguration\n\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\n\napiVersion: kubeproxy.config.k8s.io\/v1alpha1\nkind: KubeProxyConfiguration\n\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: JoinConfiguration\n\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: ResetConfiguration\n\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: UpgradeConfiguration\n<\/code><\/pre>\n<p>To print the defaults for the <code>init<\/code>, <code>join<\/code>, <code>reset<\/code> and <code>upgrade<\/code> actions, use the following commands:<\/p>\n<pre style=\"background-color:#fff\">kubeadm config print init-defaults\nkubeadm config print join-defaults\nkubeadm config print reset-defaults\nkubeadm config print upgrade-defaults\n<\/pre><p>The list of configuration types that must be included in a configuration file 
depends by the action you are\nperforming (<code>init<\/code> or <code>join<\/code>) and by the configuration options you are going to use (defaults or advanced customization).<\/p>\n<p>If some configuration types are not provided, or provided only partially, kubeadm will use default values; defaults\nprovided by kubeadm includes also enforcing consistency of values across components when required (e.g.\n<code>--cluster-cidr<\/code> flag on controller manager and <code>clusterCIDR<\/code> on kube-proxy).<\/p>\n<p>Users are always allowed to override default values, with the only exception of a small subset of setting with\nrelevance for security (e.g. enforce authorization-mode Node and RBAC on api server).<\/p>\n<p>If the user provides a configuration types that is not expected for the action you are performing, kubeadm will\nignore those types and print a warning.<\/p>\n<h2>Kubeadm init configuration types<\/h2>\n<p>When executing kubeadm init with the <code>--config<\/code> option, the following configuration types could be used:\nInitConfiguration, ClusterConfiguration, KubeProxyConfiguration, KubeletConfiguration, but only one\nbetween InitConfiguration and ClusterConfiguration is mandatory.<\/p>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta4<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>InitConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">bootstrapTokens<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">nodeRegistration<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>...<span 
style=\"color:#bbb\">\n<\/span><\/pre><p>The InitConfiguration type should be used to configure runtime settings, that in case of kubeadm init\nare the configuration of the bootstrap token and all the setting which are specific to the node where kubeadm\nis executed, including:<\/p>\n<ul>\n<li>\n<p>NodeRegistration, that holds fields that relate to registering the new node to the cluster;\nuse it to customize the node name, the CRI socket to use or any other settings that should apply to this\nnode only (e.g. the node ip).<\/p>\n<\/li>\n<li>\n<p>LocalAPIEndpoint, that represents the endpoint of the instance of the API server to be deployed on this node;\nuse it e.g. to customize the API server advertise address.<\/p>\n<\/li>\n<\/ul>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta4<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>ClusterConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">networking<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">etcd<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiServer<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraVolumes<\/span>:<span 
style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>...<span style=\"color:#bbb\">\n<\/span><\/pre><p>The ClusterConfiguration type should be used to configure cluster-wide settings,\nincluding settings for:<\/p>\n<ul>\n<li>\n<p><code>networking<\/code> that holds configuration for the networking topology of the cluster; use it e.g. to customize\nPod subnet or services subnet.<\/p>\n<\/li>\n<li>\n<p><code>etcd<\/code>: use it e.g. to customize the local etcd or to configure the API server\nfor using an external etcd cluster.<\/p>\n<\/li>\n<li>\n<p>kube-apiserver, kube-scheduler, kube-controller-manager configurations; use it to customize control-plane\ncomponents by adding customized setting or overriding kubeadm default settings.<\/p>\n<\/li>\n<\/ul>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeproxy.config.k8s.io\/v1alpha1<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>KubeProxyConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>...<span style=\"color:#bbb\">\n<\/span><\/pre><p>The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed\nin the cluster. 
If this object is not provided or provided only partially, kubeadm applies defaults.<\/p>\n<p>See https:\/\/kubernetes.io\/docs\/reference\/command-line-tools-reference\/kube-proxy\/ or\nhttps:\/\/pkg.go.dev\/k8s.io\/kube-proxy\/config\/v1alpha1#KubeProxyConfiguration\nfor kube-proxy official documentation.<\/p>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubelet.config.k8s.io\/v1beta1<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>KubeletConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>...<span style=\"color:#bbb\">\n<\/span><\/pre><p>The KubeletConfiguration type should be used to change the configurations that will be passed to all kubelet instances\ndeployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.<\/p>\n<p>See https:\/\/kubernetes.io\/docs\/reference\/command-line-tools-reference\/kubelet\/ or\nhttps:\/\/pkg.go.dev\/k8s.io\/kubelet\/config\/v1beta1#KubeletConfiguration\nfor kubelet official documentation.<\/p>\n<p>Here is a fully populated example of a single YAML file containing multiple\nconfiguration types to be used during a <code>kubeadm init<\/code> run.<\/p>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta4<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>InitConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">bootstrapTokens<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>- <span 
style=\"color:#000;font-weight:bold\">token<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;9a08jv.c0izixklcxtmnze7&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">description<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;kubeadm bootstrap token&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">ttl<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;24h&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>- <span style=\"color:#000;font-weight:bold\">token<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;783bde.3f89s0fje9f38fhf&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">description<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;another bootstrap token&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">usages<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>- authentication<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>- signing<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">groups<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>- system:bootstrappers:kubeadm:default-node-token<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">nodeRegistration<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">name<\/span>:<span 
style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;ec2-10-100-0-1&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">criSocket<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;unix:\/\/\/var\/run\/containerd\/containerd.sock&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">taints<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">key<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;kubeadmNode&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">value<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;someValue&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">effect<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;NoSchedule&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">kubeletExtraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span>v<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">value<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;5&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">ignorePreflightErrors<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- IsPrivilegedUser<span style=\"color:#bbb\">\n<\/span><span 
style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">imagePullPolicy<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;IfNotPresent&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">imagePullSerial<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#000;font-weight:bold\">true<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">localAPIEndpoint<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">advertiseAddress<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;10.100.0.1&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">bindPort<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#099\">6443<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">certificateKey<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">skipPhases<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>- preflight<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">timeouts<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">controlPlaneComponentHealthCheck<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;60s&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  
<\/span><span style=\"color:#000;font-weight:bold\">kubenetesAPICall<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;40s&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>---<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta4<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>ClusterConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">etcd<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#998;font-style:italic\"># one of local or external<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">local<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">imageRepository<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;registry.k8s.io&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">imageTag<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;3.2.24&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">dataDir<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/var\/lib\/etcd&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">extraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span>- <span 
style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span>listen-client-urls<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">        <\/span><span style=\"color:#000;font-weight:bold\">value<\/span>:<span style=\"color:#bbb\"> <\/span>http:\/\/<span style=\"color:#099\">10.100.0.1<\/span>:<span style=\"color:#099\">2379<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">extraEnvs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span>SOME_VAR<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">        <\/span><span style=\"color:#000;font-weight:bold\">value<\/span>:<span style=\"color:#bbb\"> <\/span>SOME_VALUE<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">serverCertSANs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span>- <span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;ec2-10-100-0-1.compute-1.amazonaws.com&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">peerCertSANs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span>- <span style=\"color:#d14\">&#34;10.100.0.1&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#998;font-style:italic\"># external:<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#998;font-style:italic\">#   endpoints:<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#998;font-style:italic\">#     - &#34;10.100.0.1:2379&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  
<\/span><span style=\"color:#998;font-style:italic\">#     - &#34;10.100.0.2:2379&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#998;font-style:italic\">#   caFile: &#34;\/etcd\/kubernetes\/pki\/etcd\/etcd-ca.crt&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#998;font-style:italic\">#   certFile: &#34;\/etcd\/kubernetes\/pki\/etcd\/etcd.crt&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#998;font-style:italic\">#   keyFile: &#34;\/etcd\/kubernetes\/pki\/etcd\/etcd.key&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">networking<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">serviceSubnet<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;10.96.0.0\/16&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">podSubnet<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;10.244.0.0\/24&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">dnsDomain<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;cluster.local&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kubernetesVersion<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;v1.21.0&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">controlPlaneEndpoint<\/span>:<span style=\"color:#bbb\"> <\/span><span 
style=\"color:#d14\">&#34;10.100.0.1:6443&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiServer<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span>authorization-mode<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">value<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;Node,RBAC&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraEnvs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span>SOME_VAR<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">value<\/span>:<span style=\"color:#bbb\"> <\/span>SOME_VALUE<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraVolumes<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;some-volume&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">hostPath<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">mountPath<\/span>:<span 
style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-pod-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">readOnly<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#000;font-weight:bold\">false<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">pathType<\/span>:<span style=\"color:#bbb\"> <\/span>File<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">certSANs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#d14\">&#34;10.100.1.1&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#d14\">&#34;ec2-10-100-0-1.compute-1.amazonaws.com&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">controllerManager<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span>node-cidr-mask-size<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">value<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;20&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraVolumes<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span><span 
style=\"color:#d14\">&#34;some-volume&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">hostPath<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">mountPath<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-pod-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">readOnly<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#000;font-weight:bold\">false<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">pathType<\/span>:<span style=\"color:#bbb\"> <\/span>File<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">scheduler<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraArgs<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span>address<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">value<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;10.100.0.1&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">extraVolumes<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span>- <span style=\"color:#000;font-weight:bold\">name<\/span>:<span style=\"color:#bbb\"> <\/span><span 
style=\"color:#d14\">&#34;some-volume&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">hostPath<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">mountPath<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/some-pod-path&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">readOnly<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#000;font-weight:bold\">false<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">      <\/span><span style=\"color:#000;font-weight:bold\">pathType<\/span>:<span style=\"color:#bbb\"> <\/span>File<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">certificatesDir<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;\/etc\/kubernetes\/pki&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">imageRepository<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;registry.k8s.io&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">clusterName<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#d14\">&#34;example-cluster&#34;<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">encryptionAlgorithm<\/span>:<span style=\"color:#bbb\"> <\/span>ECDSA-P256<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span 
style=\"color:#000;font-weight:bold\">dns<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">disabled<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#000;font-weight:bold\">true<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#998;font-style:italic\"># disable CoreDNS<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">proxy<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">disabled<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#000;font-weight:bold\">true<\/span><span style=\"color:#bbb\">   <\/span><span style=\"color:#998;font-style:italic\"># disable kube-proxy<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>---<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubelet.config.k8s.io\/v1beta1<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>KubeletConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#998;font-style:italic\"># kubelet specific options here<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>---<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeproxy.config.k8s.io\/v1alpha1<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>KubeProxyConfiguration<span 
style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#998;font-style:italic\"># kube-proxy specific options here<\/span><span style=\"color:#bbb\">\n<\/span><\/pre><h2>Kubeadm join configuration types<\/h2>\n<p>When executing <code>kubeadm join<\/code> with the <code>--config<\/code> option, the <code>JoinConfiguration<\/code> type should be provided.<\/p>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta4<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>JoinConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">discovery<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">bootstrapToken<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">apiServerEndpoint<\/span>:<span style=\"color:#bbb\"> <\/span>some-address:<span style=\"color:#099\">6443<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">token<\/span>:<span style=\"color:#bbb\"> <\/span>abcdef.0123456789abcdef<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">    <\/span><span style=\"color:#000;font-weight:bold\">unsafeSkipCAVerification<\/span>:<span style=\"color:#bbb\"> <\/span><span style=\"color:#000;font-weight:bold\">true<\/span><span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span><span style=\"color:#000;font-weight:bold\">tlsBootstrapToken<\/span>:<span style=\"color:#bbb\"> <\/span>abcdef.0123456789abcdef<span style=\"color:#bbb\">\n<\/span><\/pre><p>The <code>JoinConfiguration<\/code> type should be used to configure runtime settings that, in the case of <code>kubeadm join<\/code>,\nare the 
discovery method used for accessing the cluster info and all the settings which are specific\nto the node where kubeadm is executed, including:<\/p>\n<ul>\n<li>\n<p><code>nodeRegistration<\/code>, which holds fields that relate to registering the new node to the cluster;\nuse it to customize the node name, the CRI socket to use or any other settings that should apply to this\nnode only (e.g. the node IP).<\/p>\n<\/li>\n<li>\n<p><code>apiEndpoint<\/code>, which represents the endpoint of the instance of the API server to be eventually deployed on this node.<\/p>\n<\/li>\n<\/ul>\n<h2>Kubeadm reset configuration types<\/h2>\n<p>When executing <code>kubeadm reset<\/code> with the <code>--config<\/code> option, the <code>ResetConfiguration<\/code> type should be provided.<\/p>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta4<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>ResetConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span>...<span style=\"color:#bbb\">\n<\/span><\/pre><h2>Kubeadm upgrade configuration types<\/h2>\n<p>When executing <code>kubeadm upgrade<\/code> with the <code>--config<\/code> option, the <code>UpgradeConfiguration<\/code> type should be provided.<\/p>\n<pre style=\"background-color:#fff\"><span style=\"color:#000;font-weight:bold\">apiVersion<\/span>:<span style=\"color:#bbb\"> <\/span>kubeadm.k8s.io\/v1beta4<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">kind<\/span>:<span style=\"color:#bbb\"> <\/span>UpgradeConfiguration<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">apply<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  
<\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">diff<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">node<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\"><\/span><span style=\"color:#000;font-weight:bold\">plan<\/span>:<span style=\"color:#bbb\">\n<\/span><span style=\"color:#bbb\">  <\/span>...<span style=\"color:#bbb\">\n<\/span><\/pre><p>The <code>UpgradeConfiguration<\/code> structure includes a few substructures that only apply to different subcommands of\n<code>kubeadm upgrade<\/code>. For example, the <code>apply<\/code> substructure will be used with the <code>kubeadm upgrade apply<\/code> subcommand\nand all other substructures will be ignored in such a case.<\/p>\n\n\n## Resource Types \n\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta4-ClusterConfiguration)\n- [InitConfiguration](#kubeadm-k8s-io-v1beta4-InitConfiguration)\n- [JoinConfiguration](#kubeadm-k8s-io-v1beta4-JoinConfiguration)\n- [ResetConfiguration](#kubeadm-k8s-io-v1beta4-ResetConfiguration)\n- [UpgradeConfiguration](#kubeadm-k8s-io-v1beta4-UpgradeConfiguration)\n  \n    \n    \n\n## `BootstrapToken`     {#BootstrapToken}\n    \n\n**Appears in:**\n\n- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)\n\n- [InitConfiguration](#kubeadm-k8s-io-v1beta4-InitConfiguration)\n\n\n<p>BootstrapToken describes one bootstrap token, stored as a Secret in the cluster<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>token<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#BootstrapTokenString\"><code>BootstrapTokenString<\/code><\/a>\n<\/td>\n<td>\n   <p><code>token<\/code> 
is used for establishing bidirectional trust between nodes and control-planes.\nUsed for joining nodes in the cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>description<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>description<\/code> sets a human-friendly message describing why this token exists and what it's used\nfor, so other administrators can know its purpose.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ttl<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>ttl<\/code> defines the time to live for this token. Defaults to <code>24h<\/code>.\n<code>expires<\/code> and <code>ttl<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>expires<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#time-v1-meta\"><code>meta\/v1.Time<\/code><\/a>\n<\/td>\n<td>\n   <p><code>expires<\/code> specifies the timestamp when this token expires. Defaults to being set\ndynamically at runtime based on the <code>ttl<\/code>. <code>expires<\/code> and <code>ttl<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>usages<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>usages<\/code> describes the ways in which this token can be used. 
Can by default be used\nfor establishing bidirectional trust, but that can be changed here.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>groups<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>groups<\/code> specifies the extra groups that this token will authenticate as when\/if\nused for authentication.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `BootstrapTokenString`     {#BootstrapTokenString}\n    \n\n**Appears in:**\n\n- [BootstrapToken](#BootstrapToken)\n\n\n<p>BootstrapTokenString is a token of the format <code>abcdef.abcdef0123456789<\/code> that is used\nfor both validation of the reachability of the API server from a joining node's point\nof view and as an authentication method for the node in the bootstrap phase of\n&quot;kubeadm join&quot;. This token is, and should be, short-lived.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>-<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>-<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n  \n\n## `ClusterConfiguration`     {#kubeadm-k8s-io-v1beta4-ClusterConfiguration}\n    \n\n\n<p>ClusterConfiguration contains cluster-wide configuration for a kubeadm cluster.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubeadm.k8s.io\/v1beta4<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>ClusterConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>etcd<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Etcd\"><code>Etcd<\/code><\/a>\n<\/td>\n<td>\n   <p><code>etcd<\/code> holds the configuration for 
etcd.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>networking<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Networking\"><code>Networking<\/code><\/a>\n<\/td>\n<td>\n   <p><code>networking<\/code> holds configuration for the networking topology of the cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kubernetesVersion<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>kubernetesVersion<\/code> is the target version of the control plane.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>controlPlaneEndpoint<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>controlPlaneEndpoint<\/code> sets a stable IP address or DNS name for the control plane;\nit can be a valid IP address or an RFC-1123 DNS subdomain, both with optional TCP port.\nIn case the <code>controlPlaneEndpoint<\/code> is not specified, the <code>advertiseAddress<\/code> + <code>bindPort<\/code>\nare used; in case the <code>controlPlaneEndpoint<\/code> is specified but without a TCP port,\nthe <code>bindPort<\/code> is used.\nPossible usages are:<\/p>\n<ul>\n<li>In a cluster with more than one control plane instance, this field should be\nassigned the address of the external load balancer in front of the\ncontrol plane instances.<\/li>\n<li>In environments with enforced node recycling, the <code>controlPlaneEndpoint<\/code>\ncould be used for assigning a stable DNS to the control plane.<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr><td><code>apiServer<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-APIServer\"><code>APIServer<\/code><\/a>\n<\/td>\n<td>\n   <p><code>apiServer<\/code> contains extra settings for the API server.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>controllerManager<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-ControlPlaneComponent\"><code>ControlPlaneComponent<\/code><\/a>\n<\/td>\n<td>\n   <p><code>controllerManager<\/code> contains extra settings for the controller manager.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>scheduler<\/code><br\/>\n<a 
href=\"#kubeadm-k8s-io-v1beta4-ControlPlaneComponent\"><code>ControlPlaneComponent<\/code><\/a>\n<\/td>\n<td>\n   <p><code>scheduler<\/code> contains extra settings for the scheduler.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dns<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-DNS\"><code>DNS<\/code><\/a>\n<\/td>\n<td>\n   <p><code>dns<\/code> defines the options for the DNS add-on installed in the cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>proxy<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Proxy\"><code>Proxy<\/code><\/a>\n<\/td>\n<td>\n   <p><code>proxy<\/code> defines the options for the proxy add-on installed in the cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificatesDir<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>certificatesDir<\/code> specifies where to store or look for all required certificates.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imageRepository<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>imageRepository<\/code> sets the container registry to pull images from.\nIf empty, <code>registry.k8s.io<\/code> will be used by default.\nIf the Kubernetes version is a CI build (the version starts with <code>ci\/<\/code>),\n<code>gcr.io\/k8s-staging-ci-images<\/code> will be used as a default for control plane components\nand for kube-proxy, while <code>registry.k8s.io<\/code> will be used for all the other images.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>featureGates<\/code><br\/>\n<code>map[string]bool<\/code>\n<\/td>\n<td>\n   <p><code>featureGates<\/code> contains the feature gates enabled by the user.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>clusterName<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>The cluster name.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>encryptionAlgorithm<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-EncryptionAlgorithmType\"><code>EncryptionAlgorithmType<\/code><\/a>\n<\/td>\n<td>\n   <p><code>encryptionAlgorithm<\/code> holds the type of asymmetric encryption algorithm 
used for keys and\ncertificates. Can be one of <code>&quot;RSA-2048&quot;<\/code> (default), <code>&quot;RSA-3072&quot;<\/code>, <code>&quot;RSA-4096&quot;<\/code> or <code>&quot;ECDSA-P256&quot;<\/code>.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificateValidityPeriod<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>certificateValidityPeriod<\/code> specifies the validity period for a non-CA certificate generated by kubeadm.\nDefault value: <code>8760h<\/code> (365 days * 24 hours = 1 year)<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>caCertificateValidityPeriod<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>caCertificateValidityPeriod<\/code> specifies the validity period for a CA certificate generated by kubeadm.\nDefault value: <code>87600h<\/code> (365 days * 24 hours * 10 = 10 years)<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `InitConfiguration`     {#kubeadm-k8s-io-v1beta4-InitConfiguration}\n    \n\n\n<p>InitConfiguration contains a list of elements that are specific to <code>kubeadm init<\/code>-only runtime\ninformation. These fields are solely used the first time <code>kubeadm init<\/code> runs.\nAfter that, the information in the fields IS NOT uploaded to the <code>kubeadm-config<\/code> ConfigMap\nthat is used by <code>kubeadm upgrade<\/code> for instance. 
These fields must be omitempty.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubeadm.k8s.io\/v1beta4<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>InitConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>bootstrapTokens<\/code><br\/>\n<a href=\"#BootstrapToken\"><code>[]BootstrapToken<\/code><\/a>\n<\/td>\n<td>\n   <p><code>bootstrapTokens<\/code> is respected at <code>kubeadm init<\/code> time and describes a set of Bootstrap Tokens to create.\nThis information IS NOT uploaded to the kubeadm cluster configmap, partly because of its sensitive nature.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dryRun<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>dryRun<\/code> tells if the dry run mode is enabled; don't apply any change in dry run mode,\njust output what would be done.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>nodeRegistration<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-NodeRegistrationOptions\"><code>NodeRegistrationOptions<\/code><\/a>\n<\/td>\n<td>\n   <p><code>nodeRegistration<\/code> holds fields that relate to registering the new control-plane node\nto the cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>localAPIEndpoint<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-APIEndpoint\"><code>APIEndpoint<\/code><\/a>\n<\/td>\n<td>\n   <p><code>localAPIEndpoint<\/code> represents the endpoint of the API server instance that's deployed on this\ncontrol plane node. In HA setups, this differs from <code>ClusterConfiguration.controlPlaneEndpoint<\/code>\nin the sense that <code>controlPlaneEndpoint<\/code> is the global endpoint for the cluster, which then\nload-balances the requests to each individual API server.\nThis configuration object lets you customize what IP\/DNS name and port the local API server\nadvertises it's accessible on. 
By default, kubeadm tries to auto-detect the IP of the default\ninterface and use that, but in case that process fails you may set the desired value here.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificateKey<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>certificateKey<\/code> sets the key with which certificates and keys are encrypted prior to being\nuploaded in a Secret in the cluster during the <code>uploadcerts init<\/code> phase.\nThe certificate key is a hex encoded string that is an AES key of size 32 bytes.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>skipPhases<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>skipPhases<\/code> is a list of phases to skip during command execution.\nThe list of phases can be obtained with the <code>kubeadm init --help<\/code> command.\nThe flag <code>--skip-phases<\/code> takes precedence over this field.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>patches<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Patches\"><code>Patches<\/code><\/a>\n<\/td>\n<td>\n   <p><code>patches<\/code> contains options related to applying patches to components deployed by kubeadm during\n<code>kubeadm init<\/code>.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>timeouts<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Timeouts\"><code>Timeouts<\/code><\/a>\n<\/td>\n<td>\n   <p><code>timeouts<\/code> holds various timeouts that apply to kubeadm commands.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `JoinConfiguration`     {#kubeadm-k8s-io-v1beta4-JoinConfiguration}\n    \n\n\n<p>JoinConfiguration contains elements describing a particular node.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubeadm.k8s.io\/v1beta4<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>JoinConfiguration<\/code><\/td><\/tr>\n    \n  
\n<tr><td><code>dryRun<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>dryRun<\/code> tells if the dry run mode is enabled; don't apply any change if it is set,\njust output what would be done.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>nodeRegistration<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-NodeRegistrationOptions\"><code>NodeRegistrationOptions<\/code><\/a>\n<\/td>\n<td>\n   <p><code>nodeRegistration<\/code> holds fields that relate to registering the new control-plane\nnode to the cluster.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>caCertPath<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>caCertPath<\/code> is the path to the SSL certificate authority used to secure communications\nbetween node and control-plane.\nDefaults to &quot;\/etc\/kubernetes\/pki\/ca.crt&quot;.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>discovery<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Discovery\"><code>Discovery<\/code><\/a>\n<\/td>\n<td>\n   <p><code>discovery<\/code> specifies the options for the kubelet to use during the TLS bootstrap process.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>controlPlane<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-JoinControlPlane\"><code>JoinControlPlane<\/code><\/a>\n<\/td>\n<td>\n   <p><code>controlPlane<\/code> defines the additional control plane instance to be deployed on the\njoining node. 
If nil, no additional control plane instance will be deployed.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>skipPhases<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>skipPhases<\/code> is a list of phases to skip during command execution.\nThe list of phases can be obtained with the <code>kubeadm join --help<\/code> command.\nThe flag <code>--skip-phases<\/code> takes precedence over this field.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>patches<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Patches\"><code>Patches<\/code><\/a>\n<\/td>\n<td>\n   <p><code>patches<\/code> contains options related to applying patches to components deployed\nby kubeadm during <code>kubeadm join<\/code>.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>timeouts<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Timeouts\"><code>Timeouts<\/code><\/a>\n<\/td>\n<td>\n   <p><code>timeouts<\/code> holds various timeouts that apply to kubeadm commands.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ResetConfiguration`     {#kubeadm-k8s-io-v1beta4-ResetConfiguration}\n    \n\n\n<p>ResetConfiguration contains a list of fields that are specifically <code>kubeadm reset<\/code>-only\nruntime information.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubeadm.k8s.io\/v1beta4<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>ResetConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>cleanupTmpDir<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>cleanupTmpDir<\/code> specifies whether the &quot;\/etc\/kubernetes\/tmp&quot; directory should be cleaned\nduring the reset process.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificatesDir<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>certificatesDir<\/code> specifies the directory where the certificates are stored.\nIf specified, it will be cleaned during the reset 
process.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>criSocket<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>criSocket<\/code> is used to retrieve container runtime information and is used for the\nremoval of the containers.\nIf <code>criSocket<\/code> is not specified by flag or config file, kubeadm will try to detect\none valid CRI socket instead.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dryRun<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>dryRun<\/code> tells if the dry run mode is enabled; don't apply any change if it is set\nand just output what would be done.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>force<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>The <code>force<\/code> flag instructs kubeadm to reset the node without prompting for confirmation.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ignorePreflightErrors<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>ignorePreflightErrors<\/code> provides a list of pre-flight errors to be ignored during\nthe reset process, e.g. <code>IsPrivilegedUser,Swap<\/code>.\nValue <code>all<\/code> ignores errors from all checks.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>skipPhases<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>skipPhases<\/code> is a list of phases to skip during command execution.\nThe list of phases can be obtained with the <code>kubeadm reset phase --help<\/code> command.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>unmountFlags<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>unmountFlags<\/code> is a list of <code>umount2()<\/code> syscall flags that kubeadm can use when unmounting\ndirectories during &quot;reset&quot;. Each flag can be one of: <code>&quot;MNT_FORCE&quot;<\/code>, <code>&quot;MNT_DETACH&quot;<\/code>,\n<code>&quot;MNT_EXPIRE&quot;<\/code>, <code>&quot;UMOUNT_NOFOLLOW&quot;<\/code>.  
By default this list is empty.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>timeouts<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Timeouts\"><code>Timeouts<\/code><\/a>\n<\/td>\n<td>\n   <p><code>timeouts<\/code> holds various timeouts that apply to kubeadm commands.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `UpgradeConfiguration`     {#kubeadm-k8s-io-v1beta4-UpgradeConfiguration}\n    \n\n\n<p>UpgradeConfiguration contains a list of options that are specific to <code>kubeadm upgrade<\/code> subcommands.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n<tr><td><code>apiVersion<\/code><br\/>string<\/td><td><code>kubeadm.k8s.io\/v1beta4<\/code><\/td><\/tr>\n<tr><td><code>kind<\/code><br\/>string<\/td><td><code>UpgradeConfiguration<\/code><\/td><\/tr>\n    \n  \n<tr><td><code>apply<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-UpgradeApplyConfiguration\"><code>UpgradeApplyConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>apply<\/code> holds a list of options that are specific to the <code>kubeadm upgrade apply<\/code> command.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>diff<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-UpgradeDiffConfiguration\"><code>UpgradeDiffConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>diff<\/code> holds a list of options that are specific to the <code>kubeadm upgrade diff<\/code> command.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>node<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-UpgradeNodeConfiguration\"><code>UpgradeNodeConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>node<\/code> holds a list of options that are specific to the <code>kubeadm upgrade node<\/code> command.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>plan<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-UpgradePlanConfiguration\"><code>UpgradePlanConfiguration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>plan<\/code> holds a list of options that are specific to the <code>kubeadm upgrade plan<\/code> 
command.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>timeouts<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Timeouts\"><code>Timeouts<\/code><\/a>\n<\/td>\n<td>\n   <p><code>timeouts<\/code> holds various timeouts that apply to kubeadm commands.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `APIEndpoint`     {#kubeadm-k8s-io-v1beta4-APIEndpoint}\n    \n\n**Appears in:**\n\n- [InitConfiguration](#kubeadm-k8s-io-v1beta4-InitConfiguration)\n\n- [JoinControlPlane](#kubeadm-k8s-io-v1beta4-JoinControlPlane)\n\n\n<p>APIEndpoint contains the elements of an API server instance deployed on a node.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>advertiseAddress<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>advertiseAddress<\/code> sets the IP address for the API server to advertise.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>bindPort<\/code><br\/>\n<code>int32<\/code>\n<\/td>\n<td>\n   <p><code>bindPort<\/code> sets the secure port for the API Server to bind to.\nDefaults to 6443.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `APIServer`     {#kubeadm-k8s-io-v1beta4-APIServer}\n    \n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta4-ClusterConfiguration)\n\n\n<p>APIServer holds settings necessary for API server deployments in the cluster.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ControlPlaneComponent<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-ControlPlaneComponent\"><code>ControlPlaneComponent<\/code><\/a>\n<\/td>\n<td>(Members of <code>ControlPlaneComponent<\/code> are embedded into this type.)\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<tr><td><code>certSANs<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>certSANs<\/code> sets extra Subject Alternative Names (SANs) for 
the API Server signing\ncertificate.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Arg`     {#kubeadm-k8s-io-v1beta4-Arg}\n    \n\n**Appears in:**\n\n- [ControlPlaneComponent](#kubeadm-k8s-io-v1beta4-ControlPlaneComponent)\n\n- [LocalEtcd](#kubeadm-k8s-io-v1beta4-LocalEtcd)\n\n- [NodeRegistrationOptions](#kubeadm-k8s-io-v1beta4-NodeRegistrationOptions)\n\n\n<p>Arg represents an argument with a name and a value.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>The name of the argument.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>value<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p>The value of the argument.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `BootstrapTokenDiscovery`     {#kubeadm-k8s-io-v1beta4-BootstrapTokenDiscovery}\n    \n\n**Appears in:**\n\n- [Discovery](#kubeadm-k8s-io-v1beta4-Discovery)\n\n\n<p>BootstrapTokenDiscovery is used to set the options for bootstrap token based discovery.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>token<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>token<\/code> is a token used to validate cluster information fetched from the\ncontrol-plane.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>apiServerEndpoint<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>apiServerEndpoint<\/code> is an IP or domain name to the API server from which\ninformation will be fetched.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>caCertHashes<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>caCertHashes<\/code> specifies a set of public key pins to verify when token-based discovery\nis used. 
The root CA found during discovery must match one of these values.\nSpecifying an empty set disables root CA pinning, which can be unsafe.\nEach hash is specified as <code>&lt;type&gt;:&lt;value&gt;<\/code>, where the only currently supported type is\n&quot;sha256&quot;. This is a hex-encoded SHA-256 hash of the Subject Public Key Info (SPKI)\nobject in DER-encoded ASN.1. These hashes can be calculated using, for example, OpenSSL.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>unsafeSkipCAVerification<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>unsafeSkipCAVerification<\/code> allows token-based discovery without CA verification\nvia <code>caCertHashes<\/code>. This can weaken the security of kubeadm since other nodes can\nimpersonate the control-plane.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ControlPlaneComponent`     {#kubeadm-k8s-io-v1beta4-ControlPlaneComponent}\n    \n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta4-ClusterConfiguration)\n\n- [APIServer](#kubeadm-k8s-io-v1beta4-APIServer)\n\n\n<p>ControlPlaneComponent holds settings common to the control plane components of the cluster.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>extraArgs<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Arg\"><code>[]Arg<\/code><\/a>\n<\/td>\n<td>\n   <p><code>extraArgs<\/code> is an extra set of flags to pass to the control plane component.\nAn argument name in this list is the flag name as it appears on the\ncommand line except without leading dash(es). Extra arguments will override existing\ndefault arguments. 
Duplicate extra arguments are allowed.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extraVolumes<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-HostPathMount\"><code>[]HostPathMount<\/code><\/a>\n<\/td>\n<td>\n   <p><code>extraVolumes<\/code> is an extra set of host volumes, mounted to the control plane component.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extraEnvs<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-EnvVar\"><code>[]EnvVar<\/code><\/a>\n<\/td>\n<td>\n   <p><code>extraEnvs<\/code> is an extra set of environment variables to pass to the control plane component.\nEnvironment variables passed using <code>extraEnvs<\/code> will override any existing environment variables,\nor <code>*_proxy<\/code> environment variables that kubeadm adds by default.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `DNS`     {#kubeadm-k8s-io-v1beta4-DNS}\n    \n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta4-ClusterConfiguration)\n\n\n<p>DNS defines the DNS addon that should be used in the cluster.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ImageMeta<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-ImageMeta\"><code>ImageMeta<\/code><\/a>\n<\/td>\n<td>(Members of <code>ImageMeta<\/code> are embedded into this type.)\n   <p><code>imageMeta<\/code> allows customizing the image used for the DNS addon.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>disabled<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>disabled<\/code> specifies whether to disable this addon in the cluster.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Discovery`     {#kubeadm-k8s-io-v1beta4-Discovery}\n    \n\n**Appears in:**\n\n- [JoinConfiguration](#kubeadm-k8s-io-v1beta4-JoinConfiguration)\n\n\n<p>Discovery specifies the options for the kubelet to use during the TLS Bootstrap process.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th 
width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>bootstrapToken<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-BootstrapTokenDiscovery\"><code>BootstrapTokenDiscovery<\/code><\/a>\n<\/td>\n<td>\n   <p><code>bootstrapToken<\/code> is used to set the options for bootstrap token based discovery.\n<code>bootstrapToken<\/code> and <code>file<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>file<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-FileDiscovery\"><code>FileDiscovery<\/code><\/a>\n<\/td>\n<td>\n   <p><code>file<\/code> is used to specify a file or URL to a kubeconfig file from which to load\ncluster information. <code>bootstrapToken<\/code> and <code>file<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tlsBootstrapToken<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>tlsBootstrapToken<\/code> is a token used for TLS bootstrapping.\nIf <code>bootstrapToken<\/code> is set, this field is defaulted to <code>bootstrapToken.token<\/code>, but\ncan be overridden. 
If <code>file<\/code> is set, this field <strong>must be set<\/strong> in case the KubeConfigFile\ndoes not contain any other authentication information.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `EncryptionAlgorithmType`     {#kubeadm-k8s-io-v1beta4-EncryptionAlgorithmType}\n    \n(Alias of `string`)\n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta4-ClusterConfiguration)\n\n\n<p>EncryptionAlgorithmType can define an asymmetric encryption algorithm type.<\/p>\n\n\n\n\n## `EnvVar`     {#kubeadm-k8s-io-v1beta4-EnvVar}\n    \n\n**Appears in:**\n\n- [ControlPlaneComponent](#kubeadm-k8s-io-v1beta4-ControlPlaneComponent)\n\n- [LocalEtcd](#kubeadm-k8s-io-v1beta4-LocalEtcd)\n\n\n<p>EnvVar represents an environment variable present in a Container.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>EnvVar<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#envvar-v1-core\"><code>core\/v1.EnvVar<\/code><\/a>\n<\/td>\n<td>(Members of <code>EnvVar<\/code> are embedded into this type.)\n   <span class=\"text-muted\">No description provided.<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Etcd`     {#kubeadm-k8s-io-v1beta4-Etcd}\n    \n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta4-ClusterConfiguration)\n\n\n<p>Etcd contains elements describing Etcd configuration.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>local<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-LocalEtcd\"><code>LocalEtcd<\/code><\/a>\n<\/td>\n<td>\n   <p><code>local<\/code> provides configuration knobs for configuring the local etcd instance.\n<code>local<\/code> and <code>external<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>external<\/code><br\/>\n<a 
href=\"#kubeadm-k8s-io-v1beta4-ExternalEtcd\"><code>ExternalEtcd<\/code><\/a>\n<\/td>\n<td>\n   <p><code>external<\/code> describes how to connect to an external etcd cluster.\n<code>local<\/code> and <code>external<\/code> are mutually exclusive.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ExternalEtcd`     {#kubeadm-k8s-io-v1beta4-ExternalEtcd}\n    \n\n**Appears in:**\n\n- [Etcd](#kubeadm-k8s-io-v1beta4-Etcd)\n\n\n<p>ExternalEtcd describes an external etcd cluster.\nKubeadm has no knowledge of where certificate files live and they must be supplied.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>endpoints<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>endpoints<\/code> contains the list of etcd members.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>caFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>caFile<\/code> is an SSL Certificate Authority (CA) file used to secure etcd communication.\nRequired if using a TLS connection.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>certFile<\/code> is an SSL certification file used to secure etcd communication.\nRequired if using a TLS connection.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>keyFile<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>keyFile<\/code> is an SSL key file used to secure etcd communication.\nRequired if using a TLS connection.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `FileDiscovery`     {#kubeadm-k8s-io-v1beta4-FileDiscovery}\n    \n\n**Appears in:**\n\n- [Discovery](#kubeadm-k8s-io-v1beta4-Discovery)\n\n\n<p>FileDiscovery is used to specify a file or URL to a kubeconfig file from which to load\ncluster information.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th 
width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>kubeConfigPath<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>kubeConfigPath<\/code> is used to specify the actual file path or URL to the kubeconfig\nfile from which to load cluster information.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `HostPathMount`     {#kubeadm-k8s-io-v1beta4-HostPathMount}\n    \n\n**Appears in:**\n\n- [ControlPlaneComponent](#kubeadm-k8s-io-v1beta4-ControlPlaneComponent)\n\n\n<p>HostPathMount contains elements describing volumes that are mounted from the host.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>name<\/code> is the name of the volume inside the Pod template.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>hostPath<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>hostPath<\/code> is the path in the host that will be mounted inside the Pod.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>mountPath<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>mountPath<\/code> is the path inside the Pod where <code>hostPath<\/code> will be mounted.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>readOnly<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>readOnly<\/code> controls write access to the volume.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>pathType<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#hostpathtype-v1-core\"><code>core\/v1.HostPathType<\/code><\/a>\n<\/td>\n<td>\n   <p><code>pathType<\/code> is the type of the <code>hostPath<\/code>.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `ImageMeta`     {#kubeadm-k8s-io-v1beta4-ImageMeta}\n    \n\n**Appears in:**\n\n- [DNS](#kubeadm-k8s-io-v1beta4-DNS)\n\n- 
[LocalEtcd](#kubeadm-k8s-io-v1beta4-LocalEtcd)\n\n\n<p>ImageMeta allows customizing the image used for components that do not\noriginate from the Kubernetes\/Kubernetes release process.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>imageRepository<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>imageRepository<\/code> sets the container registry to pull images from.\nIf not set, the <code>imageRepository<\/code> defined in ClusterConfiguration will be used instead.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imageTag<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>imageTag<\/code> allows specifying a tag for the image.\nIf this value is set, kubeadm does not automatically change the version of\nthe above components during upgrades.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `JoinControlPlane`     {#kubeadm-k8s-io-v1beta4-JoinControlPlane}\n    \n\n**Appears in:**\n\n- [JoinConfiguration](#kubeadm-k8s-io-v1beta4-JoinConfiguration)\n\n\n<p>JoinControlPlane contains elements describing an additional control plane instance to be deployed on the joining node.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>localAPIEndpoint<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-APIEndpoint\"><code>APIEndpoint<\/code><\/a>\n<\/td>\n<td>\n   <p><code>localAPIEndpoint<\/code> represents the endpoint of the API server instance to be\ndeployed on this node.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificateKey<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>certificateKey<\/code> is the key that is used for decryption of certificates after\nthey are downloaded from the Secret upon joining a new control plane node.\nThe corresponding encryption key is in the InitConfiguration.\nThe certificate key is a hex-encoded string that is an AES key of 
size 32 bytes.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `LocalEtcd`     {#kubeadm-k8s-io-v1beta4-LocalEtcd}\n    \n\n**Appears in:**\n\n- [Etcd](#kubeadm-k8s-io-v1beta4-Etcd)\n\n\n<p>LocalEtcd describes that kubeadm should run an etcd cluster locally.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>ImageMeta<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-ImageMeta\"><code>ImageMeta<\/code><\/a>\n<\/td>\n<td>(Members of <code>ImageMeta<\/code> are embedded into this type.)\n   <p>ImageMeta allows customizing the container image used for etcd.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dataDir<\/code> <B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>dataDir<\/code> is the directory where etcd will place its data.\nDefaults to &quot;\/var\/lib\/etcd&quot;.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extraArgs<\/code> <B>[Required]<\/B><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Arg\"><code>[]Arg<\/code><\/a>\n<\/td>\n<td>\n   <p><code>extraArgs<\/code> are extra arguments provided to the etcd binary when run\ninside a static Pod. An argument name in this list is the flag name as\nit appears on the command line except without leading dash(es).\nExtra arguments will override existing default arguments.\nDuplicate extra arguments are allowed.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>extraEnvs<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-EnvVar\"><code>[]EnvVar<\/code><\/a>\n<\/td>\n<td>\n   <p><code>extraEnvs<\/code> is an extra set of environment variables to pass to\netcd. 
Environment variables passed using <code>extraEnvs<\/code>\nwill override any existing environment variables, or <code>*_proxy<\/code> environment\nvariables that kubeadm adds by default.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>serverCertSANs<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>serverCertSANs<\/code> sets extra Subject Alternative Names (SANs) for the etcd\nserver signing certificate.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>peerCertSANs<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>peerCertSANs<\/code> sets extra Subject Alternative Names (SANs) for the etcd peer\nsigning certificate.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Networking`     {#kubeadm-k8s-io-v1beta4-Networking}\n    \n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta4-ClusterConfiguration)\n\n\n<p>Networking contains elements describing cluster's networking configuration.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>serviceSubnet<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>serviceSubnet<\/code> is the subnet used by Kubernetes Services. Defaults to &quot;10.96.0.0\/12&quot;.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>podSubnet<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>podSubnet<\/code> is the subnet used by Pods.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dnsDomain<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>dnsDomain<\/code> is the dns domain used by Kubernetes Services. 
Defaults to &quot;cluster.local&quot;.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `NodeRegistrationOptions`     {#kubeadm-k8s-io-v1beta4-NodeRegistrationOptions}\n    \n\n**Appears in:**\n\n- [InitConfiguration](#kubeadm-k8s-io-v1beta4-InitConfiguration)\n\n- [JoinConfiguration](#kubeadm-k8s-io-v1beta4-JoinConfiguration)\n\n\n<p>NodeRegistrationOptions holds fields that relate to registering a new control-plane or\nnode to the cluster, either via <code>kubeadm init<\/code> or <code>kubeadm join<\/code>.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>name<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>name<\/code> is the <code>.Metadata.Name<\/code> field of the Node API object that will be created in this\n<code>kubeadm init<\/code> or <code>kubeadm join<\/code> operation.\nThis field is also used in the <code>CommonName<\/code> field of the kubelet's client certificate to\nthe API server.\nDefaults to the hostname of the node if not provided.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>criSocket<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>criSocket<\/code> is used to retrieve container runtime info.\nThis information will be annotated to the Node API object, for later re-use.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>taints<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#taint-v1-core\"><code>[]core\/v1.Taint<\/code><\/a>\n<\/td>\n<td>\n   <p><code>taints<\/code> specifies the taints the Node API object should be registered with.\nIf this field is unset, i.e. nil, it will be defaulted with a control-plane taint for control-plane nodes.\nIf you don't want to taint your control-plane node, set this field to an empty list,\ni.e. <code>taints: []<\/code> in the YAML file. 
This field is solely used for Node registration.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kubeletExtraArgs<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Arg\"><code>[]Arg<\/code><\/a>\n<\/td>\n<td>\n   <p><code>kubeletExtraArgs<\/code> passes through extra arguments to the kubelet.\nThe arguments here are passed to the kubelet command line via the environment file\nkubeadm writes at runtime for the kubelet to source.\nThis overrides the generic base-level configuration in the <code>kubelet-config<\/code> ConfigMap.\nFlags have higher priority when parsing. These values are local and specific to the node\nkubeadm is executing on. An argument name in this list is the flag name as it appears on the\ncommand line except without leading dash(es). Extra arguments will override existing\ndefault arguments. Duplicate extra arguments are allowed.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ignorePreflightErrors<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>ignorePreflightErrors<\/code> provides a slice of pre-flight errors to be ignored when\nthe current node is registered, e.g. 
'IsPrivilegedUser,Swap'.\nValue 'all' ignores errors from all checks.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imagePullPolicy<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#pullpolicy-v1-core\"><code>core\/v1.PullPolicy<\/code><\/a>\n<\/td>\n<td>\n   <p><code>imagePullPolicy<\/code> specifies the policy for image pulling during kubeadm <code>init<\/code> and\n<code>join<\/code> operations.\nThe value of this field must be one of &quot;Always&quot;, &quot;IfNotPresent&quot; or &quot;Never&quot;.\nIf this field is unset kubeadm will default it to &quot;IfNotPresent&quot;, or pull the required\nimages if not present on the host.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imagePullSerial<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>imagePullSerial<\/code> specifies if image pulling performed by kubeadm must be done serially or in parallel.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Patches`     {#kubeadm-k8s-io-v1beta4-Patches}\n    \n\n**Appears in:**\n\n- [InitConfiguration](#kubeadm-k8s-io-v1beta4-InitConfiguration)\n\n- [JoinConfiguration](#kubeadm-k8s-io-v1beta4-JoinConfiguration)\n\n- [UpgradeApplyConfiguration](#kubeadm-k8s-io-v1beta4-UpgradeApplyConfiguration)\n\n- [UpgradeNodeConfiguration](#kubeadm-k8s-io-v1beta4-UpgradeNodeConfiguration)\n\n\n<p>Patches contains options related to applying patches to components deployed by kubeadm.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>directory<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>directory<\/code> is a path to a directory that contains files named\n&quot;target[suffix][+patchtype].extension&quot;.\nFor example, &quot;kube-apiserver0+merge.yaml&quot; or just &quot;etcd.json&quot;. 
&quot;target&quot; can be one of\n&quot;kube-apiserver&quot;, &quot;kube-controller-manager&quot;, &quot;kube-scheduler&quot;, &quot;etcd&quot;, &quot;kubeletconfiguration&quot;,\n&quot;corednsdeployment&quot;.\n&quot;patchtype&quot; can be one of &quot;strategic&quot;, &quot;merge&quot; or &quot;json&quot; and they match the patch formats\nsupported by kubectl.\nThe default &quot;patchtype&quot; is &quot;strategic&quot;. &quot;extension&quot; must be either &quot;json&quot; or &quot;yaml&quot;.\n&quot;suffix&quot; is an optional string that can be used to determine which patches are applied\nfirst alpha-numerically.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Proxy`     {#kubeadm-k8s-io-v1beta4-Proxy}\n    \n\n**Appears in:**\n\n- [ClusterConfiguration](#kubeadm-k8s-io-v1beta4-ClusterConfiguration)\n\n\n<p>Proxy defines the proxy addon that should be used in the cluster.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>disabled<\/code> <B>[Required]<\/B><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>disabled<\/code> specifies whether to disable this addon in the cluster.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `Timeouts`     {#kubeadm-k8s-io-v1beta4-Timeouts}\n    \n\n**Appears in:**\n\n- [InitConfiguration](#kubeadm-k8s-io-v1beta4-InitConfiguration)\n\n- [JoinConfiguration](#kubeadm-k8s-io-v1beta4-JoinConfiguration)\n\n- [ResetConfiguration](#kubeadm-k8s-io-v1beta4-ResetConfiguration)\n\n- [UpgradeConfiguration](#kubeadm-k8s-io-v1beta4-UpgradeConfiguration)\n\n\n<p>Timeouts holds various timeouts that apply to kubeadm commands.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>controlPlaneComponentHealthCheck<\/code><br\/>\n<a 
href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>controlPlaneComponentHealthCheck<\/code> is the amount of time to wait for a control plane\ncomponent, such as the API server, to be healthy during <code>kubeadm init<\/code> and <code>kubeadm join<\/code>.\nDefault: 4m<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kubeletHealthCheck<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>kubeletHealthCheck<\/code> is the amount of time to wait for the kubelet to be healthy\nduring <code>kubeadm init<\/code> and <code>kubeadm join<\/code>.\nDefault: 4m<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>kubernetesAPICall<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>kubernetesAPICall<\/code> is the amount of time to wait for the kubeadm client to complete a request to\nthe API server. 
This applies to all types of methods (GET, POST, etc).\nDefault: 1m<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>etcdAPICall<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>etcdAPICall<\/code> is the amount of time to wait for the kubeadm etcd client to complete a request to\nthe etcd cluster.\nDefault: 2m<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>tlsBootstrap<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>tlsBootstrap<\/code> is the amount of time to wait for the kubelet to complete TLS bootstrap\nfor a joining node.\nDefault: 5m<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>discovery<\/code><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>discovery<\/code> is the amount of time to wait for kubeadm to validate the API server identity\nfor a joining node.\nDefault: 5m<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>upgradeManifests<\/code> <B>[Required]<\/B><br\/>\n<a href=\"https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\"><code>meta\/v1.Duration<\/code><\/a>\n<\/td>\n<td>\n   <p><code>upgradeManifests<\/code> is the timeout for upgrading static Pod manifests\nDefault: 5m<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `UpgradeApplyConfiguration`     {#kubeadm-k8s-io-v1beta4-UpgradeApplyConfiguration}\n    \n\n**Appears in:**\n\n- [UpgradeConfiguration](#kubeadm-k8s-io-v1beta4-UpgradeConfiguration)\n\n\n<p>UpgradeApplyConfiguration contains a list of configurable options which are specific to the  &quot;kubeadm upgrade apply&quot; command.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  
\n<tr><td><code>kubernetesVersion<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>kubernetesVersion<\/code> is the target version of the control plane.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>allowExperimentalUpgrades<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>allowExperimentalUpgrades<\/code> instructs kubeadm to show unstable versions of Kubernetes as an upgrade\nalternative and allows upgrading to an alpha\/beta\/release candidate version of Kubernetes.\nDefault: false<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>allowRCUpgrades<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>Enabling <code>allowRCUpgrades<\/code> shows release candidate versions of Kubernetes as an upgrade alternative and\nallows upgrading to a release candidate version of Kubernetes.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>certificateRenewal<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>certificateRenewal<\/code> instructs kubeadm to execute certificate renewal during upgrades.\nDefaults to true.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dryRun<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>dryRun<\/code> specifies whether dry-run mode is enabled; if it is, kubeadm applies no changes and\nonly outputs what would be done.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>etcdUpgrade<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>etcdUpgrade<\/code> instructs kubeadm to execute etcd upgrade during upgrades.\nDefaults to true.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>forceUpgrade<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>forceUpgrade<\/code> instructs kubeadm to upgrade the cluster without prompting for confirmation.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ignorePreflightErrors<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>ignorePreflightErrors<\/code> provides a slice of pre-flight errors to be ignored during the upgrade process,\ne.g. <code>IsPrivilegedUser,Swap<\/code>. 
Value <code>all<\/code> ignores errors from all checks.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>patches<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Patches\"><code>Patches<\/code><\/a>\n<\/td>\n<td>\n   <p><code>patches<\/code> contains options related to applying patches to components deployed by kubeadm during <code>kubeadm upgrade<\/code>.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>printConfig<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>printConfig<\/code> specifies whether the configuration file that will be used in the upgrade should be printed or not.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>skipPhases<\/code> <B>[Required]<\/B><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>skipPhases<\/code> is a list of phases to skip during command execution.\nNOTE: This field is currently ignored for <code>kubeadm upgrade apply<\/code>, but in the future it will be supported.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imagePullPolicy<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#pullpolicy-v1-core\"><code>core\/v1.PullPolicy<\/code><\/a>\n<\/td>\n<td>\n   <p><code>imagePullPolicy<\/code> specifies the policy for image pulling during <code>kubeadm upgrade apply<\/code> operations.\nThe value of this field must be one of &quot;Always&quot;, &quot;IfNotPresent&quot; or &quot;Never&quot;.\nIf this field is unset kubeadm will default it to &quot;IfNotPresent&quot;, or pull the required images if not present on the host.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imagePullSerial<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>imagePullSerial<\/code> specifies if image pulling performed by kubeadm must be done serially or in parallel.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `UpgradeDiffConfiguration`     {#kubeadm-k8s-io-v1beta4-UpgradeDiffConfiguration}\n    \n\n**Appears in:**\n\n- [UpgradeConfiguration](#kubeadm-k8s-io-v1beta4-UpgradeConfiguration)\n\n\n<p>UpgradeDiffConfiguration 
contains a list of configurable options which are specific to the <code>kubeadm upgrade diff<\/code> command.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>kubernetesVersion<\/code><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>kubernetesVersion<\/code> is the target version of the control plane.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>contextLines<\/code><br\/>\n<code>int<\/code>\n<\/td>\n<td>\n   <p><code>contextLines<\/code> is the number of lines of context in the diff.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `UpgradeNodeConfiguration`     {#kubeadm-k8s-io-v1beta4-UpgradeNodeConfiguration}\n    \n\n**Appears in:**\n\n- [UpgradeConfiguration](#kubeadm-k8s-io-v1beta4-UpgradeConfiguration)\n\n\n<p>UpgradeNodeConfiguration contains a list of configurable options which are specific to the &quot;kubeadm upgrade node&quot; command.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>certificateRenewal<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>certificateRenewal<\/code> instructs kubeadm to execute certificate renewal during upgrades.\nDefaults to true.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dryRun<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>dryRun<\/code> specifies whether dry-run mode is enabled; if it is, kubeadm applies no changes and only outputs what would be done.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>etcdUpgrade<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>etcdUpgrade<\/code> instructs kubeadm to execute etcd upgrade during upgrades.\nDefaults to true.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ignorePreflightErrors<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>ignorePreflightErrors<\/code> provides a slice of pre-flight errors to be ignored during the upgrade process,\ne.g. 'IsPrivilegedUser,Swap'. 
Value 'all' ignores errors from all checks.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>skipPhases<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>skipPhases<\/code> is a list of phases to skip during command execution.\nThe list of phases can be obtained with the <code>kubeadm upgrade node phase --help<\/code> command.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>patches<\/code><br\/>\n<a href=\"#kubeadm-k8s-io-v1beta4-Patches\"><code>Patches<\/code><\/a>\n<\/td>\n<td>\n   <p><code>patches<\/code> contains options related to applying patches to components deployed by kubeadm during <code>kubeadm upgrade<\/code>.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imagePullPolicy<\/code><br\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.31\/#pullpolicy-v1-core\"><code>core\/v1.PullPolicy<\/code><\/a>\n<\/td>\n<td>\n   <p><code>imagePullPolicy<\/code> specifies the policy for image pulling during <code>kubeadm upgrade node<\/code> operations.\nThe value of this field must be one of &quot;Always&quot;, &quot;IfNotPresent&quot; or &quot;Never&quot;.\nIf this field is unset kubeadm will default it to &quot;IfNotPresent&quot;, or pull the required images if not present on the host.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>imagePullSerial<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>imagePullSerial<\/code> specifies if image pulling performed by kubeadm must be done serially or in parallel.\nDefault: true<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n## `UpgradePlanConfiguration`     {#kubeadm-k8s-io-v1beta4-UpgradePlanConfiguration}\n    \n\n**Appears in:**\n\n- [UpgradeConfiguration](#kubeadm-k8s-io-v1beta4-UpgradeConfiguration)\n\n\n<p>UpgradePlanConfiguration contains a list of configurable options which are specific to the &quot;kubeadm upgrade plan&quot; command.<\/p>\n\n\n<table class=\"table\">\n<thead><tr><th width=\"30%\">Field<\/th><th>Description<\/th><\/tr><\/thead>\n<tbody>\n    \n  \n<tr><td><code>kubernetesVersion<\/code> 
<B>[Required]<\/B><br\/>\n<code>string<\/code>\n<\/td>\n<td>\n   <p><code>kubernetesVersion<\/code> is the target version of the control plane.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>allowExperimentalUpgrades<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>allowExperimentalUpgrades<\/code> instructs kubeadm to show unstable versions of Kubernetes as an upgrade\nalternative and allows upgrading to an alpha\/beta\/release candidate version of Kubernetes.\nDefault: false<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>allowRCUpgrades<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p>Enabling <code>allowRCUpgrades<\/code> shows release candidate versions of Kubernetes as an upgrade alternative and\nallows upgrading to a release candidate version of Kubernetes.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>dryRun<\/code><br\/>\n<code>bool<\/code>\n<\/td>\n<td>\n   <p><code>dryRun<\/code> specifies whether dry-run mode is enabled; if it is, kubeadm applies no changes and only outputs what would be done.<\/p>\n<\/td>\n<\/tr>\n<tr><td><code>ignorePreflightErrors<\/code><br\/>\n<code>[]string<\/code>\n<\/td>\n<td>\n   <p><code>ignorePreflightErrors<\/code> provides a slice of pre-flight errors to be ignored during the upgrade process,\ne.g. 'IsPrivilegedUser,Swap'. 
Value `all` ignores errors from all checks. |
| `printConfig`<br/>`bool` | `printConfig` specifies whether the configuration file that will be used in the upgrade should be printed or not. |

---
title: kubeadm Configuration (v1beta4)
content_type: tool-reference
package: kubeadm.k8s.io/v1beta4
auto_generated: true
---

## Overview

Package v1beta4 defines the v1beta4 version of the kubeadm configuration file format. This version improves on the v1beta3 format by fixing some minor issues and adding a few new fields.

A list of changes since v1beta3:

- TODO: https://github.com/kubernetes/kubeadm/issues/2890
- Support custom environment variables in control plane components under `ClusterConfiguration`. Use `apiServer.extraEnvs`, `controllerManager.extraEnvs`, `scheduler.extraEnvs`, `etcd.local.extraEnvs`.
- The `ResetConfiguration` API type is now supported in v1beta4. Users are able to reset a node by passing a `--config` file to `kubeadm reset`.
- Dry run mode is now configurable in InitConfiguration and JoinConfiguration.
- Replace the existing string/string extra argument maps with structured extra arguments that support duplicates. The change applies to `ClusterConfiguration` - `apiServer.extraArgs`, `controllerManager.extraArgs`, `scheduler.extraArgs`, `etcd.local.extraArgs` - and also to `nodeRegistration.kubeletExtraArgs`.
- Add `ClusterConfiguration.encryptionAlgorithm` that can be used to set the asymmetric encryption algorithm used for this cluster's keys and certificates. Can be one of `"RSA-2048"` (default), `"RSA-3072"`, `"RSA-4096"` or `"ECDSA-P256"`.
- Add `ClusterConfiguration.dns.disabled` and `ClusterConfiguration.proxy.disabled` that can be used to disable the CoreDNS and kube-proxy addons during cluster initialization. Skipping the related addons phases during cluster creation will set the same fields to `false`.
- Add the `nodeRegistration.imagePullSerial` field in `InitConfiguration` and `JoinConfiguration`, which can be used to control whether kubeadm pulls images serially or in parallel.
- The `UpgradeConfiguration` kubeadm API is now supported in v1beta4 when passing `--config` to `kubeadm upgrade` subcommands. Usage of component configuration for `kubelet` and `kube-proxy`, as well as `InitConfiguration` and `ClusterConfiguration`, is deprecated and will be ignored when passing `--config` to `upgrade` subcommands.
- Add a `Timeouts` structure to `InitConfiguration`, `JoinConfiguration`, `ResetConfiguration` and `UpgradeConfiguration` that can be used to configure various timeouts.
- Add `certificateValidityPeriod` and `caCertificateValidityPeriod` fields to `ClusterConfiguration`. These fields can be used to control the validity period of certificates generated by kubeadm during subcommands such as `init`, `join`, `upgrade` and `certs`. Default values continue to be 1 year for non-CA certificates and 10 years for CA certificates. Only non-CA certificates continue to be renewable by `kubeadm certs renew`.

## Migration from old kubeadm config versions

- kubeadm v1.15.x and newer can be used to migrate from v1beta1 to v1beta2.
- kubeadm v1.22.x and newer no longer support v1beta1 and older APIs, but can be used to migrate v1beta2 to v1beta3.
- kubeadm v1.27.x and newer no longer support v1beta2 and older APIs.
- TODO: https://github.com/kubernetes/kubeadm/issues/2890 - add the version that can be used to convert to v1beta4.

## Basics

The preferred way to configure kubeadm is to pass a YAML configuration file with the `--config` option. Some of the configuration options defined in the kubeadm config file are also available as command line flags, but only the most common simple use cases are supported with this approach.

A kubeadm config file could contain multiple configuration types separated using three dashes (`---`).

kubeadm supports the following configuration types:

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration

apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration

apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration

apiVersion: kubeadm.k8s.io/v1beta4
kind: ResetConfiguration

apiVersion: kubeadm.k8s.io/v1beta4
kind: UpgradeConfiguration
```

To print the defaults for `init` and `join` actions use the following commands:

```shell
kubeadm config print init-defaults
kubeadm config print join-defaults
kubeadm config print reset-defaults
kubeadm config print upgrade-defaults
```

The list of configuration types that must be included in a configuration file depends on the action you are performing (`init` or `join`) and on the configuration options you are going to use (defaults or advanced customization).

If some configuration types are not provided, or provided only partially, kubeadm will use default values; the defaults provided by kubeadm also include enforcing consistency of values across components when required (e.g. the `--cluster-cidr` flag on the controller manager and `clusterCIDR` on kube-proxy).
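Since a kubeadm config file is just a multi-document YAML stream, the types it contains can be enumerated with ordinary string handling. A minimal sketch (plain Python, no YAML library; the helper name `list_kinds` is made up for illustration):

```python
def list_kinds(config_text: str) -> list[str]:
    """Return the `kind` declared in each `---`-separated YAML document.

    Illustrative only: kubeadm itself parses the file with a full YAML
    library. Here we just split on document separators and scan for
    top-level `kind:` lines.
    """
    kinds = []
    for doc in config_text.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
                break
    return kinds

example = """apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration"""

print(list_kinds(example))  # ['InitConfiguration', 'ClusterConfiguration']
```

Such a listing makes it easy to see at a glance which of the supported configuration types a given `--config` file provides.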
Users are always allowed to override default values, with the only exception of a small subset of settings with relevance for security (e.g. enforcing the authorization modes Node and RBAC on the API server).

If the user provides configuration types that are not expected for the action you are performing, kubeadm will ignore those types and print a warning.

## Kubeadm init configuration types

When executing kubeadm init with the `--config` option, the following configuration types could be used: InitConfiguration, ClusterConfiguration, KubeProxyConfiguration, KubeletConfiguration, but only one between InitConfiguration and ClusterConfiguration is mandatory.

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
bootstrapTokens:
  ...
nodeRegistration:
  ...
```

The InitConfiguration type should be used to configure runtime settings, which in the case of kubeadm init are the configuration of the bootstrap token and all the settings that are specific to the node where kubeadm is executed, including:

- NodeRegistration, that holds fields that relate to registering the new node to the cluster; use it to customize the node name, the CRI socket to use or any other settings that should apply to this node only (e.g. the node IP).
- LocalAPIEndpoint, that represents the endpoint of the instance of the API server to be deployed on this node; use it e.g. to customize the API server advertise address.

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  ...
etcd:
  ...
apiServer:
  extraArgs:
    ...
  extraVolumes:
    ...
...
```

The ClusterConfiguration type should be used to configure cluster-wide settings, including settings for:

- `networking`, that holds configuration for the networking topology of the cluster; use it e.g. to customize the Pod subnet or the services subnet.
- `etcd`; use it e.g. to customize the local etcd or to configure the API server for using an external etcd cluster.
- kube-apiserver, kube-scheduler and kube-controller-manager configurations; use them to customize control plane components by adding customized settings or overriding kubeadm default settings.

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
...
```

The KubeProxyConfiguration type should be used to change the configuration passed to kube-proxy instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.

See https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ or https://pkg.go.dev/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration for the official kube-proxy documentation.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
...
```

The KubeletConfiguration type should be used to change the configuration that will be passed to all kubelet instances deployed in the cluster. If this object is not provided or provided only partially, kubeadm applies defaults.

See https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or https://pkg.go.dev/k8s.io/kubelet/config/v1beta1#KubeletConfiguration for the official kubelet documentation.
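The "provided only partially, kubeadm applies defaults" behaviour amounts to overlaying the user's document on top of built-in defaults. A minimal sketch of such a recursive overlay (plain Python; the `defaults` values shown are illustrative, and this is not kubeadm's actual defaulting code):

```python
def apply_defaults(user: dict, defaults: dict) -> dict:
    """Recursively overlay a (possibly partial) user config on defaults.

    Illustrative only: kubeadm's real defaulting is done per API type in
    Go, with additional cross-component consistency enforcement.
    """
    merged = dict(defaults)
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = apply_defaults(value, merged[key])
        else:
            merged[key] = value
    return merged

# Hypothetical defaults for two ClusterConfiguration.networking fields:
defaults = {"networking": {"dnsDomain": "cluster.local",
                           "serviceSubnet": "10.96.0.0/12"}}
# A partial user config that overrides only one of them:
user = {"networking": {"serviceSubnet": "10.96.0.0/16"}}

print(apply_defaults(user, defaults))
# {'networking': {'dnsDomain': 'cluster.local', 'serviceSubnet': '10.96.0.0/16'}}
```

The untouched `dnsDomain` keeps its default while the user-supplied `serviceSubnet` wins, which is the observable behaviour described above.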
Here is a fully populated example of a single YAML file containing multiple configuration types to be used during a `kubeadm init` run:

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
bootstrapTokens:
  - token: "9a08jv.c0izixklcxtmnze7"
    description: "kubeadm bootstrap token"
    ttl: "24h"
  - token: "783bde.3f89s0fje9f38fhf"
    description: "another bootstrap token"
    usages:
      - authentication
      - signing
    groups:
      - system:bootstrappers:kubeadm:default-node-token
nodeRegistration:
  name: "ec2-10-100-0-1"
  criSocket: "unix:///var/run/containerd/containerd.sock"
  taints:
    - key: "kubeadmNode"
      value: "someValue"
      effect: "NoSchedule"
  kubeletExtraArgs:
    - name: v
      value: "5"
  ignorePreflightErrors:
    - IsPrivilegedUser
  imagePullPolicy: "IfNotPresent"
  imagePullSerial: true
localAPIEndpoint:
  advertiseAddress: "10.100.0.1"
  bindPort: 6443
certificateKey: "e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204"
skipPhases:
  - preflight
timeouts:
  controlPlaneComponentHealthCheck: "60s"
  kubernetesAPICall: "40s"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
etcd:
  # one of local or external
  local:
    imageRepository: "registry.k8s.io"
    imageTag: "3.2.24"
    dataDir: "/var/lib/etcd"
    extraArgs:
      - name: listen-client-urls
        value: http://10.100.0.1:2379
    extraEnvs:
      - name: SOME_VAR
        value: SOME_VALUE
    serverCertSANs:
      - "ec2-10-100-0-1.compute-1.amazonaws.com"
    peerCertSANs:
      - "10.100.0.1"
  # external:
  #   endpoints:
  #     - "10.100.0.1:2379"
  #     - "10.100.0.2:2379"
  #   caFile: "/etcd/kubernetes/pki/etcd/etcd-ca.crt"
  #   certFile: "/etcd/kubernetes/pki/etcd/etcd.crt"
  #   keyFile: "/etcd/kubernetes/pki/etcd/etcd.key"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "10.244.0.0/24"
  dnsDomain: "cluster.local"
kubernetesVersion: "v1.21.0"
controlPlaneEndpoint: "10.100.0.1:6443"
apiServer:
  extraArgs:
    - name: authorization-mode
      value: "Node,RBAC"
  extraEnvs:
    - name: SOME_VAR
      value: SOME_VALUE
  extraVolumes:
    - name: "some-volume"
      hostPath: "/etc/some-path"
      mountPath: "/etc/some-pod-path"
      readOnly: false
      pathType: File
  certSANs:
    - "10.100.1.1"
    - "ec2-10-100-0-1.compute-1.amazonaws.com"
controllerManager:
  extraArgs:
    - name: "node-cidr-mask-size"
      value: "20"
  extraVolumes:
    - name: "some-volume"
      hostPath: "/etc/some-path"
      mountPath: "/etc/some-pod-path"
      readOnly: false
      pathType: File
scheduler:
  extraArgs:
    - name: address
      value: "10.100.0.1"
  extraVolumes:
    - name: "some-volume"
      hostPath: "/etc/some-path"
      mountPath: "/etc/some-pod-path"
      readOnly: false
      pathType: File
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "registry.k8s.io"
clusterName: "example-cluster"
encryptionAlgorithm: ECDSA-P256
dns:
  disabled: true # disable CoreDNS
proxy:
  disabled: true # disable kube-proxy
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# kubelet specific options here
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# kube-proxy specific options here
```

## Kubeadm join configuration types

When executing kubeadm join with the `--config` option, the JoinConfiguration type should be provided.

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: some-address:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
  tlsBootstrapToken: abcdef.0123456789abcdef
```

The JoinConfiguration type should be used to configure runtime settings, which in the case of kubeadm join are the discovery method used for accessing the cluster info and all the settings that are specific to the node where kubeadm is executed, including:

- `nodeRegistration`, that holds fields that relate to registering the new node to the cluster; use it to customize the node name, the CRI socket to use or any other settings that should apply to this node only (e.g. the node IP).
- `apiEndpoint`, that represents the endpoint of the instance of the API server to be eventually deployed on this node.

## Kubeadm reset configuration types

When executing `kubeadm reset` with the `--config` option, the `ResetConfiguration` type should be provided.

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ResetConfiguration
...
```
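The bootstrap tokens in the join example above (`abcdef.0123456789abcdef`) follow the documented bootstrap-token format: a 6-character token ID and a 16-character token secret, both lowercase alphanumeric, joined by a dot. A quick format check (illustrative Python, not kubeadm's own validation code):

```python
import re

# "<6-char id>.<16-char secret>", both parts drawn from [a-z0-9].
TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")

def is_valid_bootstrap_token(token: str) -> bool:
    """Check a string against the bootstrap-token format (sketch only)."""
    return TOKEN_RE.fullmatch(token) is not None

print(is_valid_bootstrap_token("abcdef.0123456789abcdef"))  # True
print(is_valid_bootstrap_token("short.token"))              # False
```

Tokens that fail this shape are rejected before they can be used for discovery or TLS bootstrap.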
 code kubeadm upgrade  code  with the  code   config  code  option  the  code UpgradeConfiguration  code  type should be provided   p   pre style  background color  fff   span style  color  000 font weight bold  apiVersion  span   span style  color  bbb     span kubeadm k8s io v1beta4 span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  kind  span   span style  color  bbb     span UpgradeConfiguration span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  apply  span   span style  color  bbb     span  span style  color  bbb      span     span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  diff  span   span style  color  bbb     span  span style  color  bbb      span     span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  node  span   span style  color  bbb     span  span style  color  bbb      span     span style  color  bbb     span  span style  color  bbb    span  span style  color  000 font weight bold  plan  span   span style  color  bbb     span  span style  color  bbb      span     span style  color  bbb     span   pre  p The  code UpgradeConfiguration  code  structure includes a few substructures that only apply to different subcommands of  code kubeadm upgrade  code   For example  the  code apply  code  substructure will be used with the  code kubeadm upgrade apply  code  subcommand and all other substructures will be ignored in such a case   p       Resource Types       ClusterConfiguration   kubeadm k8s io v1beta4 ClusterConfiguration     InitConfiguration   kubeadm k8s io v1beta4 InitConfiguration     JoinConfiguration   kubeadm k8s io v1beta4 JoinConfiguration     ResetConfiguration   kubeadm k8s io v1beta4 ResetConfiguration     UpgradeConfiguration   kubeadm k8s io v1beta4 UpgradeConfiguration                    BootstrapToken        
## BootstrapToken

**Appears in:** InitConfiguration (kubeadm.k8s.io/v1beta3), InitConfiguration (kubeadm.k8s.io/v1beta4)

BootstrapToken describes one bootstrap token, stored as a Secret in the cluster.

| Field | Description |
|-------|-------------|
| `token` **[Required]**<br>`BootstrapTokenString` | `token` is used for establishing bidirectional trust between nodes and control planes. Used for joining nodes in the cluster. |
| `description`<br>`string` | `description` sets a human-friendly message about why this token exists and what it's used for, so other administrators can know its purpose. |
| `ttl`<br>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | `ttl` defines the time to live for this token. Defaults to `24h`. `expires` and `ttl` are mutually exclusive. |
| `expires`<br>[`meta/v1.Time`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#time-v1-meta) | `expires` specifies the timestamp when this token expires. Defaults to being set dynamically at runtime based on the `ttl`. `expires` and `ttl` are mutually exclusive. |
| `usages`<br>`[]string` | `usages` describes the ways in which this token can be used. By default it can be used for establishing bidirectional trust, but that can be changed here. |
| `groups`<br>`[]string` | `groups` specifies the extra groups that this token will authenticate as when used for authentication. |

## BootstrapTokenString

**Appears in:** BootstrapToken

BootstrapTokenString is a token of the format `abcdef.abcdef0123456789` that is used both to validate the practical availability of the API server from a joining node's point of view and as an authentication method for the node in the bootstrap phase of `kubeadm join`. This token is, and should be, short-lived.

| Field | Description |
|-------|-------------|
| `-` **[Required]**<br>`string` | No description provided. |
| `-` **[Required]**<br>`string` | No description provided. |

## ClusterConfiguration

ClusterConfiguration contains cluster-wide configuration for a kubeadm cluster.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>`string` | `kubeadm.k8s.io/v1beta4` |
| `kind`<br>`string` | `ClusterConfiguration` |
| `etcd`<br>`Etcd` | `etcd` holds the configuration for etcd. |
| `networking`<br>`Networking` | `networking` holds configuration for the networking topology of the cluster. |
| `kubernetesVersion`<br>`string` | `kubernetesVersion` is the target version of the control plane. |
| `controlPlaneEndpoint`<br>`string` | `controlPlaneEndpoint` sets a stable IP address or DNS name for the control plane. It can be a valid IP address or an RFC 1123 DNS subdomain, both with an optional TCP port. If `controlPlaneEndpoint` is not specified, `advertiseAddress` + `bindPort` are used; if `controlPlaneEndpoint` is specified but without a TCP port, `bindPort` is used. Possible usages are:<br>- In a cluster with more than one control plane instance, this field should be assigned the address of the external load balancer in front of the control plane instances.<br>- In environments with enforced node recycling, `controlPlaneEndpoint` could be used for assigning a stable DNS name to the control plane. |
| `apiServer`<br>`APIServer` | `apiServer` contains extra settings for the API server. |
| `controllerManager`<br>`ControlPlaneComponent` | `controllerManager` contains extra settings for the controller manager. |
| `scheduler`<br>`ControlPlaneComponent` | `scheduler` contains extra settings for the scheduler. |
| `dns`<br>`DNS` | `dns` defines the options for the DNS add-on installed in the cluster. |
| `proxy` **[Required]**<br>`Proxy` | `proxy` defines the options for the proxy add-on installed in the cluster. |
| `certificatesDir`<br>`string` | `certificatesDir` specifies where to store or look for all required certificates. |
| `imageRepository`<br>`string` | `imageRepository` sets the container registry to pull images from. If empty, `registry.k8s.io` will be used by default. If the Kubernetes version is a CI build (version starts with `ci/`), `gcr.io/k8s-staging-ci-images` will be used as a default for control plane components and for kube-proxy, while `registry.k8s.io` will be used for all the other images. |
| `featureGates`<br>`map[string]bool` | `featureGates` contains the feature gates enabled by the user. |
| `clusterName`<br>`string` | The cluster name. |
| `encryptionAlgorithm`<br>`EncryptionAlgorithmType` | `encryptionAlgorithm` holds the type of asymmetric encryption algorithm used for keys and certificates. Can be one of `"RSA-2048"` (default), `"RSA-3072"`, `"RSA-4096"` or `"ECDSA-P256"`. |
| `certificateValidityPeriod`<br>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | `certificateValidityPeriod` specifies the validity period for a non-CA certificate generated by kubeadm. Default value: `8760h` (365 days × 24 hours = 1 year). |
| `caCertificateValidityPeriod`<br>[`meta/v1.Duration`](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | `caCertificateValidityPeriod` specifies the validity period for a CA certificate generated by kubeadm. Default value: `87600h` (365 days × 24 hours × 10 = 10 years). |
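Taken together, the `ClusterConfiguration` fields above can be sketched in a single manifest. This is an illustrative minimal example, not a recommended configuration; the version, endpoint, and registry values are placeholders:

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.31.0                    # placeholder target version
controlPlaneEndpoint: "lb.example.com:6443"   # placeholder load balancer address
certificatesDir: /etc/kubernetes/pki
imageRepository: registry.k8s.io
clusterName: kubernetes
encryptionAlgorithm: RSA-2048
certificateValidityPeriod: 8760h
caCertificateValidityPeriod: 87600h
```

Fields left unset (for example `networking` or `etcd`) fall back to kubeadm's defaults.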
## InitConfiguration

InitConfiguration contains a list of fields that are specific to `kubeadm init`-only runtime information. These fields are used only the first time `kubeadm init` runs; afterwards, the information in these fields IS NOT uploaded to the `kubeadm-config` ConfigMap that is used, for instance, by `kubeadm upgrade`. These fields must be omitempty.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>`string` | `kubeadm.k8s.io/v1beta4` |
| `kind`<br>`string` | `InitConfiguration` |
| `bootstrapTokens`<br>`[]BootstrapToken` | `bootstrapTokens` is respected at `kubeadm init` time and describes a set of Bootstrap Tokens to create. This information IS NOT uploaded to the kubeadm cluster ConfigMap, partly because of its sensitive nature. |
| `dryRun` **[Required]**<br>`bool` | `dryRun` tells if the dry run mode is enabled; don't apply any change in dry run mode, just output what would be done. |
| `nodeRegistration`<br>`NodeRegistrationOptions` | `nodeRegistration` holds fields that relate to registering the new control plane node to the cluster. |
| `localAPIEndpoint`<br>`APIEndpoint` | `localAPIEndpoint` represents the endpoint of the API server instance that's deployed on this control plane node. In HA setups, this differs from `ClusterConfiguration.controlPlaneEndpoint` in the sense that `controlPlaneEndpoint` is the global endpoint for the cluster, which then load balances the requests to each individual API server. This configuration object lets you customize what IP/DNS name and port the local API server advertises it's accessible on. By default, kubeadm tries to auto-detect the IP of the default interface and use that, but in case that process fails you may set the desired value here. |
| `certificateKey`<br>`string` | `certificateKey` sets the key with which certificates and keys are encrypted prior to being uploaded in a Secret in the cluster during the `uploadcerts` init phase. The certificate key is a hex-encoded string that is an AES key of size 32 bytes. |
| `skipPhases`<br>`[]string` | `skipPhases` is a list of phases to skip during command execution. The list of phases can be obtained with the `kubeadm init --help` command. The flag `--skip-phases` takes precedence over this field. |
| `patches`<br>`Patches` | `patches` contains options related to applying patches to components deployed by kubeadm during `kubeadm init`. |
| `timeouts`<br>`Timeouts` | `timeouts` holds various timeouts that apply to kubeadm commands. |
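Several of the fields above can be combined into a short `kubeadm init --config` sketch; the token, description, TTL, and address values below are placeholders:

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
bootstrapTokens:
  - token: "abcdef.abcdef0123456789"   # placeholder, format per BootstrapTokenString
    description: "token for joining the first worker"
    ttl: "24h"
    usages:
      - authentication
      - signing
localAPIEndpoint:
  advertiseAddress: "192.168.0.10"     # placeholder node IP
  bindPort: 6443
```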
## JoinConfiguration

JoinConfiguration contains elements describing a particular node.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>`string` | `kubeadm.k8s.io/v1beta4` |
| `kind`<br>`string` | `JoinConfiguration` |
| `dryRun`<br>`bool` | `dryRun` tells if the dry run mode is enabled; don't apply any change if it is set, just output what would be done. |
| `nodeRegistration`<br>`NodeRegistrationOptions` | `nodeRegistration` holds fields that relate to registering the new control plane node to the cluster. |
| `caCertPath`<br>`string` | `caCertPath` is the path to the SSL certificate authority used to secure communications between a node and the control plane. Defaults to `/etc/kubernetes/pki/ca.crt`. |
| `discovery` **[Required]**<br>`Discovery` | `discovery` specifies the options for the kubelet to use during the TLS bootstrap process. |
| `controlPlane`<br>`JoinControlPlane` | `controlPlane` defines the additional control plane instance to be deployed on the joining node. If nil, no additional control plane instance will be deployed. |
| `skipPhases`<br>`[]string` | `skipPhases` is a list of phases to skip during command execution. The list of phases can be obtained with the `kubeadm join --help` command. The flag `--skip-phases` takes precedence over this field. |
| `patches`<br>`Patches` | `patches` contains options related to applying patches to components deployed by kubeadm during `kubeadm join`. |
| `timeouts`<br>`Timeouts` | `timeouts` holds various timeouts that apply to kubeadm commands. |

## ResetConfiguration

ResetConfiguration contains a list of fields that are specific to `kubeadm reset`-only runtime information.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>`string` | `kubeadm.k8s.io/v1beta4` |
| `kind`<br>`string` | `ResetConfiguration` |
| `cleanupTmpDir`<br>`bool` | `cleanupTmpDir` specifies whether the `/etc/kubernetes/tmp` directory should be cleaned during the reset process. |
| `certificatesDir`<br>`string` | `certificatesDir` specifies the directory where the certificates are stored. If specified, it will be cleaned during the reset process. |
| `criSocket`<br>`string` | `criSocket` is used to retrieve container runtime information and is used for the removal of the containers. If `criSocket` is not specified by flag or config file, kubeadm will try to detect one valid CRI socket instead. |
| `dryRun`<br>`bool` | `dryRun` tells if the dry run mode is enabled; don't apply any change if it is set, and just output what would be done. |
| `force`<br>`bool` | The `force` flag instructs kubeadm to reset the node without prompting for confirmation. |
| `ignorePreflightErrors`<br>`[]string` | `ignorePreflightErrors` provides a list of pre-flight errors to be ignored during the reset process, e.g. `IsPrivilegedUser,Swap`. The value `all` ignores errors from all checks. |
| `skipPhases`<br>`[]string` | `skipPhases` is a list of phases to skip during command execution. The list of phases can be obtained with the `kubeadm reset phase --help` command. |
| `unmountFlags`<br>`[]string` | `unmountFlags` is a list of `umount2()` syscall flags that kubeadm can use when unmounting directories during `reset`. Each flag can be one of: `"MNT_FORCE"`, `"MNT_DETACH"`, `"MNT_EXPIRE"`, `"UMOUNT_NOFOLLOW"`. By default this list is empty. |
| `timeouts`<br>`Timeouts` | `timeouts` holds various timeouts that apply to kubeadm commands. |
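The reset options above can be sketched as a config file passed to `kubeadm reset --config`; the ignored preflight check and unmount flag are illustrative choices:

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ResetConfiguration
force: true              # do not prompt for confirmation
cleanupTmpDir: true      # also clean /etc/kubernetes/tmp
ignorePreflightErrors:
  - IsPrivilegedUser     # illustrative check name from the example above
unmountFlags:
  - MNT_FORCE            # one of the umount2() flags listed above
```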
## UpgradeConfiguration

UpgradeConfiguration contains a list of options that are specific to `kubeadm upgrade` subcommands.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>`string` | `kubeadm.k8s.io/v1beta4` |
| `kind`<br>`string` | `UpgradeConfiguration` |
| `apply`<br>`UpgradeApplyConfiguration` | `apply` holds a list of options that are specific to the `kubeadm upgrade apply` command. |
| `diff`<br>`UpgradeDiffConfiguration` | `diff` holds a list of options that are specific to the `kubeadm upgrade diff` command. |
| `node`<br>`UpgradeNodeConfiguration` | `node` holds a list of options that are specific to the `kubeadm upgrade node` command. |
| `plan`<br>`UpgradePlanConfiguration` | `plan` holds a list of options that are specific to the `kubeadm upgrade plan` command. |
| `timeouts`<br>`Timeouts` | `timeouts` holds various timeouts that apply to kubeadm commands. |

## APIEndpoint

**Appears in:** InitConfiguration, JoinControlPlane

APIEndpoint contains the elements of an API server instance deployed on a node.

| Field | Description |
|-------|-------------|
| `advertiseAddress`<br>`string` | `advertiseAddress` sets the IP address for the API server to advertise. |
| `bindPort`<br>`int32` | `bindPort` sets the secure port for the API server to bind to. Defaults to 6443. |

## APIServer

**Appears in:** ClusterConfiguration

APIServer holds settings necessary for API server deployments in the cluster.

| Field | Description |
|-------|-------------|
| `ControlPlaneComponent` **[Required]**<br>`ControlPlaneComponent` | (Members of `ControlPlaneComponent` are embedded into this type.) No description provided. |
| `certSANs`<br>`[]string` | `certSANs` sets extra Subject Alternative Names (SANs) for the API server signing certificate. |

## Arg

**Appears in:** ControlPlaneComponent, LocalEtcd, NodeRegistrationOptions

Arg represents an argument with a name and a value.

| Field | Description |
|-------|-------------|
| `name` **[Required]**<br>`string` | The name of the argument. |
| `value` **[Required]**<br>`string` | The value of the argument. |
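Since `extraArgs` entries are `Arg` name/value pairs, extra flags and SANs for the API server can be sketched as below; the flag name and SAN value are placeholders:

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs:
    - "10.100.1.1"              # placeholder extra SAN
  extraArgs:
    - name: audit-log-maxage    # flag name without leading dashes
      value: "30"
```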
## BootstrapTokenDiscovery

**Appears in:** Discovery

BootstrapTokenDiscovery is used to set the options for bootstrap-token-based discovery.

| Field | Description |
|-------|-------------|
| `token` **[Required]**<br>`string` | `token` is a token used to validate cluster information fetched from the control plane. |
| `apiServerEndpoint`<br>`string` | `apiServerEndpoint` is an IP or domain name of the API server from which information will be fetched. |
| `caCertHashes`<br>`[]string` | `caCertHashes` specifies a set of public key pins to verify when token-based discovery is used. The root CA found during discovery must match one of these values. Specifying an empty set disables root CA pinning, which can be unsafe. Each hash is specified as `<type>:<value>`, where the only currently supported type is `sha256`. This is a hex-encoded SHA-256 hash of the Subject Public Key Info (SPKI) object in DER-encoded ASN.1. These hashes can be calculated using, for example, OpenSSL. |
| `unsafeSkipCAVerification`<br>`bool` | `unsafeSkipCAVerification` allows token-based discovery without CA verification via `caCertHashes`. This can weaken the security of kubeadm since other nodes can impersonate the control plane. |

## ControlPlaneComponent

**Appears in:** ClusterConfiguration, APIServer

ControlPlaneComponent holds settings common to control plane components of the cluster.

| Field | Description |
|-------|-------------|
| `extraArgs`<br>`[]Arg` | `extraArgs` is an extra set of flags to pass to the control plane component. An argument name in this list is the flag name as it appears on the command line, except without leading dash(es). Extra arguments will override existing default arguments. Duplicate extra arguments are allowed. |
| `extraVolumes`<br>`[]HostPathMount` | `extraVolumes` is an extra set of host volumes, mounted to the control plane component. |
| `extraEnvs`<br>`[]EnvVar` | `extraEnvs` is an extra set of environment variables to pass to the control plane component. Environment variables passed using `extraEnvs` will override any existing environment variables, or `*_proxy` environment variables that kubeadm adds by default. |

## DNS

**Appears in:** ClusterConfiguration

DNS defines the DNS add-on that should be used in the cluster.

| Field | Description |
|-------|-------------|
| `ImageMeta` **[Required]**<br>`ImageMeta` | (Members of `ImageMeta` are embedded into this type.) `imageMeta` allows customizing the image used for the DNS add-on. |
| `disabled` **[Required]**<br>`bool` | `disabled` specifies whether to disable this add-on in the cluster. |
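Because `DNS` embeds `ImageMeta`, the DNS add-on image can be customized through the `imageRepository` and `imageTag` fields documented under ImageMeta later in this reference; the repository and tag below are placeholders:

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
dns:
  imageRepository: registry.example.com/coredns   # placeholder mirror
  imageTag: "v1.11.1"                             # placeholder tag
```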
## Discovery

**Appears in:** JoinConfiguration

Discovery specifies the options for the kubelet to use during the TLS bootstrap process.

| Field | Description |
|-------|-------------|
| `bootstrapToken`<br>`BootstrapTokenDiscovery` | `bootstrapToken` is used to set the options for bootstrap-token-based discovery. `bootstrapToken` and `file` are mutually exclusive. |
| `file`<br>`FileDiscovery` | `file` is used to specify a file or URL to a kubeconfig file from which to load cluster information. `bootstrapToken` and `file` are mutually exclusive. |
| `tlsBootstrapToken`<br>`string` | `tlsBootstrapToken` is a token used for TLS bootstrapping. If `bootstrapToken` is set, this field is defaulted to `bootstrapToken.token`, but can be overridden. If `file` is set, this field **must be set** in case the kubeconfig file does not contain any other authentication information. |

## EncryptionAlgorithmType

(Alias of `string`)

**Appears in:** ClusterConfiguration

EncryptionAlgorithmType can define an asymmetric encryption algorithm type.

## EnvVar

**Appears in:** ControlPlaneComponent, LocalEtcd

EnvVar represents an environment variable present in a Container.

| Field | Description |
|-------|-------------|
| `EnvVar` **[Required]**<br>[`core/v1.EnvVar`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#envvar-v1-core) | (Members of `EnvVar` are embedded into this type.) No description provided. |

## Etcd

**Appears in:** ClusterConfiguration

Etcd contains elements describing the etcd configuration.

| Field | Description |
|-------|-------------|
| `local`<br>`LocalEtcd` | `local` provides configuration knobs for configuring the local etcd instance. `local` and `external` are mutually exclusive. |
| `external`<br>`ExternalEtcd` | `external` describes how to connect to an external etcd cluster. `local` and `external` are mutually exclusive. |
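As a sketch, `extraEnvs` on a control plane component uses the embedded `core/v1.EnvVar` shape (`name`/`value`); the variable and value here are placeholders:

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controllerManager:
  extraEnvs:
    - name: NO_PROXY                        # placeholder variable
      value: ".cluster.local,10.96.0.0/12"  # placeholder value
```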
## ExternalEtcd

**Appears in:** Etcd

ExternalEtcd describes an external etcd cluster. Kubeadm has no knowledge of where the certificate files live, so they must be supplied.

| Field | Description |
|-------|-------------|
| `endpoints` **[Required]**<br>`[]string` | `endpoints` contains the list of etcd members. |
| `caFile` **[Required]**<br>`string` | `caFile` is an SSL Certificate Authority (CA) file used to secure etcd communication. Required if using a TLS connection. |
| `certFile` **[Required]**<br>`string` | `certFile` is an SSL certification file used to secure etcd communication. Required if using a TLS connection. |
| `keyFile` **[Required]**<br>`string` | `keyFile` is an SSL key file used to secure etcd communication. Required if using a TLS connection. |

## FileDiscovery

**Appears in:** Discovery

FileDiscovery is used to specify a file or URL to a kubeconfig file from which to load cluster information.

| Field | Description |
|-------|-------------|
| `kubeConfigPath` **[Required]**<br>`string` | `kubeConfigPath` is used to specify the actual file path or URL to the kubeconfig file from which to load cluster information. |

## HostPathMount

**Appears in:** ControlPlaneComponent

HostPathMount contains elements describing volumes that are mounted from the host.

| Field | Description |
|-------|-------------|
| `name` **[Required]**<br>`string` | `name` is the name of the volume inside the Pod template. |
| `hostPath` **[Required]**<br>`string` | `hostPath` is the path on the host that will be mounted inside the Pod. |
| `mountPath` **[Required]**<br>`string` | `mountPath` is the path inside the Pod where `hostPath` will be mounted. |
| `readOnly`<br>`bool` | `readOnly` controls write access to the volume. |
| `pathType`<br>[`core/v1.HostPathType`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#hostpathtype-v1-core) | `pathType` is the type of the `hostPath`. |

## ImageMeta

**Appears in:** DNS, LocalEtcd

ImageMeta allows customizing the image used for components that do not originate from the Kubernetes release process.

| Field | Description |
|-------|-------------|
| `imageRepository`<br>`string` | `imageRepository` sets the container registry to pull images from. If not set, the `imageRepository` defined in ClusterConfiguration will be used instead. |
| `imageTag`<br>`string` | `imageTag` allows specifying a tag for the image. If this value is set, kubeadm does not automatically change the version of the above components during upgrades. |
v1beta4 APIEndpoint   code APIEndpoint  code   a    td   td      p  code localAPIEndpoint  code  represents the endpoint of the API server instance to be deployed on this node   p    td    tr   tr  td  code certificateKey  code  br    code string  code    td   td      p  code certificateKey  code  is the key that is used for decryption of certificates after they are downloaded from the Secret upon joining a new control plane node  The corresponding encryption key is in the InitConfiguration  The certificate key is a hex encoded string that is an AES key of size 32 bytes   p    td    tr    tbody    table       LocalEtcd        kubeadm k8s io v1beta4 LocalEtcd          Appears in        Etcd   kubeadm k8s io v1beta4 Etcd     p LocalEtcd describes that kubeadm should run an etcd cluster locally   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code ImageMeta  code   B  Required   B  br    a href   kubeadm k8s io v1beta4 ImageMeta   code ImageMeta  code   a    td   td  Members of  code ImageMeta  code  are embedded into this type       p ImageMeta allows to customize the container used for etcd  p    td    tr   tr  td  code dataDir  code   B  Required   B  br    code string  code    td   td      p  code dataDir  code  is the directory etcd will place its data  Defaults to  quot  var lib etcd quot    p    td    tr   tr  td  code extraArgs  code   B  Required   B  br    a href   kubeadm k8s io v1beta4 Arg   code   Arg  code   a    td   td      p  code extraArgs  code  are extra arguments provided to the etcd binary when run inside a static Pod  An argument name in this list is the flag name as it appears on the command line except without leading dash es   Extra arguments will override existing default arguments  Duplicate extra arguments are allowed   p    td    tr   tr  td  code extraEnvs  code  br    a href   kubeadm k8s io v1beta4 EnvVar   code   EnvVar  code   a    td   td      p  code 
extraEnvs  code  is an extra set of environment variables to pass to the control plane component  Environment variables passed using  code extraEnvs  code  will override any existing environment variables  or  code   proxy  code  environment variables that kubeadm adds by default   p    td    tr   tr  td  code serverCertSANs  code  br    code   string  code    td   td      p  code serverCertSANs  code  sets extra Subject Alternative Names  SANs  for the etcd server signing certificate   p    td    tr   tr  td  code peerCertSANs  code  br    code   string  code    td   td      p  code peerCertSANs  code  sets extra Subject Alternative Names  SANs  for the etcd peer signing certificate   p    td    tr    tbody    table       Networking        kubeadm k8s io v1beta4 Networking          Appears in        ClusterConfiguration   kubeadm k8s io v1beta4 ClusterConfiguration     p Networking contains elements describing cluster s networking configuration   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code serviceSubnet  code  br    code string  code    td   td      p  code serviceSubnet  code  is the subnet used by Kubernetes Services  Defaults to  quot 10 96 0 0 12 quot    p    td    tr   tr  td  code podSubnet  code  br    code string  code    td   td      p  code podSubnet  code  is the subnet used by Pods   p    td    tr   tr  td  code dnsDomain  code  br    code string  code    td   td      p  code dnsDomain  code  is the dns domain used by Kubernetes Services  Defaults to  quot cluster local quot    p    td    tr    tbody    table       NodeRegistrationOptions        kubeadm k8s io v1beta4 NodeRegistrationOptions          Appears in        InitConfiguration   kubeadm k8s io v1beta4 InitConfiguration      JoinConfiguration   kubeadm k8s io v1beta4 JoinConfiguration     p NodeRegistrationOptions holds fields that relate to registering a new control plane or node to the cluster  either via  
code kubeadm init  code  or  code kubeadm join  code    p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code name  code  br    code string  code    td   td      p  code name  code  is the  code  Metadata Name  code  field of the Node API object that will be created in this  code kubeadm init  code  or  code kubeadm join  code  operation  This field is also used in the  code CommonName  code  field of the kubelet s client certificate to the API server  Defaults to the hostname of the node if not provided   p    td    tr   tr  td  code criSocket  code  br    code string  code    td   td      p  code criSocket  code  is used to retrieve container runtime info  This information will be annotated to the Node API object  for later re use   p    td    tr   tr  td  code taints  code   B  Required   B  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  taint v1 core   code   core v1 Taint  code   a    td   td      p  code taints  code  specifies the taints the Node API object should be registered with  If this field is unset  i e  nil  it will be defaulted with a control plane taint for control plane nodes  If you don t want to taint your control plane node  set this field to an empty list  i e   code taints      code  in the YAML file  This field is solely used for Node registration   p    td    tr   tr  td  code kubeletExtraArgs  code  br    a href   kubeadm k8s io v1beta4 Arg   code   Arg  code   a    td   td      p  code kubeletExtraArgs  code  passes through extra arguments to the kubelet  The arguments here are passed to the kubelet command line via the environment file kubeadm writes at runtime for the kubelet to source  This overrides the generic base level configuration in the  code kubelet config  code  ConfigMap  Flags have higher priority when parsing  These values are local and specific to the node kubeadm is executing on  An argument name in this list is 
the flag name as it appears on the command line except without leading dash es   Extra arguments will override existing default arguments  Duplicate extra arguments are allowed   p    td    tr   tr  td  code ignorePreflightErrors  code  br    code   string  code    td   td      p  code ignorePreflightErrors  code  provides a slice of pre flight errors to be ignored when the current node is registered  e g   IsPrivilegedUser Swap   Value  all  ignores errors from all checks   p    td    tr   tr  td  code imagePullPolicy  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  pullpolicy v1 core   code core v1 PullPolicy  code   a    td   td      p  code imagePullPolicy  code  specifies the policy for image pulling during kubeadm  code init  code  and  code join  code  operations  The value of this field must be one of  quot Always quot    quot IfNotPresent quot  or  quot Never quot   If this field is unset kubeadm will default it to  quot IfNotPresent quot   or pull the required images if not present on the host   p    td    tr   tr  td  code imagePullSerial  code  br    code bool  code    td   td      p  code imagePullSerial  code  specifies if image pulling performed by kubeadm must be done serially or in parallel  Default  true  p    td    tr    tbody    table       Patches        kubeadm k8s io v1beta4 Patches          Appears in        InitConfiguration   kubeadm k8s io v1beta4 InitConfiguration      JoinConfiguration   kubeadm k8s io v1beta4 JoinConfiguration      UpgradeApplyConfiguration   kubeadm k8s io v1beta4 UpgradeApplyConfiguration      UpgradeNodeConfiguration   kubeadm k8s io v1beta4 UpgradeNodeConfiguration     p Patches contains options related to applying patches to components deployed by kubeadm   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code directory  code  br    code string  code    td   td      p  code directory  code  is a path 
to a directory that contains files named  quot target suffix   patchtype  extension quot   For example   quot kube apiserver0 merge yaml quot  or just  quot etcd json quot    quot target quot  can be one of  quot kube apiserver quot    quot kube controller manager quot    quot kube scheduler quot    quot etcd quot    quot kubeletconfiguration quot    quot corednsdeployment quot    quot patchtype quot  can be one of  quot strategic quot    quot merge quot  or  quot json quot  and they match the patch formats supported by kubectl  The default  quot patchtype quot  is  quot strategic quot    quot extension quot  must be either  quot json quot  or  quot yaml quot    quot suffix quot  is an optional string that can be used to determine which patches are applied first alpha numerically   p    td    tr    tbody    table       Proxy        kubeadm k8s io v1beta4 Proxy          Appears in        ClusterConfiguration   kubeadm k8s io v1beta4 ClusterConfiguration     p Proxy defines the proxy addon that should be used in the cluster   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code disabled  code   B  Required   B  br    code bool  code    td   td      p  code disabled  code  specifies whether to disable this addon in the cluster   p    td    tr    tbody    table       Timeouts        kubeadm k8s io v1beta4 Timeouts          Appears in        InitConfiguration   kubeadm k8s io v1beta4 InitConfiguration      JoinConfiguration   kubeadm k8s io v1beta4 JoinConfiguration      ResetConfiguration   kubeadm k8s io v1beta4 ResetConfiguration      UpgradeConfiguration   kubeadm k8s io v1beta4 UpgradeConfiguration     p Timeouts holds various timeouts that apply to kubeadm commands   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code controlPlaneComponentHealthCheck  code  br    a href  https   pkg go dev k8s io apimachinery pkg 
apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p  code controlPlaneComponentHealthCheck  code  is the amount of time to wait for a control plane component  such as the API server  to be healthy during  code kubeadm init  code  and  code kubeadm join  code   Default  4m  p    td    tr   tr  td  code kubeletHealthCheck  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p  code kubeletHealthCheck  code  is the amount of time to wait for the kubelet to be healthy during  code kubeadm init  code  and  code kubeadm join  code   Default  4m  p    td    tr   tr  td  code kubernetesAPICall  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p  code kubernetesAPICall  code  is the amount of time to wait for the kubeadm client to complete a request to the API server  This applies to all types of methods  GET  POST  etc   Default  1m  p    td    tr   tr  td  code etcdAPICall  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p  code etcdAPICall  code  is the amount of time to wait for the kubeadm etcd client to complete a request to the etcd cluster  Default  2m  p    td    tr   tr  td  code tlsBootstrap  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p  code tlsBootstrap  code  is the amount of time to wait for the kubelet to complete TLS bootstrap for a joining node  Default  5m  p    td    tr   tr  td  code discovery  code  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p  code discovery  code  is the amount of time to wait for kubeadm to validate the API server identity for a joining node  Default  5m  p    td    tr   tr  td  code 
upgradeManifests  code   B  Required   B  br    a href  https   pkg go dev k8s io apimachinery pkg apis meta v1 Duration   code meta v1 Duration  code   a    td   td      p  code upgradeManifests  code  is the timeout for upgrading static Pod manifests Default  5m  p    td    tr    tbody    table       UpgradeApplyConfiguration        kubeadm k8s io v1beta4 UpgradeApplyConfiguration          Appears in        UpgradeConfiguration   kubeadm k8s io v1beta4 UpgradeConfiguration     p UpgradeApplyConfiguration contains a list of configurable options which are specific to the   quot kubeadm upgrade apply quot  command   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code kubernetesVersion  code  br    code string  code    td   td      p  code kubernetesVersion  code  is the target version of the control plane   p    td    tr   tr  td  code allowExperimentalUpgrades  code  br    code bool  code    td   td      p  code allowExperimentalUpgrades  code  instructs kubeadm to show unstable versions of Kubernetes as an upgrade alternative and allows upgrading to an alpha beta release candidate version of Kubernetes  Default  false  p    td    tr   tr  td  code allowRCUpgrades  code  br    code bool  code    td   td      p Enable  code allowRCUpgrades  code  will show release candidate versions of Kubernetes as an upgrade alternative and allows upgrading to a release candidate version of Kubernetes   p    td    tr   tr  td  code certificateRenewal  code  br    code bool  code    td   td      p  code certificateRenewal  code  instructs kubeadm to execute certificate renewal during upgrades  Defaults to true   p    td    tr   tr  td  code dryRun  code  br    code bool  code    td   td      p  code dryRun  code  tells if the dry run mode is enabled  don t apply any change if it is and just output what would be done   p    td    tr   tr  td  code etcdUpgrade  code  br    code bool  code    td   td      p  
code etcdUpgrade  code  instructs kubeadm to execute etcd upgrade during upgrades  Defaults to true   p    td    tr   tr  td  code forceUpgrade  code  br    code bool  code    td   td      p  code forceUpgrade  code  flag instructs kubeadm to upgrade the cluster without prompting for confirmation   p    td    tr   tr  td  code ignorePreflightErrors  code  br    code   string  code    td   td      p  code ignorePreflightErrors  code  provides a slice of pre flight errors to be ignored during the upgrade process  e g   code IsPrivilegedUser Swap  code   Value  code all  code  ignores errors from all checks   p    td    tr   tr  td  code patches  code  br    a href   kubeadm k8s io v1beta4 Patches   code Patches  code   a    td   td      p  code patches  code  contains options related to applying patches to components deployed by kubeadm during  code kubeadm upgrade  code    p    td    tr   tr  td  code printConfig  code  br    code bool  code    td   td      p  code printConfig  code  specifies whether the configuration file that will be used in the upgrade should be printed or not   p    td    tr   tr  td  code skipPhases  code   B  Required   B  br    code   string  code    td   td      p  code skipPhases  code  is a list of phases to skip during command execution  NOTE  This field is currently ignored for  code kubeadm upgrade apply  code   but in the future it will be supported   p    td    tr   tr  td  code imagePullPolicy  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  pullpolicy v1 core   code core v1 PullPolicy  code   a    td   td      p  code imagePullPolicy  code  specifies the policy for image pulling during  code kubeadm upgrade apply  code  operations  The value of this field must be one of  quot Always quot    quot IfNotPresent quot  or  quot Never quot   If this field is unset kubeadm will default it to  quot IfNotPresent quot   or pull the required images if not present on the host   p    td    tr   tr  td  
code imagePullSerial  code  br    code bool  code    td   td      p  code imagePullSerial  code  specifies if image pulling performed by kubeadm must be done serially or in parallel  Default  true  p    td    tr    tbody    table       UpgradeDiffConfiguration        kubeadm k8s io v1beta4 UpgradeDiffConfiguration          Appears in        UpgradeConfiguration   kubeadm k8s io v1beta4 UpgradeConfiguration     p UpgradeDiffConfiguration contains a list of configurable options which are specific to the  code kubeadm upgrade diff  code  command   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code kubernetesVersion  code  br    code string  code    td   td      p  code kubernetesVersion  code  is the target version of the control plane   p    td    tr   tr  td  code contextLines  code  br    code int  code    td   td      p  code diffContextLines  code  is the number of lines of context in the diff   p    td    tr    tbody    table       UpgradeNodeConfiguration        kubeadm k8s io v1beta4 UpgradeNodeConfiguration          Appears in        UpgradeConfiguration   kubeadm k8s io v1beta4 UpgradeConfiguration     p UpgradeNodeConfiguration contains a list of configurable options which are specific to the  quot kubeadm upgrade node quot  command   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code certificateRenewal  code  br    code bool  code    td   td      p  code certificateRenewal  code  instructs kubeadm to execute certificate renewal during upgrades  Defaults to true   p    td    tr   tr  td  code dryRun  code  br    code bool  code    td   td      p  code dryRun  code  tells if the dry run mode is enabled  don t apply any change if it is and just output what would be done   p    td    tr   tr  td  code etcdUpgrade  code  br    code bool  code    td   td      p  code etcdUpgrade  code  instructs kubeadm to 
execute etcd upgrade during upgrades  Defaults to true   p    td    tr   tr  td  code ignorePreflightErrors  code  br    code   string  code    td   td      p  code ignorePreflightErrors  code  provides a slice of pre flight errors to be ignored during the upgrade process  e g   IsPrivilegedUser Swap   Value  all  ignores errors from all checks   p    td    tr   tr  td  code skipPhases  code  br    code   string  code    td   td      p  code skipPhases  code  is a list of phases to skip during command execution  The list of phases can be obtained with the  code kubeadm upgrade node phase   help  code  command   p    td    tr   tr  td  code patches  code  br    a href   kubeadm k8s io v1beta4 Patches   code Patches  code   a    td   td      p  code patches  code  contains options related to applying patches to components deployed by kubeadm during  code kubeadm upgrade  code    p    td    tr   tr  td  code imagePullPolicy  code  br    a href  https   kubernetes io docs reference generated kubernetes api v1 31  pullpolicy v1 core   code core v1 PullPolicy  code   a    td   td      p  code imagePullPolicy  code  specifies the policy for image pulling during  code kubeadm upgrade node  code  operations  The value of this field must be one of  quot Always quot    quot IfNotPresent quot  or  quot Never quot   If this field is unset kubeadm will default it to  quot IfNotPresent quot   or pull the required images if not present on the host   p    td    tr   tr  td  code imagePullSerial  code  br    code bool  code    td   td      p  code imagePullSerial  code  specifies if image pulling performed by kubeadm must be done serially or in parallel  Default  true  p    td    tr    tbody    table       UpgradePlanConfiguration        kubeadm k8s io v1beta4 UpgradePlanConfiguration          Appears in        UpgradeConfiguration   kubeadm k8s io v1beta4 UpgradeConfiguration     p UpgradePlanConfiguration contains a list of configurable options which are specific to the  quot 
kubeadm upgrade plan quot  command   p     table class  table    thead  tr  th width  30   Field  th  th Description  th   tr   thead   tbody           tr  td  code kubernetesVersion  code   B  Required   B  br    code string  code    td   td      p  code kubernetesVersion  code  is the target version of the control plane   p    td    tr   tr  td  code allowExperimentalUpgrades  code  br    code bool  code    td   td      p  code allowExperimentalUpgrades  code  instructs kubeadm to show unstable versions of Kubernetes as an upgrade alternative and allows upgrading to an alpha beta release candidate version of Kubernetes  Default  false  p    td    tr   tr  td  code allowRCUpgrades  code  br    code bool  code    td   td      p Enable  code allowRCUpgrades  code  will show release candidate versions of Kubernetes as an upgrade alternative and allows upgrading to a release candidate version of Kubernetes   p    td    tr   tr  td  code dryRun  code  br    code bool  code    td   td      p  code dryRun  code  tells if the dry run mode is enabled  don t apply any change if it is and just output what would be done   p    td    tr   tr  td  code ignorePreflightErrors  code  br    code   string  code    td   td      p  code ignorePreflightErrors  code  provides a slice of pre flight errors to be ignored during the upgrade process  e g   IsPrivilegedUser Swap   Value  all  ignores errors from all checks   p    td    tr   tr  td  code printConfig  code  br    code bool  code    td   td      p  code printConfig  code  specifies whether the configuration file that will be used in the upgrade should be printed or not   p    td    tr    tbody    table   "}
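The upgrade-related types above are consumed from a kubeadm configuration file. A minimal sketch of such a file, assuming an `UpgradeConfiguration` kind that nests the `UpgradeApplyConfiguration` fields under `apply` and a `Timeouts` section under `timeouts` (the field values shown are illustrative, not recommended defaults):

```yaml
# Illustrative kubeadm upgrade configuration sketch; values are examples only.
# Assumed usage: kubeadm upgrade apply --config upgrade.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: UpgradeConfiguration
apply:
  kubernetesVersion: v1.31.0     # target version of the control plane
  certificateRenewal: true       # renew certificates during the upgrade (default)
  etcdUpgrade: true              # also upgrade etcd (default)
  ignorePreflightErrors:
    - IsPrivilegedUser           # skip a specific pre-flight check
  imagePullPolicy: IfNotPresent  # "Always", "IfNotPresent" or "Never"
  imagePullSerial: true          # pull images one at a time (default)
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  etcdAPICall: 2m0s
```

Fields left out of the file keep the defaults listed in the tables above, so in practice only the values you want to override need to appear.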
{"questions":"kubernetes reference Metrics v1 31 Details of the metric data that Kubernetes components export contenttype reference title Kubernetes Metrics Reference autogenerated true","answers":"---\ntitle: Kubernetes Metrics Reference\ncontent_type: reference\nauto_generated: true\ndescription: >-\n  Details of the metric data that Kubernetes components export.\n---\n\n## Metrics (v1.31)\n\n<!-- (auto-generated 2024 Oct 28) -->\n<!-- (auto-generated v1.31) -->\nThis page details the metrics that different Kubernetes components export. You can query the metrics endpoint for these \ncomponents using an HTTP scrape, and fetch the current metrics data in Prometheus format.\n\n### List of Stable Kubernetes Metrics\n\nStable metrics observe strict API contracts and no labels can be added or removed from stable metrics during their lifetime.\n\n<div class=\"metrics\"><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">apiserver_admission_controller_admission_duration_seconds<\/div>\n\t<div class=\"metric_help\">Admission controller latency histogram in seconds, identified by name and broken out for each operation and API resource and type (validate or admit).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">rejected<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">apiserver_admission_step_admission_duration_seconds<\/div>\n\t<div class=\"metric_help\">Admission sub-step latency histogram in seconds, broken out for each operation 
and API resource and step type (validate or admit).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">rejected<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">apiserver_admission_webhook_admission_duration_seconds<\/div>\n\t<div class=\"metric_help\">Admission webhook latency histogram in seconds, identified by name and broken out for each operation and API resource and type (validate or admit).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">rejected<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">apiserver_current_inflight_requests<\/div>\n\t<div class=\"metric_help\">Maximal number of currently used inflight request limit of this apiserver per request kind in last second.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label 
class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">request_kind<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">apiserver_longrunning_requests<\/div>\n\t<div class=\"metric_help\">Gauge of all active long-running apiserver requests broken out by verb, group, version, resource, scope and component. Not all requests are tracked this way.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">component<\/span><span class=\"metric_label\">group<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">scope<\/span><span class=\"metric_label\">subresource<\/span><span class=\"metric_label\">verb<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">apiserver_request_duration_seconds<\/div>\n\t<div class=\"metric_help\">Response latency distribution in seconds for each verb, dry run value, group, version, resource, subresource, scope and component.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">component<\/span><span class=\"metric_label\">dry_run<\/span><span class=\"metric_label\">group<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">scope<\/span><span 
class=\"metric_label\">subresource<\/span><span class=\"metric_label\">verb<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">apiserver_request_total<\/div>\n\t<div class=\"metric_help\">Counter of apiserver requests broken out for each verb, dry run value, group, version, resource, scope, component, and HTTP response code.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><span class=\"metric_label\">component<\/span><span class=\"metric_label\">dry_run<\/span><span class=\"metric_label\">group<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">scope<\/span><span class=\"metric_label\">subresource<\/span><span class=\"metric_label\">verb<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">apiserver_requested_deprecated_apis<\/div>\n\t<div class=\"metric_help\">Gauge of deprecated APIs that have been requested, broken out by API group, version, resource, subresource, and removed_release.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">group<\/span><span class=\"metric_label\">removed_release<\/span><span class=\"metric_label\">resource<\/span><span 
class=\"metric_label\">subresource<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">apiserver_response_sizes<\/div>\n\t<div class=\"metric_help\">Response size distribution in bytes for each group, version, verb, resource, subresource, scope and component.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">component<\/span><span class=\"metric_label\">group<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">scope<\/span><span class=\"metric_label\">subresource<\/span><span class=\"metric_label\">verb<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">apiserver_storage_objects<\/div>\n\t<div class=\"metric_help\">Number of stored objects at the time of last check split by kind. 
In case of a fetching error, the value will be -1.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">apiserver_storage_size_bytes<\/div>\n\t<div class=\"metric_help\">Size of the storage database file physically allocated in bytes.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">storage_cluster_id<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">container_cpu_usage_seconds_total<\/div>\n\t<div class=\"metric_help\">Cumulative cpu time consumed by the container in core-seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">container<\/span><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">namespace<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">container_memory_working_set_bytes<\/div>\n\t<div class=\"metric_help\">Current working set of the 
container in bytes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">container<\/span><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">namespace<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">container_start_time_seconds<\/div>\n\t<div class=\"metric_help\">Start time of the container since unix epoch in seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">container<\/span><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">namespace<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">cronjob_controller_job_creation_skew_duration_seconds<\/div>\n\t<div class=\"metric_help\">Time between when a cronjob is scheduled to be run, and when the corresponding job is created<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">job_controller_job_pods_finished_total<\/div>\n\t<div class=\"metric_help\">The number of finished Pods that are fully 
tracked<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">completion_mode<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">job_controller_job_sync_duration_seconds<\/div>\n\t<div class=\"metric_help\">The time it took to sync a job<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">action<\/span><span class=\"metric_label\">completion_mode<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">job_controller_job_syncs_total<\/div>\n\t<div class=\"metric_help\">The number of job syncs<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">action<\/span><span class=\"metric_label\">completion_mode<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">job_controller_jobs_finished_total<\/div>\n\t<div 
class=\"metric_help\">The number of finished jobs<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">completion_mode<\/span><span class=\"metric_label\">reason<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">kube_pod_resource_limit<\/div>\n\t<div class=\"metric_help\">Resources limit for workloads on the cluster, broken down by pod. This shows the resource usage the scheduler and kubelet expect per pod for resources along with the unit for the resource if any.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">node<\/span><span class=\"metric_label\">scheduler<\/span><span class=\"metric_label\">priority<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">unit<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">kube_pod_resource_request<\/div>\n\t<div class=\"metric_help\">Resources requested by workloads on the cluster, broken down by pod. 
This shows the resource usage the scheduler and kubelet expect per pod for resources along with the unit for the resource if any.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">node<\/span><span class=\"metric_label\">scheduler<\/span><span class=\"metric_label\">priority<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">unit<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">kubernetes_healthcheck<\/div>\n\t<div class=\"metric_help\">This metric records the result of a single healthcheck.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">kubernetes_healthchecks_total<\/div>\n\t<div class=\"metric_help\">This metric records the results of all healthcheck.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label 
class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">status<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">node_collector_evictions_total<\/div>\n\t<div class=\"metric_help\">Number of Node evictions that happened since current instance of NodeController started.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">zone<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">node_cpu_usage_seconds_total<\/div>\n\t<div class=\"metric_help\">Cumulative cpu time consumed by the node in core-seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">node_memory_working_set_bytes<\/div>\n\t<div class=\"metric_help\">Current working set of the node in bytes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">pod_cpu_usage_seconds_total<\/div>\n\t<div class=\"metric_help\">Cumulative cpu time consumed by the pod in 
core-seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">namespace<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">pod_memory_working_set_bytes<\/div>\n\t<div class=\"metric_help\">Current working set of the pod in bytes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">namespace<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">resource_scrape_error<\/div>\n\t<div class=\"metric_help\">1 if there was an error while getting container metrics, 0 otherwise<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">scheduler_framework_extension_point_duration_seconds<\/div>\n\t<div class=\"metric_help\">Latency for running all plugins of a specific extension point.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li 
data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">extension_point<\/span><span class=\"metric_label\">profile<\/span><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">scheduler_pending_pods<\/div>\n\t<div class=\"metric_help\">Number of pending pods, by the queue type. 'active' means number of pods in activeQ; 'backoff' means number of pods in backoffQ; 'unschedulable' means number of pods in unschedulablePods that the scheduler attempted to schedule and failed; 'gated' is the number of unschedulable pods that the scheduler never attempted to schedule because they are gated.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">queue<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">scheduler_pod_scheduling_attempts<\/div>\n\t<div class=\"metric_help\">Number of attempts to successfully schedule a pod.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">scheduler_pod_scheduling_duration_seconds<\/div>\n\t<div class=\"metric_help\">E2e latency for a pod being scheduled which may include multiple scheduling 
attempts.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">attempts<\/span><\/li><li class=\"metric_deprecated_version\"><label class=\"metric_detail\">Deprecated Versions:<\/label><span>1.29.0<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">scheduler_preemption_attempts_total<\/div>\n\t<div class=\"metric_help\">Total preemption attempts in the cluster till now<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">scheduler_preemption_victims<\/div>\n\t<div class=\"metric_help\">Number of selected preemption victims<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">scheduler_queue_incoming_pods_total<\/div>\n\t<div class=\"metric_help\">Number of pods added to scheduling queues by event and queue type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span 
class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">event<\/span><span class=\"metric_label\">queue<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">scheduler_schedule_attempts_total<\/div>\n\t<div class=\"metric_help\">Number of attempts to schedule pods, by the result. 'unschedulable' means a pod could not be scheduled, while 'error' means an internal scheduler problem.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">profile<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"stable\">\n\t<div class=\"metric_name\">scheduler_scheduling_attempt_duration_seconds<\/div>\n\t<div class=\"metric_help\">Scheduling attempt latency in seconds (scheduling algorithm + binding)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">STABLE<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">profile<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div>\n<\/div>\n\n### List of Beta Kubernetes Metrics\n\nBeta metrics observe a looser API contract than their stable counterparts. No labels can be removed from beta metrics during their lifetime; however, labels can be added while the metric is in the beta stage. 
This offers the assurance that beta metrics will honor existing dashboards and alerts, while allowing for amendments in the future. \n\n<div class=\"metrics\"><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">apiserver_cel_compilation_duration_seconds<\/div>\n\t<div class=\"metric_help\">CEL compilation time in seconds.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">apiserver_cel_evaluation_duration_seconds<\/div>\n\t<div class=\"metric_help\">CEL evaluation time in seconds.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_current_executing_requests<\/div>\n\t<div class=\"metric_help\">Number of requests in initial (for a WATCH) or any (for a non-WATCH) execution stage in the API Priority and Fairness subsystem<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div 
class=\"metric_name\">apiserver_flowcontrol_current_executing_seats<\/div>\n\t<div class=\"metric_help\">Concurrency (number of seats) occupied by the currently executing (initial stage for a WATCH, any stage otherwise) requests in the API Priority and Fairness subsystem<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_current_inqueue_requests<\/div>\n\t<div class=\"metric_help\">Number of requests currently pending in queues of the API Priority and Fairness subsystem<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_dispatched_requests_total<\/div>\n\t<div class=\"metric_help\">Number of requests executed by API Priority and Fairness subsystem<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label 
class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_nominal_limit_seats<\/div>\n\t<div class=\"metric_help\">Nominal number of execution seats configured for each priority level<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_rejected_requests_total<\/div>\n\t<div class=\"metric_help\">Number of requests rejected by API Priority and Fairness subsystem<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><span class=\"metric_label\">reason<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_request_wait_duration_seconds<\/div>\n\t<div class=\"metric_help\">Length of time a request spent waiting in its queue<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span 
class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">execute<\/span><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">apiserver_validating_admission_policy_check_duration_seconds<\/div>\n\t<div class=\"metric_help\">Validation admission latency for individual validation expressions in seconds, labeled by policy and further including binding and enforcement action taken.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">enforcement_action<\/span><span class=\"metric_label\">error_type<\/span><span class=\"metric_label\">policy<\/span><span class=\"metric_label\">policy_binding<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">apiserver_validating_admission_policy_check_total<\/div>\n\t<div class=\"metric_help\">Validation admission policy check total, labeled by policy and further identified by binding and enforcement action taken.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">enforcement_action<\/span><span class=\"metric_label\">error_type<\/span><span class=\"metric_label\">policy<\/span><span 
class=\"metric_label\">policy_binding<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">disabled_metrics_total<\/div>\n\t<div class=\"metric_help\">The count of disabled metrics.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">hidden_metrics_total<\/div>\n\t<div class=\"metric_help\">The count of hidden metrics.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">kubernetes_feature_enabled<\/div>\n\t<div class=\"metric_help\">This metric records the data about the stage and enablement of a k8s feature.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">stage<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">registered_metrics_total<\/div>\n\t<div class=\"metric_help\">The count of registered metrics broken by stability level and deprecation version.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li 
data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">deprecated_version<\/span><span class=\"metric_label\">stability_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"beta\">\n\t<div class=\"metric_name\">scheduler_pod_scheduling_sli_duration_seconds<\/div>\n\t<div class=\"metric_help\">E2e latency for a pod being scheduled, from the time the pod enters the scheduling queue and might involve multiple scheduling attempts.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">BETA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">attempts<\/span><\/li><\/ul>\n\t<\/div>\n<\/div>\n\n### List of Alpha Kubernetes Metrics\n\nAlpha metrics do not have any API guarantees. These metrics must be used at your own risk; subsequent versions of Kubernetes may remove these metrics altogether, or change their API in ways that break existing dashboards and alerts. 
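Because alpha metrics can be removed or changed between releases, it can be worth verifying after an upgrade that the metrics your dashboards and alerts depend on are still exposed. Below is a minimal, illustrative sketch (not an official tool; the sample payload and metric set are hypothetical) that extracts metric family names from Prometheus text exposition output, such as a component's `/metrics` endpoint, using only the Python standard library:

```python
import re

# A sample line starts with the metric family name, optionally followed
# by a {label="value",...} block, then whitespace and the sample value.
_METRIC_LINE = re.compile(r"^([a-zA-Z_:][a-zA-Z0-9_:]*)[{ \t]")

def exposed_metric_names(payload: str) -> set:
    """Return the metric names present in a /metrics scrape payload."""
    names = set()
    for line in payload.splitlines():
        if not line or line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        match = _METRIC_LINE.match(line)
        if match:
            names.add(match.group(1))
    return names

def missing_metrics(payload: str, wanted: set) -> set:
    """Metrics a dashboard relies on that the scrape no longer exposes."""
    return wanted - exposed_metric_names(payload)

# Illustrative payload; real output comes from a component's /metrics endpoint.
SAMPLE = """\
# HELP scheduler_pending_pods Number of pending pods, by the queue type.
# TYPE scheduler_pending_pods gauge
scheduler_pending_pods{queue="active"} 3
# TYPE registered_metrics_total counter
registered_metrics_total{stability_level="BETA"} 12
"""

if __name__ == "__main__":
    gone = missing_metrics(SAMPLE, {"scheduler_pending_pods",
                                    "apiserver_request_total"})
    print(sorted(gone))  # metric names absent from this sample scrape
```

Running this periodically (or in an upgrade pipeline) against the real scrape output surfaces removed alpha metrics before the dashboards that chart them silently go blank.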
\n\n<div class=\"metrics\"><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">aggregator_discovery_aggregation_count_total<\/div>\n\t<div class=\"metric_help\">Counter of number of times discovery was aggregated<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">aggregator_openapi_v2_regeneration_count<\/div>\n\t<div class=\"metric_help\">Counter of OpenAPI v2 spec regeneration count broken down by causing APIService name and reason.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">apiservice<\/span><span class=\"metric_label\">reason<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">aggregator_openapi_v2_regeneration_duration<\/div>\n\t<div class=\"metric_help\">Gauge of OpenAPI v2 spec regeneration duration in seconds.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">reason<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">aggregator_unavailable_apiservice<\/div>\n\t<div 
class=\"metric_help\">Gauge of APIServices which are marked as unavailable broken down by APIService name.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">aggregator_unavailable_apiservice_total<\/div>\n\t<div class=\"metric_help\">Counter of APIServices which are marked as unavailable broken down by APIService name and reason.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">reason<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiextensions_apiserver_validation_ratcheting_seconds<\/div>\n\t<div class=\"metric_help\">Time for comparison of old to new for the purposes of CRDValidationRatcheting during an UPDATE in seconds.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiextensions_openapi_v2_regeneration_count<\/div>\n\t<div class=\"metric_help\">Counter of OpenAPI v2 spec regeneration count broken 
down by causing CRD name and reason.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">crd<\/span><span class=\"metric_label\">reason<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiextensions_openapi_v3_regeneration_count<\/div>\n\t<div class=\"metric_help\">Counter of OpenAPI v3 spec regeneration count broken down by group, version, causing CRD and reason.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">crd<\/span><span class=\"metric_label\">group<\/span><span class=\"metric_label\">reason<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_admission_match_condition_evaluation_errors_total<\/div>\n\t<div class=\"metric_help\">Admission match condition evaluation errors count, identified by name of resource containing the match condition and broken out for each kind containing matchConditions (webhook or policy), operation and admission type (validate or admit).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li 
class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">kind<\/span><span class=\"metric_label\">name<\/span><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_admission_match_condition_evaluation_seconds<\/div>\n\t<div class=\"metric_help\">Admission match condition evaluation time in seconds, identified by name and broken out for each kind containing matchConditions (webhook or policy), operation and type (validate or admit).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">kind<\/span><span class=\"metric_label\">name<\/span><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_admission_match_condition_exclusions_total<\/div>\n\t<div class=\"metric_help\">Admission match condition evaluation exclusions count, identified by name of resource containing the match condition and broken out for each kind containing matchConditions (webhook or policy), operation and admission type (validate or admit).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">kind<\/span><span 
class=\"metric_label\">name<\/span><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_admission_step_admission_duration_seconds_summary<\/div>\n\t<div class=\"metric_help\">Admission sub-step latency summary in seconds, broken out for each operation and API resource and step type (validate or admit).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"summary\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Summary<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">rejected<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_admission_webhook_fail_open_count<\/div>\n\t<div class=\"metric_help\">Admission webhook fail open count, identified by name and broken out for each admission type (validating or mutating).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_admission_webhook_rejection_count<\/div>\n\t<div class=\"metric_help\">Admission webhook rejection count, identified by name and broken out for each admission type (validating or admit) and operation. 
Additional labels specify an error type (calling_webhook_error or apiserver_internal_error if an error occurred; no_error otherwise) and optionally a non-zero rejection code if the webhook rejects the request with an HTTP status code (honored by the apiserver when the code is greater or equal to 400). Codes greater than 600 are truncated to 600, to keep the metrics cardinality bounded.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">error_type<\/span><span class=\"metric_label\">name<\/span><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">rejection_code<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_admission_webhook_request_total<\/div>\n\t<div class=\"metric_help\">Admission webhook request total, identified by name and broken out for each admission type (validating or mutating) and operation. Additional labels specify whether the request was rejected or not and an HTTP status code. 
Codes greater than 600 are truncated to 600, to keep the metrics cardinality bounded.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><span class=\"metric_label\">name<\/span><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">rejected<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_audit_error_total<\/div>\n\t<div class=\"metric_help\">Counter of audit events that failed to be audited properly. Plugin identifies the plugin affected by the error.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">plugin<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_audit_event_total<\/div>\n\t<div class=\"metric_help\">Counter of audit events generated and sent to the audit backend.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_audit_level_total<\/div>\n\t<div class=\"metric_help\">Counter of policy levels 
for audit events (1 per request).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_audit_requests_rejected_total<\/div>\n\t<div class=\"metric_help\">Counter of apiserver requests rejected due to an error in audit logging backend.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authentication_config_controller_automatic_reload_last_timestamp_seconds<\/div>\n\t<div class=\"metric_help\">Timestamp of the last automatic reload of authentication configuration split by status and apiserver identity.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">apiserver_id_hash<\/span><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authentication_config_controller_automatic_reloads_total<\/div>\n\t<div class=\"metric_help\">Total number of automatic reloads of authentication configuration split by 
status and apiserver identity.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">apiserver_id_hash<\/span><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authentication_jwt_authenticator_latency_seconds<\/div>\n\t<div class=\"metric_help\">Latency of jwt authentication operations in seconds. This is the time spent authenticating a token for cache miss only (i.e. when the token is not found in the cache).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">jwt_issuer_hash<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authorization_config_controller_automatic_reload_last_timestamp_seconds<\/div>\n\t<div class=\"metric_help\">Timestamp of the last automatic reload of authorization configuration split by status and apiserver identity.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span 
class=\"metric_label\">apiserver_id_hash<\/span><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authorization_config_controller_automatic_reloads_total<\/div>\n\t<div class=\"metric_help\">Total number of automatic reloads of authorization configuration split by status and apiserver identity.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">apiserver_id_hash<\/span><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authorization_decisions_total<\/div>\n\t<div class=\"metric_help\">Total number of terminal decisions made by an authorizer split by authorizer type, name, and decision.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">decision<\/span><span class=\"metric_label\">name<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authorization_match_condition_evaluation_errors_total<\/div>\n\t<div class=\"metric_help\">Total number of errors when an authorization webhook encounters a match condition error split by authorizer type and name.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability 
Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authorization_match_condition_evaluation_seconds<\/div>\n\t<div class=\"metric_help\">Authorization match condition evaluation time in seconds, split by authorizer type and name.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authorization_match_condition_exclusions_total<\/div>\n\t<div class=\"metric_help\">Total number of exclusions when an authorization webhook is skipped because match conditions exclude it.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authorization_webhook_duration_seconds<\/div>\n\t<div 
class=\"metric_help\">Request latency in seconds.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authorization_webhook_evaluations_fail_open_total<\/div>\n\t<div class=\"metric_help\">NoOpinion results due to webhook timeout or error.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_authorization_webhook_evaluations_total<\/div>\n\t<div class=\"metric_help\">Round-trips to authorization webhooks.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_cache_list_fetched_objects_total<\/div>\n\t<div 
class=\"metric_help\">Number of objects read from watch cache in the course of serving a LIST request<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">index<\/span><span class=\"metric_label\">resource_prefix<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_cache_list_returned_objects_total<\/div>\n\t<div class=\"metric_help\">Number of objects returned for a LIST request from watch cache<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource_prefix<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_cache_list_total<\/div>\n\t<div class=\"metric_help\">Number of LIST requests served from watch cache<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">index<\/span><span class=\"metric_label\">resource_prefix<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">apiserver_certificates_registry_csr_honored_duration_total<\/div>\n\t<div class=\"metric_help\">Total number of issued CSRs with a requested duration that was honored, sliced by signer (only kubernetes.io signer names are specifically identified)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">signerName<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_certificates_registry_csr_requested_duration_total<\/div>\n\t<div class=\"metric_help\">Total number of issued CSRs with a requested duration, sliced by signer (only kubernetes.io signer names are specifically identified)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">signerName<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_client_certificate_expiration_seconds<\/div>\n\t<div class=\"metric_help\">Distribution of the remaining lifetime on the certificate used to authenticate a request.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" 
data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_clusterip_repair_ip_errors_total<\/div>\n\t<div class=\"metric_help\">Number of errors detected on clusterips by the repair loop broken down by type of error: leak, repair, full, outOfRange, duplicate, unknown, invalid<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_clusterip_repair_reconcile_errors_total<\/div>\n\t<div class=\"metric_help\">Number of reconciliation failures on the clusterip repair reconcile loop<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_conversion_webhook_duration_seconds<\/div>\n\t<div class=\"metric_help\">Conversion webhook request latency<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">failure_type<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">apiserver_conversion_webhook_request_total<\/div>\n\t<div class=\"metric_help\">Counter for conversion webhook requests with success\/failure and failure error type<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">failure_type<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_crd_conversion_webhook_duration_seconds<\/div>\n\t<div class=\"metric_help\">CRD webhook conversion duration in seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">crd_name<\/span><span class=\"metric_label\">from_version<\/span><span class=\"metric_label\">succeeded<\/span><span class=\"metric_label\">to_version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_current_inqueue_requests<\/div>\n\t<div class=\"metric_help\">Maximal number of queued requests in this apiserver per request kind in last second.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label 
class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">request_kind<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_delegated_authn_request_duration_seconds<\/div>\n\t<div class=\"metric_help\">Request latency in seconds. Broken down by status code.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_delegated_authn_request_total<\/div>\n\t<div class=\"metric_help\">Number of HTTP requests partitioned by status code.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_delegated_authz_request_duration_seconds<\/div>\n\t<div class=\"metric_help\">Request latency in seconds. 
Broken down by status code.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_delegated_authz_request_total<\/div>\n\t<div class=\"metric_help\">Number of HTTP requests partitioned by status code.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_egress_dialer_dial_duration_seconds<\/div>\n\t<div class=\"metric_help\">Dial latency histogram in seconds, labeled by the protocol (http-connect or grpc), transport (tcp or uds)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">protocol<\/span><span class=\"metric_label\">transport<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_egress_dialer_dial_failure_count<\/div>\n\t<div class=\"metric_help\">Dial failure count, labeled by the protocol 
(http-connect or grpc), transport (tcp or uds), and stage (connect or proxy). The stage indicates at which stage the dial failed<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">protocol<\/span><span class=\"metric_label\">stage<\/span><span class=\"metric_label\">transport<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_egress_dialer_dial_start_total<\/div>\n\t<div class=\"metric_help\">Dial starts, labeled by the protocol (http-connect or grpc) and transport (tcp or uds).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">protocol<\/span><span class=\"metric_label\">transport<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_encryption_config_controller_automatic_reload_failures_total<\/div>\n\t<div class=\"metric_help\">Total number of failed automatic reloads of encryption configuration split by apiserver identity.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span 
class=\"metric_label\">apiserver_id_hash<\/span><\/li><li class=\"metric_deprecated_version\"><label class=\"metric_detail\">Deprecated Versions:<\/label><span>1.30.0<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_encryption_config_controller_automatic_reload_last_timestamp_seconds<\/div>\n\t<div class=\"metric_help\">Timestamp of the last successful or failed automatic reload of encryption configuration split by apiserver identity.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">apiserver_id_hash<\/span><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_encryption_config_controller_automatic_reload_success_total<\/div>\n\t<div class=\"metric_help\">Total number of successful automatic reloads of encryption configuration split by apiserver identity.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">apiserver_id_hash<\/span><\/li><li class=\"metric_deprecated_version\"><label class=\"metric_detail\">Deprecated Versions:<\/label><span>1.30.0<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_encryption_config_controller_automatic_reloads_total<\/div>\n\t<div class=\"metric_help\">Total 
number of reload successes and failures of encryption configuration split by apiserver identity.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">apiserver_id_hash<\/span><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_envelope_encryption_dek_cache_fill_percent<\/div>\n\t<div class=\"metric_help\">Percent of the cache slots currently occupied by cached DEKs.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_envelope_encryption_dek_cache_inter_arrival_time_seconds<\/div>\n\t<div class=\"metric_help\">Time (in seconds) of inter arrival of transformation requests.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">transformation_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_envelope_encryption_dek_source_cache_size<\/div>\n\t<div class=\"metric_help\">Number of records in data encryption key (DEK) source cache. 
On a restart, this value is an approximation of the number of decrypt RPC calls the server will make to the KMS plugin.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">provider_name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_envelope_encryption_invalid_key_id_from_status_total<\/div>\n\t<div class=\"metric_help\">Number of times an invalid keyID is returned by the Status RPC call split by error.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">error<\/span><span class=\"metric_label\">provider_name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_envelope_encryption_key_id_hash_last_timestamp_seconds<\/div>\n\t<div class=\"metric_help\">The last time in seconds when a keyID was used.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">apiserver_id_hash<\/span><span class=\"metric_label\">key_id_hash<\/span><span class=\"metric_label\">provider_name<\/span><span 
class=\"metric_label\">transformation_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_envelope_encryption_key_id_hash_status_last_timestamp_seconds<\/div>\n\t<div class=\"metric_help\">The last time in seconds when a keyID was returned by the Status RPC call.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">apiserver_id_hash<\/span><span class=\"metric_label\">key_id_hash<\/span><span class=\"metric_label\">provider_name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_envelope_encryption_key_id_hash_total<\/div>\n\t<div class=\"metric_help\">Number of times a keyID is used split by transformation type, provider, and apiserver identity.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">apiserver_id_hash<\/span><span class=\"metric_label\">key_id_hash<\/span><span class=\"metric_label\">provider_name<\/span><span class=\"metric_label\">transformation_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_envelope_encryption_kms_operations_latency_seconds<\/div>\n\t<div class=\"metric_help\">KMS operation duration with gRPC error code status total.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability 
Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">grpc_status_code<\/span><span class=\"metric_label\">method_name<\/span><span class=\"metric_label\">provider_name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_current_inqueue_seats<\/div>\n\t<div class=\"metric_help\">Number of seats currently pending in queues of the API Priority and Fairness subsystem<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_current_limit_seats<\/div>\n\t<div class=\"metric_help\">current derived number of execution seats available to each priority level<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_current_r<\/div>\n\t<div class=\"metric_help\">R(time of last 
change)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_demand_seats<\/div>\n\t<div class=\"metric_help\">Observations, at the end of every nanosecond, of (the number of seats each priority level could use) \/ (nominal number of seats for that level)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"timingratiohistogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">TimingRatioHistogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_demand_seats_average<\/div>\n\t<div class=\"metric_help\">Time-weighted average, over last adjustment period, of demand_seats<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_demand_seats_high_watermark<\/div>\n\t<div class=\"metric_help\">High watermark, over 
last adjustment period, of demand_seats<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_demand_seats_smoothed<\/div>\n\t<div class=\"metric_help\">Smoothed seat demands<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_demand_seats_stdev<\/div>\n\t<div class=\"metric_help\">Time-weighted standard deviation, over last adjustment period, of demand_seats<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_dispatch_r<\/div>\n\t<div class=\"metric_help\">R(time of last dispatch)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span 
class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_epoch_advance_total<\/div>\n\t<div class=\"metric_help\">Number of times the queueset's progress meter jumped backward<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><span class=\"metric_label\">success<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_latest_s<\/div>\n\t<div class=\"metric_help\">S(most recently dispatched request)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_lower_limit_seats<\/div>\n\t<div class=\"metric_help\">Configured lower bound on number of execution seats available to each priority level<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span 
class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_next_discounted_s_bounds<\/div>\n\t<div class=\"metric_help\">min and max, over queues, of S(oldest waiting request in queue) - estimated work in progress<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">bound<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_next_s_bounds<\/div>\n\t<div class=\"metric_help\">min and max, over queues, of S(oldest waiting request in queue)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">bound<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_priority_level_request_utilization<\/div>\n\t<div class=\"metric_help\">Observations, at the end of every nanosecond, of number of requests (as a fraction of the relevant 
limit) waiting or in any stage of execution (but only initial stage for WATCHes)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"timingratiohistogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">TimingRatioHistogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">phase<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_priority_level_seat_utilization<\/div>\n\t<div class=\"metric_help\">Observations, at the end of every nanosecond, of utilization of seats for any stage of execution (but only initial stage for WATCHes)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"timingratiohistogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">TimingRatioHistogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><li class=\"metric_labels_constant\"><label class=\"metric_detail\">Const Labels:<\/label><span class=\"metric_label\">phase:executing<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_read_vs_write_current_requests<\/div>\n\t<div class=\"metric_help\">Observations, at the end of every nanosecond, of the number of requests (as a fraction of the relevant limit) waiting or in regular stage of execution<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"timingratiohistogram\"><label 
class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">TimingRatioHistogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">phase<\/span><span class=\"metric_label\">request_kind<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_request_concurrency_in_use<\/div>\n\t<div class=\"metric_help\">Concurrency (number of seats) occupied by the currently executing (initial stage for a WATCH, any stage otherwise) requests in the API Priority and Fairness subsystem<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><\/li><li class=\"metric_deprecated_version\"><label class=\"metric_detail\">Deprecated Versions:<\/label><span>1.31.0<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_request_concurrency_limit<\/div>\n\t<div class=\"metric_help\">Nominal number of execution seats configured for each priority level<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><li class=\"metric_deprecated_version\"><label class=\"metric_detail\">Deprecated 
Versions:<\/label><span>1.30.0<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_request_dispatch_no_accommodation_total<\/div>\n\t<div class=\"metric_help\">Number of times a dispatch attempt resulted in a non accommodation due to lack of available seats<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_request_execution_seconds<\/div>\n\t<div class=\"metric_help\">Duration of initial stage (for a WATCH) or any (for a non-WATCH) stage of request execution in the API Priority and Fairness subsystem<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_request_queue_length_after_enqueue<\/div>\n\t<div class=\"metric_help\">Length of queue in the API Priority and Fairness subsystem, as seen by each request after it is enqueued<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span 
class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_seat_fair_frac<\/div>\n\t<div class=\"metric_help\">Fair fraction of server's concurrency to allocate to each priority level that can use it<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_target_seats<\/div>\n\t<div class=\"metric_help\">Seat allocation targets<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_upper_limit_seats<\/div>\n\t<div class=\"metric_help\">Configured upper bound on number of execution seats available to each priority level<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span 
class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_watch_count_samples<\/div>\n\t<div class=\"metric_help\">count of watchers for mutating requests in API Priority and Fairness<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_flowcontrol_work_estimated_seats<\/div>\n\t<div class=\"metric_help\">Number of estimated seats (maximum of initial and final seats) associated with requests in API Priority and Fairness<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">flow_schema<\/span><span class=\"metric_label\">priority_level<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_init_events_total<\/div>\n\t<div class=\"metric_help\">Counter of init events processed in watch cache broken by resource type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span 
class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_kube_aggregator_x509_insecure_sha1_total<\/div>\n\t<div class=\"metric_help\">Counts the number of requests to servers with insecure SHA1 signatures in their serving certificate OR the number of connection failures due to the insecure SHA1 signatures (either\/or, based on the runtime environment)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_kube_aggregator_x509_missing_san_total<\/div>\n\t<div class=\"metric_help\">Counts the number of requests to servers missing SAN extension in their serving certificate OR the number of connection failures due to the lack of x509 certificate SAN extension missing (either\/or, based on the runtime environment)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_nodeport_repair_port_errors_total<\/div>\n\t<div class=\"metric_help\">Number of errors detected on ports by the repair loop broken down by type of error: leak, repair, full, outOfRange, duplicate, 
unknown<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_nodeport_repair_reconcile_errors_total<\/div>\n\t<div class=\"metric_help\">Number of reconciliation failures on the nodeport repair reconcile loop<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_request_aborts_total<\/div>\n\t<div class=\"metric_help\">Number of requests which apiserver aborted possibly due to a timeout, for each group, version, verb, resource, subresource and scope<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">group<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">scope<\/span><span class=\"metric_label\">subresource<\/span><span class=\"metric_label\">verb<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_request_body_size_bytes<\/div>\n\t<div 
class=\"metric_help\">Apiserver request body size in bytes broken out by resource and verb.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">verb<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_request_filter_duration_seconds<\/div>\n\t<div class=\"metric_help\">Request filter latency distribution in seconds, for each filter type<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">filter<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_request_post_timeout_total<\/div>\n\t<div class=\"metric_help\">Tracks the activity of the request handlers after the associated requests have been timed out by the apiserver<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">source<\/span><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">apiserver_request_sli_duration_seconds<\/div>\n\t<div class=\"metric_help\">Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subresource, scope and component.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">component<\/span><span class=\"metric_label\">group<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">scope<\/span><span class=\"metric_label\">subresource<\/span><span class=\"metric_label\">verb<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_request_slo_duration_seconds<\/div>\n\t<div class=\"metric_help\">Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subresource, scope and component.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">component<\/span><span class=\"metric_label\">group<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">scope<\/span><span class=\"metric_label\">subresource<\/span><span class=\"metric_label\">verb<\/span><span class=\"metric_label\">version<\/span><\/li><li 
class=\"metric_deprecated_version\"><label class=\"metric_detail\">Deprecated Versions:<\/label><span>1.27.0<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_request_terminations_total<\/div>\n\t<div class=\"metric_help\">Number of requests which apiserver terminated in self-defense.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><span class=\"metric_label\">component<\/span><span class=\"metric_label\">group<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">scope<\/span><span class=\"metric_label\">subresource<\/span><span class=\"metric_label\">verb<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_request_timestamp_comparison_time<\/div>\n\t<div class=\"metric_help\">Time taken for comparison of old vs new objects in UPDATE or PATCH requests<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code_path<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_rerouted_request_total<\/div>\n\t<div class=\"metric_help\">Total number of requests that were proxied to a peer kube apiserver because the local apiserver was not capable of 
serving it<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_selfrequest_total<\/div>\n\t<div class=\"metric_help\">Counter of apiserver self-requests broken out for each verb, API resource and subresource.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">subresource<\/span><span class=\"metric_label\">verb<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_data_key_generation_duration_seconds<\/div>\n\t<div class=\"metric_help\">Latencies in seconds of data encryption key(DEK) generation operations.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_data_key_generation_failures_total<\/div>\n\t<div class=\"metric_help\">Total number of failed data encryption key(DEK) generation operations.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability 
Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_db_total_size_in_bytes<\/div>\n\t<div class=\"metric_help\">Total size of the storage database file physically allocated in bytes.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">endpoint<\/span><\/li><li class=\"metric_deprecated_version\"><label class=\"metric_detail\">Deprecated Versions:<\/label><span>1.28.0<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_decode_errors_total<\/div>\n\t<div class=\"metric_help\">Number of stored object decode errors split by object type<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_envelope_transformation_cache_misses_total<\/div>\n\t<div class=\"metric_help\">Total number of cache misses while accessing key decryption key(KEK).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span 
class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_events_received_total<\/div>\n\t<div class=\"metric_help\">Number of etcd events received split by kind.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_list_evaluated_objects_total<\/div>\n\t<div class=\"metric_help\">Number of objects tested in the course of serving a LIST request from storage<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_list_fetched_objects_total<\/div>\n\t<div class=\"metric_help\">Number of objects read from storage in the course of serving a LIST request<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li 
class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_list_returned_objects_total<\/div>\n\t<div class=\"metric_help\">Number of objects returned for a LIST request from storage<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_list_total<\/div>\n\t<div class=\"metric_help\">Number of LIST requests served from storage<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_transformation_duration_seconds<\/div>\n\t<div class=\"metric_help\">Latencies in seconds of value transformation operations.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span 
class=\"metric_label\">transformation_type<\/span><span class=\"metric_label\">transformer_prefix<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_storage_transformation_operations_total<\/div>\n\t<div class=\"metric_help\">Total number of transformations. Successful transformation will have a status 'OK' and a varied status string when the transformation fails. This status and transformation_type fields may be used for alerting on encryption\/decryption failure using transformation_type from_storage for decryption and to_storage for encryption<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">status<\/span><span class=\"metric_label\">transformation_type<\/span><span class=\"metric_label\">transformer_prefix<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_stream_translator_requests_total<\/div>\n\t<div class=\"metric_help\">Total number of requests that were handled by the StreamTranslatorProxy, which processes streaming RemoteCommand\/V5<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_stream_tunnel_requests_total<\/div>\n\t<div class=\"metric_help\">Total number 
of requests that were handled by the StreamTunnelProxy, which processes streaming PortForward\/V2<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_terminated_watchers_total<\/div>\n\t<div class=\"metric_help\">Counter of watchers closed due to unresponsiveness broken by resource type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_tls_handshake_errors_total<\/div>\n\t<div class=\"metric_help\">Number of requests dropped with 'TLS handshake error from' error<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_watch_cache_consistent_read_total<\/div>\n\t<div class=\"metric_help\">Counter for consistent reads from cache.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li 
data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">fallback<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">success<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_watch_cache_events_dispatched_total<\/div>\n\t<div class=\"metric_help\">Counter of events dispatched in watch cache broken by resource type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_watch_cache_events_received_total<\/div>\n\t<div class=\"metric_help\">Counter of events received in watch cache broken by resource type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_watch_cache_initializations_total<\/div>\n\t<div class=\"metric_help\">Counter of watch cache initializations broken by resource type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span 
class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_watch_cache_read_wait_seconds<\/div>\n\t<div class=\"metric_help\">Histogram of time spent waiting for a watch cache to become fresh.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_watch_cache_resource_version<\/div>\n\t<div class=\"metric_help\">Current resource version of watch cache broken by resource type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_watch_events_sizes<\/div>\n\t<div class=\"metric_help\">Watch event size distribution in bytes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label 
class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">group<\/span><span class=\"metric_label\">kind<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_watch_events_total<\/div>\n\t<div class=\"metric_help\">Number of events sent in watch clients<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">group<\/span><span class=\"metric_label\">kind<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_watch_list_duration_seconds<\/div>\n\t<div class=\"metric_help\">Response latency distribution in seconds for watch list requests broken by group, version, resource and scope.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">group<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">scope<\/span><span class=\"metric_label\">version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_webhooks_x509_insecure_sha1_total<\/div>\n\t<div class=\"metric_help\">Counts the number of 
requests to servers with insecure SHA1 signatures in their serving certificate OR the number of connection failures due to the insecure SHA1 signatures (either\/or, based on the runtime environment)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">apiserver_webhooks_x509_missing_san_total<\/div>\n\t<div class=\"metric_help\">Counts the number of requests to servers missing SAN extension in their serving certificate OR the number of connection failures due to the missing x509 certificate SAN extension (either\/or, based on the runtime environment)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">attach_detach_controller_attachdetach_controller_forced_detaches<\/div>\n\t<div class=\"metric_help\">Number of times the A\/D Controller performed a forced detach<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">reason<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">attachdetach_controller_total_volumes<\/div>\n\t<div class=\"metric_help\">Number of volumes in A\/D 
Controller<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">plugin_name<\/span><span class=\"metric_label\">state<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">authenticated_user_requests<\/div>\n\t<div class=\"metric_help\">Counter of authenticated requests broken out by username.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">username<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">authentication_attempts<\/div>\n\t<div class=\"metric_help\">Counter of authentication attempts.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">authentication_duration_seconds<\/div>\n\t<div class=\"metric_help\">Authentication duration in seconds broken out by result.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span 
class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">authentication_token_cache_active_fetch_count<\/div>\n\t<div class=\"metric_help\"><\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">authentication_token_cache_fetch_total<\/div>\n\t<div class=\"metric_help\"><\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">authentication_token_cache_request_duration_seconds<\/div>\n\t<div class=\"metric_help\"><\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label 
class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">authentication_token_cache_request_total<\/div>\n\t<div class=\"metric_help\"><\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">authorization_attempts_total<\/div>\n\t<div class=\"metric_help\">Counter of authorization attempts broken down by result. It can be either 'allowed', 'denied', 'no-opinion' or 'error'.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">authorization_duration_seconds<\/div>\n\t<div class=\"metric_help\">Authorization duration in seconds broken out by result.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" 
data-stability=\"alpha\">\n\t<div class=\"metric_name\">cloud_provider_webhook_request_duration_seconds<\/div>\n\t<div class=\"metric_help\">Request latency in seconds. Broken down by status code.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><span class=\"metric_label\">webhook<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">cloud_provider_webhook_request_total<\/div>\n\t<div class=\"metric_help\">Number of HTTP requests partitioned by status code.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><span class=\"metric_label\">webhook<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">container_swap_usage_bytes<\/div>\n\t<div class=\"metric_help\">Current amount of the container swap usage in bytes. 
Reported only on non-windows systems<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">container<\/span><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">namespace<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">csi_operations_seconds<\/div>\n\t<div class=\"metric_help\">Container Storage Interface operation duration with gRPC error code status total<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">driver_name<\/span><span class=\"metric_label\">grpc_status_code<\/span><span class=\"metric_label\">method_name<\/span><span class=\"metric_label\">migrated<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_controller_changes<\/div>\n\t<div class=\"metric_help\">Number of EndpointSlice changes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">endpoint_slice_controller_desired_endpoint_slices<\/div>\n\t<div class=\"metric_help\">Number of EndpointSlices that would exist with perfect endpoint allocation<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_controller_endpoints_added_per_sync<\/div>\n\t<div class=\"metric_help\">Number of endpoints added on each Service sync<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_controller_endpoints_desired<\/div>\n\t<div class=\"metric_help\">Number of endpoints desired<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_controller_endpoints_removed_per_sync<\/div>\n\t<div class=\"metric_help\">Number of endpoints removed on each Service sync<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">endpoint_slice_controller_endpointslices_changed_per_sync<\/div>\n\t<div class=\"metric_help\">Number of EndpointSlices changed on each Service sync<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">topology<\/span><span class=\"metric_label\">traffic_distribution<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_controller_num_endpoint_slices<\/div>\n\t<div class=\"metric_help\">Number of EndpointSlices<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_controller_services_count_by_traffic_distribution<\/div>\n\t<div class=\"metric_help\">Number of Services using some specific trafficDistribution<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">traffic_distribution<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_controller_syncs<\/div>\n\t<div class=\"metric_help\">Number of EndpointSlice syncs<\/div>\n\t<ul>\n\t<li><label 
class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_mirroring_controller_addresses_skipped_per_sync<\/div>\n\t<div class=\"metric_help\">Number of addresses skipped on each Endpoints sync due to being invalid or exceeding MaxEndpointsPerSubset<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_mirroring_controller_changes<\/div>\n\t<div class=\"metric_help\">Number of EndpointSlice changes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_mirroring_controller_desired_endpoint_slices<\/div>\n\t<div class=\"metric_help\">Number of EndpointSlices that would exist with perfect endpoint allocation<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label 
class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_mirroring_controller_endpoints_added_per_sync<\/div>\n\t<div class=\"metric_help\">Number of endpoints added on each Endpoints sync<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_mirroring_controller_endpoints_desired<\/div>\n\t<div class=\"metric_help\">Number of endpoints desired<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_mirroring_controller_endpoints_removed_per_sync<\/div>\n\t<div class=\"metric_help\">Number of endpoints removed on each Endpoints sync<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_mirroring_controller_endpoints_sync_duration<\/div>\n\t<div class=\"metric_help\">Duration of syncEndpoints() in seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label 
class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_mirroring_controller_endpoints_updated_per_sync<\/div>\n\t<div class=\"metric_help\">Number of endpoints updated on each Endpoints sync<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">endpoint_slice_mirroring_controller_num_endpoint_slices<\/div>\n\t<div class=\"metric_help\">Number of EndpointSlices<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">ephemeral_volume_controller_create_failures_total<\/div>\n\t<div class=\"metric_help\">Number of PersistentVolumeClaim creation requests that failed<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">ephemeral_volume_controller_create_total<\/div>\n\t<div class=\"metric_help\">Number of PersistentVolumeClaim creation requests<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label 
class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">etcd_bookmark_counts<\/div>\n\t<div class=\"metric_help\">Number of etcd bookmarks (progress notify events) split by kind.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">etcd_lease_object_counts<\/div>\n\t<div class=\"metric_help\">Number of objects attached to a single etcd lease.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">etcd_request_duration_seconds<\/div>\n\t<div class=\"metric_help\">Etcd request latency in seconds for each operation and object type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">etcd_request_errors_total<\/div>\n\t<div 
class=\"metric_help\">Etcd failed request counts for each operation and object type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">etcd_requests_total<\/div>\n\t<div class=\"metric_help\">Etcd request counts for each operation and object type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation<\/span><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">etcd_version_info<\/div>\n\t<div class=\"metric_help\">Etcd server's binary version<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">binary_version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">field_validation_request_duration_seconds<\/div>\n\t<div class=\"metric_help\">Response latency distribution in seconds for each field validation 
value<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">field_validation<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">force_cleaned_failed_volume_operation_errors_total<\/div>\n\t<div class=\"metric_help\">The number of volumes that failed force cleanup after their reconstruction failed during kubelet startup.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">force_cleaned_failed_volume_operations_total<\/div>\n\t<div class=\"metric_help\">The number of volumes that were force cleaned after their reconstruction failed during kubelet startup. 
This includes both successful and failed cleanups.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">garbagecollector_controller_resources_sync_error_total<\/div>\n\t<div class=\"metric_help\">Number of garbage collector resources sync errors<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">horizontal_pod_autoscaler_controller_metric_computation_duration_seconds<\/div>\n\t<div class=\"metric_help\">The time(seconds) that the HPA controller takes to calculate one metric. The label 'action' should be either 'scale_down', 'scale_up', or 'none'. The label 'error' should be either 'spec', 'internal', or 'none'. 
The label 'metric_type' corresponds to HPA.spec.metrics[*].type<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">action<\/span><span class=\"metric_label\">error<\/span><span class=\"metric_label\">metric_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">horizontal_pod_autoscaler_controller_metric_computation_total<\/div>\n\t<div class=\"metric_help\">Number of metric computations. The label 'action' should be either 'scale_down', 'scale_up', or 'none'. Also, the label 'error' should be either 'spec', 'internal', or 'none'. The label 'metric_type' corresponds to HPA.spec.metrics[*].type<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">action<\/span><span class=\"metric_label\">error<\/span><span class=\"metric_label\">metric_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">horizontal_pod_autoscaler_controller_reconciliation_duration_seconds<\/div>\n\t<div class=\"metric_help\">The time(seconds) that the HPA controller takes to reconcile once. The label 'action' should be either 'scale_down', 'scale_up', or 'none'. Also, the label 'error' should be either 'spec', 'internal', or 'none'. 
Note that if both spec and internal errors happen during a reconciliation, the first one to occur is reported in the `error` label.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">action<\/span><span class=\"metric_label\">error<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">horizontal_pod_autoscaler_controller_reconciliations_total<\/div>\n\t<div class=\"metric_help\">Number of reconciliations of HPA controller. The label 'action' should be either 'scale_down', 'scale_up', or 'none'. Also, the label 'error' should be either 'spec', 'internal', or 'none'. Note that if both spec and internal errors happen during a reconciliation, the first one to occur is reported in the `error` label.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">action<\/span><span class=\"metric_label\">error<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">job_controller_job_finished_indexes_total<\/div>\n\t<div class=\"metric_help\">The number of finished indexes. Possible values for the status label are: \"succeeded\", \"failed\". Possible values for the backoffLimit label are: \"perIndex\" and \"global\".<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">backoffLimit<\/span><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">job_controller_job_pods_creation_total<\/div>\n\t<div class=\"metric_help\">The number of Pods created by the Job controller labelled with a reason for the Pod creation. This metric also distinguishes between Pods created using different PodReplacementPolicy settings. Possible values of the \"reason\" label are: \"new\", \"recreate_terminating_or_failed\", \"recreate_failed\". Possible values of the \"status\" label are: \"succeeded\", \"failed\".<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">reason<\/span><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">job_controller_jobs_by_external_controller_total<\/div>\n\t<div class=\"metric_help\">The number of Jobs managed by an external controller<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span 
class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">controller_name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">job_controller_pod_failures_handled_by_failure_policy_total<\/div>\n\t<div class=\"metric_help\">The number of failed Pods handled by failure policy with respect to the failure policy action applied based on the matched rule. Possible values of the action label correspond to the possible values for the failure policy rule action, which are: \"FailJob\", \"Ignore\" and \"Count\".<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">action<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">job_controller_terminated_pods_tracking_finalizer_total<\/div>\n\t<div class=\"metric_help\">The number of terminated pods (phase=Failed|Succeeded) that have the finalizer batch.kubernetes.io\/job-tracking. The event label can be \"add\" or \"delete\".<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">event<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_clusterip_allocator_allocated_ips<\/div>\n\t<div 
class=\"metric_help\">Gauge measuring the number of allocated IPs for Services<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">cidr<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_clusterip_allocator_allocation_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds to allocate a Cluster IP by ServiceCIDR<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">cidr<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_clusterip_allocator_allocation_errors_total<\/div>\n\t<div class=\"metric_help\">Number of errors trying to allocate Cluster IPs<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">cidr<\/span><span class=\"metric_label\">scope<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_clusterip_allocator_allocation_total<\/div>\n\t<div class=\"metric_help\">Number of 
Cluster IPs allocations<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">cidr<\/span><span class=\"metric_label\">scope<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_clusterip_allocator_available_ips<\/div>\n\t<div class=\"metric_help\">Gauge measuring the number of available IPs for Services<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">cidr<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_nodeport_allocator_allocated_ports<\/div>\n\t<div class=\"metric_help\">Gauge measuring the number of allocated NodePorts for Services<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_nodeport_allocator_allocation_errors_total<\/div>\n\t<div class=\"metric_help\">Number of errors trying to allocate NodePort<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li 
data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">scope<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_nodeport_allocator_allocation_total<\/div>\n\t<div class=\"metric_help\">Number of NodePort allocations<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">scope<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_nodeport_allocator_available_ports<\/div>\n\t<div class=\"metric_help\">Gauge measuring the number of available NodePorts for Services<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_pod_logs_backend_tls_failure_total<\/div>\n\t<div class=\"metric_help\">Total number of requests for pods\/logs that failed due to kubelet server TLS verification<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">kube_apiserver_pod_logs_insecure_backend_total<\/div>\n\t<div class=\"metric_help\">Total number of requests for pods\/logs sliced by usage type: enforce_tls, skip_tls_allowed, skip_tls_denied<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">usage<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_pod_logs_pods_logs_backend_tls_failure_total<\/div>\n\t<div class=\"metric_help\">Total number of requests for pods\/logs that failed due to kubelet server TLS verification<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_deprecated_version\"><label class=\"metric_detail\">Deprecated Versions:<\/label><span>1.27.0<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kube_apiserver_pod_logs_pods_logs_insecure_backend_total<\/div>\n\t<div class=\"metric_help\">Total number of requests for pods\/logs sliced by usage type: enforce_tls, skip_tls_allowed, skip_tls_denied<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">usage<\/span><\/li><li 
class=\"metric_deprecated_version\"><label class=\"metric_detail\">Deprecated Versions:<\/label><span>1.27.0<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_active_pods<\/div>\n\t<div class=\"metric_help\">The number of pods the kubelet considers active and which are being considered when admitting new pods. static is true if the pod is not from the apiserver.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">static<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_certificate_manager_client_expiration_renew_errors<\/div>\n\t<div class=\"metric_help\">Counter of certificate renewal errors.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_certificate_manager_client_ttl_seconds<\/div>\n\t<div class=\"metric_help\">Gauge of the TTL (time-to-live) of the Kubelet's client certificate. The value is in seconds until certificate expiry (negative if already expired). 
If client certificate is invalid or unused, the value will be +INF.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_certificate_manager_server_rotation_seconds<\/div>\n\t<div class=\"metric_help\">Histogram of the number of seconds the previous certificate lived before being rotated.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_certificate_manager_server_ttl_seconds<\/div>\n\t<div class=\"metric_help\">Gauge of the shortest TTL (time-to-live) of the Kubelet's serving certificate. The value is in seconds until certificate expiry (negative if already expired). If serving certificate is invalid or unused, the value will be +INF.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_cgroup_manager_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds for cgroup manager operations. 
Broken down by method.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_cgroup_version<\/div>\n\t<div class=\"metric_help\">cgroup version on the hosts.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_container_log_filesystem_used_bytes<\/div>\n\t<div class=\"metric_help\">Bytes used by the container's logs on the filesystem.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">uid<\/span><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">container<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_containers_per_pod_count<\/div>\n\t<div class=\"metric_help\">The number of containers per pod.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li 
data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_cpu_manager_pinning_errors_total<\/div>\n\t<div class=\"metric_help\">The number of cpu core allocations which required pinning that failed.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_cpu_manager_pinning_requests_total<\/div>\n\t<div class=\"metric_help\">The number of cpu core allocations which required pinning.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_credential_provider_plugin_duration<\/div>\n\t<div class=\"metric_help\">Duration of execution in seconds for credential provider plugin<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">plugin_name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_credential_provider_plugin_errors<\/div>\n\t<div class=\"metric_help\">Number of errors from credential provider
plugin<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">plugin_name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_desired_pods<\/div>\n\t<div class=\"metric_help\">The number of pods the kubelet is being instructed to run. static is true if the pod is not from the apiserver.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">static<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_device_plugin_alloc_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds to serve a device plugin Allocation request. Broken down by resource name.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource_name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_device_plugin_registration_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of device plugin registrations. 
Broken down by resource name.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource_name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_evented_pleg_connection_error_count<\/div>\n\t<div class=\"metric_help\">The number of errors encountered during the establishment of streaming connection with the CRI runtime.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_evented_pleg_connection_latency_seconds<\/div>\n\t<div class=\"metric_help\">The latency of streaming connection with the CRI runtime, measured in seconds.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_evented_pleg_connection_success_count<\/div>\n\t<div class=\"metric_help\">The number of times a streaming client was obtained to receive CRI Events.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span 
class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_eviction_stats_age_seconds<\/div>\n\t<div class=\"metric_help\">Time between when stats are collected, and when pod is evicted based on those stats by eviction signal<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">eviction_signal<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_evictions<\/div>\n\t<div class=\"metric_help\">Cumulative number of pod evictions by eviction signal<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">eviction_signal<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_graceful_shutdown_end_time_seconds<\/div>\n\t<div class=\"metric_help\">Last graceful shutdown end time since unix epoch in seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_graceful_shutdown_start_time_seconds<\/div>\n\t<div
class=\"metric_help\">Last graceful shutdown start time since unix epoch in seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_http_inflight_requests<\/div>\n\t<div class=\"metric_help\">Number of the inflight http requests<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">long_running<\/span><span class=\"metric_label\">method<\/span><span class=\"metric_label\">path<\/span><span class=\"metric_label\">server_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_http_requests_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds to serve http requests<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">long_running<\/span><span class=\"metric_label\">method<\/span><span class=\"metric_label\">path<\/span><span class=\"metric_label\">server_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_http_requests_total<\/div>\n\t<div class=\"metric_help\">Number of 
the http requests received since the server started<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">long_running<\/span><span class=\"metric_label\">method<\/span><span class=\"metric_label\">path<\/span><span class=\"metric_label\">server_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_image_garbage_collected_total<\/div>\n\t<div class=\"metric_help\">Total number of images garbage collected by the kubelet, whether through disk usage or image age.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">reason<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_image_pull_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds to pull an image.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">image_size_in_bytes<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">kubelet_lifecycle_handler_http_fallbacks_total<\/div>\n\t<div class=\"metric_help\">The number of times lifecycle handlers successfully fell back to http from https.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_managed_ephemeral_containers<\/div>\n\t<div class=\"metric_help\">Current number of ephemeral containers in pods managed by this kubelet.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_memory_manager_pinning_errors_total<\/div>\n\t<div class=\"metric_help\">The number of memory pages allocations which required pinning that failed.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_memory_manager_pinning_requests_total<\/div>\n\t<div class=\"metric_help\">The number of memory pages allocations which required pinning.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div 
class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_mirror_pods<\/div>\n\t<div class=\"metric_help\">The number of mirror pods the kubelet will try to create (one per admitted static pod)<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_node_name<\/div>\n\t<div class=\"metric_help\">The node's name. The count is always 1.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">node<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_node_startup_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds of node startup in total.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_node_startup_post_registration_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds of node startup after registration.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span 
class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_node_startup_pre_kubelet_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds of node startup before kubelet starts.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_node_startup_pre_registration_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds of node startup before registration.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_node_startup_registration_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds of node startup during registration.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_orphan_pod_cleaned_volumes<\/div>\n\t<div class=\"metric_help\">The total number of orphaned Pods whose volumes were cleaned in the last periodic sweep.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label 
class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_orphan_pod_cleaned_volumes_errors<\/div>\n\t<div class=\"metric_help\">The number of orphaned Pods whose volumes failed to be cleaned in the last periodic sweep.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_orphaned_runtime_pods_total<\/div>\n\t<div class=\"metric_help\">Number of pods that have been detected in the container runtime without being already known to the pod worker. This typically indicates the kubelet was restarted while a pod was force deleted in the API or in the local configuration, which is unusual.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pleg_discard_events<\/div>\n\t<div class=\"metric_help\">The number of discard events in PLEG.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pleg_last_seen_seconds<\/div>\n\t<div class=\"metric_help\">Timestamp in seconds when PLEG was last seen 
active.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pleg_relist_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds for relisting pods in PLEG.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pleg_relist_interval_seconds<\/div>\n\t<div class=\"metric_help\">Interval in seconds between relisting in PLEG.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_resources_endpoint_errors_get<\/div>\n\t<div class=\"metric_help\">Number of requests to the PodResource Get endpoint which returned error. 
Broken down by server api version.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">server_api_version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_resources_endpoint_errors_get_allocatable<\/div>\n\t<div class=\"metric_help\">Number of requests to the PodResource GetAllocatableResources endpoint which returned error. Broken down by server api version.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">server_api_version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_resources_endpoint_errors_list<\/div>\n\t<div class=\"metric_help\">Number of requests to the PodResource List endpoint which returned error. 
Broken down by server api version.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">server_api_version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_resources_endpoint_requests_get<\/div>\n\t<div class=\"metric_help\">Number of requests to the PodResource Get endpoint. Broken down by server api version.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">server_api_version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_resources_endpoint_requests_get_allocatable<\/div>\n\t<div class=\"metric_help\">Number of requests to the PodResource GetAllocatableResources endpoint. 
Broken down by server api version.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">server_api_version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_resources_endpoint_requests_list<\/div>\n\t<div class=\"metric_help\">Number of requests to the PodResource List endpoint. Broken down by server api version.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">server_api_version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_resources_endpoint_requests_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of requests to the PodResource endpoint. 
Broken down by server api version.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">server_api_version<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_start_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds from kubelet seeing a pod for the first time to the pod starting to run<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_start_sli_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds to start a pod, excluding time to pull images and run init containers, measured from pod creation timestamp to when all its containers are reported as started and observed via watch<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_start_total_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds to start a pod since creation, including time to pull images and run init containers, measured from pod creation timestamp to when all its containers are reported as started and observed via 
watch<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_status_sync_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds to sync a pod status update. Measures time from detection of a change to pod status until the API is successfully updated for that pod, even if multiple intevening changes to pod status occur.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_worker_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds to sync a single pod. 
Broken down by operation type: create, update, or sync<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_pod_worker_start_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds from kubelet seeing a pod to starting a worker.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_preemptions<\/div>\n\t<div class=\"metric_help\">Cumulative number of pod preemptions by preemption resource<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">preemption_signal<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_restarted_pods_total<\/div>\n\t<div class=\"metric_help\">Number of pods that have been restarted because they were deleted and recreated with the same UID while the kubelet was watching them (common for static pods, extremely uncommon for API pods)<\/div>\n\t<ul>\n\t<li><label 
class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">static<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_run_podsandbox_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds of the run_podsandbox operations. Broken down by RuntimeClass.Handler.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">runtime_handler<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_run_podsandbox_errors_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of the run_podsandbox operation errors by RuntimeClass.Handler.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">runtime_handler<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_running_containers<\/div>\n\t<div class=\"metric_help\">Number of containers currently running<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span 
class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">container_state<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_running_pods<\/div>\n\t<div class=\"metric_help\">Number of pods that have a running pod sandbox<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_runtime_operations_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds of runtime operations. Broken down by operation type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_runtime_operations_errors_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of runtime operation errors by operation type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label 
class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_runtime_operations_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of runtime operations by operation type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_server_expiration_renew_errors<\/div>\n\t<div class=\"metric_help\">Counter of certificate renewal errors.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_sleep_action_terminated_early_total<\/div>\n\t<div class=\"metric_help\">The number of times lifecycle sleep handler got terminated before it finishes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_started_containers_errors_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of errors when starting containers<\/div>\n\t<ul>\n\t<li><label 
class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><span class=\"metric_label\">container_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_started_containers_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of containers started<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">container_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_started_host_process_containers_errors_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of errors when starting hostprocess containers. 
This metric will only be collected on Windows.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><span class=\"metric_label\">container_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_started_host_process_containers_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of hostprocess containers started. This metric will only be collected on Windows.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">container_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_started_pods_errors_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of errors when starting pods<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_started_pods_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of pods started<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li 
data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_topology_manager_admission_duration_ms<\/div>\n\t<div class=\"metric_help\">Duration in milliseconds to serve a pod admission request.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_topology_manager_admission_errors_total<\/div>\n\t<div class=\"metric_help\">The number of admission request failures where resources could not be aligned.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_topology_manager_admission_requests_total<\/div>\n\t<div class=\"metric_help\">The number of admission requests where resources have to be aligned.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_volume_metric_collection_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds to calculate volume stats<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span 
class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">metric_source<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_volume_stats_available_bytes<\/div>\n\t<div class=\"metric_help\">Number of available bytes in the volume<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">persistentvolumeclaim<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_volume_stats_capacity_bytes<\/div>\n\t<div class=\"metric_help\">Capacity in bytes of the volume<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">persistentvolumeclaim<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_volume_stats_health_status_abnormal<\/div>\n\t<div class=\"metric_help\">Abnormal volume health status. The count is either 1 or 0. 
1 indicates the volume is unhealthy, 0 indicates volume is healthy<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">persistentvolumeclaim<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_volume_stats_inodes<\/div>\n\t<div class=\"metric_help\">Maximum number of inodes in the volume<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">persistentvolumeclaim<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_volume_stats_inodes_free<\/div>\n\t<div class=\"metric_help\">Number of free inodes in the volume<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">persistentvolumeclaim<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_volume_stats_inodes_used<\/div>\n\t<div 
class=\"metric_help\">Number of used inodes in the volume<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">persistentvolumeclaim<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_volume_stats_used_bytes<\/div>\n\t<div class=\"metric_help\">Number of used bytes in the volume<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">persistentvolumeclaim<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubelet_working_pods<\/div>\n\t<div class=\"metric_help\">Number of pods the kubelet is actually running, broken down by lifecycle phase, whether the pod is desired, orphaned, or runtime only (also orphaned), and whether the pod is static. 
An orphaned pod has been removed from local configuration or force deleted in the API and consumes resources that are not otherwise visible.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">config<\/span><span class=\"metric_label\">lifecycle<\/span><span class=\"metric_label\">static<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_iptables_ct_state_invalid_dropped_packets_total<\/div>\n\t<div class=\"metric_help\">packets dropped by iptables to work around conntrack problems<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_iptables_localhost_nodeports_accepted_packets_total<\/div>\n\t<div class=\"metric_help\">Number of packets accepted on nodeports of loopback interface<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_network_programming_duration_seconds<\/div>\n\t<div class=\"metric_help\">In Cluster Network Programming Latency in seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span 
class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_proxy_healthz_total<\/div>\n\t<div class=\"metric_help\">Cumulative proxy healthz HTTP status<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_proxy_livez_total<\/div>\n\t<div class=\"metric_help\">Cumulative proxy livez HTTP status<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_full_proxy_rules_duration_seconds<\/div>\n\t<div class=\"metric_help\">SyncProxyRules latency in seconds for full resyncs<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">kubeproxy_sync_partial_proxy_rules_duration_seconds<\/div>\n\t<div class=\"metric_help\">SyncProxyRules latency in seconds for partial resyncs<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_duration_seconds<\/div>\n\t<div class=\"metric_help\">SyncProxyRules latency in seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_endpoint_changes_pending<\/div>\n\t<div class=\"metric_help\">Pending proxy rules Endpoint changes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_endpoint_changes_total<\/div>\n\t<div class=\"metric_help\">Cumulative proxy rules Endpoint changes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">kubeproxy_sync_proxy_rules_iptables_last<\/div>\n\t<div class=\"metric_help\">Number of iptables rules written by kube-proxy in last sync<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">table<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_iptables_partial_restore_failures_total<\/div>\n\t<div class=\"metric_help\">Cumulative proxy iptables partial restore failures<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_iptables_restore_failures_total<\/div>\n\t<div class=\"metric_help\">Cumulative proxy iptables restore failures<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_iptables_total<\/div>\n\t<div class=\"metric_help\">Total number of iptables rules owned by kube-proxy<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label 
class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">table<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_last_queued_timestamp_seconds<\/div>\n\t<div class=\"metric_help\">The last time a sync of proxy rules was queued<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_last_timestamp_seconds<\/div>\n\t<div class=\"metric_help\">The last time proxy rules were successfully synced<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_nftables_cleanup_failures_total<\/div>\n\t<div class=\"metric_help\">Cumulative proxy nftables cleanup failures<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_nftables_sync_failures_total<\/div>\n\t<div class=\"metric_help\">Cumulative proxy nftables sync failures<\/div>\n\t<ul>\n\t<li><label 
class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_no_local_endpoints_total<\/div>\n\t<div class=\"metric_help\">Number of services with a Local traffic policy and no endpoints<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">traffic_policy<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_service_changes_pending<\/div>\n\t<div class=\"metric_help\">Pending proxy rules Service changes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubeproxy_sync_proxy_rules_service_changes_total<\/div>\n\t<div class=\"metric_help\">Cumulative proxy rules Service changes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">kubernetes_build_info<\/div>\n\t<div 
class=\"metric_help\">A metric with a constant '1' value labeled by major, minor, git version, git commit, git tree state, build date, Go version, and compiler from which Kubernetes was built, and platform on which it is running.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">build_date<\/span><span class=\"metric_label\">compiler<\/span><span class=\"metric_label\">git_commit<\/span><span class=\"metric_label\">git_tree_state<\/span><span class=\"metric_label\">git_version<\/span><span class=\"metric_label\">go_version<\/span><span class=\"metric_label\">major<\/span><span class=\"metric_label\">minor<\/span><span class=\"metric_label\">platform<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">leader_election_master_status<\/div>\n\t<div class=\"metric_help\">Gauge indicating whether the reporting system is master of the relevant lease: 0 indicates backup, 1 indicates master. 'name' is the string used to identify the lease. Please make sure to group by name.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">leader_election_slowpath_total<\/div>\n\t<div class=\"metric_help\">Total number of times the slow path was exercised in renewing leader leases. 
'name' is the string used to identify the lease. Please make sure to group by name.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_authorizer_graph_actions_duration_seconds<\/div>\n\t<div class=\"metric_help\">Histogram of duration of graph actions in node authorizer.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_collector_unhealthy_nodes_in_zone<\/div>\n\t<div class=\"metric_help\">Gauge measuring number of not-Ready Nodes per zone.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">zone<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_collector_update_all_nodes_health_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds for NodeController to update the health of all 
nodes.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_collector_update_node_health_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds for NodeController to update the health of a single node.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_collector_zone_health<\/div>\n\t<div class=\"metric_help\">Gauge measuring percentage of healthy nodes per zone.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">zone<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_collector_zone_size<\/div>\n\t<div class=\"metric_help\">Gauge measuring number of registered Nodes per zone.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span 
class=\"metric_label\">zone<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_controller_cloud_provider_taint_removal_delay_seconds<\/div>\n\t<div class=\"metric_help\">Number of seconds after node creation when NodeController removed the cloud-provider taint of a single node.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_controller_initial_node_sync_delay_seconds<\/div>\n\t<div class=\"metric_help\">Number of seconds after node creation when NodeController finished the initial synchronization of a single node.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_ipam_controller_cidrset_allocation_tries_per_request<\/div>\n\t<div class=\"metric_help\">Number of allocation tries per CIDR allocation request<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">clusterCIDR<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_ipam_controller_cidrset_cidrs_allocations_total<\/div>\n\t<div 
class=\"metric_help\">Counter measuring total number of CIDR allocations.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">clusterCIDR<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_ipam_controller_cidrset_cidrs_releases_total<\/div>\n\t<div class=\"metric_help\">Counter measuring total number of CIDR releases.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">clusterCIDR<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_ipam_controller_cidrset_usage_cidrs<\/div>\n\t<div class=\"metric_help\">Gauge measuring percentage of allocated CIDRs.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">clusterCIDR<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_ipam_controller_cirdset_max_cidrs<\/div>\n\t<div class=\"metric_help\">Maximum number of CIDRs that can be allocated.<\/div>\n\t<ul>\n\t<li><label 
class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">clusterCIDR<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">node_swap_usage_bytes<\/div>\n\t<div class=\"metric_help\">Current swap usage of the node in bytes. Reported only on non-windows systems<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">plugin_manager_total_plugins<\/div>\n\t<div class=\"metric_help\">Number of plugins in Plugin Manager<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">socket_path<\/span><span class=\"metric_label\">state<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">pod_gc_collector_force_delete_pod_errors_total<\/div>\n\t<div class=\"metric_help\">Number of errors encountered when forcefully deleting the pods since the Pod GC Controller started.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span 
class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">reason<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">pod_gc_collector_force_delete_pods_total<\/div>\n\t<div class=\"metric_help\">Number of pods that are being forcefully deleted since the Pod GC Controller started.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">reason<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">pod_security_errors_total<\/div>\n\t<div class=\"metric_help\">Number of errors preventing normal evaluation. 
Non-fatal errors may result in the latest restricted profile being used for evaluation.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">fatal<\/span><span class=\"metric_label\">request_operation<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">subresource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">pod_security_evaluations_total<\/div>\n\t<div class=\"metric_help\">Number of policy evaluations that occurred, not counting ignored or exempt requests.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">decision<\/span><span class=\"metric_label\">mode<\/span><span class=\"metric_label\">policy_level<\/span><span class=\"metric_label\">policy_version<\/span><span class=\"metric_label\">request_operation<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">subresource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">pod_security_exemptions_total<\/div>\n\t<div class=\"metric_help\">Number of exempt requests, not counting ignored or out of scope requests.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label 
class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">request_operation<\/span><span class=\"metric_label\">resource<\/span><span class=\"metric_label\">subresource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">pod_swap_usage_bytes<\/div>\n\t<div class=\"metric_help\">Current swap usage of the pod in bytes. Reported only on non-windows systems<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">namespace<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">prober_probe_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration in seconds for a probe response.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">container<\/span><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">probe_type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">prober_probe_total<\/div>\n\t<div class=\"metric_help\">Cumulative number of liveness, readiness or startup probes for a container by 
result.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">container<\/span><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">pod<\/span><span class=\"metric_label\">pod_uid<\/span><span class=\"metric_label\">probe_type<\/span><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">pv_collector_bound_pv_count<\/div>\n\t<div class=\"metric_help\">Gauge measuring number of persistent volumes currently bound<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">storage_class<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">pv_collector_bound_pvc_count<\/div>\n\t<div class=\"metric_help\">Gauge measuring number of persistent volume claims currently bound<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span class=\"metric_label\">storage_class<\/span><span 
class=\"metric_label\">volume_attributes_class<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">pv_collector_total_pv_count<\/div>\n\t<div class=\"metric_help\">Gauge measuring total number of persistent volumes<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">plugin_name<\/span><span class=\"metric_label\">volume_mode<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">pv_collector_unbound_pv_count<\/div>\n\t<div class=\"metric_help\">Gauge measuring number of persistent volumes currently unbound<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">storage_class<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">pv_collector_unbound_pvc_count<\/div>\n\t<div class=\"metric_help\">Gauge measuring number of persistent volume claims currently unbound<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">namespace<\/span><span 
class=\"metric_label\">storage_class<\/span><span class=\"metric_label\">volume_attributes_class<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">reconstruct_volume_operations_errors_total<\/div>\n\t<div class=\"metric_help\">The number of volumes that failed reconstruction from the operating system during kubelet startup.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">reconstruct_volume_operations_total<\/div>\n\t<div class=\"metric_help\">The number of volumes that were attempted to be reconstructed from the operating system during kubelet startup. This includes both successful and failed reconstruction.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">replicaset_controller_sorting_deletion_age_ratio<\/div>\n\t<div class=\"metric_help\">The ratio of chosen deleted pod's ages to the current youngest pod's age (at the time). Should be <2. The intent of this metric is to measure the rough efficacy of the LogarithmicScaleDown feature gate's effect on the sorting (and deletion) of pods when a replicaset scales down. 
This only considers Ready pods when calculating and reporting.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">resourceclaim_controller_create_attempts_total<\/div>\n\t<div class=\"metric_help\">Number of ResourceClaims creation requests<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">resourceclaim_controller_create_failures_total<\/div>\n\t<div class=\"metric_help\">Number of ResourceClaims creation request failures<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_dns_resolution_duration_seconds<\/div>\n\t<div class=\"metric_help\">DNS resolver latency in seconds. 
Broken down by host.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">host<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_exec_plugin_call_total<\/div>\n\t<div class=\"metric_help\">Number of calls to an exec plugin, partitioned by the type of event encountered (no_error, plugin_execution_error, plugin_not_found_error, client_internal_error) and an optional exit code. The exit code will be set to 0 if and only if the plugin call was successful.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">call_status<\/span><span class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_exec_plugin_certificate_rotation_age<\/div>\n\t<div class=\"metric_help\">Histogram of the number of seconds the last auth exec plugin client certificate lived before being rotated. 
If auth exec plugin client certificates are unused, histogram will contain no data.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_exec_plugin_ttl_seconds<\/div>\n\t<div class=\"metric_help\">Gauge of the shortest TTL (time-to-live) of the client certificate(s) managed by the auth exec plugin. The value is in seconds until certificate expiry (negative if already expired). If auth exec plugins are unused or manage no TLS certificates, the value will be +INF.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_rate_limiter_duration_seconds<\/div>\n\t<div class=\"metric_help\">Client side rate limiter latency in seconds. Broken down by verb, and host.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">host<\/span><span class=\"metric_label\">verb<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_request_duration_seconds<\/div>\n\t<div class=\"metric_help\">Request latency in seconds. 
Broken down by verb, and host.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">host<\/span><span class=\"metric_label\">verb<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_request_retries_total<\/div>\n\t<div class=\"metric_help\">Number of request retries, partitioned by status code, verb, and host.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><span class=\"metric_label\">host<\/span><span class=\"metric_label\">verb<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_request_size_bytes<\/div>\n\t<div class=\"metric_help\">Request size in bytes. 
Broken down by verb and host.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">host<\/span><span class=\"metric_label\">verb<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_requests_total<\/div>\n\t<div class=\"metric_help\">Number of HTTP requests, partitioned by status code, method, and host.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><span class=\"metric_label\">host<\/span><span class=\"metric_label\">method<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_response_size_bytes<\/div>\n\t<div class=\"metric_help\">Response size in bytes. 
Broken down by verb and host.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">host<\/span><span class=\"metric_label\">verb<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_transport_cache_entries<\/div>\n\t<div class=\"metric_help\">Number of transport entries in the internal cache.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">rest_client_transport_create_calls_total<\/div>\n\t<div class=\"metric_help\">Number of calls to get a new transport, partitioned by the result of the operation hit: obtained from the cache, miss: created and added to the cache, uncacheable: created and not cached<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">retroactive_storageclass_errors_total<\/div>\n\t<div class=\"metric_help\">Total number of failed retroactive StorageClass assignments to persistent volume claim<\/div>\n\t<ul>\n\t<li><label 
class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">retroactive_storageclass_total<\/div>\n\t<div class=\"metric_help\">Total number of retroactive StorageClass assignments to persistent volume claims<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">root_ca_cert_publisher_sync_duration_seconds<\/div>\n\t<div class=\"metric_help\">Number of namespace syncs that happened in the root CA cert publisher.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">root_ca_cert_publisher_sync_total<\/div>\n\t<div class=\"metric_help\">Number of namespace syncs that happened in the root CA cert publisher.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span 
class=\"metric_label\">code<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">running_managed_controllers<\/div>\n\t<div class=\"metric_help\">Indicates where instances of a controller are currently running<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">manager<\/span><span class=\"metric_label\">name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">scheduler_event_handling_duration_seconds<\/div>\n\t<div class=\"metric_help\">Event handling latency in seconds.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">event<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">scheduler_goroutines<\/div>\n\t<div class=\"metric_help\">Number of running goroutines split by the work they do such as binding.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" 
data-stability=\"alpha\">\n\t<div class=\"metric_name\">scheduler_permit_wait_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration of waiting on permit.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">result<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">scheduler_plugin_evaluation_total<\/div>\n\t<div class=\"metric_help\">Number of attempts to schedule pods by each plugin and the extension point (available only in PreFilter, Filter, PreScore, and Score).<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">extension_point<\/span><span class=\"metric_label\">plugin<\/span><span class=\"metric_label\">profile<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">scheduler_plugin_execution_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration for running a plugin at a specific extension point.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span 
class=\"metric_label\">extension_point<\/span><span class=\"metric_label\">plugin<\/span><span class=\"metric_label\">status<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">scheduler_queueing_hint_execution_duration_seconds<\/div>\n\t<div class=\"metric_help\">Duration for running a queueing hint function of a plugin.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">event<\/span><span class=\"metric_label\">hint<\/span><span class=\"metric_label\">plugin<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">scheduler_scheduler_cache_size<\/div>\n\t<div class=\"metric_help\">Number of nodes, pods, and assumed (bound) pods in the scheduler cache.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">type<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">scheduler_scheduling_algorithm_duration_seconds<\/div>\n\t<div class=\"metric_help\">Scheduling algorithm latency in seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span 
class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">scheduler_unschedulable_pods<\/div>\n\t<div class=\"metric_help\">The number of unschedulable pods broken down by plugin name. A pod will increment the gauge for all plugins that caused it to not schedule, and so this metric has meaning only when broken down by plugin.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">plugin<\/span><span class=\"metric_label\">profile<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">scheduler_volume_binder_cache_requests_total<\/div>\n\t<div class=\"metric_help\">Total number of requests to the volume binding cache<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">scheduler_volume_scheduling_stage_error_total<\/div>\n\t<div class=\"metric_help\">Volume scheduling stage error count<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label 
class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">scrape_error<\/div>\n\t<div class=\"metric_help\">1 if there was an error while getting container metrics, 0 otherwise<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_deprecated_version\"><label class=\"metric_detail\">Deprecated Versions:<\/label><span>1.29.0<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">service_controller_loadbalancer_sync_total<\/div>\n\t<div class=\"metric_help\">A metric counting the number of times any load balancer has been configured, as an effect of service\/node changes on the cluster<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">service_controller_nodesync_error_total<\/div>\n\t<div class=\"metric_help\">A metric counting the number of times any load balancer has been configured and errored, as an effect of node changes on the cluster<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">service_controller_nodesync_latency_seconds<\/div>\n\t<div 
class=\"metric_help\">A metric measuring the latency for nodesync, which updates load balancer hosts on cluster node updates.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">service_controller_update_loadbalancer_host_latency_seconds<\/div>\n\t<div class=\"metric_help\">A metric measuring the latency for updating each load balancer's hosts.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">serviceaccount_invalid_legacy_auto_token_uses_total<\/div>\n\t<div class=\"metric_help\">Cumulative invalid auto-generated legacy tokens used<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">serviceaccount_legacy_auto_token_uses_total<\/div>\n\t<div class=\"metric_help\">Cumulative auto-generated legacy tokens used<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">serviceaccount_legacy_manual_token_uses_total<\/div>\n\t<div class=\"metric_help\">Cumulative manually created legacy tokens used<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">serviceaccount_legacy_tokens_total<\/div>\n\t<div class=\"metric_help\">Cumulative legacy service account tokens used<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">serviceaccount_stale_tokens_total<\/div>\n\t<div class=\"metric_help\">Cumulative stale projected service account tokens used<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">serviceaccount_valid_tokens_total<\/div>\n\t<div class=\"metric_help\">Cumulative valid projected service account tokens used<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">storage_count_attachable_volumes_in_use<\/div>\n\t<div class=\"metric_help\">Measures the number of volumes in use<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">node<\/span><span class=\"metric_label\">volume_plugin<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">storage_operation_duration_seconds<\/div>\n\t<div class=\"metric_help\">Storage operation duration<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">migrated<\/span><span class=\"metric_label\">operation_name<\/span><span class=\"metric_label\">status<\/span><span class=\"metric_label\">volume_plugin<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">taint_eviction_controller_pod_deletion_duration_seconds<\/div>\n\t<div class=\"metric_help\">Latency, in seconds, between the time when a taint effect has been activated for the Pod and its deletion via TaintEvictionController.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div 
class=\"metric_name\">taint_eviction_controller_pod_deletions_total<\/div>\n\t<div class=\"metric_help\">Total number of Pods deleted by TaintEvictionController since its start.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">ttl_after_finished_controller_job_deletion_duration_seconds<\/div>\n\t<div class=\"metric_help\">The time it took to delete the job since it became eligible for deletion<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">volume_manager_selinux_container_errors_total<\/div>\n\t<div class=\"metric_help\">Number of errors when kubelet cannot compute SELinux context for a container. Kubelet can't start such a Pod then and it will retry, therefore the value of this metric may not represent the actual number 
of containers.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">access_mode<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">volume_manager_selinux_container_warnings_total<\/div>\n\t<div class=\"metric_help\">Number of errors when kubelet cannot compute SELinux context for a container that are ignored. They will become real errors when the SELinuxMountReadWriteOncePod feature is expanded to all volume access modes.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">access_mode<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">volume_manager_selinux_pod_context_mismatch_errors_total<\/div>\n\t<div class=\"metric_help\">Number of errors when a Pod defines different SELinux contexts for its containers that use the same volume. Kubelet can't start such a Pod then and it will retry, therefore the value of this metric may not represent the actual number 
of Pods.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">access_mode<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">volume_manager_selinux_pod_context_mismatch_warnings_total<\/div>\n\t<div class=\"metric_help\">Number of errors when a Pod defines different SELinux contexts for its containers that use the same volume. They are not errors yet, but they will become real errors when the SELinuxMountReadWriteOncePod feature is expanded to all volume access modes.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">access_mode<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">volume_manager_selinux_volume_context_mismatch_errors_total<\/div>\n\t<div class=\"metric_help\">Number of errors when a Pod uses a volume that is already mounted with a different SELinux context than the Pod needs. Kubelet can't start such a Pod then and it will retry, therefore the value of this metric may not represent the actual number 
of Pods.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">access_mode<\/span><span class=\"metric_label\">volume_plugin<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">volume_manager_selinux_volume_context_mismatch_warnings_total<\/div>\n\t<div class=\"metric_help\">Number of errors when a Pod uses a volume that is already mounted with a different SELinux context than the Pod needs. They are not errors yet, but they will become real errors when the SELinuxMountReadWriteOncePod feature is expanded to all volume access modes.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">access_mode<\/span><span class=\"metric_label\">volume_plugin<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">volume_manager_selinux_volumes_admitted_total<\/div>\n\t<div class=\"metric_help\">Number of volumes whose SELinux context was fine and will be mounted with the mount -o context option.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span 
class=\"metric_label\">access_mode<\/span><span class=\"metric_label\">volume_plugin<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">volume_manager_total_volumes<\/div>\n\t<div class=\"metric_help\">Number of volumes in Volume Manager<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"custom\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Custom<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">plugin_name<\/span><span class=\"metric_label\">state<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">volume_operation_total_errors<\/div>\n\t<div class=\"metric_help\">Total volume operation errors<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">operation_name<\/span><span class=\"metric_label\">plugin_name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">volume_operation_total_seconds<\/div>\n\t<div class=\"metric_help\">Storage operation end to end duration in seconds<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span 
class=\"metric_label\">operation_name<\/span><span class=\"metric_label\">plugin_name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">watch_cache_capacity<\/div>\n\t<div class=\"metric_help\">Total capacity of watch cache broken down by resource type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">watch_cache_capacity_decrease_total<\/div>\n\t<div class=\"metric_help\">Total number of watch cache capacity decrease events broken down by resource type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">watch_cache_capacity_increase_total<\/div>\n\t<div class=\"metric_help\">Total number of watch cache capacity increase events broken down by resource type.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span 
class=\"metric_label\">resource<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">workqueue_adds_total<\/div>\n\t<div class=\"metric_help\">Total number of adds handled by workqueue<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">workqueue_depth<\/div>\n\t<div class=\"metric_help\">Current depth of workqueue<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">workqueue_longest_running_processor_seconds<\/div>\n\t<div class=\"metric_help\">How many seconds has the longest running processor for workqueue been running.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">workqueue_queue_duration_seconds<\/div>\n\t<div 
class=\"metric_help\">How long in seconds an item stays in workqueue before being requested.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">workqueue_retries_total<\/div>\n\t<div class=\"metric_help\">Total number of retries handled by workqueue<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"counter\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Counter<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">workqueue_unfinished_work_seconds<\/div>\n\t<div class=\"metric_help\">How many seconds of work has been done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. 
One can deduce the number of stuck threads by observing the rate at which this increases.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"gauge\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Gauge<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><\/li><\/ul>\n\t<\/div><div class=\"metric\" data-stability=\"alpha\">\n\t<div class=\"metric_name\">workqueue_work_duration_seconds<\/div>\n\t<div class=\"metric_help\">How long in seconds processing an item from workqueue takes.<\/div>\n\t<ul>\n\t<li><label class=\"metric_detail\">Stability Level:<\/label><span class=\"metric_stability_level\">ALPHA<\/span><\/li>\n\t<li data-type=\"histogram\"><label class=\"metric_detail\">Type:<\/label> <span class=\"metric_type\">Histogram<\/span><\/li>\n\t<li class=\"metric_labels_varying\"><label class=\"metric_detail\">Labels:<\/label><span class=\"metric_label\">name<\/span><\/li><\/ul>\n\t<\/div>\n<\/div>","site":"kubernetes reference","answers_cleaned":"    title  Kubernetes Metrics Reference content type  reference auto generated  true description       Details of the metric data that Kubernetes components export          Metrics  v1 31         auto generated 2024 Oct 28            auto generated v1 31      This page details the metrics that different Kubernetes components export  You can query the metrics endpoint for these  components using an HTTP scrape  and fetch the current metrics data in Prometheus format       List of Stable Kubernetes Metrics  Stable metrics observe strict API contracts and no labels can be added or removed from stable metrics during their lifetime    div class  metrics   div class  metric  data stability  stable     div class  metric name  apiserver admission controller admission duration seconds  div    div class  metric help 
## List of Stable Kubernetes Metrics

Stable metrics observe strict API contracts and no labels can be added or removed from stable metrics during their lifetime.

- `apiserver_admission_controller_admission_duration_seconds` (Histogram): Admission controller latency histogram in seconds, identified by name and broken out for each operation and API resource and type (validate or admit). Labels: `name`, `operation`, `rejected`, `type`.
- `apiserver_admission_step_admission_duration_seconds` (Histogram): Admission sub-step latency histogram in seconds, broken out for each operation and API resource and step type (validate or admit). Labels: `operation`, `rejected`, `type`.
- `apiserver_admission_webhook_admission_duration_seconds` (Histogram): Admission webhook latency histogram in seconds, identified by name and broken out for each operation and API resource and type (validate or admit). Labels: `name`, `operation`, `rejected`, `type`.
- `apiserver_current_inflight_requests` (Gauge): Maximal number of currently used inflight request limit of this apiserver per request kind in last second. Labels: `request_kind`.
- `apiserver_longrunning_requests` (Gauge): Gauge of all active long-running apiserver requests broken out by verb, group, version, resource, scope and component. Not all requests are tracked this way. Labels: `component`, `group`, `resource`, `scope`, `subresource`, `verb`, `version`.
- `apiserver_request_duration_seconds` (Histogram): Response latency distribution in seconds for each verb, dry run value, group, version, resource, subresource, scope and component. Labels: `component`, `dry_run`, `group`, `resource`, `scope`, `subresource`, `verb`, `version`.
- `apiserver_request_total` (Counter): Counter of apiserver requests broken out for each verb, dry run value, group, version, resource, scope, component, and HTTP response code. Labels: `code`, `component`, `dry_run`, `group`, `resource`, `scope`, `subresource`, `verb`, `version`.
- `apiserver_requested_deprecated_apis` (Gauge): Gauge of deprecated APIs that have been requested, broken out by API group, version, resource, subresource, and removed release. Labels: `group`, `removed_release`, `resource`, `subresource`, `version`.
- `apiserver_response_sizes` (Histogram): Response size distribution in bytes for each group, version, verb, resource, subresource, scope and component. Labels: `component`, `group`, `resource`, `scope`, `subresource`, `verb`, `version`.
- `apiserver_storage_objects` (Gauge): Number of stored objects at the time of last check split by kind. In case of a fetching error, the value will be -1. Labels: `resource`.
- `apiserver_storage_size_bytes` (Custom): Size of the storage database file physically allocated in bytes. Labels: `storage_cluster_id`.
- `container_cpu_usage_seconds_total` (Custom): Cumulative cpu time consumed by the container in core-seconds. Labels: `container`, `pod`, `namespace`.
- `container_memory_working_set_bytes` (Custom): Current working set of the container in bytes. Labels: `container`, `pod`, `namespace`.
- `container_start_time_seconds` (Custom): Start time of the container since unix epoch in seconds. Labels: `container`, `pod`, `namespace`.
- `cronjob_controller_job_creation_skew_duration_seconds` (Histogram): Time between when a cronjob is scheduled to be run, and when the corresponding job is created.
- `job_controller_job_pods_finished_total` (Counter): The number of finished Pods that are fully tracked. Labels: `completion_mode`, `result`.
- `job_controller_job_sync_duration_seconds` (Histogram): The time it took to sync a job. Labels: `action`, `completion_mode`, `result`.
- `job_controller_job_syncs_total` (Counter): The number of job syncs. Labels: `action`, `completion_mode`, `result`.
- `job_controller_jobs_finished_total` (Counter): The number of finished jobs. Labels: `completion_mode`, `reason`, `result`.
- `kube_pod_resource_limit` (Custom): Resources limit for workloads on the cluster, broken down by pod. This shows the resource usage the scheduler and kubelet expect per pod for resources along with the unit for the resource if any. Labels: `namespace`, `pod`, `node`, `scheduler`, `priority`, `resource`, `unit`.
- `kube_pod_resource_request` (Custom): Resources requested by workloads on the cluster, broken down by pod. This shows the resource usage the scheduler and kubelet expect per pod for resources along with the unit for the resource if any. Labels: `namespace`, `pod`, `node`, `scheduler`, `priority`, `resource`, `unit`.
- `kubernetes_healthcheck` (Gauge): This metric records the result of a single healthcheck. Labels: `name`, `type`.
- `kubernetes_healthchecks_total` (Counter): This metric records the results of all healthchecks. Labels: `name`, `status`, `type`.
- `node_collector_evictions_total` (Counter): Number of Node evictions that happened since current instance of NodeController started. Labels: `zone`.
- `node_cpu_usage_seconds_total` (Custom): Cumulative cpu time consumed by the node in core-seconds.
- `node_memory_working_set_bytes` (Custom): Current working set of the node in bytes.
- `pod_cpu_usage_seconds_total` (Custom): Cumulative cpu time consumed by the pod in core-seconds. Labels: `pod`, `namespace`.
- `pod_memory_working_set_bytes` (Custom): Current working set of the pod in bytes. Labels: `pod`, `namespace`.
- `resource_scrape_error` (Custom): 1 if there was an error while getting container metrics, 0 otherwise.
- `scheduler_framework_extension_point_duration_seconds` (Histogram): Latency for running all plugins of a specific extension point. Labels: `extension_point`, `profile`, `status`.
- `scheduler_pending_pods` (Gauge): Number of pending pods, by the queue type. 'active' means number of pods in activeQ; 'backoff' means number of pods in backoffQ; 'unschedulable' means number of pods in unschedulablePods that the scheduler attempted to schedule and failed; 'gated' is the number of unschedulable pods that the scheduler never attempted to schedule because they are gated. Labels: `queue`.
- `scheduler_pod_scheduling_attempts` (Histogram): Number of attempts to successfully schedule a pod.
- `scheduler_pod_scheduling_duration_seconds` (Histogram): E2e latency for a pod being scheduled which may include multiple scheduling attempts. Labels: `attempts`. Deprecated since: 1.29.0.
- `scheduler_preemption_attempts_total` (Counter): Total preemption attempts in the cluster till now.
- `scheduler_preemption_victims` (Histogram): Number of selected preemption victims.
- `scheduler_queue_incoming_pods_total` (Counter): Number of pods added to scheduling queues by event and queue type. Labels: `event`, `queue`.
- `scheduler_schedule_attempts_total` (Counter): Number of attempts to schedule pods, by the result. 'unschedulable' means a pod could not be scheduled, while 'error' means an internal scheduler problem. Labels: `profile`, `result`.
- `scheduler_scheduling_attempt_duration_seconds` (Histogram): Scheduling attempt latency in seconds (scheduling algorithm + binding). Labels: `profile`, `result`.

## List of Beta Kubernetes Metrics

Beta metrics observe a looser API contract than their stable counterparts. No labels can be removed from beta metrics during their lifetime; however, labels can be added while the metric is in the beta stage. This offers the assurance that beta metrics will honor existing dashboards and alerts, while allowing for amendments in the future.

- `apiserver_cel_compilation_duration_seconds` (Histogram): CEL compilation time in seconds.
- `apiserver_cel_evaluation_duration_seconds` (Histogram): CEL evaluation time in seconds.
- `apiserver_flowcontrol_current_executing_requests` (Gauge): Number of requests in initial (for a WATCH) or any (for a non-WATCH) execution stage in the API Priority and Fairness subsystem. Labels: `flow_schema`, `priority_level`.
- `apiserver_flowcontrol_current_executing_seats` (Gauge): Concurrency (number of seats) occupied by the currently executing (initial stage for a WATCH, any stage otherwise) requests in the API Priority and Fairness subsystem. Labels: `flow_schema`, `priority_level`.
- `apiserver_flowcontrol_current_inqueue_requests` (Gauge): Number of requests currently pending in queues of the API Priority and Fairness subsystem. Labels: `flow_schema`, `priority_level`.
- `apiserver_flowcontrol_dispatched_requests_total` (Counter): Number of requests executed by API Priority and Fairness subsystem. Labels: `flow_schema`, `priority_level`.
- `apiserver_flowcontrol_nominal_limit_seats` (Gauge): Nominal number of execution seats configured for each priority level. Labels: `priority_level`.
- `apiserver_flowcontrol_rejected_requests_total` (Counter): Number of requests rejected by API Priority and Fairness subsystem. Labels: `flow_schema`, `priority_level`, `reason`.
- `apiserver_flowcontrol_request_wait_duration_seconds` (Histogram): Length of time a request spent waiting in its queue. Labels: `execute`, `flow_schema`, `priority_level`.
- `apiserver_validating_admission_policy_check_duration_seconds` (Histogram): Validation admission latency for individual validation expressions in seconds, labeled by policy and further including binding and enforcement action taken. Labels: `enforcement_action`, `error_type`, `policy`, `policy_binding`.
- `apiserver_validating_admission_policy_check_total` (Counter): Validation admission policy check total, labeled by policy and further identified by binding and enforcement action taken. Labels: `enforcement_action`, `error_type`, `policy`, `policy_binding`.
- `disabled_metrics_total` (Counter): The count of disabled metrics.
- `hidden_metrics_total` (Counter): The count of hidden metrics.
- `kubernetes_feature_enabled` (Gauge): This metric records the data about the stage and enablement of a k8s feature. Labels: `name`, `stage`.
- `registered_metrics_total` (Counter): The count of registered metrics broken by stability level and deprecation version. Labels: `deprecated_version`, `stability_level`.
- `scheduler_pod_scheduling_sli_duration_seconds` (Histogram): E2e latency for a pod being scheduled, from the time the pod enters the scheduling queue and might involve multiple scheduling attempts. Labels: `attempts`.

## List of Alpha Kubernetes Metrics

Alpha metrics do not have any API guarantees. These metrics must be used at your own risk; subsequent versions of Kubernetes may remove these metrics altogether, or mutate the API in such a way that breaks existing dashboards and alerts.
metric  data stability  alpha     div class  metric name  aggregator discovery aggregation count total  div    div class  metric help  Counter of number of times discovery was aggregated  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  aggregator openapi v2 regeneration count  div    div class  metric help  Counter of OpenAPI v2 spec regeneration count broken down by causing APIService name and reason   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  apiservice  span  span class  metric label  reason  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  aggregator openapi v2 regeneration duration  div    div class  metric help  Gauge of OpenAPI v2 spec regeneration duration in seconds   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  reason  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  aggregator unavailable apiservice  div    div class  metric help  Gauge of APIServices which are marked as unavailable broken down by APIService name   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability 
level  ALPHA  span   li    li data type  custom   label class  metric detail  Type   label   span class  metric type  Custom  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  aggregator unavailable apiservice total  div    div class  metric help  Counter of APIServices which are marked as unavailable broken down by APIService name and reason   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span  span class  metric label  reason  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiextensions apiserver validation ratcheting seconds  div    div class  metric help  Time for comparison of old to new for the purposes of CRDValidationRatcheting during an UPDATE in seconds   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  apiextensions openapi v2 regeneration count  div    div class  metric help  Counter of OpenAPI v2 spec regeneration count broken down by causing CRD name and reason   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label 
 span class  metric label  crd  span  span class  metric label  reason  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiextensions openapi v3 regeneration count  div    div class  metric help  Counter of OpenAPI v3 spec regeneration count broken down by group  version  causing CRD and reason   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  crd  span  span class  metric label  group  span  span class  metric label  reason  span  span class  metric label  version  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver admission match condition evaluation errors total  div    div class  metric help  Admission match condition evaluation errors count  identified by name of resource containing the match condition and broken out for each kind containing matchConditions  webhook or policy   operation and admission type  validate or admit    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  kind  span  span class  metric label  name  span  span class  metric label  operation  span  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver admission match condition evaluation seconds  div    div class  metric help  Admission match condition evaluation time in seconds  identified by name and broken out for each kind containing 
matchConditions  webhook or policy   operation and type  validate or admit    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  kind  span  span class  metric label  name  span  span class  metric label  operation  span  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver admission match condition exclusions total  div    div class  metric help  Admission match condition evaluation exclusions count  identified by name of resource containing the match condition and broken out for each kind containing matchConditions  webhook or policy   operation and admission type  validate or admit    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  kind  span  span class  metric label  name  span  span class  metric label  operation  span  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver admission step admission duration seconds summary  div    div class  metric help  Admission sub step latency summary in seconds  broken out for each operation and API resource and step type  validate or admit    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  summary   label class  metric detail  Type   label   span class  metric type  Summary  span   li    
li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  operation  span  span class  metric label  rejected  span  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver admission webhook fail open count  div    div class  metric help  Admission webhook fail open count  identified by name and broken out for each admission type  validating or mutating    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver admission webhook rejection count  div    div class  metric help  Admission webhook rejection count  identified by name and broken out for each admission type  validating or admit  and operation  Additional labels specify an error type  calling webhook error or apiserver internal error if an error occurred  no error otherwise  and optionally a non zero rejection code if the webhook rejects the request with an HTTP status code  honored by the apiserver when the code is greater or equal to 400   Codes greater than 600 are truncated to 600  to keep the metrics cardinality bounded   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  error type  span  span class  metric label  name  span  span class  metric label  operation  
span  span class  metric label  rejection code  span  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver admission webhook request total  div    div class  metric help  Admission webhook request total  identified by name and broken out for each admission type  validating or mutating  and operation  Additional labels specify whether the request was rejected or not and an HTTP status code  Codes greater than 600 are truncated to 600  to keep the metrics cardinality bounded   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  code  span  span class  metric label  name  span  span class  metric label  operation  span  span class  metric label  rejected  span  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver audit error total  div    div class  metric help  Counter of audit events that failed to be audited properly  Plugin identifies the plugin affected by the error   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  plugin  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver audit event total  div    div class  metric help  Counter of audit events generated and sent to the audit backend   div    ul    li  label class  metric detail  Stability Level   label  span class  metric 
stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver audit level total  div    div class  metric help  Counter of policy levels for audit events  1 per request    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  level  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver audit requests rejected total  div    div class  metric help  Counter of apiserver requests rejected due to an error in audit logging backend   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver authentication config controller automatic reload last timestamp seconds  div    div class  metric help  Timestamp of the last automatic reload of authentication configuration split by status and apiserver identity   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  apiserver id hash  span  span class  metric label  status  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  
apiserver authentication config controller automatic reloads total  div    div class  metric help  Total number of automatic reloads of authentication configuration split by status and apiserver identity   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  apiserver id hash  span  span class  metric label  status  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver authentication jwt authenticator latency seconds  div    div class  metric help  Latency of jwt authentication operations in seconds  This is the time spent authenticating a token for cache miss only  i e  when the token is not found in the cache    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  jwt issuer hash  span  span class  metric label  result  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver authorization config controller automatic reload last timestamp seconds  div    div class  metric help  Timestamp of the last automatic reload of authorization configuration split by status and apiserver identity   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  
span class  metric label  apiserver id hash  span  span class  metric label  status  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver authorization config controller automatic reloads total  div    div class  metric help  Total number of automatic reloads of authorization configuration split by status and apiserver identity   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  apiserver id hash  span  span class  metric label  status  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver authorization decisions total  div    div class  metric help  Total number of terminal decisions made by an authorizer split by authorizer type  name  and decision   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  decision  span  span class  metric label  name  span  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver authorization match condition evaluation errors total  div    div class  metric help  Total number of errors when an authorization webhook encounters a match condition error split by authorizer type and name   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span 
class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver authorization match condition evaluation seconds  div    div class  metric help  Authorization match condition evaluation time in seconds  split by authorizer type and name   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver authorization match condition exclusions total  div    div class  metric help  Total number of exclusions when an authorization webhook is skipped because match conditions exclude it   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver authorization webhook duration seconds  div    div class  metric help  Request latency in seconds   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric 
labels varying   label class  metric detail  Labels   label  span class  metric label  name  span  span class  metric label  result  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver authorization webhook evaluations fail open total  div    div class  metric help  NoOpinion results due to webhook timeout or error   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span  span class  metric label  result  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver authorization webhook evaluations total  div    div class  metric help  Round trips to authorization webhooks   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span  span class  metric label  result  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver cache list fetched objects total  div    div class  metric help  Number of objects read from watch cache in the course of serving a LIST request  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  index  span  span class  metric label  
resource prefix  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver cache list returned objects total  div    div class  metric help  Number of objects returned for a LIST request from watch cache  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource prefix  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver cache list total  div    div class  metric help  Number of LIST requests served from watch cache  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  index  span  span class  metric label  resource prefix  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver certificates registry csr honored duration total  div    div class  metric help  Total number of issued CSRs with a requested duration that was honored  sliced by signer  only kubernetes io signer names are specifically identified   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  signerName  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  
apiserver certificates registry csr requested duration total  div    div class  metric help  Total number of issued CSRs with a requested duration  sliced by signer  only kubernetes io signer names are specifically identified   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  signerName  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver client certificate expiration seconds  div    div class  metric help  Distribution of the remaining lifetime on the certificate used to authenticate a request   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver clusterip repair ip errors total  div    div class  metric help  Number of errors detected on clusterips by the repair loop broken down by type of error  leak  repair  full  outOfRange  duplicate  unknown  invalid  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver clusterip repair reconcile errors total  div    div class  metric help  Number of reconciliation failures on the clusterip repair reconcile loop  
div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver conversion webhook duration seconds  div    div class  metric help  Conversion webhook request latency  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  failure type  span  span class  metric label  result  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver conversion webhook request total  div    div class  metric help  Counter for conversion webhook requests with success failure and failure error type  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  failure type  span  span class  metric label  result  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver crd conversion webhook duration seconds  div    div class  metric help  CRD webhook conversion duration in seconds  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  
- Labels: `crd_name`, `from_version`, `succeeded`, `to_version`

### apiserver_current_inqueue_requests
Maximal number of queued requests in this apiserver per request kind in last second.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `request_kind`

### apiserver_delegated_authn_request_duration_seconds
Request latency in seconds. Broken down by status code.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `code`

### apiserver_delegated_authn_request_total
Number of HTTP requests partitioned by status code.
- Stability Level: ALPHA
- Type: Counter
- Labels: `code`

### apiserver_delegated_authz_request_duration_seconds
Request latency in seconds. Broken down by status code.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `code`

### apiserver_delegated_authz_request_total
Number of HTTP requests partitioned by status code.
- Stability Level: ALPHA
- Type: Counter
- Labels: `code`

### apiserver_egress_dialer_dial_duration_seconds
Dial latency histogram in seconds, labeled by the protocol (http-connect or grpc), transport (tcp or uds).
- Stability Level: ALPHA
- Type: Histogram
- Labels: `protocol`, `transport`

### apiserver_egress_dialer_dial_failure_count
Dial failure count, labeled by the protocol (http-connect or grpc), transport (tcp or uds), and stage (connect or proxy). The stage indicates at which stage the dial failed.
- Stability Level: ALPHA
- Type: Counter
- Labels: `protocol`, `stage`, `transport`

### apiserver_egress_dialer_dial_start_total
Dial starts, labeled by the protocol (http-connect or grpc) and transport (tcp or uds).
- Stability Level: ALPHA
- Type: Counter
- Labels: `protocol`, `transport`

### apiserver_encryption_config_controller_automatic_reload_failures_total
Total number of failed automatic reloads of encryption configuration split by apiserver identity.
- Stability Level: ALPHA
- Type: Counter
- Labels: `apiserver_id_hash`
- Deprecated Versions: 1.30.0

### apiserver_encryption_config_controller_automatic_reload_last_timestamp_seconds
Timestamp of the last successful or failed automatic reload of encryption configuration split by apiserver identity.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `apiserver_id_hash`, `status`

### apiserver_encryption_config_controller_automatic_reload_success_total
Total number of successful automatic reloads of encryption configuration split by apiserver identity.
- Stability Level: ALPHA
- Type: Counter
- Labels: `apiserver_id_hash`
- Deprecated Versions: 1.30.0

### apiserver_encryption_config_controller_automatic_reloads_total
Total number of reload successes and failures of encryption configuration split by apiserver identity.
- Stability Level: ALPHA
- Type: Counter
- Labels: `apiserver_id_hash`, `status`

### apiserver_envelope_encryption_dek_cache_fill_percent
Percent of the cache slots currently occupied by cached DEKs.
- Stability Level: ALPHA
- Type: Gauge

### apiserver_envelope_encryption_dek_cache_inter_arrival_time_seconds
Time (in seconds) of inter arrival of transformation requests.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `transformation_type`

### apiserver_envelope_encryption_dek_source_cache_size
Number of records in data encryption key (DEK) source cache. On a restart, this value is an approximation of the number of decrypt RPC calls the server will make to the KMS plugin.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `provider_name`

### apiserver_envelope_encryption_invalid_key_id_from_status_total
Number of times an invalid keyID is returned by the Status RPC call split by error.
- Stability Level: ALPHA
- Type: Counter
- Labels: `error`, `provider_name`

### apiserver_envelope_encryption_key_id_hash_last_timestamp_seconds
The last time in seconds when a keyID was used.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `apiserver_id_hash`, `key_id_hash`, `provider_name`, `transformation_type`

### apiserver_envelope_encryption_key_id_hash_status_last_timestamp_seconds
The last time in seconds when a keyID was returned by the Status RPC call.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `apiserver_id_hash`, `key_id_hash`, `provider_name`

### apiserver_envelope_encryption_key_id_hash_total
Number of times a keyID is used split by transformation type, provider, and apiserver identity.
- Stability Level: ALPHA
- Type: Counter
- Labels: `apiserver_id_hash`, `key_id_hash`, `provider_name`, `transformation_type`

### apiserver_envelope_encryption_kms_operations_latency_seconds
KMS operation duration with gRPC error code status total.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `grpc_status_code`, `method_name`, `provider_name`

### apiserver_flowcontrol_current_inqueue_seats
Number of seats currently pending in queues of the API Priority and Fairness subsystem.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `flow_schema`, `priority_level`

### apiserver_flowcontrol_current_limit_seats
Current derived number of execution seats available to each priority level.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`

### apiserver_flowcontrol_current_r
R(time of last change).
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`

### apiserver_flowcontrol_demand_seats
Observations, at the end of every nanosecond, of (the number of seats each priority level could use) / (nominal number of seats for that level).
- Stability Level: ALPHA
- Type: TimingRatioHistogram
- Labels: `priority_level`

### apiserver_flowcontrol_demand_seats_average
Time-weighted average, over last adjustment period, of demand seats.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`

### apiserver_flowcontrol_demand_seats_high_watermark
High watermark, over last adjustment period, of demand seats.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`

### apiserver_flowcontrol_demand_seats_smoothed
Smoothed seat demands.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`

### apiserver_flowcontrol_demand_seats_stdev
Time-weighted standard deviation, over last adjustment period, of demand seats.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`

### apiserver_flowcontrol_dispatch_r
R(time of last dispatch).
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`

### apiserver_flowcontrol_epoch_advance_total
Number of times the queueset's progress meter jumped backward.
- Stability Level: ALPHA
- Type: Counter
- Labels: `priority_level`, `success`

### apiserver_flowcontrol_latest_s
S(most recently dispatched request).
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`

### apiserver_flowcontrol_lower_limit_seats
Configured lower bound on number of execution seats available to each priority level.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`

### apiserver_flowcontrol_next_discounted_s_bounds
Min and max, over queues, of S(oldest waiting request in queue) - estimated work in progress.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `bound`, `priority_level`

### apiserver_flowcontrol_next_s_bounds
Min and max, over queues, of S(oldest waiting request in queue).
- Stability Level: ALPHA
- Type: Gauge
- Labels: `bound`, `priority_level`

### apiserver_flowcontrol_priority_level_request_utilization
Observations, at the end of every nanosecond, of number of requests (as a fraction of the relevant limit) waiting or in any stage of execution (but only initial stage for WATCHes).
- Stability Level: ALPHA
- Type: TimingRatioHistogram
- Labels: `phase`, `priority_level`

### apiserver_flowcontrol_priority_level_seat_utilization
Observations, at the end of every nanosecond, of utilization of seats for any stage of execution (but only initial stage for WATCHes).
- Stability Level: ALPHA
- Type: TimingRatioHistogram
- Labels: `priority_level`
- Const Labels: `phase:executing`

### apiserver_flowcontrol_read_vs_write_current_requests
Observations, at the end of every nanosecond, of the number of requests (as a fraction of the relevant limit) waiting or in regular stage of execution.
- Stability Level: ALPHA
- Type: TimingRatioHistogram
- Labels: `phase`, `request_kind`

### apiserver_flowcontrol_request_concurrency_in_use
Concurrency (number of seats) occupied by the currently executing (initial stage for a WATCH, any stage otherwise) requests in the API Priority and Fairness subsystem.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `flow_schema`, `priority_level`
- Deprecated Versions: 1.31.0

### apiserver_flowcontrol_request_concurrency_limit
Nominal number of execution seats configured for each priority level.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`
- Deprecated Versions: 1.30.0

### apiserver_flowcontrol_request_dispatch_no_accommodation_total
Number of times a dispatch attempt resulted in a non-accommodation due to lack of available seats.
- Stability Level: ALPHA
- Type: Counter
- Labels: `flow_schema`, `priority_level`

### apiserver_flowcontrol_request_execution_seconds
Duration of initial stage (for a WATCH) or any (for a non-WATCH) stage of request execution in the API Priority and Fairness subsystem.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `flow_schema`, `priority_level`, `type`

### apiserver_flowcontrol_request_queue_length_after_enqueue
Length of queue in the API Priority and Fairness subsystem, as seen by each request after it is enqueued.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `flow_schema`, `priority_level`

### apiserver_flowcontrol_seat_fair_frac
Fair fraction of server's concurrency to allocate to each priority level that can use it.
- Stability Level: ALPHA
- Type: Gauge

### apiserver_flowcontrol_target_seats
Seat allocation targets.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`

### apiserver_flowcontrol_upper_limit_seats
Configured upper bound on number of execution seats available to each priority level.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `priority_level`

### apiserver_flowcontrol_watch_count_samples
Count of watchers for mutating requests in API Priority and Fairness.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `flow_schema`, `priority_level`

### apiserver_flowcontrol_work_estimated_seats
Number of estimated seats (maximum of initial and final seats) associated with requests in API Priority and Fairness.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `flow_schema`, `priority_level`

### apiserver_init_events_total
Counter of init events processed in watch cache broken by resource type.
- Stability Level: ALPHA
- Type: Counter
- Labels: `resource`

### apiserver_kube_aggregator_x509_insecure_sha1_total
Counts the number of requests to servers with insecure SHA1 signatures in their serving certificate OR the number of connection failures due to the insecure SHA1 signatures (either/or, based on the runtime environment).
- Stability Level: ALPHA
- Type: Counter

### apiserver_kube_aggregator_x509_missing_san_total
Counts the number of requests to servers missing SAN extension in their serving certificate OR the number of connection failures due to the lack of x509 certificate SAN extension missing (either/or, based on the runtime environment).
- Stability Level: ALPHA
- Type: Counter

### apiserver_nodeport_repair_port_errors_total
Number of errors detected on ports by the repair loop broken down by type of error: leak, repair, full, outOfRange, duplicate, unknown.
- Stability Level: ALPHA
- Type: Counter
- Labels: `type`

### apiserver_nodeport_repair_reconcile_errors_total
Number of reconciliation failures on the nodeport repair reconcile loop.
- Stability Level: ALPHA
- Type: Counter

### apiserver_request_aborts_total
Number of requests which apiserver aborted possibly due to a timeout, for each group, version, verb, resource, subresource and scope.
- Stability Level: ALPHA
- Type: Counter
- Labels: `group`, `resource`, `scope`, `subresource`, `verb`, `version`

### apiserver_request_body_size_bytes
Apiserver request body size in bytes broken out by resource and verb.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `resource`, `verb`

### apiserver_request_filter_duration_seconds
Request filter latency distribution in seconds, for each filter type.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `filter`

### apiserver_request_post_timeout_total
Tracks the activity of the request handlers after the associated requests have been timed out by the apiserver.
- Stability Level: ALPHA
- Type: Counter
- Labels: `source`, `status`

### apiserver_request_sli_duration_seconds
Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subresource, scope and component.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `component`, `group`, `resource`, `scope`, `subresource`, `verb`, `version`

### apiserver_request_slo_duration_seconds
Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subresource, scope and component.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `component`, `group`, `resource`, `scope`, `subresource`, `verb`, `version`
- Deprecated Versions: 1.27.0

### apiserver_request_terminations_total
Number of requests which apiserver terminated in self-defense.
- Stability Level: ALPHA
- Type: Counter
- Labels: `code`, `component`, `group`, `resource`, `scope`, `subresource`, `verb`, `version`

### apiserver_request_timestamp_comparison_time
Time taken for comparison of old vs new objects in UPDATE or PATCH requests.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `code_path`

### apiserver_rerouted_request_total
Total number of requests that were proxied to a peer kube-apiserver because the local apiserver was not capable of serving it.
- Stability Level: ALPHA
- Type: Counter
- Labels: `code`

### apiserver_selfrequest_total
Counter of apiserver self-requests broken out for each verb, API resource and subresource.
- Stability Level: ALPHA
- Type: Counter
- Labels: `resource`, `subresource`, `verb`

### apiserver_storage_data_key_generation_duration_seconds
Latencies in seconds of data encryption key (DEK) generation operations.
Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver storage data key generation failures total  div    div class  metric help  Total number of failed data encryption key DEK  generation operations   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver storage db total size in bytes  div    div class  metric help  Total size of the storage database file physically allocated in bytes   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  endpoint  span   li  li class  metric deprecated version   label class  metric detail  Deprecated Versions   label  span 1 28 0  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver storage decode errors total  div    div class  metric help  Number of stored object decode errors split by object type  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  
alpha     div class  metric name  apiserver storage envelope transformation cache misses total  div    div class  metric help  Total number of cache misses while accessing key decryption key KEK    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver storage events received total  div    div class  metric help  Number of etcd events received split by kind   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver storage list evaluated objects total  div    div class  metric help  Number of objects tested in the course of serving a LIST request from storage  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver storage list fetched objects total  div    div class  metric help  Number of objects read from storage in the course of serving a LIST request  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   
label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver storage list returned objects total  div    div class  metric help  Number of objects returned for a LIST request from storage  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver storage list total  div    div class  metric help  Number of LIST requests served from storage  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver storage transformation duration seconds  div    div class  metric help  Latencies in seconds of value transformation operations   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  transformation type  span  span class  metric label  transformer prefix  
span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver storage transformation operations total  div    div class  metric help  Total number of transformations  Successful transformation will have a status  OK  and a varied status string when the transformation fails  This status and transformation type fields may be used for alerting on encryption decryption failure using transformation type from storage for decryption and to storage for encryption  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  status  span  span class  metric label  transformation type  span  span class  metric label  transformer prefix  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver stream translator requests total  div    div class  metric help  Total number of requests that were handled by the StreamTranslatorProxy  which processes streaming RemoteCommand V5  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  code  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver stream tunnel requests total  div    div class  metric help  Total number of requests that were handled by the StreamTunnelProxy  which processes streaming PortForward V2  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li 
data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  code  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver terminated watchers total  div    div class  metric help  Counter of watchers closed due to unresponsiveness broken by resource type   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver tls handshake errors total  div    div class  metric help  Number of requests dropped with  TLS handshake error from  error  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver watch cache consistent read total  div    div class  metric help  Counter for consistent reads from cache   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  fallback  span  span class  metric label  resource  span  span class  metric label  success  span   li   ul     div  div class  metric  data stability  alpha     div class  
metric name  apiserver watch cache events dispatched total  div    div class  metric help  Counter of events dispatched in watch cache broken by resource type   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver watch cache events received total  div    div class  metric help  Counter of events received in watch cache broken by resource type   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver watch cache initializations total  div    div class  metric help  Counter of watch cache initializations broken by resource type   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver watch cache read wait seconds  div    div class  metric help  Histogram of time spent waiting for a watch cache to become fresh   div    ul    li  label class  metric detail  Stability Level   label  
span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver watch cache resource version  div    div class  metric help  Current resource version of watch cache broken by resource type   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver watch events sizes  div    div class  metric help  Watch event size distribution in bytes  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  group  span  span class  metric label  kind  span  span class  metric label  version  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver watch events total  div    div class  metric help  Number of events sent in watch clients  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels  
 label  span class  metric label  group  span  span class  metric label  kind  span  span class  metric label  version  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver watch list duration seconds  div    div class  metric help  Response latency distribution in seconds for watch list requests broken by group  version  resource and scope   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  group  span  span class  metric label  resource  span  span class  metric label  scope  span  span class  metric label  version  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver webhooks x509 insecure sha1 total  div    div class  metric help  Counts the number of requests to servers with insecure SHA1 signatures in their serving certificate OR the number of connection failures due to the insecure SHA1 signatures  either or  based on the runtime environment   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  apiserver webhooks x509 missing san total  div    div class  metric help  Counts the number of requests to servers missing SAN extension in their serving certificate OR the number of connection failures due to the lack of x509 certificate SAN extension missing  either or  based on the runtime environment   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  
ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  attach detach controller attachdetach controller forced detaches  div    div class  metric help  Number of times the A D Controller performed a forced detach  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  reason  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  attachdetach controller total volumes  div    div class  metric help  Number of volumes in A D Controller  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  custom   label class  metric detail  Type   label   span class  metric type  Custom  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  plugin name  span  span class  metric label  state  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  authenticated user requests  div    div class  metric help  Counter of authenticated requests broken out by username   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  username  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  authentication 
attempts  div    div class  metric help  Counter of authenticated attempts   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  result  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  authentication duration seconds  div    div class  metric help  Authentication duration in seconds broken out by result   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  result  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  authentication token cache active fetch count  div    div class  metric help    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  status  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  authentication token cache fetch total  div    div class  metric help    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   
label  span class  metric label  status  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  authentication token cache request duration seconds  div    div class  metric help    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  status  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  authentication token cache request total  div    div class  metric help    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  status  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  authorization attempts total  div    div class  metric help  Counter of authorization attempts broken down by result  It can be either  allowed    denied    no opinion  or  error    div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  result  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  authorization duration seconds  div    div class  metric help  Authorization duration in seconds broken out by result   div    ul    li  label class  metric detail  Stability Level   
label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  result  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  cloud provider webhook request duration seconds  div    div class  metric help  Request latency in seconds  Broken down by status code   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  code  span  span class  metric label  webhook  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  cloud provider webhook request total  div    div class  metric help  Number of HTTP requests partitioned by status code   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  code  span  span class  metric label  webhook  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  container swap usage bytes  div    div class  metric help  Current amount of the container swap usage in bytes  Reported only on non windows systems  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  custom   label class  metric detail  Type   label   span class  metric type  Custom  span   
li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  container  span  span class  metric label  pod  span  span class  metric label  namespace  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  csi operations seconds  div    div class  metric help  Container Storage Interface operation duration with gRPC error code status total  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  driver name  span  span class  metric label  grpc status code  span  span class  metric label  method name  span  span class  metric label  migrated  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  endpoint slice controller changes  div    div class  metric help  Number of EndpointSlice changes  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  operation  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  endpoint slice controller desired endpoint slices  div    div class  metric help  Number of EndpointSlices that would exist with perfect endpoint allocation  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data 
| Name | Type | Stability | Labels | Description |
|------|------|-----------|--------|-------------|
| `endpoint_slice_controller_endpoints_added_per_sync` | Histogram | ALPHA | — | Number of endpoints added on each Service sync. |
| `endpoint_slice_controller_endpoints_desired` | Gauge | ALPHA | — | Number of endpoints desired. |
| `endpoint_slice_controller_endpoints_removed_per_sync` | Histogram | ALPHA | — | Number of endpoints removed on each Service sync. |
| `endpoint_slice_controller_endpointslices_changed_per_sync` | Histogram | ALPHA | `topology`, `traffic_distribution` | Number of EndpointSlices changed on each Service sync. |
| `endpoint_slice_controller_num_endpoint_slices` | Gauge | ALPHA | — | Number of EndpointSlices. |
| `endpoint_slice_controller_services_count_by_traffic_distribution` | Gauge | ALPHA | `traffic_distribution` | Number of Services using some specific trafficDistribution. |
| `endpoint_slice_controller_syncs` | Counter | ALPHA | `result` | Number of EndpointSlice syncs. |
| `endpoint_slice_mirroring_controller_addresses_skipped_per_sync` | Histogram | ALPHA | — | Number of addresses skipped on each Endpoints sync due to being invalid or exceeding MaxEndpointsPerSubset. |
| `endpoint_slice_mirroring_controller_changes` | Counter | ALPHA | `operation` | Number of EndpointSlice changes. |
| `endpoint_slice_mirroring_controller_desired_endpoint_slices` | Gauge | ALPHA | — | Number of EndpointSlices that would exist with perfect endpoint allocation. |
| `endpoint_slice_mirroring_controller_endpoints_added_per_sync` | Histogram | ALPHA | — | Number of endpoints added on each Endpoints sync. |
| `endpoint_slice_mirroring_controller_endpoints_desired` | Gauge | ALPHA | — | Number of endpoints desired. |
| `endpoint_slice_mirroring_controller_endpoints_removed_per_sync` | Histogram | ALPHA | — | Number of endpoints removed on each Endpoints sync. |
| `endpoint_slice_mirroring_controller_endpoints_sync_duration` | Histogram | ALPHA | — | Duration of `syncEndpoints()` in seconds. |
| `endpoint_slice_mirroring_controller_endpoints_updated_per_sync` | Histogram | ALPHA | — | Number of endpoints updated on each Endpoints sync. |
| `endpoint_slice_mirroring_controller_num_endpoint_slices` | Gauge | ALPHA | — | Number of EndpointSlices. |
| `ephemeral_volume_controller_create_failures_total` | Counter | ALPHA | — | Number of PersistentVolumeClaim creation requests that failed. |
| `ephemeral_volume_controller_create_total` | Counter | ALPHA | — | Number of PersistentVolumeClaim creation requests. |
| `etcd_bookmark_counts` | Gauge | ALPHA | `resource` | Number of etcd bookmarks (progress notify events) split by kind. |
| `etcd_lease_object_counts` | Histogram | ALPHA | — | Number of objects attached to a single etcd lease. |
| `etcd_request_duration_seconds` | Histogram | ALPHA | `operation`, `type` | Etcd request latency in seconds for each operation and object type. |
| `etcd_request_errors_total` | Counter | ALPHA | `operation`, `type` | Etcd failed request counts for each operation and object type. |
| `etcd_requests_total` | Counter | ALPHA | `operation`, `type` | Etcd request counts for each operation and object type. |
| `etcd_version_info` | Gauge | ALPHA | `binary_version` | Etcd server's binary version. |
| `field_validation_request_duration_seconds` | Histogram | ALPHA | `field_validation` | Response latency distribution in seconds for each field validation value. |
| `force_cleaned_failed_volume_operation_errors_total` | Counter | ALPHA | — | The number of volumes that failed force cleanup after their reconstruction failed during kubelet startup. |
| `force_cleaned_failed_volume_operations_total` | Counter | ALPHA | — | The number of volumes that were force cleaned after their reconstruction failed during kubelet startup; includes both successful and failed cleanups. |
| `garbagecollector_controller_resources_sync_error_total` | Counter | ALPHA | — | Number of garbage collector resources sync errors. |
| `horizontal_pod_autoscaler_controller_metric_computation_duration_seconds` | Histogram | ALPHA | `action`, `error`, `metric_type` | The time in seconds that the HPA controller takes to calculate one metric. `action` is `scale_down`, `scale_up`, or `none`; `error` is `spec`, `internal`, or `none`; `metric_type` corresponds to `HPA.spec.metrics[*].type`. |
| `horizontal_pod_autoscaler_controller_metric_computation_total` | Counter | ALPHA | `action`, `error`, `metric_type` | Number of metric computations; labels as for the metric computation duration above. |
| `horizontal_pod_autoscaler_controller_reconciliation_duration_seconds` | Histogram | ALPHA | `action`, `error` | The time in seconds that the HPA controller takes to reconcile once. `action` is `scale_down`, `scale_up`, or `none`; `error` is `spec`, `internal`, or `none`. If both spec and internal errors happen during a reconciliation, the first one to occur is reported in the `error` label. |
| `horizontal_pod_autoscaler_controller_reconciliations_total` | Counter | ALPHA | `action`, `error` | Number of reconciliations of the HPA controller; labels as for the reconciliation duration above. |
| `job_controller_job_finished_indexes_total` | Counter | ALPHA | `backoffLimit`, `status` | The number of finished indexes. `status` is `succeeded` or `failed`; `backoffLimit` is `perIndex` or `global`. |
| `job_controller_job_pods_creation_total` | Counter | ALPHA | `reason`, `status` | The number of Pods created by the Job controller, labelled with the reason for the Pod creation; also distinguishes Pods created under different PodReplacementPolicy settings. `reason` is `new`, `recreate_terminating_or_failed`, or `recreate_failed`; `status` is `succeeded` or `failed`. |
| `job_controller_jobs_by_external_controller_total` | Counter | ALPHA | `controller_name` | The number of Jobs managed by an external controller. |
| `job_controller_pod_failures_handled_by_failure_policy_total` | Counter | ALPHA | `action` | The number of failed Pods handled by the failure policy, with respect to the action applied based on the matched rule: `FailJob`, `Ignore`, or `Count`. |
| `job_controller_terminated_pods_tracking_finalizer_total` | Counter | ALPHA | `event` | The number of terminated pods (phase `Failed` or `Succeeded`) that have the finalizer `batch.kubernetes.io/job-tracking`. `event` is `add` or `delete`. |
| `kube_apiserver_clusterip_allocator_allocated_ips` | Gauge | ALPHA | `cidr` | Number of allocated Cluster IPs for Services. |
| `kube_apiserver_clusterip_allocator_allocation_duration_seconds` | Histogram | ALPHA | `cidr` | Duration in seconds to allocate a Cluster IP by ServiceCIDR. |
| `kube_apiserver_clusterip_allocator_allocation_errors_total` | Counter | ALPHA | `cidr`, `scope` | Number of errors trying to allocate Cluster IPs. |
| `kube_apiserver_clusterip_allocator_allocation_total` | Counter | ALPHA | `cidr`, `scope` | Number of Cluster IP allocations. |
| `kube_apiserver_clusterip_allocator_available_ips` | Gauge | ALPHA | `cidr` | Number of available Cluster IPs for Services. |
| `kube_apiserver_nodeport_allocator_allocated_ports` | Gauge | ALPHA | — | Number of allocated NodePorts for Services. |
| `kube_apiserver_nodeport_allocator_allocation_errors_total` | Counter | ALPHA | `scope` | Number of errors trying to allocate NodePorts. |
| `kube_apiserver_nodeport_allocator_allocation_total` | Counter | ALPHA | `scope` | Number of NodePort allocations. |
| `kube_apiserver_nodeport_allocator_available_ports` | Gauge | ALPHA | — | Number of available NodePorts for Services. |
| `kube_apiserver_pod_logs_backend_tls_failure_total` | Counter | ALPHA | — | Total number of requests for pod logs that failed due to kubelet server TLS verification. |
| `kube_apiserver_pod_logs_insecure_backend_total` | Counter | ALPHA | `usage` | Total number of requests for pod logs, sliced by usage type: `enforce_tls`, `skip_tls_allowed`, `skip_tls_denied`. |
| `kube_apiserver_pod_logs_pods_logs_backend_tls_failure_total` | Counter | ALPHA | — | Total number of requests for pod logs that failed due to kubelet server TLS verification. Deprecated since 1.27.0. |
| `kube_apiserver_pod_logs_pods_logs_insecure_backend_total` | Counter | ALPHA | `usage` | Total number of requests for pod logs, sliced by usage type: `enforce_tls`, `skip_tls_allowed`, `skip_tls_denied`. Deprecated since 1.27.0. |
| `kubelet_active_pods` | Gauge | ALPHA | `static` | The number of pods the kubelet considers active and which are being considered when admitting new pods. `static` is true if the pod is not from the apiserver. |
| `kubelet_certificate_manager_client_expiration_renew_errors` | Counter | ALPHA | — | Counter of certificate renewal errors. |
| `kubelet_certificate_manager_client_ttl_seconds` | Gauge | ALPHA | — | TTL (time to live) of the kubelet's client certificate, in seconds until expiry (negative if already expired). If the client certificate is invalid or unused, the value is `+Inf`. |
| `kubelet_certificate_manager_server_rotation_seconds` | Histogram | ALPHA | — | The number of seconds the previous certificate lived before being rotated. |
| `kubelet_certificate_manager_server_ttl_seconds` | Gauge | ALPHA | — | Shortest TTL (time to live) of the kubelet's serving certificate, in seconds until expiry (negative if already expired). If the serving certificate is invalid or unused, the value is `+Inf`. |
| `kubelet_cgroup_manager_duration_seconds` | Histogram | ALPHA | `operation_type` | Duration in seconds for cgroup manager operations, broken down by method. |
| `kubelet_cgroup_version` | Gauge | ALPHA | — | cgroup version on the host. |
| `kubelet_container_log_filesystem_used_bytes` | Custom | ALPHA | `uid`, `namespace`, `pod`, `container` | Bytes used by the container's logs on the filesystem. |
| `kubelet_containers_per_pod_count` | Histogram | ALPHA | — | The number of containers per pod. |
| `kubelet_cpu_manager_pinning_errors_total` | Counter | ALPHA | — | The number of CPU core allocations requiring pinning that failed. |
| `kubelet_cpu_manager_pinning_requests_total` | Counter | ALPHA | — | The number of CPU core allocations which required pinning. |
| `kubelet_credential_provider_plugin_duration` | Histogram | ALPHA | `plugin_name` | Duration of execution in seconds for credential provider plugin. |
| `kubelet_credential_provider_plugin_errors` | Counter | ALPHA | `plugin_name` | Number of errors from credential provider plugin. |
| `kubelet_desired_pods` | Gauge | ALPHA | `static` | The number of pods the kubelet is being instructed to run. `static` is true if the pod is not from the apiserver. |
| `kubelet_device_plugin_alloc_duration_seconds` | Histogram | ALPHA | `resource_name` | Duration in seconds to serve a device plugin Allocation request, broken down by resource name. |
| `kubelet_device_plugin_registration_total` | Counter | ALPHA | `resource_name` | Cumulative number of device plugin registrations, broken down by resource name. |
| `kubelet_evented_pleg_connection_error_count` | Counter | ALPHA | — | The number of errors encountered while establishing the streaming connection with the CRI runtime. |
| `kubelet_evented_pleg_connection_latency_seconds` | Histogram | ALPHA | — | The latency of the streaming connection with the CRI runtime, in seconds. |
| `kubelet_evented_pleg_connection_success_count` | Counter | ALPHA | — | The number of times a streaming client was obtained to receive CRI Events. |
| `kubelet_eviction_stats_age_seconds` | Histogram | ALPHA | | Time between when stats are collected and when a pod is evicted based on those stats, by eviction signal. |
label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  eviction signal  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet evictions  div    div class  metric help  Cumulative number of pod evictions by eviction signal  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  eviction signal  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet graceful shutdown end time seconds  div    div class  metric help  Last graceful shutdown start time since unix epoch in seconds  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet graceful shutdown start time seconds  div    div class  metric help  Last graceful shutdown start time since unix epoch in seconds  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet http inflight requests  div    div class  metric help  Number of the inflight http requests  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   
li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  long running  span  span class  metric label  method  span  span class  metric label  path  span  span class  metric label  server type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet http requests duration seconds  div    div class  metric help  Duration in seconds to serve http requests  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  long running  span  span class  metric label  method  span  span class  metric label  path  span  span class  metric label  server type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet http requests total  div    div class  metric help  Number of the http requests received since the server started  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  long running  span  span class  metric label  method  span  span class  metric label  path  span  span class  metric label  server type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet image garbage collected total  div    div class  metric help  Total number of images garbage collected by the kubelet  whether through disk usage or 
image age   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  reason  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet image pull duration seconds  div    div class  metric help  Duration in seconds to pull an image   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  image size in bytes  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet lifecycle handler http fallbacks total  div    div class  metric help  The number of times lifecycle handlers successfully fell back to http from https   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet managed ephemeral containers  div    div class  metric help  Current number of ephemeral containers in pods managed by this kubelet   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet memory manager 
pinning errors total  div    div class  metric help  The number of memory pages allocations which required pinning that failed   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet memory manager pinning requests total  div    div class  metric help  The number of memory pages allocations which required pinning   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet mirror pods  div    div class  metric help  The number of mirror pods the kubelet will try to create  one per admitted static pod   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet node name  div    div class  metric help  The node s name  The count is always 1   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  node  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet node startup duration seconds  div    div class  metric help  Duration in seconds of 
node startup in total   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet node startup post registration duration seconds  div    div class  metric help  Duration in seconds of node startup after registration   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet node startup pre kubelet duration seconds  div    div class  metric help  Duration in seconds of node startup before kubelet starts   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet node startup pre registration duration seconds  div    div class  metric help  Duration in seconds of node startup before registration   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet node startup registration duration seconds  div    div class  metric help  Duration in seconds of node startup during registration   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li 
data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet orphan pod cleaned volumes  div    div class  metric help  The total number of orphaned Pods whose volumes were cleaned in the last periodic sweep   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet orphan pod cleaned volumes errors  div    div class  metric help  The number of orphaned Pods whose volumes failed to be cleaned in the last periodic sweep   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet orphaned runtime pods total  div    div class  metric help  Number of pods that have been detected in the container runtime without being already known to the pod worker  This typically indicates the kubelet was restarted while a pod was force deleted in the API or in the local configuration  which is unusual   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pleg discard events  div    div class  metric help  The number of discard events in PLEG   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  
ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pleg last seen seconds  div    div class  metric help  Timestamp in seconds when PLEG was last seen active   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pleg relist duration seconds  div    div class  metric help  Duration in seconds for relisting pods in PLEG   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pleg relist interval seconds  div    div class  metric help  Interval in seconds between relisting in PLEG   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod resources endpoint errors get  div    div class  metric help  Number of requests to the PodResource Get endpoint which returned error  Broken down by server api version   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   
label class  metric detail  Labels   label  span class  metric label  server api version  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod resources endpoint errors get allocatable  div    div class  metric help  Number of requests to the PodResource GetAllocatableResources endpoint which returned error  Broken down by server api version   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  server api version  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod resources endpoint errors list  div    div class  metric help  Number of requests to the PodResource List endpoint which returned error  Broken down by server api version   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  server api version  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod resources endpoint requests get  div    div class  metric help  Number of requests to the PodResource Get endpoint  Broken down by server api version   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric 
label  server api version  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod resources endpoint requests get allocatable  div    div class  metric help  Number of requests to the PodResource GetAllocatableResources endpoint  Broken down by server api version   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  server api version  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod resources endpoint requests list  div    div class  metric help  Number of requests to the PodResource List endpoint  Broken down by server api version   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  server api version  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod resources endpoint requests total  div    div class  metric help  Cumulative number of requests to the PodResource endpoint  Broken down by server api version   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  server api version  span   li   ul     div  div class  metric  data stability  alpha   
  div class  metric name  kubelet pod start duration seconds  div    div class  metric help  Duration in seconds from kubelet seeing a pod for the first time to the pod starting to run  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod start sli duration seconds  div    div class  metric help  Duration in seconds to start a pod  excluding time to pull images and run init containers  measured from pod creation timestamp to when all its containers are reported as started and observed via watch  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod start total duration seconds  div    div class  metric help  Duration in seconds to start a pod since creation  including time to pull images and run init containers  measured from pod creation timestamp to when all its containers are reported as started and observed via watch  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod status sync duration seconds  div    div class  metric help  Duration in seconds to sync a pod status update  Measures time from detection of a change to pod status until the API is successfully updated for that pod  even if multiple intevening changes to 
pod status occur   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod worker duration seconds  div    div class  metric help  Duration in seconds to sync a single pod  Broken down by operation type  create  update  or sync  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  operation type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet pod worker start duration seconds  div    div class  metric help  Duration in seconds from kubelet seeing a pod to starting a worker   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet preemptions  div    div class  metric help  Cumulative number of pod preemptions by preemption resource  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  preemption signal  span   li   ul     div  div class  metric  data stability  alpha     div class  metric 
name  kubelet restarted pods total  div    div class  metric help  Number of pods that have been restarted because they were deleted and recreated with the same UID while the kubelet was watching them  common for static pods  extremely uncommon for API pods   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  static  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet run podsandbox duration seconds  div    div class  metric help  Duration in seconds of the run podsandbox operations  Broken down by RuntimeClass Handler   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  runtime handler  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet run podsandbox errors total  div    div class  metric help  Cumulative number of the run podsandbox operation errors by RuntimeClass Handler   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  runtime handler  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet running containers  div    div class  metric help  Number 
of containers currently running  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  container state  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet running pods  div    div class  metric help  Number of pods that have a running pod sandbox  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet runtime operations duration seconds  div    div class  metric help  Duration in seconds of runtime operations  Broken down by operation type   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  operation type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet runtime operations errors total  div    div class  metric help  Cumulative number of runtime operation errors by operation type   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  
operation type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet runtime operations total  div    div class  metric help  Cumulative number of runtime operations by operation type   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  operation type  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet server expiration renew errors  div    div class  metric help  Counter of certificate renewal errors   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet sleep action terminated early total  div    div class  metric help  The number of times lifecycle sleep handler got terminated before it finishes  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  kubelet started containers errors total  div    div class  metric help  Cumulative number of errors when starting containers  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   
- Labels: `code`, `container_type`

### kubelet_started_containers_total
Cumulative number of containers started
- Stability Level: ALPHA
- Type: Counter
- Labels: `container_type`

### kubelet_started_host_process_containers_errors_total
Cumulative number of errors when starting hostprocess containers. This metric will only be collected on Windows.
- Stability Level: ALPHA
- Type: Counter
- Labels: `code`, `container_type`

### kubelet_started_host_process_containers_total
Cumulative number of hostprocess containers started. This metric will only be collected on Windows.
- Stability Level: ALPHA
- Type: Counter
- Labels: `container_type`

### kubelet_started_pods_errors_total
Cumulative number of errors when starting pods
- Stability Level: ALPHA
- Type: Counter

### kubelet_started_pods_total
Cumulative number of pods started
- Stability Level: ALPHA
- Type: Counter

### kubelet_topology_manager_admission_duration_ms
Duration in milliseconds to serve a pod admission request.
- Stability Level: ALPHA
- Type: Histogram

### kubelet_topology_manager_admission_errors_total
The number of admission request failures where resources could not be aligned.
- Stability Level: ALPHA
- Type: Counter

### kubelet_topology_manager_admission_requests_total
The number of admission requests where resources have to be aligned.
- Stability Level: ALPHA
- Type: Counter

### kubelet_volume_metric_collection_duration_seconds
Duration in seconds to calculate volume stats
- Stability Level: ALPHA
- Type: Histogram
- Labels: `metric_source`

### kubelet_volume_stats_available_bytes
Number of available bytes in the volume
- Stability Level: ALPHA
- Type: Custom
- Labels: `namespace`, `persistentvolumeclaim`

### kubelet_volume_stats_capacity_bytes
Capacity in bytes of the volume
- Stability Level: ALPHA
- Type: Custom
- Labels: `namespace`, `persistentvolumeclaim`

### kubelet_volume_stats_health_status_abnormal
Abnormal volume health status. The count is either 1 or 0. 1 indicates the volume is unhealthy, 0 indicates the volume is healthy
- Stability Level: ALPHA
- Type: Custom
- Labels: `namespace`, `persistentvolumeclaim`

### kubelet_volume_stats_inodes
Maximum number of inodes in the volume
- Stability Level: ALPHA
- Type: Custom
- Labels: `namespace`, `persistentvolumeclaim`

### kubelet_volume_stats_inodes_free
Number of free inodes in the volume
- Stability Level: ALPHA
- Type: Custom
- Labels: `namespace`, `persistentvolumeclaim`

### kubelet_volume_stats_inodes_used
Number of used inodes in the volume
- Stability Level: ALPHA
- Type: Custom
- Labels: `namespace`, `persistentvolumeclaim`

### kubelet_volume_stats_used_bytes
Number of used bytes in the volume
- Stability Level: ALPHA
- Type: Custom
- Labels: `namespace`, `persistentvolumeclaim`

### kubelet_working_pods
Number of pods the kubelet is actually running, broken down by lifecycle phase, whether the pod is desired, orphaned, or runtime only (also orphaned), and whether the pod is static. An orphaned pod has been removed from local configuration or force deleted in the API and consumes resources that are not otherwise visible.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `config`, `lifecycle`, `static`

### kubeproxy_iptables_ct_state_invalid_dropped_packets_total
Packets dropped by iptables to work around conntrack problems
- Stability Level: ALPHA
- Type: Custom

### kubeproxy_iptables_localhost_nodeports_accepted_packets_total
Number of packets accepted on nodeports of loopback interface
- Stability Level: ALPHA
- Type: Custom

### kubeproxy_network_programming_duration_seconds
In Cluster Network Programming Latency in seconds
- Stability Level: ALPHA
- Type: Histogram

### kubeproxy_proxy_healthz_total
Cumulative proxy healthz HTTP status
- Stability Level: ALPHA
- Type: Counter
- Labels: `code`

### kubeproxy_proxy_livez_total
Cumulative proxy livez HTTP status
- Stability Level: ALPHA
- Type: Counter
- Labels: `code`

### kubeproxy_sync_full_proxy_rules_duration_seconds
SyncProxyRules latency in seconds for full resyncs
- Stability Level: ALPHA
- Type: Histogram

### kubeproxy_sync_partial_proxy_rules_duration_seconds
SyncProxyRules latency in seconds for partial resyncs
- Stability Level: ALPHA
- Type: Histogram

### kubeproxy_sync_proxy_rules_duration_seconds
SyncProxyRules latency in seconds
- Stability Level: ALPHA
- Type: Histogram

### kubeproxy_sync_proxy_rules_endpoint_changes_pending
Pending proxy rules Endpoint changes
- Stability Level: ALPHA
- Type: Gauge

### kubeproxy_sync_proxy_rules_endpoint_changes_total
Cumulative proxy rules Endpoint changes
- Stability Level: ALPHA
- Type: Counter

### kubeproxy_sync_proxy_rules_iptables_last
Number of iptables rules written by kube-proxy in last sync
- Stability Level: ALPHA
- Type: Gauge
- Labels: `table`

### kubeproxy_sync_proxy_rules_iptables_partial_restore_failures_total
Cumulative proxy iptables partial restore failures
- Stability Level: ALPHA
- Type: Counter

### kubeproxy_sync_proxy_rules_iptables_restore_failures_total
Cumulative proxy iptables restore failures
- Stability Level: ALPHA
- Type: Counter

### kubeproxy_sync_proxy_rules_iptables_total
Total number of iptables rules owned by kube-proxy
- Stability Level: ALPHA
- Type: Gauge
- Labels: `table`

### kubeproxy_sync_proxy_rules_last_queued_timestamp_seconds
The last time a sync of proxy rules was queued
- Stability Level: ALPHA
- Type: Gauge

### kubeproxy_sync_proxy_rules_last_timestamp_seconds
The last time proxy rules were successfully synced
- Stability Level: ALPHA
- Type: Gauge

### kubeproxy_sync_proxy_rules_nftables_cleanup_failures_total
Cumulative proxy nftables cleanup failures
- Stability Level: ALPHA
- Type: Counter

### kubeproxy_sync_proxy_rules_nftables_sync_failures_total
Cumulative proxy nftables sync failures
- Stability Level: ALPHA
- Type: Counter

### kubeproxy_sync_proxy_rules_no_local_endpoints_total
Number of services with a Local traffic policy and no endpoints
- Stability Level: ALPHA
- Type: Gauge
- Labels: `traffic_policy`

### kubeproxy_sync_proxy_rules_service_changes_pending
Pending proxy rules Service changes
- Stability Level: ALPHA
- Type: Gauge

### kubeproxy_sync_proxy_rules_service_changes_total
Cumulative proxy rules Service changes
- Stability Level: ALPHA
- Type: Counter

### kubernetes_build_info
A metric with a constant '1' value labeled by major, minor, git version, git commit, git tree state, build date, Go version, and compiler from which Kubernetes was built, and platform on which it is running.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `build_date`, `compiler`, `git_commit`, `git_tree_state`, `git_version`, `go_version`, `major`, `minor`, `platform`

### leader_election_master_status
Gauge of if the reporting system is master of the relevant lease, 0 indicates backup, 1 indicates master. 'name' is the string used to identify the lease. Please make sure to group by name.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `name`

### leader_election_slowpath_total
Total number of slow path exercised in renewing leader leases. 'name' is the string used to identify the lease. Please make sure to group by name.
- Stability Level: ALPHA
- Type: Counter
- Labels: `name`

### node_authorizer_graph_actions_duration_seconds
Histogram of duration of graph actions in node authorizer.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `operation`

### node_collector_unhealthy_nodes_in_zone
Gauge measuring number of not Ready Nodes per zones.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `zone`

### node_collector_update_all_nodes_health_duration_seconds
Duration in seconds for NodeController to update the health of all nodes.
- Stability Level: ALPHA
- Type: Histogram

### node_collector_update_node_health_duration_seconds
Duration in seconds for NodeController to update the health of a single node.
- Stability Level: ALPHA
- Type: Histogram

### node_collector_zone_health
Gauge measuring percentage of healthy nodes per zone.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `zone`

### node_collector_zone_size
Gauge measuring number of registered Nodes per zones.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `zone`

### node_controller_cloud_provider_taint_removal_delay_seconds
Number of seconds after node creation when NodeController removed the cloud provider taint of a single node.
- Stability Level: ALPHA
- Type: Histogram

### node_controller_initial_node_sync_delay_seconds
Number of seconds after node creation when NodeController finished the initial synchronization of a single node.
- Stability Level: ALPHA
- Type: Histogram

### node_ipam_controller_cidrset_allocation_tries_per_request
Number of endpoints added on each Service sync
- Stability Level: ALPHA
- Type: Histogram
- Labels: `clusterCIDR`

### node_ipam_controller_cidrset_cidrs_allocations_total
Counter measuring total number of CIDR allocations.
- Stability Level: ALPHA
- Type: Counter
- Labels: `clusterCIDR`

### node_ipam_controller_cidrset_cidrs_releases_total
Counter measuring total number of CIDR releases.
- Stability Level: ALPHA
- Type: Counter
- Labels: `clusterCIDR`

### node_ipam_controller_cidrset_usage_cidrs
Gauge measuring percentage of allocated CIDRs.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `clusterCIDR`

### node_ipam_controller_cirdset_max_cidrs
Maximum number of CIDRs that can be allocated.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `clusterCIDR`

### node_swap_usage_bytes
Current swap usage of the node in bytes. Reported only on non-windows systems
- Stability Level: ALPHA
- Type: Custom

### plugin_manager_total_plugins
Number of plugins in Plugin Manager
- Stability Level: ALPHA
- Type: Custom
- Labels: `socket_path`, `state`

### pod_gc_collector_force_delete_pod_errors_total
Number of errors encountered when forcefully deleting the pods since the Pod GC Controller started.
- Stability Level: ALPHA
- Type: Counter
- Labels: `namespace`, `reason`

### pod_gc_collector_force_delete_pods_total
Number of pods that are being forcefully deleted since the Pod GC Controller started.
- Stability Level: ALPHA
- Type: Counter
- Labels: `namespace`, `reason`

### pod_security_errors_total
Number of errors preventing normal evaluation. Non-fatal errors may result in the latest restricted profile being used for evaluation.
- Stability Level: ALPHA
- Type: Counter
- Labels: `fatal`, `request_operation`, `resource`, `subresource`

### pod_security_evaluations_total
Number of policy evaluations that occurred, not counting ignored or exempt requests.
- Stability Level: ALPHA
- Type: Counter
- Labels: `decision`, `mode`, `policy_level`, `policy_version`, `request_operation`, `resource`, `subresource`

### pod_security_exemptions_total
Number of exempt requests, not counting ignored or out of scope requests.
- Stability Level: ALPHA
- Type: Counter
- Labels: `request_operation`, `resource`, `subresource`

### pod_swap_usage_bytes
Current amount of the pod swap usage in bytes. Reported only on non-windows systems
- Stability Level: ALPHA
- Type: Custom
- Labels: `pod`, `namespace`

### prober_probe_duration_seconds
Duration in seconds for a probe response.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `container`, `namespace`, `pod`, `probe_type`

### prober_probe_total
Cumulative number of a liveness, readiness or startup probe for a container by result.
- Stability Level: ALPHA
- Type: Counter
- Labels: `container`, `namespace`, `pod`, `pod_uid`, `probe_type`, `result`

### pv_collector_bound_pv_count
Gauge measuring number of persistent volume currently bound
- Stability Level: ALPHA
- Type: Custom
- Labels: `storage_class`

### pv_collector_bound_pvc_count
Gauge measuring number of persistent volume claim currently bound
- Stability Level: ALPHA
- Type: Custom
- Labels: `namespace`, `storage_class`, `volume_attributes_class`

### pv_collector_total_pv_count
Gauge measuring total number of persistent volumes
- Stability Level: ALPHA
- Type: Custom
- Labels: `plugin_name`, `volume_mode`

### pv_collector_unbound_pv_count
Gauge measuring number of persistent volume currently unbound
- Stability Level: ALPHA
- Type: Custom
- Labels: `storage_class`

### pv_collector_unbound_pvc_count
Gauge measuring number of persistent volume claim currently unbound
- Stability Level: ALPHA
- Type: Custom
- Labels: `namespace`, `storage_class`, `volume_attributes_class`

### reconstruct_volume_operations_errors_total
The number of volumes that failed reconstruction from the operating system during kubelet startup.
- Stability Level: ALPHA
Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  reconstruct volume operations total  div    div class  metric help  The number of volumes that were attempted to be reconstructed from the operating system during kubelet startup  This includes both successful and failed reconstruction   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  replicaset controller sorting deletion age ratio  div    div class  metric help  The ratio of chosen deleted pod s ages to the current youngest pod s age  at the time   Should be  2  The intent of this metric is to measure the rough efficacy of the LogarithmicScaleDown feature gate s effect on the sorting  and deletion  of pods when a replicaset scales down  This only considers Ready pods when calculating and reporting   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li     ul     div  div class  metric  data stability  alpha     div class  metric name  resourceclaim controller create attempts total  div    div class  metric help  Number of ResourceClaims creation requests  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li     ul     div  div class  metric  data stability  alpha     
**resourceclaim_controller_create_failures_total**
Number of ResourceClaims creation request failures.
- Stability Level: ALPHA
- Type: Counter

**rest_client_dns_resolution_duration_seconds**
DNS resolver latency in seconds. Broken down by host.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `host`

**rest_client_exec_plugin_call_total**
Number of calls to an exec plugin, partitioned by the type of event encountered (no_error, plugin_execution_error, plugin_not_found_error, client_internal_error) and an optional exit code. The exit code will be set to 0 if and only if the plugin call was successful.
- Stability Level: ALPHA
- Type: Counter
- Labels: `call_status`, `code`

**rest_client_exec_plugin_certificate_rotation_age**
Histogram of the number of seconds the last auth exec plugin client certificate lived before being rotated. If auth exec plugin client certificates are unused, histogram will contain no data.
- Stability Level: ALPHA
- Type: Histogram

**rest_client_exec_plugin_ttl_seconds**
Gauge of the shortest TTL (time to live) of the client certificate(s) managed by the auth exec plugin. The value is in seconds until certificate expiry (negative if already expired). If auth exec plugins are unused or manage no TLS certificates, the value will be +INF.
- Stability Level: ALPHA
- Type: Gauge

**rest_client_rate_limiter_duration_seconds**
Client side rate limiter latency in seconds. Broken down by verb, and host.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `host`, `verb`

**rest_client_request_duration_seconds**
Request latency in seconds. Broken down by verb, and host.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `host`, `verb`

**rest_client_request_retries_total**
Number of request retries, partitioned by status code, verb, and host.
- Stability Level: ALPHA
- Type: Counter
- Labels: `code`, `host`, `verb`

**rest_client_request_size_bytes**
Request size in bytes. Broken down by verb and host.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `host`, `verb`

**rest_client_requests_total**
Number of HTTP requests, partitioned by status code, method, and host.
- Stability Level: ALPHA
- Type: Counter
- Labels: `code`, `host`, `method`

**rest_client_response_size_bytes**
Response size in bytes. Broken down by verb and host.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `host`, `verb`

**rest_client_transport_cache_entries**
Number of transport entries in the internal cache.
- Stability Level: ALPHA
- Type: Gauge

**rest_client_transport_create_calls_total**
Number of calls to get a new transport, partitioned by the result of the operation (hit: obtained from the cache, miss: created and added to the cache, uncacheable: created and not cached).
- Stability Level: ALPHA
- Type: Counter
- Labels: `result`
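The `rest_client_*` metrics above are exposed in the standard Prometheus text exposition format. As an illustration only — the scrape payload and values below are fabricated, not real cluster output — a minimal Python sketch that sums `rest_client_requests_total` samples by the `code` label might look like:

```python
# Minimal sketch: aggregate rest_client_requests_total samples by status code.
# The SCRAPE payload is fabricated sample data, not real kubelet output.
import re
from collections import defaultdict

SCRAPE = """\
# TYPE rest_client_requests_total counter
rest_client_requests_total{code="200",host="10.0.0.1:443",method="GET"} 120
rest_client_requests_total{code="200",host="10.0.0.1:443",method="POST"} 30
rest_client_requests_total{code="500",host="10.0.0.1:443",method="GET"} 2
"""

LINE = re.compile(r'^rest_client_requests_total\{(?P<labels>[^}]*)\}\s+(?P<value>\S+)$')

def totals_by_code(payload: str) -> dict:
    """Sum counter samples per `code` label value."""
    out = defaultdict(float)
    for line in payload.splitlines():
        m = LINE.match(line)
        if not m:
            continue
        labels = dict(kv.split('=', 1) for kv in m.group('labels').split(','))
        out[labels['code'].strip('"')] += float(m.group('value'))
    return dict(out)

print(totals_by_code(SCRAPE))  # {'200': 150.0, '500': 2.0}
```

A real consumer would use a Prometheus client or the `promql` interface rather than hand-rolled parsing; the point here is only the shape of the exposed samples (metric name, label set, cumulative counter value).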
**retroactive_storageclass_errors_total**
Total number of failed retroactive StorageClass assignments to persistent volume claim.
- Stability Level: ALPHA
- Type: Counter

**retroactive_storageclass_total**
Total number of retroactive StorageClass assignments to persistent volume claim.
- Stability Level: ALPHA
- Type: Counter

**root_ca_cert_publisher_sync_duration_seconds**
Number of namespace syncs happened in root ca cert publisher.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `code`

**root_ca_cert_publisher_sync_total**
Number of namespace syncs happened in root ca cert publisher.
- Stability Level: ALPHA
- Type: Counter
- Labels: `code`

**running_managed_controllers**
Indicates where instances of a controller are currently running.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `manager`, `name`

**scheduler_event_handling_duration_seconds**
Event handling latency in seconds.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `event`

**scheduler_goroutines**
Number of running goroutines split by the work they do such as binding.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `operation`

**scheduler_permit_wait_duration_seconds**
Duration of waiting on permit.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `result`

**scheduler_plugin_evaluation_total**
Number of attempts to schedule pods by each plugin and the extension point (available only in PreFilter, Filter, PreScore, and Score).
- Stability Level: ALPHA
- Type: Counter
- Labels: `extension_point`, `plugin`, `profile`

**scheduler_plugin_execution_duration_seconds**
Duration for running a plugin at a specific extension point.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `extension_point`, `plugin`, `status`

**scheduler_queueing_hint_execution_duration_seconds**
Duration for running a queueing hint function of a plugin.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `event`, `hint`, `plugin`

**scheduler_scheduler_cache_size**
Number of nodes, pods, and assumed (bound) pods in the scheduler cache.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `type`

**scheduler_scheduling_algorithm_duration_seconds**
Scheduling algorithm latency in seconds.
- Stability Level: ALPHA
- Type: Histogram

**scheduler_unschedulable_pods**
The number of unschedulable pods broken down by plugin name. A pod will increment the gauge for all plugins that caused it to not schedule and so this metric have meaning only when broken down by plugin.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `plugin`, `profile`

**scheduler_volume_binder_cache_requests_total**
Total number for request volume binding cache.
- Stability Level: ALPHA
- Type: Counter
- Labels: `operation`

**scheduler_volume_scheduling_stage_error_total**
Volume scheduling stage error count.
- Stability Level: ALPHA
- Type: Counter
- Labels: `operation`

**scrape_error**
1 if there was an error while getting container metrics, 0 otherwise.
- Stability Level: ALPHA
- Type: Custom
- Deprecated Versions: 1.29.0
**service_controller_loadbalancer_sync_total**
A metric counting the amount of times any load balancer has been configured, as an effect of service/node changes on the cluster.
- Stability Level: ALPHA
- Type: Counter

**service_controller_nodesync_error_total**
A metric counting the amount of times any load balancer has been configured and errored, as an effect of node changes on the cluster.
- Stability Level: ALPHA
- Type: Counter

**service_controller_nodesync_latency_seconds**
A metric measuring the latency for nodesync which updates loadbalancer hosts on cluster node updates.
- Stability Level: ALPHA
- Type: Histogram

**service_controller_update_loadbalancer_host_latency_seconds**
A metric measuring the latency for updating each load balancer hosts.
- Stability Level: ALPHA
- Type: Histogram

**serviceaccount_invalid_legacy_auto_token_uses_total**
Cumulative invalid auto-generated legacy tokens used.
- Stability Level: ALPHA
- Type: Counter

**serviceaccount_legacy_auto_token_uses_total**
Cumulative auto-generated legacy tokens used.
- Stability Level: ALPHA
- Type: Counter

**serviceaccount_legacy_manual_token_uses_total**
Cumulative manually created legacy tokens used.
- Stability Level: ALPHA
- Type: Counter

**serviceaccount_legacy_tokens_total**
Cumulative legacy service account tokens used.
- Stability Level: ALPHA
- Type: Counter

**serviceaccount_stale_tokens_total**
Cumulative stale projected service account tokens used.
- Stability Level: ALPHA
- Type: Counter

**serviceaccount_valid_tokens_total**
Cumulative valid projected service account tokens used.
- Stability Level: ALPHA
- Type: Counter

**storage_count_attachable_volumes_in_use**
Measure number of volumes in use.
- Stability Level: ALPHA
- Type: Custom
- Labels: `node`, `volume_plugin`

**storage_operation_duration_seconds**
Storage operation duration.
- Stability Level: ALPHA
- Type: Histogram
- Labels: `migrated`, `operation_name`, `status`, `volume_plugin`
**taint_eviction_controller_pod_deletion_duration_seconds**
Latency, in seconds, between the time when a taint effect has been activated for the Pod and its deletion via TaintEvictionController.
- Stability Level: ALPHA
- Type: Histogram

**taint_eviction_controller_pod_deletions_total**
Total number of Pods deleted by TaintEvictionController since its start.
- Stability Level: ALPHA
- Type: Counter

**ttl_after_finished_controller_job_deletion_duration_seconds**
The time it took to delete the job since it became eligible for deletion.
- Stability Level: ALPHA
- Type: Histogram

**volume_manager_selinux_container_errors_total**
Number of errors when kubelet cannot compute SELinux context for a container. Kubelet can't start such a Pod then and it will retry, therefore value of this metric may not represent the actual nr. of containers.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `access_mode`

**volume_manager_selinux_container_warnings_total**
Number of errors when kubelet cannot compute SELinux context for a container that are ignored. They will become real errors when SELinuxMountReadWriteOncePod feature is expanded to all volume access modes.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `access_mode`

**volume_manager_selinux_pod_context_mismatch_errors_total**
Number of errors when a Pod defines different SELinux contexts for its containers that use the same volume. Kubelet can't start such a Pod then and it will retry, therefore value of this metric may not represent the actual nr. of Pods.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `access_mode`

**volume_manager_selinux_pod_context_mismatch_warnings_total**
Number of errors when a Pod defines different SELinux contexts for its containers that use the same volume. They are not errors yet, but they will become real errors when SELinuxMountReadWriteOncePod feature is expanded to all volume access modes.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `access_mode`

**volume_manager_selinux_volume_context_mismatch_errors_total**
Number of errors when a Pod uses a volume that is already mounted with a different SELinux context than the Pod needs. Kubelet can't start such a Pod then and it will retry, therefore value of this metric may not represent the actual nr. of Pods.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `access_mode`, `volume_plugin`

**volume_manager_selinux_volume_context_mismatch_warnings_total**
Number of errors when a Pod uses a volume that is already mounted with a different SELinux context than the Pod needs. They are not errors yet, but they will become real errors when SELinuxMountReadWriteOncePod feature is expanded to all volume access modes.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `access_mode`, `volume_plugin`

**volume_manager_selinux_volumes_admitted_total**
Number of volumes whose SELinux context was fine and will be mounted with mount -o context option.
- Stability Level: ALPHA
- Type: Gauge
- Labels: `access_mode`, `volume_plugin`

**volume_manager_total_volumes**
Number of volumes in Volume Manager.
- Stability Level: ALPHA
- Type: Custom
- Labels: `plugin_name`, `state`

**volume_operation_total_errors**
Total volume operation errors.
- Stability Level: ALPHA
- Type: Counter
- Labels: `operation_name`, `plugin_name`
 li   ul     div  div class  metric  data stability  alpha     div class  metric name  volume operation total seconds  div    div class  metric help  Storage operation end to end duration in seconds  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  operation name  span  span class  metric label  plugin name  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  watch cache capacity  div    div class  metric help  Total capacity of watch cache broken by resource type   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  watch cache capacity decrease total  div    div class  metric help  Total number of watch cache capacity decrease events broken by resource type   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  watch cache capacity increase total  div    div class  metric help  Total number of watch cache capacity increase events broken by resource type   div  
  ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  resource  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  workqueue adds total  div    div class  metric help  Total number of adds handled by workqueue  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  workqueue depth  div    div class  metric help  Current depth of workqueue  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  workqueue longest running processor seconds  div    div class  metric help  How many seconds has the longest running processor for workqueue been running   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  
span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  workqueue queue duration seconds  div    div class  metric help  How long in seconds an item stays in workqueue before being requested   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  workqueue retries total  div    div class  metric help  Total number of retries handled by workqueue  div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  counter   label class  metric detail  Type   label   span class  metric type  Counter  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  workqueue unfinished work seconds  div    div class  metric help  How many seconds of work has done that is in progress and hasn t been observed by work duration  Large values indicate stuck threads  One can deduce the number of stuck threads by observing the rate at which this increases   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  gauge   label class  metric detail  Type   label   span class  metric type  Gauge  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span   li   ul     div  div class  metric  data stability  alpha     div class  metric name  workqueue work duration seconds  div    div 
class  metric help  How long in seconds processing an item from workqueue takes   div    ul    li  label class  metric detail  Stability Level   label  span class  metric stability level  ALPHA  span   li    li data type  histogram   label class  metric detail  Type   label   span class  metric type  Histogram  span   li    li class  metric labels varying   label class  metric detail  Labels   label  span class  metric label  name  span   li   ul     div    div "}
{"questions":"kubernetes reference weight 20 title kubeadm init contenttype concept This command initializes a Kubernetes control plane node overview","answers":"---\ntitle: kubeadm init\ncontent_type: concept\nweight: 20\n---\n\n<!-- overview -->\n\nThis command initializes a Kubernetes control-plane node.\n\n<!-- body -->\n\n\n\n### Init workflow {#init-workflow}\n\n`kubeadm init` bootstraps a Kubernetes control-plane node by executing the\nfollowing steps:\n\n1. Runs a series of pre-flight checks to validate the system state\n   before making changes. Some checks only trigger warnings, others are\n   considered errors and will exit kubeadm until the problem is corrected or the\n   user specifies `--ignore-preflight-errors=<list-of-errors>`.\n\n1. Generates a self-signed CA to set up identities for each component in the cluster. The user can provide their\n   own CA cert and\/or key by dropping it in the cert directory configured via `--cert-dir`\n   (`\/etc\/kubernetes\/pki` by default).\n   The APIServer certs will have additional SAN entries for any `--apiserver-cert-extra-sans`\n   arguments, lowercased if necessary.\n\n1. Writes kubeconfig files in `\/etc\/kubernetes\/` for the kubelet, the controller-manager and the\n   scheduler to use to connect to the API server, each with its own identity. Also\n   additional kubeconfig files are written, for kubeadm as administrative entity (`admin.conf`)\n   and for a super admin user that can bypass RBAC (`super-admin.conf`).\n\n1. Generates static Pod manifests for the API server,\n   controller-manager and scheduler. In case an external etcd is not provided,\n   an additional static Pod manifest is generated for etcd.\n\n   Static Pod manifests are written to `\/etc\/kubernetes\/manifests`; the kubelet\n   watches this directory for Pods to create on startup.\n\n   Once control plane Pods are up and running, the `kubeadm init` sequence can continue.\n\n1. 
Apply labels and taints to the control-plane node so that no additional workloads will\n   run there.\n\n1. Generates the token that additional nodes can use to register\n   themselves with a control-plane in the future. Optionally, the user can provide a\n   token via `--token`, as described in the\n   [kubeadm token](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-token\/) docs.\n\n1. Makes all the necessary configurations for allowing node joining with the\n   [Bootstrap Tokens](\/docs\/reference\/access-authn-authz\/bootstrap-tokens\/) and\n   [TLS Bootstrap](\/docs\/reference\/access-authn-authz\/kubelet-tls-bootstrapping\/)\n   mechanism:\n\n   - Write a ConfigMap for making available all the information required\n     for joining, and set up related RBAC access rules.\n\n   - Let Bootstrap Tokens access the CSR signing API.\n\n   - Configure auto-approval for new CSR requests.\n\n   See [kubeadm join](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-join\/) for additional info.\n\n1. Installs a DNS server (CoreDNS) and the kube-proxy addon components via the API server.\n   In Kubernetes version 1.11 and later CoreDNS is the default DNS server.\n   Please note that although the DNS server is deployed, it will not be scheduled until CNI is installed.\n\n   \n   kube-dns usage with kubeadm is deprecated as of v1.18 and is removed in v1.21.\n   \n\n### Using init phases with kubeadm {#init-phases}\n\nKubeadm allows you to create a control-plane node in phases using the `kubeadm init phase` command.\n\nTo view the ordered list of phases and sub-phases you can call `kubeadm init --help`. 
The list\nwill be located at the top of the help screen and each phase will have a description next to it.\nNote that by calling `kubeadm init` all of the phases and sub-phases will be executed in this exact order.\n\nSome phases have unique flags, so if you want to have a look at the list of available options add\n`--help`, for example:\n\n```shell\nsudo kubeadm init phase control-plane controller-manager --help\n```\n\nYou can also use `--help` to see the list of sub-phases for a certain parent phase:\n\n```shell\nsudo kubeadm init phase control-plane --help\n```\n\n`kubeadm init` also exposes a flag called `--skip-phases` that can be used to skip certain phases.\nThe flag accepts a list of phase names and the names can be taken from the above ordered list.\n\nAn example:\n\n```shell\nsudo kubeadm init phase control-plane all --config=configfile.yaml\nsudo kubeadm init phase etcd local --config=configfile.yaml\n# you can now modify the control plane and etcd manifest files\nsudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml\n```\n\nWhat this example would do is write the manifest files for the control plane and etcd in\n`\/etc\/kubernetes\/manifests` based on the configuration in `configfile.yaml`. This allows you to\nmodify the files and then skip these phases using `--skip-phases`. By calling the last command you\nwill create a control plane node with the custom manifest files.\n\n\n\nAlternatively, you can use the `skipPhases` field under `InitConfiguration`.\n\n### Using kubeadm init with a configuration file {#config-file}\n\n\nThe config file is still considered beta and may change in future versions.\n\n\nIt's possible to configure `kubeadm init` with a configuration file instead of command\nline flags, and some more advanced features may only be available as\nconfiguration file options. 
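As a sketch, a file passed via `--config` might combine an `InitConfiguration` and a `ClusterConfiguration` separated by `---`; the field values below are illustrative examples, not defaults or recommendations:

```yaml
# Hypothetical kubeadm configuration file; adjust every value for your cluster.
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  name: control-plane-1        # example node name
skipPhases:
  - addon/kube-proxy           # example: skip a phase by name
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.31.0     # example version
imageRepository: registry.k8s.io
featureGates:
  EtcdLearnerMode: true        # kubeadm-specific feature gate
```

You would then run something like `sudo kubeadm init --config configfile.yaml`.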
This file is passed using the `--config` flag and it must\ncontain a `ClusterConfiguration` structure and optionally more structures separated by `---\\n`\nMixing `--config` with others flags may not be allowed in some cases.\n\nThe default configuration can be printed out using the\n[kubeadm config print](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-config\/) command.\n\nIf your configuration is not using the latest version it is **recommended** that you migrate using\nthe [kubeadm config migrate](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-config\/) command.\n\nFor more information on the fields and usage of the configuration you can navigate to our\n[API reference page](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/).\n\n### Using kubeadm init with feature gates {#feature-gates}\n\nKubeadm supports a set of feature gates that are unique to kubeadm and can only be applied\nduring cluster creation with `kubeadm init`. These features can control the behavior\nof the cluster. Feature gates are removed after a feature graduates to GA.\n\nTo pass a feature gate you can either use the `--feature-gates` flag for\n`kubeadm init`, or you can add items into the `featureGates` field when you pass\na [configuration file](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/#kubeadm-k8s-io-v1beta4-ClusterConfiguration)\nusing `--config`.\n\nPassing [feature gates for core Kubernetes components](\/docs\/reference\/command-line-tools-reference\/feature-gates)\ndirectly to kubeadm is not supported. 
Instead, it is possible to pass them by\n[Customizing components with the kubeadm API](\/docs\/setup\/production-environment\/tools\/kubeadm\/control-plane-flags\/).\n\nList of feature gates:\n\n\nFeature | Default | Alpha | Beta | GA\n:-------|:--------|:------|:-----|:----\n`ControlPlaneKubeletLocalMode` | `false` | 1.31 | - | -\n`EtcdLearnerMode` | `true` | 1.27 | 1.29 | -\n`PublicKeysECDSA` | `false` | 1.19 | - | -\n`WaitForAllControlPlaneComponents` | `false` | 1.30 | - | -\n\n\n\nOnce a feature gate goes GA its value becomes locked to `true` by default.\n\n\nFeature gate descriptions:\n\n`ControlPlaneKubeletLocalMode`\n: With this feature gate enabled, when joining a new control plane node, kubeadm will configure the kubelet\nto connect to the local kube-apiserver. This ensures that there will not be a violation of the version skew\npolicy during rolling upgrades.\n\n`EtcdLearnerMode`\n: With this feature gate enabled, when joining a new control plane node, a new etcd member will be created\nas a learner and promoted to a voting member only after the etcd data are fully aligned.\n\n`PublicKeysECDSA`\n: Can be used to create a cluster that uses ECDSA certificates instead of the default RSA algorithm.\nRenewal of existing ECDSA certificates is also supported using `kubeadm certs renew`, but you cannot\nswitch between the RSA and ECDSA algorithms on the fly or during upgrades. Kubernetes versions before v1.31 had a bug where keys in generated kubeconfig files\nwere set to use RSA, even when you had enabled the `PublicKeysECDSA` feature gate.\n\n`WaitForAllControlPlaneComponents`\n: With this feature gate enabled kubeadm will wait for all control plane components (kube-apiserver,\nkube-controller-manager, kube-scheduler) on a control plane node to report status 200 on their `\/healthz`\nendpoints. 
These checks are performed on `https:\/\/127.0.0.1:PORT\/healthz`, where `PORT` is taken from\n`--secure-port` of a component. If you specify custom `--secure-port` values in the kubeadm configuration\nthey will be respected. Without the feature gate enabled, kubeadm will only wait for the kube-apiserver\non a control plane node to become ready. The wait process starts right after the kubelet on the host\nis started by kubeadm. You are advised to enable this feature gate in case you wish to observe a ready\nstate from all control plane components during the `kubeadm init` or `kubeadm join` command execution.\n\nList of deprecated feature gates:\n\n\nFeature | Default | Alpha | Beta | GA | Deprecated\n:-------|:--------|:------|:-----|:---|:----------\n`RootlessControlPlane` | `false` | 1.22 | - | - | 1.31\n\n\nFeature gate descriptions:\n\n`RootlessControlPlane`\n: Setting this flag configures the kubeadm deployed control plane component static Pod containers\nfor `kube-apiserver`, `kube-controller-manager`, `kube-scheduler` and `etcd` to run as non-root users.\nIf the flag is not set, those components run as root. You can change the value of this feature gate before\nyou upgrade to a newer version of Kubernetes.\n\nList of removed feature gates:\n\n\nFeature | Alpha | Beta | GA | Removed\n:-------|:------|:-----|:---|:-------\n`IPv6DualStack` | 1.16 | 1.21 | 1.23 | 1.24\n`UnversionedKubeletConfigMap` | 1.22 | 1.23 | 1.25 | 1.26\n`UpgradeAddonsBeforeControlPlane` | 1.28 | - | - | 1.31\n\n\nFeature gate descriptions:\n\n`IPv6DualStack`\n: This flag helps to configure components dual stack when the feature is in progress. For more details on Kubernetes\ndual-stack support see [Dual-stack support with kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/dual-stack-support\/).\n\n`UnversionedKubeletConfigMap`\n: This flag controls the name of the ConfigMap where kubeadm stores\nkubelet configuration data. 
With this flag not specified or set to `true`, the ConfigMap is named `kubelet-config`.\nIf you set this flag to `false`, the name of the ConfigMap includes the major and minor version for Kubernetes\n(for example: `kubelet-config-`). Kubeadm ensures that RBAC rules for reading and writing\nthat ConfigMap are appropriate for the value you set. When kubeadm writes this ConfigMap (during `kubeadm init`\nor `kubeadm upgrade apply`), kubeadm respects the value of `UnversionedKubeletConfigMap`. When reading that ConfigMap\n(during `kubeadm join`, `kubeadm reset`, `kubeadm upgrade ...`), kubeadm attempts to use unversioned ConfigMap name first;\nif that does not succeed, kubeadm falls back to using the legacy (versioned) name for that ConfigMap.\n\n`UpgradeAddonsBeforeControlPlane`\n: This feature gate has been removed. It was introduced in v1.28 as a deprecated feature and then removed in v1.31. For documentation on older versions, please switch to the corresponding website version.\n\n### Adding kube-proxy parameters {#kube-proxy}\n\nFor information about kube-proxy parameters in the kubeadm configuration see:\n- [kube-proxy reference](\/docs\/reference\/config-api\/kube-proxy-config.v1alpha1\/)\n\nFor information about enabling IPVS mode with kubeadm see:\n- [IPVS](https:\/\/github.com\/kubernetes\/kubernetes\/blob\/master\/pkg\/proxy\/ipvs\/README.md)\n\n### Passing custom flags to control plane components {#control-plane-flags}\n\nFor information about passing flags to control plane components see:\n- [control-plane-flags](\/docs\/setup\/production-environment\/tools\/kubeadm\/control-plane-flags\/)\n\n### Running kubeadm without an Internet connection {#without-internet-connection}\n\nFor running kubeadm without an Internet connection you have to pre-pull the required control-plane images.\n\nYou can list and pull the images using the `kubeadm config images` sub-command:\n\n```shell\nkubeadm config images list\nkubeadm config images pull\n```\n\nYou can pass 
`--config` to the above commands with a [kubeadm configuration file](#config-file)\nto control the `kubernetesVersion` and `imageRepository` fields.\n\nAll default `registry.k8s.io` images that kubeadm requires support multiple architectures.\n\n### Using custom images {#custom-images}\n\nBy default, kubeadm pulls images from `registry.k8s.io`. If the\nrequested Kubernetes version is a CI label (such as `ci\/latest`)\n`gcr.io\/k8s-staging-ci-images` is used.\n\nYou can override this behavior by using [kubeadm with a configuration file](#config-file).\nAllowed customizations are:\n\n* To provide `kubernetesVersion` which affects the version of the images.\n* To provide an alternative `imageRepository` to be used instead of\n  `registry.k8s.io`.\n* To provide a specific `imageRepository` and `imageTag` for etcd or CoreDNS.\n\nImage paths between the default `registry.k8s.io` and a custom repository specified using\n`imageRepository` may differ for backwards compatibility reasons. For example,\none image might have a subpath at `registry.k8s.io\/subpath\/image`, but be defaulted\nto `my.customrepository.io\/image` when using a custom repository.\n\nTo ensure you push the images to your custom repository in paths that kubeadm\ncan consume, you must:\n\n* Pull images from the default paths at `registry.k8s.io` using `kubeadm config images {list|pull}`.\n* Push images to the paths from `kubeadm config images list --config=config.yaml`,\nwhere `config.yaml` contains the custom `imageRepository`, and\/or `imageTag`\nfor etcd and CoreDNS.\n* Pass the same `config.yaml` to `kubeadm init`.\n\n#### Custom sandbox (pause) images {#custom-pause-image}\n\nTo set a custom image for these you need to configure this in your container runtime\nto use the image.\nConsult the documentation for your container runtime to find out how to change this setting;\nfor selected container runtimes, you can also find advice within the\n[Container 
Runtimes](\/docs\/setup\/production-environment\/container-runtimes\/) topic.\n\n### Uploading control-plane certificates to the cluster\n\nBy adding the flag `--upload-certs` to `kubeadm init` you can temporarily upload\nthe control-plane certificates to a Secret in the cluster. Please note that this Secret\nwill expire automatically after 2 hours. The certificates are encrypted using\na 32-byte key that can be specified using `--certificate-key`. The same key can be used\nto download the certificates when additional control-plane nodes are joining, by passing\n`--control-plane` and `--certificate-key` to `kubeadm join`.\n\nThe following phase command can be used to re-upload the certificates after expiration:\n\n```shell\nkubeadm init phase upload-certs --upload-certs --config=SOME_YAML_FILE\n```\n\nA predefined `certificateKey` can be provided in `InitConfiguration` when passing the\n[configuration file](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/) with `--config`.\n\n\nIf a predefined certificate key is not passed to `kubeadm init` and\n`kubeadm init phase upload-certs` a new key will be generated automatically.\n\nThe following command can be used to generate a new key on demand:\n\n```shell\nkubeadm certs certificate-key\n```\n\n### Certificate management with kubeadm\n\nFor detailed information on certificate management with kubeadm see\n[Certificate Management with kubeadm](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-certs\/).\nThe document includes information about using external CA, custom certificates\nand certificate renewal.\n\n### Managing the kubeadm drop-in file for the kubelet {#kubelet-drop-in}\n\nThe `kubeadm` package ships with a configuration file for running the `kubelet` by `systemd`.\nNote that the kubeadm CLI never touches this drop-in file. 
This drop-in file is part of the kubeadm\nDEB\/RPM package.\n\nFor further information, see\n[Managing the kubeadm drop-in file for systemd](\/docs\/setup\/production-environment\/tools\/kubeadm\/kubelet-integration\/#the-kubelet-drop-in-file-for-systemd).\n\n### Use kubeadm with CRI runtimes\n\nBy default kubeadm attempts to detect your container runtime. For more details on this detection,\nsee the [kubeadm CRI installation guide](\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/#installing-runtime).\n\n### Setting the node name\n\nBy default, `kubeadm` assigns a node name based on a machine's host address.\nYou can override this setting with the `--node-name` flag.\nThe flag passes the appropriate [`--hostname-override`](\/docs\/reference\/command-line-tools-reference\/kubelet\/#options)\nvalue to the kubelet.\n\nBe aware that overriding the hostname can\n[interfere with cloud providers](https:\/\/github.com\/kubernetes\/website\/pull\/8873).\n\n### Automating kubeadm\n\nRather than copying the token you obtained from `kubeadm init` to each node, as\nin the [basic kubeadm tutorial](\/docs\/setup\/production-environment\/tools\/kubeadm\/create-cluster-kubeadm\/),\nyou can parallelize the token distribution for easier automation. To implement this automation,\nyou must know the IP address that the control-plane node will have after it is started, or use a\nDNS name or an address of a load balancer.\n\n1. Generate a token. This token must have the form  `<6 character string>.<16\n   character string>`. More formally, it must match the regex:\n   `[a-z0-9]{6}\\.[a-z0-9]{16}`.\n\n   kubeadm can generate a token for you:\n\n   ```shell\n    kubeadm token generate\n   ```\n\n1. Start both the control-plane node and the worker nodes concurrently with this token.\n   As they come up they should find each other and form the cluster.  The same\n   `--token` argument can be used on both `kubeadm init` and `kubeadm join`.\n\n1. 
The same can be done for `--certificate-key` when joining additional control-plane\n   nodes. The key can be generated using:\n\n   ```shell\n   kubeadm certs certificate-key\n   ```\n\nOnce the cluster is up, you can use the `\/etc\/kubernetes\/admin.conf` file from\na control-plane node to talk to the cluster with administrator credentials or\n[Generating kubeconfig files for additional users](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-certs#kubeconfig-additional-users).\n\nNote that this style of bootstrap has some relaxed security guarantees because\nit does not allow the root CA hash to be validated with\n`--discovery-token-ca-cert-hash` (since it's not generated when the nodes are\nprovisioned). For details, see the [kubeadm join](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-join\/).\n\n## What's next\n\n* [kubeadm init phase](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/) to understand more about\n  `kubeadm init` phases\n* [kubeadm join](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-join\/) to bootstrap a Kubernetes\n  worker node and join it to the cluster\n* [kubeadm upgrade](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-upgrade\/) to upgrade a Kubernetes\n  cluster to a newer version\n* [kubeadm reset](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-reset\/) to revert any changes made\n  to this host by `kubeadm init` or `kubeadm join`\n","site":"kubernetes reference","answers_cleaned":"    title  kubeadm init content type  concept weight  20           overview      This command initializes a Kubernetes control plane node        body            Init workflow   init workflow    kubeadm init  bootstraps a Kubernetes control plane node by executing the following steps   1  Runs a series of pre flight checks to validate the system state    before making changes  Some checks only trigger warnings  others are    considered errors and will exit kubeadm until the problem is corrected or the    user specifies    ignore preflight errors  
list of errors     1  Generates a self signed CA to set up identities for each component in the cluster  The user can provide their    own CA cert and or key by dropping it in the cert directory configured via    cert dir        etc kubernetes pki  by default      The APIServer certs will have additional SAN entries for any    apiserver cert extra sans     arguments  lowercased if necessary   1  Writes kubeconfig files in   etc kubernetes   for the kubelet  the controller manager and the    scheduler to use to connect to the API server  each with its own identity  Also    additional kubeconfig files are written  for kubeadm as administrative entity   admin conf      and for a super admin user that can bypass RBAC   super admin conf     1  Generates static Pod manifests for the API server     controller manager and scheduler  In case an external etcd is not provided     an additional static Pod manifest is generated for etcd      Static Pod manifests are written to   etc kubernetes manifests   the kubelet    watches this directory for Pods to create on startup      Once control plane Pods are up and running  the  kubeadm init  sequence can continue   1  Apply labels and taints to the control plane node so that no additional workloads will    run there   1  Generates the token that additional nodes can use to register    themselves with a control plane in the future  Optionally  the user can provide a    token via    token   as described in the     kubeadm token   docs reference setup tools kubeadm kubeadm token   docs   1  Makes all the necessary configurations for allowing node joining with the     Bootstrap Tokens   docs reference access authn authz bootstrap tokens   and     TLS Bootstrap   docs reference access authn authz kubelet tls bootstrapping      mechanism        Write a ConfigMap for making available all the information required      for joining  and set up related RBAC access rules        Let Bootstrap Tokens access the CSR signing API        Configure 
auto approval for new CSR requests      See  kubeadm join   docs reference setup tools kubeadm kubeadm join   for additional info   1  Installs a DNS server  CoreDNS  and the kube proxy addon components via the API server     In Kubernetes version 1 11 and later CoreDNS is the default DNS server     Please note that although the DNS server is deployed  it will not be scheduled until CNI is installed          kube dns usage with kubeadm is deprecated as of v1 18 and is removed in v1 21           Using init phases with kubeadm   init phases   Kubeadm allows you to create a control plane node in phases using the  kubeadm init phase  command   To view the ordered list of phases and sub phases you can call  kubeadm init   help   The list will be located at the top of the help screen and each phase will have a description next to it  Note that by calling  kubeadm init  all of the phases and sub phases will be executed in this exact order   Some phases have unique flags  so if you want to have a look at the list of available options add    help   for example      shell sudo kubeadm init phase control plane controller manager   help      You can also use    help  to see the list of sub phases for a certain parent phase      shell sudo kubeadm init phase control plane   help       kubeadm init  also exposes a flag called    skip phases  that can be used to skip certain phases  The flag accepts a list of phase names and the names can be taken from the above ordered list   An example      shell sudo kubeadm init phase control plane all   config configfile yaml sudo kubeadm init phase etcd local   config configfile yaml   you can now modify the control plane and etcd manifest files sudo kubeadm init   skip phases control plane etcd   config configfile yaml      What this example would do is write the manifest files for the control plane and etcd in   etc kubernetes manifests  based on the configuration in  configfile yaml   This allows you to modify the files and then skip 
By calling the last command you will create a control plane node with the custom manifest files.

Alternatively, you can use the `skipPhases` field under `InitConfiguration`.

### Using kubeadm init with a configuration file {#config-file}

> **Caution:** The config file is still considered beta and may change in future versions.

It's possible to configure `kubeadm init` with a configuration file instead of command line flags,
and some more advanced features may only be available as configuration file options. This file is
passed using the `--config` flag and it must contain a `ClusterConfiguration` structure and
optionally more structures separated by `---\n`. Mixing `--config` with other flags may not be
allowed in some cases.

The default configuration can be printed out using the
[kubeadm config print](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command.

If your configuration is not using the latest version it is **recommended** that you migrate using
the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command.

For more information on the fields and usage of the configuration you can navigate to our
[API reference page](/docs/reference/config-api/kubeadm-config.v1beta4/).

### Using kubeadm init with feature gates {#feature-gates}

Kubeadm supports a set of feature gates that are unique to kubeadm and can only be applied during
cluster creation with `kubeadm init`. These features can control the behavior of the cluster.
Feature gates are removed after a feature graduates to GA.

To pass a feature gate you can either use the `--feature-gates` flag for `kubeadm init`, or you can
add items into the `featureGates` field when you pass a
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta4/#kubeadm-k8s-io-v1beta4-ClusterConfiguration)
using `--config`.

Passing [feature gates for core Kubernetes components](/docs/reference/command-line-tools-reference/feature-gates/)
directly to kubeadm is not supported. Instead, it is possible to pass them by
[Customizing components with the kubeadm API](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/).
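As a concrete illustration of the two configuration mechanisms just described, the sketch below writes a minimal `ClusterConfiguration` that sets a feature gate via the `featureGates` field. The `kubernetesVersion` value is only a placeholder, and the final `kubeadm init` invocation is left as a comment because it requires a host that is actually being provisioned:

```shell
# Write a minimal kubeadm configuration file (values are illustrative).
cat > /tmp/kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.31.0
featureGates:
  EtcdLearnerMode: true
EOF

# The file would then be passed to kubeadm (not executed here):
# sudo kubeadm init --config /tmp/kubeadm-config.yaml
```

The same file can carry additional structures (for example an `InitConfiguration`) separated by `---`.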
List of feature gates:

| Feature | Default | Alpha | Beta | GA |
|---------|---------|-------|------|----|
| `ControlPlaneKubeletLocalMode` | `false` | 1.31 | - | - |
| `EtcdLearnerMode` | `true` | 1.27 | 1.29 | - |
| `PublicKeysECDSA` | `false` | 1.19 | - | - |
| `WaitForAllControlPlaneComponents` | `false` | 1.30 | - | - |

Once a feature gate goes GA its value becomes locked to `true` by default.

Feature gate descriptions:

`ControlPlaneKubeletLocalMode`
: With this feature gate enabled, when joining a new control plane node, kubeadm will configure the
kubelet to connect to the local kube-apiserver. This ensures that there will not be a violation of
the version skew policy during rolling upgrades.

`EtcdLearnerMode`
: With this feature gate enabled, when joining a new control plane node, a new etcd member will be
created as a learner and promoted to a voting member only after the etcd data are fully aligned.

`PublicKeysECDSA`
: Can be used to create a cluster that uses ECDSA certificates instead of the default RSA algorithm.
Renewal of existing ECDSA certificates is also supported using `kubeadm certs renew`, but you cannot
switch between the RSA and ECDSA algorithms on the fly or during upgrades. Kubernetes versions
before v1.31 had a bug where keys in generated kubeconfig files were set to use RSA, even when you
had enabled the `PublicKeysECDSA` feature gate.

`WaitForAllControlPlaneComponents`
: With this feature gate enabled kubeadm will wait for all control plane components (kube-apiserver,
kube-controller-manager, kube-scheduler) on a control plane node to report status 200 on their
`/healthz` endpoints. These checks are performed on `https://127.0.0.1:PORT/healthz`, where `PORT`
is taken from `--secure-port` of a component.
If you specify custom `--secure-port` values in the kubeadm configuration they will be respected.
Without the feature gate enabled, kubeadm will only wait for the kube-apiserver on a control plane
node to become ready. The wait process starts right after the kubelet on the host is started by
kubeadm. You are advised to enable this feature gate in case you wish to observe a ready state from
all control plane components during the `kubeadm init` or `kubeadm join` command execution.

List of deprecated feature gates:

| Feature | Default | Alpha | Beta | GA | Deprecated |
|---------|---------|-------|------|----|------------|
| `RootlessControlPlane` | `false` | 1.22 | - | - | 1.31 |

Feature gate descriptions:

`RootlessControlPlane`
: Setting this flag configures the kubeadm deployed control plane component static Pod containers
for `kube-apiserver`, `kube-controller-manager`, `kube-scheduler` and `etcd` to run as non-root
users. If the flag is not set, those components run as root. You can change the value of this
feature gate before you upgrade to a newer version of Kubernetes.

List of removed feature gates:

| Feature | Alpha | Beta | GA | Removed |
|---------|-------|------|----|---------|
| `IPv6DualStack` | 1.16 | 1.21 | 1.23 | 1.24 |
| `UnversionedKubeletConfigMap` | 1.22 | 1.23 | 1.25 | 1.26 |
| `UpgradeAddonsBeforeControlPlane` | 1.28 | - | - | 1.31 |

Feature gate descriptions:

`IPv6DualStack`
: This flag helps to configure components dual-stack when the feature is in progress. For more
details on Kubernetes dual-stack support see
[Dual-stack support with kubeadm](/docs/setup/production-environment/tools/kubeadm/dual-stack-support/).

`UnversionedKubeletConfigMap`
: This flag controls the name of the ConfigMap where kubeadm stores kubelet configuration data.
With this flag not specified or set to `true`, the ConfigMap is named `kubelet-config`. If you set
this flag to `false`, the name of the ConfigMap includes the major and minor version for Kubernetes.
Kubeadm ensures that RBAC rules for reading and writing that ConfigMap are appropriate for the
value you set. When kubeadm writes this ConfigMap (during `kubeadm init` or `kubeadm upgrade apply`),
kubeadm respects the value of `UnversionedKubeletConfigMap`. When reading that ConfigMap (during
`kubeadm join`, `kubeadm reset`, `kubeadm upgrade ...`), kubeadm attempts to use the unversioned
ConfigMap name first; if that does not succeed, kubeadm falls back to using the legacy (versioned)
name for that ConfigMap.

`UpgradeAddonsBeforeControlPlane`
: This feature gate has been removed. It was introduced in v1.28 as a deprecated feature and then
removed in v1.31. For documentation on older versions, please switch to the corresponding website
version.

### Adding kube-proxy parameters {#kube-proxy}

For information about kube-proxy parameters in the kubeadm configuration see:

- [kube-proxy reference](/docs/reference/config-api/kube-proxy-config.v1alpha1/)

For information about enabling IPVS mode with kubeadm see:

- [IPVS](https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/README.md)

### Passing custom flags to control plane components {#control-plane-flags}

For information about passing flags to control plane components see:

- [control plane flags](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)

### Running kubeadm without an Internet connection {#without-internet-connection}

For running kubeadm without an Internet connection you have to pre-pull the required control plane images.

You can list and pull the images using the `kubeadm config images` sub-command:

```shell
kubeadm config images list
kubeadm config images pull
```

You can pass `--config` to the above commands with a [kubeadm configuration file](#config-file) to
control the `kubernetesVersion` and `imageRepository` fields.

All default `registry.k8s.io` images that kubeadm requires support multiple architectures.
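When mirroring the pre-pulled images into a private registry for an air-gapped install, the default registry prefix has to be rewritten to your custom `imageRepository`. A minimal sketch: the image list is hard-coded here as an assumption, because the real list comes from `kubeadm config images list` and its names and tags vary by kubeadm version; a plain prefix rewrite like this may also need adjusting for images that live under a subpath:

```shell
# Sample of what `kubeadm config images list` might print (illustrative only;
# actual image names and tags depend on the kubeadm version).
images='registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/coredns/coredns:v1.11.1'

# Rewrite the default registry prefix to a custom imageRepository.
custom_repo='my.customrepository.io'
echo "$images" | sed "s|^registry\.k8s\.io/|${custom_repo}/|"
```

Each rewritten name would then be used as the push target, for example with `docker tag` and `docker push`, before passing the matching `imageRepository` to `kubeadm init` via `--config`.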
### Using custom images {#custom-images}

By default, kubeadm pulls images from `registry.k8s.io`. If the requested Kubernetes version is a
CI label (such as `ci/latest`), `gcr.io/k8s-staging-ci-images` is used.

You can override this behavior by using [kubeadm with a configuration file](#config-file).
Allowed customizations are:

- To provide `kubernetesVersion`, which affects the version of the images.
- To provide an alternative `imageRepository` to be used instead of `registry.k8s.io`.
- To provide a specific `imageRepository` and `imageTag` for etcd or CoreDNS.

Image paths between the default `registry.k8s.io` and a custom repository specified using
`imageRepository` may differ for backwards compatibility reasons. For example, one image might have
a subpath at `registry.k8s.io/subpath/image`, but be defaulted to `my.customrepository.io/image`
when using a custom repository.

To ensure you push the images to your custom repository in paths that kubeadm can consume, you must:

- Pull images from the default paths at `registry.k8s.io` using `kubeadm config images {list|pull}`.
- Push images to the paths from `kubeadm config images list --config=config.yaml`, where
  `config.yaml` contains the custom `imageRepository` and/or `imageTag` for etcd and CoreDNS.
- Pass the same `config.yaml` to `kubeadm init`.

### Custom sandbox (pause) images {#custom-pause-image}

To set a custom image for these you need to configure your container runtime to use the image.
Consult the documentation for your container runtime to find out how to change this setting; for
selected container runtimes, you can also find advice within the
[Container Runtimes](/docs/setup/production-environment/container-runtimes/) topic.

### Uploading control plane certificates to the cluster

By adding the flag `--upload-certs` to `kubeadm init` you can temporarily upload the control plane
certificates to a Secret in the cluster. Please note that this Secret will expire automatically
after 2 hours. The certificates are encrypted using a 32-byte key that can be specified using
`--certificate-key`.
The same key can be used to download the certificates when additional control plane nodes are
joining, by passing `--control-plane` and `--certificate-key` to `kubeadm join`.

The following phase command can be used to re-upload the certificates after expiration:

```shell
kubeadm init phase upload-certs --upload-certs --config=SOME_YAML_FILE
```

A predefined `certificateKey` can be provided in `InitConfiguration` when passing the
[configuration file](/docs/reference/config-api/kubeadm-config.v1beta4/) with `--config`.

If a predefined certificate key is not passed to `kubeadm init` and
`kubeadm init phase upload-certs`, a new key will be generated automatically.

The following command can be used to generate a new key on demand:

```shell
kubeadm certs certificate-key
```

### Certificate management with kubeadm

For detailed information on certificate management with kubeadm see
[Certificate Management with kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/).
The document includes information about using external CA, custom certificates and certificate renewal.

### Managing the kubeadm drop-in file for the kubelet {#kubelet-drop-in}

The `kubeadm` package ships with a configuration file for running the `kubelet` by `systemd`.
Note that the kubeadm CLI never touches this drop-in file. This drop-in file is part of the kubeadm
DEB/RPM package.

For further information, see
[Managing the kubeadm drop-in file for systemd](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubeadm-drop-in-file-for-systemd).

### Use kubeadm with CRI runtimes

By default kubeadm attempts to detect your container runtime. For more details on this detection,
see the [kubeadm CRI installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).

### Setting the node name

By default, `kubeadm` assigns a node name based on a machine's host address. You can override this
setting with the `--node-name` flag. The flag passes the appropriate
[`--hostname-override`](/docs/reference/command-line-tools-reference/kubelet/) value to the kubelet.

Be aware that overriding the hostname can
[interfere with cloud providers](https://github.com/kubernetes/website/pull/8873).
### Automating kubeadm

Rather than copying the token you obtained from `kubeadm init` to each node, as in the
[basic kubeadm tutorial](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/),
you can parallelize the token distribution for easier automation. To implement this automation,
you must know the IP address that the control plane node will have after it is started, or use a
DNS name or an address of a load balancer.

1. Generate a token. This token must have the form `<6 character string>.<16 character string>`.
   More formally, it must match the regex: `[a-z0-9]{6}\.[a-z0-9]{16}`.

   kubeadm can generate a token for you:

   ```shell
   kubeadm token generate
   ```

1. Start both the control plane node and the worker nodes concurrently with this token. As they
   come up they should find each other and form the cluster. The same `--token` argument can be
   used on both `kubeadm init` and `kubeadm join`.

1. Similar can be done for `--certificate-key` when joining additional control plane nodes. The key
   can be generated using:

   ```shell
   kubeadm certs certificate-key
   ```

Once the cluster is up, you can use the `/etc/kubernetes/admin.conf` file from a control plane node
to talk to the cluster with administrator credentials or
[Generating kubeconfig files for additional users](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubeconfig-additional-users).

Note that this style of bootstrap has some relaxed security guarantees because it does not allow
the root CA hash to be validated with `--discovery-token-ca-cert-hash` (since it's not generated
when the nodes are provisioned).
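For automation that pre-provisions these values on hosts where kubeadm is not yet installed, both formats can be reproduced and checked with standard tools. `kubeadm token generate` and `kubeadm certs certificate-key` remain the canonical sources; this sketch only mirrors the shape of their output (a 6+16 character lowercase token, and 32 random bytes hex-encoded) and assumes `/dev/urandom` is available:

```shell
# Generate a bootstrap-token-shaped value: [a-z0-9]{6}.[a-z0-9]{16}
rand_lower() {
  tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"
}
token="$(rand_lower 6).$(rand_lower 16)"

# Generate a certificate-key-shaped value: 32 random bytes, hex-encoded.
cert_key="$(od -An -tx1 -N32 /dev/urandom | tr -d ' \n')"

# Validate both before handing them to kubeadm in an automation script.
echo "$token"    | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'
echo "$cert_key" | grep -Eq '^[0-9a-f]{64}$'
```

The token would then be supplied via `--token` to both `kubeadm init` and `kubeadm join`, and the key via `--certificate-key`.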
For details, see [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/).

## What's next

- [kubeadm init phase](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/) to understand more
  about `kubeadm init` phases
- [kubeadm join](/docs/reference/setup-tools/kubeadm/kubeadm-join/) to bootstrap a Kubernetes
  worker node and join it to the cluster
- [kubeadm upgrade](/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/) to upgrade a Kubernetes
  cluster to a newer version
- [kubeadm reset](/docs/reference/setup-tools/kubeadm/kubeadm-reset/) to revert any changes made
  to this host by `kubeadm init` or `kubeadm join`
"}
{"questions":"kubernetes reference weight 100 title Implementation details luxas contenttype concept reviewers jbeda overview","answers":"---\nreviewers:\n- luxas\n- jbeda\ntitle: Implementation details\ncontent_type: concept\nweight: 100\n---\n\n<!-- overview -->\n\n\n\n`kubeadm init` and `kubeadm join` together provide a nice user experience for creating a\nbare Kubernetes cluster from scratch, that aligns with the best-practices.\nHowever, it might not be obvious _how_ kubeadm does that.\n\nThis document provides additional details on what happens under the hood, with the aim of sharing\nknowledge on the best practices for a Kubernetes cluster.\n\n<!-- body -->\n\n## Core design principles\n\nThe cluster that `kubeadm init` and `kubeadm join` set up should be:\n\n- **Secure**: It should adopt latest best-practices like:\n  - enforcing RBAC\n  - using the Node Authorizer\n  - using secure communication between the control plane components\n  - using secure communication between the API server and the kubelets\n  - lock-down the kubelet API\n  - locking down access to the API for system components like the kube-proxy and CoreDNS\n  - locking down what a Bootstrap Token can access\n- **User-friendly**: The user should not have to run anything more than a couple of commands:\n  - `kubeadm init`\n  - `export KUBECONFIG=\/etc\/kubernetes\/admin.conf`\n  - `kubectl apply -f <network-of-choice.yaml>`\n  - `kubeadm join --token <token> <endpoint>:<port>`\n- **Extendable**:\n  - It should _not_ favor any particular network provider. 
Configuring the cluster network is out-of-scope\n  - It should provide the possibility to use a config file for customizing various parameters\n\n## Constants and well-known values and paths\n\nIn order to reduce complexity and to simplify development of higher level tools that build on top of kubeadm, it uses a\nlimited set of constant values for well-known paths and file names.\n\nThe Kubernetes directory `\/etc\/kubernetes` is a constant in the application, since it is clearly the given path\nin a majority of cases, and the most intuitive location; other constant paths and file names are:\n\n- `\/etc\/kubernetes\/manifests` as the path where the kubelet should look for static Pod manifests.\n  Names of static Pod manifests are:\n\n  - `etcd.yaml`\n  - `kube-apiserver.yaml`\n  - `kube-controller-manager.yaml`\n  - `kube-scheduler.yaml`\n\n- `\/etc\/kubernetes\/` as the path where kubeconfig files with identities for control plane\n  components are stored. Names of kubeconfig files are:\n\n  - `kubelet.conf` (`bootstrap-kubelet.conf` during TLS bootstrap)\n  - `controller-manager.conf`\n  - `scheduler.conf`\n  - `admin.conf` for the cluster admin and kubeadm itself\n  - `super-admin.conf` for the cluster super-admin that can bypass RBAC\n\n- Names of certificates and key files:\n\n  - `ca.crt`, `ca.key` for the Kubernetes certificate authority\n  - `apiserver.crt`, `apiserver.key` for the API server certificate\n  - `apiserver-kubelet-client.crt`, `apiserver-kubelet-client.key` for the client certificate used\n    by the API server to connect to the kubelets securely\n  - `sa.pub`, `sa.key` for the key used by the controller manager when signing ServiceAccount\n  - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority\n  - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client\n\n## kubeadm init workflow internal design\n\nThe `kubeadm init` consists of a sequence of atomic work tasks to perform,\nas described 
in the `kubeadm init` [internal workflow](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init\/#init-workflow).\n\nThe [`kubeadm init phase`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/) command allows\nusers to invoke each task individually, and ultimately offers a reusable and composable\nAPI\/toolbox that can be used by other Kubernetes bootstrap tools, by any IT automation tool or by\nan advanced user for creating custom clusters.\n\n### Preflight checks\n\nKubeadm executes a set of preflight checks before starting the init, with the aim to verify\npreconditions and avoid common cluster startup problems.\nThe user can skip specific preflight checks or all of them with the `--ignore-preflight-errors` option.\n\n- [Warning] if the Kubernetes version to use (specified with the `--kubernetes-version` flag) is\n  at least one minor version higher than the kubeadm CLI version.\n- Kubernetes system requirements:\n  - if running on linux:\n    - [Error] if Kernel is older than the minimum required version\n    - [Error] if required cgroups subsystems aren't set up\n- [Error] if the CRI endpoint does not answer\n- [Error] if user is not root\n- [Error] if the machine hostname is not a valid DNS subdomain\n- [Warning] if the host name cannot be reached via network lookup\n- [Error] if kubelet version is lower than the minimum kubelet version supported by kubeadm (current minor -1)\n- [Error] if kubelet version is at least one minor higher than the required control plane version (unsupported version skew)\n- [Warning] if kubelet service does not exist or if it is disabled\n- [Warning] if firewalld is active\n- [Error] if API server bindPort or ports 10250\/10251\/10252 are used\n- [Error] if `\/etc\/kubernetes\/manifests` folder already exists and it is not empty\n- [Error] if swap is on\n- [Error] if `conntrack`, `ip`, `iptables`, `mount`, `nsenter` commands are not present in the command path\n- [Warning] if `ebtables`, `ethtool`, `socat`, `tc`, 
`touch`, `crictl` commands are not present in the command path\n- [Warning] if extra arg flags for API server, controller manager, scheduler contain some invalid options\n- [Warning] if connection to https:\/\/API.AdvertiseAddress:API.BindPort goes through proxy\n- [Warning] if connection to services subnet goes through proxy (only first address checked)\n- [Warning] if connection to Pods subnet goes through proxy (only first address checked)\n- If external etcd is provided:\n  - [Error] if etcd version is older than the minimum required version\n  - [Error] if etcd certificates or keys are specified, but not provided\n- If external etcd is NOT provided (and thus local etcd will be installed):\n  - [Error] if port 2379 is used\n  - [Error] if Etcd.DataDir folder already exists and it is not empty\n- If authorization mode is ABAC:\n  - [Error] if abac_policy.json does not exist\n- If authorization mode is WebHook:\n  - [Error] if webhook_authz.conf does not exist\n\n\nPreflight checks can be invoked individually with the\n[`kubeadm init phase preflight`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/#cmd-phase-preflight)\ncommand.\n\n\n### Generate the necessary certificates\n\nKubeadm generates certificate and private key pairs for different purposes:\n\n- A self-signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and\n  `ca.key` private key file\n\n- A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into\n  `apiserver.crt` file with its private key `apiserver.key`. This certificate should contain\n  the following alternative names:\n\n  - The Kubernetes service's internal clusterIP (the first address in the services CIDR, e.g.\n    `10.96.0.1` if service subnet is `10.96.0.0\/12`)\n  - Kubernetes DNS names, e.g. 
`kubernetes.default.svc.cluster.local` if `--service-dns-domain`\n    flag value is `cluster.local`, plus default DNS names `kubernetes.default.svc`,\n    `kubernetes.default`, `kubernetes`\n  - The node-name\n  - The `--apiserver-advertise-address`\n  - Additional alternative names specified by the user\n\n- A client certificate for the API server to connect to the kubelets securely, generated using\n  `ca.crt` as the CA and saved into `apiserver-kubelet-client.crt` file with its private key\n  `apiserver-kubelet-client.key`.\n  This certificate should be in the `system:masters` organization\n\n- A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`\n\n- A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key\n  `front-proxy-ca.key`\n\n- A client certificate for the front proxy client, generated using `front-proxy-ca.crt` as the CA and\n  saved into `front-proxy-client.crt` file with its private key `front-proxy-client.key`\n\nCertificates are stored by default in `\/etc\/kubernetes\/pki`, but this directory is configurable\nusing the `--cert-dir` flag.\n\nPlease note that:\n\n1. If a given certificate and private key pair both exist, and their content is evaluated to be compliant with the above specs, the existing files will\n   be used and the generation phase for the given certificate will be skipped. This means the user can, for example, copy an existing CA to\n   `\/etc\/kubernetes\/pki\/ca.{crt,key}`, and then kubeadm will use those files for signing the rest of the certs.\n   See also [using custom certificates](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-certs#custom-certificates)\n1. For the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file. 
If all other certificates and kubeconfig files\n   are already in place, kubeadm recognizes this condition and activates the ExternalCA, which also implies the `csrsigner` controller in\n   controller-manager won't be started\n1. If kubeadm is running in [external CA mode](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-certs#external-ca-mode);\n   all the certificates must be provided by the user, because kubeadm cannot generate them by itself\n1. In case kubeadm is executed in the `--dry-run` mode, certificate files are written in a temporary folder\n1. Certificate generation can be invoked individually with the\n   [`kubeadm init phase certs all`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/#cmd-phase-certs) command\n\n### Generate kubeconfig files for control plane components\n\nKubeadm generates kubeconfig files with identities for control plane components:\n\n- A kubeconfig file for the kubelet to use during TLS bootstrap -\n  \/etc\/kubernetes\/bootstrap-kubelet.conf. Inside this file, there is a bootstrap-token or embedded\n  client certificates for authenticating this node with the cluster.\n\n  This client certificate should:\n\n  - Be in the `system:nodes` organization, as required by the\n    [Node Authorization](\/docs\/reference\/access-authn-authz\/node\/) module\n  - Have the Common Name (CN) `system:node:<hostname-lowercased>`\n\n- A kubeconfig file for controller-manager, `\/etc\/kubernetes\/controller-manager.conf`; inside this\n  file is embedded a client certificate with controller-manager identity. 
This client certificate should\n  have the CN `system:kube-controller-manager`, as defined by default\n  [RBAC core components roles](\/docs\/reference\/access-authn-authz\/rbac\/#core-component-roles)\n\n- A kubeconfig file for scheduler, `\/etc\/kubernetes\/scheduler.conf`; inside this file is embedded\n  a client certificate with scheduler identity.\n  This client certificate should have the CN `system:kube-scheduler`, as defined by default\n  [RBAC core components roles](\/docs\/reference\/access-authn-authz\/rbac\/#core-component-roles)\n\nAdditionally, a kubeconfig file for kubeadm as an administrative entity is generated and stored\nin `\/etc\/kubernetes\/admin.conf`. This file includes a certificate with\n`Subject: O = kubeadm:cluster-admins, CN = kubernetes-admin`. `kubeadm:cluster-admins`\nis a group managed by kubeadm. It is bound to the `cluster-admin` ClusterRole during `kubeadm init`,\nby using the `super-admin.conf` file, which does not require RBAC.\nThis `admin.conf` file must remain on control plane nodes and should not be shared with additional users.\n\nDuring `kubeadm init` another kubeconfig file is generated and stored in `\/etc\/kubernetes\/super-admin.conf`.\nThis file includes a certificate with `Subject: O = system:masters, CN = kubernetes-super-admin`.\n`system:masters` is a superuser group that bypasses RBAC and makes `super-admin.conf` useful in case\nof an emergency where a cluster is locked due to RBAC misconfiguration.\nThe `super-admin.conf` file must be stored in a safe location and should not be shared with additional users.\n\nSee [RBAC user facing role bindings](\/docs\/reference\/access-authn-authz\/rbac\/#user-facing-roles)\nfor additional information on RBAC and built-in ClusterRoles and groups.\n\nPlease note that:\n\n1. `ca.crt` certificate is embedded in all the kubeconfig files.\n1. 
If a given kubeconfig file exists, and its content is evaluated as compliant with the above specs,\n   the existing file will be used and the generation phase for the given kubeconfig will be skipped\n1. If kubeadm is running in [ExternalCA mode](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init\/#external-ca-mode),\n   all the required kubeconfig must be provided by the user as well, because kubeadm cannot\n   generate any of them by itself\n1. In case kubeadm is executed in the `--dry-run` mode, kubeconfig files are written in a temporary folder\n1. Generation of kubeconfig files can be invoked individually with the\n   [`kubeadm init phase kubeconfig all`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/#cmd-phase-kubeconfig) command\n\n### Generate static Pod manifests for control plane components\n\nKubeadm writes static Pod manifest files for control plane components to\n`\/etc\/kubernetes\/manifests`. The kubelet watches this directory for Pods to be created on startup.\n\nStatic Pod manifests share a set of common properties:\n\n- All static Pods are deployed on `kube-system` namespace\n- All static Pods get `tier:control-plane` and `component:{component-name}` labels\n- All static Pods use the `system-node-critical` priority class\n- `hostNetwork: true` is set on all static Pods to allow control plane startup before a network is\n  configured; as a consequence:\n\n  * The `address` that the controller-manager and the scheduler use to refer to the API server is `127.0.0.1`\n  * If the etcd server is set up locally, the `etcd-server` address will be set to `127.0.0.1:2379`\n\n- Leader election is enabled for both the controller-manager and the scheduler\n- Controller-manager and the scheduler will reference kubeconfig files with their respective, unique identities\n- All static Pods get any extra flags specified by the user as described in\n  [passing custom arguments to control plane 
components](\/docs\/setup\/production-environment\/tools\/kubeadm\/control-plane-flags\/)\n- All static Pods get any extra Volumes specified by the user (Host path)\n\nPlease note that:\n\n1. All images will be pulled from registry.k8s.io by default.\n   See [using custom images](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init\/#custom-images)\n   for customizing the image repository\n1. In case kubeadm is executed in the `--dry-run` mode, static Pod files are written in a\n   temporary folder\n1. Static Pod manifest generation for control plane components can be invoked individually with\n   the [`kubeadm init phase control-plane all`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/#cmd-phase-control-plane) command\n\n#### API server\n\nThe static Pod manifest for the API server is affected by the following parameters provided by the users:\n\n- The `apiserver-advertise-address` and `apiserver-bind-port` to bind to; if not provided, those\n  values default to the IP address of the default network interface on the machine and port 6443\n- The `service-cluster-ip-range` to use for services\n- If an external etcd server is specified, the `etcd-servers` address and related TLS settings\n  (`etcd-cafile`, `etcd-certfile`, `etcd-keyfile`);\n  if an external etcd server is not provided, a local etcd will be used (via host network)\n- If a cloud provider is specified, the corresponding `--cloud-provider` parameter is configured together\n  with the `--cloud-config` path if such file exists (this is experimental, alpha and will be\n  removed in a future version)\n\nOther API server flags that are set unconditionally are:\n\n- `--insecure-port=0` to avoid insecure connections to the api server\n- `--enable-bootstrap-token-auth=true` to enable the `BootstrapTokenAuthenticator` authentication module.\n  See [TLS Bootstrapping](\/docs\/reference\/access-authn-authz\/kubelet-tls-bootstrapping\/) for more details\n- `--allow-privileged` to `true` (required 
e.g. by kube proxy)\n- `--requestheader-client-ca-file` to `front-proxy-ca.crt`\n- `--enable-admission-plugins` to:\n\n  - [`NamespaceLifecycle`](\/docs\/reference\/access-authn-authz\/admission-controllers\/#namespacelifecycle)\n    e.g. to avoid deletion of system reserved namespaces\n  - [`LimitRanger`](\/docs\/reference\/access-authn-authz\/admission-controllers\/#limitranger)\n    and [`ResourceQuota`](\/docs\/reference\/access-authn-authz\/admission-controllers\/#resourcequota)\n    to enforce limits on namespaces\n  - [`ServiceAccount`](\/docs\/reference\/access-authn-authz\/admission-controllers\/#serviceaccount)\n    to enforce service account automation\n  - [`PersistentVolumeLabel`](\/docs\/reference\/access-authn-authz\/admission-controllers\/#persistentvolumelabel)\n    attaches region or zone labels to PersistentVolumes as defined by the cloud provider (This\n    admission controller is deprecated and will be removed in a future version.\n    It is not deployed by kubeadm by default with v1.9 onwards when not explicitly opting into\n    using `gce` or `aws` as cloud providers)\n  - [`DefaultStorageClass`](\/docs\/reference\/access-authn-authz\/admission-controllers\/#defaultstorageclass)\n    to enforce default storage class on `PersistentVolumeClaim` objects\n  - [`DefaultTolerationSeconds`](\/docs\/reference\/access-authn-authz\/admission-controllers\/#defaulttolerationseconds)\n  - [`NodeRestriction`](\/docs\/reference\/access-authn-authz\/admission-controllers\/#noderestriction)\n    to limit what a kubelet can modify (e.g. 
only pods on this node)\n\n- `--kubelet-preferred-address-types` to `InternalIP,ExternalIP,Hostname`; this makes `kubectl\n  logs` and other API server-kubelet communication work in environments where the hostnames of the\n  nodes aren't resolvable\n\n- Flags for using certificates generated in previous steps:\n\n  - `--client-ca-file` to `ca.crt`\n  - `--tls-cert-file` to `apiserver.crt`\n  - `--tls-private-key-file` to `apiserver.key`\n  - `--kubelet-client-certificate` to `apiserver-kubelet-client.crt`\n  - `--kubelet-client-key` to `apiserver-kubelet-client.key`\n  - `--service-account-key-file` to `sa.pub`\n  - `--requestheader-client-ca-file` to `front-proxy-ca.crt`\n  - `--proxy-client-cert-file` to `front-proxy-client.crt`\n  - `--proxy-client-key-file` to `front-proxy-client.key`\n\n- Other flags for securing the front proxy\n  ([API Aggregation](\/docs\/concepts\/extend-kubernetes\/api-extension\/apiserver-aggregation\/))\n  communications:\n\n  - `--requestheader-username-headers=X-Remote-User`\n  - `--requestheader-group-headers=X-Remote-Group`\n  - `--requestheader-extra-headers-prefix=X-Remote-Extra-`\n  - `--requestheader-allowed-names=front-proxy-client`\n\n#### Controller manager\n\nThe static Pod manifest for the controller manager is affected by the following parameters provided by\nthe users:\n\n- If kubeadm is invoked specifying a `--pod-network-cidr`, the subnet manager feature required for\n  some CNI network plugins is enabled by setting:\n\n  - `--allocate-node-cidrs=true`\n  - `--cluster-cidr` and `--node-cidr-mask-size` flags according to the given CIDR\n\n- If a cloud provider is specified, the corresponding `--cloud-provider` is specified together\n  with the `--cloud-config` path if such configuration file exists (this is experimental, alpha\n  and will be removed in a future version)\n\nOther flags that are set unconditionally are:\n\n- `--controllers` enabling all the default controllers plus `BootstrapSigner` and `TokenCleaner`\n  
controllers for TLS bootstrap. See [TLS Bootstrapping](\/docs\/reference\/access-authn-authz\/kubelet-tls-bootstrapping\/)\n  for more details.\n\n- `--use-service-account-credentials` to `true`\n\n- Flags for using certificates generated in previous steps:\n\n  - `--root-ca-file` to `ca.crt`\n  - `--cluster-signing-cert-file` to `ca.crt`, if External CA mode is disabled, otherwise to `\"\"`\n  - `--cluster-signing-key-file` to `ca.key`, if External CA mode is disabled, otherwise to `\"\"`\n  - `--service-account-private-key-file` to `sa.key`\n\n#### Scheduler\n\nThe static Pod manifest for the scheduler is not affected by parameters provided by the users.\n\n### Generate static Pod manifest for local etcd\n\nIf you specified an external etcd, this step will be skipped; otherwise kubeadm generates a\nstatic Pod manifest file for creating a local etcd instance running in a Pod with the following attributes:\n\n- Listen on `localhost:2379` and use `HostNetwork=true`\n- Make a `hostPath` mount out from the `dataDir` to the host's filesystem\n- Any extra flags specified by the user\n\nPlease note that:\n\n1. The etcd container image will be pulled from `registry.k8s.io` by default. See\n   [using custom images](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init\/#custom-images)\n   for customizing the image repository.\n1. If you run kubeadm in `--dry-run` mode, the etcd static Pod manifest is written\n   into a temporary folder.\n1. 
You can directly invoke static Pod manifest generation for local etcd, using the\n   [`kubeadm init phase etcd local`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/#cmd-phase-etcd)\n   command.\n\n### Wait for the control plane to come up\n\nkubeadm waits (up to 4m0s) until `localhost:6443\/healthz` (kube-apiserver liveness) returns `ok`.\nHowever, in order to detect deadlock conditions, kubeadm fails fast if `localhost:10255\/healthz`\n(kubelet liveness) or `localhost:10255\/healthz\/syncloop` (kubelet readiness) don't return `ok`\nwithin 40s and 60s respectively.\n\nkubeadm relies on the kubelet to pull the control plane images and run them properly as static Pods.\nAfter the control plane is up, kubeadm completes the tasks described in the following paragraphs.\n\n### Save the kubeadm ClusterConfiguration in a ConfigMap for later reference\n\nkubeadm saves the configuration passed to `kubeadm init` in a ConfigMap named `kubeadm-config`\nin the `kube-system` namespace.\n\nThis will ensure that kubeadm actions executed in the future (e.g. `kubeadm upgrade`) will be able to\ndetermine the actual\/current cluster state and make new decisions based on that data.\n\nPlease note that:\n\n1. Before saving the ClusterConfiguration, sensitive information like the token is stripped from the configuration\n1. 
Upload of control plane node configuration can be invoked individually with the command\n   [`kubeadm init phase upload-config`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/#cmd-phase-upload-config).\n\n### Mark the node as control-plane\n\nAs soon as the control plane is available, kubeadm executes the following actions:\n\n- Labels the node as control-plane with `node-role.kubernetes.io\/control-plane=\"\"`\n- Taints the node with `node-role.kubernetes.io\/control-plane:NoSchedule`\n\nPlease note that the mark-control-plane phase can be invoked\nindividually with the [`kubeadm init phase mark-control-plane`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/#cmd-phase-mark-control-plane) command.\n\n\n### Configure TLS-Bootstrapping for node joining\n\nKubeadm uses [Authenticating with Bootstrap Tokens](\/docs\/reference\/access-authn-authz\/bootstrap-tokens\/)\nfor joining new nodes to an existing cluster; for more details see also the\n[design proposal](https:\/\/git.k8s.io\/design-proposals-archive\/cluster-lifecycle\/bootstrap-discovery.md).\n\n`kubeadm init` ensures that everything is properly configured for this process, and this includes the\nfollowing steps as well as setting API server and controller flags as already described in\nprevious paragraphs.\n\n\nTLS bootstrapping for nodes can be configured with the command\n[`kubeadm init phase bootstrap-token`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/#cmd-phase-bootstrap-token),\nexecuting all the configuration steps described in the following paragraphs;\nalternatively, each step can be invoked individually.\n\n\n#### Create a bootstrap token\n\n`kubeadm init` creates a first bootstrap token, either generated automatically or provided by the\nuser with the `--token` flag; as documented in the bootstrap token specification, the token is\nsaved as a Secret named `bootstrap-token-<token-id>` in the `kube-system` namespace.\n\nPlease note that:\n\n1. 
The default token created by `kubeadm init` will be used to validate temporary users during the TLS\n   bootstrap process; those users will be members of the\n   `system:bootstrappers:kubeadm:default-node-token` group\n1. The token has limited validity, 24 hours by default (the interval may be changed with the `--token-ttl` flag)\n1. Additional tokens can be created with the [`kubeadm token`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-token\/)\n   command, which also provides other useful functions for token management.\n\n#### Allow joining nodes to call CSR API\n\nKubeadm ensures that users in the `system:bootstrappers:kubeadm:default-node-token` group are able to\naccess the certificate signing API.\n\nThis is implemented by creating a ClusterRoleBinding named `kubeadm:kubelet-bootstrap` between the\ngroup above and the default RBAC role `system:node-bootstrapper`.\n\n#### Set up auto approval for new bootstrap tokens\n\nKubeadm ensures that the Bootstrap Token will get its CSR request automatically approved by the\ncsrapprover controller.\n\nThis is implemented by creating a ClusterRoleBinding named `kubeadm:node-autoapprove-bootstrap`\nbetween the `system:bootstrappers:kubeadm:default-node-token` group and the default role\n`system:certificates.k8s.io:certificatesigningrequests:nodeclient`.\n\nThe role `system:certificates.k8s.io:certificatesigningrequests:nodeclient` should be created as\nwell, granting POST permission to\n`\/apis\/certificates.k8s.io\/certificatesigningrequests\/nodeclient`.\n\n#### Set up nodes certificate rotation with auto approval\n\nKubeadm ensures that certificate rotation is enabled for nodes, and that a new certificate request\nfor nodes will get its CSR request automatically approved by the csrapprover controller.\n\nThis is implemented by creating a ClusterRoleBinding named\n`kubeadm:node-autoapprove-certificate-rotation` between the `system:nodes` group and the default\nrole 
`system:certificates.k8s.io:certificatesigningrequests:selfnodeclient`.\n\n#### Create the public cluster-info ConfigMap\n\nThis phase creates the `cluster-info` ConfigMap in the `kube-public` namespace.\n\nAdditionally, it creates a Role and a RoleBinding granting access to the ConfigMap for\nunauthenticated users (i.e. users in RBAC group `system:unauthenticated`).\n\n\nThe access to the `cluster-info` ConfigMap _is not_ rate-limited. This may or may not be a\nproblem if you expose your cluster's API server to the internet; worst-case scenario here is a\nDoS attack where an attacker uses all the in-flight requests the kube-apiserver can handle to\nserve the `cluster-info` ConfigMap.\n\n\n### Install addons\n\nKubeadm installs the internal DNS server and the kube-proxy addon components via the API server.\n\n\nThis phase can be invoked individually with the command\n[`kubeadm init phase addon all`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/#cmd-phase-addon).\n\n\n#### proxy\n\nA ServiceAccount for `kube-proxy` is created in the `kube-system` namespace; then kube-proxy is\ndeployed as a DaemonSet:\n\n- The credentials (`ca.crt` and `token`) to the control plane come from the ServiceAccount\n- The location (URL) of the API server comes from a ConfigMap\n- The `kube-proxy` ServiceAccount is bound to the privileges in the `system:node-proxier` ClusterRole\n\n#### DNS\n\n- The CoreDNS service is named `kube-dns`. 
This is done to prevent any interruption\n  in service when the user is switching the cluster DNS from kube-dns to CoreDNS\n  through the `--config` method described [here](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init-phase\/#cmd-phase-addon).\n\n- A ServiceAccount for CoreDNS is created in the `kube-system` namespace.\n\n- The `coredns` ServiceAccount is bound to the privileges in the `system:coredns` ClusterRole\n\nIn Kubernetes version 1.21, support for using `kube-dns` with kubeadm was removed.\nYou can use CoreDNS with kubeadm even when the related Service is named `kube-dns`.\n\n## kubeadm join phases internal design\n\nSimilarly to `kubeadm init`, the `kubeadm join` internal workflow also consists of a sequence of\natomic work tasks to perform.\n\nThis is split into discovery (having the Node trust the Kubernetes Master) and TLS bootstrap\n(having the Kubernetes Master trust the Node).\n\nSee [Authenticating with Bootstrap Tokens](\/docs\/reference\/access-authn-authz\/bootstrap-tokens\/)\nor the corresponding [design proposal](https:\/\/git.k8s.io\/design-proposals-archive\/cluster-lifecycle\/bootstrap-discovery.md).\n\n### Preflight checks\n\n`kubeadm` executes a set of preflight checks before starting the join, with the aim of verifying\npreconditions and avoiding common cluster startup problems.\n\nPlease note that:\n\n1. `kubeadm join` preflight checks are basically a subset of `kubeadm init` preflight checks\n1. Starting from 1.24, kubeadm uses crictl to communicate with all known CRI endpoints.\n1. Starting from 1.9, kubeadm provides support for joining nodes running on Windows; in that case,\n   Linux-specific controls are skipped.\n1. In any case, the user can skip specific preflight checks (or eventually all preflight checks)\n   with the `--ignore-preflight-errors` option.\n\n### Discovery cluster-info\n\nThere are two main schemes for discovery. 
The first is to use a shared token along with the IP\naddress of the API server.\nThe second is to provide a file (that is a subset of the standard kubeconfig file).\n\n#### Shared token discovery\n\nIf `kubeadm join` is invoked with `--discovery-token`, token discovery is used; in this case the\nnode basically retrieves the cluster CA certificates from the `cluster-info` ConfigMap in the\n`kube-public` namespace.\n\nIn order to prevent \"man in the middle\" attacks, several steps are taken:\n\n- First, the CA certificate is retrieved via an insecure connection (this is possible because\n  `kubeadm init` granted users in the `system:unauthenticated` group access to `cluster-info`)\n\n- Then the CA certificate goes through the following validation steps:\n\n  - Basic validation: using the token ID against a JWT signature\n  - Pub key validation: using the provided `--discovery-token-ca-cert-hash`. This value is available\n    in the output of `kubeadm init` or can be calculated using standard tools (the hash is\n    calculated over the bytes of the Subject Public Key Info (SPKI) object as in RFC7469). 
The\n    `--discovery-token-ca-cert-hash` flag may be repeated multiple times to allow more than one public key.\n  - As an additional validation, the CA certificate is retrieved via a secure connection and then\n    compared with the CA retrieved initially\n\n\n\nPub key validation can be skipped by passing the `--discovery-token-unsafe-skip-ca-verification` flag;\nthis weakens the kubeadm security model since others can potentially impersonate the Kubernetes Master.\n\n\n#### File\/https discovery\n\nIf `kubeadm join` is invoked with `--discovery-file`, file discovery is used; this file can be a\nlocal file or downloaded via an HTTPS URL; in case of HTTPS, the host-installed CA bundle is used\nto verify the connection.\n\nWith file discovery, the cluster CA certificate is provided in the file itself; in fact, the\ndiscovery file is a kubeconfig file with only `server` and `certificate-authority-data` attributes\nset, as described in the [`kubeadm join`](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-join\/#file-or-https-based-discovery)\nreference doc; when the connection with the cluster is established, kubeadm tries to access the\n`cluster-info` ConfigMap, and if available, uses it.\n\n### TLS Bootstrap\n\nOnce the cluster info is known, the file `bootstrap-kubelet.conf` is written, thus allowing the\nkubelet to do TLS bootstrapping.\n\nThe TLS bootstrap mechanism uses the shared token to temporarily authenticate with the Kubernetes\nAPI server to submit a certificate signing request (CSR) for a locally created key pair.\n\nThe request is then automatically approved and the operation completes, saving the `ca.crt` file and the\n`kubelet.conf` file to be used by the kubelet for joining the cluster, while `bootstrap-kubelet.conf`\nis deleted.\n\n\n- The temporary authentication is validated against the token saved during the `kubeadm init`\n  process (or with additional tokens created with the `kubeadm token` command)\n- The temporary authentication resolves to a user that is a member of the\n  
`system:bootstrappers:kubeadm:default-node-token` group which was granted access to the CSR api\n  during the `kubeadm init` process\n- The automatic CSR approval is managed by the csrapprover controller, according to\n  the configuration present in the `kubeadm init` process\n
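The pub key validation step described under shared token discovery can be reproduced with standard OpenSSL tooling. The sketch below is self-contained: it generates a throwaway RSA CA instead of reading a real cluster's `ca.crt`, and the `\/tmp\/demo-ca.*` paths are made up for illustration; on a control plane node you would point the second command at `\/etc\/kubernetes\/pki\/ca.crt`.

```shell
# Sketch: compute the sha256 hash of a CA certificate's Subject Public Key
# Info (SPKI), i.e. the value accepted by --discovery-token-ca-cert-hash.
# A throwaway self-signed CA is generated so the example is self-contained.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
  -subj "/CN=demo-kubernetes-ca" -days 1 2>/dev/null

# Extract the public key, DER-encode it, and hash it (RFC 7469 style).
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:${hash}"
```

The printed `sha256:<hex>` value has the format expected by `kubeadm join --discovery-token-ca-cert-hash`.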
ConfigMap for later reference  kubeadm saves the configuration passed to  kubeadm init  in a ConfigMap named  kubeadm config  under  kube system  namespace   This will ensure that kubeadm actions executed in future  e g  kubeadm upgrade   will be able to determine the actual current cluster state and make new decisions based on that data   Please note that   1  Before saving the ClusterConfiguration  sensitive information like the token is stripped from the configuration 1  Upload of control plane node configuration can be invoked individually with the command      kubeadm init phase upload config    docs reference setup tools kubeadm kubeadm init phase  cmd phase upload config        Mark the node as control plane  As soon as the control plane is available  kubeadm executes the following actions     Labels the node as control plane with  node role kubernetes io control plane       Taints the node with  node role kubernetes io control plane NoSchedule   Please note that the phase to mark the control plane phase can be invoked individually with the   kubeadm init phase mark control plane    docs reference setup tools kubeadm kubeadm init phase  cmd phase mark control plane  command        Configure TLS Bootstrapping for node joining  Kubeadm uses  Authenticating with Bootstrap Tokens   docs reference access authn authz bootstrap tokens   for joining new nodes to an existing cluster  for more details see also  design proposal  https   git k8s io design proposals archive cluster lifecycle bootstrap discovery md     kubeadm init  ensures that everything is properly configured for this process  and this includes following steps as well as setting API server and controller flags as already described in previous paragraphs    TLS bootstrapping for nodes can be configured with the command   kubeadm init phase bootstrap token    docs reference setup tools kubeadm kubeadm init phase  cmd phase bootstrap token   executing all the configuration steps described in following 
paragraphs  alternatively  each step can be invoked individually         Create a bootstrap token   kubeadm init  creates a first bootstrap token  either generated automatically or provided by the user with the    token  flag  as documented in bootstrap token specification  token should be saved as a secret with name  bootstrap token  token id   under  kube system  namespace   Please note that   1  The default token created by  kubeadm init  will be used to validate temporary user during TLS    bootstrap process  those users will be member of    system bootstrappers kubeadm default node token  group 1  The token has a limited validity  default 24 hours  the interval may be changed with the   token ttl  flag  1  Additional tokens can be created with the   kubeadm token    docs reference setup tools kubeadm kubeadm token      command  that provide other useful functions for token management as well        Allow joining nodes to call CSR API  Kubeadm ensures that users in  system bootstrappers kubeadm default node token  group are able to access the certificate signing API   This is implemented by creating a ClusterRoleBinding named  kubeadm kubelet bootstrap  between the group above and the default RBAC role  system node bootstrapper         Set up auto approval for new bootstrap tokens  Kubeadm ensures that the Bootstrap Token will get its CSR request automatically approved by the csrapprover controller   This is implemented by creating ClusterRoleBinding named  kubeadm node autoapprove bootstrap  between the  system bootstrappers kubeadm default node token  group and the default role  system certificates k8s io certificatesigningrequests nodeclient    The role  system certificates k8s io certificatesigningrequests nodeclient  should be created as well  granting POST permission to   apis certificates k8s io certificatesigningrequests nodeclient         Set up nodes certificate rotation with auto approval  Kubeadm ensures that certificate rotation is enabled for 
nodes  and that a new certificate request for nodes will get its CSR request automatically approved by the csrapprover controller   This is implemented by creating ClusterRoleBinding named  kubeadm node autoapprove certificate rotation  between the  system nodes  group and the default role  system certificates k8s io certificatesigningrequests selfnodeclient         Create the public cluster info ConfigMap  This phase creates the  cluster info  ConfigMap in the  kube public  namespace   Additionally  it creates a Role and a RoleBinding granting access to the ConfigMap for unauthenticated users  i e  users in RBAC group  system unauthenticated      The access to the  cluster info  ConfigMap  is not  rate limited  This may or may not be a problem if you expose your cluster s API server to the internet  worst case scenario here is a DoS attack where an attacker uses all the in flight requests the kube apiserver can handle to serve the  cluster info  ConfigMap        Install addons  Kubeadm installs the internal DNS server and the kube proxy addon components via the API server    This phase can be invoked individually with the command   kubeadm init phase addon all    docs reference setup tools kubeadm kubeadm init phase  cmd phase addon          proxy  A ServiceAccount for  kube proxy  is created in the  kube system  namespace  then kube proxy is deployed as a DaemonSet     The credentials   ca crt  and  token   to the control plane come from the ServiceAccount   The location  URL  of the API server comes from a ConfigMap   The  kube proxy  ServiceAccount is bound to the privileges in the  system node proxier  ClusterRole       DNS    The CoreDNS service is named  kube dns   This is done to prevent any interruption   in service when the user is switching the cluster DNS from kube dns to CoreDNS   through the    config  method described  here   docs reference setup tools kubeadm kubeadm init phase  cmd phase addon      A ServiceAccount for CoreDNS is created in the  
kube system  namespace     The  coredns  ServiceAccount is bound to the privileges in the  system coredns  ClusterRole  In Kubernetes version 1 21  support for using  kube dns  with kubeadm was removed  You can use CoreDNS with kubeadm even when the related Service is named  kube dns       kubeadm join phases internal design  Similarly to  kubeadm init   also  kubeadm join  internal workflow consists of a sequence of atomic work tasks to perform   This is split into discovery  having the Node trust the Kubernetes Master  and TLS bootstrap  having the Kubernetes Master trust the Node    see  Authenticating with Bootstrap Tokens   docs reference access authn authz bootstrap tokens   or the corresponding  design proposal  https   git k8s io design proposals archive cluster lifecycle bootstrap discovery md        Preflight checks   kubeadm  executes a set of preflight checks before starting the join  with the aim to verify preconditions and avoid common cluster startup problems   Please note that   1   kubeadm join  preflight checks are basically a subset of  kubeadm init  preflight checks 1  Starting from 1 24  kubeadm uses crictl to communicate to all known CRI endpoints  1  Starting from 1 9  kubeadm provides support for joining nodes running on Windows  in that case     linux specific controls are skipped  1  In any case the user can skip specific preflight checks  or eventually all preflight checks     with the    ignore preflight errors  option       Discovery cluster info  There are 2 main schemes for discovery  The first is to use a shared token along with the IP address of the API server  The second is to provide a file  that is a subset of the standard kubeconfig file         Shared token discovery  If  kubeadm join  is invoked with    discovery token   token discovery is used  in this case the node basically retrieves the cluster CA certificates from the  cluster info  ConfigMap in the  kube public  namespace   In order to prevent  man in the middle  attacks 
 several steps are taken     First  the CA certificate is retrieved via insecure connection  this is possible because    kubeadm init  is granted access to  cluster info  users for  system unauthenticated      Then the CA certificate goes through following validation steps       Basic validation  using the token ID against a JWT signature     Pub key validation  using provided    discovery token ca cert hash   This value is available     in the output of  kubeadm init  or can be calculated using standard tools  the hash is     calculated over the bytes of the Subject Public Key Info  SPKI  object as in RFC7469   The        discovery token ca cert hash flag  may be repeated multiple times to allow more than one public key      As an additional validation  the CA certificate is retrieved via secure connection and then     compared with the CA retrieved initially    Pub key validation can be skipped passing    discovery token unsafe skip ca verification  flag  This weakens the kubeadm security model since others can potentially impersonate the Kubernetes Master         File https discovery  If  kubeadm join  is invoked with    discovery file   file discovery is used  this file can be a local file or downloaded via an HTTPS URL  in case of HTTPS  the host installed CA bundle is used to verify the connection   With file discovery  the cluster CA certificate is provided into the file itself  in fact  the discovery file is a kubeconfig file with only  server  and  certificate authority data  attributes set  as described in the   kubeadm join    docs reference setup tools kubeadm kubeadm join  file or https based discovery  reference doc  when the connection with the cluster is established  kubeadm tries to access the  cluster info  ConfigMap  and if available  uses it      TLS Bootstrap  Once the cluster info is known  the file  bootstrap kubelet conf  is written  thus allowing kubelet to do TLS Bootstrapping   The TLS bootstrap mechanism uses the shared token to 
temporarily authenticate with the Kubernetes API server to submit a certificate signing request  CSR  for a locally created key pair   The request is then automatically approved and the operation completes saving  ca crt  file and  kubelet conf  file to be used by the kubelet for joining the cluster  while  bootstrap kubelet conf  is deleted      The temporary authentication is validated against the token saved during the  kubeadm init    process  or with additional tokens created with  kubeadm token  command    The temporary authentication resolves to a user member of    system bootstrappers kubeadm default node token  group which was granted access to the CSR api   during the  kubeadm init  process   The automatic CSR approval is managed by the csrapprover controller  according to   the configuration present in the  kubeadm init  process "}
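The token-discovery steps above say the CA public key hash is computed over the bytes of the certificate's Subject Public Key Info (SPKI) object as in RFC 7469. That computation can be sketched with standard OpenSSL tooling; the throwaway CA generated here is purely illustrative, and on a real control-plane node you would hash `/etc/kubernetes/pki/ca.crt` instead:

```shell
# Generate a throwaway self-signed CA cert just for demonstration
# (on a real cluster, use /etc/kubernetes/pki/ca.crt instead).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
  -subj "/CN=demo-ca" -days 1 2>/dev/null

# Extract the public key, re-encode it as DER (the SPKI bytes),
# and take the SHA-256 digest, per RFC 7469.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')

echo "sha256:${hash}"
```

The resulting `sha256:<hex>` string has the shape expected by `--discovery-token-ca-cert-hash`.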
{"questions":"kubernetes reference title kubeadm init phase enables you to invoke atomic steps of the bootstrap process Hence you can let kubeadm do some of the work and you can fill in the gaps weight 90 contenttype concept if you wish to apply customization","answers":"---\ntitle: kubeadm init phase\nweight: 90\ncontent_type: concept\n---\n\n`kubeadm init phase` enables you to invoke atomic steps of the bootstrap process.\nHence, you can let kubeadm do some of the work and you can fill in the gaps\nif you wish to apply customization.\n\n`kubeadm init phase` is consistent with the [kubeadm init workflow](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init\/#init-workflow),\nand behind the scene both use the same code.\n\n## kubeadm init phase preflight {#cmd-phase-preflight}\n\nUsing this command you can execute preflight checks on a control-plane node.\n\n\n\n\n\n## kubeadm init phase kubelet-start {#cmd-phase-kubelet-start}\n\nThis phase will write the kubelet configuration file and environment file and then start the kubelet.\n\n\n\n\n\n## kubeadm init phase certs {#cmd-phase-certs}\n\nCan be used to create all required certificates by kubeadm.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n## kubeadm init phase kubeconfig {#cmd-phase-kubeconfig}\n\nYou can create all required kubeconfig files by calling the `all` subcommand or call them individually.\n\n\n\n\n\n\n\n\n\n\n\n## kubeadm init phase control-plane {#cmd-phase-control-plane}\n\nUsing this phase you can create all required static Pod files for the control plane components.\n\n\n\n\n\n\n\n\n\n\n## kubeadm init phase etcd {#cmd-phase-etcd}\n\nUse the following phase to create a local etcd instance based on a static Pod file.\n\n\n\n\n\n\n## kubeadm init phase upload-config {#cmd-phase-upload-config}\n\nYou can use this command to upload the kubeadm configuration to your cluster.\nAlternatively, you can use [kubeadm config](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-config\/).\n\n\n\n\n\n\n\n\n## kubeadm 
init phase upload-certs {#cmd-phase-upload-certs}\n\nUse the following phase to upload control-plane certificates to the cluster.\nBy default the certs and encryption key expire after two hours.\n\n\n\n\n\n## kubeadm init phase mark-control-plane {#cmd-phase-mark-control-plane}\n\nUse the following phase to label and taint the node as a control plane node.\n\n\n\n\n\n## kubeadm init phase bootstrap-token {#cmd-phase-bootstrap-token}\n\nUse the following phase to configure bootstrap tokens.\n\n\n\n\n\n## kubeadm init phase kubelet-finalize {#cmd-phase-kubelet-finalize-all}\n\nUse the following phase to update settings relevant to the kubelet after TLS\nbootstrap. You can use the `all` subcommand to run all `kubelet-finalize`\nphases.\n\n\n\n\n\n\n\n## kubeadm init phase addon {#cmd-phase-addon}\n\nYou can install all the available addons with the `all` subcommand, or\ninstall them selectively.\n\n\n\n\n\n\n\n\nFor more details on each field in the `v1beta4` configuration you can navigate to our\n[API reference pages.](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/)\n\n## \n\n* [kubeadm init](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init\/) to bootstrap a Kubernetes control-plane node\n* [kubeadm join](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-join\/) to connect a node to the cluster\n* [kubeadm reset](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-reset\/) to revert any changes made to this host by `kubeadm init` or `kubeadm join`\n* [kubeadm alpha](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-alpha\/) to try experimental functionality","site":"kubernetes reference","answers_cleaned":"    title  kubeadm init phase weight  90 content type  concept       kubeadm init phase  enables you to invoke atomic steps of the bootstrap process  Hence  you can let kubeadm do some of the work and you can fill in the gaps if you wish to apply customization    kubeadm init phase  is consistent with the  kubeadm init workflow   docs reference setup tools 
kubeadm kubeadm init  init workflow   and behind the scene both use the same code      kubeadm init phase preflight   cmd phase preflight   Using this command you can execute preflight checks on a control plane node          kubeadm init phase kubelet start   cmd phase kubelet start   This phase will write the kubelet configuration file and environment file and then start the kubelet          kubeadm init phase certs   cmd phase certs   Can be used to create all required certificates by kubeadm                      kubeadm init phase kubeconfig   cmd phase kubeconfig   You can create all required kubeconfig files by calling the  all  subcommand or call them individually                kubeadm init phase control plane   cmd phase control plane   Using this phase you can create all required static Pod files for the control plane components               kubeadm init phase etcd   cmd phase etcd   Use the following phase to create a local etcd instance based on a static Pod file           kubeadm init phase upload config   cmd phase upload config   You can use this command to upload the kubeadm configuration to your cluster  Alternatively  you can use  kubeadm config   docs reference setup tools kubeadm kubeadm config               kubeadm init phase upload certs   cmd phase upload certs   Use the following phase to upload control plane certificates to the cluster  By default the certs and encryption key expire after two hours          kubeadm init phase mark control plane   cmd phase mark control plane   Use the following phase to label and taint the node as a control plane node          kubeadm init phase bootstrap token   cmd phase bootstrap token   Use the following phase to configure bootstrap tokens          kubeadm init phase kubelet finalize   cmd phase kubelet finalize all   Use the following phase to update settings relevant to the kubelet after TLS bootstrap  You can use the  all  subcommand to run all  kubelet finalize  phases            kubeadm init phase 
addon   cmd phase addon   You can install all the available addons with the  all  subcommand  or install them selectively          For more details on each field in the  v1beta4  configuration you can navigate to our  API reference pages    docs reference config api kubeadm config v1beta4            kubeadm init   docs reference setup tools kubeadm kubeadm init   to bootstrap a Kubernetes control plane node    kubeadm join   docs reference setup tools kubeadm kubeadm join   to connect a node to the cluster    kubeadm reset   docs reference setup tools kubeadm kubeadm reset   to revert any changes made to this host by  kubeadm init  or  kubeadm join     kubeadm alpha   docs reference setup tools kubeadm kubeadm alpha   to try experimental functionality"}
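As a side note on the `bootstrap-token` phase mentioned above: kubeadm bootstrap tokens follow the documented `<token-id>.<token-secret>` shape (`[a-z0-9]{6}.[a-z0-9]{16}`). A minimal sketch of checking that shape, using the placeholder token from the kubeadm examples:

```shell
# Sketch: validate a bootstrap token against the documented
# [a-z0-9]{6}.[a-z0-9]{16} format; the sample token is illustrative.
token="abcdef.1234567890abcdef"
if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token id: ${token%%.*}"
fi
```

The token ID (the part before the dot) is the public half used to name the `bootstrap-token-<token-id>` Secret; the secret half must never be logged.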
{"questions":"kubernetes reference luxas weight 30 contenttype concept overview reviewers jbeda This command initializes a new Kubernetes node and joins it to the existing cluster title kubeadm join","answers":"---\nreviewers:\n- luxas\n- jbeda\ntitle: kubeadm join\ncontent_type: concept\nweight: 30\n---\n<!-- overview -->\nThis command initializes a new Kubernetes node and joins it to the existing cluster.\n\n<!-- body -->\n\n\n### The join workflow {#join-workflow}\n\n`kubeadm join` bootstraps a Kubernetes worker node or a control-plane node and adds it to the cluster.\nThis action consists of the following steps for worker nodes:\n\n1. kubeadm downloads necessary cluster information from the API server.\n   By default, it uses the bootstrap token and the CA key hash to verify the\n   authenticity of that data. The root CA can also be discovered directly via a\n   file or URL.\n\n1. Once the cluster information is known, kubelet can start the TLS bootstrapping\n   process.\n\n   The TLS bootstrap uses the shared token to temporarily authenticate\n   with the Kubernetes API server to submit a certificate signing request (CSR); by\n   default the control plane signs this CSR request automatically.\n\n1. Finally, kubeadm configures the local kubelet to connect to the API\n   server with the definitive identity assigned to the node.\n\nFor control-plane nodes additional steps are performed:\n\n1. Downloading certificates shared among control-plane nodes from the cluster\n  (if explicitly requested by the user).\n\n1. Generating control-plane component manifests, certificates and kubeconfig.\n\n1. Adding new local etcd member.\n\n### Using join phases with kubeadm {#join-phases}\n\nKubeadm allows you to join a node to the cluster in phases using `kubeadm join phase`.\n\nTo view the ordered list of phases and sub-phases you can call `kubeadm join --help`. 
The list will be located\nat the top of the help screen and each phase will have a description next to it.\nNote that by calling `kubeadm join` all of the phases and sub-phases will be executed in this exact order.\n\nSome phases have unique flags, so if you want to have a look at the list of available options add `--help`, for example:\n\n```shell\nkubeadm join phase kubelet-start --help\n```\n\nSimilar to the [kubeadm init phase](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init\/#init-phases)\ncommand, `kubeadm join phase` allows you to skip a list of phases using the `--skip-phases` flag.\n\nFor example:\n\n```shell\nsudo kubeadm join --skip-phases=preflight --config=config.yaml\n```\n\n\n\nAlternatively, you can use the `skipPhases` field in `JoinConfiguration`.\n\n### Discovering what cluster CA to trust\n\nThe kubeadm discovery has several options, each with security tradeoffs.\nThe right method for your environment depends on how you provision nodes and the\nsecurity expectations you have about your network and node lifecycles.\n\n#### Token-based discovery with CA pinning\n\nThis is the default mode in kubeadm. 
In this mode, kubeadm downloads\nthe cluster configuration (including root CA) and validates it using the token\nas well as validating that the root CA public key matches the provided hash and\nthat the API server certificate is valid under the root CA.\n\nThe CA key hash has the format `sha256:<hex_encoded_hash>`.\nBy default, the hash value is printed at the end of the `kubeadm init` command or\nin the output from the `kubeadm token create --print-join-command` command.\nIt is in a standard format (see [RFC7469](https:\/\/tools.ietf.org\/html\/rfc7469#section-2.4))\nand can also be calculated by 3rd party tools or provisioning systems.\nFor example, using the OpenSSL CLI:\n\n```shell\nopenssl x509 -pubkey -in \/etc\/kubernetes\/pki\/ca.crt | openssl rsa -pubin -outform der 2>\/dev\/null | openssl dgst -sha256 -hex | sed 's\/^.* \/\/'\n```\n\n**Example `kubeadm join` commands:**\n\nFor worker nodes:\n\n```shell\nkubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443\n```\n\nFor control-plane nodes:\n\n```shell\nkubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef --control-plane 1.2.3.4:6443\n```\n\nYou can also call `join` for a control-plane node with `--certificate-key` to copy certificates to this node,\nif the `kubeadm init` command was called with `--upload-certs`.\n\n**Advantages:**\n\n- Allows bootstrapping nodes to securely discover a root of trust for the\n  control-plane node even if other worker nodes or the network are compromised.\n\n- Convenient to execute manually since all of the information required fits\n  into a single `kubeadm join` command.\n\n**Disadvantages:**\n\n- The CA hash is not normally known until the control-plane node has been provisioned,\n  which can make it more difficult to build automated provisioning tools that\n  use kubeadm. 
By generating your CA beforehand, you may work around this\n  limitation.\n\n#### Token-based discovery without CA pinning\n\nThis mode relies only on the symmetric token to sign\n(HMAC-SHA256) the discovery information that establishes the root of trust for\nthe control-plane. To use this mode, the joining nodes must skip the hash validation of the\nCA public key, using `--discovery-token-unsafe-skip-ca-verification`. You should consider\nusing one of the other modes if possible.\n\n**Example `kubeadm join` command:**\n\n```shell\nkubeadm join --token abcdef.1234567890abcdef --discovery-token-unsafe-skip-ca-verification 1.2.3.4:6443\n```\n\n**Advantages:**\n\n- Still protects against many network-level attacks.\n\n- The token can be generated ahead of time and shared with the control-plane node and\n  worker nodes, which can then bootstrap in parallel without coordination. This\n  allows it to be used in many provisioning scenarios.\n\n**Disadvantages:**\n\n- If an attacker is able to steal a bootstrap token via some vulnerability,\n  they can use that token (along with network-level access) to impersonate the\n  control-plane node to other bootstrapping nodes. This may or may not be an appropriate\n  tradeoff in your environment.\n\n#### File or HTTPS-based discovery\n\nThis provides an out-of-band way to establish a root of trust between the control-plane node\nand bootstrapping nodes. Consider using this mode if you are building automated provisioning\nusing kubeadm. 
The format of the discovery file is a regular Kubernetes\n[kubeconfig](\/docs\/tasks\/access-application-cluster\/configure-access-multiple-clusters\/) file.\n\nIn case the discovery file does not contain credentials, the TLS discovery token will be used.\n\n**Example `kubeadm join` commands:**\n\n- `kubeadm join --discovery-file path\/to\/file.conf` (local file)\n\n- `kubeadm join --discovery-file https:\/\/url\/file.conf` (remote HTTPS URL)\n\n**Advantages:**\n\n- Allows bootstrapping nodes to securely discover a root of trust for the\n  control-plane node even if the network or other worker nodes are compromised.\n\n**Disadvantages:**\n\n- Requires that you have some way to carry the discovery information from\n  the control-plane node to the bootstrapping nodes. If the discovery file contains credentials\n  you must keep it secret and transfer it over a secure channel. This might be possible with your\n  cloud provider or provisioning tool.\n\n#### Use of custom kubelet credentials with `kubeadm join`\n\nTo allow `kubeadm join` to use predefined kubelet credentials and skip client TLS bootstrap\nand CSR approval for a new node:\n\n1. From a working control plane node in the cluster that has `\/etc\/kubernetes\/pki\/ca.key`\n   execute `kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf`.\n   `$NODE` must be set to the name of the new node.\n2. Modify the resulting `kubelet.conf` manually to adjust the cluster name and the server endpoint,\n   or run `kubeadm kubeconfig user --config` (it accepts `InitConfiguration`).\n\nIf your cluster does not have the `ca.key` file, you must sign the embedded certificates in\nthe `kubelet.conf` externally. For additional information, see\n[PKI certificates and requirements](\/docs\/setup\/best-practices\/certificates\/) and\n[Certificate Management with kubeadm](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-certs\/#external-ca-mode).\n\n1. 
Copy the resulting `kubelet.conf` to `\/etc\/kubernetes\/kubelet.conf` on the new node.\n2. Execute `kubeadm join` with the flag\n   `--ignore-preflight-errors=FileAvailable--etc-kubernetes-kubelet.conf` on the new node.\n\n### Securing your installation even more {#securing-more}\n\nThe defaults for kubeadm may not work for everyone. This section documents how to tighten up a kubeadm installation\nat the cost of some usability.\n\n#### Turning off auto-approval of node client certificates\n\nBy default, there is a CSR auto-approver enabled that basically approves any client certificate request\nfor a kubelet when a Bootstrap Token was used when authenticating. If you don't want the cluster to\nautomatically approve kubelet client certs, you can turn it off by executing this command:\n\n```shell\nkubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap\n```\n\nAfter that, `kubeadm join` will block until the admin has manually approved the CSR in flight:\n\n1. Using `kubectl get csr`, you can see that the original CSR is in the Pending state.\n\n   ```shell\n   kubectl get csr\n   ```\n\n   The output is similar to this:\n\n   ```\n   NAME                                                   AGE       REQUESTOR                 CONDITION\n   node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ   18s       system:bootstrap:878f07   Pending\n   ```\n\n2. `kubectl certificate approve` allows the admin to approve the CSR. This action tells a certificate signing\n   controller to issue a certificate to the requestor with the attributes requested in the CSR.\n\n   ```shell\n   kubectl certificate approve node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ\n   ```\n\n   The output is similar to this:\n\n   ```\n   certificatesigningrequest \"node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ\" approved\n   ```\n\n3. 
This would change the CSR resource to the Approved state.\n\n   ```shell\n   kubectl get csr\n   ```\n\n   The output is similar to this:\n\n   ```\n   NAME                                                   AGE       REQUESTOR                 CONDITION\n   node-csr-c69HXe7aYcqkS1bKmH4faEnHAWxn6i2bHZ2mD04jZyQ   1m        system:bootstrap:878f07   Approved,Issued\n   ```\n\nThis ensures that `kubeadm join` will only succeed once `kubectl certificate approve` has been run.\n\n#### Turning off public access to the `cluster-info` ConfigMap\n\nIn order to achieve the joining flow using the token as the only piece of validation information, a\nConfigMap with some data needed for validation of the control-plane node's identity is exposed publicly by\ndefault. While there is no private data in this ConfigMap, some users might wish to turn\nit off regardless. Doing so will disable the ability to use the `--discovery-token` flag of the\n`kubeadm join` flow. Here are the steps to do so:\n\n* Fetch the `cluster-info` file from the API Server:\n\n```shell\nkubectl -n kube-public get cm cluster-info -o jsonpath='{.data.kubeconfig}' | tee cluster-info.yaml\n```\n\nThe output is similar to this:\n\n```yaml\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    certificate-authority-data: <ca-cert>\n    server: https:\/\/<ip>:<port>\n  name: \"\"\ncontexts: []\ncurrent-context: \"\"\npreferences: {}\nusers: []\n```\n\n* Use the `cluster-info.yaml` file as an argument to `kubeadm join --discovery-file`.\n\n* Turn off public access to the `cluster-info` ConfigMap:\n\n```shell\nkubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo\n```\n\nThese commands should be run after `kubeadm init` but before `kubeadm join`.\n\n### Using kubeadm join with a configuration file {#config-file}\n\n\nThe config file is still considered beta and may change in future versions.\n\n\nIt's possible to configure `kubeadm join` with a configuration file instead of 
command\nline flags, and some more advanced features may only be available as\nconfiguration file options. This file is passed using the `--config` flag and it must\ncontain a `JoinConfiguration` structure. Mixing `--config` with other flags may not be\nallowed in some cases.\n\nThe default configuration can be printed out using the\n[kubeadm config print](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-config\/#cmd-config-print) command.\n\nIf your configuration is not using the latest version, it is **recommended** that you migrate using\nthe [kubeadm config migrate](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-config\/#cmd-config-migrate) command.\n\nFor more information on the fields and usage of the configuration you can navigate to our\n[API reference](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/).\n\n## What's next\n\n* [kubeadm init](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init\/) to bootstrap a Kubernetes control-plane node.\n* [kubeadm token](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-token\/) to manage tokens for `kubeadm join`.\n* [kubeadm reset](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-reset\/) to revert any changes made to this host by `kubeadm init` or `kubeadm join`.","site":"kubernetes reference"}
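The CA key hash used by token-based discovery with CA pinning (described above) can be reproduced outside of kubeadm. Below is a minimal, self-contained sketch, assuming `openssl` is available: it generates a throwaway self-signed CA certificate as a stand-in for `\/etc\/kubernetes\/pki\/ca.crt`, then derives the `sha256:<hex>` value that would be passed to `--discovery-token-ca-cert-hash`.

```shell
# Create a throwaway self-signed CA certificate (illustrative stand-in
# for /etc/kubernetes/pki/ca.crt on a real control-plane node).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=kubernetes" 2>/dev/null

# Hash the DER-encoded public key, using the same pipeline shown in the
# documentation: extract the public key, convert to DER, sha256 it, and
# strip everything before the hex digest.
hash=$(openssl x509 -pubkey -in ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')

# This is the value that would be passed as:
#   kubeadm join ... --discovery-token-ca-cert-hash sha256:<hash>
echo "sha256:${hash}"
```

Because the hash covers only the CA's public key, it can be computed ahead of time on any machine that has the CA certificate, which is how provisioning tools work around the "hash not known until `kubeadm init`" limitation when the CA is generated beforehand.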
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic To learn how To update the reference content please follow the to generate the reference documentation please read project guide You can file document formatting bugs against the","answers":"<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\nOutput shell completion code for the specified shell (bash or zsh)\n\n### Synopsis\n\n\n\nOutput shell completion code for the specified shell (bash or zsh).\nThe shell code must be evaluated to provide interactive\ncompletion of kubeadm commands. This can be done by sourcing it from\nthe .bash_profile.\n\nNote: this requires the bash-completion framework.\n\nTo install it on Mac use homebrew:\n    $ brew install bash-completion\nOnce installed, bash_completion must be evaluated. 
This can be done by adding the\nfollowing line to the .bash_profile\n    $ source $(brew --prefix)\/etc\/bash_completion\n\nIf bash-completion is not installed on Linux, please install the 'bash-completion' package\nvia your distribution's package manager.\n\nNote for zsh users: [1] zsh completions are only supported in versions of zsh &gt;= 5.2\n\n```\nkubeadm completion SHELL [flags]\n```\n\n### Examples\n\n```\n\n# Install bash completion on a Mac using homebrew\nbrew install bash-completion\nprintf \"\\n# Bash completion support\\nsource $(brew --prefix)\/etc\/bash_completion\\n\" >> $HOME\/.bash_profile\nsource $HOME\/.bash_profile\n\n# Load the kubeadm completion code for bash into the current shell\nsource <(kubeadm completion bash)\n\n# Write bash completion code to a file and source it from .bash_profile\nkubeadm completion bash > ~\/.kube\/kubeadm_completion.bash.inc\nprintf \"\\n# Kubeadm shell completion\\nsource '$HOME\/.kube\/kubeadm_completion.bash.inc'\\n\" >> $HOME\/.bash_profile\nsource $HOME\/.bash_profile\n\n# Load the kubeadm completion code for zsh[1] into the current shell\nsource <(kubeadm completion zsh)\n```\n\n### Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for completion<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n### Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--rootfs string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The path to the 'real' host root filesystem. 
This will cause kubeadm to chroot into the provided path.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n","site":"kubernetes reference"}
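The completion code that `kubeadm completion bash` emits ultimately boils down to a shell function registered with bash's `complete` builtin. The following is a hypothetical, heavily simplified sketch of that mechanism (the real generated code also handles flags and nested subcommand trees); the function name and word list here are illustrative, not part of the actual generated output.

```shell
# Hypothetical minimal stand-in for generated completion code: a function
# that proposes top-level subcommands, registered for the `kubeadm` command.
_kubeadm_sketch() {
  # The word currently being completed.
  local cur=${COMP_WORDS[COMP_CWORD]}
  # compgen -W filters the candidate word list by the current prefix.
  COMPREPLY=( $(compgen -W "init join upgrade token reset completion" -- "$cur") )
}
complete -F _kubeadm_sketch kubeadm
```

After sourcing such code (for example via `source <(kubeadm completion bash)`), typing `kubeadm jo<TAB>` completes to `join`, because `compgen -W` keeps only the words matching the typed prefix.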
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference title kube apiserver weight 30 autogenerated true","answers":"---\ntitle: kube-apiserver\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nThe Kubernetes API server validates and configures data\nfor the api objects which include pods, services, replicationcontrollers, and\nothers. The API Server services REST operations and provides the frontend to the\ncluster's shared state through which all other components interact.\n\n```\nkube-apiserver [flags]\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--admission-control-config-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File with admission control configuration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--advertise-address string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The IP address on which to advertise the apiserver to members of the cluster. This address must be reachable by the rest of the cluster. If blank, the --bind-address will be used. 
If --bind-address is unspecified, the host's default interface will be used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--aggregator-reject-forwarding-redirect&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Aggregator reject forwarding redirect response back to client.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-metric-labels stringToString&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: []<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The map from metric-label to value allow-list of this label. The key's format is &lt;MetricName&gt;,&lt;LabelName&gt;. The value's format is &lt;allowed_value&gt;,&lt;allowed_value&gt;...e.g. metric1,label1='v1,v2,v3', metric1,label2='v1,v2,v3' metric2,label1='v1,v2,v3'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-metric-labels-manifest string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The path to the manifest file that contains the allow-list mapping. The format of the file is the same as the flag --allow-metric-labels. Note that the flag --allow-metric-labels will override the manifest file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-privileged<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, allow privileged containers. [default=false]<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--anonymous-auth&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enables anonymous requests to the secure port of the API server. Requests that are not rejected by another authentication method are treated as anonymous requests. 
Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--api-audiences strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-batch-buffer-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10000<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The size of the buffer to store events before batching and writing. Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-batch-max-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The maximum size of a batch. Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-batch-max-wait duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-batch-throttle-burst int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-batch-throttle-enable<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Whether batching throttling is enabled. 
Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-batch-throttle-qps float<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum average number of batches per second. Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-compress<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set, the rotated log files will be compressed using gzip.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-format string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"json\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Format of saved audits. &quot;legacy&quot; indicates 1-line text format for each event. &quot;json&quot; indicates structured json format. Known formats are legacy,json.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-maxage int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-maxbackup int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The maximum number of old audit log files to retain. Setting a value of 0 will mean there's no restriction on the number of files.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-maxsize int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The maximum size in megabytes of the audit log file before it gets rotated.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-mode string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"blocking\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Strategy for sending audit events. Blocking indicates sending events should block server responses. 
Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-path string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set, all requests coming to the apiserver will be logged to this file.  '-' means standard out.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-truncate-enabled<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Whether event and batch truncating is enabled.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-truncate-max-batch-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10485760<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-truncate-max-event-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 102400<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum size of the audit event sent to the underlying backend. 
If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-log-version string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"audit.k8s.io\/v1\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>API group and version used for serializing audit events written to log.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-policy-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the file that defines the audit policy configuration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-batch-buffer-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10000<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The size of the buffer to store events before batching and writing. Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-batch-max-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 400<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The maximum size of a batch. Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-batch-max-wait duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 30s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-batch-throttle-burst int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 15<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. 
Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-batch-throttle-enable&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Whether batching throttling is enabled. Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-batch-throttle-qps float&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum average number of batches per second. Only used in batch mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-config-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a kubeconfig formatted file that defines the audit webhook configuration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-initial-backoff duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The amount of time to wait before retrying the first failed request.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-mode string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"batch\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. 
Known modes are batch, blocking, blocking-strict.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-truncate-enabled<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Whether event and batch truncating is enabled.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-truncate-max-batch-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10485760<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundred bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-truncate-max-event-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 102400<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, the request and response are removed first, and if this doesn't reduce the size enough, the event is discarded.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audit-webhook-version string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"audit.k8s.io\/v1\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>API group and version used for serializing audit events written to webhook.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-config string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File with Authentication Configuration to configure the JWT Token authenticator or the anonymous authenticator. 
Note: This feature is in Alpha since v1.29. --feature-gate=StructuredAuthenticationConfiguration=true needs to be set for enabling this feature. This feature is mutually exclusive with the oidc-* flags. To configure anonymous authenticator you need to enable --feature-gate=AnonymousAuthConfigurableEndpoints. When you configure anonymous authenticator in the authentication config you cannot use the --anonymous-auth flag.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-token-webhook-cache-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 2m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration to cache responses from the webhook token authenticator.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-token-webhook-config-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-token-webhook-version string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"v1beta1\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The API version of the authentication.k8s.io TokenReview to send to and expect from the webhook.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-config string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File with Authorization Configuration to configure the authorizer chain. Note: This feature is in Alpha since v1.29. --feature-gate=StructuredAuthorizationConfiguration=true feature flag needs to be set to true for enabling the functionality. This feature is mutually exclusive with the other --authorization-mode and --authorization-webhook-* flags.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-mode 
strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Ordered list of plug-ins to do authorization on secure port. Defaults to AlwaysAllow if --authorization-config is not used. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-policy-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File with authorization policy in json line by line format, used with --authorization-mode=ABAC, on the secure port.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-webhook-cache-authorized-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration to cache 'authorized' responses from the webhook authorizer.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-webhook-cache-unauthorized-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 30s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration to cache 'unauthorized' responses from the webhook authorizer.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-webhook-config-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File with webhook configuration in kubeconfig format, used with --authorization-mode=Webhook. 
The API server will query the remote service to determine access on the API server's secure port.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-webhook-version string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"v1beta1\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The API version of the authorization.k8s.io SubjectAccessReview to send to and expect from the webhook.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--bind-address string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 0.0.0.0<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI\/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces and IP address families will be used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cert-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"\/var\/run\/kubernetes\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The directory where the TLS certs are located. 
If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-ca-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--contention-profiling<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enable block profiling, if profiling is enabled<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cors-allowed-origins strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled. Please ensure each expression matches the entire hostname by anchoring to the start with '^' or including the '\/\/' prefix, and by anchoring to the end with '$' or including the ':' port separator suffix. 
Examples of valid expressions are '\/\/example.com(:|$)' and '^https:\/\/example.com(:|$)'<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--debug-socket-path string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Use an unprotected (no authn\/authz) unix-domain socket for profiling with the given path<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--delete-collection-workers int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Number of workers spawned for DeleteCollection call. 
These are used to speed up namespace cleanup.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-admission-plugins strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, ClusterTrustBundleAttest, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, ClusterTrustBundleAttest, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. 
The order of plugins in this flag does not matter.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-http2-serving<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, HTTP2 serving will be disabled [default=false]<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disabled-metrics strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>This flag provides an escape hatch for misbehaving metrics. You must provide the fully qualified metric name in order to disable it. Disclaimer: disabling metrics is higher in precedence than showing hidden metrics.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--egress-selector-config-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File with apiserver egress selector configuration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--emulated-version strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The versions different components emulate their capabilities (APIs, features, ...) of.<br\/>If set, the component will emulate the behavior of this version instead of the underlying binary version.<br\/>Version format could only be major.minor, for example: '--emulated-version=wardle=1.2,kube=1.31'. 
Options are:<br\/>kube=1.31..1.31 (default=1.31).<br\/>If the component is not specified, defaults to &quot;kube&quot;<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-admission-plugins strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, ClusterTrustBundleAttest, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, ClusterTrustBundleAttest, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. 
The order of plugins in this flag does not matter.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-aggregator-routing<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Turns on aggregator routing requests to endpoints IP rather than cluster IP.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-bootstrap-token-auth<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enable to allow secrets of type 'bootstrap.kubernetes.io\/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-garbage-collector&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-priority-and-fairness&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--encryption-provider-config string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The file containing configuration for encryption providers to be used for storing secrets in etcd<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--encryption-provider-config-automatic-reload<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Determines if the file set by --encryption-provider-config should be automatically reloaded if the disk contents change. 
Setting this to true disables the ability to uniquely identify distinct KMS plugins via the API server healthz endpoints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--endpoint-reconciler-type string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"lease\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Use an endpoint reconciler (master-count, lease, none). master-count is deprecated, and will be removed in a future version.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--etcd-cafile string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>SSL Certificate Authority file used to secure etcd communication.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--etcd-certfile string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>SSL certificate file used to secure etcd communication.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--etcd-compaction-interval duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The interval of compaction requests. If 0, the compaction request from apiserver is disabled.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--etcd-count-metric-poll-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Frequency of polling etcd for number of resources per type. 0 disables the metric collection.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--etcd-db-metric-poll-interval duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 30s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The interval of requests to poll etcd and update metric. 
0 disables the metric collection.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--etcd-healthcheck-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 2s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The timeout to use when checking etcd health.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--etcd-keyfile string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>SSL key file used to secure etcd communication.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--etcd-prefix string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"\/registry\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The prefix to prepend to all resource paths in etcd.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--etcd-readycheck-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 2s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The timeout to use when checking etcd readiness.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--etcd-servers strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of etcd servers to connect with (scheme:\/\/ip:port), comma separated.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--etcd-servers-overrides strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Per-resource etcd servers overrides, comma separated. The individual override format: group\/resource#servers, where servers are URLs, semicolon separated. 
Note that this applies only to resources compiled into this server binary.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--event-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1h0m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Amount of time to retain events.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--external-hostname string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs or OpenID Discovery).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--feature-gates colonSeparatedMultimapStringString<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Comma-separated list of component:key=value pairs that describe feature gates for alpha\/experimental features of different components.<br\/>If the component is not specified, defaults to &quot;kube&quot;. This flag can be repeatedly invoked. 
For example: --feature-gates 'wardle:featureA=true,wardle:featureB=false' --feature-gates 'kube:featureC=true'Options are:<br\/>kube:APIResponseCompression=true|false (BETA - default=true)<br\/>kube:APIServerIdentity=true|false (BETA - default=true)<br\/>kube:APIServerTracing=true|false (BETA - default=true)<br\/>kube:APIServingWithRoutine=true|false (ALPHA - default=false)<br\/>kube:AllAlpha=true|false (ALPHA - default=false)<br\/>kube:AllBeta=true|false (BETA - default=false)<br\/>kube:AnonymousAuthConfigurableEndpoints=true|false (ALPHA - default=false)<br\/>kube:AnyVolumeDataSource=true|false (BETA - default=true)<br\/>kube:AuthorizeNodeWithSelectors=true|false (ALPHA - default=false)<br\/>kube:AuthorizeWithSelectors=true|false (ALPHA - default=false)<br\/>kube:CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br\/>kube:CPUManagerPolicyBetaOptions=true|false (BETA - default=true)<br\/>kube:CPUManagerPolicyOptions=true|false (BETA - default=true)<br\/>kube:CRDValidationRatcheting=true|false (BETA - default=true)<br\/>kube:CSIMigrationPortworx=true|false (BETA - default=true)<br\/>kube:CSIVolumeHealth=true|false (ALPHA - default=false)<br\/>kube:CloudControllerManagerWebhook=true|false (ALPHA - default=false)<br\/>kube:ClusterTrustBundle=true|false (ALPHA - default=false)<br\/>kube:ClusterTrustBundleProjection=true|false (ALPHA - default=false)<br\/>kube:ComponentSLIs=true|false (BETA - default=true)<br\/>kube:ConcurrentWatchObjectDecode=true|false (BETA - default=false)<br\/>kube:ConsistentListFromCache=true|false (BETA - default=true)<br\/>kube:ContainerCheckpoint=true|false (BETA - default=true)<br\/>kube:ContextualLogging=true|false (BETA - default=true)<br\/>kube:CoordinatedLeaderElection=true|false (ALPHA - default=false)<br\/>kube:CronJobsScheduledAnnotation=true|false (BETA - default=true)<br\/>kube:CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)<br\/>kube:CustomCPUCFSQuotaPeriod=true|false (ALPHA - 
default=false)<br\/>kube:CustomResourceFieldSelectors=true|false (BETA - default=true)<br\/>kube:DRAControlPlaneController=true|false (ALPHA - default=false)<br\/>kube:DisableAllocatorDualWrite=true|false (ALPHA - default=false)<br\/>kube:DisableNodeKubeProxyVersion=true|false (BETA - default=true)<br\/>kube:DynamicResourceAllocation=true|false (ALPHA - default=false)<br\/>kube:EventedPLEG=true|false (ALPHA - default=false)<br\/>kube:GracefulNodeShutdown=true|false (BETA - default=true)<br\/>kube:GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)<br\/>kube:HPAScaleToZero=true|false (ALPHA - default=false)<br\/>kube:HonorPVReclaimPolicy=true|false (BETA - default=true)<br\/>kube:ImageMaximumGCAge=true|false (BETA - default=true)<br\/>kube:ImageVolume=true|false (ALPHA - default=false)<br\/>kube:InPlacePodVerticalScaling=true|false (ALPHA - default=false)<br\/>kube:InTreePluginPortworxUnregister=true|false (ALPHA - default=false)<br\/>kube:InformerResourceVersion=true|false (ALPHA - default=false)<br\/>kube:JobBackoffLimitPerIndex=true|false (BETA - default=true)<br\/>kube:JobManagedBy=true|false (ALPHA - default=false)<br\/>kube:JobPodReplacementPolicy=true|false (BETA - default=true)<br\/>kube:JobSuccessPolicy=true|false (BETA - default=true)<br\/>kube:KubeletCgroupDriverFromCRI=true|false (BETA - default=true)<br\/>kube:KubeletInUserNamespace=true|false (ALPHA - default=false)<br\/>kube:KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)<br\/>kube:KubeletPodResourcesGet=true|false (ALPHA - default=false)<br\/>kube:KubeletSeparateDiskGC=true|false (BETA - default=true)<br\/>kube:KubeletTracing=true|false (BETA - default=true)<br\/>kube:LoadBalancerIPMode=true|false (BETA - default=true)<br\/>kube:LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (BETA - default=false)<br\/>kube:LoggingAlphaOptions=true|false (ALPHA - default=false)<br\/>kube:LoggingBetaOptions=true|false (BETA - 
default=true)<br\/>kube:MatchLabelKeysInPodAffinity=true|false (BETA - default=true)<br\/>kube:MatchLabelKeysInPodTopologySpread=true|false (BETA - default=true)<br\/>kube:MaxUnavailableStatefulSet=true|false (ALPHA - default=false)<br\/>kube:MemoryManager=true|false (BETA - default=true)<br\/>kube:MemoryQoS=true|false (ALPHA - default=false)<br\/>kube:MultiCIDRServiceAllocator=true|false (BETA - default=false)<br\/>kube:MutatingAdmissionPolicy=true|false (ALPHA - default=false)<br\/>kube:NFTablesProxyMode=true|false (BETA - default=true)<br\/>kube:NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)<br\/>kube:NodeLogQuery=true|false (BETA - default=false)<br\/>kube:NodeSwap=true|false (BETA - default=true)<br\/>kube:OpenAPIEnums=true|false (BETA - default=true)<br\/>kube:PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)<br\/>kube:PodDeletionCost=true|false (BETA - default=true)<br\/>kube:PodIndexLabel=true|false (BETA - default=true)<br\/>kube:PodLifecycleSleepAction=true|false (BETA - default=true)<br\/>kube:PodReadyToStartContainersCondition=true|false (BETA - default=true)<br\/>kube:PortForwardWebsockets=true|false (BETA - default=true)<br\/>kube:ProcMountType=true|false (BETA - default=false)<br\/>kube:QOSReserved=true|false (ALPHA - default=false)<br\/>kube:RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)<br\/>kube:RecursiveReadOnlyMounts=true|false (BETA - default=true)<br\/>kube:RelaxedEnvironmentVariableValidation=true|false (ALPHA - default=false)<br\/>kube:ReloadKubeletServerCertificateFile=true|false (BETA - default=true)<br\/>kube:ResilientWatchCacheInitialization=true|false (BETA - default=true)<br\/>kube:ResourceHealthStatus=true|false (ALPHA - default=false)<br\/>kube:RetryGenerateName=true|false (BETA - default=true)<br\/>kube:RotateKubeletServerCertificate=true|false (BETA - default=true)<br\/>kube:RuntimeClassInImageCriApi=true|false (ALPHA - default=false)<br\/>kube:SELinuxMount=true|false (ALPHA - 
default=false)<br\/>kube:SELinuxMountReadWriteOncePod=true|false (BETA - default=true)<br\/>kube:SchedulerQueueingHints=true|false (BETA - default=false)<br\/>kube:SeparateCacheWatchRPC=true|false (BETA - default=true)<br\/>kube:SeparateTaintEvictionController=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenJTI=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenNodeBinding=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenNodeBindingValidation=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenPodNodeInfo=true|false (BETA - default=true)<br\/>kube:ServiceTrafficDistribution=true|false (BETA - default=true)<br\/>kube:SidecarContainers=true|false (BETA - default=true)<br\/>kube:SizeMemoryBackedVolumes=true|false (BETA - default=true)<br\/>kube:StatefulSetAutoDeletePVC=true|false (BETA - default=true)<br\/>kube:StorageNamespaceIndex=true|false (BETA - default=true)<br\/>kube:StorageVersionAPI=true|false (ALPHA - default=false)<br\/>kube:StorageVersionHash=true|false (BETA - default=true)<br\/>kube:StorageVersionMigrator=true|false (ALPHA - default=false)<br\/>kube:StrictCostEnforcementForVAP=true|false (BETA - default=false)<br\/>kube:StrictCostEnforcementForWebhooks=true|false (BETA - default=false)<br\/>kube:StructuredAuthenticationConfiguration=true|false (BETA - default=true)<br\/>kube:StructuredAuthorizationConfiguration=true|false (BETA - default=true)<br\/>kube:SupplementalGroupsPolicy=true|false (ALPHA - default=false)<br\/>kube:TopologyAwareHints=true|false (BETA - default=true)<br\/>kube:TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br\/>kube:TopologyManagerPolicyBetaOptions=true|false (BETA - default=true)<br\/>kube:TopologyManagerPolicyOptions=true|false (BETA - default=true)<br\/>kube:TranslateStreamCloseWebsocketRequests=true|false (BETA - default=true)<br\/>kube:UnauthenticatedHTTP2DOSMitigation=true|false (BETA - default=true)<br\/>kube:UnknownVersionInteroperabilityProxy=true|false 
(ALPHA - default=false)<br\/>kube:UserNamespacesPodSecurityStandards=true|false (ALPHA - default=false)<br\/>kube:UserNamespacesSupport=true|false (BETA - default=false)<br\/>kube:VolumeAttributesClass=true|false (BETA - default=false)<br\/>kube:VolumeCapacityPriority=true|false (ALPHA - default=false)<br\/>kube:WatchCacheInitializationPostStartHook=true|false (BETA - default=false)<br\/>kube:WatchFromStorageWithoutResourceVersion=true|false (BETA - default=false)<br\/>kube:WatchList=true|false (ALPHA - default=false)<br\/>kube:WatchListClient=true|false (BETA - default=false)<br\/>kube:WinDSR=true|false (ALPHA - default=false)<br\/>kube:WinOverlay=true|false (BETA - default=true)<br\/>kube:WindowsHostNetwork=true|false (ALPHA - default=true)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--goaway-chance float<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>To prevent HTTP\/2 clients from getting stuck on a single apiserver, randomly close a connection (GOAWAY). The client's other in-flight requests won't be affected, and the client will reconnect, likely landing on a different apiserver after going through the load balancer again. This argument sets the fraction of requests that will be sent a GOAWAY. Clusters with single apiservers, or which don't use a load balancer, should NOT enable this. Min is 0 (off), Max is .02 (1\/50 requests); .001 (1\/1000) is a recommended starting point.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for kube-apiserver<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--http2-max-streams-per-connection int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The limit that the server gives to clients for the maximum number of streams in an HTTP\/2 connection. 
Zero means to use golang's default.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubelet-certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubelet-client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client cert file for TLS.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubelet-client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubelet-preferred-address-types strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of the preferred NodeAddressTypes to use for kubelet connections.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubelet-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Timeout for kubelet operations.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubernetes-service-node-port int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If non-zero, the Kubernetes master service (which apiserver creates\/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--lease-reuse-duration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 60<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The time in seconds that each lease is reused. A lower value could avoid large number of objects reusing the same lease. 
Note that too small a value may cause performance problems at the storage layer.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--livez-grace-period duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>This option represents the maximum amount of time it should take for apiserver to complete its startup sequence and become live. From apiserver's start time to when this amount of time has elapsed, \/livez will assume that unfinished post-start hooks will complete successfully and therefore return true.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-flush-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum number of seconds between log flushes.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-text-info-buffer-size quantity<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>[Alpha] In text format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). Enable the LoggingAlphaOptions feature gate to use this.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-text-split-stream<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>[Alpha] In text format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. Enable the LoggingAlphaOptions feature gate to use this.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--logging-format string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"text\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Sets the log format. 
Permitted formats: &quot;text&quot;.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--max-connection-bytes-per-sec int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If non-zero, throttle each user connection to this number of bytes\/sec. Currently only applies to long-running requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--max-mutating-requests-inflight int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 200<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>This and --max-requests-inflight are summed to determine the server's total concurrency limit (which must be positive) if --enable-priority-and-fairness is true. Otherwise, this flag limits the maximum number of mutating requests in flight, or a zero value disables the limit completely.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--max-requests-inflight int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 400<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>This and --max-mutating-requests-inflight are summed to determine the server's total concurrency limit (which must be positive) if --enable-priority-and-fairness is true. Otherwise, this flag limits the maximum number of non-mutating requests in flight, or a zero value disables the limit completely.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--min-request-timeout int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1800<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. 
Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--oidc-ca-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set, the OpenID server's certificate will be verified by one of the authorities in the oidc-ca-file; otherwise the host's root CA set will be used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--oidc-client-id string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The client ID for the OpenID Connect client; it must be set if oidc-issuer-url is set.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--oidc-groups-claim string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If provided, the name of a custom OpenID Connect claim for specifying user groups. The claim value is expected to be a string or array of strings. This flag is experimental; please see the authentication documentation for further details.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--oidc-groups-prefix string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If provided, all groups will be prefixed with this value to prevent conflicts with other authentication strategies.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--oidc-issuer-url string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The URL of the OpenID issuer; only the HTTPS scheme will be accepted. If set, it will be used to verify the OIDC JSON Web Token (JWT).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--oidc-required-claim &lt;comma-separated 'key=value' pairs&gt;<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A key=value pair that describes a required claim in the ID Token. 
If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--oidc-signing-algs strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"RS256\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Comma-separated list of allowed JOSE asymmetric signing algorithms. Supported JWT 'alg' header values are: RS256, RS384, RS512, ES256, ES384, ES512, PS256, PS384, PS512. Values are defined by RFC 7518 https:\/\/tools.ietf.org\/html\/rfc7518#section-3.1.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--oidc-username-claim string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"sub\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The OpenID claim to use as the user name. Note that claims other than the default ('sub') are not guaranteed to be unique and immutable. This flag is experimental; please see the authentication documentation for further details.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--oidc-username-prefix string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If provided, all usernames will be prefixed with this value. If not provided, username claims other than 'email' are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value '-'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--peer-advertise-ip string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set and the UnknownVersionInteroperabilityProxy feature gate is enabled, this IP will be used by peer kube-apiservers to proxy requests to this kube-apiserver when the request cannot be handled by the peer due to version skew between the kube-apiservers. 
This flag is only used in clusters configured with multiple kube-apiservers for high availability.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--peer-advertise-port string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set and the UnknownVersionInteroperabilityProxy feature gate is enabled, this port will be used by peer kube-apiservers to proxy requests to this kube-apiserver when the request cannot be handled by the peer due to version skew between the kube-apiservers. This flag is only used in clusters configured with multiple kube-apiservers for high availability.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--peer-ca-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set and the UnknownVersionInteroperabilityProxy feature gate is enabled, this file will be used to verify serving certificates of peer kube-apiservers. This flag is only used in clusters configured with multiple kube-apiservers for high availability.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--permit-address-sharing<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false]<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--permit-port-sharing<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. 
[default=false]<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profiling&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enable profiling via web interface host:port\/debug\/pprof\/<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--proxy-client-cert-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins. It is expected that this cert includes a signature from the CA in the --requestheader-client-ca-file flag. That CA is published in the 'extension-apiserver-authentication' configmap in the kube-system namespace. Components receiving calls from kube-aggregator should use that CA to perform their half of the mutual TLS verification.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--proxy-client-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Private key for the client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out during a request. This includes proxying requests to a user api-server and calling out to webhook admission plugins.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>An optional field indicating the duration a handler must keep a request open before timing it out. 
This is the default request timeout for requests but may be overridden by flags such as --min-request-timeout for specific types of requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-allowed-names strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-client-ca-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-extra-headers-prefix strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of request header prefixes to inspect. X-Remote-Extra- is suggested.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-group-headers strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of request headers to inspect for groups. X-Remote-Group is suggested.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-username-headers strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of request headers to inspect for usernames. X-Remote-User is common.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--runtime-config &lt;comma-separated 'key=value' pairs&gt;<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A set of key=value pairs that enable or disable built-in APIs. 
Supported options are:<br\/>v1=true|false for the core API group<br\/>&lt;group&gt;\/&lt;version&gt;=true|false for a specific API group and version (e.g. apps\/v1=true)<br\/>api\/all=true|false controls all API versions<br\/>api\/ga=true|false controls all API versions of the form v[0-9]+<br\/>api\/beta=true|false controls all API versions of the form v[0-9]+beta[0-9]+<br\/>api\/alpha=true|false controls all API versions of the form v[0-9]+alpha[0-9]+<br\/>api\/legacy is deprecated, and will be removed in a future version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--secure-port int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 6443<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The port on which to serve HTTPS with authentication and authorization. It cannot be switched off with 0.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--service-account-extend-token-expiration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Turns on projected service account expiration extension during token generation, which helps the safe transition from the legacy token to the bound service account token feature. If this flag is enabled, admission-injected tokens will be extended up to 1 year to prevent unexpected failures during the transition, ignoring the value of service-account-max-token-expiration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--service-account-issuer strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Identifier of the service account token issuer. The issuer will assert this identifier in the &quot;iss&quot; claim of issued tokens. This value is a string or URI. If this option is not a valid URI per the OpenID Discovery 1.0 spec, the ServiceAccountIssuerDiscovery feature will remain disabled, even if the feature gate is set to true. 
It is highly recommended that this value comply with the OpenID spec: https:\/\/openid.net\/specs\/openid-connect-discovery-1_0.html. In practice, this means that service-account-issuer must be an https URL. It is also highly recommended that this URL be capable of serving OpenID discovery documents at {service-account-issuer}\/.well-known\/openid-configuration. When this flag is specified multiple times, the first is used to generate tokens and all are used to determine which issuers are accepted.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--service-account-jwks-uri string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Overrides the URI for the JSON Web Key Set in the discovery doc served at \/.well-known\/openid-configuration. This flag is useful if the discovery doc and key set are served to relying parties from a URL other than the API server's external address (as auto-detected or overridden with external-hostname).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--service-account-key-file strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File containing PEM-encoded x509 RSA or ECDSA private or public keys, used to verify ServiceAccount tokens. The specified file can contain multiple keys, and the flag can be specified multiple times with different files. If unspecified, --tls-private-key-file is used. 
Must be specified when --service-account-signing-key-file is provided<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--service-account-lookup&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, validate ServiceAccount tokens exist in etcd as part of authentication.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--service-account-max-token-expiration duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The maximum validity duration of a token created by the service account token issuer. If an otherwise valid TokenRequest with a validity duration larger than this value is requested, a token will be issued with a validity duration of this value.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--service-account-signing-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the file that contains the current private key of the service account token issuer. The issuer will sign issued ID tokens with this private key.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--service-cluster-ip-range string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes or pods. Max of two dual-stack CIDRs is allowed.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--service-node-port-range &lt;a string in the form 'N1-N2'&gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 30000-32767<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A port range to reserve for services with NodePort visibility.  This must not overlap with the ephemeral port range on nodes.  Example: '30000-32767'. 
Inclusive at both ends of the range.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-hidden-metrics-for-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is &lt;major&gt;.&lt;minor&gt;, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--shutdown-delay-duration duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Time to delay the termination. During that time the server keeps serving requests normally. The endpoints \/healthz and \/livez will return success, but \/readyz immediately returns failure. Graceful termination starts after this delay has elapsed. 
This can be used to allow a load balancer to stop sending traffic to this server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--shutdown-send-retry-after<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the HTTP server will continue listening until all in-flight non-long-running requests have been drained. During this window, all incoming requests are rejected with status code 429 and a 'Retry-After' response header; in addition, a 'Connection: close' response header is set in order to tear down the TCP connection when idle.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--shutdown-watch-termination-grace-period duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>This option, if set, represents the maximum amount of grace period the apiserver will wait for active watch request(s) to drain during the graceful server shutdown window.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-backend string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The storage backend for persistence. Options: 'etcd3' (default).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-initialization-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum amount of time to wait for storage initialization before declaring apiserver ready. Defaults to 1m.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-media-type string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"application\/vnd.kubernetes.protobuf\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. 
Supported media types: [application\/json, application\/yaml, application\/vnd.kubernetes.protobuf]<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--strict-transport-security-directives strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of directives for HSTS, comma separated. If this list is empty, then HSTS directives will not be added. Example: 'max-age=31536000,includeSubDomains,preload'<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-cert-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-cipher-suites strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Comma-separated list of cipher suites for the server. 
If omitted, the default Go cipher suites will be used.<br\/>Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256.<br\/>Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_RC4_128_SHA.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-min-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-private-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File containing the default x509 private key matching --tls-cert-file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-sni-cert-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. 
The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches take precedence over wildcard matches, and explicit domain patterns take precedence over extracted names. For multiple key\/certificate pairs, use --tls-sni-cert-key multiple times. Examples: &quot;example.crt,example.key&quot; or &quot;foo.crt,foo.key:*.foo.com,foo.com&quot;.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token-auth-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set, the file that will be used to secure the secure port of the API server via token authentication.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tracing-config-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File with apiserver tracing configuration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-v, --v int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>number for the log level verbosity<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--vmodule pattern=N,...<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>comma-separated list of pattern=N settings for file-filtered logging (only works for text log format)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--watch-cache&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enable watch caching in the apiserver<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--watch-cache-sizes strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. This option is only meaningful for resources built into the apiserver, not ones defined by CRDs or aggregated from external servers, and is only consulted if the watch-cache is enabled. 
The only meaningful size setting to supply here is zero, which means to disable watch caching for the associated resource; all non-zero values are equivalent and mean to not disable watch caching for that resource.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n
   p Whether event and batch truncating is enabled   p   td    tr    tr   td colspan  2    audit webhook truncate max batch size int nbsp  nbsp  nbsp  nbsp  nbsp Default  10485760  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Maximum size of the batch sent to the underlying backend  Actual serialized size can be several hundreds of bytes greater  If a batch exceeds this limit  it is split into several batches of smaller size   p   td    tr    tr   td colspan  2    audit webhook truncate max event size int nbsp  nbsp  nbsp  nbsp  nbsp Default  102400  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Maximum size of the audit event sent to the underlying backend  If the size of an event is greater than this number  first request and response are removed  and if this doesn t reduce the size enough  event is discarded   p   td    tr    tr   td colspan  2    audit webhook version string nbsp  nbsp  nbsp  nbsp  nbsp Default   audit k8s io v1   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p API group and version used for serializing audit events written to webhook   p   td    tr    tr   td colspan  2    authentication config string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p File with Authentication Configuration to configure the JWT Token authenticator or the anonymous authenticator  Note  This feature is in Alpha since v1 29   feature gate StructuredAuthenticationConfiguration true needs to be set for enabling this feature This feature is mutually exclusive with the oidc   flags To configure anonymous authenticator you need to enable   feature gate AnonymousAuthConfigurableEndpoints When you configure anonymous authenticator in the authentication config you cannot use the   anonymous auth flag   p   td    tr    tr   td colspan  2    authentication token webhook cache ttl duration nbsp  nbsp  nbsp  nbsp  nbsp Default  2m0s  td    tr  
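The policy file referenced by `--audit-policy-file` is an `audit.k8s.io/v1` `Policy` object. A minimal sketch follows; the specific rules shown are illustrative choices, not defaults:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Rules are evaluated in order; the first matching rule wins.
  # Record security-sensitive objects at Metadata level only, so
  # request/response bodies (e.g. secret data) are never logged.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Skip logging read-only requests entirely.
  - level: None
    verbs: ["get", "list", "watch"]
  # Log everything else with full request and response bodies.
  - level: RequestResponse
```

Because the first matching rule wins, narrow exclusions and sensitive-resource rules should precede the catch-all.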
**`--authentication-token-webhook-config-file string`**
: File with webhook configuration for token authentication in kubeconfig format. The API server will query the remote service to determine authentication for bearer tokens.

**`--authentication-token-webhook-version string`** Default: `"v1beta1"`
: The API version of the authentication.k8s.io TokenReview to send to and expect from the webhook.

**`--authorization-config string`**
: File with Authorization Configuration to configure the authorizer chain. Note: This feature is in Alpha since v1.29. The `--feature-gate=StructuredAuthorizationConfiguration=true` feature flag needs to be set to true for enabling the functionality. This feature is mutually exclusive with the other `--authorization-mode` and `--authorization-webhook-*` flags.

**`--authorization-mode strings`**
: Ordered list of plug-ins to do authorization on secure port. Defaults to AlwaysAllow if `--authorization-config` is not used. Comma-delimited list of: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node.

**`--authorization-policy-file string`**
: File with authorization policy in json line by line format, used with `--authorization-mode=ABAC`, on the secure port.

**`--authorization-webhook-cache-authorized-ttl duration`** Default: `5m0s`
: The duration to cache 'authorized' responses from the webhook authorizer.

**`--authorization-webhook-cache-unauthorized-ttl duration`** Default: `30s`
: The duration to cache 'unauthorized' responses from the webhook authorizer.

**`--authorization-webhook-config-file string`**
: File with webhook configuration in kubeconfig format, used with `--authorization-mode=Webhook`. The API server will query the remote service to determine access on the API server's secure port.

**`--authorization-webhook-version string`** Default: `"v1beta1"`
: The API version of the authorization.k8s.io SubjectAccessReview to send to and expect from the webhook.

**`--bind-address string`** Default: `0.0.0.0`
: The IP address on which to listen for the `--secure-port` port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces and IP address families will be used.

**`--cert-dir string`** Default: `"/var/run/kubernetes"`
: The directory where the TLS certs are located. If `--tls-cert-file` and `--tls-private-key-file` are provided, this flag will be ignored.

**`--client-ca-file string`**
: If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.

**`--contention-profiling`**
: Enable block profiling, if profiling is enabled.

**`--cors-allowed-origins strings`**
: List of allowed origins for CORS, comma separated. An allowed origin can be a regular expression to support subdomain matching. If this list is empty CORS will not be enabled. Please ensure each expression matches the entire hostname by anchoring to the start with '^' or including the '//' prefix, and by anchoring to the end with '$' or including the ':' port separator suffix. Examples of valid expressions are '//example.com(:|$)' and '^https://example.com(:|$)'.

**`--debug-socket-path string`**
: Use an unprotected (no authn/authz) unix-domain socket for profiling with the given path.

**`--default-not-ready-toleration-seconds int`** Default: `300`
: Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.

**`--default-unreachable-toleration-seconds int`** Default: `300`
: Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
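For `--authorization-config`, the referenced file holds an `AuthorizationConfiguration` object whose authorizer list replaces the `--authorization-mode` ordering. A minimal sketch, assuming the `apiserver.config.k8s.io/v1beta1` API (the `name` values are arbitrary labels):

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  # Evaluated in order, equivalent to --authorization-mode=Node,RBAC.
  - type: Node
    name: node
  - type: RBAC
    name: rbac
```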
**`--delete-collection-workers int`** Default: `1`
: Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup.

**`--disable-admission-plugins strings`**
: admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, ClusterTrustBundleAttest, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, ClusterTrustBundleAttest, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

**`--disable-http2-serving`**
: If true, HTTP2 serving will be disabled [default=false].

**`--disabled-metrics strings`**
: This flag provides an escape hatch for misbehaving metrics. You must provide the fully qualified metric name in order to disable it. Disclaimer: disabling metrics is higher in precedence than showing hidden metrics.

**`--egress-selector-config-file string`**
: File with apiserver egress selector configuration.

**`--emulated-version strings`**
: The versions different components emulate their capabilities (APIs, features, ...) of. If set, the component will emulate the behavior of this version instead of the underlying binary version. Version format could only be major.minor, for example: `--emulated-version=wardle=1.2,kube=1.31`. Options are: kube=1.31..1.31 (default=1.31). If the component is not specified, defaults to "kube".

**`--enable-admission-plugins strings`**
: admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, ClusterTrustBundleAttest, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, ClusterTrustBundleAttest, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.

**`--enable-aggregator-routing`**
: Turns on aggregator routing requests to endpoints IP rather than cluster IP.

**`--enable-bootstrap-token-auth`**
: Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.

**`--enable-garbage-collector`** Default: `true`
: Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager.

**`--enable-priority-and-fairness`** Default: `true`
: If true, replace the max-in-flight handler with an enhanced one that queues and dispatches with priority and fairness.

**`--encryption-provider-config string`**
: The file containing configuration for encryption providers to be used for storing secrets in etcd.
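The file passed to `--encryption-provider-config` is an `EncryptionConfiguration` object. A minimal sketch encrypting Secrets with AES-CBC (the key name is illustrative, and the secret placeholder must be replaced with real key material):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      # The first provider is used for writes; all listed providers
      # are tried in order when reading.
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      # identity allows reading pre-existing, unencrypted data.
      - identity: {}
```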
**`--encryption-provider-config-automatic-reload`**
: Determines if the file set by `--encryption-provider-config` should be automatically reloaded if the disk contents change. Setting this to true disables the ability to uniquely identify distinct KMS plugins via the API server healthz endpoints.

**`--endpoint-reconciler-type string`** Default: `"lease"`
: Use an endpoint reconciler (master-count, lease, none). master-count is deprecated, and will be removed in a future version.

**`--etcd-cafile string`**
: SSL Certificate Authority file used to secure etcd communication.

**`--etcd-certfile string`**
: SSL certification file used to secure etcd communication.

**`--etcd-compaction-interval duration`** Default: `5m0s`
: The interval of compaction requests. If 0, the compaction request from apiserver is disabled.

**`--etcd-count-metric-poll-period duration`** Default: `1m0s`
: Frequency of polling etcd for number of resources per type. 0 disables the metric collection.

**`--etcd-db-metric-poll-interval duration`** Default: `30s`
: The interval of requests to poll etcd and update metric. 0 disables the metric collection.

**`--etcd-healthcheck-timeout duration`** Default: `2s`
: The timeout to use when checking etcd health.

**`--etcd-keyfile string`**
: SSL key file used to secure etcd communication.

**`--etcd-prefix string`** Default: `"/registry"`
: The prefix to prepend to all resource paths in etcd.

**`--etcd-readycheck-timeout duration`** Default: `2s`
: The timeout to use when checking etcd readiness.

**`--etcd-servers strings`**
: List of etcd servers to connect with (scheme://ip:port), comma separated.

**`--etcd-servers-overrides strings`**
: Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated. Note that this applies only to resources compiled into this server binary.

**`--event-ttl duration`** Default: `1h0m0s`
: Amount of time to retain events.

**`--external-hostname string`**
: The hostname to use when generating externalized URLs for this master (e.g. Swagger API Docs or OpenID Discovery).
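Taken together, the etcd flags typically appear in an invocation like the following sketch (the addresses and certificate paths are placeholders, not defaults):

```shell
kube-apiserver \
  --etcd-servers=https://10.0.0.10:2379,https://10.0.0.11:2379 \
  --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key \
  --etcd-compaction-interval=5m0s
```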
**`--feature-gates colonSeparatedMultimapStringString`**
: Comma-separated list of component:key=value pairs that describe feature gates for alpha/experimental features of different components. If the component is not specified, defaults to "kube". This flag can be repeatedly invoked. For example: `--feature-gates 'wardle:featureA=true,wardle:featureB=false' --feature-gates 'kube:featureC=true'`. Options are:<br/>
kube:APIResponseCompression=true|false (BETA - default=true)<br/>
kube:APIServerIdentity=true|false (BETA - default=true)<br/>
kube:APIServerTracing=true|false (BETA - default=true)<br/>
kube:APIServingWithRoutine=true|false (ALPHA - default=false)<br/>
kube:AllAlpha=true|false (ALPHA - default=false)<br/>
kube:AllBeta=true|false (BETA - default=false)<br/>
kube:AnonymousAuthConfigurableEndpoints=true|false (ALPHA - default=false)<br/>
kube:AnyVolumeDataSource=true|false (BETA - default=true)<br/>
kube:AuthorizeNodeWithSelectors=true|false (ALPHA - default=false)<br/>
kube:AuthorizeWithSelectors=true|false (ALPHA - default=false)<br/>
kube:CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br/>
kube:CPUManagerPolicyBetaOptions=true|false (BETA - default=true)<br/>
kube:CPUManagerPolicyOptions=true|false (BETA - default=true)<br/>
kube:CRDValidationRatcheting=true|false (BETA - default=true)<br/>
kube:CSIMigrationPortworx=true|false (BETA - default=true)<br/>
kube:CSIVolumeHealth=true|false (ALPHA - default=false)<br/>
kube:CloudControllerManagerWebhook=true|false (ALPHA - default=false)<br/>
kube:ClusterTrustBundle=true|false (ALPHA - default=false)<br/>
kube:ClusterTrustBundleProjection=true|false (ALPHA - default=false)<br/>
kube:ComponentSLIs=true|false (BETA - default=true)<br/>
kube:ConcurrentWatchObjectDecode=true|false (BETA - default=false)<br/>
kube:ConsistentListFromCache=true|false (BETA - default=true)<br/>
kube:ContainerCheckpoint=true|false (BETA - default=true)<br/>
kube:ContextualLogging=true|false (BETA - default=true)<br/>
kube:CoordinatedLeaderElection=true|false (ALPHA - default=false)<br/>
kube:CronJobsScheduledAnnotation=true|false (BETA - default=true)<br/>
kube:CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)<br/>
kube:CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)<br/>
kube:CustomResourceFieldSelectors=true|false (BETA - default=true)<br/>
kube:DRAControlPlaneController=true|false (ALPHA - default=false)<br/>
kube:DisableAllocatorDualWrite=true|false (ALPHA - default=false)<br/>
kube:DisableNodeKubeProxyVersion=true|false (BETA - default=true)<br/>
kube:DynamicResourceAllocation=true|false (ALPHA - default=false)<br/>
kube:EventedPLEG=true|false (ALPHA - default=false)<br/>
kube:GracefulNodeShutdown=true|false (BETA - default=true)<br/>
kube:GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)<br/>
kube:HPAScaleToZero=true|false (ALPHA - default=false)<br/>
kube:HonorPVReclaimPolicy=true|false (BETA - default=true)<br/>
kube:ImageMaximumGCAge=true|false (BETA - default=true)<br/>
kube:ImageVolume=true|false (ALPHA - default=false)<br/>
kube:InPlacePodVerticalScaling=true|false (ALPHA - default=false)<br/>
kube:InTreePluginPortworxUnregister=true|false (ALPHA - default=false)<br/>
kube:InformerResourceVersion=true|false (ALPHA - default=false)<br/>
kube:JobBackoffLimitPerIndex=true|false (BETA - default=true)<br/>
kube:JobManagedBy=true|false (ALPHA - default=false)<br/>
kube:JobPodReplacementPolicy=true|false (BETA - default=true)<br/>
kube:JobSuccessPolicy=true|false (BETA - default=true)<br/>
kube:KubeletCgroupDriverFromCRI=true|false (BETA - default=true)<br/>
kube:KubeletInUserNamespace=true|false (ALPHA - default=false)<br/>
kube:KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)<br/>
kube:KubeletPodResourcesGet=true|false (ALPHA - default=false)<br/>
kube:KubeletSeparateDiskGC=true|false (BETA - default=true)<br/>
kube:KubeletTracing=true|false (BETA - default=true)<br/>
kube:LoadBalancerIPMode=true|false (BETA - default=true)<br/>
kube:LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (BETA - default=false)<br/>
kube:LoggingAlphaOptions=true|false (ALPHA - default=false)<br/>
kube:LoggingBetaOptions=true|false (BETA - default=true)<br/>
kube:MatchLabelKeysInPodAffinity=true|false (BETA - default=true)<br/>
kube:MatchLabelKeysInPodTopologySpread=true|false (BETA - default=true)<br/>
kube:MaxUnavailableStatefulSet=true|false (ALPHA - default=false)<br/>
kube:MemoryManager=true|false (BETA - default=true)<br/>
kube:MemoryQoS=true|false (ALPHA - default=false)<br/>
kube:MultiCIDRServiceAllocator=true|false (BETA - default=false)<br/>
kube:MutatingAdmissionPolicy=true|false (ALPHA - default=false)<br/>
kube:NFTablesProxyMode=true|false (BETA - default=true)<br/>
kube:NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)<br/>
kube:NodeLogQuery=true|false (BETA - default=false)<br/>
kube:NodeSwap=true|false (BETA - default=true)<br/>
kube:OpenAPIEnums=true|false (BETA - default=true)<br/>
kube:PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)<br/>
kube:PodDeletionCost=true|false (BETA - default=true)<br/>
kube:PodIndexLabel=true|false (BETA - default=true)<br/>
kube:PodLifecycleSleepAction=true|false (BETA - default=true)<br/>
kube:PodReadyToStartContainersCondition=true|false (BETA - default=true)<br/>
kube:PortForwardWebsockets=true|false (BETA - default=true)<br/>
kube:ProcMountType=true|false (BETA - default=false)<br/>
kube:QOSReserved=true|false (ALPHA - default=false)<br/>
kube:RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)<br/>
kube:RecursiveReadOnlyMounts=true|false (BETA - default=true)<br/>
kube:RelaxedEnvironmentVariableValidation=true|false (ALPHA - default=false)<br/>
kube:ReloadKubeletServerCertificateFile=true|false (BETA - default=true)<br/>
kube:ResilientWatchCacheInitialization=true|false (BETA - default=true)<br/>
kube:ResourceHealthStatus=true|false (ALPHA - default=false)<br/>
kube:RetryGenerateName=true|false (BETA - default=true)<br/>
kube:RotateKubeletServerCertificate=true|false (BETA - default=true)<br/>
kube:RuntimeClassInImageCriApi=true|false (ALPHA - default=false)<br/>
kube:SELinuxMount=true|false (ALPHA - default=false)<br/>
kube:SELinuxMountReadWriteOncePod=true|false (BETA - default=true)<br/>
kube:SchedulerQueueingHints=true|false (BETA - default=false)<br/>
kube:SeparateCacheWatchRPC=true|false (BETA - default=true)<br/>
kube:SeparateTaintEvictionController=true|false (BETA - default=true)<br/>
kube:ServiceAccountTokenJTI=true|false (BETA - default=true)<br/>
kube:ServiceAccountTokenNodeBinding=true|false (BETA - default=true)<br/>
kube:ServiceAccountTokenNodeBindingValidation=true|false (BETA - default=true)<br/>
kube:ServiceAccountTokenPodNodeInfo=true|false (BETA - default=true)<br/>
kube:ServiceTrafficDistribution=true|false (BETA - default=true)<br/>
kube:SidecarContainers=true|false (BETA - default=true)<br/>
kube:SizeMemoryBackedVolumes=true|false (BETA - default=true)<br/>
kube:StatefulSetAutoDeletePVC=true|false (BETA - default=true)<br/>
kube:StorageNamespaceIndex=true|false (BETA - default=true)<br/>
kube:StorageVersionAPI=true|false (ALPHA - default=false)<br/>
kube:StorageVersionHash=true|false (BETA - default=true)<br/>
kube:StorageVersionMigrator=true|false (ALPHA - default=false)<br/>
kube:StrictCostEnforcementForVAP=true|false (BETA - default=false)<br/>
kube:StrictCostEnforcementForWebhooks=true|false (BETA - default=false)<br/>
kube:StructuredAuthenticationConfiguration=true|false (BETA - default=true)<br/>
kube:StructuredAuthorizationConfiguration=true|false (BETA - default=true)<br/>
kube:SupplementalGroupsPolicy=true|false (ALPHA - default=false)<br/>
kube:TopologyAwareHints=true|false (BETA - default=true)<br/>
kube:TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br/>
kube:TopologyManagerPolicyBetaOptions=true|false (BETA - default=true)<br/>
kube:TopologyManagerPolicyOptions=true|false (BETA - default=true)<br/>
kube:TranslateStreamCloseWebsocketRequests=true|false (BETA - default=true)<br/>
kube:UnauthenticatedHTTP2DOSMitigation=true|false (BETA - default=true)<br/>
kube:UnknownVersionInteroperabilityProxy=true|false (ALPHA - default=false)<br/>
kube:UserNamespacesPodSecurityStandards=true|false (ALPHA - default=false)<br/>
kube:UserNamespacesSupport=true|false (BETA - default=false)<br/>
kube:VolumeAttributesClass=true|false (BETA - default=false)<br/>
kube:VolumeCapacityPriority=true|false (ALPHA - default=false)<br/>
kube:WatchCacheInitializationPostStartHook=true|false (BETA - default=false)<br/>
kube:WatchFromStorageWithoutResourceVersion=true|false (BETA - default=false)<br/>
kube:WatchList=true|false (ALPHA - default=false)<br/>
kube:WatchListClient=true|false (BETA - default=false)<br/>
kube:WinDSR=true|false (ALPHA - default=false)<br/>
kube:WinOverlay=true|false (BETA - default=true)<br/>
kube:WindowsHostNetwork=true|false (ALPHA - default=true)

**`--goaway-chance float`**
: To prevent HTTP/2 clients from getting stuck on a single apiserver, randomly close a connection (GOAWAY). The client's other in-flight requests won't be affected, and the client will reconnect, likely landing on a different apiserver after going through the load balancer again. This argument sets the fraction of requests that will be sent a GOAWAY. Clusters with single apiservers, or which don't use a load balancer, should NOT enable this. Min is 0 (off), Max is .02 (1/50 requests); .001 (1/1000) is a recommended starting point.

**`-h, --help`**
: help for kube-apiserver

**`--http2-max-streams-per-connection int`**
: The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
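As a usage sketch of the `--feature-gates` syntax described above (the specific gates chosen here are arbitrary examples from the list):

```shell
# "kube" is assumed when no component prefix is given; the flag may be
# repeated, and the per-invocation lists are merged.
kube-apiserver \
  --feature-gates 'kube:WatchList=true,kube:SchedulerQueueingHints=true'
```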
**`--kubelet-certificate-authority string`**
: Path to a cert file for the certificate authority.

**`--kubelet-client-certificate string`**
: Path to a client cert file for TLS.

**`--kubelet-client-key string`**
: Path to a client key file for TLS.

**`--kubelet-preferred-address-types strings`** Default: `"Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP"`
: List of the preferred NodeAddressTypes to use for kubelet connections.

**`--kubelet-timeout duration`** Default: `5s`
: Timeout for kubelet operations.

**`--kubernetes-service-node-port int`**
: If non-zero, the Kubernetes master service (which apiserver creates/maintains) will be of type NodePort, using this as the value of the port. If zero, the Kubernetes master service will be of type ClusterIP.

**`--lease-reuse-duration-seconds int`** Default: `60`
: The time in seconds that each lease is reused. A lower value could avoid large number of objects reusing the same lease. Notice that a too small value may cause performance problems at storage layer.

**`--livez-grace-period duration`**
: This option represents the maximum amount of time it should take for apiserver to complete its startup sequence and become live. From apiserver's start time to when this amount of time has elapsed, /livez will assume that unfinished post-start hooks will complete successfully and therefore return true.

**`--log-flush-frequency duration`** Default: `5s`
: Maximum number of seconds between log flushes.

**`--log-text-info-buffer-size quantity`**
: [Alpha] In text format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). Enable the LoggingAlphaOptions feature gate to use this.

**`--log-text-split-stream`**
: [Alpha] In text format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. Enable the LoggingAlphaOptions feature gate to use this.

**`--logging-format string`** Default: `"text"`
: Sets the log format. Permitted formats: "text".

**`--max-connection-bytes-per-sec int`**
  td  td style  line height  130   word wrap  break word    p If non zero  throttle each user connection to this number of bytes sec  Currently only applies to long running requests   p   td    tr    tr   td colspan  2    max mutating requests inflight int nbsp  nbsp  nbsp  nbsp  nbsp Default  200  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p This and   max requests inflight are summed to determine the server s total concurrency limit  which must be positive  if   enable priority and fairness is true  Otherwise  this flag limits the maximum number of mutating requests in flight  or a zero value disables the limit completely   p   td    tr    tr   td colspan  2    max requests inflight int nbsp  nbsp  nbsp  nbsp  nbsp Default  400  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p This and   max mutating requests inflight are summed to determine the server s total concurrency limit  which must be positive  if   enable priority and fairness is true  Otherwise  this flag limits the maximum number of non mutating requests in flight  or a zero value disables the limit completely   p   td    tr    tr   td colspan  2    min request timeout int nbsp  nbsp  nbsp  nbsp  nbsp Default  1800  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out  Currently only honored by the watch request handler  which picks a randomized value above this number as the connection timeout  to spread out load   p   td    tr    tr   td colspan  2    oidc ca file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If set  the OpenID server s certificate will be verified by one of the authorities in the oidc ca file  otherwise the host s root CA set will be used   p   td    tr    tr   td colspan  2    oidc client id string  td    tr   tr   td   td  td style 
 line height  130   word wrap  break word    p The client ID for the OpenID Connect client  must be set if oidc issuer url is set   p   td    tr    tr   td colspan  2    oidc groups claim string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If provided  the name of a custom OpenID Connect claim for specifying user groups  The claim value is expected to be a string or array of strings  This flag is experimental  please see the authentication documentation for further details   p   td    tr    tr   td colspan  2    oidc groups prefix string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If provided  all groups will be prefixed with this value to prevent conflicts with other authentication strategies   p   td    tr    tr   td colspan  2    oidc issuer url string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The URL of the OpenID issuer  only HTTPS scheme will be accepted  If set  it will be used to verify the OIDC JSON Web Token  JWT    p   td    tr    tr   td colspan  2    oidc required claim  lt comma separated  key value  pairs gt   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p A key value pair that describes a required claim in the ID Token  If set  the claim is verified to be present in the ID Token with a matching value  Repeat this flag to specify multiple claims   p   td    tr    tr   td colspan  2    oidc signing algs strings nbsp  nbsp  nbsp  nbsp  nbsp Default   RS256   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Comma separated list of allowed JOSE asymmetric signing algorithms  JWTs with a supported  alg  header values are  RS256  RS384  RS512  ES256  ES384  ES512  PS256  PS384  PS512  Values are defined by RFC 7518 https   tools ietf org html rfc7518 section 3 1   p   td    tr    tr   td colspan  2    oidc username claim string nbsp  nbsp  nbsp  nbsp  nbsp Default   sub   td    
tr   tr   td   td  td style  line height  130   word wrap  break word    p The OpenID claim to use as the user name  Note that claims other than the default   sub   is not guaranteed to be unique and immutable  This flag is experimental  please see the authentication documentation for further details   p   td    tr    tr   td colspan  2    oidc username prefix string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If provided  all usernames will be prefixed with this value  If not provided  username claims other than  email  are prefixed by the issuer URL to avoid clashes  To skip any prefixing  provide the value       p   td    tr    tr   td colspan  2    peer advertise ip string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If set and the UnknownVersionInteroperabilityProxy feature gate is enabled  this IP will be used by peer kube apiservers to proxy requests to this kube apiserver when the request cannot be handled by the peer due to version skew between the kube apiservers  This flag is only used in clusters configured with multiple kube apiservers for high availability   p   td    tr    tr   td colspan  2    peer advertise port string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If set and the UnknownVersionInteroperabilityProxy feature gate is enabled  this port will be used by peer kube apiservers to proxy requests to this kube apiserver when the request cannot be handled by the peer due to version skew between the kube apiservers  This flag is only used in clusters configured with multiple kube apiservers for high availability   p   td    tr    tr   td colspan  2    peer ca file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If set and the UnknownVersionInteroperabilityProxy feature gate is enabled  this file will be used to verify serving certificates of peer kube apiservers  This flag is only used in 
clusters configured with multiple kube apiservers for high availability   p   td    tr    tr   td colspan  2    permit address sharing  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  SO REUSEADDR will be used when binding the port  This allows binding to wildcard IPs like 0 0 0 0 and specific IPs in parallel  and it avoids waiting for the kernel to release sockets in TIME WAIT state   default false   p   td    tr    tr   td colspan  2    permit port sharing  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  SO REUSEPORT will be used when binding the port  which allows more than one instance to bind on the same address and port   default false   p   td    tr    tr   td colspan  2    profiling nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Enable profiling via web interface host port debug pprof   p   td    tr    tr   td colspan  2    proxy client cert file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Client certificate used to prove the identity of the aggregator or kube apiserver when it must call out during a request  This includes proxying requests to a user api server and calling out to webhook admission plugins  It is expected that this cert includes a signature from the CA in the   requestheader client ca file flag  That CA is published in the  extension apiserver authentication  configmap in the kube system namespace  Components receiving calls from kube aggregator should use that CA to perform their half of the mutual TLS verification   p   td    tr    tr   td colspan  2    proxy client key file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Private key for the client certificate used to prove the identity of the aggregator or kube apiserver when it must call out during a request  This includes proxying requests to 
a user api server and calling out to webhook admission plugins   p   td    tr    tr   td colspan  2    request timeout duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p An optional field indicating the duration a handler must keep a request open before timing it out  This is the default request timeout for requests but may be overridden by flags such as   min request timeout for specific types of requests   p   td    tr    tr   td colspan  2    requestheader allowed names strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p List of client certificate common names to allow to provide usernames in headers specified by   requestheader username headers  If empty  any client certificate validated by the authorities in   requestheader client ca file is allowed   p   td    tr    tr   td colspan  2    requestheader client ca file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by   requestheader username headers  WARNING  generally do not depend on authorization being already done for incoming requests   p   td    tr    tr   td colspan  2    requestheader extra headers prefix strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p List of request header prefixes to inspect  X Remote Extra  is suggested   p   td    tr    tr   td colspan  2    requestheader group headers strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p List of request headers to inspect for groups  X Remote Group is suggested   p   td    tr    tr   td colspan  2    requestheader username headers strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p List of request headers to inspect for usernames  X Remote User is 
common   p   td    tr    tr   td colspan  2    runtime config  lt comma separated  key value  pairs gt   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p A set of key value pairs that enable or disable built in APIs  Supported options are  br  v1 true false for the core API group br   lt group gt   lt version gt  true false for a specific API group and version  e g  apps v1 true  br  api all true false controls all API versions br  api ga true false controls all API versions of the form v 0 9   br  api beta true false controls all API versions of the form v 0 9  beta 0 9   br  api alpha true false controls all API versions of the form v 0 9  alpha 0 9   br  api legacy is deprecated  and will be removed in a future version  p   td    tr    tr   td colspan  2    secure port int nbsp  nbsp  nbsp  nbsp  nbsp Default  6443  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The port on which to serve HTTPS with authentication and authorization  It cannot be switched off with 0   p   td    tr    tr   td colspan  2    service account extend token expiration nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Turns on projected service account expiration extension during token generation  which helps safe transition from legacy token to bound service account token feature  If this flag is enabled  admission injected tokens would be extended up to 1 year to prevent unexpected failure during transition  ignoring value of service account max token expiration   p   td    tr    tr   td colspan  2    service account issuer strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Identifier of the service account token issuer  The issuer will assert this identifier in  quot iss quot  claim of issued tokens  This value is a string or URI  If this option is not a valid URI per the OpenID Discovery 1 0 spec  the 
ServiceAccountIssuerDiscovery feature will remain disabled  even if the feature gate is set to true  It is highly recommended that this value comply with the OpenID spec  https   openid net specs openid connect discovery 1 0 html  In practice  this means that service account issuer must be an https URL  It is also highly recommended that this URL be capable of serving OpenID discovery documents at  service account issuer   well known openid configuration  When this flag is specified multiple times  the first is used to generate tokens and all are used to determine which issuers are accepted   p   td    tr    tr   td colspan  2    service account jwks uri string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Overrides the URI for the JSON Web Key Set in the discovery doc served at   well known openid configuration  This flag is useful if the discovery docand key set are served to relying parties from a URL other than the API server s external  as auto detected or overridden with external hostname    p   td    tr    tr   td colspan  2    service account key file strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p File containing PEM encoded x509 RSA or ECDSA private or public keys  used to verify ServiceAccount tokens  The specified file can contain multiple keys  and the flag can be specified multiple times with different files  If unspecified    tls private key file is used  Must be specified when   service account signing key file is provided  p   td    tr    tr   td colspan  2    service account lookup nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  validate ServiceAccount tokens exist in etcd as part of authentication   p   td    tr    tr   td colspan  2    service account max token expiration duration  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The maximum validity 
duration of a token created by the service account token issuer  If an otherwise valid TokenRequest with a validity duration larger than this value is requested  a token will be issued with a validity duration of this value   p   td    tr    tr   td colspan  2    service account signing key file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the file that contains the current private key of the service account token issuer  The issuer will sign issued ID tokens with this private key   p   td    tr    tr   td colspan  2    service cluster ip range string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p A CIDR notation IP range from which to assign service cluster IPs  This must not overlap with any IP ranges assigned to nodes or pods  Max of two dual stack CIDRs is allowed   p   td    tr    tr   td colspan  2    service node port range  lt a string in the form  N1 N2  gt  nbsp  nbsp  nbsp  nbsp  nbsp Default  30000 32767  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p A port range to reserve for services with NodePort visibility   This must not overlap with the ephemeral port range on nodes   Example   30000 32767   Inclusive at both ends of the range   p   td    tr    tr   td colspan  2    show hidden metrics for version string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The previous version for which you want to show hidden metrics  Only the previous minor version is meaningful  other values will not be allowed  The format is  lt major gt   lt minor gt   e g    1 16   The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics  rather than being surprised when they are permanently removed in the release after that   p   td    tr    tr   td colspan  2    shutdown delay duration duration  td    tr   tr   td   td  td style  line height  130   word wrap  
break word    p Time to delay the termination  During that time the server keeps serving requests normally  The endpoints  healthz and  livez will return success  but  readyz immediately returns failure  Graceful termination starts after this delay has elapsed  This can be used to allow load balancer to stop sending traffic to this server   p   td    tr    tr   td colspan  2    shutdown send retry after  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true the HTTP Server will continue listening until all non long running request s  in flight have been drained  during this window all incoming requests will be rejected with a status code 429 and a  Retry After  response header  in addition  Connection  close  response header is set in order to tear down the TCP connection when idle   p   td    tr    tr   td colspan  2    shutdown watch termination grace period duration  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p This option  if set  represents the maximum amount of grace period the apiserver will wait for active watch request s  to drain during the graceful server shutdown window   p   td    tr    tr   td colspan  2    storage backend string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The storage backend for persistence  Options   etcd3   default    p   td    tr    tr   td colspan  2    storage initialization timeout duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Maximum amount of time to wait for storage initialization before declaring apiserver ready  Defaults to 1m   p   td    tr    tr   td colspan  2    storage media type string nbsp  nbsp  nbsp  nbsp  nbsp Default   application vnd kubernetes protobuf   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The media type to use to store objects in storage  Some resources or storage 
backends may only support a specific media type and will ignore this setting  Supported media types   application json  application yaml  application vnd kubernetes protobuf   p   td    tr    tr   td colspan  2    strict transport security directives strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p List of directives for HSTS  comma separated  If this list is empty  then HSTS directives will not be added  Example   max age 31536000 includeSubDomains preload   p   td    tr    tr   td colspan  2    tls cert file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p File containing the default x509 Certificate for HTTPS   CA cert  if any  concatenated after server cert   If HTTPS serving is enabled  and   tls cert file and   tls private key file are not provided  a self signed certificate and key are generated for the public address and saved to the directory specified by   cert dir   p   td    tr    tr   td colspan  2    tls cipher suites strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Comma separated list of cipher suites for the server  If omitted  the default Go cipher suites will be used  br  Preferred values  TLS AES 128 GCM SHA256  TLS AES 256 GCM SHA384  TLS CHACHA20 POLY1305 SHA256  TLS ECDHE ECDSA WITH AES 128 CBC SHA  TLS ECDHE ECDSA WITH AES 128 GCM SHA256  TLS ECDHE ECDSA WITH AES 256 CBC SHA  TLS ECDHE ECDSA WITH AES 256 GCM SHA384  TLS ECDHE ECDSA WITH CHACHA20 POLY1305  TLS ECDHE ECDSA WITH CHACHA20 POLY1305 SHA256  TLS ECDHE RSA WITH AES 128 CBC SHA  TLS ECDHE RSA WITH AES 128 GCM SHA256  TLS ECDHE RSA WITH AES 256 CBC SHA  TLS ECDHE RSA WITH AES 256 GCM SHA384  TLS ECDHE RSA WITH CHACHA20 POLY1305  TLS ECDHE RSA WITH CHACHA20 POLY1305 SHA256  br  Insecure values  TLS ECDHE ECDSA WITH AES 128 CBC SHA256  TLS ECDHE ECDSA WITH RC4 128 SHA  TLS ECDHE RSA WITH 3DES EDE CBC SHA  TLS ECDHE RSA WITH AES 128 CBC SHA256  TLS ECDHE RSA WITH RC4 
128 SHA  TLS RSA WITH 3DES EDE CBC SHA  TLS RSA WITH AES 128 CBC SHA  TLS RSA WITH AES 128 CBC SHA256  TLS RSA WITH AES 128 GCM SHA256  TLS RSA WITH AES 256 CBC SHA  TLS RSA WITH AES 256 GCM SHA384  TLS RSA WITH RC4 128 SHA   p   td    tr    tr   td colspan  2    tls min version string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Minimum TLS version supported  Possible values  VersionTLS10  VersionTLS11  VersionTLS12  VersionTLS13  p   td    tr    tr   td colspan  2    tls private key file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p File containing the default x509 private key matching   tls cert file   p   td    tr    tr   td colspan  2    tls sni cert key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p A pair of x509 certificate and private key file paths  optionally suffixed with a list of domain patterns which are fully qualified domain names  possibly with prefixed wildcard segments  The domain patterns also allow IP addresses  but IPs should only be used if the apiserver has visibility to the IP address requested by a client  If no domain patterns are provided  the names of the certificate are extracted  Non wildcard matches trump over wildcard matches  explicit domain patterns trump over extracted names  For multiple key certificate pairs  use the   tls sni cert key multiple times  Examples   quot example crt example key quot  or  quot foo crt foo key   foo com foo com quot    p   td    tr    tr   td colspan  2    token auth file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If set  the file that will be used to secure the secure port of the API server via token authentication   p   td    tr    tr   td colspan  2    tracing config file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p File with apiserver tracing configuration   p   td    tr    tr   td 
colspan  2   v    v int  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p number for the log level verbosity  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    vmodule pattern N      td    tr   tr   td   td  td style  line height  130   word wrap  break word    p comma separated list of pattern N settings for file filtered logging  only works for text log format   p   td    tr    tr   td colspan  2    watch cache nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Enable watch caching in the apiserver  p   td    tr    tr   td colspan  2    watch cache sizes strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Watch cache size settings for some resources  pods  nodes  etc    comma separated  The individual setting format  resource  group  size  where resource is lowercase plural  no version   group is omitted for resources of apiVersion v1  the legacy core API  and included for others  and size is a number  This option is only meaningful for resources built into the apiserver  not ones defined by CRDs or aggregated from external servers  and is only consulted if the watch cache is enabled  The only meaningful size setting to supply here is zero  which means to disable watch caching for the associated resource  all non zero values are equivalent and mean to not disable watch caching for that resource  p   td    tr     tbody    table    "}
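The record above documents the `--watch-cache-sizes` entry format: comma-separated `resource[.group]#size`, with the group omitted for the legacy core API. As an illustration only (the helper below is not part of kube-apiserver; the function name and error handling are assumptions), the format can be sketched as a small parser:

```python
def parse_watch_cache_sizes(value: str) -> dict:
    """Map (resource, group) -> size for a --watch-cache-sizes style string."""
    sizes = {}
    for entry in value.split(","):
        target, sep, size = entry.partition("#")
        if not sep:
            raise ValueError(f"missing '#<size>' in {entry!r}")
        # group is empty for apiVersion v1 (the legacy core API), e.g. "pods#0";
        # otherwise it follows the resource, e.g. "jobs.batch#0".
        resource, _, group = target.partition(".")
        sizes[(resource, group)] = int(size)
    return sizes

print(parse_watch_cache_sizes("pods#0,jobs.batch#0"))
# {('pods', ''): 0, ('jobs', 'batch'): 0}
```

Per the reference text, zero is the only meaningful size (it disables watch caching for that resource); any non-zero value leaves caching enabled.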
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kube controller manager contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kube-controller-manager\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nThe Kubernetes controller manager is a daemon that embeds\nthe core control loops shipped with Kubernetes. In applications of robotics and\nautomation, a control loop is a non-terminating loop that regulates the state of\nthe system. In Kubernetes, a controller is a control loop that watches the shared\nstate of the cluster through the apiserver and makes changes attempting to move the\ncurrent state towards the desired state. 
Examples of controllers that ship with\nKubernetes today are the replication controller, endpoints controller, namespace\ncontroller, and serviceaccounts controller.\n\n```\nkube-controller-manager [flags]\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allocate-node-cidrs<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Should CIDRs for Pods be allocated and set on the cloud provider.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-metric-labels stringToString&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: []<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The map from metric-label to value allow-list of this label. The key's format is &lt;MetricName&gt;,&lt;LabelName&gt;. The value's format is &lt;allowed_value&gt;,&lt;allowed_value&gt;...e.g. metric1,label1='v1,v2,v3', metric1,label2='v1,v2,v3' metric2,label1='v1,v2,v3'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-metric-labels-manifest string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The path to the manifest file that contains the allow-list mapping. The format of the file is the same as the flag --allow-metric-labels. Note that the flag --allow-metric-labels will override the manifest file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--attach-detach-reconcile-sync-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The reconciler sync wait time between volume attach detach. 
This duration must be larger than one second, and increasing this value from the default may allow for volumes to be mismatched with pods.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-skip-lookup<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-token-webhook-cache-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration to cache responses from the webhook token authenticator.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-tolerate-lookup-failure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-always-allow-paths strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"\/healthz,\/readyz,\/livez\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A list of HTTP paths to skip during authorization, i.e. 
these are authorized without contacting the 'core' kubernetes server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-webhook-cache-authorized-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration to cache 'authorized' responses from the webhook authorizer.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-webhook-cache-unauthorized-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration to cache 'unauthorized' responses from the webhook authorizer.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--bind-address string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 0.0.0.0<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI\/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces and IP address families will be used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cert-dir string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The directory where the TLS certs are located. 
If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cidr-allocator-type string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"RangeAllocator\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Type of CIDR allocator to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-ca-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cloud-config string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The path to the cloud provider configuration file. Empty string for no configuration file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cloud-provider string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The provider for cloud services. Empty string for no provider.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-cidr string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>CIDR Range for Pods in cluster. Requires --allocate-node-cidrs to be true<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-name string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubernetes\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The instance prefix for the cluster.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-signing-cert-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename containing a PEM-encoded X509 CA certificate used to issue cluster-scoped certificates.  
If specified, no more specific --cluster-signing-* flag may be specified.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-signing-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 8760h0m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The max length of duration signed certificates will be given.  Individual CSRs may request shorter certs by setting spec.expirationSeconds.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-signing-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename containing a PEM-encoded RSA or ECDSA private key used to sign cluster-scoped certificates.  If specified, no more specific --cluster-signing-* flag may be specified.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-signing-kube-apiserver-client-cert-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io\/kube-apiserver-client signer.  If specified, --cluster-signing-{cert,key}-file must not be set.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-signing-kube-apiserver-client-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io\/kube-apiserver-client signer.  If specified, --cluster-signing-{cert,key}-file must not be set.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-signing-kubelet-client-cert-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io\/kube-apiserver-client-kubelet signer.  
If specified, --cluster-signing-{cert,key}-file must not be set.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-signing-kubelet-client-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io\/kube-apiserver-client-kubelet signer.  If specified, --cluster-signing-{cert,key}-file must not be set.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-signing-kubelet-serving-cert-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io\/kubelet-serving signer.  If specified, --cluster-signing-{cert,key}-file must not be set.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-signing-kubelet-serving-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io\/kubelet-serving signer.  If specified, --cluster-signing-{cert,key}-file must not be set.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-signing-legacy-unknown-cert-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename containing a PEM-encoded X509 CA certificate used to issue certificates for the kubernetes.io\/legacy-unknown signer.  If specified, --cluster-signing-{cert,key}-file must not be set.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-signing-legacy-unknown-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename containing a PEM-encoded RSA or ECDSA private key used to sign certificates for the kubernetes.io\/legacy-unknown signer.  
If specified, --cluster-signing-{cert,key}-file must not be set.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-cron-job-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of cron job objects that are allowed to sync concurrently. Larger number = more responsive jobs, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-deployment-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of deployment objects that are allowed to sync concurrently. Larger number = more responsive deployments, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-endpoint-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of endpoint syncing operations that will be done concurrently. Larger number = faster endpoint updating, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-ephemeralvolume-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of ephemeral volume syncing operations that will be done concurrently. 
Larger number = faster ephemeral volume updating, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-gc-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 20<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of garbage collector workers that are allowed to sync concurrently.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-horizontal-pod-autoscaler-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of horizontal pod autoscaler objects that are allowed to sync concurrently. Larger number = more responsive horizontal pod autoscaler objects processing, but more CPU (and network) load.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-job-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of job objects that are allowed to sync concurrently. Larger number = more responsive jobs, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-namespace-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of namespace objects that are allowed to sync concurrently. Larger number = more responsive namespace termination, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-rc-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of replication controllers that are allowed to sync concurrently. 
Larger number = more responsive replica management, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-replicaset-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-resource-quota-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of resource quotas that are allowed to sync concurrently. Larger number = more responsive quota management, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-service-endpoint-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of service endpoint syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-service-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-serviceaccount-token-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of service account token objects that are allowed to sync concurrently. 
Larger number = more responsive token generation, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-statefulset-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of statefulset objects that are allowed to sync concurrently. Larger number = more responsive statefulsets, but more CPU (and network) load<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-ttl-after-finished-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of ttl-after-finished-controller workers that are allowed to sync concurrently.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--concurrent-validating-admission-policy-status-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of ValidatingAdmissionPolicyStatusController workers that are allowed to sync concurrently.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--configure-cloud-routes&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--contention-profiling<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enable block profiling, if profiling is enabled<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--controller-start-interval duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Interval between starting controller managers.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--controllers strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"*\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'.<br\/>All controllers: bootstrap-signer-controller, certificatesigningrequest-approving-controller, certificatesigningrequest-cleaner-controller, certificatesigningrequest-signing-controller, cloud-node-lifecycle-controller, clusterrole-aggregation-controller, cronjob-controller, daemonset-controller, deployment-controller, disruption-controller, endpoints-controller, endpointslice-controller, endpointslice-mirroring-controller, ephemeral-volume-controller, garbage-collector-controller, horizontal-pod-autoscaler-controller, job-controller, legacy-serviceaccount-token-cleaner-controller, namespace-controller, node-ipam-controller, node-lifecycle-controller, node-route-controller, persistentvolume-attach-detach-controller, persistentvolume-binder-controller, persistentvolume-expander-controller, persistentvolume-protection-controller, persistentvolumeclaim-protection-controller, pod-garbage-collector-controller, replicaset-controller, replicationcontroller-controller, resourceclaim-controller, resourcequota-controller, root-ca-certificate-publisher-controller, service-cidr-controller, service-lb-controller, serviceaccount-controller, serviceaccount-token-controller, statefulset-controller, storage-version-migrator-controller, storageversion-garbage-collector-controller, taint-eviction-controller, token-cleaner-controller, ttl-after-finished-controller, ttl-controller, validatingadmissionpolicy-status-controller<br\/>Disabled-by-default controllers: bootstrap-signer-controller, token-cleaner-controller<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-attach-detach-reconcile-sync<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Disable volume attach detach reconciler sync. Disabling this may cause volumes to be mismatched with pods. 
Use wisely.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-force-detach-on-timeout<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Prevent force detaching volumes based on maximum unmount time and node status. If this flag is set to true, the non-graceful node shutdown feature must be used to recover from node failure. See https:\/\/k8s.io\/docs\/storage-disable-force-detach-on-timeout\/.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-http2-serving<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, HTTP2 serving will be disabled [default=false]<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disabled-metrics strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>This flag provides an escape hatch for misbehaving metrics. You must provide the fully qualified metric name in order to disable it. Disclaimer: disabling metrics is higher in precedence than showing hidden metrics.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--emulated-version strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The versions different components emulate their capabilities (APIs, features, ...) of.<br\/>If set, the component will emulate the behavior of this version instead of the underlying binary version.<br\/>Version format could only be major.minor, for example: '--emulated-version=wardle=1.2,kube=1.31'. 
Options are:<br\/>kube=1.31..1.31 (default=1.31)<br\/>If the component is not specified, defaults to &quot;kube&quot;<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-dynamic-provisioning&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enable dynamic provisioning for environments that support it.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-garbage-collector&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-hostpath-provisioner<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features.  HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-leader-migration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Whether to enable controller leader migration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--endpoint-updates-batch-period duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of endpoint updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. 
Larger number = higher endpoint programming latency, but lower number of endpoints revision generated<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--endpointslice-updates-batch-period duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of endpoint slice updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--external-cloud-volume-plugin string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node-ipam-controller, persistentvolume-binder-controller, persistentvolume-expander-controller and attach-detach-controller to work for in tree cloud providers.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--feature-gates colonSeparatedMultimapStringString<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Comma-separated list of component:key=value pairs that describe feature gates for alpha\/experimental features of different components.<br\/>If the component is not specified, defaults to &quot;kube&quot;. This flag can be repeatedly invoked. 
For example: --feature-gates 'wardle:featureA=true,wardle:featureB=false' --feature-gates 'kube:featureC=true'<br\/>Options are:<br\/>kube:APIResponseCompression=true|false (BETA - default=true)<br\/>kube:APIServerIdentity=true|false (BETA - default=true)<br\/>kube:APIServerTracing=true|false (BETA - default=true)<br\/>kube:APIServingWithRoutine=true|false (ALPHA - default=false)<br\/>kube:AllAlpha=true|false (ALPHA - default=false)<br\/>kube:AllBeta=true|false (BETA - default=false)<br\/>kube:AnonymousAuthConfigurableEndpoints=true|false (ALPHA - default=false)<br\/>kube:AnyVolumeDataSource=true|false (BETA - default=true)<br\/>kube:AuthorizeNodeWithSelectors=true|false (ALPHA - default=false)<br\/>kube:AuthorizeWithSelectors=true|false (ALPHA - default=false)<br\/>kube:CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br\/>kube:CPUManagerPolicyBetaOptions=true|false (BETA - default=true)<br\/>kube:CPUManagerPolicyOptions=true|false (BETA - default=true)<br\/>kube:CRDValidationRatcheting=true|false (BETA - default=true)<br\/>kube:CSIMigrationPortworx=true|false (BETA - default=true)<br\/>kube:CSIVolumeHealth=true|false (ALPHA - default=false)<br\/>kube:CloudControllerManagerWebhook=true|false (ALPHA - default=false)<br\/>kube:ClusterTrustBundle=true|false (ALPHA - default=false)<br\/>kube:ClusterTrustBundleProjection=true|false (ALPHA - default=false)<br\/>kube:ComponentSLIs=true|false (BETA - default=true)<br\/>kube:ConcurrentWatchObjectDecode=true|false (BETA - default=false)<br\/>kube:ConsistentListFromCache=true|false (BETA - default=true)<br\/>kube:ContainerCheckpoint=true|false (BETA - default=true)<br\/>kube:ContextualLogging=true|false (BETA - default=true)<br\/>kube:CoordinatedLeaderElection=true|false (ALPHA - default=false)<br\/>kube:CronJobsScheduledAnnotation=true|false (BETA - default=true)<br\/>kube:CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)<br\/>kube:CustomCPUCFSQuotaPeriod=true|false (ALPHA - 
default=false)<br\/>kube:CustomResourceFieldSelectors=true|false (BETA - default=true)<br\/>kube:DRAControlPlaneController=true|false (ALPHA - default=false)<br\/>kube:DisableAllocatorDualWrite=true|false (ALPHA - default=false)<br\/>kube:DisableNodeKubeProxyVersion=true|false (BETA - default=true)<br\/>kube:DynamicResourceAllocation=true|false (ALPHA - default=false)<br\/>kube:EventedPLEG=true|false (ALPHA - default=false)<br\/>kube:GracefulNodeShutdown=true|false (BETA - default=true)<br\/>kube:GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)<br\/>kube:HPAScaleToZero=true|false (ALPHA - default=false)<br\/>kube:HonorPVReclaimPolicy=true|false (BETA - default=true)<br\/>kube:ImageMaximumGCAge=true|false (BETA - default=true)<br\/>kube:ImageVolume=true|false (ALPHA - default=false)<br\/>kube:InPlacePodVerticalScaling=true|false (ALPHA - default=false)<br\/>kube:InTreePluginPortworxUnregister=true|false (ALPHA - default=false)<br\/>kube:InformerResourceVersion=true|false (ALPHA - default=false)<br\/>kube:JobBackoffLimitPerIndex=true|false (BETA - default=true)<br\/>kube:JobManagedBy=true|false (ALPHA - default=false)<br\/>kube:JobPodReplacementPolicy=true|false (BETA - default=true)<br\/>kube:JobSuccessPolicy=true|false (BETA - default=true)<br\/>kube:KubeletCgroupDriverFromCRI=true|false (BETA - default=true)<br\/>kube:KubeletInUserNamespace=true|false (ALPHA - default=false)<br\/>kube:KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)<br\/>kube:KubeletPodResourcesGet=true|false (ALPHA - default=false)<br\/>kube:KubeletSeparateDiskGC=true|false (BETA - default=true)<br\/>kube:KubeletTracing=true|false (BETA - default=true)<br\/>kube:LoadBalancerIPMode=true|false (BETA - default=true)<br\/>kube:LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (BETA - default=false)<br\/>kube:LoggingAlphaOptions=true|false (ALPHA - default=false)<br\/>kube:LoggingBetaOptions=true|false (BETA - 
default=true)<br\/>kube:MatchLabelKeysInPodAffinity=true|false (BETA - default=true)<br\/>kube:MatchLabelKeysInPodTopologySpread=true|false (BETA - default=true)<br\/>kube:MaxUnavailableStatefulSet=true|false (ALPHA - default=false)<br\/>kube:MemoryManager=true|false (BETA - default=true)<br\/>kube:MemoryQoS=true|false (ALPHA - default=false)<br\/>kube:MultiCIDRServiceAllocator=true|false (BETA - default=false)<br\/>kube:MutatingAdmissionPolicy=true|false (ALPHA - default=false)<br\/>kube:NFTablesProxyMode=true|false (BETA - default=true)<br\/>kube:NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)<br\/>kube:NodeLogQuery=true|false (BETA - default=false)<br\/>kube:NodeSwap=true|false (BETA - default=true)<br\/>kube:OpenAPIEnums=true|false (BETA - default=true)<br\/>kube:PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)<br\/>kube:PodDeletionCost=true|false (BETA - default=true)<br\/>kube:PodIndexLabel=true|false (BETA - default=true)<br\/>kube:PodLifecycleSleepAction=true|false (BETA - default=true)<br\/>kube:PodReadyToStartContainersCondition=true|false (BETA - default=true)<br\/>kube:PortForwardWebsockets=true|false (BETA - default=true)<br\/>kube:ProcMountType=true|false (BETA - default=false)<br\/>kube:QOSReserved=true|false (ALPHA - default=false)<br\/>kube:RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)<br\/>kube:RecursiveReadOnlyMounts=true|false (BETA - default=true)<br\/>kube:RelaxedEnvironmentVariableValidation=true|false (ALPHA - default=false)<br\/>kube:ReloadKubeletServerCertificateFile=true|false (BETA - default=true)<br\/>kube:ResilientWatchCacheInitialization=true|false (BETA - default=true)<br\/>kube:ResourceHealthStatus=true|false (ALPHA - default=false)<br\/>kube:RetryGenerateName=true|false (BETA - default=true)<br\/>kube:RotateKubeletServerCertificate=true|false (BETA - default=true)<br\/>kube:RuntimeClassInImageCriApi=true|false (ALPHA - default=false)<br\/>kube:SELinuxMount=true|false (ALPHA - 
default=false)<br\/>kube:SELinuxMountReadWriteOncePod=true|false (BETA - default=true)<br\/>kube:SchedulerQueueingHints=true|false (BETA - default=false)<br\/>kube:SeparateCacheWatchRPC=true|false (BETA - default=true)<br\/>kube:SeparateTaintEvictionController=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenJTI=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenNodeBinding=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenNodeBindingValidation=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenPodNodeInfo=true|false (BETA - default=true)<br\/>kube:ServiceTrafficDistribution=true|false (BETA - default=true)<br\/>kube:SidecarContainers=true|false (BETA - default=true)<br\/>kube:SizeMemoryBackedVolumes=true|false (BETA - default=true)<br\/>kube:StatefulSetAutoDeletePVC=true|false (BETA - default=true)<br\/>kube:StorageNamespaceIndex=true|false (BETA - default=true)<br\/>kube:StorageVersionAPI=true|false (ALPHA - default=false)<br\/>kube:StorageVersionHash=true|false (BETA - default=true)<br\/>kube:StorageVersionMigrator=true|false (ALPHA - default=false)<br\/>kube:StrictCostEnforcementForVAP=true|false (BETA - default=false)<br\/>kube:StrictCostEnforcementForWebhooks=true|false (BETA - default=false)<br\/>kube:StructuredAuthenticationConfiguration=true|false (BETA - default=true)<br\/>kube:StructuredAuthorizationConfiguration=true|false (BETA - default=true)<br\/>kube:SupplementalGroupsPolicy=true|false (ALPHA - default=false)<br\/>kube:TopologyAwareHints=true|false (BETA - default=true)<br\/>kube:TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br\/>kube:TopologyManagerPolicyBetaOptions=true|false (BETA - default=true)<br\/>kube:TopologyManagerPolicyOptions=true|false (BETA - default=true)<br\/>kube:TranslateStreamCloseWebsocketRequests=true|false (BETA - default=true)<br\/>kube:UnauthenticatedHTTP2DOSMitigation=true|false (BETA - default=true)<br\/>kube:UnknownVersionInteroperabilityProxy=true|false 
(ALPHA - default=false)<br\/>kube:UserNamespacesPodSecurityStandards=true|false (ALPHA - default=false)<br\/>kube:UserNamespacesSupport=true|false (BETA - default=false)<br\/>kube:VolumeAttributesClass=true|false (BETA - default=false)<br\/>kube:VolumeCapacityPriority=true|false (ALPHA - default=false)<br\/>kube:WatchCacheInitializationPostStartHook=true|false (BETA - default=false)<br\/>kube:WatchFromStorageWithoutResourceVersion=true|false (BETA - default=false)<br\/>kube:WatchList=true|false (ALPHA - default=false)<br\/>kube:WatchListClient=true|false (BETA - default=false)<br\/>kube:WinDSR=true|false (ALPHA - default=false)<br\/>kube:WinOverlay=true|false (BETA - default=true)<br\/>kube:WindowsHostNetwork=true|false (ALPHA - default=true)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--flex-volume-plugin-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"\/usr\/libexec\/kubernetes\/kubelet-plugins\/volume\/exec\/\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Full path of the directory in which the flex volume plugin should search for additional third party volume plugins.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for kube-controller-manager<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--horizontal-pod-autoscaler-cpu-initialization-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The period after pod start when CPU samples might be skipped.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--horizontal-pod-autoscaler-downscale-stabilization duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The period for which autoscaler will look backwards and not scale down below any recommendation it made during that 
period.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--horizontal-pod-autoscaler-initial-readiness-delay duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 30s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The period after pod start during which readiness changes will be treated as initial readiness.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--horizontal-pod-autoscaler-sync-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 15s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The period for syncing the number of pods in horizontal pod autoscaler.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--horizontal-pod-autoscaler-tolerance float&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 0.1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The minimum change (from 1.0) in the desired-to-actual metrics ratio for the horizontal pod autoscaler to consider scaling.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--http2-max-streams-per-connection int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The limit that the server gives to clients for the maximum number of streams in an HTTP\/2 connection. 
Zero means to use golang's default.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kube-api-burst int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 30<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Burst to use while talking with kubernetes apiserver.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kube-api-content-type string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"application\/vnd.kubernetes.protobuf\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Content type of requests sent to apiserver.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kube-api-qps float&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 20<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>QPS to use while talking with kubernetes apiserver.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to kubeconfig file with authorization and master location information (the master location can be overridden by the master flag).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--large-cluster-size-threshold int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 50<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Number of nodes from which node-lifecycle-controller treats the cluster as large for the eviction logic purposes. --secondary-node-eviction-rate is implicitly overridden to 0 for clusters this size or smaller. Notice: If nodes reside in multiple zones, this threshold will be considered as zone node size threshold for each zone to determine node eviction rate independently.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Start a leader election client and gain leadership before executing the main loop. 
Enable this when running replicated components for high availability.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-lease-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 15s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-renew-deadline duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than the lease duration. This is only applicable if leader election is enabled.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-resource-lock string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"leases\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The type of resource object that is used for locking during leader election. 
Supported options are 'leases', 'endpointsleases' and 'configmapsleases'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-resource-name string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kube-controller-manager\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the resource object that is used for locking during leader election.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-resource-namespace string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kube-system\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The namespace of the resource object that is used for locking during leader election.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-retry-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 2s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-migration-config string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the config file for controller leader migration, or empty to use the value that reflects default configuration of the controller manager. 
The config file should be of type LeaderMigrationConfiguration, group controllermanager.config.k8s.io, version v1alpha1.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--legacy-service-account-token-clean-up-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 8760h0m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The period of time since the last usage of a legacy service account token before it can be deleted.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-flush-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum number of seconds between log flushes.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-text-info-buffer-size quantity<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>[Alpha] In text format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). Enable the LoggingAlphaOptions feature gate to use this.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-text-split-stream<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>[Alpha] In text format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. Enable the LoggingAlphaOptions feature gate to use this.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--logging-format string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"text\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Sets the log format. 
Permitted formats: &quot;text&quot;.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--master string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address of the Kubernetes API server (overrides any value in kubeconfig).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--max-endpoints-per-slice int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 100<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The maximum number of endpoints that will be added to an EndpointSlice. More endpoints per slice will result in fewer endpoint slices, but larger resources. Defaults to 100.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--min-resync-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 12h0m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--mirroring-concurrent-service-endpoint-syncs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The number of service endpoint syncing operations that will be done concurrently by the endpointslice-mirroring-controller. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--mirroring-endpointslice-updates-batch-period duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of EndpointSlice updates batching period for endpointslice-mirroring-controller. Processing of EndpointSlice changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of EndpointSlice updates. 
Larger number = higher endpoint programming latency, but a lower number of EndpointSlice revisions generated.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--mirroring-max-endpoints-per-subset int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1000<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The maximum number of endpoints that will be added to an EndpointSlice by the endpointslice-mirroring-controller. More endpoints per slice will result in fewer endpoint slices, but larger resources. Defaults to 1000.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--namespace-sync-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The period for syncing namespace life-cycle updates.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-cidr-mask-size int32<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Mask size for node cidr in cluster. Default is 24 for IPv4 and 64 for IPv6.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-cidr-mask-size-ipv4 int32<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Mask size for IPv4 node cidr in dual-stack cluster. Default is 24.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-cidr-mask-size-ipv6 int32<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Mask size for IPv6 node cidr in dual-stack cluster. Default is 64.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-eviction-rate float&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 0.1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Number of nodes per second on which pods are deleted in case of node failure when a zone is healthy (see --unhealthy-zone-threshold for definition of healthy\/unhealthy). 
Zone refers to entire cluster in non-multizone clusters.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-monitor-grace-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 40s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Amount of time which we allow a running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-monitor-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The period for syncing NodeStatus in cloud-node-lifecycle-controller.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-startup-grace-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Amount of time which we allow a starting Node to be unresponsive before marking it unhealthy.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--permit-address-sharing<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false]<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--permit-port-sharing<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. 
[default=false]<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profiling&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enable profiling via web interface host:port\/debug\/pprof\/<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pv-recycler-increment-timeout-nfs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 30<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>the increment of time added per Gi to ActiveDeadlineSeconds for an NFS scrubber pod<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pv-recycler-minimum-timeout-hostpath int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 60<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod.  This is for development and testing only and will not work in a multi-node cluster.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pv-recycler-minimum-timeout-nfs int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The minimum ActiveDeadlineSeconds to use for an NFS Recycler pod<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pv-recycler-pod-template-filepath-hostpath string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The file path to a pod definition used as a template for HostPath persistent volume recycling. 
This is for development and testing only and will not work in a multi-node cluster.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pv-recycler-pod-template-filepath-nfs string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The file path to a pod definition used as a template for NFS persistent volume recycling<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pv-recycler-timeout-increment-hostpath int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 30<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod.  This is for development and testing only and will not work in a multi-node cluster.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pvclaimbinder-sync-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 15s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The period for syncing persistent volumes and persistent volume claims<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-allowed-names strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-client-ca-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. 
WARNING: generally do not depend on authorization being already done for incoming requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-extra-headers-prefix strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"x-remote-extra-\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of request header prefixes to inspect. X-Remote-Extra- is suggested.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-group-headers strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"x-remote-group\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of request headers to inspect for groups. X-Remote-Group is suggested.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-username-headers strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"x-remote-user\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of request headers to inspect for usernames. X-Remote-User is common.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--resource-quota-sync-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The period for syncing quota usage status in the system<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--root-ca-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set, this root certificate authority will be included in service account's token secret. 
This must be a valid PEM-encoded CA bundle.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--route-reconciliation-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The period for reconciling routes created for Nodes by cloud provider.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--secondary-node-eviction-rate float&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 0.01<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy (see --unhealthy-zone-threshold for definition of healthy\/unhealthy). Zone refers to entire cluster in non-multizone clusters. This value is implicitly overridden to 0 if the cluster size is smaller than --large-cluster-size-threshold.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--secure-port int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10257<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--service-account-private-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename containing a PEM-encoded private RSA or ECDSA key used to sign service account tokens.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--service-cluster-ip-range string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>CIDR Range for Services in cluster. Requires --allocate-node-cidrs to be true<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-hidden-metrics-for-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The previous version for which you want to show hidden metrics. 
Only the previous minor version is meaningful, other values will not be allowed. The format is &lt;major&gt;.&lt;minor&gt;, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--terminated-pod-gc-threshold int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 12500<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If &lt;= 0, the terminated pod garbage collector is disabled.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-cert-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-cipher-suites strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Comma-separated list of cipher suites for the server. 
If omitted, the default Go cipher suites will be used.<br\/>Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256.<br\/>Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_RC4_128_SHA.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-min-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-private-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File containing the default x509 private key matching --tls-cert-file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-sni-cert-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. 
The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key\/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: &quot;example.crt,example.key&quot; or &quot;foo.crt,foo.key:*.foo.com,foo.com&quot;.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--unhealthy-zone-threshold float&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 0.55<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Fraction of Nodes in a zone which needs to be not Ready (minimum 3) for zone to be treated as unhealthy.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--use-service-account-credentials<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, use individual service account credentials for each controller.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-v, --v int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>number for the log level verbosity<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--vmodule pattern=N,...<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>comma-separated list of pattern=N settings for file-filtered logging (only works for text log format)<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n","site":"kubernetes reference","answers_cleaned":"
word wrap  break word    p The number of service endpoint syncing operations that will be done concurrently  Larger number   faster endpoint slice updating  but more CPU  and network  load  Defaults to 5   p   td    tr    tr   td colspan  2    concurrent service syncs int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  1  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The number of services that are allowed to sync concurrently  Larger number   more responsive service management  but more CPU  and network  load  p   td    tr    tr   td colspan  2    concurrent serviceaccount token syncs int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  5  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The number of service account token objects that are allowed to sync concurrently  Larger number   more responsive token generation  but more CPU  and network  load  p   td    tr    tr   td colspan  2    concurrent statefulset syncs int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  5  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The number of statefulset objects that are allowed to sync concurrently  Larger number   more responsive statefulsets  but more CPU  and network  load  p   td    tr    tr   td colspan  2    concurrent ttl after finished syncs int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  5  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The number of ttl after finished controller workers that are allowed to sync concurrently   p   td    tr    tr   td colspan  2    concurrent validating admission policy status syncs int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  5  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The number of ValidatingAdmissionPolicyStatusController workers that are allowed to sync concurrently   p   td    tr    tr   td colspan  2    configure cloud routes nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   
td   td  td style  line height  130   word wrap  break word    p Should CIDRs allocated by allocate node cidrs be configured on the cloud provider   p   td    tr    tr   td colspan  2    contention profiling  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Enable block profiling  if profiling is enabled  p   td    tr    tr   td colspan  2    controller start interval duration  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Interval between starting controller managers   p   td    tr    tr   td colspan  2    controllers strings nbsp  nbsp  nbsp  nbsp  nbsp Default       td    tr   tr   td   td  td style  line height  130   word wrap  break word    p A list of controllers to enable      enables all on by default controllers   foo  enables the controller named  foo     foo  disables the controller named  foo   br  All controllers  bootstrap signer controller  certificatesigningrequest approving controller  certificatesigningrequest cleaner controller  certificatesigningrequest signing controller  cloud node lifecycle controller  clusterrole aggregation controller  cronjob controller  daemonset controller  deployment controller  disruption controller  endpoints controller  endpointslice controller  endpointslice mirroring controller  ephemeral volume controller  garbage collector controller  horizontal pod autoscaler controller  job controller  legacy serviceaccount token cleaner controller  namespace controller  node ipam controller  node lifecycle controller  node route controller  persistentvolume attach detach controller  persistentvolume binder controller  persistentvolume expander controller  persistentvolume protection controller  persistentvolumeclaim protection controller  pod garbage collector controller  replicaset controller  replicationcontroller controller  resourceclaim controller  resourcequota controller  root ca certificate publisher controller  service cidr controller  service lb 
controller  serviceaccount controller  serviceaccount token controller  statefulset controller  storage version migrator controller  storageversion garbage collector controller  taint eviction controller  token cleaner controller  ttl after finished controller  ttl controller  validatingadmissionpolicy status controller br  Disabled by default controllers  bootstrap signer controller  token cleaner controller  p   td    tr    tr   td colspan  2    disable attach detach reconcile sync  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Disable volume attach detach reconciler sync  Disabling this may cause volumes to be mismatched with pods  Use wisely   p   td    tr    tr   td colspan  2    disable force detach on timeout  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Prevent force detaching volumes based on maximum unmount time and node status  If this flag is set to true  the non graceful node shutdown feature must be used to recover from node failure  See https   k8s io docs storage disable force detach on timeout    p   td    tr    tr   td colspan  2    disable http2 serving  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  HTTP2 serving will be disabled  default false   p   td    tr    tr   td colspan  2    disabled metrics strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p This flag provides an escape hatch for misbehaving metrics  You must provide the fully qualified metric name in order to disable it  Disclaimer  disabling metrics is higher in precedence than showing hidden metrics   p   td    tr    tr   td colspan  2    emulated version strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The versions different components emulate their capabilities  APIs  features       of  br  If set  the component will emulate the behavior of this version instead of the underlying binary 
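The enable/disable semantics described for the `--controllers` flag ('*', 'foo', '-foo') can be sketched in Python. This is an illustrative helper, not kube-controller-manager's actual implementation; in particular, this sketch treats '-foo' as order-independent:

```python
def select_controllers(spec, on_by_default):
    """Sketch of --controllers selection: '*' enables all on-by-default
    controllers, 'foo' enables 'foo', '-foo' disables 'foo'."""
    # Start from the on-by-default set only when '*' is present.
    enabled = set(on_by_default) if "*" in spec else set()
    # Explicitly named controllers are added...
    enabled |= {s for s in spec if s != "*" and not s.startswith("-")}
    # ...and '-foo' entries are removed, regardless of position.
    enabled -= {s[1:] for s in spec if s.startswith("-")}
    return sorted(enabled)
```

For example, `select_controllers(["*", "-job-controller", "bootstrap-signer-controller"], on_by_default)` yields the on-by-default set plus the disabled-by-default bootstrap-signer-controller, minus the job controller.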
`--enable-dynamic-provisioning`     Default: true
: Enable dynamic provisioning for environments that support it.

`--enable-garbage-collector`     Default: true
: Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-apiserver.

`--enable-hostpath-provisioner`
: Enable HostPath PV provisioning when running without a cloud provider. This allows testing and development of provisioning features. HostPath provisioning is not supported in any way, won't work in a multi-node cluster, and should not be used for anything other than testing or development.

`--enable-leader-migration`
: Whether to enable controller leader migration.

`--endpoint-updates-batch-period duration`
: The length of endpoint updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated.

`--endpointslice-updates-batch-period duration`
: The length of endpoint slice updates batching period. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated.

`--external-cloud-volume-plugin string`
: The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node-ipam-controller, persistentvolume-binder-controller, persistentvolume-expander-controller and attach-detach-controller to work for in-tree cloud providers.

`--feature-gates colonSeparatedMultimapStringString`
: Comma-separated list of component:key=value pairs that describe feature gates for alpha/experimental features of different components.
  If the component is not specified, defaults to "kube". This flag can be repeatedly invoked. For example: `--feature-gates 'wardle:featureA=true,wardle:featureB=false' --feature-gates 'kube:featureC=true'`. Options are:
  kube:APIResponseCompression=true|false (BETA - default=true)
  kube:APIServerIdentity=true|false (BETA - default=true)
  kube:APIServerTracing=true|false (BETA - default=true)
  kube:APIServingWithRoutine=true|false (ALPHA - default=false)
  kube:AllAlpha=true|false (ALPHA - default=false)
  kube:AllBeta=true|false (BETA - default=false)
  kube:AnonymousAuthConfigurableEndpoints=true|false (ALPHA - default=false)
  kube:AnyVolumeDataSource=true|false (BETA - default=true)
  kube:AuthorizeNodeWithSelectors=true|false (ALPHA - default=false)
  kube:AuthorizeWithSelectors=true|false (ALPHA - default=false)
  kube:CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
  kube:CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
  kube:CPUManagerPolicyOptions=true|false (BETA - default=true)
  kube:CRDValidationRatcheting=true|false (BETA - default=true)
  kube:CSIMigrationPortworx=true|false (BETA - default=true)
  kube:CSIVolumeHealth=true|false (ALPHA - default=false)
  kube:CloudControllerManagerWebhook=true|false (ALPHA - default=false)
  kube:ClusterTrustBundle=true|false (ALPHA - default=false)
  kube:ClusterTrustBundleProjection=true|false (ALPHA - default=false)
  kube:ComponentSLIs=true|false (BETA - default=true)
  kube:ConcurrentWatchObjectDecode=true|false (BETA - default=false)
  kube:ConsistentListFromCache=true|false (BETA - default=true)
  kube:ContainerCheckpoint=true|false (BETA - default=true)
  kube:ContextualLogging=true|false (BETA - default=true)
  kube:CoordinatedLeaderElection=true|false (ALPHA - default=false)
  kube:CronJobsScheduledAnnotation=true|false (BETA - default=true)
  kube:CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)
  kube:CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
  kube:CustomResourceFieldSelectors=true|false (BETA - default=true)
  kube:DRAControlPlaneController=true|false (ALPHA - default=false)
  kube:DisableAllocatorDualWrite=true|false (ALPHA - default=false)
  kube:DisableNodeKubeProxyVersion=true|false (BETA - default=true)
  kube:DynamicResourceAllocation=true|false (ALPHA - default=false)
  kube:EventedPLEG=true|false (ALPHA - default=false)
  kube:GracefulNodeShutdown=true|false (BETA - default=true)
  kube:GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
  kube:HPAScaleToZero=true|false (ALPHA - default=false)
  kube:HonorPVReclaimPolicy=true|false (BETA - default=true)
  kube:ImageMaximumGCAge=true|false (BETA - default=true)
  kube:ImageVolume=true|false (ALPHA - default=false)
  kube:InPlacePodVerticalScaling=true|false (ALPHA - default=false)
  kube:InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
  kube:InformerResourceVersion=true|false (ALPHA - default=false)
  kube:JobBackoffLimitPerIndex=true|false (BETA - default=true)
  kube:JobManagedBy=true|false (ALPHA - default=false)
  kube:JobPodReplacementPolicy=true|false (BETA - default=true)
  kube:JobSuccessPolicy=true|false (BETA - default=true)
  kube:KubeletCgroupDriverFromCRI=true|false (BETA - default=true)
  kube:KubeletInUserNamespace=true|false (ALPHA - default=false)
  kube:KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)
  kube:KubeletPodResourcesGet=true|false (ALPHA - default=false)
  kube:KubeletSeparateDiskGC=true|false (BETA - default=true)
  kube:KubeletTracing=true|false (BETA - default=true)
  kube:LoadBalancerIPMode=true|false (BETA - default=true)
  kube:LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (BETA - default=false)
  kube:LoggingAlphaOptions=true|false (ALPHA - default=false)
  kube:LoggingBetaOptions=true|false (BETA - default=true)
  kube:MatchLabelKeysInPodAffinity=true|false (BETA - default=true)
  kube:MatchLabelKeysInPodTopologySpread=true|false (BETA - default=true)
  kube:MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
  kube:MemoryManager=true|false (BETA - default=true)
  kube:MemoryQoS=true|false (ALPHA - default=false)
  kube:MultiCIDRServiceAllocator=true|false (BETA - default=false)
  kube:MutatingAdmissionPolicy=true|false (ALPHA - default=false)
  kube:NFTablesProxyMode=true|false (BETA - default=true)
  kube:NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)
  kube:NodeLogQuery=true|false (BETA - default=false)
  kube:NodeSwap=true|false (BETA - default=true)
  kube:OpenAPIEnums=true|false (BETA - default=true)
  kube:PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
  kube:PodDeletionCost=true|false (BETA - default=true)
  kube:PodIndexLabel=true|false (BETA - default=true)
  kube:PodLifecycleSleepAction=true|false (BETA - default=true)
  kube:PodReadyToStartContainersCondition=true|false (BETA - default=true)
  kube:PortForwardWebsockets=true|false (BETA - default=true)
  kube:ProcMountType=true|false (BETA - default=false)
  kube:QOSReserved=true|false (ALPHA - default=false)
  kube:RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
  kube:RecursiveReadOnlyMounts=true|false (BETA - default=true)
  kube:RelaxedEnvironmentVariableValidation=true|false (ALPHA - default=false)
  kube:ReloadKubeletServerCertificateFile=true|false (BETA - default=true)
  kube:ResilientWatchCacheInitialization=true|false (BETA - default=true)
  kube:ResourceHealthStatus=true|false (ALPHA - default=false)
  kube:RetryGenerateName=true|false (BETA - default=true)
  kube:RotateKubeletServerCertificate=true|false (BETA - default=true)
  kube:RuntimeClassInImageCriApi=true|false (ALPHA - default=false)
  kube:SELinuxMount=true|false (ALPHA - default=false)
  kube:SELinuxMountReadWriteOncePod=true|false (BETA - default=true)
  kube:SchedulerQueueingHints=true|false (BETA - default=false)
  kube:SeparateCacheWatchRPC=true|false (BETA - default=true)
  kube:SeparateTaintEvictionController=true|false (BETA - default=true)
  kube:ServiceAccountTokenJTI=true|false (BETA - default=true)
  kube:ServiceAccountTokenNodeBinding=true|false (BETA - default=true)
  kube:ServiceAccountTokenNodeBindingValidation=true|false (BETA - default=true)
  kube:ServiceAccountTokenPodNodeInfo=true|false (BETA - default=true)
  kube:ServiceTrafficDistribution=true|false (BETA - default=true)
  kube:SidecarContainers=true|false (BETA - default=true)
  kube:SizeMemoryBackedVolumes=true|false (BETA - default=true)
  kube:StatefulSetAutoDeletePVC=true|false (BETA - default=true)
  kube:StorageNamespaceIndex=true|false (BETA - default=true)
  kube:StorageVersionAPI=true|false (ALPHA - default=false)
  kube:StorageVersionHash=true|false (BETA - default=true)
  kube:StorageVersionMigrator=true|false (ALPHA - default=false)
  kube:StrictCostEnforcementForVAP=true|false (BETA - default=false)
  kube:StrictCostEnforcementForWebhooks=true|false (BETA - default=false)
  kube:StructuredAuthenticationConfiguration=true|false (BETA - default=true)
  kube:StructuredAuthorizationConfiguration=true|false (BETA - default=true)
  kube:SupplementalGroupsPolicy=true|false (ALPHA - default=false)
  kube:TopologyAwareHints=true|false (BETA - default=true)
  kube:TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
  kube:TopologyManagerPolicyBetaOptions=true|false (BETA - default=true)
  kube:TopologyManagerPolicyOptions=true|false (BETA - default=true)
  kube:TranslateStreamCloseWebsocketRequests=true|false (BETA - default=true)
  kube:UnauthenticatedHTTP2DOSMitigation=true|false (BETA - default=true)
  kube:UnknownVersionInteroperabilityProxy=true|false (ALPHA - default=false)
  kube:UserNamespacesPodSecurityStandards=true|false (ALPHA - default=false)
  kube:UserNamespacesSupport=true|false (BETA - default=false)
  kube:VolumeAttributesClass=true|false (BETA - default=false)
  kube:VolumeCapacityPriority=true|false (ALPHA - default=false)
  kube:WatchCacheInitializationPostStartHook=true|false (BETA - default=false)
  kube:WatchFromStorageWithoutResourceVersion=true|false (BETA - default=false)
  kube:WatchList=true|false (ALPHA - default=false)
  kube:WatchListClient=true|false (BETA - default=false)
  kube:WinDSR=true|false (ALPHA - default=false)
  kube:WinOverlay=true|false (BETA - default=true)
  kube:WindowsHostNetwork=true|false (ALPHA - default=true)

`--flex-volume-plugin-dir string`     Default: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
: Full path of the directory in which the flex volume plugin should search for additional third party volume plugins.
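The colon-separated multimap format that `--feature-gates` accepts ("component:Key=value", with a "kube" default component, repeatable and comma-separated) can be sketched as a small parser. This is a hypothetical helper with simplified boolean handling, not the actual Kubernetes flag-parsing code:

```python
def parse_feature_gates(values):
    """Sketch of parsing repeated --feature-gates values like
    'wardle:featureA=true,wardle:featureB=false' or 'featureC=true'.
    Keys without a component prefix default to 'kube'."""
    gates = {}
    for value in values:
        for pair in value.split(","):
            key, _, val = pair.partition("=")
            component, _, name = key.rpartition(":")
            component = component or "kube"  # unprefixed keys -> "kube"
            gates.setdefault(component, {})[name] = (val == "true")
    return gates
```

The real parser validates gate names and values strictly; this sketch only illustrates the component/key/value structure.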
`-h, --help`
: help for kube-controller-manager

`--horizontal-pod-autoscaler-cpu-initialization-period duration`     Default: 5m0s
: The period after pod start when CPU samples might be skipped.

`--horizontal-pod-autoscaler-downscale-stabilization duration`     Default: 5m0s
: The period for which autoscaler will look backwards and not scale down below any recommendation it made during that period.

`--horizontal-pod-autoscaler-initial-readiness-delay duration`     Default: 30s
: The period after pod start during which readiness changes will be treated as initial readiness.

`--horizontal-pod-autoscaler-sync-period duration`     Default: 15s
: The period for syncing the number of pods in horizontal pod autoscaler.

`--horizontal-pod-autoscaler-tolerance float`     Default: 0.1
: The minimum change (from 1.0) in the desired-to-actual metrics ratio for the horizontal pod autoscaler to consider scaling.

`--http2-max-streams-per-connection int`
: The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
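The `--horizontal-pod-autoscaler-tolerance` flag feeds the scaling rule the description implies: no scaling while the current/desired metrics ratio stays within the tolerance band around 1.0. A minimal sketch, assuming the standard `desired = ceil(current * ratio)` replica formula; the real controller additionally accounts for pod readiness, missing metrics, and the downscale-stabilization window:

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         tolerance=0.1):
    """Sketch of the HPA decision the tolerance flag gates."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # within tolerance: no scaling
    return math.ceil(current_replicas * ratio)
```

With the default tolerance of 0.1, a 5% overshoot in CPU usage triggers no change, while a 50% overshoot scales replicas up proportionally.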
`--kube-api-burst int32`     Default: 30
: Burst to use while talking with kubernetes apiserver.

`--kube-api-content-type string`     Default: "application/vnd.kubernetes.protobuf"
: Content type of requests sent to apiserver.

`--kube-api-qps float`     Default: 20
: QPS to use while talking with kubernetes apiserver.

`--kubeconfig string`
: Path to kubeconfig file with authorization and master location information (the master location can be overridden by the master flag).

`--large-cluster-size-threshold int32`     Default: 50
: Number of nodes from which node-lifecycle-controller treats the cluster as large for the eviction logic purposes. `--secondary-node-eviction-rate` is implicitly overridden to 0 for clusters this size or smaller. Notice: If nodes reside in multiple zones, this threshold will be considered as zone node size threshold for each zone to determine node eviction rate independently.

`--leader-elect`     Default: true
: Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.

`--leader-elect-lease-duration duration`     Default: 15s
: The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.

`--leader-elect-renew-deadline duration`     Default: 10s
: The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than the lease duration. This is only applicable if leader election is enabled.

`--leader-elect-resource-lock string`     Default: "leases"
: The type of resource object that is used for locking during leader election. Supported options are "leases", "endpointsleases" and "configmapsleases".

`--leader-elect-resource-name string`     Default: "kube-controller-manager"
: The name of resource object that is used for locking during leader election.

`--leader-elect-resource-namespace string`     Default: "kube-system"
: The namespace of resource object that is used for locking during leader election.

`--leader-elect-retry-period duration`     Default: 2s
: The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.

`--leader-migration-config string`
: Path to the config file for controller leader migration, or empty to use the value that reflects default configuration of the controller manager. The config file should be of type LeaderMigrationConfiguration, group controllermanager.config.k8s.io, version v1alpha1.

`--legacy-service-account-token-clean-up-period duration`     Default: 8760h0m0s
: The period of time since the last usage of a legacy service account token before it can be deleted.

`--log-flush-frequency duration`     Default: 5s
: Maximum number of seconds between log flushes.

`--log-text-info-buffer-size quantity`
: [Alpha] In text format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). Enable the LoggingAlphaOptions feature gate to use this.

`--log-text-split-stream`
: [Alpha] In text format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. Enable the LoggingAlphaOptions feature gate to use this.
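The `--leader-elect-*` durations must be mutually consistent: the renew deadline must be less than the lease duration (as the flag text states), and the retry period must in turn be smaller than the renew deadline so renewal attempts fit before the deadline. A sketch of those checks (hypothetical helper, defaults mirroring the documented flag defaults):

```python
def validate_leader_election(lease_duration_s=15, renew_deadline_s=10,
                             retry_period_s=2):
    """Sketch of the ordering constraints among the leader-election
    durations: retry-period < renew-deadline < lease-duration."""
    if not retry_period_s < renew_deadline_s:
        raise ValueError("retry period must be less than renew deadline")
    if not renew_deadline_s < lease_duration_s:
        raise ValueError("renew deadline must be less than lease duration")
    return True
```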
`--logging-format string`     Default: "text"
: Sets the log format. Permitted formats: "text".

`--master string`
: The address of the Kubernetes API server (overrides any value in kubeconfig).

`--max-endpoints-per-slice int32`     Default: 100
: The maximum number of endpoints that will be added to an EndpointSlice. More endpoints per slice will result in less endpoint slices, but larger resources. Defaults to 100.

`--min-resync-period duration`     Default: 12h0m0s
: The resync period in reflectors will be random between MinResyncPeriod and 2*MinResyncPeriod.

`--mirroring-concurrent-service-endpoint-syncs int32`     Default: 5
: The number of service endpoint syncing operations that will be done concurrently by the endpointslice-mirroring-controller. Larger number = faster endpoint slice updating, but more CPU (and network) load. Defaults to 5.

`--mirroring-endpointslice-updates-batch-period duration`
: The length of EndpointSlice updates batching period for endpointslice-mirroring-controller. Processing of EndpointSlice changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of EndpointSlice updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated.
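The jitter that `--min-resync-period` describes (each reflector picks a period uniformly between the minimum and twice the minimum, so resyncs don't all fire at once) can be sketched as follows (hypothetical helper, not client-go's actual code):

```python
import random

def reflector_resync_period(min_resync_s, rng=None):
    """Sketch: pick a resync period uniformly in
    [min_resync_s, 2 * min_resync_s), per the flag description."""
    rng = rng or random.Random()
    return min_resync_s * (1.0 + rng.random())
```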
`--mirroring-max-endpoints-per-subset int32`     Default: 1000
: The maximum number of endpoints that will be added to an EndpointSlice by the endpointslice-mirroring-controller. More endpoints per slice will result in less endpoint slices, but larger resources.

`--namespace-sync-period duration`     Default: 5m0s
: The period for syncing namespace life-cycle updates.

`--node-cidr-mask-size int32`
: Mask size for node cidr in cluster. Default is 24 for IPv4 and 64 for IPv6.

`--node-cidr-mask-size-ipv4 int32`
: Mask size for IPv4 node cidr in dual-stack cluster. Default is 24.

`--node-cidr-mask-size-ipv6 int32`
: Mask size for IPv6 node cidr in dual-stack cluster. Default is 64.

`--node-eviction-rate float`     Default: 0.1
: Number of nodes per second on which pods are deleted in case of node failure when a zone is healthy (see `--unhealthy-zone-threshold` for definition of healthy/unhealthy). Zone refers to entire cluster in non-multizone clusters.
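The node CIDR mask sizes above determine how many pod addresses each node CIDR holds, and how many node CIDRs fit inside the cluster CIDR. Illustrative arithmetic only (hypothetical helpers, not a Kubernetes API; the cluster CIDR mask of /16 in the example is an assumption):

```python
def pods_capacity(node_cidr_mask_size=24, ip_bits=32):
    """Addresses in one node CIDR: 2**(ip_bits - mask).
    ip_bits is 32 for IPv4, 128 for IPv6."""
    return 2 ** (ip_bits - node_cidr_mask_size)

def max_nodes(cluster_cidr_mask, node_cidr_mask_size=24):
    """Number of node CIDRs that fit in the cluster CIDR."""
    return 2 ** (node_cidr_mask_size - cluster_cidr_mask)
```

With the IPv4 default of /24, each node gets 256 addresses, and a /16 cluster CIDR accommodates 256 such nodes.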
multizone clusters   p   td    tr    tr   td colspan  2    node monitor grace period duration nbsp  nbsp  nbsp  nbsp  nbsp Default  40s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Amount of time which we allow running Node to be unresponsive before marking it unhealthy  Must be N times more than kubelet s nodeStatusUpdateFrequency  where N means number of retries allowed for kubelet to post node status   p   td    tr    tr   td colspan  2    node monitor period duration nbsp  nbsp  nbsp  nbsp  nbsp Default  5s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The period for syncing NodeStatus in cloud node lifecycle controller   p   td    tr    tr   td colspan  2    node startup grace period duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Amount of time which we allow starting Node to be unresponsive before marking it unhealthy   p   td    tr    tr   td colspan  2    permit address sharing  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  SO REUSEADDR will be used when binding the port  This allows binding to wildcard IPs like 0 0 0 0 and specific IPs in parallel  and it avoids waiting for the kernel to release sockets in TIME WAIT state   default false   p   td    tr    tr   td colspan  2    permit port sharing  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  SO REUSEPORT will be used when binding the port  which allows more than one instance to bind on the same address and port   default false   p   td    tr    tr   td colspan  2    profiling nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Enable profiling via web interface host port debug pprof   p   td    tr    tr   td colspan  2    pv recycler increment timeout nfs int32 nbsp  nbsp  nbsp  nbsp  nbsp 
Default  30  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p the increment of time added per Gi to ActiveDeadlineSeconds for an NFS scrubber pod  p   td    tr    tr   td colspan  2    pv recycler minimum timeout hostpath int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  60  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The minimum ActiveDeadlineSeconds to use for a HostPath Recycler pod   This is for development and testing only and will not work in a multi node cluster   p   td    tr    tr   td colspan  2    pv recycler minimum timeout nfs int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The minimum ActiveDeadlineSeconds to use for an NFS Recycler pod  p   td    tr    tr   td colspan  2    pv recycler pod template filepath hostpath string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The file path to a pod definition used as a template for HostPath persistent volume recycling  This is for development and testing only and will not work in a multi node cluster   p   td    tr    tr   td colspan  2    pv recycler pod template filepath nfs string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The file path to a pod definition used as a template for NFS persistent volume recycling  p   td    tr    tr   td colspan  2    pv recycler timeout increment hostpath int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  30  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p the increment of time added per Gi to ActiveDeadlineSeconds for a HostPath scrubber pod   This is for development and testing only and will not work in a multi node cluster   p   td    tr    tr   td colspan  2    pvclaimbinder sync period duration nbsp  nbsp  nbsp  nbsp  nbsp Default  15s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The period 
for syncing persistent volumes and persistent volume claims  p   td    tr    tr   td colspan  2    requestheader allowed names strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p List of client certificate common names to allow to provide usernames in headers specified by   requestheader username headers  If empty  any client certificate validated by the authorities in   requestheader client ca file is allowed   p   td    tr    tr   td colspan  2    requestheader client ca file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by   requestheader username headers  WARNING  generally do not depend on authorization being already done for incoming requests   p   td    tr    tr   td colspan  2    requestheader extra headers prefix strings nbsp  nbsp  nbsp  nbsp  nbsp Default   x remote extra    td    tr   tr   td   td  td style  line height  130   word wrap  break word    p List of request header prefixes to inspect  X Remote Extra  is suggested   p   td    tr    tr   td colspan  2    requestheader group headers strings nbsp  nbsp  nbsp  nbsp  nbsp Default   x remote group   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p List of request headers to inspect for groups  X Remote Group is suggested   p   td    tr    tr   td colspan  2    requestheader username headers strings nbsp  nbsp  nbsp  nbsp  nbsp Default   x remote user   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p List of request headers to inspect for usernames  X Remote User is common   p   td    tr    tr   td colspan  2    resource quota sync period duration nbsp  nbsp  nbsp  nbsp  nbsp Default  5m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The period for syncing quota usage status in the system  p   
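The --requestheader-* flags above together define front-proxy header authentication: identity headers are trusted only after the proxy's client certificate has been validated against --requestheader-client-ca-file and, when --requestheader-allowed-names is non-empty, its common name matched. A minimal sketch of that trust decision (illustrative only, not the Kubernetes implementation; the `resolve_user` helper and its exact matching rules are hypothetical):

```python
def resolve_user(headers, proxy_cn, allowed_names,
                 username_headers=("X-Remote-User",),
                 group_headers=("X-Remote-Group",),
                 extra_prefixes=("X-Remote-Extra-",)):
    """Sketch of request-header authentication.

    headers: list of (name, value) pairs from an already-TLS-verified request.
    proxy_cn: common name of the front proxy's validated client certificate.
    """
    # An empty --requestheader-allowed-names list means any CA-validated CN is accepted.
    if allowed_names and proxy_cn not in allowed_names:
        return None
    # First matching username header wins; without one, the request is anonymous.
    user = next((v for h, v in headers if h in username_headers), None)
    if user is None:
        return None
    groups = [v for h, v in headers if h in group_headers]
    # Headers under the configured prefixes become "extra" attributes.
    extra = {h[len(p):].lower(): v
             for h, v in headers
             for p in extra_prefixes if h.startswith(p)}
    return {"name": user, "groups": groups, "extra": extra}
```

For example, a request forwarded by a proxy whose certificate CN is listed in --requestheader-allowed-names resolves to a user, while the same headers from an unlisted CN are rejected.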
--root-ca-file string
    If set, this root certificate authority will be included in the service account's token Secret. This must be a valid PEM-encoded CA bundle.

--route-reconciliation-period duration     Default: 10s
    The period for reconciling routes created for Nodes by the cloud provider.

--secondary-node-eviction-rate float     Default: 0.01
    Number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy (see --unhealthy-zone-threshold for a definition of healthy/unhealthy). Zone refers to the entire cluster in non-multizone clusters. This value is implicitly overridden to 0 if the cluster size is smaller than --large-cluster-size-threshold.

--secure-port int     Default: 10257
    The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.

--service-account-private-key-file string
    Filename containing a PEM-encoded private RSA or ECDSA key used to sign service account tokens.

--service-cluster-ip-range string
    CIDR range for Services in the cluster. Requires --allocate-node-cidrs to be true.

--show-hidden-metrics-for-version string
    The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is <major>.<minor>, e.g. '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.

--terminated-pod-gc-threshold int32     Default: 12500
    Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.

--tls-cert-file string
    File containing the default x509 certificate for HTTPS (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.

--tls-cipher-suites strings
    Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
    Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256.
    Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_RC4_128_SHA.

--tls-min-version string
    Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13.

--tls-private-key-file string
    File containing the default x509 private key matching --tls-cert-file.

--tls-sni-cert-key string
    A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches; explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com".

--unhealthy-zone-threshold float     Default: 0.55
    Fraction of Nodes in a zone which needs to be not Ready (minimum 3) for the zone to be treated as unhealthy.

--use-service-account-credentials
    If true, use individual service account credentials for each controller.

-v, --v int
    Number for the log level verbosity.

--version version[=true]
    --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version.

--vmodule pattern=N,...
    Comma-separated list of pattern=N settings for file-filtered logging (only works for text log format).
"}
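The eviction flags interact: --node-eviction-rate applies while a zone is healthy, --secondary-node-eviction-rate applies once the NotReady fraction reaches --unhealthy-zone-threshold, and the secondary rate is forced to 0 in clusters below --large-cluster-size-threshold. A hedged sketch of that arithmetic (one reading of the flag descriptions, not the Kubernetes implementation; the --large-cluster-size-threshold default of 50 is an assumption, as that flag is not listed in this section):

```python
def eviction_rate(total_nodes, not_ready,
                  unhealthy_zone_threshold=0.55,
                  node_eviction_rate=0.1,
                  secondary_rate=0.01,
                  large_cluster_threshold=50):
    """Nodes-per-second pod-eviction rate for a zone, per the flag descriptions."""
    # Reading "(minimum 3)" as: a zone needs at least 3 NotReady nodes AND the
    # NotReady fraction at or above the threshold to count as unhealthy.
    unhealthy = (not_ready >= 3 and
                 not_ready / total_nodes >= unhealthy_zone_threshold)
    if not unhealthy:
        return node_eviction_rate       # healthy zone: full rate (default 0.1)
    if total_nodes < large_cluster_threshold:
        return 0.0                      # small cluster: secondary rate overridden to 0
    return secondary_rate               # unhealthy zone, large cluster (default 0.01)
```

With the defaults, a 100-node zone with 10 NotReady nodes evicts at 0.1 nodes/s, the same zone with 60 NotReady nodes drops to 0.01 nodes/s, and a 10-node cluster with 6 NotReady nodes stops evicting entirely.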
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference title kube proxy weight 30 autogenerated true","answers":"---\ntitle: kube-proxy\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nThe Kubernetes network proxy runs on each node. This\nreflects services as defined in the Kubernetes API on each node and can do simple\nTCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends.\nService cluster IPs and ports are currently found through Docker-links-compatible\nenvironment variables specifying ports opened by the service proxy. There is an optional\naddon that provides cluster DNS for these cluster IPs. 
The user must create a service\nwith the apiserver API to configure the proxy.\n\n```\nkube-proxy [flags]\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--add_dir_header<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, adds the file directory to the header of the log messages<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--alsologtostderr<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>log to standard error as well as files (no effect when -logtostderr=true)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--bind-address string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 0.0.0.0<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Overrides kube-proxy's idea of what its node's primary IP is. Note that the name is a historical artifact, and kube-proxy does not actually bind any sockets to this IP. This parameter is ignored if a config file is specified by --config.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--bind-address-hard-fail<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true kube-proxy will treat failure to bind to a port as fatal and exit<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cleanup<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true cleanup iptables and ipvs rules and exit.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-cidr string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The CIDR range of the pods in the cluster. (For dual-stack clusters, this can be a comma-separated dual-stack pair of CIDR ranges.). When --detect-local-mode is set to ClusterCIDR, kube-proxy will consider traffic to be local if its source IP is in this range. 
(Otherwise it is not used.) This parameter is ignored if a config file is specified by --config.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--config string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The path to the configuration file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--config-sync-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 15m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>How often configuration from the apiserver is refreshed.  Must be greater than 0.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--conntrack-max-per-core int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 32768<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--conntrack-min int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 131072<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--conntrack-tcp-be-liberal<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enable liberal mode for tracking TCP packets by setting nf_conntrack_tcp_be_liberal to 1<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--conntrack-tcp-timeout-close-wait duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1h0m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>NAT timeout for TCP connections in the CLOSE_WAIT state<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--conntrack-tcp-timeout-established duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 24h0m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Idle timeout for established TCP connections (0 to leave as-is)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--conntrack-udp-timeout duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Idle timeout for UNREPLIED UDP connections (0 to leave as-is)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--conntrack-udp-timeout-stream duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Idle timeout for ASSURED UDP connections (0 to leave as-is)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--detect-local-mode LocalMode<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Mode to use to detect local traffic. This parameter is ignored if a config file is specified by --config.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--feature-gates &lt;comma-separated 'key=True|False' pairs&gt;<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A set of key=value pairs that describe feature gates for alpha\/experimental features. 
Options are:<br\/>APIResponseCompression=true|false (BETA - default=true)<br\/>APIServerIdentity=true|false (BETA - default=true)<br\/>APIServerTracing=true|false (BETA - default=true)<br\/>APIServingWithRoutine=true|false (ALPHA - default=false)<br\/>AllAlpha=true|false (ALPHA - default=false)<br\/>AllBeta=true|false (BETA - default=false)<br\/>AnonymousAuthConfigurableEndpoints=true|false (ALPHA - default=false)<br\/>AnyVolumeDataSource=true|false (BETA - default=true)<br\/>AuthorizeNodeWithSelectors=true|false (ALPHA - default=false)<br\/>AuthorizeWithSelectors=true|false (ALPHA - default=false)<br\/>CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br\/>CPUManagerPolicyBetaOptions=true|false (BETA - default=true)<br\/>CPUManagerPolicyOptions=true|false (BETA - default=true)<br\/>CRDValidationRatcheting=true|false (BETA - default=true)<br\/>CSIMigrationPortworx=true|false (BETA - default=true)<br\/>CSIVolumeHealth=true|false (ALPHA - default=false)<br\/>CloudControllerManagerWebhook=true|false (ALPHA - default=false)<br\/>ClusterTrustBundle=true|false (ALPHA - default=false)<br\/>ClusterTrustBundleProjection=true|false (ALPHA - default=false)<br\/>ComponentSLIs=true|false (BETA - default=true)<br\/>ConcurrentWatchObjectDecode=true|false (BETA - default=false)<br\/>ConsistentListFromCache=true|false (BETA - default=true)<br\/>ContainerCheckpoint=true|false (BETA - default=true)<br\/>ContextualLogging=true|false (BETA - default=true)<br\/>CoordinatedLeaderElection=true|false (ALPHA - default=false)<br\/>CronJobsScheduledAnnotation=true|false (BETA - default=true)<br\/>CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)<br\/>CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)<br\/>CustomResourceFieldSelectors=true|false (BETA - default=true)<br\/>DRAControlPlaneController=true|false (ALPHA - default=false)<br\/>DisableAllocatorDualWrite=true|false (ALPHA - default=false)<br\/>DisableNodeKubeProxyVersion=true|false (BETA - 
default=true)<br\/>DynamicResourceAllocation=true|false (ALPHA - default=false)<br\/>EventedPLEG=true|false (ALPHA - default=false)<br\/>GracefulNodeShutdown=true|false (BETA - default=true)<br\/>GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)<br\/>HPAScaleToZero=true|false (ALPHA - default=false)<br\/>HonorPVReclaimPolicy=true|false (BETA - default=true)<br\/>ImageMaximumGCAge=true|false (BETA - default=true)<br\/>ImageVolume=true|false (ALPHA - default=false)<br\/>InPlacePodVerticalScaling=true|false (ALPHA - default=false)<br\/>InTreePluginPortworxUnregister=true|false (ALPHA - default=false)<br\/>InformerResourceVersion=true|false (ALPHA - default=false)<br\/>JobBackoffLimitPerIndex=true|false (BETA - default=true)<br\/>JobManagedBy=true|false (ALPHA - default=false)<br\/>JobPodReplacementPolicy=true|false (BETA - default=true)<br\/>JobSuccessPolicy=true|false (BETA - default=true)<br\/>KubeletCgroupDriverFromCRI=true|false (BETA - default=true)<br\/>KubeletInUserNamespace=true|false (ALPHA - default=false)<br\/>KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)<br\/>KubeletPodResourcesGet=true|false (ALPHA - default=false)<br\/>KubeletSeparateDiskGC=true|false (BETA - default=true)<br\/>KubeletTracing=true|false (BETA - default=true)<br\/>LoadBalancerIPMode=true|false (BETA - default=true)<br\/>LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (BETA - default=false)<br\/>LoggingAlphaOptions=true|false (ALPHA - default=false)<br\/>LoggingBetaOptions=true|false (BETA - default=true)<br\/>MatchLabelKeysInPodAffinity=true|false (BETA - default=true)<br\/>MatchLabelKeysInPodTopologySpread=true|false (BETA - default=true)<br\/>MaxUnavailableStatefulSet=true|false (ALPHA - default=false)<br\/>MemoryManager=true|false (BETA - default=true)<br\/>MemoryQoS=true|false (ALPHA - default=false)<br\/>MultiCIDRServiceAllocator=true|false (BETA - default=false)<br\/>MutatingAdmissionPolicy=true|false (ALPHA - 
default=false)<br\/>NFTablesProxyMode=true|false (BETA - default=true)<br\/>NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)<br\/>NodeLogQuery=true|false (BETA - default=false)<br\/>NodeSwap=true|false (BETA - default=true)<br\/>OpenAPIEnums=true|false (BETA - default=true)<br\/>PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)<br\/>PodDeletionCost=true|false (BETA - default=true)<br\/>PodIndexLabel=true|false (BETA - default=true)<br\/>PodLifecycleSleepAction=true|false (BETA - default=true)<br\/>PodReadyToStartContainersCondition=true|false (BETA - default=true)<br\/>PortForwardWebsockets=true|false (BETA - default=true)<br\/>ProcMountType=true|false (BETA - default=false)<br\/>QOSReserved=true|false (ALPHA - default=false)<br\/>RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)<br\/>RecursiveReadOnlyMounts=true|false (BETA - default=true)<br\/>RelaxedEnvironmentVariableValidation=true|false (ALPHA - default=false)<br\/>ReloadKubeletServerCertificateFile=true|false (BETA - default=true)<br\/>ResilientWatchCacheInitialization=true|false (BETA - default=true)<br\/>ResourceHealthStatus=true|false (ALPHA - default=false)<br\/>RetryGenerateName=true|false (BETA - default=true)<br\/>RotateKubeletServerCertificate=true|false (BETA - default=true)<br\/>RuntimeClassInImageCriApi=true|false (ALPHA - default=false)<br\/>SELinuxMount=true|false (ALPHA - default=false)<br\/>SELinuxMountReadWriteOncePod=true|false (BETA - default=true)<br\/>SchedulerQueueingHints=true|false (BETA - default=false)<br\/>SeparateCacheWatchRPC=true|false (BETA - default=true)<br\/>SeparateTaintEvictionController=true|false (BETA - default=true)<br\/>ServiceAccountTokenJTI=true|false (BETA - default=true)<br\/>ServiceAccountTokenNodeBinding=true|false (BETA - default=true)<br\/>ServiceAccountTokenNodeBindingValidation=true|false (BETA - default=true)<br\/>ServiceAccountTokenPodNodeInfo=true|false (BETA - 
default=true)<br\/>ServiceTrafficDistribution=true|false (BETA - default=true)<br\/>SidecarContainers=true|false (BETA - default=true)<br\/>SizeMemoryBackedVolumes=true|false (BETA - default=true)<br\/>StatefulSetAutoDeletePVC=true|false (BETA - default=true)<br\/>StorageNamespaceIndex=true|false (BETA - default=true)<br\/>StorageVersionAPI=true|false (ALPHA - default=false)<br\/>StorageVersionHash=true|false (BETA - default=true)<br\/>StorageVersionMigrator=true|false (ALPHA - default=false)<br\/>StrictCostEnforcementForVAP=true|false (BETA - default=false)<br\/>StrictCostEnforcementForWebhooks=true|false (BETA - default=false)<br\/>StructuredAuthenticationConfiguration=true|false (BETA - default=true)<br\/>StructuredAuthorizationConfiguration=true|false (BETA - default=true)<br\/>SupplementalGroupsPolicy=true|false (ALPHA - default=false)<br\/>TopologyAwareHints=true|false (BETA - default=true)<br\/>TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br\/>TopologyManagerPolicyBetaOptions=true|false (BETA - default=true)<br\/>TopologyManagerPolicyOptions=true|false (BETA - default=true)<br\/>TranslateStreamCloseWebsocketRequests=true|false (BETA - default=true)<br\/>UnauthenticatedHTTP2DOSMitigation=true|false (BETA - default=true)<br\/>UnknownVersionInteroperabilityProxy=true|false (ALPHA - default=false)<br\/>UserNamespacesPodSecurityStandards=true|false (ALPHA - default=false)<br\/>UserNamespacesSupport=true|false (BETA - default=false)<br\/>VolumeAttributesClass=true|false (BETA - default=false)<br\/>VolumeCapacityPriority=true|false (ALPHA - default=false)<br\/>WatchCacheInitializationPostStartHook=true|false (BETA - default=false)<br\/>WatchFromStorageWithoutResourceVersion=true|false (BETA - default=false)<br\/>WatchList=true|false (ALPHA - default=false)<br\/>WatchListClient=true|false (BETA - default=false)<br\/>WinDSR=true|false (ALPHA - default=false)<br\/>WinOverlay=true|false (BETA - default=true)<br\/>WindowsHostNetwork=true|false 
(ALPHA - default=true)<br\/>This parameter is ignored if a config file is specified by --config.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--healthz-bind-address ipport&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 0.0.0.0:10256<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The IP address and port for the health check server to serve on, defaulting to &quot;0.0.0.0:10256&quot;. This parameter is ignored if a config file is specified by --config.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for kube-proxy<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--hostname-override string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If non-empty, will be used as the name of the Node that kube-proxy is running on. If unset, the node name is assumed to be the same as the node's hostname.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--init-only<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, perform any initialization steps that must be done with full root privileges, and then exit. After doing this, you can run kube-proxy again with only the CAP_NET_ADMIN capability.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--iptables-localhost-nodeports&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If false, kube-proxy will disable the legacy behavior of allowing NodePort services to be accessed via localhost. 
(Applies only to iptables mode and IPv4; localhost NodePorts are never allowed with other proxy modes or with IPv6.)</p></td>
</tr>

<tr>
<td colspan="2">--iptables-masquerade-bit int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 14</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If using the iptables or ipvs proxy mode, the bit of the fwmark space to mark packets requiring SNAT with. Must be within the range [0, 31].</p></td>
</tr>

<tr>
<td colspan="2">--iptables-min-sync-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The minimum period between iptables rule resyncs (e.g. '5s', '1m', '2h22m'). A value of 0 means every Service or EndpointSlice change will result in an immediate iptables resync.</p></td>
</tr>

<tr>
<td colspan="2">--iptables-sync-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 30s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>An interval (e.g. '5s', '1m', '2h22m') indicating how frequently various re-synchronizing and cleanup operations are performed. Must be greater than 0.</p></td>
</tr>

<tr>
<td colspan="2">--ipvs-exclude-cidrs strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A comma-separated list of CIDRs which the ipvs proxier should not touch when cleaning up IPVS rules.</p></td>
</tr>

<tr>
<td colspan="2">--ipvs-min-sync-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The minimum period between IPVS rule resyncs (e.g. '5s', '1m', '2h22m'). A value of 0 means every Service or EndpointSlice change will result in an immediate IPVS resync.</p></td>
</tr>

<tr>
<td colspan="2">--ipvs-scheduler string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The ipvs scheduler type when proxy mode is ipvs</p></td>
</tr>

<tr>
<td colspan="2">--ipvs-strict-arp</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2</p></td>
</tr>

<tr>
<td colspan="2">--ipvs-sync-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 30s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>An interval (e.g. '5s', '1m', '2h22m') indicating how frequently various re-synchronizing and cleanup operations are performed. Must be greater than 0.</p></td>
</tr>

<tr>
<td colspan="2">--ipvs-tcp-timeout duration</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The timeout for idle IPVS TCP connections, 0 to leave as-is (e.g. '5s', '1m', '2h22m').</p></td>
</tr>

<tr>
<td colspan="2">--ipvs-tcpfin-timeout duration</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The timeout for IPVS TCP connections after receiving a FIN packet, 0 to leave as-is (e.g. '5s', '1m', '2h22m').</p></td>
</tr>

<tr>
<td colspan="2">--ipvs-udp-timeout duration</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The timeout for IPVS UDP packets, 0 to leave as-is (e.g. '5s', '1m', '2h22m').</p></td>
</tr>

<tr>
<td colspan="2">--kube-api-burst int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Burst to use while talking with the Kubernetes apiserver</p></td>
</tr>

<tr>
<td colspan="2">--kube-api-content-type string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: "application/vnd.kubernetes.protobuf"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Content type of requests sent to the apiserver.</p></td>
</tr>

<tr>
<td colspan="2">--kube-api-qps float&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>QPS to use while talking with the Kubernetes apiserver</p></td>
</tr>

<tr>
<td colspan="2">--kubeconfig string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Path to kubeconfig file with authorization information (the master location can be overridden by the master flag).</p></td>
</tr>

<tr>
<td colspan="2">--log-flush-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5s</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Maximum number of seconds between log flushes</p></td>
</tr>

<tr>
<td colspan="2">--log-text-info-buffer-size quantity</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>[Alpha] In text format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as a number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). Enable the LoggingAlphaOptions feature gate to use this.</p></td>
</tr>

<tr>
<td colspan="2">--log-text-split-stream</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>[Alpha] In text format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. Enable the LoggingAlphaOptions feature gate to use this.</p></td>
</tr>

<tr>
<td colspan="2">--log_backtrace_at &lt;a string in the form 'file:N'&gt;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: :0</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>when logging hits line file:N, emit a stack trace</p></td>
</tr>

<tr>
<td colspan="2">--log_dir string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If non-empty, write log files in this directory (no effect when -logtostderr=true)</p></td>
</tr>

<tr>
<td colspan="2">--log_file string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If non-empty, use this log file (no effect when -logtostderr=true)</p></td>
</tr>

<tr>
<td colspan="2">--log_file_max_size uint&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1800</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited.</p></td>
</tr>

<tr>
<td colspan="2">--logging-format string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: "text"</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Sets the log format. Permitted formats: &quot;text&quot;.</p></td>
</tr>

<tr>
<td colspan="2">--logtostderr&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>log to standard error instead of files</p></td>
</tr>

<tr>
<td colspan="2">--masquerade-all</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>SNAT all traffic sent via Service cluster IPs. This may be required with some CNI plugins. Only supported on Linux.</p></td>
</tr>

<tr>
<td colspan="2">--master string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The address of the Kubernetes API server (overrides any value in kubeconfig)</p></td>
</tr>

<tr>
<td colspan="2">--metrics-bind-address ipport&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 127.0.0.1:10249</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The IP address and port for the metrics server to serve on, defaulting to &quot;127.0.0.1:10249&quot;. (Set to &quot;0.0.0.0:10249&quot; / &quot;[::]:10249&quot; to bind on all interfaces.) Set empty to disable. This parameter is ignored if a config file is specified by --config.</p></td>
</tr>

<tr>
<td colspan="2">--nodeport-addresses strings</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A list of CIDR ranges that contain valid node IPs, or alternatively, the single string 'primary'. If set to a list of CIDRs, connections to NodePort services will only be accepted on node IPs in one of the indicated ranges. If set to 'primary', NodePort services will only be accepted on the node's primary IP(s) according to the Node object. If unset, NodePort connections will be accepted on all local IPs. This parameter is ignored if a config file is specified by --config.</p></td>
</tr>

<tr>
<td colspan="2">--one_output</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)</p></td>
</tr>

<tr>
<td colspan="2">--oom-score-adj int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -999</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The oom-score-adj value for the kube-proxy process. Values must be within the range [-1000, 1000]. This parameter is ignored if a config file is specified by --config.</p></td>
</tr>

<tr>
<td colspan="2">--pod-bridge-interface string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>A bridge interface name. When --detect-local-mode is set to BridgeInterface, kube-proxy will consider traffic to be local if it originates from this bridge.</p></td>
</tr>

<tr>
<td colspan="2">--pod-interface-name-prefix string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>An interface name prefix. When --detect-local-mode is set to InterfaceNamePrefix, kube-proxy will consider traffic to be local if it originates from any interface whose name begins with this prefix.</p></td>
</tr>

<tr>
<td colspan="2">--profiling</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, enables profiling via a web interface on the /debug/pprof handler. This parameter is ignored if a config file is specified by --config.</p></td>
</tr>

<tr>
<td colspan="2">--proxy-mode ProxyMode</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>Which proxy mode to use: on Linux this can be 'iptables' (default) or 'ipvs'. On Windows the only supported value is 'kernelspace'. This parameter is ignored if a config file is specified by --config.</p></td>
</tr>

<tr>
<td colspan="2">--show-hidden-metrics-for-version string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is &lt;major&gt;.&lt;minor&gt;, e.g. '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that. This parameter is ignored if a config file is specified by --config.</p></td>
</tr>

<tr>
<td colspan="2">--skip_headers</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, avoid header prefixes in the log messages</p></td>
</tr>

<tr>
<td colspan="2">--skip_log_headers</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If true, avoid headers when opening log files (no effect when -logtostderr=true)</p></td>
</tr>

<tr>
<td colspan="2">--stderrthreshold int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 2</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=true)</p></td>
</tr>

<tr>
<td colspan="2">-v, --v int</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>number for the log level verbosity</p></td>
</tr>

<tr>
<td colspan="2">--version version[=true]</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z...
sets the reported version</p></td>
</tr>

<tr>
<td colspan="2">--vmodule pattern=N,...</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>comma-separated list of pattern=N settings for file-filtered logging (only works for text log format)</p></td>
</tr>

<tr>
<td colspan="2">--write-config-to string</td>
</tr>
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;"><p>If set, write the default configuration values to this file and exit.</p></td>
</tr>

</tbody>
</table>
 word wrap  break word    p Which proxy mode to use  on Linux this can be  iptables   default  or  ipvs   On Windows the only supported value is  kernelspace  This parameter is ignored if a config file is specified by   config   p   td    tr    tr   td colspan  2    show hidden metrics for version string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The previous version for which you want to show hidden metrics  Only the previous minor version is meaningful  other values will not be allowed  The format is  lt major gt   lt minor gt   e g    1 16   The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics  rather than being surprised when they are permanently removed in the release after that  This parameter is ignored if a config file is specified by   config   p   td    tr    tr   td colspan  2    skip headers  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  avoid header prefixes in the log messages  p   td    tr    tr   td colspan  2    skip log headers  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  avoid headers when opening log files  no effect when  logtostderr true   p   td    tr    tr   td colspan  2    stderrthreshold int nbsp  nbsp  nbsp  nbsp  nbsp Default  2  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p logs at or above this threshold go to stderr when writing to files and stderr  no effect when  logtostderr true or  alsologtostderr true   p   td    tr    tr   td colspan  2   v    v int  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p number for the log level verbosity  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the 
reported version  p   td    tr    tr   td colspan  2    vmodule pattern N      td    tr   tr   td   td  td style  line height  130   word wrap  break word    p comma separated list of pattern N settings for file filtered logging  only works for text log format   p   td    tr    tr   td colspan  2    write config to string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If set  write the default configuration values to this file and exit   p   td    tr     tbody    table    "}
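Many of the kube-proxy flags above note that they are ignored when a configuration file is supplied via --config. As a minimal sketch (field names taken from the kubeproxy.config.k8s.io/v1alpha1 API; verify them against your cluster's version before use), the equivalent settings in a KubeProxyConfiguration file look like:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"                         # flag equivalent: --proxy-mode
metricsBindAddress: "0.0.0.0:10249"  # flag equivalent: --metrics-bind-address
oomScoreAdj: -999                    # flag equivalent: --oom-score-adj
nodePortAddresses:                   # flag equivalent: --nodeport-addresses
  - "10.0.0.0/8"
detectLocalMode: "BridgeInterface"   # flag equivalent: --detect-local-mode
detectLocal:
  bridgeInterface: "cni0"            # flag equivalent: --pod-bridge-interface
```

Started as `kube-proxy --config=/var/lib/kube-proxy/config.yaml`, values from the file take precedence and the corresponding command-line flags are ignored.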
{"questions":"kubernetes reference The kubelet is the primary node agent that runs on each node It can weight 20 register the node with the apiserver using one of the hostname a flag to contenttype tool reference heading synopsis title kubelet","answers":"---\ntitle: kubelet\ncontent_type: tool-reference\nweight: 20\n---\n\n## Synopsis\n\nThe kubelet is the primary \"node agent\" that runs on each node. It can\nregister the node with the apiserver using one of: the hostname; a flag to\noverride the hostname; or specific logic for a cloud provider.\n\nThe kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object\nthat describes a pod. The kubelet takes a set of PodSpecs that are provided\nthrough various mechanisms (primarily through the apiserver) and ensures that\nthe containers described in those PodSpecs are running and healthy. The\nkubelet doesn't manage containers which were not created by Kubernetes.\n\nOther than from a PodSpec from the apiserver, there are two ways that a\ncontainer manifest can be provided to the kubelet.\n\n- File: Path passed as a flag on the command line. Files under this path will be\n  monitored periodically for updates. The monitoring period is 20s by default\n  and is configurable via a flag.\n- HTTP endpoint: HTTP endpoint passed as a parameter on the command line. 
This\n  endpoint is checked every 20 seconds (also configurable with a flag).\n\n```\nkubelet [flags]\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--address string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 0.0.0.0 <\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The IP address for the kubelet to serve on (set to <code>0.0.0.0<\/code> or <code>::<\/code> for listening on all interfaces and IP address families)  (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allowed-unsafe-sysctls strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in <code>&ast;<\/code>). Use these at your own risk. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--anonymous-auth&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Enables anonymous requests to the kubelet server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of <code>system:anonymous<\/code>, and a group name of <code>system:unauthenticated<\/code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-token-webhook<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Use the <code>TokenReview<\/code> API to determine authentication for bearer tokens. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-token-webhook-cache-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>2m0s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The duration to cache responses from the webhook token authenticator. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-mode string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>AlwaysAllow<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Authorization mode for kubelet server. Valid options are &quot;<code>AlwaysAllow<\/code>&quot; or &quot;<code>Webhook<\/code>&quot;. Webhook mode uses the <code>SubjectAccessReview<\/code> API to determine authorization. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-webhook-cache-authorized-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>5m0s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The duration to cache 'authorized' responses from the webhook authorizer. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-webhook-cache-unauthorized-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>30s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The duration to cache 'unauthorized' responses from the webhook authorizer. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--bootstrap-kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Path to a kubeconfig file that will be used to get client certificate for kubelet. If the file specified by <code>--kubeconfig<\/code> does not exist, the bootstrap kubeconfig is used to request a client certificate from the API server. On success, a kubeconfig file referencing the generated client certificate and key is written to the path specified by <code>--kubeconfig<\/code>. 
The client certificate and key file will be stored in the directory pointed by <code>--cert-dir<\/code>.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cert-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>\/var\/lib\/kubelet\/pki<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The directory where the TLS certs are located. If <code>--tls-cert-file<\/code> and <code>--tls-private-key-file<\/code> are provided, this flag will be ignored.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cgroup-driver string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>cgroupfs<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Driver that the kubelet uses to manipulate cgroups on the host.  Possible values: &quot;<code>cgroupfs<\/code>&quot;, &quot;<code>systemd<\/code>&quot;. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cgroup-root string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>''<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Optional root cgroup to use for pods. This is handled by the container runtime on a best effort basis. Default: '', which means use the container runtime default. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cgroups-per-qos&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Enable creation of QoS cgroup hierarchy, if true, top level QoS and pod cgroups are created. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-ca-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the <code>CommonName<\/code> of the client certificate. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cloud-config string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The path to the cloud provider configuration file. Empty string for no configuration file. (DEPRECATED: will be removed in 1.25 or later, in favor of removing cloud providers code from kubelet.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cloud-provider string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The provider for cloud services. Set to empty string for running with no cloud provider. Set to 'external' for running with an external cloud provider. 
If set, the cloud provider determines the name of the node (consult cloud provider documentation to determine if and how the hostname is used).<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-dns strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Comma-separated list of DNS server IP address. This value is used for containers DNS server in case of Pods with \"<code>dnsPolicy: ClusterFirst<\/code>\".<br\/><B>Note:<\/B> all DNS servers appearing in the list MUST serve the same set of records otherwise name resolution within the cluster may not work correctly. There is no guarantee as to which DNS server may be contacted for name resolution. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-domain string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Domain for this cluster. If set, kubelet will configure all containers to search this domain in addition to the host's search domains. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--config string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The kubelet will load its initial configuration from this file. The path may be absolute or relative; relative paths start at the kubelet's current working directory. Omit this flag to use the built-in default configuration values. 
Command-line flags override configuration from this file.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--config-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: ''<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Path to a directory to specify drop-ins, allows the user to optionally specify additional configs to overwrite what is provided by default and in the <code>--config<\/code> flag.<br\/><B>Note<\/B>: Set the '<code>KUBELET_CONFIG_DROPIN_DIR_ALPHA<\/code>' environment variable to specify the directory.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--container-log-max-files int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">&lt;Warning: Beta feature&gt; Set the maximum number of container log files that can be present for a container. The number must be &gt;= 2. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--container-log-max-size string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>10Mi<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">&lt;Warning: Beta feature&gt; Set the maximum size (e.g. <code>10Mi<\/code>) of container log file before it is rotated.  (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--container-runtime-endpoint string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>\"unix:\/\/\/run\/containerd\/containerd.sock\"<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The endpoint of remote runtime service. UNIX domain sockets are supported on Linux, while 'npipe' and 'tcp' endpoints are supported on windows. Examples: <code>'unix:\/\/\/path\/to\/runtime.sock'<\/code>, <code>'npipe:\/\/\/\/.\/pipe\/runtime'<\/code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--contention-profiling<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Enable block profiling, if profiling is enabled. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cpu-cfs-quota&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Enable CPU CFS quota enforcement for containers that specify CPU limits. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cpu-cfs-quota-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>100ms<\/code><\/td>\n   <\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Sets CPU CFS quota period value, <code>cpu.cfs_period_us<\/code>, defaults to Linux Kernel default. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cpu-manager-policy string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>none<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The CPU manager policy to use. Possible values: &quot;<code>none<\/code>&quot;, &quot;<code>static<\/code>&quot;. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cpu-manager-policy-options string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A set of 'key=value' CPU manager policy options to use, to fine tune their behaviour. If not supplied, keep the default behaviour. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cpu-manager-reconcile-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>10s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">&lt;Warning: Alpha feature&gt; CPU manager reconciliation period. Examples: &quot;<code>10s<\/code>&quot;, or &quot;<code>1m<\/code>&quot;. If not supplied, defaults to node status update frequency. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-controller-attach-detach&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Enables the Attach\/Detach controller to manage attachment\/detachment of volumes scheduled to this node, and disables kubelet from executing any attach\/detach operations. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-debugging-handlers&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Enables server endpoints for log collection and local running of containers and commands. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-server&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Enable the kubelet's server. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enforce-node-allocatable strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>pods<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A comma separated list of levels of node allocatable enforcement to be enforced by kubelet. Acceptable options are &quot;<code>none<\/code>&quot;, &quot;<code>pods<\/code>&quot;, &quot;<code>system-reserved<\/code>&quot;, and &quot;<code>kube-reserved<\/code>&quot;. If the latter two options are specified, <code>--system-reserved-cgroup<\/code> and <code>--kube-reserved-cgroup<\/code> must also be set, respectively. If &quot;<code>none<\/code>&quot; is specified, no additional options should be set. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/reserve-compute-resources\/\">official documentation<\/a> for more details. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--event-burst int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 100<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Maximum size of a bursty event records, temporarily allows event records to burst to this number, while still not exceeding <code>--event-qps<\/code>. The number must be &gt;= 0. If 0 will use default burst (100). (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--event-qps int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 50<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">QPS to limit event creations. The number must be &gt;= 0. If 0 will use default QPS (50). (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--eviction-hard strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>imagefs.available<15%,memory.available<100Mi,nodefs.available<10%<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A set of eviction thresholds (e.g. &quot;<code>memory.available<1Gi<\/code>&quot;) that if met would trigger a pod eviction. On a Linux node, the default value also includes &quot;<code>nodefs.inodesFree<5%<\/code>&quot;. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--eviction-max-pod-grace-period int32<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"> Maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met. If negative, defer to pod specified value. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--eviction-minimum-reclaim strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A set of minimum reclaims (e.g. &quot;<code>imagefs.available=2Gi<\/code>&quot;) that describes the minimum amount of resource the kubelet will reclaim when performing a pod eviction if that resource is under pressure. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--eviction-pressure-transition-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>5m0s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Duration for which the kubelet has to wait before transitioning out of an eviction pressure condition. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--eviction-soft strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A set of eviction thresholds (e.g. &quot;<code>memory.available<1.5Gi<\/code>&quot;) that if met over a corresponding grace period would trigger a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--eviction-soft-grace-period strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A set of eviction grace periods (e.g. &quot;<code>memory.available=1m30s<\/code>&quot;) that correspond to how long a soft eviction threshold must hold before triggering a pod eviction. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--exit-on-lock-contention<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Whether kubelet should exit upon lock-file contention.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--experimental-allocatable-ignore-eviction&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>false<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">When set to <code>true<\/code>, hard eviction thresholds will be ignored while calculating node allocatable. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/reserve-compute-resources\/\">here<\/a> for more details. (DEPRECATED: will be removed in 1.25 or later)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--experimental-mounter-path string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>mount<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">[Experimental] Path of mounter binary. Leave empty to use the default <code>mount<\/code>. (DEPRECATED: will be removed in 1.24 or later, in favor of using CSI.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--fail-swap-on&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Makes the kubelet fail to start if swap is enabled on the node. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--feature-gates &lt;A list of 'key=true\/false' pairs&gt;<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A set of <code>key=value<\/code> pairs that describe feature gates for alpha\/experimental features. 
Options are:<br\/>\n\nAPIResponseCompression=true|false (BETA - default=true)<br\/>\nAPIServerIdentity=true|false (BETA - default=true)<br\/>\nAPIServerTracing=true|false (BETA - default=true)<br\/>\nAPIServingWithRoutine=true|false (BETA - default=true)<br\/>\nAllAlpha=true|false (ALPHA - default=false)<br\/>\nAllBeta=true|false (BETA - default=false)<br\/>\nAnyVolumeDataSource=true|false (BETA - default=true)<br\/>\nAppArmor=true|false (BETA - default=true)<br\/>\nAppArmorFields=true|false (BETA - default=true)<br\/>\nCPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br\/>\nCPUManagerPolicyBetaOptions=true|false (BETA - default=true)<br\/>\nCPUManagerPolicyOptions=true|false (BETA - default=true)<br\/>\nCRDValidationRatcheting=true|false (BETA - default=true)<br\/>\nCSIMigrationPortworx=true|false (BETA - default=false)<br\/>\nCSIVolumeHealth=true|false (ALPHA - default=false)<br\/>\nCloudControllerManagerWebhook=true|false (ALPHA - default=false)<br\/>\nClusterTrustBundle=true|false (ALPHA - default=false)<br\/>\nClusterTrustBundleProjection=true|false (ALPHA - default=false)<br\/>\nComponentSLIs=true|false (BETA - default=true)<br\/>\nConsistentListFromCache=true|false (ALPHA - default=false)<br\/>\nContainerCheckpoint=true|false (BETA - default=true)<br\/>\nContextualLogging=true|false (BETA - default=true)<br\/>\nCronJobsScheduledAnnotation=true|false (BETA - default=true)<br\/>\nCrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)<br\/>\nCustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)<br\/>\nCustomResourceFieldSelectors=true|false (ALPHA - default=false)<br\/>\nDevicePluginCDIDevices=true|false (BETA - default=true)<br\/>\nDisableCloudProviders=true|false (BETA - default=true)<br\/>\nDisableKubeletCloudCredentialProviders=true|false (BETA - default=true)<br\/>\nDisableNodeKubeProxyVersion=true|false (ALPHA - default=false)<br\/>\nDynamicResourceAllocation=true|false (ALPHA - 
default=false)<br\/>\nElasticIndexedJob=true|false (BETA - default=true)<br\/>\nEventedPLEG=true|false (ALPHA - default=false)<br\/>\nGracefulNodeShutdown=true|false (BETA - default=true)<br\/>\nGracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)<br\/>\nHPAScaleToZero=true|false (ALPHA - default=false)<br\/>\nHonorPVReclaimPolicy=true|false (ALPHA - default=false)<br\/>\nImageMaximumGCAge=true|false (BETA - default=true)<br\/>\nInPlacePodVerticalScaling=true|false (ALPHA - default=false)<br\/>\nInTreePluginAWSUnregister=true|false (ALPHA - default=false)<br\/>\nInTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)<br\/>\nInTreePluginAzureFileUnregister=true|false (ALPHA - default=false)<br\/>\nInTreePluginGCEUnregister=true|false (ALPHA - default=false)<br\/>\nInTreePluginOpenStackUnregister=true|false (ALPHA - default=false)<br\/>\nInTreePluginPortworxUnregister=true|false (ALPHA - default=false)<br\/>\nInTreePluginvSphereUnregister=true|false (ALPHA - default=false)<br\/>\nInformerResourceVersion=true|false (ALPHA - default=false)<br\/>\nJobBackoffLimitPerIndex=true|false (BETA - default=true)<br\/>\nJobManagedBy=true|false (ALPHA - default=false)<br\/>\nJobPodFailurePolicy=true|false (BETA - default=true)<br\/>\nJobPodReplacementPolicy=true|false (BETA - default=true)<br\/>\nJobSuccessPolicy=true|false (ALPHA - default=false)<br\/>\nKubeProxyDrainingTerminatingNodes=true|false (BETA - default=true)<br\/>\nKubeletCgroupDriverFromCRI=true|false (ALPHA - default=false)<br\/>\nKubeletInUserNamespace=true|false (ALPHA - default=false)<br\/>\nKubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)<br\/>\nKubeletPodResourcesGet=true|false (ALPHA - default=false)<br\/>\nKubeletSeparateDiskGC=true|false (ALPHA - default=false)<br\/>\nKubeletTracing=true|false (BETA - default=true)<br\/>\nLoadBalancerIPMode=true|false (BETA - default=true)<br\/>\nLocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 
default=false)<br\/>\nLogarithmicScaleDown=true|false (BETA - default=true)<br\/>\nLoggingAlphaOptions=true|false (ALPHA - default=false)<br\/>\nLoggingBetaOptions=true|false (BETA - default=true)<br\/>\nMatchLabelKeysInPodAffinity=true|false (ALPHA - default=false)<br\/>\nMatchLabelKeysInPodTopologySpread=true|false (BETA - default=true)<br\/>\nMaxUnavailableStatefulSet=true|false (ALPHA - default=false)<br\/>\nMemoryManager=true|false (BETA - default=true)<br\/>\nMemoryQoS=true|false (ALPHA - default=false)<br\/>\nMultiCIDRServiceAllocator=true|false (ALPHA - default=false)<br\/>\nMutatingAdmissionPolicy=true|false (ALPHA - default=false)<br\/>\nNFTablesProxyMode=true|false (ALPHA - default=false)<br\/>\nNodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)<br\/>\nNodeLogQuery=true|false (BETA - default=false)<br\/>\nNodeSwap=true|false (BETA - default=true)<br\/>\nOpenAPIEnums=true|false (BETA - default=true)<br\/>\nPDBUnhealthyPodEvictionPolicy=true|false (BETA - default=true)<br\/>\nPersistentVolumeLastPhaseTransitionTime=true|false (BETA - default=true)<br\/>\nPodAndContainerStatsFromCRI=true|false (ALPHA - default=false)<br\/>\nPodDeletionCost=true|false (BETA - default=true)<br\/>\nPodDisruptionConditions=true|false (BETA - default=true)<br\/>\nPodIndexLabel=true|false (BETA - default=true)<br\/>\nPodLifecycleSleepAction=true|false (BETA - default=true)<br\/>\nPodReadyToStartContainersCondition=true|false (BETA - default=true)<br\/>\nPortForwardWebsockets=true|false (ALPHA - default=false)<br\/>\nProcMountType=true|false (ALPHA - default=false)<br\/>\nQOSReserved=true|false (ALPHA - default=false)<br\/>\nRecoverVolumeExpansionFailure=true|false (ALPHA - default=false)<br\/>\nRecursiveReadOnlyMounts=true|false (ALPHA - default=false)<br\/>\nRelaxedEnvironmentVariableValidation=true|false (ALPHA - default=false)<br\/>\nRetryGenerateName=true|false (ALPHA - default=false)<br\/>\nRotateKubeletServerCertificate=true|false (BETA - 
default=true)<br\/>\nRuntimeClassInImageCriApi=true|false (ALPHA - default=false)<br\/>\nSELinuxMount=true|false (ALPHA - default=false)<br\/>\nSELinuxMountReadWriteOncePod=true|false (BETA - default=true)<br\/>\nSchedulerQueueingHints=true|false (BETA - default=false)<br\/>\nSeparateCacheWatchRPC=true|false (BETA - default=true)<br\/>\nSeparateTaintEvictionController=true|false (BETA - default=true)<br\/>\nServiceAccountTokenJTI=true|false (BETA - default=true)<br\/>\nServiceAccountTokenNodeBinding=true|false (ALPHA - default=false)<br\/>\nServiceAccountTokenNodeBindingValidation=true|false (BETA - default=true)<br\/>\nServiceAccountTokenPodNodeInfo=true|false (BETA - default=true)<br\/>\nServiceTrafficDistribution=true|false (ALPHA - default=false)<br\/>\nSidecarContainers=true|false (BETA - default=true)<br\/>\nSizeMemoryBackedVolumes=true|false (BETA - default=true)<br\/>\nStatefulSetAutoDeletePVC=true|false (BETA - default=true)<br\/>\nStatefulSetStartOrdinal=true|false (BETA - default=true)<br\/>\nStorageNamespaceIndex=true|false (BETA - default=true)<br\/>\nStorageVersionAPI=true|false (ALPHA - default=false)<br\/>\nStorageVersionHash=true|false (BETA - default=true)<br\/>\nStorageVersionMigrator=true|false (ALPHA - default=false)<br\/>\nStructuredAuthenticationConfiguration=true|false (BETA - default=true)<br\/>\nStructuredAuthorizationConfiguration=true|false (BETA - default=true)<br\/>\nTopologyAwareHints=true|false (BETA - default=true)<br\/>\nTopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br\/>\nTopologyManagerPolicyBetaOptions=true|false (BETA - default=true)<br\/>\nTopologyManagerPolicyOptions=true|false (BETA - default=true)<br\/>\nTranslateStreamCloseWebsocketRequests=true|false (BETA - default=true)<br\/>\nUnauthenticatedHTTP2DOSMitigation=true|false (BETA - default=true)<br\/>\nUnknownVersionInteroperabilityProxy=true|false (ALPHA - default=false)<br\/>\nUserNamespacesPodSecurityStandards=true|false (ALPHA - 
default=false)<br\/>\nUserNamespacesSupport=true|false (BETA - default=false)<br\/>\nVolumeAttributesClass=true|false (ALPHA - default=false)<br\/>\nVolumeCapacityPriority=true|false (ALPHA - default=false)<br\/>\nWatchFromStorageWithoutResourceVersion=true|false (BETA - default=false)<br\/>\nWatchList=true|false (ALPHA - default=false)<br\/>\nWatchListClient=true|false (BETA - default=false)<br\/>\nWinDSR=true|false (ALPHA - default=false)<br\/>\nWinOverlay=true|false (BETA - default=true)<br\/>\nWindowsHostNetwork=true|false (ALPHA - default=true)<br\/>\n(DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--file-check-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>20s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Duration between checking config files for new data. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--hairpin-mode string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>promiscuous-bridge<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">How should the kubelet setup hairpin NAT. This allows endpoints of a Service to load balance back to themselves if they should try to access their own Service. Valid values are &quot;<code>promiscuous-bridge<\/code>&quot;, &quot;<code>hairpin-veth<\/code>&quot; and &quot;<code>none<\/code>&quot;. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--healthz-bind-address string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>127.0.0.1<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The IP address for the healthz server to serve on (set to &quot;<code>0.0.0.0<\/code>&quot; or &quot;<code>::<\/code>&quot; for listening on all interfaces and IP families). (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--healthz-port int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10248<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The port of the localhost healthz endpoint (set to <code>0<\/code> to disable). (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">help for kubelet<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--hostname-override string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If non-empty, the kubelet will use this string as identification instead of the actual hostname. 
If <code>--cloud-provider<\/code> is set, the cloud provider determines the name of the node (consult cloud provider documentation to determine if and how the hostname is used).<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--http-check-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>20s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Duration between checking HTTP for new data. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image-credential-provider-bin-dir string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The path to the directory where credential provider plugin binaries are located.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image-credential-provider-config string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The path to the credential provider plugin config file.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image-gc-high-threshold int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 85<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The percent of disk usage after which image garbage collection is always run. Values must be within the range [0, 100]. To disable image garbage collection, set to 100. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image-gc-low-threshold int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 80<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Values must be within the range [0, 100] and should not be larger than that of <code>--image-gc-high-threshold<\/code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image-service-endpoint string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The endpoint of the remote image service. If not specified, it will be the same as <code>--container-runtime-endpoint<\/code> by default. UNIX domain sockets are supported on Linux, while <code>npipe<\/code> and <code>tcp<\/code> endpoints are supported on Windows. Examples: <code>unix:\/\/\/path\/to\/runtime.sock<\/code>, <code>npipe:\/\/\/\/.\/pipe\/runtime<\/code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--keep-terminated-pod-volumes<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Keep terminated pod volumes mounted to the node after the pod terminates. Can be useful for debugging volume-related issues. 
(DEPRECATED: will be removed in a future version)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kernel-memcg-notification<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed, rather than polling. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kube-api-burst int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 100<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Burst to use while talking with the Kubernetes API server. The number must be &gt;= 0. If 0, the default burst (100) will be used. This does not cover the events and node heartbeat APIs, for which rate limiting is controlled by a different set of flags. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kube-api-content-type string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>application\/vnd.kubernetes.protobuf<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Content type of requests sent to the API server. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kube-api-qps int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 50<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">QPS to use while talking with the Kubernetes API server. The number must be &gt;= 0. If 0, the default QPS (50) will be used. This does not cover the events and node heartbeat APIs, for which rate limiting is controlled by a different set of flags. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kube-reserved strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: &lt;None&gt;<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A set of <code>&lt;resource name&gt;=&lt;resource quantity&gt;<\/code> (e.g. &quot;<code>cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100'<\/code>&quot;) pairs that describe resources reserved for Kubernetes system components. Currently <code>cpu<\/code>, <code>memory<\/code> and local <code>ephemeral-storage<\/code> for the root file system are supported. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/reserve-compute-resources\/#kube-reserved\">here<\/a> for more detail. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kube-reserved-cgroup string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>''<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Absolute name of the top level cgroup that is used to manage kubernetes components for which compute resources were reserved via <code>--kube-reserved<\/code> flag. Ex. &quot;<code>\/kube-reserved<\/code>&quot;. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Path to a kubeconfig file, specifying how to connect to the API server. Providing <code>--kubeconfig<\/code> enables API server mode; omitting <code>--kubeconfig<\/code> enables standalone mode.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubelet-cgroups string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Optional absolute name of cgroups to create and run the kubelet in. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--local-storage-capacity-isolation&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If true, local ephemeral storage isolation is enabled. Otherwise, the local storage isolation feature will be disabled. 
(DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--lock-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">&lt;Warning: Alpha feature&gt; The path to file for kubelet to use as a lock file.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-flush-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>5s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Maximum number of seconds between log flushes.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-json-info-buffer-size string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>'0'<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">[Alpha] In JSON format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). Enable the <code>LoggingAlphaOptions<\/code> feature gate to use this. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-json-split-stream<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">[Alpha] In JSON format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. Enable the <code>LoggingAlphaOptions<\/code> feature gate to use this. 
(DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-text-info-buffer-size string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>'0'<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">[Alpha] In text format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). Enable the <code>LoggingAlphaOptions<\/code> feature gate to use this. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-text-split-stream<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">[Alpha] In text format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. Enable the <code>LoggingAlphaOptions<\/code> feature gate to use this. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--logging-format string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>text<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Sets the log format. 
Permitted formats: &quot;<code>json<\/code>&quot; (gated by <code>LoggingBetaOptions<\/code>), &quot;<code>text<\/code>&quot;. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--make-iptables-util-chains&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If true, the kubelet will ensure <code>iptables<\/code> utility rules are present on the host. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--manifest-url string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">URL for accessing additional Pod specifications to run. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--manifest-url-header strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Comma-separated list of HTTP headers to use when accessing the URL provided to <code>--manifest-url<\/code>. Multiple headers with the same name will be added in the same order provided. This flag can be repeatedly invoked. 
For example: <code>--manifest-url-header 'a:hello,b:again,c:world' --manifest-url-header 'b:beautiful'<\/code> (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--max-open-files int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1000000<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Number of files that can be opened by kubelet process. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--max-pods int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 110<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Number of Pods that can run on this kubelet. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--maximum-dead-containers int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Maximum number of old instances of containers to retain globally. Each container takes up some disk space. To disable, set to a negative number. (DEPRECATED: Use <code>--eviction-hard<\/code> or <code>--eviction-soft<\/code> instead. 
Will be removed in a future version.)<\/td>\n<\/tr>\n\n <tr>\n<td colspan=\"2\">--maximum-dead-containers-per-container int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Maximum number of old instances to retain per container.  Each container takes up some disk space. (DEPRECATED: Use <code>--eviction-hard<\/code> or <code>--eviction-soft<\/code> instead. Will be removed in a future version.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--memory-manager-policy string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>None<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Memory Manager policy to use. Possible values: &quot;<code>None<\/code>&quot;, &quot;<code>Static<\/code>&quot;. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--minimum-container-ttl-duration duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Minimum age for a finished container before it is garbage collected. Examples: &quot;<code>300ms<\/code>&quot;, &quot;<code>10s<\/code>&quot; or &quot;<code>2h45m<\/code>&quot;. (DEPRECATED: Use <code>--eviction-hard<\/code> or <code>--eviction-soft<\/code> instead. Will be removed in a future version.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--minimum-image-ttl-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>2m0s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Minimum age for an unused image before it is garbage collected. Examples: &quot;<code>300ms<\/code>&quot;, &quot;<code>10s<\/code>&quot; or &quot;<code>2h45m<\/code>&quot;. 
(DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-ip string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">IP address (or comma-separated dual-stack IP addresses) of the node. If unset, the kubelet will use the node's default IPv4 address, if any, or its default IPv6 address if it has no IPv4 addresses. You can pass &quot;<code>::<\/code>&quot; to make it prefer the default IPv6 address rather than the default IPv4 address.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-labels &lt;key=value pairs&gt;<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">&lt;Warning: Alpha feature&gt; Labels to add when registering the node in the cluster. Labels must be <code>key=value<\/code> pairs separated by <code>','<\/code>. 
Labels in the <code>'kubernetes.io'<\/code> namespace must begin with an allowed prefix (<code>'kubelet.kubernetes.io'<\/code>, <code>'node.kubernetes.io'<\/code>) or be in the specifically allowed set (<code>'beta.kubernetes.io\/arch'<\/code>, <code>'beta.kubernetes.io\/instance-type'<\/code>, <code>'beta.kubernetes.io\/os'<\/code>, <code>'failure-domain.beta.kubernetes.io\/region'<\/code>, <code>'failure-domain.beta.kubernetes.io\/zone'<\/code>, <code>'kubernetes.io\/arch'<\/code>, <code>'kubernetes.io\/hostname'<\/code>, <code>'kubernetes.io\/os'<\/code>, <code>'node.kubernetes.io\/instance-type'<\/code>, <code>'topology.kubernetes.io\/region'<\/code>, <code>'topology.kubernetes.io\/zone'<\/code>)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-status-max-images int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 50<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The maximum number of images to report in <code>node.status.images<\/code>. If <code>-1<\/code> is specified, no cap will be applied. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-status-update-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>10s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Specifies how often kubelet posts node status to master. <B>Note<\/B>: be cautious when changing the constant, it must work with <code>nodeMonitorGracePeriod<\/code> in Node controller. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--oom-score-adj int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -999<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The <code>oom-score-adj<\/code> value for the kubelet process. Values must be within the range [-1000, 1000]. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-cidr string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master. For IPv6, the maximum number of IPs allocated is 65536. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-infra-container-image string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>registry.k8s.io\/pause:3.9<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Specified image will not be pruned by the image garbage collector. CRI implementations have their own configuration to set this image. (DEPRECATED: will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-manifest-path string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Path to the directory containing static pod files to run, or the path to a single static pod file. Files starting with dots will be ignored. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-max-pids int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Set the maximum number of processes per pod. If <code>-1<\/code>, the kubelet defaults to the node allocatable PID capacity. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pods-per-core int32<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Number of Pods per core that can run on this kubelet. The total number of pods on this kubelet cannot exceed <code>--max-pods<\/code>, so <code>--max-pods<\/code> will be used if this calculation results in a larger number of pods allowed on the kubelet. A value of <code>0<\/code> disables this limit. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--port int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10250<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The port for the kubelet to serve on. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--protect-kernel-defaults<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Default kubelet behaviour for kernel tuning. If set, the kubelet errors if any kernel tunable differs from the kubelet defaults. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--provider-id string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Unique identifier for identifying the node in a machine database, i.e., a cloud provider.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--qos-reserved string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">&lt;Warning: Alpha feature&gt; A set of <code>&lt;resource name&gt;=&lt;percentage&gt;<\/code> (e.g. &quot;<code>memory=50%<\/code>&quot;) pairs that describe how pod resource requests are reserved at the QoS level. Currently only <code>memory<\/code> is supported. Requires the <code>QOSReserved<\/code> feature gate to be enabled. 
(DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--read-only-port int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10255<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The read-only port for the kubelet to serve on with no authentication\/authorization (set to <code>0<\/code> to disable). (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--register-node&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Register the node with the API server. If <code>--kubeconfig<\/code> is not provided, this flag is irrelevant, as the kubelet won't have an API server to register with. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--register-schedulable&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Register the node as schedulable. Won't have any effect if <code>--register-node<\/code> is <code>false<\/code>. 
(DEPRECATED: will be removed in a future version)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--register-with-taints string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Register the node with the given list of taints (comma separated <code>&lt;key&gt;=&lt;value&gt;:&lt;effect&gt;<\/code>). No-op if <code>--register-node<\/code> is <code>false<\/code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--registry-burst int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Maximum size of a burst of pulls; temporarily allows pulls to burst to this number, while still not exceeding <code>--registry-qps<\/code>. Only used if <code>--registry-qps<\/code> is greater than 0. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--registry-qps int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If &gt; 0, limit registry pull QPS to this value. If <code>0<\/code>, unlimited. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--reserved-cpus string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A comma-separated list of CPUs or CPU ranges that are reserved for system and kubernetes usage. This specific list will supersede cpu counts in <code>--system-reserved<\/code> and <code>--kube-reserved<\/code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--reserved-memory string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A comma-separated list of memory reservations for NUMA nodes. (e.g. &quot;<code>--reserved-memory 0:memory=1Gi,hugepages-1M=2Gi --reserved-memory 1:memory=2Gi<\/code>&quot;). The total sum for each memory type should be equal to the sum of <code>--kube-reserved<\/code>, <code>--system-reserved<\/code> and <code>--eviction-threshold<\/code>. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/memory-manager\/#reserved-memory-flag\">here<\/a> for more details. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--resolv-conf string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>\/etc\/resolv.conf<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Resolver configuration file used as the basis for the container DNS resolution configuration. 
(DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--root-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>\/var\/lib\/kubelet<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Directory path for managing kubelet files (volume mounts, etc).<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--rotate-certificates<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Auto rotate the kubelet client certificates by requesting new certificates from the <code>kube-apiserver<\/code> when the certificate expiration approaches. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--rotate-server-certificates<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Auto-request and rotate the kubelet serving certificates by requesting new certificates from the <code>kube-apiserver<\/code> when the certificate expiration approaches. Requires the <code>RotateKubeletServerCertificate<\/code> feature gate to be enabled, and approval of the submitted <code>CertificateSigningRequest<\/code> objects. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--runonce<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If <code>true<\/code>, exit after spawning pods from local manifests or remote URLs. Exclusive with <code>--enable-server<\/code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--runtime-cgroups string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Optional absolute name of cgroups to create and run the runtime in.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--runtime-request-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>2m0s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Timeout of all runtime requests except long-running requests: <code>pull<\/code>, <code>logs<\/code>, <code>exec<\/code> and <code>attach<\/code>. When the timeout is exceeded, the kubelet cancels the request, throws an error, and retries later. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--seccomp-default<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Enable the use of <code>RuntimeDefault<\/code> as the default seccomp profile for all workloads.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--serialize-image-pulls&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Pull images one at a time. We recommend *not* changing the default value on nodes that run docker daemon with version &lt; 1.9 or an <code>aufs<\/code> storage backend. Issue #10959 has more details. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--streaming-connection-idle-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>4h0m0s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Maximum time a streaming connection can be idle before the connection is automatically closed. <code>0<\/code> indicates no timeout. Example: <code>5m<\/code>. Note: All connections to the kubelet server have a maximum duration of 4 hours.  (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. 
See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--sync-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>1m0s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Max period between synchronizing running containers and config. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--system-cgroups string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under <code>'\/'<\/code>. Empty for no container. Rolling back the flag requires a reboot. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--system-reserved string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: &lt;none&gt;<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A set of <code>&lt;resource name&gt;=&lt;resource quantity&gt;<\/code> (e.g. &quot;<code>cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100'<\/code>&quot;) pairs that describe resources reserved for non-kubernetes components. Currently only <code>cpu<\/code> and <code>memory<\/code> and local ephemeral storage for root file system are supported. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/reserve-compute-resources\/#system-reserved\">here<\/a> for more detail. 
(DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--system-reserved-cgroup string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>''<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via <code>--system-reserved<\/code> flag. Ex. <code>\/system-reserved<\/code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-cert-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">File containing x509 certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If <code>--tls-cert-file<\/code> and <code>--tls-private-key-file<\/code> are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to <code>--cert-dir<\/code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-cipher-suites string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Comma-separated list of cipher suites for the server. 
If omitted, the default Go cipher suites will be used.<br\/>\nPreferred values:\n<code>TLS_AES_128_GCM_SHA256<\/code>, <code>TLS_AES_256_GCM_SHA384<\/code>, <code>TLS_CHACHA20_POLY1305_SHA256<\/code>, <code>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA<\/code>, <code>TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256<\/code>, <code>TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA<\/code>, <code>TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384<\/code>, <code>TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305<\/code>, <code>TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256<\/code>, <code>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA<\/code>, <code>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256<\/code>, <code>TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA<\/code>, <code>TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384<\/code>, <code>TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305<\/code>, <code>TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256<\/code>, <code>TLS_RSA_WITH_AES_128_CBC_SHA<\/code>, <code>TLS_RSA_WITH_AES_128_GCM_SHA256<\/code>, <code>TLS_RSA_WITH_AES_256_CBC_SHA<\/code>, <code>TLS_RSA_WITH_AES_256_GCM_SHA384<\/code><br\/>\nInsecure values:\n<code>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256<\/code>, <code>TLS_ECDHE_ECDSA_WITH_RC4_128_SHA<\/code>, <code>TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA<\/code>, <code>TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256<\/code>, <code>TLS_ECDHE_RSA_WITH_RC4_128_SHA<\/code>, <code>TLS_RSA_WITH_3DES_EDE_CBC_SHA<\/code>, <code>TLS_RSA_WITH_AES_128_CBC_SHA256<\/code>, <code>TLS_RSA_WITH_RC4_128_SHA<\/code>.<br\/>\n(DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-min-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Minimum TLS version supported. 
Possible values: &quot;<code>VersionTLS10<\/code>&quot;, &quot;<code>VersionTLS11<\/code>&quot;, &quot;<code>VersionTLS12<\/code>&quot;, &quot;<code>VersionTLS13<\/code>&quot;. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-private-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">File containing x509 private key matching <code>--tls-cert-file<\/code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n\n<tr>\n<td colspan=\"2\">--topology-manager-policy string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>'none'<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Topology Manager policy to use. Possible values: &quot;<code>none<\/code>&quot;, &quot;<code>best-effort<\/code>&quot;, &quot;<code>restricted<\/code>&quot;, &quot;<code>single-numa-node<\/code>&quot;. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--topology-manager-policy-options string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">A set of &lt;key&gt;=&lt;value&gt; topology manager policy options to use, to fine tune their behaviour. If not supplied, keep the default behaviour. 
(DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--topology-manager-scope string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>container<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Scope to which topology hints are applied. Topology manager collects hints from hint providers and applies them to the defined scope to ensure the pod admission. Possible values: &quot;<code>container<\/code>&quot;, &quot;<code>pod<\/code>&quot;. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-v, --v Level<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Number for the log level verbosity<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Print version information and quit; <code>--version=vX.Y.Z...<\/code> sets the reported version.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--vmodule &lt;A list of 'pattern=N' strings&gt;<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Comma-separated list of <code>pattern=N<\/code> settings for file-filtered logging (only works for text log format).<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--volume-plugin-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>\/usr\/libexec\/kubernetes\/kubelet-plugins\/volume\/exec\/<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The full path of 
the directory in which to search for additional third party volume plugins. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--volume-stats-agg-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>1m0s<\/code><\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to a negative number. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config<\/code> flag. See <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/kubelet-config-file\/\">kubelet-config-file<\/a> for more information.)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>
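Almost every deprecated flag in the table above carries the same note: set the value via the file passed to <code>--config<\/code>. The sketch below shows, under stated assumptions, how a handful of those flags might map to <code>KubeletConfiguration<\/code> fields. The file path and the chosen values are illustrative only, not a recommendation; consult the kubelet config-file documentation linked in each row for the authoritative field names.

```shell
# Hedged sketch: write a minimal KubeletConfiguration that mirrors a few of
# the deprecated flags from the table above. Path and values are illustrative.
CFG="${TMPDIR:-/tmp}/kubelet-config.yaml"
cat > "$CFG" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
port: 10250                      # was --port
readOnlyPort: 0                  # was --read-only-port (0 disables it)
registerNode: true               # was --register-node
registryPullQPS: 5               # was --registry-qps
registryBurst: 10                # was --registry-burst
podsPerCore: 10                  # was --pods-per-core
nodeStatusUpdateFrequency: 10s   # was --node-status-update-frequency
EOF
# The kubelet would then be started with the config file instead of flags:
#   kubelet --config "$CFG"
```

Flags given on the command line take precedence over the config file for as long as they remain supported, which is why mixing the two styles is discouraged.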
all containers to search this domain in addition to the host s search domains   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    config string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   The kubelet will load its initial configuration from this file  The path may be absolute or relative  relative paths start at the kubelet s current working directory  Omit this flag to use the built in default configuration values  Command line flags override configuration from this file   td    tr    tr   td colspan  2    config dir string nbsp  nbsp  nbsp  nbsp  nbsp Default      td    tr   tr   td   td  td style  line height  130   word wrap  break word   Path to a directory to specify drop ins  allows the user to optionally specify additional configs to overwrite what is provided by default and in the    config  flag  br   B Note  B   Set the   code KUBELET CONFIG DROPIN DIR ALPHA  code   environment variable to specify the directory   td    tr    tr   td colspan  2    container log max files int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  5  td    tr   tr   td   td  td style  line height  130   word wrap  break word    lt Warning  Beta feature gt  Set the maximum number of container log files that can be present for a container  The number must be  gt   2   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    container log max size string nbsp  nbsp  nbsp  nbsp  nbsp Default   code 10Mi  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word    lt Warning  
Beta feature gt  Set the maximum size  e g   code 10Mi  code   of container log file before it is rotated    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    container runtime endpoint string nbsp  nbsp  nbsp  nbsp  nbsp Default   code  unix    run containerd containerd sock   code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   The endpoint of remote runtime service  UNIX domain sockets are supported on Linux  while  npipe  and  tcp  endpoints are supported on windows  Examples   code  unix    path to runtime sock   code    code  npipe       pipe runtime   code    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    contention profiling  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Enable block profiling  if profiling is enabled   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    cpu cfs quota nbsp  nbsp  nbsp  nbsp  nbsp Default   code true  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Enable CPU CFS quota enforcement for containers that specify CPU limits   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet 
config file  a  for more information    td    tr    tr   td colspan  2    cpu cfs quota period duration nbsp  nbsp  nbsp  nbsp  nbsp Default   code 100ms  code   td       tr   tr   td   td  td style  line height  130   word wrap  break word   Sets CPU CFS quota period value   code cpu cfs period us  code   defaults to Linux Kernel default   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    cpu manager policy string nbsp  nbsp  nbsp  nbsp  nbsp Default   code none  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   The CPU manager policy to use  Possible values   quot  code none  code  quot    quot  code static  code  quot    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    cpu manager policy options string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   A set of  key value  CPU manager policy options to use  to fine tune their behaviour  If not supplied  keep the default behaviour   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    cpu manager reconcile period duration nbsp  nbsp  nbsp  nbsp  nbsp Default   code 10s  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word    lt Warning  Alpha feature gt  CPU manager reconciliation period  Examples   quot  code 10s  code  quot   or  
quot  code 1m  code  quot   If not supplied  defaults to node status update frequency   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    enable controller attach detach nbsp  nbsp  nbsp  nbsp  nbsp Default   code true  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Enables the Attach Detach controller to manage attachment detachment of volumes scheduled to this node  and disables kubelet from executing any attach detach operations   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    enable debugging handlers nbsp  nbsp  nbsp  nbsp  nbsp Default   code true  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Enables server endpoints for log collection and local running of containers and commands   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    enable server nbsp  nbsp  nbsp  nbsp  nbsp Default   code true  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Enable the kubelet s server   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    
enforce node allocatable strings nbsp  nbsp  nbsp  nbsp  nbsp Default   code pods  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   A comma separated list of levels of node allocatable enforcement to be enforced by kubelet  Acceptable options are  quot  code none  code  quot    quot  code pods  code  quot    quot  code system reserved  code  quot   and  quot  code kube reserved  code  quot   If the latter two options are specified   code   system reserved cgroup  code  and  code   kube reserved cgroup  code  must also be set  respectively  If  quot  code none  code  quot  is specified  no additional options should be set  See  a href  https   kubernetes io docs tasks administer cluster reserve compute resources   official documentation  a  for more details   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    event burst int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  100  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Maximum size of a bursty event records  temporarily allows event records to burst to this number  while still not exceeding  code   event qps  code   The number must be  gt   0  If 0 will use default burst  100    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    event qps int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  50  td    tr   tr   td   td  td style  line height  130   word wrap  break word   QPS to limit event creations  The number must be  gt   0  If 0 will use default QPS  50    DEPRECATED  This parameter should be set via 
the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    eviction hard strings nbsp  nbsp  nbsp  nbsp  nbsp Default   code imagefs available 15  memory available 100Mi nodefs available 10   code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   A set of eviction thresholds  e g   quot  code memory available 1Gi  code  quot   that if met would trigger a pod eviction  On a Linux node  the default value also includes  quot  code nodefs inodesFree 5   code  quot    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    eviction max pod grace period int32  td    tr   tr   td   td  td style  line height  130   word wrap  break word    Maximum allowed grace period  in seconds  to use when terminating pods in response to a soft eviction threshold being met  If negative  defer to pod specified value   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    eviction minimum reclaim strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word   A set of minimum reclaims  e g   quot  code imagefs available 2Gi  code  quot   that describes the minimum amount of resource the kubelet will reclaim when performing a pod eviction if that resource is under pressure   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   
kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    eviction pressure transition period duration nbsp  nbsp  nbsp  nbsp  nbsp Default   code 5m0s  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Duration for which the kubelet has to wait before transitioning out of an eviction pressure condition   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    eviction soft strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word   A set of eviction thresholds  e g   quot  code memory available 1 5Gi  code  quot   that if met over a corresponding grace period would trigger a pod eviction   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    eviction soft grace period strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word   A set of eviction grace periods  e g   quot  code memory available 1m30s  code  quot   that correspond to how long a soft eviction threshold must hold before triggering a pod eviction   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    exit on lock contention  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Whether kubelet should exit upon lock file 
contention   td    tr    tr   td colspan  2    experimental allocatable ignore eviction nbsp  nbsp  nbsp  nbsp  nbsp Default   code false  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   When set to  code true  code   hard eviction thresholds will be ignored while calculating node allocatable  See  a href  https   kubernetes io docs tasks administer cluster reserve compute resources   here  a  for more details   DEPRECATED  will be removed in 1 25 or later   td    tr    tr   td colspan  2    experimental mounter path string nbsp  nbsp  nbsp  nbsp  nbsp Default   code mount  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word    Experimental  Path of mounter binary  Leave empty to use the default  code mount  code    DEPRECATED  will be removed in 1 24 or later  in favor of using CSI    td    tr    tr   td colspan  2    fail swap on nbsp  nbsp  nbsp  nbsp  nbsp Default   code true  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Makes the kubelet fail to start if swap is enabled on the node   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    feature gates  lt A list of  key true false  pairs gt   td    tr   tr   td   td  td style  line height  130   word wrap  break word   A set of  code key value  code  pairs that describe feature gates for alpha experimental features  Options are  br    APIResponseCompression true false  BETA   default true  br   APIServerIdentity true false  BETA   default true  br   APIServerTracing true false  BETA   default true  br   APIServingWithRoutine true false  BETA   default true  br   AllAlpha true false  ALPHA   default false  br   AllBeta true false  BETA   default false  br   AnyVolumeDataSource 
true false  BETA   default true  br   AppArmor true false  BETA   default true  br   AppArmorFields true false  BETA   default true  br   CPUManagerPolicyAlphaOptions true false  ALPHA   default false  br   CPUManagerPolicyBetaOptions true false  BETA   default true  br   CPUManagerPolicyOptions true false  BETA   default true  br   CRDValidationRatcheting true false  BETA   default true  br   CSIMigrationPortworx true false  BETA   default false  br   CSIVolumeHealth true false  ALPHA   default false  br   CloudControllerManagerWebhook true false  ALPHA   default false  br   ClusterTrustBundle true false  ALPHA   default false  br   ClusterTrustBundleProjection true false  ALPHA   default false  br   ComponentSLIs true false  BETA   default true  br   ConsistentListFromCache true false  ALPHA   default false  br   ContainerCheckpoint true false  BETA   default true  br   ContextualLogging true false  BETA   default true  br   CronJobsScheduledAnnotation true false  BETA   default true  br   CrossNamespaceVolumeDataSource true false  ALPHA   default false  br   CustomCPUCFSQuotaPeriod true false  ALPHA   default false  br   CustomResourceFieldSelectors true false  ALPHA   default false  br   DevicePluginCDIDevices true false  BETA   default true  br   DisableCloudProviders true false  BETA   default true  br   DisableKubeletCloudCredentialProviders true false  BETA   default true  br   DisableNodeKubeProxyVersion true false  ALPHA   default false  br   DynamicResourceAllocation true false  ALPHA   default false  br   ElasticIndexedJob true false  BETA   default true  br   EventedPLEG true false  ALPHA   default false  br   GracefulNodeShutdown true false  BETA   default true  br   GracefulNodeShutdownBasedOnPodPriority true false  BETA   default true  br   HPAScaleToZero true false  ALPHA   default false  br   HonorPVReclaimPolicy true false  ALPHA   default false  br   ImageMaximumGCAge true false  BETA   default true  br   InPlacePodVerticalScaling true false  
ALPHA   default false  br   InTreePluginAWSUnregister true false  ALPHA   default false  br   InTreePluginAzureDiskUnregister true false  ALPHA   default false  br   InTreePluginAzureFileUnregister true false  ALPHA   default false  br   InTreePluginGCEUnregister true false  ALPHA   default false  br   InTreePluginOpenStackUnregister true false  ALPHA   default false  br   InTreePluginPortworxUnregister true false  ALPHA   default false  br   InTreePluginvSphereUnregister true false  ALPHA   default false  br   InformerResourceVersion true false  ALPHA   default false  br   JobBackoffLimitPerIndex true false  BETA   default true  br   JobManagedBy true false  ALPHA   default false  br   JobPodFailurePolicy true false  BETA   default true  br   JobPodReplacementPolicy true false  BETA   default true  br   JobSuccessPolicy true false  ALPHA   default false  br   KubeProxyDrainingTerminatingNodes true false  BETA   default true  br   KubeletCgroupDriverFromCRI true false  ALPHA   default false  br   KubeletInUserNamespace true false  ALPHA   default false  br   KubeletPodResourcesDynamicResources true false  ALPHA   default false  br   KubeletPodResourcesGet true false  ALPHA   default false  br   KubeletSeparateDiskGC true false  ALPHA   default false  br   KubeletTracing true false  BETA   default true  br   LoadBalancerIPMode true false  BETA   default true  br   LocalStorageCapacityIsolationFSQuotaMonitoring true false  ALPHA   default false  br   LogarithmicScaleDown true false  BETA   default true  br   LoggingAlphaOptions true false  ALPHA   default false  br   LoggingBetaOptions true false  BETA   default true  br   MatchLabelKeysInPodAffinity true false  ALPHA   default false  br   MatchLabelKeysInPodTopologySpread true false  BETA   default true  br   MaxUnavailableStatefulSet true false  ALPHA   default false  br   MemoryManager true false  BETA   default true  br   MemoryQoS true false  ALPHA   default false  br   MultiCIDRServiceAllocator true false  
ALPHA   default false  br   MutatingAdmissionPolicy true false  ALPHA   default false  br   NFTablesProxyMode true false  ALPHA   default false  br   NodeInclusionPolicyInPodTopologySpread true false  BETA   default true  br   NodeLogQuery true false  BETA   default false  br   NodeSwap true false  BETA   default true  br   OpenAPIEnums true false  BETA   default true  br   PDBUnhealthyPodEvictionPolicy true false  BETA   default true  br   PersistentVolumeLastPhaseTransitionTime true false  BETA   default true  br   PodAndContainerStatsFromCRI true false  ALPHA   default false  br   PodDeletionCost true false  BETA   default true  br   PodDisruptionConditions true false  BETA   default true  br   PodIndexLabel true false  BETA   default true  br   PodLifecycleSleepAction true false  BETA   default true  br   PodReadyToStartContainersCondition true false  BETA   default true  br   PortForwardWebsockets true false  ALPHA   default false  br   ProcMountType true false  ALPHA   default false  br   QOSReserved true false  ALPHA   default false  br   RecoverVolumeExpansionFailure true false  ALPHA   default false  br   RecursiveReadOnlyMounts true false  ALPHA   default false  br   RelaxedEnvironmentVariableValidation true false  ALPHA   default false  br   RetryGenerateName true false  ALPHA   default false  br   RotateKubeletServerCertificate true false  BETA   default true  br   RuntimeClassInImageCriApi true false  ALPHA   default false  br   SELinuxMount true false  ALPHA   default false  br   SELinuxMountReadWriteOncePod true false  BETA   default true  br   SchedulerQueueingHints true false  BETA   default false  br   SeparateCacheWatchRPC true false  BETA   default true  br   SeparateTaintEvictionController true false  BETA   default true  br   ServiceAccountTokenJTI true false  BETA   default true  br   ServiceAccountTokenNodeBinding true false  ALPHA   default false  br   ServiceAccountTokenNodeBindingValidation true false  BETA   default true  br   
ServiceAccountTokenPodNodeInfo true false  BETA   default true  br   ServiceTrafficDistribution true false  ALPHA   default false  br   SidecarContainers true false  BETA   default true  br   SizeMemoryBackedVolumes true false  BETA   default true  br   StatefulSetAutoDeletePVC true false  BETA   default true  br   StatefulSetStartOrdinal true false  BETA   default true  br   StorageNamespaceIndex true false  BETA   default true  br   StorageVersionAPI true false  ALPHA   default false  br   StorageVersionHash true false  BETA   default true  br   StorageVersionMigrator true false  ALPHA   default false  br   StructuredAuthenticationConfiguration true false  BETA   default true  br   StructuredAuthorizationConfiguration true false  BETA   default true  br   TopologyAwareHints true false  BETA   default true  br   TopologyManagerPolicyAlphaOptions true false  ALPHA   default false  br   TopologyManagerPolicyBetaOptions true false  BETA   default true  br   TopologyManagerPolicyOptions true false  BETA   default true  br   TranslateStreamCloseWebsocketRequests true false  BETA   default true  br   UnauthenticatedHTTP2DOSMitigation true false  BETA   default true  br   UnknownVersionInteroperabilityProxy true false  ALPHA   default false  br   UserNamespacesPodSecurityStandards true false  ALPHA   default false  br   UserNamespacesSupport true false  BETA   default false  br   VolumeAttributesClass true false  ALPHA   default false  br   VolumeCapacityPriority true false  ALPHA   default false  br   WatchFromStorageWithoutResourceVersion true false  BETA   default false  br   WatchList true false  ALPHA   default false  br   WatchListClient true false  BETA   default false  br   WinDSR true false  ALPHA   default false  br   WinOverlay true false  BETA   default true  br   WindowsHostNetwork true false  ALPHA   default true  br    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   
kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    file check frequency duration nbsp  nbsp  nbsp  nbsp  nbsp Default   code 20s  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Duration between checking config files for new data   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    hairpin mode string nbsp  nbsp  nbsp  nbsp  nbsp Default   code promiscuous bridge  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   How should the kubelet setup hairpin NAT  This allows endpoints of a Service to load balance back to themselves if they should try to access their own Service  Valid values are  quot  code promiscuous bridge  code  quot    quot  code hairpin veth  code  quot  and  quot  code none  code  quot    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    healthz bind address string nbsp  nbsp  nbsp  nbsp  nbsp Default   code 127 0 0 1  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   The IP address for the healthz server to serve on  set to  quot  code 0 0 0 0  code  quot  or  quot  code     code  quot  for listening in all interfaces and IP families    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    
tr    tr   td colspan  2    healthz port int32 nbsp  nbsp  nbsp  nbsp  nbsp Default  10248  td    tr   tr   td   td  td style  line height  130   word wrap  break word   The port of the localhost healthz endpoint  set to  code 0  code  to disable    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word   help for kubelet  td    tr    tr   td colspan  2    hostname override string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   If non empty  will use this string as identification instead of the actual hostname  If  code   cloud provider  code  is set  the cloud provider determines the name of the node  consult cloud provider documentation to determine if and how the hostname is used    td    tr    tr   td colspan  2    http check frequency duration nbsp  nbsp  nbsp  nbsp  nbsp Default   code 20s  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Duration between checking HTTP for new data   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    image credential provider bin dir string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   The path to the directory where credential provider plugin binaries are located   td    tr    tr   td colspan  2    image credential provider config string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   The path to the credential provider plugin config file   td    tr    tr   
<td colspan="2">--image-gc-high-threshold int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 85</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">The percent of disk usage after which image garbage collection is always run. Values must be within the range [0, 100]. To disable image garbage collection, set to 100. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--image-gc-low-threshold int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 80</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Values must be within the range [0, 100] and should not be larger than that of <code>--image-gc-high-threshold</code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--image-service-endpoint string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">The endpoint of the remote image service. If not specified, it defaults to the same value as <code>--container-runtime-endpoint</code>. UNIX domain sockets are supported on Linux, while <code>npipe</code> and <code>tcp</code> endpoints are supported on Windows. Examples: <code>unix:///path/to/runtime.sock</code>, <code>npipe:////./pipe/runtime</code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--keep-terminated-pod-volumes</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Keep terminated pod volumes mounted to the node after the pod terminates. Can be useful for debugging volume-related issues. (DEPRECATED: will be removed in a future version)</td></tr>

<tr><td colspan="2">--kernel-memcg-notification</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">If enabled, the kubelet will integrate with the kernel memcg notification to determine if memory eviction thresholds are crossed rather than polling. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--kube-api-burst int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 100</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Burst to use while talking with the kubernetes API server. The number must be &gt;= 0. If 0, the default burst (100) is used. Doesn't cover events and node heartbeat APIs, whose rate limiting is controlled by a different set of flags. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--kube-api-content-type string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>application/vnd.kubernetes.protobuf</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Content type of requests sent to apiserver. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--kube-api-qps int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 50</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">QPS to use while talking with the kubernetes API server. The number must be &gt;= 0. If 0, the default QPS (50) is used. Doesn't cover events and node heartbeat APIs, whose rate limiting is controlled by a different set of flags. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--kube-reserved strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: &lt;None&gt;</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">A set of <code>&lt;resource name&gt;=&lt;resource quantity&gt;</code> (e.g. "<code>cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100'</code>") pairs that describe resources reserved for kubernetes system components. Currently <code>cpu</code>, <code>memory</code> and local <code>ephemeral-storage</code> for root file system are supported. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#kube-reserved">here</a> for more detail. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--kube-reserved-cgroup string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>''</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Absolute name of the top level cgroup that is used to manage kubernetes components for which compute resources were reserved via the <code>--kube-reserved</code> flag. Ex: "<code>/kube-reserved</code>". (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--kubeconfig string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Path to a kubeconfig file, specifying how to connect to the API server. Providing <code>--kubeconfig</code> enables API server mode; omitting <code>--kubeconfig</code> enables standalone mode.</td></tr>

<tr><td colspan="2">--kubelet-cgroups string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Optional absolute name of cgroups to create and run the kubelet in. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--local-storage-capacity-isolation&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">If true, local ephemeral storage isolation is enabled. Otherwise, the local storage isolation feature will be disabled. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--lock-file string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Alpha feature&gt; The path to the file for the kubelet to use as a lock file.</td></tr>

<tr><td colspan="2">--log-flush-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>5s</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Maximum number of seconds between log flushes.</td></tr>

<tr><td colspan="2">--log-json-info-buffer-size string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>'0'</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">[Alpha] In JSON format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as a number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). Enable the <code>LoggingAlphaOptions</code> feature gate to use this. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--log-json-split-stream</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">[Alpha] In JSON format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. Enable the <code>LoggingAlphaOptions</code> feature gate to use this. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--log-text-info-buffer-size string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>'0'</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">[Alpha] In text format with split output streams, the info messages can be buffered for a while to increase performance. The default value of zero bytes disables buffering. The size can be specified as a number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). Enable the <code>LoggingAlphaOptions</code> feature gate to use this. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--log-text-split-stream</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">[Alpha] In text format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. Enable the <code>LoggingAlphaOptions</code> feature gate to use this. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--logging-format string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>text</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Sets the log format. Permitted formats: "<code>json</code>" (gated by <code>LoggingBetaOptions</code>), "<code>text</code>". (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--make-iptables-util-chains&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">If true, the kubelet will ensure <code>iptables</code> utility rules are present on the host. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--manifest-url string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">URL for accessing additional Pod specifications to run. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--manifest-url-header strings</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Comma-separated list of HTTP headers to use when accessing the URL provided to <code>--manifest-url</code>. Multiple headers with the same name will be added in the same order provided. This flag can be repeatedly invoked. For example: <code>--manifest-url-header 'a:hello,b:again,c:world' --manifest-url-header 'b:beautiful'</code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--max-open-files int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1000000</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Number of files that can be opened by the kubelet process. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--max-pods int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 110</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Number of Pods that can run on this kubelet. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--maximum-dead-containers int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Maximum number of old instances of containers to retain globally. Each container takes up some disk space. To disable, set to a negative number. (DEPRECATED: Use <code>--eviction-hard</code> or <code>--eviction-soft</code> instead. Will be removed in a future version.)</td></tr>

<tr><td colspan="2">--maximum-dead-containers-per-container int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Maximum number of old instances to retain per container. Each container takes up some disk space. (DEPRECATED: Use <code>--eviction-hard</code> or <code>--eviction-soft</code> instead. Will be removed in a future version.)</td></tr>

<tr><td colspan="2">--memory-manager-policy string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>None</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Memory Manager policy to use. Possible values: "<code>None</code>", "<code>Static</code>". (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--minimum-container-ttl-duration duration</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Minimum age for a finished container before it is garbage collected. Examples: "<code>300ms</code>", "<code>10s</code>" or "<code>2h45m</code>". (DEPRECATED: Use <code>--eviction-hard</code> or <code>--eviction-soft</code> instead. Will be removed in a future version.)</td></tr>

<tr><td colspan="2">--minimum-image-ttl-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>2m0s</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Minimum age for an unused image before it is garbage collected. Examples: "<code>300ms</code>", "<code>10s</code>" or "<code>2h45m</code>". (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--node-ip string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">IP address (or comma-separated dual-stack IP addresses) of the node. If unset, the kubelet will use the node's default IPv4 address, if any, or its default IPv6 address if it has no IPv4 addresses. You can pass "<code>::</code>" to make it prefer the default IPv6 address rather than the default IPv4 address.</td></tr>

<tr><td colspan="2">--node-labels &lt;key=value pairs&gt;</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Alpha feature&gt; Labels to add when registering the node in the cluster. Labels must be <code>key=value</code> pairs separated by <code>','</code>. Labels in the <code>kubernetes.io</code> namespace must begin with an allowed prefix (<code>kubelet.kubernetes.io</code>, <code>node.kubernetes.io</code>) or be in the specifically allowed set (<code>beta.kubernetes.io/arch</code>, <code>beta.kubernetes.io/instance-type</code>, <code>beta.kubernetes.io/os</code>, <code>failure-domain.beta.kubernetes.io/region</code>, <code>failure-domain.beta.kubernetes.io/zone</code>, <code>kubernetes.io/arch</code>, <code>kubernetes.io/hostname</code>, <code>kubernetes.io/os</code>, <code>node.kubernetes.io/instance-type</code>, <code>topology.kubernetes.io/region</code>, <code>topology.kubernetes.io/zone</code>).</td></tr>

<tr><td colspan="2">--node-status-max-images int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 50</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">The maximum number of images to report in <code>node.status.images</code>. If <code>-1</code> is specified, no cap will be applied. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--node-status-update-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>10s</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Specifies how often the kubelet posts node status to master. <b>Note</b>: be cautious when changing the constant; it must work with <code>nodeMonitorGracePeriod</code> in the Node controller. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--oom-score-adj int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -999</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">The <code>oom-score-adj</code> value for the kubelet process. Values must be within the range [-1000, 1000]. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--pod-cidr string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">The CIDR to use for pod IP addresses, only used in standalone mode. In cluster mode, this is obtained from the master. For IPv6, the maximum number of IPs allocated is 65536. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--pod-infra-container-image string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>registry.k8s.io/pause:3.9</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Specified image will not be pruned by the image garbage collector. CRI implementations have their own configuration to set this image. (DEPRECATED: will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.)</td></tr>

<tr><td colspan="2">--pod-manifest-path string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Path to the directory containing static pod files to run, or the path to a single static pod file. Files starting with dots will be ignored. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--pod-max-pids int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Set the maximum number of processes per pod. If <code>-1</code>, the kubelet defaults to the node allocatable PID capacity. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--pods-per-core int32</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Number of Pods per core that can run on this kubelet. The total number of pods on this kubelet cannot exceed <code>--max-pods</code>, so <code>--max-pods</code> will be used if this calculation results in a larger number of pods allowed on the kubelet. A value of <code>0</code> disables this limit. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--port int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10250</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">The port for the kubelet to serve on. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--protect-kernel-defaults</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Default kubelet behaviour for kernel tuning. If set, the kubelet errors if any kernel tunables are different than the kubelet defaults. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--provider-id string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Unique identifier for identifying the node in a machine database, i.e. a cloud provider.</td></tr>

<tr><td colspan="2">--qos-reserved string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">&lt;Warning: Alpha feature&gt; A set of <code>&lt;resource name&gt;=&lt;percentage&gt;</code> (e.g. "<code>memory=50%</code>") pairs that describe how pod resource requests are reserved at the QoS level. Currently only <code>memory</code> is supported. Requires the <code>QOSReserved</code> feature gate to be enabled. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--read-only-port int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10255</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">The read-only port for the kubelet to serve on with no authentication/authorization (set to <code>0</code> to disable). (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--register-node&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Register the node with the API server. If <code>--kubeconfig</code> is not provided, this flag is irrelevant, as the kubelet won't have an API server to register with. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--register-schedulable&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Register the node as schedulable. Won't have any effect if <code>--register-node</code> is <code>false</code>. (DEPRECATED: will be removed in a future version)</td></tr>

<tr><td colspan="2">--register-with-taints string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Register the node with the given list of taints (comma-separated <code>&lt;key&gt;=&lt;value&gt;:&lt;effect&gt;</code>). No-op if <code>--register-node</code> is <code>false</code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--registry-burst int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Maximum size of bursty pulls; temporarily allows pulls to burst to this number, while still not exceeding <code>--registry-qps</code>. Only used if <code>--registry-qps</code> is greater than 0. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--registry-qps int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">If &gt; 0, limit registry pull QPS to this value. If <code>0</code>, unlimited. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--reserved-cpus string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">A comma-separated list of CPUs or CPU ranges that are reserved for system and kubernetes usage. This specific list will supersede cpu counts in <code>--system-reserved</code> and <code>--kube-reserved</code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--reserved-memory string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">A comma-separated list of memory reservations for NUMA nodes. (e.g. "<code>--reserved-memory 0:memory=1Gi,hugepages-1M=2Gi --reserved-memory 1:memory=2Gi</code>"). The total sum for each memory type should be equal to the sum of <code>--kube-reserved</code>, <code>--system-reserved</code> and <code>--eviction-threshold</code>. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/#reserved-memory-flag">here</a> for more details. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--resolv-conf string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>/etc/resolv.conf</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Resolver configuration file used as the basis for the container DNS resolution configuration. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--root-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>/var/lib/kubelet</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Directory path for managing kubelet files (volume mounts, etc).</td></tr>

<tr><td colspan="2">--rotate-certificates</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Auto rotate the kubelet client certificates by requesting new certificates from the <code>kube-apiserver</code> when the certificate expiration approaches. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--rotate-server-certificates</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Auto-request and rotate the kubelet serving certificates by requesting new certificates from the <code>kube-apiserver</code> when the certificate expiration approaches. Requires the <code>RotateKubeletServerCertificate</code> feature gate to be enabled, and approval of the submitted <code>CertificateSigningRequest</code> objects. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--runonce</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">If <code>true</code>, exit after spawning pods from local manifests or remote urls. Exclusive with <code>--enable-server</code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--runtime-cgroups string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Optional absolute name of cgroups to create and run the runtime in.</td></tr>

<tr><td colspan="2">--runtime-request-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>2m0s</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Timeout of all runtime requests except long running requests (<code>pull</code>, <code>logs</code>, <code>exec</code> and <code>attach</code>). When the timeout is exceeded, the kubelet will cancel the request, throw out an error and retry later. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--seccomp-default</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Enable the use of <code>RuntimeDefault</code> as the default seccomp profile for all workloads.</td></tr>

<tr><td colspan="2">--serialize-image-pulls&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>true</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Pull images one at a time. We recommend <em>not</em> changing the default value on nodes that run a docker daemon with version &lt; 1.9 or an <code>aufs</code> storage backend. Issue #10959 has more details. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--streaming-connection-idle-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>4h0m0s</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Maximum time a streaming connection can be idle before the connection is automatically closed. <code>0</code> indicates no timeout. Example: <code>5m</code>. Note: All connections to the kubelet server have a maximum duration of 4 hours. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--sync-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>1m0s</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Max period between synchronizing running containers and config. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--system-cgroups string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under <code>/</code>. Empty for no container. Rolling back the flag requires a reboot. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--system-reserved string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: &lt;none&gt;</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">A set of <code>&lt;resource name&gt;=&lt;resource quantity&gt;</code> (e.g. "<code>cpu=200m,memory=500Mi,ephemeral-storage=1Gi,pid='100'</code>") pairs that describe resources reserved for non-kubernetes components. Currently only <code>cpu</code>, <code>memory</code> and local ephemeral storage for root file system are supported. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved">here</a> for more detail. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--system-reserved-cgroup string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: <code>''</code></td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via the <code>--system-reserved</code> flag. Ex: <code>/system-reserved</code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--tls-cert-file string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">File containing the x509 certificate used for serving HTTPS (with intermediate certs, if any, concatenated after the server cert). If <code>--tls-cert-file</code> and <code>--tls-private-key-file</code> are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to <code>--cert-dir</code>. (DEPRECATED: This parameter should be set via the config file specified by the kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/">kubelet-config-file</a> for more information.)</td></tr>

<tr><td colspan="2">--tls-cipher-suites string</td></tr>
<tr><td></td><td style="line-height: 130%; word-wrap: break-word;">Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.<br/>Preferred values: <code>TLS_AES_128_GCM_SHA256</code>, <code>TLS_AES_256_GCM_SHA384</code>, <code>TLS_CHACHA20
POLY1305 SHA256  code    code TLS ECDHE ECDSA WITH AES 128 CBC SHA  code    code TLS ECDHE ECDSA WITH AES 128 GCM SHA256  code    code TLS ECDHE ECDSA WITH AES 256 CBC SHA  code    code TLS ECDHE ECDSA WITH AES 256 GCM SHA384  code    code TLS ECDHE ECDSA WITH CHACHA20 POLY1305  code    code TLS ECDHE ECDSA WITH CHACHA20 POLY1305 SHA256  code    code TLS ECDHE RSA WITH AES 128 CBC SHA  code    code TLS ECDHE RSA WITH AES 128 GCM SHA256  code    code TLS ECDHE RSA WITH AES 256 CBC SHA  code    code TLS ECDHE RSA WITH AES 256 GCM SHA384  code    code TLS ECDHE RSA WITH CHACHA20 POLY1305  code    code TLS ECDHE RSA WITH CHACHA20 POLY1305 SHA256  code    code TLS RSA WITH AES 128 CBC SHA  code    code TLS RSA WITH AES 128 GCM SHA256  code    code TLS RSA WITH AES 256 CBC SHA  code    code TLS RSA WITH AES 256 GCM SHA384  code  br   Insecure values   code TLS ECDHE ECDSA WITH AES 128 CBC SHA256  code    code TLS ECDHE ECDSA WITH RC4 128 SHA  code    code TLS ECDHE RSA WITH 3DES EDE CBC SHA  code    code TLS ECDHE RSA WITH AES 128 CBC SHA256  code    code TLS ECDHE RSA WITH RC4 128 SHA  code    code TLS RSA WITH 3DES EDE CBC SHA  code    code TLS RSA WITH AES 128 CBC SHA256  code    code TLS RSA WITH RC4 128 SHA  code   br    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information     tr    tr   td colspan  2    tls min version string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Minimum TLS version supported  Possible values   quot  code VersionTLS10  code  quot    quot  code VersionTLS11  code  quot    quot  code VersionTLS12  code  quot    quot  code VersionTLS13  code  quot    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer 
cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    tls private key file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   File containing x509 private key matching  code   tls cert file  code    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr     tr   td colspan  2    topology manager policy string nbsp  nbsp  nbsp  nbsp  nbsp Default   code  none   code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Topology Manager policy to use  Possible values   quot  code none  code  quot    quot  code best effort  code  quot    quot  code restricted  code  quot    quot  code single numa node  code  quot    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    topology manager policy options string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   A set of  lt key gt   lt value gt  topology manager policy options to use  to fine tune their behaviour  If not supplied  keep the default behaviour   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    topology manager scope string nbsp  nbsp  nbsp  nbsp  nbsp Default   code container  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Scope to which topology hints are applied  
Topology manager collects hints from hint providers and applies them to the defined scope to ensure the pod admission  Possible values   quot  code container  code  quot    quot  code pod  code  quot    DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2   v    v Level  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Number for the log level verbosity  td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Print version information and quit   code   version vX Y Z     code  sets the reported version   td    tr    tr   td colspan  2    vmodule  lt A list of  pattern N  strings gt   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Comma separated list of  code pattern N  code  settings for file filtered logging  only works for text log format    td    tr    tr   td colspan  2    volume plugin dir string nbsp  nbsp  nbsp  nbsp  nbsp Default   code  usr libexec kubernetes kubelet plugins volume exec   code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   The full path of the directory in which to search for additional third party volume plugins   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tr   td colspan  2    volume stats agg period duration nbsp  nbsp  nbsp  nbsp  nbsp Default   code 1m0s  code   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Specifies interval for kubelet to calculate and cache the volume disk usage for all 
pods and volumes  To disable volume calculations  set to a negative number   DEPRECATED  This parameter should be set via the config file specified by the kubelet s  code   config  code  flag  See  a href  https   kubernetes io docs tasks administer cluster kubelet config file   kubelet config file  a  for more information    td    tr    tbody    table "}
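Nearly every kubelet flag above carries the same deprecation notice: the value should be set in the file passed via `--config` rather than on the command line. As a minimal sketch (the file name is illustrative, the values are the documented defaults, and field names follow the `KubeletConfiguration` v1beta1 API), the migration looks like this:

```shell
# Write a KubeletConfiguration file covering a few of the deprecated flags
# listed above; each field replaces the flag named in its trailing comment.
cat <<'EOF' > kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: true                 # --serialize-image-pulls
streamingConnectionIdleTimeout: "4h0m0s"  # --streaming-connection-idle-timeout
syncFrequency: "1m0s"                     # --sync-frequency
topologyManagerPolicy: "none"             # --topology-manager-policy
topologyManagerScope: "container"         # --topology-manager-scope
EOF
# The kubelet would then be started with the config file instead of the flags:
#   kubelet --config=kubelet-config.yaml
cat kubelet-config.yaml
```

Flags given on the command line still override the corresponding config-file fields, which is why mixing the two styles is discouraged.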
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kube scheduler","answers":"---\ntitle: kube-scheduler\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nThe Kubernetes scheduler is a control plane process which assigns\nPods to Nodes. The scheduler determines which Nodes are valid placements for\neach Pod in the scheduling queue according to constraints and available\nresources. The scheduler then ranks each valid Node and binds the Pod to a\nsuitable Node. Multiple different schedulers may be used within a cluster;\nkube-scheduler is the reference implementation.\nSee [scheduling](https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/)\nfor more information about scheduling and the kube-scheduler component.\n\n```\nkube-scheduler [flags]\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-metric-labels stringToString&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: []<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The map from metric-label to value allow-list of this label. 
The key's format is &lt;MetricName&gt;,&lt;LabelName&gt;. The value's format is &lt;allowed_value&gt;,&lt;allowed_value&gt;...e.g. metric1,label1='v1,v2,v3', metric1,label2='v1,v2,v3' metric2,label1='v1,v2,v3'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-metric-labels-manifest string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The path to the manifest file that contains the allow-list mapping. The format of the file is the same as the flag --allow-metric-labels. Note that the flag --allow-metric-labels will override the manifest file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. This is optional. If empty, all token requests are considered to be anonymous and no client CA is looked up in the cluster.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-skip-lookup<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-token-webhook-cache-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration to cache responses from the webhook token authenticator.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authentication-tolerate-lookup-failure&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, failures to look up missing authentication configuration from the cluster are not considered fatal. 
Note that this can result in authentication that treats all requests as anonymous.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-always-allow-paths strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"\/healthz,\/readyz,\/livez\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. This is optional. If empty, all requests not skipped by authorization are forbidden.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-webhook-cache-authorized-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration to cache 'authorized' responses from the webhook authorizer.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--authorization-webhook-cache-unauthorized-ttl duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration to cache 'unauthorized' responses from the webhook authorizer.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--bind-address string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 0.0.0.0<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI\/web clients. 
If blank or an unspecified address (0.0.0.0 or ::), all interfaces and IP address families will be used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cert-dir string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-ca-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--config string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The path to the configuration file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--contention-profiling&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>DEPRECATED: enable block profiling, if profiling is enabled. This parameter is ignored if a config file is specified in --config.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-http2-serving<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, HTTP2 serving will be disabled [default=false]<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disabled-metrics strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>This flag provides an escape hatch for misbehaving metrics. You must provide the fully qualified metric name in order to disable it. 
Disclaimer: disabling metrics is higher in precedence than showing hidden metrics.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--emulated-version strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The versions different components emulate their capabilities (APIs, features, ...) of.<br\/>If set, the component will emulate the behavior of this version instead of the underlying binary version.<br\/>Version format could only be major.minor, for example: '--emulated-version=wardle=1.2,kube=1.31'. Options are:<br\/>kube=1.31..1.31 (default=1.31)<br\/>If the component is not specified, defaults to &quot;kube&quot;<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--feature-gates colonSeparatedMultimapStringString<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Comma-separated list of component:key=value pairs that describe feature gates for alpha\/experimental features of different components.<br\/>If the component is not specified, defaults to &quot;kube&quot;. This flag can be repeatedly invoked. 
For example: --feature-gates 'wardle:featureA=true,wardle:featureB=false' --feature-gates 'kube:featureC=true'Options are:<br\/>kube:APIResponseCompression=true|false (BETA - default=true)<br\/>kube:APIServerIdentity=true|false (BETA - default=true)<br\/>kube:APIServerTracing=true|false (BETA - default=true)<br\/>kube:APIServingWithRoutine=true|false (ALPHA - default=false)<br\/>kube:AllAlpha=true|false (ALPHA - default=false)<br\/>kube:AllBeta=true|false (BETA - default=false)<br\/>kube:AnonymousAuthConfigurableEndpoints=true|false (ALPHA - default=false)<br\/>kube:AnyVolumeDataSource=true|false (BETA - default=true)<br\/>kube:AuthorizeNodeWithSelectors=true|false (ALPHA - default=false)<br\/>kube:AuthorizeWithSelectors=true|false (ALPHA - default=false)<br\/>kube:CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br\/>kube:CPUManagerPolicyBetaOptions=true|false (BETA - default=true)<br\/>kube:CPUManagerPolicyOptions=true|false (BETA - default=true)<br\/>kube:CRDValidationRatcheting=true|false (BETA - default=true)<br\/>kube:CSIMigrationPortworx=true|false (BETA - default=true)<br\/>kube:CSIVolumeHealth=true|false (ALPHA - default=false)<br\/>kube:CloudControllerManagerWebhook=true|false (ALPHA - default=false)<br\/>kube:ClusterTrustBundle=true|false (ALPHA - default=false)<br\/>kube:ClusterTrustBundleProjection=true|false (ALPHA - default=false)<br\/>kube:ComponentSLIs=true|false (BETA - default=true)<br\/>kube:ConcurrentWatchObjectDecode=true|false (BETA - default=false)<br\/>kube:ConsistentListFromCache=true|false (BETA - default=true)<br\/>kube:ContainerCheckpoint=true|false (BETA - default=true)<br\/>kube:ContextualLogging=true|false (BETA - default=true)<br\/>kube:CoordinatedLeaderElection=true|false (ALPHA - default=false)<br\/>kube:CronJobsScheduledAnnotation=true|false (BETA - default=true)<br\/>kube:CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)<br\/>kube:CustomCPUCFSQuotaPeriod=true|false (ALPHA - 
default=false)<br\/>kube:CustomResourceFieldSelectors=true|false (BETA - default=true)<br\/>kube:DRAControlPlaneController=true|false (ALPHA - default=false)<br\/>kube:DisableAllocatorDualWrite=true|false (ALPHA - default=false)<br\/>kube:DisableNodeKubeProxyVersion=true|false (BETA - default=true)<br\/>kube:DynamicResourceAllocation=true|false (ALPHA - default=false)<br\/>kube:EventedPLEG=true|false (ALPHA - default=false)<br\/>kube:GracefulNodeShutdown=true|false (BETA - default=true)<br\/>kube:GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)<br\/>kube:HPAScaleToZero=true|false (ALPHA - default=false)<br\/>kube:HonorPVReclaimPolicy=true|false (BETA - default=true)<br\/>kube:ImageMaximumGCAge=true|false (BETA - default=true)<br\/>kube:ImageVolume=true|false (ALPHA - default=false)<br\/>kube:InPlacePodVerticalScaling=true|false (ALPHA - default=false)<br\/>kube:InTreePluginPortworxUnregister=true|false (ALPHA - default=false)<br\/>kube:InformerResourceVersion=true|false (ALPHA - default=false)<br\/>kube:JobBackoffLimitPerIndex=true|false (BETA - default=true)<br\/>kube:JobManagedBy=true|false (ALPHA - default=false)<br\/>kube:JobPodReplacementPolicy=true|false (BETA - default=true)<br\/>kube:JobSuccessPolicy=true|false (BETA - default=true)<br\/>kube:KubeletCgroupDriverFromCRI=true|false (BETA - default=true)<br\/>kube:KubeletInUserNamespace=true|false (ALPHA - default=false)<br\/>kube:KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)<br\/>kube:KubeletPodResourcesGet=true|false (ALPHA - default=false)<br\/>kube:KubeletSeparateDiskGC=true|false (BETA - default=true)<br\/>kube:KubeletTracing=true|false (BETA - default=true)<br\/>kube:LoadBalancerIPMode=true|false (BETA - default=true)<br\/>kube:LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (BETA - default=false)<br\/>kube:LoggingAlphaOptions=true|false (ALPHA - default=false)<br\/>kube:LoggingBetaOptions=true|false (BETA - 
default=true)<br\/>kube:MatchLabelKeysInPodAffinity=true|false (BETA - default=true)<br\/>kube:MatchLabelKeysInPodTopologySpread=true|false (BETA - default=true)<br\/>kube:MaxUnavailableStatefulSet=true|false (ALPHA - default=false)<br\/>kube:MemoryManager=true|false (BETA - default=true)<br\/>kube:MemoryQoS=true|false (ALPHA - default=false)<br\/>kube:MultiCIDRServiceAllocator=true|false (BETA - default=false)<br\/>kube:MutatingAdmissionPolicy=true|false (ALPHA - default=false)<br\/>kube:NFTablesProxyMode=true|false (BETA - default=true)<br\/>kube:NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)<br\/>kube:NodeLogQuery=true|false (BETA - default=false)<br\/>kube:NodeSwap=true|false (BETA - default=true)<br\/>kube:OpenAPIEnums=true|false (BETA - default=true)<br\/>kube:PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)<br\/>kube:PodDeletionCost=true|false (BETA - default=true)<br\/>kube:PodIndexLabel=true|false (BETA - default=true)<br\/>kube:PodLifecycleSleepAction=true|false (BETA - default=true)<br\/>kube:PodReadyToStartContainersCondition=true|false (BETA - default=true)<br\/>kube:PortForwardWebsockets=true|false (BETA - default=true)<br\/>kube:ProcMountType=true|false (BETA - default=false)<br\/>kube:QOSReserved=true|false (ALPHA - default=false)<br\/>kube:RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)<br\/>kube:RecursiveReadOnlyMounts=true|false (BETA - default=true)<br\/>kube:RelaxedEnvironmentVariableValidation=true|false (ALPHA - default=false)<br\/>kube:ReloadKubeletServerCertificateFile=true|false (BETA - default=true)<br\/>kube:ResilientWatchCacheInitialization=true|false (BETA - default=true)<br\/>kube:ResourceHealthStatus=true|false (ALPHA - default=false)<br\/>kube:RetryGenerateName=true|false (BETA - default=true)<br\/>kube:RotateKubeletServerCertificate=true|false (BETA - default=true)<br\/>kube:RuntimeClassInImageCriApi=true|false (ALPHA - default=false)<br\/>kube:SELinuxMount=true|false (ALPHA - 
default=false)<br\/>kube:SELinuxMountReadWriteOncePod=true|false (BETA - default=true)<br\/>kube:SchedulerQueueingHints=true|false (BETA - default=false)<br\/>kube:SeparateCacheWatchRPC=true|false (BETA - default=true)<br\/>kube:SeparateTaintEvictionController=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenJTI=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenNodeBinding=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenNodeBindingValidation=true|false (BETA - default=true)<br\/>kube:ServiceAccountTokenPodNodeInfo=true|false (BETA - default=true)<br\/>kube:ServiceTrafficDistribution=true|false (BETA - default=true)<br\/>kube:SidecarContainers=true|false (BETA - default=true)<br\/>kube:SizeMemoryBackedVolumes=true|false (BETA - default=true)<br\/>kube:StatefulSetAutoDeletePVC=true|false (BETA - default=true)<br\/>kube:StorageNamespaceIndex=true|false (BETA - default=true)<br\/>kube:StorageVersionAPI=true|false (ALPHA - default=false)<br\/>kube:StorageVersionHash=true|false (BETA - default=true)<br\/>kube:StorageVersionMigrator=true|false (ALPHA - default=false)<br\/>kube:StrictCostEnforcementForVAP=true|false (BETA - default=false)<br\/>kube:StrictCostEnforcementForWebhooks=true|false (BETA - default=false)<br\/>kube:StructuredAuthenticationConfiguration=true|false (BETA - default=true)<br\/>kube:StructuredAuthorizationConfiguration=true|false (BETA - default=true)<br\/>kube:SupplementalGroupsPolicy=true|false (ALPHA - default=false)<br\/>kube:TopologyAwareHints=true|false (BETA - default=true)<br\/>kube:TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)<br\/>kube:TopologyManagerPolicyBetaOptions=true|false (BETA - default=true)<br\/>kube:TopologyManagerPolicyOptions=true|false (BETA - default=true)<br\/>kube:TranslateStreamCloseWebsocketRequests=true|false (BETA - default=true)<br\/>kube:UnauthenticatedHTTP2DOSMitigation=true|false (BETA - default=true)<br\/>kube:UnknownVersionInteroperabilityProxy=true|false 
(ALPHA - default=false)<br\/>kube:UserNamespacesPodSecurityStandards=true|false (ALPHA - default=false)<br\/>kube:UserNamespacesSupport=true|false (BETA - default=false)<br\/>kube:VolumeAttributesClass=true|false (BETA - default=false)<br\/>kube:VolumeCapacityPriority=true|false (ALPHA - default=false)<br\/>kube:WatchCacheInitializationPostStartHook=true|false (BETA - default=false)<br\/>kube:WatchFromStorageWithoutResourceVersion=true|false (BETA - default=false)<br\/>kube:WatchList=true|false (ALPHA - default=false)<br\/>kube:WatchListClient=true|false (BETA - default=false)<br\/>kube:WinDSR=true|false (ALPHA - default=false)<br\/>kube:WinOverlay=true|false (BETA - default=true)<br\/>kube:WindowsHostNetwork=true|false (ALPHA - default=true)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for kube-scheduler<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--http2-max-streams-per-connection int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The limit that the server gives to clients for the maximum number of streams in an HTTP\/2 connection. Zero means to use golang's default.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kube-api-burst int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 100<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>DEPRECATED: burst to use while talking with kubernetes apiserver. This parameter is ignored if a config file is specified in --config.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kube-api-content-type string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"application\/vnd.kubernetes.protobuf\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>DEPRECATED: content type of requests sent to apiserver. 
This parameter is ignored if a config file is specified in --config.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kube-api-qps float&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 50<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>DEPRECATED: QPS to use while talking with kubernetes apiserver. This parameter is ignored if a config file is specified in --config.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>DEPRECATED: path to kubeconfig file with authorization and master location information. This parameter is ignored if a config file is specified in --config.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-lease-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 15s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-renew-deadline duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than the lease duration. 
This is only applicable if leader election is enabled.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-resource-lock string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"leases\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The type of resource object that is used for locking during leader election. Supported options are 'leases', 'endpointsleases' and 'configmapsleases'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-resource-name string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kube-scheduler\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of resource object that is used for locking during leader election.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-resource-namespace string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kube-system\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The namespace of resource object that is used for locking during leader election.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leader-elect-retry-period duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 2s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-flush-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum number of seconds between log flushes<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-text-info-buffer-size quantity<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>[Alpha] In text format with split output streams, the info messages can be buffered for a while to increase performance. 
The default value of zero bytes disables buffering. The size can be specified as number of bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). Enable the LoggingAlphaOptions feature gate to use this.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-text-split-stream<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>[Alpha] In text format, write error messages to stderr and info messages to stdout. The default is to write a single stream to stdout. Enable the LoggingAlphaOptions feature gate to use this.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--logging-format string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"text\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Sets the log format. Permitted formats: &quot;text&quot;.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--master string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address of the Kubernetes API server (overrides any value in kubeconfig)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--permit-address-sharing<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false]<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--permit-port-sharing<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. 
[default=false]<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-max-in-unschedulable-pods-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>DEPRECATED: the maximum time a pod can stay in unschedulablePods. If a pod stays in unschedulablePods for longer than this value, the pod will be moved from unschedulablePods to backoffQ or activeQ. This flag is deprecated and will be removed in a future version.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profiling&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>DEPRECATED: enable profiling via web interface host:port\/debug\/pprof\/. This parameter is ignored if a config file is specified in --config.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-allowed-names strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-client-ca-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-extra-headers-prefix strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"x-remote-extra-\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of request header prefixes to inspect. 
X-Remote-Extra- is suggested.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-group-headers strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"x-remote-group\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of request headers to inspect for groups. X-Remote-Group is suggested.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requestheader-username-headers strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"x-remote-user\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>List of request headers to inspect for usernames. X-Remote-User is common.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--secure-port int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 10259<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-hidden-metrics-for-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The previous version for which you want to show hidden metrics. Only the previous minor version is meaningful; other values will not be allowed. The format is &lt;major&gt;.&lt;minor&gt;, e.g.: '1.16'. The purpose of this format is to make sure you have the opportunity to notice if the next release hides additional metrics, rather than being surprised when they are permanently removed in the release after that.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-cert-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). 
If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-cipher-suites strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.<br\/>Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256.<br\/>Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_RC4_128_SHA.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-min-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Minimum TLS version supported. 
Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-private-key-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>File containing the default x509 private key matching --tls-cert-file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-sni-cert-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches take precedence over wildcard matches, and explicit domain patterns take precedence over extracted names. For multiple key\/certificate pairs, use the --tls-sni-cert-key flag multiple times. Examples: &quot;example.crt,example.key&quot; or &quot;foo.crt,foo.key:*.foo.com,foo.com&quot;.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-v, --v int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Number for the log level verbosity.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--vmodule pattern=N,...<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>comma-separated list of pattern=N settings for file-filtered logging (only works for text log format)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--write-config-to string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If set, write the configuration values to this file and exit.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n
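The leader-election durations in the table above interact: per the flag descriptions, the renew deadline must be less than the lease duration, and the retry period is the shortest of the three (the retry-period ordering is an assumption here, not stated explicitly by the help text). A minimal sanity-check sketch using the documented defaults (15s lease, 10s renew deadline, 2s retry period); the function name is illustrative, not part of kube-scheduler:

```python
# Sanity-check kube-scheduler leader-election timing flags.
# Documented defaults: --leader-elect-lease-duration=15s,
# --leader-elect-renew-deadline=10s, --leader-elect-retry-period=2s.
# Assumption (labelled above): retry period < renew deadline as well.

def leader_election_ok(lease_duration: float,
                       renew_deadline: float,
                       retry_period: float) -> bool:
    """True when retry-period < renew-deadline < lease-duration."""
    return retry_period < renew_deadline < lease_duration

print(leader_election_ok(15.0, 10.0, 2.0))  # True with the documented defaults
```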
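The `--log-text-info-buffer-size` flag described above takes a quantity: plain bytes (512), multiples of 1000 (1K), multiples of 1024 (2Ki), or powers of those (3M, 4G, 5Mi, 6Gi). A small sketch of how those suffixes map to byte counts, as an illustration of the stated units only, not the component's actual quantity parser:

```python
# Map Kubernetes-style quantity suffixes to byte counts.
# Decimal suffixes are powers of 1000 (K, M, G); binary suffixes
# (Ki, Mi, Gi) are powers of 1024, matching the flag's description.

_SUFFIXES = {
    "K": 1000, "M": 1000**2, "G": 1000**3,
    "Ki": 1024, "Mi": 1024**2, "Gi": 1024**3,
}

def parse_quantity(q: str) -> int:
    """Return the byte count for a quantity like '512', '1K', or '2Ki'."""
    # Try longer suffixes first so '2Ki' matches 'Ki' rather than nothing.
    for suffix in sorted(_SUFFIXES, key=len, reverse=True):
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * _SUFFIXES[suffix]
    return int(q)  # bare byte count, e.g. '512'

print(parse_quantity("2Ki"))  # 2048
```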
key   foo com foo com quot    p   td    tr    tr   td colspan  2   v    v int  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p number for the log level verbosity  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    vmodule pattern N      td    tr   tr   td   td  td style  line height  130   word wrap  break word    p comma separated list of pattern N settings for file filtered logging  only works for text log format   p   td    tr    tr   td colspan  2    write config to string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If set  write the configuration values to this file and exit   p   td    tr     tbody    table    "}
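The request-header and secure-serving flags above are typically set together when an authenticating front proxy talks to the scheduler's HTTPS port. A hypothetical invocation sketch (all file paths and the allowed common name are illustrative assumptions, not defaults):

```shell
# Hypothetical kube-scheduler invocation wiring together the secure-serving
# and request-header flags described above. Certificate paths and the allowed
# CN are assumptions; adjust them for your cluster.
kube-scheduler \
  --secure-port=10259 \
  --tls-cert-file=/etc/kubernetes/pki/scheduler.crt \
  --tls-private-key-file=/etc/kubernetes/pki/scheduler.key \
  --tls-min-version=VersionTLS12 \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-extra-headers-prefix=X-Remote-Extra-
```

If `--tls-cert-file`/`--tls-private-key-file` were omitted here, a self-signed pair would be generated under `--cert-dir` as described in the table above.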
{"questions":"kubernetes reference name reference contenttype concept card weight 10 title Feature Gates overview weight 60","answers":"---\ntitle: Feature Gates\nweight: 10\ncontent_type: concept\ncard:\n  name: reference\n  weight: 60\n---\n\n<!-- overview -->\nThis page contains an overview of the various feature gates an administrator\ncan specify on different Kubernetes components.\n\nSee [feature stages](#feature-stages) for an explanation of the stages for a feature.\n\n\n<!-- body -->\n## Overview\n\nFeature gates are a set of key=value pairs that describe Kubernetes features.\nYou can turn these features on or off using the `--feature-gates` command line flag\non each Kubernetes component.\n\nEach Kubernetes component lets you enable or disable a set of feature gates that\nare relevant to that component.\nUse the `-h` flag to see the full set of feature gates for a component.\nTo set feature gates for a component, such as kubelet, use the `--feature-gates`\nflag assigned to a list of feature pairs:\n\n```shell\n--feature-gates=...,GracefulNodeShutdown=true\n```\n\nThe following tables are a summary of the feature gates that you can set on\ndifferent Kubernetes components.\n\n- The \"Since\" column contains the Kubernetes release when a feature is introduced\n  or its release stage is changed.\n- The \"Until\" column, if not empty, contains the last Kubernetes release in which\n  you can still use a feature gate.\n- If a feature is in the Alpha or Beta state, you can find the feature listed\n  in the [Alpha\/Beta feature gate table](#feature-gates-for-alpha-or-beta-features).\n- If a feature is stable you can find all stages for that feature listed in the\n  [Graduated\/Deprecated feature gate table](#feature-gates-for-graduated-or-deprecated-features).\n- The [Graduated\/Deprecated feature gate table](#feature-gates-for-graduated-or-deprecated-features)\n  also lists deprecated and withdrawn features.\n\n\nFor a reference to old feature gates that are 
removed, please refer to\n[feature gates removed](\/docs\/reference\/command-line-tools-reference\/feature-gates-removed\/).\n\n\n<!-- Want to edit this table? See https:\/\/k8s.io\/docs\/contribute\/new-content\/new-features\/#ready-for-review-feature-gates -->\n### Feature gates for Alpha or Beta features\n\n\n\n<!-- Want to edit this table? See https:\/\/k8s.io\/docs\/contribute\/new-content\/new-features\/#ready-for-review-feature-gates -->\n### Feature gates for graduated or deprecated features\n\n\n\n## Using a feature\n\n### Feature stages\n\nA feature can be in *Alpha*, *Beta* or *GA* stage.\nAn *Alpha* feature means:\n\n* Disabled by default.\n* Might be buggy. Enabling the feature may expose bugs.\n* Support for feature may be dropped at any time without notice.\n* The API may change in incompatible ways in a later software release without notice.\n* Recommended for use only in short-lived testing clusters, due to increased\n  risk of bugs and lack of long-term support.\n\nA *Beta* feature means:\n\n* Usually enabled by default. Beta API groups are\n  [disabled by default](https:\/\/github.com\/kubernetes\/enhancements\/tree\/master\/keps\/sig-architecture\/3136-beta-apis-off-by-default).\n* The feature is well tested. Enabling the feature is considered safe.\n* Support for the overall feature will not be dropped, though details may change.\n* The schema and\/or semantics of objects may change in incompatible ways in a\n  subsequent beta or stable release. When this happens, we will provide instructions\n  for migrating to the next version. This may require deleting, editing, and\n  re-creating API objects. The editing process may require some thought.\n  This may require downtime for applications that rely on the feature.\n* Recommended for only non-business-critical uses because of potential for\n  incompatible changes in subsequent releases. 
If you have multiple clusters\n  that can be upgraded independently, you may be able to relax this restriction.\n\n\nPlease do try *Beta* features and give feedback on them!\nAfter they exit beta, it may not be practical for us to make more changes.\n\n\nA *General Availability* (GA) feature is also referred to as a *stable* feature. It means:\n\n* The feature is always enabled; you cannot disable it.\n* The corresponding feature gate is no longer needed.\n* Stable versions of features will appear in released software for many subsequent versions.\n\n## List of feature gates {#feature-gates}\n\nEach feature gate is designed for enabling\/disabling a specific feature.\n\n<!-- Want to edit this list? See https:\/\/k8s.io\/docs\/contribute\/new-content\/new-features\/#ready-for-review-feature-gates -->\n\n\n## What's next\n\n* The [deprecation policy](\/docs\/reference\/using-api\/deprecation-policy\/) for Kubernetes explains\n  the project's approach to removing features and components.\n* Since Kubernetes 1.24, new beta APIs are not enabled by default. 
When enabling a beta\n  feature, you will also need to enable any associated API resources.\n  For example, to enable a particular resource like\n  `storage.k8s.io\/v1beta1\/csistoragecapacities`, set `--runtime-config=storage.k8s.io\/v1beta1\/csistoragecapacities`.\n  See [API Versioning](\/docs\/reference\/using-api\/#api-versioning) for more details on the command line flags.","site":"kubernetes reference"}
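Since `--feature-gates` takes a single comma-separated list of `Name=bool` pairs, multiple gates are joined into one flag value. A minimal shell sketch of composing that value (the gate names here are only illustrative):

```shell
# Build the comma-separated value that --feature-gates expects.
# Gate names are illustrative; a component refuses to start when it is
# given a gate name it does not recognize.
gates="GracefulNodeShutdown=true"
gates="${gates},WatchList=false"
echo "--feature-gates=${gates}"
# prints: --feature-gates=GracefulNodeShutdown=true,WatchList=false
```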
{"questions":"kubernetes reference title Flow control API Priority and Fairness controls the behavior of the Kubernetes API server in an overload situation You can find more information about it in the weight 130 overview","answers":"---\ntitle: Flow control\nweight: 130\n---\n\n<!-- overview -->\n\nAPI Priority and Fairness controls the behavior of the Kubernetes API server in\nan overload situation. You can find more information about it in the\n[API Priority and Fairness](\/docs\/concepts\/cluster-administration\/flow-control\/)\ndocumentation.\n\n<!-- body -->\n\n## Diagnostics\n\nEvery HTTP response from an API server with the priority and fairness feature\nenabled has two extra headers: `X-Kubernetes-PF-FlowSchema-UID` and\n`X-Kubernetes-PF-PriorityLevel-UID`, noting the flow schema that matched the request\nand the priority level to which it was assigned, respectively. The API objects'\nnames are not included in these headers (to avoid revealing details in case the\nrequesting user does not have permission to view them). 
When debugging, you\ncan use a command such as:\n\n```shell\nkubectl get flowschemas -o custom-columns=\"uid:{metadata.uid},name:{metadata.name}\"\nkubectl get prioritylevelconfigurations -o custom-columns=\"uid:{metadata.uid},name:{metadata.name}\"\n```\n\nto get a mapping of UIDs to names for both FlowSchemas and\nPriorityLevelConfigurations.\n\n## Debug endpoints\n\nWith the `APIPriorityAndFairness` feature enabled, the `kube-apiserver`\nserves the following additional paths at its HTTP(S) ports.\n\nYou need to ensure you have permissions to access these endpoints.\nYou don't have to do anything if you are using admin.\nPermissions can be granted if needed following the [RBAC](\/docs\/reference\/access-authn-authz\/rbac\/) doc\nto access `\/debug\/api_priority_and_fairness\/` by specifying `nonResourceURLs`.\n\n- `\/debug\/api_priority_and_fairness\/dump_priority_levels` - a listing of\n  all the priority levels and the current state of each.  You can fetch like this:\n\n  ```shell\n  kubectl get --raw \/debug\/api_priority_and_fairness\/dump_priority_levels\n  ```\n\n  The output will be in CSV and similar to this:\n\n  ```none\n  PriorityLevelName, ActiveQueues, IsIdle, IsQuiescing, WaitingRequests, ExecutingRequests, DispatchedRequests, RejectedRequests, TimedoutRequests, CancelledRequests\n  catch-all,         0,            true,   false,       0,               0,                 1,                  0,                0,                0\n  exempt,            0,            true,   false,       0,               0,                 0,                  0,                0,                0\n  global-default,    0,            true,   false,       0,               0,                 46,                 0,                0,                0\n  leader-election,   0,            true,   false,       0,               0,                 4,                  0,                0,                0\n  node-high,         0,            true,   false,       0,               0,   
              34,                 0,                0,                0\n  system,            0,            true,   false,       0,               0,                 48,                 0,                0,                0\n  workload-high,     0,            true,   false,       0,               0,                 500,                0,                0,                0\n  workload-low,      0,            true,   false,       0,               0,                 0,                  0,                0,                0\n  ```\n\n  Explanation for selected column names:\n  - `IsQuiescing` indicates if this priority level will be removed when its queues have been drained.\n\n- `\/debug\/api_priority_and_fairness\/dump_queues` - a listing of all the\n  queues and their current state.  You can fetch like this:\n\n  ```shell\n  kubectl get --raw \/debug\/api_priority_and_fairness\/dump_queues\n  ```\n\n  The output will be in CSV and similar to this:\n\n  ```none\n  PriorityLevelName, Index,  PendingRequests, ExecutingRequests, SeatsInUse, NextDispatchR,   InitialSeatsSum, MaxSeatsSum, TotalWorkSum\n  workload-low,      14,     27,              0,                 0,          77.64342019ss,   270,             270,         0.81000000ss\n  workload-low,      74,     26,              0,                 0,          76.95387841ss,   260,             260,         0.78000000ss\n  ...\n  leader-election,   0,      0,               0,                 0,          5088.87053833ss, 0,               0,           0.00000000ss\n  leader-election,   1,      0,               0,                 0,          0.00000000ss,    0,               0,           0.00000000ss\n  ...\n  workload-high,     0,      0,               0,                 0,          0.00000000ss,    0,               0,           0.00000000ss\n  workload-high,     1,      0,               0,                 0,          1119.44936475ss, 0,               0,           0.00000000ss\n  ```\n\n  Explanation for selected column 
names:\n  - `NextDispatchR`: The R progress meter reading, in units of seat-seconds, at\n    which the next request will be dispatched.\n  - `InitialSeatsSum`: The sum of InitialSeats associated with all requests in\n    a given queue.\n  - `MaxSeatsSum`: The sum of MaxSeats associated with all requests in a given\n    queue.\n  - `TotalWorkSum`: The sum of total work, in units of seat-seconds, of all\n    waiting requests in a given queue.\n\n  Note: `seat-second` (abbreviated as `ss`) is the unit of work used by APF:\n  one seat occupied for one second.\n\n- `\/debug\/api_priority_and_fairness\/dump_requests` - a listing of all the requests,\n  including requests waiting in a queue and requests being executed.\n  You can fetch like this:\n\n  ```shell\n  kubectl get --raw \/debug\/api_priority_and_fairness\/dump_requests\n  ```\n\n  The output will be in CSV and similar to this:\n\n  
```none\n  PriorityLevelName, FlowSchemaName,   QueueIndex, RequestIndexInQueue, FlowDistingsher,                        ArriveTime,                     InitialSeats, FinalSeats, AdditionalLatency, StartTime,                      UserName,                               Verb,   APIPath,                                   Namespace,   Name,   APIVersion, Resource,   SubResource\n  exempt,            exempt,           -1,         -1,                  ,                                       2023-07-15T04:51:25.596404345Z, 1,            0,          0s,                2023-07-15T04:51:25.596404345Z, system:serviceaccount:system:admin,     list,   \/api\/v1\/namespaces\/kube-stress\/configmaps, kube-stress, ,       v1,         configmaps,\n  workload-low,      service-accounts, 14,         0,                   system:serviceaccount:default:loadtest, 2023-07-18T00:13:08.986534842Z, 10,           0,          0s,                0001-01-01T00:00:00Z,           system:serviceaccount:default:loadtest, list,   \/api\/v1\/namespaces\/kube-stress\/configmaps, kube-stress, ,       v1,         configmaps,\n  workload-low,      service-accounts, 14,         1,                   system:serviceaccount:default:loadtest, 2023-07-18T00:13:09.086476021Z, 10,           0,          0s,                0001-01-01T00:00:00Z,           system:serviceaccount:default:loadtest, list,   \/api\/v1\/namespaces\/kube-stress\/configmaps, kube-stress, ,       v1,         configmaps,\n  ```\n\n  Explanation for selected column names:\n  - `QueueIndex`: The index of the queue. It will be -1 for priority levels\n    without queues.\n  - `RequestIndexInQueue`: The index in the queue for a given request. 
It will\n    be -1 for executing requests.\n  - `InitialSeats`: The number of seats that will be occupied during the initial\n    (normal) stage of execution of the request.\n  - `FinalSeats`: The number of seats that will be occupied during the final stage\n    of request execution, accounting for the associated WATCH notifications.\n  - `AdditionalLatency`: The extra time taken during the final stage of request\n    execution. FinalSeats will be occupied during this time period. It does not\n    represent latency that a user will observe.\n  - `StartTime`: The time a request starts to execute. It will be\n    0001-01-01T00:00:00Z for queued requests.\n\n## Debug logging\n\nAt `-v=3` or more verbosity, the API server outputs an httplog line for every\nrequest in the API server log, and it includes the following attributes.\n\n- `apf_fs`: the name of the flow schema to which the request was classified.\n- `apf_pl`: the name of the priority level for that flow schema.\n- `apf_iseats`: the number of seats determined for the initial\n  (normal) stage of execution of the request.\n- `apf_fseats`: the number of seats determined for the final stage of\n  execution (accounting for the associated `watch` notifications) of the\n  request.\n- `apf_additionalLatency`: the duration of the final stage of\n  execution of the request.\n\nAt higher levels of verbosity there will be log lines exposing details\nof how APF handled the request, primarily for debugging purposes.\n\n## Response headers\n\nAPF adds the following two headers to each HTTP response message.\nThey won't appear in the audit log. 
They can be viewed from the client side.\nFor client using `klog`, use verbosity `-v=8` or higher to view these headers.\n\n- `X-Kubernetes-PF-FlowSchema-UID` holds the UID of the FlowSchema\n  object to which the corresponding request was classified.\n- `X-Kubernetes-PF-PriorityLevel-UID` holds the UID of the\n  PriorityLevelConfiguration object associated with that FlowSchema.\n\n## \n\nFor background information on design details for API priority and fairness, see\nthe [enhancement proposal](https:\/\/github.com\/kubernetes\/enhancements\/tree\/master\/keps\/sig-api-machinery\/1040-priority-and-fairness).","site":"kubernetes reference","answers_cleaned":"    title  Flow control weight  130           overview      API Priority and Fairness controls the behavior of the Kubernetes API server in an overload situation  You can find more information about it in the  API Priority and Fairness   docs concepts cluster administration flow control   documentation        body         Diagnostics  Every HTTP response from an API server with the priority and fairness feature enabled has two extra headers   X Kubernetes PF FlowSchema UID  and  X Kubernetes PF PriorityLevel UID   noting the flow schema that matched the request and the priority level to which it was assigned  respectively  The API objects  names are not included in these headers  to avoid revealing details in case the requesting user does not have permission to view them   When debugging  you can use a command such as      shell kubectl get flowschemas  o custom columns  uid  metadata uid  name  metadata name   kubectl get prioritylevelconfigurations  o custom columns  uid  metadata uid  name  metadata name        to get a mapping of UIDs to names for both FlowSchemas and PriorityLevelConfigurations      Debug endpoints  With the  APIPriorityAndFairness  feature enabled  the  kube apiserver  serves the following additional paths at its HTTP S  ports   You need to ensure you have permissions to access these 
endpoints  You don t have to do anything if you are using admin  Permissions can be granted if needed following the  RBAC   docs reference access authn authz rbac   doc to access   debug api priority and fairness   by specifying  nonResourceURLs        debug api priority and fairness dump priority levels    a listing of   all the priority levels and the current state of each   You can fetch like this        shell   kubectl get   raw  debug api priority and fairness dump priority levels          The output will be in CSV and similar to this        none   PriorityLevelName  ActiveQueues  IsIdle  IsQuiescing  WaitingRequests  ExecutingRequests  DispatchedRequests  RejectedRequests  TimedoutRequests  CancelledRequests   catch all          0             true    false        0                0                  1                   0                 0                 0   exempt             0             true    false        0                0                  0                   0                 0                 0   global default     0             true    false        0                0                  46                  0                 0                 0   leader election    0             true    false        0                0                  4                   0                 0                 0   node high          0             true    false        0                0                  34                  0                 0                 0   system             0             true    false        0                0                  48                  0                 0                 0   workload high      0             true    false        0                0                  500                 0                 0                 0   workload low       0             true    false        0                0                  0                   0                 0                 0          Explanation for selected column names       IsQuiescing  indicates 
if this priority level will be removed when its queues have been drained       debug api priority and fairness dump queues    a listing of all the   queues and their current state   You can fetch like this        shell   kubectl get   raw  debug api priority and fairness dump queues          The output will be in CSV and similar to this        none   PriorityLevelName  Index   PendingRequests  ExecutingRequests  SeatsInUse  NextDispatchR    InitialSeatsSum  MaxSeatsSum  TotalWorkSum   workload low       14      27               0                  0           77 64342019ss    270              270          0 81000000ss   workload low       74      26               0                  0           76 95387841ss    260              260          0 78000000ss         leader election    0       0                0                  0           5088 87053833ss  0                0            0 00000000ss   leader election    1       0                0                  0           0 00000000ss     0                0            0 00000000ss         workload high      0       0                0                  0           0 00000000ss     0                0            0 00000000ss   workload high      1       0                0                  0           1119 44936475ss  0                0            0 00000000ss          Explanation for selected column names       NextDispatchR   The R progress meter reading  in units of seat seconds  at     which the next request will be dispatched       InitialSeatsSum   The sum of InitialSeats associated with all requests in     a given queue       MaxSeatsSum   The sum of MaxSeats associated with all requests in a given     queue       TotalWorkSum   The sum of total work  in units of seat seconds  of all     waiting requests in a given queue     Note   seat second   abbreviate as  ss   is a measure of work  in units of   seat seconds  in the APF world       debug api priority and fairness dump requests    a listing of all the requests   
including requests waiting in a queue and requests being executing    You can fetch like this        shell   kubectl get   raw  debug api priority and fairness dump requests          The output will be in CSV and similar to this        none   PriorityLevelName  FlowSchemaName    QueueIndex  RequestIndexInQueue  FlowDistingsher                         ArriveTime                      InitialSeats  FinalSeats  AdditionalLatency  StartTime   exempt             exempt             1           1                                                           2023 07 15T04 51 25 596404345Z  1             0           0s                 2023 07 15T04 51 25 596404345Z   workload low       service accounts  14          0                    system serviceaccount default loadtest  2023 07 18T00 12 51 386556253Z  10            0           0s                 0001 01 01T00 00 00Z   workload low       service accounts  14          1                    system serviceaccount default loadtest  2023 07 18T00 12 51 487092539Z  10            0           0s                 0001 01 01T00 00 00Z          You can get a more detailed listing with a command like this        shell   kubectl get   raw   debug api priority and fairness dump requests includeRequestDetails 1           The output will be in CSV and similar to this        none   PriorityLevelName  FlowSchemaName    QueueIndex  RequestIndexInQueue  FlowDistingsher                         ArriveTime                      InitialSeats  FinalSeats  AdditionalLatency  StartTime                       UserName                                Verb    APIPath                                    Namespace    Name    APIVersion  Resource    SubResource   exempt             exempt             1           1                                                           2023 07 15T04 51 25 596404345Z  1             0           0s                 2023 07 15T04 51 25 596404345Z  system serviceaccount system admin      list     api v1 namespaces kube stress 
configmaps  kube stress          v1          configmaps    workload low       service accounts  14          0                    system serviceaccount default loadtest  2023 07 18T00 13 08 986534842Z  10            0           0s                 0001 01 01T00 00 00Z            system serviceaccount default loadtest  list     api v1 namespaces kube stress configmaps  kube stress          v1          configmaps    workload low       service accounts  14          1                    system serviceaccount default loadtest  2023 07 18T00 13 09 086476021Z  10            0           0s                 0001 01 01T00 00 00Z            system serviceaccount default loadtest  list     api v1 namespaces kube stress configmaps  kube stress          v1          configmaps           Explanation for selected column names       QueueIndex   The index of the queue  It will be  1 for priority levels     without queues       RequestIndexInQueue   The index in the queue for a given request  It will     be  1 for executing requests       InitialSeats   The number of seats will be occupied during the initial      normal  stage of execution of the request       FinalSeats   The number of seats will be occupied during the final stage     of request execution  accounting for the associated WATCH notifications       AdditionalLatency   The extra time taken during the final stage of request     execution  FinalSeats will be occupied during this time period  It does not     mean any latency that a user will observe       StartTime   The time a request starts to execute  It will be     0001 01 01T00 00 00Z for queued requests      Debug logging  At   v 3  or more verbosity  the API server outputs an httplog line for every request in the API server log  and it includes the following attributes      apf fs   the name of the flow schema to which the request was classified     apf pl   the name of the priority level for that flow schema     apf iseats   the number of seats determined for the 
initial    normal  stage of execution of the request     apf fseats   the number of seats determined for the final stage of   execution  accounting for the associated  watch  notifications  of the   request     apf additionalLatency   the duration of the final stage of   execution of the request   At higher levels of verbosity there will be log lines exposing details of how APF handled the request  primarily for debugging purposes      Response headers  APF adds the following two headers to each HTTP response message  They won t appear in the audit log  They can be viewed from the client side  For client using  klog   use verbosity   v 8  or higher to view these headers      X Kubernetes PF FlowSchema UID  holds the UID of the FlowSchema   object to which the corresponding request was classified     X Kubernetes PF PriorityLevel UID  holds the UID of the   PriorityLevelConfiguration object associated with that FlowSchema        For background information on design details for API priority and fairness  see the  enhancement proposal  https   github com kubernetes enhancements tree master keps sig api machinery 1040 priority and fairness  "}
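The queue dump described in the record above (fetched with `kubectl get --raw /debug/api_priority_and_fairness/dump_queues`) is tabular text that is easy to post-process client-side. As a minimal sketch, assuming whitespace-separated columns and using an illustrative hard-coded sample plus a hypothetical `queues_with_backlog` helper (neither is part of any Kubernetes tooling), the queues holding pending requests can be picked out like this:

```python
# Illustrative sample of a dump_queues listing (whitespace-separated columns,
# values made up for the sketch; real output comes from the API server).
sample = """\
PriorityLevelName Index PendingRequests ExecutingRequests SeatsInUse NextDispatchR InitialSeatsSum MaxSeatsSum TotalWorkSum
workload-low 14 27 0 0 77.64342019ss 270 270 0.81000000ss
workload-low 74 26 0 0 76.95387841ss 260 260 0.78000000ss
leader-election 0 0 0 0 5088.87053833ss 0 0 0.00000000ss
"""

def queues_with_backlog(dump: str):
    """Return (priority_level, queue_index, pending) for queues with waiting requests."""
    lines = dump.strip().splitlines()
    header = lines[0].split()
    rows = [dict(zip(header, line.split())) for line in lines[1:]]
    return [(r["PriorityLevelName"], int(r["Index"]), int(r["PendingRequests"]))
            for r in rows if int(r["PendingRequests"]) > 0]

print(queues_with_backlog(sample))
# → [('workload-low', 14, 27), ('workload-low', 74, 26)]
```

This kind of filter is useful when hunting for the priority level whose queues are saturated; the `TotalWorkSum` and `NextDispatchR` columns can be folded in the same way.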
{"questions":"kubernetes reference weight 80 monitoring and maintaining node status to ensure a healthy and stable cluster aspect of managing a Kubernetes cluster In this article we ll cover the basics of contenttype reference title Node Status overview The status of a in Kubernetes is a critical","answers":"---\ncontent_type: reference\ntitle: Node Status\nweight: 80\n---\n<!-- overview -->\n\nThe status of a [node](\/docs\/concepts\/architecture\/nodes\/) in Kubernetes is a critical\naspect of managing a Kubernetes cluster. In this article, we'll cover the basics of\nmonitoring and maintaining node status to ensure a healthy and stable cluster.\n\n## Node status fields\n\nA Node's status contains the following information:\n\n* [Addresses](#addresses)\n* [Conditions](#condition)\n* [Capacity and Allocatable](#capacity)\n* [Info](#info)\n\nYou can use `kubectl` to view a Node's status and other details:\n\n```shell\nkubectl describe node <insert-node-name-here>\n```\n\nEach section of the output is described below.\n\n## Addresses\n\nThe usage of these fields varies depending on your cloud provider or bare metal configuration.\n\n* HostName: The hostname as reported by the node's kernel. Can be overridden via the kubelet\n  `--hostname-override` parameter.\n* ExternalIP: Typically the IP address of the node that is externally routable (available from\n  outside the cluster).\n* InternalIP: Typically the IP address of the node that is routable only within the cluster.\n\n## Conditions {#condition}\n\nThe `conditions` field describes the status of all `Running` nodes. 
Examples of conditions include:\n\n\n| Node Condition       | Description |\n|----------------------|-------------|\n| `Ready`              | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) |\n| `DiskPressure`       | `True` if pressure exists on the disk size\u2014that is, if the disk capacity is low; otherwise `False` |\n| `MemoryPressure`     | `True` if pressure exists on the node memory\u2014that is, if the node memory is low; otherwise `False` |\n| `PIDPressure`        | `True` if pressure exists on the processes\u2014that is, if there are too many processes on the node; otherwise `False` |\n| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False` |\n\n\n\nIf you use command-line tools to print details of a cordoned Node, the Condition includes\n`SchedulingDisabled`. `SchedulingDisabled` is not a Condition in the Kubernetes API; instead,\ncordoned nodes are marked Unschedulable in their spec.\n\n\nIn the Kubernetes API, a node's condition is represented as part of the `.status`\nof the Node resource. For example, the following JSON structure describes a healthy node:\n\n```json\n\"conditions\": [\n  {\n    \"type\": \"Ready\",\n    \"status\": \"True\",\n    \"reason\": \"KubeletReady\",\n    \"message\": \"kubelet is posting ready status\",\n    \"lastHeartbeatTime\": \"2019-06-05T18:38:35Z\",\n    \"lastTransitionTime\": \"2019-06-05T11:41:27Z\"\n  }\n]\n```\n\nWhen problems occur on nodes, the Kubernetes control plane automatically creates\n[taints](\/docs\/concepts\/scheduling-eviction\/taint-and-toleration\/) that match the conditions\naffecting the node. 
An example of this is when the `status` of the Ready condition\nremains `Unknown` or `False` for longer than the kube-controller-manager's `NodeMonitorGracePeriod`,\nwhich defaults to 40 seconds. This will cause either a `node.kubernetes.io\/unreachable` taint, for an `Unknown` status,\nor a `node.kubernetes.io\/not-ready` taint, for a `False` status, to be added to the Node.\n\nThese taints affect pending pods as the scheduler takes the Node's taints into consideration when\nassigning a pod to a Node. Existing pods scheduled to the node may be evicted due to the application\nof `NoExecute` taints. Pods may also have tolerations that let\nthem schedule to and continue running on a Node even though it has a specific taint.\n\nSee [Taint Based Evictions](\/docs\/concepts\/scheduling-eviction\/taint-and-toleration\/#taint-based-evictions) and\n[Taint Nodes by Condition](\/docs\/concepts\/scheduling-eviction\/taint-and-toleration\/#taint-nodes-by-condition)\nfor more details.\n\n## Capacity and Allocatable {#capacity}\n\nDescribes the resources available on the node: CPU, memory, and the maximum\nnumber of pods that can be scheduled onto the node.\n\nThe fields in the capacity block indicate the total amount of resources that a\nNode has. 
The allocatable block indicates the amount of resources on a\nNode that is available to be consumed by normal Pods.\n\nYou may read more about capacity and allocatable resources while learning how\nto [reserve compute resources](\/docs\/tasks\/administer-cluster\/reserve-compute-resources\/#node-allocatable)\non a Node.\n\n## Info\n\nDescribes general information about the node, such as kernel version, Kubernetes\nversion (kubelet and kube-proxy version), container runtime details, and which\noperating system the node uses.\nThe kubelet gathers this information from the node and publishes it into\nthe Kubernetes API.\n\n## Heartbeats\n\nHeartbeats, sent by Kubernetes nodes, help your cluster determine the\navailability of each node, and to take action when failures are detected.\n\nFor nodes there are two forms of heartbeats:\n\n* updates to the `.status` of a Node\n* [Lease](\/docs\/concepts\/architecture\/leases\/) objects\n  within the `kube-node-lease`\n  namespace.\n  Each Node has an associated Lease object.\n\nCompared to updates to `.status` of a Node, a Lease is a lightweight resource.\nUsing Leases for heartbeats reduces the performance impact of these updates\nfor large clusters.\n\nThe kubelet is responsible for creating and updating the `.status` of Nodes,\nand for updating their related Leases.\n\n- The kubelet updates the node's `.status` either when there is a change in status\n  or if there has been no update for a configured interval. The default interval\n  for `.status` updates to Nodes is 5 minutes, which is much longer than the 40\n  second default timeout for unreachable nodes.\n- The kubelet creates and then updates its Lease object every 10 seconds\n  (the default update interval). Lease updates occur independently from\n  updates to the Node's `.status`. 
If the Lease update fails, the kubelet retries,\n  using exponential backoff that starts at 200 milliseconds and capped at 7 seconds.","site":"kubernetes reference","answers_cleaned":"    content type  reference title  Node Status weight  80          overview      The status of a  node   docs concepts architecture nodes   in Kubernetes is a critical aspect of managing a Kubernetes cluster  In this article  we ll cover the basics of monitoring and maintaining node status to ensure a healthy and stable cluster      Node status fields  A Node s status contains the following information      Addresses   addresses     Conditions   condition     Capacity and Allocatable   capacity     Info   info   You can use  kubectl  to view a Node s status and other details      shell kubectl describe node  insert node name here       Each section of the output is described below      Addresses  The usage of these fields varies depending on your cloud provider or bare metal configuration     HostName  The hostname as reported by the node s kernel  Can be overridden via the kubelet      hostname override  parameter    ExternalIP  Typically the IP address of the node that is externally routable  available from   outside the cluster     InternalIP  Typically the IP address of the node that is routable only within the cluster      Conditions   condition   The  conditions  field describes the status of all  Running  nodes  Examples of conditions include      Node Condition         Description                                             Ready                  True  if the node is healthy and ready to accept pods   False  if the node is not healthy and is not accepting pods  and  Unknown  if the node controller has not heard from the node in the last  node monitor grace period   default is 40 seconds       DiskPressure           True  if pressure exists on the disk size that is  if the disk capacity is low  otherwise  False       MemoryPressure         True  if pressure exists on the node 
memory that is  if the node memory is low  otherwise  False       PIDPressure            True  if pressure exists on the processes that is  if there are too many processes on the node  otherwise  False       NetworkUnavailable     True  if the network for the node is not correctly configured  otherwise  False       If you use command line tools to print details of a cordoned Node  the Condition includes  SchedulingDisabled    SchedulingDisabled  is not a Condition in the Kubernetes API  instead  cordoned nodes are marked Unschedulable in their spec    In the Kubernetes API  a node s condition is represented as part of the   status  of the Node resource  For example  the following JSON structure describes a healthy node      json  conditions              type    Ready        status    True        reason    KubeletReady        message    kubelet is posting ready status        lastHeartbeatTime    2019 06 05T18 38 35Z        lastTransitionTime    2019 06 05T11 41 27Z             When problems occur on nodes  the Kubernetes control plane automatically creates  taints   docs concepts scheduling eviction taint and toleration   that match the conditions affecting the node  An example of this is when the  status  of the Ready condition remains  Unknown  or  False  for longer than the kube controller manager s  NodeMonitorGracePeriod   which defaults to 40 seconds  This will cause either an  node kubernetes io unreachable  taint  for an  Unknown  status  or a  node kubernetes io not ready  taint  for a  False  status  to be added to the Node   These taints affect pending pods as the scheduler takes the Node s taints into consideration when assigning a pod to a Node  Existing pods scheduled to the node may be evicted due to the application of  NoExecute  taints  Pods may also have  that let them schedule to and continue running on a Node even though it has a specific taint   See  Taint Based Evictions   docs concepts scheduling eviction taint and toleration  taint based 
evictions  and  Taint Nodes by Condition   docs concepts scheduling eviction taint and toleration  taint nodes by condition  for more details      Capacity and Allocatable   capacity   Describes the resources available on the node  CPU  memory  and the maximum number of pods that can be scheduled onto the node   The fields in the capacity block indicate the total amount of resources that a Node has  The allocatable block indicates the amount of resources on a Node that is available to be consumed by normal Pods   You may read more about capacity and allocatable resources while learning how to  reserve compute resources   docs tasks administer cluster reserve compute resources  node allocatable  on a Node      Info  Describes general information about the node  such as kernel version  Kubernetes version  kubelet and kube proxy version   container runtime details  and which operating system the node uses  The kubelet gathers this information from the node and publishes it into the Kubernetes API      Heartbeats  Heartbeats  sent by Kubernetes nodes  help your cluster determine the availability of each node  and to take action when failures are detected   For nodes there are two forms of heartbeats     updates to the   status  of a Node    Lease   docs concepts architecture leases   objects   within the  kube node lease        Each Node has an associated Lease object   Compared to updates to   status  of a Node  a Lease is a lightweight resource  Using Leases for heartbeats reduces the performance impact of these updates for large clusters   The kubelet is responsible for creating and updating the   status  of Nodes  and for updating their related Leases     The kubelet updates the node s   status  either when there is change in status   or if there has been no update for a configured interval  The default interval   for   status  updates to Nodes is 5 minutes  which is much longer than the 40   second default timeout for unreachable nodes    The kubelet creates and 
then updates its Lease object every 10 seconds    the default update interval   Lease updates occur independently from   updates to the Node s   status   If the Lease update fails  the kubelet retries    using exponential backoff that starts at 200 milliseconds and capped at 7 seconds "}
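The Ready-condition-to-taint rule documented in the Node Status record above can be sketched in a few lines. The `conditions` value mirrors the JSON structure shown in that record; `expected_taint` is a hypothetical helper for illustration, not part of any Kubernetes client library:

```python
# A healthy node's Ready condition, as in the record's JSON example.
conditions = [
    {
        "type": "Ready",
        "status": "True",
        "reason": "KubeletReady",
        "message": "kubelet is posting ready status",
        "lastHeartbeatTime": "2019-06-05T18:38:35Z",
        "lastTransitionTime": "2019-06-05T11:41:27Z",
    }
]

def expected_taint(conditions):
    """Taint the control plane would add for the Ready condition, per the rules above."""
    ready = next((c for c in conditions if c["type"] == "Ready"), None)
    if ready is None or ready["status"] == "Unknown":
        return "node.kubernetes.io/unreachable"  # node controller lost contact
    if ready["status"] == "False":
        return "node.kubernetes.io/not-ready"    # kubelet reports unhealthy
    return None                                   # healthy: no taint added

print(expected_taint(conditions))
# → None
```

Note that in the real control plane this decision is also gated on the condition persisting past `NodeMonitorGracePeriod`; the sketch only maps status values to taint keys.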
{"questions":"kubernetes reference merged configuration there is some specific behavior on how different types are title Kubelet Configuration Directory Merging contenttype reference When using the kubelet s flag to specify a drop in directory for weight 50","answers":"---\ncontent_type: \"reference\"\ntitle: Kubelet Configuration Directory Merging\nweight: 50\n---\n\nWhen using the kubelet's `--config-dir` flag to specify a drop-in directory for\nconfiguration, there is some specific behavior on how different types are\nmerged.\n\nHere are some examples of how different data types behave during configuration merging:\n\n### Structure Fields\nThere are two types of structure fields in a YAML structure: singular (or a\nscalar type) and embedded (structures that contain scalar types).\nThe configuration merging process handles the overriding of singular and embedded struct fields to create a resulting kubelet configuration.\n\nFor instance, you may want a baseline kubelet configuration for all nodes, but you may want to customize the `address` and `authorization` fields.\nThis can be done as follows:\n\nMain kubelet configuration file contents:\n```yaml\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\nport: 20250\nauthorization:\n  mode: Webhook\n  webhook:\n    cacheAuthorizedTTL: \"5m\"\n    cacheUnauthorizedTTL: \"30s\"\nserializeImagePulls: false\naddress: \"192.168.0.1\"\n```\n\nContents of a file in `--config-dir` directory:\n```yaml\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\nauthorization:\n  mode: AlwaysAllow\n  webhook:\n    cacheAuthorizedTTL: \"8m\"\n    cacheUnauthorizedTTL: \"45s\"\naddress: \"192.168.0.8\"\n```\n\nThe resulting configuration will be as follows:\n```yaml\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nauthorization:\n  mode: AlwaysAllow\n  webhook:\n    cacheAuthorizedTTL: \"8m\"\n    cacheUnauthorizedTTL: 
\"45s\"\naddress: \"192.168.0.8\"\n```\n\n### Lists\nYou can overide the slices\/lists values of the kubelet configuration.\nHowever, the entire list gets overridden during the merging process.\nFor example, you can override the `clusterDNS` list as follows:\n\nMain kubelet configuration file contents:\n```yaml\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nclusterDNS:\n  - \"192.168.0.9\"\n  - \"192.168.0.8\"\n```\n\nContents of a file in `--config-dir` directory:\n```yaml\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\nclusterDNS:\n  - \"192.168.0.2\"\n  - \"192.168.0.3\"\n  - \"192.168.0.5\"\n```\n\nThe resulting configuration will be as follows:\n```yaml\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nclusterDNS:\n  - \"192.168.0.2\"\n  - \"192.168.0.3\"\n  - \"192.168.0.5\"\n```\n\n### Maps, including Nested Structures\n\nIndividual fields in maps, regardless of their value types (boolean, string, etc.), can be selectively overridden.\nHowever, for `map[string][]string`, the entire list associated with a specific field gets overridden.\nLet's understand this better with an example, particularly on fields like `featureGates` and `staticPodURLHeader`:\n\nMain kubelet configuration file contents:\n```yaml\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nfeatureGates:\n  AllAlpha: false\n  MemoryQoS: true\nstaticPodURLHeader:\n  kubelet-api-support:\n  - \"Authorization: 234APSDFA\"\n  - \"X-Custom-Header: 123\"\n  custom-static-pod:\n  - \"Authorization: 223EWRWER\"\n  - \"X-Custom-Header: 456\"\n```\n\nContents of a file in `--config-dir` directory:\n```yaml\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\nfeatureGates:\n  MemoryQoS: false\n  KubeletTracing: true\n  DynamicResourceAllocation: true\nstaticPodURLHeader:\n 
 custom-static-pod:\n  - \"Authorization: 223EWRWER\"\n  - \"X-Custom-Header: 345\"\n```\n\nThe resulting configuration will be as follows:\n```yaml\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\nport: 20250\nserializeImagePulls: false\nfeatureGates:\n  AllAlpha: false\n  MemoryQoS: false\n  KubeletTracing: true\n  DynamicResourceAllocation: true\nstaticPodURLHeader:\n  kubelet-api-support:\n  - \"Authorization: 234APSDFA\"\n  - \"X-Custom-Header: 123\"\n  custom-static-pod:\n  - \"Authorization: 223EWRWER\"\n  - \"X-Custom-Header: 345\"\n```","site":"kubernetes reference","answers_cleaned":"    content type   reference  title  Kubelet Configuration Directory Merging weight  50      When using the kubelet s    config dir  flag to specify a drop in directory for configuration  there is some specific behavior on how different types are merged   Here are some examples of how different data types behave during configuration merging       Structure Fields There are two types of structure fields in a YAML structure  singular  or a scalar type  and embedded  structures that contain scalar types   The configuration merging process handles the overriding of singular and embedded struct fields to create a resulting kubelet configuration   For instance  you may want a baseline kubelet configuration for all nodes  but you may want to customize the  address  and  authorization  fields  This can be done as follows   Main kubelet configuration file contents     yaml apiVersion  kubelet config k8s io v1beta1 kind  KubeletConfiguration port  20250 authorization    mode  Webhook   webhook      cacheAuthorizedTTL   5m      cacheUnauthorizedTTL   30s  serializeImagePulls  false address   192 168 0 1       Contents of a file in    config dir  directory     yaml apiVersion  kubelet config k8s io v1beta1 kind  KubeletConfiguration authorization    mode  AlwaysAllow   webhook      cacheAuthorizedTTL   8m      cacheUnauthorizedTTL   45s  address   192 168 0 8      
 The resulting configuration will be as follows     yaml apiVersion  kubelet config k8s io v1beta1 kind  KubeletConfiguration port  20250 serializeImagePulls  false authorization    mode  AlwaysAllow   webhook      cacheAuthorizedTTL   8m      cacheUnauthorizedTTL   45s  address   192 168 0 8           Lists You can overide the slices lists values of the kubelet configuration  However  the entire list gets overridden during the merging process  For example  you can override the  clusterDNS  list as follows   Main kubelet configuration file contents     yaml apiVersion  kubelet config k8s io v1beta1 kind  KubeletConfiguration port  20250 serializeImagePulls  false clusterDNS       192 168 0 9       192 168 0 8       Contents of a file in    config dir  directory     yaml apiVersion  kubelet config k8s io v1beta1 kind  KubeletConfiguration clusterDNS       192 168 0 2       192 168 0 3       192 168 0 5       The resulting configuration will be as follows     yaml apiVersion  kubelet config k8s io v1beta1 kind  KubeletConfiguration port  20250 serializeImagePulls  false clusterDNS       192 168 0 2       192 168 0 3       192 168 0 5           Maps  including Nested Structures  Individual fields in maps  regardless of their value types  boolean  string  etc    can be selectively overridden  However  for  map string   string   the entire list associated with a specific field gets overridden  Let s understand this better with an example  particularly on fields like  featureGates  and  staticPodURLHeader    Main kubelet configuration file contents     yaml apiVersion  kubelet config k8s io v1beta1 kind  KubeletConfiguration port  20250 serializeImagePulls  false featureGates    AllAlpha  false   MemoryQoS  true staticPodURLHeader    kubelet api support       Authorization  234APSDFA       X Custom Header  123    custom static pod       Authorization  223EWRWER       X Custom Header  456       Contents of a file in    config dir  directory     yaml apiVersion  kubelet 
config k8s io v1beta1 kind  KubeletConfiguration featureGates    MemoryQoS  false   KubeletTracing  true   DynamicResourceAllocation  true staticPodURLHeader    custom static pod       Authorization  223EWRWER       X Custom Header  345       The resulting configuration will be as follows     yaml apiVersion  kubelet config k8s io v1beta1 kind  KubeletConfiguration port  20250 serializeImagePulls  false featureGates    AllAlpha  false   MemoryQoS  false   KubeletTracing  true   DynamicResourceAllocation  true staticPodURLHeader    kubelet api support       Authorization  234APSDFA       X Custom Header  123    custom static pod       Authorization  223EWRWER       X Custom Header  345     "}
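The merge semantics the record above documents (scalar fields override, maps merge field by field, lists are replaced wholesale) can be illustrated with a short Python sketch. This is a model of the documented behavior for dict-shaped config, not the kubelet's actual implementation:

```python
# Sketch of kubelet drop-in merging semantics: nested maps merge per field,
# while scalars and lists from the drop-in replace the base value entirely.
def merge(base: dict, dropin: dict) -> dict:
    out = dict(base)
    for key, value in dropin.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)  # recurse into nested maps
        else:
            out[key] = value                   # scalar or list: wholesale override
    return out

base = {
    "port": 20250,
    "serializeImagePulls": False,
    "clusterDNS": ["192.168.0.9", "192.168.0.8"],
    "featureGates": {"AllAlpha": False, "MemoryQoS": True},
}
dropin = {
    "clusterDNS": ["192.168.0.2", "192.168.0.3", "192.168.0.5"],
    "featureGates": {"MemoryQoS": False, "KubeletTracing": True},
}
print(merge(base, dropin))
```

As in the record's `clusterDNS` example, the drop-in list fully replaces the base list, while `featureGates` keeps `AllAlpha` from the base and takes `MemoryQoS` and `KubeletTracing` from the drop-in.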
{"questions":"kubernetes reference weight 80 Seccomp stands for secure computing mode and has been a feature of the Linux contenttype reference title Seccomp and Kubernetes overview kernel since version 2 6 12 It can be used to sandbox the privileges of a","answers":"---\ncontent_type: reference\ntitle: Seccomp and Kubernetes\nweight: 80\n---\n\n<!-- overview -->\n\nSeccomp stands for secure computing mode and has been a feature of the Linux\nkernel since version 2.6.12. It can be used to sandbox the privileges of a\nprocess, restricting the calls it is able to make from userspace into the\nkernel. Kubernetes lets you automatically apply seccomp profiles loaded onto a\nnode to your Pods and containers.\n\n## Seccomp fields\n\n\n\nThere are four ways to specify a seccomp profile for a\nPod:\n\n- for the whole Pod using [`spec.securityContext.seccompProfile`](\/docs\/reference\/kubernetes-api\/workload-resources\/pod-v1\/#security-context)\n- for a single container using [`spec.containers[*].securityContext.seccompProfile`](\/docs\/reference\/kubernetes-api\/workload-resources\/pod-v1\/#security-context-1)\n- for an (restartable \/ sidecar) init container using [`spec.initContainers[*].securityContext.seccompProfile`](\/docs\/reference\/kubernetes-api\/workload-resources\/pod-v1\/#security-context-1)\n- for an [ephemeral container](\/docs\/concepts\/workloads\/pods\/ephemeral-containers) using [`spec.ephemeralContainers[*].securityContext.seccompProfile`](\/docs\/reference\/kubernetes-api\/workload-resources\/pod-v1\/#security-context-2)\n\n\n\nThe Pod in the example above runs as `Unconfined`, while the\n`ephemeral-container` and `init-container` specifically define\n`RuntimeDefault`. If the ephemeral or init container had not set the\n`securityContext.seccompProfile` field explicitly, then the value would have been\ninherited from the Pod. 
The same applies to the container, which runs a\n`Localhost` profile `my-profile.json`.\n\nGenerally speaking, fields from (ephemeral) containers have a higher priority\nthan the Pod level value, while containers which do not set the seccomp field\ninherit the profile from the Pod.\n\n\nIt is not possible to apply a seccomp profile to a Pod or container running with\n`privileged: true` set in the container's `securityContext`. Privileged\ncontainers always run as `Unconfined`.\n\n\nThe following values are possible for the `seccompProfile.type`:\n\n`Unconfined`\n: The workload runs without any seccomp restrictions.\n\n`RuntimeDefault`\n: A default seccomp profile defined by the\ncontainer runtime\nis applied. The default profiles aim to provide a strong set of security\ndefaults while preserving the functionality of the workload. It is possible that\nthe default profiles differ between container runtimes and their release\nversions, for example when comparing those from\n and\n.\n\n`Localhost`\n: The `localhostProfile` will be applied, which has to be available on the node\ndisk (on Linux it's `\/var\/lib\/kubelet\/seccomp`). The availability of the seccomp\nprofile is verified by the\ncontainer runtime\non container creation. If the profile does not exist, then the container\ncreation will fail with a `CreateContainerError`.\n\n### `Localhost` profiles\n\nSeccomp profiles are JSON files following the scheme defined by the\n[OCI runtime specification](https:\/\/github.com\/opencontainers\/runtime-spec\/blob\/f329913\/config-linux.md#seccomp).\nA profile basically defines actions based on matched syscalls, but also allows\nyou to pass specific values as arguments to syscalls. 
For example:\n\n```json\n{\n  \"defaultAction\": \"SCMP_ACT_ERRNO\",\n  \"defaultErrnoRet\": 38,\n  \"syscalls\": [\n    {\n      \"names\": [\n        \"adjtimex\",\n        \"alarm\",\n        \"bind\",\n        \"waitid\",\n        \"waitpid\",\n        \"write\",\n        \"writev\"\n      ],\n      \"action\": \"SCMP_ACT_ALLOW\"\n    }\n  ]\n}\n```\n\nThe `defaultAction` in the profile above is defined as `SCMP_ACT_ERRNO` and\nserves as the fallback for any syscall not matched by an entry in `syscalls`. The error is\ndefined as code `38` via the `defaultErrnoRet` field.\n\nThe following actions are generally possible:\n\n`SCMP_ACT_ERRNO`\n: Return the specified error code.\n\n`SCMP_ACT_ALLOW`\n: Allow the syscall to be executed.\n\n`SCMP_ACT_KILL_PROCESS`\n: Kill the process.\n\n`SCMP_ACT_KILL_THREAD` and `SCMP_ACT_KILL`\n: Kill only the thread.\n\n`SCMP_ACT_TRAP`\n: Throw a `SIGSYS` signal.\n\n`SCMP_ACT_NOTIFY` and `SECCOMP_RET_USER_NOTIF`\n: Notify the user space.\n\n`SCMP_ACT_TRACE`\n: Notify a tracing process with the specified value.\n\n`SCMP_ACT_LOG`\n: Allow the syscall to be executed after the action has been logged to syslog or\nauditd.\n\nSome actions like `SCMP_ACT_NOTIFY` or `SECCOMP_RET_USER_NOTIF` may not be\nsupported depending on the container runtime, OCI runtime or Linux kernel\nversion being used. There may also be further limitations, for example that\n`SCMP_ACT_NOTIFY` cannot be used as `defaultAction` or for certain syscalls like\n`write`. All those limitations are defined by either the OCI runtime\n([runc](https:\/\/github.com\/opencontainers\/runc),\n[crun](https:\/\/github.com\/containers\/crun)) or\n[libseccomp](https:\/\/github.com\/seccomp\/libseccomp).\n\nThe `syscalls` JSON array contains a list of objects referencing syscalls by\ntheir respective `names`. For example, the action `SCMP_ACT_ALLOW` can be used\nto create an allow list of permitted syscalls as outlined in the example above. 
It\nwould also be possible to define another list using the action `SCMP_ACT_ERRNO`\nbut a different return (`errnoRet`) value.\n\nIt is also possible to specify the arguments (`args`) passed to certain\nsyscalls. More information about those advanced use cases can be found in the\n[OCI runtime spec](https:\/\/github.com\/opencontainers\/runtime-spec\/blob\/f329913\/config-linux.md#seccomp)\nand the [Seccomp Linux kernel documentation](https:\/\/www.kernel.org\/doc\/Documentation\/prctl\/seccomp_filter.txt).\n\n## Further reading\n\n- [Restrict a Container's Syscalls with seccomp](\/docs\/tutorials\/security\/seccomp\/)\n- [Pod Security Standards](\/docs\/concepts\/security\/pod-security-standards\/)","site":"kubernetes reference"}
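The `errnoRet` mechanism mentioned in the record above (a `defaultAction` of `SCMP_ACT_ERRNO` with `defaultErrnoRet`, plus a second syscall list returning a different error code) can be sketched as a minimal profile. This is an illustrative fragment following the OCI runtime spec fields, not an official profile; the syscall names and error codes are placeholders:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "defaultErrnoRet": 38,
  "syscalls": [
    {
      "names": ["read", "write", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    },
    {
      "names": ["socket", "connect"],
      "action": "SCMP_ACT_ERRNO",
      "errnoRet": 1
    }
  ]
}
```

Here the second list returns a different error (`errnoRet: 1`, `EPERM`) than the fallback (`38`, `ENOSYS`), while the first list is a plain allow-list.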
{"questions":"kubernetes reference This document outlines files that kubelet reads and writes title Local Files And Paths Used By The Kubelet contenttype reference The glossarytooltip text kubelet termid kubelet is mostly a stateless process running on a Kubernetes glossarytooltip text node termid node weight 42","answers":"---\ncontent_type: \"reference\"\ntitle: Local Files And Paths Used By The Kubelet\nweight: 42\n---\n\nThe  is mostly a stateless\nprocess running on a Kubernetes .\nThis document outlines files that the kubelet reads and writes.\n\n\n\nThis document is for informational purposes and does not describe any guaranteed behaviors or APIs.\nIt lists resources used by the kubelet, which are an implementation detail and subject to change in any release.\n\n\n\nThe kubelet typically uses the  as\nthe source of truth on what needs to run on the Node, and the\n to retrieve\nthe current state of containers. So long as you provide a _kubeconfig_ (API client configuration)\nto the kubelet, the kubelet does connect to your control plane; otherwise the node operates in\n_standalone mode_.\n\nOn Linux nodes, the kubelet also relies on reading cgroups and various system files to collect metrics.\n\nOn Windows nodes, the kubelet collects metrics via a different mechanism that does not rely on\npaths.\n\nThere are also a few other files that the kubelet uses, and the kubelet communicates using local Unix-domain sockets. Some are sockets that the\nkubelet listens on, and for other sockets the kubelet discovers them and then connects\nas a client.\n\n\n\nThis page lists paths as Linux paths, which map to the Windows paths by adding a root disk\n`C:\\` in place of `\/` (unless specified otherwise). For example, `\/var\/lib\/kubelet\/device-plugins` maps to `C:\\var\\lib\\kubelet\\device-plugins`.\n\n\n\n## Configuration\n\n### Kubelet configuration files\n\nThe path to the kubelet configuration file can be configured\nusing the command line argument `--config`. 
The kubelet also supports\n[drop-in configuration files](\/docs\/tasks\/administer-cluster\/kubelet-config-file\/#kubelet-conf-d)\nto enhance configuration.\n\n### Certificates\n\nCertificates and private keys are typically located at `\/var\/lib\/kubelet\/pki`,\nbut can be configured using the `--cert-dir` kubelet command line argument.\nNames of certificate files are also configurable.\n\n### Manifests\n\nManifests for static pods are typically located in `\/etc\/kubernetes\/manifests`.\nThe location can be configured using the `staticPodPath` kubelet configuration option.\n\n### Systemd unit settings\n\nWhen the kubelet is running as a systemd unit, some kubelet configuration may be declared\nin a systemd unit settings file. Typically it includes:\n\n- command line arguments to [run kubelet](\/docs\/reference\/command-line-tools-reference\/kubelet\/)\n- environment variables, used by the kubelet or for [configuring the golang runtime](https:\/\/pkg.go.dev\/runtime#hdr-Environment_Variables)\n\n## State\n\n### Checkpoint files for resource managers {#resource-managers-state}\n\nAll resource managers keep the mapping of Pods to allocated resources in state files.\nState files are located in the kubelet's base directory, also termed the _root directory_\n(but not the same as `\/`, the node root directory). 
You can configure the base directory\nfor the kubelet\nusing the kubelet command line argument `--root-dir`.\n\nNames of files:\n\n- `memory_manager_state` for the [Memory Manager](\/docs\/tasks\/administer-cluster\/memory-manager\/)\n- `cpu_manager_state` for the [CPU Manager](\/docs\/tasks\/administer-cluster\/cpu-management-policies\/)\n- `dra_manager_state` for [DRA](\/docs\/concepts\/scheduling-eviction\/dynamic-resource-allocation\/)\n\n### Checkpoint file for device manager {#device-manager-state}\n\nDevice manager creates checkpoints in the same directory with socket files: `\/var\/lib\/kubelet\/device-plugins\/`.\nThe name of a checkpoint file is `kubelet_internal_checkpoint` for [Device Manager](\/docs\/concepts\/extend-kubernetes\/compute-storage-net\/device-plugins\/#device-plugin-integration-with-the-topology-manager)\n\n### Pod status checkpoint storage {#pod-status-manager-state}\n\n\n\nIf your cluster has  \n[in-place Pod vertical scaling](\/docs\/concepts\/workloads\/autoscaling\/#in-place-resizing)  \nenabled ([feature gate](\/docs\/reference\/command-line-tools-reference\/feature-gates\/)  \nname `InPlacePodVerticalScaling`), then the kubelet stores a local record of Pod status.  
\n\nThe file name is `pod_status_manager_state` within the kubelet base directory\n(`\/var\/lib\/kubelet` by default on Linux; configurable using `--root-dir`).\n\n### Container runtime\n\nKubelet communicates with the container runtime using a socket configured via the\nconfiguration parameters:\n\n- `containerRuntimeEndpoint` for runtime operations\n- `imageServiceEndpoint` for image management operations\n\nThe actual values of those endpoints depend on the container runtime being used.\n\n### Device plugins\n\nThe kubelet exposes a socket at the path `\/var\/lib\/kubelet\/device-plugins\/kubelet.sock` for\nvarious [Device Plugins to register](\/docs\/concepts\/extend-kubernetes\/compute-storage-net\/device-plugins\/#device-plugin-implementation).\n\nWhen a device plugin registers itself, it provides its socket path for the kubelet to connect.\n\nThe device plugin socket should be in the directory `device-plugins` within the kubelet base\ndirectory. On a typical Linux node, this means `\/var\/lib\/kubelet\/device-plugins`.\n\n### Pod resources API\n\n[Pod Resources API](\/docs\/concepts\/extend-kubernetes\/compute-storage-net\/device-plugins\/#monitoring-device-plugin-resources)\nwill be exposed at the path `\/var\/lib\/kubelet\/pod-resources`.\n\n### DRA, CSI, and Device plugins\n\nThe kubelet looks for socket files created by device plugins managed via [DRA](\/docs\/concepts\/scheduling-eviction\/dynamic-resource-allocation\/),\ndevice manager, or storage plugins, and then attempts to connect\nto these sockets. The directory that the kubelet looks in is `plugins_registry` within the kubelet base\ndirectory, so on a typical Linux node this means `\/var\/lib\/kubelet\/plugins_registry`.\n\nNote that for device plugins there are two alternative registration mechanisms. 
Only one should be used for a given plugin.\n\nThe types of plugins that can place socket files into that directory are:\n\n- CSI plugins\n- DRA plugins\n- Device Manager plugins\n\n## Security profiles & configuration\n\n### Seccomp\n\nSeccomp profile files referenced from Pods should be placed in `\/var\/lib\/kubelet\/seccomp`.\nSee the [seccomp reference](\/docs\/reference\/node\/seccomp\/) for details.\n\n### AppArmor\n\nThe kubelet does not load or refer to AppArmor profiles by a Kubernetes-specific path.\nAppArmor profiles are loaded via the node operating system rather than referenced by their path.\n\n## Locking\n\n\n\n\nA lock file for the kubelet; typically `\/var\/run\/kubelet.lock`. The kubelet uses this to ensure\nthat two different kubelets don't try to run in conflict with each other.\nYou can configure the path to the lock file using the `--lock-file` kubelet command line argument.\n\nIf two kubelets on the same node use a different value for the lock file path, they will not be able to\ndetect a conflict when both are running.\n\n\n## What's next\n\n- Learn about the kubelet [command line arguments](\/docs\/reference\/command-line-tools-reference\/kubelet\/).\n- Review the [Kubelet Configuration (v1beta1) reference](\/docs\/reference\/config-api\/kubelet-config.v1beta1\/).","site":"kubernetes reference"}
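As a worked illustration of the state-file layout described in the record above, the sketch below reads any resource-manager checkpoint files found under the kubelet base directory. It assumes the checkpoints are JSON-encoded, which (as the record itself stresses) is an implementation detail that may change between releases; the default path and the `--root-dir` override come from the text above.

```python
import json
from pathlib import Path

# Resource-manager state files named in the reference above. The kubelet
# base directory defaults to /var/lib/kubelet on Linux but is configurable
# via the --root-dir kubelet command line argument.
STATE_FILES = ("memory_manager_state", "cpu_manager_state", "dra_manager_state")

def read_state_files(root="/var/lib/kubelet"):
    """Return the parsed contents of any resource-manager state files under root.

    Assumes the checkpoints are JSON-encoded; this is an implementation
    detail of the kubelet and not a guaranteed format.
    """
    found = {}
    for name in STATE_FILES:
        path = Path(root) / name
        if path.is_file():
            found[name] = json.loads(path.read_text())
    return found
```

Run as root on a node (or point `root` at a copied directory) to inspect what the managers have checkpointed; files that do not exist are simply skipped.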
{"questions":"kubernetes reference name tasks erictune title kubectl Quick Reference krousey contenttype concept card reviewers clove weight 10 highlight it","answers":"---\ntitle: kubectl Quick Reference\nreviewers:\n- erictune\n- krousey\n- clove\ncontent_type: concept\nweight: 10 # highlight it\ncard:\n  name: tasks\n  weight: 10\n---\n\n<!-- overview -->\n\nThis page contains a list of commonly used `kubectl` commands and flags.\n\n\nThese instructions are for Kubernetes v. To check the version, use the `kubectl version` command.\n\n<!-- body -->\n\n## Kubectl autocomplete\n\n### BASH\n\n```bash\nsource <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.\necho \"source <(kubectl completion bash)\" >> ~\/.bashrc # add autocomplete permanently to your bash shell.\n```\n\nYou can also use a shorthand alias for `kubectl` that also works with completion:\n\n```bash\nalias k=kubectl\ncomplete -o default -F __start_kubectl k\n```\n\n### ZSH\n\n```bash\nsource <(kubectl completion zsh)  # set up autocomplete in zsh into the current shell\necho '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~\/.zshrc # add autocomplete permanently to your zsh shell\n```\n\n### FISH\n\n\nRequires kubectl version 1.23 or above.\n\n\n```bash\necho 'kubectl completion fish | source' > ~\/.config\/fish\/completions\/kubectl.fish && source ~\/.config\/fish\/completions\/kubectl.fish\n```\n\n### A note on `--all-namespaces`\n\nAppending `--all-namespaces` happens frequently enough that you should be aware of the shorthand for `--all-namespaces`:\n\n```kubectl -A```\n\n## Kubectl context and configuration\n\nSet which Kubernetes cluster `kubectl` communicates with and modifies configuration\ninformation. 
See [Authenticating Across Clusters with kubeconfig](\/docs\/tasks\/access-application-cluster\/configure-access-multiple-clusters\/) documentation for\ndetailed config file information.\n\n```bash\nkubectl config view # Show Merged kubeconfig settings.\n\n# use multiple kubeconfig files at the same time and view merged config\nKUBECONFIG=~\/.kube\/config:~\/.kube\/kubeconfig2\n\nkubectl config view\n\n# Show merged kubeconfig settings and raw certificate data and exposed secrets\nkubectl config view --raw \n\n# get the password for the e2e user\nkubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'\n\n# get the certificate for the e2e user\nkubectl config view --raw -o jsonpath='{.users[?(@.name == \"e2e\")].user.client-certificate-data}' | base64 -d\n\nkubectl config view -o jsonpath='{.users[].name}'    # display the first user\nkubectl config view -o jsonpath='{.users[*].name}'   # get a list of users\nkubectl config get-contexts                          # display list of contexts\nkubectl config get-contexts -o name                  # get all context names\nkubectl config current-context                       # display the current-context\nkubectl config use-context my-cluster-name           # set the default context to my-cluster-name\n\nkubectl config set-cluster my-cluster-name           # set a cluster entry in the kubeconfig\n\n# configure the URL to a proxy server to use for requests made by this client in the kubeconfig\nkubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url\n\n# add a new user to your kubeconfig that supports basic auth\nkubectl config set-credentials kubeuser\/foo.kubernetes.com --username=kubeuser --password=kubepassword\n\n# permanently save the namespace for all subsequent kubectl commands in that context.\nkubectl config set-context --current --namespace=ggckad-s2\n\n# set a context utilizing a specific username and namespace.\nkubectl config set-context gce --user=cluster-admin --namespace=foo 
\\\n  && kubectl config use-context gce\n\nkubectl config unset users.foo                       # delete user foo\n\n# short alias to set\/show context\/namespace (only works for bash and bash-compatible shells, current context to be set before using kn to set namespace)\nalias kx='f() { [ \"$1\" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f'\nalias kn='f() { [ \"$1\" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d\" \" -f6 ; } ; f'\n```\n\n## Kubectl apply\n\n`apply` manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running `kubectl apply`. This is the recommended way of managing Kubernetes applications on production. See [Kubectl Book](https:\/\/kubectl.docs.kubernetes.io).\n\n## Creating objects\n\nKubernetes manifests can be defined in YAML or JSON. The file extension `.yaml`,\n`.yml`, and `.json` can be used.\n\n```bash\nkubectl apply -f .\/my-manifest.yaml                 # create resource(s)\nkubectl apply -f .\/my1.yaml -f .\/my2.yaml           # create from multiple files\nkubectl apply -f .\/dir                              # create resource(s) in all manifest files in dir\nkubectl apply -f https:\/\/example.com\/manifest.yaml  # create resource(s) from url (Note: this is an example domain and does not contain a valid manifest)\nkubectl create deployment nginx --image=nginx       # start a single instance of nginx\n\n# create a Job which prints \"Hello World\"\nkubectl create job hello --image=busybox:1.28 -- echo \"Hello World\"\n\n# create a CronJob that prints \"Hello World\" every minute\nkubectl create cronjob hello --image=busybox:1.28   --schedule=\"*\/1 * * * *\" -- echo \"Hello World\"\n\nkubectl explain pods                           # get the documentation for pod manifests\n\n# Create multiple YAML objects from stdin\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: 
Pod\nmetadata:\n  name: busybox-sleep\nspec:\n  containers:\n  - name: busybox\n    image: busybox:1.28\n    args:\n    - sleep\n    - \"1000000\"\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: busybox-sleep-less\nspec:\n  containers:\n  - name: busybox\n    image: busybox:1.28\n    args:\n    - sleep\n    - \"1000\"\nEOF\n\n# Create a secret with several keys\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: mysecret\ntype: Opaque\ndata:\n  password: $(echo -n \"s33msi4\" | base64 -w0)\n  username: $(echo -n \"jane\" | base64 -w0)\nEOF\n\n```\n\n## Viewing and finding resources\n\n```bash\n# Get commands with basic output\nkubectl get services                          # List all services in the namespace\nkubectl get pods --all-namespaces             # List all pods in all namespaces\nkubectl get pods -o wide                      # List all pods in the current namespace, with more details\nkubectl get deployment my-dep                 # List a particular deployment\nkubectl get pods                              # List all pods in the namespace\nkubectl get pod my-pod -o yaml                # Get a pod's YAML\n\n# Describe commands with verbose output\nkubectl describe nodes my-node\nkubectl describe pods my-pod\n\n# List Services Sorted by Name\nkubectl get services --sort-by=.metadata.name\n\n# List pods Sorted by Restart Count\nkubectl get pods --sort-by='.status.containerStatuses[0].restartCount'\n\n# List PersistentVolumes sorted by capacity\nkubectl get pv --sort-by=.spec.capacity.storage\n\n# Get the version label of all pods with label app=cassandra\nkubectl get pods --selector=app=cassandra -o \\\n  jsonpath='{.items[*].metadata.labels.version}'\n\n# Retrieve the value of a key with dots, e.g. 
'ca.crt'\nkubectl get configmap myconfig \\\n  -o jsonpath='{.data.ca\\.crt}'\n\n# Retrieve a base64 encoded value with dashes instead of underscores.\nkubectl get secret my-secret --template=''\n\n# Get all worker nodes (use a selector to exclude results that have a label\n# named 'node-role.kubernetes.io\/control-plane')\nkubectl get node --selector='!node-role.kubernetes.io\/control-plane'\n\n# Get all running pods in the namespace\nkubectl get pods --field-selector=status.phase=Running\n\n# Get ExternalIPs of all nodes\nkubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"ExternalIP\")].address}'\n\n# List Names of Pods that belong to Particular RC\n# \"jq\" command useful for transformations that are too complex for jsonpath, it can be found at https:\/\/jqlang.github.io\/jq\/\nsel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | \"\\(.key)=\\(.value),\"')%?}\necho $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})\n\n# Show labels for all pods (or any other Kubernetes object that supports labelling)\nkubectl get pods --show-labels\n\n# Check which nodes are ready\nJSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \\\n && kubectl get nodes -o jsonpath=\"$JSONPATH\" | grep \"Ready=True\"\n\n# Check which nodes are ready with custom-columns\nkubectl get node -o custom-columns='NODE_NAME:.metadata.name,STATUS:.status.conditions[?(@.type==\"Ready\")].status'\n\n# Output decoded secrets without external tools\nkubectl get secret my-secret -o go-template=''\n\n# List all Secrets currently in use by a pod\nkubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq\n\n# List all containerIDs of initContainer of all pods\n# Helpful when cleaning up stopped containers, while avoiding removal of initContainers.\nkubectl get pods --all-namespaces -o jsonpath='{range 
.items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3

# List Events sorted by timestamp
kubectl get events --sort-by=.metadata.creationTimestamp

# List all warning events
kubectl events --types=Warning

# Compares the current state of the cluster against the state that the cluster would be in if the manifest was applied.
kubectl diff -f ./my-manifest.yaml

# Produce a period-delimited tree of all keys returned for nodes
# Helpful when locating a key within a complex nested JSON structure
kubectl get nodes -o json | jq -c 'paths|join(".")'

# Produce a period-delimited tree of all keys returned for pods, etc
kubectl get pods -o json | jq -c 'paths|join(".")'

# Produce ENV for all pods, assuming you have a default container for the pods, default namespace and the `env` command is supported.
# Helpful when running any supported command across all pods, not just `env`
for pod in $(kubectl get po --output=jsonpath={.items..metadata.name}); do echo $pod && kubectl exec -it $pod -- env; done

# Get a deployment's status subresource
kubectl get deployment nginx-deployment --subresource=status
```

## Updating resources

```bash
kubectl set image deployment/frontend www=image:v2               # Rolling update "www" containers of "frontend" deployment, updating the image
kubectl rollout history deployment/frontend                      # Check the history of deployments including the revision
kubectl rollout undo deployment/frontend                         # Rollback to the previous deployment
kubectl rollout undo deployment/frontend --to-revision=2         # Rollback to a specific revision
kubectl rollout status -w deployment/frontend                    # Watch rolling update status of "frontend" deployment until completion
kubectl rollout restart deployment/frontend                      # Rolling restart of the "frontend" deployment

cat pod.json | kubectl replace -f -                              # Replace a pod based on the JSON passed into stdin

# Force replace, delete and then re-create the resource. Will cause a service outage.
kubectl replace --force -f ./pod.json

# Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000
kubectl expose rc nginx --port=80 --target-port=8000

# Update a single-container pod's image version (tag) to v4
kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -

kubectl label pods my-pod new-label=awesome                      # Add a Label
kubectl label pods my-pod new-label-                             # Remove a label
kubectl label pods my-pod new-label=new-value --overwrite        # Overwrite an existing value
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq       # Add an annotation
kubectl annotate pods my-pod icon-url-                           # Remove annotation
kubectl autoscale deployment foo --min=2 --max=10                # Auto scale a deployment "foo"
```

## Patching resources

```bash
# Partially update a node
kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'

# Update a container's image; spec.containers[*].name is required because it's a merge key
kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'

# Update a container's image using a json patch with positional arrays
kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'

# Disable a deployment livenessProbe using a json patch with positional arrays
kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'

# Add a new element to a positional array
kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'

# Update a deployment's replica count by patching its scale subresource
kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}'
```

## Editing resources

Edit any API resource in your preferred editor.

```bash
kubectl edit svc/docker-registry                      # Edit the service named docker-registry
KUBE_EDITOR="nano" kubectl edit svc/docker-registry   # Use an alternative editor
```

## Scaling resources

```bash
kubectl scale --replicas=3 rs/foo                                 # Scale a replicaset named 'foo' to 3
kubectl scale --replicas=3 -f foo.yaml                            # Scale a resource specified in "foo.yaml" to 3
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql  # If the deployment named mysql's current size is 2, scale mysql to 3
kubectl scale --replicas=5 rc/foo rc/bar rc/baz                   # Scale multiple replication controllers
```

## Deleting resources

```bash
kubectl delete -f ./pod.json                                      # Delete a pod using the type and name specified in pod.json
kubectl delete pod unwanted --now                                 # Delete a pod with no grace period
kubectl delete pod,service baz foo                                # Delete pods and services with same names "baz" and "foo"
kubectl delete pods,services -l name=myLabel                      # Delete pods and services with label name=myLabel
kubectl -n my-ns delete pod,svc --all                             # Delete all pods and services in namespace my-ns
# Delete all pods matching the awk pattern1 or pattern2
kubectl get pods -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs kubectl delete -n mynamespace pod
```

## Interacting with running Pods

```bash
kubectl logs my-pod                                 # dump pod logs (stdout)
kubectl logs -l name=myLabel                        # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod --previous                      # dump pod logs (stdout) for a previous instantiation of a container
kubectl logs my-pod -c my-container                 # dump pod container logs (stdout, multi-container case)
kubectl logs -l name=myLabel -c my-container        # dump pod container logs, with label name=myLabel (stdout)
kubectl logs my-pod -c my-container --previous      # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container
kubectl logs -f my-pod                              # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container              # stream pod container logs (stdout, multi-container case)
kubectl logs -f -l name=myLabel --all-containers    # stream all pods logs with label name=myLabel (stdout)
kubectl run -i --tty busybox --image=busybox:1.28 -- sh  # Run pod as interactive shell
kubectl run nginx --image=nginx -n mynamespace      # Start a single instance of nginx pod in the namespace of mynamespace
kubectl run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
                                                    # Generate spec for running pod nginx and write it into a file called pod.yaml
kubectl attach my-pod -i                            # Attach to Running Container
kubectl port-forward my-pod 5000:6000               # Listen on port 5000 on the local machine and forward to port 6000 on my-pod
kubectl exec my-pod -- ls /                         # Run command in existing pod (1 container case)
kubectl exec --stdin --tty my-pod -- /bin/sh        # Interactive shell access to a running pod (1 container case)
kubectl exec my-pod -c my-container -- ls /         # Run command in existing pod (multi-container case)
kubectl debug my-pod -it --image=busybox:1.28       # Create an interactive debugging session within existing pod and immediately attach to it
kubectl debug node/my-node -it --image=busybox:1.28 # Create an interactive debugging session on a node and immediately attach to it
kubectl top pod                                     # Show metrics for all pods in the default namespace
kubectl top pod POD_NAME --containers               # Show metrics for a given pod and its containers
kubectl top pod POD_NAME --sort-by=cpu              # Show metrics for a given pod and sort it by 'cpu' or 'memory'
```

## Copying files and directories to and from containers

```bash
kubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir            # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace
kubectl cp /tmp/foo my-pod:/tmp/bar -c my-container    # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar       # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace
kubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar       # Copy /tmp/foo from a remote pod to /tmp/bar locally
```

`kubectl cp` requires that the 'tar' binary is present in your container image.
If 'tar' is not present, `kubectl cp` will fail. For advanced use cases, such as symlinks, wildcard expansion or file mode preservation, consider using `kubectl exec`.

```bash
tar cf - /tmp/foo | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /tmp/bar   # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace
kubectl exec -n my-namespace my-pod -- tar cf - /tmp/foo | tar xf - -C /tmp/bar      # Copy /tmp/foo from a remote pod to /tmp/bar locally
```

## Interacting with Deployments and Services

```bash
kubectl logs deploy/my-deployment                         # dump Pod logs for a Deployment (single-container case)
kubectl logs deploy/my-deployment -c my-container         # dump Pod logs for a Deployment (multi-container case)

kubectl port-forward svc/my-service 5000                  # listen on local port 5000 and forward to port 5000 on Service backend
kubectl port-forward svc/my-service 5000:my-service-port  # listen on local port 5000 and forward to Service target port with name <my-service-port>

kubectl port-forward deploy/my-deployment 5000:6000       # listen on local port 5000 and forward to port 6000 on a Pod created by <my-deployment>
kubectl exec deploy/my-deployment -- ls                   # run command in first Pod and first container in Deployment (single- or multi-container cases)
```

## Interacting with Nodes and cluster

```bash
kubectl cordon my-node                                                # Mark my-node as unschedulable
kubectl drain my-node                                                 # Drain my-node in preparation for maintenance
kubectl uncordon my-node                                              # Mark my-node as schedulable
kubectl top node                                                      # Show metrics for all nodes
kubectl top node my-node                                              # Show metrics for a given node
kubectl cluster-info                                                  # Display addresses of the master and services
kubectl cluster-info dump                                             # Dump current cluster state to stdout
kubectl cluster-info dump --output-directory=/path/to/cluster-state   # Dump current cluster state to /path/to/cluster-state

# View existing taints on current nodes.
kubectl get nodes -o='custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect'

# If a taint with that key and effect already exists, its value is replaced as specified.
kubectl taint nodes foo dedicated=special-user:NoSchedule
```

### Resource types

List all supported resource types along with their shortnames, [API group](/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning), whether they are [namespaced](/docs/concepts/overview/working-with-objects/namespaces), and [kind](/docs/concepts/overview/working-with-objects/):

```bash
kubectl api-resources
```

Other operations for exploring API resources:

```bash
kubectl api-resources --namespaced=true      # All namespaced resources
kubectl api-resources --namespaced=false     # All non-namespaced resources
kubectl api-resources -o name                # All resources with simple output (only the resource name)
kubectl api-resources -o wide                # All resources with expanded (aka "wide") output
kubectl api-resources --verbs=list,get       # All resources that support the "list" and "get" request verbs
kubectl api-resources --api-group=extensions # All resources in the "extensions" API group
```

### Formatting output

To output details to your terminal window in a specific format, add the `-o` (or `--output`) flag to a supported `kubectl` command.

Output format | Description
--------------| -----------
`-o=custom-columns=<spec>` | Print a table using a comma-separated list of custom columns
`-o=custom-columns-file=<filename>` | Print a table using the custom columns template in the `<filename>` file
`-o=go-template=<template>` | Print the fields defined in a [golang template](https://pkg.go.dev/text/template)
`-o=go-template-file=<filename>` | Print the fields defined by the [golang template](https://pkg.go.dev/text/template) in the `<filename>` file
`-o=json` | Output a JSON formatted API object
`-o=jsonpath=<template>` | Print the fields defined in a [jsonpath](/docs/reference/kubectl/jsonpath) expression
`-o=jsonpath-file=<filename>` | Print the fields defined by the [jsonpath](/docs/reference/kubectl/jsonpath) expression in the `<filename>` file
`-o=name` | Print only the resource name and nothing else
`-o=wide` | Output in the plain-text format with any additional information, and for pods, the node name is included
`-o=yaml` | Output a YAML formatted API object

Examples using `-o=custom-columns`:

```bash
# All images running in a cluster
kubectl get pods -A -o=custom-columns='DATA:spec.containers[*].image'

# All images running in namespace: default, grouped by Pod
kubectl get pods --namespace default --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"

# All images excluding "registry.k8s.io/coredns:1.6.2"
kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image!="registry.k8s.io/coredns:1.6.2")].image'

# All fields under metadata regardless of name
kubectl get pods -A -o=custom-columns='DATA:metadata.*'
```

More examples in the kubectl [reference documentation](/docs/reference/kubectl/#custom-columns).

### Kubectl output verbosity and debugging

Kubectl verbosity is controlled with the `-v` or `--v` flags followed by an integer representing the log level.
General Kubernetes logging conventions and the associated log levels are described [here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md).

Verbosity | Description
--------------| -----------
`--v=0` | Generally useful for this to *always* be visible to a cluster operator.
`--v=1` | A reasonable default log level if you don't want verbosity.
`--v=2` | Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
`--v=3` | Extended information about changes.
`--v=4` | Debug level verbosity.
`--v=5` | Trace level verbosity.
`--v=6` | Display requested resources.
`--v=7` | Display HTTP request headers.
`--v=8` | Display HTTP request contents.
`--v=9` | Display HTTP request contents without truncation of contents.

## What's next

* Read the [kubectl overview](/docs/reference/kubectl/) and learn about [JsonPath](/docs/reference/kubectl/jsonpath).

* See [kubectl](/docs/reference/kubectl/kubectl/) options.

* Also read [kubectl Usage Conventions](/docs/reference/kubectl/conventions/) to understand how to use kubectl in reusable scripts.

* See more community [kubectl cheatsheets](https://github.com/dennyzhang/cheatsheet-kubernetes-A4).
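As a footnote to the copying section above: the `tar` streaming pattern used with `kubectl exec` (`tar cf - ... | ... tar xf - -C ...`) can be tried entirely locally, without a cluster, to see how the pipe carries the archive. This is a minimal sketch; the file and directory names are illustrative.

```bash
# Local sketch of the tar streaming pattern behind `kubectl exec`-based copies:
# pack on one side of a pipe, unpack on the other. No cluster involved.
src=$(mktemp -d)    # stands in for the local side
dst=$(mktemp -d)    # stands in for the pod filesystem

echo "hello" > "$src/foo.txt"

# "Upload": stream an archive of foo.txt and unpack it at the destination
tar cf - -C "$src" foo.txt | tar xf - -C "$dst"

cat "$dst/foo.txt"   # prints "hello"
```

Wrapping `kubectl exec -i -n <namespace> <pod> --` around either side of the pipe turns this into the in-cluster copy shown earlier.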
reference kubectl   and learn about  JsonPath   docs reference kubectl jsonpath      See  kubectl   docs reference kubectl kubectl   options     Also read  kubectl Usage Conventions   docs reference kubectl conventions   to understand how to use kubectl in reusable scripts     See more community  kubectl cheatsheets  https   github com dennyzhang cheatsheet kubernetes A4  "}
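The cheat sheet's `custom-columns` output format selects fields by dotted path, with `[*]` fanning out over list elements. As a rough illustration of those path semantics only (a plain-Python sketch, not kubectl's implementation; `select_path` and the sample pod list are invented for this example):

```python
import json

def select_path(obj, path):
    """Resolve a kubectl custom-columns style dotted path such as
    '.spec.containers[*].image'; a trailing '[*]' fans out over a list."""
    results = [obj]
    for part in path.strip(".").split("."):
        fanout = part.endswith("[*]")
        key = part[:-3] if fanout else part
        step = []
        for node in results:
            value = node.get(key) if isinstance(node, dict) else None
            if value is None:
                continue
            if fanout and isinstance(value, list):
                step.extend(value)  # wildcard: continue from each list element
            else:
                step.append(value)
        results = step
    return results

pods = json.loads("""
{"items": [
  {"metadata": {"name": "web-0"},
   "spec": {"containers": [{"image": "nginx:1.25"}, {"image": "busybox:1.28"}]}},
  {"metadata": {"name": "db-0"},
   "spec": {"containers": [{"image": "postgres:16"}]}}
]}
""")

# Selects the same fields as:  -o custom-columns='NAME:.metadata.name'
names = [select_path(p, ".metadata.name")[0] for p in pods["items"]]

# Selects the same fields as:  -o custom-columns='DATA:.spec.containers[*].image'
images = [i for p in pods["items"] for i in select_path(p, ".spec.containers[*].image")]
```

With the sample input, `names` is `["web-0", "db-0"]` and `images` collects all three container images across both pods.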
{"questions":"kubernetes reference title JSONPath Support Kubectl supports JSONPath template contenttype concept weight 40 overview","answers":"---\ntitle: JSONPath Support\ncontent_type: concept\nweight: 40\n---\n\n<!-- overview -->\nKubectl supports JSONPath template.\n\n\n<!-- body -->\n\nJSONPath template is composed of JSONPath expressions enclosed by curly braces {}.\nKubectl uses JSONPath expressions to filter on specific fields in the JSON object and format the output.\nIn addition to the original JSONPath template syntax, the following functions and syntax are valid:\n\n1. Use double quotes to quote text inside JSONPath expressions.\n2. Use the `range`, `end` operators to iterate lists.\n3. Use negative slice indices to step backwards through a list. Negative indices do not \"wrap around\" a list and are valid as long as `-index + listLength >= 0`.\n\n\n\n- The `$` operator is optional since the expression always starts from the root object by default.\n\n- The result object is printed as its String() function.\n\n\n\nGiven the JSON input:\n\n```json\n{\n  \"kind\": \"List\",\n  \"items\":[\n    {\n      \"kind\":\"None\",\n      \"metadata\":{\n        \"name\":\"127.0.0.1\",\n        \"labels\":{\n          \"kubernetes.io\/hostname\":\"127.0.0.1\"\n        }\n      },\n      \"status\":{\n        \"capacity\":{\"cpu\":\"4\"},\n        \"addresses\":[{\"type\": \"LegacyHostIP\", \"address\":\"127.0.0.1\"}]\n      }\n    },\n    {\n      \"kind\":\"None\",\n      \"metadata\":{\"name\":\"127.0.0.2\"},\n      \"status\":{\n        \"capacity\":{\"cpu\":\"8\"},\n        \"addresses\":[\n          {\"type\": \"LegacyHostIP\", \"address\":\"127.0.0.2\"},\n          {\"type\": \"another\", \"address\":\"127.0.0.3\"}\n        ]\n      }\n    }\n  ],\n  \"users\":[\n    {\n      \"name\": \"myself\",\n      \"user\": {}\n    },\n    {\n      \"name\": \"e2e\",\n      \"user\": {\"username\": \"admin\", \"password\": \"secret\"}\n    }\n  ]\n}\n```\n\nFunction    
        | Description                  | Example                                                         | Result\n--------------------|------------------------------|-----------------------------------------------------------------|------------------\n`text`              | the plain text               | `kind is {.kind}`                                               | `kind is List`\n`@`                 | the current object           | `{@}`                                                           | the same as input\n`.` or `[]`         | child operator               | `{.kind}`, `{['kind']}` or `{['name\\.type']}`                   | `List`\n`..`                | recursive descent            | `{..name}`                                                      | `127.0.0.1 127.0.0.2 myself e2e`\n`*`                 | wildcard. Get all objects    | `{.items[*].metadata.name}`                                     | `[127.0.0.1 127.0.0.2]`\n`[start:end:step]` | subscript operator           | `{.users[0].name}`                                              | `myself`\n`[,]`               | union operator               | `{.items[*]['metadata.name', 'status.capacity']}`               | `127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]`\n`?()`               | filter                       | `{.users[?(@.name==\"e2e\")].user.password}`                      | `secret`\n`range`, `end`      | iterate list                 | `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}` | `[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]`\n`''`                | quote interpreted string     | `{range .items[*]}{.metadata.name}{'\\t'}{end}`                  | `127.0.0.1      127.0.0.2`\n`\\`                 | escape termination character | `{.items[0].metadata.labels.kubernetes\\.io\/hostname}`           | `127.0.0.1`\n\nExamples using `kubectl` and JSONPath expressions:\n\n```shell\nkubectl get pods -o json\nkubectl get pods -o=jsonpath='{@}'\nkubectl get pods 
-o=jsonpath='{.items[0]}'\nkubectl get pods -o=jsonpath='{.items[0].metadata.name}'\nkubectl get pods -o=jsonpath=\"{.items[*]['metadata.name', 'status.capacity']}\"\nkubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.startTime}{\"\\n\"}{end}'\nkubectl get pods -o=jsonpath='{.items[0].metadata.labels.kubernetes\\.io\/hostname}'\n```\n\n\nOn Windows, you must _double_ quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote around any literals in the template. For example:\n\n```cmd\nkubectl get pods -o=jsonpath=\"{range .items[*]}{.metadata.name}{'\\t'}{.status.startTime}{'\\n'}{end}\"\nkubectl get pods -o=jsonpath=\"{range .items[*]}{.metadata.name}{\\\"\\t\\\"}{.status.startTime}{\\\"\\n\\\"}{end}\"\n```\n\n\n\n\nJSONPath regular expressions are not supported. If you want to match using regular expressions, you can use a tool such as `jq`.\n\n```shell\n# kubectl does not support regular expressions for JSONpath output\n# The following command does not work\nkubectl get pods -o jsonpath='{.items[?(@.metadata.name=~\/^test$\/)].metadata.name}'\n\n# The following command achieves the desired result\nkubectl get pods -o json | jq -r '.items[] | select(.metadata.name | test(\"test-\")).metadata.name'\n```\n","site":"kubernetes reference"}
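The JSONPath results tabulated in the record above can be reproduced with plain dict/list access. A minimal Python cross-check over the same sample input (an illustration of what the expressions compute, not a JSONPath engine):

```python
import json

doc = json.loads("""
{"kind": "List",
 "items": [
   {"kind": "None",
    "metadata": {"name": "127.0.0.1",
                 "labels": {"kubernetes.io/hostname": "127.0.0.1"}},
    "status": {"capacity": {"cpu": "4"},
               "addresses": [{"type": "LegacyHostIP", "address": "127.0.0.1"}]}},
   {"kind": "None",
    "metadata": {"name": "127.0.0.2"},
    "status": {"capacity": {"cpu": "8"},
               "addresses": [{"type": "LegacyHostIP", "address": "127.0.0.2"},
                             {"type": "another", "address": "127.0.0.3"}]}}
 ],
 "users": [
   {"name": "myself", "user": {}},
   {"name": "e2e", "user": {"username": "admin", "password": "secret"}}
 ]}
""")

# {.items[*].metadata.name} -- wildcard over the list, then child access
names = [item["metadata"]["name"] for item in doc["items"]]

# {.users[?(@.name=="e2e")].user.password} -- filter, then child access
passwords = [u["user"]["password"] for u in doc["users"] if u["name"] == "e2e"]

# {.items[-1].metadata.name} -- a negative index counts back from the end
last = doc["items"][-1]["metadata"]["name"]
```

Here `names` is `["127.0.0.1", "127.0.0.2"]`, `passwords` is `["secret"]`, and `last` is `"127.0.0.2"`, matching the wildcard and filter rows of the table.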
{"questions":"kubernetes reference contenttype tool reference heading synopsis weight 30 kubectl controls the Kubernetes cluster manager title kubectl","answers":"---\ntitle: kubectl\ncontent_type: tool-reference\nweight: 30\n---\n\n## \n\nkubectl controls the Kubernetes cluster manager.\n\nFind more information in [Command line tool](\/docs\/reference\/kubectl\/) (`kubectl`).\n\n```shell\nkubectl [flags]\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--add-dir-header<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If true, adds the file directory to the header of the log messages<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--alsologtostderr<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">log to standard error as well as files<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Username to impersonate for the operation<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group stringArray<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--azure-container-registry-config string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Path to the file containing Azure container registry configuration information.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Default cache directory<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 
130%; word-wrap: break-word;\">Path to a cert file for the certificate authority<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Path to a client certificate file for TLS<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Path to a client key file for TLS<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cloud-provider-gce-l7lb-src-cidrs cidrs&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 130.211.0.0\/22,35.191.0.0\/16<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">CIDRs opened in GCE firewall for L7 LB traffic proxy & health checks<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cloud-provider-gce-lb-src-cidrs cidrs&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 130.211.0.0\/22,209.85.152.0\/22,209.85.204.0\/22,35.191.0.0\/16<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">CIDRs opened in GCE firewall for L4 LB traffic proxy & health checks<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The name of the kubeconfig cluster to use<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The name of the kubeconfig context to use<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">help for kubectl<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Path to the kubeconfig file to use for CLI requests.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-backtrace-at traceLocation&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: :0<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">when logging hits line file:N, emit a stack trace<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-dir string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If non-empty, write log files in this directory<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If non-empty, use this log file<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-file-max-size uint&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1800<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Defines the maximum size a log file can grow to. Unit is megabytes. 
If the value is 0, the maximum file size is unlimited.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--log-flush-frequency duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Maximum number of seconds between log flushes<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--logtostderr&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">log to standard error instead of files<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Require server version to match client version<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If present, the namespace scope for this CLI request<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--one-output<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If true, only write logs to their native severity level (vs also writing to each lower severity level)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Password for basic authentication to the API server<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Name of profile to capture. 
One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Name of the file to write the profile to<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The address and port of the Kubernetes API server<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--skip-headers<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If true, avoid header prefixes in the log messages<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--skip-log-headers<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">If true, avoid headers when opening log files<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--stderrthreshold severity&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 2<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">logs at or above this threshold go to stderr<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Server name to use for server certificate validation. 
If it is not provided, the hostname used to contact the server is used<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Bearer token for authentication to the API server<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">The name of the kubeconfig user to use<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Username for basic authentication to the API server<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-v, --v Level<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">number for the log level verbosity<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Print version information and quit<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--vmodule moduleSpec<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">comma-separated list of pattern=N settings for file-filtered logging<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Treat warnings received from the server as errors and exit with a non-zero exit code<\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n## \n\n<table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">KUBECONFIG<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Path to the kubectl configuration (\"kubeconfig\") file. 
Default: \"$HOME\/.kube\/config\"<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">KUBECTL_COMMAND_HEADERS<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">When set to false, turns off extra HTTP headers detailing invoked kubectl command (Kubernetes version v1.22 or later)<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">KUBECTL_DEBUG_CUSTOM_PROFILE<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">When set to true, custom flag will be enabled in kubectl debug. This flag is used to customize the pre-defined profiles.\n<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">KUBECTL_EXPLAIN_OPENAPIV3<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">Toggles whether calls to `kubectl explain` use the new OpenAPIv3 data source available. OpenAPIV3 is enabled by default since Kubernetes 1.24.\n<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">KUBECTL_ENABLE_CMD_SHADOW<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">When set to true, external plugins can be used as subcommands for builtin commands if subcommand does not exist. In alpha stage, this feature can only be used for create command(e.g. kubectl create networkpolicy). \n<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">KUBECTL_PORT_FORWARD_WEBSOCKETS<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">When set to true, the kubectl port-forward command will attempt to stream using the websockets protocol. If the upgrade to websockets fails, the commands will fallback to use the current SPDY protocol.\n<\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">KUBECTL_REMOTE_COMMAND_WEBSOCKETS<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\">When set to true, the kubectl exec, cp, and attach commands will attempt to stream using the websockets protocol. 
If the upgrade to websockets fails, the commands will fallback to use the current SPDY protocol.\n<\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n## \n\n* [kubectl annotate](\/docs\/reference\/kubectl\/generated\/kubectl_annotate\/) - Update the annotations on a resource\n* [kubectl api-resources](\/docs\/reference\/kubectl\/generated\/kubectl_api-resources\/) - Print the supported API resources on the server\n* [kubectl api-versions](\/docs\/reference\/kubectl\/generated\/kubectl_api-versions\/) - Print the supported API versions on the server,\n  in the form of \"group\/version\"\n* [kubectl apply](\/docs\/reference\/kubectl\/generated\/kubectl_apply\/) - Apply a configuration to a resource by filename or stdin\n* [kubectl attach](\/docs\/reference\/kubectl\/generated\/kubectl_attach\/) - Attach to a running container\n* [kubectl auth](\/docs\/reference\/kubectl\/generated\/kubectl_auth\/) - Inspect authorization\n* [kubectl autoscale](\/docs\/reference\/kubectl\/generated\/kubectl_autoscale\/) - Auto-scale a Deployment, ReplicaSet, or ReplicationController\n* [kubectl certificate](\/docs\/reference\/kubectl\/generated\/kubectl_certificate\/) - Modify certificate resources.\n* [kubectl cluster-info](\/docs\/reference\/kubectl\/generated\/kubectl_cluster-info\/) - Display cluster info\n* [kubectl completion](\/docs\/reference\/kubectl\/generated\/kubectl_completion\/) - Output shell completion code for the specified shell (bash or zsh)\n* [kubectl config](\/docs\/reference\/kubectl\/generated\/kubectl_config\/) - Modify kubeconfig files\n* [kubectl cordon](\/docs\/reference\/kubectl\/generated\/kubectl_cordon\/) - Mark node as unschedulable\n* [kubectl cp](\/docs\/reference\/kubectl\/generated\/kubectl_cp\/) - Copy files and directories to and from containers.\n* [kubectl create](\/docs\/reference\/kubectl\/generated\/kubectl_create\/) - Create a resource from a file or from stdin.\n* [kubectl debug](\/docs\/reference\/kubectl\/generated\/kubectl_debug\/) - Create 
debugging sessions for troubleshooting workloads and nodes\n* [kubectl delete](\/docs\/reference\/kubectl\/generated\/kubectl_delete\/) - Delete resources by filenames,\n  stdin, resources and names, or by resources and label selector\n* [kubectl describe](\/docs\/reference\/kubectl\/generated\/kubectl_describe\/) - Show details of a specific resource or group of resources\n* [kubectl diff](\/docs\/reference\/kubectl\/generated\/kubectl_diff\/) - Diff live version against would-be applied version\n* [kubectl drain](\/docs\/reference\/kubectl\/generated\/kubectl_drain\/) - Drain node in preparation for maintenance\n* [kubectl edit](\/docs\/reference\/kubectl\/generated\/kubectl_edit\/) - Edit a resource on the server\n* [kubectl events](\/docs\/reference\/kubectl\/generated\/kubectl_events\/)  - List events\n* [kubectl exec](\/docs\/reference\/kubectl\/generated\/kubectl_exec\/) - Execute a command in a container\n* [kubectl explain](\/docs\/reference\/kubectl\/generated\/kubectl_explain\/) - Documentation of resources\n* [kubectl expose](\/docs\/reference\/kubectl\/generated\/kubectl_expose\/) - Take a replication controller,\n  service, deployment or pod and expose it as a new Kubernetes Service\n* [kubectl get](\/docs\/reference\/kubectl\/generated\/kubectl_get\/) - Display one or many resources\n* [kubectl kustomize](\/docs\/reference\/kubectl\/generated\/kubectl_kustomize\/) - Build a kustomization\n  target from a directory or a remote url.\n* [kubectl label](\/docs\/reference\/kubectl\/generated\/kubectl_label\/) - Update the labels on a resource\n* [kubectl logs](\/docs\/reference\/kubectl\/generated\/kubectl_logs\/) - Print the logs for a container in a pod\n* [kubectl options](\/docs\/reference\/kubectl\/generated\/kubectl_options\/) - Print the list of flags inherited by all commands\n* [kubectl patch](\/docs\/reference\/kubectl\/generated\/kubectl_patch\/) - Update field(s) of a resource\n* [kubectl 
plugin](\/docs\/reference\/kubectl\/generated\/kubectl_plugin\/) - Provides utilities for interacting with plugins.\n* [kubectl port-forward](\/docs\/reference\/kubectl\/generated\/kubectl_port-forward\/) - Forward one or more local ports to a pod\n* [kubectl proxy](\/docs\/reference\/kubectl\/generated\/kubectl_proxy\/) - Run a proxy to the Kubernetes API server\n* [kubectl replace](\/docs\/reference\/kubectl\/generated\/kubectl_replace\/) - Replace a resource by filename or stdin\n* [kubectl rollout](\/docs\/reference\/kubectl\/generated\/kubectl_rollout\/) - Manage the rollout of a resource\n* [kubectl run](\/docs\/reference\/kubectl\/generated\/kubectl_run\/) - Run a particular image on the cluster\n* [kubectl scale](\/docs\/reference\/kubectl\/generated\/kubectl_scale\/) - Set a new size for a Deployment, ReplicaSet or Replication Controller\n* [kubectl set](\/docs\/reference\/kubectl\/generated\/kubectl_set\/) - Set specific features on objects\n* [kubectl taint](\/docs\/reference\/kubectl\/generated\/kubectl_taint\/) - Update the taints on one or more nodes\n* [kubectl top](\/docs\/reference\/kubectl\/generated\/kubectl_top\/) - Display Resource (CPU\/Memory\/Storage) usage.\n* [kubectl uncordon](\/docs\/reference\/kubectl\/generated\/kubectl_uncordon\/) - Mark node as schedulable\n* [kubectl version](\/docs\/reference\/kubectl\/generated\/kubectl_version\/) - Print the client and server version information\n* [kubectl wait](\/docs\/reference\/kubectl\/generated\/kubectl_wait\/) - Experimental: Wait for a specific condition on one or many resources.","site":"kubernetes reference","answers_cleaned":"
colgroup   tbody    tr   td colspan  2    add dir header  td    tr   tr   td   td  td style  line height  130   word wrap  break word   If true  adds the file directory to the header of the log messages  td    tr    tr   td colspan  2    alsologtostderr  td    tr   tr   td   td  td style  line height  130   word wrap  break word   log to standard error as well as files  td    tr    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Username to impersonate for the operation  td    tr    tr   td colspan  2    as group stringArray  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Group to impersonate for the operation  this flag can be repeated to specify multiple groups   td    tr    tr   td colspan  2    azure container registry config string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Path to the file containing Azure container registry configuration information   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Default cache directory  td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Path to a cert file for the certificate authority  td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Path to a client certificate file for TLS  td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Path to a client key file for TLS  td    tr    tr   td colspan  2    cloud provider gce l7lb src cidrs cidrs nbsp  nbsp  nbsp  nbsp  nbsp Default  130 211 0 0 22 35 191 0 0 16  td    tr   tr   td   td  td style  line height  130   word wrap  break word   CIDRs opened in GCE 
firewall for L7 LB traffic proxy   health checks  td    tr    tr   td colspan  2    cloud provider gce lb src cidrs cidrs nbsp  nbsp  nbsp  nbsp  nbsp Default  130 211 0 0 22 209 85 152 0 22 209 85 204 0 22 35 191 0 0 16  td    tr   tr   td   td  td style  line height  130   word wrap  break word   CIDRs opened in GCE firewall for L4 LB traffic proxy   health checks  td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   The name of the kubeconfig cluster to use  td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   The name of the kubeconfig context to use  td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word   help for kubectl  td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word   If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Path to the kubeconfig file to use for CLI 
requests   td    tr    tr   td colspan  2    log backtrace at traceLocation nbsp  nbsp  nbsp  nbsp  nbsp Default   0  td    tr   tr   td   td  td style  line height  130   word wrap  break word   when logging hits line file N  emit a stack trace  td    tr    tr   td colspan  2    log dir string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   If non empty  write log files in this directory  td    tr    tr   td colspan  2    log file string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   If non empty  use this log file  td    tr    tr   td colspan  2    log file max size uint nbsp  nbsp  nbsp  nbsp  nbsp Default  1800  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Defines the maximum size a log file can grow to  Unit is megabytes  If the value is 0  the maximum file size is unlimited   td    tr    tr   td colspan  2    log flush frequency duration nbsp  nbsp  nbsp  nbsp  nbsp Default  5s  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Maximum number of seconds between log flushes  td    tr    tr   td colspan  2    logtostderr nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word   log to standard error instead of files  td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Require server version to match client version  td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   If present  the namespace scope for this CLI request  td    tr    tr   td colspan  2    one output  td    tr   tr   td   td  td style  line height  130   word wrap  break word   If true  only write logs to their native severity level  vs also writing to each lower severity level   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td 
 td style  line height  130   word wrap  break word   Password for basic authentication to the API server  td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Name of the file to write the profile to  td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word   The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   The address and port of the Kubernetes API server  td    tr    tr   td colspan  2    skip headers  td    tr   tr   td   td  td style  line height  130   word wrap  break word   If true  avoid header prefixes in the log messages  td    tr    tr   td colspan  2    skip log headers  td    tr   tr   td   td  td style  line height  130   word wrap  break word   If true  avoid headers when opening log files  td    tr    tr   td colspan  2    stderrthreshold severity nbsp  nbsp  nbsp  nbsp  nbsp Default  2  td    tr   tr   td   td  td style  line height  130   word wrap  break word   logs at or above this threshold go to stderr  td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  td    
tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Bearer token for authentication to the API server  td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   The name of the kubeconfig user to use  td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Username for basic authentication to the API server  td    tr    tr   td colspan  2   v    v Level  td    tr   tr   td   td  td style  line height  130   word wrap  break word   number for the log level verbosity  td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word   Print version information and quit  td    tr    tr   td colspan  2    vmodule moduleSpec  td    tr   tr   td   td  td style  line height  130   word wrap  break word   comma separated list of pattern N settings for file filtered logging  td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Treat warnings received from the server as errors and exit with a non zero exit code  td    tr     tbody    table         table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2  KUBECONFIG  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Path to the kubectl configuration   kubeconfig   file  Default    HOME  kube config   td    tr    tr   td colspan  2  KUBECTL COMMAND HEADERS  td    tr   tr   td   td  td style  line height  130   word wrap  break word   When set to false  turns off extra HTTP headers detailing invoked kubectl command  Kubernetes version v1 22 or later   td    tr    tr   td colspan  2  KUBECTL DEBUG CUSTOM PROFILE  td    tr   tr   td   td  td style  line 
height  130   word wrap  break word   When set to true  custom flag will be enabled in kubectl debug  This flag is used to customize the pre defined profiles    td    tr    tr   td colspan  2  KUBECTL EXPLAIN OPENAPIV3  td    tr   tr   td   td  td style  line height  130   word wrap  break word   Toggles whether calls to  kubectl explain  use the new OpenAPIv3 data source available  OpenAPIV3 is enabled by default since Kubernetes 1 24    td    tr    tr   td colspan  2  KUBECTL ENABLE CMD SHADOW  td    tr   tr   td   td  td style  line height  130   word wrap  break word   When set to true  external plugins can be used as subcommands for builtin commands if subcommand does not exist  In alpha stage  this feature can only be used for create command e g  kubectl create networkpolicy      td    tr    tr   td colspan  2  KUBECTL PORT FORWARD WEBSOCKETS  td    tr   tr   td   td  td style  line height  130   word wrap  break word   When set to true  the kubectl port forward command will attempt to stream using the websockets protocol  If the upgrade to websockets fails  the commands will fallback to use the current SPDY protocol    td    tr    tr   td colspan  2  KUBECTL REMOTE COMMAND WEBSOCKETS  td    tr   tr   td   td  td style  line height  130   word wrap  break word   When set to true  the kubectl exec  cp  and attach commands will attempt to stream using the websockets protocol  If the upgrade to websockets fails  the commands will fallback to use the current SPDY protocol    td    tr     tbody    table           kubectl annotate   docs reference kubectl generated kubectl annotate     Update the annotations on a resource    kubectl api resources   docs reference kubectl generated kubectl api resources     Print the supported API resources on the server    kubectl api versions   docs reference kubectl generated kubectl api versions     Print the supported API versions on the server    in the form of  group version     kubectl apply   docs reference kubectl 
generated kubectl apply     Apply a configuration to a resource by filename or stdin    kubectl attach   docs reference kubectl generated kubectl attach     Attach to a running container    kubectl auth   docs reference kubectl generated kubectl auth     Inspect authorization    kubectl autoscale   docs reference kubectl generated kubectl autoscale     Auto scale a Deployment  ReplicaSet  or ReplicationController    kubectl certificate   docs reference kubectl generated kubectl certificate     Modify certificate resources     kubectl cluster info   docs reference kubectl generated kubectl cluster info     Display cluster info    kubectl completion   docs reference kubectl generated kubectl completion     Output shell completion code for the specified shell  bash or zsh     kubectl config   docs reference kubectl generated kubectl config     Modify kubeconfig files    kubectl cordon   docs reference kubectl generated kubectl cordon     Mark node as unschedulable    kubectl cp   docs reference kubectl generated kubectl cp     Copy files and directories to and from containers     kubectl create   docs reference kubectl generated kubectl create     Create a resource from a file or from stdin     kubectl debug   docs reference kubectl generated kubectl debug     Create debugging sessions for troubleshooting workloads and nodes    kubectl delete   docs reference kubectl generated kubectl delete     Delete resources by filenames    stdin  resources and names  or by resources and label selector    kubectl describe   docs reference kubectl generated kubectl describe     Show details of a specific resource or group of resources    kubectl diff   docs reference kubectl generated kubectl diff     Diff live version against would be applied version    kubectl drain   docs reference kubectl generated kubectl drain     Drain node in preparation for maintenance    kubectl edit   docs reference kubectl generated kubectl edit     Edit a resource on the server    kubectl events   docs 
reference kubectl generated kubectl events      List events    kubectl exec   docs reference kubectl generated kubectl exec     Execute a command in a container    kubectl explain   docs reference kubectl generated kubectl explain     Documentation of resources    kubectl expose   docs reference kubectl generated kubectl expose     Take a replication controller    service  deployment or pod and expose it as a new Kubernetes Service    kubectl get   docs reference kubectl generated kubectl get     Display one or many resources    kubectl kustomize   docs reference kubectl generated kubectl kustomize     Build a kustomization   target from a directory or a remote url     kubectl label   docs reference kubectl generated kubectl label     Update the labels on a resource    kubectl logs   docs reference kubectl generated kubectl logs     Print the logs for a container in a pod    kubectl options   docs reference kubectl generated kubectl options     Print the list of flags inherited by all commands    kubectl patch   docs reference kubectl generated kubectl patch     Update field s  of a resource    kubectl plugin   docs reference kubectl generated kubectl plugin     Provides utilities for interacting with plugins     kubectl port forward   docs reference kubectl generated kubectl port forward     Forward one or more local ports to a pod    kubectl proxy   docs reference kubectl generated kubectl proxy     Run a proxy to the Kubernetes API server    kubectl replace   docs reference kubectl generated kubectl replace     Replace a resource by filename or stdin    kubectl rollout   docs reference kubectl generated kubectl rollout     Manage the rollout of a resource    kubectl run   docs reference kubectl generated kubectl run     Run a particular image on the cluster    kubectl scale   docs reference kubectl generated kubectl scale     Set a new size for a Deployment  ReplicaSet or Replication Controller    kubectl set   docs reference kubectl generated kubectl set     
Set specific features on objects    kubectl taint   docs reference kubectl generated kubectl taint     Update the taints on one or more nodes    kubectl top   docs reference kubectl generated kubectl top     Display Resource  CPU Memory Storage  usage     kubectl uncordon   docs reference kubectl generated kubectl uncordon     Mark node as schedulable    kubectl version   docs reference kubectl generated kubectl version     Print the client and server version information    kubectl wait   docs reference kubectl generated kubectl wait     Experimental  Wait for a specific condition on one or many resources "}
{"questions":"kubernetes reference brendandburns contenttype concept reviewers title kubectl for Docker Users thockin overview weight 50","answers":"---\ntitle: kubectl for Docker Users\ncontent_type: concept\nreviewers:\n- brendandburns\n- thockin\nweight: 50\n---\n\n<!-- overview -->\nYou can use the Kubernetes command line tool `kubectl` to interact with the API Server. Using kubectl is straightforward if you are familiar with the Docker command line tool. However, there are a few differences between the Docker commands and the kubectl commands. The following sections show a Docker sub-command and describe the equivalent `kubectl` command.\n\n\n<!-- body -->\n## docker run\n\nTo run an nginx Deployment and expose the Deployment, see [kubectl create deployment](\/docs\/reference\/generated\/kubectl\/kubectl-commands#-em-deployment-em-).\n\ndocker:\n\n```shell\ndocker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx\n```\n```\n55c103fa129692154a7652490236fee9be47d70a8dd562281ae7d2f9a339a6db\n```\n\n```shell\ndocker ps\n```\n```\nCONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES\n55c103fa1296        nginx               \"nginx -g 'daemon of\u2026\"   9 seconds ago       Up 9 seconds        0.0.0.0:80->80\/tcp   nginx-app\n```\n\nkubectl:\n\n```shell\n# start the pod running nginx\nkubectl create deployment --image=nginx nginx-app\n```\n```\ndeployment.apps\/nginx-app created\n```\n\n```shell\n# add env to nginx-app\nkubectl set env deployment\/nginx-app  DOMAIN=cluster\n```\n```\ndeployment.apps\/nginx-app env updated\n```\n\n\n`kubectl` commands print the type and name of the resource created or mutated, which can then be used in subsequent commands. 
You can expose a new Service after a Deployment is created.\n\n\n```shell\n# expose a port through a service\nkubectl expose deployment nginx-app --port=80 --name=nginx-http\n```\n```\nservice \"nginx-http\" exposed\n```\n\nBy using kubectl, you can create a [Deployment](\/docs\/concepts\/workloads\/controllers\/deployment\/) to ensure that N pods are running nginx, where N is the number of replicas stated in the spec and defaults to 1. You can also create a [service](\/docs\/concepts\/services-networking\/service\/) with a selector that matches the pod labels. For more information, see [Use a Service to Access an Application in a Cluster](\/docs\/tasks\/access-application-cluster\/service-access-application-cluster).\n\nBy default images run in the background, similar to `docker run -d ...`. To run things in the foreground, use [`kubectl run`](\/docs\/reference\/generated\/kubectl\/kubectl-commands\/#run) to create a pod:\n```shell\nkubectl run [-i] [--tty] --attach <name> --image=<image>\n```\n\nUnlike `docker run ...`, if you specify `--attach`, then you attach `stdin`, `stdout` and `stderr`. 
You cannot control which streams are attached (`docker -a ...`).\nTo detach from the container, you can type the escape sequence Ctrl+P followed by Ctrl+Q.\n\n## docker ps\n\nTo list what is currently running, see [kubectl get](\/docs\/reference\/generated\/kubectl\/kubectl-commands\/#get).\n\ndocker:\n\n```shell\ndocker ps -a\n```\n```\nCONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                     PORTS                NAMES\n14636241935f        ubuntu:16.04        \"echo test\"              5 seconds ago        Exited (0) 5 seconds ago                        cocky_fermi\n55c103fa1296        nginx               \"nginx -g 'daemon of\u2026\"   About a minute ago   Up About a minute          0.0.0.0:80->80\/tcp   nginx-app\n```\n\nkubectl:\n\n```shell\nkubectl get po\n```\n```\nNAME                        READY     STATUS      RESTARTS   AGE\nnginx-app-8df569cb7-4gd89   1\/1       Running     0          3m\nubuntu                      0\/1       Completed   0          20s\n```\n\n## docker attach\n\nTo attach a process that is already running in a container, see [kubectl attach](\/docs\/reference\/generated\/kubectl\/kubectl-commands\/#attach).\n\ndocker:\n\n```shell\ndocker ps\n```\n```\nCONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES\n55c103fa1296        nginx               \"nginx -g 'daemon of\u2026\"   5 minutes ago       Up 5 minutes        0.0.0.0:80->80\/tcp   nginx-app\n```\n\n```shell\ndocker attach 55c103fa1296\n...\n```\n\nkubectl:\n\n```shell\nkubectl get pods\n```\n```\nNAME              READY     STATUS    RESTARTS   AGE\nnginx-app-5jyvm   1\/1       Running   0          10m\n```\n\n```shell\nkubectl attach -it nginx-app-5jyvm\n...\n```\n\nTo detach from the container, you can type the escape sequence Ctrl+P followed by Ctrl+Q.\n\n## docker exec\n\nTo execute a command in a container, see [kubectl 
exec](\/docs\/reference\/generated\/kubectl\/kubectl-commands\/#exec).\n\ndocker:\n\n```shell\ndocker ps\n```\n```\nCONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES\n55c103fa1296        nginx               \"nginx -g 'daemon of\u2026\"   6 minutes ago       Up 6 minutes        0.0.0.0:80->80\/tcp   nginx-app\n```\n```shell\ndocker exec 55c103fa1296 cat \/etc\/hostname\n```\n```\n55c103fa1296\n```\n\nkubectl:\n\n```shell\nkubectl get po\n```\n```\nNAME              READY     STATUS    RESTARTS   AGE\nnginx-app-5jyvm   1\/1       Running   0          10m\n```\n\n```shell\nkubectl exec nginx-app-5jyvm -- cat \/etc\/hostname\n```\n```\nnginx-app-5jyvm\n```\n\nTo use interactive commands.\n\n\ndocker:\n\n```shell\ndocker exec -ti 55c103fa1296 \/bin\/sh\n# exit\n```\n\nkubectl:\n\n```shell\nkubectl exec -ti nginx-app-5jyvm -- \/bin\/sh\n# exit\n```\n\nFor more information, see [Get a Shell to a Running Container](\/docs\/tasks\/debug\/debug-application\/get-shell-running-container\/).\n\n## docker logs\n\nTo follow stdout\/stderr of a process that is running, see [kubectl logs](\/docs\/reference\/generated\/kubectl\/kubectl-commands\/#logs).\n\n\ndocker:\n\n```shell\ndocker logs -f a9e\n```\n```\n192.168.9.1 - - [14\/Jul\/2015:01:04:02 +0000] \"GET \/ HTTP\/1.1\" 200 612 \"-\" \"curl\/7.35.0\" \"-\"\n192.168.9.1 - - [14\/Jul\/2015:01:04:03 +0000] \"GET \/ HTTP\/1.1\" 200 612 \"-\" \"curl\/7.35.0\" \"-\"\n```\n\nkubectl:\n\n```shell\nkubectl logs -f nginx-app-zibvs\n```\n```\n10.240.63.110 - - [14\/Jul\/2015:01:09:01 +0000] \"GET \/ HTTP\/1.1\" 200 612 \"-\" \"curl\/7.26.0\" \"-\"\n10.240.63.110 - - [14\/Jul\/2015:01:09:02 +0000] \"GET \/ HTTP\/1.1\" 200 612 \"-\" \"curl\/7.26.0\" \"-\"\n```\n\nThere is a slight difference between pods and containers; by default pods do not terminate if their processes exit. Instead the pods restart the process. 
This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated, but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:\n\n```shell\nkubectl logs --previous nginx-app-zibvs\n```\n```\n10.240.63.110 - - [14\/Jul\/2015:01:09:01 +0000] \"GET \/ HTTP\/1.1\" 200 612 \"-\" \"curl\/7.26.0\" \"-\"\n10.240.63.110 - - [14\/Jul\/2015:01:09:02 +0000] \"GET \/ HTTP\/1.1\" 200 612 \"-\" \"curl\/7.26.0\" \"-\"\n```\n\nFor more information, see [Logging Architecture](\/docs\/concepts\/cluster-administration\/logging\/).\n\n## docker stop and docker rm\n\nTo stop and delete a running process, see [kubectl delete](\/docs\/reference\/generated\/kubectl\/kubectl-commands\/#delete).\n\ndocker:\n\n```shell\ndocker ps\n```\n```\nCONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                         NAMES\na9ec34d98787        nginx               \"nginx -g 'daemon of\"  22 hours ago        Up 22 hours         0.0.0.0:80->80\/tcp, 443\/tcp   nginx-app\n```\n\n```shell\ndocker stop a9ec34d98787\n```\n```\na9ec34d98787\n```\n\n```shell\ndocker rm a9ec34d98787\n```\n```\na9ec34d98787\n```\n\nkubectl:\n\n```shell\nkubectl get deployment nginx-app\n```\n```\nNAME         READY   UP-TO-DATE   AVAILABLE   AGE\nnginx-app    1\/1     1            1           2m\n```\n\n```shell\nkubectl get po -l app=nginx-app\n```\n```\nNAME                         READY     STATUS    RESTARTS   AGE\nnginx-app-2883164633-aklf7   1\/1       Running   0          2m\n```\n```shell\nkubectl delete deployment nginx-app\n```\n```\ndeployment \"nginx-app\" deleted\n```\n\n```shell\nkubectl get po -l app=nginx-app\n# Return nothing\n```\n\n\nWhen you use kubectl, you don't delete the pod directly. You have to first delete the Deployment that owns the pod. 
If you delete the pod directly, the Deployment recreates the pod.\n\n\n## docker login\n\nThere is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](\/docs\/concepts\/containers\/images\/#using-a-private-registry).\n\n## docker version\n\nTo get the version of client and server, see [kubectl version](\/docs\/reference\/generated\/kubectl\/kubectl-commands\/#version).\n\ndocker:\n\n```shell\ndocker version\n```\n```\nClient version: 1.7.0\nClient API version: 1.19\nGo version (client): go1.4.2\nGit commit (client): 0baf609\nOS\/Arch (client): linux\/amd64\nServer version: 1.7.0\nServer API version: 1.19\nGo version (server): go1.4.2\nGit commit (server): 0baf609\nOS\/Arch (server): linux\/amd64\n```\n\nkubectl:\n\n```shell\nkubectl version\n```\n```\nClient Version: version.Info{Major:\"1\", Minor:\"6\", GitVersion:\"v1.6.9+a3d1dfa6f4335\", GitCommit:\"9b77fed11a9843ce3780f70dd251e92901c43072\", GitTreeState:\"dirty\", BuildDate:\"2017-08-29T20:32:58Z\", OpenPaasKubernetesVersion:\"v1.03.02\", GoVersion:\"go1.7.5\", Compiler:\"gc\", Platform:\"linux\/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"6\", GitVersion:\"v1.6.9+a3d1dfa6f4335\", GitCommit:\"9b77fed11a9843ce3780f70dd251e92901c43072\", GitTreeState:\"dirty\", BuildDate:\"2017-08-29T20:32:58Z\", OpenPaasKubernetesVersion:\"v1.03.02\", GoVersion:\"go1.7.5\", Compiler:\"gc\", Platform:\"linux\/amd64\"}\n```\n\n## docker info\n\nTo get miscellaneous information about the environment and configuration, see [kubectl cluster-info](\/docs\/reference\/generated\/kubectl\/kubectl-commands\/#cluster-info).\n\ndocker:\n\n```shell\ndocker info\n```\n```\nContainers: 40\nImages: 168\nStorage Driver: aufs\n Root Dir: \/usr\/local\/google\/docker\/aufs\n Backing Filesystem: extfs\n Dirs: 248\n Dirperm1 Supported: false\nExecution Driver: native-0.2\nLogging Driver: json-file\nKernel Version: 
3.13.0-53-generic\nOperating System: Ubuntu 14.04.2 LTS\nCPUs: 12\nTotal Memory: 31.32 GiB\nName: k8s-is-fun.mtv.corp.google.com\nID: ADUV:GCYR:B3VJ:HMPO:LNPQ:KD5S:YKFQ:76VN:IANZ:7TFV:ZBF4:BYJO\nWARNING: No swap limit support\n```\n\nkubectl:\n\n```shell\nkubectl cluster-info\n```\n```\nKubernetes master is running at https:\/\/203.0.113.141\nKubeDNS is running at https:\/\/203.0.113.141\/api\/v1\/namespaces\/kube-system\/services\/kube-dns\/proxy\nkubernetes-dashboard is running at https:\/\/203.0.113.141\/api\/v1\/namespaces\/kube-system\/services\/kubernetes-dashboard\/proxy\nGrafana is running at https:\/\/203.0.113.141\/api\/v1\/namespaces\/kube-system\/services\/monitoring-grafana\/proxy\nHeapster is running at https:\/\/203.0.113.141\/api\/v1\/namespaces\/kube-system\/services\/monitoring-heapster\/proxy\nInfluxDB is running at https:\/\/203.0.113.141\/api\/v1\/namespaces\/kube-system\/services\/monitoring-influxdb\/proxy\n","site":"kubernetes reference"}
GitVersion  v1 6 9 a3d1dfa6f4335   GitCommit  9b77fed11a9843ce3780f70dd251e92901c43072   GitTreeState  dirty   BuildDate  2017 08 29T20 32 58Z   OpenPaasKubernetesVersion  v1 03 02   GoVersion  go1 7 5   Compiler  gc   Platform  linux amd64   Server Version  version Info Major  1   Minor  6   GitVersion  v1 6 9 a3d1dfa6f4335   GitCommit  9b77fed11a9843ce3780f70dd251e92901c43072   GitTreeState  dirty   BuildDate  2017 08 29T20 32 58Z   OpenPaasKubernetesVersion  v1 03 02   GoVersion  go1 7 5   Compiler  gc   Platform  linux amd64           docker info  To get miscellaneous information about the environment and configuration  see  kubectl cluster info   docs reference generated kubectl kubectl commands  cluster info    docker      shell docker info         Containers  40 Images  168 Storage Driver  aufs  Root Dir   usr local google docker aufs  Backing Filesystem  extfs  Dirs  248  Dirperm1 Supported  false Execution Driver  native 0 2 Logging Driver  json file Kernel Version  3 13 0 53 generic Operating System  Ubuntu 14 04 2 LTS CPUs  12 Total Memory  31 32 GiB Name  k8s is fun mtv corp google com ID  ADUV GCYR B3VJ HMPO LNPQ KD5S YKFQ 76VN IANZ 7TFV ZBF4 BYJO WARNING  No swap limit support      kubectl      shell kubectl cluster info         Kubernetes master is running at https   203 0 113 141 KubeDNS is running at https   203 0 113 141 api v1 namespaces kube system services kube dns proxy kubernetes dashboard is running at https   203 0 113 141 api v1 namespaces kube system services kubernetes dashboard proxy Grafana is running at https   203 0 113 141 api v1 namespaces kube system services monitoring grafana proxy Heapster is running at https   203 0 113 141 api v1 namespaces kube system services monitoring heapster proxy InfluxDB is running at https   203 0 113 141 api v1 namespaces kube system services monitoring influxdb proxy     "}
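The docker-to-kubectl command mappings above can be collected into a small lookup table. The sketch below is illustrative only: the `docker_to_kubectl` function and its table are ours, not part of either CLI, and it merely summarizes the equivalences the record describes.

```shell
#!/bin/sh
# Illustrative summary of the docker -> kubectl equivalents described above.
# The function name docker_to_kubectl is hypothetical, not a real tool.
docker_to_kubectl() {
  case "$1" in
    ps)        echo "kubectl get pods" ;;
    attach)    echo "kubectl attach" ;;
    exec)      echo "kubectl exec" ;;
    logs)      echo "kubectl logs" ;;
    stop|rm)   echo "kubectl delete" ;;
    version)   echo "kubectl version" ;;
    info)      echo "kubectl cluster-info" ;;
    login)     echo "no direct analog; see Using a Private Registry" ;;
    *)         echo "no direct kubectl analog" ;;
  esac
}

docker_to_kubectl logs     # prints the kubectl equivalent of `docker logs`
docker_to_kubectl login    # some commands have no one-to-one mapping
```

Note the asymmetry the table hides: `docker stop`/`docker rm` map to `kubectl delete` on the owning Deployment, not on the pod, because deleting a pod directly just causes the Deployment to recreate it.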
{"questions":"kubernetes reference name reference weight 20 title Command line tool kubectl card contenttype reference title kubectl command line tool nolist true weight 110","answers":"---\ntitle: Command line tool (kubectl)\ncontent_type: reference\nweight: 110\nno_list: true\ncard:\n  name: reference\n  title: kubectl command line tool\n  weight: 20\n---\n\n<!-- overview -->\n\nKubernetes provides a command line tool for communicating with a Kubernetes cluster's\ncontrol plane, using the Kubernetes API.\n\nThis tool is named `kubectl`.\n\nFor configuration, `kubectl` looks for a file named `config` in the `$HOME\/.kube` directory.\nYou can specify other [kubeconfig](\/docs\/concepts\/configuration\/organize-cluster-access-kubeconfig\/)\nfiles by setting the `KUBECONFIG` environment variable or by setting the\n[`--kubeconfig`](\/docs\/concepts\/configuration\/organize-cluster-access-kubeconfig\/) flag.\n\nThis overview covers `kubectl` syntax, describes the command operations, and provides common examples.\nFor details about each command, including all the supported flags and subcommands, see the\n[kubectl](\/docs\/reference\/kubectl\/generated\/kubectl\/) reference documentation.\n\nFor installation instructions, see [Installing kubectl](\/docs\/tasks\/tools\/#kubectl);\nfor a quick guide, see the [cheat sheet](\/docs\/reference\/kubectl\/quick-reference\/).\nIf you're used to using the `docker` command-line tool,\n[`kubectl` for Docker Users](\/docs\/reference\/kubectl\/docker-cli-to-kubectl\/) explains some equivalent commands for Kubernetes.\n\n<!-- body -->\n\n## Syntax\n\nUse the following syntax to run `kubectl` commands from your terminal window:\n\n```shell\nkubectl [command] [TYPE] [NAME] [flags]\n```\n\nwhere `command`, `TYPE`, `NAME`, and `flags` are:\n\n* `command`: Specifies the operation that you want to perform on one or more resources,\n  for example `create`, `get`, `describe`, `delete`.\n\n* `TYPE`: Specifies the [resource type](#resource-types). 
Resource types are case-insensitive and\n  you can specify the singular, plural, or abbreviated forms.\n  For example, the following commands produce the same output:\n\n  ```shell\n  kubectl get pod pod1\n  kubectl get pods pod1\n  kubectl get po pod1\n  ```\n\n* `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted,\n  details for all resources are displayed, for example `kubectl get pods`.\n\n  When performing an operation on multiple resources, you can specify each resource by\n  type and name or specify one or more files:\n\n  * To specify resources by type and name:\n\n    * To group resources if they are all the same type:  `TYPE1 name1 name2 name<#>`.<br\/>\n      Example: `kubectl get pod example-pod1 example-pod2`\n\n    * To specify multiple resource types individually:  `TYPE1\/name1 TYPE1\/name2 TYPE2\/name3 TYPE<#>\/name<#>`.<br\/>\n      Example: `kubectl get pod\/example-pod1 replicationcontroller\/example-rc1`\n\n  * To specify resources with one or more files:  `-f file1 -f file2 -f file<#>`\n\n    * [Use YAML rather than JSON](\/docs\/concepts\/configuration\/overview\/#general-configuration-tips)\n      since YAML tends to be more user-friendly, especially for configuration files.<br\/>\n      Example: `kubectl get -f .\/pod.yaml`\n\n* `flags`: Specifies optional flags. 
For example, you can use the `-s` or `--server` flags\n  to specify the address and port of the Kubernetes API server.<br\/>\n\n\nFlags that you specify from the command line override default values and any corresponding environment variables.\n\n\nIf you need help, run `kubectl help` from the terminal window.\n\n## In-cluster authentication and namespace overrides\n\nBy default `kubectl` will first determine if it is running within a pod, and thus in a cluster.\nIt starts by checking for the `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` environment\nvariables and the existence of a service account token file at `\/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token`.\nIf all three are found in-cluster authentication is assumed.\n\nTo maintain backwards compatibility, if the `POD_NAMESPACE` environment variable is set\nduring in-cluster authentication it will override the default namespace from the\nservice account token. Any manifests or tools relying on namespace defaulting will be affected by this.\n\n**`POD_NAMESPACE` environment variable**\n\nIf the `POD_NAMESPACE` environment variable is set, cli operations on namespaced resources\nwill default to the variable value. For example, if the variable is set to `seattle`,\n`kubectl get pods` would return pods in the `seattle` namespace. This is because pods are\na namespaced resource, and no namespace was provided in the command. 
Review the output\nof `kubectl api-resources` to determine if a resource is namespaced.\n\nExplicit use of `--namespace <value>` overrides this behavior.\n\n**How kubectl handles ServiceAccount tokens**\n\nIf:\n\n* there is Kubernetes service account token file mounted at\n  `\/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token`, and\n* the `KUBERNETES_SERVICE_HOST` environment variable is set, and\n* the `KUBERNETES_SERVICE_PORT` environment variable is set, and\n* you don't explicitly specify a namespace on the kubectl command line\n\nthen kubectl assumes it is running in your cluster. The kubectl tool looks up the\nnamespace of that ServiceAccount (this is the same as the namespace of the Pod)\nand acts against that namespace. This is different from what happens outside of a\ncluster; when kubectl runs outside a cluster and you don't specify a namespace,\nthe kubectl command acts against the namespace set for the current context in your\nclient configuration. To change the default namespace for your kubectl you can use the\nfollowing command:\n\n```shell\nkubectl config set-context --current --namespace=<namespace-name>\n```\n\n## Operations\n\nThe following table includes short descriptions and the general syntax for all of the `kubectl` operations:\n\nOperation       | Syntax    |       Description\n-------------------- | -------------------- | --------------------\n`alpha`    | `kubectl alpha SUBCOMMAND [flags]` | List the available commands that correspond to alpha features, which are not enabled in Kubernetes clusters by default.\n`annotate`    | <code>kubectl annotate (-f FILENAME &#124; TYPE NAME &#124; TYPE\/NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]<\/code> | Add or update the annotations of one or more resources.\n`api-resources`    | `kubectl api-resources [flags]` | List the API resources that are available.\n`api-versions`    | `kubectl api-versions [flags]` | List the API versions that are available.\n`apply`            | `kubectl apply -f FILENAME [flags]`| Apply a configuration change to a resource from a file or stdin.\n`attach`        | `kubectl attach POD -c CONTAINER [-i] [-t] [flags]` | Attach to a running container either to view the output stream or interact with the container (stdin).\n`auth`    | `kubectl auth [flags] [options]` | Inspect authorization.\n`autoscale`    | <code>kubectl autoscale (-f FILENAME &#124; TYPE NAME &#124; TYPE\/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags]<\/code> | Automatically scale the set of pods that are managed by a replication controller.\n`certificate`    | `kubectl certificate SUBCOMMAND [options]` | Modify certificate resources.\n`cluster-info`    | `kubectl cluster-info [flags]` | Display endpoint information about the master and services in the cluster.\n`completion`    | `kubectl completion SHELL [options]` | Output shell completion code for the specified shell (bash or zsh).\n`config`        | `kubectl config SUBCOMMAND [flags]` | Modifies kubeconfig files. See the individual subcommands for details.\n`convert`    | `kubectl convert -f FILENAME [options]` | Convert config files between different API versions. Both YAML and JSON formats are accepted. 
Note - requires `kubectl-convert` plugin to be installed.\n`cordon`    | `kubectl cordon NODE [options]` | Mark node as unschedulable.\n`cp`    | `kubectl cp <file-spec-src> <file-spec-dest> [options]` | Copy files and directories to and from containers.\n`create`        | `kubectl create -f FILENAME [flags]` | Create one or more resources from a file or stdin.\n`delete`        | <code>kubectl delete (-f FILENAME &#124; TYPE [NAME &#124; \/NAME &#124; -l label &#124; --all]) [flags]<\/code> | Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.\n`describe`    | <code>kubectl describe (-f FILENAME &#124; TYPE [NAME_PREFIX &#124; \/NAME &#124; -l label]) [flags]<\/code> | Display the detailed state of one or more resources.\n`diff`        | `kubectl diff -f FILENAME [flags]`| Diff file or stdin against live configuration.\n`drain`    | `kubectl drain NODE [options]` | Drain node in preparation for maintenance.\n`edit`        | <code>kubectl edit (-f FILENAME &#124; TYPE NAME &#124; TYPE\/NAME) [flags]<\/code> | Edit and update the definition of one or more resources on the server by using the default editor.\n`events`      | `kubectl events` | List events\n`exec`        | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Execute a command against a container in a pod.\n`explain`    | `kubectl explain TYPE [--recursive=false] [flags]` | Get documentation of various resources. 
For instance pods, nodes, services, etc.\n`expose`        | <code>kubectl expose (-f FILENAME &#124; TYPE NAME &#124; TYPE\/NAME) [--port=port] [--protocol=TCP&#124;UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags]<\/code> | Expose a replication controller, service, or pod as a new Kubernetes service.\n`get`        | <code>kubectl get (-f FILENAME &#124; TYPE [NAME &#124; \/NAME &#124; -l label]) [--watch] [--sort-by=FIELD] [[-o &#124; --output]=OUTPUT_FORMAT] [flags]<\/code> | List one or more resources.\n`kustomize`    | `kubectl kustomize <dir> [flags] [options]` | List a set of API resources generated from instructions in a kustomization.yaml file. The argument must be the path to the directory containing the file, or a git repository URL with a path suffix specifying same with respect to the repository root.\n`label`        | <code>kubectl label (-f FILENAME &#124; TYPE NAME &#124; TYPE\/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]<\/code> | Add or update the labels of one or more resources.\n`logs`        | `kubectl logs POD [-c CONTAINER] [--follow] [flags]` | Print the logs for a container in a pod.\n`options`    | `kubectl options` | List of global command-line options, which apply to all commands.\n`patch`        | <code>kubectl patch (-f FILENAME &#124; TYPE NAME &#124; TYPE\/NAME) --patch PATCH [flags]<\/code> | Update one or more fields of a resource by using the strategic merge patch process.\n`plugin`    | `kubectl plugin [flags] [options]` | Provides utilities for interacting with plugins.\n`port-forward`    | `kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]` | Forward one or more local ports to a pod.\n`proxy`        | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Run a proxy to the Kubernetes API server.\n`replace`        | `kubectl replace 
-f FILENAME` | Replace a resource from a file or stdin.\n`rollout`    | `kubectl rollout SUBCOMMAND [options]` | Manage the rollout of a resource. Valid resource types include: deployments, daemonsets and statefulsets.\n`run`        | <code>kubectl run NAME --image=image [--env=\"key=value\"] [--port=port] [--dry-run=server&#124;client&#124;none] [--overrides=inline-json] [flags]<\/code> | Run a specified image on the cluster.\n`scale`        | <code>kubectl scale (-f FILENAME &#124; TYPE NAME &#124; TYPE\/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags]<\/code> | Update the size of the specified replication controller.\n`set`    | `kubectl set SUBCOMMAND [options]` | Configure application resources.\n`taint`    | `kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]` | Update the taints on one or more nodes.\n`top`    | <code>kubectl top (POD &#124; NODE) [flags] [options]<\/code> | Display Resource (CPU\/Memory\/Storage) usage of pod or node.\n`uncordon`    | `kubectl uncordon NODE [options]` | Mark node as schedulable.\n`version`        | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server.\n`wait`    | <code>kubectl wait ([-f FILENAME] &#124; resource.group\/resource.name &#124; resource.group [(-l label &#124; --all)]) [--for=delete&#124;--for condition=available] [options]<\/code> | Experimental: Wait for a specific condition on one or many resources.\n\nTo learn more about command operations, see the [kubectl](\/docs\/reference\/kubectl\/kubectl\/) reference documentation.\n\n## Resource types\n\nThe following table includes a list of all the supported resource types and their abbreviated aliases.\n\n(This output can be retrieved from `kubectl api-resources`, and was accurate as of Kubernetes 1.25.0)\n\n| NAME | SHORTNAMES | APIVERSION | NAMESPACED | KIND |\n|---|---|---|---|---|\n| `bindings` |  | v1 | true | Binding |\n| 
`componentstatuses` | `cs` | v1 | false | ComponentStatus |\n| `configmaps` | `cm` | v1 | true | ConfigMap |\n| `endpoints` | `ep` | v1 | true | Endpoints |\n| `events` | `ev` | v1 | true | Event |\n| `limitranges` | `limits` | v1 | true | LimitRange |\n| `namespaces` | `ns` | v1 | false | Namespace |\n| `nodes` | `no` | v1 | false | Node |\n| `persistentvolumeclaims` | `pvc` | v1 | true | PersistentVolumeClaim |\n| `persistentvolumes` | `pv` | v1 | false | PersistentVolume |\n| `pods` | `po` | v1 | true | Pod |\n| `podtemplates` |  | v1 | true | PodTemplate |\n| `replicationcontrollers` | `rc` | v1 | true | ReplicationController |\n| `resourcequotas` | `quota` | v1 | true | ResourceQuota |\n| `secrets` |  | v1 | true | Secret |\n| `serviceaccounts` | `sa` | v1 | true | ServiceAccount |\n| `services` | `svc` | v1 | true | Service |\n| `mutatingwebhookconfigurations` |  | admissionregistration.k8s.io\/v1 | false | MutatingWebhookConfiguration |\n| `validatingwebhookconfigurations` |  | admissionregistration.k8s.io\/v1 | false | ValidatingWebhookConfiguration |\n| `customresourcedefinitions` | `crd,crds` | apiextensions.k8s.io\/v1 | false | CustomResourceDefinition |\n| `apiservices` |  | apiregistration.k8s.io\/v1 | false | APIService |\n| `controllerrevisions` |  | apps\/v1 | true | ControllerRevision |\n| `daemonsets` | `ds` | apps\/v1 | true | DaemonSet |\n| `deployments` | `deploy` | apps\/v1 | true | Deployment |\n| `replicasets` | `rs` | apps\/v1 | true | ReplicaSet |\n| `statefulsets` | `sts` | apps\/v1 | true | StatefulSet |\n| `tokenreviews` |  | authentication.k8s.io\/v1 | false | TokenReview |\n| `localsubjectaccessreviews` |  | authorization.k8s.io\/v1 | true | LocalSubjectAccessReview |\n| `selfsubjectaccessreviews` |  | authorization.k8s.io\/v1 | false | SelfSubjectAccessReview |\n| `selfsubjectrulesreviews` |  | authorization.k8s.io\/v1 | false | SelfSubjectRulesReview |\n| `subjectaccessreviews` |  | authorization.k8s.io\/v1 | false | 
SubjectAccessReview |\n| `horizontalpodautoscalers` | `hpa` | autoscaling\/v2 | true | HorizontalPodAutoscaler |\n| `cronjobs` | `cj` | batch\/v1 | true | CronJob |\n| `jobs` |  | batch\/v1 | true | Job |\n| `certificatesigningrequests` | `csr` | certificates.k8s.io\/v1 | false | CertificateSigningRequest |\n| `leases` |  | coordination.k8s.io\/v1 | true | Lease |\n| `endpointslices` |  | discovery.k8s.io\/v1 | true | EndpointSlice |\n| `events` | `ev` | events.k8s.io\/v1 | true | Event |\n| `flowschemas` |  | flowcontrol.apiserver.k8s.io\/v1beta2 | false | FlowSchema |\n| `prioritylevelconfigurations` |  | flowcontrol.apiserver.k8s.io\/v1beta2 | false | PriorityLevelConfiguration |\n| `ingressclasses` |  | networking.k8s.io\/v1 | false | IngressClass |\n| `ingresses` | `ing` | networking.k8s.io\/v1 | true | Ingress |\n| `networkpolicies` | `netpol` | networking.k8s.io\/v1 | true | NetworkPolicy |\n| `runtimeclasses` |  | node.k8s.io\/v1 | false | RuntimeClass |\n| `poddisruptionbudgets` | `pdb` | policy\/v1 | true | PodDisruptionBudget |\n| `podsecuritypolicies` | `psp` | policy\/v1beta1 | false | PodSecurityPolicy |\n| `clusterrolebindings` |  | rbac.authorization.k8s.io\/v1 | false | ClusterRoleBinding |\n| `clusterroles` |  | rbac.authorization.k8s.io\/v1 | false | ClusterRole |\n| `rolebindings` |  | rbac.authorization.k8s.io\/v1 | true | RoleBinding |\n| `roles` |  | rbac.authorization.k8s.io\/v1 | true | Role |\n| `priorityclasses` | `pc` | scheduling.k8s.io\/v1 | false | PriorityClass |\n| `csidrivers` |  | storage.k8s.io\/v1 | false | CSIDriver |\n| `csinodes` |  | storage.k8s.io\/v1 | false | CSINode |\n| `csistoragecapacities` |  | storage.k8s.io\/v1 | true | CSIStorageCapacity |\n| `storageclasses` | `sc` | storage.k8s.io\/v1 | false | StorageClass |\n| `volumeattachments` |  | storage.k8s.io\/v1 | false | VolumeAttachment |\n\n## Output options\n\nUse the following sections for information about how you can format or sort the output\nof certain 
commands. For details about which commands support the various output options,\nsee the [kubectl](\/docs\/reference\/kubectl\/kubectl\/) reference documentation.\n\n### Formatting output\n\nThe default output format for all `kubectl` commands is the human readable plain-text format.\nTo output details to your terminal window in a specific format, you can add either the `-o`\nor `--output` flags to a supported `kubectl` command.\n\n#### Syntax\n\n```shell\nkubectl [command] [TYPE] [NAME] -o <output_format>\n```\n\nDepending on the `kubectl` operation, the following output formats are supported:\n\nOutput format | Description\n--------------| -----------\n`-o custom-columns=<spec>` | Print a table using a comma separated list of [custom columns](#custom-columns).\n`-o custom-columns-file=<filename>` | Print a table using the [custom columns](#custom-columns) template in the `<filename>` file.\n`-o json`     | Output a JSON formatted API object.\n`-o jsonpath=<template>` | Print the fields defined in a [jsonpath](\/docs\/reference\/kubectl\/jsonpath\/) expression.\n`-o jsonpath-file=<filename>` | Print the fields defined by the [jsonpath](\/docs\/reference\/kubectl\/jsonpath\/) expression in the `<filename>` file.\n`-o name`     | Print only the resource name and nothing else.\n`-o wide`     | Output in the plain-text format with any additional information. 
For pods, the node name is included.\n`-o yaml`     | Output a YAML formatted API object.\n\n##### Example\n\nIn this example, the following command outputs the details for a single pod as a YAML formatted object:\n\n```shell\nkubectl get pod web-pod-13je7 -o yaml\n```\n\nRemember: See the [kubectl](\/docs\/reference\/kubectl\/kubectl\/) reference documentation\nfor details about which output format is supported by each command.\n\n#### Custom columns\n\nTo define custom columns and output only the details that you want into a table, you can use the `custom-columns` option.\nYou can choose to define the custom columns inline or use a template file: `-o custom-columns=<spec>` or `-o custom-columns-file=<filename>`.\n\n##### Examples\n\nInline:\n\n```shell\nkubectl get pods <pod-name> -o custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion\n```\n\nTemplate file:\n\n```shell\nkubectl get pods <pod-name> -o custom-columns-file=template.txt\n```\n\nwhere the `template.txt` file contains:\n\n```\nNAME          RSRC\nmetadata.name metadata.resourceVersion\n```\nThe result of running either command is similar to:\n\n```\nNAME           RSRC\nsubmit-queue   610995\n```\n\n#### Server-side columns\n\n`kubectl` supports receiving specific column information from the server about objects.\nThis means that for any given resource, the server will return columns and rows relevant to that resource, for the client to print.\nThis allows for consistent human-readable output across clients used against the same cluster, by having the server encapsulate the details of printing.\n\nThis feature is enabled by default. 
To disable it, add the\n`--server-print=false` flag to the `kubectl get` command.\n\n##### Examples\n\nTo print information about the status of a pod, use a command like the following:\n\n```shell\nkubectl get pods <pod-name> --server-print=false\n```\n\nThe output is similar to:\n\n```\nNAME       AGE\npod-name   1m\n```\n\n### Sorting list objects\n\nTo output objects to a sorted list in your terminal window, you can add the `--sort-by` flag\nto a supported `kubectl` command. Sort your objects by specifying any numeric or string field\nwith the `--sort-by` flag. To specify a field, use a [jsonpath](\/docs\/reference\/kubectl\/jsonpath\/) expression.\n\n#### Syntax\n\n```shell\nkubectl [command] [TYPE] [NAME] --sort-by=<jsonpath_exp>\n```\n\n##### Example\n\nTo print a list of pods sorted by name, you run:\n\n```shell\nkubectl get pods --sort-by=.metadata.name\n```\n\n## Examples: Common operations\n\nUse the following set of examples to help you familiarize yourself with running the commonly used `kubectl` operations:\n\n`kubectl apply` - Apply or Update a resource from a file or stdin.\n\n```shell\n# Create a service using the definition in example-service.yaml.\nkubectl apply -f example-service.yaml\n\n# Create a replication controller using the definition in example-controller.yaml.\nkubectl apply -f example-controller.yaml\n\n# Create the objects that are defined in any .yaml, .yml, or .json file within the <directory> directory.\nkubectl apply -f <directory>\n```\n\n`kubectl get` - List one or more resources.\n\n```shell\n# List all pods in plain-text output format.\nkubectl get pods\n\n# List all pods in plain-text output format and include additional information (such as node name).\nkubectl get pods -o wide\n\n# List the replication controller with the specified name in plain-text output format. 
Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.\nkubectl get replicationcontroller <rc-name>\n\n# List all replication controllers and services together in plain-text output format.\nkubectl get rc,services\n\n# List all daemon sets in plain-text output format.\nkubectl get ds\n\n# List all pods running on node server01\nkubectl get pods --field-selector=spec.nodeName=server01\n```\n\n`kubectl describe` - Display detailed state of one or more resources, including the uninitialized ones by default.\n\n```shell\n# Display the details of the node with name <node-name>.\nkubectl describe nodes <node-name>\n\n# Display the details of the pod with name <pod-name>.\nkubectl describe pods\/<pod-name>\n\n# Display the details of all the pods that are managed by the replication controller named <rc-name>.\n# Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.\nkubectl describe pods <rc-name>\n\n# Describe all pods\nkubectl describe pods\n```\n\n\nThe `kubectl get` command is usually used for retrieving one or more\nresources of the same resource type. It features a rich set of flags that allows\nyou to customize the output format using the `-o` or `--output` flag, for example.\nYou can specify the `-w` or `--watch` flag to start watching updates to a particular\nobject. The `kubectl describe` command is more focused on describing the many\nrelated aspects of a specified resource. It may invoke several API calls to the\nAPI server to build a view for the user. 
For example, the `kubectl describe node`\ncommand retrieves not only the information about the node, but also a summary of\nthe pods running on it, the events generated for the node etc.\n\n\n`kubectl delete` - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.\n\n```shell\n# Delete a pod using the type and name specified in the pod.yaml file.\nkubectl delete -f pod.yaml\n\n# Delete all the pods and services that have the label '<label-key>=<label-value>'.\nkubectl delete pods,services -l <label-key>=<label-value>\n\n# Delete all pods, including uninitialized ones.\nkubectl delete pods --all\n```\n\n`kubectl exec` - Execute a command against a container in a pod.\n\n```shell\n# Get output from running 'date' from pod <pod-name>. By default, output is from the first container.\nkubectl exec <pod-name> -- date\n\n# Get output from running 'date' in container <container-name> of pod <pod-name>.\nkubectl exec <pod-name> -c <container-name> -- date\n\n# Get an interactive TTY and run \/bin\/bash from pod <pod-name>. By default, output is from the first container.\nkubectl exec -ti <pod-name> -- \/bin\/bash\n```\n\n`kubectl logs` - Print the logs for a container in a pod.\n\n```shell\n# Return a snapshot of the logs from pod <pod-name>.\nkubectl logs <pod-name>\n\n# Start streaming the logs from pod <pod-name>. 
This is similar to the 'tail -f' Linux command.\nkubectl logs -f <pod-name>\n```\n\n`kubectl diff` - View a diff of the proposed updates to a cluster.\n\n```shell\n# Diff resources included in \"pod.json\".\nkubectl diff -f pod.json\n\n# Diff file read from stdin.\ncat service.yaml | kubectl diff -f -\n```\n\n## Examples: Creating and using plugins\n\nUse the following set of examples to help you familiarize yourself with writing and using `kubectl` plugins:\n\n```shell\n# create a simple plugin in any language and name the resulting executable file\n# so that it begins with the prefix \"kubectl-\"\ncat .\/kubectl-hello\n```\n```shell\n#!\/bin\/sh\n\n# this plugin prints the words \"hello world\"\necho \"hello world\"\n```\nWith a plugin written, let's make it executable:\n```bash\nchmod a+x .\/kubectl-hello\n\n# and move it to a location in our PATH\nsudo mv .\/kubectl-hello \/usr\/local\/bin\nsudo chown root:root \/usr\/local\/bin\n\n# You have now created and \"installed\" a kubectl plugin.\n# You can begin using this plugin by invoking it from kubectl as if it were a regular command\nkubectl hello\n```\n```\nhello world\n```\n\n```shell\n# You can \"uninstall\" a plugin, by removing it from the folder in your\n# $PATH where you placed it\nsudo rm \/usr\/local\/bin\/kubectl-hello\n```\n\nIn order to view all of the plugins that are available to `kubectl`, use\nthe `kubectl plugin list` subcommand:\n\n```shell\nkubectl plugin list\n```\nThe output is similar to:\n```\nThe following kubectl-compatible plugins are available:\n\n\/usr\/local\/bin\/kubectl-hello\n\/usr\/local\/bin\/kubectl-foo\n\/usr\/local\/bin\/kubectl-bar\n```\n\n`kubectl plugin list` also warns you about plugins that are not\nexecutable, or that are shadowed by other plugins; for example:\n\n```shell\nsudo chmod -x \/usr\/local\/bin\/kubectl-foo # remove execute permission\nkubectl plugin list\n```\n\n```\nThe following kubectl-compatible plugins are 
available:\n\n\/usr\/local\/bin\/kubectl-hello\n\/usr\/local\/bin\/kubectl-foo\n  - warning: \/usr\/local\/bin\/kubectl-foo identified as a plugin, but it is not executable\n\/usr\/local\/bin\/kubectl-bar\n\nerror: one plugin warning was found\n```\n\nYou can think of plugins as a means to build more complex functionality on top\nof the existing kubectl commands:\n\n```shell\ncat .\/kubectl-whoami\n```\n\nThe next few examples assume that you already made `kubectl-whoami` have\nthe following contents:\n\n```shell\n#!\/bin\/bash\n\n# this plugin makes use of the `kubectl config` command in order to output\n# information about the current user, based on the currently selected context\nkubectl config view --template='{{ range .contexts }}{{ if eq .name \"'$(kubectl config current-context)'\" }}Current user: {{ printf \"%s\\n\" .context.user }}{{ end }}{{ end }}'\n```\n\nRunning the above command gives you an output containing the user for the\ncurrent context in your KUBECONFIG file:\n\n```shell\n# make the file executable\nsudo chmod +x .\/kubectl-whoami\n\n# and move it into your PATH\nsudo mv .\/kubectl-whoami \/usr\/local\/bin\n\nkubectl whoami\nCurrent user: plugins-user\n```\n\n## What's next\n\n* Read the `kubectl` reference documentation:\n  * the kubectl [command reference](\/docs\/reference\/kubectl\/kubectl\/)\n  * the [command line arguments](\/docs\/reference\/kubectl\/generated\/kubectl\/) reference\n* Learn about [`kubectl` usage conventions](\/docs\/reference\/kubectl\/conventions\/)\n* Read about [JSONPath support](\/docs\/reference\/kubectl\/jsonpath\/) in kubectl\n* Read about how to [extend kubectl with plugins](\/docs\/tasks\/extend-kubectl\/kubectl-plugins)\n  * To find out more about plugins, take a look at the [example CLI plugin](https:\/\/github.com\/kubernetes\/sample-cli-plugin).","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl","answers":"---\ntitle: kubectl\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nkubectl controls the Kubernetes cluster manager.\n\n Find more information at: https:\/\/kubernetes.io\/docs\/reference\/kubectl\/\n\n```\nkubectl [flags]\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for kubectl<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl annotate](..\/kubectl_annotate\/)\t - Update the annotations on a resource\n* [kubectl api-resources](..\/kubectl_api-resources\/)\t - Print the supported API resources on the server\n* [kubectl api-versions](..\/kubectl_api-versions\/)\t - Print the supported API versions on the server, in the form of \"group\/version\"\n* [kubectl apply](..\/kubectl_apply\/)\t - Apply a configuration to a resource by file name or stdin\n* [kubectl attach](..\/kubectl_attach\/)\t - Attach to a running container\n* [kubectl auth](..\/kubectl_auth\/)\t - Inspect authorization\n* [kubectl autoscale](..\/kubectl_autoscale\/)\t - Auto-scale a deployment, replica set, stateful set, or replication controller\n* [kubectl certificate](..\/kubectl_certificate\/)\t - Modify certificate resources\n* [kubectl cluster-info](..\/kubectl_cluster-info\/)\t - Display cluster information\n* [kubectl completion](..\/kubectl_completion\/)\t - Output shell completion code for the specified shell (bash, zsh, fish, or powershell)\n* [kubectl config](..\/kubectl_config\/)\t - Modify kubeconfig files\n* [kubectl cordon](..\/kubectl_cordon\/)\t - Mark node as unschedulable\n* [kubectl cp](..\/kubectl_cp\/)\t - Copy files and directories to and from containers\n* [kubectl create](..\/kubectl_create\/)\t - Create a resource from a file or from stdin\n* [kubectl debug](..\/kubectl_debug\/)\t - Create debugging sessions for troubleshooting workloads and nodes\n* [kubectl delete](..\/kubectl_delete\/)\t - Delete resources by file names, stdin, resources and names, or by resources and label selector\n* [kubectl describe](..\/kubectl_describe\/)\t - Show details of a specific resource 
or group of resources\n* [kubectl diff](..\/kubectl_diff\/)\t - Diff the live version against a would-be applied version\n* [kubectl drain](..\/kubectl_drain\/)\t - Drain node in preparation for maintenance\n* [kubectl edit](..\/kubectl_edit\/)\t - Edit a resource on the server\n* [kubectl events](..\/kubectl_events\/)\t - List events\n* [kubectl exec](..\/kubectl_exec\/)\t - Execute a command in a container\n* [kubectl explain](..\/kubectl_explain\/)\t - Get documentation for a resource\n* [kubectl expose](..\/kubectl_expose\/)\t - Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service\n* [kubectl get](..\/kubectl_get\/)\t - Display one or many resources\n* [kubectl kustomize](..\/kubectl_kustomize\/)\t - Build a kustomization target from a directory or URL\n* [kubectl label](..\/kubectl_label\/)\t - Update the labels on a resource\n* [kubectl logs](..\/kubectl_logs\/)\t - Print the logs for a container in a pod\n* [kubectl options](..\/kubectl_options\/)\t - Print the list of flags inherited by all commands\n* [kubectl patch](..\/kubectl_patch\/)\t - Update fields of a resource\n* [kubectl plugin](..\/kubectl_plugin\/)\t - Provides utilities for interacting with plugins\n* [kubectl port-forward](..\/kubectl_port-forward\/)\t - Forward one or more local ports to a pod\n* [kubectl proxy](..\/kubectl_proxy\/)\t - Run a proxy to the Kubernetes API server\n* [kubectl replace](..\/kubectl_replace\/)\t - Replace a resource by file name or stdin\n* [kubectl rollout](..\/kubectl_rollout\/)\t - Manage the rollout of a resource\n* [kubectl run](..\/kubectl_run\/)\t - Run a particular image on the cluster\n* [kubectl scale](..\/kubectl_scale\/)\t - Set a new size for a deployment, replica set, or replication controller\n* [kubectl set](..\/kubectl_set\/)\t - Set specific features on objects\n* [kubectl taint](..\/kubectl_taint\/)\t - Update the taints on one or more nodes\n* [kubectl top](..\/kubectl_top\/)\t - Display resource 
(CPU\/memory) usage\n* [kubectl uncordon](..\/kubectl_uncordon\/)\t - Mark node as schedulable\n* [kubectl version](..\/kubectl_version\/)\t - Print the client and server version information\n* [kubectl wait](..\/kubectl_wait\/)\t - Experimental: Wait for a specific condition on one or many resources\n","site":"kubernetes reference"}
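Several of the flags documented in the record above (`--request-timeout`, `--pod-running-timeout`, `--storage-driver-buffer-duration`) take Go-style duration strings such as `1s`, `2m`, or `3h`, with the literal `"0"` meaning the request never times out. As a rough, hypothetical sketch (this helper is not part of kubectl), such a flag value can be converted to seconds like this:

```python
import re

# Hypothetical helper (not part of kubectl): convert a Go-style duration
# string such as "30s", "2m", "1m30s", or "3h" into a number of seconds.
# Per the --request-timeout docs, the literal "0" means "do not time out".
_UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600}

def request_timeout_seconds(value: str) -> int:
    if value == "0":
        return 0  # zero means: never give up on the request
    parts = re.findall(r"(\d+)([smh])", value)
    if not parts:
        raise ValueError(f"duration needs a unit (e.g. 1s, 2m, 3h): {value!r}")
    return sum(int(amount) * _UNIT_SECONDS[unit] for amount, unit in parts)
```

Under this sketch, the default `--pod-running-timeout` of `1m0s` reads as 60 seconds, and a compound value like `1m30s` as 90.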
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl attach autogenerated true","answers":"---\ntitle: kubectl attach\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nAttach to a process that is already running inside an existing container.\n\n```\nkubectl attach (POD | TYPE\/NAME) -c CONTAINER\n```\n\n## Examples\n\n```\n  # Get output from running pod mypod; use the 'kubectl.kubernetes.io\/default-container' annotation\n  # for selecting the container to be attached or the first container in the pod will be chosen\n  kubectl attach mypod\n  \n  # Get output from ruby-container from pod mypod\n  kubectl attach mypod -c ruby-container\n  \n  # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod\n  # and sends stdout\/stderr from 'bash' back to the client\n  kubectl attach mypod -c ruby-container -i -t\n  \n  # Get output from the first pod of a replica set named nginx\n  kubectl attach rs\/nginx\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-c, --container string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Container name. 
If omitted, use the kubectl.kubernetes.io\/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for attach<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-running-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-q, --quiet<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Only print output from the remote session<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-i, --stdin<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Pass stdin to the container<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-t, --tty<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Stdin is a TTY<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":""}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl set subject","answers":"---\ntitle: kubectl set subject\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nUpdate the user, group, or service account in a role binding or cluster role binding.\n\n```\nkubectl set subject (-f FILENAME | TYPE NAME) [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Update a cluster role binding for serviceaccount1\n  kubectl set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1\n  \n  # Update a role binding for user1, user2, and group1\n  kubectl set subject rolebinding admin --user=user1 --user=user2 --group=group1\n  \n  # Print the result (in YAML format) of updating rolebinding subjects from a local, without hitting the server\n  kubectl create rolebinding admin --role=admin --user=admin -o yaml --dry-run=client | kubectl set subject --local -f - --user=foo -o yaml\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all resources, in the namespace of the specified resource types<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-set\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files the resource to update the subjects<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Groups to bind to the role<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for subject<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. 
This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--local<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, set subject will NOT contact api-server but run locally.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--serviceaccount strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Service accounts to bind to the role<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Usernames to bind to the role<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a 
client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl set](..\/)\t - Set specific features on objects\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl set subject content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs  
 project              Update the user  group  or service account in a role binding or cluster role binding       kubectl set subject   f FILENAME   TYPE NAME     user username     group groupname     serviceaccount namespace serviceaccountname     dry run server client none                    Update a cluster role binding for serviceaccount1   kubectl set subject clusterrolebinding admin   serviceaccount namespace serviceaccount1        Update a role binding for user1  user2  and group1   kubectl set subject rolebinding admin   user user1   user user2   group group1        Print the result  in YAML format  of updating rolebinding subjects from a local  without hitting the server   kubectl create rolebinding admin   role admin   user admin  o yaml   dry run client   kubectl set subject   local  f     user foo  o yaml               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    all  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Select all resources  in the namespace of the specified resource types  p   td    tr    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  
Default: \"kubectl-set\". Name of the manager used to track field ownership.
- -f, --filename strings: Filename, directory, or URL to files identifying the resource to update the subjects.
- --group strings: Groups to bind to the role.
- -h, --help: help for subject.
- -k, --kustomize string: Process the kustomization directory. This flag can't be used together with -f or -R.
- --local: If true, set subject will NOT contact api-server but run locally.
- -o, --output string: Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).
- -R, --recursive: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
- -l, --selector string: Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.
- --serviceaccount strings: Service accounts to bind to the role.
- --show-managed-fields: If true, keep the managedFields when printing objects in JSON or YAML format.
- --template string: Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].
- --user strings: Usernames to bind to the role.

Options inherited from parent commands:
- --as string: Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
- --as-group strings: Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
- --as-uid string: UID to impersonate for the operation.
- --cache-dir string (Default: \"$HOME\/.kube\/cache\"): Default cache directory.
- --certificate-authority string: Path to a cert file for the certificate authority.
- --client-certificate string: Path to a client certificate file for TLS.
- --client-key string: Path to a client key file for TLS.
- --cluster string: The name of the kubeconfig cluster to use.
- --context string: The name of the kubeconfig context to use.
- --default-not-ready-toleration-seconds int (Default: 300): Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
- --default-unreachable-toleration-seconds int (Default: 300): Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
- --disable-compression: If true, opt-out of response compression for all requests to the server.
- --insecure-skip-tls-verify: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
- --kubeconfig string: Path to the kubeconfig file to use for CLI requests.
- --match-server-version: Require server version to match client version.
- -n, --namespace string: If present, the namespace scope for this CLI request.
- --password string: Password for basic authentication to the API server.
- --profile string (Default: \"none\"): Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex).
- --profile-output string (Default: \"profile.pprof\"): Name of the file to write the profile to.
- --request-timeout string (Default: \"0\"): The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.
- -s, --server string: The address and port of the Kubernetes API server.
- --storage-driver-buffer-duration duration (Default: 1m0s): Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction.
- --storage-driver-db string (Default: \"cadvisor\"): database name.
- --storage-driver-host string (Default: \"localhost:8086\"): database host:port.
- --storage-driver-password string (Default: \"root\"): database password.
- --storage-driver-secure: use secure connection with database.
- --storage-driver-table string (Default: \"stats\"): table name.
- --storage-driver-user string (Default: \"root\"): database username.
- --tls-server-name string: Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used.
- --token string: Bearer token for authentication to the API server.
- --username string: Username for basic authentication to the API server.
- --version version[=true]: --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version.
- --warnings-as-errors: Treat warnings received from the server as errors and exit with a non-zero exit code.

SEE ALSO: kubectl set - Set specific features on objects"}
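The `-l, --selector` flag documented above takes an equality-based label query ('=', '==', '!='), and a candidate object matches only if every clause in the query holds. As a minimal Python sketch of that matching rule (the `matches` helper is hypothetical, for illustration only; it is not kubectl's actual implementation and skips set-based operators like `in`):

```python
def matches(labels: dict, selector: str) -> bool:
    """Return True if `labels` satisfies every clause in an
    equality-based selector such as 'key1=value1,key2!=value2'."""
    for clause in selector.split(","):
        clause = clause.strip()
        if "!=" in clause:
            key, value = clause.split("!=", 1)
            if labels.get(key.strip()) == value.strip():
                return False
        else:
            # '==' and '=' are treated identically, as in kubectl.
            key, value = clause.replace("==", "=").split("=", 1)
            if labels.get(key.strip()) != value.strip():
                return False
    return True

pod = {"app": "nginx", "env": "qa"}
print(matches(pod, "app=nginx,env=qa"))      # True: both clauses hold
print(matches(pod, "app==nginx,env!=prod"))  # True
print(matches(pod, "env=prod"))              # False: one clause fails
```

Note the all-clauses-must-match behavior: there is no OR between comma-separated clauses.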
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl set selector autogenerated true","answers":"---\ntitle: kubectl set selector\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nSet the selector on a resource. Note that the new selector will overwrite the old selector if the resource had one prior to the invocation of 'set selector'.\n\n A selector must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters. If --resource-version is specified, then updates will use this resource version, otherwise the existing resource-version will be used. 
Note: currently selectors can only be set on Service objects.\n\n```\nkubectl set selector (-f FILENAME | TYPE NAME) EXPRESSIONS [--resource-version=version]\n```\n\n## Examples\n\n```\n  # Set the labels and selector before creating a deployment\/service pair\n  kubectl create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -\n  kubectl create deployment my-dep -o yaml --dry-run=client | kubectl label --local -f - environment=qa -o yaml | kubectl create -f -\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all resources in the namespace of the specified resource types<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-set\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for selector<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--local<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, 'set selector' will NOT contact api-server but run locally.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--resource-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If non-empty, the selectors update will only succeed if this is the current resource-version for the object. 
Only valid when specifying a single resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl set](..\/)\t - Set specific features on objects\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl set env","answers":"---\ntitle: kubectl set env\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nUpdate environment variables on a pod template.\n\n List environment variable definitions in one or more pods, pod templates. Add, update, or remove container environment variable definitions in one or more pod templates (within replication controllers or deployment configurations). View or modify the environment variable definitions on all containers in the specified pods or pod templates, or just those that match a wildcard.\n\n If \"--env -\" is passed, environment variables can be read from STDIN using the standard env syntax.\n\n Possible resources include (case insensitive):\n\n        pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), statefulset (sts), cronjob (cj), replicaset (rs)\n\n```\nkubectl set env RESOURCE\/NAME KEY_1=VAL_1 ... 
KEY_N=VAL_N\n```\n\n## \n\n```\n  # Update deployment 'registry' with a new environment variable\n  kubectl set env deployment\/registry STORAGE_DIR=\/local\n  \n  # List the environment variables defined on deployment 'sample-build'\n  kubectl set env deployment\/sample-build --list\n  \n  # List the environment variables defined on all pods\n  kubectl set env pods --all --list\n  \n  # Output the modified deployment in YAML without altering the object on the server\n  kubectl set env deployment\/sample-build STORAGE_DIR=\/data -o yaml\n  \n  # Update all containers in all replication controllers in the project to have ENV=prod\n  kubectl set env rc --all ENV=prod\n  \n  # Import environment from a secret\n  kubectl set env --from=secret\/mysecret deployment\/myapp\n  \n  # Import environment from a config map with a prefix\n  kubectl set env --from=configmap\/myconfigmap --prefix=MYSQL_ deployment\/myapp\n  \n  # Import specific keys from a config map\n  kubectl set env --keys=my-example-key --from=configmap\/myconfigmap deployment\/myapp\n  \n  # Remove the environment variable ENV from container 'c1' in all deployment configs\n  kubectl set env deployments --all --containers=\"c1\" ENV-\n  \n  # Remove the environment variable ENV from a deployment definition on disk and\n  # update the deployment config on the server\n  kubectl set env -f deploy.json ENV-\n  \n  # Set some of the local shell environment into a deployment config on the server\n  env | grep RAILS_ | kubectl set env -e - deployment\/registry\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, select all resources in the namespace of the specified resource types<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-c, --containers string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"*\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The names of containers in the selected pod templates to change - may use wildcards<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-e, --env strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Specify a key-value pair for an environment variable to set into each container.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-set\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to update the env<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--from string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of a resource from which to inject environment 
variables<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for env<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--keys strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Comma-separated list of keys to import from the specified resource<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--list<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, display the environment and any changes in the standard format. This flag will be removed when we have kubectl view env.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--local<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, set env will NOT contact api-server but run locally.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--overwrite&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, allow environment to be overwritten, otherwise reject updates that overwrite existing environment.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--prefix string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Prefix to append to variable names<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--resolve<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, show secret or configmap references when listing variables<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The 
name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl set](..\/)\t - Set specific features on objects\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl set env content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Update environment variables on a pod template    List environment variable definitions in one or more pods  pod templates  Add  update  or remove container environment variable definitions in one or more pod templates  within replication controllers or deployment configurations   View or modify the environment variable definitions on all containers in the specified pods or pod templates  or just those that match a wildcard    If    env    is passed  environment variables can be read from STDIN using the standard env syntax    Possible resources include  case insensitive            pod  po   replicationcontroller  rc   deployment  deploy   daemonset  ds   statefulset  sts   cronjob  cj   replicaset  rs       kubectl set env RESOURCE NAME KEY 1 VAL 1     KEY N VAL N                   Update deployment  registry  with a new environment variable   kubectl set env deployment 
registry STORAGE DIR  local        List the environment variables defined on a deployments  sample build    kubectl set env deployment sample build   list        List the environment variables defined on all pods   kubectl set env pods   all   list        Output modified deployment in YAML  and does not alter the object on the server   kubectl set env deployment sample build STORAGE DIR  data  o yaml        Update all containers in all replication controllers in the project to have ENV prod   kubectl set env rc   all ENV prod        Import environment from a secret   kubectl set env   from secret mysecret deployment myapp        Import environment from a config map with a prefix   kubectl set env   from configmap myconfigmap   prefix MYSQL  deployment myapp        Import specific keys from a config map   kubectl set env   keys my example key   from configmap myconfigmap deployment myapp        Remove the environment variable ENV from container  c1  in all deployment configs   kubectl set env deployments   all   containers  c1  ENV         Remove the environment variable ENV from a deployment definition on disk and     update the deployment config on the server   kubectl set env  f deploy json ENV         Set some of the local shell environment into a deployment config on the server   env   grep RAILS    kubectl set env  e   deployment registry               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    all  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  select all resources in the namespace of the specified resource types  p   td    tr    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only 
applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2   c    containers string nbsp  nbsp  nbsp  nbsp  nbsp Default       td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The names of containers in the selected pod templates to change   may use wildcards  p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2   e    env strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Specify a key value pair for an environment variable to set into each container   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl set   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filename  directory  or URL to files the resource to update the env  p   td    tr    tr   td colspan  2    from string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of a resource from which to inject environment variables  p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for env  p   td    tr    tr   td colspan  2    keys strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Comma separated list of keys to import from specified resource  p   td    tr    tr   td colspan  2   k    
kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2    list  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  display the environment and any changes in the standard format  this flag will removed when we have kubectl view env   p   td    tr    tr   td colspan  2    local  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  set env will NOT contact api server but run locally   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    overwrite nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  allow environment to be overwritten  otherwise reject updates that overwrite existing environment   p   td    tr    tr   td colspan  2    prefix string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Prefix to append to variable names  p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2    resolve  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  show secret or configmap references when listing variables  p   td    tr    tr   td colspan  2   l    selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Selector  label query  
to filter on  supports            and       e g   l key1 value1 key2 value2   Matching objects must satisfy all of the specified label constraints   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   
"}
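The global-options tables in these records describe `--request-timeout` as taking "a corresponding time unit (e.g. 1s, 2m, 3h)", with "0" meaning requests never time out. As a rough sketch of that documented value shape (kubectl itself parses these as Go durations, so this regex is only an approximation, not kubectl's actual parser):

```shell
# Approximate shape of the --request-timeout values documented above:
# a non-negative integer with an s/m/h unit, or a bare "0" for "no timeout".
for t in 1s 2m 3h 0 5x; do
  if echo "$t" | grep -Eq '^(0|[0-9]+[smh])$'; then
    echo "$t: valid"
  else
    echo "$t: invalid"
  fi
done
```

The four documented forms (`1s`, `2m`, `3h`, `0`) pass the check; `5x` is included only to show a rejected value.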
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl set serviceaccount autogenerated true","answers":"---\ntitle: kubectl set serviceaccount\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nUpdate the service account of pod template resources.\n\n Possible resources (case insensitive) can be:\n\n replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs), statefulset\n\n```\nkubectl set serviceaccount (-f FILENAME | TYPE NAME) SERVICE_ACCOUNT\n```\n\n## \n\n```\n  # Set deployment nginx-deployment's service account to serviceaccount1\n  kubectl set serviceaccount deployment nginx-deployment serviceaccount1\n  \n  # Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server\n  kubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all resources, in the namespace of the specified resource 
types<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-set\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to get from a server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for serviceaccount<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. 
This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--local<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, set serviceaccount will NOT contact api-server but run locally.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl set](..\/)\t - Set specific features on objects\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl set serviceaccount content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Update the service account of pod template resources    Possible resources  case insensitive  can be    replicationcontroller  rc   deployment  deploy   daemonset  ds   job  replicaset  rs   statefulset      kubectl set serviceaccount   f FILENAME   TYPE NAME  SERVICE ACCOUNT                   Set deployment nginx deployment s service account to serviceaccount1   kubectl set serviceaccount deployment nginx deployment serviceaccount1        Print the result  in YAML format  of updated nginx deployment with the service account from local file  without hitting the API server   kubectl set sa  f nginx deployment yaml serviceaccount1   local   dry run client  o yaml               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    all  td    tr   tr   td   td  td 
style  line height  130   word wrap  break word    p Select all resources  in the namespace of the specified resource types  p   td    tr    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl set   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filename  directory  or URL to files identifying the resource to get from a server   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for serviceaccount  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2    local  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  set serviceaccount will NOT contact api server but run locally   p   td    tr    tr   td colspan  2   o    output string  td  
  tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  
line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    
tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port 
of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break 
word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl set          Set specific features on objects "}
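The record above documents `kubectl set serviceaccount`, which rewrites the `serviceAccountName` field of a resource's pod template (`spec.template.spec.serviceAccountName` in a Deployment). As an illustrative sketch only (the file path and names here are hypothetical, not from the record), this is the shape of the change the command applies:

```shell
# Hypothetical patch fragment showing the field that
# `kubectl set serviceaccount` edits in a Deployment's pod template.
cat <<'EOF' > /tmp/sa-patch.yaml
spec:
  template:
    spec:
      serviceAccountName: serviceaccount1
EOF
grep 'serviceAccountName' /tmp/sa-patch.yaml
```

Against a live cluster, the equivalent change is made by the record's own example, `kubectl set serviceaccount deployment nginx-deployment serviceaccount1`.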
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kubectl set resources contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl set resources\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nSpecify compute resource requirements (CPU, memory) for any resource that defines a pod template.  
If a pod is successfully scheduled, it is guaranteed the amount of resource requested, but may burst up to its specified limits.\n\n For each compute resource, if a limit is specified and a request is omitted, the request will default to the limit.\n\n Possible resources include (case insensitive): Use \"kubectl api-resources\" for a complete list of supported resources.\n\n```\nkubectl set resources (-f FILENAME | TYPE NAME)  ([--limits=LIMITS & --requests=REQUESTS])\n```\n\n## Examples\n\n```\n  # Set a deployment's nginx container cpu limits to \"200m\" and memory to \"512Mi\"\n  kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi\n  \n  # Set the resource request and limits for all containers in nginx\n  kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi\n  \n  # Remove the resource requests for resources on containers in nginx\n  kubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0\n  \n  # Print the result (in yaml format) of updating nginx container limits from a local file, without hitting the server\n  kubectl set resources -f path\/to\/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all resources, in the namespace of the specified resource types<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-c, --containers string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"*\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The names of containers in the selected pod templates to change, all containers are selected by default - may use wildcards<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-set\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to get from a server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for resources<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--limits string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The resource requirement limits for this container.  For example, 'cpu=100m,memory=256Mi'.  
Note that server side components may assign requests depending on the server configuration, such as limit ranges.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--local<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, set resources will NOT contact api-server but run locally.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--requests string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The resource requirement requests for this container.  For example, 'cpu=100m,memory=256Mi'.  Note that server side components may assign requests depending on the server configuration, such as limit ranges.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). 
Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl set](..\/)\t - Set specific features on objects\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl set image","answers":"---\ntitle: kubectl set image\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nUpdate existing container image(s) of resources.\n\n Possible resources include (case insensitive):\n\n        pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), statefulset (sts), cronjob (cj), replicaset (rs)\n\n```\nkubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... 
CONTAINER_NAME_N=CONTAINER_IMAGE_N\n```\n\n## Examples\n\n```\n  # Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'\n  kubectl set image deployment\/nginx busybox=busybox nginx=nginx:1.9.1\n  \n  # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1'\n  kubectl set image deployments,rc nginx=nginx:1.9.1 --all\n  \n  # Update image of all containers of daemonset abc to 'nginx:1.9.1'\n  kubectl set image daemonset abc *=nginx:1.9.1\n  \n  # Print result (in yaml format) of updating nginx container image from local file, without hitting the server\n  kubectl set image -f path\/to\/file.yaml nginx=nginx:1.9.1 --local -o yaml\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all resources, in the namespace of the specified resource types<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-set\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to get from a server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for image<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--local<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, set image will NOT contact api-server but run locally.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. 
Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='. (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl set](..\/)\t - Set specific features on objects\n","site":"kubernetes reference"}
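The `CONTAINER_NAME=CONTAINER_IMAGE` pairs accepted by `kubectl set image` map container names to replacement images, and `*` matches every container in the pod template. A minimal Python sketch of that substitution over a pod spec dict (illustrative only, simplified from kubectl's actual behavior; `set_image` and the sample spec are hypothetical names):

```python
# Illustrative sketch of `kubectl set image` NAME=IMAGE substitution,
# including the `*` wildcard. Not kubectl's implementation.

def set_image(pod_spec: dict, updates: dict) -> dict:
    """Apply CONTAINER_NAME=IMAGE updates to a pod spec's containers in place."""
    for container in pod_spec.get("containers", []):
        if "*" in updates:                      # wildcard updates every container
            container["image"] = updates["*"]
        elif container["name"] in updates:      # otherwise match by container name
            container["image"] = updates[container["name"]]
    return pod_spec

spec = {"containers": [{"name": "nginx", "image": "nginx:1.8"},
                       {"name": "busybox", "image": "busybox:1.35"}]}
set_image(spec, {"nginx": "nginx:1.9.1"})   # only the nginx container changes
```

This mirrors the examples above: `kubectl set image daemonset abc *=nginx:1.9.1` corresponds to passing `{"*": "nginx:1.9.1"}`, while the deployment example corresponds to a per-name mapping.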
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl set","answers":"---\ntitle: kubectl set\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nConfigure application resources.\n\n These commands help you make changes to existing application resources.\n\n```\nkubectl set SUBCOMMAND\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for set<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n* [kubectl set env](kubectl_set_env\/)\t - Update environment variables on a pod template\n* [kubectl set image](kubectl_set_image\/)\t - Update the image of a pod template\n* [kubectl set resources](kubectl_set_resources\/)\t - Update resource requests\/limits on objects with pod templates\n* [kubectl set selector](kubectl_set_selector\/)\t - Set the selector on a resource\n* [kubectl set serviceaccount](kubectl_set_serviceaccount\/)\t - Update the service account of a resource\n* [kubectl set subject](kubectl_set_subject\/)\t - Update the user, group, or service account in a role binding or cluster role binding\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl run autogenerated true","answers":"---\ntitle: kubectl run\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate and run a particular image in a pod.\n\n```\nkubectl run NAME --image=image [--env=\"key=value\"] [--port=port] [--dry-run=server|client] [--overrides=inline-json] [--command] -- [COMMAND] [args...]\n```\n\n## \n\n```\n  # Start a nginx pod\n  kubectl run nginx --image=nginx\n  \n  # Start a hazelcast pod and let the container expose port 5701\n  kubectl run hazelcast --image=hazelcast\/hazelcast --port=5701\n  \n  # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container\n  kubectl run hazelcast --image=hazelcast\/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\"\n  \n  # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container\n  kubectl run hazelcast --image=hazelcast\/hazelcast --labels=\"app=hazelcast,env=prod\"\n  \n  # Dry run; print the corresponding API objects without creating them\n  kubectl run nginx --image=nginx --dry-run=client\n  \n  # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON\n  kubectl run nginx --image=nginx --overrides='{ 
\"apiVersion\": \"v1\", \"spec\": { ... } }'\n  \n  # Start a busybox pod and keep it in the foreground, don't restart it if it exits\n  kubectl run -i -t busybox --image=busybox --restart=Never\n  \n  # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command\n  kubectl run nginx --image=nginx -- <arg1> <arg2> ... <argN>\n  \n  # Start the nginx pod using a different command and custom arguments\n  kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--annotations strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Annotations to apply to the pod.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--attach<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, wait for the Pod to start running, and then attach to the Pod as if 'kubectl attach ...' were called.  Default false, unless '-i\/--stdin' is set, in which case the default is true. With '--restart=Never' the exit code of the container process is returned.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cascade string[=\"background\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"background\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;background&quot;, &quot;orphan&quot;, or &quot;foreground&quot;. Selects the deletion cascading strategy for the dependents (e.g. 
Pods created by a ReplicationController). Defaults to background.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--command<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true and extra arguments are present, use them as the 'command' field in the container, rather than the 'args' field which is the default.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--env strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Environment variables to set in the container.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--expose --port<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, create a ClusterIP service associated with the pod.  Requires --port.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-run\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>to use to replace the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--force<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, immediately remove resources from API and bypass graceful deletion. 
Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--grace-period int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for run<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The image for the container to run.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image-pull-policy string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The image pull policy for the container.  If left empty, this value will not be specified by the client and defaulted by the server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process a kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --labels string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Comma separated labels to apply to the pod. Will override previous values.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--leave-stdin-open<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If the pod is started in interactive mode or with stdin, leave stdin open after the first attach completes. 
By default, stdin will be closed after the first attach completes.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--override-type string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"merge\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The method used to override the generated object: json, merge, or strategic.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--overrides string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-running-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--port string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The port that this container exposes.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--privileged<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, run the container in privileged mode.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-q, --quiet<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, suppress prompt messages.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--restart string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"Always\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The restart policy for this Pod.  Legal values [Always, OnFailure, Never].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--rm<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, delete the pod after it exits.  Only valid when attaching to the container, e.g. with '--attach' or with '-i\/--stdin'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-i, --stdin<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Keep stdin open on the container in the pod, even if nothing is attached.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--timeout duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-t, --tty<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Allocate a TTY for the container in the pod.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--wait<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, wait for resources to be gone before returning. This waits for finalizers.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
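The kubectl run flag reference above can be exercised without touching a live cluster by combining `--dry-run=client` with `-o yaml`; a minimal sketch reusing the flag values from the page's own examples (a `kubectl` binary on `PATH` is assumed):

```shell
# Render the Pod manifest that kubectl run would create, without
# sending it to the API server (--dry-run=client prints client-side).
kubectl run hazelcast \
  --image=hazelcast/hazelcast \
  --port=5701 \
  --env="DNS_DOMAIN=cluster" \
  --labels="app=hazelcast,env=prod" \
  --restart=Never \
  --dry-run=client -o yaml
```

Switching to `--dry-run=server` submits the same request for server-side validation without persisting the Pod.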
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl debug autogenerated true","answers":"---\ntitle: kubectl debug\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDebug cluster resources using interactive debugging containers.\n\n 'debug' provides automation for common debugging tasks for cluster objects identified by resource and name. Pods will be used by default if no resource is specified.\n\n The action taken by 'debug' varies depending on what resource is specified. Supported actions include:\n\n  *  Workload: Create a copy of an existing pod with certain attributes changed, for example changing the image tag to a new version.\n  *  Workload: Add an ephemeral container to an already running pod, for example to add debugging utilities without restarting the pod.\n  *  Node: Create a new pod that runs in the node's host namespaces and can access the node's filesystem.\n\n```\nkubectl debug (POD | TYPE[[.VERSION].GROUP]\/NAME) [ -- COMMAND [args...] 
]\n```\n\n## \n\n```\n  # Create an interactive debugging session in pod mypod and immediately attach to it.\n  kubectl debug mypod -it --image=busybox\n  \n  # Create an interactive debugging session for the pod in the file pod.yaml and immediately attach to it.\n  # (requires the EphemeralContainers feature to be enabled in the cluster)\n  kubectl debug -f pod.yaml -it --image=busybox\n  \n  # Create a debug container named debugger using a custom automated debugging image.\n  kubectl debug --image=myproj\/debug-tools -c debugger mypod\n  \n  # Create a copy of mypod adding a debug container and attach to it\n  kubectl debug mypod -it --image=busybox --copy-to=my-debugger\n  \n  # Create a copy of mypod changing the command of mycontainer\n  kubectl debug mypod -it --copy-to=my-debugger --container=mycontainer -- sh\n  \n  # Create a copy of mypod changing all container images to busybox\n  kubectl debug mypod --copy-to=my-debugger --set-image=*=busybox\n  \n  # Create a copy of mypod adding a debug container and changing container images\n  kubectl debug mypod -it --copy-to=my-debugger --image=debian --set-image=app=app:debug,sidecar=sidecar:debug\n  \n  # Create an interactive debugging session on a node and immediately attach to it.\n  # The container will run in the host namespaces and the host's filesystem will be mounted at \/host\n  kubectl debug node\/mynode -it --image=busybox\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--arguments-only<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If specified, everything after -- will be passed to the new container as Args instead of Command.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--attach<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, wait for the container to start 
running, and then attach as if 'kubectl attach ...' were called.  Default false, unless '-i\/--stdin' is set, in which case the default is true.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-c, --container string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Container name to use for debug container.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--copy-to string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Create a copy of the target Pod with this name.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--custom string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a JSON or YAML file containing a partial container spec to customize built-in debug profiles.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--env stringToString&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: []<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Environment variables to set in the container.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>identifying the resource to debug<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for debug<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Container image to use for debug container.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image-pull-policy string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The image pull policy for the container. 
If left empty, this value will not be specified by the client and defaulted by the server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--keep-annotations<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the original pod annotations. (This flag only works when used with '--copy-to')<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--keep-init-containers&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Run the init containers for the pod. Defaults to true. (This flag only works when used with '--copy-to')<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--keep-labels<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the original pod labels. (This flag only works when used with '--copy-to')<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--keep-liveness<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the original pod liveness probes. (This flag only works when used with '--copy-to')<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--keep-readiness<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the original pod readiness probes. (This flag only works when used with '--copy-to')<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--keep-startup<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the original startup probes. (This flag only works when used with '--copy-to')<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"legacy\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Options are &quot;legacy&quot;, &quot;general&quot;, &quot;baseline&quot;, &quot;netadmin&quot;, &quot;restricted&quot; or 
&quot;sysadmin&quot;.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-q, --quiet<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, suppress informational messages.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--replace<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When used with '--copy-to', delete the original Pod.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--same-node<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When used with '--copy-to', schedule the copy of target Pod on the same node.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--set-image stringToString&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: []<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When used with '--copy-to', a list of name=image pairs for changing container images, similar to how 'kubectl set image' works.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--share-processes&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When used with '--copy-to', enable process namespace sharing in the copy.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-i, --stdin<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Keep stdin open on the container(s) in the pod, even if nothing is attached.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--target string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When using an ephemeral container, target processes in this container name.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-t, --tty<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Allocate a TTY for the debugging container.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: 
fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name 
of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference title kubectl api resources weight 30 autogenerated true","answers":"---\ntitle: kubectl api-resources\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nPrint the supported API resources on the server.\n\n```\nkubectl api-resources [flags]\n```\n\n## \n\n```\n  # Print the supported API resources\n  kubectl api-resources\n  \n  # Print the supported API resources with more information\n  kubectl api-resources -o wide\n  \n  # Print the supported API resources sorted by a column\n  kubectl api-resources --sort-by=name\n  \n  # Print the supported namespaced resources\n  kubectl api-resources --namespaced=true\n  \n  # Print the supported non-namespaced resources\n  kubectl api-resources --namespaced=false\n  \n  # Print the supported API resources with a specific APIGroup\n  kubectl api-resources --api-group=rbac.authorization.k8s.io\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--api-group string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Limit to resources in the specified API group.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--cached<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Use the cached list of resources if available.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--categories strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Limit to resources that belong to the specified categories.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for api-resources<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--namespaced&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If false, non-namespaced resources will be returned, otherwise returning namespaced resources by default.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--no-headers<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When using the default or custom-column output format, don't print headers (default print headers).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (wide, name).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--sort-by string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If non-empty, sort list of resources using specified field. 
The field can be either 'name' or 'kind'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--verbs strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Limit to resources that support the specified verbs.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for 
TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl expose","answers":"---\ntitle: kubectl expose\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nExpose a resource as a new Kubernetes service.\n\n Looks up a deployment, service, replica set, replication controller or pod by name and uses the selector for that resource as the selector for a new service on the specified port. A deployment or replica set will be exposed as a service only if its selector is convertible to a selector that service supports, i.e. when the selector contains only the matchLabels component. Note that if no port is specified via --port and the exposed resource has multiple ports, all will be re-used by the new service. 
Also if no labels are specified, the new service will re-use the labels from the resource it exposes.\n\n Possible resources include (case insensitive):\n\n pod (po), service (svc), replicationcontroller (rc), deployment (deploy), replicaset (rs)\n\n```\nkubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type]\n```\n\n## \n\n```\n  # Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000\n  kubectl expose rc nginx --port=80 --target-port=8000\n  \n  # Create a service for a replication controller identified by type and name specified in \"nginx-controller.yaml\", which serves on port 80 and connects to the containers on port 8000\n  kubectl expose -f nginx-controller.yaml --port=80 --target-port=8000\n  \n  # Create a service for a pod valid-pod, which serves on port 444 with the name \"frontend\"\n  kubectl expose pod valid-pod --port=444 --name=frontend\n  \n  # Create a second service based on the above service, exposing the container port 8443 as port 443 with the name \"nginx-https\"\n  kubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https\n  \n  # Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named 'video-stream'.\n  kubectl expose rc streamer --port=4100 --protocol=UDP --name=video-stream\n  \n  # Create a service for a replicated nginx using replica set, which serves on port 80 and connects to the containers on port 8000\n  kubectl expose rs nginx --port=80 --target-port=8000\n  \n  # Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000\n  kubectl expose deployment nginx --port=80 --target-port=8000\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" 
\/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster-ip string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>ClusterIP to be assigned to the service. Leave empty to auto-allocate, or set to 'None' to create a headless service.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--external-ip string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Additional external IP address (not managed by Kubernetes) to accept for the service. 
If this IP is routed to a node, the service can be accessed by this IP in addition to its generated service IP.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-expose\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to expose a service<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for expose<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --labels string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Labels to apply to the service created by this call.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--load-balancer-ip string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>IP to assign to the LoadBalancer. If empty, an ephemeral IP will be created and used (cloud-provider specific).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name for the newly created object.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--override-type string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"merge\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The method used to override the generated object: json, merge, or strategic.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--overrides string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--port string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The port that the service should serve on. Copied from the resource being exposed, if unspecified<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--protocol string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The network protocol for the service to be created. Default is 'TCP'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. 
This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A label selector to use for this service. Only equality-based selector requirements are supported. If empty (the default), infer the selector from the replication controller or replica set.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--session-affinity string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If non-empty, set the session affinity for the service to this; legal values: 'None', 'ClientIP'<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--target-port string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name or number for the port on the container that the service should direct traffic to. Optional.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--type string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Type for this service: ClusterIP, NodePort, LoadBalancer, or ExternalName.
Default is 'ClusterIP'.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl expose content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Expose a resource as a new Kubernetes service    Looks up a deployment  service  replica set  replication controller or pod by name and uses the selector for that resource as the selector for a new service on the specified port  A deployment or replica set will be exposed as a service only if its selector is convertible to a selector that service supports  i e  when the selector contains only the matchLabels component  Note that if no port is specified via   port and the exposed resource has multiple ports  all will be re used by the new service  Also if no labels are specified  the new service will re use the labels from the resource it exposes    Possible resources include  case insensitive     pod  po   service  svc   replicationcontroller  rc   deployment  deploy   replicaset  rs       kubectl expose   f FILENAME   TYPE NAME     port 
port     protocol TCP UDP SCTP     target port number or name     name name     external ip external ip of service     type type                    Create a service for a replicated nginx  which serves on port 80 and connects to the containers on port 8000   kubectl expose rc nginx   port 80   target port 8000        Create a service for a replication controller identified by type and name specified in  nginx controller yaml   which serves on port 80 and connects to the containers on port 8000   kubectl expose  f nginx controller yaml   port 80   target port 8000        Create a service for a pod valid pod  which serves on port 444 with the name  frontend    kubectl expose pod valid pod   port 444   name frontend        Create a second service based on the above service  exposing the container port 8443 as port 443 with the name  nginx https    kubectl expose service nginx   port 443   target port 8443   name nginx https        Create a service for a replicated streaming application on port 4100 balancing UDP traffic and named  video stream     kubectl expose rc streamer   port 4100   protocol UDP   name video stream        Create a service for a replicated nginx using replica set  which serves on port 80 and connects to the containers on port 8000   kubectl expose rs nginx   port 80   target port 8000        Create a service for an nginx deployment  which serves on port 80 and connects to the containers on port 8000   kubectl expose deployment nginx   port 80   target port 8000               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td 
colspan  2    cluster ip string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p ClusterIP to be assigned to the service  Leave empty to auto allocate  or set to  None  to create a headless service   p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    external ip string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Additional external IP address  not managed by Kubernetes  to accept for the service  If this IP is routed to a node  the service can be accessed by this IP in addition to its generated service IP   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl expose   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filename  directory  or URL to files identifying the resource to expose a service  p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for expose  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2   l    labels string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Labels to apply to 
the service created by this call   p   td    tr    tr   td colspan  2    load balancer ip string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p IP to assign to the LoadBalancer  If empty  an ephemeral IP will be created and used  cloud provider specific    p   td    tr    tr   td colspan  2    name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name for the newly created object   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    override type string nbsp  nbsp  nbsp  nbsp  nbsp Default   merge   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The method used to override the generated object  json  merge  or strategic   p   td    tr    tr   td colspan  2    overrides string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p An inline JSON override for the generated object  If this is non empty  it is used to override the generated object  Requires that the object supply a valid apiVersion field   p   td    tr    tr   td colspan  2    port string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The port that the service should serve on  Copied from the resource being exposed  if unspecified  p   td    tr    tr   td colspan  2    protocol string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The network protocol for the service to be created  Default is  TCP    p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related 
manifests organized within the same directory   p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p A label selector to use for this service  Only equality based selector requirements are supported  If empty  the default  infer the selector from the replication controller or replica set    p   td    tr    tr   td colspan  2    session affinity string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If non empty  set the session affinity for the service to this  legal values   None    ClientIP   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    target port string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name or number for the port on the container that the service should direct traffic to  Optional   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    type string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Type for this service  ClusterIP  NodePort  LoadBalancer  or ExternalName  Default is  ClusterIP    p   td    tr     tbody    
table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds 
int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr  
 tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver 
secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl annotate autogenerated true","answers":"---\ntitle: kubectl annotate\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nUpdate the annotations on one or more resources.\n\n All Kubernetes objects support the ability to store additional data with the object as annotations. Annotations are key\/value pairs that can be larger than labels and include arbitrary string values such as structured JSON. Tools and system extensions may use annotations to store their own data.\n\n Attempting to set an annotation that already exists will fail unless --overwrite is set. If --resource-version is specified and does not match the current resource version on the server the command will fail.\n\nUse \"kubectl api-resources\" for a complete list of supported resources.\n\n```\nkubectl annotate [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--resource-version=version]\n```\n\n## \n\n```\n  # Update pod 'foo' with the annotation 'description' and the value 'my frontend'\n  # If the same annotation is set multiple times, only the last value will be applied\n  kubectl annotate pods foo description='my frontend'\n  \n  # Update a pod identified by type and name in \"pod.json\"\n  kubectl annotate -f pod.json description='my frontend'\n  \n  # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value\n  kubectl annotate --overwrite pods foo description='my frontend running nginx'\n  \n  # Update all pods in the namespace\n  kubectl annotate pods --all description='my frontend running nginx'\n  \n  # Update pod 'foo' only if the resource is unchanged from version 1\n  kubectl annotate pods foo description='my frontend running nginx' --resource-version=1\n  \n  # Update pod 'foo' by removing an annotation named 'description' if it exists\n  # Does not require the --overwrite flag\n  kubectl annotate pods foo description-\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all resources, in the namespace of the specified resource types.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-A, --all-namespaces<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, check the specified action in all namespaces.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-annotate\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to update the annotation<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for annotate<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. 
This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--list<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, display the annotations for a given resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--local<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, annotation will NOT contact api-server but run locally.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--overwrite<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, allow annotations to be overwritten, otherwise reject annotation updates that overwrite existing annotations.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--resource-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If non-empty, the annotation update will only succeed if this is the current resource-version for the object. Only valid when specifying a single resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). 
Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl cluster info dump autogenerated true","answers":"---\ntitle: kubectl cluster-info dump\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nDump cluster information out suitable for debugging and diagnosing cluster problems.  By default, dumps everything to stdout. You can optionally specify a directory with --output-directory.  If you specify a directory, Kubernetes will build a set of files in that directory.  
By default, only dumps things in the current namespace and 'kube-system' namespace, but you can switch to a different namespace with the --namespaces flag, or specify --all-namespaces to dump all namespaces.\n\n The command also dumps the logs of all of the pods in the cluster; these logs are dumped into different directories based on namespace and pod name.\n\n```\nkubectl cluster-info dump [flags]\n```\n\n## Examples\n\n```\n  # Dump current cluster state to stdout\n  kubectl cluster-info dump\n  \n  # Dump current cluster state to \/path\/to\/cluster-state\n  kubectl cluster-info dump --output-directory=\/path\/to\/cluster-state\n  \n  # Dump all namespaces to stdout\n  kubectl cluster-info dump --all-namespaces\n  \n  # Dump a set of namespaces to \/path\/to\/cluster-state\n  kubectl cluster-info dump --namespaces default,kube-system --output-directory=\/path\/to\/cluster-state\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-A, --all-namespaces<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, dump all namespaces.  If true, --namespaces is ignored.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for dump<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--namespaces strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A comma separated list of namespaces to dump.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"json\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--output-directory string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Where to output the files.  If empty or '-' uses stdout, otherwise creates a directory hierarchy in that directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-running-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 20s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The 
name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl cluster-info](..\/)\t - Display cluster information\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl cluster info dump content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Dump cluster information out suitable for debugging and diagnosing cluster problems   By default  dumps everything to stdout  You can optionally specify a directory with   output directory   If you specify a directory  Kubernetes will build a set of files in that directory   By default  only dumps things in the current namespace and  kube system  namespace  but you can switch to a different namespace with the   namespaces flag  or specify   all namespaces to dump all namespaces    The command also dumps the logs of all of the pods in the cluster  these logs are dumped into different directories based on namespace and pod name       kubectl cluster info dump  flags                    Dump current cluster state to stdout   kubectl cluster info dump        Dump current cluster state to  path to cluster state   kubectl cluster info dump   output directory  path to 
cluster-state.

Examples:
  # Dump all namespaces to stdout
  kubectl cluster-info dump --all-namespaces
  # Dump a set of namespaces to /path/to/cluster-state
  kubectl cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state

Options:
  -A, --all-namespaces: If true, dump all namespaces. If true, --namespaces is ignored.
  --allow-missing-template-keys (default: true): If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
  -h, --help: help for dump
  --namespaces strings: A comma separated list of namespaces to dump.
  -o, --output string (default: "json"): Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).
  --output-directory string: Where to output the files. If empty or '-' uses stdout, otherwise creates a directory hierarchy in that directory.
  --pod-running-timeout duration (default: 20s): The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running.
  --show-managed-fields: If true, keep the managedFields when printing objects in JSON or YAML format.
  --template string: Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

Options inherited from parent commands:
  --as string: Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
  --as-group strings: Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
  --as-uid string: UID to impersonate for the operation.
  --cache-dir string (default: "$HOME/.kube/cache"): Default cache directory.
  --certificate-authority string: Path to a cert file for the certificate authority.
  --client-certificate string: Path to a client certificate file for TLS.
  --client-key string: Path to a client key file for TLS.
  --cluster string: The name of the kubeconfig cluster to use.
  --context string: The name of the kubeconfig context to use.
  --default-not-ready-toleration-seconds int (default: 300): Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
  --default-unreachable-toleration-seconds int (default: 300): Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
  --disable-compression: If true, opt-out of response compression for all requests to the server.
  --insecure-skip-tls-verify: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
  --kubeconfig string: Path to the kubeconfig file to use for CLI requests.
  --match-server-version: Require server version to match client version.
  -n, --namespace string: If present, the namespace scope for this CLI request.
  --password string: Password for basic authentication to the API server.
  --profile string (default: "none"): Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex).
  --profile-output string (default: "profile.pprof"): Name of the file to write the profile to.
  --request-timeout string (default: "0"): The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.
  -s, --server string: The address and port of the Kubernetes API server.
  --storage-driver-buffer-duration duration (default: 1m0s): Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction.
  --storage-driver-db string (default: "cadvisor"): database name.
  --storage-driver-host string (default: "localhost:8086"): database host:port.
  --storage-driver-password string (default: "root"): database password.
  --storage-driver-secure: use secure connection with database.
  --storage-driver-table string (default: "stats"): table name.
  --storage-driver-user string (default: "root"): database username.
  --tls-server-name string: Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used.
  --token string: Bearer token for authentication to the API server.
  --user string: The name of the kubeconfig user to use.
  --username string: Username for basic authentication to the API server.
  --version version[=true]: --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version.
  --warnings-as-errors: Treat warnings received from the server as errors and exit with a non-zero exit code.

See also: kubectl cluster-info - Display cluster information.
"}
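The record above documents `kubectl cluster-info dump --output-directory`, which "creates a directory hierarchy in that directory". As a hedged illustration of consuming such a dump, the sketch below counts pods per namespace from the dumped files. The helper name is hypothetical, and the assumed layout (one subdirectory per namespace, each holding a `pods.json` PodList) is an illustration of that directory hierarchy, not a guaranteed contract.

```python
import json
import os


def count_dumped_pods(dump_dir):
    # Walk a `kubectl cluster-info dump --output-directory=<dir>` tree and
    # count the pods recorded per namespace. Assumed layout (hypothetical):
    # <dump_dir>/<namespace>/pods.json containing a PodList with "items".
    counts = {}
    for entry in sorted(os.listdir(dump_dir)):
        pods_file = os.path.join(dump_dir, entry, "pods.json")
        if os.path.isfile(pods_file):
            with open(pods_file) as f:
                counts[entry] = len(json.load(f).get("items", []))
    return counts
```

After something like `kubectl cluster-info dump --namespaces default,kube-system --output-directory=/tmp/cluster-state`, calling `count_dumped_pods("/tmp/cluster-state")` would report per-namespace pod counts, adjusted to whatever layout your kubectl version actually writes.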
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl cluster info","answers":"---\ntitle: kubectl cluster-info\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDisplay addresses of the control plane and services with label kubernetes.io\/cluster-service=true. To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n\n```\nkubectl cluster-info [flags]\n```\n\n## \n\n```\n  # Print the address of the control plane and cluster services\n  kubectl cluster-info\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for cluster-info<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n* [kubectl cluster-info dump](kubectl_cluster-info_dump\/)\t - Dump relevant information for debugging and diagnosis\n","site":"kubernetes reference"}
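The `kubectl explain` record that follows describes fields identified "via a simple JSONPath identifier: `<type>.<fieldName>[.<fieldName>]`", as in `kubectl explain pods.spec.containers`. A minimal sketch of parsing such an identifier into its resource type and field path (the function name is hypothetical, for illustration only):

```python
def split_field_path(identifier):
    # Split a kubectl-explain style identifier such as
    # "pods.spec.containers" into (resource type, list of field names).
    resource, *fields = identifier.split(".")
    return resource, fields
```

For example, `split_field_path("pods.spec.containers")` yields `("pods", ["spec", "containers"])`, mirroring how `kubectl explain` resolves the resource first and then descends through the named fields of its OpenAPI schema.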
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl explain autogenerated true","answers":"---\ntitle: kubectl explain\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDescribe fields and structure of various resources.\n\n This command describes the fields associated with each supported API resource. 
Fields are identified via a simple JSONPath identifier:\n\n        &lt;type&gt;.&lt;fieldName&gt;[.&lt;fieldName&gt;]\n        \n Information about each field is retrieved from the server in OpenAPI format.\n\nUse \"kubectl api-resources\" for a complete list of supported resources.\n\n```\nkubectl explain TYPE [--recursive=FALSE|TRUE] [--api-version=api-version-group] [--output=plaintext|plaintext-openapiv2]\n```\n\n## \n\n```\n  # Get the documentation of the resource and its fields\n  kubectl explain pods\n  \n  # Get all the fields in the resource\n  kubectl explain pods --recursive\n  \n  # Get the explanation for deployment in supported api versions\n  kubectl explain deployments --api-version=apps\/v1\n  \n  # Get the documentation of a specific field of a resource\n  kubectl explain pods.spec.containers\n  \n  # Get the documentation of resources in different format\n  kubectl explain deployment --output=plaintext-openapiv2\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--api-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Use given api-version (group\/version) of the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for explain<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"plaintext\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Format in which to render the schema. Valid values are: (plaintext, plaintext-openapiv2).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When true, print the name of all the fields recursively. 
Otherwise, print the available fields with their description.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to 
use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl explain content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Describe fields and structure of various resources    This command describes the fields associated with each supported API resource  Fields are identified via a simple JSONPath identifier            lt type gt   lt fieldName gt    lt fieldName gt             Information about each field is retrieved from the server in OpenAPI format   Use  kubectl api resources  for a complete list of supported resources       kubectl explain TYPE    recursive FALSE TRUE     api version api version group     output plaintext plaintext openapiv2                    Get the documentation of the resource and its fields   kubectl explain pods        Get all the fields in the resource   kubectl explain pods   recursive        Get the explanation for deployment in supported api versions   kubectl explain deployments   api version apps v1        Get the 
documentation of a specific field of a resource   kubectl explain pods spec containers        Get the documentation of resources in different format   kubectl explain deployment   output plaintext openapiv2               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    api version string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Use given api version  group version  of the resource   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for explain  p   td    tr    tr   td colspan  2    output string nbsp  nbsp  nbsp  nbsp  nbsp Default   plaintext   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Format in which to render the schema  Valid values are   plaintext  plaintext openapiv2    p   td    tr    tr   td colspan  2    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p When true  print the name of all the fields recursively  Otherwise  print the available fields with their description   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate 
for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   
word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout 
requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the 
hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl describe autogenerated true","answers":"---\ntitle: kubectl describe\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nShow details of a specific resource or group of resources.\n\n Print a detailed description of the selected resources, including related resources such as events or controllers. You may select a single object by name, all objects of that type, provide a name prefix, or label selector. For example:\n\n        $ kubectl describe TYPE NAME_PREFIX\n        \n will first check for an exact match on TYPE and NAME_PREFIX. 
If no such resource exists, it will output details for every resource that has a name prefixed with NAME_PREFIX.\n\nUse \"kubectl api-resources\" for a complete list of supported resources.\n\n```\nkubectl describe (-f FILENAME | TYPE [NAME_PREFIX | -l label] | TYPE\/NAME)\n```\n\n## \n\n```\n  # Describe a node\n  kubectl describe nodes kubernetes-node-emt8.c.myproject.internal\n  \n  # Describe a pod\n  kubectl describe pods\/nginx\n  \n  # Describe a pod identified by type and name in \"pod.json\"\n  kubectl describe -f pod.json\n  \n  # Describe all pods\n  kubectl describe pods\n  \n  # Describe pods by label name=myLabel\n  kubectl describe pods -l name=myLabel\n  \n  # Describe all pods managed by the 'frontend' replication controller\n  # (rc-created pods get the name of the rc as a prefix in the pod name)\n  kubectl describe pods frontend\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-A, --all-namespaces<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--chunk-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 500<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Return large lists in chunks rather than all at once. Pass 0 to disable. 
This flag is beta and may change in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files containing the resource to describe<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for describe<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-events&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, display events related to the described object.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl describe content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Show details of a specific resource or group of resources    Print a detailed description of the selected resources  including related resources such as events or controllers  You may select a single object by name  all objects of that type  provide a name prefix  or label selector  For example             kubectl describe TYPE NAME PREFIX           will first check for an exact match on TYPE and NAME PREFIX  If no such resource exists  it will output details for every resource that has a name prefixed with NAME PREFIX   Use  kubectl api resources  for a complete list of supported resources       kubectl describe   f FILENAME   TYPE  NAME PREFIX    l label    TYPE NAME                    Describe a node   kubectl describe nodes kubernetes node emt8 c myproject internal        Describe a pod   kubectl describe pods nginx        Describe a 
pod identified by type and name in  pod json    kubectl describe  f pod json        Describe all pods   kubectl describe pods        Describe pods by label name myLabel   kubectl describe pods  l name myLabel        Describe all pods managed by the  frontend  replication controller      rc created pods get the name of the rc as a prefix in the pod name    kubectl describe pods frontend               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   A    all namespaces  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  list the requested object s  across all namespaces  Namespace in current context is ignored even if specified with   namespace   p   td    tr    tr   td colspan  2    chunk size int nbsp  nbsp  nbsp  nbsp  nbsp Default  500  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Return large lists in chunks rather than all at once  Pass 0 to disable  This flag is beta and may change in the future   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filename  directory  or URL to files containing the resource to describe  p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for describe  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td 
colspan  2   l    selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Selector  label query  to filter on  supports            and       e g   l key1 value1 key2 value2   Matching objects must satisfy all of the specified label constraints   p   td    tr    tr   td colspan  2    show events nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  display events related to the described object   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  
td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break 
word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  
line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td 
style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl edit","answers":"---\ntitle: kubectl edit\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nEdit a resource from the default editor.\n\n The edit command allows you to directly edit any API resource you can retrieve via the command-line tools. It will open the editor defined by your KUBE_EDITOR, or EDITOR environment variables, or fall back to 'vi' for Linux or 'notepad' for Windows. When attempting to open the editor, it will first attempt to use the shell that has been defined in the 'SHELL' environment variable. If this is not defined, the default shell will be used, which is '\/bin\/bash' for Linux or 'cmd' for Windows.\n\n You can edit multiple objects, although changes are applied one at a time. The command accepts file names as well as command-line arguments, although the files you point to must be previously saved versions of resources.\n\n Editing is done with the API version used to fetch the resource. To edit using a specific API version, fully-qualify the resource, version, and group.\n\n The default format is YAML. 
To edit in JSON, specify \"-o json\".\n\n The flag --windows-line-endings can be used to force Windows line endings, otherwise the default for your operating system will be used.\n\n In the event an error occurs while updating, a temporary file will be created on disk that contains your unapplied changes. The most common error when updating a resource is another editor changing the resource on the server. When this occurs, you will have to apply your changes to the newer version of the resource, or update your temporary saved copy to include the latest resource version.\n\n```\nkubectl edit (RESOURCE\/NAME | -f FILENAME)\n```\n\n## \n\n```\n  # Edit the service named 'registry'\n  kubectl edit svc\/registry\n  \n  # Use an alternative editor\n  KUBE_EDITOR=\"nano\" kubectl edit svc\/registry\n  \n  # Edit the job 'myjob' in JSON using the v1 API format\n  kubectl edit job.v1.batch\/myjob -o json\n  \n  # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation\n  kubectl edit deployment\/mydeployment -o yaml --save-config\n  \n  # Edit the 'status' subresource for the 'mydeployment' deployment\n  kubectl edit deployment mydeployment --subresource='status'\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-edit\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files to use to edit the resource<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for edit<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--output-patch<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output the patch if the resource is edited.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. 
Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--subresource string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If specified, edit will operate on the subresource of the requested object. Must be one of [status]. This flag is beta and may change in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--windows-line-endings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Defaults to the line ending native to your platform.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl edit content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Edit a resource from the default editor    The edit command allows you to directly edit any API resource you can retrieve via the command line tools  It will open the editor defined by your KUBE EDITOR  or EDITOR environment variables  or fall back to  vi  for Linux or  notepad  for Windows  When attempting to open the editor  it will first attempt to use the shell that has been defined in the  SHELL  environment variable  If this is not defined  the default shell will be used  which is   bin bash  for Linux or  cmd  for Windows    You can edit multiple objects  although changes are applied one at a time  The command accepts file names as well as command line arguments  although the files you point to must be previously saved versions of resources    Editing is done with the API version used to fetch the resource  To edit using a specific API 
version  fully qualify the resource  version  and group    The default format is YAML  To edit in JSON  specify   o json     The flag   windows line endings can be used to force Windows line endings  otherwise the default for your operating system will be used    In the event an error occurs while updating  a temporary file will be created on disk that contains your unapplied changes  The most common error when updating a resource is another editor changing the resource on the server  When this occurs  you will have to apply your changes to the newer version of the resource  or update your temporary saved copy to include the latest resource version       kubectl edit  RESOURCE NAME    f FILENAME                    Edit the service named  registry    kubectl edit svc registry        Use an alternative editor   KUBE EDITOR  nano  kubectl edit svc registry        Edit the job  myjob  in JSON using the v1 API format   kubectl edit job v1 batch myjob  o json        Edit the deployment  mydeployment  in YAML and save the modified config in its annotation   kubectl edit deployment mydeployment  o yaml   save config        Edit the  status  subresource for the  mydeployment  deployment   kubectl edit deployment mydeployment   subresource  status                table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl edit   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td 
colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filename  directory  or URL to files to use to edit the resource  p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for edit  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    output patch  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output the patch if the resource is edited   p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    subresource string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If 
specified  edit will operate on the subresource of the requested object  Must be one of  status   This flag is beta and may change in the future   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or duplicate fields   p   td    tr    tr   td colspan  2    windows line endings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Defaults to the line ending native to your platform   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    
tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style 
 line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request 
timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   
td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl events autogenerated true","answers":"---\ntitle: kubectl events\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDisplay events.\n\n Prints a table of the most important information about events. You can request events for a namespace, for all namespaces, or filtered to only those pertaining to a specified resource.\n\n```\nkubectl events [(-o|--output=)json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file] [--for TYPE\/NAME] [--watch] [--types=Normal,Warning]\n```\n\n## \n\n```\n  # List recent events in the default namespace\n  kubectl events\n  \n  # List recent events in all namespaces\n  kubectl events --all-namespaces\n  \n  # List recent events for the specified pod, then wait for more events and list them as they arrive\n  kubectl events --for pod\/web-pod-13je7 --watch\n  \n  # List recent events in YAML format\n  kubectl events -oyaml\n  \n  # List only recent events of type 'Warning' or 'Normal'\n  kubectl events --types=Warning,Normal\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-A, 
--all-namespaces<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--chunk-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 500<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Return large lists in chunks rather than all at once. Pass 0 to disable. This flag is beta and may change in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--for string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filter events to only those pertaining to the specified resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for events<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--no-headers<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When using the default output format, don't print headers.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--types strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output only events of given types.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-w, --watch<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>After listing the requested events, watch for more events.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl events content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Display events    Prints a table of the most important information about events  You can request events for a namespace  for all namespace  or filtered to only those pertaining to a specified resource       kubectl events    o   output  json yaml name go template go template file template templatefile jsonpath jsonpath as json jsonpath file     for TYPE NAME     watch     types Normal Warning                    List recent events in the default namespace   kubectl events        List recent events in all namespaces   kubectl events   all namespaces        List recent events for the specified pod  then wait for more events and list them as they arrive   kubectl events   for pod web pod 13je7   watch        List recent events in YAML format   kubectl events  oyaml        List recent only events of type  Warning  or  Normal    kubectl events   
types Warning Normal               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   A    all namespaces  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  list the requested object s  across all namespaces  Namespace in current context is ignored even if specified with   namespace   p   td    tr    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    chunk size int nbsp  nbsp  nbsp  nbsp  nbsp Default  500  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Return large lists in chunks rather than all at once  Pass 0 to disable  This flag is beta and may change in the future   p   td    tr    tr   td colspan  2    for string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filter events to only those pertaining to the specified resource   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for events  p   td    tr    tr   td colspan  2    no headers  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p When using the default output format  don t print headers   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   
word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    types strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output only events of given types   p   td    tr    tr   td colspan  2   w    watch  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p After listing the requested events  watch for more events   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the 
certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  
td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the 
"}
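Several kubectl flags in the tables above (`--since`, `--pod-running-timeout`, `--storage-driver-buffer-duration`, `--request-timeout`) take Go-style duration strings such as `5s`, `2m`, `3h`, or `1m0s`. As a rough sketch of what that format means, here is a hypothetical helper (this is not kubectl's actual parser, which uses Go's `time.ParseDuration` and also accepts `ms`, `us`, `ns`, and fractional values; the sketch below handles only whole `h`/`m`/`s` components and does not enforce unit ordering):

```python
import re

# Seconds per supported unit; a deliberate simplification of Go durations.
_UNITS = {"h": 3600, "m": 60, "s": 1}

def parse_duration(text: str) -> int:
    """Return the total seconds encoded by a Go-style duration like "1m30s"."""
    parts = re.findall(r"(\d+)([hms])", text)
    # Reject input that is not entirely made of number+unit components.
    if not parts or "".join(n + u for n, u in parts) != text:
        raise ValueError(f"not a duration: {text!r}")
    return sum(int(n) * _UNITS[u] for n, u in parts)
```

So `--since=1h` asks for logs from the last 3600 seconds, and the `--storage-driver-buffer-duration` default of `1m0s` is a 60-second buffer window.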
{"questions":"kubernetes reference nolist true contenttype tool reference title kubectl logs weight 30 autogenerated true","answers":"---\ntitle: kubectl logs\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nPrint the logs for a container in a pod or specified resource. If the pod has only one container, the container name is optional.\n\n```\nkubectl logs [-f] [-p] (POD | TYPE\/NAME) [-c CONTAINER]\n```\n\n## \n\n```\n  # Return snapshot logs from pod nginx with only one container\n  kubectl logs nginx\n  \n  # Return snapshot logs from pod nginx with multi containers\n  kubectl logs nginx --all-containers=true\n  \n  # Return snapshot logs from all containers in pods defined by label app=nginx\n  kubectl logs -l app=nginx --all-containers=true\n  \n  # Return snapshot of previous terminated ruby container logs from pod web-1\n  kubectl logs -p -c ruby web-1\n  \n  # Begin streaming the logs of the ruby container in pod web-1\n  kubectl logs -f -c ruby web-1\n  \n  # Begin streaming the logs from all containers in pods defined by label app=nginx\n  kubectl logs -f -l app=nginx --all-containers=true\n  \n  # Display only the most recent 20 lines of output in pod nginx\n  kubectl logs --tail=20 nginx\n  \n  # Show all logs from pod nginx written in the last hour\n  kubectl logs --since=1h nginx\n  \n  # Show logs from a 
kubelet with an expired serving certificate\n  kubectl logs --insecure-skip-tls-verify-backend nginx\n  \n  # Return snapshot logs from first container of a job named hello\n  kubectl logs job\/hello\n  \n  # Return snapshot logs from container nginx-1 of a deployment named nginx\n  kubectl logs deployment\/nginx -c nginx-1\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all-containers<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Get all containers' logs in the pod(s).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--all-pods<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Get logs from all pod(s). Sets prefix to true.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-c, --container string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Print the logs of this container<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --follow<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Specify if the logs should be streamed.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for logs<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--ignore-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If watching \/ following pod logs, allow for any errors that occur to be non-fatal<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify-backend<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Skip verifying the identity of the kubelet that logs are requested from.  In theory, an attacker could provide invalid log content back. 
You might want to use this if your kubelet serving certificates have expired.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--limit-bytes int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Maximum bytes of logs to return. Defaults to no limit.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--max-log-requests int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 5<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Specify maximum number of concurrent logs to follow when using a selector. Defaults to 5.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-running-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 20s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--prefix<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Prefix each log line with the log source (pod name and container name)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-p, --previous<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, print the logs for the previous instance of the container in a pod if it exists.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--since duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. 
Only one of since-time \/ since may be used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--since-time string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time \/ since may be used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tail int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Lines of recent log file to display. Defaults to -1 with no selector (showing all log lines), otherwise 10 if a selector is provided.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--timestamps<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Include timestamps on each line in the log output<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":""}
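The `--tail` default described in the kubectl logs options above (-1 without a selector, 10 per container with one) amounts to a small piece of defaulting logic. A minimal sketch, assuming a hypothetical `effective_tail` helper rather than kubectl's actual source:

```python
from typing import Optional

def effective_tail(tail: int, selector: Optional[str]) -> Optional[int]:
    """Resolve the --tail flag as documented for kubectl logs.

    -1 means "not set": with no label selector all lines are shown
    (modelled here as None); with a selector, output is capped at 10
    lines per container so the combined stream stays readable.
    """
    if tail != -1:
        return tail  # an explicit line count always wins
    return None if selector is None else 10
```

For example, `kubectl logs nginx` streams the whole log, while `kubectl logs -l app=nginx` shows only the last 10 lines of each matching container unless `--tail` is set explicitly.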
{"questions":"kubernetes reference title kubectl autoscale nolist true contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl autoscale\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreates an autoscaler that automatically chooses and sets the number of pods that run in a Kubernetes cluster.\n\n Looks up a deployment, replica set, stateful set, or replication controller by name and creates an autoscaler that uses the given resource as a reference. 
An autoscaler can automatically increase or decrease the number of pods deployed within the system as needed.\n\n```\nkubectl autoscale (-f FILENAME | TYPE NAME | TYPE\/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU]\n```\n\n## \n\n```\n  # Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used\n  kubectl autoscale deployment foo --min=2 --max=10\n  \n  # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80%\n  kubectl autoscale rc foo --max=5 --cpu-percent=80\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cpu-percent int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The target average CPU utilization (represented as a percent of requested CPU) over all the pods. If it's not specified or negative, a default autoscaling policy will be used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-autoscale\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to autoscale.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for autoscale<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--max int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The upper limit for the number of pods that can be set by the autoscaler. Required.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--min int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The lower limit for the number of pods that can be set by the autoscaler. If it's not specified or negative, the server will apply a default value.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name for the newly created object. 
If not specified, the name of the input resource will be used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl options","answers":"---\ntitle: kubectl options\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nPrint the list of flags inherited by all commands\n\n```\nkubectl options [flags]\n```\n\n## Examples\n\n```\n  # Print flags inherited by all commands\n  kubectl options\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for options<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
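Several of the inherited flags documented above (`--kubeconfig`, `--cluster`, `--context`, `--user`, `-n/--namespace`) select entries from a kubeconfig file. As an illustration only — the names `dev-cluster`, `dev-admin`, and `dev`, the server address, and the file paths below are all hypothetical — a minimal kubeconfig those flags could point into might look like:

```yaml
# Hypothetical minimal kubeconfig; entry names and paths are illustrative.
apiVersion: v1
kind: Config
clusters:
- name: dev-cluster            # selected by --cluster=dev-cluster
  cluster:
    server: https://127.0.0.1:6443
    certificate-authority: /path/to/ca.crt
users:
- name: dev-admin              # selected by --user=dev-admin
  user:
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key
contexts:
- name: dev                    # selected by --context=dev
  context:
    cluster: dev-cluster
    user: dev-admin
    namespace: default         # overridden per-request by -n/--namespace
current-context: dev
```

With a file of this shape, `kubectl --kubeconfig=./config --context=dev get pods -n kube-system` would resolve the cluster and credentials from the `dev` context while overriding its default namespace for that one request.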
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference title kubectl create namespace weight 30 autogenerated true","answers":"---\ntitle: kubectl create namespace\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a namespace with the specified name.\n\n```\nkubectl create namespace NAME [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Create a new namespace named my-namespace\n  kubectl create namespace my-namespace\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. 
If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for namespace<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference"}
break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create          Create a resource from a file or from stdin "}
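The global `--request-timeout` flag above requires non-zero values to carry a time unit (e.g. `1s`, `2m`, `3h`), with `"0"` meaning no timeout. A minimal sketch of a check for that documented format — the regex is an illustration of the description, not kubectl's actual parser:

```shell
# Sketch: validate that a --request-timeout value carries a time unit, as the
# flag's description requires (e.g. 1s, 2m, 3h); "0" alone disables the timeout.
# This regex illustrates the documented format; it is not kubectl's own parser.
is_valid_timeout() {
  printf '%s' "$1" | grep -Eq '^(0|([0-9]+(ns|us|ms|s|m|h))+)$'
}
is_valid_timeout 30s   && echo "30s accepted"
is_valid_timeout 1h30m && echo "1h30m accepted"
is_valid_timeout 15    || echo "15 rejected: missing unit"
```

Compound values such as `1h30m` pass because the unit group may repeat, matching Go-style duration strings.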
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kubectl create token contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl create token\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nRequest a service account token.\n\n```\nkubectl create token SERVICE_ACCOUNT_NAME\n```\n\n## Examples\n\n```\n  # Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace\n  kubectl create token myapp\n  \n  # Request a token for a service account in a custom namespace\n  kubectl create token myapp --namespace myns\n  \n  # Request a token with a custom expiration\n  kubectl create token myapp --duration 10m\n  \n  # Request a token with a custom audience\n  kubectl create token myapp --audience https:\/\/example.com\n  \n  # Request a token bound to an instance of a Secret object\n  kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret\n  \n  # Request a token bound to an instance of a Secret object with a specific UID\n  kubectl create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" 
style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--audience strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Audience of the requested token. If unset, defaults to requesting a token for use with the Kubernetes API server. May be repeated to request a token valid for multiple audiences.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--bound-object-kind string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Kind of an object to bind the token to. Supported kinds are Node, Pod, Secret. If set, --bound-object-name must be provided.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--bound-object-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of an object to bind the token to. The token will expire when the object is deleted. Requires --bound-object-kind.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--bound-object-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID of an object to bind the token to. Requires --bound-object-kind and --bound-object-name. If unset, the UID of the existing object is used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--duration duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Requested lifetime of the issued token. If not set or if set to 0, the lifetime will be determined by the server automatically. 
The server may return a token with a longer or shorter lifetime.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for token<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference"}
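The token that `kubectl create token` prints is a JWT, so its claims (the `--audience` values under `aud`, the expiry under `exp`, the service-account identity under `sub`) can be read by base64url-decoding the payload segment. A minimal sketch — the payload below is fabricated for illustration (reusing the `myapp`/`myns` names from the examples above), not a real credential; decoding a real token works the same way:

```shell
# Sketch: build a fabricated JWT payload, then decode it exactly as you would
# decode a real token from `kubectl create token`.
payload_json='{"aud":["https://kubernetes.default.svc"],"exp":1700000000,"sub":"system:serviceaccount:myns:myapp"}'
# Encode as base64url without padding (how JWT segments are encoded)
seg=$(printf '%s' "$payload_json" | base64 | tr -d '=\n' | tr '+/' '-_')
jwt="eyJhbGciOiJSUzI1NiJ9.${seg}.fake-signature"

# Decode: take the second dot-separated segment, undo the base64url character
# mapping, restore '=' padding to a multiple of 4, and base64-decode.
p=$(printf '%s' "$jwt" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
decoded=$(printf '%s' "$p" | base64 -d)
echo "$decoded"
```

Note that this only inspects the claims; verifying the signature requires the issuer's public key, which is what the API server (or any service in the token's audience) checks.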
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl create ingress","answers":"---\ntitle: kubectl create ingress\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate an ingress with the specified name.\n\n```\nkubectl create ingress NAME --rule=host\/path=service:port[,tls[=secret]] \n```\n\n## \n\n```\n  # Create a single ingress called 'simple' that directs requests to foo.com\/bar to svc\n  # svc1:8080 with a TLS secret \"my-cert\"\n  kubectl create ingress simple --rule=\"foo.com\/bar=svc1:8080,tls=my-cert\"\n  \n  # Create a catch all ingress of \"\/path\" pointing to service svc:port and Ingress Class as \"otheringress\"\n  kubectl create ingress catch-all --class=otheringress --rule=\"\/path=svc:port\"\n  \n  # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2\n  kubectl create ingress annotated --class=default --rule=\"foo.com\/bar=svc:port\" \\\n  --annotation ingress.annotation1=foo \\\n  --annotation ingress.annotation2=bla\n  \n  # Create an ingress with the same host and multiple paths\n  kubectl create ingress multipath --class=default \\\n  --rule=\"foo.com\/=svc:port\" \\\n  --rule=\"foo.com\/admin\/=svcadmin:portadmin\"\n  \n  # Create an ingress 
with multiple hosts and the pathType as Prefix\n  kubectl create ingress ingress1 --class=default \\\n  --rule=\"foo.com\/path*=svc:8080\" \\\n  --rule=\"bar.com\/admin*=svc2:http\"\n  \n  # Create an ingress with TLS enabled using the default ingress certificate and different path types\n  kubectl create ingress ingtls --class=default \\\n  --rule=\"foo.com\/=svc:https,tls\" \\\n  --rule=\"foo.com\/path\/subpath*=othersvc:8080\"\n  \n  # Create an ingress with TLS enabled using a specific secret and pathType as Prefix\n  kubectl create ingress ingsecret --class=default \\\n  --rule=\"foo.com\/*=svc:8080,tls=secret1\"\n  \n  # Create an ingress with a default backend\n  kubectl create ingress ingdefault --class=default \\\n  --default-backend=defaultsvc:http \\\n  --rule=\"foo.com\/*=svc:8080,tls=secret1\"\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--annotation strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Annotation to insert in the ingress object, in the format annotation=value<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--class string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Ingress Class to be used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-backend string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default service for backend, in format of svcname:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for ingress<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--rule strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Rule in format host\/path=service:port[,tls=secretname]. Paths containing the leading character '*' are considered pathType=Prefix. tls argument is optional.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl create deployment autogenerated true","answers":"---\ntitle: kubectl create deployment\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a deployment with the specified name.\n\n```\nkubectl create deployment NAME --image=image -- [COMMAND] [args...]\n```\n\n## \n\n```\n  # Create a deployment named my-dep that runs the busybox image\n  kubectl create deployment my-dep --image=busybox\n  \n  # Create a deployment with a command\n  kubectl create deployment my-dep --image=busybox -- date\n  \n  # Create a deployment named my-dep that runs the nginx image with 3 replicas\n  kubectl create deployment my-dep --image=nginx --replicas=3\n  \n  # Create a deployment named my-dep that runs the busybox image and expose port 5701\n  kubectl create deployment my-dep --image=busybox --port=5701\n  \n  # Create a deployment named my-dep that runs multiple containers\n  kubectl create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td 
colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for deployment<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Image names to run. A deployment can have multiple images set for multi-container pod.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--port int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The containerPort that this deployment exposes.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-r, --replicas int32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Number of replicas to create. Default is 1.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl create service loadbalancer","answers":"---\ntitle: kubectl create service loadbalancer\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nCreate a LoadBalancer service with the specified name.\n\n```\nkubectl create service loadbalancer NAME [--tcp=port:targetPort] [--dry-run=server|client|none]\n```\n\n## Examples\n\n```\n  # Create a new LoadBalancer service named my-lbs\n  kubectl create service loadbalancer my-lbs --tcp=5678:8080\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for loadbalancer<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. 
This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tcp strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Port pairs can be specified as '&lt;port&gt;:&lt;targetPort&gt;'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create service](..\/)\t - Create a service using a specified subcommand\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create service loadbalancer content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create a LoadBalancer service with the specified name       kubectl create service loadbalancer NAME    tcp port targetPort     dry run server client none                    Create a new LoadBalancer service named my lbs   kubectl create service loadbalancer my lbs   tcp 5678 8080               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp 
 nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl create   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for loadbalancer  p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    tcp strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Port pairs can be specified as   lt port gt   lt targetPort gt     p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template 
file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or duplicate fields   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory 
 p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word  
  p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    
storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td   
 tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create service          Create a service using a specified subcommand "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kubectl create service externalname contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl create service externalname\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate an ExternalName service with the specified name.\n\n ExternalName service references to an external DNS address instead of only pods, which will allow application authors to reference services that exist off platform, on other clusters, or locally.\n\n```\nkubectl create service externalname NAME --external-name external.name [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Create a new ExternalName service named my-ns\n  kubectl create service externalname my-ns --external-name bar.com\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--external-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>External name of service<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for externalname<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. 
This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tcp strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Port pairs can be specified as '&lt;port&gt;:&lt;targetPort&gt;'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create service](..\/)\t - Create a service using a specified subcommand\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create service externalname content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create an ExternalName service with the specified name    ExternalName service references to an external DNS address instead of only pods  which will allow application authors to reference services that exist off platform  on other clusters  or locally       kubectl create service externalname NAME   external name external name    dry run server client none                    Create a new ExternalName service named my ns   kubectl create service externalname my ns   external name bar com               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any 
errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    external name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p External name of service  p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl create   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for externalname  p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    
tcp strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Port pairs can be specified as   lt port gt   lt targetPort gt     p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or duplicate fields   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    
tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not 
already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before 
giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   
tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create service          Create a service using a specified subcommand "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl create secret","answers":"---\ntitle: kubectl create secret\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nCreate a secret with a specified type.\n\n A docker-registry type secret is for accessing a container registry.\n\n A generic type secret indicates an Opaque secret type.\n\n A tls type secret holds a TLS certificate and its associated key.\n\n```\nkubectl create secret (docker-registry | generic | tls)\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for secret<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n* [kubectl create secret docker-registry](..\/kubectl_create_secret_docker-registry\/)\t - Create a secret for use with a Docker registry\n* [kubectl create secret generic](..\/kubectl_create_secret_generic\/)\t - Create a secret from a local file, directory, or literal value\n* [kubectl create secret tls](..\/kubectl_create_secret_tls\/)\t - Create a TLS secret\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create secret content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create a secret with specified type    A docker registry type secret is for accessing a container registry    A generic type secret indicate an Opaque secret type    A tls type secret holds TLS certificate and its associated key       kubectl create secret  docker registry   generic   tls                table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   h    help  td    tr   tr   td   td  
td style  line height  130   word wrap  break word    p help for secret  p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the 
kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td 
   tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130  
 word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create       
   Create a resource from a file or from stdin    kubectl create secret docker registry     kubectl create secret docker registry      Create a secret for use with a Docker registry    kubectl create secret generic     kubectl create secret generic      Create a secret from a local file  directory  or literal value    kubectl create secret tls     kubectl create secret tls      Create a TLS secret "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kubectl create priorityclass contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl create priorityclass\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nCreate a priority class with the specified name, value, globalDefault and description.\n\n```\nkubectl create priorityclass NAME --value=VALUE --global-default=BOOL [--dry-run=server|client|none]\n```\n\n## Examples\n\n```\n  # Create a priority class named high-priority\n  kubectl create priorityclass high-priority --value=1000 --description=\"high priority\"\n  \n  # Create a priority class named default-priority that is considered the global default priority\n  kubectl create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\"\n  \n  # Create a priority class named high-priority that cannot preempt pods with lower priority\n  kubectl create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td 
colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--description string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>description is an arbitrary string that usually provides guidelines on when this priority class should be used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--global-default<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>global-default specifies whether this PriorityClass should be considered as the default priority.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for priorityclass<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--preemption-policy string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"PreemptLowerPriority\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>preemption-policy is the policy for preempting pods with lower priority.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--value int32<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>the value of this priority class.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create priorityclass content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create a priority class with the specified name  value  globalDefault and description       kubectl create priorityclass NAME   value VALUE   global default BOOL    dry run server client none                    Create a priority class named high priority   kubectl create priorityclass high priority   value 1000   description  high priority         Create a priority class named default priority that is considered as the global default priority   kubectl create priorityclass default priority   value 1000   global default true   description  default priority         Create a priority class named high priority that cannot preempt pods with lower priority   kubectl create priorityclass high priority   value 1000   description  high priority    preemption policy  Never                table style  width  100   table layout  fixed     colgroup   col span  
1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    description string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p description is an arbitrary string that usually provides guidelines on when this priority class should be used   p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl create   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2    global default  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p global default specifies whether this PriorityClass should be considered as the default priority   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for priorityclass  p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   
td colspan  2    preemption policy string nbsp  nbsp  nbsp  nbsp  nbsp Default   PreemptLowerPriority   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p preemption policy is the policy for preempting pods with lower priority   p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or 
duplicate fields   p   td    tr    tr   td colspan  2    value int32  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p the value of this priority class   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    
context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td 
style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver 
password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the 
server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create          Create a resource from a file or from stdin "}
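The `kubectl create priorityclass` record above documents `--value` (an int32 flag), `--global-default`, `--description`, and `--preemption-policy`. A minimal sketch of the PriorityClass object those flags map onto (the function name and defaults are illustrative; the `apiVersion` assumes the GA `scheduling.k8s.io/v1` API):

```python
def priorityclass_manifest(name, value, global_default=False,
                           description="", preemption_policy="PreemptLowerPriority"):
    """Sketch of the object `kubectl create priorityclass` submits."""
    # --value is declared as int32 in the reference table, so reject anything outside that range
    if not -2**31 <= value < 2**31:
        raise ValueError("value must fit in int32")
    return {
        "apiVersion": "scheduling.k8s.io/v1",
        "kind": "PriorityClass",
        "metadata": {"name": name},
        "value": value,
        "globalDefault": global_default,      # --global-default
        "description": description,           # --description
        "preemptionPolicy": preemption_policy # --preemption-policy, default PreemptLowerPriority
    }
```

This mirrors the reference's first example, `kubectl create priorityclass high-priority --value=1000 --description="high priority"`; passing `preemption_policy="Never"` corresponds to the non-preempting variant shown there.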
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl create poddisruptionbudget autogenerated true","answers":"---\ntitle: kubectl create poddisruptionbudget\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a pod disruption budget with the specified name, selector, and desired minimum available pods.\n\n```\nkubectl create poddisruptionbudget NAME --selector=SELECTOR --min-available=N [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Create a pod disruption budget named my-pdb that will select all pods with the app=rails label\n  # and require at least one of them being available at any point in time\n  kubectl create poddisruptionbudget my-pdb --selector=app=rails --min-available=1\n  \n  # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label\n  # and require at least half of the pods selected to be available at any point in time\n  kubectl create pdb my-pdb --selector=app=nginx --min-available=50%\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for poddisruptionbudget<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--max-unavailable string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The maximum number or percentage of unavailable pods this budget requires.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--min-available string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The minimum number or percentage of available pods this budget requires.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A label selector to use for this budget. Only equality-based selector requirements are supported.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference"}
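The `--min-available` and `--max-unavailable` flags documented above accept either an absolute pod count (`"2"`) or a percentage of the selected pods (`"50%"`). As a rough illustration of that number-vs-percentage distinction, the hypothetical helper below converts a `--min-available` style value into a required pod count; rounding percentages up with `math.ceil` is an assumption made here for clarity, not a statement of the exact conversion the Kubernetes disruption controller performs.

```python
import math

def required_available(min_available: str, expected_pods: int) -> int:
    """Interpret a kubectl-style --min-available value ("2" or "50%") as the
    number of pods, out of expected_pods, that must remain available.
    Illustrative sketch only: rounding a percentage up is an assumption."""
    if min_available.endswith("%"):
        percent = int(min_available[:-1])
        # Percentage form: scale by the number of pods the selector matches.
        return math.ceil(expected_pods * percent / 100)
    # Absolute form: the value is already a pod count.
    return int(min_available)

print(required_available("1", 10))   # -> 1
print(required_available("50%", 3))  # -> 2 (ceil of 1.5 under this sketch)
```

So a budget like `kubectl create pdb my-pdb --selector=app=nginx --min-available=50%` over three matching pods would, under this reading, require two of them to stay up at any point in time.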
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl create configmap autogenerated true","answers":"---\ntitle: kubectl create configmap\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nCreate a config map based on a file, directory, or specified literal value.\n\n A single config map may package one or more key\/value pairs.\n\n When creating a config map based on a file, the key will default to the basename of the file, and the value will default to the file content.  If the basename is an invalid key, you may specify an alternate key.\n\n When creating a config map based on a directory, each file whose basename is a valid key in the directory will be packaged into the config map.  Any directory entries except regular files are ignored (e.g. 
subdirectories, symlinks, devices, pipes, etc).\n\n```\nkubectl create configmap NAME [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run=server|client|none]\n```\n\n## Examples\n\n```\n  # Create a new config map named my-config based on folder bar\n  kubectl create configmap my-config --from-file=path\/to\/bar\n  \n  # Create a new config map named my-config with specified keys instead of file basenames on disk\n  kubectl create configmap my-config --from-file=key1=\/path\/to\/bar\/file1.txt --from-file=key2=\/path\/to\/bar\/file2.txt\n  \n  # Create a new config map named my-config with key1=config1 and key2=config2\n  kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2\n  \n  # Create a new config map named my-config from the key=value pairs in the file\n  kubectl create configmap my-config --from-file=path\/to\/bar\n  \n  # Create a new config map named my-config from an env file\n  kubectl create configmap my-config --from-env-file=path\/to\/foo.env --from-env-file=path\/to\/bar.env\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--append-hash<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Append a hash of the configmap to its name.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--from-env-file strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Specify the path to a file to read lines of key=val pairs to create a configmap.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--from-file strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Key file can be specified using its file path, in which case file basename will be used as configmap key, or optionally with a key and file path, in which case the given key will be used.  Specifying a directory will iterate each named file in the directory whose basename is a valid configmap key.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--from-literal strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Specify a key and literal value to insert in configmap (i.e. 
mykey=somevalue)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for configmap<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create configmap content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create a config map based on a file  directory  or specified literal value    A single config map may package one or more key value pairs    When creating a config map based on a file  the key will default to the basename of the file  and the value will default to the file content   If the basename is an invalid key  you may specify an alternate key    When creating a config map based on a directory  each file whose basename is a valid key in the directory will be packaged into the config map   Any directory entries except regular files are ignored  e g  subdirectories  symlinks  devices  pipes  etc        kubectl create configmap NAME    from file  key  source     from literal key1 value1     dry run server client none                    Create a new config map named my config based on folder bar   kubectl create configmap my config   from file path 
to bar        Create a new config map named my config with specified keys instead of file basenames on disk   kubectl create configmap my config   from file key1  path to bar file1 txt   from file key2  path to bar file2 txt        Create a new config map named my config with key1 config1 and key2 config2   kubectl create configmap my config   from literal key1 config1   from literal key2 config2        Create a new config map named my config from the key value pairs in the file   kubectl create configmap my config   from file path to bar        Create a new config map named my config from an env file   kubectl create configmap my config   from env file path to foo env   from env file path to bar env               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    append hash  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Append a hash of the configmap to its name   p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl create   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager 
used to track field ownership   p   td    tr    tr   td colspan  2    from env file strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Specify the path to a file to read lines of key val pairs to create a configmap   p   td    tr    tr   td colspan  2    from file strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Key file can be specified using its file path  in which case file basename will be used as configmap key  or optionally with a key and file path  in which case the given key will be used   Specifying a directory will iterate each named file in the directory whose basename is a valid configmap key   p   td    tr    tr   td colspan  2    from literal strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Specify a key and literal value to insert in configmap  i e  mykey somevalue   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for configmap  p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  
line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or duplicate fields   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp 
Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   
td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  
line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    
tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create          Create a resource from a file or from stdin "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl create service autogenerated true","answers":"---\ntitle: kubectl create service\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a service using a specified subcommand.\n\n```\nkubectl create service [flags]\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for service<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n* [kubectl create service clusterip](..\/kubectl_create_service_clusterip\/)\t - Create a ClusterIP service\n* [kubectl create service externalname](..\/kubectl_create_service_externalname\/)\t - Create an ExternalName service\n* [kubectl create service loadbalancer](..\/kubectl_create_service_loadbalancer\/)\t - Create a LoadBalancer service\n* [kubectl create service nodeport](..\/kubectl_create_service_nodeport\/)\t - Create a NodePort service\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create service content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create a service using a specified subcommand       kubectl create service  flags                table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for service  p   td    tr     tbody    table              
--as string : Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
--as-group strings : Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--as-uid string : UID to impersonate for the operation.
--cache-dir string (Default: $HOME/.kube/cache) : Default cache directory
--certificate-authority string : Path to a cert file for the certificate authority
--client-certificate string : Path to a client certificate file for TLS
--client-key string : Path to a client key file for TLS
--cluster string : The name of the kubeconfig cluster to use
--context string : The name of the kubeconfig context to use
--default-not-ready-toleration-seconds int (Default: 300) : Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
--default-unreachable-toleration-seconds int (Default: 300) : Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
--disable-compression : If true, opt-out of response compression for all requests to the server
--insecure-skip-tls-verify : If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kubeconfig string : Path to the kubeconfig file to use for CLI requests.
--match-server-version : Require server version to match client version
-n, --namespace string : If present, the namespace scope for this CLI request
--password string : Password for basic authentication to the API server
--profile string (Default: none) : Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)
--profile-output string (Default: profile.pprof) : Name of the file to write the profile to
--request-timeout string (Default: 0) : The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.
-s, --server string : The address and port of the Kubernetes API server
--storage-driver-buffer-duration duration (Default: 1m0s) : Writes in the storage driver will be buffered for this duration, and committed to the non-memory backends as a single transaction
--storage-driver-db string (Default: cadvisor) : database name
--storage-driver-host string (Default: localhost:8086) : database host:port
--storage-driver-password string (Default: root) : database password
--storage-driver-secure : use secure connection with database
--storage-driver-table string (Default: stats) : table name
--storage-driver-user string (Default: root) : database username
--tls-server-name string : Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used
--token string : Bearer token for authentication to the API server
--user string : The name of the kubeconfig user to use
--username string : Username for basic authentication to the API server
--version version[=true] : --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version
--warnings-as-errors : Treat warnings received from the server as errors and exit with a non-zero exit code

SEE ALSO:
* kubectl create : Create a resource from a file or from stdin
* kubectl create service clusterip : Create a ClusterIP service
* kubectl create service externalname : Create an ExternalName service
* kubectl create service loadbalancer : Create a LoadBalancer service
* kubectl create service nodeport : Create a NodePort service"}
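The impersonation flags in the options table above (`--as`, `--as-group`) are most useful for checking what another identity is allowed to do. A minimal sketch, assuming a cluster is reachable; the namespace and service-account names here are hypothetical:

```shell
# Ask the API server whether the (hypothetical) service account
# dev:builder may list pods, by impersonating it.
kubectl auth can-i list pods \
  --as=system:serviceaccount:dev:builder \
  --as-group=system:authenticated \
  -n dev
```

`kubectl auth can-i` prints `yes` or `no` depending on the impersonated identity's RBAC permissions, which makes it a quick way to validate a binding before relying on it.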
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl create clusterrolebinding","answers":"---\ntitle: kubectl create clusterrolebinding\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a cluster role binding for a particular cluster role.\n\n```\nkubectl create clusterrolebinding NAME --clusterrole=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role\n  kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--clusterrole string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>ClusterRole this ClusterRoleBinding should reference<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Groups to bind to the clusterrole. The flag can be repeated to add multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for clusterrolebinding<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. 
This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--serviceaccount strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Service accounts to bind to the clusterrole, in the format &lt;namespace&gt;:&lt;name&gt;. The flag can be repeated to add multiple service accounts.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Usernames to bind to the clusterrole. The flag can be repeated to add multiple users.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create clusterrolebinding content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com 
kubernetes sigs reference docs   project "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kubectl create secret generic contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl create secret generic\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nCreate a secret based on a file, directory, or specified literal value.\n\n A single secret may package one or more key\/value pairs.\n\n When creating a secret based on a file, the key will default to the basename of the file, and the value will default to the file content. If the basename is an invalid key or you wish to choose your own, you may specify an alternate key.\n\n When creating a secret based on a directory, each file whose basename is a valid key in the directory will be packaged into the secret. Any directory entries except regular files are ignored (e.g. 
subdirectories, symlinks, devices, pipes, etc).\n\n```\nkubectl create secret generic NAME [--type=string] [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run=server|client|none]\n```\n\n## Examples\n\n```\n  # Create a new secret named my-secret with keys for each file in folder bar\n  kubectl create secret generic my-secret --from-file=path\/to\/bar\n  \n  # Create a new secret named my-secret with specified keys instead of names on disk\n  kubectl create secret generic my-secret --from-file=ssh-privatekey=path\/to\/id_rsa --from-file=ssh-publickey=path\/to\/id_rsa.pub\n  \n  # Create a new secret named my-secret with key1=supersecret and key2=topsecret\n  kubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret\n  \n  # Create a new secret named my-secret using a combination of a file and a literal\n  kubectl create secret generic my-secret --from-file=ssh-privatekey=path\/to\/id_rsa --from-literal=passphrase=topsecret\n  \n  # Create a new secret named my-secret from env files\n  kubectl create secret generic my-secret --from-env-file=path\/to\/foo.env --from-env-file=path\/to\/bar.env\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--append-hash<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Append a hash of the secret to its name.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--from-env-file strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Specify the path to a file to read lines of key=val pairs to create a secret.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--from-file strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Key files can be specified using their file path, in which case a default name will be given to them, or optionally with a name and file path, in which case the given name will be used.  Specifying a directory will iterate each named file in the directory that is a valid secret key.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--from-literal strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Specify a key and literal value to insert in secret (i.e. 
mykey=somevalue)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for generic<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--type string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The type of secret to create<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl create secret](..\/)\t - Create a secret using a specified subcommand\n","site":"kubernetes reference"}
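The record above documents how `kubectl create secret generic` packages `--from-literal` pairs into a Secret. The data values end up base64-encoded under the Secret's `.data` field; a minimal sketch of the manifest the CLI effectively builds (the helper name `secret_from_literals` is illustrative, not part of kubectl):

```python
import base64

def secret_from_literals(name, literals, secret_type="Opaque"):
    """Roughly what `kubectl create secret generic NAME --from-literal=k=v ...`
    submits: a v1 Secret whose .data values are base64-encoded strings."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": secret_type,
        "data": {
            key: base64.b64encode(value.encode()).decode()
            for key, value in literals.items()
        },
    }

# Mirrors the third example in the record above.
manifest = secret_from_literals("my-secret", {"key1": "supersecret", "key2": "topsecret"})
print(manifest["data"]["key1"])  # c3VwZXJzZWNyZXQ= (base64 of "supersecret")
```

This encoding happens client-side before the object is sent (or printed, with `--dry-run=client -o yaml`); base64 is an encoding, not encryption, so anyone who can read the Secret can decode it.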
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kubectl create quota contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl create quota\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nCreate a resource quota with the specified name, hard limits, and optional scopes.\n\n```\nkubectl create quota NAME [--hard=key1=value1,key2=value2] [--scopes=Scope1,Scope2] [--dry-run=server|client|none]\n```\n\n## Examples\n\n```\n  # Create a new resource quota named my-quota\n  kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10\n  \n  # Create a new resource quota named best-effort\n  kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--hard string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A comma-delimited set of resource=quantity pairs that define a hard limit.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for quota<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. 
This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--scopes string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A comma-delimited set of quota scopes that must all match each object tracked by the quota.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create quota content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create a resource quota with the specified name  hard limits  and optional scopes       kubectl create quota NAME    hard key1 value1 key2 value2     scopes Scope1 Scope2     dry run server client none                    Create a new resource quota named my quota   kubectl create quota my quota   hard cpu 1 memory 1G pods 2 services 3 replicationcontrollers 2 resourcequotas 1 secrets 5 persistentvolumeclaims 10        Create a new resource quota named best effort   kubectl create quota best effort   hard pods 100   scopes BestEffort               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If 
true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl create   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2    hard string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p A comma delimited set of resource quantity pairs that define a hard limit   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for quota  p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    scopes string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p A comma delimited set of quota scopes that must all match each object 
tracked by the quota   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or duplicate fields   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for 
the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration 
for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line 
height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username 
 p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create          Create a resource from a file or from stdin "}
{"questions":"kubernetes reference title kubectl create secret tls The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl create secret tls\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a TLS secret from the given public\/private key pair.\n\n The public\/private key pair must exist beforehand. The public key certificate must be .PEM encoded and match the given private key.\n\n```\nkubectl create secret tls NAME --cert=path\/to\/cert\/file --key=path\/to\/key\/file [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Create a new TLS secret named tls-secret with the given key pair\n  kubectl create secret tls tls-secret --cert=path\/to\/tls.crt --key=path\/to\/tls.key\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--append-hash<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Append a hash of the secret to its name.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cert string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to PEM encoded public key certificate.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for tls<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to private key associated with given certificate.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create secret](..\/)\t - Create a secret using a specified subcommand\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create secret tls content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create a TLS secret from the given public private key pair    The public private key pair must exist beforehand  The public key certificate must be  PEM encoded and match the given private key       kubectl create secret tls NAME   cert path to cert file   key path to key file    dry run server client none                    Create a new TLS secret named tls secret with the given key pair   kubectl create secret tls tls secret   cert path to tls crt   key path to tls key               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field 
or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    append hash  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Append a hash of the secret to its name   p   td    tr    tr   td colspan  2    cert string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to PEM encoded public key certificate   p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl create   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for tls  p   td    tr    tr   td colspan  2    key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to private key associated with given certificate   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is 
useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or duplicate fields   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130 
  word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    
p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp 
Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line 
height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create secret          Create a secret using a specified subcommand "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl create cronjob autogenerated true","answers":"---\ntitle: kubectl create cronjob\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a cron job with the specified name.\n\n```\nkubectl create cronjob NAME --image=image --schedule='0\/5 * * * ?' -- [COMMAND] [args...] [flags]\n```\n\n## \n\n```\n  # Create a cron job\n  kubectl create cronjob my-job --image=busybox --schedule=\"*\/1 * * * *\"\n  \n  # Create a cron job with a command\n  kubectl create cronjob my-job --image=busybox --schedule=\"*\/1 * * * *\" -- date\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for cronjob<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Image name to run.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--restart string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>job's restart policy. supported values: OnFailure, Never<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. 
This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--schedule string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A schedule in the Cron format the job should be run with.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create cronjob content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create a cron job with the specified name       kubectl create cronjob NAME   image image   schedule  0 5              COMMAND   args      flags                    Create a cron job   kubectl create cronjob my job   image busybox   schedule    1                 Create a cron job with a command   kubectl create cronjob my job   image busybox   schedule    1             date               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr 
  td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl create   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for cronjob  p   td    tr    tr   td colspan  2    image string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Image name to run   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    restart string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p job s restart policy  supported values  OnFailure  Never  p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    schedule string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p A schedule in the Cron format the job should be run with   p   td    tr    tr   td colspan 
 2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or duplicate fields   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify 
multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to 
every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of 
time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name 
string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create          Create a resource from a file or from stdin "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kubectl create job contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl create job\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a job with the specified name.\n\n```\nkubectl create job NAME --image=image [--from=cronjob\/name] -- [COMMAND] [args...]\n```\n\n## \n\n```\n  # Create a job\n  kubectl create job my-job --image=busybox\n  \n  # Create a job with a command\n  kubectl create job my-job --image=busybox -- date\n  \n  # Create a job from a cron job named \"a-cronjob\"\n  kubectl create job test-job --from=cronjob\/a-cronjob\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--from string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the resource to create a Job from (only cronjob is supported).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for job<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--image string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Image name to run.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. 
This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create job content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create a job with the specified name       kubectl create job NAME   image image    from cronjob name      COMMAND   args                       Create a job   kubectl create job my job   image busybox        Create a job with a command   kubectl create job my job   image busybox    date        Create a job from a cron job named  a cronjob    kubectl create job test job   from cronjob a cronjob               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p 
  td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl create   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2    from string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the resource to create a Job from  only cronjob is supported    p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for job  p   td    tr    tr   td colspan  2    image string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Image name to run   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in 
JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or duplicate fields   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to 
impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line 
height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means 
don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not 
provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create          Create a resource from a file or from stdin "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl create service clusterip autogenerated true","answers":"---\ntitle: kubectl create service clusterip\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a ClusterIP service with the specified name.\n\n```\nkubectl create service clusterip NAME [--tcp=<port>:<targetPort>] [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Create a new ClusterIP service named my-cs\n  kubectl create service clusterip my-cs --tcp=5678:8080\n  \n  # Create a new ClusterIP service named my-cs (in headless mode)\n  kubectl create service clusterip my-cs --clusterip=\"None\"\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--clusterip string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Assign your own ClusterIP or set to 'None' for a 'headless' service (no loadbalancing).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for clusterip<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. 
This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tcp strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Port pairs can be specified as '&lt;port&gt;:&lt;targetPort&gt;'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create service](..\/)\t - Create a service using a specified subcommand\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create service clusterip content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create a ClusterIP service with the specified name       kubectl create service clusterip NAME    tcp  port   targetPort      dry run server client none                    Create a new ClusterIP service named my cs   kubectl create service clusterip my cs   tcp 5678 8080        Create a new ClusterIP service named my cs  in headless mode    kubectl create service clusterip my cs   clusterip  None                table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and 
jsonpath output formats   p   td    tr    tr   td colspan  2    clusterip string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Assign your own ClusterIP or set to  None  for a  headless  service  no loadbalancing    p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl create   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for clusterip  p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    tcp strings  td    tr   tr   td   td  td 
style  line height  130   word wrap  break word    p Port pairs can be specified as   lt port gt   lt targetPort gt     p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or duplicate fields   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string 
 td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr  
  tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero 
"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl create role","answers":"---\ntitle: kubectl create role\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a role with single rule.\n\n```\nkubectl create role NAME --verb=verb --resource=resource.group\/subresource [--resource-name=resourcename] [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods\n  kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods\n  \n  # Create a role named \"pod-reader\" with ResourceName specified\n  kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod\n  \n  # Create a role named \"foo\" with API Group specified\n  kubectl create role foo --verb=get,list,watch --resource=rs.apps\n  \n  # Create a role named \"foo\" with SubResource specified\n  kubectl create role foo --verb=get,list,watch --resource=pods,pods\/status\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td 
colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for role<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--resource strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Resource that the rule applies to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--resource-name strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Resource in the white list that the rule applies to, repeat this flag for multiple items<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--verb strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Verb that applies to the resources contained in the rule<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl create rolebinding","answers":"---\ntitle: kubectl create rolebinding\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a role binding for a particular role or cluster role.\n\n```\nkubectl create rolebinding NAME --clusterrole=NAME|--role=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Create a role binding for user1, user2, and group1 using the admin cluster role\n  kubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1\n  \n  # Create a role binding for service account monitoring:sa-dev using the admin role\n  kubectl create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or 
map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--clusterrole string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>ClusterRole this RoleBinding should reference<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Groups to bind to the role. The flag can be repeated to add multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for rolebinding<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--role string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Role this RoleBinding should reference<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--serviceaccount strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Service accounts to bind to the role, in the format &lt;namespace&gt;:&lt;name&gt;. The flag can be repeated to add multiple service accounts.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Usernames to bind to the role. 
The flag can be repeated to add multiple users.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl create secret docker registry autogenerated true","answers":"---\ntitle: kubectl create secret docker-registry\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a new secret for use with Docker registries.\n        \n        Dockercfg secrets are used to authenticate against Docker registries.\n        \n        When using the Docker command line to push images, you can authenticate to a given registry by running:\n        '$ docker login DOCKER_REGISTRY_SERVER --username=DOCKER_USER --password=DOCKER_PASSWORD --email=DOCKER_EMAIL'.\n        \n That produces a ~\/.dockercfg file that is used by subsequent 'docker push' and 'docker pull' commands to authenticate to the registry. The email address is optional.\n\n        When creating applications, you may have a Docker registry that requires authentication.  In order for the\n        nodes to pull images on your behalf, they must have the credentials.  
You can provide this information\n        by creating a dockercfg secret and attaching it to your service account.\n\n```\nkubectl create secret docker-registry NAME --docker-username=user --docker-password=password --docker-email=email [--docker-server=string] [--from-file=[key=]source] [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # If you do not already have a .dockercfg file, create a dockercfg secret directly\n  kubectl create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL\n  \n  # Create a new secret named my-secret from ~\/.docker\/config.json\n  kubectl create secret docker-registry my-secret --from-file=path\/to\/.docker\/config.json\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--append-hash<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Append a hash of the secret to its name.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--docker-email string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Email for Docker registry<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--docker-password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for Docker registry authentication<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--docker-server string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"https:\/\/index.docker.io\/v1\/\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server location for Docker registry<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--docker-username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for Docker registry authentication<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--from-file strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Key files can be specified using their file path, in which case a default name of .dockerconfigjson will be given to them, or optionally with a name and file path, in which case the given name will be used. Specifying a directory will iterate each named file in the directory that is a valid secret key. For this command, the key should always be .dockerconfigjson.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for docker-registry<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. 
This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create secret](..\/)\t - Create a secret using a specified subcommand\n","site":"kubernetes reference"}
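The record above documents `kubectl create secret docker-registry`, whose `--from-file` help notes that the secret key should always be `.dockerconfigjson`. As a minimal sketch of what that key holds, the payload can be reproduced locally: it is a JSON document with an `auths` map keyed by registry server, where `auth` is the base64 of `username:password`. The credentials and server below (`alice`, `s3cret`, the Docker Hub default) are illustrative placeholders, not values from the reference.

```shell
# Sketch of the .dockerconfigjson payload a docker-registry secret stores.
# Credentials are illustrative placeholders.
DOCKER_USER=alice
DOCKER_PASSWORD=s3cret
DOCKER_SERVER=https://index.docker.io/v1/  # kubectl's default --docker-server

# The "auth" field is base64(username:password).
AUTH=$(printf '%s:%s' "$DOCKER_USER" "$DOCKER_PASSWORD" | base64)

printf '{"auths":{"%s":{"username":"%s","password":"%s","auth":"%s"}}}\n' \
  "$DOCKER_SERVER" "$DOCKER_USER" "$DOCKER_PASSWORD" "$AUTH"
```

Running the real command with `--dry-run=client -o yaml` (both flags appear in the flag table above) prints the generated Secret without sending it to the API server, which is a convenient way to inspect this payload before creating anything.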
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kubectl create service nodeport contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl create service nodeport\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a NodePort service with the specified name.\n\n```\nkubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Create a new NodePort service named my-ns\n  kubectl create service nodeport my-ns --tcp=5678:8080\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for nodeport<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--node-port int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Port used to expose the service on each node in a cluster.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. 
This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tcp strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Port pairs can be specified as '&lt;port&gt;:&lt;targetPort&gt;'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: 
break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create service](..\/)\t - Create a service using a specified subcommand\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl create serviceaccount","answers":"---\ntitle: kubectl create serviceaccount\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a service account with the specified name.\n\n```\nkubectl create serviceaccount NAME [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Create a new service account named my-service-account\n  kubectl create serviceaccount my-service-account\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. 
If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for serviceaccount<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference","answers_cleaned":"
break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed 
to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  
td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create          Create a resource from a file or from stdin "}
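The `--dry-run=client` strategy described in the flag reference above renders the object locally and never sends it to the API server. As a rough sketch of that behavior (not kubectl's actual implementation; the `service_account_manifest` helper and the `build-robot` name are illustrative), the object printed for a plain ServiceAccount has approximately this shape:

```python
import json

def service_account_manifest(name: str, namespace: str = "default") -> dict:
    """Approximate the object that a client-side dry run of
    `kubectl create serviceaccount NAME` would print: the manifest is
    built locally, so server-populated fields stay unset."""
    return {
        "apiVersion": "v1",
        "kind": "ServiceAccount",
        "metadata": {
            "name": name,
            "namespace": namespace,
            # Unset until the server actually persists the object;
            # a client-side dry run never reaches the server.
            "creationTimestamp": None,
        },
    }

manifest = service_account_manifest("build-robot")
print(json.dumps(manifest, indent=2))
```

With `--dry-run=server`, by contrast, the request is submitted and validated by the API server (including admission) but not persisted, so the printed object can include server-defaulted fields.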
{"questions":"kubernetes reference title kubectl create clusterrole The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl create clusterrole\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a cluster role.\n\n```\nkubectl create clusterrole NAME --verb=verb --resource=resource.group [--resource-name=resourcename] [--dry-run=server|client|none]\n```\n\n## \n\n```\n  # Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods\n  kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods\n  \n  # Create a cluster role named \"pod-reader\" with ResourceName specified\n  kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod\n  \n  # Create a cluster role named \"foo\" with API Group specified\n  kubectl create clusterrole foo --verb=get,list,watch --resource=rs.apps\n  \n  # Create a cluster role named \"foo\" with SubResource specified\n  kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods\/status\n  \n  # Create a cluster role name \"foo\" with NonResourceURL specified\n  kubectl create clusterrole \"foo\" --verb=get --non-resource-url=\/logs\/*\n  \n  # Create 
a cluster role name \"monitoring\" with AggregationRule specified\n  kubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com\/aggregate-to-monitoring=true\"\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--aggregation-rule &lt;comma-separated 'key=value' pairs&gt;<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>An aggregation label selector for combining ClusterRoles.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for clusterrole<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--non-resource-url strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A partial url that user should have access to.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--resource strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Resource that the rule applies to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--resource-name strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Resource in the white list that the rule applies to, repeat this flag for multiple items<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. 
This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--verb strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Verb that applies to the resources contained in the rule<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl create](..\/)\t - Create a resource from a file or from stdin\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl create clusterrole content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Create a cluster role       kubectl create clusterrole NAME   verb verb   resource resource group    resource name resourcename     dry run server client none                    Create a cluster role named  pod reader  that allows user to perform  get    watch  and  list  on pods   kubectl create clusterrole pod reader   verb get list watch   resource pods        Create a cluster role named  pod reader  with ResourceName specified   kubectl create clusterrole pod reader   verb get   resource pods   resource name readablepod   resource name anotherpod        Create a cluster role named  foo  with API Group specified   kubectl create clusterrole foo   verb get list watch   resource rs apps        Create a cluster role named  foo  with SubResource specified   kubectl create clusterrole foo   verb get list watch   resource pods pods status        Create 
a cluster role name \"foo\" with NonResourceURL specified
kubectl create clusterrole \"foo\" --verb=get --non-resource-url=/logs/*

# Create a cluster role name \"monitoring\" with AggregationRule specified
kubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"

--aggregation-rule <comma-separated 'key=value' pairs>: An aggregation label selector for combining ClusterRoles.
--allow-missing-template-keys (Default: true): If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--dry-run string[=\"unchanged\"] (Default: \"none\"): Must be \"none\", \"server\", or \"client\". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.
--field-manager string (Default: \"kubectl-create\"): Name of the manager used to track field ownership.
-h, --help: help for clusterrole
--non-resource-url strings: A partial url that user should have access to.
-o, --output string: Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).
--resource strings: Resource that the rule applies to
--resource-name strings: Resource in the white list that the rule applies to, repeat this flag for multiple items
--save-config: If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.
--show-managed-fields: If true, keep the managedFields when printing objects in JSON or YAML format.
--template string: Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
--validate string[=\"strict\"] (Default: \"strict\"): Must be one of: strict (or true), warn, ignore (or false). \"true\" or \"strict\" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not. \"warn\" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as \"ignore\" otherwise. \"false\" or \"ignore\" will not perform any schema validation, silently dropping any unknown or duplicate fields.
--verb strings: Verb that applies to the resources contained in the rule

--as string: Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
--as-group strings: Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--as-uid string: UID to impersonate for the operation.
--cache-dir string (Default: \"$HOME/.kube/cache\"): Default cache directory
--certificate-authority string: Path to a cert file for the certificate authority
--client-certificate string: Path to a client certificate file for TLS
--client-key string: Path to a client key file for TLS
--cluster string: The name of the kubeconfig cluster to use
--context string: The name of the kubeconfig context to use
--default-not-ready-toleration-seconds int (Default: 300): Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
--default-unreachable-toleration-seconds int (Default: 300): Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
--disable-compression: If true, opt-out of response compression for all requests to the server
--insecure-skip-tls-verify: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kubeconfig string: Path to the kubeconfig file to use for CLI requests.
--match-server-version: Require server version to match client version
-n, --namespace string: If present, the namespace scope for this CLI request
--password string: Password for basic authentication to the API server
--profile string (Default: \"none\"): Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)
--profile-output string (Default: \"profile.pprof\"): Name of the file to write the profile to
--request-timeout string (Default: \"0\"): The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.
-s, --server string: The address and port of the Kubernetes API server
--storage-driver-buffer-duration duration (Default: 1m0s): Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction
p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   
 p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl create          Create a resource from a file or from stdin "}
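The `--dry-run` strategies described in the kubectl create options table (client: only print the object that would be sent, without sending it; server: submit the request without persisting the resource) can be sketched as a short session. The manifest path, Pod name, and image below are hypothetical, and the kubectl invocations are shown as comments because they need the kubectl binary (and, for the server strategy, a reachable cluster):

```shell
# Minimal Pod manifest used for the demo (hypothetical name and image).
cat > /tmp/demo-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo
    image: nginx
EOF

# Client strategy: print the object that would be sent, without sending it.
#   kubectl create -f /tmp/demo-pod.yaml --dry-run=client -o yaml
#
# Server strategy: submit the request (schema validation is strict by
# default) without persisting the resource; requires a reachable cluster.
#   kubectl create -f /tmp/demo-pod.yaml --dry-run=server
echo "wrote /tmp/demo-pod.yaml"
```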
{"questions":"kubernetes reference nolist true contenttype tool reference title kubectl create weight 30 autogenerated true","answers":"---\ntitle: kubectl create\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreate a resource from a file or from stdin.\n\n JSON and YAML formats are accepted.\n\n```\nkubectl create -f FILENAME\n```\n\n## \n\n```\n  # Create a pod using the data in pod.json\n  kubectl create -f .\/pod.json\n  \n  # Create a pod based on the JSON passed into stdin\n  cat pod.json | kubectl create -f -\n  \n  # Edit the data in registry.yaml in JSON then create the resource using the edited data\n  kubectl create -f registry.yaml --edit -o json\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--edit<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Edit the API resource before creating<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-create\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files to use to create the resource<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for create<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--raw string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Raw URI to POST to the server.  Uses the transport specified by the kubeconfig file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--windows-line-endings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Only relevant if --edit=true. Defaults to the line ending native to your platform.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n* [kubectl create clusterrole](kubectl_create_clusterrole\/)\t - Create a cluster role\n* [kubectl create clusterrolebinding](kubectl_create_clusterrolebinding\/)\t - Create a cluster role binding for a particular cluster role\n* [kubectl create configmap](kubectl_create_configmap\/)\t - Create a config map from a local file, directory or literal value\n* [kubectl create cronjob](kubectl_create_cronjob\/)\t - Create a cron job with the specified name\n* [kubectl create deployment](kubectl_create_deployment\/)\t - Create a deployment with the specified name\n* [kubectl create ingress](kubectl_create_ingress\/)\t - Create an ingress with the specified name\n* [kubectl create job](kubectl_create_job\/)\t - Create a job with the specified name\n* [kubectl create namespace](kubectl_create_namespace\/)\t - Create a namespace with the specified name\n* [kubectl create poddisruptionbudget](kubectl_create_poddisruptionbudget\/)\t - Create a pod disruption budget with the specified name\n* [kubectl create priorityclass](kubectl_create_priorityclass\/)\t - Create a priority class with the specified name\n* [kubectl create quota](kubectl_create_quota\/)\t - Create a quota with the specified name\n* [kubectl create role](kubectl_create_role\/)\t - Create a role with single rule\n* [kubectl create rolebinding](kubectl_create_rolebinding\/)\t - Create a role binding for a particular role or cluster role\n* [kubectl create secret](kubectl_create_secret\/)\t - Create a secret using a specified subcommand\n* [kubectl create service](kubectl_create_service\/)\t - Create a service using 
a specified subcommand\n* [kubectl create serviceaccount](kubectl_create_serviceaccount\/)\t - Create a service account with the specified name\n* [kubectl create token](kubectl_create_token\/)\t - Request a service account token\n","site":"kubernetes reference","answers_cleaned":"
a service account with the specified name    kubectl create token  kubectl create token      Request a service account token "}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl api versions","answers":"---\ntitle: kubectl api-versions\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nPrint the supported API versions on the server, in the form of \"group\/version\".\n\n```\nkubectl api-versions\n```\n\n## \n\n```\n  # Print the supported API versions\n  kubectl api-versions\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for api-versions<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference title kubectl exec weight 30 autogenerated true","answers":"---\ntitle: kubectl exec\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nExecute a command in a container.\n\n```\nkubectl exec (POD | TYPE\/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]\n```\n\n## \n\n```\n  # Get output from running the 'date' command from pod mypod, using the first container by default\n  kubectl exec mypod -- date\n  \n  # Get output from running the 'date' command in ruby-container from pod mypod\n  kubectl exec mypod -c ruby-container -- date\n  \n  # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod\n  # and sends stdout\/stderr from 'bash' back to the client\n  kubectl exec mypod -c ruby-container -i -t -- bash -il\n  \n  # List contents of \/usr from the first container of pod mypod and sort by modification time\n  # If the command you want to execute in the pod has any flags in common (e.g. 
-i),\n  # you must use two dashes (--) to separate your command's flags\/arguments\n  # Also note, do not surround your command and its flags\/arguments with quotes\n  # unless that is how you would execute it normally (i.e., do ls -t \/usr, not \"ls -t \/usr\")\n  kubectl exec mypod -i -t -- ls -t \/usr\n  \n  # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default\n  kubectl exec deploy\/mydeployment -- date\n  \n  # Get output from running 'date' command from the first pod of the service myservice, using the first container by default\n  kubectl exec svc\/myservice -- date\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-c, --container string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Container name. If omitted, use the kubectl.kubernetes.io\/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>to use to exec into the resource<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for exec<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-running-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-q, --quiet<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Only print output from the remote 
session<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-i, --stdin<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Pass stdin to the container<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-t, --tty<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Stdin is a TTY<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl exec content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Execute a command in a container       kubectl exec  POD   TYPE NAME    c CONTAINER   flags     COMMAND  args                       Get output from running the  date  command from pod mypod  using the first container by default   kubectl exec mypod    date        Get output from running the  date  command in ruby container from pod mypod   kubectl exec mypod  c ruby container    date        Switch to raw terminal mode  sends stdin to  bash  in ruby container from pod mypod     and sends stdout stderr from  bash  back to the client   kubectl exec mypod  c ruby container  i  t    bash  il        List contents of  usr from the first container of pod mypod and sort by modification time     If the command you want to execute in the pod has any flags in common  e g   i       you must use two dashes      to separate your command s flags arguments     
Also note  do not surround your command and its flags arguments with quotes     unless that is how you would execute it normally  i e   do ls  t  usr  not  ls  t  usr     kubectl exec mypod  i  t    ls  t  usr        Get output from running  date  command from the first pod of the deployment mydeployment  using the first container by default   kubectl exec deploy mydeployment    date        Get output from running  date  command from the first pod of the service myservice  using the first container by default   kubectl exec svc myservice    date               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   c    container string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Container name  If omitted  use the kubectl kubernetes io default container annotation for selecting the container to be attached or the first container in the pod will be chosen  p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p to use to exec into the resource  p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for exec  p   td    tr    tr   td colspan  2    pod running timeout duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time  like 5s  2m  or 3h  higher than zero  to wait until at least one pod is running  p   td    tr    tr   td colspan  2   q    quiet  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Only print output from the remote session  p   td    tr    tr   td colspan  2   i    stdin  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Pass stdin to the container  p   td    tr    tr   td colspan  2   t    tty  
td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Stdin is a TTY  p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word   
 p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the 
API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style 
 line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             
kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl patch autogenerated true","answers":"---\ntitle: kubectl patch\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nUpdate fields of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.\n\n JSON and YAML formats are accepted.\n\n Note: Strategic merge patch is not supported for custom resources.\n\n```\nkubectl patch (-f FILENAME | TYPE NAME) [-p PATCH|--patch-file FILE]\n```\n\n## \n\n```\n  # Partially update a node using a strategic merge patch, specifying the patch as JSON\n  kubectl patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}'\n  \n  # Partially update a node using a strategic merge patch, specifying the patch as YAML\n  kubectl patch node k8s-node-1 -p $'spec:\\n unschedulable: true'\n  \n  # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch\n  kubectl patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}'\n  \n  # Update a container's image; spec.containers[*].name is required because it's a merge key\n  kubectl patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}'\n  \n  # Update a container's image using a JSON patch with positional arrays\n  
kubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"\/spec\/containers\/0\/image\", \"value\":\"new image\"}]'\n  \n  # Update a deployment's replicas through the 'scale' subresource using a merge patch\n  kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-patch\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to update<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for patch<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--local<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, patch will operate on the content of the file, not the server-side resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-p, --patch string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The patch to be applied to the resource JSON file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--patch-file string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>A file containing a patch to be applied to the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--subresource string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If specified, patch will operate on the subresource of the requested object. Must be one of [status scale]. This flag is beta and may change in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--type string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strategic\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The type of patch being provided; one of [json merge strategic]<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key 
string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl patch content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Update fields of a resource using strategic merge patch  a JSON merge patch  or a JSON patch    JSON and YAML formats are accepted    Note  Strategic merge patch is not supported for custom resources       kubectl patch   f FILENAME   TYPE NAME    p PATCH   patch file FILE                    Partially update a node using a strategic merge patch  specifying the patch as JSON   kubectl patch node k8s node 1  p    spec    unschedulable  true           Partially update a node using a strategic merge patch  specifying the patch as YAML   kubectl patch node k8s node 1  p   spec  n unschedulable  true         Partially update a node identified by the type and name specified in  node json  using strategic merge patch   kubectl patch  f node json  p    spec    unschedulable  true           Update a container s image  spec containers    name is 
```
  # ...is required because it's a merge key
  kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'

  # Update a container's image using a JSON patch with positional arrays
  kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'

  # Update a deployment's replicas through the 'scale' subresource using a merge patch
  kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}'
```

## Options

| Flag | Default | Description |
|------|---------|-------------|
| --allow-missing-template-keys | true | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. |
| --dry-run string[="unchanged"] | "none" | Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. |
| --field-manager string | "kubectl-patch" | Name of the manager used to track field ownership. |
| -f, --filename strings | | Filename, directory, or URL to files identifying the resource to update |
| -h, --help | | help for patch |
| -k, --kustomize string | | Process the kustomization directory. This flag can't be used together with -f or -R. |
| --local | | If true, patch will operate on the content of the file, not the server-side resource. |
| -o, --output string | | Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file). |
| -p, --patch string | | The patch to be applied to the resource JSON file. |
| --patch-file string | | A file containing a patch to be applied to the resource. |
| -R, --recursive | | Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory. |
| --show-managed-fields | | If true, keep the managedFields when printing objects in JSON or YAML format. |
| --subresource string | | If specified, patch will operate on the subresource of the requested object. Must be one of [status scale]. This flag is beta and may change in the future. |
| --template string | | Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview]. |
| --type string | "strategic" | The type of patch being provided; one of [json merge strategic] |

## Options inherited from parent commands

| Flag | Default | Description |
|------|---------|-------------|
| --as string | | Username to impersonate for the operation. User could be a regular user or a service account in a namespace. |
| --as-group strings | | Group to impersonate for the operation, this flag can be repeated to specify multiple groups. |
| --as-uid string | | UID to impersonate for the operation. |
| --cache-dir string | "$HOME/.kube/cache" | Default cache directory |
| --certificate-authority string | | Path to a cert file for the certificate authority |
| --client-certificate string | | Path to a client certificate file for TLS |
| --client-key string | | Path to a client key file for TLS |
| --cluster string | | The name of the kubeconfig cluster to use |
| --context string | | The name of the kubeconfig context to use |
| --default-not-ready-toleration-seconds int | 300 | Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. |
| --default-unreachable-toleration-seconds int | 300 | Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. |
| --disable-compression | | If true, opt-out of response compression for all requests to the server |
| --insecure-skip-tls-verify | | If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure |
| --kubeconfig string | | Path to the kubeconfig file to use for CLI requests. |
| --match-server-version | | Require server version to match client version |
| -n, --namespace string | | If present, the namespace scope for this CLI request |
| --password string | | Password for basic authentication to the API server |
| --profile string | "none" | Name of profile to capture. One of (none\|cpu\|heap\|goroutine\|threadcreate\|block\|mutex) |
| --profile-output string | "profile.pprof" | Name of the file to write the profile to |
| --request-timeout string | "0" | The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. |
| -s, --server string | | The address and port of the Kubernetes API server |
| --storage-driver-buffer-duration duration | 1m0s | Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction |
| --storage-driver-db string | "cadvisor" | database name |
| --storage-driver-host string | "localhost:8086" | database host:port |
| --storage-driver-password string | "root" | database password |
| --storage-driver-secure | | use secure connection with database |
| --storage-driver-table string | "stats" | table name |
| --storage-driver-user string | "root" | database username |
| --tls-server-name string | | Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used |
| --token string | | Bearer token for authentication to the API server |
| --user string | | The name of the kubeconfig user to use |
| --username string | | Username for basic authentication to the API server |
| --version version[=true] | | --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version |
| --warnings-as-errors | | Treat warnings received from the server as errors and exit with a non-zero exit code |

## See Also

* [kubectl](../kubectl/) - kubectl controls the Kubernetes cluster manager
"}
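The examples above hinge on the difference between patch types: with the default strategic merge patch, list items such as `spec.containers` are merged by their merge key (`name`), whereas `--type merge` (RFC 7386 JSON merge patch) replaces a list wholesale. A minimal sketch of that distinction, assuming simplified helpers that are NOT kubectl's actual implementation:

```python
# Illustrative only: contrasts RFC 7386 JSON merge patch (`--type merge`)
# with a simplified strategic merge on a containers list keyed by `name`.

def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON merge patch: dicts merge key-by-key,
    every other value (including lists) replaces the target outright."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null deletes the key
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

def strategic_merge_containers(target, patch):
    """Merge two container lists by the `name` merge key (simplified:
    matching entries are updated field-by-field, others are kept)."""
    merged = {c["name"]: dict(c) for c in target}
    for p in patch:
        merged.setdefault(p["name"], {}).update(p)
    return list(merged.values())

spec = {"containers": [
    {"name": "serve-hostname", "image": "old-image"},
    {"name": "sidecar", "image": "sidecar:v1"},
]}
patch = {"containers": [{"name": "serve-hostname", "image": "new-image"}]}

# JSON merge patch replaces the whole containers list: the sidecar is lost.
plain = json_merge_patch(spec, patch)
# Strategic merge keeps the sidecar and only updates the matching container.
strategic = strategic_merge_containers(spec["containers"], patch["containers"])
```

This is why the first example's comment stresses that `name` is required: without the merge key, a strategic merge patch could not tell which list entry to update.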
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl drain autogenerated true","answers":"---\ntitle: kubectl drain\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nDrain node in preparation for maintenance.\n\n The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts the pods if the API server supports [eviction](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/disruptions\/). Otherwise, it will use normal DELETE to delete the pods. The 'drain' evicts or deletes all pods except mirror pods (which cannot be deleted through the API server).  If there are daemon set-managed pods, drain will not proceed without --ignore-daemonsets, and regardless it will not delete any daemon set-managed pods, because those pods would be immediately replaced by the daemon set controller, which ignores unschedulable markings.  If there are any pods that are neither mirror pods nor managed by a replication controller, replica set, daemon set, stateful set, or job, then drain will not delete any pods unless you use --force.  --force will also allow deletion to proceed if the managing resource of one or more pods is missing.\n\n 'drain' waits for graceful termination. 
You should not operate on the machine until the command completes.\n\n When you are ready to put the node back into service, use kubectl uncordon, which will make the node schedulable again.\n\n![Workflow](https:\/\/kubernetes.io\/images\/docs\/kubectl_drain.svg)\n\n```\nkubectl drain NODE\n```\n\n## Examples\n\n```\n  # Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it\n  kubectl drain foo --force\n  \n  # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes\n  kubectl drain foo --grace-period=900\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--chunk-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 500<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Return large lists in chunks rather than all at once. Pass 0 to disable. This flag is beta and may change in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--delete-emptydir-data<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Continue even if there are pods using emptyDir (local data that will be deleted when the node is drained).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-eviction<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Force drain to use delete, even if eviction is supported. 
This will bypass checking PodDisruptionBudgets, use with caution.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--force<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Continue even if there are pods that do not declare a controller.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--grace-period int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Period of time in seconds given to each pod to terminate gracefully. If negative, the default value specified in the pod will be used.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for drain<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--ignore-daemonsets<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Ignore DaemonSet-managed pods.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Label selector to filter pods on the node<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). 
Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--skip-wait-for-delete-timeout int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If pod DeletionTimestamp older than N seconds, skip waiting for the pod.  Seconds must be greater than 0 to skip.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--timeout duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up, zero means infinite<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate 
string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     
tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
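The flag tables above belong to `kubectl drain`. As a minimal sketch of how those flags combine in practice (the node name `worker-1` is a placeholder, and the command is echoed rather than executed so the sketch runs without a cluster):

```shell
#!/bin/sh
# Hypothetical node to drain; substitute a real node name from `kubectl get nodes`.
node="worker-1"

# --ignore-daemonsets: tolerate DaemonSet-managed pods, which cannot be evicted
# --grace-period=120:  give each pod up to 120 seconds to terminate gracefully
# --timeout=5m:        give up if draining takes longer than five minutes
# --dry-run=server:    submit a server-side request without persisting the change
drain_cmd="kubectl drain $node --ignore-daemonsets --grace-period=120 --timeout=5m --dry-run=server"

# Print the command instead of running it; drop the echo to drain for real.
echo "$drain_cmd"
```

Dropping `--dry-run=server` performs the actual eviction; `--grace-period=-1` (the default) defers to each pod's own terminationGracePeriodSeconds.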
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl label autogenerated true","answers":"---\ntitle: kubectl label\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nUpdate the labels on a resource.\n\n  *  A label key and value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters each.\n  *  Optionally, the key can begin with a DNS subdomain prefix and a single '\/', like example.com\/my-app.\n  *  If --overwrite is true, then existing labels can be overwritten, otherwise attempting to overwrite a label will result in an error.\n  *  If --resource-version is specified, then updates will use this resource version, otherwise the existing resource-version will be used.\n\n```\nkubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--resource-version=version]\n```\n\n## \n\n```\n  # Update pod 'foo' with the label 'unhealthy' and the value 'true'\n  kubectl label pods foo unhealthy=true\n  \n  # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value\n  kubectl label --overwrite pods foo status=unhealthy\n  \n  # Update all pods in the namespace\n  kubectl label pods --all status=unhealthy\n  \n  # Update a pod identified by the type and name in \"pod.json\"\n  kubectl label -f pod.json status=unhealthy\n  \n  # Update pod 'foo' only if the resource is unchanged from version 1\n  kubectl label pods foo status=unhealthy --resource-version=1\n  \n  # Update pod 'foo' by removing a label named 'bar' if it exists\n  # Does not require the --overwrite flag\n  kubectl label pods foo bar-\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all resources, in the namespace of the specified resource types<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-A, --all-namespaces<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, check the specified action in all namespaces.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-label\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to update the labels<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for label<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. 
This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--list<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, display the labels for a given resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--local<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, label will NOT contact api-server but run locally.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--overwrite<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, allow labels to be overwritten, otherwise reject label updates that overwrite existing labels.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--resource-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If non-empty, the labels update will only succeed if this is the current resource-version for the object. Only valid when specifying a single resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). 
Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"
title: kubectl label | content_type: tool-reference | weight: 30 | auto_generated: true | no_list: true

The file is auto-generated from the Go source code of the component using a generic generator (https://github.com/kubernetes-sigs/reference-docs/). To learn how to generate the reference documentation, please read Contributing to the reference documentation (/docs/contribute/generate-ref-docs/). To update the reference content, please follow the Contributing upstream (/docs/contribute/generate-ref-docs/contribute-upstream/) guide. You can file document formatting bugs against the reference-docs (https://github.com/kubernetes-sigs/reference-docs/) project.

Update the labels on a resource.

* A label key and value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters each.
* Optionally, the key can begin with a DNS subdomain prefix and a single '/', like example.com/my-app.
* If --overwrite is true, then existing labels can be overwritten, otherwise attempting to overwrite a label will result in an error.
* If --resource-version is specified, then updates will use this resource version, otherwise the existing resource-version will be used.

kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version]

# Update pod 'foo' with the label 'unhealthy' and the value 'true'
kubectl label pods foo unhealthy=true

# Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value
kubectl label --overwrite pods foo status=unhealthy

# Update all pods in the namespace
kubectl label pods --all status=unhealthy

# Update a pod identified by the type and name in "pod.json"
kubectl label -f pod.json status=unhealthy

# Update pod 'foo' only if the resource is unchanged from version 1
kubectl label pods foo status=unhealthy --resource-version=1

# Update pod 'foo' by removing a label named 'bar' if it exists; does not require the --overwrite flag
kubectl label pods foo bar-

| Flag | Description |
|------|-------------|
| --all | Select all resources, in the namespace of the specified resource types. |
| -A, --all-namespaces | If true, check the specified action in all namespaces. |
| --allow-missing-template-keys (Default: true) | If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. |
| --dry-run string[="unchanged"] (Default: "none") | Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. |
| --field-manager string (Default: "kubectl-label") | Name of the manager used to track field ownership. |
| --field-selector string | Selector (field query) to filter on; supports '=', '==', and '!=' (e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type. |
| -f, --filename strings | Filename, directory, or URL to files identifying the resource to update the labels. |
| -h, --help | Help for label. |
| -k, --kustomize string | Process the kustomization directory. This flag can't be used together with -f or -R. |
| --list | If true, display the labels for a given resource. |
| --local | If true, label will NOT contact api-server but run locally. |
| -o, --output string | Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file). |
| --overwrite | If true, allow labels to be overwritten, otherwise reject label updates that overwrite existing labels. |
| -R, --recursive | Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory. |
| --resource-version string | If non-empty, the labels update will only succeed if this is the current resource-version for the object. Only valid when specifying a single resource. |
| -l, --selector string | Selector (label query) to filter on; supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. |
| --show-managed-fields | If true, keep the managedFields when printing objects in JSON or YAML format. |
| --template string | Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview]. |

| Inherited flag | Description |
|----------------|-------------|
| --as string | Username to impersonate for the operation. User could be a regular user or a service account in a namespace. |
| --as-group strings | Group to impersonate for the operation; this flag can be repeated to specify multiple groups. |
| --as-uid string | UID to impersonate for the operation. |
| --cache-dir string (Default: "$HOME/.kube/cache") | Default cache directory. |
| --certificate-authority string | Path to a cert file for the certificate authority. |
| --client-certificate string | Path to a client certificate file for TLS. |
| --client-key string | Path to a client key file for TLS. |
| --cluster string | The name of the kubeconfig cluster to use. |
| --context string | The name of the kubeconfig context to use. |
| --default-not-ready-toleration-seconds int (Default: 300) | Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration. |
| --default-unreachable-toleration-seconds int (Default: 300) | Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration. |
| --disable-compression | If true, opt out of response compression for all requests to the server. |
| --insecure-skip-tls-verify | If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. |
| --kubeconfig string | Path to the kubeconfig file to use for CLI requests. |
| --match-server-version | Require server version to match client version. |
| -n, --namespace string | If present, the namespace scope for this CLI request. |
| --password string | Password for basic authentication to the API server. |
| --profile string (Default: "none") | Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex). |
| --profile-output string (Default: "profile.pprof") | Name of the file to write the profile to. |
| --request-timeout string (Default: "0") | The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. |
| -s, --server string | The address and port of the Kubernetes API server. |
| --storage-driver-buffer-duration duration (Default: 1m0s) | Writes in the storage driver will be buffered for this duration, and committed to the non-memory backends as a single transaction. |
| --storage-driver-db string (Default: "cadvisor") | Database name. |
| --storage-driver-host string (Default: "localhost:8086") | Database host:port. |
| --storage-driver-password string (Default: "root") | Database password. |
| --storage-driver-secure | Use secure connection with database. |
| --storage-driver-table string (Default: "stats") | Table name. |
| --storage-driver-user string (Default: "root") | Database username. |
| --tls-server-name string |
tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference title kubectl auth whoami weight 30 autogenerated true","answers":"---\ntitle: kubectl auth whoami\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nExperimental: Check who you are and your attributes (groups, extra).\n\n        This command is helpful to get yourself aware of the current user attributes,\n        especially when dynamic authentication, e.g., token webhook, auth proxy, or OIDC provider,\n        is enabled in the Kubernetes cluster.\n\n```\nkubectl auth whoami\n```\n\n## Examples\n\n```\n  # Get your subject attributes\n  kubectl auth whoami\n  \n  # Get your subject attributes in JSON format\n  kubectl auth whoami -o json\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for whoami<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl auth](..\/)\t - Inspect authorization\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl auth whoami content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Experimental  Check who you are and your attributes  groups  extra            This command is helpful to get yourself aware of the current user attributes          especially when dynamic authentication  e g   token webhook  auth proxy  or OIDC provider          is enabled in the Kubernetes cluster       kubectl auth whoami                   Get your subject attributes   kubectl auth whoami        Get your subject attributes in JSON format   kubectl auth whoami  o json               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  
Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for whoami  p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   
word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  
td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes 
API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer 
token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl auth          Inspect authorization "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl auth reconcile","answers":"---\ntitle: kubectl auth reconcile\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nReconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects.\n\n Missing objects are created, and the containing namespace is created for namespaced objects, if required.\n\n Existing roles are updated to include the permissions in the input objects, and remove extra permissions if --remove-extra-permissions is specified.\n\n Existing bindings are updated to include the subjects in the input objects, and remove extra subjects if --remove-extra-subjects is specified.\n\n This is preferred to 'apply' for RBAC resources so that semantically-aware merging of rules and subjects is done.\n\n```\nkubectl auth reconcile -f FILENAME\n```\n\n## Examples\n\n```\n  # Reconcile RBAC resources from a file\n  kubectl auth reconcile -f my-rbac-rules.yaml\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to reconcile.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for reconcile<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. 
Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--remove-extra-permissions<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, removes extra permissions added to roles<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--remove-extra-subjects<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, removes extra subjects added to rolebindings<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl auth](..\/)\t - Inspect authorization\n","site":"kubernetes reference"}
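The `kubectl auth can-i` reference that follows reports whether an action is allowed for the current (or impersonated) identity. Under the hood, the command submits a SelfSubjectAccessReview to the API server and reads back `status.allowed`. As a minimal sketch, the snippet below builds the same request body for one of the documented examples ("can I list deployments.apps in namespace prod?"); the helper name and the field values are illustrative, not part of any kubectl API.

```python
import json

def self_subject_access_review(verb, resource, group="", namespace=""):
    """Build the SelfSubjectAccessReview body that backs `kubectl auth can-i`.

    Empty group means the core API group; empty namespace means a
    cluster-scoped (or all-namespaces) check.
    """
    attrs = {"verb": verb, "resource": resource}
    if group:
        attrs["group"] = group
    if namespace:
        attrs["namespace"] = namespace
    return {
        "apiVersion": "authorization.k8s.io/v1",
        "kind": "SelfSubjectAccessReview",
        "spec": {"resourceAttributes": attrs},
    }

# Equivalent of: kubectl auth can-i list deployments.apps -n prod
body = self_subject_access_review("list", "deployments", group="apps", namespace="prod")
print(json.dumps(body, indent=2))
```

A client would POST this body to `/apis/authorization.k8s.io/v1/selfsubjectaccessreviews`; the server fills in `status.allowed`, which is what `can-i` prints as yes/no (and, with `-q`, reflects in its exit code).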
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl auth can i autogenerated true","answers":"---\ntitle: kubectl auth can-i\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCheck whether an action is allowed.\n\n VERB is a logical Kubernetes API verb like 'get', 'list', 'watch', 'delete', etc. TYPE is a Kubernetes resource. Shortcuts and groups will be resolved. NONRESOURCEURL is a partial URL that starts with \"\/\". NAME is the name of a particular Kubernetes resource. This command pairs nicely with impersonation. 
See --as global flag.\n\n```\nkubectl auth can-i VERB [TYPE | TYPE\/NAME | NONRESOURCEURL]\n```\n\n## \n\n```\n  # Check to see if I can create pods in any namespace\n  kubectl auth can-i create pods --all-namespaces\n  \n  # Check to see if I can list deployments in my current namespace\n  kubectl auth can-i list deployments.apps\n  \n  # Check to see if service account \"foo\" of namespace \"dev\" can list pods in the namespace \"prod\"\n  # You must be allowed to use impersonation for the global option \"--as\"\n  kubectl auth can-i list pods --as=system:serviceaccount:dev:foo -n prod\n  \n  # Check to see if I can do everything in my current namespace (\"*\" means all)\n  kubectl auth can-i '*' '*'\n  \n  # Check to see if I can get the job named \"bar\" in namespace \"foo\"\n  kubectl auth can-i list jobs.batch\/bar -n foo\n  \n  # Check to see if I can read pod logs\n  kubectl auth can-i get pods --subresource=log\n  \n  # Check to see if I can access the URL \/logs\/\n  kubectl auth can-i get \/logs\/\n  \n  # Check to see if I can approve certificates.k8s.io\n  kubectl auth can-i approve certificates.k8s.io\n  \n  # List all allowed actions in namespace \"foo\"\n  kubectl auth can-i --list --namespace=foo\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-A, --all-namespaces<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, check the specified action in all namespaces.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for can-i<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--list<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, prints all allowed actions.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--no-headers<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, prints allowed actions without headers<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-q, --quiet<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, suppress output and just return the exit code.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--subresource string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>SubResource such as pod\/log or deployment\/scale<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate 
string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl auth](..\/)\t - Inspect authorization\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl auth can i content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Check whether an action is allowed    VERB is a logical Kubernetes API verb like  get    list    watch    delete   etc  TYPE is a Kubernetes resource  Shortcuts and groups will be resolved  NONRESOURCEURL is a partial URL that starts with      NAME is the name of a particular Kubernetes resource  This command pairs nicely with impersonation  See   as global flag       kubectl auth can i VERB  TYPE   TYPE NAME   NONRESOURCEURL                    Check to see if I can create pods in any namespace   kubectl auth can i create pods   all namespaces        Check to see if I can list deployments in my current namespace   kubectl auth can i list deployments apps        Check to see if service account  foo  of namespace  dev  can list pods in the namespace  prod      You must be allowed to use impersonation for the global option    as    kubectl auth can i list pods   as system 
serviceaccount dev foo  n prod        Check to see if I can do everything in my current namespace      means all    kubectl auth can i                Check to see if I can get the job named  bar  in namespace  foo    kubectl auth can i list jobs batch bar  n foo        Check to see if I can read pod logs   kubectl auth can i get pods   subresource log        Check to see if I can access the URL  logs    kubectl auth can i get  logs         Check to see if I can approve certificates k8s io   kubectl auth can i approve certificates k8s io        List all allowed actions in namespace  foo    kubectl auth can i   list   namespace foo               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   A    all namespaces  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  check the specified action in all namespaces   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for can i  p   td    tr    tr   td colspan  2    list  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  prints all allowed actions   p   td    tr    tr   td colspan  2    no headers  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  prints allowed actions without headers  p   td    tr    tr   td colspan  2   q    quiet  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  suppress output and just return the exit code   p   td    tr    tr   td colspan  2    subresource string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p SubResource such as pod log or deployment scale  p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       
colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates 
the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine 
threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td 
   tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl auth          Inspect authorization "}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl auth autogenerated true","answers":"---\ntitle: kubectl auth\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nInspect authorization.\n\n```\nkubectl auth [flags]\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for auth<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n* [kubectl auth can-i](kubectl_auth_can-i\/)\t - Check whether an action is allowed\n* [kubectl auth reconcile](kubectl_auth_reconcile\/)\t - Reconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects\n* [kubectl auth whoami](kubectl_auth_whoami\/)\t - Experimental: Check self subject attributes\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl auth content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Inspect authorization       kubectl auth  flags                table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for auth  p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td 
colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of 
the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   
td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    
storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager    kubectl auth can i  kubectl auth can i      Check whether an action is allowed    kubectl auth reconcile  kubectl auth reconcile      Reconciles rules for RBAC role  role binding  cluster role  and cluster 
role binding objects    kubectl auth whoami  kubectl auth whoami      Experimental  Check self subject attributes "}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl cp","answers":"---\ntitle: kubectl cp\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCopy files and directories to and from containers.\n\n```\nkubectl cp <file-spec-src> <file-spec-dest>\n```\n\n## \n\n```\n  # !!!Important Note!!!\n  # Requires that the 'tar' binary is present in your container\n  # image.  
If 'tar' is not present, 'kubectl cp' will fail.\n  #\n  # For advanced use cases, such as symlinks, wildcard expansion or\n  # file mode preservation, consider using 'kubectl exec'.\n  \n  # Copy \/tmp\/foo local file to \/tmp\/bar in a remote pod in namespace <some-namespace>\n  tar cf - \/tmp\/foo | kubectl exec -i -n <some-namespace> <some-pod> -- tar xf - -C \/tmp\/bar\n  \n  # Copy \/tmp\/foo from a remote pod to \/tmp\/bar locally\n  kubectl exec -n <some-namespace> <some-pod> -- tar cf - \/tmp\/foo | tar xf - -C \/tmp\/bar\n  \n  # Copy \/tmp\/foo_dir local directory to \/tmp\/bar_dir in a remote pod in the default namespace\n  kubectl cp \/tmp\/foo_dir <some-pod>:\/tmp\/bar_dir\n  \n  # Copy \/tmp\/foo local file to \/tmp\/bar in a remote pod in a specific container\n  kubectl cp \/tmp\/foo <some-pod>:\/tmp\/bar -c <specific-container>\n  \n  # Copy \/tmp\/foo local file to \/tmp\/bar in a remote pod in namespace <some-namespace>\n  kubectl cp \/tmp\/foo <some-namespace>\/<some-pod>:\/tmp\/bar\n  \n  # Copy \/tmp\/foo from a remote pod to \/tmp\/bar locally\n  kubectl cp <some-namespace>\/<some-pod>:\/tmp\/foo \/tmp\/bar\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-c, --container string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Container name. 
If omitted, use the kubectl.kubernetes.io\/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for cp<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--no-preserve<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The copied file\/directory's ownership and permissions will not be preserved in the container<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--retries int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Set number of retries to complete a copy operation from a container. Specify 0 to disable or any negative value for infinite retrying. The default is 0 (no retry).<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
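The `kubectl cp` page above notes that copies require a `tar` binary in the container, and its examples stream an archive through `kubectl exec`. The underlying pipe pattern can be tried locally with plain `tar` alone — a minimal sketch, no cluster or `kubectl` needed (the temp directories stand in for the local and pod filesystems):

```shell
# Local stand-ins for the source (local) and destination (pod) sides.
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/foo"

# Same shape as: tar cf - /tmp/foo | kubectl exec -i <pod> -- tar xf - -C /tmp/bar
# Create the archive on one side, extract it on the other.
tar -C "$src" -cf - foo | tar -xf - -C "$dst"

cat "$dst/foo"   # prints: hello

rm -rf "$src" "$dst"
```

Because the pod side only ever sees a tar stream on stdin/stdout, this is also why `--no-preserve` and symlink handling are limited: they depend on what the container's `tar` supports.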
{"questions":"kubernetes reference title kubectl port forward nolist true contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl port-forward\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nForward one or more local ports to a pod.\n\n Use resource type\/name such as deployment\/mydeployment to select a pod. Resource type defaults to 'pod' if omitted.\n\n If there are multiple pods matching the criteria, a pod will be selected automatically. 
The forwarding session ends when the selected pod terminates, and a rerun of the command is needed to resume forwarding.\n\n```\nkubectl port-forward TYPE\/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]\n```\n\n## \n\n```\n  # Listen on ports 5000 and 6000 locally, forwarding data to\/from ports 5000 and 6000 in the pod\n  kubectl port-forward pod\/mypod 5000 6000\n  \n  # Listen on ports 5000 and 6000 locally, forwarding data to\/from ports 5000 and 6000 in a pod selected by the deployment\n  kubectl port-forward deployment\/mydeployment 5000 6000\n  \n  # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service\n  kubectl port-forward service\/myservice 8443:https\n  \n  # Listen on port 8888 locally, forwarding to 5000 in the pod\n  kubectl port-forward pod\/mypod 8888:5000\n  \n  # Listen on port 8888 on all addresses, forwarding to 5000 in the pod\n  kubectl port-forward --address 0.0.0.0 pod\/mypod 8888:5000\n  \n  # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod\n  kubectl port-forward --address localhost,10.19.21.23 pod\/mypod 8888:5000\n  \n  # Listen on a random port locally, forwarding to 5000 in the pod\n  kubectl port-forward pod\/mypod :5000\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--address strings&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Addresses to listen on (comma separated). Only accepts IP addresses or localhost as a value. 
When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1 and will fail if neither of these addresses are available to bind.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for port-forward<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--pod-running-timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl replace autogenerated true","answers":"---\ntitle: kubectl replace\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nReplace a resource by file name or stdin.\n\n JSON and YAML formats are accepted. If replacing an existing resource, the complete resource spec must be provided. 
This can be obtained by\n\n        $ kubectl get TYPE NAME -o yaml\n\n```\nkubectl replace -f FILENAME\n```\n\n## \n\n```\n  # Replace a pod using the data in pod.json\n  kubectl replace -f .\/pod.json\n  \n  # Replace a pod based on the JSON passed into stdin\n  cat pod.json | kubectl replace -f -\n  \n  # Update a single-container pod's image version (tag) to v4\n  kubectl get pod mypod -o yaml | sed 's\/\\(image: myimage\\):.*$\/\\1:v4\/' | kubectl replace -f -\n  \n  # Force replace, delete and then re-create the resource\n  kubectl replace --force -f .\/pod.json\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cascade string[=\"background\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"background\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;background&quot;, &quot;orphan&quot;, or &quot;foreground&quot;. Selects the deletion cascading strategy for the dependents (e.g. Pods created by a ReplicationController). Defaults to background.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-replace\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The files that contain the configurations to replace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--force<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--grace-period int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for replace<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process a kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--raw string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Raw URI to PUT to the server.  Uses the transport specified by the kubeconfig file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--save-config<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--subresource string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If specified, replace will operate on the subresource of the requested object. Must be one of [status scale]. This flag is beta and may change in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--timeout duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--wait<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, wait for resources to be gone before returning. This waits for finalizers.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl replace content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Replace a resource by file name or stdin    JSON and YAML formats are accepted  If replacing an existing resource  the complete resource spec must be provided  This can be obtained by            kubectl get TYPE NAME  o yaml      kubectl replace  f FILENAME                   Replace a pod using the data in pod json   kubectl replace  f   pod json        Replace a pod based on the JSON passed into stdin   cat pod json   kubectl replace  f          Update a single container pod s image version  tag  to v4   kubectl get pod mypod  o yaml   sed  s   image  myimage        1 v4     kubectl replace  f          Force replace  delete and then re create the resource   kubectl replace   force  f   pod json               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody 
   tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    cascade string   background   nbsp  nbsp  nbsp  nbsp  nbsp Default   background   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot background quot    quot orphan quot   or  quot foreground quot   Selects the deletion cascading strategy for the dependents  e g  Pods created by a ReplicationController   Defaults to background   p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl replace   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The files that contain the configurations to replace   p   td    tr    tr   td colspan  2    force  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  immediately remove resources from API and bypass graceful deletion  Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation   p   td    tr    tr   td colspan  2    grace period int nbsp  nbsp  nbsp  
nbsp  nbsp Default   1  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Period of time in seconds given to the resource to terminate gracefully  Ignored if negative  Set to 1 for immediate shutdown  Can only be set to 0 when   force is true  force deletion    p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for replace  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process a kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    raw string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Raw URI to PUT to the server   Uses the transport specified by the kubeconfig file   p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2    save config  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the configuration of current object will be saved in its annotation  Otherwise  the annotation will be unchanged  This flag is useful when you want to perform kubectl apply on this object in the future   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML 
--subresource string: If specified, replace will operate on the subresource of the requested object. Must be one of [status scale]. This flag is beta and may change in the future.
--template string: Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates (http://golang.org/pkg/text/template/#pkg-overview).
--timeout duration: The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object.
--validate string[="strict"] (default: "strict"): Must be one of: strict (or true), warn, ignore (or false). 'true' or 'strict' will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not. 'warn' will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server, and behave as 'ignore' otherwise. 'false' or 'ignore' will not perform any schema validation, silently dropping any unknown or duplicate fields.
--wait: If true, wait for resources to be gone before returning. This waits for finalizers.

Options inherited from parent commands:
--as string: Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
--as-group strings: Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--as-uid string: UID to impersonate for the operation.
--cache-dir string (default: $HOME/.kube/cache): Default cache directory.
--certificate-authority string: Path to a cert file for the certificate authority.
--client-certificate string: Path to a client certificate file for TLS.
--client-key string: Path to a client key file for TLS.
--cluster string: The name of the kubeconfig cluster to use.
--context string: The name of the kubeconfig context to use.
--default-not-ready-toleration-seconds int (default: 300): Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
--default-unreachable-toleration-seconds int (default: 300): Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
--disable-compression: If true, opt-out of response compression for all requests to the server.
--insecure-skip-tls-verify: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
--kubeconfig string: Path to the kubeconfig file to use for CLI requests.
--match-server-version: Require server version to match client version.
-n, --namespace string: If present, the namespace scope for this CLI request.
--password string: Password for basic authentication to the API server.
--profile string (default: "none"): Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex).
--profile-output string (default: "profile.pprof"): Name of the file to write the profile to.
--request-timeout string (default: "0"): The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.
-s, --server string: The address and port of the Kubernetes API server.
--storage-driver-buffer-duration duration (default: 1m0s): Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction.
--storage-driver-db string (default: "cadvisor"): database name.
--storage-driver-host string (default: "localhost:8086"): database host:port.
--storage-driver-password string (default: "root"): database password.
--storage-driver-secure: use secure connection with database.
--storage-driver-table string (default: "stats"): table name.
--storage-driver-user string (default: "root"): database username.
--tls-server-name string: Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used.
--token string: Bearer token for authentication to the API server.
--user string: The name of the kubeconfig user to use.
--username string: Username for basic authentication to the API server.
--version version[=true]: --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version.
--warnings-as-errors: Treat warnings received from the server as errors and exit with a non-zero exit code.

See Also: kubectl - kubectl controls the Kubernetes cluster manager"}
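The `kubectl get` reference that follows describes label-selector filtering (`-l key1=value1,key2=value2`), noting that matching objects must satisfy all of the specified label constraints. A minimal Python sketch of that AND-semantics matching rule, covering only the equality-based forms (`=`, `==`, `!=`, bare key) named in the flag description; this is an illustration under those assumptions, not kubectl's actual selector parser, which also supports set-based expressions:

```python
def matches_selector(labels: dict, selector: str) -> bool:
    """Return True only if `labels` satisfies EVERY comma-separated requirement."""
    for req in selector.split(","):
        if "!=" in req:  # check '!=' first, since it also contains '='
            key, value = req.split("!=", 1)
            if labels.get(key.strip()) == value.strip():
                return False
        elif "==" in req:
            key, value = req.split("==", 1)
            if labels.get(key.strip()) != value.strip():
                return False
        elif "=" in req:
            key, value = req.split("=", 1)
            if labels.get(key.strip()) != value.strip():
                return False
        else:  # bare key: the label merely has to exist
            if req.strip() not in labels:
                return False
    return True

# Hypothetical objects standing in for pods returned by the API server.
pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1", "labels": {"app": "db", "tier": "backend"}},
]
selected = [p["name"] for p in pods
            if matches_selector(p["labels"], "app=web,tier!=backend")]
print(selected)  # ['web-1']
```

Because every requirement must hold, adding a constraint can only narrow the result set, which is why `kubectl get -l app=web,tier!=backend` returns a subset of `kubectl get -l app=web`.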
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl get","answers":"---\ntitle: kubectl get\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nDisplay one or many resources.\n\n Prints a table of the most important information about the specified resources. You can filter the list using a label selector and the --selector flag. If the desired resource type is namespaced you will only see results in the current namespace if you don't specify any namespace.\n\n By specifying the output as 'template' and providing a Go template as the value of the --template flag, you can filter the attributes of the fetched resources.\n\nUse \"kubectl api-resources\" for a complete list of supported resources.\n\n```\nkubectl get [(-o|--output=)json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file|custom-columns|custom-columns-file|wide] (TYPE[.VERSION][.GROUP] [NAME | -l label] | TYPE[.VERSION][.GROUP]\/NAME ...) 
[flags]\n```\n\n## Examples\n\n```\n  # List all pods in ps output format\n  kubectl get pods\n  \n  # List all pods in ps output format with more information (such as node name)\n  kubectl get pods -o wide\n  \n  # List a single replication controller with specified NAME in ps output format\n  kubectl get replicationcontroller web\n  \n  # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group\n  kubectl get deployments.v1.apps -o json\n  \n  # List a single pod in JSON output format\n  kubectl get -o json pod web-pod-13je7\n  \n  # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format\n  kubectl get -f pod.yaml -o json\n  \n  # List resources from a directory with kustomization.yaml - e.g. dir\/kustomization.yaml\n  kubectl get -k dir\/\n  \n  # Return only the phase value of the specified pod\n  kubectl get -o template pod\/web-pod-13je7 --template=\n  \n  # List resource information in custom columns\n  kubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image\n  \n  # List all replication controllers and services together in ps output format\n  kubectl get rc,services\n  \n  # List one or more resources by their type and names\n  kubectl get rc\/web service\/frontend pods\/web-pod-13je7\n  \n  # List the 'status' subresource for a single pod\n  kubectl get pod web-pod-13je7 --subresource status\n  \n  # List all deployments in namespace 'backend'\n  kubectl get deployments.apps --namespace backend\n  \n  # List all pods existing in all namespaces\n  kubectl get pods --all-namespaces\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-A, --all-namespaces<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, list the requested object(s) across all namespaces. 
Namespace in current context is ignored even if specified with --namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--chunk-size int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 500<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Return large lists in chunks rather than all at once. Pass 0 to disable. This flag is beta and may change in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to get from a server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for get<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--ignore-not-found<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If the requested object does not exist the command will return exit code 0.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. 
This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-L, --label-columns strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Accepts a comma separated list of labels that are going to be presented as columns. Names are case-sensitive. You can also use multiple flag options like -L label1 -L label2...<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--no-headers<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When using the default or custom-column output format, don't print headers (default print headers).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file, custom-columns, custom-columns-file, wide). See custom columns [https:\/\/kubernetes.io\/docs\/reference\/kubectl\/#custom-columns], golang template [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview] and jsonpath template [https:\/\/kubernetes.io\/docs\/reference\/kubectl\/jsonpath\/].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--output-watch-events<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output watch event objects when --watch or --watch-only is used. Existing objects are output as initial ADDED events.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--raw string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Raw URI to request from the server.  Uses the transport specified by the kubeconfig file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. 
Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--server-print&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, have the server return the appropriate table output. Supports extension APIs and CRDs.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-kind<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, list the resource type for the requested object(s).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-labels<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When printing, show all labels as the last column (default hide labels column)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--sort-by string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If non-empty, sort list types using this field specification.  The field specification is expressed as a JSONPath expression (e.g. '{.metadata.name}'). 
The field in the API resource specified by this JSONPath expression must be an integer or a string.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--subresource string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If specified, gets the subresource of the requested object. Must be one of [status scale]. This flag is beta and may change in the future.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-w, --watch<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>After listing\/getting the requested object, watch for changes.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--watch-only<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Watch for changes to the requested object(s), without listing\/getting first.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## See Also\n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"title: kubectl get. content_type: tool-reference. weight: 30. auto_generated: true. no_list: true.

The file is auto-generated from the Go source code of the component using a generic generator (https://github.com/kubernetes-sigs/reference-docs/). To learn how to generate the reference documentation, please read Contributing to the reference documentation (/docs/contribute/generate-ref-docs/). To update the reference content, please follow the Contributing upstream (/docs/contribute/generate-ref-docs/contribute-upstream/) guide. You can file document formatting bugs against the reference-docs (https://github.com/kubernetes-sigs/reference-docs/) project.

Display one or many resources. Prints a table of the most important information about the specified resources. You can filter the list using a label selector and the --selector flag. If the desired resource type is namespaced you will only see results in the current namespace if you don't specify any namespace. By specifying the output as 'template' and providing a Go template as the value of the --template flag, you can filter the attributes of the fetched resources. Use 'kubectl api-resources' for a complete list of supported resources.

Usage: kubectl get [(-o|--output=)json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file|custom-columns|custom-columns-file|wide] (TYPE[.VERSION][.GROUP] [NAME | -l label] | TYPE[.VERSION][.GROUP]/NAME ...) [flags]

Examples:
# List all pods in ps output format: kubectl get pods
# List all pods in ps output format with more information (such as node name): kubectl get pods -o wide
# List a single replication controller with specified NAME in ps output format: kubectl get replicationcontroller web
# List deployments in JSON output format, in the 'v1' version of the 'apps' API group: kubectl get deployments.v1.apps -o json
# List a single pod in JSON output format: kubectl get -o json pod web-pod-13je7
# List a pod identified by type and name specified in 'pod.yaml' in JSON output format: kubectl get -f pod.yaml -o json
# List resources from a directory with kustomization.yaml, e.g. dir/kustomization.yaml: kubectl get -k dir/
# Return only the phase value of the specified pod: kubectl get -o template pod/web-pod-13je7 --template=
# List resource information in custom columns: kubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image
# List all replication controllers and services together in ps output format: kubectl get rc,services
# List one or more resources by their type and names: kubectl get rc/web service/frontend pods/web-pod-13je7
# List the 'status' subresource for a single pod: kubectl get pod web-pod-13je7 --subresource status
# List all deployments in namespace 'backend': kubectl get deployments.apps --namespace backend
# List all pods existing in all namespaces: kubectl get pods --all-namespaces

Options:
-A, --all-namespaces: If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.
--allow-missing-template-keys (default: true): If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
--chunk-size int (default: 500): Return large lists in chunks rather than all at once. Pass 0 to disable. This flag is beta and may change in the future.
--field-selector string: Selector (field query) to filter on, supports '=', '==', and '!=' (e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.
-f, --filename strings: Filename, directory, or URL to files identifying the resource to get from a server.
-h, --help: help for get.
--ignore-not-found: If the requested object does not exist the command will return exit code 0.
-k, --kustomize string: Process the kustomization directory. This flag can't be used together with -f or -R.
-L, --label-columns strings: Accepts a comma separated list of labels that are going to be presented as columns. Names are case-sensitive. You can also use multiple flag options like -L label1 -L label2...
--no-headers: When using the default or custom-column output format, don't print headers (default print headers).
-o, --output string: Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file, custom-columns, custom-columns-file, wide). See custom columns (https://kubernetes.io/docs/reference/kubectl/#custom-columns), golang template (http://golang.org/pkg/text/template/#pkg-overview) and jsonpath template (https://kubernetes.io/docs/reference/kubectl/jsonpath/).
--output-watch-events: Output watch event objects when --watch or --watch-only is used. Existing objects are output as initial ADDED events.
--raw string: Raw URI to request from the server. Uses the transport specified by the kubeconfig file.
-R, --recursive: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
-l, --selector string: Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.
--server-print (default: true): 
 tr   td   td  td style  line height  130   word wrap  break word    p If true  have the server return the appropriate table output  Supports extension APIs and CRDs   p   td    tr    tr   td colspan  2    show kind  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  list the resource type for the requested object s    p   td    tr    tr   td colspan  2    show labels  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p When printing  show all labels as the last column  default hide labels column   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    sort by string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If non empty  sort list types using this field specification   The field specification is expressed as a JSONPath expression  e g     metadata name     The field in the API resource specified by this JSONPath expression must be an integer or a string   p   td    tr    tr   td colspan  2    subresource string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If specified  gets the subresource of the requested object  Must be one of  status scale   This flag is beta and may change in the future   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2   w    watch  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p After listing getting the requested object  watch for changes   p   td    tr    tr   td 
colspan  2    watch only  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Watch for changes to the requested object s   without listing getting first   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    
context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td 
style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver 
password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the 
server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
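The custom-columns example above pulls container fields out of a pod object by field path. As a rough local illustration (no cluster needed), the same field references can be resolved against a canned JSON manifest; the pod file, container name, and image tag below are invented for the demo, standing in for real `kubectl get pod ... -o json` output.

```shell
# Canned stand-in for `kubectl get pod web -o json` output, trimmed to the
# fields the custom-columns expression references (name/image are made up).
cat > /tmp/pod.json <<'EOF'
{"spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]}}
EOF

# Rough equivalent of:
#   kubectl get pod web -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image
# custom-columns resolves each JSONPath-style field reference against the object.
python3 - <<'EOF'
import json

with open("/tmp/pod.json") as f:
    pod = json.load(f)

container = pod["spec"]["containers"][0]
print("CONTAINER\tIMAGE")
print(container["name"] + "\t" + container["image"])
EOF
```

Each `COLUMN:.path` pair becomes one output column; against a live cluster the same expression works for any field in the object, not just containers.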
{"questions":"kubernetes reference title kubectl diff nolist true contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl diff\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDiff configurations specified by file name or stdin between the current online configuration, and the configuration as it would be if applied.\n\n The output is always YAML.\n\n KUBECTL_EXTERNAL_DIFF environment variable can be used to select your own diff command. Users can use external commands with params too, example: KUBECTL_EXTERNAL_DIFF=\"colordiff -N -u\"\n\n By default, the \"diff\" command available in your path will be run with the \"-u\" (unified diff) and \"-N\" (treat absent files as empty) options.\n\n Exit status: 0 No differences were found. 1 Differences were found. 
&gt;1 Kubectl or diff failed with an error.\n\n Note: KUBECTL_EXTERNAL_DIFF, if used, is expected to follow that convention.\n\n```\nkubectl diff -f FILENAME\n```\n\n## \n\n```\n  # Diff resources included in pod.json\n  kubectl diff -f pod.json\n  \n  # Diff file read from stdin\n  cat service.yaml | kubectl diff -f -\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--concurrency int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Number of objects to process in parallel when diffing against the live version. Larger number = faster, but more memory, I\/O and CPU over that shorter period of time.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-client-side-apply\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files containing the configuration to diff<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--force-conflicts<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, server-side apply will force the changes against conflicts.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for diff<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. 
This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--prune<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Include resources that would be deleted by pruning. Can be used with -l; by default shows all resources that would be pruned<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--prune-allowlist strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Overwrite the default allowlist with &lt;group\/version\/kind&gt; for --prune<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--server-side<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, apply runs in the server instead of the client.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, include managed fields in the diff.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl diff content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Diff configurations specified by file name or stdin between the current online configuration  and the configuration as it would be if applied    The output is always YAML    KUBECTL EXTERNAL DIFF environment variable can be used to select your own diff command  Users can use external commands with params too  example  KUBECTL EXTERNAL DIFF  colordiff  N  u    By default  the  diff  command available in your path will be run with the   u   unified diff  and   N   treat absent files as empty  options    Exit status  0 No differences were found  1 Differences were found   gt 1 Kubectl or diff failed with an error    Note  KUBECTL EXTERNAL DIFF  if used  is expected to follow that convention       kubectl diff  f FILENAME                   Diff resources included in pod json   kubectl diff  f pod json        Diff file read from stdin   cat service 
yaml   kubectl diff  f                 table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    concurrency int nbsp  nbsp  nbsp  nbsp  nbsp Default  1  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Number of objects to process in parallel when diffing against the live version  Larger number   faster  but more memory  I O and CPU over that shorter period of time   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl client side apply   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filename  directory  or URL to files contains the configuration to diff  p   td    tr    tr   td colspan  2    force conflicts  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  server side apply will force the changes against conflicts   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for diff  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2    prune  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Include resources that would be deleted by pruning  Can be used with  l and default shows all resources would be pruned  p   td    tr    tr   td colspan  2    prune allowlist strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Overwrite the default allowlist with  lt group version 
kind gt  for   prune  p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2   l    selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Selector  label query  to filter on  supports            and       e g   l key1 value1 key2 value2   Matching objects must satisfy all of the specified label constraints   p   td    tr    tr   td colspan  2    server side  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  apply runs in the server instead of the client   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  include managed fields in the diff   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p 
"}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl delete autogenerated true","answers":"---\ntitle: kubectl delete\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDelete resources by file names, stdin, resources and names, or by resources and label selector.\n\n JSON and YAML formats are accepted. Only one type of argument may be specified: file names, resources and names, or resources and label selector.\n\n Some resources, such as pods, support graceful deletion. These resources define a default period before they are forcibly terminated (the grace period) but you may override that value with the --grace-period flag, or pass --now to set a grace-period of 1. Because these resources often represent entities in the cluster, deletion may not be acknowledged immediately. If the node hosting a pod is down or cannot reach the API server, termination may take significantly longer than the grace period. To force delete a resource, you must specify the --force flag. Note: only a subset of resources support graceful deletion. 
In absence of the support, the --grace-period flag is ignored.\n\n IMPORTANT: Force deleting pods does not wait for confirmation that the pod's processes have been terminated, which can leave those processes running until the node detects the deletion and completes graceful deletion. If your processes use shared storage or talk to a remote API and depend on the name of the pod to identify themselves, force deleting those pods may result in multiple processes running on different machines using the same identification which may lead to data corruption or inconsistency. Only force delete pods when you are sure the pod is terminated, or if your application can tolerate multiple copies of the same pod running at once. Also, if you force delete pods, the scheduler may place new pods on those nodes before the node has released those resources and causing those pods to be evicted immediately.\n\n Note that the delete command does NOT do resource version checks, so if someone submits an update to a resource right when you submit a delete, their update will be lost along with the rest of the resource.\n\n After a CustomResourceDefinition is deleted, invalidation of discovery cache may take up to 6 hours. If you don't want to wait, you might want to run \"kubectl api-resources\" to refresh the discovery cache.\n\n```\nkubectl delete ([-f FILENAME] | [-k DIRECTORY] | TYPE [(NAME | -l label | --all)])\n```\n\n## \n\n```\n  # Delete a pod using the type and name specified in pod.json\n  kubectl delete -f .\/pod.json\n  \n  # Delete resources from a directory containing kustomization.yaml - e.g. 
dir\/kustomization.yaml\n  kubectl delete -k dir\n  \n  # Delete resources from all files that end with '.json'\n  kubectl delete -f '*.json'\n  \n  # Delete a pod based on the type and name in the JSON passed into stdin\n  cat pod.json | kubectl delete -f -\n  \n  # Delete pods and services with same names \"baz\" and \"foo\"\n  kubectl delete pod,service baz foo\n  \n  # Delete pods and services with label name=myLabel\n  kubectl delete pods,services -l name=myLabel\n  \n  # Delete a pod with minimal delay\n  kubectl delete pod foo --now\n  \n  # Force delete a pod on a dead node\n  kubectl delete pod foo --force\n  \n  # Delete all pods\n  kubectl delete pods --all\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Delete all resources, in the namespace of the specified resource types.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-A, --all-namespaces<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cascade string[=\"background\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"background\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;background&quot;, &quot;orphan&quot;, or &quot;foreground&quot;. Selects the deletion cascading strategy for the dependents (e.g. Pods created by a ReplicationController). 
Defaults to background.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>containing the resource to delete.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--force<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--grace-period int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. 
Can only be set to 0 when --force is true (force deletion).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for delete<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--ignore-not-found<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat &quot;resource not found&quot; as a successful delete. Defaults to &quot;true&quot; when --all is specified.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-i, --interactive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, delete resource only when user confirms.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process a kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--now<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, resources are signaled for immediate shutdown (same as --grace-period=1).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output mode. Use &quot;-o name&quot; for shorter output (resource\/name).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--raw string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Raw URI to DELETE to the server.  Uses the transport specified by the kubeconfig file.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. 
Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--timeout duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--wait&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, wait for resources to be gone before returning. This waits for finalizers.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"
 tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap 
 break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl certificate approve autogenerated true","answers":"---\ntitle: kubectl certificate approve\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nApprove a certificate signing request.\n\n kubectl certificate approve allows a cluster admin to approve a certificate signing request (CSR). This action tells a certificate signing controller to issue a certificate to the requester with the attributes requested in the CSR.\n\n SECURITY NOTICE: Depending on the requested attributes, the issued certificate can potentially grant a requester access to cluster resources or to authenticate as a requested identity. 
Before approving a CSR, ensure you understand what the signed certificate can do.\n\n```\nkubectl certificate approve (-f FILENAME | NAME)\n```\n\n## Examples\n\n```\n  # Approve CSR 'csr-sqgzp'\n  kubectl certificate approve csr-sqgzp\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to update<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--force<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Update the CSR even if it is already approved.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for approve<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl certificate](..\/)\t - Modify certificate resources\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl certificate approve content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Approve a certificate signing request    kubectl certificate approve allows a cluster admin to approve a certificate signing request  CSR   This action tells a certificate signing controller to issue a certificate to the requester with the attributes requested in the CSR    SECURITY NOTICE  Depending on the requested attributes  the issued certificate can potentially grant a requester access to cluster resources or to authenticate as a requested identity  Before approving a CSR  ensure you understand what the signed certificate can do       kubectl certificate approve   f FILENAME   NAME                    Approve CSR  csr sqgzp    kubectl certificate approve csr sqgzp               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing 
template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filename  directory  or URL to files identifying the resource to update  p   td    tr    tr   td colspan  2    force  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Update the CSR even if it is already approved   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for approve  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template 
file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  
td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  
break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp 
 nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero 
exit code  p   td    tr     tbody    table             kubectl certificate          Modify certificate resources "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl certificate deny autogenerated true","answers":"---\ntitle: kubectl certificate deny\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## Synopsis\n\n\nDeny a certificate signing request.\n\n kubectl certificate deny allows a cluster admin to deny a certificate signing request (CSR). This action tells a certificate signing controller not to issue a certificate to the requester.\n\n```\nkubectl certificate deny (-f FILENAME | NAME)\n```\n\n## Examples\n\n```\n  # Deny CSR 'csr-sqgzp'\n  kubectl certificate deny csr-sqgzp\n```\n\n## Options\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to update<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--force<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Update the CSR even if it is already denied.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for deny<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The 
name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl certificate](..\/)\t - Modify certificate resources\n","site":"kubernetes reference"}
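The deny record above pairs naturally with `kubectl get csr` for inspecting the result. A minimal sketch against a live cluster with cluster-admin rights (the CSR name `csr-sqgzp` is taken from the record's own example):

```shell
# List certificate signing requests, deny one, then confirm that a
# Denied condition was recorded on its status.
kubectl get csr
kubectl certificate deny csr-sqgzp
kubectl get csr csr-sqgzp -o jsonpath='{.status.conditions[*].type}'
```

Note that `--force` (documented in the flag table above) is only needed to re-apply the denial to a CSR that is already denied.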
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl certificate autogenerated true","answers":"---\ntitle: kubectl certificate\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nModify certificate resources.\n\n```\nkubectl certificate SUBCOMMAND\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for certificate<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n* [kubectl certificate approve](kubectl_certificate_approve\/)\t - Approve a certificate signing request\n* [kubectl certificate deny](kubectl_certificate_deny\/)\t - Deny a certificate signing request\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference title kubectl cordon weight 30 autogenerated true","answers":"---\ntitle: kubectl cordon\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nMark node as unschedulable.\n\n```\nkubectl cordon NODE\n```\n\n## \n\n```\n  # Mark node \"foo\" as unschedulable\n  kubectl cordon foo\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for cordon<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. 
-l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The 
name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference title kubectl config get users weight 30 autogenerated true","answers":"---\ntitle: kubectl config get-users\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDisplay users defined in the kubeconfig.\n\n```\nkubectl config get-users [flags]\n```\n\n## \n\n```\n  # List the users that kubectl knows about\n  kubectl config get-users\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for get-users<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl config unset autogenerated true","answers":"---\ntitle: kubectl config unset\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nUnset an individual value in a kubeconfig file.\n\n PROPERTY_NAME is a dot delimited name where each token represents either an attribute name or a map key.  
Map keys may not contain dots.\n\n```\nkubectl config unset PROPERTY_NAME\n```\n\n## \n\n```\n  # Unset the current-context\n  kubectl config unset current-context\n  \n  # Unset namespace in foo context\n  kubectl config unset contexts.foo.namespace\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for unset<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate 
string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl config delete user autogenerated true","answers":"---\ntitle: kubectl config delete-user\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDelete the specified user from the kubeconfig.\n\n```\nkubectl config delete-user NAME\n```\n\n## \n\n```\n  # Delete the minikube user\n  kubectl config delete-user minikube\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for delete-user<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference title kubectl config delete cluster weight 30 autogenerated true","answers":"---\ntitle: kubectl config delete-cluster\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDelete the specified cluster from the kubeconfig.\n\n```\nkubectl config delete-cluster NAME\n```\n\n## \n\n```\n  # Delete the minikube cluster\n  kubectl config delete-cluster minikube\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for delete-cluster<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference","answers_cleaned":"
 tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  
nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl config          Modify kubeconfig files "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl config delete context autogenerated true","answers":"---\ntitle: kubectl config delete-context\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDelete the specified context from the kubeconfig.\n\n```\nkubectl config delete-context NAME\n```\n\n## \n\n```\n  # Delete the context for the minikube cluster\n  kubectl config delete-context minikube\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for delete-context<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl config delete context content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Delete the specified context from the kubeconfig       kubectl config delete context NAME                   Delete the context for the minikube cluster   kubectl config delete context minikube               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for delete context  p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a 
namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int 
nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use a particular kubeconfig file  p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the 
profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver 
user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl config          Modify kubeconfig files "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl config get contexts autogenerated true","answers":"---\ntitle: kubectl config get-contexts\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDisplay one or many contexts from the kubeconfig file.\n\n```\nkubectl config get-contexts [(-o|--output=)name)]\n```\n\n## \n\n```\n  # List all the contexts in your kubeconfig file\n  kubectl config get-contexts\n  \n  # Describe one context in your kubeconfig file\n  kubectl config get-contexts my-context\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for get-contexts<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--no-headers<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When using the default or custom-column output format, don't print headers (default print headers).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output 
format. One of: (name).<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl config get contexts content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Display one or many contexts from the kubeconfig file       kubectl config get contexts    o   output  name                     List all the contexts in your kubeconfig file   kubectl config get contexts        Describe one context in your kubeconfig file   kubectl config get contexts my context               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for get contexts  p   td    tr    tr   td colspan  2    no headers  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p When using the default or custom column output format  don t print headers  default print headers    p   td    tr    tr   td colspan  2   o    output string  td    
tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   name    p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  
break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use a particular kubeconfig file  p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API 
server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  
line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             
kubectl config          Modify kubeconfig files "}
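The record above documents `kubectl config get-contexts`, which lists the contexts in a kubeconfig file and marks the current one. As a minimal sketch of what that command reads, here is a hypothetical Python illustration: the kubeconfig mapping and context names below are made up, and `get_contexts` is an illustrative helper, not part of any real client library.

```python
# Hypothetical stand-in for the contents of ~/.kube/config.
kubeconfig = {
    "current-context": "dev",
    "contexts": [
        {"name": "dev", "context": {"cluster": "dev-cluster", "user": "dev-user", "namespace": "default"}},
        {"name": "prod", "context": {"cluster": "prod-cluster", "user": "prod-user"}},
    ],
}

def get_contexts(cfg, name=None):
    """Return (is_current, name, cluster, user, namespace) rows from a
    kubeconfig-style mapping, optionally filtered to one context name."""
    rows = []
    for entry in cfg.get("contexts", []):
        if name is not None and entry["name"] != name:
            continue
        ctx = entry.get("context", {})
        rows.append((
            entry["name"] == cfg.get("current-context"),
            entry["name"],
            ctx.get("cluster", ""),
            ctx.get("user", ""),
            ctx.get("namespace", ""),
        ))
    return rows

# Print a table similar to the command's default output: "*" marks the
# current context, one row per context.
for is_current, ctx_name, cluster, user, ns in get_contexts(kubeconfig):
    marker = "*" if is_current else " "
    print(f"{marker} {ctx_name:8} {cluster:14} {user:10} {ns}")
```

Passing a name, as in `kubectl config get-contexts my-context`, corresponds to `get_contexts(kubeconfig, "prod")`, which returns only the matching row.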
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kubectl config current context contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl config current-context\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDisplay the current-context.\n\n```\nkubectl config current-context [flags]\n```\n\n## \n\n```\n  # Display the current-context\n  kubectl config current-context\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for current-context<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl config current context content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Display the current context       kubectl config current context  flags                    Display the current context   kubectl config current context               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for current context  p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td 
colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  
300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use a particular kubeconfig file  p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td 
"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl config set credentials autogenerated true","answers":"---\ntitle: kubectl config set-credentials\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nSet a user entry in kubeconfig.\n\n Specifying a name that already exists will merge new fields on top of existing values.\n\n        Client-certificate flags:\n        --client-certificate=certfile --client-key=keyfile\n        \n        Bearer token flags:\n        --token=bearer_token\n        \n        Basic auth flags:\n        --username=basic_user --password=basic_password\n        \n Bearer token and basic auth are mutually exclusive.\n\n```\nkubectl config set-credentials NAME [--client-certificate=path\/to\/certfile] [--client-key=path\/to\/keyfile] [--token=bearer_token] [--username=basic_user] [--password=basic_password] [--auth-provider=provider_name] [--auth-provider-arg=key=value] [--exec-command=exec_command] [--exec-api-version=exec_api_version] [--exec-arg=arg] [--exec-env=key=value]\n```\n\n## \n\n```\n  # Set only the \"client-key\" field on the \"cluster-admin\"\n  # entry, without touching other values\n  kubectl config set-credentials cluster-admin --client-key=~\/.kube\/admin.key\n  \n  # Set basic 
auth for the \"cluster-admin\" entry\n  kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif\n  \n  # Embed client certificate data in the \"cluster-admin\" entry\n  kubectl config set-credentials cluster-admin --client-certificate=~\/.kube\/admin.crt --embed-certs=true\n  \n  # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry\n  kubectl config set-credentials cluster-admin --auth-provider=gcp\n  \n  # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional arguments\n  kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar\n  \n  # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry\n  kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-\n  \n  # Enable new exec auth plugin for the \"cluster-admin\" entry\n  kubectl config set-credentials cluster-admin --exec-command=\/path\/to\/the\/executable --exec-api-version=client.authentication.k8s.io\/v1beta1\n  \n  # Enable new exec auth plugin for the \"cluster-admin\" entry with interactive mode\n  kubectl config set-credentials cluster-admin --exec-command=\/path\/to\/the\/executable --exec-api-version=client.authentication.k8s.io\/v1beta1 --exec-interactive-mode=Never\n  \n  # Define new exec auth plugin arguments for the \"cluster-admin\" entry\n  kubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2\n  \n  # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry\n  kubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2\n  \n  # Remove exec auth plugin environment variables for the \"cluster-admin\" entry\n  kubectl config set-credentials cluster-admin --exec-env=var-to-remove-\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: 
fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--auth-provider string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Auth provider for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--auth-provider-arg strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>'key=value' arguments for the auth provider<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to client-certificate file for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to client-key file for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--embed-certs tristate[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Embed client cert\/key for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--exec-api-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>API version of the exec credential plugin for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--exec-arg strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>New arguments for the exec credential plugin command for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--exec-command string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Command for the exec credential plugin for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--exec-env strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 
130%; word-wrap: break-word;\"><p>'key=value' environment values for the exec credential plugin<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--exec-interactive-mode string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>InteractiveMode of the exec credentials plugin for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--exec-provide-cluster-info tristate[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>ProvideClusterInfo of the exec credentials plugin for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for set-credentials<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>password for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>token for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>username for the user entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the 
toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. 
One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database 
password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl config set cluster autogenerated true","answers":"---\ntitle: kubectl config set-cluster\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nSet a cluster entry in kubeconfig.\n\n Specifying a name that already exists will merge new fields on top of existing values for those fields.\n\n```\nkubectl config set-cluster NAME [--server=server] [--certificate-authority=path\/to\/certificate\/authority] [--insecure-skip-tls-verify=true] [--tls-server-name=example.com]\n```\n\n## \n\n```\n  # Set only the server field on the e2e cluster entry without touching other values\n  kubectl config set-cluster e2e --server=https:\/\/1.2.3.4\n  \n  # Embed certificate authority data for the e2e cluster entry\n  kubectl config set-cluster e2e --embed-certs --certificate-authority=~\/.kube\/e2e\/kubernetes.ca.crt\n  \n  # Disable cert checking for the e2e cluster entry\n  kubectl config set-cluster e2e --insecure-skip-tls-verify=true\n  \n  # Set the custom TLS server name to use for validation for the e2e cluster entry\n  kubectl config set-cluster e2e --tls-server-name=my-cluster-name\n  \n  # Set the proxy URL for the e2e cluster entry\n  kubectl config set-cluster e2e 
--proxy-url=https:\/\/1.2.3.4\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to certificate-authority file for the cluster entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--embed-certs tristate[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>embed-certs for the cluster entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for set-cluster<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify tristate[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>insecure-skip-tls-verify for the cluster entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--proxy-url string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>proxy-url for the cluster entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>server for the cluster entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>tls-server-name for the cluster entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username 
to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. 
One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl config set cluster content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Set a cluster entry in kubeconfig    Specifying a name that already exists will merge new fields on top of existing values for those fields       kubectl config set cluster NAME    server server     certificate authority path to certificate authority     insecure skip tls verify true     tls server name example com                    Set only the server field on the e2e cluster entry without touching other values   kubectl config set cluster e2e   server https   1 2 3 4        Embed certificate authority data for the e2e cluster entry   kubectl config set cluster e2e   embed certs   certificate authority    kube e2e kubernetes ca crt        Disable cert checking for the e2e cluster entry   kubectl config set cluster e2e   insecure skip tls verify true        Set the custom TLS server name to use for validation for the e2e cluster entry   kubectl config set cluster e2e  
 tls server name my cluster name        Set the proxy URL for the e2e cluster entry   kubectl config set cluster e2e   proxy url https   1 2 3 4               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to certificate authority file for the cluster entry in kubeconfig  p   td    tr    tr   td colspan  2    embed certs tristate  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p embed certs for the cluster entry in kubeconfig  p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for set cluster  p   td    tr    tr   td colspan  2    insecure skip tls verify tristate  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p insecure skip tls verify for the cluster entry in kubeconfig  p   td    tr    tr   td colspan  2    proxy url string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p proxy url for the cluster entry in kubeconfig  p   td    tr    tr   td colspan  2    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p server for the cluster entry in kubeconfig  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p tls server name for the cluster entry in kubeconfig  p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user 
or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration 
for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use a particular kubeconfig file  p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2    storage 
driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word 
wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl config          Modify kubeconfig files "}
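The record above notes that `kubectl config set-cluster` with a name that already exists will "merge new fields on top of existing values for those fields." A minimal sketch of that merge semantics in Python (a hypothetical `set_cluster` helper operating on a kubeconfig-shaped dict, not kubectl's actual implementation):

```python
# Sketch of the documented merge behaviour: setting fields on an existing
# cluster entry overwrites only the fields provided, leaving the rest of
# the entry intact; an unknown name creates a new entry.
# Hypothetical helper for illustration, not kubectl source code.

def set_cluster(kubeconfig: dict, name: str, **fields) -> dict:
    """Merge non-None fields into the named cluster entry, creating it if absent."""
    clusters = kubeconfig.setdefault("clusters", [])
    provided = {k: v for k, v in fields.items() if v is not None}
    for entry in clusters:
        if entry["name"] == name:
            entry["cluster"].update(provided)  # merge, don't replace
            return kubeconfig
    clusters.append({"name": name, "cluster": provided})
    return kubeconfig

cfg = {
    "clusters": [
        {"name": "e2e",
         "cluster": {"server": "https://old", "certificate-authority": "/ca.crt"}}
    ]
}
# Only the 'server' field changes; 'certificate-authority' is preserved.
set_cluster(cfg, "e2e", server="https://1.2.3.4")
```

This mirrors the documented example `kubectl config set-cluster e2e --server=https://1.2.3.4`, which sets only the server field on the `e2e` entry without touching other values.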
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference title kubectl config set context weight 30 autogenerated true","answers":"---\ntitle: kubectl config set-context\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nSet a context entry in kubeconfig.\n\n Specifying a name that already exists will merge new fields on top of existing values for those fields.\n\n```\nkubectl config set-context [NAME | --current] [--cluster=cluster_nickname] [--user=user_nickname] [--namespace=namespace]\n```\n\n## \n\n```\n  # Set the user field on the gce context entry without touching other values\n  kubectl config set-context gce --user=cluster-admin\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>cluster for the context entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--current<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Modify the current context<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>help for set-context<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>namespace for the context entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>user for the context entry in kubeconfig<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client 
certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl config set context content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs 
  project              Set a context entry in kubeconfig    Specifying a name that already exists will merge new fields on top of existing values for those fields       kubectl config set context  NAME     current     cluster cluster nickname     user user nickname     namespace namespace                    Set the user field on the gce context entry without touching other values   kubectl config set context gce   user cluster admin               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p cluster for the context entry in kubeconfig  p   td    tr    tr   td colspan  2    current  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Modify the current context  p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for set context  p   td    tr    tr   td colspan  2    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p namespace for the context entry in kubeconfig  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p user for the context entry in kubeconfig  p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  
this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  
td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use a particular kubeconfig file  p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the 
Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    
p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl config          Modify kubeconfig files "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kubectl config set contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl config set\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nSet an individual value in a kubeconfig file.\n\n PROPERTY_NAME is a dot delimited name where each token represents either an attribute name or a map key.  Map keys may not contain dots.\n\n PROPERTY_VALUE is the new value you want to set. 
Binary fields such as 'certificate-authority-data' expect a base64 encoded string unless the --set-raw-bytes flag is used.\n\n Specifying an attribute name that already exists will merge new fields on top of existing values.\n\n```\nkubectl config set PROPERTY_NAME PROPERTY_VALUE\n```\n\n## \n\n```\n  # Set the server field on the my-cluster cluster to https:\/\/1.2.3.4\n  kubectl config set clusters.my-cluster.server https:\/\/1.2.3.4\n  \n  # Set the certificate-authority-data field on the my-cluster cluster\n  kubectl config set clusters.my-cluster.certificate-authority-data $(echo \"cert_data_here\" | base64 -i -)\n  \n  # Set the cluster field in the my-context context to my-cluster\n  kubectl config set contexts.my-context.cluster my-cluster\n  \n  # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option\n  kubectl config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for set<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--set-raw-bytes tristate[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>When writing a []byte PROPERTY_VALUE, write the given string directly without base64 decoding.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl config set content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Set an individual value in a kubeconfig file    PROPERTY NAME is a dot delimited name where each token represents either an attribute name or a map key   Map keys may not contain dots    PROPERTY VALUE is the new value you want to set  Binary fields such as  certificate authority data  expect a base64 encoded string unless the   set raw bytes flag is used    Specifying an attribute name that already exists will merge new fields on top of existing values       kubectl config set PROPERTY NAME PROPERTY VALUE                   Set the server field on the my cluster cluster to https   1 2 3 4   kubectl config set clusters my cluster server https   1 2 3 4        Set the certificate authority data field on the my cluster cluster   kubectl config set clusters my cluster certificate authority data   echo  cert data here    base64  i           Set the cluster field in the my context 
context to my cluster   kubectl config set contexts my context cluster my cluster        Set the client key data field in the cluster admin user using   set raw bytes option   kubectl config set users cluster admin client key data cert data here   set raw bytes true               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for set  p   td    tr    tr   td colspan  2    set raw bytes tristate  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p When writing a   byte PROPERTY VALUE  write the given string directly without base64 decoding   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the 
certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  
td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use a particular kubeconfig file  p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will 
be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2  
  username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl config          Modify kubeconfig files "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl config use context autogenerated true","answers":"---\ntitle: kubectl config use-context\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nSet the current-context in a kubeconfig file.\n\n```\nkubectl config use-context CONTEXT_NAME\n```\n\n## \n\n```\n  # Use the context for the minikube cluster\n  kubectl config use-context minikube\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for use-context<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference"}
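The `use-context` record above says only that the command sets `current-context` in a kubeconfig file. A minimal kubeconfig makes the effect concrete; this is a hand-written sketch (the `dev`/`staging` context, cluster, and user names are invented for illustration), and `kubectl config use-context staging` rewrites nothing but the `current-context` field at the bottom:

```yaml
# Minimal kubeconfig sketch (all names are hypothetical).
apiVersion: v1
kind: Config
clusters:
- name: dev-cluster
  cluster:
    server: https://dev.example.com:6443
- name: staging-cluster
  cluster:
    server: https://staging.example.com:6443
contexts:
- name: dev
  context:
    cluster: dev-cluster
    user: admin
- name: staging
  context:
    cluster: staging-cluster
    user: admin
users:
- name: admin
  user: {}
# `kubectl config use-context staging` updates only this field:
current-context: dev
```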
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl config rename context autogenerated true","answers":"---\ntitle: kubectl config rename-context\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nRenames a context from the kubeconfig file.\n\n CONTEXT_NAME is the context name that you want to change.\n\n NEW_NAME is the new name you want to set.\n\n Note: If the context being renamed is the 'current-context', this field will also be updated.\n\n```\nkubectl config rename-context CONTEXT_NAME NEW_NAME\n```\n\n## \n\n```\n  # Rename the context 'old-name' to 'new-name' in your kubeconfig file\n  kubectl config rename-context old-name new-name\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for rename-context<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as 
string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds 
int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl config get clusters autogenerated true","answers":"---\ntitle: kubectl config get-clusters\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDisplay clusters defined in the kubeconfig.\n\n```\nkubectl config get-clusters [flags]\n```\n\n## \n\n```\n  # List the clusters that kubectl knows about\n  kubectl config get-clusters\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for get-clusters<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td 
style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl config get clusters content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Display clusters defined in the kubeconfig       kubectl config get clusters  flags                    List the clusters that kubectl knows about   kubectl config get clusters               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for get clusters  p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    
tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  
nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use a particular kubeconfig file  p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr   
 tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  
nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl config          Modify kubeconfig files "}
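The record above documents `kubectl config get-clusters`, which displays the `clusters[*].name` entries of the active kubeconfig. A minimal sketch with hypothetical cluster names; it emulates the name lookup with `sed` so it runs without kubectl or a cluster (the real command is `kubectl config get-clusters`):

```shell
# Build a throwaway kubeconfig with two hypothetical clusters.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: dev-cluster
- cluster:
    server: https://10.0.0.1:6443
  name: prod-cluster
EOF

# `kubectl --kubeconfig "$cfg" config get-clusters` would list these names;
# emulate the lookup here by extracting the `name:` entries:
sed -n 's/^  name: //p' "$cfg"
```

Against a real kubeconfig, `kubectl config get-clusters` prints a `NAME` header followed by one cluster name per line.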
{"questions":"kubernetes reference nolist true contenttype tool reference title kubectl config weight 30 autogenerated true","answers":"---\ntitle: kubectl config\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nModify kubeconfig files using subcommands like \"kubectl config set current-context my-context\".\n\n The loading order follows these rules:\n\n  1.  If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes place.\n  2.  If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list.\n  3.  
Otherwise, ${HOME}\/.kube\/config is used and no merging takes place.\n\n```\nkubectl config SUBCOMMAND\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for config<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n* [kubectl config current-context](kubectl_config_current-context\/)\t - Display the current-context\n* [kubectl config delete-cluster](kubectl_config_delete-cluster\/)\t - Delete the specified cluster from the kubeconfig\n* [kubectl config delete-context](kubectl_config_delete-context\/)\t - Delete the specified context from the kubeconfig\n* [kubectl config delete-user](kubectl_config_delete-user\/)\t - Delete the specified user from the kubeconfig\n* [kubectl config get-clusters](kubectl_config_get-clusters\/)\t - Display clusters defined in the kubeconfig\n* [kubectl config get-contexts](kubectl_config_get-contexts\/)\t - Describe one or many contexts\n* [kubectl config get-users](kubectl_config_get-users\/)\t - Display users defined in the kubeconfig\n* [kubectl config rename-context](kubectl_config_rename-context\/)\t - Rename a context from the kubeconfig file\n* [kubectl config set](kubectl_config_set\/)\t - Set an individual value in a kubeconfig file\n* [kubectl config set-cluster](kubectl_config_set-cluster\/)\t - Set a cluster entry in kubeconfig\n* [kubectl config set-context](kubectl_config_set-context\/)\t - Set a context entry in kubeconfig\n* [kubectl config set-credentials](kubectl_config_set-credentials\/)\t - Set a user entry in kubeconfig\n* [kubectl config unset](kubectl_config_unset\/)\t - Unset an individual value in a kubeconfig file\n* [kubectl config use-context](kubectl_config_use-context\/)\t - Set the current-context in a kubeconfig file\n* [kubectl config view](kubectl_config_view\/)\t - Display merged kubeconfig settings or a specified 
kubeconfig file\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl config view autogenerated true","answers":"---\ntitle: kubectl config view\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDisplay merged kubeconfig settings or a specified kubeconfig file.\n\n You can use --output jsonpath={...} to extract specific values using a jsonpath expression.\n\n```\nkubectl config view [flags]\n```\n\n## \n\n```\n  # Show merged kubeconfig settings\n  kubectl config view\n  \n  # Show merged kubeconfig settings, raw certificate data, and exposed secrets\n  kubectl config view --raw\n  \n  # Get the password for the e2e user\n  kubectl config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--flatten<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Flatten the resulting kubeconfig file into self-contained output (useful for creating portable kubeconfig files)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for view<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--merge tristate[=true]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Merge the full hierarchy of kubeconfig files<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--minify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Remove all information not used by current-context from the output<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"yaml\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--raw<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Display raw byte data and sensitive data<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The 
name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use a particular kubeconfig file<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl config](..\/)\t - Modify kubeconfig files\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl uncordon","answers":"---\ntitle: kubectl uncordon\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nMark node as schedulable.\n\n```\nkubectl uncordon NODE\n```\n\n## \n\n```\n  # Mark node \"foo\" as schedulable\n  kubectl uncordon foo\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for uncordon<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate 
string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl uncordon content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Mark node as schedulable       kubectl uncordon NODE                   Mark node  foo  as schedulable   kubectl uncordon foo               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for uncordon  p   td    tr    
tr   td colspan  2   l    selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Selector  label query  to filter on  supports            and       e g   l key1 value1 key2 value2   Matching objects must satisfy all of the specified label constraints   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   
word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace 
scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line 
height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td 
   tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
{"questions":"kubernetes reference title kubectl proxy nolist true contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl proxy\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nCreates a proxy server or application-level gateway between localhost and the Kubernetes API server. It also allows serving static content over a specified HTTP path. 
All incoming data enters through one port and gets forwarded to the remote Kubernetes API server port, except for the path matching the static content path.\n\n```\nkubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix]\n```\n\n## \n\n```\n  # To proxy all of the Kubernetes API and nothing else\n  kubectl proxy --api-prefix=\/\n  \n  # To proxy only part of the Kubernetes API and also some static files\n  # You can get pods info with 'curl localhost:8001\/api\/v1\/pods'\n  kubectl proxy --www=\/my\/files --www-prefix=\/static\/ --api-prefix=\/api\/\n  \n  # To proxy the entire Kubernetes API at a different root\n  # You can get pods info with 'curl localhost:8001\/custom\/api\/v1\/pods'\n  kubectl proxy --api-prefix=\/custom\/\n  \n  # Run a proxy to the Kubernetes API server on port 8011, serving static content from .\/local\/www\/\n  kubectl proxy --port=8011 --www=.\/local\/www\/\n  \n  # Run a proxy to the Kubernetes API server on an arbitrary local port\n  # The chosen port for the server will be output to stdout\n  kubectl proxy --port=0\n  \n  # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api\n  # This makes e.g. 
the pods API available at localhost:8001\/k8s-api\/v1\/pods\/\n  kubectl proxy --api-prefix=\/k8s-api\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--accept-hosts string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"^localhost$,^127\\.0\\.0\\.1$,^\\[::1\\]$\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Regular expression for hosts that the proxy should accept.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--accept-paths string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"^.*\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Regular expression for paths that the proxy should accept.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--address string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"127.0.0.1\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The IP address on which to serve on.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--api-prefix string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"\/\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Prefix to serve the proxied API under.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--append-server-path<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, enables automatic path appending of the kube context server path to each request.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-filter<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, disable request filtering in the proxy. 
This is dangerous, and can leave you vulnerable to XSRF attacks, when used with an accessible port.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for proxy<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--keepalive duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>keepalive specifies the keep-alive period for an active network connection. Set to 0 to disable keepalive.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-p, --port int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 8001<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The port on which to run the proxy. Set to 0 to pick a random port.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--reject-methods string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"^$\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Regular expression for HTTP methods that the proxy should reject (example --reject-methods='POST,PUT,PATCH').<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--reject-paths string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"^\/api\/.*\/pods\/.*\/exec,<br \/>^\/api\/.*\/pods\/.*\/attach\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Regular expression for paths that the proxy should reject. 
Paths specified here will be rejected even if accepted by --accept-paths.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-u, --unix-socket string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Unix socket on which to run the proxy.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-w, --www string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Also serve static files from the given directory under the specified prefix.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-P, --www-prefix string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"\/static\/\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Prefix to serve static files under, if static file directory is specified.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
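The `--reject-paths` flag described in the kubectl proxy record above takes an ordinary extended regular expression (its documented default covers `^/api/.*/pods/.*/exec` and `^/api/.*/pods/.*/attach`), so its matching behaviour can be sketched outside the proxy with `grep -E`. A minimal sketch; the request paths below are made-up examples, not taken from a real cluster:

```bash
# Default reject patterns of `kubectl proxy --reject-paths`, joined as one ERE
# with alternation (the proxy rejects a request whose path matches either one).
reject='^/api/.*/pods/.*/exec|^/api/.*/pods/.*/attach'

# Report whether a given request path would be rejected or accepted.
match() { printf '%s\n' "$1" | grep -Eq "$reject" && echo rejected || echo accepted; }

match /api/v1/namespaces/default/pods/web-0/exec    # rejected
match /api/v1/namespaces/default/pods/web-0/attach  # rejected
match /api/v1/namespaces/default/pods/web-0/log     # accepted
```

As the flag's description notes, a path matched by `--reject-paths` is rejected even when `--accept-paths` would otherwise accept it.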
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl top pod autogenerated true","answers":"---\ntitle: kubectl top pod\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDisplay resource (CPU\/memory) usage of pods.\n\n The 'top pod' command allows you to see the resource consumption of pods.\n\n Due to the metrics pipeline delay, they may be unavailable for a few minutes since pod creation.\n\n```\nkubectl top pod [NAME | -l label]\n```\n\n## \n\n```\n  # Show metrics for all pods in the default namespace\n  kubectl top pod\n  \n  # Show metrics for all pods in the given namespace\n  kubectl top pod --namespace=NAMESPACE\n  \n  # Show metrics for a given pod and its containers\n  kubectl top pod POD_NAME --containers\n  \n  # Show metrics for the pods defined by label name=myLabel\n  kubectl top pod -l name=myLabel\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-A, --all-namespaces<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, list the requested object(s) across all namespaces. 
Namespace in current context is ignored even if specified with --namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--containers<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, print usage of containers within a pod.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for pod<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--no-headers<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, print output without headers.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--sort-by string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If non-empty, sort pods list using specified field. 
The field can be either 'cpu' or 'memory'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--sum<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Print the sum of the resource usage<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--use-protocol-buffers&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enables using protocol-buffers to access Metrics API.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a 
client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl top](..\/)\t - Display resource (CPU\/memory) usage\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference title kubectl top node weight 30 autogenerated true","answers":"---\ntitle: kubectl top node\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDisplay resource (CPU\/memory) usage of nodes.\n\n The top-node command allows you to see the resource consumption of nodes.\n\n```\nkubectl top node [NAME | -l label]\n```\n\n## \n\n```\n  # Show metrics for all nodes\n  kubectl top node\n  \n  # Show metrics for a given node\n  kubectl top node NODE_NAME\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for node<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--no-headers<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, print output without headers<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). 
Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-capacity<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Print node resources based on Capacity instead of Allocatable(default) of the nodes.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--sort-by string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If non-empty, sort nodes list using specified field. The field can be either 'cpu' or 'memory'.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--use-protocol-buffers&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enables using protocol-buffers to access Metrics API.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl top](..\/)\t - Display resource (CPU\/memory) usage\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference title kubectl top weight 30 autogenerated true","answers":"---\ntitle: kubectl top\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nDisplay resource (CPU\/memory) usage.\n\n The top command allows you to see the resource consumption for nodes or pods.\n\n This command requires Metrics Server to be correctly configured and working on the server.\n\n```\nkubectl top [flags]\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for top<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n* [kubectl top node](kubectl_top_node\/)\t - Display resource (CPU\/memory) usage of nodes\n* [kubectl top pod](kubectl_top_pod\/)\t - Display resource (CPU\/memory) usage of pods\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl scale","answers":"---\ntitle: kubectl scale\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nSet a new size for a deployment, replica set, replication controller, or stateful set.\n\n Scale also allows users to specify one or more preconditions for the scale action.\n\n If --current-replicas or --resource-version is specified, it is validated before the scale is attempted, and it is guaranteed that the precondition holds true when the scale is sent to the server.\n\n```\nkubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT (-f FILENAME | TYPE NAME)\n```\n\n## \n\n```\n  # Scale a replica set named 'foo' to 3\n  kubectl scale --replicas=3 rs\/foo\n  \n  # Scale a resource identified by type and name specified in \"foo.yaml\" to 3\n  kubectl scale --replicas=3 -f foo.yaml\n  \n  # If the deployment named mysql's current size is 2, scale mysql to 3\n  kubectl scale --current-replicas=2 --replicas=3 deployment\/mysql\n  \n  # Scale multiple replication controllers\n  kubectl scale --replicas=5 rc\/example1 rc\/example2 rc\/example3\n  \n  # Scale stateful set named 'web' to 3\n  kubectl scale --replicas=3 statefulset\/web\n```\n\n## \n\n   <table style=\"width: 100%; 
table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all resources in the namespace of the specified resource types<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--current-replicas int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Precondition for current size. Requires that the current size of the resource match this value in order to scale. -1 (default) for no condition.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to set a new size<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for scale<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--replicas int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The new desired number of replicas. Required.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--resource-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Precondition for resource version. 
Requires that the current resource version match this value in order to scale.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--timeout duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a scale operation, zero means don't wait. Any other values should contain a corresponding time unit (e.g. 1s, 2m, 3h).<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl plugin list","answers":"---\ntitle: kubectl plugin list\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nList all available plugin files on a user's PATH. To see plugins binary names without the full path use --name-only flag.\n\n Available plugin files are those that are: - executable - anywhere on the user's PATH - begin with \"kubectl-\"\n\n```\nkubectl plugin list [flags]\n```\n\n## \n\n```\n  # List all available plugins\n  kubectl plugin list\n  \n  # List only binary names of available plugins without paths\n  kubectl plugin list --name-only\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for list<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--name-only<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, display only the binary name of each plugin, rather than its full path<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table 
style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; 
word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl plugin](..\/)\t - Provides utilities for interacting with plugins\n","site":"kubernetes reference"}
utilities for interacting with plugins "}
{"questions":"kubernetes reference title kubectl plugin nolist true contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl plugin\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nProvides utilities for interacting with plugins.\n\n Plugins provide extended functionality that is not part of the major command-line distribution. Please refer to the documentation and examples for more information about how to write your own plugins.\n\n The easiest way to discover and install plugins is via the kubernetes sub-project krew: [krew.sigs.k8s.io].
To install krew, visit https:\/\/krew.sigs.k8s.io\/docs\/user-guide\/setup\/install\n\n```\nkubectl plugin [flags]\n```\n\n## \n\n```\n  # List all available plugins\n  kubectl plugin list\n  \n  # List only binary names of available plugins without paths\n  kubectl plugin list --name-only\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for plugin<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n* [kubectl plugin list](kubectl_plugin_list\/)\t - List all visible plugin executables on a user's PATH\n","site":"kubernetes reference"}
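The plugin mechanism documented in the record above is plain executable discovery: any executable named `kubectl-<name>` found on `PATH` is reported by `kubectl plugin list` and runs as `kubectl <name>`. A minimal sketch of such a plugin (the plugin name `hello` and the temp directory are illustrative, and we invoke the script directly so no cluster or kubectl binary is needed):

```shell
# A kubectl plugin is just an executable named kubectl-<name> on PATH.
# With $demo_dir on PATH, `kubectl plugin list` would show it and
# `kubectl hello world` would run it; here we call it directly.
demo_dir=$(mktemp -d)
cat > "$demo_dir/kubectl-hello" <<'EOF'
#!/bin/sh
# kubectl passes trailing arguments straight through to the plugin
echo "hello from a kubectl plugin: $*"
EOF
chmod +x "$demo_dir/kubectl-hello"
PATH="$demo_dir:$PATH" kubectl-hello world
```

This prints `hello from a kubectl plugin: world`. Krew automates exactly this: it drops versioned plugin executables into a directory it manages and asks you to put that directory on `PATH`.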
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl wait autogenerated true","answers":"---\ntitle: kubectl wait\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nExperimental: Wait for a specific condition on one or many resources.\n\n The command takes multiple resources and waits until the specified condition is seen in the Status field of every given resource.\n\n Alternatively, the command can wait for the given set of resources to be deleted by providing the \"delete\" keyword as the value to the --for flag.\n\n A successful message will be printed to stdout indicating when the specified condition has been met. 
You can use -o option to change to output destination.\n\n```\nkubectl wait ([-f FILENAME] | resource.group\/resource.name | resource.group [(-l label | --all)]) [--for=delete|--for condition=available|--for=jsonpath='{}'[=value]]\n```\n\n## \n\n```\n  # Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\"\n  kubectl wait --for=condition=Ready pod\/busybox1\n  \n  # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity)\n  kubectl wait --for=condition=Ready=false pod\/busybox1\n  \n  # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\"\n  kubectl wait --for=jsonpath='{.status.phase}'=Running pod\/busybox1\n  \n  # Wait for pod \"busybox1\" to be Ready\n  kubectl wait --for='jsonpath={.status.conditions[?(@.type==\"Ready\")].status}=True' pod\/busybox1\n  \n  # Wait for the service \"loadbalancer\" to have ingress\n  kubectl wait --for=jsonpath='{.status.loadBalancer.ingress}' service\/loadbalancer\n  \n  # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command\n  kubectl delete pod\/busybox1\n  kubectl wait --for=delete pod\/busybox1 --timeout=60s\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all resources in the namespace of the specified resource types<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-A, --all-namespaces<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, list the requested object(s) across all namespaces. 
Namespace in current context is ignored even if specified with --namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>identifying the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--for string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The condition to wait on: [delete|condition=condition-name[=condition-value]|jsonpath='{JSONPath expression}'=[JSONPath value]]. The default condition-value is true.  Condition values are compared after Unicode simple case folding, which is a more general form of case-insensitivity.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for wait<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--local<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, annotation will NOT contact api-server but run locally.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. 
One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--timeout duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 30s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up.  Zero means check once and don't wait, negative means wait for a week.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl wait content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Experimental  Wait for a specific condition on one or many resources    The command takes multiple resources and waits until the specified condition is seen in the Status field of every given resource    Alternatively  the command can wait for the given set of resources to be deleted by providing the  delete  keyword as the value to the   for flag    A successful message will be printed to stdout indicating when the specified condition has been met  You can use  o option to change to output destination       kubectl wait    f FILENAME    resource group resource name   resource group    l label     all       for delete   for condition available   for jsonpath       value                     Wait for the pod  busybox1  to contain the status condition of type  Ready    kubectl wait   for condition Ready pod busybox1        The default value of 
status condition is true  you can wait for other targets after an equal delimiter  compared after Unicode simple case folding  which is a more general form of case insensitivity    kubectl wait   for condition Ready false pod busybox1        Wait for the pod  busybox1  to contain the status phase to be  Running    kubectl wait   for jsonpath    status phase   Running pod busybox1        Wait for pod  busybox1  to be Ready   kubectl wait   for  jsonpath   status conditions     type   Ready    status  True  pod busybox1        Wait for the service  loadbalancer  to have ingress   kubectl wait   for jsonpath    status loadBalancer ingress   service loadbalancer        Wait for the pod  busybox1  to be deleted  with a timeout of 60s  after having issued the  delete  command   kubectl delete pod busybox1   kubectl wait   for delete pod busybox1   timeout 60s               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    all  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Select all resources in the namespace of the specified resource types  p   td    tr    tr   td colspan  2   A    all namespaces  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  list the requested object s  across all namespaces  Namespace in current context is ignored even if specified with   namespace   p   td    tr    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    field selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Selector  field query  to filter on  supports 
           and       e g    field selector key1 value1 key2 value2   The server only supports a limited number of field queries per type   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p identifying the resource   p   td    tr    tr   td colspan  2    for string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The condition to wait on   delete condition condition name  condition value  jsonpath   JSONPath expression    JSONPath value    The default condition value is true   Condition values are compared after Unicode simple case folding  which is a more general form of case insensitivity   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for wait  p   td    tr    tr   td colspan  2    local  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  annotation will NOT contact api server but run locally   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2   R    recursive nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2   l    selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Selector  label query  to filter on  supports            and       e g   l key1 value1 key2 value2   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  
line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    timeout duration nbsp  nbsp  nbsp  nbsp  nbsp Default  30s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up   Zero means check once and don t wait  negative means wait for a week   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td 
colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  
130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this 
duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  
td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl taint","answers":"---\ntitle: kubectl taint\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nUpdate the taints on one or more nodes.\n\n  *  A taint consists of a key, value, and effect. As an argument here, it is expressed as key=value:effect.\n  *  The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 253 characters.\n  *  Optionally, the key can begin with a DNS subdomain prefix and a single '\/', like example.com\/my-app.\n  *  The value is optional. If given, it must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters.\n  *  The effect must be NoSchedule, PreferNoSchedule or NoExecute.\n  *  Currently taint can only apply to node.\n\n```\nkubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... 
KEY_N=VAL_N:TAINT_EFFECT_N\n```\n\n## \n\n```\n  # Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'\n  # If a taint with that key and effect already exists, its value is replaced as specified\n  kubectl taint nodes foo dedicated=special-user:NoSchedule\n  \n  # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists\n  kubectl taint nodes foo dedicated:NoSchedule-\n  \n  # Remove from node 'foo' all the taints with key 'dedicated'\n  kubectl taint nodes foo dedicated-\n  \n  # Add a taint with key 'dedicated' on nodes having label myLabel=X\n  kubectl taint node -l myLabel=X  dedicated=foo:PreferNoSchedule\n  \n  # Add to node 'foo' a taint with key 'bar' and no value\n  kubectl taint nodes foo bar:NoSchedule\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all nodes in the cluster<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-taint\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for taint<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--overwrite<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, allow taints to be overwritten, otherwise reject taint updates that overwrite existing taints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl taint content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Update the taints on one or more nodes        A taint consists of a key  value  and effect  As an argument here  it is expressed as key value effect       The key must begin with a letter or number  and may contain letters  numbers  hyphens  dots  and underscores  up to 253 characters       Optionally  the key can begin with a DNS subdomain prefix and a single      like example com my app       The value is optional  If given  it must begin with a letter or number  and may contain letters  numbers  hyphens  dots  and underscores  up to 63 characters       The effect must be NoSchedule  PreferNoSchedule or NoExecute       Currently taint can only apply to node       kubectl taint NODE NAME KEY 1 VAL 1 TAINT EFFECT 1     KEY N VAL N TAINT EFFECT N                   Update node  foo  with a taint with key  dedicated  and value  special user  and 
effect  NoSchedule      If a taint with that key and effect already exists  its value is replaced as specified   kubectl taint nodes foo dedicated special user NoSchedule        Remove from node  foo  the taint with key  dedicated  and effect  NoSchedule  if one exists   kubectl taint nodes foo dedicated NoSchedule         Remove from node  foo  all the taints with key  dedicated    kubectl taint nodes foo dedicated         Add a taint with key  dedicated  on nodes having label myLabel X   kubectl taint node  l myLabel X  dedicated foo PreferNoSchedule        Add to node  foo  a taint with key  bar  and no value   kubectl taint nodes foo bar NoSchedule               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    all  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Select all nodes in the cluster  p   td    tr    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl taint   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   
h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for taint  p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    overwrite  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  allow taints to be overwritten  otherwise reject taint updates that overwrite existing taints   p   td    tr    tr   td colspan  2   l    selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Selector  label query  to filter on  supports            and       e g   l key1 value1 key2 value2   Matching objects must satisfy all of the specified label constraints   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation 
if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or duplicate fields   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr 
  tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  
break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td 
   tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td 
colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
{"questions":"kubernetes reference autogenerated true nolist true contenttype tool reference weight 30 title kubectl completion","answers":"---\ntitle: kubectl completion\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nOutput shell completion code for the specified shell (bash, zsh, fish, or powershell). The shell code must be evaluated to provide interactive completion of kubectl commands.  
This can be done by sourcing it from the .bash_profile.\n\n Detailed instructions on how to do this are available here:\n\n        for macOS:\n        https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl-macos\/#enable-shell-autocompletion\n        \n        for linux:\n        https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl-linux\/#enable-shell-autocompletion\n        \n        for windows:\n        https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl-windows\/#enable-shell-autocompletion\n        \n Note for zsh users: [1] zsh completions are only supported in versions of zsh &gt;= 5.2.\n\n```\nkubectl completion SHELL\n```\n\n## \n\n```\n  # Installing bash completion on macOS using homebrew\n  ## If running Bash 3.2 included with macOS\n  brew install bash-completion\n  ## or, if running Bash 4.1+\n  brew install bash-completion@2\n  ## If kubectl is installed via homebrew, this should start working immediately\n  ## If you've installed via other means, you may need to add the completion to your completion directory\n  kubectl completion bash > $(brew --prefix)\/etc\/bash_completion.d\/kubectl\n  \n  \n  # Installing bash completion on Linux\n  ## If bash-completion is not installed on Linux, install the 'bash-completion' package\n  ## via your distribution's package manager.\n  ## Load the kubectl completion code for bash into the current shell\n  source <(kubectl completion bash)\n  ## Write bash completion code to a file and source it from .bash_profile\n  kubectl completion bash > ~\/.kube\/completion.bash.inc\n  printf \"\n  # kubectl shell completion\n  source '$HOME\/.kube\/completion.bash.inc'\n  \" >> $HOME\/.bash_profile\n  source $HOME\/.bash_profile\n  \n  # Load the kubectl completion code for zsh[1] into the current shell\n  source <(kubectl completion zsh)\n  # Set the kubectl completion code for zsh[1] to autoload on startup\n  kubectl completion zsh > \"${fpath[1]}\/_kubectl\"\n  \n  \n  # Load the kubectl completion 
code for fish[2] into the current shell\n  kubectl completion fish | source\n  # To load completions for each session, execute once:\n  kubectl completion fish > ~\/.config\/fish\/completions\/kubectl.fish\n  \n  # Load the kubectl completion code for powershell into the current shell\n  kubectl completion powershell | Out-String | Invoke-Expression\n  # Set kubectl completion code for powershell to run on startup\n  ## Save completion code to a script and execute in the profile\n  kubectl completion powershell > $HOME\\.kube\\completion.ps1\n  Add-Content $PROFILE \"$HOME\\.kube\\completion.ps1\"\n  ## Execute completion code in the profile\n  Add-Content $PROFILE \"if (Get-Command kubectl -ErrorAction SilentlyContinue) {\n  kubectl completion powershell | Out-String | Invoke-Expression\n  }\"\n  ## Add completion code directly to the $PROFILE script\n  kubectl completion powershell >> $PROFILE\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for completion<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl completion content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Output shell completion code for the specified shell  bash  zsh  fish  or powershell   The shell code must be evaluated to provide interactive completion of kubectl commands   This can be done by sourcing it from the  bash profile    Detailed instructions on how to do this are available here           for macOS          https   kubernetes io docs tasks tools install kubectl macos  enable shell autocompletion                  for linux          https   kubernetes io docs tasks tools install kubectl linux  enable shell autocompletion                  for windows          https   kubernetes io docs tasks tools install kubectl windows  enable shell autocompletion           Note for zsh users   1  zsh completions are only supported in versions of zsh  gt   5 2       kubectl completion SHELL                   Installing bash completion on 
macOS using homebrew      If running Bash 3 2 included with macOS   brew install bash completion      or  if running Bash 4 1    brew install bash completion 2      If kubectl is installed via homebrew  this should start working immediately      If you ve installed via other means  you may need add the completion to your completion directory   kubectl completion bash     brew   prefix  etc bash completion d kubectl           Installing bash completion on Linux      If bash completion is not installed on Linux  install the  bash completion  package      via your distribution s package manager       Load the kubectl completion code for bash into the current shell   source   kubectl completion bash       Write bash completion code to a file and source it from  bash profile   kubectl completion bash      kube completion bash inc   printf       kubectl shell completion   source   HOME  kube completion bash inc          HOME  bash profile   source  HOME  bash profile        Load the kubectl completion code for zsh 1  into the current shell   source   kubectl completion zsh      Set the kubectl completion code for zsh 1  to autoload on startup   kubectl completion zsh      fpath 1    kubectl            Load the kubectl completion code for fish 2  into the current shell   kubectl completion fish   source     To load completions for each session  execute once    kubectl completion fish      config fish completions kubectl fish        Load the kubectl completion code for powershell into the current shell   kubectl completion powershell   Out String   Invoke Expression     Set kubectl completion code for powershell to run on startup      Save completion code to a script and execute in the profile   kubectl completion powershell    HOME  kube completion ps1   Add Content  PROFILE   HOME  kube completion ps1       Execute completion code in the profile   Add Content  PROFILE  if  Get Command kubectl  ErrorAction SilentlyContinue      kubectl completion powershell   Out String   
Invoke Expression           Add completion code directly to the  PROFILE script   kubectl completion powershell     PROFILE               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for completion  p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for 
TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace 
string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage 
driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and 
quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl     kubectl      kubectl controls the Kubernetes cluster manager "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl rollout undo autogenerated true","answers":"---\ntitle: kubectl rollout undo\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nRoll back to a previous rollout.\n\n```\nkubectl rollout undo (TYPE NAME | TYPE\/NAME) [flags]\n```\n\n## \n\n```\n  # Roll back to the previous deployment\n  kubectl rollout undo deployment\/abc\n  \n  # Roll back to daemonset revision 3\n  kubectl rollout undo daemonset\/abc --to-revision=3\n  \n  # Roll back to the previous deployment with dry-run\n  kubectl rollout undo --dry-run=server deployment\/abc\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to get from a server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for undo<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). 
Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--to-revision int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The revision to rollback to. Default to 0 (last revision).<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl rollout](..\/)\t - Manage the rollout of a resource\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl rollout undo content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Roll back to a previous rollout       kubectl rollout undo  TYPE NAME   TYPE NAME   flags                    Roll back to the previous deployment   kubectl rollout undo deployment abc        Roll back to daemonset revision 3   kubectl rollout undo daemonset abc   to revision 3        Roll back to the previous deployment with dry run   kubectl rollout undo   dry run server deployment abc               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    
tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filename  directory  or URL to files identifying the resource to get from a server   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for undo  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2   l    selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Selector  label query  to filter on  supports            and       e g   l key1 value1 key2 value2   Matching objects must satisfy all of the specified label constraints   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word 
wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    to revision int  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The revision to rollback to  Default to 0  last revision    p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path 
to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    
match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db 
string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  
p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl rollout          Manage the rollout of a resource "}
{"questions":"kubernetes reference title kubectl rollout resume The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl rollout resume\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nResume a paused resource.\n\n Paused resources will not be reconciled by a controller. By resuming a resource, we allow it to be reconciled again. Currently only deployments support being resumed.\n\n```\nkubectl rollout resume RESOURCE\n```\n\n## \n\n```\n  # Resume an already paused deployment\n  kubectl rollout resume deployment\/nginx\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-rollout\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to get from a server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for resume<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). 
Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl rollout](..\/)\t - Manage the rollout of a resource\n","site":"kubernetes reference"}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic title kubectl rollout restart contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl rollout restart\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nRestart a resource.\n\n        Resource rollout will be restarted.\n\n```\nkubectl rollout restart RESOURCE\n```\n\n## \n\n```\n  # Restart all deployments in the test-namespace namespace\n  kubectl rollout restart deployment -n test-namespace\n  \n  # Restart a deployment\n  kubectl rollout restart deployment\/nginx\n  \n  # Restart a daemon set\n  kubectl rollout restart daemonset\/abc\n  \n  # Restart deployments with the app=nginx label\n  kubectl rollout restart deployment --selector=app=nginx\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-rollout\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to get from a server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for restart<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). 
Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl rollout](..\/)\t - Manage the rollout of a resource\n","site":"kubernetes reference","answers_cleaned":""}
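The two subcommands documented in these records are typically used together: `rollout restart` triggers a new rollout, and `rollout status` watches it to completion. A minimal sketch, assuming a reachable cluster and a hypothetical deployment named `nginx` in the current kubeconfig context:

```shell
# Trigger a rolling restart of the deployment's pods
kubectl rollout restart deployment/nginx

# Watch the resulting rollout until it finishes (--watch defaults to true);
# --timeout bounds the wait instead of watching forever (zero means never)
kubectl rollout status deployment/nginx --timeout=5m
```

`rollout status` exits non-zero if the rollout does not complete within the timeout, which makes the pair convenient in CI scripts.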
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl rollout status autogenerated true","answers":"---\ntitle: kubectl rollout status\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nShow the status of the rollout.\n\n By default 'rollout status' will watch the status of the latest rollout until it's done. If you don't want to wait for the rollout to finish then you can use --watch=false. Note that if a new rollout starts in-between, then 'rollout status' will continue watching the latest revision. 
If you want to pin to a specific revision and abort if it is rolled over by another revision, use --revision=N where N is the revision you need to watch for.\n\n```\nkubectl rollout status (TYPE NAME | TYPE\/NAME) [flags]\n```\n\n## \n\n```\n  # Watch the rollout status of a deployment\n  kubectl rollout status deployment\/nginx\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to get from a server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for status<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--revision int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Pin to a specific revision for showing its status. Defaults to 0 (last revision).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). 
Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--timeout duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before ending watch, zero means never. Any other values should contain a corresponding time unit (e.g. 1s, 2m, 3h).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-w, --watch&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Watch the status of the rollout until it's done.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl rollout](..\/)\t - Manage the rollout of a resource\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl rollout status content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Show the status of the rollout    By default  rollout status  will watch the status of the latest rollout until it s done  If you don t want to wait for the rollout to finish then you can use   watch false  Note that if a new rollout starts in between  then  rollout status  will continue watching the latest revision  If you want to pin to a specific revision and abort if it is rolled over by another revision  use   revision N where N is the revision you need to watch for       kubectl rollout status  TYPE NAME   TYPE NAME   flags                    Watch the rollout status of a deployment   kubectl rollout status deployment nginx               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line 
height  130   word wrap  break word    p Filename  directory  or URL to files identifying the resource to get from a server   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for status  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2    revision int  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Pin to a specific revision for showing its status  Defaults to 0  last revision    p   td    tr    tr   td colspan  2   l    selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Selector  label query  to filter on  supports            and       e g   l key1 value1 key2 value2   Matching objects must satisfy all of the specified label constraints   p   td    tr    tr   td colspan  2    timeout duration  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before ending watch  zero means never  Any other values should contain a corresponding time unit  e g  1s  2m  3h    p   td    tr    tr   td colspan  2   w    watch nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Watch the status of the rollout until it s done   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td 
colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of 
the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   
td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    
storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl rollout          Manage the rollout of a resource "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl rollout pause autogenerated true","answers":"---\ntitle: kubectl rollout pause\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nMark the provided resource as paused.\n\n Paused resources will not be reconciled by a controller. Use \"kubectl rollout resume\" to resume a paused resource. Currently only deployments support being paused.\n\n```\nkubectl rollout pause RESOURCE\n```\n\n## \n\n```\n  # Mark the nginx deployment as paused\n  # Any current state of the deployment will continue its function; new updates\n  # to the deployment will not have an effect as long as the deployment is paused\n  kubectl rollout pause deployment\/nginx\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-rollout\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to get from a server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for pause<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2). 
Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## Options inherited from parent commands\n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl rollout](..\/)\t - Manage the rollout of a resource\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl rollout pause content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Mark the provided resource as paused    Paused resources will not be reconciled by a controller  Use  kubectl rollout resume  to resume a paused resource  Currently only deployments support being paused       kubectl rollout pause RESOURCE                   Mark the nginx deployment as paused     Any current state of the deployment will continue its function  new updates     to the deployment will not have an effect as long as the deployment is paused   kubectl rollout pause deployment nginx               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field 
or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl rollout   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filename  directory  or URL to files identifying the resource to get from a server   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for pause  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2   l    selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Selector  label query  to filter on  supports            and       e g   l key1 value1 key2 value2   Matching objects must satisfy all of the specified label constraints   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing 
objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  
td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   
word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   
localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   
td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl rollout          Manage the rollout of a resource "}
{"questions":"kubernetes reference title kubectl rollout nolist true contenttype tool reference weight 30 autogenerated true","answers":"---\ntitle: kubectl rollout\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nManage the rollout of one or many resources.\n        \n Valid resource types include:\n\n  *  deployments\n  *  daemonsets\n  *  statefulsets\n\n```\nkubectl rollout SUBCOMMAND\n```\n\n## \n\n```\n  # Rollback to the previous deployment\n  kubectl rollout undo deployment\/abc\n  \n  # Check the rollout status of a daemonset\n  kubectl rollout status daemonset\/foo\n  \n  # Restart a deployment\n  kubectl rollout restart deployment\/abc\n  \n  # Restart deployments with the 'app=nginx' label\n  kubectl rollout restart deployment --selector=app=nginx\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for rollout<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as 
string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds 
int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n* [kubectl rollout history](kubectl_rollout_history\/)\t - View rollout history\n* [kubectl rollout pause](kubectl_rollout_pause\/)\t - Mark the provided resource as paused\n* [kubectl rollout restart](kubectl_rollout_restart\/)\t - Restart a resource\n* [kubectl rollout resume](kubectl_rollout_resume\/)\t - Resume a paused resource\n* [kubectl rollout status](kubectl_rollout_status\/)\t - Show the status of the rollout\n* [kubectl rollout undo](kubectl_rollout_undo\/)\t - Undo a previous rollout\n","site":"kubernetes reference"}
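The `kubectl rollout` examples above include `--selector=app=nginx`, and the flag reference describes equality-based label selectors: comma-separated requirements using `=`, `==` (both equality) or `!=`, where an object matches only if every requirement holds. A toy matcher sketching those rules (an assumption-laden model for illustration, not kubectl's actual Go selector code):

```python
# Toy model of the equality-based label selectors accepted by --selector / -l:
# each comma-separated requirement uses '=', '==' (equality) or '!=', and an
# object matches only if all requirements are satisfied.

def matches(selector: str, labels: dict) -> bool:
    for req in selector.split(","):
        req = req.strip()
        if "!=" in req:                      # check '!=' before '=' / '=='
            key, value = req.split("!=", 1)
            if labels.get(key.strip()) == value.strip():
                return False
        elif "==" in req:
            key, value = req.split("==", 1)
            if labels.get(key.strip()) != value.strip():
                return False
        elif "=" in req:
            key, value = req.split("=", 1)
            if labels.get(key.strip()) != value.strip():
                return False
        else:
            raise ValueError("unsupported requirement: %r" % req)
    return True

# 'kubectl rollout restart deployment --selector=app=nginx' would restart
# only deployments whose labels satisfy the selector:
print(matches("app=nginx", {"app": "nginx", "tier": "web"}))          # True
print(matches("app=nginx,tier!=db", {"app": "nginx", "tier": "db"}))  # False
```

Note that the real selector grammar also supports set-based requirements (`in`, `notin`, `exists`), which this sketch omits.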
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl rollout history autogenerated true","answers":"---\ntitle: kubectl rollout history\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nView previous rollout revisions and configurations.\n\n```\nkubectl rollout history (TYPE NAME | TYPE\/NAME) [flags]\n```\n\n## \n\n```\n  # View the rollout history of a deployment\n  kubectl rollout history deployment\/abc\n  \n  # View the details of daemonset revision 3\n  kubectl rollout history daemonset\/abc --revision=3\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. 
Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files identifying the resource to get from a server.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for history<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--revision int<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>See the details, including podTemplate of the revision specified<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). 
Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl rollout](..\/)\t - Manage the rollout of a resource\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl rollout history content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              View previous rollout revisions and configurations       kubectl rollout history  TYPE NAME   TYPE NAME   flags                    View the rollout history of a deployment   kubectl rollout history deployment abc        View the details of daemonset revision 3   kubectl rollout history daemonset abc   revision 3               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td 
style  line height  130   word wrap  break word    p Filename  directory  or URL to files identifying the resource to get from a server   p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for history  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2    revision int  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p See the details  including podTemplate of the revision specified  p   td    tr    tr   td colspan  2   l    selector string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Selector  label query  to filter on  supports            and       e g   l key1 value1 key2 value2   Matching objects must satisfy all of the specified label constraints   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go 
template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  
130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password 
for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   
td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    
tr     tbody    table             kubectl rollout          Manage the rollout of a resource "}
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl apply set last applied autogenerated true","answers":"---\ntitle: kubectl apply set-last-applied\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nSet the latest last-applied-configuration annotations by setting it to match the contents of a file. 
This results in the last-applied-configuration being updated as though 'kubectl apply -f &lt;file&gt;' was run, without updating any other parts of the object.\n\n```\nkubectl apply set-last-applied -f FILENAME\n```\n\n## \n\n```\n  # Set the last-applied-configuration of a resource to match the contents of a file\n  kubectl apply set-last-applied -f deploy.yaml\n  \n  # Execute set-last-applied against each configuration file in a directory\n  kubectl apply set-last-applied -f path\/\n  \n  # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist\n  kubectl apply set-last-applied -f deploy.yaml --create-annotation=true\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--create-annotation<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Will create 'last-applied-configuration' annotations if the current object doesn't have one<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. 
If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files that contain the last-applied-configuration annotations<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for set-last-applied<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl apply](..\/)\t - Apply a configuration to a resource by file name or stdin\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl apply set last applied content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Set the latest last applied configuration annotations by setting it to match the contents of a file  This results in the last applied configuration being updated as though  kubectl apply  f lt file gt    was run  without updating any other parts of the object       kubectl apply set last applied  f FILENAME                   Set the last applied configuration of a resource to match the contents of a file   kubectl apply set last applied  f deploy yaml        Execute set last applied against each configuration file in a directory   kubectl apply set last applied  f path         Set the last applied configuration of a resource to match the contents of a file  will create the annotation if it does not already exist   kubectl apply set last applied  f deploy yaml   create annotation true               table style  width  100   table 
layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    create annotation  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Will create  last applied configuration  annotations if current objects doesn t have one  p   td    tr    tr   td colspan  2    dry run string   unchanged   nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be  quot none quot    quot server quot   or  quot client quot   If client strategy  only print the object that would be sent  without sending it  If server strategy  submit server side request without persisting the resource   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filename  directory  or URL to files that contains the last applied configuration annotations  p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for set last applied  p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td 
colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   
word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace 
scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line 
height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td 
   tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a non zero exit code  p   td    tr     tbody    table             kubectl apply          Apply a configuration to a resource by file name or stdin "}
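The `kubectl apply edit-last-applied` reference below states that kubectl opens the editor named by `KUBE_EDITOR`, then `EDITOR`, and otherwise falls back to `vi` on Linux (`notepad` on Windows). A minimal shell sketch of that documented precedence, using standard `${var:-default}` parameter expansion (illustrative only; kubectl resolves this internally, and the `nano`/`emacs` values are arbitrary examples):

```shell
# Each subshell isolates the environment, then prints which editor
# kubectl's documented lookup order would pick.
( unset KUBE_EDITOR EDITOR; KUBE_EDITOR=nano EDITOR=emacs; \
  echo "${KUBE_EDITOR:-${EDITOR:-vi}}" )   # KUBE_EDITOR wins -> nano
( unset KUBE_EDITOR EDITOR; EDITOR=emacs; \
  echo "${KUBE_EDITOR:-${EDITOR:-vi}}" )   # falls through to EDITOR -> emacs
( unset KUBE_EDITOR EDITOR; \
  echo "${KUBE_EDITOR:-${EDITOR:-vi}}" )   # neither set -> vi
```

The same expansion works in any POSIX shell, so it also explains why an empty-but-set `KUBE_EDITOR` falls through to `EDITOR`.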
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 title kubectl apply edit last applied autogenerated true","answers":"---\ntitle: kubectl apply edit-last-applied\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nEdit the latest last-applied-configuration annotations of resources from the default editor.\n\n The edit-last-applied command allows you to directly edit any API resource you can retrieve via the command-line tools. It will open the editor defined by your KUBE_EDITOR, or EDITOR environment variables, or fall back to 'vi' for Linux or 'notepad' for Windows. You can edit multiple objects, although changes are applied one at a time. The command accepts file names as well as command-line arguments, although the files you point to must be previously saved versions of resources.\n\n The default format is YAML. To edit in JSON, specify \"-o json\".\n\n The flag --windows-line-endings can be used to force Windows line endings, otherwise the default for your operating system will be used.\n\n In the event an error occurs while updating, a temporary file will be created on disk that contains your unapplied changes. The most common error when updating a resource is another editor changing the resource on the server. 
When this occurs, you will have to apply your changes to the newer version of the resource, or update your temporary saved copy to include the latest resource version.\n\n```\nkubectl apply edit-last-applied (RESOURCE\/NAME | -f FILENAME)\n```\n\n## \n\n```\n  # Edit the last-applied-configuration annotations by type\/name in YAML\n  kubectl apply edit-last-applied deployment\/nginx\n  \n  # Edit the last-applied-configuration annotations by file in JSON\n  kubectl apply edit-last-applied -f deploy.yaml -o json\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-client-side-apply\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files to use to edit the resource<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for edit-last-applied<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. 
This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. 
It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--windows-line-endings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Defaults to the line ending native to your platform.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl apply](..\/)\t - Apply a configuration to a resource by file name or stdin\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl apply edit last applied content type  tool reference weight  30 auto generated  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Edit the latest last applied configuration annotations of resources from the default editor    The edit last applied command allows you to directly edit any API resource you can retrieve via the command line tools  It will open the editor defined by your KUBE EDITOR  or EDITOR environment variables  or fall back to  vi  for Linux or  notepad  for Windows  You can edit multiple objects  although changes are applied one at a time  The command accepts file names as well as command line arguments  although the files you point to must be previously saved versions of resources    The default format is YAML  To edit in JSON  specify   o json     The flag   windows line endings can be used to force Windows line endings  otherwise the default for your operating system will be used    In the event an error occurs while updating  a temporary 
file will be created on disk that contains your unapplied changes  The most common error when updating a resource is another editor changing the resource on the server  When this occurs  you will have to apply your changes to the newer version of the resource  or update your temporary saved copy to include the latest resource version       kubectl apply edit last applied  RESOURCE NAME    f FILENAME                    Edit the last applied configuration annotations by type name in YAML   kubectl apply edit last applied deployment nginx        Edit the last applied configuration annotations by file in JSON   kubectl apply edit last applied  f deploy yaml  o json               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    allow missing template keys nbsp  nbsp  nbsp  nbsp  nbsp Default  true  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  ignore any errors in templates when a field or map key is missing in the template  Only applies to golang and jsonpath output formats   p   td    tr    tr   td colspan  2    field manager string nbsp  nbsp  nbsp  nbsp  nbsp Default   kubectl client side apply   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the manager used to track field ownership   p   td    tr    tr   td colspan  2   f    filename strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Filename  directory  or URL to files to use to edit the resource  p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for edit last applied  p   td    tr    tr   td colspan  2   k    kustomize string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the kustomization directory  This flag can t be used together with  f or  R   p   td   
 tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Output format  One of   json  yaml  name  go template  go template file  template  templatefile  jsonpath  jsonpath as json  jsonpath file    p   td    tr    tr   td colspan  2   R    recursive  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Process the directory used in  f    filename recursively  Useful when you want to manage related manifests organized within the same directory   p   td    tr    tr   td colspan  2    show managed fields  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  keep the managedFields when printing objects in JSON or YAML format   p   td    tr    tr   td colspan  2    template string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Template string or path to template file to use when  o go template   o go template file  The template format is golang templates  http   golang org pkg text template  pkg overview    p   td    tr    tr   td colspan  2    validate string   strict   nbsp  nbsp  nbsp  nbsp  nbsp Default   strict   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Must be one of  strict  or true   warn  ignore  or false   br   quot true quot  or  quot strict quot  will use a schema to validate the input and fail the request if invalid  It will perform server side validation if ServerSideFieldValidation is enabled on the api server  but will fall back to less reliable client side validation if not  br   quot warn quot  will warn about unknown or duplicate fields without blocking the request if server side field validation is enabled on the API server  and behave as  quot ignore quot  otherwise  br   quot false quot  or  quot ignore quot  will not perform any schema validation  silently dropping any unknown or duplicate fields   p   td    tr    tr   td colspan  2    
windows line endings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Defaults to the line ending native to your platform   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for TLS  p   td    tr    tr   td colspan  2    client key string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client key file for TLS  p   td    tr    tr   td colspan  2    cluster string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig cluster to use  p   td    tr    tr   td colspan  2    context string  td    tr   tr   
td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig context to use  p   td    tr    tr   td colspan  2    default not ready toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for notReady NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    default unreachable toleration seconds int nbsp  nbsp  nbsp  nbsp  nbsp Default  300  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Indicates the tolerationSeconds of the toleration for unreachable NoExecute that is added by default to every pod that does not already have such a toleration   p   td    tr    tr   td colspan  2    disable compression  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  opt out of response compression for all requests to the server  p   td    tr    tr   td colspan  2    insecure skip tls verify  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If true  the server s certificate will not be checked for validity  This will make your HTTPS connections insecure  p   td    tr    tr   td colspan  2    kubeconfig string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to the kubeconfig file to use for CLI requests   p   td    tr    tr   td colspan  2    match server version  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Require server version to match client version  p   td    tr    tr   td colspan  2   n    namespace string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If present  the namespace scope for this CLI request  p   td    tr    tr   td colspan  2    password string  td    tr   tr   td   td  td style  line height  130   word 
wrap  break word    p Password for basic authentication to the API server  p   td    tr    tr   td colspan  2    profile string nbsp  nbsp  nbsp  nbsp  nbsp Default   none   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of profile to capture  One of  none cpu heap goroutine threadcreate block mutex   p   td    tr    tr   td colspan  2    profile output string nbsp  nbsp  nbsp  nbsp  nbsp Default   profile pprof   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Name of the file to write the profile to  p   td    tr    tr   td colspan  2    request timeout string nbsp  nbsp  nbsp  nbsp  nbsp Default   0   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The length of time to wait before giving up on a single server request  Non zero values should contain a corresponding time unit  e g  1s  2m  3h   A value of zero means don t timeout requests   p   td    tr    tr   td colspan  2   s    server string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The address and port of the Kubernetes API server  p   td    tr    tr   td colspan  2    storage driver buffer duration duration nbsp  nbsp  nbsp  nbsp  nbsp Default  1m0s  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Writes in the storage driver will be buffered for this duration  and committed to the non memory backends as a single transaction  p   td    tr    tr   td colspan  2    storage driver db string nbsp  nbsp  nbsp  nbsp  nbsp Default   cadvisor   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database name  p   td    tr    tr   td colspan  2    storage driver host string nbsp  nbsp  nbsp  nbsp  nbsp Default   localhost 8086   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database host port  p   td    tr    tr   td colspan  2    storage driver password string nbsp  nbsp  nbsp 
 nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database password  p   td    tr    tr   td colspan  2    storage driver secure  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p use secure connection with database  p   td    tr    tr   td colspan  2    storage driver table string nbsp  nbsp  nbsp  nbsp  nbsp Default   stats   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p table name  p   td    tr    tr   td colspan  2    storage driver user string nbsp  nbsp  nbsp  nbsp  nbsp Default   root   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p database username  p   td    tr    tr   td colspan  2    tls server name string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Server name to use for server certificate validation  If it is not provided  the hostname used to contact the server is used  p   td    tr    tr   td colspan  2    token string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Bearer token for authentication to the API server  p   td    tr    tr   td colspan  2    user string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p The name of the kubeconfig user to use  p   td    tr    tr   td colspan  2    username string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username for basic authentication to the API server  p   td    tr    tr   td colspan  2    version version  true   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p   version    version raw prints version information and quits    version vX Y Z    sets the reported version  p   td    tr    tr   td colspan  2    warnings as errors  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Treat warnings received from the server as errors and exit with a 
non zero exit code  p   td    tr     tbody    table             kubectl apply          Apply a configuration to a resource by file name or stdin "}
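As a usage sketch tying the commands above together (these lines assume a live cluster and an object previously created with `kubectl apply`; the deployment name `nginx` is illustrative, and the `go-template` read is an alternative view of the same data, since `apply` stores it in the `kubectl.kubernetes.io/last-applied-configuration` annotation):

```shell
# Inspect the saved last-applied configuration (YAML by default, JSON via -o).
kubectl apply view-last-applied deployment/nginx
kubectl apply view-last-applied deployment/nginx -o json

# Open the same annotation in $KUBE_EDITOR (or $EDITOR, else vi) and save edits.
kubectl apply edit-last-applied deployment/nginx

# Equivalent raw read of the annotation that backs both commands above.
kubectl get deployment nginx \
  -o go-template='{{index .metadata.annotations "kubectl.kubernetes.io/last-applied-configuration"}}'
```

Editing the annotation changes only what the next `kubectl apply` diffs against; it does not mutate the live spec by itself.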
{"questions":"kubernetes reference The file is auto generated from the Go source code of the component using a generic contenttype tool reference weight 30 autogenerated true title kubectl apply view last applied","answers":"---\ntitle: kubectl apply view-last-applied\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nView the latest last-applied-configuration annotations by type\/name or file.\n\n The default output will be printed to stdout in YAML format. 
You can use the -o option to change the output format.\n\n```\nkubectl apply view-last-applied (TYPE [NAME | -l label] | TYPE\/NAME | -f FILENAME)\n```\n\n## \n\n```\n  # View the last-applied-configuration annotations by type\/name in YAML\n  kubectl apply view-last-applied deployment\/nginx\n  \n  # View the last-applied-configuration annotations by file in JSON\n  kubectl apply view-last-applied -f deploy.yaml -o json\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all resources in the namespace of the specified resource types<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Filename, directory, or URL to files that contains the last-applied-configuration annotations<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for view-last-applied<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"yaml\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. Must be one of (yaml, json)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. 
Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl apply](..\/)\t - Apply a configuration to a resource by file name or stdin\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference title kubectl apply weight 30 autogenerated true","answers":"---\ntitle: kubectl apply\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nApply a configuration to a resource by file name or stdin. The resource name must be specified. This resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.\n\n JSON and YAML formats are accepted.\n\n Alpha Disclaimer: the --prune functionality is not yet complete. Do not use unless you are aware of what the current state is. See https:\/\/issues.k8s.io\/34274.\n\n```\nkubectl apply (-f FILENAME | -k DIRECTORY)\n```\n\n## \n\n```\n  # Apply the configuration in pod.json to a pod\n  kubectl apply -f .\/pod.json\n  \n  # Apply resources from a directory containing kustomization.yaml - e.g. 
dir\/kustomization.yaml\n  kubectl apply -k dir\/\n  \n  # Apply the JSON passed into stdin to a pod\n  cat pod.json | kubectl apply -f -\n  \n  # Apply the configuration from all files that end with '.json'\n  kubectl apply -f '*.json'\n  \n  # Note: --prune is still in Alpha\n  # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx\n  kubectl apply --prune -f manifest.yaml -l app=nginx\n  \n  # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file\n  kubectl apply --prune -f manifest.yaml --all --prune-allowlist=core\/v1\/ConfigMap\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--all<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Select all resources in the namespace of the specified resource types.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--allow-missing-template-keys&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cascade string[=\"background\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"background\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;background&quot;, &quot;orphan&quot;, or &quot;foreground&quot;. Selects the deletion cascading strategy for the dependents (e.g. Pods created by a ReplicationController). 
Defaults to background.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--dry-run string[=\"unchanged\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be &quot;none&quot;, &quot;server&quot;, or &quot;client&quot;. If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--field-manager string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"kubectl-client-side-apply\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the manager used to track field ownership.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-f, --filename strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The files that contain the configurations to apply.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--force<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--force-conflicts<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, server-side apply will force the changes against conflicts.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--grace-period int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: -1<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. 
Can only be set to 0 when --force is true (force deletion).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for apply<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-k, --kustomize string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process a kustomization directory. This flag can't be used together with -f or -R.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--openapi-patch&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, use openapi to calculate diff when the openapi presents and the resource can be found in the openapi spec. Otherwise, fall back to use baked-in types.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--overwrite&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: true<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Automatically resolve conflicts between the modified and live configuration by using values from the modified configuration<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--prune<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Automatically delete resource objects, that do not appear in the configs and are created by either apply or create --save-config. 
Should be used with either -l or --all.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--prune-allowlist strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Overwrite the default allowlist with &lt;group\/version\/kind&gt; for --prune<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-R, --recursive<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-l, --selector string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--server-side<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, apply runs in the server instead of the client.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--show-managed-fields<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, keep the managedFields when printing objects in JSON or YAML format.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--template string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang templates [http:\/\/golang.org\/pkg\/text\/template\/#pkg-overview].<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--timeout duration<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--validate string[=\"strict\"]&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"strict\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Must be one of: strict (or true), warn, ignore (or false).<br\/>&quot;true&quot; or &quot;strict&quot; will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.<br\/>&quot;warn&quot; will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as &quot;ignore&quot; otherwise.<br\/>&quot;false&quot; or &quot;ignore&quot; will not perform any schema validation, silently dropping any unknown or duplicate fields.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--wait<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, wait for resources to be gone before returning. This waits for finalizers.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n* [kubectl apply edit-last-applied](kubectl_apply_edit-last-applied\/)\t - Edit latest last-applied-configuration annotations of a resource\/object\n* [kubectl apply set-last-applied](kubectl_apply_set-last-applied\/)\t - Set the last-applied-configuration annotation on a live object to match the contents of a file\n* [kubectl apply view-last-applied](kubectl_apply_view-last-applied\/)\t - View the latest last-applied-configuration annotations of a resource\/object\n","site":"kubernetes reference"}
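The kubectl apply reference above states that `-f` accepts JSON and YAML manifests, and its `--prune` example filters on the label `app=nginx`. As an illustrative sketch (the manifest below is hypothetical, not taken from the reference), a minimal Pod that could be applied with `kubectl apply -f pod.yaml` and that would match that `-l app=nginx` selector:

```yaml
# Hypothetical minimal manifest; apply with: kubectl apply -f pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx   # matches the -l app=nginx selector used in the --prune example
spec:
  containers:
  - name: nginx
    image: nginx:1.25
```

With `--dry-run=client` the object is only printed, not sent; with `--dry-run=server` it is validated server-side without being persisted, as described in the `--dry-run` flag entry above.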
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 title kubectl version autogenerated true","answers":"---\ntitle: kubectl version\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nPrint the client and server version information for the current context.\n\n```\nkubectl version [flags]\n```\n\n## \n\n```\n  # Print the client and server versions for the current context\n  kubectl version\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--client<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, shows client version only (no server required).<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>One of 'yaml' or 'json'.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" 
\/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. 
This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). 
A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td 
colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference"}
{"questions":"kubernetes reference nolist true contenttype tool reference weight 30 autogenerated true title kubectl kustomize","answers":"---\ntitle: kubectl kustomize\ncontent_type: tool-reference\nweight: 30\nauto_generated: true\nno_list: true\n---\n\n\n<!--\nThe file is auto-generated from the Go source code of the component using a generic\n[generator](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/). To learn how\nto generate the reference documentation, please read\n[Contributing to the reference documentation](\/docs\/contribute\/generate-ref-docs\/).\nTo update the reference content, please follow the\n[Contributing upstream](\/docs\/contribute\/generate-ref-docs\/contribute-upstream\/)\nguide. You can file document formatting bugs against the\n[reference-docs](https:\/\/github.com\/kubernetes-sigs\/reference-docs\/) project.\n-->\n\n\n## \n\n\nBuild a set of KRM resources using a 'kustomization.yaml' file. The DIR argument must be a path to a directory containing 'kustomization.yaml', or a git repository URL with a path suffix specifying same with respect to the repository root. If DIR is omitted, '.' 
is assumed.\n\n```\nkubectl kustomize DIR [flags]\n```\n\n## \n\n```\n  # Build the current working directory\n  kubectl kustomize\n  \n  # Build some shared configuration directory\n  kubectl kustomize \/home\/config\/production\n  \n  # Build from github\n  kubectl kustomize https:\/\/github.com\/kubernetes-sigs\/kustomize.git\/examples\/helloWorld?ref=v1.0.6\n```\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as-current-user<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use the uid and gid of the command executor to run the function in the container<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-alpha-plugins<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>enable kustomize plugins<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--enable-helm<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Enable use of the Helm chart inflator generator.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-e, --env strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>a list of environment variables to be used by functions<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--helm-api-versions strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Kubernetes api versions used by Helm for Capabilities.APIVersions<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--helm-command string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"helm\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>helm command (path to executable)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--helm-kube-version string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Kubernetes version used 
by Helm for Capabilities.KubeVersion<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-h, --help<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>help for kustomize<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--load-restrictor string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"LoadRestrictionsRootOnly\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>if set to 'LoadRestrictionsNone', local kustomizations may load files from outside their root. This does, however, break the relocatability of the kustomization.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--mount strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>a list of storage options read from the filesystem<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--network<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>enable network access for functions that declare it<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--network-name string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"bridge\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>the docker network to run the container in<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-o, --output string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If specified, write output to this path.<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n   <table style=\"width: 100%; table-layout: fixed;\">\n<colgroup>\n<col span=\"1\" style=\"width: 10px;\" \/>\n<col span=\"1\" \/>\n<\/colgroup>\n<tbody>\n\n<tr>\n<td colspan=\"2\">--as string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username to impersonate for the operation. 
User could be a regular user or a service account in a namespace.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-group strings<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Group to impersonate for the operation, this flag can be repeated to specify multiple groups.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--as-uid string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>UID to impersonate for the operation.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cache-dir string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"$HOME\/.kube\/cache\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Default cache directory<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--certificate-authority string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a cert file for the certificate authority<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-certificate string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client certificate file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--client-key string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to a client key file for TLS<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--cluster string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig cluster to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--context string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig context to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-not-ready-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the 
tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--default-unreachable-toleration-seconds int&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 300<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--disable-compression<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, opt-out of response compression for all requests to the server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--insecure-skip-tls-verify<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--kubeconfig string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Path to the kubeconfig file to use for CLI requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--match-server-version<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Require server version to match client version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-n, --namespace string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>If present, the namespace scope for this CLI request<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--password string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Password for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"none\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--profile-output string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"profile.pprof\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Name of the file to write the profile to<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--request-timeout string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"0\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">-s, --server string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The address and port of the Kubernetes API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-buffer-duration duration&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 1m0s<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-db string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"cadvisor\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-host string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"localhost:8086\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database host:port<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-password string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: 
\"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database password<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-secure<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>use secure connection with database<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-table string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"stats\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>table name<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--storage-driver-user string&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Default: \"root\"<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>database username<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--tls-server-name string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--token string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Bearer token for authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--user string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>The name of the kubeconfig user to use<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--username string<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Username for basic authentication to the API server<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--version version[=true]<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>--version, --version=raw prints version information and quits; --version=vX.Y.Z... 
sets the reported version<\/p><\/td>\n<\/tr>\n\n<tr>\n<td colspan=\"2\">--warnings-as-errors<\/td>\n<\/tr>\n<tr>\n<td><\/td><td style=\"line-height: 130%; word-wrap: break-word;\"><p>Treat warnings received from the server as errors and exit with a non-zero exit code<\/p><\/td>\n<\/tr>\n\n<\/tbody>\n<\/table>\n\n\n\n## \n\n* [kubectl](..\/kubectl\/)\t - kubectl controls the Kubernetes cluster manager\n","site":"kubernetes reference","answers_cleaned":"    title  kubectl kustomize content type  tool reference weight  30 auto generated  true no list  true            The file is auto generated from the Go source code of the component using a generic  generator  https   github com kubernetes sigs reference docs    To learn how to generate the reference documentation  please read  Contributing to the reference documentation   docs contribute generate ref docs    To update the reference content  please follow the  Contributing upstream   docs contribute generate ref docs contribute upstream   guide  You can file document formatting bugs against the  reference docs  https   github com kubernetes sigs reference docs   project              Build a set of KRM resources using a  kustomization yaml  file  The DIR argument must be a path to a directory containing  kustomization yaml   or a git repository URL with a path suffix specifying same with respect to the repository root  If DIR is omitted      is assumed       kubectl kustomize DIR  flags                    Build the current working directory   kubectl kustomize        Build some shared configuration directory   kubectl kustomize  home config production        Build from github   kubectl kustomize https   github com kubernetes sigs kustomize git examples helloWorld ref v1 0 6               table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as current user  td    tr   tr   td   td  td style  line height  130   word 
wrap  break word    p use the uid and gid of the command executor to run the function in the container  p   td    tr    tr   td colspan  2    enable alpha plugins  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p enable kustomize plugins  p   td    tr    tr   td colspan  2    enable helm  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Enable use of the Helm chart inflator generator   p   td    tr    tr   td colspan  2   e    env strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p a list of environment variables to be used by functions  p   td    tr    tr   td colspan  2    helm api versions strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Kubernetes api versions used by Helm for Capabilities APIVersions  p   td    tr    tr   td colspan  2    helm command string nbsp  nbsp  nbsp  nbsp  nbsp Default   helm   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p helm command  path to executable   p   td    tr    tr   td colspan  2    helm kube version string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Kubernetes version used by Helm for Capabilities KubeVersion  p   td    tr    tr   td colspan  2   h    help  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p help for kustomize  p   td    tr    tr   td colspan  2    load restrictor string nbsp  nbsp  nbsp  nbsp  nbsp Default   LoadRestrictionsRootOnly   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p if set to  LoadRestrictionsNone   local kustomizations may load files from outside their root  This does  however  break the relocatability of the kustomization   p   td    tr    tr   td colspan  2    mount strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p a list of storage options read from the filesystem  p   td  
  tr    tr   td colspan  2    network  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p enable network access for functions that declare it  p   td    tr    tr   td colspan  2    network name string nbsp  nbsp  nbsp  nbsp  nbsp Default   bridge   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p the docker network to run the container in  p   td    tr    tr   td colspan  2   o    output string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p If specified  write output to this path   p   td    tr     tbody    table              table style  width  100   table layout  fixed     colgroup   col span  1  style  width  10px       col span  1       colgroup   tbody    tr   td colspan  2    as string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Username to impersonate for the operation  User could be a regular user or a service account in a namespace   p   td    tr    tr   td colspan  2    as group strings  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Group to impersonate for the operation  this flag can be repeated to specify multiple groups   p   td    tr    tr   td colspan  2    as uid string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p UID to impersonate for the operation   p   td    tr    tr   td colspan  2    cache dir string nbsp  nbsp  nbsp  nbsp  nbsp Default    HOME  kube cache   td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Default cache directory  p   td    tr    tr   td colspan  2    certificate authority string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a cert file for the certificate authority  p   td    tr    tr   td colspan  2    client certificate string  td    tr   tr   td   td  td style  line height  130   word wrap  break word    p Path to a client certificate file for 
TLS

| Flag | Default | Description |
| --- | --- | --- |
| `--client-key string` |  | Path to a client key file for TLS |
| `--cluster string` |  | The name of the kubeconfig cluster to use |
| `--context string` |  | The name of the kubeconfig context to use |
| `--default-not-ready-toleration-seconds int` | 300 | Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration |
| `--default-unreachable-toleration-seconds int` | 300 | Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration |
| `--disable-compression` |  | If true, opt out of response compression for all requests to the server |
| `--insecure-skip-tls-verify` |  | If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure |
| `--kubeconfig string` |  | Path to the kubeconfig file to use for CLI requests |
| `--match-server-version` |  | Require server version to match client version |
| `-n, --namespace string` |  | If present, the namespace scope for this CLI request |
| `--password string` |  | Password for basic authentication to the API server |
| `--profile string` | "none" | Name of profile to capture. One of: none, cpu, heap, goroutine, threadcreate, block, mutex |
| `--profile-output string` | "profile.pprof" | Name of the file to write the profile to |
| `--request-timeout string` | "0" | The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests |
| `-s, --server string` |  | The address and port of the Kubernetes API server |
| `--storage-driver-buffer-duration duration` | 1m0s | Writes in the storage driver will be buffered for this duration, and committed to the non-memory backends as a single transaction |
| `--storage-driver-db string` | "cadvisor" | Database name |
| `--storage-driver-host string` | "localhost:8086" | Database host:port |
| `--storage-driver-password string` | "root" | Database password |
| `--storage-driver-secure` |  | Use secure connection with database |
| `--storage-driver-table string` | "stats" | Table name |
| `--storage-driver-user string` | "root" | Database username |
| `--tls-server-name string` |  | Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used |
| `--token string` |  | Bearer token for authentication to the API server |
| `--user string` |  | The name of the kubeconfig user to use |
| `--username string` |  | Username for basic authentication to the API server |
| `--version version[=true]` |  | --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version |
| `--warnings-as-errors` |  | Treat warnings received from the server as errors and exit with a non-zero exit code |

***
## kubectl

kubectl controls the Kubernetes cluster manager "}
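The `--request-timeout` format documented above (a number plus a time unit, with `0` meaning no timeout) can be sketched as follows. This is an illustrative Python sketch, not kubectl's actual Go implementation, and it handles only the single-unit forms named in the flag description (`1s`, `2m`, `3h`):

```python
# Illustrative sketch of kubectl's --request-timeout value format:
# a non-zero value needs a time unit (e.g. 1s, 2m, 3h); "0" disables the timeout.
import re

_UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600}

def request_timeout_seconds(value: str):
    """Return the timeout in seconds, or None when the value means 'no timeout'."""
    if value == "0":
        return None  # a value of zero means don't time out requests
    match = re.fullmatch(r"(\d+)([smh])", value)
    if match is None:
        raise ValueError(f"expected a duration like 1s, 2m, or 3h, got {value!r}")
    amount, unit = match.groups()
    return int(amount) * _UNIT_SECONDS[unit]
```

Real Go-style durations also accept smaller units such as `ms` and combined values like `1h30m`; the sketch covers only the simple forms shown in the flag description.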
{"questions":"kubernetes reference liggitt deads2k erictune contenttype concept weight 10 reviewers title Authenticating lavalamp","answers":"---\nreviewers:\n- erictune\n- lavalamp\n- deads2k\n- liggitt\ntitle: Authenticating\ncontent_type: concept\nweight: 10\n---\n\n<!-- overview -->\nThis page provides an overview of authentication.\n\n<!-- body -->\n## Users in Kubernetes\n\nAll Kubernetes clusters have two categories of users: service accounts managed\nby Kubernetes, and normal users.\n\nIt is assumed that a cluster-independent service manages normal users in the following ways:\n\n- an administrator distributing private keys\n- a user store like Keystone or Google Accounts\n- a file with a list of usernames and passwords\n\nIn this regard, _Kubernetes does not have objects which represent normal user accounts._\nNormal users cannot be added to a cluster through an API call.\n\nEven though a normal user cannot be added via an API call, any user that\npresents a valid certificate signed by the cluster's certificate authority\n(CA) is considered authenticated. In this configuration, Kubernetes determines\nthe username from the common name field in the 'subject' of the cert (e.g.,\n\"\/CN=bob\"). From there, the role-based access control (RBAC) sub-system would\ndetermine whether the user is authorized to perform a specific operation on a\nresource. For more details, refer to the normal users topic in\n[certificate request](\/docs\/reference\/access-authn-authz\/certificate-signing-requests\/#normal-user).\n\nIn contrast, service accounts are users managed by the Kubernetes API. They are\nbound to specific namespaces, and created automatically by the API server or\nmanually through API calls. 
Service accounts are tied to a set of credentials\nstored as `Secrets`, which are mounted into pods allowing in-cluster processes\nto talk to the Kubernetes API.\n\nAPI requests are tied to either a normal user or a service account, or are treated\nas [anonymous requests](#anonymous-requests). This means every process inside or outside the cluster, from\na human user typing `kubectl` on a workstation, to `kubelets` on nodes, to members\nof the control plane, must authenticate when making requests to the API server,\nor be treated as an anonymous user.\n\n## Authentication strategies\n\nKubernetes uses client certificates, bearer tokens, or an authenticating proxy to\nauthenticate API requests through authentication plugins. As HTTP requests are\nmade to the API server, plugins attempt to associate the following attributes\nwith the request:\n\n* Username: a string which identifies the end user. Common values might be `kube-admin` or `jane@example.com`.\n* UID: a string which identifies the end user and attempts to be more consistent and unique than username.\n* Groups: a set of strings, each of which indicates the user's membership in a named logical collection of users.\n  Common values might be `system:masters` or `devops-team`.\n* Extra fields: a map of strings to list of strings which holds additional information authorizers may find useful.\n\nAll values are opaque to the authentication system and only hold significance\nwhen interpreted by an [authorizer](\/docs\/reference\/access-authn-authz\/authorization\/).\n\nYou can enable multiple authentication methods at once. 
You should usually use at least two methods:\n\n- service account tokens for service accounts\n- at least one other method for user authentication.\n\nWhen multiple authenticator modules are enabled, the first module\nto successfully authenticate the request short-circuits evaluation.\nThe API server does not guarantee the order authenticators run in.\n\nThe `system:authenticated` group is included in the list of groups for all authenticated users.\n\nIntegrations with other authentication protocols (LDAP, SAML, Kerberos, alternate x509 schemes, etc)\ncan be accomplished using an [authenticating proxy](#authenticating-proxy) or the\n[authentication webhook](#webhook-token-authentication).\n\n### X509 client certificates\n\nClient certificate authentication is enabled by passing the `--client-ca-file=SOMEFILE`\noption to API server. The referenced file must contain one or more certificate authorities\nto use to validate client certificates presented to the API server. If a client certificate\nis presented and verified, the common name of the subject is used as the user name for the\nrequest. As of Kubernetes 1.4, client certificates can also indicate a user's group memberships\nusing the certificate's organization fields. To include multiple group memberships for a user,\ninclude multiple organization fields in the certificate.\n\nFor example, using the `openssl` command line tool to generate a certificate signing request:\n\n```bash\nopenssl req -new -key jbeda.pem -out jbeda-csr.pem -subj \"\/CN=jbeda\/O=app1\/O=app2\"\n```\n\nThis would create a CSR for the username \"jbeda\", belonging to two groups, \"app1\" and \"app2\".\n\nSee [Managing Certificates](\/docs\/tasks\/administer-cluster\/certificates\/) for how to generate a client cert.\n\n### Static token file\n\nThe API server reads bearer tokens from a file when given the `--token-auth-file=SOMEFILE` option\non the command line. 
Currently, tokens last indefinitely, and the token list cannot be\nchanged without restarting the API server.\n\nThe token file is a CSV file with a minimum of 3 columns: token, user name, user uid,\nfollowed by optional group names.\n\n\nIf you have more than one group, the column must be double-quoted, e.g.\n\n```conf\ntoken,user,uid,\"group1,group2,group3\"\n```\n\n\n#### Putting a bearer token in a request\n\nWhen using bearer token authentication from an HTTP client, the API\nserver expects an `Authorization` header with a value of `Bearer\n<token>`. The bearer token must be a character sequence that can be\nput in an HTTP header value using no more than the encoding and\nquoting facilities of HTTP. For example: if the bearer token is\n`31ada4fd-adec-460c-809a-9e56ceb75269` then it would appear in an HTTP\nheader as shown below.\n\n```http\nAuthorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269\n```\n\n### Bootstrap tokens\n\n\n\nTo allow for streamlined bootstrapping for new clusters, Kubernetes includes a\ndynamically-managed Bearer token type called a *Bootstrap Token*. These tokens\nare stored as Secrets in the `kube-system` namespace, where they can be\ndynamically managed and created. Controller Manager contains a TokenCleaner\ncontroller that deletes bootstrap tokens as they expire.\n\nThe tokens are of the form `[a-z0-9]{6}.[a-z0-9]{16}`. The first component is a\nToken ID and the second component is the Token Secret. You specify the token\nin an HTTP header as follows:\n\n```http\nAuthorization: Bearer 781292.db7bc3a58fc5f07e\n```\n\nYou must enable the Bootstrap Token Authenticator with the\n`--enable-bootstrap-token-auth` flag on the API Server. You must enable\nthe TokenCleaner controller via the `--controllers` flag on the Controller
This is done with something like `--controllers=*,tokencleaner`.\n`kubeadm` will do this for you if you are using it to bootstrap a cluster.\n\nThe authenticator authenticates as `system:bootstrap:<Token ID>`. It is\nincluded in the `system:bootstrappers` group. The naming and groups are\nintentionally limited to discourage users from using these tokens past\nbootstrapping. The user names and group can be used (and are used by `kubeadm`)\nto craft the appropriate authorization policies to support bootstrapping a\ncluster.\n\nPlease see [Bootstrap Tokens](\/docs\/reference\/access-authn-authz\/bootstrap-tokens\/) for in depth\ndocumentation on the Bootstrap Token authenticator and controllers along with\nhow to manage these tokens with `kubeadm`.\n\n### Service account tokens\n\nA service account is an automatically enabled authenticator that uses signed\nbearer tokens to verify requests. The plugin takes two optional flags:\n\n* `--service-account-key-file` File containing PEM-encoded x509 RSA or ECDSA\n  private or public keys, used to verify ServiceAccount tokens. The specified file\n  can contain multiple keys, and the flag can be specified multiple times with\n  different files. If unspecified, --tls-private-key-file is used.\n* `--service-account-lookup` If enabled, tokens which are deleted from the API will be revoked.\n\nService accounts are usually created automatically by the API server and\nassociated with pods running in the cluster through the `ServiceAccount`\n[Admission Controller](\/docs\/reference\/access-authn-authz\/admission-controllers\/). Bearer tokens are\nmounted into pods at well-known locations, and allow in-cluster processes to\ntalk to the API server. 
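As a rough illustration of that flow (the mount path below is the conventional in-cluster location of the projected token; `bearer_header` and `load_in_cluster_token` are hypothetical helper names for this sketch), a process in a pod could read the mounted token and present it in the `Authorization` header:

```python
# Sketch: reading the service account token mounted into a pod and
# turning it into the bearer header the API server expects.
import os

# Conventional projected location of the service account token inside a pod.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def bearer_header(token: str) -> dict:
    # The API server expects: Authorization: Bearer <token>
    return {"Authorization": f"Bearer {token}"}

def load_in_cluster_token(path: str = TOKEN_PATH):
    # Returns None when the file is absent, i.e. not running inside a pod.
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return f.read().strip()
```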
Accounts may be explicitly associated with pods using the\n`serviceAccountName` field of a `PodSpec`.\n\n\n`serviceAccountName` is usually omitted because this is done automatically.\n\n\n```yaml\napiVersion: apps\/v1 # this apiVersion is relevant as of Kubernetes 1.9\nkind: Deployment\nmetadata:\n  name: nginx-deployment\n  namespace: default\nspec:\n  replicas: 3\n  template:\n    metadata:\n    # ...\n    spec:\n      serviceAccountName: bob-the-bot\n      containers:\n      - name: nginx\n        image: nginx:1.14.2\n```\n\nService account bearer tokens are perfectly valid to use outside the cluster and\ncan be used to create identities for long standing jobs that wish to talk to the\nKubernetes API. To manually create a service account, use the `kubectl create\nserviceaccount (NAME)` command. This creates a service account in the current\nnamespace.\n\n```bash\nkubectl create serviceaccount jenkins\n```\n\n```none\nserviceaccount\/jenkins created\n```\n\nCreate an associated token:\n\n```bash\nkubectl create token jenkins\n```\n\n```none\neyJhbGciOiJSUzI1NiIsImtp...\n```\n\nThe created token is a signed JSON Web Token (JWT).\n\nThe signed JWT can be used as a bearer token to authenticate as the given service\naccount. See [above](#putting-a-bearer-token-in-a-request) for how the token is included\nin a request. Normally these tokens are mounted into pods for in-cluster access to\nthe API server, but can be used from outside the cluster as well.\n\nService accounts authenticate with the username `system:serviceaccount:(NAMESPACE):(SERVICEACCOUNT)`,\nand are assigned to the groups `system:serviceaccounts` and `system:serviceaccounts:(NAMESPACE)`.\n\n\nBecause service account tokens can also be stored in Secret API objects, any user with\nwrite access to Secrets can request a token, and any user with read access to those\nSecrets can authenticate as the service account. 
Be cautious when granting permissions\nto service accounts and read or write capabilities for Secrets.\n\n\n### OpenID Connect Tokens\n\n[OpenID Connect](https:\/\/openid.net\/connect\/) is a flavor of OAuth2 supported by\nsome OAuth2 providers, notably Microsoft Entra ID, Salesforce, and Google.\nThe protocol's main extension of OAuth2 is an additional field returned with\nthe access token called an [ID Token](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#IDToken).\nThis token is a JSON Web Token (JWT) with well-known fields, such as a user's\nemail, signed by the server.\n\nTo identify the user, the authenticator uses the `id_token` (not the `access_token`)\nfrom the OAuth2 [token response](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#TokenResponse)\nas a bearer token. See [above](#putting-a-bearer-token-in-a-request) for how the token\nis included in a request.\n\n```mermaid\nsequenceDiagram\n    participant user as User\n    participant idp as Identity Provider\n    participant kube as kubectl\n    participant api as API Server\n\n    user ->> idp: 1. Log in to IdP\n    activate idp\n    idp -->> user: 2. Provide access_token,<br>id_token, and refresh_token\n    deactivate idp\n    activate user\n    user ->> kube: 3. Call kubectl<br>with --token being the id_token<br>OR add tokens to .kube\/config\n    deactivate user\n    activate kube\n    kube ->> api: 4. Authorization: Bearer...\n    deactivate kube\n    activate api\n    api ->> api: 5. Is JWT signature valid?\n    api ->> api: 6. Has the JWT expired? (iat+exp)\n    api ->> api: 7. User authorized?\n    api -->> kube: 8. Authorized: Perform<br>action and return result\n    deactivate api\n    activate kube\n    kube --x user: 9. Return result\n    deactivate kube\n```\n\n1. Log in to your identity provider\n1. Your identity provider will provide you with an `access_token`, `id_token` and a `refresh_token`\n1. 
When using `kubectl`, use your `id_token` with the `--token` flag or add it directly to your `kubeconfig`\n1. `kubectl` sends your `id_token` in a header called Authorization to the API server\n1. The API server will make sure the JWT signature is valid\n1. Check to make sure the `id_token` hasn't expired\n\n   Perform claim and\/or user validation if CEL expressions are configured with `AuthenticationConfiguration`.\n\n1. Make sure the user is authorized\n1. Once authorized the API server returns a response to `kubectl`\n1. `kubectl` provides feedback to the user\n\nSince all of the data needed to validate who you are is in the `id_token`, Kubernetes doesn't need to\n\"phone home\" to the identity provider. In a model where every request is stateless this provides a\nvery scalable solution for authentication. It does offer a few challenges:\n\n1. Kubernetes has no \"web interface\" to trigger the authentication process. There is no browser or\n   interface to collect credentials which is why you need to authenticate to your identity provider first.\n1. The `id_token` can't be revoked, it's like a certificate so it should be short-lived (only a few minutes)\n   so it can be very annoying to have to get a new token every few minutes.\n1. To authenticate to the Kubernetes dashboard, you must use the `kubectl proxy` command or a reverse proxy\n   that injects the `id_token`.\n\n#### Configuring the API Server\n\n##### Using flags\n\nTo enable the plugin, configure the following flags on the API server:\n\n| Parameter | Description | Example | Required |\n| --------- | ----------- | ------- | ------- |\n| `--oidc-issuer-url` | URL of the provider that allows the API server to discover public signing keys. Only URLs that use the `https:\/\/` scheme are accepted. This is typically the provider's discovery URL, changed to have an empty path. 
| If the issuer's OIDC discovery URL is `https:\/\/accounts.provider.example\/.well-known\/openid-configuration`, the value should be `https:\/\/accounts.provider.example` | Yes |\n| `--oidc-client-id` |  A client id that all tokens must be issued for. | kubernetes | Yes |\n| `--oidc-username-claim` | JWT claim to use as the user name. By default `sub`, which is expected to be a unique identifier of the end user. Admins can choose other claims, such as `email` or `name`, depending on their provider. However, claims other than `email` will be prefixed with the issuer URL to prevent naming clashes with other plugins. | sub | No |\n| `--oidc-username-prefix` | Prefix prepended to username claims to prevent clashes with existing names (such as `system:` users). For example, the value `oidc:` will create usernames like `oidc:jane.doe`. If this flag isn't provided and `--oidc-username-claim` is a value other than `email` the prefix defaults to `( Issuer URL )#` where `( Issuer URL )` is the value of `--oidc-issuer-url`. The value `-` can be used to disable all prefixing. | `oidc:` | No |\n| `--oidc-groups-claim` | JWT claim to use as the user's group. If the claim is present it must be an array of strings. | groups | No |\n| `--oidc-groups-prefix` | Prefix prepended to group claims to prevent clashes with existing names (such as `system:` groups). For example, the value `oidc:` will create group names like `oidc:engineering` and `oidc:infra`. | `oidc:` | No |\n| `--oidc-required-claim` | A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims. | `claim=value` | No |\n| `--oidc-ca-file` | The path to the certificate for the CA that signed your identity provider's web certificate. Defaults to the host's root CAs. | `\/etc\/kubernetes\/ssl\/kc-ca.pem` | No |\n| `--oidc-signing-algs` | The signing algorithms accepted. Default is \"RS256\". 
| `RS512` | No |\n\n##### Authentication configuration from a file {#using-authentication-configuration}\n\n\n\nThe JWT authenticator authenticates Kubernetes users using JWT-compliant tokens.\nThe authenticator attempts to parse a raw ID token and verify that it was signed by the configured issuer.\nThe public key to verify the signature is discovered from the issuer's public endpoint using OIDC discovery.\n\nThe minimum valid JWT payload must contain the following claims:\n\n```json\n{\n  \"iss\": \"https:\/\/example.com\",   \/\/ must match the issuer.url\n  \"aud\": [\"my-app\"],              \/\/ at least one of the entries in issuer.audiences must match the \"aud\" claim in presented JWTs.\n  \"exp\": 1234567890,              \/\/ token expiration as Unix time (the number of seconds elapsed since January 1, 1970 UTC)\n  \"<username-claim>\": \"user\"      \/\/ this is the username claim configured in the claimMappings.username.claim or claimMappings.username.expression\n}\n```\n\nThe configuration file approach allows you to configure multiple JWT authenticators, each with a unique\n`issuer.url` and `issuer.discoveryURL`. The configuration file even allows you to specify [CEL](\/docs\/reference\/using-api\/cel\/)\nexpressions to map claims to user attributes, and to validate claims and user information.\nThe API server also automatically reloads the authenticators when the configuration file is modified.\nYou can use the `apiserver_authentication_config_controller_automatic_reload_last_timestamp_seconds` metric\nto monitor the last time the configuration was reloaded by the API server.\n\nYou must specify the path to the authentication configuration using the `--authentication-config` flag\non the API server. If you want to use command line flags instead of the configuration file, those will\ncontinue to work as-is. 
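The minimum-payload rules above can be sketched as a simplified check. This is an illustration only: the real authenticator also verifies the token signature via OIDC discovery and applies the configured claim mappings and validation rules, and `minimally_valid` is a hypothetical helper name.

```python
# Simplified illustration of the minimum JWT payload checks described above.
import time

def minimally_valid(payload: dict, issuer_url: str, audiences: set,
                    username_claim: str = "sub") -> bool:
    if payload.get("iss") != issuer_url:      # "iss" must match issuer.url
        return False
    aud = payload.get("aud", [])
    if isinstance(aud, str):                  # "aud" may be a string or a list
        aud = [aud]
    if not audiences.intersection(aud):       # at least one audience must match
        return False
    if payload.get("exp", 0) <= time.time():  # "exp" must be in the future
        return False
    return bool(payload.get(username_claim))  # the username claim must be present
```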
To access the new capabilities like configuring multiple authenticators,\nsetting multiple audiences for an issuer, switch to using the configuration file.\n\nFor Kubernetes v, the structured authentication configuration file format\nis beta-level, and the mechanism for using that configuration is also beta. Provided you didn't specifically\ndisable the `StructuredAuthenticationConfiguration`\n[feature gate](\/docs\/reference\/command-line-tools-reference\/feature-gates\/) for your cluster,\nyou can turn on structured authentication by specifying the `--authentication-config` command line\nargument to the kube-apiserver. An example of the structured authentication configuration file is shown below.\n\n\nIf you specify `--authentication-config` along with any of the `--oidc-*` command line arguments, this is\na misconfiguration. In this situation, the API server reports an error and then immediately exits.\nIf you want to switch to using structured authentication configuration, you have to remove the `--oidc-*`\ncommand line arguments, and use the configuration file instead.\n\n\n```yaml\n---\n#\n# CAUTION: this is an example configuration.\n#          Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io\/v1beta1\nkind: AuthenticationConfiguration\n# list of authenticators to authenticate Kubernetes users using JWT compliant tokens.\n# the maximum number of allowed authenticators is 64.\njwt:\n- issuer:\n    # url must be unique across all authenticators.\n    # url must not conflict with issuer configured in --service-account-issuer.\n    url: https:\/\/example.com # Same as --oidc-issuer-url.\n    # discoveryURL, if specified, overrides the URL used to fetch discovery\n    # information instead of using \"{url}\/.well-known\/openid-configuration\".\n    # The exact value specified is used, so \"\/.well-known\/openid-configuration\"\n    # must be included in discoveryURL if needed.\n    #\n    # The \"issuer\" field in the fetched 
discovery information must match the \"issuer.url\" field\n    # in the AuthenticationConfiguration and will be used to validate the \"iss\" claim in the presented JWT.\n    # This is for scenarios where the well-known and jwks endpoints are hosted at a different\n    # location than the issuer (such as locally in the cluster).\n    # discoveryURL must be different from url if specified and must be unique across all authenticators.\n    discoveryURL: https:\/\/discovery.example.com\/.well-known\/openid-configuration\n    # PEM encoded CA certificates used to validate the connection when fetching\n    # discovery information. If not set, the system verifier will be used.\n    # Same value as the content of the file referenced by the --oidc-ca-file flag.\n    certificateAuthority: <PEM encoded CA certificates>    \n    # audiences is the set of acceptable audiences the JWT must be issued to.\n    # At least one of the entries must match the \"aud\" claim in presented JWTs.\n    audiences:\n    - my-app # Same as --oidc-client-id.\n    - my-other-app\n    # this is required to be set to \"MatchAny\" when multiple audiences are specified.\n    audienceMatchPolicy: MatchAny\n  # rules applied to validate token claims to authenticate users.\n  claimValidationRules:\n    # Same as --oidc-required-claim key=value.\n  - claim: hd\n    requiredValue: example.com\n    # Instead of claim and requiredValue, you can use expression to validate the claim.\n    # expression is a CEL expression that evaluates to a boolean.\n    # all the expressions must evaluate to true for validation to succeed.\n  - expression: 'claims.hd == \"example.com\"'\n    # Message customizes the error message seen in the API server logs when the validation fails.\n    message: the hd claim must be set to example.com\n  - expression: 'claims.exp - claims.nbf <= 86400'\n    message: total token lifetime must not exceed 24 hours\n  claimMappings:\n    # username represents an option for the username 
attribute.\n    # This is the only required attribute.\n    username:\n      # Same as --oidc-username-claim. Mutually exclusive with username.expression.\n      claim: \"sub\"\n      # Same as --oidc-username-prefix. Mutually exclusive with username.expression.\n      # if username.claim is set, username.prefix is required.\n      # Explicitly set it to \"\" if no prefix is desired.\n      prefix: \"\"\n      # Mutually exclusive with username.claim and username.prefix.\n      # expression is a CEL expression that evaluates to a string.\n      #\n      # 1.  If username.expression uses 'claims.email', then 'claims.email_verified' must be used in\n      #     username.expression or extra[*].valueExpression or claimValidationRules[*].expression.\n      #     An example claim validation rule expression that matches the validation automatically\n      #     applied when username.claim is set to 'email' is 'claims.?email_verified.orValue(true)'.\n      # 2.  If the username asserted based on username.expression is the empty string, the authentication\n      #     request will fail.\n      expression: 'claims.username + \":external-user\"'\n    # groups represents an option for the groups attribute.\n    groups:\n      # Same as --oidc-groups-claim. Mutually exclusive with groups.expression.\n      claim: \"sub\"\n      # Same as --oidc-groups-prefix. 
Mutually exclusive with groups.expression.\n      # if groups.claim is set, groups.prefix is required.\n      # Explicitly set it to \"\" if no prefix is desired.\n      prefix: \"\"\n      # Mutually exclusive with groups.claim and groups.prefix.\n      # expression is a CEL expression that evaluates to a string or a list of strings.\n      expression: 'claims.roles.split(\",\")'\n    # uid represents an option for the uid attribute.\n    uid:\n      # Mutually exclusive with uid.expression.\n      claim: 'sub'\n      # Mutually exclusive with uid.claim.\n      # expression is a CEL expression that evaluates to a string.\n      expression: 'claims.sub'\n    # extra attributes to be added to the UserInfo object. Keys must be domain-prefixed paths and must be unique.\n    extra:\n    - key: 'example.com\/tenant'\n      # valueExpression is a CEL expression that evaluates to a string or a list of strings.\n      valueExpression: 'claims.tenant'\n  # validation rules applied to the final user object.\n  userValidationRules:\n    # expression is a CEL expression that evaluates to a boolean.\n    # all the expressions must evaluate to true for the user to be valid.\n  - expression: \"!user.username.startsWith('system:')\"\n    # Message customizes the error message seen in the API server logs when the validation fails.\n    message: 'username cannot use reserved system: prefix'\n  - expression: \"user.groups.all(group, !group.startsWith('system:'))\"\n    message: 'groups cannot use reserved system: prefix'\n```\n\n* Claim validation rule expression\n\n  `jwt.claimValidationRules[i].expression` represents the expression which will be evaluated by CEL.\n  CEL expressions have access to the contents of the token payload, organized into the `claims` CEL variable.\n  `claims` is a map of claim names (as strings) to claim values (of any type).\n\n* User validation rule expression\n\n  `jwt.userValidationRules[i].expression` represents the expression which will be evaluated by 
CEL.\n  CEL expressions have access to the contents of `userInfo`, organized into the `user` CEL variable.\n  Refer to the [UserInfo](\/docs\/reference\/generated\/kubernetes-api\/v\/#userinfo-v1-authentication-k8s-io)\n  API documentation for the schema of `user`.\n\n* Claim mapping expression\n\n  `jwt.claimMappings.username.expression`, `jwt.claimMappings.groups.expression`, `jwt.claimMappings.uid.expression`\n  `jwt.claimMappings.extra[i].valueExpression` represents the expression which will be evaluated by CEL.\n  CEL expressions have access to the contents of the token payload, organized into the `claims` CEL variable.\n  `claims` is a map of claim names (as strings) to claim values (of any type).\n\n  To learn more, see the [Documentation on CEL](\/docs\/reference\/using-api\/cel\/)\n\n  Here are examples of the `AuthenticationConfiguration` with different token payloads.\n\n  \n  \n  ```yaml\n  apiVersion: apiserver.config.k8s.io\/v1beta1\n  kind: AuthenticationConfiguration\n  jwt:\n  - issuer:\n      url: https:\/\/example.com\n      audiences:\n      - my-app\n    claimMappings:\n      username:\n        expression: 'claims.username + \":external-user\"'\n      groups:\n        expression: 'claims.roles.split(\",\")'\n      uid:\n        expression: 'claims.sub'\n      extra:\n      - key: 'example.com\/tenant'\n        valueExpression: 'claims.tenant'\n    userValidationRules:\n    - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n      message: 'username cannot use reserved system: prefix'\n  ```\n\n  ```bash\n  
TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9C_gKq1R94QKSQ5F6UgHMILz8OfmPKmX_00wpwwNVGeevJ79ieX2V-__W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga__HxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA\n  ```\n\n  where the token payload is:\n\n  ```json\n    {\n      \"aud\": \"kubernetes\",\n      \"exp\": 1703232949,\n      \"iat\": 1701107233,\n      \"iss\": \"https:\/\/example.com\",\n      \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",\n      \"nbf\": 1701107233,\n      \"roles\": \"user,admin\",\n      \"sub\": \"auth\",\n      \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n      \"username\": \"foo\"\n    }\n  ```\n\n  The token with the above `AuthenticationConfiguration` will produce the following `UserInfo` object and successfully authenticate the user.\n\n  ```json\n  {\n      \"username\": \"foo:external-user\",\n      \"uid\": \"auth\",\n      \"groups\": [\n          \"user\",\n          \"admin\"\n      ],\n      \"extra\": {\n          \"example.com\/tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\"\n      }\n  }\n  ```\n  \n  \n  ```yaml\n  apiVersion: apiserver.config.k8s.io\/v1beta1\n  kind: AuthenticationConfiguration\n  jwt:\n  - issuer:\n      url: https:\/\/example.com\n      audiences:\n      - my-app\n    claimValidationRules:\n    - expression: 'claims.hd == \"example.com\"' # the token below 
does not have this claim, so validation will fail.\n      message: the hd claim must be set to example.com\n    claimMappings:\n      username:\n        expression: 'claims.username + \":external-user\"'\n      groups:\n        expression: 'claims.roles.split(\",\")'\n      uid:\n        expression: 'claims.sub'\n      extra:\n      - key: 'example.com\/tenant'\n        valueExpression: 'claims.tenant'\n    userValidationRules:\n    - expression: \"!user.username.startsWith('system:')\" # the expression will evaluate to true, so validation will succeed.\n      message: 'username cannot used reserved system: prefix'\n  ```\n\n  ```bash\n  TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk-VLvQW-mNDHx7SEOSv9LVwcPYPuPajJpuQn9C_gKq1R94QKSQ5F6UgHMILz8OfmPKmX_00wpwwNVGeevJ79ieX2V-__W56iNR5gJ-i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga__HxtKB-t0kRMNzLRS7rka_SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi-P3iCd88iu1xnzA\n  ```\n\n  where the token payload is:\n\n  ```json\n    {\n      \"aud\": \"kubernetes\",\n      \"exp\": 1703232949,\n      \"iat\": 1701107233,\n      \"iss\": \"https:\/\/example.com\",\n      \"jti\": \"7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873\",\n      \"nbf\": 1701107233,\n      \"roles\": \"user,admin\",\n      \"sub\": \"auth\",\n      \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n      \"username\": \"foo\"\n    }\n  ```\n\n  The token with the above `AuthenticationConfiguration` will 
fail to authenticate because the\n  `hd` claim is not set to `example.com`. The API server will return a `401 Unauthorized` error.\n  \n  \n  ```yaml\n  apiVersion: apiserver.config.k8s.io\/v1beta1\n  kind: AuthenticationConfiguration\n  jwt:\n  - issuer:\n      url: https:\/\/example.com\n      audiences:\n      - my-app\n    claimValidationRules:\n    - expression: 'claims.hd == \"example.com\"'\n      message: the hd claim must be set to example.com\n    claimMappings:\n      username:\n        expression: '\"system:\" + claims.username' # this will prefix the username with \"system:\" and will fail user validation.\n      groups:\n        expression: 'claims.roles.split(\",\")'\n      uid:\n        expression: 'claims.sub'\n      extra:\n      - key: 'example.com\/tenant'\n        valueExpression: 'claims.tenant'\n    userValidationRules:\n    - expression: \"!user.username.startsWith('system:')\" # the username will be system:foo and expression will evaluate to false, so validation will fail.\n      message: 'username cannot use reserved system: prefix'\n  ```\n\n  ```bash\n  TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJoZCI6ImV4YW1wbGUuY29tIiwiaWF0IjoxNzAxMTEzMTAxLCJpc3MiOiJodHRwczovL2V4YW1wbGUuY29tIiwianRpIjoiYjViMDY1MjM3MmNkMjBlMzQ1YjZmZGZmY2RjMjE4MWY0YWZkNmYyNTlhYWI0YjdlMzU4ODEyMzdkMjkyMjBiYyIsIm5iZiI6MTcwMTExMzEwMSwicm9sZXMiOiJ1c2VyLGFkbWluIiwic3ViIjoiYXV0aCIsInRlbmFudCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0YSIsInVzZXJuYW1lIjoiZm9vIn0.FgPJBYLobo9jnbHreooBlvpgEcSPWnKfX6dc0IvdlRB-F0dCcgy91oCJeK_aBk-8zH5AKUXoFTlInfLCkPivMOJqMECA1YTrMUwt_IVqwb116AqihfByUYIIqzMjvUbthtbpIeHQm2fF0HbrUqa_Q0uaYwgy8mD807h7sBcUMjNd215ff_nFIHss-9zegH8GI1d9fiBf-g6zjkR1j987EP748khpQh9IxPjMJbSgG_uH5x80YFuqgEWwq-aYJPQxXX6FatP96a2EAn7wfPpGlPRt0HcBOvq5pCnudgCgfVgiOJiLr_7robQu4T1bis0W75VPEvwWtgFcLnvcQx0JWg\n  ```\n\n  where the token payload is:\n\n  ```json\n    
{\n      \"aud\": \"kubernetes\",\n      \"exp\": 1703232949,\n      \"hd\": \"example.com\",\n      \"iat\": 1701113101,\n      \"iss\": \"https:\/\/example.com\",\n      \"jti\": \"b5b0652372cd20e345b6fdffcdc2181f4afd6f259aab4b7e35881237d29220bc\",\n      \"nbf\": 1701113101,\n      \"roles\": \"user,admin\",\n      \"sub\": \"auth\",\n      \"tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\",\n      \"username\": \"foo\"\n    }\n  ```\n\n  The token with the above `AuthenticationConfiguration` will produce the following `UserInfo` object:\n\n  ```json\n  {\n      \"username\": \"system:foo\",\n      \"uid\": \"auth\",\n      \"groups\": [\n          \"user\",\n          \"admin\"\n      ],\n      \"extra\": {\n          \"example.com\/tenant\": \"72f988bf-86f1-41af-91ab-2d7cd011db4a\"\n      }\n  }\n  ```\n\n  which will fail user validation because the username starts with `system:`.\n  The API server will return a `401 Unauthorized` error.\n  \n  \n\n###### Limitations\n\n1. Distributed claims do not work via [CEL](\/docs\/reference\/using-api\/cel\/) expressions.\n1. Egress selector configuration is not supported for calls to `issuer.url` and `issuer.discoveryURL`.\n\nKubernetes does not provide an OpenID Connect Identity Provider.\nYou can use an existing public OpenID Connect Identity Provider (such as Google, or\n[others](https:\/\/connect2id.com\/products\/nimbus-oauth-openid-connect-sdk\/openid-connect-providers)).\nOr, you can run your own Identity Provider, such as [dex](https:\/\/dexidp.io\/),\n[Keycloak](https:\/\/github.com\/keycloak\/keycloak),\nCloudFoundry [UAA](https:\/\/github.com\/cloudfoundry\/uaa), or\nTremolo Security's [OpenUnison](https:\/\/openunison.github.io\/).\n\nFor an identity provider to work with Kubernetes, it must:\n\n1. 
Support [OpenID Connect discovery](https:\/\/openid.net\/specs\/openid-connect-discovery-1_0.html)\n\n   The public key to verify the signature is discovered from the issuer's public endpoint using OIDC discovery.\n   If you're using the authentication configuration file, the identity provider doesn't need to publicly expose the discovery endpoint.\n   You can host the discovery endpoint at a different location than the issuer (such as locally in the cluster) and specify the\n   `issuer.discoveryURL` in the configuration file.\n\n1. Run over TLS with non-obsolete ciphers\n1. Have a CA-signed certificate (even if the CA is not a commercial CA or is self-signed)\n\nA note about requirement #3 above, requiring a CA-signed certificate. If you deploy your own\nidentity provider (as opposed to one of the cloud providers like Google or Microsoft) you MUST\nhave your identity provider's web server certificate signed by a certificate with the `CA` flag\nset to `TRUE`, even if it is self-signed. This is because Go's TLS client implementation is\nvery strict about certificate validation. If you don't have a CA handy,\nyou can use the [gencert script](https:\/\/github.com\/dexidp\/dex\/blob\/master\/examples\/k8s\/gencert.sh)\nfrom the Dex team to create a simple CA and a signed certificate and key pair. 
Or you can use\n[this similar script](https:\/\/raw.githubusercontent.com\/TremoloSecurity\/openunison-qs-kubernetes\/master\/src\/main\/bash\/makessl.sh)\nthat generates SHA256 certs with a longer life and larger key size.\n\nRefer to setup instructions for specific systems:\n\n- [UAA](https:\/\/docs.cloudfoundry.org\/concepts\/architecture\/uaa.html)\n- [Dex](https:\/\/dexidp.io\/docs\/kubernetes\/)\n- [OpenUnison](https:\/\/www.tremolosecurity.com\/orchestra-k8s\/)\n\n#### Using kubectl\n\n##### Option 1 - OIDC Authenticator\n\nThe first option is to use the kubectl `oidc` authenticator, which sets the `id_token` as a bearer token\nfor all requests and refreshes the token once it expires. After you've logged into your provider, use\nkubectl to add your `id_token`, `refresh_token`, `client_id`, and `client_secret` to configure the plugin.\n\nProviders that don't return an `id_token` as part of their refresh token response aren't supported\nby this plugin and should use \"Option 2\" below.\n\n```bash\nkubectl config set-credentials USER_NAME \\\n   --auth-provider=oidc \\\n   --auth-provider-arg=idp-issuer-url=( issuer url ) \\\n   --auth-provider-arg=client-id=( your client id ) \\\n   --auth-provider-arg=client-secret=( your client secret ) \\\n   --auth-provider-arg=refresh-token=( your refresh token ) \\\n   --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \\\n   --auth-provider-arg=id-token=( your id_token )\n```\n\nAs an example, running the below command after authenticating to your identity provider:\n\n```bash\nkubectl config set-credentials mmosley  \\\n        --auth-provider=oidc  \\\n        --auth-provider-arg=idp-issuer-url=https:\/\/oidcidp.tremolo.lan:8443\/auth\/idp\/OidcIdP  \\\n        --auth-provider-arg=client-id=kubernetes  \\\n        --auth-provider-arg=client-secret=1db158f6-177d-4d9c-8a8b-d36869918ec5  \\\n        
--auth-provider-arg=refresh-token=q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXqHega4GAXlF+ma+vmYpFcHe5eZR+slBFpZKtQA= \\\n        --auth-provider-arg=idp-certificate-authority=\/root\/ca.pem \\\n        --auth-provider-arg=id-token=eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw\n```\n\nWhich would produce the below configuration:\n\n```yaml\nusers:\n- name: mmosley\n  user:\n    auth-provider:\n      config:\n        client-id: kubernetes\n        client-secret: 1db158f6-177d-4d9c-8a8b-d36869918ec5\n        id-token: 
eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw\n        idp-certificate-authority: \/root\/ca.pem\n        idp-issuer-url: https:\/\/oidcidp.tremolo.lan:8443\/auth\/idp\/OidcIdP\n        refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq\n      name: oidc\n```\n\nOnce your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token`\nand `client_secret` storing the new values for the `refresh_token` and `id_token` in your `.kube\/config`.\n\n##### Option 2 - Use the `--token` Option\n\nThe `kubectl` command lets you pass in a token using the `--token` option.\nCopy and paste the `id_token` into this option:\n\n```bash\nkubectl 
--token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes\n```\n\n### Webhook Token Authentication\n\nWebhook authentication is a hook for verifying bearer tokens.\n\n* `--authentication-token-webhook-config-file` a configuration file describing how to access the remote webhook service.\n* `--authentication-token-webhook-cache-ttl` how long to cache authentication decisions. Defaults to two minutes.\n* `--authentication-token-webhook-version` determines whether to use `authentication.k8s.io\/v1beta1` or `authentication.k8s.io\/v1`\n  `TokenReview` objects to send\/receive information from the webhook. Defaults to `v1beta1`.\n\nThe configuration file uses the [kubeconfig](\/docs\/concepts\/configuration\/organize-cluster-access-kubeconfig\/)\nfile format. Within the file, `clusters` refers to the remote service and\n`users` refers to the API server webhook. An example would be:\n\n```yaml\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n  - name: name-of-remote-authn-service\n    cluster:\n      certificate-authority: \/path\/to\/ca.pem         # CA for verifying the remote service.\n      server: https:\/\/authn.example.com\/authenticate # URL of remote service to query. 
'https' recommended for production.\n\n# users refers to the API server's webhook configuration.\nusers:\n  - name: name-of-api-server\n    user:\n      client-certificate: \/path\/to\/cert.pem # cert for the webhook plugin to use\n      client-key: \/path\/to\/key.pem          # key matching the cert\n\n# kubeconfig files require a context. Provide one for the API server.\ncurrent-context: webhook\ncontexts:\n- context:\n    cluster: name-of-remote-authn-service\n    user: name-of-api-server\n  name: webhook\n```\n\nWhen a client attempts to authenticate with the API server using a bearer token as discussed\n[above](#putting-a-bearer-token-in-a-request), the authentication webhook POSTs a JSON-serialized\n`TokenReview` object containing the token to the remote service.\n\nNote that webhook API objects are subject to the same [versioning compatibility rules](\/docs\/concepts\/overview\/kubernetes-api\/)\nas other Kubernetes API objects. Implementers should check the `apiVersion` field of the request to ensure correct deserialization,\nand **must** respond with a `TokenReview` object of the same version as the request.\n\n\n\n\nThe Kubernetes API server defaults to sending `authentication.k8s.io\/v1beta1` token reviews for backwards compatibility.\nTo opt into receiving `authentication.k8s.io\/v1` token reviews, the API server must be started with `--authentication-token-webhook-version=v1`.\n\n\n```yaml\n{\n  \"apiVersion\": \"authentication.k8s.io\/v1\",\n  \"kind\": \"TokenReview\",\n  \"spec\": {\n    # Opaque bearer token sent to the API server\n    \"token\": \"014fbff9a07c...\",\n\n    # Optional list of the audience identifiers for the server the token was presented to.\n    # Audience-aware token authenticators (for example, OIDC token authenticators)\n    # should verify the token was intended for at least one of the audiences in this list,\n    # and return the intersection of this list and the valid audiences for the token in the response status.\n    # 
This ensures the token is valid to authenticate to the server it was presented to.\n    # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n    \"audiences\": [\"https:\/\/myserver.example.com\", \"https:\/\/myserver.internal.example.com\"]\n  }\n}\n```\n\n\n```yaml\n{\n  \"apiVersion\": \"authentication.k8s.io\/v1beta1\",\n  \"kind\": \"TokenReview\",\n  \"spec\": {\n    # Opaque bearer token sent to the API server\n    \"token\": \"014fbff9a07c...\",\n\n    # Optional list of the audience identifiers for the server the token was presented to.\n    # Audience-aware token authenticators (for example, OIDC token authenticators)\n    # should verify the token was intended for at least one of the audiences in this list,\n    # and return the intersection of this list and the valid audiences for the token in the response status.\n    # This ensures the token is valid to authenticate to the server it was presented to.\n    # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.\n    \"audiences\": [\"https:\/\/myserver.example.com\", \"https:\/\/myserver.internal.example.com\"]\n  }\n}\n```\n\n\n\nThe remote service is expected to fill the `status` field of the request to indicate the success of the login.\nThe response body's `spec` field is ignored and may be omitted.\nThe remote service must return a response using the same `TokenReview` API version that it received.\nA successful validation of the bearer token would return:\n\n\n\n```yaml\n{\n  \"apiVersion\": \"authentication.k8s.io\/v1\",\n  \"kind\": \"TokenReview\",\n  \"status\": {\n    \"authenticated\": true,\n    \"user\": {\n      # Required\n      \"username\": \"janedoe@example.com\",\n      # Optional\n      \"uid\": \"42\",\n      # Optional group memberships\n      \"groups\": [\"developers\", \"qa\"],\n      # Optional additional information provided by the authenticator.\n      # This should 
not contain confidential data, as it can be recorded in logs\n      # or API objects, and is made available to admission webhooks.\n      \"extra\": {\n        \"extrafield1\": [\n          \"extravalue1\",\n          \"extravalue2\"\n        ]\n      }\n    },\n    # Optional list audience-aware token authenticators can return,\n    # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n    # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n    \"audiences\": [\"https:\/\/myserver.example.com\"]\n  }\n}\n```\n\n\n```yaml\n{\n  \"apiVersion\": \"authentication.k8s.io\/v1beta1\",\n  \"kind\": \"TokenReview\",\n  \"status\": {\n    \"authenticated\": true,\n    \"user\": {\n      # Required\n      \"username\": \"janedoe@example.com\",\n      # Optional\n      \"uid\": \"42\",\n      # Optional group memberships\n      \"groups\": [\"developers\", \"qa\"],\n      # Optional additional information provided by the authenticator.\n      # This should not contain confidential data, as it can be recorded in logs\n      # or API objects, and is made available to admission webhooks.\n      \"extra\": {\n        \"extrafield1\": [\n          \"extravalue1\",\n          \"extravalue2\"\n        ]\n      }\n    },\n    # Optional list audience-aware token authenticators can return,\n    # containing the audiences from the `spec.audiences` list for which the provided token was valid.\n    # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.\n    \"audiences\": [\"https:\/\/myserver.example.com\"]\n  }\n}\n```\n\n\n\nAn unsuccessful request would return:\n\n\n\n```yaml\n{\n  \"apiVersion\": \"authentication.k8s.io\/v1\",\n  \"kind\": \"TokenReview\",\n  \"status\": {\n    \"authenticated\": false,\n    # Optionally include details about why authentication failed.\n    # If no error is provided, the API will return a generic 
Unauthorized message.\n    # The error field is ignored when authenticated=true.\n    \"error\": \"Credentials are expired\"\n  }\n}\n```\n\n\n```yaml\n{\n  \"apiVersion\": \"authentication.k8s.io\/v1beta1\",\n  \"kind\": \"TokenReview\",\n  \"status\": {\n    \"authenticated\": false,\n    # Optionally include details about why authentication failed.\n    # If no error is provided, the API will return a generic Unauthorized message.\n    # The error field is ignored when authenticated=true.\n    \"error\": \"Credentials are expired\"\n  }\n}\n```\n\n\n\n### Authenticating Proxy\n\nThe API server can be configured to identify users from request header values, such as `X-Remote-User`.\nIt is designed for use in combination with an authenticating proxy, which sets the request header value.\n\n* `--requestheader-username-headers` Required, case-insensitive. Header names to check, in order,\n  for the user identity. The first header containing a value is used as the username.\n* `--requestheader-group-headers` 1.6+. Optional, case-insensitive. \"X-Remote-Group\" is suggested.\n  Header names to check, in order, for the user's groups. All values in all specified headers are used as group names.\n* `--requestheader-extra-headers-prefix` 1.6+. Optional, case-insensitive. 
\"X-Remote-Extra-\" is suggested.\n  Header prefixes to look for to determine extra information about the user (typically used by the configured authorization plugin).\n  Any headers beginning with any of the specified prefixes have the prefix removed.\n  The remainder of the header name is lowercased and [percent-decoded](https:\/\/tools.ietf.org\/html\/rfc3986#section-2.1)\n  and becomes the extra key, and the header value is the extra value.\n\n\nPrior to 1.11.3 (and 1.10.7, 1.9.11), the extra key could only contain characters which\nwere [legal in HTTP header labels](https:\/\/tools.ietf.org\/html\/rfc7230#section-3.2.6).\n\n\nFor example, with this configuration:\n\n```\n--requestheader-username-headers=X-Remote-User\n--requestheader-group-headers=X-Remote-Group\n--requestheader-extra-headers-prefix=X-Remote-Extra-\n```\n\nthis request:\n\n```http\nGET \/ HTTP\/1.1\nX-Remote-User: fido\nX-Remote-Group: dogs\nX-Remote-Group: dachshunds\nX-Remote-Extra-Acme.com%2Fproject: some-project\nX-Remote-Extra-Scopes: openid\nX-Remote-Extra-Scopes: profile\n```\n\nwould result in this user info:\n\n```yaml\nname: fido\ngroups:\n- dogs\n- dachshunds\nextra:\n  acme.com\/project:\n  - some-project\n  scopes:\n  - openid\n  - profile\n```\n\nIn order to prevent header spoofing, the authenticating proxy is required to present a valid client\ncertificate to the API server for validation against the specified CA before the request headers are\nchecked. WARNING: do **not** reuse a CA that is used in a different context unless you understand\nthe risks and the mechanisms to protect the CA's usage.\n\n* `--requestheader-client-ca-file` Required. PEM-encoded certificate bundle. A valid client certificate\n  must be presented and validated against the certificate authorities in the specified file before the\n  request headers are checked for user names.\n* `--requestheader-allowed-names` Optional. List of Common Name values (CNs). 
If set, a valid client\n  certificate with a CN in the specified list must be presented before the request headers are checked\n  for user names. If empty, any CN is allowed.\n\n## Anonymous requests\n\nWhen enabled, requests that are not rejected by other configured authentication methods are\ntreated as anonymous requests, and given a username of `system:anonymous` and a group of\n`system:unauthenticated`.\n\nFor example, on a server with token authentication configured, and anonymous access enabled,\na request providing an invalid bearer token would receive a `401 Unauthorized` error.\nA request providing no bearer token would be treated as an anonymous request.\n\nIn 1.5.1-1.5.x, anonymous access is disabled by default, and can be enabled by\npassing the `--anonymous-auth=true` option to the API server.\n\nIn 1.6+, anonymous access is enabled by default if an authorization mode other than `AlwaysAllow`\nis used, and can be disabled by passing the `--anonymous-auth=false` option to the API server.\nStarting in 1.6, the ABAC and RBAC authorizers require explicit authorization of the\n`system:anonymous` user or the `system:unauthenticated` group, so legacy policy rules\nthat grant access to the `*` user or `*` group do not include anonymous users.\n\n### Anonymous Authenticator Configuration\n\n\n\nThe `AuthenticationConfiguration` can be used to configure the anonymous\nauthenticator. To enable configuring anonymous auth via the config file you need to\nenable the `AnonymousAuthConfigurableEndpoints` feature gate. 
When this feature\ngate is enabled, you cannot set the `--anonymous-auth` flag.\n\nThe main advantage of configuring the anonymous authenticator using the authentication\nconfiguration file is that, in addition to enabling and disabling anonymous authentication,\nyou can also configure which endpoints support anonymous authentication.\n\nA sample authentication configuration file is below:\n\n```yaml\n---\n#\n# CAUTION: this is an example configuration.\n#          Do not use this for your own cluster!\n#\napiVersion: apiserver.config.k8s.io\/v1beta1\nkind: AuthenticationConfiguration\nanonymous:\n  enabled: true\n  conditions:\n  - path: \/livez\n  - path: \/readyz\n  - path: \/healthz\n```\n\nIn the configuration above only the `\/livez`, `\/readyz` and `\/healthz` endpoints\nare reachable by anonymous requests. Any other endpoints will not be reachable\neven if they are allowed by the RBAC configuration.\n\n## User impersonation\n\nA user can act as another user through impersonation headers. These let requests\nmanually override the user info a request authenticates as. For example, an admin\ncould use this feature to debug an authorization policy by temporarily\nimpersonating another user and seeing if a request was denied.\n\nImpersonation requests first authenticate as the requesting user, then switch\nto the impersonated user info.\n\n* A user makes an API call with their credentials _and_ impersonation headers.\n* API server authenticates the user.\n* API server ensures the authenticated user has impersonation privileges.\n* Request user info is replaced with impersonation values.\n* Request is evaluated, authorization acts on impersonated user info.\n\nThe following HTTP headers can be used to perform an impersonation request:\n\n* `Impersonate-User`: The username to act as.\n* `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups.\n  Optional. 
Requires \"Impersonate-User\".\n* `Impersonate-Extra-( extra name )`: A dynamic header used to associate extra fields with the user.\n  Optional. Requires \"Impersonate-User\". In order to be preserved consistently, `( extra name )`\n  must be lower-case, and any characters which aren't [legal in HTTP header labels](https:\/\/tools.ietf.org\/html\/rfc7230#section-3.2.6)\n  MUST be utf8 and [percent-encoded](https:\/\/tools.ietf.org\/html\/rfc3986#section-2.1).\n* `Impersonate-Uid`: A unique identifier that represents the user being impersonated. Optional.\n  Requires \"Impersonate-User\". Kubernetes does not impose any format requirements on this string.\n\n\nPrior to 1.11.3 (and 1.10.7, 1.9.11), `( extra name )` could only contain characters which\nwere [legal in HTTP header labels](https:\/\/tools.ietf.org\/html\/rfc7230#section-3.2.6).\n\n\n\n`Impersonate-Uid` is only available in versions 1.22.0 and higher.\n\n\nAn example of the impersonation headers used when impersonating a user with groups:\n\n```http\nImpersonate-User: jane.doe@example.com\nImpersonate-Group: developers\nImpersonate-Group: admins\n```\n\nAn example of the impersonation headers used when impersonating a user with a UID and\nextra fields:\n\n```http\nImpersonate-User: jane.doe@example.com\nImpersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com\nImpersonate-Extra-acme.com%2Fproject: some-project\nImpersonate-Extra-scopes: view\nImpersonate-Extra-scopes: development\nImpersonate-Uid: 06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\n```\n\nWhen using `kubectl` set the `--as` flag to configure the `Impersonate-User`\nheader, set the `--as-group` flag to configure the `Impersonate-Group` header.\n\n```bash\nkubectl drain mynode\n```\n\n```none\nError from server (Forbidden): User \"clark\" cannot get nodes at the cluster scope. 
(get nodes mynode)\n```\n\nSet the `--as` and `--as-group` flag:\n\n```bash\nkubectl drain mynode --as=superman --as-group=system:masters\n```\n\n```none\nnode\/mynode cordoned\nnode\/mynode drained\n```\n\n\n`kubectl` cannot impersonate extra fields or UIDs.\n\n\nTo impersonate a user, group, user identifier (UID) or extra fields, the impersonating user must\nhave the ability to perform the \"impersonate\" verb on the kind of attribute\nbeing impersonated (\"user\", \"group\", \"uid\", etc.). For clusters that enable the RBAC\nauthorization plugin, the following ClusterRole encompasses the rules needed to\nset user and group impersonation headers:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: impersonator\nrules:\n- apiGroups: [\"\"]\n  resources: [\"users\", \"groups\", \"serviceaccounts\"]\n  verbs: [\"impersonate\"]\n```\n\nFor impersonation, extra fields and impersonated UIDs are both under the \"authentication.k8s.io\" `apiGroup`.\nExtra fields are evaluated as sub-resources of the resource \"userextras\". 
To\nallow a user to use impersonation headers for the extra field \"scopes\" and\nfor UIDs, a user should be granted the following role:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: scopes-and-uid-impersonator\nrules:\n# Can set \"Impersonate-Extra-scopes\" header and the \"Impersonate-Uid\" header.\n- apiGroups: [\"authentication.k8s.io\"]\n  resources: [\"userextras\/scopes\", \"uids\"]\n  verbs: [\"impersonate\"]\n```\n\nThe values of impersonation headers can also be restricted by limiting the set\nof `resourceNames` a resource can take.\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: limited-impersonator\nrules:\n# Can impersonate the user \"jane.doe@example.com\"\n- apiGroups: [\"\"]\n  resources: [\"users\"]\n  verbs: [\"impersonate\"]\n  resourceNames: [\"jane.doe@example.com\"]\n\n# Can impersonate the groups \"developers\" and \"admins\"\n- apiGroups: [\"\"]\n  resources: [\"groups\"]\n  verbs: [\"impersonate\"]\n  resourceNames: [\"developers\",\"admins\"]\n\n# Can impersonate the extras field \"scopes\" with the values \"view\" and \"development\"\n- apiGroups: [\"authentication.k8s.io\"]\n  resources: [\"userextras\/scopes\"]\n  verbs: [\"impersonate\"]\n  resourceNames: [\"view\", \"development\"]\n\n# Can impersonate the uid \"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"\n- apiGroups: [\"authentication.k8s.io\"]\n  resources: [\"uids\"]\n  verbs: [\"impersonate\"]\n  resourceNames: [\"06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b\"]\n```\n\n\nImpersonating a user or group allows you to perform any action as if you were that user or group;\nfor that reason, impersonation is not namespace scoped.\nIf you want to allow impersonation using Kubernetes RBAC,\nthis requires using a `ClusterRole` and a `ClusterRoleBinding`,\nnot a `Role` and `RoleBinding`.\n\n\n## client-go credential plugins\n\n\n\n`k8s.io\/client-go` and tools using it such as `kubectl` and `kubelet` are able to 
execute an\nexternal command to receive user credentials.\n\nThis feature is intended for client side integrations with authentication protocols not natively\nsupported by `k8s.io\/client-go` (LDAP, Kerberos, OAuth2, SAML, etc.). The plugin implements the\nprotocol specific logic, then returns opaque credentials to use. Almost all credential plugin\nuse cases require a server side component with support for the [webhook token authenticator](#webhook-token-authentication)\nto interpret the credential format produced by the client plugin.\n\n\nEarlier versions of `kubectl` included built-in support for authenticating to AKS and GKE, but this is no longer present.\n\n\n### Example use case\n\nIn a hypothetical use case, an organization would run an external service that exchanges LDAP credentials\nfor user specific, signed tokens. The service would also be capable of responding to [webhook token\nauthenticator](#webhook-token-authentication) requests to validate the tokens. Users would be required\nto install a credential plugin on their workstation.\n\nTo authenticate against the API:\n\n* The user issues a `kubectl` command.\n* Credential plugin prompts the user for LDAP credentials, exchanges credentials with external service for a token.\n* Credential plugin returns token to client-go, which uses it as a bearer token against the API server.\n* API server uses the [webhook token authenticator](#webhook-token-authentication) to submit a `TokenReview` to the external service.\n* External service verifies the signature on the token and returns the user's username and groups.\n\n### Configuration\n\nCredential plugins are configured through [kubectl config files](\/docs\/tasks\/access-application-cluster\/configure-access-multiple-clusters\/)\nas part of the user fields.\n\n\n\n```yaml\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n  user:\n    exec:\n      # Command to execute. 
Required.\n      command: \"example-client-go-exec-plugin\"\n\n      # API version to use when decoding the ExecCredentials resource. Required.\n      #\n      # The API version returned by the plugin MUST match the version listed here.\n      #\n      # To integrate with tools that support multiple versions (such as client.authentication.k8s.io\/v1beta1),\n      # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n      # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n      apiVersion: \"client.authentication.k8s.io\/v1\"\n\n      # Environment variables to set when executing the plugin. Optional.\n      env:\n      - name: \"FOO\"\n        value: \"bar\"\n\n      # Arguments to pass when executing the plugin. Optional.\n      args:\n      - \"arg1\"\n      - \"arg2\"\n\n      # Text shown to the user when the executable doesn't seem to be present. Optional.\n      installHint: |\n        example-client-go-exec-plugin is required to authenticate\n        to the current cluster.  It can be installed:\n\n        On macOS: brew install example-client-go-exec-plugin\n\n        On Ubuntu: apt-get install example-client-go-exec-plugin\n\n        On Fedora: dnf install example-client-go-exec-plugin\n\n        ...\n\n      # Whether or not to provide cluster information, which could potentially contain\n      # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n      # environment variable.\n      provideClusterInfo: true\n\n      # The contract between the exec plugin and the standard input I\/O stream. If the\n      # contract cannot be satisfied, this plugin will not be run and an error will be\n      # returned. 
Valid values are \"Never\" (this exec plugin never uses standard input),\n      # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n      # or \"Always\" (this exec plugin requires standard input to function). Required.\n      interactiveMode: Never\nclusters:\n- name: my-cluster\n  cluster:\n    server: \"https:\/\/172.17.4.100:6443\"\n    certificate-authority: \"\/etc\/kubernetes\/ca.pem\"\n    extensions:\n    - name: client.authentication.k8s.io\/exec # reserved extension name for per cluster exec config\n      extension:\n        arbitrary: config\n        this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n        you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n  context:\n    cluster: my-cluster\n    user: my-user\ncurrent-context: my-cluster\n```\n\n\n```yaml\napiVersion: v1\nkind: Config\nusers:\n- name: my-user\n  user:\n    exec:\n      # Command to execute. Required.\n      command: \"example-client-go-exec-plugin\"\n\n      # API version to use when decoding the ExecCredentials resource. Required.\n      #\n      # The API version returned by the plugin MUST match the version listed here.\n      #\n      # To integrate with tools that support multiple versions (such as client.authentication.k8s.io\/v1),\n      # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,\n      # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.\n      apiVersion: \"client.authentication.k8s.io\/v1beta1\"\n\n      # Environment variables to set when executing the plugin. Optional.\n      env:\n      - name: \"FOO\"\n        value: \"bar\"\n\n      # Arguments to pass when executing the plugin. Optional.\n      args:\n      - \"arg1\"\n      - \"arg2\"\n\n      # Text shown to the user when the executable doesn't seem to be present. 
Optional.\n      installHint: |\n        example-client-go-exec-plugin is required to authenticate\n        to the current cluster.  It can be installed:\n\n        On macOS: brew install example-client-go-exec-plugin\n\n        On Ubuntu: apt-get install example-client-go-exec-plugin\n\n        On Fedora: dnf install example-client-go-exec-plugin\n\n        ...\n\n      # Whether or not to provide cluster information, which could potentially contain\n      # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO\n      # environment variable.\n      provideClusterInfo: true\n\n      # The contract between the exec plugin and the standard input I\/O stream. If the\n      # contract cannot be satisfied, this plugin will not be run and an error will be\n      # returned. Valid values are \"Never\" (this exec plugin never uses standard input),\n      # \"IfAvailable\" (this exec plugin wants to use standard input if it is available),\n      # or \"Always\" (this exec plugin requires standard input to function). Optional.\n      # Defaults to \"IfAvailable\".\n      interactiveMode: Never\nclusters:\n- name: my-cluster\n  cluster:\n    server: \"https:\/\/172.17.4.100:6443\"\n    certificate-authority: \"\/etc\/kubernetes\/ca.pem\"\n    extensions:\n    - name: client.authentication.k8s.io\/exec # reserved extension name for per cluster exec config\n      extension:\n        arbitrary: config\n        this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\n        you: [\"can\", \"put\", \"anything\", \"here\"]\ncontexts:\n- name: my-cluster\n  context:\n    cluster: my-cluster\n    user: my-user\ncurrent-context: my-cluster\n```\n\n\n\nRelative command paths are interpreted as relative to the directory of the config file. 
If\nKUBECONFIG is set to `\/home\/jane\/kubeconfig` and the exec command is `.\/bin\/example-client-go-exec-plugin`,\nthe binary `\/home\/jane\/bin\/example-client-go-exec-plugin` is executed.\n\n```yaml\n- name: my-user\n  user:\n    exec:\n      # Path relative to the directory of the kubeconfig\n      command: ".\/bin\/example-client-go-exec-plugin"\n      apiVersion: "client.authentication.k8s.io\/v1"\n      interactiveMode: Never\n```\n\n### Input and output formats\n\nThe executed command prints an `ExecCredential` object to `stdout`. `k8s.io\/client-go`\nauthenticates against the Kubernetes API using the returned credentials in the `status`.\nThe executed command is passed an `ExecCredential` object as input via the `KUBERNETES_EXEC_INFO`\nenvironment variable. This input contains helpful information like the expected API version\nof the returned `ExecCredential` object and whether or not the plugin can use `stdin` to interact\nwith the user.\n\nWhen run from an interactive session (i.e., a terminal), `stdin` can be exposed directly\nto the plugin. Plugins should use the `spec.interactive` field of the input\n`ExecCredential` object from the `KUBERNETES_EXEC_INFO` environment variable to\ndetermine if `stdin` has been provided. A plugin's `stdin` requirements (i.e., whether\n`stdin` is optional, strictly required, or never used in order for the plugin\nto run successfully) are declared via the `user.exec.interactiveMode` field in the\n[kubeconfig](\/docs\/concepts\/configuration\/organize-cluster-access-kubeconfig\/)\n(see table below for valid values). The `user.exec.interactiveMode` field is optional\nin `client.authentication.k8s.io\/v1beta1` and required in `client.authentication.k8s.io\/v1`.\n\n\n| `interactiveMode` Value | Meaning |\n| ----------------------- | ------- |\n| `Never` | This exec plugin never needs to use standard input, and therefore the exec plugin will be run regardless of whether standard input is available for user input. 
|\n| `IfAvailable` | This exec plugin would like to use standard input if it is available, but can still operate if standard input is not available. Therefore, the exec plugin will be run regardless of whether stdin is available for user input. If standard input is available for user input, then it will be provided to this exec plugin. |\n| `Always` | This exec plugin requires standard input in order to run, and therefore the exec plugin will only be run if standard input is available for user input. If standard input is not available for user input, then the exec plugin will not be run and an error will be returned by the exec plugin runner. |\n\n\nTo use bearer token credentials, the plugin returns a token in the status of the\n[`ExecCredential`](\/docs\/reference\/config-api\/client-authentication.v1beta1\/#client-authentication-k8s-io-v1beta1-ExecCredential).\n\n\n\n```json\n{\n  \"apiVersion\": \"client.authentication.k8s.io\/v1\",\n  \"kind\": \"ExecCredential\",\n  \"status\": {\n    \"token\": \"my-bearer-token\"\n  }\n}\n```\n\n\n```json\n{\n  \"apiVersion\": \"client.authentication.k8s.io\/v1beta1\",\n  \"kind\": \"ExecCredential\",\n  \"status\": {\n    \"token\": \"my-bearer-token\"\n  }\n}\n```\n\n\n\nAlternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth.\nIf the plugin returns a different certificate and key on a subsequent call, `k8s.io\/client-go`\nwill close existing connections with the server to force a new TLS handshake.\n\nIf specified, `clientKeyData` and `clientCertificateData` must both be present.\n\n`clientCertificateData` may contain additional intermediate certificates to send to the server.\n\n\n\n```json\n{\n  \"apiVersion\": \"client.authentication.k8s.io\/v1\",\n  \"kind\": \"ExecCredential\",\n  \"status\": {\n    \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n    \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA 
PRIVATE KEY-----\"\n  }\n}\n```\n\n\n```json\n{\n  \"apiVersion\": \"client.authentication.k8s.io\/v1beta1\",\n  \"kind\": \"ExecCredential\",\n  \"status\": {\n    \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\\n...\\n-----END CERTIFICATE-----\",\n    \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n  }\n}\n```\n\n\n\nOptionally, the response can include the expiry of the credential formatted as an\n[RFC 3339](https:\/\/datatracker.ietf.org\/doc\/html\/rfc3339) timestamp.\n\nPresence or absence of an expiry has the following impact:\n\n- If an expiry is included, the bearer token and TLS credentials are cached until\n  the expiry time is reached, the server responds with a 401 HTTP status code,\n  or the process exits.\n- If an expiry is omitted, the bearer token and TLS credentials are cached until\n  the server responds with a 401 HTTP status code or until the process exits.\n\n\n\n```json\n{\n  \"apiVersion\": \"client.authentication.k8s.io\/v1\",\n  \"kind\": \"ExecCredential\",\n  \"status\": {\n    \"token\": \"my-bearer-token\",\n    \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n  }\n}\n```\n\n\n```json\n{\n  \"apiVersion\": \"client.authentication.k8s.io\/v1beta1\",\n  \"kind\": \"ExecCredential\",\n  \"status\": {\n    \"token\": \"my-bearer-token\",\n    \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\"\n  }\n}\n```\n\n\n\nTo enable the exec plugin to obtain cluster-specific information, set `provideClusterInfo` on the `user.exec`\nfield in the [kubeconfig](\/docs\/concepts\/configuration\/organize-cluster-access-kubeconfig\/).\nThe plugin will then be supplied this cluster-specific information in the `KUBERNETES_EXEC_INFO` environment variable.\nInformation from this environment variable can be used to perform cluster-specific\ncredential acquisition logic.\nThe following `ExecCredential` manifest shows a sample of the cluster information supplied.\n\n\n\n```json\n{\n  \"apiVersion\": 
\"client.authentication.k8s.io\/v1\",\n  \"kind\": \"ExecCredential\",\n  \"spec\": {\n    \"cluster\": {\n      \"server\": \"https:\/\/172.17.4.100:6443\",\n      \"certificate-authority-data\": \"LS0t...\",\n      \"config\": {\n        \"arbitrary\": \"config\",\n        \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n        \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n      }\n    },\n    \"interactive\": true\n  }\n}\n```\n\n\n```json\n{\n  \"apiVersion\": \"client.authentication.k8s.io\/v1beta1\",\n  \"kind\": \"ExecCredential\",\n  \"spec\": {\n    \"cluster\": {\n      \"server\": \"https:\/\/172.17.4.100:6443\",\n      \"certificate-authority-data\": \"LS0t...\",\n      \"config\": {\n        \"arbitrary\": \"config\",\n        \"this\": \"can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo\",\n        \"you\": [\"can\", \"put\", \"anything\", \"here\"]\n      }\n    },\n    \"interactive\": true\n  }\n}\n```\n\n\n\n## API access to authentication information for a client {#self-subject-review}\n\n\n\nIf your cluster has the API enabled, you can use the `SelfSubjectReview` API to find out\nhow your Kubernetes cluster maps your authentication information to identify you as a client.\nThis works whether you are authenticating as a user (typically representing\na real person) or as a ServiceAccount.\n\n`SelfSubjectReview` objects do not have any configurable fields. 
On receiving a request,\nthe Kubernetes API server fills the status with the user attributes and returns it to the user.\n\nRequest example (the body would be a `SelfSubjectReview`):\n\n```http\nPOST \/apis\/authentication.k8s.io\/v1\/selfsubjectreviews\n```\n\n```json\n{\n  \"apiVersion\": \"authentication.k8s.io\/v1\",\n  \"kind\": \"SelfSubjectReview\"\n}\n```\n\nResponse example:\n\n```json\n{\n  \"apiVersion\": \"authentication.k8s.io\/v1\",\n  \"kind\": \"SelfSubjectReview\",\n  \"status\": {\n    \"userInfo\": {\n      \"username\": \"jane.doe\",\n      \"uid\": \"b6c7cfd4-f166-11ec-8ea0-0242ac120002\",\n      \"groups\": [\n        \"viewers\",\n        \"editors\",\n        \"system:authenticated\"\n      ],\n      \"extra\": {\n        \"provider_id\": [\"token.company.example\"]\n      }\n    }\n  }\n}\n```\n\nFor convenience, Kubernetes provides the `kubectl auth whoami` command. Executing this command\nproduces output similar to the following (the exact user attributes shown will differ):\n\n* Simple output example\n\n  ```\n  ATTRIBUTE         VALUE\n  Username          jane.doe\n  Groups            [system:authenticated]\n  ```\n\n* Complex example including extra attributes\n\n  ```\n  ATTRIBUTE         VALUE\n  Username          jane.doe\n  UID               b79dbf30-0c6a-11ed-861d-0242ac120002\n  Groups            [students teachers system:authenticated]\n  Extra: skills     [reading learning]\n  Extra: subjects   [math sports]\n  ```\n\nBy providing the `--output` flag, it is also possible to print the JSON or YAML representation of the result:\n\n\n\n```json\n{\n  \"apiVersion\": \"authentication.k8s.io\/v1\",\n  \"kind\": \"SelfSubjectReview\",\n  \"status\": {\n    \"userInfo\": {\n      \"username\": \"jane.doe\",\n      \"uid\": \"b79dbf30-0c6a-11ed-861d-0242ac120002\",\n      \"groups\": [\n        \"students\",\n        \"teachers\",\n        \"system:authenticated\"\n      ],\n      \"extra\": {\n        \"skills\": [\n          \"reading\",\n          
\"learning\"\n        ],\n        \"subjects\": [\n          \"math\",\n          \"sports\"\n        ]\n      }\n    }\n  }\n}\n```\n\n\n\n```yaml\napiVersion: authentication.k8s.io\/v1\nkind: SelfSubjectReview\nstatus:\n  userInfo:\n    username: jane.doe\n    uid: b79dbf30-0c6a-11ed-861d-0242ac120002\n    groups:\n    - students\n    - teachers\n    - system:authenticated\n    extra:\n      skills:\n      - reading\n      - learning\n      subjects:\n      - math\n      - sports\n```\n\n\n\nThis feature is extremely useful when a complicated authentication flow is used in a Kubernetes cluster,\nfor example, if you use [webhook token authentication](\/docs\/reference\/access-authn-authz\/authentication\/#webhook-token-authentication)\nor [authenticating proxy](\/docs\/reference\/access-authn-authz\/authentication\/#authenticating-proxy).\n\n\nThe Kubernetes API server fills the `userInfo` after all authentication mechanisms are applied,\nincluding [impersonation](\/docs\/reference\/access-authn-authz\/authentication\/#user-impersonation).\nIf you, or an authentication proxy, make a SelfSubjectReview using impersonation,\nyou see the user details and properties for the user that was impersonated.\n\n\nBy default, all authenticated users can create `SelfSubjectReview` objects when the `APISelfSubjectReview`\nfeature is enabled. 
It is allowed by the `system:basic-user` cluster role.\n\n\nYou can only make `SelfSubjectReview` requests if:\n\n* the `APISelfSubjectReview`\n  [feature gate](\/docs\/reference\/command-line-tools-reference\/feature-gates\/)\n  is enabled for your cluster (not needed for current Kubernetes releases, but older\n  Kubernetes versions might not offer this feature gate, or might default it to be off)\n* (if you are running a version of Kubernetes older than v1.28) the API server for your\n  cluster has the `authentication.k8s.io\/v1alpha1` or `authentication.k8s.io\/v1beta1`\n  API group enabled.\n\n\n## What's next\n\n* Read the [client authentication reference (v1beta1)](\/docs\/reference\/config-api\/client-authentication.v1beta1\/)\n* Read the [client authentication reference (v1)](\/docs\/reference\/config-api\/client-authentication.v1\/)
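As a compact illustration of the exec plugin contract described on this page, here is a minimal sketch of a credential plugin in Python. It is hypothetical: a real plugin would exchange user credentials with an identity service, and the token value and ten-minute lifetime below are placeholders. What it does show is the shape of the `KUBERNETES_EXEC_INFO` input and the `ExecCredential` object that `k8s.io/client-go` expects the plugin to print to `stdout`.

```python
#!/usr/bin/env python3
# Minimal sketch of a client-go exec credential plugin (hypothetical).
# A real plugin would obtain the token from an identity service;
# "my-bearer-token" and the 10-minute expiry below are placeholders.
import json
import os
import sys
from datetime import datetime, timedelta, timezone

DEFAULT_API_VERSION = "client.authentication.k8s.io/v1"

def make_exec_credential(exec_info: dict) -> dict:
    """Build the ExecCredential that client-go reads from stdout.

    exec_info is the parsed KUBERNETES_EXEC_INFO object; its apiVersion
    says which version the caller expects the response to use, and
    spec.interactive says whether stdin may be used to prompt the user.
    """
    expiry = datetime.now(timezone.utc) + timedelta(minutes=10)
    return {
        # Must match the apiVersion declared in the kubeconfig.
        "apiVersion": exec_info.get("apiVersion", DEFAULT_API_VERSION),
        "kind": "ExecCredential",
        "status": {
            "token": "my-bearer-token",  # placeholder credential
            # RFC 3339 timestamp; client-go caches the token until then.
            "expirationTimestamp": expiry.strftime("%Y-%m-%dT%H:%M:%SZ"),
        },
    }

def main() -> None:
    # client-go passes the input ExecCredential via this env variable
    # (only populated with cluster info when provideClusterInfo is set).
    exec_info = json.loads(os.environ.get("KUBERNETES_EXEC_INFO", "{}"))
    json.dump(make_exec_credential(exec_info), sys.stdout)

if __name__ == "__main__":
    main()
```

With `interactiveMode: Never` and a matching `apiVersion` configured under `user.exec` in the kubeconfig, `kubectl` would run such a command and use the returned bearer token until the expiry is reached, the server returns a 401, or the process exits.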
user is authorized to perform a specific operation on a resource  For more details  refer to the normal users topic in  certificate request   docs reference access authn authz certificate signing requests  normal user  for more details about this   In contrast  service accounts are users managed by the Kubernetes API  They are bound to specific namespaces  and created automatically by the API server or manually through API calls  Service accounts are tied to a set of credentials stored as  Secrets   which are mounted into pods allowing in cluster processes to talk to the Kubernetes API   API requests are tied to either a normal user or a service account  or are treated as  anonymous requests   anonymous requests   This means every process inside or outside the cluster  from a human user typing  kubectl  on a workstation  to  kubelets  on nodes  to members of the control plane  must authenticate when making requests to the API server  or be treated as an anonymous user      Authentication strategies  Kubernetes uses client certificates  bearer tokens  or an authenticating proxy to authenticate API requests through authentication plugins  As HTTP requests are made to the API server  plugins attempt to associate the following attributes with the request     Username  a string which identifies the end user  Common values might be  kube admin  or  jane example com     UID  a string which identifies the end user and attempts to be more consistent and unique than username    Groups  a set of strings  each of which indicates the user s membership in a named logical collection of users    Common values might be  system masters  or  devops team     Extra fields  a map of strings to list of strings which holds additional information authorizers may find useful   All values are opaque to the authentication system and only hold significance when interpreted by an  authorizer   docs reference access authn authz authorization     You can enable multiple authentication methods at 
once  You should usually use at least two methods     service account tokens for service accounts   at least one other method for user authentication   When multiple authenticator modules are enabled  the first module to successfully authenticate the request short circuits evaluation  The API server does not guarantee the order authenticators run in   The  system authenticated  group is included in the list of groups for all authenticated users   Integrations with other authentication protocols  LDAP  SAML  Kerberos  alternate x509 schemes  etc  can be accomplished using an  authenticating proxy   authenticating proxy  or the  authentication webhook   webhook token authentication        X509 client certificates  Client certificate authentication is enabled by passing the    client ca file SOMEFILE  option to API server  The referenced file must contain one or more certificate authorities to use to validate client certificates presented to the API server  If a client certificate is presented and verified  the common name of the subject is used as the user name for the request  As of Kubernetes 1 4  client certificates can also indicate a user s group memberships using the certificate s organization fields  To include multiple group memberships for a user  include multiple organization fields in the certificate   For example  using the  openssl  command line tool to generate a certificate signing request      bash openssl req  new  key jbeda pem  out jbeda csr pem  subj   CN jbeda O app1 O app2       This would create a CSR for the username  jbeda   belonging to two groups   app1  and  app2    See  Managing Certificates   docs tasks administer cluster certificates   for how to generate a client cert       Static token file  The API server reads bearer tokens from a file when given the    token auth file SOMEFILE  option on the command line  Currently  tokens last indefinitely  and the token list cannot be changed without restarting the API server   The token file is 
a csv file with a minimum of 3 columns  token  user name  user uid  followed by optional group names    If you have more than one group  the column must be double quoted e g      conf token user uid  group1 group2 group3             Putting a bearer token in a request  When using bearer token authentication from an http client  the API server expects an  Authorization  header with a value of  Bearer  token    The bearer token must be a character sequence that can be put in an HTTP header value using no more than the encoding and quoting facilities of HTTP  For example  if the bearer token is  31ada4fd adec 460c 809a 9e56ceb75269  then it would appear in an HTTP header as shown below      http Authorization  Bearer 31ada4fd adec 460c 809a 9e56ceb75269          Bootstrap tokens    To allow for streamlined bootstrapping for new clusters  Kubernetes includes a dynamically managed Bearer token type called a  Bootstrap Token   These tokens are stored as Secrets in the  kube system  namespace  where they can be dynamically managed and created  Controller Manager contains a TokenCleaner controller that deletes bootstrap tokens as they expire   The tokens are of the form   a z0 9  6   a z0 9  16    The first component is a Token ID and the second component is the Token Secret  You specify the token in an HTTP header as follows      http Authorization  Bearer 781292 db7bc3a58fc5f07e      You must enable the Bootstrap Token Authenticator with the    enable bootstrap token auth  flag on the API Server  You must enable the TokenCleaner controller via the    controllers  flag on the Controller Manager  This is done with something like    controllers   tokencleaner    kubeadm  will do this for you if you are using it to bootstrap a cluster   The authenticator authenticates as  system bootstrap  Token ID    It is included in the  system bootstrappers  group  The naming and groups are intentionally limited to discourage users from using these tokens past bootstrapping  The user 
names and group can be used  and are used by  kubeadm   to craft the appropriate authorization policies to support bootstrapping a cluster   Please see  Bootstrap Tokens   docs reference access authn authz bootstrap tokens   for in depth documentation on the Bootstrap Token authenticator and controllers along with how to manage these tokens with  kubeadm        Service account tokens  A service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests  The plugin takes two optional flags        service account key file  File containing PEM encoded x509 RSA or ECDSA   private or public keys  used to verify ServiceAccount tokens  The specified file   can contain multiple keys  and the flag can be specified multiple times with   different files  If unspecified    tls private key file is used       service account lookup  If enabled  tokens which are deleted from the API will be revoked   Service accounts are usually created automatically by the API server and associated with pods running in the cluster through the  ServiceAccount   Admission Controller   docs reference access authn authz admission controllers    Bearer tokens are mounted into pods at well known locations  and allow in cluster processes to talk to the API server  Accounts may be explicitly associated with pods using the  serviceAccountName  field of a  PodSpec      serviceAccountName  is usually omitted because this is done automatically       yaml apiVersion  apps v1   this apiVersion is relevant as of Kubernetes 1 9 kind  Deployment metadata    name  nginx deployment   namespace  default spec    replicas  3   template      metadata                spec        serviceAccountName  bob the bot       containers          name  nginx         image  nginx 1 14 2      Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API  To manually create a 
service account  use the  kubectl create serviceaccount  NAME   command  This creates a service account in the current namespace      bash kubectl create serviceaccount jenkins         none serviceaccount jenkins created      Create an associated token      bash kubectl create token jenkins         none eyJhbGciOiJSUzI1NiIsImtp         The created token is a signed JSON Web Token  JWT    The signed JWT can be used as a bearer token to authenticate as the given service account  See  above   putting a bearer token in a request  for how the token is included in a request  Normally these tokens are mounted into pods for in cluster access to the API server  but can be used from outside the cluster as well   Service accounts authenticate with the username  system serviceaccount  NAMESPACE   SERVICEACCOUNT    and are assigned to the groups  system serviceaccounts  and  system serviceaccounts  NAMESPACE      Because service account tokens can also be stored in Secret API objects  any user with write access to Secrets can request a token  and any user with read access to those Secrets can authenticate as the service account  Be cautious when granting permissions to service accounts and read or write capabilities for Secrets        OpenID Connect Tokens   OpenID Connect  https   openid net connect   is a flavor of OAuth2 supported by some OAuth2 providers  notably Microsoft Entra ID  Salesforce  and Google  The protocol s main extension of OAuth2 is an additional field returned with the access token called an  ID Token  https   openid net specs openid connect core 1 0 html IDToken   This token is a JSON Web Token  JWT  with well known fields  such as a user s email  signed by the server   To identify the user  the authenticator uses the  id token   not the  access token   from the OAuth2  token response  https   openid net specs openid connect core 1 0 html TokenResponse  as a bearer token  See  above   putting a bearer token in a request  for how the token is included in a 
request    sequenceDiagram     participant user as User     participant idp as Identity Provider     participant kube as kubectl     participant api as API Server      user     idp  1  Log in to IdP     activate idp     idp      user  2  Provide access token  br id token  and refresh token     deactivate idp     activate user     user     kube  3  Call kubectl br with   token being the id token br OR add tokens to  kube config     deactivate user     activate kube     kube     api  4  Authorization  Bearer        deactivate kube     activate api     api     api  5  Is JWT signature valid      api     api  6  Has the JWT expired   iat exp      api     api  7  User authorized      api      kube  8  Authorized  Perform br action and return result     deactivate api     activate kube     kube   x user  9  Return result     deactivate kube   1  Log in to your identity provider 1  Your identity provider will provide you with an  access token    id token  and a  refresh token  1  When using  kubectl   use your  id token  with the    token  flag or add it directly to your  kubeconfig  1   kubectl  sends your  id token  in a header called Authorization to the API server 1  The API server will make sure the JWT signature is valid 1  Check to make sure the  id token  hasn t expired     Perform claim and or user validation if CEL expressions are configured with  AuthenticationConfiguration    1  Make sure the user is authorized 1  Once authorized the API server returns a response to  kubectl  1   kubectl  provides feedback to the user  Since all of the data needed to validate who you are is in the  id token   Kubernetes doesn t need to  phone home  to the identity provider  In a model where every request is stateless this provides a very scalable solution for authentication  It does offer a few challenges   1  Kubernetes has no  web interface  to trigger the authentication process  There is no browser or    interface to collect credentials which is why you need to authenticate 
to your identity provider first  1  The  id token  can t be revoked  it s like a certificate so it should be short lived  only a few minutes     so it can be very annoying to have to get a new token every few minutes  1  To authenticate to the Kubernetes dashboard  you must use the  kubectl proxy  command or a reverse proxy    that injects the  id token         Configuring the API Server        Using flags  To enable the plugin  configure the following flags on the API server     Parameter   Description   Example   Required                                                        oidc issuer url    URL of the provider that allows the API server to discover public signing keys  Only URLs that use the  https     scheme are accepted  This is typically the provider s discovery URL  changed to have an empty path    If the issuer s OIDC discovery URL is  https   accounts provider example  well known openid configuration   the value should be  https   accounts provider example    Yes        oidc client id     A client id that all tokens must be issued for    kubernetes   Yes        oidc username claim    JWT claim to use as the user name  By default  sub   which is expected to be a unique identifier of the end user  Admins can choose other claims  such as  email  or  name   depending on their provider  However  claims other than  email  will be prefixed with the issuer URL to prevent naming clashes with other plugins    sub   No        oidc username prefix    Prefix prepended to username claims to prevent clashes with existing names  such as  system   users   For example  the value  oidc   will create usernames like  oidc jane doe   If this flag isn t provided and    oidc username claim  is a value other than  email  the prefix defaults to    Issuer URL     where    Issuer URL    is the value of    oidc issuer url   The value     can be used to disable all prefixing     oidc     No        oidc groups claim    JWT claim to use as the user s group  If the claim is present it 
must be an array of strings    groups   No        oidc groups prefix    Prefix prepended to group claims to prevent clashes with existing names  such as  system   groups   For example  the value  oidc   will create group names like  oidc engineering  and  oidc infra      oidc     No        oidc required claim    A key value pair that describes a required claim in the ID Token  If set  the claim is verified to be present in the ID Token with a matching value  Repeat this flag to specify multiple claims     claim value    No        oidc ca file    The path to the certificate for the CA that signed your identity provider s web certificate  Defaults to the host s root CAs      etc kubernetes ssl kc ca pem    No        oidc signing algs    The signing algorithms accepted  Default is  RS256      RS512    No          Authentication configuration from a file   using authentication configuration     JWT Authenticator is an authenticator to authenticate Kubernetes users using JWT compliant tokens  The authenticator will attempt to parse a raw ID token  verify it s been signed by the configured issuer  The public key to verify the signature is discovered from the issuer s public endpoint using OIDC discovery   The minimum valid JWT payload must contain the following claims      json      iss    https   example com        must match the issuer url    aud     my app                    at least one of the entries in issuer audiences must match the  aud  claim in presented JWTs     exp   1234567890                  token expiration as Unix time  the number of seconds elapsed since January 1  1970 UTC      username claim     user          this is the username claim configured in the claimMappings username claim or claimMappings username expression        The configuration file approach allows you to configure multiple JWT authenticators  each with a unique  issuer url  and  issuer discoveryURL   The configuration file even allows you to specify  CEL   docs reference using api cel  
The API server also automatically reloads the authenticators when the configuration file is modified. You can use the `apiserver_authentication_config_controller_automatic_reload_last_timestamp_seconds` metric to monitor the last time the configuration was reloaded by the API server.

You must specify the path to the authentication configuration using the `--authentication-config` flag on the API server. If you want to use command line flags instead of the configuration file, those will continue to work as-is. To access the new capabilities like configuring multiple authenticators or setting multiple audiences for an issuer, switch to using the configuration file.

For Kubernetes v{{< skew currentVersion >}}, the structured authentication configuration file format is beta-level, and the mechanism for using that configuration is also beta. Provided you didn't specifically disable the `StructuredAuthenticationConfiguration` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for your cluster, you can turn on structured authentication by specifying the `--authentication-config` command line argument to the kube-apiserver. An example of the structured authentication configuration file is shown below.

If you specify `--authentication-config` along with any of the `--oidc-*` command line arguments, this is a misconfiguration. In this situation, the API server reports an error and then immediately exits. If you want to switch to using structured authentication configuration, you have to remove the `--oidc-*` command line arguments, and use the configuration file instead.

```yaml
#
# CAUTION: this is an example configuration.
#          Do not use this for your own cluster!
#
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
# list of authenticators to authenticate Kubernetes users using JWT compliant tokens.
# the maximum number of allowed authenticators
# is 64.
jwt:
- issuer:
    # url must be unique across all authenticators.
    # url must not conflict with issuer configured in --service-account-issuer.
    url: https://example.com # Same as --oidc-issuer-url.
    # discoveryURL, if specified, overrides the URL used to fetch discovery
    # information instead of using "{url}/.well-known/openid-configuration".
    # The exact value specified is used, so "/.well-known/openid-configuration"
    # must be included in discoveryURL if needed.
    #
    # The "issuer" field in the fetched discovery information must match the "issuer.url" field
    # in the AuthenticationConfiguration and will be used to validate the "iss" claim in the presented JWT.
    # This is for scenarios where the well-known and jwks endpoints are hosted at a different
    # location than the issuer (such as locally in the cluster).
    # discoveryURL must be different from url if specified and must be unique across all authenticators.
    discoveryURL: https://discovery.example.com/.well-known/openid-configuration
    # PEM encoded CA certificates used to validate the connection when fetching
    # discovery information. If not set, the system verifier will be used.
    # Same value as the content of the file referenced by the --oidc-ca-file flag.
    certificateAuthority: <PEM encoded CA certificates>
    # audiences is the set of acceptable audiences the JWT must be issued to.
    # At least one of the entries must match the "aud" claim in presented JWTs.
    audiences:
    - my-app # Same as --oidc-client-id.
    - my-other-app
    # this is required to be set to "MatchAny" when multiple audiences are specified.
    audienceMatchPolicy: MatchAny
  # rules applied to validate token claims to authenticate users.
  claimValidationRules:
    # Same as --oidc-required-claim key=value.
  - claim: hd
    requiredValue: example.com
    # Instead of claim and requiredValue, you can use expression to validate the claim.
    # expression is a CEL expression that evaluates to a boolean.
    # all the expressions must evaluate to true for validation to succeed.
  - expression: 'claims.hd == "example.com"'
    # Message customizes the error message seen in the API server logs when the validation fails.
    message: the hd claim must be set to example.com
  - expression: 'claims.exp - claims.nbf <= 86400'
    message: total token lifetime must not exceed 24 hours
  claimMappings:
    # username represents an option for the username attribute.
    # This is the only required attribute.
    username:
      # Same as --oidc-username-claim. Mutually exclusive with username.expression.
      claim: "sub"
      # Same as --oidc-username-prefix. Mutually exclusive with username.expression.
      # if username.claim is set, username.prefix is required.
      # Explicitly set it to "" if no prefix is desired.
      prefix: ""
      # Mutually exclusive with username.claim and username.prefix.
      # expression is a CEL expression that evaluates to a string.
      #
      # 1.  If username.expression uses 'claims.email', then 'claims.email_verified' must be used in
      #     username.expression or extra[*].valueExpression or claimValidationRules[*].expression.
      #     An example claim validation rule expression that matches the validation automatically
      #     applied when username.claim is set to 'email' is 'claims.?email_verified.orValue(true)'.
      # 2.  If the username asserted based on username.expression is the empty string, the authentication
      #     request will fail.
      expression: 'claims.username + ":external-user"'
    # groups represents an option for the groups attribute.
    groups:
      # Same as --oidc-groups-claim. Mutually exclusive with groups.expression.
      claim: "sub"
      # Same as --oidc-groups-prefix. Mutually exclusive with groups.expression.
      # if groups.claim is set, groups.prefix is required.
      # Explicitly set it to "" if no prefix is desired.
      prefix: ""
      # Mutually exclusive with groups.claim and groups.prefix.
      # expression is a CEL expression that evaluates to a string or a list of strings.
      expression: 'claims.roles.split(",")'
    # uid represents an option for the uid attribute.
    uid:
      # Mutually exclusive with uid.expression.
      claim: 'sub'
      # Mutually exclusive with uid.claim.
      # expression is a CEL expression that evaluates to a string.
      expression: 'claims.sub'
    # extra attributes to be added to the UserInfo object. Keys must be domain-prefix path and must be unique.
    extra:
    - key: 'example.com/tenant'
      # valueExpression is a CEL expression that evaluates to a string or a list of strings.
      valueExpression: 'claims.tenant'
  # validation rules applied to the final user object.
  userValidationRules:
    # expression is a CEL expression that evaluates to a boolean.
    # all the expressions must evaluate to true for the user to be valid.
  - expression: "!user.username.startsWith('system:')"
    # Message customizes the error message seen in the API server logs when the validation fails.
    message: 'username cannot used reserved system: prefix'
  - expression: "user.groups.all(group, !group.startsWith('system:'))"
    message: 'groups cannot used reserved system: prefix'
```

* Claim validation rule expression

  `jwt.claimValidationRules[i].expression` represents the expression which will be evaluated by CEL.
  CEL expressions have access to the contents of the token payload, organized into `claims` CEL variable.
  `claims` is a map of claim names (as strings) to claim values (of any type).
* User validation rule expression

  `jwt.userValidationRules[i].expression` represents the expression which will be evaluated by CEL.
  CEL expressions have access to the contents of `userInfo`, organized into `user` CEL variable.
  Refer to the [UserInfo](/docs/reference/generated/kubernetes-api/v{{< skew currentVersion >}}/#userinfo-v1-authentication-k8s-io) API documentation for the schema of `user`.
* Claim mapping expression

  `jwt.claimMappings.username.expression`, `jwt.claimMappings.groups.expression`, `jwt.claimMappings.uid.expression`, `jwt.claimMappings.extra[i].valueExpression` represents the expression which will be evaluated by CEL.
  CEL expressions have access to the contents of the token payload, organized into `claims` CEL variable.
  `claims` is a map of claim names (as strings) to claim values (of any type).

  To learn more, see the [Documentation on CEL](/docs/reference/using-api/cel/).
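To reason about which tokens will pass the sample `claimValidationRules` before deploying a configuration, the two CEL rules can be emulated outside the cluster. The sketch below is a hypothetical Python stand-in (`validate_claims` is illustrative only), not the API server's actual CEL evaluator:

```python
def validate_claims(claims: dict) -> tuple[bool, str]:
    """Python stand-in for the two claimValidationRules in the sample config:
    1. claims.hd == "example.com"
    2. claims.exp - claims.nbf <= 86400
    Returns (ok, message) mirroring the rule's message field on failure.
    """
    if claims.get("hd") != "example.com":
        return False, "the hd claim must be set to example.com"
    if claims["exp"] - claims["nbf"] > 86400:
        return False, "total token lifetime must not exceed 24 hours"
    return True, ""

# A token issued for ~9 hours with the expected hd claim passes both rules.
print(validate_claims({"hd": "example.com", "exp": 1703232949, "nbf": 1703200000}))  # (True, '')
# A token missing the hd claim fails the first rule.
print(validate_claims({"exp": 1703232949, "nbf": 1703200000})[0])  # False
```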
Here are examples of the `AuthenticationConfiguration` with different token payloads.

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://example.com
    audiences:
    - my-app
  claimMappings:
    username:
      expression: 'claims.username + ":external-user"'
    groups:
      expression: 'claims.roles.split(",")'
    uid:
      expression: 'claims.sub'
    extra:
    - key: 'example.com/tenant'
      valueExpression: 'claims.tenant'
  userValidationRules:
  - expression: "!user.username.startsWith('system:')" # the expression will evaluate to true, so validation will succeed.
    message: 'username cannot used reserved system: prefix'
```

```bash
TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk VLvQW mNDHx7SEOSv9LVwcPYPuPajJpuQn9C gKq1R94QKSQ5F6UgHMILz8OfmPKmX 00wpwwNVGeevJ79ieX2V   W56iNR5gJ
i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga  HxtKB t0kRMNzLRS7rka SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi P3iCd88iu1xnzA
```

where the token payload is:

```json
{
  "aud": "kubernetes",
  "exp": 1703232949,
  "iat": 1701107233,
  "iss": "https://example.com",
  "jti": "7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873",
  "nbf": 1701107233,
  "roles": "user,admin",
  "sub": "auth",
  "tenant": "72f988bf-86f1-41af-91ab-2d7cd011db4a",
  "username": "foo"
}
```

The token with the above `AuthenticationConfiguration` will produce the following `UserInfo` object and successfully authenticate the user.

```json
{
  "username": "foo:external-user",
  "uid": "auth",
  "groups": [
    "user",
    "admin"
  ],
  "extra": {
    "example.com/tenant": "72f988bf-86f1-41af-91ab-2d7cd011db4a"
  }
}
```
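To make the mapping concrete, here is a hypothetical Python stand-in for what the first example configuration computes from the token payload above. The API server evaluates the CEL expressions itself; `map_and_validate` is illustrative only:

```python
def map_and_validate(claims: dict) -> dict:
    """Emulate the first example's claim mappings and user validation rule."""
    user = {
        "username": claims["username"] + ":external-user",   # claims.username + ":external-user"
        "uid": claims["sub"],                                # claims.sub
        "groups": claims["roles"].split(","),                # claims.roles.split(",")
        "extra": {"example.com/tenant": claims["tenant"]},   # claims.tenant
    }
    # userValidationRules: !user.username.startsWith('system:')
    if user["username"].startswith("system:"):
        raise PermissionError("username cannot used reserved system: prefix")
    return user

payload = {"roles": "user,admin", "sub": "auth",
           "tenant": "72f988bf-86f1-41af-91ab-2d7cd011db4a", "username": "foo"}
# Produces the same fields as the UserInfo object shown above.
print(map_and_validate(payload))
```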
```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://example.com
    audiences:
    - my-app
  claimValidationRules:
  - expression: 'claims.hd == "example.com"' # the token below does not have this claim, so validation will fail.
    message: the hd claim must be set to example.com
  claimMappings:
    username:
      expression: 'claims.username + ":external-user"'
    groups:
      expression: 'claims.roles.split(",")'
    uid:
      expression: 'claims.sub'
    extra:
    - key: 'example.com/tenant'
      valueExpression: 'claims.tenant'
  userValidationRules:
  - expression: "!user.username.startsWith('system:')" # the expression will evaluate to true, so validation will succeed.
    message: 'username cannot used reserved system: prefix'
```

```bash
TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJpYXQiOjE3MDExMDcyMzMsImlzcyI6Imh0dHBzOi8vZXhhbXBsZS5jb20iLCJqdGkiOiI3YzMzNzk0MjgwN2U3M2NhYTJjMzBjODY4YWMwY2U5MTBiY2UwMmRkY2JmZWJlOGMyM2I4YjVmMjdhZDYyODczIiwibmJmIjoxNzAxMTA3MjMzLCJyb2xlcyI6InVzZXIsYWRtaW4iLCJzdWIiOiJhdXRoIiwidGVuYW50IjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjRhIiwidXNlcm5hbWUiOiJmb28ifQ.TBWF2RkQHm4QQz85AYPcwLxSk VLvQW mNDHx7SEOSv9LVwcPYPuPajJpuQn9C gKq1R94QKSQ5F6UgHMILz8OfmPKmX 00wpwwNVGeevJ79ieX2V   W56iNR5gJ i9nn6FYk5pwfVREB0l4HSlpTOmu80gbPWAXY5hLW0ZtcE1JTEEmefORHV2ge8e3jp1xGafNy6LdJWabYuKiw8d7Qga  HxtKB t0kRMNzLRS7rka SfQg0dSYektuxhLbiDkqhmRffGlQKXGVzUsuvFw7IGM5ZWnZgEMDzCI357obHeM3tRqpn5WRjtB8oM7JgnCymaJi P3iCd88iu1xnzA
```

where the token payload is:

```json
{
  "aud": "kubernetes",
  "exp": 1703232949,
  "iat": 1701107233,
  "iss": "https://example.com",
  "jti": "7c337942807e73caa2c30c868ac0ce910bce02ddcbfebe8c23b8b5f27ad62873",
  "nbf": 1701107233,
  "roles": "user,admin",
  "sub": "auth",
  "tenant": "72f988bf-86f1-41af-91ab-2d7cd011db4a",
  "username": "foo"
}
```

The token with the above `AuthenticationConfiguration` will fail to authenticate because the `hd` claim is not set to `example.com`. The API server will return `401 Unauthorized` error.

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://example.com
    audiences:
    - my-app
  claimValidationRules:
  - expression: 'claims.hd == "example.com"'
    message: the hd claim must be set to example.com
  claimMappings:
    username:
      expression: "'system:' + claims.username" # this will prefix the username with "system:" and will fail user validation.
    groups:
      expression: 'claims.roles.split(",")'
    uid:
      expression: 'claims.sub'
    extra:
    - key: 'example.com/tenant'
      valueExpression: 'claims.tenant'
  userValidationRules:
  - expression: "!user.username.startsWith('system:')" # the username will be system:foo and expression will evaluate to false, so validation will fail.
    message: 'username cannot used reserved system: prefix'
```

```bash
TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6ImY3dF9tOEROWmFTQk1oWGw5QXZTWGhBUC04Y0JmZ0JVbFVpTG5oQkgxdXMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNzAzMjMyOTQ5LCJoZCI6ImV4YW1wbGUuY29tIiwiaWF0IjoxNzAxMTEzMTAxLCJpc3MiOiJodHRwczovL2V4YW1wbGUuY29tIiwianRpIjoiYjViMDY1MjM3MmNkMjBlMzQ1YjZmZGZmY2RjMjE4MWY0YWZkNmYyNTlhYWI0YjdlMzU4ODEyMzdkMjkyMjBiYyIsIm5iZiI6MTcwMTExMzEwMSwicm9sZXMiOiJ1c2VyLGFkbWluIiwic3ViIjoiYXV0aCIsInRlbmFudCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0YSIsInVzZXJuYW1lIjoiZm9vIn0.FgPJBYLobo9jnbHreooBlvpgEcSPWnKfX6dc0IvdlRB F0dCcgy91oCJeK aBk 8zH5AKUXoFTlInfLCkPivMOJqMECA1YTrMUwt IVqwb116AqihfByUYIIqzMjvUbthtbpIeHQm2fF0HbrUqa Q0uaYwgy8mD807h7sBcUMjNd215ff nFIHss 9zegH8GI1d9fiBf g6zjkR1j987EP748khpQh9IxPjMJbSgG uH5x80YFuqgEWwq aYJPQxXX6FatP96a2EAn7wfPpGlPRt0HcBOvq5pCnudgCgfVgiOJiLr 7robQu4T1bis0W75VPEvwWtgFcLnvcQx0JWg
```

where the token payload is:

```json
{
  "aud": "kubernetes",
  "exp": 1703232949,
  "hd": "example.com",
  "iat": 1701113101,
  "iss": "https://example.com",
  "jti": "b5b0652372cd20e345b6fdffcdc2181f4afd6f259aab4b7e35881237d29220bc",
  "nbf": 1701113101,
  "roles": "user,admin",
  "sub": "auth",
  "tenant": "72f988bf-86f1-41af-91ab-2d7cd011db4a",
  "username": "foo"
}
```

The token with the above `AuthenticationConfiguration` will produce the following `UserInfo` object:

```json
{
  "username": "system:foo",
  "uid": "auth",
  "groups": [
    "user",
    "admin"
  ],
  "extra": {
    "example.com/tenant": "72f988bf-86f1-41af-91ab-2d7cd011db4a"
  }
}
```

which will fail user validation because the username starts with `system:`. The API server will return `401 Unauthorized` error.
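The failing path can be traced the same way. This hypothetical snippet mirrors the third example's username mapping and user validation rule (illustrative helpers, not API server code):

```python
def asserted_username(claims: dict) -> str:
    """The third example's username mapping: 'system:' + claims.username."""
    return "system:" + claims["username"]

def user_validation_passes(username: str) -> bool:
    """userValidationRules: !user.username.startsWith('system:')"""
    return not username.startswith("system:")

username = asserted_username({"username": "foo"})
# The mapped username carries the reserved prefix, so validation fails
# and the request is rejected with 401 Unauthorized.
print(username, user_validation_passes(username))  # system:foo False
```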
Limitations:

1. Distributed claims do not work via [CEL](/docs/reference/using-api/cel/) expressions.
1. Egress selector configuration is not supported for calls to `issuer.url` and `issuer.discoveryURL`.

Kubernetes does not provide an OpenID Connect Identity Provider. You can use an existing public OpenID Connect Identity Provider (such as Google, or [others](https://connect2id.com/products/nimbus-oauth-openid-connect-sdk/openid-connect-providers)). Or, you can run your own Identity Provider, such as [dex](https://dexidp.io/), [Keycloak](https://github.com/keycloak/keycloak), CloudFoundry [UAA](https://github.com/cloudfoundry/uaa), or Tremolo Security's [OpenUnison](https://openunison.github.io/).

For an identity provider to work with Kubernetes it must:

1. Support [OpenID connect discovery](https://openid.net/specs/openid-connect-discovery-1_0.html)

   The public key to verify the signature is discovered from the issuer's public endpoint using OIDC discovery.
   If you're using the authentication configuration file, the identity provider doesn't need to publicly expose the discovery endpoint.
   You can host the discovery endpoint at a different location than the issuer (such as locally in the cluster) and specify the
   `issuer.discoveryURL` in the configuration file.

1. Run in TLS with non-obsolete ciphers
1. Have a CA signed certificate (even if the CA is not a commercial CA or is self signed)

A note about requirement #3 above, requiring a CA signed certificate. If you deploy your own identity provider (as opposed to one of the cloud providers like Google or Microsoft) you MUST have your identity provider's web server certificate signed by a certificate with the `CA` flag set to `TRUE`, even if it is self signed. This is due to GoLang's TLS client implementation being very strict to the standards around certificate validation. If
you don't have a CA handy, you can use the [gencert script](https://github.com/dexidp/dex/blob/master/examples/k8s/gencert.sh) from the Dex team to create a simple CA and a signed certificate and key pair. Or you can use [this similar script](https://raw.githubusercontent.com/TremoloSecurity/openunison-qs-kubernetes/master/src/main/bash/makessl.sh) that generates SHA256 certs with a longer life and larger key size.

Refer to setup instructions for specific systems:

- [UAA](https://docs.cloudfoundry.org/concepts/architecture/uaa.html)
- [Dex](https://dexidp.io/docs/kubernetes/)
- [OpenUnison](https://www.tremolosecurity.com/orchestra-k8s/)

#### Using kubectl

##### Option 1 - OIDC Authenticator

The first option is to use the kubectl `oidc` authenticator, which sets the `id_token` as a bearer token for all requests and refreshes the token once it expires. After you've logged into your provider, use kubectl to add your `id_token`, `refresh_token`, `client_id`, and `client_secret` to configure the plugin.

Providers that don't return an `id_token` as part of their refresh token response aren't supported by this plugin and should use "Option 2" below.

```bash
kubectl config set-credentials USER_NAME \
   --auth-provider=oidc \
   --auth-provider-arg=idp-issuer-url=( issuer url ) \
   --auth-provider-arg=client-id=( your client id ) \
   --auth-provider-arg=client-secret=( your client secret ) \
   --auth-provider-arg=refresh-token=( your refresh token ) \
   --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \
   --auth-provider-arg=id-token=( your id_token )
```

As an example, running the below command after authenticating to your identity provider:

```bash
kubectl config set-credentials mmosley  \
        --auth-provider=oidc  \
        --auth-provider-arg=idp-issuer-url=https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP  \
        --auth-provider-arg=client-id=kubernetes  \
        --auth-provider-arg=client-secret=1db158f6-177d-4d9c-
8a8b-d36869918ec5  \
        --auth-provider-arg=refresh-token=q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW VlFUeVRGcluyVF5JT4 haZmPsluFoFu5XkpXk5BXqHega4GAXlF ma vmYpFcHe5eZR slBFpZKtQA  \
        --auth-provider-arg=idp-certificate-authority=/root/ca.pem  \
        --auth-provider-arg=id-token=eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J 6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW p mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3 UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401 RPQzPGMVBld0 zMCAwZttJ4knw
```

Which would produce the below configuration:

```yaml
users:
- name: mmosley
  user:
    auth-provider:
      config:
        client-id: kubernetes
        client-secret: 1db158f6-177d-4d9c-8a8b-d36869918ec5
        id-token: eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J 6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW p mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3
UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401 RPQzPGMVBld0 zMCAwZttJ4knw
        idp-certificate-authority: /root/ca.pem
        idp-issuer-url: https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP
        refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW VlFUeVRGcluyVF5JT4 haZmPsluFoFu5XkpXk5BXq
      name: oidc
```

Once your `id_token` expires, `kubectl` will attempt to refresh your `id_token` using your `refresh_token` and `client_secret`, storing the new values for the `refresh_token` and `id_token` in your `.kube/config`.

##### Option 2 - Use the `--token` Option

The `kubectl` command lets you pass in a token using the `--token` option. Copy and paste the `id_token` into this option:

```bash
kubectl --token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF dOQGmXkFKf1FMyNV0 va B63jn  n9LGSCca 6IVMP8pO Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl TK yF5akjSTHFZD 0gRzlevBDiH8Q79NAr ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat K1PaUk5 ujMBG7yYnr95xD 63n8CO8teGUAAEMx6zRjzfhnhbzX ajwZLGwGUBT4WqjMs70 6a7 8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1 I2ulrOVsYx01 yD35 rw get nodes
```

### Webhook Token Authentication

Webhook authentication is a hook for verifying bearer tokens.

* `--authentication-token-webhook-config-file` a configuration file describing how to access the remote webhook service.
* `--authentication-token-webhook-cache-ttl` how long to cache authentication decisions. Defaults to two minutes.
* `--authentication-token-webhook-version` determines whether to use `authentication.k8s.io/v1beta1` or `authentication.k8s.io/v1` `TokenReview` objects to send/receive information from the webhook. Defaults to `v1beta1`.
The configuration file uses the [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file format. Within the file, `clusters` refers to the remote service and `users` refers to the API server webhook. An example would be:

```yaml
# Kubernetes API version
apiVersion: v1
# kind of the API object
kind: Config
# clusters refers to the remote service.
clusters:
  - name: name-of-remote-authn-service
    cluster:
      certificate-authority: /path/to/ca.pem         # CA for verifying the remote service.
      server: https://authn.example.com/authenticate # URL of remote service to query. 'https' recommended for production.

# users refers to the API server's webhook configuration.
users:
  - name: name-of-api-server
    user:
      client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
      client-key: /path/to/key.pem          # key matching the cert

# kubeconfig files require a context. Provide one for the API server.
current-context: webhook
contexts:
- context:
    cluster: name-of-remote-authn-service
    user: name-of-api-server
  name: webhook
```

When a client attempts to authenticate with the API server using a bearer token as discussed [above](#putting-a-bearer-token-in-a-request), the authentication webhook POSTs a JSON-serialized `TokenReview` object containing the token to the remote service.

Note that webhook API objects are subject to the same [versioning compatibility rules](/docs/concepts/overview/kubernetes-api/) as other Kubernetes API objects. Implementers should check the `apiVersion` field of the request to ensure correct deserialization, and **must** respond with a `TokenReview` object of the same version as the request.

The Kubernetes API server defaults to sending `authentication.k8s.io/v1beta1` token reviews for backwards compatibility. To opt into receiving `authentication.k8s.io/v1` token reviews, the API server must be started with `--authentication-token-webhook-version=v1`.
```yaml
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenReview",
  "spec": {
    # Opaque bearer token sent to the API server
    "token": "014fbff9a07c",

    # Optional list of the audience identifiers for the server the token was presented to.
    # Audience-aware token authenticators (for example, OIDC token authenticators)
    # should verify the token was intended for at least one of the audiences in this list,
    # and return the intersection of this list and the valid audiences for the token in the response status.
    # This ensures the token is valid to authenticate to the server it was presented to.
    # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.
    "audiences": ["https://myserver.example.com", "https://myserver.internal.example.com"]
  }
}
```

```yaml
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "TokenReview",
  "spec": {
    # Opaque bearer token sent to the API server
    "token": "014fbff9a07c",

    # Optional list of the audience identifiers for the server the token was presented to.
    # Audience-aware token authenticators (for example, OIDC token authenticators)
    # should verify the token was intended for at least one of the audiences in this list,
    # and return the intersection of this list and the valid audiences for the token in the response status.
    # This ensures the token is valid to authenticate to the server it was presented to.
    # If no audiences are provided, the token should be validated to authenticate to the Kubernetes API server.
    "audiences": ["https://myserver.example.com", "https://myserver.internal.example.com"]
  }
}
```

The remote service is expected to fill the `status` field of the request to indicate the success of the login. The response body's `spec` field is ignored and may be omitted. The remote service must return a response using the same `TokenReview` API version that it received. A successful validation of the bearer token would return:
```yaml
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenReview",
  "status": {
    "authenticated": true,
    "user": {
      # Required
      "username": "janedoe@example.com",
      # Optional
      "uid": "42",
      # Optional group memberships
      "groups": ["developers", "qa"],
      # Optional additional information provided by the authenticator.
      # This should not contain confidential data, as it can be recorded in logs
      # or API objects, and is made available to admission webhooks.
      "extra": {
        "extrafield1": [
          "extravalue1",
          "extravalue2"
        ]
      }
    },
    # Optional list audience-aware token authenticators can return,
    # containing the audiences from the `spec.audiences` list for which the provided token was valid.
    # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.
    "audiences": ["https://myserver.example.com"]
  }
}
```

```yaml
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "TokenReview",
  "status": {
    "authenticated": true,
    "user": {
      # Required
      "username": "janedoe@example.com",
      # Optional
      "uid": "42",
      # Optional group memberships
      "groups": ["developers", "qa"],
      # Optional additional information provided by the authenticator.
      # This should not contain confidential data, as it can be recorded in logs
      # or API objects, and is made available to admission webhooks.
      "extra": {
        "extrafield1": [
          "extravalue1",
          "extravalue2"
        ]
      }
    },
    # Optional list audience-aware token authenticators can return,
    # containing the audiences from the `spec.audiences` list for which the provided token was valid.
    # If this is omitted, the token is considered to be valid to authenticate to the Kubernetes API server.
    "audiences": ["https://myserver.example.com"]
  }
}
```
An unsuccessful request would return:

```yaml
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenReview",
  "status": {
    "authenticated": false,
    # Optionally include details about why authentication failed.
    # If no error is provided, the API will return a generic Unauthorized message.
    # The error field is ignored when authenticated=true.
    "error": "Credentials are expired"
  }
}
```

```yaml
{
  "apiVersion": "authentication.k8s.io/v1beta1",
  "kind": "TokenReview",
  "status": {
    "authenticated": false,
    # Optionally include details about why authentication failed.
    # If no error is provided, the API will return a generic Unauthorized message.
    # The error field is ignored when authenticated=true.
    "error": "Credentials are expired"
  }
}
```
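A remote service consuming these `TokenReview` objects can be sketched as follows. This is a minimal illustrative Python server, not a production authenticator: the token store is hypothetical, there is no TLS, and a real service must validate tokens against your identity system. It does honor the rule above of responding with the same `apiVersion` it received:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical token store, for illustration only.
KNOWN_TOKENS = {
    "014fbff9a07c": {
        "username": "janedoe@example.com",
        "uid": "42",
        "groups": ["developers", "qa"],
    }
}

def review_token(review: dict) -> dict:
    """Build a TokenReview response of the same apiVersion as the request."""
    user = KNOWN_TOKENS.get(review["spec"]["token"])
    if user:
        status = {"authenticated": True, "user": user}
    else:
        status = {"authenticated": False, "error": "invalid or expired token"}
    return {"apiVersion": review["apiVersion"], "kind": "TokenReview", "status": status}

class TokenReviewHandler(BaseHTTPRequestHandler):
    """Minimal HTTP handler: the API server POSTs a TokenReview here."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        review = json.loads(self.rfile.read(length))
        body = json.dumps(review_token(review)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve (behind TLS in practice):
# HTTPServer(("", 8443), TokenReviewHandler).serve_forever()
```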
### Authenticating Proxy

The API server can be configured to identify users from request header values, such as `X-Remote-User`. It is designed for use in combination with an authenticating proxy, which sets the request header value.

* `--requestheader-username-headers` Required, case-insensitive. Header names to check, in order, for the user identity. The first header containing a value is used as the username.
* `--requestheader-group-headers` 1.6+. Optional, case-insensitive. "X-Remote-Group" is suggested. Header names to check, in order, for the user's groups. All values in all specified headers are used as group names.
* `--requestheader-extra-headers-prefix` 1.6+. Optional, case-insensitive. "X-Remote-Extra-" is suggested. Header prefixes to look for to determine extra information about the user (typically used by the configured authorization plugin). Any headers beginning with any of the specified prefixes have the prefix removed. The remainder of the header name is lowercased and [percent-decoded](https://tools.ietf.org/html/rfc3986#section-2.1) and becomes the extra key, and the header value is the extra value.

Prior to 1.11.3 (and 1.10.7, 1.9.11), the extra key could only contain characters which were [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6).

For example, with this configuration:

```
--requestheader-username-headers=X-Remote-User
--requestheader-group-headers=X-Remote-Group
--requestheader-extra-headers-prefix=X-Remote-Extra-
```

this request:

```http
GET / HTTP/1.1
X-Remote-User: fido
X-Remote-Group: dogs
X-Remote-Group: dachshunds
X-Remote-Extra-Acme.com%2Fproject: some-project
X-Remote-Extra-Scopes: openid
X-Remote-Extra-Scopes: profile
```

would result in this user info:

```yaml
name: fido
groups:
- dogs
- dachshunds
extra:
  acme.com/project:
  - some-project
  scopes:
  - openid
  - profile
```

In order to prevent header spoofing, the authenticating proxy is required to present a valid client certificate to the API server for validation against the specified CA before the request headers are checked. WARNING: do **not** reuse a CA that is used in a different context unless you understand the risks and the mechanisms to protect the CA's usage.

* `--requestheader-client-ca-file` Required. PEM-encoded certificate bundle. A valid client certificate must be presented and validated against the certificate authorities in the specified file before the request headers are checked for user names.
* `--requestheader-allowed-names` Optional. List of Common Name values (CNs). If set, a valid client certificate with a CN in the specified list must be presented before the request headers are checked for user names. If empty, any CN is allowed.
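The prefix-stripping, lowercasing, and percent-decoding described above can be sketched in Python. `extra_from_headers` is a hypothetical helper, not the API server's implementation:

```python
from urllib.parse import unquote

def extra_from_headers(headers: list, prefix: str = "X-Remote-Extra-") -> dict:
    """Derive extra user info from proxy-set headers: strip the configured
    prefix, lowercase the remainder, percent-decode it, and collect all
    values per resulting key."""
    extra = {}
    for name, value in headers:
        if name.lower().startswith(prefix.lower()):    # header names are case-insensitive
            key = unquote(name[len(prefix):].lower())  # lowercase, then percent-decode
            extra.setdefault(key, []).append(value)
    return extra

headers = [
    ("X-Remote-Extra-Acme.com%2Fproject", "some-project"),
    ("X-Remote-Extra-Scopes", "openid"),
    ("X-Remote-Extra-Scopes", "profile"),
]
print(extra_from_headers(headers))
# {'acme.com/project': ['some-project'], 'scopes': ['openid', 'profile']}
```

This reproduces the `extra` portion of the user info shown in the example above.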
with token authentication configured, and anonymous access enabled, a request providing an invalid bearer token would receive a `401 Unauthorized` error. A request providing no bearer token would be treated as an anonymous request.

In 1.5.1-1.5.x, anonymous access is disabled by default, and can be enabled by passing the `--anonymous-auth=true` option to the API server.

In 1.6+, anonymous access is enabled by default if an authorization mode other than `AlwaysAllow` is used, and can be disabled by passing the `--anonymous-auth=false` option to the API server. Starting in 1.6, the ABAC and RBAC authorizers require explicit authorization of the `system:anonymous` user or the `system:unauthenticated` group, so legacy policy rules that grant access to the `*` user or `*` group do not include anonymous users.

### Anonymous Authenticator Configuration

The `AuthenticationConfiguration` can be used to configure the anonymous authenticator. To enable configuring anonymous auth via the config file you need to enable the `AnonymousAuthConfigurableEndpoints` feature gate. When this feature gate is enabled you cannot set the `--anonymous-auth` flag.

The main advantage of configuring the anonymous authenticator using the authentication configuration file is that, in addition to enabling and disabling anonymous authentication, you can also configure which endpoints support anonymous authentication.

A sample authentication configuration file is below:

```yaml
#
# CAUTION: this is an example configuration.
#          Do not use this for your own cluster!
#
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
anonymous:
  enabled: true
  conditions:
  - path: /livez
  - path: /readyz
  - path: /healthz
```

In the configuration above, only the `/livez`, `/readyz` and `/healthz` endpoints are reachable by anonymous requests. Any other endpoints will not be reachable, even if that is allowed by RBAC configuration.

## User impersonation

A user can act as
another user through impersonation headers. These let requests manually override the user info a request authenticates as. For example, an admin could use this feature to debug an authorization policy by temporarily impersonating another user and seeing if a request was denied.

Impersonation requests first authenticate as the requesting user, then switch to the impersonated user info:

1. A user makes an API call with their credentials *and* impersonation headers.
2. The API server authenticates the user.
3. The API server ensures the authenticated user has impersonation privileges.
4. Request user info is replaced with impersonation values.
5. The request is evaluated; authorization acts on the impersonated user info.

The following HTTP headers can be used to perform an impersonation request:

- `Impersonate-User`: The username to act as.
- `Impersonate-Group`: A group name to act as. Can be provided multiple times to set multiple groups. Optional. Requires `Impersonate-User`.
- `Impersonate-Extra-( extra name )`: A dynamic header used to associate extra fields with the user. Optional. Requires `Impersonate-User`. In order to be preserved consistently, `( extra name )` must be lower-case, and any characters which aren't [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6) MUST be utf8 and [percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1).
- `Impersonate-Uid`: A unique identifier that represents the user being impersonated. Optional. Requires `Impersonate-User`. Kubernetes does not impose any format requirements on this string.

Prior to 1.11.3 (and 1.10.7, 1.9.11), `( extra name )` could only contain characters which were [legal in HTTP header labels](https://tools.ietf.org/html/rfc7230#section-3.2.6).

`Impersonate-Uid` is only available in versions 1.22.0 and higher.

An example of the impersonation headers used when impersonating a user with groups:

```http
Impersonate-User: jane.doe@example.com
Impersonate-Group: developers
Impersonate-Group: admins
```

An example of the impersonation headers used when impersonating a user with a UID and extra fields:

```http
Impersonate-User: jane.doe@example.com
Impersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com
Impersonate-Extra-acme.com%2Fproject: some-project
Impersonate-Extra-scopes: view
Impersonate-Extra-scopes: development
Impersonate-Uid: 06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b
```

When using `kubectl`, set the `--as` flag to configure the `Impersonate-User` header, and set the `--as-group` flag to configure the `Impersonate-Group` header.

```bash
kubectl drain mynode
```

```none
Error from server (Forbidden): User "clark" cannot get nodes at the cluster scope. (get nodes mynode)
```

Set the `--as` and `--as-group` flag:

```bash
kubectl drain mynode --as=superman --as-group=system:masters
```

```none
node/mynode cordoned
node/mynode drained
```

`kubectl` cannot impersonate extra fields or UIDs.

To impersonate a user, group, user identifier (UID) or extra fields, the impersonating user must have the ability to perform the `impersonate` verb on the kind of attribute being impersonated ("user", "group", "uid", etc.). For clusters that enable the RBAC authorization plugin, the following ClusterRole encompasses the rules needed to set user and group impersonation headers:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: impersonator
rules:
- apiGroups: [""]
  resources: ["users", "groups", "serviceaccounts"]
  verbs: ["impersonate"]
```

For impersonation, extra fields and impersonated UIDs are both under the "authentication.k8s.io" apiGroup. Extra fields are evaluated as sub-resources of the resource "userextras". To allow a user to use impersonation headers for the extra field "scopes" and for UIDs, a user should be granted the following role:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: scopes-and-uid-impersonator
rules:
# Can set "Impersonate-Extra-scopes" header and the "Impersonate-Uid" header.
- apiGroups: ["authentication.k8s.io"]
  resources: ["userextras/scopes", "uids"]
  verbs: ["impersonate"]
```

The values of impersonation headers can also be restricted by limiting the set of `resourceNames` a resource can take.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: limited-impersonator
rules:
# Can impersonate the user "jane.doe@example.com"
- apiGroups: [""]
  resources: ["users"]
  verbs: ["impersonate"]
  resourceNames: ["jane.doe@example.com"]
# Can impersonate the groups "developers" and "admins"
- apiGroups: [""]
  resources: ["groups"]
  verbs: ["impersonate"]
  resourceNames: ["developers", "admins"]
# Can impersonate the extras field "scopes" with the values "view" and "development"
- apiGroups: ["authentication.k8s.io"]
  resources: ["userextras/scopes"]
  verbs: ["impersonate"]
  resourceNames: ["view", "development"]
# Can impersonate the uid "06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b"
- apiGroups: ["authentication.k8s.io"]
  resources: ["uids"]
  verbs: ["impersonate"]
  resourceNames: ["06f6ce97-e2c5-4ab8-7ba5-7654dd08d52b"]
```

Impersonating a user or group allows you to perform any action as if you were that user or group; for that reason, impersonation is not namespace scoped. If you want to allow impersonation using Kubernetes RBAC, this requires using a `ClusterRole` and a `ClusterRoleBinding`, not a `Role` and `RoleBinding`.

## client-go credential plugins

`k8s.io/client-go` and tools using it such as `kubectl` and `kubelet` are able to execute an external command to receive user credentials.

This feature is intended for client-side integrations with authentication protocols not natively supported by `k8s.io/client-go` (LDAP, Kerberos, OAuth2, SAML, etc.). The plugin implements the protocol-specific logic, then returns opaque credentials to use. Almost all credential plugin use cases require a server-side component with
support for the [webhook token authenticator](#webhook-token-authentication) to interpret the credential format produced by the client plugin.

Earlier versions of `kubectl` included built-in support for authenticating to AKS and GKE, but this is no longer present.

### Example use case

In a hypothetical use case, an organization would run an external service that exchanges LDAP credentials for user-specific, signed tokens. The service would also be capable of responding to [webhook token authenticator](#webhook-token-authentication) requests to validate the tokens. Users would be required to install a credential plugin on their workstation.

To authenticate against the API:

1. The user issues a `kubectl` command.
2. The credential plugin prompts the user for LDAP credentials, and exchanges the credentials with the external service for a token.
3. The credential plugin returns the token to client-go, which uses it as a bearer token against the API server.
4. The API server uses the [webhook token authenticator](#webhook-token-authentication) to submit a `TokenReview` to the external service.
5. The external service verifies the signature on the token and returns the user's username and groups.

### Configuration

Credential plugins are configured through [kubectl config files](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) as part of the user fields.

```yaml
apiVersion: v1
kind: Config
users:
- name: my-user
  user:
    exec:
      # Command to execute. Required.
      command: "example-client-go-exec-plugin"

      # API version to use when decoding the ExecCredentials resource. Required.
      #
      # The API version returned by the plugin MUST match the version listed here.
      #
      # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1beta1),
      # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,
      # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.
      apiVersion: "client.authentication.k8s.io/v1"

      # Environment variables to set when executing the plugin. Optional.
      env:
      - name: "FOO"
        value: "bar"

      # Arguments to pass when executing the plugin. Optional.
      args:
      - "arg1"
      - "arg2"

      # Text shown to the user when the executable doesn't seem to be present. Optional.
      installHint: |
        example-client-go-exec-plugin is required to authenticate
        to the current cluster.  It can be installed:

        On macOS: brew install example-client-go-exec-plugin

        On Ubuntu: apt-get install example-client-go-exec-plugin

        On Fedora: dnf install example-client-go-exec-plugin

        ...

      # Whether or not to provide cluster information, which could potentially contain
      # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO
      # environment variable.
      provideClusterInfo: true

      # The contract between the exec plugin and the standard input I/O stream. If the
      # contract cannot be satisfied, this plugin will not be run and an error will be
      # returned. Valid values are "Never" (this exec plugin never uses standard input),
      # "IfAvailable" (this exec plugin wants to use standard input if it is available),
      # or "Always" (this exec plugin requires standard input to function). Required.
      interactiveMode: Never
clusters:
- name: my-cluster
  cluster:
    server: "https://172.17.4.100:6443"
    certificate-authority: "/etc/kubernetes/ca.pem"
    extensions:
    - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config
      extension:
        arbitrary: config
        this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo
        you: ["can", "put", "anything", "here"]
contexts:
- name: my-cluster
  context:
    cluster: my-cluster
    user: my-user
current-context: my-cluster
```

The same configuration, using the `client.authentication.k8s.io/v1beta1` API version:

```yaml
apiVersion: v1
kind: Config
users:
- name: my-user
  user:
    exec:
      # Command to execute. Required.
      command: "example-client-go-exec-plugin"

      # API version to use when decoding the ExecCredentials resource. Required.
      #
      # The API version returned by the plugin MUST match the version listed here.
      #
      # To integrate with tools that support multiple versions (such as client.authentication.k8s.io/v1),
      # set an environment variable, pass an argument to the tool that indicates which version the exec plugin expects,
      # or read the version from the ExecCredential object in the KUBERNETES_EXEC_INFO environment variable.
      apiVersion: "client.authentication.k8s.io/v1beta1"

      # Environment variables to set when executing the plugin. Optional.
      env:
      - name: "FOO"
        value: "bar"

      # Arguments to pass when executing the plugin. Optional.
      args:
      - "arg1"
      - "arg2"

      # Text shown to the user when the executable doesn't seem to be present. Optional.
      installHint: |
        example-client-go-exec-plugin is required to authenticate
        to the current cluster.  It can be installed:

        On macOS: brew install example-client-go-exec-plugin

        On Ubuntu: apt-get install example-client-go-exec-plugin

        On Fedora: dnf install example-client-go-exec-plugin

        ...

      # Whether or not to provide cluster information, which could potentially contain
      # very large CA data, to this exec plugin as a part of the KUBERNETES_EXEC_INFO
      # environment variable.
      provideClusterInfo: true

      # The contract between the exec plugin and the standard input I/O stream. If the
      # contract cannot be satisfied, this plugin will not be run and an error will be
      # returned. Valid values are "Never" (this exec plugin never uses standard input),
      # "IfAvailable" (this exec plugin wants to use standard input if it is available),
      # or "Always" (this exec plugin requires standard input to function). Optional.
      # Defaults to "IfAvailable".
      interactiveMode: Never
clusters:
- name: my-cluster
  cluster:
    server: "https://172.17.4.100:6443"
    certificate-authority: "/etc/kubernetes/ca.pem"
    extensions:
    - name: client.authentication.k8s.io/exec # reserved extension name for per cluster exec config
      extension:
        arbitrary: config
        this: can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo
        you: ["can", "put", "anything", "here"]
contexts:
- name: my-cluster
  context:
    cluster: my-cluster
    user: my-user
current-context: my-cluster
```

Relative command paths are interpreted as relative to the directory of the config file. If `KUBECONFIG` is set to `/home/jane/kubeconfig` and the exec command is `./bin/example-client-go-exec-plugin`, the binary `/home/jane/bin/example-client-go-exec-plugin` is executed.

```yaml
- name: my-user
  user:
    exec:
      # Path relative to the directory of the kubeconfig
      command: "./bin/example-client-go-exec-plugin"
      apiVersion: "client.authentication.k8s.io/v1"
      interactiveMode: Never
```

### Input and output formats

The executed command prints an `ExecCredential` object to `stdout`. `k8s.io/client-go` authenticates against the Kubernetes API using the returned credentials in the `status`. The executed command is passed an `ExecCredential` object as input via the `KUBERNETES_EXEC_INFO` environment variable. This input contains helpful information like the expected API version of the returned `ExecCredential` object and whether or not the plugin can use `stdin` to interact with the user.

When run from an interactive session (i.e., a terminal), `stdin` can be exposed directly to the plugin. Plugins should use the `spec.interactive` field of the input `ExecCredential` object from the `KUBERNETES_EXEC_INFO`
environment variable in order to determine if `stdin` has been provided. A plugin's `stdin` requirements (i.e., whether `stdin` is optional, strictly required, or never used in order for the plugin to run successfully) are declared via the `user.exec.interactiveMode` field in the [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) (see table below for valid values). The `user.exec.interactiveMode` field is optional in `client.authentication.k8s.io/v1beta1` and required in `client.authentication.k8s.io/v1`.

| `interactiveMode` Value | Meaning |
| ----------------------- | ------- |
| `Never` | This exec plugin never needs to use standard input, and therefore the exec plugin will be run regardless of whether standard input is available for user input. |
| `IfAvailable` | This exec plugin would like to use standard input if it is available, but can still operate if standard input is not available. Therefore, the exec plugin will be run regardless of whether stdin is available for user input. If standard input is available for user input, then it will be provided to this exec plugin. |
| `Always` | This exec plugin requires standard input in order to run, and therefore the exec plugin will only be run if standard input is available for user input. If standard input is not available for user input, then the exec plugin will not be run and an error will be returned by the exec plugin runner. |

To use bearer token credentials, the plugin returns a token in the status of the [`ExecCredential`](/docs/reference/config-api/client-authentication.v1beta1/#client-authentication-k8s-io-v1beta1-ExecCredential):

```json
{
  "apiVersion": "client.authentication.k8s.io/v1",
  "kind": "ExecCredential",
  "status": {
    "token": "my-bearer-token"
  }
}
```

```json
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "token": "my-bearer-token"
  }
}
```

Alternatively, a PEM-encoded client
certificate and key can be returned to use TLS client auth. If the plugin returns a different certificate and key on a subsequent call, `k8s.io/client-go` will close existing connections with the server to force a new TLS handshake.

If specified, `clientKeyData` and `clientCertificateData` must both be present.

`clientCertificateData` may contain additional intermediate certificates to send to the server.

```json
{
  "apiVersion": "client.authentication.k8s.io/v1",
  "kind": "ExecCredential",
  "status": {
    "clientCertificateData": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
    "clientKeyData": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
  }
}
```

```json
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "clientCertificateData": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
    "clientKeyData": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
  }
}
```

Optionally, the response can include the expiry of the credential formatted as a [RFC 3339](https://datatracker.ietf.org/doc/html/rfc3339) timestamp.

Presence or absence of an expiry has the following impact:

- If an expiry is included, the bearer token and TLS credentials are cached until the expiry time is reached, or if the server responds with a 401 HTTP status code, or when the process exits.
- If an expiry is omitted, the bearer token and TLS credentials are cached until the server responds with a 401 HTTP status code or until the process exits.

```json
{
  "apiVersion": "client.authentication.k8s.io/v1",
  "kind": "ExecCredential",
  "status": {
    "token": "my-bearer-token",
    "expirationTimestamp": "2018-03-05T17:30:20-08:00"
  }
}
```

```json
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": {
    "token": "my-bearer-token",
    "expirationTimestamp": "2018-03-05T17:30:20-08:00"
  }
}
```
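As a concrete illustration of this output contract, here is a minimal, hypothetical exec plugin written as a shell script. The token value and the one-hour lifetime are assumptions made for the sketch; a real plugin would obtain the token from your identity provider rather than hard-coding it:

```shell
#!/usr/bin/env sh
# Minimal sketch of an exec credential plugin (hypothetical token source).
# It emits a client.authentication.k8s.io/v1 ExecCredential on stdout;
# client-go caches the token until expirationTimestamp is reached.
TOKEN="example-bearer-token"   # assumption: in reality, fetched from your IdP

# RFC 3339 timestamp one hour from now (GNU date first, BSD date as fallback).
EXPIRY=$(date -u -d '+1 hour' '+%Y-%m-%dT%H:%M:%SZ' 2>/dev/null \
      || date -u -v+1H '+%Y-%m-%dT%H:%M:%SZ')

cat <<EOF
{
  "apiVersion": "client.authentication.k8s.io/v1",
  "kind": "ExecCredential",
  "status": {
    "token": "${TOKEN}",
    "expirationTimestamp": "${EXPIRY}"
  }
}
EOF
```

Pointing the `users[].user.exec.command` field of a kubeconfig at a script like this is enough for `kubectl` to pick up the token on each (cache-expired) request.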
To enable the exec plugin to obtain cluster-specific information, set `provideClusterInfo` on the `user.exec` field in the [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/). The plugin will then be supplied this cluster-specific information in the `KUBERNETES_EXEC_INFO` environment variable. Information from this environment variable can be used to perform cluster-specific credential acquisition logic. The following `ExecCredential` manifest describes a cluster information sample.

```json
{
  "apiVersion": "client.authentication.k8s.io/v1",
  "kind": "ExecCredential",
  "spec": {
    "cluster": {
      "server": "https://172.17.4.100:6443",
      "certificate-authority-data": "LS0t...",
      "config": {
        "arbitrary": "config",
        "this": "can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo",
        "you": ["can", "put", "anything", "here"]
      }
    },
    "interactive": true
  }
}
```

```json
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "spec": {
    "cluster": {
      "server": "https://172.17.4.100:6443",
      "certificate-authority-data": "LS0t...",
      "config": {
        "arbitrary": "config",
        "this": "can be provided via the KUBERNETES_EXEC_INFO environment variable upon setting provideClusterInfo",
        "you": ["can", "put", "anything", "here"]
      }
    },
    "interactive": true
  }
}
```

## API access to authentication information for a client {#self-subject-review}

If your cluster has the API enabled, you can use the `SelfSubjectReview` API to find out how your Kubernetes cluster maps your authentication information to identify you as a client. This works whether you are authenticating as a user (typically representing a real person) or as a ServiceAccount.

`SelfSubjectReview` objects do not have any configurable fields. On receiving a request, the Kubernetes API server fills the status with
the user attributes and returns it to the user.

Request example (the body would be a `SelfSubjectReview`):

```http
POST /apis/authentication.k8s.io/v1/selfsubjectreviews
```

```json
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "SelfSubjectReview"
}
```

Response example:

```json
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "SelfSubjectReview",
  "status": {
    "userInfo": {
      "name": "jane.doe",
      "uid": "b6c7cfd4-f166-11ec-8ea0-0242ac120002",
      "groups": [
        "viewers",
        "editors",
        "system:authenticated"
      ],
      "extra": {
        "provider_id": ["token.company.example"]
      }
    }
  }
}
```

For convenience, the `kubectl auth whoami` command is available. Executing this command will produce output like the following (though different user attributes will be shown):

Simple output example:

```
ATTRIBUTE         VALUE
Username          jane.doe
Groups            [system:authenticated]
```

Complex example including extra attributes:

```
ATTRIBUTE         VALUE
Username          jane.doe
UID               b79dbf30-0c6a-11ed-861d-0242ac120002
Groups            [students teachers system:authenticated]
Extra: skills     [reading learning]
Extra: subjects   [math sports]
```

By providing the output flag, it is also possible to print the JSON or YAML representation of the result:

```json
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "SelfSubjectReview",
  "status": {
    "userInfo": {
      "username": "jane.doe",
      "uid": "b79dbf30-0c6a-11ed-861d-0242ac120002",
      "groups": [
        "students",
        "teachers",
        "system:authenticated"
      ],
      "extra": {
        "skills": [
          "reading",
          "learning"
        ],
        "subjects": [
          "math",
          "sports"
        ]
      }
    }
  }
}
```

```yaml
apiVersion: authentication.k8s.io/v1
kind: SelfSubjectReview
status:
  userInfo:
    username: jane.doe
    uid: b79dbf30-0c6a-11ed-861d-0242ac120002
    groups:
    - students
    - teachers
    - system:authenticated
    extra:
      skills:
      - reading
      - learning
      subjects:
      - math
      - sports
```

This feature is extremely useful when a complicated authentication flow is used in a Kubernetes cluster, for example, if you use [webhook token authentication](/docs/reference/access-authn-authz/authentication/#webhook-token-authentication) or [authenticating proxy](/docs/reference/access-authn-authz/authentication/#authenticating-proxy).

The Kubernetes API server fills the `userInfo` after all authentication mechanisms are applied, including [impersonation](/docs/reference/access-authn-authz/authentication/#user-impersonation). If you (or an authentication proxy) make a SelfSubjectReview using impersonation, you see the user details and properties for the user that was impersonated.

By default, all authenticated users can create `SelfSubjectReview` objects when the `APISelfSubjectReview` feature is enabled. It is allowed by the `system:basic-user` cluster role.

You can only make `SelfSubjectReview` requests if:

- the `APISelfSubjectReview` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled for your cluster (not needed for Kubernetes, but older Kubernetes versions might not offer this feature gate, or might default it to be off)
- (if you are running a version of Kubernetes older than v1.28) the API server for your cluster has the `authentication.k8s.io/v1alpha1` or `authentication.k8s.io/v1beta1` API group enabled.

## What's next

- Read the [client authentication reference (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/)
- Read the [client authentication reference (v1)](/docs/reference/config-api/client-authentication.v1/)
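The table output of `kubectl auth whoami` is just a projection of the `SelfSubjectReview` status. A small sketch of pulling individual identity attributes out of the JSON form with `jq` (assumed to be installed); the response body here is the sample from this page rather than a live API call:

```shell
# Extract identity attributes from a SelfSubjectReview response using jq.
# In practice the response would come from: kubectl auth whoami -o json
RESPONSE='{"apiVersion":"authentication.k8s.io/v1","kind":"SelfSubjectReview","status":{"userInfo":{"username":"jane.doe","uid":"b79dbf30-0c6a-11ed-861d-0242ac120002","groups":["students","teachers","system:authenticated"]}}}'

USERNAME=$(printf '%s' "$RESPONSE" | jq -r '.status.userInfo.username')
GROUP_LIST=$(printf '%s' "$RESPONSE" | jq -r '.status.userInfo.groups | join(",")')
echo "$USERNAME"     # jane.doe
echo "$GROUP_LIST"   # students,teachers,system:authenticated
```

The same pattern works for `.status.userInfo.extra` when the authenticator populates extra fields.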
{"questions":"kubernetes reference liggitt deads2k erictune contenttype concept weight 39 reviewers title Using ABAC Authorization lavalamp","answers":"---\nreviewers:\n- erictune\n- lavalamp\n- deads2k\n- liggitt\ntitle: Using ABAC Authorization\ncontent_type: concept\nweight: 39\n---\n\n<!-- overview -->\nAttribute-based access control (ABAC) defines an access control paradigm whereby access rights are granted\nto users through the use of policies which combine attributes together.\n\n<!-- body -->\n## Policy File Format\n\nTo enable `ABAC` mode, specify `--authorization-policy-file=SOME_FILENAME` and `--authorization-mode=ABAC`\non startup.\n\nThe file format is [one JSON object per line](https:\/\/jsonlines.org\/). There\nshould be no enclosing list or map, only one map per line.\n\nEach line is a \"policy object\", where each such object is a map with the following\nproperties:\n\n- Versioning properties:\n  - `apiVersion`, type string; valid values are \"abac.authorization.kubernetes.io\/v1beta1\". Allows versioning\n    and conversion of the policy format.\n  - `kind`, type string: valid values are \"Policy\". Allows versioning and conversion of the policy format.\n- `spec` property set to a map with the following properties:\n  - Subject-matching properties:\n    - `user`, type string; the user-string from `--token-auth-file`. If you specify `user`, it must match the\n      username of the authenticated user.\n    - `group`, type string; if you specify `group`, it must match one of the groups of the authenticated user.\n      `system:authenticated` matches all authenticated requests. 
`system:unauthenticated` matches all\n      unauthenticated requests.\n  - Resource-matching properties:\n    - `apiGroup`, type string; an API group.\n      - Ex: `apps`, `networking.k8s.io`\n      - Wildcard: `*` matches all API groups.\n    - `namespace`, type string; a namespace.\n      - Ex: `kube-system`\n      - Wildcard: `*` matches all resource requests.\n    - `resource`, type string; a resource type\n      - Ex: `pods`, `deployments`\n      - Wildcard: `*` matches all resource requests.\n  - Non-resource-matching properties:\n    - `nonResourcePath`, type string; non-resource request paths.\n      - Ex: `\/version` or `\/apis`\n      - Wildcard:\n        - `*` matches all non-resource requests.\n        - `\/foo\/*` matches all subpaths of `\/foo\/`.\n  - `readonly`, type boolean, when true, means that the Resource-matching policy only applies to get, list,\n    and watch operations, Non-resource-matching policy only applies to get operation.\n\n\nAn unset property is the same as a property set to the zero value for its type\n(e.g. empty string, 0, false). However, unset should be preferred for\nreadability.\n\nIn the future, policies may be expressed in a JSON format, and managed via a\nREST interface.\n\n\n## Authorization Algorithm\n\nA request has attributes which correspond to the properties of a policy object.\n\nWhen a request is received, the attributes are determined. Unknown attributes\nare set to the zero value of its type (e.g. empty string, 0, false).\n\nA property set to `\"*\"` will match any value of the corresponding attribute.\n\nThe tuple of attributes is checked for a match against every policy in the\npolicy file. 
If at least one line matches the request attributes, then the\nrequest is authorized (but may fail later validation).\n\nTo permit any authenticated user to do something, write a policy with the\ngroup property set to `\"system:authenticated\"`.\n\nTo permit any unauthenticated user to do something, write a policy with the\ngroup property set to `\"system:unauthenticated\"`.\n\nTo permit a user to do anything, write a policy with the apiGroup, namespace,\nresource, and nonResourcePath properties set to `\"*\"`.\n\n## Kubectl\n\nKubectl uses the `\/api` and `\/apis` endpoints of apiserver to discover\nserved resource types, and validates objects sent to the API by create\/update\noperations using schema information located at `\/openapi\/v2`.\n\nWhen using ABAC authorization, those special resources have to be explicitly\nexposed via the `nonResourcePath` property in a policy (see [examples](#examples) below):\n\n* `\/api`, `\/api\/*`, `\/apis`, and `\/apis\/*` for API version negotiation.\n* `\/version` for retrieving the server version via `kubectl version`.\n* `\/swaggerapi\/*` for create\/update operations.\n\nTo inspect the HTTP calls involved in a specific kubectl operation you can turn\nup the verbosity:\n\n```shell\nkubectl --v=8 version\n```\n\n## Examples\n\n1. Alice can do anything to all resources:\n\n   ```json\n   {\"apiVersion\": \"abac.authorization.kubernetes.io\/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"alice\", \"namespace\": \"*\", \"resource\": \"*\", \"apiGroup\": \"*\"}}\n   ```\n\n1. The kubelet can read any pods:\n\n   ```json\n   {\"apiVersion\": \"abac.authorization.kubernetes.io\/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"kubelet\", \"namespace\": \"*\", \"resource\": \"pods\", \"readonly\": true}}\n   ```\n\n1. 
The kubelet can read and write events:\n\n   ```json\n   {\"apiVersion\": \"abac.authorization.kubernetes.io\/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"kubelet\", \"namespace\": \"*\", \"resource\": \"events\"}}\n   ```\n\n1. Bob can just read pods in namespace \"projectCaribou\":\n\n   ```json\n   {\"apiVersion\": \"abac.authorization.kubernetes.io\/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"user\": \"bob\", \"namespace\": \"projectCaribou\", \"resource\": \"pods\", \"readonly\": true}}\n   ```\n\n1. Anyone can make read-only requests to all non-resource paths:\n\n   ```json\n   {\"apiVersion\": \"abac.authorization.kubernetes.io\/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"group\": \"system:authenticated\", \"readonly\": true, \"nonResourcePath\": \"*\"}}\n    {\"apiVersion\": \"abac.authorization.kubernetes.io\/v1beta1\", \"kind\": \"Policy\", \"spec\": {\"group\": \"system:unauthenticated\", \"readonly\": true, \"nonResourcePath\": \"*\"}}\n   ```\n\n[Complete file example](https:\/\/releases.k8s.io\/v\/pkg\/auth\/authorizer\/abac\/example_policy_file.jsonl)\n\n## A quick note on service accounts\n\nEvery service account has a corresponding ABAC username, and that service account's username is generated\naccording to the naming convention:\n\n```shell\nsystem:serviceaccount:<namespace>:<serviceaccountname>\n```\n\nCreating a new namespace leads to the creation of a new service account in the following format:\n\n```shell\nsystem:serviceaccount:<namespace>:default\n```\n\nFor example, if you wanted to grant the default service account (in the `kube-system` namespace) full\nprivilege to the API using ABAC, you would add this line to your policy file:\n\n```json\n{\"apiVersion\":\"abac.authorization.kubernetes.io\/v1beta1\",\"kind\":\"Policy\",\"spec\":{\"user\":\"system:serviceaccount:kube-system:default\",\"namespace\":\"*\",\"resource\":\"*\",\"apiGroup\":\"*\"}}\n```\n\nThe apiserver will need to be restarted to pick up the new policy 
lines.","site":"kubernetes reference"}
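The ABAC line-matching rules described above (one policy object per line; a property set to `"*"` matches any value; a single matching line authorizes the request) can be sketched in Python. This is a simplified illustration, not the apiserver's implementation: it treats each request attribute as a single string and ignores details such as user-or-group subject matching and `/foo/*` subpath wildcards for `nonResourcePath`.

```python
def matches(spec: dict, attrs: dict) -> bool:
    # A property set to "*" matches any value; an unset property is
    # compared as the zero value for its type (empty string / false).
    for key in ("user", "group", "apiGroup", "namespace",
                "resource", "nonResourcePath"):
        want = spec.get(key, "")
        if want != "*" and want != attrs.get(key, ""):
            return False
    # readonly=true restricts the policy line to read (get/list/watch) requests
    if spec.get("readonly", False) and not attrs.get("readonly", False):
        return False
    return True

def authorized(policies: list, attrs: dict) -> bool:
    # The request is allowed if at least one policy line matches.
    return any(matches(p.get("spec", {}), attrs) for p in policies)
```

Under this sketch, the "kubelet can read any pods" policy above matches a read of pods in any namespace but rejects a write, because the `readonly` restriction fails for mutating requests.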
{"questions":"kubernetes reference weight 20 title Authenticating with Bootstrap Tokens contenttype concept reviewers jbeda overview","answers":"---\nreviewers:\n- jbeda\ntitle: Authenticating with Bootstrap Tokens\ncontent_type: concept\nweight: 20\n---\n\n<!-- overview -->\n\n\n\nBootstrap tokens are a simple bearer token that is meant to be used when\ncreating new clusters or joining new nodes to an existing cluster.\nIt was built to support [kubeadm](\/docs\/reference\/setup-tools\/kubeadm\/), but can be used in other contexts\nfor users that wish to start clusters without `kubeadm`. It is also built to\nwork, via RBAC policy, with the\n[kubelet TLS Bootstrapping](\/docs\/reference\/access-authn-authz\/kubelet-tls-bootstrapping\/) system.\n\n\n<!-- body -->\n## Bootstrap Tokens Overview\n\nBootstrap Tokens are defined with a specific type\n(`bootstrap.kubernetes.io\/token`) of secrets that lives in the `kube-system`\nnamespace. These Secrets are then read by the Bootstrap Authenticator in the\nAPI Server. Expired tokens are removed with the TokenCleaner controller in the\nController Manager. The tokens are also used to create a signature for a\nspecific ConfigMap used in a \"discovery\" process through a BootstrapSigner\ncontroller.\n\n## Token Format\n\nBootstrap Tokens take the form of `abcdef.0123456789abcdef`.\nMore formally, they must match the regular expression `[a-z0-9]{6}\\.[a-z0-9]{16}`.\n\nThe first part of the token is the \"Token ID\" and is considered public\ninformation. It is used when referring to a token without leaking the secret\npart used for authentication. 
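The token format above can be checked mechanically against the documented regular expression; a minimal Python sketch (the function name is illustrative, not part of any Kubernetes tooling):

```python
import re

# "Token ID" (6 chars, public) "." "Token Secret" (16 chars, private),
# all lowercase alphanumeric, per the documented pattern.
BOOTSTRAP_TOKEN_RE = re.compile(r"[a-z0-9]{6}\.[a-z0-9]{16}")

def is_bootstrap_token(candidate: str) -> bool:
    return BOOTSTRAP_TOKEN_RE.fullmatch(candidate) is not None
```

For example, `is_bootstrap_token("abcdef.0123456789abcdef")` is true, while a candidate with an uppercase letter or a short secret part is rejected.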
The second part is the \"Token Secret\" and should\nonly be shared with trusted parties.\n\n## Enabling Bootstrap Token Authentication\n\nThe Bootstrap Token authenticator can be enabled using the following flag on the\nAPI server:\n\n```\n--enable-bootstrap-token-auth\n```\n\nWhen enabled, bootstrapping tokens can be used as bearer token credentials to\nauthenticate requests against the API server.\n\n```http\nAuthorization: Bearer 07401b.f395accd246ae52d\n```\n\nTokens authenticate as the username `system:bootstrap:<token id>` and are members\nof the group `system:bootstrappers`. \nAdditional groups may be specified in the token's Secret.\n\nExpired tokens can be deleted automatically by enabling the `tokencleaner`\ncontroller on the controller manager.\n\n```\n--controllers=*,tokencleaner\n```\n\n## Bootstrap Token Secret Format\n\nEach valid token is backed by a secret in the `kube-system` namespace. You can\nfind the full design doc\n[here](https:\/\/git.k8s.io\/design-proposals-archive\/cluster-lifecycle\/bootstrap-discovery.md).\n\nHere is what the secret looks like.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  # Name MUST be of form \"bootstrap-token-<token id>\"\n  name: bootstrap-token-07401b\n  namespace: kube-system\n\n# Type MUST be 'bootstrap.kubernetes.io\/token'\ntype: bootstrap.kubernetes.io\/token\nstringData:\n  # Human readable description. Optional.\n  description: \"The default bootstrap token generated by 'kubeadm init'.\"\n\n  # Token ID and secret. Required.\n  token-id: 07401b\n  token-secret: f395accd246ae52d\n\n  # Expiration. Optional.\n  expiration: 2017-03-10T03:22:11Z\n\n  # Allowed usages.\n  usage-bootstrap-authentication: \"true\"\n  usage-bootstrap-signing: \"true\"\n\n  # Extra groups to authenticate the token as. 
Must start with \"system:bootstrappers:\"\n  auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress\n```\n\nThe type of the secret must be `bootstrap.kubernetes.io\/token` and the name must\nbe `bootstrap-token-<token id>`. It must also exist in the `kube-system` namespace.\n\nThe `usage-bootstrap-*` members indicate what this secret is intended to be used for.\nA value must be set to `true` to be enabled.\n\n* `usage-bootstrap-authentication` indicates that the token can be used to\nauthenticate to the API server as a bearer token.\n* `usage-bootstrap-signing` indicates that the token may be used to sign the\n`cluster-info` ConfigMap as described below.\n\nThe `expiration` field controls the expiry of the token. Expired tokens are\nrejected when used for authentication and ignored during ConfigMap signing.\nThe expiry value is encoded as an absolute UTC time using [RFC3339](https:\/\/datatracker.ietf.org\/doc\/html\/rfc3339). Enable the\n`tokencleaner` controller to automatically delete expired tokens.\n\n## Token Management with kubeadm\n\nYou can use the `kubeadm` tool to manage tokens on a running cluster. See the\n[kubeadm token docs](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-token\/) for details.\n\n## ConfigMap Signing\n\nIn addition to authentication, the tokens can be used to sign a ConfigMap.\nThis is used early in a cluster bootstrap process before the client trusts the API\nserver. The signed ConfigMap can be authenticated by the shared token.\n\nEnable ConfigMap signing by enabling the `bootstrapsigner` controller on the\nController Manager.\n\n```\n--controllers=*,bootstrapsigner\n```\n\nThe ConfigMap that is signed is `cluster-info` in the `kube-public` namespace.\nThe typical flow is that a client reads this ConfigMap while unauthenticated and\nignoring TLS errors. 
It then validates the payload of the ConfigMap by looking\nat a signature embedded in the ConfigMap.\n\nThe ConfigMap may look like this:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: cluster-info\n  namespace: kube-public\ndata:\n  jws-kubeconfig-07401b: eyJhbGciOiJIUzI1NiIsImtpZCI6IjA3NDAxYiJ9..tYEfbo6zDNo40MQE07aZcQX2m3EB2rO3NuXtxVMYm9U\n  kubeconfig: |\n    apiVersion: v1\n    clusters:\n    - cluster:\n        certificate-authority-data: <really long certificate data>\n        server: https:\/\/10.138.0.2:6443\n      name: \"\"\n    contexts: []\n    current-context: \"\"\n    kind: Config\n    preferences: {}\n    users: []\n```\n\nThe `kubeconfig` member of the ConfigMap is a config file with only the cluster\ninformation filled out. The key thing being communicated here is the\n`certificate-authority-data`. This may be expanded in the future.\n\nThe signature is a JWS signature using the \"detached\" mode. To validate the\nsignature, the user should encode the `kubeconfig` payload according to JWS\nrules (base64 encoded while discarding any trailing `=`). That encoded payload\nis then used to form a whole JWS by inserting it between the 2 dots. You can\nverify the JWS using the `HS256` scheme (HMAC-SHA256) with the full token (e.g.\n`07401b.f395accd246ae52d`) as the shared secret. Users _must_ verify that HS256\nis used.\n\n\nAny party with a bootstrapping token can create a valid signature for that\ntoken. 
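The detached-JWS scheme described above can be sketched with Python's standard library. This is an illustrative round-trip (sign, then verify), assuming the documented shape: HS256 over `<header>.<payload>` with the full bootstrap token as the HMAC key, the payload base64url-encoded with trailing `=` discarded, and the stored value taking the form `<header>..<signature>`; the function names and sample kubeconfig are invented for the example.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWS base64url encoding: URL-safe alphabet, trailing '=' padding discarded
    return base64.urlsafe_b64encode(data).decode().rstrip("=")

def sign_detached(kubeconfig: str, token_id: str, token: str) -> str:
    # Produce '<header>..<signature>', the shape stored under jws-kubeconfig-<token id>
    header = b64url(json.dumps({"alg": "HS256", "kid": token_id}).encode())
    payload = b64url(kubeconfig.encode())
    mac = hmac.new(token.encode(), f"{header}.{payload}".encode(), hashlib.sha256)
    return f"{header}..{b64url(mac.digest())}"

def verify_detached(jws: str, kubeconfig: str, token: str) -> bool:
    header, middle, signature = jws.split(".")
    if middle:  # detached mode: nothing may appear between the two dots
        return False
    # The docs require confirming HS256 before trusting the MAC
    hdr = json.loads(base64.urlsafe_b64decode(header + "=" * (-len(header) % 4)))
    if hdr.get("alg") != "HS256":
        return False
    # Re-insert the encoded payload between the dots and recompute the MAC
    payload = b64url(kubeconfig.encode())
    mac = hmac.new(token.encode(), f"{header}.{payload}".encode(), hashlib.sha256)
    return hmac.compare_digest(b64url(mac.digest()), signature)
```

Note the constant-time comparison (`hmac.compare_digest`) and the explicit `alg` check; accepting whatever algorithm the header claims would defeat the verification.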
When using ConfigMap signing it's discouraged to share the same token with\nmany clients, since a compromised client can potentially man-in-the-middle another\nclient relying on the signature to bootstrap TLS trust.\n\nConsult the [kubeadm implementation details](\/docs\/reference\/setup-tools\/kubeadm\/implementation-details\/)\nsection for more information.","site":"kubernetes reference"}
{"questions":"kubernetes reference liggitt deads2k weight 30 erictune contenttype concept title Authorization reviewers lavalamp","answers":"---\nreviewers:\n- erictune\n- lavalamp\n- deads2k\n- liggitt\ntitle: Authorization\ncontent_type: concept\nweight: 30\ndescription: >\n  Details of Kubernetes authorization mechanisms and supported authorization modes.\n---\n\n<!-- overview -->\n\nKubernetes authorization takes place following\n[authentication](\/docs\/reference\/access-authn-authz\/authentication\/).\nUsually, a client making a request must be authenticated (logged in) before its\nrequest can be allowed; however, Kubernetes also allows anonymous requests in\nsome circumstances.\n\nFor an overview of how authorization fits into the wider context of API access\ncontrol, read\n[Controlling Access to the Kubernetes API](\/docs\/concepts\/security\/controlling-access\/).\n\n<!-- body -->\n\n## Authorization verdicts {#determine-whether-a-request-is-allowed-or-denied}\n\nKubernetes authorization of API requests takes place within the API server.\nThe API server evaluates all of the request attributes against all policies,\npotentially also consulting external services, and then allows or denies the\nrequest.\n\nAll parts of an API request must be allowed by some authorization\nmechanism in order to proceed. In other words: access is denied by default.\n\n\nAccess controls and policies that\ndepend on specific fields of specific kinds of objects are handled by\nadmission controllers.\n\nKubernetes admission control happens after authorization has completed (and,\ntherefore, only when the authorization decision was to allow the request).\n\n\nWhen multiple [authorization modules](#authorization-modules) are configured,\neach is checked in sequence.\nIf any authorizer _approves_ or _denies_ a request, that decision is immediately\nreturned and no other authorizer is consulted. If all modules have _no opinion_\non the request, then the request is denied. 
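The chaining behaviour just described (modules consulted in order, first definite verdict wins, deny by default) can be sketched as a short Python loop; the verdict constants and callable-per-module shape are illustrative, not the API server's actual interfaces:

```python
ALLOW, DENY, NO_OPINION = "allow", "deny", "no opinion"

def authorize(request, authorizers) -> bool:
    # Modules are consulted in the configured order; the first module
    # that approves or denies settles the request immediately.
    for authorizer in authorizers:
        verdict = authorizer(request)
        if verdict == ALLOW:
            return True
        if verdict == DENY:
            return False
    # If every module had no opinion, the request is denied by default.
    return False
```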
An overall deny verdict means that\nthe API server rejects the request and responds with an HTTP 403 (Forbidden)\nstatus.\n\n## Request attributes used in authorization\n\nKubernetes reviews only the following API request attributes:\n\n * **user** - The `user` string provided during authentication.\n * **group** - The list of group names to which the authenticated user belongs.\n * **extra** - A map of arbitrary string keys to string values, provided by the authentication layer.\n * **API** - Indicates whether the request is for an API resource.\n * **Request path** - Path to miscellaneous non-resource endpoints like `\/api` or `\/healthz`.\n * **API request verb** - API verbs like `get`, `list`, `create`, `update`, `patch`, `watch`, `delete`, and `deletecollection` are used for resource requests. To determine the request verb for a resource API endpoint, see [request verbs and authorization](\/docs\/reference\/access-authn-authz\/authorization\/#determine-the-request-verb).\n * **HTTP request verb** - Lowercased HTTP methods like `get`, `post`, `put`, and `delete` are used for non-resource requests.\n * **Resource** - The ID or name of the resource that is being accessed (for resource requests only) -- For resource requests using `get`, `update`, `patch`, and `delete` verbs, you must provide the resource name.\n * **Subresource** - The subresource that is being accessed (for resource requests only).\n * **Namespace** - The namespace of the object that is being accessed (for namespaced resource requests only).\n * **API group** - The API group being accessed (for resource requests only). 
An empty string designates the _core_ [API group](\/docs\/reference\/using-api\/#api-groups).\n\n### Request verbs and authorization {#determine-the-request-verb}\n\n#### Non-resource requests {#request-verb-non-resource}\n\nRequests to endpoints other than `\/api\/v1\/...` or `\/apis\/<group>\/<version>\/...`\nare considered _non-resource requests_, and use the lower-cased HTTP method of the request as the verb.\nFor example, making a `GET` request using HTTP to endpoints such as `\/api` or `\/healthz` would use **get** as the verb.\n\n#### Resource requests {#request-verb-resource}\n\nTo determine the request verb for a resource API endpoint, Kubernetes maps the HTTP verb\nused and considers whether or not the request acts on an individual resource or on a\ncollection of resources:\n\nHTTP verb     | request verb\n--------------|---------------\n`POST`        | **create**\n`GET`, `HEAD` | **get** (for individual resources), **list** (for collections, including full object content), **watch** (for watching an individual resource or collection of resources)\n`PUT`         | **update**\n`PATCH`       | **patch**\n`DELETE`      | **delete** (for individual resources), **deletecollection** (for collections)\n\n\n+The **get**, **list** and **watch** verbs can all return the full details of a resource. In\nterms of access to the returned data they are equivalent. For example, **list** on `secrets`\nwill reveal the **data** attributes of any returned resources.\n\n\nKubernetes sometimes checks authorization for additional permissions using specialized verbs. 
For example:\n\n* Special cases of [authentication](\/docs\/reference\/access-authn-authz\/authentication\/)\n  * **impersonate** verb on `users`, `groups`, and `serviceaccounts` in the core API group, and the `userextras` in the `authentication.k8s.io` API group.\n* [Authorization of CertificateSigningRequests](\/docs\/reference\/access-authn-authz\/certificate-signing-requests\/#authorization)\n  * **approve** verb for CertificateSigningRequests, and **update** for revisions to existing approvals\n* [RBAC](\/docs\/reference\/access-authn-authz\/rbac\/#privilege-escalation-prevention-and-bootstrapping)\n  * **bind** and **escalate** verbs on `roles` and `clusterroles` resources in the `rbac.authorization.k8s.io` API group.\n\n## Authorization context\n\nKubernetes expects attributes that are common to REST API requests. This means\nthat Kubernetes authorization works with existing organization-wide or\ncloud-provider-wide access control systems which may handle other APIs besides\nthe Kubernetes API.\n\n## Authorization modes {#authorization-modules}\n\nThe Kubernetes API server may authorize a request using one of several authorization modes:\n\n`AlwaysAllow`\n: This mode allows all requests, which brings [security risks](#warning-always-allow). Use this authorization mode only if you do not require authorization for your API requests (for example, for testing).\n\n`AlwaysDeny`\n: This mode blocks all requests. Use this authorization mode only for testing.\n\n`ABAC` ([attribute-based access control](\/docs\/reference\/access-authn-authz\/abac\/))\n: Kubernetes ABAC mode defines an access control paradigm whereby access rights are granted to users through the use of\u00a0policies\u00a0which combine attributes together. 
The policies can use any type of attributes (user attributes, resource attributes, object, environment attributes, etc).\n\n`RBAC` ([role-based access control](\/docs\/reference\/access-authn-authz\/rbac\/))\n: Kubernetes RBAC is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise. In this context, access is the ability of an individual user to perform a specific task, such as view, create, or modify a file.  \n  In this mode, Kubernetes uses the `rbac.authorization.k8s.io` API group to drive authorization decisions, allowing you to dynamically configure permission policies through the Kubernetes API.\n\n`Node`\n: A special-purpose authorization mode that grants permissions to kubelets based on the pods they are scheduled to run. To learn more about the Node authorization mode, see [Node Authorization](\/docs\/reference\/access-authn-authz\/node\/).\n\n`Webhook`\n: Kubernetes [webhook mode](\/docs\/reference\/access-authn-authz\/webhook\/) for authorization makes a synchronous HTTP callout, blocking the request until the remote HTTP service responds to the query. You can write your own software to handle the callout, or use solutions from the ecosystem.\n\n<a id=\"warning-always-allow\" \/>\n\n\nEnabling the `AlwaysAllow` mode bypasses authorization; do not use this on a cluster where\nyou do not trust **all** potential API clients, including the workloads that you run.\n\nAuthorization mechanisms typically return either a _deny_ or _no opinion_ result; see\n[authorization verdicts](#determine-whether-a-request-is-allowed-or-denied) for more on this.\nActivating the `AlwaysAllow` mode means that if all other authorizers return \u201cno opinion\u201d,\nthe request is allowed. 
For example, `--authorization-mode=AlwaysAllow,RBAC` has the\nsame effect as `--authorization-mode=AlwaysAllow` because Kubernetes RBAC does not\nprovide negative (deny) access rules.\n\nYou should not use the `AlwaysAllow` mode on a Kubernetes cluster where the API server\nis reachable from the public internet.\n\n\n### The system:masters group\n\nThe `system:masters` group is a built-in Kubernetes group that grants unrestricted\naccess to the API server. Any user assigned to this group has full cluster administrator\nprivileges, bypassing any authorization restrictions imposed by the RBAC or Webhook mechanisms.\n[Avoid adding users](\/docs\/concepts\/security\/rbac-good-practices\/#least-privilege)\nto this group. If you do need to grant a user cluster-admin rights, you can create a\n[ClusterRoleBinding](\/docs\/reference\/access-authn-authz\/rbac\/#user-facing-roles)\nto the built-in `cluster-admin` ClusterRole.\n\n### Authorization mode configuration {#choice-of-authz-config}\n\nYou can configure the Kubernetes API server's authorizer chain using either\n[command line arguments](#using-flags-for-your-authorization-module) only or, as a beta feature,\nusing a [configuration file](#using-configuration-file-for-authorization).\n\nYou have to pick one of the two configuration approaches; setting both `--authorization-config`\npath and configuring an authorization webhook using the `--authorization-mode` and\n`--authorization-webhook-*` command line arguments is not allowed.\nIf you try this, the API server reports an error message during startup, then exits immediately.\n\n### Command line authorization mode configuration {#using-flags-for-your-authorization-module}\n\n\n\nYou can use the following modes:\n\n* `--authorization-mode=ABAC` (Attribute-based access control mode)\n* `--authorization-mode=RBAC` (Role-based access control mode)\n* `--authorization-mode=Node` (Node authorizer)\n* `--authorization-mode=Webhook` (Webhook authorization mode)\n* 
`--authorization-mode=AlwaysAllow` (always allows requests; carries [security risks](#warning-always-allow))\n* `--authorization-mode=AlwaysDeny` (always denies requests)\n\nYou can choose more than one authorization mode; for example:\n`--authorization-mode=Node,Webhook`\n\nKubernetes checks authorization modules based on the order that you specify them\non the API server's command line, so an earlier module has higher priority to allow\nor deny a request.\n\nYou cannot combine the `--authorization-mode` command line argument with the\n`--authorization-config` command line argument used for\n[configuring authorization using a local file](#using-configuration-file-for-authorization).\n\nFor more information on command line arguments to the API server, read the\n[`kube-apiserver` reference](\/docs\/reference\/command-line-tools-reference\/kube-apiserver\/).\n\n<!-- keep legacy hyperlinks working -->\n<a id=\"configuring-the-api-server-using-an-authorization-config-file\" \/>\n\n### Configuring the API Server using an authorization config file {#using-configuration-file-for-authorization}\n\n\n\nAs a beta feature, Kubernetes lets you configure authorization chains that can include multiple\nwebhooks. The authorization items in that chain can have well-defined parameters that validate\nrequests in a particular order, offering you fine-grained control, such as explicit Deny on failures.\n\nThe configuration file approach even allows you to specify\n[CEL](\/docs\/reference\/using-api\/cel\/) rules to pre-filter requests before they are dispatched\nto webhooks, helping you to prevent unnecessary invocations. 
The API server also automatically\nreloads the authorizer chain when the configuration file is modified.\n\nYou specify the path to the authorization configuration using the\n`--authorization-config` command line argument.\n\nIf you want to use command line arguments instead of a configuration file, that's also a valid and supported approach.\nSome authorization capabilities (for example: multiple webhooks, webhook failure policy, and pre-filter rules)\nare only available if you use an authorization configuration file.\n\n#### Example configuration {#authz-config-example}\n\n```yaml\n---\n#\n# DO NOT USE THE CONFIG AS IS. THIS IS AN EXAMPLE.\n#\napiVersion: apiserver.config.k8s.io\/v1beta1\nkind: AuthorizationConfiguration\nauthorizers:\n  - type: Webhook\n    # Name used to describe the authorizer\n    # This is explicitly used in monitoring machinery for metrics\n    # Note:\n    #   - Validation for this field is similar to how K8s labels are validated today.\n    # Required, with no default\n    name: webhook\n    webhook:\n      # The duration to cache 'authorized' responses from the webhook\n      # authorizer.\n      # Same as setting `--authorization-webhook-cache-authorized-ttl` flag\n      # Default: 5m0s\n      authorizedTTL: 30s\n      # The duration to cache 'unauthorized' responses from the webhook\n      # authorizer.\n      # Same as setting `--authorization-webhook-cache-unauthorized-ttl` flag\n      # Default: 30s\n      unauthorizedTTL: 30s\n      # Timeout for the webhook request\n      # Maximum allowed is 30s.\n      # Required, with no default.\n      timeout: 3s\n      # The API version of the authorization.k8s.io SubjectAccessReview to\n      # send to and expect from the webhook.\n      # Same as setting `--authorization-webhook-version` flag\n      # Required, with no default\n      # Valid values: v1beta1, v1\n      subjectAccessReviewVersion: v1\n      # MatchConditionSubjectAccessReviewVersion specifies the SubjectAccessReview\n      # version the CEL expressions are evaluated against\n      # Valid values: v1\n      # Required, no default value\n      matchConditionSubjectAccessReviewVersion: v1\n      # Controls the authorization decision when a webhook request fails to\n      # complete or returns a malformed response or errors evaluating\n      # matchConditions.\n      # Valid values:\n      #   - NoOpinion: continue to subsequent authorizers to see if one of\n      #     them allows the request\n      #   - Deny: reject the request without consulting subsequent authorizers\n      # Required, with no default.\n      failurePolicy: Deny\n      connectionInfo:\n        # Controls how the webhook should communicate with the server.\n        # Valid values:\n        # - KubeConfigFile: use the file specified in kubeConfigFile to locate the\n        #   server.\n        # - InClusterConfig: use the in-cluster configuration to call the\n        #   SubjectAccessReview API hosted by kube-apiserver. This mode is not\n        #   allowed for kube-apiserver.\n        type: KubeConfigFile\n        # Path to KubeConfigFile for connection info\n        # Required, if connectionInfo.Type is KubeConfigFile\n        kubeConfigFile: \/kube-system-authz-webhook.yaml\n        # matchConditions is a list of conditions that must be met for a request to be sent to this\n        # webhook. An empty list of matchConditions matches all requests.\n        # There are a maximum of 64 match conditions allowed.\n        #\n        # The exact matching logic is (in order):\n        #   1. If at least one matchCondition evaluates to FALSE, then the webhook is skipped.\n        #   2. If ALL matchConditions evaluate to TRUE, then the webhook is called.\n        #   3. If at least one matchCondition evaluates to an error (but none are FALSE):\n        #      - If failurePolicy=Deny, then the webhook rejects the request\n        #      - If failurePolicy=NoOpinion, then the error is ignored and the webhook is skipped\n      matchConditions:\n      # expression represents the expression which will be evaluated by CEL. Must evaluate to bool.\n      # CEL expressions have access to the contents of the SubjectAccessReview in v1 version.\n      # If version specified by subjectAccessReviewVersion in the request variable is v1beta1,\n      # the contents would be converted to the v1 version before evaluating the CEL expression.\n      #\n      # Documentation on CEL: https:\/\/kubernetes.io\/docs\/reference\/using-api\/cel\/\n      #\n      # only send resource requests to the webhook\n      - expression: has(request.resourceAttributes)\n      # only intercept requests to kube-system\n      - expression: request.resourceAttributes.namespace == 'kube-system'\n      # don't intercept requests from kube-system service accounts\n      - expression: \"!('system:serviceaccounts:kube-system' in request.groups)\"\n  - type: Node\n    name: node\n  - type: RBAC\n    name: rbac\n  - type: Webhook\n    name: in-cluster-authorizer\n    webhook:\n      authorizedTTL: 5m\n      unauthorizedTTL: 30s\n      timeout: 3s\n      subjectAccessReviewVersion: v1\n      failurePolicy: NoOpinion\n      connectionInfo:\n        type: InClusterConfig\n```\n\nWhen configuring the authorizer chain using a configuration file, make sure all the\ncontrol plane nodes have the same file contents. Take note of the API server\nconfiguration when upgrading \/ downgrading your clusters. For example, if upgrading\nfrom Kubernetes  to Kubernetes ,\nyou would need to make sure the config file is in a format that Kubernetes \ncan understand, before you upgrade the cluster. 
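The requirement that all control plane nodes share identical file contents can be spot-checked mechanically. A minimal sketch, assuming SSH access from an admin host; the node names and the config file path are hypothetical and must be adjusted for your cluster:

```shell
# Hypothetical control plane node names and config path -- adjust for your cluster.
config=/etc/kubernetes/authorization-config.yaml
for node in control-plane-1 control-plane-2 control-plane-3; do
  ssh -n "$node" "sha256sum $config"
done | awk '{print $1}' | sort -u
```

Exactly one checksum in the output means every node holds identical file contents; more than one line indicates drift between nodes.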
If you downgrade to ,\nyou would need to set the configuration appropriately.\n\n#### Authorization configuration and reloads\n\nKubernetes reloads the authorization configuration file when the API server observes a change\nto the file, and also on a 60-second schedule if no change events were observed.\n\n\nYou must ensure that all non-webhook authorizer types remain unchanged in the file on reload.\n\nA reload **must not** add or remove Node or RBAC authorizers (they can be reordered,\nbut cannot be added or removed).\n\n\n## Privilege escalation via workload creation or edits {#privilege-escalation-via-pod-creation}\n\nUsers who can create\/edit pods in a namespace, either directly or through an object that\nenables indirect [workload management](\/docs\/concepts\/architecture\/controller\/), may be\nable to escalate their privileges in that namespace. The potential routes to privilege\nescalation include Kubernetes [API extensions](\/docs\/concepts\/extend-kubernetes\/#api-extensions)\nand their associated controllers.\n\n\nAs a cluster administrator, use caution when granting access to create or edit workloads.\nSome details of how these can be misused are documented in\n[escalation paths](\/docs\/reference\/access-authn-authz\/authorization\/#escalation-paths).\n\n\n### Escalation paths {#escalation-paths}\n\nThere are different ways that an attacker or untrustworthy user could gain additional\nprivilege within a namespace, if you allow them to run arbitrary Pods in that namespace:\n\n- Mounting arbitrary Secrets in that namespace\n  - Can be used to access confidential information meant for other workloads\n  - Can be used to obtain a more privileged ServiceAccount's service account token\n- Using arbitrary ServiceAccounts in that namespace\n  - Can perform Kubernetes API actions as another workload (impersonation)\n  - Can perform any privileged actions that ServiceAccount has\n- Mounting or using ConfigMaps meant for other workloads in that namespace\n  - Can be used 
to obtain information meant for other workloads, such as database host names.\n- Mounting volumes meant for other workloads in that namespace\n  - Can be used to obtain information meant for other workloads, and change it.\n\n\nAs a system administrator, you should be cautious when deploying CustomResourceDefinitions\nthat let users make changes to the above areas. These may open privilege escalation paths.\nConsider the consequences of this kind of change when deciding on your authorization controls.\n\n\n## Checking API access\n\n`kubectl` provides the `auth can-i` subcommand for quickly querying the API authorization layer.\nThe command uses the `SelfSubjectAccessReview` API to determine if the current user can perform\na given action, and works regardless of the authorization mode used.\n\n\n```bash\nkubectl auth can-i create deployments --namespace dev\n```\n\nThe output is similar to this:\n\n```\nyes\n```\n\n```shell\nkubectl auth can-i create deployments --namespace prod\n```\n\nThe output is similar to this:\n\n```\nno\n```\n\nAdministrators can combine this with [user impersonation](\/docs\/reference\/access-authn-authz\/authentication\/#user-impersonation)\nto determine what action other users can perform.\n\n```bash\nkubectl auth can-i list secrets --namespace dev --as dave\n```\n\nThe output is similar to this:\n\n```\nno\n```\n\nSimilarly, to check whether a ServiceAccount named `dev-sa` in Namespace `dev`\ncan list Pods in the Namespace `target`:\n\n```bash\nkubectl auth can-i list pods \\\n    --namespace target \\\n    --as system:serviceaccount:dev:dev-sa\n```\n\nThe output is similar to this:\n\n```\nyes\n```\n\nSelfSubjectAccessReview is part of the `authorization.k8s.io` API group, which\nexposes the API server authorization to external services. Other resources in\nthis group include:\n\nSubjectAccessReview\n: Access review for any user, not only the current one. Useful for delegating authorization decisions to the API server. 
For example, the kubelet and extension API servers use this to determine user access to their own APIs.\n\nLocalSubjectAccessReview\n: Like SubjectAccessReview but restricted to a specific namespace.\n\nSelfSubjectRulesReview\n: A review which returns the set of actions a user can perform within a namespace. Useful for users to quickly summarize their own access, or for UIs to hide\/show actions.\n\nThese APIs can be queried by creating normal Kubernetes resources, where the response `status`\nfield of the returned object is the result of the query. For example:\n\n```bash\nkubectl create -f - -o yaml << EOF\napiVersion: authorization.k8s.io\/v1\nkind: SelfSubjectAccessReview\nspec:\n  resourceAttributes:\n    group: apps\n    resource: deployments\n    verb: create\n    namespace: dev\nEOF\n```\n\nThe generated SelfSubjectAccessReview is similar to:\n\n```yaml\napiVersion: authorization.k8s.io\/v1\nkind: SelfSubjectAccessReview\nmetadata:\n  creationTimestamp: null\nspec:\n  resourceAttributes:\n    group: apps\n    resource: deployments\n    namespace: dev\n    verb: create\nstatus:\n  allowed: true\n  denied: false\n```\n\n## What's next\n\n* To learn more about Authentication, see [Authentication](\/docs\/reference\/access-authn-authz\/authentication\/).\n* For an overview, read [Controlling Access to the Kubernetes API](\/docs\/concepts\/security\/controlling-access\/).\n* To learn more about Admission Control, see [Using Admission Controllers](\/docs\/reference\/access-authn-authz\/admission-controllers\/).\n* Read more about [Common Expression Language in Kubernetes](\/docs\/reference\/using-api\/cel\/).","site":"kubernetes reference","answers_cleaned":"    reviewers    erictune   lavalamp   deads2k   liggitt title  Authorization content type  concept weight  30 description      Details of Kubernetes authorization mechanisms and supported authorization modes            overview      Kubernetes authorization takes place following  authentication   docs reference access authn 
authz authentication    Usually  a client making a request must be authenticated  logged in  before its request can be allowed  however  Kubernetes also allows anonymous requests in some circumstances   For an overview of how authorization fits into the wider context of API access control  read  Controlling Access to the Kubernetes API   docs concepts security controlling access          body         Authorization verdicts   determine whether a request is allowed or denied   Kubernetes authorization of API requests takes place within the API server  The API server evaluates all of the request attributes against all policies  potentially also consulting external services  and then allows or denies the request   All parts of an API request must be allowed by some authorization mechanism in order to proceed  In other words  access is denied by default    Access controls and policies that depend on specific fields of specific kinds of objects are handled by    Kubernetes admission control happens after authorization has completed  and  therefore  only when the authorization decision was to allow the request     When multiple  authorization modules   authorization modules  are configured  each is checked in sequence  If any authorizer  approves  or  denies  a request  that decision is immediately returned and no other authorizer is consulted  If all modules have  no opinion  on the request  then the request is denied  An overall deny verdict means that the API server rejects the request and responds with an HTTP 403  Forbidden  status      Request attributes used in authorization  Kubernetes reviews only the following API request attributes        user     The  user  string provided during authentication       group     The list of group names to which the authenticated user belongs       extra     A map of arbitrary string keys to string values  provided by the authentication layer       API     Indicates whether the request is for an API resource       Request path    
 Path to miscellaneous non resource endpoints like   api  or   healthz        API request verb     API verbs like  get    list    create    update    patch    watch    delete   and  deletecollection  are used for resource requests  To determine the request verb for a resource API endpoint  see  request verbs and authorization   docs reference access authn authz authorization  determine the request verb        HTTP request verb     Lowercased HTTP methods like  get    post    put   and  delete  are used for non resource requests       Resource     The ID or name of the resource that is being accessed  for resource requests only     For resource requests using  get    update    patch   and  delete  verbs  you must provide the resource name       Subresource     The subresource that is being accessed  for resource requests only        Namespace     The namespace of the object that is being accessed  for namespaced resource requests only        API group     The  being accessed  for resource requests only   An empty string designates the  core   API group   docs reference using api  api groups        Request verbs and authorization   determine the request verb        Non resource requests   request verb non resource   Requests to endpoints other than   api v1      or   apis  group   version       are considered  non resource requests   and use the lower cased HTTP method of the request as the verb  For example  making a  GET  request using HTTP to endpoints such as   api  or   healthz  would use   get   as the verb        Resource requests   request verb resource   To determine the request verb for a resource API endpoint  Kubernetes maps the HTTP verb used and considers whether or not the request acts on an individual resource or on a collection of resources   HTTP verb       request verb                                 POST             create    GET    HEAD      get    for individual resources     list    for collections  including full object content     watch    
for watching an individual resource or collection of resources   PUT              update    PATCH            patch    DELETE           delete    for individual resources     deletecollection    for collections     The   get      list   and   watch   verbs can all return the full details of a resource  In terms of access to the returned data they are equivalent  For example    list   on  secrets  will reveal the   data   attributes of any returned resources    Kubernetes sometimes checks authorization for additional permissions using specialized verbs  For example     Special cases of  authentication   docs reference access authn authz authentication         impersonate   verb on  users    groups   and  serviceaccounts  in the core API group  and the  userextras  in the  authentication k8s io  API group     Authorization of CertificateSigningRequests   docs reference access authn authz certificate signing requests  authorization        approve   verb for CertificateSigningRequests  and   update   for revisions to existing approvals    RBAC   docs reference access authn authz rbac  privilege escalation prevention and bootstrapping        bind   and   escalate   verbs on  roles  and  clusterroles  resources in the  rbac authorization k8s io  API group      Authorization context  Kubernetes expects attributes that are common to REST API requests  This means that Kubernetes authorization works with existing organization wide or cloud provider wide access control systems which may handle other APIs besides the Kubernetes API      Authorization modes   authorization modules   The Kubernetes API server may authorize a request using one of several authorization modes    AlwaysAllow    This mode allows all requests  which brings  security risks   warning always allow   Use this authorization mode only if you do not require authorization for your API requests  for example  for testing     AlwaysDeny    This mode blocks all requests  Use this authorization mode only for 
testing    ABAC    attribute based access control   docs reference access authn authz abac      Kubernetes ABAC mode defines an access control paradigm whereby access rights are granted to users through the use of policies which combine attributes together  The policies can use any type of attributes  user attributes  resource attributes  object  environment attributes  etc     RBAC    role based access control   docs reference access authn authz rbac      Kubernetes RBAC is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise  In this context  access is the ability of an individual user to perform a specific task  such as view  create  or modify a file      In this mode  Kubernetes uses the  rbac authorization k8s io  API group to drive authorization decisions  allowing you to dynamically configure permission policies through the Kubernetes API    Node    A special purpose authorization mode that grants permissions to kubelets based on the pods they are scheduled to run  To learn more about the Node authorization mode  see  Node Authorization   docs reference access authn authz node      Webhook    Kubernetes  webhook mode   docs reference access authn authz webhook   for authorization makes a synchronous HTTP callout  blocking the request until the remote HTTP service responds to the query You can write your own software to handle the callout  or use solutions from the ecosystem    a id  warning always allow       Enabling the  AlwaysAllow  mode bypasses authorization  do not use this on a cluster where you do not trust   all   potential API clients  including the workloads that you run   Authorization mechanisms typically return either a  deny  or  no opinion  result  see  authorization verdicts   determine whether a request is allowed or denied  for more on this  Activating the  AlwaysAllow  means that if all other authorizers return  no opinion   the request is allowed  For example     
authorization mode AlwaysAllow RBAC  has the same effect as    authorization mode AlwaysAllow  because Kubernetes RBAC does not provide negative  deny  access rules   You should not use the  AlwaysAllow  mode on a Kubernetes cluster where the API server is reachable from the public internet        The system masters group  The  system masters  group is a built in Kubernetes group that grants unrestricted access to the API server  Any user assigned to this group has full cluster administrator privileges  bypassing any authorization restrictions imposed by the RBAC or Webhook mechanisms   Avoid adding users   docs concepts security rbac good practices  least privilege  to this group  If you do need to grant a user cluster admin rights  you can create a  ClusterRoleBinding   docs reference access authn authz rbac  user facing roles  to the built in  cluster admin  ClusterRole       Authorization mode configuration   choice of authz config   You can configure the Kubernetes API server s authorizer chain using either  command line arguments   using flags for your authorization module  only or  as a beta feature  using a  configuration file   using configuration file for authorization    You have to pick one of the two configuration approaches  setting both    authorization config  path and configuring an authorization webhook using the    authorization mode  and    authorization webhook    command line arguments is not allowed  If you try this  the API server reports an error message during startup  then exits immediately       Command line authorization mode configuration   using flags for your authorization module     You can use the following modes        authorization mode ABAC   Attribute based access control mode       authorization mode RBAC   Role based access control mode       authorization mode Node   Node authorizer       authorization mode Webhook   Webhook authorization mode       authorization mode AlwaysAllow   always allows requests  carries  security 
risks   warning always allow        authorization mode AlwaysDeny   always denies requests   You can choose more than one authorization mode  for example     authorization mode Node Webhook   Kubernetes checks authorization modules based on the order that you specify them on the API server s command line  so an earlier module has higher priority to allow or deny a request   You cannot combine the    authorization mode  command line argument with the    authorization config  command line argument used for  configuring authorization using a local file   using configuration file for authorization mode    For more information on command line arguments to the API server  read the   kube apiserver  reference   docs reference command line tools reference kube apiserver          keep legacy hyperlinks working      a id  configuring the api server using an authorization config file          Configuring the API Server using an authorization config file   using configuration file for authorization     As a beta feature  Kubernetes lets you configure authorization chains that can include multiple webhooks  The authorization items in that chain can have well defined parameters that validate requests in a particular order  offering you fine grained control  such as explicit Deny on failures   The configuration file approach even allows you to specify  CEL   docs reference using api cel   rules to pre filter requests before they are dispatched to webhooks  helping you to prevent unnecessary invocations  The API server also automatically reloads the authorizer chain when the configuration file is modified   You specify the path to the authorization configuration using the    authorization config  command line argument   If you want to use command line arguments instead of a configuration file  that s also a valid and supported approach  Some authorization capabilities  for example  multiple webhooks  webhook failure policy  and pre filter rules  are only available if you use an 
authorization configuration file        Example configuration   authz config example            DO NOT USE THE CONFIG AS IS  THIS IS AN EXAMPLE    apiVersion  apiserver config k8s io v1beta1 kind  AuthorizationConfiguration authorizers      type  Webhook       Name used to describe the authorizer       This is explicitly used in monitoring machinery for metrics       Note            Validation for this field is similar to how K8s labels are validated today        Required  with no default     name  webhook     webhook          The duration to cache  authorized  responses from the webhook         authorizer          Same as setting    authorization webhook cache authorized ttl  flag         Default  5m0s       authorizedTTL  30s         The duration to cache  unauthorized  responses from the webhook         authorizer          Same as setting    authorization webhook cache unauthorized ttl  flag         Default  30s       unauthorizedTTL  30s         Timeout for the webhook request         Maximum allowed is 30s          Required  with no default        timeout  3s         The API version of the authorization k8s io SubjectAccessReview to         send to and expect from the webhook          Same as setting    authorization webhook version  flag         Required  with no default         Valid values  v1beta1  v1       subjectAccessReviewVersion  v1         MatchConditionSubjectAccessReviewVersion specifies the SubjectAccessReview         version the CEL expressions are evaluated against         Valid values  v1         Required  no default value       matchConditionSubjectAccessReviewVersion  v1         Controls the authorization decision when a webhook request fails to         complete or returns a malformed response or errors evaluating         matchConditions          Valid values              NoOpinion  continue to subsequent authorizers to see if one of             them allows the request             Deny  reject the request without consulting subsequent 
authorizers         Required  with no default        failurePolicy  Deny       connectionInfo            Controls how the webhook should communicate with the server            Valid values              KubeConfigFile  use the file specified in kubeConfigFile to locate the             server              InClusterConfig  use the in cluster configuration to call the             SubjectAccessReview API hosted by kube apiserver  This mode is not             allowed for kube apiserver          type  KubeConfigFile           Path to KubeConfigFile for connection info           Required  if connectionInfo Type is KubeConfigFile         kubeConfigFile   kube system authz webhook yaml           matchConditions is a list of conditions that must be met for a request to be sent to this           webhook  An empty list of matchConditions matches all requests            There are a maximum of 64 match conditions allowed                      The exact matching logic is  in order               1  If at least one matchCondition evaluates to FALSE  then the webhook is skipped              2  If ALL matchConditions evaluate to TRUE  then the webhook is called              3  If at least one matchCondition evaluates to an error  but none are FALSE                    If failurePolicy Deny  then the webhook rejects the request                  If failurePolicy NoOpinion  then the error is ignored and the webhook is skipped       matchConditions          expression represents the expression which will be evaluated by CEL  Must evaluate to bool          CEL expressions have access to the contents of the SubjectAccessReview in v1 version          If version specified by subjectAccessReviewVersion in the request variable is v1beta1          the contents would be converted to the v1 version before evaluating the CEL expression                  Documentation on CEL  https   kubernetes io docs reference using api cel                  only send resource requests to the webhook         
expression  has request resourceAttributes          only intercept requests to kube system         expression  request resourceAttributes namespace     kube system          don t intercept requests from kube system service accounts         expression      system serviceaccounts kube system  in request groups       type  Node     name  node     type  RBAC     name  rbac     type  Webhook     name  in cluster authorizer     webhook        authorizedTTL  5m       unauthorizedTTL  30s       timeout  3s       subjectAccessReviewVersion  v1       failurePolicy  NoOpinion       connectionInfo          type  InClusterConfig   When configuring the authorizer chain using a configuration file  make sure all the control plane nodes have the same file contents  Take a note of the API server configuration when upgrading   downgrading your clusters  For example  if upgrading from Kubernetes  to Kubernetes   you would need to make sure the config file is in a format that Kubernetes  can understand  before you upgrade the cluster  If you downgrade to   you would need to set the configuration appropriately        Authorization configuration and reloads  Kubernetes reloads the authorization configuration file when the API server observes a change to the file  and also on a 60 second schedule if no change events were observed    You must ensure that all non webhook authorizer types remain unchanged in the file on reload   A reload   must not   add or remove Node or RBAC authorizers  they can be reordered  but cannot be added or removed        Privilege escalation via workload creation or edits   privilege escalation via pod creation   Users who can create edit pods in a namespace  either directly or through an object that enables indirect  workload management   docs concepts architecture controller    may be able to escalate their privileges in that namespace  The potential routes to privilege escalation include Kubernetes  API extensions   docs concepts extend kubernetes  api 
extensions  and their associated     As a cluster administrator  use caution when granting access to create or edit workloads  Some details of how these can be misused are documented in  escalation paths   docs reference access authn authz authorization  escalation paths         Escalation paths   escalation paths   There are different ways that an attacker or untrustworthy user could gain additional privilege within a namespace  if you allow them to run arbitrary Pods in that namespace     Mounting arbitrary Secrets in that namespace     Can be used to access confidential information meant for other workloads     Can be used to obtain a more privileged ServiceAccount s service account token   Using arbitrary ServiceAccounts in that namespace     Can perform Kubernetes API actions as another workload  impersonation      Can perform any privileged actions that ServiceAccount has   Mounting or using ConfigMaps meant for other workloads in that namespace     Can be used to obtain information meant for other workloads  such as database host names    Mounting volumes meant for other workloads in that namespace     Can be used to obtain information meant for other workloads  and change it    As a system administrator  you should be cautious when deploying CustomResourceDefinitions that let users make changes to the above areas  These may open privilege escalations paths  Consider the consequences of this kind of change when deciding on your authorization controls       Checking API access   kubectl  provides the  auth can i  subcommand for quickly querying the API authorization layer  The command uses the  SelfSubjectAccessReview  API to determine if the current user can perform a given action  and works regardless of the authorization mode used       bash kubectl auth can i create deployments   namespace dev      The output is similar to this       yes         shell kubectl auth can i create deployments   namespace prod      The output is similar to this       no      
Administrators can combine this with [user impersonation](/docs/reference/access-authn-authz/authentication/#user-impersonation) to determine what action other users can perform.

```bash
kubectl auth can-i list secrets --namespace dev --as dave
```

The output is similar to this:

```
no
```

Similarly, to check whether a ServiceAccount named `dev-sa` in Namespace `dev` can list Pods in the Namespace `target`:

```bash
kubectl auth can-i list pods \
  --namespace target \
  --as system:serviceaccount:dev:dev-sa
```

The output is similar to this:

```
yes
```

`SelfSubjectAccessReview` is part of the `authorization.k8s.io` API group, which exposes the API server authorization to external services. Other resources in this group include:

- `SubjectAccessReview`: Access review for any user, not only the current one. Useful for delegating authorization decisions to the API server. For example, the kubelet and extension API servers use this to determine user access to their own APIs.
- `LocalSubjectAccessReview`: Like `SubjectAccessReview` but restricted to a specific namespace.
- `SelfSubjectRulesReview`: A review which returns the set of actions a user can perform within a namespace. Useful for users to quickly summarize their own access, or for UIs to hide/show actions.

These APIs can be queried by creating normal Kubernetes resources, where the response `status` field of the returned object is the result of the query. For example:

```bash
kubectl create -f - -o yaml << EOF
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    group: apps
    resource: deployments
    verb: create
    namespace: dev
EOF
```

The generated SelfSubjectAccessReview is similar to:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
metadata:
  creationTimestamp: null
spec:
  resourceAttributes:
    group: apps
    resource: deployments
    namespace: dev
    verb: create
status:
  allowed: true
  denied: false
```

- To learn more about Authentication, see [Authentication](/docs/reference/access-authn-authz/authentication/).
- For an overview, read [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access/).
- To learn more about Admission Control, see [Using Admission Controllers](/docs/reference/access-authn-authz/admission-controllers/).
- Read more about [Common Expression Language in Kubernetes](/docs/reference/using-api/cel/).
"}
{"questions":"kubernetes reference liggitt contenttype concept enj title Managing Service Accounts reviewers overview weight 50","answers":"---\nreviewers:\n  - liggitt\n  - enj\ntitle: Managing Service Accounts\ncontent_type: concept\nweight: 50\n---\n\n<!-- overview -->\n\nA _ServiceAccount_ provides an identity for processes that run in a Pod.\n\nA process inside a Pod can use the identity of its associated service account to\nauthenticate to the cluster's API server.\n\nFor an introduction to service accounts, read [configure service accounts](\/docs\/tasks\/configure-pod-container\/configure-service-account\/).\n\nThis task guide explains some of the concepts behind ServiceAccounts. The\nguide also explains how to obtain or revoke tokens that represent\nServiceAccounts.\n\n<!-- body -->\n\n## \n\n\n\nTo be able to follow these steps exactly, ensure you have a namespace named\n`examplens`.\nIf you don't, create one by running:\n\n```shell\nkubectl create namespace examplens\n```\n\n## User accounts versus service accounts\n\nKubernetes distinguishes between the concept of a user account and a service account\nfor a number of reasons:\n\n- User accounts are for humans. Service accounts are for application processes,\n  which (for Kubernetes) run in containers that are part of pods.\n- User accounts are intended to be global: names must be unique across all\n  namespaces of a cluster. No matter what namespace you look at, a particular\n  username that represents a user represents the same user.\n  In Kubernetes, service accounts are namespaced: two different namespaces can\n  contain ServiceAccounts that have identical names.\n- Typically, a cluster's user accounts might be synchronised from a corporate\n  database, where new user account creation requires special privileges and is\n  tied to complex business processes. 
By contrast, service account creation is\n  intended to be more lightweight, allowing cluster users to create service accounts\n  for specific tasks on demand. Separating ServiceAccount creation from the steps to\n  onboard human users makes it easier for workloads to follow the principle of\n  least privilege.\n- Auditing considerations for humans and service accounts may differ; the separation\n  makes that easier to achieve.\n- A configuration bundle for a complex system may include definition of various service\n  accounts for components of that system. Because service accounts can be created\n  without many constraints and have namespaced names, such configuration is\n  usually portable.\n\n## Bound service account tokens\n\nServiceAccount tokens can be bound to API objects that exist in the kube-apiserver.\nThis can be used to tie the validity of a token to the existence of another API object.\nSupported object types are as follows:\n\n* Pod (used for projected volume mounts, see below)\n* Secret (can be used to allow revoking a token by deleting the Secret)\n* Node (in v1.30, creating new node-bound tokens is alpha, using existing node-bound tokens is beta)\n\nWhen a token is bound to an object, the object's `metadata.name` and `metadata.uid` are\nstored as extra 'private claims' in the issued JWT.\n\nWhen a bound token is presented to the kube-apiserver, the service account authenticator\nwill extract and verify these claims.\nIf the referenced object or the ServiceAccount is pending deletion (for example, due to finalizers),\nthen for any instant that is 60 seconds (or more) after the `.metadata.deletionTimestamp` date,\nauthentication with that token would fail.\nIf the referenced object no longer exists (or its `metadata.uid` does not match),\nthe request will not be authenticated.\n\n### Additional metadata in Pod bound tokens\n\n\n\nWhen a service account token is bound to a Pod object, additional metadata is also\nembedded into the token that 
indicates the value of the bound pod's `spec.nodeName` field,\nand the uid of that Node, if available.\n\nThis node information is **not** verified by the kube-apiserver when the token is used for authentication.\nIt is included so integrators do not have to fetch Pod or Node API objects to check the associated Node name\nand uid when inspecting a JWT.\n\n### Verifying and inspecting private claims\n\nThe `TokenReview` API can be used to verify and extract private claims from a token:\n\n1. First, assume you have a pod named `test-pod` and a service account named `my-sa`.\n2. Create a token that is bound to this Pod:\n\n```shell\nkubectl create token my-sa --bound-object-kind=\"Pod\" --bound-object-name=\"test-pod\"\n```\n\n3. Copy this token into a new file named `tokenreview.yaml`:\n\n```yaml\napiVersion: authentication.k8s.io\/v1\nkind: TokenReview\nspec:\n  token: <token from step 2>\n```\n\n4. Submit this resource to the apiserver for review:\n\n```shell\nkubectl create -o yaml -f tokenreview.yaml # we use '-o yaml' so we can inspect the output\n```\n\nYou should see an output like below:\n\n```yaml\napiVersion: authentication.k8s.io\/v1\nkind: TokenReview\nmetadata:\n  creationTimestamp: null\nspec:\n  token: <token>\nstatus:\n  audiences:\n  - https:\/\/kubernetes.default.svc.cluster.local\n  authenticated: true\n  user:\n    extra:\n      authentication.kubernetes.io\/credential-id:\n      - JTI=7ee52be0-9045-4653-aa5e-0da57b8dccdc\n      authentication.kubernetes.io\/node-name:\n      - kind-control-plane\n      authentication.kubernetes.io\/node-uid:\n      - 497e9d9a-47aa-4930-b0f6-9f2fb574c8c6\n      authentication.kubernetes.io\/pod-name:\n      - test-pod\n      authentication.kubernetes.io\/pod-uid:\n      - e87dbbd6-3d7e-45db-aafb-72b24627dff5\n    groups:\n    - system:serviceaccounts\n    - system:serviceaccounts:default\n    - system:authenticated\n    uid: f8b4161b-2e2b-11e9-86b7-2afc33b31a7e\n    username: 
system:serviceaccount:default:my-sa\n```\n\n\nDespite using `kubectl create -f` to create this resource, and defining it similar to\nother resource types in Kubernetes, TokenReview is a special type and the kube-apiserver\ndoes not actually persist the TokenReview object into etcd.\nHence `kubectl get tokenreview` is not a valid command.\n\n\n## Bound service account token volume mechanism {#bound-service-account-token-volume}\n\n\n\nBy default, the Kubernetes control plane (specifically, the\n[ServiceAccount admission controller](#serviceaccount-admission-controller)) \nadds a [projected volume](\/docs\/concepts\/storage\/projected-volumes\/) to Pods,\nand this volume includes a token for Kubernetes API access.\n\nHere's an example of how that looks for a launched Pod:\n\n```yaml\n...\n  - name: kube-api-access-<random-suffix>\n    projected:\n      sources:\n        - serviceAccountToken:\n            path: token # must match the path the app expects\n        - configMap:\n            items:\n              - key: ca.crt\n                path: ca.crt\n            name: kube-root-ca.crt\n        - downwardAPI:\n            items:\n              - fieldRef:\n                  apiVersion: v1\n                  fieldPath: metadata.namespace\n                path: namespace\n```\n\nThat manifest snippet defines a projected volume that consists of three sources. In this case,\neach source also represents a single path within that volume. The three sources are:\n\n1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.\n   The kubelet fetches time-bound tokens using the TokenRequest API. 
A token served for a TokenRequest expires\n   either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).\n   The kubelet also refreshes that token before the token expires.\n   The token is bound to the specific Pod and has the kube-apiserver as its audience.\n   This mechanism superseded an earlier mechanism that added a volume based on a Secret,\n   where the Secret represented the ServiceAccount for the Pod, but did not expire.\n1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these\n   certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to middlebox\n   or an accidentally misconfigured peer).\n1. A `downwardAPI` source that looks up the name of the namespace containing the Pod, and makes\n   that name information available to application code running inside the Pod.\n\nAny container within the Pod that mounts this particular volume can access the above information.\n\n\nThere is no specific mechanism to invalidate a token issued via TokenRequest. If you no longer\ntrust a bound service account token for a Pod, you can delete that Pod. Deleting a Pod expires\nits bound service account tokens.\n\n\n## Manual Secret management for ServiceAccounts\n\nVersions of Kubernetes before v1.22 automatically created credentials for accessing\nthe Kubernetes API. 
This older mechanism was based on creating token Secrets that\ncould then be mounted into running Pods.\n\nIn more recent versions, including Kubernetes v, API credentials\nare [obtained directly](#bound-service-account-token-volume) using the\n[TokenRequest](\/docs\/reference\/kubernetes-api\/authentication-resources\/token-request-v1\/) API,\nand are mounted into Pods using a projected volume.\nThe tokens obtained using this method have bounded lifetimes, and are automatically\ninvalidated when the Pod they are mounted into is deleted.\n\nYou can still [manually create](\/docs\/tasks\/configure-pod-container\/configure-service-account\/#manually-create-an-api-token-for-a-serviceaccount) a Secret to hold a service account token; for example, if you need a token that never expires.\n\nOnce you manually create a Secret and link it to a ServiceAccount, the Kubernetes control plane automatically populates the token into that Secret.\n\n\nAlthough the manual mechanism for creating a long-lived ServiceAccount token exists,\nusing [TokenRequest](\/docs\/reference\/kubernetes-api\/authentication-resources\/token-request-v1\/)\nto obtain short-lived API access tokens is recommended instead.\n\n\n## Auto-generated legacy ServiceAccount token clean up {#auto-generated-legacy-serviceaccount-token-clean-up}\n\nBefore version 1.24, Kubernetes automatically generated Secret-based tokens for\nServiceAccounts. To distinguish between automatically generated tokens and\nmanually created ones, Kubernetes checks for a reference from the\nServiceAccount's secrets field. If the Secret is referenced in the `secrets`\nfield, it is considered an auto-generated legacy token. Otherwise, it is\nconsidered a manually created legacy token. 
For example:\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: build-robot\n  namespace: default\nsecrets:\n  - name: build-robot-secret # usually NOT present for a manually generated token                         \n```\n\nBeginning from version 1.29, legacy ServiceAccount tokens that were generated\nautomatically will be marked as invalid if they remain unused for a certain\nperiod of time (set to default at one year). Tokens that continue to be unused\nfor this defined period (again, by default, one year) will subsequently be\npurged by the control plane.\n\nIf users use an invalidated auto-generated token, the token validator will\n\n1. add an audit annotation for the key-value pair\n  `authentication.k8s.io\/legacy-token-invalidated: <secret name>\/<namespace>`,\n1. increment the `invalid_legacy_auto_token_uses_total` metric count,\n1. update the Secret label `kubernetes.io\/legacy-token-last-used` with the new\n   date,\n1. return an error indicating that the token has been invalidated.\n\nWhen receiving this validation error, users can update the Secret to remove the\n`kubernetes.io\/legacy-token-invalid-since` label to temporarily allow use of\nthis token.\n\nHere's an example of an auto-generated legacy token that has been marked with the\n`kubernetes.io\/legacy-token-last-used` and `kubernetes.io\/legacy-token-invalid-since`\nlabels:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: build-robot-secret\n  namespace: default\n  labels:\n    kubernetes.io\/legacy-token-last-used: 2022-10-24\n    kubernetes.io\/legacy-token-invalid-since: 2023-10-25\n  annotations:\n    kubernetes.io\/service-account.name: build-robot\ntype: kubernetes.io\/service-account-token\n```\n\n## Control plane details\n\n### ServiceAccount controller\n\nA ServiceAccount controller manages the ServiceAccounts inside namespaces, and\nensures a ServiceAccount named \"default\" exists in every active namespace.\n\n### Token controller\n\nThe service account 
token controller runs as part of `kube-controller-manager`.\nThis controller acts asynchronously. It:\n\n- watches for ServiceAccount deletion and deletes all corresponding ServiceAccount\n  token Secrets.\n- watches for ServiceAccount token Secret addition, and ensures the referenced\n  ServiceAccount exists, and adds a token to the Secret if needed.\n- watches for Secret deletion and removes a reference from the corresponding\n  ServiceAccount if needed.\n\nYou must pass a service account private key file to the token controller in\nthe `kube-controller-manager` using the `--service-account-private-key-file`\nflag. The private key is used to sign generated service account tokens.\nSimilarly, you must pass the corresponding public key to the `kube-apiserver`\nusing the `--service-account-key-file` flag. The public key will be used to\nverify the tokens during authentication.\n\n### ServiceAccount admission controller\n\nThe modification of pods is implemented via a plugin\ncalled an [Admission Controller](\/docs\/reference\/access-authn-authz\/admission-controllers\/).\nIt is part of the API server.\nThis admission controller acts synchronously to modify pods as they are created.\nWhen this plugin is active (and it is by default on most distributions), then\nit does the following when a Pod is created:\n\n1. If the pod does not have a `.spec.serviceAccountName` set, the admission controller sets the name of the\n   ServiceAccount for this incoming Pod to `default`.\n1. The admission controller ensures that the ServiceAccount referenced by the incoming Pod exists. If there\n   is no ServiceAccount with a matching name, the admission controller rejects the incoming Pod. That check\n   applies even for the `default` ServiceAccount.\n1. 
Provided that neither the ServiceAccount's `automountServiceAccountToken` field nor the\n   Pod's `automountServiceAccountToken` field is set to `false`:\n   - the admission controller mutates the incoming Pod, adding an extra volume that contains\n     a token for API access.\n   - the admission controller adds a `volumeMount` to each container in the Pod,\n     skipping any containers that already have a volume mount defined for the path\n     `\/var\/run\/secrets\/kubernetes.io\/serviceaccount`.\n     For Linux containers, that volume is mounted at `\/var\/run\/secrets\/kubernetes.io\/serviceaccount`;\n     on Windows nodes, the mount is at the equivalent path.\n1. If the spec of the incoming Pod doesn't already contain any `imagePullSecrets`, then the\n   admission controller adds `imagePullSecrets`, copying them from the `ServiceAccount`.\n\n### Legacy ServiceAccount token tracking controller\n\n\n\nThis controller generates a ConfigMap called\n`kube-system\/kube-apiserver-legacy-service-account-token-tracking` in the\n`kube-system` namespace. The ConfigMap records the timestamp when legacy service\naccount tokens began to be monitored by the system.\n\n### Legacy ServiceAccount token cleaner\n\n\n\nThe legacy ServiceAccount token cleaner runs as part of the\n`kube-controller-manager` and checks every 24 hours to see if any auto-generated\nlegacy ServiceAccount token has not been used in a *specified amount of time*.\nIf so, the cleaner marks those tokens as invalid.\n\nThe cleaner works by first checking the ConfigMap created by the control plane\n(provided that `LegacyServiceAccountTokenTracking` is enabled). 
If the current\ntime is a *specified amount of time* after the date in the ConfigMap, the\ncleaner then loops through the list of Secrets in the cluster and evaluates each\nSecret that has the type `kubernetes.io\/service-account-token`.\n\nIf a Secret meets all of the following conditions, the cleaner marks it as\ninvalid:\n\n- The Secret is auto-generated, meaning that it is bi-directionally referenced\n  by a ServiceAccount.\n- The Secret is not currently mounted by any pods.\n- The Secret has not been used in a *specified amount of time* since it was\n  created or since it was last used.\n\nThe cleaner marks a Secret invalid by adding a label called\n`kubernetes.io\/legacy-token-invalid-since` to the Secret, with the current date\nas the value. If an invalid Secret is not used in a *specified amount of time*,\nthe cleaner will delete it.\n\n\nAll the *specified amount of time* above defaults to one year. The cluster\nadministrator can configure this value through the\n`--legacy-service-account-token-clean-up-period` command line argument for the\n`kube-controller-manager` component.\n\n\n### TokenRequest API\n\n\n\nYou use the [TokenRequest](\/docs\/reference\/kubernetes-api\/authentication-resources\/token-request-v1\/)\nsubresource of a ServiceAccount to obtain a time-bound token for that ServiceAccount.\nYou don't need to call this to obtain an API token for use within a container, since\nthe kubelet sets this up for you using a _projected volume_.\n\nIf you want to use the TokenRequest API from `kubectl`, see\n[Manually create an API token for a ServiceAccount](\/docs\/tasks\/configure-pod-container\/configure-service-account\/#manually-create-an-api-token-for-a-serviceaccount).\n\nThe Kubernetes control plane (specifically, the ServiceAccount admission controller)\nadds a projected volume to Pods, and the kubelet ensures that this volume contains a token\nthat lets containers authenticate as the right ServiceAccount.\n\n(This mechanism superseded an 
earlier mechanism that added a volume based on a Secret,\nwhere the Secret represented the ServiceAccount for the Pod but did not expire.)\n\nHere's an example of how that looks for a launched Pod:\n\n```yaml\n...\n  - name: kube-api-access-<random-suffix>\n    projected:\n      defaultMode: 420 # decimal equivalent of octal 0644\n      sources:\n        - serviceAccountToken:\n            expirationSeconds: 3607\n            path: token\n        - configMap:\n            items:\n              - key: ca.crt\n                path: ca.crt\n            name: kube-root-ca.crt\n        - downwardAPI:\n            items:\n              - fieldRef:\n                  apiVersion: v1\n                  fieldPath: metadata.namespace\n                path: namespace\n```\n\nThat manifest snippet defines a projected volume that combines information from three sources:\n\n1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.\n   The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires\n   either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).\n   The token is bound to the specific Pod and has the kube-apiserver as its audience.\n1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these\n   certificates to make sure that they are connecting to your cluster's kube-apiserver (and not to middlebox\n   or an accidentally misconfigured peer).\n1. A `downwardAPI` source. This `downwardAPI` volume makes the name of the namespace containing the Pod available\n   to application code running inside the Pod.\n\nAny container within the Pod that mounts this volume can access the above information.\n\n## Create additional API tokens {#create-token}\n\n\nOnly create long-lived API tokens if the [token request](#tokenrequest-api) mechanism\nis not suitable. 
The token request mechanism provides time-limited tokens; because these\nexpire, they represent a lower risk to information security.\n\n\nTo create a non-expiring, persisted API token for a ServiceAccount, create a\nSecret of type `kubernetes.io\/service-account-token` with an annotation\nreferencing the ServiceAccount. The control plane then generates a long-lived token and\nupdates that Secret with that generated token data.\n\nHere is a sample manifest for such a Secret:\n\n\n\nTo create a Secret based on this example, run:\n\n```shell\nkubectl -n examplens create -f https:\/\/k8s.io\/examples\/secret\/serviceaccount\/mysecretname.yaml\n```\n\nTo see the details for that Secret, run:\n\n```shell\nkubectl -n examplens describe secret mysecretname\n```\n\nThe output is similar to:\n\n```\nName:           mysecretname\nNamespace:      examplens\nLabels:         <none>\nAnnotations:    kubernetes.io\/service-account.name=myserviceaccount\n                kubernetes.io\/service-account.uid=8a85c4c4-8483-11e9-bc42-526af7764f64\n\nType:   kubernetes.io\/service-account-token\n\nData\n====\nca.crt:         1362 bytes\nnamespace:      9 bytes\ntoken:          ...\n```\n\nIf you launch a new Pod into the `examplens` namespace, it can use the `myserviceaccount`\nservice-account-token Secret that you just created.\n\n\nDo not reference manually created Secrets in the `secrets` field of a\nServiceAccount. Otherwise, the manually created Secrets will be cleaned up if they are not used for a long\ntime. 
Please refer to [auto-generated legacy ServiceAccount token clean up](#auto-generated-legacy-serviceaccount-token-clean-up).\n\n\n## Delete\/invalidate a ServiceAccount token {#delete-token}\n\nIf you know the name of the Secret that contains the token you want to remove:\n\n```shell\nkubectl delete secret name-of-secret\n```\n\nOtherwise, first find the Secret for the ServiceAccount.\n\n```shell\n# This assumes that you already have a namespace named 'examplens'\nkubectl -n examplens get serviceaccount\/example-automated-thing -o yaml\n```\n\nThe output is similar to:\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  annotations:\n    kubectl.kubernetes.io\/last-applied-configuration: |\n      {\"apiVersion\":\"v1\",\"kind\":\"ServiceAccount\",\"metadata\":{\"annotations\":{},\"name\":\"example-automated-thing\",\"namespace\":\"examplens\"}}\n  creationTimestamp: \"2019-07-21T07:07:07Z\"\n  name: example-automated-thing\n  namespace: examplens\n  resourceVersion: \"777\"\n  selfLink: \/api\/v1\/namespaces\/examplens\/serviceaccounts\/example-automated-thing\n  uid: f23fd170-66f2-4697-b049-e1e266b7f835\nsecrets:\n  - name: example-automated-thing-token-zyxwv\n```\n\nThen, delete the Secret you now know the name of:\n\n```shell\nkubectl -n examplens delete secret\/example-automated-thing-token-zyxwv\n```\n\n## Clean up\n\nIf you created a namespace `examplens` to experiment with, you can remove it:\n\n```shell\nkubectl delete namespace examplens\n```\n\n## \n\n- Read more details about [projected volumes](\/docs\/concepts\/storage\/projected-volumes\/).","site":"kubernetes reference"}
Linux containers  that volume is mounted at   var run secrets kubernetes io serviceaccount        on Windows nodes  the mount is at the equivalent path  1  If the spec of the incoming Pod doesn t already contain any  imagePullSecrets   then the    admission controller adds  imagePullSecrets   copying them from the  ServiceAccount        Legacy ServiceAccount token tracking controller    This controller generates a ConfigMap called  kube system kube apiserver legacy service account token tracking  in the  kube system  namespace  The ConfigMap records the timestamp when legacy service account tokens began to be monitored by the system       Legacy ServiceAccount token cleaner    The legacy ServiceAccount token cleaner runs as part of the  kube controller manager  and checks every 24 hours to see if any auto generated legacy ServiceAccount token has not been used in a  specified amount of time   If so  the cleaner marks those tokens as invalid   The cleaner works by first checking the ConfigMap created by the control plane  provided that  LegacyServiceAccountTokenTracking  is enabled   If the current time is a  specified amount of time  after the date in the ConfigMap  the cleaner then loops through the list of Secrets in the cluster and evaluates each Secret that has the type  kubernetes io service account token    If a Secret meets all of the following conditions  the cleaner marks it as invalid     The Secret is auto generated  meaning that it is bi directionally referenced   by a ServiceAccount    The Secret is not currently mounted by any pods    The Secret has not been used in a  specified amount of time  since it was   created or since it was last used   The cleaner marks a Secret invalid by adding a label called  kubernetes io legacy token invalid since  to the Secret  with the current date as the value  If an invalid Secret is not used in a  specified amount of time   the cleaner will delete it    All the  specified amount of time  above defaults to one year  
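The one-year staleness check described above amounts to comparing a token Secret's `kubernetes.io/legacy-token-last-used` date against a cutoff one year in the past. A minimal shell sketch of that comparison, assuming bash and GNU `date` (an illustration of the date arithmetic only, not the controller's actual code):

```shell
# Illustration only: decide whether a legacy token's last-used date is older
# than the default one-year clean-up period.
last_used="2022-10-24"                        # example kubernetes.io/legacy-token-last-used value
cutoff="$(date -u -d '1 year ago' +%Y-%m-%d)" # one year before now (GNU date)

# Lexicographic comparison is valid for ISO YYYY-MM-DD dates.
if [[ "$last_used" < "$cutoff" ]]; then
  echo "unused for over a year: candidate for invalidation"
else
  echo "used within the last year: keep"
fi
```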
The cluster administrator can configure this value through the
`--legacy-service-account-token-clean-up-period` command line argument for the
`kube-controller-manager` component.

### TokenRequest API

You use the [TokenRequest](/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
subresource of a ServiceAccount to obtain a time-bound token for that ServiceAccount.
You don't need to call this to obtain an API token for use within a container, since the kubelet
sets this up for you using a projected volume.

If you want to use the TokenRequest API from `kubectl`, see
[Manually create an API token for a ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount).

The Kubernetes control plane (specifically, the ServiceAccount admission controller) adds a projected
volume to Pods, and the kubelet ensures that this volume contains a token that lets containers
authenticate as the right ServiceAccount.

This mechanism superseded an earlier mechanism that added a volume based on a Secret, where the
Secret represented the ServiceAccount for the Pod but did not expire.

Here's an example of how that looks for a launched Pod:

```yaml
  - name: kube-api-access-<random-suffix>
    projected:
      defaultMode: 420 # decimal equivalent of octal 0644
      sources:
        - serviceAccountToken:
            expirationSeconds: 3607
            path: token
        - configMap:
            items:
              - key: ca.crt
                path: ca.crt
            name: kube-root-ca.crt
        - downwardAPI:
            items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
```

That manifest snippet defines a projected volume that combines information from three sources:

1. A `serviceAccountToken` source, that contains a token that the kubelet acquires from kube-apiserver.
   The kubelet fetches time-bound tokens using the TokenRequest API. A token served for a TokenRequest expires
   either when the pod is deleted or after a defined lifespan (by default, that is 1 hour).
   The token is bound to the specific Pod and has the kube-apiserver as its audience.
1. A `configMap` source. The ConfigMap contains a bundle of certificate authority data. Pods can use these
   certificates to make sure that they are connecting to your cluster's kube-apiserver, and not to a middlebox
   or an accidentally misconfigured peer.
1. A `downwardAPI` source. This `downwardAPI` volume makes the name of the namespace containing the Pod available
   to application code running inside the Pod.

Any container within the Pod that mounts this volume can access the above information.

## Create additional API tokens {#create-token}

Only create long-lived API tokens if the [token request](#tokenrequest-api) mechanism
is not suitable. The token request mechanism provides time-limited tokens; because these expire,
they represent a lower risk to information security.

To create a non-expiring, persisted API token for a ServiceAccount, create a Secret of type
`kubernetes.io/service-account-token` with an annotation referencing the ServiceAccount. The
control plane then generates a long-lived token and updates that Secret with that generated
token data.

Here is a sample manifest for such a Secret:

To create a Secret based on this example, run:

```shell
kubectl -n examplens create -f https://k8s.io/examples/secret/serviceaccount/mysecretname.yaml
```

To see the details for that Secret, run:

```shell
kubectl -n examplens describe secret mysecretname
```

The output is similar to:

```none
Name:           mysecretname
Namespace:      examplens
Labels:         <none>
Annotations:    kubernetes.io/service-account.name: myserviceaccount
                kubernetes.io/service-account.uid: 8a85c4c4-8483-11e9-bc42-526af7764f64
Type:   kubernetes.io/service-account-token

Data
====
ca.crt:         1362 bytes
namespace:      9 bytes
token:          ...
```

If you launch a new Pod into the `examplens` namespace, it can use the `myserviceaccount`
service-account-token Secret that you just created.

Do not reference manually created Secrets in the `secrets` field of a ServiceAccount.
Otherwise, the manually created Secrets will be cleaned up if they are unused for a long time.
Please refer to [Auto-generated legacy ServiceAccount token clean up](#auto-generated-legacy-serviceaccount-token-clean-up).

## Delete/invalidate a ServiceAccount token {#delete-token}

If you know the name of the Secret that contains the token you want to remove:

```shell
kubectl delete secret name-of-secret
```

Otherwise, first find the Secret for the ServiceAccount.

```shell
# This assumes that you already have a namespace named 'examplens'
kubectl -n examplens get serviceaccount/example-automated-thing -o yaml
```

The output is similar to:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"example-automated-thing","namespace":"examplens"}}
  creationTimestamp: "2019-07-21T07:07:07Z"
  name: example-automated-thing
  namespace: examplens
  resourceVersion: "777"
  selfLink: /api/v1/namespaces/examplens/serviceaccounts/example-automated-thing
  uid: f23fd170-66f2-4697-b049-e1e266b7f835
secrets:
  - name: example-automated-thing-token-zyxwv
```

Then, delete the Secret you now know the name of:

```shell
kubectl -n examplens delete secret/example-automated-thing-token-zyxwv
```

## Clean up

If you created a namespace `examplens` to experiment with, you can remove it:

```shell
kubectl delete namespace examplens
```

Read more details about [projected volumes](/docs/concepts/storage/projected-volumes/).
"}
{"questions":"kubernetes reference cici37 title Validating Admission Policy liggitt contenttype concept reviewers jpbetz overview","answers":"---\nreviewers:\n- liggitt\n- jpbetz\n- cici37\ntitle: Validating Admission Policy\ncontent_type: concept\n---\n\n<!-- overview -->\n\n\n\nThis page provides an overview of Validating Admission Policy.\n\n\n<!-- body -->\n\n## What is Validating Admission Policy?\n\nValidating admission policies offer a declarative, in-process alternative to validating admission webhooks.\n\nValidating admission policies use the Common Expression Language (CEL) to declare the validation\nrules of a policy.\nValidating admission policies are highly configurable, enabling policy authors to define policies\nthat can be parameterized and scoped to resources as needed by cluster administrators.\n\n## What Resources Make a Policy\n\nA policy is generally made up of three resources:\n\n- The `ValidatingAdmissionPolicy` describes the abstract logic of a policy\n  (think: \"this policy makes sure a particular label is set to a particular value\").\n\n- A `ValidatingAdmissionPolicyBinding` links the above resources together and provides scoping.\n  If you only want to require an `owner` label to be set for `Pods`, the binding is where you would\n  specify this restriction.\n\n- A parameter resource provides information to a ValidatingAdmissionPolicy to make it a concrete\n  statement (think \"the `owner` label must be set to something that ends in `.company.com`\").\n  A native type such as ConfigMap or a CRD defines the schema of a parameter resource.\n  `ValidatingAdmissionPolicy` objects specify what Kind they are expecting for their parameter resource.\n\nAt least a `ValidatingAdmissionPolicy` and a corresponding `ValidatingAdmissionPolicyBinding`\nmust be defined for a policy to have an effect.\n\nIf a `ValidatingAdmissionPolicy` does not need to be configured via parameters, simply leave\n`spec.paramKind` in `ValidatingAdmissionPolicy` not 
specified.\n\n## Getting Started with Validating Admission Policy\n\nValidating Admission Policy is part of the cluster control plane. You should write and deploy policies\nwith great caution. The following describes how to quickly experiment with Validating Admission Policy.\n\n### Creating a ValidatingAdmissionPolicy\n\nThe following is an example of a ValidatingAdmissionPolicy.\n\n\n\n`spec.validations` contains CEL expressions which use the [Common Expression Language (CEL)](https:\/\/github.com\/google\/cel-spec)\nto validate the request. If an expression evaluates to false, the validation check is enforced\naccording to the `spec.failurePolicy` field.\n\n\nYou can quickly test CEL expressions in [CEL Playground](https:\/\/playcel.undistro.io).\n\n\nTo configure a validating admission policy for use in a cluster, a binding is required.\nThe following is an example of a ValidatingAdmissionPolicyBinding:\n\n\n\nWhen you try to create a deployment whose replicas do not satisfy the validation expression, an\nerror is returned containing the following message:\n\n```none\nValidatingAdmissionPolicy 'demo-policy.example.com' with binding 'demo-binding-test.example.com' denied request: failed expression: object.spec.replicas <= 5\n```\n\nThe above provides a simple example of using ValidatingAdmissionPolicy without a parameter configured.\n\n#### Validation actions\n\nEach `ValidatingAdmissionPolicyBinding` must specify one or more\n`validationActions` to declare how `validations` of a policy are enforced.\n\nThe supported `validationActions` are:\n\n- `Deny`: Validation failure results in a denied request.\n- `Warn`: Validation failure is reported to the request client\n  as a [warning](\/blog\/2020\/09\/03\/warnings\/).\n- `Audit`: Validation failure is included in the audit event for the API request.\n\nFor example, to both warn clients about a validation failure and to audit the\nvalidation failures, use:\n\n```yaml\nvalidationActions: [Warn, Audit]\n```\n\n`Deny` and `Warn` may 
not be used together since this combination\nneedlessly duplicates the validation failure both in the\nAPI response body and the HTTP warning headers.\n\nA `validation` that evaluates to false is always enforced according to these\nactions. Failures defined by the `failurePolicy` are enforced\naccording to these actions only if the `failurePolicy` is set to `Fail` (or not specified),\notherwise the failures are ignored.\n\nSee [Audit Annotations: validation failures](\/docs\/reference\/labels-annotations-taints\/audit-annotations\/#validation-policy-admission-k8s-io-validation-failure)\nfor more details about the validation failure audit annotation.\n\n### Parameter resources\n\nParameter resources allow a policy configuration to be separate from its definition.\nA policy can define paramKind, which outlines the GVK of the parameter resource,\nand then a policy binding ties a policy by name (via policyName) to a particular parameter resource via paramRef.\n\nIf parameter configuration is needed, the following is an example of a ValidatingAdmissionPolicy\nwith parameter configuration.\n\n\n\nThe `spec.paramKind` field of the ValidatingAdmissionPolicy specifies the kind of resources used\nto parameterize this policy. For this example, it is configured by ReplicaLimit custom resources.\nNote in this example how the CEL expression references the parameters via the CEL params variable,\ne.g. `params.maxReplicas`. `spec.matchConstraints` specifies what resources this policy is\ndesigned to validate. Note that native types such as `ConfigMap` can also be used as\na parameter reference.\n\nThe `spec.validations` fields contain CEL expressions. 
If an expression evaluates to false, the\nvalidation check is enforced according to the `spec.failurePolicy` field.\n\nThe validating admission policy author is responsible for providing the ReplicaLimit parameter CRD.\n\nTo configure a validating admission policy for use in a cluster, a binding and parameter resource\nare created. The following is an example of a ValidatingAdmissionPolicyBinding\nthat uses a **cluster-wide** param - the same param will be used to validate\nevery resource request that matches the binding:\n\n\n\nNotice this binding applies a parameter to the policy for all resources which\nare in the `test` environment.\n\nThe parameter resource could be as follows:\n\n\n\nThis policy parameter resource limits deployments to a max of 3 replicas.\n\nAn admission policy may have multiple bindings. To bind all other environments\nto have a maxReplicas limit of 100, create another ValidatingAdmissionPolicyBinding:\n\n\n\nNotice this binding applies a different parameter to resources which\nare not in the `test` environment.\n\nAnd have a parameter resource:\n\n\n\nFor each admission request, the API server evaluates CEL expressions of each\n(policy, binding, param) combination that match the request. For a request\nto be admitted it must pass **all** evaluations.\n\nIf multiple bindings match the request, the policy will be evaluated for each,\nand they must all pass evaluation for the policy to be considered passed.\n\nIf multiple parameters match a single binding, the policy rules will be evaluated\nfor each param, and they too must all pass for the binding to be considered passed.\nBindings can have overlapping match criteria. The policy is evaluated for each\nmatching binding-parameter combination. 
A policy may even be evaluated multiple\ntimes if multiple bindings match it, or a single binding that matches multiple\nparameters.\n\nThe params object representing a parameter resource will not be set if a parameter resource has\nnot been bound, so for policies requiring a parameter resource, it can be useful to add a check to\nensure one has been bound. A parameter resource will not be bound and `params` will be null\nif `paramKind` of the policy, or `paramRef` of the binding are not specified.\n\nFor use cases that require parameter configuration, we recommend adding a param check in\n`spec.validations[0].expression`:\n\n```\n- expression: \"params != null\"\n  message: \"params missing but required to bind to this policy\"\n```\n\n#### Optional parameters\n\nIt can be convenient to be able to have optional parameters as part of a parameter resource, and\nonly validate them if present. CEL provides `has()`, which checks if the key passed to it exists.\nCEL also implements Boolean short-circuiting. If the first half of a logical OR evaluates to true,\nit won\u2019t evaluate the other half (since the result of the entire OR will be true regardless).\n\nCombining the two, we can provide a way to validate optional parameters:\n\n`!has(params.optionalNumber) || (params.optionalNumber >= 5 && params.optionalNumber <= 10)`\n\nHere, we first check that the optional parameter is present with `!has(params.optionalNumber)`.\n\n- If `optionalNumber` hasn\u2019t been defined, then the expression short-circuits since\n  `!has(params.optionalNumber)` will evaluate to true.\n- If `optionalNumber` has been defined, then the latter half of the CEL expression will be\n  evaluated, and optionalNumber will be checked to ensure that it contains a value between 5 and\n  10 inclusive.\n\n\n#### Per-namespace Parameters\n\nAs the author of a ValidatingAdmissionPolicy and its ValidatingAdmissionPolicyBinding,\nyou can choose to specify cluster-wide, or per-namespace parameters. 
\nIf you specify a `namespace` for the binding's `paramRef`, the control plane only\nsearches for parameters in that namespace.\n\nHowever, if `namespace` is not specified in the ValidatingAdmissionPolicyBinding, the\nAPI server can search for relevant parameters in the namespace that a request is against.\nFor example, if you make a request to modify a ConfigMap in the `default` namespace and\nthere is a relevant ValidatingAdmissionPolicyBinding with no `namespace` set, then the\nAPI server looks for a parameter object in `default`.\nThis design enables policy configuration that depends on the namespace\nof the resource being manipulated, for more fine-tuned control.\n\n#### Parameter selector\n\nIn addition to specifying a parameter in a binding by `name`, you may\nchoose instead to specify a label selector, such that all resources of the\npolicy's `paramKind`, and the param's `namespace` (if applicable) that match the\nlabel selector are selected for evaluation. See  for more information on how label selectors match resources.\n\nIf multiple parameters are found to meet the condition, the policy's rules are\nevaluated for each parameter found and the results will be ANDed together.\n\nIf `namespace` is provided, only objects of the `paramKind` in the provided\nnamespace are eligible for selection. 
Otherwise, when `namespace` is empty and\n`paramKind` is namespace-scoped, the `namespace` used in the request being\nadmitted will be used.\n\n#### Authorization checks {#authorization-check}\n\nAn authorization check is performed for parameter resources.\nThe user is expected to have `read` access to the resources referenced by `paramKind` in\n`ValidatingAdmissionPolicy` and `paramRef` in `ValidatingAdmissionPolicyBinding`.\n\nNote that if a resource in `paramKind` fails to resolve via the restmapper, `read` access to all\nresources of groups is required.\n\n### Failure Policy\n\n`failurePolicy` defines how mis-configurations and CEL expressions evaluating to error from the\nadmission policy are handled. Allowed values are `Ignore` or `Fail`.\n\n- `Ignore` means that an error calling the ValidatingAdmissionPolicy is ignored and the API\n  request is allowed to continue.\n- `Fail` means that an error calling the ValidatingAdmissionPolicy causes the admission to fail\n  and the API request to be rejected.\n\nNote that the `failurePolicy` is defined inside `ValidatingAdmissionPolicy`:\n\n\n\n### Validation Expression\n\n`spec.validations[i].expression` represents the expression which will be evaluated by CEL.\nTo learn more, see the [CEL language specification](https:\/\/github.com\/google\/cel-spec).\nCEL expressions have access to the contents of the Admission request\/response, organized into CEL\nvariables as well as some other useful variables:\n\n- 'object' - The object from the incoming request. The value is null for DELETE requests.\n- 'oldObject' - The existing object. The value is null for CREATE requests.\n- 'request' - Attributes of the [admission request](\/docs\/reference\/config-api\/apiserver-admission.v1\/#admission-k8s-io-v1-AdmissionRequest).\n- 'params' - Parameter resource referred to by the policy binding being evaluated. 
The value is\n  null if `ParamKind` is not specified.\n- `namespaceObject` - The namespace, as a Kubernetes resource, that the incoming object belongs to.\n  The value is null if the incoming object is cluster-scoped.\n- `authorizer` - A CEL Authorizer. May be used to perform authorization checks for the principal\n  (authenticated user) of the request. See\n  [AuthzSelectors](https:\/\/pkg.go.dev\/k8s.io\/apiserver\/pkg\/cel\/library#AuthzSelectors) and\n  [Authz](https:\/\/pkg.go.dev\/k8s.io\/apiserver\/pkg\/cel\/library#Authz) in the Kubernetes CEL library\n  documentation for more details.\n- `authorizer.requestResource` - A shortcut for an authorization check configured with the request\n  resource (group, resource, (subresource), namespace, name).\n\t\nThe `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from\nthe root of the object. No other metadata properties are accessible.\n\t\nEquality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1].\nConcatenation on arrays with x-kubernetes-list-type use the semantics of the list type:\n\n- 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and\n  non-intersecting elements in `Y` are appended, retaining their partial order.\n- 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values\n  are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. 
Elements in `Y` with\n  non-intersecting keys are appended, retaining their partial order.\n\n#### Validation expression examples\n\n| Expression                                                                                   | Purpose                                                                           |\n|----------------------------------------------------------------------------------------------| ------------                                                                      |\n| `object.minReplicas <= object.replicas && object.replicas <= object.maxReplicas`             | Validate that the three fields defining replicas are ordered appropriately        |\n| `'Available' in object.stateCounts`                                                          | Validate that an entry with the 'Available' key exists in a map                   |\n| `(size(object.list1) == 0) != (size(object.list2) == 0)`                                     | Validate that one of two lists is non-empty, but not both                         |\n| <code>!('MY_KEY' in object.map1) &#124;&#124; object['MY_KEY'].matches('^[a-zA-Z]*$')<\/code> | Validate the value of a map for a specific key, if it is in the map               |\n| `object.envars.filter(e, e.name == 'MY_ENV').all(e, e.value.matches('^[a-zA-Z]*$'))`         | Validate the 'value' field of a listMap entry where key field 'name' is 'MY_ENV'  |\n| `has(object.expired) && object.created + object.ttl < object.expired`                        | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration       |\n| `object.health.startsWith('ok')`                                                             | Validate a 'health' string field has the prefix 'ok'                              |\n| `object.widgets.exists(w, w.key == 'x' && w.foo < 10)`                                       | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 |\n| `type(object) == string ? 
object == '100%' : object == 1000`                                 | Validate an int-or-string field for both the int and string cases             |\n| `object.metadata.name.startsWith(object.prefix)`                                             | Validate that an object's name has the prefix of another field value              |\n| `object.set1.all(e, !(e in object.set2))`                                                    | Validate that two listSets are disjoint                                           |\n| `size(object.names) == size(object.details) && object.names.all(n, n in object.details)`     | Validate the 'details' map is keyed by the items in the 'names' listSet           |\n| `size(object.clusters.filter(c, c.name == object.primary)) == 1`                             | Validate that the 'primary' property has one and only one occurrence in the 'clusters' listMap           |\n\nRead [Supported evaluation on CEL](https:\/\/github.com\/google\/cel-spec\/blob\/v0.6.0\/doc\/langdef.md#evaluation)\nfor more information about CEL rules.\n\n`spec.validations[i].reason` represents a machine-readable description of why this validation failed.\nIf this is the first validation in the list to fail, this reason, as well as the corresponding\nHTTP response code, are used in the HTTP response to the client.\nThe currently supported reasons are: `Unauthorized`, `Forbidden`, `Invalid`, `RequestEntityTooLarge`.\nIf not set, `StatusReasonInvalid` is used in the response to the client.\n\n### Matching requests: `matchConditions`\n\nYou can define _match conditions_ for a `ValidatingAdmissionPolicy` if you need fine-grained request filtering. These\nconditions are useful if you find that match rules, `objectSelectors` and `namespaceSelectors` still\ndon't provide the filtering you want. Match conditions are\n[CEL expressions](\/docs\/reference\/using-api\/cel\/). 
All match conditions must evaluate to true for the\nresource to be evaluated.\n\nHere is an example illustrating a few different uses for match conditions:\n\n\n\nMatch conditions have access to the same CEL variables as validation expressions.\n\nIn the event of an error evaluating a match condition, the policy is not evaluated. Whether to reject\nthe request is determined as follows:\n\n1. If **any** match condition evaluated to `false` (regardless of other errors), the API server skips the policy.\n2. Otherwise:\n   - for [`failurePolicy: Fail`](#failure-policy), reject the request (without evaluating the policy).\n   - for [`failurePolicy: Ignore`](#failure-policy), proceed with the request but skip the policy.\n\n### Audit annotations\n\n`auditAnnotations` may be used to include audit annotations in the audit event of the API request.\n\nFor example, here is an admission policy with an audit annotation:\n\n\n\nWhen an API request is validated with this admission policy, the resulting audit event will look like:\n\n```\n# the audit event recorded\n{\n    \"kind\": \"Event\",\n    \"apiVersion\": \"audit.k8s.io\/v1\",\n    \"annotations\": {\n        \"demo-policy.example.com\/high-replica-count\": \"Deployment spec.replicas set to 128\"\n        # other annotations\n        ...\n    }\n    # other fields\n    ...\n}\n```\n\nIn this example the annotation will only be included if the `spec.replicas` of the Deployment is more than\n50, otherwise the CEL expression evaluates to null and the annotation will not be included.\n\nNote that audit annotation keys are prefixed by the name of the `ValidatingAdmissionPolicy` and a `\/`. 
If\nanother admission controller, such as an admission webhook, uses the exact same audit annotation key, the\nvalue of the first admission controller to include the audit annotation will be included in the audit\nevent and all other values will be ignored.\n\n### Message expression\n\nTo return a more friendly message when the policy rejects a request, we can use a CEL expression\nto compose a message with `spec.validations[i].messageExpression`. Similar to the validation expression,\na message expression has access to `object`, `oldObject`, `request`, `params`, and `namespaceObject`.\nUnlike validations, a message expression must evaluate to a string.\n\nFor example, to better inform the user of the reason of denial when the policy refers to a parameter,\nwe can have the following validation:\n\n\n\nAfter creating a params object that limits the replicas to 3 and setting up the binding,\nwhen we try to create a deployment with 5 replicas, we will receive the following message.\n\n```\n$ kubectl create deploy --image=nginx nginx --replicas=5\nerror: failed to create deployment: deployments.apps \"nginx\" is forbidden: ValidatingAdmissionPolicy 'deploy-replica-policy.example.com' with binding 'demo-binding-test.example.com' denied request: object.spec.replicas must be no greater than 3\n```\n\nThis is more informative than a static message of \"too many replicas\".\n\nThe message expression takes precedence over the static message defined in `spec.validations[i].message` if both are defined.\nHowever, if the message expression fails to evaluate, the static message will be used instead.\nAdditionally, if the message expression evaluates to a multi-line string,\nthe evaluation result will be discarded and the static message will be used if present.\nNote that the static message is validated against multi-line strings.\n\n### Type checking\n\nWhen a policy definition is created or updated, the validation process parses the expressions it contains\nand reports any syntax 
errors, rejecting the definition if any errors are found. \nAfterward, the referred variables are checked for type errors, including missing fields and type confusion,\nagainst the matched types of `spec.matchConstraints`.\nThe result of type checking can be retrieved from `status.typeChecking`.\nThe presence of `status.typeChecking` indicates the completion of type checking,\nand an empty `status.typeChecking` means that no errors were detected.\n\nFor example, given the following policy definition:\n\n\n\nThe status will yield the following information:\n\n```yaml\nstatus:\n  typeChecking:\n    expressionWarnings:\n    - fieldRef: spec.validations[0].expression\n      warning: |-\n        apps\/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n         | object.replicas > 1\n         | ......^\n```\n\nIf multiple resources are matched in `spec.matchConstraints`, all matched resources will be checked.\nFor example, the following policy definition \n\n\n\n\nwill have multiple types, and the type checking result of each type will appear in the warning message.\n\n```yaml\nstatus:\n  typeChecking:\n    expressionWarnings:\n    - fieldRef: spec.validations[0].expression\n      warning: |-\n        apps\/v1, Kind=Deployment: ERROR: <input>:1:7: undefined field 'replicas'\n         | object.replicas > 1\n         | ......^\n        apps\/v1, Kind=ReplicaSet: ERROR: <input>:1:7: undefined field 'replicas'\n         | object.replicas > 1\n         | ......^\n```\n\nType Checking has the following limitations:\n\n- No wildcard matching. If `spec.matchConstraints.resourceRules` contains `\"*\"` in any of `apiGroups`, `apiVersions` or `resources`,\n  the types that `\"*\"` matches will not be checked.\n- The number of matched types is limited to 10. This is to prevent a policy that manually specifies too many types\n  from consuming excessive computing resources. 
In the order of ascending group, version, and then resource, the 11th combination and beyond are ignored.\n- Type Checking does not affect the policy behavior in any way. Even if the type checking detects errors, the policy will continue\n  to evaluate. If errors do occur during evaluation, the failure policy will decide its outcome.\n- Type Checking does not apply to CRDs, including matched CRD types and the `paramKind` reference. Support for CRDs will come in a future release.\n\n### Variable composition\n\nIf an expression grows too complicated, or part of the expression is reusable and computationally expensive to evaluate,\nyou can extract some part of the expressions into variables. A variable is a named expression that can be referred to later\nvia `variables` in other expressions.\n\n```yaml\nspec:\n  variables:\n    - name: foo\n      expression: \"'foo' in object.spec.metadata.labels ? object.spec.metadata.labels['foo'] : 'default'\"\n  validations:\n    - expression: variables.foo == 'bar'\n```\n\nA variable is lazily evaluated when it is first referred to. Any error that occurs during the evaluation will be\nreported during the evaluation of the referring expression. 
Both the result and potential error are memorized and\ncount only once towards the runtime cost.\n\nThe order of variables are important because a variable can refer to other variables that are defined before it.\nThis ordering prevents circular references.\n\nThe following is a more complex example of enforcing that image repo names match the environment defined in its namespace.\n\n\n\nWith the policy bound to the namespace `default`, which is labeled `environment: prod`,\nthe following attempt to create a deployment would be rejected.\n```shell\nkubectl create deploy --image=dev.example.com\/nginx invalid\n```\nThe error message is similar to this.\n```console\nerror: failed to create deployment: deployments.apps \"invalid\" is forbidden: ValidatingAdmissionPolicy 'image-matches-namespace-environment.policy.example.com' with binding 'demo-binding-test.example.com' denied request: only prod images are allowed in namespace default\n```","site":"kubernetes reference","answers_cleaned":"    reviewers    liggitt   jpbetz   cici37 title  Validating Admission Policy content type  concept           overview        This page provides an overview of Validating Admission Policy         body         What is Validating Admission Policy   Validating admission policies offer a declarative  in process alternative to validating admission webhooks   Validating admission policies use the Common Expression Language  CEL  to declare the validation rules of a policy   Validation admission policies are highly configurable  enabling policy authors to define policies that can be parameterized and scoped to resources as needed by cluster administrators      What Resources Make a Policy  A policy is generally made up of three resources     The  ValidatingAdmissionPolicy  describes the abstract logic of a policy    think   this policy makes sure a particular label is set to a particular value       A  ValidatingAdmissionPolicyBinding  links the above resources together and provides scoping 
   If you only want to require an  owner  label to be set for  Pods   the binding is where you would   specify this restriction     A parameter resource provides information to a ValidatingAdmissionPolicy to make it a concrete   statement  think  the  owner  label must be set to something that ends in   company com       A native type such as ConfigMap or a CRD defines the schema of a parameter resource     ValidatingAdmissionPolicy  objects specify what Kind they are expecting for their parameter resource   At least a  ValidatingAdmissionPolicy  and a corresponding   ValidatingAdmissionPolicyBinding  must be defined for a policy to have an effect   If a  ValidatingAdmissionPolicy  does not need to be configured via parameters  simply leave  spec paramKind  in   ValidatingAdmissionPolicy  not specified      Getting Started with Validating Admission Policy  Validating Admission Policy is part of the cluster control plane  You should write and deploy them with great caution  The following describes how to quickly experiment with Validating Admission Policy       Creating a ValidatingAdmissionPolicy  The following is an example of a ValidatingAdmissionPolicy      spec validations  contains CEL expressions which use the  Common Expression Language  CEL   https   github com google cel spec  to validate the request  If an expression evaluates to false  the validation check is enforced according to the  spec failurePolicy  field    You can quickly test CEL expressions in  CEL Playground  https   playcel undistro io     To configure a validating admission policy for use in a cluster  a binding is required  The following is an example of a ValidatingAdmissionPolicyBinding      When trying to create a deployment with replicas set not satisfying the validation expression  an error will return containing message      none ValidatingAdmissionPolicy  demo policy example com  with binding  demo binding test example com  denied request  failed expression  object spec replicas    5 
     The above provides a simple example of using ValidatingAdmissionPolicy without a parameter configured        Validation actions  Each  ValidatingAdmissionPolicyBinding  must specify one or more  validationActions  to declare how  validations  of a policy are enforced   The supported  validationActions  are      Deny   Validation failure results in a denied request     Warn   Validation failure is reported to the request client   as a  warning   blog 2020 09 03 warnings       Audit   Validation failure is included in the audit event for the API request   For example  to both warn clients about a validation failure and to audit the validation failures  use      yaml validationActions   Warn  Audit        Deny  and  Warn  may not be used together since this combination needlessly duplicates the validation failure both in the API response body and the HTTP warning headers   A  validation  that evaluates to false is always enforced according to these actions  Failures defined by the  failurePolicy  are enforced according to these actions only if the  failurePolicy  is set to  Fail   or not specified   otherwise the failures are ignored   See  Audit Annotations  validation failures   docs reference labels annotations taints audit annotations  validation policy admission k8s io validation failure  for more details about the validation failure audit annotation       Parameter resources  Parameter resources allow a policy configuration to be separate from its definition   A policy can define paramKind  which outlines GVK of the parameter resource   and then a policy binding ties a policy by name  via policyName  to a particular parameter resource via paramRef   If parameter configuration is needed  the following is an example of a ValidatingAdmissionPolicy with parameter configuration     The  spec paramKind  field of the ValidatingAdmissionPolicy specifies the kind of resources used to parameterize this policy  For this example  it is configured by ReplicaLimit custom 
resources   Note in this example how the CEL expression references the parameters via the CEL params variable  e g   params maxReplicas    spec matchConstraints  specifies what resources this policy is designed to validate  Note that the native types such like  ConfigMap  could also be used as parameter reference   The  spec validations  fields contain CEL expressions  If an expression evaluates to false  the validation check is enforced according to the  spec failurePolicy  field   The validating admission policy author is responsible for providing the ReplicaLimit parameter CRD   To configure an validating admission policy for use in a cluster  a binding and parameter resource are created  The following is an example of a ValidatingAdmissionPolicyBinding  that uses a   cluster wide   param   the same param will be used to validate every resource request that matches the binding     Notice this binding applies a parameter to the policy for all resources which are in the  test  environment   The parameter resource could be as following     This policy parameter resource limits deployments to a max of 3 replicas   An admission policy may have multiple bindings  To bind all other environments to have a maxReplicas limit of 100  create another ValidatingAdmissionPolicyBinding     Notice this binding applies a different parameter to resources which are not in the  test  environment   And have a parameter resource     For each admission request  the API server evaluates CEL expressions of each   policy  binding  param  combination that match the request  For a request to be admitted it must pass   all   evaluations   If multiple bindings match the request  the policy will be evaluated for each  and they must all pass evaluation for the policy to be considered passed    If multiple parameters match a single binding  the policy rules will be evaluated for each param  and they too must all pass for the binding to be considered passed  Bindings can have overlapping match 
criteria  The policy is evaluated for each  matching binding parameter combination  A policy may even be evaluated multiple times if multiple bindings match it  or a single binding that matches multiple  parameters   The params object representing a parameter resource will not be set if a parameter resource has not been bound  so for policies requiring a parameter resource  it can be useful to add a check to ensure one has been bound  A parameter resource will not be bound and  params  will be null if  paramKind  of the policy  or  paramRef  of the binding are not specified   For the use cases require parameter configuration  we recommend to add a param check in  spec validations 0  expression          expression   params    null    message   params missing but required to bind to this policy            Optional parameters  It can be convenient to be able to have optional parameters as part of a parameter resource  and only validate them if present  CEL provides  has     which checks if the key passed to it exists  CEL also implements Boolean short circuiting  If the first half of a logical OR evaluates to true  it won t evaluate the other half  since the result of the entire OR will be true regardless     Combining the two  we can provide a way to validate optional parameters     has params optionalNumber      params optionalNumber    5    params optionalNumber    10    Here  we first check that the optional parameter is present with   has params optionalNumber        If  optionalNumber  hasn t been defined  then the expression short circuits since     has params optionalNumber   will evaluate to true     If  optionalNumber  has been defined  then the latter half of the CEL expression will be   evaluated  and optionalNumber will be checked to ensure that it contains a value between 5 and   10 inclusive         Per namespace Parameters  As the author of a ValidatingAdmissionPolicy and its ValidatingAdmissionPolicyBinding   you can choose to specify cluster wide  or 
per namespace parameters   If you specify a  namespace  for the binding s  paramRef   the control plane only searches for parameters in that namespace   However  if  namespace  is not specified in the ValidatingAdmissionPolicyBinding  the API server can search for relevant parameters in the namespace that a request is against  For example  if you make a request to modify a ConfigMap in the  default  namespace and there is a relevant ValidatingAdmissionPolicyBinding with no  namespace  set  then the API server looks for a parameter object in  default   This design enables policy configuration that depends on the namespace of the resource being manipulated  for more fine tuned control        Parameter selector  In addition to specify a parameter in a binding by  name   you may choose instead to specify label selector  such that all resources of the policy s  paramKind   and the param s  namespace   if applicable  that match the label selector are selected for evaluation  See  for more information on how label selectors match resources   If multiple parameters are found to meet the condition  the policy s rules are evaluated for each parameter found and the results will be ANDed together   If  namespace  is provided  only objects of the  paramKind  in the provided namespace are eligible for selection  Otherwise  when  namespace  is empty and   paramKind  is namespace scoped  the  namespace  used in the request being  admitted will be used        Authorization checks   authorization check    We introduced the authorization check for parameter resources  User is expected to have  read  access to the resources referenced by  paramKind  in  ValidatingAdmissionPolicy  and  paramRef  in  ValidatingAdmissionPolicyBinding    Note that if a resource in  paramKind  fails resolving via the restmapper   read  access to all resources of groups is required       Failure Policy   failurePolicy  defines how mis configurations and CEL expressions evaluating to error from the admission 
policy are handled  Allowed values are  Ignore  or  Fail       Ignore  means that an error calling the ValidatingAdmissionPolicy is ignored and the API   request is allowed to continue     Fail  means that an error calling the ValidatingAdmissionPolicy causes the admission to fail   and the API request to be rejected   Note that the  failurePolicy  is defined inside  ValidatingAdmissionPolicy          Validation Expression   spec validations i  expression  represents the expression which will be evaluated by CEL  To learn more  see the  CEL language specification  https   github com google cel spec  CEL expressions have access to the contents of the Admission request response  organized into CEL variables as well as some other useful variables      object    The object from the incoming request  The value is null for DELETE requests     oldObject    The existing object  The value is null for CREATE requests     request    Attributes of the  admission request   docs reference config api apiserver admission v1  admission k8s io v1 AdmissionRequest      params    Parameter resource referred to by the policy binding being evaluated  The value is   null if  ParamKind  is not specified     namespaceObject    The namespace  as a Kubernetes resource  that the incoming object belongs to    The value is null if the incoming object is cluster scoped     authorizer    A CEL Authorizer  May be used to perform authorization checks for the principal    authenticated user  of the request  See    AuthzSelectors  https   pkg go dev k8s io apiserver pkg cel library AuthzSelectors  and    Authz  https   pkg go dev k8s io apiserver pkg cel library Authz  in the Kubernetes CEL library   documentation for more details     authorizer requestResource    A shortcut for an authorization check configured with the request   resource  group  resource   subresource   namespace  name     The  apiVersion    kind    metadata name  and  metadata generateName  are always accessible from the root of 
the object  No other metadata properties are accessible    Equality on arrays with list type of  set  or  map  ignores element order  i e   1  2      2  1   Concatenation on arrays with x kubernetes list type use the semantics of the list type      set    X   Y  performs a union where the array positions of all elements in  X  are preserved and   non intersecting elements in  Y  are appended  retaining their partial order     map    X   Y  performs a merge where the array positions of all keys in  X  are preserved but the values   are overwritten by values in  Y  when the key sets of  X  and  Y  intersect  Elements in  Y  with   non intersecting keys are appended  retaining their partial order        Validation expression examples    Expression                                                                                     Purpose                                                                                                                                                                                                                                                                     object minReplicas    object replicas    object replicas    object maxReplicas                Validate that the three fields defining replicas are ordered appropriately              Available  in object stateCounts                                                             Validate that an entry with the  Available  key exists in a map                         size object list1     0      size object list2     0                                         Validate that one of two lists is non empty  but not both                              code    MY KEY  in object map1    124   124  object  MY KEY   matches    a zA Z       code    Validate the value of a map for a specific key  if it is in the map                    object envars filter e  e name     MY ENV   all e  e value matches    a zA Z                  Validate the  value  field of a listMap entry where key field  name  is  
MY ENV        has object expired     object created   object ttl   object expired                           Validate that  expired  date is after a  create  date plus a  ttl  duration            object health startsWith  ok                                                                  Validate a  health  string field has the prefix  ok                                    object widgets exists w  w key     x     w foo   10                                           Validate that the  foo  property of a listMap item with a key  x  is less than 10      type object     string   object     100     object    1000                                    Validate an int or string field for both the int and string cases                  object metadata name startsWith object prefix                                                 Validate that an object s name has the prefix of another field value                   object set1 all e    e in object set2                                                         Validate that two listSets are disjoint                                                size object names     size object details     object names all n  n in object details         Validate the  details  map is keyed by the items in the  names  listSet                size object clusters filter c  c name    object primary      1                                Validate that the  primary  property has one and only one occurrence in the  clusters  listMap              Read  Supported evaluation on CEL  https   github com google cel spec blob v0 6 0 doc langdef md evaluation  for more information about CEL rules    spec validation i  reason  represents a machine readable description of why this validation failed  If this is the first validation in the list to fail  this reason  as well as the corresponding HTTP response code  are used in the HTTP response to the client  The currently supported reasons are   Unauthorized    Forbidden    Invalid    RequestEntityTooLarge   If not 
set   StatusReasonInvalid  is used in the response to the client       Matching requests   matchConditions   You can define  match conditions  for a  ValidatingAdmissionPolicy  if you need fine grained request filtering  These conditions are useful if you find that match rules   objectSelectors  and  namespaceSelectors  still doesn t provide the filtering you want  Match conditions are  CEL expressions   docs reference using api cel    All match conditions must evaluate to true for the resource to be evaluated   Here is an example illustrating a few different uses for match conditions     Match conditions have access to the same CEL variables as validation expressions   In the event of an error evaluating a match condition the policy is not evaluated  Whether to reject the request is determined as follows   1  If   any   match condition evaluated to  false   regardless of other errors   the API server skips the policy  2  Otherwise       for   failurePolicy  Fail    failure policy   reject the request  without evaluating the policy        for   failurePolicy  Ignore    failure policy   proceed with the request but skip the policy       Audit annotations   auditAnnotations  may be used to include audit annotations in the audit event of the API request   For example  here is an admission policy with an audit annotation     When an API request is validated with this admission policy  the resulting audit event will look like         the audit event recorded        kind    Event        apiVersion    audit k8s io v1        annotations              demo policy example com high replica count    Deployment spec replicas set to 128            other annotations                         other fields                In this example the annotation will only be included if the  spec replicas  of the Deployment is more than 50  otherwise the CEL expression evaluates to null and the annotation will not be included   Note that audit annotation keys are prefixed by the name of the  
ValidatingAdmissionWebhook  and a      If another admission controller  such as an admission webhook  uses the exact same audit annotation key  the  value of the first admission controller to include the audit annotation will be included in the audit event and all other values will be ignored       Message expression  To return a more friendly message when the policy rejects a request  we can use a CEL expression to composite a message with  spec validations i  messageExpression   Similar to the validation expression  a message expression has access to  object    oldObject    request    params   and  namespaceObject   Unlike validations  message expression must evaluate to a string   For example  to better inform the user of the reason of denial when the policy refers to a parameter  we can have the following validation     After creating a params object that limits the replicas to 3 and setting up the binding  when we try to create a deployment with 5 replicas  we will receive the following message         kubectl create deploy   image nginx nginx   replicas 5 error  failed to create deployment  deployments apps  nginx  is forbidden  ValidatingAdmissionPolicy  deploy replica policy example com  with binding  demo binding test example com  denied request  object spec replicas must be no greater than 3      This is more informative than a static message of  too many replicas    The message expression takes precedence over the static message defined in  spec validations i  message  if both are defined  However  if the message expression fails to evaluate  the static message will be used instead  Additionally  if the message expression evaluates to a multi line string  the evaluation result will be discarded and the static message will be used if present  Note that static message is validated against multi line strings       Type checking  When a policy definition is created or updated  the validation process parses the expressions it contains and reports any syntax 
errors  rejecting the definition if any errors are found   Afterward  the referred variables are checked for type errors  including missing fields and type confusion  against the matched types of  spec matchConstraints   The result of type checking can be retrieved from  status typeChecking   The presence of  status typeChecking  indicates the completion of type checking  and an empty  status typeChecking  means that no errors were detected   For example  given the following policy definition     The status will yield the following information      yaml status    typeChecking      expressionWarnings        fieldRef  spec validations 0  expression       warning             apps v1  Kind Deployment  ERROR   input  1 7  undefined field  replicas             object replicas   1                         If multiple resources are matched in  spec matchConstraints   all of matched resources will be checked against  For example  the following policy definition      will have multiple types and type checking result of each type in the warning message      yaml status    typeChecking      expressionWarnings        fieldRef  spec validations 0  expression       warning             apps v1  Kind Deployment  ERROR   input  1 7  undefined field  replicas             object replicas   1                            apps v1  Kind ReplicaSet  ERROR   input  1 7  undefined field  replicas             object replicas   1                         Type Checking has the following limitation     No wildcard matching  If  spec matchConstraints resourceRules  contains       in any of  apiGroups    apiVersions  or  resources     the types that       matches will not be checked    The number of matched types is limited to 10  This is to prevent a policy that manually specifying too many types    to consume excessive computing resources  In the order of ascending group  version  and then resource  11th combination and beyond are ignored    Type Checking does not affect the policy behavior in any 
way  Even if the type checking detects errors  the policy will continue   to evaluate  If errors do occur during evaluate  the failure policy will decide its outcome    Type Checking does not apply to CRDs  including matched CRD types and reference of paramKind  The support for CRDs will come in future release       Variable composition  If an expression grows too complicated  or part of the expression is reusable and computationally expensive to evaluate  you can extract some part of the expressions into variables  A variable is a named expression that can be referred later in  variables  in other expressions      yaml spec    variables        name  foo       expression    foo  in object spec metadata labels   object spec metadata labels  foo      default     validations        expression  variables foo     bar       A variable is lazily evaluated when it is first referred  Any error that occurs during the evaluation will be reported during the evaluation of the referring expression  Both the result and potential error are memorized and count only once towards the runtime cost   The order of variables are important because a variable can refer to other variables that are defined before it  This ordering prevents circular references   The following is a more complex example of enforcing that image repo names match the environment defined in its namespace     With the policy bound to the namespace  default   which is labeled  environment  prod   the following attempt to create a deployment would be rejected     shell kubectl create deploy   image dev example com nginx invalid     The error message is similar to this     console error  failed to create deployment  deployments apps  invalid  is forbidden  ValidatingAdmissionPolicy  image matches namespace environment policy example com  with binding  demo binding test example com  denied request  only prod images are allowed in namespace default    "}
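The validation example elided above under "Message expression" could look like the following sketch. This is an illustration, not the exact manifest from the original page: the `paramKind` group (`rules.example.com/v1`, `ReplicaLimit`) and the match rules are assumptions; the policy name, parameter field, and message echo the error output quoted in the text.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "deploy-replica-policy.example.com"
spec:
  # Hypothetical parameter kind providing maxReplicas
  paramKind:
    apiVersion: rules.example.com/v1
    kind: ReplicaLimit
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  - expression: "object.spec.replicas <= params.maxReplicas"
    # messageExpression composes the denial message from the bound parameter,
    # producing "object.spec.replicas must be no greater than 3" for maxReplicas: 3
    messageExpression: "'object.spec.replicas must be no greater than ' + string(params.maxReplicas)"
```

With a `ReplicaLimit` parameter of `maxReplicas: 3` bound to this policy, a deployment requesting 5 replicas would be denied with the message shown in the quoted `kubectl` output.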
{"questions":"kubernetes reference liggitt deads2k weight 36 erictune contenttype concept reviewers title Webhook Mode lavalamp","answers":"---\nreviewers:\n- erictune\n- lavalamp\n- deads2k\n- liggitt\ntitle: Webhook Mode\ncontent_type: concept\nweight: 36\n---\n\n<!-- overview -->\nA WebHook is an HTTP callback: an HTTP POST that occurs when something happens; a simple event-notification via HTTP POST. A web application implementing WebHooks will POST a message to a URL when certain things happen.\n\n\n<!-- body -->\nWhen specified, mode `Webhook` causes Kubernetes to query an outside REST\nservice when determining user privileges.\n\n## Configuration File Format\n\nMode `Webhook` requires a file for HTTP configuration, specified by the\n`--authorization-webhook-config-file=SOME_FILENAME` flag.\n\nThe configuration file uses the [kubeconfig](\/docs\/tasks\/access-application-cluster\/configure-access-multiple-clusters\/)\nfile format. Within the file, \"users\" refers to the API Server webhook and\n\"clusters\" refers to the remote service.\n\nA configuration example which uses HTTPS client auth:\n\n```yaml\n# Kubernetes API version\napiVersion: v1\n# kind of the API object\nkind: Config\n# clusters refers to the remote service.\nclusters:\n  - name: name-of-remote-authz-service\n    cluster:\n      # CA for verifying the remote service.\n      certificate-authority: \/path\/to\/ca.pem\n      # URL of remote service to query. Must use 'https'. May not include parameters.\n      server: https:\/\/authz.example.com\/authorize\n\n# users refers to the API Server's webhook configuration.\nusers:\n  - name: name-of-api-server\n    user:\n      client-certificate: \/path\/to\/cert.pem # cert for the webhook plugin to use\n      client-key: \/path\/to\/key.pem          # key matching the cert\n\n# kubeconfig files require a context. 
Provide one for the API Server.\ncurrent-context: webhook\ncontexts:\n- context:\n    cluster: name-of-remote-authz-service\n    user: name-of-api-server\n  name: webhook\n```\n\n## Request Payloads\n\nWhen faced with an authorization decision, the API Server POSTs a JSON-\nserialized `authorization.k8s.io\/v1beta1` `SubjectAccessReview` object describing the\naction. This object contains fields describing the user attempting to make the\nrequest, and either details about the resource being accessed or request\nattributes.\n\nNote that webhook API objects are subject to the same [versioning compatibility rules](\/docs\/concepts\/overview\/kubernetes-api\/)\nas other Kubernetes API objects. Implementers should be aware of looser\ncompatibility promises for beta objects and check the \"apiVersion\" field of the\nrequest to ensure correct deserialization. Additionally, the API Server must\nenable the `authorization.k8s.io\/v1beta1` API extensions group (`--runtime-config=authorization.k8s.io\/v1beta1=true`).\n\nAn example request body:\n\n```json\n{\n  \"apiVersion\": \"authorization.k8s.io\/v1beta1\",\n  \"kind\": \"SubjectAccessReview\",\n  \"spec\": {\n    \"resourceAttributes\": {\n      \"namespace\": \"kittensandponies\",\n      \"verb\": \"get\",\n      \"group\": \"unicorn.example.org\",\n      \"resource\": \"pods\"\n    },\n    \"user\": \"jane\",\n    \"group\": [\n      \"group1\",\n      \"group2\"\n    ]\n  }\n}\n```\n\nThe remote service is expected to fill the `status` field of\nthe request and respond to either allow or disallow access. The response body's\n`spec` field is ignored and may be omitted. 
A permissive response would return:\n\n```json\n{\n  \"apiVersion\": \"authorization.k8s.io\/v1beta1\",\n  \"kind\": \"SubjectAccessReview\",\n  \"status\": {\n    \"allowed\": true\n  }\n}\n```\n\nFor disallowing access there are two methods.\n\nThe first method is preferred in most cases, and indicates the authorization\nwebhook does not allow, or has \"no opinion\" about the request, but if other \nauthorizers are configured, they are given a chance to allow the request. \nIf there are no other authorizers, or none of them allow the request, the \nrequest is forbidden. The webhook would return:\n\n```json\n{\n  \"apiVersion\": \"authorization.k8s.io\/v1beta1\",\n  \"kind\": \"SubjectAccessReview\",\n  \"status\": {\n    \"allowed\": false,\n    \"reason\": \"user does not have read access to the namespace\"\n  }\n}\n```\n\nThe second method denies immediately, short-circuiting evaluation by other \nconfigured authorizers. This should only be used by webhooks that have \ndetailed knowledge of the full authorizer configuration of the cluster. \nThe webhook would return:\n\n```json\n{\n  \"apiVersion\": \"authorization.k8s.io\/v1beta1\",\n  \"kind\": \"SubjectAccessReview\",\n  \"status\": {\n    \"allowed\": false,\n    \"denied\": true,\n    \"reason\": \"user does not have read access to the namespace\"\n  }\n}\n```\n\nAccess to non-resource paths is sent as:\n\n```json\n{\n  \"apiVersion\": \"authorization.k8s.io\/v1beta1\",\n  \"kind\": \"SubjectAccessReview\",\n  \"spec\": {\n    \"nonResourceAttributes\": {\n      \"path\": \"\/debug\",\n      \"verb\": \"get\"\n    },\n    \"user\": \"jane\",\n    \"group\": [\n      \"group1\",\n      \"group2\"\n    ]\n  }\n}\n```\n\n\n\nWith the `AuthorizeWithSelectors` feature enabled, field and label selectors in the request\nare passed to the authorization webhook. 
The webhook can make authorization decisions\ninformed by the scoped field and label selectors, if it wishes.\n\nThe [SubjectAccessReview API documentation](\/docs\/reference\/kubernetes-api\/authorization-resources\/subject-access-review-v1\/)\ngives guidelines for how these fields should be interpreted and handled by authorization webhooks,\nspecifically using the parsed requirements rather than the raw selector strings,\nand how to handle unrecognized operators safely.\n\n```json\n{\n  \"apiVersion\": \"authorization.k8s.io\/v1beta1\",\n  \"kind\": \"SubjectAccessReview\",\n  \"spec\": {\n    \"resourceAttributes\": {\n      \"verb\": \"list\",\n      \"group\": \"\",\n      \"resource\": \"pods\",\n      \"fieldSelector\": {\n        \"requirements\": [\n          {\"key\":\"spec.nodeName\", \"operator\":\"In\", \"values\":[\"mynode\"]}\n        ]\n      },\n      \"labelSelector\": {\n        \"requirements\": [\n          {\"key\":\"example.com\/mykey\", \"operator\":\"In\", \"values\":[\"myvalue\"]}\n        ]\n      }\n    },\n    \"user\": \"jane\",\n    \"group\": [\n      \"group1\",\n      \"group2\"\n    ]\n  }\n}\n```\n\nNon-resource paths include: `\/api`, `\/apis`, `\/metrics`,\n`\/logs`, `\/debug`, `\/healthz`, `\/livez`, `\/openapi\/v2`, `\/readyz`, and\n`\/version`. Clients require access to `\/api`, `\/api\/*`, `\/apis`, `\/apis\/*`,\nand `\/version` to discover what resources and versions are present on the server.\nAccess to other non-resource paths can be disallowed without restricting access\nto the REST API.\n\nFor further information, refer to the\n[SubjectAccessReview API documentation](\/docs\/reference\/kubernetes-api\/authorization-resources\/subject-access-review-v1\/)\nand\n[webhook.go implementation](https:\/\/github.com\/kubernetes\/kubernetes\/blob\/master\/staging\/src\/k8s.io\/apiserver\/plugin\/pkg\/authorizer\/webhook\/webhook.go).\n","site":"kubernetes reference"}
{"questions":"kubernetes reference janetkuo derekwaynecarr davidopp erictune title Admission Control in Kubernetes Admission Control reviewers thockin lavalamp","answers":"---\nreviewers:\n- lavalamp\n- davidopp\n- derekwaynecarr\n- erictune\n- janetkuo\n- thockin\ntitle: Admission Control in Kubernetes\nlinkTitle: Admission Control\ncontent_type: concept\nweight: 40\n---\n\n<!-- overview -->\nThis page provides an overview of _admission controllers_.\n\nAn admission controller is a piece of code that intercepts requests to the\nKubernetes API server prior to persistence of the resource, but after the request\nis authenticated and authorized.\n\nSeveral important features of Kubernetes require an admission controller to be enabled in order\nto properly support the feature.  As a result, a Kubernetes API server that is not properly\nconfigured with the right set of admission controllers is an incomplete server that will not\nsupport all the features you expect.\n\n<!-- body -->\n## What are they?\n\nAdmission controllers are code within the Kubernetes API server that check the\ndata arriving in a request to modify a resource.\n\nAdmission controllers apply to requests that create, delete, or modify objects.\nAdmission controllers can also block custom verbs, such as a request to connect to a\npod via an API server proxy. Admission controllers do _not_ (and cannot) block requests\nto read (**get**, **watch** or **list**) objects, because reads bypass the admission\ncontrol layer.\n\nAdmission control mechanisms may be _validating_, _mutating_, or both. 
Mutating\ncontrollers may modify the data for the resource being modified; validating controllers may not.\n\nThe admission controllers in Kubernetes  consist of the\n[list](#what-does-each-admission-controller-do) below, are compiled into the\n`kube-apiserver` binary, and may only be configured by the cluster\nadministrator.\n\n### Admission control extension points\n\nWithin the full [list](#what-does-each-admission-controller-do), there are three\nspecial controllers:\n[MutatingAdmissionWebhook](#mutatingadmissionwebhook),\n[ValidatingAdmissionWebhook](#validatingadmissionwebhook), and\n[ValidatingAdmissionPolicy](#validatingadmissionpolicy).\nThe two webhook controllers execute the mutating and validating (respectively)\n[admission control webhooks](\/docs\/reference\/access-authn-authz\/extensible-admission-controllers\/#admission-webhooks)\nwhich are configured in the API. ValidatingAdmissionPolicy provides a way to embed\ndeclarative validation code within the API, without relying on any external HTTP\ncallouts.\n\nYou can use these three admission controllers to customize cluster behavior at\nadmission time.\n\n### Admission control phases\n\nThe admission control process proceeds in two phases. In the first phase,\nmutating admission controllers are run. In the second phase, validating\nadmission controllers are run. Note again that some of the controllers are\nboth.\n\nIf any of the controllers in either phase reject the request, the entire\nrequest is rejected immediately and an error is returned to the end-user.\n\nFinally, in addition to sometimes mutating the object in question, admission\ncontrollers may sometimes have side effects, that is, mutate related\nresources as part of request processing. Incrementing quota usage is the\ncanonical example of why this is necessary. 
Any such side-effect needs a\ncorresponding reclamation or reconciliation process, as a given admission\ncontroller does not know for sure that a given request will pass all of the\nother admission controllers.\n\n\n\n## How do I turn on an admission controller?\n\nThe Kubernetes API server flag `enable-admission-plugins` takes a comma-delimited list of admission control plugins to invoke prior to modifying objects in the cluster.\nFor example, the following command line enables the `NamespaceLifecycle` and the `LimitRanger`\nadmission control plugins:\n\n```shell\nkube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...\n```\n\n\nDepending on the way your Kubernetes cluster is deployed and how the API server is\nstarted, you may need to apply the settings in different ways. For example, you may\nhave to modify the systemd unit file if the API server is deployed as a systemd\nservice, or you may modify the manifest file for the API server if Kubernetes is deployed\nin a self-hosted way.\n\n\n## How do I turn off an admission controller?\n\nThe Kubernetes API server flag `disable-admission-plugins` takes a comma-delimited list of admission control plugins to be disabled, even if they are in the list of plugins enabled by default.\n\n```shell\nkube-apiserver --disable-admission-plugins=PodNodeSelector,AlwaysDeny ...\n```\n\n## Which plugins are enabled by default?\n\nTo see which admission plugins are enabled:\n\n```shell\nkube-apiserver -h | grep enable-admission-plugins\n```\n\nIn Kubernetes, the default ones are:\n\n```shell\nCertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle, PersistentVolumeClaimResize, PodSecurity, Priority, ResourceQuota, RuntimeClass, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook\n```\n\n## What does each 
admission controller do?\n\n### AlwaysAdmit {#alwaysadmit}\n\n\n\n**Type**: Validating.\n\nThis admission controller allows all pods into the cluster. It is **deprecated** because\nits behavior is the same as if there were no admission controller at all.\n\n### AlwaysDeny {#alwaysdeny}\n\n\n\n**Type**: Validating.\n\nRejects all requests. AlwaysDeny is **deprecated** as it has no real meaning.\n\n### AlwaysPullImages {#alwayspullimages}\n\n**Type**: Mutating and Validating.\n\nThis admission controller modifies every new Pod to force the image pull policy to `Always`. This is useful in a\nmultitenant cluster so that users can be assured that their private images can only be used by those\nwho have the credentials to pull them. Without this admission controller, once an image has been pulled to a\nnode, any pod from any user can use it by knowing the image's name (assuming the Pod is\nscheduled onto the right node), without any authorization check against the image. When this admission controller\nis enabled, images are always pulled prior to starting containers, which means valid credentials are\nrequired.\n\n### CertificateApproval {#certificateapproval}\n\n**Type**: Validating.\n\nThis admission controller observes requests to approve CertificateSigningRequest resources and performs additional\nauthorization checks to ensure the approving user has permission to **approve** certificate requests with the\n`spec.signerName` requested on the CertificateSigningRequest resource.\n\nSee [Certificate Signing Requests](\/docs\/reference\/access-authn-authz\/certificate-signing-requests\/) for more\ninformation on the permissions required to perform different actions on CertificateSigningRequest resources.\n\n### CertificateSigning {#certificatesigning}\n\n**Type**: Validating.\n\nThis admission controller observes updates to the `status.certificate` field of CertificateSigningRequest resources\nand performs additional authorization checks to ensure the signing user has 
permission to **sign** certificate\nrequests with the `spec.signerName` requested on the CertificateSigningRequest resource.\n\nSee [Certificate Signing Requests](\/docs\/reference\/access-authn-authz\/certificate-signing-requests\/) for more\ninformation on the permissions required to perform different actions on CertificateSigningRequest resources.\n\n### CertificateSubjectRestriction {#certificatesubjectrestriction}\n\n**Type**: Validating.\n\nThis admission controller observes creation of CertificateSigningRequest resources that have a `spec.signerName`\nof `kubernetes.io\/kube-apiserver-client`. It rejects any request that specifies a 'group' (or 'organization attribute')\nof `system:masters`.\n\n### DefaultIngressClass {#defaultingressclass}\n\n**Type**: Mutating.\n\nThis admission controller observes creation of `Ingress` objects that do not request any specific\ningress class and automatically adds a default ingress class to them.  This way, users that do not\nrequest any special ingress class do not need to care about them at all and they will get the\ndefault one.\n\nThis admission controller does not do anything when no default ingress class is configured. When more than one ingress\nclass is marked as default, it rejects any creation of `Ingress` with an error and an administrator\nmust revisit their `IngressClass` objects and mark only one as default (with the annotation\n\"ingressclass.kubernetes.io\/is-default-class\").  
This admission controller ignores any `Ingress`\nupdates; it acts only on creation.\n\nSee the [Ingress](\/docs\/concepts\/services-networking\/ingress\/) documentation for more about ingress\nclasses and how to mark one as default.\n\n### DefaultStorageClass {#defaultstorageclass}\n\n**Type**: Mutating.\n\nThis admission controller observes creation of `PersistentVolumeClaim` objects that do not request any specific storage class\nand automatically adds a default storage class to them.\nThis way, users that do not request any special storage class do not need to care about them at all and they\nwill get the default one.\n\nThis admission controller does not do anything when no default storage class is configured. When more than one storage\nclass is marked as default, it rejects any creation of `PersistentVolumeClaim` with an error and an administrator\nmust revisit their `StorageClass` objects and mark only one as default.\nThis admission controller ignores any `PersistentVolumeClaim` updates; it acts only on creation.\n\nSee [persistent volume](\/docs\/concepts\/storage\/persistent-volumes\/) documentation about persistent volume claims and\nstorage classes and how to mark a storage class as default.\n\n### DefaultTolerationSeconds {#defaulttolerationseconds}\n\n**Type**: Mutating.\n\nThis admission controller sets the default forgiveness toleration for pods to tolerate\nthe taints `notready:NoExecute` and `unreachable:NoExecute` based on the k8s-apiserver input parameters\n`default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` if the pods don't already\nhave toleration for taints `node.kubernetes.io\/not-ready:NoExecute` or\n`node.kubernetes.io\/unreachable:NoExecute`.\nThe default value for `default-not-ready-toleration-seconds` and `default-unreachable-toleration-seconds` is 5 minutes.\n\n### DenyServiceExternalIPs\n\n**Type**: Validating.\n\nThis admission controller rejects all net-new usage of the `Service` field `externalIPs`. 
 This\nfeature is very powerful (allows network traffic interception) and not well\ncontrolled by policy.  When enabled, users of the cluster may not create new\nServices which use `externalIPs` and may not add new values to `externalIPs` on\nexisting `Service` objects.  Existing uses of `externalIPs` are not affected,\nand users may remove values from `externalIPs` on existing `Service` objects.\n\nMost users do not need this feature at all, and cluster admins should consider disabling it.\nClusters that do need to use this feature should consider using some custom policy to manage usage\nof it.\n\nThis admission controller is disabled by default.\n\n### EventRateLimit {#eventratelimit}\n\n\n\n**Type**: Validating.\n\nThis admission controller mitigates the problem where the API server gets flooded by\nrequests to store new Events. The cluster admin can specify event rate limits by:\n\n* Enabling the `EventRateLimit` admission controller;\n* Referencing an `EventRateLimit` configuration file from the file provided to the API\n  server's command line flag `--admission-control-config-file`:\n\n```yaml\napiVersion: apiserver.config.k8s.io\/v1\nkind: AdmissionConfiguration\nplugins:\n  - name: EventRateLimit\n    path: eventconfig.yaml\n...\n```\n\nThere are four types of limits that can be specified in the configuration:\n\n * `Server`: All Event requests (creation or modifications) received by the API server share a single bucket.\n * `Namespace`: Each namespace has a dedicated bucket.\n * `User`: Each user is allocated a bucket.\n * `SourceAndObject`: A bucket is assigned by each combination of source and\n   involved object of the event.\n\nBelow is a sample `eventconfig.yaml` for such a configuration:\n\n```yaml\napiVersion: eventratelimit.admission.k8s.io\/v1alpha1\nkind: Configuration\nlimits:\n  - type: Namespace\n    qps: 50\n    burst: 100\n    cacheSize: 2000\n  - type: User\n    qps: 10\n    burst: 50\n```\n\nSee the [EventRateLimit Config API 
(v1alpha1)](\/docs\/reference\/config-api\/apiserver-eventratelimit.v1alpha1\/)\nfor more details.\n\nThis admission controller is disabled by default.\n\n### ExtendedResourceToleration {#extendedresourcetoleration}\n\n**Type**: Mutating.\n\nThis plug-in facilitates creation of dedicated nodes with extended resources.\nIf operators want to create dedicated nodes with extended resources (like GPUs, FPGAs etc.), they are expected to\n[taint the node](\/docs\/concepts\/scheduling-eviction\/taint-and-toleration\/#example-use-cases) with the extended resource\nname as the key. This admission controller, if enabled, automatically\nadds tolerations for such taints to pods requesting extended resources, so users don't have to manually\nadd these tolerations.\n\nThis admission controller is disabled by default.\n\n### ImagePolicyWebhook {#imagepolicywebhook}\n\n**Type**: Validating.\n\nThe ImagePolicyWebhook admission controller allows a backend webhook to make admission decisions.\n\nThis admission controller is disabled by default.\n\n#### Configuration file format {#imagereview-config-file-format}\n\nImagePolicyWebhook uses a configuration file to set options for the behavior of the backend.\nThis file may be json or yaml and has the following format:\n\n```yaml\nimagePolicy:\n  kubeConfigFile: \/path\/to\/kubeconfig\/for\/backend\n  # time in s to cache approval\n  allowTTL: 50\n  # time in s to cache denial\n  denyTTL: 50\n  # time in ms to wait between retries\n  retryBackoff: 500\n  # determines behavior if the webhook backend fails\n  defaultAllow: true\n```\n\nReference the ImagePolicyWebhook configuration file from the file provided to the API server's command line flag `--admission-control-config-file`:\n\n```yaml\napiVersion: apiserver.config.k8s.io\/v1\nkind: AdmissionConfiguration\nplugins:\n  - name: ImagePolicyWebhook\n    path: imagepolicyconfig.yaml\n...\n```\n\nAlternatively, you can embed the configuration directly in the file:\n\n```yaml\napiVersion: 
apiserver.config.k8s.io\/v1\nkind: AdmissionConfiguration\nplugins:\n  - name: ImagePolicyWebhook\n    configuration:\n      imagePolicy:\n        kubeConfigFile: <path-to-kubeconfig-file>\n        allowTTL: 50\n        denyTTL: 50\n        retryBackoff: 500\n        defaultAllow: true\n```\n\nThe ImagePolicyWebhook config file must reference a\n[kubeconfig](\/docs\/tasks\/access-application-cluster\/configure-access-multiple-clusters\/)\nformatted file which sets up the connection to the backend.\nIt is required that the backend communicate over TLS.\n\nThe kubeconfig file's `cluster` field must point to the remote service, and the `user` field\nmust contain the returned authorizer.\n\n```yaml\n# clusters refers to the remote service.\nclusters:\n  - name: name-of-remote-imagepolicy-service\n    cluster:\n      certificate-authority: \/path\/to\/ca.pem    # CA for verifying the remote service.\n      server: https:\/\/images.example.com\/policy # URL of remote service to query. Must use 'https'.\n\n# users refers to the API server's webhook configuration.\nusers:\n  - name: name-of-api-server\n    user:\n      client-certificate: \/path\/to\/cert.pem # cert for the webhook admission controller to use\n      client-key: \/path\/to\/key.pem          # key matching the cert\n```\n\nFor additional HTTP configuration, refer to the\n[kubeconfig](\/docs\/tasks\/access-application-cluster\/configure-access-multiple-clusters\/) documentation.\n\n#### Request payloads\n\nWhen faced with an admission decision, the API Server POSTs a JSON serialized\n`imagepolicy.k8s.io\/v1alpha1` `ImageReview` object describing the action.\nThis object contains fields describing the containers being admitted, as well as\nany pod annotations that match `*.image-policy.k8s.io\/*`.\n\n\nThe webhook API objects are subject to the same versioning compatibility rules\nas other Kubernetes API objects. 
Implementers should be aware of looser compatibility\npromises for alpha objects and check the `apiVersion` field of the request to\nensure correct deserialization.\nAdditionally, the API Server must enable the `imagepolicy.k8s.io\/v1alpha1` API extensions\ngroup (`--runtime-config=imagepolicy.k8s.io\/v1alpha1=true`).\n\n\nAn example request body:\n\n```json\n{\n  \"apiVersion\": \"imagepolicy.k8s.io\/v1alpha1\",\n  \"kind\": \"ImageReview\",\n  \"spec\": {\n    \"containers\": [\n      {\n        \"image\": \"myrepo\/myimage:v1\"\n      },\n      {\n        \"image\": \"myrepo\/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed\"\n      }\n    ],\n    \"annotations\": {\n      \"mycluster.image-policy.k8s.io\/ticket-1234\": \"break-glass\"\n    },\n    \"namespace\": \"mynamespace\"\n  }\n}\n```\n\nThe remote service is expected to fill the `status` field of the request and\nrespond to either allow or disallow access. The response body's `spec` field is ignored, and\nmay be omitted. 
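As an illustrative sketch of how a backend might compute the `status` field (the trusted-repository rule below is a hypothetical policy, not part of the ImagePolicyWebhook API):

```python
# Hypothetical policy: admit only images pulled from a trusted repository.
TRUSTED_PREFIX = "myrepo/"

def review_images(spec):
    images = [c.get("image", "") for c in spec.get("containers", [])]
    if images and all(img.startswith(TRUSTED_PREFIX) for img in images):
        return {"allowed": True}
    return {"allowed": False, "reason": "image not from a trusted repository"}

# The backend copies its decision into the ImageReview status field:
review = {
    "apiVersion": "imagepolicy.k8s.io/v1alpha1",
    "kind": "ImageReview",
    "spec": {
        "containers": [{"image": "myrepo/myimage:v1"}],
        "namespace": "mynamespace",
    },
}
review["status"] = review_images(review["spec"])
print(review["status"])  # {'allowed': True}
```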
A permissive response would return:\n\n```json\n{\n  \"apiVersion\": \"imagepolicy.k8s.io\/v1alpha1\",\n  \"kind\": \"ImageReview\",\n  \"status\": {\n    \"allowed\": true\n  }\n}\n```\n\nTo disallow access, the service would return:\n\n```json\n{\n  \"apiVersion\": \"imagepolicy.k8s.io\/v1alpha1\",\n  \"kind\": \"ImageReview\",\n  \"status\": {\n    \"allowed\": false,\n    \"reason\": \"image currently blacklisted\"\n  }\n}\n```\n\nFor further documentation refer to the\n[`imagepolicy.v1alpha1` API](\/docs\/reference\/config-api\/imagepolicy.v1alpha1\/).\n\n#### Extending with Annotations\n\nAll annotations on a Pod that match `*.image-policy.k8s.io\/*` are sent to the webhook.\nSending annotations allows users who are aware of the image policy backend to\nsend extra information to it, and for different backends implementations to\naccept different information.\n\nExamples of information you might put here are:\n\n* request to \"break glass\" to override a policy, in case of emergency.\n* a ticket number from a ticket system that documents the break-glass request\n* provide a hint to the policy server as to the imageID of the image being provided, to save it a lookup\n\nIn any case, the annotations are provided by the user and are not validated by Kubernetes in any way.\n\n### LimitPodHardAntiAffinityTopology {#limitpodhardantiaffinitytopology}\n\n**Type**: Validating.\n\nThis admission controller denies any pod that defines `AntiAffinity` topology key other than\n`kubernetes.io\/hostname` in `requiredDuringSchedulingRequiredDuringExecution`.\n\nThis admission controller is disabled by default.\n\n### LimitRanger {#limitranger}\n\n**Type**: Mutating and Validating.\n\nThis admission controller will observe the incoming request and ensure that it does not violate\nany of the constraints enumerated in the `LimitRange` object in a `Namespace`.  
If you are using\n`LimitRange` objects in your Kubernetes deployment, you MUST use this admission controller to\nenforce those constraints. LimitRanger can also be used to apply default resource requests to Pods\nthat don't specify any; currently, the default LimitRanger applies a 0.1 CPU requirement to all\nPods in the `default` namespace.\n\nSee the [LimitRange API reference](\/docs\/reference\/kubernetes-api\/policy-resources\/limit-range-v1\/)\nand the [example of LimitRange](\/docs\/tasks\/administer-cluster\/manage-resources\/memory-default-namespace\/)\nfor more details.\n\n### MutatingAdmissionWebhook {#mutatingadmissionwebhook}\n\n**Type**: Mutating.\n\nThis admission controller calls any mutating webhooks which match the request. Matching\nwebhooks are called in serial; each one may modify the object if it desires.\n\nThis admission controller (as implied by the name) only runs in the mutating phase.\n\nIf a webhook called by this has side effects (for example, decrementing quota) it\n*must* have a reconciliation system, as it is not guaranteed that subsequent\nwebhooks or validating admission controllers will permit the request to finish.\n\nIf you disable the MutatingAdmissionWebhook, you must also disable the\n`MutatingWebhookConfiguration` object in the `admissionregistration.k8s.io\/v1`\ngroup\/version via the `--runtime-config` flag; both are on by default.\n\n#### Use caution when authoring and installing mutating webhooks\n\n* Users may be confused when the objects they try to create are different from\n  what they get back.\n* Built-in control loops may break when the objects they try to create are\n  different when read back.\n  * Setting originally unset fields is less likely to cause problems than\n    overwriting fields set in the original request. Avoid doing the latter.\n* Future changes to control loops for built-in resources or third-party resources\n  may break webhooks that work well today. 
Even when the webhook installation API\n  is finalized, not all possible webhook behaviors will be guaranteed to be supported\n  indefinitely.\n\n### NamespaceAutoProvision {#namespaceautoprovision}\n\n**Type**: Mutating.\n\nThis admission controller examines all incoming requests on namespaced resources and checks\nif the referenced namespace does exist.\nIt creates a namespace if it cannot be found.\nThis admission controller is useful in deployments that do not want to restrict creation of\na namespace prior to its usage.\n\n### NamespaceExists {#namespaceexists}\n\n**Type**: Validating.\n\nThis admission controller checks all requests on namespaced resources other than `Namespace` itself.\nIf the namespace referenced from a request doesn't exist, the request is rejected.\n\n### NamespaceLifecycle {#namespacelifecycle}\n\n**Type**: Validating.\n\nThis admission controller enforces that a `Namespace` that is undergoing termination cannot have\nnew objects created in it, and ensures that requests in a non-existent `Namespace` are rejected.\nThis admission controller also prevents deletion of three system reserved namespaces `default`,\n`kube-system`, `kube-public`.\n\nA `Namespace` deletion kicks off a sequence of operations that remove all objects (pods, services,\netc.) in that namespace.  In order to enforce integrity of that process, we strongly recommend\nrunning this admission controller.\n\n### NodeRestriction {#noderestriction}\n\n**Type**: Validating.\n\nThis admission controller limits the `Node` and `Pod` objects a kubelet can modify. 
In order to be limited by this admission controller,\nkubelets must use credentials in the `system:nodes` group, with a username in the form `system:node:<nodeName>`.\nSuch kubelets will only be allowed to modify their own `Node` API object, and only modify `Pod` API objects that are bound to their node.\nkubelets are not allowed to update or remove taints from their `Node` API object.\n\nThe `NodeRestriction` admission plugin prevents kubelets from deleting their `Node` API object,\nand enforces kubelet modification of labels under the `kubernetes.io\/` or `k8s.io\/` prefixes as follows:\n\n* **Prevents** kubelets from adding\/removing\/updating labels with a `node-restriction.kubernetes.io\/` prefix.\n  This label prefix is reserved for administrators to label their `Node` objects for workload isolation purposes,\n  and kubelets will not be allowed to modify labels with that prefix.\n* **Allows** kubelets to add\/remove\/update these labels and label prefixes:\n  * `kubernetes.io\/hostname`\n  * `kubernetes.io\/arch`\n  * `kubernetes.io\/os`\n  * `beta.kubernetes.io\/instance-type`\n  * `node.kubernetes.io\/instance-type`\n  * `failure-domain.beta.kubernetes.io\/region` (deprecated)\n  * `failure-domain.beta.kubernetes.io\/zone` (deprecated)\n  * `topology.kubernetes.io\/region`\n  * `topology.kubernetes.io\/zone`\n  * `kubelet.kubernetes.io\/`-prefixed labels\n  * `node.kubernetes.io\/`-prefixed labels\n\nUse of any other labels under the `kubernetes.io` or `k8s.io` prefixes by kubelets is reserved,\nand may be disallowed or allowed by the `NodeRestriction` admission plugin in the future.\n\nFuture versions may add additional restrictions to ensure kubelets have the minimal set of\npermissions required to operate correctly.\n\n### OwnerReferencesPermissionEnforcement {#ownerreferencespermissionenforcement}\n\n**Type**: Validating.\n\nThis admission controller protects the access to the `metadata.ownerReferences` of an object\nso that only users with **delete** 
permission to the object can change it.\nThis admission controller also protects the access to `metadata.ownerReferences[x].blockOwnerDeletion`\nof an object, so that only users with **update** permission to the `finalizers`\nsubresource of the referenced *owner* can change it.\n\n### PersistentVolumeClaimResize {#persistentvolumeclaimresize}\n\n\n\n**Type**: Validating.\n\nThis admission controller implements additional validations for checking incoming\n`PersistentVolumeClaim` resize requests.\n\nEnabling the `PersistentVolumeClaimResize` admission controller is recommended.\nThis admission controller prevents resizing of all claims by default unless a claim's `StorageClass`\n explicitly enables resizing by setting `allowVolumeExpansion` to `true`.\n\nFor example: all `PersistentVolumeClaim`s created from the following `StorageClass` support volume expansion:\n\n```yaml\napiVersion: storage.k8s.io\/v1\nkind: StorageClass\nmetadata:\n  name: gluster-vol-default\nprovisioner: kubernetes.io\/glusterfs\nparameters:\n  resturl: \"http:\/\/192.168.10.100:8080\"\n  restuser: \"\"\n  secretNamespace: \"\"\n  secretName: \"\"\nallowVolumeExpansion: true\n```\n\nFor more information about persistent volume claims, see [PersistentVolumeClaims](\/docs\/concepts\/storage\/persistent-volumes\/#persistentvolumeclaims).\n\n### PodNodeSelector {#podnodeselector}\n\n\n\n**Type**: Validating.\n\nThis admission controller defaults and limits what node selectors may be used within a namespace\nby reading a namespace annotation and a global configuration.\n\nThis admission controller is disabled by default.\n\n#### Configuration file format\n\n`PodNodeSelector` uses a configuration file to set options for the behavior of the backend.\nNote that the configuration file format will move to a versioned file in a future release.\nThis file may be json or yaml and has the following format:\n\n```yaml\npodNodeSelectorPluginConfig:\n  clusterDefaultNodeSelector: name-of-node-selector\n  
namespace1: name-of-node-selector\n  namespace2: name-of-node-selector\n```\n\nReference the `PodNodeSelector` configuration file from the file provided to the API server's\ncommand line flag `--admission-control-config-file`:\n\n```yaml\napiVersion: apiserver.config.k8s.io\/v1\nkind: AdmissionConfiguration\nplugins:\n- name: PodNodeSelector\n  path: podnodeselector.yaml\n...\n```\n\n#### Configuration Annotation Format\n\n`PodNodeSelector` uses the annotation key `scheduler.alpha.kubernetes.io\/node-selector` to assign\nnode selectors to namespaces.\n\n```yaml\napiVersion: v1\nkind: Namespace\nmetadata:\n  annotations:\n    scheduler.alpha.kubernetes.io\/node-selector: name-of-node-selector\n  name: namespace3\n```\n\n#### Internal Behavior\n\nThis admission controller has the following behavior:\n\n1. If the `Namespace` has an annotation with a key `scheduler.alpha.kubernetes.io\/node-selector`,\n   use its value as the node selector.\n2. If the namespace lacks such an annotation, use the `clusterDefaultNodeSelector` defined in the\n   `PodNodeSelector` plugin configuration file as the node selector.\n3. Evaluate the pod's node selector against the namespace node selector for conflicts. Conflicts\n   result in rejection.\n4. Evaluate the pod's node selector against the namespace-specific allowed selector defined the\n   plugin configuration file. Conflicts result in rejection.\n\n\nPodNodeSelector allows forcing pods to run on specifically labeled nodes. 
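The evaluation steps above can be sketched as follows. This is a minimal illustration only, not the plugin's actual Go implementation: it models a node selector as a plain dict of label key/value pairs, and treats two selectors as conflicting when they assign different values to the same key. The function and parameter names are assumptions for the sketch.

```python
def conflicts(a, b):
    """Two node selectors conflict if they set the same label key to different values."""
    return any(key in b and b[key] != value for key, value in a.items())

def admit(pod_selector, ns_annotation, cluster_default, ns_allowed):
    """Sketch of the PodNodeSelector decision for a pod create request."""
    # Steps 1-2: pick the namespace node selector, falling back to the
    # clusterDefaultNodeSelector from the plugin configuration file.
    ns_selector = ns_annotation if ns_annotation is not None else cluster_default
    # Step 3: a conflict with the namespace node selector rejects the pod.
    if conflicts(pod_selector, ns_selector):
        return False
    # Step 4: the merged selector must not conflict with the namespace-specific
    # allowed selector from the plugin configuration file.
    merged = dict(ns_selector, **pod_selector)
    return not conflicts(merged, ns_allowed)
```

For instance, `admit({"env": "dev"}, {"env": "prod"}, {}, {})` is rejected at step 3 because the pod asks for `env=dev` while the namespace mandates `env=prod`.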
Also see the PodTolerationRestriction
admission plugin, which allows preventing pods from running on specifically tainted nodes.

### PodSecurity {#podsecurity}

**Type**: Validating.

The PodSecurity admission controller checks new Pods before they are admitted and determines
whether they should be admitted based on the requested security context and the restrictions on permitted
[Pod Security Standards](/docs/concepts/security/pod-security-standards/)
for the namespace that the Pod would be in.

See the [Pod Security Admission](/docs/concepts/security/pod-security-admission/)
documentation for more information.

PodSecurity replaced an older admission controller named PodSecurityPolicy.

### PodTolerationRestriction {#podtolerationrestriction}

**Type**: Mutating and Validating.

The PodTolerationRestriction admission controller verifies any conflict between the tolerations of a
pod and the tolerations of its namespace.
It rejects the pod request if there is a conflict.
It then merges the tolerations annotated on the namespace into the tolerations of the pod.
The resulting tolerations are checked against a list of allowed tolerations annotated to the namespace.
If the check succeeds, the pod request is admitted; otherwise, it is rejected.

If the namespace of the pod does not have any associated default tolerations or allowed
tolerations annotated, the cluster-level default tolerations or cluster-level list of allowed tolerations are used
instead, if they are specified.

Tolerations for a namespace are assigned via the `scheduler.alpha.kubernetes.io/defaultTolerations` annotation key.
The list of allowed tolerations can be added via the `scheduler.alpha.kubernetes.io/tolerationsWhitelist` annotation key.

Example for namespace annotations:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apps-that-need-nodes-exclusively
  annotations:
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Exists", "effect": "NoSchedule", "key": "dedicated-node"}]'
    scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[{"operator": "Exists", "effect": "NoSchedule", "key": "dedicated-node"}]'
```

This admission controller is disabled by default.

### Priority {#priority}

**Type**: Mutating and Validating.

The Priority admission controller uses the `priorityClassName` field to populate the integer
value of the pod's priority.
If the priority class is not found, the Pod is rejected.

### ResourceQuota {#resourcequota}

**Type**: Validating.

This admission controller observes the incoming request and ensures that it does not violate
any of the constraints enumerated in the `ResourceQuota` object in a `Namespace`. If you are
using `ResourceQuota` objects in your Kubernetes deployment, you MUST use this admission
controller to enforce quota constraints.

See the [ResourceQuota API reference](/docs/reference/kubernetes-api/policy-resources/resource-quota-v1/)
and the [example of Resource Quota](/docs/concepts/policy/resource-quotas/) for more details.

### RuntimeClass {#runtimeclass}

**Type**: Mutating and Validating.

If you define a RuntimeClass with [Pod overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
configured, this admission controller checks incoming Pods.
When enabled, this admission controller rejects any Pod create requests
that have the overhead already set.
For Pods that have a RuntimeClass configured and selected in their `.spec`,
this admission controller sets `.spec.overhead` in the Pod based on the value
defined in the corresponding RuntimeClass.

See also [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/)
for more information.

### ServiceAccount {#serviceaccount}

**Type**: Mutating and Validating.

This admission controller implements automation for
[serviceAccounts](/docs/tasks/configure-pod-container/configure-service-account/).
The Kubernetes project strongly recommends enabling this admission controller.
You should enable this admission controller if you intend to make any use of Kubernetes
`ServiceAccount` objects.

Regarding the annotation `kubernetes.io/enforce-mountable-secrets`: while the annotation's name suggests it only concerns the mounting of Secrets,
its enforcement also extends to other ways Secrets are used in the context of a Pod.
Therefore, it is crucial to ensure that all the referenced secrets are correctly specified in the ServiceAccount.

### StorageObjectInUseProtection

**Type**: Mutating.

The `StorageObjectInUseProtection` plugin adds the `kubernetes.io/pvc-protection` or `kubernetes.io/pv-protection`
finalizer to newly created PersistentVolumeClaims (PVCs) or PersistentVolumes (PVs).
If a user deletes a PVC or PV, it is not removed until the finalizer is removed
from the PVC or PV by the PVC or PV protection controller.
Refer to
[Storage Object in Use Protection](/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection)
for more detailed information.

### TaintNodesByCondition {#taintnodesbycondition}

**Type**: Mutating.

This admission controller taints newly created
Nodes as `NotReady` and `NoSchedule`. That tainting avoids a race condition that could cause Pods
to be scheduled on new Nodes before their taints were updated to accurately reflect their reported
conditions.

### ValidatingAdmissionPolicy {#validatingadmissionpolicy}

**Type**: Validating.

[This admission controller](/docs/reference/access-authn-authz/validating-admission-policy/) implements CEL validation for incoming matched requests.
It is enabled when both the `ValidatingAdmissionPolicy` feature gate and the `admissionregistration.k8s.io/v1alpha1` API group/version are enabled.
If any ValidatingAdmissionPolicy fails, the request fails.

### ValidatingAdmissionWebhook {#validatingadmissionwebhook}

**Type**: Validating.

This admission controller calls any validating webhooks which match the request. Matching
webhooks are called in parallel; if any of them rejects the request, the request
fails. This admission controller only runs in the validation phase; the webhooks it calls may not
mutate the object, as opposed to the webhooks called by the `MutatingAdmissionWebhook` admission controller.

If a webhook called by this has side effects (for example, decrementing quota), it
*must* have a reconciliation system, as it is not guaranteed that subsequent
webhooks or other validating admission controllers will permit the request to finish.

If you disable the ValidatingAdmissionWebhook, you must also disable the
`ValidatingWebhookConfiguration` object in the `admissionregistration.k8s.io/v1`
group/version via the `--runtime-config` flag.

## Is there a recommended set of admission controllers to use?

Yes.
The recommended admission controllers are enabled by default
(shown [here](/docs/reference/command-line-tools-reference/kube-apiserver/#options)),
so you do not need to explicitly specify them.
You can enable additional admission controllers beyond the default set using the
`--enable-admission-plugins` flag (**order doesn't matter**).
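As an illustration, enabling two of the disabled-by-default plugins described above on top of the default set could look like this; the exact way you pass the flag depends on how your cluster deploys the API server (for example, a static Pod manifest or a systemd unit):

```shell
kube-apiserver --enable-admission-plugins=NamespaceAutoProvision,PodNodeSelector
```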
controller enforces that a  Namespace  that is undergoing termination cannot have new objects created in it  and ensures that requests in a non existent  Namespace  are rejected  This admission controller also prevents deletion of three system reserved namespaces  default    kube system    kube public    A  Namespace  deletion kicks off a sequence of operations that remove all objects  pods  services  etc   in that namespace   In order to enforce integrity of that process  we strongly recommend running this admission controller       NodeRestriction   noderestriction     Type    Validating   This admission controller limits the  Node  and  Pod  objects a kubelet can modify  In order to be limited by this admission controller  kubelets must use credentials in the  system nodes  group  with a username in the form  system node  nodeName    Such kubelets will only be allowed to modify their own  Node  API object  and only modify  Pod  API objects that are bound to their node  kubelets are not allowed to update or remove taints from their  Node  API object   The  NodeRestriction  admission plugin prevents kubelets from deleting their  Node  API object  and enforces kubelet modification of labels under the  kubernetes io   or  k8s io   prefixes as follows       Prevents   kubelets from adding removing updating labels with a  node restriction kubernetes io   prefix    This label prefix is reserved for administrators to label their  Node  objects for workload isolation purposes    and kubelets will not be allowed to modify labels with that prefix      Allows   kubelets to add remove update these labels and label prefixes       kubernetes io hostname       kubernetes io arch       kubernetes io os       beta kubernetes io instance type       node kubernetes io instance type       failure domain beta kubernetes io region   deprecated       failure domain beta kubernetes io zone   deprecated       topology kubernetes io region       topology kubernetes io zone       kubelet 
kubernetes io   prefixed labels      node kubernetes io   prefixed labels  Use of any other labels under the  kubernetes io  or  k8s io  prefixes by kubelets is reserved  and may be disallowed or allowed by the  NodeRestriction  admission plugin in the future   Future versions may add additional restrictions to ensure kubelets have the minimal set of permissions required to operate correctly       OwnerReferencesPermissionEnforcement   ownerreferencespermissionenforcement     Type    Validating   This admission controller protects the access to the  metadata ownerReferences  of an object so that only users with   delete   permission to the object can change it  This admission controller also protects the access to  metadata ownerReferences x  blockOwnerDeletion  of an object  so that only users with   update   permission to the  finalizers  subresource of the referenced  owner  can change it       PersistentVolumeClaimResize   persistentvolumeclaimresize       Type    Validating   This admission controller implements additional validations for checking incoming  PersistentVolumeClaim  resize requests   Enabling the  PersistentVolumeClaimResize  admission controller is recommended  This admission controller prevents resizing of all claims by default unless a claim s  StorageClass   explicitly enables resizing by setting  allowVolumeExpansion  to  true    For example  all  PersistentVolumeClaim s created from the following  StorageClass  support volume expansion      yaml apiVersion  storage k8s io v1 kind  StorageClass metadata    name  gluster vol default provisioner  kubernetes io glusterfs parameters    resturl   http   192 168 10 100 8080    restuser       secretNamespace       secretName     allowVolumeExpansion  true      For more information about persistent volume claims  see  PersistentVolumeClaims   docs concepts storage persistent volumes  persistentvolumeclaims        PodNodeSelector   podnodeselector       Type    Validating   This admission controller 
defaults and limits what node selectors may be used within a namespace by reading a namespace annotation and a global configuration   This admission controller is disabled by default        Configuration file format   PodNodeSelector  uses a configuration file to set options for the behavior of the backend  Note that the configuration file format will move to a versioned file in a future release  This file may be json or yaml and has the following format      yaml podNodeSelectorPluginConfig    clusterDefaultNodeSelector  name of node selector   namespace1  name of node selector   namespace2  name of node selector      Reference the  PodNodeSelector  configuration file from the file provided to the API server s command line flag    admission control config file       yaml apiVersion  apiserver config k8s io v1 kind  AdmissionConfiguration plugins    name  PodNodeSelector   path  podnodeselector yaml               Configuration Annotation Format   PodNodeSelector  uses the annotation key  scheduler alpha kubernetes io node selector  to assign node selectors to namespaces      yaml apiVersion  v1 kind  Namespace metadata    annotations      scheduler alpha kubernetes io node selector  name of node selector   name  namespace3           Internal Behavior  This admission controller has the following behavior   1  If the  Namespace  has an annotation with a key  scheduler alpha kubernetes io node selector      use its value as the node selector  2  If the namespace lacks such an annotation  use the  clusterDefaultNodeSelector  defined in the     PodNodeSelector  plugin configuration file as the node selector  3  Evaluate the pod s node selector against the namespace node selector for conflicts  Conflicts    result in rejection  4  Evaluate the pod s node selector against the namespace specific allowed selector defined the    plugin configuration file  Conflicts result in rejection    PodNodeSelector allows forcing pods to run on specifically labeled nodes  Also see the 
PodTolerationRestriction admission plugin  which allows preventing pods from running on specifically tainted nodes        PodSecurity   podsecurity       Type    Validating   The PodSecurity admission controller checks new Pods before they are admitted  determines if it should be admitted based on the requested security context and the restrictions on permitted  Pod Security Standards   docs concepts security pod security standards   for the namespace that the Pod would be in   See the  Pod Security Admission   docs concepts security pod security admission   documentation for more information   PodSecurity replaced an older admission controller named PodSecurityPolicy       PodTolerationRestriction   podtolerationrestriction       Type    Mutating and Validating   The PodTolerationRestriction admission controller verifies any conflict between tolerations of a pod and the tolerations of its namespace  It rejects the pod request if there is a conflict  It then merges the tolerations annotated on the namespace into the tolerations of the pod  The resulting tolerations are checked against a list of allowed tolerations annotated to the namespace  If the check succeeds  the pod request is admitted otherwise it is rejected   If the namespace of the pod does not have any associated default tolerations or allowed tolerations annotated  the cluster level default tolerations or cluster level list of allowed tolerations are used instead if they are specified   Tolerations to a namespace are assigned via the  scheduler alpha kubernetes io defaultTolerations  annotation key  The list of allowed tolerations can be added via the  scheduler alpha kubernetes io tolerationsWhitelist  annotation key   Example for namespace annotations      yaml apiVersion  v1 kind  Namespace metadata    name  apps that need nodes exclusively   annotations      scheduler alpha kubernetes io defaultTolerations      operator    Exists    effect    NoSchedule    key    dedicated node         scheduler 
alpha kubernetes io tolerationsWhitelist      operator    Exists    effect    NoSchedule    key    dedicated node          This admission controller is disabled by default       Priority   priority     Type    Mutating and Validating   The priority admission controller uses the  priorityClassName  field and populates the integer value of the priority  If the priority class is not found  the Pod is rejected       ResourceQuota   resourcequota     Type    Validating   This admission controller will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the  ResourceQuota  object in a  Namespace    If you are using  ResourceQuota  objects in your Kubernetes deployment  you MUST use this admission controller to enforce quota constraints   See the  ResourceQuota API reference   docs reference kubernetes api policy resources resource quota v1   and the  example of Resource Quota   docs concepts policy resource quotas   for more details       RuntimeClass   runtimeclass     Type    Mutating and Validating   If you define a RuntimeClass with  Pod overhead   docs concepts scheduling eviction pod overhead   configured  this admission controller checks incoming Pods  When enabled  this admission controller rejects any Pod create requests that have the overhead already set  For Pods that have a RuntimeClass configured and selected in their   spec   this admission controller sets   spec overhead  in the Pod based on the value defined in the corresponding RuntimeClass   See also  Pod Overhead   docs concepts scheduling eviction pod overhead   for more information       ServiceAccount   serviceaccount     Type    Mutating and Validating   This admission controller implements automation for  serviceAccounts   docs tasks configure pod container configure service account    The Kubernetes project strongly recommends enabling this admission controller  You should enable this admission controller if you intend to make any use of 
Kubernetes  ServiceAccount  objects   Regarding the annotation  kubernetes io enforce mountable secrets   While the annotation s name suggests it only concerns the mounting of Secrets  its enforcement also extends to other ways Secrets are used in the context of a Pod  Therefore  it is crucial to ensure that all the referenced secrets are correctly specified in the ServiceAccount       StorageObjectInUseProtection    Type    Mutating   The  StorageObjectInUseProtection  plugin adds the  kubernetes io pvc protection  or  kubernetes io pv protection  finalizers to newly created Persistent Volume Claims  PVCs  or Persistent Volumes  PV   In case a user deletes a PVC or PV the PVC or PV is not removed until the finalizer is removed from the PVC or PV by PVC or PV Protection Controller  Refer to the  Storage Object in Use Protection   docs concepts storage persistent volumes  storage object in use protection  for more detailed information       TaintNodesByCondition   taintnodesbycondition     Type    Mutating   This admission controller  newly created Nodes as  NotReady  and  NoSchedule   That tainting avoids a race condition that could cause Pods to be scheduled on new Nodes before their taints were updated to accurately reflect their reported conditions       ValidatingAdmissionPolicy   validatingadmissionpolicy     Type    Validating    This admission controller   docs reference access authn authz validating admission policy   implements the CEL validation for incoming matched requests   It is enabled when both feature gate  validatingadmissionpolicy  and  admissionregistration k8s io v1alpha1  group version are enabled  If any of the ValidatingAdmissionPolicy fails  the request fails       ValidatingAdmissionWebhook   validatingadmissionwebhook     Type    Validating   This admission controller calls any validating webhooks which match the request  Matching webhooks are called in parallel  if any of them rejects the request  the request fails  This admission 
controller only runs in the validation phase  the webhooks it calls may not mutate the object  as opposed to the webhooks called by the  MutatingAdmissionWebhook  admission controller   If a webhook called by this has side effects  for example  decrementing quota  it  must  have a reconciliation system  as it is not guaranteed that subsequent webhooks or other validating admission controllers will permit the request to finish   If you disable the ValidatingAdmissionWebhook  you must also disable the  ValidatingWebhookConfiguration  object in the  admissionregistration k8s io v1  group version via the    runtime config  flag      Is there a recommended set of admission controllers to use   Yes  The recommended admission controllers are enabled by default  shown  here   docs reference command line tools reference kube apiserver  options    so you do not need to explicitly specify them  You can enable additional admission controllers beyond the default set using the    enable admission plugins  flag    order doesn t matter     "}
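The answer above can be sketched as `kube-apiserver` invocations; the plugin names below are illustrative examples only, and the trailing `...` stands for the rest of your server's flags:

```shell
# Enable admission plugins beyond the default set (order doesn't matter):
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...

# Conversely, plugins can be switched off with:
kube-apiserver --disable-admission-plugins=PodNodeSelector,AlwaysDeny ...
```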
{"questions":"kubernetes reference liggitt deads2k erictune contenttype concept aliases rbac reviewers weight 33 title Using RBAC Authorization","answers":"---\nreviewers:\n- erictune\n- deads2k\n- liggitt\ntitle: Using RBAC Authorization\ncontent_type: concept\naliases: [\/rbac\/]\nweight: 33\n---\n\n<!-- overview -->\nRole-based access control (RBAC) is a method of regulating access to computer or\nnetwork resources based on the roles of individual users within your organization.\n\n\n<!-- body -->\nRBAC authorization uses the `rbac.authorization.k8s.io` API group to drive authorization\ndecisions, allowing you to dynamically configure policies through the Kubernetes API.\n\nTo enable RBAC, start the API server\nwith the `--authorization-mode` flag set to a comma-separated list that includes `RBAC`;\nfor example:\n```shell\nkube-apiserver --authorization-mode=Example,RBAC --other-options --more-options\n```\n\n## API objects {#api-overview}\n\nThe RBAC API declares four kinds of Kubernetes object: _Role_, _ClusterRole_,\n_RoleBinding_ and _ClusterRoleBinding_. You can describe or amend the RBAC objects\nusing tools such as `kubectl`, just like any other Kubernetes object.\n\n\nThese objects, by design, impose access restrictions. If you are making changes\nto a cluster as you learn, see\n[privilege escalation prevention and bootstrapping](#privilege-escalation-prevention-and-bootstrapping)\nto understand how those restrictions can prevent you making some changes.\n\n\n### Role and ClusterRole\n\nAn RBAC _Role_ or _ClusterRole_ contains rules that represent a set of permissions.\nPermissions are purely additive (there are no \"deny\" rules).\n\nA Role always sets permissions within a particular namespace;\nwhen you create a Role, you have to specify the namespace it belongs in.\n\nClusterRole, by contrast, is a non-namespaced resource. 
The resources have different names (Role\nand ClusterRole) because a Kubernetes object always has to be either namespaced or not namespaced;\nit can't be both.\n\nClusterRoles have several uses. You can use a ClusterRole to:\n\n1. define permissions on namespaced resources and be granted access within individual namespace(s)\n1. define permissions on namespaced resources and be granted access across all namespaces\n1. define permissions on cluster-scoped resources\n\nIf you want to define a role within a namespace, use a Role; if you want to define\na role cluster-wide, use a ClusterRole.\n\n#### Role example\n\nHere's an example Role in the \"default\" namespace that can be used to grant read access to\nPods:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: Role\nmetadata:\n  namespace: default\n  name: pod-reader\nrules:\n- apiGroups: [\"\"] # \"\" indicates the core API group\n  resources: [\"pods\"]\n  verbs: [\"get\", \"watch\", \"list\"]\n```\n\n#### ClusterRole example\n\nA ClusterRole can be used to grant the same permissions as a Role.\nBecause ClusterRoles are cluster-scoped, you can also use them to grant access to:\n\n* cluster-scoped resources (like nodes)\n* non-resource endpoints (like `\/healthz`)\n* namespaced resources (like Pods), across all namespaces\n\n  For example: you can use a ClusterRole to allow a particular user to run\n  `kubectl get pods --all-namespaces`\n\nHere is an example of a ClusterRole that can be used to grant read access to\nSecrets in any particular namespace,\nor across all namespaces (depending on how it is [bound](#rolebinding-and-clusterrolebinding)):\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  # \"namespace\" omitted since ClusterRoles are not namespaced\n  name: secret-reader\nrules:\n- apiGroups: [\"\"]\n  #\n  # at the HTTP level, the name of the resource for accessing Secret\n  # objects is \"secrets\"\n  resources: [\"secrets\"]\n  verbs: [\"get\", \"watch\", 
\"list\"]\n```\n\nThe name of a Role or a ClusterRole object must be a valid\n[path segment name](\/docs\/concepts\/overview\/working-with-objects\/names#path-segment-names).\n\n### RoleBinding and ClusterRoleBinding\n\nA role binding grants the permissions defined in a role to a user or set of users.\nIt holds a list of *subjects* (users, groups, or service accounts), and a reference to the\nrole being granted.\nA RoleBinding grants permissions within a specific namespace whereas a ClusterRoleBinding\ngrants that access cluster-wide.\n\nA RoleBinding may reference any Role in the same namespace. Alternatively, a RoleBinding\ncan reference a ClusterRole and bind that ClusterRole to the namespace of the RoleBinding.\nIf you want to bind a ClusterRole to all the namespaces in your cluster, you use a\nClusterRoleBinding.\n\nThe name of a RoleBinding or ClusterRoleBinding object must be a valid\n[path segment name](\/docs\/concepts\/overview\/working-with-objects\/names#path-segment-names).\n\n#### RoleBinding examples {#rolebinding-example}\n\nHere is an example of a RoleBinding that grants the \"pod-reader\" Role to the user \"jane\"\nwithin the \"default\" namespace.\nThis allows \"jane\" to read pods in the \"default\" namespace.\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\n# This role binding allows \"jane\" to read pods in the \"default\" namespace.\n# You need to already have a Role named \"pod-reader\" in that namespace.\nkind: RoleBinding\nmetadata:\n  name: read-pods\n  namespace: default\nsubjects:\n# You can specify more than one \"subject\"\n- kind: User\n  name: jane # \"name\" is case sensitive\n  apiGroup: rbac.authorization.k8s.io\nroleRef:\n  # \"roleRef\" specifies the binding to a Role \/ ClusterRole\n  kind: Role #this must be Role or ClusterRole\n  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to\n  apiGroup: rbac.authorization.k8s.io\n```\n\nA RoleBinding can also reference a ClusterRole to 
grant the permissions defined in that\nClusterRole to resources inside the RoleBinding's namespace. This kind of reference\nlets you define a set of common roles across your cluster, then reuse them within\nmultiple namespaces.\n\nFor instance, even though the following RoleBinding refers to a ClusterRole,\n\"dave\" (the subject, case sensitive) will only be able to read Secrets in the \"development\"\nnamespace, because the RoleBinding's namespace (in its metadata) is \"development\".\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\n# This role binding allows \"dave\" to read secrets in the \"development\" namespace.\n# You need to already have a ClusterRole named \"secret-reader\".\nkind: RoleBinding\nmetadata:\n  name: read-secrets\n  #\n  # The namespace of the RoleBinding determines where the permissions are granted.\n  # This only grants permissions within the \"development\" namespace.\n  namespace: development\nsubjects:\n- kind: User\n  name: dave # Name is case sensitive\n  apiGroup: rbac.authorization.k8s.io\nroleRef:\n  kind: ClusterRole\n  name: secret-reader\n  apiGroup: rbac.authorization.k8s.io\n```\n\n#### ClusterRoleBinding example\n\nTo grant permissions across a whole cluster, you can use a ClusterRoleBinding.\nThe following ClusterRoleBinding allows any user in the group \"manager\" to read\nsecrets in any namespace.\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\n# This cluster role binding allows anyone in the \"manager\" group to read secrets in any namespace.\nkind: ClusterRoleBinding\nmetadata:\n  name: read-secrets-global\nsubjects:\n- kind: Group\n  name: manager # Name is case sensitive\n  apiGroup: rbac.authorization.k8s.io\nroleRef:\n  kind: ClusterRole\n  name: secret-reader\n  apiGroup: rbac.authorization.k8s.io\n```\n\nAfter you create a binding, you cannot change the Role or ClusterRole that it refers to.\nIf you try to change a binding's `roleRef`, you get a validation error. 
If you do want\nto change the `roleRef` for a binding, you need to remove the binding object and create\na replacement.\n\nThere are two reasons for this restriction:\n\n1. Making `roleRef` immutable allows granting someone `update` permission on an existing binding\n   object, so that they can manage the list of subjects, without being able to change\n   the role that is granted to those subjects.\n1. A binding to a different role is a fundamentally different binding.\n   Requiring a binding to be deleted\/recreated in order to change the `roleRef`\n   ensures the full list of subjects in the binding is intended to be granted\n   the new role (as opposed to enabling or accidentally modifying only the roleRef\n   without verifying all of the existing subjects should be given the new role's\n   permissions).\n\nThe `kubectl auth reconcile` command-line utility creates or updates a manifest file containing RBAC objects,\nand handles deleting and recreating binding objects if required to change the role they refer to.\nSee [command usage and examples](#kubectl-auth-reconcile) for more information.\n\n### Referring to resources\n\nIn the Kubernetes API, most resources are represented and accessed using a string representation of\ntheir object name, such as `pods` for a Pod. RBAC refers to resources using exactly the same\nname that appears in the URL for the relevant API endpoint.\nSome Kubernetes APIs involve a\n_subresource_, such as the logs for a Pod. A request for a Pod's logs looks like:\n\n```http\nGET \/api\/v1\/namespaces\/{namespace}\/pods\/{name}\/log\n```\n\nIn this case, `pods` is the namespaced resource for Pod resources, and `log` is a\nsubresource of `pods`. To represent this in an RBAC role, use a slash (`\/`) to\ndelimit the resource and subresource. 
To allow a subject to read `pods` and\nalso access the `log` subresource for each of those Pods, you write:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: Role\nmetadata:\n  namespace: default\n  name: pod-and-pod-logs-reader\nrules:\n- apiGroups: [\"\"]\n  resources: [\"pods\", \"pods\/log\"]\n  verbs: [\"get\", \"list\"]\n```\n\nYou can also refer to resources by name for certain requests through the `resourceNames` list.\nWhen specified, requests can be restricted to individual instances of a resource.\nHere is an example that restricts its subject to only `get` or `update` a\nConfigMap named `my-configmap`:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: Role\nmetadata:\n  namespace: default\n  name: configmap-updater\nrules:\n- apiGroups: [\"\"]\n  #\n  # at the HTTP level, the name of the resource for accessing ConfigMap\n  # objects is \"configmaps\"\n  resources: [\"configmaps\"]\n  resourceNames: [\"my-configmap\"]\n  verbs: [\"update\", \"get\"]\n```\n\n\nYou cannot restrict `create` or `deletecollection` requests by their resource name.\nFor `create`, this limitation is because the name of the new object may not be known at authorization time.\nIf you restrict `list` or `watch` by resourceName, clients must include a `metadata.name` field selector in their `list` or `watch` request that matches the specified resourceName in order to be authorized.\nFor example, `kubectl get configmaps --field-selector=metadata.name=my-configmap`\n\n\nRather than referring to individual `resources`, `apiGroups`, and `verbs`,\nyou can use the wildcard `*` symbol to refer to all such objects.\nFor `nonResourceURLs`, you can use the wildcard `*` as a suffix glob match.\nFor `resourceNames`, an empty set means that everything is allowed.\nHere is an example that allows access to perform any current and future action on\nall current and future resources in the `example.com` API group.\nThis is similar to the built-in `cluster-admin` 
role.\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: Role\nmetadata:\n  namespace: default\n  name: example.com-superuser # DO NOT USE THIS ROLE, IT IS JUST AN EXAMPLE\nrules:\n- apiGroups: [\"example.com\"]\n  resources: [\"*\"]\n  verbs: [\"*\"]\n```\n\n\nUsing wildcards in resource and verb entries could result in overly permissive access being granted\nto sensitive resources.\nFor instance, if a new resource type is added, or a new subresource is added,\nor a new custom verb is checked, the wildcard entry automatically grants access, which may be undesirable.\nThe [principle of least privilege](\/docs\/concepts\/security\/rbac-good-practices\/#least-privilege)\nshould be employed, using specific resources and verbs to ensure only the permissions required for the\nworkload to function correctly are applied.\n\n\n### Aggregated ClusterRoles\n\nYou can _aggregate_ several ClusterRoles into one combined ClusterRole.\nA controller, running as part of the cluster control plane, watches for ClusterRole\nobjects with an `aggregationRule` set. The `aggregationRule` defines a label\nselector that the controller\nuses to match other ClusterRole objects that should be combined into the `rules`\nfield of this one.\n\n\nThe control plane overwrites any values that you manually specify in the `rules` field of an\naggregate ClusterRole. 
If you want to change or add rules, do so in the `ClusterRole` objects\nthat are selected by the `aggregationRule`.\n\n\nHere is an example aggregated ClusterRole:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: monitoring\naggregationRule:\n  clusterRoleSelectors:\n  - matchLabels:\n      rbac.example.com\/aggregate-to-monitoring: \"true\"\nrules: [] # The control plane automatically fills in the rules\n```\n\nIf you create a new ClusterRole that matches the label selector of an existing aggregated ClusterRole,\nthat change triggers adding the new rules into the aggregated ClusterRole.\nHere is an example that adds rules to the \"monitoring\" ClusterRole, by creating another\nClusterRole labeled `rbac.example.com\/aggregate-to-monitoring: true`.\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: monitoring-endpoints\n  labels:\n    rbac.example.com\/aggregate-to-monitoring: \"true\"\n# When you create the \"monitoring-endpoints\" ClusterRole,\n# the rules below will be added to the \"monitoring\" ClusterRole.\nrules:\n- apiGroups: [\"\"]\n  resources: [\"services\", \"endpointslices\", \"pods\"]\n  verbs: [\"get\", \"list\", \"watch\"]\n```\n\nThe [default user-facing roles](#default-roles-and-role-bindings) use ClusterRole aggregation. 
This lets you,\nas a cluster administrator, include rules for custom resources, such as those served by\nCustomResourceDefinitions or aggregated API servers, to extend the default roles.\n\nFor example: the following ClusterRoles let the \"admin\" and \"edit\" default roles manage the custom resource\nnamed CronTab, whereas the \"view\" role can perform only read actions on CronTab resources.\nYou can assume that CronTab objects are named `\"crontabs\"` in URLs as seen by the API server.\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: aggregate-cron-tabs-edit\n  labels:\n    # Add these permissions to the \"admin\" and \"edit\" default roles.\n    rbac.authorization.k8s.io\/aggregate-to-admin: \"true\"\n    rbac.authorization.k8s.io\/aggregate-to-edit: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n  resources: [\"crontabs\"]\n  verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io\/v1\nmetadata:\n  name: aggregate-cron-tabs-view\n  labels:\n    # Add these permissions to the \"view\" default role.\n    rbac.authorization.k8s.io\/aggregate-to-view: \"true\"\nrules:\n- apiGroups: [\"stable.example.com\"]\n  resources: [\"crontabs\"]\n  verbs: [\"get\", \"list\", \"watch\"]\n```\n\n#### Role examples\n\nThe following examples are excerpts from Role or ClusterRole objects, showing only\nthe `rules` section.\n\nAllow reading `\"pods\"` resources in the core\nAPI group:\n\n```yaml\nrules:\n- apiGroups: [\"\"]\n  #\n  # at the HTTP level, the name of the resource for accessing Pod\n  # objects is \"pods\"\n  resources: [\"pods\"]\n  verbs: [\"get\", \"list\", \"watch\"]\n```\n\nAllow reading\/writing Deployments (at the HTTP level: objects with `\"deployments\"`\nin the resource part of their URL) in the `\"apps\"` API group:\n\n```yaml\nrules:\n- apiGroups: [\"apps\"]\n  #\n  # at the HTTP level, the name of the resource for accessing Deployment\n  # 
objects is \"deployments\"\n  resources: [\"deployments\"]\n  verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]\n```\n\nAllow reading Pods in the core API group, as well as reading or writing Job\nresources in the `\"batch\"` API group:\n\n```yaml\nrules:\n- apiGroups: [\"\"]\n  #\n  # at the HTTP level, the name of the resource for accessing Pod\n  # objects is \"pods\"\n  resources: [\"pods\"]\n  verbs: [\"get\", \"list\", \"watch\"]\n- apiGroups: [\"batch\"]\n  #\n  # at the HTTP level, the name of the resource for accessing Job\n  # objects is \"jobs\"\n  resources: [\"jobs\"]\n  verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"]\n```\n\nAllow reading a ConfigMap named \"my-config\" (must be bound with a\nRoleBinding to limit to a single ConfigMap in a single namespace):\n\n```yaml\nrules:\n- apiGroups: [\"\"]\n  #\n  # at the HTTP level, the name of the resource for accessing ConfigMap\n  # objects is \"configmaps\"\n  resources: [\"configmaps\"]\n  resourceNames: [\"my-config\"]\n  verbs: [\"get\"]\n```\n\nAllow reading the resource `\"nodes\"` in the core group (because a\nNode is cluster-scoped, this must be in a ClusterRole bound with a\nClusterRoleBinding to be effective):\n\n```yaml\nrules:\n- apiGroups: [\"\"]\n  #\n  # at the HTTP level, the name of the resource for accessing Node\n  # objects is \"nodes\"\n  resources: [\"nodes\"]\n  verbs: [\"get\", \"list\", \"watch\"]\n```\n\nAllow GET and POST requests to the non-resource endpoint `\/healthz` and\nall subpaths (must be in a ClusterRole bound with a ClusterRoleBinding\nto be effective):\n\n```yaml\nrules:\n- nonResourceURLs: [\"\/healthz\", \"\/healthz\/*\"] # '*' in a nonResourceURL is a suffix glob match\n  verbs: [\"get\", \"post\"]\n```\n\n### Referring to subjects\n\nA RoleBinding or ClusterRoleBinding binds a role to subjects.\nSubjects can be groups, users or\nServiceAccounts.\n\nKubernetes represents usernames as strings.\nThese can be: 
plain names, such as \"alice\"; email-style names, like \"bob@example.com\";\nor numeric user IDs represented as a string. It is up to you as a cluster administrator\nto configure the [authentication modules](\/docs\/reference\/access-authn-authz\/authentication\/)\nso that authentication produces usernames in the format you want.\n\n\nThe prefix `system:` is reserved for Kubernetes system use, so you should ensure\nthat you don't have users or groups with names that start with `system:` by\naccident.\nOther than this special prefix, the RBAC authorization system does not require any format\nfor usernames.\n\n\nIn Kubernetes, Authenticator modules provide group information.\nGroups, like users, are represented as strings, and that string has no format requirements,\nother than that the prefix `system:` is reserved.\n\n[ServiceAccounts](\/docs\/tasks\/configure-pod-container\/configure-service-account\/) have names prefixed\nwith `system:serviceaccount:`, and belong to groups that have names prefixed with `system:serviceaccounts:`.\n\n\n- `system:serviceaccount:` (singular) is the prefix for service account usernames.\n- `system:serviceaccounts:` (plural) is the prefix for service account groups.\n\n\n#### RoleBinding examples {#role-binding-examples}\n\nThe following examples are `RoleBinding` excerpts that only\nshow the `subjects` section.\n\nFor a user named `alice@example.com`:\n\n```yaml\nsubjects:\n- kind: User\n  name: \"alice@example.com\"\n  apiGroup: rbac.authorization.k8s.io\n```\n\nFor a group named `frontend-admins`:\n\n```yaml\nsubjects:\n- kind: Group\n  name: \"frontend-admins\"\n  apiGroup: rbac.authorization.k8s.io\n```\n\nFor the default service account in the \"kube-system\" namespace:\n\n```yaml\nsubjects:\n- kind: ServiceAccount\n  name: default\n  namespace: kube-system\n```\n\nFor all service accounts in the \"qa\" namespace:\n\n```yaml\nsubjects:\n- kind: Group\n  name: system:serviceaccounts:qa\n  apiGroup: 
rbac.authorization.k8s.io\n```\n\nFor all service accounts in any namespace:\n\n```yaml\nsubjects:\n- kind: Group\n  name: system:serviceaccounts\n  apiGroup: rbac.authorization.k8s.io\n```\n\nFor all authenticated users:\n\n```yaml\nsubjects:\n- kind: Group\n  name: system:authenticated\n  apiGroup: rbac.authorization.k8s.io\n```\n\nFor all unauthenticated users:\n\n```yaml\nsubjects:\n- kind: Group\n  name: system:unauthenticated\n  apiGroup: rbac.authorization.k8s.io\n```\n\nFor all users:\n\n```yaml\nsubjects:\n- kind: Group\n  name: system:authenticated\n  apiGroup: rbac.authorization.k8s.io\n- kind: Group\n  name: system:unauthenticated\n  apiGroup: rbac.authorization.k8s.io\n```\n\n## Default roles and role bindings\n\nAPI servers create a set of default ClusterRole and ClusterRoleBinding objects.\nMany of these are `system:` prefixed, which indicates that the resource is directly\nmanaged by the cluster control plane.\nAll of the default ClusterRoles and ClusterRoleBindings are labeled with `kubernetes.io\/bootstrapping=rbac-defaults`.\n\n\nTake care when modifying ClusterRoles and ClusterRoleBindings with names\nthat have a `system:` prefix.\nModifications to these resources can result in non-functional clusters.\n\n\n### Auto-reconciliation\n\nAt each start-up, the API server updates default cluster roles with any missing permissions,\nand updates default cluster role bindings with any missing subjects.\nThis allows the cluster to repair accidental modifications, and helps to keep roles and role bindings\nup-to-date as permissions and subjects change in new Kubernetes releases.\n\nTo opt out of this reconciliation, set the `rbac.authorization.kubernetes.io\/autoupdate`\nannotation on a default cluster role or default cluster RoleBinding to `false`.\nBe aware that missing default permissions and subjects can result in non-functional clusters.\n\nAuto-reconciliation is enabled by default if the RBAC authorizer is active.\n\n### API discovery roles 
{#discovery-roles}\n\nDefault cluster role bindings authorize unauthenticated and authenticated users to read API information\nthat is deemed safe to be publicly accessible (including CustomResourceDefinitions).\nTo disable anonymous unauthenticated access, add `--anonymous-auth=false` flag to\nthe API server configuration.\n\nTo view the configuration of these roles via `kubectl` run:\n\n```shell\nkubectl get clusterroles system:discovery -o yaml\n```\n\n\nIf you edit that ClusterRole, your changes will be overwritten on API server restart\nvia [auto-reconciliation](#auto-reconciliation). To avoid that overwriting,\neither do not manually edit the role, or disable auto-reconciliation.\n\n\n<table>\n<caption>Kubernetes RBAC API discovery roles<\/caption>\n<colgroup><col style=\"width: 25%;\" \/><col style=\"width: 25%;\" \/><col \/><\/colgroup>\n<thead>\n<tr>\n<th>Default ClusterRole<\/th>\n<th>Default ClusterRoleBinding<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><b>system:basic-user<\/b><\/td>\n<td><b>system:authenticated<\/b> group<\/td>\n<td>Allows a user read-only access to basic information about themselves. Prior to v1.14, this role was also bound to <tt>system:unauthenticated<\/tt> by default.<\/td>\n<\/tr>\n<tr>\n<td><b>system:discovery<\/b><\/td>\n<td><b>system:authenticated<\/b> group<\/td>\n<td>Allows read-only access to API discovery endpoints needed to discover and negotiate an API level. Prior to v1.14, this role was also bound to <tt>system:unauthenticated<\/tt> by default.<\/td>\n<\/tr>\n<tr>\n<td><b>system:public-info-viewer<\/b><\/td>\n<td><b>system:authenticated<\/b> and <b>system:unauthenticated<\/b> groups<\/td>\n<td>Allows read-only access to non-sensitive information about the cluster. Introduced in Kubernetes v1.14.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n### User-facing roles\n\nSome of the default ClusterRoles are not `system:` prefixed. 
These are intended to be user-facing roles.\nThey include super-user roles (`cluster-admin`), roles intended to be granted cluster-wide\nusing ClusterRoleBindings, and roles intended to be granted within particular\nnamespaces using RoleBindings (`admin`, `edit`, `view`).\n\nUser-facing ClusterRoles use [ClusterRole aggregation](#aggregated-clusterroles) to allow admins to include\nrules for custom resources on these ClusterRoles. To add rules to the `admin`, `edit`, or `view` roles, create\na ClusterRole with one or more of the following labels:\n\n```yaml\nmetadata:\n  labels:\n    rbac.authorization.k8s.io\/aggregate-to-admin: \"true\"\n    rbac.authorization.k8s.io\/aggregate-to-edit: \"true\"\n    rbac.authorization.k8s.io\/aggregate-to-view: \"true\"\n```\n\n<table>\n<colgroup><col style=\"width: 25%;\" \/><col style=\"width: 25%;\" \/><col \/><\/colgroup>\n<thead>\n<tr>\n<th>Default ClusterRole<\/th>\n<th>Default ClusterRoleBinding<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><b>cluster-admin<\/b><\/td>\n<td><b>system:masters<\/b> group<\/td>\n<td>Allows super-user access to perform any action on any resource.\nWhen used in a <b>ClusterRoleBinding<\/b>, it gives full control over every resource in the cluster and in all namespaces.\nWhen used in a <b>RoleBinding<\/b>, it gives full control over every resource in the role binding's namespace, including the namespace itself.<\/td>\n<\/tr>\n<tr>\n<td><b>admin<\/b><\/td>\n<td>None<\/td>\n<td>Allows admin access, intended to be granted within a namespace using a <b>RoleBinding<\/b>.\n\nIf used in a <b>RoleBinding<\/b>, allows read\/write access to most resources in a namespace,\nincluding the ability to create roles and role bindings within the namespace.\nThis role does not allow write access to resource quota or to the namespace itself.\nThis role also does not allow write access to EndpointSlices (or Endpoints) in clusters created\nusing Kubernetes v1.22+. 
More information is available in the\n[\"Write Access for EndpointSlices and Endpoints\" section](#write-access-for-endpoints).<\/td>\n<\/tr>\n<tr>\n<td><b>edit<\/b><\/td>\n<td>None<\/td>\n<td>Allows read\/write access to most objects in a namespace.\n\nThis role does not allow viewing or modifying roles or role bindings.\nHowever, this role allows accessing Secrets and running Pods as any ServiceAccount in\nthe namespace, so it can be used to gain the API access levels of any ServiceAccount in\nthe namespace. This role also does not allow write access to EndpointSlices (or Endpoints) in\nclusters created using Kubernetes v1.22+. More information is available in the\n[\"Write Access for EndpointSlices and Endpoints\" section](#write-access-for-endpoints).<\/td>\n<\/tr>\n<tr>\n<td><b>view<\/b><\/td>\n<td>None<\/td>\n<td>Allows read-only access to see most objects in a namespace.\nIt does not allow viewing roles or role bindings.\n\nThis role does not allow viewing Secrets, since reading\nthe contents of Secrets enables access to ServiceAccount credentials\nin the namespace, which would allow API access as any ServiceAccount\nin the namespace (a form of privilege escalation).<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n### Core component roles\n\n<table>\n<colgroup><col style=\"width: 25%;\" \/><col style=\"width: 25%;\" \/><col \/><\/colgroup>\n<thead>\n<tr>\n<th>Default ClusterRole<\/th>\n<th>Default ClusterRoleBinding<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><b>system:kube-scheduler<\/b><\/td>\n<td><b>system:kube-scheduler<\/b> user<\/td>\n<td>Allows access to the resources required by the kube-scheduler component.<\/td>\n<\/tr>\n<tr>\n<td><b>system:volume-scheduler<\/b><\/td>\n<td><b>system:kube-scheduler<\/b> user<\/td>\n<td>Allows access to the volume resources required by the kube-scheduler component.<\/td>\n<\/tr>\n<tr>\n<td><b>system:kube-controller-manager<\/b><\/td>\n<td><b>system:kube-controller-manager<\/b> user<\/td>\n<td>Allows access to the 
resources required by the kube-controller-manager component.\nThe permissions required by individual controllers are detailed in the <a href=\"#controller-roles\">controller roles<\/a>.<\/td>\n<\/tr>\n<tr>\n<td><b>system:node<\/b><\/td>\n<td>None<\/td>\n<td>Allows access to resources required by the kubelet, <b>including read access to all secrets, and write access to all pod status objects<\/b>.\n\nYou should use the <a href=\"\/docs\/reference\/access-authn-authz\/node\/\">Node authorizer<\/a> and <a href=\"\/docs\/reference\/access-authn-authz\/admission-controllers\/#noderestriction\">NodeRestriction admission plugin<\/a> instead of the <tt>system:node<\/tt> role, and allow granting API access to kubelets based on the Pods scheduled to run on them.\n\nThe <tt>system:node<\/tt> role only exists for compatibility with Kubernetes clusters upgraded from versions prior to v1.8.\n<\/td>\n<\/tr>\n<tr>\n<td><b>system:node-proxier<\/b><\/td>\n<td><b>system:kube-proxy<\/b> user<\/td>\n<td>Allows access to the resources required by the kube-proxy component.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n### Other component roles\n\n<table>\n<colgroup><col style=\"width: 25%;\" \/><col style=\"width: 25%;\" \/><col \/><\/colgroup>\n<thead>\n<tr>\n<th>Default ClusterRole<\/th>\n<th>Default ClusterRoleBinding<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><b>system:auth-delegator<\/b><\/td>\n<td>None<\/td>\n<td>Allows delegated authentication and authorization checks.\nThis is commonly used by add-on API servers for unified authentication and authorization.<\/td>\n<\/tr>\n<tr>\n<td><b>system:heapster<\/b><\/td>\n<td>None<\/td>\n<td>Role for the <a href=\"https:\/\/github.com\/kubernetes\/heapster\">Heapster<\/a> component (deprecated).<\/td>\n<\/tr>\n<tr>\n<td><b>system:kube-aggregator<\/b><\/td>\n<td>None<\/td>\n<td>Role for the <a href=\"https:\/\/github.com\/kubernetes\/kube-aggregator\">kube-aggregator<\/a> 
component.<\/td>\n<\/tr>\n<tr>\n<td><b>system:kube-dns<\/b><\/td>\n<td><b>kube-dns<\/b> service account in the <b>kube-system<\/b> namespace<\/td>\n<td>Role for the <a href=\"\/docs\/concepts\/services-networking\/dns-pod-service\/\">kube-dns<\/a> component.<\/td>\n<\/tr>\n<tr>\n<td><b>system:kubelet-api-admin<\/b><\/td>\n<td>None<\/td>\n<td>Allows full access to the kubelet API.<\/td>\n<\/tr>\n<tr>\n<td><b>system:node-bootstrapper<\/b><\/td>\n<td>None<\/td>\n<td>Allows access to the resources required to perform\n<a href=\"\/docs\/reference\/access-authn-authz\/kubelet-tls-bootstrapping\/\">kubelet TLS bootstrapping<\/a>.<\/td>\n<\/tr>\n<tr>\n<td><b>system:node-problem-detector<\/b><\/td>\n<td>None<\/td>\n<td>Role for the <a href=\"https:\/\/github.com\/kubernetes\/node-problem-detector\">node-problem-detector<\/a> component.<\/td>\n<\/tr>\n<tr>\n<td><b>system:persistent-volume-provisioner<\/b><\/td>\n<td>None<\/td>\n<td>Allows access to the resources required by most <a href=\"\/docs\/concepts\/storage\/persistent-volumes\/#dynamic\">dynamic volume provisioners<\/a>.<\/td>\n<\/tr>\n<tr>\n<td><b>system:monitoring<\/b><\/td>\n<td><b>system:monitoring<\/b> group<\/td>\n<td>Allows read access to control-plane monitoring endpoints (i.e. kube-apiserver liveness and readiness endpoints (<tt>\/healthz<\/tt>, <tt>\/livez<\/tt>, <tt>\/readyz<\/tt>), the individual health-check endpoints (<tt>\/healthz\/*<\/tt>, <tt>\/livez\/*<\/tt>, <tt>\/readyz\/*<\/tt>), and <tt>\/metrics<\/tt>). 
Note that individual health check endpoints and the metric endpoint may expose sensitive information.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n\n### Roles for built-in controllers {#controller-roles}\n\nThe Kubernetes controller manager runs controllers that are built in to the Kubernetes\ncontrol plane.\nWhen invoked with `--use-service-account-credentials`, kube-controller-manager starts each controller\nusing a separate service account.\nCorresponding roles exist for each built-in controller, prefixed with `system:controller:`.\nIf the controller manager is not started with `--use-service-account-credentials`, it runs all control loops\nusing its own credential, which must be granted all the relevant roles.\nThese roles include:\n\n* `system:controller:attachdetach-controller`\n* `system:controller:certificate-controller`\n* `system:controller:clusterrole-aggregation-controller`\n* `system:controller:cronjob-controller`\n* `system:controller:daemon-set-controller`\n* `system:controller:deployment-controller`\n* `system:controller:disruption-controller`\n* `system:controller:endpoint-controller`\n* `system:controller:expand-controller`\n* `system:controller:generic-garbage-collector`\n* `system:controller:horizontal-pod-autoscaler`\n* `system:controller:job-controller`\n* `system:controller:namespace-controller`\n* `system:controller:node-controller`\n* `system:controller:persistent-volume-binder`\n* `system:controller:pod-garbage-collector`\n* `system:controller:pv-protection-controller`\n* `system:controller:pvc-protection-controller`\n* `system:controller:replicaset-controller`\n* `system:controller:replication-controller`\n* `system:controller:resourcequota-controller`\n* `system:controller:root-ca-cert-publisher`\n* `system:controller:route-controller`\n* `system:controller:service-account-controller`\n* `system:controller:service-controller`\n* `system:controller:statefulset-controller`\n* `system:controller:ttl-controller`\n\n## Privilege escalation prevention and bootstrapping\n\nThe 
RBAC API prevents users from escalating privileges by editing roles or role bindings.\nBecause this is enforced at the API level, it applies even when the RBAC authorizer is not in use.\n\n### Restrictions on role creation or update\n\nYou can only create\/update a role if at least one of the following things is true:\n\n1. You already have all the permissions contained in the role, at the same scope as the object being modified\n   (cluster-wide for a ClusterRole, within the same namespace or cluster-wide for a Role).\n2. You are granted explicit permission to perform the `escalate` verb on the `roles` or\n   `clusterroles` resource in the `rbac.authorization.k8s.io` API group.\n\nFor example, if `user-1` does not have the ability to list Secrets cluster-wide, they cannot create a ClusterRole\ncontaining that permission. To allow a user to create\/update roles:\n\n1. Grant them a role that allows them to create\/update Role or ClusterRole objects, as desired.\n2. Grant them permission to include specific permissions in the roles they create\/update:\n   * implicitly, by giving them those permissions (if they attempt to create or modify a Role or\n     ClusterRole with permissions they themselves have not been granted, the API request will be forbidden)\n   * or explicitly allow specifying any permission in a `Role` or `ClusterRole` by giving them\n     permission to perform the `escalate` verb on `roles` or `clusterroles` resources in the\n     `rbac.authorization.k8s.io` API group\n\n### Restrictions on role binding creation or update\n\nYou can only create\/update a role binding if you already have all the permissions contained in the referenced role\n(at the same scope as the role binding) *or* if you have been authorized to perform the `bind` verb on the referenced role.\nFor example, if `user-1` does not have the ability to list Secrets cluster-wide, they cannot create a ClusterRoleBinding\nto a role that grants that permission. 
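The explicit `escalate` grant described above can itself be expressed as a ClusterRole. This is a minimal sketch; the `role-escalator` name is hypothetical, not one of the Kubernetes defaults:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # hypothetical name; not a default ClusterRole
  name: role-escalator
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "clusterroles"]
  # "escalate" lets the holder include permissions they do not
  # themselves hold in the Roles and ClusterRoles they create or update
  verbs: ["escalate"]
```

A subject bound to such a ClusterRole can author roles containing arbitrary permissions, so grant it as sparingly as `bind`.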
To allow a user to create\/update role bindings:\n\n1. Grant them a role that allows them to create\/update RoleBinding or ClusterRoleBinding objects, as desired.\n2. Grant them permissions needed to bind a particular role:\n   * implicitly, by giving them the permissions contained in the role.\n   * explicitly, by giving them permission to perform the `bind` verb on the particular Role (or ClusterRole).\n\nFor example, this ClusterRole and RoleBinding would allow `user-1` to grant other users the `admin`, `edit`, and `view` roles in the namespace `user-1-namespace`:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: role-grantor\nrules:\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n  resources: [\"rolebindings\"]\n  verbs: [\"create\"]\n- apiGroups: [\"rbac.authorization.k8s.io\"]\n  resources: [\"clusterroles\"]\n  verbs: [\"bind\"]\n  # omit resourceNames to allow binding any ClusterRole\n  resourceNames: [\"admin\",\"edit\",\"view\"]\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: RoleBinding\nmetadata:\n  name: role-grantor-binding\n  namespace: user-1-namespace\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: role-grantor\nsubjects:\n- apiGroup: rbac.authorization.k8s.io\n  kind: User\n  name: user-1\n```\n\nWhen bootstrapping the first roles and role bindings, it is necessary for the initial user to grant permissions they do not yet have.\nTo bootstrap initial roles and role bindings:\n\n* Use a credential with the \"system:masters\" group, which is bound to the \"cluster-admin\" super-user role by the default bindings.\n\n## Command-line utilities\n\n### `kubectl create role`\n\nCreates a Role object defining permissions within a single namespace. 
Examples:\n\n* Create a Role named \"pod-reader\" that allows users to perform `get`, `watch` and `list` on pods:\n\n  ```shell\n  kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods\n  ```\n\n* Create a Role named \"pod-reader\" with resourceNames specified:\n\n  ```shell\n  kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod\n  ```\n\n* Create a Role named \"foo\" with apiGroups specified:\n\n  ```shell\n  kubectl create role foo --verb=get,list,watch --resource=replicasets.apps\n  ```\n\n* Create a Role named \"foo\" with subresource permissions:\n\n  ```shell\n  kubectl create role foo --verb=get,list,watch --resource=pods,pods\/status\n  ```\n\n* Create a Role named \"my-component-lease-holder\" with permissions to get\/update a resource with a specific name:\n\n  ```shell\n  kubectl create role my-component-lease-holder --verb=get,list,watch,update --resource=lease --resource-name=my-component\n  ```\n\n### `kubectl create clusterrole`\n\nCreates a ClusterRole. 
Examples:\n\n* Create a ClusterRole named \"pod-reader\" that allows users to perform `get`, `watch` and `list` on pods:\n\n  ```shell\n  kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods\n  ```\n\n* Create a ClusterRole named \"pod-reader\" with resourceNames specified:\n\n  ```shell\n  kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod\n  ```\n\n* Create a ClusterRole named \"foo\" with apiGroups specified:\n\n  ```shell\n  kubectl create clusterrole foo --verb=get,list,watch --resource=replicasets.apps\n  ```\n\n* Create a ClusterRole named \"foo\" with subresource permissions:\n\n  ```shell\n  kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods\/status\n  ```\n\n* Create a ClusterRole named \"foo\" with nonResourceURL specified:\n\n  ```shell\n  kubectl create clusterrole \"foo\" --verb=get --non-resource-url=\/logs\/*\n  ```\n\n* Create a ClusterRole named \"monitoring\" with an aggregationRule specified:\n\n  ```shell\n  kubectl create clusterrole monitoring --aggregation-rule=\"rbac.example.com\/aggregate-to-monitoring=true\"\n  ```\n\n### `kubectl create rolebinding`\n\nGrants a Role or ClusterRole within a specific namespace. 
Examples:\n\n* Within the namespace \"acme\", grant the permissions in the \"admin\" ClusterRole to a user named \"bob\":\n\n  ```shell\n  kubectl create rolebinding bob-admin-binding --clusterrole=admin --user=bob --namespace=acme\n  ```\n\n* Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to the service account in the namespace \"acme\" named \"myapp\":\n\n  ```shell\n  kubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp --namespace=acme\n  ```\n\n* Within the namespace \"acme\", grant the permissions in the \"view\" ClusterRole to a service account in the namespace \"myappnamespace\" named \"myapp\":\n\n  ```shell\n  kubectl create rolebinding myappnamespace-myapp-view-binding --clusterrole=view --serviceaccount=myappnamespace:myapp --namespace=acme\n  ```\n\n### `kubectl create clusterrolebinding`\n\nGrants a ClusterRole across the entire cluster (all namespaces). Examples:\n\n* Across the entire cluster, grant the permissions in the \"cluster-admin\" ClusterRole to a user named \"root\":\n\n  ```shell\n  kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=root\n  ```\n\n* Across the entire cluster, grant the permissions in the \"system:node-proxier\" ClusterRole to a user named \"system:kube-proxy\":\n\n  ```shell\n  kubectl create clusterrolebinding kube-proxy-binding --clusterrole=system:node-proxier --user=system:kube-proxy\n  ```\n\n* Across the entire cluster, grant the permissions in the \"view\" ClusterRole to a service account named \"myapp\" in the namespace \"acme\":\n\n  ```shell\n  kubectl create clusterrolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp\n  ```\n\n### `kubectl auth reconcile` {#kubectl-auth-reconcile}\n\nCreates or updates `rbac.authorization.k8s.io\/v1` API objects from a manifest file.\n\nMissing objects are created, and the containing namespace is created for namespaced objects, if 
required.\n\nExisting roles are updated to include the permissions in the input objects,\nand remove extra permissions if `--remove-extra-permissions` is specified.\n\nExisting bindings are updated to include the subjects in the input objects,\nand remove extra subjects if `--remove-extra-subjects` is specified.\n\nExamples:\n\n* Test applying a manifest file of RBAC objects, displaying changes that would be made:\n\n  ```shell\n  kubectl auth reconcile -f my-rbac-rules.yaml --dry-run=client\n  ```\n\n* Apply a manifest file of RBAC objects, preserving any extra permissions (in roles) and any extra subjects (in bindings):\n\n  ```shell\n  kubectl auth reconcile -f my-rbac-rules.yaml\n  ```\n\n* Apply a manifest file of RBAC objects, removing any extra permissions (in roles) and any extra subjects (in bindings):\n\n  ```shell\n  kubectl auth reconcile -f my-rbac-rules.yaml --remove-extra-subjects --remove-extra-permissions\n  ```\n\n## ServiceAccount permissions {#service-account-permissions}\n\nDefault RBAC policies grant scoped permissions to control-plane components, nodes,\nand controllers, but grant *no permissions* to service accounts outside the `kube-system` namespace\n(beyond the permissions given by [API discovery roles](#discovery-roles)).\n\nThis allows you to grant particular roles to particular ServiceAccounts as needed.\nFine-grained role bindings provide greater security, but require more effort to administrate.\nBroader grants can give unnecessary (and potentially escalating) API access to\nServiceAccounts, but are easier to administrate.\n\nIn order from most secure to least secure, the approaches are:\n\n1. 
Grant a role to an application-specific service account (best practice)\n\n   This requires the application to specify a `serviceAccountName` in its pod spec,\n   and for the service account to be created (via the API, application manifest, `kubectl create serviceaccount`, etc.).\n\n   For example, grant read-only permission within \"my-namespace\" to the \"my-sa\" service account:\n\n   ```shell\n   kubectl create rolebinding my-sa-view \\\n     --clusterrole=view \\\n     --serviceaccount=my-namespace:my-sa \\\n     --namespace=my-namespace\n   ```\n\n2. Grant a role to the \"default\" service account in a namespace\n\n   If an application does not specify a `serviceAccountName`, it uses the \"default\" service account.\n\n   \n   Permissions given to the \"default\" service account are available to any pod\n   in the namespace that does not specify a `serviceAccountName`.\n   \n\n   For example, grant read-only permission within \"my-namespace\" to the \"default\" service account:\n\n   ```shell\n   kubectl create rolebinding default-view \\\n     --clusterrole=view \\\n     --serviceaccount=my-namespace:default \\\n     --namespace=my-namespace\n   ```\n\n   Many [add-ons](\/docs\/concepts\/cluster-administration\/addons\/) run as the\n   \"default\" service account in the `kube-system` namespace.\n   To allow those add-ons to run with super-user access, grant cluster-admin\n   permissions to the \"default\" service account in the `kube-system` namespace.\n\n   \n   Enabling this means the `kube-system` namespace contains Secrets\n   that grant super-user access to your cluster's API.\n   \n\n   ```shell\n   kubectl create clusterrolebinding add-on-cluster-admin \\\n     --clusterrole=cluster-admin \\\n     --serviceaccount=kube-system:default\n   ```\n\n3. 
Grant a role to all service accounts in a namespace\n\n   If you want all applications in a namespace to have a role, no matter what service account they use,\n   you can grant a role to the service account group for that namespace.\n\n   For example, grant read-only permission within \"my-namespace\" to all service accounts in that namespace:\n\n   ```shell\n   kubectl create rolebinding serviceaccounts-view \\\n     --clusterrole=view \\\n     --group=system:serviceaccounts:my-namespace \\\n     --namespace=my-namespace\n   ```\n\n4. Grant a limited role to all service accounts cluster-wide (discouraged)\n\n   If you don't want to manage permissions per-namespace, you can grant a cluster-wide role to all service accounts.\n\n   For example, grant read-only permission across all namespaces to all service accounts in the cluster:\n\n   ```shell\n   kubectl create clusterrolebinding serviceaccounts-view \\\n     --clusterrole=view \\\n    --group=system:serviceaccounts\n   ```\n\n5. Grant super-user access to all service accounts cluster-wide (strongly discouraged)\n\n   If you don't care about partitioning permissions at all, you can grant super-user access to all service accounts.\n\n   \n   This allows any application full access to your cluster, and also grants\n   any user with read access to Secrets (or the ability to create any pod)\n   full access to your cluster.\n   \n\n   ```shell\n   kubectl create clusterrolebinding serviceaccounts-cluster-admin \\\n     --clusterrole=cluster-admin \\\n     --group=system:serviceaccounts\n   ```\n\n## Write access for EndpointSlices and Endpoints {#write-access-for-endpoints}\n\nKubernetes clusters created before Kubernetes v1.22 include write access to\nEndpointSlices (and Endpoints) in the aggregated \"edit\" and \"admin\" roles.\nAs a mitigation for [CVE-2021-25740](https:\/\/github.com\/kubernetes\/kubernetes\/issues\/103675),\nthis access is not part of the aggregated roles in clusters that you create 
using\nKubernetes v1.22 or later.\n\nExisting clusters that have been upgraded to Kubernetes v1.22 will not be\nsubject to this change. The [CVE\nannouncement](https:\/\/github.com\/kubernetes\/kubernetes\/issues\/103675) includes\nguidance for restricting this access in existing clusters.\n\nIf you want new clusters to retain this level of access in the aggregated roles,\nyou can create a ClusterRole that grants the write verbs on the `endpoints`\nresource and carries the `rbac.authorization.k8s.io\/aggregate-to-edit: \"true\"`\nlabel, so that [aggregation](#aggregated-clusterroles) adds the access back to\nthe `edit` (and therefore `admin`) roles.\n\n## Upgrading from ABAC\n\nClusters that originally ran older Kubernetes versions often used\npermissive ABAC policies, including granting full API access to all\nservice accounts.\n\nDefault RBAC policies grant scoped permissions to control-plane components, nodes,\nand controllers, but grant *no permissions* to service accounts outside the `kube-system` namespace\n(beyond the permissions given by [API discovery roles](#discovery-roles)).\n\nWhile far more secure, this can be disruptive to existing workloads expecting to automatically receive API permissions.\nHere are two approaches for managing this transition:\n\n### Parallel authorizers\n\nRun both the RBAC and ABAC authorizers, and specify a policy file that contains\nthe [legacy ABAC policy](\/docs\/reference\/access-authn-authz\/abac\/#policy-file-format):\n\n```shell\n--authorization-mode=...,RBAC,ABAC --authorization-policy-file=mypolicy.json\n```\n\nTo explain that first command-line option in detail: if earlier authorizers, such as Node,\ndeny a request, then the RBAC authorizer attempts to authorize the API request. If RBAC\nalso denies that API request, the ABAC authorizer is then run. 
This means that any request
allowed by *either* the RBAC or ABAC policies is allowed.

When the kube-apiserver is run with a log level of 5 or higher for the RBAC component
(`--vmodule=rbac*=5` or `--v=5`), you can see RBAC denials in the API server log
(prefixed with `RBAC`).
You can use that information to determine which roles need to be granted to which
users, groups, or service accounts.

Once you have [granted roles to service accounts](#service-account-permissions) and workloads
are running with no RBAC denial messages in the server logs, you can remove the ABAC authorizer.

### Permissive RBAC permissions

You can replicate a permissive ABAC policy using RBAC role bindings.

The following policy allows **ALL** service accounts to act as cluster administrators.
Any application running in a container receives service account credentials automatically,
and could perform any action against the API, including viewing secrets and modifying permissions.
This is not a recommended policy.

```shell
kubectl create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --user=admin \
  --user=kubelet \
  --group=system:serviceaccounts
```

After you have transitioned to use RBAC, you should adjust the access controls
for your cluster to ensure that these meet your information security needs.

---
reviewers:
- erictune
- deads2k
- liggitt
title: Using RBAC Authorization
content_type: concept
aliases:
- rbac
weight: 33
---

<!-- overview -->

Role-based access control (RBAC) is a method of regulating access to computer or
network resources based on the roles of individual users within your organization.

<!-- body -->

RBAC authorization uses the `rbac.authorization.k8s.io` API group to drive authorization
decisions, allowing you to dynamically configure policies through the Kubernetes API.

To enable RBAC, start the API server with the `--authorization-mode` flag set to a
comma-separated list that includes `RBAC`; for example:

```shell
kube-apiserver --authorization-mode=Example,RBAC --other-options --more-options
```

## API objects {#api-overview}

The RBAC API declares four kinds of Kubernetes object: *Role*, *ClusterRole*,
*RoleBinding* and *ClusterRoleBinding*. You can describe or amend the RBAC objects
using tools such as `kubectl`, just like any other Kubernetes object.

These objects, by design, impose access restrictions. If you are making changes
to a cluster as you learn, see
[privilege escalation prevention and bootstrapping](#privilege-escalation-prevention-and-bootstrapping)
to understand how those restrictions can prevent you making some changes.

### Role and ClusterRole

An RBAC *Role* or *ClusterRole* contains rules that represent a set of permissions.
Permissions are purely additive (there are no "deny" rules).

A Role always sets permissions within a particular namespace;
when you create a Role, you have to specify the namespace it belongs in.

ClusterRole, by contrast, is a non-namespaced resource. The resources have different names (Role
and ClusterRole) because a Kubernetes object always has to be either namespaced or not namespaced;
it can't be both.

ClusterRoles have several uses. You can use a ClusterRole to:

1. define permissions on namespaced resources and be granted access within individual namespace(s)
1. define permissions on namespaced resources and be granted access across all namespaces
1. define permissions on cluster-scoped resources

If you want to define a role within a namespace, use a Role;
if you want to define a role cluster-wide, use a ClusterRole.

#### Role example

Here's an example Role in the "default" namespace that can be used to grant read access to Pods:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```
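Once this Role is bound to a subject (see RoleBinding below), you can confirm the resulting access with `kubectl auth can-i` and impersonation; the user name `jane` here is only an illustration:

```shell
# Check whether the impersonated user may list pods in "default";
# prints "yes" or "no" depending on the authorizer's decision.
kubectl auth can-i list pods --namespace=default --as=jane
```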
#### ClusterRole example

A ClusterRole can be used to grant the same permissions as a Role.
Because ClusterRoles are cluster-scoped, you can also use them to grant access to:

* cluster-scoped resources (like nodes)
* non-resource endpoints (like `/healthz`)
* namespaced resources (like Pods), across all namespaces

For example: you can use a ClusterRole to allow a particular user to run
`kubectl get pods --all-namespaces`.

Here is an example of a ClusterRole that can be used to grant read access to
secrets in any particular namespace, or across all namespaces (depending on how it is
[bound](#rolebinding-and-clusterrolebinding)):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-reader
rules:
- apiGroups: [""]
  # at the HTTP level, the name of the resource for accessing Secret
  # objects is "secrets"
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
```

The name of a Role or a ClusterRole object must be a valid
[path segment name](/docs/concepts/overview/working-with-objects/names#path-segment-names).

### RoleBinding and ClusterRoleBinding

A role binding grants the permissions defined in a role to a user or set of users.
It holds a list of *subjects* (users, groups, or service accounts), and a reference to the
role being granted.
A RoleBinding grants permissions within a specific namespace whereas a ClusterRoleBinding
grants that access cluster-wide.

A RoleBinding may reference any Role in the same namespace. Alternatively, a RoleBinding
can reference a ClusterRole and bind that ClusterRole to the namespace of the RoleBinding.
If you want to bind a ClusterRole to all the namespaces in your cluster, you use a
ClusterRoleBinding.

The name of a RoleBinding or ClusterRoleBinding object must be a valid
[path segment name](/docs/concepts/overview/working-with-objects/names#path-segment-names).
#### RoleBinding examples {#rolebinding-example}

Here is an example of a RoleBinding that grants the "pod-reader" Role to the user "jane"
within the "default" namespace.
This allows "jane" to read pods in the "default" namespace.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read pods in the "default" namespace.
# You need to already have a Role named "pod-reader" in that namespace.
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
  name: jane # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
```

A RoleBinding can also reference a ClusterRole to grant the permissions defined in that
ClusterRole to resources inside the RoleBinding's namespace. This kind of reference lets you
define a set of common roles across your cluster, then reuse them within multiple namespaces.

For instance, even though the following RoleBinding refers to a ClusterRole,
"dave" (the subject, case sensitive) will only be able to read Secrets in the "development"
namespace, because the RoleBinding's namespace (in its metadata) is "development".

```yaml
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "dave" to read secrets in the "development" namespace.
# You need to already have a ClusterRole named "secret-reader".
kind: RoleBinding
metadata:
  name: read-secrets
  # The namespace of the RoleBinding determines where the permissions are granted.
  # This only grants permissions within the "development" namespace.
  namespace: development
subjects:
- kind: User
  name: dave # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```
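The same kind of binding can also be created imperatively. As a sketch, this mirrors the binding and subject names from the first example above:

```shell
kubectl create rolebinding read-pods \
  --role=pod-reader \
  --user=jane \
  --namespace=default
```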
#### ClusterRoleBinding example

To grant permissions across a whole cluster, you can use a ClusterRoleBinding.
The following ClusterRoleBinding allows any user in the group "manager" to read
secrets in any namespace.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: manager # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

After you create a binding, you cannot change the Role or ClusterRole that it refers to.
If you try to change a binding's `roleRef`, you get a validation error. If you do want
to change the `roleRef` for a binding, you need to remove the binding object and create
a replacement.

There are two reasons for this restriction:

1. Making `roleRef` immutable allows granting someone `update` permission on an existing
   binding object, so that they can manage the list of subjects, without being able to change
   the role that is granted to those subjects.
1. A binding to a different role is a fundamentally different binding.
   Requiring a binding to be deleted/recreated in order to change the `roleRef`
   ensures the full list of subjects in the binding is intended to be granted
   the new role (as opposed to enabling or accidentally modifying only the roleRef
   without verifying all of the existing subjects should be given the new role's
   permissions).

The `kubectl auth reconcile` command-line utility creates or updates a manifest file
containing RBAC objects, and handles deleting and recreating binding objects if required
to change the role they refer to.
See [command usage and examples](#kubectl-auth-reconcile) for more information.
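For instance, given a manifest file of RBAC objects, a reconcile can be applied like this (the filename is hypothetical):

```shell
kubectl auth reconcile -f my-rbac-rules.yaml
```

Unlike a plain `kubectl apply`, this command knows that `roleRef` is immutable and will delete and recreate a binding when the role it refers to needs to change.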
## Referring to resources

In the Kubernetes API, most resources are represented and accessed using a string
representation of their object name, such as `pods` for a Pod. RBAC refers to
resources using exactly the same name that appears in the URL for the relevant
API endpoint. Some Kubernetes APIs involve a *subresource*, such as the logs for a Pod.
A request for a Pod's logs looks like:

```http
GET /api/v1/namespaces/{namespace}/pods/{name}/log
```

In this case, `pods` is the namespaced resource for Pod resources, and `log` is a
subresource of `pods`. To represent this in an RBAC role, use a slash (`/`) to
delimit the resource and subresource. To allow a subject to read `pods` and
also access the `log` subresource for each of those Pods, you write:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
```

You can also refer to resources by name for certain requests through the `resourceNames` list.
When specified, requests can be restricted to individual instances of a resource.
Here is an example that restricts its subject to only `get` or `update` a
ConfigMap named `my-configmap`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-updater
rules:
- apiGroups: [""]
  # at the HTTP level, the name of the resource for accessing ConfigMap
  # objects is "configmaps"
  resources: ["configmaps"]
  resourceNames: ["my-configmap"]
  verbs: ["update", "get"]
```

You cannot restrict `create` or `deletecollection` requests by their resource name.
For `create`, this limitation is because the name of the new object may not be known
at authorization time. If you restrict `list` or `watch` by resourceName, clients must
include a `metadata.name` field selector in their `list` or `watch` request that matches
the specified resourceName in order to be authorized. For example,
`kubectl get configmaps --field-selector=metadata.name=my-configmap`
Rather than referring to individual `resources`, `apiGroups`, and `verbs`,
you can use the wildcard `*` symbol to refer to all such objects.
For `nonResourceURLs`, you can use the wildcard `*` as a suffix glob match.
For `resourceNames`, an empty set means that everything is allowed.
Here is an example that allows access to perform any current and future action on
all current and future resources in the `example.com` API group.
This is similar to the built-in `cluster-admin` role.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: example.com-superuser # DO NOT USE THIS ROLE, IT IS JUST AN EXAMPLE
rules:
- apiGroups: ["example.com"]
  resources: ["*"]
  verbs: ["*"]
```

Using wildcards in resource and verb entries could result in overly permissive access being
granted to sensitive resources.
For instance, if a new resource type is added, or a new subresource is added,
or a new custom verb is checked, the wildcard entry automatically grants access, which may be undesirable.
The [principle of least privilege](/docs/concepts/security/rbac-good-practices/#least-privilege)
should be employed, using specific resources and verbs to ensure only the permissions required for the
workload to function correctly are applied.

### Aggregated ClusterRoles

You can *aggregate* several ClusterRoles into one combined ClusterRole.
A controller, running as part of the cluster control plane, watches for ClusterRole
objects with an `aggregationRule` set. The `aggregationRule` defines a label
selector that the controller uses to match other ClusterRole objects that
should be combined into the `rules` field of this one.

The control plane overwrites any values that you manually specify in the `rules` field of
an aggregate ClusterRole. If you want to change or add rules, do so in the `ClusterRole`
objects that are selected by the `aggregationRule`.

Here is an example aggregated ClusterRole:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-monitoring: "true"
rules: [] # The control plane automatically fills in the rules
```

If you create a new ClusterRole that matches the label selector of an existing aggregated
ClusterRole, that change triggers adding the new rules into the aggregated ClusterRole.
Here is an example that adds rules to the "monitoring" ClusterRole, by creating another
ClusterRole labeled `rbac.example.com/aggregate-to-monitoring: true`.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-endpoints
  labels:
    rbac.example.com/aggregate-to-monitoring: "true"
# When you create the "monitoring-endpoints" ClusterRole,
# the rules below will be added to the "monitoring" ClusterRole.
rules:
- apiGroups: [""]
  resources: ["services", "endpointslices", "pods"]
  verbs: ["get", "list", "watch"]
```

The [default user-facing roles](#default-roles-and-role-bindings) use ClusterRole aggregation.
This lets you, as a cluster administrator, include rules for custom resources, such as those
served by CustomResourceDefinitions or aggregated API servers, to extend the default roles.

For example: the following ClusterRoles let the "admin" and "edit" default roles manage the
custom resource named CronTab, whereas the "view" role can perform only read actions on
CronTab resources. You can assume that CronTab objects are named `"crontabs"` in URLs
as seen by the API server.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregate-cron-tabs-edit
  labels:
    # Add these permissions to the "admin" and "edit" default roles.
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["stable.example.com"]
  resources: ["crontabs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aggregate-cron-tabs-view
  labels:
    # Add these permissions to the "view" default role.
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["stable.example.com"]
  resources: ["crontabs"]
  verbs: ["get", "list", "watch"]
```

### Role examples

The following examples are excerpts from Role or ClusterRole objects, showing only
the `rules` section.

Allow reading `"pods"` resources in the core API group:

```yaml
rules:
- apiGroups: [""]
  # at the HTTP level, the name of the resource for accessing Pod
  # objects is "pods"
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```

Allow reading/writing Deployments (at the HTTP level, objects with `"deployments"`
in the resource part of their URL) in the `"apps"` API groups:

```yaml
rules:
- apiGroups: ["apps"]
  # at the HTTP level, the name of the resource for accessing Deployment
  # objects is "deployments"
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

Allow reading Pods in the core API group, as well as reading or writing Job
resources in the `"batch"` API group:

```yaml
rules:
- apiGroups: [""]
  # at the HTTP level, the name of the resource for accessing Pod
  # objects is "pods"
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  # at the HTTP level, the name of the resource for accessing Job
  # objects is "jobs"
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

Allow reading a ConfigMap named "my-config" (must be bound with a
RoleBinding to limit to a single ConfigMap in a single namespace):

```yaml
rules:
- apiGroups: [""]
  # at the HTTP level, the name of the resource for accessing ConfigMap
  # objects is "configmaps"
  resources: ["configmaps"]
  resourceNames: ["my-config"]
  verbs: ["get"]
```
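Following the same pattern, a read-only excerpt for Events in the core API group could look like this (a sketch, not one of the page's own examples):

```yaml
rules:
- apiGroups: [""]
  # at the HTTP level, the name of the resource for accessing Event
  # objects is "events"
  resources: ["events"]
  verbs: ["get", "list", "watch"]
```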
Allow reading the resource `"nodes"` in the core group (because a Node is
cluster-scoped, this must be in a ClusterRole bound with a ClusterRoleBinding
to be effective):

```yaml
rules:
- apiGroups: [""]
  # at the HTTP level, the name of the resource for accessing Node
  # objects is "nodes"
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
```

Allow GET and POST requests to the non-resource endpoint `/healthz` and
all subpaths (must be in a ClusterRole bound with a ClusterRoleBinding
to be effective):

```yaml
rules:
- nonResourceURLs: ["/healthz", "/healthz/*"] # '*' in a nonResourceURL is a suffix glob match
  verbs: ["get", "post"]
```

## Referring to subjects

A RoleBinding or ClusterRoleBinding binds a role to subjects.
Subjects can be groups, users or ServiceAccounts.

Kubernetes represents usernames as strings.
These can be: plain names, such as "alice"; email-style names, like "bob@example.com";
or numeric user IDs represented as a string. It is up to you as a cluster administrator
to configure the [authentication modules](/docs/reference/access-authn-authz/authentication/)
so that authentication produces usernames in the format you want.

The prefix `system:` is reserved for Kubernetes system use, so you should ensure
that you don't have users or groups with names that start with `system:` by accident.
Other than this special prefix, the RBAC authorization system does not require any format
for usernames.

In Kubernetes, Authenticator modules provide group information.
Groups, like users, are represented as strings, and that string has no format requirements,
other than that the prefix `system:` is reserved.

[ServiceAccounts](/docs/tasks/configure-pod-container/configure-service-account/) have names
prefixed with `system:serviceaccount:`, and belong to groups that have names prefixed with
`system:serviceaccounts:`.

* `system:serviceaccount:` (singular) is the prefix for service account usernames.
* `system:serviceaccounts:` (plural) is the prefix for service account groups.
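A `subjects` list can mix these kinds in a single binding. As a sketch (the user and ServiceAccount names are hypothetical):

```yaml
subjects:
- kind: User
  name: "alice@example.com"
  apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
  name: my-app
  namespace: default
```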
### RoleBinding examples {#role-binding-examples}

The following examples are `RoleBinding` excerpts that only show the `subjects` section.

For a user named `alice@example.com`:

```yaml
subjects:
- kind: User
  name: "alice@example.com"
  apiGroup: rbac.authorization.k8s.io
```

For a group named `frontend-admins`:

```yaml
subjects:
- kind: Group
  name: "frontend-admins"
  apiGroup: rbac.authorization.k8s.io
```

For the default service account in the "kube-system" namespace:

```yaml
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
```

For all service accounts in the "qa" namespace:

```yaml
subjects:
- kind: Group
  name: system:serviceaccounts:qa
  apiGroup: rbac.authorization.k8s.io
```

For all service accounts in any namespace:

```yaml
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
```

For all authenticated users:

```yaml
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
```

For all unauthenticated users:

```yaml
subjects:
- kind: Group
  name: system:unauthenticated
  apiGroup: rbac.authorization.k8s.io
```

For all users:

```yaml
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:unauthenticated
  apiGroup: rbac.authorization.k8s.io
```

## Default roles and role bindings

API servers create a set of default ClusterRole and ClusterRoleBinding objects.
Many of these are `system:` prefixed, which indicates that the resource is directly
managed by the cluster control plane.
All of the default ClusterRoles and ClusterRoleBindings are labeled with
`kubernetes.io/bootstrapping=rbac-defaults`.

Take care when modifying ClusterRoles and ClusterRoleBindings with names that have
a `system:` prefix.
Modifications to these resources can result in non-functional clusters.

### Auto-reconciliation

At each start-up, the API server updates default cluster roles with any missing permissions,
and updates default cluster role bindings with any missing subjects.
This allows the cluster to repair accidental modifications, and helps to keep roles and
role bindings up-to-date as permissions and subjects change in new Kubernetes releases.
To opt out of this reconciliation, set the `rbac.authorization.kubernetes.io/autoupdate`
annotation on a default cluster role or default cluster RoleBinding to `false`.
Be aware that missing default permissions and subjects can result in non-functional clusters.

Auto-reconciliation is enabled by default if the RBAC authorizer is active.

### API discovery roles {#discovery-roles}

Default cluster role bindings authorize unauthenticated and authenticated users to read API
information that is deemed safe to be publicly accessible (including CustomResourceDefinitions).
To disable anonymous unauthenticated access, add `--anonymous-auth=false` flag to the
API server configuration.

To view the configuration of these roles via `kubectl` run:

```shell
kubectl get clusterroles system:discovery -o yaml
```

If you edit that ClusterRole, your changes will be overwritten on API server restart
via [auto-reconciliation](#auto-reconciliation). To avoid that overwriting,
either do not manually edit the role, or disable auto-reconciliation.

**Kubernetes RBAC API discovery roles**

| Default ClusterRole | Default ClusterRoleBinding | Description |
|---------------------|----------------------------|-------------|
| **system:basic-user** | **system:authenticated** group | Allows a user read-only access to basic information about themselves. Prior to v1.14, this role was also bound to `system:unauthenticated` by default. |
| **system:discovery** | **system:authenticated** group | Allows read-only access to API discovery endpoints needed to discover and negotiate an API level. Prior to v1.14, this role was also bound to `system:unauthenticated` by default. |
| **system:public-info-viewer** | **system:authenticated** and **system:unauthenticated** groups | Allows read-only access to non-sensitive information about the cluster. Introduced in Kubernetes v1.14. |

### User-facing roles

Some of the default ClusterRoles are not `system:` prefixed. These are intended to be
user-facing roles. They include super-user roles (`cluster-admin`), roles intended to be
granted cluster-wide using ClusterRoleBindings, and roles intended to be granted within
particular namespaces using RoleBindings (`admin`, `edit`, `view`).

User-facing ClusterRoles use [ClusterRole aggregation](#aggregated-clusterroles) to allow
admins to include rules for custom resources on these ClusterRoles. To add rules to the
`admin`, `edit`, or `view` roles, create a ClusterRole with one or more of the following labels:

```yaml
metadata:
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
```

| Default ClusterRole | Default ClusterRoleBinding | Description |
|---------------------|----------------------------|-------------|
| **cluster-admin** | **system:masters** group | Allows super-user access to perform any action on any resource. When used in a **ClusterRoleBinding**, it gives full control over every resource in the cluster and in all namespaces. When used in a **RoleBinding**, it gives full control over every resource in the role binding's namespace, including the namespace itself. |
| **admin** | None | Allows admin access, intended to be granted within a namespace using a **RoleBinding**. If used in a **RoleBinding**, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role does not allow write access to resource quota or to the namespace itself. This role also does not allow write access to EndpointSlices (or Endpoints) in clusters created using Kubernetes v1.22+. More information is available in the ["Write Access for EndpointSlices and Endpoints" section](#write-access-for-endpoints). |
| **edit** | None | Allows read/write access to most objects in a namespace. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. This role also does not allow write access to EndpointSlices (or Endpoints) in clusters created using Kubernetes v1.22+. More information is available in the ["Write Access for EndpointSlices and Endpoints" section](#write-access-for-endpoints). |
| **view** | None | Allows read-only access to see most objects in a namespace. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). |

### Core component roles

| Default ClusterRole | Default ClusterRoleBinding | Description |
|---------------------|----------------------------|-------------|
| **system:kube-scheduler** | **system:kube-scheduler** user | Allows access to the resources required by the kube-scheduler component. |
| **system:volume-scheduler** | **system:kube-scheduler** user | Allows access to the volume resources required by the kube-scheduler component. |
| **system:kube-controller-manager** | **system:kube-controller-manager** user | Allows access to the resources required by the kube-controller-manager component. The permissions required by individual controllers are detailed in the [controller roles](#controller-roles). |
| **system:node** | None | Allows access to resources required by the kubelet, **including read access to all secrets, and write access to all pod status objects**. You should use the [Node authorizer](/docs/reference/access-authn-authz/node/) and [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) instead of the `system:node` role, and allow granting API access to kubelets based on the Pods scheduled to run on them. The `system:node` role only exists for compatibility with Kubernetes clusters upgraded from versions prior to v1.8. |
| **system:node-proxier** | **system:kube-proxy** user | Allows access to the resources required by the kube-proxy component. |

### Other component roles

| Default ClusterRole | Default ClusterRoleBinding | Description |
|---------------------|----------------------------|-------------|
| **system:auth-delegator** | None | Allows delegated authentication and authorization checks. This is commonly used by add-on API servers for unified authentication and authorization. |
| **system:heapster** | None | Role for the [Heapster](https://github.com/kubernetes/heapster) component (deprecated). |
| **system:kube-aggregator** | None | Role for the [kube-aggregator](https://github.com/kubernetes/kube-aggregator) component. |
| **system:kube-dns** | **kube-dns** service account in the **kube-system** namespace | Role for the [kube-dns](/docs/concepts/services-networking/dns-pod-service/) component. |
| **system:kubelet-api-admin** | None | Allows full access to the kubelet API. |
| **system:node-bootstrapper** | None | Allows access to the resources required to perform [kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/). |
| **system:node-problem-detector** | None | Role for the [node-problem-detector](https://github.com/kubernetes/node-problem-detector) component. |
| **system:persistent-volume-provisioner** | None | Allows access to the resources required by most [dynamic volume provisioners](/docs/concepts/storage/persistent-volumes/#dynamic). |
| **system:monitoring** | **system:monitoring** group | Allows read access to control-plane monitoring endpoints, i.e. liveness and readiness endpoints (`/healthz`, `/livez`, `/readyz`), the individual health-check endpoints (`/healthz/*`, `/livez/*`, `/readyz/*`), and `/metrics`. Note that individual health check endpoints and the metric endpoint may expose sensitive information. |

### Roles for built-in controllers {#controller-roles}

The Kubernetes controller manager runs controllers that are built in to the Kubernetes
control plane.
When invoked with `--use-service-account-credentials`, kube-controller-manager starts each
controller using a separate service account.
Corresponding roles exist for each built-in controller, prefixed with `system:controller:`.
If the controller manager is not started with `--use-service-account-credentials`, it runs
all control loops using its own credential, which must be granted all the relevant roles.
These roles include:
* `system:controller:attachdetach-controller`
* `system:controller:certificate-controller`
* `system:controller:clusterrole-aggregation-controller`
* `system:controller:cronjob-controller`
* `system:controller:daemon-set-controller`
* `system:controller:deployment-controller`
* `system:controller:disruption-controller`
* `system:controller:endpoint-controller`
* `system:controller:expand-controller`
* `system:controller:generic-garbage-collector`
* `system:controller:horizontal-pod-autoscaler`
* `system:controller:job-controller`
* `system:controller:namespace-controller`
* `system:controller:node-controller`
* `system:controller:persistent-volume-binder`
* `system:controller:pod-garbage-collector`
* `system:controller:pv-protection-controller`
* `system:controller:pvc-protection-controller`
* `system:controller:replicaset-controller`
* `system:controller:replication-controller`
* `system:controller:resourcequota-controller`
* `system:controller:root-ca-cert-publisher`
* `system:controller:route-controller`
* `system:controller:service-account-controller`
* `system:controller:service-controller`
* `system:controller:statefulset-controller`
* `system:controller:ttl-controller`

## Privilege escalation prevention and bootstrapping

The RBAC API prevents users from escalating privileges by editing roles or role bindings.
Because this is enforced at the API level, it applies even when the RBAC authorizer is not in use.

### Restrictions on role creation or update

You can only create/update a role if at least one of the following things is true:

1. You already have all the permissions contained in the role, at the same scope as the
   object being modified (cluster-wide for a ClusterRole, within the same namespace or
   cluster-wide for a Role).
2. You are granted explicit permission to perform the `escalate` verb on the `roles` or
   `clusterroles` resource in the `rbac.authorization.k8s.io` API group.

For example, if `user-1` does not have the ability to list Secrets cluster-wide, they cannot
create a ClusterRole containing that permission. To allow a user to create/update roles:

1. Grant them a role that allows them to create/update Role or ClusterRole objects, as desired.
2. Grant them permission to include specific permissions in the roles they create/update:
   * implicitly, by giving them those permissions (if they attempt to create or modify a Role or
     ClusterRole with permissions they themselves have not been granted, the API request will be forbidden)
   * or explicitly allow specifying any permission in a `Role` or `ClusterRole` by giving them
     permission to perform the `escalate` verb on `roles` or `clusterroles` resources in the
     `rbac.authorization.k8s.io` API group

### Restrictions on role binding creation or update

You can only create/update a role binding if you already have all the permissions contained
in the referenced role (at the same scope as the role binding) *or* if you have been
authorized to perform the `bind` verb on the referenced role.
For example, if `user-1` does not have the ability to list Secrets cluster-wide, they cannot
create a ClusterRoleBinding to a role that grants that permission.
To allow a user to create/update role bindings:

1. Grant them a role that allows them to create/update RoleBinding or ClusterRoleBinding
   objects, as desired.
2. Grant them permissions needed to bind a particular role:
   * implicitly, by giving them the permissions contained in the role
   * explicitly, by giving them permission to perform the `bind` verb on the particular
     Role (or ClusterRole)

For example, this ClusterRole and RoleBinding would allow `user-1` to grant other users the
`admin`, `edit`, and `view` roles in the namespace `user-1-namespace`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-grantor
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings"]
  verbs: ["create"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  verbs: ["bind"]
  # omit resourceNames to allow binding any ClusterRole
  resourceNames: ["admin","edit","view"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-grantor-binding
  namespace: user-1-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: role-grantor
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user-1
```

When bootstrapping the first roles and role bindings, it is necessary for the initial user
to grant permissions they do not yet have.
To bootstrap initial roles and role bindings:

* Use a credential with the "system:masters" group, which is bound to the "cluster-admin"
  super-user role by the default bindings.

## Command-line utilities

### `kubectl create role`

Creates a Role object defining permissions within a single namespace. Examples:

* Create a Role named "pod-reader" that allows users to perform `get`, `watch` and `list` on pods:

  ```shell
  kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods
  ```

* Create a Role named "pod-reader" with resourceNames specified:

  ```shell
  kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod
  ```

* Create a Role named "foo" with apiGroups specified:

  ```shell
  kubectl create role foo --verb=get,list,watch --resource=replicasets.apps
  ```

* Create a Role named "foo" with subresource permissions:

  ```shell
  kubectl create role foo --verb=get,list,watch --resource=pods,pods/status
  ```

* Create a Role named "my-component-lease-holder" with permissions to get/update a resource
  with a specific name:

  ```shell
  kubectl create role my-component-lease-holder --verb=get,list,watch,update --resource=lease --resource-name=my-component
  ```

### `kubectl create clusterrole`

Creates a ClusterRole. Examples:
watch  and  list  on pods        shell   kubectl create clusterrole pod reader   verb get list watch   resource pods          Create a ClusterRole named  pod reader  with resourceNames specified        shell   kubectl create clusterrole pod reader   verb get   resource pods   resource name readablepod   resource name anotherpod          Create a ClusterRole named  foo  with apiGroups specified        shell   kubectl create clusterrole foo   verb get list watch   resource replicasets apps          Create a ClusterRole named  foo  with subresource permissions        shell   kubectl create clusterrole foo   verb get list watch   resource pods pods status          Create a ClusterRole named  foo  with nonResourceURL specified        shell   kubectl create clusterrole  foo    verb get   non resource url  logs            Create a ClusterRole named  monitoring  with an aggregationRule specified        shell   kubectl create clusterrole monitoring   aggregation rule  rbac example com aggregate to monitoring true              kubectl create rolebinding   Grants a Role or ClusterRole within a specific namespace  Examples     Within the namespace  acme   grant the permissions in the  admin  ClusterRole to a user named  bob         shell   kubectl create rolebinding bob admin binding   clusterrole admin   user bob   namespace acme          Within the namespace  acme   grant the permissions in the  view  ClusterRole to the service account in the namespace  acme  named  myapp         shell   kubectl create rolebinding myapp view binding   clusterrole view   serviceaccount acme myapp   namespace acme          Within the namespace  acme   grant the permissions in the  view  ClusterRole to a service account in the namespace  myappnamespace  named  myapp         shell   kubectl create rolebinding myappnamespace myapp view binding   clusterrole view   serviceaccount myappnamespace myapp   namespace acme             kubectl create clusterrolebinding   Grants a ClusterRole across the 
entire cluster  all namespaces   Examples     Across the entire cluster  grant the permissions in the  cluster admin  ClusterRole to a user named  root         shell   kubectl create clusterrolebinding root cluster admin binding   clusterrole cluster admin   user root          Across the entire cluster  grant the permissions in the  system node proxier  ClusterRole to a user named  system kube proxy         shell   kubectl create clusterrolebinding kube proxy binding   clusterrole system node proxier   user system kube proxy          Across the entire cluster  grant the permissions in the  view  ClusterRole to a service account named  myapp  in the namespace  acme         shell   kubectl create clusterrolebinding myapp view binding   clusterrole view   serviceaccount acme myapp             kubectl auth reconcile    kubectl auth reconcile   Creates or updates  rbac authorization k8s io v1  API objects from a manifest file   Missing objects are created  and the containing namespace is created for namespaced objects  if required   Existing roles are updated to include the permissions in the input objects  and remove extra permissions if    remove extra permissions  is specified   Existing bindings are updated to include the subjects in the input objects  and remove extra subjects if    remove extra subjects  is specified   Examples     Test applying a manifest file of RBAC objects  displaying changes that would be made        shell   kubectl auth reconcile  f my rbac rules yaml   dry run client          Apply a manifest file of RBAC objects  preserving any extra permissions  in roles  and any extra subjects  in bindings         shell   kubectl auth reconcile  f my rbac rules yaml          Apply a manifest file of RBAC objects  removing any extra permissions  in roles  and any extra subjects  in bindings         shell   kubectl auth reconcile  f my rbac rules yaml   remove extra subjects   remove extra permissions           ServiceAccount permissions   service account 
permissions   Default RBAC policies grant scoped permissions to control plane components  nodes  and controllers  but grant  no permissions  to service accounts outside the  kube system  namespace  beyond the permissions given by  API discovery roles   discovery roles     This allows you to grant particular roles to particular ServiceAccounts as needed  Fine grained role bindings provide greater security  but require more effort to administrate  Broader grants can give unnecessary  and potentially escalating  API access to ServiceAccounts  but are easier to administrate   In order from most secure to least secure  the approaches are   1  Grant a role to an application specific service account  best practice      This requires the application to specify a  serviceAccountName  in its pod spec     and for the service account to be created  via the API  application manifest   kubectl create serviceaccount   etc        For example  grant read only permission within  my namespace  to the  my sa  service account         shell    kubectl create rolebinding my sa view          clusterrole view          serviceaccount my namespace my sa          namespace my namespace         2  Grant a role to the  default  service account in a namespace     If an application does not specify a  serviceAccountName   it uses the  default  service account          Permissions given to the  default  service account are available to any pod    in the namespace that does not specify a  serviceAccountName           For example  grant read only permission within  my namespace  to the  default  service account         shell    kubectl create rolebinding default view          clusterrole view          serviceaccount my namespace default          namespace my namespace            Many  add ons   docs concepts cluster administration addons   run as the     default  service account in the  kube system  namespace     To allow those add ons to run with super user access  grant cluster admin    
permissions to the  default  service account in the  kube system  namespace          Enabling this means the  kube system  namespace contains Secrets    that grant super user access to your cluster s API             shell    kubectl create clusterrolebinding add on cluster admin          clusterrole cluster admin          serviceaccount kube system default         3  Grant a role to all service accounts in a namespace     If you want all applications in a namespace to have a role  no matter what service account they use     you can grant a role to the service account group for that namespace      For example  grant read only permission within  my namespace  to all service accounts in that namespace         shell    kubectl create rolebinding serviceaccounts view          clusterrole view          group system serviceaccounts my namespace          namespace my namespace         4  Grant a limited role to all service accounts cluster wide  discouraged      If you don t want to manage permissions per namespace  you can grant a cluster wide role to all service accounts      For example  grant read only permission across all namespaces to all service accounts in the cluster         shell    kubectl create clusterrolebinding serviceaccounts view          clusterrole view         group system serviceaccounts         5  Grant super user access to all service accounts cluster wide  strongly discouraged      If you don t care about partitioning permissions at all  you can grant super user access to all service accounts          This allows any application full access to your cluster  and also grants    any user with read access to Secrets  or the ability to create any pod     full access to your cluster             shell    kubectl create clusterrolebinding serviceaccounts cluster admin          clusterrole cluster admin          group system serviceaccounts            Write access for EndpointSlices and Endpoints   write access for endpoints   Kubernetes clusters created 
before Kubernetes v1 22 include write access to EndpointSlices  and Endpoints  in the aggregated  edit  and  admin  roles  As a mitigation for  CVE 2021 25740  https   github com kubernetes kubernetes issues 103675   this access is not part of the aggregated roles in clusters that you create using Kubernetes v1 22 or later   Existing clusters that have been upgraded to Kubernetes v1 22 will not be subject to this change  The  CVE announcement  https   github com kubernetes kubernetes issues 103675  includes guidance for restricting this access in existing clusters   If you want new clusters to retain this level of access in the aggregated roles  you can create the following ClusterRole        Upgrading from ABAC  Clusters that originally ran older Kubernetes versions often used permissive ABAC policies  including granting full API access to all service accounts   Default RBAC policies grant scoped permissions to control plane components  nodes  and controllers  but grant  no permissions  to service accounts outside the  kube system  namespace  beyond the permissions given by  API discovery roles   discovery roles     While far more secure  this can be disruptive to existing workloads expecting to automatically receive API permissions  Here are two approaches for managing this transition       Parallel authorizers  Run both the RBAC and ABAC authorizers  and specify a policy file that contains the  legacy ABAC policy   docs reference access authn authz abac  policy file format       shell   authorization mode     RBAC ABAC   authorization policy file mypolicy json      To explain that first command line option in detail  if earlier authorizers  such as Node  deny a request  then the RBAC authorizer attempts to authorize the API request  If RBAC also denies that API request  the ABAC authorizer is then run  This means that any request allowed by  either  the RBAC or ABAC policies is allowed   When the kube apiserver is run with a log level of 5 or higher for the RBAC 
component     vmodule rbac  5  or    v 5    you can see RBAC denials in the API server log  prefixed with  RBAC    You can use that information to determine which roles need to be granted to which users  groups  or service accounts   Once you have  granted roles to service accounts   service account permissions  and workloads are running with no RBAC denial messages in the server logs  you can remove the ABAC authorizer       Permissive RBAC permissions  You can replicate a permissive ABAC policy using RBAC role bindings    The following policy allows   ALL   service accounts to act as cluster administrators  Any application running in a container receives service account credentials automatically  and could perform any action against the API  including viewing secrets and modifying permissions  This is not a recommended policy      shell kubectl create clusterrolebinding permissive binding       clusterrole cluster admin       user admin       user kubelet       group system serviceaccounts       After you have transitioned to use RBAC  you should adjust the access controls for your cluster to ensure that these meet your information security needs "}
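For illustration, the `permissive-binding` created by the `kubectl create clusterrolebinding` command above corresponds roughly to the following manifest. This is a sketch of the equivalent declarative object (the server-generated metadata fields are omitted), not output captured from a real cluster:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: permissive-binding
roleRef:
  # The role being granted: full cluster-admin access.
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  # The --user and --group flags each become one subject entry.
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
```

Writing the binding as a manifest makes it easier to review (and later delete) during the ABAC-to-RBAC transition than an imperative command alone.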
{"questions":"kubernetes reference kind CertificateSigningRequest liggitt munnerz mikedanese enj reviewers apimetadata apiVersion certificates k8s io v1 title Certificates and Certificate Signing Requests","answers":"---\nreviewers:\n- liggitt\n- mikedanese\n- munnerz\n- enj\ntitle: Certificates and Certificate Signing Requests\napi_metadata:\n- apiVersion: \"certificates.k8s.io\/v1\"\n  kind: \"CertificateSigningRequest\"\n  override_link_text: \"CSR v1\"\n- apiVersion: \"certificates.k8s.io\/v1alpha1\"\n  kind: \"ClusterTrustBundle\"  \ncontent_type: concept\nweight: 60\n---\n\n<!-- overview -->\n\nKubernetes certificate and trust bundle APIs enable automation of\n[X.509](https:\/\/www.itu.int\/rec\/T-REC-X.509) credential provisioning by providing\na programmatic interface for clients of the Kubernetes API to request and obtain\nX.509 certificates from a Certificate Authority (CA).\n\nThere is also experimental (alpha) support for distributing [trust bundles](#cluster-trust-bundles).\n\n<!-- body -->\n\n## Certificate signing requests\n\n\n\n\nA [CertificateSigningRequest](\/docs\/reference\/kubernetes-api\/authentication-resources\/certificate-signing-request-v1\/)\n(CSR) resource is used to request that a certificate be signed\nby a denoted signer, after which the request may be approved or denied before\nfinally being signed.\n\n\n### Request signing process\n\nThe CertificateSigningRequest resource type allows a client to ask for an X.509 certificate\nto be issued, based on a signing request.\nThe CertificateSigningRequest object includes a PEM-encoded PKCS#10 signing request in\nthe `spec.request` field. 
The CertificateSigningRequest denotes the signer (the\nrecipient that the request is being made to) using the `spec.signerName` field.\nNote that `spec.signerName` is a required key after API version `certificates.k8s.io\/v1`.\nIn Kubernetes v1.22 and later, clients may optionally set the `spec.expirationSeconds`\nfield to request a particular lifetime for the issued certificate. The minimum valid\nvalue for this field is `600`, i.e. ten minutes.\n\nOnce created, a CertificateSigningRequest must be approved before it can be signed.\nDepending on the signer selected, a CertificateSigningRequest may be automatically approved\nby a controller.\nOtherwise, a CertificateSigningRequest must be manually approved either via the REST API (or client-go)\nor by running `kubectl certificate approve`. Likewise, a CertificateSigningRequest may also be denied,\nwhich tells the configured signer that it must not sign the request.\n\nFor certificates that have been approved, the next step is signing. The relevant signing controller\nfirst validates that the signing conditions are met and then creates a certificate.\nThe signing controller then updates the CertificateSigningRequest, storing the new certificate into\nthe `status.certificate` field of the existing CertificateSigningRequest object. The\n`status.certificate` field is either empty or contains an X.509 certificate, encoded in PEM format.\nThe CertificateSigningRequest `status.certificate` field is empty until the signer does this.\n\nOnce the `status.certificate` field has been populated, the request has been completed and clients can now\nfetch the signed certificate PEM data from the CertificateSigningRequest resource.\nThe signers can instead deny certificate signing if the approval conditions are not met.\n\nIn order to reduce the number of old CertificateSigningRequest resources left in a cluster, a garbage collection\ncontroller runs periodically. 
The garbage collection removes CertificateSigningRequests that have not changed\nstate for some duration:\n\n* Approved requests: automatically deleted after 1 hour\n* Denied requests: automatically deleted after 1 hour\n* Failed requests: automatically deleted after 1 hour\n* Pending requests: automatically deleted after 24 hours\n* All requests: automatically deleted after the issued certificate has expired\n\n### Certificate signing authorization {#authorization}\n\nTo allow creating a CertificateSigningRequest and retrieving any CertificateSigningRequest:\n\n* Verbs: `create`, `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`\n\nFor example:\n\n\n\nTo allow approving a CertificateSigningRequest:\n\n* Verbs: `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`\n* Verbs: `update`, group: `certificates.k8s.io`, resource: `certificatesigningrequests\/approval`\n* Verbs: `approve`, group: `certificates.k8s.io`, resource: `signers`, resourceName: `<signerNameDomain>\/<signerNamePath>` or `<signerNameDomain>\/*`\n\nFor example:\n\n\n\nTo allow signing a CertificateSigningRequest:\n\n* Verbs: `get`, `list`, `watch`, group: `certificates.k8s.io`, resource: `certificatesigningrequests`\n* Verbs: `update`, group: `certificates.k8s.io`, resource: `certificatesigningrequests\/status`\n* Verbs: `sign`, group: `certificates.k8s.io`, resource: `signers`, resourceName: `<signerNameDomain>\/<signerNamePath>` or `<signerNameDomain>\/*`\n\n\n\n\n## Signers\n\nSigners abstractly represent the entity or entities that might sign, or have\nsigned, a security certificate.\n\nAny signer that is made available for use outside a particular cluster should provide information\nabout how the signer works, so that consumers can understand what that means for CertificateSigningRequests\nand (if enabled) [ClusterTrustBundles](#cluster-trust-bundles).\nThis includes:\n\n1. 
**Trust distribution**: how trust anchors (CA certificates or certificate bundles) are distributed.\n1. **Permitted subjects**: any restrictions on and behavior when a disallowed subject is requested.\n1. **Permitted x509 extensions**: including IP subjectAltNames, DNS subjectAltNames,\n   Email subjectAltNames, URI subjectAltNames etc, and behavior when a disallowed extension is requested.\n1. **Permitted key usages \/ extended key usages**: any restrictions on and behavior\n   when usages different than the signer-determined usages are specified in the CSR.\n1. **Expiration\/certificate lifetime**: whether it is fixed by the signer, configurable by the admin, determined by the CSR `spec.expirationSeconds` field, etc\n   and the behavior when the signer-determined expiration is different from the CSR `spec.expirationSeconds` field.\n1. **CA bit allowed\/disallowed**: and behavior if a CSR contains a request for a CA certificate when the signer does not permit it.\n\nCommonly, the `status.certificate` field of a CertificateSigningRequest contains a\nsingle PEM-encoded X.509 certificate once the CSR is approved and the certificate is issued.\nSome signers store multiple certificates into the `status.certificate` field. In\nthat case, the documentation for the signer should specify the meaning of\nadditional certificates; for example, this might be the certificate plus\nintermediates to be presented during TLS handshakes.\n\nIf you want to make the _trust anchor_ (root certificate) available, this should be done\nseparately from a CertificateSigningRequest and its `status.certificate` field. For example,\nyou could use a ClusterTrustBundle.\n\nThe PKCS#10 signing request format does not have a standard mechanism to specify a\ncertificate expiration or lifetime. The expiration or lifetime therefore has to be set\nthrough the `spec.expirationSeconds` field of the CSR object. 
The built-in signers\nuse the `ClusterSigningDuration` configuration option, which defaults to 1 year,\n(the `--cluster-signing-duration` command-line flag of the kube-controller-manager)\nas the default when no `spec.expirationSeconds` is specified. When `spec.expirationSeconds`\nis specified, the minimum of `spec.expirationSeconds` and `ClusterSigningDuration` is\nused.\n\n\nThe `spec.expirationSeconds` field was added in Kubernetes v1.22. Earlier versions of Kubernetes do not honor this field.\nKubernetes API servers prior to v1.22 will silently drop this field when the object is created.\n\n\n### Kubernetes signers\n\nKubernetes provides built-in signers that each have a well-known `signerName`:\n\n1. `kubernetes.io\/kube-apiserver-client`: signs certificates that will be honored as client certificates by the API server.\n   Never auto-approved by .\n   1. Trust distribution: signed certificates must be honored as client certificates by the API server. The CA bundle is not distributed by any other means.\n   1. Permitted subjects - no subject restrictions, but approvers and signers may choose not to approve or sign.\n      Certain subjects like cluster-admin level users or groups vary between distributions and installations,\n      but deserve additional scrutiny before approval and signing.\n      The `CertificateSubjectRestriction` admission plugin is enabled by default to restrict `system:masters`,\n      but it is often not the only cluster-admin subject in a cluster.\n   1. Permitted x509 extensions - honors subjectAltName and key usage extensions and discards other extensions.\n   1. Permitted key usages - must include `[\"client auth\"]`. Must not include key usages beyond `[\"digital signature\", \"key encipherment\", \"client auth\"]`.\n   1. 
Expiration\/certificate lifetime - for the kube-controller-manager implementation of this signer, set to the minimum\n      of the `--cluster-signing-duration` option or, if specified, the `spec.expirationSeconds` field of the CSR object.\n   1. CA bit allowed\/disallowed - not allowed.\n\n1. `kubernetes.io\/kube-apiserver-client-kubelet`: signs client certificates that will be honored as client certificates by the\n   API server.\n   May be auto-approved by .\n   1. Trust distribution: signed certificates must be honored as client certificates by the API server. The CA bundle\n      is not distributed by any other means.\n   1. Permitted subjects - organizations are exactly `[\"system:nodes\"]`, common name is \"`system:node:${NODE_NAME}`\".\n   1. Permitted x509 extensions - honors key usage extensions, forbids subjectAltName extensions and drops other extensions.\n   1. Permitted key usages - `[\"key encipherment\", \"digital signature\", \"client auth\"]` or `[\"digital signature\", \"client auth\"]`.\n   1. Expiration\/certificate lifetime - for the kube-controller-manager implementation of this signer, set to the minimum\n      of the `--cluster-signing-duration` option or, if specified, the `spec.expirationSeconds` field of the CSR object.\n   1. CA bit allowed\/disallowed - not allowed.\n\n1. `kubernetes.io\/kubelet-serving`: signs serving certificates that are honored as a valid kubelet serving certificate\n   by the API server, but has no other guarantees.\n   Never auto-approved by .\n   1. Trust distribution: signed certificates must be honored by the API server as valid to terminate connections to a kubelet.\n      The CA bundle is not distributed by any other means.\n   1. Permitted subjects - organizations are exactly `[\"system:nodes\"]`, common name is \"`system:node:${NODE_NAME}`\".\n   1. 
Permitted x509 extensions - honors key usage and DNSName\/IPAddress subjectAltName extensions, forbids EmailAddress and\n      URI subjectAltName extensions, drops other extensions. At least one DNS or IP subjectAltName must be present.\n   1. Permitted key usages - `[\"key encipherment\", \"digital signature\", \"server auth\"]` or `[\"digital signature\", \"server auth\"]`.\n   1. Expiration\/certificate lifetime - for the kube-controller-manager implementation of this signer, set to the minimum\n      of the `--cluster-signing-duration` option or, if specified, the `spec.expirationSeconds` field of the CSR object.\n   1. CA bit allowed\/disallowed - not allowed.\n\n1. `kubernetes.io\/legacy-unknown`: has no guarantees for trust at all. Some third-party distributions of Kubernetes\n   may honor client certificates signed by it. The stable CertificateSigningRequest API (version `certificates.k8s.io\/v1` and later)\n   does not allow the `signerName` to be set to `kubernetes.io\/legacy-unknown`.\n   Never auto-approved by .\n   1. Trust distribution: None. There is no standard trust or distribution for this signer in a Kubernetes cluster.\n   1. Permitted subjects - any\n   1. Permitted x509 extensions - honors subjectAltName and key usage extensions and discards other extensions.\n   1. Permitted key usages - any\n   1. Expiration\/certificate lifetime - for the kube-controller-manager implementation of this signer, set to the minimum\n      of the `--cluster-signing-duration` option or, if specified, the `spec.expirationSeconds` field of the CSR object.\n   1. CA bit allowed\/disallowed - not allowed.\n\nThe kube-controller-manager implements [control plane signing](#signer-control-plane) for each of the built-in\nsigners. Failures for all of these are only reported in kube-controller-manager logs.\n\n\nThe `spec.expirationSeconds` field was added in Kubernetes v1.22. 
Earlier versions of Kubernetes do not honor this field.\nKubernetes API servers prior to v1.22 will silently drop this field when the object is created.\n\n\nDistribution of trust happens out of band for these signers. Any trust outside of those described above is strictly\ncoincidental. For instance, some distributions may honor `kubernetes.io\/legacy-unknown` as client certificates for the\nkube-apiserver, but this is not a standard.\nNone of these usages are related to ServiceAccount token secrets `.data[ca.crt]` in any way. That CA bundle is only\nguaranteed to verify a connection to the API server using the default service (`kubernetes.default.svc`).\n\n### Custom signers\n\nYou can also introduce your own custom signer, which should have a similar prefixed name but using your\nown domain name. For example, if you represent an open source project that uses the domain `open-fictional.example`\nthen you might use `issuer.open-fictional.example\/service-mesh` as a signer name.\n\nA custom signer uses the Kubernetes API to issue a certificate. See [API-based signers](#signer-api).\n\n## Signing\n\n### Control plane signer {#signer-control-plane}\n\nThe Kubernetes control plane implements each of the\n[Kubernetes signers](\/docs\/reference\/access-authn-authz\/certificate-signing-requests\/#kubernetes-signers),\nas part of the kube-controller-manager.\n\n\nPrior to Kubernetes v1.18, the kube-controller-manager would sign any CSRs that\nwere marked as approved.\n\n\n\nThe `spec.expirationSeconds` field was added in Kubernetes v1.22.\nEarlier versions of Kubernetes do not honor this field.\nKubernetes API servers prior to v1.22 will silently drop this field when the object is created.\n\n\n### API-based signers {#signer-api}\n\nUsers of the REST API can sign CSRs by submitting an UPDATE request to the `status`\nsubresource of the CSR to be signed.\n\nAs part of this request, the `status.certificate` field should be set to contain the\nsigned certificate. 
This field contains one or more PEM-encoded certificates.\n\nAll PEM blocks must have the \"CERTIFICATE\" label, contain no headers,\nand the encoded data must be a BER-encoded ASN.1 Certificate structure\nas described in [section 4 of RFC5280](https:\/\/tools.ietf.org\/html\/rfc5280#section-4.1).\n\nExample certificate content:\n\n```\n-----BEGIN CERTIFICATE-----\nMIIDgjCCAmqgAwIBAgIUC1N1EJ4Qnsd322BhDPRwmg3b\/oAwDQYJKoZIhvcNAQEL\nBQAwXDELMAkGA1UEBhMCeHgxCjAIBgNVBAgMAXgxCjAIBgNVBAcMAXgxCjAIBgNV\nBAoMAXgxCjAIBgNVBAsMAXgxCzAJBgNVBAMMAmNhMRAwDgYJKoZIhvcNAQkBFgF4\nMB4XDTIwMDcwNjIyMDcwMFoXDTI1MDcwNTIyMDcwMFowNzEVMBMGA1UEChMMc3lz\ndGVtOm5vZGVzMR4wHAYDVQQDExVzeXN0ZW06bm9kZToxMjcuMC4wLjEwggEiMA0G\nCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDne5X2eQ1JcLZkKvhzCR4Hxl9+ZmU3\n+e1zfOywLdoQxrPi+o4hVsUH3q0y52BMa7u1yehHDRSaq9u62cmi5ekgXhXHzGmm\nkmW5n0itRECv3SFsSm2DSghRKf0mm6iTYHWDHzUXKdm9lPPWoSOxoR5oqOsm3JEh\nQ7Et13wrvTJqBMJo1GTwQuF+HYOku0NF\/DLqbZIcpI08yQKyrBgYz2uO51\/oNp8a\nsTCsV4OUfyHhx2BBLUo4g4SptHFySTBwlpRWBnSjZPOhmN74JcpTLB4J5f4iEeA7\n2QytZfADckG4wVkhH3C2EJUmRtFIBVirwDn39GXkSGlnvnMgF3uLZ6zNAgMBAAGj\nYTBfMA4GA1UdDwEB\/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDAjAMBgNVHRMB\nAf8EAjAAMB0GA1UdDgQWBBTREl2hW54lkQBDeVCcd2f2VSlB1DALBgNVHREEBDAC\nggAwDQYJKoZIhvcNAQELBQADggEBABpZjuIKTq8pCaX8dMEGPWtAykgLsTcD2jYr\nL0\/TCrqmuaaliUa42jQTt2OVsVP\/L8ofFunj\/KjpQU0bvKJPLMRKtmxbhXuQCQi1\nqCRkp8o93mHvEz3mTUN+D1cfQ2fpsBENLnpS0F4G\/JyY2Vrh19\/X8+mImMEK5eOy\no0BMby7byUj98WmcUvNCiXbC6F45QTmkwEhMqWns0JZQY+\/XeDhEcg+lJvz9Eyo2\naGgPsye1o3DpyXnyfJWAWMhOz7cikS5X2adesbgI86PhEHBXPIJ1v13ZdfCExmdd\nM1fLPhLyR54fGaY+7\/X8P9AZzPefAkwizeXwe9ii6\/a08vWoiE4=\n-----END CERTIFICATE-----\n```\n\nNon-PEM content may appear before or after the CERTIFICATE PEM blocks and is unvalidated,\nto allow for explanatory text as described in [section 5.2 of RFC7468](https:\/\/www.rfc-editor.org\/rfc\/rfc7468#section-5.2).\n\nWhen encoded in JSON or YAML, this field is base-64 encoded.\nA CertificateSigningRequest containing the example 
certificate above would look like this:\n\n```yaml\napiVersion: certificates.k8s.io\/v1\nkind: CertificateSigningRequest\n...\nstatus:\n  certificate: \"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JS...\"\n```\n\n## Approval or rejection  {#approval-rejection}\n\nBefore a [signer](#signers) issues a certificate based on a CertificateSigningRequest,\nthe signer typically checks that the issuance for that CSR has been _approved_.\n\n### Control plane automated approval {#approval-rejection-control-plane}\n\nThe kube-controller-manager ships with a built-in approver for certificates with\na signerName of `kubernetes.io\/kube-apiserver-client-kubelet` that delegates various\npermissions on CSRs for node credentials to authorization.\nThe kube-controller-manager POSTs SubjectAccessReview resources to the API server\nin order to check authorization for certificate approval.\n\n### Approval or rejection using `kubectl` {#approval-rejection-kubectl}\n\nA Kubernetes administrator (with appropriate permissions) can manually approve\n(or deny) CertificateSigningRequests by using the `kubectl certificate\napprove` and `kubectl certificate deny` commands.\n\nTo approve a CSR with kubectl:\n\n```shell\nkubectl certificate approve <certificate-signing-request-name>\n```\n\nLikewise, to deny a CSR:\n\n```shell\nkubectl certificate deny <certificate-signing-request-name>\n```\n\n### Approval or rejection using the Kubernetes API {#approval-rejection-api-client}\n\nUsers of the REST API can approve CSRs by submitting an UPDATE request to the `approval`\nsubresource of the CSR to be approved. 
For example, you could write an\n that watches for a particular\nkind of CSR and then sends an UPDATE to approve them.\n\nWhen you make an approval or rejection request, set either the `Approved` or `Denied`\nstatus condition based on the state you determine:\n\nFor `Approved` CSRs:\n\n```yaml\napiVersion: certificates.k8s.io\/v1\nkind: CertificateSigningRequest\n...\nstatus:\n  conditions:\n  - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n    lastTransitionTime: \"2020-02-08T11:37:35Z\"\n    message: Approved by my custom approver controller\n    reason: ApprovedByMyPolicy # You can set this to any string\n    type: Approved\n```\n\nFor `Denied` CSRs:\n\n```yaml\napiVersion: certificates.k8s.io\/v1\nkind: CertificateSigningRequest\n...\nstatus:\n  conditions:\n  - lastUpdateTime: \"2020-02-08T11:37:35Z\"\n    lastTransitionTime: \"2020-02-08T11:37:35Z\"\n    message: Denied by my custom approver controller\n    reason: DeniedByMyPolicy # You can set this to any string\n    type: Denied\n```\n\nIt's usual to set `status.conditions.reason` to a machine-friendly reason\ncode using TitleCase; this is a convention but you can set it to anything\nyou like. If you want to add a note for human consumption, use the\n`status.conditions.message` field.\n\n\n## Cluster trust bundles {#cluster-trust-bundles}\n\n\n\n\nIn Kubernetes , you must enable the `ClusterTrustBundle`\n[feature gate](\/docs\/reference\/command-line-tools-reference\/feature-gates\/)\n_and_ the `certificates.k8s.io\/v1alpha1`\n in order to use\nthis API.\n\n\nA ClusterTrustBundle is a cluster-scoped object for distributing X.509 trust\nanchors (root certificates) to workloads within the cluster. 
They're designed\nto work well with the [signer](#signers) concept from CertificateSigningRequests.\n\nClusterTrustBundles can be used in two modes:\n[signer-linked](#ctb-signer-linked) and [signer-unlinked](#ctb-signer-unlinked).\n\n### Common properties and validation {#ctb-common}\n\nAll ClusterTrustBundle objects have strong validation on the contents of their\n`trustBundle` field. That field must contain one or more X.509 certificates,\nDER-serialized, each wrapped in a PEM `CERTIFICATE` block. The certificates\nmust parse as valid X.509 certificates.\n\nEsoteric PEM features like inter-block data and intra-block headers are either\nrejected during object validation, or can be ignored by consumers of the object.\nAdditionally, consumers are allowed to reorder the certificates in\nthe bundle with their own arbitrary but stable ordering.\n\nClusterTrustBundle objects should be considered world-readable within the\ncluster. If your cluster uses [RBAC](\/docs\/reference\/access-authn-authz\/rbac\/)\nauthorization, all ServiceAccounts have a default grant that allows them to\n**get**, **list**, and **watch** all ClusterTrustBundle objects.\nIf you use your own authorization mechanism and you have enabled\nClusterTrustBundles in your cluster, you should set up an equivalent rule to\nmake these objects public within the cluster, so that they work as intended.\n\nIf you do not have permission to list cluster trust bundles by default in your\ncluster, you can impersonate a service account you have access to in order to\nsee available ClusterTrustBundles:\n\n```bash\nkubectl get clustertrustbundles --as='system:serviceaccount:mynamespace:default'\n```\n\n### Signer-linked ClusterTrustBundles {#ctb-signer-linked}\n\nSigner-linked ClusterTrustBundles are associated with a _signer name_, like this:\n\n```yaml\napiVersion: certificates.k8s.io\/v1alpha1\nkind: ClusterTrustBundle\nmetadata:\n  name: example.com:mysigner:foo\nspec:\n  signerName: example.com\/mysigner\n  
trustBundle: \"<... PEM data ...>\"\n```\n\nThese ClusterTrustBundles are intended to be maintained by a signer-specific\ncontroller in the cluster, so they have several security features:\n\n* To create or update a signer-linked ClusterTrustBundle, you must be permitted\n  to **attest** on the signer (custom authorization verb `attest`,\n  API group `certificates.k8s.io`; resource path `signers`). You can configure\n  authorization for the specific resource name\n  `<signerNameDomain>\/<signerNamePath>` or match a pattern such as\n  `<signerNameDomain>\/*`.\n* Signer-linked ClusterTrustBundles **must** be named with a prefix derived from\n  their `spec.signerName` field. Slashes (`\/`) are replaced with colons (`:`),\n  and a final colon is appended. This is followed by an arbitrary name. For\n  example, the signer `example.com\/mysigner` can be linked to a\n  ClusterTrustBundle `example.com:mysigner:<arbitrary-name>`.\n\nSigner-linked ClusterTrustBundles will typically be consumed in workloads\nby a combination of a\n[field selector](\/docs\/concepts\/overview\/working-with-objects\/field-selectors\/) on the signer name, and a separate\n[label selector](\/docs\/concepts\/overview\/working-with-objects\/labels\/#label-selectors).\n\n### Signer-unlinked ClusterTrustBundles {#ctb-signer-unlinked}\n\nSigner-unlinked ClusterTrustBundles have an empty `spec.signerName` field, like this:\n\n```yaml\napiVersion: certificates.k8s.io\/v1alpha1\nkind: ClusterTrustBundle\nmetadata:\n  name: foo\nspec:\n  # no signerName specified, so the field is blank\n  trustBundle: \"<... 
PEM data ...>\"\n```\n\nThey are primarily intended for cluster configuration use cases.\nEach signer-unlinked ClusterTrustBundle is an independent object, in contrast to the\ncustomary grouping behavior of signer-linked ClusterTrustBundles.\n\nSigner-unlinked ClusterTrustBundles have no `attest` verb requirement.\nInstead, you control access to them directly using the usual mechanisms,\nsuch as role-based access control.\n\nTo distinguish them from signer-linked ClusterTrustBundles, the names of\nsigner-unlinked ClusterTrustBundles **must not** contain a colon (`:`).\n\n### Accessing ClusterTrustBundles from pods {#ctb-projection}\n\nThe contents of ClusterTrustBundles can be injected into the container filesystem, similar to ConfigMaps and Secrets.\nSee the [clusterTrustBundle projected volume source](\/docs\/concepts\/storage\/projected-volumes#clustertrustbundle) for more details.\n\n<!-- TODO this should become a task page -->\n## How to issue a certificate for a user {#normal-user}\n\nA few steps are required for a normal user to be able to\nauthenticate and invoke an API. First, the user must have a certificate issued\nby the Kubernetes cluster, and then present that certificate to the Kubernetes API.\n\n### Create private key\n\nThe following commands show how to generate a PKI private key and CSR. It is\nimportant to set the CN and O attributes of the CSR. CN is the name of the user and\nO is the group that this user will belong to. You can refer to\n[RBAC](\/docs\/reference\/access-authn-authz\/rbac\/) for standard groups.\n\n```shell\nopenssl genrsa -out myuser.key 2048\nopenssl req -new -key myuser.key -out myuser.csr -subj \"\/CN=myuser\"\n```\n\n### Create a CertificateSigningRequest {#create-certificatessigningrequest}\n\nCreate a [CertificateSigningRequest](\/docs\/reference\/kubernetes-api\/authentication-resources\/certificate-signing-request-v1\/)\nand submit it to a Kubernetes cluster via kubectl. 
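Before submitting, it can be worth sanity-checking the CSR generated in the previous step and producing the base64 value that goes into the object's `spec.request` field. A minimal sketch follows; the filenames and the `developers` group are examples, not required names:

```shell
# Sketch: generate a key and CSR (as above, but also setting O for the group),
# verify the CSR's self-signature, and base64-encode it for spec.request.
# "myuser" and "developers" are example values.
openssl genrsa -out myuser.key 2048
openssl req -new -key myuser.key -out myuser.csr -subj "/CN=myuser/O=developers"
# Inspect the CSR before submitting it: print the subject and verify the signature.
openssl req -in myuser.csr -noout -subject -verify
# This single-line base64 string is what goes into spec.request.
REQUEST=$(base64 < myuser.csr | tr -d "\n")
printf '%s' "${REQUEST}" > myuser.csr.b64
```

The `openssl req -verify` step catches a corrupted or mismatched CSR early, before the request ever reaches the cluster.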
Below is a script to generate the\nCertificateSigningRequest.\n\n```shell\ncat <<EOF | kubectl apply -f -\napiVersion: certificates.k8s.io\/v1\nkind: CertificateSigningRequest\nmetadata:\n  name: myuser\nspec:\n  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZVzVuWld4aE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQTByczhJTHRHdTYxakx2dHhWTTJSVlRWMDNHWlJTWWw0dWluVWo4RElaWjBOCnR2MUZtRVFSd3VoaUZsOFEzcWl0Qm0wMUFSMkNJVXBGd2ZzSjZ4MXF3ckJzVkhZbGlBNVhwRVpZM3ExcGswSDQKM3Z3aGJlK1o2MVNrVHF5SVBYUUwrTWM5T1Nsbm0xb0R2N0NtSkZNMUlMRVI3QTVGZnZKOEdFRjJ6dHBoaUlFMwpub1dtdHNZb3JuT2wzc2lHQ2ZGZzR4Zmd4eW8ybmlneFNVekl1bXNnVm9PM2ttT0x1RVF6cXpkakJ3TFJXbWlECklmMXBMWnoyalVnald4UkhCM1gyWnVVV1d1T09PZnpXM01LaE8ybHEvZi9DdS8wYk83c0x0MCt3U2ZMSU91TFcKcW90blZtRmxMMytqTy82WDNDKzBERHk5aUtwbXJjVDBnWGZLemE1dHJRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR05WdmVIOGR4ZzNvK21VeVRkbmFjVmQ1N24zSkExdnZEU1JWREkyQTZ1eXN3ZFp1L1BVCkkwZXpZWFV0RVNnSk1IRmQycVVNMjNuNVJsSXJ3R0xuUXFISUh5VStWWHhsdnZsRnpNOVpEWllSTmU3QlJvYXgKQVlEdUI5STZXT3FYbkFvczFqRmxNUG5NbFpqdU5kSGxpT1BjTU1oNndLaTZzZFhpVStHYTJ2RUVLY01jSVUyRgpvU2djUWdMYTk0aEpacGk3ZnNMdm1OQUxoT045UHdNMGM1dVJVejV4T0dGMUtCbWRSeEgvbUNOS2JKYjFRQm1HCkkwYitEUEdaTktXTU0xMzhIQXdoV0tkNjVoVHdYOWl4V3ZHMkh4TG1WQzg0L1BHT0tWQW9FNkpsYWFHdTlQVmkKdjlOSjVaZlZrcXdCd0hKbzZXdk9xVlA3SVFjZmg3d0drWm89Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=\n  signerName: kubernetes.io\/kube-apiserver-client\n  expirationSeconds: 86400  # one day\n  usages:\n  - client auth\nEOF\n```\n\nSome points to note:\n\n- `usages` has to be '`client auth`'\n- `expirationSeconds` could be made longer (i.e. `864000` for ten days) or shorter (i.e. 
`3600` for one hour)\n- `request` is the base64-encoded value of the CSR file content.\n  You can get the content using this command:\n\n  ```shell\n  cat myuser.csr | base64 | tr -d \"\\n\"\n  ```\n\n### Approve the CertificateSigningRequest {#approve-certificate-signing-request}\n\nUse kubectl to list and approve the CSR you created.\n\nGet the list of CSRs:\n\n```shell\nkubectl get csr\n```\n\nApprove the CSR:\n\n```shell\nkubectl certificate approve myuser\n```\n\n### Get the certificate\n\nRetrieve the certificate from the CSR:\n\n```shell\nkubectl get csr\/myuser -o yaml\n```\n\nThe certificate value is in Base64-encoded format under `status.certificate`.\n\nExport the issued certificate from the CertificateSigningRequest:\n\n```shell\nkubectl get csr myuser -o jsonpath='{.status.certificate}' | base64 -d > myuser.crt\n```\n\n### Create Role and RoleBinding\n\nWith the certificate created, it is time to define the Role and RoleBinding for\nthis user to access Kubernetes cluster resources.\n\nThis is a sample command to create a Role for this new user:\n\n```shell\nkubectl create role developer --verb=create --verb=get --verb=list --verb=update --verb=delete --resource=pods\n```\n\nThis is a sample command to create a RoleBinding for this new user:\n\n```shell\nkubectl create rolebinding developer-binding-myuser --role=developer --user=myuser\n```\n\n### Add to kubeconfig\n\nThe last step is to add this user into the kubeconfig file.\n\nFirst, you need to add new credentials:\n\n```shell\nkubectl config set-credentials myuser --client-key=myuser.key --client-certificate=myuser.crt --embed-certs=true\n```\n\nThen, you need to add the context:\n\n```shell\nkubectl config set-context myuser --cluster=kubernetes --user=myuser\n```\n\nTo test it, change the context to `myuser`:\n\n```shell\nkubectl config use-context myuser\n```\n\n## What's next\n\n* Read [Manage TLS Certificates in a Cluster](\/docs\/tasks\/tls\/managing-tls-in-a-cluster\/)\n* View the source code for the 
kube-controller-manager built-in\n  [signer](https:\/\/github.com\/kubernetes\/kubernetes\/blob\/32ec6c212ec9415f604ffc1f4c1f29b782968ff1\/pkg\/controller\/certificates\/signer\/cfssl_signer.go)\n* View the source code for the kube-controller-manager built-in\n  [approver](https:\/\/github.com\/kubernetes\/kubernetes\/blob\/32ec6c212ec9415f604ffc1f4c1f29b782968ff1\/pkg\/controller\/certificates\/approver\/sarapprove.go)\n* For details of X.509 itself, refer to [RFC 5280](https:\/\/tools.ietf.org\/html\/rfc5280#section-3.1) section 3.1\n* For information on the syntax of PKCS#10 certificate signing requests, refer to [RFC 2986](https:\/\/tools.ietf.org\/html\/rfc2986)\n* Read about the ClusterTrustBundle API:\n  * 
     The certificate value is in Base64 encoded format under  status certificate    Export the issued certificate from the CertificateSigningRequest      shell kubectl get csr myuser  o jsonpath    status certificate    base64  d   myuser crt          Create Role and RoleBinding  With the certificate created it is time to define the Role and RoleBinding for this user to access Kubernetes cluster resources   This is a sample command to create a Role for this new user      shell kubectl create role developer   verb create   verb get   verb list   verb update   verb delete   resource pods      This is a sample command to create a RoleBinding for this new user      shell kubectl create rolebinding developer binding myuser   role developer   user myuser          Add to kubeconfig  The last step is to add this user into the kubeconfig file   First  you need to add new credentials      shell kubectl config set credentials myuser   client key myuser key   client certificate myuser crt   embed certs true       Then  you need to add the context      shell kubectl config set context myuser   cluster kubernetes   user myuser      To test it  change the context to  myuser       shell kubectl config use context myuser             Read  Manage TLS Certificates in a Cluster   docs tasks tls managing tls in a cluster     View the source code for the kube controller manager built in    signer  https   github com kubernetes kubernetes blob 32ec6c212ec9415f604ffc1f4c1f29b782968ff1 pkg controller certificates signer cfssl signer go    View the source code for the kube controller manager built in    approver  https   github com kubernetes kubernetes blob 32ec6c212ec9415f604ffc1f4c1f29b782968ff1 pkg controller certificates approver sarapprove go    For details of X 509 itself  refer to  RFC 5280  https   tools ietf org html rfc5280 section 3 1  section 3 1   For information on the syntax of PKCS 10 certificate signing requests  refer to  RFC 2986  https   tools ietf org html rfc2986    
Read about the ClusterTrustBundle API      "}
{"questions":"kubernetes reference title Dynamic Admission Control liggitt deads2k contenttype concept reviewers caesarxuchao jpbetz lavalamp smarterclayton","answers":"---\nreviewers:\n- smarterclayton\n- lavalamp\n- caesarxuchao\n- deads2k\n- liggitt\n- jpbetz\ntitle: Dynamic Admission Control\ncontent_type: concept\nweight: 45\n---\n\n<!-- overview -->\nIn addition to [compiled-in admission plugins](\/docs\/reference\/access-authn-authz\/admission-controllers\/),\nadmission plugins can be developed as extensions and run as webhooks configured at runtime.\nThis page describes how to build, configure, use, and monitor admission webhooks.\n\n<!-- body -->\n\n## What are admission webhooks?\n\nAdmission webhooks are HTTP callbacks that receive admission requests and do\nsomething with them. You can define two types of admission webhooks,\n[validating admission webhook](\/docs\/reference\/access-authn-authz\/admission-controllers\/#validatingadmissionwebhook)\nand\n[mutating admission webhook](\/docs\/reference\/access-authn-authz\/admission-controllers\/#mutatingadmissionwebhook).\nMutating admission webhooks are invoked first, and can modify objects sent to the API server to enforce custom defaults.\nAfter all object modifications are complete, and after the incoming object is validated by the API server,\nvalidating admission webhooks are invoked and can reject requests to enforce custom policies.\n\n\nAdmission webhooks that need to guarantee they see the final state of the object in order to enforce policy\nshould use a validating admission webhook, since objects can be modified after being seen by mutating webhooks.\n\n\n## Experimenting with admission webhooks\n\nAdmission webhooks are essentially part of the cluster control-plane. You should\nwrite and deploy them with great caution. 
Please read the\n[user guides](\/docs\/reference\/access-authn-authz\/extensible-admission-controllers\/#write-an-admission-webhook-server)\nfor instructions if you intend to write\/deploy production-grade admission webhooks.\nIn the following, we describe how to quickly experiment with admission webhooks.\n\n### Prerequisites\n\n* Ensure that MutatingAdmissionWebhook and ValidatingAdmissionWebhook\n  admission controllers are enabled.\n  [Here](\/docs\/reference\/access-authn-authz\/admission-controllers\/#is-there-a-recommended-set-of-admission-controllers-to-use)\n  is a recommended set of admission controllers to enable in general.\n\n* Ensure that the `admissionregistration.k8s.io\/v1` API is enabled.\n\n### Write an admission webhook server\n\nPlease refer to the implementation of the [admission webhook server](https:\/\/github.com\/kubernetes\/kubernetes\/blob\/release-1.21\/test\/images\/agnhost\/webhook\/main.go)\nthat is validated in a Kubernetes e2e test. The webhook handles the\n`AdmissionReview` request sent by the API servers, and sends back its decision\nas an `AdmissionReview` object in the same version it received.\n\nSee the [webhook request](#request) section for details on the data sent to webhooks.\n\nSee the [webhook response](#response) section for the data expected from webhooks.\n\nThe example admission webhook server leaves the `ClientAuth` field\n[empty](https:\/\/github.com\/kubernetes\/kubernetes\/blob\/v1.22.0\/test\/images\/agnhost\/webhook\/config.go#L38-L39),\nwhich defaults to `NoClientCert`. This means that the webhook server does not\nauthenticate the identity of the clients, supposedly API servers. 
If you need\nmutual TLS or other ways to authenticate the clients, see\nhow to [authenticate API servers](#authenticate-apiservers).\n\n### Deploy the admission webhook service\n\nThe webhook server in the e2e test is deployed in the Kubernetes cluster, via\nthe [deployment API](\/docs\/reference\/generated\/kubernetes-api\/\/#deployment-v1-apps).\nThe test also creates a [service](\/docs\/reference\/generated\/kubernetes-api\/\/#service-v1-core)\nas the front-end of the webhook server. See\n[code](https:\/\/github.com\/kubernetes\/kubernetes\/blob\/v1.22.0\/test\/e2e\/apimachinery\/webhook.go#L748).\n\nYou may also deploy your webhooks outside of the cluster. You will need to update\nyour webhook configurations accordingly.\n\n### Configure admission webhooks on the fly\n\nYou can dynamically configure what resources are subject to what admission\nwebhooks via\n[ValidatingWebhookConfiguration](\/docs\/reference\/generated\/kubernetes-api\/\/#validatingwebhookconfiguration-v1-admissionregistration-k8s-io)\nor\n[MutatingWebhookConfiguration](\/docs\/reference\/generated\/kubernetes-api\/\/#mutatingwebhookconfiguration-v1-admissionregistration-k8s-io).\n\nThe following is an example `ValidatingWebhookConfiguration`, a mutating webhook configuration is similar.\nSee the [webhook configuration](#webhook-configuration) section for details about each config field.\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: ValidatingWebhookConfiguration\nmetadata:\n  name: \"pod-policy.example.com\"\nwebhooks:\n- name: \"pod-policy.example.com\"\n  rules:\n  - apiGroups:   [\"\"]\n    apiVersions: [\"v1\"]\n    operations:  [\"CREATE\"]\n    resources:   [\"pods\"]\n    scope:       \"Namespaced\"\n  clientConfig:\n    service:\n      namespace: \"example-namespace\"\n      name: \"example-service\"\n    caBundle: <CA_BUNDLE>\n  admissionReviewVersions: [\"v1\"]\n  sideEffects: None\n  timeoutSeconds: 5\n```\n\n\nYou must replace the `<CA_BUNDLE>` in the above 
example by a valid CA bundle\nwhich is a PEM-encoded (field value is Base64 encoded) CA bundle for validating the webhook's server certificate.\n\n\nThe `scope` field specifies if only cluster-scoped resources (\"Cluster\") or namespace-scoped\nresources (\"Namespaced\") will match this rule. \"&lowast;\" means that there are no scope restrictions.\n\n\nWhen using `clientConfig.service`, the server cert must be valid for\n`<svc_name>.<svc_namespace>.svc`.\n\n\n\nThe default timeout for a webhook call is 10 seconds.\nYou can change this with the `timeoutSeconds` field, and it is encouraged to use a short timeout for webhooks.\nIf the webhook call times out, the request is handled according to the webhook's\nfailure policy.\n\n\nWhen an API server receives a request that matches one of the `rules`, the\nAPI server sends an `admissionReview` request to the webhook as specified in the\n`clientConfig`.\n\nAfter you create the webhook configuration, the system will take a few seconds\nto honor the new configuration.\n\n### Authenticate API servers   {#authenticate-apiservers}\n\nIf your admission webhooks require authentication, you can configure the\nAPI servers to use basic auth, bearer token, or a cert to authenticate themselves to\nthe webhooks. There are three steps to complete the configuration.\n\n* When starting the API server, specify the location of the admission control\n  configuration file via the `--admission-control-config-file` flag.\n\n* In the admission control configuration file, specify where the\n  MutatingAdmissionWebhook controller and ValidatingAdmissionWebhook controller\n  should read the credentials. The credentials are stored in kubeConfig files\n  (yes, the same schema that's used by kubectl), so the field name is\n  `kubeConfigFile`. 
Here is an example admission control configuration file:\n\n\n\n```yaml\napiVersion: apiserver.config.k8s.io\/v1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n  configuration:\n    apiVersion: apiserver.config.k8s.io\/v1\n    kind: WebhookAdmissionConfiguration\n    kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n  configuration:\n    apiVersion: apiserver.config.k8s.io\/v1\n    kind: WebhookAdmissionConfiguration\n    kubeConfigFile: \"<path-to-kubeconfig-file>\"\n```\n\n\n```yaml\n# Deprecated in v1.17 in favor of apiserver.config.k8s.io\/v1\napiVersion: apiserver.k8s.io\/v1alpha1\nkind: AdmissionConfiguration\nplugins:\n- name: ValidatingAdmissionWebhook\n  configuration:\n    # Deprecated in v1.17 in favor of apiserver.config.k8s.io\/v1, kind=WebhookAdmissionConfiguration\n    apiVersion: apiserver.config.k8s.io\/v1alpha1\n    kind: WebhookAdmission\n    kubeConfigFile: \"<path-to-kubeconfig-file>\"\n- name: MutatingAdmissionWebhook\n  configuration:\n    # Deprecated in v1.17 in favor of apiserver.config.k8s.io\/v1, kind=WebhookAdmissionConfiguration\n    apiVersion: apiserver.config.k8s.io\/v1alpha1\n    kind: WebhookAdmission\n    kubeConfigFile: \"<path-to-kubeconfig-file>\"\n```\n\n\n\nFor more information about `AdmissionConfiguration`, see the\n[AdmissionConfiguration (v1) reference](\/docs\/reference\/config-api\/apiserver-webhookadmission.v1\/).\nSee the [webhook configuration](#webhook-configuration) section for details about each config field.\n\nIn the kubeConfig file, provide the credentials:\n\n```yaml\napiVersion: v1\nkind: Config\nusers:\n# name should be set to the DNS name of the service or the host (including port) of the URL the webhook is configured to speak to.\n# If a non-443 port is used for services, it must be included in the name when configuring 1.16+ API servers.\n#\n# For a webhook configured to speak to a service on the default port (443), specify the DNS name of the 
service:\n# - name: webhook1.ns1.svc\n#   user: ...\n#\n# For a webhook configured to speak to a service on non-default port (e.g. 8443), specify the DNS name and port of the service in 1.16+:\n# - name: webhook1.ns1.svc:8443\n#   user: ...\n# and optionally create a second stanza using only the DNS name of the service for compatibility with 1.15 API servers:\n# - name: webhook1.ns1.svc\n#   user: ...\n#\n# For webhooks configured to speak to a URL, match the host (and port) specified in the webhook's URL. Examples:\n# A webhook with `url: https:\/\/www.example.com`:\n# - name: www.example.com\n#   user: ...\n#\n# A webhook with `url: https:\/\/www.example.com:443`:\n# - name: www.example.com:443\n#   user: ...\n#\n# A webhook with `url: https:\/\/www.example.com:8443`:\n# - name: www.example.com:8443\n#   user: ...\n#\n- name: 'webhook1.ns1.svc'\n  user:\n    client-certificate-data: \"<pem encoded certificate>\"\n    client-key-data: \"<pem encoded key>\"\n# The `name` supports using * to wildcard-match prefixing segments.\n- name: '*.webhook-company.org'\n  user:\n    password: \"<password>\"\n    username: \"<name>\"\n# '*' is the default match.\n- name: '*'\n  user:\n    token: \"<token>\"\n```\n\nOf course you need to set up the webhook server to handle these authentication requests.\n\n## Webhook request and response\n\n### Request\n\nWebhooks are sent as POST requests, with `Content-Type: application\/json`,\nwith an `AdmissionReview` API object in the `admission.k8s.io` API group\nserialized to JSON as the body.\n\nWebhooks can specify what versions of `AdmissionReview` objects they accept\nwith the `admissionReviewVersions` field in their configuration:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n  admissionReviewVersions: [\"v1\", \"v1beta1\"]\n```\n\n`admissionReviewVersions` is a required field when creating webhook configurations.\nWebhooks are required to 
support at least one `AdmissionReview`\nversion understood by the current and previous API server.\n\nAPI servers send the first `AdmissionReview` version in the `admissionReviewVersions` list they support.\nIf none of the versions in the list are supported by the API server, the configuration will not be allowed to be created.\nIf an API server encounters a webhook configuration that was previously created and does not support any of the `AdmissionReview`\nversions the API server knows how to send, attempts to call to the webhook will fail and be subject to the [failure policy](#failure-policy).\n\nThis example shows the data contained in an `AdmissionReview` object\nfor a request to update the `scale` subresource of an `apps\/v1` `Deployment`:\n\n```yaml\napiVersion: admission.k8s.io\/v1\nkind: AdmissionReview\nrequest:\n  # Random uid uniquely identifying this admission call\n  uid: 705ab4f5-6393-11e8-b7cc-42010a800002\n\n  # Fully-qualified group\/version\/kind of the incoming object\n  kind:\n    group: autoscaling\n    version: v1\n    kind: Scale\n\n  # Fully-qualified group\/version\/kind of the resource being modified\n  resource:\n    group: apps\n    version: v1\n    resource: deployments\n\n  # subresource, if the request is to a subresource\n  subResource: scale\n\n  # Fully-qualified group\/version\/kind of the incoming object in the original request to the API server.\n  # This only differs from `kind` if the webhook specified `matchPolicy: Equivalent` and the\n  # original request to the API server was converted to a version the webhook registered for.\n  requestKind:\n    group: autoscaling\n    version: v1\n    kind: Scale\n\n  # Fully-qualified group\/version\/kind of the resource being modified in the original request to the API server.\n  # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the\n  # original request to the API server was converted to a version the webhook registered for.\n  
requestResource:\n    group: apps\n    version: v1\n    resource: deployments\n\n  # subresource, if the request is to a subresource\n  # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the\n  # original request to the API server was converted to a version the webhook registered for.\n  requestSubResource: scale\n\n  # Name of the resource being modified\n  name: my-deployment\n\n  # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object)\n  namespace: my-namespace\n\n  # operation can be CREATE, UPDATE, DELETE, or CONNECT\n  operation: UPDATE\n\n  userInfo:\n    # Username of the authenticated user making the request to the API server\n    username: admin\n\n    # UID of the authenticated user making the request to the API server\n    uid: 014fbff9a07c\n\n    # Group memberships of the authenticated user making the request to the API server\n    groups:\n      - system:authenticated\n      - my-admin-group\n    # Arbitrary extra info associated with the user making the request to the API server.\n    # This is populated by the API server authentication layer and should be included\n    # if any SubjectAccessReview checks are performed by the webhook.\n    extra:\n      some-key:\n        - some-value1\n        - some-value2\n\n  # object is the new object being admitted.\n  # It is null for DELETE operations.\n  object:\n    apiVersion: autoscaling\/v1\n    kind: Scale\n\n  # oldObject is the existing object.\n  # It is null for CREATE and CONNECT operations.\n  oldObject:\n    apiVersion: autoscaling\/v1\n    kind: Scale\n\n  # options contains the options for the operation being admitted, like meta.k8s.io\/v1 CreateOptions, UpdateOptions, or DeleteOptions.\n  # It is null for CONNECT operations.\n  options:\n    apiVersion: meta.k8s.io\/v1\n    kind: UpdateOptions\n\n  # dryRun indicates the API request is running in dry run mode and will not be persisted.\n  # Webhooks with 
side effects should avoid actuating those side effects when dryRun is true.\n  # See http:\/\/k8s.io\/docs\/reference\/using-api\/api-concepts\/#make-a-dry-run-request for more details.\n  dryRun: False\n```\n\n### Response\n\nWebhooks respond with a 200 HTTP status code, `Content-Type: application\/json`,\nand a body containing an `AdmissionReview` object (in the same version they were sent),\nwith the `response` stanza populated, serialized to JSON.\n\nAt a minimum, the `response` stanza must contain the following fields:\n\n* `uid`, copied from the `request.uid` sent to the webhook\n* `allowed`, either set to `true` or `false`\n\nExample of a minimal response from a webhook to allow a request:\n\n```json\n{\n  \"apiVersion\": \"admission.k8s.io\/v1\",\n  \"kind\": \"AdmissionReview\",\n  \"response\": {\n    \"uid\": \"<value from request.uid>\",\n    \"allowed\": true\n  }\n}\n```\n\nExample of a minimal response from a webhook to forbid a request:\n\n```json\n{\n  \"apiVersion\": \"admission.k8s.io\/v1\",\n  \"kind\": \"AdmissionReview\",\n  \"response\": {\n    \"uid\": \"<value from request.uid>\",\n    \"allowed\": false\n  }\n}\n```\n\nWhen rejecting a request, the webhook can customize the http code and message returned to the user\nusing the `status` field. 
The specified status object is returned to the user.\nSee the [API documentation](\/docs\/reference\/generated\/kubernetes-api\/\/#status-v1-meta)\nfor details about the `status` type.\nExample of a response to forbid a request, customizing the HTTP status code and message presented to the user:\n\n```json\n{\n  \"apiVersion\": \"admission.k8s.io\/v1\",\n  \"kind\": \"AdmissionReview\",\n  \"response\": {\n    \"uid\": \"<value from request.uid>\",\n    \"allowed\": false,\n    \"status\": {\n      \"code\": 403,\n      \"message\": \"You cannot do this because it is Tuesday and your name starts with A\"\n    }\n  }\n}\n```\n\nWhen allowing a request, a mutating admission webhook may optionally modify the incoming object as well.\nThis is done using the `patch` and `patchType` fields in the response.\nThe only currently supported `patchType` is `JSONPatch`.\nSee [JSON patch](https:\/\/jsonpatch.com\/) documentation for more details.\nFor `patchType: JSONPatch`, the `patch` field contains a base64-encoded array of JSON patch operations.\n\nAs an example, a single patch operation that would set `spec.replicas` would be\n`[{\"op\": \"add\", \"path\": \"\/spec\/replicas\", \"value\": 3}]`\n\nBase64-encoded, this would be `W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=`\n\nSo a webhook response to apply that patch would be:\n\n```json\n{\n  \"apiVersion\": \"admission.k8s.io\/v1\",\n  \"kind\": \"AdmissionReview\",\n  \"response\": {\n    \"uid\": \"<value from request.uid>\",\n    \"allowed\": true,\n    \"patchType\": \"JSONPatch\",\n    \"patch\": \"W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=\"\n  }\n}\n```\n\nAdmission webhooks can optionally return warning messages that are returned to the requesting client\nin HTTP `Warning` headers with a warning code of 299. 
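The Base64 value used for the `patch` field earlier in this section can be reproduced from the command line. A minimal sketch, assuming a POSIX shell with a coreutils-style `base64` (the patch string is the `spec.replicas` example from this section):

```shell
# JSON Patch from the example above; base64-encode it for the `patch` field
patch='[{"op": "add", "path": "/spec/replicas", "value": 3}]'
printf '%s' "$patch" | base64
# W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=
```

Using `printf '%s'` rather than `echo` avoids appending a trailing newline, which would change the encoded value.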
Warnings can be sent with allowed or rejected admission responses.\n\nIf you're implementing a webhook that returns a warning:\n\n* Don't include a \"Warning:\" prefix in the message\n* Use warning messages to describe problems the client making the API request should correct or be aware of\n* Limit warnings to 120 characters if possible\n\n\nIndividual warning messages over 256 characters may be truncated by the API server before being returned to clients.\nIf more than 4096 characters of warning messages are added (from all sources), additional warning messages are ignored.\n\n\n```json\n{\n  \"apiVersion\": \"admission.k8s.io\/v1\",\n  \"kind\": \"AdmissionReview\",\n  \"response\": {\n    \"uid\": \"<value from request.uid>\",\n    \"allowed\": true,\n    \"warnings\": [\n      \"duplicate envvar entries specified with name MY_ENV\",\n      \"memory request less than 4MB specified for container mycontainer, which will not start successfully\"\n    ]\n  }\n}\n```\n\n## Webhook configuration\n\nTo register admission webhooks, create `MutatingWebhookConfiguration` or `ValidatingWebhookConfiguration` API objects.\nThe name of a `MutatingWebhookConfiguration` or a `ValidatingWebhookConfiguration` object must be a valid\n[DNS subdomain name](\/docs\/concepts\/overview\/working-with-objects\/names#dns-subdomain-names).\n\nEach configuration can contain one or more webhooks.\nIf multiple webhooks are specified in a single configuration, each must be given a unique name.\nThis is required in order to make resulting audit logs and metrics easier to match up to active\nconfigurations.\n\nEach webhook defines the following things.\n\n### Matching requests: rules\n\nEach webhook must specify a list of rules used to determine if a request to the API server should be sent to the webhook.\nEach rule specifies one or more operations, apiGroups, apiVersions, and resources, and a resource scope:\n\n* `operations` lists one or more operations to match. 
Can be `\"CREATE\"`, `\"UPDATE\"`, `\"DELETE\"`, `\"CONNECT\"`,\n  or `\"*\"` to match all.\n* `apiGroups` lists one or more API groups to match. `\"\"` is the core API group. `\"*\"` matches all API groups.\n* `apiVersions` lists one or more API versions to match. `\"*\"` matches all API versions.\n* `resources` lists one or more resources to match.\n\n  * `\"*\"` matches all resources, but not subresources.\n  * `\"*\/*\"` matches all resources and subresources.\n  * `\"pods\/*\"` matches all subresources of pods.\n  * `\"*\/status\"` matches all status subresources.\n\n* `scope` specifies a scope to match. Valid values are `\"Cluster\"`, `\"Namespaced\"`, and `\"*\"`.\n  Subresources match the scope of their parent resource. Default is `\"*\"`.\n\n  * `\"Cluster\"` means that only cluster-scoped resources will match this rule (Namespace API objects are cluster-scoped).\n  * `\"Namespaced\"` means that only namespaced resources will match this rule.\n  * `\"*\"` means that there are no scope restrictions.\n\nIf an incoming request matches one of the specified `operations`, `groups`, `versions`,\n`resources`, and `scope` for any of a webhook's `rules`, the request is sent to the webhook.\n\nHere are other examples of rules that could be used to specify which resources should be intercepted.\n\nMatch `CREATE` or `UPDATE` requests to `apps\/v1` and `apps\/v1beta1` `deployments` and `replicasets`:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: ValidatingWebhookConfiguration\n...\nwebhooks:\n- name: my-webhook.example.com\n  rules:\n  - operations: [\"CREATE\", \"UPDATE\"]\n    apiGroups: [\"apps\"]\n    apiVersions: [\"v1\", \"v1beta1\"]\n    resources: [\"deployments\", \"replicasets\"]\n    scope: \"Namespaced\"\n  ...\n```\n\nMatch create requests for all resources (but not subresources) in all API groups and versions:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n  - name: 
my-webhook.example.com\n    rules:\n      - operations: [\"CREATE\"]\n        apiGroups: [\"*\"]\n        apiVersions: [\"*\"]\n        resources: [\"*\"]\n        scope: \"*\"\n```\n\nMatch update requests for all `status` subresources in all API groups and versions:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n  - name: my-webhook.example.com\n    rules:\n      - operations: [\"UPDATE\"]\n        apiGroups: [\"*\"]\n        apiVersions: [\"*\"]\n        resources: [\"*\/status\"]\n        scope: \"*\"\n```\n\n### Matching requests: objectSelector\n\nWebhooks may optionally limit which requests are intercepted based on the labels of the\nobjects they would be sent, by specifying an `objectSelector`. If specified, the objectSelector\nis evaluated against both the object and oldObject that would be sent to the webhook,\nand is considered to match if either object matches the selector.\n\nA null object (`oldObject` in the case of create, or `newObject` in the case of delete),\nor an object that cannot have labels (like a `DeploymentRollback` or a `PodProxyOptions` object)\nis not considered to match.\n\nUse the object selector only if the webhook is opt-in, because end users may skip\nthe admission webhook by setting the labels.\n\nThis example shows a mutating webhook that would match a `CREATE` of any resource (but not subresources) with the label `foo: bar`:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n  objectSelector:\n    matchLabels:\n      foo: bar\n  rules:\n  - operations: [\"CREATE\"]\n    apiGroups: [\"*\"]\n    apiVersions: [\"*\"]\n    resources: [\"*\"]\n    scope: \"*\"\n```\n\nSee [labels concept](\/docs\/concepts\/overview\/working-with-objects\/labels)\nfor more examples of label selectors.\n\n### Matching requests: namespaceSelector\n\nWebhooks may optionally limit which requests for namespaced 
resources are intercepted,\nbased on the labels of the containing namespace, by specifying a `namespaceSelector`.\n\nThe `namespaceSelector` decides whether to run the webhook on a request for a namespaced resource\n(or a Namespace object), based on whether the namespace's labels match the selector.\nIf the object itself is a namespace, the matching is performed on object.metadata.labels.\nIf the object is a cluster scoped resource other than a Namespace, `namespaceSelector` has no effect.\n\nThis example shows a mutating webhook that matches a `CREATE` of any namespaced resource inside a namespace\nthat does not have a \"runlevel\" label of \"0\" or \"1\":\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n  - name: my-webhook.example.com\n    namespaceSelector:\n      matchExpressions:\n        - key: runlevel\n          operator: NotIn\n          values: [\"0\",\"1\"]\n    rules:\n      - operations: [\"CREATE\"]\n        apiGroups: [\"*\"]\n        apiVersions: [\"*\"]\n        resources: [\"*\"]\n        scope: \"Namespaced\"\n```\n\nThis example shows a validating webhook that matches a `CREATE` of any namespaced resource inside\na namespace that is associated with the \"environment\" of \"prod\" or \"staging\":\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n  - name: my-webhook.example.com\n    namespaceSelector:\n      matchExpressions:\n        - key: environment\n          operator: In\n          values: [\"prod\",\"staging\"]\n    rules:\n      - operations: [\"CREATE\"]\n        apiGroups: [\"*\"]\n        apiVersions: [\"*\"]\n        resources: [\"*\"]\n        scope: \"Namespaced\"\n```\n\nSee [labels concept](\/docs\/concepts\/overview\/working-with-objects\/labels)\nfor more examples of label selectors.\n\n### Matching requests: matchPolicy\n\nAPI servers can make objects available via multiple API groups or versions.\n\nFor example, if 
a webhook only specified a rule for some API groups\/versions\n(like `apiGroups:[\"apps\"], apiVersions:[\"v1\",\"v1beta1\"]`),\nand a request was made to modify the resource via another API group\/version (like `extensions\/v1beta1`),\nthe request would not be sent to the webhook.\n\nThe `matchPolicy` lets a webhook define how its `rules` are used to match incoming requests.\nAllowed values are `Exact` or `Equivalent`.\n\n* `Exact` means a request should be intercepted only if it exactly matches a specified rule.\n* `Equivalent` means a request should be intercepted if it modifies a resource listed in `rules`,\n  even via another API group or version.\n\nIn the example given above, the webhook that only registered for `apps\/v1` could use `matchPolicy`:\n\n* `matchPolicy: Exact` would mean the `extensions\/v1beta1` request would not be sent to the webhook\n* `matchPolicy: Equivalent` means the `extensions\/v1beta1` request would be sent to the webhook\n  (with the objects converted to a version the webhook had specified: `apps\/v1`)\n\nSpecifying `Equivalent` is recommended, and ensures that webhooks continue to intercept the\nresources they expect when upgrades enable new versions of the resource in the API server.\n\nWhen a resource stops being served by the API server, it is no longer considered equivalent to\nother versions of that resource that are still served.\nFor example, `extensions\/v1beta1` deployments were first deprecated and then removed (in Kubernetes v1.16).\n\nSince that removal, a webhook with a `apiGroups:[\"extensions\"], apiVersions:[\"v1beta1\"], resources:[\"deployments\"]` rule\ndoes not intercept deployments created via `apps\/v1` APIs. 
For that reason, webhooks should prefer registering\nfor stable versions of resources.\n\nThis example shows a validating webhook that intercepts modifications to deployments (no matter the API group or version),\nand is always sent an `apps\/v1` `Deployment` object:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n  matchPolicy: Equivalent\n  rules:\n  - operations: [\"CREATE\",\"UPDATE\",\"DELETE\"]\n    apiGroups: [\"apps\"]\n    apiVersions: [\"v1\"]\n    resources: [\"deployments\"]\n    scope: \"Namespaced\"\n```\n\nThe `matchPolicy` for an admission webhook defaults to `Equivalent`.\n\n### Matching requests: `matchConditions`\n\n\n\nYou can define _match conditions_ for webhooks if you need fine-grained request filtering. These\nconditions are useful if you find that match rules, `objectSelectors` and `namespaceSelectors` still\ndon't provide the filtering you want over when to call out over HTTP. Match conditions are\n[CEL expressions](\/docs\/reference\/using-api\/cel\/). 
All match conditions must evaluate to true for the\nwebhook to be called.\n\nHere is an example illustrating a few different uses for match conditions:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n  - name: my-webhook.example.com\n    matchPolicy: Equivalent\n    rules:\n      - operations: ['CREATE','UPDATE']\n        apiGroups: ['*']\n        apiVersions: ['*']\n        resources: ['*']\n    failurePolicy: 'Ignore' # Fail-open (optional)\n    sideEffects: None\n    clientConfig:\n      service:\n        namespace: my-namespace\n        name: my-webhook\n      caBundle: '<omitted>'\n    # You can have up to 64 matchConditions per webhook\n    matchConditions:\n      - name: 'exclude-leases' # Each match condition must have a unique name\n        expression: '!(request.resource.group == \"coordination.k8s.io\" && request.resource.resource == \"leases\")' # Match non-lease resources.\n      - name: 'exclude-kubelet-requests'\n        expression: '!(\"system:nodes\" in request.userInfo.groups)' # Match requests made by non-node users.\n      - name: 'rbac' # Skip RBAC requests, which are handled by the second webhook.\n        expression: 'request.resource.group != \"rbac.authorization.k8s.io\"'\n  \n  # This example illustrates the use of the 'authorizer'. The authorization check is more expensive\n  # than a simple expression, so in this example it is scoped to only RBAC requests by using a second\n  # webhook. 
Both webhooks can be served by the same endpoint.\n  - name: rbac.my-webhook.example.com\n    matchPolicy: Equivalent\n    rules:\n      - operations: ['CREATE','UPDATE']\n        apiGroups: ['rbac.authorization.k8s.io']\n        apiVersions: ['*']\n        resources: ['*']\n    failurePolicy: 'Fail' # Fail-closed (the default)\n    sideEffects: None\n    clientConfig:\n      service:\n        namespace: my-namespace\n        name: my-webhook\n      caBundle: '<omitted>'\n    # You can have up to 64 matchConditions per webhook\n    matchConditions:\n      - name: 'breakglass'\n        # Skip requests made by users authorized to 'breakglass' on this webhook.\n        # The 'breakglass' API verb does not need to exist outside this check.\n        expression: '!authorizer.group(\"admissionregistration.k8s.io\").resource(\"validatingwebhookconfigurations\").name(\"my-webhook.example.com\").check(\"breakglass\").allowed()'\n```\n\n\nYou can define up to 64 elements in the `matchConditions` field per webhook.\n\n\nMatch conditions have access to the following CEL variables:\n\n- `object` - The object from the incoming request. The value is null for DELETE requests. The object\n  version may be converted based on the [matchPolicy](#matching-requests-matchpolicy).\n- `oldObject` - The existing object. The value is null for CREATE requests.\n- `request` - The request portion of the [AdmissionReview](#request), excluding `object` and `oldObject`.\n- `authorizer` - A CEL Authorizer. May be used to perform authorization checks for the principal\n  (authenticated user) of the request. 
See\n  [Authz](https:\/\/pkg.go.dev\/k8s.io\/apiserver\/pkg\/cel\/library#Authz) in the Kubernetes CEL library\n  documentation for more details.\n- `authorizer.requestResource` - A shortcut for an authorization check configured with the request\n  resource (group, resource, (subresource), namespace, name).\n\nFor more information on CEL expressions, refer to the\n[Common Expression Language in Kubernetes reference](\/docs\/reference\/using-api\/cel\/).\n\nIn the event of an error evaluating a match condition the webhook is never called. Whether to reject\nthe request is determined as follows:\n\n1. If **any** match condition evaluated to `false` (regardless of other errors), the API server skips the webhook.\n2. Otherwise:\n    - for [`failurePolicy: Fail`](#failure-policy), reject the request (without calling the webhook).\n    - for [`failurePolicy: Ignore`](#failure-policy), proceed with the request but skip the webhook.\n\n### Contacting the webhook\n\nOnce the API server has determined a request should be sent to a webhook,\nit needs to know how to contact the webhook. This is specified in the `clientConfig`\nstanza of the webhook configuration.\n\nWebhooks can either be called via a URL or a service reference,\nand can optionally include a custom CA bundle to use to verify the TLS connection.\n\n#### URL\n\n`url` gives the location of the webhook, in standard URL form\n(`scheme:\/\/host:port\/path`).\n\nThe `host` should not refer to a service running in the cluster; use\na service reference by specifying the `service` field instead.\nThe host might be resolved via external DNS in some API servers\n(e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would\nbe a layering violation). `host` may also be an IP address.\n\nPlease note that using `localhost` or `127.0.0.1` as a `host` is\nrisky unless you take great care to run this webhook on all hosts\nwhich run an API server which might need to make calls to this\nwebhook. 
Such installations are likely to be non-portable or not readily\nrun in a new cluster.\n\nThe scheme must be \"https\"; the URL must begin with \"https:\/\/\".\n\nAttempting to use a user or basic auth (for example `user:password@`) is not allowed.\nFragments (`#...`) and query parameters (`?...`) are also not allowed.\n\nHere is an example of a mutating webhook configured to call a URL\n(and expects the TLS certificate to be verified using system trust roots, so does not specify a caBundle):\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n  clientConfig:\n    url: \"https:\/\/my-webhook.example.com:9443\/my-webhook-path\"\n```\n\n#### Service reference\n\nThe `service` stanza inside `clientConfig` is a reference to the service for this webhook.\nIf the webhook is running within the cluster, then you should use `service` instead of `url`.\nThe service namespace and name are required. The port is optional and defaults to 443.\nThe path is optional and defaults to \"\/\".\n\nHere is an example of a mutating webhook configured to call a service on port \"1234\"\nat the subpath \"\/my-path\", and to verify the TLS connection against the ServerName\n`my-service-name.my-service-namespace.svc` using a custom CA bundle:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n  clientConfig:\n    caBundle: <CA_BUNDLE>\n    service:\n      namespace: my-service-namespace\n      name: my-service-name\n      path: \/my-path\n      port: 1234\n```\n\n\nYou must replace the `<CA_BUNDLE>` in the above example by a valid CA bundle\nwhich is a PEM-encoded CA bundle for validating the webhook's server certificate.\n\n\n### Side effects\n\nWebhooks typically operate only on the content of the `AdmissionReview` sent to them.\nSome webhooks, however, make out-of-band changes as part of processing admission 
requests.\n\nWebhooks that make out-of-band changes (\"side effects\") must also have a reconciliation mechanism\n(like a controller) that periodically determines the actual state of the world, and adjusts\nthe out-of-band data modified by the admission webhook to reflect reality.\nThis is because a call to an admission webhook does not guarantee the admitted object will be persisted as is, or at all.\nLater webhooks can modify the content of the object, a conflict could be encountered while writing to storage,\nor the server could power off before persisting the object.\n\nAdditionally, webhooks with side effects must skip those side-effects when `dryRun: true` admission requests are handled.\nA webhook must explicitly indicate that it will not have side-effects when run with `dryRun`,\nor the dry-run request will not be sent to the webhook and the API request will fail instead.\n\nWebhooks indicate whether they have side effects using the `sideEffects` field in the webhook configuration:\n\n* `None`: calling the webhook will have no side effects.\n* `NoneOnDryRun`: calling the webhook will possibly have side effects, but if a request with\n  `dryRun: true` is sent to the webhook, the webhook will suppress the side effects (the webhook\n  is `dryRun`-aware).\n\nHere is an example of a validating webhook indicating it has no side effects on `dryRun: true` requests:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n  - name: my-webhook.example.com\n    sideEffects: NoneOnDryRun\n```\n\n### Timeouts\n\nBecause webhooks add to API request latency, they should evaluate as quickly as possible.\n`timeoutSeconds` allows configuring how long the API server should wait for a webhook to respond\nbefore treating the call as a failure.\n\nIf the timeout expires before the webhook responds, the webhook call will be ignored or\nthe API call will be rejected based on the [failure policy](#failure-policy).\n\nThe timeout 
value must be between 1 and 30 seconds.\n\nHere is an example of a validating webhook with a custom timeout of 2 seconds:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: ValidatingWebhookConfiguration\nwebhooks:\n  - name: my-webhook.example.com\n    timeoutSeconds: 2\n```\n\nThe timeout for an admission webhook defaults to 10 seconds.\n\n### Reinvocation policy\n\nA single ordering of mutating admissions plugins (including webhooks) does not work for all cases\n(see https:\/\/issue.k8s.io\/64333 as an example). A mutating webhook can add a new sub-structure\nto the object (like adding a `container` to a `pod`), and other mutating plugins which have already\nrun may have opinions on those new structures (like setting an `imagePullPolicy` on all containers).\n\nTo allow mutating admission plugins to observe changes made by other plugins,\nbuilt-in mutating admission plugins are re-run if a mutating webhook modifies an object,\nand mutating webhooks can specify a `reinvocationPolicy` to control whether they are reinvoked as well.\n\n`reinvocationPolicy` may be set to `Never` or `IfNeeded`. 
It defaults to `Never`.\n\n* `Never`: the webhook must not be called more than once in a single admission evaluation.\n* `IfNeeded`: the webhook may be called again as part of the admission evaluation if the object\n  being admitted is modified by other admission plugins after the initial webhook call.\n\nThe important elements to note are:\n\n* The number of additional invocations is not guaranteed to be exactly one.\n* If additional invocations result in further modifications to the object, webhooks are not\n  guaranteed to be invoked again.\n* Webhooks that use this option may be reordered to minimize the number of additional invocations.\n* To validate an object after all mutations are guaranteed complete, use a validating admission\n  webhook instead (recommended for webhooks with side-effects).\n\nHere is an example of a mutating webhook opting into being re-invoked if later admission plugins\nmodify the object:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n  reinvocationPolicy: IfNeeded\n```\n\nMutating webhooks must be [idempotent](#idempotence), able to successfully process an object they have already admitted\nand potentially modified. This is true for all mutating admission webhooks, since any change they can make\nin an object could already exist in the user-provided object, but it is essential for webhooks that opt into reinvocation.\n\n### Failure policy\n\n`failurePolicy` defines how unrecognized errors and timeout errors from the admission webhook\nare handled. 
Allowed values are `Ignore` or `Fail`.\n\n* `Ignore` means that an error calling the webhook is ignored and the API request is allowed to continue.\n* `Fail` means that an error calling the webhook causes the admission to fail and the API request to be rejected.\n\nHere is a mutating webhook configured to reject an API request if errors are encountered calling the admission webhook:\n\n```yaml\napiVersion: admissionregistration.k8s.io\/v1\nkind: MutatingWebhookConfiguration\nwebhooks:\n- name: my-webhook.example.com\n  failurePolicy: Fail\n```\n\nThe default `failurePolicy` for admission webhooks is `Fail`.\n\n## Monitoring admission webhooks\n\nThe API server provides ways to monitor admission webhook behaviors. These\nmonitoring mechanisms help cluster admins to answer questions like:\n\n1. Which mutating webhook mutated the object in an API request?\n\n2. What change did the mutating webhook apply to the object?\n\n3. Which webhooks are frequently rejecting API requests? What's the reason for a rejection?\n\n### Mutating webhook auditing annotations\n\nSometimes it's useful to know which mutating webhook mutated the object in an API request, and what change the\nwebhook applied.\n\nThe Kubernetes API server performs [auditing](\/docs\/tasks\/debug\/debug-cluster\/audit\/) on each\nmutating webhook invocation. Each invocation generates an auditing annotation\ncapturing whether a request object is mutated by the invocation, and optionally generates an annotation\ncapturing the applied patch from the webhook admission response. 
The annotations are set in the\naudit event for the given request, at the given stage of its execution, which is then pre-processed\naccording to a certain policy and written to a backend.\n\nThe audit level of an event determines which annotations get recorded:\n\n- At `Metadata` audit level or higher, an annotation with key\n  `mutation.webhook.admission.k8s.io\/round_{round idx}_index_{order idx}` gets logged with a JSON\n  payload indicating that a webhook was invoked for the given request and whether it mutated the object.\n\n  For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is\n  ordered the third in the mutating webhook chain, and didn't mutate the request object during the\n  invocation.\n\n  ```yaml\n  # the audit event recorded\n  {\n      \"kind\": \"Event\",\n      \"apiVersion\": \"audit.k8s.io\/v1\",\n      \"annotations\": {\n          \"mutation.webhook.admission.k8s.io\/round_1_index_2\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook.example.com\\\",\\\"mutated\\\": false}\"\n          # other annotations\n          ...\n      }\n      # other fields\n      ...\n  }\n  ```\n  \n  ```yaml\n  # the annotation value deserialized\n  {\n      \"configuration\": \"my-mutating-webhook-configuration.example.com\",\n      \"webhook\": \"my-webhook.example.com\",\n      \"mutated\": false\n  }\n  ```\n  \n  The following annotation gets recorded for a webhook being invoked in the first round. 
The webhook\n  is ordered the first in the mutating webhook chain, and mutated the request object during the\n  invocation.\n\n  ```yaml\n  # the audit event recorded\n  {\n      \"kind\": \"Event\",\n      \"apiVersion\": \"audit.k8s.io\/v1\",\n      \"annotations\": {\n          \"mutation.webhook.admission.k8s.io\/round_0_index_0\": \"{\\\"configuration\\\":\\\"my-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"mutated\\\": true}\"\n          # other annotations\n          ...\n      }\n      # other fields\n      ...\n  }\n  ```\n  \n  ```yaml\n  # the annotation value deserialized\n  {\n      \"configuration\": \"my-mutating-webhook-configuration.example.com\",\n      \"webhook\": \"my-webhook-always-mutate.example.com\",\n      \"mutated\": true\n  }\n  ```\n\n- At `Request` audit level or higher, an annotation with key\n  `patch.webhook.admission.k8s.io\/round_{round idx}_index_{order idx}` gets logged with a JSON payload indicating\n  that a webhook was invoked for the given request and what patch was applied to the request object.\n\n  For example, the following annotation gets recorded for a webhook being reinvoked. 
The webhook is ordered the fourth in the\n  mutating webhook chain, and responded with a JSON patch which got applied to the request object.\n  \n  ```yaml\n  # the audit event recorded\n  {\n      \"kind\": \"Event\",\n      \"apiVersion\": \"audit.k8s.io\/v1\",\n      \"annotations\": {\n          \"patch.webhook.admission.k8s.io\/round_1_index_3\": \"{\\\"configuration\\\":\\\"my-other-mutating-webhook-configuration.example.com\\\",\\\"webhook\\\":\\\"my-webhook-always-mutate.example.com\\\",\\\"patch\\\":[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"\/data\/mutation-stage\\\",\\\"value\\\":\\\"yes\\\"}],\\\"patchType\\\":\\\"JSONPatch\\\"}\"\n          # other annotations\n          ...\n      }\n      # other fields\n      ...\n  }\n  ```\n  \n  ```yaml\n  # the annotation value deserialized\n  {\n      \"configuration\": \"my-other-mutating-webhook-configuration.example.com\",\n      \"webhook\": \"my-webhook-always-mutate.example.com\",\n      \"patchType\": \"JSONPatch\",\n      \"patch\": [\n          {\n              \"op\": \"add\",\n              \"path\": \"\/data\/mutation-stage\",\n              \"value\": \"yes\"\n          }\n      ]\n  }\n  ```\n\n### Admission webhook metrics\n\nThe API server  exposes Prometheus metrics from the `\/metrics` endpoint, which can be used for monitoring and\ndiagnosing API server status. The following metrics record status related to admission webhooks.\n\n#### API server admission webhook rejection count\n\nSometimes it's useful to know which admission webhooks are frequently rejecting API requests, and the\nreason for a rejection.\n\nThe API server exposes a Prometheus counter metric recording admission webhook rejections. 
The\nmetrics are labelled to identify the causes of webhook rejection(s):\n\n- `name`: the name of the webhook that rejected a request.\n- `operation`: the operation type of the request, can be one of `CREATE`,\n  `UPDATE`, `DELETE` and `CONNECT`.\n- `type`: the admission webhook type, can be one of `admit` and `validating`.\n- `error_type`: identifies if an error occurred during the webhook invocation\n  that caused the rejection. Its value can be one of:\n\n  - `calling_webhook_error`: unrecognized errors or timeout errors from the admission webhook happened and the\n    webhook's [Failure policy](#failure-policy) is set to `Fail`.\n  - `no_error`: no error occurred. The webhook rejected the request with `allowed: false` in the admission\n    response. The metrics label `rejection_code` records the `.status.code` set in the admission response.\n  - `apiserver_internal_error`: an API server internal error happened.\n\n- `rejection_code`: the HTTP status code set in the admission response when a\n  webhook rejected a request.\n\nExample of the rejection count metrics:\n\n```\n# HELP apiserver_admission_webhook_rejection_count [ALPHA] Admission webhook rejection count, identified by name and broken out for each admission type (validating or admit) and operation. Additional labels specify an error type (calling_webhook_error or apiserver_internal_error if an error occurred; no_error otherwise) and optionally a non-zero rejection code if the webhook rejects the request with an HTTP status code (honored by the apiserver when the code is greater or equal to 400). 
Codes greater than 600 are truncated to 600, to keep the metrics cardinality bounded.\n# TYPE apiserver_admission_webhook_rejection_count counter\napiserver_admission_webhook_rejection_count{error_type=\"calling_webhook_error\",name=\"always-timeout-webhook.example.com\",operation=\"CREATE\",rejection_code=\"0\",type=\"validating\"} 1\napiserver_admission_webhook_rejection_count{error_type=\"calling_webhook_error\",name=\"invalid-admission-response-webhook.example.com\",operation=\"CREATE\",rejection_code=\"0\",type=\"validating\"} 1\napiserver_admission_webhook_rejection_count{error_type=\"no_error\",name=\"deny-unwanted-configmap-data.example.com\",operation=\"CREATE\",rejection_code=\"400\",type=\"validating\"} 13\n```\n\n## Best practices and warnings\n\n### Idempotence\n\nAn idempotent mutating admission webhook is able to successfully process an object it has already admitted\nand potentially modified. The admission can be applied multiple times without changing the result beyond\nthe initial application.\n\n#### Example of idempotent mutating admission webhooks:\n\n1. For a `CREATE` pod request, set the field `.spec.securityContext.runAsNonRoot` of the\n   pod to true, to enforce security best practices.\n\n2. For a `CREATE` pod request, if the field `.spec.containers[].resources.limits`\n   of a container is not set, set default resource limits.\n\n3. For a `CREATE` pod request, inject a sidecar container with name `foo-sidecar` if no container\n   with the name `foo-sidecar` already exists.\n\nIn the cases above, the webhook can be safely reinvoked, or admit an object that already has the fields set.\n\n#### Example of non-idempotent mutating admission webhooks:\n\n1. For a `CREATE` pod request, inject a sidecar container with name `foo-sidecar`\n   suffixed with the current timestamp (e.g. `foo-sidecar-19700101-000000`).\n\n2. 
For a `CREATE`\/`UPDATE` pod request, reject if the pod has label `\"env\"` set,\n   otherwise add an `\"env\": \"prod\"` label to the pod.\n\n3. For a `CREATE` pod request, blindly append a sidecar container named\n   `foo-sidecar` without looking to see if there is already a `foo-sidecar`\n   container in the pod.\n\nIn the first case above, reinvoking the webhook can result in the same sidecar being injected multiple times to a pod, each time\nwith a different container name. Similarly the webhook can inject duplicated containers if the sidecar already exists in\na user-provided pod.\n\nIn the second case above, reinvoking the webhook will result in the webhook failing on its own output.\n\nIn the third case above, reinvoking the webhook will result in duplicated containers in the pod spec, which makes\nthe request invalid and rejected by the API server.\n\n### Intercepting all versions of an object\n\nIt is recommended that admission webhooks should always intercept all versions of an object by setting `.webhooks[].matchPolicy`\nto `Equivalent`. It is also recommended that admission webhooks should prefer registering for stable versions of resources.\nFailure to intercept all versions of an object can result in admission policies not being enforced for requests in certain\nversions. See [Matching requests: matchPolicy](#matching-requests-matchpolicy) for examples.\n\n### Availability\n\nIt is recommended that admission webhooks should evaluate as quickly as possible (typically in\nmilliseconds), since they add to API request latency.\nIt is encouraged to use a small timeout for webhooks. See [Timeouts](#timeouts) for more detail.\n\nIt is recommended that admission webhooks should leverage some format of load-balancing, to\nprovide high availability and performance benefits. 
If a webhook is running within the cluster,\nyou can run multiple webhook backends behind a service to leverage the load-balancing that service\nsupports.\n\n### Guaranteeing the final state of the object is seen\n\nAdmission webhooks that need to guarantee they see the final state of the object in order to enforce policy\nshould use a validating admission webhook, since objects can be modified after being seen by mutating webhooks.\n\nFor example, a mutating admission webhook is configured to inject a sidecar container with name\n\"foo-sidecar\" on every `CREATE` pod request. If the sidecar *must* be present, a validating\nadmission webhook should also be configured to intercept `CREATE` pod requests, and validate that a\ncontainer with name \"foo-sidecar\" with the expected configuration exists in the to-be-created\nobject.\n\n### Avoiding deadlocks in self-hosted webhooks\n\nA webhook running inside the cluster might cause deadlocks for its own deployment if it is configured\nto intercept resources required to start its own pods.\n\nFor example, a mutating admission webhook is configured to admit `CREATE` pod requests only if a certain label is set in the\npod (e.g. `\"env\": \"prod\"`). The webhook server runs in a deployment which doesn't set the `\"env\"` label.\nWhen a node that runs the webhook server pods\nbecomes unhealthy, the webhook deployment will try to reschedule the pods to another node. However, the requests will\nget rejected by the existing webhook server since the `\"env\"` label is unset, and the migration cannot happen.\n\nIt is recommended to exclude the namespace where your webhook is running with a\n[namespaceSelector](#matching-requests-namespaceselector).\n\n### Side effects\n\nIt is recommended that admission webhooks should avoid side effects if possible, which means the webhooks operate only on the\ncontent of the `AdmissionReview` sent to them, and do not make out-of-band changes. 
The `.webhooks[].sideEffects` field should\nbe set to `None` if a webhook doesn't have any side effect.\n\nIf side effects are required during the admission evaluation, they must be suppressed when processing an\n`AdmissionReview` object with `dryRun` set to `true`, and the `.webhooks[].sideEffects` field should be\nset to `NoneOnDryRun`. See [Side effects](#side-effects) for more detail.\n\n### Avoiding operating on the kube-system namespace\n\nThe `kube-system` namespace contains objects created by the Kubernetes system,\ne.g. service accounts for the control plane components, pods like `kube-dns`.\nAccidentally mutating or rejecting requests in the `kube-system` namespace may\ncause the control plane components to stop functioning or introduce unknown behavior.\nIf your admission webhooks don't intend to modify the behavior of the Kubernetes control\nplane, exclude the `kube-system` namespace from being intercepted using a\n[`namespaceSelector`](#matching-requests-namespaceselector).
 name  ValidatingAdmissionWebhook   configuration      apiVersion  apiserver config k8s io v1     kind  WebhookAdmissionConfiguration     kubeConfigFile    path to kubeconfig file     name  MutatingAdmissionWebhook   configuration      apiVersion  apiserver config k8s io v1     kind  WebhookAdmissionConfiguration     kubeConfigFile    path to kubeconfig file            yaml   Deprecated in v1 17 in favor of apiserver config k8s io v1 apiVersion  apiserver k8s io v1alpha1 kind  AdmissionConfiguration plugins    name  ValidatingAdmissionWebhook   configuration        Deprecated in v1 17 in favor of apiserver config k8s io v1  kind WebhookAdmissionConfiguration     apiVersion  apiserver config k8s io v1alpha1     kind  WebhookAdmission     kubeConfigFile    path to kubeconfig file     name  MutatingAdmissionWebhook   configuration        Deprecated in v1 17 in favor of apiserver config k8s io v1  kind WebhookAdmissionConfiguration     apiVersion  apiserver config k8s io v1alpha1     kind  WebhookAdmission     kubeConfigFile    path to kubeconfig file          For more information about  AdmissionConfiguration   see the  AdmissionConfiguration  v1  reference   docs reference config api apiserver webhookadmission v1    See the  webhook configuration   webhook configuration  section for details about each config field   In the kubeConfig file  provide the credentials      yaml apiVersion  v1 kind  Config users    name should be set to the DNS name of the service or the host  including port  of the URL the webhook is configured to speak to    If a non 443 port is used for services  it must be included in the name when configuring 1 16  API servers      For a webhook configured to speak to a service on the default port  443   specify the DNS name of the service      name  webhook1 ns1 svc     user          For a webhook configured to speak to a service on non default port  e g  8443   specify the DNS name and port of the service in 1 16       name  webhook1 ns1 svc 8443    
 user        and optionally create a second stanza using only the DNS name of the service for compatibility with 1 15 API servers      name  webhook1 ns1 svc     user          For webhooks configured to speak to a URL  match the host  and port  specified in the webhook s URL  Examples    A webhook with  url  https   www example com       name  www example com     user          A webhook with  url  https   www example com 443       name  www example com 443     user          A webhook with  url  https   www example com 8443       name  www example com 8443     user          name   webhook1 ns1 svc    user      client certificate data    pem encoded certificate       client key data    pem encoded key     The  name  supports using   to wildcard match prefixing segments    name     webhook company org    user      password    password       username    name         is the default match    name        user      token    token        Of course you need to set up the webhook server to handle these authentication requests      Webhook request and response      Request  Webhooks are sent as POST requests  with  Content Type  application json   with an  AdmissionReview  API object in the  admission k8s io  API group serialized to JSON as the body   Webhooks can specify what versions of  AdmissionReview  objects they accept with the  admissionReviewVersions  field in their configuration      yaml apiVersion  admissionregistration k8s io v1 kind  ValidatingWebhookConfiguration webhooks    name  my webhook example com   admissionReviewVersions    v1    v1beta1         admissionReviewVersions  is a required field when creating webhook configurations  Webhooks are required to support at least one  AdmissionReview  version understood by the current and previous API server   API servers send the first  AdmissionReview  version in the  admissionReviewVersions  list they support  If none of the versions in the list are supported by the API server  the configuration will not be 
allowed to be created. If an API server encounters a webhook configuration that was previously created and does not support any of the `AdmissionReview` versions the API server knows how to send, attempts to call to the webhook will fail and be subject to the [failure policy](#failure-policy).

This example shows the data contained in an `AdmissionReview` object for a request to update the `scale` subresource of an `apps/v1` `Deployment`:

```yaml
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
request:
  # Random uid uniquely identifying this admission call
  uid: 705ab4f5-6393-11e8-b7cc-42010a800002

  # Fully-qualified group/version/kind of the incoming object
  kind:
    group: autoscaling
    version: v1
    kind: Scale

  # Fully-qualified group/version/kind of the resource being modified
  resource:
    group: apps
    version: v1
    resource: deployments

  # subresource, if the request is to a subresource
  subResource: scale

  # Fully-qualified group/version/kind of the incoming object in the original request to the API server.
  # This only differs from `kind` if the webhook specified `matchPolicy: Equivalent` and the
  # original request to the API server was converted to a version the webhook registered for.
  requestKind:
    group: autoscaling
    version: v1
    kind: Scale

  # Fully-qualified group/version/kind of the resource being modified in the original request to the API server.
  # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the
  # original request to the API server was converted to a version the webhook registered for.
  requestResource:
    group: apps
    version: v1
    resource: deployments

  # subresource, if the request is to a subresource
  # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the
  # original request to the API server was converted to a version the webhook registered for.
  requestSubResource: scale

  # Name of the resource being modified
  name: my-deployment

  # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object)
  namespace: my-namespace

  # operation can be CREATE, UPDATE, DELETE, or CONNECT
  operation: UPDATE

  userInfo:
    # Username of the authenticated user making the request to the API server
    username: admin
    # UID of the authenticated user making the request to the API server
    uid: 014fbff9a07c
    # Group memberships of the authenticated user making the request to the API server
    groups:
      - system:authenticated
      - my-admin-group
    # Arbitrary extra info associated with the user making the request to the API server.
    # This is populated by the API server authentication layer and should be included
    # if any SubjectAccessReview checks are performed by the webhook.
    extra:
      some-key:
        - some-value1
        - some-value2

  # object is the new object being admitted.
  # It is null for DELETE operations.
  object:
    apiVersion: autoscaling/v1
    kind: Scale

  # oldObject is the existing object.
  # It is null for CREATE and CONNECT operations.
  oldObject:
    apiVersion: autoscaling/v1
    kind: Scale

  # options contains the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions, UpdateOptions, or DeleteOptions.
  # It is null for CONNECT operations.
  options:
    apiVersion: meta.k8s.io/v1
    kind: UpdateOptions

  # dryRun indicates the API request is running in dry run mode and will not be persisted.
  # Webhooks with side effects should avoid actuating those side effects when dryRun is true.
  # See http://k8s.io/docs/reference/using-api/api-concepts/#make-a-dry-run-request for more details.
  dryRun: False
```

### Response

Webhooks respond with a 200 HTTP status code, `Content-Type: application/json`, and a body containing an `AdmissionReview` object (in the same version they were sent), with the `response` stanza populated, serialized
to JSON.

At a minimum, the `response` stanza must contain the following fields:

* `uid`, copied from the `request.uid` sent to the webhook
* `allowed`, either set to `true` or `false`

Example of a minimal response from a webhook to allow a request:

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<value from request.uid>",
    "allowed": true
  }
}
```

Example of a minimal response from a webhook to forbid a request:

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<value from request.uid>",
    "allowed": false
  }
}
```

When rejecting a request, the webhook can customize the http code and message returned to the user using the `status` field. The specified status object is returned to the user. See the [API documentation](/docs/reference/generated/kubernetes-api/#status-v1-meta) for details about the `status` type. Example of a response to forbid a request, customizing the HTTP status code and message presented to the user:

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<value from request.uid>",
    "allowed": false,
    "status": {
      "code": 403,
      "message": "You cannot do this because it is Tuesday and your name starts with A"
    }
  }
}
```

When allowing a request, a mutating admission webhook may optionally modify the incoming object as well. This is done using the `patch` and `patchType` fields in the response. The only currently supported `patchType` is `JSONPatch`. See the [JSON patch](https://jsonpatch.com/) documentation for more details. For `patchType: JSONPatch`, the `patch` field contains a base64-encoded array of JSON patch operations.

As an example, a single patch operation that would set `spec.replicas` would be `[{"op": "add", "path": "/spec/replicas", "value": 3}]`

Base64-encoded, this would be `W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0=`

So a webhook response to apply that patch would be:

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<value from request.uid>",
    "allowed": true,
    "patchType": "JSONPatch",
    "patch": "W3sib3AiOiAiYWRkIiwgInBhdGgiOiAiL3NwZWMvcmVwbGljYXMiLCAidmFsdWUiOiAzfV0="
  }
}
```

Admission webhooks can optionally return warning messages that are returned to the requesting client in HTTP `Warning` headers with a warning code of 299. Warnings can be sent with allowed or rejected admission responses.

If you're implementing a webhook that returns a warning:

* Don't include a "Warning:" prefix in the message
* Use warning messages to describe problems the client making the API request should correct or be aware of
* Limit warnings to 120 characters if possible

Individual warning messages over 256 characters may be truncated by the API server before being returned to clients. If more than 4096 characters of warning messages are added (from all sources), additional warning messages are ignored.

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<value from request.uid>",
    "allowed": true,
    "warnings": [
      "duplicate envvar entries specified with name MY_ENV",
      "memory request less than 4MB specified for container mycontainer, which will not start successfully"
    ]
  }
}
```

## Webhook configuration

To register admission webhooks, create `MutatingWebhookConfiguration` or `ValidatingWebhookConfiguration` API objects. The name of a `MutatingWebhookConfiguration` or a `ValidatingWebhookConfiguration` object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).

Each configuration can contain one or more webhooks. If multiple webhooks are specified in a single configuration, each must be given a unique name. This is
required in order to make resulting audit logs and metrics easier to match up to active configurations.

Each webhook defines the following things.

### Matching requests: rules

Each webhook must specify a list of rules used to determine if a request to the API server should be sent to the webhook. Each rule specifies one or more operations, apiGroups, apiVersions, and resources, and a resource scope:

* `operations` lists one or more operations to match. Can be `CREATE`, `UPDATE`, `DELETE`, `CONNECT`, or `*` to match all.
* `apiGroups` lists one or more API groups to match. `""` is the core API group. `"*"` matches all API groups.
* `apiVersions` lists one or more API versions to match. `"*"` matches all API versions.
* `resources` lists one or more resources to match.
    * `"*"` matches all resources, but not subresources.
    * `"*/*"` matches all resources and subresources.
    * `"pods/*"` matches all subresources of pods.
    * `"*/status"` matches all status subresources.
* `scope` specifies a scope to match. Valid values are `"Cluster"`, `"Namespaced"`, and `"*"`. Subresources match the scope of their parent resource. Default is `"*"`.
    * `"Cluster"` means that only cluster-scoped resources will match this rule (Namespace API objects are cluster-scoped).
    * `"Namespaced"` means that only namespaced resources will match this rule.
    * `"*"` means that there are no scope restrictions.

If an incoming request matches one of the specified `operations`, `groups`, `versions`, `resources`, and `scope` for any of a webhook's `rules`, the request is sent to the webhook.

Here are other examples of rules that could be used to specify which resources should be intercepted.

Match `CREATE` or `UPDATE` requests to `apps/v1` and `apps/v1beta1` `deployments` and `replicasets`:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
...
webhooks:
- name: my-webhook.example.com
  rules:
  - operations: ["CREATE", "UPDATE"]
    apiGroups: ["apps"]
    apiVersions: ["v1", "v1beta1"]
    resources: ["deployments", "replicasets"]
    scope: "Namespaced"
  ...
```

Match create requests for all resources (but not subresources) in all API groups and versions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
  - name: my-webhook.example.com
    rules:
      - operations: ["CREATE"]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["*"]
        scope: "*"
```

Match update requests for all `status` subresources in all API groups and versions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
  - name: my-webhook.example.com
    rules:
      - operations: ["UPDATE"]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["*/status"]
        scope: "*"
```

### Matching requests: objectSelector

Webhooks may optionally limit which requests are intercepted based on the labels of the objects they would be sent, by specifying an `objectSelector`. If specified, the objectSelector is evaluated against both the object and oldObject that would be sent to the webhook, and is considered to match if either object matches the selector.

A null object (`oldObject` in the case of create, or `newObject` in the case of delete), or an object that cannot have labels (like a `DeploymentRollback` or a `PodProxyOptions` object) is not considered to match.

Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels.

This example shows a mutating webhook that would match a `CREATE` of any resource (but not subresources) with the label `foo: bar`:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
  objectSelector:
    matchLabels:
      foo: bar
  rules:
  - operations: ["CREATE"]
    apiGroups: ["*"]
    apiVersions: ["*"]
    resources: ["*"]
    scope: "*"
```

See the [labels concept](/docs/concepts/overview/working-with-objects/labels) for more examples of label selectors.

### Matching requests: namespaceSelector

Webhooks may optionally limit which requests for namespaced resources are intercepted, based on the labels of the containing namespace, by specifying a `namespaceSelector`.

The `namespaceSelector` decides whether to run the webhook on a request for a namespaced resource (or a Namespace object), based on whether the namespace's labels match the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is a cluster-scoped resource other than a Namespace, `namespaceSelector` has no effect.

This example shows a mutating webhook that matches a `CREATE` of any namespaced resource inside a namespace that does not have a "runlevel" label of "0" or "1":

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
  - name: my-webhook.example.com
    namespaceSelector:
      matchExpressions:
        - key: runlevel
          operator: NotIn
          values: ["0", "1"]
    rules:
      - operations: ["CREATE"]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["*"]
        scope: "Namespaced"
```

This example shows a validating webhook that matches a `CREATE` of any namespaced resource inside a namespace that is associated with the "environment" of "prod" or "staging":

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
  - name: my-webhook.example.com
    namespaceSelector:
      matchExpressions:
        - key: environment
          operator: In
          values: ["prod", "staging"]
    rules:
      - operations: ["CREATE"]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["*"]
        scope: "Namespaced"
```

See the [labels concept](/docs/concepts/overview/working-with-objects/labels) for more examples of label
selectors.

### Matching requests: matchPolicy

API servers can make objects available via multiple API groups or versions.

For example, if a webhook only specified a rule for some API groups/versions (like `apiGroups:["apps"], apiVersions:["v1","v1beta1"]`), and a request was made to modify the resource via another API group/version (like `extensions/v1beta1`), the request would not be sent to the webhook.

The `matchPolicy` lets a webhook define how its `rules` are used to match incoming requests. Allowed values are `Exact` or `Equivalent`.

* `Exact` means a request should be intercepted only if it exactly matches a specified rule.
* `Equivalent` means a request should be intercepted if it modifies a resource listed in `rules`, even via another API group or version.

In the example given above, the webhook that only registered for `apps/v1` could use `matchPolicy`:

* `matchPolicy: Exact` would mean the `extensions/v1beta1` request would not be sent to the webhook
* `matchPolicy: Equivalent` means the `extensions/v1beta1` request would be sent to the webhook (with the objects converted to a version the webhook had specified: `apps/v1`)

Specifying `Equivalent` is recommended, and ensures that webhooks continue to intercept the resources they expect when upgrades enable new versions of the resource in the API server.

When a resource stops being served by the API server, it is no longer considered equivalent to other versions of that resource that are still served. For example, `extensions/v1beta1` deployments were first deprecated and then removed (in Kubernetes v1.16).

Since that removal, a webhook with a `apiGroups:["extensions"], apiVersions:["v1beta1"], resources:["deployments"]` rule does not intercept deployments created via `apps/v1` APIs. For that reason, webhooks should prefer registering for stable versions of resources.

This example shows a validating webhook that intercepts modifications to deployments (no matter the API group or version), and
is always sent an `apps/v1` `Deployment` object:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
  matchPolicy: Equivalent
  rules:
  - operations: ["CREATE","UPDATE","DELETE"]
    apiGroups: ["apps"]
    apiVersions: ["v1"]
    resources: ["deployments"]
    scope: "Namespaced"
```

The `matchPolicy` for an admission webhook defaults to `Equivalent`.

### Matching requests: matchConditions

You can define *match conditions* for webhooks if you need fine-grained request filtering. These conditions are useful if you find that match rules, `objectSelectors` and `namespaceSelectors` still don't provide the filtering you want over when to call out over HTTP. Match conditions are [CEL expressions](/docs/reference/using-api/cel/). All match conditions must evaluate to true for the webhook to be called.

Here is an example illustrating a few different uses for match conditions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
  - name: my-webhook.example.com
    matchPolicy: Equivalent
    rules:
      - operations: ['CREATE','UPDATE']
        apiGroups: ['*']
        apiVersions: ['*']
        resources: ['*']
    failurePolicy: 'Ignore' # Fail-open (optional)
    sideEffects: None
    clientConfig:
      service:
        namespace: my-namespace
        name: my-webhook
      caBundle: '<omitted>'
    # You can have up to 64 matchConditions per webhook
    matchConditions:
      - name: 'exclude-leases' # Each match condition must have a unique name
        expression: '!(request.resource.group == "coordination.k8s.io" && request.resource.resource == "leases")' # Match non-lease resources.
      - name: 'exclude-kubelet-requests'
        expression: '!("system:nodes" in request.userInfo.groups)' # Match requests made by non-node users.
      - name: 'rbac' # Skip RBAC requests, which are handled by the second webhook.
        expression: 'request.resource.group != "rbac.authorization.k8s.io"'

  # This example illustrates the use of the 'authorizer'. The authorization check is more expensive
  # than a simple expression, so in this example it is scoped to only RBAC requests by using a second
  # webhook. Both webhooks can be served by the same endpoint.
  - name: rbac.my-webhook.example.com
    matchPolicy: Equivalent
    rules:
      - operations: ['CREATE','UPDATE']
        apiGroups: ['rbac.authorization.k8s.io']
        apiVersions: ['*']
        resources: ['*']
    failurePolicy: 'Fail' # Fail-closed (the default)
    sideEffects: None
    clientConfig:
      service:
        namespace: my-namespace
        name: my-webhook
      caBundle: '<omitted>'
    # You can have up to 64 matchConditions per webhook
    matchConditions:
      - name: 'breakglass'
        # Skip requests made by users authorized to 'breakglass' on this webhook.
        # The 'breakglass' API verb does not need to exist outside this check.
        expression: '!authorizer.group("admissionregistration.k8s.io").resource("validatingwebhookconfigurations").name("my-webhook.example.com").check("breakglass").allowed()'
```

You can define up to 64 elements in the `matchConditions` field per webhook.

Match conditions have access to the following CEL variables:

* `object` - The object from the incoming request. The value is null for DELETE requests. The object version may be converted based on the [matchPolicy](#matching-requests-matchpolicy).
* `oldObject` - The existing object. The value is null for CREATE requests.
* `request` - The request portion of the [AdmissionReview](#request), excluding `object` and `oldObject`.
* `authorizer` - A CEL Authorizer. May be used to perform authorization checks for the principal (authenticated user) of the request. See [Authz](https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz) in the Kubernetes CEL library documentation for more details.
* `authorizer.requestResource` - A shortcut for an authorization check configured with the request resource (group, resource, (subresource), namespace, name).

For more information on CEL expressions, refer to the [Common Expression Language in Kubernetes reference](/docs/reference/using-api/cel/).

In the event of an error evaluating a match condition, the webhook is never called. Whether to reject the request is determined as follows:

1. If **any** match condition evaluated to `false` (regardless of other errors), the API server skips the webhook.
2. Otherwise:
   * for [`failurePolicy: Fail`](#failure-policy), reject the request (without calling the webhook).
   * for [`failurePolicy: Ignore`](#failure-policy), proceed with the request but skip the webhook.

### Contacting the webhook

Once the API server has determined a request should be sent to a webhook, it needs to know how to contact the webhook. This is specified in the `clientConfig` stanza of the webhook configuration.

Webhooks can either be called via a URL or a service reference, and can optionally include a custom CA bundle to use to verify the TLS connection.

#### URL

`url` gives the location of the webhook, in standard URL form (`scheme://host:port/path`).

The `host` should not refer to a service running in the cluster; use a service reference by specifying the `service` field instead. The host might be resolved via external DNS in some API servers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.

Please note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an API server which might need to make calls to this webhook. Such installations are likely to be non-portable or not readily run in a new cluster.

The scheme must be "https"; the URL must begin with "https://".

Attempting to use a user or basic auth (for example "user:password@") is not
allowed. Fragments ("#...") and query parameters ("?...") are also not allowed.

Here is an example of a mutating webhook configured to call a URL (and expects the TLS certificate to be verified using system trust roots, so does not specify a caBundle):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
  clientConfig:
    url: "https://my-webhook.example.com:9443/my-webhook-path"
```

#### Service reference

The `service` stanza inside `clientConfig` is a reference to the service for this webhook. If the webhook is running within the cluster, then you should use `service` instead of `url`. The service namespace and name are required. The port is optional and defaults to 443. The path is optional and defaults to "/".

Here is an example of a mutating webhook configured to call a service on port "1234" at the subpath "/my-path", and to verify the TLS connection against the ServerName `my-service-name.my-service-namespace.svc` using a custom CA bundle:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
  clientConfig:
    caBundle: <CA_BUNDLE>
    service:
      namespace: my-service-namespace
      name: my-service-name
      path: /my-path
      port: 1234
```

You must replace the `<CA_BUNDLE>` in the above example by a valid CA bundle, which is a PEM-encoded CA bundle for validating the webhook's server certificate.

### Side effects

Webhooks typically operate only on the content of the `AdmissionReview` sent to them. Some webhooks, however, make out-of-band changes as part of processing admission requests.

Webhooks that make out-of-band changes ("side effects") must also have a reconciliation mechanism (like a controller) that periodically determines the actual state of the world, and adjusts the out-of-band data modified by the admission webhook to reflect reality. This is because a call to an admission
webhook does not guarantee the admitted object will be persisted as is, or at all. Later webhooks can modify the content of the object, a conflict could be encountered while writing to storage, or the server could power off before persisting the object.

Additionally, webhooks with side effects must skip those side effects when `dryRun: true` admission requests are handled. A webhook must explicitly indicate that it will not have side effects when run with `dryRun`, or the dry-run request will not be sent to the webhook and the API request will fail instead.

Webhooks indicate whether they have side effects using the `sideEffects` field in the webhook configuration:

* `None`: calling the webhook will have no side effects.
* `NoneOnDryRun`: calling the webhook will possibly have side effects, but if a request with `dryRun: true` is sent to the webhook, the webhook will suppress the side effects (the webhook is `dryRun`-aware).

Here is an example of a validating webhook indicating it has no side effects on `dryRun: true` requests:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
  - name: my-webhook.example.com
    sideEffects: NoneOnDryRun
```

### Timeouts

Because webhooks add to API request latency, they should evaluate as quickly as possible. `timeoutSeconds` allows configuring how long the API server should wait for a webhook to respond before treating the call as a failure.

If the timeout expires before the webhook responds, the webhook call will be ignored or the API call will be rejected based on the [failure policy](#failure-policy).

The timeout value must be between 1 and 30 seconds.

Here is an example of a validating webhook with a custom timeout of 2 seconds:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
webhooks:
  - name: my-webhook.example.com
    timeoutSeconds: 2
```

The timeout for an admission webhook defaults to 10 seconds.

### Reinvocation policy

A single ordering of mutating admissions plugins (including webhooks) does not work for all cases (see https://issue.k8s.io/64333 as an example). A mutating webhook can add a new sub-structure to the object (like adding a `container` to a `pod`), and other mutating plugins which have already run may have opinions on those new structures (like setting an `imagePullPolicy` on all containers).

To allow mutating admission plugins to observe changes made by other plugins, built-in mutating admission plugins are re-run if a mutating webhook modifies an object, and mutating webhooks can specify a `reinvocationPolicy` to control whether they are reinvoked as well.

`reinvocationPolicy` may be set to `Never` or `IfNeeded`. It defaults to `Never`.

* `Never`: the webhook must not be called more than once in a single admission evaluation.
* `IfNeeded`: the webhook may be called again as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call.

The important elements to note are:

* The number of additional invocations is not guaranteed to be exactly one.
* If additional invocations result in further modifications to the object, webhooks are not guaranteed to be invoked again.
* Webhooks that use this option may be reordered to minimize the number of additional invocations.
* To validate an object after all mutations are guaranteed complete, use a validating admission webhook instead (recommended for webhooks with side effects).

Here is an example of a mutating webhook opting into being re-invoked if later admission plugins modify the object:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
  reinvocationPolicy: IfNeeded
```

Mutating webhooks must be [idempotent](#idempotence), able to successfully process an object they have already admitted and potentially modified. This is true for all mutating admission
webhooks, since any change they can make in an object could already exist in the user-provided
object, but it is essential for webhooks that opt into reinvocation.

### Failure policy

`failurePolicy` defines how unrecognized errors and timeout errors from the admission webhook are
handled. Allowed values are `Ignore` or `Fail`.

* `Ignore` means that an error calling the webhook is ignored and the API request is allowed to continue.
* `Fail` means that an error calling the webhook causes the admission to fail and the API request to be rejected.

Here is a mutating webhook configured to reject an API request if errors are encountered calling
the admission webhook:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
  failurePolicy: Fail
```

The default `failurePolicy` for an admission webhook is `Fail`.

## Monitoring admission webhooks

The API server provides ways to monitor admission webhook behaviors. These monitoring mechanisms
help cluster admins to answer questions like:

1. Which mutating webhook mutated the object in an API request?
2. What change did the mutating webhook apply to the object?
3. Which webhooks are frequently rejecting API requests? What's the reason for a rejection?

### Mutating webhook auditing annotations

Sometimes it's useful to know which mutating webhook mutated the object in an API request, and what
change the webhook applied.

The Kubernetes API server performs [auditing](/docs/tasks/debug/debug-cluster/audit/) on each
mutating webhook invocation. Each invocation generates an auditing annotation capturing if a
request object is mutated by the invocation, and optionally generates an annotation capturing the
applied patch from the webhook admission response. The annotations are set in the audit event for
the given request on the given stage of its execution, which is then pre-processed according to a
certain policy and written to a backend.

The audit level
of an event determines which annotations get recorded:

* At `Metadata` audit level or higher, an annotation with key
  `mutation.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` gets logged with JSON
  payload indicating a webhook gets invoked for the given request and whether it mutated the object or not.

  For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is
  ordered the third in the mutating webhook chain, and didn't mutate the request object during the
  invocation:

  ```yaml
  # the audit event recorded
  {
      "kind": "Event",
      "apiVersion": "audit.k8s.io/v1",
      "annotations": {
          "mutation.webhook.admission.k8s.io/round_1_index_2": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook.example.com\",\"mutated\": false}"
          # other annotations
          ...
      }
      # other fields
      ...
  }
  ```

  ```yaml
  # the annotation value deserialized
  {
      "configuration": "my-mutating-webhook-configuration.example.com",
      "webhook": "my-webhook.example.com",
      "mutated": false
  }
  ```

  The following annotation gets recorded for a webhook being invoked in the first round. The
  webhook is ordered the first in the mutating webhook chain, and mutated the request object during
  the invocation:

  ```yaml
  # the audit event recorded
  {
      "kind": "Event",
      "apiVersion": "audit.k8s.io/v1",
      "annotations": {
          "mutation.webhook.admission.k8s.io/round_0_index_0": "{\"configuration\":\"my-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"mutated\": true}"
          # other annotations
          ...
      }
      # other fields
      ...
  }
  ```

  ```yaml
  # the annotation value deserialized
  {
      "configuration": "my-mutating-webhook-configuration.example.com",
      "webhook": "my-webhook-always-mutate.example.com",
      "mutated": true
  }
  ```
* At `Request` audit level or higher, an annotation with key
  `patch.webhook.admission.k8s.io/round_{round idx}_index_{order idx}` gets logged with JSON
  payload indicating a webhook gets invoked for the given request and what patch gets applied to
  the request object.

  For example, the following annotation gets recorded for a webhook being reinvoked. The webhook is
  ordered the fourth in the mutating webhook chain, and responded with a JSON patch which got
  applied to the request object:

  ```yaml
  # the audit event recorded
  {
      "kind": "Event",
      "apiVersion": "audit.k8s.io/v1",
      "annotations": {
          "patch.webhook.admission.k8s.io/round_1_index_3": "{\"configuration\":\"my-other-mutating-webhook-configuration.example.com\",\"webhook\":\"my-webhook-always-mutate.example.com\",\"patch\":[{\"op\":\"add\",\"path\":\"/data/mutation-stage\",\"value\":\"yes\"}],\"patchType\":\"JSONPatch\"}"
          # other annotations
          ...
      }
      # other fields
      ...
  }
  ```

  ```yaml
  # the annotation value deserialized
  {
      "configuration": "my-other-mutating-webhook-configuration.example.com",
      "webhook": "my-webhook-always-mutate.example.com",
      "patchType": "JSONPatch",
      "patch": [
          {
              "op": "add",
              "path": "/data/mutation-stage",
              "value": "yes"
          }
      ]
  }
  ```

### Admission webhook metrics

The API server exposes Prometheus metrics from the `/metrics` endpoint, which can be used for
monitoring and diagnosing API server status. The following metrics record status related to
admission webhooks.

#### API server admission webhook rejection count

Sometimes it's useful to know which admission webhooks are frequently rejecting API requests, and
the reason for a rejection.

The API server exposes a Prometheus counter metric recording admission webhook rejections. The
metrics are labelled to identify the causes of webhook rejection(s):

* `name`: the name of the webhook that rejected a request.
* `operation`: the operation type of the request, can be one of `CREATE`, `UPDATE`, `DELETE` and `CONNECT`.
* `type`: the admission webhook type, can be one of `admit` and `validating`.
* `error_type`: identifies if an error occurred during the webhook invocation
  that caused the rejection. Its value can be one of:
  * `calling_webhook_error`: unrecognized errors or timeout errors from the admission webhook happened and the
    webhook's [Failure policy](#failure-policy) is set to `Fail`.
  * `no_error`: no error occurred. The webhook rejected the request with `allowed: false` in the admission
    response. The metrics label `rejection_code` records the `.status.code` set in the admission response.
  * `apiserver_internal_error`: an API server internal error happened.
* `rejection_code`: the HTTP status code set in the admission response when a
  webhook rejected a request.

Example of the rejection count metrics:

```
# HELP apiserver_admission_webhook_rejection_count [ALPHA] Admission webhook rejection count, identified by name and broken out for each admission type (validating or admit) and operation. Additional labels specify an error type (calling_webhook_error or apiserver_internal_error if an error occurred; no_error otherwise) and optionally a non-zero rejection code if the webhook rejects the request with an HTTP status code (honored by the apiserver when the code is greater or equal to 400). Codes greater than 600 are truncated to 600, to keep the metrics cardinality bounded.
# TYPE apiserver_admission_webhook_rejection_count counter
apiserver_admission_webhook_rejection_count{error_type="calling_webhook_error",name="always-timeout-webhook.example.com",operation="CREATE",rejection_code="0",type="validating"} 1
apiserver_admission_webhook_rejection_count{error_type="calling_webhook_error",name="invalid-admission-response-webhook.example.com",operation="CREATE",rejection_code="0",type="validating"} 1
apiserver_admission_webhook_rejection_count{error_type="no_error",name="deny-unwanted-configmap-data.example.com",operation="CREATE",rejection_code="400",type="validating"} 13
```

## Best practices and warnings

### Idempotence

An idempotent mutating admission webhook is able to successfully process an object it has already
admitted and potentially modified. The admission can be applied multiple times without changing the
result beyond the initial application.

Example of idempotent mutating admission webhooks:

1. For a `CREATE` pod request, set the field `.spec.securityContext.runAsNonRoot` of the
   pod to true, to enforce security best practices.
2. For a `CREATE` pod request, if the field `.spec.containers[].resources.limits`
   of a container is not set, set default resource limits.
3. For a `CREATE` pod request, inject a sidecar container with name `foo-sidecar` if no container
   with the name `foo-sidecar` already exists.

In the cases above, the webhook can be safely reinvoked, or admit an object that already has the
fields set.

Example of non-idempotent mutating admission webhooks:

1. For a `CREATE` pod request, inject a sidecar container with name `foo-sidecar`
   suffixed with the current timestamp (e.g. `foo-sidecar-19700101-000000`).
2. For a `CREATE`/`UPDATE` pod request, reject if the pod has label `env` set,
   otherwise add an `env: prod` label to the pod.
3. For a `CREATE` pod request, blindly append a sidecar container named
   `foo-sidecar` without looking to see if there is already a `foo-sidecar`
   container in the pod.

In the first case above, reinvoking the webhook can result in the same sidecar being injected
multiple times to a pod, each time with a different container name. Similarly the webhook can
inject duplicated containers if the sidecar already exists in a user-provided pod.

In the second case above, reinvoking the webhook will result in the webhook failing on its own
output.

In the third case above,
reinvoking the webhook will result in duplicated containers in the pod spec, which makes the
request invalid and rejected by the API server.

### Intercepting all versions of an object

It is recommended that admission webhooks should always intercept all versions of an object by
setting `.webhooks[].matchPolicy` to `Equivalent`. It is also recommended that admission webhooks
should prefer registering for stable versions of resources. Failure to intercept all versions of an
object can result in admission policies not being enforced for requests in certain versions. See
[Matching requests: matchPolicy](#matching-requests-matchpolicy) for examples.

### Availability

It is recommended that admission webhooks should evaluate as quickly as possible (typically in
milliseconds), since they add to API request latency. It is encouraged to use a small timeout for
webhooks. See [Timeouts](#timeouts) for more detail.

It is recommended that admission webhooks should leverage some format of load-balancing, to provide
high availability and performance benefits. If a webhook is running within the cluster, you can run
multiple webhook backends behind a service to leverage the load-balancing that service supports.

### Guaranteeing the final state of the object is seen

Admission webhooks that need to guarantee they see the final state of the object in order to
enforce policy should use a validating admission webhook, since objects can be modified after being
seen by mutating webhooks.

For example, a mutating admission webhook is configured to inject a sidecar container with name
`foo-sidecar` on every `CREATE` pod request. If the sidecar *must* be present, a validating
admission webhook should also be configured to intercept `CREATE` pod requests, and validate that a
container with name `foo-sidecar` with the expected configuration exists in the to-be-created
object.

### Avoiding deadlocks in self-hosted webhooks

A webhook running inside the cluster might cause deadlocks for
its own deployment if it is configured to intercept resources required to start its own pods.

For example, a mutating admission webhook is configured to admit `CREATE` pod requests only if a
certain label is set in the pod (e.g. `env: prod`). The webhook server runs in a deployment which
doesn't set the `env` label. When a node that runs the webhook server pods becomes unhealthy, the
webhook deployment will try to reschedule the pods to another node. However the requests will get
rejected by the existing webhook server since the `env` label is unset, and the migration cannot
happen.

It is recommended to exclude the namespace where your webhook is running with a
[namespaceSelector](#matching-requests-namespaceselector).

### Side effects

It is recommended that admission webhooks should avoid side effects if possible, which means the
webhooks operate only on the content of the `AdmissionReview` sent to them, and do not make
out-of-band changes. The `.webhooks[].sideEffects` field should be set to `None` if a webhook
doesn't have any side effect.

If side effects are required during the admission evaluation, they must be suppressed when
processing an `AdmissionReview` object with `dryRun` set to `true`, and the
`.webhooks[].sideEffects` field should be set to `NoneOnDryRun`. See
[Side effects](#side-effects) for more detail.

### Avoiding operating on the kube-system namespace

The `kube-system` namespace contains objects created by the Kubernetes system, e.g. service
accounts for the control plane components, pods like `kube-dns`. Accidentally mutating or rejecting
requests in the `kube-system` namespace may cause the control plane components to stop functioning
or introduce unknown behavior. If your admission webhooks don't intend to modify the behavior of
the Kubernetes control plane, exclude the `kube-system` namespace from being intercepted using a
[namespaceSelector](#matching-requests-namespaceselector).
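Putting several of the recommendations above together, a `MutatingWebhookConfiguration` that skips
both `kube-system` and the namespace hosting the webhook itself might look like the following
sketch. The webhook name, service, namespace, and path are placeholders, not values from this page;
the `kubernetes.io/metadata.name` label is set automatically on namespaces by Kubernetes:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com          # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None                     # webhook makes no out-of-band changes
  timeoutSeconds: 5                     # keep the timeout small to limit added latency
  reinvocationPolicy: IfNeeded          # requires the webhook to be idempotent
  namespaceSelector:
    matchExpressions:
    # exclude kube-system and the webhook's own namespace to avoid
    # breaking the control plane or deadlocking the webhook deployment
    - key: kubernetes.io/metadata.name
      operator: NotIn
      values: ["kube-system", "my-webhook-namespace"]
  clientConfig:
    service:
      namespace: my-webhook-namespace   # hypothetical namespace
      name: my-webhook-service          # hypothetical service
      path: /mutate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```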
{"questions":"kubernetes reference weight 95 tallclair liggitt contenttype concept title Mapping PodSecurityPolicies to Pod Security Standards reviewers overview","answers":"---\nreviewers:\n- tallclair\n- liggitt\ntitle: Mapping PodSecurityPolicies to Pod Security Standards\ncontent_type: concept\nweight: 95\n---\n\n<!-- overview -->\nThe tables below enumerate the configuration parameters on\n`PodSecurityPolicy` objects, whether the field mutates\nand\/or validates pods, and how the configuration values map to the\n[Pod Security Standards](\/docs\/concepts\/security\/pod-security-standards\/).\n\nFor each applicable parameter, the allowed values for the\n[Baseline](\/docs\/concepts\/security\/pod-security-standards\/#baseline) and\n[Restricted](\/docs\/concepts\/security\/pod-security-standards\/#restricted) profiles are listed.\nAnything outside the allowed values for those profiles would fall under the\n[Privileged](\/docs\/concepts\/security\/pod-security-standards\/#privileged) profile. 
\"No opinion\"\nmeans all values are allowed under all Pod Security Standards.\n\nFor a step-by-step migration guide, see\n[Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller](\/docs\/tasks\/configure-pod-container\/migrate-from-psp\/).\n\n<!-- body -->\n\n## PodSecurityPolicy Spec\n\nThe fields enumerated in this table are part of the `PodSecurityPolicySpec`, which is specified\nunder the `.spec` field path.\n\n<table class=\"no-word-break\">\n  <caption style=\"display:none\">Mapping PodSecurityPolicySpec fields to Pod Security Standards<\/caption>\n  <tbody>\n    <tr>\n      <th><code>PodSecurityPolicySpec<\/code><\/th>\n      <th>Type<\/th>\n      <th>Pod Security Standards Equivalent<\/th>\n    <\/tr>\n    <tr>\n      <td><code>privileged<\/code><\/td>\n      <td>Validating<\/td>\n      <td><b>Baseline & Restricted<\/b>: <code>false<\/code> \/ undefined \/ nil<\/td>\n    <\/tr>\n    <tr>\n      <td><code>defaultAddCapabilities<\/code><\/td>\n      <td>Mutating & Validating<\/td>\n      <td>Requirements match <code>allowedCapabilities<\/code> below.<\/td>\n    <\/tr>\n    <tr>\n      <td><code>allowedCapabilities<\/code><\/td>\n      <td>Validating<\/td>\n      <td>\n        <p><b>Baseline<\/b>: subset of<\/p>\n        <ul>\n          <li><code>AUDIT_WRITE<\/code><\/li>\n          <li><code>CHOWN<\/code><\/li>\n          <li><code>DAC_OVERRIDE<\/code><\/li>\n          <li><code>FOWNER<\/code><\/li>\n          <li><code>FSETID<\/code><\/li>\n          <li><code>KILL<\/code><\/li>\n          <li><code>MKNOD<\/code><\/li>\n          <li><code>NET_BIND_SERVICE<\/code><\/li>\n          <li><code>SETFCAP<\/code><\/li>\n          <li><code>SETGID<\/code><\/li>\n          <li><code>SETPCAP<\/code><\/li>\n          <li><code>SETUID<\/code><\/li>\n          <li><code>SYS_CHROOT<\/code><\/li>\n        <\/ul>\n        <p><b>Restricted<\/b>: empty \/ undefined \/ nil OR a list containing <i>only<\/i> <code>NET_BIND_SERVICE<\/code>\n      
<\/td>\n    <\/tr>\n    <tr>\n      <td><code>requiredDropCapabilities<\/code><\/td>\n      <td>Mutating & Validating<\/td>\n      <td>\n        <p><b>Baseline<\/b>: no opinion<\/p>\n        <p><b>Restricted<\/b>: must include <code>ALL<\/code><\/p>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td><code>volumes<\/code><\/td>\n      <td>Validating<\/td>\n      <td>\n        <p><b>Baseline<\/b>: anything except<\/p>\n        <ul>\n          <li><code>hostPath<\/code><\/li>\n          <li><code>*<\/code><\/li>\n        <\/ul>\n        <p><b>Restricted<\/b>: subset of<\/p>\n        <ul>\n          <li><code>configMap<\/code><\/li>\n          <li><code>csi<\/code><\/li>\n          <li><code>downwardAPI<\/code><\/li>\n          <li><code>emptyDir<\/code><\/li>\n          <li><code>ephemeral<\/code><\/li>\n          <li><code>persistentVolumeClaim<\/code><\/li>\n          <li><code>projected<\/code><\/li>\n          <li><code>secret<\/code><\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td><code>hostNetwork<\/code><\/td>\n      <td>Validating<\/td>\n      <td><b>Baseline & Restricted<\/b>: <code>false<\/code> \/ undefined \/ nil<\/td>\n    <\/tr>\n    <tr>\n      <td><code>hostPorts<\/code><\/td>\n      <td>Validating<\/td>\n      <td><b>Baseline & Restricted<\/b>: undefined \/ nil \/ empty<\/td>\n    <\/tr>\n    <tr>\n      <td><code>hostPID<\/code><\/td>\n      <td>Validating<\/td>\n      <td><b>Baseline & Restricted<\/b>: <code>false<\/code> \/ undefined \/ nil<\/td>\n    <\/tr>\n    <tr>\n      <td><code>hostIPC<\/code><\/td>\n      <td>Validating<\/td>\n      <td><b>Baseline & Restricted<\/b>: <code>false<\/code> \/ undefined \/ nil<\/td>\n    <\/tr>\n    <tr>\n      <td><code>seLinux<\/code><\/td>\n      <td>Mutating & Validating<\/td>\n      <td>\n        <p><b>Baseline & Restricted<\/b>:\n        <code>seLinux.rule<\/code> is <code>MustRunAs<\/code>, with the following <code>options<\/code><\/p>\n        <ul>\n          <li><code>user<\/code> is 
unset (<code>\"\"<\/code> \/ undefined \/ nil)<\/li>\n          <li><code>role<\/code> is unset (<code>\"\"<\/code> \/ undefined \/ nil)<\/li>\n          <li><code>type<\/code> is unset or one of: <code>container_t, container_init_t, container_kvm_t, container_engine_t<\/code><\/li>\n          <li><code>level<\/code> is anything<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td><code>runAsUser<\/code><\/td>\n      <td>Mutating & Validating<\/td>\n      <td>\n        <p><b>Baseline<\/b>: Anything<\/p>\n        <p><b>Restricted<\/b>: <code>rule<\/code> is <code>MustRunAsNonRoot<\/code><\/p>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td><code>runAsGroup<\/code><\/td>\n      <td>Mutating (MustRunAs) & Validating<\/td>\n      <td>\n        <i>No opinion<\/i>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td><code>supplementalGroups<\/code><\/td>\n      <td>Mutating & Validating<\/td>\n      <td>\n        <i>No opinion<\/i>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td><code>fsGroup<\/code><\/td>\n      <td>Mutating & Validating<\/td>\n      <td>\n        <i>No opinion<\/i>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td><code>readOnlyRootFilesystem<\/code><\/td>\n      <td>Mutating & Validating<\/td>\n      <td>\n        <i>No opinion<\/i>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td><code>defaultAllowPrivilegeEscalation<\/code><\/td>\n      <td>Mutating<\/td>\n      <td>\n        <i>No opinion (non-validating)<\/i>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td><code>allowPrivilegeEscalation<\/code><\/td>\n      <td>Mutating & Validating<\/td>\n      <td>\n        <p><i>Only mutating if set to <code>false<\/code><\/i><\/p>\n        <p><b>Baseline<\/b>: No opinion<\/p>\n        <p><b>Restricted<\/b>: <code>false<\/code><\/p>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td><code>allowedHostPaths<\/code><\/td>\n      <td>Validating<\/td>\n      <td><i>No opinion (volumes takes precedence)<\/i><\/td>\n    <\/tr>\n    <tr>\n      
<td><code>allowedFlexVolumes<\/code><\/td>\n      <td>Validating<\/td>\n      <td><i>No opinion (volumes takes precedence)<\/i><\/td>\n    <\/tr>\n    <tr>\n      <td><code>allowedCSIDrivers<\/code><\/td>\n      <td>Validating<\/td>\n      <td><i>No opinion (volumes takes precedence)<\/i><\/td>\n    <\/tr>\n    <tr>\n      <td><code>allowedUnsafeSysctls<\/code><\/td>\n      <td>Validating<\/td>\n      <td><b>Baseline & Restricted<\/b>: undefined \/ nil \/ empty<\/td>\n    <\/tr>\n    <tr>\n      <td><code>forbiddenSysctls<\/code><\/td>\n      <td>Validating<\/td>\n      <td><i>No opinion<\/i><\/td>\n    <\/tr>\n    <tr>\n      <td><code>allowedProcMountTypes<\/code><br><i>(alpha feature)<\/i><\/td>\n      <td>Validating<\/td>\n      <td><b>Baseline & Restricted<\/b>: <code>[\"Default\"]<\/code> OR undefined \/ nil \/ empty<\/td>\n    <\/tr>\n    <tr>\n      <td><code>runtimeClass<\/code><br><code>&nbsp;.defaultRuntimeClassName<\/code><\/td>\n      <td>Mutating<\/td>\n      <td><i>No opinion<\/i><\/td>\n    <\/tr>\n    <tr>\n      <td><code>runtimeClass<\/code><br><code>&nbsp;.allowedRuntimeClassNames<\/code><\/td>\n      <td>Validating<\/td>\n      <td><i>No opinion<\/i><\/td>\n    <\/tr>\n  <\/tbody>\n<\/table>\n\n## PodSecurityPolicy annotations\n\nThe [annotations](\/docs\/concepts\/overview\/working-with-objects\/annotations\/) enumerated in this\ntable can be specified under `.metadata.annotations` on the PodSecurityPolicy object.\n\n<table class=\"no-word-break\">\n  <caption style=\"display:none\">Mapping PodSecurityPolicy annotations to Pod Security Standards<\/caption>\n  <tbody>\n    <tr>\n      <th><code>PSP Annotation<\/code><\/th>\n      <th>Type<\/th>\n      <th>Pod Security Standards Equivalent<\/th>\n    <\/tr>\n    <tr>\n      <td><code>seccomp.security.alpha.kubernetes.io<\/code><br><code>\/defaultProfileName<\/code><\/td>\n      <td>Mutating<\/td>\n      <td><i>No opinion<\/i><\/td>\n    <\/tr>\n    <tr>\n      
<td><code>seccomp.security.alpha.kubernetes.io<\/code><br><code>\/allowedProfileNames<\/code><\/td>\n      <td>Validating<\/td>\n      <td>\n        <p><b>Baseline<\/b>: <code>\"runtime\/default,\"<\/code> <i>(Trailing comma to allow unset)<\/i><\/p>\n        <p><b>Restricted<\/b>: <code>\"runtime\/default\"<\/code> <i>(No trailing comma)<\/i><\/p>\n        <p><i><code>localhost\/*<\/code> values are also permitted for both Baseline & Restricted.<\/i><\/p>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td><code>apparmor.security.beta.kubernetes.io<\/code><br><code>\/defaultProfileName<\/code><\/td>\n      <td>Mutating<\/td>\n      <td><i>No opinion<\/i><\/td>\n    <\/tr>\n    <tr>\n      <td><code>apparmor.security.beta.kubernetes.io<\/code><br><code>\/allowedProfileNames<\/code><\/td>\n      <td>Validating<\/td>\n      <td>\n        <p><b>Baseline<\/b>: <code>\"runtime\/default,\"<\/code> <i>(Trailing comma to allow unset)<\/i><\/p>\n        <p><b>Restricted<\/b>: <code>\"runtime\/default\"<\/code> <i>(No trailing comma)<\/i><\/p>\n        <p><i><code>localhost\/*<\/code> values are also permitted for both Baseline & Restricted.<\/i><\/p>\n      <\/td>\n    <\/tr>\n  <\/tbody>\n<\/table>","site":"kubernetes reference"}
{"questions":"kubernetes reference title TLS bootstrapping liggitt contenttype concept mikedanese reviewers awly weight 120 smarterclayton","answers":"---\nreviewers:\n- mikedanese\n- liggitt\n- smarterclayton\n- awly\ntitle: TLS bootstrapping\ncontent_type: concept\nweight: 120\n---\n\n<!-- overview -->\n\nIn a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need\nto communicate with Kubernetes control plane components, specifically kube-apiserver.\nIn order to ensure that communication is kept private, not interfered with, and ensure that\neach component of the cluster is talking to another trusted component, we strongly\nrecommend using client TLS certificates on nodes.\n\nThe normal process of bootstrapping these components, especially worker nodes that need certificates\nso they can communicate safely with kube-apiserver, can be a challenging process as it is often outside\nof the scope of Kubernetes and requires significant additional work.\nThis in turn, can make it challenging to initialize or scale a cluster.\n\nIn order to simplify the process, beginning in version 1.4, Kubernetes introduced a certificate request\nand signing API. The proposal can be found [here](https:\/\/github.com\/kubernetes\/kubernetes\/pull\/20439).\n\nThis document describes the process of node initialization, how to set up TLS client certificate bootstrapping for\nkubelets, and how it works.\n\n<!-- body -->\n\n## Initialization process\n\nWhen a worker node starts up, the kubelet does the following:\n\n1. Look for its `kubeconfig` file\n1. Retrieve the URL of the API server and credentials, normally a TLS key and signed certificate from the `kubeconfig` file\n1. 
Attempt to communicate with the API server using the credentials.\n\nAssuming that the kube-apiserver successfully validates the kubelet's credentials,\nit will treat the kubelet as a valid node, and begin to assign pods to it.\n\nNote that the above process depends upon:\n\n* Existence of a key and certificate on the local host in the `kubeconfig`\n* The certificate having been signed by a Certificate Authority (CA) trusted by the kube-apiserver\n\nAll of the following are responsibilities of whoever sets up and manages the cluster:\n\n1. Creating the CA key and certificate\n1. Distributing the CA certificate to the control plane nodes, where kube-apiserver is running\n1. Creating a key and certificate for each kubelet; strongly recommended to have a unique one, with a unique CN, for each kubelet\n1. Signing the kubelet certificate using the CA key\n1. Distributing the kubelet key and signed certificate to the specific node on which the kubelet is running\n\nThe TLS Bootstrapping described in this document is intended to simplify, and partially or even\ncompletely automate, steps 3 onwards, as these are the most common when initializing or scaling\na cluster.\n\n### Bootstrap initialization\n\nIn the bootstrap initialization process, the following occurs:\n\n1. kubelet begins\n1. kubelet sees that it does _not_ have a `kubeconfig` file\n1. kubelet searches for and finds a `bootstrap-kubeconfig` file\n1. kubelet reads its bootstrap file, retrieving the URL of the API server and a limited usage \"token\"\n1. kubelet connects to the API server, authenticates using the token\n1. kubelet now has limited credentials to create and retrieve a certificate signing request (CSR)\n1. kubelet creates a CSR for itself with the signerName set to `kubernetes.io\/kube-apiserver-client-kubelet`\n1. 
CSR is approved in one of two ways:\n   * If configured, kube-controller-manager automatically approves the CSR\n   * If configured, an outside process, possibly a person, approves the CSR using the Kubernetes API or via `kubectl`\n1. Certificate is created for the kubelet\n1. Certificate is issued to the kubelet\n1. kubelet retrieves the certificate\n1. kubelet creates a proper `kubeconfig` with the key and signed certificate\n1. kubelet begins normal operation\n1. Optional: if configured, kubelet automatically requests renewal of the certificate when it is close to expiry\n1. The renewed certificate is approved and issued, either automatically or manually, depending on configuration.\n\nThe rest of this document describes the necessary steps to configure TLS Bootstrapping, and its limitations.\n\n## Configuration\n\nTo configure for TLS bootstrapping and optional automatic approval, you must configure options on the following components:\n\n* kube-apiserver\n* kube-controller-manager\n* kubelet\n* in-cluster resources: `ClusterRoleBinding` and potentially `ClusterRole`\n\nIn addition, you need your Kubernetes Certificate Authority (CA).\n\n## Certificate Authority\n\nAs without bootstrapping, you will need a Certificate Authority (CA) key and certificate.\nAs without bootstrapping, these will be used to sign the kubelet certificate. 
As before,\nit is your responsibility to distribute them to control plane nodes.\n\nFor the purposes of this document, we will assume these have been distributed to control\nplane nodes at `\/var\/lib\/kubernetes\/ca.pem` (certificate) and `\/var\/lib\/kubernetes\/ca-key.pem` (key).\nWe will refer to these as \"Kubernetes CA certificate and key\".\n\nAll Kubernetes components that use these certificates - kubelet, kube-apiserver,\nkube-controller-manager - assume the key and certificate to be PEM-encoded.\n\n## kube-apiserver configuration\n\nThe kube-apiserver has several requirements to enable TLS bootstrapping:\n\n* Recognizing CA that signs the client certificate\n* Authenticating the bootstrapping kubelet to the `system:bootstrappers` group\n* Authorize the bootstrapping kubelet to create a certificate signing request (CSR)\n\n### Recognizing client certificates\n\nThis is normal for all client certificate authentication.\nIf not already set, add the `--client-ca-file=FILENAME` flag to the kube-apiserver command to enable\nclient certificate authentication, referencing a certificate authority bundle\ncontaining the signing certificate, for example\n`--client-ca-file=\/var\/lib\/kubernetes\/ca.pem`.\n\n### Initial bootstrap authentication\n\nIn order for the bootstrapping kubelet to connect to kube-apiserver and request a certificate,\nit must first authenticate to the server. You can use any\n[authenticator](\/docs\/reference\/access-authn-authz\/authentication\/) that can authenticate the kubelet.\n\nWhile any authentication strategy can be used for the kubelet's initial\nbootstrap credentials, the following two authenticators are recommended for ease\nof provisioning.\n\n1. [Bootstrap Tokens](#bootstrap-tokens)\n1. 
[Token authentication file](#token-authentication-file)\n\nUsing bootstrap tokens is a simpler and more easily managed method to authenticate kubelets,\nand does not require any additional flags when starting kube-apiserver.\n\nWhichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to:\n\n1. create and retrieve CSRs\n1. be automatically approved to request node client certificates, if automatic approval is enabled.\n\nA kubelet authenticating using bootstrap tokens is authenticated as a user in the group\n`system:bootstrappers`, which is the standard method to use.\n\nAs this feature matures, you\nshould ensure tokens are bound to a Role Based Access Control (RBAC) policy\nwhich limits requests (using the [bootstrap token](\/docs\/reference\/access-authn-authz\/bootstrap-tokens\/)) strictly to client\nrequests related to certificate provisioning. With RBAC in place, scoping the\ntokens to a group allows for great flexibility. For example, you could disable a\nparticular bootstrap group's access when you are done provisioning the nodes.\n\n#### Bootstrap tokens\n\nBootstrap tokens are described in detail [here](\/docs\/reference\/access-authn-authz\/bootstrap-tokens\/).\nThese are tokens that are stored as secrets in the Kubernetes cluster, and then issued to the individual kubelet.\nYou can use a single token for an entire cluster, or issue one per worker node.\n\nThe process is two-fold:\n\n1. Create a Kubernetes secret with the token ID, secret and scope(s).\n1. Issue the token to the kubelet\n\nFrom the kubelet's perspective, one token is like another and has no special meaning.\nFrom the kube-apiserver's perspective, however, the bootstrap token is special.\nDue to its `type`, `namespace` and `name`, kube-apiserver recognizes it as a special token,\nand grants anyone authenticating with that token special bootstrap rights, notably treating\nthem as a member of the `system:bootstrappers` group. 
This fulfills a basic requirement\nfor TLS bootstrapping.\n\nThe details for creating the secret are available [here](\/docs\/reference\/access-authn-authz\/bootstrap-tokens\/).\n\nIf you want to use bootstrap tokens, you must enable it on kube-apiserver with the flag:\n\n```console\n--enable-bootstrap-token-auth=true\n```\n\n#### Token authentication file\n\nkube-apiserver has the ability to accept tokens as authentication.\nThese tokens are arbitrary but should represent at least 128 bits of entropy derived\nfrom a secure random number generator (such as `\/dev\/urandom` on most modern Linux\nsystems). There are multiple ways you can generate a token. For example:\n\n```shell\nhead -c 16 \/dev\/urandom | od -An -t x | tr -d ' '\n```\n\nThis will generate tokens that look like `02b50b05283e98dd0fd71db496ef01e8`.\n\nThe token file should look like the following example, where the first three\nvalues can be anything and the quoted group name should be as depicted:\n\n```console\n02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,\"system:bootstrappers\"\n```\n\nAdd the `--token-auth-file=FILENAME` flag to the kube-apiserver command (in your\nsystemd unit file perhaps) to enable the token file. 
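The token-generation and token-file steps above can be sketched end to end. A minimal example, reusing the generation command from the text (the output path, user name, and UID here are illustrative, matching the sample row; the quoted group must be `system:bootstrappers` as shown):

```shell
# Illustrative sketch: generate a token and build a one-line token auth file.
# Path, user name, and UID are placeholders; the group is required as quoted.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:bootstrappers\"" > /tmp/token-auth-file.csv
# kube-apiserver would then be started with --token-auth-file=/tmp/token-auth-file.csv
cat /tmp/token-auth-file.csv
```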
See docs\n[here](\/docs\/reference\/access-authn-authz\/authentication\/#static-token-file) for\nfurther details.\n\n### Authorize kubelet to create CSR\n\nNow that the bootstrapping node is _authenticated_ as part of the\n`system:bootstrappers` group, it needs to be _authorized_ to create a\ncertificate signing request (CSR) as well as retrieve it when done.\nFortunately, Kubernetes ships with a `ClusterRole` with precisely these (and\nonly these) permissions, `system:node-bootstrapper`.\n\nTo do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers`\ngroup to the cluster role `system:node-bootstrapper`.\n\n```yaml\n# enable bootstrapping nodes to create CSR\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: create-csrs-for-bootstrapping\nsubjects:\n- kind: Group\n  name: system:bootstrappers\n  apiGroup: rbac.authorization.k8s.io\nroleRef:\n  kind: ClusterRole\n  name: system:node-bootstrapper\n  apiGroup: rbac.authorization.k8s.io\n```\n\n## kube-controller-manager configuration\n\nWhile the apiserver receives the requests for certificates from the kubelet and authenticates those requests,\nthe controller-manager is responsible for issuing actual signed certificates.\n\nThe controller-manager performs this function via a certificate-issuing control loop.\nThis takes the form of a\n[cfssl](https:\/\/blog.cloudflare.com\/introducing-cfssl\/) local signer using\nassets on disk. 
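Since the signer reads its CA assets from disk, it can be worth checking that the certificate and key on disk actually pair up before handing them to the controller-manager. A minimal sketch with openssl (the throwaway CA and `/tmp` paths are illustrative, not the real cluster CA; the same digest comparison applies to a real pair):

```shell
# Illustrative only: create a throwaway CA, then confirm the certificate and
# key match by comparing the digests of their public keys.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-kubernetes-ca" \
  -keyout /tmp/ca-key.pem -out /tmp/ca.pem 2>/dev/null
cert_pub=$(openssl x509 -in /tmp/ca.pem -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in /tmp/ca-key.pem -pubout 2>/dev/null | openssl sha256)
[ "$cert_pub" = "$key_pub" ] && echo "CA certificate and key match"
```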
Currently, all certificates issued have one year validity and a\ndefault set of key usages.\n\nIn order for the controller-manager to sign certificates, it needs the following:\n\n* access to the \"Kubernetes CA key and certificate\" that you created and distributed\n* enabling CSR signing\n\n### Access to key and certificate\n\nAs described earlier, you need to create a Kubernetes CA key and certificate, and distribute it to the control plane nodes.\nThese will be used by the controller-manager to sign the kubelet certificates.\n\nSince these signed certificates will, in turn, be used by the kubelet to authenticate as a regular kubelet\nto kube-apiserver, it is important that the CA provided to the controller-manager at this stage also be\ntrusted by kube-apiserver for authentication. This is provided to kube-apiserver with the flag `--client-ca-file=FILENAME`\n(for example, `--client-ca-file=\/var\/lib\/kubernetes\/ca.pem`), as described in the kube-apiserver configuration section.\n\nTo provide the Kubernetes CA key and certificate to kube-controller-manager, use the following flags:\n\n```shell\n--cluster-signing-cert-file=\"\/etc\/path\/to\/kubernetes\/ca\/ca.crt\" --cluster-signing-key-file=\"\/etc\/path\/to\/kubernetes\/ca\/ca.key\"\n```\n\nFor example:\n\n```shell\n--cluster-signing-cert-file=\"\/var\/lib\/kubernetes\/ca.pem\" --cluster-signing-key-file=\"\/var\/lib\/kubernetes\/ca-key.pem\"\n```\n\nThe validity duration of signed certificates can be configured with flag:\n\n```shell\n--cluster-signing-duration\n```\n\n### Approval\n\nIn order to approve CSRs, you need to tell the controller-manager that it is acceptable to approve them. 
This is done by granting\nRBAC permissions to the correct group.\n\nThere are two distinct sets of permissions:\n\n* `nodeclient`: If a node is creating a new certificate for a node, then it does not have a certificate yet.\n  It is authenticating using one of the tokens listed above, and thus is part of the group `system:bootstrappers`.\n* `selfnodeclient`: If a node is renewing its certificate, then it already has a certificate (by definition),\n  which it uses continuously to authenticate as part of the group `system:nodes`.\n\nTo enable the kubelet to request and receive a new certificate, create a `ClusterRoleBinding` that binds\nthe group in which the bootstrapping node is a member `system:bootstrappers` to the `ClusterRole` that\ngrants it permission, `system:certificates.k8s.io:certificatesigningrequests:nodeclient`:\n\n```yaml\n# Approve all CSRs for the group \"system:bootstrappers\"\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: auto-approve-csrs-for-group\nsubjects:\n- kind: Group\n  name: system:bootstrappers\n  apiGroup: rbac.authorization.k8s.io\nroleRef:\n  kind: ClusterRole\n  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient\n  apiGroup: rbac.authorization.k8s.io\n```\n\nTo enable the kubelet to renew its own client certificate, create a `ClusterRoleBinding` that binds\nthe group in which the fully functioning node is a member `system:nodes` to the `ClusterRole` that\ngrants it permission, `system:certificates.k8s.io:certificatesigningrequests:selfnodeclient`:\n\n```yaml\n# Approve renewal CSRs for the group \"system:nodes\"\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: auto-approve-renewals-for-nodes\nsubjects:\n- kind: Group\n  name: system:nodes\n  apiGroup: rbac.authorization.k8s.io\nroleRef:\n  kind: ClusterRole\n  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient\n  apiGroup: rbac.authorization.k8s.io\n```\n\nThe 
`csrapproving` controller that ships as part of\n[kube-controller-manager](\/docs\/reference\/command-line-tools-reference\/kube-controller-manager\/) is enabled\nby default. The controller uses the\n[`SubjectAccessReview` API](\/docs\/reference\/access-authn-authz\/authorization\/#checking-api-access) to\ndetermine if a given user is authorized to request a CSR, then approves based on\nthe authorization outcome. To prevent conflicts with other approvers, the\nbuilt-in approver doesn't explicitly deny CSRs. It only ignores unauthorized\nrequests. The controller also prunes expired certificates as part of garbage\ncollection.\n\n## kubelet configuration\n\nFinally, with the control plane nodes properly set up and all of the necessary\nauthentication and authorization in place, we can configure the kubelet.\n\nThe kubelet requires the following configuration to bootstrap:\n\n* A path to store the key and certificate it generates (optional, can use default)\n* A path to a `kubeconfig` file that does not yet exist; it will place the bootstrapped config file here\n* A path to a bootstrap `kubeconfig` file to provide the URL for the server and bootstrap credentials, e.g. a bootstrap token\n* Optional: instructions to rotate certificates\n\nThe bootstrap `kubeconfig` should be in a path available to the kubelet, for example `\/var\/lib\/kubelet\/bootstrap-kubeconfig`.\n\nIts format is identical to a normal `kubeconfig` file. 
A sample file might look as follows:\n\n```yaml\napiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    certificate-authority: \/var\/lib\/kubernetes\/ca.pem\n    server: https:\/\/my.server.example.com:6443\n  name: bootstrap\ncontexts:\n- context:\n    cluster: bootstrap\n    user: kubelet-bootstrap\n  name: bootstrap\ncurrent-context: bootstrap\npreferences: {}\nusers:\n- name: kubelet-bootstrap\n  user:\n    token: 07401b.f395accd246ae52d\n```\n\nThe important elements to note are:\n\n* `certificate-authority`: path to a CA file, used to validate the server certificate presented by kube-apiserver\n* `server`: URL to kube-apiserver\n* `token`: the token to use\n\nThe format of the token does not matter, as long as it matches what kube-apiserver expects. In the above example, we used a bootstrap token.\nAs stated earlier, _any_ valid authentication method can be used, not only tokens.\n\nBecause the bootstrap `kubeconfig` _is_ a standard `kubeconfig`, you can use `kubectl` to generate it. 
To create the above example file:\n\n```\nkubectl config --kubeconfig=\/var\/lib\/kubelet\/bootstrap-kubeconfig set-cluster bootstrap --server='https:\/\/my.server.example.com:6443' --certificate-authority=\/var\/lib\/kubernetes\/ca.pem\nkubectl config --kubeconfig=\/var\/lib\/kubelet\/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d\nkubectl config --kubeconfig=\/var\/lib\/kubelet\/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap\nkubectl config --kubeconfig=\/var\/lib\/kubelet\/bootstrap-kubeconfig use-context bootstrap\n```\n\nTo indicate to the kubelet to use the bootstrap `kubeconfig`, use the following kubelet flag:\n\n```\n--bootstrap-kubeconfig=\"\/var\/lib\/kubelet\/bootstrap-kubeconfig\" --kubeconfig=\"\/var\/lib\/kubelet\/kubeconfig\"\n```\n\nWhen starting the kubelet, if the file specified via `--kubeconfig` does not\nexist, the bootstrap kubeconfig specified via `--bootstrap-kubeconfig` is used\nto request a client certificate from the API server. On approval of the\ncertificate request and receipt back by the kubelet, a kubeconfig file\nreferencing the generated key and obtained certificate is written to the path\nspecified by `--kubeconfig`. The certificate and key file will be placed in the\ndirectory specified by `--cert-dir`.\n\n### Client and serving certificates\n\nAll of the above relate to kubelet _client_ certificates, specifically, the certificates a kubelet\nuses to authenticate to kube-apiserver.\n\nA kubelet also can use _serving_ certificates. 
The kubelet itself exposes an https endpoint for certain features.\nTo secure these, the kubelet can do one of:\n\n* use provided key and certificate, via the `--tls-private-key-file` and `--tls-cert-file` flags\n* create self-signed key and certificate, if a key and certificate are not provided\n* request serving certificates from the cluster server, via the CSR API\n\nThe client certificate provided by TLS bootstrapping is signed, by default, for `client auth` only, and thus cannot\nbe used as serving certificates, or `server auth`.\n\nHowever, you _can_ enable its server certificate, at least partially, via certificate rotation.\n\n### Certificate rotation\n\nKubernetes v1.8 and higher kubelet implements features for enabling\nrotation of its client and\/or serving certificates. Note, rotation of serving\ncertificate is a __beta__ feature and requires the `RotateKubeletServerCertificate`\nfeature flag on the kubelet (enabled by default).\n\nYou can configure the kubelet to rotate its client certificates by creating new CSRs\nas its existing credentials expire. To enable this feature, use the `rotateCertificates`\nfield of [kubelet configuration file](\/docs\/tasks\/administer-cluster\/kubelet-config-file\/)\nor pass the following command line argument to the kubelet (deprecated):\n\n```\n--rotate-certificates\n```\n\nEnabling `RotateKubeletServerCertificate` causes the kubelet **both** to request a serving\ncertificate after bootstrapping its client credentials **and** to rotate that\ncertificate. To enable this behavior, use the field `serverTLSBootstrap` of\nthe [kubelet configuration file](\/docs\/tasks\/administer-cluster\/kubelet-config-file\/)\nor pass the following command line argument to the kubelet (deprecated):\n\n```\n--rotate-server-certificates\n```\n\n\nThe CSR approving controllers implemented in core Kubernetes do not\napprove node _serving_ certificates for\n[security reasons](https:\/\/github.com\/kubernetes\/community\/pull\/1982). 
To use\n`RotateKubeletServerCertificate` operators need to run a custom approving\ncontroller, or manually approve the serving certificate requests.\n\nA deployment-specific approval process for kubelet serving certificates should typically only approve CSRs which:\n\n1. are requested by nodes (ensure the `spec.username` field is of the form\n   `system:node:<nodeName>` and `spec.groups` contains `system:nodes`)\n1. request usages for a serving certificate (ensure `spec.usages` contains `server auth`,\n   optionally contains `digital signature` and `key encipherment`, and contains no other usages)\n1. only have IP and DNS subjectAltNames that belong to the requesting node,\n   and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request\n   in `spec.request` to verify `subjectAltNames`)\n\n\n\n## Other authenticating components\n\nAll of TLS bootstrapping described in this document relates to the kubelet. However,\nother components may need to communicate directly with kube-apiserver. Notable is kube-proxy, which\nis part of the Kubernetes node components and runs on every node, but may also include other components such as monitoring or networking.\n\nLike the kubelet, these other components also require a method of authenticating to kube-apiserver.\nYou have several options for generating these credentials:\n\n* The old way: Create and distribute certificates the same way you did for kubelet before TLS bootstrapping\n* DaemonSet: Since the kubelet itself is loaded on each node, and is sufficient to start base services,\n  you can run kube-proxy and other node-specific services not as a standalone process, but rather as a\n  daemonset in the `kube-system` namespace. Since it will be in-cluster, you can give it a proper service\n  account with appropriate permissions to perform its activities. 
This may be the simplest way to configure\n  such services.\n\n## kubectl approval\n\nCSRs can be approved outside of the approval flows built into the controller\nmanager.\n\nThe signing controller does not immediately sign all certificate requests.\nInstead, it waits until they have been flagged with an \"Approved\" status by an\nappropriately-privileged user. This flow is intended to allow for automated\napproval handled by an external approval controller or the approval controller\nimplemented in the core controller-manager. However cluster administrators can\nalso manually approve certificate requests using kubectl. An administrator can\nlist CSRs with `kubectl get csr` and describe one in detail with\n`kubectl describe csr <name>`. An administrator can approve or deny a CSR with\n`kubectl certificate approve <name>` and `kubectl certificate deny <name>`.","site":"kubernetes reference","answers_cleaned":"    reviewers    mikedanese   liggitt   smarterclayton   awly title  TLS bootstrapping content type  concept weight  120           overview      In a Kubernetes cluster  the components on the worker nodes   kubelet and kube proxy   need to communicate with Kubernetes control plane components  specifically kube apiserver  In order to ensure that communication is kept private  not interfered with  and ensure that each component of the cluster is talking to another trusted component  we strongly recommend using client TLS certificates on nodes   The normal process of bootstrapping these components  especially worker nodes that need certificates so they can communicate safely with kube apiserver  can be a challenging process as it is often outside of the scope of Kubernetes and requires significant additional work  This in turn  can make it challenging to initialize or scale a cluster   In order to simplify the process  beginning in version 1 4  Kubernetes introduced a certificate request and signing API  The proposal can be found  here  https   github com 
kubernetes kubernetes pull 20439    This document describes the process of node initialization  how to set up TLS client certificate bootstrapping for kubelets  and how it works        body         Initialization process  When a worker node starts up  the kubelet does the following   1  Look for its  kubeconfig  file 1  Retrieve the URL of the API server and credentials  normally a TLS key and signed certificate from the  kubeconfig  file 1  Attempt to communicate with the API server using the credentials   Assuming that the kube apiserver successfully validates the kubelet s credentials  it will treat the kubelet as a valid node  and begin to assign pods to it   Note that the above process depends upon     Existence of a key and certificate on the local host in the  kubeconfig    The certificate having been signed by a Certificate Authority  CA  trusted by the kube apiserver  All of the following are responsibilities of whoever sets up and manages the cluster   1  Creating the CA key and certificate 1  Distributing the CA certificate to the control plane nodes  where kube apiserver is running 1  Creating a key and certificate for each kubelet  strongly recommended to have a unique one  with a unique CN  for each kubelet 1  Signing the kubelet certificate using the CA key 1  Distributing the kubelet key and signed certificate to the specific node on which the kubelet is running  The TLS Bootstrapping described in this document is intended to simplify  and partially or even completely automate  steps 3 onwards  as these are the most common when initializing or scaling a cluster       Bootstrap initialization  In the bootstrap initialization process  the following occurs   1  kubelet begins 1  kubelet sees that it does  not  have a  kubeconfig  file 1  kubelet searches for and finds a  bootstrap kubeconfig  file 1  kubelet reads its bootstrap file  retrieving the URL of the API server and a limited usage  token  1  kubelet connects to the API server  authenticates 
using the token 1  kubelet now has limited credentials to create and retrieve a certificate signing request  CSR  1  kubelet creates a CSR for itself with the signerName set to  kubernetes io kube apiserver client kubelet  1  CSR is approved in one of two ways       If configured  kube controller manager automatically approves the CSR      If configured  an outside process  possibly a person  approves the CSR using the Kubernetes API or via  kubectl  1  Certificate is created for the kubelet 1  Certificate is issued to the kubelet 1  kubelet retrieves the certificate 1  kubelet creates a proper  kubeconfig  with the key and signed certificate 1  kubelet begins normal operation 1  Optional  if configured  kubelet automatically requests renewal of the certificate when it is close to expiry 1  The renewed certificate is approved and issued  either automatically or manually  depending on configuration   The rest of this document describes the necessary steps to configure TLS Bootstrapping  and its limitations      Configuration  To configure for TLS bootstrapping and optional automatic approval  you must configure options on the following components     kube apiserver   kube controller manager   kubelet   in cluster resources   ClusterRoleBinding  and potentially  ClusterRole   In addition  you need your Kubernetes Certificate Authority  CA       Certificate Authority  As without bootstrapping  you will need a Certificate Authority  CA  key and certificate  As without bootstrapping  these will be used to sign the kubelet certificate  As before  it is your responsibility to distribute them to control plane nodes   For the purposes of this document  we will assume these have been distributed to control plane nodes at   var lib kubernetes ca pem   certificate  and   var lib kubernetes ca key pem   key   We will refer to these as  Kubernetes CA certificate and key    All Kubernetes components that use these certificates   kubelet  kube apiserver  kube controller manager   
assume the key and certificate to be PEM encoded      kube apiserver configuration  The kube apiserver has several requirements to enable TLS bootstrapping     Recognizing CA that signs the client certificate   Authenticating the bootstrapping kubelet to the  system bootstrappers  group   Authorize the bootstrapping kubelet to create a certificate signing request  CSR       Recognizing client certificates  This is normal for all client certificate authentication  If not already set  add the    client ca file FILENAME  flag to the kube apiserver command to enable client certificate authentication  referencing a certificate authority bundle containing the signing certificate  for example    client ca file  var lib kubernetes ca pem        Initial bootstrap authentication  In order for the bootstrapping kubelet to connect to kube apiserver and request a certificate  it must first authenticate to the server  You can use any  authenticator   docs reference access authn authz authentication   that can authenticate the kubelet   While any authentication strategy can be used for the kubelet s initial bootstrap credentials  the following two authenticators are recommended for ease of provisioning   1   Bootstrap Tokens   bootstrap tokens  1   Token authentication file   token authentication file   Using bootstrap tokens is a simpler and more easily managed method to authenticate kubelets  and does not require any additional flags when starting kube apiserver   Whichever method you choose  the requirement is that the kubelet be able to authenticate as a user with the rights to   1  create and retrieve CSRs 1  be automatically approved to request node client certificates  if automatic approval is enabled   A kubelet authenticating using bootstrap tokens is authenticated as a user in the group  system bootstrappers   which is the standard method to use   As this feature matures  you should ensure tokens are bound to a Role Based Access Control  RBAC  policy which limits 
requests  using the  bootstrap token   docs reference access authn authz bootstrap tokens    strictly to client requests related to certificate provisioning  With RBAC in place  scoping the tokens to a group allows for great flexibility  For example  you could disable a particular bootstrap group s access when you are done provisioning the nodes        Bootstrap tokens  Bootstrap tokens are described in detail  here   docs reference access authn authz bootstrap tokens    These are tokens that are stored as secrets in the Kubernetes cluster  and then issued to the individual kubelet  You can use a single token for an entire cluster  or issue one per worker node   The process is two fold   1  Create a Kubernetes secret with the token ID  secret and scope s   1  Issue the token to the kubelet  From the kubelet s perspective  one token is like another and has no special meaning  From the kube apiserver s perspective  however  the bootstrap token is special  Due to its  type    namespace  and  name   kube apiserver recognizes it as a special token  and grants anyone authenticating with that token special bootstrap rights  notably treating them as a member of the  system bootstrappers  group  This fulfills a basic requirement for TLS bootstrapping   The details for creating the secret are available  here   docs reference access authn authz bootstrap tokens     If you want to use bootstrap tokens  you must enable it on kube apiserver with the flag      console   enable bootstrap token auth true           Token authentication file  kube apiserver has the ability to accept tokens as authentication  These tokens are arbitrary but should represent at least 128 bits of entropy derived from a secure random number generator  such as   dev urandom  on most modern Linux systems   There are multiple ways you can generate a token  For example      shell head  c 16  dev urandom   od  An  t x   tr  d          This will generate tokens that look like  02b50b05283e98dd0fd71db496ef01e8   
The token file should look like the following example, where the first three values can be anything and the quoted group name should be as depicted:

```console
02b50b05283e98dd0fd71db496ef01e8,kubelet-bootstrap,10001,"system:bootstrappers"
```

Add the `--token-auth-file=FILENAME` flag to the kube-apiserver command (in your systemd unit file perhaps) to enable the token file. See docs [here](/docs/reference/access-authn-authz/authentication/#static-token-file) for further details.

## Authorize kubelet to create CSR

Now that the bootstrapping node is _authenticated_ as part of the `system:bootstrappers` group, it needs to be _authorized_ to create a certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and only these) permissions, `system:node-bootstrapper`.

To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`.

```yaml
# enable bootstrapping nodes to create CSR
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
```

## kube-controller-manager configuration

While the apiserver receives the requests for certificates from the kubelet and authenticates those requests, the controller-manager is responsible for issuing actual signed certificates.

The controller-manager performs this function via a certificate-issuing control loop. This takes the form of a [cfssl](https://blog.cloudflare.com/introducing-cfssl/) local signer using assets on disk. Currently, all certificates issued have one year validity and a default set of key usages.

In order for the controller-manager to sign certificates, it needs the following:

* access to the "Kubernetes CA key and certificate" that you created and distributed
* enabling CSR signing

### Access to key and certificate

As described earlier, you need to create a Kubernetes CA key and certificate, and distribute it to the control plane nodes. These will be used by the controller-manager to sign the kubelet certificates.

Since these signed certificates will, in turn, be used by the kubelet to authenticate as a regular kubelet to kube-apiserver, it is important that the CA provided to the controller-manager at this stage also be trusted by kube-apiserver for authentication. This is provided to kube-apiserver with the flag `--client-ca-file=FILENAME` (for example, `--client-ca-file=/var/lib/kubernetes/ca.pem`), as described in the kube-apiserver configuration section.

To provide the Kubernetes CA key and certificate to kube-controller-manager, use the following flags:

```shell
--cluster-signing-cert-file="/etc/path/to/kubernetes/ca/ca.crt" --cluster-signing-key-file="/etc/path/to/kubernetes/ca/ca.key"
```

For example:

```shell
--cluster-signing-cert-file="/var/lib/kubernetes/ca.pem" --cluster-signing-key-file="/var/lib/kubernetes/ca-key.pem"
```

The validity duration of signed certificates can be configured with flag:

```shell
--cluster-signing-duration
```

### Approval

In order to approve CSRs, you need to tell the controller-manager that it is acceptable to approve them. This is done by granting RBAC permissions to the correct group.

There are two distinct sets of permissions:

* `nodeclient`: If a node is creating a new certificate for a node, then it does not have a certificate yet. It is authenticating using one of the tokens listed above, and thus is part of the group `system:bootstrappers`.
* `selfnodeclient`: If a node is renewing its certificate, then it already has a certificate (by definition), which it uses continuously to authenticate as part of the group `system:nodes`.

To enable the kubelet to request and receive a new certificate,
create a `ClusterRoleBinding` that binds the group in which the bootstrapping node is a member, `system:bootstrappers`, to the `ClusterRole` that grants it permission, `system:certificates.k8s.io:certificatesigningrequests:nodeclient`:

```yaml
# Approve all CSRs for the group "system:bootstrappers"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
```

To enable the kubelet to renew its own client certificate, create a `ClusterRoleBinding` that binds the group in which the fully functioning node is a member, `system:nodes`, to the `ClusterRole` that grants it permission, `system:certificates.k8s.io:certificatesigningrequests:selfnodeclient`:

```yaml
# Approve renewal CSRs for the group "system:nodes"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
```

The `csrapproving` controller that ships as part of [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) is enabled by default. The controller uses the [`SubjectAccessReview` API](/docs/reference/access-authn-authz/authorization/#checking-api-access) to determine if a given user is authorized to request a CSR, then approves based on the authorization outcome. To prevent conflicts with other approvers, the built-in approver doesn't explicitly deny CSRs. It only ignores unauthorized requests. The controller also prunes expired certificates as part of garbage collection.

## kubelet configuration

Finally, with the control plane nodes properly set up and all of the necessary authentication and authorization in place, we can configure the kubelet.

The kubelet requires the following configuration to bootstrap:

* A path to store the key and certificate it generates (optional, can use default)
* A path to a `kubeconfig` file that does not yet exist; it will place the bootstrapped config file here
* A path to a bootstrap `kubeconfig` file to provide the URL for the server and bootstrap credentials, e.g. a bootstrap token
* Optional: instructions to rotate certificates

The bootstrap `kubeconfig` should be in a path available to the kubelet, for example `/var/lib/kubelet/bootstrap-kubeconfig`.

Its format is identical to a normal `kubeconfig` file. A sample file might look as follows:

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/ca.pem
    server: https://my.server.example.com:6443
  name: bootstrap
contexts:
- context:
    cluster: bootstrap
    user: kubelet-bootstrap
  name: bootstrap
current-context: bootstrap
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 07401b.f395accd246ae52d
```

The important elements to note are:

* `certificate-authority`: path to a CA file, used to validate the server certificate presented by kube-apiserver
* `server`: URL to kube-apiserver
* `token`: the token to use

The format of the token does not matter, as long as it matches what kube-apiserver expects. In the above example, we used a bootstrap token. As stated earlier, _any_ valid authentication method can be used, not only tokens.

Because the bootstrap `kubeconfig` _is_ a standard `kubeconfig`, you can use `kubectl` to generate it. To create the above example file:

```shell
kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://my.server.example.com:6443' --certificate-authority=/var/lib/kubernetes/ca.pem
kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d
kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap
```

To indicate to the kubelet to use the bootstrap `kubeconfig`, use the following kubelet flags:

```console
--bootstrap-kubeconfig="/var/lib/kubelet/bootstrap-kubeconfig" --kubeconfig="/var/lib/kubelet/kubeconfig"
```

When starting the kubelet, if the file specified via `--kubeconfig` does not exist, the bootstrap kubeconfig specified via `--bootstrap-kubeconfig` is used to request a client certificate from the API server. On approval of the certificate request and receipt back by the kubelet, a kubeconfig file referencing the generated key and obtained certificate is written to the path specified by `--kubeconfig`. The certificate and key file will be placed in the directory specified by `--cert-dir`.

### Client and serving certificates

All of the above relate to kubelet _client_ certificates, specifically, the certificates a kubelet uses to authenticate to kube-apiserver.

A kubelet also can use _serving_ certificates. The kubelet itself exposes an https endpoint for certain features. To secure these, the kubelet can do one of:

* use provided key and certificate, via the `--tls-private-key-file` and `--tls-cert-file` flags
* create self-signed key and certificate, if a key and certificate are not provided
* request serving certificates from the cluster server, via the CSR API

The client certificate provided by TLS bootstrapping is signed, by default, for `client auth` only, and thus cannot be used as serving certificates, or `server auth`.

However, you _can_ enable its server certificate, at least partially, via certificate rotation.

### Certificate rotation

Kubernetes v1.8 and higher kubelet implements features for enabling rotation of its client
and/or serving certificates. Note, rotation of serving certificate is a __beta__ feature and requires the `RotateKubeletServerCertificate` feature flag on the kubelet (enabled by default).

You can configure the kubelet to rotate its client certificates by creating new CSRs as its existing credentials expire. To enable this feature, use the `rotateCertificates` field of the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/) or pass the following command line argument to the kubelet (deprecated):

```console
--rotate-certificates
```

Enabling `RotateKubeletServerCertificate` causes the kubelet __both__ to request a serving certificate after bootstrapping its client credentials __and__ to rotate that certificate. To enable this behavior, use the field `serverTLSBootstrap` of the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/) or pass the following command line argument to the kubelet (deprecated):

```console
--rotate-server-certificates
```

The CSR approving controllers implemented in core Kubernetes do not approve node _serving_ certificates for [security reasons](https://github.com/kubernetes/community/pull/1982). To use `RotateKubeletServerCertificate` operators need to run a custom approving controller, or manually approve the serving certificate requests.

A deployment-specific approval process for kubelet serving certificates should typically only approve CSRs which:

1. are requested by nodes (ensure the `spec.username` field is of the form `system:node:<nodeName>` and `spec.groups` contains `system:nodes`)
1. request usages for a serving certificate (ensure `spec.usages` contains `server auth`, optionally contains `digital signature` and `key encipherment`, and contains no other usages)
1. only have IP and DNS subjectAltNames that belong to the requesting node, and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request in `spec.request` to verify `subjectAltNames`)
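The third check above requires decoding the CSR itself. A minimal sketch using `openssl`, assuming `server.csr` is a hypothetical file holding the PEM-encoded request extracted from the CSR object's `spec.request`:

```shell
# Print the subjectAltNames of a PEM-encoded certificate signing request
# so they can be compared against the requesting node's known addresses.
# server.csr is a hypothetical file (the base64-decoded spec.request).
openssl req -in server.csr -noout -text | grep -A1 'Subject Alternative Name'
```

A real approval process would compare the `DNS:` and `IP Address:` entries printed here against the addresses registered for the requesting node before approving.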
## Other authenticating components

All of TLS bootstrapping described in this document relates to the kubelet. However, other components may need to communicate directly with kube-apiserver. Notable is kube-proxy, which is part of the Kubernetes node components and runs on every node, but may also include other components such as monitoring or networking.

Like the kubelet, these other components also require a method of authenticating to kube-apiserver. You have several options for generating these credentials:

* The old way: Create and distribute certificates the same way you did for kubelet before TLS bootstrapping.
* DaemonSet: Since the kubelet itself is loaded on each node, and is sufficient to start base services, you can run kube-proxy and other node-specific services not as a standalone process, but rather as a daemonset in the `kube-system` namespace. Since it will be in-cluster, you can give it a proper service account with appropriate permissions to perform its activities. This may be the simplest way to configure such services.

## kubectl approval

CSRs can be approved outside of the approval flows built into the controller-manager.

The signing controller does not immediately sign all certificate requests. Instead, it waits until they have been flagged with an "Approved" status by an appropriately privileged user. This flow is intended to allow for automated approval handled by an external approval controller or the approval controller implemented in the core controller-manager. However cluster administrators can also manually approve certificate requests using kubectl. An administrator can list CSRs with `kubectl get csr` and describe one in detail with `kubectl describe csr <name>`. An administrator can approve or deny a CSR with `kubectl certificate approve <name>` and `kubectl certificate deny <name>`.
"}
{"questions":"kubernetes reference weight 20 liggitt contenttype concept title Kubernetes API Concepts reviewers lavalamp smarterclayton","answers":"---\ntitle: Kubernetes API Concepts\nreviewers:\n- smarterclayton\n- lavalamp\n- liggitt\ncontent_type: concept\nweight: 20\n---\n\n<!-- overview -->\nThe Kubernetes API is a resource-based (RESTful) programmatic interface\nprovided via HTTP. It supports retrieving, creating, updating, and deleting\nprimary resources via the standard HTTP verbs (POST, PUT, PATCH, DELETE,\nGET).\n\nFor some resources, the API includes additional subresources that allow\nfine-grained authorization (such as separate views for Pod details and\nlog retrievals), and can accept and serve those resources in different\nrepresentations for convenience or efficiency.\n\nKubernetes supports efficient change notifications on resources via\n_watches_:\n\nKubernetes also provides consistent list operations so that API clients can\neffectively cache, track, and synchronize the state of resources.\n\nYou can view the [API reference](\/docs\/reference\/kubernetes-api\/) online,\nor read on to learn about the API in general.\n\n<!-- body -->\n## Kubernetes API terminology {#standard-api-terminology}\n\nKubernetes generally leverages common RESTful terminology to describe the\nAPI concepts:\n\n* A *resource type* is the name used in the URL (`pods`, `namespaces`, `services`)\n* All resource types have a concrete representation (their object schema) which is called a *kind*\n* A list of instances of a resource type is known as a *collection*\n* A single instance of a resource type is called a *resource*, and also usually represents an *object*\n* For some resource types, the API includes one or more *sub-resources*, which are represented as URI paths below the resource\n\nMost Kubernetes API resource types are\n \u2013\nthey represent a concrete instance of a concept on the cluster, like a\npod or namespace. 
A smaller number of API resource types are *virtual* in\nthat they often represent operations on objects, rather than objects, such\nas a permission check\n(use a POST with a JSON-encoded body of `SubjectAccessReview` to the\n`subjectaccessreviews` resource), or the `eviction` sub-resource of a Pod\n(used to trigger\n[API-initiated eviction](\/docs\/concepts\/scheduling-eviction\/api-eviction\/)).\n\n### Object names\n\nAll objects you can create via the API have a unique object\nname to allow idempotent creation and\nretrieval, except that virtual resource types may not have unique names if they are\nnot retrievable, or do not rely on idempotency.\nWithin a namespace, only one object\nof a given kind can have a given name at a time. However, if you delete the object,\nyou can make a new object with the same name. Some objects are not namespaced (for\nexample: Nodes), and so their names must be unique across the whole cluster.\n\n### API verbs\n\nAlmost all object resource types support the standard HTTP verbs - GET, POST, PUT, PATCH,\nand DELETE. Kubernetes also uses its own verbs, which are often written in lowercase to distinguish\nthem from HTTP verbs.\n\nKubernetes uses the term **list** to describe returning a [collection](#collections) of\nresources to distinguish from retrieving a single resource which is usually called\na **get**. If you send an HTTP GET request with the `?watch` query parameter,\nKubernetes calls this a **watch** and not a **get** (see\n[Efficient detection of changes](#efficient-detection-of-changes) for more details).\n\nFor PUT requests, Kubernetes internally classifies these as either **create** or **update**\nbased on the state of the existing object. An **update** is different from a **patch**; the\nHTTP verb for a **patch** is PATCH.\n\n## Resource URIs\n\nAll resource types are either scoped by the cluster (`\/apis\/GROUP\/VERSION\/*`) or to a\nnamespace (`\/apis\/GROUP\/VERSION\/namespaces\/NAMESPACE\/*`). 
A namespace-scoped resource\ntype will be deleted when its namespace is deleted and access to that resource type\nis controlled by authorization checks on the namespace scope.\n\nNote: core resources use `\/api` instead of `\/apis` and omit the GROUP path segment.\n\nExamples:\n\n* `\/api\/v1\/namespaces`\n* `\/api\/v1\/pods`\n* `\/api\/v1\/namespaces\/my-namespace\/pods`\n* `\/apis\/apps\/v1\/deployments`\n* `\/apis\/apps\/v1\/namespaces\/my-namespace\/deployments`\n* `\/apis\/apps\/v1\/namespaces\/my-namespace\/deployments\/my-deployment`\n\nYou can also access collections of resources (for example: listing all Nodes).\nThe following paths are used to retrieve collections and resources:\n\n* Cluster-scoped resources:\n\n  * `GET \/apis\/GROUP\/VERSION\/RESOURCETYPE` - return the collection of resources of the resource type\n  * `GET \/apis\/GROUP\/VERSION\/RESOURCETYPE\/NAME` - return the resource with NAME under the resource type\n\n* Namespace-scoped resources:\n\n  * `GET \/apis\/GROUP\/VERSION\/RESOURCETYPE` - return the collection of all\n    instances of the resource type across all namespaces\n  * `GET \/apis\/GROUP\/VERSION\/namespaces\/NAMESPACE\/RESOURCETYPE` - return\n    collection of all instances of the resource type in NAMESPACE\n  * `GET \/apis\/GROUP\/VERSION\/namespaces\/NAMESPACE\/RESOURCETYPE\/NAME` -\n    return the instance of the resource type with NAME in NAMESPACE\n\nSince a namespace is a cluster-scoped resource type, you can retrieve the list\n(\u201ccollection\u201d) of all namespaces with `GET \/api\/v1\/namespaces` and details about\na particular namespace with `GET \/api\/v1\/namespaces\/NAME`.\n\n* Cluster-scoped subresource: `GET \/apis\/GROUP\/VERSION\/RESOURCETYPE\/NAME\/SUBRESOURCE`\n* Namespace-scoped subresource: `GET \/apis\/GROUP\/VERSION\/namespaces\/NAMESPACE\/RESOURCETYPE\/NAME\/SUBRESOURCE`\n\nThe verbs supported for each subresource will differ depending on the object -\nsee the [API 
reference](\/docs\/reference\/kubernetes-api\/) for more information. It\nis not possible to access sub-resources across multiple resources - generally a new\nvirtual resource type would be used if that becomes necessary.\n\n## HTTP media types {#alternate-representations-of-resources}\n\nOver HTTP, Kubernetes supports JSON and Protobuf wire encodings.\n\nBy default, Kubernetes returns objects in [JSON serialization](#json-encoding), using the\n`application\/json` media type. Although JSON is the default, clients may request a response in\nYAML, or use the more efficient binary [Protobuf representation](#protobuf-encoding) for better performance at scale.\n\nThe Kubernetes API implements standard HTTP content type negotiation: passing an\n`Accept` header with a `GET` call will request that the server tries to return\na response in your preferred media type. If you want to send an object in Protobuf to\nthe server for a `PUT` or `POST` request, you must set the `Content-Type` request header\nappropriately.\n\nIf you request an available media type, the API server returns a response with a suitable\n`Content-Type`; if none of the media types you request are supported, the API server returns\na `406 Not acceptable` error message.\nAll built-in resource types support the `application\/json` media type.\n\n### JSON resource encoding {#json-encoding}\n\nThe Kubernetes API defaults to using [JSON](https:\/\/www.json.org\/json-en.html) for encoding\nHTTP message bodies.\n\nFor example:\n\n1. List all of the pods on a cluster, without specifying a preferred format\n\n   ```\n   GET \/api\/v1\/pods\n   ```\n\n   ```\n   200 OK\n   Content-Type: application\/json\n\n   \u2026 JSON encoded collection of Pods (PodList object)\n   ```\n\n1. 
Create a pod by sending JSON to the server, requesting a JSON response.\n\n   ```\n   POST \/api\/v1\/namespaces\/test\/pods\n   Content-Type: application\/json\n   Accept: application\/json\n   \u2026 JSON encoded Pod object\n   ```\n\n   ```\n   200 OK\n   Content-Type: application\/json\n\n   {\n     \"kind\": \"Pod\",\n     \"apiVersion\": \"v1\",\n     \u2026\n   }\n   ```\n\n### YAML resource encoding {#yaml-encoding}\n\nKubernetes also supports the [`application\/yaml`](https:\/\/www.rfc-editor.org\/rfc\/rfc9512.html)\nmedia type for both requests and responses. [`YAML`](https:\/\/yaml.org\/)\ncan be used for defining Kubernetes manifests and API interactions.\n\nFor example:\n\n1. List all of the pods on a cluster in YAML format\n\n   ```\n   GET \/api\/v1\/pods\n   Accept: application\/yaml\n   ```\n   \n   ```\n   200 OK\n   Content-Type: application\/yaml\n\n   \u2026 YAML encoded collection of Pods (PodList object)\n   ```\n\n1. Create a pod by sending YAML-encoded data to the server, requesting a YAML response:\n\n   ```\n   POST \/api\/v1\/namespaces\/test\/pods\n   Content-Type: application\/yaml\n   Accept: application\/yaml\n   \u2026 YAML encoded Pod object\n   ```\n\n   ```\n   200 OK\n   Content-Type: application\/yaml\n\n   apiVersion: v1\n   kind: Pod\n   metadata:\n     name: my-pod\n     \u2026\n   ```\n\n### Kubernetes Protobuf encoding {#protobuf-encoding}\n\nKubernetes uses an envelope wrapper to encode [Protobuf](https:\/\/protobuf.dev\/) responses.\nThat wrapper starts with a 4 byte magic number to help identify content in disk or in etcd as Protobuf\n(as opposed to JSON). The 4 byte magic number data is followed by a Protobuf encoded wrapper message, which\ndescribes the encoding and type of the underlying object. Within the Protobuf wrapper message,\nthe inner object data is recorded using the `raw` field of Unknown (see the [IDL](#protobuf-encoding-idl)\nfor more detail).\n\nFor example:\n\n1. 
List all of the pods on a cluster in Protobuf format.\n\n   ```\n   GET \/api\/v1\/pods\n   Accept: application\/vnd.kubernetes.protobuf\n   ```\n\n   ```\n   200 OK\n   Content-Type: application\/vnd.kubernetes.protobuf\n\n   \u2026 Protobuf encoded collection of Pods (PodList object)\n   ```\n\n1. Create a pod by sending Protobuf encoded data to the server, but request a response\n   in JSON.\n\n   ```\n   POST \/api\/v1\/namespaces\/test\/pods\n   Content-Type: application\/vnd.kubernetes.protobuf\n   Accept: application\/json\n   \u2026 binary encoded Pod object\n   ```\n\n   ```\n   200 OK\n   Content-Type: application\/json\n\n   {\n     \"kind\": \"Pod\",\n     \"apiVersion\": \"v1\",\n     ...\n   }\n   ```\n\nYou can use both techniques together and use Kubernetes' Protobuf encoding to interact with any API that\nsupports it, for both reads and writes. Only some API resource types are [compatible](#protobuf-encoding-compatibility)\nwith Protobuf.\n\n<a id=\"protobuf-encoding-idl\" \/>\n\nThe wrapper format is:\n\n```\nA four byte magic number prefix:\n  Bytes 0-3: \"k8s\\x00\" [0x6b, 0x38, 0x73, 0x00]\n\nAn encoded Protobuf message with the following IDL:\n  message Unknown {\n    \/\/ typeMeta should have the string values for \"kind\" and \"apiVersion\" as set on the JSON object\n    optional TypeMeta typeMeta = 1;\n\n    \/\/ raw will hold the complete serialized object in protobuf. See the protobuf definitions in the client libraries for a given kind.\n    optional bytes raw = 2;\n\n    \/\/ contentEncoding is encoding used for the raw data. Unspecified means no encoding.\n    optional string contentEncoding = 3;\n\n    \/\/ contentType is the serialization method used to serialize 'raw'. 
Unspecified means application\/vnd.kubernetes.protobuf and is usually\n    \/\/ omitted.\n    optional string contentType = 4;\n  }\n\n  message TypeMeta {\n    \/\/ apiVersion is the group\/version for this type\n    optional string apiVersion = 1;\n    \/\/ kind is the name of the object schema. A protobuf definition should exist for this object.\n    optional string kind = 2;\n  }\n```\n\nClients that receive a response in `application\/vnd.kubernetes.protobuf` that does\nnot match the expected prefix should reject the response, as future versions may need\nto alter the serialization format in an incompatible way and will do so by changing\nthe prefix.\n\n#### Compatibility with Kubernetes Protobuf {#protobuf-encoding-compatibility}\n\nNot all API resource types support Kubernetes' Protobuf encoding; specifically, Protobuf isn't\navailable for resources that are defined as\nCustomResourceDefinitions\nor are served via the\naggregation layer.\n\nAs a client, if you might need to work with extension types you should specify multiple\ncontent types in the request `Accept` header to support fallback to JSON.\nFor example:\n\n```\nAccept: application\/vnd.kubernetes.protobuf, application\/json\n```\n\n## Efficient detection of changes\n\nThe Kubernetes API allows clients to make an initial request for an object or a\ncollection, and then to track changes since that initial request: a **watch**. Clients\ncan send a **list** or a **get** and then make a follow-up **watch** request.\n\nTo make this change tracking possible, every Kubernetes object has a `resourceVersion`\nfield representing the version of that resource as stored in the underlying persistence\nlayer. When retrieving a collection of resources (either namespace or cluster scoped),\nthe response from the API server contains a `resourceVersion` value. The client can\nuse that `resourceVersion` to initiate a **watch** against the API server.\n\nWhen you send a **watch** request, the API server responds with a stream of\nchanges. 
These changes itemize the outcome of operations (such as **create**, **delete**,\nand **update**) that occurred after the `resourceVersion` you specified as a parameter\nto the **watch** request. The overall **watch** mechanism allows a client to fetch\nthe current state and then subscribe to subsequent changes, without missing any events.\n\nIf a client **watch** is disconnected then that client can start a new **watch** from\nthe last returned `resourceVersion`; the client could also perform a fresh **get** \/\n**list** request and begin again. See [Resource Version Semantics](#resource-versions)\nfor more detail.\n\nFor example:\n\n1. List all of the pods in a given namespace.\n\n   ```\n   GET \/api\/v1\/namespaces\/test\/pods\n   ---\n   200 OK\n   Content-Type: application\/json\n\n   {\n     \"kind\": \"PodList\",\n     \"apiVersion\": \"v1\",\n     \"metadata\": {\"resourceVersion\":\"10245\"},\n     \"items\": [...]\n   }\n   ```\n\n2. Starting from resource version 10245, receive notifications of any API operations\n   (such as **create**, **delete**, **patch** or **update**) that affect Pods in the\n   _test_ namespace. Each change notification is a JSON document. The HTTP response body\n   (served as `application\/json`) consists of a series of JSON documents.\n\n   ```\n   GET \/api\/v1\/namespaces\/test\/pods?watch=1&resourceVersion=10245\n   ---\n   200 OK\n   Transfer-Encoding: chunked\n   Content-Type: application\/json\n\n   {\n     \"type\": \"ADDED\",\n     \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n   }\n   {\n     \"type\": \"MODIFIED\",\n     \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"11020\", ...}, ...}\n   }\n   ...\n   ```\n\nA given Kubernetes server will only preserve a historical record of changes for a\nlimited time. 
Clusters using etcd 3 preserve changes in the last 5 minutes by default.\nWhen the requested **watch** operations fail because the historical version of that\nresource is not available, clients must handle the case by recognizing the status code\n`410 Gone`, clearing their local cache, performing a new **get** or **list** operation,\nand starting the **watch** from the `resourceVersion` that was returned.\n\nFor subscribing to collections, Kubernetes client libraries typically offer some form\nof standard tool for this **list**-then-**watch** logic. (In the Go client library,\nthis is called a `Reflector` and is located in the `k8s.io\/client-go\/tools\/cache` package.)\n\n### Watch bookmarks {#watch-bookmarks}\n\nTo mitigate the impact of the short history window, the Kubernetes API provides a watch\nevent named `BOOKMARK`. It is a special kind of event to mark that all changes up\nto a given `resourceVersion` the client is requesting have already been sent. The\ndocument representing the `BOOKMARK` event is of the type requested by the request,\nbut only includes a `.metadata.resourceVersion` field. 
For example:\n\n```\nGET \/api\/v1\/namespaces\/test\/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application\/json\n\n{\n  \"type\": \"ADDED\",\n  \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10596\", ...}, ...}\n}\n...\n{\n  \"type\": \"BOOKMARK\",\n  \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"12746\"} }\n}\n```\n\nAs a client, you can request `BOOKMARK` events by setting the\n`allowWatchBookmarks=true` query parameter to a **watch** request, but you shouldn't\nassume bookmarks are returned at any specific interval, nor can clients assume that\nthe API server will send any `BOOKMARK` event even when requested.\n\n## Streaming lists\n\nOn large clusters, retrieving the collection of some resource types may result in\na significant increase of resource usage (primarily RAM) on the control plane.\nIn order to alleviate its impact and simplify the user experience of the **list** + **watch**\npattern, Kubernetes v1.27 introduces as an alpha feature the support\nfor requesting the initial state (previously requested via the **list** request) as part of\nthe **watch** request.\n\nProvided that the `WatchList` [feature gate](\/docs\/reference\/command-line-tools-reference\/feature-gates\/)\nis enabled, this can be achieved by specifying `sendInitialEvents=true` as query string parameter\nin a **watch** request. If set, the API server starts the watch stream with synthetic init\nevents (of type `ADDED`) to build the whole state of all existing objects followed by a\n[`BOOKMARK` event](\/docs\/reference\/using-api\/api-concepts\/#watch-bookmarks)\n(if requested via `allowWatchBookmarks=true` option). The bookmark event includes the resource version\nto which it is synced. 
After sending the bookmark event, the API server continues as for any other **watch**\nrequest.\n\nWhen you set `sendInitialEvents=true` in the query string, Kubernetes also requires that you set\n`resourceVersionMatch` to the `NotOlderThan` value.\nIf you set `resourceVersion` in the query string without providing a value, or don't set\nit at all, this is interpreted as a request for a _consistent read_;\nthe bookmark event is sent when the state is synced at least to the moment of a consistent read\nfrom when the request started to be processed. If you specify `resourceVersion` (in the query string),\nthe bookmark event is sent when the state is synced at least to the provided resource version.\n\n### Example {#example-streaming-lists}\n\nAn example: you want to watch a collection of Pods. For that collection, the current resource version\nis 10245 and there are two pods: `foo` and `bar`. Then sending the following request (explicitly requesting\na _consistent read_ by setting an empty resource version using `resourceVersion=`) could result\nin the following sequence of events:\n\n```\nGET \/api\/v1\/namespaces\/test\/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan\n---\n200 OK\nTransfer-Encoding: chunked\nContent-Type: application\/json\n\n{\n  \"type\": \"ADDED\",\n  \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"8467\", \"name\": \"foo\"}, ...}\n}\n{\n  \"type\": \"ADDED\",\n  \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"5726\", \"name\": \"bar\"}, ...}\n}\n{\n  \"type\": \"BOOKMARK\",\n  \"object\": {\"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": {\"resourceVersion\": \"10245\"} }\n}\n...\n<followed by regular watch stream starting from resourceVersion=\"10245\">\n```\n\n## Response compression\n\n`APIResponseCompression` is an option that allows the API server to compress the responses for 
**get**\nand **list** requests, reducing the network bandwidth and improving the performance of large-scale clusters.\nIt has been enabled by default since Kubernetes 1.16, and it can be disabled by including\n`APIResponseCompression=false` in the `--feature-gates` flag on the API server.\n\nAPI response compression can significantly reduce the size of the response, especially for large resources or\n[collections](\/docs\/reference\/using-api\/api-concepts\/#collections).\nFor example, a **list** request for pods can return hundreds of kilobytes or even megabytes of data,\ndepending on the number of pods and their attributes. Compressing the response saves network\nbandwidth and reduces latency.\n\nTo verify if `APIResponseCompression` is working, you can send a **get** or **list** request to the\nAPI server with an `Accept-Encoding` header, and check the response size and headers. For example:\n\n```\nGET \/api\/v1\/pods\nAccept-Encoding: gzip\n---\n200 OK\nContent-Type: application\/json\ncontent-encoding: gzip\n...\n```\n\nThe `content-encoding` header indicates that the response is compressed with `gzip`.\n\n## Retrieving large results sets in chunks\n\nOn large clusters, retrieving the collection of some resource types may result in\nvery large responses that can impact the server and client. For instance, a cluster\nmay have tens of thousands of Pods, each of which is equivalent to roughly 2 KiB of\nencoded JSON. Retrieving all pods across all namespaces may result in a very large\nresponse (10-20MB) and consume a large amount of server resources.\n\nThe Kubernetes API server supports the ability to break a single large collection request\ninto many smaller chunks while preserving the consistency of the total request. 
Each\nchunk can be returned sequentially, which both reduces the total size of the request and\nallows user-oriented clients to display results incrementally to improve responsiveness.\n\nYou can request that the API server handles a **list** by serving a single collection\nusing pages (which Kubernetes calls _chunks_). To retrieve a single collection in\nchunks, two query parameters `limit` and `continue` are supported on requests against\ncollections, and a response field `continue` is returned from all **list** operations\nin the collection's `metadata` field. A client should specify the maximum number of results they\nwish to receive in each chunk with `limit`, and the server will return up to `limit`\nresources in the result and include a `continue` value if there are more resources\nin the collection.\n\nAs an API client, you can then pass this `continue` value to the API server on the\nnext request, to instruct the server to return the next page (_chunk_) of results. By\ncontinuing until the server returns an empty `continue` value, you can retrieve the\nentire collection.\n\nLike a **watch** operation, a `continue` token will expire after a short amount\nof time (by default 5 minutes), and the server will return a `410 Gone` error if more\nresults cannot be returned. In this case, the client will need to start from the beginning or omit the\n`limit` parameter.\n\nFor example, if there are 1,253 pods on the cluster and you want to receive chunks\nof 500 pods at a time, request those chunks as follows:\n\n1. List all of the pods on a cluster, retrieving up to 500 pods each time.\n\n   ```\n   GET \/api\/v1\/pods?limit=500\n   ---\n   200 OK\n   Content-Type: application\/json\n\n   {\n     \"kind\": \"PodList\",\n     \"apiVersion\": \"v1\",\n     \"metadata\": {\n       \"resourceVersion\":\"10245\",\n       \"continue\": \"ENCODED_CONTINUE_TOKEN\",\n       \"remainingItemCount\": 753,\n       ...\n     },\n     \"items\": [...] \/\/ returns pods 1-500\n   }\n   ```\n\n1. 
Continue the previous call, retrieving the next set of 500 pods.\n\n   ```\n   GET \/api\/v1\/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN\n   ---\n   200 OK\n   Content-Type: application\/json\n\n   {\n     \"kind\": \"PodList\",\n     \"apiVersion\": \"v1\",\n     \"metadata\": {\n       \"resourceVersion\":\"10245\",\n       \"continue\": \"ENCODED_CONTINUE_TOKEN_2\",\n       \"remainingItemCount\": 253,\n       ...\n     },\n     \"items\": [...] \/\/ returns pods 501-1000\n   }\n   ```\n\n1. Continue the previous call, retrieving the last 253 pods.\n\n   ```\n   GET \/api\/v1\/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2\n   ---\n   200 OK\n   Content-Type: application\/json\n\n   {\n     \"kind\": \"PodList\",\n     \"apiVersion\": \"v1\",\n     \"metadata\": {\n       \"resourceVersion\":\"10245\",\n       \"continue\": \"\", \/\/ continue token is empty because we have reached the end of the list\n       ...\n     },\n     \"items\": [...] \/\/ returns pods 1001-1253\n   }\n   ```\n\nNotice that the `resourceVersion` of the collection remains constant across each request,\nindicating the server is showing you a consistent snapshot of the pods. Pods that\nare created, updated, or deleted after version `10245` would not be shown unless\nyou make a separate **list** request without the `continue` token.  This allows you\nto break large requests into smaller chunks and then perform a **watch** operation\non the full set without missing any updates.\n\n`remainingItemCount` is the number of subsequent items in the collection that are not\nincluded in this response. 
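
The sequence of chunked requests above can be driven by a small client-side loop. The sketch below is illustrative; `fetch` is a hypothetical stand-in for one HTTP GET against the collection endpoint:

```python
def list_all(fetch, limit=500):
    """Retrieve an entire collection in chunks of up to `limit` items.
    `fetch(limit, continue_token)` performs one GET and returns the
    decoded *List response as a dict."""
    items = []
    token = None
    while True:
        page = fetch(limit, token)
        items.extend(page["items"])
        token = page["metadata"].get("continue", "")
        if not token:
            # An empty continue value means the collection is complete.
            return items
```

A production client would also handle an expired `continue` token: on a `410 Gone` response, restart from the beginning (or retry without the `limit` parameter).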
If the **list** request contained label or field selectors, then the number of\nremaining items is unknown and the API server does not include a `remainingItemCount`\nfield in its response.\nIf the **list** is complete (either because it is not chunking, or because this is the\nlast chunk), then there are no more remaining items and the API server does not include a\n`remainingItemCount` field in its response. The intended use of the `remainingItemCount`\nis estimating the size of a collection.\n\n## Collections\n\nIn Kubernetes terminology, the response you get from a **list** is\na _collection_. However, Kubernetes defines concrete kinds for\ncollections of different types of resource. Collections have a kind\nnamed for the resource kind, with `List` appended.\n\nWhen you query the API for a particular type, all items returned by that query are\nof that type. For example, when you **list** Services, the collection response\nhas `kind` set to\n[`ServiceList`](\/docs\/reference\/kubernetes-api\/service-resources\/service-v1\/#ServiceList);\neach item in that collection represents a single Service. 
For example:\n\n```\nGET \/api\/v1\/services\n```\n\n```yaml\n{\n  \"kind\": \"ServiceList\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {\n    \"resourceVersion\": \"2947301\"\n  },\n  \"items\": [\n    {\n      \"metadata\": {\n        \"name\": \"kubernetes\",\n        \"namespace\": \"default\",\n...\n      \"metadata\": {\n        \"name\": \"kube-dns\",\n        \"namespace\": \"kube-system\",\n...\n```\n\nThere are dozens of collection types (such as `PodList`, `ServiceList`,\nand `NodeList`) defined in the Kubernetes API.\nYou can get more information about each collection type from the\n[Kubernetes API](\/docs\/reference\/kubernetes-api\/) documentation.\n\nSome tools, such as `kubectl`, represent the Kubernetes collection\nmechanism slightly differently from the Kubernetes API itself.\nBecause the output of `kubectl` might include the response from\nmultiple **list** operations at the API level, `kubectl` represents\na list of items using `kind: List`. For example:\n\n```shell\nkubectl get services -A -o yaml\n```\n```yaml\napiVersion: v1\nkind: List\nmetadata:\n  resourceVersion: \"\"\n  selfLink: \"\"\nitems:\n- apiVersion: v1\n  kind: Service\n  metadata:\n    creationTimestamp: \"2021-06-03T14:54:12Z\"\n    labels:\n      component: apiserver\n      provider: kubernetes\n    name: kubernetes\n    namespace: default\n...\n- apiVersion: v1\n  kind: Service\n  metadata:\n    annotations:\n      prometheus.io\/port: \"9153\"\n      prometheus.io\/scrape: \"true\"\n    creationTimestamp: \"2021-06-03T14:54:14Z\"\n    labels:\n      k8s-app: kube-dns\n      kubernetes.io\/cluster-service: \"true\"\n      kubernetes.io\/name: CoreDNS\n    name: kube-dns\n    namespace: kube-system\n```\n\n\nKeep in mind that the Kubernetes API does not have a `kind` named `List`.\n\n`kind: List` is a client-side, internal implementation detail for processing\ncollections that might be of different kinds of object. 
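
As the examples above show, items in a server-side `*List` response do not carry their own `kind` and `apiVersion`; those are implied by the collection kind. If your client needs standalone objects, you can expand the collection yourself; a hypothetical helper (not a library API):

```python
def expand_collection(collection):
    """Turn a server-side *List response into standalone objects,
    filling in the kind/apiVersion implied by the collection kind
    (e.g. ServiceList -> Service)."""
    kind = collection["kind"]
    if not kind.endswith("List"):
        raise ValueError(f"not a collection kind: {kind}")
    item_kind = kind[:-len("List")]
    expanded = []
    for item in collection["items"]:
        obj = dict(item)
        obj.setdefault("kind", item_kind)
        obj.setdefault("apiVersion", collection["apiVersion"])
        expanded.append(obj)
    return expanded
```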
Avoid depending on\n`kind: List` in automation or other code.\n\n\n## Receiving resources as Tables\n\nWhen you run `kubectl get`, the default output format is a simple tabular\nrepresentation of one or more instances of a particular resource type. In the past,\nclients were required to reproduce the tabular and describe output implemented in\n`kubectl` to perform simple lists of objects.\nA few limitations of that approach include non-trivial logic when dealing with\ncertain objects. Additionally, types provided by API aggregation or third party\nresources are not known at compile time. This means that generic implementations\nhad to be in place for types unrecognized by a client.\n\nIn order to avoid potential limitations as described above, clients may request\nthe Table representation of objects, delegating specific details of printing to the\nserver. The Kubernetes API implements standard HTTP content type negotiation: passing\nan `Accept` header containing a value of `application\/json;as=Table;g=meta.k8s.io;v=v1`\nwith a `GET` call will request that the server return objects in the Table content\ntype.\n\nFor example, list all of the pods on a cluster in the Table format.\n\n```\nGET \/api\/v1\/pods\nAccept: application\/json;as=Table;g=meta.k8s.io;v=v1\n---\n200 OK\nContent-Type: application\/json\n\n{\n    \"kind\": \"Table\",\n    \"apiVersion\": \"meta.k8s.io\/v1\",\n    ...\n    \"columnDefinitions\": [\n        ...\n    ]\n}\n```\n\nFor API resource types that do not have a custom Table definition known to the control\nplane, the API server returns a default Table response that consists of the resource's\n`name` and `creationTimestamp` fields.\n\n```\nGET \/apis\/crd.example.com\/v1alpha1\/namespaces\/default\/resources\n---\n200 OK\nContent-Type: application\/json\n...\n\n{\n    \"kind\": \"Table\",\n    \"apiVersion\": \"meta.k8s.io\/v1\",\n    ...\n    \"columnDefinitions\": [\n        {\n            \"name\": \"Name\",\n            \"type\": 
\"string\",\n            ...\n        },\n        {\n            \"name\": \"Created At\",\n            \"type\": \"date\",\n            ...\n        }\n    ]\n}\n```\n\nNot all API resource types support a Table response; for example, a\n\nmight not define field-to-table mappings, and an APIService that\n[extends the core Kubernetes API](\/docs\/concepts\/extend-kubernetes\/api-extension\/apiserver-aggregation\/)\nmight not serve Table responses at all. If you are implementing a client that\nuses the Table information and must work against all resource types, including\nextensions, you should make requests that specify multiple content types in the\n`Accept` header. For example:\n\n```\nAccept: application\/json;as=Table;g=meta.k8s.io;v=v1, application\/json\n```\n\n## Resource deletion\n\nWhen you **delete** a resource this takes place in two phases.\n\n1. _finalization_\n2. removal\n\n```yaml\n{\n  \"kind\": \"ConfigMap\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {\n    \"finalizers\": [\"url.io\/neat-finalization\", \"other-url.io\/my-finalizer\"],\n    \"deletionTimestamp\": nil,\n  }\n}\n```\n\nWhen a client first sends a **delete** to request the removal of a resource, the `.metadata.deletionTimestamp` is set to the current time.\nOnce the `.metadata.deletionTimestamp` is set, external controllers that act on finalizers\nmay start performing their cleanup work at any time, in any order.\n\nOrder is **not** enforced between finalizers because it would introduce significant\nrisk of stuck `.metadata.finalizers`.\n\nThe `.metadata.finalizers` field is shared: any actor with permission can reorder it.\nIf the finalizer list were processed in order, then this might lead to a situation\nin which the component responsible for the first finalizer in the list is\nwaiting for some signal (field value, external system, or other) produced by a\ncomponent responsible for a finalizer later in the list, resulting in a deadlock.\n\nWithout enforced ordering, finalizers 
are free to order amongst themselves and are\nnot vulnerable to ordering changes in the list.\n\nOnce the last finalizer is removed, the resource is actually removed from etcd.\n\n\n## Single resource API\n\nThe Kubernetes API verbs **get**, **create**, **update**, **patch**,\n**delete** and **proxy** support single resources only.\nThese verbs with single resource support have no support for submitting multiple\nresources together in an ordered or unordered list or transaction.\n\nWhen clients (including kubectl) act on a set of resources, the client makes a series\nof single-resource API requests, then aggregates the responses if needed.\n\nBy contrast, the Kubernetes API verbs **list** and **watch** allow getting multiple\nresources, and **deletecollection** allows deleting multiple resources.\n\n## Field validation\n\nKubernetes always validates the type of fields. For example, if a field in the\nAPI is defined as a number, you cannot set the field to a text value. If a field\nis defined as an array of strings, you can only provide an array. Some fields\nallow you to omit them, while other fields are required. Omitting a required field\nfrom an API request is an error.\n\nIf you make a request with an extra field, one that the cluster's control plane\ndoes not recognize, then the behavior of the API server is more complicated.\n\nBy default, the API server drops fields that it does not recognize\nfrom an input that it receives (for example, the JSON body of a `PUT` request).\n\nThere are two situations where the API server drops fields that you supplied in\nan HTTP request.\n\nThese situations are:\n\n1. The field is unrecognized because it is not in the resource's OpenAPI schema. (One\n   exception to this is for custom resources that explicitly choose not to prune unknown\n   fields via `x-kubernetes-preserve-unknown-fields`).\n1. 
The field is duplicated in the object.\n\n### Validation for unrecognized or duplicate fields {#setting-the-field-validation-level}\n\n\n\nFrom 1.25 onward, unrecognized or duplicate fields in an object are detected via\nvalidation on the server when you use HTTP verbs that can submit data (`POST`, `PUT`, and `PATCH`). Possible levels of\nvalidation are `Ignore`, `Warn` (default), and `Strict`.\n\n`Ignore`\n: The API server succeeds in handling the request as it would without the erroneous fields\n  being set, dropping all unknown and duplicate fields and giving no indication it\n  has done so.\n\n`Warn`\n: (Default) The API server succeeds in handling the request, and reports a\n  warning to the client. The warning is sent using the `Warning:` response header,\n  adding one warning item for each unknown or duplicate field. For more\n  information about warnings and the Kubernetes API, see the blog article\n  [Warning: Helpful Warnings Ahead](\/blog\/2020\/09\/03\/warnings\/).\n\n`Strict`\n: The API server rejects the request with a 400 Bad Request error when it\n  detects any unknown or duplicate fields. 
The response message from the API\n  server specifies all the unknown or duplicate fields that the API server has\n  detected.\n\nThe field validation level is set by the `fieldValidation` query parameter.\n\n\nIf you submit a request that specifies an unrecognized field, and that is also invalid for\na different reason (for example, the request provides a string value where the API expects\nan integer for a known field), then the API server responds with a 400 Bad Request error, but will\nnot provide any information on unknown or duplicate fields (only which fatal\nerror it encountered first).\n\nYou always receive an error response in this case, no matter what field validation level you requested.\n\n\nTools that submit requests to the server (such as `kubectl`) might set their own\ndefaults that are different from the `Warn` validation level that the API server uses\nby default.\n\nThe `kubectl` tool uses the `--validate` flag to set the level of field\nvalidation. It accepts the values `ignore`, `warn`, and `strict` while\nalso accepting the values `true` (equivalent to `strict`) and `false`\n(equivalent to `ignore`). The default validation setting for kubectl is\n`--validate=true`, which means strict server-side field validation.\n\nWhen kubectl cannot connect to an API server with field validation (API servers\nprior to Kubernetes 1.27), it will fall back to using client-side validation.\nClient-side validation will be removed entirely in a future version of kubectl.\n\n\nPrior to Kubernetes 1.25, `kubectl --validate` was used to toggle client-side validation on or off as\na boolean flag.\n\n\n## Dry-run\n\nWhen you use HTTP verbs that can modify resources (`POST`, `PUT`, `PATCH`, and\n`DELETE`), you can submit your request in a _dry run_ mode. Dry run mode helps to\nevaluate a request through the typical request stages (admission chain, validation,\nmerge conflicts) up until persisting objects to storage. 
The response body for the\nrequest is as close as possible to a non-dry-run response. Kubernetes guarantees that\ndry-run requests will not be persisted in storage or have any other side effects.\n\n### Make a dry-run request\n\nDry-run is triggered by setting the `dryRun` query parameter. This parameter is a\nstring, working as an enum, and the only accepted values are:\n\n[no value set]\n: Allow side effects. You request this with a query string such as `?dryRun`\n  or `?dryRun&pretty=true`. The response is the final object that would have been\n  persisted, or an error if the request could not be fulfilled.\n\n`All`\n: Every stage runs as normal, except for the final storage stage where side effects\n  are prevented.\n\nWhen you set `?dryRun=All`, any relevant admission controllers\nare run, validating admission controllers check the request post-mutation, merge is\nperformed on `PATCH`, fields are defaulted, and schema validation occurs. The changes\nare not persisted to the underlying storage, but the final object which would have\nbeen persisted is still returned to the user, along with the normal status code.\n\nIf the non-dry-run version of a request would trigger an admission controller that has\nside effects, the request will be failed rather than risk an unwanted side effect. All\nbuilt-in admission control plugins support dry-run. Additionally, admission webhooks can\ndeclare in their\n[configuration object](\/docs\/reference\/generated\/kubernetes-api\/\/#validatingwebhook-v1-admissionregistration-k8s-io)\nthat they do not have side effects, by setting their `sideEffects` field to `None`.\n\n\nIf a webhook actually does have side effects, then the `sideEffects` field should be\nset to \"NoneOnDryRun\". 
That change is appropriate provided that the webhook is also\nmodified to understand the `DryRun` field in AdmissionReview, and to prevent side\neffects on any request marked as a dry run.\n\n\nHere is an example dry-run request that uses `?dryRun=All`:\n\n```\nPOST \/api\/v1\/namespaces\/test\/pods?dryRun=All\nContent-Type: application\/json\nAccept: application\/json\n```\n\nThe response would look the same as for a non-dry-run request, but the values of some\ngenerated fields may differ.\n\n### Generated values\n\nSome values of an object are typically generated before the object is persisted. It\nis important not to rely upon the values of these fields set by a dry-run request,\nsince these values will likely be different in dry-run mode from when the real\nrequest is made. Some of these fields are:\n\n* `name`: if `generateName` is set, `name` will have a unique random name\n* `creationTimestamp` \/ `deletionTimestamp`: records the time of creation\/deletion\n* `UID`: [uniquely identifies](\/docs\/concepts\/overview\/working-with-objects\/names\/#uids)\n  the object and is randomly generated (non-deterministic)\n* `resourceVersion`: tracks the persisted version of the object\n* Any field set by a mutating admission controller\n* For the `Service` resource: Ports or IP addresses that the kube-apiserver assigns to Service objects\n\n### Dry-run authorization\n\nAuthorization for dry-run and non-dry-run requests is identical. Thus, to make\na dry-run request, you must be authorized to make the non-dry-run request.\n\nFor example, to run a dry-run **patch** for a Deployment, you must be authorized\nto perform that **patch**. 
Here is an example of a rule for Kubernetes RBAC that allows patching\nDeployments:\n\n```yaml\nrules:\n- apiGroups: [\"apps\"]\n  resources: [\"deployments\"]\n  verbs: [\"patch\"]\n```\n\nSee [Authorization Overview](\/docs\/reference\/access-authn-authz\/authorization\/).\n\n## Updates to existing resources {#patch-and-apply}\n\nKubernetes provides several ways to update existing objects.\nYou can read [choosing an update mechanism](#update-mechanism-choose) to\nlearn about which approach might be best for your use case.\n\nYou can overwrite (**update**) an existing resource - for example, a ConfigMap -\nusing an HTTP PUT. For a PUT request, it is the client's responsibility to specify\nthe `resourceVersion` (taking this from the object being updated). Kubernetes uses\nthat `resourceVersion` information so that the API server can detect lost updates\nand reject requests made by a client that is out of date with the cluster.\nIn the event that the resource has changed (the `resourceVersion` the client\nprovided is stale), the API server returns a `409 Conflict` error response.\n\nInstead of sending a PUT request, the client can send an instruction to the API\nserver to **patch** an existing resource. A **patch** is typically appropriate\nif the change that the client wants to make isn't conditional on the existing data.\nClients that need effective detection of lost updates should consider\nmaking their request conditional on the existing `resourceVersion` (either HTTP PUT or HTTP PATCH),\nand then handle any retries that are needed in case there is a conflict.\n\nThe Kubernetes API supports four different PATCH operations, determined by their\ncorresponding HTTP `Content-Type` header:\n\n`application\/apply-patch+yaml`\n: Server Side Apply YAML (a Kubernetes-specific extension, based on YAML).\n  All JSON documents are valid YAML, so you can also submit JSON using this\n  media type. 
See [Server Side Apply serialization](\/docs\/reference\/using-api\/server-side-apply\/#serialization)\n  for more details.  \n  To Kubernetes, this is a **create** operation if the object does not exist,\n  or a **patch** operation if the object already exists.\n\n`application\/json-patch+json`\n: JSON Patch, as defined in [RFC6902](https:\/\/tools.ietf.org\/html\/rfc6902).\n  A JSON patch is a sequence of operations that are executed on the resource;\n  for example `{\"op\": \"add\", \"path\": \"\/a\/b\/c\", \"value\": [ \"foo\", \"bar\" ]}`.  \n  To Kubernetes, this is a **patch** operation.\n  \n  A **patch** using `application\/json-patch+json` can include conditions to\n  validate consistency, allowing the operation to fail if those conditions\n  are not met (for example, to avoid a lost update).\n\n`application\/merge-patch+json`\n: JSON Merge Patch, as defined in [RFC7386](https:\/\/tools.ietf.org\/html\/rfc7386).\n  A JSON Merge Patch is essentially a partial representation of the resource.\n  The submitted JSON is combined with the current resource to create a new one,\n  then the new one is saved.  \n  To Kubernetes, this is a **patch** operation.\n\n`application\/strategic-merge-patch+json`\n: Strategic Merge Patch (a Kubernetes-specific extension based on JSON).\n  Strategic Merge Patch is a custom implementation of JSON Merge Patch.\n  You can only use Strategic Merge Patch with built-in APIs, or with aggregated\n  API servers that have special support for it. 
You cannot use\n  `application\/strategic-merge-patch+json` with any API\n  defined using a CustomResourceDefinition.\n\n  The Kubernetes _server side apply_ mechanism has superseded Strategic Merge\n  Patch.\n\nKubernetes' [Server Side Apply](\/docs\/reference\/using-api\/server-side-apply\/)\nfeature allows the control plane to track managed fields for newly created objects.\nServer Side Apply provides a clear pattern for managing field conflicts,\noffers server-side **apply** and **update** operations, and replaces the\nclient-side functionality of `kubectl apply`.\n\nFor Server-Side Apply, Kubernetes treats the request as a **create** if the object\ndoes not yet exist, and a **patch** otherwise. For other requests that use PATCH\nat the HTTP level, the logical Kubernetes operation is always **patch**.\n\nSee [Server Side Apply](\/docs\/reference\/using-api\/server-side-apply\/) for more details.\n\n### Choosing an update mechanism {#update-mechanism-choose}\n\n#### HTTP PUT to replace existing resource {#update-mechanism-update}\n\nThe **update** (HTTP `PUT`) operation is simple to implement and flexible,\nbut has drawbacks:\n\n* You need to handle conflicts where the `resourceVersion` of the object changes\n  between your client reading it and trying to write it back. 
Kubernetes always\n  detects the conflict, but you as the client author need to implement retries.\n* You might accidentally drop fields if you decode an object locally (for example,\n  using client-go, you could receive fields that your client does not know how to\n  handle - and then drop them as part of your update).\n* If there's a lot of contention on the object (even on a field, or set of fields,\n  that you're not trying to edit), you might have trouble sending the update.\n  The problem is worse for larger objects and for objects with many fields.\n\n#### HTTP PATCH using JSON Patch {#update-mechanism-json-patch}\n\nA **patch** update is helpful, because:\n\n* As you're only sending differences, you have less data to send in the `PATCH`\n  request.\n* You can make changes that rely on existing values, such as copying the\n  value of a particular field into an annotation.\n* Unlike with an **update** (HTTP `PUT`), making your change can happen right away\n  (even if there are frequent changes to unrelated fields): you usually would\n  not need to retry.\n  * You might still need to specify the `resourceVersion` (to match an existing object)\n    if you want to be extra careful to avoid lost updates.\n  * It's still good practice to write in some retry logic in case of errors.\n* You can use test conditions to carefully craft specific update conditions.\n  For example, you can increment a counter without reading it if the existing\n  value matches what you expect. 
You can do this with no lost update risk,\n  even if the object has changed in other ways since you last wrote to it.\n  (If the test condition fails, you can fall back to reading the current value\n  and then write back the changed number).\n\nHowever:\n\n* You need more local (client) logic to build the patch; it helps a lot if you have\n  a library implementation of JSON Patch, or even one for making a JSON Patch specifically against Kubernetes.\n* As the author of client software, you need to be careful when building the patch\n  (the HTTP request body) not to drop fields (the order of operations matters).\n\n#### HTTP PATCH using Server-Side Apply {#update-mechanism-server-side-apply}\n\nServer-Side Apply has some clear benefits:\n\n* A single round trip: it rarely requires making a `GET` request first.\n  * and you can still detect conflicts for unexpected changes\n  * you have the option to force override a conflict, if appropriate\n* Client implementations are easy to make.\n* You get an atomic create-or-update operation without extra effort\n  (similar to `UPSERT` in some SQL dialects).\n\nHowever:\n\n* Server-Side Apply does not work at all for field changes that depend on the current value of the object.\n* You can only apply updates to objects. Some resources in the Kubernetes HTTP API are\n  not objects (they do not have a `.metadata` field), and Server-Side Apply\n  is only relevant for Kubernetes objects.\n\n## Resource versions\n\nResource versions are strings that identify the server's internal version of an\nobject. Resource versions can be used by clients to determine when objects have\nchanged, or to express data consistency requirements when getting, listing and\nwatching resources. Resource versions must be treated as opaque by clients and passed\nunmodified back to the server.\n\nYou must not assume resource versions are numeric or collatable. 
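
For example, a client that wants to notice that an object has changed can remember the last resource version it observed and test only for inequality. A minimal illustrative sketch (not a real client-library API):

```python
class VersionTracker:
    """Remembers the last-seen resourceVersion per object name.
    Versions are opaque strings: compare only for equality,
    never with <, >, or by converting to numbers."""

    def __init__(self):
        self._seen = {}

    def observe(self, name, resource_version):
        """Record an observation; return True if the object changed."""
        changed = self._seen.get(name) != resource_version
        self._seen[name] = resource_version
        return changed
```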
API clients may\nonly compare two resource versions for equality (this means that you must not compare\nresource versions for greater-than or less-than relationships).\n\n### `resourceVersion` fields in metadata {#resourceversion-in-metadata}\n\nClients find resource versions in resources, including the resources from the response\nstream for a **watch**, or when using **list** to enumerate resources.\n\n[v1.meta\/ObjectMeta](\/docs\/reference\/generated\/kubernetes-api\/\/#objectmeta-v1-meta) -\nThe `metadata.resourceVersion` of a resource instance identifies the resource version the instance was last modified at.\n\n[v1.meta\/ListMeta](\/docs\/reference\/generated\/kubernetes-api\/\/#listmeta-v1-meta) -\nThe `metadata.resourceVersion` of a resource collection (the response to a **list**) identifies the\nresource version at which the collection was constructed.\n\n### `resourceVersion` parameters in query strings {#the-resourceversion-parameter}\n\nThe **get**, **list**, and **watch** operations support the `resourceVersion` parameter.\nFrom version v1.19, Kubernetes API servers also support the `resourceVersionMatch`\nparameter on _list_ requests.\n\nThe API server interprets the `resourceVersion` parameter differently depending\non the operation you request, and on the value of `resourceVersion`. If you set\n`resourceVersionMatch` then this also affects the way matching happens.\n\n### Semantics for **get** and **list**\n\nFor **get** and **list**, the semantics of `resourceVersion` are:\n\n**get:**\n\n| resourceVersion unset | resourceVersion=\"0\" | resourceVersion=\"{value other than 0}\" |\n|-----------------------|---------------------|----------------------------------------|\n| Most Recent           | Any                 | Not older than                         |\n\n**list:**\n\nFrom version v1.19, Kubernetes API servers support the `resourceVersionMatch` parameter\non _list_ requests. 
If you set both `resourceVersion` and `resourceVersionMatch`, the
`resourceVersionMatch` parameter determines how the API server interprets
`resourceVersion`.

You should always set the `resourceVersionMatch` parameter when setting
`resourceVersion` on a **list** request. However, be prepared to handle the case
where the API server that responds is unaware of `resourceVersionMatch`
and ignores it.

Unless you have strong consistency requirements, using `resourceVersionMatch=NotOlderThan` and
a known `resourceVersion` is preferable since it can achieve better performance and scalability
of your cluster than leaving `resourceVersion` and `resourceVersionMatch` unset, which requires
a quorum read to be served.

Setting the `resourceVersionMatch` parameter without setting `resourceVersion` is not valid.

This table explains the behavior of **list** requests with various combinations of
`resourceVersion` and `resourceVersionMatch`:

| resourceVersionMatch param | paging params | resourceVersion not set | resourceVersion="0" | resourceVersion="{value other than 0}" |
|----------------------------|---------------|-------------------------|---------------------|----------------------------------------|
| _unset_ | _limit unset_ | Most Recent | Any | Not older than |
| _unset_ | limit=\<n\>, _continue unset_ | Most Recent | Any | Exact |
| _unset_ | limit=\<n\>, continue=\<token\> | Continue Token, Exact | Invalid, treated as Continue Token, Exact | Invalid, HTTP `400 Bad Request` |
| `resourceVersionMatch=Exact` | _limit unset_ | Invalid | Invalid | Exact |
| `resourceVersionMatch=Exact` | limit=\<n\>, _continue unset_ | Invalid | Invalid | Exact |
| `resourceVersionMatch=NotOlderThan` | _limit unset_ | Invalid | Any | Not older than |
| `resourceVersionMatch=NotOlderThan` | limit=\<n\>, _continue unset_ | Invalid | Any | Not older than |

If your cluster's API server does not honor the `resourceVersionMatch`
parameter,
the behavior is the same as if you did not set it.

The meanings of the **get** and **list** semantics are:

Any
: Return data at any resource version. The newest available resource version is preferred,
  but strong consistency is not required; data at any resource version may be served. It is possible
  for the request to return data at a much older resource version than the client has previously
  observed, particularly in high availability configurations, due to partitions or stale
  caches. Clients that cannot tolerate this should not use this semantic.

Most recent
: Return data at the most recent resource version. The returned data must be
  consistent (in detail: served from etcd via a quorum read).
  For etcd v3.4.31+ and v3.5.13+, Kubernetes serves "most recent" reads from the _watch cache_:
  an internal, in-memory store within the API server that caches and mirrors the state of data
  persisted into etcd. Kubernetes requests progress notifications to maintain cache consistency against
  the etcd persistence layer. Kubernetes versions v1.28 through v1.30 also supported this
  feature, although as Alpha it was not recommended for production nor enabled by default until the v1.31 release.

Not older than
: Return data at least as new as the provided `resourceVersion`. The newest
  available data is preferred, but any data not older than the provided `resourceVersion` may be
  served. For **list** requests to servers that honor the `resourceVersionMatch` parameter, this
  guarantees that the collection's `.metadata.resourceVersion` is not older than the requested
  `resourceVersion`, but does not make any guarantee about the `.metadata.resourceVersion` of any
  of the items in that collection.

Exact
: Return data at the exact resource version provided. If the provided `resourceVersion` is
  unavailable, the server responds with HTTP 410 "Gone".
  For **list** requests to servers that honor the
  `resourceVersionMatch` parameter, this guarantees that the collection's `.metadata.resourceVersion`
  is the same as the `resourceVersion` you requested in the query string. That guarantee does
  not apply to the `.metadata.resourceVersion` of any items within that collection.

Continue Token, Exact
: Return data at the resource version of the initial paginated **list** call. The returned _continue
  tokens_ are responsible for keeping track of the initially provided resource version for all paginated
  **list** calls after the initial paginated **list**.

When you **list** resources and receive a collection response, the response includes the
[list metadata](/docs/reference/generated/kubernetes-api/v/#listmeta-v1-meta)
of the collection as well as
[object metadata](/docs/reference/generated/kubernetes-api/v/#objectmeta-v1-meta)
for each item in that collection. For individual objects found within a collection response,
`.metadata.resourceVersion` tracks when that object was last updated, and not how up-to-date
the object is when served.

When using `resourceVersionMatch=NotOlderThan` and `limit` is set, clients must
handle HTTP 410 "Gone" responses. For example, the client might retry with a
newer `resourceVersion` or fall back to `resourceVersion=""`.

When using `resourceVersionMatch=Exact` and `limit` is unset, clients must
verify that the collection's `.metadata.resourceVersion` matches
the requested `resourceVersion`, and handle the case where it does not.
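A client-side sketch of that verification, assuming a stand-in `list_fn` that performs the HTTP **list** (the helper and its names are hypothetical, not client-go):

```python
def verify_exact_list(requested_rv: str, list_fn):
    """Perform an Exact list and confirm the collection's
    .metadata.resourceVersion matches what was requested."""
    response = list_fn(resourceVersion=requested_rv,
                       resourceVersionMatch="Exact")
    got_rv = response["metadata"]["resourceVersion"]
    if got_rv != requested_rv:
        # e.g. the server ignored resourceVersionMatch; the caller must
        # decide how to recover (such as retrying with limit set).
        raise RuntimeError(f"wanted resourceVersion {requested_rv}, got {got_rv}")
    return response

# Fake list function standing in for a real API call:
fake = lambda **kw: {"metadata": {"resourceVersion": kw["resourceVersion"]}, "items": []}
assert verify_exact_list("10245", fake)["metadata"]["resourceVersion"] == "10245"
```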
For example, the client might fall back to a request with `limit` set.

### Semantics for **watch**

For **watch**, the semantics of resource version are:

**watch:**

| resourceVersion unset               | resourceVersion="0"        | resourceVersion="{value other than 0}" |
|-------------------------------------|----------------------------|----------------------------------------|
| Get State and Start at Most Recent  | Get State and Start at Any | Start at Exact                         |

The meanings of those **watch** semantics are:

Get State and Start at Any
: Watches initialized this way may return arbitrarily stale
  data. Please review this semantic before using it, and favor the other semantics
  where possible.

  Start a **watch** at any resource version; the most recent resource version
  available is preferred, but not required. Any starting resource version is
  allowed. It is possible for the **watch** to start at a much older resource
  version than the client has previously observed, particularly in high availability
  configurations, due to partitions or stale caches. Clients that cannot tolerate
  this apparent rewinding should not start a **watch** with this semantic. To
  establish initial state, the **watch** begins with synthetic "Added" events for
  all resource instances that exist at the starting resource version. All following
  watch events are for all changes that occurred after the resource version the
  **watch** started at.

Get State and Start at Most Recent
: Start a **watch** at the most recent resource version, which must be consistent
  (in detail: served from etcd via a quorum read). To establish initial state,
  the **watch** begins with synthetic "Added" events for all resource instances
  that exist at the starting resource version.
  All following watch events are for
  all changes that occurred after the resource version the **watch** started at.

Start at Exact
: Start a **watch** at an exact resource version. The watch events are for all changes
  after the provided resource version. Unlike "Get State and Start at Most Recent"
  and "Get State and Start at Any", the **watch** is not started with synthetic
  "Added" events for the provided resource version. The client is assumed to already
  have the initial state at the starting resource version, since the client provided
  the resource version.

### "410 Gone" responses

Servers are not required to serve all older resource versions and may return an HTTP
`410 (Gone)` status code if a client requests a `resourceVersion` older than the
server has retained. Clients must be able to tolerate `410 (Gone)` responses. See
[Efficient detection of changes](#efficient-detection-of-changes) for details on
how to handle `410 (Gone)` responses when watching resources.

If you request a `resourceVersion` outside the applicable limit then, depending
on whether a request is served from cache or not, the API server may reply with a
`410 Gone` HTTP response.

### Unavailable resource versions

Servers are not required to serve unrecognized resource versions.
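The `410 (Gone)` tolerance just described usually takes the shape of a relist-and-rewatch loop. A sketch with stand-in functions (`list_fn`, `watch_fn`, and a fake `"GONE"` event type are all hypothetical; this is not the client-go Reflector):

```python
def watch_with_relist(list_fn, watch_fn):
    """Watch from the resourceVersion of an initial list; on a "too old"
    error (HTTP 410 Gone in a real client), relist to obtain a fresh
    resourceVersion and resume watching from there."""
    rv = list_fn()["metadata"]["resourceVersion"]
    while True:
        try:
            for event in watch_fn(resource_version=rv):
                if event["type"] == "GONE":   # stand-in for a real 410 response
                    raise LookupError("resourceVersion too old")
                rv = event["object"]["metadata"]["resourceVersion"]
                yield event
            return  # watch stream ended normally
        except LookupError:
            # Clear local state and relist for a fresh resourceVersion.
            rv = list_fn()["metadata"]["resourceVersion"]

# Fakes standing in for API calls: first watch expires, second succeeds.
calls = {"list": 0}
def list_fn():
    calls["list"] += 1
    return {"metadata": {"resourceVersion": "10245" if calls["list"] == 1 else "10400"}}
def watch_fn(resource_version):
    if resource_version == "10245":
        yield {"type": "GONE"}
    else:
        yield {"type": "MODIFIED", "object": {"metadata": {"resourceVersion": "10596"}}}

events = list(watch_with_relist(list_fn, watch_fn))
assert [e["type"] for e in events] == ["MODIFIED"]
assert calls["list"] == 2  # relisted once after the expired watch
```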
If you request
**list** or **get** for a resource version that the API server does not recognize,
then the API server may either:

* wait briefly for the resource version to become available, then time out with a
  `504 (Gateway Timeout)` if the provided resource version does not become available
  in a reasonable amount of time;
* respond with a `Retry-After` response header indicating how many seconds a client
  should wait before retrying the request.

If you request a resource version that an API server does not recognize, the
kube-apiserver additionally identifies its error responses with a "Too large resource
version" message.

If you make a **watch** request for an unrecognized resource version, the API server
may wait indefinitely (until the request timeout) for the resource version to become
available.

---
title: Kubernetes API Concepts
reviewers:
- smarterclayton
- lavalamp
- liggitt
content_type: concept
weight: 20
---

<!-- overview -->

The Kubernetes API is a resource-based (RESTful) programmatic interface provided via HTTP.
It supports retrieving, creating, updating, and deleting primary resources via the
standard HTTP verbs (POST, PUT, PATCH, DELETE, GET).

For some resources, the API includes additional subresources that allow fine-grained
authorization (such as separate views for Pod details and log retrievals), and can
accept and serve those resources in different representations for convenience or efficiency.

Kubernetes supports efficient change notifications on resources via _watches_.
Kubernetes also provides consistent list operations so that API clients can
effectively cache, track, and synchronize the state of resources.

You can view the [API reference](/docs/reference/kubernetes-api/) online, or read on to
learn about the API in general.

<!-- body -->

## Kubernetes API terminology {#standard-api-terminology}

Kubernetes generally leverages common RESTful terminology to
describe the API concepts:

* A *resource type* is the name used in the URL (`pods`, `namespaces`, `services`)
* All resource types have a concrete representation (their object schema) which is called a *kind*
* A list of instances of a resource type is known as a *collection*
* A single instance of a resource type is called a *resource*, and also usually represents an *object*
* For some resource types, the API includes one or more *sub-resources*, which are represented as URI paths below the resource

Most Kubernetes API resource types are objects: they represent a concrete instance of a
concept on the cluster, like a pod or namespace. A smaller number of API resource types
are *virtual* in that they often represent operations on objects, rather than objects,
such as a permission check (use a POST with a JSON-encoded body of `SubjectAccessReview`
to the `subjectaccessreviews` resource), or the `eviction` sub-resource of a Pod (used
to trigger [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/)).

### Object names

All objects you can create via the API have a unique object name to allow idempotent
creation and retrieval, except that virtual resource types may not have unique names
if they are not retrievable, or do not rely on idempotency. Within a namespace, only
one object of a given kind can have a given name at a time. However, if you delete the
object, you can make a new object with the same name. Some objects are not namespaced
(for example: Nodes), and so their names must be unique across the whole cluster.

### API verbs

Almost all object resource types support the standard HTTP verbs - GET, POST, PUT,
PATCH, and DELETE. Kubernetes also uses its own verbs, which are often written in
lowercase to distinguish them from HTTP verbs.

Kubernetes uses the term **list** to describe returning a [collection](#collections)
of resources, to distinguish from retrieving a single resource which is usually called
a **get**. If you sent an HTTP GET request with the
`?watch` query parameter, Kubernetes calls this a **watch** and not a **get** (see
[Efficient detection of changes](#efficient-detection-of-changes) for more details).

For PUT requests, Kubernetes internally classifies these as either **create** or
**update** based on the state of the existing object. An **update** is different from
a **patch**; the HTTP verb for a **patch** is PATCH.

### Resource URIs

All resource types are either scoped by the cluster (`/apis/GROUP/VERSION/*`) or to a
namespace (`/apis/GROUP/VERSION/namespaces/NAMESPACE/*`). A namespace-scoped resource
type will be deleted when its namespace is deleted, and access to that resource type
is controlled by authorization checks on the namespace scope.

Note: core resources use `/api` instead of `/apis` and omit the GROUP path segment.

Examples:

* `/api/v1/namespaces`
* `/api/v1/pods`
* `/api/v1/namespaces/my-namespace/pods`
* `/apis/apps/v1/deployments`
* `/apis/apps/v1/namespaces/my-namespace/deployments`
* `/apis/apps/v1/namespaces/my-namespace/deployments/my-deployment`

You can also access collections of resources (for example: listing all Nodes).
The following paths are used to retrieve collections and resources:

* Cluster-scoped resources:

  * `GET /apis/GROUP/VERSION/RESOURCETYPE` - return the collection of resources of the resource type
  * `GET /apis/GROUP/VERSION/RESOURCETYPE/NAME` - return the resource with NAME under the resource type

* Namespace-scoped resources:

  * `GET /apis/GROUP/VERSION/RESOURCETYPE` - return the collection of all instances of the resource type across all namespaces
  * `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE` - return collection of all instances of the resource type in NAMESPACE
  * `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME` - return the instance of the resource type with NAME in NAMESPACE

Since a namespace is a cluster-scoped resource type, you can retrieve the list
(collection) of all namespaces with
`GET /api/v1/namespaces` and details about a particular namespace with
`GET /api/v1/namespaces/NAME`.

* Cluster-scoped subresource: `GET /apis/GROUP/VERSION/RESOURCETYPE/NAME/SUBRESOURCE`
* Namespace-scoped subresource: `GET /apis/GROUP/VERSION/namespaces/NAMESPACE/RESOURCETYPE/NAME/SUBRESOURCE`

The verbs supported for each subresource will differ depending on the object -
see the [API reference](/docs/reference/kubernetes-api/) for more information. It is
not possible to access sub-resources across multiple resources - generally a new
virtual resource type would be used if that becomes necessary.

## HTTP media types {#alternate-representations-of-resources}

Over HTTP, Kubernetes supports JSON and Protobuf wire encodings.

By default, Kubernetes returns objects in [JSON serialization](#json-encoding), using the
`application/json` media type. Although JSON is the default, clients may request a
response in YAML, or use the more efficient binary
[Protobuf representation](#protobuf-encoding) for better performance at scale.

The Kubernetes API implements standard HTTP content type negotiation: passing an
`Accept` header with a `GET` call will request that the server tries to return a
response in your preferred media type. If you want to send an object in Protobuf to
the server for a `PUT` or `POST` request, you must set the `Content-Type` request
header appropriately.

If you request an available media type, the API server returns a response with a
suitable `Content-Type`; if none of the media types you request are supported, the API
server returns a `406 Not acceptable` error message. All built-in resource types
support the `application/json` media type.

### JSON resource encoding {#json-encoding}

The Kubernetes API defaults to using [JSON](https://www.json.org/json-en.html) for
encoding HTTP message bodies.

For example:

1. List all of the pods on a cluster, without specifying a preferred format

   ```
   GET /api/v1/pods
   ---
   200 OK
   Content-Type: application/json

   … JSON encoded collection of Pods (PodList object)
   ```

1. Create a pod by sending JSON to the server, requesting a JSON response.

   ```
   POST /api/v1/namespaces/test/pods
   Content-Type: application/json
   Accept: application/json
   … JSON encoded Pod object
   ---
   200 OK
   Content-Type: application/json

   {
     "kind": "Pod",
     "apiVersion": "v1",
     …
   }
   ```

### YAML resource encoding {#yaml-encoding}

Kubernetes also supports the [`application/yaml`](https://www.rfc-editor.org/rfc/rfc9512.html)
media type for both requests and responses. [YAML](https://yaml.org/) can be used for
defining Kubernetes manifests and API interactions.

For example:

1. List all of the pods on a cluster in YAML format

   ```
   GET /api/v1/pods
   Accept: application/yaml
   ---
   200 OK
   Content-Type: application/yaml

   … YAML encoded collection of Pods (PodList object)
   ```

1. Create a pod by sending YAML-encoded data to the server, requesting a YAML response.

   ```
   POST /api/v1/namespaces/test/pods
   Content-Type: application/yaml
   Accept: application/yaml
   … YAML encoded Pod object
   ---
   200 OK
   Content-Type: application/yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: my-pod
     …
   ```

### Kubernetes Protobuf encoding {#protobuf-encoding}

Kubernetes uses an envelope wrapper to encode [Protobuf](https://protobuf.dev/)
responses. That wrapper starts with a 4 byte magic number to help identify content in
disk or in etcd as Protobuf (as opposed to JSON). The 4 byte magic number data is
followed by a Protobuf encoded wrapper message, which describes the encoding and type
of the underlying object. Within the Protobuf wrapper message, the inner object data
is recorded using the `raw` field of Unknown (see the [IDL](#protobuf-encoding-idl)
for more detail).

For example:

1. List all of the pods on a cluster in Protobuf format.

   ```
   GET /api/v1/pods
   Accept: application/vnd.kubernetes.protobuf
   ---
   200 OK
   Content-Type: application/vnd.kubernetes.protobuf

   … binary encoded collection of Pods (PodList object)
   ```

1. Create a pod by sending Protobuf encoded data to the server, but request a response
   in JSON.

   ```
   POST /api/v1/namespaces/test/pods
   Content-Type: application/vnd.kubernetes.protobuf
   Accept: application/json
   … binary encoded Pod object
   ---
   200 OK
   Content-Type: application/json

   {
     "kind": "Pod",
     "apiVersion": "v1",
     …
   }
   ```

You can use both techniques together and use Kubernetes' Protobuf encoding to interact
with any API that supports it, for both reads and writes. Only some API resource types
are [compatible](#protobuf-encoding-compatibility) with Protobuf.

<a id="protobuf-encoding-idl" />

The wrapper format is:

```
A four byte magic number prefix:
  Bytes 0-3: "k8s\x00" [0x6b, 0x38, 0x73, 0x00]

An encoded Protobuf message with the following IDL:
  message Unknown {
    // typeMeta should have the string values for "kind" and "apiVersion" as set on the JSON object
    optional TypeMeta typeMeta = 1;

    // raw will hold the complete serialized object in protobuf. See the protobuf
    // definitions in the client libraries for a given kind.
    optional bytes raw = 2;

    // contentEncoding is encoding used for the raw data. Unspecified means no encoding.
    optional string contentEncoding = 3;

    // contentType is the serialization method used to serialize 'raw'. Unspecified
    // means application/vnd.kubernetes.protobuf and is usually omitted.
    optional string contentType = 4;
  }

  message TypeMeta {
    // apiVersion is the group/version for this type
    optional string apiVersion = 1;
    // kind is the name of the object schema. A protobuf definition should exist for this object.
    optional string kind = 2;
  }
```

Clients that receive a response in `application/vnd.kubernetes.protobuf` that does not
match the expected prefix should reject the response, as future versions may need to
alter the serialization format in an incompatible way and will do so by changing the prefix.

#### Compatibility with Kubernetes Protobuf {#protobuf-encoding-compatibility}

Not all API resource types support Kubernetes' Protobuf encoding; specifically,
Protobuf isn't available for resources that are defined as CustomResourceDefinitions
or are served via the aggregation layer.

As a client, if you might need to work with extension types you should specify
multiple content types in the request `Accept` header to support fallback to JSON.
For example:

```
Accept: application/vnd.kubernetes.protobuf, application/json
```

## Efficient detection of changes

The Kubernetes API allows clients to make an initial request for an object or a
collection, and then to track changes since that initial request: a **watch**. Clients
can send a **list** or a **get** and then make a follow-up **watch** request.

To make this change tracking possible, every Kubernetes object has a `resourceVersion`
field representing the version of that resource as stored in the underlying
persistence layer. When retrieving a collection of resources (either namespace or
cluster scoped), the response from the API server contains a `resourceVersion` value.
The client can use that `resourceVersion` to initiate a **watch** against the API server.

When you send a **watch** request, the API server responds with a stream of changes.
These changes itemize the outcome of operations (such as **create**, **delete**, and
**update**) that occurred after the `resourceVersion` you specified as a parameter to
the **watch** request. The overall **watch** mechanism allows a client to fetch the
current state and then subscribe to subsequent changes, without missing any events.

If a client **watch** is disconnected then that client can start a new **watch** from
the last returned `resourceVersion`; the client could also perform a fresh **get** /
**list** request and begin
again. See [Resource Version Semantics](#resource-versions) for more detail.

For example:

1. List all of the pods in a given namespace.

   ```
   GET /api/v1/namespaces/test/pods
   ---
   200 OK
   Content-Type: application/json

   {
     "kind": "PodList",
     "apiVersion": "v1",
     "metadata": {"resourceVersion":"10245"},
     "items": [...]
   }
   ```

2. Starting from resource version 10245, receive notifications of any API operations
   (such as **create**, **delete**, **patch** or **update**) that affect Pods in the
   _test_ namespace. Each change notification is a JSON document. The HTTP response body
   (served as `application/json`) consists of a series of JSON documents.

   ```
   GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245
   ---
   200 OK
   Transfer-Encoding: chunked
   Content-Type: application/json

   {
     "type": "ADDED",
     "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...}
   }
   {
     "type": "MODIFIED",
     "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "11020", ...}, ...}
   }
   ...
   ```

A given Kubernetes server will only preserve a historical record of changes for a
limited time. Clusters using etcd 3 preserve changes in the last 5 minutes by default.
When the requested **watch** operations fail because the historical version of that
resource is not available, clients must handle the case by recognizing the status code
`410 Gone`, clearing their local cache, performing a new **get** or **list** operation,
and starting the **watch** from the `resourceVersion` that was returned.

For subscribing to collections, Kubernetes client libraries typically offer some form
of standard tool for this **list**-then-**watch** logic. (In the Go client library,
this is called a `Reflector` and is located in the `k8s.io/client-go/tools/cache` package.)

### Watch bookmarks {#watch-bookmarks}

To mitigate the impact of short history window, the
Kubernetes API provides a watch event named `BOOKMARK`. It is a special kind of event
to mark that all changes up to a given `resourceVersion` the client is requesting have
already been sent. The document representing the `BOOKMARK` event is of the type
requested by the request, but only includes a `.metadata.resourceVersion` field.
For example:

```
GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true
---
200 OK
Transfer-Encoding: chunked
Content-Type: application/json

{
  "type": "ADDED",
  "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...}
}
...
{
  "type": "BOOKMARK",
  "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "12746"}}
}
```

As a client, you can request `BOOKMARK` events by setting the
`allowWatchBookmarks=true` query parameter to a **watch** request, but you shouldn't
assume bookmarks are returned at any specific interval, nor can clients assume that
the API server will send any `BOOKMARK` event even when requested.

## Streaming lists

On large clusters, retrieving the collection of some resource types may result in a
significant increase of resource usage (primarily RAM) on the control plane. In order
to alleviate its impact and simplify the user experience of the **list** + **watch**
pattern, Kubernetes v1.27 introduces as an alpha feature the support for requesting
the initial state (previously requested via the **list** request) as part of the
**watch** request.

Provided that the `WatchList`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled,
this can be achieved by specifying `sendInitialEvents=true` as a query string parameter
in a **watch** request. If set, the API server starts the watch stream with synthetic
init events (of type `ADDED`) to build the whole state of all existing objects,
followed by a [`BOOKMARK` event](/docs/reference/using-api/api-concepts/#watch-bookmarks)
if requested via
`allowWatchBookmarks=true` option. The bookmark event includes the resource version to
which it is synced. After sending the bookmark event, the API server continues as for
any other **watch** request.

When you set `sendInitialEvents=true` in the query string, Kubernetes also requires
that you set `resourceVersionMatch` to `NotOlderThan` value. If you provided
`resourceVersion` in the query string without providing a value or don't provide it at
all, this is interpreted as a request for a _consistent read_; the bookmark event is
sent when the state is synced at least to the moment of a consistent read from when
the request started to be processed. If you specify `resourceVersion` (in the query
string), the bookmark event is sent when the state is synced at least to the provided
resource version.

### Example {#example-streaming-lists}

An example: you want to watch a collection of Pods. For that collection, the current
resource version is 10245 and there are two pods: `foo` and `bar`. Then sending the
following request (explicitly requesting a _consistent read_ by setting an empty
resource version using `resourceVersion=`) could result in the following sequence of events:

```
GET /api/v1/namespaces/test/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan
---
200 OK
Transfer-Encoding: chunked
Content-Type: application/json

{
  "type": "ADDED",
  "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "8467", "name": "foo"}, ...}
}
{
  "type": "ADDED",
  "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "5726", "name": "bar"}, ...}
}
{
  "type": "BOOKMARK",
  "object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10245"}}
}
...
<followed by regular watch stream starting from resourceVersion "10245">
```

## Response compression

`APIResponseCompression` is an option that allows the API server to compress the
responses for **get** and
**list** requests, reducing the network bandwidth and improving the performance of
large-scale clusters. It is enabled by default since Kubernetes 1.16 and it can be
disabled by including `APIResponseCompression=false` in the `--feature-gates` flag on
the API server.

API response compression can significantly reduce the size of the response, especially
for large resources or [collections](/docs/reference/using-api/api-concepts/#collections).
For example, a **list** request for pods can return hundreds of kilobytes or even
megabytes of data, depending on the number of pods and their attributes. By compressing
the response, the network bandwidth can be saved and the latency can be reduced.

To verify if `APIResponseCompression` is working, you can send a **get** or **list**
request to the API server with an `Accept-Encoding` header, and check the response
size and headers. For example:

```
GET /api/v1/pods
Accept-Encoding: gzip
---
200 OK
Content-Type: application/json
content-encoding: gzip
...
```

The `content-encoding` header indicates that the response is compressed with `gzip`.

## Retrieving large results sets in chunks

On large clusters, retrieving the collection of some resource types may result in very
large responses that can impact the server and client. For instance, a cluster may
have tens of thousands of Pods, each of which is equivalent to roughly 2 KiB of
encoded JSON. Retrieving all pods across all namespaces may result in a very large
response (10-20MB) and consume a large amount of server resources.

The Kubernetes API server supports the ability to break a single large collection
request into many smaller chunks while preserving the consistency of the total
request. Each chunk can be returned sequentially, which reduces both the total size of
the request and allows user-oriented clients to display results incrementally to
improve responsiveness.

You can request that the API server handles a **list** by serving a single collection
using pages,
which Kubernetes calls _chunks_. To retrieve a single collection in chunks, two query
parameters `limit` and `continue` are supported on requests against collections, and a
response field `continue` is returned from all **list** operations in the collection's
`metadata` field. A client should specify the maximum results they wish to receive in
each chunk with `limit` and the server will return up to `limit` resources in the
result and include a `continue` value if there are more resources in the collection.

As an API client, you can then pass this `continue` value to the API server on the
next request, to instruct the server to return the next page (_chunk_) of results. By
continuing until the server returns an empty `continue` value, you can retrieve the
entire collection.

Like a **watch** operation, a `continue` token will expire after a short amount of
time (by default 5 minutes) and return a `410 Gone` if more results cannot be
returned. In this case, the client will need to start from the beginning or omit the
`limit` parameter.

For example, if there are 1,253 pods on the cluster and you want to receive chunks of
500 pods at a time, request those chunks as follows:

1. List all of the pods on a cluster, retrieving up to 500 pods each time.

   ```
   GET /api/v1/pods?limit=500
   ---
   200 OK
   Content-Type: application/json

   {
     "kind": "PodList",
     "apiVersion": "v1",
     "metadata": {
       "resourceVersion":"10245",
       "continue": "ENCODED_CONTINUE_TOKEN",
       "remainingItemCount": 753,
       ...
     },
     "items": [...] // returns pods 1-500
   }
   ```

1. Continue the previous call, retrieving the next set of 500 pods.

   ```
   GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN
   ---
   200 OK
   Content-Type: application/json

   {
     "kind": "PodList",
     "apiVersion": "v1",
     "metadata": {
       "resourceVersion":"10245",
       "continue": "ENCODED_CONTINUE_TOKEN_2",
       "remainingItemCount": 253,
       ...
     },
     "items": [...] // returns pods 501-1000
   }
   ```

1. Continue the previous call, retrieving the last 253 pods.

   ```
   GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2
   ---
   200 OK
   Content-Type: application/json

   {
     "kind": "PodList",
     "apiVersion": "v1",
     "metadata": {
       "resourceVersion":"10245",
       "continue": "", // continue token is empty because we have reached the end of the list
       ...
     },
     "items": [...] // returns pods 1001-1253
   }
   ```

Notice that the `resourceVersion` of the collection remains constant across each
request, indicating the server is showing you a consistent snapshot of the pods. Pods
that are created, updated, or deleted after version `10245` would not be shown unless
you make a separate **list** request without the `continue` token. This allows you to
break large requests into smaller chunks and then perform a **watch** operation on the
full set without missing any updates.

`remainingItemCount` is the number of subsequent items in the collection that are not
included in this response. If the **list** request contained label or field selectors,
then the number of remaining items is unknown and the API server does not include a
`remainingItemCount` field in its response. If the **list** is complete (either
because it is not chunking, or because this is the last chunk), then there are no more
remaining items and the API server does not include a `remainingItemCount` field in
its response. The intended use of the `remainingItemCount` is estimating the size of a
collection.

## Collections

In Kubernetes terminology, the response you get from a **list** is a _collection_.
However, Kubernetes defines concrete kinds for collections of different types of
resource. Collections have a kind named for the resource kind, with `List` appended.

When you query the API for a particular type, all items returned by that query are of
that type. For example, when you **list**
Services  the collection response has  kind  set to   ServiceList    docs reference kubernetes api service resources service v1  ServiceList   each item in that collection represents a single Service  For example       GET  api v1 services         yaml      kind    ServiceList      apiVersion    v1      metadata          resourceVersion    2947301          items                  metadata              name    kubernetes            namespace    default              metadata              name    kube dns            namespace    kube system            There are dozens of collection types  such as  PodList    ServiceList   and  NodeList   defined in the Kubernetes API  You can get more information about each collection type from the  Kubernetes API   docs reference kubernetes api   documentation   Some tools  such as  kubectl   represent the Kubernetes collection mechanism slightly differently from the Kubernetes API itself  Because the output of  kubectl  might include the response from multiple   list   operations at the API level   kubectl  represents a list of items using  kind  List   For example      shell kubectl get services  A  o yaml        yaml apiVersion  v1 kind  List metadata    resourceVersion       selfLink     items    apiVersion  v1   kind  Service   metadata      creationTimestamp   2021 06 03T14 54 12Z      labels        component  apiserver       provider  kubernetes     name  kubernetes     namespace  default       apiVersion  v1   kind  Service   metadata      annotations        prometheus io port   9153        prometheus io scrape   true      creationTimestamp   2021 06 03T14 54 14Z      labels        k8s app  kube dns       kubernetes io cluster service   true        kubernetes io name  CoreDNS     name  kube dns     namespace  kube system       Keep in mind that the Kubernetes API does not have a  kind  named  List     kind  List  is a client side  internal implementation detail for processing collections that might be of different kinds of 
object  Avoid depending on  kind  List  in automation or other code       Receiving resources as Tables  When you run  kubectl get   the default output format is a simple tabular representation of one or more instances of a particular resource type  In the past  clients were required to reproduce the tabular and describe output implemented in  kubectl  to perform simple lists of objects  A few limitations of that approach include non trivial logic when dealing with certain objects  Additionally  types provided by API aggregation or third party resources are not known at compile time  This means that generic implementations had to be in place for types unrecognized by a client   In order to avoid potential limitations as described above  clients may request the Table representation of objects  delegating specific details of printing to the server  The Kubernetes API implements standard HTTP content type negotiation  passing an  Accept  header containing a value of  application json as Table g meta k8s io v v1  with a  GET  call will request that the server return objects in the Table content type   For example  list all of the pods on a cluster in the Table format       GET  api v1 pods Accept  application json as Table g meta k8s io v v1     200 OK Content Type  application json         kind    Table        apiVersion    meta k8s io v1                columnDefinitions                              For API resource types that do not have a custom Table definition known to the control plane  the API server returns a default Table response that consists of the resource s  name  and  creationTimestamp  fields       GET  apis crd example com v1alpha1 namespaces default resources     200 OK Content Type  application json             kind    Table        apiVersion    meta k8s io v1                columnDefinitions                            name    Name                type    string                                                     name    Created At                type 
   date                                          Not all API resource types support a Table response  for example  a  might not define field to table mappings  and an APIService that  extends the core Kubernetes API   docs concepts extend kubernetes api extension apiserver aggregation   might not serve Table responses at all  If you are implementing a client that uses the Table information and must work against all resource types  including extensions  you should make requests that specify multiple content types in the  Accept  header  For example       Accept  application json as Table g meta k8s io v v1  application json         Resource deletion  When you   delete   a resource this takes place in two phases   1   finalization  2  removal     yaml      kind    ConfigMap      apiVersion    v1      metadata          finalizers     url io neat finalization    other url io my finalizer         deletionTimestamp   nil             When a client first sends a   delete   to request the removal of a resource  the   metadata deletionTimestamp  is set to the current time  Once the   metadata deletionTimestamp  is set  external controllers that act on finalizers may start performing their cleanup work at any time  in any order   Order is   not   enforced between finalizers because it would introduce significant risk of stuck   metadata finalizers    The   metadata finalizers  field is shared  any actor with permission can reorder it  If the finalizer list were processed in order  then this might lead to a situation in which the component responsible for the first finalizer in the list is waiting for some signal  field value  external system  or other  produced by a component responsible for a finalizer later in the list  resulting in a deadlock   Without enforced ordering  finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list   Once the last finalizer is removed  the resource is actually removed from etcd       Single resource 
API  The Kubernetes API verbs   get      create      update      patch      delete   and   proxy   support single resources only  These verbs with single resource support have no support for submitting multiple resources together in an ordered or unordered list or transaction   When clients  including kubectl  act on a set of resources  the client makes a series of single resource API requests  then aggregates the responses if needed   By contrast  the Kubernetes API verbs   list   and   watch   allow getting multiple resources  and   deletecollection   allows deleting multiple resources      Field validation  Kubernetes always validates the type of fields  For example  if a field in the API is defined as a number  you cannot set the field to a text value  If a field is defined as an array of strings  you can only provide an array  Some fields allow you to omit them  other fields are required  Omitting a required field from an API request is an error   If you make a request with an extra field  one that the cluster s control plane does not recognize  then the behavior of the API server is more complicated   By default  the API server drops fields that it does not recognize from an input that it receives  for example  the JSON body of a  PUT  request    There are two situations where the API server drops fields that you supplied in an HTTP request   These situations are   1  The field is unrecognized because it is not in the resource s OpenAPI schema   One    exception to this is for  that explicitly choose not to prune unknown    fields via  x kubernetes preserve unknown fields    1  The field is duplicated in the object       Validation for unrecognized or duplicate fields   setting the field validation level     From 1 25 onward  unrecognized or duplicate fields in an object are detected via validation on the server when you use HTTP verbs that can submit data   POST    PUT   and  PATCH    Possible levels of validation are  Ignore    Warn   default   and  Strict  
   Ignore    The API server succeeds in handling the request as it would without the erroneous fields   being set  dropping all unknown and duplicate fields and giving no indication it   has done so    Warn     Default  The API server succeeds in handling the request  and reports a   warning to the client  The warning is sent using the  Warning   response header    adding one warning item for each unknown or duplicate field  For more   information about warnings and the Kubernetes API  see the blog article    Warning  Helpful Warnings Ahead   blog 2020 09 03 warnings      Strict    The API server rejects the request with a 400 Bad Request error when it   detects any unknown or duplicate fields  The response message from the API   server specifies all the unknown or duplicate fields that the API server has   detected   The field validation level is set by the  fieldValidation  query parameter    If you submit a request that specifies an unrecognized field  and that is also invalid for a different reason  for example  the request provides a string value where the API expects an integer for a known field   then the API server responds with a 400 Bad Request error  but will not provide any information on unknown or duplicate fields  only which fatal error it encountered first    You always receive an error response in this case  no matter what field validation level you requested    Tools that submit requests to the server  such as  kubectl    might set their own defaults that are different from the  Warn  validation level that the API server uses by default   The  kubectl  tool uses the    validate  flag to set the level of field validation  It accepts the values  ignore    warn   and  strict  while also accepting the values  true   equivalent to  strict   and  false   equivalent to  ignore    The default validation setting for kubectl is    validate true   which means strict server side field validation   When kubectl cannot connect to an API server with field 
validation  API servers prior to Kubernetes 1 27   it will fall back to using client side validation  Client side validation will be removed entirely in a future version of kubectl     Prior to Kubernetes 1 25   kubectl   validate  was used to toggle client side validation on or off as a boolean flag        Dry run    When you use HTTP verbs that can modify resources   POST    PUT    PATCH   and  DELETE    you can submit your request in a  dry run  mode  Dry run mode helps to evaluate a request through the typical request stages  admission chain  validation  merge conflicts  up until persisting objects to storage  The response body for the request is as close as possible to a non dry run response  Kubernetes guarantees that dry run requests will not be persisted in storage or have any other side effects       Make a dry run request  Dry run is triggered by setting the  dryRun  query parameter  This parameter is a string  working as an enum  and the only accepted values are    no value set    Allow side effects  You request this with a query string such as   dryRun    or   dryRun pretty true   The response is the final object that would have been   persisted  or an error if the request could not be fulfilled    All    Every stage runs as normal  except for the final storage stage where side effects   are prevented   When you set   dryRun All   any relevant  are run  validating admission controllers check the request post mutation  merge is performed on  PATCH   fields are defaulted  and schema validation occurs  The changes are not persisted to the underlying storage  but the final object which would have been persisted is still returned to the user  along with the normal status code   If the non dry run version of a request would trigger an admission controller that has side effects  the request will be failed rather than risk an unwanted side effect  All built in admission control plugins support dry run  Additionally  admission webhooks can declare in their  
configuration object   docs reference generated kubernetes api   validatingwebhook v1 admissionregistration k8s io  that they do not have side effects  by setting their  sideEffects  field to  None     If a webhook actually does have side effects  then the  sideEffects  field should be set to  NoneOnDryRun   That change is appropriate provided that the webhook is also be modified to understand the  DryRun  field in AdmissionReview  and to prevent side effects on any request marked as dry runs    Here is an example dry run request that uses   dryRun All        POST  api v1 namespaces test pods dryRun All Content Type  application json Accept  application json      The response would look the same as for non dry run request  but the values of some generated fields may differ       Generated values  Some values of an object are typically generated before the object is persisted  It is important not to rely upon the values of these fields set by a dry run request  since these values will likely be different in dry run mode from when the real request is made  Some of these fields are      name   if  generateName  is set   name  will have a unique random name    creationTimestamp     deletionTimestamp   records the time of creation deletion    UID    uniquely identifies   docs concepts overview working with objects names  uids    the object and is randomly generated  non deterministic     resourceVersion   tracks the persisted version of the object   Any field set by a mutating admission controller   For the  Service  resource  Ports or IP addresses that the kube apiserver assigns to Service objects      Dry run authorization  Authorization for dry run and non dry run requests is identical  Thus  to make a dry run request  you must be authorized to make the non dry run request   For example  to run a dry run   patch   for a Deployment  you must be authorized to perform that   patch    Here is an example of a rule for Kubernetes  that allows patching Deployments      yaml 
rules    apiGroups    apps     resources    deployments     verbs    patch        See  Authorization Overview   docs reference access authn authz authorization        Updates to existing resources   patch and apply   Kubernetes provides several ways to update existing objects  You can read  choosing an update mechanism   update mechanism choose  to learn about which approach might be best for your use case   You can overwrite    update    an existing resource   for example  a ConfigMap   using an HTTP PUT  For a PUT request  it is the client s responsibility to specify the  resourceVersion   taking this from the object being updated   Kubernetes uses that  resourceVersion  information so that the API server can detect lost updates and reject requests made by a client that is out of date with the cluster  In the event that the resource has changed  the  resourceVersion  the client provided is stale   the API server returns a  409 Conflict  error response   Instead of sending a PUT request  the client can send an instruction to the API server to   patch   an existing resource  A   patch   is typically appropriate if the change that the client wants to make isn t conditional on the existing data  Clients that need effective detection of lost updates should consider making their request conditional on the existing  resourceVersion   either HTTP PUT or HTTP PATCH   and then handle any retries that are needed in case there is a conflict   The Kubernetes API supports four different PATCH operations  determined by their corresponding HTTP  Content Type  header    application apply patch yaml    Server Side Apply YAML  a Kubernetes specific extension  based on YAML     All JSON documents are valid YAML  so you can also submit JSON using this   media type  See  Server Side Apply serialization   docs reference using api server side apply  serialization    for more details      To Kubernetes  this is a   create   operation if the object does not exist    or a   patch   
operation if the object already exists    application json patch json    JSON Patch  as defined in  RFC6902  https   tools ietf org html rfc6902     A JSON patch is a sequence of operations that are executed on the resource    for example    op    add    path     a b c    value      foo    bar           To Kubernetes  this is a   patch   operation       A   patch   using  application json patch json  can include conditions to   validate consistency  allowing the operation to fail if those conditions   are not met  for example  to avoid a lost update     application merge patch json    JSON Merge Patch  as defined in  RFC7386  https   tools ietf org html rfc7386     A JSON Merge Patch is essentially a partial representation of the resource    The submitted JSON is combined with the current resource to create a new one    then the new one is saved      To Kubernetes  this is a   patch   operation    application strategic merge patch json    Strategic Merge Patch  a Kubernetes specific extension based on JSON     Strategic Merge Patch is a custom implementation of JSON Merge Patch    You can only use Strategic Merge Patch with built in APIs  or with aggregated   API servers that have special support for it  You cannot use    application strategic merge patch json  with any API   defined using a           The Kubernetes  server side apply  mechanism has superseded Strategic Merge   Patch      Kubernetes   Server Side Apply   docs reference using api server side apply   feature allows the control plane to track managed fields for newly created objects  Server Side Apply provides a clear pattern for managing field conflicts  offers server side   apply   and   update   operations  and replaces the client side functionality of  kubectl apply    For Server Side Apply  Kubernetes treats the request as a   create   if the object does not yet exist  and a   patch   otherwise  For other requests that use PATCH at the HTTP level  the logical Kubernetes operation is always   
patch     See  Server Side Apply   docs reference using api server side apply   for more details       Choosing an update mechanism   update mechanism choose        HTTP PUT to replace existing resource   update mechanism update   The   update    HTTP  PUT   operation is simple to implement and flexible  but has drawbacks     You need to handle conflicts where the  resourceVersion  of the object changes   between your client reading it and trying to write it back  Kubernetes always   detects the conflict  but you as the client author need to implement retries    You might accidentally drop fields if you decode an object locally  for example    using client go  you could receive fields that your client does not know how to   handle   and then drop them as part of your update    If there s a lot of contention on the object  even on a field  or set of fields    that you re not trying to edit   you might have trouble sending the update    The problem is worse for larger objects and for objects with many fields        HTTP PATCH using JSON Patch   update mechanism json patch   A   patch   update is helpful  because     As you re only sending differences  you have less data to send in the  PATCH    request    You can make changes that rely on existing values  such as copying the   value of a particular field into an annotation    Unlike with an   update    HTTP  PUT    making your change can happen right away   even if there are frequent changes to unrelated fields   you usually would   not need to retry      You might still need to specify the  resourceVersion   to match an existing object      if you want to be extra careful to avoid lost updates     It s still good practice to write in some retry logic in case of errors    You can use test conditions to careful craft specific update conditions    For example  you can increment a counter without reading it if the existing   value matches what you expect  You can do this with no lost update risk    even if the object 
has changed in other ways since you last wrote to it     If the test condition fails  you can fall back to reading the current value   and then write back the changed number    However     You need more local  client  logic to build the patch  it helps a lot if you have   a library implementation of JSON Patch  or even for making a JSON Patch specifically against Kubernetes    As the author of client software  you need to be careful when building the patch    the HTTP request body  not to drop fields  the order of operations matters         HTTP PATCH using Server Side Apply   update mechanism server side apply   Server Side Apply has some clear benefits     A single round trip  it rarely requires making a  GET  request first      and you can still detect conflicts for unexpected changes     you have the option to force override a conflict  if appropriate   Client implementations are easy to make    You get an atomic create or update operation without extra effort    similar to  UPSERT  in some SQL dialects    However     Server Side Apply does not work at all for field changes that depend on a current value of the object    You can only apply updates to objects  Some resources in the Kubernetes HTTP API are   not objects  they do not have a   metadata  field   and Server Side Apply   is only relevant for Kubernetes objects      Resource versions  Resource versions are strings that identify the server s internal version of an object  Resource versions can be used by clients to determine when objects have changed  or to express data consistency requirements when getting  listing and watching resources  Resource versions must be treated as opaque by clients and passed unmodified back to the server   You must not assume resource versions are numeric or collatable  API clients may only compare two resource versions for equality  this means that you must not compare resource versions for greater than or less than relationships         resourceVersion  fields in metadata 
  resourceversion in metadata   Clients find resource versions in resources  including the resources from the response stream for a   watch    or when using   list   to enumerate resources    v1 meta ObjectMeta   docs reference generated kubernetes api   objectmeta v1 meta    The  metadata resourceVersion  of a resource instance identifies the resource version the instance was last modified at    v1 meta ListMeta   docs reference generated kubernetes api   listmeta v1 meta    The  metadata resourceVersion  of a resource collection  the response to a   list    identifies the resource version at which the collection was constructed        resourceVersion  parameters in query strings   the resourceversion parameter   The   get      list    and   watch   operations support the  resourceVersion  parameter  From version v1 19  Kubernetes API servers also support the  resourceVersionMatch  parameter on  list  requests   The API server interprets the  resourceVersion  parameter differently depending on the operation you request  and on the value of  resourceVersion   If you set  resourceVersionMatch  then this also affects the way matching happens       Semantics for   get   and   list    For   get   and   list    the semantics of  resourceVersion  are     get       resourceVersion unset   resourceVersion  0    resourceVersion   value other than 0                                                                                                Most Recent             Any                   Not older than                              list     From version v1 19  Kubernetes API servers support the  resourceVersionMatch  parameter on  list  requests  If you set both  resourceVersion  and  resourceVersionMatch   the  resourceVersionMatch  parameter determines how the API server interprets  resourceVersion    You should always set the  resourceVersionMatch  parameter when setting  resourceVersion  on a   list   request  However  be prepared to handle the case where the API server 
that responds is unaware of  resourceVersionMatch  and ignores it   Unless you have strong consistency requirements  using  resourceVersionMatch NotOlderThan  and a known  resourceVersion  is preferable since it can achieve better performance and scalability of your cluster than leaving  resourceVersion  and  resourceVersionMatch  unset  which requires quorum read to be served   Setting the  resourceVersionMatch  parameter without setting  resourceVersion  is not valid   This table explains the behavior of   list   requests with various combinations of  resourceVersion  and  resourceVersionMatch        resourceVersionMatch param   paging params   resourceVersion not set   resourceVersion  0    resourceVersion   value other than 0                                                                                                                                                unset     limit unset    Most Recent   Any   Not older than      unset    limit   n     continue unset    Most Recent   Any   Exact      unset    limit   n    continue   token    Continue Token  Exact   Invalid  treated as Continue Token  Exact   Invalid  HTTP  400 Bad Request       resourceVersionMatch Exact     limit unset    Invalid   Invalid   Exact      resourceVersionMatch Exact    limit   n     continue unset    Invalid   Invalid   Exact      resourceVersionMatch NotOlderThan     limit unset    Invalid   Any   Not older than      resourceVersionMatch NotOlderThan    limit   n     continue unset    Invalid   Any   Not older than       If your cluster s API server does not honor the  resourceVersionMatch  parameter  the behavior is the same as if you did not set it    The meaning of the   get   and   list   semantics are   Any   Return data at any resource version  The newest available resource version is preferred    but strong consistency is not required  data at any resource version may be served  It is possible   for the request to return data at a much older resource version that the 
client has previously   observed  particularly in high availability configurations  due to partitions or stale   caches  Clients that cannot tolerate this should not use this semantic   Most recent   Return data at the most recent resource version  The returned data must be   consistent  in detail  served from etcd via a quorum read     For etcd v3 4 31  and v3 5 13  Kubernetes  serves  most recent  reads from the  watch cache     an internal  in memory store within the API server that caches and mirrors the state of data   persisted into etcd  Kubernetes requests progress notification to maintain cache consistency against   the etcd persistence layer  Kubernetes versions v1 28 through to v1 30 also supported this   feature  although as Alpha it was not recommended for production nor enabled by default until the v1 31 release   Not older than   Return data at least as new as the provided  resourceVersion   The newest   available data is preferred  but any data not older than the provided  resourceVersion  may be   served   For   list   requests to servers that honor the  resourceVersionMatch  parameter  this   guarantees that the collection s   metadata resourceVersion  is not older than the requested    resourceVersion   but does not make any guarantee about the   metadata resourceVersion  of any   of the items in that collection   Exact   Return data at the exact resource version provided  If the provided  resourceVersion  is   unavailable  the server responds with HTTP 410  Gone    For   list   requests to servers that honor the    resourceVersionMatch  parameter  this guarantees that the collection s   metadata resourceVersion    is the same as the  resourceVersion  you requested in the query string  That guarantee does   not apply to the   metadata resourceVersion  of any items within that collection   Continue Token  Exact   Return data at the resource version of the initial paginated   list   call  The returned  continue   tokens  are responsible for keeping 
track of the initially provided resource version for all paginated     list   calls after the initial paginated   list      When you   list   resources and receive a collection response  the response includes the  list metadata   docs reference generated kubernetes api v  listmeta v1 meta  of the collection as well as  object metadata   docs reference generated kubernetes api v  objectmeta v1 meta  for each item in that collection  For individual objects found within a collection response    metadata resourceVersion  tracks when that object was last updated  and not how up to date the object is when served    When using  resourceVersionMatch NotOlderThan  and limit is set  clients must handle HTTP 410  Gone  responses  For example  the client might retry with a newer  resourceVersion  or fall back to  resourceVersion       When using  resourceVersionMatch Exact  and  limit  is unset  clients must verify that the collection s   metadata resourceVersion  matches the requested  resourceVersion   and handle the case where it does not  For example  the client might fall back to a request with  limit  set       Semantics for   watch    For   watch    the semantics of resource version are     watch         resourceVersion unset                 resourceVersion  0           resourceVersion   value other than 0                                                                                                                     Get State and Start at Most Recent    Get State and Start at Any   Start at Exact                              The meaning of those   watch   semantics are   Get State and Start at Any    Watches initialized this way may return arbitrarily stale   data  Please review this semantic before using it  and favor the other semantics   where possible       Start a   watch   at any resource version  the most recent resource version   available is preferred  but not required  Any starting resource version is   allowed  It is possible for the   watch   to start at 
a much older resource version that the client has previously observed, particularly in high-availability configurations, due to partitions or stale caches. Clients that cannot tolerate this apparent rewinding should not start a watch with this semantic. To establish initial state, the watch begins with synthetic "Added" events for all resource instances that exist at the starting resource version. All following watch events are for all changes that occurred after the resource version the watch started at.

**Get State and Start at Most Recent:** Start a watch at the most recent resource version, which must be consistent (in detail: served from etcd via a quorum read). To establish initial state, the watch begins with synthetic "Added" events for all resource instances that exist at the starting resource version. All following watch events are for all changes that occurred after the resource version the watch started at.

**Start at Exact:** Start a watch at an exact resource version. The watch events are for all changes after the provided resource version. Unlike "Get State and Start at Most Recent" and "Get State and Start at Any", the watch is not started with synthetic "Added" events for the provided resource version. The client is assumed to already have the initial state at the starting resource version since the client provided the resource version.

### "410 Gone" responses

Servers are not required to serve all older resource versions and may return an HTTP `410 (Gone)` status code if a client requests a `resourceVersion` older than the server has retained. Clients must be able to tolerate `410 (Gone)` responses. See [Efficient detection of changes](#efficient-detection-of-changes) for details on how to handle `410 (Gone)` responses when watching resources.

If you request a `resourceVersion` outside the applicable limit then, depending on whether a request is served from cache or not, the API server may reply with a `410 Gone` HTTP response.

### Unavailable resource versions

Servers are not required to serve unrecognized resource versions. If you request a list or get for a resource version that the API server does not recognize, then the API server may either:

* wait briefly for the resource version to become available, then time out with a `504 (Gateway Timeout)` if the provided resource version does not become available in a reasonable amount of time;
* respond with a `Retry-After` response header indicating how many seconds a client should wait before retrying the request.

If you request a resource version that an API server does not recognize, the kube-apiserver additionally identifies its error responses with a "Too large resource version" message.

If you make a watch request for an unrecognized resource version, the API server may wait indefinitely (until the request timeout) for the resource version to become available."}
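The required tolerance for `410 Gone` responses amounts to a re-list-and-rewatch loop. A minimal sketch, assuming hypothetical `list_fn`/`watch_fn` stand-ins for a client's list and watch calls (not any real client library API):

```python
def watch_with_relist(list_fn, watch_fn):
    """Tolerate HTTP 410 Gone: on expiry, re-list to get a fresh
    resourceVersion and restart the watch from it."""
    rv = None
    while True:
        if rv is None:
            state, rv = list_fn()          # fresh list: initial state + resourceVersion
            for obj in state:
                yield ("Added", obj)       # synthetic Added events for initial state
        expired = False
        for event in watch_fn(rv):
            if event == "410 Gone":        # stand-in for the HTTP 410 status
                expired = True             # our resourceVersion aged out
                break
            kind, obj, rv = event          # track the latest observed resourceVersion
            yield (kind, obj)
        if expired:
            rv = None                      # force a fresh list on the next pass
        else:
            return                         # demo watch source finished cleanly
```

The key design point is that the client never trusts a rejected `resourceVersion`: it discards it and rebuilds state from a fresh list, exactly as real informer-style clients do.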
{"questions":"kubernetes reference title Kubernetes Deprecation Policy bgrant0607 contenttype concept weight 40 reviewers thockin lavalamp","answers":"---\nreviewers:\n- bgrant0607\n- lavalamp\n- thockin\ntitle: Kubernetes Deprecation Policy\ncontent_type: concept\nweight: 40\n---\n\n<!-- overview -->\nThis document details the deprecation policy for various facets of the system.\n\n<!-- body -->\nKubernetes is a large system with many components and many contributors. As\nwith any such software, the feature set naturally evolves over time, and\nsometimes a feature may need to be removed. This could include an API, a flag,\nor even an entire feature. To avoid breaking existing users, Kubernetes follows\na deprecation policy for aspects of the system that are slated to be removed.\n\n## Deprecating parts of the API\n\nSince Kubernetes is an API-driven system, the API has evolved over time to\nreflect the evolving understanding of the problem space. The Kubernetes API is\nactually a set of APIs, called \"API groups\", and each API group is\nindependently versioned. [API versions](\/docs\/reference\/using-api\/#api-versioning) fall\ninto 3 main tracks, each of which has different policies for deprecation:\n\n| Example  | Track                            |\n|----------|----------------------------------|\n| v1       | GA (generally available, stable) |\n| v1beta1  | Beta (pre-release)               |\n| v1alpha1 | Alpha (experimental)             |\n\nA given release of Kubernetes can support any number of API groups and any\nnumber of versions of each.\n\nThe following rules govern the deprecation of elements of the API. 
This\nincludes:\n\n* REST resources (aka API objects)\n* Fields of REST resources\n* Annotations on REST resources, including \"beta\" annotations but not\n  including \"alpha\" annotations.\n* Enumerated or constant values\n* Component config structures\n\nThese rules are enforced between official releases, not between\narbitrary commits to master or release branches.\n\n**Rule #1: API elements may only be removed by incrementing the version of the\nAPI group.**\n\nOnce an API element has been added to an API group at a particular version, it\ncannot be removed from that version or have its behavior significantly\nchanged, regardless of track.\n\nFor historical reasons, there are 2 \"monolithic\" API groups - \"core\" (no\ngroup name) and \"extensions\". Resources will incrementally be moved from these\nlegacy API groups into more domain-specific API groups.\n\n**Rule #2: API objects must be able to round-trip between API versions in a given\nrelease without information loss, with the exception of whole REST resources\nthat do not exist in some versions.**\n\nFor example, an object can be written as v1 and then read back as v2 and\nconverted to v1, and the resulting v1 resource will be identical to the\noriginal. The representation in v2 might be different from v1, but the system\nknows how to convert between them in both directions. 
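Rule #2's round-trip guarantee can be illustrated with a pair of toy converters. The `replicasPolicy` field and the `example.com` group are invented for illustration; a v2-only field is stashed in a v1 annotation so no information is lost:

```python
# Toy converters for Rule #2: v1 <-> v2 must round-trip losslessly.
# "replicasPolicy" is an invented v2-only field; v1 carries it as an annotation.

ANNOTATION = "example.com/replicas-policy"

def v1_to_v2(v1):
    v2 = {"apiVersion": "example.com/v2", "spec": dict(v1["spec"])}
    ann = dict(v1.get("metadata", {}).get("annotations", {}))
    if ANNOTATION in ann:                        # restore the v2-only field
        v2["spec"]["replicasPolicy"] = ann.pop(ANNOTATION)
    v2["metadata"] = {"annotations": ann}
    return v2

def v2_to_v1(v2):
    spec = dict(v2["spec"])
    ann = dict(v2.get("metadata", {}).get("annotations", {}))
    if "replicasPolicy" in spec:                 # no v1 field: stash as annotation
        ann[ANNOTATION] = spec.pop("replicasPolicy")
    return {"apiVersion": "example.com/v1", "spec": spec,
            "metadata": {"annotations": ann}}

obj_v1 = {"apiVersion": "example.com/v1", "spec": {"replicas": 3},
          "metadata": {"annotations": {}}}
assert v2_to_v1(v1_to_v2(obj_v1)) == obj_v1      # v1 -> v2 -> v1 is lossless
```

The same property must hold starting from v2, which is why the annotation carries the field rather than dropping it.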
Additionally, any new\nfield added in v2 must be able to round-trip to v1 and back, which means v1\nmight have to add an equivalent field or represent it as an annotation.\n\n**Rule #3: An API version in a given track may not be deprecated in favor of a less stable API version.**\n\n* GA API versions can replace beta and alpha API versions.\n* Beta API versions can replace earlier beta and alpha API versions, but *may not* replace GA API versions.\n* Alpha API versions can replace earlier alpha API versions, but *may not* replace GA or beta API versions.\n\n**Rule #4a: API lifetime is determined by the API stability level**\n\n* GA API versions may be marked as deprecated, but must not be removed within a major version of Kubernetes\n* Beta API versions are deprecated no more than 9 months or 3 minor releases after introduction (whichever is longer),\n  and are no longer served 9 months or 3 minor releases after deprecation (whichever is longer)\n* Alpha API versions may be removed in any release without prior deprecation notice\n\nThis ensures beta API support covers the [maximum supported version skew of 2 releases](\/releases\/version-skew-policy\/),\nand that APIs don't stagnate on unstable beta versions, accumulating production usage that will be\ndisrupted when support for the beta API ends.\n\n\nThere are no current plans for a major version revision of Kubernetes that removes GA APIs.\n\n\n\nUntil [#52185](https:\/\/github.com\/kubernetes\/kubernetes\/issues\/52185) is\nresolved, no API versions that have been persisted to storage may be removed.\nServing REST endpoints for those versions may be disabled (subject to the\ndeprecation timelines in this document), but the API server must remain capable\nof decoding\/converting previously persisted data from storage.\n\n\n**Rule #4b: The \"preferred\" API version and the \"storage version\" for a given\ngroup may not advance until after a release has been made that supports both the\nnew version and the 
previous version**\n\nUsers must be able to upgrade to a new release of Kubernetes and then roll back\nto a previous release, without converting anything to the new API version or\nsuffering breakages (unless they explicitly used features only available in the\nnewer version). This is particularly evident in the stored representation of\nobjects.\n\nAll of this is best illustrated by examples. Imagine a Kubernetes release,\nversion X, which introduces a new API group. A new Kubernetes release is made\nevery approximately 4 months (3 per year). The following table describes which\nAPI versions are supported in a series of subsequent releases.\n\n<table>\n  <thead>\n    <tr>\n      <th>Release<\/th>\n      <th>API Versions<\/th>\n      <th>Preferred\/Storage Version<\/th>\n      <th>Notes<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n    <tr>\n      <td>X<\/td>\n      <td>v1alpha1<\/td>\n      <td>v1alpha1<\/td>\n      <td><\/td>\n    <\/tr>\n    <tr>\n      <td>X+1<\/td>\n      <td>v1alpha2<\/td>\n      <td>v1alpha2<\/td>\n      <td>\n        <ul>\n           <li>v1alpha1 is removed. See release notes for required actions.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>X+2<\/td>\n      <td>v1beta1<\/td>\n      <td>v1beta1<\/td>\n      <td>\n        <ul>\n          <li>v1alpha2 is removed. See release notes for required actions.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>X+3<\/td>\n      <td>v1beta2, v1beta1 (deprecated)<\/td>\n      <td>v1beta1<\/td>\n      <td>\n        <ul>\n          <li>v1beta1 is deprecated. See release notes for required actions.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>X+4<\/td>\n      <td>v1beta2, v1beta1 (deprecated)<\/td>\n      <td>v1beta2<\/td>\n      <td><\/td>\n    <\/tr>\n    <tr>\n      <td>X+5<\/td>\n      <td>v1, v1beta1 (deprecated), v1beta2 (deprecated)<\/td>\n      <td>v1beta2<\/td>\n      <td>\n        <ul>\n          <li>v1beta2 is deprecated. 
See release notes for required actions.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>X+6<\/td>\n      <td>v1, v1beta2 (deprecated)<\/td>\n      <td>v1<\/td>\n      <td>\n        <ul>\n          <li>v1beta1 is removed. See release notes for required actions.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>X+7<\/td>\n      <td>v1, v1beta2 (deprecated)<\/td>\n      <td>v1<\/td>\n      <td><\/td>\n    <\/tr>\n    <tr>\n      <td>X+8<\/td>\n      <td>v2alpha1, v1<\/td>\n      <td>v1<\/td>\n      <td>\n        <ul>\n          <li>v1beta2 is removed. See release notes for required actions.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>X+9<\/td>\n      <td>v2alpha2, v1<\/td>\n      <td>v1<\/td>\n      <td>\n        <ul>\n           <li>v2alpha1 is removed. See release notes for required actions.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>X+10<\/td>\n      <td>v2beta1, v1<\/td>\n      <td>v1<\/td>\n      <td>\n        <ul>\n          <li>v2alpha2 is removed. See release notes for required actions.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>X+11<\/td>\n      <td>v2beta2, v2beta1 (deprecated), v1<\/td>\n      <td>v1<\/td>\n      <td>\n        <ul>\n          <li>v2beta1 is deprecated. See release notes for required actions.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>X+12<\/td>\n      <td>v2, v2beta2 (deprecated), v2beta1 (deprecated), v1 (deprecated)<\/td>\n      <td>v1<\/td>\n      <td>\n        <ul>\n          <li>v2beta2 is deprecated. 
See release notes for required actions.<\/li>\n          <li>v1 is deprecated in favor of v2, but will not be removed<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>X+13<\/td>\n      <td>v2, v2beta1 (deprecated), v2beta2 (deprecated), v1 (deprecated)<\/td>\n      <td>v2<\/td>\n      <td><\/td>\n    <\/tr>\n    <tr>\n      <td>X+14<\/td>\n      <td>v2, v2beta2 (deprecated), v1 (deprecated)<\/td>\n      <td>v2<\/td>\n      <td>\n        <ul>\n          <li>v2beta1 is removed. See release notes for required actions.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>X+15<\/td>\n      <td>v2, v1 (deprecated)<\/td>\n      <td>v2<\/td>\n      <td>\n        <ul>\n          <li>v2beta2 is removed. See release notes for required actions.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n  <\/tbody>\n<\/table>\n\n### REST resources (aka API objects)\n\nConsider a hypothetical REST resource named Widget, which was present in API v1\nin the above timeline, and which needs to be deprecated. We document and\n[announce](https:\/\/groups.google.com\/forum\/#!forum\/kubernetes-announce) the\ndeprecation in sync with release X+1. The Widget resource still exists in API\nversion v1 (deprecated) but not in v2alpha1. The Widget resource continues to\nexist and function in releases up to and including X+8. Only in release X+9,\nwhen API v1 has aged out, does the Widget resource cease to exist, and the\nbehavior get removed.\n\nStarting in Kubernetes v1.19, making an API request to a deprecated REST API endpoint:\n\n1. Returns a `Warning` header\n   (as defined in [RFC7234, Section 5.5](https:\/\/tools.ietf.org\/html\/rfc7234#section-5.5)) in the API response.\n1. Adds a `\"k8s.io\/deprecated\":\"true\"` annotation to the\n   [audit event](\/docs\/tasks\/debug\/debug-cluster\/audit\/) recorded for the request.\n1. Sets an `apiserver_requested_deprecated_apis` gauge metric to `1` in the `kube-apiserver`\n   process. 
The metric has labels for `group`, `version`, `resource`, `subresource` that can be joined\n   to the `apiserver_request_total` metric, and a `removed_release` label that indicates the\n   Kubernetes release in which the API will no longer be served. The following Prometheus query\n   returns information about requests made to deprecated APIs which will be removed in v1.22:\n\n   ```promql\n   apiserver_requested_deprecated_apis{removed_release=\"1.22\"} * on(group,version,resource,subresource) group_right() apiserver_request_total\n   ```\n\n### Fields of REST resources\n\nAs with whole REST resources, an individual field which was present in API v1\nmust exist and function until API v1 is removed. Unlike whole resources, the\nv2 APIs may choose a different representation for the field, as long as it can\nbe round-tripped. For example, a v1 field named \"magnitude\" which was\ndeprecated might be named \"deprecatedMagnitude\" in API v2. When v1 is\neventually removed, the deprecated field can be removed from v2.\n\n### Enumerated or constant values\n\nAs with whole REST resources and fields thereof, a constant value which was\nsupported in API v1 must exist and function until API v1 is removed.\n\n### Component config structures\n\nComponent configs are versioned and managed similarly to REST resources.\n\n### Future work\n\nOver time, Kubernetes will introduce more fine-grained API versions, at which\npoint these rules will be adjusted as needed.\n\n## Deprecating a flag or CLI\n\nThe Kubernetes system comprises several different programs cooperating.\nSometimes, a Kubernetes release might remove flags or CLI commands\n(collectively \"CLI elements\") in these programs. The individual programs\nnaturally sort into two main groups - user-facing and admin-facing programs,\nwhich vary slightly in their deprecation policies. 
Unless a flag is explicitly\nprefixed or documented as \"alpha\" or \"beta\", it is considered GA.\n\nCLI elements are effectively part of the API to the system, but since they are\nnot versioned in the same way as the REST API, the rules for deprecation are as\nfollows:\n\n**Rule #5a: CLI elements of user-facing components (e.g. kubectl) must function\nafter their announced deprecation for no less than:**\n\n* **GA: 12 months or 2 releases (whichever is longer)**\n* **Beta: 3 months or 1 release (whichever is longer)**\n* **Alpha: 0 releases**\n\n**Rule #5b: CLI elements of admin-facing components (e.g. kubelet) must function\nafter their announced deprecation for no less than:**\n\n* **GA: 6 months or 1 release (whichever is longer)**\n* **Beta: 3 months or 1 release (whichever is longer)**\n* **Alpha: 0 releases**\n\n**Rule #5c: Command line interface (CLI) elements cannot be deprecated in favor of\nless stable CLI elements**\n\nSimilar to Rule #3 for APIs, if an element of a command line interface is being replaced with an\nalternative implementation, such as by renaming an existing element, or by switching to\nuse configuration sourced from a file\ninstead of a command line argument, that recommended alternative must be of\nthe same or higher stability level.\n\n**Rule #6: Deprecated CLI elements must emit warnings (optionally disablable)\nwhen used.**\n\n## Deprecating a feature or behavior\n\nOccasionally a Kubernetes release needs to deprecate some feature or behavior\nof the system that is not controlled by the API or CLI. In this case, the\nrules for deprecation are as follows:\n\n**Rule #7: Deprecated behaviors must function for no less than 1 year after their\nannounced deprecation.**\n\nIf the feature or behavior is being replaced with an alternative implementation\nthat requires work to adopt the change, there should be an effort to simplify\nthe transition whenever possible. 
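The "whichever is longer" windows in Rules #5a and #5b combine a wall-clock duration with a release count. A sketch of how the effective window works out, assuming the approximately 4-month release cadence described earlier (the `POLICY` table below simply restates the rules):

```python
RELEASE_MONTHS = 4   # approximate cadence: one minor release every ~4 months

# Minimum post-deprecation support per Rule #5a (user-facing) / #5b (admin-facing):
# (months, releases) pairs; the effective window is whichever is longer.
POLICY = {
    ("user", "GA"): (12, 2), ("user", "Beta"): (3, 1), ("user", "Alpha"): (0, 0),
    ("admin", "GA"): (6, 1), ("admin", "Beta"): (3, 1), ("admin", "Alpha"): (0, 0),
}

def min_support_months(audience, stability):
    """Months a deprecated CLI element must keep functioning."""
    months, releases = POLICY[(audience, stability)]
    return max(months, releases * RELEASE_MONTHS)

# A deprecated GA kubectl flag must keep working for at least 12 months:
# 12 months vs. 2 releases (~8 months); the longer window wins.
assert min_support_months("user", "GA") == 12
```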
If an alternative implementation is under\nKubernetes organization control, the following rules apply:\n\n**Rule #8: The feature or behavior must not be deprecated in favor of an alternative\nimplementation that is less stable**\n\nFor example, a generally available feature cannot be deprecated in favor of a Beta replacement.\nThe Kubernetes project does, however, encourage users to adopt and transition to alternative\nimplementations even before they reach the same maturity level. This is particularly important\nfor exploring new use cases of a feature or getting early feedback on the replacement.\n\nAlternative implementations may sometimes be external tools or products,\nfor example a feature may move from the kubelet to a container runtime\nthat is not under Kubernetes project control. In such cases, the rule cannot be\napplied, but there must be an effort to ensure that there is a transition path\nthat does not compromise on components' maturity levels. In the example with\ncontainer runtimes, the effort may involve trying to ensure that popular container runtimes\nhave versions that offer the same level of stability while implementing that replacement behavior.\n\nDeprecation rules for features and behaviors do not imply that all changes\nto the system are governed by this policy.\nThese rules apply only to significant, user-visible behaviors which impact the\ncorrectness of applications running on Kubernetes or that impact the\nadministration of Kubernetes clusters, and which are being removed entirely.\n\nAn exception to the above rule is _feature gates_. Feature gates are key=value\npairs that allow users to enable\/disable experimental features.\n\nFeature gates are intended to cover the development life cycle of a feature - they\nare not intended to be long-term APIs. 
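As key=value pairs (for example, via a component's `--feature-gates` flag), feature gate settings can be parsed as in this sketch; the gate names and stage defaults below are invented for illustration:

```python
# Stage defaults: Alpha gates are off by default, Beta/GA gates are on.
DEFAULTS = {"MyAlphaFeature": False, "MyBetaFeature": True}  # invented gates

def parse_feature_gates(spec):
    """Parse a --feature-gates style string like 'A=true,B=false'."""
    gates = dict(DEFAULTS)
    for item in filter(None, spec.split(",")):
        name, _, value = item.partition("=")
        if name not in gates:
            raise ValueError(f"unknown feature gate: {name}")
        gates[name] = value.strip().lower() == "true"
    return gates

gates = parse_feature_gates("MyAlphaFeature=true")
assert gates == {"MyAlphaFeature": True, "MyBetaFeature": True}
```

Rejecting unknown names mirrors the point made below about non-operational gates: failing loudly avoids silently running an unsupported configuration.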
As such, they are expected to be deprecated\nand removed after a feature becomes GA or is dropped.\n\nAs a feature moves through the stages, the associated feature gate evolves.\nThe feature life cycle matched to its corresponding feature gate is:\n\n* Alpha: the feature gate is disabled by default and can be enabled by the user.\n* Beta: the feature gate is enabled by default and can be disabled by the user.\n* GA: the feature gate is deprecated (see [\"Deprecation\"](#deprecation)) and becomes\n  non-operational.\n* GA, deprecation window complete: the feature gate is removed and calls to it are\n  no longer accepted.\n\n### Deprecation\n\nFeatures can be removed at any point in the life cycle prior to GA. When features are\nremoved prior to GA, their associated feature gates are also deprecated.\n\nWhen an invocation tries to disable a non-operational feature gate, the call fails in order\nto avoid unsupported scenarios that might otherwise run silently.\n\nIn some cases, removing pre-GA features requires considerable time. Feature gates can remain\noperational until their associated feature is fully removed, at which point the feature gate\nitself can be deprecated.\n\nWhen removing a feature gate for a GA feature also requires considerable time, calls to\nfeature gates may remain operational if the feature gate has no effect on the feature,\nand if the feature gate causes no errors.\n\nFeatures intended to be disabled by users should include a mechanism for disabling the\nfeature in the associated feature gate.\n\nVersioning for feature gates is different from the previously discussed components,\ntherefore the rules for deprecation are as follows:\n\n**Rule #9: Feature gates must be deprecated when the corresponding feature they control\ntransitions a lifecycle stage as follows. 
Feature gates must function for no less than:**\n\n* **Beta feature to GA: 6 months or 2 releases (whichever is longer)**\n* **Beta feature to EOL: 3 months or 1 release (whichever is longer)**\n* **Alpha feature to EOL: 0 releases**\n\n**Rule #10: Deprecated feature gates must respond with a warning when used. When a feature gate\nis deprecated it must be documented in both the release notes and the corresponding CLI help.\nBoth warnings and documentation must indicate whether a feature gate is non-operational.**\n\n## Deprecating a metric\n\nEach component of the Kubernetes control plane exposes metrics (usually the\n`\/metrics` endpoint), which are typically ingested by cluster administrators.\nNot all metrics are the same: some metrics are commonly used as SLIs or used\nto determine SLOs; these tend to have greater import. Other metrics are more\nexperimental in nature or are used primarily in the Kubernetes development\nprocess.\n\nAccordingly, metrics fall under three stability classes (`ALPHA`, `BETA`, `STABLE`);\nthis impacts removal of a metric during a Kubernetes release. These classes\nare determined by the perceived importance of the metric. The rules for\ndeprecating and removing a metric are as follows:\n\n**Rule #11a: Metrics, for the corresponding stability class, must function for no less than:**\n\n* **STABLE: 4 releases or 12 months (whichever is longer)**\n* **BETA: 2 releases or 8 months (whichever is longer)**\n* **ALPHA: 0 releases**\n\n**Rule #11b: Metrics, after their _announced deprecation_, must function for no less than:**\n\n* **STABLE: 3 releases or 9 months (whichever is longer)**\n* **BETA: 1 release or 4 months (whichever is longer)**\n* **ALPHA: 0 releases**\n\nDeprecated metrics will have their description text prefixed with a deprecation notice\nstring '(Deprecated from x.y)' and a warning log will be emitted during metric\nregistration. 
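That registration-time transform might be sketched as follows; the metric name and version string are illustrative, and the `print` stands in for a real warning log:

```python
def register_help_text(name, help_text, deprecated_version=None):
    """Prefix a deprecated metric's description per the
    '(Deprecated from x.y)' convention and emit a warning."""
    if deprecated_version is not None:
        help_text = f"(Deprecated from {deprecated_version}) {help_text}"
        print(f"warning: metric {name} is deprecated")  # warning at registration time
    return help_text

text = register_help_text("example_metric_total",
                          "Count of example events.",
                          deprecated_version="1.28")
assert text == "(Deprecated from 1.28) Count of example events."
```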
Like their stable undeprecated counterparts, deprecated metrics will\nbe automatically registered to the metrics endpoint and therefore visible.\n\nOn a subsequent release (when the metric's `deprecatedVersion` is equal to\n_current_kubernetes_version - 3_), a deprecated metric will become a _hidden_ metric.\n**_Unlike_** their deprecated counterparts, hidden metrics will _no longer_ be\nautomatically registered to the metrics endpoint (hence hidden). However, they\ncan be explicitly enabled through a command line flag on the binary\n(`--show-hidden-metrics-for-version=`). This provides cluster admins an\nescape hatch to properly migrate off of a deprecated metric, if they were not\nable to react to the earlier deprecation warnings. Hidden metrics should be\ndeleted after one release.\n\n## Exceptions\n\nNo policy can cover every possible situation. This policy is a living\ndocument, and will evolve over time. In practice, there will be situations\nthat do not fit neatly into this policy, or for which this policy becomes a\nserious impediment. Such situations should be discussed with SIGs and project\nleaders to find the best solutions for those specific cases, always bearing in\nmind that Kubernetes is committed to being a stable system that, as much as\npossible, never breaks users. 
Exceptions will always be announced in all\nrelevant release notes.","site":"kubernetes reference","answers_cleaned":"    reviewers    bgrant0607   lavalamp   thockin title  Kubernetes Deprecation Policy content type  concept weight  40           overview     This document details the deprecation policy for various facets of the system        body     Kubernetes is a large system with many components and many contributors  As with any such software  the feature set naturally evolves over time  and sometimes a feature may need to be removed  This could include an API  a flag  or even an entire feature  To avoid breaking existing users  Kubernetes follows a deprecation policy for aspects of the system that are slated to be removed      Deprecating parts of the API  Since Kubernetes is an API driven system  the API has evolved over time to reflect the evolving understanding of the problem space  The Kubernetes API is actually a set of APIs  called  API groups   and each API group is independently versioned   API versions   docs reference using api  api versioning  fall into 3 main tracks  each of which has different policies for deprecation     Example    Track                                                                                v1         GA  generally available  stable      v1beta1    Beta  pre release                    v1alpha1   Alpha  experimental                 A given release of Kubernetes can support any number of API groups and any number of versions of each   The following rules govern the deprecation of elements of the API  This includes     REST resources  aka API objects    Fields of REST resources   Annotations on REST resources  including  beta  annotations but not   including  alpha  annotations    Enumerated or constant values   Component config structures  These rules are enforced between official releases  not between arbitrary commits to master or release branches     Rule  1  API elements may only be removed by incrementing the version 
of the API group     Once an API element has been added to an API group at a particular version  it can not be removed from that version or have its behavior significantly changed  regardless of track    For historical reasons  there are 2  monolithic  API groups    core   no group name  and  extensions   Resources will incrementally be moved from these legacy API groups into more domain specific API groups      Rule  2  API objects must be able to round trip between API versions in a given release without information loss  with the exception of whole REST resources that do not exist in some versions     For example  an object can be written as v1 and then read back as v2 and converted to v1  and the resulting v1 resource will be identical to the original  The representation in v2 might be different from v1  but the system knows how to convert between them in both directions  Additionally  any new field added in v2 must be able to round trip to v1 and back  which means v1 might have to add an equivalent field or represent it as an annotation     Rule  3  An API version in a given track may not be deprecated in favor of a less stable API version       GA API versions can replace beta and alpha API versions    Beta API versions can replace earlier beta and alpha API versions  but  may not  replace GA API versions    Alpha API versions can replace earlier alpha API versions  but  may not  replace GA or beta API versions     Rule  4a  API lifetime is determined by the API stability level      GA API versions may be marked as deprecated  but must not be removed within a major version of Kubernetes   Beta API versions are deprecated no more than 9 months or 3 minor releases after introduction  whichever is longer     and are no longer served 9 months or 3 minor releases after deprecation  whichever is longer    Alpha API versions may be removed in any release without prior deprecation notice  This ensures beta API support covers the  maximum supported version skew of 2 
releases   releases version skew policy    and that APIs don t stagnate on unstable beta versions  accumulating production usage that will be disrupted when support for the beta API ends    There are no current plans for a major version revision of Kubernetes that removes GA APIs     Until   52185  https   github com kubernetes kubernetes issues 52185  is resolved  no API versions that have been persisted to storage may be removed  Serving REST endpoints for those versions may be disabled  subject to the deprecation timelines in this document   but the API server must remain capable of decoding converting previously persisted data from storage      Rule  4b  The  preferred  API version and the  storage version  for a given group may not advance until after a release has been made that supports both the new version and the previous version    Users must be able to upgrade to a new release of Kubernetes and then roll back to a previous release  without converting anything to the new API version or suffering breakages  unless they explicitly used features only available in the newer version   This is particularly evident in the stored representation of objects   All of this is best illustrated by examples  Imagine a Kubernetes release  version X  which introduces a new API group  A new Kubernetes release is made every approximately 4 months  3 per year   The following table describes which API versions are supported in a series of subsequent releases    table     thead       tr         th Release  th         th API Versions  th         th Preferred Storage Version  th         th Notes  th        tr      thead     tbody       tr         td X  td         td v1alpha1  td         td v1alpha1  td         td   td        tr       tr         td X 1  td         td v1alpha2  td         td v1alpha2  td         td           ul              li v1alpha1 is removed  See release notes for required actions   li            ul          td        tr       tr         td X 2  td         td 
v1beta1  td         td v1beta1  td         td           ul             li v1alpha2 is removed  See release notes for required actions   li            ul          td        tr       tr         td X 3  td         td v1beta2  v1beta1  deprecated   td         td v1beta1  td         td           ul             li v1beta1 is deprecated  See release notes for required actions   li            ul          td        tr       tr         td X 4  td         td v1beta2  v1beta1  deprecated   td         td v1beta2  td         td   td        tr       tr         td X 5  td         td v1  v1beta1  deprecated   v1beta2  deprecated   td         td v1beta2  td         td           ul             li v1beta2 is deprecated  See release notes for required actions   li            ul          td        tr       tr         td X 6  td         td v1  v1beta2  deprecated   td         td v1  td         td           ul             li v1beta1 is removed  See release notes for required actions   li            ul          td        tr       tr         td X 7  td         td v1  v1beta2  deprecated   td         td v1  td         td   td        tr       tr         td X 8  td         td v2alpha1  v1  td         td v1  td         td           ul             li v1beta2 is removed  See release notes for required actions   li            ul          td        tr       tr         td X 9  td         td v2alpha2  v1  td         td v1  td         td           ul              li v2alpha1 is removed  See release notes for required actions   li            ul          td        tr       tr         td X 10  td         td v2beta1  v1  td         td v1  td         td           ul             li v2alpha2 is removed  See release notes for required actions   li            ul          td        tr       tr         td X 11  td         td v2beta2  v2beta1  deprecated   v1  td         td v1  td         td           ul             li v2beta1 is deprecated  See release notes for required actions   li            ul          td    
| Release | API Versions | Preferred/Storage Version | Notes |
|---------|--------------|---------------------------|-------|
| X+12 | v2, v2beta2 (deprecated), v2beta1 (deprecated), v1 (deprecated) | v1 | v2beta2 is deprecated. See release notes for required actions. v1 is deprecated in favor of v2, but will not be removed |
| X+13 | v2, v2beta1 (deprecated), v2beta2 (deprecated), v1 (deprecated) | v2 | |
| X+14 | v2, v2beta2 (deprecated), v1 (deprecated) | v2 | v2beta1 is removed. See release notes for required actions |
| X+15 | v2, v1 (deprecated) | v2 | v2beta2 is removed. See release notes for required actions |

### REST resources (aka API objects)

Consider a hypothetical REST resource named Widget, which was present in API v1 in the above timeline, and which needs to be deprecated. We document and [announce](https://groups.google.com/forum/#!forum/kubernetes-announce) the deprecation in sync with release X+1. The Widget resource still exists in API version v1 (deprecated) but not in v2alpha1. The Widget resource continues to exist and function in releases up to and including X+8. Only in release X+9, when API v1 has aged out, does the Widget resource cease to exist, and the behavior get removed.

Starting in Kubernetes v1.19, making an API request to a deprecated REST API endpoint:

1. Returns a `Warning` header (as defined in [RFC7234, Section 5.5](https://tools.ietf.org/html/rfc7234#section-5.5)) in the API response.
1. Adds a `"k8s.io/deprecated":"true"` annotation to the [audit event](/docs/tasks/debug/debug-cluster/audit/) recorded for the request.
1. Sets an `apiserver_requested_deprecated_apis` gauge metric to `1` in the `kube-apiserver` process. The metric has labels for `group`, `version`, `resource`, and `subresource` that can be joined to the `apiserver_request_total` metric, and a `removed_release` label that indicates the Kubernetes release in which the API will no longer be served. The following Prometheus query returns information about requests made to deprecated APIs which will be removed in v1.22:

   ```promql
   apiserver_requested_deprecated_apis{removed_release="1.22"} * on(group,version,resource,subresource) group_right() apiserver_request_total
   ```

### Fields of REST resources

As with whole REST resources, an individual field which was present in API v1 must exist and function until API v1 is removed. Unlike whole resources, the v2 APIs may choose a different representation for the field, as long as it can be round-tripped. For example, a v1 field named "magnitude" which was deprecated might be named "deprecatedMagnitude" in API v2. When v1 is eventually removed, the deprecated field can be removed from v2.

### Enumerated or constant values

As with whole REST resources and fields thereof, a constant value which was supported in API v1 must exist and function until API v1 is removed.

### Component config structures

Component configs are versioned and managed similar to REST resources.

### Future work

Over time, Kubernetes will introduce more fine-grained API versions, at which point these rules will be adjusted as needed.

## Deprecating a flag or CLI

The Kubernetes system is comprised of several different programs cooperating. Sometimes, a Kubernetes release might remove flags or CLI commands (collectively "CLI elements") in these programs. The individual programs naturally sort into two main groups: user-facing and admin-facing programs, which vary slightly in their deprecation policies. Unless a flag is explicitly prefixed or documented as "alpha" or "beta", it is considered GA.

CLI elements are effectively
part of the API to the system, but since they are not versioned in the same way as the REST API, the rules for deprecation are as follows:

**Rule #5a: CLI elements of user-facing components (e.g. kubectl) must function after their announced deprecation for no less than:**

* **GA: 12 months or 2 releases (whichever is longer)**
* **Beta: 3 months or 1 release (whichever is longer)**
* **Alpha: 0 releases**

**Rule #5b: CLI elements of admin-facing components (e.g. kubelet) must function after their announced deprecation for no less than:**

* **GA: 6 months or 1 release (whichever is longer)**
* **Beta: 3 months or 1 release (whichever is longer)**
* **Alpha: 0 releases**

**Rule #5c: Command line interface (CLI) elements cannot be deprecated in favor of less stable CLI elements**

Similar to Rule #3 for APIs, if an element of a command line interface is being replaced with an alternative implementation, such as by renaming an existing element, or by switching to use configuration sourced from a file instead of a command line argument, that recommended alternative must be of the same or higher stability level.

**Rule #6: Deprecated CLI elements must emit warnings (optionally disable) when used.**

## Deprecating a feature or behavior

Occasionally a Kubernetes release needs to deprecate some feature or behavior of the system that is not controlled by the API or CLI. In this case, the rules for deprecation are as follows:

**Rule #7: Deprecated behaviors must function for no less than 1 year after their announced deprecation.**

If the feature or behavior is being replaced with an alternative implementation that requires work to adopt the change, there should be an effort to simplify the transition whenever possible. If an alternative implementation is under Kubernetes organization control, the following rules apply:

**Rule #8: The feature or behavior must not be deprecated in favor of an alternative implementation that is less stable**

For example, a
generally available feature cannot be deprecated in favor of a Beta replacement. The Kubernetes project does, however, encourage users to adopt and transition to alternative implementations even before they reach the same maturity level. This is particularly important for exploring new use cases of a feature or getting early feedback on the replacement.

Alternative implementations may sometimes be external tools or products, for example a feature may move from the kubelet to a container runtime that is not under Kubernetes project control. In such cases, the rule cannot be applied, but there must be an effort to ensure that there is a transition path that does not compromise on components' maturity levels. In the example with container runtimes, the effort may involve trying to ensure that popular container runtimes have versions that offer the same level of stability while implementing that replacement behavior.

Deprecation rules for features and behaviors do not imply that all changes to the system are governed by this policy. These rules apply only to significant, user-visible behaviors which impact the correctness of applications running on Kubernetes or that impact the administration of Kubernetes clusters, and which are being removed entirely.

An exception to the above rule is _feature gates_. Feature gates are key=value pairs that allow for users to enable/disable experimental features.

Feature gates are intended to cover the development life cycle of a feature; they are not intended to be long-term APIs. As such, they are expected to be deprecated and removed after a feature becomes GA or is dropped.

As a feature moves through the stages, the associated feature gate evolves. The feature life cycle matched to its corresponding feature gate is:

* Alpha: the feature gate is disabled by default and can be enabled by the user.
* Beta: the feature gate is enabled by default and can be disabled by the user.
* GA: the feature gate is deprecated (see [Deprecation](#deprecation)) and becomes non-operational.
* GA, deprecation window complete: the feature gate is removed and calls to it are no longer accepted.

### Deprecation

Features can be removed at any point in the life cycle prior to GA. When features are removed prior to GA, their associated feature gates are also deprecated.

When an invocation tries to disable a non-operational feature gate, the call fails in order to avoid unsupported scenarios that might otherwise run silently.

In some cases, removing pre-GA features requires considerable time. Feature gates can remain operational until their associated feature is fully removed, at which point the feature gate itself can be deprecated.

When removing a feature gate for a GA feature also requires considerable time, calls to feature gates may remain operational if the feature gate has no effect on the feature, and if the feature gate causes no errors.

Features intended to be disabled by users should include a mechanism for disabling the feature in the associated feature gate.

Versioning for feature gates is different from the previously discussed components, therefore the rules for deprecation are as follows:

**Rule #9: Feature gates must be deprecated when the corresponding feature they control transitions a lifecycle stage as follows. Feature gates must function for no less than:**

* **Beta feature to GA: 6 months or 2 releases (whichever is longer)**
* **Beta feature to EOL: 3 months or 1 release (whichever is longer)**
* **Alpha feature to EOL: 0 releases**

**Rule #10: Deprecated feature gates must respond with a warning when used. When a feature gate is deprecated it must be documented in both the release notes and the corresponding CLI help. Both warnings and documentation must indicate whether a feature gate is non-operational.**

## Deprecating a metric

Each component of the Kubernetes control plane exposes metrics (usually the `/metrics` endpoint), which are typically ingested by
cluster administrators. Not all metrics are the same: some metrics are commonly used as SLIs or used to determine SLOs, and these tend to have greater import. Other metrics are more experimental in nature or are used primarily in the Kubernetes development process.

Accordingly, metrics fall under three stability classes (`ALPHA`, `BETA`, `STABLE`); this impacts removal of a metric during a Kubernetes release. These classes are determined by the perceived importance of the metric. The rules for deprecating and removing a metric are as follows:

**Rule #11a: Metrics, for the corresponding stability class, must function for no less than:**

* **STABLE: 4 releases or 12 months (whichever is longer)**
* **BETA: 2 releases or 8 months (whichever is longer)**
* **ALPHA: 0 releases**

**Rule #11b: Metrics, after their _announced deprecation_, must function for no less than:**

* **STABLE: 3 releases or 9 months (whichever is longer)**
* **BETA: 1 release or 4 months (whichever is longer)**
* **ALPHA: 0 releases**

Deprecated metrics will have their description text prefixed with a deprecation notice string "(Deprecated from x.y)" and a warning log will be emitted during metric registration. Like their stable undeprecated counterparts, deprecated metrics will be automatically registered to the metrics endpoint and therefore visible.

On a subsequent release (when the metric's `deprecatedVersion` is equal to _current_kubernetes_version - 3_), a deprecated metric will become a _hidden_ metric. Unlike their deprecated counterparts, hidden metrics will _no longer_ be automatically registered to the metrics endpoint (hence hidden). However, they can be explicitly enabled through a command line flag on the binary (`--show-hidden-metrics-for-version=`). This provides cluster admins an escape hatch to properly migrate off of a deprecated metric, if they were not able to react to the earlier deprecation warnings. Hidden metrics should be deleted after one release.

## Exceptions

No policy can cover every possible situation. This policy is a living document, and will evolve over time. In practice, there will be situations that do not fit neatly into this policy, or for which this policy becomes a serious impediment. Such situations should be discussed with SIGs and project leaders to find the best solutions for those specific cases, always bearing in mind that Kubernetes is committed to being a stable system that, as much as possible, never breaks users. Exceptions will always be announced in all relevant release notes."}
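The policy text describes an `apiserver_requested_deprecated_apis` gauge with `group`, `version`, `resource`, `subresource`, and `removed_release` labels exposed on the kube-apiserver `/metrics` endpoint. As a minimal sketch of how that signal could be consumed outside Prometheus, the snippet below pulls those label sets out of a text-format scrape; the sample payload and its label values are hypothetical, invented here for illustration.

```python
import re

# Hypothetical excerpt of a kube-apiserver /metrics scrape
# (Prometheus text exposition format); values are made up.
SCRAPE = """\
apiserver_requested_deprecated_apis{group="batch",version="v1beta1",resource="cronjobs",subresource="",removed_release="1.25"} 1
apiserver_requested_deprecated_apis{group="policy",version="v1beta1",resource="podsecuritypolicies",subresource="",removed_release="1.25"} 1
"""

METRIC = "apiserver_requested_deprecated_apis"
LABEL_RE = re.compile(r'(\w+)="([^"]*)"')  # matches label="value" pairs

def deprecated_apis(scrape: str) -> list[dict]:
    """Return one dict of labels per deprecated-API sample in the scrape."""
    out = []
    for line in scrape.splitlines():
        if line.startswith(METRIC + "{"):
            out.append(dict(LABEL_RE.findall(line)))
    return out

for api in deprecated_apis(SCRAPE):
    print(f'{api["group"]}/{api["version"]} {api["resource"]} '
          f'(removed in {api["removed_release"]})')
```

In practice one would join this against `apiserver_request_total` in PromQL, as the policy shows; this standalone parse is only meant to make the label structure concrete.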
{"questions":"kubernetes reference liggitt weight 45 reviewers title Deprecated API Migration Guide contenttype reference thockin lavalamp smarterclayton","answers":"---\nreviewers:\n- liggitt\n- lavalamp\n- thockin\n- smarterclayton\ntitle: \"Deprecated API Migration Guide\"\nweight: 45\ncontent_type: reference\n---\n\n<!-- overview -->\n\nAs the Kubernetes API evolves, APIs are periodically reorganized or upgraded.\nWhen APIs evolve, the old API is deprecated and eventually removed.\nThis page contains information you need to know when migrating from\ndeprecated API versions to newer and more stable API versions.\n\n<!-- body -->\n\n## Removed APIs by release\n\n### v1.32\n\nThe **v1.32** release will stop serving the following deprecated API versions:\n\n#### Flow control resources {#flowcontrol-resources-v132}\n\nThe **flowcontrol.apiserver.k8s.io\/v1beta3** API version of FlowSchema and PriorityLevelConfiguration will no longer be served in v1.32.\n\n* Migrate manifests and API clients to use the **flowcontrol.apiserver.k8s.io\/v1** API version, available since v1.29.\n* All existing persisted objects are accessible via the new API\n* Notable changes in **flowcontrol.apiserver.k8s.io\/v1**:\n  * The PriorityLevelConfiguration `spec.limited.nominalConcurrencyShares` field only defaults to 30 when unspecified, and an explicit value of 0 is not changed to 30.\n\n### v1.29\n\nThe **v1.29** release stopped serving the following deprecated API versions:\n\n#### Flow control resources {#flowcontrol-resources-v129}\n\nThe **flowcontrol.apiserver.k8s.io\/v1beta2** API version of FlowSchema and PriorityLevelConfiguration is no longer served as of v1.29.\n\n* Migrate manifests and API clients to use the **flowcontrol.apiserver.k8s.io\/v1** API version, available since v1.29, or the **flowcontrol.apiserver.k8s.io\/v1beta3** API version, available since v1.26.\n* All existing persisted objects are accessible via the new API\n* Notable changes in 
**flowcontrol.apiserver.k8s.io\/v1**:\n  * The PriorityLevelConfiguration `spec.limited.assuredConcurrencyShares` field is renamed to `spec.limited.nominalConcurrencyShares` and only defaults to 30 when unspecified, and an explicit value of 0 is not changed to 30.\n* Notable changes in **flowcontrol.apiserver.k8s.io\/v1beta3**:\n  * The PriorityLevelConfiguration `spec.limited.assuredConcurrencyShares` field is renamed to `spec.limited.nominalConcurrencyShares`\n\n### v1.27\n\nThe **v1.27** release stopped serving the following deprecated API versions:\n\n#### CSIStorageCapacity {#csistoragecapacity-v127}\n\nThe **storage.k8s.io\/v1beta1** API version of CSIStorageCapacity is no longer served as of v1.27.\n\n* Migrate manifests and API clients to use the **storage.k8s.io\/v1** API version, available since v1.24.\n* All existing persisted objects are accessible via the new API\n* No notable changes\n\n### v1.26\n\nThe **v1.26** release stopped serving the following deprecated API versions:\n\n#### Flow control resources {#flowcontrol-resources-v126}\n\nThe **flowcontrol.apiserver.k8s.io\/v1beta1** API version of FlowSchema and PriorityLevelConfiguration is no longer served as of v1.26.\n\n* Migrate manifests and API clients to use the **flowcontrol.apiserver.k8s.io\/v1beta2** API version.\n* All existing persisted objects are accessible via the new API\n* No notable changes\n\n#### HorizontalPodAutoscaler {#horizontalpodautoscaler-v126}\n\nThe **autoscaling\/v2beta2** API version of HorizontalPodAutoscaler is no longer served as of v1.26.\n\n* Migrate manifests and API clients to use the **autoscaling\/v2** API version, available since v1.23.\n* All existing persisted objects are accessible via the new API\n* Notable changes:\n  * `targetAverageUtilization` is replaced with `target.averageUtilization` and `target.type: Utilization`. 
See [Autoscaling on multiple metrics and custom metrics](\/docs\/tasks\/run-application\/horizontal-pod-autoscale-walkthrough\/#autoscaling-on-multiple-metrics-and-custom-metrics).\n### v1.25\n\nThe **v1.25** release stopped serving the following deprecated API versions:\n\n#### CronJob {#cronjob-v125}\n\nThe **batch\/v1beta1** API version of CronJob is no longer served as of v1.25.\n\n* Migrate manifests and API clients to use the **batch\/v1** API version, available since v1.21.\n* All existing persisted objects are accessible via the new API\n* No notable changes\n\n#### EndpointSlice {#endpointslice-v125}\n\nThe **discovery.k8s.io\/v1beta1** API version of EndpointSlice is no longer served as of v1.25.\n\n* Migrate manifests and API clients to use the **discovery.k8s.io\/v1** API version, available since v1.21.\n* All existing persisted objects are accessible via the new API\n* Notable changes in **discovery.k8s.io\/v1**:\n  * use per Endpoint `nodeName` field instead of deprecated `topology[\"kubernetes.io\/hostname\"]` field\n  * use per Endpoint `zone` field instead of deprecated `topology[\"topology.kubernetes.io\/zone\"]` field\n  * `topology` is replaced with the `deprecatedTopology` field which is not writable in v1\n\n#### Event {#event-v125}\n\nThe **events.k8s.io\/v1beta1** API version of Event is no longer served as of v1.25.\n\n* Migrate manifests and API clients to use the **events.k8s.io\/v1** API version, available since v1.19.\n* All existing persisted objects are accessible via the new API\n* Notable changes in **events.k8s.io\/v1**:\n  * `type` is limited to `Normal` and `Warning`\n  * `involvedObject` is renamed to `regarding`\n  * `action`, `reason`, `reportingController`, and `reportingInstance` are required\n    when creating new **events.k8s.io\/v1** Events\n  * use `eventTime` instead of the deprecated `firstTimestamp` field (which is renamed\n    to `deprecatedFirstTimestamp` and not permitted in new **events.k8s.io\/v1** Events)\n  * 
use `series.lastObservedTime` instead of the deprecated `lastTimestamp` field\n    (which is renamed to `deprecatedLastTimestamp` and not permitted in new **events.k8s.io\/v1** Events)\n  * use `series.count` instead of the deprecated `count` field\n    (which is renamed to `deprecatedCount` and not permitted in new **events.k8s.io\/v1** Events)\n  * use `reportingController` instead of the deprecated `source.component` field\n    (which is renamed to `deprecatedSource.component` and not permitted in new **events.k8s.io\/v1** Events)\n  * use `reportingInstance` instead of the deprecated `source.host` field\n    (which is renamed to `deprecatedSource.host` and not permitted in new **events.k8s.io\/v1** Events)\n\n#### HorizontalPodAutoscaler {#horizontalpodautoscaler-v125}\n\nThe **autoscaling\/v2beta1** API version of HorizontalPodAutoscaler is no longer served as of v1.25.\n\n* Migrate manifests and API clients to use the **autoscaling\/v2** API version, available since v1.23.\n* All existing persisted objects are accessible via the new API\n* Notable changes:\n  * `targetAverageUtilization` is replaced with `target.averageUtilization` and `target.type: Utilization`. 
See [Autoscaling on multiple metrics and custom metrics](\/docs\/tasks\/run-application\/horizontal-pod-autoscale-walkthrough\/#autoscaling-on-multiple-metrics-and-custom-metrics).\n\n#### PodDisruptionBudget {#poddisruptionbudget-v125}\n\nThe **policy\/v1beta1** API version of PodDisruptionBudget is no longer served as of v1.25.\n\n* Migrate manifests and API clients to use the **policy\/v1** API version, available since v1.21.\n* All existing persisted objects are accessible via the new API\n* Notable changes in **policy\/v1**:\n  * an empty `spec.selector` (`{}`) written to a `policy\/v1` PodDisruptionBudget selects all\n    pods in the namespace (in `policy\/v1beta1` an empty `spec.selector` selected no pods).\n    An unset `spec.selector` selects no pods in either API version.\n\n#### PodSecurityPolicy {#psp-v125}\n\nPodSecurityPolicy in the **policy\/v1beta1** API version is no longer served as of v1.25,\nand the PodSecurityPolicy admission controller will be removed.\n\nMigrate to [Pod Security Admission](\/docs\/concepts\/security\/pod-security-admission\/)\nor a [3rd party admission webhook](\/docs\/reference\/access-authn-authz\/extensible-admission-controllers\/).\nFor a migration guide, see [Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller](\/docs\/tasks\/configure-pod-container\/migrate-from-psp\/).\nFor more information on the deprecation, see [PodSecurityPolicy Deprecation: Past, Present, and Future](\/blog\/2021\/04\/06\/podsecuritypolicy-deprecation-past-present-and-future\/).\n\n#### RuntimeClass {#runtimeclass-v125}\n\nRuntimeClass in the **node.k8s.io\/v1beta1** API version is no longer served as of v1.25.\n\n* Migrate manifests and API clients to use the **node.k8s.io\/v1** API version, available since v1.20.\n* All existing persisted objects are accessible via the new API\n* No notable changes\n\n### v1.22\n\nThe **v1.22** release stopped serving the following deprecated API versions:\n\n#### Webhook resources 
{#webhook-resources-v122}\n\nThe **admissionregistration.k8s.io\/v1beta1** API version of MutatingWebhookConfiguration\nand ValidatingWebhookConfiguration is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **admissionregistration.k8s.io\/v1** API version, available since v1.16.\n* All existing persisted objects are accessible via the new APIs\n* Notable changes:\n  * `webhooks[*].failurePolicy` default changed from `Ignore` to `Fail` for v1\n  * `webhooks[*].matchPolicy` default changed from `Exact` to `Equivalent` for v1\n  * `webhooks[*].timeoutSeconds` default changed from `30s` to `10s` for v1\n  * `webhooks[*].sideEffects` default value is removed, and the field made required,\n    and only `None` and `NoneOnDryRun` are permitted for v1\n  * `webhooks[*].admissionReviewVersions` default value is removed and the field made\n    required for v1 (supported versions for AdmissionReview are `v1` and `v1beta1`)\n  * `webhooks[*].name` must be unique in the list for objects created via `admissionregistration.k8s.io\/v1`\n\n#### CustomResourceDefinition {#customresourcedefinition-v122}\n\nThe **apiextensions.k8s.io\/v1beta1** API version of CustomResourceDefinition is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **apiextensions.k8s.io\/v1** API version, available since v1.16.\n* All existing persisted objects are accessible via the new API\n* Notable changes:\n  * `spec.scope` is no longer defaulted to `Namespaced` and must be explicitly specified\n  * `spec.version` is removed in v1; use `spec.versions` instead\n  * `spec.validation` is removed in v1; use `spec.versions[*].schema` instead\n  * `spec.subresources` is removed in v1; use `spec.versions[*].subresources` instead\n  * `spec.additionalPrinterColumns` is removed in v1; use `spec.versions[*].additionalPrinterColumns` instead\n  * `spec.conversion.webhookClientConfig` is moved to `spec.conversion.webhook.clientConfig` in v1\n  * 
`spec.conversion.conversionReviewVersions` is moved to `spec.conversion.webhook.conversionReviewVersions` in v1\n  * `spec.versions[*].schema.openAPIV3Schema` is now required when creating v1 CustomResourceDefinition objects,\n    and must be a [structural schema](\/docs\/tasks\/extend-kubernetes\/custom-resources\/custom-resource-definitions\/#specifying-a-structural-schema)\n  * `spec.preserveUnknownFields: true` is disallowed when creating v1 CustomResourceDefinition objects;\n    it must be specified within schema definitions as `x-kubernetes-preserve-unknown-fields: true`\n  * In `additionalPrinterColumns` items, the `JSONPath` field was renamed to `jsonPath` in v1\n    (fixes [#66531](https:\/\/github.com\/kubernetes\/kubernetes\/issues\/66531))\n\n#### APIService {#apiservice-v122}\n\nThe **apiregistration.k8s.io\/v1beta1** API version of APIService is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **apiregistration.k8s.io\/v1** API version, available since v1.10.\n* All existing persisted objects are accessible via the new API\n* No notable changes\n\n#### TokenReview {#tokenreview-v122}\n\nThe **authentication.k8s.io\/v1beta1** API version of TokenReview is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **authentication.k8s.io\/v1** API version, available since v1.6.\n* No notable changes\n\n#### SubjectAccessReview resources {#subjectaccessreview-resources-v122}\n\nThe **authorization.k8s.io\/v1beta1** API version of LocalSubjectAccessReview,\nSelfSubjectAccessReview, SubjectAccessReview, and SelfSubjectRulesReview is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **authorization.k8s.io\/v1** API version, available since v1.6.\n* Notable changes:\n  * `spec.group` was renamed to `spec.groups` in v1 (fixes [#32709](https:\/\/github.com\/kubernetes\/kubernetes\/issues\/32709))\n\n#### CertificateSigningRequest {#certificatesigningrequest-v122}\n\nThe 
**certificates.k8s.io\/v1beta1** API version of CertificateSigningRequest is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **certificates.k8s.io\/v1** API version, available since v1.19.\n* All existing persisted objects are accessible via the new API\n* Notable changes in `certificates.k8s.io\/v1`:\n  * For API clients requesting certificates:\n    * `spec.signerName` is now required\n      (see [known Kubernetes signers](\/docs\/reference\/access-authn-authz\/certificate-signing-requests\/#kubernetes-signers)),\n      and requests for `kubernetes.io\/legacy-unknown` are not allowed to be created via the `certificates.k8s.io\/v1` API\n    * `spec.usages` is now required, may not contain duplicate values, and must only contain known usages\n  * For API clients approving or signing certificates:\n    * `status.conditions` may not contain duplicate types\n    * `status.conditions[*].status` is now required\n    * `status.certificate` must be PEM-encoded, and contain only `CERTIFICATE` blocks\n\n#### Lease {#lease-v122}\n\nThe **coordination.k8s.io\/v1beta1** API version of Lease is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **coordination.k8s.io\/v1** API version, available since v1.14.\n* All existing persisted objects are accessible via the new API\n* No notable changes\n\n#### Ingress {#ingress-v122}\n\nThe **extensions\/v1beta1** and **networking.k8s.io\/v1beta1** API versions of Ingress is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **networking.k8s.io\/v1** API version, available since v1.19.\n* All existing persisted objects are accessible via the new API\n* Notable changes:\n  * `spec.backend` is renamed to `spec.defaultBackend`\n  * The backend `serviceName` field is renamed to `service.name`\n  * Numeric backend `servicePort` fields are renamed to `service.port.number`\n  * String backend `servicePort` fields are renamed to `service.port.name`\n  * 
`pathType` is now required for each specified path. Options are `Prefix`,\n    `Exact`, and `ImplementationSpecific`. To match the undefined `v1beta1` behavior, use `ImplementationSpecific`.\n\n#### IngressClass {#ingressclass-v122}\n\nThe **networking.k8s.io\/v1beta1** API version of IngressClass is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **networking.k8s.io\/v1** API version, available since v1.19.\n* All existing persisted objects are accessible via the new API\n* No notable changes\n\n#### RBAC resources {#rbac-resources-v122}\n\nThe **rbac.authorization.k8s.io\/v1beta1** API version of ClusterRole, ClusterRoleBinding,\nRole, and RoleBinding is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **rbac.authorization.k8s.io\/v1** API version, available since v1.8.\n* All existing persisted objects are accessible via the new APIs\n* No notable changes\n\n#### PriorityClass {#priorityclass-v122}\n\nThe **scheduling.k8s.io\/v1beta1** API version of PriorityClass is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **scheduling.k8s.io\/v1** API version, available since v1.14.\n* All existing persisted objects are accessible via the new API\n* No notable changes\n\n#### Storage resources {#storage-resources-v122}\n\nThe **storage.k8s.io\/v1beta1** API version of CSIDriver, CSINode, StorageClass, and VolumeAttachment is no longer served as of v1.22.\n\n* Migrate manifests and API clients to use the **storage.k8s.io\/v1** API version\n  * CSIDriver is available in **storage.k8s.io\/v1** since v1.19.\n  * CSINode is available in **storage.k8s.io\/v1** since v1.17\n  * StorageClass is available in **storage.k8s.io\/v1** since v1.6\n  * VolumeAttachment is available in **storage.k8s.io\/v1** v1.13\n* All existing persisted objects are accessible via the new APIs\n* No notable changes\n\n### v1.16\n\nThe **v1.16** release stopped serving the following deprecated API 
versions:\n\n#### NetworkPolicy {#networkpolicy-v116}\n\nThe **extensions\/v1beta1** API version of NetworkPolicy is no longer served as of v1.16.\n\n* Migrate manifests and API clients to use the **networking.k8s.io\/v1** API version, available since v1.8.\n* All existing persisted objects are accessible via the new API\n\n#### DaemonSet {#daemonset-v116}\n\nThe **extensions\/v1beta1** and **apps\/v1beta2** API versions of DaemonSet are no longer served as of v1.16.\n\n* Migrate manifests and API clients to use the **apps\/v1** API version, available since v1.9.\n* All existing persisted objects are accessible via the new API\n* Notable changes:\n  * `spec.templateGeneration` is removed\n  * `spec.selector` is now required and immutable after creation; use the existing\n    template labels as the selector for seamless upgrades\n  * `spec.updateStrategy.type` now defaults to `RollingUpdate`\n    (the default in `extensions\/v1beta1` was `OnDelete`)\n\n#### Deployment {#deployment-v116}\n\nThe **extensions\/v1beta1**, **apps\/v1beta1**, and **apps\/v1beta2** API versions of Deployment are no longer served as of v1.16.\n\n* Migrate manifests and API clients to use the **apps\/v1** API version, available since v1.9.\n* All existing persisted objects are accessible via the new API\n* Notable changes:\n  * `spec.rollbackTo` is removed\n  * `spec.selector` is now required and immutable after creation; use the existing\n    template labels as the selector for seamless upgrades\n  * `spec.progressDeadlineSeconds` now defaults to `600` seconds\n    (the default in `extensions\/v1beta1` was no deadline)\n  * `spec.revisionHistoryLimit` now defaults to `10`\n    (the default in `apps\/v1beta1` was `2`, the default in `extensions\/v1beta1` was to retain all)\n  * `maxSurge` and `maxUnavailable` now default to `25%`\n    (the default in `extensions\/v1beta1` was `1`)\n\n#### StatefulSet {#statefulset-v116}\n\nThe **apps\/v1beta1** and **apps\/v1beta2** API versions of 
StatefulSet are no longer served as of v1.16.\n\n* Migrate manifests and API clients to use the **apps\/v1** API version, available since v1.9.\n* All existing persisted objects are accessible via the new API\n* Notable changes:\n  * `spec.selector` is now required and immutable after creation;\n    use the existing template labels as the selector for seamless upgrades\n  * `spec.updateStrategy.type` now defaults to `RollingUpdate`\n    (the default in `apps\/v1beta1` was `OnDelete`)\n\n#### ReplicaSet {#replicaset-v116}\n\nThe **extensions\/v1beta1**, **apps\/v1beta1**, and **apps\/v1beta2** API versions of ReplicaSet are no longer served as of v1.16.\n\n* Migrate manifests and API clients to use the **apps\/v1** API version, available since v1.9.\n* All existing persisted objects are accessible via the new API\n* Notable changes:\n  * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades\n\n#### PodSecurityPolicy {#psp-v116}\n\nThe **extensions\/v1beta1** API version of PodSecurityPolicy is no longer served as of v1.16.\n\n* Migrate manifests and API client to use the **policy\/v1beta1** API version, available since v1.10.\n* Note that the **policy\/v1beta1** API version of PodSecurityPolicy will be removed in v1.25.\n\n## What to do\n\n### Test with deprecated APIs disabled\n\nYou can test your clusters by starting an API server with specific API versions disabled\nto simulate upcoming removals. 
Add the following flag to the API server startup arguments:\n\n`--runtime-config=<group>\/<version>=false`\n\nFor example:\n\n`--runtime-config=admissionregistration.k8s.io\/v1beta1=false,apiextensions.k8s.io\/v1beta1,...`\n\n### Locate use of deprecated APIs\n\nUse [client warnings, metrics, and audit information available in 1.19+](\/blog\/2020\/09\/03\/warnings\/#deprecation-warnings)\nto locate use of deprecated APIs.\n\n### Migrate to non-deprecated APIs\n\n* Update custom integrations and controllers to call the non-deprecated APIs\n* Change YAML files to reference the non-deprecated APIs\n\n  You can use the `kubectl convert` command to automatically convert an existing object:\n\n  `kubectl convert -f <file> --output-version <group>\/<version>`.\n\n  For example, to convert an older Deployment to `apps\/v1`, you can run:\n\n  `kubectl convert -f .\/my-deployment.yaml --output-version apps\/v1`\n\n  This conversion may use non-ideal default values. To learn more about a specific\n  resource, check the Kubernetes [API reference](\/docs\/reference\/kubernetes-api\/).\n  \n  \n  The `kubectl convert` tool is not installed by default, although\n  in fact it once was part of `kubectl` itself. 
For more details, you can read the\n  [deprecation and removal issue](https:\/\/github.com\/kubernetes\/kubectl\/issues\/725)\n  for the built-in subcommand.\n  \n  To learn how to set up `kubectl convert` on your computer, visit the page that is right for your \n  operating system:\n  [Linux](\/docs\/tasks\/tools\/install-kubectl-linux\/#install-kubectl-convert-plugin),\n  [macOS](\/docs\/tasks\/tools\/install-kubectl-macos\/#install-kubectl-convert-plugin), or\n  [Windows](\/docs\/tasks\/tools\/install-kubectl-windows\/#install-kubectl-convert-plugin).\n  ","site":"kubernetes reference"}
{"questions":"kubernetes reference title Server Side Apply liggitt contenttype concept apelisse reviewers weight 25 lavalamp smarterclayton","answers":"---\ntitle: Server-Side Apply\nreviewers:\n- smarterclayton\n- apelisse\n- lavalamp\n- liggitt\ncontent_type: concept\nweight: 25\n---\n\n<!-- overview -->\n\n\n\nKubernetes supports multiple appliers collaborating to manage the fields\nof a single [object](\/docs\/concepts\/overview\/working-with-objects\/).\n\nServer-Side Apply provides an optional mechanism for your cluster's control plane to track\nchanges to an object's fields. At the level of a specific resource, Server-Side\nApply records and tracks information about control over the fields of that object.\n\nServer-Side Apply helps users and controllers\nmanage their resources through declarative configuration. Clients can create and modify objects\ndeclaratively by submitting their _fully specified intent_.\n\nA fully specified intent is a partial object that only includes the fields and\nvalues for which the user has an opinion. That intent either creates a new\nobject (using default values for unspecified fields), or is\n[combined](#merge-strategy), by the API server, with the existing object.\n\n[Comparison with Client-Side Apply](#comparison-with-client-side-apply) explains\nhow Server-Side Apply differs from the original, client-side `kubectl apply`\nimplementation.\n\n<!-- body -->\n\n## Field management\n\nThe Kubernetes API server tracks _managed fields_ for all newly created objects.\n\nWhen trying to apply an object, fields that have a different value and are owned by\nanother [manager](#managers) will result in a [conflict](#conflicts). 
This is done\nin order to signal that the operation might undo another collaborator's changes.\nWrites to objects with managed fields can be forced, in which case the value of any\nconflicted field will be overridden, and the ownership will be transferred.\n\nWhenever a field's value does change, ownership moves from its current manager to the\nmanager making the change.\n\nFor a user to manage a field, in the Server-Side Apply sense, means that the\nuser relies on and expects the value of the field not to change. The user who\nlast made an assertion about the value of a field will be recorded as the\ncurrent field manager. This can be done by changing the field manager\ndetails explicitly using HTTP `POST` (**create**), `PUT` (**update**), or non-apply\n`PATCH` (**patch**). You can also declare and record a field manager\nby including a value for that field in a Server-Side Apply operation.\n\nA Server-Side Apply **patch** request requires the client to provide its identity\nas a [field manager](#managers). When using Server-Side Apply, trying to change a\nfield that is controlled by a different manager results in a rejected\nrequest unless the client forces an override.\nFor details of overrides, see [Conflicts](#conflicts).\n\nWhen two or more appliers set a field to the same value, they share ownership of\nthat field. Any subsequent attempt to change the value of the shared field, by any of\nthe appliers, results in a conflict. 
Shared field owners may give up ownership\nof a field by making a Server-Side Apply **patch** request that doesn't include\nthat field.\n\nField management details are stored in a `managedFields` field that is part of an\nobject's [`metadata`](\/docs\/reference\/kubernetes-api\/common-definitions\/object-meta\/).\n\nIf you remove a field from a manifest and apply that manifest, Server-Side\nApply checks if there are any other field managers that also own the field.\nIf the field is not owned by any other field managers, it is either deleted\nfrom the live object or reset to its default value, if it has one.\nThe same rule applies to associative list or map items.\n\nCompared to the (legacy)\n[`kubectl.kubernetes.io\/last-applied-configuration`](\/docs\/reference\/labels-annotations-taints\/#kubectl-kubernetes-io-last-applied-configuration)\nannotation managed by `kubectl`, Server-Side Apply uses a more declarative\napproach that tracks a user's (or client's) field management, rather than\na user's last applied state. As a side effect of using Server-Side Apply,\ninformation about which field manager manages each field in an object also\nbecomes available.\n\n### Example {#ssa-example-configmap}\n\nA simple example of an object created using Server-Side Apply could look like this:\n\n\n`kubectl get` omits managed fields by default. 
\nAdd `--show-managed-fields` to show `managedFields` when the output format is either `json` or `yaml`.\n\n\n```yaml\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: test-cm\n  namespace: default\n  labels:\n    test-label: test\n  managedFields:\n  - manager: kubectl\n    operation: Apply # note capitalization: \"Apply\" (or \"Update\")\n    apiVersion: v1\n    time: \"2010-10-10T00:00:00Z\"\n    fieldsType: FieldsV1\n    fieldsV1:\n      f:metadata:\n        f:labels:\n          f:test-label: {}\n      f:data:\n        f:key: {}\ndata:\n  key: some value\n```\n\nThat example ConfigMap object contains a single field management record in\n`.metadata.managedFields`. The field management record consists of basic information\nabout the managing entity itself, plus details about the fields being managed and\nthe relevant operation (`Apply` or `Update`). If the request that last changed that\nfield was a Server-Side Apply **patch** then the value of `operation` is `Apply`;\notherwise, it is `Update`.\n\nThere is another possible outcome. A client could submit an invalid request\nbody. If the fully specified intent does not produce a valid object, the\nrequest fails.\n\nIt is however possible to change `.metadata.managedFields` through an\n**update**, or through a **patch** operation that does not use Server-Side Apply.\nDoing so is highly discouraged, but might be a reasonable option to try if,\nfor example, the `.metadata.managedFields` get into an inconsistent state\n(which should not happen in normal operations).\n\nThe format of `managedFields` is [described](\/docs\/reference\/kubernetes-api\/common-definitions\/object-meta\/#System)\nin the Kubernetes API reference.\n\n\nThe `.metadata.managedFields` field is managed by the API server.\nYou should avoid updating it manually.\n\n\n### Conflicts\n\nA _conflict_ is a special status error that occurs when an `Apply` operation tries\nto change a field that another manager also claims to manage. 
This prevents an\napplier from unintentionally overwriting the value set by another user. When\nthis occurs, the applier has 3 options to resolve the conflicts:\n\n* **Overwrite value, become sole manager:** If overwriting the value was\n  intentional (or if the applier is an automated process like a controller) the\n  applier should set the `force` query parameter to true (for `kubectl apply`,\n  you use the `--force-conflicts` command line parameter), and make the request\n  again. This forces the operation to succeed, changes the value of the field,\n  and removes the field from all other managers' entries in `managedFields`.\n\n* **Don't overwrite value, give up management claim:** If the applier doesn't\n  care about the value of the field any more, the applier can remove it from their\n  local model of the resource, and make a new request with that particular field\n  omitted. This leaves the value unchanged, and causes the field to be removed\n  from the applier's entry in `managedFields`.\n\n* **Don't overwrite value, become shared manager:** If the applier still cares\n  about the value of a field, but doesn't want to overwrite it, they can\n  change the value of that field in their local model of the resource so as to\n  match the value of the object on the server, and then make a new request that\n  takes into account that local update. Doing so leaves the value unchanged,\n  and causes that field's management to be shared by the applier along with all\n  other field managers that already claimed to manage it.\n\n### Field managers {#managers}\n\nManagers identify distinct workflows that are modifying the object (especially\nuseful on conflicts!), and can be specified through the\n[`fieldManager`](\/docs\/reference\/kubernetes-api\/common-parameters\/common-parameters\/#fieldManager)\nquery parameter as part of a modifying request. 
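Because each entry in `.metadata.managedFields` names its manager, you can summarize field ownership for a saved object with a short `jq` filter. A sketch over an inline sample modeled on this page's ConfigMap examples (the exact fields are assumptions):

```shell
# Summarize which manager owns which top-level field groups.
# The object below is a hand-written sample, not live cluster output.
cat > test-cm.json <<'EOF'
{
  "metadata": {
    "managedFields": [
      {"manager": "kubectl", "operation": "Apply",
       "fieldsV1": {"f:metadata": {"f:labels": {"f:test-label": {}}}}},
      {"manager": "kube-controller-manager", "operation": "Update",
       "fieldsV1": {"f:data": {"f:key": {}}}}
    ]
  }
}
EOF

# One line per manager: name, operation, and the field groups it owns.
jq -r '.metadata.managedFields[]
       | "\(.manager) (\(.operation)): \(.fieldsV1 | keys | join(", "))"' test-cm.json
# kubectl (Apply): f:metadata
# kube-controller-manager (Update): f:data
```

On a live cluster, the same filter works on the output of `kubectl get configmap test-cm -o json --show-managed-fields`.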
When you Apply to a resource,\nthe `fieldManager` parameter is required.\nFor other updates, the API server infers a field manager identity from the\n \"User-Agent:\" HTTP header (if present).\n\nWhen you use the `kubectl` tool to perform a Server-Side Apply operation, `kubectl`\nsets the manager identity to `\"kubectl\"` by default.\n\n## Serialization\n\nAt the protocol level, Kubernetes represents Server-Side Apply message bodies\nas [YAML](https:\/\/yaml.org\/), with the media type `application\/apply-patch+yaml`.\n\n\nWhether you are submitting JSON data or YAML data, use\n`application\/apply-patch+yaml` as the `Content-Type` header value.\n\nAll JSON documents are valid YAML. However, Kubernetes has a bug where it uses a YAML\nparser that does not fully implement the YAML specification. Some JSON escapes may\nnot be recognized.\n\n\nThe serialization is the same as for Kubernetes objects, with the exception that\nclients are not required to send a complete object.\n\nHere's an example of a Server-Side Apply message body (fully specified intent):\n```yaml\n{\n  \"apiVersion\": \"v1\",\n  \"kind\": \"ConfigMap\"\n}\n```\n\n(this would make a no-change update, provided that it was sent as the body\nof a **patch** request to a valid `v1\/configmaps` resource, and with the\nappropriate request `Content-Type`).\n\n\n## Operations in scope for field management {#apply-and-update}\n\nThe Kubernetes API operations where field management is considered are:\n\n1. Server-Side Apply (HTTP `PATCH`, with content type `application\/apply-patch+yaml`)\n2. 
Replacing an existing object (**update** to Kubernetes; `PUT` at the HTTP level)

Both operations update `.metadata.managedFields`, but behave a little differently.

Unless you specify a forced override, an apply operation that encounters field-level
conflicts always fails; by contrast, if you make a change using **update** that would
affect a managed field, a conflict never provokes failure of the operation.

All Server-Side Apply **patch** requests are required to identify themselves by providing a
`fieldManager` query parameter, while the query parameter is optional for **update**
operations. Finally, when using the `Apply` operation you cannot define `managedFields` in
the body of the request that you submit.

An example object with multiple managers could look like this:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-cm
  namespace: default
  labels:
    test-label: test
  managedFields:
  - manager: kubectl
    operation: Apply
    time: '2019-03-30T15:00:00.000Z'
    apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:test-label: {}
  - manager: kube-controller-manager
    operation: Update
    apiVersion: v1
    time: '2019-03-30T16:00:00.000Z'
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        f:key: {}
data:
  key: new value
```

In this example, a second operation was run as an **update** by the manager called
`kube-controller-manager`. The update request succeeded and changed a value in the data
field, which caused that field's management to change to the `kube-controller-manager`.

If this update had instead been attempted using Server-Side Apply, the request
would have failed due to conflicting ownership.

## Merge strategy

The merging strategy, implemented with Server-Side Apply, provides a generally
more stable object lifecycle.
Server-Side Apply tries to merge fields based on
the actor who manages them instead of overruling based on values. This way
multiple actors can update the same object without causing unexpected interference.

When a user sends a _fully-specified intent_ object to the Server-Side Apply
endpoint, the server merges it with the live object favoring the value from the
request body if it is specified in both places. If the set of items present in
the applied config is not a superset of the items applied by the same user last
time, each missing item not managed by any other appliers is removed. For
more information about how an object's schema is used to make decisions when
merging, see
[sigs.k8s.io/structured-merge-diff](https://sigs.k8s.io/structured-merge-diff).

The Kubernetes API (and the Go code that implements that API for Kubernetes) allows
defining _merge strategy markers_. These markers describe the merge strategy supported
for fields within Kubernetes objects.
For a CustomResourceDefinition, you can set these markers when you define the
custom resource.

| Golang marker | OpenAPI extension | Possible values | Description |
| --- | --- | --- | --- |
| `//+listType` | `x-kubernetes-list-type` | `atomic`/`set`/`map` | Applicable to lists. `set` applies to lists that include only scalar elements. These elements must be unique. `map` applies to lists of nested types only. The key values (see `listMapKey`) must be unique in the list. `atomic` can apply to any list. If configured as `atomic`, the entire list is replaced during merge. At any point in time, a single manager owns the list. If `set` or `map`, different managers can manage entries separately. |
| `//+listMapKey` | `x-kubernetes-list-map-keys` | List of field names, e.g. `["port", "protocol"]` | Only applicable when `+listType=map`. A list of field names whose values uniquely identify entries in the list. While there can be multiple keys, `listMapKey` is singular because keys need to be specified individually in the Go type. The key fields must be scalars. |
| `//+mapType` | `x-kubernetes-map-type` | `atomic`/`granular` | Applicable to maps. `atomic` means that the map can only be entirely replaced by a single manager. `granular` means that the map supports separate managers updating individual fields. |
| `//+structType` | `x-kubernetes-map-type` | `atomic`/`granular` | Applicable to structs; otherwise same usage and OpenAPI annotation as `//+mapType`. |

If `listType` is missing, the API server interprets a
`patchStrategy=merge` marker as a `listType=map` and the
corresponding `patchMergeKey` marker as a `listMapKey`.

The `atomic` list type is recursive.

(In the [Go](https://go.dev/) code for Kubernetes, these markers are specified as
comments and code authors need not repeat them as field tags).

## Custom resources and Server-Side Apply

By default, Server-Side Apply treats custom resources as unstructured data. All
keys are treated the same as struct fields, and all lists are considered atomic.

If the CustomResourceDefinition defines a
[schema](/docs/reference/generated/kubernetes-api/#jsonschemaprops-v1-apiextensions-k8s-io)
that contains annotations as defined in the previous [Merge Strategy](#merge-strategy)
section, these annotations will be used when merging objects of this
type.

### Compatibility across topology changes

On rare occurrences, the author for a CustomResourceDefinition (CRD) or built-in
type may want to change the specific topology of a field in their resource,
without incrementing its API version. Changing the topology of types,
by upgrading the cluster or updating the CRD, has different consequences when
updating existing objects.
There are two categories of changes: when a field goes from
`map`/`set`/`granular` to `atomic`, and the other way around.

When the `listType`, `mapType`, or `structType` changes from
`map`/`set`/`granular` to `atomic`, the whole list, map, or struct of
existing objects will end up being owned by actors who owned an element
of these types. This means that any further change to these objects
would cause a conflict.

When a `listType`, `mapType`, or `structType` changes from `atomic` to
`map`/`set`/`granular`, the API server is unable to infer the new
ownership of these fields. Because of that, no conflict will be produced
when objects have these fields updated. For that reason, it is not
recommended to change a type from `atomic` to `map`/`set`/`granular`.

Take, for example, the custom resource:

```yaml
---
apiVersion: example.com/v1
kind: Foo
metadata:
  name: foo-sample
  managedFields:
  - manager: "manager-one"
    operation: Apply
    apiVersion: example.com/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:data: {}
spec:
  data:
    key1: val1
    key2: val2
```

Before `spec.data` gets changed from `atomic` to `granular`,
`manager-one` owns the field `spec.data`, and all the fields within it
(`key1` and `key2`). When the CRD gets changed to make `spec.data`
`granular`, `manager-one` continues to own the top-level field
`spec.data` (meaning no other managers can delete the map called `data`
without a conflict), but it no longer owns `key1` and `key2`, so another
manager can then modify or delete those fields without conflict.

## Using Server-Side Apply in a controller

As a developer of a controller, you can use Server-Side Apply as a way to
simplify the update logic of your controller.
The main differences with a
read-modify-write and/or patch are the following:

* the applied object must contain all the fields that the controller cares about.
* there is no way to remove fields that haven't been applied by the controller
  before (the controller can still send a **patch** or **update** for these use cases).
* the object doesn't have to be read beforehand; `resourceVersion` doesn't have
  to be specified.

It is strongly recommended for controllers to always force conflicts on objects that
they own and manage, since they might not be able to resolve or act on these conflicts.

## Transferring ownership

In addition to the concurrency controls provided by [conflict resolution](#conflicts),
Server-Side Apply provides ways to perform coordinated
field ownership transfers from users to controllers.

This is best explained by example. Let's look at how to safely transfer
ownership of the `replicas` field from a user to a controller while enabling
automatic horizontal scaling for a Deployment, using the HorizontalPodAutoscaler
resource and its accompanying controller.

Say a user has defined a Deployment with `replicas` set to the desired value
in a manifest, and has created the Deployment using Server-Side Apply, like so:

```shell
kubectl apply -f https://k8s.io/examples/application/ssa/nginx-deployment.yaml --server-side
```

Then later, automatic scaling is enabled for the Deployment; for example:

```shell
kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10
```

Now, the user would like to remove `replicas` from their configuration, so they
don't accidentally fight with the HorizontalPodAutoscaler (HPA) and its controller.
However, there is a race: it might take some time before the HPA feels the need
to adjust `.spec.replicas`; if the user removes `.spec.replicas` before the HPA writes
to the field and becomes its owner, then the API server would set `.spec.replicas` to
1 (the
default replica count for Deployment).
This is not what the user wants to happen, even temporarily - it might well degrade
a running workload.

There are two solutions:

- (basic) Leave `replicas` in the configuration; when the HPA eventually writes to that
  field, the system gives the user a conflict over it. At that point, it is safe
  to remove it from the configuration.

- (more advanced) If, however, the user doesn't want to wait, for example
  because they want to keep the cluster legible to their colleagues, then they
  can take the following steps to make it safe to remove `replicas` from their
  configuration:

First, the user defines a new manifest containing only the `replicas` field:

```yaml
# Save this file as 'nginx-deployment-replicas-only.yaml'.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
```

The YAML file for SSA in this case only contains the fields you want to change.
You are not supposed to provide a fully compliant Deployment manifest if you only
want to modify the `spec.replicas` field using SSA.

The user applies that manifest using a private field manager name. In this example,
the user picked `handover-to-hpa`:

```shell
kubectl apply -f nginx-deployment-replicas-only.yaml \
  --server-side --field-manager=handover-to-hpa \
  --validate=false
```

If the apply results in a conflict with the HPA controller, then do nothing. The
conflict indicates the controller has claimed the field earlier in the
process than it sometimes does.

At this point the user may remove the `replicas` field from their manifest.

Note that whenever the HPA controller sets the `replicas` field to a new value,
the temporary field manager will no longer own any fields and will be
automatically deleted.
No further clean up is required.

### Transferring ownership between managers

Field managers can transfer ownership of a field between each other by setting the field
to the same value in both of their applied configurations, causing them to share
ownership of the field. Once the managers share ownership of the field, one of them
can remove the field from their applied configuration to give up ownership and
complete the transfer to the other field manager.

## Comparison with Client-Side Apply

Server-Side Apply is meant both as a replacement for the original client-side
implementation of the `kubectl apply` subcommand, and as a simple and effective
mechanism for controllers to enact their changes.

Compared to the `last-applied` annotation managed by `kubectl`, Server-Side
Apply uses a more declarative approach, which tracks an object's field management,
rather than a user's last applied state. This means that as a side effect of
using Server-Side Apply, information about which field manager manages each
field in an object also becomes available.

A consequence of the conflict detection and resolution implemented by Server-Side
Apply is that an applier always has up to date field values in their local
state. If they don't, they get a conflict the next time they apply. Any of the
three options to resolve conflicts results in the applied configuration being an
up to date subset of the object on the server's fields.

This is different from Client-Side Apply, where outdated values which have been
overwritten by other users are left in an applier's local config.
These values
only become accurate when the user updates that specific field, if ever, and an
applier has no way of knowing whether their next apply will overwrite other
users' changes.

Another difference is that an applier using Client-Side Apply is unable to
change the API version they are using, but Server-Side Apply supports this use
case.

## Migration between client-side and server-side apply

### Upgrading from client-side apply to server-side apply

Client-side apply users who manage a resource with `kubectl apply` can start
using server-side apply with the following flag.

```shell
kubectl apply --server-side [--dry-run=server]
```

By default, field management of the object transfers from client-side apply to
kubectl server-side apply, without encountering conflicts.

Keep the `last-applied-configuration` annotation up to date.
The annotation is used to infer which fields are managed by client-side apply.
Any fields not managed by client-side apply raise conflicts.

For example, if you used `kubectl scale` to update the replicas field after
client-side apply, then this field is not owned by client-side apply and
creates conflicts on `kubectl apply --server-side`.

This behavior applies to server-side apply with the `kubectl` field manager.
As an exception, you can opt-out of this behavior by specifying a different,
non-default field manager, as seen in the following example.
The default field
manager for kubectl server-side apply is `kubectl`.

```shell
kubectl apply --server-side --field-manager=my-manager [--dry-run=server]
```

### Downgrading from server-side apply to client-side apply

If you manage a resource with `kubectl apply --server-side`,
you can downgrade to client-side apply directly with `kubectl apply`.

Downgrading works because kubectl Server-Side Apply keeps the
`last-applied-configuration` annotation up-to-date if you use
`kubectl apply`.

This behavior applies to Server-Side Apply with the `kubectl` field manager.
As an exception, you can opt-out of this behavior by specifying a different,
non-default field manager, as seen in the following example. The default field
manager for kubectl server-side apply is `kubectl`.

```shell
kubectl apply --server-side --field-manager=my-manager [--dry-run=server]
```

## API implementation

The `PATCH` verb for a resource that supports Server-Side Apply can accept the
unofficial `application/apply-patch+yaml` content type. Users of Server-Side
Apply can send partially specified objects as YAML as the body of a `PATCH` request
to the URI of a resource. When applying a configuration, you should always include all the
fields that are important to the outcome (such as a desired state) that you want to define.

All JSON messages are valid YAML.
Some clients specify Server-Side Apply requests using YAML
request bodies that are also valid JSON.

### Access control and permissions {#rbac-and-permissions}

Since Server-Side Apply is a type of `PATCH`, a principal (such as a Role for Kubernetes
RBAC) requires the **patch** permission to
edit existing resources, and also needs the **create** verb permission in order to create
new resources with Server-Side Apply.

## Clearing `managedFields`

It is possible to strip all `managedFields` from an object by overwriting them
using a **patch** (JSON Merge Patch, Strategic Merge Patch, JSON Patch), or
through an **update** (HTTP `PUT`); in other words, through every write operation
other than **apply**. This can be done by overwriting the `managedFields` field
with an empty entry. Two examples are:

```console
PATCH /api/v1/namespaces/default/configmaps/example-cm
Accept: application/json
Content-Type: application/merge-patch+json

{
  "metadata": {
    "managedFields": [
      {}
    ]
  }
}
```

```console
PATCH /api/v1/namespaces/default/configmaps/example-cm
Accept: application/json
Content-Type: application/json-patch+json
If-Match: 1234567890123456789

[{"op": "replace", "path": "/metadata/managedFields", "value": [{}]}]
```

This will overwrite the `managedFields` with a list containing a single empty
entry that then results in the `managedFields` being stripped entirely from the
object. Note that setting the `managedFields` to an empty list will not
reset the field. This is on purpose, so `managedFields` never get stripped by
clients not aware of the field.

In cases where the reset operation is combined with changes to other fields
than the `managedFields`, this will result in the `managedFields` being reset
first and the other changes being processed afterwards.
As a result the
applier takes ownership of any fields updated in the same request.

Server-Side Apply does not correctly track ownership on
sub-resources that don't receive the resource object type. If you are
using Server-Side Apply with such a sub-resource, the changed fields
may not be tracked.

## What's next

You can read about `managedFields` within the Kubernetes API reference for the
[`metadata`](/docs/reference/kubernetes-api/common-definitions/object-meta/)
top level field.
 map  or struct of existing objects will end up being owned by actors who owned an element of these types  This means that any further change to these objects would cause a conflict   When a  listType    mapType   or  structType  changes from  atomic  to  map   set   granular   the API server is unable to infer the new ownership of these fields  Because of that  no conflict will be produced when objects have these fields updated  For that reason  it is not recommended to change a type from  atomic  to  map   set   granular    Take for example  the custom resource      yaml     apiVersion  example com v1 kind  Foo metadata    name  foo sample   managedFields      manager   manager one      operation  Apply     apiVersion  example com v1     fieldsType  FieldsV1     fieldsV1        f spec          f data     spec    data      key1  val1     key2  val2      Before  spec data  gets changed from  atomic  to  granular    manager one  owns the field  spec data   and all the fields within it   key1  and  key2    When the CRD gets changed to make  spec data   granular    manager one  continues to own the top level field  spec data   meaning no other managers can delete the map called  data  without a conflict   but it no longer owns  key1  and  key2   so another manager can then modify or delete those fields without conflict      Using Server Side Apply in a controller  As a developer of a controller  you can use Server Side Apply as a way to simplify the update logic of your controller  The main differences with a read modify write and or patch are the following     the applied object must contain all the fields that the controller cares about    there is no way to remove fields that haven t been applied by the controller   before  controller can still send a   patch   or   update   for these use cases     the object doesn t have to be read beforehand   resourceVersion  doesn t have   to be specified   It is strongly recommended for controllers to always force conflicts on 
objects that they own and manage  since they might not be able to resolve or act on these conflicts      Transferring ownership  In addition to the concurrency controls provided by  conflict resolution   conflicts   Server Side Apply provides ways to perform coordinated field ownership transfers from users to controllers   This is best explained by example  Let s look at how to safely transfer ownership of the  replicas  field from a user to a controller while enabling automatic horizontal scaling for a Deployment  using the HorizontalPodAutoscaler resource and its accompanying controller   Say a user has defined Deployment with  replicas  set to the desired value     And the user has created the Deployment using Server Side Apply  like so      shell kubectl apply  f https   k8s io examples application ssa nginx deployment yaml   server side      Then later  automatic scaling is enabled for the Deployment  for example      shell kubectl autoscale deployment nginx deployment   cpu percent 50   min 1   max 10      Now  the user would like to remove  replicas  from their configuration  so they don t accidentally fight with the HorizontalPodAutoscaler  HPA  and its controller  However  there is a race  it might take some time before the HPA feels the need to adjust   spec replicas   if the user removes   spec replicas  before the HPA writes to the field and becomes its owner  then the API server would set   spec replicas  to 1  the default replica count for Deployment   This is not what the user wants to happen  even temporarily   it might well degrade a running workload   There are two solutions      basic  Leave  replicas  in the configuration  when the HPA eventually writes to that   field  the system gives the user a conflict over it  At that point  it is safe   to remove from the configuration      more advanced  If  however  the user doesn t want to wait  for example   because they want to keep the cluster legible to their colleagues  then they   can take the 
following steps to make it safe to remove  replicas  from their   configuration   First  the user defines a new manifest containing only the  replicas  field      yaml   Save this file as  nginx deployment replicas only yaml   apiVersion  apps v1 kind  Deployment metadata    name  nginx deployment spec    replicas  3       The YAML file for SSA in this case only contains the fields you want to change  You are not supposed to provide a fully compliant Deployment manifest if you only want to modify the  spec replicas  field using SSA    The user applies that manifest using a private field manager name  In this example  the user picked  handover to hpa       shell kubectl apply  f nginx deployment replicas only yaml       server side   field manager handover to hpa       validate false      If the apply results in a conflict with the HPA controller  then do nothing  The conflict indicates the controller has claimed the field earlier in the process than it sometimes does   At this point the user may remove the  replicas  field from their manifest     Note that whenever the HPA controller sets the  replicas  field to a new value  the temporary field manager will no longer own any fields and will be automatically deleted  No further clean up is required       Transferring ownership between managers  Field managers can transfer ownership of a field between each other by setting the field to the same value in both of their applied configurations  causing them to share ownership of the field  Once the managers share ownership of the field  one of them can remove the field from their applied configuration to give up ownership and complete the transfer to the other field manager      Comparison with Client Side Apply  Server Side Apply is meant both as a replacement for the original client side implementation of the  kubectl apply  subcommand  and as simple and effective mechanism for  to enact their changes   Compared to the  last applied  annotation managed by  kubectl   
Server Side Apply uses a more declarative approach  which tracks an object s field management  rather than a user s last applied state  This means that as a side effect of using Server Side Apply  information about which field manager manages each field in an object also becomes available   A consequence of the conflict detection and resolution implemented by Server Side Apply is that an applier always has up to date field values in their local state  If they don t  they get a conflict the next time they apply  Any of the three options to resolve conflicts results in the applied configuration being an up to date subset of the object on the server s fields   This is different from Client Side Apply  where outdated values which have been overwritten by other users are left in an applier s local config  These values only become accurate when the user updates that specific field  if ever  and an applier has no way of knowing whether their next apply will overwrite other users  changes   Another difference is that an applier using Client Side Apply is unable to change the API version they are using  but Server Side Apply supports this use case      Migration between client side and server side apply      Upgrading from client side apply to server side apply  Client side apply users who manage a resource with  kubectl apply  can start using server side apply with the following flag      shell kubectl apply   server side    dry run server       By default  field management of the object transfers from client side apply to kubectl server side apply  without encountering conflicts    Keep the  last applied configuration  annotation up to date  The annotation infers client side applies managed fields  Any fields not managed by client side apply raise conflicts   For example  if you used  kubectl scale  to update the replicas field after client side apply  then this field is not owned by client side apply and creates conflicts on  kubectl apply   server side     This behavior 
applies to server side apply with the  kubectl  field manager  As an exception  you can opt out of this behavior by specifying a different  non default field manager  as seen in the following example  The default field manager for kubectl server side apply is  kubectl       shell kubectl apply   server side   field manager my manager    dry run server           Downgrading from server side apply to client side apply  If you manage a resource with  kubectl apply   server side   you can downgrade to client side apply directly with  kubectl apply    Downgrading works because kubectl Server Side Apply keeps the  last applied configuration  annotation up to date if you use  kubectl apply    This behavior applies to Server Side Apply with the  kubectl  field manager  As an exception  you can opt out of this behavior by specifying a different  non default field manager  as seen in the following example  The default field manager for kubectl server side apply is  kubectl       shell kubectl apply   server side   field manager my manager    dry run server          API implementation  The  PATCH  verb for a resource that supports Server Side Apply can accepts the unofficial  application apply patch yaml  content type  Users of Server Side Apply can send partially specified objects as YAML as the body of a  PATCH  request to the URI of a resource   When applying a configuration  you should always include all the fields that are important to the outcome  such as a desired state  that you want to define   All JSON messages are valid YAML  Some clients specify Server Side Apply requests using YAML request bodies that are also valid JSON       Access control and permissions   rbac and permissions   Since Server Side Apply is a type of  PATCH   a principal  such as a Role for Kubernetes   requires the   patch   permission to edit existing resources  and also needs the   create   verb permission in order to create new resources with Server Side Apply      Clearing  managedFields   
It is possible to strip all  managedFields  from an object by overwriting them using a   patch    JSON Merge Patch  Strategic Merge Patch  JSON Patch   or through an   update    HTTP  PUT    in other words  through every write operation other than   apply    This can be done by overwriting the  managedFields  field with an empty entry  Two examples are      console PATCH  api v1 namespaces default configmaps example cm Accept  application json Content Type  application merge patch json       metadata          managedFields                                  console PATCH  api v1 namespaces default configmaps example cm Accept  application json Content Type  application json patch json If Match  1234567890123456789     op    replace    path     metadata managedFields    value               This will overwrite the  managedFields  with a list containing a single empty entry that then results in the  managedFields  being stripped entirely from the object  Note that setting the  managedFields  to an empty list will not reset the field  This is on purpose  so  managedFields  never get stripped by clients not aware of the field   In cases where the reset operation is combined with changes to other fields than the  managedFields   this will result in the  managedFields  being reset first and the other changes being processed afterwards  As a result the applier takes ownership of any fields updated in the same request    Server Side Apply does not correctly track ownership on sub resources that don t receive the resource object type  If you are using Server Side Apply with such a sub resource  the changed fields may not be tracked         You can read about  managedFields  within the Kubernetes API reference for the   metadata    docs reference kubernetes api common definitions object meta   top level field "}
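The merge-strategy markers described above are declared in a CRD's OpenAPI schema. A minimal sketch (the `Foo` CRD and its `ports`/`labels` fields are hypothetical, not from this document):

```yaml
# Hypothetical CRD excerpt: 'ports' is a listType=map list keyed by
# 'port', so different field managers can own individual entries;
# 'labels' is an atomic map, replaced wholesale during merge.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  names:
    kind: Foo
    plural: foos
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              ports:
                type: array
                x-kubernetes-list-type: map
                x-kubernetes-list-map-keys: ["port"]
                items:
                  type: object
                  properties:
                    port:
                      type: integer
              labels:
                type: object
                x-kubernetes-map-type: atomic
                additionalProperties:
                  type: string
```

With this schema, two managers can each apply a different entry of `ports` without conflict, while any apply that touches `labels` claims the whole map.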
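The practical difference between `atomic` and `granular` map merging can be illustrated with a toy sketch. This is plain Python, not the real structured-merge-diff implementation, and it ignores ownership bookkeeping and removal of previously-applied fields; the function name is invented:

```python
# Toy illustration of Server-Side Apply map merge semantics:
# an "atomic" map is replaced wholesale by the applier, while a
# "granular" map is merged key by key, leaving other managers'
# keys in place.
def apply_map(live: dict, applied: dict, map_type: str) -> dict:
    if map_type == "atomic":
        # The applier owns the whole map; the live value is replaced.
        return dict(applied)
    if map_type == "granular":
        # Keys are merged individually; keys absent from the applied
        # config but present in the live object are kept (they may be
        # owned by other managers).
        merged = dict(live)
        merged.update(applied)
        return merged
    raise ValueError(f"unknown map type: {map_type}")

live = {"key1": "val1", "key2": "val2"}
applied = {"key1": "new"}
print(apply_map(live, applied, "atomic"))    # -> {'key1': 'new'}
print(apply_map(live, applied, "granular"))  # -> {'key1': 'new', 'key2': 'val2'}
```

This mirrors the topology-change discussion above: flipping a map from `granular` to `atomic` suddenly makes a single applier replace (and own) entries previously held by other managers.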
{"questions":"kubernetes reference cici37 contenttype concept min kubernetes server version 1 25 reviewers weight 35 jpbetz title Common Expression Language in Kubernetes","answers":"---\ntitle: Common Expression Language in Kubernetes\nreviewers:\n- jpbetz\n- cici37\ncontent_type: concept\nweight: 35\nmin-kubernetes-server-version: 1.25\n---\n\n<!-- overview -->\n\nThe [Common Expression Language (CEL)](https:\/\/github.com\/google\/cel-go) is used\nin the Kubernetes API to declare validation rules, policy rules, and other\nconstraints or conditions.\n\nCEL expressions are evaluated directly in the\nAPI server, making CEL a\nconvenient alternative to out-of-process mechanisms, such as webhooks, for many\nextensibility use cases. Your CEL expressions continue to execute so long as the\ncontrol plane's API server component remains available.\n\n<!-- body -->\n\n## Language overview\n\nThe [CEL language](https:\/\/github.com\/google\/cel-spec\/blob\/master\/doc\/langdef.md)\nhas a straightforward syntax that is similar to the expressions in C, C++, Java,\nJavaScript and Go.\n\nCEL was designed to be embedded into applications. Each CEL \"program\" is a\nsingle expression that evaluates to a single value. CEL expressions are\ntypically short \"one-liners\" that inline well into the string fields of Kubernetes\nAPI resources.\n\nInputs to a CEL program are \"variables\". Each Kubernetes API field that contains\nCEL declares in the API documentation which variables are available to use for\nthat field. For example, in the `x-kubernetes-validations[i].rule` field of\nCustomResourceDefinitions, the `self` and `oldSelf` variables are available and\nrefer to the current and previous state of the custom resource data to be\nvalidated by the CEL expression. Other Kubernetes API fields may declare\ndifferent variables. 
See the API documentation of the API fields to learn which\nvariables are available for that field.\n\nExample CEL expressions:\n\n\n| Rule                                                                               | Purpose                                                                           |\n|------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|\n| `self.minReplicas <= self.replicas && self.replicas <= self.maxReplicas`           | Validate that the three fields defining replicas are ordered appropriately        |\n| `'Available' in self.stateCounts`                                                  | Validate that an entry with the 'Available' key exists in a map                   |\n| `(self.list1.size() == 0) != (self.list2.size() == 0)`                             | Validate that one of two lists is non-empty, but not both                         |\n| `self.envars.filter(e, e.name == 'MY_ENV').all(e, e.value.matches('^[a-zA-Z]*$'))` | Validate the 'value' field of a listMap entry where key field 'name' is 'MY_ENV'  |\n| `has(self.expired) && self.created + self.ttl < self.expired`                      | Validate that 'expired' date is after a 'create' date plus a 'ttl' duration       |\n| `self.health.startsWith('ok')`                                                     | Validate a 'health' string field has the prefix 'ok'                              |\n| `self.widgets.exists(w, w.key == 'x' && w.foo < 10)`                               | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 |\n| `type(self) == string ? 
self == '99%' : self == 42`                                | Validate an int-or-string field for both the int and string cases                 |\n| `self.metadata.name == 'singleton'`                                                | Validate that an object's name matches a specific value (making it a singleton)   |\n| `self.set1.all(e, !(e in self.set2))`                                              | Validate that two listSets are disjoint                                           |\n| `self.names.size() == self.details.size() && self.names.all(n, n in self.details)` | Validate the 'details' map is keyed by the items in the 'names' listSet           |\n| `self.details.all(key, key.matches('^[a-zA-Z]*$'))`                                | Validate the keys of the 'details' map                                            |\n| `self.details.all(key, self.details[key].matches('^[a-zA-Z]*$'))`                  | Validate the values of the 'details' map                                          |\n\n\n## CEL options, language features, and libraries\n\nCEL is configured with the following options, libraries and language features, introduced at the specified Kubernetes versions:\n\n| CEL option, library or language feature | Included | Availability |\n|-----------------------------------------|----------|-------------|\n| [Standard macros](https:\/\/github.com\/google\/cel-spec\/blob\/v0.7.0\/doc\/langdef.md#macros) | `has`, `all`, `exists`, `exists_one`, `map`, `filter` | All Kubernetes versions |\n| [Standard functions](https:\/\/github.com\/google\/cel-spec\/blob\/master\/doc\/langdef.md#list-of-standard-definitions) | See [official list of standard definitions](https:\/\/github.com\/google\/cel-spec\/blob\/master\/doc\/langdef.md#list-of-standard-definitions) | All Kubernetes versions |\n| [Homogeneous Aggregate Literals](https:\/\/pkg.go.dev\/github.com\/google\/cel-go@v0.17.4\/cel#HomogeneousAggregateLiterals) | | All Kubernetes versions |\n| [Default UTC Time 
Zone](https:\/\/pkg.go.dev\/github.com\/google\/cel-go@v0.17.4\/cel#DefaultUTCTimeZone) | | All Kubernetes versions |\n| [Eagerly Validate Declarations](https:\/\/pkg.go.dev\/github.com\/google\/cel-go@v0.17.4\/cel#EagerlyValidateDeclarations) | | All Kubernetes versions |\n| [Extended strings library](https:\/\/pkg.go.dev\/github.com\/google\/cel-go\/ext#Strings), Version 1 | `charAt`, `indexOf`, `lastIndexOf`, `lowerAscii`, `upperAscii`, `replace`, `split`, `join`, `substring`, `trim` | All Kubernetes versions |\n| Kubernetes list library | See [Kubernetes list library](#kubernetes-list-library) | All Kubernetes versions |\n| Kubernetes regex library | See [Kubernetes regex library](#kubernetes-regex-library) | All Kubernetes versions |\n| Kubernetes URL library | See [Kubernetes URL library](#kubernetes-url-library) | All Kubernetes versions |\n| Kubernetes authorizer library | See [Kubernetes authorizer library](#kubernetes-authorizer-library) | All Kubernetes versions |\n| Kubernetes quantity library | See [Kubernetes quantity library](#kubernetes-quantity-library) | Kubernetes versions 1.29+ |\n| CEL optional types | See [CEL optional types](https:\/\/pkg.go.dev\/github.com\/google\/cel-go@v0.17.4\/cel#OptionalTypes) | Kubernetes versions 1.29+ |\n| CEL CrossTypeNumericComparisons | See [CEL CrossTypeNumericComparisons](https:\/\/pkg.go.dev\/github.com\/google\/cel-go@v0.17.4\/cel#CrossTypeNumericComparisons) | Kubernetes versions 1.29+ |\n\nCEL functions, features and language settings support Kubernetes control plane\nrollbacks. For example, _CEL Optional Values_ was introduced at Kubernetes 1.29\nand so only API servers at that version or newer will accept write requests to\nCEL expressions that use _CEL Optional Values_. 
However, when a cluster is\nrolled back to Kubernetes 1.28, CEL expressions using \"CEL Optional Values\" that\nare already stored in API resources will continue to evaluate correctly.\n\n## Kubernetes CEL libraries\n\nIn addition to the CEL community libraries, Kubernetes includes CEL libraries\nthat are available everywhere CEL is used in Kubernetes.\n\n### Kubernetes list library\n\nThe list library includes `indexOf` and `lastIndexOf`, which work similarly to the\nstring functions of the same names. These functions return either the first or last\npositional index of the provided element in the list.\n\nThe list library also includes `min`, `max` and `sum`. Sum is supported on all\nnumber types as well as the duration type. Min and max are supported on all\ncomparable types.\n\n`isSorted` is also provided as a convenience function and is supported on all\ncomparable types.\n\nExamples:\n\n\n| CEL Expression                                                                     | Purpose                                                   |\n|------------------------------------------------------------------------------------|-----------------------------------------------------------|\n| `names.isSorted()`                                                                 | Verify that a list of names is kept in alphabetical order |\n| `items.map(x, x.weight).sum() == 1.0`                                              | Verify that the \"weights\" of a list of objects sum to 1.0 |\n| `lowPriorities.map(x, x.priority).max() < highPriorities.map(x, x.priority).min()` | Verify that two sets of priorities do not overlap         |\n| `names.indexOf('should-be-first') == 0`                                            | Require that the first name in a list is a specific value | \n\n\nSee the [Kubernetes List Library](https:\/\/pkg.go.dev\/k8s.io\/apiextensions-apiserver\/pkg\/apiserver\/schema\/cel\/library#Lists)\ngodoc for more information.\n\n### Kubernetes regex library\n\nIn
addition to the `matches` function provided by the CEL standard library, the\nregex library provides `find` and `findAll`, enabling a much wider range of\nregex operations.\n\nExamples:\n\n\n| CEL Expression                                              | Purpose                                                  |\n|-------------------------------------------------------------|----------------------------------------------------------|\n| `\"abc 123\".find('[0-9]+')`                                  | Find the first number in a string                        |\n| `\"1, 2, 3, 4\".findAll('[0-9]+').map(x, int(x)).sum() < 100` | Verify that the numbers in a string sum to less than 100 |\n\n\nSee the [Kubernetes regex library](https:\/\/pkg.go.dev\/k8s.io\/apiextensions-apiserver\/pkg\/apiserver\/schema\/cel\/library#Regex)\ngodoc for more information.\n\n### Kubernetes URL library\n\nTo make it easier and safer to process URLs, the following functions have been added:\n\n- `isURL(string)` checks if a string is a valid URL according to the\n  [Go's net\/url](https:\/\/pkg.go.dev\/net\/url#URL) package. 
The string must be an\n  absolute URL.\n- `url(string) URL` converts a string to a URL or results in an error if the\n  string is not a valid URL.\n\nOnce parsed via the `url` function, the resulting URL object has `getScheme`,\n`getHost`, `getHostname`, `getPort`, `getEscapedPath` and `getQuery` accessors.\n\nExamples:\n\n\n| CEL Expression                                                  | Purpose                                        |\n|-----------------------------------------------------------------|------------------------------------------------|\n| `url('https:\/\/example.com:80\/').getHost()`                      | Gets the 'example.com:80' host part of the URL |\n| `url('https:\/\/example.com\/path with spaces\/').getEscapedPath()` | Returns '\/path%20with%20spaces\/'               |\n\n\nSee the [Kubernetes URL library](https:\/\/pkg.go.dev\/k8s.io\/apiextensions-apiserver\/pkg\/apiserver\/schema\/cel\/library#URLs)\ngodoc for more information.\n\n### Kubernetes authorizer library\n\nFor CEL expressions in the API where a variable of type `Authorizer` is available,\nthe authorizer may be used to perform authorization checks for the principal\n(authenticated user) of the request.\n\nAPI resource checks are performed as follows:\n\n1. Specify the group and resource to check: `Authorizer.group(string).resource(string) ResourceCheck`\n1. Optionally call any combination of the following builder functions to further narrow the authorization check.\n   Note that these functions return the receiver type and can be chained:\n   - `ResourceCheck.subresource(string) ResourceCheck`\n   - `ResourceCheck.namespace(string) ResourceCheck`\n   - `ResourceCheck.name(string) ResourceCheck`\n1. Call `ResourceCheck.check(verb string) Decision` to perform the authorization check.\n1. Call `allowed() bool` or `reason() string` to inspect the result of the authorization check.\n\nNon-resource authorization checks are performed as follows:\n\n1. 
Specify only a path: `Authorizer.path(string) PathCheck`\n1. Call `PathCheck.check(httpVerb string) Decision` to perform the authorization check.\n1. Call `allowed() bool` or `reason() string` to inspect the result of the authorization check.\n\nTo perform an authorization check for a service account:\n\n- `Authorizer.serviceAccount(namespace string, name string) Authorizer`\n\n\n| CEL Expression | Purpose |\n|----------------|---------|\n| `authorizer.group('').resource('pods').namespace('default').check('create').allowed()` | Returns true if the principal (user or service account) is allowed to create pods in the 'default' namespace. |\n| `authorizer.path('\/healthz').check('get').allowed()` | Checks if the principal (user or service account) is authorized to make HTTP GET requests to the \/healthz API path. |\n| `authorizer.serviceAccount('default', 'myserviceaccount').resource('deployments').check('delete').allowed()` | Checks if the service account is authorized to delete deployments. |\n\n\n\n\nWith the alpha `AuthorizeWithSelectors` feature enabled, field and label selectors can be added to authorization checks.\n\n\n| CEL Expression | Purpose |\n|----------------|---------|\n| `authorizer.group('').resource('pods').fieldSelector('spec.nodeName=mynode').check('list').allowed()` | Returns true if the principal (user or service account) is allowed to list pods with the field selector `spec.nodeName=mynode`. |\n| `authorizer.group('').resource('pods').labelSelector('example.com\/mylabel=myvalue').check('list').allowed()` | Returns true if the principal (user or service account) is allowed to list pods with the label selector `example.com\/mylabel=myvalue`. 
|\n\n\nSee the [Kubernetes Authz library](https:\/\/pkg.go.dev\/k8s.io\/apiserver\/pkg\/cel\/library#Authz)\nand [Kubernetes AuthzSelectors library](https:\/\/pkg.go.dev\/k8s.io\/apiserver\/pkg\/cel\/library#AuthzSelectors)\ngodoc for more information.\n\n### Kubernetes quantity library\n\nKubernetes 1.28 adds support for manipulating quantity strings (e.g. 1.5G, 512k, 20Mi).\n\n- `isQuantity(string)` checks if a string is a valid Quantity according to\n  [Kubernetes' resource.Quantity](https:\/\/pkg.go.dev\/k8s.io\/apimachinery\/pkg\/api\/resource#Quantity).\n- `quantity(string) Quantity` converts a string to a Quantity or results in an error if the\n  string is not a valid quantity.\n\nOnce parsed via the `quantity` function, the resulting Quantity object has the\nfollowing library of member functions:\n\n\n| Member Function             | CEL Return Value | Description |\n|-----------------------------|------------------|-------------|\n| `isInteger()`               | bool             | Returns true if and only if asInteger is safe to call without an error |\n| `asInteger()`               | int              | Returns a representation of the current value as an int64 if possible or results in an error if conversion would result in overflow or loss of precision. |\n| `asApproximateFloat()`      | float            | Returns a float64 representation of the quantity which may lose precision. If the value of the quantity is outside the range of a float64, +Inf\/-Inf will be returned. |\n| `sign()`                    | int              | Returns `1` if the quantity is positive, `-1` if it is negative. 
`0` if it is zero |\n| `add(<Quantity>)`           | Quantity         | Returns sum of two quantities |\n| `add(<int>)`                | Quantity         | Returns sum of quantity and an integer | \n| `sub(<Quantity>)`           | Quantity         | Returns difference between two quantities |\n| `sub(<int>)`                | Quantity         | Returns difference between a quantity and an integer |\n| `isLessThan(<Quantity>)`    | bool             | Returns true if and only if the receiver is less than the operand |\n| `isGreaterThan(<Quantity>)` | bool             | Returns true if and only if the receiver is greater than the operand |\n| `compareTo(<Quantity>)`     | int              | Compares receiver to operand and returns 0 if they are equal, 1 if the receiver is greater, or -1 if the receiver is less than the operand |\n\n\nExamples:\n\n\n| CEL Expression                                                            | Purpose                                               |\n|---------------------------------------------------------------------------|-------------------------------------------------------|\n| `quantity(\"500000G\").isInteger()`                                         | Test if conversion to integer would throw an error    |\n| `quantity(\"50k\").asInteger()`                                             | Precise conversion to integer                         |\n| `quantity(\"9999999999999999999999999999999999999G\").asApproximateFloat()` | Lossy conversion to float                              |\n| `quantity(\"50k\").add(quantity(\"20k\"))`                                   | Add two quantities                                    |\n| `quantity(\"50k\").sub(20000)`                                              | Subtract an integer from a quantity                   |\n| `quantity(\"50k\").add(20).sub(quantity(\"100k\")).sub(-50000)`               | Chain adding and subtracting integers and quantities  |\n| 
`quantity(\"200M\").compareTo(quantity(\"0.2G\"))`                            | Compare two quantities                                |\n| `quantity(\"150Mi\").isGreaterThan(quantity(\"100Mi\"))`                      | Test if a quantity is greater than the receiver       |\n| `quantity(\"50M\").isLessThan(quantity(\"100M\"))`                            | Test if a quantity is less than the receiver          |\n\n\n## Type checking\n\nCEL is a [gradually typed language](https:\/\/github.com\/google\/cel-spec\/blob\/master\/doc\/langdef.md#gradual-type-checking).\n\nSome Kubernetes API fields contain fully type checked CEL expressions. For example,\n[CustomResourceDefinitions Validation Rules](\/docs\/tasks\/extend-kubernetes\/custom-resources\/custom-resource-definitions\/#validation-rules)\nare fully type checked.\n\nSome Kubernetes API fields contain partially type checked CEL expressions. A\npartially type checked expression is an expressions where some of the variables\nare statically typed but others are dynamically typed. For example, in the CEL\nexpressions of\n[ValidatingAdmissionPolicies](\/docs\/reference\/access-authn-authz\/validating-admission-policy\/)\nthe `request` variable is typed, but the `object` variable is dynamically typed.\nAs a result, an expression containing `request.namex` would fail type checking\nbecause the `namex` field is not defined. However, `object.namex` would pass\ntype checking even when the `namex` field is not defined for the resource kinds\nthat `object` refers to, because `object` is dynamically typed.\n\nThe `has()` macro in CEL may be used in CEL expressions to check if a field of a\ndynamically typed variable is accessible before attempting to access the field's\nvalue. For example:\n\n```cel\nhas(object.namex) ? 
object.namex == 'special' : request.name == 'special'\n```\n\n## Type system integration\n\n\n| OpenAPIv3 type                                     | CEL type |\n|----------------------------------------------------|----------|\n| 'object' with Properties                           | object \/ \"message type\" (`type(<object>)` evaluates to `selfType<uniqueNumber>.path.to.object.from.self`) |\n| 'object' with AdditionalProperties                 | map |\n| 'object' with x-kubernetes-embedded-type           | object \/ \"message type\", 'apiVersion', 'kind', 'metadata.name' and 'metadata.generateName' are implicitly included in schema |\n| 'object' with x-kubernetes-preserve-unknown-fields | object \/ \"message type\", unknown fields are NOT accessible in CEL expression |\n| x-kubernetes-int-or-string                         | union of int or string, `self.intOrString < 100 \\|\\| self.intOrString == '50%'` evaluates to true for both `50` and `\"50%\"` |\n| 'array'                                            | list |\n| 'array' with x-kubernetes-list-type=map            | list with map based Equality & unique key guarantees |\n| 'array' with x-kubernetes-list-type=set            | list with set based Equality & unique entry guarantees |\n| 'boolean'                                          | boolean |\n| 'number' (all formats)                             | double |\n| 'integer' (all formats)                            | int (64) |\n| _no equivalent_                                    | uint (64) |\n| 'null'                                             | null_type |\n| 'string'                                           | string |\n| 'string' with format=byte (base64 encoded)         | bytes |\n| 'string' with format=date                          | timestamp (google.protobuf.Timestamp) |\n| 'string' with format=datetime                      | timestamp (google.protobuf.Timestamp) |\n| 'string' with format=duration                      | duration (google.protobuf.Duration) 
|\n\n\nAlso see: [CEL types](https:\/\/github.com\/google\/cel-spec\/blob\/v0.6.0\/doc\/langdef.md#values),\n[OpenAPI types](https:\/\/swagger.io\/specification\/#data-types),\n[Kubernetes Structural Schemas](\/docs\/tasks\/extend-kubernetes\/custom-resources\/custom-resource-definitions\/#specifying-a-structural-schema).\n\nEquality comparison for arrays with `x-kubernetes-list-type` of `set` or `map` ignores element\norder. For example `[1, 2] == [2, 1]` if the arrays represent Kubernetes `set` values.\n\nConcatenation on arrays with `x-kubernetes-list-type` use the semantics of the\nlist type:\n\n`set`\n: `X + Y` performs a union where the array positions of all elements in\n  `X` are preserved and non-intersecting elements in `Y` are appended, retaining\n  their partial order.\n\n`map`\n: `X + Y` performs a merge where the array positions of all keys in `X`\n  are preserved but the values are overwritten by values in `Y` when the key\n  sets of `X` and `Y` intersect. Elements in `Y` with non-intersecting keys are\n  appended, retaining their partial order.\n\n## Escaping\n\nOnly Kubernetes resource property names of the form\n`[a-zA-Z_.-\/][a-zA-Z0-9_.-\/]*` are accessible from CEL. 
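To make the naming rule concrete, here is a short Python sketch (illustrative only, not the API server's implementation) of the accessibility check, together with the escaping scheme that the rest of this section defines. The function names are hypothetical; the reserved-word list follows the CEL specification:

```python
import re

# The accessible-name pattern quoted above (illustrative sketch).
_ACCESSIBLE = re.compile(r"[a-zA-Z_.\-/][a-zA-Z0-9_.\-/]*")

# CEL RESERVED keywords (from the CEL spec); each is escaped as __{keyword}__.
_CEL_RESERVED = {
    "true", "false", "null", "in", "as", "break", "const", "continue",
    "else", "for", "function", "if", "import", "let", "loop", "package",
    "namespace", "return", "var", "void", "while",
}

def is_accessible(name: str) -> bool:
    """True if the property name can be referenced from CEL at all."""
    return _ACCESSIBLE.fullmatch(name) is not None

def escape_property(name: str) -> str:
    """Apply the documented escape rules to an accessible property name."""
    if name in _CEL_RESERVED:
        # Keywords are escaped only on an exact match.
        return f"__{name}__"
    # Escape literal `__` first so the replacements below are not re-escaped.
    out = name.replace("__", "__underscores__")
    return out.replace(".", "__dot__").replace("-", "__dash__").replace("/", "__slash__")
```

For example, `escape_property("x-prop")` yields `x__dash__prop`, matching the escaping examples in this section.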
Accessible property names are escaped according to the following rules when
accessed in the expression:

| escape sequence   | property name equivalent |
|-------------------|--------------------------|
| `__underscores__` | `__`                     |
| `__dot__`         | `.`                      |
| `__dash__`        | `-`                      |
| `__slash__`       | `/`                      |
| `__{keyword}__`   | [CEL **RESERVED** keyword](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#syntax) |

When you escape one of CEL's **RESERVED** keywords, the escape applies only when
the property name matches the keyword exactly (for example, `int` within the
word `sprint` would not be escaped, nor would it need to be).

Examples on escaping:

| property name | rule with escaped property name   |
|---------------|-----------------------------------|
| `namespace`   | `self.__namespace__ > 0`          |
| `x-prop`      | `self.x__dash__prop > 0`          |
| `redact__d`   | `self.redact__underscores__d > 0` |
| `string`      | `self.startsWith('kube')`         |

## Resource constraints

CEL is non-Turing complete and offers a variety of production safety controls to
limit execution time. CEL's _resource constraint_ features provide feedback to
developers about expression complexity and help protect the API server from
excessive resource consumption during evaluation. CEL's resource constraint
features are used to prevent CEL evaluation from consuming excessive API server
resources.

A key element of the resource constraint features is a _cost unit_ that CEL
defines as a way of tracking CPU utilization. Cost units are independent of
system load and hardware. Cost units are also deterministic; for any given CEL
expression and input data, evaluation of the expression by the CEL interpreter
will always result in the same cost.

Many of CEL's core operations have fixed costs. The simplest operations, such as
comparisons (e.g. `<`), have a cost of 1. Some have a higher fixed cost; for
example, list literal declarations have a fixed base cost of 40 cost units.

Calls to functions implemented in native code approximate cost based on the time
complexity of the operation. For example, operations that use regular
expressions, such as `match` and `find`, are estimated using an approximated
cost of `length(regexString)*length(inputString)`. The approximated cost
reflects the worst case time complexity of Go's RE2 implementation.

### Runtime cost budget

All CEL expressions evaluated by Kubernetes are constrained by a runtime cost
budget. The runtime cost budget is an estimate of actual CPU utilization
computed by incrementing a cost unit counter while interpreting a CEL
expression. If the CEL interpreter executes too many instructions, the runtime
cost budget will be exceeded, execution of the expression will be halted, and
an error will result.

Some Kubernetes resources define an additional runtime cost budget that bounds
the execution of multiple expressions. If the sum total of the cost of the
expressions exceeds the budget, execution of the expressions will be halted, and
an error will result.
For example, the validation of a custom resource has a
_per-validation_ runtime cost budget for all
[Validation Rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules)
evaluated to validate the custom resource.

### Estimated cost limits

For some Kubernetes resources, the API server may also check whether the worst
case estimated running time of a CEL expression would be prohibitively expensive
to execute. If so, the API server prevents the CEL expression from being written
to API resources by rejecting create or update operations that contain it. This
feature offers a stronger assurance that CEL expressions written to the API
resource will be evaluated at runtime without exceeding the runtime cost budget.
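The cost model described in this section can be made concrete with a small sketch. This is illustrative only, not the API server's implementation: the fixed costs and the regex formula are the ones documented above, while the function names and budget values are hypothetical.

```python
# Illustrative sketch of CEL cost accounting; not the real implementation.
COMPARISON_COST = 1          # fixed cost of a simple comparison such as `<`
LIST_LITERAL_BASE_COST = 40  # fixed base cost of a list literal declaration

def estimated_regex_cost(regex: str, input_str: str) -> int:
    # Worst-case estimate for regex operations such as `matches` and `find`:
    # length(regexString) * length(inputString).
    return len(regex) * len(input_str)

def evaluate_within_budget(step_costs, budget: int) -> bool:
    """Increment a cost-unit counter per step; halt once the budget is exceeded."""
    total = 0
    for cost in step_costs:
        total += cost
        if total > budget:
            return False  # evaluation halts and an error results
    return True
```

For instance, matching a 6-character pattern against a 6-character input is budgeted at 36 cost units, and because cost units are deterministic this estimate is the same regardless of hardware or system load.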
will be halted  and an error will result   Some Kubernetes resources define an additional runtime cost budget that bounds the execution of multiple expressions  If the sum total of the cost of expressions exceed the budget  execution of the expressions will be halted  and an error will result  For example the validation of a custom resource has a  per validation  runtime cost budget for all  Validation Rules   docs tasks extend kubernetes custom resources custom resource definitions  validation rules  evaluated to validate the custom resource       Estimated cost limits  For some Kubernetes resources  the API server may also check if worst case estimated running time of CEL expressions would be prohibitively expensive to execute  If so  the API server prevent the CEL expression from being written to API resources by rejecting create or update operations containing the CEL expression to the API resources  This feature offers a stronger assurance that CEL expressions written to the API resource will be evaluated at runtime without exceeding the runtime cost budget "}
{"questions":"kubernetes reference title Kubernetes API health endpoints contenttype concept logicalhan reviewers The Kubernetes glossarytooltip termid kube apiserver text API server provides API endpoints to indicate the current status of the API server overview weight 50","answers":"---\ntitle: Kubernetes API health endpoints\nreviewers:\n- logicalhan\ncontent_type: concept\nweight: 50\n---\n\n<!-- overview -->\nThe Kubernetes API server provides API endpoints to indicate the current status of the API server.\nThis page describes these API endpoints and explains how you can use them.\n\n<!-- body -->\n\n## API endpoints for health\n\nThe Kubernetes API server provides 3 API endpoints (`healthz`, `livez` and `readyz`) to indicate the current status of the API server.\nThe `healthz` endpoint is deprecated (since Kubernetes v1.16), and you should use the more specific `livez` and `readyz` endpoints instead.\nThe `livez` endpoint can be used with the `--livez-grace-period` [flag](\/docs\/reference\/command-line-tools-reference\/kube-apiserver) to specify the startup duration.\nFor a graceful shutdown you can specify the `--shutdown-delay-duration` [flag](\/docs\/reference\/command-line-tools-reference\/kube-apiserver) with the `\/readyz` endpoint.\nMachines that check the `healthz`\/`livez`\/`readyz` of the API server should rely on the HTTP status code.\nA status code `200` indicates the API server is `healthy`\/`live`\/`ready`, depending on the called endpoint.\nThe more verbose options shown below are intended to be used by human operators to debug their cluster or understand the state of the API server.\n\nThe following examples will show how you can interact with the health API endpoints.\n\nFor all endpoints, you can use the `verbose` parameter to print out the checks and their status.\nThis can be useful for a human operator to debug the current status of the API server; it is not intended to be consumed by a machine:\n\n```shell\ncurl -k 
https:\/\/localhost:6443\/livez?verbose\n```\n\nor from a remote host with authentication:\n\n```shell\nkubectl get --raw='\/readyz?verbose'\n```\n\nThe output will look like this:\n\n    [+]ping ok\n    [+]log ok\n    [+]etcd ok\n    [+]poststarthook\/start-kube-apiserver-admission-initializer ok\n    [+]poststarthook\/generic-apiserver-start-informers ok\n    [+]poststarthook\/start-apiextensions-informers ok\n    [+]poststarthook\/start-apiextensions-controllers ok\n    [+]poststarthook\/crd-informer-synced ok\n    [+]poststarthook\/bootstrap-controller ok\n    [+]poststarthook\/rbac\/bootstrap-roles ok\n    [+]poststarthook\/scheduling\/bootstrap-system-priority-classes ok\n    [+]poststarthook\/start-cluster-authentication-info-controller ok\n    [+]poststarthook\/start-kube-aggregator-informers ok\n    [+]poststarthook\/apiservice-registration-controller ok\n    [+]poststarthook\/apiservice-status-available-controller ok\n    [+]poststarthook\/kube-apiserver-autoregistration ok\n    [+]autoregister-completion ok\n    [+]poststarthook\/apiservice-openapi-controller ok\n    healthz check passed\n\nThe Kubernetes API server also supports excluding specific checks.\nThe query parameters can also be combined like in this example:\n\n```shell\ncurl -k 'https:\/\/localhost:6443\/readyz?verbose&exclude=etcd'\n```\n\nThe output shows that the `etcd` check is excluded:\n\n    [+]ping ok\n    [+]log ok\n    [+]etcd excluded: ok\n    [+]poststarthook\/start-kube-apiserver-admission-initializer ok\n    [+]poststarthook\/generic-apiserver-start-informers ok\n    [+]poststarthook\/start-apiextensions-informers ok\n    [+]poststarthook\/start-apiextensions-controllers ok\n    [+]poststarthook\/crd-informer-synced ok\n    [+]poststarthook\/bootstrap-controller ok\n    [+]poststarthook\/rbac\/bootstrap-roles ok\n    [+]poststarthook\/scheduling\/bootstrap-system-priority-classes ok\n    [+]poststarthook\/start-cluster-authentication-info-controller ok\n    
[+]poststarthook\/start-kube-aggregator-informers ok\n    [+]poststarthook\/apiservice-registration-controller ok\n    [+]poststarthook\/apiservice-status-available-controller ok\n    [+]poststarthook\/kube-apiserver-autoregistration ok\n    [+]autoregister-completion ok\n    [+]poststarthook\/apiservice-openapi-controller ok\n    [+]shutdown ok\n    healthz check passed\n\n## Individual health checks\n\n\n\nEach individual health check exposes an HTTP endpoint and can be checked individually.\nThe schema for the individual health checks is `\/livez\/<healthcheck-name>` or `\/readyz\/<healthcheck-name>`, where `livez` and `readyz` can be used to indicate if you want to check the liveness or the readiness of the API server, respectively.\nThe `<healthcheck-name>` path can be discovered using the `verbose` flag from above and take the path between `[+]` and `ok`.\nThese individual health checks should not be consumed by machines but can be helpful for a human operator to debug a system:\n\n```shell\ncurl -k https:\/\/localhost:6443\/livez\/etcd\n```","site":"kubernetes reference"}
{"questions":"kubernetes reference weight 20 title API Overview erictune contenttype concept card reviewers jbeda nolist true lavalamp","answers":"---\ntitle: API Overview\nreviewers:\n- erictune\n- lavalamp\n- jbeda\ncontent_type: concept\nweight: 20\nno_list: true\ncard:\n  name: reference\n  weight: 50\n  title: Overview of API\n---\n\n<!-- overview -->\n\nThis section provides reference information for the Kubernetes API.\n\nThe REST API is the fundamental fabric of Kubernetes. All operations and\ncommunications between components, and external user commands are REST API\ncalls that the API Server handles. Consequently, everything in the Kubernetes\nplatform is treated as an API object and has a corresponding entry in the\n[API](\/docs\/reference\/generated\/kubernetes-api\/\/).\n\nThe [Kubernetes API reference](\/docs\/reference\/generated\/kubernetes-api\/\/)\nlists the API for Kubernetes version .\n\nFor general background information, read\n[The Kubernetes API](\/docs\/concepts\/overview\/kubernetes-api\/).\n[Controlling Access to the Kubernetes API](\/docs\/concepts\/security\/controlling-access\/)\ndescribes how clients can authenticate to the Kubernetes API server, and how their\nrequests are authorized.\n\n\n## API versioning\n\nThe JSON and Protobuf serialization schemas follow the same guidelines for\nschema changes. The following descriptions cover both formats.\n\nThe API versioning and software versioning are indirectly related.\nThe [API and release versioning proposal](https:\/\/git.k8s.io\/sig-release\/release-engineering\/versioning.md)\ndescribes the relationship between API versioning and software versioning.\n\nDifferent API versions indicate different levels of stability and support. 
You\ncan find more information about the criteria for each level in the\n[API Changes documentation](https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api_changes.md#alpha-beta-and-stable-versions).\n\nHere's a summary of each level:\n\n- Alpha:\n  - The version names contain `alpha` (for example, `v1alpha1`).\n  - Built-in alpha API versions are disabled by default and must be explicitly enabled in the `kube-apiserver` configuration to be used.\n  - The software may contain bugs. Enabling a feature may expose bugs.\n  - Support for an alpha API may be dropped at any time without notice.\n  - The API may change in incompatible ways in a later software release without notice.\n  - The software is recommended for use only in short-lived testing clusters,\n    due to increased risk of bugs and lack of long-term support.\n\n- Beta:\n  - The version names contain `beta` (for example, `v2beta3`).\n  - Built-in beta API versions are disabled by default and must be explicitly enabled in the `kube-apiserver` configuration to be used\n    (**except** for beta versions of APIs introduced prior to Kubernetes 1.22, which were enabled by default).\n  - Built-in beta API versions have a maximum lifetime of 9 months or 3 minor releases (whichever is longer) from introduction\n    to deprecation, and 9 months or 3 minor releases (whichever is longer) from deprecation to removal.\n  - The software is well tested. Enabling a feature is considered safe.\n  - The support for a feature will not be dropped, though the details may change.\n\n  - The schema and\/or semantics of objects may change in incompatible ways in\n    a subsequent beta or stable API version. When this happens, migration\n    instructions are provided. 
Adapting to a subsequent beta or stable API version\n    may require editing or re-creating API objects, and may not be straightforward.\n    The migration may require downtime for applications that rely on the feature.\n  - The software is not recommended for production uses. Subsequent releases\n    may introduce incompatible changes. Use of beta API versions is\n    required to transition to subsequent beta or stable API versions\n    once the beta API version is deprecated and no longer served.\n\n  \n  Please try beta features and provide feedback. After the features exit beta, it\n  may not be practical to make more changes.\n  \n\n- Stable:\n  - The version name is `vX` where `X` is an integer.\n  - Stable API versions remain available for all future releases within a Kubernetes major version,\n    and there are no current plans for a major version revision of Kubernetes that removes stable APIs.\n\n## API groups\n\n[API groups](https:\/\/git.k8s.io\/design-proposals-archive\/api-machinery\/api-group.md)\nmake it easier to extend the Kubernetes API.\nThe API group is specified in a REST path and in the `apiVersion` field of a\nserialized object.\n\nThere are several API groups in Kubernetes:\n\n*  The *core* (also called *legacy*) group is found at REST path `\/api\/v1`.\n   The core group is not specified as part of the `apiVersion` field, for\n   example, `apiVersion: v1`.\n*  The named groups are at REST path `\/apis\/$GROUP_NAME\/$VERSION` and use\n   `apiVersion: $GROUP_NAME\/$VERSION` (for example, `apiVersion: batch\/v1`).\n   You can find the full list of supported API groups in\n   [Kubernetes API reference](\/docs\/reference\/generated\/kubernetes-api\/\/#-strong-api-groups-strong-).\n\n## Enabling or disabling API groups   {#enabling-or-disabling}\n\nCertain resources and API groups are enabled by default. You can enable or\ndisable them by setting `--runtime-config` on the API server.  
The\n`--runtime-config` flag accepts comma separated `<key>[=<value>]` pairs\ndescribing the runtime configuration of the API server. If the `=<value>`\npart is omitted, it is treated as if `=true` is specified. For example:\n\n - to disable `batch\/v1`, set `--runtime-config=batch\/v1=false`\n - to enable `batch\/v2alpha1`, set `--runtime-config=batch\/v2alpha1`\n - to enable a specific version of an API, such as `storage.k8s.io\/v1beta1\/csistoragecapacities`, set `--runtime-config=storage.k8s.io\/v1beta1\/csistoragecapacities`\n\n\nWhen you enable or disable groups or resources, you need to restart the API\nserver and controller manager to pick up the `--runtime-config` changes.\n\n\n## Persistence\n\nKubernetes stores its serialized state in terms of the API resources by writing them into\n.\n\n## \n\n- Learn more about [API conventions](https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#api-conventions)\n- Read the design documentation for\n  [aggregator](https:\/\/git.k8s.io\/design-proposals-archive\/api-machinery\/aggregated-api-servers.md)","site":"kubernetes reference"}
{"questions":"kubernetes setup weight 20 vincepri title Container Runtimes contenttype concept reviewers bart0sh overview","answers":"---\nreviewers:\n- vincepri\n- bart0sh\ntitle: Container Runtimes\ncontent_type: concept\nweight: 20\n---\n<!-- overview -->\n\n\n\nYou need to install a container runtime\ninto each node in the cluster so that Pods can run there. This page outlines\nwhat is involved and describes related tasks for setting up nodes.\n\nKubernetes requires that you use a runtime that\nconforms with the\nContainer Runtime Interface (CRI).\n\nSee [CRI version support](#cri-versions) for more information.\n\nThis page provides an outline of how to use several common container runtimes with\nKubernetes.\n\n- [containerd](#containerd)\n- [CRI-O](#cri-o)\n- [Docker Engine](#docker)\n- [Mirantis Container Runtime](#mcr)\n\n\nKubernetes releases before v1.24 included a direct integration with Docker Engine,\nusing a component named _dockershim_. That special direct integration is no longer\npart of Kubernetes (this removal was\n[announced](\/blog\/2020\/12\/08\/kubernetes-1-20-release-announcement\/#dockershim-deprecation)\nas part of the v1.20 release).\nYou can read\n[Check whether Dockershim removal affects you](\/docs\/tasks\/administer-cluster\/migrating-from-dockershim\/check-if-dockershim-removal-affects-you\/)\nto understand how this removal might affect you. To learn about migrating from using dockershim, see\n[Migrating from dockershim](\/docs\/tasks\/administer-cluster\/migrating-from-dockershim\/).\n\nIf you are running a version of Kubernetes other than v,\ncheck the documentation for that version.\n\n\n<!-- body -->\n## Install and configure prerequisites\n\n### Network configuration\n\nBy default, the Linux kernel does not allow IPv4 packets to be routed\nbetween interfaces. Most Kubernetes cluster networking implementations\nwill change this setting (if needed), but some might expect the\nadministrator to do it for them. 
(Some might also expect other sysctl\nparameters to be set, kernel modules to be loaded, etc; consult the\ndocumentation for your specific network implementation.)\n\n### Enable IPv4 packet forwarding {#prerequisite-ipv4-forwarding-optional}\n\nTo manually enable IPv4 packet forwarding:\n\n```bash\n# sysctl params required by setup, params persist across reboots\ncat <<EOF | sudo tee \/etc\/sysctl.d\/k8s.conf\nnet.ipv4.ip_forward = 1\nEOF\n\n# Apply sysctl params without reboot\nsudo sysctl --system\n```\n\nVerify that `net.ipv4.ip_forward` is set to 1 with:\n\n```bash\nsysctl net.ipv4.ip_forward\n```\n\n## cgroup drivers\n\nOn Linux, control groups\nare used to constrain resources that are allocated to processes.\n\nBoth the kubelet and the\nunderlying container runtime need to interface with control groups to enforce\n[resource management for pods and containers](\/docs\/concepts\/configuration\/manage-resources-containers\/)\nand set resources such as cpu\/memory requests and limits. To interface with control\ngroups, the kubelet and the container runtime need to use a *cgroup driver*.\nIt's critical that the kubelet and the container runtime use the same cgroup\ndriver and are configured the same.\n\nThere are two cgroup drivers available:\n\n* [`cgroupfs`](#cgroupfs-cgroup-driver)\n* [`systemd`](#systemd-cgroup-driver)\n\n### cgroupfs driver {#cgroupfs-cgroup-driver}\n\nThe `cgroupfs` driver is the [default cgroup driver in the kubelet](\/docs\/reference\/config-api\/kubelet-config.v1beta1).\nWhen the `cgroupfs` driver is used, the kubelet and the container runtime directly interface with\nthe cgroup filesystem to configure cgroups.\n\nThe `cgroupfs` driver is **not** recommended when\n[systemd](https:\/\/www.freedesktop.org\/wiki\/Software\/systemd\/) is the\ninit system because systemd expects a single cgroup manager on\nthe system. 
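A quick way to check which cgroup version a node is running (a minimal sketch; `\/sys\/fs\/cgroup` is the standard kernel mount point) is to inspect the filesystem type at the cgroup root:

```bash
# Print the filesystem type mounted at the cgroup root.
# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates cgroup v1.
stat -fc %T /sys/fs/cgroup/
```

On cgroup v2 hosts this prints `cgroup2fs`, in which case the `systemd` driver is the recommended choice.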
Additionally, if you use [cgroup v2](\/docs\/concepts\/architecture\/cgroups), use the `systemd`\ncgroup driver instead of `cgroupfs`.\n\n### systemd cgroup driver {#systemd-cgroup-driver}\n\nWhen [systemd](https:\/\/www.freedesktop.org\/wiki\/Software\/systemd\/) is chosen as the init\nsystem for a Linux distribution, the init process generates and consumes a root control group\n(`cgroup`) and acts as a cgroup manager.\n\nsystemd has a tight integration with cgroups and allocates a cgroup per systemd\nunit. As a result, if you use `systemd` as the init system with the `cgroupfs`\ndriver, the system gets two different cgroup managers.\n\nTwo cgroup managers result in two views of the available and in-use resources in\nthe system. In some cases, nodes that are configured to use `cgroupfs` for the\nkubelet and container runtime, but use `systemd` for the rest of the processes become\nunstable under resource pressure.\n\nThe approach to mitigate this instability is to use `systemd` as the cgroup driver for\nthe kubelet and the container runtime when systemd is the selected init system.\n\nTo set `systemd` as the cgroup driver, edit the\n[`KubeletConfiguration`](\/docs\/tasks\/administer-cluster\/kubelet-config-file\/)\noption of `cgroupDriver` and set it to `systemd`. For example:\n\n```yaml\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\n...\ncgroupDriver: systemd\n```\n\n\nStarting with v1.22 and later, when creating a cluster with kubeadm, if the user does not set\nthe `cgroupDriver` field under `KubeletConfiguration`, kubeadm defaults it to `systemd`.\n\n\nIf you configure `systemd` as the cgroup driver for the kubelet, you must also\nconfigure `systemd` as the cgroup driver for the container runtime. Refer to\nthe documentation for your container runtime for instructions. 
For example:\n\n*  [containerd](#containerd-systemd)\n*  [CRI-O](#cri-o)\n\nIn Kubernetes , with the `KubeletCgroupDriverFromCRI`\n[feature gate](\/docs\/reference\/command-line-tools-reference\/feature-gates\/)\nenabled and a container runtime that supports the `RuntimeConfig` CRI RPC,\nthe kubelet automatically detects the appropriate cgroup driver from the runtime,\nand ignores the `cgroupDriver` setting within the kubelet configuration.\n\n\nChanging the cgroup driver of a Node that has joined a cluster is a sensitive operation.\nIf the kubelet has created Pods using the semantics of one cgroup driver, changing the container\nruntime to another cgroup driver can cause errors when trying to re-create the Pod sandbox\nfor such existing Pods. Restarting the kubelet may not solve such errors.\n\nIf you have automation that makes it feasible, replace the node with another using the updated\nconfiguration, or reinstall it using automation.\n\n\n\n### Migrating to the `systemd` driver in kubeadm managed clusters\n\nIf you wish to migrate to the `systemd` cgroup driver in existing kubeadm managed clusters,\nfollow [configuring a cgroup driver](\/docs\/tasks\/administer-cluster\/kubeadm\/configure-cgroup-driver\/).\n\n## CRI version support {#cri-versions}\n\nYour container runtime must support at least v1alpha2 of the container runtime interface.\n\nKubernetes [starting v1.26](\/blog\/2022\/11\/18\/upcoming-changes-in-kubernetes-1-26\/#cri-api-removal)\n_only works_ with v1 of the CRI API. 
Earlier versions default\nto the v1 version; however, if a container runtime does not support the v1 API, the kubelet falls back to\nusing the (deprecated) v1alpha2 API instead.\n\n## Container runtimes\n\n### containerd\n\nThis section outlines the necessary steps to use containerd as a CRI runtime.\n\nTo install containerd on your system, follow the instructions on\n[getting started with containerd](https:\/\/github.com\/containerd\/containerd\/blob\/main\/docs\/getting-started.md).\nReturn to this step once you've created a valid `config.toml` configuration file.\n\nOn Linux, you can find this file under the path `\/etc\/containerd\/config.toml`.\n\nOn Windows, you can find this file under the path `C:\\Program Files\\containerd\\config.toml`.\n\nOn Linux the default CRI socket for containerd is `\/run\/containerd\/containerd.sock`.\nOn Windows the default CRI endpoint is `npipe:\/\/.\/pipe\/containerd-containerd`.\n\n#### Configuring the `systemd` cgroup driver {#containerd-systemd}\n\nTo use the `systemd` cgroup driver in `\/etc\/containerd\/config.toml` with `runc`, set\n\n```toml\n[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc]\n  ...\n  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n```\n\nThe `systemd` cgroup driver is recommended if you use [cgroup v2](\/docs\/concepts\/architecture\/cgroups).\n\nIf you installed containerd from a package (for example, RPM or `.deb`), you may find\nthat the CRI integration plugin is disabled by default.\n\nYou need CRI support enabled to use containerd with Kubernetes. Make sure that `cri`\nis not included in the `disabled_plugins` list within `\/etc\/containerd\/config.toml`;\nif you made changes to that file, also restart `containerd`.\n\nIf you experience container crash loops after the initial cluster installation or after\ninstalling a CNI, the containerd configuration provided with the package might contain\nincompatible configuration parameters. 
Consider resetting the containerd configuration\nwith `containerd config default > \/etc\/containerd\/config.toml` as specified in\n[getting-started.md](https:\/\/github.com\/containerd\/containerd\/blob\/main\/docs\/getting-started.md#advanced-topics)\nand then set the configuration parameters specified above accordingly.\n\nIf you apply this change, make sure to restart containerd:\n\n```shell\nsudo systemctl restart containerd\n```\n\nWhen using kubeadm, manually configure the\n[cgroup driver for kubelet](\/docs\/tasks\/administer-cluster\/kubeadm\/configure-cgroup-driver\/#configuring-the-kubelet-cgroup-driver).\n\nIn Kubernetes v1.28, you can enable automatic detection of the\ncgroup driver as an alpha feature. See [systemd cgroup driver](#systemd-cgroup-driver)\nfor more details.\n\n#### Overriding the sandbox (pause) image {#override-pause-image-containerd}\n\nIn your [containerd config](https:\/\/github.com\/containerd\/containerd\/blob\/main\/docs\/cri\/config.md) you can override the\nsandbox image by setting the following config:\n\n```toml\n[plugins.\"io.containerd.grpc.v1.cri\"]\n  sandbox_image = \"registry.k8s.io\/pause:3.2\"\n```\n\nYou might need to restart `containerd` as well once you've updated the config file: `systemctl restart containerd`.\n\n### CRI-O\n\nThis section contains the necessary steps to install CRI-O as a container runtime.\n\nTo install CRI-O, follow [CRI-O Install Instructions](https:\/\/github.com\/cri-o\/packaging\/blob\/main\/README.md#usage).\n\n#### cgroup driver\n\nCRI-O uses the systemd cgroup driver by default, which is likely to work fine\nfor you. 
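Before switching drivers, it can help to see whether an existing drop-in already overrides that default. A minimal check, assuming the standard CRI-O configuration locations (`/etc/crio/crio.conf` and `/etc/crio/crio.conf.d/`):

```shell
# Search the main config and any drop-in files for an explicit override;
# -s keeps grep quiet when those paths do not exist yet.
grep -rs "cgroup_manager" /etc/crio/crio.conf /etc/crio/crio.conf.d/ \
  || echo "no cgroup_manager override found (built-in default: systemd)"
```

If only the fallback message is printed, CRI-O is running with its compiled-in default.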
To switch to the `cgroupfs` cgroup driver, either edit\n`\/etc\/crio\/crio.conf` or place a drop-in configuration in\n`\/etc\/crio\/crio.conf.d\/02-cgroup-manager.conf`, for example:\n\n```toml\n[crio.runtime]\nconmon_cgroup = \"pod\"\ncgroup_manager = \"cgroupfs\"\n```\n\nNote the changed `conmon_cgroup`, which has to be set to the value\n`pod` when using CRI-O with `cgroupfs`. It is generally necessary to keep the\ncgroup driver configuration of the kubelet (usually done via kubeadm) and CRI-O\nin sync.\n\nIn Kubernetes v1.28, you can enable automatic detection of the\ncgroup driver as an alpha feature. See [systemd cgroup driver](#systemd-cgroup-driver)\nfor more details.\n\nFor CRI-O, the CRI socket is `\/var\/run\/crio\/crio.sock` by default.\n\n#### Overriding the sandbox (pause) image {#override-pause-image-cri-o}\n\nIn your [CRI-O config](https:\/\/github.com\/cri-o\/cri-o\/blob\/main\/docs\/crio.conf.5.md) you can set the following\nconfig value:\n\n```toml\n[crio.image]\npause_image=\"registry.k8s.io\/pause:3.6\"\n```\n\nThis config option supports live configuration reload: run `systemctl reload crio` or send\n`SIGHUP` to the `crio` process to apply the change.\n\n### Docker Engine {#docker}\n\nThese instructions assume that you are using the\n[`cri-dockerd`](https:\/\/mirantis.github.io\/cri-dockerd\/) adapter to integrate\nDocker Engine with Kubernetes.\n\n1. On each of your nodes, install Docker for your Linux distribution as per\n  [Install Docker Engine](https:\/\/docs.docker.com\/engine\/install\/#server).\n\n2. 
Install [`cri-dockerd`](https:\/\/mirantis.github.io\/cri-dockerd\/usage\/install), following the directions in the install section of the documentation.\n\nFor `cri-dockerd`, the CRI socket is `\/run\/cri-dockerd.sock` by default.\n\n### Mirantis Container Runtime {#mcr}\n\n[Mirantis Container Runtime](https:\/\/docs.mirantis.com\/mcr\/20.10\/overview.html) (MCR) is a commercially\navailable container runtime that was formerly known as Docker Enterprise Edition.\n\nYou can use Mirantis Container Runtime with Kubernetes using the open source\n[`cri-dockerd`](https:\/\/mirantis.github.io\/cri-dockerd\/) component, included with MCR.\n\nTo learn more about how to install Mirantis Container Runtime,\nvisit [MCR Deployment Guide](https:\/\/docs.mirantis.com\/mcr\/20.10\/install.html).\n\nCheck the systemd unit named `cri-docker.socket` to find out the path to the CRI\nsocket.\n\n#### Overriding the sandbox (pause) image {#override-pause-image-cri-dockerd-mcr}\n\nThe `cri-dockerd` adapter accepts a command line argument for\nspecifying which container image to use as the Pod infrastructure container (\u201cpause image\u201d).\nThe command line argument to use is `--pod-infra-container-image`.\n\n## \n\nAs well as a container runtime, your cluster will need a working\n[network plugin](\/docs\/concepts\/cluster-administration\/networking\/#how-to-implement-the-kubernetes-network-model).","site":"kubernetes setup"}
{"questions":"kubernetes setup A production quality Kubernetes cluster requires planning and preparation title Production environment weight 30 If your Kubernetes cluster is to run critical workloads it must be configured to be resilient nolist true overview Create a production quality Kubernetes cluster","answers":"---\ntitle: \"Production environment\"\ndescription: Create a production-quality Kubernetes cluster\nweight: 30\nno_list: true\n---\n<!-- overview -->\n\nA production-quality Kubernetes cluster requires planning and preparation.\nIf your Kubernetes cluster is to run critical workloads, it must be configured to be resilient.\nThis page explains steps you can take to set up a production-ready cluster,\nor to promote an existing cluster for production use.\nIf you're already familiar with production setup and want the links, skip to\n[What's next](#what-s-next).\n\n<!-- body -->\n\n## Production considerations\n\nTypically, a production Kubernetes cluster environment has more requirements than a\npersonal learning, development, or test environment. A production environment may require\nsecure access by many users, consistent availability, and the resources to adapt\nto changing demands.\n\nAs you decide where you want your production Kubernetes environment to live\n(on premises or in a cloud) and the amount of management you want to take\non or hand to others, consider how your requirements for a Kubernetes cluster\nare influenced by the following issues:\n\n- *Availability*: A single-machine Kubernetes [learning environment](\/docs\/setup\/#learning-environment)\n  has a single point of failure. 
Creating a highly available cluster means considering:\n  - Separating the control plane from the worker nodes.\n  - Replicating the control plane components on multiple nodes.\n  - Load balancing traffic to the cluster\u2019s API server.\n  - Having enough worker nodes available, or able to quickly become available, as changing workloads warrant it.\n\n- *Scale*: If you expect your production Kubernetes environment to receive a stable amount of\n  demand, you might be able to set up for the capacity you need and be done. However,\n  if you expect demand to grow over time or change dramatically based on things like\n  season or special events, you need to plan how to scale to relieve increased\n  pressure from more requests to the control plane and worker nodes or scale down to reduce unused\n  resources.\n\n- *Security and access management*: You have full admin privileges on your own\n  Kubernetes learning cluster. But shared clusters with important workloads, and\n  more than one or two users, require a more refined approach to who and what can\n  access cluster resources. You can use role-based access control\n  ([RBAC](\/docs\/reference\/access-authn-authz\/rbac\/)) and other\n  security mechanisms to make sure that users and workloads can get access to the\n  resources they need, while keeping workloads, and the cluster itself, secure.\n  You can set limits on the resources that users and workloads can access\n  by managing [policies](\/docs\/concepts\/policy\/) and\n  [container resources](\/docs\/concepts\/configuration\/manage-resources-containers\/).\n\nBefore building a Kubernetes production environment on your own, consider\nhanding off some or all of this job to\n[Turnkey Cloud Solutions](\/docs\/setup\/production-environment\/turnkey-solutions\/)\nproviders or other [Kubernetes Partners](\/partners\/).\nOptions include:\n\n- *Serverless*: Just run workloads on third-party equipment without managing\n  a cluster at all. 
You will be charged for things like CPU usage, memory, and\n  disk requests.\n- *Managed control plane*: Let the provider manage the scale and availability\n  of the cluster's control plane, as well as handle patches and upgrades.\n- *Managed worker nodes*: Configure pools of nodes to meet your needs,\n  then the provider makes sure those nodes are available and ready to implement\n  upgrades when needed.\n- *Integration*: There are providers that integrate Kubernetes with other\n  services you may need, such as storage, container registries, authentication\n  methods, and development tools.\n\nWhether you build a production Kubernetes cluster yourself or work with\npartners, review the following sections to evaluate your needs as they relate\nto your cluster\u2019s *control plane*, *worker nodes*, *user access*, and\n*workload resources*.\n\n## Production cluster setup\n\nIn a production-quality Kubernetes cluster, the control plane manages the\ncluster from services that can be spread across multiple computers\nin different ways. Each worker node, however, represents a single entity that\nis configured to run Kubernetes pods.\n\n### Production control plane\n\nThe simplest Kubernetes cluster has the entire control plane and worker node\nservices running on the same machine. You can grow that environment by adding\nworker nodes, as reflected in the diagram illustrated in\n[Kubernetes Components](\/docs\/concepts\/overview\/components\/).\nIf the cluster is meant to be available for a short period of time, or can be\ndiscarded if something goes seriously wrong, this might meet your needs.\n\nIf you need a more permanent, highly available cluster, however, you should\nconsider ways of extending the control plane. 
By design, control plane\nservices running on a single machine are not highly available.\nIf keeping the cluster up and running\nand ensuring that it can be repaired if something goes wrong is important,\nconsider these steps:\n\n- *Choose deployment tools*: You can deploy a control plane using tools such\n  as kubeadm, kops, and kubespray. See\n  [Installing Kubernetes with deployment tools](\/docs\/setup\/production-environment\/tools\/)\n  to learn tips for production-quality deployments using each of those deployment\n  methods. Different [Container Runtimes](\/docs\/setup\/production-environment\/container-runtimes\/)\n  are available to use with your deployments.\n- *Manage certificates*: Secure communications between control plane services\n  are implemented using certificates. Certificates are automatically generated\n  during deployment or you can generate them using your own certificate authority.\n  See [PKI certificates and requirements](\/docs\/setup\/best-practices\/certificates\/) for details.\n- *Configure load balancer for apiserver*: Configure a load balancer\n  to distribute external API requests to the apiserver service instances running on different nodes. See\n  [Create an External Load Balancer](\/docs\/tasks\/access-application-cluster\/create-external-load-balancer\/)\n  for details.\n- *Separate and back up the etcd service*: The etcd services can either run on the\n  same machines as other control plane services or run on separate machines, for\n  extra security and availability. 
Because etcd stores cluster configuration data,\n  backing up the etcd database should be done regularly to ensure that you can\n  repair that database if needed.\n  See the [etcd FAQ](https:\/\/etcd.io\/docs\/v3.5\/faq\/) for details on configuring and using etcd.\n  See [Operating etcd clusters for Kubernetes](\/docs\/tasks\/administer-cluster\/configure-upgrade-etcd\/)\n  and [Set up a High Availability etcd cluster with kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/setup-ha-etcd-with-kubeadm\/)\n  for details.\n- *Create multiple control plane systems*: For high availability, the\n  control plane should not be limited to a single machine. If the control plane\n  services are run by an init service (such as systemd), each service should run on at\n  least three machines. However, running control plane services as pods in\n  Kubernetes ensures that the replicated number of services that you request\n  will always be available.\n  The scheduler should be fault tolerant,\n  but not highly available. Some deployment tools set up the [Raft](https:\/\/raft.github.io\/)\n  consensus algorithm to do leader election of Kubernetes services. If the\n  primary goes away, another service elects itself and takes over.\n- *Span multiple zones*: If keeping your cluster available at all times is\n  critical, consider creating a cluster that runs across multiple data centers,\n  referred to as zones in cloud environments. Groups of zones are referred to as regions.\n  By spreading a cluster across\n  multiple zones in the same region, you can improve the chances that your\n  cluster will continue to function even if one zone becomes unavailable.\n  See [Running in multiple zones](\/docs\/setup\/best-practices\/multiple-zones\/) for details.\n- *Manage ongoing features*: If you plan to keep your cluster over time,\n  there are tasks you need to do to maintain its health and security. 
For example,\n  if you installed with kubeadm, there are instructions to help you with\n  [Certificate Management](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-certs\/)\n  and [Upgrading kubeadm clusters](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-upgrade\/).\n  See [Administer a Cluster](\/docs\/tasks\/administer-cluster\/)\n  for a longer list of Kubernetes administrative tasks.\n\nTo learn about available options when you run control plane services, see\n[kube-apiserver](\/docs\/reference\/command-line-tools-reference\/kube-apiserver\/),\n[kube-controller-manager](\/docs\/reference\/command-line-tools-reference\/kube-controller-manager\/),\nand [kube-scheduler](\/docs\/reference\/command-line-tools-reference\/kube-scheduler\/)\ncomponent pages. For highly available control plane examples, see\n[Options for Highly Available topology](\/docs\/setup\/production-environment\/tools\/kubeadm\/ha-topology\/),\n[Creating Highly Available clusters with kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/high-availability\/),\nand [Operating etcd clusters for Kubernetes](\/docs\/tasks\/administer-cluster\/configure-upgrade-etcd\/).\nSee [Backing up an etcd cluster](\/docs\/tasks\/administer-cluster\/configure-upgrade-etcd\/#backing-up-an-etcd-cluster)\nfor information on making an etcd backup plan.\n\n### Production worker nodes\n\nProduction-quality workloads need to be resilient, and so does anything they rely\non (such as CoreDNS). Whether you manage your own\ncontrol plane or have a cloud provider do it for you, you still need to\nconsider how you want to manage your worker nodes (also referred to\nsimply as *nodes*).\n\n- *Configure nodes*: Nodes can be physical or virtual machines. If you want to\n  create and manage your own nodes, you can install a supported operating system,\n  then add and run the appropriate\n  [Node services](\/docs\/concepts\/architecture\/#node-components). 
Consider:\n  - The demands of your workloads: when you set up nodes, provide appropriate memory, CPU, disk speed, and storage capacity.\n  - Whether generic computer systems will do or you have workloads that need GPU processors, Windows nodes, or VM isolation.\n- *Validate nodes*: See [Valid node setup](\/docs\/setup\/best-practices\/node-conformance\/)\n  for information on how to ensure that a node meets the requirements to join\n  a Kubernetes cluster.\n- *Add nodes to the cluster*: If you are managing your own cluster you can\n  add nodes by setting up your own machines and either adding them manually or\n  having them register themselves to the cluster\u2019s apiserver. See the\n  [Nodes](\/docs\/concepts\/architecture\/nodes\/) section for information on how to set up Kubernetes to add nodes in these ways.\n- *Scale nodes*: Have a plan for expanding the capacity your cluster will\n  eventually need. See [Considerations for large clusters](\/docs\/setup\/best-practices\/cluster-large\/)\n  to help determine how many nodes you need, based on the number of pods and\n  containers you need to run. If you are managing nodes yourself, this can mean\n  purchasing and installing your own physical equipment.\n- *Autoscale nodes*: Read [Cluster Autoscaling](\/docs\/concepts\/cluster-administration\/cluster-autoscaling) to learn about the\n  tools available to automatically manage your nodes and the capacity they\n  provide.\n- *Set up node health checks*: For important workloads, you want to make sure\n  that the nodes and pods running on those nodes are healthy. Using the\n  [Node Problem Detector](\/docs\/tasks\/debug\/debug-cluster\/monitor-node-health\/)\n  daemon, you can ensure your nodes are healthy.\n\n## Production user management\n\nIn production, you may be moving from a model where you or a small group of\npeople are accessing the cluster to where there may potentially be dozens or\nhundreds of people. 
In a learning environment or platform prototype, you might have a single\nadministrative account for everything you do. In production, you will want\nmore accounts with different levels of access to different namespaces.\n\nTaking on a production-quality cluster means deciding how you\nwant to selectively allow access by other users. In particular, you need to\nselect strategies for validating the identities of those who try to access your\ncluster (authentication) and deciding if they have permissions to do what they\nare asking (authorization):\n\n- *Authentication*: The apiserver can authenticate users using client\n  certificates, bearer tokens, an authenticating proxy, or HTTP basic auth.\n  You can choose which authentication methods you want to use.\n  Using plugins, the apiserver can leverage your organization\u2019s existing\n  authentication methods, such as LDAP or Kerberos. See\n  [Authentication](\/docs\/reference\/access-authn-authz\/authentication\/)\n  for a description of these different methods of authenticating Kubernetes users.\n- *Authorization*: When you set out to authorize your regular users, you will probably choose\n  between RBAC and ABAC authorization. See [Authorization Overview](\/docs\/reference\/access-authn-authz\/authorization\/)\n  to review different modes for authorizing user accounts (as well as service account access to\n  your cluster):\n  - *Role-based access control* ([RBAC](\/docs\/reference\/access-authn-authz\/rbac\/)): Lets you\n    assign access to your cluster by allowing specific sets of permissions to authenticated users.\n    Permissions can be assigned for a specific namespace (Role) or across the entire cluster\n    (ClusterRole). 
Then using RoleBindings and ClusterRoleBindings, those permissions can be attached\n    to particular users.\n  - *Attribute-based access control* ([ABAC](\/docs\/reference\/access-authn-authz\/abac\/)): Lets you\n    create policies based on resource attributes in the cluster and will allow or deny access\n    based on those attributes. Each line of a policy file identifies versioning properties (apiVersion\n    and kind) and a map of spec properties to match the subject (user or group), resource property,\n    non-resource property (\/version or \/apis), and readonly. See\n    [Examples](\/docs\/reference\/access-authn-authz\/abac\/#examples) for details.\n\nAs someone setting up authentication and authorization on your production Kubernetes cluster, here are some things to consider:\n\n- *Set the authorization mode*: When the Kubernetes API server\n  ([kube-apiserver](\/docs\/reference\/command-line-tools-reference\/kube-apiserver\/))\n  starts, the supported authorization modes must be set using the *--authorization-mode*\n  flag. For example, that flag in the *kube-apiserver.yaml* file (in *\/etc\/kubernetes\/manifests*)\n  could be set to Node,RBAC. This would allow Node and RBAC authorization for authenticated requests.\n- *Create user certificates and role bindings (RBAC)*: If you are using RBAC\n  authorization, users can create a CertificateSigningRequest (CSR) that can be\n  signed by the cluster CA. Then you can bind Roles and ClusterRoles to each user.\n  See [Certificate Signing Requests](\/docs\/reference\/access-authn-authz\/certificate-signing-requests\/)\n  for details.\n- *Create policies that combine attributes (ABAC)*: If you are using ABAC\n  authorization, you can assign combinations of attributes to form policies to\n  authorize selected users or groups to access particular resources (such as a\n  pod), namespace, or apiGroup. 
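For illustration, each line of an ABAC policy file is a JSON object; a policy granting a placeholder user read-only access to pods in one namespace (the user and namespace names here are examples only) could look like:

```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}
```
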
For more information, see\n  [Examples](\/docs\/reference\/access-authn-authz\/abac\/#examples).\n- *Consider Admission Controllers*: Additional forms of authorization for\n  requests that can come in through the API server include\n  [Webhook Token Authentication](\/docs\/reference\/access-authn-authz\/authentication\/#webhook-token-authentication).\n  Webhooks and other special authorization types need to be enabled by adding\n  [Admission Controllers](\/docs\/reference\/access-authn-authz\/admission-controllers\/)\n  to the API server.\n\n## Set limits on workload resources\n\nDemands from production workloads can cause pressure both inside and outside\nof the Kubernetes control plane. Consider these items when setting up for the\nneeds of your cluster's workloads:\n\n- *Set namespace limits*: Set per-namespace quotas on things like memory and CPU. See\n  [Manage Memory, CPU, and API Resources](\/docs\/tasks\/administer-cluster\/manage-resources\/)\n  for details. You can also set\n  [Hierarchical Namespaces](\/blog\/2020\/08\/14\/introducing-hierarchical-namespaces\/)\n  for inheriting limits.\n- *Prepare for DNS demand*: If you expect workloads to massively scale up,\n  your DNS service must be ready to scale up as well. See\n  [Autoscale the DNS service in a Cluster](\/docs\/tasks\/administer-cluster\/dns-horizontal-autoscaling\/).\n- *Create additional service accounts*: User accounts determine what users can\n  do on a cluster, while a service account defines pod access within a particular\n  namespace. By default, a pod takes on the default service account from its namespace.\n  See [Managing Service Accounts](\/docs\/reference\/access-authn-authz\/service-accounts-admin\/)\n  for information on creating a new service account. For example, you might want to:\n  - Add secrets that a pod could use to pull images from a particular container registry. 
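As a minimal sketch (the account name, namespace, and Secret name below are hypothetical), a ServiceAccount that lets pods in its namespace pull from a private registry references an existing credential Secret:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot      # hypothetical service account name
  namespace: ci          # hypothetical namespace
imagePullSecrets:
  - name: myregistrykey  # an existing kubernetes.io/dockerconfigjson Secret
```
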
See\n    [Configure Service Accounts for Pods](\/docs\/tasks\/configure-pod-container\/configure-service-account\/)\n    for an example.\n  - Assign RBAC permissions to a service account. See\n    [ServiceAccount permissions](\/docs\/reference\/access-authn-authz\/rbac\/#service-account-permissions)\n    for details.\n\n## What's next\n\n- Decide if you want to build your own production Kubernetes or obtain one from\n  available [Turnkey Cloud Solutions](\/docs\/setup\/production-environment\/turnkey-solutions\/)\n  or [Kubernetes Partners](\/partners\/).\n- If you choose to build your own cluster, plan how you want to\n  handle [certificates](\/docs\/setup\/best-practices\/certificates\/)\n  and set up high availability for features such as\n  [etcd](\/docs\/setup\/production-environment\/tools\/kubeadm\/setup-ha-etcd-with-kubeadm\/)\n  and the\n  [API server](\/docs\/setup\/production-environment\/tools\/kubeadm\/ha-topology\/).\n- Choose from [kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/),\n  [kops](https:\/\/kops.sigs.k8s.io\/) or\n  [Kubespray](https:\/\/kubespray.io\/) deployment methods.\n- Configure user management by determining your\n  [Authentication](\/docs\/reference\/access-authn-authz\/authentication\/) and\n  [Authorization](\/docs\/reference\/access-authn-authz\/authorization\/) methods.\n- Prepare for application workloads by setting up\n  [resource limits](\/docs\/tasks\/administer-cluster\/manage-resources\/),\n  [DNS autoscaling](\/docs\/tasks\/administer-cluster\/dns-horizontal-autoscaling\/)\n  and [service accounts](\/docs\/reference\/access-authn-authz\/service-accounts-admin\/).\n","site":"kubernetes setup","answers_cleaned":"    title   Production environment  description  Create a production quality Kubernetes cluster weight  30 no list  true          overview      A production quality Kubernetes cluster requires planning and preparation  If your Kubernetes cluster is to run critical workloads  it must be configured to be 
resilient  This page explains steps you can take to set up a production ready cluster  or to promote an existing cluster for production use  If you re already familiar with production setup and want the links  skip to  What s next   what s next         body         Production considerations  Typically  a production Kubernetes cluster environment has more requirements than a personal learning  development  or test environment Kubernetes  A production environment may require secure access by many users  consistent availability  and the resources to adapt to changing demands   As you decide where you want your production Kubernetes environment to live  on premises or in a cloud  and the amount of management you want to take on or hand to others  consider how your requirements for a Kubernetes cluster are influenced by the following issues      Availability   A single machine Kubernetes  learning environment   docs setup  learning environment    has a single point of failure  Creating a highly available cluster means considering      Separating the control plane from the worker nodes      Replicating the control plane components on multiple nodes      Load balancing traffic to the cluster s       Having enough worker nodes available  or able to quickly become available  as changing workloads warrant it      Scale   If you expect your production Kubernetes environment to receive a stable amount of   demand  you might be able to set up for the capacity you need and be done  However    if you expect demand to grow over time or change dramatically based on things like   season or special events  you need to plan how to scale to relieve increased   pressure from more requests to the control plane and worker nodes or scale down to reduce unused   resources      Security and access management   You have full admin privileges on your own   Kubernetes learning cluster  But shared clusters with important workloads  and   more than one or two users  require a more refined 
approach to who and what can   access cluster resources  You can use role based access control     RBAC   docs reference access authn authz rbac    and other   security mechanisms to make sure that users and workloads can get access to the   resources they need  while keeping workloads  and the cluster itself  secure    You can set limits on the resources that users and workloads can access   by managing  policies   docs concepts policy   and    container resources   docs concepts configuration manage resources containers     Before building a Kubernetes production environment on your own  consider handing off some or all of this job to   Turnkey Cloud Solutions   docs setup production environment turnkey solutions    providers or other  Kubernetes Partners   partners    Options include      Serverless   Just run workloads on third party equipment without managing   a cluster at all  You will be charged for things like CPU usage  memory  and   disk requests     Managed control plane   Let the provider manage the scale and availability   of the cluster s control plane  as well as handle patches and upgrades     Managed worker nodes   Configure pools of nodes to meet your needs    then the provider makes sure those nodes are available and ready to implement   upgrades when needed     Integration   There are providers that integrate Kubernetes with other   services you may need  such as storage  container registries  authentication   methods  and development tools   Whether you build a production Kubernetes cluster yourself or work with partners  review the following sections to evaluate your needs as they relate to your cluster s  control plane    worker nodes    user access   and  workload resources       Production cluster setup  In a production quality Kubernetes cluster  the control plane manages the cluster from services that can be spread across multiple computers in different ways  Each worker node  however  represents a single entity that is configured to run 
Kubernetes pods       Production control plane  The simplest Kubernetes cluster has the entire control plane and worker node services running on the same machine  You can grow that environment by adding worker nodes  as reflected in the diagram illustrated in  Kubernetes Components   docs concepts overview components    If the cluster is meant to be available for a short period of time  or can be discarded if something goes seriously wrong  this might meet your needs   If you need a more permanent  highly available cluster  however  you should consider ways of extending the control plane  By design  one machine control plane services running on a single machine are not highly available  If keeping the cluster up and running and ensuring that it can be repaired if something goes wrong is important  consider these steps      Choose deployment tools   You can deploy a control plane using tools such   as kubeadm  kops  and kubespray  See    Installing Kubernetes with deployment tools   docs setup production environment tools     to learn tips for production quality deployments using each of those deployment   methods  Different  Container Runtimes   docs setup production environment container runtimes     are available to use with your deployments     Manage certificates   Secure communications between control plane services   are implemented using certificates  Certificates are automatically generated   during deployment or you can generate them using your own certificate authority    See  PKI certificates and requirements   docs setup best practices certificates   for details     Configure load balancer for apiserver   Configure a load balancer   to distribute external API requests to the apiserver service instances running on different nodes  See     Create an External Load Balancer   docs tasks access application cluster create external load balancer     for details     Separate and backup etcd service   The etcd services can either run on the   same machines as 
other control plane services or run on separate machines  for   extra security and availability  Because etcd stores cluster configuration data    backing up the etcd database should be done regularly to ensure that you can   repair that database if needed    See the  etcd FAQ  https   etcd io docs v3 5 faq   for details on configuring and using etcd    See  Operating etcd clusters for Kubernetes   docs tasks administer cluster configure upgrade etcd     and  Set up a High Availability etcd cluster with kubeadm   docs setup production environment tools kubeadm setup ha etcd with kubeadm     for details     Create multiple control plane systems   For high availability  the   control plane should not be limited to a single machine  If the control plane   services are run by an init service  such as systemd   each service should run on at   least three machines  However  running control plane services as pods in   Kubernetes ensures that the replicated number of services that you request   will always be available    The scheduler should be fault tolerant    but not highly available  Some deployment tools set up  Raft  https   raft github io     consensus algorithm to do leader election of Kubernetes services  If the   primary goes away  another service elects itself and take over      Span multiple zones   If keeping your cluster available at all times is   critical  consider creating a cluster that runs across multiple data centers    referred to as zones in cloud environments  Groups of zones are referred to as regions    By spreading a cluster across   multiple zones in the same region  it can improve the chances that your   cluster will continue to function even if one zone becomes unavailable    See  Running in multiple zones   docs setup best practices multiple zones   for details     Manage on going features   If you plan to keep your cluster over time    there are tasks you need to do to maintain its health and security  For example    if you installed with 
kubeadm  there are instructions to help you with    Certificate Management   docs tasks administer cluster kubeadm kubeadm certs     and  Upgrading kubeadm clusters   docs tasks administer cluster kubeadm kubeadm upgrade      See  Administer a Cluster   docs tasks administer cluster     for a longer list of Kubernetes administrative tasks   To learn about available options when you run control plane services  see  kube apiserver   docs reference command line tools reference kube apiserver     kube controller manager   docs reference command line tools reference kube controller manager    and  kube scheduler   docs reference command line tools reference kube scheduler   component pages  For highly available control plane examples  see  Options for Highly Available topology   docs setup production environment tools kubeadm ha topology     Creating Highly Available clusters with kubeadm   docs setup production environment tools kubeadm high availability    and  Operating etcd clusters for Kubernetes   docs tasks administer cluster configure upgrade etcd    See  Backing up an etcd cluster   docs tasks administer cluster configure upgrade etcd  backing up an etcd cluster  for information on making an etcd backup plan       Production worker nodes  Production quality workloads need to be resilient and anything they rely on needs to be resilient  such as CoreDNS   Whether you manage your own control plane or have a cloud provider do it for you  you still need to consider how you want to manage your worker nodes  also referred to simply as  nodes          Configure nodes   Nodes can be physical or virtual machines  If you want to   create and manage your own nodes  you can install a supported operating system    then add and run the appropriate    Node services   docs concepts architecture  node components   Consider      The demands of your workloads when you set up nodes by having appropriate memory  CPU  and disk speed and storage capacity available      Whether generic 
computer systems will do or you have workloads that need GPU processors  Windows nodes  or VM isolation     Validate nodes   See  Valid node setup   docs setup best practices node conformance     for information on how to ensure that a node meets the requirements to join   a Kubernetes cluster     Add nodes to the cluster   If you are managing your own cluster you can   add nodes by setting up your own machines and either adding them manually or   having them register themselves to the cluster s apiserver  See the    Nodes   docs concepts architecture nodes   section for information on how to set up Kubernetes to add nodes in these ways     Scale nodes   Have a plan for expanding the capacity your cluster will   eventually need  See  Considerations for large clusters   docs setup best practices cluster large     to help determine how many nodes you need  based on the number of pods and   containers you need to run  If you are managing nodes yourself  this can mean   purchasing and installing your own physical equipment     Autoscale nodes   Read  Cluster Autoscaling   docs concepts cluster administration cluster autoscaling  to learn about the   tools available to automatically manage your nodes and the capacity they   provide     Set up node health checks   For important workloads  you want to make sure   that the nodes and pods running on those nodes are healthy  Using the    Node Problem Detector   docs tasks debug debug cluster monitor node health     daemon  you can ensure your nodes are healthy      Production user management  In production  you may be moving from a model where you or a small group of people are accessing the cluster to where there may potentially be dozens or hundreds of people  In a learning environment or platform prototype  you might have a single administrative account for everything you do  In production  you will want more accounts with different levels of access to different namespaces   Taking on a production quality cluster means 
deciding how you want to selectively allow access by other users  In particular  you need to select strategies for validating the identities of those who try to access your cluster  authentication  and deciding if they have permissions to do what they are asking  authorization       Authentication   The apiserver can authenticate users using client   certificates  bearer tokens  an authenticating proxy  or HTTP basic auth    You can choose which authentication methods you want to use    Using plugins  the apiserver can leverage your organization s existing   authentication methods  such as LDAP or Kerberos  See    Authentication   docs reference access authn authz authentication     for a description of these different methods of authenticating Kubernetes users     Authorization   When you set out to authorize your regular users  you will probably choose   between RBAC and ABAC authorization  See  Authorization Overview   docs reference access authn authz authorization     to review different modes for authorizing user accounts  as well as service account access to   your cluster        Role based access control    RBAC   docs reference access authn authz rbac     Lets you     assign access to your cluster by allowing specific sets of permissions to authenticated users      Permissions can be assigned for a specific namespace  Role  or across the entire cluster      ClusterRole   Then using RoleBindings and ClusterRoleBindings  those permissions can be attached     to particular users       Attribute based access control    ABAC   docs reference access authn authz abac     Lets you     create policies based on resource attributes in the cluster and will allow or deny access     based on those attributes  Each line of a policy file identifies versioning properties  apiVersion     and kind  and a map of spec properties to match the subject  user or group   resource property      non resource property   version or  apis   and readonly  See      Examples   docs 
reference access authn authz abac  examples  for details   As someone setting up authentication and authorization on your production Kubernetes cluster  here are some things to consider      Set the authorization mode   When the Kubernetes API server     kube apiserver   docs reference command line tools reference kube apiserver      starts  the supported authentication modes must be set using the    authorization mode    flag  For example  that flag in the  kube adminserver yaml  file  in   etc kubernetes manifests     could be set to Node RBAC  This would allow Node and RBAC authorization for authenticated requests     Create user certificates and role bindings  RBAC    If you are using RBAC   authorization  users can create a CertificateSigningRequest  CSR  that can be   signed by the cluster CA  Then you can bind Roles and ClusterRoles to each user    See  Certificate Signing Requests   docs reference access authn authz certificate signing requests     for details     Create policies that combine attributes  ABAC    If you are using ABAC   authorization  you can assign combinations of attributes to form policies to   authorize selected users or groups to access particular resources  such as a   pod   namespace  or apiGroup  For more information  see    Examples   docs reference access authn authz abac  examples      Consider Admission Controllers   Additional forms of authorization for   requests that can come in through the API server include    Webhook Token Authentication   docs reference access authn authz authentication  webhook token authentication     Webhooks and other special authorization types need to be enabled by adding    Admission Controllers   docs reference access authn authz admission controllers     to the API server      Set limits on workload resources  Demands from production workloads can cause pressure both inside and outside of the Kubernetes control plane  Consider these items when setting up for the needs of your cluster s workloads   
   Set namespace limits   Set per namespace quotas on things like memory and CPU  See    Manage Memory  CPU  and API Resources   docs tasks administer cluster manage resources     for details  You can also set    Hierarchical Namespaces   blog 2020 08 14 introducing hierarchical namespaces     for inheriting limits     Prepare for DNS demand   If you expect workloads to massively scale up    your DNS service must be ready to scale up as well  See    Autoscale the DNS service in a Cluster   docs tasks administer cluster dns horizontal autoscaling       Create additional service accounts   User accounts determine what users can   do on a cluster  while a service account defines pod access within a particular   namespace  By default  a pod takes on the default service account from its namespace    See  Managing Service Accounts   docs reference access authn authz service accounts admin     for information on creating a new service account  For example  you might want to      Add secrets that a pod could use to pull images from a particular container registry  See      Configure Service Accounts for Pods   docs tasks configure pod container configure service account       for an example      Assign RBAC permissions to a service account  See      ServiceAccount permissions   docs reference access authn authz rbac  service account permissions      for details          Decide if you want to build your own production Kubernetes or obtain one from   available  Turnkey Cloud Solutions   docs setup production environment turnkey solutions     or  Kubernetes Partners   partners      If you choose to build your own cluster  plan how you want to   handle  certificates   docs setup best practices certificates     and set up high availability for features such as    etcd   docs setup production environment tools kubeadm setup ha etcd with kubeadm     and the    API server   docs setup production environment tools kubeadm ha topology      Choose from  kubeadm   docs setup 
production environment tools kubeadm       kops  https   kops sigs k8s io   or    Kubespray  https   kubespray io   deployment methods    Configure user management by determining your    Authentication   docs reference access authn authz authentication   and    Authorization   docs reference access authn authz authorization   methods    Prepare for application workloads by setting up    resource limits   docs tasks administer cluster manage resources       DNS autoscaling   docs tasks administer cluster dns horizontal autoscaling     and  service accounts   docs reference access authn authz service accounts admin    "}
{"questions":"kubernetes setup weight 20 title Troubleshooting kubeadm contenttype concept As with any program you might run into an error installing or running kubeadm This page lists some common failure scenarios and have provided steps that can help you understand and fix the problem overview","answers":"---\ntitle: Troubleshooting kubeadm\ncontent_type: concept\nweight: 20\n---\n\n<!-- overview -->\n\nAs with any program, you might run into an error installing or running kubeadm.\nThis page lists some common failure scenarios and provides steps that can help you understand and fix the problem.\n\nIf your problem is not listed below, please follow these steps:\n\n- If you think your problem is a bug with kubeadm:\n  - Go to [github.com\/kubernetes\/kubeadm](https:\/\/github.com\/kubernetes\/kubeadm\/issues) and search for existing issues.\n  - If no issue exists, please [open one](https:\/\/github.com\/kubernetes\/kubeadm\/issues\/new) and follow the issue template.\n\n- If you are unsure about how kubeadm works, you can ask on [Slack](https:\/\/slack.k8s.io\/) in `#kubeadm`,\n  or open a question on [StackOverflow](https:\/\/stackoverflow.com\/questions\/tagged\/kubernetes). 
Please include\n  relevant tags like `#kubernetes` and `#kubeadm` so folks can help you.\n\n<!-- body -->\n\n## Not possible to join a v1.18 Node to a v1.17 cluster due to missing RBAC\n\nIn v1.18 kubeadm added prevention for joining a Node in the cluster if a Node with the same name already exists.\nThis required adding RBAC for the bootstrap-token user to be able to GET a Node object.\n\nHowever, this causes an issue where `kubeadm join` from v1.18 cannot join a cluster created by kubeadm v1.17.\n\nTo work around the issue you have two options:\n\nExecute `kubeadm init phase bootstrap-token` on a control-plane node using kubeadm v1.18.\nNote that this enables the rest of the bootstrap-token permissions as well.\n\nor\n\nApply the following RBAC manually using `kubectl apply -f ...`:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRole\nmetadata:\n  name: kubeadm:get-nodes\nrules:\n  - apiGroups:\n      - \"\"\n    resources:\n      - nodes\n    verbs:\n      - get\n---\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: kubeadm:get-nodes\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: kubeadm:get-nodes\nsubjects:\n  - apiGroup: rbac.authorization.k8s.io\n    kind: Group\n    name: system:bootstrappers:kubeadm:default-node-token\n```\n\n## `ebtables` or some similar executable not found during installation\n\nIf you see the following warnings while running `kubeadm init`\n\n```console\n[preflight] WARNING: ebtables not found in system path\n[preflight] WARNING: ethtool not found in system path\n```\n\nThen you may be missing `ebtables`, `ethtool` or a similar executable on your node.\nYou can install them with the following commands:\n\n- For Ubuntu\/Debian users, run `apt install ebtables ethtool`.\n- For CentOS\/Fedora users, run `yum install ebtables ethtool`.\n\n## kubeadm blocks waiting for control plane during installation\n\nIf you notice that `kubeadm init` hangs after 
printing out the following line:\n\n```console\n[apiclient] Created API client, waiting for the control plane to become ready\n```\n\nThis may be caused by a number of problems. The most common are:\n\n- network connection problems. Check that your machine has full network connectivity before continuing.\n- the cgroup driver of the container runtime differs from that of the kubelet. To understand how to\n  configure it properly, see [Configuring a cgroup driver](\/docs\/tasks\/administer-cluster\/kubeadm\/configure-cgroup-driver\/).\n- control plane containers are crashlooping or hanging. You can check this by running `docker ps`\n  and investigating each container by running `docker logs`. For other container runtimes, see\n  [Debugging Kubernetes nodes with crictl](\/docs\/tasks\/debug\/debug-cluster\/crictl\/).\n\n## kubeadm blocks when removing managed containers\n\nThe following could happen if the container runtime halts and does not remove\nany Kubernetes-managed containers:\n\n```shell\nsudo kubeadm reset\n```\n\n```console\n[preflight] Running pre-flight checks\n[reset] Stopping the kubelet service\n[reset] Unmounting mounted directories in \"\/var\/lib\/kubelet\"\n[reset] Removing kubernetes-managed containers\n(block)\n```\n\nA possible solution is to restart the container runtime and then re-run `kubeadm reset`.\nYou can also use `crictl` to debug the state of the container runtime. See\n[Debugging Kubernetes nodes with crictl](\/docs\/tasks\/debug\/debug-cluster\/crictl\/).\n\n## Pods in `RunContainerError`, `CrashLoopBackOff` or `Error` state\n\nRight after `kubeadm init` there should not be any pods in these states.\n\n- If there are pods in one of these states _right after_ `kubeadm init`, please open an\n  issue in the kubeadm repo. 
`coredns` (or `kube-dns`) should be in the `Pending` state\n  until you have deployed the network add-on.\n- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state\n  after deploying the network add-on and nothing happens to `coredns` (or `kube-dns`),\n  it's very likely that the Pod Network add-on that you installed is somehow broken.\n  You might have to grant it more RBAC privileges or use a newer version. Please file\n  an issue in the Pod Network providers' issue tracker and get the issue triaged there.\n\n## `coredns` is stuck in the `Pending` state\n\nThis is **expected** and part of the design. kubeadm is network provider-agnostic, so the admin\nshould [install the pod network add-on](\/docs\/concepts\/cluster-administration\/addons\/)\nof choice. You have to install a Pod Network\nbefore CoreDNS may be deployed fully. Hence the `Pending` state before the network is set up.\n\n## `HostPort` services do not work\n\nThe `HostPort` and `HostIP` functionality is available depending on your Pod Network\nprovider. Please contact the author of the Pod Network add-on to find out whether\n`HostPort` and `HostIP` functionality is available.\n\nCalico, Canal, and Flannel CNI providers are verified to support HostPort.\n\nFor more information, see the\n[CNI portmap documentation](https:\/\/github.com\/containernetworking\/plugins\/blob\/master\/plugins\/meta\/portmap\/README.md).\n\nIf your network provider does not support the portmap CNI plugin, you may need to use the\n[NodePort feature of services](\/docs\/concepts\/services-networking\/service\/#type-nodeport)\nor use `HostNetwork=true`.\n\n## Pods are not accessible via their Service IP\n\n- Many network add-ons do not yet enable [hairpin mode](\/docs\/tasks\/debug\/debug-application\/debug-service\/#a-pod-fails-to-reach-itself-via-the-service-ip)\n  which allows pods to access themselves via their Service IP. 
This is an issue related to\n  [CNI](https:\/\/github.com\/containernetworking\/cni\/issues\/476). Please contact the network\n  add-on provider to get the latest status of their support for hairpin mode.\n\n- If you are using VirtualBox (directly or via Vagrant), you will need to\n  ensure that `hostname -i` returns a routable IP address. By default, the first\n  interface is connected to a non-routable host-only network. A workaround\n  is to modify `\/etc\/hosts`; see this\n  [Vagrantfile](https:\/\/github.com\/errordeveloper\/k8s-playground\/blob\/22dd39dfc06111235620e6c4404a96ae146f26fd\/Vagrantfile#L11)\n  for an example.\n\n## TLS certificate errors\n\nThe following error indicates a possible certificate mismatch.\n\n```none\n# kubectl get pods\nUnable to connect to the server: x509: certificate signed by unknown authority (possibly because of \"crypto\/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")\n```\n\n- Verify that the `$HOME\/.kube\/config` file contains a valid certificate, and\n  regenerate a certificate if necessary. The certificates in a kubeconfig file\n  are base64 encoded. 
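As a sketch, decoding and inspecting that embedded client certificate can be wrapped in a small helper (the function name `check_kubeconfig_cert` is illustrative, not part of any tool; default kubeadm paths are assumed):

```shell
# Sketch: decode the base64 client certificate embedded in a kubeconfig and
# print its subject and validity window. The helper name is illustrative.
check_kubeconfig_cert() {
  local cfg="${1:-$HOME/.kube/config}"
  grep 'client-certificate-data' "$cfg" \
    | awk '{print $2}' \
    | base64 --decode \
    | openssl x509 -noout -subject -dates
}

# Example: check_kubeconfig_cert "$HOME/.kube/config"
```

An expired `notAfter` date in the output would explain the x509 verification error and means the certificate should be regenerated.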
The `base64 --decode` command can be used to decode the certificate\n  and `openssl x509 -text -noout` can be used for viewing the certificate information.\n\n- Unset the `KUBECONFIG` environment variable using:\n\n  ```sh\n  unset KUBECONFIG\n  ```\n\n  Or set it to the default `KUBECONFIG` location:\n\n  ```sh\n  export KUBECONFIG=\/etc\/kubernetes\/admin.conf\n  ```\n\n- Another workaround is to overwrite the existing `kubeconfig` for the \"admin\" user:\n\n  ```sh\n  mv $HOME\/.kube $HOME\/.kube.bak\n  mkdir $HOME\/.kube\n  sudo cp -i \/etc\/kubernetes\/admin.conf $HOME\/.kube\/config\n  sudo chown $(id -u):$(id -g) $HOME\/.kube\/config\n  ```\n\n## Kubelet client certificate rotation fails {#kubelet-client-cert}\n\nBy default, kubeadm configures a kubelet with automatic rotation of client certificates by using the\n`\/var\/lib\/kubelet\/pki\/kubelet-client-current.pem` symlink specified in `\/etc\/kubernetes\/kubelet.conf`.\nIf this rotation process fails, you might see errors such as `x509: certificate has expired or is not yet valid`\nin kube-apiserver logs. To fix the issue you must follow these steps:\n\n1. Backup and delete `\/etc\/kubernetes\/kubelet.conf` and `\/var\/lib\/kubelet\/pki\/kubelet-client*` from the failed node.\n1. From a working control plane node in the cluster that has `\/etc\/kubernetes\/pki\/ca.key` execute\n   `kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf`.\n   `$NODE` must be set to the name of the existing failed node in the cluster.\n   Modify the resulting `kubelet.conf` manually to adjust the cluster name and server endpoint,\n   or pass `kubeconfig user --config` (see [Generating kubeconfig files for additional users](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-certs\/#kubeconfig-additional-users)). If your cluster does not have\n   the `ca.key` you must sign the embedded certificates in the `kubelet.conf` externally.\n1. 
Copy the resulting `kubelet.conf` to `\/etc\/kubernetes\/kubelet.conf` on the failed node.\n1. Restart the kubelet (`systemctl restart kubelet`) on the failed node and wait for\n   `\/var\/lib\/kubelet\/pki\/kubelet-client-current.pem` to be recreated.\n1. Manually edit the `kubelet.conf` to point to the rotated kubelet client certificates, by replacing\n   `client-certificate-data` and `client-key-data` with:\n\n   ```yaml\n   client-certificate: \/var\/lib\/kubelet\/pki\/kubelet-client-current.pem\n   client-key: \/var\/lib\/kubelet\/pki\/kubelet-client-current.pem\n   ```\n\n1. Restart the kubelet.\n1. Make sure the node becomes `Ready`.\n\n## Default NIC when using flannel as the pod network in Vagrant\n\nThe following error might indicate that something was wrong in the pod network:\n\n```sh\nError from server (NotFound): the server could not find the requested resource\n```\n\n- If you're using flannel as the pod network inside Vagrant, then you will have to\n  specify the default interface name for flannel.\n\n  Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts\n  are assigned the IP address `10.0.2.15`, is for external traffic that gets NATed.\n\n  This may lead to problems with flannel, which defaults to the first interface on a host.\n  This leads to all hosts thinking they have the same public IP address. 
To prevent this,\n  pass the `--iface eth1` flag to flannel so that the second interface is chosen.\n\n## Non-public IP used for containers\n\nIn some situations `kubectl logs` and `kubectl run` commands may return with the\nfollowing errors in an otherwise functional cluster:\n\n```console\nError from server: Get https:\/\/10.19.0.41:10250\/containerLogs\/default\/mysql-ddc65b868-glc5m\/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host\n```\n\n- This may be due to Kubernetes using an IP that cannot communicate with other IPs on\n  the seemingly same subnet, possibly by policy of the machine provider.\n- DigitalOcean assigns a public IP to `eth0` as well as a private one to be used internally\n  as anchor for their floating IP feature, yet `kubelet` will pick the latter as the node's\n  `InternalIP` instead of the public one.\n\n  Use `ip addr show` to check for this scenario instead of `ifconfig` because `ifconfig` will\n  not display the offending alias IP address. Alternatively, a DigitalOcean-specific API endpoint\n  lets you query the anchor IP from the droplet:\n\n  ```sh\n  curl http:\/\/169.254.169.254\/metadata\/v1\/interfaces\/public\/0\/anchor_ipv4\/address\n  ```\n\n  The workaround is to tell `kubelet` which IP to use via the `--node-ip` flag.\n  When using DigitalOcean, it can be the public one (assigned to `eth0`) or\n  the private one (assigned to `eth1`) should you want to use the optional\n  private network. 
The `kubeletExtraArgs` section of the kubeadm\n  [`NodeRegistrationOptions` structure](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/#kubeadm-k8s-io-v1beta4-NodeRegistrationOptions)\n  can be used for this.\n\n  Then restart `kubelet`:\n\n  ```sh\n  systemctl daemon-reload\n  systemctl restart kubelet\n  ```\n\n## `coredns` pods have `CrashLoopBackOff` or `Error` state\n\nIf you have nodes that are running SELinux with an older version of Docker, you might experience a scenario\nwhere the `coredns` pods are not starting. To solve that, you can try one of the following options:\n\n- Upgrade to a [newer version of Docker](\/docs\/setup\/production-environment\/container-runtimes\/#docker).\n\n- [Disable SELinux](https:\/\/access.redhat.com\/documentation\/en-us\/red_hat_enterprise_linux\/6\/html\/security-enhanced_linux\/sect-security-enhanced_linux-enabling_and_disabling_selinux-disabling_selinux).\n\n- Modify the `coredns` deployment to set `allowPrivilegeEscalation` to `true`:\n\n```bash\nkubectl -n kube-system get deployment coredns -o yaml | \\\n  sed 's\/allowPrivilegeEscalation: false\/allowPrivilegeEscalation: true\/g' | \\\n  kubectl apply -f -\n```\n\nAnother cause for CoreDNS to have `CrashLoopBackOff` is when a CoreDNS Pod deployed in Kubernetes detects a loop.\n[A number of workarounds](https:\/\/github.com\/coredns\/coredns\/tree\/master\/plugin\/loop#troubleshooting-loops-in-kubernetes-clusters)\nare available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits.\n\n**Warning:** Disabling SELinux or setting `allowPrivilegeEscalation` to `true` can compromise\nthe security of your cluster.\n\n## etcd pods restart continually\n\nIf you encounter the following error:\n\n```\nrpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused \"process_linux.go:110: decoding init error from pipe caused \\\"read parent: connection reset by peer\\\"\"\n```\n\nThis 
issue appears if you run CentOS 7 with Docker 1.13.1.84.\nThis version of Docker can prevent the kubelet from executing into the etcd container.\n\nTo work around the issue, choose one of these options:\n\n- Roll back to an earlier version of Docker, such as 1.13.1-75:\n\n  ```\n  yum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64\n  ```\n\n- Install one of the more recent recommended versions, such as 18.06:\n\n  ```bash\n  sudo yum-config-manager --add-repo https:\/\/download.docker.com\/linux\/centos\/docker-ce.repo\n  yum install docker-ce-18.06.1.ce-3.el7.x86_64\n  ```\n\n## Not possible to pass a comma-separated list of values to arguments inside a `--component-extra-args` flag\n\n`kubeadm init` flags such as `--component-extra-args` allow you to pass custom arguments to a control-plane\ncomponent like the kube-apiserver. However, this mechanism is limited due to the underlying type used for parsing\nthe values (`mapStringString`).\n\nIf you decide to pass an argument that supports multiple, comma-separated values such as\n`--apiserver-extra-args \"enable-admission-plugins=LimitRanger,NamespaceExists\"`, this flag will fail with\n`flag: malformed pair, expect string=string`. 
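The failure mode can be sketched with a toy reimplementation of `mapStringString`-style parsing (illustrative only, not kubeadm's actual code):

```shell
# Toy sketch of mapStringString-style flag parsing: split the value on commas,
# then require every piece to be a key=value pair. Not kubeadm's real code.
parse_map() {
  local pair
  IFS=',' read -ra pairs <<< "$1"
  for pair in "${pairs[@]}"; do
    case "$pair" in
      *=*) echo "parsed: $pair" ;;
      *)   echo "malformed pair, expect string=string: $pair" >&2; return 1 ;;
    esac
  done
}

# The second comma-separated piece has no '=', so parsing fails on it:
parse_map "enable-admission-plugins=LimitRanger,NamespaceExists" || true
```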
This happens because the list of arguments for\n`--apiserver-extra-args` expects `key=value` pairs and in this case `NamespaceExists` is considered\na key that is missing a value.\n\nAlternatively, you can try separating the `key=value` pairs like so:\n`--apiserver-extra-args \"enable-admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists\"`\nbut this will result in the key `enable-admission-plugins` only having the value of `NamespaceExists`.\n\nA known workaround is to use the kubeadm [configuration file](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/).\n\n## kube-proxy scheduled before node is initialized by cloud-controller-manager\n\nIn cloud provider scenarios, kube-proxy can end up being scheduled on new worker nodes before\nthe cloud-controller-manager has initialized the node addresses. This causes kube-proxy to fail\nto pick up the node's IP address properly and has knock-on effects to the proxy function managing\nload balancers.\n\nThe following error can be seen in kube-proxy Pods:\n\n```\nserver.go:610] Failed to retrieve node IP: host IP unknown; known addresses: []\nproxier.go:340] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP\n```\n\nA known solution is to patch the kube-proxy DaemonSet to allow scheduling it on control-plane\nnodes regardless of their conditions, keeping it off of other nodes until their initial guarding\nconditions abate:\n\n```\nkubectl -n kube-system patch ds kube-proxy -p='{\n  \"spec\": {\n    \"template\": {\n      \"spec\": {\n        \"tolerations\": [\n          {\n            \"key\": \"CriticalAddonsOnly\",\n            \"operator\": \"Exists\"\n          },\n          {\n            \"effect\": \"NoSchedule\",\n            \"key\": \"node-role.kubernetes.io\/control-plane\"\n          }\n        ]\n      }\n    }\n  }\n}'\n```\n\nThe tracking issue for this problem is [here](https:\/\/github.com\/kubernetes\/kubeadm\/issues\/1027).\n\n## `\/usr` is mounted read-only on 
nodes {#usr-mounted-read-only}\n\nOn Linux distributions such as Fedora CoreOS or Flatcar Container Linux, the directory `\/usr` is mounted as a read-only filesystem.\nFor [flex-volume support](https:\/\/github.com\/kubernetes\/community\/blob\/ab55d85\/contributors\/devel\/sig-storage\/flexvolume.md),\nKubernetes components like the kubelet and kube-controller-manager use the default path of\n`\/usr\/libexec\/kubernetes\/kubelet-plugins\/volume\/exec\/`, yet the flex-volume directory _must be writeable_\nfor the feature to work.\n\n**Note:** FlexVolume was deprecated in the Kubernetes v1.23 release.\n\nTo work around this issue, you can configure the flex-volume directory using the kubeadm\n[configuration file](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/).\n\nOn the primary control-plane Node (created using `kubeadm init`), pass the following\nfile using `--config`:\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: InitConfiguration\nnodeRegistration:\n  kubeletExtraArgs:\n  - name: \"volume-plugin-dir\"\n    value: \"\/opt\/libexec\/kubernetes\/kubelet-plugins\/volume\/exec\/\"\n---\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: ClusterConfiguration\ncontrollerManager:\n  extraArgs:\n  - name: \"flex-volume-plugin-dir\"\n    value: \"\/opt\/libexec\/kubernetes\/kubelet-plugins\/volume\/exec\/\"\n```\n\nOn joining Nodes:\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: JoinConfiguration\nnodeRegistration:\n  kubeletExtraArgs:\n  - name: \"volume-plugin-dir\"\n    value: \"\/opt\/libexec\/kubernetes\/kubelet-plugins\/volume\/exec\/\"\n```\n\nAlternatively, you can modify `\/etc\/fstab` to make the `\/usr` mount writeable, but please\nbe advised that this is modifying a design principle of the Linux distribution.\n\n## `kubeadm upgrade plan` prints out `context deadline exceeded` error message\n\nThis error message is shown when upgrading a Kubernetes cluster with `kubeadm` in\nthe case of running an external etcd. 
This is not a critical bug and happens because\nolder versions of kubeadm perform a version check on the external etcd cluster.\nYou can proceed with `kubeadm upgrade apply ...`.\n\nThis issue is fixed as of version 1.19.\n\n## `kubeadm reset` unmounts `\/var\/lib\/kubelet`\n\nIf `\/var\/lib\/kubelet` is being mounted, performing a `kubeadm reset` will effectively unmount it.\n\nTo work around the issue, re-mount the `\/var\/lib\/kubelet` directory after performing the `kubeadm reset` operation.\n\nThis is a regression introduced in kubeadm 1.15. The issue is fixed in 1.20.\n\n## Cannot use the metrics-server securely in a kubeadm cluster\n\nIn a kubeadm cluster, the [metrics-server](https:\/\/github.com\/kubernetes-sigs\/metrics-server)\ncan be used insecurely by passing the `--kubelet-insecure-tls` flag to it. This is not recommended for production clusters.\n\nIf you want to use TLS between the metrics-server and the kubelet there is a problem,\nsince kubeadm deploys a self-signed serving certificate for the kubelet. This can cause the following errors\non the side of the metrics-server:\n\n```\nx509: certificate signed by unknown authority\nx509: certificate is valid for IP-foo not IP-bar\n```\n\nSee [Enabling signed kubelet serving certificates](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-certs\/#kubelet-serving-certs)\nto understand how to configure the kubelets in a kubeadm cluster to have properly signed serving certificates.\n\nAlso see [How to run the metrics-server securely](https:\/\/github.com\/kubernetes-sigs\/metrics-server\/blob\/master\/FAQ.md#how-to-run-metrics-server-securely).\n\n## Upgrade fails due to etcd hash not changing\n\nOnly applicable to upgrading a control plane node with a kubeadm binary v1.28.3 or later,\nwhere the node is currently managed by kubeadm versions v1.28.0, v1.28.1 or v1.28.2.\n\nHere is the error message you may encounter:\n\n```\n[upgrade\/etcd] Failed to upgrade etcd: couldn't upgrade control plane. 
kubeadm has tried to recover everything into the earlier state. Errors faced: static Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition\n[upgrade\/etcd] Waiting for previous etcd to become available\nI0907 10:10:09.109104    3704 etcd.go:588] [etcd] attempting to see if all cluster endpoints ([https:\/\/172.17.0.6:2379\/ https:\/\/172.17.0.4:2379\/ https:\/\/172.17.0.3:2379\/]) are available 1\/10\n[upgrade\/etcd] Etcd was rolled back and is now available\nstatic Pod hash for component etcd on Node kinder-upgrade-control-plane-1 did not change after 5m0s: timed out waiting for the condition\ncouldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced\nk8s.io\/kubernetes\/cmd\/kubeadm\/app\/phases\/upgrade.rollbackOldManifests\n\tcmd\/kubeadm\/app\/phases\/upgrade\/staticpods.go:525\nk8s.io\/kubernetes\/cmd\/kubeadm\/app\/phases\/upgrade.upgradeComponent\n\tcmd\/kubeadm\/app\/phases\/upgrade\/staticpods.go:254\nk8s.io\/kubernetes\/cmd\/kubeadm\/app\/phases\/upgrade.performEtcdStaticPodUpgrade\n\tcmd\/kubeadm\/app\/phases\/upgrade\/staticpods.go:338\n...\n```\n\nThe reason for this failure is that the affected versions generate an etcd manifest file with\nunwanted defaults in the PodSpec. 
This will result in a diff from the manifest comparison,\nand kubeadm will expect a change in the Pod hash, but the kubelet will never update the hash.\n\nThere are two ways to work around this issue if you see it in your cluster:\n\n- The etcd upgrade can be skipped between the affected versions and v1.28.3 (or later) by using:\n\n  ```shell\n  kubeadm upgrade {apply|node} [version] --etcd-upgrade=false\n  ```\n\n  This is not recommended if a later v1.28 patch version introduced a new etcd version.\n\n- Before upgrade, patch the manifest for the etcd static pod to remove the problematic defaulted attributes:\n\n  ```patch\n  diff --git a\/etc\/kubernetes\/manifests\/etcd_defaults.yaml b\/etc\/kubernetes\/manifests\/etcd_origin.yaml\n  index d807ccbe0aa..46b35f00e15 100644\n  --- a\/etc\/kubernetes\/manifests\/etcd_defaults.yaml\n  +++ b\/etc\/kubernetes\/manifests\/etcd_origin.yaml\n  @@ -43,7 +43,6 @@ spec:\n          scheme: HTTP\n        initialDelaySeconds: 10\n        periodSeconds: 10\n  -      successThreshold: 1\n        timeoutSeconds: 15\n      name: etcd\n      resources:\n  @@ -59,26 +58,18 @@ spec:\n          scheme: HTTP\n        initialDelaySeconds: 10\n        periodSeconds: 10\n  -      successThreshold: 1\n        timeoutSeconds: 15\n  -    terminationMessagePath: \/dev\/termination-log\n  -    terminationMessagePolicy: File\n      volumeMounts:\n      - mountPath: \/var\/lib\/etcd\n        name: etcd-data\n      - mountPath: \/etc\/kubernetes\/pki\/etcd\n        name: etcd-certs\n  -  dnsPolicy: ClusterFirst\n  -  enableServiceLinks: true\n    hostNetwork: true\n    priority: 2000001000\n    priorityClassName: system-node-critical\n  -  restartPolicy: Always\n  -  schedulerName: default-scheduler\n    securityContext:\n      seccompProfile:\n        type: RuntimeDefault\n  -  terminationGracePeriodSeconds: 30\n    volumes:\n    - hostPath:\n        path: \/etc\/kubernetes\/pki\/etcd\n  ```\n\nMore information can be found in 
the\n[tracking issue](https:\/\/github.com\/kubernetes\/kubeadm\/issues\/2927) for this bug.
volume directory using the kubeadm  configuration file   docs reference config api kubeadm config v1beta4     On the primary control plane Node  created using  kubeadm init    pass the following file using    config       yaml apiVersion  kubeadm k8s io v1beta4 kind  InitConfiguration nodeRegistration    kubeletExtraArgs      name   volume plugin dir      value    opt libexec kubernetes kubelet plugins volume exec       apiVersion  kubeadm k8s io v1beta4 kind  ClusterConfiguration controllerManager    extraArgs      name   flex volume plugin dir      value    opt libexec kubernetes kubelet plugins volume exec        On joining Nodes      yaml apiVersion  kubeadm k8s io v1beta4 kind  JoinConfiguration nodeRegistration    kubeletExtraArgs      name   volume plugin dir      value    opt libexec kubernetes kubelet plugins volume exec        Alternatively  you can modify   etc fstab  to make the   usr  mount writeable  but please be advised that this is modifying a design principle of the Linux distribution       kubeadm upgrade plan  prints out  context deadline exceeded  error message  This error message is shown when upgrading a Kubernetes cluster with  kubeadm  in the case of running an external etcd  This is not a critical bug and happens because older versions of kubeadm perform a version check on the external etcd cluster  You can proceed with  kubeadm upgrade apply        This issue is fixed as of version 1 19       kubeadm reset  unmounts   var lib kubelet   If   var lib kubelet  is being mounted  performing a  kubeadm reset  will effectively unmount it   To workaround the issue  re mount the   var lib kubelet  directory after performing the  kubeadm reset  operation   This is a regression introduced in kubeadm 1 15  The issue is fixed in 1 20      Cannot use the metrics server securely in a kubeadm cluster  In a kubeadm cluster  the  metrics server  https   github com kubernetes sigs metrics server  can be used insecurely by passing the    kubelet insecure tls 
 to it  This is not recommended for production clusters   If you want to use TLS between the metrics server and the kubelet there is a problem  since kubeadm deploys a self signed serving certificate for the kubelet  This can cause the following errors on the side of the metrics server       x509  certificate signed by unknown authority x509  certificate is valid for IP foo not IP bar      See  Enabling signed kubelet serving certificates   docs tasks administer cluster kubeadm kubeadm certs  kubelet serving certs  to understand how to configure the kubelets in a kubeadm cluster to have properly signed serving certificates   Also see  How to run the metrics server securely  https   github com kubernetes sigs metrics server blob master FAQ md how to run metrics server securely       Upgrade fails due to etcd hash not changing  Only applicable to upgrading a control plane node with a kubeadm binary v1 28 3 or later  where the node is currently managed by kubeadm versions v1 28 0  v1 28 1 or v1 28 2   Here is the error message you may encounter        upgrade etcd  Failed to upgrade etcd  couldn t upgrade control plane  kubeadm has tried to recover everything into the earlier state  Errors faced  static Pod hash for component etcd on Node kinder upgrade control plane 1 did not change after 5m0s  timed out waiting for the condition  upgrade etcd  Waiting for previous etcd to become available I0907 10 10 09 109104    3704 etcd go 588   etcd  attempting to see if all cluster endpoints   https   172 17 0 6 2379  https   172 17 0 4 2379  https   172 17 0 3 2379    are available 1 10  upgrade etcd  Etcd was rolled back and is now available static Pod hash for component etcd on Node kinder upgrade control plane 1 did not change after 5m0s  timed out waiting for the condition couldn t upgrade control plane  kubeadm has tried to recover everything into the earlier state  Errors faced k8s io kubernetes cmd kubeadm app phases upgrade rollbackOldManifests  cmd kubeadm app phases 
upgrade staticpods go 525 k8s io kubernetes cmd kubeadm app phases upgrade upgradeComponent  cmd kubeadm app phases upgrade staticpods go 254 k8s io kubernetes cmd kubeadm app phases upgrade performEtcdStaticPodUpgrade  cmd kubeadm app phases upgrade staticpods go 338          The reason for this failure is that the affected versions generate an etcd manifest file with unwanted defaults in the PodSpec  This will result in a diff from the manifest comparison  and kubeadm will expect a change in the Pod hash  but the kubelet will never update the hash   There are two way to workaround this issue if you see it in your cluster     The etcd upgrade can be skipped between the affected versions and v1 28 3  or later  by using        shell   kubeadm upgrade  apply node   version    etcd upgrade false          This is not recommended in case a new etcd version was introduced by a later v1 28 patch version     Before upgrade  patch the manifest for the etcd static pod  to remove the problematic defaulted attributes        patch   diff   git a etc kubernetes manifests etcd defaults yaml b etc kubernetes manifests etcd origin yaml   index d807ccbe0aa  46b35f00e15 100644       a etc kubernetes manifests etcd defaults yaml       b etc kubernetes manifests etcd origin yaml       43 7  43 6    spec            scheme  HTTP         initialDelaySeconds  10         periodSeconds  10          successThreshold  1         timeoutSeconds  15       name  etcd       resources        59 26  58 18    spec            scheme  HTTP         initialDelaySeconds  10         periodSeconds  10          successThreshold  1         timeoutSeconds  15        terminationMessagePath   dev termination log        terminationMessagePolicy  File       volumeMounts          mountPath   var lib etcd         name  etcd data         mountPath   etc kubernetes pki etcd         name  etcd certs      dnsPolicy  ClusterFirst      enableServiceLinks  true     hostNetwork  true     priority  2000001000     
priorityClassName  system node critical      restartPolicy  Always      schedulerName  default scheduler     securityContext        seccompProfile          type  RuntimeDefault      terminationGracePeriodSeconds  30     volumes        hostPath          path   etc kubernetes pki etcd        More information can be found in the  tracking issue  https   github com kubernetes kubeadm issues 2927  for this bug "}
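The CoreDNS workaround above pipes live cluster state through `sed` and straight back into `kubectl apply`. Before running that against a real cluster, the same substitution can be previewed offline on a saved copy of the manifest. A minimal sketch, where the sample fragment is a hypothetical stand-in for the real Deployment YAML returned by `kubectl -n kube-system get deployment coredns -o yaml`:

```shell
# Stand-in fragment; only the field the workaround touches is shown.
cat > /tmp/coredns-sample.yaml <<'EOF'
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
EOF

# Same substitution as the workaround; review the output here
# before piping the real manifest into `kubectl apply -f -`.
sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' /tmp/coredns-sample.yaml
```

Only the `allowPrivilegeEscalation` value flips; neighboring fields pass through untouched, which is easy to confirm in the preview before touching the cluster.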
{"questions":"kubernetes setup title Installing kubeadm title Install the kubeadm setup tool name setup contenttype task card weight 10 weight 40","answers":"---\ntitle: Installing kubeadm\ncontent_type: task\nweight: 10\ncard:\n  name: setup\n  weight: 40\n  title: Install the kubeadm setup tool\n---\n\n<!-- overview -->\n\n<img src=\"\/images\/kubeadm-stacked-color.png\" align=\"right\" width=\"150px\"><\/img>\nThis page shows how to install the `kubeadm` toolbox.\nFor information on how to create a cluster with kubeadm once you have performed this installation process,\nsee the [Creating a cluster with kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/create-cluster-kubeadm\/) page.\n\n\n\n## Before you begin {#before-you-begin}\n\n* A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions\n  based on Debian and Red Hat, and those distributions without a package manager.\n* 2 GB or more of RAM per machine (any less will leave little room for your apps).\n* 2 CPUs or more for control plane machines.\n* Full network connectivity between all machines in the cluster (public or private network is fine).\n* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-mac-address) for more details.\n* Certain ports are open on your machines. 
See [here](#check-required-ports) for more details.\n\n\nThe `kubeadm` installation is done via binaries that use dynamic linking and assumes that your target system provides `glibc`.\nThis is a reasonable assumption on many Linux distributions (including Debian, Ubuntu, Fedora, CentOS, etc.)\nbut it is not always the case with custom and lightweight distributions which don't include `glibc` by default, such as Alpine Linux.\nThe expectation is that the distribution either includes `glibc` or a\n[compatibility layer](https:\/\/wiki.alpinelinux.org\/wiki\/Running_glibc_programs)\nthat provides the expected symbols.\n\n\n<!-- steps -->\n\n## Verify the MAC address and product_uuid are unique for every node {#verify-mac-address}\n\n* You can get the MAC address of the network interfaces using the command `ip link` or `ifconfig -a`\n* The product_uuid can be checked by using the command `sudo cat \/sys\/class\/dmi\/id\/product_uuid`\n\nIt is very likely that hardware devices will have unique addresses, although some virtual machines may have\nidentical values. Kubernetes uses these values to uniquely identify the nodes in the cluster.\nIf these values are not unique to each node, the installation process\nmay [fail](https:\/\/github.com\/kubernetes\/kubeadm\/issues\/31).\n\n## Check network adapters\n\nIf you have more than one network adapter, and your Kubernetes components are not reachable on the default\nroute, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter.\n\n## Check required ports {#check-required-ports}\n\nThese [required ports](\/docs\/reference\/networking\/ports-and-protocols\/)\nneed to be open in order for Kubernetes components to communicate with each other.\nYou can use tools like [netcat](https:\/\/netcat.sourceforge.net) to check if a port is open. For example:\n\n```shell\nnc 127.0.0.1 6443 -v\n```\n\nThe pod network plugin you use may also require certain ports to be\nopen. 
Since this differs with each pod network plugin, please see the\ndocumentation for the plugins about what port(s) those need.\n\n## Swap configuration {#swap-configuration}\n\nThe default behavior of a kubelet is to fail to start if swap memory is detected on a node.\nThis means that swap should either be disabled or tolerated by kubelet.\n\n* To tolerate swap, add `failSwapOn: false` to kubelet configuration or as a command line argument.\n  Note: even if `failSwapOn: false` is provided, workloads wouldn't have swap access by default.\n  This can be changed by setting a `swapBehavior`, again in the kubelet configuration file. To use swap,\n  set a `swapBehavior` other than the default `NoSwap` setting.\n  See [Swap memory management](\/docs\/concepts\/architecture\/nodes\/#swap-memory) for more details.\n* To disable swap, `sudo swapoff -a` can be used to disable swapping temporarily.\n  To make this change persistent across reboots, make sure swap is disabled in\n  config files like `\/etc\/fstab`, `systemd.swap`, depending on how it was configured on your system.\n\n\n## Installing a container runtime {#installing-runtime}\n\nTo run containers in Pods, Kubernetes uses a\ncontainer runtime.\n\nBy default, Kubernetes uses the\nContainer Runtime Interface (CRI)\nto interface with your chosen container runtime.\n\nIf you don't specify a runtime, kubeadm automatically tries to detect an installed\ncontainer runtime by scanning through a list of known endpoints.\n\nIf multiple or no container runtimes are detected, kubeadm will throw an error\nand will request that you specify which one you want to use.\n\nSee [container runtimes](\/docs\/setup\/production-environment\/container-runtimes\/)\nfor more information.\n\n\nDocker Engine does not implement the [CRI](\/docs\/concepts\/architecture\/cri\/)\nwhich is a requirement for a container runtime to work with Kubernetes.\nFor that reason, an additional service [cri-dockerd](https:\/\/mirantis.github.io\/cri-dockerd\/)\nhas to be installed. 
cri-dockerd is a project based on the legacy built-in\nDocker Engine support that was [removed](\/dockershim) from the kubelet in version 1.24.\n\n\nThe tables below include the known endpoints for supported operating systems:\n\n\n\n\n\n| Runtime                            | Path to Unix domain socket                   |\n|------------------------------------|----------------------------------------------|\n| containerd                         | `unix:\/\/\/var\/run\/containerd\/containerd.sock` |\n| CRI-O                              | `unix:\/\/\/var\/run\/crio\/crio.sock`             |\n| Docker Engine (using cri-dockerd)  | `unix:\/\/\/var\/run\/cri-dockerd.sock`           |\n\n\n\n\n\n\n\n| Runtime                            | Path to Windows named pipe                   |\n|------------------------------------|----------------------------------------------|\n| containerd                         | `npipe:\/\/\/\/.\/pipe\/containerd-containerd`     |\n| Docker Engine (using cri-dockerd)  | `npipe:\/\/\/\/.\/pipe\/cri-dockerd`               |\n\n\n\n\n\n## Installing kubeadm, kubelet and kubectl\n\nYou will install these packages on all of your machines:\n\n* `kubeadm`: the command to bootstrap the cluster.\n\n* `kubelet`: the component that runs on all of the machines in your cluster\n  and does things like starting pods and containers.\n\n* `kubectl`: the command line util to talk to your cluster.\n\nkubeadm **will not** install or manage `kubelet` or `kubectl` for you, so you will\nneed to ensure they match the version of the Kubernetes control plane you want\nkubeadm to install for you. If you do not, there is a risk of a version skew occurring that\ncan lead to unexpected, buggy behaviour. However, _one_ minor version skew between the\nkubelet and the control plane is supported, but the kubelet version may never exceed the API\nserver version. 
For example, the kubelet running 1.7.0 should be fully compatible with a 1.8.0 API server,\nbut not vice versa.\n\nFor information about installing `kubectl`, see [Install and set up kubectl](\/docs\/tasks\/tools\/).\n\n\nThese instructions exclude all Kubernetes packages from any system upgrades.\nThis is because kubeadm and Kubernetes require\n[special attention to upgrade](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-upgrade\/).\n\n\nFor more information on version skews, see:\n\n* Kubernetes [version and version-skew policy](\/docs\/setup\/release\/version-skew-policy\/)\n* Kubeadm-specific [version skew policy](\/docs\/setup\/production-environment\/tools\/kubeadm\/create-cluster-kubeadm\/#version-skew-policy)\n\n\n\n\nThere's a dedicated package repository for each Kubernetes minor version. If you want to install\na minor version other than v, please see the installation guide for\nyour desired minor version.\n\n\n\n\n\nThese instructions are for Kubernetes v.\n\n1. Update the `apt` package index and install packages needed to use the Kubernetes `apt` repository:\n\n   ```shell\n   sudo apt-get update\n   # apt-transport-https may be a dummy package; if so, you can skip that package\n   sudo apt-get install -y apt-transport-https ca-certificates curl gpg\n   ```\n\n2. Download the public signing key for the Kubernetes package repositories.\n   The same signing key is used for all repositories so you can disregard the version in the URL:\n\n   ```shell\n   # If the directory `\/etc\/apt\/keyrings` does not exist, it should be created before the curl command, read the note below.\n   # sudo mkdir -p -m 755 \/etc\/apt\/keyrings\n   curl -fsSL https:\/\/pkgs.k8s.io\/core:\/stable:\/\/deb\/Release.key | sudo gpg --dearmor -o \/etc\/apt\/keyrings\/kubernetes-apt-keyring.gpg\n   ```\n\n\nIn releases older than Debian 12 and Ubuntu 22.04, directory `\/etc\/apt\/keyrings` does not\nexist by default, and it should be created before the curl command.\n\n\n3. 
Add the appropriate Kubernetes `apt` repository. Please note that this repository has packages\n   only for Kubernetes ; for other Kubernetes minor versions, you need to\n   change the Kubernetes minor version in the URL to match your desired minor version\n   (you should also check that you are reading the documentation for the version of Kubernetes\n   that you plan to install).\n\n   ```shell\n   # This overwrites any existing configuration in \/etc\/apt\/sources.list.d\/kubernetes.list\n   echo 'deb [signed-by=\/etc\/apt\/keyrings\/kubernetes-apt-keyring.gpg] https:\/\/pkgs.k8s.io\/core:\/stable:\/\/deb\/ \/' | sudo tee \/etc\/apt\/sources.list.d\/kubernetes.list\n   ```\n\n4. Update the `apt` package index, install kubelet, kubeadm and kubectl, and pin their version:\n\n   ```shell\n   sudo apt-get update\n   sudo apt-get install -y kubelet kubeadm kubectl\n   sudo apt-mark hold kubelet kubeadm kubectl\n   ```\n\n5. (Optional) Enable the kubelet service before running kubeadm:\n\n   ```shell\n   sudo systemctl enable --now kubelet\n   ```\n\n\n\n\n1. Set SELinux to `permissive` mode:\n\n   These instructions are for Kubernetes .\n\n   ```shell\n   # Set SELinux in permissive mode (effectively disabling it)\n   sudo setenforce 0\n   sudo sed -i 's\/^SELINUX=enforcing$\/SELINUX=permissive\/' \/etc\/selinux\/config\n   ```\n\n\n- Setting SELinux in permissive mode by running `setenforce 0` and `sed ...`\n  effectively disables it. This is required to allow containers to access the host\n  filesystem; for example, some cluster network plugins require that. You have to\n  do this until SELinux support is improved in the kubelet.\n- You can leave SELinux enabled if you know how to configure it but it may require\n  settings that are not supported by kubeadm.\n\n\n2. Add the Kubernetes `yum` repository. 
The `exclude` parameter in the\n   repository definition ensures that the packages related to Kubernetes are\n   not upgraded upon running `yum update` as there's a special procedure that\n   must be followed for upgrading Kubernetes. Please note that this repository\n   has packages only for Kubernetes ; for other\n   Kubernetes minor versions, you need to change the Kubernetes minor version\n   in the URL to match your desired minor version (you should also check that\n   you are reading the documentation for the version of Kubernetes that you\n   plan to install).\n\n   ```shell\n   # This overwrites any existing configuration in \/etc\/yum.repos.d\/kubernetes.repo\n   cat <<EOF | sudo tee \/etc\/yum.repos.d\/kubernetes.repo\n   [kubernetes]\n   name=Kubernetes\n   baseurl=https:\/\/pkgs.k8s.io\/core:\/stable:\/\/rpm\/\n   enabled=1\n   gpgcheck=1\n   gpgkey=https:\/\/pkgs.k8s.io\/core:\/stable:\/\/rpm\/repodata\/repomd.xml.key\n   exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni\n   EOF\n   ```\n\n3. Install kubelet, kubeadm and kubectl:\n\n   ```shell\n   sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes\n   ```\n\n4. 
(Optional) Enable the kubelet service before running kubeadm:\n\n   ```shell\n   sudo systemctl enable --now kubelet\n   ```\n\n\n\nInstall CNI plugins (required for most pod network):\n\n```bash\nCNI_PLUGINS_VERSION=\"v1.3.0\"\nARCH=\"amd64\"\nDEST=\"\/opt\/cni\/bin\"\nsudo mkdir -p \"$DEST\"\ncurl -L \"https:\/\/github.com\/containernetworking\/plugins\/releases\/download\/${CNI_PLUGINS_VERSION}\/cni-plugins-linux-${ARCH}-${CNI_PLUGINS_VERSION}.tgz\" | sudo tar -C \"$DEST\" -xz\n```\n\nDefine the directory to download command files:\n\n\nThe `DOWNLOAD_DIR` variable must be set to a writable directory.\nIf you are running Flatcar Container Linux, set `DOWNLOAD_DIR=\"\/opt\/bin\"`.\n\n\n```bash\nDOWNLOAD_DIR=\"\/usr\/local\/bin\"\nsudo mkdir -p \"$DOWNLOAD_DIR\"\n```\n\nOptionally install crictl (required for interaction with the Container Runtime Interface (CRI), optional for kubeadm):\n\n```bash\nCRICTL_VERSION=\"v1.31.0\"\nARCH=\"amd64\"\ncurl -L \"https:\/\/github.com\/kubernetes-sigs\/cri-tools\/releases\/download\/${CRICTL_VERSION}\/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz\" | sudo tar -C $DOWNLOAD_DIR -xz\n```\n\nInstall `kubeadm`, `kubelet` and add a `kubelet` systemd service:\n\n```bash\nRELEASE=\"$(curl -sSL https:\/\/dl.k8s.io\/release\/stable.txt)\"\nARCH=\"amd64\"\ncd $DOWNLOAD_DIR\nsudo curl -L --remote-name-all https:\/\/dl.k8s.io\/release\/${RELEASE}\/bin\/linux\/${ARCH}\/{kubeadm,kubelet}\nsudo chmod +x {kubeadm,kubelet}\n\nRELEASE_VERSION=\"v0.16.2\"\ncurl -sSL \"https:\/\/raw.githubusercontent.com\/kubernetes\/release\/${RELEASE_VERSION}\/cmd\/krel\/templates\/latest\/kubelet\/kubelet.service\" | sed \"s:\/usr\/bin:${DOWNLOAD_DIR}:g\" | sudo tee \/usr\/lib\/systemd\/system\/kubelet.service\nsudo mkdir -p \/usr\/lib\/systemd\/system\/kubelet.service.d\ncurl -sSL \"https:\/\/raw.githubusercontent.com\/kubernetes\/release\/${RELEASE_VERSION}\/cmd\/krel\/templates\/latest\/kubeadm\/10-kubeadm.conf\" | sed \"s:\/usr\/bin:${DOWNLOAD_DIR}:g\" | 
sudo tee \/usr\/lib\/systemd\/system\/kubelet.service.d\/10-kubeadm.conf\n```\n\n\nPlease refer to the note in the [Before you begin](#before-you-begin) section for Linux distributions\nthat do not include `glibc` by default.\n\n\nInstall `kubectl` by following the instructions on [Install Tools page](\/docs\/tasks\/tools\/#kubectl).\n\nOptionally, enable the kubelet service before running kubeadm:\n\n```bash\nsudo systemctl enable --now kubelet\n```\n\n\nThe Flatcar Container Linux distribution mounts the `\/usr` directory as a read-only filesystem.\nBefore bootstrapping your cluster, you need to take additional steps to configure a writable directory.\nSee the [Kubeadm Troubleshooting guide](\/docs\/setup\/production-environment\/tools\/kubeadm\/troubleshooting-kubeadm\/#usr-mounted-read-only)\nto learn how to set up a writable directory.\n\n\n\n\nThe kubelet is now restarting every few seconds, as it waits in a crashloop for\nkubeadm to tell it what to do.\n\n## Configuring a cgroup driver\n\nBoth the container runtime and the kubelet have a property called\n[\"cgroup driver\"](\/docs\/setup\/production-environment\/container-runtimes\/#cgroup-drivers), which is important\nfor the management of cgroups on Linux machines.\n\n\nMatching the container runtime and kubelet cgroup drivers is required or otherwise the kubelet process will fail.\n\nSee [Configuring a cgroup driver](\/docs\/tasks\/administer-cluster\/kubeadm\/configure-cgroup-driver\/) for more details.\n\n\n## Troubleshooting\n\nIf you are running into difficulties with kubeadm, please consult our\n[troubleshooting docs](\/docs\/setup\/production-environment\/tools\/kubeadm\/troubleshooting-kubeadm\/).\n\n## What's next\n\n* [Using kubeadm to Create a Cluster](\/docs\/setup\/production-environment\/tools\/kubeadm\/create-cluster-kubeadm\/)","site":"kubernetes setup","answers_cleaned":"
network plugins require that  You have to   do this until SELinux support is improved in the kubelet    You can leave SELinux enabled if you know how to configure it but it may require   settings that are not supported by kubeadm    2  Add the Kubernetes  yum  repository  The  exclude  parameter in the    repository definition ensures that the packages related to Kubernetes are    not upgraded upon running  yum update  as there s a special procedure that    must be followed for upgrading Kubernetes  Please note that this repository    have packages only for Kubernetes   for other    Kubernetes minor versions  you need to change the Kubernetes minor version    in the URL to match your desired minor version  you should also check that    you are reading the documentation for the version of Kubernetes that you    plan to install          shell      This overwrites any existing configuration in  etc yum repos d kubernetes repo    cat   EOF   sudo tee  etc yum repos d kubernetes repo     kubernetes     name Kubernetes    baseurl https   pkgs k8s io core  stable   rpm     enabled 1    gpgcheck 1    gpgkey https   pkgs k8s io core  stable   rpm repodata repomd xml key    exclude kubelet kubeadm kubectl cri tools kubernetes cni    EOF         3  Install kubelet  kubeadm and kubectl         shell    sudo yum install  y kubelet kubeadm kubectl   disableexcludes kubernetes         4   Optional  Enable the kubelet service before running kubeadm         shell    sudo systemctl enable   now kubelet           Install CNI plugins  required for most pod network       bash CNI PLUGINS VERSION  v1 3 0  ARCH  amd64  DEST   opt cni bin  sudo mkdir  p   DEST  curl  L  https   github com containernetworking plugins releases download   CNI PLUGINS VERSION  cni plugins linux   ARCH    CNI PLUGINS VERSION  tgz    sudo tar  C   DEST   xz      Define the directory to download command files    The  DOWNLOAD DIR  variable must be set to a writable directory  If you are running Flatcar Container 
Linux  set  DOWNLOAD DIR   opt bin         bash DOWNLOAD DIR   usr local bin  sudo mkdir  p   DOWNLOAD DIR       Optionally install crictl  required for interaction with the Container Runtime Interface  CRI   optional for kubeadm       bash CRICTL VERSION  v1 31 0  ARCH  amd64  curl  L  https   github com kubernetes sigs cri tools releases download   CRICTL VERSION  crictl   CRICTL VERSION  linux   ARCH  tar gz    sudo tar  C  DOWNLOAD DIR  xz      Install  kubeadm    kubelet  and add a  kubelet  systemd service      bash RELEASE    curl  sSL https   dl k8s io release stable txt   ARCH  amd64  cd  DOWNLOAD DIR sudo curl  L   remote name all https   dl k8s io release   RELEASE  bin linux   ARCH   kubeadm kubelet  sudo chmod  x  kubeadm kubelet   RELEASE VERSION  v0 16 2  curl  sSL  https   raw githubusercontent com kubernetes release   RELEASE VERSION  cmd krel templates latest kubelet kubelet service    sed  s  usr bin   DOWNLOAD DIR  g    sudo tee  usr lib systemd system kubelet service sudo mkdir  p  usr lib systemd system kubelet service d curl  sSL  https   raw githubusercontent com kubernetes release   RELEASE VERSION  cmd krel templates latest kubeadm 10 kubeadm conf    sed  s  usr bin   DOWNLOAD DIR  g    sudo tee  usr lib systemd system kubelet service d 10 kubeadm conf       Please refer to the note in the  Before you begin   before you begin  section for Linux distributions that do not include  glibc  by default    Install  kubectl  by following the instructions on  Install Tools page   docs tasks tools  kubectl    Optionally  enable the kubelet service before running kubeadm      bash sudo systemctl enable   now kubelet       The Flatcar Container Linux distribution mounts the   usr  directory as a read only filesystem  Before bootstrapping your cluster  you need to take additional steps to configure a writable directory  See the  Kubeadm Troubleshooting guide   docs setup production environment tools kubeadm troubleshooting kubeadm  usr mounted read 
only  to learn how to set up a writable directory      The kubelet is now restarting every few seconds  as it waits in a crashloop for kubeadm to tell it what to do      Configuring a cgroup driver  Both the container runtime and the kubelet have a property called   cgroup driver    docs setup production environment container runtimes  cgroup drivers   which is important for the management of cgroups on Linux machines    Matching the container runtime and kubelet cgroup drivers is required or otherwise the kubelet process will fail   See  Configuring a cgroup driver   docs tasks administer cluster kubeadm configure cgroup driver   for more details       Troubleshooting  If you are running into difficulties with kubeadm  please consult our  troubleshooting docs   docs setup production environment tools kubeadm troubleshooting kubeadm             Using kubeadm to Create a Cluster   docs setup production environment tools kubeadm create cluster kubeadm  "}
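The cgroup-driver matching requirement above can be satisfied from the kubelet side with a small configuration fragment. This is a minimal sketch, assuming a container runtime that uses the systemd cgroup driver (e.g. containerd configured with `SystemdCgroup = true`); the file path is illustrative, and in practice the fragment is merged into the YAML passed to `kubeadm init --config`:

```shell
# Sketch: pin the kubelet's cgroup driver to "systemd" so it matches a
# systemd-driven container runtime. Output path is illustrative only.
cat <<'EOF' > /tmp/kubelet-cgroup.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

# Sanity check that the driver is set as intended.
grep 'cgroupDriver:' /tmp/kubelet-cgroup.yaml
```

If the drivers do not match, the kubelet exits shortly after startup, which is typically visible in `systemctl status kubelet`.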
{"questions":"kubernetes setup title Customizing components with the kubeadm API contenttype concept weight 40 sig cluster lifecycle reviewers overview","answers":"---\nreviewers:\n- sig-cluster-lifecycle\ntitle: Customizing components with the kubeadm API\ncontent_type: concept\nweight: 40\n---\n\n<!-- overview -->\n\nThis page covers how to customize the components that kubeadm deploys. For control plane components\nyou can use flags in the `ClusterConfiguration` structure or patches per-node. For the kubelet\nand kube-proxy you can use `KubeletConfiguration` and `KubeProxyConfiguration`, accordingly.\n\nAll of these options are possible via the kubeadm configuration API.\nFor more details on each field in the configuration you can navigate to our\n[API reference pages](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/).\n\n\nCustomizing the CoreDNS deployment of kubeadm is currently not supported. You must manually\npatch the `kube-system\/coredns` Deployment\nand recreate the CoreDNS Pods after that. Alternatively,\nyou can skip the default CoreDNS deployment and deploy your own variant.\nFor more details on that see [Using init phases with kubeadm](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init\/#init-phases).\n\n\n\nTo reconfigure a cluster that has already been created see\n[Reconfiguring a kubeadm cluster](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-reconfigure).\n\n\n<!-- body -->\n\n## Customizing the control plane with flags in `ClusterConfiguration`\n\nThe kubeadm `ClusterConfiguration` object exposes a way for users to override the default\nflags passed to control plane components such as the APIServer, ControllerManager, Scheduler and Etcd.\nThe components are defined using the following structures:\n\n- `apiServer`\n- `controllerManager`\n- `scheduler`\n- `etcd`\n\nThese structures contain a common `extraArgs` field, that consists of `name` \/ `value` pairs.\nTo override a flag for a control plane component:\n\n1.  
Add the appropriate `extraArgs` to your configuration.\n2.  Add flags to the `extraArgs` field.\n3.  Run `kubeadm init` with `--config <YOUR CONFIG YAML>`.\n\n\nYou can generate a `ClusterConfiguration` object with default values by running `kubeadm config print init-defaults`\nand saving the output to a file of your choice.\n\n\n\nThe `ClusterConfiguration` object is currently global in kubeadm clusters. This means that any flags that you add,\nwill apply to all instances of the same component on different nodes. To apply individual configuration per component\non different nodes you can use [patches](#patches).\n\n\n\nDuplicate flags (keys), or passing the same flag `--foo` multiple times, is currently not supported.\nTo workaround that you must use [patches](#patches).\n\n\n### APIServer flags\n\nFor details, see the [reference documentation for kube-apiserver](\/docs\/reference\/command-line-tools-reference\/kube-apiserver\/).\n\nExample usage:\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.0\napiServer:\n  extraArgs:\n  - name: \"enable-admission-plugins\"\n    value: \"AlwaysPullImages,DefaultStorageClass\"\n  - name: \"audit-log-path\"\n    value: \"\/home\/johndoe\/audit.log\"\n```\n\n### ControllerManager flags\n\nFor details, see the [reference documentation for kube-controller-manager](\/docs\/reference\/command-line-tools-reference\/kube-controller-manager\/).\n\nExample usage:\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: ClusterConfiguration\nkubernetesVersion: v1.16.0\ncontrollerManager:\n  extraArgs:\n  - name: \"cluster-signing-key-file\"\n    value: \"\/home\/johndoe\/keys\/ca.key\"\n  - name: \"deployment-controller-sync-period\"\n    value: \"50\"\n```\n\n### Scheduler flags\n\nFor details, see the [reference documentation for kube-scheduler](\/docs\/reference\/command-line-tools-reference\/kube-scheduler\/).\n\nExample usage:\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: 
ClusterConfiguration\nkubernetesVersion: v1.16.0\nscheduler:\n  extraArgs:\n  - name: \"config\"\n    value: \"\/etc\/kubernetes\/scheduler-config.yaml\"\n  extraVolumes:\n    - name: schedulerconfig\n      hostPath: \/home\/johndoe\/schedconfig.yaml\n      mountPath: \/etc\/kubernetes\/scheduler-config.yaml\n      readOnly: true\n      pathType: \"File\"\n```\n\n### Etcd flags\n\nFor details, see the [etcd server documentation](https:\/\/etcd.io\/docs\/).\n\nExample usage:\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: ClusterConfiguration\netcd:\n  local:\n    extraArgs:\n    - name: \"election-timeout\"\n      value: 1000\n```\n\n## Customizing with patches {#patches}\n\n\n\nKubeadm allows you to pass a directory with patch files to `InitConfiguration` and `JoinConfiguration`\non individual nodes. These patches can be used as the last customization step before component configuration\nis written to disk.\n\nYou can pass this file to `kubeadm init` with `--config <YOUR CONFIG YAML>`:\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: InitConfiguration\npatches:\n  directory: \/home\/user\/somedir\n```\n\n\nFor `kubeadm init` you can pass a file containing both a `ClusterConfiguration` and `InitConfiguration`\nseparated by `---`.\n\n\nYou can pass this file to `kubeadm join` with `--config <YOUR CONFIG YAML>`:\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: JoinConfiguration\npatches:\n  directory: \/home\/user\/somedir\n```\n\nThe directory must contain files named `target[suffix][+patchtype].extension`.\nFor example, `kube-apiserver0+merge.yaml` or just `etcd.json`.\n\n- `target` can be one of `kube-apiserver`, `kube-controller-manager`, `kube-scheduler`, `etcd`\nand `kubeletconfiguration`.\n- `suffix` is an optional string that can be used to determine which patches are applied first\nalpha-numerically.\n- `patchtype` can be one of `strategic`, `merge` or `json` and these must match the patching formats\n[supported by 
kubectl](\/docs\/tasks\/manage-kubernetes-objects\/update-api-object-kubectl-patch).\nThe default `patchtype` is `strategic`.\n- `extension` must be either `json` or `yaml`.\n\n\nIf you are using `kubeadm upgrade` to upgrade your kubeadm nodes you must again provide the same\npatches, so that the customization is preserved after upgrade. To do that you can use the `--patches`\nflag, which must point to the same directory. `kubeadm upgrade` currently does not support a configuration\nAPI structure that can be used for the same purpose.\n\n\n## Customizing the kubelet {#kubelet}\n\nTo customize the kubelet you can add a [`KubeletConfiguration`](\/docs\/reference\/config-api\/kubelet-config.v1beta1\/)\nnext to the `ClusterConfiguration` or `InitConfiguration` separated by `---` within the same configuration file.\nThis file can then be passed to `kubeadm init` and kubeadm will apply the same base `KubeletConfiguration`\nto all nodes in the cluster.\n\nFor applying instance-specific configuration over the base `KubeletConfiguration` you can use the\n[`kubeletconfiguration` patch target](#patches).\n\nAlternatively, you can use kubelet flags as overrides by passing them in the\n`nodeRegistration.kubeletExtraArgs` field supported by both `InitConfiguration` and `JoinConfiguration`.\nSome kubelet flags are deprecated, so check their status in the\n[kubelet reference documentation](\/docs\/reference\/command-line-tools-reference\/kubelet) before using them.\n\nFor additional details see [Configuring each kubelet in your cluster using kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/kubelet-integration).\n\n## Customizing kube-proxy\n\nTo customize kube-proxy you can pass a `KubeProxyConfiguration` next to your `ClusterConfiguration` or\n`InitConfiguration` to `kubeadm init` separated by `---`.\n\nFor more details you can navigate to our [API reference pages](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/).\n\n\nkubeadm deploys kube-proxy as a DaemonSet, which 
means\nthat the `KubeProxyConfiguration` would apply to all instances of kube-proxy in the cluster.\n","site":"kubernetes setup","answers_cleaned":"    reviewers    sig cluster lifecycle title  Customizing components with the kubeadm API content type  concept weight  40           overview      This page covers how to customize the components that kubeadm deploys  For control plane components you can use flags in the  ClusterConfiguration  structure or patches per node  For the kubelet and kube proxy you can use  KubeletConfiguration  and  KubeProxyConfiguration   accordingly   All of these options are possible via the kubeadm configuration API  For more details on each field in the configuration you can navigate to our  API reference pages   docs reference config api kubeadm config v1beta4      Customizing the CoreDNS deployment of kubeadm is currently not supported  You must manually patch the  kube system coredns   and recreate the CoreDNS  after that  Alternatively  you can skip the default CoreDNS deployment and deploy your own variant  For more details on that see  Using init phases with kubeadm   docs reference setup tools kubeadm kubeadm init  init phases      To reconfigure a cluster that has already been created see  Reconfiguring a kubeadm cluster   docs tasks administer cluster kubeadm kubeadm reconfigure          body         Customizing the control plane with flags in  ClusterConfiguration   The kubeadm  ClusterConfiguration  object exposes a way for users to override the default flags passed to control plane components such as the APIServer  ControllerManager  Scheduler and Etcd  The components are defined using the following structures      apiServer     controllerManager     scheduler     etcd   These structures contain a common  extraArgs  field  that consists of  name     value  pairs  To override a flag for a control plane component   1   Add the appropriate  extraArgs  to your configuration  2   Add flags to the  extraArgs  field  3   Run  
kubeadm init  with    config  YOUR CONFIG YAML      You can generate a  ClusterConfiguration  object with default values by running  kubeadm config print init defaults  and saving the output to a file of your choice     The  ClusterConfiguration  object is currently global in kubeadm clusters  This means that any flags that you add  will apply to all instances of the same component on different nodes  To apply individual configuration per component on different nodes you can use  patches   patches      Duplicate flags  keys   or passing the same flag    foo  multiple times  is currently not supported  To workaround that you must use  patches   patches         APIServer flags  For details  see the  reference documentation for kube apiserver   docs reference command line tools reference kube apiserver     Example usage      yaml apiVersion  kubeadm k8s io v1beta4 kind  ClusterConfiguration kubernetesVersion  v1 16 0 apiServer    extraArgs      name   enable admission plugins      value   AlwaysPullImages DefaultStorageClass      name   audit log path      value    home johndoe audit log           ControllerManager flags  For details  see the  reference documentation for kube controller manager   docs reference command line tools reference kube controller manager     Example usage      yaml apiVersion  kubeadm k8s io v1beta4 kind  ClusterConfiguration kubernetesVersion  v1 16 0 controllerManager    extraArgs      name   cluster signing key file      value    home johndoe keys ca key      name   deployment controller sync period      value   50           Scheduler flags  For details  see the  reference documentation for kube scheduler   docs reference command line tools reference kube scheduler     Example usage      yaml apiVersion  kubeadm k8s io v1beta4 kind  ClusterConfiguration kubernetesVersion  v1 16 0 scheduler    extraArgs      name   config      value    etc kubernetes scheduler config yaml    extraVolumes        name  schedulerconfig       hostPath   home 
johndoe schedconfig yaml       mountPath   etc kubernetes scheduler config yaml       readOnly  true       pathType   File           Etcd flags  For details  see the  etcd server documentation  https   etcd io docs     Example usage      yaml apiVersion  kubeadm k8s io v1beta4 kind  ClusterConfiguration etcd    local      extraArgs        name   election timeout        value  1000         Customizing with patches   patches     Kubeadm allows you to pass a directory with patch files to  InitConfiguration  and  JoinConfiguration  on individual nodes  These patches can be used as the last customization step before component configuration is written to disk   You can pass this file to  kubeadm init  with    config  YOUR CONFIG YAML        yaml apiVersion  kubeadm k8s io v1beta4 kind  InitConfiguration patches    directory   home user somedir       For  kubeadm init  you can pass a file containing both a  ClusterConfiguration  and  InitConfiguration  separated by          You can pass this file to  kubeadm join  with    config  YOUR CONFIG YAML        yaml apiVersion  kubeadm k8s io v1beta4 kind  JoinConfiguration patches    directory   home user somedir      The directory must contain files named  target suffix   patchtype  extension   For example   kube apiserver0 merge yaml  or just  etcd json       target  can be one of  kube apiserver    kube controller manager    kube scheduler    etcd  and  kubeletconfiguration      suffix  is an optional string that can be used to determine which patches are applied first alpha numerically     patchtype  can be one of  strategic    merge  or  json  and these must match the patching formats  supported by kubectl   docs tasks manage kubernetes objects update api object kubectl patch   The default  patchtype  is  strategic      extension  must be either  json  or  yaml     If you are using  kubeadm upgrade  to upgrade your kubeadm nodes you must again provide the same patches  so that the customization is preserved after upgrade  
To do that you can use the    patches  flag  which must point to the same directory   kubeadm upgrade  currently does not support a configuration API structure that can be used for the same purpose       Customizing the kubelet   kubelet   To customize the kubelet you can add a   KubeletConfiguration    docs reference config api kubelet config v1beta1   next to the  ClusterConfiguration  or  InitConfiguration  separated by       within the same configuration file  This file can then be passed to  kubeadm init  and kubeadm will apply the same base  KubeletConfiguration  to all nodes in the cluster   For applying instance specific configuration over the base  KubeletConfiguration  you can use the   kubeletconfiguration  patch target   patches    Alternatively  you can use kubelet flags as overrides by passing them in the  nodeRegistration kubeletExtraArgs  field supported by both  InitConfiguration  and  JoinConfiguration   Some kubelet flags are deprecated  so check their status in the  kubelet reference documentation   docs reference command line tools reference kubelet  before using them   For additional details see  Configuring each kubelet in your cluster using kubeadm   docs setup production environment tools kubeadm kubelet integration      Customizing kube proxy  To customize kube proxy you can pass a  KubeProxyConfiguration  next your  ClusterConfiguration  or  InitConfiguration  to  kubeadm init  separated by         For more details you can navigate to our  API reference pages   docs reference config api kubeadm config v1beta4      kubeadm deploys kube proxy as a   which means that the  KubeProxyConfiguration  would apply to all instances of kube proxy in the cluster  "}
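The `target[suffix][+patchtype].extension` naming scheme described in the record above can be illustrated with a short sketch. The directory path and the patched fields are hypothetical; only the file-name convention is taken from the documentation:

```shell
# Sketch: build a kubeadm patches directory following the
# target[suffix][+patchtype].extension convention. Paths and the
# patched fields below are illustrative only.
PATCH_DIR=/tmp/kubeadm-patches
mkdir -p "${PATCH_DIR}"

# Target "kube-apiserver", suffix "0" for alpha-numeric ordering,
# "+merge" selects the merge patch type:
cat <<'EOF' > "${PATCH_DIR}/kube-apiserver0+merge.yaml"
metadata:
  annotations:
    example.com/patched: "true"
EOF

# A strategic patch (the default type) needs no +patchtype part:
cat <<'EOF' > "${PATCH_DIR}/etcd.yaml"
metadata:
  labels:
    tier: control-plane
EOF

ls "${PATCH_DIR}"
```

The directory would then be referenced from the `patches.directory` field of an `InitConfiguration` or `JoinConfiguration`, or via `--patches` during `kubeadm upgrade`.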
{"questions":"kubernetes setup weight 70 title Set up a High Availability etcd Cluster with kubeadm contenttype task sig cluster lifecycle reviewers overview","answers":"---\nreviewers:\n- sig-cluster-lifecycle\ntitle: Set up a High Availability etcd Cluster with kubeadm\ncontent_type: task\nweight: 70\n---\n\n<!-- overview -->\n\n\nBy default, kubeadm runs a local etcd instance on each control plane node.\nIt is also possible to treat the etcd cluster as external and provision\netcd instances on separate hosts. The differences between the two approaches are covered in the\n[Options for Highly Available topology](\/docs\/setup\/production-environment\/tools\/kubeadm\/ha-topology) page.\n\nThis task walks through the process of creating a high availability external\netcd cluster of three members that can be used by kubeadm during cluster creation.\n\n## Before you begin\n\n- Three hosts that can talk to each other over TCP ports 2379 and 2380. This\n  document assumes these default ports. However, they are configurable through\n  the kubeadm config file.\n- Each host must have systemd and a bash compatible shell installed.\n- Each host must [have a container runtime, kubelet, and kubeadm installed](\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/).\n- Each host should have access to the Kubernetes container image registry (`registry.k8s.io`) or list\/pull the required etcd image using\n  `kubeadm config images list\/pull`. This guide will set up etcd instances as\n  [static pods](\/docs\/tasks\/configure-pod-container\/static-pod\/) managed by a kubelet.\n- Some infrastructure to copy files between hosts. 
For example `ssh` and `scp`\n  can satisfy this requirement.\n\n<!-- steps -->\n\n## Setting up the cluster\n\nThe general approach is to generate all certs on one node and only distribute\nthe _necessary_ files to the other nodes.\n\n\nkubeadm contains all the necessary cryptographic machinery to generate\nthe certificates described below; no other cryptographic tooling is required for\nthis example.\n\n\n\nThe examples below use IPv4 addresses but you can also configure kubeadm, the kubelet and etcd\nto use IPv6 addresses. Dual-stack is supported by some Kubernetes options, but not by etcd. For more details\non Kubernetes dual-stack support see [Dual-stack support with kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/dual-stack-support\/).\n\n\n1. Configure the kubelet to be a service manager for etcd.\n\n   You must do this on every host where etcd should be running.\n   Since etcd was created first, you must override the service priority by creating a new unit file\n   that has higher precedence than the kubeadm-provided kubelet unit file.\n\n   ```sh\n   cat << EOF > \/etc\/systemd\/system\/kubelet.service.d\/kubelet.conf\n   # Replace \"systemd\" with the cgroup driver of your container runtime. 
The default value in the kubelet is \"cgroupfs\".\n   # Replace the value of \"containerRuntimeEndpoint\" for a different container runtime if needed.\n   #\n   apiVersion: kubelet.config.k8s.io\/v1beta1\n   kind: KubeletConfiguration\n   authentication:\n     anonymous:\n       enabled: false\n     webhook:\n       enabled: false\n   authorization:\n     mode: AlwaysAllow\n   cgroupDriver: systemd\n   address: 127.0.0.1\n   containerRuntimeEndpoint: unix:\/\/\/var\/run\/containerd\/containerd.sock\n   staticPodPath: \/etc\/kubernetes\/manifests\n   EOF\n\n   cat << EOF > \/etc\/systemd\/system\/kubelet.service.d\/20-etcd-service-manager.conf\n   [Service]\n   ExecStart=\n   ExecStart=\/usr\/bin\/kubelet --config=\/etc\/systemd\/system\/kubelet.service.d\/kubelet.conf\n   Restart=always\n   EOF\n\n   systemctl daemon-reload\n   systemctl restart kubelet\n   ```\n\n   Check the kubelet status to ensure it is running.\n\n   ```sh\n   systemctl status kubelet\n   ```\n\n1. Create configuration files for kubeadm.\n\n   Generate one kubeadm configuration file for each host that will have an etcd\n   member running on it using the following script.\n\n   ```sh\n   # Update HOST0, HOST1 and HOST2 with the IPs of your hosts\n   export HOST0=10.0.0.6\n   export HOST1=10.0.0.7\n   export HOST2=10.0.0.8\n\n   # Update NAME0, NAME1 and NAME2 with the hostnames of your hosts\n   export NAME0=\"infra0\"\n   export NAME1=\"infra1\"\n   export NAME2=\"infra2\"\n\n   # Create temp directories to store files that will end up on other hosts\n   mkdir -p \/tmp\/${HOST0}\/ \/tmp\/${HOST1}\/ \/tmp\/${HOST2}\/\n\n   HOSTS=(${HOST0} ${HOST1} ${HOST2})\n   NAMES=(${NAME0} ${NAME1} ${NAME2})\n\n   for i in \"${!HOSTS[@]}\"; do\n   HOST=${HOSTS[$i]}\n   NAME=${NAMES[$i]}\n   cat << EOF > \/tmp\/${HOST}\/kubeadmcfg.yaml\n   ---\n   apiVersion: \"kubeadm.k8s.io\/v1beta4\"\n   kind: InitConfiguration\n   nodeRegistration:\n       name: ${NAME}\n   localAPIEndpoint:\n       advertiseAddress: 
${HOST}\n   ---\n   apiVersion: \"kubeadm.k8s.io\/v1beta4\"\n   kind: ClusterConfiguration\n   etcd:\n       local:\n           serverCertSANs:\n           - \"${HOST}\"\n           peerCertSANs:\n           - \"${HOST}\"\n           extraArgs:\n           - name: initial-cluster\n             value: ${NAMES[0]}=https:\/\/${HOSTS[0]}:2380,${NAMES[1]}=https:\/\/${HOSTS[1]}:2380,${NAMES[2]}=https:\/\/${HOSTS[2]}:2380\n           - name: initial-cluster-state\n             value: new\n           - name: name\n             value: ${NAME}\n           - name: listen-peer-urls\n             value: https:\/\/${HOST}:2380\n           - name: listen-client-urls\n             value: https:\/\/${HOST}:2379\n           - name: advertise-client-urls\n             value: https:\/\/${HOST}:2379\n           - name: initial-advertise-peer-urls\n             value: https:\/\/${HOST}:2380\n   EOF\n   done\n   ```\n\n1. Generate the certificate authority.\n\n   If you already have a CA then the only required action is copying the CA's `crt` and\n   `key` files to `\/etc\/kubernetes\/pki\/etcd\/ca.crt` and\n   `\/etc\/kubernetes\/pki\/etcd\/ca.key`. After those files have been copied,\n   proceed to the next step, \"Create certificates for each member\".\n\n   If you do not already have a CA then run this command on `$HOST0` (where you\n   generated the configuration files for kubeadm).\n\n   ```\n   kubeadm init phase certs etcd-ca\n   ```\n\n   This creates two files:\n\n   - `\/etc\/kubernetes\/pki\/etcd\/ca.crt`\n   - `\/etc\/kubernetes\/pki\/etcd\/ca.key`\n\n1. 
Create certificates for each member.\n\n   ```sh\n   kubeadm init phase certs etcd-server --config=\/tmp\/${HOST2}\/kubeadmcfg.yaml\n   kubeadm init phase certs etcd-peer --config=\/tmp\/${HOST2}\/kubeadmcfg.yaml\n   kubeadm init phase certs etcd-healthcheck-client --config=\/tmp\/${HOST2}\/kubeadmcfg.yaml\n   kubeadm init phase certs apiserver-etcd-client --config=\/tmp\/${HOST2}\/kubeadmcfg.yaml\n   cp -R \/etc\/kubernetes\/pki \/tmp\/${HOST2}\/\n   # cleanup non-reusable certificates\n   find \/etc\/kubernetes\/pki -not -name ca.crt -not -name ca.key -type f -delete\n\n   kubeadm init phase certs etcd-server --config=\/tmp\/${HOST1}\/kubeadmcfg.yaml\n   kubeadm init phase certs etcd-peer --config=\/tmp\/${HOST1}\/kubeadmcfg.yaml\n   kubeadm init phase certs etcd-healthcheck-client --config=\/tmp\/${HOST1}\/kubeadmcfg.yaml\n   kubeadm init phase certs apiserver-etcd-client --config=\/tmp\/${HOST1}\/kubeadmcfg.yaml\n   cp -R \/etc\/kubernetes\/pki \/tmp\/${HOST1}\/\n   find \/etc\/kubernetes\/pki -not -name ca.crt -not -name ca.key -type f -delete\n\n   kubeadm init phase certs etcd-server --config=\/tmp\/${HOST0}\/kubeadmcfg.yaml\n   kubeadm init phase certs etcd-peer --config=\/tmp\/${HOST0}\/kubeadmcfg.yaml\n   kubeadm init phase certs etcd-healthcheck-client --config=\/tmp\/${HOST0}\/kubeadmcfg.yaml\n   kubeadm init phase certs apiserver-etcd-client --config=\/tmp\/${HOST0}\/kubeadmcfg.yaml\n   # No need to move the certs because they are for HOST0\n\n   # clean up certs that should not be copied off this host\n   find \/tmp\/${HOST2} -name ca.key -type f -delete\n   find \/tmp\/${HOST1} -name ca.key -type f -delete\n   ```\n\n1. 
Copy certificates and kubeadm configs.\n\n   The certificates have been generated and now they must be moved to their\n   respective hosts.\n\n   ```sh\n   USER=ubuntu\n   HOST=${HOST1}\n   scp -r \/tmp\/${HOST}\/* ${USER}@${HOST}:\n   ssh ${USER}@${HOST}\n   USER@HOST $ sudo -Es\n   root@HOST $ chown -R root:root pki\n   root@HOST $ mv pki \/etc\/kubernetes\/\n   ```\n\n1. Ensure all expected files exist.\n\n   The complete list of required files on `$HOST0` is:\n\n   ```\n   \/tmp\/${HOST0}\n   \u2514\u2500\u2500 kubeadmcfg.yaml\n   ---\n   \/etc\/kubernetes\/pki\n   \u251c\u2500\u2500 apiserver-etcd-client.crt\n   \u251c\u2500\u2500 apiserver-etcd-client.key\n   \u2514\u2500\u2500 etcd\n       \u251c\u2500\u2500 ca.crt\n       \u251c\u2500\u2500 ca.key\n       \u251c\u2500\u2500 healthcheck-client.crt\n       \u251c\u2500\u2500 healthcheck-client.key\n       \u251c\u2500\u2500 peer.crt\n       \u251c\u2500\u2500 peer.key\n       \u251c\u2500\u2500 server.crt\n       \u2514\u2500\u2500 server.key\n   ```\n\n   On `$HOST1`:\n\n   ```\n   $HOME\n   \u2514\u2500\u2500 kubeadmcfg.yaml\n   ---\n   \/etc\/kubernetes\/pki\n   \u251c\u2500\u2500 apiserver-etcd-client.crt\n   \u251c\u2500\u2500 apiserver-etcd-client.key\n   \u2514\u2500\u2500 etcd\n       \u251c\u2500\u2500 ca.crt\n       \u251c\u2500\u2500 healthcheck-client.crt\n       \u251c\u2500\u2500 healthcheck-client.key\n       \u251c\u2500\u2500 peer.crt\n       \u251c\u2500\u2500 peer.key\n       \u251c\u2500\u2500 server.crt\n       \u2514\u2500\u2500 server.key\n   ```\n\n   On `$HOST2`:\n\n   ```\n   $HOME\n   \u2514\u2500\u2500 kubeadmcfg.yaml\n   ---\n   \/etc\/kubernetes\/pki\n   \u251c\u2500\u2500 apiserver-etcd-client.crt\n   \u251c\u2500\u2500 apiserver-etcd-client.key\n   \u2514\u2500\u2500 etcd\n       \u251c\u2500\u2500 ca.crt\n       \u251c\u2500\u2500 healthcheck-client.crt\n       \u251c\u2500\u2500 healthcheck-client.key\n       \u251c\u2500\u2500 peer.crt\n       \u251c\u2500\u2500 peer.key\n   
    \u251c\u2500\u2500 server.crt\n       \u2514\u2500\u2500 server.key\n   ```\n\n1. Create the static pod manifests.\n\n   Now that the certificates and configs are in place it's time to create the\n   manifests. On each host run the `kubeadm` command to generate a static manifest\n   for etcd.\n\n   ```sh\n   root@HOST0 $ kubeadm init phase etcd local --config=\/tmp\/${HOST0}\/kubeadmcfg.yaml\n   root@HOST1 $ kubeadm init phase etcd local --config=$HOME\/kubeadmcfg.yaml\n   root@HOST2 $ kubeadm init phase etcd local --config=$HOME\/kubeadmcfg.yaml\n   ```\n\n1. Optional: Check the cluster health.\n\n    If `etcdctl` isn't available, you can run this tool inside a container image.\n    You would do that directly with your container runtime using a tool such as\n    `crictl run` and not through Kubernetes.\n\n    ```sh\n    ETCDCTL_API=3 etcdctl \\\n    --cert \/etc\/kubernetes\/pki\/etcd\/peer.crt \\\n    --key \/etc\/kubernetes\/pki\/etcd\/peer.key \\\n    --cacert \/etc\/kubernetes\/pki\/etcd\/ca.crt \\\n    --endpoints https:\/\/${HOST0}:2379 endpoint health\n    ...\n    https:\/\/[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms\n    https:\/\/[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms\n    https:\/\/[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms\n    ```\n\n    - Set `${HOST0}` to the IP address of the host you are testing.\n\n\n## What's next\n\nOnce you have an etcd cluster with 3 working members, you can continue setting up a\nhighly available control plane using the\n[external etcd method with kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/high-availability\/).","site":"kubernetes setup"}
{"questions":"kubernetes setup title Creating a cluster with kubeadm weight 30 contenttype task sig cluster lifecycle reviewers overview","answers":"---\nreviewers:\n- sig-cluster-lifecycle\ntitle: Creating a cluster with kubeadm\ncontent_type: task\nweight: 30\n---\n\n<!-- overview -->\n\n<img src=\"\/images\/kubeadm-stacked-color.png\" align=\"right\" width=\"150px\"><\/img>\nUsing `kubeadm`, you can create a minimum viable Kubernetes cluster that conforms to best practices.\nIn fact, you can use `kubeadm` to set up a cluster that will pass the\n[Kubernetes Conformance tests](\/blog\/2017\/10\/software-conformance-certification\/).\n`kubeadm` also supports other cluster lifecycle functions, such as\n[bootstrap tokens](\/docs\/reference\/access-authn-authz\/bootstrap-tokens\/) and cluster upgrades.\n\nThe `kubeadm` tool is good if you need:\n\n- A simple way for you to try out Kubernetes, possibly for the first time.\n- A way for existing users to automate setting up a cluster and test their application.\n- A building block in other ecosystem and\/or installer tools with a larger\n  scope.\n\nYou can install and use `kubeadm` on various machines: your laptop, a set\nof cloud servers, a Raspberry Pi, and more. Whether you're deploying into the\ncloud or on-premises, you can integrate `kubeadm` into provisioning systems such\nas Ansible or Terraform.\n\n## Before you begin\n\nTo follow this guide, you need:\n\n- One or more machines running a deb\/rpm-compatible Linux OS; for example: Ubuntu or CentOS.\n- 2 GiB or more of RAM per machine--any less leaves little room for your apps.\n- At least 2 CPUs on the machine that you use as a control-plane node.\n- Full network connectivity among all machines in the cluster. 
You can use either a\n  public or a private network.\n\nYou also need to use a version of `kubeadm` that can deploy the version\nof Kubernetes that you want to use in your new cluster.\n\n[Kubernetes' version and version skew support policy](\/docs\/setup\/release\/version-skew-policy\/#supported-versions)\napplies to `kubeadm` as well as to Kubernetes overall.\nCheck that policy to learn about what versions of Kubernetes and `kubeadm`\nare supported. This page is written for Kubernetes .\n\nThe `kubeadm` tool's overall feature state is General Availability (GA). Some sub-features are\nstill under active development. The implementation of creating the cluster may change\nslightly as the tool evolves, but the overall implementation should be pretty stable.\n\n\nAny commands under `kubeadm alpha` are, by definition, supported on an alpha level.\n\n\n<!-- steps -->\n\n## Objectives\n\n* Install a single control-plane Kubernetes cluster\n* Install a Pod network on the cluster so that your Pods can\n  talk to each other\n\n## Instructions\n\n### Preparing the hosts\n\n#### Component installation\n\nInstall a container runtime\nand kubeadm on all the hosts. For detailed instructions and other prerequisites, see\n[Installing kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/).\n\n\nIf you have already installed kubeadm, see the first two steps of the\n[Upgrading Linux nodes](\/docs\/tasks\/administer-cluster\/kubeadm\/upgrading-linux-nodes)\ndocument for instructions on how to upgrade kubeadm.\n\nWhen you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for\nkubeadm to tell it what to do. This crashloop is expected and normal.\nAfter you initialize your control-plane, the kubelet runs normally.\n\n\n#### Network setup\n\nkubeadm, similarly to other Kubernetes components, tries to find a usable IP on\nthe network interfaces associated with a default gateway on a host. 
Such\nan IP is then used for the advertising and\/or listening performed by a component.\n\nTo find out what this IP is on a Linux host you can use:\n\n```shell\nip route show # Look for a line starting with \"default via\"\n```\n\n\nIf two or more default gateways are present on the host, a Kubernetes component will\ntry to use the first one it encounters that has a suitable global unicast IP address.\nWhile making this choice, the exact ordering of gateways might vary between different\noperating systems and kernel versions.\n\n\nKubernetes components do not accept a custom network interface as an option,\ntherefore a custom IP address must be passed as a flag to all component instances\nthat need such a custom configuration.\n\n\nIf the host does not have a default gateway and if a custom IP address is not passed\nto a Kubernetes component, the component may exit with an error.\n\n\nTo configure the API server advertise address for control plane nodes created with both\n`init` and `join`, the flag `--apiserver-advertise-address` can be used.\nPreferably, this option can be set in the [kubeadm API](\/docs\/reference\/config-api\/kubeadm-config.v1beta4)\nas `InitConfiguration.localAPIEndpoint` and `JoinConfiguration.controlPlane.localAPIEndpoint`.\n\nFor kubelets on all nodes, the `--node-ip` option can be passed in\n`.nodeRegistration.kubeletExtraArgs` inside a kubeadm configuration file\n(`InitConfiguration` or `JoinConfiguration`).\n\nFor dual-stack see\n[Dual-stack support with kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/dual-stack-support).\n\nThe IP addresses that you assign to control plane components become part of their X.509 certificates'\nsubject alternative name fields. Changing these IP addresses would require\nsigning new certificates and restarting the affected components, so that the change in\ncertificate files is reflected. 
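\nAs an illustration of the two configuration hooks described above, a minimal `InitConfiguration` sketch could look like the following (`10.0.0.5` is a placeholder address, not a value from this page):\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: InitConfiguration\n# custom IP that this node's API server advertises\nlocalAPIEndpoint:\n  advertiseAddress: 10.0.0.5\nnodeRegistration:\n  kubeletExtraArgs:\n  # custom IP that the kubelet reports for this node\n  - name: node-ip\n    value: 10.0.0.5\n```\n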
See\n[Manual certificate renewal](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-certs\/#manual-certificate-renewal)\nfor more details on this topic.\n\n\nThe Kubernetes project recommends against this approach (configuring all component instances\nwith custom IP addresses). Instead, the Kubernetes maintainers recommend setting up the host network,\nso that the default gateway IP is the one that Kubernetes components auto-detect and use.\nOn Linux nodes, you can use commands such as `ip route` to configure networking; your operating\nsystem might also provide higher level network management tools. If your node's default gateway\nis a public IP address, you should configure packet filtering or other security measures that\nprotect the nodes and your cluster.\n\n\n### Preparing the required container images\n\nThis step is optional and only applies if you do not want `kubeadm init` and `kubeadm join`\nto download the default container images, which are hosted at `registry.k8s.io`.\n\nKubeadm has commands that can help you pre-pull the required images\nwhen creating a cluster without an internet connection on its nodes.\nSee [Running kubeadm without an internet connection](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init#without-internet-connection)\nfor more details.\n\nKubeadm allows you to use a custom image repository for the required images.\nSee [Using custom images](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init#custom-images)\nfor more details.\n\n### Initializing your control-plane node\n\nThe control-plane node is the machine where the control plane components run, including\netcd (the cluster database) and the\nAPI Server\n(which the `kubectl` command line tool\ncommunicates with).\n\n1. 
(Recommended) If you have plans to upgrade this single control-plane `kubeadm` cluster\n   to [high availability](\/docs\/setup\/production-environment\/tools\/kubeadm\/high-availability\/)\n   you should specify the `--control-plane-endpoint` to set the shared endpoint for all control-plane nodes.\n   Such an endpoint can be either a DNS name or an IP address of a load-balancer.\n1. Choose a Pod network add-on, and verify whether it requires any arguments to\n   be passed to `kubeadm init`. Depending on which\n   third-party provider you choose, you might need to set the `--pod-network-cidr` to\n   a provider-specific value. See [Installing a Pod network add-on](#pod-network).\n1. (Optional) `kubeadm` tries to detect the container runtime by using a list of well\n   known endpoints. To use a different container runtime, or if there is more than one installed\n   on the provisioned node, specify the `--cri-socket` argument to `kubeadm`. See\n   [Installing a runtime](\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/#installing-runtime).\n\nTo initialize the control-plane node run:\n\n```bash\nkubeadm init <args>\n```\n\n### Considerations about apiserver-advertise-address and ControlPlaneEndpoint\n\nWhile `--apiserver-advertise-address` can be used to set the advertised address for this particular\ncontrol-plane node's API server, `--control-plane-endpoint` can be used to set the shared endpoint\nfor all control-plane nodes.\n\n`--control-plane-endpoint` allows both IP addresses and DNS names that can map to IP addresses.\nPlease contact your network administrator to evaluate possible solutions with respect to such mapping.\n\nHere is an example mapping:\n\n```\n192.168.0.102 cluster-endpoint\n```\n\nWhere `192.168.0.102` is the IP address of this node and `cluster-endpoint` is a custom DNS name that maps to this IP.\nThis will allow you to pass `--control-plane-endpoint=cluster-endpoint` to `kubeadm init` and pass the same DNS name 
to\n`kubeadm join`. Later you can modify `cluster-endpoint` to point to the address of your load-balancer in a\nhigh availability scenario.\n\nTurning a single control plane cluster created without `--control-plane-endpoint` into a highly available cluster\nis not supported by kubeadm.\n\n### More information\n\nFor more information about `kubeadm init` arguments, see the [kubeadm reference guide](\/docs\/reference\/setup-tools\/kubeadm\/).\n\nTo configure `kubeadm init` with a configuration file see\n[Using kubeadm init with a configuration file](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-init\/#config-file).\n\nTo customize control plane components, including optional IPv6 assignment to liveness probe\nfor control plane components and etcd server, provide extra arguments to each component as documented in\n[custom arguments](\/docs\/setup\/production-environment\/tools\/kubeadm\/control-plane-flags\/).\n\nTo reconfigure a cluster that has already been created see\n[Reconfiguring a kubeadm cluster](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-reconfigure).\n\nTo run `kubeadm init` again, you must first [tear down the cluster](#tear-down).\n\nIf you join a node with a different architecture to your cluster, make sure that your deployed DaemonSets\nhave container image support for this architecture.\n\n`kubeadm init` first runs a series of prechecks to ensure that the machine\nis ready to run Kubernetes. These prechecks expose warnings and exit on errors. `kubeadm init`\nthen downloads and installs the cluster control plane components. 
This may take several minutes.\nAfter it finishes you should see:\n\n```none\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n  mkdir -p $HOME\/.kube\n  sudo cp -i \/etc\/kubernetes\/admin.conf $HOME\/.kube\/config\n  sudo chown $(id -u):$(id -g) $HOME\/.kube\/config\n\nYou should now deploy a Pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n  \/docs\/concepts\/cluster-administration\/addons\/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n  kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>\n```\n\nTo make kubectl work for your non-root user, run these commands, which are\nalso part of the `kubeadm init` output:\n\n```bash\nmkdir -p $HOME\/.kube\nsudo cp -i \/etc\/kubernetes\/admin.conf $HOME\/.kube\/config\nsudo chown $(id -u):$(id -g) $HOME\/.kube\/config\n```\n\nAlternatively, if you are the `root` user, you can run:\n\n```bash\nexport KUBECONFIG=\/etc\/kubernetes\/admin.conf\n```\n\n\nThe kubeconfig file `admin.conf` that `kubeadm init` generates contains a certificate with\n`Subject: O = kubeadm:cluster-admins, CN = kubernetes-admin`. The group `kubeadm:cluster-admins`\nis bound to the built-in `cluster-admin` ClusterRole.\nDo not share the `admin.conf` file with anyone.\n\n`kubeadm init` generates another kubeconfig file `super-admin.conf` that contains a certificate with\n`Subject: O = system:masters, CN = kubernetes-super-admin`.\n`system:masters` is a break-glass, super user group that bypasses the authorization layer (for example RBAC).\nDo not share the `super-admin.conf` file with anyone. 
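\nTo check which identity a given kubeconfig file actually carries, you can decode the client certificate embedded in it. The following is only a sketch: it assumes `openssl` is installed and uses the default `admin.conf` path (adjust the path to inspect `super-admin.conf`):\n\n```shell\n# Print the subject of the client certificate embedded in a kubeconfig file.\nKUBECONFIG_FILE=\/etc\/kubernetes\/admin.conf\ngrep 'client-certificate-data' \"${KUBECONFIG_FILE}\" \\\n  | awk '{print $2}' \\\n  | base64 -d \\\n  | openssl x509 -noout -subject\n```\n\nFor `admin.conf`, the printed subject should contain the `kubeadm:cluster-admins` group and `kubernetes-admin` common name quoted above.\n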
It is recommended to move the file to a safe location.\n\nSee\n[Generating kubeconfig files for additional users](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-certs#kubeconfig-additional-users)\non how to use `kubeadm kubeconfig user` to generate kubeconfig files for additional users.\n\n\nMake a record of the `kubeadm join` command that `kubeadm init` outputs. You\nneed this command to [join nodes to your cluster](#join-nodes).\n\nThe token is used for mutual authentication between the control-plane node and the joining\nnodes. The token included here is secret. Keep it safe, because anyone with this\ntoken can add authenticated nodes to your cluster. These tokens can be listed,\ncreated, and deleted with the `kubeadm token` command. See the\n[kubeadm reference guide](\/docs\/reference\/setup-tools\/kubeadm\/kubeadm-token\/).\n\n### Installing a Pod network add-on {#pod-network}\n\n\nThis section contains important information about networking setup and\ndeployment order.\nRead all of this advice carefully before proceeding.\n\n**You must deploy a\nContainer Network Interface\n(CNI) based Pod network add-on so that your Pods can communicate with each other.\nCluster DNS (CoreDNS) will not start up before a network is installed.**\n\n- Take care that your Pod network does not overlap with any of the host\n  networks: you are likely to see problems if there is any overlap.\n  (If you find a collision between your network plugin's preferred Pod\n  network and some of your host networks, you should think of a suitable\n  CIDR block to use instead, then use that during `kubeadm init` with\n  `--pod-network-cidr` and as a replacement in your network plugin's YAML).\n\n- By default, `kubeadm` sets up your cluster to use and enforce use of\n  [RBAC](\/docs\/reference\/access-authn-authz\/rbac\/) (role based access\n  control).\n  Make sure that your Pod network plugin supports RBAC, and so do any manifests\n  that you use to deploy it.\n\n- If you want to use IPv6--either dual-stack, or 
single-stack IPv6 only\n  networking--for your cluster, make sure that your Pod network plugin\n  supports IPv6.\n  IPv6 support was added to CNI in [v0.6.0](https:\/\/github.com\/containernetworking\/cni\/releases\/tag\/v0.6.0).\n\n\n\n\nKubeadm should be CNI agnostic and the validation of CNI providers is out of the scope of our current e2e testing.\nIf you find an issue related to a CNI plugin you should log a ticket in its respective issue\ntracker instead of the kubeadm or kubernetes issue trackers.\n\n\nSeveral external projects provide Kubernetes Pod networks using CNI, some of which also\nsupport [Network Policy](\/docs\/concepts\/services-networking\/network-policies\/).\n\nSee a list of add-ons that implement the\n[Kubernetes networking model](\/docs\/concepts\/cluster-administration\/networking\/#how-to-implement-the-kubernetes-network-model).\n\nPlease refer to the [Installing Addons](\/docs\/concepts\/cluster-administration\/addons\/#networking-and-network-policy)\npage for a non-exhaustive list of networking addons supported by Kubernetes.\nYou can install a Pod network add-on with the following command on the\ncontrol-plane node or a node that has the kubeconfig credentials:\n\n```bash\nkubectl apply -f <add-on.yaml>\n```\n\n\nOnly a few CNI plugins support Windows. 
More details and setup instructions can be found\nin [Adding Windows worker nodes](\/docs\/tasks\/administer-cluster\/kubeadm\/adding-windows-nodes\/#network-config).\n\n\nYou can install only one Pod network per cluster.\n\nOnce a Pod network has been installed, you can confirm that it is working by\nchecking that the CoreDNS Pod is `Running` in the output of `kubectl get pods --all-namespaces`.\nOnce the CoreDNS Pod is up and running, you can continue by joining your nodes.\n\nIf your network is not working or CoreDNS is not in the `Running` state, check out the\n[troubleshooting guide](\/docs\/setup\/production-environment\/tools\/kubeadm\/troubleshooting-kubeadm\/)\nfor `kubeadm`.\n\n### Managed node labels\n\nBy default, kubeadm enables the [NodeRestriction](\/docs\/reference\/access-authn-authz\/admission-controllers\/#noderestriction)\nadmission controller that restricts what labels can be self-applied by kubelets on node registration.\nThe admission controller documentation covers what labels are permitted to be used with the kubelet `--node-labels` option.\nThe `node-role.kubernetes.io\/control-plane` label is such a restricted label and kubeadm manually applies it using\na privileged client after a node has been created. To apply it manually yourself, use `kubectl label`\nwith a privileged kubeconfig such as the kubeadm managed `\/etc\/kubernetes\/admin.conf`.\n\n### Control plane node isolation\n\nBy default, your cluster will not schedule Pods on the control plane nodes for security\nreasons. 
If you want to be able to schedule Pods on the control plane nodes,\nfor example for a single machine Kubernetes cluster, run:\n\n```bash\nkubectl taint nodes --all node-role.kubernetes.io\/control-plane-\n```\n\nThe output will look something like:\n\n```\nnode \"test-01\" untainted\n...\n```\n\nThis will remove the `node-role.kubernetes.io\/control-plane:NoSchedule` taint\nfrom any nodes that have it, including the control plane nodes, meaning that the\nscheduler will then be able to schedule Pods everywhere.\n\nAdditionally, you can execute the following command to remove the\n[`node.kubernetes.io\/exclude-from-external-load-balancers`](\/docs\/reference\/labels-annotations-taints\/#node-kubernetes-io-exclude-from-external-load-balancers) label\nfrom the control plane node, which excludes it from the list of backend servers:\n\n```bash\nkubectl label nodes --all node.kubernetes.io\/exclude-from-external-load-balancers-\n```\n\n### Adding more control plane nodes\n\nSee [Creating Highly Available Clusters with kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/high-availability\/)\nfor steps on creating a high availability kubeadm cluster by adding more control plane nodes.\n\n### Adding worker nodes {#join-nodes}\n\nThe worker nodes are where your workloads run.\n\nThe following pages show how to add Linux and Windows worker nodes to the cluster by using\nthe `kubeadm join` command:\n\n* [Adding Linux worker nodes](\/docs\/tasks\/administer-cluster\/kubeadm\/adding-linux-nodes\/)\n* [Adding Windows worker nodes](\/docs\/tasks\/administer-cluster\/kubeadm\/adding-windows-nodes\/)\n\n### (Optional) Controlling your cluster from machines other than the control-plane node\n\nIn order to get a kubectl on some other computer (e.g. 
laptop) to talk to your
cluster, you need to copy the administrator kubeconfig file from your control-plane node
to your workstation like this:

```bash
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
```

The example above assumes SSH access is enabled for root. If that is not the
case, you can copy the `admin.conf` file to be accessible by some other user
and `scp` using that other user instead.

The `admin.conf` file gives the user _superuser_ privileges over the cluster.
This file should be used sparingly. For normal users, it's recommended to
generate a unique credential to which you grant privileges. You can do
this with the `kubeadm kubeconfig user --client-name <CN>`
command. That command will print out a KubeConfig file to STDOUT, which you
should save to a file and distribute to your user. After that, grant
privileges by using `kubectl create (cluster)rolebinding`.

### (Optional) Proxying API Server to localhost

If you want to connect to the API Server from outside the cluster, you can use
`kubectl proxy`:

```bash
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy
```

You can now access the API Server locally at `http://localhost:8001/api/v1`

## Clean up {#tear-down}

If you used disposable servers for your cluster, for testing, you can
switch those off and do no further clean up.
You can use
`kubectl config delete-cluster` to delete your local references to the
cluster.

However, if you want to deprovision your cluster more cleanly, you should
first [drain the node](/docs/reference/generated/kubectl/kubectl-commands#drain)
and make sure that the node is empty, then deconfigure the node.

### Remove the node

Talking to the control-plane node with the appropriate credentials, run:

```bash
kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
```

Before removing the node, reset the state installed by `kubeadm`:

```bash
kubeadm reset
```

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually:

```bash
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```

If you want to reset the IPVS tables, you must run the following command:

```bash
ipvsadm -C
```

Now remove the node:

```bash
kubectl delete node <node name>
```

If you wish to start over, run `kubeadm init` or `kubeadm join` with the
appropriate arguments.

### Clean up the control plane

You can use `kubeadm reset` on the control plane host to trigger a best-effort
clean up.

See the [`kubeadm reset`](/docs/reference/setup-tools/kubeadm/kubeadm-reset/)
reference documentation for more information about this subcommand and its
options.

## Version skew policy {#version-skew-policy}

While kubeadm allows version skew against some components that it manages, it is recommended that you
match the kubeadm version with the versions of the control plane components, kube-proxy and kubelet.

### kubeadm's skew against the Kubernetes version

kubeadm can be used with Kubernetes components that are the same version as kubeadm
or one version older.
The Kubernetes version can be specified to kubeadm by using the
`--kubernetes-version` flag of `kubeadm init` or the
[`ClusterConfiguration.kubernetesVersion`](/docs/reference/config-api/kubeadm-config.v1beta4/)
field when using `--config`. This option controls the versions
of kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy.

Example:

* kubeadm is at 1.N
* `kubernetesVersion` must be at 1.N or 1.(N-1)

### kubeadm's skew against the kubelet

Similarly to the Kubernetes version, kubeadm can be used with a kubelet version that is
the same version as kubeadm or three versions older.

Example:

* kubeadm is at 1.N
* kubelet on the host must be at 1.N, 1.(N-1),
  1.(N-2) or 1.(N-3)

### kubeadm's skew against kubeadm

There are certain limitations on how kubeadm commands can operate on existing nodes or whole clusters
managed by kubeadm.

If new nodes are joined to the cluster, the kubeadm binary used for `kubeadm join` must match
the last version of kubeadm used to either create the cluster with `kubeadm init` or to upgrade
the same node with `kubeadm upgrade`.
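When comparing binaries across nodes, you can print the version of the kubeadm binary installed on a host (the version string shown in the comment is only illustrative):

```shell
# Print the semantic version of the local kubeadm binary,
# for example "v1.31.0"; run this on each node you are comparing.
kubeadm version -o short
```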
Similar rules apply to the rest of the kubeadm commands
with the exception of `kubeadm upgrade`.

Example for `kubeadm join`:

* kubeadm version 1.N was used to create a cluster with `kubeadm init`
* Joining nodes must use a kubeadm binary that is at version 1.N

Nodes that are being upgraded must use a version of kubeadm that is the same MINOR
version or one MINOR version newer than the version of kubeadm used for managing the
node.

Example for `kubeadm upgrade`:

* kubeadm version 1.N was used to create or upgrade the node
* The version of kubeadm used for upgrading the node must be at 1.N
  or 1.(N+1)

To learn more about the version skew between the different Kubernetes components, see
the [Version Skew Policy](/releases/version-skew-policy/).

## Limitations {#limitations}

### Cluster resilience {#resilience}

The cluster created here has a single control-plane node, with a single etcd database
running on it. This means that if the control-plane node fails, your cluster may lose
data and may need to be recreated from scratch.

Workarounds:

* Regularly [back up etcd](https://etcd.io/docs/v3.5/op-guide/recovery/). The
  etcd data directory configured by kubeadm is at `/var/lib/etcd` on the control-plane node.

* Use multiple control-plane nodes. You can read
  [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) to pick a cluster
  topology that provides [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/).

### Platform compatibility {#multi-platform}

kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x
following the [multi-platform proposal](https://git.k8s.io/design-proposals-archive/multi-platform.md).

Multiplatform container images for the control plane and addons have also been supported since v1.12.

Only some of the network providers offer solutions for all platforms.
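To see which platform each of your nodes actually runs, you can inspect the well-known OS and architecture labels that the kubelet sets on every node:

```shell
# Show the operating system and CPU architecture reported by each node
# as extra columns, using the standard kubernetes.io node labels.
kubectl get nodes -L kubernetes.io/os -L kubernetes.io/arch
```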
Please consult the list of
network providers above or the documentation from each provider to figure out whether the provider
supports your chosen platform.

## Troubleshooting {#troubleshooting}

If you are running into difficulties with kubeadm, please consult our
[troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).

<!-- discussion -->

## What's next

* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
* <a id="lifecycle" />See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
  for details about upgrading your cluster using `kubeadm`.
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/)
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/).
* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
  of Pod network add-ons.
* <a id="other-addons" />See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
  explore other add-ons, including tools for logging, monitoring, network policy, visualization &
  control of your Kubernetes cluster.
* Configure how your cluster handles logs for cluster events and from
  applications running in Pods.
  See [Logging Architecture](/docs/concepts/cluster-administration/logging/) for
  an overview of what is involved.

### Feedback {#feedback}

* For bugs, visit the [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
* For support, visit the
  [#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack channel
* General SIG Cluster Lifecycle development Slack channel:
  [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* SIG Cluster Lifecycle [SIG
  information](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
* SIG Cluster Lifecycle mailing list:
  [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
Pod network add ons     a id  other addons    See the  list of add ons   docs concepts cluster administration addons   to   explore other add ons  including tools for logging  monitoring  network policy  visualization  amp    control of your Kubernetes cluster    Configure how your cluster handles logs for cluster events and from   applications running in Pods    See  Logging Architecture   docs concepts cluster administration logging   for   an overview of what is involved       Feedback   feedback     For bugs  visit the  kubeadm GitHub issue tracker  https   github com kubernetes kubeadm issues    For support  visit the     kubeadm  https   kubernetes slack com messages kubeadm   Slack channel   General SIG Cluster Lifecycle development Slack channel      sig cluster lifecycle  https   kubernetes slack com messages sig cluster lifecycle     SIG Cluster Lifecycle  SIG information  https   github com kubernetes community tree master sig cluster lifecycle readme    SIG Cluster Lifecycle mailing list     kubernetes sig cluster lifecycle  https   groups google com forum   forum kubernetes sig cluster lifecycle "}
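The MINOR-version arithmetic behind the skew rules above can be sketched in shell. This is illustrative only: `minor_version` and `check_kubelet_skew` are hypothetical helpers, not kubeadm tooling, and real upgrades should follow the full Version Skew Policy.

```shell
# Hypothetical helper: extract the MINOR field from a version
# string such as "v1.32.2" (not part of kubeadm).
minor_version() {
  echo "$1" | sed 's/^v//' | cut -d. -f2
}

# Succeeds when the kubelet is the same MINOR version as kubeadm
# or at most three MINOR versions older, per the skew policy above.
check_kubelet_skew() {
  kubeadm_minor=$(minor_version "$1")
  kubelet_minor=$(minor_version "$2")
  diff=$((kubeadm_minor - kubelet_minor))
  [ "$diff" -ge 0 ] && [ "$diff" -le 3 ]
}

check_kubelet_skew v1.32.0 v1.29.3 && echo "within policy"
check_kubelet_skew v1.32.0 v1.28.0 || echo "too old"
```

The same comparison applies per node, so a fleet can be scanned by feeding each node's reported kubelet version through the check.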
{"questions":"kubernetes setup weight 100 feature state fork8sversion v1 23 state stable title Dual stack support with kubeadm contenttype task min kubernetes server version 1 21 overview","answers":"---\ntitle: Dual-stack support with kubeadm\ncontent_type: task\nweight: 100\nmin-kubernetes-server-version: 1.21\n---\n\n<!-- overview -->\n\n\n\nYour Kubernetes cluster includes [dual-stack](\/docs\/concepts\/services-networking\/dual-stack\/)\nnetworking, which means that cluster networking lets you use either address family.\nIn a cluster, the control plane can assign both an IPv4 address and an IPv6 address to a single\n or a .\n\n<!-- body -->\n\n## \n\nYou need to have installed the  tool,\nfollowing the steps from [Installing kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/).\n\nFor each server that you want to use as a ,\nmake sure it allows IPv6 forwarding. \n\n### Enable IPv6 packet forwarding {#prerequisite-ipv6-forwarding}\n\nTo check if IPv6 packet forwarding is enabled:\n\n```bash\nsysctl net.ipv6.conf.all.forwarding\n```\nIf the output is `net.ipv6.conf.all.forwarding = 1` it is already enabled. \nOtherwise it is not enabled yet.\n\nTo manually enable IPv6 packet forwarding:\n\n```bash\n# sysctl params required by setup, params persist across reboots\ncat <<EOF | sudo tee -a \/etc\/sysctl.d\/k8s.conf\nnet.ipv6.conf.all.forwarding = 1\nEOF\n\n# Apply sysctl params without reboot\nsudo sysctl --system\n```\n\n\nYou need to have an IPv4 and an IPv6 address range to use. Cluster operators typically\nuse private address ranges for IPv4. 
For IPv6, a cluster operator typically chooses a global\nunicast address block from within `2000::\/3`, using a range that is assigned to the operator.\nYou don't have to route the cluster's IP address ranges to the public internet.\n\nThe size of the IP address allocations should be suitable for the number of Pods and\nServices that you are planning to run.\n\n\nIf you are upgrading an existing cluster with the `kubeadm upgrade` command,\n`kubeadm` does not support making modifications to the pod IP address range\n(\u201ccluster CIDR\u201d) nor to the cluster's Service address range (\u201cService CIDR\u201d).\n\n\n### Create a dual-stack cluster\n\nTo create a dual-stack cluster with `kubeadm init` you can pass command line arguments\nsimilar to the following example:\n\n```shell\n# These address ranges are examples\nkubeadm init --pod-network-cidr=10.244.0.0\/16,2001:db8:42:0::\/56 --service-cidr=10.96.0.0\/16,2001:db8:42:1::\/112\n```\n\nTo make things clearer, here is an example kubeadm\n[configuration file](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/)\n`kubeadm-config.yaml` for the primary dual-stack control plane node.\n\n```yaml\n---\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: ClusterConfiguration\nnetworking:\n  podSubnet: 10.244.0.0\/16,2001:db8:42:0::\/56\n  serviceSubnet: 10.96.0.0\/16,2001:db8:42:1::\/112\n---\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: \"10.100.0.1\"\n  bindPort: 6443\nnodeRegistration:\n  kubeletExtraArgs:\n  - name: \"node-ip\"\n    value: \"10.100.0.2,fd00:1:2:3::2\"\n```\n\n`advertiseAddress` in InitConfiguration specifies the IP address that the API Server\nwill advertise it is listening on. 
The value of `advertiseAddress` equals the\n`--apiserver-advertise-address` flag of `kubeadm init`.\n\nRun kubeadm to initiate the dual-stack control plane node:\n\n```shell\nkubeadm init --config=kubeadm-config.yaml\n```\n\nThe kube-controller-manager flags `--node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6`\nare set with default values. See [configure IPv4\/IPv6 dual stack](\/docs\/concepts\/services-networking\/dual-stack#configure-ipv4-ipv6-dual-stack).\n\n\nThe `--apiserver-advertise-address` flag does not support dual-stack.\n\n\n### Join a node to dual-stack cluster\n\nBefore joining a node, make sure that the node has IPv6 routable network interface and allows IPv6 forwarding.\n\nHere is an example kubeadm [configuration file](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/)\n`kubeadm-config.yaml` for joining a worker node to the cluster.\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: JoinConfiguration\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 10.100.0.1:6443\n    token: \"clvldh.vjjwg16ucnhp94qr\"\n    caCertHashes:\n    - \"sha256:a4863cde706cfc580a439f842cc65d5ef112b7b2be31628513a9881cf0d9fe0e\"\n    # change auth info above to match the actual token and CA certificate hash for your cluster\nnodeRegistration:\n  kubeletExtraArgs:\n  - name: \"node-ip\"\n    value: \"10.100.0.2,fd00:1:2:3::3\"\n```\n\nAlso, here is an example kubeadm [configuration file](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/)\n`kubeadm-config.yaml` for joining another control plane node to the cluster.\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: JoinConfiguration\ncontrolPlane:\n  localAPIEndpoint:\n    advertiseAddress: \"10.100.0.2\"\n    bindPort: 6443\ndiscovery:\n  bootstrapToken:\n    apiServerEndpoint: 10.100.0.1:6443\n    token: \"clvldh.vjjwg16ucnhp94qr\"\n    caCertHashes:\n    - \"sha256:a4863cde706cfc580a439f842cc65d5ef112b7b2be31628513a9881cf0d9fe0e\"\n    # change auth info above to match the actual token and CA 
certificate hash for your cluster\nnodeRegistration:\n  kubeletExtraArgs:\n  - name: \"node-ip\"\n    value: \"10.100.0.2,fd00:1:2:3::4\"\n```\n\n`advertiseAddress` in JoinConfiguration.controlPlane specifies the IP address that the\nAPI Server will advertise it is listening on. The value of `advertiseAddress` equals\nthe `--apiserver-advertise-address` flag of `kubeadm join`.\n\n```shell\nkubeadm join --config=kubeadm-config.yaml\n```\n\n### Create a single-stack cluster\n\n\nDual-stack support doesn't mean that you need to use dual-stack addressing.\nYou can deploy a single-stack cluster that has the dual-stack networking feature enabled.\n\n\nTo make things more clear, here is an example kubeadm\n[configuration file](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/)\n`kubeadm-config.yaml` for the single-stack control plane node.\n\n```yaml\napiVersion: kubeadm.k8s.io\/v1beta4\nkind: ClusterConfiguration\nnetworking:\n  podSubnet: 10.244.0.0\/16\n  serviceSubnet: 10.96.0.0\/16\n```\n\n## \n\n* [Validate IPv4\/IPv6 dual-stack](\/docs\/tasks\/network\/validate-dual-stack) networking\n* Read about [Dual-stack](\/docs\/concepts\/services-networking\/dual-stack\/) cluster networking\n* Learn more about the kubeadm [configuration format](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/)","site":"kubernetes setup"}
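The dual-stack examples above always pass one CIDR per address family in `--pod-network-cidr` and `--service-cidr`. That shape can be sanity-checked with a small shell sketch; `is_dual_stack_cidr` is a hypothetical helper, not part of kubeadm.

```shell
# Illustrative helper (not part of kubeadm): verify that a comma-separated
# CIDR list contains exactly one IPv4 and one IPv6 range, the shape that
# dual-stack --pod-network-cidr / --service-cidr values take.
is_dual_stack_cidr() {
  v4=0; v6=0
  for cidr in $(echo "$1" | tr ',' ' '); do
    case "$cidr" in
      *:*) v6=$((v6 + 1)) ;;  # IPv6 CIDRs contain a colon
      *.*) v4=$((v4 + 1)) ;;  # IPv4 CIDRs are dotted quads
    esac
  done
  [ "$v4" -eq 1 ] && [ "$v6" -eq 1 ]
}

# These address ranges are the examples used above
is_dual_stack_cidr "10.244.0.0/16,2001:db8:42:0::/56" && echo "dual-stack"
is_dual_stack_cidr "10.244.0.0/16" || echo "single-stack"
```

Running such a check before `kubeadm init` catches a forgotten second family early, rather than after the control plane has come up single-stack.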
{"questions":"kubernetes setup weight 80 title Configuring each kubelet in your cluster using kubeadm contenttype concept sig cluster lifecycle reviewers overview","answers":"---\nreviewers:\n- sig-cluster-lifecycle\ntitle: Configuring each kubelet in your cluster using kubeadm\ncontent_type: concept\nweight: 80\n---\n\n<!-- overview -->\n\n\n\n\n\nThe lifecycle of the kubeadm CLI tool is decoupled from the\n[kubelet](\/docs\/reference\/command-line-tools-reference\/kubelet), which is a daemon that runs\non each node within the Kubernetes cluster. The kubeadm CLI tool is executed by the user when Kubernetes is\ninitialized or upgraded, whereas the kubelet is always running in the background.\n\nSince the kubelet is a daemon, it needs to be maintained by some kind of an init\nsystem or service manager. When the kubelet is installed using DEBs or RPMs,\nsystemd is configured to manage the kubelet. You can use a different service\nmanager instead, but you need to configure it manually.\n\nSome kubelet configuration details need to be the same across all kubelets involved in the cluster, while\nother configuration aspects need to be set on a per-kubelet basis to accommodate the different\ncharacteristics of a given machine (such as OS, storage, and networking). You can manage the configuration\nof your kubelets manually, but kubeadm now provides a `KubeletConfiguration` API type for\n[managing your kubelet configurations centrally](#configure-kubelets-using-kubeadm).\n\n<!-- body -->\n\n## Kubelet configuration patterns\n\nThe following sections describe patterns to kubelet configuration that are simplified by\nusing kubeadm, rather than managing the kubelet configuration for each Node manually.\n\n### Propagating cluster-level configuration to each kubelet\n\nYou can provide the kubelet with default values to be used by `kubeadm init` and `kubeadm join`\ncommands. 
Interesting examples include using a different container runtime or setting the default subnet\nused by services.\n\nIf you want your services to use the subnet `10.96.0.0\/12` as the default for services, you can pass\nthe `--service-cidr` parameter to kubeadm:\n\n```bash\nkubeadm init --service-cidr 10.96.0.0\/12\n```\n\nVirtual IPs for services are now allocated from this subnet. You also need to set the DNS address used\nby the kubelet, using the `--cluster-dns` flag. This setting needs to be the same for every kubelet\non every manager and Node in the cluster. The kubelet provides a versioned, structured API object\nthat can configure most parameters in the kubelet and push out this configuration to each running\nkubelet in the cluster. This object is called\n[`KubeletConfiguration`](\/docs\/reference\/config-api\/kubelet-config.v1beta1\/).\nThe `KubeletConfiguration` allows the user to specify flags such as the cluster DNS IP addresses expressed as\na list of values to a camelCased key, illustrated by the following example:\n\n```yaml\napiVersion: kubelet.config.k8s.io\/v1beta1\nkind: KubeletConfiguration\nclusterDNS:\n- 10.96.0.10\n```\n\nFor more details on the `KubeletConfiguration` have a look at [this section](#configure-kubelets-using-kubeadm).\n\n### Providing instance-specific configuration details\n\nSome hosts require specific kubelet configurations due to differences in hardware, operating system,\nnetworking, or other host-specific parameters. The following list provides a few examples.\n\n- The path to the DNS resolution file, as specified by the `--resolv-conf` kubelet\n  configuration flag, may differ among operating systems, or depending on whether you are using\n  `systemd-resolved`. If this path is wrong, DNS resolution will fail on the Node whose kubelet\n  is configured incorrectly.\n\n- The Node API object `.metadata.name` is set to the machine's hostname by default,\n  unless you are using a cloud provider. 
You can use the `--hostname-override` flag to override the\n  default behavior if you need to specify a Node name different from the machine's hostname.\n\n- Currently, the kubelet cannot automatically detect the cgroup driver used by the container runtime,\n  but the value of `--cgroup-driver` must match the cgroup driver used by the container runtime to ensure\n  the health of the kubelet.\n\n- To specify the container runtime you must set its endpoint with the\n`--container-runtime-endpoint=<path>` flag.\n\nThe recommended way of applying such instance-specific configuration is by using\n[`KubeletConfiguration` patches](\/docs\/setup\/production-environment\/tools\/kubeadm\/control-plane-flags#patches).\n\n## Configure kubelets using kubeadm\n\nIt is possible to configure the kubelet that kubeadm will start if a custom\n[`KubeletConfiguration`](\/docs\/reference\/config-api\/kubelet-config.v1beta1\/)\nAPI object is passed with a configuration file like so `kubeadm ... --config some-config-file.yaml`.\n\nBy calling `kubeadm config print init-defaults --component-configs KubeletConfiguration` you can\nsee all the default values for this structure.\n\nIt is also possible to apply instance-specific patches over the base `KubeletConfiguration`.\nHave a look at [Customizing the kubelet](\/docs\/setup\/production-environment\/tools\/kubeadm\/control-plane-flags#customizing-the-kubelet)\nfor more details.\n\n### Workflow when using `kubeadm init`\n\nWhen you call `kubeadm init`, the kubelet configuration is marshalled to disk\nat `\/var\/lib\/kubelet\/config.yaml`, and also uploaded to a `kubelet-config` ConfigMap in the `kube-system`\nnamespace of the cluster. A kubelet configuration file is also written to `\/etc\/kubernetes\/kubelet.conf`\nwith the baseline cluster-wide configuration for all kubelets in the cluster. This configuration file\npoints to the client certificates that allow the kubelet to communicate with the API server. 
This\naddresses the need to\n[propagate cluster-level configuration to each kubelet](#propagating-cluster-level-configuration-to-each-kubelet).\n\nTo address the second pattern of\n[providing instance-specific configuration details](#providing-instance-specific-configuration-details),\nkubeadm writes an environment file to `\/var\/lib\/kubelet\/kubeadm-flags.env`, which contains a list of\nflags to pass to the kubelet when it starts. The flags are presented in the file like this:\n\n```bash\nKUBELET_KUBEADM_ARGS=\"--flag1=value1 --flag2=value2 ...\"\n```\n\nIn addition to the flags used when starting the kubelet, the file also contains dynamic\nparameters such as the cgroup driver and whether to use a different container runtime socket\n(`--cri-socket`).\n\nAfter marshalling these two files to disk, kubeadm attempts to run the following two\ncommands, if you are using systemd:\n\n```bash\nsystemctl daemon-reload && systemctl restart kubelet\n```\n\nIf the reload and restart are successful, the normal `kubeadm init` workflow continues.\n\n### Workflow when using `kubeadm join`\n\nWhen you run `kubeadm join`, kubeadm uses the Bootstrap Token credential to perform\na TLS bootstrap, which fetches the credential needed to download the\n`kubelet-config` ConfigMap and writes it to `\/var\/lib\/kubelet\/config.yaml`. The dynamic\nenvironment file is generated in exactly the same way as `kubeadm init`.\n\nNext, `kubeadm` runs the following two commands to load the new configuration into the kubelet:\n\n```bash\nsystemctl daemon-reload && systemctl restart kubelet\n```\n\nAfter the kubelet loads the new configuration, kubeadm writes the\n`\/etc\/kubernetes\/bootstrap-kubelet.conf` KubeConfig file, which contains a CA certificate and Bootstrap\nToken. 
These are used by the kubelet to perform the TLS Bootstrap and obtain a unique\ncredential, which is stored in `\/etc\/kubernetes\/kubelet.conf`.\n\nWhen the `\/etc\/kubernetes\/kubelet.conf` file is written, the kubelet has finished performing the TLS Bootstrap.\nKubeadm deletes the `\/etc\/kubernetes\/bootstrap-kubelet.conf` file after completing the TLS Bootstrap.\n\n##  The kubelet drop-in file for systemd\n\n`kubeadm` ships with configuration for how systemd should run the kubelet.\nNote that the kubeadm CLI command never touches this drop-in file.\n\nThis configuration file installed by the `kubeadm`\n[package](https:\/\/github.com\/kubernetes\/release\/blob\/cd53840\/cmd\/krel\/templates\/latest\/kubeadm\/10-kubeadm.conf) is written to\n`\/usr\/lib\/systemd\/system\/kubelet.service.d\/10-kubeadm.conf` and is used by systemd.\nIt augments the basic\n[`kubelet.service`](https:\/\/github.com\/kubernetes\/release\/blob\/cd53840\/cmd\/krel\/templates\/latest\/kubelet\/kubelet.service).\n\nIf you want to override that further, you can make a directory `\/etc\/systemd\/system\/kubelet.service.d\/`\n(not `\/usr\/lib\/systemd\/system\/kubelet.service.d\/`) and put your own customizations into a file there.\nFor example, you might add a new local file `\/etc\/systemd\/system\/kubelet.service.d\/local-overrides.conf`\nto override the unit settings configured by `kubeadm`.\n\nHere is what you are likely to find in `\/usr\/lib\/systemd\/system\/kubelet.service.d\/10-kubeadm.conf`:\n\n\nThe contents below are just an example. 
If you don't want to use a package manager\nfollow the guide outlined in the ([Without a package manager](\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/#k8s-install-2))\nsection.\n\n\n```none\n[Service]\nEnvironment=\"KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=\/etc\/kubernetes\/bootstrap-kubelet.conf --kubeconfig=\/etc\/kubernetes\/kubelet.conf\"\nEnvironment=\"KUBELET_CONFIG_ARGS=--config=\/var\/lib\/kubelet\/config.yaml\"\n# This is a file that \"kubeadm init\" and \"kubeadm join\" generate at runtime, populating\n# the KUBELET_KUBEADM_ARGS variable dynamically\nEnvironmentFile=-\/var\/lib\/kubelet\/kubeadm-flags.env\n# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably,\n# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.\n# KUBELET_EXTRA_ARGS should be sourced from this file.\nEnvironmentFile=-\/etc\/default\/kubelet\nExecStart=\nExecStart=\/usr\/bin\/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS\n```\n\nThis file specifies the default locations for all of the files managed by kubeadm for the kubelet.\n\n- The KubeConfig file to use for the TLS Bootstrap is `\/etc\/kubernetes\/bootstrap-kubelet.conf`,\n  but it is only used if `\/etc\/kubernetes\/kubelet.conf` does not exist.\n- The KubeConfig file with the unique kubelet identity is `\/etc\/kubernetes\/kubelet.conf`.\n- The file containing the kubelet's ComponentConfig is `\/var\/lib\/kubelet\/config.yaml`.\n- The dynamic environment file that contains `KUBELET_KUBEADM_ARGS` is sourced from `\/var\/lib\/kubelet\/kubeadm-flags.env`.\n- The file that can contain user-specified flag overrides with `KUBELET_EXTRA_ARGS` is sourced from\n  `\/etc\/default\/kubelet` (for DEBs), or `\/etc\/sysconfig\/kubelet` (for RPMs). 
`KUBELET_EXTRA_ARGS`\n  is last in the flag chain and has the highest priority in the event of conflicting settings.\n\n## Kubernetes binaries and package contents\n\nThe DEB and RPM packages shipped with the Kubernetes releases are:\n\n| Package name | Description |\n|--------------|-------------|\n| `kubeadm`    | Installs the `\/usr\/bin\/kubeadm` CLI tool and the [kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. |\n| `kubelet`    | Installs the `\/usr\/bin\/kubelet` binary. |\n| `kubectl`    | Installs the `\/usr\/bin\/kubectl` binary. |\n| `cri-tools` | Installs the `\/usr\/bin\/crictl` binary from the [cri-tools git repository](https:\/\/github.com\/kubernetes-sigs\/cri-tools). |\n| `kubernetes-cni` | Installs the `\/opt\/cni\/bin` binaries from the [plugins git repository](https:\/\/github.com\/containernetworking\/plugins). |","site":"kubernetes setup","answers_cleaned":"    reviewers    sig cluster lifecycle title  Configuring each kubelet in your cluster using kubeadm content type  concept weight  80           overview          The lifecycle of the kubeadm CLI tool is decoupled from the  kubelet   docs reference command line tools reference kubelet   which is a daemon that runs on each node within the Kubernetes cluster  The kubeadm CLI tool is executed by the user when Kubernetes is initialized or upgraded  whereas the kubelet is always running in the background   Since the kubelet is a daemon  it needs to be maintained by some kind of an init system or service manager  When the kubelet is installed using DEBs or RPMs  systemd is configured to manage the kubelet  You can use a different service manager instead  but you need to configure it manually   Some kubelet configuration details need to be the same across all kubelets involved in the cluster  while other configuration aspects need to be set on a per kubelet basis to accommodate the different characteristics of a given machine  such as OS  storage  and networking   You 
can manage the configuration of your kubelets manually  but kubeadm now provides a  KubeletConfiguration  API type for  managing your kubelet configurations centrally   configure kubelets using kubeadm         body         Kubelet configuration patterns  The following sections describe patterns to kubelet configuration that are simplified by using kubeadm  rather than managing the kubelet configuration for each Node manually       Propagating cluster level configuration to each kubelet  You can provide the kubelet with default values to be used by  kubeadm init  and  kubeadm join  commands  Interesting examples include using a different container runtime or setting the default subnet used by services   If you want your services to use the subnet  10 96 0 0 12  as the default for services  you can pass the    service cidr  parameter to kubeadm      bash kubeadm init   service cidr 10 96 0 0 12      Virtual IPs for services are now allocated from this subnet  You also need to set the DNS address used by the kubelet  using the    cluster dns  flag  This setting needs to be the same for every kubelet on every manager and Node in the cluster  The kubelet provides a versioned  structured API object that can configure most parameters in the kubelet and push out this configuration to each running kubelet in the cluster  This object is called   KubeletConfiguration    docs reference config api kubelet config v1beta1    The  KubeletConfiguration  allows the user to specify flags such as the cluster DNS IP addresses expressed as a list of values to a camelCased key  illustrated by the following example      yaml apiVersion  kubelet config k8s io v1beta1 kind  KubeletConfiguration clusterDNS    10 96 0 10      For more details on the  KubeletConfiguration  have a look at  this section   configure kubelets using kubeadm        Providing instance specific configuration details  Some hosts require specific kubelet configurations due to differences in hardware  operating system  
networking  or other host specific parameters  The following list provides a few examples     The path to the DNS resolution file  as specified by the    resolv conf  kubelet   configuration flag  may differ among operating systems  or depending on whether you are using    systemd resolved   If this path is wrong  DNS resolution will fail on the Node whose kubelet   is configured incorrectly     The Node API object   metadata name  is set to the machine s hostname by default    unless you are using a cloud provider  You can use the    hostname override  flag to override the   default behavior if you need to specify a Node name different from the machine s hostname     Currently  the kubelet cannot automatically detect the cgroup driver used by the container runtime    but the value of    cgroup driver  must match the cgroup driver used by the container runtime to ensure   the health of the kubelet     To specify the container runtime you must set its endpoint with the    container runtime endpoint  path   flag   The recommended way of applying such instance specific configuration is by using   KubeletConfiguration  patches   docs setup production environment tools kubeadm control plane flags patches       Configure kubelets using kubeadm  It is possible to configure the kubelet that kubeadm will start if a custom   KubeletConfiguration    docs reference config api kubelet config v1beta1   API object is passed with a configuration file like so  kubeadm       config some config file yaml    By calling  kubeadm config print init defaults   component configs KubeletConfiguration  you can see all the default values for this structure   It is also possible to apply instance specific patches over the base  KubeletConfiguration   Have a look at  Customizing the kubelet   docs setup production environment tools kubeadm control plane flags customizing the kubelet  for more details       Workflow when using  kubeadm init   When you call  kubeadm init   the kubelet 
configuration is marshalled to disk at `/var/lib/kubelet/config.yaml`, and also uploaded to a `kubelet-config` ConfigMap in the `kube-system` namespace of the cluster. A kubelet configuration file is also written to `/etc/kubernetes/kubelet.conf` with the baseline cluster-wide configuration for all kubelets in the cluster. This configuration file points to the client certificates that allow the kubelet to communicate with the API server. This addresses the need to [propagate cluster-level configuration to each kubelet](#propagating-cluster-level-configuration-to-each-kubelet).

To address the second pattern of [providing instance-specific configuration details](#providing-instance-specific-configuration-details), kubeadm writes an environment file to `/var/lib/kubelet/kubeadm-flags.env`, which contains a list of flags to pass to the kubelet when it starts. The flags are presented in the file like this:

```bash
KUBELET_KUBEADM_ARGS="--flag1=value1 --flag2=value2 ..."
```

In addition to the flags used when starting the kubelet, the file also contains dynamic parameters such as the cgroup driver and whether to use a different container runtime socket (`--cri-socket`).

After marshalling these two files to disk, kubeadm attempts to run the following two commands, if you are using systemd:

```bash
systemctl daemon-reload && systemctl restart kubelet
```

If the reload and restart are successful, the normal `kubeadm init` workflow continues.

### Workflow when using `kubeadm join`

When you run `kubeadm join`, kubeadm uses the Bootstrap Token credential to perform a TLS bootstrap, which fetches the credential needed to download the `kubelet-config` ConfigMap and writes it to `/var/lib/kubelet/config.yaml`. The dynamic environment file is generated in exactly the same way as `kubeadm init`.

Next, `kubeadm` runs the following two commands to load the new configuration into the kubelet:

```bash
systemctl daemon-reload && systemctl restart kubelet
```

After the kubelet
loads the new configuration, kubeadm writes the `/etc/kubernetes/bootstrap-kubelet.conf` KubeConfig file, which contains a CA certificate and Bootstrap Token. These are used by the kubelet to perform the TLS Bootstrap and obtain a unique credential, which is stored in `/etc/kubernetes/kubelet.conf`.

When the `/etc/kubernetes/kubelet.conf` file is written, the kubelet has finished performing the TLS Bootstrap. Kubeadm deletes the `/etc/kubernetes/bootstrap-kubelet.conf` file after completing the TLS Bootstrap.

## The kubelet drop-in file for systemd

`kubeadm` ships with configuration for how systemd should run the kubelet. Note that the kubeadm CLI command never touches this drop-in file.

This configuration file installed by the `kubeadm` [package](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf) is written to `/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf` and is used by systemd. It augments the basic [`kubelet.service`](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubelet/kubelet.service).

If you want to override that further, you can make a directory `/etc/systemd/system/kubelet.service.d/` (not `/usr/lib/systemd/system/kubelet.service.d/`) and put your own customizations into a file there. For example, you might add a new local file `/etc/systemd/system/kubelet.service.d/local-overrides.conf` to override the unit settings configured by `kubeadm`.

Here is what you are likely to find in `/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf`:

The contents below are just an example. If you don't want to use a package manager, follow the guide outlined in the [Without a package manager](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#k8s-install-2) section.

```none
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating
# the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably,
# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.
# KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```

This file specifies the default locations for all of the files managed by kubeadm for the kubelet:

- The KubeConfig file to use for the TLS Bootstrap is `/etc/kubernetes/bootstrap-kubelet.conf`,
  but it is only used if `/etc/kubernetes/kubelet.conf` does not exist.
- The KubeConfig file with the unique kubelet identity is `/etc/kubernetes/kubelet.conf`.
- The file containing the kubelet's ComponentConfig is `/var/lib/kubelet/config.yaml`.
- The dynamic environment file that contains `KUBELET_KUBEADM_ARGS` is sourced from `/var/lib/kubelet/kubeadm-flags.env`.
- The file that can contain user-specified flag overrides with `KUBELET_EXTRA_ARGS` is sourced from
  `/etc/default/kubelet` (for DEBs), or `/etc/sysconfig/kubelet` (for RPMs). `KUBELET_EXTRA_ARGS`
  is last in the flag chain and has the highest priority in the event of conflicting settings.

## Kubernetes binaries and package contents

The DEB and RPM packages shipped with the Kubernetes releases are:

| Package name     | Description |
|------------------|-------------|
| `kubeadm`        | Installs the `/usr/bin/kubeadm` CLI tool and the [kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. |
| `kubelet`        | Installs the `/usr/bin/kubelet` binary. |
| `kubectl`        | Installs the `/usr/bin/kubectl` binary. |
| `cri-tools`      | Installs the `/usr/bin/crictl` binary from the [cri-tools git repository](https://github.com/kubernetes-sigs/cri-tools). |
| `kubernetes-cni` | Installs the `/opt/cni/bin` binaries from the [plugins git repository](https://github.com/containernetworking/plugins). |
"}
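The flag-chain precedence described above (where `KUBELET_EXTRA_ARGS` is expanded last in `ExecStart` and so wins on conflicting settings) can be sketched in plain shell. This is a simulation of the ordering only, not the real systemd expansion, and the `--max-pods` values are hypothetical examples, not values kubeadm would write:

```shell
# Sketch of the ExecStart flag ordering described above -- NOT the real kubelet.
# For a repeated scalar kubelet flag, the last occurrence on the command line wins,
# which is why KUBELET_EXTRA_ARGS (sourced from /etc/default/kubelet or
# /etc/sysconfig/kubelet) has the highest priority.
KUBELET_KUBEADM_ARGS="--max-pods=110"   # hypothetical value from /var/lib/kubelet/kubeadm-flags.env
KUBELET_EXTRA_ARGS="--max-pods=150"     # hypothetical user override from /etc/default/kubelet
ARGS="$KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS"

# The effective value is the last one in the flag chain:
effective=$(printf '%s\n' $ARGS | grep -- '--max-pods' | tail -n 1)
echo "$effective"   # --max-pods=150
```

In a real deployment the same effect is achieved by putting the override into `/etc/default/kubelet` (DEBs) or `/etc/sysconfig/kubelet` (RPMs), as listed in the file locations above.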
{"questions":"kubernetes setup title Creating Highly Available Clusters with kubeadm contenttype task sig cluster lifecycle reviewers overview weight 60","answers":"---\nreviewers:\n- sig-cluster-lifecycle\ntitle: Creating Highly Available Clusters with kubeadm\ncontent_type: task\nweight: 60\n---\n\n<!-- overview -->\n\nThis page explains two different approaches to setting up a highly available Kubernetes\ncluster using kubeadm:\n\n- With stacked control plane nodes. This approach requires less infrastructure. The etcd members\n  and control plane nodes are co-located.\n- With an external etcd cluster. This approach requires more infrastructure. The\n  control plane nodes and etcd members are separated.\n\nBefore proceeding, you should carefully consider which approach best meets the needs of your applications\nand environment. [Options for Highly Available topology](\/docs\/setup\/production-environment\/tools\/kubeadm\/ha-topology\/)\noutlines the advantages and disadvantages of each.\n\nIf you encounter issues with setting up the HA cluster, please report these\nin the kubeadm [issue tracker](https:\/\/github.com\/kubernetes\/kubeadm\/issues\/new).\n\nSee also the [upgrade documentation](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-upgrade\/).\n\n\nThis page does not address running your cluster on a cloud provider. In a cloud\nenvironment, neither approach documented here works with Service objects of type\nLoadBalancer, or with dynamic PersistentVolumes.\n\n\n## Prerequisites\n\nThe prerequisites depend on which topology you have selected for your cluster's\ncontrol plane:\n\n\n\n<!--\n    note to reviewers: these prerequisites should match the start of the\n    external etcd tab\n-->\n\nYou need:\n\n- Three or more machines that meet [kubeadm's minimum requirements](\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/#before-you-begin) for\n  the control-plane nodes. 
Having an odd number of control plane nodes can help\n  with leader selection in the case of machine or zone failure.\n  - including a container runtime, already set up and working\n- Three or more machines that meet [kubeadm's minimum\n  requirements](\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/#before-you-begin) for the workers\n  - including a container runtime, already set up and working\n- Full network connectivity between all machines in the cluster (public or\n  private network)\n- Superuser privileges on all machines using `sudo`\n  - You can use a different tool; this guide uses `sudo` in the examples.\n- SSH access from one device to all nodes in the system\n- `kubeadm` and `kubelet` already installed on all machines.\n\n_See [Stacked etcd topology](\/docs\/setup\/production-environment\/tools\/kubeadm\/ha-topology\/#stacked-etcd-topology) for context._\n\n\n\n<!--\n    note to reviewers: these prerequisites should match the start of the\n    stacked etcd tab\n-->\nYou need:\n\n- Three or more machines that meet [kubeadm's minimum requirements](\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/#before-you-begin) for\n  the control-plane nodes. 
Having an odd number of control plane nodes can help\n  with leader selection in the case of machine or zone failure.\n  - including a container runtime, already set up and working\n- Three or more machines that meet [kubeadm's minimum\n  requirements](\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/#before-you-begin) for the workers\n  - including a container runtime, already set up and working\n- Full network connectivity between all machines in the cluster (public or\n  private network)\n- Superuser privileges on all machines using `sudo`\n  - You can use a different tool; this guide uses `sudo` in the examples.\n- SSH access from one device to all nodes in the system\n- `kubeadm` and `kubelet` already installed on all machines.\n\n<!-- end of shared prerequisites -->\n\nAnd you also need:\n\n- Three or more additional machines, that will become etcd cluster members.\n  Having an odd number of members in the etcd cluster is a requirement for achieving\n  optimal voting quorum.\n  - These machines again need to have `kubeadm` and `kubelet` installed.\n  - These machines also require a container runtime, that is already set up and working.\n\n_See [External etcd topology](\/docs\/setup\/production-environment\/tools\/kubeadm\/ha-topology\/#external-etcd-topology) for context._\n\n\n\n### Container images\n\nEach host should have access to read and fetch images from the Kubernetes container image registry,\n`registry.k8s.io`. If you want to deploy a highly-available cluster where the hosts do not have\naccess to pull images, this is possible. You must ensure by some other means that the correct\ncontainer images are already available on the relevant hosts.\n\n### Command line interface {#kubectl}\n\nTo manage Kubernetes once your cluster is set up, you should\n[install kubectl](\/docs\/tasks\/tools\/#kubectl) on your PC. 
It is also useful\nto install the `kubectl` tool on each control plane node, as this can be\nhelpful for troubleshooting.\n\n<!-- steps -->\n\n## First steps for both methods\n\n### Create load balancer for kube-apiserver\n\n\nThere are many configurations for load balancers. The following example is only one\noption. Your cluster requirements may need a different configuration.\n\n\n1. Create a kube-apiserver load balancer with a name that resolves to DNS.\n\n   - In a cloud environment you should place your control plane nodes behind a TCP\n     forwarding load balancer. This load balancer distributes traffic to all\n     healthy control plane nodes in its target list. The health check for\n     an apiserver is a TCP check on the port the kube-apiserver listens on\n     (default value `:6443`).\n\n   - It is not recommended to use an IP address directly in a cloud environment.\n\n   - The load balancer must be able to communicate with all control plane nodes\n     on the apiserver port. It must also allow incoming traffic on its\n     listening port.\n\n   - Make sure the address of the load balancer always matches\n     the address of kubeadm's `ControlPlaneEndpoint`.\n\n   - Read the [Options for Software Load Balancing](https:\/\/git.k8s.io\/kubeadm\/docs\/ha-considerations.md#options-for-software-load-balancing)\n     guide for more details.\n\n1. Add the first control plane node to the load balancer, and test the\n   connection:\n\n   ```shell\n   nc -v <LOAD_BALANCER_IP> <PORT>\n   ```\n\n   A connection refused error is expected because the API server is not yet\n   running. A timeout, however, means the load balancer cannot communicate\n   with the control plane node. If a timeout occurs, reconfigure the load\n   balancer to communicate with the control plane node.\n\n1. Add the remaining control plane nodes to the load balancer target group.\n\n## Stacked control plane and etcd nodes\n\n### Steps for the first control plane node\n\n1. 
Initialize the control plane:\n\n   ```sh\n   sudo kubeadm init --control-plane-endpoint \"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT\" --upload-certs\n   ```\n\n   - You can use the `--kubernetes-version` flag to set the Kubernetes version to use.\n     It is recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.\n   - The `--control-plane-endpoint` flag should be set to the address or DNS and port of the load balancer.\n\n   - The `--upload-certs` flag is used to upload the certificates that should be shared\n     across all the control-plane instances to the cluster. If instead, you prefer to copy certs across\n     control-plane nodes manually or using automation tools, please remove this flag and refer to [Manual\n     certificate distribution](#manual-certs) section below.\n\n   \n   The `kubeadm init` flags `--config` and `--certificate-key` cannot be mixed, therefore if you want\n   to use the [kubeadm configuration](\/docs\/reference\/config-api\/kubeadm-config.v1beta4\/)\n   you must add the `certificateKey` field in the appropriate config locations\n   (under `InitConfiguration` and `JoinConfiguration: controlPlane`).\n   \n\n   \n   Some CNI network plugins require additional configuration, for example specifying the pod IP CIDR, while others do not.\n   See the [CNI network documentation](\/docs\/setup\/production-environment\/tools\/kubeadm\/create-cluster-kubeadm\/#pod-network).\n   To add a pod CIDR pass the flag `--pod-network-cidr`, or if you are using a kubeadm configuration file\n   set the `podSubnet` field under the `networking` object of `ClusterConfiguration`.\n   \n\n   The output looks similar to:\n\n   ```sh\n   ...\n   You can now join any number of control-plane node by running the following command on each as a root:\n       kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane 
--certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07\n\n   Please note that the certificate-key gives access to cluster sensitive data, keep it secret!\n   As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.\n\n   Then you can join any number of worker nodes by running the following on each as root:\n       kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866\n   ```\n\n   - Copy this output to a text file. You will need it later to join control plane and worker nodes to\n     the cluster.\n   - When `--upload-certs` is used with `kubeadm init`, the certificates of the primary control plane\n     are encrypted and uploaded in the `kubeadm-certs` Secret.\n   - To re-upload the certificates and generate a new decryption key, use the following command on a\n     control plane\n     node that is already joined to the cluster:\n\n     ```sh\n     sudo kubeadm init phase upload-certs --upload-certs\n     ```\n\n   - You can also specify a custom `--certificate-key` during `init` that can later be used by `join`.\n     To generate such a key you can use the following command:\n\n     ```sh\n     kubeadm certs certificate-key\n     ```\n\n   The certificate key is a hex encoded string that is an AES key of size 32 bytes.\n\n   \n   The `kubeadm-certs` Secret and the decryption key expire after two hours.\n   \n\n   \n   As stated in the command output, the certificate key gives access to cluster sensitive data, keep it secret!\n   \n\n1. Apply the CNI plugin of your choice:\n   [Follow these instructions](\/docs\/setup\/production-environment\/tools\/kubeadm\/create-cluster-kubeadm\/#pod-network)\n   to install the CNI provider. 
Make sure the configuration corresponds to the Pod CIDR specified in the\n   kubeadm configuration file (if applicable).\n\n   \n   You must pick a network plugin that suits your use case and deploy it before you move on to the next step.\n   If you don't do this, you will not be able to launch your cluster properly.\n   \n\n1. Type the following and watch the pods of the control plane components get started:\n\n   ```sh\n   kubectl get pod -n kube-system -w\n   ```\n\n### Steps for the rest of the control plane nodes\n\nFor each additional control plane node you should:\n\n1. Execute the join command that was previously given to you by the `kubeadm init` output on the first node.\n   It should look something like this:\n\n   ```sh\n   sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07\n   ```\n\n   - The `--control-plane` flag tells `kubeadm join` to create a new control plane.\n   - The `--certificate-key ...` will cause the control plane certificates to be downloaded\n     from the `kubeadm-certs` Secret in the cluster and be decrypted using the given key.\n\nYou can join multiple control-plane nodes in parallel.\n\n## External etcd nodes\n\nSetting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd\nwith the exception that you should set up etcd first, and you should pass the etcd information\nin the kubeadm config file.\n\n### Set up the etcd cluster\n\n1. Follow these [instructions](\/docs\/setup\/production-environment\/tools\/kubeadm\/setup-ha-etcd-with-kubeadm\/) to set up the etcd cluster.\n\n1. Set up SSH as described [here](#manual-certs).\n\n1. 
Copy the following files from any etcd node in the cluster to the first control plane node:\n\n   ```sh\n   export CONTROL_PLANE=\"ubuntu@10.0.0.7\"\n   scp \/etc\/kubernetes\/pki\/etcd\/ca.crt \"${CONTROL_PLANE}\":\n   scp \/etc\/kubernetes\/pki\/apiserver-etcd-client.crt \"${CONTROL_PLANE}\":\n   scp \/etc\/kubernetes\/pki\/apiserver-etcd-client.key \"${CONTROL_PLANE}\":\n   ```\n\n   - Replace the value of `CONTROL_PLANE` with the `user@host` of the first control-plane node.\n\n### Set up the first control plane node\n\n1. Create a file called `kubeadm-config.yaml` with the following contents:\n\n   ```yaml\n   ---\n   apiVersion: kubeadm.k8s.io\/v1beta4\n   kind: ClusterConfiguration\n   kubernetesVersion: stable\n   controlPlaneEndpoint: \"LOAD_BALANCER_DNS:LOAD_BALANCER_PORT\" # change this (see below)\n   etcd:\n     external:\n       endpoints:\n         - https:\/\/ETCD_0_IP:2379 # change ETCD_0_IP appropriately\n         - https:\/\/ETCD_1_IP:2379 # change ETCD_1_IP appropriately\n         - https:\/\/ETCD_2_IP:2379 # change ETCD_2_IP appropriately\n       caFile: \/etc\/kubernetes\/pki\/etcd\/ca.crt\n       certFile: \/etc\/kubernetes\/pki\/apiserver-etcd-client.crt\n       keyFile: \/etc\/kubernetes\/pki\/apiserver-etcd-client.key\n   ```\n\n   \n   The difference between stacked etcd and external etcd here is that the external etcd setup requires\n   a configuration file with the etcd endpoints under the `external` object for `etcd`.\n   In the case of the stacked etcd topology, this is managed automatically.\n   \n\n   - Replace the following variables in the config template with the appropriate values for your cluster:\n\n     - `LOAD_BALANCER_DNS`\n     - `LOAD_BALANCER_PORT`\n     - `ETCD_0_IP`\n     - `ETCD_1_IP`\n     - `ETCD_2_IP`\n\nThe following steps are similar to the stacked etcd setup:\n\n1. Run `sudo kubeadm init --config kubeadm-config.yaml --upload-certs` on this node.\n\n1. 
Write the output join commands that are returned to a text file for later use.\n\n1. Apply the CNI plugin of your choice.\n\n   \n   You must pick a network plugin that suits your use case and deploy it before you move on to the next step.\n   If you don't do this, you will not be able to launch your cluster properly.\n   \n\n### Steps for the rest of the control plane nodes\n\nThe steps are the same as for the stacked etcd setup:\n\n- Make sure the first control plane node is fully initialized.\n- Join each control plane node with the join command you saved to a text file. It's recommended\n  to join the control plane nodes one at a time.\n- Don't forget that the decryption key from `--certificate-key` expires after two hours, by default.\n\n## Common tasks after bootstrapping control plane\n\n### Install workers\n\nWorker nodes can be joined to the cluster with the command you stored previously\nas the output from the `kubeadm init` command:\n\n```sh\nsudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866\n```\n\n## Manual certificate distribution {#manual-certs}\n\nIf you choose not to use `kubeadm init` with the `--upload-certs` flag, this means that\nyou are going to have to manually copy the certificates from the primary control plane node to the\njoining control plane nodes.\n\nThere are many ways to do this. The following example uses `ssh` and `scp`:\n\nSSH is required if you want to control all nodes from a single machine.\n\n1. Enable ssh-agent on your main device that has access to all other nodes in\n   the system:\n\n   ```shell\n   eval $(ssh-agent)\n   ```\n\n1. Add your SSH identity to the session:\n\n   ```shell\n   ssh-add ~\/.ssh\/path_to_private_key\n   ```\n\n1. SSH between nodes to check that the connection is working correctly.\n\n   - When you SSH to any node, add the `-A` flag. 
This flag allows the node that you\n     have logged into via SSH to access the SSH agent on your PC. Consider alternative\n     methods if you do not fully trust the security of your user session on the node.\n\n     ```shell\n     ssh -A 10.0.0.7\n     ```\n\n   - When using sudo on any node, make sure to preserve the environment so SSH\n     forwarding works:\n\n     ```shell\n     sudo -E -s\n     ```\n\n1. After configuring SSH on all the nodes you should run the following script on the first\n   control plane node after running `kubeadm init`. This script will copy the certificates from\n   the first control plane node to the other control plane nodes:\n\n   In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the\n   other control plane nodes.\n\n   ```sh\n   USER=ubuntu # customizable\n   CONTROL_PLANE_IPS=\"10.0.0.7 10.0.0.8\"\n   for host in ${CONTROL_PLANE_IPS}; do\n       scp \/etc\/kubernetes\/pki\/ca.crt \"${USER}\"@$host:\n       scp \/etc\/kubernetes\/pki\/ca.key \"${USER}\"@$host:\n       scp \/etc\/kubernetes\/pki\/sa.key \"${USER}\"@$host:\n       scp \/etc\/kubernetes\/pki\/sa.pub \"${USER}\"@$host:\n       scp \/etc\/kubernetes\/pki\/front-proxy-ca.crt \"${USER}\"@$host:\n       scp \/etc\/kubernetes\/pki\/front-proxy-ca.key \"${USER}\"@$host:\n       scp \/etc\/kubernetes\/pki\/etcd\/ca.crt \"${USER}\"@$host:etcd-ca.crt\n       # Skip the next line if you are using external etcd\n       scp \/etc\/kubernetes\/pki\/etcd\/ca.key \"${USER}\"@$host:etcd-ca.key\n   done\n   ```\n\n   \n   Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates\n   with the required SANs for the joining control-plane instances. If you copy all the certificates by mistake,\n   the creation of additional nodes could fail due to a lack of required SANs.\n   \n\n1. 
Then on each joining control plane node you have to run the following script before running `kubeadm join`.\n   This script will move the previously copied certificates from the home directory to `\/etc\/kubernetes\/pki`:\n\n   ```sh\n   USER=ubuntu # customizable\n   mkdir -p \/etc\/kubernetes\/pki\/etcd\n   mv \/home\/${USER}\/ca.crt \/etc\/kubernetes\/pki\/\n   mv \/home\/${USER}\/ca.key \/etc\/kubernetes\/pki\/\n   mv \/home\/${USER}\/sa.pub \/etc\/kubernetes\/pki\/\n   mv \/home\/${USER}\/sa.key \/etc\/kubernetes\/pki\/\n   mv \/home\/${USER}\/front-proxy-ca.crt \/etc\/kubernetes\/pki\/\n   mv \/home\/${USER}\/front-proxy-ca.key \/etc\/kubernetes\/pki\/\n   mv \/home\/${USER}\/etcd-ca.crt \/etc\/kubernetes\/pki\/etcd\/ca.crt\n   # Skip the next line if you are using external etcd\n   mv \/home\/${USER}\/etcd-ca.key \/etc\/kubernetes\/pki\/etcd\/ca.key\n   ```","site":"kubernetes setup","answers_cleaned":"    reviewers    sig cluster lifecycle title  Creating Highly Available Clusters with kubeadm content type  task weight  60           overview      This page explains two different approaches to setting up a highly available Kubernetes cluster using kubeadm     With stacked control plane nodes  This approach requires less infrastructure  The etcd members   and control plane nodes are co located    With an external etcd cluster  This approach requires more infrastructure  The   control plane nodes and etcd members are separated   Before proceeding  you should carefully consider which approach best meets the needs of your applications and environment   Options for Highly Available topology   docs setup production environment tools kubeadm ha topology   outlines the advantages and disadvantages of each   If you encounter issues with setting up the HA cluster  please report these in the kubeadm  issue tracker  https   github com kubernetes kubeadm issues new    See also the  upgrade documentation   docs tasks administer cluster kubeadm kubeadm upgrade      This 
something like this         sh    sudo kubeadm join 192 168 0 200 6443   token 9vr73a a8uxyaju799qwdjv   discovery token ca cert hash sha256 7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866   control plane   certificate key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07              The    control plane  flag tells  kubeadm join  to create a new control plane       The    certificate key      will cause the control plane certificates to be downloaded      from the  kubeadm certs  Secret in the cluster and be decrypted using the given key   You can join multiple control plane nodes in parallel      External etcd nodes  Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd with the exception that you should setup etcd first  and you should pass the etcd information in the kubeadm config file       Set up the etcd cluster  1  Follow these  instructions   docs setup production environment tools kubeadm setup ha etcd with kubeadm   to set up the etcd cluster   1  Set up SSH as described  here   manual certs    1  Copy the following files from any etcd node in the cluster to the first control plane node         sh    export CONTROL PLANE  ubuntu 10 0 0 7     scp  etc kubernetes pki etcd ca crt    CONTROL PLANE       scp  etc kubernetes pki apiserver etcd client crt    CONTROL PLANE       scp  etc kubernetes pki apiserver etcd client key    CONTROL PLANE                 Replace the value of  CONTROL PLANE  with the  user host  of the first control plane node       Set up the first control plane node  1  Create a file called  kubeadm config yaml  with the following contents         yaml           apiVersion  kubeadm k8s io v1beta4    kind  ClusterConfiguration    kubernetesVersion  stable    controlPlaneEndpoint   LOAD BALANCER DNS LOAD BALANCER PORT    change this  see below     etcd       external         endpoints             https   ETCD 0 IP 2379   change ETCD 0 IP appropriately            
https   ETCD 1 IP 2379   change ETCD 1 IP appropriately            https   ETCD 2 IP 2379   change ETCD 2 IP appropriately        caFile   etc kubernetes pki etcd ca crt        certFile   etc kubernetes pki apiserver etcd client crt        keyFile   etc kubernetes pki apiserver etcd client key                The difference between stacked etcd and external etcd here is that the external etcd setup requires    a configuration file with the etcd endpoints under the  external  object for  etcd      In the case of the stacked etcd topology  this is managed automatically            Replace the following variables in the config template with the appropriate values for your cluster           LOAD BALANCER DNS          LOAD BALANCER PORT          ETCD 0 IP          ETCD 1 IP          ETCD 2 IP   The following steps are similar to the stacked etcd setup   1  Run  sudo kubeadm init   config kubeadm config yaml   upload certs  on this node   1  Write the output join commands that are returned to a text file for later use   1  Apply the CNI plugin of your choice          You must pick a network plugin that suits your use case and deploy it before you move on to next step     If you don t do this  you will not be able to launch your cluster properly           Steps for the rest of the control plane nodes  The steps are the same as for the stacked etcd setup     Make sure the first control plane node is fully initialized    Join each control plane node with the join command you saved to a text file  It s recommended   to join the control plane nodes one at a time    Don t forget that the decryption key from    certificate key  expires after two hours  by default      Common tasks after bootstrapping control plane      Install workers  Worker nodes can be joined to the cluster with the command you stored previously as the output from the  kubeadm init  command      sh sudo kubeadm join 192 168 0 200 6443   token 9vr73a a8uxyaju799qwdjv   discovery token ca cert hash sha256 
7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866         Manual certificate distribution   manual certs   If you choose to not use  kubeadm init  with the    upload certs  flag this means that you are going to have to manually copy the certificates from the primary control plane node to the joining control plane nodes   There are many ways to do this  The following example uses  ssh  and  scp    SSH is required if you want to control all nodes from a single machine   1  Enable ssh agent on your main device that has access to all other nodes in    the system         shell    eval   ssh agent          1  Add your SSH identity to the session         shell    ssh add    ssh path to private key         1  SSH between nodes to check that the connection is working correctly        When you SSH to any node  add the   A  flag  This flag allows the node that you      have logged into via SSH to access the SSH agent on your PC  Consider alternative      methods if you do not fully trust the security of your user session on the node           shell      ssh  A 10 0 0 7                When using sudo on any node  make sure to preserve the environment so SSH      forwarding works           shell      sudo  E  s           1  After configuring SSH on all the nodes you should run the following script on the first    control plane node after running  kubeadm init   This script will copy the certificates from    the first control plane node to the other control plane nodes      In the following example  replace  CONTROL PLANE IPS  with the IP addresses of the    other control plane nodes         sh    USER ubuntu   customizable    CONTROL PLANE IPS  10 0 0 7 10 0 0 8     for host in   CONTROL PLANE IPS   do        scp  etc kubernetes pki ca crt    USER    host         scp  etc kubernetes pki ca key    USER    host         scp  etc kubernetes pki sa key    USER    host         scp  etc kubernetes pki sa pub    USER    host         scp  etc kubernetes pki front proxy ca 
crt    USER    host         scp  etc kubernetes pki front proxy ca key    USER    host         scp  etc kubernetes pki etcd ca crt    USER    host etcd ca crt          Skip the next line if you are using external etcd        scp  etc kubernetes pki etcd ca key    USER    host etcd ca key    done                Copy only the certificates in the above list  kubeadm will take care of generating the rest of the certificates    with the required SANs for the joining control plane instances  If you copy all the certificates by mistake     the creation of additional nodes could fail due to a lack of required SANs       1  Then on each joining control plane node you have to run the following script before running  kubeadm join      This script will move the previously copied certificates from the home directory to   etc kubernetes pki          sh    USER ubuntu   customizable    mkdir  p  etc kubernetes pki etcd    mv  home   USER  ca crt  etc kubernetes pki     mv  home   USER  ca key  etc kubernetes pki     mv  home   USER  sa pub  etc kubernetes pki     mv  home   USER  sa key  etc kubernetes pki     mv  home   USER  front proxy ca crt  etc kubernetes pki     mv  home   USER  front proxy ca key  etc kubernetes pki     mv  home   USER  etcd ca crt  etc kubernetes pki etcd ca crt      Skip the next line if you are using external etcd    mv  home   USER  etcd ca key  etc kubernetes pki etcd ca key       "}
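If you lose the join command, the `sha256:<hash>` value passed to `--discovery-token-ca-cert-hash` can be recomputed from the cluster CA certificate: it is the SHA-256 digest of the DER-encoded public key of `/etc/kubernetes/pki/ca.crt`. A minimal sketch, using a throwaway self-signed certificate as a stand-in for the real CA file:

```shell
# Generate a throwaway certificate as a stand-in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Extract the public key, DER-encode it, and hash it (the form kubeadm expects)
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')

echo "sha256:${hash}"
```

Run against the real `ca.crt` on a control plane node, this reproduces the hash shown in the join commands above.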
{"questions":"kubernetes setup jlowdermilk weight 20 title Running in multiple zones justinsb contenttype concept quinton hoole reviewers","answers":"---\nreviewers:\n- jlowdermilk\n- justinsb\n- quinton-hoole\ntitle: Running in multiple zones\nweight: 20\ncontent_type: concept\n---\n\n<!-- overview -->\n\nThis page describes running Kubernetes across multiple zones.\n\n<!-- body -->\n\n## Background\n\nKubernetes is designed so that a single Kubernetes cluster can run\nacross multiple failure zones, typically where these zones fit within\na logical grouping called a _region_. Major cloud providers define a region\nas a set of failure zones (also called _availability zones_) that provide\na consistent set of features: within a region, each zone offers the same\nAPIs and services.\n\nTypical cloud architectures aim to minimize the chance that a failure in\none zone also impairs services in another zone.\n\n## Control plane behavior\n\nAll [control plane components](\/docs\/concepts\/architecture\/#control-plane-components)\nsupport running as a pool of interchangeable resources, replicated per\ncomponent.\n\nWhen you deploy a cluster control plane, place replicas of\ncontrol plane components across multiple failure zones. If availability is\nan important concern, select at least three failure zones and replicate\neach individual control plane component (API server, scheduler, etcd,\ncluster controller manager) across at least three failure zones.\nIf you are running a cloud controller manager then you should\nalso replicate this across all the failure zones you selected.\n\n\nKubernetes does not provide cross-zone resilience for the API server\nendpoints. 
You can use various techniques to improve availability for\nthe cluster API server, including DNS round-robin, SRV records, or\na third-party load balancing solution with health checking.\n\n## Node behavior\n\nKubernetes automatically spreads the Pods for\nworkload resources (such as Deployments\nor StatefulSets)\nacross different nodes in a cluster. This spreading helps\nreduce the impact of failures.\n\nWhen nodes start up, the kubelet on each node automatically adds labels\nto the Node object\nthat represents that specific kubelet in the Kubernetes API.\nThese labels can include\n[zone information](\/docs\/reference\/labels-annotations-taints\/#topologykubernetesiozone).\n\nIf your cluster spans multiple zones or regions, you can use node labels\nin conjunction with\n[Pod topology spread constraints](\/docs\/concepts\/scheduling-eviction\/topology-spread-constraints\/)\nto control how Pods are spread across your cluster among fault domains:\nregions, zones, and even specific nodes.\nThese hints enable the scheduler to place\nPods for better expected availability, reducing the risk that a correlated\nfailure affects your whole workload.\n\nFor example, you can set a constraint to make sure that the\n3 replicas of a StatefulSet are all running in different zones to each\nother, whenever that is feasible. 
You can define this declaratively\nwithout explicitly defining which availability zones are in use for\neach workload.\n\n### Distributing nodes across zones\n\nKubernetes' core does not create nodes for you; you need to do that yourself,\nor use a tool such as the [Cluster API](https:\/\/cluster-api.sigs.k8s.io\/) to\nmanage nodes on your behalf.\n\nUsing tools such as the Cluster API you can define sets of machines to run as\nworker nodes for your cluster across multiple failure domains, and rules to\nautomatically heal the cluster in case of whole-zone service disruption.\n\n## Manual zone assignment for Pods\n\nYou can apply [node selector constraints](\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#nodeselector)\nto Pods that you create, as well as to Pod templates in workload resources\nsuch as Deployment, StatefulSet, or Job.\n\n## Storage access for zones\n\nWhen persistent volumes are created, Kubernetes automatically adds zone labels\nto any PersistentVolumes that are linked to a specific zone.\nThe scheduler then ensures,\nthrough its `NoVolumeZoneConflict` predicate, that pods which claim a given PersistentVolume\nare only placed into the same zone as that volume.\n\nPlease note that the method of adding zone labels can depend on your\ncloud provider and the storage provisioner you're using. Always refer to the specific\ndocumentation for your environment to ensure correct configuration.\n\nYou can specify a StorageClass\nfor PersistentVolumeClaims that specifies the failure domains (zones) that the\nstorage in that class may use.\nTo learn about configuring a StorageClass that is aware of failure domains or zones,\nsee [Allowed topologies](\/docs\/concepts\/storage\/storage-classes\/#allowed-topologies).\n\n## Networking\n\nBy itself, Kubernetes does not include zone-aware networking. 
You can use a\n[network plugin](\/docs\/concepts\/extend-kubernetes\/compute-storage-net\/network-plugins\/)\nto configure cluster networking, and that network solution might have zone-specific\nelements. For example, if your cloud provider supports Services with\n`type=LoadBalancer`, the load balancer might only send traffic to Pods running in the\nsame zone as the load balancer element processing a given connection.\nCheck your cloud provider's documentation for details.\n\nFor custom or on-premises deployments, similar considerations apply.\nService and Ingress behavior, including handling\nof different failure zones, does vary depending on exactly how your cluster is set up.\n\n## Fault recovery\n\nWhen you set up your cluster, you might also need to consider whether and how\nyour setup can restore service if all the failure zones in a region go\noffline at the same time. For example, do you rely on there being at least\none node able to run Pods in a zone?\nMake sure that any cluster-critical repair work does not rely\non there being at least one healthy node in your cluster. 
For example: if all nodes\nare unhealthy, you might need to run a repair Job with a special\ntoleration so that the repair\ncan complete enough to bring at least one node into service.\n\nKubernetes doesn't come with an answer for this challenge; however, it's\nsomething to consider.\n\n## What's next\n\nTo learn how the scheduler places Pods in a cluster, honoring the configured constraints,\nvisit [Scheduling and Eviction](\/docs\/concepts\/scheduling-eviction\/).","site":"kubernetes setup"}
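The zone-spreading behavior described in the record above can also be requested explicitly. A minimal sketch of a Pod topology spread constraint that keeps three replicas in different zones when feasible (the `my-app` name and `pause` image are illustrative, not from the original page):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                               # zones may differ by at most one replica
        topologyKey: topology.kubernetes.io/zone # the well-known zone label set by the kubelet
        whenUnsatisfiable: DoNotSchedule         # use ScheduleAnyway for a soft constraint
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9         # placeholder workload
```

With `whenUnsatisfiable: DoNotSchedule`, a replica stays Pending rather than landing in a zone that would exceed the allowed skew.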
{"questions":"kubernetes setup title Considerations for large clusters davidopp weight 10 or virtual machines running Kubernetes agents managed by the reviewers A cluster is a set of glossarytooltip text nodes termid node physical lavalamp","answers":"---\nreviewers:\n- davidopp\n- lavalamp\ntitle: Considerations for large clusters\nweight: 10\n---\n\nA cluster is a set of nodes (physical\nor virtual machines) running Kubernetes agents, managed by the\ncontrol plane.\nKubernetes supports clusters with up to 5,000 nodes. More specifically,\nKubernetes is designed to accommodate configurations that meet *all* of the following criteria:\n\n* No more than 110 pods per node\n* No more than 5,000 nodes\n* No more than 150,000 total pods\n* No more than 300,000 total containers\n\nYou can scale your cluster by adding or removing nodes. The way you do this depends\non how your cluster is deployed.\n\n## Cloud provider resource quotas {#quota-issues}\n\nTo avoid running into cloud provider quota issues, when creating a cluster with many nodes,\nconsider:\n* Requesting a quota increase for cloud resources such as:\n    * Compute instances\n    * CPUs\n    * Storage volumes\n    * In-use IP addresses\n    * Packet filtering rule sets\n    * Number of load balancers\n    * Network subnets\n    * Log streams\n* Gating the cluster scaling actions to bring up new nodes in batches, with a pause\n  between batches, because some cloud providers rate limit the creation of new instances.\n\n## Control plane components\n\nFor a large cluster, you need a control plane with sufficient compute and other\nresources.\n\nTypically you would run one or two control plane instances per failure zone,\nscaling those instances vertically first and then scaling horizontally after reaching\nthe point of diminishing returns for (vertical) scaling.\n\nYou should run at least one instance per failure zone to provide fault-tolerance. 
Kubernetes\nnodes do not automatically steer traffic towards control-plane endpoints that are in the\nsame failure zone; however, your cloud provider might have its own mechanisms to do this.\n\nFor example, using a managed load balancer, you configure the load balancer to send traffic\nthat originates from the kubelet and Pods in failure zone _A_, and direct that traffic only\nto the control plane hosts that are also in zone _A_. If a single control-plane host or\nendpoint in failure zone _A_ goes offline, that means that all the control-plane traffic for\nnodes in zone _A_ is now being sent between zones. Running multiple control plane hosts in\neach zone makes that outcome less likely.\n\n### etcd storage\n\nTo improve performance of large clusters, you can store Event objects in a separate\ndedicated etcd instance.\n\nWhen creating a cluster, you can (using custom tooling):\n\n* start and configure an additional etcd instance\n* configure the API server to use it for storing events\n\nSee [Operating etcd clusters for Kubernetes](\/docs\/tasks\/administer-cluster\/configure-upgrade-etcd\/) and\n[Set up a High Availability etcd cluster with kubeadm](\/docs\/setup\/production-environment\/tools\/kubeadm\/setup-ha-etcd-with-kubeadm\/)\nfor details on configuring and managing etcd for a large cluster.\n\n## Addon resources\n\nKubernetes [resource limits](\/docs\/concepts\/configuration\/manage-resources-containers\/)\nhelp to minimize the impact of memory leaks and other ways that pods and containers can\nimpact other components. 
These resource limits apply to\naddon resources just as they apply to application workloads.\n\nFor example, you can set CPU and memory limits for a logging component:\n\n```yaml\n  ...\n  containers:\n  - name: fluentd-cloud-logging\n    image: fluent\/fluentd-kubernetes-daemonset:v1\n    resources:\n      limits:\n        cpu: 100m\n        memory: 200Mi\n```\n\nAddons' default limits are typically based on data collected from experience running\neach addon on small or medium Kubernetes clusters. When running on large\nclusters, addons often consume more of some resources than their default limits.\nIf a large cluster is deployed without adjusting these values, the addon(s)\nmay continuously get killed because they keep hitting the memory limit.\nAlternatively, the addon may run but with poor performance due to CPU time\nslice restrictions.\n\nTo avoid running into cluster addon resource issues, when creating a cluster with\nmany nodes, consider the following:\n\n* Some addons scale vertically - there is one replica of the addon for the cluster\n  or serving a whole failure zone. For these addons, increase requests and limits\n  as you scale out your cluster.\n* Many addons scale horizontally - you add capacity by running more pods - but with\n  a very large cluster you may also need to raise CPU or memory limits slightly.\n  The [Vertical Pod Autoscaler](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/vertical-pod-autoscaler#readme) can run in _recommender_ mode to provide suggested\n  figures for requests and limits.\n* Some addons run as one copy per node, controlled by a DaemonSet: for example, a node-level log aggregator. Similar to\n  the case with horizontally-scaled addons, you may also need to raise CPU or memory\n  limits slightly.\n\n## What's next\n\n* `VerticalPodAutoscaler` is a custom resource that you can deploy into your cluster\nto help you manage resource requests and limits for pods.
\nLearn more about [Vertical Pod Autoscaler](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/vertical-pod-autoscaler#readme)\nand how you can use it to scale cluster\ncomponents, including cluster-critical addons.\n\n* Read about [cluster autoscaling](\/docs\/concepts\/cluster-administration\/cluster-autoscaling\/)\n\n* The [addon resizer](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/addon-resizer#readme)\nhelps you in resizing the addons automatically as your cluster's scale changes.","site":"kubernetes setup"}
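The large-cluster record above states fixed design ceilings for Kubernetes (no more than 110 pods per node, 5,000 nodes, 150,000 total pods, 300,000 total containers). As a minimal sketch, not taken from the docs, the planned figures (`nodes=1000`, `pods_per_node=100`) are invented for illustration:

```shell
# Sketch: sanity-check a planned cluster against the documented Kubernetes limits.
# The thresholds (110 / 5000 / 150000) come from the docs; the plan is made up.
nodes=1000
pods_per_node=100
total_pods=$((nodes * pods_per_node))

[ "$pods_per_node" -le 110 ]  && echo "pods per node: ok"
[ "$nodes" -le 5000 ]         && echo "node count: ok"
[ "$total_pods" -le 150000 ]  && echo "total pods: ok"
```

The same arithmetic applies when batching node creation to stay under cloud-provider rate limits, as the record suggests.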
{"questions":"kubernetes setup title PKI certificates and requirements contenttype concept sig cluster lifecycle reviewers overview weight 50","answers":"---\ntitle: PKI certificates and requirements\nreviewers:\n- sig-cluster-lifecycle\ncontent_type: concept\nweight: 50\n---\n\n<!-- overview -->\n\nKubernetes requires PKI certificates for authentication over TLS.\nIf you install Kubernetes with [kubeadm](\/docs\/reference\/setup-tools\/kubeadm\/), the certificates\nthat your cluster requires are automatically generated.\nYou can also generate your own certificates -- for example, to keep your private keys more secure\nby not storing them on the API server.\nThis page explains the certificates that your cluster requires.\n\n<!-- body -->\n\n## How certificates are used by your cluster\n\nKubernetes requires PKI for the following operations:\n\n### Server certificates\n\n* Server certificate for the API server endpoint\n* Server certificate for the etcd server\n* [Server certificates](\/docs\/reference\/access-authn-authz\/kubelet-tls-bootstrapping\/#client-and-serving-certificates)\n  for each kubelet (every node runs a kubelet)\n* Optional server certificate for the [front-proxy](\/docs\/tasks\/extend-kubernetes\/configure-aggregation-layer\/)\n\n### Client certificates\n\n* Client certificates for each kubelet, used to authenticate to the API server as a client of\n  the Kubernetes API\n* Client certificate for each API server, used to authenticate to etcd\n* Client certificate for the controller manager to securely communicate with the API server\n* Client certificate for the scheduler to securely communicate with the API server\n* Client certificates, one for each node, for kube-proxy to authenticate to the API server\n* Optional client certificates for administrators of the cluster to authenticate to the API server\n* Optional client certificate for the [front-proxy](\/docs\/tasks\/extend-kubernetes\/configure-aggregation-layer\/)\n\n### Kubelet's server and 
client certificates\n\nTo establish a secure connection and authenticate itself to the kubelet, the API Server\nrequires a client certificate and key pair.\n\nIn this scenario, there are two approaches for certificate usage:\n\n* Shared Certificates: The kube-apiserver can utilize the same certificate and key pair it uses\n  to authenticate its clients. This means that the existing certificates, such as `apiserver.crt`\n  and `apiserver.key`, can be used for communicating with the kubelet servers.\n\n* Separate Certificates: Alternatively, the kube-apiserver can generate a new client certificate\n  and key pair to authenticate its communication with the kubelet servers. In this case,\n  a distinct certificate named `kubelet-client.crt` and its corresponding private key,\n  `kubelet-client.key` are created.\n\n\n`front-proxy` certificates are required only if you run kube-proxy to support\n[an extension API server](\/docs\/tasks\/extend-kubernetes\/setup-extension-api-server\/).\n\n\netcd also implements mutual TLS to authenticate clients and peers.\n\n## Where certificates are stored\n\nIf you install Kubernetes with kubeadm, most certificates are stored in `\/etc\/kubernetes\/pki`.\nAll paths in this documentation are relative to that directory, with the exception of user account\ncertificates which kubeadm places in `\/etc\/kubernetes`.\n\n## Configure certificates manually\n\nIf you don't want kubeadm to generate the required certificates, you can create them using a\nsingle root CA or by providing all certificates. See [Certificates](\/docs\/tasks\/administer-cluster\/certificates\/)\nfor details on creating your own certificate authority. See\n[Certificate Management with kubeadm](\/docs\/tasks\/administer-cluster\/kubeadm\/kubeadm-certs\/)\nfor more on managing certificates.\n\n### Single root CA\n\nYou can create a single root CA, controlled by an administrator. 
This root CA can then create\nmultiple intermediate CAs, and delegate all further creation to Kubernetes itself.\n\nRequired CAs:\n\n| Path                   | Default CN                | Description                      |\n|------------------------|---------------------------|----------------------------------|\n| ca.crt,key             | kubernetes-ca             | Kubernetes general CA            |\n| etcd\/ca.crt,key        | etcd-ca                   | For all etcd-related functions   |\n| front-proxy-ca.crt,key | kubernetes-front-proxy-ca | For the [front-end proxy](\/docs\/tasks\/extend-kubernetes\/configure-aggregation-layer\/) |\n\nOn top of the above CAs, it is also necessary to get a public\/private key pair for service account\nmanagement, `sa.key` and `sa.pub`.\nThe following example illustrates the CA key and certificate files shown in the previous table:\n\n```\n\/etc\/kubernetes\/pki\/ca.crt\n\/etc\/kubernetes\/pki\/ca.key\n\/etc\/kubernetes\/pki\/etcd\/ca.crt\n\/etc\/kubernetes\/pki\/etcd\/ca.key\n\/etc\/kubernetes\/pki\/front-proxy-ca.crt\n\/etc\/kubernetes\/pki\/front-proxy-ca.key\n```\n\n### All certificates\n\nIf you don't wish to copy the CA private keys to your cluster, you can generate all certificates yourself.\n\nRequired certificates:\n\n| Default CN                    | Parent CA                 | O (in Subject) | kind             | hosts (SAN)                                         |\n|-------------------------------|---------------------------|----------------|------------------|-----------------------------------------------------|\n| kube-etcd                     | etcd-ca                   |                | server, client   | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |\n| kube-etcd-peer                | etcd-ca                   |                | server, client   | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |\n| kube-etcd-healthcheck-client  | etcd-ca                   |                | client           |      
                                               |\n| kube-apiserver-etcd-client    | etcd-ca                   |                | client           |                                                     |\n| kube-apiserver                | kubernetes-ca             |                | server           | `<hostname>`, `<Host_IP>`, `<advertise_IP>`[^1]     |\n| kube-apiserver-kubelet-client | kubernetes-ca             | system:masters | client           |                                                     |\n| front-proxy-client            | kubernetes-front-proxy-ca |                | client           |                                                     |\n\n\nInstead of using the super-user group `system:masters` for `kube-apiserver-kubelet-client`\na less privileged group can be used. kubeadm uses the `kubeadm:cluster-admins` group for\nthat purpose.\n\n\n[^1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](\/docs\/reference\/setup-tools\/kubeadm\/)\nthe load balancer stable IP and\/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,\n`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`)\n\nwhere `kind` maps to one or more of the x509 key usage, which is also documented in the\n`.spec.usages` of a [CertificateSigningRequest](\/docs\/reference\/kubernetes-api\/authentication-resources\/certificate-signing-request-v1#CertificateSigningRequest)\ntype:\n\n| kind   | Key usage                                                                       |\n|--------|---------------------------------------------------------------------------------|\n| server | digital signature, key encipherment, server auth                                |\n| client | digital signature, key encipherment, client auth                                |\n\n\nHosts\/SAN listed above are the recommended ones for getting a working cluster; if required by a\nspecific setup, it is possible to add additional SANs on all the server 
certificates.\n\n\n\nFor kubeadm users only:\n\n* The scenario where you are copying to your cluster CA certificates without private keys is\n  referred to as external CA in the kubeadm documentation.\n* If you are comparing the above list with a kubeadm-generated PKI, please be aware that\n  `kube-etcd`, `kube-etcd-peer` and `kube-etcd-healthcheck-client` certificates are not generated\n  in case of external etcd.\n\n\n\n### Certificate paths\n\nCertificates should be placed in a recommended path (as used by [kubeadm](\/docs\/reference\/setup-tools\/kubeadm\/)).\nPaths should be specified using the given argument regardless of location.\n\n| Default CN | recommended key path | recommended cert path | command | key argument | cert argument |\n| ---------- | -------------------- | --------------------- | ------- | ------------ | ------------- |\n| etcd-ca | etcd\/ca.key | etcd\/ca.crt | kube-apiserver | | --etcd-cafile |\n| kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |\n| kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file |\n| kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file,--root-ca-file,--cluster-signing-cert-file |\n| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file |\n| kube-apiserver-kubelet-client | apiserver-kubelet-client.key | apiserver-kubelet-client.crt | kube-apiserver | --kubelet-client-key | --kubelet-client-certificate |\n| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |\n| front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file |\n| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |\n| etcd-ca | etcd\/ca.key | etcd\/ca.crt | etcd | | 
--trusted-ca-file,--peer-trusted-ca-file |\n| kube-etcd | etcd\/server.key | etcd\/server.crt | etcd | --key-file | --cert-file |\n| kube-etcd-peer | etcd\/peer.key | etcd\/peer.crt | etcd | --peer-key-file | --peer-cert-file |\n| etcd-ca | | etcd\/ca.crt | etcdctl | | --cacert |\n| kube-etcd-healthcheck-client | etcd\/healthcheck-client.key | etcd\/healthcheck-client.crt | etcdctl | --key | --cert |\n\nSame considerations apply for the service account key pair:\n\n| private key path  | public key path  | command                 | argument                             |\n|-------------------|------------------|-------------------------|--------------------------------------|\n|  sa.key           |                  | kube-controller-manager | --service-account-private-key-file   |\n|                   | sa.pub           | kube-apiserver          | --service-account-key-file           |\n\nThe following example illustrates the file paths [from the previous tables](#certificate-paths)\nyou need to provide if you are generating all of your own keys and 
certificates:\n\n```\n\/etc\/kubernetes\/pki\/etcd\/ca.key\n\/etc\/kubernetes\/pki\/etcd\/ca.crt\n\/etc\/kubernetes\/pki\/apiserver-etcd-client.key\n\/etc\/kubernetes\/pki\/apiserver-etcd-client.crt\n\/etc\/kubernetes\/pki\/ca.key\n\/etc\/kubernetes\/pki\/ca.crt\n\/etc\/kubernetes\/pki\/apiserver.key\n\/etc\/kubernetes\/pki\/apiserver.crt\n\/etc\/kubernetes\/pki\/apiserver-kubelet-client.key\n\/etc\/kubernetes\/pki\/apiserver-kubelet-client.crt\n\/etc\/kubernetes\/pki\/front-proxy-ca.key\n\/etc\/kubernetes\/pki\/front-proxy-ca.crt\n\/etc\/kubernetes\/pki\/front-proxy-client.key\n\/etc\/kubernetes\/pki\/front-proxy-client.crt\n\/etc\/kubernetes\/pki\/etcd\/server.key\n\/etc\/kubernetes\/pki\/etcd\/server.crt\n\/etc\/kubernetes\/pki\/etcd\/peer.key\n\/etc\/kubernetes\/pki\/etcd\/peer.crt\n\/etc\/kubernetes\/pki\/etcd\/healthcheck-client.key\n\/etc\/kubernetes\/pki\/etcd\/healthcheck-client.crt\n\/etc\/kubernetes\/pki\/sa.key\n\/etc\/kubernetes\/pki\/sa.pub\n```\n\n## Configure certificates for user accounts\n\nYou must manually configure these administrator accounts and service accounts:\n\n| Filename                | Credential name            | Default CN                          | O (in Subject)         |\n|-------------------------|----------------------------|-------------------------------------|------------------------|\n| admin.conf              | default-admin              | kubernetes-admin                    | `<admin-group>`        |\n| super-admin.conf        | default-super-admin        | kubernetes-super-admin              | system:masters         |\n| kubelet.conf            | default-auth               | system:node:`<nodeName>` (see note) | system:nodes           |\n| controller-manager.conf | default-controller-manager | system:kube-controller-manager      |                        |\n| scheduler.conf          | default-scheduler          | system:kube-scheduler               |                        |\n\n\nThe value of `<nodeName>` for 
`kubelet.conf` **must** match precisely the value of the node name\nprovided by the kubelet as it registers with the apiserver. For further details, read the\n[Node Authorization](\/docs\/reference\/access-authn-authz\/node\/).\n\n\n\nIn the above example `<admin-group>` is implementation specific. Some tools sign the\ncertificate in the default `admin.conf` to be part of the `system:masters` group.\n`system:masters` is a break-glass super-user group that can bypass the authorization\nlayer of Kubernetes, such as RBAC. Also, some tools do not generate a separate\n`super-admin.conf` with a certificate bound to this super-user group.\n\nkubeadm generates two separate administrator certificates in kubeconfig files.\nOne is in `admin.conf` and has `Subject: O = kubeadm:cluster-admins, CN = kubernetes-admin`.\n`kubeadm:cluster-admins` is a custom group bound to the `cluster-admin` ClusterRole.\nThis file is generated on all kubeadm-managed control plane machines.\n\nAnother is in `super-admin.conf`, which has `Subject: O = system:masters, CN = kubernetes-super-admin`.\nThis file is generated only on the node where `kubeadm init` was called.\n\n\n1. For each configuration, generate an x509 certificate\/key pair with the\n   given Common Name (CN) and Organization (O).\n\n1. 
Run `kubectl` as follows for each configuration:\n\n   ```\n   KUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https:\/\/<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs\n   KUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs\n   KUBECONFIG=<filename> kubectl config set-context default-system --cluster default-cluster --user <credential-name>\n   KUBECONFIG=<filename> kubectl config use-context default-system\n   ```\n\nThese files are used as follows:\n\n| Filename                | Command                 | Comment                                                               |\n|-------------------------|-------------------------|-----------------------------------------------------------------------|\n| admin.conf              | kubectl                 | Configures administrator user for the cluster                         |\n| super-admin.conf        | kubectl                 | Configures super administrator user for the cluster                   |\n| kubelet.conf            | kubelet                 | One required for each node in the cluster.                            
|\n| controller-manager.conf | kube-controller-manager | Must be added to manifest in `manifests\/kube-controller-manager.yaml` |\n| scheduler.conf          | kube-scheduler          | Must be added to manifest in `manifests\/kube-scheduler.yaml`          |\n\nThe following files illustrate full paths to the files listed in the previous table:\n\n```\n\/etc\/kubernetes\/admin.conf\n\/etc\/kubernetes\/super-admin.conf\n\/etc\/kubernetes\/kubelet.conf\n\/etc\/kubernetes\/controller-manager.conf\n\/etc\/kubernetes\/scheduler.conf\n```","site":"kubernetes setup"}
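The PKI record above describes a single root CA (`ca.crt`/`ca.key`, default CN `kubernetes-ca`) plus the `sa.key`/`sa.pub` service-account key pair. A hedged openssl sketch of creating those files by hand; kubeadm normally generates them, and the 2048-bit key size and 10-year lifetime here are illustrative choices, not values mandated by the doc:

```shell
# Sketch only: hand-rolling the root CA and service-account key pair
# that kubeadm would otherwise create under /etc/kubernetes/pki.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key \
  -subj "/CN=kubernetes-ca" -days 3650 -out ca.crt

# Service-account signing key pair (sa.key / sa.pub):
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub
```

The etcd and front-proxy CAs from the "Required CAs" table would be created the same way with their own CNs, and intermediate or leaf certificates would then be signed by the appropriate CA key.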
{"questions":"linux lanana org as part of the Linux Assigned Names And Numbers Authority mainline Linux kernel is now the maintained main document Note The original version of this document which was maintained at Last update 2005 01 17 version 1 4 Unicode support LANANA project is no longer existent So this version in the","answers":"Unicode support\n===============\n\n\t\t Last update: 2005-01-17, version 1.4\n\nNote: The original version of this document, which was maintained at\nlanana.org as part of the Linux Assigned Names And Numbers Authority\n(LANANA) project, is no longer existent.  So, this version in the\nmainline Linux kernel is now the maintained main document.\n\nIntroduction\n------------\n\nThe Linux kernel code has been rewritten to use Unicode to map\ncharacters to fonts.  By downloading a single Unicode-to-font table,\nboth the eight-bit character sets and UTF-8 mode are changed to use\nthe font as indicated.\n\nThis changes the semantics of the eight-bit character tables subtly.\nThe four character tables are now:\n\n=============== =============================== ================\nMap symbol\tMap name\t\t\tEscape code (G0)\n=============== =============================== ================\nLAT1_MAP\tLatin-1 (ISO 8859-1)\t\tESC ( B\nGRAF_MAP\tDEC VT100 pseudographics\tESC ( 0\nIBMPC_MAP\tIBM code page 437\t\tESC ( U\nUSER_MAP\tUser defined\t\t\tESC ( K\n=============== =============================== ================\n\nIn particular, ESC ( U is no longer \"straight to font\", since the font\nmight be completely different than the IBM character set.  
This\npermits for example the use of block graphics even with a Latin-1 font\nloaded.\n\nNote that although these codes are similar to ISO 2022, neither the\ncodes nor their uses match ISO 2022; Linux has two 8-bit codes (G0 and\nG1), whereas ISO 2022 has four 7-bit codes (G0-G3).\n\nIn accordance with the Unicode standard\/ISO 10646 the range U+F000 to\nU+F8FF has been reserved for OS-wide allocation (the Unicode Standard\nrefers to this as a \"Corporate Zone\", since this is inaccurate for\nLinux we call it the \"Linux Zone\").  U+F000 was picked as the starting\npoint since it lets the direct-mapping area start on a large power of\ntwo (in case 1024- or 2048-character fonts ever become necessary).\nThis leaves U+E000 to U+EFFF as End User Zone.\n\n[v1.2]: The Unicodes range from U+F000 and up to U+F7FF have been\nhard-coded to map directly to the loaded font, bypassing the\ntranslation table.  The user-defined map now defaults to U+F000 to\nU+F0FF, emulating the previous behaviour.  In practice, this range\nmight be shorter; for example, vgacon can only handle 256-character\n(U+F000..U+F0FF) or 512-character (U+F000..U+F1FF) fonts.\n\n\nActual characters assigned in the Linux Zone\n--------------------------------------------\n\nIn addition, the following characters not present in Unicode 1.1.4\nhave been defined; these are used by the DEC VT graphics map.  [v1.2]\nTHIS USE IS OBSOLETE AND SHOULD NO LONGER BE USED; PLEASE SEE BELOW.\n\n====== ======================================\nU+F800 DEC VT GRAPHICS HORIZONTAL LINE SCAN 1\nU+F801 DEC VT GRAPHICS HORIZONTAL LINE SCAN 3\nU+F803 DEC VT GRAPHICS HORIZONTAL LINE SCAN 7\nU+F804 DEC VT GRAPHICS HORIZONTAL LINE SCAN 9\n====== ======================================\n\nThe DEC VT220 uses a 6x10 character matrix, and these characters form\na smooth progression in the DEC VT graphics character set.  
I have\nomitted the scan 5 line, since it is also used as a block-graphics\ncharacter, and hence has been coded as U+2500 FORMS LIGHT HORIZONTAL.\n\n[v1.3]: These characters have been officially added to Unicode 3.2.0;\nthey are added at U+23BA, U+23BB, U+23BC, U+23BD.  Linux now uses the\nnew values.\n\n[v1.2]: The following characters have been added to represent common\nkeyboard symbols that are unlikely to ever be added to Unicode proper\nsince they are horribly vendor-specific.  This, of course, is an\nexcellent example of horrible design.\n\n====== ======================================\nU+F810 KEYBOARD SYMBOL FLYING FLAG\nU+F811 KEYBOARD SYMBOL PULLDOWN MENU\nU+F812 KEYBOARD SYMBOL OPEN APPLE\nU+F813 KEYBOARD SYMBOL SOLID APPLE\n====== ======================================\n\nKlingon language support\n------------------------\n\nIn 1996, Linux was the first operating system in the world to add\nsupport for the artificial language Klingon, created by Marc Okrand\nfor the \"Star Trek\" television series.\tThis encoding was later\nadopted by the ConScript Unicode Registry and proposed (but ultimately\nrejected) for inclusion in Unicode Plane 1.  Thus, it remains as a\nLinux\/CSUR private assignment in the Linux Zone.\n\nThis encoding has been endorsed by the Klingon Language Institute.\nFor more information, contact them at:\n\n\thttp:\/\/www.kli.org\/\n\nSince the characters in the beginning of the Linux CZ have been more\nof the dingbats\/symbols\/forms type and this is a language, I have\nlocated it at the end, on a 16-cell boundary in keeping with standard\nUnicode practice.\n\n.. note::\n\n  This range is now officially managed by the ConScript Unicode\n  Registry.  
The normative reference is at:\n\n\thttps:\/\/www.evertype.com\/standards\/csur\/klingon.html\n\nKlingon has an alphabet of 26 characters, a positional numeric writing\nsystem with 10 digits, and is written left-to-right, top-to-bottom.\n\nSeveral glyph forms for the Klingon alphabet have been proposed.\nHowever, since the set of symbols appear to be consistent throughout,\nwith only the actual shapes being different, in keeping with standard\nUnicode practice these differences are considered font variants.\n\n======\t=======================================================\nU+F8D0\tKLINGON LETTER A\nU+F8D1\tKLINGON LETTER B\nU+F8D2\tKLINGON LETTER CH\nU+F8D3\tKLINGON LETTER D\nU+F8D4\tKLINGON LETTER E\nU+F8D5\tKLINGON LETTER GH\nU+F8D6\tKLINGON LETTER H\nU+F8D7\tKLINGON LETTER I\nU+F8D8\tKLINGON LETTER J\nU+F8D9\tKLINGON LETTER L\nU+F8DA\tKLINGON LETTER M\nU+F8DB\tKLINGON LETTER N\nU+F8DC\tKLINGON LETTER NG\nU+F8DD\tKLINGON LETTER O\nU+F8DE\tKLINGON LETTER P\nU+F8DF\tKLINGON LETTER Q\n\t- Written <q> in standard Okrand Latin transliteration\nU+F8E0\tKLINGON LETTER QH\n\t- Written <Q> in standard Okrand Latin transliteration\nU+F8E1\tKLINGON LETTER R\nU+F8E2\tKLINGON LETTER S\nU+F8E3\tKLINGON LETTER T\nU+F8E4\tKLINGON LETTER TLH\nU+F8E5\tKLINGON LETTER U\nU+F8E6\tKLINGON LETTER V\nU+F8E7\tKLINGON LETTER W\nU+F8E8\tKLINGON LETTER Y\nU+F8E9\tKLINGON LETTER GLOTTAL STOP\n\nU+F8F0\tKLINGON DIGIT ZERO\nU+F8F1\tKLINGON DIGIT ONE\nU+F8F2\tKLINGON DIGIT TWO\nU+F8F3\tKLINGON DIGIT THREE\nU+F8F4\tKLINGON DIGIT FOUR\nU+F8F5\tKLINGON DIGIT FIVE\nU+F8F6\tKLINGON DIGIT SIX\nU+F8F7\tKLINGON DIGIT SEVEN\nU+F8F8\tKLINGON DIGIT EIGHT\nU+F8F9\tKLINGON DIGIT NINE\n\nU+F8FD\tKLINGON COMMA\nU+F8FE\tKLINGON FULL STOP\nU+F8FF\tKLINGON SYMBOL FOR EMPIRE\n======\t=======================================================\n\nOther Fictional and Artificial Scripts\n--------------------------------------\n\nSince the assignment of the Klingon Linux Unicode block, a registry of\nfictional and 
artificial scripts has been established by John Cowan\n<jcowan@reutershealth.com> and Michael Everson <everson@evertype.com>.\nThe ConScript Unicode Registry is accessible at:\n\n\t  https:\/\/www.evertype.com\/standards\/csur\/\n\nThe ranges used fall at the low end of the End User Zone and can hence\nnot be normatively assigned, but it is recommended that people who\nwish to encode fictional scripts use these codes, in the interest of\ninteroperability.  For Klingon, CSUR has adopted the Linux encoding.\nThe CSUR people are driving adding Tengwar and Cirth into Unicode\nPlane 1; the addition of Klingon to Unicode Plane 1 has been rejected\nand so the above encoding remains official.","site":"linux"}
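The assignments above are all plane-0 code points at U+F000 and beyond, so in UTF-8 mode each occupies three bytes on the wire. As a quick illustration (not part of the original document), the UTF-8 encoding of U+F8D0 KLINGON LETTER A can be computed with plain shell arithmetic:

```bash
# Encode a BMP code point in the U+0800..U+FFFF range as UTF-8 by hand;
# U+F8D0 (KLINGON LETTER A, from the table above) is used as the example.
cp=$((0xF8D0))
b1=$(( 0xE0 | (cp >> 12) ))          # leading byte:      1110xxxx
b2=$(( 0x80 | ((cp >> 6) & 0x3F) ))  # continuation byte: 10xxxxxx
b3=$(( 0x80 | (cp & 0x3F) ))         # continuation byte: 10xxxxxx
printf 'U+%04X -> %02x %02x %02x\n' "$cp" "$b1" "$b2" "$b3"
# -> U+F8D0 -> ef a3 90
```

The same three-byte pattern applies to every code point in the Linux Zone and the End User Zone, which is why a console font loaded for these ranges is reachable from ordinary UTF-8 input.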
{"questions":"linux organization here this material was not written to be a single coherent The following is a collection of user oriented documents that have been document With luck things will improve quickly over time added to the kernel over time There is as yet little overall order or The Linux kernel user s and administrator s guide This initial section contains overall information including the README","answers":"=================================================\nThe Linux kernel user's and administrator's guide\n=================================================\n\nThe following is a collection of user-oriented documents that have been\nadded to the kernel over time.  There is, as yet, little overall order or\norganization here \u2014 this material was not written to be a single, coherent\ndocument!  With luck things will improve quickly over time.\n\nThis initial section contains overall information, including the README\nfile describing the kernel as a whole, documentation on kernel parameters,\netc.\n\n.. toctree::\n   :maxdepth: 1\n\n   README\n   kernel-parameters\n   devices\n   sysctl\/index\n\n   abi\n   features\n\nThis section describes CPU vulnerabilities and their mitigations.\n\n.. toctree::\n   :maxdepth: 1\n\n   hw-vuln\/index\n\nHere is a set of documents aimed at users who are trying to track down\nproblems and bugs in particular.\n\n.. toctree::\n   :maxdepth: 1\n\n   reporting-issues\n   reporting-regressions\n   quickly-build-trimmed-linux\n   verify-bugs-and-bisect-regressions\n   bug-hunting\n   bug-bisect\n   tainted-kernels\n   ramoops\n   dynamic-debug-howto\n   init\n   kdump\/index\n   perf\/index\n   pstore-blk\n\nThis is the beginning of a section with information of interest to\napplication developers.  Documents covering various aspects of the kernel\nABI will be found here.\n\n.. 
toctree::\n   :maxdepth: 1\n\n   sysfs-rules\n\nThis is the beginning of a section with information of interest to\napplication developers and system integrators doing analysis of the\nLinux kernel for safety critical applications. Documents supporting\nanalysis of kernel interactions with applications, and key kernel\nsubsystems expectations will be found here.\n\n.. toctree::\n   :maxdepth: 1\n\n   workload-tracing\n\nThe rest of this manual consists of various unordered guides on how to\nconfigure specific aspects of kernel behavior to your liking.\n\n.. toctree::\n   :maxdepth: 1\n\n   acpi\/index\n   aoe\/index\n   auxdisplay\/index\n   bcache\n   binderfs\n   binfmt-misc\n   blockdev\/index\n   bootconfig\n   braille-console\n   btmrvl\n   cgroup-v1\/index\n   cgroup-v2\n   cifs\/index\n   clearing-warn-once\n   cpu-load\n   cputopology\n   dell_rbu\n   device-mapper\/index\n   edid\n   efi-stub\n   ext4\n   filesystem-monitoring\n   nfs\/index\n   gpio\/index\n   highuid\n   hw_random\n   initrd\n   iostats\n   java\n   jfs\n   kernel-per-CPU-kthreads\n   laptops\/index\n   lcd-panel-cgram\n   ldm\n   lockup-watchdogs\n   LSM\/index\n   md\n   media\/index\n   mm\/index\n   module-signing\n   mono\n   namespaces\/index\n   numastat\n   parport\n   perf-security\n   pm\/index\n   pnp\n   rapidio\n   RAS\/index\n   rtc\n   serial-console\n   svga\n   syscall-user-dispatch\n   sysrq\n   thermal\/index\n   thunderbolt\n   ufs\n   unicode\n   vga-softcursor\n   video-output\n   xfs\n\n.. 
only::  subproject and html\n\n   Indices\n   =======\n\n   * :ref:`genindex`","site":"linux"}
{"questions":"linux and depends on internal kernel structures and layout It is agreed upon may not be stable across kernel releases internal API Therefore there are aspects of the sysfs interface that The kernel exported sysfs exports internal kernel implementation details Rules on how to access information in sysfs by the kernel developers that the Linux kernel does not provide a stable To minimize the risk of breaking users of sysfs which are in most cases","answers":"Rules on how to access information in sysfs\n===========================================\n\nThe kernel-exported sysfs exports internal kernel implementation details\nand depends on internal kernel structures and layout. It is agreed upon\nby the kernel developers that the Linux kernel does not provide a stable\ninternal API. Therefore, there are aspects of the sysfs interface that\nmay not be stable across kernel releases.\n\nTo minimize the risk of breaking users of sysfs, which are in most cases\nlow-level userspace applications, with a new kernel release, the users\nof sysfs must follow some rules to use an as-abstract-as-possible way to\naccess this filesystem. The current udev and HAL programs already\nimplement this and users are encouraged to plug, if possible, into the\nabstractions these programs provide instead of accessing sysfs directly.\n\nBut if you really do want or need to access sysfs directly, please follow\nthe following rules and then your programs should work with future\nversions of the sysfs interface.\n\n- Do not use libsysfs\n    It makes assumptions about sysfs which are not true. Its API does not\n    offer any abstraction, it exposes all the kernel driver-core\n    implementation details in its own API. Therefore it is not better than\n    reading directories and opening the files yourself.\n    Also, it is not actively maintained, in the sense of reflecting the\n    current kernel development. 
The goal of providing a stable interface\n    to sysfs has failed; it causes more problems than it solves. It\n    violates many of the rules in this document.\n\n- sysfs is always at ``\/sys``\n    Parsing ``\/proc\/mounts`` is a waste of time. Other mount points are a\n    system configuration bug you should not try to solve. For test cases,\n    possibly support a ``SYSFS_PATH`` environment variable to overwrite the\n    application's behavior, but never try to search for sysfs. Never try\n    to mount it, if you are not an early boot script.\n\n- devices are only \"devices\"\n    There is no such thing like class-, bus-, physical devices,\n    interfaces, and such that you can rely on in userspace. Everything is\n    just simply a \"device\". Class-, bus-, physical, ... types are just\n    kernel implementation details which should not be expected by\n    applications that look for devices in sysfs.\n\n    The properties of a device are:\n\n    - devpath (``\/devices\/pci0000:00\/0000:00:1d.1\/usb2\/2-2\/2-2:1.0``)\n\n      - identical to the DEVPATH value in the event sent from the kernel\n        at device creation and removal\n      - the unique key to the device at that point in time\n      - the kernel's path to the device directory without the leading\n        ``\/sys``, and always starting with a slash\n      - all elements of a devpath must be real directories. 
Symlinks\n        pointing to \/sys\/devices must always be resolved to their real\n        target and the target path must be used to access the device.\n        That way the devpath to the device matches the devpath of the\n        kernel used at event time.\n      - using or exposing symlink values as elements in a devpath string\n        is a bug in the application\n\n    - kernel name (``sda``, ``tty``, ``0000:00:1f.2``, ...)\n\n      - a directory name, identical to the last element of the devpath\n      - applications need to handle spaces and characters like ``!`` in\n        the name\n\n    - subsystem (``block``, ``tty``, ``pci``, ...)\n\n      - simple string, never a path or a link\n      - retrieved by reading the \"subsystem\"-link and using only the\n        last element of the target path\n\n    - driver (``tg3``, ``ata_piix``, ``uhci_hcd``)\n\n      - a simple string, which may contain spaces, never a path or a\n        link\n      - it is retrieved by reading the \"driver\"-link and using only the\n        last element of the target path\n      - devices which do not have \"driver\"-link just do not have a\n        driver; copying the driver value in a child device context is a\n        bug in the application\n\n    - attributes\n\n      - the files in the device directory or files below subdirectories\n        of the same device directory\n      - accessing attributes reached by a symlink pointing to another device,\n        like the \"device\"-link, is a bug in the application\n\n    Everything else is just a kernel driver-core implementation detail\n    that should not be assumed to be stable across kernel releases.\n\n- Properties of parent devices never belong into a child device.\n    Always look at the parent devices themselves for determining device\n    context properties. If the device ``eth0`` or ``sda`` does not have a\n    \"driver\"-link, then this device does not have a driver. 
Its value is empty.\n    Never copy any property of the parent-device into a child-device. Parent\n    device properties may change dynamically without any notice to the\n    child device.\n\n- Hierarchy in a single device tree\n    There is only one valid place in sysfs where hierarchy can be examined\n    and this is below: ``\/sys\/devices.``\n    It is planned that all device directories will end up in the tree\n    below this directory.\n\n- Classification by subsystem\n    There are currently three places for classification of devices:\n    ``\/sys\/block,`` ``\/sys\/class`` and ``\/sys\/bus.`` It is planned that these will\n    not contain any device directories themselves, but only flat lists of\n    symlinks pointing to the unified ``\/sys\/devices`` tree.\n    All three places have completely different rules on how to access\n    device information. It is planned to merge all three\n    classification directories into one place at ``\/sys\/subsystem``,\n    following the layout of the bus directories. All buses and\n    classes, including the converted block subsystem, will show up\n    there.\n    The devices belonging to a subsystem will create a symlink in the\n    \"devices\" directory at ``\/sys\/subsystem\/<name>\/devices``,\n\n    If ``\/sys\/subsystem`` exists, ``\/sys\/bus``, ``\/sys\/class`` and ``\/sys\/block``\n    can be ignored. If it does not exist, you always have to scan all three\n    places, as the kernel is free to move a subsystem from one place to\n    the other, as long as the devices are still reachable by the same\n    subsystem name.\n\n    Assuming ``\/sys\/class\/<subsystem>`` and ``\/sys\/bus\/<subsystem>``, or\n    ``\/sys\/block`` and ``\/sys\/class\/block`` are not interchangeable is a bug in\n    the application.\n\n- Block\n    The converted block subsystem at ``\/sys\/class\/block`` or\n    ``\/sys\/subsystem\/block`` will contain the links for disks and partitions\n    at the same level, never in a hierarchy. 
Assuming the block subsystem to\n    contain only disks and not partition devices in the same flat list is\n    a bug in the application.\n\n- \"device\"-link and <subsystem>:<kernel name>-links\n    Never depend on the \"device\"-link. The \"device\"-link is a workaround\n    for the old layout, where class devices are not created in\n    ``\/sys\/devices\/`` like the bus devices. If the link-resolving of a\n    device directory does not end in ``\/sys\/devices\/``, you can use the\n    \"device\"-link to find the parent devices in ``\/sys\/devices\/``, That is the\n    single valid use of the \"device\"-link; it must never appear in any\n    path as an element. Assuming the existence of the \"device\"-link for\n    a device in ``\/sys\/devices\/`` is a bug in the application.\n    Accessing ``\/sys\/class\/net\/eth0\/device`` is a bug in the application.\n\n    Never depend on the class-specific links back to the ``\/sys\/class``\n    directory.  These links are also a workaround for the design mistake\n    that class devices are not created in ``\/sys\/devices.`` If a device\n    directory does not contain directories for child devices, these links\n    may be used to find the child devices in ``\/sys\/class.`` That is the single\n    valid use of these links; they must never appear in any path as an\n    element. Assuming the existence of these links for devices which are\n    real child device directories in the ``\/sys\/devices`` tree is a bug in\n    the application.\n\n    It is planned to remove all these links when all class device\n    directories live in ``\/sys\/devices.``\n\n- Position of devices along device chain can change.\n    Never depend on a specific parent device position in the devpath,\n    or the chain of parent devices. The kernel is free to insert devices into\n    the chain. You must always request the parent device you are looking for\n    by its subsystem value. 
You need to walk up the chain until you find\n    the device that matches the expected subsystem. Depending on a specific\n    position of a parent device or exposing relative paths using ``..\/`` to\n    access the chain of parents is a bug in the application.\n\n- When reading and writing sysfs device attribute files, avoid dependency\n    on specific error codes wherever possible. This minimizes coupling to\n    the error handling implementation within the kernel.\n\n    In general, failures to read or write sysfs device attributes shall\n    propagate errors wherever possible. Common errors include, but are not\n    limited to:\n\n\t``-EIO``: The read or store operation is not supported, typically\n\treturned by the sysfs system itself if the read or store pointer\n\tis ``NULL``.\n\n\t``-ENXIO``: The read or store operation failed.\n\n    Error codes will not be changed without good reason, and should a change\n    to error codes result in user-space breakage, it will be fixed, or the\n    offending change will be reverted.\n\n    Userspace applications can, however, expect the format and contents of\n    the attribute files to remain consistent in the absence of a version\n    attribute change in the context of a given attribute.","site":"linux"}
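The access rules above (use only the last element of the "subsystem"/"driver" link targets, resolve class symlinks to their real `/sys/devices` target, and find parents by walking up the chain matching the subsystem) can be sketched in userspace code. A minimal Python sketch follows; the helper names and the fake sysfs tree built in a temp directory are invented for illustration, not part of any kernel API.

```python
# Sketch of the sysfs access rules: derive "subsystem"/"driver" from the last
# element of the link target only, resolve class symlinks to their real
# /sys/devices target, and find parents by walking up and matching subsystem.
import os
import tempfile

def link_basename(devdir, name):
    """Last element of a symlink target; None if the link does not exist."""
    link = os.path.join(devdir, name)
    if not os.path.islink(link):
        return None  # e.g. a device without a "driver" link has no driver
    return os.path.basename(os.readlink(link))

def devpath(sysfs_root, entry):
    """Resolve symlinks to the real /sys/devices directory; strip the root."""
    return os.path.realpath(entry)[len(sysfs_root):]

def find_parent(devdir, sysfs_root, subsystem):
    """Walk up the device chain until a parent with the wanted subsystem."""
    d = devdir
    while d.startswith(os.path.join(sysfs_root, "devices")):
        if link_basename(d, "subsystem") == subsystem:
            return d
        d = os.path.dirname(d)
    return None

# Build a tiny fake sysfs tree to exercise the helpers.
root = os.path.realpath(tempfile.mkdtemp())
dev = os.path.join(root, "devices", "pci0000:00", "0000:00:19.0", "net", "eth0")
os.makedirs(dev)
os.makedirs(os.path.join(root, "class", "net"))
os.makedirs(os.path.join(root, "bus", "pci"))
os.symlink(dev, os.path.join(root, "class", "net", "eth0"))
os.symlink(os.path.join(root, "class", "net"), os.path.join(dev, "subsystem"))
os.symlink(os.path.join(root, "bus", "pci"),
           os.path.join(root, "devices", "pci0000:00", "0000:00:19.0", "subsystem"))

print(link_basename(dev, "subsystem"))   # net
print(link_basename(dev, "driver"))      # None: no driver link, no driver
print(devpath(root, os.path.join(root, "class", "net", "eth0")))
print(find_parent(dev, root, "pci"))
```

Note that `find_parent` never assumes a fixed position in the devpath: it walks upward testing each ancestor's subsystem, exactly as the rules require, and `devpath` never exposes symlink elements because the class entry is resolved to its real target first.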
{"questions":"linux Ext4 is an advanced level of the ext3 filesystem which incorporates scalability and reliability enhancements for supporting large filesystems 64 bit in keeping with increasing disk capacities and state of the art feature requirements ext4 General Information SPDX License Identifier GPL 2 0","answers":".. SPDX-License-Identifier: GPL-2.0\n\n========================\next4 General Information\n========================\n\nExt4 is an advanced level of the ext3 filesystem which incorporates\nscalability and reliability enhancements for supporting large filesystems\n(64 bit) in keeping with increasing disk capacities and state-of-the-art\nfeature requirements.\n\nMailing list:\tlinux-ext4@vger.kernel.org\nWeb site:\thttp:\/\/ext4.wiki.kernel.org\n\n\nQuick usage instructions\n========================\n\nNote: More extensive information for getting started with ext4 can be\nfound at the ext4 wiki site at the URL:\nhttp:\/\/ext4.wiki.kernel.org\/index.php\/Ext4_Howto\n\n  - The latest version of e2fsprogs can be found at:\n\n    https:\/\/www.kernel.org\/pub\/linux\/kernel\/people\/tytso\/e2fsprogs\/\n\n\tor\n\n    http:\/\/sourceforge.net\/project\/showfiles.php?group_id=2406\n\n\tor grab the latest git repository from:\n\n   https:\/\/git.kernel.org\/pub\/scm\/fs\/ext2\/e2fsprogs.git\n\n  - Create a new filesystem using the ext4 filesystem type:\n\n        # mke2fs -t ext4 \/dev\/hda1\n\n    Or to configure an existing ext3 filesystem to support extents:\n\n\t# tune2fs -O extents \/dev\/hda1\n\n    If the filesystem was created with 128 byte inodes, it can be\n    converted to use 256 byte for greater efficiency via:\n\n        # tune2fs -I 256 \/dev\/hda1\n\n  - Mounting:\n\n\t# mount -t ext4 \/dev\/hda1 \/wherever\n\n  - When comparing performance with other filesystems, it's always\n    important to try multiple workloads; very often a subtle change in a\n    workload parameter can completely change the ranking of which\n    filesystems do well 
compared to others.  When comparing versus ext3,\n    note that ext4 enables write barriers by default, while ext3 does\n    not enable write barriers by default.  So it is useful to\n    explicitly specify whether barriers are enabled or not via the\n    '-o barriers=[0|1]' mount option for both ext3 and ext4 filesystems\n    for a fair comparison.  When tuning ext3 for best benchmark numbers,\n    it is often worthwhile to try changing the data journaling mode; '-o\n    data=writeback' can be faster for some workloads.  (Note however that\n    running mounted with data=writeback can potentially leave stale data\n    exposed in recently written files in case of an unclean shutdown,\n    which could be a security exposure in some situations.)  Configuring\n    the filesystem with a large journal can also be helpful for\n    metadata-intensive workloads.\n\nFeatures\n========\n\nCurrently Available\n-------------------\n\n* ability to use filesystems > 16TB (e2fsprogs support not available yet)\n* extent format reduces metadata overhead (RAM, IO for access, transactions)\n* extent format more robust in face of on-disk corruption due to magics,\n  internal redundancy in tree\n* improved file allocation (multi-block alloc)\n* lift 32000 subdirectory limit imposed by i_links_count[1]\n* nsec timestamps for mtime, atime, ctime, create time\n* inode version field on disk (NFSv4, Lustre)\n* reduced e2fsck time via uninit_bg feature\n* journal checksumming for robustness, performance\n* persistent file preallocation (e.g. for streaming media, databases)\n* ability to pack bitmaps and inode tables into larger virtual groups via the\n  flex_bg feature\n* large file support\n* inode allocation using large virtual block groups via flex_bg\n* delayed allocation\n* large block (up to pagesize) support\n* efficient new ordered mode in JBD2 and ext4 (avoid using buffer head to force\n  the ordering)\n* Case-insensitive file name lookups\n* file-based encryption support 
(fscrypt)\n* file-based verity support (fsverity)\n\n[1] Filesystems with a block size of 1k may see a limit imposed by the\ndirectory hash tree having a maximum depth of two.\n\ncase-insensitive file name lookups\n==================================\n\nThe case-insensitive file name lookup feature is supported on a\nper-directory basis, allowing the user to mix case-insensitive and\ncase-sensitive directories in the same filesystem.  It is enabled by\nflipping the +F inode attribute of an empty directory.  The\ncase-insensitive string match operation is only defined when we know how\ntext is encoded in a byte sequence.  For that reason, in order to enable\ncase-insensitive directories, the filesystem must have the\ncasefold feature, which stores the filesystem-wide encoding\nmodel used.  By default, the charset adopted is the latest version of\nUnicode (12.1.0, at the time of this writing), encoded in the UTF-8\nform.  The comparison algorithm is implemented by normalizing the\nstrings to the Canonical decomposition form, as defined by Unicode,\nfollowed by a byte per byte comparison.\n\nThe case-awareness is name-preserving on the disk, meaning that the file\nname provided by userspace is a byte-per-byte match to what is actually\nwritten in the disk.  The Unicode normalization format used by the\nkernel is thus an internal representation, and not exposed to the\nuserspace nor to the disk, with the important exception of disk hashes,\nused on large case-insensitive directories with the DX feature.  On DX\ndirectories, the hash must be calculated using the casefolded version of\nthe filename, meaning that the normalization format used actually has an\nimpact on where the directory entry is stored.\n\nWhen we change from viewing filenames as opaque byte sequences to seeing\nthem as encoded strings we need to address what happens when a program\ntries to create a file with an invalid name.  
The Unicode subsystem\nwithin the kernel leaves the decision of what to do in this case to the\nfilesystem, which selects its preferred behavior by enabling\/disabling\nthe strict mode.  When Ext4 encounters one of those strings and the\nfilesystem did not require strict mode, it falls back to considering the\nentire string as an opaque byte sequence, which still allows the user to\noperate on that file, but the case-insensitive lookups won't work.\n\nOptions\n=======\n\nWhen mounting an ext4 filesystem, the following options are accepted:\n(*) == default\n\n  ro\n        Mount filesystem read only. Note that ext4 will replay the journal (and\n        thus write to the partition) even when mounted \"read only\". The mount\n        options \"ro,noload\" can be used to prevent writes to the filesystem.\n\n  journal_checksum\n        Enable checksumming of the journal transactions.  This will allow the\n        recovery code in e2fsck and the kernel to detect corruption in the\n        kernel.  It is a compatible change and will be ignored by older\n        kernels.\n\n  journal_async_commit\n        Commit block can be written to disk without waiting for descriptor\n        blocks. If enabled, older kernels cannot mount the device. This will\n        enable 'journal_checksum' internally.\n\n  journal_path=path, journal_dev=devnum\n        When the external journal device's major\/minor numbers have changed,\n        these options allow the user to specify the new journal location.  The\n        journal device is identified through either its new major\/minor numbers\n        encoded in devnum, or via a path to the device.\n\n  norecovery, noload\n        Don't load the journal on mounting.  
Note that if the filesystem was\n        not unmounted cleanly, skipping the journal replay will lead to the\n        filesystem containing inconsistencies that can lead to any number of\n        problems.\n\n  data=journal\n        All data are committed into the journal prior to being written into the\n        main file system.  Enabling this mode will disable delayed allocation\n        and O_DIRECT support.\n\n  data=ordered\t(*)\n        All data are forced directly out to the main file system prior to its\n        metadata being committed to the journal.\n\n  data=writeback\n        Data ordering is not preserved, data may be written into the main file\n        system after its metadata has been committed to the journal.\n\n  commit=nrsec\t(*)\n        This setting limits the maximum age of the running transaction to\n        'nrsec' seconds.  The default value is 5 seconds.  This means that if\n        you lose your power, you will lose as much as the latest 5 seconds of\n        metadata changes (your filesystem will not be damaged though, thanks\n        to the journaling). This default value (or any low value) will hurt\n        performance, but it's good for data-safety.  Setting it to 0 will have\n        the same effect as leaving it at the default (5 seconds).  Setting it\n        to very large values will improve performance.  Note that due to\n        delayed allocation even older data can be lost on power failure since\n        writeback of those data begins only after time set in\n        \/proc\/sys\/vm\/dirty_expire_centisecs.\n\n  barrier=<0|1(*)>, barrier(*), nobarrier\n        This enables\/disables the use of write barriers in the jbd code.\n        barrier=0 disables, barrier=1 enables.  This also requires an IO stack\n        which can support barriers, and if jbd gets an error on a barrier\n        write, it will disable again with a warning.  
Write barriers enforce\n        proper on-disk ordering of journal commits, making volatile disk write\n        caches safe to use, at some performance penalty.  If your disks are\n        battery-backed in one way or another, disabling barriers may safely\n        improve performance.  The mount options \"barrier\" and \"nobarrier\" can\n        also be used to enable or disable barriers, for consistency with other\n        ext4 mount options.\n\n  inode_readahead_blks=n\n        This tuning parameter controls the maximum number of inode table blocks\n        that ext4's inode table readahead algorithm will pre-read into the\n        buffer cache.  The default value is 32 blocks.\n\n  bsddf\t(*)\n        Make 'df' act like BSD.\n\n  minixdf\n        Make 'df' act like Minix.\n\n  debug\n        Extra debugging information is sent to syslog.\n\n  abort\n        Simulate the effects of calling ext4_abort() for debugging purposes.\n        This is normally used while remounting a filesystem which is already\n        mounted.\n\n  errors=remount-ro\n        Remount the filesystem read-only on an error.\n\n  errors=continue\n        Keep going on a filesystem error.\n\n  errors=panic\n        Panic and halt the machine if an error occurs.  
(These mount options\n        override the errors behavior specified in the superblock, which can be\n        configured using tune2fs)\n\n  data_err=ignore(*)\n        Just print an error message if an error occurs in a file data buffer in\n        ordered mode.\n  data_err=abort\n        Abort the journal if an error occurs in a file data buffer in ordered\n        mode.\n\n  grpid | bsdgroups\n        New objects have the group ID of their parent.\n\n  nogrpid (*) | sysvgroups\n        New objects have the group ID of their creator.\n\n  resgid=n\n        The group ID which may use the reserved blocks.\n\n  resuid=n\n        The user ID which may use the reserved blocks.\n\n  sb=\n        Use alternate superblock at this location.\n\n  quota, noquota, grpquota, usrquota\n        These options are ignored by the filesystem. They are used only by\n        quota tools to recognize volumes where quota should be turned on. See\n        documentation in the quota-tools package for more details\n        (http:\/\/sourceforge.net\/projects\/linuxquota).\n\n  jqfmt=<quota type>, usrjquota=<file>, grpjquota=<file>\n        These options tell the filesystem details about quota so that quota\n        information can be properly updated during journal replay. They replace\n        the above quota options. See documentation in the quota-tools package\n        for more details (http:\/\/sourceforge.net\/projects\/linuxquota).\n\n  stripe=n\n        Number of filesystem blocks that mballoc will try to use for allocation\n        size and alignment. For RAID5\/6 systems this should be the number of\n        data disks * RAID chunk size in file system blocks.\n\n  delalloc\t(*)\n        Defer block allocation until just before ext4 writes out the block(s)\n        in question.  This allows ext4 to make allocation decisions more\n        efficiently.\n\n  nodelalloc\n        Disable delayed allocation.  
Blocks are allocated when the data is\n        copied from userspace to the page cache, either via the write(2) system\n        call or when an mmap'ed page which was previously unallocated is\n        written for the first time.\n\n  max_batch_time=usec\n        Maximum amount of time ext4 should wait for additional filesystem\n        operations to be batched together with a synchronous write operation.\n        Since a synchronous write operation is going to force a commit and then\n        wait for the I\/O to complete, it doesn't cost much, and can be a huge\n        throughput win, to wait for a small amount of time to see if any other\n        transactions can piggyback on the synchronous write.   The algorithm\n        used is designed to automatically tune for the speed of the disk, by\n        measuring the amount of time (on average) that it takes to finish\n        committing a transaction.  Call this time the \"commit time\".  If the\n        time that the transaction has been running is less than the commit\n        time, ext4 will try sleeping for the commit time to see if other\n        operations will join the transaction.   The commit time is capped by\n        the max_batch_time, which defaults to 15000us (15ms).   This\n        optimization can be turned off entirely by setting max_batch_time to 0.\n\n  min_batch_time=usec\n        This parameter sets the commit time (as described above) to be at least\n        min_batch_time.  It defaults to zero microseconds.  Increasing this\n        parameter may improve the throughput of multi-threaded, synchronous\n        workloads on very fast disks, at the cost of increasing latency.\n\n  journal_ioprio=prio\n        The I\/O priority (from 0 to 7, where 0 is the highest priority) which\n        should be used for I\/O operations submitted by kjournald2 during a\n        commit operation.  
This defaults to 3, which is a slightly higher\n        priority than the default I\/O priority.\n\n  auto_da_alloc(*), noauto_da_alloc\n        Many broken applications don't use fsync() when replacing existing\n        files via patterns such as fd = open(\"foo.new\")\/write(fd,..)\/close(fd)\/\n        rename(\"foo.new\", \"foo\"), or worse yet, fd = open(\"foo\",\n        O_TRUNC)\/write(fd,..)\/close(fd).  If auto_da_alloc is enabled, ext4\n        will detect the replace-via-rename and replace-via-truncate patterns\n        and force that any delayed allocation blocks are allocated such that at\n        the next journal commit, in the default data=ordered mode, the data\n        blocks of the new file are forced to disk before the rename() operation\n        is committed.  This provides roughly the same level of guarantees as\n        ext3, and avoids the \"zero-length\" problem that can happen when a\n        system crashes before the delayed allocation blocks are forced to disk.\n\n  noinit_itable\n        Do not initialize any uninitialized inode table blocks in the\n        background.  This feature may be used by installation CD's so that the\n        install process can complete as quickly as possible; the inode table\n        initialization process would then be deferred until the next time the\n        file system is unmounted.\n\n  init_itable=n\n        The lazy itable init code will wait n times the number of milliseconds\n        it took to zero out the previous block group's inode table.  This\n        minimizes the impact on the system performance while file system's\n        inode table is being initialized.\n\n  discard, nodiscard(*)\n        Controls whether ext4 should issue discard\/TRIM commands to the\n        underlying block device when blocks are freed.  
This is useful for SSD\n        devices and sparse\/thinly-provisioned LUNs, but it is off by default\n        until sufficient testing has been done.\n\n  nouid32\n        Disables 32-bit UIDs and GIDs.  This is for interoperability with\n        older kernels which only store and expect 16-bit values.\n\n  block_validity(*), noblock_validity\n        These options enable or disable the in-kernel facility for tracking\n        filesystem metadata blocks within internal data structures.  This\n        allows the multi-block allocator and other routines to notice bugs or\n        corrupted allocation bitmaps which cause blocks to be allocated which\n        overlap with filesystem metadata blocks.\n\n  dioread_lock, dioread_nolock\n        Controls whether or not ext4 should use the DIO read locking. If the\n        dioread_nolock option is specified, ext4 will allocate an uninitialized\n        extent before a buffer write and convert the extent to initialized after\n        IO completes. This approach allows ext4 code to avoid using the inode\n        mutex, which improves scalability on high speed storages. However this\n        does not work with data journaling and the dioread_nolock option will be\n        ignored with a kernel warning. Note that the dioread_nolock code path is only\n        used for extent-based files.  Because of the restrictions this option\n        comprises, it is off by default (e.g. dioread_lock).\n\n  max_dir_size_kb=n\n        This limits the size of directories so that any attempt to expand them\n        beyond the specified limit in kilobytes will cause an ENOSPC error.\n        This is useful in memory constrained environments, where a very large\n        directory can cause severe performance problems or even provoke the Out\n        Of Memory killer.  (For example, if there is only 512mb memory\n        available, a 176mb directory may seriously cramp the system's style.)\n\n  i_version\n        Enable 64-bit inode version support. 
This option is off by default.\n\n  dax\n        Use direct access (no page cache).  See\n        Documentation\/filesystems\/dax.rst.  Note that this option is\n        incompatible with data=journal.\n\n  inlinecrypt\n        When possible, encrypt\/decrypt the contents of encrypted files using the\n        blk-crypto framework rather than filesystem-layer encryption. This\n        allows the use of inline encryption hardware. The on-disk format is\n        unaffected. For more details, see\n        Documentation\/block\/inline-encryption.rst.\n\nData Mode\n=========\nThere are 3 different data modes:\n\n* writeback mode\n\n  In data=writeback mode, ext4 does not journal data at all.  This mode provides\n  a similar level of journaling as that of XFS, JFS, and ReiserFS in its default\n  mode - metadata journaling.  A crash+recovery can cause incorrect data to\n  appear in files which were written shortly before the crash.  This mode will\n  typically provide the best ext4 performance.\n\n* ordered mode\n\n  In data=ordered mode, ext4 only officially journals metadata, but it logically\n  groups metadata information related to data changes with the data blocks into\n  a single unit called a transaction.  When it's time to write the new metadata\n  out to disk, the associated data blocks are written first.  In general, this\n  mode performs slightly slower than writeback but significantly faster than\n  journal mode.\n\n* journal mode\n\n  data=journal mode provides full data and metadata journaling.  All new data is\n  written to the journal first, and then to its final location.  In the event of\n  a crash, the journal can be replayed, bringing both data and metadata into a\n  consistent state.  This mode is the slowest except when data needs to be read\n  from and written to disk at the same time where it outperforms all other\n  modes.  
Enabling this mode will disable delayed allocation and O_DIRECT\n  support.\n\n\/proc entries\n=============\n\nInformation about mounted ext4 file systems can be found in\n\/proc\/fs\/ext4.  Each mounted filesystem will have a directory in\n\/proc\/fs\/ext4 based on its device name (e.g., \/proc\/fs\/ext4\/hdc or\n\/proc\/fs\/ext4\/dm-0).   The files in each per-device directory are shown\nin the table below.\n\nFiles in \/proc\/fs\/ext4\/<devname>\n\n  mb_groups\n        details of the multiblock allocator's buddy cache of free blocks\n\n\/sys entries\n============\n\nInformation about mounted ext4 file systems can be found in\n\/sys\/fs\/ext4.  Each mounted filesystem will have a directory in\n\/sys\/fs\/ext4 based on its device name (e.g., \/sys\/fs\/ext4\/hdc or\n\/sys\/fs\/ext4\/dm-0).   The files in each per-device directory are shown\nin the table below.\n\nFiles in \/sys\/fs\/ext4\/<devname>:\n\n(see also Documentation\/ABI\/testing\/sysfs-fs-ext4)\n\n  delayed_allocation_blocks\n        This file is read-only and shows the number of blocks that are dirty in\n        the page cache, but which do not have their location in the filesystem\n        allocated yet.\n\n  inode_goal\n        Tuning parameter which (if non-zero) controls the goal inode used by\n        the inode allocator in preference to all other allocation heuristics.\n        This is intended for debugging use only, and should be 0 on production\n        systems.\n\n  inode_readahead_blks\n        Tuning parameter which controls the maximum number of inode table\n        blocks that ext4's inode table readahead algorithm will pre-read into\n        the buffer cache.\n\n  lifetime_write_kbytes\n        This file is read-only and shows the number of kilobytes of data that\n        have been written to this filesystem since it was created.\n\n  max_writeback_mb_bump\n        The maximum number of megabytes the writeback code will try to write\n        out before moving on to another inode.\n\n  
mb_group_prealloc\n        The multiblock allocator will round up allocation requests to a\n        multiple of this tuning parameter if the stripe size is not set in the\n        ext4 superblock.\n\n  mb_max_to_scan\n        The maximum number of extents the multiblock allocator will search to\n        find the best extent.\n\n  mb_min_to_scan\n        The minimum number of extents the multiblock allocator will search to\n        find the best extent.\n\n  mb_order2_req\n        Tuning parameter which controls the minimum size for requests (as a\n        power of 2) where the buddy cache is used.\n\n  mb_stats\n        Controls whether the multiblock allocator should collect statistics,\n        which are shown during the unmount. 1 means to collect statistics, 0\n        means not to collect statistics.\n\n  mb_stream_req\n        Files which have fewer blocks than this tunable parameter will have\n        their blocks allocated out of a block group specific preallocation\n        pool, so that small files are packed closely together.  Each large file\n        will have its blocks allocated out of its own unique preallocation\n        pool.\n\n  session_write_kbytes\n        This file is read-only and shows the number of kilobytes of data that\n        have been written to this filesystem since it was mounted.\n\n  reserved_clusters\n        This is a RW file and contains the number of reserved clusters in the\n        file system which will be used in specific situations to avoid costly\n        zeroouts, unexpected ENOSPC, or possible data loss. The default is 2% or\n        4096 clusters, whichever is smaller, and this can be changed; however, it\n        can never exceed the number of clusters in the file system. If there is\n        not enough space for the reserved space when mounting, the mount will\n        _not_ fail.\n\nIoctls\n======\n\nExt4 implements various ioctls which can be used by applications to access\next4-specific functionality. 
An incomplete list of these ioctls is shown in the\ntable below. This list includes truly ext4-specific ioctls (``EXT4_IOC_*``) as\nwell as ioctls that may have been ext4-specific originally but are now supported\nby some other filesystem(s) too (``FS_IOC_*``).\n\nTable of Ext4 ioctls\n\n  FS_IOC_GETFLAGS\n        Get additional attributes associated with an inode.  The ioctl argument\n        is an integer bitfield, with bit values described in ext4.h.\n\n  FS_IOC_SETFLAGS\n        Set additional attributes associated with an inode.  The ioctl argument\n        is an integer bitfield, with bit values described in ext4.h.\n\n  EXT4_IOC_GETVERSION, EXT4_IOC_GETVERSION_OLD\n        Get the inode i_generation number stored for each inode. The\n        i_generation number is normally changed only when a new inode is\n        created and it is particularly useful for network filesystems. The\n        '_OLD' version of this ioctl is an alias for FS_IOC_GETVERSION.\n\n  EXT4_IOC_SETVERSION, EXT4_IOC_SETVERSION_OLD\n        Set the inode i_generation number stored for each inode. The '_OLD'\n        version of this ioctl is an alias for FS_IOC_SETVERSION.\n\n  EXT4_IOC_GROUP_EXTEND\n        This ioctl has the same purpose as the resize mount option. It allows\n        resizing the filesystem to the end of the last existing block group;\n        further resizing has to be done with resize2fs, either online or\n        offline. The argument points to the unsigned long number representing\n        the filesystem's new block count.\n\n  EXT4_IOC_MOVE_EXT\n        Move the block extents from orig_fd (the one this ioctl is pointing to)\n        to the donor_fd (the one specified in the move_extent structure passed\n        as an argument to this ioctl). Then, exchange inode metadata between\n        orig_fd and donor_fd.  
This is especially useful for online\n        defragmentation, because the allocator has the opportunity to allocate\n        moved blocks better, ideally into one contiguous extent.\n\n  EXT4_IOC_GROUP_ADD\n        Add a new group descriptor to an existing or new group descriptor\n        block. The new group descriptor is described by the\n        ext4_new_group_input structure, which is passed as an argument to this\n        ioctl. This is especially useful in conjunction with\n        EXT4_IOC_GROUP_EXTEND, which allows online resize of the filesystem to\n        the end of the last existing block group.  These two ioctls combined\n        are used by the userspace online resize tool (e.g. resize2fs).\n\n  EXT4_IOC_MIGRATE\n        This ioctl operates on the filesystem itself.  It converts (migrates)\n        an ext3 indirect-block-mapped inode to an ext4 extent-mapped inode by\n        walking through the indirect block mapping of the original inode and\n        converting contiguous block ranges into ext4 extents of a temporary\n        inode. Then, the inodes are swapped. This ioctl might help when\n        migrating from an ext3 to an ext4 filesystem; however, the suggestion\n        is to create a fresh ext4 filesystem and copy the data from a backup.\n        Note that the filesystem has to support extents for this ioctl to\n        work.\n\n  EXT4_IOC_ALLOC_DA_BLKS\n        Force all of the delayed-allocation blocks to be allocated to preserve\n        application-expected ext3 behaviour. Note that this will also start\n        triggering a write of the data blocks, but this behaviour may change in\n        the future as it is not necessary and has been done this way only for\n        the sake of simplicity.\n\n  EXT4_IOC_RESIZE_FS\n        Resize the filesystem to a new size.  The number of blocks of the\n        resized filesystem is passed in via a 64-bit integer argument.  
The kernel\n        allocates bitmaps and inode table, the userspace tool thus just passes\n        the new number of blocks.\n\n  EXT4_IOC_SWAP_BOOT\n        Swap i_blocks and associated attributes (like i_blocks, i_size,\n        i_flags, ...) from the specified inode with inode EXT4_BOOT_LOADER_INO\n        (#5). This is typically used to store a boot loader in a secure part of\n        the filesystem, where it can't be changed by a normal user by accident.\n        The data blocks of the previous boot loader will be associated with the\n        given inode.\n\nReferences\n==========\n\nkernel source:\t<file:fs\/ext4\/>\n\t\t<file:fs\/jbd2\/>\n\nprograms:\thttp:\/\/e2fsprogs.sourceforge.net\/\n\nuseful links:\thttps:\/\/fedoraproject.org\/wiki\/ext3-devel\n\t\thttp:\/\/www.bullopensource.org\/ext4\/\n\t\thttp:\/\/ext4.wiki.kernel.org\/index.php\/Main_Page\n\t\thttps:\/\/fedoraproject.org\/wiki\/Features\/Ext4","site":"linux","answers_cleaned":"   SPDX License Identifier  GPL 2 0                           ext4 General Information                           Ext4 is an advanced level of the ext3 filesystem which incorporates scalability and reliability enhancements for supporting large filesystems  64 bit  in keeping with increasing disk capacities and state of the art feature requirements   Mailing list  linux ext4 vger kernel org Web site  http   ext4 wiki kernel org   Quick usage instructions                           Note  More extensive information for getting started with ext4 can be found at the ext4 wiki site at the URL  http   ext4 wiki kernel org index php Ext4 Howto      The latest version of e2fsprogs can be found at       https   www kernel org pub linux kernel people tytso e2fsprogs    or      http   sourceforge net project showfiles php group id 2406   or grab the latest git repository from      https   git kernel org pub scm fs ext2 e2fsprogs git      Create a new filesystem using the ext4 filesystem type             mke2fs  t ext4  dev hda1      Or 
to configure an existing ext3 filesystem to support extents      tune2fs  O extents  dev hda1      If the filesystem was created with 128 byte inodes  it can be     converted to use 256 byte for greater efficiency via             tune2fs  I 256  dev hda1      Mounting      mount  t ext4  dev hda1  wherever      When comparing performance with other filesystems  it s always     important to try multiple workloads  very often a subtle change in a     workload parameter can completely change the ranking of which     filesystems do well compared to others   When comparing versus ext3      note that ext4 enables write barriers by default  while ext3 does     not enable write barriers by default   So it is useful to use     explicitly specify whether barriers are enabled or not when via the       o barriers  0 1   mount option for both ext3 and ext4 filesystems     for a fair comparison   When tuning ext3 for best benchmark numbers      it is often worthwhile to try changing the data journaling mode    o     data writeback  can be faster for some workloads    Note however that     running mounted with data writeback can potentially leave stale data     exposed in recently written files in case of an unclean shutdown      which could be a security exposure in some situations    Configuring     the filesystem with a large journal can also be helpful for     metadata intensive workloads   Features           Currently Available                        ability to use filesystems   16TB  e2fsprogs support not available yet    extent format reduces metadata overhead  RAM  IO for access  transactions    extent format more robust in face of on disk corruption due to magics    internal redundancy in tree   improved file allocation  multi block alloc    lift 32000 subdirectory limit imposed by i links count 1    nsec timestamps for mtime  atime  ctime  create time   inode version field on disk  NFSv4  Lustre    reduced e2fsck time via uninit bg feature   journal checksumming for 
robustness  performance   persistent file preallocation  e g for streaming media  databases    ability to pack bitmaps and inode tables into larger virtual groups via the   flex bg feature   large file support   inode allocation using large virtual block groups via flex bg   delayed allocation   large block  up to pagesize  support   efficient new ordered mode in JBD2 and ext4  avoid using buffer head to force   the ordering    Case insensitive file name lookups   file based encryption support  fscrypt    file based verity support  fsverity    1  Filesystems with a block size of 1k may see a limit imposed by the directory hash tree having a maximum depth of two   case insensitive file name lookups                                                         The case insensitive file name lookup feature is supported on a per directory basis  allowing the user to mix case insensitive and case sensitive directories in the same filesystem   It is enabled by flipping the  F inode attribute of an empty directory   The case insensitive string match operation is only defined when we know how text in encoded in a byte sequence   For that reason  in order to enable case insensitive directories  the filesystem must have the casefold feature  which stores the filesystem wide encoding model used   By default  the charset adopted is the latest version of Unicode  12 1 0  by the time of this writing   encoded in the UTF 8 form   The comparison algorithm is implemented by normalizing the strings to the Canonical decomposition form  as defined by Unicode  followed by a byte per byte comparison   The case awareness is name preserving on the disk  meaning that the file name provided by userspace is a byte per byte match to what is actually written in the disk   The Unicode normalization format used by the kernel is thus an internal representation  and not exposed to the userspace nor to the disk  with the important exception of disk hashes  used on large case insensitive directories with 
DX feature   On DX directories  the hash must be calculated using the casefolded version of the filename  meaning that the normalization format used actually has an impact on where the directory entry is stored   When we change from viewing filenames as opaque byte sequences to seeing them as encoded strings we need to address what happens when a program tries to create a file with an invalid name   The Unicode subsystem within the kernel leaves the decision of what to do in this case to the filesystem  which select its preferred behavior by enabling disabling the strict mode   When Ext4 encounters one of those strings and the filesystem did not require strict mode  it falls back to considering the entire string as an opaque byte sequence  which still allows the user to operate on that file  but the case insensitive lookups won t work   Options          When mounting an ext4 filesystem  the following option are accepted         default    ro         Mount filesystem read only  Note that ext4 will replay the journal  and         thus write to the partition  even when mounted  read only   The mount         options  ro noload  can be used to prevent writes to the filesystem     journal checksum         Enable checksumming of the journal transactions   This will allow the         recovery code in e2fsck and the kernel to detect corruption in the         kernel   It is a compatible change and will be ignored by older         kernels     journal async commit         Commit block can be written to disk without waiting for descriptor         blocks  If enabled older kernels cannot mount the device  This will         enable  journal checksum  internally     journal path path  journal dev devnum         When the external journal device s major minor numbers have changed          these options allow the user to specify the new journal location   The         journal device is identified through either its new major minor numbers         encoded in devnum  or via a path to the 
device     norecovery  noload         Don t load the journal on mounting   Note that if the filesystem was         not unmounted cleanly  skipping the journal replay will lead to the         filesystem containing inconsistencies that can lead to any number of         problems     data journal         All data are committed into the journal prior to being written into the         main file system   Enabling this mode will disable delayed allocation         and O DIRECT support     data ordered             All data are forced directly out to the main file system prior to its         metadata being committed to the journal     data writeback         Data ordering is not preserved  data may be written into the main file         system after its metadata has been committed to the journal     commit nrsec             This setting limits the maximum age of the running transaction to          nrsec  seconds   The default value is 5 seconds   This means that if         you lose your power  you will lose as much as the latest 5 seconds of         metadata changes  your filesystem will not be damaged though  thanks         to the journaling   This default value  or any low value  will hurt         performance  but it s good for data safety   Setting it to 0 will have         the same effect as leaving it at the default  5 seconds    Setting it         to very large values will improve performance   Note that due to         delayed allocation even older data can be lost on power failure since         writeback of those data begins only after time set in          proc sys vm dirty expire centisecs     barrier  0 1      barrier     nobarrier         This enables disables the use of write barriers in the jbd code          barrier 0 disables  barrier 1 enables   This also requires an IO stack         which can support barriers  and if jbd gets an error on a barrier         write  it will disable again with a warning   Write barriers enforce         proper on disk ordering of 
journal commits  making volatile disk write         caches safe to use  at some performance penalty   If your disks are         battery backed in one way or another  disabling barriers may safely         improve performance   The mount options  barrier  and  nobarrier  can         also be used to enable or disable barriers  for consistency with other         ext4 mount options     inode readahead blks n         This tuning parameter controls the maximum number of inode table blocks         that ext4 s inode table readahead algorithm will pre read into the         buffer cache   The default value is 32 blocks     bsddf             Make  df  act like BSD     minixdf         Make  df  act like Minix     debug         Extra debugging information is sent to syslog     abort         Simulate the effects of calling ext4 abort   for debugging purposes          This is normally used while remounting a filesystem which is already         mounted     errors remount ro         Remount the filesystem read only on an error     errors continue         Keep going on a filesystem error     errors panic         Panic and halt the machine if an error occurs    These mount options         override the errors behavior specified in the superblock  which can be         configured using tune2fs     data err ignore            Just print an error message if an error occurs in a file data buffer in         ordered mode    data err abort         Abort the journal if an error occurs in a file data buffer in ordered         mode     grpid   bsdgroups         New objects have the group ID of their parent     nogrpid       sysvgroups         New objects have the group ID of their creator     resgid n         The group ID which may use the reserved blocks     resuid n         The user ID which may use the reserved blocks     sb          Use alternate superblock at this location     quota  noquota  grpquota  usrquota         These options are ignored by the filesystem  They are used only by         
quota tools to recognize volumes where quota should be turned on  See         documentation in the quota tools package for more details          http   sourceforge net projects linuxquota      jqfmt  quota type   usrjquota  file   grpjquota  file          These options tell filesystem details about quota so that quota         information can be properly updated during journal replay  They replace         the above quota options  See documentation in the quota tools package         for more details  http   sourceforge net projects linuxquota      stripe n         Number of filesystem blocks that mballoc will try to use for allocation         size and alignment  For RAID5 6 systems this should be the number of         data disks    RAID chunk size in file system blocks     delalloc             Defer block allocation until just before ext4 writes out the block s          in question   This allows ext4 to better allocation decisions more         efficiently     nodelalloc         Disable delayed allocation   Blocks are allocated when the data is         copied from userspace to the page cache  either via the write 2  system         call or when an mmap ed page which was previously unallocated is         written for the first time     max batch time usec         Maximum amount of time ext4 should wait for additional filesystem         operations to be batch together with a synchronous write operation          Since a synchronous write operation is going to force a commit and then         a wait for the I O complete  it doesn t cost much  and can be a huge         throughput win  we wait for a small amount of time to see if any other         transactions can piggyback on the synchronous write    The algorithm         used is designed to automatically tune for the speed of the disk  by         measuring the amount of time  on average  that it takes to finish         committing a transaction   Call this time the  commit time    If the         time that the transaction has 
been running is less than the commit         time  ext4 will try sleeping for the commit time to see if other         operations will join the transaction    The commit time is capped by         the max batch time  which defaults to 15000us  15ms     This         optimization can be turned off entirely by setting max batch time to 0     min batch time usec         This parameter sets the commit time  as described above  to be at least         min batch time   It defaults to zero microseconds   Increasing this         parameter may improve the throughput of multi threaded  synchronous         workloads on very fast disks  at the cost of increasing latency     journal ioprio prio         The I O priority  from 0 to 7  where 0 is the highest priority  which         should be used for I O operations submitted by kjournald2 during a         commit operation   This defaults to 3  which is a slightly higher         priority than the default I O priority     auto da alloc     noauto da alloc         Many broken applications don t use fsync   when replacing existing         files via patterns such as fd   open  foo new   write fd     close fd           rename  foo new    foo    or worse yet  fd   open  foo           O TRUNC  write fd     close fd    If auto da alloc is enabled  ext4         will detect the replace via rename and replace via truncate patterns         and force that any delayed allocation blocks are allocated such that at         the next journal commit  in the default data ordered mode  the data         blocks of the new file are forced to disk before the rename   operation         is committed   This provides roughly the same level of guarantees as         ext3  and avoids the  zero length  problem that can happen when a         system crashes before the delayed allocation blocks are forced to disk     noinit itable         Do not initialize any uninitialized inode table blocks in the         background   This feature may be used by installation CD s so 
that the         install process can complete as quickly as possible  the inode table         initialization process would then be deferred until the next time the         file system is unmounted     init itable n         The lazy itable init code will wait n times the number of milliseconds         it took to zero out the previous block group s inode table   This         minimizes the impact on the system performance while file system s         inode table is being initialized     discard  nodiscard            Controls whether ext4 should issue discard TRIM commands to the         underlying block device when blocks are freed   This is useful for SSD         devices and sparse thinly provisioned LUNs  but it is off by default         until sufficient testing has been done     nouid32         Disables 32 bit UIDs and GIDs   This is for interoperability  with         older kernels which only store and expect 16 bit values     block validity     noblock validity         These options enable or disable the in kernel facility for tracking         filesystem metadata blocks within internal data structures   This         allows multi  block allocator and other routines to notice bugs or         corrupted allocation bitmaps which cause blocks to be allocated which         overlap with filesystem metadata blocks     dioread lock  dioread nolock         Controls whether or not ext4 should use the DIO read locking  If the         dioread nolock option is specified ext4 will allocate uninitialized         extent before buffer write and convert the extent to initialized after         IO completes  This approach allows ext4 code to avoid using inode         mutex  which improves scalability on high speed storages  However this         does not work with data journaling and dioread nolock option will be         ignored with kernel warning  Note that dioread nolock code path is only         used for extent based files   Because of the restrictions this options         comprises 
it is off by default  e g  dioread lock      max dir size kb n         This limits the size of directories so that any attempt to expand them         beyond the specified limit in kilobytes will cause an ENOSPC error          This is useful in memory constrained environments  where a very large         directory can cause severe performance problems or even provoke the Out         Of Memory killer    For example  if there is only 512mb memory         available  a 176mb directory may seriously cramp the system s style      i version         Enable 64 bit inode version support  This option is off by default     dax         Use direct access  no page cache    See         Documentation filesystems dax rst   Note that this option is         incompatible with data journal     inlinecrypt         When possible  encrypt decrypt the contents of encrypted files using the         blk crypto framework rather than filesystem layer encryption  This         allows the use of inline encryption hardware  The on disk format is         unaffected  For more details  see         Documentation block inline encryption rst   Data Mode           There are 3 different data modes     writeback mode    In data writeback mode  ext4 does not journal data at all   This mode provides   a similar level of journaling as that of XFS  JFS  and ReiserFS in its default   mode   metadata journaling   A crash recovery can cause incorrect data to   appear in files which were written shortly before the crash   This mode will   typically provide the best ext4 performance     ordered mode    In data ordered mode  ext4 only officially journals metadata  but it logically   groups metadata information related to data changes with the data blocks into   a single unit called a transaction   When it s time to write the new metadata   out to disk  the associated data blocks are written first   In general  this   mode performs slightly slower than writeback but significantly faster than   journal mode     journal 
mode    data journal mode provides full data and metadata journaling   All new data is   written to the journal first  and then to its final location   In the event of   a crash  the journal can be replayed  bringing both data and metadata into a   consistent state   This mode is the slowest except when data needs to be read   from and written to disk at the same time where it outperforms all others   modes   Enabling this mode will disable delayed allocation and O DIRECT   support    proc entries                Information about mounted ext4 file systems can be found in  proc fs ext4   Each mounted filesystem will have a directory in  proc fs ext4 based on its device name  i e    proc fs ext4 hdc or  proc fs ext4 dm 0     The files in each per device directory are shown in table below   Files in  proc fs ext4  devname     mb groups         details of multiblock allocator buddy cache of free blocks   sys entries               Information about mounted ext4 file systems can be found in  sys fs ext4   Each mounted filesystem will have a directory in  sys fs ext4 based on its device name  i e    sys fs ext4 hdc or  sys fs ext4 dm 0     The files in each per device directory are shown in table below   Files in  sys fs ext4  devname     see also Documentation ABI testing sysfs fs ext4     delayed allocation blocks         This file is read only and shows the number of blocks that are dirty in         the page cache  but which do not have their location in the filesystem         allocated yet     inode goal         Tuning parameter which  if non zero  controls the goal inode used by         the inode allocator in preference to all other allocation heuristics          This is intended for debugging use only  and should be 0 on production         systems     inode readahead blks         Tuning parameter which controls the maximum number of inode table         blocks that ext4 s inode table readahead algorithm will pre read into         the buffer cache     lifetime write 
kbytes         This file is read only and shows the number of kilobytes of data that         have been written to this filesystem since it was created     max writeback mb bump         The maximum number of megabytes the writeback code will try to write         out before move on to another inode     mb group prealloc         The multiblock allocator will round up allocation requests to a         multiple of this tuning parameter if the stripe size is not set in the         ext4 superblock    mb max to scan         The maximum number of extents the multiblock allocator will search to         find the best extent     mb min to scan         The minimum number of extents the multiblock allocator will search to         find the best extent     mb order2 req         Tuning parameter which controls the minimum size for requests  as a         power of 2  where the buddy cache is used     mb stats         Controls whether the multiblock allocator should collect statistics          which are shown during the unmount  1 means to collect statistics  0         means not to collect statistics     mb stream req         Files which have fewer blocks than this tunable parameter will have         their blocks allocated out of a block group specific preallocation         pool  so that small files are packed closely together   Each large file         will have its blocks allocated out of its own unique preallocation         pool     session write kbytes         This file is read only and shows the number of kilobytes of data that         have been written to this filesystem since it was mounted     reserved clusters         This is RW file and contains number of reserved clusters in the file         system which will be used in the specific situations to avoid costly         zeroout  unexpected ENOSPC  or possible data loss  The default is 2  or         4096 clusters  whichever is smaller and this can be changed however it         can never exceed number of clusters in the file 
system  If there is not         enough space for the reserved space when mounting the file mount will          not  fail   Ioctls         Ext4 implements various ioctls which can be used by applications to access ext4 specific functionality  An incomplete list of these ioctls is shown in the table below  This list includes truly ext4 specific ioctls    EXT4 IOC      as well as ioctls that may have been ext4 specific originally but are now supported by some other filesystem s  too    FS IOC        Table of Ext4 ioctls    FS IOC GETFLAGS         Get additional attributes associated with inode   The ioctl argument is         an integer bitfield  with bit values described in ext4 h     FS IOC SETFLAGS         Set additional attributes associated with inode   The ioctl argument is         an integer bitfield  with bit values described in ext4 h     EXT4 IOC GETVERSION  EXT4 IOC GETVERSION OLD         Get the inode i generation number stored for each inode  The         i generation number is normally changed only when new inode is created         and it is particularly useful for network filesystems  The   OLD          version of this ioctl is an alias for FS IOC GETVERSION     EXT4 IOC SETVERSION  EXT4 IOC SETVERSION OLD         Set the inode i generation number stored for each inode  The   OLD          version of this ioctl is an alias for FS IOC SETVERSION     EXT4 IOC GROUP EXTEND         This ioctl has the same purpose as the resize mount option  It allows         to resize filesystem to the end of the last existing block group          further resize has to be done with resize2fs  either online  or         offline  The argument points to the unsigned logn number representing         the filesystem new block count     EXT4 IOC MOVE EXT         Move the block extents from orig fd  the one this ioctl is pointing to          to the donor fd  the one specified in move extent structure passed as         an argument to this ioctl   Then  exchange inode metadata between    
     orig fd and donor fd   This is especially useful for online         defragmentation  because the allocator has the opportunity to allocate         moved blocks better  ideally into one contiguous extent     EXT4 IOC GROUP ADD         Add a new group descriptor to an existing or new group descriptor         block  The new group descriptor is described by ext4 new group input         structure  which is passed as an argument to this ioctl  This is         especially useful in conjunction with EXT4 IOC GROUP EXTEND  which         allows online resize of the filesystem to the end of the last existing         block group   Those two ioctls combined is used in userspace online         resize tool  e g  resize2fs      EXT4 IOC MIGRATE         This ioctl operates on the filesystem itself   It converts  migrates          ext3 indirect block mapped inode to ext4 extent mapped inode by walking         through indirect block mapping of the original inode and converting         contiguous block ranges into ext4 extents of the temporary inode  Then          inodes are swapped  This ioctl might help  when migrating from ext3 to         ext4 filesystem  however suggestion is to create fresh ext4 filesystem         and copy data from the backup  Note  that filesystem has to support         extents for this ioctl to work     EXT4 IOC ALLOC DA BLKS         Force all of the delay allocated blocks to be allocated to preserve         application expected ext3 behaviour  Note that this will also start         triggering a write of the data blocks  but this behaviour may change in         the future as it is not necessary and has been done this way only for         sake of simplicity     EXT4 IOC RESIZE FS         Resize the filesystem to a new size   The number of blocks of resized         filesystem is passed in via 64 bit integer argument   The kernel         allocates bitmaps and inode table  the userspace tool thus just passes         the new number of blocks     EXT4 IOC SWAP 
BOOT         Swap i blocks and associated attributes  like i blocks  i size          i flags       from the specified inode with inode EXT4 BOOT LOADER INO           5   This is typically used to store a boot loader in a secure part of         the filesystem  where it can t be changed by a normal user by accident          The data blocks of the previous boot loader will be associated with the         given inode   References             kernel source   file fs ext4      file fs jbd2    programs  http   e2fsprogs sourceforge net   useful links  https   fedoraproject org wiki ext3 devel   http   www bullopensource org ext4    http   ext4 wiki kernel org index php Main Page   https   fedoraproject org wiki Features Ext4"}
{"questions":"linux This list is the Linux Device List the official registry of allocated admindevices Linux allocated devices 4 x version system The version of this document at lanana org is no longer maintained This device numbers and directory nodes for the Linux operating","answers":".. _admin_devices:\n\nLinux allocated devices (4.x+ version)\n======================================\n\nThis list is the Linux Device List, the official registry of allocated\ndevice numbers and ``\/dev`` directory nodes for the Linux operating\nsystem.\n\nThe version of this document at lanana.org is no longer maintained.  This\nversion in the mainline Linux kernel is the master document.  Updates\nshall be sent as patches to the kernel maintainers (see the\n:ref:`Documentation\/process\/submitting-patches.rst <submittingpatches>` document).\nSpecifically explore the sections titled \"CHAR and MISC DRIVERS\", and\n\"BLOCK LAYER\" in the MAINTAINERS file to find the right maintainers\nto involve for character and block devices.\n\nThis document is included by reference into the Filesystem Hierarchy\nStandard (FHS).\t The FHS is available from https:\/\/www.pathname.com\/fhs\/.\n\nAllocations marked (68k\/Amiga) apply to Linux\/68k on the Amiga\nplatform only.\tAllocations marked (68k\/Atari) apply to Linux\/68k on\nthe Atari platform only.\n\nThis document is in the public domain.\tThe authors request, however,\nthat semantically altered versions are not distributed without\npermission of the authors, assuming the authors can be contacted without\nan unreasonable effort.\n\n\n.. attention::\n\n  DEVICE DRIVERS AUTHORS PLEASE READ THIS\n\n  Linux now has extensive support for dynamic allocation of device numbering\n  and can use ``sysfs`` and ``udev`` (``systemd``) to handle the naming needs.\n  There are still some exceptions in the serial and boot device area. 
Before\n  asking for a device number make sure you actually need one.\n\n  To have a major number allocated, or a minor number in situations\n  where that applies (e.g. busmice), please submit a patch and send to\n  the authors as indicated above.\n\n  Keep the description of the device *in the same format\n  as this list*. The reason for this is that it is the only way we have\n  found to ensure we have all the requisite information to publish your\n  device and avoid conflicts.\n\n  Finally, sometimes we have to play \"namespace police.\"  Please don't be\n  offended.  We often get submissions for ``\/dev`` names that would be bound\n  to cause conflicts down the road.  We are trying to avoid getting in a\n  situation where we would have to suffer an incompatible forward\n  change.  Therefore, please consult with us **before** you make your\n  device names and numbers in any way public, at least to the point\n  where it would be at all difficult to get them changed.\n\n  Your cooperation is appreciated.\n\n.. include:: devices.txt\n   :literal:\n\nAdditional ``\/dev\/`` directory entries\n--------------------------------------\n\nThis section details additional entries that should or may exist in\nthe \/dev directory.  It is preferred that symbolic links use the same\nform (absolute or relative) as is indicated here.  
Links are\nclassified as \"hard\" or \"symbolic\" depending on the preferred type of\nlink; if possible, the indicated type of link should be used.\n\nCompulsory links\n++++++++++++++++\n\nThese links should exist on all systems:\n\n=============== =============== =============== ===============================\n\/dev\/fd\t\t\/proc\/self\/fd\tsymbolic\tFile descriptors\n\/dev\/stdin\tfd\/0\t\tsymbolic\tstdin file descriptor\n\/dev\/stdout\tfd\/1\t\tsymbolic\tstdout file descriptor\n\/dev\/stderr\tfd\/2\t\tsymbolic\tstderr file descriptor\n\/dev\/nfsd\tsocksys\t\tsymbolic\tRequired by iBCS-2\n\/dev\/X0R\tnull\t\tsymbolic\tRequired by iBCS-2\n=============== =============== =============== ===============================\n\nNote: ``\/dev\/X0R`` is <letter X>-<digit 0>-<letter R>.\n\nRecommended links\n+++++++++++++++++\n\nIt is recommended that these links exist on all systems:\n\n\n=============== =============== =============== ===============================\n\/dev\/core\t\/proc\/kcore\tsymbolic\tBackward compatibility\n\/dev\/ramdisk\tram0\t\tsymbolic\tBackward compatibility\n\/dev\/ftape\tqft0\t\tsymbolic\tBackward compatibility\n\/dev\/bttv0\tvideo0\t\tsymbolic\tBackward compatibility\n\/dev\/radio\tradio0\t\tsymbolic\tBackward compatibility\n\/dev\/i2o*\t\/dev\/i2o\/*\tsymbolic\tBackward compatibility\n\/dev\/scd?\tsr?\t\thard\t\tAlternate SCSI CD-ROM name\n=============== =============== =============== ===============================\n\nLocally defined links\n+++++++++++++++++++++\n\nThe following links may be established locally to conform to the\nconfiguration of the system.  This is merely a tabulation of existing\npractice, and does not constitute a recommendation.  
However, if they\nexist, they should have the following uses.\n\n=============== =============== =============== ===============================\n\/dev\/mouse\tmouse port\tsymbolic\tCurrent mouse device\n\/dev\/tape\ttape device\tsymbolic\tCurrent tape device\n\/dev\/cdrom\tCD-ROM device\tsymbolic\tCurrent CD-ROM device\n\/dev\/cdwriter\tCD-writer\tsymbolic\tCurrent CD-writer device\n\/dev\/scanner\tscanner\t\tsymbolic\tCurrent scanner device\n\/dev\/modem\tmodem port\tsymbolic\tCurrent dialout device\n\/dev\/root\troot device\tsymbolic\tCurrent root filesystem\n\/dev\/swap\tswap device\tsymbolic\tCurrent swap device\n=============== =============== =============== ===============================\n\n``\/dev\/modem`` should not be used for a modem which supports dialin as\nwell as dialout, as it tends to cause lock file problems.  If it\nexists, ``\/dev\/modem`` should point to the appropriate primary TTY device\n(the use of the alternate callout devices is deprecated).\n\nFor SCSI devices, ``\/dev\/tape`` and ``\/dev\/cdrom`` should point to the\n*cooked* devices (``\/dev\/st*`` and ``\/dev\/sr*``, respectively), whereas\n``\/dev\/cdwriter`` and \/dev\/scanner should point to the appropriate generic\nSCSI devices (\/dev\/sg*).\n\n``\/dev\/mouse`` may point to a primary serial TTY device, a hardware mouse\ndevice, or a socket for a mouse driver program (e.g. ``\/dev\/gpmdata``).\n\nSockets and pipes\n+++++++++++++++++\n\nNon-transient sockets and named pipes may exist in \/dev.  Common entries are:\n\n=============== =============== ===============================================\n\/dev\/printer\tsocket\t\tlpd local socket\n\/dev\/log\tsocket\t\tsyslog local socket\n\/dev\/gpmdata\tsocket\t\tgpm mouse multiplexer\n=============== =============== ===============================================\n\nMount points\n++++++++++++\n\nThe following names are reserved for mounting special filesystems\nunder \/dev.  
These special filesystems provide kernel interfaces that\ncannot be provided with standard device nodes.\n\n=============== =============== ===============================================\n\/dev\/pts\tdevpts\t\tPTY slave filesystem\n\/dev\/shm\ttmpfs\t\tPOSIX shared memory maintenance access\n=============== =============== ===============================================\n\nTerminal devices\n----------------\n\nTerminal, or TTY devices are a special class of character devices.  A\nterminal device is any device that could act as a controlling terminal\nfor a session; this includes virtual consoles, serial ports, and\npseudoterminals (PTYs).\n\nAll terminal devices share a common set of capabilities known as line\ndisciplines; these include the common terminal line discipline as well\nas SLIP and PPP modes.\n\nAll terminal devices are named similarly; this section explains the\nnaming and use of the various types of TTYs.  Note that the naming\nconventions include several historical warts; some of these are\nLinux-specific, some were inherited from other systems, and some\nreflect Linux outgrowing a borrowed convention.\n\nA hash mark (``#``) in a device name is used here to indicate a decimal\nnumber without leading zeroes.\n\nVirtual consoles and the console device\n+++++++++++++++++++++++++++++++++++++++\n\nVirtual consoles are full-screen terminal displays on the system video\nmonitor.  Virtual consoles are named ``\/dev\/tty#``, with numbering\nstarting at ``\/dev\/tty1``; ``\/dev\/tty0`` is the current virtual console.\n``\/dev\/tty0`` is the device that should be used to access the system video\ncard on those architectures for which the frame buffer devices\n(``\/dev\/fb*``) are not applicable. Do not use ``\/dev\/console``\nfor this purpose.\n\nThe console device, ``\/dev\/console``, is the device to which system\nmessages should be sent, and on which logins should be permitted in\nsingle-user mode.  
Starting with Linux 2.1.71, ``\/dev\/console`` is managed\nby the kernel; for previous versions it should be a symbolic link to\neither ``\/dev\/tty0``, a specific virtual console such as ``\/dev\/tty1``, or to\na serial port primary (``tty*``, not ``cu*``) device, depending on the\nconfiguration of the system.\n\nSerial ports\n++++++++++++\n\nSerial ports are RS-232 serial ports and any device which simulates\none, either in hardware (such as internal modems) or in software (such\nas the ISDN driver.)  Under Linux, each serial port has two device\nnames, the primary or callin device and the alternate or callout one.\nEach kind of device is indicated by a different letter.\t For any\nletter X, the names of the devices are ``\/dev\/ttyX#`` and ``\/dev\/cux#``,\nrespectively; for historical reasons, ``\/dev\/ttyS#`` and ``\/dev\/ttyC#``\ncorrespond to ``\/dev\/cua#`` and ``\/dev\/cub#``. In the future, it should be\nexpected that multiple letters will be used; all letters will be upper\ncase for the \"tty\" device (e.g. ``\/dev\/ttyDP#``) and lower case for the\n\"cu\" device (e.g. ``\/dev\/cudp#``).\n\nThe names ``\/dev\/ttyQ#`` and ``\/dev\/cuq#`` are reserved for local use.\n\nThe alternate devices provide for kernel-based exclusion and somewhat\ndifferent defaults than the primary devices.  Their main purpose is to\nallow the use of serial ports with programs with no inherent or broken\nsupport for serial ports.  Their use is deprecated, and they may be\nremoved from a future version of Linux.\n\nArbitration of serial ports is provided by the use of lock files with\nthe names ``\/var\/lock\/LCK..ttyX#``. The contents of the lock file should\nbe the PID of the locking process as an ASCII number.\n\nIt is common practice to install links such as \/dev\/modem\nwhich point to serial ports.  
In order to ensure proper locking in the\npresence of these links, it is recommended that software chase\nsymlinks and lock all possible names; additionally, it is recommended\nthat a lock file be installed with the corresponding alternate\ndevice.\t In order to avoid deadlocks, it is recommended that the locks\nare acquired in the following order, and released in the reverse:\n\n\t1. The symbolic link name, if any (``\/var\/lock\/LCK..modem``)\n\t2. The \"tty\" name (``\/var\/lock\/LCK..ttyS2``)\n\t3. The alternate device name (``\/var\/lock\/LCK..cua2``)\n\nIn the case of nested symbolic links, the lock files should be\ninstalled in the order the symlinks are resolved.\n\nUnder no circumstances should an application hold a lock while waiting\nfor another to be released.  In addition, applications which attempt\nto create lock files for the corresponding alternate device names\nshould take into account the possibility of being used on a non-serial\nport TTY, for which no alternate device would exist.\n\nPseudoterminals (PTYs)\n++++++++++++++++++++++\n\nPseudoterminals, or PTYs, are used to create login sessions or provide\nother capabilities requiring a TTY line discipline (including SLIP or\nPPP capability) to arbitrary data-generation processes.\t Each PTY has\na master side, named ``\/dev\/pty[p-za-e][0-9a-f]``, and a slave side, named\n``\/dev\/tty[p-za-e][0-9a-f]``.  The kernel arbitrates the use of PTYs by\nallowing each master side to be opened only once.\n\nOnce the master side has been opened, the corresponding slave device\ncan be used in the same manner as any TTY device.  
The master and\nslave devices are connected by the kernel, generating the equivalent\nof a bidirectional pipe with TTY capabilities.\n\nRecent versions of the Linux kernels and GNU libc contain support for\nthe System V\/Unix98 naming scheme for PTYs, which assigns a common\ndevice, ``\/dev\/ptmx``, to all the masters (opening it will automatically\ngive you a previously unassigned PTY) and a subdirectory, ``\/dev\/pts``,\nfor the slaves; the slaves are named with decimal integers (``\/dev\/pts\/#``\nin our notation).  This removes the problem of exhausting the\nnamespace and enables the kernel to automatically create the device\nnodes for the slaves on demand using the \"devpts\" filesystem.","site":"linux"}
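The serial-port lock-file protocol described in the record above (PID stored as an ASCII number; locks acquired symlink name, then "tty" name, then alternate name, and released in reverse) can be sketched in bash. A temporary directory stands in for `/var/lock`, and the `LCK..` names are the examples from the text:

```bash
# Sketch of the documented lock ordering, using a scratch directory
# instead of /var/lock (paths here are illustrative only).
LOCKDIR=$(mktemp -d)
# Acquire: symbolic-link name, then "tty" name, then alternate name.
for name in LCK..modem LCK..ttyS2 LCK..cua2; do
  printf '%10d\n' "$$" > "$LOCKDIR/$name"   # PID as an ASCII number
done
cat "$LOCKDIR/LCK..ttyS2"                   # the locking process's PID
# Release in the reverse order to avoid deadlocks.
for name in LCK..cua2 LCK..ttyS2 LCK..modem; do
  rm "$LOCKDIR/$name"
done
```

Real dialout software would additionally open each lock file with `O_CREAT|O_EXCL` so that creation itself is the atomic arbitration step.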
{"questions":"linux When Linux developers talk about a Real Time Clock they usually mean UTC formerly Greenwich Mean Time works even with system power off Such clocks will normally not track the local time zone or daylight savings time unless they dual boot Real Time Clock RTC Drivers for Linux with MS Windows but will instead be set to Coordinated Universal Time something that tracks wall clock time and is battery backed so that it","answers":"=======================================\nReal Time Clock (RTC) Drivers for Linux\n=======================================\n\nWhen Linux developers talk about a \"Real Time Clock\", they usually mean\nsomething that tracks wall clock time and is battery backed so that it\nworks even with system power off.  Such clocks will normally not track\nthe local time zone or daylight savings time -- unless they dual boot\nwith MS-Windows -- but will instead be set to Coordinated Universal Time\n(UTC, formerly \"Greenwich Mean Time\").\n\nThe newest non-PC hardware tends to just count seconds, like the time(2)\nsystem call reports, but RTCs also very commonly represent time using\nthe Gregorian calendar and 24 hour time, as reported by gmtime(3).\n\nLinux has two largely-compatible userspace RTC API families you may\nneed to know about:\n\n    *\t\/dev\/rtc ... is the RTC provided by PC compatible systems,\n\tso it's not very portable to non-x86 systems.\n\n    *\t\/dev\/rtc0, \/dev\/rtc1 ... are part of a framework that's\n\tsupported by a wide variety of RTC chips on all systems.\n\nProgrammers need to understand that the PC\/AT functionality is not\nalways available, and some systems can do much more.  That is, the\nRTCs use the same API to make requests in both RTC frameworks (using\ndifferent filenames of course), but the hardware may not offer the\nsame functionality.  
For example, not every RTC is hooked up to an\nIRQ, so they can't all issue alarms; and where standard PC RTCs can\nonly issue an alarm up to 24 hours in the future, other hardware may\nbe able to schedule one any time in the upcoming century.\n\n\nOld PC\/AT-Compatible driver:  \/dev\/rtc\n--------------------------------------\n\nAll PCs (even Alpha machines) have a Real Time Clock built into them.\nUsually they are built into the chipset of the computer, but some may\nactually have a Motorola MC146818 (or clone) on the board. This is the\nclock that keeps the date and time while your computer is turned off.\n\nACPI has standardized that MC146818 functionality, and extended it in\na few ways (enabling longer alarm periods, and wake-from-hibernate).\nThat functionality is NOT exposed in the old driver.\n\nHowever it can also be used to generate signals from a slow 2Hz to a\nrelatively fast 8192Hz, in increments of powers of two. These signals\nare reported by interrupt number 8. (Oh! So *that* is what IRQ 8 is\nfor...) It can also function as a 24hr alarm, raising IRQ 8 when the\nalarm goes off. The alarm can also be programmed to only check any\nsubset of the three programmable values, meaning that it could be set to\nring on the 30th second of the 30th minute of every hour, for example.\nThe clock can also be set to generate an interrupt upon every clock\nupdate, thus generating a 1Hz signal.\n\nThe interrupts are reported via \/dev\/rtc (major 10, minor 135, read only\ncharacter device) in the form of an unsigned long. The low byte contains\nthe type of interrupt (update-done, alarm-rang, or periodic) that was\nraised, and the remaining bytes contain the number of interrupts since\nthe last read.  Status information is reported through the pseudo-file\n\/proc\/driver\/rtc if the \/proc filesystem was enabled.  
The driver has\nbuilt in locking so that only one process is allowed to have the \/dev\/rtc\ninterface open at a time.\n\nA user process can monitor these interrupts by doing a read(2) or a\nselect(2) on \/dev\/rtc -- either will block\/stop the user process until\nthe next interrupt is received. This is useful for things like\nreasonably high frequency data acquisition where one doesn't want to\nburn up 100% CPU by polling gettimeofday etc. etc.\n\nAt high frequencies, or under high loads, the user process should check\nthe number of interrupts received since the last read to determine if\nthere has been any interrupt \"pileup\" so to speak. Just for reference, a\ntypical 486-33 running a tight read loop on \/dev\/rtc will start to suffer\noccasional interrupt pileup (i.e. > 1 IRQ event since last read) for\nfrequencies above 1024Hz. So you really should check the high bytes\nof the value you read, especially at frequencies above that of the\nnormal timer interrupt, which is 100Hz.\n\nProgramming and\/or enabling interrupt frequencies greater than 64Hz is\nonly allowed by root. This is perhaps a bit conservative, but we don't want\nan evil user generating lots of IRQs on a slow 386sx-16, where it might have\na negative impact on performance. This 64Hz limit can be changed by writing\na different value to \/proc\/sys\/dev\/rtc\/max-user-freq. Note that the\ninterrupt handler is only a few lines of code to minimize any possibility\nof this effect.\n\nAlso, if the kernel time is synchronized with an external source, the \nkernel will write the time back to the CMOS clock every 11 minutes. In \nthe process of doing this, the kernel briefly turns off RTC periodic \ninterrupts, so be aware of this if you are doing serious work. 
If you\ndon't synchronize the kernel time with an external source (via ntp or\nwhatever) then the kernel will keep its hands off the RTC, allowing you\nexclusive access to the device for your applications.\n\nThe alarm and\/or interrupt frequency are programmed into the RTC via\nvarious ioctl(2) calls as listed in .\/include\/linux\/rtc.h\nRather than write 50 pages describing the ioctl() and so on, it is\nperhaps more useful to include a small test program that demonstrates\nhow to use them, and demonstrates the features of the driver. This is\nprobably a lot more useful to people interested in writing applications\nthat will be using this driver.  See the code at the end of this document.\n\n(The original \/dev\/rtc driver was written by Paul Gortmaker.)\n\n\nNew portable \"RTC Class\" drivers:  \/dev\/rtcN\n--------------------------------------------\n\nBecause Linux supports many non-ACPI and non-PC platforms, some of which\nhave more than one RTC style clock, it needed a more portable solution\nthan expecting a single battery-backed MC146818 clone on every system.\nAccordingly, a new \"RTC Class\" framework has been defined.  It offers\nthree different userspace interfaces:\n\n    *\t\/dev\/rtcN ... much the same as the older \/dev\/rtc interface\n\n    *\t\/sys\/class\/rtc\/rtcN ... sysfs attributes support readonly\n\taccess to some RTC attributes.\n\n    *\t\/proc\/driver\/rtc ... the system clock RTC may expose itself\n\tusing a procfs interface. If there is no RTC for the system clock,\n\trtc0 is used by default. More information is (currently) shown\n\there than through sysfs.\n\nThe RTC Class framework supports a wide variety of RTCs, ranging from those\nintegrated into embeddable system-on-chip (SOC) processors to discrete chips\nusing I2C, SPI, or some other bus to communicate with the host CPU.  There's\neven support for PC-style RTCs ... 
including the features exposed on newer PCs\nthrough ACPI.\n\nThe new framework also removes the \"one RTC per system\" restriction.  For\nexample, maybe the low-power battery-backed RTC is a discrete I2C chip, but\na high functionality RTC is integrated into the SOC.  That system might read\nthe system clock from the discrete RTC, but use the integrated one for all\nother tasks, because of its greater functionality.\n\nCheck out tools\/testing\/selftests\/rtc\/rtctest.c for an example usage of the\nioctl interface.","site":"linux","answers_cleaned":"                                        Real Time Clock  RTC  Drivers for Linux                                          When Linux developers talk about a  Real Time Clock   they usually mean something that tracks wall clock time and is battery backed so that it works even with system power off   Such clocks will normally not track the local time zone or daylight savings time    unless they dual boot with MS Windows    but will instead be set to Coordinated Universal Time  UTC  formerly  Greenwich Mean Time     The newest non PC hardware tends to just count seconds  like the time 2  system call reports  but RTCs also very commonly represent time using the Gregorian calendar and 24 hour time  as reported by gmtime 3    Linux has two largely compatible userspace RTC API families you may need to know about          dev rtc     is the RTC provided by PC compatible systems   so it s not very portable to non x86 systems          dev rtc0   dev rtc1     are part of a framework that s  supported by a wide variety of RTC chips on all systems   Programmers need to understand that the PC AT functionality is not always available  and some systems can do much more   That is  the RTCs use the same API to make requests in both RTC frameworks  using different filenames of course   but the hardware may not offer the same functionality   For example  not every RTC is hooked up to an IRQ  so they can t all issue alarms  and where 
standard PC RTCs can only issue an alarm up to 24 hours in the future  other hardware may be able to schedule one any time in the upcoming century    Old PC AT Compatible driver    dev rtc                                         All PCs  even Alpha machines  have a Real Time Clock built into them  Usually they are built into the chipset of the computer  but some may actually have a Motorola MC146818  or clone  on the board  This is the clock that keeps the date and time while your computer is turned off   ACPI has standardized that MC146818 functionality  and extended it in a few ways  enabling longer alarm periods  and wake from hibernate   That functionality is NOT exposed in the old driver   However it can also be used to generate signals from a slow 2Hz to a relatively fast 8192Hz  in increments of powers of two  These signals are reported by interrupt number 8   Oh  So  that  is what IRQ 8 is for     It can also function as a 24hr alarm  raising IRQ 8 when the alarm goes off  The alarm can also be programmed to only check any subset of the three programmable values  meaning that it could be set to ring on the 30th second of the 30th minute of every hour  for example  The clock can also be set to generate an interrupt upon every clock update  thus generating a 1Hz signal   The interrupts are reported via  dev rtc  major 10  minor 135  read only character device  in the form of an unsigned long  The low byte contains the type of interrupt  update done  alarm rang  or periodic  that was raised  and the remaining bytes contain the number of interrupts since the last read   Status information is reported through the pseudo file  proc driver rtc if the  proc filesystem was enabled   The driver has built in locking so that only one process is allowed to have the  dev rtc interface open at a time   A user process can monitor these interrupts by doing a read 2  or a select 2  on  dev rtc    either will block stop the user process until the next interrupt is received  
This is useful for things like reasonably high frequency data acquisition where one doesn t want to burn up 100  CPU by polling gettimeofday etc  etc   At high frequencies  or under high loads  the user process should check the number of interrupts received since the last read to determine if there has been any interrupt  pileup  so to speak  Just for reference  a typical 486 33 running a tight read loop on  dev rtc will start to suffer occasional interrupt pileup  i e    1 IRQ event since last read  for frequencies above 1024Hz  So you really should check the high bytes of the value you read  especially at frequencies above that of the normal timer interrupt  which is 100Hz   Programming and or enabling interrupt frequencies greater than 64Hz is only allowed by root  This is perhaps a bit conservative  but we don t want an evil user generating lots of IRQs on a slow 386sx 16  where it might have a negative impact on performance  This 64Hz limit can be changed by writing a different value to  proc sys dev rtc max user freq  Note that the interrupt handler is only a few lines of code to minimize any possibility of this effect   Also  if the kernel time is synchronized with an external source  the  kernel will write the time back to the CMOS clock every 11 minutes  In  the process of doing this  the kernel briefly turns off RTC periodic  interrupts  so be aware of this if you are doing serious work  If you don t synchronize the kernel time with an external source  via ntp or whatever  then the kernel will keep its hands off the RTC  allowing you exclusive access to the device for your applications   The alarm and or interrupt frequency are programmed into the RTC via various ioctl 2  calls as listed in   include linux rtc h Rather than write 50 pages describing the ioctl   and so on  it is perhaps more useful to include a small test program that demonstrates how to use them  and demonstrates the features of the driver  This is probably a lot more useful to people 
interested in writing applications that will be using this driver   See the code at the end of this document    The original  dev rtc driver was written by Paul Gortmaker     New portable  RTC Class  drivers    dev rtcN                                               Because Linux supports many non ACPI and non PC platforms  some of which have more than one RTC style clock  it needed a more portable solution than expecting a single battery backed MC146818 clone on every system  Accordingly  a new  RTC Class  framework has been defined   It offers three different userspace interfaces          dev rtcN     much the same as the older  dev rtc interface         sys class rtc rtcN     sysfs attributes support readonly  access to some RTC attributes          proc driver rtc     the system clock RTC may expose itself  using a procfs interface  If there is no RTC for the system clock   rtc0 is used by default  More information is  currently  shown  here than through sysfs   The RTC Class framework supports a wide variety of RTCs  ranging from those integrated into embeddable system on chip  SOC  processors to discrete chips using I2C  SPI  or some other bus to communicate with the host CPU   There s even support for PC style RTCs     including the features exposed on newer PCs through ACPI   The new framework also removes the  one RTC per system  restriction   For example  maybe the low power battery backed RTC is a discrete I2C chip  but a high functionality RTC is integrated into the SOC   That system might read the system clock from the discrete RTC  but use the integrated one for all other tasks  because of its greater functionality   Check out tools testing selftests rtc rtctest c for an example usage of the ioctl interface "}
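The read(2)/ioctl(2) interface the RTC document describes can be sketched in a few lines of C. This is a hedged illustration, not the rtctest program the text refers to; the decode of the returned unsigned long (low byte = interrupt type, remaining bytes = interrupt count since the last read, i.e. the "pileup" check) follows the text, while the 64 Hz rate and the five-read loop are arbitrary choices for the sketch:

```c
/* Sketch of consuming /dev/rtc periodic interrupts as described above.
 * Decode rule per the text: low byte holds the interrupt type, the
 * remaining (high) bytes hold the interrupt count since the last read. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

unsigned long rtc_irq_count(unsigned long data) { return data >> 8; }
unsigned long rtc_irq_type(unsigned long data)  { return data & 0xffUL; }

/* Read `nreads` periodic interrupts at `hz` (above 64 Hz needs root or a
 * raised /proc/sys/dev/rtc/max-user-freq).  Returns 0 on success. */
int rtc_monitor(unsigned long hz, int nreads)
{
    int fd = open("/dev/rtc", O_RDONLY);  /* driver allows one opener at a time */
    if (fd < 0)
        return -1;
    if (ioctl(fd, RTC_IRQP_SET, hz) < 0 || ioctl(fd, RTC_PIE_ON, 0) < 0) {
        close(fd);
        return -1;
    }
    for (int i = 0; i < nreads; i++) {
        unsigned long data;
        if (read(fd, &data, sizeof data) != sizeof data)  /* blocks until IRQ */
            break;
        if (rtc_irq_count(data) > 1)  /* pileup: >1 IRQ since last read */
            fprintf(stderr, "missed %lu interrupts\n", rtc_irq_count(data) - 1);
    }
    ioctl(fd, RTC_PIE_OFF, 0);
    close(fd);
    return 0;
}
```

The ioctl names (RTC_IRQP_SET, RTC_PIE_ON, RTC_PIE_OFF) come from the include/linux/rtc.h header the document points at; everything else in the sketch is illustrative.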
{"questions":"linux Author Matt Porter RapidIO is a high speed switched fabric interconnect with features aimed Introduction RapidIO Subsystem Guide","answers":"=======================\nRapidIO Subsystem Guide\n=======================\n\n:Author: Matt Porter\n\nIntroduction\n============\n\nRapidIO is a high speed switched fabric interconnect with features aimed\nat the embedded market. RapidIO provides support for memory-mapped I\/O\nas well as message-based transactions over the switched fabric network.\nRapidIO has a standardized discovery mechanism not unlike the PCI bus\nstandard that allows simple detection of devices in a network.\n\nThis documentation is provided for developers intending to support\nRapidIO on new architectures, write new drivers, or to understand the\nsubsystem internals.\n\nKnown Bugs and Limitations\n==========================\n\nBugs\n----\n\nNone. ;)\n\nLimitations\n-----------\n\n1. Access\/management of RapidIO memory regions is not supported\n\n2. Multiple host enumeration is not supported\n\nRapidIO driver interface\n========================\n\nDrivers are provided a set of calls in order to interface with the\nsubsystem to gather info on devices, request\/map memory region\nresources, and manage mailboxes\/doorbells.\n\nFunctions\n---------\n\n.. kernel-doc:: include\/linux\/rio_drv.h\n   :internal:\n\n.. kernel-doc:: drivers\/rapidio\/rio-driver.c\n   :export:\n\n.. kernel-doc:: drivers\/rapidio\/rio.c\n   :export:\n\nInternals\n=========\n\nThis chapter contains the autogenerated documentation of the RapidIO\nsubsystem.\n\nStructures\n----------\n\n.. kernel-doc:: include\/linux\/rio.h\n   :internal:\n\nEnumeration and Discovery\n-------------------------\n\n.. kernel-doc:: drivers\/rapidio\/rio-scan.c\n   :internal:\n\nDriver functionality\n--------------------\n\n.. kernel-doc:: drivers\/rapidio\/rio.c\n   :internal:\n\n.. 
kernel-doc:: drivers\/rapidio\/rio-access.c\n   :internal:\n\nDevice model support\n--------------------\n\n.. kernel-doc:: drivers\/rapidio\/rio-driver.c\n   :internal:\n\nPPC32 support\n-------------\n\n.. kernel-doc:: arch\/powerpc\/sysdev\/fsl_rio.c\n   :internal:\n\nCredits\n=======\n\nThe following people have contributed to the RapidIO subsystem directly\nor indirectly:\n\n1. Matt Porter\\ mporter@kernel.crashing.org\n\n2. Randy Vinson\\ rvinson@mvista.com\n\n3. Dan Malek\\ dan@embeddedalley.com\n\nThe following people have contributed to this document:\n\n1. Matt Porter\\ mporter@kernel.crashing.org","site":"linux","answers_cleaned":"                        RapidIO Subsystem Guide                           Author  Matt Porter  Introduction               RapidIO is a high speed switched fabric interconnect with features aimed at the embedded market  RapidIO provides support for memory mapped I O as well as message based transactions over the switched fabric network  RapidIO has a standardized discovery mechanism not unlike the PCI bus standard that allows simple detection of devices in a network   This documentation is provided for developers intending to support RapidIO on new architectures  write new drivers  or to understand the subsystem internals   Known Bugs and Limitations                             Bugs       None      Limitations              1  Access management of RapidIO memory regions is not supported  2  Multiple host enumeration is not supported  RapidIO driver interface                           Drivers are provided a set of calls in order to interface with the subsystem to gather info on devices  request map memory region resources  and manage mailboxes doorbells   Functions               kernel doc   include linux rio drv h     internal      kernel doc   drivers rapidio rio driver c     export      kernel doc   drivers rapidio rio c     export   Internals            This chapter contains the autogenerated documentation of the RapidIO 
subsystem   Structures                kernel doc   include linux rio h     internal   Enumeration and Discovery                               kernel doc   drivers rapidio rio scan c     internal   Driver functionality                          kernel doc   drivers rapidio rio c     internal      kernel doc   drivers rapidio rio access c     internal   Device model support                          kernel doc   drivers rapidio rio driver c     internal   PPC32 support                   kernel doc   arch powerpc sysdev fsl rio c     internal   Credits          The following people have contributed to the RapidIO subsystem directly or indirectly   1  Matt Porter  mporter kernel crashing org  2  Randy Vinson  rvinson mvista com  3  Dan Malek  dan embeddedalley com  The following people have contributed to this document   1  Matt Porter  mporter kernel crashing org"}
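The driver interface the RapidIO guide summarizes (include/linux/rio_drv.h plus the exports of drivers/rapidio/rio-driver.c) can be illustrated with a minimal registration skeleton. This is a hypothetical kernel-side sketch, not taken from the document: the device/vendor IDs are invented, and only rio_register_driver()/rio_unregister_driver() and the struct rio_driver/struct rio_device_id shapes are assumed from the referenced headers:

```c
/*
 * Hypothetical RapidIO driver skeleton (kernel-side, illustration only):
 * the IDs are made up; the structures and registration calls are those
 * covered by the kernel-doc references above.
 */
#include <linux/module.h>
#include <linux/rio.h>
#include <linux/rio_drv.h>

static const struct rio_device_id demo_id_table[] = {
	/* Hypothetical IDs; RIO_ANY_ID wildcards the assembly fields. */
	{ .did = 0x1234, .vid = 0x0038,
	  .asm_did = RIO_ANY_ID, .asm_vid = RIO_ANY_ID },
	{ 0, }			/* terminating entry */
};

static int demo_probe(struct rio_dev *rdev, const struct rio_device_id *id)
{
	/* Gather device info, request/map memory region resources, and
	 * claim mailboxes/doorbells here via the rio_drv.h calls. */
	dev_info(&rdev->dev, "RapidIO demo device bound\n");
	return 0;
}

static void demo_remove(struct rio_dev *rdev)
{
	/* Release everything acquired in demo_probe(). */
}

static struct rio_driver demo_driver = {
	.name     = "rio_demo",
	.id_table = demo_id_table,
	.probe    = demo_probe,
	.remove   = demo_remove,
};

static int __init demo_init(void)
{
	return rio_register_driver(&demo_driver);
}
module_init(demo_init);

static void __exit demo_exit(void)
{
	rio_unregister_driver(&demo_driver);
}
module_exit(demo_exit);

MODULE_LICENSE("GPL");
```

Being a kernel module fragment, this builds only against a kernel tree; it is meant to show the shape of the API, not to be dropped in as-is.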
{"questions":"linux Usage of Performance Counters for Linux perfevents 1 2 3 Perf events and tool security can impose a considerable risk of leaking sensitive data accessed by Overview perfsecurity","answers":".. _perf_security:\n\nPerf events and tool security\n=============================\n\nOverview\n--------\n\nUsage of Performance Counters for Linux (perf_events) [1]_ , [2]_ , [3]_\ncan impose a considerable risk of leaking sensitive data accessed by\nmonitored processes. The data leakage is possible both in scenarios of\ndirect usage of perf_events system call API [2]_ and over data files\ngenerated by Perf tool user mode utility (Perf) [3]_ , [4]_ . The risk\ndepends on the nature of data that perf_events performance monitoring\nunits (PMU) [2]_ and Perf collect and expose for performance analysis.\nCollected system and performance data may be split into several\ncategories:\n\n1. System hardware and software configuration data, for example: a CPU\n   model and its cache configuration, an amount of available memory and\n   its topology, used kernel and Perf versions, performance monitoring\n   setup including experiment time, events configuration, Perf command\n   line parameters, etc.\n\n2. User and kernel module paths and their load addresses with sizes,\n   process and thread names with their PIDs and TIDs, timestamps for\n   captured hardware and software events.\n\n3. Content of kernel software counters (e.g., for context switches, page\n   faults, CPU migrations), architectural hardware performance counters\n   (PMC) [8]_ and machine specific registers (MSR) [9]_ that provide\n   execution metrics for various monitored parts of the system (e.g.,\n   memory controller (IMC), interconnect (QPI\/UPI) or peripheral (PCIe)\n   uncore counters) without direct attribution to any execution context\n   state.\n\n4. 
Content of architectural execution context registers (e.g., RIP, RSP,\n   RBP on x86_64), process user and kernel space memory addresses and\n   data, content of various architectural MSRs that capture data from\n   this category.\n\nData that belong to the fourth category can potentially contain\nsensitive process data. If PMUs in some monitoring modes capture values\nof execution context registers or data from process memory, then access\nto such monitoring modes must be properly ordered and secured.\nThus, perf_events performance monitoring and observability operations are\nsubject to security access control management [5]_ .\n\nperf_events access control\n-------------------------------\n\nTo perform security checks, the Linux implementation splits processes\ninto two categories [6]_ : a) privileged processes (whose effective user\nID is 0, referred to as superuser or root), and b) unprivileged\nprocesses (whose effective UID is nonzero). Privileged processes bypass\nall kernel security permission checks, so perf_events performance\nmonitoring is fully available to privileged processes without access,\nscope and resource restrictions.\n\nUnprivileged processes are subject to a full security permission check\nbased on the process's credentials [5]_ (usually: effective UID,\neffective GID, and supplementary group list).\n\nLinux divides the privileges traditionally associated with the superuser\ninto distinct units, known as capabilities [6]_ , which can be\nindependently enabled and disabled on a per-thread basis for processes and\nfiles of unprivileged users.\n\nUnprivileged processes with the CAP_PERFMON capability enabled are treated\nas privileged processes with respect to perf_events performance\nmonitoring and observability operations and thus bypass *scope* permission\nchecks in the kernel. 
CAP_PERFMON implements the principle of least\nprivilege [13]_ (POSIX 1003.1e: 2.2.2.39) for performance monitoring and\nobservability operations in the kernel and provides a secure approach to\nperformance monitoring and observability in the system.\n\nFor backward compatibility reasons, access to perf_events monitoring and\nobservability operations is also open to CAP_SYS_ADMIN privileged\nprocesses, but CAP_SYS_ADMIN usage for secure monitoring and observability\nuse cases is discouraged in favor of the CAP_PERFMON capability.\nIf system audit records [14]_ for a process using the perf_events system call\nAPI contain denial records for acquiring both the CAP_PERFMON and CAP_SYS_ADMIN\ncapabilities, then granting the process the CAP_PERFMON capability alone\nis the preferred secure approach to resolve the double access\ndenial logging related to usage of performance monitoring and observability.\n\nPrior to Linux v5.9, unprivileged processes using the perf_events system call\nwere also subject to the PTRACE_MODE_READ_REALCREDS ptrace access mode check\n[7]_ , whose outcome determined whether monitoring was permitted.\nSo unprivileged processes provided with the CAP_SYS_PTRACE capability were\neffectively permitted to pass the check. Starting from Linux v5.9,\nthe CAP_SYS_PTRACE capability is no longer required; CAP_PERFMON is\nsufficient for processes to perform performance monitoring and observability\noperations.\n\nOther capabilities granted to unprivileged processes can\neffectively enable capturing of additional data required for later\nperformance analysis of monitored processes or the system. 
For example,\nCAP_SYSLOG capability permits reading kernel space memory addresses from\n\/proc\/kallsyms file.\n\nPrivileged Perf users groups\n---------------------------------\n\nMechanisms of capabilities, privileged capability-dumb files [6]_,\nfile system ACLs [10]_ and sudo [15]_ utility can be used to create\ndedicated groups of privileged Perf users who are permitted to execute\nperformance monitoring and observability without limits. The following\nsteps can be taken to create such groups of privileged Perf users.\n\n1. Create perf_users group of privileged Perf users, assign perf_users\n   group to Perf tool executable and limit access to the executable for\n   other users in the system who are not in the perf_users group:\n\n::\n\n   # groupadd perf_users\n   # ls -alhF\n   -rwxr-xr-x  2 root root  11M Oct 19 15:12 perf\n   # chgrp perf_users perf\n   # ls -alhF\n   -rwxr-xr-x  2 root perf_users  11M Oct 19 15:12 perf\n   # chmod o-rwx perf\n   # ls -alhF\n   -rwxr-x---  2 root perf_users  11M Oct 19 15:12 perf\n\n2. 
Assign the required capabilities to the Perf tool executable file and\n   enable members of the perf_users group with monitoring and observability\n   privileges [6]_ :\n\n::\n\n   # setcap \"cap_perfmon,cap_sys_ptrace,cap_syslog=ep\" perf\n   # setcap -v \"cap_perfmon,cap_sys_ptrace,cap_syslog=ep\" perf\n   perf: OK\n   # getcap perf\n   perf = cap_sys_ptrace,cap_syslog,cap_perfmon+ep\n\nIf the installed libcap [16]_ doesn't yet support \"cap_perfmon\", use \"38\" instead,\ni.e.:\n\n::\n\n   # setcap \"38,cap_ipc_lock,cap_sys_ptrace,cap_syslog=ep\" perf\n\nNote that you may need to have 'cap_ipc_lock' in the mix for tools such as\n'perf top'; alternatively, use 'perf top -m N' to reduce the memory that\nit uses for the perf ring buffer (see the memory allocation section below).\n\nUsing a libcap without support for CAP_PERFMON will make cap_get_flag(caps, 38,\nCAP_EFFECTIVE, &val) fail, which will lead the default event to be 'cycles:u',\nso as a workaround explicitly ask for the 'cycles' event, i.e.:\n\n::\n\n  # perf top -e cycles\n\nto get kernel and user samples with a perf binary that has just CAP_PERFMON.\n\nAs a result, members of the perf_users group are capable of conducting\nperformance monitoring and observability by using the functionality of the\nconfigured Perf tool executable that, when executed, passes perf_events\nsubsystem scope checks.\n\nIn case the Perf tool executable can't be assigned the required capabilities (e.g.\nthe file system is mounted with the nosuid option or extended attributes are\nnot supported by the file system), a capabilities-privileged\nenvironment, typically a shell, can be created instead. The shell provides\nits child processes with CAP_PERFMON and other required capabilities so that\nperformance monitoring and observability operations are available in the\nenvironment without limits. Access to the environment can be opened via the sudo\nutility for members of the perf_users group only. In order to create such an\nenvironment:\n\n1. 
Create shell script that uses capsh utility [16]_ to assign CAP_PERFMON\n   and other required capabilities into ambient capability set of the shell\n   process, lock the process security bits after enabling SECBIT_NO_SETUID_FIXUP,\n   SECBIT_NOROOT and SECBIT_NO_CAP_AMBIENT_RAISE bits and then change\n   the process identity to sudo caller of the script who should essentially\n   be a member of perf_users group:\n\n::\n\n   # ls -alh \/usr\/local\/bin\/perf.shell\n   -rwxr-xr-x. 1 root root 83 Oct 13 23:57 \/usr\/local\/bin\/perf.shell\n   # cat \/usr\/local\/bin\/perf.shell\n   exec \/usr\/sbin\/capsh --iab=^cap_perfmon --secbits=239 --user=$SUDO_USER -- -l\n\n2. Extend sudo policy at \/etc\/sudoers file with a rule for perf_users group:\n\n::\n\n   # grep perf_users \/etc\/sudoers\n   %perf_users    ALL=\/usr\/local\/bin\/perf.shell\n\n3. Check that members of perf_users group have access to the privileged\n   shell and have CAP_PERFMON and other required capabilities enabled\n   in permitted, effective and ambient capability sets of an inherent process:\n\n::\n\n  $ id\n  uid=1003(capsh_test) gid=1004(capsh_test) groups=1004(capsh_test),1000(perf_users) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n  $ sudo perf.shell\n  [sudo] password for capsh_test:\n  $ grep Cap \/proc\/self\/status\n  CapInh:        0000004000000000\n  CapPrm:        0000004000000000\n  CapEff:        0000004000000000\n  CapBnd:        000000ffffffffff\n  CapAmb:        0000004000000000\n  $ capsh --decode=0000004000000000\n  0x0000004000000000=cap_perfmon\n\nAs a result, members of perf_users group have access to the privileged\nenvironment where they can use tools employing performance monitoring APIs\ngoverned by CAP_PERFMON Linux capability.\n\nThis specific access control management is only available to superuser\nor root running processes with CAP_SETPCAP, CAP_SETFCAP [6]_\ncapabilities.\n\nUnprivileged users\n-----------------------------------\n\nperf_events 
*scope* and *access* control for unprivileged processes\nis governed by the perf_event_paranoid [2]_ setting:\n\n-1:\n     Impose no *scope* and *access* restrictions on using perf_events\n     performance monitoring. The per-user per-cpu perf_event_mlock_kb [2]_\n     locking limit is ignored when allocating memory buffers for storing\n     performance data. This is the least secure mode since the allowed\n     monitored *scope* is maximized and no perf_events specific limits\n     are imposed on *resources* allocated for performance monitoring.\n\n>=0:\n     *scope* includes per-process and system wide performance monitoring\n     but excludes raw tracepoints and ftrace function tracepoints\n     monitoring. CPU and system events that happen when executing either in\n     user or in kernel space can be monitored and captured for later\n     analysis. The per-user per-cpu perf_event_mlock_kb locking limit is\n     imposed but ignored for unprivileged processes with the CAP_IPC_LOCK\n     [6]_ capability.\n\n>=1:\n     *scope* includes per-process performance monitoring only and\n     excludes system wide performance monitoring. CPU and system events\n     that happen when executing either in user or in kernel space can be\n     monitored and captured for later analysis. The per-user per-cpu\n     perf_event_mlock_kb locking limit is imposed but ignored for\n     unprivileged processes with the CAP_IPC_LOCK capability.\n\n>=2:\n     *scope* includes per-process performance monitoring only. CPU and\n     system events that happen when executing in user space only can be\n     monitored and captured for later analysis. The per-user per-cpu\n     perf_event_mlock_kb locking limit is imposed but ignored for\n     unprivileged processes with the CAP_IPC_LOCK capability.\n\nResource control\n---------------------------------\n\nOpen file descriptors\n+++++++++++++++++++++\n\nThe perf_events system call API [2]_ allocates file descriptors for\nevery configured PMU event. 
Open file descriptors are a per-process\naccountable resource governed by the RLIMIT_NOFILE [11]_ limit\n(ulimit -n), which is usually derived from the login shell process. When\nconfiguring Perf collection for a long list of events on a large server\nsystem, this limit can be easily hit preventing required monitoring\nconfiguration. RLIMIT_NOFILE limit can be increased on per-user basis\nmodifying content of the limits.conf file [12]_ . Ordinarily, a Perf\nsampling session (perf record) requires an amount of open perf_event\nfile descriptors that is not less than the number of monitored events\nmultiplied by the number of monitored CPUs.\n\nMemory allocation\n+++++++++++++++++\n\nThe amount of memory available to user processes for capturing\nperformance monitoring data is governed by the perf_event_mlock_kb [2]_\nsetting. This perf_event specific resource setting defines overall\nper-cpu limits of memory allowed for mapping by the user processes to\nexecute performance monitoring. The setting essentially extends the\nRLIMIT_MEMLOCK [11]_ limit, but only for memory regions mapped\nspecifically for capturing monitored performance events and related data.\n\nFor example, if a machine has eight cores and perf_event_mlock_kb limit\nis set to 516 KiB, then a user process is provided with 516 KiB * 8 =\n4128 KiB of memory above the RLIMIT_MEMLOCK limit (ulimit -l) for\nperf_event mmap buffers. In particular, this means that, if the user\nwants to start two or more performance monitoring processes, the user is\nrequired to manually distribute the available 4128 KiB between the\nmonitoring processes, for example, using the --mmap-pages Perf record\nmode option. Otherwise, the first started performance monitoring process\nallocates all available 4128 KiB and the other processes will fail to\nproceed due to the lack of memory.\n\nRLIMIT_MEMLOCK and perf_event_mlock_kb resource constraints are ignored\nfor processes with the CAP_IPC_LOCK capability. 
Thus, perf_events\/Perf\nprivileged users can be provided with memory above the constraints for\nperf_events\/Perf performance monitoring purpose by providing the Perf\nexecutable with CAP_IPC_LOCK capability.\n\nBibliography\n------------\n\n.. [1] `<https:\/\/lwn.net\/Articles\/337493\/>`_\n.. [2] `<http:\/\/man7.org\/linux\/man-pages\/man2\/perf_event_open.2.html>`_\n.. [3] `<http:\/\/web.eece.maine.edu\/~vweaver\/projects\/perf_events\/>`_\n.. [4] `<https:\/\/perf.wiki.kernel.org\/index.php\/Main_Page>`_\n.. [5] `<https:\/\/www.kernel.org\/doc\/html\/latest\/security\/credentials.html>`_\n.. [6] `<http:\/\/man7.org\/linux\/man-pages\/man7\/capabilities.7.html>`_\n.. [7] `<http:\/\/man7.org\/linux\/man-pages\/man2\/ptrace.2.html>`_\n.. [8] `<https:\/\/en.wikipedia.org\/wiki\/Hardware_performance_counter>`_\n.. [9] `<https:\/\/en.wikipedia.org\/wiki\/Model-specific_register>`_\n.. [10] `<http:\/\/man7.org\/linux\/man-pages\/man5\/acl.5.html>`_\n.. [11] `<http:\/\/man7.org\/linux\/man-pages\/man2\/getrlimit.2.html>`_\n.. [12] `<http:\/\/man7.org\/linux\/man-pages\/man5\/limits.conf.5.html>`_\n.. [13] `<https:\/\/sites.google.com\/site\/fullycapable>`_\n.. [14] `<http:\/\/man7.org\/linux\/man-pages\/man8\/auditd.8.html>`_\n.. [15] `<https:\/\/man7.org\/linux\/man-pages\/man8\/sudo.8.html>`_\n.. 
[16] `<https:\/\/git.kernel.org\/pub\/scm\/libs\/libcap\/libcap.git\/>`_","site":"linux","answers_cleaned":"    perf security   Perf events and tool security                                Overview           Usage of Performance Counters for Linux  perf events   1      2      3   can impose a considerable risk of leaking sensitive data accessed by monitored processes  The data leakage is possible both in scenarios of direct usage of perf events system call API  2   and over data files generated by Perf tool user mode utility  Perf   3      4     The risk depends on the nature of data that perf events performance monitoring units  PMU   2   and Perf collect and expose for performance analysis  Collected system and performance data may be split into several categories   1  System hardware and software configuration data  for example  a CPU    model and its cache configuration  an amount of available memory and    its topology  used kernel and Perf versions  performance monitoring    setup including experiment time  events configuration  Perf command    line parameters  etc   2  User and kernel module paths and their load addresses with sizes     process and thread names with their PIDs and TIDs  timestamps for    captured hardware and software events   3  Content of kernel software counters  e g   for context switches  page    faults  CPU migrations   architectural hardware performance counters     PMC   8   and machine specific registers  MSR   9   that provide    execution metrics for various monitored parts of the system  e g      memory controller  IMC   interconnect  QPI UPI  or peripheral  PCIe     uncore counters  without direct attribution to any execution context    state   4  Content of architectural execution context registers  e g   RIP  RSP     RBP on x86 64   process user and kernel space memory addresses and    data  content of various architectural MSRs that capture data from    this category   Data that belong to the fourth category can potentially 
contain sensitive process data. If PMUs in some monitoring modes capture values of execution context registers or data from process memory then access to such monitoring modes needs to be ordered and secured properly. So, perf_events performance monitoring and observability operations are the subject for security access control management [5].

perf_events access control
--------------------------

To perform security checks, the Linux implementation splits processes into two categories [6]: a) privileged processes (whose effective user ID is 0, referred to as superuser or root), and b) unprivileged processes (whose effective UID is nonzero).

Privileged processes bypass all kernel security permission checks so perf_events performance monitoring is fully available to privileged processes without access, scope and resource restrictions.

Unprivileged processes are subject to a full security permission check based on the process's credentials [5] (usually: effective UID, effective GID, and supplementary group list).

Linux divides the privileges traditionally associated with superuser into distinct units, known as capabilities [6], which can be independently enabled and disabled on a per-thread basis for processes and files of unprivileged users.

Unprivileged processes with enabled CAP_PERFMON capability are treated as privileged processes with respect to perf_events performance monitoring and observability operations, thus, bypass *scope* permissions checks in the kernel. CAP_PERFMON implements the principle of least privilege [13] (POSIX 1003.1e: 2.2.2.39) for performance monitoring and observability operations in the kernel and provides a secure approach to performance monitoring and observability in the system.

For backward compatibility reasons access to perf_events monitoring and observability operations is also open for CAP_SYS_ADMIN privileged processes, but CAP_SYS_ADMIN usage for secure monitoring and observability use cases is discouraged with respect to the CAP_PERFMON capability. If system audit records [14] for a process using the perf_events system call API contain denial records of acquiring both CAP_PERFMON and CAP_SYS_ADMIN capabilities then providing the process with the CAP_PERFMON capability singly is recommended as the preferred secure approach to resolve double access denial logging related to usage of performance monitoring and observability.

Prior to Linux v5.9, unprivileged processes using the perf_events system call were also subject to a PTRACE_MODE_READ_REALCREDS ptrace access mode check [7], whose outcome determines whether monitoring is permitted, so unprivileged processes provided with CAP_SYS_PTRACE capability were effectively permitted to pass the check. Starting from Linux v5.9, the CAP_SYS_PTRACE capability is not required and CAP_PERFMON is enough to be provided for processes to make performance monitoring and observability operations.

Other capabilities being granted to unprivileged processes can effectively enable capturing of additional data required for later performance analysis of monitored processes or a system. For example, the CAP_SYSLOG capability permits reading kernel space memory addresses from the /proc/kallsyms file.

Privileged Perf users groups
----------------------------

Mechanisms of capabilities, privileged capability-dumb files [6], file system ACLs [10] and the sudo [15] utility can be used to create dedicated groups of privileged Perf users who are permitted to execute performance monitoring and observability without limits. The following steps can be taken to create such groups of privileged Perf users.

1. Create a perf_users group of privileged Perf users, assign the perf_users
   group to the Perf tool executable and limit access to the executable for
   other users in the system who are not in the perf_users group::

     # groupadd perf_users
     # ls -alhF
     -rwxr-xr-x  2 root root  11M Oct 19 15:12 perf
     # chgrp perf_users perf
     # ls -alhF
     -rwxr-xr-x  2 root perf_users  11M Oct 19 15:12 perf
     # chmod o-rwx perf
     # ls -alhF
     -rwxr-x---  2 root perf_users  11M Oct 19 15:12 perf

2. Assign the required capabilities to the Perf tool executable file and
   enable members of the perf_users group with monitoring and observability
   privileges [6]::

     # setcap "cap_perfmon,cap_sys_ptrace,cap_syslog=ep" perf
     # setcap -v "cap_perfmon,cap_sys_ptrace,cap_syslog=ep" perf
     perf: OK
     # getcap perf
     perf = cap_sys_ptrace,cap_syslog,cap_perfmon+ep

If the installed libcap [16] doesn't yet support "cap_perfmon", use "38" instead, i.e.::

  # setcap "38,cap_ipc_lock,cap_sys_ptrace,cap_syslog=ep" perf

Note that you may need to have 'cap_ipc_lock' in the mix for tools such as 'perf top', alternatively use 'perf top -m N', to reduce the memory that it uses for the perf ring buffer, see the memory allocation section below.

Using a libcap without support for CAP_PERFMON will make cap_get_flag(caps, 38, CAP_EFFECTIVE, &val) fail, which will lead the default event to be 'cycles:u', so as a workaround explicitly ask for the 'cycles' event, i.e.::

  # perf top -e cycles

to get kernel and user samples with a perf binary with just CAP_PERFMON.

As a result, members of the perf_users group are capable of conducting performance monitoring and observability by using functionality of the configured Perf tool executable that, when it executes, passes perf_events subsystem scope checks.

In case the Perf tool executable can't be assigned the required capabilities (e.g. the file system is mounted with the nosuid option or extended attributes are not supported by the file system) then creation of a capabilities-privileged environment, naturally a shell, is possible. The shell provides inherent processes with CAP_PERFMON and other required capabilities so that performance monitoring and observability operations are available in the environment without limits. Access to the environment can be opened via the sudo utility for members of the perf_users group only. In order to create such an environment:

1. Create a shell script that uses the capsh utility [16] to assign CAP_PERFMON
   and other required capabilities into the ambient capability set of the shell
   process, lock the process security bits after enabling SECBIT_NO_SETUID_FIXUP,
   SECBIT_NOROOT and SECBIT_NO_CAP_AMBIENT_RAISE bits and then change
   the process identity to the sudo caller of the script, who should essentially
   be a member of the perf_users group::

     # ls -alh /usr/local/bin/perf.shell
     -rwxr-xr-x. 1 root root 83 Oct 13 23:57 /usr/local/bin/perf.shell
     # cat /usr/local/bin/perf.shell
     exec /usr/sbin/capsh --iab=^cap_perfmon --secbits=239 --user=$SUDO_USER -- -l

2. Extend the sudo policy at the /etc/sudoers file with a rule for the perf_users group::

     # grep perf_users /etc/sudoers
     %perf_users    ALL=/usr/local/bin/perf.shell

3. Check that members of the perf_users group have access to the privileged
   shell and have CAP_PERFMON and other required capabilities enabled
   in the permitted, effective and ambient capability sets of an inherent process::

     $ id
     uid=1003(capsh_test) gid=1004(capsh_test) groups=1004(capsh_test),1000(perf_users) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
     $ sudo perf.shell
     [sudo] password for capsh_test:
     $ grep Cap /proc/self/status
     CapInh: 0000004000000000
     CapPrm: 0000004000000000
     CapEff: 0000004000000000
     CapBnd: 000000ffffffffff
     CapAmb: 0000004000000000
     $ capsh --decode=0000004000000000
     0x0000004000000000=cap_perfmon

As a result, members of the perf_users group have access to the privileged environment where they can use tools employing performance monitoring APIs governed by the CAP_PERFMON Linux capability.

This specific access control management is only available to superuser or root running processes with CAP_SETPCAP, CAP_SETFCAP [6] capabilities.

Unprivileged users
------------------

perf_events *scope* and *access* control for unprivileged processes is governed by the perf_event_paranoid [2] setting:

-1:
     Impose no *scope* and *access* restrictions on using perf_events
     performance monitoring. Per-user per-cpu perf_event_mlock_kb [2]
     locking limit is ignored when allocating memory buffers for storing
     performance data. This is the least secure mode since allowed
     monitored *scope* is maximized and no perf_events specific limits
     are imposed on *resources* allocated for performance monitoring.

>=0:
     *scope* includes per-process and system wide performance monitoring
     but excludes raw tracepoints and ftrace function tracepoints
     monitoring. CPU and system events happened when executing either in
     user or in kernel space can be monitored and captured for later
     analysis. Per-user per-cpu perf_event_mlock_kb locking limit is
     imposed but ignored for unprivileged processes with CAP_IPC_LOCK
     [6] capability.

>=1:
     *scope* includes per-process performance monitoring only and
     excludes system wide performance monitoring. CPU and system events
     happened when executing either in user or in kernel space can be
     monitored and captured for later analysis. Per-user per-cpu
     perf_event_mlock_kb locking limit is imposed but ignored for
     unprivileged processes with CAP_IPC_LOCK capability.

>=2:
     *scope* includes per-process performance monitoring only. CPU and
     system events happened when executing in user space only can be
     monitored and captured for later analysis. Per-user per-cpu
     perf_event_mlock_kb locking limit is imposed but ignored for
     unprivileged processes with CAP_IPC_LOCK capability.

Resource control
----------------

Open file descriptors
~~~~~~~~~~~~~~~~~~~~~

The perf_events system call API [2] allocates file descriptors for every configured PMU event. Open file descriptors are a per-process accountable resource governed by the RLIMIT_NOFILE [11] limit (ulimit -n), which is usually derived from the login shell process. When configuring Perf collection for a long list of events on a large server system, this limit can be easily hit, preventing the required monitoring configuration. The RLIMIT_NOFILE limit can be increased on a per-user basis by modifying content of the limits.conf file [12]. Ordinarily, a Perf sampling session (perf record) requires an amount of open perf_event file descriptors that is not less than the number of monitored events multiplied by the number of monitored CPUs.

Memory allocation
~~~~~~~~~~~~~~~~~

The amount of memory available to user processes for capturing performance monitoring data is governed by the perf_event_mlock_kb [2] setting. This perf_event specific resource setting defines overall per-cpu limits of memory allowed for mapping by the user processes to execute performance monitoring. The setting essentially extends the RLIMIT_MEMLOCK [11] limit, but only for memory regions mapped specifically for capturing monitored performance events and related data.

For example, if a machine has eight cores and the perf_event_mlock_kb limit is set to 516 KiB, then a user process is provided with 516 KiB * 8 = 4128 KiB of memory above the RLIMIT_MEMLOCK limit (ulimit -l) for perf_event mmap buffers. In particular, this means that, if the user wants to start two or more performance monitoring processes, the user is required to manually distribute the available 4128 KiB between the monitoring processes, for example, using the --mmap-pages Perf record mode option. Otherwise, the first started performance monitoring process allocates all available 4128 KiB and the other processes will fail to proceed due to the lack of memory.

RLIMIT_MEMLOCK and perf_event_mlock_kb resource constraints are ignored for processes with the CAP_IPC_LOCK capability. Thus, perf_events/Perf privileged users can be provided with memory above the constraints for perf_events/Perf performance monitoring purposes by providing the Perf executable with the CAP_IPC_LOCK capability.

Bibliography
------------

.. [1] `<https://lwn.net/Articles/337493/>`_
.. [2] `<http://man7.org/linux/man-pages/man2/perf_event_open.2.html>`_
.. [3] `<http://web.eece.maine.edu/~vweaver/projects/perf_events/>`_
.. [4] `<https://perf.wiki.kernel.org/index.php/Main_Page>`_
.. [5] `<https://www.kernel.org/doc/html/latest/security/credentials.html>`_
.. [6] `<http://man7.org/linux/man-pages/man7/capabilities.7.html>`_
.. [7] `<http://man7.org/linux/man-pages/man2/ptrace.2.html>`_
.. [8] `<https://en.wikipedia.org/wiki/Hardware_performance_counter>`_
.. [9] `<https://en.wikipedia.org/wiki/Model-specific_register>`_
.. [10] `<http://man7.org/linux/man-pages/man5/acl.5.html>`_
.. [11] `<http://man7.org/linux/man-pages/man2/getrlimit.2.html>`_
.. [12] `<http://man7.org/linux/man-pages/man5/limits.conf.5.html>`_
.. [13] `<https://sites.google.com/site/fullycapable>`_
.. [14] `<http://man7.org/linux/man-pages/man8/auditd.8.html>`_
.. [15] `<https://man7.org/linux/man-pages/man8/sudo.8.html>`_
.. [16] `<https://git.kernel.org/pub/scm/libs/libcap/libcap.git/>`_
"}
{"questions":"linux a Random Number Generator RNG The software has two parts special hardware feature on your CPU or motherboard Hardware random number generators The hwrandom framework is software that makes use of a Introduction","answers":"=================================\nHardware random number generators\n=================================\n\nIntroduction\n============\n\nThe hw_random framework is software that makes use of a\nspecial hardware feature on your CPU or motherboard,\na Random Number Generator (RNG).  The software has two parts:\na core providing the \/dev\/hwrng character device and its\nsysfs support, plus a hardware-specific driver that plugs\ninto that core.\n\nTo make the most effective use of these mechanisms, you\nshould download the support software as well.  Download the\nlatest version of the \"rng-tools\" package from:\n\n\thttps:\/\/github.com\/nhorman\/rng-tools\n\nThose tools use \/dev\/hwrng to fill the kernel entropy pool,\nwhich is used internally and exported by the \/dev\/urandom and\n\/dev\/random special files.\n\nTheory of operation\n===================\n\nCHARACTER DEVICE.  Using the standard open()\nand read() system calls, you can read random data from\nthe hardware RNG device.  This data is NOT CHECKED by any\nfitness tests, and could potentially be bogus (if the\nhardware is faulty or has been tampered with).  Data is only\noutput if the hardware \"has-data\" flag is set, but nevertheless\na security-conscious person would run fitness tests on the\ndata before assuming it is truly random.\n\nThe rng-tools package uses such tests in \"rngd\", and lets you\nrun them by hand with a \"rngtest\" utility.\n\n\/dev\/hwrng is char device major 10, minor 183.\n\nCLASS DEVICE.  There is a \/sys\/class\/misc\/hw_random node with\ntwo unique attributes, \"rng_available\" and \"rng_current\".  
The\n\"rng_available\" attribute lists the hardware-specific drivers\navailable, while \"rng_current\" lists the one which is currently\nconnected to \/dev\/hwrng.  If your system has more than one\nRNG available, you may change the one used by writing a name from\nthe list in \"rng_available\" into \"rng_current\".\n\n==========================================================================\n\n\nHardware driver for Intel\/AMD\/VIA Random Number Generators (RNG)\n\t- Copyright 2000,2001 Jeff Garzik <jgarzik@pobox.com>\n\t- Copyright 2000,2001 Philipp Rumpf <prumpf@mandrakesoft.com>\n\n\nAbout the Intel RNG hardware, from the firmware hub datasheet\n=============================================================\n\nThe Firmware Hub integrates a Random Number Generator (RNG)\nusing thermal noise generated from inherently random quantum\nmechanical properties of silicon. When not generating new random\nbits the RNG circuitry will enter a low power state. Intel will\nprovide a binary software driver to give third party software\naccess to our RNG for use as a security feature. At this time,\nthe RNG is only to be used with a system in an OS-present state.\n\nIntel RNG Driver notes\n======================\n\nFIXME: support poll(2)\n\n.. 
note::\n\n\trequest_mem_region was removed, for three reasons:\n\n\t1) Only one RNG is supported by this driver;\n\t2) The location used by the RNG is a fixed location in\n\t   MMIO-addressable memory;\n\t3) users with properly working BIOS e820 handling will always\n\t   have the region in which the RNG is located reserved, so\n\t   request_mem_region calls always fail for proper setups.\n\t   However, for people who use mem=XX, BIOS e820 information is\n\t   **not** in \/proc\/iomem, and request_mem_region(RNG_ADDR) can\n\t   succeed.\n\nDriver details\n==============\n\nBased on:\n\tIntel 82802AB\/82802AC Firmware Hub (FWH) Datasheet\n\tMay 1999 Order Number: 290658-002 R\n\nIntel 82802 Firmware Hub:\n\tRandom Number Generator\n\tProgrammer's Reference Manual\n\tDecember 1999 Order Number: 298029-001 R\n\nIntel 82802 Firmware HUB Random Number Generator Driver\n\tCopyright (c) 2000 Matt Sottek <msottek@quiknet.com>\n\nSpecial thanks to Matt Sottek.  I did the \"guts\", he\ndid the \"brains\" and all the testing.","site":"linux","answers_cleaned":"                                  Hardware random number generators                                    Introduction               The hw random framework is software that makes use of a special hardware feature on your CPU or motherboard  a Random Number Generator  RNG    The software has two parts  a core providing the  dev hwrng character device and its sysfs support  plus a hardware specific driver that plugs into that core   To make the most effective use of these mechanisms  you should download the support software as well   Download the latest version of the  rng tools  package from    https   github com nhorman rng tools  Those tools use  dev hwrng to fill the kernel entropy pool  which is used internally and exported by the  dev urandom and  dev random special files   Theory of operation                      CHARACTER DEVICE   Using the standard open   and read   system calls  you can read random data from the 
hardware RNG device   This data is NOT CHECKED by any fitness tests  and could potentially be bogus  if the hardware is faulty or has been tampered with    Data is only output if the hardware  has data  flag is set  but nevertheless a security conscious person would run fitness tests on the data before assuming it is truly random   The rng tools package uses such tests in  rngd   and lets you run them by hand with a  rngtest  utility    dev hwrng is char device major 10  minor 183   CLASS DEVICE   There is a  sys class misc hw random node with two unique attributes   rng available  and  rng current    The  rng available  attribute lists the hardware specific drivers available  while  rng current  lists the one which is currently connected to  dev hwrng   If your system has more than one RNG available  you may change the one used by writing a name from the list in  rng available  into  rng current                                                                                 Hardware driver for Intel AMD VIA Random Number Generators  RNG     Copyright 2000 2001 Jeff Garzik  jgarzik pobox com     Copyright 2000 2001 Philipp Rumpf  prumpf mandrakesoft com    About the Intel RNG hardware  from the firmware hub datasheet                                                                The Firmware Hub integrates a Random Number Generator  RNG  using thermal noise generated from inherently random quantum mechanical properties of silicon  When not generating new random bits the RNG circuitry will enter a low power state  Intel will provide a binary software driver to give third party software access to our RNG for use as a security feature  At this time  the RNG is only to be used with a system in an OS present state   Intel RNG Driver notes                         FIXME  support poll 2      note     request mem region was removed  for three reasons    1  Only one RNG is supported by this driver   2  The location used by the RNG is a fixed location in     MMIO addressable 
memory   3  users with properly working BIOS e820 handling will always     have the region in which the RNG is located reserved  so     request mem region calls always fail for proper setups      However  for people who use mem XX  BIOS e820 information is       not   in  proc iomem  and request mem region RNG ADDR  can     succeed   Driver details                 Based on   Intel 82802AB 82802AC Firmware Hub  FWH  Datasheet  May 1999 Order Number  290658 002 R  Intel 82802 Firmware Hub   Random Number Generator  Programmer s Reference Manual  December 1999 Order Number  298029 001 R  Intel 82802 Firmware HUB Random Number Generator Driver  Copyright  c  2000 Matt Sottek  msottek quiknet com   Special thanks to Matt Sottek   I did the  guts   he did the  brains  and all the testing "}
{"questions":"linux Author Tejun Heo tj kernel org Date October 2015 Control Group v2 cgroup v2 This is the authoritative documentation on the design interface and","answers":".. _cgroup-v2:\n\n================\nControl Group v2\n================\n\n:Date: October, 2015\n:Author: Tejun Heo <tj@kernel.org>\n\nThis is the authoritative documentation on the design, interface and\nconventions of cgroup v2.  It describes all userland-visible aspects\nof cgroup including core and specific controller behaviors.  All\nfuture changes must be reflected in this document.  Documentation for\nv1 is available under :ref:`Documentation\/admin-guide\/cgroup-v1\/index.rst <cgroup-v1>`.\n\n.. CONTENTS\n\n   1. Introduction\n     1-1. Terminology\n     1-2. What is cgroup?\n   2. Basic Operations\n     2-1. Mounting\n     2-2. Organizing Processes and Threads\n       2-2-1. Processes\n       2-2-2. Threads\n     2-3. [Un]populated Notification\n     2-4. Controlling Controllers\n       2-4-1. Enabling and Disabling\n       2-4-2. Top-down Constraint\n       2-4-3. No Internal Process Constraint\n     2-5. Delegation\n       2-5-1. Model of Delegation\n       2-5-2. Delegation Containment\n     2-6. Guidelines\n       2-6-1. Organize Once and Control\n       2-6-2. Avoid Name Collisions\n   3. Resource Distribution Models\n     3-1. Weights\n     3-2. Limits\n     3-3. Protections\n     3-4. Allocations\n   4. Interface Files\n     4-1. Format\n     4-2. Conventions\n     4-3. Core Interface Files\n   5. Controllers\n     5-1. CPU\n       5-1-1. CPU Interface Files\n     5-2. Memory\n       5-2-1. Memory Interface Files\n       5-2-2. Usage Guidelines\n       5-2-3. Memory Ownership\n     5-3. IO\n       5-3-1. IO Interface Files\n       5-3-2. Writeback\n       5-3-3. IO Latency\n         5-3-3-1. How IO Latency Throttling Works\n         5-3-3-2. IO Latency Interface Files\n       5-3-4. IO Priority\n     5-4. PID\n       5-4-1. PID Interface Files\n     5-5. Cpuset\n       5.5-1. 
Cpuset Interface Files\n     5-6. Device\n     5-7. RDMA\n       5-7-1. RDMA Interface Files\n     5-8. HugeTLB\n       5.8-1. HugeTLB Interface Files\n     5-9. Misc\n       5.9-1 Miscellaneous cgroup Interface Files\n       5.9-2 Migration and Ownership\n     5-10. Others\n       5-10-1. perf_event\n     5-N. Non-normative information\n       5-N-1. CPU controller root cgroup process behaviour\n       5-N-2. IO controller root cgroup process behaviour\n   6. Namespace\n     6-1. Basics\n     6-2. The Root and Views\n     6-3. Migration and setns(2)\n     6-4. Interaction with Other Namespaces\n   P. Information on Kernel Programming\n     P-1. Filesystem Support for Writeback\n   D. Deprecated v1 Core Features\n   R. Issues with v1 and Rationales for v2\n     R-1. Multiple Hierarchies\n     R-2. Thread Granularity\n     R-3. Competition Between Inner Nodes and Threads\n     R-4. Other Interface Issues\n     R-5. Controller Issues and Remedies\n       R-5-1. Memory\n\n\nIntroduction\n============\n\nTerminology\n-----------\n\n\"cgroup\" stands for \"control group\" and is never capitalized.  The\nsingular form is used to designate the whole feature and also as a\nqualifier as in \"cgroup controllers\".  When explicitly referring to\nmultiple individual control groups, the plural form \"cgroups\" is used.\n\n\nWhat is cgroup?\n---------------\n\ncgroup is a mechanism to organize processes hierarchically and\ndistribute system resources along the hierarchy in a controlled and\nconfigurable manner.\n\ncgroup is largely composed of two parts - the core and controllers.\ncgroup core is primarily responsible for hierarchically organizing\nprocesses.  A cgroup controller is usually responsible for\ndistributing a specific type of system resource along the hierarchy\nalthough there are utility controllers which serve purposes other than\nresource distribution.\n\ncgroups form a tree structure and every process in the system belongs\nto one and only one cgroup.  
All threads of a process belong to the\nsame cgroup.  On creation, all processes are put in the cgroup that\nthe parent process belongs to at the time.  A process can be migrated\nto another cgroup.  Migration of a process doesn't affect already\nexisting descendant processes.\n\nFollowing certain structural constraints, controllers may be enabled or\ndisabled selectively on a cgroup.  All controller behaviors are\nhierarchical - if a controller is enabled on a cgroup, it affects all\nprocesses which belong to the cgroups consisting the inclusive\nsub-hierarchy of the cgroup.  When a controller is enabled on a nested\ncgroup, it always restricts the resource distribution further.  The\nrestrictions set closer to the root in the hierarchy can not be\noverridden from further away.\n\n\nBasic Operations\n================\n\nMounting\n--------\n\nUnlike v1, cgroup v2 has only single hierarchy.  The cgroup v2\nhierarchy can be mounted with the following mount command::\n\n  # mount -t cgroup2 none $MOUNT_POINT\n\ncgroup2 filesystem has the magic number 0x63677270 (\"cgrp\").  All\ncontrollers which support v2 and are not bound to a v1 hierarchy are\nautomatically bound to the v2 hierarchy and show up at the root.\nControllers which are not in active use in the v2 hierarchy can be\nbound to other hierarchies.  This allows mixing v2 hierarchy with the\nlegacy v1 multiple hierarchies in a fully backward compatible way.\n\nA controller can be moved across hierarchies only after the controller\nis no longer referenced in its current hierarchy.  
Because per-cgroup\ncontroller states are destroyed asynchronously and controllers may\nhave lingering references, a controller may not show up immediately on\nthe v2 hierarchy after the final umount of the previous hierarchy.\nSimilarly, a controller should be fully disabled to be moved out of\nthe unified hierarchy and it may take some time for the disabled\ncontroller to become available for other hierarchies; furthermore, due\nto inter-controller dependencies, other controllers may need to be\ndisabled too.\n\nWhile useful for development and manual configurations, moving\ncontrollers dynamically between the v2 and other hierarchies is\nstrongly discouraged for production use.  It is recommended to decide\nthe hierarchies and controller associations before starting using the\ncontrollers after system boot.\n\nDuring transition to v2, system management software might still\nautomount the v1 cgroup filesystem and so hijack all controllers\nduring boot, before manual intervention is possible. To make testing\nand experimenting easier, the kernel parameter cgroup_no_v1= allows\ndisabling controllers in v1 and make them always available in v2.\n\ncgroup v2 currently supports the following mount options.\n\n  nsdelegate\n\tConsider cgroup namespaces as delegation boundaries.  This\n\toption is system wide and can only be set on mount or modified\n\tthrough remount from the init namespace.  The mount option is\n\tignored on non-init namespace mounts.  
Please refer to the\n\tDelegation section for details.\n\n  favordynmods\n        Reduce the latencies of dynamic cgroup modifications such as\n        task migrations and controller on\/offs at the cost of making\n        hot path operations such as forks and exits more expensive.\n        The static usage pattern of creating a cgroup, enabling\n        controllers, and then seeding it with CLONE_INTO_CGROUP is\n        not affected by this option.\n\n  memory_localevents\n        Only populate memory.events with data for the current cgroup,\n        and not any subtrees. This is legacy behaviour, the default\n        behaviour without this option is to include subtree counts.\n        This option is system wide and can only be set on mount or\n        modified through remount from the init namespace. The mount\n        option is ignored on non-init namespace mounts.\n\n  memory_recursiveprot\n        Recursively apply memory.min and memory.low protection to\n        entire subtrees, without requiring explicit downward\n        propagation into leaf cgroups.  This allows protecting entire\n        subtrees from one another, while retaining free competition\n        within those subtrees.  This should have been the default\n        behavior but is a mount-option to avoid regressing setups\n        relying on the original semantics (e.g. specifying bogusly\n        high 'bypass' protection values at higher tree levels).\n\n  memory_hugetlb_accounting\n        Count HugeTLB memory usage towards the cgroup's overall\n        memory usage for the memory controller (for the purpose of\n        statistics reporting and memory protection). This is a new\n        behavior that could regress existing setups, so it must be\n        explicitly opted in with this mount option.\n\n        A few caveats to keep in mind:\n\n        * There is no HugeTLB pool management involved in the memory\n          controller. 
The pre-allocated pool does not belong to anyone.\n          Specifically, when a new HugeTLB folio is allocated to\n          the pool, it is not accounted for from the perspective of the\n          memory controller. It is only charged to a cgroup when it is\n          actually used (for e.g at page fault time). Host memory\n          overcommit management has to consider this when configuring\n          hard limits. In general, HugeTLB pool management should be\n          done via other mechanisms (such as the HugeTLB controller).\n        * Failure to charge a HugeTLB folio to the memory controller\n          results in SIGBUS. This could happen even if the HugeTLB pool\n          still has pages available (but the cgroup limit is hit and\n          reclaim attempt fails).\n        * Charging HugeTLB memory towards the memory controller affects\n          memory protection and reclaim dynamics. Any userspace tuning\n          (of low, min limits for e.g) needs to take this into account.\n        * HugeTLB pages utilized while this option is not selected\n          will not be tracked by the memory controller (even if cgroup\n          v2 is remounted later on).\n\n  pids_localevents\n        The option restores v1-like behavior of pids.events:max, that is only\n        local (inside cgroup proper) fork failures are counted. Without this\n        option pids.events.max represents any pids.max enforcement across\n        cgroup's subtree.\n\n\n\nOrganizing Processes and Threads\n--------------------------------\n\nProcesses\n~~~~~~~~~\n\nInitially, only the root cgroup exists to which all processes belong.\nA child cgroup can be created by creating a sub-directory::\n\n  # mkdir $CGROUP_NAME\n\nA given cgroup may have multiple child cgroups forming a tree\nstructure.  Each cgroup has a read-writable interface file\n\"cgroup.procs\".  When read, it lists the PIDs of all processes which\nbelong to the cgroup one-per-line.  
The PIDs are not ordered and the\nsame PID may show up more than once if the process got moved to\nanother cgroup and then back or the PID got recycled while reading.\n\nA process can be migrated into a cgroup by writing its PID to the\ntarget cgroup's \"cgroup.procs\" file.  Only one process can be migrated\non a single write(2) call.  If a process is composed of multiple\nthreads, writing the PID of any thread migrates all threads of the\nprocess.\n\nWhen a process forks a child process, the new process is born into the\ncgroup that the forking process belongs to at the time of the\noperation.  After exit, a process stays associated with the cgroup\nthat it belonged to at the time of exit until it's reaped; however, a\nzombie process does not appear in \"cgroup.procs\" and thus can't be\nmoved to another cgroup.\n\nA cgroup which doesn't have any children or live processes can be\ndestroyed by removing the directory.  Note that a cgroup which doesn't\nhave any children and is associated only with zombie processes is\nconsidered empty and can be removed::\n\n  # rmdir $CGROUP_NAME\n\n\"\/proc\/$PID\/cgroup\" lists a process's cgroup membership.  If legacy\ncgroup is in use in the system, this file may contain multiple lines,\none for each hierarchy.  The entry for cgroup v2 is always in the\nformat \"0::$PATH\"::\n\n  # cat \/proc\/842\/cgroup\n  ...\n  0::\/test-cgroup\/test-cgroup-nested\n\nIf the process becomes a zombie and the cgroup it was associated with\nis removed subsequently, \" (deleted)\" is appended to the path::\n\n  # cat \/proc\/842\/cgroup\n  ...\n  0::\/test-cgroup\/test-cgroup-nested (deleted)\n\n\nThreads\n~~~~~~~\n\ncgroup v2 supports thread granularity for a subset of controllers to\nsupport use cases requiring hierarchical resource distribution across\nthe threads of a group of processes.  
By default, all threads of a\nprocess belong to the same cgroup, which also serves as the resource\ndomain to host resource consumptions which are not specific to a\nprocess or thread.  The thread mode allows threads to be spread across\na subtree while still maintaining the common resource domain for them.\n\nControllers which support thread mode are called threaded controllers.\nThe ones which don't are called domain controllers.\n\nMarking a cgroup threaded makes it join the resource domain of its\nparent as a threaded cgroup.  The parent may be another threaded\ncgroup whose resource domain is further up in the hierarchy.  The root\nof a threaded subtree, that is, the nearest ancestor which is not\nthreaded, is called threaded domain or thread root interchangeably and\nserves as the resource domain for the entire subtree.\n\nInside a threaded subtree, threads of a process can be put in\ndifferent cgroups and are not subject to the no internal process\nconstraint - threaded controllers can be enabled on non-leaf cgroups\nwhether they have threads in them or not.\n\nAs the threaded domain cgroup hosts all the domain resource\nconsumptions of the subtree, it is considered to have internal\nresource consumptions whether there are processes in it or not and\ncan't have populated child cgroups which aren't threaded.  Because the\nroot cgroup is not subject to no internal process constraint, it can\nserve both as a threaded domain and a parent to domain cgroups.\n\nThe current operation mode or type of the cgroup is shown in the\n\"cgroup.type\" file which indicates whether the cgroup is a normal\ndomain, a domain which is serving as the domain of a threaded subtree,\nor a threaded cgroup.\n\nOn creation, a cgroup is always a domain cgroup and can be made\nthreaded by writing \"threaded\" to the \"cgroup.type\" file.  The\noperation is single direction::\n\n  # echo threaded > cgroup.type\n\nOnce threaded, the cgroup can't be made a domain again.  
To enable the\nthread mode, the following conditions must be met.\n\n- As the cgroup will join the parent's resource domain, the parent\n  must either be a valid (threaded) domain or a threaded cgroup.\n\n- When the parent is an unthreaded domain, it must not have any domain\n  controllers enabled or populated domain children.  The root is\n  exempt from this requirement.\n\nTopology-wise, a cgroup can be in an invalid state.  Please consider\nthe following topology::\n\n  A (threaded domain) - B (threaded) - C (domain, just created)\n\nC is created as a domain but isn't connected to a parent which can\nhost child domains.  C can't be used until it is turned into a\nthreaded cgroup.  The \"cgroup.type\" file will report \"domain invalid\" in\nthese cases.  Operations which fail due to invalid topology use\nEOPNOTSUPP as the errno.\n\nA domain cgroup is turned into a threaded domain when one of its child\ncgroups becomes threaded or threaded controllers are enabled in the\n\"cgroup.subtree_control\" file while there are processes in the cgroup.\nA threaded domain reverts to a normal domain when the conditions\nclear.\n\nWhen read, \"cgroup.threads\" contains the list of the thread IDs of all\nthreads in the cgroup.  Except that the operations are per-thread\ninstead of per-process, \"cgroup.threads\" has the same format and\nbehaves the same way as \"cgroup.procs\".  
While \"cgroup.threads\" can be\nwritten to in any cgroup, as it can only move threads inside the same\nthreaded domain, its operations are confined inside each threaded\nsubtree.\n\nThe threaded domain cgroup serves as the resource domain for the whole\nsubtree, and, while the threads can be scattered across the subtree,\nall the processes are considered to be in the threaded domain cgroup.\n\"cgroup.procs\" in a threaded domain cgroup contains the PIDs of all\nprocesses in the subtree and is not readable in the subtree proper.\nHowever, \"cgroup.procs\" can be written to from anywhere in the subtree\nto migrate all threads of the matching process to the cgroup.\n\nOnly threaded controllers can be enabled in a threaded subtree.  When\na threaded controller is enabled inside a threaded subtree, it only\naccounts for and controls resource consumptions associated with the\nthreads in the cgroup and its descendants.  All consumptions which\naren't tied to a specific thread belong to the threaded domain cgroup.\n\nBecause a threaded subtree is exempt from the no internal process\nconstraint, a threaded controller must be able to handle competition\nbetween threads in a non-leaf cgroup and its child cgroups.  Each\nthreaded controller defines how such competitions are handled.\n\nCurrently, the following controllers are threaded and can be enabled\nin a threaded cgroup:\n\n- cpu\n- cpuset\n- perf_event\n- pids\n\n[Un]populated Notification\n--------------------------\n\nEach non-root cgroup has a \"cgroup.events\" file which contains a\n\"populated\" field indicating whether the cgroup's sub-hierarchy has\nlive processes in it.  Its value is 0 if there is no live process in\nthe cgroup and its descendants; otherwise, 1.  poll and [id]notify\nevents are triggered when the value changes.  This can be used, for\nexample, to start a clean-up operation after all processes of a given\nsub-hierarchy have exited.  The populated state updates and\nnotifications are recursive.  
Consider the following sub-hierarchy\nwhere the numbers in the parentheses represent the numbers of processes\nin each cgroup::\n\n  A(4) - B(0) - C(1)\n              \\ D(0)\n\nA, B and C's \"populated\" fields would be 1 while D's 0.  After the one\nprocess in C exits, B and C's \"populated\" fields would flip to \"0\" and\nfile modified events will be generated on the \"cgroup.events\" files of\nboth cgroups.\n\n\nControlling Controllers\n-----------------------\n\nEnabling and Disabling\n~~~~~~~~~~~~~~~~~~~~~~\n\nEach cgroup has a \"cgroup.controllers\" file which lists all\ncontrollers available for the cgroup to enable::\n\n  # cat cgroup.controllers\n  cpu io memory\n\nNo controller is enabled by default.  Controllers can be enabled and\ndisabled by writing to the \"cgroup.subtree_control\" file::\n\n  # echo \"+cpu +memory -io\" > cgroup.subtree_control\n\nOnly controllers which are listed in \"cgroup.controllers\" can be\nenabled.  When multiple operations are specified as above, either they\nall succeed or fail.  If multiple operations on the same controller\nare specified, the last one is effective.\n\nEnabling a controller in a cgroup indicates that the distribution of\nthe target resource across its immediate children will be controlled.\nConsider the following sub-hierarchy.  The enabled controllers are\nlisted in parentheses::\n\n  A(cpu,memory) - B(memory) - C()\n                            \\ D()\n\nAs A has \"cpu\" and \"memory\" enabled, A will control the distribution\nof CPU cycles and memory to its children, in this case, B.  As B has\n\"memory\" enabled but not \"CPU\", C and D will compete freely on CPU\ncycles but their division of memory available to B will be controlled.\n\nAs a controller regulates the distribution of the target resource to\nthe cgroup's children, enabling it creates the controller's interface\nfiles in the child cgroups.  
In the above example, enabling \"cpu\" on B\nwould create the \"cpu.\" prefixed controller interface files in C and\nD.  Likewise, disabling \"memory\" from B would remove the \"memory.\"\nprefixed controller interface files from C and D.  This means that the\ncontroller interface files - anything which doesn't start with\n\"cgroup.\" - are owned by the parent rather than the cgroup itself.\n\n\nTop-down Constraint\n~~~~~~~~~~~~~~~~~~~\n\nResources are distributed top-down and a cgroup can further distribute\na resource only if the resource has been distributed to it from the\nparent.  This means that all non-root \"cgroup.subtree_control\" files\ncan only contain controllers which are enabled in the parent's\n\"cgroup.subtree_control\" file.  A controller can be enabled only if\nthe parent has the controller enabled and a controller can't be\ndisabled if one or more children have it enabled.\n\n\nNo Internal Process Constraint\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nNon-root cgroups can distribute domain resources to their children\nonly when they don't have any processes of their own.  In other words,\nonly domain cgroups which don't contain any processes can have domain\ncontrollers enabled in their \"cgroup.subtree_control\" files.\n\nThis guarantees that, when a domain controller is looking at the part\nof the hierarchy which has it enabled, processes are always only on\nthe leaves.  This rules out situations where child cgroups compete\nagainst internal processes of the parent.\n\nThe root cgroup is exempt from this restriction.  Root contains\nprocesses and anonymous resource consumption which can't be associated\nwith any other cgroups and requires special treatment from most\ncontrollers.  
How resource consumption in the root cgroup is governed\nis up to each controller (for more information on this topic please\nrefer to the Non-normative information section in the Controllers\nchapter).\n\nNote that the restriction doesn't get in the way if there is no\nenabled controller in the cgroup's \"cgroup.subtree_control\".  This is\nimportant as otherwise it wouldn't be possible to create children of a\npopulated cgroup.  To control resource distribution of a cgroup, the\ncgroup must create children and transfer all its processes to the\nchildren before enabling controllers in its \"cgroup.subtree_control\"\nfile.\n\n\nDelegation\n----------\n\nModel of Delegation\n~~~~~~~~~~~~~~~~~~~\n\nA cgroup can be delegated in two ways.  First, to a less privileged\nuser by granting write access of the directory and its \"cgroup.procs\",\n\"cgroup.threads\" and \"cgroup.subtree_control\" files to the user.\nSecond, if the \"nsdelegate\" mount option is set, automatically to a\ncgroup namespace on namespace creation.\n\nBecause the resource control interface files in a given directory\ncontrol the distribution of the parent's resources, the delegatee\nshouldn't be allowed to write to them.  For the first method, this is\nachieved by not granting access to these files.  For the second, files\noutside the namespace should be hidden from the delegatee by the means\nof at least mount namespacing, and the kernel rejects writes to all\nfiles on a namespace root from inside the cgroup namespace, except for\nthose files listed in \"\/sys\/kernel\/cgroup\/delegate\" (including\n\"cgroup.procs\", \"cgroup.threads\", \"cgroup.subtree_control\", etc.).\n\nThe end results are equivalent for both delegation types.  Once\ndelegated, the user can build sub-hierarchy under the directory,\norganize processes inside it as it sees fit and further distribute the\nresources it received from the parent.  
The limits and other settings\nof all resource controllers are hierarchical and regardless of what\nhappens in the delegated sub-hierarchy, nothing can escape the\nresource restrictions imposed by the parent.\n\nCurrently, cgroup doesn't impose any restrictions on the number of\ncgroups in or nesting depth of a delegated sub-hierarchy; however,\nthis may be limited explicitly in the future.\n\n\nDelegation Containment\n~~~~~~~~~~~~~~~~~~~~~~\n\nA delegated sub-hierarchy is contained in the sense that processes\ncan't be moved into or out of the sub-hierarchy by the delegatee.\n\nFor delegations to a less privileged user, this is achieved by\nrequiring the following conditions for a process with a non-root euid\nto migrate a target process into a cgroup by writing its PID to the\n\"cgroup.procs\" file.\n\n- The writer must have write access to the \"cgroup.procs\" file.\n\n- The writer must have write access to the \"cgroup.procs\" file of the\n  common ancestor of the source and destination cgroups.\n\nThe above two constraints ensure that while a delegatee may migrate\nprocesses around freely in the delegated sub-hierarchy it can't pull\nin from or push out to outside the sub-hierarchy.\n\nFor an example, let's assume cgroups C0 and C1 have been delegated to\nuser U0 who created C00, C01 under C0 and C10 under C1 as follows and\nall processes under C0 and C1 belong to U0::\n\n  ~~~~~~~~~~~~~ - C0 - C00\n  ~ cgroup    ~      \\ C01\n  ~ hierarchy ~\n  ~~~~~~~~~~~~~ - C1 - C10\n\nLet's also say U0 wants to write the PID of a process which is\ncurrently in C10 into \"C00\/cgroup.procs\".  
U0 has write access to the\nfile; however, the common ancestor of the source cgroup C10 and the\ndestination cgroup C00 is above the points of delegation and U0 would\nnot have write access to its \"cgroup.procs\" file and thus the write\nwill be denied with -EACCES.\n\nFor delegations to namespaces, containment is achieved by requiring\nthat both the source and destination cgroups are reachable from the\nnamespace of the process which is attempting the migration.  If either\nis not reachable, the migration is rejected with -ENOENT.\n\n\nGuidelines\n----------\n\nOrganize Once and Control\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\nMigrating a process across cgroups is a relatively expensive operation\nand stateful resources such as memory are not moved together with the\nprocess.  This is an explicit design decision as there often exist\ninherent trade-offs between migration and various hot paths in terms\nof synchronization cost.\n\nAs such, migrating processes across cgroups frequently as a means to\napply different resource restrictions is discouraged.  A workload\nshould be assigned to a cgroup according to the system's logical and\nresource structure once on start-up.  Dynamic adjustments to resource\ndistribution can be made by changing controller configuration through\nthe interface files.\n\n\nAvoid Name Collisions\n~~~~~~~~~~~~~~~~~~~~~\n\nInterface files for a cgroup and its child cgroups occupy the same\ndirectory and it is possible to create child cgroups which collide\nwith interface files.\n\nAll cgroup core interface files are prefixed with \"cgroup.\" and each\ncontroller's interface files are prefixed with the controller name and\na dot.  A controller's name is composed of lower case letters and\n'_'s but never begins with an '_' so it can be used as the prefix\ncharacter for collision avoidance.  
Also, interface file names won't\nstart or end with terms which are often used in categorizing workloads\nsuch as job, service, slice, unit or workload.\n\ncgroup doesn't do anything to prevent name collisions and it's the\nuser's responsibility to avoid them.\n\n\nResource Distribution Models\n============================\n\ncgroup controllers implement several resource distribution schemes\ndepending on the resource type and expected use cases.  This section\ndescribes major schemes in use along with their expected behaviors.\n\n\nWeights\n-------\n\nA parent's resource is distributed by adding up the weights of all\nactive children and giving each the fraction matching the ratio of its\nweight against the sum.  As only children which can make use of the\nresource at the moment participate in the distribution, this is\nwork-conserving.  Due to the dynamic nature, this model is usually\nused for stateless resources.\n\nAll weights are in the range [1, 10000] with the default at 100.  This\nallows symmetric multiplicative biases in both directions at fine\nenough granularity while staying in the intuitive range.\n\nAs long as the weight is in range, all configuration combinations are\nvalid and there is no reason to reject configuration changes or\nprocess migrations.\n\n\"cpu.weight\" proportionally distributes CPU cycles to active children\nand is an example of this type.\n\n\n.. 
_cgroupv2-limits-distributor:\n\nLimits\n------\n\nA child can only consume up to the configured amount of the resource.\nLimits can be over-committed - the sum of the limits of children can\nexceed the amount of resource available to the parent.\n\nLimits are in the range [0, max] and default to \"max\", which is a\nno-op.\n\nAs limits can be over-committed, all configuration combinations are\nvalid and there is no reason to reject configuration changes or\nprocess migrations.\n\n\"io.max\" limits the maximum BPS and\/or IOPS that a cgroup can consume\non an IO device and is an example of this type.\n\n.. _cgroupv2-protections-distributor:\n\nProtections\n-----------\n\nA cgroup is protected up to the configured amount of the resource\nas long as the usages of all its ancestors are under their\nprotected levels.  Protections can be hard guarantees or best effort\nsoft boundaries.  Protections can also be over-committed, in which case\nonly up to the amount available to the parent is protected among\nchildren.\n\nProtections are in the range [0, max] and default to 0, which is a\nno-op.\n\nAs protections can be over-committed, all configuration combinations\nare valid and there is no reason to reject configuration changes or\nprocess migrations.\n\n\"memory.low\" implements best-effort memory protection and is an\nexample of this type.\n\n\nAllocations\n-----------\n\nA cgroup is exclusively allocated a certain amount of a finite\nresource.  Allocations can't be over-committed - the sum of the\nallocations of children cannot exceed the amount of resource\navailable to the parent.\n\nAllocations are in the range [0, max] and default to 0, which means no\nresource.\n\nAs allocations can't be over-committed, some configuration\ncombinations are invalid and should be rejected.  
Also, if the\nresource is mandatory for execution of processes, process migrations\nmay be rejected.\n\n\"cpu.rt.max\" hard-allocates realtime slices and is an example of this\ntype.\n\n\nInterface Files\n===============\n\nFormat\n------\n\nAll interface files should be in one of the following formats whenever\npossible::\n\n  New-line separated values\n  (when only one value can be written at once)\n\n\tVAL0\\n\n\tVAL1\\n\n\t...\n\n  Space separated values\n  (when read-only or multiple values can be written at once)\n\n\tVAL0 VAL1 ...\\n\n\n  Flat keyed\n\n\tKEY0 VAL0\\n\n\tKEY1 VAL1\\n\n\t...\n\n  Nested keyed\n\n\tKEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...\n\tKEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...\n\t...\n\nFor a writable file, the format for writing should generally match\nreading; however, controllers may allow omitting later fields or\nimplement restricted shortcuts for most common use cases.\n\nFor both flat and nested keyed files, only the values for a single key\ncan be written at a time.  For nested keyed files, the sub key pairs\nmay be specified in any order and not all pairs have to be specified.\n\n\nConventions\n-----------\n\n- Settings for a single feature should be contained in a single file.\n\n- The root cgroup should be exempt from resource control and thus\n  shouldn't have resource control interface files.\n\n- The default time unit is microseconds.  If a different unit is ever\n  used, an explicit unit suffix must be present.\n\n- A parts-per quantity should use a percentage decimal with at least\n  two digit fractional part - e.g. 13.40.\n\n- If a controller implements weight based resource distribution, its\n  interface file should be named \"weight\" and have the range [1,\n  10000] with 100 as the default.  
The values are chosen to allow\n  enough and symmetric bias in both directions while keeping it\n  intuitive (the default is 100%).\n\n- If a controller implements an absolute resource guarantee and\/or\n  limit, the interface files should be named \"min\" and \"max\"\n  respectively.  If a controller implements best effort resource\n  guarantee and\/or limit, the interface files should be named \"low\"\n  and \"high\" respectively.\n\n  In the above four control files, the special token \"max\" should be\n  used to represent upward infinity for both reading and writing.\n\n- If a setting has a configurable default value and keyed specific\n  overrides, the default entry should be keyed with \"default\" and\n  appear as the first entry in the file.\n\n  The default value can be updated by writing either \"default $VAL\" or\n  \"$VAL\".\n\n  When writing to update a specific override, \"default\" can be used as\n  the value to indicate removal of the override.  Override entries\n  with \"default\" as the value must not appear when read.\n\n  For example, a setting which is keyed by major:minor device numbers\n  with integer values may look like the following::\n\n    # cat cgroup-example-interface-file\n    default 150\n    8:0 300\n\n  The default value can be updated by::\n\n    # echo 125 > cgroup-example-interface-file\n\n  or::\n\n    # echo \"default 125\" > cgroup-example-interface-file\n\n  An override can be set by::\n\n    # echo \"8:16 170\" > cgroup-example-interface-file\n\n  and cleared by::\n\n    # echo \"8:0 default\" > cgroup-example-interface-file\n    # cat cgroup-example-interface-file\n    default 125\n    8:16 170\n\n- For events which are not very high frequency, an interface file\n  \"events\" should be created which lists event key value pairs.\n  Whenever a notifiable event happens, file modified event should be\n  generated on the file.\n\n\nCore Interface Files\n--------------------\n\nAll cgroup core files are prefixed with 
\"cgroup.\"\n\n  cgroup.type\n\tA read-write single value file which exists on non-root\n\tcgroups.\n\n\tWhen read, it indicates the current type of the cgroup, which\n\tcan be one of the following values.\n\n\t- \"domain\" : A normal valid domain cgroup.\n\n\t- \"domain threaded\" : A threaded domain cgroup which is\n          serving as the root of a threaded subtree.\n\n\t- \"domain invalid\" : A cgroup which is in an invalid state.\n\t  It can't be populated or have controllers enabled.  It may\n\t  be allowed to become a threaded cgroup.\n\n\t- \"threaded\" : A threaded cgroup which is a member of a\n          threaded subtree.\n\n\tA cgroup can be turned into a threaded cgroup by writing\n\t\"threaded\" to this file.\n\n  cgroup.procs\n\tA read-write new-line separated values file which exists on\n\tall cgroups.\n\n\tWhen read, it lists the PIDs of all processes which belong to\n\tthe cgroup one-per-line.  The PIDs are not ordered and the\n\tsame PID may show up more than once if the process got moved\n\tto another cgroup and then back or the PID got recycled while\n\treading.\n\n\tA PID can be written to migrate the process associated with\n\tthe PID to the cgroup.  The writer should match all of the\n\tfollowing conditions.\n\n\t- It must have write access to the \"cgroup.procs\" file.\n\n\t- It must have write access to the \"cgroup.procs\" file of the\n\t  common ancestor of the source and destination cgroups.\n\n\tWhen delegating a sub-hierarchy, write access to this file\n\tshould be granted along with the containing directory.\n\n\tIn a threaded cgroup, reading this file fails with EOPNOTSUPP\n\tas all the processes belong to the thread root.  Writing is\n\tsupported and moves every thread of the process to the cgroup.\n\n  cgroup.threads\n\tA read-write new-line separated values file which exists on\n\tall cgroups.\n\n\tWhen read, it lists the TIDs of all threads which belong to\n\tthe cgroup one-per-line.  
The TIDs are not ordered and the\n\tsame TID may show up more than once if the thread got moved to\n\tanother cgroup and then back or the TID got recycled while\n\treading.\n\n\tA TID can be written to migrate the thread associated with the\n\tTID to the cgroup.  The writer should match all of the\n\tfollowing conditions.\n\n\t- It must have write access to the \"cgroup.threads\" file.\n\n\t- The cgroup that the thread is currently in must be in the\n\t  same resource domain as the destination cgroup.\n\n\t- It must have write access to the \"cgroup.procs\" file of the\n\t  common ancestor of the source and destination cgroups.\n\n\tWhen delegating a sub-hierarchy, write access to this file\n\tshould be granted along with the containing directory.\n\n  cgroup.controllers\n\tA read-only space separated values file which exists on all\n\tcgroups.\n\n\tIt shows a space separated list of all controllers available to\n\tthe cgroup.  The controllers are not ordered.\n\n  cgroup.subtree_control\n\tA read-write space separated values file which exists on all\n\tcgroups.  Starts out empty.\n\n\tWhen read, it shows a space separated list of the controllers\n\twhich are enabled to control resource distribution from the\n\tcgroup to its children.\n\n\tA space separated list of controllers prefixed with '+' or '-'\n\tcan be written to enable or disable controllers.  A controller\n\tname prefixed with '+' enables the controller and '-'\n\tdisables it.  If a controller appears more than once on the list,\n\tthe last one is effective.  When multiple enable and disable\n\toperations are specified, either all succeed or all fail.\n\n  cgroup.events\n\tA read-only flat-keyed file which exists on non-root cgroups.\n\tThe following entries are defined.  
Unless specified\n\totherwise, a value change in this file generates a file\n\tmodified event.\n\n\t  populated\n\t\t1 if the cgroup or its descendants contains any live\n\t\tprocesses; otherwise, 0.\n\t  frozen\n\t\t1 if the cgroup is frozen; otherwise, 0.\n\n  cgroup.max.descendants\n\tA read-write single value file.  The default is \"max\".\n\n\tMaximum allowed number of descendant cgroups.\n\tIf the actual number of descendants is equal to or larger,\n\tan attempt to create a new cgroup in the hierarchy will fail.\n\n  cgroup.max.depth\n\tA read-write single value file.  The default is \"max\".\n\n\tMaximum allowed descent depth below the current cgroup.\n\tIf the actual descent depth is equal to or larger,\n\tan attempt to create a new child cgroup will fail.\n\n  cgroup.stat\n\tA read-only flat-keyed file with the following entries:\n\n\t  nr_descendants\n\t\tTotal number of visible descendant cgroups.\n\n\t  nr_dying_descendants\n\t\tTotal number of dying descendant cgroups. A cgroup becomes\n\t\tdying after being deleted by a user. The cgroup will remain\n\t\tin the dying state for some undefined time (which can depend\n\t\ton system load) before being completely destroyed.\n\n\t\tA process can't enter a dying cgroup under any circumstances,\n\t\tand a dying cgroup can't revive.\n\n\t\tA dying cgroup can consume system resources not exceeding\n\t\tlimits, which were active at the moment of cgroup deletion.\n\n\t  nr_subsys_<cgroup_subsys>\n\t\tTotal number of live cgroup subsystems (e.g. memory\n\t\tcgroup) at and beneath the current cgroup.\n\n\t  nr_dying_subsys_<cgroup_subsys>\n\t\tTotal number of dying cgroup subsystems (e.g. memory\n\t\tcgroup) at and beneath the current cgroup.\n\n  cgroup.freeze\n\tA read-write single value file which exists on non-root cgroups.\n\tAllowed values are \"0\" and \"1\". The default is \"0\".\n\n\tWriting \"1\" to the file causes freezing of the cgroup and all\n\tdescendant cgroups. 
This means that all belonging processes will\n\tbe stopped and will not run until the cgroup will be explicitly\n\tunfrozen. Freezing of the cgroup may take some time; when this action\n\tis completed, the \"frozen\" value in the cgroup.events control file\n\twill be updated to \"1\" and the corresponding notification will be\n\tissued.\n\n\tA cgroup can be frozen either by its own settings, or by settings\n\tof any ancestor cgroups. If any of ancestor cgroups is frozen, the\n\tcgroup will remain frozen.\n\n\tProcesses in the frozen cgroup can be killed by a fatal signal.\n\tThey also can enter and leave a frozen cgroup: either by an explicit\n\tmove by a user, or if freezing of the cgroup races with fork().\n\tIf a process is moved to a frozen cgroup, it stops. If a process is\n\tmoved out of a frozen cgroup, it becomes running.\n\n\tFrozen status of a cgroup doesn't affect any cgroup tree operations:\n\tit's possible to delete a frozen (and empty) cgroup, as well as\n\tcreate new sub-cgroups.\n\n  cgroup.kill\n\tA write-only single value file which exists in non-root cgroups.\n\tThe only allowed value is \"1\".\n\n\tWriting \"1\" to the file causes the cgroup and all descendant cgroups to\n\tbe killed. This means that all processes located in the affected cgroup\n\ttree will be killed via SIGKILL.\n\n\tKilling a cgroup tree will deal with concurrent forks appropriately and\n\tis protected against migrations.\n\n\tIn a threaded cgroup, writing this file fails with EOPNOTSUPP as\n\tkilling cgroups is a process directed operation, i.e. 
it affects\n\tthe whole thread-group.\n\n  cgroup.pressure\n\tA read-write single value file whose allowed values are \"0\" and \"1\".\n\tThe default is \"1\".\n\n\tWriting \"0\" to the file will disable the cgroup PSI accounting.\n\tWriting \"1\" to the file will re-enable the cgroup PSI accounting.\n\n\tThis control attribute is not hierarchical, so disabling or enabling\n\tPSI accounting in a cgroup does not affect PSI accounting in its\n\tdescendants and does not need to be enabled in any ancestors starting\n\tfrom the root.\n\n\tThe reason this control attribute exists is that PSI accounts stalls\n\tfor each cgroup separately and aggregates them at each level of the\n\thierarchy.  This may cause non-negligible overhead for some workloads\n\tat deep levels of the hierarchy, in which case this control attribute\n\tcan be used to disable PSI accounting in the non-leaf cgroups.\n\n  irq.pressure\n\tA read-write nested-keyed file.\n\n\tShows pressure stall information for IRQ\/SOFTIRQ. See\n\t:ref:`Documentation\/accounting\/psi.rst <psi>` for details.\n\nControllers\n===========\n\n.. _cgroup-v2-cpu:\n\nCPU\n---\n\nThe \"cpu\" controller regulates the distribution of CPU cycles.  This\ncontroller implements weight and absolute bandwidth limit models for\nnormal scheduling policy and an absolute bandwidth allocation model for\nrealtime scheduling policy.\n\nIn all the above models, cycles distribution is defined only on a temporal\nbasis and it does not account for the frequency at which tasks are executed.\nThe (optional) utilization clamping support allows hinting the schedutil\ncpufreq governor about the minimum desired frequency which should always be\nprovided by a CPU, as well as the maximum desired frequency, which should not\nbe exceeded by a CPU.\n\nWARNING: cgroup2 doesn't yet support control of realtime processes. 
For\na kernel built with the CONFIG_RT_GROUP_SCHED option enabled for group\nscheduling of realtime processes, the cpu controller can only be enabled\nwhen all RT processes are in the root cgroup.  This limitation does\nnot apply if CONFIG_RT_GROUP_SCHED is disabled.  Be aware that system\nmanagement software may already have placed RT processes into nonroot\ncgroups during the system boot process, and these processes may need\nto be moved to the root cgroup before the cpu controller can be enabled\nwith a CONFIG_RT_GROUP_SCHED enabled kernel.\n\n\nCPU Interface Files\n~~~~~~~~~~~~~~~~~~~\n\nAll time durations are in microseconds.\n\n  cpu.stat\n\tA read-only flat-keyed file.\n\tThis file exists whether the controller is enabled or not.\n\n\tIt always reports the following three stats:\n\n\t- usage_usec\n\t- user_usec\n\t- system_usec\n\n\tand the following five when the controller is enabled:\n\n\t- nr_periods\n\t- nr_throttled\n\t- throttled_usec\n\t- nr_bursts\n\t- burst_usec\n\n  cpu.weight\n\tA read-write single value file which exists on non-root\n\tcgroups.  The default is \"100\".\n\n\tFor non idle groups (cpu.idle = 0), the weight is in the\n\trange [1, 10000].\n\n\tIf the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),\n\tthen the weight will show as a 0.\n\n  cpu.weight.nice\n\tA read-write single value file which exists on non-root\n\tcgroups.  The default is \"0\".\n\n\tThe nice value is in the range [-20, 19].\n\n\tThis interface file is an alternative interface for\n\t\"cpu.weight\" and allows reading and setting weight using the\n\tsame values used by nice(2).  Because the range is smaller and\n\tgranularity is coarser for the nice values, the read value is\n\tthe closest approximation of the current weight.\n\n  cpu.max\n\tA read-write two value file which exists on non-root cgroups.\n\tThe default is \"max 100000\".\n\n\tThe maximum bandwidth limit.  
It's in the following format::\n\n\t  $MAX $PERIOD\n\n\twhich indicates that the group may consume up to $MAX in each\n\t$PERIOD duration.  \"max\" for $MAX indicates no limit.  If only\n\tone number is written, $MAX is updated.\n\n  cpu.max.burst\n\tA read-write single value file which exists on non-root\n\tcgroups.  The default is \"0\".\n\n\tThe burst in the range [0, $MAX].\n\n  cpu.pressure\n\tA read-write nested-keyed file.\n\n\tShows pressure stall information for CPU. See\n\t:ref:`Documentation\/accounting\/psi.rst <psi>` for details.\n\n  cpu.uclamp.min\n        A read-write single value file which exists on non-root cgroups.\n        The default is \"0\", i.e. no utilization boosting.\n\n        The requested minimum utilization (protection) as a percentage\n        rational number, e.g. 12.34 for 12.34%.\n\n        This interface allows reading and setting minimum utilization clamp\n        values similar to sched_setattr(2). This minimum utilization\n        value is used to clamp the task specific minimum utilization clamp.\n\n        The requested minimum utilization (protection) is always capped by\n        the current value for the maximum utilization (limit), i.e.\n        `cpu.uclamp.max`.\n\n  cpu.uclamp.max\n        A read-write single value file which exists on non-root cgroups.\n        The default is \"max\", i.e. no utilization capping.\n\n        The requested maximum utilization (limit) as a percentage rational\n        number, e.g. 98.76 for 98.76%.\n\n        This interface allows reading and setting maximum utilization clamp\n        values similar to sched_setattr(2). This maximum utilization\n        value is used to clamp the task specific maximum utilization clamp.\n\n  cpu.idle\n\tA read-write single value file which exists on non-root cgroups.\n\tThe default is 0.\n\n\tThis is the cgroup analog of the per-task SCHED_IDLE sched policy.\n\tSetting this value to 1 will make the scheduling policy of the\n\tcgroup SCHED_IDLE. 
The threads inside the cgroup will retain their\n\town relative priorities, but the cgroup itself will be treated as\n\tvery low priority relative to its peers.\n\n\n\nMemory\n------\n\nThe \"memory\" controller regulates distribution of memory.  Memory is\nstateful and implements both limit and protection models.  Due to the\nintertwining between memory usage and reclaim pressure and the\nstateful nature of memory, the distribution model is relatively\ncomplex.\n\nWhile not completely water-tight, all major memory usages by a given\ncgroup are tracked so that the total memory consumption can be\naccounted and controlled to a reasonable extent.  Currently, the\nfollowing types of memory usages are tracked.\n\n- Userland memory - page cache and anonymous memory.\n\n- Kernel data structures such as dentries and inodes.\n\n- TCP socket buffers.\n\nThe above list may expand in the future for better coverage.\n\n\nMemory Interface Files\n~~~~~~~~~~~~~~~~~~~~~~\n\nAll memory amounts are in bytes.  If a value which is not aligned to\nPAGE_SIZE is written, the value may be rounded up to the closest\nPAGE_SIZE multiple when read back.\n\n  memory.current\n\tA read-only single value file which exists on non-root\n\tcgroups.\n\n\tThe total amount of memory currently being used by the cgroup\n\tand its descendants.\n\n  memory.min\n\tA read-write single value file which exists on non-root\n\tcgroups.  The default is \"0\".\n\n\tHard memory protection.  If the memory usage of a cgroup\n\tis within its effective min boundary, the cgroup's memory\n\twon't be reclaimed under any conditions. If there is no\n\tunprotected reclaimable memory available, OOM killer\n\tis invoked. Above the effective min boundary (or\n\teffective low boundary if it is higher), pages are reclaimed\n\tproportionally to the overage, reducing reclaim pressure for\n\tsmaller overages.\n\n\tEffective min boundary is limited by memory.min values of\n\tall ancestor cgroups. 
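The capping by ancestors described above can be illustrated with a toy model (an illustration only, with hypothetical names; the kernel's real calculation also distributes a parent's protection proportionally among children under overcommit):

```python
# Toy model: a cgroup's effective memory.min can never exceed the
# memory.min of any ancestor on the path from the root.
def effective_min(chain):
    """chain: memory.min values from the topmost ancestor down to the
    cgroup itself; returns the upper bound on effective protection."""
    eff = float("inf")
    for m in chain:
        eff = min(eff, m)
    return eff

# A cgroup asking for 512M under ancestors granting 1G and 256M is
# effectively protected by at most 256M.
bound = effective_min([1 << 30, 256 << 20, 512 << 20])
```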
If there is memory.min overcommitment
	(child cgroups claim more protected memory than the parent
	allows), then each child cgroup will get the part of the
	parent's protection proportional to its actual memory usage
	below memory.min.

	Putting more memory than generally available under this
	protection is discouraged and may lead to constant OOMs.

	If a memory cgroup is not populated with processes,
	its memory.min is ignored.

  memory.low
	A read-write single value file which exists on non-root
	cgroups.  The default is "0".

	Best-effort memory protection.  If the memory usage of a
	cgroup is within its effective low boundary, the cgroup's
	memory won't be reclaimed unless there is no reclaimable
	memory available in unprotected cgroups.
	Above the effective low boundary (or
	effective min boundary if it is higher), pages are reclaimed
	proportionally to the overage, reducing reclaim pressure for
	smaller overages.

	Effective low boundary is limited by memory.low values of
	all ancestor cgroups. If there is memory.low overcommitment
	(child cgroups claim more protected memory than the parent
	allows), then each child cgroup will get the part of the
	parent's protection proportional to its actual memory usage
	below memory.low.

	Putting more memory than generally available under this
	protection is discouraged.

  memory.high
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Memory usage throttle limit.  If a cgroup's usage goes
	over the high boundary, the processes of the cgroup are
	throttled and put under heavy reclaim pressure.

	Going over the high limit never invokes the OOM killer and
	under extreme conditions the limit may be breached. 
The high\n\tlimit should be used in scenarios where an external process\n\tmonitors the limited cgroup to alleviate heavy reclaim\n\tpressure.\n\n  memory.max\n\tA read-write single value file which exists on non-root\n\tcgroups.  The default is \"max\".\n\n\tMemory usage hard limit.  This is the main mechanism to limit\n\tmemory usage of a cgroup.  If a cgroup's memory usage reaches\n\tthis limit and can't be reduced, the OOM killer is invoked in\n\tthe cgroup. Under certain circumstances, the usage may go\n\tover the limit temporarily.\n\n\tIn default configuration regular 0-order allocations always\n\tsucceed unless OOM killer chooses current task as a victim.\n\n\tSome kinds of allocations don't invoke the OOM killer.\n\tCaller could retry them differently, return into userspace\n\tas -ENOMEM or silently ignore in cases like disk readahead.\n\n  memory.reclaim\n\tA write-only nested-keyed file which exists for all cgroups.\n\n\tThis is a simple interface to trigger memory reclaim in the\n\ttarget cgroup.\n\n\tExample::\n\n\t  echo \"1G\" > memory.reclaim\n\n\tPlease note that the kernel can over or under reclaim from\n\tthe target cgroup. If less bytes are reclaimed than the\n\tspecified amount, -EAGAIN is returned.\n\n\tPlease note that the proactive reclaim (triggered by this\n\tinterface) is not meant to indicate memory pressure on the\n\tmemory cgroup. Therefore socket memory balancing triggered by\n\tthe memory reclaim normally is not exercised in this case.\n\tThis means that the networking layer will not adapt based on\n\treclaim induced by memory.reclaim.\n\nThe following nested keys are defined.\n\n\t  ==========            ================================\n\t  swappiness            Swappiness value to reclaim with\n\t  ==========            ================================\n\n\tSpecifying a swappiness value instructs the kernel to perform\n\tthe reclaim with that swappiness value. 
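Writes to memory.reclaim are plain strings such as "1G" or "1G swappiness=60". Composing them can be sketched as follows (the helper name is hypothetical; the 0-200 range mirrors vm.swappiness on recent kernels):

```python
def reclaim_string(amount, swappiness=None):
    """Compose a memory.reclaim write such as "1G swappiness=60"."""
    if swappiness is not None:
        if not 0 <= swappiness <= 200:
            raise ValueError("swappiness out of range")
        return f"{amount} swappiness={swappiness}"
    return amount

# The result would then be written to the cgroup's control file, e.g.:
#   open("/sys/fs/cgroup/<group>/memory.reclaim", "w").write(
#       reclaim_string("1G", swappiness=60))
```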
Note that this has the
	same semantics as vm.swappiness applied to memcg reclaim with
	all the existing limitations and potential future extensions.

  memory.peak
	A read-write single value file which exists on non-root cgroups.

	The max memory usage recorded for the cgroup and its descendants since
	either the creation of the cgroup or the most recent reset for that FD.

	A write of any non-empty string to this file resets it to the
	current memory usage for subsequent reads through the same
	file descriptor.

  memory.oom.group
	A read-write single value file which exists on non-root
	cgroups.  The default value is "0".

	Determines whether the cgroup should be treated as
	an indivisible workload by the OOM killer. If set,
	all tasks belonging to the cgroup or to its descendants
	(if the memory cgroup is not a leaf cgroup) are killed
	together or not at all. This can be used to avoid
	partial kills to guarantee workload integrity.

	Tasks with the OOM protection (oom_score_adj set to -1000)
	are treated as an exception and are never killed.

	If the OOM killer is invoked in a cgroup, it's not going
	to kill any tasks outside of this cgroup, regardless of the
	memory.oom.group values of ancestor cgroups.

  memory.events
	A read-only flat-keyed file which exists on non-root cgroups.
	The following entries are defined.  Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	Note that all fields in this file are hierarchical and the
	file modified event can be generated due to an event down the
	hierarchy. For the local events at the cgroup level see
	memory.events.local.

	  low
		The number of times the cgroup is reclaimed due to
		high memory pressure even though its usage is under
		the low boundary.  
This usually indicates that the low
		boundary is over-committed.

	  high
		The number of times processes of the cgroup are
		throttled and routed to perform direct memory reclaim
		because the high memory boundary was exceeded.  For a
		cgroup whose memory usage is capped by the high limit
		rather than global memory pressure, this event's
		occurrences are expected.

	  max
		The number of times the cgroup's memory usage was
		about to go over the max boundary.  If direct reclaim
		fails to bring it down, the cgroup goes to OOM state.

	  oom
		The number of times the cgroup's memory usage
		reached the limit and allocation was about to fail.

		This event is not raised if the OOM killer is not
		considered as an option, e.g. for failed high-order
		allocations or if the caller asked not to retry.

	  oom_kill
		The number of processes belonging to this cgroup
		killed by any kind of OOM killer.

	  oom_group_kill
		The number of times a group OOM has occurred.

  memory.events.local
	Similar to memory.events but the fields in the file are local
	to the cgroup, i.e. not hierarchical. The file modified event
	generated on this file reflects only the local events.

  memory.stat
	A read-only flat-keyed file which exists on non-root cgroups.

	This breaks down the cgroup's memory footprint into different
	types of memory, type-specific details, and other information
	on the state and past events of the memory management system.

	All memory amounts are in bytes.

	The entries are ordered to be human readable, and new entries
	can show up in the middle. Don't rely on items remaining in a
	fixed position; use the keys to look up specific values!

	Some entries have no per-node counter and do not appear in
	memory.numa_stat. 
We use 'npn' (non-per-node) as the tag\n\tto indicate that it will not show in the memory.numa_stat.\n\n\t  anon\n\t\tAmount of memory used in anonymous mappings such as\n\t\tbrk(), sbrk(), and mmap(MAP_ANONYMOUS)\n\n\t  file\n\t\tAmount of memory used to cache filesystem data,\n\t\tincluding tmpfs and shared memory.\n\n\t  kernel (npn)\n\t\tAmount of total kernel memory, including\n\t\t(kernel_stack, pagetables, percpu, vmalloc, slab) in\n\t\taddition to other kernel memory use cases.\n\n\t  kernel_stack\n\t\tAmount of memory allocated to kernel stacks.\n\n\t  pagetables\n                Amount of memory allocated for page tables.\n\n\t  sec_pagetables\n\t\tAmount of memory allocated for secondary page tables,\n\t\tthis currently includes KVM mmu allocations on x86\n\t\tand arm64 and IOMMU page tables.\n\n\t  percpu (npn)\n\t\tAmount of memory used for storing per-cpu kernel\n\t\tdata structures.\n\n\t  sock (npn)\n\t\tAmount of memory used in network transmission buffers\n\n\t  vmalloc (npn)\n\t\tAmount of memory used for vmap backed memory.\n\n\t  shmem\n\t\tAmount of cached filesystem data that is swap-backed,\n\t\tsuch as tmpfs, shm segments, shared anonymous mmap()s\n\n\t  zswap\n\t\tAmount of memory consumed by the zswap compression backend.\n\n\t  zswapped\n\t\tAmount of application memory swapped out to zswap.\n\n\t  file_mapped\n\t\tAmount of cached filesystem data mapped with mmap()\n\n\t  file_dirty\n\t\tAmount of cached filesystem data that was modified but\n\t\tnot yet written back to disk\n\n\t  file_writeback\n\t\tAmount of cached filesystem data that was modified and\n\t\tis currently being written back to disk\n\n\t  swapcached\n\t\tAmount of swap cached in memory. 
The swapcache is accounted\n\t\tagainst both memory and swap usage.\n\n\t  anon_thp\n\t\tAmount of memory used in anonymous mappings backed by\n\t\ttransparent hugepages\n\n\t  file_thp\n\t\tAmount of cached filesystem data backed by transparent\n\t\thugepages\n\n\t  shmem_thp\n\t\tAmount of shm, tmpfs, shared anonymous mmap()s backed by\n\t\ttransparent hugepages\n\n\t  inactive_anon, active_anon, inactive_file, active_file, unevictable\n\t\tAmount of memory, swap-backed and filesystem-backed,\n\t\ton the internal memory management lists used by the\n\t\tpage reclaim algorithm.\n\n\t\tAs these represent internal list state (eg. shmem pages are on anon\n\t\tmemory management lists), inactive_foo + active_foo may not be equal to\n\t\tthe value for the foo counter, since the foo counter is type-based, not\n\t\tlist-based.\n\n\t  slab_reclaimable\n\t\tPart of \"slab\" that might be reclaimed, such as\n\t\tdentries and inodes.\n\n\t  slab_unreclaimable\n\t\tPart of \"slab\" that cannot be reclaimed on memory\n\t\tpressure.\n\n\t  slab (npn)\n\t\tAmount of memory used for storing in-kernel data\n\t\tstructures.\n\n\t  workingset_refault_anon\n\t\tNumber of refaults of previously evicted anonymous pages.\n\n\t  workingset_refault_file\n\t\tNumber of refaults of previously evicted file pages.\n\n\t  workingset_activate_anon\n\t\tNumber of refaulted anonymous pages that were immediately\n\t\tactivated.\n\n\t  workingset_activate_file\n\t\tNumber of refaulted file pages that were immediately activated.\n\n\t  workingset_restore_anon\n\t\tNumber of restored anonymous pages which have been detected as\n\t\tan active workingset before they got reclaimed.\n\n\t  workingset_restore_file\n\t\tNumber of restored file pages which have been detected as an\n\t\tactive workingset before they got reclaimed.\n\n\t  workingset_nodereclaim\n\t\tNumber of times a shadow node has been reclaimed\n\n\t  pgscan (npn)\n\t\tAmount of scanned pages (in an inactive LRU list)\n\n\t  pgsteal 
(npn)\n\t\tAmount of reclaimed pages\n\n\t  pgscan_kswapd (npn)\n\t\tAmount of scanned pages by kswapd (in an inactive LRU list)\n\n\t  pgscan_direct (npn)\n\t\tAmount of scanned pages directly  (in an inactive LRU list)\n\n\t  pgscan_khugepaged (npn)\n\t\tAmount of scanned pages by khugepaged  (in an inactive LRU list)\n\n\t  pgsteal_kswapd (npn)\n\t\tAmount of reclaimed pages by kswapd\n\n\t  pgsteal_direct (npn)\n\t\tAmount of reclaimed pages directly\n\n\t  pgsteal_khugepaged (npn)\n\t\tAmount of reclaimed pages by khugepaged\n\n\t  pgfault (npn)\n\t\tTotal number of page faults incurred\n\n\t  pgmajfault (npn)\n\t\tNumber of major page faults incurred\n\n\t  pgrefill (npn)\n\t\tAmount of scanned pages (in an active LRU list)\n\n\t  pgactivate (npn)\n\t\tAmount of pages moved to the active LRU list\n\n\t  pgdeactivate (npn)\n\t\tAmount of pages moved to the inactive LRU list\n\n\t  pglazyfree (npn)\n\t\tAmount of pages postponed to be freed under memory pressure\n\n\t  pglazyfreed (npn)\n\t\tAmount of reclaimed lazyfree pages\n\n\t  swpin_zero\n\t\tNumber of pages swapped into memory and filled with zero, where I\/O\n\t\twas optimized out because the page content was detected to be zero\n\t\tduring swapout.\n\n\t  swpout_zero\n\t\tNumber of zero-filled pages swapped out with I\/O skipped due to the\n\t\tcontent being detected as zero.\n\n\t  zswpin\n\t\tNumber of pages moved in to memory from zswap.\n\n\t  zswpout\n\t\tNumber of pages moved out of memory to zswap.\n\n\t  zswpwb\n\t\tNumber of pages written from zswap to swap.\n\n\t  thp_fault_alloc (npn)\n\t\tNumber of transparent hugepages which were allocated to satisfy\n\t\ta page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE\n                is not set.\n\n\t  thp_collapse_alloc (npn)\n\t\tNumber of transparent hugepages which were allocated to allow\n\t\tcollapsing an existing range of pages. 
This counter is not
		present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

	  thp_swpout (npn)
		Number of transparent hugepages which were swapped out
		in one piece without splitting.

	  thp_swpout_fallback (npn)
		Number of transparent hugepages which were split before
		swapout, usually because contiguous swap space could not
		be allocated for the huge page.

	  numa_pages_migrated (npn)
		Number of pages migrated by NUMA balancing.

	  numa_pte_updates (npn)
		Number of pages whose page table entries are modified by
		NUMA balancing to produce NUMA hinting faults on access.

	  numa_hint_faults (npn)
		Number of NUMA hinting faults.

	  pgdemote_kswapd
		Number of pages demoted by kswapd.

	  pgdemote_direct
		Number of pages demoted directly.

	  pgdemote_khugepaged
		Number of pages demoted by khugepaged.

	  hugetlb
		Amount of memory used by hugetlb pages. This metric only shows
		up if hugetlb usage is accounted for in memory.current (i.e.
		cgroup is mounted with the memory_hugetlb_accounting option).

  memory.numa_stat
	A read-only nested-keyed file which exists on non-root cgroups.

	This breaks down the cgroup's memory footprint into different
	types of memory, type-specific details, and other information
	per node on the state of the memory management system.

	This is useful for providing visibility into the NUMA locality
	information within a memcg since the pages are allowed to be
	allocated from any physical node. One use case is evaluating
	application performance by combining this information with the
	application's CPU allocation.

	All memory amounts are in bytes.

	The output format of memory.numa_stat is::

	  type N0=<bytes in node 0> N1=<bytes in node 1> ...

	The entries are ordered to be human readable, and new entries
	can show up in the middle. 
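Given the `type N0=<bytes> N1=<bytes> ...` format above, key-based lookups can be sketched as follows (a tolerant parser for illustration, not an official API; the sample values are made up):

```python
def parse_numa_stat(text):
    """Parse memory.numa_stat content into {type: {node: bytes}}."""
    stats = {}
    for line in text.splitlines():
        typ, _, rest = line.partition(" ")
        per_node = {}
        for tok in rest.split():
            node, _, val = tok.partition("=")
            per_node[int(node.lstrip("N"))] = int(val)
        stats[typ] = per_node
    return stats

sample = "anon N0=1053184 N1=344064\nfile N0=0 N1=184320"
anon_on_node0 = parse_numa_stat(sample)["anon"][0]
```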
Don't rely on items remaining in a\n\tfixed position; use the keys to look up specific values!\n\n\tThe entries can refer to the memory.stat.\n\n  memory.swap.current\n\tA read-only single value file which exists on non-root\n\tcgroups.\n\n\tThe total amount of swap currently being used by the cgroup\n\tand its descendants.\n\n  memory.swap.high\n\tA read-write single value file which exists on non-root\n\tcgroups.  The default is \"max\".\n\n\tSwap usage throttle limit.  If a cgroup's swap usage exceeds\n\tthis limit, all its further allocations will be throttled to\n\tallow userspace to implement custom out-of-memory procedures.\n\n\tThis limit marks a point of no return for the cgroup. It is NOT\n\tdesigned to manage the amount of swapping a workload does\n\tduring regular operation. Compare to memory.swap.max, which\n\tprohibits swapping past a set amount, but lets the cgroup\n\tcontinue unimpeded as long as other memory can be reclaimed.\n\n\tHealthy workloads are not expected to reach this limit.\n\n  memory.swap.peak\n\tA read-write single value file which exists on non-root cgroups.\n\n\tThe max swap usage recorded for the cgroup and its descendants since\n\tthe creation of the cgroup or the most recent reset for that FD.\n\n\tA write of any non-empty string to this file resets it to the\n\tcurrent memory usage for subsequent reads through the same\n\tfile descriptor.\n\n  memory.swap.max\n\tA read-write single value file which exists on non-root\n\tcgroups.  The default is \"max\".\n\n\tSwap usage hard limit.  If a cgroup's swap usage reaches this\n\tlimit, anonymous memory of the cgroup will not be swapped out.\n\n  memory.swap.events\n\tA read-only flat-keyed file which exists on non-root cgroups.\n\tThe following entries are defined.  
Unless specified
	otherwise, a value change in this file generates a file
	modified event.

	  high
		The number of times the cgroup's swap usage was over
		the high threshold.

	  max
		The number of times the cgroup's swap usage was about
		to go over the max boundary and swap allocation
		failed.

	  fail
		The number of times swap allocation failed, either
		because the system ran out of swap or because the max
		limit was reached.

	When reduced under the current usage, the existing swap
	entries are reclaimed gradually and the swap usage may stay
	higher than the limit for an extended period of time.  This
	reduces the impact on the workload and memory management.

  memory.zswap.current
	A read-only single value file which exists on non-root
	cgroups.

	The total amount of memory consumed by the zswap compression
	backend.

  memory.zswap.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".

	Zswap usage hard limit. If a cgroup's zswap pool reaches this
	limit, it will refuse to take any more stores before existing
	entries fault back in or are written out to disk.

  memory.zswap.writeback
	A read-write single value file. The default value is "1".
	Note that this setting is hierarchical, i.e. the writeback would be
	implicitly disabled for child cgroups if the upper hierarchy
	does so.

	When this is set to 0, all attempts to swap out to backing
	devices are disabled. This includes both zswap writeback and
	swapping due to zswap store failures. 
If zswap store failures are recurring
	(e.g. if the pages are incompressible), users can observe
	reclaim inefficiency after disabling writeback (because the same
	pages might be rejected again and again).

	Note that this is subtly different from setting memory.swap.max to
	0, as it still allows for pages to be written to the zswap pool.
	This setting has no effect if zswap is disabled, and swapping
	is allowed unless memory.swap.max is set to 0.

  memory.pressure
	A read-only nested-keyed file.

	Shows pressure stall information for memory. See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
the network to a file can use all available memory but can also perform
just as well with a small amount of memory.  A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
implemented yet.


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released.  
Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is indeterministic; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources.  This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
	A read-only nested-keyed file.

	Lines are keyed by $MAJ:$MIN device numbers and not ordered.
	The following nested keys are defined.

	  ======	=====================
	  rbytes	Bytes read
	  wbytes	Bytes written
	  rios		Number of read IOs
	  wios		Number of write IOs
	  dbytes	Bytes discarded
	  dios		Number of discard IOs
	  ======	=====================

	An example read output follows::

	  8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
	  8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
	A read-write nested-keyed file which exists only on the root
	cgroup.

	This file configures the Quality of Service of the IO cost
	model based controller (CONFIG_BLK_CGROUP_IOCOST) which
	currently implements "io.weight" proportional control.  Lines
	are keyed by $MAJ:$MIN device numbers and not ordered. 
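The nested-keyed io files in this section (io.stat, io.cost.qos, io.max) share a `$MAJ:$MIN key=value ...` line format. A reader can be sketched as follows (illustration only; "max" is kept as a string since it is a legal value in files like io.max, and decimal values such as io.cost.qos percentages would need further handling):

```python
def parse_nested_keyed(text):
    """Parse `MAJ:MIN key=val ...` lines into {(maj, min): {key: val}}."""
    out = {}
    for line in text.splitlines():
        dev, _, rest = line.partition(" ")
        major, minor = (int(x) for x in dev.split(":"))
        kv = {}
        for tok in rest.split():
            key, _, val = tok.partition("=")
            kv[key] = val if val == "max" else int(val)
        out[(major, minor)] = kv
    return out

sample = "8:16 rbps=2097152 wbps=max riops=max wiops=120"
limits = parse_nested_keyed(sample)[(8, 16)]
```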
 The\n\tline for a given device is populated on the first write for\n\tthe device on \"io.cost.qos\" or \"io.cost.model\".  The following\n\tnested keys are defined.\n\n\t  ======\t=====================================\n\t  enable\tWeight-based control enable\n\t  ctrl\t\t\"auto\" or \"user\"\n\t  rpct\t\tRead latency percentile    [0, 100]\n\t  rlat\t\tRead latency threshold\n\t  wpct\t\tWrite latency percentile   [0, 100]\n\t  wlat\t\tWrite latency threshold\n\t  min\t\tMinimum scaling percentage [1, 10000]\n\t  max\t\tMaximum scaling percentage [1, 10000]\n\t  ======\t=====================================\n\n\tThe controller is disabled by default and can be enabled by\n\tsetting \"enable\" to 1.  \"rpct\" and \"wpct\" parameters default\n\tto zero and the controller uses internal device saturation\n\tstate to adjust the overall IO rate between \"min\" and \"max\".\n\n\tWhen a better control quality is needed, latency QoS\n\tparameters can be configured.  For example::\n\n\t  8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0\n\n\tshows that on sdb, the controller is enabled, will consider\n\tthe device saturated if the 95th percentile of read completion\n\tlatencies is above 75ms or write 150ms, and adjust the overall\n\tIO issue rate between 50% and 150% accordingly.\n\n\tThe lower the saturation point, the better the latency QoS at\n\tthe cost of aggregate bandwidth.  The narrower the allowed\n\tadjustment range between \"min\" and \"max\", the more conformant\n\tto the cost model the IO behavior.  Note that the IO issue\n\tbase rate may be far off from 100% and setting \"min\" and \"max\"\n\tblindly can lead to a significant loss of device capacity or\n\tcontrol quality.  \"min\" and \"max\" are useful for regulating\n\tdevices which show wide temporary behavior changes - e.g. 
an
	SSD which accepts writes at the line speed for a while and
	then completely stalls for multiple seconds.

	When "ctrl" is "auto", the parameters are controlled by the
	kernel and may change automatically.  Setting "ctrl" to "user"
	or setting any of the percentile and latency parameters puts
	it into "user" mode and disables the automatic changes.  The
	automatic mode can be restored by setting "ctrl" to "auto".

  io.cost.model
	A read-write nested-keyed file which exists only on the root
	cgroup.

	This file configures the cost model of the IO cost model based
	controller (CONFIG_BLK_CGROUP_IOCOST) which currently
	implements "io.weight" proportional control.  Lines are keyed
	by $MAJ:$MIN device numbers and not ordered.  The line for a
	given device is populated on the first write for the device on
	"io.cost.qos" or "io.cost.model".  The following nested keys
	are defined.

	  =====		================================
	  ctrl		"auto" or "user"
	  model		The cost model in use - "linear"
	  =====		================================

	When "ctrl" is "auto", the kernel may change all parameters
	dynamically.  When "ctrl" is set to "user" or any other
	parameter is written to, "ctrl" becomes "user" and the
	automatic changes are disabled.

	When "model" is "linear", the following model parameters are
	defined.

	  =============	========================================
	  [r|w]bps	The maximum sequential IO throughput
	  [r|w]seqiops	The maximum 4k sequential IOs per second
	  [r|w]randiops	The maximum 4k random IOs per second
	  =============	========================================

	From the above, the builtin linear model determines the base
	costs of a sequential and random IO and the cost coefficient
	for the IO size.  
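As a rough illustration of a linear cost model (a toy interpretation in fractional device-seconds, not the kernel's actual vtime arithmetic; the parameter names mirror the table above):

```python
def linear_io_cost(size, random, bps, seqiops, randiops, page=4096):
    """Toy linear cost of a single IO.

    The base cost comes from the 4k IOPS parameters; bytes beyond
    the first 4k are charged at the sequential throughput rate.
    """
    base = 1.0 / (randiops if random else seqiops)
    extra = max(0, size - page) / bps
    return base + extra

# A 4k random IO on a device rated at 125000 random 4k IOPS costs
# the equivalent of 8 microseconds of device time in this toy model.
cost = linear_io_cost(4096, True, bps=500e6, seqiops=200000, randiops=125000)
```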
While simple, this model can cover most
	common device classes acceptably.

	The IO cost model isn't expected to be accurate in an absolute
	sense and is scaled to the device behavior dynamically.

	If needed, tools/cgroup/iocost_coef_gen.py can be used to
	generate device-specific coefficients.

  io.weight
	A read-write flat-keyed file which exists on non-root cgroups.
	The default is "default 100".

	The first line is the default weight applied to devices
	without specific override.  The rest are overrides keyed by
	$MAJ:$MIN device numbers and not ordered.  The weights are in
	the range [1, 10000] and specify the relative amount of IO time
	the cgroup can use in relation to its siblings.

	The default weight can be updated by writing either "default
	$WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
	"$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

	An example read output follows::

	  default 100
	  8:16 200
	  8:0 50

  io.max
	A read-write nested-keyed file which exists on non-root
	cgroups.

	BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
	device numbers and not ordered.  The following nested keys are
	defined.

	  =====		==================================
	  rbps		Max read bytes per second
	  wbps		Max write bytes per second
	  riops		Max read IO operations per second
	  wiops		Max write IO operations per second
	  =====		==================================

	When writing, any number of nested key-value pairs can be
	specified in any order.  "max" can be specified as the value
	to remove a specific limit.  If the same key is specified
	multiple times, the outcome is undefined.

	BPS and IOPS are measured in each IO direction and IOs are
	delayed if the limit is reached.  
Temporary bursts are allowed.

	Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

	  echo "8:16 rbps=2097152 wiops=120" > io.max

	Reading returns the following::

	  8:16 rbps=2097152 wbps=max riops=max wiops=120

	Write IOPS limit can be removed by writing the following::

	  echo "8:16 wiops=max" > io.max

	Reading now returns the following::

	  8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
	A read-only nested-keyed file.

	Shows pressure stall information for IO. See
	:ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism.  Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs.  The memory controller
defines the memory domain that dirty memory ratio is calculated and
maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain.  Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  
For the purpose of writeback, an
inode is assigned to a cgroup and all IO requests to write dirty pages
from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with.  These are called foreign pages.  The writeback
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well.  In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected.  It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
	These ratios apply the same to cgroup writeback with the
	amount of available memory capped by limits imposed by the
	memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
	For cgroup writeback, this is calculated into a ratio against
	total available memory and applied the same way as
	vm.dirty[_background]_ratio.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection.  
You provide a group
with a latency target, and if the average latency exceeds that target, the
controller will throttle any peers that have a lower latency target than the
protected workload.

The limits are only applied at the peer level in the hierarchy.  This means that
in the diagram below, only groups A, B, and C will influence each other, and
groups D and F will influence each other.  Group G will influence nobody::

			[root]
		/	   |		\
		A	   B		C
	       /  \        |
	      D    F	   G


So the ideal way to configure this is to set io.latency in groups A, B, and C.
Generally you do not want to set a value lower than the latency your device
supports.  Experiment to find the value that works best for your workload.
Start higher than the expected latency for your device and watch the
avg_lat value in io.stat for your workload group to get an idea of the
latency you see during normal operation.  Use the avg_lat value as a basis for
your real setting, setting it 10-15% higher than the value in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving: as long as everybody is meeting their latency
target, the controller doesn't do anything.  Once a group starts missing its
target, it begins throttling any peer group that has a higher target than itself.
This throttling takes 2 forms:

- Queue depth throttling.  This is the number of outstanding IOs a group is
  allowed to have.  We will clamp down relatively quickly, starting at no limit
  and going all the way down to 1 IO at a time.

- Artificial delay induction.  There are certain types of IO that cannot be
  throttled without possibly adversely affecting higher priority groups.  This
  includes swapping and metadata IO.  These types of IO are allowed to occur
  normally; however, they are "charged" to the originating group.  
If the
  originating group is being throttled, you will see the use_delay and delay
  fields in io.stat increase.  The delay value is the number of microseconds
  being added to any process that runs in this group.  Because this number can
  grow quite large if there is a lot of swapping or metadata IO occurring, we
  limit the individual delay events to 1 second at a time.

Once the victimized group starts meeting its latency target again, it will start
unthrottling any peer groups that were throttled previously.  If the victimized
group simply stops doing IO, the global counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
	This takes a format similar to that of the other controllers.

		"MAJOR:MINOR target=<target time in microseconds>"

  io.stat
	If the controller is enabled, you will see extra stats in io.stat in
	addition to the normal ones.

	  depth
		This is the current queue depth for the group.

	  avg_lat
		This is an exponential moving average with a decay rate of 1/exp
		bound by the sampling interval.  The decay rate interval can be
		calculated by multiplying the win value in io.stat by the
		corresponding number of samples based on the win value.

	  win
		The sampling window size in milliseconds.  This is the minimum
		duration of time between evaluation events.  Windows only elapse
		with IO activity.  Idle periods extend the most recent window.

IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup policy,
namely the io.prio.class attribute. The following values are accepted for
that attribute:

  no-change
	Do not modify the I/O priority class.

  promote-to-rt
	For requests that have a non-RT I/O priority class, change it into RT.
	Also change the priority level of these requests to 4. 
Do not modify\n\tthe I\/O priority of requests that have priority class RT.\n\n  restrict-to-be\n\tFor requests that do not have an I\/O priority class or that have I\/O\n\tpriority class RT, change it into BE. Also change the priority level\n\tof these requests to 0. Do not modify the I\/O priority class of\n\trequests that have priority class IDLE.\n\n  idle\n\tChange the I\/O priority class of all requests into IDLE, the lowest\n\tI\/O priority class.\n\n  none-to-rt\n\tDeprecated. Just an alias for promote-to-rt.\n\nThe following numerical values are associated with the I\/O priority policies:\n\n+----------------+---+\n| no-change      | 0 |\n+----------------+---+\n| promote-to-rt  | 1 |\n+----------------+---+\n| restrict-to-be | 2 |\n+----------------+---+\n| idle           | 3 |\n+----------------+---+\n\nThe numerical value that corresponds to each I\/O priority class is as follows:\n\n+-------------------------------+---+\n| IOPRIO_CLASS_NONE             | 0 |\n+-------------------------------+---+\n| IOPRIO_CLASS_RT (real-time)   | 1 |\n+-------------------------------+---+\n| IOPRIO_CLASS_BE (best effort) | 2 |\n+-------------------------------+---+\n| IOPRIO_CLASS_IDLE             | 3 |\n+-------------------------------+---+\n\nThe algorithm to set the I\/O priority class for a request is as follows:\n\n- If I\/O priority class policy is promote-to-rt, change the request I\/O\n  priority class to IOPRIO_CLASS_RT and change the request I\/O priority\n  level to 4.\n- If I\/O priority class policy is not promote-to-rt, translate the I\/O priority\n  class policy into a number, then change the request I\/O priority class\n  into the maximum of the I\/O priority class policy number and the numerical\n  I\/O priority class.\n\nPID\n---\n\nThe process number controller is used to allow a cgroup to stop any\nnew tasks from being fork()'d or clone()'d after a specified limit is\nreached.\n\nThe number of tasks in a cgroup can be exhausted in ways which 
other\ncontrollers cannot prevent, thus warranting its own controller.  For\nexample, a fork bomb is likely to exhaust the number of tasks before\nhitting memory restrictions.\n\nNote that PIDs used in this controller refer to TIDs, process IDs as\nused by the kernel.\n\n\nPID Interface Files\n~~~~~~~~~~~~~~~~~~~\n\n  pids.max\n\tA read-write single value file which exists on non-root\n\tcgroups.  The default is \"max\".\n\n\tHard limit of number of processes.\n\n  pids.current\n\tA read-only single value file which exists on non-root cgroups.\n\n\tThe number of processes currently in the cgroup and its\n\tdescendants.\n\n  pids.peak\n\tA read-only single value file which exists on non-root cgroups.\n\n\tThe maximum value that the number of processes in the cgroup and its\n\tdescendants has ever reached.\n\n  pids.events\n\tA read-only flat-keyed file which exists on non-root cgroups. Unless\n\tspecified otherwise, a value change in this file generates a file\n\tmodified event. The following entries are defined.\n\n\t  max\n\t\tThe number of times the cgroup's total number of processes hit the pids.max\n\t\tlimit (see also pids_localevents).\n\n  pids.events.local\n\tSimilar to pids.events but the fields in the file are local\n\tto the cgroup i.e. not hierarchical. The file modified event\n\tgenerated on this file reflects only the local events.\n\nOrganisational operations are not blocked by cgroup policies, so it is\npossible to have pids.current > pids.max.  This can be done by either\nsetting the limit to be smaller than pids.current, or attaching enough\nprocesses to the cgroup such that pids.current is larger than\npids.max.  However, it is not possible to violate a cgroup PID policy\nthrough fork() or clone(). 
These will return -EAGAIN if the creation\nof a new process would cause a cgroup policy to be violated.\n\n\nCpuset\n------\n\nThe \"cpuset\" controller provides a mechanism for constraining\nthe CPU and memory node placement of tasks to only the resources\nspecified in the cpuset interface files in a task's current cgroup.\nThis is especially valuable on large NUMA systems where placing jobs\non properly sized subsets of the systems with careful processor and\nmemory placement to reduce cross-node memory access and contention\ncan improve overall system performance.\n\nThe \"cpuset\" controller is hierarchical.  That means the controller\ncannot use CPUs or memory nodes not allowed in its parent.\n\n\nCpuset Interface Files\n~~~~~~~~~~~~~~~~~~~~~~\n\n  cpuset.cpus\n\tA read-write multiple values file which exists on non-root\n\tcpuset-enabled cgroups.\n\n\tIt lists the requested CPUs to be used by tasks within this\n\tcgroup.  The actual list of CPUs to be granted, however, is\n\tsubjected to constraints imposed by its parent and can differ\n\tfrom the requested CPUs.\n\n\tThe CPU numbers are comma-separated numbers or ranges.\n\tFor example::\n\n\t  # cat cpuset.cpus\n\t  0-4,6,8-10\n\n\tAn empty value indicates that the cgroup is using the same\n\tsetting as the nearest cgroup ancestor with a non-empty\n\t\"cpuset.cpus\" or all the available CPUs if none is found.\n\n\tThe value of \"cpuset.cpus\" stays constant until the next update\n\tand won't be affected by any CPU hotplug events.\n\n  cpuset.cpus.effective\n\tA read-only multiple values file which exists on all\n\tcpuset-enabled cgroups.\n\n\tIt lists the onlined CPUs that are actually granted to this\n\tcgroup by its parent.  These CPUs are allowed to be used by\n\ttasks within the current cgroup.\n\n\tIf \"cpuset.cpus\" is empty, the \"cpuset.cpus.effective\" file shows\n\tall the CPUs from the parent cgroup that can be available to\n\tbe used by this cgroup.  
Otherwise, it should be a subset of\n\t\"cpuset.cpus\" unless none of the CPUs listed in \"cpuset.cpus\"\n\tcan be granted.  In this case, it will be treated just like an\n\tempty \"cpuset.cpus\".\n\n\tIts value will be affected by CPU hotplug events.\n\n  cpuset.mems\n\tA read-write multiple values file which exists on non-root\n\tcpuset-enabled cgroups.\n\n\tIt lists the requested memory nodes to be used by tasks within\n\tthis cgroup.  The actual list of memory nodes granted, however,\n\tis subjected to constraints imposed by its parent and can differ\n\tfrom the requested memory nodes.\n\n\tThe memory node numbers are comma-separated numbers or ranges.\n\tFor example::\n\n\t  # cat cpuset.mems\n\t  0-1,3\n\n\tAn empty value indicates that the cgroup is using the same\n\tsetting as the nearest cgroup ancestor with a non-empty\n\t\"cpuset.mems\" or all the available memory nodes if none\n\tis found.\n\n\tThe value of \"cpuset.mems\" stays constant until the next update\n\tand won't be affected by any memory nodes hotplug events.\n\n\tSetting a non-empty value to \"cpuset.mems\" causes memory of\n\ttasks within the cgroup to be migrated to the designated nodes if\n\tthey are currently using memory outside of the designated nodes.\n\n\tThere is a cost for this memory migration.  The migration\n\tmay not be complete and some memory pages may be left behind.\n\tSo it is recommended that \"cpuset.mems\" should be set properly\n\tbefore spawning new tasks into the cpuset.  Even if there is\n\ta need to change \"cpuset.mems\" with active tasks, it shouldn't\n\tbe done frequently.\n\n  cpuset.mems.effective\n\tA read-only multiple values file which exists on all\n\tcpuset-enabled cgroups.\n\n\tIt lists the onlined memory nodes that are actually granted to\n\tthis cgroup by its parent. 
These memory nodes are allowed to\n\tbe used by tasks within the current cgroup.\n\n\tIf \"cpuset.mems\" is empty, it shows all the memory nodes from the\n\tparent cgroup that will be available to be used by this cgroup.\n\tOtherwise, it should be a subset of \"cpuset.mems\" unless none of\n\tthe memory nodes listed in \"cpuset.mems\" can be granted.  In this\n\tcase, it will be treated just like an empty \"cpuset.mems\".\n\n\tIts value will be affected by memory nodes hotplug events.\n\n  cpuset.cpus.exclusive\n\tA read-write multiple values file which exists on non-root\n\tcpuset-enabled cgroups.\n\n\tIt lists all the exclusive CPUs that are allowed to be used\n\tto create a new cpuset partition.  Its value is not used\n\tunless the cgroup becomes a valid partition root.  See the\n\t\"cpuset.cpus.partition\" section below for a description of what\n\ta cpuset partition is.\n\n\tWhen the cgroup becomes a partition root, the actual exclusive\n\tCPUs that are allocated to that partition are listed in\n\t\"cpuset.cpus.exclusive.effective\" which may be different\n\tfrom \"cpuset.cpus.exclusive\".  If \"cpuset.cpus.exclusive\"\n\thas previously been set, \"cpuset.cpus.exclusive.effective\"\n\tis always a subset of it.\n\n\tUsers can manually set it to a value that is different from\n\t\"cpuset.cpus\".\tOne constraint in setting it is that the list of\n\tCPUs must be exclusive with respect to \"cpuset.cpus.exclusive\"\n\tof its sibling.  If \"cpuset.cpus.exclusive\" of a sibling cgroup\n\tisn't set, its \"cpuset.cpus\" value, if set, cannot be a subset\n\tof it to leave at least one CPU available when the exclusive\n\tCPUs are taken away.\n\n\tFor a parent cgroup, any one of its exclusive CPUs can only\n\tbe distributed to at most one of its child cgroups.  Having an\n\texclusive CPU appearing in two or more of its child cgroups is\n\tnot allowed (the exclusivity rule).  
A value that violates the\n\texclusivity rule will be rejected with a write error.\n\n\tThe root cgroup is a partition root and all its available CPUs\n\tare in its exclusive CPU set.\n\n  cpuset.cpus.exclusive.effective\n\tA read-only multiple values file which exists on all non-root\n\tcpuset-enabled cgroups.\n\n\tThis file shows the effective set of exclusive CPUs that\n\tcan be used to create a partition root.  The content\n\tof this file will always be a subset of its parent's\n\t\"cpuset.cpus.exclusive.effective\" if its parent is not the root\n\tcgroup.  It will also be a subset of \"cpuset.cpus.exclusive\"\n\tif it is set.  If \"cpuset.cpus.exclusive\" is not set, it is\n\ttreated to have an implicit value of \"cpuset.cpus\" in the\n\tformation of local partition.\n\n  cpuset.cpus.isolated\n\tA read-only and root cgroup only multiple values file.\n\n\tThis file shows the set of all isolated CPUs used in existing\n\tisolated partitions. It will be empty if no isolated partition\n\tis created.\n\n  cpuset.cpus.partition\n\tA read-write single value file which exists on non-root\n\tcpuset-enabled cgroups.  This flag is owned by the parent cgroup\n\tand is not delegatable.\n\n\tIt accepts only the following input values when written to.\n\n\t  ==========\t=====================================\n\t  \"member\"\tNon-root member of a partition\n\t  \"root\"\tPartition root\n\t  \"isolated\"\tPartition root without load balancing\n\t  ==========\t=====================================\n\n\tA cpuset partition is a collection of cpuset-enabled cgroups with\n\ta partition root at the top of the hierarchy and its descendants\n\texcept those that are separate partition roots themselves and\n\ttheir descendants.  A partition has exclusive access to the\n\tset of exclusive CPUs allocated to it.\tOther cgroups outside\n\tof that partition cannot use any CPUs in that set.\n\n\tThere are two types of partitions - local and remote.  
A local\n\tpartition is one whose parent cgroup is also a valid partition\n\troot.  A remote partition is one whose parent cgroup is not a\n\tvalid partition root itself.  Writing to \"cpuset.cpus.exclusive\"\n\tis optional for the creation of a local partition as its\n\t\"cpuset.cpus.exclusive\" file will assume an implicit value that\n\tis the same as \"cpuset.cpus\" if it is not set.\tWriting the\n\tproper \"cpuset.cpus.exclusive\" values down the cgroup hierarchy\n\tbefore the target partition root is mandatory for the creation\n\tof a remote partition.\n\n\tCurrently, a remote partition cannot be created under a local\n\tpartition.  All the ancestors of a remote partition root except\n\tthe root cgroup cannot be a partition root.\n\n\tThe root cgroup is always a partition root and its state cannot\n\tbe changed.  All other non-root cgroups start out as \"member\".\n\n\tWhen set to \"root\", the current cgroup is the root of a new\n\tpartition or scheduling domain.  The set of exclusive CPUs is\n\tdetermined by the value of its \"cpuset.cpus.exclusive.effective\".\n\n\tWhen set to \"isolated\", the CPUs in that partition will be in\n\tan isolated state without any load balancing from the scheduler\n\tand excluded from the unbound workqueues.  Tasks placed in such\n\ta partition with multiple CPUs should be carefully distributed\n\tand bound to each of the individual CPUs for optimal performance.\n\n\tA partition root (\"root\" or \"isolated\") can be in one of the\n\ttwo possible states - valid or invalid.  
An invalid partition\n\troot is in a degraded state where some state information may\n\tbe retained, but behaves more like a \"member\".\n\n\tAll possible state transitions among \"member\", \"root\" and\n\t\"isolated\" are allowed.\n\n\tOn read, the \"cpuset.cpus.partition\" file can show the following\n\tvalues.\n\n\t  =============================\t=====================================\n\t  \"member\"\t\t\tNon-root member of a partition\n\t  \"root\"\t\t\tPartition root\n\t  \"isolated\"\t\t\tPartition root without load balancing\n\t  \"root invalid (<reason>)\"\tInvalid partition root\n\t  \"isolated invalid (<reason>)\"\tInvalid isolated partition root\n\t  =============================\t=====================================\n\n\tIn the case of an invalid partition root, a descriptive string on\n\twhy the partition is invalid is included within parentheses.\n\n\tFor a local partition root to be valid, the following conditions\n\tmust be met.\n\n\t1) The parent cgroup is a valid partition root.\n\t2) The \"cpuset.cpus.exclusive.effective\" file cannot be empty,\n\t   though it may contain offline CPUs.\n\t3) The \"cpuset.cpus.effective\" cannot be empty unless there is\n\t   no task associated with this partition.\n\n\tFor a remote partition root to be valid, all the above conditions\n\texcept the first one must be met.\n\n\tExternal events like hotplug or changes to \"cpuset.cpus\" or\n\t\"cpuset.cpus.exclusive\" can cause a valid partition root to\n\tbecome invalid and vice versa.\tNote that a task cannot be\n\tmoved to a cgroup with empty \"cpuset.cpus.effective\".\n\n\tA valid non-root parent partition may distribute out all its CPUs\n\tto its child local partitions when there is no task associated\n\twith it.\n\n\tCare must be taken to change a valid partition root to \"member\"\n\tas all its child local partitions, if present, will become\n\tinvalid causing disruption to tasks running in those child\n\tpartitions. 
These inactivated partitions could be recovered if\n\ttheir parent is switched back to a partition root with a proper\n\tvalue in \"cpuset.cpus\" or \"cpuset.cpus.exclusive\".\n\n\tPoll and inotify events are triggered whenever the state of\n\t\"cpuset.cpus.partition\" changes.  That includes changes caused\n\tby write to \"cpuset.cpus.partition\", cpu hotplug or other\n\tchanges that modify the validity status of the partition.\n\tThis will allow user space agents to monitor unexpected changes\n\tto \"cpuset.cpus.partition\" without the need to do continuous\n\tpolling.\n\n\tA user can pre-configure certain CPUs to an isolated state\n\twith load balancing disabled at boot time with the \"isolcpus\"\n\tkernel boot command line option.  If those CPUs are to be put\n\tinto a partition, they have to be used in an isolated partition.\n\n\nDevice controller\n-----------------\n\nDevice controller manages access to device files. It includes both\ncreation of new device files (using mknod), and access to the\nexisting device files.\n\nCgroup v2 device controller has no interface files and is implemented\non top of cgroup BPF. To control access to device files, a user may\ncreate bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach\nthem to cgroups with BPF_CGROUP_DEVICE flag. 
On an attempt to access a
device file, corresponding BPF programs will be executed, and depending
on the return value, the attempt will succeed or fail with -EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access attempt:
access type (mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM; otherwise it
succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
	A read-write nested-keyed file that exists for all the cgroups
	except root and describes the currently configured resource limit
	for an RDMA/IB device.

	Lines are keyed by device name and are not ordered.
	Each line contains a space-separated resource name and its configured
	limit that can be distributed.

	The following nested keys are defined.

	  ==========	=============================
	  hca_handle	Maximum number of HCA Handles
	  hca_object	Maximum number of HCA Objects
	  ==========	=============================

	An example for mlx4 and ocrdma devices follows::

	  mlx4_0 hca_handle=2 hca_object=2000
	  ocrdma1 hca_handle=3 hca_object=max

  rdma.current
	A read-only file that describes current resource usage.
	It exists for all the cgroups except root.

	An example for mlx4 and ocrdma devices follows::

	  mlx4_0 hca_handle=1 hca_object=20
	  ocrdma1 hca_handle=1 hca_object=23

HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control group and
enforces the controller limit during page faults.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
	Show current usage for 
\"hugepagesize\" hugetlb.  It exists for all\n\tthe cgroup except root.\n\n  hugetlb.<hugepagesize>.max\n\tSet\/show the hard limit of \"hugepagesize\" hugetlb usage.\n\tThe default value is \"max\".  It exists for all the cgroup except root.\n\n  hugetlb.<hugepagesize>.events\n\tA read-only flat-keyed file which exists on non-root cgroups.\n\n\t  max\n\t\tThe number of allocation failure due to HugeTLB limit\n\n  hugetlb.<hugepagesize>.events.local\n\tSimilar to hugetlb.<hugepagesize>.events but the fields in the file\n\tare local to the cgroup i.e. not hierarchical. The file modified event\n\tgenerated on this file reflects only the local events.\n\n  hugetlb.<hugepagesize>.numa_stat\n\tSimilar to memory.numa_stat, it shows the numa information of the\n        hugetlb pages of <hugepagesize> in this cgroup.  Only active in\n        use hugetlb pages are included.  The per-node values are in bytes.\n\nMisc\n----\n\nThe Miscellaneous cgroup provides the resource limiting and tracking\nmechanism for the scalar resources which cannot be abstracted like the other\ncgroup resources. Controller is enabled by the CONFIG_CGROUP_MISC config\noption.\n\nA resource can be added to the controller via enum misc_res_type{} in the\ninclude\/linux\/misc_cgroup.h file and the corresponding name via misc_res_name[]\nin the kernel\/cgroup\/misc.c file. Provider of the resource must set its\ncapacity prior to using the resource by calling misc_cg_set_capacity().\n\nOnce a capacity is set then the resource usage can be updated using charge and\nuncharge APIs. All of the APIs to interact with misc controller are in\ninclude\/linux\/misc_cgroup.h.\n\nMisc Interface Files\n~~~~~~~~~~~~~~~~~~~~\n\nMiscellaneous controller provides 3 interface files. If two misc resources (res_a and res_b) are registered then:\n\n  misc.capacity\n        A read-only flat-keyed file shown only in the root cgroup.  
It shows
        miscellaneous scalar resources available on the platform along with
        their quantities::

	  $ cat misc.capacity
	  res_a 50
	  res_b 10

  misc.current
        A read-only flat-keyed file shown in all cgroups.  It shows
        the current usage of the resources in the cgroup and its children::

	  $ cat misc.current
	  res_a 3
	  res_b 0

  misc.peak
        A read-only flat-keyed file shown in all cgroups.  It shows the
        historical maximum usage of the resources in the cgroup and its
        children::

	  $ cat misc.peak
	  res_a 10
	  res_b 8

  misc.max
        A read-write flat-keyed file shown in the non-root cgroups. Allowed
        maximum usage of the resources in the cgroup and its children::

	  $ cat misc.max
	  res_a max
	  res_b 4

	A limit can be set by::

	  # echo res_a 1 > misc.max

	The limit can be set to max by::

	  # echo res_a max > misc.max

        Limits can be set higher than the capacity value in the misc.capacity
        file.

  misc.events
	A read-only flat-keyed file which exists on non-root cgroups. The
	following entries are defined. Unless specified otherwise, a value
	change in this file generates a file modified event. All fields in
	this file are hierarchical.

	  max
		The number of times the cgroup's resource usage was
		about to go over the max boundary.

  misc.events.local
        Similar to misc.events but the fields in the file are local to the
        cgroup i.e. not hierarchical. The file modified event generated on
        this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it is used
first, and stays charged to that cgroup until that resource is freed.  
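
As an aside, the flat-keyed format shown in the misc interface files
above lends itself to simple scripting.  The sketch below compares a
sample misc.current against a sample misc.max and prints each resource
that has reached its limit; the file contents are hypothetical sample
data, not output from a real system (on a real system the files would
be read from the cgroup's directory):

```shell
# Illustrative copies of misc.current and misc.max contents
# (hypothetical sample data in the documented flat-keyed format).
cur_file=$(mktemp); max_file=$(mktemp)
printf 'res_a 3\nres_b 5\n'   > "$cur_file"
printf 'res_a max\nres_b 4\n' > "$max_file"

# First pass (NR==FNR) records current usage per resource; the second
# pass flags any resource whose usage has reached its configured limit.
# A limit of "max" means unlimited and is never flagged.
awk 'NR==FNR { cur[$1] = $2; next }
     $2 != "max" && cur[$1] >= $2 { print $1, "at limit" }' \
    "$cur_file" "$max_file"

rm -f "$cur_file" "$max_file"
```

With the sample data above this prints ``res_b at limit``, since a
usage of 5 has reached the limit of 4 while res_a is unlimited.
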
Migrating
a process to a different cgroup does not move the charge to the
destination cgroup.

Others
------

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part of
the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup, each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of the
root cgroup.  This child cgroup's weight is dependent on its thread's nice
level.

For details of this mapping see the sched_prio_to_weight array in the
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources, this implicit child node is taken into
account as if it were a normal child cgroup of the root cgroup with a
weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace.  A process running inside a cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root.  
The\ncgroupns root is the cgroup of the process at the time of creation of\nthe cgroup namespace.\n\nWithout cgroup namespace, the \"\/proc\/$PID\/cgroup\" file shows the\ncomplete path of the cgroup of a process.  In a container setup where\na set of cgroups and namespaces are intended to isolate processes the\n\"\/proc\/$PID\/cgroup\" file may leak potential system level information\nto the isolated processes.  For example::\n\n  # cat \/proc\/self\/cgroup\n  0::\/batchjobs\/container_id1\n\nThe path '\/batchjobs\/container_id1' can be considered as system-data\nand undesirable to expose to the isolated processes.  cgroup namespace\ncan be used to restrict visibility of this path.  For example, before\ncreating a cgroup namespace, one would see::\n\n  # ls -l \/proc\/self\/ns\/cgroup\n  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 \/proc\/self\/ns\/cgroup -> cgroup:[4026531835]\n  # cat \/proc\/self\/cgroup\n  0::\/batchjobs\/container_id1\n\nAfter unsharing a new namespace, the view changes::\n\n  # ls -l \/proc\/self\/ns\/cgroup\n  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 \/proc\/self\/ns\/cgroup -> cgroup:[4026532183]\n  # cat \/proc\/self\/cgroup\n  0::\/\n\nWhen some thread from a multi-threaded process unshares its cgroup\nnamespace, the new cgroupns gets applied to the entire process (all\nthe threads).  This is natural for the v2 hierarchy; however, for the\nlegacy hierarchies, this may be unexpected.\n\nA cgroup namespace is alive as long as there are processes inside or\nmounts pinning it.  When the last usage goes away, the cgroup\nnamespace is destroyed.  The cgroupns root and the actual cgroups\nremain.\n\n\nThe Root and Views\n------------------\n\nThe 'cgroupns root' for a cgroup namespace is the cgroup in which the\nprocess calling unshare(2) is running.  For example, if a process in\n\/batchjobs\/container_id1 cgroup calls unshare, cgroup\n\/batchjobs\/container_id1 becomes the cgroupns root.  
For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  
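
As an illustration (not kernel code), the relative path shown in the
last example can be reproduced from user space with GNU coreutils'
realpath: the task's real cgroup path is expressed relative to the
reader's cgroupns root.  The paths below are the hypothetical ones
from the example above:

```shell
# Reader's cgroup namespace root and the task's cgroup in the real
# hierarchy (hypothetical paths from the example above).
ns_root=/batchjobs/container_id1
real_path=/batchjobs/container_id2

# realpath -m computes the relative path without requiring the
# directories to exist; the kernel-reported path is this, rooted at '/'.
rel=$(realpath -m --relative-to="$ns_root" "$real_path")
echo "0::/$rel"
```

This prints ``0::/../container_id2``, matching the /proc/7353/cgroup
output shown above.
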
A task inside a cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching
process under the target cgroup namespace root.


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root as the
filesystem root.  The process needs CAP_SYS_ADMIN against its user and
mount namespaces.

The virtualization of the /proc/self/cgroup file combined with restricting
the view of the cgroup hierarchy by a namespace-private cgroupfs mount
provides a properly isolated cgroup view inside the container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bio's using the
following two functions.

  wbc_init_bio(@wbc, @bio)
	Should be called for each bio carrying writeback data and
	associates the bio with the inode's owner cgroup and the
	corresponding request queue.  
	This must be called after
	a queue (device) has been associated with the bio and
	before submission.

  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
	Should be called for each data segment being written out.
	While this function doesn't care exactly when it's called
	during the writeback session, it's the easiest and most
	natural to call it as data segments are added to a bio.

With writeback bio's annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion.  There is no one easy solution
for the problem.  Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" or
  "cgroup.stat" files at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.
While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one.  The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy.  This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated cgroup core implementation but more importantly
the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length.
The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but
much more importantly it blurred the line between API exposed to
individual applications and system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.
cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them.  This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it.  This is not only extremely clunky
and unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details.  These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel.  Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and became locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.
Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event.
The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism further
complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup.  Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.  When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured.  Configuration knobs for the same type of
control used widely differing naming schemes and formats.  Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is per default unset.  As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out.  The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy.  This makes subtree delegation impossible.
Second,
the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible.  It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory.  The memory consumption of workloads varies during
runtime, and that requires users to overcommit.  But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit.  Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively.  When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer.  As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation.  The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded.
But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group.  Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail.  memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration.  However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin can not assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources.
Swap space is a resource like all others in the system,
and that's why unified hierarchy allows distributing it separately.
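The memory.max write semantics described above - install the new limit
first so it cannot race with concurrent charges, then work usage down
below it by reclaim and, failing that, OOM kills - can be sketched as a
toy model.  Everything here, including the ``reclaim`` and ``oom_kill``
stand-ins, is illustrative only and not kernel code:

```python
# Toy model of writing a new value to memory.max (not kernel code).
# 'cgroup' is a plain dict with 'usage' and 'max' counters; 'reclaim'
# and 'oom_kill' are caller-supplied stand-ins for the real machinery.

def set_memory_max(cgroup: dict, new_limit: int, reclaim, oom_kill) -> None:
    # Step 1: install the new limit immediately, so charges beyond it
    # fail from now on -- unlike v1's limit_in_bytes, the write itself
    # cannot fail due to racing charges.
    cgroup["max"] = new_limit
    # Step 2: push usage below the limit, first by reclaim, then by
    # OOM-killing tasks in the group once nothing is reclaimable.
    while cgroup["usage"] > cgroup["max"]:
        if not reclaim(cgroup):   # returns False when nothing was freed
            oom_kill(cgroup)

# Example with stand-in reclaim/kill behaviour:
cg = {"usage": 100, "max": 200}

def reclaim(cg):
    freed = min(30, cg["usage"] - cg["max"])  # free up to 30 units
    cg["usage"] -= freed
    return freed > 0

def oom_kill(cg):
    cg["usage"] = cg["max"]   # pretend killing a task frees the excess

set_memory_max(cg, 50, reclaim, oom_kill)
print(cg)   # usage now at or below the new limit of 50
```

The point of the ordering is visible in the sketch: the limit takes
effect before any usage is freed, so there is no window in which new
charges can slip past the lowered limit.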
Interface Files      5 8  HugeTLB        5 8 1  HugeTLB Interface Files      5 9  Misc        5 9 1 Miscellaneous cgroup Interface Files        5 9 2 Migration and Ownership      5 10  Others        5 10 1  perf event      5 N  Non normative information        5 N 1  CPU controller root cgroup process behaviour        5 N 2  IO controller root cgroup process behaviour    6  Namespace      6 1  Basics      6 2  The Root and Views      6 3  Migration and setns 2       6 4  Interaction with Other Namespaces    P  Information on Kernel Programming      P 1  Filesystem Support for Writeback    D  Deprecated v1 Core Features    R  Issues with v1 and Rationales for v2      R 1  Multiple Hierarchies      R 2  Thread Granularity      R 3  Competition Between Inner Nodes and Threads      R 4  Other Interface Issues      R 5  Controller Issues and Remedies        R 5 1  Memory   Introduction               Terminology               cgroup  stands for  control group  and is never capitalized   The singular form is used to designate the whole feature and also as a qualifier as in  cgroup controllers    When explicitly referring to multiple individual control groups  the plural form  cgroups  is used    What is cgroup                   cgroup is a mechanism to organize processes hierarchically and distribute system resources along the hierarchy in a controlled and configurable manner   cgroup is largely composed of two parts   the core and controllers  cgroup core is primarily responsible for hierarchically organizing processes   A cgroup controller is usually responsible for distributing a specific type of system resource along the hierarchy although there are utility controllers which serve purposes other than resource distribution   cgroups form a tree structure and every process in the system belongs to one and only one cgroup   All threads of a process belong to the same cgroup   On creation  all processes are put in the cgroup that the parent process belongs to at the time  
 A process can be migrated to another cgroup   Migration of a process doesn t affect already existing descendant processes   Following certain structural constraints  controllers may be enabled or disabled selectively on a cgroup   All controller behaviors are hierarchical   if a controller is enabled on a cgroup  it affects all processes which belong to the cgroups consisting the inclusive sub hierarchy of the cgroup   When a controller is enabled on a nested cgroup  it always restricts the resource distribution further   The restrictions set closer to the root in the hierarchy can not be overridden from further away    Basic Operations                   Mounting           Unlike v1  cgroup v2 has only single hierarchy   The cgroup v2 hierarchy can be mounted with the following mount command        mount  t cgroup2 none  MOUNT POINT  cgroup2 filesystem has the magic number 0x63677270   cgrp     All controllers which support v2 and are not bound to a v1 hierarchy are automatically bound to the v2 hierarchy and show up at the root  Controllers which are not in active use in the v2 hierarchy can be bound to other hierarchies   This allows mixing v2 hierarchy with the legacy v1 multiple hierarchies in a fully backward compatible way   A controller can be moved across hierarchies only after the controller is no longer referenced in its current hierarchy   Because per cgroup controller states are destroyed asynchronously and controllers may have lingering references  a controller may not show up immediately on the v2 hierarchy after the final umount of the previous hierarchy  Similarly  a controller should be fully disabled to be moved out of the unified hierarchy and it may take some time for the disabled controller to become available for other hierarchies  furthermore  due to inter controller dependencies  other controllers may need to be disabled too   While useful for development and manual configurations  moving controllers dynamically between the v2 and other 
hierarchies is strongly discouraged for production use   It is recommended to decide the hierarchies and controller associations before starting using the controllers after system boot   During transition to v2  system management software might still automount the v1 cgroup filesystem and so hijack all controllers during boot  before manual intervention is possible  To make testing and experimenting easier  the kernel parameter cgroup no v1  allows disabling controllers in v1 and make them always available in v2   cgroup v2 currently supports the following mount options     nsdelegate  Consider cgroup namespaces as delegation boundaries   This  option is system wide and can only be set on mount or modified  through remount from the init namespace   The mount option is  ignored on non init namespace mounts   Please refer to the  Delegation section for details     favordynmods         Reduce the latencies of dynamic cgroup modifications such as         task migrations and controller on offs at the cost of making         hot path operations such as forks and exits more expensive          The static usage pattern of creating a cgroup  enabling         controllers  and then seeding it with CLONE INTO CGROUP is         not affected by this option     memory localevents         Only populate memory events with data for the current cgroup          and not any subtrees  This is legacy behaviour  the default         behaviour without this option is to include subtree counts          This option is system wide and can only be set on mount or         modified through remount from the init namespace  The mount         option is ignored on non init namespace mounts     memory recursiveprot         Recursively apply memory min and memory low protection to         entire subtrees  without requiring explicit downward         propagation into leaf cgroups   This allows protecting entire         subtrees from one another  while retaining free competition         within those subtrees 
  This should have been the default         behavior but is a mount option to avoid regressing setups         relying on the original semantics  e g  specifying bogusly         high  bypass  protection values at higher tree levels      memory hugetlb accounting         Count HugeTLB memory usage towards the cgroup s overall         memory usage for the memory controller  for the purpose of         statistics reporting and memory protetion   This is a new         behavior that could regress existing setups  so it must be         explicitly opted in with this mount option           A few caveats to keep in mind             There is no HugeTLB pool management involved in the memory           controller  The pre allocated pool does not belong to anyone            Specifically  when a new HugeTLB folio is allocated to           the pool  it is not accounted for from the perspective of the           memory controller  It is only charged to a cgroup when it is           actually used  for e g at page fault time   Host memory           overcommit management has to consider this when configuring           hard limits  In general  HugeTLB pool management should be           done via other mechanisms  such as the HugeTLB controller             Failure to charge a HugeTLB folio to the memory controller           results in SIGBUS  This could happen even if the HugeTLB pool           still has pages available  but the cgroup limit is hit and           reclaim attempt fails             Charging HugeTLB memory towards the memory controller affects           memory protection and reclaim dynamics  Any userspace tuning            of low  min limits for e g  needs to take this into account            HugeTLB pages utilized while this option is not selected           will not be tracked by the memory controller  even if cgroup           v2 is remounted later on      pids localevents         The option restores v1 like behavior of pids events max  that is only         local  inside 
cgroup proper  fork failures are counted  Without this         option pids events max represents any pids max enforcemnt across         cgroup s subtree     Organizing Processes and Threads                                   Processes            Initially  only the root cgroup exists to which all processes belong  A child cgroup can be created by creating a sub directory        mkdir  CGROUP NAME  A given cgroup may have multiple child cgroups forming a tree structure   Each cgroup has a read writable interface file  cgroup procs    When read  it lists the PIDs of all processes which belong to the cgroup one per line   The PIDs are not ordered and the same PID may show up more than once if the process got moved to another cgroup and then back or the PID got recycled while reading   A process can be migrated into a cgroup by writing its PID to the target cgroup s  cgroup procs  file   Only one process can be migrated on a single write 2  call   If a process is composed of multiple threads  writing the PID of any thread migrates all threads of the process   When a process forks a child process  the new process is born into the cgroup that the forking process belongs to at the time of the operation   After exit  a process stays associated with the cgroup that it belonged to at the time of exit until it s reaped  however  a zombie process does not appear in  cgroup procs  and thus can t be moved to another cgroup   A cgroup which doesn t have any children or live processes can be destroyed by removing the directory   Note that a cgroup which doesn t have any children and is associated only with zombie processes is considered empty and can be removed        rmdir  CGROUP NAME    proc  PID cgroup  lists a process s cgroup membership   If legacy cgroup is in use in the system  this file may contain multiple lines  one for each hierarchy   The entry for cgroup v2 is always in the format  0   PATH         cat  proc 842 cgroup         0   test cgroup test cgroup nested  If 
the process becomes a zombie and the cgroup it was associated with is removed subsequently     deleted   is appended to the path        cat  proc 842 cgroup         0   test cgroup test cgroup nested  deleted    Threads          cgroup v2 supports thread granularity for a subset of controllers to support use cases requiring hierarchical resource distribution across the threads of a group of processes   By default  all threads of a process belong to the same cgroup  which also serves as the resource domain to host resource consumptions which are not specific to a process or thread   The thread mode allows threads to be spread across a subtree while still maintaining the common resource domain for them   Controllers which support thread mode are called threaded controllers  The ones which don t are called domain controllers   Marking a cgroup threaded makes it join the resource domain of its parent as a threaded cgroup   The parent may be another threaded cgroup whose resource domain is further up in the hierarchy   The root of a threaded subtree  that is  the nearest ancestor which is not threaded  is called threaded domain or thread root interchangeably and serves as the resource domain for the entire subtree   Inside a threaded subtree  threads of a process can be put in different cgroups and are not subject to the no internal process constraint   threaded controllers can be enabled on non leaf cgroups whether they have threads in them or not   As the threaded domain cgroup hosts all the domain resource consumptions of the subtree  it is considered to have internal resource consumptions whether there are processes in it or not and can t have populated child cgroups which aren t threaded   Because the root cgroup is not subject to no internal process constraint  it can serve both as a threaded domain and a parent to domain cgroups   The current operation mode or type of the cgroup is shown in the  cgroup type  file which indicates whether the cgroup is a normal 
domain  a domain which is serving as the domain of a threaded subtree  or a threaded cgroup   On creation  a cgroup is always a domain cgroup and can be made threaded by writing  threaded  to the  cgroup type  file   The operation is single direction        echo threaded   cgroup type  Once threaded  the cgroup can t be made a domain again   To enable the thread mode  the following conditions must be met     As the cgroup will join the parent s resource domain   The parent   must either be a valid  threaded  domain or a threaded cgroup     When the parent is an unthreaded domain  it must not have any domain   controllers enabled or populated domain children   The root is   exempt from this requirement   Topology wise  a cgroup can be in an invalid state   Please consider the following topology      A  threaded domain    B  threaded    C  domain  just created   C is created as a domain but isn t connected to a parent which can host child domains   C can t be used until it is turned into a threaded cgroup    cgroup type  file will report  domain  invalid   in these cases   Operations which fail due to invalid topology use EOPNOTSUPP as the errno   A domain cgroup is turned into a threaded domain when one of its child cgroup becomes threaded or threaded controllers are enabled in the  cgroup subtree control  file while there are processes in the cgroup  A threaded domain reverts to a normal domain when the conditions clear   When read   cgroup threads  contains the list of the thread IDs of all threads in the cgroup   Except that the operations are per thread instead of per process   cgroup threads  has the same format and behaves the same way as  cgroup procs    While  cgroup threads  can be written to in any cgroup  as it can only move threads inside the same threaded domain  its operations are confined inside each threaded subtree   The threaded domain cgroup serves as the resource domain for the whole subtree  and  while the threads can be scattered across the 
subtree, all the processes are considered to be in the threaded domain cgroup. "cgroup.procs" in a threaded domain cgroup contains the PIDs of all processes in the subtree and is not readable in the subtree proper. However, "cgroup.procs" can be written to from anywhere in the subtree to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When a threaded controller is enabled inside a threaded subtree, it only accounts for and controls resource consumptions associated with the threads in the cgroup and its descendants. All consumptions which aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process constraint, a threaded controller must be able to handle competition between threads in a non-leaf cgroup and its child cgroups. Each threaded controller defines how such competitions are handled.

Currently, the following controllers are threaded and can be enabled in a threaded cgroup::

- cpu
- cpuset
- perf_event
- pids

Un-populated Notification
-------------------------

Each non-root cgroup has a "cgroup.events" file which contains a "populated" field indicating whether the cgroup's sub-hierarchy has live processes in it. Its value is 0 if there is no live process in the cgroup and its descendants; otherwise, 1. poll and [id]notify events are triggered when the value changes. This can be used, for example, to start a clean-up operation after all processes of a given sub-hierarchy have exited. The populated state updates and notifications are recursive. Consider the following sub-hierarchy where the numbers in the parentheses represent the numbers of processes in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's is 0. After the one process in C exits, B and C's "populated" fields would flip to "0" and file modified events will be generated on the "cgroup.events" files of both cgroups.

Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be enabled. When multiple operations are specified as above, either they all succeed or fail. If multiple operations on the same controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of the target resource across its immediate children will be controlled. Consider the following sub-hierarchy. The enabled controllers are listed in parentheses::

  A(cpu,memory) - B(memory) - C
                            \ D

As A has "cpu" and "memory" enabled, A will control the distribution of CPU cycles and memory to its children, in this case, B. As B has "memory" enabled but not "cpu", C and D will compete freely on CPU cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to the cgroup's children, enabling it creates the controller's interface files in the child cgroups. In the above example, enabling "cpu" on B would create the "cpu."-prefixed controller interface files in C and D. Likewise, disabling "memory" from B would remove the "memory."-prefixed controller interface files from C and D. This means that the controller interface files - anything which doesn't start with "cgroup." - are owned by the parent rather than the cgroup itself.

Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute a resource only if the resource has been distributed to it from the parent.
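The top-down rule can be checked mechanically: before writing "+controller" to a child's "cgroup.subtree_control", the controller must already appear in the parent's. A minimal sketch of that check; `can_enable` is a hypothetical helper (not a kernel interface) and the controller lists are illustrative file contents:

```shell
#!/bin/sh
# Sketch of the top-down rule: a controller may be enabled in a child's
# "cgroup.subtree_control" only if it is already enabled in the parent's.
# can_enable is a hypothetical helper, not a kernel interface.
can_enable() {
    parent_subtree=$1    # contents of the parent's cgroup.subtree_control
    want=$2              # controller the child wants to enable
    case " $parent_subtree " in
        *" $want "*) echo allowed ;;
        *)           echo denied ;;
    esac
}

can_enable "cpu memory" memory   # memory is enabled in the parent
can_enable "cpu memory" io       # io is not
```

The kernel performs the real check on write; a disallowed write simply fails with an error instead of printing "denied".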
This means that all non-root "cgroup.subtree_control" files can only contain controllers which are enabled in the parent's "cgroup.subtree_control" file. A controller can be enabled only if the parent has the controller enabled and a controller can't be disabled if one or more children have it enabled.

No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children only when they don't have any processes of their own. In other words, only domain cgroups which don't contain any processes can have domain controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part of the hierarchy which has it enabled, processes are always only on the leaves. This rules out situations where child cgroups compete against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains processes and anonymous resource consumption which can't be associated with any other cgroups and requires special treatment from most controllers. How resource consumption in the root cgroup is governed is up to each controller (for more information on this topic please refer to the Non-normative information section in the Controllers chapter).

Note that the restriction doesn't get in the way if there is no enabled controller in the cgroup's "cgroup.subtree_control". This is important as otherwise it wouldn't be possible to create children of a populated cgroup. To control resource distribution of a cgroup, the cgroup must create children and transfer all its processes to the children before enabling controllers in its "cgroup.subtree_control" file.

Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged user by granting write access of the directory and its "cgroup.procs", "cgroup.threads" and "cgroup.subtree_control" files to the user. Second, if the "nsdelegate" mount option is set, automatically to a cgroup namespace on namespace creation.

Because the resource control interface files in a given directory control the distribution of the parent's resources, the delegatee shouldn't be allowed to write to them. For the first method, this is achieved by not granting access to these files. For the second, files outside the namespace should be hidden from the delegatee by the means of at least mount namespacing, and the kernel rejects writes to all files on a namespace root from inside the cgroup namespace, except for those files listed in "/sys/kernel/cgroup/delegate" (including "cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.). The end results are equivalent for both delegation types.

Once delegated, the user can build sub-hierarchy under the directory, organize processes inside it as it sees fit and further distribute the resources it received from the parent. The limits and other settings of all resource controllers are hierarchical and regardless of what happens in the delegated sub-hierarchy, nothing can escape the resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of cgroups in or nesting depth of a delegated sub-hierarchy; however, this may be limited explicitly in the future.

Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by requiring the following conditions for a process with a non-root euid to migrate a target process into a cgroup by writing its PID to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate processes around freely in the delegated sub-hierarchy it can't pull in from or push out to outside the sub-hierarchy.
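The first delegation method boils down to file ownership. A minimal sketch of handing a cgroup to a user; the path and user name are hypothetical, and the script must run as root on a system with cgroup2 mounted:

```shell
#!/bin/sh
# Sketch: delegate an existing cgroup to a less privileged user by granting
# write access to the directory and the delegatable core files.
# CG and U are illustrative assumptions; adjust to your system.
CG=/sys/fs/cgroup/u0-slice   # hypothetical cgroup to delegate
U=u0                         # hypothetical delegatee

if [ -d "$CG" ]; then
    chown "$U" "$CG" \
          "$CG/cgroup.procs" \
          "$CG/cgroup.threads" \
          "$CG/cgroup.subtree_control"
else
    echo "no cgroup at $CG; create it on a cgroup2 mount first" >&2
fi
```

Note that only these core files are handed over; the resource control files (e.g. "memory.max") stay owned by root, so the delegatee cannot loosen limits imposed from above.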
For an example, let's assume cgroups C0 and C1 have been delegated to user U0 who created C00, C01 under C0 and C10 under C1 as follows and all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is currently in C10 into "C00/cgroup.procs". U0 has write access to the file; however, the common ancestor of the source cgroup C10 and the destination cgroup C00 is above the points of delegation and U0 would not have write access to its "cgroup.procs" files and thus the write will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring that both the source and destination cgroups are reachable from the namespace of the process which is attempting the migration. If either is not reachable, the migration is rejected with -ENOENT.

Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation and stateful resources such as memory are not moved together with the process. This is an explicit design decision as there often exist inherent trade-offs between migration and various hot paths in terms of synchronization cost.

As such, migrating processes across cgroups frequently as a means to apply different resource restrictions is discouraged. A workload should be assigned to a cgroup according to the system's logical and resource structure once on start-up. Dynamic adjustments to resource distribution can be made by changing controller configuration through the interface files.

Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same directory and it is possible to create
children cgroups which collide with interface files. All cgroup core interface files are prefixed with "cgroup." and each controller's interface files are prefixed with the controller name and a dot. A controller's name is composed of lower case alphabets and '_'s but never begins with an '_' so it can be used as the prefix character for collision avoidance. Also, interface file names won't start or end with terms which are often used in categorizing workloads such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the user's responsibility to avoid them.

Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes depending on the resource type and expected use cases. This section describes major schemes in use along with their expected behaviors.

Weights
-------

A parent's resource is distributed by adding up the weights of all active children and giving each the fraction matching the ratio of its weight against the sum. As only children which can make use of the resource at the moment participate in the distribution, this is work-conserving. Due to the dynamic nature, this model is usually used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This allows symmetric multiplicative biases in both directions at fine enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are valid and there is no reason to reject configuration changes or process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children and is an example of this type.

.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource. Limits can be over-committed - the sum of the limits of children can exceed the amount of resource available to the parent.

Limits are in the range [0, max] and defaults to "max", which is noop.

As limits can be over-committed, all configuration combinations are valid and there is no reason to reject configuration changes or process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource as long as the usages of all its ancestors are under their protected levels. Protections can be hard guarantees or best effort soft boundaries. Protections can also be over-committed in which case only up to the amount available to the parent is protected among children.

Protections are in the range [0, max] and defaults to 0, which is noop.

As protections can be over-committed, all configuration combinations are valid and there is no reason to reject configuration changes or process migrations.

"memory.low" implements best-effort memory protection and is an example of this type.

Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite resource. Allocations can't be over-committed - the sum of the allocations of children can not exceed the amount of resource available to the parent.

Allocations are in the range [0, max] and defaults to 0, which is no resource.

As allocations can't be over-committed, some configuration combinations are invalid and should be rejected. Also, if the resource is mandatory for execution of processes, process migrations may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this type.

Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed
        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match reading; however, controllers may allow omitting later fields or implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key can be written at a time. For nested keyed files, the sub key pairs may be specified in any order and not all pairs have to be specified.

Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least two digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its interface file should be named "weight" and have the range [1, 10000] with 100 as the default. The values are chosen to allow enough and symmetric bias in both directions while keeping it intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or limit, the interface files should be named "min" and "max" respectively. If a controller implements best effort resource guarantee and/or limit, the interface files should be named "low" and "high" respectively.

  In the above four control files, the special token "max" should be used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific overrides, the default entry should be keyed with "default" and appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or "$VAL".

  When writing to update a specific
override, "default" can be used as the value to indicate removal of the override. Override entries with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file "events" should be created which lists event key value pairs. Whenever a notifiable event happens, file modified event should be generated on the file.

Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root cgroups.

        When read, it indicates the current type of the cgroup, which can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state. It can't be populated or have controllers enabled. It may be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on all cgroups.

        When read, it lists the PIDs of all processes which belong to the cgroup one-per-line. The PIDs are not ordered and the same PID may show up more than once if the process got moved to another cgroup and then back or the PID got recycled while reading.
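In practice, migration through this file is a single write of a PID. A hedged sketch; the destination path is hypothetical and the write succeeds only under the access conditions described for this file:

```shell
#!/bin/sh
# Sketch: move the current shell into another cgroup by writing its PID to
# that cgroup's cgroup.procs. DST is an illustrative path; the writer needs
# write access to this file and to the common ancestor's cgroup.procs.
DST=/sys/fs/cgroup/workload   # hypothetical destination cgroup

if [ -w "$DST/cgroup.procs" ]; then
    echo $$ > "$DST/cgroup.procs"
    grep -c "^$$\$" "$DST/cgroup.procs"   # 1 if the shell now lists here
else
    echo "cannot write $DST/cgroup.procs" >&2
fi
```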
        A PID can be written to migrate the process associated with the PID to the cgroup. The writer should match all of the following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP as all the processes belong to the thread root. Writing is supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on all cgroups.

        When read, it lists the TIDs of all threads which belong to the cgroup one-per-line. The TIDs are not ordered and the same TID may show up more than once if the thread got moved to another cgroup and then back or the TID got recycled while reading.

        A TID can be written to migrate the thread associated with the TID to the cgroup. The writer should match all of the following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all cgroups.

        It shows space separated list of all controllers available to the cgroup. The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all cgroups. Starts out empty.

        When read, it shows space separated list
of the controllers which are enabled to control resource distribution from the cgroup to its children.

        Space separated list of controllers prefixed with '+' or '-' can be written to enable or disable controllers. A controller name prefixed with '+' enables the controller and '-' disables. If a controller appears more than once on the list, the last one is effective. When multiple enable and disable operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups. The following entries are defined. Unless specified otherwise, a value change in this file generates a file modified event.

          populated
                1 if the cgroup or its descendants contains any live processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file. The default is "max".

        Maximum allowed number of descent cgroups. If the actual number of descendants is equal or larger, an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file. The default is "max".

        Maximum allowed descent depth below the current cgroup. If the actual descent depth is equal or larger, an attempt to create a new child cgroup will fail.

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups. A cgroup becomes dying after being deleted by a user. The cgroup will remain in the dying state for some undefined time (which can depend on system load) before being completely destroyed.

                A process can't enter a dying cgroup under any circumstances, a dying cgroup can't revive.

                A dying cgroup can consume system resources not exceeding limits, which were active at the moment of cgroup deletion.

          nr_subsys_<cgroup_subsys>
                Total number of live cgroup
subsystems (e.g. memory cgroup) at and beneath the current cgroup.

          nr_dying_subsys_<cgroup_subsys>
                Total number of dying cgroup subsystems (e.g. memory cgroup) at and beneath the current cgroup.

  cgroup.freeze
        A read-write single value file which exists on non-root cgroups. Allowed values are "0" and "1". The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all descendant cgroups. This means that all belonging processes will be stopped and will not run until the cgroup is explicitly unfrozen. Freezing of the cgroup may take some time; when this action is completed, the "frozen" value in the cgroup.events control file will be updated to "1" and the corresponding notification will be issued.

        A cgroup can be frozen either by its own settings, or by settings of any ancestor cgroups. If any ancestor cgroup is frozen, the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal signal. They also can enter and leave a frozen cgroup: either by an explicit move by a user, or if freezing of the cgroup races with fork(). If a process is moved to a frozen cgroup, it stops. If a process is moved out of a frozen cgroup, it becomes running.

        Frozen status of a cgroup doesn't affect any cgroup tree operations: it's possible to delete a frozen (and empty) cgroup, as well as create new sub-cgroups.

  cgroup.kill
        A write-only single value file which exists in non-root cgroups. The only allowed value is "1".

        Writing "1" to the file causes the cgroup and all descendant cgroups to be killed. This means that all processes located in the affected cgroup tree will be killed via SIGKILL.

        Killing a cgroup tree will deal with concurrent forks appropriately and is protected against migrations.

        In a threaded cgroup, writing this file fails with EOPNOTSUPP as killing cgroups is a process directed operation, i.e. it affects the whole thread-group.

  cgroup.pressure
        A read-write single
value file. Allowed values are "0" and "1". The default is "1".

        Writing "0" to the file will disable the cgroup PSI accounting. Writing "1" to the file will re-enable the cgroup PSI accounting.

        This control attribute is not hierarchical, so disabling or enabling PSI accounting in a cgroup does not affect PSI accounting in descendants and doesn't need enablement to be passed down from the root via ancestors.

        The reason this control attribute exists is that PSI accounts stalls for each cgroup separately and aggregates it at each level of the hierarchy. This may cause non-negligible overhead for some workloads when under deep levels of the hierarchy, in which case this control attribute can be used to disable PSI accounting in the non-leaf cgroups.

  irq.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for IRQ/SOFTIRQ. See :ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

CPU
---

The "cpu" controller regulates distribution of CPU cycles. This controller implements weight and absolute bandwidth limit models for normal scheduling policy and absolute bandwidth allocation model for realtime scheduling policy.

In all the above models, cycles distribution is defined only on a temporal base and it does not account for the frequency at which tasks are executed. The (optional) utilization clamping support allows hinting the schedutil cpufreq governor about the minimum desired frequency which should always be provided by a CPU, as well as the maximum desired frequency, which should not be exceeded by a CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes. For a kernel built with the CONFIG_RT_GROUP_SCHED option enabled for group scheduling of realtime processes, the cpu controller can only be enabled when all RT processes are in the root cgroup. This limitation does not apply if CONFIG_RT_GROUP_SCHED is disabled. Be aware that system management software may already have
placed RT processes into nonroot cgroups during the system boot process, and these processes may need to be moved to the root cgroup before the cpu controller can be enabled with a CONFIG_RT_GROUP_SCHED enabled kernel.

CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
        A read-only flat keyed file. This file exists whether the controller is enabled or not.

        It always reports the following three stats:

        - usage_usec
        - user_usec
        - system_usec

        and the following five when the controller is enabled:

        - nr_periods
        - nr_throttled
        - throttled_usec
        - nr_bursts
        - burst_usec

  cpu.weight
        A read-write single value file which exists on non-root cgroups. The default is "100".

        For non idle groups (cpu.idle = 0), the weight is in the range [1, 10000].

        If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1), then the weight will show as a 0.

  cpu.weight.nice
        A read-write single value file which exists on non-root cgroups. The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for "cpu.weight" and allows reading and setting weight using the same values used by nice(2). Because the range is smaller and granularity is coarser for the nice values, the read value is the closest approximation of the current weight.

  cpu.max
        A read-write two value file which exists on non-root cgroups. The default is "max 100000".

        The maximum bandwidth limit. It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each $PERIOD duration. "max" for $MAX indicates no limit. If only one number is written, $MAX is updated.

  cpu.max.burst
        A read-write single value file which exists on non-root cgroups. The default is "0".

        The burst in the range [0, $MAX].

  cpu.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for CPU. See :ref:`Documentation/accounting/psi.rst <psi>` for details.
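The "$MAX $PERIOD" pair of "cpu.max" is a plain ratio: the cgroup may consume at most MAX microseconds of CPU time in every PERIOD microseconds. A sketch of the arithmetic; the value shown is illustrative, not read from a live system:

```shell
#!/bin/sh
# cpu.max holds "MAX PERIOD", both in microseconds: the cgroup may consume
# up to MAX of CPU time in each PERIOD. Value below is illustrative.
cpu_max="50000 100000"        # as if read from cpu.max: half a CPU
MAX=${cpu_max% *}
PERIOD=${cpu_max#* }
echo "$((100 * MAX / PERIOD))% of one CPU"
```

With multiple CPUs the cap can exceed 100%; e.g. "200000 100000" allows two CPUs' worth of time per period.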
  cpu.uclamp.min
        A read-write single value file which exists on non-root cgroups. The default is "0", i.e. no utilization boosting.

        The requested minimum utilization (protection) as a percentage rational number, e.g. 12.34 for 12.34%.

        This interface allows reading and setting minimum utilization clamp values similar to the sched_setattr(2). This minimum utilization value is used to clamp the task specific minimum utilization clamp.

        The requested minimum utilization (protection) is always capped by the current value for the maximum utilization (limit), i.e. "cpu.uclamp.max".

  cpu.uclamp.max
        A read-write single value file which exists on non-root cgroups. The default is "max", i.e. no utilization capping.

        The requested maximum utilization (limit) as a percentage rational number, e.g. 98.76 for 98.76%.

        This interface allows reading and setting maximum utilization clamp values similar to the sched_setattr(2). This maximum utilization value is used to clamp the task specific maximum utilization clamp.

  cpu.idle
        A read-write single value file which exists on non-root cgroups. The default is 0.

        This is the cgroup analog of the per-task SCHED_IDLE sched policy. Setting this value to 1 will make the scheduling policy of the cgroup SCHED_IDLE. The threads inside the cgroup will retain their own relative priorities, but the cgroup itself will be treated as very low priority relative to its peers.

Memory
------

The "memory" controller regulates distribution of memory. Memory is stateful and implements both limit and protection models. Due to the intertwining between memory usage and reclaim pressure and the stateful nature of memory, the distribution model is relatively complex.

While not completely water-tight, all major memory usages by a given cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.

Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to PAGE_SIZE is written, the value may be rounded up to the closest PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root cgroups.

        The total amount of memory currently being used by the cgroup and its descendants.

  memory.min
        A read-write single value file which exists on non-root cgroups. The default is "0".

        Hard memory protection. If the memory usage of a cgroup is within its effective min boundary, the cgroup's memory won't be reclaimed under any conditions. If there is no unprotected reclaimable memory available, the OOM killer is invoked. Above the effective min boundary (or effective low boundary if it is higher), pages are reclaimed proportionally to the overage, reducing reclaim pressure for smaller overages.

        Effective min boundary is limited by memory.min values of all ancestor cgroups. If there is memory.min overcommitment (child cgroup or cgroups are requiring more protected memory than parent will allow), then each child cgroup will get the part of parent's protection proportional to its actual memory usage below memory.min.

        Putting more memory than generally available under this protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes, its memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root cgroups. The default is "0".

        Best-effort memory protection. If the memory usage of a cgroup is within its effective low boundary, the cgroup's memory won't be reclaimed
unless there is no reclaimable memory available in unprotected cgroups. Above the effective low boundary (or effective min boundary if it is higher), pages are reclaimed proportionally to the overage, reducing reclaim pressure for smaller overages.

        Effective low boundary is limited by memory.low values of all ancestor cgroups. If there is memory.low overcommitment (child cgroup or cgroups are requiring more protected memory than parent will allow), then each child cgroup will get the part of parent's protection proportional to its actual memory usage below memory.low.

        Putting more memory than generally available under this protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root cgroups. The default is "max".

        Memory usage throttle limit. If a cgroup's usage goes over the high boundary, the processes of the cgroup are throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and under extreme conditions the limit may be breached. The high limit should be used in scenarios where an external process monitors the limited cgroup to alleviate heavy reclaim pressure.

  memory.max
        A read-write single value file which exists on non-root cgroups. The default is "max".

        Memory usage hard limit. This is the main mechanism to limit memory usage of a cgroup. If a cgroup's memory usage reaches this limit and can't be reduced, the OOM killer is invoked in the cgroup. Under certain circumstances, the usage may go over the limit temporarily.

        In default configuration regular 0-order allocations always succeed unless OOM killer chooses current task as a victim.

        Some kinds of allocations don't invoke the OOM killer. Caller could retry them differently, return into userspace as -ENOMEM or silently ignore in cases like disk readahead.

  memory.reclaim
        A write-only nested-keyed file which exists for all cgroups.

        This is a simple interface to trigger memory reclaim in the target cgroup.
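Values written to the memory.* files are byte counts, and suffixed forms like "1G" are parsed by the kernel as binary multiples (memparse-style). A sketch of the equivalent conversion; `to_bytes` is a hypothetical helper handling only the common uppercase suffixes:

```shell
#!/bin/sh
# Convert a size with an optional binary suffix (K/M/G) to bytes, the way
# suffixed writes to memory.* files are interpreted. to_bytes is an
# illustrative helper, not a kernel interface.
to_bytes() {
    n=${1%[KMG]}
    case $1 in
        *K) echo $((n * 1024)) ;;
        *M) echo $((n * 1024 * 1024)) ;;
        *G) echo $((n * 1024 * 1024 * 1024)) ;;
        *)  echo "$n" ;;
    esac
}

to_bytes 1G      # bytes requested by: echo "1G" > memory.reclaim
to_bytes 512K
```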
        Example::

          echo "1G" > memory.reclaim

        Please note that the kernel can over or under reclaim from the target cgroup. If fewer bytes are reclaimed than the specified amount, -EAGAIN is returned.

        Please note that the proactive reclaim (triggered by this interface) is not meant to indicate memory pressure on the memory cgroup. Therefore socket memory balancing triggered by the memory reclaim normally is not exercised in this case. This means that the networking layer will not adapt based on reclaim induced by memory.reclaim.

        The following nested keys are defined.

          ==========  ================================
          swappiness  Swappiness value to reclaim with
          ==========  ================================

        Specifying a swappiness value instructs the kernel to perform the reclaim with that swappiness value. Note that this has the same semantics as vm.swappiness applied to memcg reclaim with all the existing limitations and potential future extensions.

  memory.peak
        A read-write single value file which exists on non-root cgroups.

        The max memory usage recorded for the cgroup and its descendants since either the creation of the cgroup or the most recent reset for that FD.

        A write of any non-empty string to this file resets it to the current memory usage for subsequent reads through the same file descriptor.

  memory.oom.group
        A read-write single value file which exists on non-root cgroups. The default value is "0".

        Determines whether the cgroup should be treated as an indivisible workload by the OOM killer. If set, all tasks belonging to the cgroup or to its descendants (if the memory cgroup is not a leaf cgroup) are killed together or not at all. This can be used to avoid partial kills to guarantee workload integrity.

        Tasks with the OOM protection (oom_score_adj set to -1000) are treated as an exception and are never killed.

        If the OOM killer is invoked in a
cgroup, it's not going to kill any tasks outside of this cgroup, regardless of the memory.oom.group values of ancestor cgroups.

  memory.events
    A read-only flat keyed file which exists on non-root cgroups. The following entries are defined. Unless specified otherwise, a value change in this file generates a file modified event.

    Note that all fields in this file are hierarchical and the file modified event can be generated due to an event down the hierarchy. For the local events at the cgroup level see memory.events.local.

      low
        The number of times the cgroup is reclaimed due to high memory pressure even though its usage is under the low boundary. This usually indicates that the low boundary is over-committed.

      high
        The number of times processes of the cgroup are throttled and routed to perform direct memory reclaim because the high memory boundary was exceeded. For a cgroup whose memory usage is capped by the high limit rather than global memory pressure, this event's occurrences are expected.

      max
        The number of times the cgroup's memory usage was about to go over the max boundary. If direct reclaim fails to bring it down, the cgroup goes to OOM state.

      oom
        The number of times the cgroup's memory usage reached the limit and allocation was about to fail.

        This event is not raised if the OOM killer is not considered as an option, e.g. for failed high-order allocations or if the caller asked to not retry attempts.

      oom_kill
        The number of processes belonging to this cgroup killed by any kind of OOM killer.

      oom_group_kill
        The number of times a group OOM has occurred.

  memory.events.local
    Similar to memory.events but the fields in the file are local to the cgroup, i.e. not hierarchical. The file modified event generated on this file reflects only the local events.

  memory.stat
    A read-only flat keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different types of memory, type-specific details, and other information on the state and past events of the memory management system.

    All memory amounts are in bytes.

    The entries are ordered to be human readable, and new entries can show up in the middle. Don't rely on items remaining in a fixed position; use the keys to look up specific values.

    If an entry has no per-node counter (and so does not show up in memory.numa_stat), we use the "npn" (non-per-node) tag to indicate that it will not show in memory.numa_stat.

      anon
        Amount of memory used in anonymous mappings such as brk(), sbrk(), and mmap(MAP_ANONYMOUS).

      file
        Amount of memory used to cache filesystem data, including tmpfs and shared memory.

      kernel (npn)
        Amount of total kernel memory, including (kernel_stack, pagetables, percpu, vmalloc, slab) in addition to other kernel memory use cases.

      kernel_stack
        Amount of memory allocated to kernel stacks.

      pagetables
        Amount of memory allocated for page tables.

      sec_pagetables
        Amount of memory allocated for secondary page tables; this currently includes KVM mmu allocations on x86 and arm64 and IOMMU page tables.

      percpu (npn)
        Amount of memory used for storing per-cpu kernel data structures.

      sock (npn)
        Amount of memory used in network transmission buffers.

      vmalloc (npn)
        Amount of memory used for vmap backed memory.

      shmem
        Amount of cached filesystem data that is swap-backed, such as tmpfs, shm segments, shared anonymous mmap()s.

      zswap
        Amount of memory consumed by the zswap compression backend.

      zswapped
        Amount of application memory swapped out to zswap.

      file_mapped
        Amount of cached filesystem data mapped with mmap().

      file_dirty
        Amount of cached filesystem data that was modified but not yet written back to disk.

      file_writeback
        Amount of cached filesystem data that was modified and is currently being written back to disk.
      swapcached
        Amount of swap cached in memory. The swapcache is accounted against both memory and swap usage.

      anon_thp
        Amount of memory used in anonymous mappings backed by transparent hugepages.

      file_thp
        Amount of cached filesystem data backed by transparent hugepages.

      shmem_thp
        Amount of shm, tmpfs, shared anonymous mmap()s backed by transparent hugepages.

      inactive_anon, active_anon, inactive_file, active_file, unevictable
        Amount of memory, swap-backed and filesystem-backed, on the internal memory management lists used by the page reclaim algorithm.

        As these represent internal list state (e.g. shmem pages are on anon memory management lists), inactive_foo + active_foo may not be equal to the value for the foo counter, since the foo counter is type-based, not list-based.

      slab_reclaimable
        Part of "slab" that might be reclaimed, such as dentries and inodes.

      slab_unreclaimable
        Part of "slab" that cannot be reclaimed on memory pressure.

      slab (npn)
        Amount of memory used for storing in-kernel data structures.

      workingset_refault_anon
        Number of refaults of previously evicted anonymous pages.

      workingset_refault_file
        Number of refaults of previously evicted file pages.

      workingset_activate_anon
        Number of refaulted anonymous pages that were immediately activated.

      workingset_activate_file
        Number of refaulted file pages that were immediately activated.

      workingset_restore_anon
        Number of restored anonymous pages which have been detected as an active workingset before they got reclaimed.

      workingset_restore_file
        Number of restored file pages which have been detected as an active workingset before they got reclaimed.

      workingset_nodereclaim
        Number of times a shadow node has been reclaimed.

      pgscan (npn)
        Amount of scanned pages (in an inactive LRU list).

      pgsteal (npn)
        Amount of reclaimed pages.

      pgscan_kswapd (npn)
        Amount of scanned pages by kswapd (in an inactive LRU list).

      pgscan_direct (npn)
        Amount of scanned pages directly (in an inactive LRU list).

      pgscan_khugepaged (npn)
        Amount of scanned pages by khugepaged (in an inactive LRU list).

      pgsteal_kswapd (npn)
        Amount of reclaimed pages by kswapd.

      pgsteal_direct (npn)
        Amount of reclaimed pages directly.

      pgsteal_khugepaged (npn)
        Amount of reclaimed pages by khugepaged.

      pgfault (npn)
        Total number of page faults incurred.

      pgmajfault (npn)
        Number of major page faults incurred.

      pgrefill (npn)
        Amount of scanned pages (in an active LRU list).

      pgactivate (npn)
        Amount of pages moved to the active LRU list.

      pgdeactivate (npn)
        Amount of pages moved to the inactive LRU list.

      pglazyfree (npn)
        Amount of pages postponed to be freed under memory pressure.

      pglazyfreed (npn)
        Amount of reclaimed lazyfree pages.

      swpin_zero
        Number of pages swapped into memory and filled with zero, where I/O was optimized out because the page content was detected to be zero during swapout.

      swpout_zero
        Number of zero-filled pages swapped out with I/O skipped due to the content being detected as zero.

      zswpin
        Number of pages moved in to memory from zswap.

      zswpout
        Number of pages moved out of memory to zswap.

      zswpwb
        Number of pages written from zswap to swap.

      thp_fault_alloc (npn)
        Number of transparent hugepages which were allocated to satisfy a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

      thp_collapse_alloc (npn)
        Number of transparent hugepages which were allocated to allow collapsing an existing range of pages. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

      thp_swpout (npn)
        Number of transparent hugepages which were swapped out in one piece without splitting.

      thp_swpout_fallback (npn)
        Number of transparent hugepages which were split before swapout, usually because of failure to allocate some continuous
swap space for the huge page.

      numa_pages_migrated (npn)
        Number of pages migrated by NUMA balancing.

      numa_pte_updates (npn)
        Number of pages whose page table entries are modified by NUMA balancing to produce NUMA hinting faults on access.

      numa_hint_faults (npn)
        Number of NUMA hinting faults.

      pgdemote_kswapd
        Number of pages demoted by kswapd.

      pgdemote_direct
        Number of pages demoted directly.

      pgdemote_khugepaged
        Number of pages demoted by khugepaged.

      hugetlb
        Amount of memory used by hugetlb pages. This metric only shows up if hugetlb usage is accounted for in memory.current (i.e. cgroup is mounted with the memory hugetlb accounting option).

  memory.numa_stat
    A read-only nested-keyed file which exists on non-root cgroups.

    This breaks down the cgroup's memory footprint into different types of memory, type-specific details, and other information per node on the state of the memory management system.

    This is useful for providing visibility into the NUMA locality information within a memcg since the pages are allowed to be allocated from any physical node. One of the use cases is evaluating application performance by combining this information with the application's CPU allocation.

    All memory amounts are in bytes.

    The output format of memory.numa_stat is::

      type N0=<bytes in node 0> N1=<bytes in node 1> ...

    The entries are ordered to be human readable, and new entries can show up in the middle. Don't rely on items remaining in a fixed position; use the keys to look up specific values.

    The entries can refer to the memory.stat.

  memory.swap.current
    A read-only single value file which exists on non-root cgroups.

    The total amount of swap currently being used by the cgroup and its descendants.

  memory.swap.high
    A read-write single value file which exists on non-root cgroups. The default is "max".

    Swap usage throttle limit. If a cgroup's swap usage exceeds this limit, all its further allocations will be throttled to allow userspace to implement custom out-of-memory procedures.

    This limit marks a point of no return for the cgroup. It is NOT designed to manage the amount of swapping a workload does during regular operation. Compare to memory.swap.max, which prohibits swapping past a set amount, but lets the cgroup continue unimpeded as long as other memory can be reclaimed.

    Healthy workloads are not expected to reach this limit.

  memory.swap.peak
    A read-write single value file which exists on non-root cgroups.

    The max swap usage recorded for the cgroup and its descendants since the creation of the cgroup or the most recent reset for that FD.

    A write of any non-empty string to this file resets it to the current memory usage for subsequent reads through the same file descriptor.

  memory.swap.max
    A read-write single value file which exists on non-root cgroups. The default is "max".

    Swap usage hard limit. If a cgroup's swap usage reaches this limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
    A read-only flat keyed file which exists on non-root cgroups. The following entries are defined. Unless specified otherwise, a value change in this file generates a file modified event.

      high
        The number of times the cgroup's swap usage was over the high threshold.

      max
        The number of times the cgroup's swap usage was about to go over the max boundary and swap allocation failed.

      fail
        The number of times swap allocation failed either because of running out of swap system-wide or max limit.

    When reduced under the current usage, the existing swap entries are reclaimed gradually and the swap usage may stay higher than the limit for an extended period of time. This reduces the impact on the workload and memory management.

  memory.zswap.current
    A read-only single value file which exists on non-root cgroups.

    The total amount of memory consumed by the zswap compression
backend.

  memory.zswap.max
    A read-write single value file which exists on non-root cgroups. The default is "max".

    Zswap usage hard limit. If a cgroup's zswap pool reaches this limit, it will refuse to take any more stores before existing entries fault back in or are written out to disk.

  memory.zswap.writeback
    A read-write single value file. The default value is "1".

    Note that this setting is hierarchical, i.e. the writeback would be implicitly disabled for child cgroups if the upper hierarchy does so.

    When this is set to 0, all swapping attempts to swapping devices are disabled. This includes both zswap writebacks, and swapping due to zswap store failures. If the zswap store failures are recurring (e.g. if the pages are incompressible), users can observe reclaim inefficiency after disabling writeback, because the same pages might be rejected again and again.

    Note that this is subtly different from setting memory.swap.max to 0, as it still allows for pages to be written to the zswap pool. This setting has no effect if zswap is disabled, and swapping is allowed unless memory.swap.max is set to 0.

  memory.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for memory. See :ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage. Over-committing on high limit (sum of high limits > available memory) and letting global memory pressure distribute memory according to usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but throttles the offending cgroup, a management agent has ample opportunities to monitor and take appropriate actions such as granting more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as memory usage doesn't indicate whether the workload can benefit from more memory. For example, a workload which writes data received from network to a file can use all available memory but can also operate as performant with a small amount of memory.

A measure of memory pressure (how much the workload is being impacted due to lack of memory) is necessary to determine whether a workload needs more memory; unfortunately, memory pressure monitoring mechanism isn't implemented yet.


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays charged to the cgroup until the area is released. Migrating a process to a different cgroup doesn't move the memory usages that it instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups. To which cgroup the area will be charged is indeterministic; however, over time, the memory area is likely to end up in a cgroup which has enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected to be accessed repeatedly by other cgroups, it may make sense to use POSIX_FADV_DONTNEED to relinquish the ownership of memory areas belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources. This controller implements both weight-based and absolute bandwidth or IOPS limit distribution; however, weight-based distribution is available only if cfq-iosched is in use and neither scheme is available for blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
    A read-only nested-keyed file.

    Lines are keyed by $MAJ:$MIN device numbers and not ordered. The following nested keys are defined.

      ======  =====================
      rbytes  Bytes read
      wbytes  Bytes written
      rios    Number of read IOs
      wios    Number of write IOs
      dbytes  Bytes discarded
      dios    Number of discard IOs
      ======  =====================

    An example read output follows::

      8:16 rbytes=1459200 wbytes=314773504 rios=
192 wios=353 dbytes=0 dios=0
      8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

  io.cost.qos
    A read-write nested-keyed file which exists only on the root cgroup.

    This file configures the Quality of Service of the IO cost model based controller (CONFIG_BLK_CGROUP_IOCOST) which currently implements "io.weight" proportional control. Lines are keyed by $MAJ:$MIN device numbers and not ordered. The line for a given device is populated on the first write for the device on "io.cost.qos" or "io.cost.model". The following nested keys are defined.

      ======  =====================================
      enable  Weight-based control enable
      ctrl    "auto" or "user"
      rpct    Read latency percentile [0, 100]
      rlat    Read latency threshold
      wpct    Write latency percentile [0, 100]
      wlat    Write latency threshold
      min     Minimum scaling percentage [1, 10000]
      max     Maximum scaling percentage [1, 10000]
      ======  =====================================

    The controller is disabled by default and can be enabled by setting "enable" to 1. "rpct" and "wpct" parameters default to zero and the controller uses internal device saturation state to adjust the overall IO rate between "min" and "max".

    When a better control quality is needed, latency QoS parameters can be configured. For example::

      8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00

    shows that on sdb, the controller is enabled, will consider the device saturated if the 95th percentile of read completion latencies is above 75ms or write 150ms, and adjust the overall IO issue rate between 50% and 150% accordingly.

    The lower the saturation point, the better the latency QoS at the cost of aggregate bandwidth. The narrower the allowed adjustment range between "min" and "max", the more conformant to the cost model the IO behavior. Note that the IO issue base rate may be far off from 100% and setting "min" and "max" blindly can lead to a significant loss of device capacity or control quality. "min" and "max" are useful for regulating devices which show wide temporary behavior changes, e.g. a ssd which accepts writes at the line speed for a while and then completely stalls for multiple seconds.

    When "ctrl" is "auto", the parameters are controlled by the kernel and may change automatically. Setting "ctrl" to "user" or setting any of the percentile and latency parameters puts it into "user" mode and disables the automatic changes. The automatic mode can be restored by setting "ctrl" to "auto".

  io.cost.model
    A read-write nested-keyed file which exists only on the root cgroup.

    This file configures the cost model of the IO cost model based controller (CONFIG_BLK_CGROUP_IOCOST) which currently implements "io.weight" proportional control. Lines are keyed by $MAJ:$MIN device numbers and not ordered. The line for a given device is populated on the first write for the device on "io.cost.qos" or "io.cost.model". The following nested keys are defined.

      =====  ================================
      ctrl   "auto" or "user"
      model  The cost model in use - "linear"
      =====  ================================

    When "ctrl" is "auto", the kernel may change all parameters dynamically. When "ctrl" is set to "user" or any other parameters are written to, "ctrl" becomes "user" and the automatic changes are disabled.

    When "model" is "linear", the following model parameters are defined.

      =============  ========================================
      [r|w]bps       The maximum sequential IO throughput
      [r|w]seqiops   The maximum 4k sequential IOs per second
      [r|w]randiops  The maximum 4k random IOs per second
      =============  ========================================

    From the above, the builtin linear model determines the base costs of a sequential and random IO and the cost coefficient for the IO size. While simple, this model can cover most common device classes
acceptably.

    The IO cost model isn't expected to be accurate in absolute sense and is scaled to the device behavior dynamically.

    If needed, tools/cgroup/iocost_coef_gen.py can be used to generate device-specific coefficients.

  io.weight
    A read-write flat keyed file which exists on non-root cgroups. The default is "default 100".

    The first line is the default weight applied to devices without specific override. The rest are overrides keyed by $MAJ:$MIN device numbers and not ordered. The weights are in the range [1, 10000] and specify the relative amount of IO time the cgroup can use in relation to its siblings.

    The default weight can be updated by writing either "default $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

    An example read output follows::

      default 100
      8:16 200
      8:0 50

  io.max
    A read-write nested-keyed file which exists on non-root cgroups.

    BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN device numbers and not ordered. The following nested keys are defined.

      =====  ==================================
      rbps   Max read bytes per second
      wbps   Max write bytes per second
      riops  Max read IO operations per second
      wiops  Max write IO operations per second
      =====  ==================================

    When writing, any number of nested key-value pairs can be specified in any order. "max" can be specified as the value to remove a specific limit. If the same key is specified multiple times, the outcome is undefined.

    BPS and IOPS are measured in each IO direction and IOs are delayed if limit is reached. Temporary bursts are allowed.

    Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

      echo "8:16 rbps=2097152 wiops=120" > io.max

    Reading returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=120

    Write IOPS limit can be removed by writing the following::

      echo "8:16 wiops=max" > io.max

    Reading now returns the following::

      8:16 rbps=2097152 wbps=max riops=max wiops=max

  io.pressure
    A read-only nested-keyed file.

    Shows pressure stall information for IO. See :ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and written asynchronously to the backing filesystem by the writeback mechanism. Writeback sits between the memory and IO domains and regulates the proportion of dirty memory by balancing dirtying and write IOs.

The io controller, in conjunction with the memory controller, implements control of page cache writeback IOs. The memory controller defines the memory domain that dirty memory ratio is calculated and maintained for and the io controller defines the io domain which writes out dirty pages for the memory domain. Both system-wide and per-cgroup dirty memory states are examined and the more restrictive of the two is enforced.

cgroup writeback requires explicit support from the underlying filesystem. Currently, cgroup writeback is implemented on ext2, ext4, btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are attributed to the root cgroup.

There are inherent differences in memory and writeback management which affect how cgroup ownership is tracked. Memory is tracked per page while writeback per inode. For the purpose of writeback, an inode is assigned to a cgroup and all IO requests to write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages which are associated with different cgroups than the one the inode is associated with. These are called foreign pages. The writeback constantly keeps track of foreign pages and, if a particular foreign cgroup becomes the majority over a certain period of time, switches the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is mostly dirtied by a
single cgroup even when the main writing cgroup changes over time, use cases where multiple cgroups write to a single inode simultaneously are not supported well. In such circumstances, a significant portion of IOs are likely to be attributed incorrectly. As memory controller assigns page ownership on the first use and doesn't update it until the page is released, even if writeback strictly follows page ownership, multiple cgroups dirtying overlapping areas wouldn't work as expected. It's recommended to avoid such usage patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
    These ratios apply the same to cgroup writeback with the amount of available memory capped by limits imposed by the memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
    For cgroup writeback, this is calculated into ratio against total available memory and applied the same way as vm.dirty[_background]_ratio.


IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You provide a group with a latency target, and if the average latency exceeds that target the controller will throttle any peers that have a lower latency target than the protected workload.

The limits are only applied at the peer level in the hierarchy. This means that in the diagram below, only groups A, B, and C will influence each other, and groups D and F will influence each other. Group G will influence nobody::

                        [root]
                /          |           \
                A          B           C
               /  \        |
              D    F       G

So the ideal way to configure this is to set io.latency in groups A, B, and C. Generally you do not want to set a value lower than the latency your device supports. Experiment to find the value that works best for your workload. Start at higher than the expected latency for your device and watch the avg_lat value in io.stat for your workload group to get an idea of the latency you see during normal operation. Use the avg_lat value as a basis for your real setting, setting at 10-15% higher than the value in io.stat.

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving, so as long as everybody is meeting their latency target the controller doesn't do anything. Once a group starts missing its target it begins throttling any peer group that has a higher target than itself. This throttling takes 2 forms:

- Queue depth throttling. This is the number of outstanding IO's a group is allowed to have. We will clamp down relatively quickly, starting at no limit and going all the way down to 1 IO at a time.

- Artificial delay induction. There are certain types of IO that cannot be throttled without possibly adversely affecting higher priority groups. This includes swapping and metadata IO. These types of IO are allowed to occur normally; however they are "charged" to the originating group. If the originating group is being throttled you will see the use_delay and delay fields in io.stat increase. The delay value is how many microseconds that are being added to any process that runs in this group. Because this number can grow quite large if there is a lot of swapping or metadata IO occurring we limit the individual delay events to 1 second at a time.

Once the victimized group starts meeting its latency target again it will start unthrottling any peer groups that were throttled previously. If the victimized group simply stops doing IO the global counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

  io.latency
    This takes a similar format as the other controllers::

      "MAJOR:MINOR target=<target time in microseconds>"

  io.stat
    If the controller is enabled you will see extra stats in io.stat in addition to the normal ones.

      depth
        This is the current queue depth for the group.

      avg_lat
        This is an exponential moving
average with a decay rate of 1/exp, bound by the sampling interval. The decay rate interval can be calculated by multiplying the win value in io.stat by the corresponding number of samples based on the win value.

      win
        The sampling window size in milliseconds. This is the minimum duration of time between evaluation events. Windows only elapse with IO activity. Idle periods extend the most recent window.

IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup policy, namely the io.prio.class attribute. The following values are accepted for that attribute:

  no-change
    Do not modify the I/O priority class.

  promote-to-rt
    For requests that have a non-RT I/O priority class, change it into RT. Also change the priority level of these requests to 4. Do not modify the I/O priority of requests that have priority class RT.

  restrict-to-be
    For requests that do not have an I/O priority class or that have I/O priority class RT, change it into BE. Also change the priority level of these requests to 0. Do not modify the I/O priority class of requests that have priority class IDLE.

  idle
    Change the I/O priority class of all requests into IDLE, the lowest I/O priority class.

  none-to-rt
    Deprecated. Just an alias for promote-to-rt.

The following numerical values are associated with the I/O priority policies:

  ==============  ===
  no-change        0
  promote-to-rt    1
  restrict-to-be   2
  idle             3
  ==============  ===

The numerical value that corresponds to each I/O priority class is as follows:

  ==============================  ===
  IOPRIO_CLASS_NONE                0
  IOPRIO_CLASS_RT (real-time)      1
  IOPRIO_CLASS_BE (best effort)    2
  IOPRIO_CLASS_IDLE                3
  ==============================  ===
The algorithm to set the I/O priority class for a request is as follows:

- If I/O priority class policy is promote-to-rt, change the request I/O priority class to IOPRIO_CLASS_RT and change the request I/O priority level to 4.
- If I/O priority class policy is not promote-to-rt, translate the I/O priority class policy into a number, then change the request I/O priority class into the maximum of the I/O priority class policy number and the numerical I/O priority class.


PID
---

The process number controller is used to allow a cgroup to stop any new tasks from being fork()'d or clone()'d after a specified limit is reached.

The number of tasks in a cgroup can be exhausted in ways which other controllers cannot prevent, thus warranting its own controller. For example, a fork bomb is likely to exhaust the number of tasks before hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

  pids.max
    A read-write single value file which exists on non-root cgroups. The default is "max".

    Hard limit of number of processes.

  pids.current
    A read-only single value file which exists on non-root cgroups.

    The number of processes currently in the cgroup and its descendants.

  pids.peak
    A read-only single value file which exists on non-root cgroups.

    The maximum value that the number of processes in the cgroup and its descendants has ever reached.

  pids.events
    A read-only flat keyed file which exists on non-root cgroups. Unless specified otherwise, a value change in this file generates a file modified event. The following entries are defined.

      max
        The number of times the cgroup's total number of processes hit the pids.max limit (see also pids_localevents).

  pids.events.local
    Similar to pids.events but the fields in the file are local to the cgroup, i.e. not hierarchical. The file modified event generated on this file reflects only the local events.

Organisational operations are not blocked by cgroup policies, so it is possible to have pids.current > pids.max. This can be done by either setting the limit to be smaller than pids.current, or attaching enough processes to the cgroup such that pids.current is larger than pids.max. However, it is not possible to violate a cgroup PID policy through fork() or clone(). These will return -EAGAIN if the creation of a new process would cause a cgroup policy to be violated.


Cpuset
------

The "cpuset" controller provides a mechanism for constraining the CPU and memory node placement of tasks to only the resources specified in the cpuset interface files in a task's current cgroup. This is especially valuable on large NUMA systems where placing jobs on properly sized subsets of the systems with careful processor and memory placement to reduce cross-node memory access and contention can improve overall system performance.

The "cpuset" controller is hierarchical. That means the controller cannot use CPUs or memory nodes not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

  cpuset.cpus
    A read-write multiple values file which exists on non-root cpuset-enabled cgroups.

    It lists the requested CPUs to be used by tasks within this cgroup. The actual list of CPUs to be granted, however, is subjected to constraints imposed by its parent and can differ from the requested CPUs.

    The CPU numbers are comma-separated numbers or ranges. For example::

      # cat cpuset.cpus
      0-4,6,8-10

    An empty value indicates that the cgroup is using the same setting as the nearest cgroup ancestor with a non-empty "cpuset.cpus" or all the available CPUs if none is found.

    The value of "cpuset.cpus" stays constant until the next update and won't be affected by any CPU hotplug events.

  cpuset.cpus.effective
    A read-only multiple values file which exists on all cpuset-enabled cgroups.

    It lists the onlined CPUs that are
        actually granted to this cgroup by its parent.  These CPUs
        are allowed to be used by tasks within the current cgroup.

        If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file
        shows all the CPUs from the parent cgroup that can be
        available to be used by this cgroup.  Otherwise, it should be
        a subset of "cpuset.cpus" unless none of the CPUs listed in
        "cpuset.cpus" can be granted.  In this case, it will be
        treated just like an empty "cpuset.cpus".

        Its value will be affected by CPU hotplug events.

  cpuset.mems
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.

        It lists the requested memory nodes to be used by tasks
        within this cgroup.  The actual list of memory nodes granted,
        however, is subjected to constraints imposed by its parent
        and can differ from the requested memory nodes.

        The memory node numbers are comma-separated numbers or
        ranges.  For example::

          # cat cpuset.mems
          0-1,3

        An empty value indicates that the cgroup is using the same
        setting as the nearest cgroup ancestor with a non-empty
        "cpuset.mems" or all the available memory nodes if none
        is found.

        The value of "cpuset.mems" stays constant until the next
        update and won't be affected by any memory nodes hotplug
        events.

        Setting a non-empty value to "cpuset.mems" causes memory of
        tasks within the cgroup to be migrated to the designated
        nodes if they are currently using memory outside of the
        designated nodes.

        There is a cost for this memory migration.  The migration
        may not be complete and some memory pages may be left behind.
        So it is recommended that "cpuset.mems" should be set
        properly before spawning new tasks into the cpuset.  Even if
        there is a need to change "cpuset.mems" with active tasks,
        it shouldn't be done frequently.

  cpuset.mems.effective
        A read-only multiple values file which exists on all
        cpuset-enabled cgroups.

        It lists the onlined memory nodes that are actually granted
        to this cgroup by its parent.  These
        memory nodes are allowed to be used by tasks within the
        current cgroup.

        If "cpuset.mems" is empty, it shows all the memory nodes
        from the parent cgroup that will be available to be used by
        this cgroup.  Otherwise, it should be a subset of
        "cpuset.mems" unless none of the memory nodes listed in
        "cpuset.mems" can be granted.  In this case, it will be
        treated just like an empty "cpuset.mems".

        Its value will be affected by memory nodes hotplug events.

  cpuset.cpus.exclusive
        A read-write multiple values file which exists on non-root
        cpuset-enabled cgroups.

        It lists all the exclusive CPUs that are allowed to be used
        to create a new cpuset partition.  Its value is not used
        unless the cgroup becomes a valid partition root.  See the
        "cpuset.cpus.partition" section below for a description of
        what a cpuset partition is.

        When the cgroup becomes a partition root, the actual
        exclusive CPUs that are allocated to that partition are
        listed in "cpuset.cpus.exclusive.effective" which may be
        different from "cpuset.cpus.exclusive".  If
        "cpuset.cpus.exclusive" has previously been set,
        "cpuset.cpus.exclusive.effective" is always a subset of it.

        Users can manually set it to a value that is different from
        "cpuset.cpus".  One constraint in setting it is that the list
        of CPUs must be exclusive with respect to
        "cpuset.cpus.exclusive" of its sibling.  If
        "cpuset.cpus.exclusive" of a sibling cgroup isn't set, its
        "cpuset.cpus" value, if set, cannot be a subset of it to
        leave at least one CPU available when the exclusive CPUs are
        taken away.

        For a parent cgroup, any one of its exclusive CPUs can only
        be distributed to at most one of its child cgroups.  Having
        an exclusive CPU appearing in two or more of its child
        cgroups is not allowed (the exclusivity rule).  A value that
        violates the exclusivity rule will be rejected with a write
        error.

        The root cgroup is a partition root and all its available
        CPUs are in its exclusive CPU set.

  cpuset.cpus.exclusive.effective
        A read-only multiple values file which exists on all non-root
        cpuset-enabled cgroups.

        This file shows the effective set of exclusive CPUs that can
        be used to create a partition root.  The content of this file
        will always be a subset of its parent's
        "cpuset.cpus.exclusive.effective" if its parent is not the
        root cgroup.  It will also be a subset of
        "cpuset.cpus.exclusive" if it is set.  If
        "cpuset.cpus.exclusive" is not set, it is treated to have an
        implicit value of "cpuset.cpus" in the formation of local
        partition.

  cpuset.cpus.isolated
        A read-only and root cgroup only multiple values file.

        This file shows the set of all isolated CPUs used in
        existing isolated partitions.  It will be empty if no
        isolated partition is created.

  cpuset.cpus.partition
        A read-write single value file which exists on non-root
        cpuset-enabled cgroups.  This flag is owned by the parent
        cgroup and is not delegatable.

        It accepts only the following input values when written to::

          ==========    =====================================
          "member"      Non-root member of a partition
          "root"        Partition root
          "isolated"    Partition root without load balancing
          ==========    =====================================

        A cpuset partition is a collection of cpuset-enabled cgroups
        with a partition root at the top of the hierarchy and its
        descendants except those that are separate partition roots
        themselves and their descendants.  A partition has exclusive
        access to the set of exclusive CPUs allocated to it.  Other
        cgroups outside of that partition cannot use any CPUs in
        that set.

        There are two types of partitions - local and remote.  A
        local partition is one whose parent cgroup is also a valid
        partition root.  A remote partition is one whose parent
        cgroup is not a valid partition root itself.  Writing to
        "cpuset.cpus.exclusive" is optional for the creation of a
        local partition as its "cpuset.cpus.exclusive" file will
        assume an implicit value that is the same as
        "cpuset.cpus" if it is not set.  Writing the proper
        "cpuset.cpus.exclusive" values down the cgroup hierarchy
        before the target partition root is mandatory for the
        creation of a remote partition.

        Currently, a remote partition cannot be created under a
        local partition.  All the ancestors of a remote partition
        root except the root cgroup cannot be a partition root.

        The root cgroup is always a partition root and its state
        cannot be changed.  All other non-root cgroups start out as
        "member".

        When set to "root", the current cgroup is the root of a new
        partition or scheduling domain.  The set of exclusive CPUs
        is determined by the value of its
        "cpuset.cpus.exclusive.effective".

        When set to "isolated", the CPUs in that partition will be
        in an isolated state without any load balancing from the
        scheduler and excluded from the unbound workqueues.  Tasks
        placed in such a partition with multiple CPUs should be
        carefully distributed and bound to each of the individual
        CPUs for optimal performance.

        A partition root ("root" or "isolated") can be in one of the
        two possible states - valid or invalid.  An invalid
        partition root is in a degraded state where some state
        information may be retained, but behaves more like a
        "member".

        All possible state transitions among "member", "root" and
        "isolated" are allowed.

        On read, the "cpuset.cpus.partition" file can show the
        following values::

          =============================  =====================================
          "member"                       Non-root member of a partition
          "root"                         Partition root
          "isolated"                     Partition root without load balancing
          "root invalid (<reason>)"      Invalid partition root
          "isolated invalid (<reason>)"  Invalid isolated partition root
          =============================  =====================================

        In the case of an invalid partition root, a descriptive
        string on why the partition is invalid is included within
        parentheses.

        For a local partition root to be valid, the following
        conditions
        must be met:

        1) The parent cgroup is a valid partition root.
        2) The "cpuset.cpus.exclusive.effective" file cannot be
           empty, though it may contain offline CPUs.
        3) The "cpuset.cpus.effective" cannot be empty unless there
           is no task associated with this partition.

        For a remote partition root to be valid, all the above
        conditions except the first one must be met.

        External events like hotplug or changes to "cpuset.cpus" or
        "cpuset.cpus.exclusive" can cause a valid partition root to
        become invalid and vice versa.  Note that a task cannot be
        moved to a cgroup with empty "cpuset.cpus.effective".

        A valid non-root parent partition may distribute out all its
        CPUs to its child local partitions when there is no task
        associated with it.

        Care must be taken to change a valid partition root to
        "member" as all its child local partitions, if present, will
        become invalid causing disruption to tasks running in those
        child partitions.  These inactivated partitions could be
        recovered if their parent is switched back to a partition
        root with a proper value in "cpuset.cpus" or
        "cpuset.cpus.exclusive".

        Poll and inotify events are triggered whenever the state of
        "cpuset.cpus.partition" changes.  That includes changes
        caused by write to "cpuset.cpus.partition", cpu hotplug or
        other changes that modify the validity status of the
        partition.  This will allow user space agents to monitor
        unexpected changes to "cpuset.cpus.partition" without the
        need to do continuous polling.

        A user can pre-configure certain CPUs to an isolated state
        with load balancing disabled at boot time with the
        "isolcpus" kernel boot command line option.  If those CPUs
        are to be put into a partition, they have to be used in an
        isolated partition.

Device controller
-----------------

Device controller manages access to device files.  It includes both
creation of new device files (using mknod) and access to the existing
device files.

Cgroup v2 device controller has no interface
files and is implemented on top of cgroup BPF.  To control access to
device files, a user may create bpf programs of type
BPF_PROG_TYPE_CGROUP_DEVICE and attach them to cgroups with the
BPF_CGROUP_DEVICE flag.  On an attempt to access a device file,
corresponding BPF programs will be executed, and depending on the
return value the attempt will succeed or fail with -EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access
attempt: access type (mknod/read/write) and device (type, major and
minor numbers).  If the program returns 0, the attempt fails with
-EPERM, otherwise it succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source
tree.

RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
        A read-write nested-keyed file that exists for all the
        cgroups except root that describes the current configured
        resource limit for an RDMA/IB device.

        Lines are keyed by device name and are not ordered.  Each
        line contains space separated resource name and its
        configured limit that can be distributed.

        The following nested keys are defined.

          ==========    =============================
          hca_handle    Maximum number of HCA Handles
          hca_object    Maximum number of HCA Objects
          ==========    =============================

        An example for mlx4 and ocrdma device follows::

          mlx4_0 hca_handle=2 hca_object=2000
          ocrdma1 hca_handle=3 hca_object=max

  rdma.current
        A read-only file that describes current resource usage.
        It exists for all the cgroup except root.

        An example for mlx4 and ocrdma device follows::

          mlx4_0 hca_handle=1 hca_object=20
          ocrdma1 hca_handle=1 hca_object=23

HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control
group and enforces the controller limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
        Show current usage for "hugepagesize" hugetlb.  It exists
        for all the cgroup except root.

  hugetlb.<hugepagesize>.max
        Set/show the hard limit of "hugepagesize" hugetlb usage.
        The default value is "max".  It exists for all the cgroup
        except root.

  hugetlb.<hugepagesize>.events
        A read-only flat-keyed file which exists on non-root cgroups.

          max
                The number of allocation failures due to the HugeTLB
                limit.

  hugetlb.<hugepagesize>.events.local
        Similar to hugetlb.<hugepagesize>.events but the fields in
        the file are local to the cgroup i.e. not hierarchical.  The
        file modified event generated on this file reflects only the
        local events.

  hugetlb.<hugepagesize>.numa_stat
        Similar to memory.numa_stat, it shows the numa information
        of the hugetlb pages of <hugepagesize> in this cgroup.  Only
        active in use hugetlb pages are included.  The per-node
        values are in bytes.

Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like
the other cgroup resources.  The controller is enabled by the
CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{}
in the include/linux/misc_cgroup.h file and the corresponding name
via misc_res_name[] in the kernel/cgroup/misc.c file.  The provider
of the resource must set its capacity prior to using the resource by
calling misc_cg_set_capacity().

Once a capacity is set then the resource usage can be updated using
charge and uncharge APIs.  All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The Miscellaneous controller provides the following interface files.
If two misc resources (res_a and res_b) are registered then:

  misc.capacity
        A read-only flat-keyed file shown only in the root cgroup.
        It shows miscellaneous scalar resources available on the
        platform along with their quantities::

          $ cat misc.capacity
          res_a 50
          res_b 10

  misc.current
        A read-only flat-keyed file shown in all cgroups.  It shows
        the current usage of the resources in the cgroup and its
        children::

          $ cat misc.current
          res_a 3
          res_b 0

  misc.peak
        A read-only flat-keyed file shown in all cgroups.  It shows
        the historical maximum usage of the resources in the cgroup
        and its children::

          $ cat misc.peak
          res_a 10
          res_b 8

  misc.max
        A read-write flat-keyed file shown in the non-root cgroups.
        Allowed maximum usage of the resources in the cgroup and its
        children::

          $ cat misc.max
          res_a max
          res_b 4

        Limit can be set by::

          # echo res_a 1 > misc.max

        Limit can be set to max by::

          # echo res_a max > misc.max

        Limits can be set higher than the capacity value in the
        misc.capacity file.

  misc.events
        A read-only flat-keyed file which exists on non-root
        cgroups.  The following entries are defined.  Unless
        specified otherwise, a value change in this file generates a
        file modified event.  All fields in this file are
        hierarchical.

          max
                The number of times the cgroup's resource usage was
                about to go over the max boundary.

  misc.events.local
        Similar to misc.events but the fields in the file are local
        to the cgroup i.e. not hierarchical.  The file modified
        event generated on this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it
is used first, and stays charged to that cgroup until that resource
is freed.  Migrating a process to a different cgroup does not move
the charge to the destination cgroup where the process has moved.

Others
------

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.

Non-normative information
=========================

This section contains information that isn't considered to be a part
of the stable kernel API and so is subject to change.

CPU controller root cgroup process behaviour
--------------------------------------------

When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it was hosted in a separate child cgroup of
the root cgroup.  This child cgroup weight is dependent on its thread
nice level.

For details of this mapping see the sched_prio_to_weight array in the
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of
1024).

IO controller root cgroup process behaviour
-------------------------------------------

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken into
account as if it was a normal child cgroup of the root cgroup with a
weight value of 200.

Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP
clone flag can be used with clone(2) and unshare(2) to create a new
cgroup namespace.  The process running inside the cgroup namespace
will have its "/proc/$PID/cgroup" output restricted to cgroupns
root.  The cgroupns root is the cgroup of the process at the time of
creation of the cgroup namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes the
"/proc/$PID/cgroup" file may leak potential system level information
to the isolated processes.  For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path
/batchjobs/container_id1 can be considered as system data and
undesirable to expose to the isolated processes.  cgroup namespace
can be used to restrict visibility of this path.  For example, before
creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.

The Root and Views
------------------

The "cgroupns root" for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in
the /batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ("/") cgroup.

The cgroupns root cgroup does not change even if the namespace
creator process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at "/batchjobs/container_id2", then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with "/" to indicate that
it is relative to the cgroup namespace root of the caller.

Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > /batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.

Interaction with Other Namespaces
---------------------------------

Namespace specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with cgroupns root as
the filesystem root.
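The path arithmetic behind these namespace views can be sketched outside the kernel.  The following Python sketch (a hypothetical helper, not a kernel API) computes the "/proc/$PID/cgroup" path a caller would see for a given cgroupns root, assuming both arguments are absolute cgroupfs paths:

```python
import posixpath

def cgroupns_view(cgroup_path: str, ns_root: str) -> str:
    """Return the cgroup path as seen from a cgroup namespace
    whose cgroupns root is ns_root.  Both paths are absolute
    paths within the cgroup filesystem."""
    rel = posixpath.relpath(cgroup_path, ns_root)
    if rel == ".":
        # The process sits exactly at the namespace root.
        return "/"
    # The shown path always starts with "/", climbing with ".."
    # when the cgroup lies outside the caller's namespace root.
    return "/" + rel

# At the namespace root itself:
print(cgroupns_view("/batchjobs/container_id1", "/batchjobs/container_id1"))
# A descendant cgroup:
print(cgroupns_view("/batchjobs/container_id1/sub_cgrp_1", "/batchjobs/container_id1"))
# Viewed from a namespace rooted at a sibling cgroup, the path
# climbs with "..":
print(cgroupns_view("/batchjobs/container_id1/sub_cgrp_1", "/batchjobs/container_id2"))
```

This mirrors the transcripts above: inside the namespace the path collapses to "/" or "/sub_cgrp_1", while a sibling namespace sees a "/../..."-prefixed relative path.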
The process needs CAP_SYS_ADMIN against its user and mount
namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of cgroup hierarchy by namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.

Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.

Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bio's using the
following two functions.

  wbc_init_bio(@wbc, @bio)
        Should be called for each bio carrying writeback data and
        associates the bio with the inode's owner cgroup and the
        corresponding request queue.  This must be called after
        a queue (device) has been associated with the bio and
        before submission.

  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it's the easiest and most
        natural to call it as data segments are added to a bio.

With writeback bio's annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion.  There is no one easy solution
for the problem.  Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.

Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- All v1 mount options are not supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" or
  "cgroup.stat" files at the root instead.

Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller,
utility type controllers such as freezer which can be useful in all
hierarchies could only be used in one.  The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones,
such as the cpu and cpuacct controllers, made sense to be put on the
same hierarchy.  This often meant that userland ended up managing
multiple similar hierarchies repeating the same steps on each
hierarchy whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated cgroup core implementation but more
importantly the support for multiple hierarchies restricted how
cgroup could be used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which
meant that a thread's
cgroup membership couldn't be described in finite length.  The key
might contain any number of entries and was unlimited in length,
which made it highly awkward to manipulate and led to addition of
controllers which existed only to identify membership, which in turn
exacerbated the original problem of proliferating number of
hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.

Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different
cgroups.  This didn't make sense for some controllers and those
controllers ended up implementing different ways to ignore such
situations but much more importantly it blurred the line between API
exposed to individual applications and system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got
abused in combination with thread granularity.  cgroups were
delegated to individual applications so that they can create and
manage
their own sub-hierarchies and control resource distributions along
them.  This effectively raised cgroup to the status of a syscall-like
API exposed to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path,
open and then read and/or write to it.  This is not only extremely
clunky and unusual but also inherently racy.  There is no
conventional way to define transaction across the required steps and
nothing can guarantee that the process would actually be operating on
its own sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs
to the system-management pseudo filesystem.  cgroup ended up with
interface knobs which were not properly abstracted or refined and
directly revealed kernel internal details.  These knobs got exposed
to individual applications through the ill-defined delegation
mechanism effectively abusing cgroup as a shortcut to implementing
public APIs without going through the required scrutiny.

This was painful for both userland and kernel.  Userland ended up
with misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and got locked into constructs.

Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and
its children cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children
wanted to be allocated specific ratios of CPU cycles and the number
of internal threads fluctuated - the ratios constantly changed as the
number of competing entities fluctuated.

There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of
all the knobs with "leaf_" prefixed.  While this allowed equivalent
control over internal threads, it was with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches
were severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup
core in a uniform way.

Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core
side was how an empty cgroup was notified - a userland helper binary
was forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to in-kernel event delivery filtering mechanism further complicating
the interface.

Controller interfaces were problematic too.  An extreme example is
controllers
completely ignoring hierarchical organization and treating all
cgroups as if they were all located directly under the root cgroup.
Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.  When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured.  Configuration knobs for the same type of
control used widely differing naming schemes and formats.  Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and
updates controllers so that they expose minimal and consistent
interfaces.

Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is per default unset.  As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out.  The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior.  First off, the soft limit has no
hierarchical meaning.  All configured groups are organized in a
global rbtree and treated like equal peers, regardless where they are
located in the hierarchy.  This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also
impacts system performance due to overreclaim, to the point where the
feature becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve.  A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible.  It also
enjoys having reclaim pressure proportional to its overage when above
its effective
low   The original high boundary  the hard limit  is defined as a strict limit that can not budge  even if the OOM killer has to be called  But this generally goes against the goal of making the most out of the available memory   The memory consumption of workloads varies during runtime  and that requires users to overcommit   But doing that with a strict upper limit requires either a fairly accurate prediction of the working set size or adding slack to the limit   Since working set size estimation is hard and error prone  and getting it wrong results in OOM kills  most users tend to err on the side of a looser limit and end up wasting precious resources   The memory high boundary on the other hand can be set much more conservatively   When hit  it throttles allocations by forcing them into direct reclaim to work off the excess  but it never invokes the OOM killer   As a result  a high boundary that is chosen too aggressively will not terminate the processes  but instead it will lead to gradual performance degradation   The user can monitor this and make corrections until the minimal memory footprint that still gives acceptable performance is found   In extreme cases  with many concurrent allocations and a complete breakdown of reclaim progress within the group  the high boundary can be exceeded   But even then it s mostly better to satisfy the allocation from the slack available in other groups or the rest of the system than killing the group   Otherwise  memory max is there to limit this type of spillover and ultimately contain buggy or even malicious applications   Setting the original memory limit in bytes below the current usage was subject to a race condition  where concurrent charges could cause the limit setting to fail  memory max on the other hand will first set the limit to prevent new charges  and then reclaim and OOM kill until the new limit is met   or the task writing to memory max is killed   The combined memory swap accounting and limiting is 
replaced by real control over swap space   The main argument for a combined memory swap facility in the original cgroup design was that global or parental pressure would always be able to swap all anonymous memory of a child group  regardless of the child s own  possibly untrusted  configuration   However  untrusted groups can sabotage swapping by other means   such as referencing its anonymous memory in a tight loop   and an admin can not assume full swappability when overcommitting untrusted jobs   For trusted jobs  on the other hand  a combined counter is not an intuitive userspace interface  and it flies in the face of the idea that cgroup controllers should account and limit specific physical resources   Swap space is a resource like all others in the system  and that s why unified hierarchy allows distributing it separately "}
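The memory.high / memory.max semantics described above are configured from userspace simply by writing byte values into the cgroup's control files. A minimal sketch under stated assumptions (the helper name `set_memory_limits` and the example cgroup path are illustrative, not from the document; the knob filenames `memory.high` and `memory.max` are the real cgroup v2 interface):

```python
import os

def set_memory_limits(cgroup_dir, high, max_):
    """Write cgroup v2 memory knobs into the given cgroup directory.

    memory.high throttles allocations via direct reclaim when exceeded;
    memory.max is the hard cap that may ultimately invoke the OOM killer.
    Values are plain byte counts written as text.
    """
    for knob, value in (("memory.high", high), ("memory.max", max_)):
        with open(os.path.join(cgroup_dir, knob), "w") as f:
            f.write(str(value))

# Illustrative use (requires root or a delegated cgroup subtree):
# set_memory_limits("/sys/fs/cgroup/mygroup", 4 * 2**30, 5 * 2**30)
```

Following the document's advice, memory.high would be set conservatively and tuned by observing reclaim pressure, while memory.max is left as a looser backstop against runaway spillover.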
{"questions":"linux Reducing OS jitter due to per cpu kthreads them to a housekeeping CPU dedicated to such work This document lists per CPU kthreads in the Linux kernel and presents options to control their OS jitter Note that non per CPU kthreads are References not listed here To reduce OS jitter from non per CPU kthreads bind","answers":"==========================================\nReducing OS jitter due to per-cpu kthreads\n==========================================\n\nThis document lists per-CPU kthreads in the Linux kernel and presents\noptions to control their OS jitter.  Note that non-per-CPU kthreads are\nnot listed here.  To reduce OS jitter from non-per-CPU kthreads, bind\nthem to a \"housekeeping\" CPU dedicated to such work.\n\nReferences\n==========\n\n-\tDocumentation\/core-api\/irq\/irq-affinity.rst:  Binding interrupts to sets of CPUs.\n\n-\tDocumentation\/admin-guide\/cgroup-v1:  Using cgroups to bind tasks to sets of CPUs.\n\n-\tman taskset:  Using the taskset command to bind tasks to sets\n\tof CPUs.\n\n-\tman sched_setaffinity:  Using the sched_setaffinity() system\n\tcall to bind tasks to sets of CPUs.\n\n-\t\/sys\/devices\/system\/cpu\/cpuN\/online:  Control CPU N's hotplug state,\n\twriting \"0\" to offline and \"1\" to online.\n\n-\tIn order to locate kernel-generated OS jitter on CPU N:\n\n\t\tcd \/sys\/kernel\/tracing\n\t\techo 1 > max_graph_depth # Increase the \"1\" for more detail\n\t\techo function_graph > current_tracer\n\t\t# run workload\n\t\tcat per_cpu\/cpuN\/trace\n\nkthreads\n========\n\nName:\n  ehca_comp\/%u\n\nPurpose:\n  Periodically process Infiniband-related work.\n\nTo reduce its OS jitter, do any of the following:\n\n1.\tDon't use eHCA Infiniband hardware, instead choosing hardware\n\tthat does not require per-CPU kthreads.  This will prevent these\n\tkthreads from being created in the first place.  
(This will\n\twork for most people, as this hardware, though important, is\n\trelatively old and is produced in relatively low unit volumes.)\n2.\tDo all eHCA-Infiniband-related work on other CPUs, including\n\tinterrupts.\n3.\tRework the eHCA driver so that its per-CPU kthreads are\n\tprovisioned only on selected CPUs.\n\n\nName:\n  irq\/%d-%s\n\nPurpose:\n  Handle threaded interrupts.\n\nTo reduce its OS jitter, do the following:\n\n1.\tUse irq affinity to force the irq threads to execute on\n\tsome other CPU.\n\nName:\n  kcmtpd_ctr_%d\n\nPurpose:\n  Handle Bluetooth work.\n\nTo reduce its OS jitter, do one of the following:\n\n1.\tDon't use Bluetooth, in which case these kthreads won't be\n\tcreated in the first place.\n2.\tUse irq affinity to force Bluetooth-related interrupts to\n\toccur on some other CPU and furthermore initiate all\n\tBluetooth activity on some other CPU.\n\nName:\n  ksoftirqd\/%u\n\nPurpose:\n  Execute softirq handlers when threaded or when under heavy load.\n\nTo reduce its OS jitter, each softirq vector must be handled\nseparately as follows:\n\nTIMER_SOFTIRQ\n-------------\n\nDo all of the following:\n\n1.\tTo the extent possible, keep the CPU out of the kernel when it\n\tis non-idle, for example, by avoiding system calls and by forcing\n\tboth kernel threads and interrupts to execute elsewhere.\n2.\tBuild with CONFIG_HOTPLUG_CPU=y.  After boot completes, force\n\tthe CPU offline, then bring it back online.  This forces\n\trecurring timers to migrate elsewhere.\tIf you are concerned\n\twith multiple CPUs, force them all offline before bringing the\n\tfirst one back online.  
Once you have onlined the CPUs in question,\n\tdo not offline any other CPUs, because doing so could force the\n\ttimer back onto one of the CPUs in question.\n\nNET_TX_SOFTIRQ and NET_RX_SOFTIRQ\n---------------------------------\n\nDo all of the following:\n\n1.\tForce networking interrupts onto other CPUs.\n2.\tInitiate any network I\/O on other CPUs.\n3.\tOnce your application has started, prevent CPU-hotplug operations\n\tfrom being initiated from tasks that might run on the CPU to\n\tbe de-jittered.  (It is OK to force this CPU offline and then\n\tbring it back online before you start your application.)\n\nBLOCK_SOFTIRQ\n-------------\n\nDo all of the following:\n\n1.\tForce block-device interrupts onto some other CPU.\n2.\tInitiate any block I\/O on other CPUs.\n3.\tOnce your application has started, prevent CPU-hotplug operations\n\tfrom being initiated from tasks that might run on the CPU to\n\tbe de-jittered.  (It is OK to force this CPU offline and then\n\tbring it back online before you start your application.)\n\nIRQ_POLL_SOFTIRQ\n----------------\n\nDo all of the following:\n\n1.\tForce block-device interrupts onto some other CPU.\n2.\tInitiate any block I\/O and block-I\/O polling on other CPUs.\n3.\tOnce your application has started, prevent CPU-hotplug operations\n\tfrom being initiated from tasks that might run on the CPU to\n\tbe de-jittered.  (It is OK to force this CPU offline and then\n\tbring it back online before you start your application.)\n\nTASKLET_SOFTIRQ\n---------------\n\nDo one or more of the following:\n\n1.\tAvoid use of drivers that use tasklets.  
(Such drivers will contain\n\tcalls to things like tasklet_schedule().)\n2.\tConvert all drivers that you must use from tasklets to workqueues.\n3.\tForce interrupts for drivers using tasklets onto other CPUs,\n\tand also do I\/O involving these drivers on other CPUs.\n\nSCHED_SOFTIRQ\n-------------\n\nDo all of the following:\n\n1.\tAvoid sending scheduler IPIs to the CPU to be de-jittered,\n\tfor example, ensure that at most one runnable kthread is present\n\ton that CPU.  If a thread that expects to run on the de-jittered\n\tCPU awakens, the scheduler will send an IPI that can result in\n\ta subsequent SCHED_SOFTIRQ.\n2.\tCONFIG_NO_HZ_FULL=y and ensure that the CPU to be de-jittered\n\tis marked as an adaptive-ticks CPU using the \"nohz_full=\"\n\tboot parameter.  This reduces the number of scheduler-clock\n\tinterrupts that the de-jittered CPU receives, minimizing its\n\tchances of being selected to do the load balancing work that\n\truns in SCHED_SOFTIRQ context.\n3.\tTo the extent possible, keep the CPU out of the kernel when it\n\tis non-idle, for example, by avoiding system calls and by\n\tforcing both kernel threads and interrupts to execute elsewhere.\n\tThis further reduces the number of scheduler-clock interrupts\n\treceived by the de-jittered CPU.\n\nHRTIMER_SOFTIRQ\n---------------\n\nDo all of the following:\n\n1.\tTo the extent possible, keep the CPU out of the kernel when it\n\tis non-idle.  For example, avoid system calls and force both\n\tkernel threads and interrupts to execute elsewhere.\n2.\tBuild with CONFIG_HOTPLUG_CPU=y.  Once boot completes, force the\n\tCPU offline, then bring it back online.  This forces recurring\n\ttimers to migrate elsewhere.  If you are concerned with multiple\n\tCPUs, force them all offline before bringing the first one\n\tback online.  
Once you have onlined the CPUs in question, do not\n\toffline any other CPUs, because doing so could force the timer\n\tback onto one of the CPUs in question.\n\nRCU_SOFTIRQ\n-----------\n\nDo at least one of the following:\n\n1.\tOffload callbacks and keep the CPU in either dyntick-idle or\n\tadaptive-ticks state by doing all of the following:\n\n\ta.\tCONFIG_NO_HZ_FULL=y and ensure that the CPU to be\n\t\tde-jittered is marked as an adaptive-ticks CPU using the\n\t\t\"nohz_full=\" boot parameter.  Bind the rcuo kthreads to\n\t\thousekeeping CPUs, which can tolerate OS jitter.\n\tb.\tTo the extent possible, keep the CPU out of the kernel\n\t\twhen it is non-idle, for example, by avoiding system\n\t\tcalls and by forcing both kernel threads and interrupts\n\t\tto execute elsewhere.\n\n2.\tEnable RCU to do its processing remotely via dyntick-idle by\n\tdoing all of the following:\n\n\ta.\tBuild with CONFIG_NO_HZ=y.\n\tb.\tEnsure that the CPU goes idle frequently, allowing other\n\t\tCPUs to detect that it has passed through an RCU quiescent\n\t\tstate.\tIf the kernel is built with CONFIG_NO_HZ_FULL=y,\n\t\tuserspace execution also allows other CPUs to detect that\n\t\tthe CPU in question has passed through a quiescent state.\n\tc.\tTo the extent possible, keep the CPU out of the kernel\n\t\twhen it is non-idle, for example, by avoiding system\n\t\tcalls and by forcing both kernel threads and interrupts\n\t\tto execute elsewhere.\n\nName:\n  kworker\/%u:%d%s (cpu, id, priority)\n\nPurpose:\n  Execute workqueue requests\n\nTo reduce its OS jitter, do any of the following:\n\n1.\tRun your workload at a real-time priority, which will allow\n\tpreempting the kworker daemons.\n2.\tA given workqueue can be made visible in the sysfs filesystem\n\tby passing the WQ_SYSFS to that workqueue's alloc_workqueue().\n\tSuch a workqueue can be confined to a given subset of the\n\tCPUs using the ``\/sys\/devices\/virtual\/workqueue\/*\/cpumask`` sysfs\n\tfiles.\tThe set of WQ_SYSFS 
workqueues can be displayed using\n\t\"ls \/sys\/devices\/virtual\/workqueue\".  That said, the workqueues\n\tmaintainer would like to caution people against indiscriminately\n\tsprinkling WQ_SYSFS across all the workqueues.\tThe reason for\n\tcaution is that it is easy to add WQ_SYSFS, but because sysfs is\n\tpart of the formal user\/kernel API, it can be nearly impossible\n\tto remove it, even if its addition was a mistake.\n3.\tDo any of the following needed to avoid jitter that your\n\tapplication cannot tolerate:\n\n\ta.\tAvoid using oprofile, thus avoiding OS jitter from\n\t\twq_sync_buffer().\n\tb.\tLimit your CPU frequency so that a CPU-frequency\n\t\tgovernor is not required, possibly enlisting the aid of\n\t\tspecial heatsinks or other cooling technologies.  If done\n\t\tcorrectly, and if your CPU architecture permits, you should\n\t\tbe able to build your kernel with CONFIG_CPU_FREQ=n to\n\t\tavoid the CPU-frequency governor periodically running\n\t\ton each CPU, including cs_dbs_timer() and od_dbs_timer().\n\n\t\tWARNING:  Please check your CPU specifications to\n\t\tmake sure that this is safe on your particular system.\n\tc.\tAs of v3.18, Christoph Lameter's on-demand vmstat workers\n\t\tcommit prevents OS jitter due to vmstat_update() on\n\t\tCONFIG_SMP=y systems.  Before v3.18, it is not possible\n\t\tto entirely get rid of the OS jitter, but you can\n\t\tdecrease its frequency by writing a large value to\n\t\t\/proc\/sys\/vm\/stat_interval.  The default value is HZ,\n\t\tfor an interval of one second.\tOf course, larger values\n\t\twill make your virtual-memory statistics update more\n\t\tslowly.  
Of course, you can also run your workload at\n\t\ta real-time priority, thus preempting vmstat_update(),\n\t\tbut if your workload is CPU-bound, this is a bad idea.\n\t\tHowever, there is an RFC patch from Christoph Lameter\n\t\t(based on an earlier one from Gilad Ben-Yossef) that\n\t\treduces or even eliminates vmstat overhead for some\n\t\tworkloads at https:\/\/lore.kernel.org\/r\/00000140e9dfd6bd-40db3d4f-c1be-434f-8132-7820f81bb586-000000@email.amazonses.com.\n\td.\tIf running on high-end powerpc servers, build with\n\t\tCONFIG_PPC_RTAS_DAEMON=n.  This prevents the RTAS\n\t\tdaemon from running on each CPU every second or so.\n\t\t(This will require editing Kconfig files and will defeat\n\t\tthis platform's RAS functionality.)  This avoids jitter\n\t\tdue to the rtas_event_scan() function.\n\t\tWARNING:  Please check your CPU specifications to\n\t\tmake sure that this is safe on your particular system.\n\te.\tIf running on Cell Processor, build your kernel with\n\t\tCBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from\n\t\tspu_gov_work().\n\t\tWARNING:  Please check your CPU specifications to\n\t\tmake sure that this is safe on your particular system.\n\tf.\tIf running on PowerMAC, build your kernel with\n\t\tCONFIG_PMAC_RACKMETER=n to disable the CPU-meter,\n\t\tavoiding OS jitter from rackmeter_do_timer().\n\nName:\n  rcuc\/%u\n\nPurpose:\n  Execute RCU callbacks in CONFIG_RCU_BOOST=y kernels.\n\nTo reduce its OS jitter, do at least one of the following:\n\n1.\tBuild the kernel with CONFIG_PREEMPT=n.  This prevents these\n\tkthreads from being created in the first place, and also obviates\n\tthe need for RCU priority boosting.  This approach is feasible\n\tfor workloads that do not require high degrees of responsiveness.\n2.\tBuild the kernel with CONFIG_RCU_BOOST=n.  This prevents these\n\tkthreads from being created in the first place.  
This approach\n\tis feasible only if your workload never requires RCU priority\n\tboosting, for example, if you ensure frequent idle time on all\n\tCPUs that might execute within the kernel.\n3.\tBuild with CONFIG_RCU_NOCB_CPU=y and boot with the rcu_nocbs=\n\tboot parameter offloading RCU callbacks from all CPUs susceptible\n\tto OS jitter.  This approach prevents the rcuc\/%u kthreads from\n\thaving any work to do, so that they are never awakened.\n4.\tEnsure that the CPU never enters the kernel, and, in particular,\n\tavoid initiating any CPU hotplug operations on this CPU.  This is\n\tanother way of preventing any callbacks from being queued on the\n\tCPU, again preventing the rcuc\/%u kthreads from having any work\n\tto do.\n\nName:\n  rcuop\/%d, rcuos\/%d, and rcuog\/%d\n\nPurpose:\n  Offload RCU callbacks from the corresponding CPU.\n\nTo reduce its OS jitter, do at least one of the following:\n\n1.\tUse affinity, cgroups, or other mechanism to force these kthreads\n\tto execute on some other CPU.\n2.\tBuild with CONFIG_RCU_NOCB_CPU=n, which will prevent these\n\tkthreads from being created in the first place.  
However, please\nnote that this will not eliminate OS jitter, but will instead\nshift it to RCU_SOFTIRQ.","site":"linux"}
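Several of the remedies above ("use affinity, cgroups, or other mechanism to force these kthreads to execute on some other CPU", binding rcuo kthreads to housekeeping CPUs) come down to CPU-affinity calls. A minimal sketch of the userspace side using the sched_setaffinity() interface exposed by Python's standard library on Linux (the helper name and the CPU numbers are illustrative; pinning kernel threads additionally requires root, and some per-CPU kthreads cannot be moved at all):

```python
import os

def confine_to_cpus(pid, cpus):
    """Bind a task to the given set of CPUs via sched_setaffinity(),
    returning its previous affinity mask so the change can be undone.
    pid 0 means the calling process."""
    previous = os.sched_getaffinity(pid)
    os.sched_setaffinity(pid, cpus)
    return previous

# Pin the current process to CPU 0 (e.g. a "housekeeping" CPU),
# keeping it off CPUs you want de-jittered, then restore:
old = confine_to_cpus(0, {0})
os.sched_setaffinity(0, old)
```

The same effect is available from the shell with `taskset -cp <cpus> <pid>`, as noted in the References section of this document.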
{"questions":"linux Reporting issues See the bottom of this file for additional redistribution information SPDX License Identifier GPL 2 0 OR CC BY 4 0 The short guide aka TL DR","answers":".. SPDX-License-Identifier: (GPL-2.0+ OR CC-BY-4.0)\n.. See the bottom of this file for additional redistribution information.\n\nReporting issues\n++++++++++++++++\n\n\nThe short guide (aka TL;DR)\n===========================\n\nAre you facing a regression with vanilla kernels from the same stable or\nlongterm series? One still supported? Then search the `LKML\n<https:\/\/lore.kernel.org\/lkml\/>`_ and the `Linux stable mailing list\n<https:\/\/lore.kernel.org\/stable\/>`_ archives for matching reports to join. If\nyou don't find any, install `the latest release from that series\n<https:\/\/kernel.org\/>`_. If it still shows the issue, report it to the stable\nmailing list (stable@vger.kernel.org) and CC the regressions list\n(regressions@lists.linux.dev); ideally also CC the maintainer and the mailing\nlist for the subsystem in question.\n\nIn all other cases try your best guess which kernel part might be causing the\nissue. Check the :ref:`MAINTAINERS <maintainers>` file for how its developers\nexpect to be told about problems, which most of the time will be by email with a\nmailing list in CC. Check the destination's archives for matching reports;\nsearch the `LKML <https:\/\/lore.kernel.org\/lkml\/>`_ and the web, too. If you\ndon't find any to join, install `the latest mainline kernel\n<https:\/\/kernel.org\/>`_. If the issue is present there, send a report.\n\nThe issue was fixed there, but you would like to see it resolved in a still\nsupported stable or longterm series as well? 
Then install its latest release.\nIf it shows the problem, search for the change that fixed it in mainline and\ncheck if backporting is in the works or was discarded; if it's neither, ask\nthose who handled the change for it.\n\n**General remarks**: When installing and testing a kernel as outlined above,\nensure it's vanilla (IOW: not patched and not using add-on modules). Also make\nsure it's built and running in a healthy environment and not already tainted\nbefore the issue occurs.\n\nIf you are facing multiple issues with the Linux kernel at once, report each\nseparately. While writing your report, include all information relevant to the\nissue, like the kernel and the distro used. In case of a regression, CC the\nregressions mailing list (regressions@lists.linux.dev) to your report. Also try\nto pin-point the culprit with a bisection; if you succeed, include its\ncommit-id and CC everyone in the sign-off-by chain.\n\nOnce the report is out, answer any questions that come up and help where you\ncan. That includes keeping the ball rolling by occasionally retesting with newer\nreleases and sending a status update afterwards.\n\nStep-by-step guide how to report issues to the kernel maintainers\n=================================================================\n\nThe above TL;DR outlines roughly how to report issues to the Linux kernel\ndevelopers. It might be all that's needed for people already familiar with\nreporting issues to Free\/Libre & Open Source Software (FLOSS) projects. For\neveryone else there is this section. It is more detailed and uses a\nstep-by-step approach. It still tries to be brief for readability and leaves\nout a lot of details; those are described below the step-by-step guide in a\nreference section, which explains each of the steps in more detail.\n\nNote: this section covers a few more aspects than the TL;DR and does things in\na slightly different order. 
That's in your interest, to make sure you notice\nearly if an issue that looks like a Linux kernel problem is actually caused by\nsomething else. These steps thus help to ensure the time you invest in this\nprocess won't feel wasted in the end:\n\n * Are you facing an issue with a Linux kernel a hardware or software vendor\n   provided? Then in almost all cases you are better off to stop reading this\n   document and reporting the issue to your vendor instead, unless you are\n   willing to install the latest Linux version yourself. Be aware the latter\n   will often be needed anyway to hunt down and fix issues.\n\n * Perform a rough search for existing reports with your favorite internet\n   search engine; additionally, check the archives of the `Linux Kernel Mailing\n   List (LKML) <https:\/\/lore.kernel.org\/lkml\/>`_. If you find matching reports,\n   join the discussion instead of sending a new one.\n\n * See if the issue you are dealing with qualifies as regression, security\n   issue, or a really severe problem: those are 'issues of high priority' that\n   need special handling in some steps that are about to follow.\n\n * Make sure it's not the kernel's surroundings that are causing the issue\n   you face.\n\n * Create a fresh backup and put system repair and restore tools at hand.\n\n * Ensure your system does not enhance its kernels by building additional\n   kernel modules on-the-fly, which solutions like DKMS might be doing locally\n   without your knowledge.\n\n * Check if your kernel was 'tainted' when the issue occurred, as the event\n   that made the kernel set this flag might be causing the issue you face.\n\n * Write down coarsely how to reproduce the issue. If you deal with multiple\n   issues at once, create separate notes for each of them and make sure they\n   work independently on a freshly booted system. 
That's needed, as each issue\n   needs to get reported to the kernel developers separately, unless they are\n   strongly entangled.\n\n * If you are facing a regression within a stable or longterm version line\n   (say something broke when updating from 5.10.4 to 5.10.5), scroll down to\n   'Reporting regressions within a stable and longterm kernel line'.\n\n * Locate the driver or kernel subsystem that seems to be causing the issue.\n   Find out how and where its developers expect reports. Note: most of the\n   time this won't be bugzilla.kernel.org, as issues typically need to be sent\n   by mail to a maintainer and a public mailing list.\n\n * Search the archives of the bug tracker or mailing list in question\n   thoroughly for reports that might match your issue. If you find anything,\n   join the discussion instead of sending a new report.\n\nAfter these preparations you'll now enter the main part:\n\n * Unless you are already running the latest 'mainline' Linux kernel, better\n   go and install it for the reporting process. Testing and reporting with\n   the latest 'stable' Linux can be an acceptable alternative in some\n   situations; during the merge window that actually might be even the best\n   approach, but in that development phase it can be an even better idea to\n   suspend your efforts for a few days anyway. Whatever version you choose,\n   ideally use a 'vanilla' build. Ignoring this advice will dramatically\n   increase the risk that your report will be rejected or ignored.\n\n * Ensure the kernel you just installed does not 'taint' itself when\n   running.\n\n * Reproduce the issue with the kernel you just installed. If it doesn't show\n   up there, scroll down to the instructions for issues only happening with\n   stable and longterm kernels.\n\n * Optimize your notes: try to find and write the most straightforward way to\n   reproduce your issue.
Make sure the end result has all the important\n   details, and at the same time is easy to read and understand for others\n   that hear about it for the first time. And if you learned something in this\n   process, consider searching again for existing reports about the issue.\n\n * If your failure involves a 'panic', 'Oops', 'warning', or 'BUG', consider\n   decoding the kernel log to find the line of code that triggered the error.\n\n * If your problem is a regression, try to narrow down when the issue was\n   introduced as much as possible.\n\n * Start to compile the report by writing a detailed description about the\n   issue. Always mention a few things: the latest kernel version you installed\n   for reproducing, the Linux Distribution used, and your notes on how to\n   reproduce the issue. Ideally, make the kernel's build configuration\n   (.config) and the output from ``dmesg`` available somewhere on the net and\n   link to it. Include or upload all other information that might be relevant,\n   like the output\/screenshot of an Oops or the output from ``lspci``. Once\n   you wrote this main part, insert a normal length paragraph on top of it\n   outlining the issue and the impact quickly. On top of this add one sentence\n   that briefly describes the problem and gets people to read on. Now give the\n   thing a descriptive title or subject that yet again is shorter. Then you're\n   ready to send or file the report like the MAINTAINERS file told you, unless\n   you are dealing with one of those 'issues of high priority': they need\n   special care which is explained in 'Special handling for high priority\n   issues' below.\n\n * Wait for reactions and keep the thing rolling until you can accept the\n   outcome in one way or the other. Thus react publicly and in a timely manner\n   to any inquiries. Test proposed fixes. Do proactive testing: retest with at\n   least every first release candidate (RC) of a new mainline version and\n   report your results. 
Send friendly reminders if things stall. And try to\n   help yourself, if you don't get any help or if it's unsatisfying.\n\n\nReporting regressions within a stable and longterm kernel line\n--------------------------------------------------------------\n\nThis subsection is for you, if you followed the above process and got sent here\nat the point about regressions within a stable or longterm kernel version line.\nYou face one of those if something breaks when updating from 5.10.4 to 5.10.5 (a\nswitch from 5.9.15 to 5.10.5 does not qualify). The developers want to fix such\nregressions as quickly as possible, hence there is a streamlined process to\nreport them:\n\n * Check if the kernel developers still maintain the Linux kernel version\n   line you care about: go to the `front page of kernel.org\n   <https:\/\/kernel.org\/>`_ and make sure it mentions\n   the latest release of the particular version line without an '[EOL]' tag.\n\n * Check the archives of the `Linux stable mailing list\n   <https:\/\/lore.kernel.org\/stable\/>`_ for existing reports.\n\n * Install the latest release from the particular version line as a vanilla\n   kernel. Ensure this kernel is not tainted and still shows the problem, as\n   the issue might have already been fixed there. If you first noticed the\n   problem with a vendor kernel, check that a vanilla build of the last version\n   known to work performs fine as well.\n\n * Send a short problem report to the Linux stable mailing list\n   (stable@vger.kernel.org) and CC the Linux regressions mailing list\n   (regressions@lists.linux.dev); if you suspect the cause lies in a particular\n   subsystem, CC its maintainer and its mailing list. Roughly describe the\n   issue and ideally explain how to reproduce it. Mention the first version\n   that shows the problem and the last version that's working fine.
Then\n   wait for further instructions.\n\nThe reference section below explains each of these steps in more detail.\n\n\nReporting issues only occurring in older kernel version lines\n-------------------------------------------------------------\n\nThis subsection is for you, if you tried the latest mainline kernel as outlined\nabove, but failed to reproduce your issue there; at the same time you want to\nsee the issue fixed in a still supported stable or longterm series or vendor\nkernels regularly rebased on those. If that's the case, follow these steps:\n\n * Prepare yourself for the possibility that going through the next few steps\n   might not get the issue solved in older releases: the fix might be too big\n   or risky to get backported there.\n\n * Perform the first three steps in the section \"Reporting regressions\n   within a stable and longterm kernel line\" above.\n\n * Search the Linux kernel version control system for the change that fixed\n   the issue in mainline, as its commit message might tell you if the fix is\n   scheduled for backporting already. If you don't find anything that way,\n   search the appropriate mailing lists for posts that discuss such an issue\n   or peer-review possible fixes; then check those discussions to see if the\n   fix was deemed unsuitable for backporting. If backporting was not considered\n   at all, join the newest discussion, asking if it's in the cards.\n\n * One of the former steps should lead to a solution.
If that doesn't work\n   out, ask the maintainers for the subsystem that seems to be causing the\n   issue for advice; CC the mailing list for the particular subsystem as well\n   as the stable mailing list.\n\nThe reference section below explains each of these steps in more detail.\n\n\nReference section: Reporting issues to the kernel maintainers\n=============================================================\n\nThe detailed guides above outline all the major steps in brief fashion, which\nshould be enough for most people. But sometimes there are situations where even\nexperienced users might wonder how to actually do one of those steps. That's\nwhat this section is for, as it will provide a lot more details on each of the\nabove steps. Consider this as reference documentation: it's possible to read it\nfrom top to bottom. But it's mainly meant to be skimmed over and to serve as a\nplace to look up details on how to actually perform those steps.\n\nA few words of general advice before digging into the details:\n\n * The Linux kernel developers are well aware this process is complicated and\n   demands more than other FLOSS projects. We'd love to make it simpler. But\n   that would require work in various places as well as some infrastructure,\n   which would need constant maintenance; nobody has stepped up to do that\n   work, so that's just how things are for now.\n\n * A warranty or support contract with some vendor doesn't entitle you to\n   request fixes from developers in the upstream Linux kernel community: such\n   contracts are completely outside the scope of the Linux kernel, its\n   development community, and this document. That's why you can't demand\n   anything such a contract guarantees in this context, not even if the\n   developer handling the issue works for the vendor in question. If you want\n   to claim your rights, use the vendor's support channel instead.
When doing\n   so, you might want to mention you'd like to see the issue fixed in the\n   upstream Linux kernel; motivate them by saying it's the only way to ensure\n   the fix in the end will get incorporated in all Linux distributions.\n\n * If you never reported an issue to a FLOSS project before, you should\n   consider reading `How to Report Bugs Effectively\n   <https:\/\/www.chiark.greenend.org.uk\/~sgtatham\/bugs.html>`_, `How To Ask\n   Questions The Smart Way\n   <http:\/\/www.catb.org\/esr\/faqs\/smart-questions.html>`_, and `How to ask good\n   questions <https:\/\/jvns.ca\/blog\/good-questions\/>`_.\n\nWith that off the table, find below the details on how to properly report\nissues to the Linux kernel developers.\n\n\nMake sure you're using the upstream Linux kernel\n------------------------------------------------\n\n   *Are you facing an issue with a Linux kernel a hardware or software vendor\n   provided? Then in almost all cases you are better off to stop reading this\n   document and reporting the issue to your vendor instead, unless you are\n   willing to install the latest Linux version yourself. Be aware the latter\n   will often be needed anyway to hunt down and fix issues.*\n\nLike most programmers, Linux kernel developers don't like to spend time dealing\nwith reports for issues that don't even happen with their current code. It's\njust a waste of everybody's time, especially yours. Unfortunately such\nsituations easily happen when it comes to the kernel and often lead to\nfrustration on both sides.
That's because almost all Linux-based kernels pre-installed on devices\n(Computers, Laptops, Smartphones, Routers, \u2026) and most shipped by Linux\ndistributors are quite distant from the official Linux kernel as distributed by\nkernel.org: the kernels from these vendors are often ancient from the point of\nview of Linux development or heavily modified, often both.\n\nMost of these vendor kernels are quite unsuitable for reporting issues to the\nLinux kernel developers: an issue you face with one of them might have been\nfixed by the Linux kernel developers months or years ago already; additionally,\nthe modifications and enhancements by the vendor might be causing the issue you\nface, even if they look small or totally unrelated. That's why you should report\nissues with these kernels to the vendor. Its developers should look into the\nreport and, in case it turns out to be an upstream issue, fix it directly\nupstream or forward the report there. In practice that often does not work out\nor might not be what you want. You thus might want to consider circumventing the\nvendor by installing the very latest Linux kernel core yourself. If that's an\noption for you, move ahead in this process, as a later step in this guide will\nexplain how to do that once it rules out other potential causes for your issue.\n\nNote, the previous paragraph starts with the word 'most', as sometimes\ndevelopers in fact are willing to handle reports about issues occurring with\nvendor kernels. Whether they do in the end highly depends on the developers and\nthe issue in question. Your chances are quite good if the distributor applied\nonly small modifications to a kernel based on a recent Linux version; that for\nexample often holds true for the mainline kernels shipped by Debian GNU\/Linux\nSid or Fedora Rawhide.
Some developers will also accept reports about issues\nwith kernels from distributions shipping the latest stable kernel, as long as\nit's only slightly modified; that for example is often the case for Arch Linux,\nregular Fedora releases, and openSUSE Tumbleweed. But keep in mind that you\nideally want to use a mainline Linux kernel and avoid a stable kernel for this\nprocess, as outlined in the section 'Install a fresh kernel for testing' in more\ndetail.\n\nObviously you are free to ignore all this advice and report problems with an old\nor heavily modified vendor kernel to the upstream Linux developers. But note,\nthose often get rejected or ignored, so consider yourself warned. But it's still\nbetter than not reporting the issue at all: sometimes such reports directly or\nindirectly will help to get the issue fixed over time.\n\n\nSearch for existing reports, first run\n--------------------------------------\n\n   *Perform a rough search for existing reports with your favorite internet\n   search engine; additionally, check the archives of the Linux Kernel Mailing\n   List (LKML). If you find matching reports, join the discussion instead of\n   sending a new one.*\n\nReporting an issue that someone else already brought forward is often a waste of\ntime for everyone involved, especially you as the reporter. So it's in your own\ninterest to thoroughly check if somebody reported the issue already. At this\nstep of the process it's okay to just perform a rough search: a later step will\ntell you to perform a more detailed search once you know where your issue needs\nto be reported to. Nevertheless, do not hurry with this step of the reporting\nprocess, as it can save you time and trouble.\n\nSimply search the internet with your favorite search engine first.
Afterwards,\nsearch the `Linux Kernel Mailing List (LKML) archives\n<https:\/\/lore.kernel.org\/lkml\/>`_.\n\nIf you get flooded with results, consider telling your search engine to limit\nthe search timeframe to the past month or year. And wherever you search, make\nsure to use good search terms; vary them a few times, too. While doing so try to\nlook at the issue from the perspective of someone else: that will help you to\ncome up with other words to use as search terms. Also make sure not to use too\nmany search terms at once. Remember to search with and without information like\nthe name of the kernel driver or the name of the affected hardware component.\nBut its exact brand name (say 'ASUS Red Devil Radeon RX 5700 XT Gaming OC')\noften is not very helpful, as it is too specific. Instead try search terms like\nthe model line (Radeon 5700 or Radeon 5000) and the code name of the main chip\n('Navi' or 'Navi10') with and without its manufacturer ('AMD').\n\nIn case you find an existing report about your issue, join the discussion, as\nyou might be able to provide valuable additional information. That can be\nimportant even when a fix is prepared or in its final stages already, as\ndevelopers might look for people that can provide additional information or\ntest a proposed fix. Jump to the section 'Duties after the report went out' for\ndetails on how to get properly involved.\n\nNote, searching `bugzilla.kernel.org <https:\/\/bugzilla.kernel.org\/>`_ might also\nbe a good idea, as that might provide valuable insights or turn up matching\nreports. If you find the latter, just keep in mind: most subsystems expect\nreports in different places, as described below in the section \"Check where you\nneed to report your issue\". The developers that should take care of the issue\nthus might not even be aware of the bugzilla ticket.
Hence, check in the ticket if\nthe issue already got reported as outlined in this document; if not, consider\ndoing so.\n\n\nIssue of high priority?\n-----------------------\n\n    *See if the issue you are dealing with qualifies as regression, security\n    issue, or a really severe problem: those are 'issues of high priority' that\n    need special handling in some steps that are about to follow.*\n\nLinus Torvalds and the leading Linux kernel developers want to see some issues\nfixed as soon as possible, hence there are 'issues of high priority' that get\nhandled slightly differently in the reporting process. Three types of cases\nqualify: regressions, security issues, and really severe problems.\n\nYou deal with a regression if some application or practical use case running\nfine with one Linux kernel works worse or not at all with a newer version\ncompiled using a similar configuration. The document\nDocumentation\/admin-guide\/reporting-regressions.rst explains this in more\ndetail. It also provides a good deal of other information about regressions you\nmight want to be aware of; it for example explains how to add your issue to the\nlist of tracked regressions, to ensure it won't fall through the cracks.\n\nWhat qualifies as a security issue is left to your judgment. Consider reading\nDocumentation\/process\/security-bugs.rst before proceeding, as it\nprovides additional details on how to best handle security issues.\n\nAn issue is a 'really severe problem' when something totally unacceptably bad\nhappens. That's for example the case when a Linux kernel corrupts the data it's\nhandling or damages hardware it's running on. You're also dealing with a severe\nissue when the kernel suddenly stops working with an error message ('kernel\npanic') or without any farewell note at all.
Note: do not confuse a 'panic' (a\nfatal error where the kernel stops itself) with an 'Oops' (a recoverable error),\nas the kernel remains running after the latter.\n\n\nEnsure a healthy environment\n----------------------------\n\n    *Make sure it's not the kernel's surroundings that are causing the issue\n    you face.*\n\nProblems that look a lot like a kernel issue are sometimes caused by the build\nor runtime environment. It's hard to rule out that problem completely, but you\nshould minimize it:\n\n * Use proven tools when building your kernel, as bugs in the compiler or the\n   binutils can cause the resulting kernel to misbehave.\n\n * Ensure your computer components run within their design specifications;\n   that's especially important for the main processor, the main memory, and the\n   motherboard. Therefore, stop undervolting or overclocking when facing a\n   potential kernel issue.\n\n * Try to make sure it's not faulty hardware that is causing your issue. Bad\n   main memory for example can result in a multitude of issues that will\n   manifest themselves in problems looking like kernel issues.\n\n * If you're dealing with a filesystem issue, you might want to check the file\n   system in question with ``fsck``, as it might be damaged in a way that leads\n   to unexpected kernel behavior.\n\n * When dealing with a regression, make sure it's not something else that\n   changed in parallel to updating the kernel. The problem for example might be\n   caused by other software that was updated at the same time. It can also\n   happen that a hardware component coincidentally just broke when you rebooted\n   into a new kernel for the first time.
Updating the system's BIOS or changing\n   something in the BIOS setup can also lead to problems that look a lot\n   like a kernel regression.\n\n\nPrepare for emergencies\n-----------------------\n\n    *Create a fresh backup and put system repair and restore tools at hand.*\n\nReminder, you are dealing with computers, which sometimes do unexpected things,\nespecially if you fiddle with crucial parts like the kernel of their operating\nsystem. That's what you are about to do in this process. Thus, make sure to\ncreate a fresh backup; also ensure you have all tools at hand to repair or\nreinstall the operating system as well as everything you need to restore the\nbackup.\n\n\nMake sure your kernel doesn't get enhanced\n------------------------------------------\n\n    *Ensure your system does not enhance its kernels by building additional\n    kernel modules on-the-fly, which solutions like DKMS might be doing locally\n    without your knowledge.*\n\nThe risk your issue report gets ignored or rejected dramatically increases if\nyour kernel gets enhanced in any way. That's why you should remove or disable\nmechanisms like akmods and DKMS: those build add-on kernel modules\nautomatically, for example when you install a new Linux kernel or boot it for\nthe first time. Also remove any modules they might have installed. Then reboot\nbefore proceeding.\n\nNote, you might not be aware that your system is using one of these solutions:\nthey often get set up silently when you install Nvidia's proprietary graphics\ndriver, VirtualBox, or other software that requires some support from a\nmodule not part of the Linux kernel.
That's why you might need to uninstall the\npackages with such software to get rid of any 3rd party kernel modules.\n\n\nCheck 'taint' flag\n------------------\n\n    *Check if your kernel was 'tainted' when the issue occurred, as the event\n    that made the kernel set this flag might be causing the issue you face.*\n\nThe kernel marks itself with a 'taint' flag when something happens that might\nlead to follow-up errors that look totally unrelated. The issue you face might\nbe such an error if your kernel is tainted. That's why it's in your interest to\nrule this out early before investing more time into this process. This is the\nonly reason why this step is here, as this process later will tell you to\ninstall the latest mainline kernel; you will need to check the taint flag again\nthen, as that's when it matters because it's the kernel the report will focus\non.\n\nOn a running system it is easy to check if the kernel tainted itself: if ``cat\n\/proc\/sys\/kernel\/tainted`` returns '0' then the kernel is not tainted and\neverything is fine. Checking that file is impossible in some situations; that's\nwhy the kernel also mentions the taint status when it reports an internal\nproblem (a 'kernel bug'), a recoverable error (a 'kernel Oops') or a\nnon-recoverable error before halting operation (a 'kernel panic'). Look near\nthe top of the error messages printed when one of these occurs and search for a\nline starting with 'CPU:'. It should end with 'Not tainted' if the kernel was\nnot tainted when it noticed the problem; it was tainted if you see 'Tainted:'\nfollowed by a few spaces and some letters.\n\nIf your kernel is tainted, study Documentation\/admin-guide\/tainted-kernels.rst\nto find out why. Try to eliminate the reason. Often it's caused by one of these\nthree things:\n\n 1. A recoverable error (a 'kernel Oops') occurred and the kernel tainted\n    itself, as the kernel knows it might misbehave in strange ways after that\n    point.
In that case check your kernel or system log and look for a section\n    that starts with this::\n\n       Oops: 0000 [#1] SMP\n\n    That's the first Oops since boot-up, as the '#1' between the brackets shows.\n    Every Oops and any other problem that happens after that point might be a\n    follow-up problem to that first Oops, even if both look totally unrelated.\n    Rule this out by getting rid of the cause for the first Oops and reproducing\n    the issue afterwards. Sometimes simply restarting will be enough, sometimes\n    a change to the configuration followed by a reboot can eliminate the Oops.\n    But don't invest too much time into this at this point of the process, as\n    the cause for the Oops might already be fixed in the newer Linux kernel\n    version you are going to install later in this process.\n\n 2. Your system uses software that installs its own kernel modules, for\n    example Nvidia's proprietary graphics driver or VirtualBox. The kernel\n    taints itself when it loads such a module from external sources (even if\n    they are Open Source): they sometimes cause errors in unrelated kernel\n    areas and thus might be causing the issue you face. You therefore have to\n    prevent those modules from loading when you want to report an issue to the\n    Linux kernel developers. Most of the time the easiest way to do that is:\n    temporarily uninstall such software including any modules they might have\n    installed. Afterwards reboot.\n\n 3. The kernel also taints itself when it's loading a module that resides in\n    the staging tree of the Linux kernel source. That's a special area for\n    code (mostly drivers) that does not yet fulfill the normal Linux kernel\n    quality standards. When you report an issue with such a module it's\n    obviously okay if the kernel is tainted; just make sure the module in\n    question is the only reason for the taint.
If the issue happens in an\n    unrelated area reboot and temporarily block the module from being loaded\n    by specifying ``foo.blacklist=1`` as kernel parameter (replace 'foo' with\n    the name of the module in question).\n\n\nDocument how to reproduce issue\n-------------------------------\n\n    *Write down coarsely how to reproduce the issue. If you deal with multiple\n    issues at once, create separate notes for each of them and make sure they\n    work independently on a freshly booted system. That's needed, as each issue\n    needs to get reported to the kernel developers separately, unless they are\n    strongly entangled.*\n\nIf you deal with multiple issues at once, you'll have to report each of them\nseparately, as they might be handled by different developers. Describing\nvarious issues in one report also makes it quite difficult for others to tear\nit apart. Hence, only combine issues in one report if they are very strongly\nentangled.\n\nAdditionally, during the reporting process you will have to test if the issue\nhappens with other kernel versions. Therefore, it will make your work easier if\nyou know exactly how to reproduce an issue quickly on a freshly booted system.\n\nNote: it's often fruitless to report issues that only happened once, as they\nmight be caused by a bit flip due to cosmic radiation. That's why you should\ntry to rule that out by reproducing the issue before going further. 
Feel free\nto ignore this advice if you are experienced enough to tell a one-time error\ndue to faulty hardware apart from a kernel issue that rarely happens and thus\nis hard to reproduce.\n\n\nRegression in stable or longterm kernel?\n----------------------------------------\n\n    *If you are facing a regression within a stable or longterm version line\n    (say something broke when updating from 5.10.4 to 5.10.5), scroll down to\n    'Reporting regressions within a stable and longterm kernel line'.*\n\nRegressions within a stable or longterm kernel version line are something the\nLinux developers badly want to fix, as such issues are even more unwanted than\nregressions in the main development branch: they can quickly affect a lot of\npeople. The developers thus want to learn about such issues as quickly as\npossible, hence there is a streamlined process to report them. Note,\nregressions that show up when switching to a newer kernel version line (say\nsomething broke when switching from 5.9.15 to 5.10.5) do not qualify.\n\n\nCheck where you need to report your issue\n-----------------------------------------\n\n    *Locate the driver or kernel subsystem that seems to be causing the issue.\n    Find out how and where its developers expect reports. Note: most of the\n    time this won't be bugzilla.kernel.org, as issues typically need to be sent\n    by mail to a maintainer and a public mailing list.*\n\nIt's crucial to send your report to the right people, as the Linux kernel is a\nbig project and most of its developers are only familiar with a small subset of\nit.
Quite a few programmers for example only care for just one driver, for\nexample one for a WiFi chip; its developer likely will only have little or no\nknowledge about the internals of remote or unrelated \"subsystems\", like the TCP\nstack, the PCIe\/PCI subsystem, memory management or file systems.\n\nThe problem is: the Linux kernel lacks a central bug tracker where you can\nsimply file your issue and make it reach the developers that need to know about\nit. That's why you have to find the right place and way to report issues\nyourself. You can do that with the help of a script (see below), but it mainly\ntargets kernel developers and experts. For everybody else the MAINTAINERS file\nis the better place.\n\nHow to read the MAINTAINERS file\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nTo illustrate how to use the :ref:`MAINTAINERS <maintainers>` file, let's assume\nthe WiFi in your laptop suddenly misbehaves after updating the kernel. In that\ncase it's likely an issue in the WiFi driver. Obviously it could also be some\ncode it builds upon, but unless you suspect something like that stick to the\ndriver. If it's really something else, the driver's developers will get the\nright people involved.\n\nSadly, there is no universal and easy way to check which code is driving a\nparticular hardware component.\n\nIn case of a problem with the WiFi driver you for example might want to look at\nthe output of ``lspci -k``, as it lists devices on the PCI\/PCIe bus and the\nkernel modules driving them::\n\n       [user@something ~]$ lspci -k\n       [...]\n       3a:00.0 Network controller: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter (rev 32)\n         Subsystem: Bigfoot Networks, Inc. Device 1535\n         Kernel driver in use: ath10k_pci\n         Kernel modules: ath10k_pci\n       [...]\n\nBut this approach won't work if your WiFi chip is connected over USB or some\nother internal bus.
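For USB-attached devices, ``lsusb -t`` (from the usbutils package) prints the USB device tree including the driver bound to each interface. As a hedged sketch, the helper below pulls the driver names out of such output; the sample line and the ``ath9k_htc`` driver shown are purely hypothetical, and the exact output format varies between lsusb versions:

```shell
# `lsusb -t` prints lines like the hypothetical sample below; this helper
# extracts the value of every "Driver=..." field it finds on stdin.
drivers_from_lsusb_tree() {
    sed -n 's/.*Driver=\([^,]*\),.*/\1/p' | sort -u
}

# Hypothetical sample line in the format printed by `lsusb -t`:
sample='|__ Port 1: Dev 2, If 0, Class=Vendor Specific Class, Driver=ath9k_htc, 480M'
echo "$sample" | drivers_from_lsusb_tree   # -> ath9k_htc
```

On a real system you would pipe the live output instead: ``lsusb -t | drivers_from_lsusb_tree``.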
In those cases you might want to check your WiFi manager or\nthe output of ``ip link``. Look for the name of the problematic network\ninterface, which might be something like 'wlp58s0'. This name can be used like\nthis to find the module driving it::\n\n       [user@something ~]$ realpath --relative-to=\/sys\/module\/ \/sys\/class\/net\/wlp58s0\/device\/driver\/module\n       ath10k_pci\n\nIn case tricks like these don't bring you any further, try to search the\ninternet on how to narrow down the driver or subsystem in question. And if you\nare unsure which it is: just try your best guess, somebody will help you if you\nguessed poorly.\n\nOnce you know the driver or subsystem, you want to search for it in the\nMAINTAINERS file. In the case of 'ath10k_pci' you won't find anything, as the\nname is too specific. Sometimes you will need to search on the net for help;\nbut before doing so, try a somewhat shortened or modified name when searching\nthe MAINTAINERS file, as then you might find something like this::\n\n       QUALCOMM ATHEROS ATH10K WIRELESS DRIVER\n       Mail:          A. Some Human <shuman@example.com>\n       Mailing list:  ath10k@lists.infradead.org\n       Status:        Supported\n       Web-page:      https:\/\/wireless.wiki.kernel.org\/en\/users\/Drivers\/ath10k\n       SCM:           git git:\/\/git.kernel.org\/pub\/scm\/linux\/kernel\/git\/kvalo\/ath.git\n       Files:         drivers\/net\/wireless\/ath\/ath10k\/\n\nNote: the line descriptions will be abbreviated if you read the plain\nMAINTAINERS file found in the root of the Linux source tree. 'Mail:' for\nexample will be 'M:', 'Mailing list:' will be 'L:', and 'Status:' will be 'S:'.\nA section near the top of the file explains these and other abbreviations.\n\nFirst look at the line 'Status'. Ideally it should be 'Supported' or\n'Maintained'. If it states 'Obsolete' then you are using some outdated approach\nthat was replaced by a newer solution you need to switch to.
Sometimes the code
only has someone who provides 'Odd Fixes' when feeling motivated. And with
'Orphan' you are totally out of luck, as nobody takes care of the code anymore.
That only leaves these options: learn to live with the issue, fix it
yourself, or find a programmer somewhere willing to fix it.

After checking the status, look for a line starting with 'Bugs:': it will tell
you where to find a subsystem-specific bug tracker to file your issue. The
example above does not have such a line. That is the case for most sections, as
Linux kernel development is completely driven by mail. Very few subsystems use
a bug tracker, and only some of those rely on bugzilla.kernel.org.

In this and many other cases you thus have to look for lines starting with
'Mail:' instead. Those mention the name and the email addresses of the
maintainers of the particular code. Also look for a line starting with 'Mailing
list:', which tells you the public mailing list where the code is developed.
Your report later needs to go by mail to those addresses. Additionally, for all
issue reports sent by email, make sure to add the Linux Kernel Mailing List
(LKML) <linux-kernel@vger.kernel.org> to CC. Don't omit either of the mailing
lists when sending your issue report by mail later! Maintainers are busy people
and might leave some work for other developers on the subsystem-specific list;
and LKML is important to have one place where all issue reports can be found.


Finding the maintainers with the help of a script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For people that have the Linux sources at hand there is a second option to find
the proper place to report: the script 'scripts/get_maintainer.pl' which tries
to find all people to contact. It queries the MAINTAINERS file and needs to be
called with a path to the source code in question. 
For drivers compiled as
a module it often can be found with a command like this::

       $ modinfo ath10k_pci | grep filename | sed 's!/lib/modules/.*/kernel/!!; s!filename:!!; s!\.ko\(\|\.xz\)!!'
       drivers/net/wireless/ath/ath10k/ath10k_pci.ko

Pass parts of this to the script::

       $ ./scripts/get_maintainer.pl -f drivers/net/wireless/ath/ath10k*
       Some Human <shuman@example.com> (supporter:QUALCOMM ATHEROS ATH10K WIRELESS DRIVER)
       Another S. Human <asomehuman@example.com> (maintainer:NETWORKING DRIVERS)
       ath10k@lists.infradead.org (open list:QUALCOMM ATHEROS ATH10K WIRELESS DRIVER)
       linux-wireless@vger.kernel.org (open list:NETWORKING DRIVERS (WIRELESS))
       netdev@vger.kernel.org (open list:NETWORKING DRIVERS)
       linux-kernel@vger.kernel.org (open list)

Don't send your report to all of them. Send it to the maintainers, which the
script calls "supporter:"; additionally CC the most specific mailing list for
the code as well as the Linux Kernel Mailing List (LKML). In this case you thus
would need to send the report to 'Some Human <shuman@example.com>' with
'ath10k@lists.infradead.org' and 'linux-kernel@vger.kernel.org' in CC.

Note: in case you cloned the Linux sources with git you might want to call
``get_maintainer.pl`` a second time with ``--git``. The script then will look
at the commit history to find which people recently worked on the code in
question, as they might be able to help. But use these results with care, as it
can easily send you in a wrong direction. 
That for example happens quickly in
areas rarely changed (like old or unmaintained drivers): sometimes such code is
modified during tree-wide cleanups by developers that do not care about the
particular driver at all.


Search for existing reports, second run
---------------------------------------

    *Search the archives of the bug tracker or mailing list in question
    thoroughly for reports that might match your issue. If you find anything,
    join the discussion instead of sending a new report.*

As mentioned earlier already: reporting an issue that someone else already
brought forward is often a waste of time for everyone involved, especially you
as the reporter. That's why you should search for existing reports again, now
that you know where they need to be reported to. If it's a mailing list, you
will often find its archives on `lore.kernel.org <https://lore.kernel.org/>`_.

But some lists are hosted in different places. That for example is the case for
the ath10k WiFi driver used as an example in the previous step. But you'll often
find the archives for these lists easily on the net. Searching for 'archive
ath10k@lists.infradead.org' for example will lead you to the `Info page for the
ath10k mailing list <https://lists.infradead.org/mailman/listinfo/ath10k>`_,
which at the top links to its
`list archives <https://lists.infradead.org/pipermail/ath10k/>`_. Sadly this and
quite a few other lists lack a way to search the archives. In those cases use a
regular internet search engine and add something like
'site:lists.infradead.org/pipermail/ath10k/' to your search terms, which limits
the results to the archives at that URL.

It's also wise to check the internet, LKML and maybe bugzilla.kernel.org again
at this point. 
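
Archives hosted on lore.kernel.org can additionally be queried directly, as the
search terms can be encoded in the URL. A small sketch; the search terms below
are purely illustrative:

```shell
# Build a lore.kernel.org search URL; spaces in the query become '+'.
# The search terms are only an example -- use words from your issue.
terms="ath10k firmware crash"
url="https://lore.kernel.org/all/?q=$(printf '%s' "$terms" | tr ' ' '+')"
echo "$url"
```

Open the printed URL in a browser (or fetch it with curl) to skim the matching
mails.
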
If your report needs to be filed in a bug tracker, you may want
to check the mailing list archives for the subsystem as well, as someone might
have reported it only there.

For details on how to search and what to do if you find matching reports see
"Search for existing reports, first run" above.

Do not hurry with this step of the reporting process: spending 30 to 60 minutes
or even more time can save you and others quite a lot of time and trouble.


Install a fresh kernel for testing
----------------------------------

    *Unless you are already running the latest 'mainline' Linux kernel, better
    go and install it for the reporting process. Testing and reporting with
    the latest 'stable' Linux can be an acceptable alternative in some
    situations; during the merge window that actually might be even the best
    approach, but in that development phase it can be an even better idea to
    suspend your efforts for a few days anyway. Whatever version you choose,
    ideally use a 'vanilla' build. Ignoring this advice will dramatically
    increase the risk your report will be rejected or ignored.*

As mentioned in the detailed explanation for the first step already: Like most
programmers, Linux kernel developers don't like to spend time dealing with
reports for issues that don't even happen with the current code. It's just a
waste of everybody's time, especially yours. That's why it's in everybody's
interest that you confirm the issue still exists with the latest upstream code
before reporting it. You are free to ignore this advice, but as outlined
earlier: doing so dramatically increases the risk that your issue report might
get rejected or simply ignored.

In the scope of the kernel "latest upstream" normally means:

 * Install a mainline kernel; the latest stable kernel can be an option, but
   most of the time it's better avoided. 
Longterm kernels (sometimes called 'LTS
   kernels') are unsuitable at this point of the process. The next subsection
   explains all of this in more detail.

 * The subsection after the next one describes ways to obtain and install such
   a kernel. It also outlines that using a pre-compiled kernel is fine, but a
   vanilla one is better, which means: it was built using Linux sources taken
   straight `from kernel.org <https://kernel.org/>`_ and not modified or
   enhanced in any way.

Choosing the right version for testing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Head over to `kernel.org <https://kernel.org/>`_ to find out which version you
want to use for testing. Ignore the big yellow button that says 'Latest release'
and look a little lower at the table. At its top you'll see a line starting with
mainline, which most of the time will point to a pre-release with a version
number like '5.8-rc2'. If that's the case, you'll want to use this mainline
kernel for testing, as that's where all fixes have to be applied first. Do not
let that 'rc' scare you, these 'development kernels' are pretty reliable -- and
you made a backup, as you were instructed above, didn't you?

In about two out of every nine to ten weeks, mainline might point you to a
proper release with a version number like '5.7'. If that happens, consider
suspending the reporting process until the first pre-release of the next
version (5.8-rc1) shows up on kernel.org. That's because the Linux development
cycle then is in its two-week long 'merge window'. The bulk of the changes and
all intrusive ones get merged for the next release during this time. It's a bit
more risky to use mainline during this period. Kernel developers are also often
quite busy then and might have no spare time to deal with issue reports. 
It's
also quite possible that one of the many changes applied during the merge
window fixes the issue you face; that's why you soon would have to retest with
a newer kernel version anyway, as outlined below in the section 'Duties after
the report went out'.

That's why it might make sense to wait till the merge window is over. But don't
do that if you're dealing with something that shouldn't wait. In that case
consider obtaining the latest mainline kernel via git (see below) or use the
latest stable version offered on kernel.org. Using that is also acceptable in
case mainline for some reason does currently not work for you. And in general:
using it for reproducing the issue is also better than not reporting the issue
at all.

Better avoid using the latest stable kernel outside merge windows, as all fixes
must be applied to mainline first. That's why checking the latest mainline
kernel is so important: any issue you want to see fixed in older version lines
needs to be fixed in mainline first before it can get backported, which can
take a few days or weeks. Another reason: the fix you hope for might be too
hard or risky for backporting; reporting the issue again hence is unlikely to
change anything.

These aspects are also why longterm kernels (sometimes called "LTS kernels")
are unsuitable for this part of the reporting process: they are too distant from
the current code. Hence go and test mainline first and follow the process
further: if the issue doesn't occur with mainline it will guide you how to get
it fixed in older version lines, if that's in the cards for the fix in question.

How to obtain a fresh Linux kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Using a pre-compiled kernel**: This is often the quickest, easiest, and safest
way for testing -- especially if you are unfamiliar with the Linux kernel. The
problem: most of those shipped by distributors or add-on repositories are built
from modified Linux sources. 
They are thus not vanilla and therefore often\nunsuitable for testing and issue reporting: the changes might cause the issue\nyou face or influence it somehow.\n\nBut you are in luck if you are using a popular Linux distribution: for quite a\nfew of them you'll find repositories on the net that contain packages with the\nlatest mainline or stable Linux built as vanilla kernel. It's totally okay to\nuse these, just make sure from the repository's description they are vanilla or\nat least close to it. Additionally ensure the packages contain the latest\nversions as offered on kernel.org. The packages are likely unsuitable if they\nare older than a week, as new mainline and stable kernels typically get released\nat least once a week.\n\nPlease note that you might need to build your own kernel manually later: that's\nsometimes needed for debugging or testing fixes, as described later in this\ndocument. Also be aware that pre-compiled kernels might lack debug symbols that\nare needed to decode messages the kernel prints when a panic, Oops, warning, or\nBUG occurs; if you plan to decode those, you might be better off compiling a\nkernel yourself (see the end of this subsection and the section titled 'Decode\nfailure messages' for details).\n\n**Using git**: Developers and experienced Linux users familiar with git are\noften best served by obtaining the latest Linux kernel sources straight from the\n`official development repository on kernel.org\n<https:\/\/git.kernel.org\/pub\/scm\/linux\/kernel\/git\/torvalds\/linux.git\/tree\/>`_.\nThose are likely a bit ahead of the latest mainline pre-release. Don't worry\nabout it: they are as reliable as a proper pre-release, unless the kernel's\ndevelopment cycle is currently in the middle of a merge window. 
But even then
they are quite reliable.

**Conventional**: People unfamiliar with git are often best served by
downloading the sources as a tarball from `kernel.org <https://kernel.org/>`_.

How to actually build a kernel is not described here, as many websites explain
the necessary steps already. If you are new to it, consider following one of
those how-tos that suggest using ``make localmodconfig``, as that tries to
pick up the configuration of your current kernel and then tries to adjust it
somewhat for your system. That does not make the resulting kernel any better,
but quicker to compile.

Note: If you are dealing with a panic, Oops, warning, or BUG from the kernel,
please try to enable CONFIG_KALLSYMS when configuring your kernel.
Additionally, enable CONFIG_DEBUG_KERNEL and CONFIG_DEBUG_INFO, too; the
latter is the relevant one of those two, but can only be reached if you enable
the former. Be aware CONFIG_DEBUG_INFO increases the storage space required to
build a kernel by quite a bit. But that's worth it, as these options will allow
you later to pinpoint the exact line of code that triggers your issue. The
section 'Decode failure messages' below explains this in more detail.

But keep in mind: Always keep a record of the issue encountered in case it is
hard to reproduce. Sending an undecoded report is better than not reporting
the issue at all.


Check 'taint' flag
------------------

    *Ensure the kernel you just installed does not 'taint' itself when
    running.*

As outlined above in more detail already: the kernel sets a 'taint' flag when
something happens that can lead to follow-up errors that look totally
unrelated. That's why you need to check whether the kernel you just installed
sets this flag. And if it does, you in almost all cases need to eliminate the
reason for it before reporting issues that occur with it. 
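
Checking the flag boils down to reading one file from procfs; a minimal sketch:

```shell
# '0' means the kernel did not taint itself; any other value is a bit
# field encoding one or more taint reasons.
tainted=$(cat /proc/sys/kernel/tainted)
if [ "$tainted" -eq 0 ]; then
    echo "kernel is not tainted"
else
    echo "kernel is tainted (flags: $tainted)"
fi
```
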
See
the section above for details on how to do that.


Reproduce issue with the fresh kernel
-------------------------------------

    *Reproduce the issue with the kernel you just installed. If it doesn't show
    up there, scroll down to the instructions for issues only happening with
    stable and longterm kernels.*

Check if the issue occurs with the fresh Linux kernel version you just
installed. If it was fixed there already, consider sticking with this version
line and abandoning your plan to report the issue. But keep in mind that other
users might still be plagued by it, as long as it's not fixed in the stable
and longterm versions from kernel.org (and thus the vendor kernels derived from
those). If you prefer to use one of those or just want to help their users,
head over to the section "Details about reporting issues only occurring in
older kernel version lines" below.


Optimize description to reproduce issue
---------------------------------------

    *Optimize your notes: try to find and write the most straightforward way to
    reproduce your issue. Make sure the end result has all the important
    details, and at the same time is easy to read and understand for others
    that hear about it for the first time. And if you learned something in this
    process, consider searching again for existing reports about the issue.*

An unnecessarily complex description will make it hard for others to understand
your report. Thus try to find a reproducer that's straightforward to describe
and thus easy to understand in written form. Include all important details, but
at the same time try to keep it as short as possible.

In this and the previous steps you likely have learned a thing or two about the
issue you face. 
Use this knowledge and search again for existing reports
you can join instead.


Decode failure messages
-----------------------

    *If your failure involves a 'panic', 'Oops', 'warning', or 'BUG', consider
    decoding the kernel log to find the line of code that triggered the error.*

When the kernel detects an internal problem, it will log some information about
the executed code. This makes it possible to pinpoint the exact line in the
source code that triggered the issue and shows how it was called. But that only
works if you enabled CONFIG_DEBUG_INFO and CONFIG_KALLSYMS when configuring
your kernel. If you did so, consider decoding the information from the
kernel's log. That will make it a lot easier to understand what led to the
'panic', 'Oops', 'warning', or 'BUG', which increases the chances that someone
can provide a fix.

Decoding can be done with a script you find in the Linux source tree. If you
are running a kernel you compiled yourself earlier, call it like this::

       [user@something ~]$ sudo dmesg | ./linux-5.10.5/scripts/decode_stacktrace.sh ./linux-5.10.5/vmlinux

If you are running a packaged vanilla kernel, you will likely have to install
the corresponding packages with debug symbols. 
Then call the script (which you
might need to get from the Linux sources if your distro does not package it)
like this::

       [user@something ~]$ sudo dmesg | ./linux-5.10.5/scripts/decode_stacktrace.sh \
        /usr/lib/debug/lib/modules/5.10.10-4.1.x86_64/vmlinux /usr/src/kernels/5.10.10-4.1.x86_64/

The script will work on log lines like the following, which show the address of
the code the kernel was executing when the error occurred::

       [   68.387301] RIP: 0010:test_module_init+0x5/0xffa [test_module]

Once decoded, these lines will look like this::

       [   68.387301] RIP: 0010:test_module_init (/home/username/linux-5.10.5/test-module/test-module.c:16) test_module

In this case the executed code was built from the file
'~/linux-5.10.5/test-module/test-module.c' and the error occurred in line '16'.

The script will similarly decode the addresses mentioned in the section
starting with 'Call trace', which show the path to the function where the
problem occurred. Additionally, the script will show the assembler output for
the code section the kernel was executing.

Note: if you can't get this to work, simply skip this step and mention the
reason for it in the report. If you're lucky, it might not be needed. And if it
is, someone might help you to get things going. Also be aware this is just one
of several ways to decode kernel stack traces. Sometimes different steps will
be required to retrieve the relevant details. Don't worry about that; if that's
needed in your case, developers will tell you what to do.


Special care for regressions
----------------------------

    *If your problem is a regression, try to narrow down when the issue was
    introduced as much as possible.*

Linux lead developer Linus Torvalds insists that the Linux kernel never
worsens; that's why he deems regressions unacceptable and wants to see them
fixed quickly. 
That's why changes that introduced a regression are often
promptly reverted if the issue they cause can't get solved quickly any other
way. Reporting a regression is thus a bit like playing a kind of trump card to
get something quickly fixed. But for that to happen the change that's causing
the regression needs to be known. Normally it's up to the reporter to track
down the culprit, as maintainers often won't have the time or setup at hand to
reproduce it themselves.

To find the change there is a process called 'bisection' which the document
Documentation/admin-guide/bug-bisect.rst describes in detail. That process
will often require you to build about ten to twenty kernel images, trying to
reproduce the issue with each of them before building the next. Yes, that takes
some time, but don't worry, it works a lot quicker than most people assume.
Thanks to a 'binary search' this will lead you to the one commit in the source
code management system that's causing the regression. Once you find it, search
the net for the subject of the change, its commit id and the shortened commit id
(the first 12 characters of the commit id). This will lead you to existing
reports about it, if there are any.

Note: a bisection needs a bit of know-how, which not everyone has, and quite a
bit of effort, which not everyone is willing to invest. Nevertheless, it's
highly recommended that you perform a bisection yourself. If you really can't
or don't want to go down that route, at least find out which mainline kernel
introduced the regression. If something for example breaks when switching from
5.5.15 to 5.8.4, then try at least all the mainline releases in that area (5.6,
5.7 and 5.8) to check when it first showed up. Unless you're trying to find a
regression in a stable or longterm kernel, avoid testing versions whose number
has three sections (5.6.12, 5.7.8), as that makes the outcome hard to
interpret, which might render your testing useless. 
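
For reference, a typical bisection session in a clone of the mainline
repository roughly follows the pattern sketched below; the version numbers and
the commit id are purely illustrative:

```shell
# git bisect start
# git bisect good v5.7    # last known-good mainline release
# git bisect bad v5.8     # first mainline release showing the issue
# ...build, boot, and test each kernel git checks out, then run
# 'git bisect good' or 'git bisect bad' until git names the culprit...

# Once the culprit is known, derive the shortened commit id (its first
# 12 characters) for searching the net:
commit_id=1f2e3d4c5b6a79881726354433221100aabbccdd   # illustrative id
printf '%s\n' "$commit_id" | cut -c1-12
```
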
Once you have found the major
version which introduced the regression, feel free to move on in the reporting
process. But keep in mind: it depends on the issue at hand whether the
developers will be able to help without knowing the culprit. Sometimes they
might recognize from the report what went wrong and can fix it; other times
they will be unable to help unless you perform a bisection.

When dealing with regressions make sure the issue you face is really caused by
the kernel and not by something else, as outlined above already.

In the whole process keep in mind: an issue only qualifies as a regression if
the older and the newer kernel got built with a similar configuration. This can
be achieved by using ``make olddefconfig``, as explained in more detail by
Documentation/admin-guide/reporting-regressions.rst; that document also
provides a good deal of other information about regressions you might want to be
aware of.


Write and send the report
-------------------------

    *Start to compile the report by writing a detailed description about the
    issue. Always mention a few things: the latest kernel version you installed
    for reproducing, the Linux distribution used, and your notes on how to
    reproduce the issue. Ideally, make the kernel's build configuration
    (.config) and the output from ``dmesg`` available somewhere on the net and
    link to it. Include or upload all other information that might be relevant,
    like the output/screenshot of an Oops or the output from ``lspci``. Once
    you wrote this main part, insert a normal length paragraph on top of it
    outlining the issue and the impact quickly. On top of this add one sentence
    that briefly describes the problem and gets people to read on. Now give the
    thing a descriptive title or subject that yet again is shorter. 
Then you're
    ready to send or file the report like the MAINTAINERS file told you, unless
    you are dealing with one of those 'issues of high priority': they need
    special care which is explained in 'Special handling for high priority
    issues' below.*

Now that you have prepared everything it's time to write your report. How to do
that is partly explained by the three documents linked to in the preface above.
That's why this text will only mention a few of the essentials as well as
things specific to the Linux kernel.

There is one thing that fits both categories: the most crucial parts of your
report are the title/subject, the first sentence, and the first paragraph.
Developers often get quite a lot of mail. They thus often just take a few
seconds to skim a mail before deciding to move on or look closer. Thus: the
better the top section of your report, the higher the chances that someone
will look into it and help you. And that is why you should ignore them for now
and write the detailed report first. ;-)

Things each report should mention
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Describe in detail how your issue happens with the fresh vanilla kernel you
installed. Try to include the step-by-step instructions you wrote and optimized
earlier that outline how you and ideally others can reproduce the issue; in
those rare cases where that's impossible, try to describe what you did to
trigger it.

Also include all the relevant information others might need to understand the
issue and its environment. 
What's actually needed depends a lot on the issue,
but there are some things you should always include:

 * the output from ``cat /proc/version``, which contains the Linux kernel
   version number and the compiler it was built with.

 * the Linux distribution the machine is running (``hostnamectl | grep
   "Operating System"``)

 * the architecture of the CPU and the operating system (``uname -mi``)

 * if you are dealing with a regression and performed a bisection, mention the
   subject and the commit id of the change that is causing it.

In a lot of cases it's also wise to make two more things available to those
that read your report:

 * the configuration used for building your Linux kernel (the '.config' file)

 * the kernel's messages that you get from ``dmesg`` written to a file. Make
   sure that it starts with a line like 'Linux version 5.8-1
   (foobar@example.com) (gcc (GCC) 10.2.1, GNU ld version 2.34) #1 SMP Mon Aug
   3 14:54:37 UTC 2020'. If it's missing, then important messages from the
   first boot phase already got discarded. In this case instead consider using
   ``journalctl -b 0 -k``; alternatively you can also reboot, reproduce the
   issue and call ``dmesg`` right afterwards.

These two files are big, which is why it's a bad idea to put them directly into
your report. If you are filing the issue in a bug tracker then attach them to
the ticket. If you report the issue by mail do not attach them, as that makes
the mail too large; instead do one of these things:

 * Upload the files somewhere public (your website, a public file paste
   service, a ticket created just for this purpose on `bugzilla.kernel.org
   <https://bugzilla.kernel.org/>`_, ...) and include a link to them in your
   report. 
Ideally use something where the files stay available for years, as
   they could be useful to someone many years from now; this for example can
   happen if five or ten years from now a developer works on some code that was
   changed just to fix your issue.

 * Put the files aside and mention you will send them later in individual
   replies to your own mail. Just remember to actually do that once the report
   went out. ;-)

Things that might be wise to provide
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on the issue you might need to add more background data. Here are a
few suggestions what often is good to provide:

 * If you are dealing with a 'warning', an 'OOPS' or a 'panic' from the kernel,
   include it. If you can't copy'n'paste it, try to capture a netconsole trace
   or at least take a picture of the screen.

 * If the issue might be related to your computer hardware, mention what kind
   of system you use. If you for example have problems with your graphics card,
   mention its manufacturer, the card's model, and what chip it uses. If it's a
   laptop mention its name, but try to make sure it's meaningful. 'Dell XPS 13'
   for example is not, because it might be the one from 2012; that one might not
   look that different from the one sold today, but apart from that the two have
   nothing in common. Hence, in such cases add the exact model number, which
   for example are '9380' or '7390' for XPS 13 models introduced during 2019.
   Names like 'Lenovo Thinkpad T590' are also somewhat ambiguous: there are
   variants of this laptop with and without a dedicated graphics chip, so try
   to find the exact model name or specify the main components.

 * Mention the relevant software in use. 
If you have problems with loading
   modules, you want to mention the versions of kmod, systemd, and udev in use.
   If one of the DRM drivers misbehaves, you want to state the versions of
   libdrm and Mesa; also specify your Wayland compositor or the X-Server and
   its driver. If you have a filesystem issue, mention the versions of the
   corresponding filesystem utilities (e2fsprogs, btrfs-progs, xfsprogs, ...).

 * Gather additional information from the kernel that might be of interest. The
   output from ``lspci -nn`` will for example help others to identify what
   hardware you use. If you have a problem with hardware you even might want to
   make the output from ``sudo lspci -vvv`` available, as that provides
   insights how the components were configured. For some issues it might be
   good to include the contents of files like ``/proc/cpuinfo``,
   ``/proc/ioports``, ``/proc/iomem``, ``/proc/modules``, or
   ``/proc/scsi/scsi``. Some subsystems also offer tools to collect relevant
   information. One such tool is ``alsa-info.sh`` `which the audio/sound
   subsystem developers provide <https://www.alsa-project.org/wiki/AlsaInfo>`_.

Those examples should give you some ideas of what data might be wise to
attach, but you have to decide yourself what will be helpful for others to
know. Don't worry too much about forgetting something, as developers will ask
for additional details they need. But making everything important available
from the start increases the chance someone will take a closer look.


The important part: the head of your report
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that you have the detailed part of the report prepared, let's get to the
most important section: the first few sentences. Thus go to the top, add
something like 'The detailed description:' before the part you just wrote and
insert two newlines at the top. Now write one normal length paragraph that
describes the issue roughly. 
Leave out all boring details and focus on the
crucial parts readers need to know to understand what this is all about; if you
think this bug affects a lot of users, mention this to get people interested.

Once you have done that, insert two more lines at the top and write a one
sentence summary that explains quickly what the report is about. After that you
have to get even more abstract and write an even shorter subject/title for the
report.

Now that you have written this part, take some time to optimize it, as it is
the most important part of your report: a lot of people will only read this
before they decide if reading the rest is time well spent.

Now send or file the report like the :ref:`MAINTAINERS <maintainers>` file told
you, unless it's one of those 'issues of high priority' outlined earlier: in
that case please read the next subsection first before sending the report on
its way.

Special handling for high priority issues
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Reports for high priority issues need special handling.

**Severe issues**: make sure the subject or ticket title as well as the first
paragraph makes the severity obvious.

**Regressions**: make the report's subject start with '[REGRESSION]'.

In case you performed a successful bisection, use the title of the change that
introduced the regression as the second part of your subject. Make the report
also mention the commit id of the culprit. In case of an unsuccessful bisection,
make your report mention the latest tested version that's working fine (say 5.7)
and the oldest where the issue occurs (say 5.8-rc1).

When sending the report by mail, CC the Linux regressions mailing list
(regressions@lists.linux.dev). In case the report needs to be filed to some web
tracker, proceed to do so. Once filed, forward the report by mail to the
regressions list; CC the maintainer and the mailing list for the subsystem in
question. 
Make sure to inline the forwarded report, hence do not attach it.
Also add a short note at the top where you mention the URL to the ticket.

When mailing or forwarding the report, in case of a successful bisection add the
author of the culprit to the recipients; also CC everyone in the signed-off-by
chain, which you find at the end of its commit message.

**Security issues**: for these issues you will have to evaluate if a
short-term risk to other users would arise if details were publicly disclosed.
If that's not the case, simply proceed with reporting the issue as described.
For issues that bear such a risk you will need to adjust the reporting process
slightly:

 * If the MAINTAINERS file instructed you to report the issue by mail, do not
   CC any public mailing lists.

 * If you were supposed to file the issue in a bug tracker, make sure to mark
   the ticket as 'private' or 'security issue'. If the bug tracker does not
   offer a way to keep reports private, forget about it and send your report as
   a private mail to the maintainers instead.

In both cases make sure to also mail your report to the addresses the
MAINTAINERS file lists in the section 'security contact'. Ideally directly CC
them when sending the report by mail. If you filed it in a bug tracker, forward
the report's text to these addresses; but put a small note at the top
mentioning that you filed it, with a link to the ticket.

See Documentation/process/security-bugs.rst for more information.


Duties after the report went out
--------------------------------

    *Wait for reactions and keep the thing rolling until you can accept the
    outcome in one way or the other. Thus react publicly and in a timely manner
    to any inquiries. Test proposed fixes. Do proactive testing: retest with at
    least every first release candidate (RC) of a new mainline version and
    report your results. Send friendly reminders if things stall.
And try to
    help yourself, if you don't get any help or if it's unsatisfying.*

If your report was good and you are really lucky then one of the developers
might immediately spot what's causing the issue; they then might write a patch
to fix it, test it, and send it straight for integration in mainline while
tagging it for later backport to stable and longterm kernels that need it. Then
all you need to do is reply with a 'Thank you very much' and switch to a version
with the fix once it gets released.

But this ideal scenario rarely happens. That's why the job is only starting
once you get the report out. What you'll have to do depends on the situation,
but often it will be the things listed below. But before digging into the
details, here are a few important things you need to keep in mind for this part
of the process.


General advice for further interactions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Always reply in public**: When you filed the issue in a bug tracker, always
reply there and do not contact any of the developers privately about it. For
mailed reports always use the 'Reply-all' function when replying to any mails
you receive. That includes mails with any additional data you might want to add
to your report: go to your mail application's 'Sent' folder and use 'reply-all'
on your mail with the report. This approach will make sure the public mailing
list(s) and everyone else that gets involved over time stays in the loop; it
also keeps the mail thread intact, which among other things is really important
for mailing lists to group all related mails together.

There are just two situations where a comment in a bug tracker or a 'Reply-all'
is unsuitable:

 * Someone tells you to send something privately.

 * You were told to send something, but noticed it contains sensitive
   information that needs to be kept private. In that case it's okay to send it
   in private to the developer that asked for it.
But note in the ticket or a
   mail that you did that, so everyone else knows you honored the request.

**Do research before asking for clarifications or help**: In this part of the
process someone might tell you to do something that requires a skill you might
not have mastered yet. For example, you might be asked to use some test tools
you have never heard of; or you might be asked to apply a patch to the
Linux kernel sources to test if it helps. In some cases it will be fine sending
a reply asking for instructions on how to do that. But before going that route
try to find the answer on your own by searching the internet; alternatively
consider asking in other places for advice. For example ask a friend or post
about it to a chatroom or forum you normally hang out in.

**Be patient**: If you are really lucky you might get a reply to your report
within a few hours. But most of the time it will take longer, as maintainers
are scattered around the globe and thus might be in a different time zone – one
where they already enjoy their night away from keyboard.

In general, kernel developers will take one to five business days to respond to
reports. Sometimes it will take longer, as they might be busy with the merge
windows, other work, visiting developer conferences, or simply enjoying a long
summer holiday.

The 'issues of high priority' (see above for an explanation) are an exception
here: maintainers should address them as soon as possible; that's why you
should wait a week at maximum (or just two days if it's something urgent)
before sending a friendly reminder.

Sometimes the maintainer might not be responding in a timely manner; other
times there might be disagreements, for example about whether an issue
qualifies as a regression or not. In such cases raise your concerns on the
mailing list and ask others for advice on how to move on. If that fails, it
might be appropriate to get a higher authority involved.
In case of a WiFi
driver that would be the wireless maintainers; if there are no higher level
maintainers or all else fails, it might be one of those rare situations where
it's okay to get Linus Torvalds involved.

**Proactive testing**: Every time the first pre-release (the 'rc1') of a new
mainline kernel version gets released, go and check if the issue is fixed there
or if anything of importance changed. Mention the outcome in the ticket or in a
mail you send as a reply to your report (make sure to CC everyone that
participated in the discussion up to that point). This will show your
commitment and that you are willing to help. It also tells developers if the
issue persists and makes sure they do not forget about it. A few other
occasional retests (for example with rc3, rc5 and the final) are also a good
idea, but only report your results if something relevant changed or if you are
writing something anyway.

With all these general things off the table, let's get into the details of how
to help get issues resolved once they have been reported.

Inquiries and testing requests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Here are your duties in case you got replies to your report:

**Check who you deal with**: Most of the time it will be the maintainer or a
developer of the particular code area that will respond to your report. But as
issues are normally reported in public it could be anyone that's replying —
including people that want to help, but in the end might guide you totally off
track with their questions or requests. That rarely happens, but it's one of
many reasons why it's wise to quickly run an internet search to see who you're
interacting with.
By doing this you also become aware of whether your report was heard
by the right people, as a reminder to the maintainer (see below) might be in
order later if the discussion fades out without leading to a satisfying
solution for the issue.

**Inquiries for data**: Often you will be asked to test something or provide
additional details. Try to provide the requested information soon, as you have
the attention of someone who might help, and you risk losing it the longer you
wait; that outcome is even likely if you do not provide the information within
a few business days.

**Requests for testing**: When you are asked to test a diagnostic patch or a
possible fix, try to test it in a timely manner, too. But do it properly and
make sure to not rush it: mixing things up can happen easily and can lead to a
lot of confusion for everyone involved. A common mistake for example is
thinking a proposed patch with a fix was applied, but in fact wasn't. Things
like that happen even to experienced testers occasionally, but most of the time
they will notice when the kernel with the fix behaves just like one without it.

What to do when nothing of substance happens
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some reports will not get any reaction from the responsible Linux kernel
developers; or a discussion around the issue evolved, but faded out with
nothing of substance coming out of it.

In these cases wait two (better: three) weeks before sending a friendly
reminder: maybe the maintainer was just away from keyboard for a while when
your report arrived or had something more important to take care of. When
writing the reminder, kindly ask if anything else from your side is needed to
get the ball rolling somehow.
If the report got out by mail, do that in the
first lines of a mail that is a reply to your initial mail (see above) which
includes a full quote of the original report below: that's one of those few
situations where such a 'TOFU' (Text Over, Fullquote Under) is the right
approach, as then all the recipients will have the details at hand immediately
in the proper order.

After the reminder wait three more weeks for replies. If you still don't get a
proper reaction, you first should reconsider your approach. Did you maybe try
to reach out to the wrong people? Was the report maybe offensive or so
confusing that people decided to completely stay away from it? The best way to
rule out such factors: show the report to one or two people familiar with FLOSS
issue reporting and ask for their opinion. Also ask them for their advice on
how to move forward. That might mean: prepare a better report and have those
people review it before you send it out. Such an approach is totally fine; just
mention that this is the second and improved report on the issue and include a
link to the first report.

If the report was proper you can send a second reminder; in it ask for advice
on why the report did not get any replies. A good moment for this second
reminder mail is shortly after the first pre-release (the 'rc1') of a new Linux
kernel version got published, as you should retest and provide a status update
at that point anyway (see above).

If the second reminder again results in no reaction within a week, try to
contact a higher-level maintainer asking for advice: even busy maintainers by
then should at least have sent some kind of acknowledgment.

Remember to prepare yourself for a disappointment: maintainers ideally should
react somehow to every issue report, but they are only obliged to fix those
'issues of high priority' outlined earlier.
So don't be too devastated if you
get a reply along the lines of 'thanks for the report, I have more important
issues to deal with currently and won't have time to look into this for the
foreseeable future'.

It's also possible that after some discussion in the bug tracker or on a list
nothing happens anymore and reminders don't help to motivate anyone to work out
a fix. Such situations can be devastating, but that is in the cards when it
comes to Linux kernel development. This and several other reasons for not
getting help are explained in 'Why some issues won't get any reaction or remain
unfixed after being reported' near the end of this document.

Don't get devastated if you don't find any help or if the issue in the end does
not get solved: the Linux kernel is FLOSS and thus you can still help yourself.
You for example could try to find others that are affected and team up with
them to get the issue resolved. Such a team could prepare a fresh report
together that mentions how many of you there are and why this is something that
in your opinion should get fixed. Maybe together you can also narrow down the
root cause or the change that introduced a regression, which often makes
developing a fix easier.
And with a bit of luck there might be someone in the team that knows a
bit about programming and might be able to write a fix.


Reference for "Reporting regressions within a stable and longterm kernel line"
------------------------------------------------------------------------------

This subsection provides details for the steps you need to perform if you face
a regression within a stable and longterm kernel line.

Make sure the particular version line still gets support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    *Check if the kernel developers still maintain the Linux kernel version
    line you care about: go to the front page of kernel.org and make sure it
    mentions the latest release of the particular version line without an
    '[EOL]' tag.*

Most kernel version lines only get supported for about three months, as
maintaining them longer is quite a lot of work. Hence, only one per year is
chosen and gets supported for at least two years (often six). That's why you
need to check if the kernel developers still support the version line you care
for.

Note, if kernel.org lists two stable version lines on the front page, you
should consider switching to the newer one and forgetting about the older one:
support for it is likely to be abandoned soon. Then it will get an
"end-of-life" (EOL) stamp. Version lines that reached that point still get
mentioned on the kernel.org front page for a week or two, but are unsuitable
for testing and reporting.

Search stable mailing list
~~~~~~~~~~~~~~~~~~~~~~~~~~

    *Check the archives of the Linux stable mailing list for existing reports.*

Maybe the issue you face is already known and was fixed or is about to be.
Hence, `search the archives of the Linux stable mailing list
<https://lore.kernel.org/stable/>`_ for reports about an issue like yours.
If
you find any matches, consider joining the discussion, unless the fix is
already finished and scheduled to get applied soon.

Reproduce issue with the newest release
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    *Install the latest release from the particular version line as a vanilla
    kernel. Ensure this kernel is not tainted and still shows the problem, as
    the issue might have already been fixed there. If you first noticed the
    problem with a vendor kernel, check that a vanilla build of the last
    version known to work performs fine as well.*

Before investing any more time in this process you want to check if the issue
was already fixed in the latest release of the version line you're interested
in. This kernel needs to be vanilla and shouldn't be tainted before the issue
happens, as outlined in detail above in the section "Install a fresh kernel
for testing".

Did you first notice the regression with a vendor kernel? Then changes the
vendor applied might be interfering. You need to rule that out by performing
a recheck. Say something broke when you updated from 5.10.4-vendor.42 to
5.10.5-vendor.43. Then after testing the latest 5.10 release as outlined in
the previous paragraph check if a vanilla build of Linux 5.10.4 works fine as
well. If things are broken there, the issue does not qualify as an upstream
regression and you need to switch back to the main step-by-step guide to report
the issue.

Report the regression
~~~~~~~~~~~~~~~~~~~~~

    *Send a short problem report to the Linux stable mailing list
    (stable@vger.kernel.org) and CC the Linux regressions mailing list
    (regressions@lists.linux.dev); if you suspect the cause in a particular
    subsystem, CC its maintainer and its mailing list. Roughly describe the
    issue and ideally explain how to reproduce it. Mention the first version
    that shows the problem and the last version that's working fine.
Then
    wait for further instructions.*

When reporting a regression that happens within a stable or longterm kernel
line (say when updating from 5.10.4 to 5.10.5) a brief report is enough for a
start to get the issue reported quickly. Hence a rough description to the
stable and regressions mailing lists is all it takes; but in case you suspect
the cause in a particular subsystem, CC its maintainers and its mailing list
as well, because that will speed things up.

And note, it helps developers a great deal if you can specify the exact version
that introduced the problem. Hence if possible within a reasonable time frame,
try to find that version using vanilla kernels. Let's assume something broke
when your distributor released an update from Linux kernel 5.10.5 to 5.10.8.
Then as instructed above go and check the latest kernel from that version line,
say 5.10.9. If it shows the problem, try a vanilla 5.10.5 to ensure that no
patches the distributor applied interfere. If the issue doesn't manifest itself
there, try 5.10.7 and then (depending on the outcome) 5.10.8 or 5.10.6 to find
the first version where things broke. Mention it in the report and state that
5.10.9 is still broken.

What the previous paragraph outlines is basically a rough manual 'bisection'.
Once your report is out you might get asked to do a proper one, as it makes it
possible to pinpoint the exact change that causes the issue (which then can
easily get reverted to fix the issue quickly). Hence consider doing a proper
bisection right away if time permits. See the section 'Special care for
regressions' and the document Documentation/admin-guide/bug-bisect.rst for
details on how to perform one.
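A proper bisection of the example from the previous paragraph could, as a rough
sketch, look like this; the tags v5.10.5 (good) and v5.10.8 (bad) are just the
example versions from above, so substitute the ones you actually tested:

```shell
# Sketch only: bisect a regression between two stable releases using the
# stable tree. Adjust the example tags to your own good/bad versions.
git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
cd linux
git bisect start
git bisect bad v5.10.8     # first release known to show the problem
git bisect good v5.10.5    # last release known to work fine
# Now build, install, and boot the kernel Git checked out, test it, and
# run 'git bisect good' or 'git bisect bad' accordingly; repeat until
# Git names the first bad commit. Afterwards clean up with:
git bisect reset
```

Each good/bad answer halves the remaining range, so even a span of dozens of
commits only takes a handful of build-and-boot cycles.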
In case of a successful bisection add the author of the culprit to
the recipients; also CC everyone in the signed-off-by chain, which you find at
the end of its commit message.


Reference for "Reporting issues only occurring in older kernel version lines"
-----------------------------------------------------------------------------

This section provides details for the steps you need to take if you could not
reproduce your issue with a mainline kernel, but want to see it fixed in older
version lines (aka stable and longterm kernels).

Some fixes are too complex
~~~~~~~~~~~~~~~~~~~~~~~~~~

    *Prepare yourself for the possibility that going through the next few steps
    might not get the issue solved in older releases: the fix might be too big
    or risky to get backported there.*

Even small and seemingly obvious code-changes sometimes introduce new and
totally unexpected problems. The maintainers of the stable and longterm kernels
are very aware of that and thus only apply changes to these kernels that are
within the rules outlined in Documentation/process/stable-kernel-rules.rst.

Complex or risky changes for example do not qualify and thus only get applied
to mainline. Other fixes are easy to get backported to the newest stable and
longterm kernels, but too risky to integrate into older ones. So be aware the
fix you are hoping for might be one of those that won't be backported to the
version line you care about. In that case you'll have no other choice than to
live with the issue or switch to a newer Linux version, unless you want to
patch the fix into your kernels yourself.

Common preparations
~~~~~~~~~~~~~~~~~~~

    *Perform the first three steps in the section "Reporting regressions within
    a stable and longterm kernel line" above.*

You need to carry out a few steps already described in another section of this
guide.
Those steps will let you:

 * Check if the kernel developers still maintain the Linux kernel version line
   you care about.

 * Search the Linux stable mailing list for existing reports.

 * Check with the latest release.


Check code history and search for existing discussions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    *Search the Linux kernel version control system for the change that fixed
    the issue in mainline, as its commit message might tell you if the fix is
    scheduled for backporting already. If you don't find anything that way,
    search the appropriate mailing lists for posts that discuss such an issue
    or peer-review possible fixes; then check the discussions if the fix was
    deemed unsuitable for backporting. If backporting was not considered at
    all, join the newest discussion, asking if it's in the cards.*

In a lot of cases the issue you deal with will have happened with mainline, but
got fixed there. The commit that fixed it would need to get backported as well
to get the issue solved. That's why you want to search for it or any
discussions about it.

 * First try to find the fix in the Git repository that holds the Linux kernel
   sources. You can do this with the web interfaces `on kernel.org
   <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/>`_
   or its mirror `on GitHub <https://github.com/torvalds/linux>`_; if you have
   a local clone you alternatively can search on the command line with ``git
   log --grep=<pattern>``.

   If you find the fix, check if the commit message near the end contains a
   'stable tag' that looks like this:

          Cc: <stable@vger.kernel.org> # 5.4+

   If that's the case the developer marked the fix safe for backporting to
   version line 5.4 and later.
Most of the time it gets applied there within
   two weeks, but sometimes it takes a bit longer.

 * If the commit doesn't tell you anything or if you can't find the fix, look
   again for discussions about the issue. Search the net with your favorite
   internet search engine as well as the archives of the `Linux kernel
   developers mailing list <https://lore.kernel.org/lkml/>`_. Also read the
   section `Locate kernel area that causes the issue` above and follow the
   instructions to find the subsystem in question: its bug tracker or mailing
   list archive might have the answer you are looking for.

 * If you see a proposed fix, search for it in the version control system as
   outlined above, as the commit might tell you if a backport can be expected.

   * Check the discussions for any indicators the fix might be too risky to get
     backported to the version line you care about. If that's the case you have
     to live with the issue or switch to the kernel version line where the fix
     got applied.

   * If the fix doesn't contain a stable tag and backporting was not discussed,
     join the discussion: mention the version where you face the issue and that
     you would like to see it fixed, if suitable.


Ask for advice
~~~~~~~~~~~~~~

    *One of the former steps should lead to a solution. If that doesn't work
    out, ask the maintainers for the subsystem that seems to be causing the
    issue for advice; CC the mailing list for the particular subsystem as well
    as the stable mailing list.*

If the previous three steps didn't get you closer to a solution there is only
one option left: ask for advice.
Do that in a mail you send to the maintainers
for the subsystem where the issue seems to have its roots; CC the mailing list
for the subsystem as well as the stable mailing list (stable@vger.kernel.org).


Why some issues won't get any reaction or remain unfixed after being reported
=============================================================================

When reporting a problem to the Linux developers, be aware only 'issues of high
priority' (regressions, security issues, severe problems) are definitely going
to get resolved. The maintainers, or if all else fails Linus Torvalds himself,
will make sure of that. They and the other kernel developers will fix a lot of
other issues as well. But be aware that sometimes they can't or won't help; and
sometimes there isn't even anyone to send a report to.

This is best explained with kernel developers that contribute to the Linux
kernel in their spare time. Quite a few of the drivers in the kernel were
written by such programmers, often because they simply wanted to make their
hardware usable on their favorite operating system.

These programmers most of the time will happily fix problems other people
report. But nobody can force them to do so, as they are contributing
voluntarily.

Then there are situations where such developers really want to fix an issue,
but can't: sometimes they lack hardware programming documentation to do so.
This often happens when the publicly available docs are superficial or the
driver was written with the help of reverse engineering.

Sooner or later spare time developers will also stop caring for the driver.
Maybe their test hardware broke, got replaced by something more fancy, or is so
old that it's something you don't find much outside of computer museums
anymore. Sometimes developers stop caring for their code and Linux at all, as
something different in their life became way more important.
In some cases
nobody is willing to take over the job as maintainer – and nobody can be forced
to, as contributing to the Linux kernel is done on a voluntary basis. Abandoned
drivers nevertheless remain in the kernel: they are still useful for people and
removing them would be a regression.

The situation is not that different with developers that are paid for their
work on the Linux kernel. Those contribute most changes these days. But their
employers sooner or later also stop caring for their code or make its
programmer focus on other things. Hardware vendors for example earn their money
mainly by selling new hardware; quite a few of them hence are not investing
much time and energy in maintaining a Linux kernel driver for something they
stopped selling years ago. Enterprise Linux distributors often care for a
longer time period, but in new versions often leave support for old and rare
hardware aside to limit the scope. Often spare time contributors take over once
a company orphans some code, but as mentioned above: sooner or later they will
leave the code behind, too.

Priorities are another reason why some issues are not fixed, as maintainers
quite often are forced to set those, as time to work on Linux is limited.
That's true for spare time as well as for the time employers grant their
developers to spend on maintenance work on the upstream kernel. Sometimes
maintainers also get overwhelmed with reports, even if a driver is working
nearly perfectly.
To
not get completely stuck, the programmer thus might have no other choice than
to prioritize issue reports and reject some of them.

But don't worry too much about all of this; a lot of drivers have active
maintainers who are quite interested in fixing as many issues as possible.


Closing words
=============

Compared with other Free/Libre & Open Source Software it's hard to report
issues to the Linux kernel developers: the length and complexity of this
document and the implications between the lines illustrate that. But that's how
it is for now. The main author of this text hopes documenting the state of the
art will lay some groundwork to improve the situation over time.


..
   end-of-content
..
   This document is maintained by Thorsten Leemhuis <linux@leemhuis.info>. If
   you spot a typo or small mistake, feel free to let him know directly and
   he'll fix it. You are free to do the same in a mostly informal way if you
   want to contribute changes to the text, but for copyright reasons please CC
   linux-doc@vger.kernel.org and "sign-off" your contribution as
   Documentation/process/submitting-patches.rst outlines in the section "Sign
   your work - the Developer's Certificate of Origin".
..
   This text is available under GPL-2.0+ or CC-BY-4.0, as stated at the top
   of the file.
If you want to distribute this text under CC-BY-4.0 only,
   please use "The Linux kernel developers" for author attribution and link
   this as source:
   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/plain/Documentation/admin-guide/reporting-issues.rst
..
   Note: Only the content of this RST file as found in the Linux kernel sources
   is available under CC-BY-4.0, as versions of this text that were processed
   (for example by the kernel's build system) might contain content taken from
   files which use a more restrictive license.
would like to see it resolved in a still supported stable or longterm series as well  Then install its latest release  If it shows the problem  search for the change that fixed it in mainline and check if backporting is in the works or was discarded  if it s neither  ask those who handled the change for it     General remarks    When installing and testing a kernel as outlined above  ensure it s vanilla  IOW  not patched and not using add on modules   Also make sure it s built and running in a healthy environment and not already tainted before the issue occurs   If you are facing multiple issues with the Linux kernel at once  report each separately  While writing your report  include all information relevant to the issue  like the kernel and the distro used  In case of a regression  CC the regressions mailing list  regressions lists linux dev  to your report  Also try to pin point the culprit with a bisection  if you succeed  include its commit id and CC everyone in the sign off by chain   Once the report is out  answer any questions that come up and help where you can  That includes keeping the ball rolling by occasionally retesting with newer releases and sending a status update afterwards   Step by step guide how to report issues to the kernel maintainers                                                                    The above TL DR outlines roughly how to report issues to the Linux kernel developers  It might be all that s needed for people already familiar with reporting issues to Free Libre   Open Source Software  FLOSS  projects  For everyone else there is this section  It is more detailed and uses a step by step approach  It still tries to be brief for readability and leaves out a lot of details  those are described below the step by step guide in a reference section  which explains each of the steps in more detail   Note  this section covers a few more aspects than the TL DR and does things in a slightly different order  That s in your interest  to 
These steps thus help to ensure the time you invest in this
process won't feel wasted in the end:

 * Are you facing an issue with a Linux kernel a hardware or software vendor
   provided? Then in almost all cases you are better off to stop reading this
   document and to report the issue to your vendor instead, unless you are
   willing to install the latest Linux version yourself. Be aware the latter
   will often be needed anyway to hunt down and fix issues.

 * Perform a rough search for existing reports with your favorite internet
   search engine; additionally, check the archives of the `Linux Kernel
   Mailing List (LKML) <https://lore.kernel.org/lkml/>`_. If you find matching
   reports, join the discussion instead of sending a new one.

 * See if the issue you are dealing with qualifies as regression, security
   issue, or a really severe problem: those are 'issues of high priority' that
   need special handling in some steps that are about to follow.

 * Make sure it's not the kernel's surroundings that are causing the issue
   you face.

 * Create a fresh backup and put system repair and restore tools at hand.

 * Ensure your system does not enhance its kernels by building additional
   kernel modules on-the-fly, which solutions like DKMS might be doing locally
   without your knowledge.

 * Check if your kernel was 'tainted' when the issue occurred, as the event
   that made the kernel set this flag might be causing the issue you face.

 * Write down coarsely how to reproduce the issue. If you deal with multiple
   issues at once, create separate notes for each of them and make sure they
   work independently on a freshly booted system. That's needed, as each issue
   needs to get reported to the kernel developers separately, unless they are
   strongly entangled.

 * If you are facing a regression within a stable or longterm version line
   (say something broke when updating from 5.10.4 to 5.10.5), scroll down to
   'Dealing with regressions within a stable and longterm kernel line'.

 * Locate the driver or kernel subsystem that seems to be causing the issue.
   Find out how and where its developers expect reports. Note: most of the
   time this won't be bugzilla.kernel.org, as issues typically need to be sent
   by mail to a maintainer and a public mailing list.

 * Search the archives of the bug tracker or mailing list in question
   thoroughly for reports that might match your issue. If you find anything,
   join the discussion instead of sending a new report.

After these preparations you'll now enter the main part:

 * Unless you are already running the latest 'mainline' Linux kernel, better
   go and install it for the reporting process. Testing and reporting with the
   latest 'stable' Linux can be an acceptable alternative in some situations;
   during the merge window that actually might be even the best approach, but
   in that development phase it can be an even better idea to suspend your
   efforts for a few days anyway. Whatever version you choose, ideally use a
   'vanilla' build. Ignoring these advices will dramatically increase the risk
   your report will be rejected or ignored.

 * Ensure the kernel you just installed does not 'taint' itself when running.

 * Reproduce the issue with the kernel you just installed. If it doesn't show
   up there, scroll down to the instructions for issues only happening with
   stable and longterm kernels.

 * Optimize your notes: try to find and write the most straightforward way to
   reproduce your issue. Make sure the end result has all the important
   details, and at the same time is easy to read and understand for others
   that hear about it for the first time. And if you learned something in this
   process, consider searching again for existing reports about the issue.

 * If your failure involves a 'panic', 'Oops', 'warning', or 'BUG', consider
   decoding the kernel log to find the line of code that triggered the error.
 * If your problem is a regression, try to narrow down when the issue was
   introduced as much as possible.

 * Start to compile the report by writing a detailed description about the
   issue. Always mention a few things: the latest kernel version you installed
   for reproducing, the Linux Distribution used, and your notes on how to
   reproduce the issue. Ideally, make the kernel's build configuration
   (.config) and the output from ``dmesg`` available somewhere on the net and
   link to it. Include or upload all other information that might be relevant,
   like the output/screenshot of an Oops or the output from ``lspci``. Once
   you wrote this main part, insert a normal length paragraph on top of it
   outlining the issue and the impact quickly. On top of this add one sentence
   that briefly describes the problem and gets people to read on. Now give the
   thing a descriptive title or subject that yet again is shorter. Then you're
   ready to send or file the report like the MAINTAINERS file told you, unless
   you are dealing with one of those 'issues of high priority': they need
   special care which is explained in 'Special handling for high priority
   issues' below.

 * Wait for reactions and keep the thing rolling until you can accept the
   outcome in one way or the other. Thus react publicly and in a timely manner
   to any inquiries. Test proposed fixes. Do proactive testing: retest with at
   least every first release candidate (RC) of a new mainline version and
   report your results. Send friendly reminders if things stall. And try to
   help yourself, if you don't get any help or if it's unsatisfying.

Reporting regressions within a stable and longterm kernel line
--------------------------------------------------------------

This subsection is for you, if you followed above process and got sent here at
the point about regressions within a stable or longterm kernel version line.
You face one of those if something breaks when updating from 5.10.4 to 5.10.5
(a switch from 5.9.15 to 5.10.5 does not qualify). The developers want to fix
such regressions as quickly as possible, hence there is a streamlined process
to report them:

 * Check if the kernel developers still maintain the Linux kernel version line
   you care about: go to the `front page of kernel.org <https://kernel.org/>`_
   and make sure it mentions the latest release of the particular version line
   without an '[EOL]' tag.

 * Check the archives of the `Linux stable mailing list
   <https://lore.kernel.org/stable/>`_ for existing reports.

 * Install the latest release from the particular version line as a vanilla
   kernel. Ensure this kernel is not tainted and still shows the problem, as
   the issue might have already been fixed there. If you first noticed the
   problem with a vendor kernel, check that a vanilla build of the last
   version known to work performs fine as well.

 * Send a short problem report to the Linux stable mailing list
   (stable@vger.kernel.org) and CC the Linux regressions mailing list
   (regressions@lists.linux.dev); if you suspect the cause in a particular
   subsystem, CC its maintainer and its mailing list. Roughly describe the
   issue and ideally explain how to reproduce it. Mention the first version
   that shows the problem and the last version that's working fine. Then wait
   for further instructions.

The reference section below explains each of these steps in more detail.

Reporting issues only occurring in older kernel version lines
-------------------------------------------------------------

This subsection is for you, if you tried the latest mainline kernel as
outlined above, but failed to reproduce your issue there; at the same time you
want to see the issue fixed in a still supported stable or longterm series or
vendor kernels regularly rebased on those. If that is the case, follow these
steps:

 * Prepare yourself for the possibility that going through the next few steps
   might not get the issue solved in older releases: the fix might be too big
   or risky to get backported there.
 * Perform the first three steps in the section 'Dealing with regressions
   within a stable and longterm kernel line' above.

 * Search the Linux kernel version control system for the change that fixed
   the issue in mainline, as its commit message might tell you if the fix is
   scheduled for backporting already. If you don't find anything that way,
   search the appropriate mailing lists for posts that discuss such an issue
   or peer-review possible fixes; then check the discussions if the fix was
   deemed unsuitable for backporting. If backporting was not considered at
   all, join the newest discussion, asking if it's in the cards.

 * One of the former steps should lead to a solution. If that doesn't work
   out, ask the maintainers for the subsystem that seems to be causing the
   issue for advice; CC the mailing list for the particular subsystem as well
   as the stable mailing list.

The reference section below explains each of these steps in more detail.

Reference section: Reporting issues to the kernel maintainers
=============================================================

The detailed guides above outline all the major steps in brief fashion, which
should be enough for most people. But sometimes there are situations where
even experienced users might wonder how to actually do one of those steps.
That's what this section is for, as it will provide a lot more details on each
of the above steps. Consider this as reference documentation: it's possible to
read it from top to bottom. But it's mainly meant to skim over and a place to
look up details how to actually perform those steps.

A few words of general advice before digging into the details:

 * The Linux kernel developers are well aware this process is complicated and
   demands more than other FLOSS projects. We'd love to make it simpler. But
   that would require work in various places as well as some infrastructure,
   which would need constant maintenance; nobody has stepped up to do that
   work, so that's just how things are for now.

 * A warranty or support contract with some vendor doesn't entitle you to
   request fixes from developers in the upstream Linux kernel community: such
   contracts are completely outside the scope of the Linux kernel, its
   development community, and this document. That's why you can't demand
   anything such a contract guarantees in this context, not even if the
   developer handling the issue works for the vendor in question. If you want
   to claim your rights, use the vendor's support channel instead. When doing
   so, you might want to mention you'd like to see the issue fixed in the
   upstream Linux kernel; motivate them by saying it's the only way to ensure
   the fix in the end will get incorporated in all Linux distributions.

 * If you never reported an issue to a FLOSS project before you should
   consider reading `How to Report Bugs Effectively
   <https://www.chiark.greenend.org.uk/~sgtatham/bugs.html>`_, `How To Ask
   Questions The Smart Way
   <http://www.catb.org/esr/faqs/smart-questions.html>`_, and `How to ask good
   questions <https://jvns.ca/blog/good-questions/>`_.

With that off the table, find below the details on how to properly report
issues to the Linux kernel developers.

Make sure you're using the upstream Linux kernel
------------------------------------------------

   *Are you facing an issue with a Linux kernel a hardware or software vendor
   provided? Then in almost all cases you are better off to stop reading this
   document and reporting the issue to your vendor instead, unless you are
   willing to install the latest Linux version yourself. Be aware the latter
   will often be needed anyway to hunt down and fix issues.*

Like most programmers, Linux kernel developers don't like to spend time
dealing with reports for issues that don't even happen with their current
code.
It's just a waste of everybody's time, especially yours. Unfortunately such
situations easily happen when it comes to the kernel and often lead to
frustration on both sides. That's because almost all Linux-based kernels
pre-installed on devices (Computers, Laptops, Smartphones, Routers, ...) and
most shipped by Linux distributors are quite distant from the official Linux
kernel as distributed by kernel.org: these kernels from these vendors are
often ancient from the point of Linux development or heavily modified, often
both.

Most of these vendor kernels are quite unsuitable for reporting issues to the
Linux kernel developers: an issue you face with one of them might have been
fixed by the Linux kernel developers months or years ago already; additionally,
the modifications and enhancements by the vendor might be causing the issue
you face, even if they look small or totally unrelated. That's why you should
report issues with these kernels to the vendor. Its developers should look
into the report and, in case it turns out to be an upstream issue, fix it
directly upstream or forward the report there. In practice that often does not
work out or might not be what you want. You thus might want to consider
circumventing the vendor by installing the very latest Linux kernel core
yourself. If that's an option for you, move ahead in this process, as a later
step in this guide will explain how to do that once it rules out other
potential causes for your issue.

Note, the previous paragraph is starting with the word 'most', as sometimes
developers in fact are willing to handle reports about issues occurring with
vendor kernels. If they do in the end highly depends on the developers and the
issue in question. Your chances are quite good if the distributor applied only
small modifications to a kernel based on a recent Linux version; that for
example often holds true for the mainline kernels shipped by Debian GNU/Linux
Sid or Fedora Rawhide. Some developers will also accept reports about issues
with kernels from distributions shipping the latest stable kernel, as long as
it's only slightly modified; that for example is often the case for Arch
Linux, regular Fedora releases, and openSUSE Tumbleweed. But keep in mind, you
better want to use a mainline Linux and avoid using a stable kernel for this
process, as outlined in the section 'Install a fresh kernel for testing' in
more detail.

Obviously you are free to ignore all this advice and report problems with an
old or heavily modified vendor kernel to the upstream Linux developers. But
note, those often get rejected or ignored, so consider yourself warned. But
it's still better than not reporting the issue at all: sometimes such reports
directly or indirectly will help to get the issue fixed over time.

Search for existing reports, first run
--------------------------------------

   *Perform a rough search for existing reports with your favorite internet
   search engine; additionally, check the archives of the Linux Kernel Mailing
   List (LKML). If you find matching reports, join the discussion instead of
   sending a new one.*

Reporting an issue that someone else already brought forward is often a waste
of time for everyone involved, especially you as the reporter. So it's in your
own interest to thoroughly check if somebody reported the issue already. At
this step of the process it's okay to just perform a rough search: a later
step will tell you to perform a more detailed search once you know where your
issue needs to be reported to. Nevertheless, do not hurry with this step of
the reporting process, it can save you time and trouble.

Simply search the internet with your favorite search engine first. Afterwards,
search the `Linux Kernel Mailing List (LKML) archives
<https://lore.kernel.org/lkml/>`_.

If you get flooded with results consider telling your search engine to limit
the search timeframe to the past month or year.
And wherever you search, make sure to use good search terms; vary them a few
times, too. While doing so try to look at the issue from the perspective of
someone else: that will help you to come up with other words to use as search
terms. Also make sure not to use too many search terms at once. Remember to
search with and without information like the name of the kernel driver or the
name of the affected hardware component. But its exact brand name (say 'ASUS
Red Devil Radeon RX 5700 XT Gaming OC') often is not much helpful, as it is
too specific. Instead try search terms like the model line (Radeon 5700 or
Radeon 5000) and the code name of the main chip ('Navi' or 'Navi10') with and
without its manufacturer ('AMD').

In case you find an existing report about your issue, join the discussion, as
you might be able to provide valuable additional information. That can be
important even when a fix is prepared or in its final stages already, as
developers might look for people that can provide additional information or
test a proposed fix. Jump to the section 'Duties after the report went out'
for details on how to get properly involved.

Note, searching `bugzilla.kernel.org <https://bugzilla.kernel.org/>`_ might
also be a good idea, as that might provide valuable insights or turn up
matching reports. If you find the latter, just keep in mind: most subsystems
expect reports in different places, as described below in the section 'Check
where you need to report your issue'. The developers that should take care of
the issue thus might not even be aware of the bugzilla ticket. Hence, check
the ticket if the issue already got reported as outlined in this document and
if not consider doing so.

Issues of high priority
-----------------------

   *See if the issue you are dealing with qualifies as regression, security
   issue, or a really severe problem: those are 'issues of high priority' that
   need special handling in some steps that are about to follow.*

Linus Torvalds and the leading Linux kernel developers want to see some issues
fixed as soon as possible, hence there are 'issues of high priority' that get
handled slightly differently in the reporting process. Three types of cases
qualify: regressions, security issues, and really severe problems.

You deal with a regression if some application or practical use case running
fine with one Linux kernel works worse or not at all with a newer version
compiled using a similar configuration. The document
Documentation/admin-guide/reporting-regressions.rst explains this in more
detail. It also provides a good deal of other information about regressions
you might want to be aware of; it for example explains how to add your issue
to the list of tracked regressions, to ensure it won't fall through the
cracks.

What qualifies as security issue is left to your judgment. Consider reading
Documentation/process/security-bugs.rst before proceeding, as it provides
additional details how to best handle security issues.

An issue is a 'really severe problem' when something totally unacceptably bad
happens. That's for example the case when a Linux kernel corrupts the data
it's handling or damages hardware it's running on. You're also dealing with a
severe issue when the kernel suddenly stops working with an error message
('kernel panic') or without any farewell note at all. Note: do not confuse a
'panic' (a fatal error where the kernel stops itself) with an 'Oops' (a
recoverable error), as the kernel remains running after the latter.

Ensure a healthy environment
----------------------------

   *Make sure it's not the kernel's surroundings that are causing the issue
   you face.*

Problems that look a lot like a kernel issue are sometimes caused by build or
runtime environment. It's hard to rule out that problem completely, but you
should minimize it:

 * Use proven tools when building your kernel, as bugs in the compiler or the
   binutils can cause the resulting kernel to misbehave.
 * Ensure your computer components run within their design specifications;
   that's especially important for the main processor, the main memory, and
   the motherboard. Therefore, stop undervolting or overclocking when facing a
   potential kernel issue.

 * Try to make sure it's not faulty hardware that is causing your issue. Bad
   main memory for example can result in a multitude of issues that will
   manifest itself in problems looking like kernel issues.

 * If you're dealing with a filesystem issue, you might want to check the file
   system in question with ``fsck``, as it might be damaged in a way that
   leads to unexpected kernel behavior.

 * When dealing with a regression, make sure it's not something else that
   changed in parallel to updating the kernel. The problem for example might
   be caused by other software that was updated at the same time. It can also
   happen that a hardware component coincidentally just broke when you
   rebooted into a new kernel for the first time. Updating the system's BIOS
   or changing something in the BIOS Setup can also lead to problems that look
   a lot like a kernel regression.

Prepare for emergencies
-----------------------

   *Create a fresh backup and put system repair and restore tools at hand.*

Reminder, you are dealing with computers, which sometimes do unexpected
things, especially if you fiddle with crucial parts like the kernel of its
operating system. That's what you are about to do in this process. Thus, make
sure to create a fresh backup; also ensure you have all tools at hand to
repair or reinstall the operating system as well as everything you need to
restore the backup.

Make sure your kernel doesn't get enhanced
------------------------------------------

   *Ensure your system does not enhance its kernels by building additional
   kernel modules on-the-fly, which solutions like DKMS might be doing locally
   without your knowledge.*

The risk your issue report gets ignored or rejected dramatically increases if
your kernel gets enhanced in any way. That's why you should remove or disable
mechanisms like akmods and DKMS: those build add-on kernel modules
automatically, for example when you install a new Linux kernel or boot it for
the first time. Also remove any modules they might have installed. Then reboot
before proceeding.

Note, you might not be aware that your system is using one of these solutions:
they often get set up silently when you install Nvidia's proprietary graphics
driver, VirtualBox, or other software that requires some support from a module
not part of the Linux kernel. That's why you might need to uninstall the
packages with such software to get rid of any 3rd party kernel module.

Check 'taint' flag
------------------

   *Check if your kernel was 'tainted' when the issue occurred, as the event
   that made the kernel set this flag might be causing the issue you face.*

The kernel marks itself with a 'taint' flag when something happens that might
lead to follow-up errors that look totally unrelated. The issue you face might
be such an error if your kernel is tainted. That's why it's in your interest
to rule this out early before investing more time into this process. This is
the only reason why this step is here, as this process later will tell you to
install the latest mainline kernel; you will need to check the taint flag
again then, as that's when it matters because it's the kernel the report will
focus on.

On a running system it is easy to check if the kernel tainted itself: if ``cat
/proc/sys/kernel/tainted`` returns '0' then the kernel is not tainted and
everything is fine. Checking that file is impossible in some situations;
that's why the kernel also mentions the taint status when it reports an
internal problem (a 'kernel bug'), a recoverable error (a 'kernel Oops') or a
non-recoverable error before halting operation (a 'kernel panic'). Look near
the top of the error messages printed when one of these occurs and search for
a line starting with 'CPU:'.
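A non-zero value in that file is a bit mask. As a rough illustration, here is a
small sketch that decodes a handful of well-known taint bits; the bit meanings
follow Documentation/admin-guide/tainted-kernels.rst, but only a subset is
shown, so treat it as an incomplete example rather than a complete decoder:

```shell
#!/bin/sh
# Sketch: decode a few well-known bits of the taint value found in
# /proc/sys/kernel/tainted. Incomplete on purpose; see
# Documentation/admin-guide/tainted-kernels.rst for the full table.
decode_taint() {
    val=$1
    [ "$val" -eq 0 ] && { echo "not tainted"; return; }
    out=""
    [ $((val & 1)) -ne 0 ]    && out="$out P(proprietary-module)"
    [ $((val & 128)) -ne 0 ]  && out="$out D(oops-or-bug-occurred)"
    [ $((val & 512)) -ne 0 ]  && out="$out W(warning-occurred)"
    [ $((val & 4096)) -ne 0 ] && out="$out O(out-of-tree-module)"
    [ $((val & 8192)) -ne 0 ] && out="$out E(unsigned-module)"
    echo "tainted:$out"
}

# Decode the current kernel's taint state (falls back to 0 when /proc
# is unavailable, e.g. in a chroot).
decode_taint "$(cat /proc/sys/kernel/tainted 2>/dev/null || echo 0)"
```

For example, a value of 4096 means an out-of-tree module was loaded at some
point, which is exactly the kind of taint the preceding paragraphs ask you to
eliminate before reporting.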
It should end with 'Not tainted' if the kernel was not tainted when it noticed
the problem; it was tainted if you see 'Tainted:' followed by a few spaces and
some letters.

If your kernel is tainted, study
Documentation/admin-guide/tainted-kernels.rst to find out why. Try to
eliminate the reason. Often it's caused by one of these three things:

 1. A recoverable error (a 'kernel Oops') occurred and the kernel tainted
    itself, as the kernel knows it might misbehave in strange ways after that
    point. In that case check your kernel or system log and look for a section
    that starts with this::

       Oops: 0000 [#1] SMP

    That's the first Oops since boot-up, as the '#1' between the brackets
    shows. Every Oops and any other problem that happens after that point
    might be a follow-up problem to that first Oops, even if both look totally
    unrelated. Rule this out by getting rid of the cause for the first Oops
    and reproducing the issue afterwards. Sometimes simply restarting will be
    enough, sometimes a change to the configuration followed by a reboot can
    eliminate the Oops. But don't invest too much time into this at this point
    of the process, as the cause for the Oops might already be fixed in the
    newer Linux kernel version you are going to install later in this process.

 2. Your system uses software that installs its own kernel modules, for
    example Nvidia's proprietary graphics driver or VirtualBox. The kernel
    taints itself when it loads such modules from external sources (even if
    they are Open Source): they sometimes cause errors in unrelated kernel
    areas and thus might be causing the issue you face. You therefore have to
    prevent those modules from loading when you want to report an issue to the
    Linux kernel developers. Most of the time the easiest way to do that is:
    temporarily uninstall such software including any modules they might have
    installed. Afterwards reboot.

 3. The kernel also taints itself when it's loading a module that resides in
    the staging tree of the Linux kernel source. That's a special area for
    code (mostly drivers) that does not yet fulfill the normal Linux kernel
    quality standards. When you report an issue with such a module it's
    obviously okay if the kernel is tainted; just make sure the module in
    question is the only reason for the taint. If the issue happens in an
    unrelated area, reboot and temporarily block the module from being loaded
    by specifying ``foo.blacklist=1`` as kernel parameter (replace 'foo' with
    the name of the module in question).

Document how to reproduce issue
-------------------------------

   *Write down coarsely how to reproduce the issue. If you deal with multiple
   issues at once, create separate notes for each of them and make sure they
   work independently on a freshly booted system. That's needed, as each issue
   needs to get reported to the kernel developers separately, unless they are
   strongly entangled.*

If you deal with multiple issues at once, you'll have to report each of them
separately, as they might be handled by different developers. Describing
various issues in one report also makes it quite difficult for others to tear
it apart. Hence, only combine issues in one report if they are very strongly
entangled.

Additionally, during the reporting process you will have to test if the issue
happens with other kernel versions. Therefore, it will make your work easier
if you know exactly how to reproduce an issue quickly on a freshly booted
system.

Note: it's often fruitless to report issues that only happened once, as they
might be caused by a bit flip due to cosmic radiation. That's why you should
try to rule that out by reproducing the issue before going further. Feel free
to ignore this advice if you are experienced enough to tell a one-time error
due to faulty hardware apart from a kernel issue that rarely happens and thus
is hard to reproduce.
Regression in stable or longterm kernel
---------------------------------------

   *If you are facing a regression within a stable or longterm version line
   (say something broke when updating from 5.10.4 to 5.10.5), scroll down to
   'Dealing with regressions within a stable and longterm kernel line'.*

Regressions within a stable and longterm kernel version line are something the
Linux developers want to fix badly, as such issues are even more unwanted than
regressions in the main development branch, as they can quickly affect a lot
of people. The developers thus want to learn about such issues as quickly as
possible, hence there is a streamlined process to report them. Note,
regressions with newer kernel version lines (say something broke when
switching from 5.9.15 to 5.10.5) do not qualify.

Check where you need to report your issue
-----------------------------------------

   *Locate the driver or kernel subsystem that seems to be causing the issue.
   Find out how and where its developers expect reports. Note: most of the
   time this won't be bugzilla.kernel.org, as issues typically need to be sent
   by mail to a maintainer and a public mailing list.*

It's crucial to send your report to the right people, as the Linux kernel is a
big project and most of its developers are only familiar with a small subset
of it. Quite a few programmers for example only care for just one driver, for
example one for a WiFi chip; its developer likely will only have small or no
knowledge about the internals of remote or unrelated 'subsystems', like the
TCP stack, the PCIe/PCI subsystem, memory management or file systems.

Problem is: the Linux kernel lacks a central bug tracker where you can simply
file your issue and make it reach the developers that need to know about it.
That's why you have to find the right place and way to report issues yourself.
You can do that with the help of a script (see below), but it mainly targets
kernel developers and experts. For everybody else the MAINTAINERS file is the
better place.

How to read the MAINTAINERS file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To illustrate how to use the :ref:`MAINTAINERS <maintainers>` file, let's
assume the WiFi in your Laptop suddenly misbehaves after updating the kernel.
In that case it's likely an issue in the WiFi driver. Obviously it could also
be some code it builds upon, but unless you suspect something like that, stick
to the driver. If it's really something else, the driver's developers will get
the right people involved.

Sadly, there is no way to check which code is driving a particular hardware
component that is both universal and easy.

In case of a problem with the WiFi driver you for example might want to look
at the output of ``lspci -k``, as it lists devices on the PCI/PCIe bus and the
kernel module driving it::

       [user@something ~]$ lspci -k
       [...]
       3a:00.0 Network controller: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter (rev 32)
         Subsystem: Bigfoot Networks, Inc. Device 1535
         Kernel driver in use: ath10k_pci
         Kernel modules: ath10k_pci
       [...]

But this approach won't work if your WiFi chip is connected over USB or some
other internal bus. In those cases you might want to check your WiFi manager
or the output of ``ip link``. Look for the name of the problematic network
interface, which might be something like 'wlp58s0'. This name can be used like
this to find the module driving it::

       [user@something ~]$ realpath --relative-to=/sys/module /sys/class/net/wlp58s0/device/driver/module
       ath10k_pci

In case tricks like these don't bring you any further, try to search the
internet on how to narrow down the driver or subsystem in question. And if you
are unsure which it is: just try your best guess, somebody will help you if
you guessed poorly.

Once you know the driver or subsystem, you want to search for it in the
MAINTAINERS file.
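The ``realpath`` one-liner above can be wrapped into a small helper when you
need to check several interfaces. The following is only a sketch: the function
name and the optional sysfs-root parameter (useful only for exercising the
logic against a fake directory tree) are inventions of this example, not part
of any kernel tooling:

```shell
#!/bin/sh
# Sketch: resolve the kernel module driving a network interface by following
# the device/driver/module symlink under /sys/class/net/<ifname>.
# find_net_driver <ifname> [sysfs-root]
#   sysfs-root defaults to /sys; it's a parameter only so the logic can be
#   tested against a mock tree.
find_net_driver() {
    ifname=$1
    sysroot=${2:-/sys}
    link="$sysroot/class/net/$ifname/device/driver/module"
    if [ -e "$link" ]; then
        basename "$(readlink -f "$link")"
    else
        echo "no module found for $ifname" >&2
        return 1
    fi
}

# example (on real hardware):
#   find_net_driver wlp58s0    # might print e.g. ath10k_pci
```

Interfaces backed by built-in (non-modular) drivers or virtual interfaces like
``lo`` have no ``module`` link, which is why the helper treats a missing link
as "no module found" rather than an error in the lookup itself.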
file  In the case of  ath10k pci  you won t find anything  as the name is too specific  Sometimes you will need to search on the net for help  but before doing so  try a somewhat shorted or modified name when searching the MAINTAINERS file  as then you might find something like this           QUALCOMM ATHEROS ATH10K WIRELESS DRIVER        Mail           A  Some Human  shuman example com         Mailing list   ath10k lists infradead org        Status         Supported        Web page       https   wireless wiki kernel org en users Drivers ath10k        SCM            git git   git kernel org pub scm linux kernel git kvalo ath git        Files          drivers net wireless ath ath10k   Note  the line description will be abbreviations  if you read the plain MAINTAINERS file found in the root of the Linux source tree   Mail   for example will be  M     Mailing list   will be  L   and  Status   will be  S    A section near the top of the file explains these and other abbreviations   First look at the line  Status   Ideally it should be  Supported  or  Maintained   If it states  Obsolete  then you are using some outdated approach that was replaced by a newer solution you need to switch to  Sometimes the code only has someone who provides  Odd Fixes  when feeling motivated  And with  Orphan  you are totally out of luck  as nobody takes care of the code anymore  That only leaves these options  arrange yourself to live with the issue  fix it yourself  or find a programmer somewhere willing to fix it   After checking the status  look for a line starting with  bugs    it will tell you where to find a subsystem specific bug tracker to file your issue  The example above does not have such a line  That is the case for most sections  as Linux kernel development is completely driven by mail  Very few subsystems use a bug tracker  and only some of those rely on bugzilla kernel org   In this and many other cases you thus have to look for lines starting with  Mail   instead  Those 
mention the name and the email addresses for the maintainers of the particular code. Also look for a line starting with 'Mailing list:', which tells you the public mailing list where the code is developed. Your report later needs to go by mail to those addresses. Additionally, for all issue reports sent by email, make sure to add the Linux Kernel Mailing List (LKML) <linux-kernel@vger.kernel.org> to CC. Don't omit either of the mailing lists when sending your issue report by mail later! Maintainers are busy people and might leave some work for other developers on the subsystem-specific list, and LKML is important to have one place where all issue reports can be found.

Finding the maintainers with the help of a script
-------------------------------------------------

For people that have the Linux sources at hand there is a second option to find the proper place to report: the script ``scripts/get_maintainer.pl``, which tries to find all people to contact. It queries the MAINTAINERS file and needs to be called with a path to the source code in question. For drivers compiled as a module, it often can be found with a command like this::

     [user@something ~]$ modinfo ath10k_pci | grep filename | sed 's!/lib/modules/.*/kernel/!!; s!filename:!!; s!\.ko\(\.xz\)\?!!'
     drivers/net/wireless/ath/ath10k/ath10k_pci

Pass parts of this to the script::

     [user@something ~]$ ./scripts/get_maintainer.pl -f drivers/net/wireless/ath/ath10k*
     Some Human <shuman@example.com> (supporter:QUALCOMM ATHEROS ATH10K WIRELESS DRIVER)
     Another S. Human <asomehuman@example.com> (maintainer:NETWORKING DRIVERS)
     ath10k@lists.infradead.org (open list:QUALCOMM ATHEROS ATH10K WIRELESS DRIVER)
     linux-wireless@vger.kernel.org (open list:NETWORKING DRIVERS (WIRELESS))
     netdev@vger.kernel.org (open list:NETWORKING DRIVERS)
     linux-kernel@vger.kernel.org (open list)

Don't send your report to all of them. Send it to the maintainers, which the script calls 'supporter:'; additionally CC the
most specific mailing list for the code as well as the Linux Kernel Mailing List (LKML). In this case you thus would need to send the report to 'Some Human <shuman@example.com>' with 'ath10k@lists.infradead.org' and 'linux-kernel@vger.kernel.org' in CC.

Note: in case you cloned the Linux sources with git, you might want to call ``get_maintainer.pl`` a second time with ``--git``. The script then will look at the commit history to find which people recently worked on the code in question, as they might be able to help. But use these results with care, as it can easily send you in a wrong direction. That for example happens quickly in areas rarely changed (like old or unmaintained drivers): sometimes such code is modified during tree-wide cleanups by developers that do not care about the particular driver at all.

Search for existing reports, second run
---------------------------------------

    *Search the archives of the bug tracker or mailing list in question thoroughly for reports that might match your issue. If you find anything, join the discussion instead of sending a new report.*

As mentioned earlier already: reporting an issue that someone else already brought forward is often a waste of time for everyone involved, especially you as the reporter. That's why you should search for existing reports again, now that you know where they need to be reported to. If it's a mailing list, you will often find its archives on `lore.kernel.org <https://lore.kernel.org/>`_.

But some lists are hosted in different places. That for example is the case for the ath10k WiFi driver used as example in the previous step. But you'll often find the archives for these lists easily on the net. Searching for 'archive ath10k@lists.infradead.org' for example will lead you to the `Info page for the ath10k mailing list <https://lists.infradead.org/mailman/listinfo/ath10k>`_, which at the top links to its `list archives <https://lists.infradead.org/pipermail/ath10k/>`_. Sadly this
and quite a few other lists miss a way to search the archives. In those cases use a regular internet search engine and add something like 'site:lists.infradead.org/pipermail/ath10k/' to your search terms, which limits the results to the archives at that URL.

It's also wise to check the internet, LKML and maybe bugzilla.kernel.org again at this point. If your report needs to be filed in a bug tracker, you may want to check the mailing list archives for the subsystem as well, as someone might have reported it only there.

For details how to search and what to do if you find matching reports, see 'Search for existing reports, first run' above.

Do not hurry with this step of the reporting process: spending 30 to 60 minutes or even more time can save you and others quite a lot of time and trouble.

Install a fresh kernel for testing
----------------------------------

    *Unless you are already running the latest 'mainline' Linux kernel, better go and install it for the reporting process. Testing and reporting with the latest 'stable' Linux can be an acceptable alternative in some situations; during the merge window that actually might be even the best approach, but in that development phase it can be an even better idea to suspend your efforts for a few days anyway. Whatever version you choose, ideally use a 'vanilla' build. Ignoring this advice will dramatically increase the risk your report will be rejected or ignored.*

As mentioned in the detailed explanation for the first step already: like most programmers, Linux kernel developers don't like to spend time dealing with reports for issues that don't even happen with the current code. It's just a waste of everybody's time, especially yours. That's why it's in everybody's interest that you confirm the issue still exists with the latest upstream code before reporting it. You are free to ignore this advice, but as outlined earlier: doing so dramatically increases the risk that your issue
report might get rejected or simply ignored.

In the scope of the kernel, 'latest upstream' normally means:

 * Install a mainline kernel; the latest stable kernel can be an option, but most of the time is better avoided. Longterm kernels (sometimes called 'LTS kernels') are unsuitable at this point of the process. The next subsection explains all of this in more detail.

 * The subsection after that describes ways to obtain and install such a kernel. It also outlines that using a pre-compiled kernel is fine, but better is a vanilla one, which means: it was built using Linux sources taken straight from `kernel.org <https://kernel.org/>`_ and not modified or enhanced in any way.

Choosing the right version for testing
--------------------------------------

Head over to `kernel.org <https://kernel.org/>`_ to find out which version you want to use for testing. Ignore the big yellow button that says 'Latest release' and look a little lower at the table. At its top you'll see a line starting with 'mainline', which most of the time will point to a pre-release with a version number like '5.8-rc2'. If that's the case, you'll want to use this mainline kernel for testing, as that is where all fixes have to be applied first. Do not let that 'rc' scare you: these 'development kernels' are pretty reliable; and you made a backup, as you were instructed above, didn't you?

In about two out of every nine to ten weeks, 'mainline' might point you to a proper release with a version number like '5.7'. If that happens, consider suspending the reporting process until the first pre-release of the next version (5.8-rc1) shows up on kernel.org. That's because the Linux development cycle then is in its two-week-long 'merge window'. The bulk of the changes and all intrusive ones get merged for the next release during this time. It's a bit more risky to use mainline during this period. Kernel developers are also often quite busy then and might have no spare time to deal with issue
reports. It's also quite possible that one of the many changes applied during the merge window fixes the issue you face; that's why you soon would have to retest with a newer kernel version anyway, as outlined below in the section 'Duties after the report went out'.

That's why it might make sense to wait till the merge window is over. But don't do that if you're dealing with something that shouldn't wait. In that case consider obtaining the latest mainline kernel via git (see below) or use the latest stable version offered on kernel.org. Using that is also acceptable in case mainline for some reason does currently not work for you. And in general: using it for reproducing the issue is also better than not reporting the issue at all.

Better avoid using the latest stable kernel outside merge windows, as all fixes must be applied to mainline first. That's why checking the latest mainline kernel is so important: any issue you want to see fixed in older version lines needs to be fixed in mainline first before it can get backported, which can take a few days or weeks. Another reason: the fix you hope for might be too hard or risky for backporting; reporting the issue again hence is unlikely to change anything.

These aspects are also why longterm kernels (sometimes called 'LTS kernels') are unsuitable for this part of the reporting process: they are too distant from the current code. Hence go and test mainline first and follow the process further: if the issue doesn't occur with mainline, it will guide you how to get it fixed in older version lines, if that's in the cards for the fix in question.

How to obtain a fresh Linux kernel
----------------------------------

**Using a pre-compiled kernel**: This is often the quickest, easiest, and safest way for testing, especially if you are unfamiliar with the Linux kernel. The problem: most of those shipped by distributors or add-on repositories are built from modified Linux sources. They are thus not vanilla and therefore
often unsuitable for testing and issue reporting: the changes might cause the issue you face or influence it somehow.

But you are in luck if you are using a popular Linux distribution: for quite a few of them you'll find repositories on the net that contain packages with the latest mainline or stable Linux built as vanilla kernel. It's totally okay to use these, just make sure from the repository's description they are vanilla or at least close to it. Additionally, ensure the packages contain the latest versions as offered on kernel.org. The packages are likely unsuitable if they are older than a week, as new mainline and stable kernels typically get released at least once a week.

Please note that you might need to build your own kernel manually later: that's sometimes needed for debugging or testing fixes, as described later in this document. Also be aware that pre-compiled kernels might lack debug symbols that are needed to decode messages the kernel prints when a panic, Oops, warning, or BUG occurs; if you plan to decode those, you might be better off compiling a kernel yourself (see the end of this subsection and the section titled 'Decode failure messages' for details).

**Using git**: Developers and experienced Linux users familiar with git are often best served by obtaining the latest Linux kernel sources straight from the `official development repository on kernel.org <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/>`_. Those are likely a bit ahead of the latest mainline pre-release. Don't worry about it: they are as reliable as a proper pre-release, unless the kernel's development cycle is currently in the middle of a merge window. But even then they are quite reliable.

**Conventional**: People unfamiliar with git are often best served by downloading the sources as tarball from `kernel.org <https://kernel.org/>`_.

How to actually build a kernel is not described here, as many websites explain the necessary steps already. If you
are new to it, consider following one of those how-tos that suggest to use ``make localmodconfig``, as that tries to pick up the configuration of your current kernel and then tries to adjust it somewhat for your system. That does not make the resulting kernel any better, but quicker to compile.

Note: If you are dealing with a panic, Oops, warning, or BUG from the kernel, please try to enable CONFIG_KALLSYMS when configuring your kernel. Additionally, enable CONFIG_DEBUG_KERNEL and CONFIG_DEBUG_INFO, too; the latter is the relevant one of those two, but can only be reached if you enable the former. Be aware CONFIG_DEBUG_INFO increases the storage space required to build a kernel by quite a bit. But that's worth it, as these options will allow you later to pinpoint the exact line of code that triggers your issue. The section 'Decode failure messages' below explains this in more detail.

But keep in mind: always keep a record of the issue encountered in case it is hard to reproduce. Sending an undecoded report is better than not reporting the issue at all.

Check 'taint' flag
------------------

    *Ensure the kernel you just installed does not 'taint' itself when running.*

As outlined above in more detail already: the kernel sets a 'taint' flag when something happens that can lead to follow-up errors that look totally unrelated. That's why you need to check if the kernel you just installed does not set this flag. And if it does, you in almost all the cases need to eliminate the reason for it before reporting issues that occur with it. See the section above for details how to do that.

Reproduce issue with the fresh kernel
-------------------------------------

    *Reproduce the issue with the kernel you just installed. If it doesn't show up there, scroll down to the instructions for issues only happening with stable and longterm kernels.*

Check if the issue occurs with the fresh Linux kernel version you just installed. If it was fixed there
already, consider sticking with this version line and abandoning your plan to report the issue. But keep in mind that other users might still be plagued by it, as long as it's not fixed in either stable and longterm version from kernel.org (and thus vendor kernels derived from those). If you prefer to use one of those or just want to help their users, head over to the section 'Details about reporting issues only occurring in older kernel version lines' below.

Optimize description to reproduce issue
---------------------------------------

    *Optimize your notes: try to find and write the most straightforward way to reproduce your issue. Make sure the end result has all the important details, and at the same time is easy to read and understand for others that hear about it for the first time. And if you learned something in this process, consider searching again for existing reports about the issue.*

An unnecessarily complex report will make it hard for others to understand your report. Thus try to find a reproducer that's straightforward to describe and thus easy to understand in written form. Include all important details, but at the same time try to keep it as short as possible.

In this and the previous steps you likely have learned a thing or two about the issue you face. Use this knowledge and search again for existing reports you can join instead.

Decode failure messages
-----------------------

    *If your failure involves a 'panic', 'Oops', 'warning', or 'BUG', consider decoding the kernel log to find the line of code that triggered the error.*

When the kernel detects an internal problem, it will log some information about the executed code. This makes it possible to pinpoint the exact line in the source code that triggered the issue and shows how it was called. But that only works if you enabled CONFIG_DEBUG_INFO and CONFIG_KALLSYMS when configuring your kernel. If you did so, consider decoding the information from the kernel
's log. That will make it a lot easier to understand what led to the 'panic', 'Oops', 'warning', or 'BUG', which increases the chances that someone can provide a fix.

Decoding can be done with a script you find in the Linux source tree. If you are running a kernel you compiled yourself earlier, call it like this::

     [user@something ~]$ sudo dmesg | ./linux-5.10.5/scripts/decode_stacktrace.sh ./linux-5.10.5/vmlinux

If you are running a packaged vanilla kernel, you will likely have to install the corresponding packages with debug symbols. Then call the script (which you might need to get from the Linux sources if your distro does not package it) like this::

     [user@something ~]$ sudo dmesg | ./linux-5.10.5/scripts/decode_stacktrace.sh \
        /usr/lib/debug/lib/modules/5.10.10-4.1.x86_64/vmlinux /usr/src/kernels/5.10.10-4.1.x86_64/

The script will work on log lines like the following, which show the address of the code the kernel was executing when the error occurred::

     [   68.387301] RIP: 0010:test_module_init+0x5/0xffa [test_module]

Once decoded, these lines will look like this::

     [   68.387301] RIP: 0010:test_module_init (/home/username/linux-5.10.5/test-module/test-module.c:16) test_module

In this case the executed code was built from the file ``~/linux-5.10.5/test-module/test-module.c`` and the error was caused by the instructions found in line 16.

The script will similarly decode the addresses mentioned in the section starting with 'Call trace', which show the path to the function where the problem occurred. Additionally, the script will show the assembler output for the code section the kernel was executing.

Note: if you can't get this to work, simply skip this step and mention the reason for it in the report. If you're lucky, it might not be needed. And if it is, someone might help you to get things going. Also be aware this is just one of several ways to decode kernel stack traces. Sometimes different steps will be required
to retrieve the relevant details. Don't worry about that: if that's needed in your case, developers will tell you what to do.

Special care for regressions
----------------------------

    *If your problem is a regression, try to narrow down when the issue was introduced as much as possible.*

Linux lead developer Linus Torvalds insists that the Linux kernel never worsens; that's why he deems regressions as unacceptable and wants to see them fixed quickly. That's why changes that introduced a regression are often promptly reverted if the issue they cause can't get solved quickly any other way. Reporting a regression is thus a bit like playing a kind of trump card to get something quickly fixed. But for that to happen, the change that's causing the regression needs to be known. Normally it's up to the reporter to track down the culprit, as maintainers often won't have the time or setup at hand to reproduce it themselves.

To find the change there is a process called 'bisection', which the document Documentation/admin-guide/bug-bisect.rst describes in detail. That process will often require you to build about ten to twenty kernel images, trying to reproduce the issue with each of them before building the next. Yes, that takes some time, but don't worry: it works a lot quicker than most people assume. Thanks to a 'binary search' this will lead you to the one commit in the source code management system that's causing the regression. Once you find it, search the net for the subject of the change, its commit id and the shortened commit id (the first 12 characters of the commit id). This will lead you to existing reports about it, if there are any.

Note: a bisection needs a bit of know-how, which not everyone has, and quite a bit of effort, which not everyone is willing to invest. Nevertheless, it's highly recommended to perform a bisection yourself. If you really can't or don't want to go down that route, at least find out which mainline kernel introduced the
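The mechanics of the binary search can be tried out safely on a toy repository before committing to a real kernel bisection. The sketch below (all names and the "regression" are made up for the demonstration) builds a five-commit history, marks the newest commit bad and the oldest good, and lets ``git bisect`` narrow down the culprit; with a kernel tree, the test in the loop would instead be "build, boot, try to reproduce":

```shell
# Toy repository: five commits, where commit 4 plays the role of the
# change that introduced the regression (it adds the file bug.txt).
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
for i in 1 2 3 4 5; do
    echo "change $i" >> log.txt
    [ "$i" -eq 4 ] && echo bug > bug.txt
    git add -A
    git commit -q -m "change $i"
done

# Newest commit is known bad, the root commit known good:
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" > /dev/null

# At every step, test the checked-out state and tell git the result;
# with a kernel you would build and boot it here instead:
while ! git bisect log | grep -q 'first bad commit'; do
    if [ -f bug.txt ]; then git bisect bad > /dev/null
    else git bisect good > /dev/null; fi
done
git bisect log | grep 'first bad commit'
```

The last line prints the commit that introduced the problem ('change 4' here); for a real regression report you would quote its subject and commit id as described above.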
regression. If something for example breaks when switching from 5.5.15 to 5.8.4, then try at least all the mainline releases in that area (5.6, 5.7 and 5.8) to check when it first showed up. Unless you're trying to find a regression in a stable or longterm kernel, avoid testing versions whose number has three sections (5.6.12, 5.7.8), as that makes the outcome hard to interpret, which might render your testing useless. Once you found the major version which introduced the regression, feel free to move on in the reporting process. But keep in mind: it depends on the issue at hand if the developers will be able to help without knowing the culprit. Sometimes they might recognize from the report what went wrong and can fix it; other times they will be unable to help unless you perform a bisection.

When dealing with regressions, make sure the issue you face is really caused by the kernel and not by something else, as outlined above already.

In the whole process keep in mind: an issue only qualifies as regression if the older and the newer kernel got built with a similar configuration. This can be achieved by using ``make olddefconfig``, as explained in more detail by Documentation/admin-guide/reporting-regressions.rst; that document also provides a good deal of other information about regressions you might want to be aware of.

Write and send the report
-------------------------

    *Start to compile the report by writing a detailed description about the issue. Always mention a few things: the latest kernel version you installed for reproducing, the Linux distribution used, and your notes on how to reproduce the issue. Ideally, make the kernel's build configuration (.config) and the output from ``dmesg`` available somewhere on the net and link to it. Include or upload all other information that might be relevant, like the output/screenshot of an Oops or the output from ``lspci``. Once you wrote this main part, insert a normal-length
    paragraph on top of it outlining the issue and the impact quickly. On top of this add one sentence that briefly describes the problem and gets people to read on. Now give the thing a descriptive title or subject that yet again is shorter. Then you're ready to send or file the report like the MAINTAINERS file told you, unless you are dealing with one of those 'issues of high priority': they need special care which is explained in 'Special handling for high priority issues' below.*

Now that you have prepared everything, it's time to write your report. How to do that is partly explained by the three documents linked to in the preface above. That's why this text will only mention a few of the essentials as well as things specific to the Linux kernel.

There is one thing that fits both categories: the most crucial parts of your report are the title/subject, the first sentence, and the first paragraph. Developers often get quite a lot of mail. They thus often just take a few seconds to skim a mail before deciding to move on or look closer. Thus: the better the top section of your report, the higher are the chances that someone will look into it and help you. And that is why you should ignore them for now and write the detailed report first.

Things each report should mention
---------------------------------

Describe in detail how your issue happens with the fresh vanilla kernel you installed. Try to include the step-by-step instructions you wrote and optimized earlier that outline how you and ideally others can reproduce the issue; in those rare cases where that's impossible, try to describe what you did to trigger it.

Also include all the relevant information others might need to understand the issue and its environment. What's actually needed depends a lot on the issue, but there are some things you should include always:

 * the output from ``cat /proc/version``, which contains the Linux kernel version number and the compiler it
   was built with

 * the Linux distribution the machine is running (``hostnamectl | grep "Operating System"``)

 * the architecture of the CPU and the operating system (``uname -mi``)

 * if you are dealing with a regression and performed a bisection, mention the subject and the commit id of the change that is causing it

In a lot of cases it's also wise to make two more things available to those that read your report:

 * the configuration used for building your Linux kernel (the '.config' file)

 * the kernel's messages that you get from ``dmesg``, written to a file. Make sure that it starts with a line like 'Linux version 5.8-1 (foobar@example.com) (gcc (GCC) 10.2.1, GNU ld version 2.34) #1 SMP Mon Aug 3 14:54:37 UTC 2020'. If it's missing, then important messages from the first boot phase already got discarded. In this case instead consider using ``journalctl -b 0 -k``; alternatively, you can also reboot, reproduce the issue and call ``dmesg`` right afterwards.

These two files are big, that's why it's a bad idea to put them directly into your report. If you are filing the issue in a bug tracker, then attach them to the ticket. If you report the issue by mail, do not attach them, as that makes the mail too large; instead do one of these things:

 * Upload the files somewhere public (your website, a public file paste service, a ticket created just for this purpose on `bugzilla.kernel.org <https://bugzilla.kernel.org/>`_, ...) and include a link to them in your report. Ideally use something where the files stay available for years, as they could be useful to someone many years from now; this for example can happen if five or ten years from now a developer works on some code that was changed just to fix your issue.

 * Put the files aside and mention you will send them later in individual replies to your own mail. Just remember to actually do that once the report went out.

Things that might be wise to provide
------------------------------------

Depending on the issue you might need to add more background data. Here are a few suggestions what often is good to provide:

 * If you are dealing with a 'warning', an 'Oops' or a 'panic' from the kernel, include it. If you can't copy'n'paste it, try to capture a netconsole trace or at least take a picture of the screen.

 * If the issue might be related to your computer hardware, mention what kind of system you use. If you for example have problems with your graphics card, mention its manufacturer, the card's model, and what chip it uses. If it's a laptop, mention its name, but try to make sure it's meaningful. 'Dell XPS 13' for example is not, because it might be the one from 2012; that one looks not that different from the one sold today, but apart from that the two have nothing in common. Hence, in such cases add the exact model number, which for example are '9380' or '7390' for XPS 13 models introduced during 2019. Names like 'Lenovo Thinkpad T590' are also somewhat ambiguous: there are variants of this laptop with and without a dedicated graphics chip, so try to find the exact model name or specify the main components.

 * Mention the relevant software in use. If you have problems with loading modules, you want to mention the versions of kmod, systemd, and udev in use. If one of the DRM drivers misbehaves, you want to state the versions of libdrm and Mesa; also specify your Wayland compositor or the X-Server and its driver. If you have a filesystem issue, mention the version of corresponding filesystem utilities (e2fsprogs, btrfs-progs, xfsprogs, ...).

 * Gather additional information from the kernel that might be of interest. The output from ``lspci -nn`` will for example help others to identify what hardware you use. If you have a problem with hardware, you even might want to make the output from ``sudo lspci -vvv`` available, as that provides insights how the
   components were configured. For some issues it might be good to include the contents of files like ``/proc/cpuinfo``, ``/proc/ioports``, ``/proc/iomem``, ``/proc/modules``, or ``/proc/scsi/scsi``. Some subsystems also offer tools to collect relevant information. One such tool is ``alsa-info.sh``, which the `audio/sound subsystem developers provide <https://www.alsa-project.org/wiki/AlsaInfo>`_.

Those examples should give you some ideas of what data might be wise to attach, but you have to think yourself what will be helpful for others to know. Don't worry too much about forgetting something, as developers will ask for additional details they need. But making everything important available from the start increases the chance someone will take a closer look.

The important part: the head of your report
-------------------------------------------

Now that you have the detailed part of the report prepared, let's get to the most important section: the first few sentences. Thus go to the top, add something like 'The detailed description:' before the part you just wrote and insert two newlines at the top. Now write one normal-length paragraph that describes the issue roughly. Leave out all boring details and focus on the crucial parts readers need to know to understand what this is all about; if you think this bug affects a lot of users, mention this to get people interested.

Once you did that, insert two more lines at the top and write a one-sentence summary that explains quickly what the report is about. After that you have to get even more abstract and write an even shorter subject/title for the report.

Now that you have written this part, take some time to optimize it, as it is the most important part of your report: a lot of people will only read this before they decide if reading the rest is time well spent.

Now send or file the report like the :ref:`MAINTAINERS <maintainers>` file told you, unless it's one of those 'issues of high priority'
outlined earlier: in that case please read the next subsection first before sending the report on its way.

Special handling for high priority issues
-----------------------------------------

Reports for high priority issues need special handling.

**Severe issues**: make sure the subject or ticket title as well as the first paragraph makes the severeness obvious.

**Regressions**: make the report's subject start with '[REGRESSION]'.

In case you performed a successful bisection, use the title of the change that introduced the regression as the second part of your subject. Make the report also mention the commit id of the culprit. In case of an unsuccessful bisection, make your report mention the latest tested version that's working fine (say 5.7) and the oldest where the issue occurs (say 5.8-rc1).

When sending the report by mail, CC the Linux regressions mailing list (regressions@lists.linux.dev). In case the report needs to be filed to some web tracker, proceed to do so. Once filed, forward the report by mail to the regressions list; CC the maintainer and the mailing list for the subsystem in question. Make sure to inline the forwarded report, hence do not attach it. Also add a short note at the top where you mention the URL to the ticket.

When mailing or forwarding the report, in case of a successful bisection add the author of the culprit to the recipients; also CC everyone in the signed-off-by chain, which you find at the end of its commit message.

**Security issues**: for these issues you will have to evaluate if a short-term risk to other users would arise if details were publicly disclosed. If that's not the case, simply proceed with reporting the issue as described. For issues that bear such a risk you will need to adjust the reporting process slightly:

 * If the MAINTAINERS file instructed you to report the issue by mail, do not CC any public mailing lists.

 * If you were supposed to file the issue in a bug tracker, make sure to mark the
   ticket as 'private' or 'security issue'. If the bug tracker does not offer a way to keep reports private, forget about it and send your report as a private mail to the maintainers instead.

In both cases make sure to also mail your report to the addresses the MAINTAINERS file lists in the section 'security contact'. Ideally directly CC them when sending the report by mail. If you filed it in a bug tracker, forward the report's text to these addresses, but on top of it put a small note where you mention that you filed it, with a link to the ticket.

See Documentation/process/security-bugs.rst for more information.

Duties after the report went out
--------------------------------

    *Wait for reactions and keep the thing rolling until you can accept the outcome in one way or the other. Thus react publicly and in a timely manner to any inquiries. Test proposed fixes. Do proactive testing: retest with at least every first release candidate (RC) of a new mainline version and report your results. Send friendly reminders if things stall. And try to help yourself, if you don't get any help or if it's unsatisfying.*

If your report was good and you are really lucky, then one of the developers might immediately spot what's causing the issue; they then might write a patch to fix it, test it, and send it straight for integration in mainline while tagging it for later backport to stable and longterm kernels that need it. Then all you need to do is reply with a 'Thank you very much' and switch to a version with the fix once it gets released.

But this ideal scenario rarely happens. That's why the job is only starting once you got the report out. What you'll have to do depends on the situation, but often it will be the things listed below. But before digging into the details, here are a few important things you need to keep in mind for this part of the process.

General advice for further interactions
---------------------------------------

Always
reply in public:

When you filed the issue in a bug tracker, always reply there and do not contact any of the developers privately about it. For mailed reports, always use the "Reply all" function when replying to any mails you receive. That includes mails with any additional data you might want to add to your report: go to your mail application's "Sent" folder and use "reply all" on your mail with the report. This approach will make sure the public mailing list(s) and everyone else that gets involved over time stays in the loop; it also keeps the mail thread intact, which among others is really important for mailing lists to group all related mails together.

There are just two situations where a comment in a bug tracker or a "Reply all" is unsuitable:

 * Someone tells you to send something privately.

 * You were told to send something, but noticed it contains sensitive
   information that needs to be kept private. In that case it's okay to send it
   in private to the developer that asked for it. But note in the ticket or a
   mail that you did that, so everyone else knows you honored the request.

Do research before asking for clarifications or help:

In this part of the process someone might tell you to do something that requires a skill you might not have mastered yet. For example, you might be asked to use some test tools you have never heard of yet; or you might be asked to apply a patch to the Linux kernel sources to test if it helps. In some cases it will be fine sending a reply asking for instructions how to do that. But before going that route, try to find the answer on your own by searching the internet; alternatively, consider asking in other places for advice. For example, ask a friend or post about it to a chatroom or forum you normally hang out in.

Be patient:

If you are really lucky you might get a reply to your report within a few hours. But most of the time it will take longer, as maintainers are scattered around the globe and thus might be
in a different time zone, one where they already enjoy their night away from keyboard.

In general, kernel developers will take one to five business days to respond to reports. Sometimes it will take longer, as they might be busy with the merge windows, other work, visiting developer conferences, or simply enjoying a long summer holiday.

The "issues of high priority" (see above for an explanation) are an exception here: maintainers should address them as soon as possible; that's why you should wait a week at maximum (or just two days if it's something urgent) before sending a friendly reminder.

Sometimes the maintainer might not be responding in a timely manner; other times there might be disagreements, for example if an issue qualifies as regression or not. In such cases raise your concerns on the mailing list and ask others for public or private replies how to move on. If that fails, it might be appropriate to get a higher authority involved. In case of a WiFi driver that would be the wireless maintainers; if there are no higher level maintainers or all else fails, it might be one of those rare situations where it's okay to get Linus Torvalds involved.

Proactive testing:

Every time the first pre-release (the "rc1") of a new mainline kernel version gets released, go and check if the issue is fixed there or if anything of importance changed. Mention the outcome in the ticket or in a mail you sent as reply to your report (make sure it has all those in the CC that up to that point participated in the discussion). This will show your commitment and that you are willing to help. It also tells developers if the issue persists and makes sure they do not forget about it. A few other occasional retests (for example with rc3, rc5 and the final) are also a good idea, but only report your results if something relevant changed or if you are writing something anyway.

With all these general things off the table, let's get into the details of how to help to get issues
resolved once they were reported.


Inquiries and testing requests
------------------------------

Here are your duties in case you got replies to your report:

Check who you deal with:

Most of the time it will be the maintainer or a developer of the particular code area that will respond to your report. But as issues are normally reported in public, it could be anyone that's replying, including people that want to help, but in the end might guide you totally off track with their questions or requests. That rarely happens, but it's one of many reasons why it's wise to quickly run an internet search to see who you're interacting with. By doing this you also become aware if your report was heard by the right people, as a reminder to the maintainer (see below) might be in order later if the discussion fades out without leading to a satisfying solution for the issue.

Inquiries for data:

Often you will be asked to test something or provide additional details. Try to provide the requested information soon, as you have the attention of someone that might help and risk losing it the longer you wait; that outcome is even likely if you do not provide the information within a few business days.

Requests for testing:

When you are asked to test a diagnostic patch or a possible fix, try to test it in a timely manner, too. But do it properly and make sure to not rush it: mixing things up can happen easily and can lead to a lot of confusion for everyone involved. A common mistake for example is thinking a proposed patch with a fix was applied, but in fact wasn't. Things like that happen even to experienced testers occasionally, but they most of the time will notice when the kernel with the fix behaves just as one without it.


What to do when nothing of substance happens
--------------------------------------------

Some reports will not get any reaction from the responsible Linux kernel developers; or a discussion around the issue evolved, but faded out with nothing of substance
coming out of it. In these cases, wait two (better: three) weeks before sending a friendly reminder; maybe the maintainer was just away from keyboard for a while when your report arrived, or had something more important to take care of. When writing the reminder, kindly ask if anything else from your side is needed to get the ball rolling somehow. If the report got out by mail, do that in the first lines of a mail that is a reply to your initial mail (see above) which includes a full quote of the original report below: that's one of those few situations where such a "TOFU" (Text Over, Fullquote Under) is the right approach, as then all the recipients will have the details at hand immediately in the proper order.

After the reminder, wait three more weeks for replies. If you still don't get a proper reaction, you first should reconsider your approach. Did you maybe try to reach out to the wrong people? Was the report maybe offensive or so confusing that people decided to completely stay away from it? The best way to rule out such factors: show the report to one or two people familiar with FLOSS issue reporting and ask for their opinion. Also ask them for their advice how to move forward. That might mean: prepare a better report and have those people review it before you send it out. Such an approach is totally fine; just mention that this is the second and improved report on the issue and include a link to the first report.

If the report was proper, you can send a second reminder; in it, ask for advice why the report did not get any replies. A good moment for this second reminder mail is shortly after the first pre-release (the "rc1") of a new Linux kernel version got published, as you should retest and provide a status update at that point anyway (see above).

If the second reminder again results in no reaction within a week, try to contact a higher-level maintainer asking for advice: even busy maintainers by then should at least have sent some kind of acknowledgment.
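The rc1-timed retesting and the "is this version line still supported" checks described here can be scripted: kernel.org publishes a machine-readable release list at https://www.kernel.org/releases.json. The sketch below is a minimal, hedged example; the sample data is made up, and the exact field names (``moniker``, ``version``, ``iseol``) are assumptions to verify against the live feed, which you would fetch with ``urllib`` or ``curl`` instead of using an inline sample.

```python
# Sketch: summarize the kernel.org release feed to decide when to retest
# (new mainline rc) and which stable/longterm lines are still supported.
# The "sample" list below is hypothetical; field names mirror releases.json.

def summarize(releases):
    """Return the current mainline version and per-line EOL status."""
    mainline = next(r["version"] for r in releases if r["moniker"] == "mainline")
    eol = {r["version"]: r.get("iseol", False)
           for r in releases if r["moniker"] in ("stable", "longterm")}
    return mainline, eol

sample = [  # hypothetical snapshot of the feed's "releases" array
    {"moniker": "mainline", "version": "6.10-rc1", "iseol": False},
    {"moniker": "stable", "version": "6.9.3", "iseol": False},
    {"moniker": "longterm", "version": "5.10.218", "iseol": False},
]

mainline, eol = summarize(sample)
print(mainline)  # the pre-release to retest with before sending a reminder
print([v for v, dead in eol.items() if not dead])  # lines still worth reporting against
```

In practice you would replace ``sample`` with the parsed JSON from the live feed and run this before composing a reminder, so the status update can mention the exact rc you retested with.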
Remember to prepare yourself for a disappointment: maintainers ideally should react somehow to every issue report, but they are only obliged to fix those "issues of high priority" outlined earlier. So don't be too devastated if you get a reply along the lines of "thanks for the report, I have more important issues to deal with currently and won't have time to look into this for the foreseeable future".

It's also possible that after some discussion in the bug tracker or on a list nothing happens anymore and reminders don't help to motivate anyone to work out a fix. Such situations can be devastating, but are within the cards when it comes to Linux kernel development. This and several other reasons for not getting help are explained in "Why some issues won't get any reaction or remain unfixed after being reported" near the end of this document.

Don't get devastated if you don't find any help or if the issue in the end does not get solved: the Linux kernel is FLOSS and thus you can still help yourself. You for example could try to find others that are affected and team up with them to get the issue resolved. Such a team could prepare a fresh report together that mentions how many of you there are and why this is something that in your opinion should get fixed. Maybe together you can also narrow down the root cause or the change that introduced a regression, which often makes developing a fix easier. And with a bit of luck there might be someone in the team that knows a bit about programming and might be able to write a fix.


Reference for "Reporting regressions within a stable and longterm kernel line"
------------------------------------------------------------------------------

This subsection provides details for the steps you need to perform if you face a regression within a stable and longterm kernel line.

Make sure the particular version line still gets support
--------------------------------------------------------

   Check if the kernel developers still
   maintain the Linux kernel version line you care about: go to the front
   page of kernel.org and make sure it mentions the latest release of the
   particular version line without an "(EOL)" tag.

Most kernel version lines only get supported for about three months, as maintaining them longer is quite a lot of work. Hence, only one per year is chosen and gets supported for at least two years (often six). That's why you need to check if the kernel developers still support the version line you care for.

Note: if kernel.org lists two stable version lines on the front page, you should consider switching to the newer one and forget about the older one: support for it is likely to be abandoned soon. Then it will get an "end-of-life" (EOL) stamp. Version lines that reached that point still get mentioned on the kernel.org front page for a week or two, but are unsuitable for testing and reporting.

Search stable mailing list
--------------------------

   Check the archives of the Linux stable mailing list for existing reports.

Maybe the issue you face is already known and was fixed or is about to be. Hence, search the archives of the Linux stable mailing list (https://lore.kernel.org/stable/) for reports about an issue like yours. If you find any matches, consider joining the discussion, unless the fix is already finished and scheduled to get applied soon.

Reproduce issue with the newest release
---------------------------------------

   Install the latest release from the particular version line as a vanilla
   kernel. Ensure this kernel is not tainted and still shows the problem, as
   the issue might have already been fixed there. If you first noticed the
   problem with a vendor kernel, check that a vanilla build of the last
   version known to work performs fine as well.

Before investing any more time in this process, you want to check if the issue was already fixed in the latest release of the version line you're interested in. This kernel needs to be
vanilla and shouldn't be tainted before the issue happens, as already outlined in detail above in the section "Install a fresh kernel for testing".

Did you first notice the regression with a vendor kernel? Then changes the vendor applied might be interfering. You need to rule that out by performing a recheck. Say something broke when you updated from 5.10.4-vendor.42 to 5.10.5-vendor.43. Then after testing the latest 5.10 release as outlined in the previous paragraph, check if a vanilla build of Linux 5.10.4 works fine as well. If things are broken there, the issue does not qualify as upstream regression and you need to switch back to the main step-by-step guide to report the issue.

Report the regression
---------------------

   Send a short problem report to the Linux stable mailing list
   (stable@vger.kernel.org) and CC the Linux regressions mailing list
   (regressions@lists.linux.dev); if you suspect the cause in a particular
   subsystem, CC its maintainer and its mailing list. Roughly describe the
   issue and ideally explain how to reproduce it. Mention the first version
   that shows the problem and the last version that's working fine. Then
   wait for further instructions.

When reporting a regression that happens within a stable or longterm kernel line (say when updating from 5.10.4 to 5.10.5), a brief report is enough for the start to get the issue reported quickly. Hence a rough description sent to the stable and regressions mailing lists is all it takes; but in case you suspect the cause in a particular subsystem, CC its maintainers and its mailing list as well, because that will speed things up.

And note: it helps developers a great deal if you can specify the exact version that introduced the problem. Hence, if possible within a reasonable time frame, try to find that version using vanilla kernels. Let's assume something broke when your distributor released an update from Linux kernel 5.10.5 to 5.10.8. Then, as instructed above, go and check the latest
kernel from that version line, say 5.10.9. If it shows the problem, try a vanilla 5.10.5 to ensure that no patches the distributor applied interfere. If the issue doesn't manifest itself there, try 5.10.7 and then (depending on the outcome) 5.10.8 or 5.10.6 to find the first version where things broke. Mention it in the report and state that 5.10.9 is still broken.

What the previous paragraph outlines is basically a rough manual "bisection". Once your report is out, you might get asked to do a proper one, as it allows to pinpoint the exact change that causes the issue, which then can easily get reverted to fix the issue quickly. Hence consider doing a proper bisection right away if time permits. See the section "Special care for regressions" and the document Documentation/admin-guide/bug-bisect.rst for details how to perform one. In case of a successful bisection, add the author of the culprit to the recipients; also CC everyone in the signed-off-by chain, which you find at the end of its commit message.


Reference for "Reporting issues only occurring in older kernel version lines"
-----------------------------------------------------------------------------

This section provides details for the steps you need to take if you could not reproduce your issue with a mainline kernel, but want to see it fixed in older version lines (aka stable and longterm kernels).

Some fixes are too complex
--------------------------

   Prepare yourself for the possibility that going through the next few steps
   might not get the issue solved in older releases: the fix might be too big
   or risky to get backported there.

Even small and seemingly obvious code changes sometimes introduce new and totally unexpected problems. The maintainers of the stable and longterm kernels are very aware of that and thus only apply changes to these kernels that are within the rules outlined in Documentation/process/stable-kernel-rules.rst.

Complex or risky changes for example do not qualify
and thus only get applied to mainline. Other fixes are easy to get backported to the newest stable and longterm kernels, but too risky to integrate into older ones. So be aware the fix you are hoping for might be one of those that won't be backported to the version line you care about. In that case you'll have no other choice than to live with the issue or switch to a newer Linux version, unless you want to patch the fix into your kernels yourself.

Common preparations
-------------------

   Perform the first three steps in the section "Reporting issues only
   occurring in older kernel version lines" above.

You need to carry out a few steps already described in another section of this guide. Those steps will let you:

 * Check if the kernel developers still maintain the Linux kernel version line
   you care about.

 * Search the Linux stable mailing list for existing reports.

 * Check with the latest release.

Check code history and search for existing discussions
------------------------------------------------------

   Search the Linux kernel version control system for the change that fixed
   the issue in mainline, as its commit message might tell you if the fix is
   scheduled for backporting already. If you don't find anything that way,
   search the appropriate mailing lists for posts that discuss such an issue
   or peer-review possible fixes; then check the discussions if the fix was
   deemed unsuitable for backporting. If backporting was not considered at
   all, join the newest discussion, asking if it's in the cards.

In a lot of cases the issue you deal with will have happened with mainline, but got fixed there. The commit that fixed it would need to get backported as well to get the issue solved. That's why you want to search for it or any discussions about it:

 * First try to find the fix in the Git repository that holds the Linux kernel
   sources. You can do this with the web interfaces on kernel.org
   (https://git.kernel.org/
   pub/scm/linux/kernel/git/torvalds/linux.git/tree/)
   or its mirror on GitHub (https://github.com/torvalds/linux); if you have
   a local clone, you alternatively can search on the command line with
   ``git log --grep=<pattern>``.

 * If you find the fix, look if the commit message near the end contains a
   "stable tag" that looks like this::

       Cc: <stable@vger.kernel.org> # 5.4+

   If that's the case, the developer marked the fix safe for backporting to
   version line 5.4 and later. Most of the time it's getting applied there
   within two weeks, but sometimes it takes a bit longer.

 * If the commit doesn't tell you anything or if you can't find the fix, look
   again for discussions about the issue. Search the net with your favorite
   internet search engine as well as the archives for the Linux kernel
   developers mailing list (https://lore.kernel.org/lkml/). Also read the
   section "Locate kernel area that causes the issue" above and follow the
   instructions to find the subsystem in question: its bug tracker or mailing
   list archive might have the answer you are looking for.

 * If you see a proposed fix, search for it in the version control system as
   outlined above, as the commit might tell you if a backport can be expected.

   * Check the discussions for any indicators the fix might be too risky to
     get backported to the version line you care about. If that's the case
     you have to live with the issue or switch to the kernel version line
     where the fix got applied.

   * If the fix doesn't contain a stable tag and backporting was not
     discussed, join the discussion: mention the version where you face the
     issue and that you would like to see it fixed (if suitable).

Ask for advice
--------------

   One of the former steps should lead to a solution. If that doesn't work
   out, ask the maintainers for the subsystem that seems to be causing the
   issue for advice; CC the mailing list for the particular subsystem
   as well
   as the stable mailing list.

If the previous three steps didn't get you closer to a solution, there is only one option left: ask for advice. Do that in a mail you send to the maintainers for the subsystem where the issue seems to have its roots; CC the mailing list for the subsystem as well as the stable mailing list (stable@vger.kernel.org).


Why some issues won't get any reaction or remain unfixed after being reported
-----------------------------------------------------------------------------

When reporting a problem to the Linux developers, be aware only "issues of high priority" (regressions, security issues, severe problems) are definitely going to get resolved. The maintainers, or if all else fails Linus Torvalds himself, will make sure of that. They and the other kernel developers will fix a lot of other issues as well. But be aware that sometimes they can't or won't help; and sometimes there isn't even anyone to send a report to.

This is best explained with kernel developers that contribute to the Linux kernel in their spare time. Quite a few of the drivers in the kernel were written by such programmers, often because they simply wanted to make their hardware usable on their favorite operating system.

These programmers most of the time will happily fix problems other people report. But nobody can force them to do so, as they are contributing voluntarily.

Then there are situations where such developers really want to fix an issue, but can't: sometimes they lack hardware programming documentation to do so. This often happens when the publicly available docs are superficial or the driver was written with the help of reverse engineering.

Sooner or later, spare time developers will also stop caring for the driver. Maybe their test hardware broke, got replaced by something more fancy, or is so old that it's something you don't find much outside of computer museums anymore. Sometimes a developer stops caring for their code and Linux at all, as
something different in their life became way more important. In some cases nobody is willing to take over the job as maintainer, and nobody can be forced to, as contributing to the Linux kernel is done on a voluntary basis. Abandoned drivers nevertheless remain in the kernel: they are still useful for people, and removing them would be a regression.

The situation is not that different with developers that are paid for their work on the Linux kernel. Those contribute most changes these days. But their employers sooner or later also stop caring for their code or make its programmer focus on other things. Hardware vendors for example earn their money mainly by selling new hardware; quite a few of them hence are not investing much time and energy in maintaining a Linux kernel driver for something they stopped selling years ago. Enterprise Linux distributors often care for a longer time period, but in new versions often leave support for old and rare hardware aside to limit the scope. Often spare time contributors take over once a company orphans some code, but as mentioned above: sooner or later they will leave the code behind, too.

Priorities are another reason why some issues are not fixed, as maintainers quite often are forced to set those, as time to work on Linux is limited. That's true for spare time or the time employers grant their developers to spend on maintenance work on the upstream kernel. Sometimes maintainers also get overwhelmed with reports, even if a driver is working nearly perfectly. To not get completely stuck, the programmer thus might have no other choice than to prioritize issue reports and reject some of them.

But don't worry too much about all of this: a lot of drivers have active maintainers who are quite interested in fixing as many issues as possible.


Closing words
-------------

Compared with other Free/Libre & Open Source Software, it's hard to report issues to the Linux kernel developers: the length and complexity of this document and the
implications between the lines illustrate that. But that's how it is for now. The main author of this text hopes documenting the state of the art will lay some groundwork to improve the situation over time.


..
   end-of-content
..
   This document is maintained by Thorsten Leemhuis <linux@leemhuis.info>. If
   you spot a typo or small mistake, feel free to let him know directly and
   he'll fix it. You are free to do the same in a mostly informal way if you
   want to contribute changes to the text, but for copyright reasons please CC
   linux-doc@vger.kernel.org and "sign-off" your contribution as
   Documentation/process/submitting-patches.rst outlines in the section "Sign
   your work - the Developer's Certificate of Origin".
..
   This text is available under GPL-2.0+ or CC-BY-4.0, as stated at the top
   of the file. If you want to distribute this text under CC-BY-4.0 only,
   please use "The Linux kernel developers" for author attribution and link
   this as source:
   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/plain/Documentation/admin-guide/reporting-issues.rst
..
   Note: Only the content of this RST file as found in the Linux kernel sources
   is available under CC-BY-4.0, as versions of this text that were processed
   (for example by the kernel's build system) might contain content taken from
   files which use a more restrictive license.
{"questions":"linux XFS is a high performance journaling filesystem which originated on the SGI IRIX platform It is completely multi threaded can variable block sizes is extent based and makes extensive use of The SGI XFS Filesystem support large files and large filesystems extended attributes SPDX License Identifier GPL 2 0","answers":".. SPDX-License-Identifier: GPL-2.0\n\n======================\nThe SGI XFS Filesystem\n======================\n\nXFS is a high performance journaling filesystem which originated\non the SGI IRIX platform.  It is completely multi-threaded, can\nsupport large files and large filesystems, extended attributes,\nvariable block sizes, is extent based, and makes extensive use of\nBtrees (directories, extents, free space) to aid both performance\nand scalability.\n\nRefer to the documentation at https:\/\/xfs.wiki.kernel.org\/\nfor further details.  This implementation is on-disk compatible\nwith the IRIX version of XFS.\n\n\nMount Options\n=============\n\nWhen mounting an XFS filesystem, the following options are accepted.\n\n  allocsize=size\n\tSets the buffered I\/O end-of-file preallocation size when\n\tdoing delayed allocation writeout (default size is 64KiB).\n\tValid values for this option are page size (typically 4KiB)\n\tthrough to 1GiB, inclusive, in power-of-2 increments.\n\n\tThe default behaviour is for dynamic end-of-file\n\tpreallocation size, which uses a set of heuristics to\n\toptimise the preallocation size based on the current\n\tallocation patterns within the file and the access patterns\n\tto the file. Specifying a fixed ``allocsize`` value turns off\n\tthe dynamic behaviour.\n\n  attr2 or noattr2\n\tThe options enable\/disable an \"opportunistic\" improvement to\n\tbe made in the way inline extended attributes are stored\n\ton-disk.  
When the new form is used for the first time when\n\t``attr2`` is selected (either when setting or removing extended\n\tattributes) the on-disk superblock feature bit field will be\n\tupdated to reflect this format being in use.\n\n\tThe default behaviour is determined by the on-disk feature\n\tbit indicating that ``attr2`` behaviour is active. If either\n\tmount option is set, then that becomes the new default used\n\tby the filesystem.\n\n\tCRC enabled filesystems always use the ``attr2`` format, and so\n\twill reject the ``noattr2`` mount option if it is set.\n\n  discard or nodiscard (default)\n\tEnable\/disable the issuing of commands to let the block\n\tdevice reclaim space freed by the filesystem.  This is\n\tuseful for SSD devices, thinly provisioned LUNs and virtual\n\tmachine images, but may have a performance impact.\n\n\tNote: It is currently recommended that you use the ``fstrim``\n\tapplication to ``discard`` unused blocks rather than the ``discard``\n\tmount option because the performance impact of this option\n\tis quite severe.\n\n  grpid\/bsdgroups or nogrpid\/sysvgroups (default)\n\tThese options define what group ID a newly created file\n\tgets.  When ``grpid`` is set, it takes the group ID of the\n\tdirectory in which it is created; otherwise it takes the\n\t``fsgid`` of the current process, unless the directory has the\n\t``setgid`` bit set, in which case it takes the ``gid`` from the\n\tparent directory, and also gets the ``setgid`` bit set if it is\n\ta directory itself.\n\n  filestreams\n\tMake the data allocator use the filestreams allocation mode\n\tacross the entire filesystem rather than just on directories\n\tconfigured to use it.\n\n  ikeep or noikeep (default)\n\tWhen ``ikeep`` is specified, XFS does not delete empty inode\n\tclusters and keeps them around on disk.  
When ``noikeep`` is\n\tspecified, empty inode clusters are returned to the free\n\tspace pool.\n\n  inode32 or inode64 (default)\n\tWhen ``inode32`` is specified, it indicates that XFS limits\n\tinode creation to locations which will not result in inode\n\tnumbers with more than 32 bits of significance.\n\n\tWhen ``inode64`` is specified, it indicates that XFS is allowed\n\tto create inodes at any location in the filesystem,\n\tincluding those which will result in inode numbers occupying\n\tmore than 32 bits of significance.\n\n\t``inode32`` is provided for backwards compatibility with older\n\tsystems and applications, since 64 bits inode numbers might\n\tcause problems for some applications that cannot handle\n\tlarge inode numbers.  If applications are in use which do\n\tnot handle inode numbers bigger than 32 bits, the ``inode32``\n\toption should be specified.\n\n  largeio or nolargeio (default)\n\tIf ``nolargeio`` is specified, the optimal I\/O reported in\n\t``st_blksize`` by **stat(2)** will be as small as possible to allow\n\tuser applications to avoid inefficient read\/modify\/write\n\tI\/O.  This is typically the page size of the machine, as\n\tthis is the granularity of the page cache.\n\n\tIf ``largeio`` is specified, a filesystem that was created with a\n\t``swidth`` specified will return the ``swidth`` value (in bytes)\n\tin ``st_blksize``. If the filesystem does not have a ``swidth``\n\tspecified but does specify an ``allocsize`` then ``allocsize``\n\t(in bytes) will be returned instead. Otherwise the behaviour\n\tis the same as if ``nolargeio`` was specified.\n\n  logbufs=value\n\tSet the number of in-memory log buffers.  Valid numbers\n\trange from 2-8 inclusive.\n\n\tThe default value is 8 buffers.\n\n\tIf the memory cost of 8 log buffers is too high on small\n\tsystems, then it may be reduced at some cost to performance\n\ton metadata intensive workloads. 
The ``logbsize`` option below\n\tcontrols the size of each buffer and so is also relevant to\n\tthis case.\n\n  logbsize=value\n\tSet the size of each in-memory log buffer.  The size may be\n\tspecified in bytes, or in kilobytes with a \"k\" suffix.\n\tValid sizes for version 1 and version 2 logs are 16384 (16k)\n\tand 32768 (32k).  Valid sizes for version 2 logs also\n\tinclude 65536 (64k), 131072 (128k) and 262144 (256k). The\n\tlogbsize must be an integer multiple of the log\n\tstripe unit configured at **mkfs(8)** time.\n\n\tThe default value for version 1 logs is 32768, while the\n\tdefault value for version 2 logs is MAX(32768, log_sunit).\n\n  logdev=device and rtdev=device\n\tUse an external log (metadata journal) and\/or real-time device.\n\tAn XFS filesystem has up to three parts: a data section, a log\n\tsection, and a real-time section.  The real-time section is\n\toptional, and the log section can be separate from the data\n\tsection or contained within it.\n\n  noalign\n\tData allocations will not be aligned at stripe unit\n\tboundaries. This is only relevant to filesystems created\n\twith non-zero data alignment parameters (``sunit``, ``swidth``) by\n\t**mkfs(8)**.\n\n  norecovery\n\tThe filesystem will be mounted without running log recovery.\n\tIf the filesystem was not cleanly unmounted, it is likely to\n\tbe inconsistent when mounted in ``norecovery`` mode.\n\tSome files or directories may not be accessible because of this.\n\tFilesystems mounted ``norecovery`` must be mounted read-only or\n\tthe mount will fail.\n\n  nouuid\n\tDon't check for double mounted file systems using the file\n\tsystem ``uuid``.  
This is useful to mount LVM snapshot volumes,\n\tand often used in combination with ``norecovery`` for mounting\n\tread-only snapshots.\n\n  noquota\n\tForcibly turns off all quota accounting and enforcement\n\twithin the filesystem.\n\n  uquota\/usrquota\/uqnoenforce\/quota\n\tUser disk quota accounting enabled, and limits (optionally)\n\tenforced.  Refer to **xfs_quota(8)** for further details.\n\n  gquota\/grpquota\/gqnoenforce\n\tGroup disk quota accounting enabled and limits (optionally)\n\tenforced.  Refer to **xfs_quota(8)** for further details.\n\n  pquota\/prjquota\/pqnoenforce\n\tProject disk quota accounting enabled and limits (optionally)\n\tenforced.  Refer to **xfs_quota(8)** for further details.\n\n  sunit=value and swidth=value\n\tUsed to specify the stripe unit and width for a RAID device\n\tor a stripe volume.  \"value\" must be specified in 512-byte\n\tblock units. These options are only relevant to filesystems\n\tthat were created with non-zero data alignment parameters.\n\n\tThe ``sunit`` and ``swidth`` parameters specified must be compatible\n\twith the existing filesystem alignment characteristics.  In\n\tgeneral, that means the only valid changes to ``sunit`` are\n\tincreasing it by a power-of-2 multiple. Valid ``swidth`` values\n\tare any integer multiple of a valid ``sunit`` value.\n\n\tTypically the only time these mount options are necessary is\n\tafter an underlying RAID device has had its geometry\n\tmodified, such as adding a new disk to a RAID5 lun and\n\treshaping it.\n\n  swalloc\n\tData allocations will be rounded up to stripe width boundaries\n\twhen the current end of file is being extended and the file\n\tsize is larger than the stripe width size.\n\n  wsync\n\tWhen specified, all filesystem namespace operations are\n\texecuted synchronously. This ensures that when the namespace\n\toperation (create, unlink, etc) completes, the change to the\n\tnamespace is on stable storage. 
This is useful in HA setups\n\twhere failover must not result in clients seeing\n\tinconsistent namespace presentation during or after a\n\tfailover event.\n\nDeprecation of V4 Format\n========================\n\nThe V4 filesystem format lacks certain features that are supported by\nthe V5 format, such as metadata checksumming, strengthened metadata\nverification, and the ability to store timestamps past the year 2038.\nBecause of this, the V4 format is deprecated.  All users should upgrade\nby backing up their files, reformatting, and restoring from the backup.\n\nAdministrators and users can detect a V4 filesystem by running xfs_info\nagainst a filesystem mountpoint and checking for a string containing\n\"crc=\".  If no such string is found, please upgrade xfsprogs to the\nlatest version and try again.\n\nThe deprecation will take place in two parts.  Support for mounting V4\nfilesystems can now be disabled at kernel build time via Kconfig option.\nThe option will default to yes until September 2025, at which time it\nwill be changed to default to no.  
In September 2030, support will be\nremoved from the codebase entirely.\n\nNote: Distributors may choose to withdraw V4 format support earlier than\nthe dates listed above.\n\nDeprecated Mount Options\n========================\n\n============================    ================\n  Name\t\t\t\tRemoval Schedule\n============================    ================\nMounting with V4 filesystem     September 2030\nMounting ascii-ci filesystem    September 2030\nikeep\/noikeep\t\t\tSeptember 2025\nattr2\/noattr2\t\t\tSeptember 2025\n============================    ================\n\n\nRemoved Mount Options\n=====================\n\n===========================     =======\n  Name\t\t\t\tRemoved\n===========================\t=======\n  delaylog\/nodelaylog\t\tv4.0\n  ihashsize\t\t\tv4.0\n  irixsgid\t\t\tv4.0\n  osyncisdsync\/osyncisosync\tv4.0\n  barrier\t\t\tv4.19\n  nobarrier\t\t\tv4.19\n===========================     =======\n\nsysctls\n=======\n\nThe following sysctls are available for the XFS filesystem:\n\n  fs.xfs.stats_clear\t\t(Min: 0  Default: 0  Max: 1)\n\tSetting this to \"1\" clears accumulated XFS statistics\n\tin \/proc\/fs\/xfs\/stat.  It then immediately resets to \"0\".\n\n  fs.xfs.xfssyncd_centisecs\t(Min: 100  Default: 3000  Max: 720000)\n\tThe interval at which the filesystem flushes metadata\n\tout to disk and runs internal cache cleanup routines.\n\n  fs.xfs.filestream_centisecs\t(Min: 1  Default: 3000  Max: 360000)\n\tThe interval at which the filesystem ages filestreams cache\n\treferences and returns timed-out AGs back to the free stream\n\tpool.\n\n  fs.xfs.speculative_prealloc_lifetime\n\t(Units: seconds   Min: 1  Default: 300  Max: 86400)\n\tThe interval at which the background scanning for inodes\n\twith unused speculative preallocation runs. 
The scan\n\tremoves unused preallocation from clean inodes and releases\n\tthe unused space back to the free pool.\n\n  fs.xfs.speculative_cow_prealloc_lifetime\n\tThis is an alias for speculative_prealloc_lifetime.\n\n  fs.xfs.error_level\t\t(Min: 0  Default: 3  Max: 11)\n\tA volume knob for error reporting when internal errors occur.\n\tThis will generate detailed messages & backtraces for filesystem\n\tshutdowns, for example.  Current threshold values are:\n\n\t\tXFS_ERRLEVEL_OFF:       0\n\t\tXFS_ERRLEVEL_LOW:       1\n\t\tXFS_ERRLEVEL_HIGH:      5\n\n  fs.xfs.panic_mask\t\t(Min: 0  Default: 0  Max: 511)\n\tCauses certain error conditions to call BUG(). Value is a bitmask;\n\tOR together the tags which represent errors which should cause panics:\n\n\t\tXFS_NO_PTAG                     0\n\t\tXFS_PTAG_IFLUSH                 0x00000001\n\t\tXFS_PTAG_LOGRES                 0x00000002\n\t\tXFS_PTAG_AILDELETE              0x00000004\n\t\tXFS_PTAG_ERROR_REPORT           0x00000008\n\t\tXFS_PTAG_SHUTDOWN_CORRUPT       0x00000010\n\t\tXFS_PTAG_SHUTDOWN_IOERROR       0x00000020\n\t\tXFS_PTAG_SHUTDOWN_LOGERROR      0x00000040\n\t\tXFS_PTAG_FSBLOCK_ZERO           0x00000080\n\t\tXFS_PTAG_VERIFIER_ERROR         0x00000100\n\n\tThis option is intended for debugging only.\n\n  fs.xfs.irix_symlink_mode\t(Min: 0  Default: 0  Max: 1)\n\tControls whether symlinks are created with mode 0777 (default)\n\tor whether their mode is affected by the umask (irix mode).\n\n  fs.xfs.irix_sgid_inherit\t(Min: 0  Default: 0  Max: 1)\n\tControls files created in SGID directories.\n\tIf the group ID of the new file does not match the effective group\n\tID or one of the supplementary group IDs of the parent dir, the\n\tISGID bit is cleared if the irix_sgid_inherit compatibility sysctl\n\tis set.\n\n  fs.xfs.inherit_sync\t\t(Min: 0  Default: 1  Max: 1)\n\tSetting this to \"1\" will cause the \"sync\" flag set\n\tby the **xfs_io(8)** chattr command on a directory to be\n\tinherited by files in 
that directory.\n\n  fs.xfs.inherit_nodump\t\t(Min: 0  Default: 1  Max: 1)\n\tSetting this to \"1\" will cause the \"nodump\" flag set\n\tby the **xfs_io(8)** chattr command on a directory to be\n\tinherited by files in that directory.\n\n  fs.xfs.inherit_noatime\t(Min: 0  Default: 1  Max: 1)\n\tSetting this to \"1\" will cause the \"noatime\" flag set\n\tby the **xfs_io(8)** chattr command on a directory to be\n\tinherited by files in that directory.\n\n  fs.xfs.inherit_nosymlinks\t(Min: 0  Default: 1  Max: 1)\n\tSetting this to \"1\" will cause the \"nosymlinks\" flag set\n\tby the **xfs_io(8)** chattr command on a directory to be\n\tinherited by files in that directory.\n\n  fs.xfs.inherit_nodefrag\t(Min: 0  Default: 1  Max: 1)\n\tSetting this to \"1\" will cause the \"nodefrag\" flag set\n\tby the **xfs_io(8)** chattr command on a directory to be\n\tinherited by files in that directory.\n\n  fs.xfs.rotorstep\t\t(Min: 1  Default: 1  Max: 256)\n\tIn \"inode32\" allocation mode, this option determines how many\n\tfiles the allocator attempts to allocate in the same allocation\n\tgroup before moving to the next allocation group.  
The intent\n\tis to control the rate at which the allocator moves between\n\tallocation groups when allocating extents for new files.\n\nDeprecated Sysctls\n==================\n\n===========================================     ================\n  Name                                          Removal Schedule\n===========================================     ================\nfs.xfs.irix_sgid_inherit                        September 2025\nfs.xfs.irix_symlink_mode                        September 2025\nfs.xfs.speculative_cow_prealloc_lifetime        September 2025\n===========================================     ================\n\n\nRemoved Sysctls\n===============\n\n=============================\t=======\n  Name\t\t\t\tRemoved\n=============================\t=======\n  fs.xfs.xfsbufd_centisec\tv4.0\n  fs.xfs.age_buffer_centisecs\tv4.0\n=============================\t=======\n\nError handling\n==============\n\nXFS can act differently according to the type of error found during its\noperation. The implementation introduces the following concepts to the error\nhandler:\n\n -failure speed:\n\tDefines how fast XFS should propagate an error upwards when a specific\n\terror is found during the filesystem operation. It can propagate\n\timmediately, after a defined number of retries, after a set time period,\n\tor simply retry forever.\n\n -error classes:\n\tSpecifies the subsystem the error configuration will apply to, such as\n\tmetadata IO or memory allocation. Different subsystems will have\n\tdifferent error handlers for which behaviour can be configured.\n\n -error handlers:\n\tDefines the behavior for a specific error.\n\nThe filesystem behavior during an error can be set via ``sysfs`` files. 
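As a minimal sketch (the device name ``sda1`` and the values chosen are hypothetical; the actual class and handler names, and the file layout, are detailed in the rest of this section), these ``sysfs`` files are driven with ordinary shell redirection:

```shell
# Hypothetical sketch: tuning XFS error handlers through sysfs.
# "sda1" stands in for the short device name that appears in kernel
# messages as "XFS (sda1): ..."; substitute your own.
dev=sda1

# Force "immediate fail" semantics at unmount so persistent errors
# cannot hang the unmount:
echo 1 > /sys/fs/xfs/$dev/error/fail_at_unmount

# Let metadata buffer writes failing with EIO be retried 5 times,
# bounded by a 30 second timeout, before the error is propagated:
echo 5  > /sys/fs/xfs/$dev/error/metadata/EIO/max_retries
echo 30 > /sys/fs/xfs/$dev/error/metadata/EIO/retry_timeout_seconds

# Inspect the fallback configuration used by errors that have no
# specific handler:
cat /sys/fs/xfs/$dev/error/metadata/default/max_retries
```

Writes take effect immediately; the directories exist only while the filesystem is mounted.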
Each\nerror handler works independently - the first condition met by an error handler\nfor a specific class will cause the error to be propagated rather than reset and\nretried.\n\nThe action taken by the filesystem when the error is propagated is context\ndependent - it may cause a shutdown in the case of an unrecoverable error,\nit may be reported back to userspace, or it may even be ignored because\nthere's nothing useful we can do with the error or anyone we can report it to (e.g.\nduring unmount).\n\nThe configuration files are organized into the following hierarchy for each\nmounted filesystem:\n\n  \/sys\/fs\/xfs\/<dev>\/error\/<class>\/<error>\/\n\nWhere:\n  <dev>\n\tThe short device name of the mounted filesystem. This is the same device\n\tname that shows up in XFS kernel error messages as \"XFS(<dev>): ...\"\n\n  <class>\n\tThe subsystem the error configuration belongs to. As of 4.9, the defined\n\tclasses are:\n\n\t\t- \"metadata\": applies to metadata buffer write IO\n\n  <error>\n\tThe individual error handler configurations.\n\n\nEach filesystem has \"global\" error configuration options defined in their top\nlevel directory:\n\n  \/sys\/fs\/xfs\/<dev>\/error\/\n\n  fail_at_unmount\t\t(Min:  0  Default:  1  Max: 1)\n\tDefines the filesystem error behavior at unmount time.\n\n\tIf set to a value of 1, XFS will override all other error configurations\n\tduring unmount and replace them with \"immediate fail\" characteristics.\n\ti.e. no retries, no retry timeout. This will always allow unmount to\n\tsucceed when there are persistent errors present.\n\n\tIf set to 0, the configured retry behaviour will continue until all\n\tretries and\/or timeouts have been exhausted. 
This will delay unmount\n\tcompletion when there are persistent errors, and it may prevent the\n\tfilesystem from ever unmounting fully in the case of \"retry forever\"\n\thandler configurations.\n\n\tNote: there is no guarantee that fail_at_unmount can be set while an\n\tunmount is in progress. It is possible that the ``sysfs`` entries are\n\tremoved by the unmounting filesystem before a \"retry forever\" error\n\thandler configuration causes unmount to hang, and hence the filesystem\n\tmust be configured appropriately before unmount begins to prevent\n\tunmount hangs.\n\nEach filesystem has specific error class handlers that define the error\npropagation behaviour for specific errors. There is also a \"default\" error\nhandler defined, which defines the behaviour for all errors that don't have\nspecific handlers defined. Where multiple retry constraints are configured for\na single error, the first retry configuration that expires will cause the error\nto be propagated. The handler configurations are found in the directory:\n\n  \/sys\/fs\/xfs\/<dev>\/error\/<class>\/<error>\/\n\n  max_retries\t\t\t(Min: -1  Default: Varies  Max: INTMAX)\n\tDefines the allowed number of retries of a specific error before\n\tthe filesystem will propagate the error. The retry count for a given\n\terror context (e.g. 
a specific metadata buffer) is reset every time\n\tthere is a successful completion of the operation.\n\n\tSetting the value to \"-1\" will cause XFS to retry forever for this\n\tspecific error.\n\n\tSetting the value to \"0\" will cause XFS to fail immediately when the\n\tspecific error is reported.\n\n\tSetting the value to \"N\" (where 0 < N < Max) will make XFS retry the\n\toperation \"N\" times before propagating the error.\n\n  retry_timeout_seconds\t\t(Min:  -1  Default:  Varies  Max: 1 day)\n\tDefine the amount of time (in seconds) that the filesystem is\n\tallowed to retry its operations when the specific error is\n\tfound.\n\n\tSetting the value to \"-1\" will allow XFS to retry forever for this\n\tspecific error.\n\n\tSetting the value to \"0\" will cause XFS to fail immediately when the\n\tspecific error is reported.\n\n\tSetting the value to \"N\" (where 0 < N < Max) will allow XFS to retry the\n\toperation for up to \"N\" seconds before propagating the error.\n\n**Note:** The default behaviour for a specific error handler is dependent on both\nthe class and error context. For example, the default values for\n\"metadata\/ENODEV\" are \"0\" rather than \"-1\" so that this error handler defaults\nto \"fail immediately\" behaviour. This is done because ENODEV is a fatal,\nunrecoverable error no matter how many times the metadata IO is retried.\n\nWorkqueue Concurrency\n=====================\n\nXFS uses kernel workqueues to parallelize metadata update processes.  This\nenables it to take advantage of storage hardware that can service many IO\noperations simultaneously.  This interface exposes internal implementation\ndetails of XFS, and as such is explicitly not part of any userspace API\/ABI\nguarantee the kernel may give userspace.  
These are undocumented features of\nthe generic workqueue implementation XFS uses for concurrency, and they are\nprovided here purely for diagnostic and tuning purposes and may change at any\ntime in the future.\n\nThe control knobs for a filesystem's workqueues are organized by task at hand\nand the short name of the data device.  They all can be found in:\n\n  \/sys\/bus\/workqueue\/devices\/${task}!${device}\n\n================  ===========\n  Task            Description\n================  ===========\n  xfs_iwalk-$pid  Inode scans of the entire filesystem. Currently limited to\n                  mount time quotacheck.\n  xfs-gc          Background garbage collection of disk space that has been\n                  speculatively allocated beyond EOF or for staging copy on\n                  write operations.\n================  ===========\n\nFor example, the knobs for the quotacheck workqueue for \/dev\/nvme0n1 would be\nfound in \/sys\/bus\/workqueue\/devices\/xfs_iwalk-1111!nvme0n1\/.\n\nThe interesting knobs for XFS workqueues are as follows:\n\n============     ===========\n  Knob           Description\n============     ===========\n  max_active     Maximum number of background threads that can be started to\n                 run the work.\n  cpumask        CPUs upon which the threads are allowed to run.\n  nice           Relative priority of scheduling the threads.  These are the\n                 same nice levels that can be applied to userspace processes.\n============     ===========
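As an illustrative sketch (the ``xfs-gc!sda1`` directory name is hypothetical and depends on the mounted device), the knobs above can be inspected and adjusted like any other workqueue attributes:

```shell
# List the XFS workqueue control directories; "!" separates the task
# name from the device name, so quote paths for the shell.
ls /sys/bus/workqueue/devices/ | grep '^xfs'

# Hypothetical example for a filesystem on /dev/sda1: allow up to 4
# concurrent gc workers and restrict them to CPUs 0-3 (the cpumask
# attribute takes a hexadecimal CPU bitmask):
echo 4 > '/sys/bus/workqueue/devices/xfs-gc!sda1/max_active'
echo f > '/sys/bus/workqueue/devices/xfs-gc!sda1/cpumask'
```

As the text above notes, these knobs are implementation details, not a stable ABI; treat any such tuning as diagnostic and expect the names to change between kernel versions.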
  CRC enabled filesystems always use the   attr2   format  and so  will reject the   noattr2   mount option if it is set     discard or nodiscard  default   Enable disable the issuing of commands to let the block  device reclaim space freed by the filesystem   This is  useful for SSD devices  thinly provisioned LUNs and virtual  machine images  but may have a performance impact    Note  It is currently recommended that you use the   fstrim    application to   discard   unused blocks rather than the   discard    mount option because the performance impact of this option  is quite severe     grpid bsdgroups or nogrpid sysvgroups  default   These options define what group ID a newly created file  gets   When   grpid   is set  it takes the group ID of the  directory in which it is created  otherwise it takes the    fsgid   of the current process  unless the directory has the    setgid   bit set  in which case it takes the   gid   from the  parent directory  and also gets the   setgid   bit set if it is  a directory itself     filestreams  Make the data allocator use the filestreams allocation mode  across the entire filesystem rather than just on directories  configured to use it     ikeep or noikeep  default   When   ikeep   is specified  XFS does not delete empty inode  clusters and keeps them around on disk   When   noikeep   is  specified  empty inode clusters are returned to the free  space pool     inode32 or inode64  default   When   inode32   is specified  it indicates that XFS limits  inode creation to locations which will not result in inode  numbers with more than 32 bits of significance    When   inode64   is specified  it indicates that XFS is allowed  to create inodes at any location in the filesystem   including those which will result in inode numbers occupying  more than 32 bits of significance      inode32   is provided for backwards compatibility with older  systems and applications  since 64 bits inode numbers might  cause problems for some 
applications that cannot handle  large inode numbers   If applications are in use which do  not handle inode numbers bigger than 32 bits  the   inode32    option should be specified     largeio or nolargeio  default   If   nolargeio   is specified  the optimal I O reported in    st blksize   by   stat 2    will be as small as possible to allow  user applications to avoid inefficient read modify write  I O   This is typically the page size of the machine  as  this is the granularity of the page cache    If   largeio   is specified  a filesystem that was created with a    swidth   specified will return the   swidth   value  in bytes   in   st blksize    If the filesystem does not have a   swidth    specified but does specify an   allocsize   then   allocsize     in bytes  will be returned instead  Otherwise the behaviour  is the same as if   nolargeio   was specified     logbufs value  Set the number of in memory log buffers   Valid numbers  range from 2 8 inclusive    The default value is 8 buffers    If the memory cost of 8 log buffers is too high on small  systems  then it may be reduced at some cost to performance  on metadata intensive workloads  The   logbsize   option below  controls the size of each buffer and so is also relevant to  this case     logbsize value  Set the size of each in memory log buffer   The size may be  specified in bytes  or in kilobytes with a  k  suffix   Valid sizes for version 1 and version 2 logs are 16384  16k   and 32768  32k    Valid sizes for version 2 logs also  include 65536  64k   131072  128k  and 262144  256k   The  logbsize must be an integer multiple of the log  stripe unit configured at   mkfs 8    time    The default value for version 1 logs is 32768  while the  default value for version 2 logs is MAX 32768  log sunit      logdev device and rtdev device  Use an external log  metadata journal  and or real time device   An XFS filesystem has up to three parts  a data section  a log  section  and a real time section   The 
real time section is  optional  and the log section can be separate from the data  section or contained within it     noalign  Data allocations will not be aligned at stripe unit  boundaries  This is only relevant to filesystems created  with non zero data alignment parameters    sunit      swidth    by    mkfs 8        norecovery  The filesystem will be mounted without running log recovery   If the filesystem was not cleanly unmounted  it is likely to  be inconsistent when mounted in   norecovery   mode   Some files or directories may not be accessible because of this   Filesystems mounted   norecovery   must be mounted read only or  the mount will fail     nouuid  Don t check for double mounted file systems using the file  system   uuid     This is useful to mount LVM snapshot volumes   and often used in combination with   norecovery   for mounting  read only snapshots     noquota  Forcibly turns off all quota accounting and enforcement  within the filesystem     uquota usrquota uqnoenforce quota  User disk quota accounting enabled  and limits  optionally   enforced   Refer to   xfs quota 8    for further details     gquota grpquota gqnoenforce  Group disk quota accounting enabled and limits  optionally   enforced   Refer to   xfs quota 8    for further details     pquota prjquota pqnoenforce  Project disk quota accounting enabled and limits  optionally   enforced   Refer to   xfs quota 8    for further details     sunit value and swidth value  Used to specify the stripe unit and width for a RAID device  or a stripe volume    value  must be specified in 512 byte  block units  These options are only relevant to filesystems  that were created with non zero data alignment parameters    The   sunit   and   swidth   parameters specified must be compatible  with the existing filesystem alignment characteristics   In  general  that means the only valid changes to   sunit   are  increasing it by a power of 2 multiple  Valid   swidth   values  are any integer multiple of 
a valid   sunit   value    Typically the only time these mount options are necessary if  after an underlying RAID device has had its geometry  modified  such as adding a new disk to a RAID5 lun and  reshaping it     swalloc  Data allocations will be rounded up to stripe width boundaries  when the current end of file is being extended and the file  size is larger than the stripe width size     wsync  When specified  all filesystem namespace operations are  executed synchronously  This ensures that when the namespace  operation  create  unlink  etc  completes  the change to the  namespace is on stable storage  This is useful in HA setups  where failover must not result in clients seeing  inconsistent namespace presentation during or after a  failover event   Deprecation of V4 Format                           The V4 filesystem format lacks certain features that are supported by the V5 format  such as metadata checksumming  strengthened metadata verification  and the ability to store timestamps past the year 2038  Because of this  the V4 format is deprecated   All users should upgrade by backing up their files  reformatting  and restoring from the backup   Administrators and users can detect a V4 filesystem by running xfs info against a filesystem mountpoint and checking for a string containing  crc     If no such string is found  please upgrade xfsprogs to the latest version and try again   The deprecation will take place in two parts   Support for mounting V4 filesystems can now be disabled at kernel build time via Kconfig option  The option will default to yes until September 2025  at which time it will be changed to default to no   In September 2030  support will be removed from the codebase entirely   Note  Distributors may choose to withdraw V4 format support earlier than the dates listed above   Deprecated Mount Options                                                                              Name    Removal Schedule                                            
      Mounting with V4 filesystem     September 2030 Mounting ascii ci filesystem    September 2030 ikeep noikeep   September 2025 attr2 noattr2   September 2025                                                    Removed Mount Options                                                                  Name    Removed                                       delaylog nodelaylog  v4 0   ihashsize   v4 0   irixsgid   v4 0   osyncisdsync osyncisosync v4 0   barrier   v4 19   nobarrier   v4 19                                          sysctls          The following sysctls are available for the XFS filesystem     fs xfs stats clear   Min  0  Default  0  Max  1   Setting this to  1  clears accumulated XFS statistics  in  proc fs xfs stat   It then immediately resets to  0      fs xfs xfssyncd centisecs  Min  100  Default  3000  Max  720000   The interval at which the filesystem flushes metadata  out to disk and runs internal cache cleanup routines     fs xfs filestream centisecs  Min  1  Default  3000  Max  360000   The interval at which the filesystem ages filestreams cache  references and returns timed out AGs back to the free stream  pool     fs xfs speculative prealloc lifetime   Units  seconds   Min  1  Default  300  Max  86400   The interval at which the background scanning for inodes  with unused speculative preallocation runs  The scan  removes unused preallocation from clean inodes and releases  the unused space back to the free pool     fs xfs speculative cow prealloc lifetime  This is an alias for speculative prealloc lifetime     fs xfs error level   Min  0  Default  3  Max  11   A volume knob for error reporting when internal errors occur   This will generate detailed messages   backtraces for filesystem  shutdowns  for example   Current threshold values are     XFS ERRLEVEL OFF        0   XFS ERRLEVEL LOW        1   XFS ERRLEVEL HIGH       5    fs xfs panic mask   Min  0  Default  0  Max  511   Causes certain error conditions to call BUG    Value is a bitmask   OR 
together the tags which represent errors which should cause panics     XFS NO PTAG                     0   XFS PTAG IFLUSH                 0x00000001   XFS PTAG LOGRES                 0x00000002   XFS PTAG AILDELETE              0x00000004   XFS PTAG ERROR REPORT           0x00000008   XFS PTAG SHUTDOWN CORRUPT       0x00000010   XFS PTAG SHUTDOWN IOERROR       0x00000020   XFS PTAG SHUTDOWN LOGERROR      0x00000040   XFS PTAG FSBLOCK ZERO           0x00000080   XFS PTAG VERIFIER ERROR         0x00000100   This option is intended for debugging only     fs xfs irix symlink mode  Min  0  Default  0  Max  1   Controls whether symlinks are created with mode 0777  default   or whether their mode is affected by the umask  irix mode      fs xfs irix sgid inherit  Min  0  Default  0  Max  1   Controls files created in SGID directories   If the group ID of the new file does not match the effective group  ID or one of the supplementary group IDs of the parent dir  the  ISGID bit is cleared if the irix sgid inherit compatibility sysctl  is set     fs xfs inherit sync   Min  0  Default  1  Max  1   Setting this to  1  will cause the  sync  flag set  by the   xfs io 8    chattr command on a directory to be  inherited by files in that directory     fs xfs inherit nodump   Min  0  Default  1  Max  1   Setting this to  1  will cause the  nodump  flag set  by the   xfs io 8    chattr command on a directory to be  inherited by files in that directory     fs xfs inherit noatime  Min  0  Default  1  Max  1   Setting this to  1  will cause the  noatime  flag set  by the   xfs io 8    chattr command on a directory to be  inherited by files in that directory     fs xfs inherit nosymlinks  Min  0  Default  1  Max  1   Setting this to  1  will cause the  nosymlinks  flag set  by the   xfs io 8    chattr command on a directory to be  inherited by files in that directory     fs xfs inherit nodefrag  Min  0  Default  1  Max  1   Setting this to  1  will cause the  nodefrag  flag set  by the   
xfs_io(8) chattr command on a directory to be inherited by files in that directory.

  fs.xfs.rotorstep		(Min: 1  Default: 1  Max: 256)
	In \"inode32\" allocation mode, this option determines how many
	files the allocator attempts to allocate in the same allocation
	group before moving to the next allocation group. The intent
	is to control the rate at which the allocator moves between
	allocation groups when allocating extents for new files.

Deprecated Sysctls
==================

===========================================     ================
  Name                                          Removal Schedule
===========================================     ================
fs.xfs.irix_sgid_inherit                        September 2025
fs.xfs.irix_symlink_mode                        September 2025
fs.xfs.speculative_cow_prealloc_lifetime        September 2025
===========================================     ================

Removed Sysctls
===============

=============================   =======
  Name                          Removed
=============================   =======
fs.xfs.xfsbufd_centisec         v4.0
fs.xfs.age_buffer_centisecs     v4.0
=============================   =======

Error handling
==============

XFS can act differently according to the type of error found during its
operation. The implementation introduces the following concepts to the error
handler:

 -failure speed:
	Defines how fast XFS should propagate an error upwards when a specific
	error is found during the filesystem operation. It can propagate
	immediately, after a defined number of retries, after a set time period,
	or simply retry forever.

 -error classes:
	Specifies the subsystem the error configuration will apply to, such as
	metadata IO or memory allocation. Different subsystems will have
	different error handlers for which behaviour can be configured.

 -error handlers:
	Defines the behavior for a specific error.

The filesystem behavior during an error can be set via ``sysfs`` files. Each
error handler works independently - the first condition met by an error handler
for a specific class will cause the error to be propagated rather than reset
and retried.

The action taken by the filesystem when the error is propagated is context
dependent - it may cause a shut down in the case of an unrecoverable error,
it may be reported back to userspace, or it may even be ignored because
there's nothing useful we can do with the error or anyone we can report it to
(e.g. during unmount).

The configuration files are organized into the following hierarchy for each
mounted filesystem:

  /sys/fs/xfs/<dev>/error/<class>/<error>/

Where:
  <dev>
	The short device name of the mounted filesystem. This is the same device
	name that shows up in XFS kernel error messages as \"XFS(<dev>):\".

  <class>
	The subsystem the error configuration belongs to. As of 4.9, the defined
	classes are:

		- \"metadata\": applies metadata buffer write IO

  <error>
	The individual error handler configurations.

Each filesystem has \"global\" error configuration options defined in their top
level directory:

  /sys/fs/xfs/<dev>/error/

  fail_at_unmount		(Min: 0  Default: 1  Max: 1)
	Defines the filesystem error behavior at unmount time.

	If set to a value of 1, XFS will override all other error configurations
	during unmount and replace them with \"immediate fail\" characteristics,
	i.e. no retries, no retry timeout. This will always allow unmount to
	succeed when there are persistent errors present.

	If set to 0, the configured retry behaviour will continue until all
	retries and/or timeouts have been exhausted. This will delay unmount
	completion when there are persistent errors, and it may prevent the
	filesystem from ever unmounting fully in the case of \"retry forever\"
	handler configurations.

	Note: there is no guarantee that fail_at_unmount can be set while an
	unmount is in progress. It is possible that the ``sysfs`` entries are
	removed by the unmounting filesystem before a \"retry forever\" error
	handler configuration causes unmount to hang, and hence the filesystem
	must be configured appropriately before unmount begins to prevent
	unmount hangs.

Each filesystem has specific error class handlers that define the error
propagation behaviour for specific errors. There is also a \"default\" error
handler defined, which defines the behaviour for all errors that don't have
specific handlers defined. Where multiple retry constraints are configured for
a single error, the first retry configuration that expires will cause the error
to be propagated. The handler configurations are found in the directory:

  /sys/fs/xfs/<dev>/error/<class>/<error>/

  max_retries			(Min: -1  Default: Varies  Max: INTMAX)
	Defines the allowed number of retries of a specific error before
	the filesystem will propagate the error. The retry count for a given
	error context (e.g. a specific metadata buffer) is reset every time
	there is a successful completion of the operation.

	Setting the value to \"-1\" will cause XFS to retry forever for this
	specific error.

	Setting the value to \"0\" will cause XFS to fail immediately when the
	specific error is reported.

	Setting the value to \"N\" (where 0 < N < Max) will make XFS retry the
	operation \"N\" times before propagating the error.

  retry_timeout_seconds		(Min: -1  Default: Varies  Max: 1 day)
	Define the amount of time (in seconds) that the filesystem is
	allowed to retry its operations when the specific error is
	found.

	Setting the value to \"-1\" will allow XFS to retry forever for this
	specific error.

	Setting the value to \"0\" will cause XFS to fail immediately when the
	specific error is reported.

	Setting the value to \"N\" (where 0 < N < Max) will allow XFS to retry the
	operation for up to \"N\" seconds before propagating the error.

Note: The default behaviour for a specific error handler is dependent on both
the class and error context. For example, the default values for
\"metadata/ENODEV\" are \"0\" rather than \"-1\" so that this error handler
defaults to \"fail immediately\" behaviour. This is done because ENODEV is a
fatal, unrecoverable error no matter how many times the metadata IO is retried.

Workqueue Concurrency
=====================

XFS uses kernel workqueues to parallelize metadata update processes. This
enables it to take advantage of storage hardware that can service many IO
operations simultaneously. This interface exposes internal implementation
details of XFS, and as such is explicitly not part of any userspace API/ABI
guarantee the kernel may give userspace. These are undocumented features of the
generic workqueue implementation XFS uses for concurrency, and they are
provided here purely for diagnostic and tuning purposes and may change at any
time in the future.

The control knobs for a filesystem's workqueues are organized by task at hand
and the short name of the data device. They all can be found in:

  /sys/bus/workqueue/devices/${task}!${device}

================  ===========================================================
  Task            Description
================  ===========================================================
  xfs_iwalk-$pid  Inode scans of the entire filesystem. Currently limited to
                  mount time quotacheck.
  xfs-gc          Background garbage collection of disk space that have been
                  speculatively allocated beyond EOF or for staging copy on
                  write operations.
================  ===========================================================

For example, the knobs for the quotacheck workqueue for /dev/nvme0n1 would be
found in /sys/bus/workqueue/devices/xfs_iwalk-1111!nvme0n1/.

The interesting knobs for XFS workqueues are as follows:

============  ===============================================================
  Knob        Description
============  ===============================================================
  max_active  Maximum number of background threads that can be started to
              run the work.
  cpumask     CPUs upon which the threads are allowed to run.
  nice        Relative priority of scheduling the threads. These are the
              same nice levels that can be applied to userspace processes."}
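As a concrete illustration of the error-handler knobs described in the entry above, the following shell sketch tunes error propagation for a hypothetical XFS filesystem. The device name `sda1`, the `metadata/EIO` error directory, and the chosen values are assumptions for illustration; the knob names `fail_at_unmount`, `max_retries`, and `retry_timeout_seconds` are the ones documented above.

```shell
#!/bin/sh
# Sketch: configure XFS error propagation via sysfs.
# "sda1" is a placeholder device; adjust to match your mounted filesystem.
ERRDIR="/sys/fs/xfs/sda1/error"

set_knob() {
    # Write a value to an error-handler knob, or report what would be
    # written when the knob is not present (e.g. no such XFS mount).
    knob="$1"; value="$2"
    if [ -w "$ERRDIR/$knob" ]; then
        printf '%s' "$value" > "$ERRDIR/$knob"
        echo "set $knob=$value"
    else
        echo "would set $knob=$value"
    fi
}

# Always let unmount succeed, even with persistent errors pending.
set_knob fail_at_unmount 1
# Retry EIO metadata write errors up to 5 times...
set_knob metadata/EIO/max_retries 5
# ...but give up after 300 seconds, whichever constraint expires first.
set_knob metadata/EIO/retry_timeout_seconds 300
```

Because the first retry constraint to expire propagates the error, pairing `max_retries` with `retry_timeout_seconds` bounds both the attempt count and the total stall time.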
{"questions":"prometheus sortrank 3 Console templates allow for creation of arbitrary consoles using the Go from the Prometheus server Console templates templating language http golang org pkg text template These are served title Console templates","answers":"---\ntitle: Console templates\nsort_rank: 3\n---\n\n# Console templates\n\nConsole templates allow for creation of arbitrary consoles using the [Go\ntemplating language](http:\/\/golang.org\/pkg\/text\/template\/). These are served\nfrom the Prometheus server.\n\nConsole templates are the most powerful way to create templates that can be\neasily managed in source control. There is a learning curve though, so users new\nto this style of monitoring should try out\n[Grafana](\/docs\/visualization\/grafana\/) first.\n\n## Getting started\n\nPrometheus comes with an example set of consoles to get you going. These can be\nfound at `\/consoles\/index.html.example` on a running Prometheus and will\ndisplay Node Exporter consoles if Prometheus is scraping Node Exporters with a\n`job=\"node\"` label.\n\nThe example consoles have 5 parts:\n\n1. A navigation bar on top\n1. A menu on the left\n1. Time controls on the bottom\n1. The main content in the center, usually graphs\n1. A table on the right\n\nThe navigation bar is for links to other systems, such as other Prometheis\n<sup>[1](\/docs\/introduction\/faq\/#what-is-the-plural-of-prometheus)<\/sup>,\ndocumentation, and whatever else makes sense to you. The menu is for navigation\ninside the same Prometheus server, which is very useful to be able to quickly\nopen a console in another tab to correlate information. Both are configured in\n`console_libraries\/menu.lib`.\n\nThe time controls allow changing of the duration and range of the graphs.\nConsole URLs can be shared and will show the same graphs for others.\n\nThe main content is usually graphs. 
There is a configurable JavaScript graphing\nlibrary provided that will handle requesting data from Prometheus, and rendering\nit via [Rickshaw](https:\/\/shutterstock.github.io\/rickshaw\/).\n\nFinally, the table on the right can be used to display statistics in a more\ncompact form than graphs.\n\n## Example Console\n\nThis is a basic console. It shows the number of tasks, how many of them are up,\nthe average CPU usage, and the average memory usage in the right-hand-side\ntable. The main content has a queries-per-second graph.\n\n```\n{{template \"head\" .}}\n\n{{template \"prom_right_table_head\"}}\n<tr>\n  <th>MyJob<\/th>\n  <th>{{ template \"prom_query_drilldown\" (args \"sum(up{job='myjob'})\") }}\n      \/ {{ template \"prom_query_drilldown\" (args \"count(up{job='myjob'})\") }}\n  <\/th>\n<\/tr>\n<tr>\n  <td>CPU<\/td>\n  <td>{{ template \"prom_query_drilldown\" (args\n      \"avg by(job)(rate(process_cpu_seconds_total{job='myjob'}[5m]))\"\n      \"s\/s\" \"humanizeNoSmallPrefix\") }}\n  <\/td>\n<\/tr>\n<tr>\n  <td>Memory<\/td>\n  <td>{{ template \"prom_query_drilldown\" (args\n       \"avg by(job)(process_resident_memory_bytes{job='myjob'})\"\n       \"B\" \"humanize1024\") }}\n  <\/td>\n<\/tr>\n{{template \"prom_right_table_tail\"}}\n\n{{template \"prom_content_head\" .}}\n\n<h1>MyJob<\/h1>\n\n<h3>Queries<\/h3>\n<div id=\"queryGraph\"><\/div>\n<script>\nnew PromConsole.Graph({\n  node: document.querySelector(\"#queryGraph\"),\n  expr: \"sum(rate(http_query_count{job='myjob'}[5m]))\",\n  name: \"Queries\",\n  yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,\n  yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,\n  yUnits: \"\/s\",\n  yTitle: \"Queries\"\n})\n<\/script>\n\n{{template \"prom_content_tail\" .}}\n\n{{template \"tail\"}}\n```\n\nThe `prom_right_table_head` and `prom_right_table_tail` templates contain the\nright-hand-side table. This is optional.\n\n`prom_query_drilldown` is a template that will evaluate the expression passed to it, format it,\nand link to the expression in the [expression browser](\/docs\/visualization\/browser\/). The first\nargument is the expression. 
The second argument is the unit to use. The third\nargument is how to format the output. Only the first argument is required.\n\nValid output formats for the third argument to `prom_query_drilldown`:\n\n* Not specified: Default Go display output.\n* `humanize`: Display the result using [metric prefixes](http:\/\/en.wikipedia.org\/wiki\/Metric_prefix).\n* `humanizeNoSmallPrefix`: For absolute values greater than 1, display the\n  result using [metric prefixes](http:\/\/en.wikipedia.org\/wiki\/Metric_prefix). For\n  absolute values less than 1, display 3 significant digits. This is useful\n  to avoid units such as milliqueries per second that can be produced by\n  `humanize`.\n* `humanize1024`: Display the humanized result using a base of 1024 rather than 1000.\n  This is usually used with `B` as the second argument to produce units such as `KiB` and `MiB`.\n* `printf.3g`: Display 3 significant digits.\n\nCustom formats can be defined. See\n[prom.lib](https:\/\/github.com\/prometheus\/prometheus\/blob\/main\/console_libraries\/prom.lib) for examples.\n\n## Graph Library\n\nThe graph library is invoked as:\n\n```\n<div id=\"queryGraph\"><\/div>\n<script>\nnew PromConsole.Graph({\n  node: document.querySelector(\"#queryGraph\"),\n  expr: \"sum(rate(http_query_count{job='myjob'}[5m]))\"\n})\n<\/script>\n```\n\nThe `head` template loads the required Javascript and CSS.\n\nParameters to the graph library:\n\n| Name          | Description\n| ------------- | -------------\n| expr          | Required. Expression to graph. Can be a list.\n| node          | Required. DOM node to render into.\n| duration      | Optional. Duration of the graph. Defaults to 1 hour.\n| endTime       | Optional. Unixtime the graph ends at. Defaults to now.\n| width         | Optional. Width of the graph, excluding titles. Defaults to auto-detection.\n| height        | Optional. Height of the graph, excluding titles and legends. Defaults to 200 pixels.\n| min           | Optional. 
Minimum y-axis value. Defaults to lowest data value.\n| max           | Optional. Maximum y-axis value. Defaults to highest data value.\n| renderer      | Optional. Type of graph. Options are `line` and `area` (stacked graph). Defaults to `line`.\n| name          | Optional. Title of plots in legend and hover detail. If passed a string, `[[ label ]]` will be substituted with the label value. If passed a function, it will be passed a map of labels and should return the name as a string. Can be a list.\n| xTitle        | Optional. Title of the x-axis. Defaults to `Time`.\n| yUnits        | Optional. Units of the y-axis. Defaults to empty.\n| yTitle        | Optional. Title of the y-axis. Defaults to empty.\n| yAxisFormatter | Optional. Number formatter for the y-axis. Defaults to `PromConsole.NumberFormatter.humanize`.\n| yHoverFormatter | Optional. Number formatter for the hover detail. Defaults to `PromConsole.NumberFormatter.humanizeExact`.\n| colorScheme   | Optional. Color scheme to be used by the plots. Can be either a list of hex color codes or one of the [color scheme names](https:\/\/github.com\/shutterstock\/rickshaw\/blob\/master\/src\/js\/Rickshaw.Fixtures.Color.js) supported by Rickshaw. Defaults to `'colorwheel'`.\n\nIf both `expr` and `name` are lists, they must be of the same length. The name\nwill be applied to the plots for the corresponding expression.\n\nValid options for the `yAxisFormatter` and `yHoverFormatter`:\n\n* `PromConsole.NumberFormatter.humanize`: Format using [metric prefixes](http:\/\/en.wikipedia.org\/wiki\/Metric_prefix).\n* `PromConsole.NumberFormatter.humanizeNoSmallPrefix`: For absolute values\n  greater than 1, format using [metric prefixes](http:\/\/en.wikipedia.org\/wiki\/Metric_prefix).\n  For absolute values less than 1, format with 3 significant digits. 
This is\n  useful to avoid units such as milliqueries per second that can be produced by\n  `PromConsole.NumberFormatter.humanize`.\n* `PromConsole.NumberFormatter.humanize1024`: Format the humanized result using a base of 1024 rather than 1000.","site":"prometheus"}
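Tying the snippets in the console-templates entry above together: a complete console file wraps its table and graph in the library templates so that the navigation bar, menu, and time controls are rendered around them. A minimal sketch follows; `myjob` and the `http_query_count` metric are placeholders, and the `head`/`tail` and `prom_*` template names are the ones shipped with Prometheus in `console_libraries`.

```
{{template "head" .}}

{{template "prom_right_table_head"}}
<tr>
  <th>MyJob</th>
  <th>{{ template "prom_query_drilldown" (args "sum(up{job='myjob'})") }}</th>
</tr>
{{template "prom_right_table_tail"}}

{{template "prom_content_head" .}}
<h1>MyJob</h1>
<div id="queryGraph"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#queryGraph"),
  expr: "sum(rate(http_query_count{job='myjob'}[5m]))"
})
</script>
{{template "prom_content_tail" .}}

{{template "tail"}}
```

Saved under `consoles/` next to the shipped examples, a file like this is served by the Prometheus server at the corresponding `/consoles/` URL.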
{"questions":"prometheus sortrank 4 There are a number of libraries and servers which help in exporting existing Exporters and integrations title Exporters and integrations cases where it is not feasible to instrument a given system with Prometheus metrics from third party systems as Prometheus metrics This is useful for","answers":"---\ntitle: Exporters and integrations\nsort_rank: 4\n---\n\n# Exporters and integrations\n\nThere are a number of libraries and servers which help in exporting existing\nmetrics from third-party systems as Prometheus metrics. This is useful for\ncases where it is not feasible to instrument a given system with Prometheus\nmetrics directly (for example, HAProxy or Linux system stats).\n\n## Third-party exporters\n\nSome of these exporters are maintained as part of the official [Prometheus GitHub organization](https:\/\/github.com\/prometheus),\nthose are marked as *official*, others are externally contributed and maintained.\n\nWe encourage the creation of more exporters but cannot vet all of them for\n[best practices](\/docs\/instrumenting\/writing_exporters\/).\nCommonly, those exporters are hosted outside of the Prometheus GitHub\norganization.\n\nThe [exporter default\nport](https:\/\/github.com\/prometheus\/prometheus\/wiki\/Default-port-allocations)\nwiki page has become another catalog of exporters, and may include exporters\nnot listed here due to overlapping functionality or still being in development.\n\nThe [JMX exporter](https:\/\/github.com\/prometheus\/jmx_exporter) can export from a\nwide variety of JVM-based applications, for example [Kafka](http:\/\/kafka.apache.org\/) and\n[Cassandra](http:\/\/cassandra.apache.org\/).\n\n### Databases\n\n   * [Aerospike exporter](https:\/\/github.com\/aerospike\/aerospike-prometheus-exporter)\n   * [AWS RDS exporter](https:\/\/github.com\/qonto\/prometheus-rds-exporter)\n   * [ClickHouse exporter](https:\/\/github.com\/f1yegor\/clickhouse_exporter)\n   * [Consul 
exporter](https:\/\/github.com\/prometheus\/consul_exporter) (**official**)\n   * [Couchbase exporter](https:\/\/github.com\/couchbase\/couchbase-exporter)\n   * [CouchDB exporter](https:\/\/github.com\/gesellix\/couchdb-exporter)\n   * [Druid Exporter](https:\/\/github.com\/opstree\/druid-exporter)\n   * [Elasticsearch exporter](https:\/\/github.com\/prometheus-community\/elasticsearch_exporter)\n   * [EventStore exporter](https:\/\/github.com\/marcinbudny\/eventstore_exporter)\n   * [IoTDB exporter](https:\/\/github.com\/fagnercarvalho\/prometheus-iotdb-exporter)\n   * [KDB+ exporter](https:\/\/github.com\/KxSystems\/prometheus-kdb-exporter)\n   * [Memcached exporter](https:\/\/github.com\/prometheus\/memcached_exporter) (**official**)\n   * [MongoDB exporter](https:\/\/github.com\/percona\/mongodb_exporter)\n   * [MongoDB query exporter](https:\/\/github.com\/raffis\/mongodb-query-exporter)\n   * [MongoDB Node.js Driver exporter](https:\/\/github.com\/christiangalsterer\/mongodb-driver-prometheus-exporter)\n   * [MSSQL server exporter](https:\/\/github.com\/awaragi\/prometheus-mssql-exporter)\n   * [MySQL router exporter](https:\/\/github.com\/rluisr\/mysqlrouter_exporter)\n   * [MySQL server exporter](https:\/\/github.com\/prometheus\/mysqld_exporter) (**official**)\n   * [OpenTSDB Exporter](https:\/\/github.com\/cloudflare\/opentsdb_exporter)\n   * [Oracle DB Exporter](https:\/\/github.com\/iamseth\/oracledb_exporter)\n   * [PgBouncer exporter](https:\/\/github.com\/prometheus-community\/pgbouncer_exporter)\n   * [PostgreSQL exporter](https:\/\/github.com\/prometheus-community\/postgres_exporter)\n   * [Presto exporter](https:\/\/github.com\/yahoojapan\/presto_exporter)\n   * [ProxySQL exporter](https:\/\/github.com\/percona\/proxysql_exporter)\n   * [RavenDB exporter](https:\/\/github.com\/marcinbudny\/ravendb_exporter)\n   * [Redis exporter](https:\/\/github.com\/oliver006\/redis_exporter)\n   * [RethinkDB 
exporter](https:\/\/github.com\/oliver006\/rethinkdb_exporter)\n   * [SQL exporter](https:\/\/github.com\/burningalchemist\/sql_exporter)\n   * [Tarantool metric library](https:\/\/github.com\/tarantool\/metrics)\n   * [Twemproxy](https:\/\/github.com\/stuartnelson3\/twemproxy_exporter)\n\n### Hardware related\n   * [apcupsd exporter](https:\/\/github.com\/mdlayher\/apcupsd_exporter)\n   * [BIG-IP exporter](https:\/\/github.com\/ExpressenAB\/bigip_exporter)\n   * [Bosch Sensortec BMP\/BME exporter](https:\/\/github.com\/David-Igou\/bsbmp-exporter)\n   * [Collins exporter](https:\/\/github.com\/soundcloud\/collins_exporter)\n   * [Dell Hardware OMSA exporter](https:\/\/github.com\/galexrt\/dellhw_exporter)\n   * [Disk usage exporter](https:\/\/github.com\/dundee\/disk_usage_exporter)\n   * [Fortigate exporter](https:\/\/github.com\/bluecmd\/fortigate_exporter)\n   * [IBM Z HMC exporter](https:\/\/github.com\/zhmcclient\/zhmc-prometheus-exporter)\n   * [IoT Edison exporter](https:\/\/github.com\/roman-vynar\/edison_exporter)\n   * [InfiniBand exporter](https:\/\/github.com\/treydock\/infiniband_exporter)\n   * [IPMI exporter](https:\/\/github.com\/soundcloud\/ipmi_exporter)\n   * [knxd exporter](https:\/\/github.com\/RichiH\/knxd_exporter)\n   * [Modbus exporter](https:\/\/github.com\/RichiH\/modbus_exporter)\n   * [Netgear Cable Modem Exporter](https:\/\/github.com\/ickymettle\/netgear_cm_exporter)\n   * [Netgear Router exporter](https:\/\/github.com\/DRuggeri\/netgear_exporter)\n   * [Network UPS Tools (NUT) exporter](https:\/\/github.com\/DRuggeri\/nut_exporter)\n   * [Node\/system metrics exporter](https:\/\/github.com\/prometheus\/node_exporter) (**official**)\n   * [NVIDIA GPU exporter](https:\/\/github.com\/mindprince\/nvidia_gpu_prometheus_exporter)\n   * [ProSAFE exporter](https:\/\/github.com\/dalance\/prosafe_exporter)\n   * [SmartRAID exporter](https:\/\/gitlab.com\/calestyo\/prometheus-smartraid-exporter)\n   * [Waveplus Radon Sensor 
Exporter](https:\/\/github.com\/jeremybz\/waveplus_exporter)\n   * [Weathergoose Climate Monitor Exporter](https:\/\/github.com\/branttaylor\/watchdog-prometheus-exporter)\n   * [Windows exporter](https:\/\/github.com\/prometheus-community\/windows_exporter)\n   * [Intel\u00ae Optane\u2122 Persistent Memory Controller Exporter](https:\/\/github.com\/intel\/ipmctl-exporter)\n\n### Issue trackers and continuous integration\n\n   * [Bamboo exporter](https:\/\/github.com\/AndreyVMarkelov\/bamboo-prometheus-exporter)\n   * [Bitbucket exporter](https:\/\/github.com\/AndreyVMarkelov\/prom-bitbucket-exporter)\n   * [Confluence exporter](https:\/\/github.com\/AndreyVMarkelov\/prom-confluence-exporter)\n   * [Jenkins exporter](https:\/\/github.com\/lovoo\/jenkins_exporter)\n   * [JIRA exporter](https:\/\/github.com\/AndreyVMarkelov\/jira-prometheus-exporter)\n\n### Messaging systems\n   * [Beanstalkd exporter](https:\/\/github.com\/messagebird\/beanstalkd_exporter)\n   * [EMQ exporter](https:\/\/github.com\/nuvo\/emq_exporter)\n   * [Gearman exporter](https:\/\/github.com\/bakins\/gearman-exporter)\n   * [IBM MQ exporter](https:\/\/github.com\/ibm-messaging\/mq-metric-samples\/tree\/master\/cmd\/mq_prometheus)\n   * [Kafka exporter](https:\/\/github.com\/danielqsj\/kafka_exporter)\n   * [NATS exporter](https:\/\/github.com\/nats-io\/prometheus-nats-exporter)\n   * [NSQ exporter](https:\/\/github.com\/lovoo\/nsq_exporter)\n   * [Mirth Connect exporter](https:\/\/github.com\/vynca\/mirth_exporter)\n   * [MQTT blackbox exporter](https:\/\/github.com\/inovex\/mqtt_blackbox_exporter)\n   * [MQTT2Prometheus](https:\/\/github.com\/hikhvar\/mqtt2prometheus)\n   * [RabbitMQ exporter](https:\/\/github.com\/kbudde\/rabbitmq_exporter)\n   * [RabbitMQ Management Plugin exporter](https:\/\/github.com\/deadtrickster\/prometheus_rabbitmq_exporter)\n   * [RocketMQ exporter](https:\/\/github.com\/apache\/rocketmq-exporter)\n   * [Solace 
exporter](https:\/\/github.com\/solacecommunity\/solace-prometheus-exporter)\n\n### Storage\n   * [Ceph exporter](https:\/\/github.com\/digitalocean\/ceph_exporter)\n   * [Ceph RADOSGW exporter](https:\/\/github.com\/blemmenes\/radosgw_usage_exporter)\n   * [Gluster exporter](https:\/\/github.com\/ofesseler\/gluster_exporter)\n   * [GPFS exporter](https:\/\/github.com\/treydock\/gpfs_exporter)\n   * [Hadoop HDFS FSImage exporter](https:\/\/github.com\/marcelmay\/hadoop-hdfs-fsimage-exporter)\n   * [HPE CSI info metrics provider](https:\/\/scod.hpedev.io\/csi_driver\/metrics.html)\n   * [HPE storage array exporter](https:\/\/hpe-storage.github.io\/array-exporter\/)\n   * [Lustre exporter](https:\/\/github.com\/HewlettPackard\/lustre_exporter)\n   * [NetApp E-Series exporter](https:\/\/github.com\/treydock\/eseries_exporter)\n   * [Pure Storage exporter](https:\/\/github.com\/PureStorage-OpenConnect\/pure-exporter)\n   * [ScaleIO exporter](https:\/\/github.com\/syepes\/sio2prom)\n   * [Tivoli Storage Manager\/IBM Spectrum Protect exporter](https:\/\/github.com\/treydock\/tsm_exporter)\n\n### HTTP\n   * [Apache exporter](https:\/\/github.com\/Lusitaniae\/apache_exporter)\n   * [HAProxy exporter](https:\/\/github.com\/prometheus\/haproxy_exporter) (**official**)\n   * [Nginx metric library](https:\/\/github.com\/knyar\/nginx-lua-prometheus)\n   * [Nginx VTS exporter](https:\/\/github.com\/sysulq\/nginx-vts-exporter)\n   * [Passenger exporter](https:\/\/github.com\/stuartnelson3\/passenger_exporter)\n   * [Squid exporter](https:\/\/github.com\/boynux\/squid-exporter)\n   * [Tinyproxy exporter](https:\/\/github.com\/gmm42\/tinyproxy_exporter)\n   * [Varnish exporter](https:\/\/github.com\/jonnenauha\/prometheus_varnish_exporter)\n   * [WebDriver exporter](https:\/\/github.com\/mattbostock\/webdriver_exporter)\n\n### APIs\n   * [AWS ECS exporter](https:\/\/github.com\/slok\/ecs-exporter)\n   * [AWS Health exporter](https:\/\/github.com\/Jimdo\/aws-health-exporter)\n   * 
   * [AWS SQS exporter](https://github.com/jmal98/sqs_exporter)
   * [Azure Health exporter](https://github.com/FXinnovation/azure-health-exporter)
   * [BigBlueButton](https://github.com/greenstatic/bigbluebutton-exporter)
   * [Cloudflare exporter](https://gitlab.com/gitlab-org/cloudflare_exporter)
   * [Cryptowat exporter](https://github.com/nbarrientos/cryptowat_exporter)
   * [DigitalOcean exporter](https://github.com/metalmatze/digitalocean_exporter)
   * [Docker Cloud exporter](https://github.com/infinityworks/docker-cloud-exporter)
   * [Docker Hub exporter](https://github.com/infinityworks/docker-hub-exporter)
   * [Fastly exporter](https://github.com/peterbourgon/fastly-exporter)
   * [GitHub exporter](https://github.com/githubexporter/github-exporter)
   * [Gmail exporter](https://github.com/jamesread/prometheus-gmail-exporter/)
   * [GraphQL exporter](https://github.com/ricardbejarano/graphql_exporter)
   * [InstaClustr exporter](https://github.com/fcgravalos/instaclustr_exporter)
   * [Mozilla Observatory exporter](https://github.com/Jimdo/observatory-exporter)
   * [OpenWeatherMap exporter](https://github.com/RichiH/openweathermap_exporter)
   * [Pagespeed exporter](https://github.com/foomo/pagespeed_exporter)
   * [Rancher exporter](https://github.com/infinityworks/prometheus-rancher-exporter)
   * [Speedtest exporter](https://github.com/nlamirault/speedtest_exporter)
   * [Tankerkönig API Exporter](https://github.com/lukasmalkmus/tankerkoenig_exporter)

### Logging
   * [Fluentd exporter](https://github.com/V3ckt0r/fluentd_exporter)
   * [Google's mtail log data extractor](https://github.com/google/mtail)
   * [Grok exporter](https://github.com/fstab/grok_exporter)

### FinOps
   * [AWS Cost Exporter](https://github.com/electrolux-oss/aws-cost-exporter)
   * [Azure Cost Exporter](https://github.com/electrolux-oss/azure-cost-exporter)
   * [Kubernetes Cost Exporter](https://github.com/electrolux-oss/kubernetes-cost-exporter)

### Other monitoring systems
   * [Akamai Cloudmonitor exporter](https://github.com/ExpressenAB/cloudmonitor_exporter)
   * [Alibaba Cloudmonitor exporter](https://github.com/aylei/aliyun-exporter)
   * [AWS CloudWatch exporter](https://github.com/prometheus/cloudwatch_exporter) (**official**)
   * [Azure Monitor exporter](https://github.com/RobustPerception/azure_metrics_exporter)
   * [Cloud Foundry Firehose exporter](https://github.com/cloudfoundry-community/firehose_exporter)
   * [Collectd exporter](https://github.com/prometheus/collectd_exporter) (**official**)
   * [Google Stackdriver exporter](https://github.com/frodenas/stackdriver_exporter)
   * [Graphite exporter](https://github.com/prometheus/graphite_exporter) (**official**)
   * [Heka dashboard exporter](https://github.com/docker-infra/heka_exporter)
   * [Heka exporter](https://github.com/imgix/heka_exporter)
   * [Huawei Cloudeye exporter](https://github.com/huaweicloud/cloudeye-exporter)
   * [InfluxDB exporter](https://github.com/prometheus/influxdb_exporter) (**official**)
   * [ITM exporter](https://github.com/rafal-szypulka/itm_exporter)
   * [Java GC exporter](https://github.com/loyispa/jgc_exporter)
   * [JavaMelody exporter](https://github.com/fschlag/javamelody-prometheus-exporter)
   * [JMX exporter](https://github.com/prometheus/jmx_exporter) (**official**)
   * [Munin exporter](https://github.com/pvdh/munin_exporter)
   * [Nagios / Naemon exporter](https://github.com/Griesbacher/Iapetos)
   * [Neptune Apex exporter](https://github.com/dl-romero/neptune_exporter)
   * [New Relic exporter](https://github.com/mrf/newrelic_exporter)
   * [NRPE exporter](https://github.com/robustperception/nrpe_exporter)
   * [Osquery exporter](https://github.com/zwopir/osquery_exporter)
   * [OTC CloudEye exporter](https://github.com/tiagoReichert/otc-cloudeye-prometheus-exporter)
   * [Pingdom exporter](https://github.com/giantswarm/prometheus-pingdom-exporter)
   * [Promitor (Azure Monitor)](https://promitor.io)
   * [scollector exporter](https://github.com/tgulacsi/prometheus_scollector)
   * [Sensu exporter](https://github.com/reachlin/sensu_exporter)
   * [site24x7_exporter](https://github.com/svenstaro/site24x7_exporter)
   * [SNMP exporter](https://github.com/prometheus/snmp_exporter) (**official**)
   * [StatsD exporter](https://github.com/prometheus/statsd_exporter) (**official**)
   * [TencentCloud monitor exporter](https://github.com/tencentyun/tencentcloud-exporter)
   * [ThousandEyes exporter](https://github.com/sapcc/1000eyes_exporter)
   * [StatusPage exporter](https://github.com/sergeyshevch/statuspage-exporter)

### Miscellaneous
   * [ACT Fibernet Exporter](https://git.captnemo.in/nemo/prometheus-act-exporter)
   * [BIND exporter](https://github.com/prometheus-community/bind_exporter)
   * [BIND query exporter](https://github.com/DRuggeri/bind_query_exporter)
   * [Bitcoind exporter](https://github.com/LePetitBloc/bitcoind-exporter)
   * [Blackbox exporter](https://github.com/prometheus/blackbox_exporter) (**official**)
   * [Bungeecord exporter](https://github.com/weihao/bungeecord-prometheus-exporter)
   * [BOSH exporter](https://github.com/cloudfoundry-community/bosh_exporter)
   * [cAdvisor](https://github.com/google/cadvisor)
   * [Cachet exporter](https://github.com/ContaAzul/cachet_exporter)
   * [ccache exporter](https://github.com/virtualtam/ccache_exporter)
   * [c-lightning exporter](https://github.com/lightningd/plugins/tree/master/prometheus)
   * [DHCPD leases exporter](https://github.com/DRuggeri/dhcpd_leases_exporter)
   * [Dovecot exporter](https://github.com/kumina/dovecot_exporter)
   * [Dnsmasq exporter](https://github.com/google/dnsmasq_exporter)
   * [eBPF exporter](https://github.com/cloudflare/ebpf_exporter)
   * [Ethereum Client exporter](https://github.com/31z4/ethereum-prometheus-exporter)
   * [File statistics exporter](https://github.com/michael-doubez/filestat_exporter)
   * [JFrog Artifactory Exporter](https://github.com/peimanja/artifactory_exporter)
   * [Hostapd Exporter](https://github.com/Fundacio-i2CAT/hostapd_prometheus_exporter)
   * [IBM Security Verify Access / Security Access Manager Exporter](https://gitlab.com/zeblawson/isva-prometheus-exporter)
   * [IPsec exporter](https://github.com/torilabs/ipsec-prometheus-exporter)
   * [IRCd exporter](https://github.com/dgl/ircd_exporter)
   * [Linux HA ClusterLabs exporter](https://github.com/ClusterLabs/ha_cluster_exporter)
   * [JMeter plugin](https://github.com/johrstrom/jmeter-prometheus-plugin)
   * [JSON exporter](https://github.com/prometheus-community/json_exporter)
   * [Kannel exporter](https://github.com/apostvav/kannel_exporter)
   * [Kemp LoadBalancer exporter](https://github.com/giantswarm/prometheus-kemp-exporter)
   * [Kibana Exporter](https://github.com/pjhampton/kibana-prometheus-exporter)
   * [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
   * [Locust Exporter](https://github.com/ContainerSolutions/locust_exporter)
   * [Meteor JS web framework exporter](https://atmospherejs.com/sevki/prometheus-exporter)
   * [Minecraft exporter module](https://github.com/Baughn/PrometheusIntegration)
   * [Minecraft exporter](https://github.com/dirien/minecraft-prometheus-exporter)
   * [Nomad exporter](https://gitlab.com/yakshaving.art/nomad-exporter)
   * [nftables exporter](https://github.com/Intrinsec/nftables_exporter)
   * [OpenStack exporter](https://github.com/openstack-exporter/openstack-exporter)
   * [OpenStack blackbox exporter](https://github.com/infraly/openstack_client_exporter)
   * [oVirt exporter](https://github.com/czerwonk/ovirt_exporter)
   * [Pact Broker exporter](https://github.com/ContainerSolutions/pactbroker_exporter)
   * [PHP-FPM exporter](https://github.com/bakins/php-fpm-exporter)
   * [PowerDNS exporter](https://github.com/ledgr/powerdns_exporter)
   * [Podman exporter](https://github.com/containers/prometheus-podman-exporter)
   * [Prefect2 exporter](https://github.com/pathfinder177/prefect2-prometheus-exporter)
   * [Process exporter](https://github.com/ncabatoff/process-exporter)
   * [rTorrent exporter](https://github.com/mdlayher/rtorrent_exporter)
   * [Rundeck exporter](https://github.com/phsmith/rundeck_exporter)
   * [SABnzbd exporter](https://github.com/msroest/sabnzbd_exporter)
   * [SAML exporter](https://github.com/DoodleScheduling/saml-exporter)
   * [Script exporter](https://github.com/adhocteam/script_exporter)
   * [Shield exporter](https://github.com/cloudfoundry-community/shield_exporter)
   * [Smokeping prober](https://github.com/SuperQ/smokeping_prober)
   * [SMTP/Maildir MDA blackbox prober](https://github.com/cherti/mailexporter)
   * [SoftEther exporter](https://github.com/dalance/softether_exporter)
   * [SSH exporter](https://github.com/treydock/ssh_exporter)
   * [Teamspeak3 exporter](https://github.com/hikhvar/ts3exporter)
   * [Transmission exporter](https://github.com/metalmatze/transmission-exporter)
   * [Unbound exporter](https://github.com/kumina/unbound_exporter)
   * [WireGuard exporter](https://github.com/MindFlavor/prometheus_wireguard_exporter)
   * [Xen exporter](https://github.com/lovoo/xenstats_exporter)

When implementing a new Prometheus exporter, please follow the
[guidelines on writing exporters](/docs/instrumenting/writing_exporters).
Please also consider consulting the [development mailing
list](https://groups.google.com/forum/#!forum/prometheus-developers). We are
happy to give advice on how to make your exporter as useful and consistent as
possible.

## Software exposing Prometheus metrics

Some third-party software exposes metrics in the Prometheus format, so no
separate exporters are needed:

   * [Ansible Automation Platform Automation Controller (AWX)](https://docs.ansible.com/automation-controller/latest/html/administration/metrics.html)
   * [App Connect Enterprise](https://github.com/ot4i/ace-docker)
   * [Ballerina](https://ballerina.io/)
   * [BFE](https://github.com/baidu/bfe)
   * [Caddy](https://caddyserver.com/docs/metrics) (**direct**)
   * [Ceph](https://docs.ceph.com/en/latest/mgr/prometheus/)
   * [CockroachDB](https://www.cockroachlabs.com/docs/stable/monitoring-and-alerting.html#prometheus-endpoint)
   * [Collectd](https://collectd.org/wiki/index.php/Plugin:Write_Prometheus)
   * [Concourse](https://concourse-ci.org/)
   * [CRG Roller Derby Scoreboard](https://github.com/rollerderby/scoreboard) (**direct**)
   * [Diffusion](https://docs.pushtechnology.com/docs/latest/manual/html/administratorguide/systemmanagement/r_statistics.html)
   * [Docker Daemon](https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-metrics)
   * [Doorman](https://github.com/youtube/doorman) (**direct**)
   * [Dovecot](https://doc.dovecot.org/configuration_manual/stats/openmetrics/)
   * [Envoy](https://www.envoyproxy.io/docs/envoy/latest/operations/admin.html#get--stats?format=prometheus)
   * [Etcd](https://github.com/coreos/etcd) (**direct**)
   * [Flink](https://github.com/apache/flink)
   * [FreeBSD Kernel](https://www.freebsd.org/cgi/man.cgi?query=prometheus_sysctl_exporter&apropos=0&sektion=8&manpath=FreeBSD+12-current&arch=default&format=html)
   * [GitLab](https://docs.gitlab.com/ee/administration/monitoring/prometheus/gitlab_metrics.html)
   * [Grafana](https://grafana.com/docs/grafana/latest/administration/view-server/internal-metrics/)
   * [JavaMelody](https://github.com/javamelody/javamelody/wiki/UserGuideAdvanced#exposing-metrics-to-prometheus)
   * [Kong](https://github.com/Kong/kong-plugin-prometheus)
   * [Kubernetes](https://github.com/kubernetes/kubernetes) (**direct**)
   * [LavinMQ](https://lavinmq.com/)
   * [Linkerd](https://github.com/BuoyantIO/linkerd)
   * [mgmt](https://github.com/purpleidea/mgmt/blob/master/docs/prometheus.md)
   * [MidoNet](https://github.com/midonet/midonet)
   * [midonet-kubernetes](https://github.com/midonet/midonet-kubernetes) (**direct**)
   * [MinIO](https://docs.minio.io/docs/how-to-monitor-minio-using-prometheus.html)
   * [PATROL with Monitoring Studio X](https://www.sentrysoftware.com/library/swsyx/prometheus/exposing-patrol-parameters-in-prometheus.html)
   * [Netdata](https://github.com/firehol/netdata)
   * [OpenZiti](https://openziti.github.io)
   * [Pomerium](https://pomerium.com/reference/#metrics-address)
   * [Pretix](https://pretix.eu/)
   * [Quobyte](https://www.quobyte.com/) (**direct**)
   * [RabbitMQ](https://rabbitmq.com/prometheus.html)
   * [RobustIRC](http://robustirc.net/)
   * [ScyllaDB](http://github.com/scylladb/scylla)
   * [Skipper](https://github.com/zalando/skipper)
   * [SkyDNS](https://github.com/skynetservices/skydns) (**direct**)
   * [Telegraf](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/prometheus_client)
   * [Traefik](https://github.com/containous/traefik)
   * [Vector](https://vector.dev)
   * [VerneMQ](https://github.com/vernemq/vernemq)
   * [Flux](https://github.com/fluxcd/flux2)
   * [Xandikos](https://www.xandikos.org/) (**direct**)
   * [Zipkin](https://github.com/openzipkin/zipkin/tree/master/zipkin-server#metrics)

The software marked *direct* is also directly instrumented with a Prometheus client library.

## Other third-party utilities

This section lists libraries and other utilities that help you instrument code
in a certain language. They are not Prometheus client libraries themselves but
make use of one of the normal Prometheus client libraries under the hood. As
for all independently maintained software, we cannot vet all of them for best
practices.

   * Clojure: [iapetos](https://github.com/clj-commons/iapetos)
   * Go: [go-metrics instrumentation library](https://github.com/armon/go-metrics)
   * Go: [gokit](https://github.com/peterbourgon/gokit)
   * Go: [prombolt](https://github.com/mdlayher/prombolt)
   * Java/JVM: [EclipseLink metrics collector](https://github.com/VitaNuova/eclipselinkexporter)
   * Java/JVM: [Hystrix metrics publisher](https://github.com/ahus1/prometheus-hystrix)
   * Java/JVM: [Jersey metrics collector](https://github.com/VitaNuova/jerseyexporter)
   * Java/JVM: [Micrometer Prometheus Registry](https://micrometer.io/docs/registry/prometheus)
   * Python-Django: [django-prometheus](https://github.com/korfuri/django-prometheus)
   * Node.js: [swagger-stats](https://github.com/slanatech/swagger-stats)
{"questions":"prometheus sortrank 2 Writing client libraries This document covers what functionality and API Prometheus client libraries should offer with the aim of consistency across libraries making the easy use cases easy and avoiding offering functionality that may lead users down the title Writing client libraries","answers":"---\ntitle: Writing client libraries\nsort_rank: 2\n---\n\n# Writing client libraries\n\nThis document covers what functionality and API Prometheus client libraries\nshould offer, with the aim of consistency across libraries, making the easy use\ncases easy and avoiding offering functionality that may lead users down the\nwrong path.\n\nThere are [10 languages already supported](\/docs\/instrumenting\/clientlibs) at\nthe time of writing, so we\u2019ve gotten a good sense by now of how to write a\nclient. These guidelines aim to help authors of new client libraries produce\ngood libraries.\n\n## Conventions\n\nMUST\/MUST NOT\/SHOULD\/SHOULD NOT\/MAY have the meanings given in\n[https:\/\/www.ietf.org\/rfc\/rfc2119.txt](https:\/\/www.ietf.org\/rfc\/rfc2119.txt)\n\nIn addition ENCOURAGED means that a feature is desirable for a library to have,\nbut it\u2019s okay if it\u2019s not present. In other words, a nice to have.\n\nThings to keep in mind:\n\n* Take advantage of each language\u2019s features.\n\n* The common use cases should be easy.\n\n* The correct way to do something should be the easy way.\n\n* More complex use cases should be possible.\n\nThe common use cases are (in order):\n\n* Counters without labels spread liberally around libraries\/applications.\n\n* Timing functions\/blocks of code in Summaries\/Histograms.\n\n* Gauges to track current states of things (and their limits).\n\n* Monitoring of batch jobs.\n\n## Overall structure\n\nClients MUST be written to be callback based internally. Clients SHOULD\ngenerally follow the structure described here.\n\nThe key class is the Collector. 
This has a method (typically called \u2018collect\u2019)\nthat returns zero or more metrics and their samples. Collectors get registered\nwith a CollectorRegistry. Data is exposed by passing a CollectorRegistry to a\nclass\/method\/function \"bridge\", which returns the metrics in a format\nPrometheus supports. Every time the CollectorRegistry is scraped it must\ncall back to each of the Collectors\u2019 collect methods.\n\nThe interfaces most users interact with are the Counter, Gauge, Summary, and\nHistogram Collectors. These represent a single metric, and should cover the\nvast majority of use cases where a user is instrumenting their own code.\n\nMore advanced use cases (such as proxying from another\nmonitoring\/instrumentation system) require writing a custom Collector. Someone\nmay also want to write a \"bridge\" that takes a CollectorRegistry and produces\ndata in a format a different monitoring\/instrumentation system understands,\nallowing users to only have to think about one instrumentation system.\n\nCollectorRegistry SHOULD offer `register()`\/`unregister()` functions, and a\nCollector SHOULD be allowed to be registered to multiple CollectorRegistrys.\n\nClient libraries MUST be thread safe.\n\nFor non-OO languages such as C, client libraries should follow the spirit of\nthis structure as much as is practical.\n\n### Naming\n\nClient libraries SHOULD follow function\/method\/class names mentioned in this\ndocument, keeping in mind the naming conventions of the language they\u2019re\nworking in. For example, `set_to_current_time()` is good for a method name in\nPython, but `SetToCurrentTime()` is better in Go and `setToCurrentTime()` is\nthe convention in Java. Where names differ for technical reasons (e.g.
not\nallowing function overloading), documentation\/help strings SHOULD point users\ntowards the other names.\n\nLibraries MUST NOT offer functions\/methods\/classes with the same or similar\nnames to ones given here, but with different semantics.\n\n## Metrics\n\nThe Counter, Gauge, Summary and Histogram [metric\ntypes](\/docs\/concepts\/metric_types\/) are the primary interfaces used by users.\n\nCounter and Gauge MUST be part of the client library. At least one of Summary\nand Histogram MUST be offered.\n\nThese should be primarily used as file-static variables, that is, global\nvariables defined in the same file as the code they\u2019re instrumenting. The\nclient library SHOULD enable this. The common use case is instrumenting a piece\nof code overall, not a piece of code in the context of one instance of an\nobject. Users shouldn\u2019t have to worry about plumbing their metrics throughout\ntheir code; the client library should do that for them (and if it doesn\u2019t,\nusers will write a wrapper around the library to make it \"easier\" - which\nrarely tends to go well).\n\nThere MUST be a default CollectorRegistry, and the standard metrics MUST by default\nimplicitly register into it with no special work required by the user. There\nMUST be a way to have metrics not register to the default CollectorRegistry,\nfor use in batch jobs and unittests. Custom collectors SHOULD also follow this.\n\nExactly how the metrics should be created varies by language. For some (Java,\nGo) a builder approach is best, whereas for others (Python) function arguments\nare rich enough to do it in one call.\n\nFor example in the Java Simpleclient we have:\n\n```java\nclass YourClass {\n  static final Counter requests = Counter.build()\n      .name(\"requests_total\")\n      .help(\"Requests.\").register();\n}\n```\n\nThis will register requests with the default CollectorRegistry.
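The callback-based structure and default-registry behaviour described above can be sketched in pure Python. This is an illustrative skeleton only; all names are placeholders and this is not the API of any real client library:

```python
import threading

class Collector:
    """Anything that can produce metric samples on demand."""
    def collect(self):
        # Yield (metric_name, labels_dict, value) samples.
        raise NotImplementedError

class CollectorRegistry:
    def __init__(self):
        self._lock = threading.Lock()   # client libraries MUST be thread safe
        self._collectors = []

    def register(self, collector):
        with self._lock:
            self._collectors.append(collector)

    def unregister(self, collector):
        with self._lock:
            self._collectors.remove(collector)

    def collect(self):
        # Every scrape calls back into each registered Collector.
        with self._lock:
            collectors = list(self._collectors)
        for c in collectors:
            yield from c.collect()

# The default registry that standard metrics implicitly register with.
REGISTRY = CollectorRegistry()

class Counter(Collector):
    def __init__(self, name, help, registry=REGISTRY):
        self.name, self.help = name, help
        self._lock = threading.Lock()
        self._value = 0.0
        if registry is not None:        # pass registry=None for unittests/batch jobs
            registry.register(self)

    def inc(self, amount=1.0):
        if amount < 0:
            raise ValueError("counters can only increase")
        with self._lock:
            self._value += amount

    def collect(self):
        yield (self.name, {}, self._value)
```

With this sketch, `Counter("requests_total", "Requests.")` registers itself with `REGISTRY`, and a scrape is just `list(REGISTRY.collect())`.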
By calling\n`build()` rather than `register()` the metric won\u2019t be registered (handy for\nunittests); you can also pass in a CollectorRegistry to `register()` (handy for\nbatch jobs).\n\n### Counter\n\n[Counter](\/docs\/concepts\/metric_types\/#counter) is a monotonically increasing\ncounter. It MUST NOT allow the value to decrease; however, it MAY be reset to 0\n(such as by server restart).\n\nA counter MUST have the following methods:\n\n* `inc()`: Increment the counter by 1\n* `inc(double v)`: Increment the counter by the given amount. MUST check that v >= 0.\n\nA counter is ENCOURAGED to have:\n\nA way to count exceptions thrown\/raised in a given piece of code, and optionally\nonly certain types of exceptions. This is `count_exceptions` in Python.\n\nCounters MUST start at 0.\n\n### Gauge\n\n[Gauge](\/docs\/concepts\/metric_types\/#gauge) represents a value that can go up\nand down.\n\nA gauge MUST have the following methods:\n\n* `inc()`: Increment the gauge by 1\n* `inc(double v)`: Increment the gauge by the given amount\n* `dec()`: Decrement the gauge by 1\n* `dec(double v)`: Decrement the gauge by the given amount\n* `set(double v)`: Set the gauge to the given value\n\nGauges MUST start at 0; you MAY offer a way for a given gauge to start at a\ndifferent number.\n\nA gauge SHOULD have the following methods:\n\n* `set_to_current_time()`: Set the gauge to the current unixtime in seconds.\n\nA gauge is ENCOURAGED to have:\n\nA way to track in-progress requests in some piece of code\/function. This is\n`track_inprogress` in Python.\n\nA way to time a piece of code and set the gauge to its duration in seconds.\nThis is useful for batch jobs. This is startTimer\/setDuration in Java and the\n`time()` decorator\/context manager in Python.
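Such a timer might look like the following minimal sketch (illustrative only, not the actual implementation of any client library):

```python
import time
from contextlib import contextmanager

class Gauge:
    """Minimal illustrative gauge; a real client library adds thread safety."""
    def __init__(self):
        self.value = 0.0

    def set(self, v):
        self.value = v

    @contextmanager
    def time(self):
        # On exit, set the gauge to the block's duration in seconds --
        # the same pattern as Summary/Histogram, but set() not observe().
        start = time.perf_counter()
        try:
            yield
        finally:
            self.set(time.perf_counter() - start)
```

Usage is `with g.time(): run_batch()`, after which `g.value` holds the elapsed seconds.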
This SHOULD match the pattern in\nSummary\/Histogram (though `set()` rather than `observe()`).\n\n### Summary\n\nA [summary](\/docs\/concepts\/metric_types\/#summary) samples observations (usually\nthings like request durations) over sliding windows of time and provides\ninstantaneous insight into their distributions, frequencies, and sums.\n\nA summary MUST NOT allow the user to set \"quantile\" as a label name, as this is\nused internally to designate summary quantiles. A summary is ENCOURAGED to\noffer quantiles as exports, though these can\u2019t be aggregated and tend to be\nslow. A summary MUST allow not having quantiles, as just `_count`\/`_sum` is\nquite useful and this MUST be the default.\n\nA summary MUST have the following methods:\n\n* `observe(double v)`: Observe the given amount\n\nA summary SHOULD have the following methods:\n\nSome way to time code for users in seconds. In Python this is the `time()`\ndecorator\/context manager. In Java this is startTimer\/observeDuration. Units\nother than seconds MUST NOT be offered (if a user wants something else, they\ncan do it by hand). This should follow the same pattern as Gauge\/Histogram.\n\nSummary `_count`\/`_sum` MUST start at 0.\n\n### Histogram\n\n[Histograms](\/docs\/concepts\/metric_types\/#histogram) allow aggregatable\ndistributions of events, such as request latencies. This is at its core a\ncounter per bucket.\n\nA histogram MUST NOT allow `le` as a user-set label, as `le` is used internally\nto designate buckets.\n\nA histogram MUST offer a way to manually choose the buckets. Ways to set\nbuckets in a `linear(start, width, count)` and `exponential(start, factor,\ncount)` fashion SHOULD be offered. 
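As a sketch of such bucket helpers (illustrative names, assuming the count includes the `+Inf` bucket):

```python
import math

def linear_buckets(start, width, count):
    """count evenly spaced upper bounds; the last is always +Inf."""
    return [start + i * width for i in range(count - 1)] + [math.inf]

def exponential_buckets(start, factor, count):
    """count geometrically spaced upper bounds; the last is always +Inf."""
    return [start * factor ** i for i in range(count - 1)] + [math.inf]
```

For example, `linear_buckets(1.0, 1.0, 4)` gives upper bounds `[1.0, 2.0, 3.0, inf]`.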
Count MUST include the `+Inf` bucket.\n\nA histogram SHOULD have the same default buckets as other client libraries.\nBuckets MUST NOT be changeable once the metric is created.\n\nA histogram MUST have the following methods:\n\n* `observe(double v)`: Observe the given amount\n\nA histogram SHOULD have the following methods:\n\nSome way to time code for users in seconds. In Python this is the `time()`\ndecorator\/context manager. In Java this is `startTimer`\/`observeDuration`.\nUnits other than seconds MUST NOT be offered (if a user wants something else,\nthey can do it by hand). This should follow the same pattern as Gauge\/Summary.\n\nHistogram `_count`\/`_sum` and the buckets MUST start at 0.\n\n**Further metrics considerations**\n\nProviding additional functionality in metrics beyond what\u2019s documented above,\nas makes sense for a given language, is ENCOURAGED.\n\nIf there\u2019s a common use case you can make simpler, then go for it, as long as it\nwon\u2019t encourage undesirable behaviours (such as suboptimal metric\/label\nlayouts, or doing computation in the client).\n\n### Labels\n\nLabels are one of the [most powerful\naspects](\/docs\/practices\/instrumentation\/#use-labels) of Prometheus, but\n[easily abused](\/docs\/practices\/instrumentation\/#do-not-overuse-labels).\nAccordingly client libraries must be very careful in how labels are offered to\nusers.\n\nClient libraries MUST NOT allow users to have different\nlabel names for the same metric for Gauge\/Counter\/Summary\/Histogram or any\nother Collector offered by the library.\n\nMetrics from custom collectors should almost always have consistent label\nnames.
As there are still rare but valid use cases where this is not the case,\nclient libraries should not verify this.\n\nWhile labels are powerful, the majority of metrics will not have labels.\nAccordingly the API should allow for labels but not be dominated by them.\n\nA client library MUST allow for optionally specifying a list of label names at\nGauge\/Counter\/Summary\/Histogram creation time. A client library SHOULD support\nany number of label names. A client library MUST validate that label names meet\nthe [documented\nrequirements](\/docs\/concepts\/data_model\/#metric-names-and-labels).\n\nThe general way to provide access to the labeled dimensions of a metric is via a\n`labels()` method that takes either a list of the label values or a map from\nlabel name to label value and returns a \"Child\". The usual\n`.inc()`\/`.dec()`\/`.observe()` etc. methods can then be called on the Child.\n\nThe Child returned by `labels()` SHOULD be cacheable by the user, to avoid\nhaving to look it up again - this matters in latency-critical code.\n\nMetrics with labels SHOULD support a `remove()` method with the same signature\nas `labels()` that will remove a Child from the metric, no longer exporting it,\nand a `clear()` method that removes all Children from the metric. These\ninvalidate caching of Children.\n\nThere SHOULD be a way to initialize a given Child with the default value,\nusually just calling `labels()`. Metrics without labels MUST always be\ninitialized to avoid [problems with missing\nmetrics](\/docs\/practices\/instrumentation\/#avoid-missing-metrics).\n\n### Metric names\n\nMetric names must follow the\n[specification](\/docs\/concepts\/data_model\/#metric-names-and-labels).
As with\nlabel names, this MUST be met for uses of Gauge\/Counter\/Summary\/Histogram and\nin any other Collector offered with the library.\n\nMany client libraries offer setting the name in three parts:\n`namespace_subsystem_name`, of which only the `name` is mandatory.\n\nDynamic\/generated metric names or subparts of metric names MUST be discouraged,\nexcept when a custom Collector is proxying from other\ninstrumentation\/monitoring systems. Generated\/dynamic metric names are a sign\nthat you should be using labels instead.\n\n### Metric description and help\n\nGauge\/Counter\/Summary\/Histogram MUST require metric descriptions\/help to be\nprovided.\n\nAny custom Collectors provided with the client libraries MUST have\ndescriptions\/help on their metrics.\n\nIt is suggested to make it a mandatory argument, but not to check that it\u2019s of\na certain length: if someone really doesn\u2019t want to write docs, we\u2019re not\ngoing to convince them otherwise. Collectors offered with the library (and\nindeed everywhere we can within the ecosystem) SHOULD have good metric\ndescriptions, to lead by example.\n\n## Exposition\n\nClients MUST implement the text-based exposition format outlined in the\n[exposition formats](\/docs\/instrumenting\/exposition_formats) documentation.\n\nReproducible order of the exposed metrics is ENCOURAGED (especially for human\nreadable formats) if it can be implemented without a significant resource cost.\n\n## Standard and runtime collectors\n\nClient libraries SHOULD offer what they can of the Standard exports, documented\nbelow.\n\nThese SHOULD be implemented as custom Collectors, and registered by default on\nthe default CollectorRegistry. There SHOULD be a way to disable these, as there\nare some very niche use cases where they get in the way.\n\n### Process metrics\n\nThese metrics have the prefix `process_`.
If obtaining a necessary value is\nproblematic or even impossible with the used language or runtime, client\nlibraries SHOULD prefer leaving out the corresponding metric over exporting\nbogus, inaccurate, or special values (like `NaN`). All memory values in bytes,\nall times in unixtime\/seconds.\n\n| Metric name                        | Help string                                            | Unit             |\n| ---------------------------------- | ------------------------------------------------------ | ---------------  |\n| `process_cpu_seconds_total`        | Total user and system CPU time spent in seconds.       | seconds          |\n| `process_open_fds`                 | Number of open file descriptors.                       | file descriptors |\n| `process_max_fds`                  | Maximum number of open file descriptors.               | file descriptors |\n| `process_virtual_memory_bytes`     | Virtual memory size in bytes.                          | bytes            |\n| `process_virtual_memory_max_bytes` | Maximum amount of virtual memory available in bytes.   | bytes            |\n| `process_resident_memory_bytes`    | Resident memory size in bytes.                         | bytes            |\n| `process_heap_bytes`               | Process heap size in bytes.                            | bytes            |\n| `process_start_time_seconds`       | Start time of the process since unix epoch in seconds. | seconds          |\n| `process_threads`                  | Number of OS threads in the process.             | threads          |\n\n### Runtime metrics\n\nIn addition, client libraries are ENCOURAGED to also offer whatever makes sense\nin terms of metrics for their language\u2019s runtime (e.g. 
garbage collection\nstats), with an appropriate prefix such as `go_`, `hotspot_` etc.\n\n## Unit tests\n\nClient libraries SHOULD have unit tests covering the core instrumentation\nlibrary and exposition.\n\nClient libraries are ENCOURAGED to offer ways that make it easy for users to\nunit-test their use of the instrumentation code. For example, the\n`CollectorRegistry.get_sample_value` in Python.\n\n## Packaging and dependencies\n\nIdeally, a client library can be included in any application to add some\ninstrumentation without breaking the application.\n\nAccordingly, caution is advised when adding dependencies to the client library.\nFor example, if you add a library that uses a Prometheus client that requires\nversion x.y of a library but the application uses x.z elsewhere, will that have\nan adverse impact on the application?\n\nIt is suggested that, where this may arise, the core instrumentation be\nseparated from the bridges\/exposition of metrics in a given format. For\nexample, the Java simpleclient `simpleclient` module has no dependencies, and\nthe `simpleclient_servlet` has the HTTP bits.\n\n## Performance considerations\n\nAs client libraries must be thread-safe, some form of concurrency control is\nrequired and consideration must be given to performance on multi-core machines\nand applications.\n\nIn our experience, the least performant approach is mutexes.\n\nProcessor atomic instructions tend to be in the middle, and generally\nacceptable.\n\nApproaches that avoid different CPUs mutating the same bit of RAM work best,\nsuch as the DoubleAdder in Java\u2019s simpleclient. There is a memory cost though.\n\nAs noted above, the result of `labels()` should be cacheable. The concurrent\nmaps that tend to back metrics with labels are relatively slow.\nSpecial-casing metrics without labels to avoid `labels()`-like lookups can help\na lot.\n\nMetrics SHOULD avoid blocking when they are being incremented\/decremented\/set\netc.
as it\u2019s undesirable for the whole application to be held up while a scrape\nis ongoing.\n\nHaving benchmarks of the main instrumentation operations, including labels, is\nENCOURAGED.\n\nResource consumption, particularly RAM, should be kept in mind when performing\nexposition. Consider reducing the memory footprint by streaming results, and\npotentially having a limit on the number of concurrent scrapes.","site":"prometheus","answers_cleaned":"    title  Writing client libraries sort rank  2        Writing client libraries  This document covers what functionality and API Prometheus client libraries should offer  with the aim of consistency across libraries  making the easy use cases easy and avoiding offering functionality that may lead users down the wrong path   There are  10 languages already supported   docs instrumenting clientlibs  at the time of writing  so we ve gotten a good sense by now of how to write a client  These guidelines aim to help authors of new client libraries produce good libraries      Conventions  MUST MUST NOT SHOULD SHOULD NOT MAY have the meanings given in  https   www ietf org rfc rfc2119 txt  https   www ietf org rfc rfc2119 txt   In addition ENCOURAGED means that a feature is desirable for a library to have  but it s okay if it s not present  In other words  a nice to have   Things to keep in mind     Take advantage of each language s features     The common use cases should be easy     The correct way to do something should be the easy way     More complex use cases should be possible   The common use cases are  in order      Counters without labels spread liberally around libraries applications     Timing functions blocks of code in Summaries Histograms     Gauges to track current states of things  and their limits      Monitoring of batch jobs      Overall structure  Clients MUST be written to be callback based internally  Clients SHOULD generally follow the structure described here   The key class is the Collector  This has a 
method  typically called  collect   that returns zero or more metrics and their samples  Collectors get registered with a CollectorRegistry  Data is exposed by passing a CollectorRegistry to a class method function  bridge   which returns the metrics in a format Prometheus supports  Every time the CollectorRegistry is scraped it must callback to each of the Collectors  collect method   The interface most users interact with are the Counter  Gauge  Summary  and Histogram Collectors  These represent a single metric  and should cover the vast majority of use cases where a user is instrumenting their own code   More advanced uses cases  such as proxying from another monitoring instrumentation system  require writing a custom Collector  Someone may also want to write a  bridge  that takes a CollectorRegistry and produces data in a format a different monitoring instrumentation system understands  allowing users to only have to think about one instrumentation system   CollectorRegistry SHOULD offer  register     unregister    functions  and a Collector SHOULD be allowed to be registered to multiple CollectorRegistrys   Client libraries MUST be thread safe   For non OO languages such as C  client libraries should follow the spirit of this structure as much as is practical       Naming  Client libraries SHOULD follow function method class names mentioned in this document  keeping in mind the naming conventions of the language they re working in  For example   set to current time    is good for a method name in  Python  but  SetToCurrentTime    is better in Go and  setToCurrentTime    is the convention in Java  Where names differ for technical reasons  e g  not allowing function overloading   documentation help strings SHOULD point users towards the other names   Libraries MUST NOT offer functions methods classes with the same or similar names to ones given here  but with different semantics      Metrics  The Counter  Gauge  Summary and Histogram  metric types   docs 
concepts metric types   are the primary interface by users   Counter and Gauge MUST be part of the client library  At least one of Summary and Histogram MUST be offered   These should be primarily used as file static variables  that is  global variables defined in the same file as the code they re instrumenting  The client library SHOULD enable this  The common use case is instrumenting a piece of code overall  not a piece of code in the context of one instance of an object  Users shouldn t have to worry about plumbing their metrics throughout their code  the client library should do that for them  and if it doesn t  users will write a wrapper around the library to make it  easier    which rarely tends to go well    There MUST be a default CollectorRegistry  the standard metrics MUST by default implicitly register into it with no special work required by the user  There MUST be a way to have metrics not register to the default CollectorRegistry  for use in batch jobs and unittests  Custom collectors SHOULD also follow this   Exactly how the metrics should be created varies by language  For some  Java  Go  a builder approach is best  whereas for others  Python  function arguments are rich enough to do it in one call   For example in the Java Simpleclient we have      java class YourClass     static final Counter requests   Counter build          name  requests total          help  Requests    register           This will register requests with the default CollectorRegistry  By calling  build    rather than  register    the metric won t be registered  handy for unittests   you can also pass in a CollectorRegistry to  register     handy for batch jobs        Counter   Counter   docs concepts metric types  counter  is a monotonically increasing counter  It MUST NOT allow the value to decrease  however it MAY be reset to 0  such as by server restart    A counter MUST have the following methods      inc     Increment the counter by 1    inc double v    Increment the 
counter by the given amount  MUST check that v    0   A counter is ENCOURAGED to have   A way to count exceptions throw raised in a given piece of code  and optionally only certain types of exceptions  This is count exceptions in Python   Counters MUST start at 0       Gauge   Gauge   docs concepts metric types  gauge  represents a value that can go up and down   A gauge MUST have the following methods      inc     Increment the gauge by 1    inc double v    Increment the gauge by the given amount    dec     Decrement the gauge by 1    dec double v    Decrement the gauge by the given amount    set double v    Set the gauge to the given value  Gauges MUST start at 0  you MAY offer a way for a given gauge to start at a different number   A gauge SHOULD have the following methods      set to current time     Set the gauge to the current unixtime in seconds   A gauge is ENCOURAGED to have   A way to track in progress requests in some piece of code function  This is  track inprogress  in Python   A way to time a piece of code and set the gauge to its duration in seconds  This is useful for batch jobs  This is startTimer setDuration in Java and the  time    decorator context manager in Python  This SHOULD match the pattern in Summary Histogram  though  set    rather than  observe           Summary  A  summary   docs concepts metric types  summary  samples observations  usually things like request durations  over sliding windows of time and provides instantaneous insight into their distributions  frequencies  and sums   A summary MUST NOT allow the user to set  quantile  as a label name  as this is used internally to designate summary quantiles  A summary is ENCOURAGED to offer quantiles as exports  though these can t be aggregated and tend to be slow  A summary MUST allow not having quantiles  as just   count    sum  is quite useful and this MUST be the default   A summary MUST have the following methods      observe double v    Observe the given amount  A summary SHOULD 
have the following methods   Some way to time code for users in seconds  In Python this is the  time    decorator context manager  In Java this is startTimer observeDuration  Units other than seconds MUST NOT be offered  if a user wants something else  they can do it by hand   This should follow the same pattern as Gauge Histogram   Summary   count    sum  MUST start at 0       Histogram   Histograms   docs concepts metric types  histogram  allow aggregatable distributions of events  such as request latencies  This is at its core a counter per bucket   A histogram MUST NOT allow  le  as a user set label  as  le  is used internally to designate buckets   A histogram MUST offer a way to manually choose the buckets  Ways to set buckets in a  linear start  width  count   and  exponential start  factor  count   fashion SHOULD be offered  Count MUST include the   Inf  bucket   A histogram SHOULD have the same default buckets as other client libraries  Buckets MUST NOT be changeable once the metric is created   A histogram MUST have the following methods      observe double v    Observe the given amount  A histogram SHOULD have the following methods   Some way to time code for users in seconds  In Python this is the  time    decorator context manager  In Java this is  startTimer   observeDuration   Units other than seconds MUST NOT be offered  if a user wants something else  they can do it by hand   This should follow the same pattern as Gauge Summary   Histogram    count    sum  and the buckets MUST start at 0     Further metrics considerations    Providing additional functionality in metrics beyond what s documented above as makes sense for a given language is ENCOURAGED   If there s a common use case you can make simpler then go for it  as long as it won t encourage undesirable behaviours  such as suboptimal metric label layouts  or doing computation in the client        Labels  Labels are one of the  most powerful aspects   docs practices instrumentation  use labels  
of Prometheus, but [easily abused](\/docs\/practices\/instrumentation\/#do-not-overuse-labels).\nAccordingly client libraries must be very careful in how labels are offered to users.\n\nClient libraries MUST NOT allow users to have different label names for the same metric for Gauge\/Counter\/Summary\/Histogram or any other Collector offered by the library.\n\nMetrics from custom collectors should almost always have consistent label names. As there are still rare but valid use cases where this is not the case, client libraries should not verify this.\n\nWhile labels are powerful, the majority of metrics will not have labels. Accordingly the API should allow for labels but not dominate it.\n\nA client library MUST allow for optionally specifying a list of label names at Gauge\/Counter\/Summary\/Histogram creation time. A client library SHOULD support any number of label names. A client library MUST validate that label names meet the [documented requirements](\/docs\/concepts\/data_model\/#metric-names-and-labels).\n\nThe general way to provide access to the labeled dimensions of a metric is via a `labels()` method that takes either a list of the label values or a map from label name to label value and returns a \"Child\". The usual `.inc()`\/`.dec()`\/`.observe()` etc. methods can then be called on the Child.\n\nThe Child returned by `labels()` SHOULD be cacheable by the user, to avoid having to look it up again; this matters in latency-critical code.\n\nMetrics with labels SHOULD support a `remove()` method with the same signature as `labels()` that will remove a Child from the metric, no longer exporting it, and a `clear()` method that removes all Children from the metric. These invalidate caching of Children.\n\nThere SHOULD be a way to initialize a given Child with the default value, usually just calling `labels()`. Metrics without labels MUST always be initialized to avoid [problems with missing metrics](\/docs\/practices\/instrumentation\/#avoid-missing-metrics).\n\n### Metric names\n\nMetric names must follow the [specification](\/docs\/concepts\/data_model\/#metric-names-and-labels). As with label names, this MUST be met for uses of Gauge\/Counter\/Summary\/Histogram and in any other Collector offered with the library.\n\nMany client libraries offer setting the name in three parts: `namespace_subsystem_name`, of which only the `name` is mandatory.\n\nDynamic\/generated metric names or subparts of metric names MUST be discouraged, except when a custom Collector is proxying from other instrumentation\/monitoring systems. Generated\/dynamic metric names are a sign that you should be using labels instead.\n\n### Metric description and help\n\nGauge\/Counter\/Summary\/Histogram MUST require metric descriptions\/help to be provided.\n\nAny custom Collectors provided with the client libraries MUST have descriptions\/help on their metrics.\n\nIt is suggested to make it a mandatory argument, but not to check that it's of a certain length, as if someone really doesn't want to write docs we're not going to convince them otherwise. Collectors offered with the library (and indeed everywhere we can within the ecosystem) SHOULD have good metric descriptions, to lead by example.\n\n## Exposition\n\nClients MUST implement the text-based exposition format outlined in the [exposition formats](\/docs\/instrumenting\/exposition_formats\/) documentation.\n\nReproducible order of the exposed metrics is ENCOURAGED (especially for human-readable formats) if it can be implemented without a significant resource cost.\n\n## Standard and runtime collectors\n\nClient libraries SHOULD offer what they can of the Standard exports, documented below.\n\nThese SHOULD be implemented as custom Collectors, and registered by default on the default CollectorRegistry. There SHOULD be a way to disable these, as there are some very niche use cases where they get in the way.\n\n### Process metrics\n\nThese metrics have the prefix `process_`. If obtaining a necessary value is problematic or even impossible with the used language or runtime, client libraries SHOULD prefer leaving out the corresponding metric over exporting bogus, inaccurate, or special values (like `NaN`). All memory values in bytes, all times in unixtime\/seconds.\n\n| Metric name | Help string | Unit |\n| ----------- | ----------- | ---- |\n| `process_cpu_seconds_total` | Total user and system CPU time spent in seconds. | seconds |\n| `process_open_fds` | Number of open file descriptors. | file descriptors |\n| `process_max_fds` | Maximum number of open file descriptors. | file descriptors |\n| `process_virtual_memory_bytes` | Virtual memory size in bytes. | bytes |\n| `process_virtual_memory_max_bytes` | Maximum amount of virtual memory available in bytes. | bytes |\n| `process_resident_memory_bytes` | Resident memory size in bytes. | bytes |\n| `process_heap_bytes` | Process heap size in bytes. | bytes |\n| `process_start_time_seconds` | Start time of the process since unix epoch in seconds. | seconds |\n| `process_threads` | Number of OS threads in the process. | threads |\n\n### Runtime metrics\n\nIn addition, client libraries are ENCOURAGED to also offer whatever makes sense in terms of metrics for their language's runtime (e.g. garbage collection stats), with an appropriate prefix such as `go_`, `hotspot_` etc.\n\n## Unit tests\n\nClient libraries SHOULD have unit tests covering the core instrumentation library and exposition.\n\nClient libraries are ENCOURAGED to offer ways that make it easy for users to unit-test their use of the instrumentation code. For example, the `CollectorRegistry.get_sample_value` in Python.\n\n## Packaging and dependencies\n\nIdeally, a client library can be included in any application to add some instrumentation without breaking the application.\n\nAccordingly, caution is advised when adding dependencies to the client library. For example, if you add a library that uses a Prometheus client that requires version x.y of a library, but the application uses x.z elsewhere, will that have an adverse impact on the application?\n\nIt is suggested that where this may arise, the core instrumentation is separated from the bridges\/exposition of metrics in a given format. For example, the Java simpleclient's `simpleclient` module has no dependencies, and the `simpleclient_servlet` module has the HTTP bits.\n\n## Performance considerations\n\nAs client libraries must be thread-safe, some form of concurrency control is required, and consideration must be given to performance on multi-core machines and applications.\n\nIn our experience the least performant is mutexes.\n\nProcessor atomic instructions tend to be in the middle, and generally acceptable.\n\nApproaches that avoid different CPUs mutating the same bit of RAM work best, such as the DoubleAdder in Java's simpleclient. There is a memory cost though.\n\nAs noted above, the result of `labels()` should be cacheable. The concurrent maps that tend to back metrics with labels tend to be relatively slow. Special-casing metrics without labels to avoid `labels()`-like lookups can help a lot.\n\nMetrics SHOULD avoid blocking when they are being incremented\/decremented\/set etc., as it's undesirable for the whole application to be held up while a scrape is ongoing.\n\nHaving benchmarks of the main instrumentation operations, including labels, is ENCOURAGED.\n\nResource consumption, particularly RAM, should be kept in mind when performing exposition. Consider reducing the memory footprint by streaming results, and potentially having a limit on the number of concurrent scrapes."}
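The text-based exposition format that client libraries must implement can be illustrated with a minimal renderer. This is a hedged sketch with a hypothetical `render_metric` helper, not any real client's API; sorting the labels also gives the reproducible output order the guidelines encourage:

```python
def render_metric(name, help_text, metric_type, samples):
    """Render one metric family in the Prometheus text exposition format.

    samples is a list of (labels_dict, value) pairs. Hypothetical helper
    for illustration; real client libraries handle escaping and more.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} {metric_type}"]
    for labels, value in samples:
        if labels:
            # Sort labels so the exposed output order is reproducible.
            label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
            lines.append(f"{name}{{{label_str}}} {value}")
        else:
            lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

print(render_metric("process_open_fds", "Number of open file descriptors.",
                    "gauge", [({}, 7.0)]))
```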
{"questions":"prometheus If you are instrumenting your own code the general rules of how to library docs practices instrumentation should be followed When title Writing exporters instrument code with a Prometheus client sortrank 5 Writing exporters","answers":"---\ntitle: Writing exporters\nsort_rank: 5\n---\n\n# Writing exporters\n\nIf you are instrumenting your own code, the [general rules of how to\ninstrument code with a Prometheus client\nlibrary](\/docs\/practices\/instrumentation\/) should be followed. When\ntaking metrics from another monitoring or instrumentation system, things\ntend not to be so black and white.\n\nThis document contains things you should consider when writing an\nexporter or custom collector. The theory covered will also be of\ninterest to those doing direct instrumentation.\n\nIf you are writing an exporter and are unclear on anything here, please\ncontact us on IRC (#prometheus on libera) or the [mailing\nlist](\/community).\n\n## Maintainability and purity\n\nThe main decision you need to make when writing an exporter is how much\nwork you\u2019re willing to put in to get perfect metrics out of it.\n\nIf the system in question has only a handful of metrics that rarely\nchange, then getting everything perfect is an easy choice, a good\nexample of this is the [HAProxy\nexporter](https:\/\/github.com\/prometheus\/haproxy_exporter).\n\nOn the other hand, if you try to get things perfect when the system has\nhundreds of metrics that change frequently with new versions, then\nyou\u2019ve signed yourself up for a lot of ongoing work. The [MySQL\nexporter](https:\/\/github.com\/prometheus\/mysqld_exporter) is on this end\nof the spectrum.\n\nThe [node exporter](https:\/\/github.com\/prometheus\/node_exporter) is a\nmix of these, with complexity varying by module. For example, the\n`mdadm` collector hand-parses a file and exposes metrics created\nspecifically for that collector, so we may as well get the metrics\nright. 
For the `meminfo` collector the results vary across kernel\nversions so we end up doing just enough of a transform to create valid\nmetrics.\n\n## Configuration\n\nWhen working with applications, you should aim for an exporter that\nrequires no custom configuration by the user beyond telling it where the\napplication is.  You may also need to offer the ability to filter out\ncertain metrics if they may be too granular and expensive on large\nsetups, for example the [HAProxy\nexporter](https:\/\/github.com\/prometheus\/haproxy_exporter) allows\nfiltering of per-server stats. Similarly, there may be expensive metrics\nthat are disabled by default.\n\nWhen working with other monitoring systems, frameworks and protocols you\nwill often need to provide additional configuration or customization to\ngenerate metrics suitable for Prometheus. In the best case scenario, a\nmonitoring system has a similar enough data model to Prometheus that you\ncan automatically determine how to transform metrics. This is the case\nfor [Cloudwatch](https:\/\/github.com\/prometheus\/cloudwatch_exporter),\n[SNMP](https:\/\/github.com\/prometheus\/snmp_exporter) and\n[collectd](https:\/\/github.com\/prometheus\/collectd_exporter). At most, we\nneed the ability to let the user select which metrics they want to pull\nout.\n\nIn other cases, metrics from the system are completely non-standard,\ndepending on the usage of the system and the underlying application.  In\nthat case the user has to tell us how to transform the metrics. 
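Where the user has to tell the exporter how to transform metrics, the usual shape is a mapping rule: a pattern over the source metric name whose capture groups feed a Prometheus name and labels. The sketch below is a hypothetical single rule in the spirit of the graphite_exporter/statsd_exporter mapping configs, not their actual syntax:

```python
import re

# Hypothetical mapping rule: pattern capture groups become label values.
RULE = {
    "pattern": re.compile(
        r"^servers\.([^.]+)\.networking\.subnetworks\.transmissions\.([^.]+)\.bytes$"),
    "name": "instance_transmission_bytes_total",
    "labels": lambda m: {"instance": m.group(1), "transmission_type": m.group(2)},
}

def transform(dotted_name):
    """Map a dotted source metric name to a (name, labels) pair, or None."""
    m = RULE["pattern"].match(dotted_name)
    if m is None:
        return None  # unmapped metrics could be dropped or passed through
    return RULE["name"], RULE["labels"](m)

print(transform("servers.bfg400.networking.subnetworks.transmissions.eth0.bytes"))
```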
The [JMX\nexporter](https:\/\/github.com\/prometheus\/jmx_exporter) is the worst\noffender here, with the\n[Graphite](https:\/\/github.com\/prometheus\/graphite_exporter) and\n[StatsD](https:\/\/github.com\/prometheus\/statsd_exporter) exporters also\nrequiring configuration to extract labels.\n\nEnsuring the exporter works out of the box without configuration, and\nproviding a selection of example configurations for transformation if\nrequired, is advised.\n\nYAML is the standard Prometheus configuration format, all configuration\nshould use YAML by default.\n\n## Metrics\n\n### Naming\n\nFollow the [best practices on metric naming](\/docs\/practices\/naming).\n\nGenerally metric names should allow someone who is familiar with\nPrometheus but not a particular system to make a good guess as to what a\nmetric means.  A metric named `http_requests_total` is not extremely\nuseful - are these being measured as they come in, in some filter or\nwhen they get to the user\u2019s code?  And `requests_total` is even worse,\nwhat type of requests?\n\nWith direct instrumentation, a given metric should exist within exactly\none file. Accordingly, within exporters and collectors, a metric should\napply to exactly one subsystem and be named accordingly.\n\nMetric names should never be procedurally generated, except when writing\na custom collector or exporter.\n\nMetric names for applications should generally be prefixed by the\nexporter name, e.g. `haproxy_up`.\n\nMetrics must use base units (e.g. seconds, bytes) and leave converting\nthem to something more readable to graphing tools. No matter what units\nyou end up using, the units in the metric name must match the units in\nuse. Similarly, expose ratios, not percentages. Even better, specify a\ncounter for each of the two components of the ratio.\n\nMetric names should not include the labels that they\u2019re exported with,\ne.g. 
`by_type`, as that won\u2019t make sense if the label is aggregated\naway.\n\nThe one exception is when you\u2019re exporting the same data with different\nlabels via multiple metrics, in which case that\u2019s usually the sanest way\nto distinguish them. For direct instrumentation, this should only come\nup when exporting a single metric with all the labels would have too\nhigh a cardinality.\n\nPrometheus metrics and label names are written in `snake_case`.\nConverting `camelCase` to `snake_case` is desirable, though doing so\nautomatically doesn\u2019t always produce nice results for things like\n`myTCPExample` or `isNaN` so sometimes it\u2019s best to leave them as-is.\n\nExposed metrics should not contain colons, these are reserved for user\ndefined recording rules to use when aggregating.\n\nOnly `[a-zA-Z0-9:_]` are valid in metric names.\n\nThe `_sum`, `_count`, `_bucket` and `_total` suffixes are used by\nSummaries, Histograms and Counters. Unless you\u2019re producing one of\nthose, avoid these suffixes.\n\n`_total` is a convention for counters, you should use it if you\u2019re using\nthe COUNTER type.\n\nThe `process_` and `scrape_` prefixes are reserved. It\u2019s okay to add\nyour own prefix on to these if they follow matching semantics.\nFor example, Prometheus has `scrape_duration_seconds` for how long a\nscrape took, it's good practice to also have an exporter-centric metric,\ne.g. `jmx_scrape_duration_seconds`, saying how long the specific\nexporter took to do its thing. For process stats where you have access\nto the PID, both Go and Python offer collectors that\u2019ll handle this for\nyou. A good example of this is the [HAProxy\nexporter](https:\/\/github.com\/prometheus\/haproxy_exporter).\n\nWhen you have a successful request count and a failed request count, the\nbest way to expose this is as one metric for total requests and another\nmetric for failed requests. This makes it easy to calculate the failure\nratio. 
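The automatic `camelCase` to `snake_case` conversion described above can be sketched with two regex passes; note how it handles `myTCPExample` reasonably but produces the kind of awkward result for `isNaN` that the text warns about (hypothetical helper, not any client library's API):

```python
import re

def to_snake_case(name: str) -> str:
    """Best-effort camelCase -> snake_case conversion."""
    # Split an acronym from a following capitalized word: myTCPExample -> myTCP_Example
    s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    # Split a lowercase/digit from a following uppercase letter: myTCP -> my_TCP
    s = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s)
    return s.lower()

print(to_snake_case("camelCase"))     # camel_case
print(to_snake_case("myTCPExample"))  # my_tcp_example
print(to_snake_case("isNaN"))         # is_na_n -- not nice, best left as-is
```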
Do not use one metric with a failed or success label. Similarly,\nwith hit or miss for caches, it\u2019s better to have one metric for total and\nanother for hits.\n\nConsider the likelihood that someone using monitoring will do a code or\nweb search for the metric name. If the names are very well-established\nand unlikely to be used outside of the realm of people used to those\nnames, for example SNMP and network engineers, then leaving them as-is\nmay be a good idea. This logic doesn\u2019t apply for all exporters, for\nexample the MySQL exporter metrics may be used by a variety of people,\nnot just DBAs. A `HELP` string with the original name can provide most\nof the same benefits as using the original names.\n\n### Labels\n\nRead the [general\nadvice](\/docs\/practices\/instrumentation\/#things-to-watch-out-for) on\nlabels.\n\nAvoid `type` as a label name, it\u2019s too generic and often meaningless.\nYou should also try where possible to avoid names that are likely to\nclash with target labels, such as `region`, `zone`, `cluster`,\n`availability_zone`, `az`, `datacenter`, `dc`, `owner`, `customer`,\n`stage`, `service`, `environment` and `env`. If, however, that\u2019s what\nthe application calls some resource, it\u2019s best not to cause confusion by\nrenaming it.\n\nAvoid the temptation to put things into one metric just because they\nshare a prefix. Unless you\u2019re sure something makes sense as one metric,\nmultiple metrics is safer.\n\nThe label `le` has special meaning for Histograms, and `quantile` for\nSummaries. Avoid these labels generally.\n\nRead\/write and send\/receive are best as separate metrics, rather than as\na label. This is usually because you care about only one of them at a\ntime, and it is easier to use them that way.\n\nThe rule of thumb is that one metric should make sense when summed or\naveraged.  
There is one other case that comes up with exporters, and\nthat\u2019s where the data is fundamentally tabular and doing otherwise would\nrequire users to do regexes on metric names to be usable. Consider the\nvoltage sensors on your motherboard, while doing math across them is\nmeaningless, it makes sense to have them in one metric rather than\nhaving one metric per sensor. All values within a metric should\n(almost) always have the same unit, for example consider if fan speeds\nwere mixed in with the voltages, and you had no way to automatically\nseparate them.\n\nDon\u2019t do this:\n\n<pre>\nmy_metric{label=\"a\"} 1\nmy_metric{label=\"b\"} 6\n<b>my_metric{label=\"total\"} 7<\/b>\n<\/pre>\n\nor this:\n\n<pre>\nmy_metric{label=\"a\"} 1\nmy_metric{label=\"b\"} 6\n<b>my_metric{} 7<\/b>\n<\/pre>\n\nThe former breaks for people who do a `sum()` over your metric, and the\nlatter breaks sum and is quite difficult to work with. Some client\nlibraries, for example Go, will actively try to stop you doing the\nlatter in a custom collector, and all client libraries should stop you\nfrom doing the latter with direct instrumentation. Never do either of\nthese, rely on Prometheus aggregation instead.\n\nIf your monitoring exposes a total like this, drop the total. If you\nhave to keep it around for some reason, for example the total includes\nthings not counted individually, use different metric names.\n\nInstrumentation labels should be minimal, every extra label is one more\nthat users need to consider when writing their PromQL. Accordingly,\navoid having instrumentation labels which could be removed without\naffecting the uniqueness of the time series. Additional information\naround a metric can be added via an info metric, for an example see\nbelow how to handle version numbers.\n\nHowever, there are cases where it is expected that virtually all users of\na metric will want the additional information. 
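The `sum()` breakage from exposing a `total` label value, described above, is easy to demonstrate numerically:

```python
# Samples as exposed by the anti-pattern above: a "total" label value
# sitting alongside the real label values.
samples = {"a": 1, "b": 6, "total": 7}

# A user aggregating with sum() double-counts everything:
naive_sum = sum(samples.values())                            # 14, not 7
correct_sum = sum(v for k, v in samples.items() if k != "total")  # 7

print(naive_sum, correct_sum)
```

Dropping the exported total and letting Prometheus compute `sum()` avoids forcing every user to special-case the `total` series.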
If so, adding a\nnon-unique label, rather than an info metric, is the right solution. For\nexample the\n[mysqld_exporter](https:\/\/github.com\/prometheus\/mysqld_exporter)'s\n`mysqld_perf_schema_events_statements_total`'s `digest` label is a hash\nof the full query pattern and is sufficient for uniqueness. However, it\nis of little use without the human readable `digest_text` label, which\nfor long queries will contain only the start of the query pattern and is\nthus not unique. Thus we end up with both the `digest_text` label for\nhumans and the `digest` label for uniqueness.\n\n### Target labels, not static scraped labels\n\nIf you ever find yourself wanting to apply the same label to all of your\nmetrics, stop.\n\nThere\u2019s generally two cases where this comes up.\n\nThe first is for some label it would be useful to have on the metrics\nsuch as the version number of the software. Instead, use the approach\ndescribed at\n[https:\/\/www.robustperception.io\/how-to-have-labels-for-machine-roles\/](http:\/\/www.robustperception.io\/how-to-have-labels-for-machine-roles\/).\n\nThe second case is when a label is really a target label. These are\nthings like region, cluster names, and so on, that come from your\ninfrastructure setup rather than the application itself. It\u2019s not for an\napplication to say where it fits in your label taxonomy, that\u2019s for the\nperson running the Prometheus server to configure and different people\nmonitoring the same application may give it different names.\n\nAccordingly, these labels belong up in the scrape configs of Prometheus\nvia whatever service discovery you\u2019re using. It\u2019s okay to apply the\nconcept of machine roles here as well, as it\u2019s likely useful information\nfor at least some people scraping it.\n\n### Types\n\nYou should try to match up the types of your metrics to Prometheus\ntypes. This usually means counters and gauges. 
The `_count` and `_sum`\nof summaries are also relatively common, and on occasion you\u2019ll see\nquantiles. Histograms are rare, if you come across one remember that the\nexposition format exposes cumulative values.\n\nOften it won\u2019t be obvious what the type of metric is, especially if\nyou\u2019re automatically processing a set of metrics. In general `UNTYPED`\nis a safe default.\n\nCounters can\u2019t go down, so if you have a counter type coming from\nanother instrumentation system that can be decremented, for example\nDropwizard metrics then it's not a counter, it's a gauge. `UNTYPED` is\nprobably the best type to use there, as `GAUGE` would be misleading if\nit were being used as a counter.\n\n### Help strings\n\nWhen you\u2019re transforming metrics it\u2019s useful for users to be able to\ntrack back to what the original was, and what rules were in play that\ncaused that transformation. Putting in the name of the\ncollector or exporter, the ID of any rule that was applied and the\nname and details of the original metric into the help string will greatly\naid users.\n\nPrometheus doesn\u2019t like one metric having different help strings. If\nyou\u2019re making one metric from many others, choose one of them to put in\nthe help string.\n\nFor examples of this, the SNMP exporter uses the OID and the JMX\nexporter puts in a sample mBean name. The [HAProxy\nexporter](https:\/\/github.com\/prometheus\/haproxy_exporter) has\nhand-written strings. 
The [node\nexporter](https:\/\/github.com\/prometheus\/node_exporter) also has a wide\nvariety of examples.\n\n### Drop less useful statistics\n\nSome instrumentation systems expose 1m, 5m, 15m rates, average rates since\napplication start (these are called `mean` in Dropwizard metrics for\nexample) in addition to minimums, maximums and standard deviations.\n\nThese should all be dropped, as they\u2019re not very useful and add clutter.\nPrometheus can calculate rates itself, and usually more accurately as\nthe averages exposed are usually exponentially decaying. You don\u2019t know\nwhat time the min or max were calculated over, and the standard deviation\nis statistically useless and you can always expose sum of squares,\n`_sum` and `_count` if you ever need to calculate it.\n\nQuantiles have related issues, you may choose to drop them or put them\nin a Summary.\n\n### Dotted strings\n\nMany monitoring systems don\u2019t have labels, instead doing things like\n`my.class.path.mymetric.labelvalue1.labelvalue2.labelvalue3`.\n\nThe [Graphite](https:\/\/github.com\/prometheus\/graphite_exporter) and\n[StatsD](https:\/\/github.com\/prometheus\/statsd_exporter) exporters share\na way of transforming these with a small configuration language. Other\nexporters should implement the same. The transformation is currently\nimplemented only in Go, and would benefit from being factored out into a\nseparate library.\n\n## Collectors\n\nWhen implementing the collector for your exporter, you should never use\nthe usual direct instrumentation approach and then update the metrics on\neach scrape.\n\nRather create new metrics each time. In Go this is done with\n[MustNewConstMetric](https:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus#MustNewConstMetric)\nin your `Collect()` method. 
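The create-new-metrics-on-each-scrape pattern can be sketched as follows. The `MetricFamily` class is a stand-in for what real clients provide (e.g. Go's `MustNewConstMetric` or Python's `GaugeMetricFamily`), and `HypotheticalAppCollector` and its `fetch_stats` callable are illustrative names, not a real API:

```python
class MetricFamily:
    # Stand-in for e.g. prometheus_client's GaugeMetricFamily.
    def __init__(self, name, help_text):
        self.name, self.help, self.samples = name, help_text, []

    def add_metric(self, labels, value):
        self.samples.append((labels, value))

class HypotheticalAppCollector:
    """Builds fresh metric objects on every collect() call instead of
    mutating global state, so concurrent scrapes don't race and stale
    label values don't linger."""
    def __init__(self, fetch_stats):
        self._fetch_stats = fetch_stats  # queries the monitored application

    def collect(self):
        stats = self._fetch_stats()  # fresh data on every scrape
        g = MetricFamily("myapp_connections", "Current connections by backend.")
        for backend, count in stats.items():
            g.add_metric({"backend": backend}, count)
        yield g

collector = HypotheticalAppCollector(lambda: {"web": 3, "db": 1})
families = list(collector.collect())
```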
For Python see\n[the custom collectors documentation](https:\/\/prometheus.github.io\/client_python\/collector\/custom\/)\nand for Java generate a `List<MetricFamilySamples>` in your collect\nmethod, see\n[StandardExports.java](https:\/\/github.com\/prometheus\/client_java\/blob\/master\/simpleclient_hotspot\/src\/main\/java\/io\/prometheus\/client\/hotspot\/StandardExports.java)\nfor an example.\n\nThe reason for this is two-fold. Firstly, two scrapes could happen at\nthe same time, and direct instrumentation uses what are effectively\nfile-level global variables, so you\u2019ll get race conditions. Secondly, if\na label value disappears, it\u2019ll still be exported.\n\nInstrumenting your exporter itself via direct instrumentation is fine,\ne.g. total bytes transferred or calls performed by the exporter across\nall scrapes.  For exporters such as the [blackbox\nexporter](https:\/\/github.com\/prometheus\/blackbox_exporter) and [SNMP\nexporter](https:\/\/github.com\/prometheus\/snmp_exporter), which aren\u2019t\ntied to a single target, these should only be exposed on a vanilla\n`\/metrics` call, not on a scrape of a particular target.\n\n### Metrics about the scrape itself\n\nSometimes you\u2019d like to export metrics that are about the scrape, like\nhow long it took or how many records you processed.\n\nThese should be exposed as gauges as they\u2019re about an event, the scrape,\nand the metric name prefixed by the exporter name, for example\n`jmx_scrape_duration_seconds`. Usually the `_exporter` is excluded and\nif the exporter also makes sense to use as just a collector, then\ndefinitely exclude it.\n\nOther scrape \"meta\" metrics should be avoided. For example, a counter for\nthe number of scrapes, or a histogram of the scrape duration. Having the\nexporter track these metrics duplicates the [automatically generated\nmetrics](\/docs\/concepts\/jobs_instances\/#automatically-generated-labels-and-time-series)\nof Prometheus itself. 
This adds to the storage cost of every exporter instance.\n\n### Machine and process metrics\n\nMany systems, for example Elasticsearch, expose machine metrics such as\nCPU, memory and filesystem information. As the [node\nexporter](https:\/\/github.com\/prometheus\/node_exporter) provides these in\nthe Prometheus ecosystem, such metrics should be dropped.\n\nIn the Java world, many instrumentation frameworks expose process-level\nand JVM-level stats such as CPU and GC. The Java client and JMX exporter\nalready include these in the preferred form via\n[DefaultExports.java](https:\/\/github.com\/prometheus\/client_java\/blob\/master\/simpleclient_hotspot\/src\/main\/java\/io\/prometheus\/client\/hotspot\/DefaultExports.java),\nso these should also be dropped.\n\nSimilarly with other languages and frameworks.\n\n## Deployment\n\nEach exporter should monitor exactly one application instance,\npreferably sitting right beside it on the same machine. That means for\nevery HAProxy you run, you run a `haproxy_exporter` process. For every\nmachine with a Mesos worker, you run the [Mesos\nexporter](https:\/\/github.com\/mesosphere\/mesos_exporter) on it, and\nanother one for the master, if a machine has both.\n\nThe theory behind this is that for direct instrumentation this is what\nyou\u2019d be doing, and we\u2019re trying to get as close to that as we can in\nother layouts.  This means that all service discovery is done in\nPrometheus, not in exporters.  This also has the benefit that Prometheus\nhas the target information it needs to allow users to probe your service\nwith the [blackbox\nexporter](https:\/\/github.com\/prometheus\/blackbox_exporter).\n\nThere are two exceptions:\n\nThe first is where running beside the application you are monitoring is\ncompletely nonsensical. The SNMP, blackbox and IPMI exporters are the\nmain examples of this. 
The IPMI and SNMP exporters as the devices are\noften black boxes that it\u2019s impossible to run code on (though if you\ncould run a node exporter on them instead that\u2019d be better), and the\nblackbox exporter where you\u2019re monitoring something like a DNS name,\nwhere there\u2019s also nothing to run on. In this case, Prometheus should\nstill do service discovery, and pass on the target to be scraped. See\nthe blackbox and SNMP exporters for examples.\n\nNote that it is only currently possible to write this type of exporter\nwith the Go, Python and Java client libraries.\n\nThe second exception is where you\u2019re pulling some stats out of a random\ninstance of a system and don\u2019t care which one you\u2019re talking to.\nConsider a set of MySQL replicas you wanted to run some business queries\nagainst the data to then export. Having an exporter that uses your usual\nload balancing approach to talk to one replica is the sanest approach.\n\nThis doesn\u2019t apply when you\u2019re monitoring a system with master-election,\nin that case you should monitor each instance individually and deal with\nthe \"masterness\" in Prometheus. This is as there isn\u2019t always exactly\none master, and changing what a target is underneath Prometheus\u2019s feet\nwill cause oddities.\n\n### Scheduling\n\nMetrics should only be pulled from the application when Prometheus\nscrapes them, exporters should not perform scrapes based on their own\ntimers. That is, all scrapes should be synchronous.\n\nAccordingly, you should not set timestamps on the metrics you expose, let\nPrometheus take care of that. If you think you need timestamps, then you\nprobably need the\n[Pushgateway](https:\/\/prometheus.io\/docs\/instrumenting\/pushing\/)\ninstead.\n\nIf a metric is particularly expensive to retrieve, i.e. takes more than\na minute, it is acceptable to cache it. This should be noted in the\n`HELP` string.\n\nThe default scrape timeout for Prometheus is 10 seconds. 
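The allowance above for caching a particularly expensive metric between scrapes could look like the following sketch. `CachedValue`, `fetch` and `ttl_seconds` are hypothetical names; the guidance is simply to keep scrapes synchronous and note the caching in the `HELP` string:

```python
import time

class CachedValue:
    """Cache an expensive-to-collect value between scrapes (hypothetical
    helper). The scrape itself stays synchronous; only the backend query
    is reused while it is fresh."""
    def __init__(self, fetch, ttl_seconds=300.0):
        self._fetch, self._ttl = fetch, ttl_seconds
        self._value, self._fetched_at = None, float("-inf")

    def get(self):
        now = time.monotonic()
        if now - self._fetched_at > self._ttl:
            self._value = self._fetch()  # the expensive call
            self._fetched_at = now
        return self._value

calls = []
cached = CachedValue(lambda: calls.append(1) or 42, ttl_seconds=300.0)
cached.get()
cached.get()  # served from cache, the expensive call ran only once
```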
If your\nexporter can be expected to exceed this, you should explicitly call this\nout in your user documentation.\n\n### Pushes\n\nSome applications and monitoring systems only push metrics, for example\nStatsD, Graphite and collectd.\n\nThere are two considerations here.\n\nFirstly, when do you expire metrics? Collectd and things talking to\nGraphite both export regularly, and when they stop we want to stop\nexposing the metrics.  Collectd includes an expiry time so we use that,\nGraphite doesn\u2019t so it is a flag on the exporter.\n\nStatsD is a bit different, as it is dealing with events rather than\nmetrics. The best model is to run one exporter beside each application\nand restart them when the application restarts so that the state is\ncleared.\n\nSecondly, these sort of systems tend to allow your users to send either\ndeltas or raw counters. You should rely on the raw counters as far as\npossible, as that\u2019s the general Prometheus model.\n\nFor service-level metrics, e.g. service-level batch jobs, you should\nhave your exporter push into the Pushgateway and exit after the event\nrather than handling the state yourself. For instance-level batch\nmetrics, there is no clear pattern yet. The options are either to abuse\nthe node exporter\u2019s textfile collector, rely on in-memory state\n(probably best if you don\u2019t need to persist over a reboot) or implement\nsimilar functionality to the textfile collector.\n\n### Failed scrapes\n\nThere are currently two patterns for failed scrapes where the\napplication you\u2019re talking to doesn\u2019t respond or has other problems.\n\nThe first is to return a 5xx error.\n\nThe second is to have a `myexporter_up`, e.g. `haproxy_up`, variable\nthat has a value of 0 or 1 depending on whether the scrape worked.\n\nThe latter is better where there\u2019s still some useful metrics you can get\neven with a failed scrape, such as the HAProxy exporter providing\nprocess stats. 
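The `myexporter_up` pattern for failed scrapes, described above, can be sketched as a wrapper around the scrape: on failure the exporter still serves whatever it can, with the up metric set to 0. `collect_with_up` and `scrape_target` are illustrative names, not a real API:

```python
def collect_with_up(scrape_target):
    """Wrap a scrape so a backend failure yields myexporter_up 0
    instead of a 5xx from the exporter itself.

    scrape_target is a hypothetical callable returning {metric_name: value}.
    """
    try:
        metrics = dict(scrape_target())
        metrics["myexporter_up"] = 1
    except Exception:
        metrics = {"myexporter_up": 0}
    return metrics

def broken():
    raise ConnectionError("backend unreachable")

print(collect_with_up(broken))
print(collect_with_up(lambda: {"haproxy_current_sessions": 5}))
```

The trade-off in the text applies unchanged: this keeps partial metrics available, but users must alert on `myexporter_up` in addition to the standard `up`.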
The former is a tad easier for users to deal with, as\n[`up` works in the usual way](\/docs\/concepts\/jobs_instances\/#automatically-generated-labels-and-time-series), although you can\u2019t distinguish between the\nexporter being down and the application being down.\n\n### Landing page\n\nIt\u2019s nicer for users if visiting `http:\/\/yourexporter\/` has a simple\nHTML page with the name of the exporter, and a link to the `\/metrics`\npage.\n\n### Port numbers\n\nA user may have many exporters and Prometheus components on the same\nmachine, so to make that easier each has a unique port number.\n\n[https:\/\/github.com\/prometheus\/prometheus\/wiki\/Default-port-allocations](https:\/\/github.com\/prometheus\/prometheus\/wiki\/Default-port-allocations)\nis where we track them, this is publicly editable.\n\nFeel free to grab the next free port number when developing your\nexporter, preferably before publicly announcing it. If you\u2019re not ready\nto release yet, putting your username and WIP is fine.\n\nThis is a registry to make our users\u2019 lives a little easier, not a\ncommitment to develop particular exporters. 
For exporters for internal\napplications we recommend using ports outside of the range of default\nport allocations.\n\n## Announcing\n\nOnce you\u2019re ready to announce your exporter to the world, email the\nmailing list and send a PR to add it to [the list of available\nexporters](https:\/\/github.com\/prometheus\/docs\/blob\/main\/content\/docs\/instrumenting\/exporters.md).","site":"prometheus","answers_cleaned":"    title  Writing exporters sort rank  5        Writing exporters  If you are instrumenting your own code  the  general rules of how to instrument code with a Prometheus client library   docs practices instrumentation   should be followed  When taking metrics from another monitoring or instrumentation system  things tend not to be so black and white   This document contains things you should consider when writing an exporter or custom collector  The theory covered will also be of interest to those doing direct instrumentation   If you are writing an exporter and are unclear on anything here  please contact us on IRC   prometheus on libera  or the  mailing list   community       Maintainability and purity  The main decision you need to make when writing an exporter is how much work you re willing to put in to get perfect metrics out of it   If the system in question has only a handful of metrics that rarely change  then getting everything perfect is an easy choice  a good example of this is the  HAProxy exporter  https   github com prometheus haproxy exporter    On the other hand  if you try to get things perfect when the system has hundreds of metrics that change frequently with new versions  then you ve signed yourself up for a lot of ongoing work  The  MySQL exporter  https   github com prometheus mysqld exporter  is on this end of the spectrum   The  node exporter  https   github com prometheus node exporter  is a mix of these  with complexity varying by module  For example  the  mdadm  collector hand parses a file and exposes metrics created 
specifically for that collector  so we may as well get the metrics right  For the  meminfo  collector the results vary across kernel versions so we end up doing just enough of a transform to create valid metrics      Configuration  When working with applications  you should aim for an exporter that requires no custom configuration by the user beyond telling it where the application is   You may also need to offer the ability to filter out certain metrics if they may be too granular and expensive on large setups  for example the  HAProxy exporter  https   github com prometheus haproxy exporter  allows filtering of per server stats  Similarly  there may be expensive metrics that are disabled by default   When working with other monitoring systems  frameworks and protocols you will often need to provide additional configuration or customization to generate metrics suitable for Prometheus  In the best case scenario  a monitoring system has a similar enough data model to Prometheus that you can automatically determine how to transform metrics  This is the case for  Cloudwatch  https   github com prometheus cloudwatch exporter    SNMP  https   github com prometheus snmp exporter  and  collectd  https   github com prometheus collectd exporter   At most  we need the ability to let the user select which metrics they want to pull out   In other cases  metrics from the system are completely non standard  depending on the usage of the system and the underlying application   In that case the user has to tell us how to transform the metrics  The  JMX exporter  https   github com prometheus jmx exporter  is the worst offender here  with the  Graphite  https   github com prometheus graphite exporter  and  StatsD  https   github com prometheus statsd exporter  exporters also requiring configuration to extract labels   Ensuring the exporter works out of the box without configuration  and providing a selection of example configurations for transformation if required  is advised   
YAML is the standard Prometheus configuration format, all configuration should use YAML by default.

## Metrics

### Naming

Follow the [best practices on metric naming](/docs/practices/naming/).

Generally metric names should allow someone who is familiar with Prometheus but not a particular system to make a good guess as to what a metric means. A metric named `http_requests_total` is not extremely useful - are these being measured as they come in, in some filter or when they get to the user's code? And `requests_total` is even worse, what type of requests?

With direct instrumentation, a given metric should exist within exactly one file. Accordingly, within exporters and collectors, a metric should apply to exactly one subsystem and be named accordingly.

Metric names should never be procedurally generated, except when writing a custom collector or exporter.

Metric names for applications should generally be prefixed by the exporter name, e.g. `haproxy_up`.

Metrics must use base units (e.g. seconds, bytes) and leave converting them to something more readable to graphing tools. No matter what units you end up using, the units in the metric name must match the units in use. Similarly, expose ratios, not percentages. Even better, specify a counter for each of the two components of the ratio.

Metric names should not include the labels that they're exported with, e.g. `by_type`, as that won't make sense if the label is aggregated away.

The one exception is when you're exporting the same data with different labels via multiple metrics, in which case that's usually the sanest way to distinguish them. For direct instrumentation, this should only come up when exporting a single metric with all the labels would have too high a cardinality.

Prometheus metrics and label names are written in `snake_case`. Converting `camelCase` to `snake_case` is desirable, though doing so automatically doesn't always produce nice results for things like `myTCPExample` or `isNaN`, so sometimes it's best to leave them as-is.

Exposed metrics should not contain colons, these are reserved for user-defined recording rules to use when aggregating.

Only `[a-zA-Z0-9:_]` are valid in metric names.

The `_sum`, `_count`, `_bucket` and `_total` suffixes are used by Summaries, Histograms and Counters. Unless you're producing one of those, avoid these suffixes.

`_total` is a convention for counters, you should use it if you're using the `COUNTER` type.

The `process_` and `scrape_` prefixes are reserved. It's okay to add your own prefix on to these if they follow matching semantics. For example, Prometheus has `scrape_duration_seconds` for how long a scrape took, it's good practice to also have an exporter-centric metric, e.g. `jmx_scrape_duration_seconds`, saying how long the specific exporter took to do its thing. For process stats where you have access to the PID, both Go and Python offer collectors that'll handle this for you. A good example of this is the [HAProxy exporter](https://github.com/prometheus/haproxy_exporter).

When you have a successful request count and a failed request count, the best way to expose this is as one metric for total requests and another metric for failed requests. This makes it easy to calculate the failure ratio. Do not use one metric with a failed or success label. Similarly, with hit or miss for caches, it's better to have one metric for total and another for hits.

Consider the likelihood that someone using monitoring will do a code or web search for the metric name. If the names are very well established and unlikely to be used outside of the realm of people used to those names, for example SNMP and network engineers, then leaving them as-is may be a good idea. This logic doesn't apply for all exporters, for example the MySQL exporter metrics may be used by a variety of people, not just DBAs. A `HELP` string with the original name can provide most of the same benefits as using the original names.

### Labels

Read the [general advice](/docs/practices/instrumentation/#things-to-watch-out-for) on labels.

Avoid `type` as a label name, it's too generic and often meaningless. You should also try where possible to avoid names that are likely to clash with target labels, such as `region`, `zone`, `cluster`, `availability_zone`, `az`, `datacenter`, `dc`, `owner`, `customer`, `stage`, `service`, `environment` and `env`. If, however, that's what the application calls some resource, it's best not to cause confusion by renaming it.

Avoid the temptation to put things into one metric just because they share a prefix. Unless you're sure something makes sense as one metric, multiple metrics is safer.

The label `le` has special meaning for Histograms, and `quantile` for Summaries. Avoid these labels generally.

Read/write and send/receive are best as separate metrics, rather than as a label. This is usually because you care about only one of them at a time, and it is easier to use them that way.

The rule of thumb is that one metric should make sense when summed or averaged. There is one other case that comes up with exporters, and that's where the data is fundamentally tabular and doing otherwise would require users to do regexes on metric names to be usable. Consider the voltage sensors on your motherboard, while doing math across them is meaningless, it makes sense to have them in one metric rather than having one metric per sensor. All values within a metric should (almost) always have the same unit, for example consider if fan speeds were mixed in with the voltages, and you had no way to automatically separate them.

Don't do this:

<pre>
my_metric{label="a"} 1
my_metric{label="b"} 6
<b>my_metric{label="total"} 7</b>
</pre>

or this:

<pre>
my_metric{label="a"} 1
my_metric{label="b"} 6
<b>my_metric 7</b>
</pre>

The former breaks for people who do a `sum()` over your metric, and the latter breaks sum and is quite difficult to work with. Some client libraries, for example Go, will actively try to stop you doing the latter in a custom collector, and all client libraries should stop you from doing the latter with direct instrumentation. Never do either of these, rely on Prometheus aggregation instead.

If your monitoring exposes a total like this, drop the total. If you have to keep it around for some reason, for example the total includes things not counted individually, use different metric names.

Instrumentation labels should be minimal, every extra label is one more that users need to consider when writing their PromQL. Accordingly, avoid having instrumentation labels which could be removed without affecting the uniqueness of the time series. Additional information around a metric can be added via an info metric, for an example see above how to handle version numbers.

However, there are cases where it is expected that virtually all users of a metric will want the additional information. If so, adding a non-unique label, rather than an info metric, is the right solution. For example the [mysqld_exporter](https://github.com/prometheus/mysqld_exporter)'s `mysqld_perf_schema_events_statements_total`'s `digest` label is a hash of the full query pattern and is sufficient for uniqueness. However, it is of little use without the human-readable `digest_text` label, which for long queries will contain only the start of the query pattern and is thus not unique. Thus we end up with both the `digest_text` label for humans and the `digest` label for uniqueness.

### Target labels, not static scraped labels

If you ever find yourself wanting to apply the same label to all of your metrics, stop.

There's generally two cases where this comes up.

The first is for some label it would be useful to have on the metrics, such as the version number of the software. Instead, use the approach described at [How to have labels for machine roles](https://www.robustperception.io/how-to-have-labels-for-machine-roles).

The second case is when a label is really a target label. These are things like region, cluster names, and so on, that come from your infrastructure setup rather than the application itself. It's not for an application to say where it fits in your label taxonomy, that's for the person running the Prometheus server to configure, and different people monitoring the same application may give it different names.

Accordingly, these labels belong up in the scrape configs of Prometheus via whatever service discovery you're using. It's okay to apply the concept of machine roles here as well, as it's likely useful information for at least some people scraping it.

### Types

You should try to match up the types of your metrics to Prometheus types. This usually means counters and gauges. The `_count` and `_sum` of summaries are also relatively common, and on occasion you'll see quantiles. Histograms are rare, if you come across one remember that the exposition format exposes cumulative values.

Often it won't be obvious what the type of metric is, especially if you're automatically processing a set of metrics. In general `UNTYPED` is a safe default.

Counters can't go down, so if you have a counter type coming from another instrumentation system that can be decremented, for example Dropwizard metrics, then it's not a counter, it's a gauge. `UNTYPED` is probably the best type to use there, as `GAUGE` would be misleading if it were being used as a counter.

### Help strings

When you're transforming metrics it's useful for users to be able to track back to what the original was, and what rules were in play that caused that transformation. Putting in the name of the collector or exporter, the ID of any rule that was applied and the name and details of the original metric into the help string will greatly aid users.

Prometheus doesn't like one metric having different help strings. If you're making one metric from many others, choose one of them to put in the help string.

For examples of this, the SNMP exporter uses the OID and the JMX exporter puts in a sample mBean name. The [HAProxy exporter](https://github.com/prometheus/haproxy_exporter) has hand-written strings. The [node exporter](https://github.com/prometheus/node_exporter) also has a wide variety of examples.

### Drop less useful statistics

Some instrumentation systems expose 1m, 5m, 15m rates, average rates since application start (these are called `mean` in Dropwizard metrics for example) in addition to minimums, maximums and standard deviations.

These should all be dropped, as they're not very useful and add clutter. Prometheus can calculate rates itself, and usually more accurately, as the averages exposed are usually exponentially decaying. You don't know what time the min or max were calculated over, and the standard deviation is statistically useless and you can always expose sum of squares, `_sum` and `_count` if you ever need to calculate it.

Quantiles have related issues, you may choose to drop them or put them in a Summary.

### Dotted strings

Many monitoring systems don't have labels, instead doing things like `my.class.path.mymetric.labelvalue1.labelvalue2.labelvalue3`. The [Graphite](https://github.com/prometheus/graphite_exporter) and [StatsD](https://github.com/prometheus/statsd_exporter) exporters share a way of transforming these with a small configuration language. Other exporters should implement the same. The transformation is currently implemented only in Go, and would benefit from being factored out into a separate library.

## Collectors

When implementing the collector for your exporter, you should never use the usual direct instrumentation approach and then update the metrics on each scrape.

Rather create new metrics each time. In Go this is done with [MustNewConstMetric](https://godoc.org/github.com/prometheus/client_golang/prometheus#MustNewConstMetric) in your `Collect()` method. For Python see the [client_python documentation on custom collectors](https://prometheus.github.io/client_python/collector/custom/), and for Java generate a `List<MetricFamilySamples>` in your collect method, see [StandardExports.java](https://github.com/prometheus/client_java/blob/master/simpleclient_hotspot/src/main/java/io/prometheus/client/hotspot/StandardExports.java) for an example.

The reason for this is two-fold. Firstly, two scrapes could happen at the same time, and direct instrumentation uses what are effectively file-level global variables, so you'll get race conditions. Secondly, if a label value disappears, it'll still be exported.

Instrumenting your exporter itself via direct instrumentation is fine, e.g. total bytes transferred or calls performed by the exporter across all scrapes. For exporters such as the [blackbox exporter](https://github.com/prometheus/blackbox_exporter) and [SNMP exporter](https://github.com/prometheus/snmp_exporter), which aren't tied to a single target, these should only be exposed on a vanilla `/metrics` call, not on a scrape of a particular target.

### Metrics about the scrape itself

Sometimes you'd like to export metrics that are about the scrape, like how long it took or how many records you processed.

These should be exposed as gauges, as they're about an event, the scrape, and the metric name prefixed by the exporter name, for example `jmx_scrape_duration_seconds`. Usually the `_exporter` is excluded, and if the exporter also makes sense to use as just a collector, then definitely exclude it.

Other scrape "meta" metrics should be avoided. For example, a counter for the number of scrapes, or a histogram of the scrape duration. Having the exporter track these metrics duplicates the [automatically generated metrics](/docs/concepts/jobs_instances/#automatically-generated-labels-and-time-series) of Prometheus itself. This adds to the storage cost of every exporter instance.

### Machine and process metrics

Many systems, for example Elasticsearch, expose machine metrics such as CPU, memory and filesystem information. As the [node exporter](https://github.com/prometheus/node_exporter) provides these in the Prometheus ecosystem, such metrics should be dropped.

In the Java world, many instrumentation frameworks expose process-level and JVM-level stats such as CPU and GC. The Java client and JMX exporter already include these in the preferred form via [DefaultExports.java](https://github.com/prometheus/client_java/blob/master/simpleclient_hotspot/src/main/java/io/prometheus/client/hotspot/DefaultExports.java), so these should also be dropped.

Similarly with other languages and frameworks.

## Deployment

Each exporter should monitor exactly one instance application, preferably sitting right beside it on the same machine. That means for every HAProxy you run, you run a `haproxy_exporter` process. For every machine with a Mesos worker, you run the [Mesos exporter](https://github.com/mesosphere/mesos_exporter) on it, and another one for the master, if a machine has both.

The theory behind this is that for direct instrumentation this is what you'd be doing, and we're trying to get as close to that as we can in other layouts. This means that all service discovery is done in Prometheus, not in exporters. This also has the benefit that Prometheus has the target information it needs to allow users to probe your service with the [blackbox exporter](https://github.com/prometheus/blackbox_exporter).

There are two exceptions:

The first is where running beside the application you are monitoring is completely nonsensical. The SNMP, blackbox and IPMI exporters are the main examples of this. The IPMI and SNMP exporters as the devices are often black boxes that it's impossible to run code on (though if you could run a node exporter on them instead that'd be better), and the blackbox exporter where you're monitoring something like a DNS name, where there's also nothing to run on. In this case, Prometheus should still do service discovery, and pass on the target to be scraped. See the blackbox and SNMP exporters for examples.

Note that it is only currently possible to write this type of exporter with the Go, Python and Java client libraries.

The second exception is where you're pulling some stats out of a random instance of a system and don't care which one you're talking to. Consider a set of MySQL replicas you wanted to run some business queries against the data to then export. Having an exporter that uses your usual load balancing approach to talk to one replica is the sanest approach.

This doesn't apply when you're monitoring a system with master election, in that case you should monitor each instance individually and deal with the "masterness" in Prometheus. This is as there isn't always exactly one master, and changing what a target is underneath Prometheus's feet will cause oddities.

### Scheduling

Metrics should only be pulled from the application when Prometheus scrapes them, exporters should not perform scrapes based on their own timers. That is, all scrapes should be synchronous.

Accordingly, you should not set timestamps on the metrics you expose, let Prometheus take care of that. If you think you need timestamps, then you probably need the [Pushgateway](https://prometheus.io/docs/instrumenting/pushing/) instead.

If a metric is particularly expensive to retrieve, i.e. takes more than a minute, it is acceptable to cache it. This should be noted in the `HELP` string.

The default scrape timeout for Prometheus is 10 seconds. If your exporter can be expected to exceed this, you should explicitly call this out in your user documentation.

### Pushes

Some applications and monitoring systems only push metrics, for example StatsD, Graphite and collectd.

There are two considerations here.

Firstly, when do you expire metrics? Collectd and things talking to Graphite both export regularly, and when they stop we want to stop exposing the metrics. Collectd includes an expiry time so we use that, Graphite doesn't so it is a flag on the exporter.

StatsD is a bit different, as it is dealing with events rather than metrics. The best model is to run one exporter beside each application and restart them when the application restarts so that the state is cleared.

Secondly, these sort of systems tend to allow your users to send either deltas or raw counters. You should rely on the raw counters as far as possible, as that's the general Prometheus model.

For service-level metrics, e.g. service-level batch jobs, you should have your exporter push into the Pushgateway and exit after the event rather than handling the state yourself. For instance-level batch metrics, there is no clear pattern yet. The options are either to abuse the node exporter's textfile collector, rely on in-memory state (probably best if you don't need to persist over a reboot) or implement similar functionality to the textfile collector.

### Failed scrapes

There are currently two patterns for failed scrapes where the application you're talking to doesn't respond or has other problems.

The first is to return a 5xx error.

The second is to have a `myexporter_up`, e.g. `haproxy_up`, variable that has a value of 0 or 1 depending on whether the scrape worked.

The latter is better where there's still some useful metrics you can get even with a failed scrape, such as the HAProxy exporter providing process stats. The former is a tad easier for users to deal with, as [`up` works in the usual way](/docs/concepts/jobs_instances/#automatically-generated-labels-and-time-series), although you can't distinguish between the exporter being down and the application being down.

### Landing page

It's nicer for users if visiting `http://yourexporter/` has a simple HTML page with the name of the exporter, and a link to the `/metrics` page.

### Port numbers

A user may have many exporters and Prometheus components on the same machine, so to make that easier each has a unique port number.

[The wiki's Default port allocations page](https://github.com/prometheus/prometheus/wiki/Default-port-allocations) is where we track them, this is publicly editable.

Feel free to grab the next free port number when developing your exporter, preferably before publicly announcing it. If you're not ready to release yet, putting your username and WIP is fine.

This is a registry to make our users' lives a little easier, not a commitment to develop particular exporters. For exporters for internal applications we recommend using ports outside of the range of default port allocations.

## Announcing

Once you're ready to announce your exporter to the world, email the mailing list and send a PR to add it to [the list of available exporters](https://github.com/prometheus/docs/blob/main/content/docs/instrumenting/exporters.md)."}
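A minimal sketch tying together the collector and failed-scrape advice above: metrics are gathered fresh on every scrape (no state held between scrapes) and rendered in the text exposition format, with a `myexporter_up` gauge reporting whether the backend could be reached. All metric names here are hypothetical, and a real exporter should use an official client library rather than hand-rolling the format like this:

```python
def escape_label_value(v: str) -> str:
    # Label values must escape backslash, double-quote and newline.
    return v.replace("\\", "\\\\").replace('"', '\\"').replace("\n", "\\n")

def render(name, help_text, mtype, samples):
    """Render one metric family (HELP, TYPE, then samples) in the text format."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} {mtype}"]
    for labels, value in samples:
        if labels:
            inner = ",".join(f'{k}="{escape_label_value(v)}"'
                             for k, v in sorted(labels.items()))
            lines.append(f"{name}{{{inner}}} {value}")
        else:
            lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

def collect(fetch_stats):
    """Called once per scrape; fetch_stats talks to the backend each time."""
    try:
        stats = fetch_stats()
        up = 1
    except OSError:
        stats, up = {}, 0
    out = render("myexporter_up",
                 "Whether the last scrape of the backend succeeded.",
                 "gauge", [({}, up)])
    if stats:
        out += render("myexporter_requests_total",
                      "Requests handled by the backend.", "counter",
                      [({"method": m}, v) for m, v in stats.items()])
    return out
```

Because `collect` builds everything fresh per call, concurrent scrapes see consistent snapshots and label values that disappear from the backend stop being exported, which is exactly why the text above warns against updating directly-instrumented globals on each scrape.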
{"questions":"prometheus sortrank 6 title Exposition formats Exposition formats Metrics can be exposed to Prometheus using a simple that implement this format for you If your preferred language doesn t have a client exposition format There are various","answers":"---\ntitle: Exposition formats\nsort_rank: 6\n---\n\n# Exposition formats\n\nMetrics can be exposed to Prometheus using a simple [text-based](#text-based-format)\nexposition format. There are various [client libraries](\/docs\/instrumenting\/clientlibs\/)\nthat implement this format for you. If your preferred language doesn't have a client\nlibrary you can [create your own](\/docs\/instrumenting\/writing_clientlibs\/).\n\n## Text-based format\n\nAs of Prometheus version 2.0, all processes that expose metrics to Prometheus need to use\na text-based format. In this section you can find some [basic information](#basic-info)\nabout this format as well as a more [detailed breakdown](#text-format-details) of the\nformat.\n\n### Basic info\n\n| Aspect | Description |\n|--------|-------------|\n| **Inception** | April 2014  |\n| **Supported in** |  Prometheus version `>=0.4.0` |\n| **Transmission** | HTTP |\n| **Encoding** | UTF-8, `\\n` line endings |\n| **HTTP `Content-Type`** | `text\/plain; version=0.0.4` (A missing `version` value will lead to a fall-back to the most recent text format version.) 
|\n| **Optional HTTP `Content-Encoding`** | `gzip` |\n| **Advantages** | <ul><li>Human-readable<\/li><li>Easy to assemble, especially for minimalistic cases (no nesting required)<\/li><li>Readable line by line (with the exception of type hints and docstrings)<\/li><\/ul> |\n| **Limitations** | <ul><li>Verbose<\/li><li>Types and docstrings not integral part of the syntax, meaning little-to-nonexistent metric contract validation<\/li><li>Parsing cost<\/li><\/ul>|\n| **Supported metric primitives** | <ul><li>Counter<\/li><li>Gauge<\/li><li>Histogram<\/li><li>Summary<\/li><li>Untyped<\/li><\/ul> |\n\n### Text format details\n\nPrometheus' text-based format is line oriented. Lines are separated by a line\nfeed character (`\\n`). The last line must end with a line feed character.\nEmpty lines are ignored.\n\n#### Line format\n\nWithin a line, tokens can be separated by any number of blanks and\/or tabs (and\nmust be separated by at least one if they would otherwise merge with the previous\ntoken). Leading and trailing whitespace is ignored.\n\n#### Comments, help text, and type information\n\nLines with a `#` as the first non-whitespace character are comments. They are\nignored unless the first token after `#` is either `HELP` or `TYPE`. Those\nlines are treated as follows: If the token is `HELP`, at least one more token\nis expected, which is the metric name. All remaining tokens are considered the\ndocstring for that metric name. `HELP` lines may contain any sequence of UTF-8\ncharacters (after the metric name), but the backslash and the line feed\ncharacters have to be escaped as `\\\\` and `\\n`, respectively. Only one `HELP`\nline may exist for any given metric name.\n\nIf the token is `TYPE`, exactly two more tokens are expected. The first is the\nmetric name, and the second is either `counter`, `gauge`, `histogram`,\n`summary`, or `untyped`, defining the type for the metric of that name. Only\none `TYPE` line may exist for a given metric name. 
The `TYPE` line for a\nmetric name must appear before the first sample is reported for that metric\nname. If there is no `TYPE` line for a metric name, the type is set to\n`untyped`.\n\nThe remaining lines describe samples (one per line) using the following syntax\n([EBNF](https:\/\/en.wikipedia.org\/wiki\/Extended_Backus%E2%80%93Naur_form)):\n\n```\nmetric_name [\n  \"{\" label_name \"=\" `\"` label_value `\"` { \",\" label_name \"=\" `\"` label_value `\"` } [ \",\" ] \"}\"\n] value [ timestamp ]\n```\n\nIn the sample syntax:\n\n*  `metric_name` and `label_name` carry the usual Prometheus expression language restrictions.\n* `label_value` can be any sequence of UTF-8 characters, but the backslash (`\\`), double-quote (`\"`), and line feed (`\\n`) characters have to be escaped as `\\\\`, `\\\"`, and `\\n`, respectively.\n* `value` is a float represented as required by Go's [`ParseFloat()`](https:\/\/golang.org\/pkg\/strconv\/#ParseFloat) function. In addition to standard numerical values, `NaN`, `+Inf`, and `-Inf` are valid values representing not a number, positive infinity, and negative infinity, respectively.\n* The `timestamp` is an `int64` (milliseconds since epoch, i.e. 1970-01-01 00:00:00 UTC, excluding leap seconds), represented as required by Go's [`ParseInt()`](https:\/\/golang.org\/pkg\/strconv\/#ParseInt) function.\n\n#### Grouping and sorting\n\nAll lines for a given metric must be provided as one single group, with\nthe optional `HELP` and `TYPE` lines first (in no particular order). Beyond\nthat, reproducible sorting in repeated expositions is preferred but not\nrequired, i.e. do not sort if the computational cost is prohibitive.\n\nEach line must have a unique combination of a metric name and labels. Otherwise,\nthe ingestion behavior is undefined.\n\n#### Histograms and summaries\n\nThe `histogram` and `summary` types are difficult to represent in the text\nformat. 
The following conventions apply:\n\n* The sample sum for a summary or histogram named `x` is given as a separate sample named `x_sum`.\n* The sample count for a summary or histogram named `x` is given as a separate sample named `x_count`.\n* Each quantile of a summary named `x` is given as a separate sample line with the same name `x` and a label `{quantile=\"y\"}`.\n* Each bucket count of a histogram named `x` is given as a separate sample line with the name `x_bucket` and a label `{le=\"y\"}` (where `y` is the upper bound of the bucket).\n* A histogram _must_ have a bucket with `{le=\"+Inf\"}`. Its value _must_ be identical to the value of `x_count`.\n* The buckets of a histogram and the quantiles of a summary must appear in increasing numerical order of their label values (for the `le` or the `quantile` label, respectively).\n\n### Text format example\n\nBelow is an example of a full-fledged Prometheus metric exposition, including\ncomments, `HELP` and `TYPE` expressions, a histogram, a summary, character\nescaping examples, and more.\n\n```\n# HELP http_requests_total The total number of HTTP requests.\n# TYPE http_requests_total counter\nhttp_requests_total{method=\"post\",code=\"200\"} 1027 1395066363000\nhttp_requests_total{method=\"post\",code=\"400\"}    3 1395066363000\n\n# Escaping in label values:\nmsdos_file_access_time_seconds{path=\"C:\\\\DIR\\\\FILE.TXT\",error=\"Cannot find file:\\n\\\"FILE.TXT\\\"\"} 1.458255915e9\n\n# Minimalistic line:\nmetric_without_timestamp_and_labels 12.47\n\n# A weird metric from before the epoch:\nsomething_weird{problem=\"division by zero\"} +Inf -3982045\n\n# A histogram, which has a pretty complex representation in the text format:\n# HELP http_request_duration_seconds A histogram of the request duration.\n# TYPE http_request_duration_seconds histogram\nhttp_request_duration_seconds_bucket{le=\"0.05\"} 24054\nhttp_request_duration_seconds_bucket{le=\"0.1\"} 33444\nhttp_request_duration_seconds_bucket{le=\"0.2\"} 
100392\nhttp_request_duration_seconds_bucket{le=\"0.5\"} 129389\nhttp_request_duration_seconds_bucket{le=\"1\"} 133988\nhttp_request_duration_seconds_bucket{le=\"+Inf\"} 144320\nhttp_request_duration_seconds_sum 53423\nhttp_request_duration_seconds_count 144320\n\n# Finally a summary, which has a complex representation, too:\n# HELP rpc_duration_seconds A summary of the RPC duration in seconds.\n# TYPE rpc_duration_seconds summary\nrpc_duration_seconds{quantile=\"0.01\"} 3102\nrpc_duration_seconds{quantile=\"0.05\"} 3272\nrpc_duration_seconds{quantile=\"0.5\"} 4773\nrpc_duration_seconds{quantile=\"0.9\"} 9001\nrpc_duration_seconds{quantile=\"0.99\"} 76656\nrpc_duration_seconds_sum 1.7560473e+07\nrpc_duration_seconds_count 2693\n```\n\n## OpenMetrics Text Format\n\n[OpenMetrics](https:\/\/github.com\/OpenObservability\/OpenMetrics) is an effort to standardize metric wire formatting, built off of the Prometheus text format. It is possible to scrape targets with this format,\nand it is also available for federating metrics since at least v2.23.0.\n\n### Exemplars (Experimental)\n\nUtilizing the OpenMetrics format allows for the exposition and querying of [Exemplars](https:\/\/github.com\/OpenObservability\/OpenMetrics\/blob\/main\/specification\/OpenMetrics.md#exemplars).\nExemplars provide a point-in-time snapshot related to a metric set for an otherwise summarized MetricFamily. Additionally, they may have a Trace ID attached to them, which, when used together\nwith a tracing system, can provide more detailed information related to the specific service.\n\nTo enable this experimental feature you must have at least version v2.26.0 and add `--enable-feature=exemplar-storage` to your arguments.\n\n## Protobuf format\n\nEarlier versions of Prometheus supported an exposition format based on [Protocol Buffers](https:\/\/developers.google.com\/protocol-buffers\/) (aka Protobuf) in addition to the current text-based format. 
With Prometheus 2.0, the Protobuf format was marked as deprecated and Prometheus stopped ingesting samples from said exposition format.\n\nHowever, new experimental features were added to Prometheus where the Protobuf format was considered the most viable option, making Prometheus accept Protocol Buffers once again.\n\nHere is a list of experimental features that, once enabled, will configure Prometheus to favor the Protobuf exposition format:\n\n| feature flag | version that introduced it |\n|--------------|----------------------------|\n| native-histograms | 2.40.0 |\n| created-timestamp-zero-ingestion | 2.50.0 |\n\n## Historical versions\n\nFor details on historical format versions, see the legacy\n[Client Data Exposition Format](https:\/\/docs.google.com\/document\/d\/1ZjyKiKxZV83VI9ZKAXRGKaUKK2BIWCT7oiGBKDBpjEY\/edit?usp=sharing)\ndocument.\n\nThe current version of the original Protobuf format (with the recent extensions\nfor native histograms) is maintained in the [prometheus\/client_model\nrepository](https:\/\/github.com\/prometheus\/client_model).","site":"prometheus","answers_cleaned":"    title  Exposition formats sort rank  6        Exposition formats  Metrics can be exposed to Prometheus using a simple  text based   text based format  exposition format  There are various  client libraries   docs instrumenting clientlibs   that implement this format for you  If your preferred language doesn t have a client library you can  create your own   docs instrumenting writing clientlibs        Text based format  As of Prometheus version 2 0  all processes that expose metrics to Prometheus need to use a text based format  In this section you can find some  basic information   basic info  about this format as well as a more  detailed breakdown   text format details  of the format       Basic info    Aspect   Description                                Inception     April 2014        Supported in      Prometheus version    0 4 0        Transmission     HTTP       
Encoding     UTF 8    n  line endings       HTTP  Content Type       text plain  version 0 0 4   A missing  version  value will lead to a fall back to the most recent text format version         Optional HTTP  Content Encoding       gzip        Advantages      ul  li Human readable  li  li Easy to assemble  especially for minimalistic cases  no nesting required   li  li Readable line by line  with the exception of type hints and docstrings   li   ul        Limitations      ul  li Verbose  li  li Types and docstrings not integral part of the syntax  meaning little to nonexistent metric contract validation  li  li Parsing cost  li   ul       Supported metric primitives      ul  li Counter  li  li Gauge  li  li Histogram  li  li Summary  li  li Untyped  li   ul         Text format details  Prometheus  text based format is line oriented  Lines are separated by a line feed character    n    The last line must end with a line feed character  Empty lines are ignored        Line format  Within a line  tokens can be separated by any number of blanks and or tabs  and must be separated by at least one if they would otherwise merge with the previous token   Leading and trailing whitespace is ignored        Comments  help text  and type information  Lines with a     as the first non whitespace character are comments  They are ignored unless the first token after     is either  HELP  or  TYPE   Those lines are treated as follows  If the token is  HELP   at least one more token is expected  which is the metric name  All remaining tokens are considered the docstring for that metric name   HELP  lines may contain any sequence of UTF 8 characters  after the metric name   but the backslash and the line feed characters have to be escaped as      and   n   respectively  Only one  HELP  line may exist for any given metric name   If the token is  TYPE   exactly two more tokens are expected  The first is the metric name  and the second is either  counter    gauge    histogram    summary   
or  untyped   defining the type for the metric of that name  Only one  TYPE  line may exist for a given metric name  The  TYPE  line for a metric name must appear before the first sample is reported for that metric name  If there is no  TYPE  line for a metric name  the type is set to  untyped    The remaining lines describe samples  one per line  using the following syntax   EBNF  https   en wikipedia org wiki Extended Backus E2 80 93Naur form         metric name         label name         label value           label name         label value                     value   timestamp        In the sample syntax       metric name  and  label name  carry the usual Prometheus expression language restrictions     label value  can be any sequence of UTF 8 characters  but the backslash        double quote        and line feed    n   characters have to be escaped as             and   n   respectively     value  is a float represented as required by Go s   ParseFloat     https   golang org pkg strconv  ParseFloat  function  In addition to standard numerical values   NaN     Inf   and   Inf  are valid values representing not a number  positive infinity  and negative infinity  respectively    The  timestamp  is an  int64   milliseconds since epoch  i e  1970 01 01 00 00 00 UTC  excluding leap seconds   represented as required by Go s   ParseInt     https   golang org pkg strconv  ParseInt  function        Grouping and sorting  All lines for a given metric must be provided as one single group  with the optional  HELP  and  TYPE  lines first  in no particular order   Beyond that  reproducible sorting in repeated expositions is preferred but not required  i e  do not sort if the computational cost is prohibitive   Each line must have a unique combination of a metric name and labels  Otherwise  the ingestion behavior is undefined        Histograms and summaries  The  histogram  and  summary  types are difficult to represent in the text format  The following conventions apply     The 
sample sum for a summary or histogram named  x  is given as a separate sample named  x sum     The sample count for a summary or histogram named  x  is given as a separate sample named  x count     Each quantile of a summary named  x  is given as a separate sample line with the same name  x  and a label   quantile  y       Each bucket count of a histogram named  x  is given as a separate sample line with the name  x bucket  and a label   le  y     where  y  is the upper bound of the bucket     A histogram  must  have a bucket with   le   Inf     Its value  must  be identical to the value of  x count     The buckets of a histogram and the quantiles of a summary must appear in increasing numerical order of their label values  for the  le  or the  quantile  label  respectively        Text format example  Below is an example of a full fledged Prometheus metric exposition  including comments   HELP  and  TYPE  expressions  a histogram  a summary  character escaping examples  and more         HELP http requests total The total number of HTTP requests    TYPE http requests total counter http requests total method  post  code  200   1027 1395066363000 http requests total method  post  code  400      3 1395066363000    Escaping in label values  msdos file access time seconds path  C   DIR  FILE TXT  error  Cannot find file  n  FILE TXT     1 458255915e9    Minimalistic line  metric without timestamp and labels 12 47    A weird metric from before the epoch  something weird problem  division by zero    Inf  3982045    A histogram  which has a pretty complex representation in the text format    HELP http request duration seconds A histogram of the request duration    TYPE http request duration seconds histogram http request duration seconds bucket le  0 05   24054 http request duration seconds bucket le  0 1   33444 http request duration seconds bucket le  0 2   100392 http request duration seconds bucket le  0 5   129389 http request duration seconds bucket le  1   133988 
http request duration seconds bucket le   Inf   144320 http request duration seconds sum 53423 http request duration seconds count 144320    Finally a summary  which has a complex representation  too    HELP rpc duration seconds A summary of the RPC duration in seconds    TYPE rpc duration seconds summary rpc duration seconds quantile  0 01   3102 rpc duration seconds quantile  0 05   3272 rpc duration seconds quantile  0 5   4773 rpc duration seconds quantile  0 9   9001 rpc duration seconds quantile  0 99   76656 rpc duration seconds sum 1 7560473e 07 rpc duration seconds count 2693         OpenMetrics Text Format   OpenMetrics  https   github com OpenObservability OpenMetrics  is the an effort to standardize metric wire formatting built off of Prometheus text format  It is possible to scrape targets and it is also available to use for federating metrics since at least v2 23 0       Exemplars  Experimental   Utilizing the OpenMetrics format allows for the exposition and querying of  Exemplars  https   github com OpenObservability OpenMetrics blob main specification OpenMetrics md exemplars   Exemplars provide a point in time snapshot related to a metric set for an otherwise summarized MetricFamily  Additionally they may have a Trace ID attached to them which when used to together with a tracing system can provide more detailed information related to the specific service   To enable this experimental feature you must have at least version v2 26 0 and add    enable feature exemplar storage  to your arguments      Protobuf format  Earlier versions of Prometheus supported an exposition format based on  Protocol Buffers  https   developers google com protocol buffers    aka Protobuf  in addition to the current text based format  With Prometheus 2 0  the Protobuf format was marked as deprecated and Prometheus stopped ingesting samples from said exposition format   However  new experimental features were added to Prometheus where the Protobuf format was considered the 
most viable option  Making Prometheus accept Protocol Buffers once again   Here is a list of experimental features that  once enabled  will configure Prometheus to favor the Protobuf exposition format     feature flag   version that introduced it                                                   native histograms   2 40 0     created timestamp zero ingestion   2 50 0       Historical versions  For details on historical format versions  see the legacy  Client Data Exposition Format  https   docs google com document d 1ZjyKiKxZV83VI9ZKAXRGKaUKK2BIWCT7oiGBKDBpjEY edit usp sharing  document   The current version of the original Protobuf format  with the recent extensions for native histograms  is maintained in the  prometheus client model repository  https   github com prometheus client model  "}
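The text-format conventions documented above (a `# HELP`/`# TYPE` comment pair per metric, escaped backslash, double-quote and line-feed characters in label values, and a trailing line feed on the exposition) can be sketched as a tiny exposition helper. This is a minimal illustrative sketch, not the official `prometheus_client` library; the function names are made up for this example.

```python
def escape_label_value(v: str) -> str:
    # Per the text format, backslash, double-quote and line-feed
    # must be escaped as \\, \" and \n inside label values.
    return v.replace("\\", "\\\\").replace('"', '\\"').replace("\n", "\\n")

def render_counter(name, help_text, samples):
    """Render one counter metric family.

    samples: list of (labels-dict, value) pairs.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for labels, value in samples:
        label_str = ",".join(
            f'{k}="{escape_label_value(v)}"' for k, v in labels.items()
        )
        if label_str:
            lines.append(f"{name}{{{label_str}}} {value}")
        else:
            lines.append(f"{name} {value}")
    # The last line of an exposition must end with a line feed.
    return "\n".join(lines) + "\n"

print(render_counter(
    "http_requests_total",
    "The total number of HTTP requests.",
    [({"method": "post", "code": "200"}, 1027)],
))
```

The output matches the `http_requests_total` lines in the full example above (minus the optional timestamp column).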
{"questions":"prometheus Prometheus Remote Write Specification title Prometheus Remote Write 1 0 Version 1 0 sortrank 5 Status Published Date April 2023","answers":"---\ntitle: Prometheus Remote-Write 1.0\nsort_rank: 5\n---\n\n# Prometheus Remote-Write Specification\n\n- Version: 1.0\n- Status: Published\n- Date: April 2023\n\nThis document is intended to define and standardise the API, wire format, protocol and semantics of the existing, widely and organically adopted protocol, and not to propose anything new.\n\nThe remote write specification is intended to document the standard for how Prometheus and Prometheus remote-write-compatible agents send data to a Prometheus or Prometheus remote-write compatible receiver.\n\nThe key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\",  \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in [RFC 2119](https:\/\/datatracker.ietf.org\/doc\/html\/rfc2119).\n\n> NOTE: This specification has a 2.0 version available, see [here](.\/remote_write_spec_2_0.md).\n\n## Introduction\n### Background\n\nThe remote write protocol is designed to make it possible to reliably propagate samples in real-time from a sender to a receiver, without loss.\n\nThe Remote-Write protocol is designed to be stateless; there is strictly no inter-message communication. As such the protocol is not considered \"streaming\". To achieve a streaming effect multiple messages should be sent over the same connection using e.g. HTTP\/1.1 or HTTP\/2. \"Fancy\" technologies such as gRPC were considered, but at the time were not widely adopted, and it was challenging to expose gRPC services to the internet behind load balancers such as an AWS EC2 ELB.\n\nThe remote write protocol contains opportunities for batching, e.g. sending multiple samples for different series in a single request.  
It is not expected that multiple samples for the same series will be commonly sent in the same request, although there is support for this in the protocol.\n\nThe remote write protocol is not intended for use by applications to push metrics to Prometheus remote-write-compatible receivers.  It is intended that a Prometheus remote-write-compatible sender scrapes instrumented applications or exporters and sends remote write messages to a server.\n\nA test suite can be found at https:\/\/github.com\/prometheus\/compliance\/tree\/main\/remotewrite\/sender.\n\n### Glossary\n\nFor the purposes of this document the following definitions MUST be followed:\n\n- a \"Sender\" is something that sends Prometheus Remote Write data.\n- a \"Receiver\" is something that receives Prometheus Remote Write data.\n- a \"Sample\" is a pair of (timestamp, value).\n- a \"Label\" is a pair of (key, value).\n- a \"Series\" is a list of samples, identified by a unique set of labels.\n\n## Definitions\n### Protocol\n\nThe Remote Write Protocol MUST consist of RPCs with the following signature:\n\n```\nfunc Send(WriteRequest)\n\nmessage WriteRequest {\n  repeated TimeSeries timeseries = 1;\n  \/\/ Cortex uses this field to determine the source of the write request.\n  \/\/ We reserve it to avoid any compatibility issues.\n  reserved  2;\n\n  \/\/ Prometheus uses this field to send metadata, but this is\n  \/\/ omitted from v1 of the spec as it is experimental.\n  reserved  3;\n}\n\nmessage TimeSeries {\n  repeated Label labels   = 1;\n  repeated Sample samples = 2;\n}\n\nmessage Label {\n  string name  = 1;\n  string value = 2;\n}\n\nmessage Sample {\n  double value    = 1;\n  int64 timestamp = 2;\n}\n```\n\nRemote write Senders MUST encode the Write Request in the body of a HTTP POST request and send it to the Receivers via HTTP at a provided URL path.  The Receiver MAY specify any HTTP URL path to receive metrics.\n\nTimestamps MUST be int64 counted as milliseconds since the Unix epoch.  
Values MUST be float64.\n\nThe following headers MUST be sent with the HTTP request:\n\n- `Content-Encoding: snappy`\n- `Content-Type: application\/x-protobuf`\n- `User-Agent: <name & version of the sender>`\n- `X-Prometheus-Remote-Write-Version: 0.1.0`\n\nClients MAY allow users to send custom HTTP headers; they MUST NOT allow users to configure them in such a way as to send reserved headers.  For more info see https:\/\/github.com\/prometheus\/prometheus\/pull\/8416.\n\nThe remote write request in the body of the HTTP POST MUST be compressed with [Google\u2019s Snappy](https:\/\/github.com\/google\/snappy).  The block format MUST be used - the framed format MUST NOT be used.\n\nThe remote write request MUST be encoded using Google Protobuf 3, and MUST use the schema defined above.  Note [the Prometheus implementation](https:\/\/github.com\/prometheus\/prometheus\/blob\/v2.24.0\/prompb\/remote.proto) uses [gogoproto optimisations](https:\/\/github.com\/gogo\/protobuf) - for receivers written in languages other than Golang the gogoproto types MAY be substituted for line-level equivalents.\n\nThe response body from the remote write receiver SHOULD be empty; clients MUST ignore the response body. The response body is RESERVED for future use.\n\n### Backward and forward compatibility\n\nThe protocol follows [semantic versioning 2.0](https:\/\/semver.org\/): any 1.x compatible receivers MUST be able to read any 1.x compatible sender and so on.  
Breaking\/backwards incompatible changes will result in a 2.x version of the spec.\n\nThe proto format itself is forward \/ backward compatible, in some respects:\n\n- Removing fields from the proto will mean a major version bump.\n- Adding (optional) fields will be a minor version bump.\n\nNegotiation:\n\n- Senders MUST send the version number in a header.\n- Receivers MAY return the highest version number they support in a response header (\"X-Prometheus-Remote-Write-Version\").\n- Senders who wish to send in a format >1.x MUST start by sending an empty 1.x, and see if the response says the receiver supports something else.  The Sender MAY use any supported version.  If there is no version header in the response, senders MUST assume 1.x compatibility only.\n\n### Labels\n\nThe complete set of labels MUST be sent with each sample. What's more, the label set associated with samples:\n\n- SHOULD contain a `__name__` label.\n- MUST NOT contain repeated label names.\n- MUST have label names sorted in lexicographical order.\n- MUST NOT contain any empty label names or values.\n\nSenders MUST only send valid metric names, label names, and label values:\n\n- Metric names MUST adhere to the regex `[a-zA-Z_:]([a-zA-Z0-9_:])*`.\n- Label names MUST adhere to the regex `[a-zA-Z_]([a-zA-Z0-9_])*`.\n- Label values MAY be any sequence of UTF-8 characters.\n\nReceivers MAY impose limits on the number and length of labels, but this will be receiver-specific and is out of scope for this document.\n\nLabel names beginning with \"__\" are RESERVED for system usage and SHOULD NOT be used, see [Prometheus Data Model](https:\/\/prometheus.io\/docs\/concepts\/data_model\/).\n\nRemote write Receivers MAY ingest valid samples within a write request that otherwise contains invalid samples. Receivers MUST return a HTTP 400 status code (\"Bad Request\") for write requests that contain any invalid samples. Receivers SHOULD provide a human readable error message in the response body.
Senders MUST NOT try to interpret the error message, and SHOULD log it as is.\n\n### Ordering\nPrometheus Remote Write compatible senders MUST send samples for any given series in timestamp order. Prometheus Remote Write compatible Senders MAY send multiple requests for different series in parallel.\n\n### Retries & Backoff\nPrometheus Remote Write compatible senders MUST retry write requests on HTTP 5xx responses and MUST use a backoff algorithm to prevent overwhelming the server. They MUST NOT retry write requests on HTTP 2xx and 4xx responses other than 429. They MAY retry on HTTP 429 responses, which could result in senders \"falling behind\" if the server cannot keep up. This is done to ensure data is not lost when there are server side errors, and progress is made when there are client side errors.\n\nPrometheus Remote Write compatible receivers MUST respond with a HTTP 2xx status code when the write is successful. They MUST respond with an HTTP 5xx status code when the write fails and the request SHOULD be retried. They MUST respond with an HTTP 4xx status code when the request is invalid, will never be able to succeed, and should not be retried.\n\n### Stale Markers\nPrometheus Remote Write compatible senders MUST send stale markers when a time series will no longer be appended to.\n\nStale markers MUST be signalled by the special NaN value 0x7ff0000000000002. This value MUST NOT be used otherwise.\n\nTypically the sender can detect when a time series will no longer be appended to using the following techniques:\n\n1. Detecting, using service discovery, that the target exposing the series has gone away\n1. Noticing the target is no longer exposing the time series between successive scrapes\n1. Failing to scrape the target that originally exposed a time series\n1. Tracking configuration and evaluation for recording and alerting rules\n\n## Out of Scope\nThis document does not intend to explain all the features required for a fully Prometheus-compatible monitoring system.
In particular, the following areas are out of scope for the first version of the spec:\n\n**The \"up\" metric** The definition and semantics of the \"up\" metric are beyond the scope of the remote write protocol and should be documented separately.\n\n**HTTP Path** The path for the HTTP handler can be anything - and MUST be provided by the sender.  Generally we expect the whole URL to be specified in config.\n\n**Persistence** It is recommended that Prometheus Remote Write compatible senders should persistently buffer sample data in the event of outages in the receiver.\n\n**Authentication & Encryption** As remote write uses HTTP, we consider authentication & encryption to be a transport-layer problem.  Senders and receivers should support all the usual suspects (Basic auth, TLS etc) and are free to add potentially custom authentication options.  Support for custom authentication in the Prometheus remote write sender and eventual agent should not be assumed, but we will endeavour to support common and widely used auth protocols, where feasible.\n\n**Remote Read** This is a separate interface that has already seen some iteration, and is less widely used.\n\n**Sharding** The current sharding scheme in Prometheus for remote write parallelisation is very much an implementation detail, and isn\u2019t part of the spec.
When senders do implement parallelisation they MUST preserve per-series sample ordering.\n\n**Backfill** The specification does not place a limit on how old series can be pushed, however server\/implementation specific constraints may exist.\n\n**Limits** Limits on the number and length of labels, batch sizes etc are beyond the scope of this document, however it is expected that implementations will impose reasonable limits.\n\n**Push-based Prometheus** Applications pushing metrics to Prometheus Remote Write compatible receivers was not a design goal of this system, and should be explored in a separate doc.\n\n**Labels** Every series MAY include a \"job\" and\/or \"instance\" label, as these are typically added by service discovery in the Sender. These are not mandatory.\n\n## Future Plans\nThis section contains speculative plans that are not considered part of the protocol specification, but are mentioned here for completeness.\n\n**Transactionality** Prometheus aims at being \"transactional\" - i.e. to never expose a partially scraped target to a query. We intend to do the same with remote write - for instance, in the future we would like to \"align\" remote write with scrapes, perhaps such that all the samples, metadata and exemplars for a single scrape are sent in a single remote write request.  This is yet to be designed.\n\n**Metadata and Exemplars** In line with the above, we also send metadata (type information, help text) and exemplars along with the scraped samples. We plan to package this up in a single remote write request - future versions of the spec may insist on this.
Prometheus currently has experimental support for sending metadata and exemplars.\n\n**Optimizations** We would like to investigate various optimizations to reduce message size by eliminating repetition of label names and values.\n\n## Related\n### Compatible Senders and Receivers\n\nThe spec is intended to describe how the following components interact (as of April 2023):\n\n- [Prometheus](https:\/\/github.com\/prometheus\/prometheus\/tree\/master\/storage\/remote) (as both a \"sender\" and a \"receiver\")\n- [Avalanche](https:\/\/github.com\/prometheus-community\/avalanche) (as a \"sender\") - a load-testing tool for Prometheus metrics.\n- [Cortex](https:\/\/github.com\/cortexproject\/cortex\/blob\/master\/pkg\/util\/push\/push.go#L20) (as a \"receiver\")\n- [Elastic Agent](https:\/\/docs.elastic.co\/integrations\/prometheus#prometheus-server-remote-write) (as a \"receiver\")\n- [Grafana Agent](https:\/\/github.com\/grafana\/agent) (as both a \"sender\" and a \"receiver\")\n- [GreptimeDB](https:\/\/github.com\/greptimeTeam\/greptimedb) (as a [\"receiver\"](https:\/\/docs.greptime.com\/user-guide\/ingest-data\/for-observerbility\/prometheus))\n- InfluxData\u2019s Telegraf agent.
([as a sender](https:\/\/github.com\/influxdata\/telegraf\/tree\/master\/plugins\/serializers\/prometheusremotewrite), and [as a receiver](https:\/\/github.com\/influxdata\/telegraf\/pull\/8967))\n- [M3](https:\/\/m3db.io\/docs\/integrations\/prometheus\/#prometheus-configuration) (as a \"receiver\")\n- [Mimir](https:\/\/github.com\/grafana\/mimir) (as a \"receiver\")\n- [OpenTelemetry Collector](https:\/\/github.com\/open-telemetry\/opentelemetry-collector-releases\/) (as a [\"sender\"](https:\/\/github.com\/open-telemetry\/opentelemetry-collector-contrib\/tree\/main\/exporter\/prometheusremotewriteexporter#readme) and eventually as a \"receiver\")\n- [Thanos](https:\/\/thanos.io\/tip\/components\/receive.md\/) (as a \"receiver\")\n- Vector (as a [\"sender\"](https:\/\/vector.dev\/docs\/reference\/configuration\/sinks\/prometheus_remote_write\/) and a [\"receiver\"](https:\/\/vector.dev\/docs\/reference\/configuration\/sources\/prometheus_remote_write\/))\n- [VictoriaMetrics](https:\/\/github.com\/VictoriaMetrics\/VictoriaMetrics) (as a [\"receiver\"](https:\/\/docs.victoriametrics.com\/#prometheus-setup))\n\n### FAQ\n\n**Why did you not use gRPC?**\nFunnily enough, we initially used gRPC, but switched to Protos atop HTTP as in 2016 it was hard to get them past ELBs: https:\/\/github.com\/prometheus\/prometheus\/issues\/1982\n\n**Why not streaming protobuf messages?**\nIf you use persistent HTTP\/1.1 connections, they are pretty close to streaming\u2026  Of course headers have to be re-sent, but yes that is less expensive than a new TCP setup.\n\n**Why do we send samples in order?**\nThe in-order constraint comes from the encoding we use for time series data in Prometheus, the implementation of which is append-only. It is possible to remove this constraint, for instance by buffering samples and reordering them before encoding.
We can investigate this in future versions of the protocol.\n\n**How can we parallelise requests with the in-order constraint?**\nSamples must be in-order _for a given series_.  Remote write requests can be sent in parallel as long as they are for different series. In Prometheus, we shard the samples by their labels into separate queues, and then writes happen sequentially in each queue.  This guarantees samples for the same series are delivered in order, but samples for different series are sent in parallel - and potentially \"out of order\" between different series.\n\nWe believe this is necessary as, even if the receiver could support out-of-order samples, we can't have agents sending out of order as they would never be able to send to Prometheus, Cortex and Thanos.  We\u2019re doing this to ensure the integrity of the ecosystem and to prevent confusing\/forking the community into \"prometheus-agents-that-can-write-to-prometheus\" and those that can\u2019t.","site":"prometheus"}
for the first version of the spec     The  up  metric   The definition and semantics of the  up  metric are beyond the scope of the remote write protocol and should be documented separately     HTTP Path   The path for HTTP handler can be anything   and MUST be provided by the sender   Generally we expect the whole URL to be specified in config     Persistence   It is recommended that Prometheus Remote Write compatible senders should persistently buffer sample data in the event of outages in the receiver      Authentication   Encryption   as remote write uses HTTP  we consider authentication   encryption to be a transport layer problem   Senders and receivers should support all the usual suspects  Basic auth  TLS etc  and are free to add potentially custom authentication options   Support for custom authentication in the Prometheus remote write sender and eventual agent should not be assumed  but we will endeavour to support common and widely used auth protocols  where feasible     Remote Read   this is a separate interface that has already seen some iteration  and is less widely used     Sharding   the current sharding scheme in Prometheus for remote write parallelisation is very much an implementation detail  and isn t part of the spec   When senders do implement parallelisation they MUST preserve per series sample ordering     Backfill   The specification does not place a limit on how old series can be pushed  however server implementation specific constraints may exist     Limits   Limits on the number and length of labels  batch sizes etc are beyond the scope of this document  however it is expected that implementation will impose reasonable limits     Push based Prometheus   Applications pushing metrics to Prometheus Remote Write compatible receivers was not a design goal of this system  and should be explored in a separate doc     Labels   Every series MAY include a  job  and or  instance  label  as these are typically added by service discovery in the 
Sender  These are not mandatory      Future Plans This section contains speculative plans that are not considered part of protocol specification  but are mentioned here for completeness     Transactionality   Prometheus aims at being  transactional    i e  to never expose a partially scraped target to a query  We intend to do the same with remote write    for instance  in the future we would like to  align  remote write with scrapes  perhaps such that all the samples  metadata and exemplars for a single scrape are sent in a single remote write request   This is yet to be designed     Metadata   and Exemplars In line with above  we also send metadata  type information  help text  and exemplars along with the scraped samples  We plan to package this up in a single remote write request   future versions of the spec may insist on this   Prometheus currently has experimental support for sending metadata and exemplars     Optimizations   We would like to investigate various optimizations to reduce message size by eliminating repetition of label names and values      Related     Compatible Senders and Receivers  The spec is intended to describe how the following components interact  as of April 2023       Prometheus  https   github com prometheus prometheus tree master storage remote   as both a  sender  and a  receiver      Avalanche  https   github com prometheus community avalanche   as a  sender     A Load Testing Tool Prometheus Metrics     Cortex  https   github com cortexproject cortex blob master pkg util push push go L20   as a  receiver      Elastic Agent  https   docs elastic co integrations prometheus prometheus server remote write   as a  receiver      Grafana Agent  https   github com grafana agent   as both a  sender  and a  receiver      GreptimeDB  https   github com greptimeTeam greptimedb   as a   receiver   https   docs greptime com user guide ingest data for observerbility prometheus     InfluxData s Telegraf agent    as a sender  https   github com 
influxdata telegraf tree master plugins serializers prometheusremotewrite   and  as a receiver  https   github com influxdata telegraf pull 8967      M3  https   m3db io docs integrations prometheus  prometheus configuration   as a  receiver      Mimir  https   github com grafana mimir   as a  receiver      OpenTelemetry Collector  https   github com open telemetry opentelemetry collector releases    as a   sender   https   github com open telemetry opentelemetry collector contrib tree main exporter prometheusremotewriteexporter readme  and eventually as a  receiver      Thanos  https   thanos io tip components receive md    as a  receiver     Vector  as a   sender   https   vector dev docs reference configuration sinks prometheus remote write   and a   receiver   https   vector dev docs reference configuration sources prometheus remote write       VictoriaMetrics  https   github com VictoriaMetrics VictoriaMetrics   as a   receiver   https   docs victoriametrics com  prometheus setup        FAQ    Why did you not use gRPC    Funnily enough we initially used gRPC  but switched to Protos atop HTTP as in 2016 it was hard to get them past ELBs  https   github com prometheus prometheus issues 1982    Why not streaming protobuf messages    If you use persistent HTTP 1 1 connections  they are pretty close to streaming   Of course headers have to be re sent  but yes that is less expensive than a new TCP set up     Why do we send samples in order    The in order constraint comes from the encoding we use for time series data in Prometheus  the implementation of which is append only  It is possible to remove this constraint  for instance by buffering samples and reordering them before encoding   We can investigate this in future versions of the protocol     How can we parallelise requests with the in order constraint    Samples must be in order  for a given series    Remote write requests can be sent in parallel as long as they are for different series  In Prometheus  we 
shard the samples by their labels into separate queues  and then writes happen sequentially in each queue   This guarantees samples for the same series are delivered in order  but samples for different series are sent in parallel   and potentially  out of order  between different series   We believe this is necessary as  even if the receiver could support out of order samples  we can t have agents sending out of order as they would never be able to send to Prometheus  Cortex and Thanos   We re doing this to ensure the integrity of the ecosystem and to prevent confusing forking the community into  prometheus agents that can write to prometheus  and those that can t "}
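The label and metric-name rules from the Labels section above can be sketched as a small validator. This is an illustration, not part of the specification; the function name and error strings are invented for the example (note a `dict` cannot represent the "repeated label names" case, so that rule is not checked here):

```python
import re

# Regexes taken verbatim from the 1.0 specification's "Labels" section.
METRIC_NAME_RE = re.compile(r"[a-zA-Z_:]([a-zA-Z0-9_:])*$")
LABEL_NAME_RE = re.compile(r"[a-zA-Z_]([a-zA-Z0-9_])*$")


def validate_label_set(labels: dict[str, str]) -> list[str]:
    """Return a list of spec violations for one sample's label set (illustrative)."""
    errors = []
    names = list(labels)
    if "__name__" not in labels:
        errors.append('label set SHOULD contain a "__name__" label')
    if names != sorted(names):
        errors.append("label names MUST be sorted in lexicographical order")
    for name, value in labels.items():
        if not name or not value:
            errors.append("empty label names or values are not allowed")
        elif not LABEL_NAME_RE.match(name):
            errors.append(f"invalid label name: {name!r}")
        if name == "__name__" and not METRIC_NAME_RE.match(value):
            errors.append(f"invalid metric name: {value!r}")
    return errors
```

A conforming Receiver would respond with HTTP 400 to any request containing a sample whose label set fails checks like these.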
{"questions":"prometheus Prometheus Remote Write Specification title Prometheus Remote Write 2 0 EXPERIMENTAL sortrank 4 Version 2 0 rc 3 Date May 2024 Status Experimental","answers":"---\ntitle: \"Prometheus Remote-Write 2.0 [EXPERIMENTAL]\"\nsort_rank: 4\n---\n\n# Prometheus Remote-Write Specification\n\n* Version: 2.0-rc.3\n* Status: **Experimental**\n* Date: May 2024\n\nThe Remote-Write specification, in general, is intended to document the standard for how Prometheus and Prometheus Remote-Write compatible senders send data to Prometheus or Prometheus Remote-Write compatible receivers.\n\nThis document is intended to define a second version of the [Prometheus Remote-Write](.\/remote_write_spec.md) API with minor changes to protocol and semantics. This second version adds a new Protobuf Message with new features enabling more use cases and wider adoption on top of performance and cost savings. The second version also deprecates the previous Protobuf Message from the [1.0 Remote-Write specification](\/docs\/specs\/remote_write_spec\/#protocol) and adds mandatory [`X-Prometheus-Remote-Write-*-Written` HTTP response headers](#required-written-response-headers) for reliability purposes. Finally, this spec outlines how to implement backwards-compatible senders and receivers (even under a single endpoint) using existing basic content negotiation request headers. More advanced, automatic content negotiation mechanisms might come in a future minor version if needed. For the rationales behind the 2.0 specification, see [the formal proposal](https:\/\/github.com\/prometheus\/proposals\/pull\/35).\n\nThe key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in [RFC 2119](https:\/\/datatracker.ietf.org\/doc\/html\/rfc2119).\n\n> NOTE: This is a release candidate for the Remote-Write 2.0 specification. 
This means that this specification is currently in an experimental state--no major changes are expected, but we reserve the right to break the compatibility if it's necessary, based on the early adopters' feedback. The potential feedback, questions and suggestions should be added as comments to the [PR with the open proposal](https:\/\/github.com\/prometheus\/proposals\/pull\/35).\n\n## Introduction\n\n### Background\n\nThe Remote-Write protocol is designed to make it possible to reliably propagate samples in real-time from a sender to a receiver, without loss.\n\n<!---\nFor the detailed rationales behind each 2.0 Remote-Write decision, see: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md\n-->\nThe Remote-Write protocol is designed to be stateless; there is strictly no inter-message communication. As such the protocol is not considered \"streaming\". To achieve a streaming effect multiple messages should be sent over the same connection using e.g. HTTP\/1.1 or HTTP\/2. \"Fancy\" technologies such as gRPC were considered, but at the time were not widely adopted, and it was challenging to expose gRPC services to the internet behind load balancers such as an AWS EC2 ELB.\n\nThe Remote-Write protocol contains opportunities for batching, e.g. sending multiple samples for different series in a single request. It is not expected that multiple samples for the same series will be commonly sent in the same request, although there is support for this in the Protobuf Message.\n\nA test suite can be found at https:\/\/github.com\/prometheus\/compliance\/tree\/main\/remote_write_sender. 
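The "streaming effect" over persistent connections described above can be illustrated with the standard library alone: several independent write requests reuse one HTTP/1.1 keep-alive connection. The in-process stub receiver, the `/api/v1/write` path, and the plain-bytes payloads below are placeholders for this sketch (a real Sender posts snappy-compressed protobuf):

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer


class StubReceiver(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive: one connection carries many messages

    def do_POST(self):
        # Drain the request body, then pretend every write succeeds.
        self.rfile.read(int(self.headers.get("Content-Length", "0")))
        self.send_response(204)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass


def run_demo() -> list[int]:
    server = HTTPServer(("127.0.0.1", 0), StubReceiver)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    conn = HTTPConnection("127.0.0.1", server.server_port)
    statuses = []
    # Two stateless write requests over the same TCP connection.
    for payload in (b"write-request-1", b"write-request-2"):
        conn.request("POST", "/api/v1/write", body=payload,
                     headers={"Content-Type": "application/x-protobuf"})
        resp = conn.getresponse()
        resp.read()  # drain so the connection can be reused
        statuses.append(resp.status)
    conn.close()
    server.shutdown()
    return statuses
```

Each request/response pair is independent, so the protocol stays stateless, while the TCP (and in practice TLS) setup cost is amortized across messages.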
The compliance tests for remote write 2.0 compatibility are still [in progress](https:\/\/github.com\/prometheus\/compliance\/issues\/101).\n\n### Glossary\n\nIn this document, the following definitions are followed:\n\n* `Remote-Write` is the name of this Prometheus protocol.\n* a `Protocol` is a communication specification that enables the client and server to transfer metrics.\n* a `Protobuf Message` (or Proto Message) refers to the [content type](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-content-type) definition of the data structure for this Protocol. Since the specification uses [Google Protocol Buffers (\"protobuf\")](https:\/\/protobuf.dev\/) exclusively, the schema is defined in a [\"proto\" file](https:\/\/protobuf.dev\/programming-guides\/proto3\/) and represented by a single Protobuf [\"message\"](https:\/\/protobuf.dev\/programming-guides\/proto3\/#simple).\n* a `Wire Format` is the format of the data as it travels on the wire (i.e. in a network). In the case of Remote-Write, this is always the compressed binary protobuf format.\n* a `Sender` is something that sends Remote-Write data.\n* a `Receiver` is something that receives (writes) Remote-Write data. The meaning of `Written` is up to the Receiver e.g. usually it means storing received data in a database, but also just validating, splitting or enhancing it.\n* `Written` refers to data the `Receiver` has received and is accepting. Whether or not it has ingested this data to persistent storage, written it to a WAL, etc. is up to the `Receiver`. 
The only distinction is that the `Receiver` has accepted this data rather than explicitly rejecting it with an error response.\n* a `Sample` is a pair of (timestamp, value).\n* a `Histogram` is a pair of (timestamp, [histogram value](https:\/\/github.com\/prometheus\/docs\/blob\/b9657b5f5b264b81add39f6db2f1df36faf03efe\/content\/docs\/concepts\/native_histograms.md)).\n* a `Label` is a pair of (key, value).\n* a `Series` is a list of samples, identified by a unique set of labels.\n\n## Definitions\n\n### Protocol\n\nThe Remote-Write Protocol MUST consist of RPCs with the request body serialized using Google Protocol Buffers and then compressed.\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md#a-new-protobuf-message-identified-by-fully-qualified-name-old-one-deprecated\n-->\nThe protobuf serialization MUST use either of the following Protobuf Messages:\n\n* The `prometheus.WriteRequest` introduced in [the Remote-Write 1.0 specification](.\/remote_write_spec.md#protocol). As of 2.0, this message is deprecated. It SHOULD be used only for compatibility reasons. Senders and Receivers MAY NOT support the `prometheus.WriteRequest`.\n* The `io.prometheus.write.v2.Request` introduced in this specification and defined [below](#protobuf-message). Senders and Receivers SHOULD use this message when possible. Senders and Receivers MUST support the `io.prometheus.write.v2.Request`.\n\nThe Protobuf Message MUST use the binary Wire Format and MUST then be compressed with [Google\u2019s Snappy](https:\/\/github.com\/google\/snappy). 
Snappy's [block format](https:\/\/github.com\/google\/snappy\/blob\/2c94e11145f0b7b184b831577c93e5a41c4c0346\/format_description.txt) MUST be used -- [the framed format](https:\/\/github.com\/google\/snappy\/blob\/2c94e11145f0b7b184b831577c93e5a41c4c0346\/framing_format.txt) MUST NOT be used.\n\nSenders MUST send a serialized and compressed Protobuf Message in the body of an HTTP POST request and send it to the Receiver via HTTP at the provided URL path. Receivers MAY specify any HTTP URL path to receive metrics.\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md#basic-content-negotiation-built-on-what-we-have\n-->\nSenders MUST send the following reserved headers with the HTTP request:\n\n- `Content-Encoding`\n- `Content-Type`\n- `X-Prometheus-Remote-Write-Version`\n- `User-Agent`\n\nSenders MAY allow users to add custom HTTP headers; they MUST NOT allow users to configure them in such a way as to send reserved headers.\n\n#### Content-Encoding\n\n```\nContent-Encoding: <compression>\n```\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md#no-new-compression-added--yet-\n-->\nContent encoding request header MUST follow [the RFC 9110](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-content-encoding). Senders MUST use the `snappy` value. Receivers MUST support `snappy` compression. New, optional compression algorithms might come in 2.x or beyond.\n\n#### Content-Type\n\n```\nContent-Type: application\/x-protobuf\nContent-Type: application\/x-protobuf;proto=<fully qualified name>\n```\n\nContent type request header MUST follow [the RFC 9110](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-content-type). Senders MUST use `application\/x-protobuf` as the only media type. 
Senders MAY add a `;proto=` parameter to the header's value to indicate the fully qualified name of the Protobuf Message that was used, from the two mentioned above. As a result, Senders MUST send any of the three supported header values:\n\nFor the deprecated message introduced in PRW 1.0, identified by `prometheus.WriteRequest`:\n\n* `Content-Type: application\/x-protobuf`\n* `Content-Type: application\/x-protobuf;proto=prometheus.WriteRequest`\n\nFor the message introduced in PRW 2.0, identified by `io.prometheus.write.v2.Request`:\n\n* `Content-Type: application\/x-protobuf;proto=io.prometheus.write.v2.Request`\n\nWhen talking to 1.x Receivers, Senders SHOULD use `Content-Type: application\/x-protobuf` for backward compatibility. Otherwise, Senders SHOULD use `Content-Type: application\/x-protobuf;proto=io.prometheus.write.v2.Request`. More Protobuf Messages might come in 2.x or beyond.\n\nReceivers MUST use the content type header to identify the Protobuf Message schema to use. Accidental wrong schema choices may result in non-deterministic behaviour (e.g. corruptions).\n\n> NOTE: Thanks to reserved fields in [`io.prometheus.write.v2.Request`](#protobuf-message), a Receiver's accidental use of the wrong schema with `prometheus.WriteRequest` will result in an empty message. This is generally for convenience to avoid surprising errors, but don't rely on it -- future Protobuf Messages might not have this feature.\n\n#### X-Prometheus-Remote-Write-Version\n\n```\nX-Prometheus-Remote-Write-Version: <Remote-Write spec major and minor version>\n```\n\nWhen talking to 1.x Receivers, Senders MUST use `X-Prometheus-Remote-Write-Version: 0.1.0` for backward compatibility. Otherwise, Senders SHOULD use the newest Remote-Write version they are compatible with, e.g. 
`X-Prometheus-Remote-Write-Version: 2.0.0`.\n\n#### User-Agent\n\n```\nUser-Agent: <name & version of the Sender>\n```\n\nSenders MUST include a user agent header that SHOULD follow [the RFC 9110 User-Agent header format](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-user-agent).\n\n### Response\n\nReceivers that have written all data successfully MUST return a [success 2xx HTTP status code](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-successful-2xx). In such a successful case, the response body from the Receiver SHOULD be empty and the status code SHOULD be [204 HTTP No Content](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-204-no-content); Senders MUST ignore the response body. The response body is RESERVED for future use.\n\nReceivers MUST NOT return a 2xx HTTP status code if any of the pieces of sent data known to the Receiver (e.g. Samples, Histograms, Exemplars) were NOT written successfully (either a [partial write](#partial-write) or a full write rejection). In such a case, the Receiver MUST provide a human-readable error message in the response body. The Receiver's error SHOULD contain information about the number of samples being rejected and for what reasons. Senders MUST NOT try and interpret the error message and SHOULD log it as is.\n\nThe following subsections specify Sender and Receiver semantics around headers and different write error cases.\n\n#### Required `Written` Response Headers\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/prometheus\/issues\/14359\n-->\nUpon a successful content negotiation, Receivers process (write) the received batch of data. Once completed (with success or failure) for each important piece of data (currently Samples, Histograms and Exemplars), Receivers MUST send a dedicated HTTP `X-Prometheus-Remote-Write-*-Written` response header with the precise number of successfully written elements.\n\nEach header value MUST be a single 64-bit integer. 
The header names MUST be as follows:\n\n```\nX-Prometheus-Remote-Write-Samples-Written <count of all successfully written Samples>\nX-Prometheus-Remote-Write-Histograms-Written <count of all successfully written Histogram samples>\nX-Prometheus-Remote-Write-Exemplars-Written <count of all successfully written Exemplars>\n```\n\nUpon receiving a 2xx or a 4xx status code, Senders CAN assume that any missing `X-Prometheus-Remote-Write-*-Written` response header means no element from this category (e.g. Sample) was written by the Receiver (count of `0`). Senders MUST NOT assume the same when using the deprecated `prometheus.WriteRequest` Protobuf Message due to the risk of hitting 1.0 Receiver without this feature.\n\nSenders MAY use those headers to confirm which parts of data were successfully written by the Receiver. Common use cases:\n\n* Better handling of the [Partial Write](#partial-write) failure situations: Senders MAY use those headers for more accurate client instrumentation and error handling.\n* Detecting broken 1.0 Receiver implementations: Senders SHOULD assume [415 HTTP Unsupported Media Type](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-415-unsupported-media-type) status code when sending the data using `io.prometheus.write.v2.Request` request and receiving 2xx HTTP status code, but none of the `X-Prometheus-Remote-Write-*-Written` response headers from the Receiver. 
This is a common issue for the 1.0 Receivers that do not check the `Content-Type` request header; accidental decoding of the `io.prometheus.write.v2.Request` payload with the `prometheus.WriteRequest` schema results in an empty result and no decoding errors.\n* Detecting other broken implementations or issues: Senders MAY use those headers to detect broken Sender and Receiver implementations or other problems.\n\nSenders MUST NOT assume what Remote Write specification version the Receiver implements from the remote write response headers.\n\nMore (optional) headers might come in the future, e.g. when more entities or fields are added and worth confirming.\n\n#### Partial Write\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md#partial-writes\n-->\nSenders SHOULD use Remote-Write to send samples for multiple series in a single request. As a result, Receivers MAY write valid samples within a write request that also contains some invalid or otherwise unwritten samples, which represents a partial write case. In such a case, the Receiver MUST return a non-2xx status code following the [Invalid Samples](#invalid-samples) and [Retry on Partial Writes](#retries-on-partial-writes) sections.\n\n#### Unsupported Request Content\n\nReceivers MUST return [415 HTTP Unsupported Media Type](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-415-unsupported-media-type) status code if they don't support a given content type or encoding provided by Senders.\n\nSenders SHOULD expect [400 HTTP Bad Request](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-400-bad-request) for the above reasons from 1.x Receivers, for backwards compatibility.\n\n#### Invalid Samples\n\nReceivers MAY NOT support certain metric types or samples (e.g. a Receiver might reject a sample without a metadata type specified or without a created timestamp, while another Receiver might accept such a sample). 
It\u2019s up to the Receiver what sample is invalid. Receivers MUST return a [400 HTTP Bad Request](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-400-bad-request) status code for write requests that contain any invalid samples unless the [partial retriable write](#retries-on-partial-writes) occurs.\n\nSenders MUST NOT retry on 4xx HTTP status codes (other than [429](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429)), which MUST be used by Receivers to indicate that the write operation will never be able to succeed and should not be retried. Senders MAY retry on the 415 HTTP status code with a different content type or encoding to see if the Receiver supports it.\n\n### Retries & Backoff\n\nReceivers MAY return a [429 HTTP Too Many Requests](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429) status code to indicate the overloaded server situation. Receivers MAY return [the Retry-After](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-retry-after) header to indicate the time for the next write attempt. Receivers MAY return a 5xx HTTP status code to represent internal server errors.\n\nSenders MAY retry on a 429 HTTP status code. Senders MUST retry write requests on 5xx HTTP responses. Senders MUST use a backoff algorithm to prevent overwhelming the server. Senders MAY handle [the Retry-After response header](https:\/\/www.rfc-editor.org\/rfc\/rfc9110.html#name-retry-after) to estimate the next retry time.\n\nThe difference between 429 vs 5xx handling is due to the potential situation of a Sender \u201cfalling behind\u201d when the Receiver cannot keep up with the request volume, or the Receiver choosing to rate limit the Sender to protect its availability. As a result, Senders have the option to NOT retry on 429, which allows progress to be made when there are Sender side errors (e.g. 
too much traffic), while the data is not lost when there are Receiver side errors (5xx).\n\n#### Retries on Partial Writes\n\nReceivers MAY return a 5xx HTTP or 429 HTTP status code on partial write or [partial invalid sample cases](#partial-write) when it expects Senders to retry the whole request. In that case, the Receiver MUST support idempotency as Senders MAY retry with the same request.\n\n### Backward and Forward Compatibility\n\nThe protocol follows [semantic versioning 2.0](https:\/\/semver.org\/): any 2.x compatible Receiver MUST be able to read any 2.x compatible Senders and vice versa. Breaking or backwards incompatible changes will result in a 3.x version of the spec.\n\nThe Protobuf Messages (in Wire Format) themselves are forward \/ backward compatible, in some respects:\n\n* Removing fields from the Protobuf Message requires a major version bump.\n* Adding (optional) fields can be done in a minor version bump.\n\nIn other words, this means that future minor versions of 2.x MAY add new optional fields to `io.prometheus.write.v2.Request`, new compressions, Protobuf Messages and negotiation mechanisms, as long as they are backwards compatible (e.g. optional to both Receiver and Sender).\n\n#### 2.x vs 1.x Compatibility\n\nThe 2.x protocol is breaking compatibility with 1.x by introducing a new, mandatory `io.prometheus.write.v2.Request` Protobuf Message and deprecating the `prometheus.WriteRequest`.\n\n2.x Senders MAY support 1.x Receivers by allowing users to configure what content type Senders should use. 
2.x Senders also MAY automatically fall back to different content types, if the Receiver returns 415 HTTP status code.\n\n## Protobuf Message\n\n### `io.prometheus.write.v2.Request`\n\nThe `io.prometheus.write.v2.Request` references the new Protobuf Message that's meant to replace and deprecate the Remote-Write 1.0's `prometheus.WriteRequest` message.\n\n<!---\nTODO(bwplotka): Move link to the one on Prometheus main or even buf.\n-->\nThe full schema and source of the truth is in Prometheus repository in [`prompb\/io\/prometheus\/write\/v2\/types.proto`](https:\/\/github.com\/prometheus\/prometheus\/blob\/remote-write-2.0\/prompb\/io\/prometheus\/write\/v2\/types.proto#L32). The `gogo` dependency and options CAN be ignored ([will be removed eventually](https:\/\/github.com\/prometheus\/prometheus\/issues\/11908)). They are not part of the specification as they don't impact the serialized format.\n\nThe simplified version of the new `io.prometheus.write.v2.Request` is presented below.\n\n```\n\/\/ Request represents a request to write the given timeseries to a remote destination.\nmessage Request {\n  \/\/ Since Request supersedes 1.0 spec's prometheus.WriteRequest, we reserve the top-down message\n  \/\/ for the deterministic interop between those two.\n  \/\/ Generally it's not needed, because Receivers must use the Content-Type header, but we want to\n  \/\/ be sympathetic to adopters with mistaken implementations and have deterministic error (empty\n  \/\/ message if you use the wrong proto schema).\n  reserved 1 to 3;\n\n  \/\/ symbols contains a de-duplicated array of string elements used for various\n  \/\/ items in a Request message, like labels and metadata items. 
For the sender's convenience\n  \/\/ around empty values for optional fields like unit_ref, symbols array MUST start with\n  \/\/ empty string.\n  \/\/\n  \/\/ To decode each of the symbolized strings, referenced, by \"ref(s)\" suffix, you\n  \/\/ need to lookup the actual string by index from symbols array. The order of\n  \/\/ strings is up to the sender. The receiver should not assume any particular encoding.\n  repeated string symbols = 4;\n  \/\/ timeseries represents an array of distinct series with 0 or more samples.\n  repeated TimeSeries timeseries = 5;\n}\n\n\/\/ TimeSeries represents a single series.\nmessage TimeSeries {\n  \/\/ labels_refs is a list of label name-value pair references, encoded\n  \/\/ as indices to the Request.symbols array. This list's length is always\n  \/\/ a multiple of two, and the underlying labels should be sorted lexicographically.\n  \/\/\n  \/\/ Note that there might be multiple TimeSeries objects in the same\n  \/\/ Requests with the same labels e.g. for different exemplars, metadata\n  \/\/ or created timestamp.\n  repeated uint32 labels_refs = 1;\n\n  \/\/ Timeseries messages can either specify samples or (native) histogram samples\n  \/\/ (histogram field), but not both. For a typical sender (real-time metric\n  \/\/ streaming), in healthy cases, there will be only one sample or histogram.\n  \/\/\n  \/\/ Samples and histograms are sorted by timestamp (older first).\n  repeated Sample samples = 2;\n  repeated Histogram histograms = 3;\n\n  \/\/ exemplars represents an optional set of exemplars attached to this series' samples.\n  repeated Exemplar exemplars = 4;\n\n  \/\/ metadata represents the metadata associated with the given series' samples.\n  Metadata metadata = 5;\n\n  \/\/ created_timestamp represents an optional created timestamp associated with\n  \/\/ this series' samples in ms format, typically for counter or histogram type\n  \/\/ metrics. 
Created timestamp represents the time when the counter started\n  \/\/ counting (sometimes referred to as start timestamp), which can increase\n  \/\/ the accuracy of query results.\n  \/\/\n  \/\/ Note that some receivers might require this and in return fail to\n  \/\/ write such samples within the Request.\n  \/\/\n  \/\/ For Go, see github.com\/prometheus\/prometheus\/model\/timestamp\/timestamp.go\n  \/\/ for conversion from\/to time.Time to Prometheus timestamp.\n  \/\/\n  \/\/ Note that the \"optional\" keyword is omitted due to\n  \/\/ https:\/\/cloud.google.com\/apis\/design\/design_patterns.md#optional_primitive_fields\n  \/\/ Zero value means value not set. If you need to use exactly zero value for\n  \/\/ the timestamp, use 1 millisecond before or after.\n  int64 created_timestamp = 6;\n}\n\n\/\/ Exemplar represents additional information attached to some series' samples.\nmessage Exemplar {\n  \/\/ labels_refs is an optional list of label name-value pair references, encoded\n  \/\/ as indices to the Request.symbols array. This list's len is always\n  \/\/ a multiple of 2, and the underlying labels should be sorted lexicographically.\n  \/\/ If the exemplar references a trace it should use the `trace_id` label name, as a best practice.\n  repeated uint32 labels_refs = 1;\n  \/\/ value represents an exact example value. 
This can be useful when the exemplar\n  \/\/ is attached to a histogram, which only gives an estimated value through buckets.\n  double value = 2;\n  \/\/ timestamp represents the timestamp of the exemplar in ms.\n  \/\/ For Go, see github.com\/prometheus\/prometheus\/model\/timestamp\/timestamp.go\n  \/\/ for conversion from\/to time.Time to Prometheus timestamp.\n  int64 timestamp = 3;\n}\n\n\/\/ Sample represents series sample.\nmessage Sample {\n  \/\/ value of the sample.\n  double value = 1;\n  \/\/ timestamp represents timestamp of the sample in ms.\n  int64 timestamp = 2;\n}\n\n\/\/ Metadata represents the metadata associated with the given series' samples.\nmessage Metadata {\n  enum MetricType {\n    METRIC_TYPE_UNSPECIFIED    = 0;\n    METRIC_TYPE_COUNTER        = 1;\n    METRIC_TYPE_GAUGE          = 2;\n    METRIC_TYPE_HISTOGRAM      = 3;\n    METRIC_TYPE_GAUGEHISTOGRAM = 4;\n    METRIC_TYPE_SUMMARY        = 5;\n    METRIC_TYPE_INFO           = 6;\n    METRIC_TYPE_STATESET       = 7;\n  }\n  MetricType type = 1;\n  \/\/ help_ref is a reference to the Request.symbols array representing help\n  \/\/ text for the metric. Help is optional, reference should point to an empty string in\n  \/\/ such a case.\n  uint32 help_ref = 3;\n  \/\/ unit_ref is a reference to the Request.symbols array representing a unit\n  \/\/ for the metric. Unit is optional, reference should point to an empty string in\n  \/\/ such a case.\n  uint32 unit_ref = 4;\n}\n\n\/\/ A native histogram, also known as a sparse histogram.\n\/\/ See https:\/\/github.com\/prometheus\/prometheus\/blob\/remote-write-2.0\/prompb\/io\/prometheus\/write\/v2\/types.proto#L142\n\/\/ for a full message that follows the native histogram spec for both sparse\n\/\/ and exponential, as well as, custom bucketing.\nmessage Histogram { ... }\n```\n\nAll timestamps MUST be int64 counted as milliseconds since the Unix epoch. 
Sample values MUST be float64.\n\nFor every `TimeSeries` message:\n\n* `labels_refs` MUST be provided.\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md#partial-writes#samples-vs-native-histogram-samples\n-->\n* At least one element in `samples` or in `histograms` MUST be provided. A `TimeSeries` MUST NOT include both `samples` and `histograms`. For series that (rarely) mix float and histogram samples, a separate `TimeSeries` message MUST be used.\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md#always-on-metadata\n-->\n* `metadata` sub-fields SHOULD be provided. Receivers MAY reject series with unspecified `Metadata.type`.\n* Exemplars SHOULD be provided if they exist for a series.\n* `created_timestamp` SHOULD be provided for metrics that follow counter semantics (e.g. counters and histograms). Receivers MAY reject such series if `created_timestamp` is not set.\n\nThe following subsections define some schema elements in detail.\n\n#### Symbols\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md#partial-writes#string-interning\n-->\nThe `io.prometheus.write.v2.Request` Protobuf Message is designed to [intern all strings](https:\/\/en.wikipedia.org\/wiki\/String_interning) for the proven additional compression and memory efficiency gains on top of the standard compressions.\n\nThe `symbols` table MUST be provided and it MUST contain deduplicated strings used in series, exemplar labels, and metadata strings. The first element of the `symbols` table MUST be an empty string, which is used to represent empty or unspecified values such as when `Metadata.unit_ref` or `Metadata.help_ref` are not provided. 
References MUST point to an existing index in the `symbols` string array.\n\n#### Series Labels\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md#labels-and-utf-8\n-->\nThe complete set of labels MUST be sent with each `Sample` or `Histogram` sample. Additionally, the label set associated with samples:\n\n* SHOULD contain a `__name__` label.\n* MUST NOT contain repeated label names.\n* MUST have label names sorted in lexicographical order.\n* MUST NOT contain any empty label names or values.\n\nMetric names, label names, and label values MUST be any sequence of UTF-8 characters.\n\nMetric names SHOULD adhere to the regex `[a-zA-Z_:]([a-zA-Z0-9_:])*`.\n\nLabel names SHOULD adhere to the regex `[a-zA-Z_]([a-zA-Z0-9_])*`.\n\nNames that do not adhere to the above might be harder for PromQL users to work with (see [the UTF-8 proposal for more details](https:\/\/github.com\/prometheus\/proposals\/blob\/main\/proposals\/2023-08-21-utf8.md)).\n\nLabel names beginning with \"__\" are RESERVED for system usage and SHOULD NOT be used; see the [Prometheus Data Model](https:\/\/prometheus.io\/docs\/concepts\/data_model\/).\n\nReceivers also MAY impose limits on the number and length of labels, but this is receiver-specific and is out of the scope of this document.\n\n#### Samples and Histogram Samples\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md#partial-writes#native-histograms\n-->\nSenders MUST send `samples` (or `histograms`) for any given `TimeSeries` in timestamp order. 
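The per-series ordering requirement can be sketched as follows; this is a hypothetical Receiver-side check, with a simplified `Sample` struct standing in for the generated protobuf type:

```go
package main

import "fmt"

// Sample mirrors the spec's Sample message in simplified form.
type Sample struct {
	Value     float64
	Timestamp int64 // milliseconds since the Unix epoch
}

// inOrder reports whether samples are sorted by timestamp, older first,
// as Senders MUST guarantee for each TimeSeries.
func inOrder(samples []Sample) bool {
	for i := 1; i < len(samples); i++ {
		if samples[i].Timestamp < samples[i-1].Timestamp {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(inOrder([]Sample{{1, 1000}, {2, 2000}})) // true
	fmt.Println(inOrder([]Sample{{2, 2000}, {1, 1000}})) // false
}
```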
Senders MAY send multiple requests for different series in parallel.\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md#partial-writes#being-pull-vs-push-agnostic\n-->\nSenders SHOULD send stale markers when a time series will no longer be appended to.\nSenders MUST send stale markers if the discontinuation of a time series is possible to detect, for example:\n\n* For series that were pulled (scraped), unless an explicit timestamp was used.\n* For series that result from a recording rule evaluation.\n\nGenerally, not sending stale markers for series that are discontinued can lead to [non-trivial query-time alignment issues](https:\/\/prometheus.io\/docs\/prometheus\/latest\/querying\/basics\/#staleness) on the Receiver.\n\nStale markers MUST be signalled by the special NaN value `0x7ff0000000000002`. This value MUST NOT be used otherwise.\n\nTypically, Senders can detect when a time series will no longer be appended using the following techniques:\n\n1. Detecting, using service discovery, that the target exposing the series has gone away.\n1. Noticing the target is no longer exposing the time series between successive scrapes.\n1. Failing to scrape the target that originally exposed a time series.\n1. Tracking configuration and evaluation for recording and alerting rules.\n1. Tracking discontinuation of metrics for non-scrape sources of metrics (e.g. 
in k6, once a benchmark has finished, it could emit stale markers for the per-benchmark series).\n\n#### Metadata\n\nMetadata SHOULD follow the official Prometheus guidelines for [Type](https:\/\/prometheus.io\/docs\/instrumenting\/writing_exporters\/#types) and [Help](https:\/\/prometheus.io\/docs\/instrumenting\/writing_exporters\/#help-strings).\n\nMetadata MAY follow the official OpenMetrics guidelines for [Unit](https:\/\/github.com\/OpenObservability\/OpenMetrics\/blob\/main\/specification\/OpenMetrics.md#unit).\n\n#### Exemplars\n\nEach exemplar, if attached to a `TimeSeries`:\n\n* MUST contain a value.\n\n<!---\nRationales: https:\/\/github.com\/prometheus\/proposals\/blob\/alexg\/remote-write-20-proposal\/proposals\/2024-04-09_remote-write-20.md#partial-writes#exemplars\n-->\n* MAY contain labels, e.g. referencing a trace or request ID. If the exemplar references a trace, it SHOULD use the `trace_id` label name, as a best practice.\n* MUST contain a timestamp. While exemplar timestamps are optional in Prometheus\/OpenMetrics exposition formats, the assumption is that a timestamp is assigned at scrape time in the same way a timestamp is assigned to the scrape sample. Receivers require exemplar timestamps to reliably handle (e.g. deduplicate) incoming exemplars.\n\n## Out of Scope\n\nThe same as in [1.0](.\/remote_write_spec.md#out-of-scope).\n\n## Future Plans\n\nThis section contains speculative plans that are not considered part of the protocol specification yet, but are mentioned here for completeness. Note that the 2.0 specification completed [2 of 3 future plans from 1.0](.\/remote_write_spec.md#future-plans).\n\n* **Transactionality** There is still no transactionality defined for the 2.0 specification, mostly because it makes a scalable Sender implementation difficult. The Prometheus Sender aims to be \"transactional\" - i.e. to never expose a partially scraped target to a query. 
We intend to do the same with Remote-Write -- for instance, in the future we would like to \"align\" Remote-Write with scrapes, perhaps such that all the samples, metadata and exemplars for a single scrape are sent in a single Remote-Write request.\n\n  However, the Remote-Write 2.0 specification solves an important transactionality problem for [the classic histogram buckets](https:\/\/docs.google.com\/document\/d\/1mpcSWH1B82q-BtJza-eJ8xMLlKt6EJ9oFGH325vtY1Q\/edit#heading=h.ueg7q07wymku). This is done thanks to native histograms supporting custom bucketing, made possible by the `io.prometheus.write.v2.Request` wire format. Senders might translate all classic histograms to native histograms this way, but it's outside the scope of this specification to mandate this. However, for this reason, Receivers MAY ignore certain metric types (e.g. classic histograms).\n\n* **Alternative wire formats**. The OpenTelemetry community has shown the validity of Apache Arrow (and potentially other columnar formats) for over-the-wire data transfer with their OTLP protocol. We would like to run experiments to confirm the compatibility of a similar format with Prometheus' data model and include benchmarks of any resource usage changes. We would potentially maintain both a protobuf and a columnar format long term for compatibility reasons, and use our content negotiation to add different Protobuf Messages for this purpose.\n\n* **Global symbols**. A pre-defined string dictionary for interning: the protocol could pre-define a static dictionary of ref->symbol that includes strings that are considered common, e.g. \"namespace\", \"le\", \"job\", \"seconds\", \"bytes\", etc. Senders could refer to these without the need to include them in the request's symbols table. 
This dictionary could incrementally grow with minor version releases of this protocol.\n\n## Related\n\n### FAQ\n\n**Why did you not use gRPC?**\nBecause the 1.0 protocol does not use gRPC, breaking it would increase friction in adoption. See the 1.0 [reason](.\/remote_write_spec.md#faq).\n\n**Why not stream protobuf messages?**\nIf you use persistent HTTP\/1.1 connections, they are pretty close to streaming. Of course, headers have to be re-sent, but that is less expensive than a new TCP setup.\n\n**Why do we send samples in order?**\nThe in-order constraint comes from the encoding we use for time series data in Prometheus, the implementation of which is optimized for append-only workloads. However, this requirement is also shared across many other databases and vendors in the ecosystem. In fact, [Prometheus with the OOO feature enabled](https:\/\/youtu.be\/qYsycK3nTSQ?t=1321) allows out-of-order writes, but with a performance penalty, so they are reserved for rare events. To sum up, Receivers may support out-of-order writes, though the specification does not require it. In future (e.g. 2.x) spec versions, we could extend the content type to negotiate out-of-order writes, if needed.\n\n**How can we parallelise requests with the in-order constraint?**\nSamples must be in order _for a given series_. However, even if a Receiver does not support out-of-order writes, Remote-Write requests can be sent in parallel as long as they are for different series. Prometheus shards the samples by their labels into separate queues, and then writes happen sequentially in each queue. 
This guarantees samples for the same series are delivered in order, but samples for different series are sent in parallel - and potentially \"out of order\" between different series.\n\n**What are the differences between Remote-Write 2.0 and OpenTelemetry's OTLP protocol?**\n[OpenTelemetry OTLP](https:\/\/github.com\/open-telemetry\/opentelemetry-proto\/blob\/a05597bff803d3d9405fcdd1e1fb1f42bed4eb7a\/docs\/specification.md) is a protocol for transporting telemetry data (such as metrics, logs, traces and profiles) between telemetry sources, intermediate nodes and telemetry backends. The recommended transport involves gRPC with protobuf, but HTTP with protobuf or JSON is also described. It was designed from scratch with the intent to support a variety of different observability signals, data types and extra information. For [metrics](https:\/\/github.com\/open-telemetry\/opentelemetry-proto\/blob\/main\/opentelemetry\/proto\/metrics\/v1\/metrics.proto) that means additional non-identifying labels, flags, temporal aggregation types, resource or scoped metrics, schema URLs and more. OTLP also requires [the semantic convention](https:\/\/opentelemetry.io\/docs\/concepts\/semantic-conventions\/) to be used.\n\nRemote-Write was designed for simplicity, efficiency and organic growth. The first version was officially released in 2023, when [dozens of battle-tested adopters in the CNCF ecosystem](.\/remote_write_spec.md#compatible-senders-and-receivers) had already been using this protocol for years. Remote-Write 2.0 iterates on the previous protocol by adding a few new elements (metadata, exemplars, created timestamp and native histograms) and string interning. Remote-Write 2.0 is always stateless, focuses only on metrics and is opinionated; as such, it is scoped down to the elements that the Prometheus community considers enough for a robust metric solution. 
The intention is to ensure the Remote-Write is a stable protocol that is cheaper and simpler to adopt and use than the alternatives in the observability ecosystem.
metadata represents the metadata associated with the given series  samples    Metadata metadata   5        created timestamp represents an optional created timestamp associated with      this series  samples in ms format  typically for counter or histogram type      metrics  Created timestamp represents the time when the counter started      counting  sometimes referred to as start timestamp   which can increase      the accuracy of query results            Note that some receivers might require this and in return fail to      write such samples within the Request            For Go  see github com prometheus prometheus model timestamp timestamp go      for conversion from to time Time to Prometheus timestamp            Note that the  optional  keyword is omitted due to      https   cloud google com apis design design patterns md optional primitive fields      Zero value means value not set  If you need to use exactly zero value for      the timestamp  use 1 millisecond before or after    int64 created timestamp   6        Exemplar represents additional information attached to some series  samples  message Exemplar        labels refs is an optional list of label name value pair references  encoded      as indices to the Request symbols array  This list s len is always      a multiple of 2  and the underlying labels should be sorted lexicographically       If the exemplar references a trace it should use the  trace id  label name  as a best practice    repeated uint32 labels refs   1       value represents an exact example value  This can be useful when the exemplar      is attached to a histogram  which only gives an estimated value through buckets    double value   2       timestamp represents the timestamp of the exemplar in ms       For Go  see github com prometheus prometheus model timestamp timestamp go      for conversion from to time Time to Prometheus timestamp    int64 timestamp   3        Sample represents series sample  message Sample        value of the 
sample    double value   1       timestamp represents timestamp of the sample in ms    int64 timestamp   2        Metadata represents the metadata associated with the given series  samples  message Metadata     enum MetricType       METRIC TYPE UNSPECIFIED      0      METRIC TYPE COUNTER          1      METRIC TYPE GAUGE            2      METRIC TYPE HISTOGRAM        3      METRIC TYPE GAUGEHISTOGRAM   4      METRIC TYPE SUMMARY          5      METRIC TYPE INFO             6      METRIC TYPE STATESET         7        MetricType type   1       help ref is a reference to the Request symbols array representing help      text for the metric  Help is optional  reference should point to an empty string in      such a case    uint32 help ref   3       unit ref is a reference to the Request symbols array representing a unit      for the metric  Unit is optional  reference should point to an empty string in      such a case    uint32 unit ref   4        A native histogram  also known as a sparse histogram     See https   github com prometheus prometheus blob remote write 2 0 prompb io prometheus write v2 types proto L142    for a full message that follows the native histogram spec for both sparse    and exponential  as well as  custom bucketing  message Histogram              All timestamps MUST be int64 counted as milliseconds since the Unix epoch  Sample s values MUST be float64   For every  TimeSeries  message      labels refs  MUST be provided         Rationales  https   github com prometheus proposals blob alexg remote write 20 proposal proposals 2024 04 09 remote write 20 md partial writes samples vs native histogram samples       At least one element in  samples  or in  histograms  MUST be provided  A  TimeSeries  MUST NOT include both  samples  and  histograms   For series which  rarely  would mix float and histogram samples  a separate  TimeSeries  message MUST be used         Rationales  https   github com prometheus proposals blob alexg remote write 20 proposal 
proposals 2024 04 09 remote write 20 md always on metadata        metadata  sub fields SHOULD be provided  Receivers MAY reject series with unspecified  Metadata type     Exemplars SHOULD be provided if they exist for a series     created timestamp  SHOULD be provided for metrics that follow counter semantics  e g  counters and histograms   Receivers MAY reject those series without  created timestamp  being set   The following subsections define some schema elements in detail        Symbols        Rationales  https   github com prometheus proposals blob alexg remote write 20 proposal proposals 2024 04 09 remote write 20 md partial writes string interning     The  io prometheus write v2 Request  Protobuf Message is designed to  intern all strings  https   en wikipedia org wiki String interning  for the proven additional compression and memory efficiency gains on top of the standard compressions   The  symbols  table MUST be provided and it MUST contain deduplicated strings used in series  exemplar labels  and metadata strings  The first element of the  symbols  table MUST be an empty string  which is used to represent empty or unspecified values such as when  Metadata unit ref  or  Metadata help ref  are not provided  References MUST point to the existing index in the  symbols  string array        Series Labels        Rationales  https   github com prometheus proposals blob alexg remote write 20 proposal proposals 2024 04 09 remote write 20 md labels and utf 8     The complete set of labels MUST be sent with each  Sample  or  Histogram  sample  Additionally  the label set associated with samples     SHOULD contain a    name    label    MUST NOT contain repeated label names    MUST have label names sorted in lexicographical order    MUST NOT contain any empty label names or values   Metric names  label names  and label values MUST be any sequence of UTF 8 characters   Metric names SHOULD adhere to the regex   a zA Z     a zA Z0 9         Label names SHOULD adhere to 
the regex   a zA Z    a zA Z0 9        Names that do not adhere to the above  might be harder to use for PromQL users  see  the UTF 8 proposal for more details  https   github com prometheus proposals blob main proposals 2023 08 21 utf8 md     Label names beginning with      are RESERVED for system usage and SHOULD NOT be used  see  Prometheus Data Model  https   prometheus io docs concepts data model     Receivers also MAY impose limits on the number and length of labels  but this is receiver specific and is out of the scope of this document        Samples and Histogram Samples        Rationales  https   github com prometheus proposals blob alexg remote write 20 proposal proposals 2024 04 09 remote write 20 md partial writes native histograms     Senders MUST send  samples   or  histograms   for any given  TimeSeries  in timestamp order  Senders MAY send multiple requests for different series in parallel         Rationales  https   github com prometheus proposals blob alexg remote write 20 proposal proposals 2024 04 09 remote write 20 md partial writes being pull vs push agnostic     Senders SHOULD send stale markers when a time series will no longer be appended to  Senders MUST send stale markers if the discontinuation of time series is possible to detect  for example     For series that were pulled  scraped   unless explicit timestamp was used    For series that is resulted by a recording rule evaluation   Generally  not sending stale markers for series that are discontinued can lead to the Receiver  non trivial query time alignment issues  https   prometheus io docs prometheus latest querying basics  staleness    Stale markers MUST be signalled by the special NaN value  0x7ff0000000000002   This value MUST NOT be used otherwise   Typically  Senders can detect when a time series will no longer be appended using the following techniques   1  Detecting  using service discovery  that the target exposing the series has gone away  1  Noticing the target is no longer 
exposing the time series between successive scrapes  1  Failing to scrape the target that originally exposed a time series  1  Tracking configuration and evaluation for recording and alerting rules  1  Tracking discontinuation of metrics for non scrape source of metric  e g  in k6 when the benchmark has finished for series per benchmark  it could emit a stale marker         Metadata  Metadata SHOULD follow the official Prometheus guidelines for  Type  https   prometheus io docs instrumenting writing exporters  types  and  Help  https   prometheus io docs instrumenting writing exporters  help strings    Metadata MAY follow the official OpenMetrics guidelines for  Unit  https   github com OpenObservability OpenMetrics blob main specification OpenMetrics md unit         Exemplars  Each exemplar  if attached to a  TimeSeries      MUST contain a value         Rationales  https   github com prometheus proposals blob alexg remote write 20 proposal proposals 2024 04 09 remote write 20 md partial writes exemplars       MAY contain labels e g  referencing trace or request ID  If the exemplar references a trace it SHOULD use the  trace id  label name  as a best practice    MUST contain a timestamp  While exemplar timestamps are optional in Prometheus Open Metrics exposition formats  the assumption is that a timestamp is assigned at scrape time in the same way a timestamp is assigned to the scrape sample  Receivers require exemplar timestamps to reliably handle  e g  deduplicate  incoming exemplars      Out of Scope  The same as in  1 0    remote write spec md out of scope       Future Plans  This section contains speculative plans that are not considered part of protocol specification yet but are mentioned here for completeness  Note that 2 0 specification completed  2 of 3 future plans in the 1 0    remote write spec md future plans        Transactionality   There is still no transactionality defined for 2 0 specification  mostly because it makes a scalable Sender 
implementation difficult  Prometheus Sender aims at being  transactional    i e  to never expose a partially scraped target to a query  We intend to do the same with Remote Write    for instance  in the future we would like to  align  Remote Write with scrapes  perhaps such that all the samples  metadata and exemplars for a single scrape are sent in a single Remote Write request     However  Remote Write 2 0 specification solves an important transactionality problem for  the classic histogram buckets  https   docs google com document d 1mpcSWH1B82q BtJza eJ8xMLlKt6EJ9oFGH325vtY1Q edit heading h ueg7q07wymku   This is done thanks to the native histograms supporting custom bucket ing possible with the  io prometheus write v2 Request  wire format  Senders might translate all classic histograms to native histograms this way  but it s out of this specification to mandate this  However  for this reason  Receivers MAY ignore certain metric types  e g  classic histograms        Alternative wire formats    The OpenTelemetry community has shown the validity of Apache Arrow  and potentially other columnar formats  for over wire data transfer with their OTLP protocol  We would like to do experiments to confirm the compatibility of a similar format with Prometheus  data model and include benchmarks of any resource usage changes  We would potentially maintain both a protobuf and columnar format long term for compatibility reasons and use our content negotiation to add different Protobuf Messages for this purpose       Global symbols    Pre defined string dictionary for interning The protocol could pre define a static dictionary of ref  symbol that includes strings that are considered common  e g   namespace    le    job    seconds    bytes   etc  Senders could refer to these without the need to include them in the request s symbols table  This dictionary could incrementally grow with minor version releases of this protocol      Related      FAQ    Why did you not use gRPC    
Because the 1.0 protocol does not use gRPC, breaking it would increase friction in the adoption. See the 1.0 [reason](./remote_write_spec.md#faq).

**Why not stream protobuf messages?**

If you use persistent HTTP/1.1 connections, they are pretty close to streaming. Of course, headers have to be re-sent, but that is less expensive than a new TCP set up.

**Why do we send samples in order?**

The in-order constraint comes from the encoding we use for time series data in Prometheus, the implementation of which is optimized for append-only workloads. However, this requirement is also shared across many other databases and vendors in the ecosystem. In fact, [Prometheus with the OOO feature enabled](https://youtu.be/qYsycK3nTSQ?t=1321) allows out-of-order writes, but with a performance penalty, thus reserved for rare events. To sum up, Receivers may support out-of-order writes, though it is not permitted by the specification. In future (e.g. 2.x) spec versions, we could extend the content type to negotiate out-of-order writes, if needed.

**How can we parallelise requests with the in-order constraint?**

Samples must be in order *for a given series*. However, even if a Receiver does not support out-of-order writes, the Remote Write requests can be sent in parallel as long as they are for different series. Prometheus shards the samples by their labels into separate queues, and then writes happen sequentially in each queue. This guarantees samples for the same series are delivered in order, but samples for different series are sent in parallel, and potentially "out of order" between different series.

**What are the differences between Remote Write 2.0 and OpenTelemetry's OTLP protocol?**

[OpenTelemetry OTLP](https://github.com/open-telemetry/opentelemetry-proto/blob/a05597bff803d3d9405fcdd1e1fb1f42bed4eb7a/docs/specification.md) is a protocol for transporting telemetry data (such as metrics, logs, traces and profiles) between telemetry sources, intermediate nodes and telemetry backends. The recommended transport involves gRPC with protobuf, but HTTP with protobuf or JSON are also described. It was designed from scratch with the intent to support a variety of different observability signals, data types and extra information. For [metrics](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/metrics/v1/metrics.proto) that means additional non-identifying labels, flags, temporal aggregation types, resource or scoped metrics, schema URLs and more. OTLP also requires [the semantic convention](https://opentelemetry.io/docs/concepts/semantic-conventions/) to be used.

Remote Write was designed for simplicity, efficiency and organic growth. The first version was officially released in 2023, when already [dozens of battle-tested adopters in the CNCF ecosystem](./remote_write_spec.md#compatible-senders-and-receivers) had been using this protocol for years. Remote Write 2.0 iterates on the previous protocol by adding a few new elements (metadata, exemplars, created timestamp and native histograms) and string interning. Remote Write 2.0 is always stateless, focuses only on metrics and is opinionated; as such it is scoped down to elements that the Prometheus community considers enough to have a robust metric solution. The intention is to ensure that Remote Write is a stable protocol that is cheaper and simpler to adopt and use than the alternatives in the observability ecosystem.
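The per-series sharding described in the FAQ above can be sketched in Go. This is an illustrative sketch, not part of the specification: the `shardFor` helper and the shard count are hypothetical, and real implementations (like Prometheus's queue manager) differ in detail. The idea is that a series' sorted label set hashes to one queue, so its samples stay in timestamp order on that queue, while distinct series can be written in parallel.

```go
// Sketch of per-series sharding: samples for the same series always map to
// the same queue (preserving their order), while different series can be
// written to a Receiver in parallel.
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// shardFor picks a queue index for a series based on its label set.
// Labels are sorted first so that equivalent label sets hash identically,
// mirroring the spec's requirement that labels be lexicographically sorted.
func shardFor(labels map[string]string, numShards int) int {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	h := fnv.New32a()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0xff}) // separator to avoid ambiguous concatenations
		h.Write([]byte(labels[k]))
		h.Write([]byte{0xff})
	}
	return int(h.Sum32() % uint32(numShards))
}

func main() {
	series := map[string]string{"__name__": "http_requests_total", "job": "api"}
	// The same series always maps to the same shard, so its samples are
	// appended to one queue and flushed in timestamp order.
	fmt.Println(shardFor(series, 4) == shardFor(series, 4)) // true
}
```

With one goroutine draining each queue sequentially, in-order delivery per series is preserved without serializing unrelated series behind each other.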
---
title: Comparison to alternatives
sort_rank: 4
---

# Comparison to alternatives

## Prometheus vs. Graphite

### Scope

[Graphite](http://graphite.readthedocs.org/en/latest/) focuses on being a passive time series database with a query language and graphing features. Any other concerns are addressed by external components.

Prometheus is a full monitoring and trending system that includes built-in and active scraping, storing, querying, graphing, and alerting based on time series data. It has knowledge about what the world should look like (which endpoints should exist, what time series patterns mean trouble, etc.), and actively tries to find faults.

### Data model

Graphite stores numeric samples for named time series, much like Prometheus does. However, Prometheus's metadata model is richer: while Graphite metric names consist of dot-separated components which implicitly encode dimensions, Prometheus encodes dimensions explicitly as key-value pairs, called labels, attached to a metric name. This allows easy filtering, grouping, and matching by these labels via the query language.

Further, especially when Graphite is used in combination with [StatsD](https://github.com/etsy/statsd/), it is common to store only aggregated data over all monitored instances, rather than preserving the instance as a dimension and being able to drill down into individual problematic instances.

For example, storing the number of HTTP requests to API servers with the response code `500` and the method `POST` to the `/tracks` endpoint would commonly be encoded like this in Graphite/StatsD:

```
stats.api-server.tracks.post.500 -> 93
```

In Prometheus the same data could be encoded like this (assuming three api-server instances):

```
api_server_http_requests_total{method="POST",handler="/tracks",status="500",instance="<sample1>"} -> 34
api_server_http_requests_total{method="POST",handler="/tracks",status="500",instance="<sample2>"} -> 28
api_server_http_requests_total{method="POST",handler="/tracks",status="500",instance="<sample3>"} -> 31
```

### Storage

Graphite stores time series data on local disk in the [Whisper](http://graphite.readthedocs.org/en/latest/whisper.html) format, an RRD-style database that expects samples to arrive at regular intervals. Every time series is stored in a separate file, and new samples overwrite old ones after a certain amount of time.

Prometheus also creates one local file per time series, but allows storing samples at arbitrary intervals as scrapes or rule evaluations occur. Since new samples are simply appended, old data may be kept arbitrarily long. Prometheus also works well for many short-lived, frequently changing sets of time series.

### Summary

Prometheus offers a richer data model and query language, in addition to being easier to run and integrate into your environment.
If you want a clustered solution that can hold historical data long term, Graphite may be a better choice.

## Prometheus vs. InfluxDB

[InfluxDB](https://influxdata.com/) is an open-source time series database, with a commercial option for scaling and clustering. The InfluxDB project was released almost a year after Prometheus development began, so we were unable to consider it as an alternative at the time. Still, there are significant differences between Prometheus and InfluxDB, and both systems are geared towards slightly different use cases.

### Scope

For a fair comparison, we must also consider [Kapacitor](https://github.com/influxdata/kapacitor) together with InfluxDB, as in combination they address the same problem space as Prometheus and the Alertmanager.

The same scope differences as in the case of [Graphite](#prometheus-vs-graphite) apply here for InfluxDB itself. In addition InfluxDB offers continuous queries, which are equivalent to Prometheus recording rules.

Kapacitor's scope is a combination of Prometheus recording rules, alerting rules, and the Alertmanager's notification functionality. Prometheus offers [a more powerful query language for graphing and alerting](https://www.robustperception.io/translating-between-monitoring-languages/). The Prometheus Alertmanager additionally offers grouping, deduplication and silencing functionality.

### Data model / storage

Like Prometheus, the InfluxDB data model has key-value pairs as labels, which are called tags. In addition, InfluxDB has a second level of labels called fields, which are more limited in use. InfluxDB supports timestamps with up to nanosecond resolution, and float64, int64, bool, and string data types. Prometheus, by contrast, supports the float64 data type with limited support for strings, and millisecond resolution timestamps.

InfluxDB uses a variant of a [log-structured merge tree for storage with a write ahead log](https://docs.influxdata.com/influxdb/v1.7/concepts/storage_engine/), sharded by time. This is much more suitable to event logging than Prometheus's append-only file per time series approach.

[Logs and Metrics and Graphs, Oh My!](https://grafana.com/blog/2016/01/05/logs-and-metrics-and-graphs-oh-my/) describes the differences between event logging and metrics recording.

### Architecture

Prometheus servers run independently of each other and only rely on their local storage for their core functionality: scraping, rule processing, and alerting. The open source version of InfluxDB is similar.

The commercial InfluxDB offering is, by design, a distributed storage cluster with storage and queries being handled by many nodes at once.

This means that the commercial InfluxDB will be easier to scale horizontally, but it also means that you have to manage the complexity of a distributed storage system from the beginning. Prometheus will be simpler to run, but at some point you will need to shard servers explicitly along scalability boundaries like products, services, datacenters, or similar aspects. Independent servers (which can be run redundantly in parallel) may also give you better reliability and failure isolation.

Kapacitor's open-source release has no built-in distributed/redundant options for rules, alerting, or notifications.
The open-source release of Kapacitor can be scaled via manual sharding by the user, similar to Prometheus itself. Influx offers [Enterprise Kapacitor](https://docs.influxdata.com/enterprise_kapacitor), which supports an HA/redundant alerting system.

Prometheus and the Alertmanager by contrast offer a fully open-source redundant option via running redundant replicas of Prometheus and using the Alertmanager's [High Availability](https://github.com/prometheus/alertmanager#high-availability) mode.

### Summary

There are many similarities between the systems. Both have labels (called tags in InfluxDB) to efficiently support multi-dimensional metrics. Both use basically the same data compression algorithms. Both have extensive integrations, including with each other. Both have hooks allowing you to extend them further, such as analyzing data in statistical tools or performing automated actions.

Where InfluxDB is better:

  * If you're doing event logging.
  * Commercial option offers clustering for InfluxDB, which is also better for long term data storage.
  * Eventually consistent view of data between replicas.

Where Prometheus is better:

  * If you're primarily doing metrics.
  * More powerful query language, alerting, and notification functionality.
  * Higher availability and uptime for graphing and alerting.

InfluxDB is maintained by a single commercial company following the open-core model, offering premium features like closed-source clustering, hosting and support. Prometheus is a [fully open source and independent project](/community/), maintained by a number of companies and individuals, some of whom also offer commercial services and support.

## Prometheus vs. OpenTSDB

[OpenTSDB](http://opentsdb.net/) is a distributed time series database based on [Hadoop](http://hadoop.apache.org/) and [HBase](http://hbase.apache.org/).

### Scope

The same scope differences as in the case of [Graphite](/docs/introduction/comparison/#prometheus-vs-graphite) apply here.

### Data model

OpenTSDB's data model is almost identical to Prometheus's: time series are identified by a set of arbitrary key-value pairs (OpenTSDB tags are Prometheus labels). All data for a metric is [stored together](http://opentsdb.net/docs/build/html/user_guide/writing/index.html#time-series-cardinality), limiting the cardinality of metrics. There are minor differences though: Prometheus allows arbitrary characters in label values, while OpenTSDB is more restrictive. OpenTSDB also lacks a full query language, only allowing simple aggregation and math via its API.

### Storage

[OpenTSDB](http://opentsdb.net/)'s storage is implemented on top of [Hadoop](http://hadoop.apache.org/) and [HBase](http://hbase.apache.org/). This means that it is easy to scale OpenTSDB horizontally, but you have to accept the overall complexity of running a Hadoop/HBase cluster from the beginning.

Prometheus will be simpler to run initially, but will require explicit sharding once the capacity of a single node is exceeded.

### Summary

Prometheus offers a much richer query language, can handle higher cardinality metrics, and forms part of a complete monitoring system. If you're already running Hadoop and value long term storage over these benefits, OpenTSDB is a good choice.

## Prometheus vs. Nagios

[Nagios](https://www.nagios.org/) is a monitoring system that originated in the 1990s as NetSaint.

### Scope

Nagios is primarily about alerting based on the exit codes of scripts. These are called "checks".
There is silencing of individual alerts, however no grouping, routing or deduplication.

There are a variety of plugins. For example, piping the few kilobytes of perfData plugins are allowed to return [to a time series database such as Graphite](https://github.com/shawn-sterling/graphios) or using NRPE to [run checks on remote machines](https://exchange.nagios.org/directory/Addons/Monitoring-Agents/NRPE--2D-Nagios-Remote-Plugin-Executor/details).

### Data model

Nagios is host-based. Each host can have one or more services and each service can perform one check.

There is no notion of labels or a query language.

### Storage

Nagios has no storage per-se, beyond the current check state. There are plugins which can store data such as [for visualisation](https://docs.pnp4nagios.org/).

### Architecture

Nagios servers are standalone. All configuration of checks is via file.

### Summary

Nagios is suitable for basic monitoring of small and/or static systems where blackbox probing is sufficient.

If you want to do whitebox monitoring, or have a dynamic or cloud based environment, then Prometheus is a good choice.

## Prometheus vs. Sensu

[Sensu](https://sensu.io) is an open source monitoring and observability pipeline with a commercial distribution which offers additional features for scalability. It can reuse existing Nagios plugins.

### Scope

Sensu is an observability pipeline that focuses on processing and alerting of observability data as a stream of [Events](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/). It provides an extensible framework for event [filtering](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-filter/), aggregation, [transformation](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-transform/), and [processing](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-process/), including sending alerts to other systems and storing events in third-party systems. Sensu's event processing capabilities are similar in scope to Prometheus alerting rules and Alertmanager.

### Data model

Sensu [Events](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/) represent service health and/or [metrics](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/#metric-attributes) in a structured data format identified by an [entity](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-entities/entities/) name (e.g. server, cloud compute instance, container, or service), an event name, and optional [key-value metadata](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/#metadata-attributes) called "labels" or "annotations". The Sensu Event payload may include one or more metric [`points`](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-events/events/#points-attributes), represented as a JSON object containing a `name`, `tags` (key/value pairs), `timestamp`, and `value` (always a float).

### Storage

Sensu stores current and recent event status information and real-time inventory data in an embedded database (etcd) or an external RDBMS (PostgreSQL).

### Architecture

All components of a Sensu deployment can be clustered for high availability and improved event-processing throughput.
\n\n### Summary\n\nSensu and Prometheus have a few capabilities in common, but they take very different approaches to monitoring. Both offer extensible discovery mechanisms for dynamic cloud-based environments and ephemeral compute platforms, though the underlying mechanisms are quite different. Both provide support for collecting multi-dimensional metrics via labels and annotations. Both have extensive integrations, and Sensu natively supports collecting metrics from all Prometheus exporters. Both are capable of forwarding observability data to third-party data platforms (e.g. event stores or TSDBs). Where Sensu and Prometheus differ the most is in their use cases.\n\nWhere Sensu is better: \n\n- If you're collecting and processing hybrid observability data (including metrics _and\/or_ events)\n- If you're consolidating multiple monitoring tools and need support for metrics _and_ Nagios-style plugins or check scripts\n- More powerful event-processing platform\n\nWhere Prometheus is better: \n\n- If you're primarily collecting and evaluating metrics\n- If you're monitoring homogeneous Kubernetes infrastructure (if 100% of the workloads you're monitoring are in K8s, Prometheus offers better K8s integration)\n- More powerful query language, and built-in support for historical data analysis \n\nSensu is maintained by a single commercial company following the open-core business model, offering premium features like closed-source event correlation and aggregation, federation, and support. 
Prometheus is a fully open source and independent project, maintained by a number of companies and individuals, some of whom also offer commercial services and support.","site":"prometheus","answers_cleaned":"    title  Comparison to alternatives sort rank  4        Comparison to alternatives     Prometheus vs  Graphite      Scope   Graphite  http   graphite readthedocs org en latest   focuses on being a passive time series database with a query language and graphing features  Any other concerns are addressed by external components   Prometheus is a full monitoring and trending system that includes built in and active scraping  storing  querying  graphing  and alerting based on time series data  It has knowledge about what the world should look like  which endpoints should exist  what time series patterns mean trouble  etc    and actively tries to find faults       Data model  Graphite stores numeric samples for named time series  much like Prometheus does  However  Prometheus s metadata model is richer  while Graphite metric names consist of dot separated components which implicitly encode dimensions  Prometheus encodes dimensions explicitly as key value pairs  called labels  attached to a metric name  This allows easy filtering  grouping  and matching by these labels via the query language   Further  especially when Graphite is used in combination with  StatsD  https   github com etsy statsd    it is common to store only aggregated data over all monitored instances  rather than preserving the instance as a dimension and being able to drill down into individual problematic instances   For example  storing the number of HTTP requests to API servers with the response code  500  and the method  POST  to the   tracks  endpoint would commonly be encoded like this in Graphite StatsD       stats api server tracks post 500    93      In Prometheus the same data could be encoded like this  assuming three api server instances        api server http requests total method  
POST  handler   tracks  status  500  instance   sample1       34 api server http requests total method  POST  handler   tracks  status  500  instance   sample2       28 api server http requests total method  POST  handler   tracks  status  500  instance   sample3       31          Storage  Graphite stores time series data on local disk in the  Whisper  http   graphite readthedocs org en latest whisper html  format  an RRD style database that expects samples to arrive at regular intervals  Every time series is stored in a separate file  and new samples overwrite old ones after a certain amount of time   Prometheus also creates one local file per time series  but allows storing samples at arbitrary intervals as scrapes or rule evaluations occur  Since new samples are simply appended  old data may be kept arbitrarily long  Prometheus also works well for many short lived  frequently changing sets of time series       Summary  Prometheus offers a richer data model and query language  in addition to being easier to run and integrate into your environment  If you want a clustered solution that can hold historical data long term  Graphite may be a better choice       Prometheus vs  InfluxDB   InfluxDB  https   influxdata com   is an open source time series database  with a commercial option for scaling and clustering  The InfluxDB project was released almost a year after Prometheus development began  so we were unable to consider it as an alternative at the time  Still  there are significant differences between Prometheus and InfluxDB  and both systems are geared towards slightly different use cases       Scope  For a fair comparison  we must also consider  Kapacitor  https   github com influxdata kapacitor  together with InfluxDB  as in combination they address the same problem space as Prometheus and the Alertmanager   The same scope differences as in the case of  Graphite   prometheus vs graphite  apply here for InfluxDB itself  In addition InfluxDB offers continuous 
queries  which are equivalent to Prometheus recording rules   Kapacitor s scope is a combination of Prometheus recording rules  alerting rules  and the Alertmanager s notification functionality  Prometheus offers  a more powerful query language for graphing and alerting  https   www robustperception io translating between monitoring languages    The Prometheus Alertmanager additionally offers grouping  deduplication and silencing functionality       Data model   storage  Like Prometheus  the InfluxDB data model has key value pairs as labels  which are called tags  In addition  InfluxDB has a second level of labels called fields  which are more limited in use  InfluxDB supports timestamps with up to nanosecond resolution  and float64  int64  bool  and string data types  Prometheus  by contrast  supports the float64 data type with limited support for strings  and millisecond resolution timestamps   InfluxDB uses a variant of a  log structured merge tree for storage with a write ahead log  https   docs influxdata com influxdb v1 7 concepts storage engine    sharded by time  This is much more suitable to event logging than Prometheus s append only file per time series approach    Logs and Metrics and Graphs  Oh My   https   grafana com blog 2016 01 05 logs and metrics and graphs oh my   describes the differences between event logging and metrics recording       Architecture  Prometheus servers run independently of each other and only rely on their local storage for their core functionality  scraping  rule processing  and alerting  The open source version of InfluxDB is similar   The commercial InfluxDB offering is  by design  a distributed storage cluster with storage and queries being handled by many nodes at once   This means that the commercial InfluxDB will be easier to scale horizontally  but it also means that you have to manage the complexity of a distributed storage system from the beginning  Prometheus will be simpler to run  but at some point you will need to 
shard servers explicitly along scalability boundaries like products  services  datacenters  or similar aspects  Independent servers  which can be run redundantly in parallel  may also give you better reliability and failure isolation   Kapacitor s open source release has no built in distributed redundant options for  rules   alerting  or notifications   The open source release of Kapacitor can  be scaled via manual sharding by the user  similar to Prometheus itself  Influx offers  Enterprise Kapacitor  https   docs influxdata com enterprise kapacitor   which supports an  HA redundant alerting system   Prometheus and the Alertmanager by contrast offer a fully open source redundant  option via running redundant replicas of Prometheus and using the Alertmanager s   High Availability  https   github com prometheus alertmanager high availability  mode        Summary  There are many similarities between the systems  Both have labels  called tags in InfluxDB  to efficiently support multi dimensional metrics  Both use basically the same data compression algorithms  Both have extensive integrations  including with each other  Both have hooks allowing you to extend them further  such as analyzing data in statistical tools or performing automated actions   Where InfluxDB is better       If you re doing event logging      Commercial option offers clustering for InfluxDB  which is also better for long term data storage      Eventually consistent view of data between replicas   Where Prometheus is better       If you re primarily doing metrics      More powerful query language  alerting  and notification functionality      Higher availability and uptime for graphing and alerting   InfluxDB is maintained by a single commercial company following the open core model  offering premium features like closed source clustering  hosting and support  Prometheus is a  fully open source and independent project   community    maintained by a number of companies and individuals  some of whom 
also offer commercial services and support      Prometheus vs  OpenTSDB   OpenTSDB  http   opentsdb net   is a distributed time series database based on  Hadoop  http   hadoop apache org   and  HBase  http   hbase apache org         Scope  The same scope differences as in the case of  Graphite   docs introduction comparison  prometheus vs graphite  apply here       Data model  OpenTSDB s data model is almost identical to Prometheus s  time series are identified by a set of arbitrary key value pairs  OpenTSDB tags are Prometheus labels   All data for a metric is   stored together  http   opentsdb net docs build html user guide writing index html time series cardinality   limiting the cardinality of metrics  There are minor differences though  Prometheus allows arbitrary characters in label values  while OpenTSDB is more restrictive   OpenTSDB also lacks a full query language  only allowing simple aggregation and math via its API       Storage   OpenTSDB  http   opentsdb net   s storage is implemented on top of  Hadoop  http   hadoop apache org   and  HBase  http   hbase apache org    This means that it is easy to scale OpenTSDB horizontally  but you have to accept the overall complexity of running a Hadoop HBase cluster from the beginning   Prometheus will be simpler to run initially  but will require explicit sharding once the capacity of a single node is exceeded       Summary  Prometheus offers a much richer query language  can handle higher cardinality metrics  and forms part of a complete monitoring system  If you re already running Hadoop and value long term storage over these benefits  OpenTSDB is a good choice      Prometheus vs  Nagios   Nagios  https   www nagios org   is a monitoring system that originated in the 1990s as NetSaint       Scope  Nagios is primarily about alerting based on the exit codes of scripts  These are  called  checks   There is silencing of individual alerts  however no grouping   routing or deduplication   There are a variety of 
plugins  For example  piping the few kilobytes of perfData plugins are allowed to return  to a time series database such as Graphite  https   github com shawn sterling graphios  or using NRPE to  run checks on remote machines  https   exchange nagios org directory Addons Monitoring Agents NRPE  2D Nagios Remote Plugin Executor details        Data model  Nagios is host based  Each host can have one or more services and each service can perform one check   There is no notion of labels or a query language       Storage  Nagios has no storage per se  beyond the current check state  There are plugins which can store data such as  for visualisation  https   docs pnp4nagios org         Architecture  Nagios servers are standalone  All configuration of checks is via file       Summary  Nagios is suitable for basic monitoring of small and or static systems where blackbox probing is sufficient   If you want to do whitebox monitoring  or have a dynamic or cloud based environment  then Prometheus is a good choice      Prometheus vs  Sensu   Sensu  https   sensu io  is an open source monitoring and observability pipeline with a commercial distribution which offers additional features for scalability  It can reuse existing Nagios plugins       Scope  Sensu is an observability pipeline that focuses on processing and alerting of observability data as a stream of  Events  https   docs sensu io sensu go latest observability pipeline observe events events    It provides an extensible framework for event  filtering  https   docs sensu io sensu go latest observability pipeline observe filter    aggregation   transformation  https   docs sensu io sensu go latest observability pipeline observe transform    and  processing  https   docs sensu io sensu go latest observability pipeline observe process     including sending alerts to other systems and storing events in third party systems  Sensu s event processing capabilities are similar in scope to Prometheus alerting rules and Alertmanager 
       Data model  Sensu  Events  https   docs sensu io sensu go latest observability pipeline observe events events   represent service health and or  metrics  https   docs sensu io sensu go latest observability pipeline observe events events  metric attributes  in a structured data format identified by an  entity  https   docs sensu io sensu go latest observability pipeline observe entities entities   name  e g  server  cloud compute instance  container  or service   an event name  and optional  key value metadata  https   docs sensu io sensu go latest observability pipeline observe events events  metadata attributes  called  labels  or  annotations   The Sensu Event payload may include one or more metric   points   https   docs sensu io sensu go latest observability pipeline observe events events  points attributes   represented as a JSON object containing a  name    tags   key value pairs    timestamp   and  value   always a float        Storage  Sensu stores current and recent event status information and real time inventory data in an embedded database  etcd  or an external RDBMS  PostgreSQL         Architecture  All components of a Sensu deployment can be clustered for high availability and improved event processing throughput        Summary  Sensu and Prometheus have a few capabilities in common  but they take very different approaches to monitoring  Both offer extensible discovery mechanisms for dynamic cloud based environments and ephemeral compute platforms  though the underlying mechanisms are quite different  Both provide support for collecting multi dimensional metrics via labels and annotations  Both have extensive integrations  and Sensu natively supports collecting metrics from all Prometheus exporters  Both are capable of forwarding observability data to third party data platforms  e g  event stores or TSDBs   Where Sensu and Prometheus differ the most is in their use cases   Where Sensu is better      If you re collecting and processing hybrid 
observability data  including metrics  and or  events    If you re consolidating multiple monitoring tools and need support for metrics  and  Nagios style plugins or check scripts   More powerful event processing platform  Where Prometheus is better      If you re primarily collecting and evaluating metrics   If you re monitoring homogeneous Kubernetes infrastructure  if 100  of the workloads you re monitoring are in K8s  Prometheus offers better K8s integration    More powerful query language  and built in support for historical data analysis   Sensu is maintained by a single commercial company following the open core business model  offering premium features like closed source event correlation and aggregation  federation  and support  Prometheus is a fully open source and independent project  maintained by a number of companies and individuals  some of whom also offer commercial services and support "}
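The comparison record above repeatedly credits Prometheus with "a more powerful query language" built on multi-dimensional labels, and its Graphite section encodes the same request count two ways: one aggregated Graphite/StatsD value (`93`) versus three labeled Prometheus series (`34`, `28`, `31` per instance). As a minimal sketch of why explicit key/value labels make filtering, grouping, and drill-down easy — plain Python for illustration, not Prometheus code, with the helper names `select` and `sum_without` invented here:

```python
# Illustrative sketch (not Prometheus internals): series as label-set -> value.
# Values 34/28/31 are the per-instance request counts from the Graphite
# comparison; collapsing the "instance" label recovers the aggregate 93.
samples = [
    ({"__name__": "api_server_http_requests_total", "method": "POST",
      "handler": "/tracks", "status": "500", "instance": "sample1"}, 34.0),
    ({"__name__": "api_server_http_requests_total", "method": "POST",
      "handler": "/tracks", "status": "500", "instance": "sample2"}, 28.0),
    ({"__name__": "api_server_http_requests_total", "method": "POST",
      "handler": "/tracks", "status": "500", "instance": "sample3"}, 31.0),
]

def select(series, **matchers):
    """Keep only series whose labels equal all the given matchers."""
    return [(labels, v) for labels, v in series
            if all(labels.get(k) == want for k, want in matchers.items())]

def sum_without(series, drop):
    """Sum values after collapsing one label (in the spirit of PromQL's
    `sum without`), so per-instance data can still be aggregated on demand."""
    out = {}
    for labels, v in series:
        key = tuple(sorted((k, val) for k, val in labels.items() if k != drop))
        out[key] = out.get(key, 0.0) + v
    return out

errors = select(samples, status="500", method="POST")   # drill-down stays possible
total = sum_without(errors, drop="instance")            # 34 + 28 + 31 = 93
```

With dot-separated names like `stats.api-server.tracks.post.500`, the instance dimension is either lost at ingestion or must be parsed back out of the name; with labels, both the per-instance and aggregated views come from the same stored series.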
{"questions":"prometheus title Glossary Glossary sortrank 9 Alert","answers":"---\ntitle: Glossary\nsort_rank: 9\n---\n\n# Glossary\n\n\n### Alert\n\nAn alert is the outcome of an alerting rule in Prometheus that is\nactively firing. Alerts are sent from Prometheus to the Alertmanager.\n\n### Alertmanager\n\nThe [Alertmanager](..\/..\/alerting\/overview\/) takes in alerts, aggregates them into\ngroups, de-duplicates, applies silences, throttles, and then sends out\nnotifications to email, Pagerduty, Slack etc.\n\n### Bridge\n\nA bridge is a component that takes samples from a client library and\nexposes them to a non-Prometheus monitoring system. For example, the Python, Go, and Java clients can export metrics to Graphite.\n\n### Client library\n\nA client library is a library in some language (e.g. Go, Java, Python, Ruby)\nthat makes it easy to directly instrument your code, write custom collectors to\npull metrics from other systems and expose the metrics to Prometheus.\n\n### Collector\n\nA collector is a part of an exporter that represents a set of metrics. It may be\na single metric if it is part of direct instrumentation, or many metrics if it is pulling metrics from another system.\n\n### Direct instrumentation\n\nDirect instrumentation is instrumentation added inline as part of the source code of a program, using a [client library](#client-library).\n\n### Endpoint\n\nA source of metrics that can be scraped, usually corresponding to a single process.\n\n### Exporter\n\nAn exporter is a binary running alongside the application you\nwant to obtain metrics from. 
The exporter exposes Prometheus metrics, commonly by converting metrics that are exposed in a non-Prometheus format into a format that Prometheus supports.\n\n### Instance\n\nAn instance is a label that uniquely identifies a target in a job.\n\n### Job\n\nA collection of targets with the same purpose, for example monitoring a group of like processes replicated for scalability or reliability, is called a job.\n\n### Notification\n\nA notification represents a group of one or more alerts, and is sent by the Alertmanager to email, Pagerduty, Slack etc.\n\n### Promdash\n\nPromdash was a native dashboard builder for Prometheus. It has been deprecated and replaced by [Grafana](..\/..\/visualization\/grafana\/).\n\n### Prometheus\n\nPrometheus usually refers to the core binary of the Prometheus system. It may\nalso refer to the Prometheus monitoring system as a whole.\n\n### PromQL\n\n[PromQL](\/docs\/prometheus\/latest\/querying\/basics\/) is the Prometheus Query Language. It allows for\na wide range of operations including aggregation, slicing and dicing, prediction and joins.\n\n### Pushgateway\n\nThe [Pushgateway](..\/..\/instrumenting\/pushing\/) persists the most recent push\nof metrics from batch jobs. This allows Prometheus to scrape their metrics\nafter they have terminated.\n\n### Recording Rules\n\nRecording rules precompute frequently needed or computationally expensive expressions \nand save their results as a new set of time series.\n\n### Remote Read\n\nRemote read is a Prometheus feature that allows transparent reading of time series from\nother systems (such as long term storage) as part of queries.\n\n### Remote Read Adapter\n\nNot all systems directly support remote read. 
A remote read adapter sits between\nPrometheus and another system, converting time series requests and responses between them.\n\n### Remote Read Endpoint\n\nA remote read endpoint is what Prometheus talks to when doing a remote read.\n\n### Remote Write\n\nRemote write is a Prometheus feature that allows sending ingested samples on the\nfly to other systems, such as long term storage.\n\n### Remote Write Adapter\n\nNot all systems directly support remote write. A remote write adapter sits\nbetween Prometheus and another system, converting the samples in the remote\nwrite into a format the other system can understand.\n\n### Remote Write Endpoint\n\nA remote write endpoint is what Prometheus talks to when doing a remote write.\n\n### Sample\n\nA sample is a single value at a point in time in a time series.\n\nIn Prometheus, each sample consists of a float64 value and a millisecond-precision timestamp.\n\n### Silence\n\nA silence in the Alertmanager prevents alerts, with labels matching the silence, from\nbeing included in notifications.\n\n### Target\n\nA target is the definition of an object to scrape. 
For example, what labels to apply, any authentication required to connect, or other information that defines how the scrape will occur.\n\n### Time Series\n\nThe Prometheus time series are streams of timestamped values belonging to the same metric and the same set of labeled dimensions.\nPrometheus stores all data as time series.\n","site":"prometheus","answers_cleaned":"    title  Glossary sort rank  9        Glossary       Alert  An alert is the outcome of an alerting rule in Prometheus that is actively firing  Alerts are sent from Prometheus to the Alertmanager       Alertmanager  The  Alertmanager        alerting overview   takes in alerts  aggregates them into groups  de duplicates  applies silences  throttles  and then sends out notifications to email  Pagerduty  Slack etc       Bridge  A bridge is a component that takes samples from a client library and exposes them to a non Prometheus monitoring system  For example  the Python  Go  and Java clients can export metrics to Graphite       Client library  A client library is a library in some language  e g  Go  Java  Python  Ruby  that makes it easy to directly instrument your code  write custom collectors to pull metrics from other systems and expose the metrics to Prometheus       Collector  A collector is a part of an exporter that represents a set of metrics  It may be a single metric if it is part of direct instrumentation  or many metrics if it is pulling metrics from another system       Direct instrumentation  Direct instrumentation is instrumentation added inline as part of the source code of a program  using a  client library   client library        Endpoint  A source of metrics that can be scraped  usually corresponding to a single process       Exporter  An exporter is a binary running alongside the application you want to obtain metrics from  The exporter exposes Prometheus metrics  commonly by converting metrics that are exposed in a non Prometheus format into a format that Prometheus supports      
 Instance  An instance is a label that uniquely identifies a target in a job       Job  A collection of targets with the same purpose  for example monitoring a group of like processes replicated for scalability or reliability  is called a job       Notification  A notification represents a group of one or more alerts  and is sent by the Alertmanager to email  Pagerduty  Slack etc       Promdash  Promdash was a native dashboard builder for Prometheus  It has been deprecated and replaced by  Grafana        visualization grafana         Prometheus  Prometheus usually refers to the core binary of the Prometheus system  It may also refer to the Prometheus monitoring system as a whole       PromQL   PromQL   docs prometheus latest querying basics   is the Prometheus Query Language  It allows for a wide range of operations including aggregation  slicing and dicing  prediction and joins       Pushgateway  The  Pushgateway        instrumenting pushing   persists the most recent push of metrics from batch jobs  This allows Prometheus to scrape their metrics after they have terminated       Recording Rules  Recording rules precompute frequently needed or computationally expensive expressions  and save their results as a new set of time series       Remote Read  Remote read is a Prometheus feature that allows transparent reading of time series from other systems  such as long term storage  as part of queries       Remote Read Adapter  Not all systems directly support remote read  A remote read adapter sits between Prometheus and another system  converting time series requests and responses between them       Remote Read Endpoint  A remote read endpoint is what Prometheus talks to when doing a remote read       Remote Write  Remote write is a Prometheus feature that allows sending ingested samples on the fly to other systems  such as long term storage       Remote Write Adapter  Not all systems directly support remote write  A remote write adapter sits between Prometheus and 
another system  converting the samples in the remote write into a format the other system can understand       Remote Write Endpoint  A remote write endpoint is what Prometheus talks to when doing a remote write       Sample  A sample is a single value at a point in time in a time series   In Prometheus  each sample consists of a float64 value and a millisecond precision timestamp       Silence  A silence in the Alertmanager prevents alerts  with labels matching the silence  from being included in notifications       Target  A target is the definition of an object to scrape  For example  what labels to apply  any authentication required to connect  or other information that defines how the scrape will occur       Time Series  The Prometheus time series are streams of timestamped values belonging to the same metric and the same set of labeled dimensions  Prometheus stores all data as time series  "}
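The glossary's definitions of sample ("a float64 value and a millisecond-precision timestamp"), time series ("streams of timestamped values belonging to the same metric and the same set of labeled dimensions"), and recording rules ("precompute frequently needed ... expressions") fit together; a hedged toy model in plain Python — the scrape interval and counter values are made up for illustration, and this is not how the Prometheus TSDB is implemented:

```python
# Toy model of the glossary terms: a sample is a (ms timestamp, float64)
# pair, and a time series is an ordered stream of such samples for one
# metric name + label set.
from typing import NamedTuple

class Sample(NamedTuple):
    timestamp_ms: int   # millisecond-precision timestamp
    value: float        # float64 sample value

# A counter series scraped every 15s, growing by 150 per scrape
# (i.e. ~10 increments per second).
series = [Sample(t * 15_000, float(t * 150)) for t in range(5)]

def increase(series):
    """Total increase of a counter over the window (ignoring counter resets)."""
    return series[-1].value - series[0].value

def per_second_rate(series):
    """Average per-second rate over the window -- the kind of derived value
    a recording rule precomputes and saves as a new time series."""
    span_s = (series[-1].timestamp_ms - series[0].timestamp_ms) / 1000.0
    return increase(series) / span_s
```

Over the 60-second window the counter grows by 600, so the derived rate is 10 per second; storing that result under a new metric name is exactly what the "Recording Rules" entry describes.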
{"questions":"prometheus toc full width Frequently Asked Questions sortrank 5 General title FAQ","answers":"---\ntitle: FAQ\nsort_rank: 5\ntoc: full-width\n---\n\n# Frequently Asked Questions\n\n## General\n\n### What is Prometheus?\n\nPrometheus is an open-source systems monitoring and alerting toolkit\nwith an active ecosystem.\nIt is the only system directly supported by [Kubernetes](https:\/\/kubernetes.io\/) and the de facto standard across the [cloud native ecosystem](https:\/\/landscape.cncf.io\/).\nSee the [overview](\/docs\/introduction\/overview\/).\n\n### How does Prometheus compare against other monitoring systems?\n\nSee the [comparison](\/docs\/introduction\/comparison\/) page.\n\n### What dependencies does Prometheus have?\n\nThe main Prometheus server runs standalone as a single monolithic binary and has no external dependencies.\n\n#### Is this cloud native?\n\nYes.\n\nCloud native is a flexible operating model, breaking up old service boundaries to allow for more flexible and scalable deployments.\n\nPrometheus's [service discovery](https:\/\/prometheus.io\/docs\/prometheus\/latest\/configuration\/configuration\/) integrates with most tools and clouds. Its dimensional data model and its ability to scale into the tens of millions of active series allow it to monitor large cloud-native deployments.\nThere are always trade-offs to make when running services, and Prometheus values reliably getting alerts out to humans above all else.\n\n### Can Prometheus be made highly available?\n\nYes, run identical Prometheus servers on two or more separate machines.\nIdentical alerts will be deduplicated by the [Alertmanager](https:\/\/github.com\/prometheus\/alertmanager).\n\nAlertmanager supports [high availability](https:\/\/github.com\/prometheus\/alertmanager#high-availability) by interconnecting multiple Alertmanager instances to build an Alertmanager cluster. 
Instances of a cluster communicate using a gossip protocol managed via [HashiCorp's Memberlist](https:\/\/github.com\/hashicorp\/memberlist) library. \n\n### I was told Prometheus \u201cdoesn't scale\u201d.\n\nThis is often more of a marketing claim than anything else.\n\nA single instance of Prometheus can be more performant than some systems positioning themselves as a long-term storage solution for Prometheus.\nYou can run Prometheus reliably with tens of millions of active series.\n\nIf you need more than that, there are several options. [Scaling and Federating Prometheus](https:\/\/www.robustperception.io\/scaling-and-federating-prometheus\/) on the Robust Perception blog is a good starting point, as are the long-term storage systems listed on our [integrations page](https:\/\/prometheus.io\/docs\/operating\/integrations\/#remote-endpoints-and-storage).\n\n### What language is Prometheus written in?\n\nMost Prometheus components are written in Go. Some are also written in Java,\nPython, and Ruby.\n\n### How stable are Prometheus features, storage formats, and APIs?\n\nAll repositories in the Prometheus GitHub organization that have reached\nversion 1.0.0 broadly follow\n[semantic versioning](http:\/\/semver.org\/). Breaking changes are indicated by\nincrements of the major version. Exceptions are possible for experimental\ncomponents, which are clearly marked as such in announcements.\n\nEven repositories that have not yet reached version 1.0.0 are, in general, quite\nstable. We aim for a proper release process and an eventual 1.0.0 release for\neach repository. In any case, breaking changes will be pointed out in release\nnotes (marked by `[CHANGE]`) or communicated clearly for components that do not\nhave formal releases yet.\n\n### Why do you pull rather than push?\n\nPulling over HTTP offers a number of advantages:\n\n* You can start extra monitoring instances as needed, e.g. 
on your laptop when developing changes.\n* You can more easily and reliably tell if a target is down.\n* You can manually go to a target and inspect its health with a web browser.\n\nOverall, we believe that pulling is slightly better than pushing, but it should\nnot be considered a major point when considering a monitoring system.\n\nFor cases where you must push, we offer the [Pushgateway](\/docs\/instrumenting\/pushing\/).\n\n### How to feed logs into Prometheus?\n\nShort answer: Don't! Use something like [Grafana Loki](https:\/\/grafana.com\/oss\/loki\/) or [OpenSearch](https:\/\/opensearch.org\/) instead.\n\nLonger answer: Prometheus is a system to collect and process metrics, not an\nevent logging system. The Grafana blog post\n[Logs and Metrics and Graphs, Oh My!](https:\/\/grafana.com\/blog\/2016\/01\/05\/logs-and-metrics-and-graphs-oh-my\/)\nprovides more details about the differences between logs and metrics.\n\nIf you want to extract Prometheus metrics from application logs, Grafana Loki is designed for just that. See Loki's [metric queries](https:\/\/grafana.com\/docs\/loki\/latest\/logql\/metric_queries\/) documentation.\n\n### Who wrote Prometheus?\n\nPrometheus was initially started privately by\n[Matt T. Proud](http:\/\/www.matttproud.com) and\n[Julius Volz](http:\/\/juliusv.com). 
The majority of its\ninitial development was sponsored by [SoundCloud](https:\/\/soundcloud.com).\n\nIt's now maintained and extended by a wide range of [companies](https:\/\/prometheus.devstats.cncf.io\/d\/5\/companies-table?orgId=1) and [individuals](https:\/\/prometheus.io\/governance).\n\n### What license is Prometheus released under?\n\nPrometheus is released under the\n[Apache 2.0](https:\/\/github.com\/prometheus\/prometheus\/blob\/main\/LICENSE) license.\n\n### What is the plural of Prometheus?\n\nAfter [extensive research](https:\/\/youtu.be\/B_CDeYrqxjQ), it has been determined\nthat the correct plural of 'Prometheus' is 'Prometheis'.\n\nIf you can not remember this, \"Prometheus instances\" is a good workaround.\n\n### Can I reload Prometheus's configuration?\n\nYes, sending `SIGHUP` to the Prometheus process or an HTTP POST request to the\n`\/-\/reload` endpoint will reload and apply the configuration file. The\nvarious components attempt to handle failing changes gracefully.\n\n### Can I send alerts?\n\nYes, with the [Alertmanager](https:\/\/github.com\/prometheus\/alertmanager).\n\nWe support sending alerts through [email, various native integrations](https:\/\/prometheus.io\/docs\/alerting\/latest\/configuration\/), and a [webhook system anyone can add integrations to](https:\/\/prometheus.io\/docs\/operating\/integrations\/#alertmanager-webhook-receiver).\n\n### Can I create dashboards?\n\nYes, we recommend [Grafana](\/docs\/visualization\/grafana\/) for production\nusage. There are also [Console templates](\/docs\/visualization\/consoles\/).\n\n### Can I change the timezone? Why is everything in UTC?\n\nTo avoid any kind of timezone confusion, especially when the so-called\ndaylight saving time is involved, we decided to exclusively use Unix\ntime internally and UTC for display purposes in all components of\nPrometheus. A carefully done timezone selection could be introduced\ninto the UI. Contributions are welcome. 
See\n[issue #500](https:\/\/github.com\/prometheus\/prometheus\/issues\/500)\nfor the current state of this effort.\n\n## Instrumentation\n\n### Which languages have instrumentation libraries?\n\nThere are a number of client libraries for instrumenting your services with\nPrometheus metrics. See the [client libraries](\/docs\/instrumenting\/clientlibs\/)\ndocumentation for details.\n\nIf you are interested in contributing a client library for a new language, see\nthe [exposition formats](\/docs\/instrumenting\/exposition_formats\/).\n\n### Can I monitor machines?\n\nYes, the [Node Exporter](https:\/\/github.com\/prometheus\/node_exporter) exposes\nan extensive set of machine-level metrics on Linux and other Unix systems such\nas CPU usage, memory, disk utilization, filesystem fullness, and network\nbandwidth.\n\n### Can I monitor network devices?\n\nYes, the [SNMP Exporter](https:\/\/github.com\/prometheus\/snmp_exporter) allows\nmonitoring of devices that support SNMP.\nFor industrial networks, there's also a [Modbus exporter](https:\/\/github.com\/RichiH\/modbus_exporter).\n\n### Can I monitor batch jobs?\n\nYes, using the [Pushgateway](\/docs\/instrumenting\/pushing\/). See also the\n[best practices](\/docs\/practices\/instrumentation\/#batch-jobs) for monitoring batch\njobs.\n\n### What applications can Prometheus monitor out of the box?\n\nSee [the list of exporters and integrations](\/docs\/instrumenting\/exporters\/).\n\n### Can I monitor JVM applications via JMX?\n\nYes, for applications that you cannot instrument directly with the Java client, you can use the [JMX Exporter](https:\/\/github.com\/prometheus\/jmx_exporter)\neither standalone or as a Java Agent.\n\n### What is the performance impact of instrumentation?\n\nPerformance across client libraries and languages may vary. 
For Java,\n[benchmarks](https:\/\/github.com\/prometheus\/client_java\/blob\/master\/benchmarks\/README.md)\nindicate that incrementing a counter\/gauge with the Java client will take\n12-17ns, depending on contention. This is negligible for all but the most\nlatency-critical code.\n\n## Implementation\n\n### Why are all sample values 64-bit floats?\n\nWe restrained ourselves to 64-bit floats to simplify the design. The\n[IEEE 754 double-precision binary floating-point\nformat](http:\/\/en.wikipedia.org\/wiki\/Double-precision_floating-point_format)\nsupports integer precision for values up to 2<sup>53<\/sup>. Supporting\nnative 64 bit integers would (only) help if you need integer precision\nabove 2<sup>53<\/sup> but below 2<sup>63<\/sup>. In principle, support\nfor different sample value types (including some kind of big integer,\nsupporting even more than 64 bit) could be implemented, but it is not\na priority right now. A counter, even if incremented one million times per\nsecond, will only run into precision issues after over 285 years.","site":"prometheus"}
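The 285-year figure at the end of the FAQ can be checked directly: an IEEE 754 double represents every integer exactly up to 2<sup>53</sup>, so a counter incremented one million times per second stays exact for 2<sup>53</sup> / 10<sup>6</sup> seconds. A quick sketch in plain Python (no Prometheus code involved):

```python
# Verify the FAQ's claim: a counter incremented 1e6 times per second
# only loses integer precision in a float64 after roughly 285 years.
exact_limit = 2 ** 53            # largest integer a float64 represents exactly
increments_per_second = 1_000_000

seconds_until_imprecise = exact_limit / increments_per_second
years = seconds_until_imprecise / (365.25 * 24 * 3600)
print(round(years))              # prints 285
```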
{"questions":"prometheus sortrank 3 Welcome to Prometheus Prometheus is a monitoring platform that collects metrics from monitored targets by scraping metrics HTTP endpoints on these targets This guide will show you how to install configure and monitor our first resource with Prometheus You ll download install and run Prometheus You ll also download and install an exporter tools that expose time series data on hosts and services Our first exporter will be Prometheus itself which provides a wide variety of host level metrics about memory usage garbage collection and more First steps with Prometheus Downloading Prometheus title First steps","answers":"---\ntitle: First steps\nsort_rank: 3\n---\n\n# First steps with Prometheus\n\nWelcome to Prometheus! Prometheus is a monitoring platform that collects metrics from monitored targets by scraping metrics HTTP endpoints on these targets. This guide will show you how to install, configure and monitor our first resource with Prometheus. You'll download, install and run Prometheus. You'll also download and install an exporter, a tool that exposes time series data on hosts and services. Our first exporter will be Prometheus itself, which provides a wide variety of host-level metrics about memory usage, garbage collection, and more.\n\n## Downloading Prometheus\n\n[Download the latest release](\/download) of Prometheus for your platform, then\nextract it:\n\n```language-bash\ntar xvfz prometheus-*.tar.gz\ncd prometheus-*\n```\n\nThe Prometheus server is a single binary called `prometheus` (or `prometheus.exe` on Microsoft Windows). We can run the binary and see help on its options by passing the `--help` flag.\n\n```language-bash\n.\/prometheus --help\nusage: prometheus [<flags>]\n\nThe Prometheus monitoring server\n\n. . .\n```\n\nBefore starting Prometheus, let's configure it.\n\n## Configuring Prometheus\n\nPrometheus configuration is [YAML](https:\/\/yaml.org\/).
The Prometheus download comes with a sample configuration in a file called `prometheus.yml` that is a good place to get started.\n\nWe've stripped out most of the comments in the example file to make it more succinct (comments are the lines prefixed with a `#`).\n\n```language-yaml\nglobal:\n  scrape_interval:     15s\n  evaluation_interval: 15s\n\nrule_files:\n  # - \"first.rules\"\n  # - \"second.rules\"\n\nscrape_configs:\n  - job_name: prometheus\n    static_configs:\n      - targets: ['localhost:9090']\n```\n\nThere are three blocks of configuration in the example configuration file: `global`, `rule_files`, and `scrape_configs`.\n\nThe `global` block controls the Prometheus server's global configuration. We have two options present. The first, `scrape_interval`, controls how often Prometheus will scrape targets. You can override this for individual targets. In this case the global setting is to scrape every 15 seconds. The `evaluation_interval` option controls how often Prometheus will evaluate rules. Prometheus uses rules to create new time series and to generate alerts.\n\nThe `rule_files` block specifies the location of any rules we want the Prometheus server to load. For now we've got no rules.\n\nThe last block, `scrape_configs`, controls what resources Prometheus monitors. Since Prometheus also exposes data about itself as an HTTP endpoint it can scrape and monitor its own health. In the default configuration there is a single job, called `prometheus`, which scrapes the time series data exposed by the Prometheus server. The job contains a single, statically configured, target, the `localhost` on port `9090`. Prometheus expects metrics to be available on targets on a path of `\/metrics`. 
So this default job is scraping via the URL: http:\/\/localhost:9090\/metrics.\n\nThe time series data returned will detail the state and performance of the Prometheus server.\n\nFor a complete specification of configuration options, see the\n[configuration documentation](\/docs\/operating\/configuration).\n\n## Starting Prometheus\n\nTo start Prometheus with our newly created configuration file, change to the directory containing the Prometheus binary and run:\n\n```language-bash\n.\/prometheus --config.file=prometheus.yml\n```\n\nPrometheus should start up. You should also be able to browse to a status page about itself at http:\/\/localhost:9090. Give it about 30 seconds to collect data about itself from its own HTTP metrics endpoint.\n\nYou can also verify that Prometheus is serving metrics about itself by\nnavigating to its own metrics endpoint: http:\/\/localhost:9090\/metrics.\n\n## Using the expression browser\n\nLet us try looking at some data that Prometheus has collected about itself. To\nuse Prometheus's built-in expression browser, navigate to\nhttp:\/\/localhost:9090\/graph and choose the \"Table\" view within the \"Graph\"\ntab.\n\nAs you can gather from http:\/\/localhost:9090\/metrics, one metric that\nPrometheus exports about itself is called\n`promhttp_metric_handler_requests_total` (the total number of `\/metrics` requests the Prometheus server has served). Go ahead and enter this into the expression console:\n\n```\npromhttp_metric_handler_requests_total\n```\n\nThis should return a number of different time series (along with the latest value recorded for each), all with the metric name `promhttp_metric_handler_requests_total`, but with different labels. 
These labels designate different request statuses.\n\nIf we were only interested in requests that resulted in HTTP code `200`, we could use this query to retrieve that information:\n\n```\npromhttp_metric_handler_requests_total{code=\"200\"}\n```\n\nTo count the number of returned time series, you could write:\n\n```\ncount(promhttp_metric_handler_requests_total)\n```\n\nFor more about the expression language, see the\n[expression language documentation](\/docs\/querying\/basics\/).\n\n## Using the graphing interface\n\nTo graph expressions, navigate to http:\/\/localhost:9090\/graph and use the \"Graph\" tab.\n\nFor example, enter the following expression to graph the per-second HTTP request rate returning status code 200 happening in the self-scraped Prometheus:\n\n```\nrate(promhttp_metric_handler_requests_total{code=\"200\"}[1m])\n```\n\nYou can experiment with the graph range parameters and other settings.\n\n## Monitoring other targets\n\nCollecting metrics from Prometheus alone isn't a great representation of Prometheus' capabilities. To get a better sense of what Prometheus can do, we recommend exploring documentation about other exporters. The [Monitoring Linux or macOS host metrics using a node exporter](\/docs\/guides\/node-exporter) guide is a good place to start.\n\n## Summary\n\nIn this guide, you installed Prometheus, configured a Prometheus instance to monitor resources, and learned some basics of working with time series data in Prometheus' expression browser.
To continue learning about Prometheus, check out the [Overview](\/docs\/introduction\/overview) for some ideas about what to explore next.","site":"prometheus"}
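The `promhttp_metric_handler_requests_total` series queried above reach the expression browser via the plain-text exposition format served on `\/metrics`. As a rough illustration of that format (the sample payload and the `parse_metric` helper below are made up for this sketch; they are not part of Prometheus or its client libraries):

```python
# Illustrative-only payload shaped like Prometheus /metrics output.
sample = """\
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 212
promhttp_metric_handler_requests_total{code="500"} 0
"""

def parse_metric(text, name):
    """Return {label_string: value} for one metric name (hypothetical helper)."""
    out = {}
    for line in text.splitlines():
        # Skip HELP/TYPE comments and series for other metric names.
        if line.startswith("#") or not line.startswith(name):
            continue
        series, value = line.rsplit(" ", 1)
        labels = series[len(name):].strip("{}")
        out[labels] = float(value)
    return out

print(parse_metric(sample, "promhttp_metric_handler_requests_total"))
# prints {'code="200"': 212.0, 'code="500"': 0.0}
```

Real scrapers should use an official client or parser library rather than ad hoc string splitting; this only shows why the expression browser returns one time series per distinct label set.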
{"questions":"prometheus sortrank 1 fields it contains and provides advanced tips about how to operate the log file as of 2 16 0 This guide demonstrates how to use that log file which Using the Prometheus query log Prometheus has the ability to log all the queries run by the engine to a log title Query Log","answers":"---\ntitle: Query Log\nsort_rank: 1\n---\n\n# Using the Prometheus query log\n\nPrometheus has the ability to log all the queries run by the engine to a log\nfile, as of 2.16.0. This guide demonstrates how to use that log file, which\nfields it contains, and provides advanced tips about how to operate the log\nfile.\n\n## Enable the query log\n\nThe query log can be toggled at runtime. It can therefore be activated when you\nwant to investigate slowness or high load on your Prometheus instance.\n\nTo enable or disable the query log, two steps are needed:\n\n1. Adapt the configuration to add or remove the query log configuration.\n1. Reload the Prometheus server configuration.\n\n### Logging all the queries to a file\n\nThis example demonstrates how to log all the queries to\na file called `\/prometheus\/query.log`.
We will assume that `\/prometheus` is the\ndata directory and that Prometheus has write access to it.\n\nFirst, adapt the `prometheus.yml` configuration file:\n\n```yaml\nglobal:\n  scrape_interval:     15s\n  evaluation_interval: 15s\n  query_log_file: \/prometheus\/query.log\nscrape_configs:\n- job_name: 'prometheus'\n  static_configs:\n  - targets: ['localhost:9090']\n```\n\nThen, [reload](\/docs\/prometheus\/latest\/management_api\/#reload) the Prometheus configuration:\n\n\n```shell\n$ curl -X POST http:\/\/127.0.0.1:9090\/-\/reload\n```\n\nOr, if Prometheus is not launched with `--web.enable-lifecycle`, and you're not\nrunning on Windows, you can trigger the reload by sending a SIGHUP to the\nPrometheus process.\n\n\nThe file `\/prometheus\/query.log` should now exist and all the queries\nwill be logged to that file.\n\nTo disable the query log, repeat the operation but remove `query_log_file` from\nthe configuration.\n\n## Verifying if the query log is enabled\n\nPrometheus conveniently exposes metrics that indicate whether the query log is\nenabled and working:\n\n```\n# HELP prometheus_engine_query_log_enabled State of the query log.\n# TYPE prometheus_engine_query_log_enabled gauge\nprometheus_engine_query_log_enabled 0\n# HELP prometheus_engine_query_log_failures_total The number of query log failures.\n# TYPE prometheus_engine_query_log_failures_total counter\nprometheus_engine_query_log_failures_total 0\n```\n\nThe first metric, `prometheus_engine_query_log_enabled`, is set to 1 if the\nquery log is enabled, and 0 otherwise.\nThe second one, `prometheus_engine_query_log_failures_total`, indicates the\nnumber of queries that could not be logged.\n\n## Format of the query log\n\nThe query log is a JSON-formatted log.
Here is an overview of the fields\npresent for a query:\n\n```\n{\n    \"params\": {\n        \"end\": \"2020-02-08T14:59:50.368Z\",\n        \"query\": \"up == 0\",\n        \"start\": \"2020-02-08T13:59:50.368Z\",\n        \"step\": 5\n    },\n    \"stats\": {\n        \"timings\": {\n            \"evalTotalTime\": 0.000447452,\n            \"execQueueTime\": 7.599e-06,\n            \"execTotalTime\": 0.000461232,\n            \"innerEvalTime\": 0.000427033,\n            \"queryPreparationTime\": 1.4177e-05,\n            \"resultSortTime\": 6.48e-07\n        }\n    },\n    \"ts\": \"2020-02-08T14:59:50.387Z\"\n}\n```\n\n- `params`: The query. The start and end timestamp, the step and the actual\n  query statement.\n- `stats`: Statistics. Currently, it contains internal engine timers.\n- `ts`: The timestamp when the query ended.\n\nAdditionally, depending on what triggered the request, you will have additional\nfields in the JSON lines.\n\n### API Queries and consoles\n\nHTTP requests contain the client IP, the method, and the path:\n\n```\n{\n    \"httpRequest\": {\n        \"clientIP\": \"127.0.0.1\",\n        \"method\": \"GET\",\n        \"path\": \"\/api\/v1\/query_range\"\n    }\n}\n```\n\nThe path will contain the web prefix if it is set, and can also point to a\nconsole.\n\nThe client IP is the network IP address and does not take into consideration the\nheaders like `X-Forwarded-For`. If you wish to log the original caller behind a\nproxy, you need to do so in the proxy itself.\n\n### Recording rules and alerts\n\nRecording rules and alerts contain a ruleGroup element which contains the path\nof the file and the name of the group:\n\n```\n{\n    \"ruleGroup\": {\n        \"file\": \"rules.yml\",\n        \"name\": \"partners\"\n    }\n}\n```\n\n\n## Rotating the query log\n\nPrometheus will not rotate the query log itself. Instead, you can use external\ntools to do so.\n\nOne of those tools is logrotate. 
It is enabled by default on most Linux\ndistributions.\n\nHere is an example of a file you can add as\n`\/etc\/logrotate.d\/prometheus`:\n\n```\n\/prometheus\/query.log {\n    daily\n    rotate 7\n    compress\n    delaycompress\n    postrotate\n        killall -HUP prometheus\n    endscript\n}\n```\n\nThat will rotate your file daily and keep one week of history.","site":"prometheus","answers_cleaned":"    title  Query Log sort rank  1        Using the Prometheus query log  Prometheus has the ability to log all the queries run by the engine to a log file  as of 2 16 0  This guide demonstrates how to use that log file  which fields it contains  and provides advanced tips about how to operate the log file      Enable the query log  The query log can be toggled at runtime  It can therefore be activated when you want to investigate slownesses or high load on your Prometheus instance   To enable or disable the query log  two steps are needed   1  Adapt the configuration to add or remove the query log configuration  1  Reload the Prometheus server configuration       Logging all the queries to a file  This example demonstrates how to log all the queries to a file called   prometheus query log   We will assume that   prometheus  is the data directory and that Prometheus has write access to it   First  adapt the  prometheus yml  configuration file      yaml global    scrape interval      15s   evaluation interval  15s   query log file   prometheus query log scrape configs    job name   prometheus    static configs      targets    localhost 9090        Then   reload   docs prometheus latest management api  reload  the Prometheus configuration       shell   curl  X POST http   127 0 0 1 9090   reload      Or  if Prometheus is not launched with    web enable lifecycle   and you re not running on Windows  you can trigger the reload by sending a SIGHUP to the Prometheus process    The file   prometheus query log  should now exist and all the queries will be logged to that file   
To disable the query log  repeat the operation but remove  query log file  from the configuration      Verifying if the query log is enabled  Prometheus conveniently exposes metrics that indicates if the query log is enabled and working         HELP prometheus engine query log enabled State of the query log    TYPE prometheus engine query log enabled gauge prometheus engine query log enabled 0   HELP prometheus engine query log failures total The number of query log failures    TYPE prometheus engine query log failures total counter prometheus engine query log failures total 0      The first metric   prometheus engine query log enabled  is set to 1 of the query log is enabled  and 0 otherwise  The second one   prometheus engine query log failures total   indicates the number of queries that could not be logged      Format of the query log  The query log is a JSON formatted log  Here is an overview of the fields present for a query              params              end    2020 02 08T14 59 50 368Z            query    up    0            start    2020 02 08T13 59 50 368Z            step   5             stats              timings                  evalTotalTime   0 000447452               execQueueTime   7 599e 06               execTotalTime   0 000461232               innerEvalTime   0 000427033               queryPreparationTime   1 4177e 05               resultSortTime   6 48e 07                       ts    2020 02 08T14 59 50 387Z            params   The query  The start and end timestamp  the step and the actual   query statement     stats   Statistics  Currently  it contains internal engine timers     ts   The timestamp when the query ended   Additionally  depending on what triggered the request  you will have additional fields in the JSON lines       API Queries and consoles  HTTP requests contain the client IP  the method  and the path              httpRequest              clientIP    127 0 0 1            method    GET            path     api v1 query range        
       The path will contain the web prefix if it is set  and can also point to a console   The client IP is the network IP address and does not take into consideration the headers like  X Forwarded For   If you wish to log the original caller behind a proxy  you need to do so in the proxy itself       Recording rules and alerts  Recording rules and alerts contain a ruleGroup element which contains the path of the file and the name of the group              ruleGroup              file    rules yml            name    partners                   Rotating the query log  Prometheus will not rotate the query log itself  Instead  you can use external tools to do so   One of those tools is logrotate  It is enabled by default on most Linux distributions   Here is an example of file you can add as   etc logrotate d prometheus         prometheus query log       daily     rotate 7     compress     delaycompress     postrotate         killall  HUP prometheus     endscript        That will rotate your file daily and keep one week of history "}
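Since every query log entry is a single JSON object on its own line, the file is easy to post-process with any JSON-aware tool. A minimal sketch in Python (the entry below is an illustrative example with the `params`, `stats`, and `ts` fields described in the guide; the values are made up for the example, not taken from a real log):

```python
import json

# One illustrative line in the query log format described above:
# params (query statement, start/end, step), stats (engine timers), ts (query end time).
line = ('{"params":{"end":"2020-02-08T14:59:50.368Z","query":"up",'
        '"start":"2020-02-08T13:59:50.368Z","step":5},'
        '"stats":{"timings":{"evalTotalTime":0.000447452}},'
        '"ts":"2020-02-08T14:59:50.387Z"}')

entry = json.loads(line)
# Report when the query ended, how long evaluation took, and the statement itself --
# a quick way to hunt for slow queries in the rotated log files.
print(entry["ts"], entry["stats"]["timings"]["evalTotalTime"], entry["params"]["query"])
```

The same loop over the whole file (one `json.loads` per line, sorted by `evalTotalTime`) gives a simple slow-query report.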
{"questions":"prometheus Securing Prometheus API and UI endpoints using basic auth NOTE This tutorial covers basic auth connections to Prometheus instances Basic auth is also supported for connections from Prometheus instances to Prometheus supports aka basic auth for connections to the Prometheus and title Basic auth","answers":"---\ntitle: Basic auth\n---\n\n# Securing Prometheus API and UI endpoints using basic auth\n\nPrometheus supports [basic authentication](https:\/\/en.wikipedia.org\/wiki\/Basic_access_authentication) (aka \"basic auth\") for connections to the Prometheus [expression browser](\/docs\/visualization\/browser) and [HTTP API](\/docs\/prometheus\/latest\/querying\/api).\n\nNOTE: This tutorial covers basic auth connections *to* Prometheus instances. Basic auth is also supported for connections *from* Prometheus instances to [scrape targets](..\/..\/prometheus\/latest\/configuration\/configuration\/#scrape_config).\n\n## Hashing a password\n\nLet's say that you want to require a username and password from all users accessing the Prometheus instance. For this example, use `admin` as the username and choose any password you'd like.\n\nFirst, generate a [bcrypt](https:\/\/en.wikipedia.org\/wiki\/Bcrypt) hash of the password.\nTo generate a hashed password, we will use python3-bcrypt.\n\nLet's install it by running `apt install python3-bcrypt`, assuming you are\nrunning a Debian-like distribution. 
Other alternatives exist to generate hashed\npasswords; for testing you can also use [bcrypt generators on the\nweb](https:\/\/bcrypt-generator.com\/).\n\nHere is a Python script which uses python3-bcrypt to prompt for a password and\nhash it:\n\n```python\nimport getpass\nimport bcrypt\n\npassword = getpass.getpass(\"password: \")\nhashed_password = bcrypt.hashpw(password.encode(\"utf-8\"), bcrypt.gensalt())\nprint(hashed_password.decode())\n```\n\nSave that script as `gen-pass.py` and run it:\n\n```shell\n$ python3 gen-pass.py\n```\n\nThat should prompt you for a password:\n\n```\npassword:\n$2b$12$hNf2lSsxfm0.i4a.1kVpSOVyBCfIB51VRjgBUyv6kdnyTlgWj81Ay\n```\n\nIn this example, I used \"test\" as the password.\n\nSave that hashed password somewhere; we will use it in the next steps!\n\n\n## Creating web.yml\n\nLet's create a web.yml file\n([documentation](https:\/\/prometheus.io\/docs\/prometheus\/latest\/configuration\/https\/)),\nwith the following content:\n\n```yaml\nbasic_auth_users:\n    admin: $2b$12$hNf2lSsxfm0.i4a.1kVpSOVyBCfIB51VRjgBUyv6kdnyTlgWj81Ay\n```\n\nYou can validate that file with `promtool check web-config web.yml`\n\n```shell\n$ promtool check web-config web.yml\nweb.yml SUCCESS\n```\n\nYou can add multiple users to the file.\n\n## Launching Prometheus\n\nYou can launch Prometheus with the web configuration file as follows:\n\n```shell\n$ prometheus --web.config.file=web.yml\n```\n\n## Testing\n\nYou can use cURL to interact with your setup. 
Try this request:\n\n```bash\ncurl --head http:\/\/localhost:9090\/graph\n```\n\nThis will return a `401 Unauthorized` response because you've failed to supply a valid username and password.\n\nTo successfully access Prometheus endpoints using basic auth, for example the `\/metrics` endpoint, supply the proper username using the `-u` flag and supply the password when prompted:\n\n```bash\ncurl -u admin http:\/\/localhost:9090\/metrics\nEnter host password for user 'admin':\n```\n\nThat should return Prometheus metrics output, which should look something like this:\n\n```\n# HELP go_gc_duration_seconds A summary of the GC invocation durations.\n# TYPE go_gc_duration_seconds summary\ngo_gc_duration_seconds{quantile=\"0\"} 0.0001343\ngo_gc_duration_seconds{quantile=\"0.25\"} 0.0002032\ngo_gc_duration_seconds{quantile=\"0.5\"} 0.0004485\n...\n```\n\n## Summary\n\nIn this guide, you stored a username and a hashed password in a `web.yml` file, then launched Prometheus with the flag required to use the credentials in that file to authenticate users accessing Prometheus' HTTP endpoints.","site":"prometheus"}
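For clients other than curl, the same credentials travel as an `Authorization` request header. A small sketch of what `curl -u admin` puts on the wire, using the example username and password from this guide:

```python
import base64

# HTTP basic auth: base64-encode "user:password" and prefix it with "Basic ".
# "admin"/"test" are the example credentials used in the guide above.
username, password = "admin", "test"
token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
print(f"Authorization: Basic {token}")  # Authorization: Basic YWRtaW46dGVzdA==
```

Note that the server never receives the bcrypt hash: the client sends the plain password, which Prometheus checks against the hash stored in `web.yml`. Base64 is encoding, not encryption, which is why basic auth should be combined with TLS.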
{"questions":"prometheus This guide will introduce you to the multi target exporter pattern To achieve this we will run the exporter as an example of the pattern title Understanding and using the multi target exporter pattern Understanding and using the multi target exporter pattern describe the multi target exporter pattern and why it is used","answers":"---\ntitle: Understanding and using the multi-target exporter pattern\n---\n\n# Understanding and using the multi-target exporter pattern\n\nThis guide will introduce you to the multi-target exporter pattern. To achieve this we will:\n\n* describe the multi-target exporter pattern and why it is used,\n* run the [blackbox](https:\/\/github.com\/prometheus\/blackbox_exporter) exporter as an example of the pattern,\n* configure a custom query module for the blackbox exporter,\n* let the blackbox exporter run basic metric queries against the Prometheus [website](https:\/\/prometheus.io),\n* examine a popular pattern of configuring Prometheus to scrape exporters using relabeling.\n\n## The multi-target exporter pattern?\n\nBy multi-target [exporter](\/docs\/instrumenting\/exporters\/) pattern we refer to a specific design, in which:\n\n* the exporter will get the target\u2019s metrics via a network protocol.\n* the exporter does not have to run on the machine the metrics are taken from.\n* the exporter gets the targets and a query config string as parameters of Prometheus\u2019 GET request.\n* the exporter subsequently starts the scrape after getting Prometheus\u2019 GET requests and once it is done with scraping.\n* the exporter can query multiple targets.\n\nThis pattern is only used for certain exporters, such as the [blackbox](https:\/\/github.com\/prometheus\/blackbox_exporter) and the [SNMP exporter](https:\/\/github.com\/prometheus\/snmp_exporter).\n\nThe reason is that we either can\u2019t run an exporter on the targets, e.g. network gear speaking SNMP, or that we are explicitly interested in the distance, e.g. 
latency and reachability of a website from a specific point outside of our network, a common use case for the [blackbox](https:\/\/github.com\/prometheus\/blackbox_exporter) exporter.\n\n## Running multi-target exporters\n\nMulti-target exporters are flexible regarding their environment and can be run in many ways: as regular programs, in containers, as background services, on bare metal, or on virtual machines. Because they are queried, and themselves query, over the network, they need appropriate open ports. Otherwise they are frugal.\n\nNow try it out for yourself!\n\nUse [Docker](https:\/\/www.docker.com\/) to start a blackbox exporter container by running this in a terminal. Depending on your system configuration you might need to prepend the command with a `sudo`:\n\n```bash\ndocker run -p 9115:9115 prom\/blackbox-exporter\n```\n\nYou should see a few log lines and if everything went well the last one should report `msg=\"Listening on address\"` as seen here:\n\n```\nlevel=info ts=2018-10-17T15:41:35.4997596Z caller=main.go:324 msg=\"Listening on address\" address=:9115\n```\n\n## Basic querying of multi-target exporters\n\nThere are two ways of querying:\n\n1. Querying the exporter itself. It has its own metrics, usually available at `\/metrics`.\n1. Querying the exporter to scrape another target. Usually available at a \"descriptive\" endpoint, e.g. `\/probe`. 
This is likely what you are primarily interested in, when using multi-target exporters.\n\nYou can manually try the first query type with curl in another terminal or use this [link](http:\/\/localhost:9115\/metrics):\n\n<a name=\"query-exporter\"><\/a>\n\n```bash\ncurl 'localhost:9115\/metrics'\n```\n\nThe response should be something like this:\n\n```\n# HELP blackbox_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which blackbox_exporter was built.\n# TYPE blackbox_exporter_build_info gauge\nblackbox_exporter_build_info{branch=\"HEAD\",goversion=\"go1.10\",revision=\"4a22506cf0cf139d9b2f9cde099f0012d9fcabde\",version=\"0.12.0\"} 1\n# HELP go_gc_duration_seconds A summary of the GC invocation durations.\n# TYPE go_gc_duration_seconds summary\ngo_gc_duration_seconds{quantile=\"0\"} 0\ngo_gc_duration_seconds{quantile=\"0.25\"} 0\ngo_gc_duration_seconds{quantile=\"0.5\"} 0\ngo_gc_duration_seconds{quantile=\"0.75\"} 0\ngo_gc_duration_seconds{quantile=\"1\"} 0\ngo_gc_duration_seconds_sum 0\ngo_gc_duration_seconds_count 0\n# HELP go_goroutines Number of goroutines that currently exist.\n# TYPE go_goroutines gauge\ngo_goroutines 9\n\n[\u2026]\n\n# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.\n# TYPE process_cpu_seconds_total counter\nprocess_cpu_seconds_total 0.05\n# HELP process_max_fds Maximum number of open file descriptors.\n# TYPE process_max_fds gauge\nprocess_max_fds 1.048576e+06\n# HELP process_open_fds Number of open file descriptors.\n# TYPE process_open_fds gauge\nprocess_open_fds 7\n# HELP process_resident_memory_bytes Resident memory size in bytes.\n# TYPE process_resident_memory_bytes gauge\nprocess_resident_memory_bytes 7.8848e+06\n# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.\n# TYPE process_start_time_seconds gauge\nprocess_start_time_seconds 1.54115492874e+09\n# HELP process_virtual_memory_bytes Virtual memory 
size in bytes.\n# TYPE process_virtual_memory_bytes gauge\nprocess_virtual_memory_bytes 1.5609856e+07\n```\n\nThose are metrics in the Prometheus [format](\/docs\/instrumenting\/exposition_formats\/#text-format-example). They come from the exporter\u2019s [instrumentation](\/docs\/practices\/instrumentation\/) and tell us about the state of the exporter itself while it is running. This is called whitebox monitoring and is very useful in daily ops practice. If you are curious, try out our guide on how to [instrument your own applications](https:\/\/prometheus.io\/docs\/guides\/go-application\/).\n\nFor the second type of querying we need to provide a target and module as parameters in the HTTP GET request. The target is a URI or IP and the module must be defined in the exporter\u2019s configuration. The blackbox exporter container comes with a meaningful default configuration.  \nWe will use the target `prometheus.io` and the predefined module `http_2xx`. It tells the exporter to make a GET request like a browser would if you go to `prometheus.io` and to expect a [200 OK](https:\/\/en.wikipedia.org\/wiki\/List_of_HTTP_status_codes#2xx_Success) response.\n\nYou can now tell your blackbox exporter to query `prometheus.io` in the terminal with curl:\n\n```bash\ncurl 'localhost:9115\/probe?target=prometheus.io&module=http_2xx'\n```\n\nThis will return a lot of metrics:\n\n```\n# HELP probe_dns_lookup_time_seconds Returns the time taken for probe dns lookup in seconds\n# TYPE probe_dns_lookup_time_seconds gauge\nprobe_dns_lookup_time_seconds 0.061087943\n# HELP probe_duration_seconds Returns how long the probe took to complete in seconds\n# TYPE probe_duration_seconds gauge\nprobe_duration_seconds 0.065580871\n# HELP probe_failed_due_to_regex Indicates if probe failed due to regex\n# TYPE probe_failed_due_to_regex gauge\nprobe_failed_due_to_regex 0\n# HELP probe_http_content_length Length of http content response\n# TYPE probe_http_content_length 
gauge\nprobe_http_content_length 0\n# HELP probe_http_duration_seconds Duration of http request by phase, summed over all redirects\n# TYPE probe_http_duration_seconds gauge\nprobe_http_duration_seconds{phase=\"connect\"} 0\nprobe_http_duration_seconds{phase=\"processing\"} 0\nprobe_http_duration_seconds{phase=\"resolve\"} 0.061087943\nprobe_http_duration_seconds{phase=\"tls\"} 0\nprobe_http_duration_seconds{phase=\"transfer\"} 0\n# HELP probe_http_redirects The number of redirects\n# TYPE probe_http_redirects gauge\nprobe_http_redirects 0\n# HELP probe_http_ssl Indicates if SSL was used for the final redirect\n# TYPE probe_http_ssl gauge\nprobe_http_ssl 0\n# HELP probe_http_status_code Response HTTP status code\n# TYPE probe_http_status_code gauge\nprobe_http_status_code 0\n# HELP probe_http_version Returns the version of HTTP of the probe response\n# TYPE probe_http_version gauge\nprobe_http_version 0\n# HELP probe_ip_protocol Specifies whether probe ip protocol is IP4 or IP6\n# TYPE probe_ip_protocol gauge\nprobe_ip_protocol 6\n# HELP probe_success Displays whether or not the probe was a success\n# TYPE probe_success gauge\nprobe_success 0\n```\n\nNotice that almost all metrics have a value of `0`. The last one reads `probe_success 0`. This means the prober could not successfully reach `prometheus.io`. The reason is hidden in the metric `probe_ip_protocol` with the value `6`. By default the prober uses [IPv6](https:\/\/en.wikipedia.org\/wiki\/IPv6) until told otherwise. But the Docker daemon blocks IPv6 until told otherwise. Hence our blackbox exporter running in a Docker container can\u2019t connect via IPv6.\n\nWe could now either tell Docker to allow IPv6 or the blackbox exporter to use IPv4. In the real world both can make sense and as so often the answer to the question \"what is to be done?\" is \"it depends\". 
Because this is an exporter guide we will change the exporter and take the opportunity to configure a custom module.\n\n## Configuring modules\n\nThe modules are predefined in a file inside the docker container called `config.yml` which is a copy of [blackbox.yml](https:\/\/github.com\/prometheus\/blackbox_exporter\/blob\/master\/blackbox.yml) in the github repo.\n\nWe will copy this file, [adapt](https:\/\/github.com\/prometheus\/blackbox_exporter\/blob\/master\/CONFIGURATION.md) it to our own needs and tell the exporter to use our config file instead of the one included in the container.  \n\nFirst download the file using curl or your browser:\n\n```bash\ncurl -o blackbox.yml https:\/\/raw.githubusercontent.com\/prometheus\/blackbox_exporter\/master\/blackbox.yml\n```\n\nOpen it in an editor. The first few lines look like this:\n\n```yaml\nmodules:\n  http_2xx:\n    prober: http\n  http_post_2xx:\n    prober: http\n    http:\n      method: POST\n```\n\n[YAML](https:\/\/en.wikipedia.org\/wiki\/YAML) uses whitespace indentation to express hierarchy, so you can recognise that two `modules` named `http_2xx` and `http_post_2xx` are defined, and that they both have a prober `http` and for one the method value is specifically set to `POST`.  \nYou will now change the module `http_2xx` by setting the `preferred_ip_protocol` of the prober `http` explicitly to the string `ip4`.\n\n```yaml\nmodules:\n  http_2xx:\n    prober: http\n    http:\n      preferred_ip_protocol: \"ip4\"\n  http_post_2xx:\n    prober: http\n    http:\n      method: POST\n```\n\nIf you want to know more about the available probers and options check out the [documentation](https:\/\/github.com\/prometheus\/blackbox_exporter\/blob\/master\/CONFIGURATION.md).\n\nNow we need to tell the blackbox exporter to use our freshly changed file. You can do that with the flag `--config.file=\"blackbox.yml\"`. 
But because we are using Docker, we first must make this file [available](https:\/\/docs.docker.com\/storage\/bind-mounts\/) inside the container using the `--mount` command.  \n\nNOTE: If you are using macOS you first need to allow the Docker daemon to access the directory in which your `blackbox.yml` is. You can do that by clicking on the little Docker whale in menu bar and then on `Preferences`->`File Sharing`->`+`. Afterwards press `Apply & Restart`.\n\nFirst you stop the old container by changing into its terminal and press `ctrl+c`.\nMake sure you are in the directory containing your `blackbox.yml`.\nThen you run this command. It is long, but we will explain it:\n\n<a name=\"run-exporter\"><\/a>\n\n```bash\ndocker \\\n  run -p 9115:9115 \\\n  --mount type=bind,source=\"$(pwd)\"\/blackbox.yml,target=\/blackbox.yml,readonly \\\n  prom\/blackbox-exporter \\\n  --config.file=\"\/blackbox.yml\"\n```\n\nWith this command, you told `docker` to:\n\n1. `run` a container with the port `9115` outside the container mapped to the port `9115` inside of the container.\n1. `mount` from your current directory (`$(pwd)` stands for print working directory) the file `blackbox.yml` into `\/blackbox.yml` in `readonly` mode.\n1. use the image `prom\/blackbox-exporter` from [Docker hub](https:\/\/hub.docker.com\/r\/prom\/blackbox-exporter\/).\n1. 
run the blackbox-exporter with the flag `--config.file` telling it to use `\/blackbox.yml` as config file.\n\nIf everything is correct, you should see something like this:\n\n```\nlevel=info ts=2018-10-19T12:40:51.650462756Z caller=main.go:213 msg=\"Starting blackbox_exporter\" version=\"(version=0.12.0, branch=HEAD, revision=4a22506cf0cf139d9b2f9cde099f0012d9fcabde)\"\nlevel=info ts=2018-10-19T12:40:51.653357722Z caller=main.go:220 msg=\"Loaded config file\"\nlevel=info ts=2018-10-19T12:40:51.65349635Z caller=main.go:324 msg=\"Listening on address\" address=:9115\n```\n\nNow you can try our new IPv4-using module `http_2xx` in a terminal:\n\n```bash\ncurl 'localhost:9115\/probe?target=prometheus.io&module=http_2xx'\n```\n\nWhich should return Prometheus metrics like this:\n\n```\n# HELP probe_dns_lookup_time_seconds Returns the time taken for probe dns lookup in seconds\n# TYPE probe_dns_lookup_time_seconds gauge\nprobe_dns_lookup_time_seconds 0.02679421\n# HELP probe_duration_seconds Returns how long the probe took to complete in seconds\n# TYPE probe_duration_seconds gauge\nprobe_duration_seconds 0.461619124\n# HELP probe_failed_due_to_regex Indicates if probe failed due to regex\n# TYPE probe_failed_due_to_regex gauge\nprobe_failed_due_to_regex 0\n# HELP probe_http_content_length Length of http content response\n# TYPE probe_http_content_length gauge\nprobe_http_content_length -1\n# HELP probe_http_duration_seconds Duration of http request by phase, summed over all redirects\n# TYPE probe_http_duration_seconds gauge\nprobe_http_duration_seconds{phase=\"connect\"} 0.062076202999999996\nprobe_http_duration_seconds{phase=\"processing\"} 0.23481845699999998\nprobe_http_duration_seconds{phase=\"resolve\"} 0.029594103\nprobe_http_duration_seconds{phase=\"tls\"} 0.163420078\nprobe_http_duration_seconds{phase=\"transfer\"} 0.002243199\n# HELP probe_http_redirects The number of redirects\n# TYPE probe_http_redirects gauge\nprobe_http_redirects 1\n# HELP probe_http_ssl 
Indicates if SSL was used for the final redirect\n# TYPE probe_http_ssl gauge\nprobe_http_ssl 1\n# HELP probe_http_status_code Response HTTP status code\n# TYPE probe_http_status_code gauge\nprobe_http_status_code 200\n# HELP probe_http_uncompressed_body_length Length of uncompressed response body\n# TYPE probe_http_uncompressed_body_length gauge\nprobe_http_uncompressed_body_length 14516\n# HELP probe_http_version Returns the version of HTTP of the probe response\n# TYPE probe_http_version gauge\nprobe_http_version 1.1\n# HELP probe_ip_protocol Specifies whether probe ip protocol is IP4 or IP6\n# TYPE probe_ip_protocol gauge\nprobe_ip_protocol 4\n# HELP probe_ssl_earliest_cert_expiry Returns earliest SSL cert expiry in unixtime\n# TYPE probe_ssl_earliest_cert_expiry gauge\nprobe_ssl_earliest_cert_expiry 1.581897599e+09\n# HELP probe_success Displays whether or not the probe was a success\n# TYPE probe_success gauge\nprobe_success 1\n# HELP probe_tls_version_info Contains the TLS version used\n# TYPE probe_tls_version_info gauge\nprobe_tls_version_info{version=\"TLS 1.3\"} 1\n```\n\nYou can see that the probe was successful and get many useful metrics, like latency by phase, status code, ssl status or certificate expiry in [Unix time](https:\/\/en.wikipedia.org\/wiki\/Unix_time).  \nThe blackbox exporter also offers a tiny web interface at [localhost:9115](http:\/\/localhost:9115) for you to check out the last few probes, the loaded config and debug information. It even offers a direct link to probe `prometheus.io`. Handy if you are wondering why something does not work.\n\n## Querying multi-target exporters with Prometheus\n\nSo far, so good. Congratulate yourself. The blackbox exporter works and you can manually tell it to query a remote target. You are almost there. Now you need to tell Prometheus to do the queries for us.  \n\nBelow you find a minimal prometheus config. 
It is telling Prometheus to scrape the exporter itself as we did [before](#query-exporter) using `curl 'localhost:9115\/metrics'`:\n\nNOTE: If you use Docker for Mac or Docker for Windows, you can\u2019t use `localhost:9115` in the last line, but must use `host.docker.internal:9115`. This has to do with the virtual machines used to implement Docker on those operating systems. You should not use this in production.\n\n`prometheus.yml` for Linux:\n\n```yaml\nglobal:\n  scrape_interval: 5s\n\nscrape_configs:\n- job_name: blackbox # To get metrics about the exporter itself\n  metrics_path: \/metrics\n  static_configs:\n    - targets:\n      - localhost:9115\n```\n\n`prometheus.yml` for macOS and Windows:\n\n```yaml\nglobal:\n  scrape_interval: 5s\n\nscrape_configs:\n- job_name: blackbox # To get metrics about the exporter itself\n  metrics_path: \/metrics\n  static_configs:\n    - targets:\n      - host.docker.internal:9115\n```\n\nNow run a Prometheus container and tell it to mount our config file from above. 
Because of the way networking on the host is addressable from the container you need to use a slightly different command on Linux than on macOS and Windows:\n\n<a name=run-prometheus><\/a>\n\nRun Prometheus on Linux (don\u2019t use `--network=\"host\"` in production):\n\n```bash\ndocker \\\n  run --network=\"host\" \\\n  --mount type=bind,source=\"$(pwd)\"\/prometheus.yml,target=\/prometheus.yml,readonly \\\n  prom\/prometheus \\\n  --config.file=\"\/prometheus.yml\"\n```\n\nRun Prometheus on macOS and Windows:\n\n```bash\ndocker \\\n  run -p 9090:9090 \\\n  --mount type=bind,source=\"$(pwd)\"\/prometheus.yml,target=\/prometheus.yml,readonly \\\n  prom\/prometheus \\\n  --config.file=\"\/prometheus.yml\"\n```\n\nThis command works similarly to [running the blackbox exporter using a config file](#run-exporter).\n\nIf everything worked, you should be able to go to [localhost:9090\/targets](http:\/\/localhost:9090\/targets) and see under `blackbox` an endpoint with the state `UP` in green. If you get a red `DOWN` make sure that the blackbox exporter you started [above](#run-exporter) is still running. 
If you see nothing or a yellow `UNKNOWN` you are really fast and need to give it a few more seconds before reloading your browser\u2019s tab.\n\nTo tell Prometheus to query `\"localhost:9115\/probe?target=prometheus.io&module=http_2xx\"` you add another scrape job `blackbox-http` where you set the `metrics_path` to `\/probe` and the parameters under `params:` in the Prometheus config file `prometheus.yml`:\n\n<a name=\"prometheus-config\"><\/a>\n\n```yaml\nglobal:\n  scrape_interval: 5s\n\nscrape_configs:\n- job_name: blackbox # To get metrics about the exporter itself\n  metrics_path: \/metrics\n  static_configs:\n    - targets:\n      - localhost:9115   # For Windows and macOS replace with - host.docker.internal:9115\n\n- job_name: blackbox-http # To get metrics about the exporter\u2019s targets\n  metrics_path: \/probe\n  params:\n    module: [http_2xx]\n    target: [prometheus.io]\n  static_configs:\n    - targets:\n      - localhost:9115   # For Windows and macOS replace with - host.docker.internal:9115\n```\n\nAfter saving the config file switch to the terminal with your Prometheus docker container and stop it by pressing `ctrl+C` and start it again to reload the configuration by using the existing [command](#run-prometheus).\n\nThe terminal should return the message `\"Server is ready to receive web requests.\"` and after a few seconds you should start to see colourful graphs in [your Prometheus](http:\/\/localhost:9090\/graph?g0.range_input=5m&g0.stacked=0&g0.expr=probe_http_duration_seconds&g0.tab=0).\n\nThis works, but it has a few disadvantages:\n\n1. The actual targets are up in the param config, which is very unusual and hard to understand later.\n1. The `instance` label has the value of the blackbox exporter\u2019s address which is technically true, but not what we are interested in.\n1. We can\u2019t see which URL we probed. 
This is impractical and will also mix up different metrics into one if we probe several URLs.\n\nTo fix this, we will use [relabeling](\/docs\/prometheus\/latest\/configuration\/configuration\/#relabel_config).\nRelabeling is useful here because behind the scenes many things in Prometheus are configured with internal labels.\nThe details are complicated and out of scope for this guide. Hence we will limit ourselves to the necessary. But if you want to know more check out this [talk](https:\/\/www.youtube.com\/watch?v=b5-SvvZ7AwI). For now it suffices if you understand this:\n\n* All labels starting with `__` are dropped after the scrape. Most internal labels start with `__`.\n* You can set internal labels that are called `__param_<name>`. Those set a URL parameter with the key `<name>` for the scrape request.\n* There is an internal label `__address__` which is set by the `targets` under `static_configs` and whose value is the hostname for the scrape request. By default it is later used to set the value for the label `instance`, which is attached to each metric and tells you where the metrics came from.\n\nHere is the config you will use to do that. 
Don\u2019t worry if this is a bit much at once; we will go through it step by step:\n\n```yaml\nglobal:\n  scrape_interval: 5s\n\nscrape_configs:\n- job_name: blackbox # To get metrics about the exporter itself\n  metrics_path: \/metrics\n  static_configs:\n    - targets:\n      - localhost:9115   # For Windows and macOS replace with - host.docker.internal:9115\n\n- job_name: blackbox-http # To get metrics about the exporter\u2019s targets\n  metrics_path: \/probe\n  params:\n    module: [http_2xx]\n  static_configs:\n    - targets:\n      - http:\/\/prometheus.io    # Target to probe with http\n      - https:\/\/prometheus.io   # Target to probe with https\n      - http:\/\/example.com:8080 # Target to probe with http on port 8080\n  relabel_configs:\n    - source_labels: [__address__]\n      target_label: __param_target\n    - source_labels: [__param_target]\n      target_label: instance\n    - target_label: __address__\n      replacement: localhost:9115  # The blackbox exporter\u2019s real hostname:port. For Windows and macOS replace with - host.docker.internal:9115\n```\n\nSo what is new compared to the [last config](#prometheus-config)?\n\n`params` does not include `target` anymore. Instead we add the actual targets under `static_configs:` `targets`. We also list several targets, because we can do that now:\n\n```yaml\n  params:\n    module: [http_2xx]\n  static_configs:\n    - targets:\n      - http:\/\/prometheus.io    # Target to probe with http\n      - https:\/\/prometheus.io   # Target to probe with https\n      - http:\/\/example.com:8080 # Target to probe with http on port 8080\n```\n\n`relabel_configs` contains the new relabeling rules:\n\n```yaml\n  relabel_configs:\n    - source_labels: [__address__]\n      target_label: __param_target\n    - source_labels: [__param_target]\n      target_label: instance\n    - target_label: __address__\n      replacement: localhost:9115  # The blackbox exporter\u2019s real hostname:port. 
For Windows and macOS replace with - host.docker.internal:9115\n```\n\nBefore applying the relabeling rules, the URI of a request Prometheus would make looks like this:\n`\"http:\/\/prometheus.io\/probe?module=http_2xx\"`. After relabeling it will look like this: `\"http:\/\/localhost:9115\/probe?target=http:\/\/prometheus.io&module=http_2xx\"`.\n\nNow let us explore how each rule does that:\n\nFirst we take the values from the label `__address__` (which contain the values from `targets`) and write them to a new label `__param_target`, which will add a parameter `target` to the Prometheus scrape requests:\n\n```yaml\n  relabel_configs:\n    - source_labels: [__address__]\n      target_label: __param_target\n```\n\nAfter this, our imagined Prometheus request URI now has a target parameter: `\"http:\/\/prometheus.io\/probe?target=http:\/\/prometheus.io&module=http_2xx\"`.\n\nThen we take the values from the label `__param_target` and create a label `instance` with those values:\n\n```yaml\n  relabel_configs:\n    - source_labels: [__param_target]\n      target_label: instance\n```\n\nOur request will not change, but the metrics that come back from our request will now bear a label `instance=\"http:\/\/prometheus.io\"`.\n\nAfter that we write the value `localhost:9115` (the address of our exporter) to the label `__address__`. This will be used as the hostname and port for the Prometheus scrape requests, so that Prometheus queries the exporter and not the target URI directly.\n\n```yaml\n  relabel_configs:\n    - target_label: __address__\n      replacement: localhost:9115  # The blackbox exporter\u2019s real hostname:port. For Windows and macOS replace with - host.docker.internal:9115\n```\n\nOur request is now `\"localhost:9115\/probe?target=http:\/\/prometheus.io&module=http_2xx\"`. 
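\n\nTo make the label flow concrete, here is a tiny Python sketch (purely illustrative, not Prometheus source code) that applies the three rules above to one target:\n\n```python
# Illustrative sketch of the three relabel rules from the config above.
# Prometheus applies these internally; this only shows the label flow.

def relabel(labels):
    labels = dict(labels)  # work on a copy
    # Rule 1: copy __address__ into __param_target, which becomes
    # the ?target=... URL parameter of the scrape request
    labels["__param_target"] = labels["__address__"]
    # Rule 2: copy __param_target into instance, the label that
    # stays attached to the resulting metrics
    labels["instance"] = labels["__param_target"]
    # Rule 3: overwrite __address__ with the exporter's real
    # hostname:port, so the scrape goes to the exporter itself
    labels["__address__"] = "localhost:9115"
    return labels

print(relabel({"__address__": "http://prometheus.io"}))
```
\nThe returned mapping shows `instance` and `__param_target` carrying the probed URL, while `__address__` points at the exporter.\n\n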
This way we can list the actual targets in the config and get them as `instance` label values, while letting Prometheus make its requests against the blackbox exporter.\n\nOften people combine these relabeling rules with a service discovery mechanism. Check out the [configuration documentation](\/docs\/prometheus\/latest\/configuration\/configuration) for more information. Using one is no problem, as service discovery writes into the `__address__` label just like `targets` defined under `static_configs`.\n\nThat is it. Restart the Prometheus docker container and look at your [metrics](http:\/\/localhost:9090\/graph?g0.range_input=30m&g0.stacked=0&g0.expr=probe_http_duration_seconds&g0.tab=0). Make sure you select a period of time in which the metrics were actually collected.\n\n# Summary\n\nIn this guide, you learned how the multi-target exporter pattern works, how to run a blackbox exporter with a customised module, and how to configure Prometheus using relabeling to scrape metrics with prober labels.","site":"prometheus"}
{"questions":"prometheus title Monitoring Docker container metrics using cAdvisor short for container Advisor analyzes and exposes resource usage and performance data from running containers cAdvisor exposes Prometheus metrics out of the box In this guide we will create a local multi container installation that includes containers running Prometheus cAdvisor and a server respectively examine some container metrics produced by the Redis container collected by cAdvisor and scraped by Prometheus Monitoring Docker container metrics using cAdvisor","answers":"---\ntitle: Monitoring Docker container metrics using cAdvisor\n---\n\n# Monitoring Docker container metrics using cAdvisor\n\n[cAdvisor](https:\/\/github.com\/google\/cadvisor) (short for **c**ontainer **Advisor**) analyzes and exposes resource usage and performance data from running containers. cAdvisor exposes Prometheus metrics out of the box. In this guide, we will:\n\n* create a local multi-container [Docker Compose](https:\/\/docs.docker.com\/compose\/) installation that includes containers running Prometheus, cAdvisor, and a [Redis](https:\/\/redis.io\/) server, respectively\n* examine some container metrics produced by the Redis container, collected by cAdvisor, and scraped by Prometheus\n\n## Prometheus configuration\n\nFirst, you'll need to [configure Prometheus](\/docs\/prometheus\/latest\/configuration\/configuration) to scrape metrics from cAdvisor. 
Create a `prometheus.yml` file and populate it with this configuration:\n\n```yaml\nscrape_configs:\n- job_name: cadvisor\n  scrape_interval: 5s\n  static_configs:\n  - targets:\n    - cadvisor:8080\n```\n\n## Docker Compose configuration\n\nNow we'll need to create a Docker Compose [configuration](https:\/\/docs.docker.com\/compose\/compose-file\/) that specifies which containers are part of our installation as well as which ports are exposed by each container, which volumes are used, and so on.\n\nIn the same folder where you created the [`prometheus.yml`](#prometheus-configuration) file, create a `docker-compose.yml` file and populate it with this Docker Compose configuration:\n\n```yaml\nversion: '3.2'\nservices:\n  prometheus:\n    image: prom\/prometheus:latest\n    container_name: prometheus\n    ports:\n    - 9090:9090\n    command:\n    - --config.file=\/etc\/prometheus\/prometheus.yml\n    volumes:\n    - .\/prometheus.yml:\/etc\/prometheus\/prometheus.yml:ro\n    depends_on:\n    - cadvisor\n  cadvisor:\n    image: gcr.io\/cadvisor\/cadvisor:latest\n    container_name: cadvisor\n    ports:\n    - 8080:8080\n    volumes:\n    - \/:\/rootfs:ro\n    - \/var\/run:\/var\/run:rw\n    - \/sys:\/sys:ro\n    - \/var\/lib\/docker\/:\/var\/lib\/docker:ro\n    depends_on:\n    - redis\n  redis:\n    image: redis:latest\n    container_name: redis\n    ports:\n    - 6379:6379\n```\n\nThis configuration instructs Docker Compose to run three services, each of which corresponds to a [Docker](https:\/\/docker.com) container:\n\n1. The `prometheus` service uses the local `prometheus.yml` configuration file (imported into the container by the `volumes` parameter).\n1. The `cadvisor` service exposes port 8080 (the default port for cAdvisor metrics) and relies on a variety of local volumes (`\/`, `\/var\/run`, etc.).\n1. The `redis` service is a standard Redis server. cAdvisor will gather container metrics from this container automatically, i.e. 
without any further configuration.\n\nTo run the installation:\n\n```bash\ndocker-compose up\n```\n\nIf Docker Compose successfully starts up all three containers, you should see output like this:\n\n```\nprometheus  | level=info ts=2018-07-12T22:02:40.5195272Z caller=main.go:500 msg=\"Server is ready to receive web requests.\"\n```\n\nYou can verify that all three containers are running using the [`ps`](https:\/\/docs.docker.com\/compose\/reference\/ps\/) command:\n\n```bash\ndocker-compose ps\n```\n\nYour output will look something like this:\n\n```\n   Name                 Command               State           Ports\n----------------------------------------------------------------------------\ncadvisor     \/usr\/bin\/cadvisor -logtostderr   Up      8080\/tcp\nprometheus   \/bin\/prometheus --config.f ...   Up      0.0.0.0:9090->9090\/tcp\nredis        docker-entrypoint.sh redis ...   Up      0.0.0.0:6379->6379\/tcp\n```\n\n## Exploring the cAdvisor web UI\n\nYou can access the cAdvisor [web UI](https:\/\/github.com\/google\/cadvisor\/blob\/master\/docs\/web.md) at `http:\/\/localhost:8080`. You can explore stats and graphs for specific Docker containers in our installation at `http:\/\/localhost:8080\/docker\/<container>`. Metrics for the Redis container, for example, can be accessed at `http:\/\/localhost:8080\/docker\/redis`, Prometheus at `http:\/\/localhost:8080\/docker\/prometheus`, and so on.\n\n## Exploring metrics in the expression browser\n\ncAdvisor's web UI is a useful interface for exploring the kinds of things that cAdvisor monitors, but it doesn't provide an interface for exploring container *metrics*. For that we'll need the Prometheus [expression browser](\/docs\/visualization\/browser), which is available at `http:\/\/localhost:9090\/graph`. 
You can enter Prometheus expressions into the expression bar, which looks like this:\n\n![Prometheus expression bar](\/assets\/prometheus-expression-bar.png)\n\nLet's start by exploring the `container_start_time_seconds` metric, which records the start time of containers (in seconds). You can select for specific containers by name using the `name=\"<container_name>\"` expression. The container name corresponds to the `container_name` parameter in the Docker Compose configuration. The [`container_start_time_seconds{name=\"redis\"}`](http:\/\/localhost:9090\/graph?g0.range_input=1h&g0.expr=container_start_time_seconds%7Bname%3D%22redis%22%7D&g0.tab=1) expression, for example, shows the start time for the `redis` container.\n\nNOTE: A full listing of cAdvisor-gathered container metrics exposed to Prometheus can be found in the [cAdvisor documentation](https:\/\/github.com\/google\/cadvisor\/blob\/master\/docs\/storage\/prometheus.md).\n\n## Other expressions\n\nThe table below lists some other example expressions.\n\nExpression | Description | For\n:----------|:------------|:---\n[`rate(container_cpu_usage_seconds_total{name=\"redis\"}[1m])`](http:\/\/localhost:9090\/graph?g0.range_input=1h&g0.expr=rate(container_cpu_usage_seconds_total%7Bname%3D%22redis%22%7D%5B1m%5D)&g0.tab=1) | The [cgroup](https:\/\/en.wikipedia.org\/wiki\/Cgroups)'s CPU usage in the last minute | The `redis` container\n[`container_memory_usage_bytes{name=\"redis\"}`](http:\/\/localhost:9090\/graph?g0.range_input=1h&g0.expr=container_memory_usage_bytes%7Bname%3D%22redis%22%7D&g0.tab=1) | The cgroup's total memory usage (in bytes) | The `redis` container\n[`rate(container_network_transmit_bytes_total[1m])`](http:\/\/localhost:9090\/graph?g0.range_input=1h&g0.expr=rate(container_network_transmit_bytes_total%5B1m%5D)&g0.tab=1) | Bytes transmitted over the network by the container per second in the last minute | All 
containers\n[`rate(container_network_receive_bytes_total[1m])`](http:\/\/localhost:9090\/graph?g0.range_input=1h&g0.expr=rate(container_network_receive_bytes_total%5B1m%5D)&g0.tab=1) | Bytes received over the network by the container per second in the last minute | All containers\n\n## Summary\n\nIn this guide, we ran three separate containers in a single installation using Docker Compose: a Prometheus container scraped metrics from a cAdvisor container which, in turn, gathered metrics produced by a Redis container. We then explored a handful of cAdvisor container metrics using the Prometheus expression browser.","site":"prometheus"}
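The expressions in the table above can also be run programmatically against the Prometheus HTTP API's `\/api\/v1\/query` endpoint. A minimal sketch, assuming Prometheus is reachable at `http://localhost:9090` as in this guide (the `instant_query_url` and `query` helpers are our own, not part of any Prometheus client library):

```python
# Sketch: run the guide's PromQL expressions via the Prometheus HTTP API.
# Assumes Prometheus is reachable at http://localhost:9090 as in this guide.
import json
import urllib.parse
import urllib.request

PROMETHEUS = "http://localhost:9090"

def instant_query_url(expr, base=PROMETHEUS):
    """Build the URL for an instant query against the /api/v1/query endpoint."""
    return base + "/api/v1/query?" + urllib.parse.urlencode({"query": expr})

def query(expr):
    """Run the query and return the result vector (needs a running server)."""
    with urllib.request.urlopen(instant_query_url(expr)) as resp:
        body = json.load(resp)
    if body.get("status") != "success":
        raise RuntimeError("query failed: %r" % body)
    return body["data"]["result"]

if __name__ == "__main__":
    # Per-second CPU usage of the redis container over the last minute.
    for sample in query('rate(container_cpu_usage_seconds_total{name="redis"}[1m])'):
        print(sample["metric"].get("name"), sample["value"][1])
```

The official client libraries are the better choice for anything beyond a quick script; this only illustrates that the expression browser and the HTTP API accept the same PromQL strings.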
{"questions":"prometheus title Monitoring Linux host metrics with the Node Exporter The Prometheus exposes a wide variety of hardware and kernel related metrics Monitoring Linux host metrics with the Node Exporter In this guide you will","answers":"---\ntitle: Monitoring Linux host metrics with the Node Exporter\n---\n\n# Monitoring Linux host metrics with the Node Exporter\n\nThe Prometheus [**Node Exporter**](https:\/\/github.com\/prometheus\/node_exporter) exposes a wide variety of hardware- and kernel-related metrics.\n\nIn this guide, you will:\n\n* Start up a Node Exporter on `localhost`\n* Start up a Prometheus instance on `localhost` that's configured to scrape metrics from the running Node Exporter\n\nNOTE: While the Prometheus Node Exporter is for *nix systems, there is the [Windows exporter](https:\/\/github.com\/prometheus-community\/windows_exporter) for Windows that serves an analogous purpose.\n\n## Installing and running the Node Exporter\n\nThe Prometheus Node Exporter is a single static binary that you can install [via tarball](#tarball-installation). 
Once you've downloaded it from the Prometheus [downloads page](\/download#node_exporter), extract it, and run it:\n\n```bash\n# NOTE: Replace the URL with one from the above-mentioned \"downloads\" page.\n# <VERSION>, <OS>, and <ARCH> are placeholders.\nwget https:\/\/github.com\/prometheus\/node_exporter\/releases\/download\/v<VERSION>\/node_exporter-<VERSION>.<OS>-<ARCH>.tar.gz\ntar xvfz node_exporter-*.*-amd64.tar.gz\ncd node_exporter-*.*-amd64\n.\/node_exporter\n```\n\nYou should see output like this indicating that the Node Exporter is now running and exposing metrics on port 9100:\n\n```\nINFO[0000] Starting node_exporter (version=0.16.0, branch=HEAD, revision=d42bd70f4363dced6b77d8fc311ea57b63387e4f)  source=\"node_exporter.go:82\"\nINFO[0000] Build context (go=go1.9.6, user=root@a67a9bc13a69, date=20180515-15:53:28)  source=\"node_exporter.go:83\"\nINFO[0000] Enabled collectors:                           source=\"node_exporter.go:90\"\nINFO[0000]  - boottime                                   source=\"node_exporter.go:97\"\n...\nINFO[0000] Listening on :9100                            source=\"node_exporter.go:111\"\n```\n\n## Node Exporter metrics\n\nOnce the Node Exporter is installed and running, you can verify that metrics are being exported by cURLing the `\/metrics` endpoint:\n\n```bash\ncurl http:\/\/localhost:9100\/metrics\n```\n\nYou should see output like this:\n\n```\n# HELP go_gc_duration_seconds A summary of the GC invocation durations.\n# TYPE go_gc_duration_seconds summary\ngo_gc_duration_seconds{quantile=\"0\"} 3.8996e-05\ngo_gc_duration_seconds{quantile=\"0.25\"} 4.5926e-05\ngo_gc_duration_seconds{quantile=\"0.5\"} 5.846e-05\n# etc.\n```\n\nSuccess! The Node Exporter is now exposing metrics that Prometheus can scrape, including a wide variety of system metrics further down in the output (prefixed with `node_`). 
To view those metrics (along with help and type information):\n\n```bash\ncurl http:\/\/localhost:9100\/metrics | grep \"node_\"\n```\n\n## Configuring your Prometheus instances\n\nYour locally running Prometheus instance needs to be properly configured in order to access Node Exporter metrics. The following [`prometheus.yml`](..\/..\/prometheus\/latest\/configuration\/configuration\/) example configuration file tells the Prometheus instance to scrape the Node Exporter via `localhost:9100`, and how frequently:\n\n<a id=\"config\"><\/a>\n\n```yaml\nglobal:\n  scrape_interval: 15s\n\nscrape_configs:\n- job_name: node\n  static_configs:\n  - targets: ['localhost:9100']\n```\n\nTo install Prometheus, [download the latest release](\/download) for your platform and untar it:\n\n```bash\nwget https:\/\/github.com\/prometheus\/prometheus\/releases\/download\/v*\/prometheus-*.*-amd64.tar.gz\ntar xvf prometheus-*.*-amd64.tar.gz\ncd prometheus-*.*\n```\n\nOnce Prometheus is installed, you can start it up, using the `--config.file` flag to point to the Prometheus configuration that you created [above](#config):\n\n```bash\n.\/prometheus --config.file=.\/prometheus.yml\n```\n\n## Exploring Node Exporter metrics through the Prometheus expression browser\n\nNow that Prometheus is scraping metrics from a running Node Exporter instance, you can explore those metrics using the Prometheus UI (aka the [expression browser](\/docs\/visualization\/browser)). Navigate to `localhost:9090\/graph` in your browser and use the main expression bar at the top of the page to enter expressions. 
The expression bar looks like this:\n\n![Prometheus expressions browser](\/assets\/prometheus-expression-bar.png)\n\nMetrics specific to the Node Exporter are prefixed with `node_` and include metrics like `node_cpu_seconds_total` and `node_exporter_build_info`.\n\nClick on the links below to see some example metrics:\n\nMetric | Meaning\n:------|:-------\n[`rate(node_cpu_seconds_total{mode=\"system\"}[1m])`](http:\/\/localhost:9090\/graph?g0.range_input=1h&g0.expr=rate(node_cpu_seconds_total%7Bmode%3D%22system%22%7D%5B1m%5D)&g0.tab=1) | The average amount of CPU time spent in system mode, per second, over the last minute (in seconds)\n[`node_filesystem_avail_bytes`](http:\/\/localhost:9090\/graph?g0.range_input=1h&g0.expr=node_filesystem_avail_bytes&g0.tab=1) | The filesystem space available to non-root users (in bytes)\n[`rate(node_network_receive_bytes_total[1m])`](http:\/\/localhost:9090\/graph?g0.range_input=1h&g0.expr=rate(node_network_receive_bytes_total%5B1m%5D)&g0.tab=1) | The average network traffic received, per second, over the last minute (in bytes)","site":"prometheus"}
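The `\/metrics` output shown in the guide above follows the Prometheus text exposition format: one sample per line, optionally with a `{label=\"value\"}` set, preceded by `# HELP` and `# TYPE` comments. As a rough illustration of that shape, here is a toy parser (our own sketch; it ignores escaping corner cases in label values, which real scrapers and the official client libraries handle):

```python
# Sketch: a minimal parser for the Prometheus text exposition format, enough
# to pick node_* samples out of a scrape of http://localhost:9100/metrics.
# NOTE: toy code; label values containing commas, '=' or escaped quotes break it.

def parse_samples(text):
    """Yield (metric_name, labels_dict, value) for each sample line."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip HELP/TYPE comments
            continue
        name_part, _, value = line.rpartition(" ")
        if "{" in name_part:
            name, _, rest = name_part.partition("{")
            labels = {}
            for pair in rest.rstrip("}").split(","):
                if pair:
                    k, _, v = pair.partition("=")
                    labels[k] = v.strip('"')
        else:
            name, labels = name_part, {}
        yield name, labels, float(value)

# A fragment in the style of the guide's curl output.
sample = '''# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.8996e-05
node_load1 0.21
'''
node_metrics = [s for s in parse_samples(sample) if s[0].startswith("node_")]
print(node_metrics)  # [('node_load1', {}, 0.21)]
```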
{"questions":"prometheus Prometheus can discover targets in a Docker Swarm swarm cluster as of title Docker Swarm sortrank 1 v2 20 0 This guide demonstrates how to use that service discovery mechanism Docker Swarm","answers":"---\ntitle: Docker Swarm\nsort_rank: 1\n---\n\n# Docker Swarm\n\nPrometheus can discover targets in a [Docker Swarm][swarm] cluster, as of\nv2.20.0. This guide demonstrates how to use that service discovery mechanism.\n\n## Docker Swarm service discovery architecture\n\nThe [Docker Swarm service discovery][swarmsd] contains 3 different roles: nodes, tasks,\nand services.\n\nThe first role, **nodes**, represents the hosts that are part of the Swarm. It\ncan be used to automatically monitor the Docker daemons or the Node Exporters\nthat run on the Swarm hosts.\n\nThe second role, **tasks**, represents any individual container deployed in the\nswarm. Each task gets its associated service labels. One service can be backed by\none or multiple tasks.\n\nThe third one, **services**, will discover the services deployed in the\nswarm. It will discover the ports exposed by the services. Usually you will want\nto use the tasks role instead of this one.\n\nPrometheus will only discover tasks and services that expose ports.\n\nNOTE: The rest of this post assumes that you have a Swarm running.\n\n## Setting up Prometheus\n\nFor this guide, you need to [set up Prometheus][setup]. 
We will assume that\nPrometheus runs on a Docker Swarm manager node and has access to the Docker\nsocket at `\/var\/run\/docker.sock`.\n\n## Monitoring Docker daemons\n\nLet's dive into the service discovery itself.\n\nDocker itself, as a daemon, exposes [metrics][dockermetrics] that can be\ningested by a Prometheus server.\n\nYou can enable them by editing `\/etc\/docker\/daemon.json` and setting the\nfollowing properties:\n\n```json\n{\n  \"metrics-addr\" : \"0.0.0.0:9323\",\n  \"experimental\" : true\n}\n```\n\nInstead of `0.0.0.0`, you can set the IP of the Docker Swarm node.\n\nA restart of the daemon is required to take the new configuration into account.\n\nThe [Docker documentation][dockermetrics] contains more info about this.\n\nThen, you can configure Prometheus to scrape the Docker daemon, by providing the\nfollowing `prometheus.yml` file:\n\n\n```yaml\nscrape_configs:\n  # Make Prometheus scrape itself for metrics.\n  - job_name: 'prometheus'\n    static_configs:\n    - targets: ['localhost:9090']\n\n  # Create a job for Docker daemons.\n  - job_name: 'docker'\n    dockerswarm_sd_configs:\n      - host: unix:\/\/\/var\/run\/docker.sock\n        role: nodes\n    relabel_configs:\n      # Fetch metrics on port 9323.\n      - source_labels: [__meta_dockerswarm_node_address]\n        target_label: __address__\n        replacement: $1:9323\n      # Set hostname as instance label\n      - source_labels: [__meta_dockerswarm_node_hostname]\n        target_label: instance\n```\n\nFor the nodes role, you can also use the `port` parameter of\n`dockerswarm_sd_configs`. However, using `relabel_configs` is recommended as it\nenables Prometheus to reuse the same API calls across identical Docker Swarm\nconfigurations.\n\n## Monitoring Containers\n\nLet's now deploy a service in our Swarm. 
We will deploy [cadvisor][cad], which\nexposes container resource metrics:\n\n```shell\ndocker service create --name cadvisor -l prometheus-job=cadvisor \\\n    --mode=global --publish target=8080,mode=host \\\n    --mount type=bind,src=\/var\/run\/docker.sock,dst=\/var\/run\/docker.sock,ro \\\n    --mount type=bind,src=\/,dst=\/rootfs,ro \\\n    --mount type=bind,src=\/var\/run,dst=\/var\/run \\\n    --mount type=bind,src=\/sys,dst=\/sys,ro \\\n    --mount type=bind,src=\/var\/lib\/docker,dst=\/var\/lib\/docker,ro \\\n    google\/cadvisor -docker_only\n```\n\nThis is a minimal `prometheus.yml` file to monitor it:\n\n```yaml\nscrape_configs:\n  # Make Prometheus scrape itself for metrics.\n  - job_name: 'prometheus'\n    static_configs:\n    - targets: ['localhost:9090']\n\n  # Create a job for Docker Swarm containers.\n  - job_name: 'dockerswarm'\n    dockerswarm_sd_configs:\n      - host: unix:\/\/\/var\/run\/docker.sock\n        role: tasks\n    relabel_configs:\n      # Only keep containers that should be running.\n      - source_labels: [__meta_dockerswarm_task_desired_state]\n        regex: running\n        action: keep\n      # Only keep containers that have a `prometheus-job` label.\n      - source_labels: [__meta_dockerswarm_service_label_prometheus_job]\n        regex: .+\n        action: keep\n      # Use the prometheus-job Swarm label as Prometheus job label.\n      - source_labels: [__meta_dockerswarm_service_label_prometheus_job]\n        target_label: job\n```\n\nLet's analyze each part of the [relabel configuration][rela].\n\n\n```yaml\n- source_labels: [__meta_dockerswarm_task_desired_state]\n  regex: running\n  action: keep\n```\n\nDocker Swarm exposes the desired [state of the tasks][state] over the API. In\nour example, we only **keep** the targets that should be running. 
This prevents\nmonitoring tasks that should be shut down.\n\n```yaml\n- source_labels: [__meta_dockerswarm_service_label_prometheus_job]\n  regex: .+\n  action: keep\n```\n\nWhen we deployed cadvisor, we added a label `prometheus-job=cadvisor`.\nAs Prometheus fetches the task labels, we can instruct it to **only** keep the\ntargets that have a `prometheus-job` label.\n\n\n```yaml\n- source_labels: [__meta_dockerswarm_service_label_prometheus_job]\n  target_label: job\n```\n\nThat last part takes the label `prometheus-job` of the task and turns it into\na target label, overwriting the default `dockerswarm` job label that comes from\nthe scrape config.\n\n## Discovered labels\n\nThe [Prometheus Documentation][swarmsd] contains the full list of labels, but\nhere are other relabel configs that you might find useful.\n\n### Scraping metrics via a certain network only\n\n```yaml\n- source_labels: [__meta_dockerswarm_network_name]\n  regex: ingress\n  action: keep\n```\n\n### Scraping global tasks only\n\nGlobal tasks run on every daemon.\n\n```yaml\n- source_labels: [__meta_dockerswarm_service_mode]\n  regex: global\n  action: keep\n- source_labels: [__meta_dockerswarm_task_port_publish_mode]\n  regex: host\n  action: keep\n```\n\n### Adding a docker_node label to the targets\n\n```yaml\n- source_labels: [__meta_dockerswarm_node_hostname]\n  target_label: docker_node\n```\n\n## Connecting to the Docker Swarm\n\nThe above `dockerswarm_sd_configs` entries have a `host` field:\n\n```yaml\nhost: unix:\/\/\/var\/run\/docker.sock\n```\n\nThat is using the Docker socket. Prometheus offers [additional configuration\noptions][swarmsd] to connect to Swarm using HTTP and HTTPS, if you prefer that\nover the unix socket.\n\n## Conclusion\n\nThere are many discovery labels you can play with to better determine which\ntargets to monitor and how; for tasks alone, more than 25 labels are\navailable. 
Don't hesitate to look at the \"Service Discovery\" page of your\nPrometheus server (under the \"Status\" menu) to see all the discovered labels.\n\nThe service discovery makes no assumptions about your Swarm stack, so that with\nproper configuration it should be pluggable into any existing stack.\n\n[state]:https:\/\/docs.docker.com\/engine\/swarm\/how-swarm-mode-works\/swarm-task-states\/\n[rela]:https:\/\/prometheus.io\/docs\/prometheus\/latest\/configuration\/configuration\/#relabel_config\n[swarm]:https:\/\/docs.docker.com\/engine\/swarm\/\n[swarmsd]:https:\/\/prometheus.io\/docs\/prometheus\/latest\/configuration\/configuration\/#dockerswarm_sd_config\n[dockermetrics]:https:\/\/docs.docker.com\/config\/daemon\/prometheus\/\n[cad]:https:\/\/github.com\/google\/cadvisor\n[setup]:https:\/\/prometheus.io\/docs\/prometheus\/latest\/getting_started\/","site":"prometheus"}
configuration  dockerswarm sd config  dockermetrics  https   docs docker com config daemon prometheus   cad  https   github com google cadvisor  setup  https   prometheus io docs prometheus latest getting started "}
{"questions":"prometheus Instrumenting a Go application for Prometheus NOTE For comprehensive API documentation see the for Prometheus various Go libraries title Instrumenting a Go application Prometheus has an official that you can use to instrument Go applications In this guide we ll create a simple Go application that exposes Prometheus metrics via HTTP","answers":"---\ntitle: Instrumenting a Go application\n---\n\n# Instrumenting a Go application for Prometheus\n\nPrometheus has an official [Go client library](https:\/\/github.com\/prometheus\/client_golang) that you can use to instrument Go applications. In this guide, we'll create a simple Go application that exposes Prometheus metrics via HTTP.\n\nNOTE: For comprehensive API documentation, see the [GoDoc](https:\/\/godoc.org\/github.com\/prometheus\/client_golang) for Prometheus' various Go libraries.\n\n## Installation\n\nYou can install the `prometheus`, `promauto`, and `promhttp` libraries necessary for the guide using [`go get`](https:\/\/golang.org\/doc\/articles\/go_command.html):\n\n```bash\ngo get github.com\/prometheus\/client_golang\/prometheus\ngo get github.com\/prometheus\/client_golang\/prometheus\/promauto\ngo get github.com\/prometheus\/client_golang\/prometheus\/promhttp\n```\n\n## How Go exposition works\n\nTo expose Prometheus metrics in a Go application, you need to provide a `\/metrics` HTTP endpoint. 
You can use the [`prometheus\/promhttp`](https:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus\/promhttp) library's HTTP [`Handler`](https:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus\/promhttp#Handler) as the handler function.\n\nThis minimal application, for example, would expose the default metrics for Go applications via `http:\/\/localhost:2112\/metrics`:\n\n```go\npackage main\n\nimport (\n        \"net\/http\"\n\n        \"github.com\/prometheus\/client_golang\/prometheus\/promhttp\"\n)\n\nfunc main() {\n        http.Handle(\"\/metrics\", promhttp.Handler())\n        http.ListenAndServe(\":2112\", nil)\n}\n```\n\nTo start the application:\n\n```bash\ngo run main.go\n```\n\nTo access the metrics:\n\n```bash\ncurl http:\/\/localhost:2112\/metrics\n```\n\n## Adding your own metrics\n\nThe application [above](#how-go-exposition-works) exposes only the default Go metrics. You can also register your own custom application-specific metrics. This example application exposes a `myapp_processed_ops_total` [counter](\/docs\/concepts\/metric_types\/#counter) that counts the number of operations that have been processed thus far. 
Every 2 seconds, the counter is incremented by one.\n\n```go\npackage main\n\nimport (\n        \"net\/http\"\n        \"time\"\n\n        \"github.com\/prometheus\/client_golang\/prometheus\"\n        \"github.com\/prometheus\/client_golang\/prometheus\/promauto\"\n        \"github.com\/prometheus\/client_golang\/prometheus\/promhttp\"\n)\n\nfunc recordMetrics() {\n        go func() {\n                for {\n                        opsProcessed.Inc()\n                        time.Sleep(2 * time.Second)\n                }\n        }()\n}\n\nvar (\n        opsProcessed = promauto.NewCounter(prometheus.CounterOpts{\n                Name: \"myapp_processed_ops_total\",\n                Help: \"The total number of processed events\",\n        })\n)\n\nfunc main() {\n        recordMetrics()\n\n        http.Handle(\"\/metrics\", promhttp.Handler())\n        http.ListenAndServe(\":2112\", nil)\n}\n```\n\nTo run the application:\n\n```bash\ngo run main.go\n```\n\nTo access the metrics:\n\n```bash\ncurl http:\/\/localhost:2112\/metrics\n```\n\nIn the metrics output, you'll see the help text, type information, and current value of the `myapp_processed_ops_total` counter:\n\n```\n# HELP myapp_processed_ops_total The total number of processed events\n# TYPE myapp_processed_ops_total counter\nmyapp_processed_ops_total 5\n```\n\nYou can [configure](\/docs\/prometheus\/latest\/configuration\/configuration\/#scrape_config) a locally running Prometheus instance to scrape metrics from the application. Here's an example `prometheus.yml` configuration:\n\n```yaml\nscrape_configs:\n- job_name: myapp\n  scrape_interval: 10s\n  static_configs:\n  - targets:\n    - localhost:2112\n```\n\n## Other Go client features\n\nIn this guide we covered just a small handful of features available in the Prometheus Go client libraries. 
You can also expose other metrics types, such as [gauges](https:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus#Gauge) and [histograms](https:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus#Histogram), [non-global registries](https:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus#Registry), functions for [pushing metrics](https:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus\/push) to Prometheus [PushGateways](\/docs\/instrumenting\/pushing\/), bridging Prometheus and [Graphite](https:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus\/graphite), and more.\n\n## Summary\n\nIn this guide, you created two sample Go applications that expose metrics to Prometheus---one that exposes only the default Go metrics and one that also exposes a custom Prometheus counter---and configured a Prometheus instance to scrape metrics from those applications.","site":"prometheus","answers_cleaned":"    title  Instrumenting a Go application        Instrumenting a Go application for Prometheus  Prometheus has an official  Go client library  https   github com prometheus client golang  that you can use to instrument Go applications  In this guide  we ll create a simple Go application that exposes Prometheus metrics via HTTP   NOTE  For comprehensive API documentation  see the  GoDoc  https   godoc org github com prometheus client golang  for Prometheus  various Go libraries      Installation  You can install the  prometheus    promauto   and  promhttp  libraries necessary for the guide using   go get   https   golang org doc articles go command html       bash go get github com prometheus client golang prometheus go get github com prometheus client golang prometheus promauto go get github com prometheus client golang prometheus promhttp         How Go exposition works  To expose Prometheus metrics in a Go application  you need to provide a   metrics  HTTP endpoint  You can use the   prometheus promhttp   
https   godoc org github com prometheus client golang prometheus promhttp  library s HTTP   Handler   https   godoc org github com prometheus client golang prometheus promhttp Handler  as the handler function   This minimal application  for example  would expose the default metrics for Go applications via  http   localhost 2112 metrics       go package main  import            net http            github com prometheus client golang prometheus promhttp     func main             http Handle   metrics   promhttp Handler            http ListenAndServe   2112   nil         To start the application      bash go run main go      To access the metrics      bash curl http   localhost 2112 metrics         Adding your own metrics  The application  above   how go exposition works  exposes only the default Go metrics  You can also register your own custom application specific metrics  This example application exposes a  myapp processed ops total   counter   docs concepts metric types  counter  that counts the number of operations that have been processed thus far  Every 2 seconds  the counter is incremented by one      go package main  import            net http           time            github com prometheus client golang prometheus           github com prometheus client golang prometheus promauto           github com prometheus client golang prometheus promhttp     func recordMetrics             go func                     for                           opsProcessed Inc                           time Sleep 2   time Second                                   var           opsProcessed   promauto NewCounter prometheus CounterOpts                  Name   myapp processed ops total                   Help   The total number of processed events                 func main             recordMetrics            http Handle   metrics   promhttp Handler            http ListenAndServe   2112   nil         To run the application      bash go run main go      To access the metrics      bash curl 
http   localhost 2112 metrics      In the metrics output  you ll see the help text  type information  and current value of the  myapp processed ops total  counter         HELP myapp processed ops total The total number of processed events   TYPE myapp processed ops total counter myapp processed ops total 5      You can  configure   docs prometheus latest configuration configuration  scrape config  a locally running Prometheus instance to scrape metrics from the application  Here s an example  prometheus yml  configuration      yaml scrape configs    job name  myapp   scrape interval  10s   static configs      targets        localhost 2112         Other Go client features  In this guide we covered just a small handful of features available in the Prometheus Go client libraries  You can also expose other metrics types  such as  gauges  https   godoc org github com prometheus client golang prometheus Gauge  and  histograms  https   godoc org github com prometheus client golang prometheus Histogram    non global registries  https   godoc org github com prometheus client golang prometheus Registry   functions for  pushing metrics  https   godoc org github com prometheus client golang prometheus push  to Prometheus  PushGateways   docs instrumenting pushing    bridging Prometheus and  Graphite  https   godoc org github com prometheus client golang prometheus graphite   and more      Summary  In this guide  you created two sample Go applications that expose metrics to Prometheus   one that exposes only the default Go metrics and one that also exposes a custom Prometheus counter   and configured a Prometheus instance to scrape metrics from those applications "}
{"questions":"prometheus In this guide we will Use file based service discovery to discover scrape targets title Use file based service discovery to discover scrape targets Prometheus offers a variety of for discovering scrape targets including and many others If you need to use a service discovery system that is not currently supported your use case may be best served by Prometheus mechanism which enables you to list scrape targets in a JSON file along with metadata about those targets","answers":"---\ntitle: Use file-based service discovery to discover scrape targets\n---\n\n# Use file-based service discovery to discover scrape targets\n\nPrometheus offers a variety of [service discovery options](https:\/\/github.com\/prometheus\/prometheus\/tree\/main\/discovery) for discovering scrape targets, including [Kubernetes](\/docs\/prometheus\/latest\/configuration\/configuration\/#kubernetes_sd_config), [Consul](\/docs\/prometheus\/latest\/configuration\/configuration\/#consul_sd_config), and many others. If you need to use a service discovery system that is not currently supported, your use case may be best served by Prometheus' [file-based service discovery](\/docs\/prometheus\/latest\/configuration\/configuration\/#file_sd_config) mechanism, which enables you to list scrape targets in a JSON file (along with metadata about those targets).\n\nIn this guide, we will:\n\n* Install and run a Prometheus [Node Exporter](..\/node-exporter) locally\n* Create a `targets.json` file specifying the host and port information for the Node Exporter\n* Install and run a Prometheus instance that is configured to discover the Node Exporter using the `targets.json` file\n\n## Installing and running the Node Exporter\n\nSee [this section](..\/node-exporter#installing-and-running-the-node-exporter) of the [Monitoring Linux host metrics with the Node Exporter](..\/node-exporter) guide. The Node Exporter runs on port 9100. 
To ensure that the Node Exporter is exposing metrics:\n\n```bash\ncurl http:\/\/localhost:9100\/metrics\n```\n\nThe metrics output should look something like this:\n\n```\n# HELP go_gc_duration_seconds A summary of the GC invocation durations.\n# TYPE go_gc_duration_seconds summary\ngo_gc_duration_seconds{quantile=\"0\"} 0\ngo_gc_duration_seconds{quantile=\"0.25\"} 0\ngo_gc_duration_seconds{quantile=\"0.5\"} 0\n...\n```\n\n## Installing, configuring, and running Prometheus\n\nLike the Node Exporter, Prometheus is a single static binary that you can install via tarball. [Download the latest release](\/download#prometheus) for your platform and untar it:\n\n```bash\nwget https:\/\/github.com\/prometheus\/prometheus\/releases\/download\/v*\/prometheus-*.*-amd64.tar.gz\ntar xvf prometheus-*.*-amd64.tar.gz\ncd prometheus-*.*\n```\n\nThe untarred directory contains a `prometheus.yml` configuration file. Replace the current contents of that file with this:\n\n```yaml\nscrape_configs:\n- job_name: 'node'\n  file_sd_configs:\n  - files:\n    - 'targets.json'\n```\n\nThis configuration specifies that there is a job called `node` (for the Node Exporter) that retrieves host and port information for Node Exporter instances from a `targets.json` file.\n\nNow create that `targets.json` file and add this content to it:\n\n```json\n[\n  {\n    \"labels\": {\n      \"job\": \"node\"\n    },\n    \"targets\": [\n      \"localhost:9100\"\n    ]\n  }\n]\n```\n\nNOTE: In this guide we'll work with JSON service discovery configurations manually for the sake of brevity. 
In general, however, we recommend that you use some kind of JSON-generating process or tool instead.\n\nThis configuration specifies that there is a `node` job with one target: `localhost:9100`.\n\nNow you can start up Prometheus:\n\n```bash\n.\/prometheus\n```\n\nIf Prometheus has started up successfully, you should see a line like this in the logs:\n\n```\nlevel=info ts=2018-08-13T20:39:24.905651509Z caller=main.go:500 msg=\"Server is ready to receive web requests.\"\n```\n\n## Exploring the discovered services' metrics\n\nWith Prometheus up and running, you can explore metrics exposed by the `node` service using the Prometheus [expression browser](\/docs\/visualization\/browser). If you explore the [`up{job=\"node\"}`](http:\/\/localhost:9090\/graph?g0.range_input=1h&g0.expr=up%7Bjob%3D%22node%22%7D&g0.tab=1) metric, for example, you can see that the Node Exporter is being appropriately discovered.\n\n## Changing the targets list dynamically\n\nWhen using Prometheus' file-based service discovery mechanism, the Prometheus instance will listen for changes to the file and automatically update the scrape target list, without requiring an instance restart. To demonstrate this, start up a second Node Exporter instance on port 9200. First navigate to the directory containing the Node Exporter binary and run this command in a new terminal window:\n\n```bash\n.\/node_exporter --web.listen-address=\":9200\"\n```\n\nNow modify the config in `targets.json` by adding an entry for the new Node Exporter:\n\n```json\n[\n  {\n    \"targets\": [\n      \"localhost:9100\"\n    ],\n    \"labels\": {\n      \"job\": \"node\"\n    }\n  },\n  {\n    \"targets\": [\n      \"localhost:9200\"\n    ],\n    \"labels\": {\n      \"job\": \"node\"\n    }\n  }\n]\n```\n\nWhen you save the changes, Prometheus will automatically be notified of the new list of targets. 
The [`up{job=\"node\"}`](http:\/\/localhost:9090\/graph?g0.range_input=1h&g0.expr=up%7Bjob%3D%22node%22%7D&g0.tab=1) metric should display two instances with `instance` labels `localhost:9100` and `localhost:9200`.\n\n## Summary\n\nIn this guide, you installed and ran a Prometheus Node Exporter and configured Prometheus to discover and scrape metrics from the Node Exporter using file-based service discovery.","site":"prometheus","answers_cleaned":"    title  Use file based service discovery to discover scrape targets        Use file based service discovery to discover scrape targets  Prometheus offers a variety of  service discovery options  https   github com prometheus prometheus tree main discovery  for discovering scrape targets  including  Kubernetes   docs prometheus latest configuration configuration  kubernetes sd config    Consul   docs prometheus latest configuration configuration  consul sd config   and many others  If you need to use a service discovery system that is not currently supported  your use case may be best served by Prometheus   file based service discovery   docs prometheus latest configuration configuration  file sd config  mechanism  which enables you to list scrape targets in a JSON file  along with metadata about those targets    In this guide  we will     Install and run a Prometheus  Node Exporter     node exporter  locally   Create a  targets json  file specifying the host and port information for the Node Exporter   Install and run a Prometheus instance that is configured to discover the Node Exporter using the  targets json  file     Installing and running the Node Exporter  See  this section     node exporter installing and running the node exporter  of the  Monitoring Linux host metrics with the Node Exporter     node exporter  guide  The Node Exporter runs on port 9100  To ensure that the Node Exporter is exposing metrics      bash curl http   localhost 9100 metrics      The metrics output should look something like this         
HELP go gc duration seconds A summary of the GC invocation durations    TYPE go gc duration seconds summary go gc duration seconds quantile  0   0 go gc duration seconds quantile  0 25   0 go gc duration seconds quantile  0 5   0             Installing  configuring  and running Prometheus  Like the Node Exporter  Prometheus is a single static binary that you can install via tarball   Download the latest release   download prometheus  for your platform and untar it      bash wget https   github com prometheus prometheus releases download v  prometheus     amd64 tar gz tar xvf prometheus     amd64 tar gz cd prometheus          The untarred directory contains a  prometheus yml  configuration file  Replace the current contents of that file with this      yaml scrape configs    job name   node    file sd configs      files         targets json       This configuration specifies that there is a job called  node   for the Node Exporter  that retrieves host and port information for Node Exporter instances from a  targets json  file   Now create that  targets json  file and add this content to it      json            labels            job    node              targets            localhost 9100                   NOTE  In this guide we ll work with JSON service discovery configurations manually for the sake of brevity  In general  however  we recommend that you use some kind of JSON generating process or tool instead   This configuration specifies that there is a  node  job with one target   localhost 9100    Now you can start up Prometheus      bash   prometheus      If Prometheus has started up successfully  you should see a line like this in the logs       level info ts 2018 08 13T20 39 24 905651509Z caller main go 500 msg  Server is ready to receive web requests           Exploring the discovered services  metrics  With Prometheus up and running  you can explore metrics exposed by the  node  service using the Prometheus  expression browser   docs visualization browser   If 
you explore the   up job  node     http   localhost 9090 graph g0 range input 1h g0 expr up 7Bjob 3D 22node 22 7D g0 tab 1  metric  for example  you can see that the Node Exporter is being appropriately discovered      Changing the targets list dynamically  When using Prometheus  file based service discovery mechanism  the Prometheus instance will listen for changes to the file and automatically update the scrape target list  without requiring an instance restart  To demonstrate this  start up a second Node Exporter instance on port 9200  First navigate to the directory containing the Node Exporter binary and run this command in a new terminal window      bash   node exporter   web listen address   9200       Now modify the config in  targets json  by adding an entry for the new Node Exporter      json            targets            localhost 9100              labels            job    node                      targets            localhost 9200              labels            job    node                   When you save the changes  Prometheus will automatically be notified of the new list of targets  The   up job  node     http   localhost 9090 graph g0 range input 1h g0 expr up 7Bjob 3D 22node 22 7D g0 tab 1  metric should display two instances with  instance  labels  localhost 9100  and  localhost 9200       Summary  In this guide  you installed and ran a Prometheus Node Exporter and configured Prometheus to discover and scrape metrics from the Node Exporter using file based service discovery "}
{"questions":"prometheus title OpenTelemetry Prometheus supports aka OpenTelemetry Protocol ingestion through Enable the OTLP receiver Using Prometheus as your OpenTelemetry backend","answers":"---\ntitle: OpenTelemetry\n---\n\n# Using Prometheus as your OpenTelemetry backend\n\nPrometheus supports [OTLP](https:\/\/opentelemetry.io\/docs\/specs\/otlp) (aka \"OpenTelemetry Protocol\") ingestion through [HTTP](https:\/\/opentelemetry.io\/docs\/specs\/otlp\/#otlphttp).\n\n## Enable the OTLP receiver\n\nBy default, the OTLP receiver is disabled, similarly to the Remote Write receiver.\nThis is because Prometheus can work without any authentication, so it would not be\nsafe to accept incoming traffic unless explicitly configured.\n\nTo enable the receiver you need to toggle the CLI flag `--web.enable-otlp-receiver`. \nThis will cause Prometheus to serve OTLP metrics receiving on HTTP `\/api\/v1\/otlp\/v1\/metrics` path. \n\n```shell\n$ prometheus --web.enable-otlp-receiver\n```\n\n## Send OpenTelemetry Metrics to the Prometheus Server\n\nGenerally you need to tell the source of the OTLP metrics traffic about Prometheus endpoint and the fact that the\n[HTTP](https:\/\/opentelemetry.io\/docs\/specs\/otlp\/#otlphttp) mode of OTLP should be used (gRPC is usually a default).\n\nOpenTelemetry SDKs and instrumentation libraries can be usually configured via [standard environment variables](https:\/\/opentelemetry.io\/docs\/languages\/sdk-configuration\/). The following are the OpenTelemetry variables needed to send OpenTelemetry metrics to a Prometheus server on localhost:\n\n```shell\nexport OTEL_EXPORTER_OTLP_PROTOCOL=http\/protobuf\nexport OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=http:\/\/localhost:9090\/api\/v1\/otlp\/v1\/metrics\n```\n\nTurn off traces and logs:\n\n```shell\nexport OTEL_TRACES_EXPORTER=none\nexport OTEL_LOGS_EXPORTER=none\n```\n\nThe default push interval for OpenTelemetry metrics is 60 seconds. 
The following will set a 15 second push interval:\n\n```shell\nexport OTEL_METRIC_EXPORT_INTERVAL=15000\n```\n\nIf your instrumentation library does not provide `service.name` and `service.instance.id` out-of-the-box, it is highly recommended to set them.\n\n```shell\nexport OTEL_SERVICE_NAME=\"my-example-service\"\nexport OTEL_RESOURCE_ATTRIBUTES=\"service.instance.id=$(uuidgen)\"\n```\n\nThe above assumes that the `uuidgen` command is available on your system. Make sure that `service.instance.id` is unique for each instance, and that a new `service.instance.id` is generated whenever a resource attribute changes. The [recommended](https:\/\/github.com\/open-telemetry\/semantic-conventions\/tree\/main\/docs\/resource) way is to generate a new UUID on each startup of an instance.\n\n## Configuring Prometheus\n\nThis section explains various recommended configuration aspects of the Prometheus server to enable and tune your OpenTelemetry flow.\n\nSee the example Prometheus configuration [file](https:\/\/github.com\/prometheus\/prometheus\/blob\/main\/documentation\/examples\/prometheus-otlp.yml)\nthat we will use in the sections below.\n\n### Enable out-of-order ingestion\n\nThere are multiple reasons why you might want to enable out-of-order ingestion.\n\nFor example, the OpenTelemetry collector encourages batching, and you could have multiple replicas of the collector sending data to Prometheus. 
Because there is no mechanism ordering those samples, they could arrive out of order.\n\nTo enable out-of-order ingestion, you need to extend the Prometheus configuration file with the following:\n\n```yaml\nstorage:\n  tsdb:\n    out_of_order_time_window: 30m\n```\n\nA 30-minute out-of-order window has been enough for most cases, but don't hesitate to adjust this value to your needs.\n\n### Promoting resource attributes\n\nBased on experience and conversations with our community, we've found that out of all the commonly seen resource attributes,\nthere are certain ones worth attaching to all your OTLP metrics.\n\nBy default, Prometheus won't promote any attributes. If you'd like to promote any\nof them, you can do so in this section of the Prometheus configuration file. The following\nsnippet shares the best-practice set of attributes to promote:\n\n```yaml\notlp:\n  # Recommended attributes to be promoted to labels.\n  promote_resource_attributes:\n    - service.instance.id\n    - service.name\n    - service.namespace\n    - cloud.availability_zone\n    - cloud.region\n    - container.name\n    - deployment.environment.name\n    - k8s.cluster.name\n    - k8s.container.name\n    - k8s.cronjob.name\n    - k8s.daemonset.name\n    - k8s.deployment.name\n    - k8s.job.name\n    - k8s.namespace.name\n    - k8s.pod.name\n    - k8s.replicaset.name\n    - k8s.statefulset.name\n```\n\n## Including resource attributes at query time\n\nAll non-promoted, more verbose or unique labels are attached to a special `target_info` metric.\n\nYou can use this metric to join some labels at query time.\n\nAn example of such a query can look like the following:\n\n```promql\nrate(http_server_request_duration_seconds_count[2m])\n* on (job, instance) group_left (k8s_cluster_name)\ntarget_info\n```\n\nWhat happens in this query is that the time series resulting from `rate(http_server_request_duration_seconds_count[2m])` are augmented with the `k8s_cluster_name` label from the `target_info` series that 
share the same `job` and `instance` labels.\nIn other words, the `job` and `instance` labels are shared between `http_server_request_duration_seconds_count` and `target_info`, akin to SQL foreign keys.\nThe `k8s_cluster_name` label, on the other hand, corresponds to the OTel resource attribute `k8s.cluster.name` (Prometheus converts dots to underscores).\n\nSo, what is the relation between the `target_info` metric and OTel resource attributes?\nWhen Prometheus processes an OTLP write request, and provided that contained resources include the attributes `service.instance.id` and\/or `service.name`, Prometheus generates the info metric `target_info` for every (OTel) resource.\nIt adds to each such `target_info` series the label `instance` with the value of the `service.instance.id` resource attribute, and the label `job` with the value of the `service.name` resource attribute.\nIf the resource attribute `service.namespace` exists, it's prefixed to the `job` label value (i.e., `<service.namespace>\/<service.name>`).\n\nBy default, `service.name`, `service.namespace` and `service.instance.id` themselves are not added to `target_info`, because they are converted into `job` and `instance`. However, the following configuration parameter can be enabled to add them to `target_info` directly (going through normalization to replace dots with underscores, if `otlp.translation_strategy` is `UnderscoreEscapingWithSuffixes`) on top of the conversion into `job` and `instance`.\n\n```yaml\notlp:\n  keep_identifying_resource_attributes: true\n```\n\nThe rest of the resource attributes are also added as labels to the `target_info` series, with names converted to Prometheus format (e.g. 
dots converted to underscores) if `otlp.translation_strategy` is `UnderscoreEscapingWithSuffixes`.\nIf a resource lacks both `service.instance.id` and `service.name` attributes, no corresponding `target_info` series is generated.\n\nFor each of a resource's OTel metrics, Prometheus converts it to a corresponding Prometheus time series, and (if `target_info` is generated) adds the right `instance` and `job` labels.\n\n## UTF-8\n\nSince version 3.x, Prometheus supports UTF-8 for metric names and labels, so the [Prometheus normalization translator package from OpenTelemetry](https:\/\/github.com\/open-telemetry\/opentelemetry-collector-contrib\/tree\/main\/pkg\/translator\/prometheus) can be omitted.\n\nUTF-8 is enabled by default in Prometheus storage and UI, but you need to set the `translation_strategy` for the OTLP metrics receiver, which defaults to the old normalization strategy `UnderscoreEscapingWithSuffixes`.\n\nSetting it to `NoUTF8EscapingWithSuffixes`, which we recommend, will disable changing special characters to `_`, which allows native use of the OpenTelemetry metric format, especially with [the semantic conventions](https:\/\/opentelemetry.io\/docs\/specs\/semconv\/general\/metrics\/). Note that special suffixes like units and `_total` for counters will still be attached. There is [ongoing work to have no suffix generation](https:\/\/github.com\/prometheus\/proposals\/pull\/39); stay tuned for that.\n\n```yaml\notlp:\n  # Ingest OTLP data keeping UTF-8 characters in metric\/label names.\n  translation_strategy: NoUTF8EscapingWithSuffixes\n```\n\n> Currently there's a known limitation in the OTLP translation package where characters get removed from metric\/label names if multiple UTF-8 characters are concatenated between words, e.g. `my___metric` becomes `my_metric`. 
Please see https:\/\/github.com\/prometheus\/prometheus\/issues\/15362 for more details.\n\n## Delta Temporality\n\nThe [OpenTelemetry specification says](https:\/\/opentelemetry.io\/docs\/specs\/otel\/metrics\/data-model\/#temporality) that both Delta temporality and Cumulative temporality are supported.\n\nWhile Delta temporality is common in systems like statsd and graphite, cumulative temporality is the default temporality for Prometheus.\n\nToday Prometheus does not have support for delta temporality but we are learning from the OpenTelemetry community and we are considering adding support for it in the future.\n\nIf you are coming from a delta temporality system we recommend that you use the [delta to cumulative processor](https:\/\/github.com\/open-telemetry\/opentelemetry-collector-contrib\/tree\/main\/processor\/deltatocumulativeprocessor) in your OTel pipeline.","site":"prometheus"}
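The delta-to-cumulative conversion recommended above can be sketched conceptually: the processor keeps a running sum per series so that Prometheus receives cumulative values instead of per-interval deltas. This is a minimal illustration only; the series key and function name are hypothetical, not the OTel collector's actual API.

```python
from collections import defaultdict

# Minimal sketch of delta-to-cumulative conversion: keep a running sum per
# series and emit the cumulative total instead of each delta. The series key
# and function name are illustrative, not the collector's actual API.
running_totals = defaultdict(float)

def to_cumulative(series_key: str, delta: float) -> float:
    # Fold each incoming delta into the running total for its series.
    running_totals[series_key] += delta
    return running_totals[series_key]

series = "http_requests_total{job='api'}"
print([to_cumulative(series, d) for d in (5, 3, 7)])  # [5.0, 8.0, 15.0]
```

The real processor additionally has to handle series resets and out-of-order data, which this sketch ignores.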
{"questions":"prometheus Versions of Prometheus before 3 0 required that metric and label names adhere to are valid names but there are some manual changes needed for other parts of the ecosystem to introduce names with any UTF 8 characters Introduction title UTF 8 in Prometheus a strict set of character requirements With Prometheus 3 0 all UTF 8 strings","answers":"---\ntitle: UTF-8 in Prometheus\n---\n\n# Introduction\n\nVersions of Prometheus before 3.0 required that metric and label names adhere to\na strict set of character requirements. With Prometheus 3.0, all UTF-8 strings\nare valid names, but there are some manual changes needed for other parts of the ecosystem to introduce names with any UTF-8 characters.\n\nThere may also be circumstances where users want to enforce the legacy character\nset, perhaps for compatibility with an older system or one that does not yet\nsupport UTF-8.\n\nThis document guides you through the UTF-8 transition details.\n\n# Go Instrumentation\n\nCurrently, metrics created by the official Prometheus [client_golang library](https:\/\/github.com\/prometheus\/client_golang) will reject UTF-8 names\nby default. It is necessary to change the default validation scheme to allow\nUTF-8. The requirement to set this value will be removed in a future version of\nthe common library.\n\n```golang\nimport \"github.com\/prometheus\/common\/model\"\n\nfunc init() {\n\tmodel.NameValidationScheme = model.UTF8Validation\n}\n```\n\nIf users want to enforce the legacy character set, they can set the validation\nscheme to `LegacyValidation`.\n\nSetting the validation scheme must be done before the instantiation of metrics\nand can be set on the fly if desired.\n\n## Instrumenting in other languages\n\nOther client libraries may have similar requirements to set the validation\nscheme. 
Check the documentation for the library you are using.\n\n# Configuring Name Validation during Scraping\n\nBy default, Prometheus 3.0 accepts all UTF-8 strings as valid metric and label\nnames. It is possible to override this behavior for scraped targets and reject\nnames that do not conform to the legacy character set.\n\nThis option can be set in the Prometheus YAML file on a global basis:\n\n```yaml\nglobal:\n  metric_name_validation_scheme: legacy\n```\n\nor on a per-scrape config basis:\n\n```yaml\nscrape_configs:\n  - job_name: prometheus\n    metric_name_validation_scheme: legacy\n```\n\nScrape config settings override the global setting.\n\n## Scrape Content Negotiation for UTF-8 escaping\n\nAt scrape time, the scraping system **must** pass `escaping=allow-utf-8` in the\nAccept header in order to be served UTF-8 names. If a system being scraped does\nnot see this header, it will automatically convert UTF-8 names to\nlegacy-compatible names using underscore replacement.\n\nScraping systems can also request a specific escaping method if desired by\nsetting the `escaping` header to a different value.\n\n* `underscores`: The default: convert legacy-invalid characters to underscores.\n* `dots`: similar to UnderscoreEscaping, except that dots are converted to\n  `_dot_` and pre-existing underscores are converted to `__`. This allows for\n  round-tripping of simple metric names that also contain dots.\n* `values`: This mode prepends the name with `U__` and replaces all invalid\n  characters with the unicode value, surrounded by underscores. Single\n  underscores are replaced with double underscores. This mode allows for full\n  round-tripping of UTF-8 names with a legacy system.\n\n## Remote Write 2.0\n\nRemote Write 2.0 automatically accepts all UTF-8 names in Prometheus 3.0. 
There\nis no way to enforce the legacy character set validation with Remote Write 2.0.\n\n# OTLP Metrics\n\nThe OTLP receiver in Prometheus 3.0 still normalizes all names to Prometheus format by default. You can change this in the `otlp` section of the Prometheus configuration as follows:\n\n\n    otlp:\n      # Ingest OTLP data keeping UTF-8 characters in metric\/label names.\n      translation_strategy: NoUTF8EscapingWithSuffixes\n\n\nSee the [OpenTelemetry guide](.\/opentelemetry) for more details.\n\n\n# Querying\n\n\nQuerying for metrics with UTF-8 names will require a slightly different syntax\nin PromQL.\n\nThe classic query syntax will still work for legacy-compatible names:\n\n`my_metric{}`\n\nBut UTF-8 names must be quoted **and** moved into the braces:\n\n`{\"my.metric\"}`\n\nLabel names must also be quoted if they contain legacy-incompatible characters:\n\n`{\"metric.name\", \"my.label.name\"=\"bar\"}`\n\nThe metric name can appear anywhere inside the braces, but style prefers that it\nbe the first term.","site":"prometheus"}
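The `underscores` and `dots` escaping modes described above can be approximated with a short sketch. This is illustrative only, not Prometheus's actual implementation; the function names are hypothetical.

```python
import re

# Rough sketches of two escaping modes (illustrative, not Prometheus's code).
def escape_underscores(name: str) -> str:
    # 'underscores' mode: any character outside the legacy charset
    # [a-zA-Z0-9_:] becomes '_'.
    return re.sub(r"[^a-zA-Z0-9_:]", "_", name)

def escape_dots(name: str) -> str:
    # 'dots' mode: double pre-existing underscores first, then turn dots
    # into '_dot_', so simple dotted names can be round-tripped.
    return name.replace("_", "__").replace(".", "_dot_")

print(escape_underscores("my.metric"))  # my_metric
print(escape_dots("my.metric_name"))    # my_dot_metric__name
```

Note how `underscores` is lossy (`my.metric` and `my_metric` collide), while `dots` keeps enough information to recover the original dotted name.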
{"questions":"prometheus to the usage of the specific types and in the wire protocol The Prometheus title Metric types currently only differentiated in the client libraries to enable APIs tailored sortrank 2 Metric types The Prometheus client libraries offer four core metric types These are","answers":"---\ntitle: Metric types\nsort_rank: 2\n---\n\n# Metric types\n\nThe Prometheus client libraries offer four core metric types. These are\ncurrently only differentiated in the client libraries (to enable APIs tailored\nto the usage of the specific types) and in the wire protocol. The Prometheus\nserver does not yet make use of the type information and flattens all data into\nuntyped time series. This may change in the future.\n\n## Counter\n\nA _counter_ is a cumulative metric that represents a single [monotonically \nincreasing counter](https:\/\/en.wikipedia.org\/wiki\/Monotonic_function) whose\nvalue can only increase or be reset to zero on restart. For example, you can\nuse a counter to represent the number of requests served, tasks completed, or\nerrors.\n\nDo not use a counter to expose a value that can decrease. 
For example, do not\nuse a counter for the number of currently running processes; instead use a gauge.\n\nClient library usage documentation for counters:\n\n   * [Go](http:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus#Counter)\n   * [Java](https:\/\/github.com\/prometheus\/client_java#counter)\n   * [Python](https:\/\/prometheus.github.io\/client_python\/instrumenting\/counter\/)\n   * [Ruby](https:\/\/github.com\/prometheus\/client_ruby#counter)\n   * [.Net](https:\/\/github.com\/prometheus-net\/prometheus-net#counters)\n   * [Rust](https:\/\/docs.rs\/prometheus-client\/latest\/prometheus_client\/metrics\/counter\/index.html)\n\n## Gauge\n\nA _gauge_ is a metric that represents a single numerical value that can\narbitrarily go up and down.\n\nGauges are typically used for measured values like temperatures or current\nmemory usage, but also \"counts\" that can go up and down, like the number of\nconcurrent requests.\n\nClient library usage documentation for gauges:\n\n   * [Go](http:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus#Gauge)\n   * [Java](https:\/\/github.com\/prometheus\/client_java#gauge)\n   * [Python](https:\/\/prometheus.github.io\/client_python\/instrumenting\/gauge\/)\n   * [Ruby](https:\/\/github.com\/prometheus\/client_ruby#gauge)\n   * [.Net](https:\/\/github.com\/prometheus-net\/prometheus-net#gauges)\n   * [Rust](https:\/\/docs.rs\/prometheus-client\/latest\/prometheus_client\/metrics\/gauge\/index.html)\n\n## Histogram\n\nA _histogram_ samples observations (usually things like request durations or\nresponse sizes) and counts them in configurable buckets. 
It also provides a sum\nof all observed values.\n\nA histogram with a base metric name of `<basename>` exposes multiple time series\nduring a scrape:\n\n  * cumulative counters for the observation buckets, exposed as `<basename>_bucket{le=\"<upper inclusive bound>\"}`\n  * the **total sum** of all observed values, exposed as `<basename>_sum`\n  * the **count** of events that have been observed, exposed as `<basename>_count` (identical to `<basename>_bucket{le=\"+Inf\"}` above)\n\nUse the\n[`histogram_quantile()` function](\/docs\/prometheus\/latest\/querying\/functions\/#histogram_quantile)\nto calculate quantiles from histograms or even aggregations of histograms. A\nhistogram is also suitable to calculate an\n[Apdex score](http:\/\/en.wikipedia.org\/wiki\/Apdex). When operating on buckets,\nremember that the histogram is\n[cumulative](https:\/\/en.wikipedia.org\/wiki\/Histogram#Cumulative_histogram). See\n[histograms and summaries](\/docs\/practices\/histograms) for details of histogram\nusage and differences to [summaries](#summary).\n\nNOTE: Beginning with Prometheus v2.40, there is experimental support for native\nhistograms. A native histogram requires only one time series, which includes a\ndynamic number of buckets in addition to the sum and count of\nobservations. Native histograms allow much higher resolution at a fraction of\nthe cost. 
Detailed documentation will follow once native histograms are closer\nto becoming a stable feature.\n\nClient library usage documentation for histograms:\n\n   * [Go](http:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus#Histogram)\n   * [Java](https:\/\/github.com\/prometheus\/client_java#histogram)\n   * [Python](https:\/\/prometheus.github.io\/client_python\/instrumenting\/histogram\/)\n   * [Ruby](https:\/\/github.com\/prometheus\/client_ruby#histogram)\n   * [.Net](https:\/\/github.com\/prometheus-net\/prometheus-net#histogram)\n   * [Rust](https:\/\/docs.rs\/prometheus-client\/latest\/prometheus_client\/metrics\/histogram\/index.html)\n\n## Summary\n\nSimilar to a _histogram_, a _summary_ samples observations (usually things like\nrequest durations and response sizes). While it also provides a total count of\nobservations and a sum of all observed values, it calculates configurable\nquantiles over a sliding time window.\n\nA summary with a base metric name of `<basename>` exposes multiple time series\nduring a scrape:\n\n  * streaming **\u03c6-quantiles** (0 \u2264 \u03c6 \u2264 1) of observed events, exposed as `<basename>{quantile=\"<\u03c6>\"}`\n  * the **total sum** of all observed values, exposed as `<basename>_sum`\n  * the **count** of events that have been observed, exposed as `<basename>_count`\n\nSee [histograms and summaries](\/docs\/practices\/histograms) for\ndetailed explanations of \u03c6-quantiles, summary usage, and differences\nto [histograms](#histogram).\n\nClient library usage documentation for summaries:\n\n   * [Go](http:\/\/godoc.org\/github.com\/prometheus\/client_golang\/prometheus#Summary)\n   * [Java](https:\/\/github.com\/prometheus\/client_java#summary)\n   * [Python](https:\/\/prometheus.github.io\/client_python\/instrumenting\/summary\/)\n   * [Ruby](https:\/\/github.com\/prometheus\/client_ruby#summary)\n   * 
[.Net](https:\/\/github.com\/prometheus-net\/prometheus-net#summary)","site":"prometheus"}
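The cumulative bucket layout described above (every observation counts toward all buckets at or above it, plus the `_sum` and `_count` series) can be sketched as follows. The bucket bounds and observations here are made-up example values.

```python
import math

# Sketch of how observations map to a classic histogram's cumulative
# buckets, _sum, and _count. Bounds and observations are illustrative.
bounds = [0.3, 0.5, 0.7, 1.0, 1.2, math.inf]  # le="+Inf" is always present
observations = [0.25, 0.4]

# Cumulative counts: each bucket counts every observation <= its bound.
buckets = {le: sum(1 for v in observations if v <= le) for le in bounds}
print(buckets)  # cumulative counts per upper bound
print(round(sum(observations), 2), len(observations))  # _sum and _count: 0.65 2
```

Because the counts are cumulative, `buckets[math.inf]` always equals `_count`, which is why `<basename>_bucket{le="+Inf"}` is identical to `<basename>_count`.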
{"questions":"prometheus Gauge Prometheus supports four types of metrics which are sortrank 2 title Understanding metric types Counter Types of metrics","answers":"---\ntitle: Understanding metric types  \nsort_rank: 2\n---\n\n# Types of metrics\n\nPrometheus supports four types of metrics:\n    - Counter\n    - Gauge\n    - Histogram\n    - Summary \n\n## Counter\n\nA counter is a metric value that can only increase, or be reset to zero on restart; i.e. the value cannot decrease below the previous value. It can be used for metrics like the number of requests, the number of errors, etc.\n\nType the query below in the query bar and click Execute.\n\n<code>go\\_gc\\_duration\\_seconds\\_count<\/code>\n\n\n[![Counter](\/assets\/tutorial\/counter_example.png)](\/assets\/tutorial\/counter_example.png)\n\nThe rate() function in PromQL takes the history of metrics over a time frame and calculates how fast the value is increasing per second. rate() is applicable to counters only.\n\n<code> rate(go\\_gc\\_duration\\_seconds\\_count[5m])<\/code>\n[![Rate Counter](\/assets\/tutorial\/rate_example.png)](\/assets\/tutorial\/rate_example.png)\n\n## Gauge\n\nA gauge is a number which can either go up or down. It can be used for metrics like the number of pods in a cluster, the number of events in a queue, etc.\n\n<code> go\\_memstats\\_heap\\_alloc\\_bytes<\/code>\n[![Gauge](\/assets\/tutorial\/gauge_example.png)](\/assets\/tutorial\/gauge_example.png)\n\nPromQL functions like `max_over_time`, `min_over_time` and `avg_over_time` can be used on gauge metrics.\n\n## Histogram\n\nA histogram is a more complex metric type when compared to the previous two. A histogram can be used for any calculated value which is counted based on bucket values. Bucket boundaries can be configured by the developer. A common example would be the time it takes to reply to a request, called latency.\n\nExample: Let's assume we want to observe the time taken to process API requests. 
Instead of storing the request time for each request, histograms allow us to store them in buckets. We define buckets for time taken, for example `lower or equal 0.3`, `le 0.5`, `le 0.7`, `le 1`, and `le 1.2`. So these are our buckets, and once the time taken for a request is calculated it is added to the count of every bucket whose upper bound is greater than or equal to the measured value.\n\nLet's say Request 1 for endpoint \u201c\/ping\u201d takes 0.25 s. The count values for the buckets will be:\n\n> \/ping\n\n| Bucket    | Count |\n| --------- | ----- |\n| 0 - 0.3   | 1     |\n| 0 - 0.5   | 1     |\n| 0 - 0.7   | 1     |\n| 0 - 1     | 1     |\n| 0 - 1.2   | 1     |\n| 0 - +Inf  | 1     |\n\nNote: the +Inf bucket is added by default.\n\n(Since the histogram is cumulative, 1 is added to every bucket whose boundary is greater than or equal to the value.)\n\nRequest 2 for endpoint \u201c\/ping\u201d takes 0.4 s. The count values for the buckets will be:\n\n> \/ping\n\n| Bucket    | Count |\n| --------- | ----- |\n| 0 - 0.3   | 1     |\n| 0 - 0.5   | 2     |\n| 0 - 0.7   | 2     |\n| 0 - 1     | 2     |\n| 0 - 1.2   | 2     |\n| 0 - +Inf  | 2     |\n\nSince 0.4 is above 0.3 but below 0.5, every bucket from the 0.5 boundary upward increases its count.\n\nLet's explore a histogram metric from the Prometheus UI and apply a few functions.\n\n<code>prometheus\\_http\\_request\\_duration\\_seconds\\_bucket{handler=\"\/graph\"}<\/code>\n\n[![Histogram](\/assets\/tutorial\/histogram_example.png)](\/assets\/tutorial\/histogram_example.png)\n\nThe `histogram_quantile()` function can be used to calculate quantiles from a histogram.\n\n<code>histogram\\_quantile(0.9,prometheus\\_http\\_request\\_duration\\_seconds\\_bucket{handler=\"\/graph\"})<\/code>\n\n[![Histogram Quantile](\/assets\/tutorial\/histogram_quantile_example.png)](\/assets\/tutorial\/histogram_quantile_example.png)\n\nThe graph shows that the 90th percentile is 0.09. To find the histogram_quantile over the last 5m, you can combine rate() and a time 
frame\n\n<code>histogram_quantile(0.9, rate(prometheus\_http\_request\_duration\_seconds\_bucket{handler=\"\/graph\"}[5m]))<\/code>\n\n[![Histogram Quantile Rate](\/assets\/tutorial\/histogram_rate_example.png)](\/assets\/tutorial\/histogram_rate_example.png)\n\n\n## Summary\n\nSummaries also measure events and are an alternative to histograms. They are cheaper but lose more data. They are calculated on the application level hence aggregation of metrics from multiple instances of the same process is not possible. They are used when the buckets of a metric are not known beforehand, but it is highly recommended to use histograms over summaries whenever possible.\n\nIn this tutorial, we covered the types of metrics in detail and a few PromQL operations like rate, histogram_quantile, etc.","site":"prometheus"}
{"questions":"prometheus What is Prometheus sortrank 1 Prometheus is a system monitoring and alerting system It was opensourced by SoundCloud in 2012 and is the second project both to join and to graduate within Cloud Native Computing Foundation after Kubernetes Prometheus stores all metrics data as time series i e metrics information is stored along with the timestamp at which it was recorded optional key value pairs called as labels can also be stored along with metrics title Getting Started with Prometheus What are metrics and why is it important","answers":"---\ntitle: Getting Started with Prometheus\nsort_rank: 1\n---\n\n# What is Prometheus?\n\nPrometheus is a system monitoring and alerting system. It was open-sourced by SoundCloud in 2012 and is the second project both to join and to graduate within the Cloud Native Computing Foundation, after Kubernetes. Prometheus stores all metrics data as time series, i.e. metrics information is stored along with the timestamp at which it was recorded. Optional key-value pairs called labels can also be stored along with metrics.\n# What are metrics and why are they important?\n\nMetrics, in layperson terms, are a standard of measurement. What we want to measure varies from application to application. For a web server it can be request times; for a database it can be CPU usage or the number of active connections, etc.\n\nMetrics play an important role in understanding why your application is working in a certain way. If you run a web application and someone comes up to you and says that the application is slow, you will need some information to find out what is happening with your application. For example, the application can become slow when the number of requests is high. If you have the request count metric you can spot the reason and increase the number of servers to handle the heavy load. 
Whenever you are defining the metrics for your application, you must put on your detective hat and ask this question: **what information will be important for me to debug if any issue occurs in my application?**\n\n# Basic Architecture of Prometheus\n\nThe basic components of a Prometheus setup are:\n\n- Prometheus Server (the server which scrapes and stores the metrics data).\n- Targets to be scraped, for example an instrumented application that exposes its metrics, or an exporter that exposes metrics of another application.\n- Alertmanager to raise alerts based on preset rules.\n\n(Note: apart from these, Prometheus also has a Pushgateway, which is not covered here.)\n\n[![Architecture](\/assets\/tutorial\/architecture.png)](\/assets\/tutorial\/architecture.png)\n\nLet's consider a web server as an example application, from which we want to extract a certain metric, such as the number of API calls processed by the web server. So we add instrumentation code using the Prometheus client library and expose the metrics information. Now that our web server exposes its metrics, we can configure Prometheus to scrape it: Prometheus fetches the metrics from the web server, which is listening on xyz IP address and port 7500, at a specific time interval, say, every minute.\n\nAt 11:00:00, when I make the server public for consumption, the application calculates the request count and exposes it; Prometheus simultaneously scrapes the count metric and stores the value as 0.\n\nBy 11:01:00 one request is processed. The instrumentation logic in the server increments the count to 1. When Prometheus scrapes the metric, the value of count is now 1.\n\nBy 11:02:00 two more requests are processed and the request count is 1+2 = 3 now. 
Similarly, metrics are scraped and stored.\n\nThe user can control the frequency at which metrics are scraped by Prometheus.\n\n| Time Stamp | Request Count (metric) |\n| ---------- | ---------------------- |\n| 11:00:00   | 0                      |\n| 11:01:00   | 1                      |\n| 11:02:00   | 3                      |\n\n(Note: This table is just a representation for understanding purposes. Prometheus doesn\u2019t store the values in this exact format.)\n\nPrometheus also has an API which allows you to query the metrics which have been stored by scraping. This API is used to query the metrics and to create dashboards and charts on them. PromQL is the language used to query these metrics.\n\nA simple line chart created on the Request Count metric will look like this:\n\n[![Graph](\/assets\/tutorial\/sample_graph.png)](\/assets\/tutorial\/sample_graph.png)\n\nOne can scrape multiple useful metrics to understand what is happening in the application and create multiple charts on them. Group the charts into a dashboard and use it to get an overview of the application.\n\n# Show me how it is done\n\nLet\u2019s get our hands dirty and set up Prometheus. Prometheus is written in [Go](https:\/\/golang.org\/) and all you need is the binary compiled for your operating system. Download the binary corresponding to your operating system from [here](https:\/\/prometheus.io\/download\/) and add the binary to your path.\n\nPrometheus exposes its own metrics, which can be consumed by itself or another Prometheus server.\n\nNow that we have Prometheus installed, the next step is to run it. All that we need is the binary and a configuration file. Prometheus uses yaml files for configuration.\n\n\n```yaml\nglobal:\n  scrape_interval: 15s\n\nscrape_configs:\n  - job_name: prometheus\n    static_configs:\n      - targets: [\"localhost:9090\"]\n```\n\nIn the above configuration file we have mentioned the `scrape_interval`, i.e. how frequently we want Prometheus to scrape the metrics. 
We have added `scrape_configs` which has a name and target to scrape the metrics from. Prometheus by default listens on port 9090. So add it to targets.\n\n> prometheus --config.file=prometheus.yml\n\n<iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/ioa0eISf1Q0\" frameborder=\"0\" allowfullscreen><\/iframe>\n\nNow we have Prometheus up and running and scraping its own metrics every 15s. Prometheus has standard exporters available to export metrics. Next we will run a node exporter which is an exporter for machine metrics and scrape the same using Prometheus. ([Download node metrics exporter.](https:\/\/prometheus.io\/download\/#node_exporter))\n\nRun the node exporter in a terminal.\n\n<code>.\/node_exporter<\/code>\n\n[![Node exporter](\/assets\/tutorial\/node_exporter.png)](\/assets\/tutorial\/node_exporter.png)\n\nNext, add node exporter to the list of scrape_configs:\n\n```yaml\nglobal:\n  scrape_interval: 15s\n\nscrape_configs:\n  - job_name: prometheus\n    static_configs:\n      - targets: [\"localhost:9090\"]\n  - job_name: node_exporter\n    static_configs:\n      - targets: [\"localhost:9100\"]\n```\n\n<iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/hM5bp53C7Y8\" frameborder=\"0\" allowfullscreen><\/iframe>\n\nIn this tutorial we discussed what are metrics and why they are important, basic architecture of Prometheus and how to\nrun Prometheus.","site":"prometheus"}
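When Prometheus scrapes a target, as the tutorial above describes, what it actually receives is plain text in the Prometheus exposition format. A hand-rolled Go sketch of a counter rendered that way, assuming an illustrative `request_count` metric (real applications should use a client library rather than formatting this by hand):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// requestCount stands in for the Request Count metric from the walkthrough.
var requestCount atomic.Int64

// metricsText renders the counter in the Prometheus text exposition format,
// i.e. the payload Prometheus receives when it scrapes a /metrics endpoint.
func metricsText() string {
	return fmt.Sprintf(
		"# HELP request_count Total requests handled.\n"+
			"# TYPE request_count counter\n"+
			"request_count %d\n", requestCount.Load())
}

func main() {
	// Simulate the walkthrough: three requests arrive between two scrapes.
	requestCount.Add(3)
	fmt.Print(metricsText())
}
```

In a real server this string would be served over HTTP (e.g. on `/metrics`), and Prometheus would attach the scrape timestamp when storing the sample.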
{"questions":"prometheus Here we have a simple HTTP server with endpoint which returns as response title Instrumenting HTTP server written in Go In this tutorial we will create a simple Go HTTP server and instrumentation it by adding a counter sortrank 3 metric to keep count of the total number of requests processed by the server","answers":"---\ntitle: Instrumenting HTTP server written in Go  \nsort_rank: 3\n---\n\nIn this tutorial we will create a simple Go HTTP server and instrument it by adding a counter\nmetric to keep count of the total number of requests processed by the server.\n\nHere we have a simple HTTP server with a `\/ping` endpoint which returns `pong` as the response.\n\n```go\npackage main\n\nimport (\n   \"fmt\"\n   \"net\/http\"\n)\n\nfunc ping(w http.ResponseWriter, req *http.Request){\n   fmt.Fprintf(w,\"pong\")\n}\n\nfunc main() {\n   http.HandleFunc(\"\/ping\",ping)\n\n   http.ListenAndServe(\":8090\", nil)\n}\n```\n\nCompile and run the server:\n\n```bash\ngo build server.go\n.\/server\n```\n\nNow open `http:\/\/localhost:8090\/ping` in your browser and you should see `pong`.\n\n[![Server](\/assets\/tutorial\/server.png)](\/assets\/tutorial\/server.png)\n\n\nNow let's add a metric to the server which counts the number of requests made to the ping endpoint. The counter metric type is suitable for this, as we know the request count doesn\u2019t go down and only increases.\n\nCreate a Prometheus counter:\n\n```go\nvar pingCounter = prometheus.NewCounter(\n   prometheus.CounterOpts{\n       Name: \"ping_request_count\",\n       Help: \"No of request handled by Ping handler\",\n   },\n)\n```\n\nNext, let's update the ping handler to increase the count of the counter using `pingCounter.Inc()`.\n\n```go\nfunc ping(w http.ResponseWriter, req *http.Request) {\n   pingCounter.Inc()\n   fmt.Fprintf(w, \"pong\")\n}\n```\n\nThen register the counter with the default registry and expose the metrics.\n\n```go\nfunc main() {\n   
prometheus.MustRegister(pingCounter)\n   http.HandleFunc(\"\/ping\", ping)\n   http.Handle(\"\/metrics\", promhttp.Handler())\n   http.ListenAndServe(\":8090\", nil)\n}\n```\n\nThe `prometheus.MustRegister` function registers pingCounter with the default registry.\nTo expose the metrics, the Go Prometheus client library provides the promhttp package.\n`promhttp.Handler()` provides an `http.Handler` which exposes the metrics registered with the default registry.\n\nThe complete sample code, which depends on the `client_golang` library, is:\n\n```go\npackage main\n\nimport (\n   \"fmt\"\n   \"net\/http\"\n\n   \"github.com\/prometheus\/client_golang\/prometheus\"\n   \"github.com\/prometheus\/client_golang\/prometheus\/promhttp\"\n)\n\nvar pingCounter = prometheus.NewCounter(\n   prometheus.CounterOpts{\n       Name: \"ping_request_count\",\n       Help: \"No of request handled by Ping handler\",\n   },\n)\n\nfunc ping(w http.ResponseWriter, req *http.Request) {\n   pingCounter.Inc()\n   fmt.Fprintf(w, \"pong\")\n}\n\nfunc main() {\n   prometheus.MustRegister(pingCounter)\n\n   http.HandleFunc(\"\/ping\", ping)\n   http.Handle(\"\/metrics\", promhttp.Handler())\n   http.ListenAndServe(\":8090\", nil)\n}\n```\n\nRun the example:\n\n```sh\ngo mod init prom_example\ngo mod tidy\ngo run server.go\n```\n\nNow hit the localhost:8090\/ping endpoint a couple of times; sending a request to localhost:8090\/metrics will then show the metrics.\n\n[![Ping Metric](\/assets\/tutorial\/ping_metric.png)](\/assets\/tutorial\/ping_metric.png)\n\nHere the `ping_request_count` shows that the `\/ping` endpoint was called 3 times.\n\nThe default registry comes with a collector for Go runtime metrics, which is why we see other metrics like `go_threads`, `go_goroutines`, etc.\n\nWe have built our first metric exporter. 
Let\u2019s update our Prometheus config to scrape the metrics from our server.\n\n```yaml\nglobal:\n  scrape_interval: 15s\n\nscrape_configs:\n  - job_name: prometheus\n    static_configs:\n      - targets: [\"localhost:9090\"]\n  - job_name: simple_server\n    static_configs:\n      - targets: [\"localhost:8090\"]\n```\n\n<code>prometheus --config.file=prometheus.yml<\/code>\n\n<iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/yQIWgZoiW0o\" frameborder=\"0\" allowfullscreen><\/iframe>","site":"prometheus"}
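The tutorial's ping handler runs on a separate goroutine per request (that is how `net/http` serves), so the counter increment has to be safe under concurrency; `client_golang`'s `Counter.Inc()` provides that guarantee. A stdlib-only sketch of the same property using `sync/atomic` (`handlePing` and `pingCount` are hypothetical stand-ins for the tutorial's handler and counter):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// pingCount mirrors the tutorial's ping_request_count. net/http serves each
// request on its own goroutine, so the increment must be atomic -- a plain
// "count++" would lose updates under load.
var pingCount atomic.Int64

func handlePing() {
	pingCount.Add(1) // conceptually what pingCounter.Inc() guarantees
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ { // 1000 concurrent "requests"
		wg.Add(1)
		go func() {
			defer wg.Done()
			handlePing()
		}()
	}
	wg.Wait()
	fmt.Println(pingCount.Load()) // prints 1000: no increments lost
}
```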
{"questions":"prometheus title Remote write tuning Prometheus implements sane defaults for remote write but many users have different requirements and would like to optimize their remote settings Remote write tuning sortrank 8","answers":"---\ntitle: Remote write tuning\nsort_rank: 8\n---\n\n# Remote write tuning\n\nPrometheus implements sane defaults for remote write, but many users have\ndifferent requirements and would like to optimize their remote settings.\n\nThis page describes the tuning parameters available via the [remote write\nconfiguration.](\/docs\/prometheus\/latest\/configuration\/configuration\/#remote_write)\n\n## Remote write characteristics\n\nEach remote write destination starts a queue which reads from the write-ahead\nlog (WAL), writes the samples into an in memory queue owned by a shard, which\nthen sends a request to the configured endpoint. The flow of data looks like:\n\n```\n      |-->  queue (shard_1)   --> remote endpoint\nWAL --|-->  queue (shard_...) --> remote endpoint\n      |-->  queue (shard_n)   --> remote endpoint\n```\n\nWhen one shard backs up and fills its queue, Prometheus will block reading from\nthe WAL into any shards. Failures will be retried without loss of data unless\nthe remote endpoint remains down for more than 2 hours. After 2 hours, the WAL\nwill be compacted and data that has not been sent will be lost.\n\nDuring operation, Prometheus will continuously calculate the optimal number of\nshards to use based on the incoming sample rate, number of outstanding samples\nnot sent, and time taken to send each sample.\n\n### Resource usage\n\nUsing remote write increases the memory footprint of Prometheus. Most users\nreport ~25% increased memory usage, but that number is dependent on the shape\nof the data. 
For each series in the WAL, the remote write code caches a mapping\nof series ID to label values, causing large amounts of series churn to\nsignificantly increase memory usage.\n\nIn addition to the series cache, each shard and its queue increases memory\nusage. Shard memory is proportional to the `number of shards * (capacity +\nmax_samples_per_send)`. When tuning, consider reducing `max_shards` alongside\nincreases to `capacity` and `max_samples_per_send` to avoid inadvertently\nrunning out of memory. The default values for `capacity: 10000` and\n`max_samples_per_send: 2000` will constrain shard memory usage to less than 2\nMB per shard.\n\nRemote write will also increase CPU and network usage. However, for the same\nreasons as above, it is difficult to predict by how much. It is generally a\ngood practice to check for CPU and network saturation if your Prometheus server\nfalls behind sending samples via remote write\n(`prometheus_remote_storage_samples_pending`).\n\n## Parameters\n\nAll the relevant parameters are found under the `queue_config` section of the\nremote write configuration.\n\n### `capacity`\n\nCapacity controls how many samples are queued in memory per shard before\nblocking reading from the WAL. Once the WAL is blocked, samples cannot be\nappended to any shards and all throughput will cease.\n\nCapacity should be high enough to avoid blocking other shards in most\ncases, but too much capacity can cause excess memory consumption and longer\ntimes to clear queues during resharding. It is recommended to set capacity\nto 3-10 times `max_samples_per_send`.\n\n### `max_shards`\n\nMax shards configures the maximum number of shards, or parallelism, Prometheus\nwill use for each remote write queue. Prometheus will try not to use too many\nshards, but if the queue falls behind the remote write component will increase\nthe number of shards up to max shards to increase throughput. 
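To put the shard memory rule of thumb above into numbers, here is a minimal sketch; the per-sample byte cost is an assumed figure for illustration, not a Prometheus constant:

```python
def shard_memory_bytes(num_shards, capacity, max_samples_per_send,
                       bytes_per_sample=150):
    # Rule of thumb from above: shard memory is proportional to
    # number of shards * (capacity + max_samples_per_send).
    # bytes_per_sample is an assumed illustrative cost per buffered sample.
    return num_shards * (capacity + max_samples_per_send) * bytes_per_sample

# With the defaults cited above (capacity=10000, max_samples_per_send=2000),
# one shard stays under ~2 MB at this assumed per-sample cost:
print(shard_memory_bytes(1, 10_000, 2_000) / 1e6)  # 1.8 (MB)
```

Raising `capacity` or `max_samples_per_send` without lowering `max_shards` scales this bound linearly.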
Unless remote\nwriting to a very slow endpoint, it is unlikely that `max_shards` should be\nincreased beyond the default. However, it may be necessary to reduce max shards\nif there is potential to overwhelm the remote endpoint, or to reduce memory\nusage when data is backed up.\n\n### `min_shards`\n\nMin shards configures the minimum number of shards used by Prometheus, and is\nthe number of shards used when remote write starts. If remote write falls\nbehind, Prometheus will automatically scale up the number of shards so most\nusers do not have to adjust this parameter. However, increasing min shards will\nallow Prometheus to avoid falling behind at the beginning while calculating the\nrequired number of shards.\n\n### `max_samples_per_send`\n\nMax samples per send can be adjusted depending on the backend in use. Many\nsystems work very well by sending more samples per batch without a significant\nincrease in latency. Other backends will have issues if trying to send a large\nnumber of samples in each request. The default value is small enough to work for\nmost systems.\n\n### `batch_send_deadline`\n\nBatch send deadline sets the maximum amount of time between sends for a single\nshard. Even if the queued shard has not reached `max_samples_per_send`, a\nrequest will be sent. Batch send deadline can be increased for low volume\nsystems that are not latency sensitive in order to increase request efficiency.\n\n### `min_backoff`\n\nMin backoff controls the minimum amount of time to wait before retrying a failed\nrequest. Increasing the backoff spreads out requests when a remote endpoint\ncomes back online. 
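Retries follow an exponential backoff between `min_backoff` and `max_backoff`; as a minimal sketch (illustrative values, not the actual sender implementation):

```python
def retry_delays(min_backoff, max_backoff, failures):
    # The delay doubles after each consecutive failed request,
    # capped at max_backoff (simplified model, not the real sender).
    delay, delays = min_backoff, []
    for _ in range(failures):
        delays.append(delay)
        delay = min(delay * 2, max_backoff)
    return delays

# Illustrative values of 30ms min and 5s max backoff:
print(retry_delays(0.03, 5.0, 6))  # [0.03, 0.06, 0.12, 0.24, 0.48, 0.96]
```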
The backoff interval is doubled for each failed request up\nto `max_backoff`.\n\n### `max_backoff`\n\nMax backoff controls the maximum amount of time to wait before retrying a failed\nrequest.","site":"prometheus"}
{"questions":"prometheus sortrank 4 NOTE This document predates native histograms added as an experimental stable feature this document will be thoroughly updated title Histograms and summaries feature in Prometheus v2 40 Once native histograms are closer to becoming a Histograms and summaries","answers":"---\ntitle: Histograms and summaries\nsort_rank: 4\n---\n\nNOTE: This document predates native histograms (added as an experimental\nfeature in Prometheus v2.40). Once native histograms are closer to becoming a\nstable feature, this document will be thoroughly updated.\n\n# Histograms and summaries\n\nHistograms and summaries are more complex metric types. Not only does\na single histogram or summary create a multitude of time series, it is\nalso more difficult to use these metric types correctly. This section\nhelps you to pick and configure the appropriate metric type for your\nuse case.\n\n## Library support\n\nFirst of all, check the library support for\n[histograms](\/docs\/concepts\/metric_types\/#histogram) and\n[summaries](\/docs\/concepts\/metric_types\/#summary).\n\nSome libraries support only one of the two types, or they support summaries\nonly in a limited fashion (lacking [quantile calculation](#quantiles)).\n\n## Count and sum of observations\n\nHistograms and summaries both sample observations, typically request\ndurations or response sizes. They track the number of observations\n*and* the sum of the observed values, allowing you to calculate the\n*average* of the observed values. Note that the number of observations\n(showing up in Prometheus as a time series with a `_count` suffix) is\ninherently a counter (as described above, it only goes up). The sum of\nobservations (showing up as a time series with a `_sum` suffix)\nbehaves like a counter, too, as long as there are no negative\nobservations. Obviously, request durations or response sizes are\nnever negative. 
In principle, however, you can use summaries and\nhistograms to observe negative values (e.g. temperatures in\ncentigrade). In that case, the sum of observations can go down, so you\ncannot apply `rate()` to it anymore. In those rare cases where you need to\napply `rate()` and cannot avoid negative observations, you can use two\nseparate summaries, one for positive and one for negative observations\n(the latter with inverted sign), and combine the results later with suitable\nPromQL expressions.\n\nTo calculate the average request duration during the last 5 minutes\nfrom a histogram or summary called `http_request_duration_seconds`,\nuse the following expression:\n\n      rate(http_request_duration_seconds_sum[5m])\n    \/\n      rate(http_request_duration_seconds_count[5m])\n\n## Apdex score\n\nA straight-forward use of histograms (but not summaries) is to count\nobservations falling into particular buckets of observation\nvalues.\n\nYou might have an SLO to serve 95% of requests within 300ms. In that\ncase, configure a histogram to have a bucket with an upper limit of\n0.3 seconds. You can then directly express the relative amount of\nrequests served within 300ms and easily alert if the value drops below\n0.95. The following expression calculates it by job for the requests\nserved in the last 5 minutes. The request durations were collected with\na histogram called `http_request_duration_seconds`.\n\n      sum(rate(http_request_duration_seconds_bucket{le=\"0.3\"}[5m])) by (job)\n    \/\n      sum(rate(http_request_duration_seconds_count[5m])) by (job)\n\n\nYou can approximate the well-known [Apdex\nscore](http:\/\/en.wikipedia.org\/wiki\/Apdex) in a similar way. Configure\na bucket with the target request duration as the upper bound and\nanother bucket with the tolerated request duration (usually 4 times\nthe target request duration) as the upper bound. Example: The target\nrequest duration is 300ms. The tolerable request duration is 1.2s. 
The\nfollowing expression yields the Apdex score for each job over the last\n5 minutes:\n\n    (\n      sum(rate(http_request_duration_seconds_bucket{le=\"0.3\"}[5m])) by (job)\n    +\n      sum(rate(http_request_duration_seconds_bucket{le=\"1.2\"}[5m])) by (job)\n    ) \/ 2 \/ sum(rate(http_request_duration_seconds_count[5m])) by (job)\n\nNote that we divide the sum of both buckets. The reason is that the histogram\nbuckets are\n[cumulative](https:\/\/en.wikipedia.org\/wiki\/Histogram#Cumulative_histogram). The\n`le=\"0.3\"` bucket is also contained in the `le=\"1.2\"` bucket; dividing it by 2\ncorrects for that.\n\nThe calculation does not exactly match the traditional Apdex score, as it\nincludes errors in the satisfied and tolerable parts of the calculation.\n\n## Quantiles\n\nYou can use both summaries and histograms to calculate so-called \u03c6-quantiles,\nwhere 0 \u2264 \u03c6 \u2264 1. The \u03c6-quantile is the observation value that ranks at number\n\u03c6*N among the N observations. Examples for \u03c6-quantiles: The 0.5-quantile is\nknown as the median. The 0.95-quantile is the 95th percentile.\n\nThe essential difference between summaries and histograms is that summaries\ncalculate streaming \u03c6-quantiles on the client side and expose them directly,\nwhile histograms expose bucketed observation counts and the calculation of\nquantiles from the buckets of a histogram happens on the server side using the\n[`histogram_quantile()`\nfunction](\/docs\/prometheus\/latest\/querying\/functions\/#histogram_quantile).\n\nThe two approaches have a number of different implications:\n\n|   | Histogram | Summary\n|---|-----------|---------\n| Required configuration | Pick buckets suitable for the expected range of observed values. | Pick desired \u03c6-quantiles and sliding window. Other \u03c6-quantiles and sliding windows cannot be calculated later.\n| Client performance | Observations are very cheap as they only need to increment counters. 
| Observations are expensive due to the streaming quantile calculation.\n| Server performance | The server has to calculate quantiles. You can use [recording rules](\/docs\/prometheus\/latest\/configuration\/recording_rules\/#recording-rules) should the ad-hoc calculation take too long (e.g. in a large dashboard). | Low server-side cost.\n| Number of time series (in addition to the `_sum` and `_count` series) | One time series per configured bucket. | One time series per configured quantile.\n| Quantile error (see below for details) | Error is limited in the dimension of observed values by the width of the relevant bucket. | Error is limited in the dimension of \u03c6 by a configurable value.\n| Specification of \u03c6-quantile and sliding time-window | Ad-hoc with [Prometheus expressions](\/docs\/prometheus\/latest\/querying\/functions\/#histogram_quantile). | Preconfigured by the client.\n| Aggregation | Ad-hoc with [Prometheus expressions](\/docs\/prometheus\/latest\/querying\/functions\/#histogram_quantile). | In general [not aggregatable](http:\/\/latencytipoftheday.blogspot.de\/2014\/06\/latencytipoftheday-you-cant-average.html).\n\nNote the importance of the last item in the table. Let us return to\nthe SLO of serving 95% of requests within 300ms. This time, you do not\nwant to display the percentage of requests served within 300ms, but\ninstead the 95th percentile, i.e. the request duration within which\nyou have served 95% of requests. To do that, you can either configure\na summary with a 0.95-quantile and (for example) a 5-minute decay\ntime, or you configure a histogram with a few buckets around the 300ms\nmark, e.g. `{le=\"0.1\"}`, `{le=\"0.2\"}`, `{le=\"0.3\"}`, and\n`{le=\"0.45\"}`. If your service runs replicated with a number of\ninstances, you will collect request durations from every single one of\nthem, and then you want to aggregate everything into an overall 95th\npercentile. 
However, aggregating the precomputed quantiles from a\nsummary rarely makes sense. In this particular case, averaging the\nquantiles yields statistically nonsensical values.\n\n    avg(http_request_duration_seconds{quantile=\"0.95\"}) \/\/ BAD!\n\nUsing histograms, the aggregation is perfectly possible with the\n[`histogram_quantile()`\nfunction](\/docs\/prometheus\/latest\/querying\/functions\/#histogram_quantile).\n\n    histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le)) \/\/ GOOD.\n\nFurthermore, should your SLO change and you now want to plot the 90th\npercentile, or you want to take into account the last 10 minutes\ninstead of the last 5 minutes, you only have to adjust the expression\nabove and you do not need to reconfigure the clients.\n\n## Errors of quantile estimation\n\nQuantiles, whether calculated client-side or server-side, are\nestimated. It is important to understand the errors of that\nestimation.\n\nContinuing the histogram example from above, imagine your usual\nrequest durations are almost all very close to 220ms, or in other\nwords, if you could plot the \"true\" histogram, you would see a very\nsharp spike at 220ms. In the Prometheus histogram metric as configured\nabove, almost all observations, and therefore also the 95th percentile,\nwill fall into the bucket labeled `{le=\"0.3\"}`, i.e. the bucket from\n200ms to 300ms. The histogram implementation guarantees that the true\n95th percentile is somewhere between 200ms and 300ms. To return a\nsingle value (rather than an interval), it applies linear\ninterpolation, which yields 295ms in this case. The calculated\nquantile gives you the impression that you are close to breaching the\nSLO, but in reality, the 95th percentile is a tiny bit above 220ms,\na quite comfortable distance to your SLO.\n\nNext step in our thought experiment: A change in backend routing\nadds a fixed amount of 100ms to all request durations. 
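The linear interpolation described above can be sketched with a simplified model of `histogram_quantile()`; the bucket counts are assumed for illustration and this is not the actual PromQL implementation:

```python
def histogram_quantile(q, buckets):
    # Simplified model of PromQL's histogram_quantile(): find the first
    # cumulative ('le') bucket containing the target rank and interpolate
    # linearly inside it (ignores +Inf and other edge-case handling).
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for upper_bound, cumulative in buckets:
        if cumulative >= rank:
            in_bucket = cumulative - prev_count
            return prev_bound + (upper_bound - prev_bound) * (rank - prev_count) / in_bucket
        prev_bound, prev_count = upper_bound, cumulative

# Scenario 1: all 10,000 observations just above 220ms -> 200-300ms bucket.
spike_220 = [(0.1, 0), (0.2, 0), (0.3, 10_000), (0.45, 10_000)]
# Scenario 2: everything shifted up by 100ms -> 300-450ms bucket.
spike_320 = [(0.1, 0), (0.2, 0), (0.3, 0), (0.45, 10_000)]
print(round(histogram_quantile(0.95, spike_220), 4))  # 0.295  -> 295ms
print(round(histogram_quantile(0.95, spike_320), 4))  # 0.4425 -> 442.5ms
```

The same interpolation, applied after the 100ms shift, lands at 442.5ms rather than a value near the true spike at 320ms.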
Now the request\nduration has its sharp spike at 320ms and almost all observations will\nfall into the bucket from 300ms to 450ms. The 95th percentile is\ncalculated to be 442.5ms, although the correct value is close to\n320ms. While you are only a tiny bit outside of your SLO, the\ncalculated 95th quantile looks much worse.\n\nA summary would have had no problem calculating the correct percentile\nvalue in both cases, at least if it uses an appropriate algorithm on\nthe client side (like the [one used by the Go\nclient](http:\/\/dimacs.rutgers.edu\/~graham\/pubs\/slides\/bquant-long.pdf)).\nUnfortunately, you cannot use a summary if you need to aggregate the\nobservations from a number of instances.\n\nLuckily, due to your appropriate choice of bucket boundaries, even in\nthis contrived example of very sharp spikes in the distribution of\nobserved values, the histogram was able to identify correctly if you\nwere within or outside of your SLO. Also, the closer the actual value\nof the quantile is to our SLO (or in other words, the value we are\nactually most interested in), the more accurate the calculated value\nbecomes.\n\nLet us now modify the experiment once more. In the new setup, the\ndistribution of request durations has a spike at 150ms, but it is not\nquite as sharp as before and only comprises 90% of the\nobservations. 10% of the observations are evenly spread out in a long\ntail between 150ms and 450ms. With that distribution, the 95th\npercentile happens to be exactly at our SLO of 300ms. With the\nhistogram, the calculated value is accurate, as the value of the 95th\npercentile happens to coincide with one of the bucket boundaries. Even\nslightly different values would still be accurate as the (contrived)\neven distribution within the relevant buckets is exactly what the\nlinear interpolation within a bucket assumes.\n\nThe error of the quantile reported by a summary gets more interesting\nnow. 
The error of the quantile in a summary is configured in the\ndimension of \u03c6. In our case we might have configured 0.95\u00b10.01,\ni.e. the calculated value will be between the 94th and 96th\npercentile. The 94th quantile with the distribution described above is\n270ms, the 96th quantile is 330ms. The calculated value of the 95th\npercentile reported by the summary can be anywhere in the interval\nbetween 270ms and 330ms, which unfortunately is all the difference\nbetween clearly within the SLO vs. clearly outside the SLO.\n\nThe bottom line is: If you use a summary, you control the error in the\ndimension of \u03c6. If you use a histogram, you control the error in the\ndimension of the observed value (via choosing the appropriate bucket\nlayout). With a broad distribution, small changes in \u03c6 result in\nlarge deviations in the observed value. With a sharp distribution, a\nsmall interval of observed values covers a large interval of \u03c6.\n\nTwo rules of thumb:\n\n  1. If you need to aggregate, choose histograms.\n\n  2. Otherwise, choose a histogram if you have an idea of the range\n     and distribution of values that will be observed. Choose a\n     summary if you need an accurate quantile, no matter what the\n     range and distribution of the values is.\n\n\n## What can I do if my client library does not support the metric type I need?\n\nImplement it! [Code contributions are welcome](\/community\/). In general, we\nexpect histograms to be more urgently needed than summaries. 
Histograms are\nalso easier to implement in a client library, so we recommend implementing\nhistograms first, if in doubt.","site":"prometheus"}
{"questions":"prometheus title Instrumentation sortrank 3 Instrumentation How to instrument This page provides an opinionated set of guidelines for instrumenting your code","answers":"---\ntitle: Instrumentation\nsort_rank: 3\n---\n\n# Instrumentation\n\nThis page provides an opinionated set of guidelines for instrumenting your code.\n\n## How to instrument\n\nThe short answer is to instrument everything. Every library, subsystem and\nservice should have at least a few metrics to give you a rough idea of how it is\nperforming.\n\nInstrumentation should be an integral part of your code. Instantiate the metric\nclasses in the same file you use them. This makes going from alert to console to code\neasy when you are chasing an error.\n\n### The three types of services\n\nFor monitoring purposes, services can generally be broken down into three types:\nonline-serving, offline-processing, and batch jobs. There is overlap between\nthem, but every service tends to fit well into one of these categories.\n\n#### Online-serving systems\n\nAn online-serving system is one where a human or another system is expecting an\nimmediate response. For example, most database and HTTP requests fall into\nthis category.\n\nThe key metrics in such a system are the number of performed queries, errors,\nand latency. 
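A minimal sketch of those three metrics with the Python client (a hypothetical example; the metric names and the `run()` helper are illustrative, not prescribed):\n\n```python\nfrom prometheus_client import Counter, Histogram\n\nQUERIES = Counter('myapp_queries_total', 'Queries performed.')\nERRORS = Counter('myapp_query_errors_total', 'Queries that failed.')\nLATENCY = Histogram('myapp_query_duration_seconds', 'Query latency in seconds.')\n\ndef handle(query):\n    QUERIES.inc()\n    with LATENCY.time():  # observes the elapsed time on exit\n        try:\n            return run(query)  # your actual query logic\n        except Exception:\n            ERRORS.inc()\n            raise\n```\n\n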
The number of in-progress requests can also be useful.\n\nFor counting failed queries, see section [Failures](#failures) below.\n\nOnline-serving systems should be monitored on both the client and server side.\nIf the two sides see different behaviors, that is very useful information for debugging.\nIf a service has many clients, it is not practical for the service to track them\nindividually, so they have to rely on their own stats.\n\nBe consistent in whether you count queries when they start or when they end.\nWhen they end is suggested, as it will line up with the error and latency stats,\nand tends to be easier to code.\n\n#### Offline processing\n\nFor offline processing, no one is actively waiting for a response, and batching\nof work is common. There may also be multiple stages of processing.\n\nFor each stage, track the items coming in, how many are in progress, the last\ntime you processed something, and how many items were sent out. If batching, you\nshould also track batches going in and out.\n\nKnowing the last time that a system processed something is useful for detecting if it has stalled,\nbut it is very localised information. A better approach is to send a heartbeat\nthrough the system: some dummy item that gets passed all the way through\nand includes the timestamp when it was inserted. Each stage can export the most\nrecent heartbeat timestamp it has seen, letting you know how long items are\ntaking to propagate through the system. For systems that do not have quiet\nperiods where no processing occurs, an explicit heartbeat may not be needed.\n\n#### Batch jobs\n\nThere is a fuzzy line between offline-processing and batch jobs, as offline\nprocessing may be done in batch jobs. Batch jobs are distinguished by the\nfact that they do not run continuously, which makes scraping them difficult.\n\nThe key metric of a batch job is the last time it succeeded. 
It is also useful to track\nhow long each major stage of the job took, the overall runtime and the last\ntime the job completed (successful or failed). These are all gauges, and should\nbe [pushed to a PushGateway](\/docs\/instrumenting\/pushing\/).\nThere are generally also some overall job-specific statistics that would be\nuseful to track, such as the total number of records processed.\n\nFor batch jobs that take more than a few minutes to run, it is useful to also\nscrape them using pull-based monitoring. This lets you track the same metrics over time\nas for other types of jobs, such as resource usage and latency when talking to other\nsystems. This can aid debugging if the job starts to get slow.\n\nFor batch jobs that run very often (say, more often than every 15 minutes), you should\nconsider converting them into daemons and handling them as offline-processing jobs.\n\n### Subsystems\n\nIn addition to the three main types of services, systems have sub-parts that\nshould also be monitored.\n\n#### Libraries\n\nLibraries should provide instrumentation with no additional configuration\nrequired by users.\n\nIf it is a library used to access some resource outside of the process (for example,\nnetwork, disk, or IPC), track the overall query count, errors (if errors are possible)\nand latency at a minimum.\n\nDepending on how heavy the library is, track internal errors and\nlatency within the library itself, and any general statistics you think may be\nuseful.\n\nA library may be used by multiple independent parts of an application against\ndifferent resources, so take care to distinguish uses with labels where\nappropriate. For example, a database connection pool should distinguish the databases\nit is talking to, whereas there is no need to differentiate\nbetween users of a DNS client library.\n\n#### Logging\n\nAs a general rule, for every line of logging code you should also have a\ncounter that is incremented. 
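For example, a hypothetical sketch (illustrative names; assumes a configured `logger`):\n\n```python\nfrom prometheus_client import Counter\n\nRECONNECTS = Counter('myapp_db_reconnects_total', 'Database reconnects logged.')\n\nlogger.warning('lost connection to database, reconnecting')\nRECONNECTS.inc()  # one increment per log line emitted\n```\n\n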
If you find an interesting log message, you want to\nbe able to see how often it has been happening and for how long.\n\nIf there are multiple closely-related log messages in the same function (for example,\ndifferent branches of an if or switch statement), it can sometimes make sense to\nincrement a single counter for all of them.\n\nIt is also generally useful to export the total number of info\/error\/warning\nlines that were logged by the application as a whole, and check for significant\ndifferences as part of your release process.\n\n#### Failures\n\nFailures should be handled similarly to logging. Every time there is a failure, a\ncounter should be incremented. Unlike logging, the error may also bubble up to a\nmore general error counter depending on how your code is structured.\n\nWhen reporting failures, you should generally have some other metric\nrepresenting the total number of attempts. This makes the failure ratio easy to calculate.\n\n#### Threadpools\n\nFor any sort of threadpool, the key metrics are the number of queued requests, the number of\nthreads in use, the total number of threads, the number of tasks processed, and how long they took.\nIt is also useful to track how long things were waiting in the queue.\n\n#### Caches\n\nThe key metrics for a cache are total queries, hits, overall latency and then\nthe query count, errors and latency of whatever online-serving system the cache is in front of.\n\n#### Collectors\n\nWhen implementing a non-trivial custom metrics collector, it is advised to export a\ngauge for how long the collection took in seconds and another for the number of\nerrors encountered.\n\nThis is one of the two cases when it is okay to export a duration as a gauge\nrather than a summary or a histogram, the other being batch job durations. 
This\nis because both represent information about that particular push\/scrape, rather\nthan tracking multiple durations over time.\n\n## Things to watch out for\n\nThere are some general things to be aware of when doing monitoring, and also\nPrometheus-specific ones in particular.\n\n### Use labels\n\nFew monitoring systems have the notion of labels and an expression language to\ntake advantage of them, so it takes a bit of getting used to.\n\nWhen you have multiple metrics that you want to add\/average\/sum, they should\nusually be one metric with labels rather than multiple metrics.\n\nFor example, rather than `http_responses_500_total` and `http_responses_403_total`,\ncreate a single metric called `http_responses_total` with a `code` label\nfor the HTTP response code. You can then process the entire metric as one in\nrules and graphs.\n\nAs a rule of thumb, no part of a metric name should ever be procedurally\ngenerated (use labels instead). The one exception is when proxying metrics\nfrom another monitoring\/instrumentation system.\n\nSee also the [naming](\/docs\/practices\/naming\/) section.\n\n### Do not overuse labels\n\nEach labelset is an additional time series that has RAM, CPU, disk, and network\ncosts. Usually the overhead is negligible, but in scenarios with lots of\nmetrics and hundreds of labelsets across hundreds of servers, this can add up\nquickly.\n\nAs a general guideline, try to keep the cardinality of your metrics below 10,\nand for metrics that exceed that, aim to limit them to a handful across your\nwhole system. 
The vast majority of your metrics should have no labels.\n\nIf you have a metric that has a cardinality over 100 or the potential to grow\nthat large, investigate alternate solutions such as reducing the number of\ndimensions or moving the analysis away from monitoring and to a general-purpose\nprocessing system.\n\nTo give you a better idea of the underlying numbers, let's look at node\\_exporter.\nnode\\_exporter exposes metrics for every mounted filesystem. Every node will have\nin the tens of timeseries for, say, `node_filesystem_avail`. If you have\n10,000 nodes, you will end up with roughly 100,000 timeseries for\n`node_filesystem_avail`, which is fine for Prometheus to handle.\n\nIf you were to now add quota per user, you would quickly reach a double digit\nnumber of millions with 10,000 users on 10,000 nodes. This is too much for the\ncurrent implementation of Prometheus. Even with smaller numbers, there's an\nopportunity cost as you can't have other, potentially more useful metrics on\nthis machine any more.\n\nIf you are unsure, start with no labels and add more labels over time as\nconcrete use cases arise.\n\n### Counter vs. gauge, summary vs. histogram\n\nIt is important to know which of the four main metric types to use for\na given metric.\n\nTo pick between counter and gauge, there is a simple rule of thumb: if\nthe value can go down, it is a gauge.\n\nCounters can only go up (and reset, such as when a process restarts). They are\nuseful for accumulating the number of events, or the amount of something at\neach event. For example, the total number of HTTP requests, or the total number of\nbytes sent in HTTP requests. Raw counters are rarely useful. Use the\n`rate()` function to get the per-second rate at which they are increasing.\n\nGauges can be set, go up, and go down. They are useful for snapshots of state,\nsuch as in-progress requests, free\/total memory, or temperature. 
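A hypothetical sketch of the distinction with the Python client (illustrative names):\n\n```python\nfrom prometheus_client import Counter, Gauge\n\nBYTES_SENT = Counter('myapp_bytes_sent_total', 'Total bytes sent; only goes up (or resets).')\nIN_PROGRESS = Gauge('myapp_in_progress_requests', 'A snapshot of current state.')\n\nBYTES_SENT.inc(512)  # counters only increase\nIN_PROGRESS.inc()    # gauges can go up...\nIN_PROGRESS.dec()    # ...and back down\n```\n\n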
You should\nnever take a `rate()` of a gauge.\n\nSummaries and histograms are more complex metric types discussed in\n[their own section](\/docs\/practices\/histograms\/).\n\n### Timestamps, not time since\n\nIf you want to track the amount of time since something happened, export the\nUnix timestamp at which it happened - not the time since it happened.\n\nWith the timestamp exported, you can use the expression `time() - my_timestamp_metric` to\ncalculate the time since the event, removing the need for update logic and\nprotecting you against the update logic getting stuck.\n\n### Inner loops\n\nIn general, the additional resource cost of instrumentation is far outweighed by\nthe benefits it brings to operations and development.\n\nFor code which is performance-critical or called more than 100k times a second\ninside a given process, you may wish to take some care as to how many metrics\nyou update.\n\nA Java counter takes\n[12-17ns](https:\/\/github.com\/prometheus\/client_java\/blob\/master\/benchmark\/README.md)\nto increment depending on contention. Other languages will have similar\nperformance. If that amount of time is significant for your inner loop, limit\nthe number of metrics you increment in the inner loop and avoid labels (or\ncache the result of the label lookup, for example, the return value of `With()`\nin Go or `labels()` in Java) where possible.\n\nBeware also of metric updates involving time or durations, as getting the time\nmay involve a syscall. As with all matters involving performance-critical code,\nbenchmarks are the best way to determine the impact of any given change.\n\n### Avoid missing metrics\n\nTime series that are not present until something happens are difficult\nto deal with, as the usual simple operations are no longer sufficient to\ncorrectly handle them. 
To avoid this, export a default value such as `0` for\nany time series you know may exist in advance.\n\nMost Prometheus client libraries (including Go, Java, and Python) will\nautomatically export a `0` for you for metrics with no labels.","site":"prometheus","answers_cleaned":"    title  Instrumentation sort rank  3        Instrumentation  This page provides an opinionated set of guidelines for instrumenting your code      How to instrument  The short answer is to instrument everything  Every library  subsystem and service should have at least a few metrics to give you a rough idea of how it is performing   Instrumentation should be an integral part of your code  Instantiate the metric classes in the same file you use them  This makes going from alert to console to code easy when you are chasing an error       The three types of services  For monitoring purposes  services can generally be broken down into three types  online serving  offline processing  and batch jobs  There is overlap between them  but every service tends to fit well into one of these categories        Online serving systems  An online serving system is one where a human or another system is expecting an immediate response  For example  most database and HTTP requests fall into this category   The key metrics in such a system are the number of performed queries  errors  and latency  The number of in progress requests can also be useful   For counting failed queries  see section  Failures   failures  below   Online serving systems should be monitored on both the client and server side  If the two sides see different behaviors  that is very useful information for debugging  If a service has many clients  it is not practical for the service to track them individually  so they have to rely on their own stats   Be consistent in whether you count queries when they start or when they end  When they end is suggested  as it will line up with the error and latency stats  and tends to be easier to code     
   Offline processing  For offline processing  no one is actively waiting for a response  and batching of work is common  There may also be multiple stages of processing   For each stage  track the items coming in  how many are in progress  the last time you processed something  and how many items were sent out  If batching  you should also track batches going in and out   Knowing the last time that a system processed something is useful for detecting if it has stalled  but it is very localised information  A better approach is to send a heartbeat through the system  some dummy item that gets passed all the way through and includes the timestamp when it was inserted  Each stage can export the most recent heartbeat timestamp it has seen  letting you know how long items are taking to propagate through the system  For systems that do not have quiet periods where no processing occurs  an explicit heartbeat may not be needed        Batch jobs  There is a fuzzy line between offline processing and batch jobs  as offline processing may be done in batch jobs  Batch jobs are distinguished by the fact that they do not run continuously  which makes scraping them difficult   The key metric of a batch job is the last time it succeeded  It is also useful to track how long each major stage of the job took  the overall runtime and the last time the job completed  successful or failed   These are all gauges  and should be  pushed to a PushGateway   docs instrumenting pushing    There are generally also some overall job specific statistics that would be useful to track  such as the total number of records processed   For batch jobs that take more than a few minutes to run  it is useful to also scrape them using pull based monitoring  This lets you track the same metrics over time as for other types of jobs  such as resource usage and latency when talking to other systems  This can aid debugging if the job starts to get slow   For batch jobs that run very often  say  more often than 
every 15 minutes   you should consider converting them into daemons and handling them as offline processing jobs       Subsystems  In addition to the three main types of services  systems have sub parts that should also be monitored        Libraries  Libraries should provide instrumentation with no additional configuration required by users   If it is a library used to access some resource outside of the process  for example  network  disk  or IPC   track the overall query count  errors  if errors are possible  and latency at a minimum   Depending on how heavy the library is  track internal errors and latency within the library itself  and any general statistics you think may be useful   A library may be used by multiple independent parts of an application against different resources  so take care to distinguish uses with labels where appropriate  For example  a database connection pool should distinguish the databases it is talking to  whereas there is no need to differentiate between users of a DNS client library        Logging  As a general rule  for every line of logging code you should also have a counter that is incremented  If you find an interesting log message  you want to be able to see how often it has been happening and for how long   If there are multiple closely related log messages in the same function  for example  different branches of an if or switch statement   it can sometimes make sense to increment a single counter for all of them   It is also generally useful to export the total number of info error warning lines that were logged by the application as a whole  and check for significant differences as part of your release process        Failures  Failures should be handled similarly to logging  Every time there is a failure  a counter should be incremented  Unlike logging  the error may also bubble up to a more general error counter depending on how your code is structured   When reporting failures  you should generally have some other metric 
representing the total number of attempts  This makes the failure ratio easy to calculate        Threadpools  For any sort of threadpool  the key metrics are the number of queued requests  the number of threads in use  the total number of threads  the number of tasks processed  and how long they took  It is also useful to track how long things were waiting in the queue        Caches  The key metrics for a cache are total queries  hits  overall latency and then the query count  errors and latency of whatever online serving system the cache is in front of        Collectors  When implementing a non trivial custom metrics collector  it is advised to export a gauge for how long the collection took in seconds and another for the number of errors encountered   This is one of the two cases when it is okay to export a duration as a gauge rather than a summary or a histogram  the other being batch job durations  This is because both represent information about that particular push scrape  rather than tracking multiple durations over time      Things to watch out for  There are some general things to be aware of when doing monitoring  and also Prometheus specific ones in particular       Use labels  Few monitoring systems have the notion of labels and an expression language to take advantage of them  so it takes a bit of getting used to   When you have multiple metrics that you want to add average sum  they should usually be one metric with labels rather than multiple metrics   For example  rather than  http responses 500 total  and  http responses 403 total   create a single metric called  http responses total  with a  code  label for the HTTP response code  You can then process the entire metric as one in rules and graphs   As a rule of thumb  no part of a metric name should ever be procedurally generated  use labels instead   The one exception is when proxying metrics from another monitoring instrumentation system   See also the  naming   docs practices naming   section    
   Do not overuse labels  Each labelset is an additional time series that has RAM  CPU  disk  and network costs  Usually the overhead is negligible  but in scenarios with lots of metrics and hundreds of labelsets across hundreds of servers  this can add up quickly   As a general guideline  try to keep the cardinality of your metrics below 10  and for metrics that exceed that  aim to limit them to a handful across your whole system  The vast majority of your metrics should have no labels   If you have a metric that has a cardinality over 100 or the potential to grow that large  investigate alternate solutions such as reducing the number of dimensions or moving the analysis away from monitoring and to a general purpose processing system   To give you a better idea of the underlying numbers  let s look at node  exporter  node  exporter exposes metrics for every mounted filesystem  Every node will have in the tens of timeseries for  say   node filesystem avail   If you have 10 000 nodes  you will end up with roughly 100 000 timeseries for  node filesystem avail   which is fine for Prometheus to handle   If you were to now add quota per user  you would quickly reach a double digit number of millions with 10 000 users on 10 000 nodes  This is too much for the current implementation of Prometheus  Even with smaller numbers  there s an opportunity cost as you can t have other  potentially more useful metrics on this machine any more   If you are unsure  start with no labels and add more labels over time as concrete use cases arise       Counter vs  gauge  summary vs  histogram  It is important to know which of the four main metric types to use for a given metric   To pick between counter and gauge  there is a simple rule of thumb  if the value can go down  it is a gauge   Counters can only go up  and reset  such as when a process restarts   They are useful for accumulating the number of events  or the amount of something at each event  For example  the total number of HTTP 
requests  or the total number of bytes sent in HTTP requests  Raw counters are rarely useful  Use the  rate    function to get the per second rate at which they are increasing   Gauges can be set  go up  and go down  They are useful for snapshots of state  such as in progress requests  free total memory  or temperature  You should never take a  rate    of a gauge   Summaries and histograms are more complex metric types discussed in  their own section   docs practices histograms         Timestamps  not time since  If you want to track the amount of time since something happened  export the Unix timestamp at which it happened   not the time since it happened   With the timestamp exported  you can use the expression  time     my timestamp metric  to calculate the time since the event  removing the need for update logic and protecting you against the update logic getting stuck       Inner loops  In general  the additional resource cost of instrumentation is far outweighed by the benefits it brings to operations and development   For code which is performance critical or called more than 100k times a second inside a given process  you may wish to take some care as to how many metrics you update   A Java counter takes  12 17ns  https   github com prometheus client java blob master benchmark README md  to increment depending on contention  Other languages will have similar performance  If that amount of time is significant for your inner loop  limit the number of metrics you increment in the inner loop and avoid labels  or cache the result of the label lookup  for example  the return value of  With    in Go or  labels    in Java  where possible   Beware also of metric updates involving time or durations  as getting the time may involve a syscall  As with all matters involving performance critical code  benchmarks are the best way to determine the impact of any given change       Avoid missing metrics  Time series that are not present until something happens are difficult 
to deal with  as the usual simple operations are no longer sufficient to correctly handle them  To avoid this  export a default value such as  0  for any time series you know may exist in advance   Most Prometheus client libraries  including Go  Java  and Python  will automatically export a  0  for you for metrics with no labels "}
{"questions":"prometheus sortrank 6 mistakes by making incorrect or meaningless calculations stand out A consistent naming scheme for makes it easier to interpret the meaning of a rule at a glance It also avoids title Recording rules Recording rules","answers":"---\ntitle: Recording rules\nsort_rank: 6\n---\n\n# Recording rules\n\nA consistent naming scheme for [recording rules](\/docs\/prometheus\/latest\/configuration\/recording_rules\/)\nmakes it easier to interpret the meaning of a rule at a glance. It also avoids\nmistakes by making incorrect or meaningless calculations stand out.\n\nThis page documents proper naming conventions and aggregation for recording rules.\n\n## Naming \n\n* Recording rules should be of the general form `level:metric:operations`.\n* `level` represents the aggregation level and labels of the rule output.\n* `metric` is the metric name and should be unchanged other than stripping `_total` off counters when using `rate()` or `irate()`.\n* `operations` is a list of operations that were applied to the metric, newest operation first.\n\nKeeping the metric name unchanged makes it easy to know what a metric is and\neasy to find in the codebase.\n\nTo keep the operations clean, `_sum` is omitted if there are other operations,\nas `sum()`. Associative operations can be merged (for example `min_min` is the\nsame as `min`).\n\nIf there is no obvious operation to use, use `sum`.  
When taking a ratio by\ndoing division, separate the metrics using `_per_` and call the operation\n`ratio`.\n\n## Aggregation\n\n* When aggregating up ratios, aggregate up the numerator and denominator\nseparately and then divide.\n* Do not take the average of a ratio or average of an\naverage, as that is not statistically valid.\n\n* When aggregating up the `_count` and `_sum` of a Summary and dividing to\ncalculate average observation size, treating it as a ratio would be unwieldy.\nInstead keep the metric name without the `_count` or `_sum` suffix and replace\nthe `rate` in the operation with `mean`. This represents the average\nobservation size over that time period.\n\n* Always specify a `without` clause with the labels you are aggregating away.\nThis is to preserve all the other labels such as `job`, which will avoid\nconflicts and give you more useful metrics and alerts.\n\n## Examples\n\n_Note the indentation style with outdented operators on their own line between\ntwo vectors. To make this style possible in Yaml, [block quotes with an\nindentation indicator](https:\/\/yaml.org\/spec\/1.2\/spec.html#style\/block\/scalar)\n(e.g. 
`|2`) are used._\n\nAggregating up requests per second that has a `path` label:\n\n```\n- record: instance_path:requests:rate5m\n  expr: rate(requests_total{job=\"myjob\"}[5m])\n\n- record: path:requests:rate5m\n  expr: sum without (instance)(instance_path:requests:rate5m{job=\"myjob\"})\n```\n\nCalculating a request failure ratio and aggregating up to the job-level failure ratio:\n\n```\n- record: instance_path:request_failures:rate5m\n  expr: rate(request_failures_total{job=\"myjob\"}[5m])\n\n- record: instance_path:request_failures_per_requests:ratio_rate5m\n  expr: |2\n      instance_path:request_failures:rate5m{job=\"myjob\"}\n    \/\n      instance_path:requests:rate5m{job=\"myjob\"}\n\n# Aggregate up numerator and denominator, then divide to get path-level ratio.\n- record: path:request_failures_per_requests:ratio_rate5m\n  expr: |2\n      sum without (instance)(instance_path:request_failures:rate5m{job=\"myjob\"})\n    \/\n      sum without (instance)(instance_path:requests:rate5m{job=\"myjob\"})\n\n# No labels left from instrumentation or distinguishing instances,\n# so we use 'job' as the level.\n- record: job:request_failures_per_requests:ratio_rate5m\n  expr: |2\n      sum without (instance, path)(instance_path:request_failures:rate5m{job=\"myjob\"})\n    \/\n      sum without (instance, path)(instance_path:requests:rate5m{job=\"myjob\"})\n```\n\n\nCalculating average latency over a time period from a Summary:\n\n```\n- record: instance_path:request_latency_seconds_count:rate5m\n  expr: rate(request_latency_seconds_count{job=\"myjob\"}[5m])\n\n- record: instance_path:request_latency_seconds_sum:rate5m\n  expr: rate(request_latency_seconds_sum{job=\"myjob\"}[5m])\n\n- record: instance_path:request_latency_seconds:mean5m\n  expr: |2\n      instance_path:request_latency_seconds_sum:rate5m{job=\"myjob\"}\n    \/\n      instance_path:request_latency_seconds_count:rate5m{job=\"myjob\"}\n\n# Aggregate up numerator and denominator, then divide.\n- record: 
path:request_latency_seconds:mean5m\n  expr: |2\n      sum without (instance)(instance_path:request_latency_seconds_sum:rate5m{job=\"myjob\"})\n    \/\n      sum without (instance)(instance_path:request_latency_seconds_count:rate5m{job=\"myjob\"})\n```\n\nCalculating the average query rate across instances and paths is done using the\n`avg()` function:\n\n```\n- record: job:request_latency_seconds_count:avg_rate5m\n  expr: avg without (instance, path)(instance:request_latency_seconds_count:rate5m{job=\"myjob\"})\n```\n\nNotice that when aggregating, the labels in the `without` clause are removed\nfrom the level of the output metric name compared to the input metric names.\nWhen there is no aggregation, the levels always match. If this is not the case,\na mistake has likely been made in the rules.","site":"prometheus","answers_cleaned":"    title  Recording rules sort rank  6        Recording rules  A consistent naming scheme for  recording rules   docs prometheus latest configuration recording rules   makes it easier to interpret the meaning of a rule at a glance  It also avoids mistakes by making incorrect or meaningless calculations stand out   This page documents proper naming conventions and aggregation for recording rules      Naming     Recording rules should be of the general form  level metric operations      level  represents the aggregation level and labels of the rule output     metric  is the metric name and should be unchanged other than stripping   total  off counters when using  rate    or  irate        operations  is a list of operations that were applied to the metric  newest operation first   Keeping the metric name unchanged makes it easy to know what a metric is and easy to find in the codebase   To keep the operations clean    sum  is omitted if there are other operations  as  sum     Associative operations can be merged  for example  min min  is the same as  min     If there is no obvious operation to use  use  sum    When taking a ratio by 
doing division  separate the metrics using   per   and call the operation  ratio       Aggregation    When aggregating up ratios  aggregate up the numerator and denominator separately and then divide    Do not take the average of a ratio or average of an average  as that is not statistically valid     When aggregating up the   count  and   sum  of a Summary and dividing to calculate average observation size  treating it as a ratio would be unwieldy  Instead keep the metric name without the   count  or   sum  suffix and replace the  rate  in the operation with  mean   This represents the average observation size over that time period     Always specify a  without  clause with the labels you are aggregating away  This is to preserve all the other labels such as  job   which will avoid conflicts and give you more useful metrics and alerts      Examples   Note the indentation style with outdented operators on their own line between two vectors  To make this style possible in Yaml   block quotes with an indentation indicator  https   yaml org spec 1 2 spec html style block scalar   e g    2   are used    Aggregating up requests per second that has a  path  label         record  instance path requests rate5m   expr  rate requests total job  myjob   5m      record  path requests rate5m   expr  sum without  instance  instance path requests rate5m job  myjob         Calculating a request failure ratio and aggregating up to the job level failure ratio         record  instance path request failures rate5m   expr  rate request failures total job  myjob   5m      record  instance path request failures per requests ratio rate5m   expr   2       instance path request failures rate5m job  myjob               instance path requests rate5m job  myjob      Aggregate up numerator and denominator  then divide to get path level ratio    record  path request failures per requests ratio rate5m   expr   2       sum without  instance  instance path request failures rate5m job  myjob         
       sum without  instance  instance path requests rate5m job  myjob       No labels left from instrumentation or distinguishing instances    so we use  job  as the level    record  job request failures per requests ratio rate5m   expr   2       sum without  instance  path  instance path request failures rate5m job  myjob                sum without  instance  path  instance path requests rate5m job  myjob          Calculating average latency over a time period from a Summary         record  instance path request latency seconds count rate5m   expr  rate request latency seconds count job  myjob   5m      record  instance path request latency seconds sum rate5m   expr  rate request latency seconds sum job  myjob   5m      record  instance path request latency seconds mean5m   expr   2       instance path request latency seconds sum rate5m job  myjob               instance path request latency seconds count rate5m job  myjob      Aggregate up numerator and denominator  then divide    record  path request latency seconds mean5m   expr   2       sum without  instance  instance path request latency seconds sum rate5m job  myjob                sum without  instance  instance path request latency seconds count rate5m job  myjob         Calculating the average query rate across instances and paths is done using the  avg    function         record  job request latency seconds count avg rate5m   expr  avg without  instance  path  instance request latency seconds count rate5m job  myjob         Notice that when aggregating that the labels in the  without  clause are removed from the level of the output metric name compared to the input metric names  When there is no aggregation  the levels always match  If this is not the case a mistake has likely been made in the rules "}
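The `level:metric:operations` naming rule in the record above, together with the rule that labels in the `without` clause are dropped from the level, can be sketched as a small helper. This is an illustrative check only, not part of Prometheus; the function name is made up, and it does not cover the documented convention of falling back to a remaining label such as `job` when no distinguishing labels are left:

```python
def aggregated_rule_name(input_name: str, without: set[str]) -> str:
    """Compute the expected output recording-rule name when aggregating an
    input rule of the form level:metric:operations with `sum without (...)`.

    The aggregated-away labels are removed from the level component;
    the metric and operations components are unchanged.
    """
    level, metric, operations = input_name.split(":")
    kept = [label for label in level.split("_") if label not in without]
    return ":".join(["_".join(kept), metric, operations])

# Matches the example above: aggregating away `instance` drops it from the level.
print(aggregated_rule_name("instance_path:requests:rate5m", {"instance"}))
# path:requests:rate5m
```

When there is no aggregation (`without` is empty), the input and output levels match, which is exactly the consistency check the text recommends.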
{"questions":"prometheus best practices Individual organizations may want to approach some of these Metric and label naming sortrank 1 title Metric and label naming for using Prometheus but can serve as both a style guide and a collection of The metric and label conventions presented in this document are not required","answers":"---\ntitle: Metric and label naming\nsort_rank: 1\n---\n\n# Metric and label naming\n\nThe metric and label conventions presented in this document are not required\nfor using Prometheus, but can serve as both a style-guide and a collection of\nbest practices. Individual organizations may want to approach some of these\npractices, e.g. naming conventions, differently.\n\n## Metric names\n\nA metric name...\n\n* ...must comply with the [data model](\/docs\/concepts\/data_model\/#metric-names-and-labels) for valid characters.\n* ...should have a (single-word) application prefix relevant to the domain the\n  metric belongs to. The prefix is sometimes referred to as `namespace` by\n  client libraries. For metrics specific to an application, the prefix is\n  usually the application name itself. Sometimes, however, metrics are more\n  generic, like standardized metrics exported by client libraries. Examples:\n  * <code><b>prometheus<\/b>\\_notifications\\_total<\/code>\n    (specific to the Prometheus server)\n  * <code><b>process<\/b>\\_cpu\\_seconds\\_total<\/code>\n    (exported by many client libraries)\n  * <code><b>http<\/b>\\_request\\_duration\\_seconds<\/code>\n    (for all HTTP requests)\n* ...must have a single unit (i.e. do not mix seconds with milliseconds, or seconds with bytes).\n* ...should use base units (e.g. seconds, bytes, meters - not milliseconds, megabytes, kilometers). See below for a list of base units.\n* ...should have a suffix describing the unit, in plural form. Note that an accumulating count has `total` as a suffix, in addition to the unit if applicable. 
Also note that this applies to units in the narrow sense (like the units in the table below), but not to countable things in general. For example, <code>connections<\/code> or <code>notifications<\/code> are not considered units for this rule and do not have to be at the end of the metric name. (See also examples in the next paragraph.)\n  * <code>http\\_request\\_duration\\_<b>seconds<\/b><\/code>\n  * <code>node\\_memory\\_usage\\_<b>bytes<\/b><\/code>\n  * <code>http\\_requests\\_<b>total<\/b><\/code>\n    (for a unit-less accumulating count)\n  * <code>process\\_cpu\\_<b>seconds\\_total<\/b><\/code>\n    (for an accumulating count with unit)\n  * <code>foobar_build<b>\\_info<\/b><\/code>\n    (for a pseudo-metric that provides [metadata](https:\/\/www.robustperception.io\/exposing-the-software-version-to-prometheus) about the running binary)\n  * <code>data\\_pipeline\\_last\\_record\\_processed\\_<b>timestamp_seconds<\/b><\/code>\n    (for a timestamp that tracks the time of the latest record processed in a data processing pipeline)\n* ...may order its name components in a way that leads to convenient grouping when a list of metric names is sorted lexicographically, as long as all the other rules are followed. The following examples have the common name components first so that all the related metrics are sorted together:\n  * <code>prometheus\\_tsdb\\_head\\_truncations\\_closed\\_total<\/code>\n  * <code>prometheus\\_tsdb\\_head\\_truncations\\_established\\_total<\/code>\n  * <code>prometheus\\_tsdb\\_head\\_truncations\\_failed\\_total<\/code>\n  * <code>prometheus\\_tsdb\\_head\\_truncations\\_total<\/code>\n\n  The following examples are also valid, but are following a different trade-off. 
They are easier to read individually, but unrelated metrics like <code>prometheus\\_tsdb\\_head\\_series<\/code> might get sorted in between.\n  * <code>prometheus\\_tsdb\\_head\\_closed\\_truncations\\_total<\/code>\n  * <code>prometheus\\_tsdb\\_head\\_established\\_truncations\\_total<\/code>\n  * <code>prometheus\\_tsdb\\_head\\_failed\\_truncations\\_total<\/code>\n  * <code>prometheus\\_tsdb\\_head\\_truncations\\_total<\/code>\n* ...should represent the same logical thing-being-measured across all label\n  dimensions.\n  * request duration\n  * bytes of data transfer\n  * instantaneous resource usage as a percentage\n\nAs a rule of thumb, either the `sum()` or the `avg()` over all dimensions of a\ngiven metric should be meaningful (though not necessarily useful). If it is not\nmeaningful, split the data up into multiple metrics. For example, having the\ncapacity of various queues in one metric is good, while mixing the capacity of a\nqueue with the current number of elements in the queue is not.\n\n## Labels\n\nUse labels to differentiate the characteristics of the thing that is being measured:\n\n * `api_http_requests_total` - differentiate request types: `operation=\"create|update|delete\"`\n * `api_request_duration_seconds` - differentiate request stages: `stage=\"extract|transform|load\"`\n\nDo not put the label names in the metric name, as this introduces redundancy\nand will cause confusion if the respective labels are aggregated away.\n\nCAUTION: Remember that every unique combination of key-value label\npairs represents a new time series, which can dramatically increase the amount\nof data stored. Do not use labels to store dimensions with high cardinality\n(many different label values), such as user IDs, email addresses, or other\nunbounded sets of values.\n\n\n## Base units\n\nPrometheus does not have any units hard coded. For better compatibility, base\nunits should be used. 
The following lists some metrics families with their base unit.\nThe list is not exhaustive.\n\n| Family | Base unit | Remark |\n| -------| --------- | ------ |\n| Time   | seconds   |        |\n| Temperature | celsius | _celsius_ is preferred over _kelvin_ for practical reasons. _kelvin_ is acceptable as a base unit in special cases like color temperature or where temperature has to be absolute. |\n| Length | meters | |\n| Bytes  | bytes | |\n| Bits   | bytes | To avoid confusion combining different metrics, always use _bytes_, even where _bits_ appear more common. |\n| Percent | ratio | Values are 0\u20131 (rather than 0\u2013100). `ratio` is only used as a suffix for names like `disk_usage_ratio`. The usual metric name follows the pattern `A_per_B`. |\n| Voltage | volts | |\n| Electric current | amperes | |\n| Energy | joules | |\n| Power  | | Prefer exporting a counter of joules, then `rate(joules[5m])` gives you power in Watts. |\n| Mass   | grams | _grams_ is preferred over _kilograms_ to avoid issues with the _kilo_ prefix. 
|","site":"prometheus","answers_cleaned":"    title  Metric and label naming sort rank  1        Metric and label naming  The metric and label conventions presented in this document are not required for using Prometheus  but can serve as both a style guide and a collection of best practices  Individual organizations may want to approach some of these practices  e g  naming conventions  differently      Metric names  A metric name          must comply with the  data model   docs concepts data model  metric names and labels  for valid characters       should have a  single word  application prefix relevant to the domain the   metric belongs to  The prefix is sometimes referred to as  namespace  by   client libraries  For metrics specific to an application  the prefix is   usually the application name itself  Sometimes  however  metrics are more   generic  like standardized metrics exported by client libraries  Examples       code  b prometheus  b   notifications  total  code       specific to the Prometheus server       code  b process  b   cpu  seconds  total  code       exported by many client libraries       code  b http  b   request  duration  seconds  code       for all HTTP requests       must have a single unit  i e  do not mix seconds with milliseconds  or seconds with bytes        should use base units  e g  seconds  bytes  meters   not milliseconds  megabytes  kilometers  See below for a list of base units       should have a suffix describing the unit  in plural form  Note that an accumulating count has  total  as a suffix  in addition to the unit if applicable  Also note that this applies to units in the narrow sense  like the units in the table below   but not to countable things in general  For example   code connections  code  or  code notifications  code  are not considered units for this rule and do not have to be at the end of the metric name   See also examples in the next paragraph        code http  request  duration   b seconds  b   code       
code node  memory  usage   b bytes  b   code       code http  requests   b total  b   code       for a unit less accumulating count       code process  cpu   b seconds  total  b   code       for an accumulating count with unit       code foobar build b   info  b   code       for a pseudo metric that provides  metadata  https   www robustperception io exposing the software version to prometheus  about the running binary       code data  pipeline  last  record  processed   b timestamp seconds  b   code       for a timestamp that tracks the time of the latest record processed in a data processing pipeline       may order its name components in a way that leads to convenient grouping when a list of metric names is sorted lexicographically  as long as all the other rules are followed  The following examples have their the common name components first so that all the related metrics are sorted together       code prometheus  tsdb  head  truncations  closed  total  code       code prometheus  tsdb  head  truncations  established  total  code       code prometheus  tsdb  head  truncations  failed  total  code       code prometheus  tsdb  head  truncations  total  code     The following examples are also valid  but are following a different trade off  They are easier to read individually  but unrelated metrics like  code prometheus  tsdb  head  series  code  might get sorted in between       code prometheus  tsdb  head  closed  truncations  total  code       code prometheus  tsdb  head  established  truncations  total  code       code prometheus  tsdb  head  failed  truncations  total  code       code prometheus  tsdb  head  truncations  total  code       should represent the same logical thing being measured across all label   dimensions      request duration     bytes of data transfer     instantaneous resource usage as a percentage  As a rule of thumb  either the  sum    or the  avg    over all dimensions of a given metric should be meaningful  though not necessarily 
useful   If it is not meaningful  split the data up into multiple metrics  For example  having the capacity of various queues in one metric is good  while mixing the capacity of a queue with the current number of elements in the queue is not      Labels  Use labels to differentiate the characteristics of the thing that is being measured       api http requests total    differentiate request types   operation  create update delete       api request duration seconds    differentiate request stages   stage  extract transform load    Do not put the label names in the metric name  as this introduces redundancy and will cause confusion if the respective labels are aggregated away   CAUTION  Remember that every unique combination of key value label pairs represents a new time series  which can dramatically increase the amount of data stored  Do not use labels to store dimensions with high cardinality  many different label values   such as user IDs  email addresses  or other unbounded sets of values       Base units  Prometheus does not have any units hard coded  For better compatibility  base units should be used  The following lists some metrics families with their base unit  The list is not exhaustive     Family   Base unit   Remark                                     Time     seconds                Temperature   celsius    celsius  is preferred over  kelvin  for practical reasons   kelvin  is acceptable as a base unit in special cases like color temperature or where temperature has to be absolute      Length   meters       Bytes    bytes       Bits     bytes   To avoid confusion combining different metrics  always use  bytes   even where  bits  appear more common      Percent   ratio   Values are 0 1  rather than 0 100    ratio  is only used as a suffix for names like  disk usage ratio   The usual metric name follows the pattern  A per B       Voltage   volts       Electric current   amperes       Energy   joules       Power      Prefer exporting a counter of joules  
then  rate joules 5m    gives you power in Watts      Mass     grams    grams  is preferred over  kilograms  to avoid issues with the  kilo  prefix   "}
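The naming and base-unit rules above lend themselves to a simple lint pass. The following is an illustrative sketch, not an official tool: the function name and the (deliberately incomplete) table of non-base units are made up for the example, while the character regex follows the Prometheus data model:

```python
import re

# Valid metric-name characters per the Prometheus data model.
METRIC_NAME_RE = re.compile(r"^[a-zA-Z_:][a-zA-Z0-9_:]*$")

# A few non-base units mapped to the base unit the guide recommends
# (illustrative subset of the table above, not exhaustive).
NON_BASE_UNITS = {
    "milliseconds": "seconds",
    "megabytes": "bytes",
    "kilometers": "meters",
    "kilograms": "grams",
    "percent": "ratio",
}

def lint_metric_name(name: str) -> list[str]:
    """Return a list of issues found in a metric name (illustrative checks)."""
    issues = []
    if not METRIC_NAME_RE.match(name):
        issues.append("invalid characters for the data model")
    for component in name.split("_"):
        if component in NON_BASE_UNITS:
            issues.append(
                f"use base unit '{NON_BASE_UNITS[component]}' instead of '{component}'"
            )
    return issues

print(lint_metric_name("http_request_duration_seconds"))  # []
print(lint_metric_name("api_latency_milliseconds"))
# ["use base unit 'seconds' instead of 'milliseconds'"]
```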
{"questions":"prometheus numerous other generic integration points in Prometheus This page lists some In addition to and Integrations there are title Integrations sortrank 5","answers":"---\ntitle: Integrations\nsort_rank: 5\n---\n\n# Integrations\n\nIn addition to [client libraries](\/docs\/instrumenting\/clientlibs\/) and\n[exporters and related libraries](\/docs\/instrumenting\/exporters\/), there are\nnumerous other generic integration points in Prometheus. This page lists some\nof the integrations with these.\n\n\nNot all integrations are listed here, due to overlapping functionality or still\nbeing in development. The [exporter default\nport](https:\/\/github.com\/prometheus\/prometheus\/wiki\/Default-port-allocations)\nwiki page also happens to include a few non-exporter integrations that fit in\nthese categories.\n\n## File Service Discovery\n\nFor service discovery mechanisms not natively supported by Prometheus,\n[file-based service discovery](\/docs\/operating\/configuration\/#%3Cfile_sd_config%3E) provides an interface for integrating.\n\n * [Kuma](https:\/\/github.com\/kumahq\/kuma\/tree\/master\/app\/kuma-prometheus-sd)\n * [Lightsail](https:\/\/github.com\/n888\/prometheus-lightsail-sd)\n * [Netbox](https:\/\/github.com\/FlxPeters\/netbox-prometheus-sd)\n * [Packet](https:\/\/github.com\/packethost\/prometheus-packet-sd)\n * [Scaleway](https:\/\/github.com\/scaleway\/prometheus-scw-sd)\n\n## Remote Endpoints and Storage\n\nThe [remote write](\/docs\/operating\/configuration\/#remote_write) and [remote read](\/docs\/operating\/configuration\/#remote_read)\nfeatures of Prometheus allow transparently sending and receiving samples. This\nis primarily intended for long term storage. 
It is recommended that you perform\ncareful evaluation of any solution in this space to confirm it can handle your\ndata volumes.\n\n  * [AppOptics](https:\/\/github.com\/solarwinds\/prometheus2appoptics): write\n  * [AWS Timestream](https:\/\/github.com\/dpattmann\/prometheus-timestream-adapter): read and write\n  * [Azure Data Explorer](https:\/\/github.com\/cosh\/PrometheusToAdx): read and write\n  * [Azure Event Hubs](https:\/\/github.com\/bryanklewis\/prometheus-eventhubs-adapter): write\n  * [Chronix](https:\/\/github.com\/ChronixDB\/chronix.ingester): write\n  * [Cortex](https:\/\/github.com\/cortexproject\/cortex): read and write\n  * [CrateDB](https:\/\/github.com\/crate\/crate_adapter): read and write\n  * [Elasticsearch](https:\/\/www.elastic.co\/guide\/en\/beats\/metricbeat\/master\/metricbeat-metricset-prometheus-remote_write.html): write\n  * [Gnocchi](https:\/\/gnocchi.osci.io\/prometheus.html): write\n  * [Google BigQuery](https:\/\/github.com\/KohlsTechnology\/prometheus_bigquery_remote_storage_adapter): read and write\n  * [Google Cloud Spanner](https:\/\/github.com\/google\/truestreet): read and write\n  * [Grafana Mimir](https:\/\/github.com\/grafana\/mimir): read and write\n  * [Graphite](https:\/\/github.com\/prometheus\/prometheus\/tree\/main\/documentation\/examples\/remote_storage\/remote_storage_adapter): write\n  * [GreptimeDB](https:\/\/github.com\/GreptimeTeam\/greptimedb): read and write\n  * [InfluxDB](https:\/\/docs.influxdata.com\/influxdb\/v1.8\/supported_protocols\/prometheus): read and write\n  * [Instana](https:\/\/www.instana.com\/docs\/ecosystem\/prometheus\/#remote-write): write\n  * [IRONdb](https:\/\/github.com\/circonus-labs\/irondb-prometheus-adapter): read and write\n  * [Kafka](https:\/\/github.com\/Telefonica\/prometheus-kafka-adapter): write\n  * [M3DB](https:\/\/m3db.io\/docs\/integrations\/prometheus\/): read and write\n  * 
[Mezmo](https:\/\/docs.mezmo.com\/telemetry-pipelines\/prometheus-remote-write-pipeline-source): write\n  * [New Relic](https:\/\/docs.newrelic.com\/docs\/set-or-remove-your-prometheus-remote-write-integration): write\n  * [OpenTSDB](https:\/\/github.com\/prometheus\/prometheus\/tree\/main\/documentation\/examples\/remote_storage\/remote_storage_adapter): write\n  * [QuasarDB](https:\/\/doc.quasardb.net\/master\/user-guide\/integration\/prometheus.html): read and write\n  * [SignalFx](https:\/\/github.com\/signalfx\/metricproxy#prometheus): write\n  * [Splunk](https:\/\/github.com\/kebe7jun\/ropee): read and write\n  * [Sysdig Monitor](https:\/\/docs.sysdig.com\/en\/docs\/installation\/prometheus-remote-write\/): write\n  * [TiKV](https:\/\/github.com\/bragfoo\/TiPrometheus): read and write\n  * [Thanos](https:\/\/github.com\/thanos-io\/thanos): read and write\n  * [VictoriaMetrics](https:\/\/github.com\/VictoriaMetrics\/VictoriaMetrics): write\n  * [Wavefront](https:\/\/github.com\/wavefrontHQ\/prometheus-storage-adapter): write\n\n[Prom-migrator](https:\/\/github.com\/timescale\/promscale\/tree\/master\/migration-tool\/cmd\/prom-migrator) is a tool for migrating data between remote storage systems.\n\n## Alertmanager Webhook Receiver\n\nFor notification mechanisms not natively supported by the Alertmanager, the\n[webhook receiver](\/docs\/alerting\/configuration\/#webhook_config) allows for integration.\n\n  * [alertmanager-webhook-logger](https:\/\/github.com\/tomtom-international\/alertmanager-webhook-logger): logs alerts\n  * [Alertsnitch](https:\/\/gitlab.com\/yakshaving.art\/alertsnitch): saves alerts to a MySQL database\n  * [All Quiet](https:\/\/allquiet.app\/integrations\/inbound\/prometheus): on-call & incident management\n  * [Asana](https:\/\/gitlab.com\/lupudu\/alertmanager-asana-bridge)\n  * [AWS SNS](https:\/\/github.com\/DataReply\/alertmanager-sns-forwarder)\n  * [Better Uptime](https:\/\/docs.betteruptime.com\/integrations\/prometheus)\n  * 
[Canopsis](https:\/\/git.canopsis.net\/canopsis-connectors\/connector-prometheus2canopsis)\n  * [DingTalk](https:\/\/github.com\/timonwong\/prometheus-webhook-dingtalk)\n  * [Discord](https:\/\/github.com\/benjojo\/alertmanager-discord)\n  * [GitLab](https:\/\/docs.gitlab.com\/ee\/operations\/metrics\/alerts.html#external-prometheus-instances)\n  * [Gotify](https:\/\/github.com\/DRuggeri\/alertmanager_gotify_bridge)\n  * [GELF](https:\/\/github.com\/b-com-software-basis\/alertmanager2gelf)\n  * [Heii On-Call](https:\/\/heiioncall.com\/guides\/prometheus-integration)\n  * [Icinga2](https:\/\/github.com\/vshn\/signalilo)\n  * [iLert](https:\/\/docs.ilert.com\/integrations\/prometheus)\n  * [IRC Bot](https:\/\/github.com\/multimfi\/bot)\n  * [JIRAlert](https:\/\/github.com\/free\/jiralert)\n  * [Matrix](https:\/\/github.com\/matrix-org\/go-neb)\n  * [Phabricator \/ Maniphest](https:\/\/github.com\/knyar\/phalerts)\n  * [prom2teams](https:\/\/github.com\/idealista\/prom2teams): forwards notifications to Microsoft Teams\n  * [Ansible Tower](https:\/\/github.com\/pja237\/prom2tower): call Ansible Tower (AWX) API on alerts (launch jobs etc.)\n  * [Signal](https:\/\/github.com\/dgl\/alertmanager-webhook-signald)\n  * [SIGNL4](https:\/\/www.signl4.com\/blog\/portfolio_item\/prometheus-alertmanager-mobile-alert-notification-duty-schedule-escalation)\n  * [Simplepush](https:\/\/codeberg.org\/stealth\/alertpush)\n  * [SMS](https:\/\/github.com\/messagebird\/sachet): supports [multiple providers](https:\/\/github.com\/messagebird\/sachet\/blob\/master\/examples\/config.yaml)\n  * [SNMP traps](https:\/\/github.com\/maxwo\/snmp_notifier)\n  * [Squadcast](https:\/\/support.squadcast.com\/docs\/prometheus)\n  * [STOMP](https:\/\/github.com\/thewillyhuman\/alertmanager-stomp-forwarder)\n  * [Telegram bot](https:\/\/github.com\/inCaller\/prometheus_bot)\n  * [xMatters](https:\/\/github.com\/xmatters\/xm-labs-prometheus)\n  * [XMPP 
Bot](https:\/\/github.com\/jelmer\/prometheus-xmpp-alerts)\n  * [Zenduty](https:\/\/docs.zenduty.com\/docs\/prometheus\/)\n  * [Zoom](https:\/\/github.com\/Code2Life\/nodess-apps\/tree\/master\/src\/zoom-alert-2.0)\n\n## Management\n\nPrometheus does not include configuration management functionality, allowing\nyou to integrate it with your existing systems or build on top of it.\n\n  * [Prometheus Operator](https:\/\/github.com\/coreos\/prometheus-operator): Manages Prometheus on top of Kubernetes\n  * [Promgen](https:\/\/github.com\/line\/promgen): Web UI and configuration generator for Prometheus and Alertmanager\n\n## Other\n\n  * [Alert analysis](https:\/\/github.com\/m0nikasingh\/am2ch): Stores alerts into a ClickHouse database and provides alert analysis dashboards\n  * [karma](https:\/\/github.com\/prymitive\/karma): alert dashboard\n  * [PushProx](https:\/\/github.com\/RobustPerception\/PushProx): Proxy to traverse NAT and similar network setups\n  * [Promdump](https:\/\/github.com\/ihcsim\/promdump): kubectl plugin to dump and restore data blocks\n  * [Promregator](https:\/\/github.com\/promregator\/promregator): discovery and scraping for Cloud Foundry applications\n  * [pint](https:\/\/github.com\/cloudflare\/pint): Prometheus rule linter","site":"prometheus","answers_cleaned":"    title  Integrations sort rank  5        Integrations  In addition to  client libraries   docs instrumenting clientlibs   and  exporters and related libraries   docs instrumenting exporters    there are numerous other generic integration points in Prometheus  This page lists some of the integrations with these    Not all integrations are listed here  due to overlapping functionality or still being in development  The  exporter default port  https   github com prometheus prometheus wiki Default port allocations  wiki page also happens to include a few non exporter integrations that fit in these categories      File Service Discovery  For service discovery mechanisms not 
natively supported by Prometheus   file based service discovery   docs operating configuration   3Cfile sd config 3E  provides an interface for integrating       Kuma  https   github com kumahq kuma tree master app kuma prometheus sd      Lightsail  https   github com n888 prometheus lightsail sd      Netbox  https   github com FlxPeters netbox prometheus sd      Packet  https   github com packethost prometheus packet sd      Scaleway  https   github com scaleway prometheus scw sd      Remote Endpoints and Storage  The  remote write   docs operating configuration  remote write  and  remote read   docs operating configuration  remote read  features of Prometheus allow transparently sending and receiving samples  This is primarily intended for long term storage  It is recommended that you perform careful evaluation of any solution in this space to confirm it can handle your data volumes        AppOptics  https   github com solarwinds prometheus2appoptics   write      AWS Timestream  https   github com dpattmann prometheus timestream adapter   read and write      Azure Data Explorer  https   github com cosh PrometheusToAdx   read and write      Azure Event Hubs  https   github com bryanklewis prometheus eventhubs adapter   write      Chronix  https   github com ChronixDB chronix ingester   write      Cortex  https   github com cortexproject cortex   read and write      CrateDB  https   github com crate crate adapter   read and write      Elasticsearch  https   www elastic co guide en beats metricbeat master metricbeat metricset prometheus remote write html   write      Gnocchi  https   gnocchi osci io prometheus html   write      Google BigQuery  https   github com KohlsTechnology prometheus bigquery remote storage adapter   read and write      Google Cloud Spanner  https   github com google truestreet   read and write      Grafana Mimir  https   github com grafana mimir   read and write      Graphite  https   github com prometheus prometheus tree main documentation 
examples remote storage remote storage adapter   write      GreptimeDB  https   github com GreptimeTeam greptimedb   read and write      InfluxDB  https   docs influxdata com influxdb v1 8 supported protocols prometheus   read and write      Instana  https   www instana com docs ecosystem prometheus  remote write   write      IRONdb  https   github com circonus labs irondb prometheus adapter   read and write      Kafka  https   github com Telefonica prometheus kafka adapter   write      M3DB  https   m3db io docs integrations prometheus    read and write      Mezmo  https   docs mezmo com telemetry pipelines prometheus remote write pipeline source   write      New Relic  https   docs newrelic com docs set or remove your prometheus remote write integration   write      OpenTSDB  https   github com prometheus prometheus tree main documentation examples remote storage remote storage adapter   write      QuasarDB  https   doc quasardb net master user guide integration prometheus html   read and write      SignalFx  https   github com signalfx metricproxy prometheus   write      Splunk  https   github com kebe7jun ropee   read and write      Sysdig Monitor  https   docs sysdig com en docs installation prometheus remote write    write      TiKV  https   github com bragfoo TiPrometheus   read and write      Thanos  https   github com thanos io thanos   read and write      VictoriaMetrics  https   github com VictoriaMetrics VictoriaMetrics   write      Wavefront  https   github com wavefrontHQ prometheus storage adapter   write   Prom migrator  https   github com timescale promscale tree master migration tool cmd prom migrator  is a tool for migrating data between remote storage systems      Alertmanager Webhook Receiver  For notification mechanisms not natively supported by the Alertmanager  the  webhook receiver   docs alerting configuration  webhook config  allows for integration        alertmanager webhook logger  https   github com tomtom international alertmanager 
webhook logger   logs alerts      Alertsnitch  https   gitlab com yakshaving art alertsnitch   saves alerts to a MySQL database      All Quiet  https   allquiet app integrations inbound prometheus   on call   incident management      Asana  https   gitlab com lupudu alertmanager asana bridge       AWS SNS  https   github com DataReply alertmanager sns forwarder       Better Uptime  https   docs betteruptime com integrations prometheus       Canopsis  https   git canopsis net canopsis connectors connector prometheus2canopsis       DingTalk  https   github com timonwong prometheus webhook dingtalk       Discord  https   github com benjojo alertmanager discord       GitLab  https   docs gitlab com ee operations metrics alerts html external prometheus instances       Gotify  https   github com DRuggeri alertmanager gotify bridge       GELF  https   github com b com software basis alertmanager2gelf       Heii On Call  https   heiioncall com guides prometheus integration       Icinga2  https   github com vshn signalilo       iLert  https   docs ilert com integrations prometheus       IRC Bot  https   github com multimfi bot       JIRAlert  https   github com free jiralert       Matrix  https   github com matrix org go neb       Phabricator   Maniphest  https   github com knyar phalerts       prom2teams  https   github com idealista prom2teams   forwards notifications to Microsoft Teams      Ansible Tower  https   github com pja237 prom2tower   call Ansible Tower  AWX  API on alerts  launch jobs etc        Signal  https   github com dgl alertmanager webhook signald       SIGNL4  https   www signl4 com blog portfolio item prometheus alertmanager mobile alert notification duty schedule escalation       Simplepush  https   codeberg org stealth alertpush       SMS  https   github com messagebird sachet   supports  multiple providers  https   github com messagebird sachet blob master examples config yaml       SNMP traps  https   github com maxwo snmp notifier       Squadcast  
https   support squadcast com docs prometheus       STOMP  https   github com thewillyhuman alertmanager stomp forwarder       Telegram bot  https   github com inCaller prometheus bot       xMatters  https   github com xmatters xm labs prometheus       XMPP Bot  https   github com jelmer prometheus xmpp alerts       Zenduty  https   docs zenduty com docs prometheus        Zoom  https   github com Code2Life nodess apps tree master src zoom alert 2 0      Management  Prometheus does not include configuration management functionality  allowing you to integrate it with your existing systems or build on top of it        Prometheus Operator  https   github com coreos prometheus operator   Manages Prometheus on top of Kubernetes      Promgen  https   github com line promgen   Web UI and configuration generator for Prometheus and Alertmanager     Other       Alert analysis  https   github com m0nikasingh am2ch   Stores alerts into a ClickHouse database and provides alert analysis dashboards      karma  https   github com prymitive karma   alert dashboard      PushProx  https   github com RobustPerception PushProx   Proxy to transverse NAT and similar network setups      Promdump  https   github com ihcsim promdump   kubectl plugin to dump and restore data blocks      Promregator  https   github com promregator promregator   discovery and scraping for Cloud Foundry applications      pint  https   github com cloudflare pint   Prometheus rule linter"}
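Most of the webhook receivers listed above consume Alertmanager's JSON webhook payload. A minimal sketch of that payload can help when testing a new integration; the receiver URL and label values below are placeholders, not part of any listed project:

```bash
# Minimal Alertmanager-style webhook payload (version 4).
payload='{"version":"4","status":"firing","receiver":"demo","alerts":[{"status":"firing","labels":{"alertname":"HighErrorRate","severity":"page"},"annotations":{"summary":"error rate above 5%"}}]}'

# Send it the way Alertmanager would (uncomment once a receiver is listening):
# curl -s -X POST -H 'Content-Type: application/json' -d "$payload" http://localhost:8080/alert
echo "$payload"
```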
{"questions":"prometheus sortrank 4 title Security Prometheus is a sophisticated system with many components and many integrations with other systems It can be deployed in a variety of trusted and untrusted environments Security Model","answers":"---\ntitle: Security\nsort_rank: 4\n---\n\n# Security Model\n\nPrometheus is a sophisticated system with many components and many integrations\nwith other systems. It can be deployed in a variety of trusted and untrusted\nenvironments.\n\nThis page describes the general security assumptions of Prometheus and the\nattack vectors that some configurations may enable.\n\nAs with any complex system, it is near certain that bugs will be found, some of\nthem security-relevant. If you find a _security bug_ please report it\nprivately to the maintainers listed in the MAINTAINERS of the relevant\nrepository and CC prometheus-team@googlegroups.com. We will fix the issue as soon\nas possible and coordinate a release date with you. You will be able to choose\nif you want public acknowledgement of your effort and if you want to be\nmentioned by name.\n\n### Automated security scanners\n\nSpecial note for security scanner users: Please be mindful with the reports produced.\nMost scanners are generic and produce lots of false positives. More and more\nreports are being sent to us, and it takes a significant amount of work to go\nthrough all of them and reply with the care you expect. This problem is particularly\nbad with Go and NPM dependency scanners.\n\nAs a courtesy to us and our time, we would ask you not to submit raw reports.\nInstead, please submit them with an analysis outlining which specific results\nare applicable to us and why.\n\nPrometheus is maintained by volunteers, not by a company. Therefore, fixing\nsecurity issues is done on a best-effort basis. 
We strive to release security\nfixes within 7 days for: Prometheus, Alertmanager, Node Exporter,\nBlackbox Exporter, and Pushgateway.\n\n## Prometheus\n\nIt is presumed that untrusted users have access to the Prometheus HTTP endpoint\nand logs. They have access to all time series information contained in the\ndatabase, plus a variety of operational\/debugging information.\n\nIt is also presumed that only trusted users have the ability to change the\ncommand line, configuration file, rule files and other aspects of the runtime\nenvironment of Prometheus and other components.\n\nWhich targets Prometheus scrapes, how often and with what other settings is\ndetermined entirely via the configuration file. The administrator may\ndecide to use information from service discovery systems, which combined with\nrelabelling may grant some of this control to anyone who can modify data in\nthat service discovery system.\n\nScraped targets may be run by untrusted users. It should not by default be\npossible for a target to expose data that impersonates a different target.  The\n`honor_labels` option removes this protection, as can certain relabelling\nsetups.\n\nAs of Prometheus 2.0, the `--web.enable-admin-api` flag controls access to the\nadministrative HTTP API which includes functionality such as deleting time\nseries. This is disabled by default. If enabled, administrative and mutating\nfunctionality will be accessible under the `\/api\/*\/admin\/` paths. The\n`--web.enable-lifecycle` flag controls HTTP reloads and shutdowns of\nPrometheus. This is also disabled by default. If enabled they will be\naccessible under the `\/-\/reload` and `\/-\/quit` paths.\n\nIn Prometheus 1.x, `\/-\/reload` and using `DELETE` on `\/api\/v1\/series` are\naccessible to anyone with access to the HTTP API. 
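If the lifecycle and admin endpoints are enabled with the flags above, they can be exercised with plain HTTP. A sketch, assuming a local server on the default port (the address is a placeholder):

```bash
# Placeholder address for a local Prometheus server.
PROM_URL=http://localhost:9090

# Reload the configuration (needs --web.enable-lifecycle):
# curl -X POST "$PROM_URL/-/reload"

# Delete a series via the admin API (needs --web.enable-admin-api);
# -g stops curl from globbing the [] in the match parameter:
# curl -g -X POST "$PROM_URL/api/v1/admin/tsdb/delete_series?match[]=up{job=\"node\"}"
echo "$PROM_URL"
```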
The `\/-\/quit` endpoint is\ndisabled by default, but can be enabled with the `-web.enable-remote-shutdown`\nflag.\n\nThe remote read feature allows anyone with HTTP access to send queries to the\nremote read endpoint. If, for example, PromQL queries were run directly\nagainst a relational database, then anyone with the ability to send queries\nto Prometheus (such as via Grafana) could run arbitrary SQL against that\ndatabase.\n\n## Alertmanager\n\nAny user with access to the Alertmanager HTTP endpoint has access to its data.\nThey can create and resolve alerts. They can create, modify and delete\nsilences.\n\nWhere notifications are sent to is determined by the configuration file. With\ncertain templating setups it is possible for notifications to end up at an\nalert-defined destination. For example, if notifications use an alert label as\nthe destination email address, anyone who can send alerts to the Alertmanager\ncan send notifications to any email address. If the alert-defined destination\nis a templatable secret field, anyone with access to either Prometheus or\nAlertmanager will be able to view the secrets.\n\nAny secret fields which are templatable are intended for routing notifications\nin the above use case. They are not intended as a way for secrets to be\nseparated out from the configuration files using the template file feature. Any\nsecrets stored in template files could be exfiltrated by anyone able to\nconfigure receivers in the Alertmanager configuration file. For example, in\nlarge setups, each team might have an Alertmanager configuration file fragment\nwhich they fully control; these fragments are then combined into the full final\nconfiguration file.\n\n## Pushgateway\n\nAny user with access to the Pushgateway HTTP endpoint can create, modify and\ndelete the metrics contained within. 
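That write access needs nothing more than plain HTTP; a sketch with a placeholder Pushgateway address:

```bash
# Placeholder address for a Pushgateway.
PGW_URL=http://localhost:9091

# Push a metric group for job="demo" (uncomment against a live Pushgateway):
# echo 'some_metric 3.14' | curl -s --data-binary @- "$PGW_URL/metrics/job/demo"

# Delete the whole group again:
# curl -s -X DELETE "$PGW_URL/metrics/job/demo"
echo "$PGW_URL"
```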
As the Pushgateway is usually scraped with\n`honor_labels` enabled, this means anyone with access to the Pushgateway can\ncreate any time series in Prometheus.\n\nThe `--web.enable-admin-api` flag controls access to the\nadministrative HTTP API, which includes functionality such as wiping all the existing\nmetric groups. This is disabled by default. If enabled, administrative\nfunctionality will be accessible under the `\/api\/*\/admin\/` paths.\n\n## Exporters\n\nExporters generally only talk to one configured instance with a preset set of\ncommands\/requests, which cannot be expanded via their HTTP endpoint.\n\nThere are also exporters such as the SNMP and Blackbox exporters that take\ntheir targets from URL parameters. Thus anyone with HTTP access to these\nexporters can make them send requests to arbitrary endpoints. As they also\nsupport client-side authentication, this could lead to a leak of secrets such\nas HTTP Basic Auth passwords or SNMP community strings. Challenge-response\nauthentication mechanisms such as TLS are not affected by this.\n\n## Client Libraries\n\nClient libraries are intended to be included in users' applications.\n\nIf using a client-library-provided HTTP handler, it should not be possible for\nmalicious requests that reach that handler to cause issues beyond those\nresulting from additional load and failed scrapes.\n\n## Authentication, Authorization, and Encryption\n\nPrometheus, and most exporters, support TLS, including authentication of clients\nvia TLS client certificates. Details on configuring Prometheus can be found [here](https:\/\/prometheus.io\/docs\/guides\/tls-encryption\/).\n\nThe Go projects share the same TLS library, based on the\nGo [crypto\/tls](https:\/\/golang.org\/pkg\/crypto\/tls) library.\nWe default to TLS 1.2 as the minimum version. 
Our policy regarding this is based on\n[Qualys SSL Labs](https:\/\/www.ssllabs.com\/) recommendations, where we strive to\nachieve a grade 'A' with a default configuration and correctly provided\ncertificates, while sticking as closely as possible to the upstream Go defaults.\nAchieving that grade provides a balance between perfect security and usability.\n\nTLS will be added to Java exporters in the future.\n\nIf you have special TLS needs, like a different cipher suite or older TLS\nversion, you can tune the minimum TLS version and the ciphers, as long as the\ncipher is not [marked as insecure](https:\/\/golang.org\/pkg\/crypto\/tls\/#InsecureCipherSuites)\nin the [crypto\/tls](https:\/\/golang.org\/pkg\/crypto\/tls) library. If that still\ndoes not suit you, the current TLS settings enable you to build a secure tunnel\nbetween the servers and reverse proxies with more special requirements.\n\nHTTP Basic Authentication is also supported. Basic Authentication can be\nused without TLS, but it will then expose usernames and passwords in cleartext\nover the network.\n\nOn the server side, basic authentication passwords are stored as hashes with the\n[bcrypt](https:\/\/en.wikipedia.org\/wiki\/Bcrypt) algorithm. It is your\nresponsibility to pick the number of rounds that matches your security\nstandards. More rounds make brute-force more complicated at the cost of more CPU\npower and more time to authenticate the requests.\n\nVarious Prometheus components support client-side authentication and\nencryption. If TLS client support is offered, there is often also an option\ncalled `insecure_skip_verify` which skips SSL verification.\n\n## API Security\n\nAs administrative and mutating endpoints are intended to be accessed via simple\ntools such as cURL, there is no built in\n[CSRF](https:\/\/en.wikipedia.org\/wiki\/Cross-site_request_forgery) protection as\nthat would break such use cases. 
Accordingly, when using a reverse proxy, you\nmay wish to block such paths to prevent CSRF.\n\nFor non-mutating endpoints, you may wish to set [CORS\nheaders](https:\/\/fetch.spec.whatwg.org\/#http-cors-protocol) such as\n`Access-Control-Allow-Origin` in your reverse proxy to prevent\n[XSS](https:\/\/en.wikipedia.org\/wiki\/Cross-site_scripting).\n\nIf you are composing PromQL queries that include input from untrusted users\n(e.g. URL parameters to console templates, or something you built yourself) who\nare not meant to be able to run arbitrary PromQL queries, make sure any\nuntrusted input is appropriately escaped to prevent injection attacks. For\nexample `up{job=\"<user_input>\"}` would become `up{job=\"\"} or\nsome_metric{zzz=\"\"}` if the `<user_input>` was `\"} or some_metric{zzz=\"`.\n\nFor those using Grafana, note that [dashboard permissions are not data source\npermissions](https:\/\/grafana.com\/docs\/grafana\/latest\/permissions\/#data-source-permissions),\nso they do not limit a user's ability to run arbitrary queries in proxy mode.\n\n## Secrets\n\nNon-secret information or fields may be available via the HTTP API and\/or logs.\n\nIn Prometheus, metadata retrieved from service discovery is not considered\nsecret. Throughout the Prometheus system, metrics are not considered secret.\n\nFields containing secrets in configuration files (marked explicitly as such in\nthe documentation) will not be exposed in logs or via the HTTP API. Secrets\nshould not be placed in other configuration fields, as it is common for\ncomponents to expose their configuration over their HTTP endpoint. It is the\nresponsibility of the user to protect files on disk from unwanted reads and\nwrites.\n\nSecrets from other sources used by dependencies (e.g. 
the `AWS_SECRET_KEY`\nenvironment variable as used by EC2 service discovery) may end up exposed due to\ncode outside of our control or due to functionality that happens to expose\nwherever it is stored.\n\n## Denial of Service\n\nThere are some mitigations in place for excess load or expensive queries.\nHowever, if too many or too expensive queries\/metrics are provided components\nwill fall over. It is more likely that a component will be accidentally taken\nout by a trusted user than by malicious action.\n\nIt is the responsibility of the user to ensure they provide components with\nsufficient resources including CPU, RAM, disk space, IOPS, file descriptors,\nand bandwidth.\n\nIt is recommended to monitor all components for failure, and to have them\nautomatically restart on failure.\n\n## Libraries\n\nThis document considers vanilla binaries built from the stock source code.\nInformation presented here does not apply if you modify Prometheus source code,\nor use Prometheus internals (beyond the official client library APIs) in your\nown code.\n\n## Build Process\n\nThe build pipeline for Prometheus runs on third-party providers to which many\nmembers of the Prometheus development team and the staff of those providers\nhave access. 
If you are concerned about the exact provenance of your binaries,\nit is recommended to build them yourself rather than relying on the\npre-built binaries provided by the project.\n\n## Prometheus-Community\n\nThe repositories under the [Prometheus-Community](https:\/\/github.com\/prometheus-community)\norganization are supported by third-party maintainers.\n\nIf you find a _security bug_ in the [Prometheus-Community](https:\/\/github.com\/prometheus-community) organization,\nplease report it privately to the maintainers listed in the MAINTAINERS of the\nrelevant repository and CC prometheus-team@googlegroups.com.\n\nSome repositories under that organization might have a different security model\nthan the ones presented in this document. In such a case, please refer to the\ndocumentation of those repositories.\n\n## External audits\n\n* In 2018, [CNCF](https:\/\/cncf.io) sponsored an external security audit by\n[cure53](https:\/\/cure53.de) which ran from April 2018 to June 2018. For more\ndetails, please read the [final report of the audit](\/assets\/downloads\/2018-06-11--cure53_security_audit.pdf).\n\n* In 2020, CNCF sponsored a\n[second audit by cure53](\/assets\/downloads\/2020-07-21--cure53_security_audit_node_exporter.pdf)\nof Node Exporter.\n\n* In 2023, CNCF sponsored a\n[software supply chain security assessment of Prometheus](\/assets\/downloads\/2023-04-19--chainguard_supply_chain_assessment.pdf)\nby Chainguard.","site":"prometheus"}
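The PromQL injection risk described earlier can be mitigated by escaping untrusted input before it is spliced into a label matcher. A minimal shell sketch (the function name is ours; it only escapes backslashes and double quotes, so newlines in the input still need separate handling):

```bash
# Escape characters that would terminate a PromQL string literal.
escape_promql() {
  printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/"/\\"/g'
}

user_input='"} or some_metric{zzz="'
query="up{job=\"$(escape_promql "$user_input")\"}"
echo "$query"   # the injection attempt stays inside the label value
```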
{"questions":"redis weight 5 topics replication topics replication md title Redis replication aliases Replication How Redis supports high availability and failover with replication docs manual replication md docs manual replication","answers":"---\ntitle: Redis replication\nlinkTitle: Replication\nweight: 5\ndescription: How Redis supports high availability and failover with replication\naliases: [\n    \/topics\/replication,\n    \/topics\/replication.md,\n    \/docs\/manual\/replication,\n    \/docs\/manual\/replication.md\n]\n---\n\nAt the base of Redis replication (excluding the high availability features provided as an additional layer by Redis Cluster or Redis Sentinel) there is a *leader follower* (master-replica) replication that is simple to use and configure. It allows replica Redis instances to be exact copies of master instances. The replica will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it *regardless* of what happens to the master.\n\nThis system works using three main mechanisms:\n\n1. When a master and a replica instance are well-connected, the master keeps the replica updated by sending a stream of commands to the replica to replicate the effects on the dataset happening on the master side due to: client writes, keys expired or evicted, and any other action changing the master dataset.\n2. When the link between the master and the replica breaks, because of network issues or because a timeout is sensed in the master or the replica, the replica reconnects and attempts to proceed with a partial resynchronization: it means that it will try to just obtain the part of the stream of commands it missed during the disconnection.\n3. When a partial resynchronization is not possible, the replica will ask for a full resynchronization. 
This will involve a more complex process in which the master needs to create a snapshot of all its data, send it to the replica, and then continue sending the stream of commands as the dataset changes.\n\nBy default, Redis uses asynchronous replication, which, being low-latency and\nhigh-performance, is the natural replication mode for the vast majority of Redis\nuse cases. However, Redis replicas asynchronously and periodically acknowledge to the master the amount of data\nthey have received. So the master does not wait every time\nfor a command to be processed by the replicas; however, it knows, if needed, which\nreplica has already processed which command. This allows for optional synchronous replication.\n\nSynchronous replication of certain data can be requested by the clients using\nthe `WAIT` command. However, `WAIT` is only able to ensure that there is the\nspecified number of acknowledged copies in the other Redis instances; it does not\nturn a set of Redis instances into a CP system with strong consistency: acknowledged\nwrites can still be lost during a failover, depending on the exact configuration\nof the Redis persistence. However, with `WAIT` the probability of losing a write\nafter a failure event is greatly reduced, remaining only in certain hard-to-trigger failure\nmodes.\n\nYou can check the Redis Sentinel or Redis Cluster documentation for more information\nabout high availability and failover. The rest of this document mainly describes the basic characteristics of Redis replication.\n\n### Important facts about Redis replication\n\n* Redis uses asynchronous replication, with asynchronous replica-to-master acknowledgement of the amount of data processed.\n* A master can have multiple replicas.\n* Replicas are able to accept connections from other replicas. Aside from connecting a number of replicas to the same master, replicas can also be connected to other replicas in a cascading-like structure. 
Since Redis 4.0, all the sub-replicas will receive exactly the same replication stream from the master.\n* Redis replication is non-blocking on the master side. This means that the master will continue to handle queries when one or more replicas perform the initial synchronization or a partial resynchronization.\n* Replication is also largely non-blocking on the replica side. While the replica is performing the initial synchronization, it can handle queries using the old version of the dataset, assuming you configured Redis to do so in redis.conf.  Otherwise, you can configure Redis replicas to return an error to clients if the replication stream is down. However, after the initial sync, the old dataset must be deleted and the new one must be loaded. The replica will block incoming connections during this brief window (that can be as long as many seconds for very large datasets). Since Redis 4.0 you can configure Redis so that the deletion of the old data set happens in a different thread, however loading the new initial dataset will still happen in the main thread and block the replica.\n* Replication can be used both for scalability, to have multiple replicas for read-only queries (for example, slow O(N) operations can be offloaded to replicas), or simply for improving data safety and high availability.\n* You can use replication to avoid the cost of having the master writing the full dataset to disk: a typical technique involves configuring your master `redis.conf` to avoid persisting to disk at all, then connect a replica configured to save from time to time, or with AOF enabled. However, this setup must be handled with care, since a restarting master will start with an empty dataset: if the replica tries to sync with it, the replica will be emptied as well.\n\n## Safety of replication when master has persistence turned off\n\nIn setups where Redis replication is used, it is strongly advised to have\npersistence turned on in the master and in the replicas. 
When this is not possible, for example because of latency concerns due to very slow disks, instances should be configured to **avoid restarting automatically** after a reboot.

To better understand why masters with persistence turned off and configured to auto-restart are dangerous, check the following failure mode, where data is wiped from the master and all its replicas:

1. We have a setup with node A acting as master, with persistence turned off, and nodes B and C replicating from node A.
2. Node A crashes; however, it has some auto-restart system that restarts the process. Since persistence is turned off, the node restarts with an empty data set.
3. Nodes B and C replicate from node A, which is empty, so they'll effectively destroy their copy of the data.

When Redis Sentinel is used for high availability, turning off persistence on the master together with auto-restart of the process is dangerous as well. For example, the master can restart fast enough for Sentinel not to detect a failure, so that the failure mode described above happens.

Whenever data safety is important, and replication is used with a master configured without persistence, auto-restart of instances should be disabled.

## How Redis replication works

Every Redis master has a replication ID: it is a large pseudo-random string that marks a given history of the dataset. Each master also has an offset that increments for every byte of replication stream produced and sent to replicas, in order to update the state of the replicas with the new changes modifying the dataset. The replication offset is incremented even if no replica is actually connected, so basically every given pair of:

    Replication ID, offset

identifies an exact version of the dataset of a master.

When replicas connect to masters, they use the `PSYNC` command to send their old master replication ID and the offsets they processed so far.
This way the master can send just the incremental part needed. However, if there is not enough *backlog* in the master's buffers, or if the replica refers to a history (replication ID) which is no longer known, then a full resynchronization happens: in this case the replica gets a full copy of the dataset, from scratch.

This is how a full synchronization works in more detail:

The master starts a background saving process to produce an RDB file. At the same time it starts to buffer all new write commands received from the clients. When the background saving is complete, the master transfers the database file to the replica, which saves it on disk, and then loads it into memory. The master then sends all the buffered commands to the replica. This is done as a stream of commands, in the same format as the Redis protocol itself.

You can try it yourself via telnet. Connect to the Redis port while the server is doing some work and issue the `SYNC` command. You'll see a bulk transfer, and then every command received by the master will be re-issued in the telnet session. Actually `SYNC` is an old protocol no longer used by newer Redis instances, but it is still there for backward compatibility: it does not allow partial resynchronizations, so now `PSYNC` is used instead.

As already said, replicas are able to automatically reconnect when the master-replica link goes down for some reason. If the master receives multiple concurrent replica synchronization requests, it performs a single background save to serve all of them.

## Replication ID explained

In the previous section we said that if two instances have the same replication ID and replication offset, they have exactly the same data. However, it is useful to understand what exactly the replication ID is, and why instances actually have two replication IDs: the main ID and the secondary ID.

A replication ID basically marks a given *history* of the data set.
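The partial-versus-full resynchronization decision described above can be sketched in very simplified form. This is an illustrative model, not the actual Redis implementation: the function name and the backlog representation are invented, and the backlog is modeled as a simple byte range.

```python
# Illustrative sketch (not Redis source): deciding between a partial
# and a full resynchronization, in the spirit of PSYNC.

def psync_decision(master_replid, backlog_start, backlog_end,
                   replica_replid, replica_offset):
    """Return ("CONTINUE", bytes_to_send) for a partial resync, or
    ("FULLRESYNC", None) when the replica refers to an unknown history
    or to an offset no longer covered by the master's backlog."""
    same_history = replica_replid == master_replid
    in_backlog = backlog_start <= replica_offset <= backlog_end
    if same_history and in_backlog:
        return ("CONTINUE", backlog_end - replica_offset)
    return ("FULLRESYNC", None)

# Replica shares the master's history and is only 500 bytes behind:
print(psync_decision("abc", 9_000, 10_000, "abc", 9_500))  # ('CONTINUE', 500)
# Replica refers to a history the master no longer knows: full copy.
print(psync_decision("abc", 9_000, 10_000, "old", 9_500))  # ('FULLRESYNC', None)
```

The key point the sketch captures is that a partial resync needs both conditions at once: a known history *and* an offset still inside the backlog window.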
Every time an instance restarts from scratch as a master, or a replica is promoted to master, a new replication ID is generated for this instance. The replicas connected to a master inherit its replication ID after the handshake. So two instances with the same ID are related by the fact that they hold the same data, but potentially at a different time. It is the offset that works as a logical time to understand, for a given history (replication ID), who holds the most up-to-date data set.

For instance, if two instances A and B have the same replication ID, but one with offset 1000 and one with offset 1023, it means that the first lacks certain commands applied to the data set. It also means that A, by applying just a few commands, may reach exactly the same state as B.

The reason why Redis instances have two replication IDs is because of replicas that are promoted to masters. After a failover, the promoted replica still needs to remember its past replication ID, because that replication ID was the one of the former master. In this way, when other replicas sync with the new master, they will try to perform a partial resynchronization using the old master replication ID. This works as expected, because when the replica is promoted to master it sets its secondary ID to its main ID, remembering the offset at which this ID switch happened. Later it selects a new random replication ID, because a new history begins. When handling the new replicas connecting, the master matches their IDs and offsets both with the current ID and the secondary ID (up to a given offset, for safety).
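The ID bookkeeping just described can be sketched as follows. This is an illustrative model, not Redis source: the class and method names are invented, and the backlog check from the previous section is omitted for brevity.

```python
# Illustrative sketch (not Redis source): a promoted replica keeps its
# old master's replication ID as a secondary ID, valid only up to the
# offset at which the switch happened.
import secrets

class PromotedMaster:
    def __init__(self, inherited_replid, offset):
        self.main_id = inherited_replid   # inherited from the old master
        self.secondary_id = None
        self.switch_offset = None
        self.offset = offset

    def promote(self):
        # Remember the old history and where it ended, then begin a new one.
        self.secondary_id = self.main_id
        self.switch_offset = self.offset
        self.main_id = secrets.token_hex(20)  # new random 40-char ID

    def accepts_partial(self, replid, replica_offset):
        if replid == self.main_id:
            return True
        # The old history is honored only up to the switch point.
        return replid == self.secondary_id and replica_offset <= self.switch_offset

m = PromotedMaster("old-master-id", offset=5000)
m.promote()
print(m.accepts_partial("old-master-id", 4800))  # True: within the old history
print(m.accepts_partial("old-master-id", 5200))  # False: past the switch point
```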
In short this means that after a failover, replicas connecting to the newly promoted master don't have to perform a full sync.

In case you wonder why a replica promoted to master needs to change its replication ID after a failover: it is possible that the old master is still working as a master because of some network partition; retaining the same replication ID would violate the fact that the same ID and same offset of any two random instances mean they have the same data set.

## Diskless replication

Normally a full resynchronization requires creating an RDB file on disk, then reloading the same RDB from disk to feed the replicas with the data.

With slow disks this can be a very stressful operation for the master. Redis version 2.8.18 was the first version to support diskless replication. In this setup the child process directly sends the RDB over the wire to replicas, without using the disk as intermediate storage.

## Configuration

Configuring basic Redis replication is trivial: just add the following line to the replica configuration file:

    replicaof 192.168.1.1 6379

Of course you need to replace 192.168.1.1 6379 with your master's IP address (or hostname) and port. Alternatively, you can call the `REPLICAOF` command and the master host will start a sync with the replica.

There are also a few parameters for tuning the replication backlog kept in memory by the master to perform the partial resynchronization. See the example `redis.conf` shipped with the Redis distribution for more information.

Diskless replication can be enabled using the `repl-diskless-sync` configuration parameter. The delay before starting the transfer, to wait for more replicas to arrive after the first one, is controlled by the `repl-diskless-sync-delay` parameter.
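For example, the relevant `redis.conf` lines to enable diskless replication with a 5-second delay could look like this (the value is illustrative; pick a delay that fits how quickly your replicas connect):

    repl-diskless-sync yes
    repl-diskless-sync-delay 5

A longer delay lets more replicas attach to the same in-flight transfer; once the transfer starts, late-arriving replicas must wait for the next one.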
Please refer to the example `redis.conf` file in the Redis distribution for more details.

## Read-only replica

Since Redis 2.6, replicas support a read-only mode that is enabled by default. This behavior is controlled by the `replica-read-only` option in the redis.conf file, and can be enabled and disabled at runtime using `CONFIG SET`.

Read-only replicas reject all write commands, so that it is not possible to write to a replica by mistake. This does not mean that the feature is intended to expose a replica instance to the internet, or more generally to a network where untrusted clients exist, because administrative commands like `DEBUG` or `CONFIG` are still enabled. The [Security](/topics/security) page describes how to secure a Redis instance.

You may wonder why it is possible to revert the read-only setting and have replica instances that can be targeted by write operations. The answer is that writable replicas exist only for historical reasons. Using writable replicas can result in inconsistency between the master and the replica, so it is not recommended. To understand in which situations this can be a problem, we need to understand how replication works. Changes on the master are replicated by propagating regular Redis commands to the replica. When a key expires on the master, this is propagated as a DEL command. A key that exists on the master may have been deleted or expired on the replica, or have a different type there; such a key will react differently than intended to commands like DEL, INCR or RPOP propagated from the master. The propagated command may fail on the replica or produce a different outcome. To minimize the risks (if you insist on using writable replicas) we suggest you follow these recommendations:

* Don't write to keys in a writable replica that are also used on the master. (This can be hard to guarantee if you don't have control over all the clients that write to the master.)

* Don't configure an instance as a writable replica as an intermediary step when upgrading a set of instances in a running system. In general, don't configure an instance as a writable replica if it can ever be promoted to a master, if you want to guarantee data consistency.

Historically, there were some use cases that were considered legitimate for writable replicas. As of version 7.0, these use cases are all obsolete and the same can be achieved by other means. For example:

* Computing slow Set or Sorted Set operations and storing the result in temporary local keys using commands like `SUNIONSTORE` and `ZINTERSTORE`. Instead, use commands that return the result without storing it, such as `SUNION` and `ZINTER`.

* Using the `SORT` command (which is not considered a read-only command because of the optional STORE option, and therefore cannot be used on a read-only replica). Instead, use `SORT_RO`, which is a read-only command.

* Using `EVAL` and `EVALSHA`, which are also not considered read-only commands, because the Lua script may call write commands. Instead, use `EVAL_RO` and `EVALSHA_RO`, where the Lua script can only call read-only commands.

While writes to a replica are discarded if the replica and the master resync, or if the replica is restarted, there is no guarantee that they will sync automatically.

Before version 4.0, writable replicas were incapable of expiring keys with a time to live set. This means that if you use `EXPIRE` or other commands that set a maximum TTL for a key, the key will leak: while you may no longer see it when accessing it with read commands, you will see it in the count of keys and it will still use memory. Redis 4.0 RC3 and greater versions are able to evict keys with a TTL as masters do, with the exception of keys written in DB numbers greater than 63 (but by default Redis instances only have 16 databases). Note though that even in versions greater than 4.0, using `EXPIRE` on a key that could ever exist on the master can cause inconsistency between the replica and the master.

Also note that since Redis 4.0 replica writes are only local, and are not propagated to sub-replicas attached to the instance. Sub-replicas instead always receive a replication stream identical to the one sent by the top-level master to the intermediate replicas. So for example in the following setup:

    A ---> B ---> C

Even if `B` is writable, C will not see `B`'s writes and will instead have a dataset identical to that of the master instance `A`.

## Setting a replica to authenticate to a master

If your master has a password via `requirepass`, it's trivial to configure the replica to use that password in all sync operations.

To do it on a running instance, use `redis-cli` and type:

    config set masterauth <password>

To set it permanently, add this to your config file:

    masterauth <password>

## Allow writes only with N attached replicas

Starting with Redis 2.8, you can configure a Redis master to accept write queries only if at least N replicas are currently connected to the master.

However, because Redis uses asynchronous replication it is not possible to ensure the replica actually received a given write, so there is always a window for data loss.

This is how the feature works:

* Redis replicas ping the master every second, acknowledging the amount of replication stream processed.
* Redis masters remember the last time they received a ping from every replica.
* The user can configure a minimum number of replicas that have a lag not greater than a maximum number of seconds.

If there are at least N replicas, with a lag less than M seconds, then the write is accepted.

You may think of it as a best-effort data safety mechanism, where consistency is not ensured for a given write, but at least the time window for data loss is restricted to a given number of seconds.
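The write gate described above can be sketched as a simple check over per-replica acknowledgment timestamps. This is an illustrative model, not the actual Redis implementation; the function name and parameters are invented.

```python
# Illustrative sketch (not Redis source): the min-replicas write gate.
import time

def write_allowed(replica_last_ack, min_replicas, max_lag_seconds, now=None):
    """replica_last_ack: one timestamp per replica, the time of its last
    ping/ack. A write is accepted only if at least min_replicas replicas
    have acked within the last max_lag_seconds."""
    now = time.time() if now is None else now
    good = sum(1 for t in replica_last_ack if now - t <= max_lag_seconds)
    return good >= min_replicas

# Two replicas acked 1s and 4s ago; require 2 replicas with lag <= 10s:
print(write_allowed([100 - 1, 100 - 4], min_replicas=2,
                    max_lag_seconds=10, now=100))  # True
# Require lag <= 3s: only one replica qualifies, so writes are refused:
print(write_allowed([100 - 1, 100 - 4], min_replicas=2,
                    max_lag_seconds=3, now=100))   # False
```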
In general, bounded data loss is better than unbounded data loss.

If the conditions are not met, the master will instead reply with an error and the write will not be accepted.

There are two configuration parameters for this feature:

* min-replicas-to-write `<number of replicas>`
* min-replicas-max-lag `<number of seconds>`

For more information, please check the example `redis.conf` file shipped with the Redis source distribution.

## How Redis replication deals with expires on keys

Redis expires allow keys to have a limited time to live (TTL). Such a feature depends on the ability of an instance to count the time; however, Redis replicas correctly replicate keys with expires, even when such keys are altered using Lua scripts.

To implement such a feature, Redis cannot rely on the master and replica having synchronized clocks, since this is a problem that cannot be solved and would result in race conditions and diverging data sets. So Redis uses three main techniques to make the replication of expired keys work:

1. Replicas don't expire keys; instead they wait for masters to expire the keys. When a master expires a key (or evicts it because of LRU), it synthesizes a `DEL` command which is transmitted to all the replicas.
2. However, because of master-driven expiration, sometimes replicas may still have in memory keys that are already logically expired, since the master was not able to provide the `DEL` command in time. To deal with that, the replica uses its logical clock to report that a key does not exist **only for read operations** that don't violate the consistency of the data set (as new commands from the master will arrive). In this way replicas avoid reporting logically expired keys that still exist. In practical terms, an HTML fragment cache that uses replicas to scale will avoid returning items that are already older than the desired time to live.
3. During Lua script execution no key expirations are performed.
As a Lua script runs, conceptually the time in the master is frozen, so that a given key will either exist or not for all the time the script runs. This prevents keys from expiring in the middle of a script, and is needed to send the same script to the replica in a way that is guaranteed to have the same effects on the data set.

Once a replica is promoted to a master it starts to expire keys independently, and does not require any help from its old master.

## Configuring replication in Docker and NAT

When Docker, or other types of containers using port forwarding or Network Address Translation, is used, Redis replication needs some extra care, especially when using Redis Sentinel or other systems where the output of the master's `INFO` or `ROLE` commands is scanned to discover replicas' addresses.

The problem is that the `ROLE` command, and the replication section of the `INFO` output, when issued on a master instance, show replicas as having the IP address they use to connect to the master, which, in environments using NAT, may be different from the logical address of the replica instance (the one that clients should use to connect to replicas).

Similarly, the replicas are listed with the listening port configured in `redis.conf`, which may be different from the forwarded port in case the port is remapped.

To fix both issues, it is possible, since Redis 3.2.2, to force a replica to announce an arbitrary pair of IP and port to the master. The two configuration directives to use are:

    replica-announce-ip 5.5.5.5
    replica-announce-port 1234

And they are documented in the example `redis.conf` of recent Redis distributions.

## The INFO and ROLE commands

There are two Redis commands that provide a lot of information on the current replication parameters of master and replica instances. One is `INFO`.
If the command is called with the `replication` argument, as `INFO replication`, only information relevant to replication is displayed. Another, more computer-friendly, command is `ROLE`, which provides the replication status of masters and replicas together with their replication offsets, list of connected replicas, and so forth.

## Partial sync after restarts and failovers

Since Redis 4.0, when an instance is promoted to master after a failover, it is still able to perform a partial resynchronization with the replicas of the old master. To do so, the replica remembers the old replication ID and offset of its former master, so it can provide part of the backlog to the connecting replicas even if they ask for the old replication ID.

However, the new replication ID of the promoted replica will be different, since it constitutes a different history of the data set. For example, the old master can become available again and continue accepting writes for some time, so using the same replication ID in the promoted replica would violate the rule that a replication ID and offset pair identifies only a single data set.

Moreover, replicas, when powered off gently and restarted, are able to store in the `RDB` file the information needed to resync with their master. This is useful in case of upgrades. When this is needed, it is better to use the `SHUTDOWN` command in order to perform a `save & quit` operation on the replica.

It is not possible to partially sync a replica that restarted via the AOF file.
However, the instance may be switched to RDB persistence before shutting it down, then restarted, and finally AOF can be enabled again.

## `Maxmemory` on replicas

By default, a replica ignores `maxmemory` (unless it is promoted to master after a failover, or manually). This means that the eviction of keys is handled by the master, which sends the DEL commands to the replica as keys are evicted on the master side.

This behavior ensures that masters and replicas stay consistent, which is usually what you want. However, if your replica is writable, or you want the replica to have a different memory setting, and you are sure all the writes performed on the replica are idempotent, then you may change this default (but be sure to understand what you are doing).

Note that since the replica by default does not evict, it may end up using more memory than what is set via `maxmemory` (since there are certain buffers that may be larger on the replica, or data structures may sometimes take more memory, and so forth). Make sure you monitor your replicas, and make sure they have enough memory to never hit a real out-of-memory condition before the master hits the configured `maxmemory` setting.

To change this behavior, you can allow a replica to not ignore `maxmemory`.
The configuration directive to use is:

    replica-ignore-maxmemory no
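Assuming the directive is modifiable at runtime on your Redis version (most replication-related directives are), the same change can be applied to a running replica via `redis-cli`:

    config set replica-ignore-maxmemory no

As with any `CONFIG SET`, remember to also update `redis.conf` (or use `CONFIG REWRITE`) so the setting survives a restart.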
only commands   While writes to a replica will be discarded if the replica and the master resync or if the replica is restarted  there is no guarantee that they will sync automatically   Before version 4 0  writable replicas were incapable of expiring keys with a time to live set  This means that if you use  EXPIRE  or other commands that set a maximum TTL for a key  the key will leak  and while you may no longer see it while accessing it with read commands  you will see it in the count of keys and it will still use memory  Redis 4 0 RC3 and greater versions are able to evict keys with TTL as masters do  with the exceptions of keys written in DB numbers greater than 63  but by default Redis instances only have 16 databases   Note though that even in versions greater than 4 0  using  EXPIRE  on a key that could ever exists on the master can cause inconsistency between the replica and the master   Also note that since Redis 4 0 replica writes are only local  and are not propagated to sub replicas attached to the instance  Sub replicas instead will always receive the replication stream identical to the one sent by the top level master to the intermediate replicas  So for example in the following setup       A      B      C  Even if  B  is writable  C will not see  B  writes and will instead have identical dataset as the master instance  A       Setting a replica to authenticate to a master  If your master has a password via  requirepass   it s trivial to configure the replica to use that password in all sync operations   To do it on a running instance  use  redis cli  and type       config set masterauth  password   To set it permanently  add this to your config file       masterauth  password      Allow writes only with N attached replicas  Starting with Redis 2 8  you can configure a Redis master to accept write queries only if at least N replicas are currently connected to the master   However  because Redis uses asynchronous replication it is not possible to 
ensure the replica actually received a given write  so there is always a window for data loss   This is how the feature works     Redis replicas ping the master every second  acknowledging the amount of replication stream processed    Redis masters will remember the last time it received a ping from every replica    The user can configure a minimum number of replicas that have a lag not greater than a maximum number of seconds   If there are at least N replicas  with a lag less than M seconds  then the write will be accepted   You may think of it as a best effort data safety mechanism  where consistency is not ensured for a given write  but at least the time window for data loss is restricted to a given number of seconds  In general bound data loss is better than unbound one   If the conditions are not met  the master will instead reply with an error and the write will not be accepted   There are two configuration parameters for this feature     min replicas to write   number of replicas     min replicas max lag   number of seconds    For more information  please check the example  redis conf  file shipped with the Redis source distribution      How Redis replication deals with expires on keys  Redis expires allow keys to have a limited time to live  TTL   Such a feature depends on the ability of an instance to count the time  however Redis replicas correctly replicate keys with expires  even when such keys are altered using Lua scripts   To implement such a feature Redis cannot rely on the ability of the master and replica to have synced clocks  since this is a problem that cannot be solved and would result in race conditions and diverging data sets  so Redis uses three main techniques to make the replication of expired keys able to work   1  Replicas don t expire keys  instead they wait for masters to expire the keys  When a master expires a key  or evict it because of LRU   it synthesizes a  DEL  command which is transmitted to all the replicas  2  However 
because of master driven expire  sometimes replicas may still have in memory keys that are already logically expired  since the master was not able to provide the  DEL  command in time  To deal with that the replica uses its logical clock to report that a key does not exist   only for read operations   that don t violate the consistency of the data set  as new commands from the master will arrive   In this way replicas avoid reporting logically expired keys that are still existing  In practical terms  an HTML fragments cache that uses replicas to scale will avoid returning items that are already older than the desired time to live  3  During Lua scripts executions no key expiries are performed  As a Lua script runs  conceptually the time in the master is frozen  so that a given key will either exist or not for all the time the script runs  This prevents keys expiring in the middle of a script  and is needed to send the same script to the replica in a way that is guaranteed to have the same effects in the data set   Once a replica is promoted to a master it will start to expire keys independently  and will not require any help from its old master      Configuring replication in Docker and NAT  When Docker  or other types of containers using port forwarding  or Network Address Translation is used  Redis replication needs some extra care  especially when using Redis Sentinel or other systems where the master  INFO  or  ROLE  commands output is scanned to discover replicas  addresses   The problem is that the  ROLE  command  and the replication section of the  INFO  output  when issued into a master instance  will show replicas as having the IP address they use to connect to the master  which  in environments using NAT may be different compared to the logical address of the replica instance  the one that clients should use to connect to replicas    Similarly the replicas will be listed with the listening port configured into  redis conf   that may be different from the 
forwarded port in case the port is remapped   To fix both issues  it is possible  since Redis 3 2 2  to force a replica to announce an arbitrary pair of IP and port to the master  The two configurations directives to use are       replica announce ip 5 5 5 5     replica announce port 1234  And are documented in the example  redis conf  of recent Redis distributions      The INFO and ROLE command  There are two Redis commands that provide a lot of information on the current replication parameters of master and replica instances  One is  INFO   If the command is called with the  replication  argument as  INFO replication  only information relevant to the replication are displayed  Another more computer friendly command is  ROLE   that provides the replication status of masters and replicas together with their replication offsets  list of connected replicas and so forth      Partial sync after restarts and failovers  Since Redis 4 0  when an instance is promoted to master after a failover  it will still be able to perform a partial resynchronization with the replicas of the old master  To do so  the replica remembers the old replication ID and offset of its former master  so can provide part of the backlog to the connecting replicas even if they ask for the old replication ID   However the new replication ID of the promoted replica will be different  since it constitutes a different history of the data set  For example  the master can return available and can continue accepting writes for some time  so using the same replication ID in the promoted replica would violate the rule that a replication ID and offset pair identifies only a single data set   Moreover  replicas   when powered off gently and restarted   are able to store in the  RDB  file the information needed to resync with their master  This is useful in case of upgrades  When this is needed  it is better to use the  SHUTDOWN  command in order to perform a  save   quit  operation on the replica   It is not 
possible to partially sync a replica that restarted via the AOF file  However the instance may be turned to RDB persistence before shutting down it  than can be restarted  and finally AOF can be enabled again       Maxmemory  on replicas  By default  a replica will ignore  maxmemory   unless it is promoted to master after a failover or manually   It means that the eviction of keys will be handled by the master  sending the DEL commands to the replica as keys evict in the master side   This behavior ensures that masters and replicas stay consistent  which is usually what you want  However  if your replica is writable  or you want the replica to have a different memory setting  and you are sure all the writes performed to the replica are idempotent  then you may change this default  but be sure to understand what you are doing    Note that since the replica by default does not evict  it may end up using more memory than what is set via  maxmemory   since there are certain buffers that may be larger on the replica  or data structures may sometimes take more memory and so forth   Make sure you monitor your replicas  and make sure they have enough memory to never hit a real out of memory condition before the master hits the configured  maxmemory  setting   To change this behavior  you can allow a replica to not ignore the  maxmemory   The configuration directives to use is       replica ignore maxmemory no"}
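The two-replication-ID rule described above can be illustrated with a small sketch. This is not Redis source code; the `MasterState` fields and `can_partial_resync` function are hypothetical names modeling how a master might decide, from a `PSYNC <replid> <offset>` request, whether the missing range can be served from the backlog or a full sync is needed:

```python
# Illustrative sketch (NOT the Redis implementation): partial-resync decision
# based on the main ID, the secondary ID, and the replication backlog.
from dataclasses import dataclass

@dataclass
class MasterState:
    replid: str                # current (main) replication ID
    replid2: str               # secondary ID inherited from the former master
    second_replid_offset: int  # replid2 is valid only up to this offset
    backlog_start: int         # first offset still held in the backlog
    master_offset: int         # current end of the replication stream

def can_partial_resync(m: MasterState, req_replid: str, req_offset: int) -> bool:
    """True if a replica asking for (req_replid, req_offset) can be served
    with a partial resynchronization instead of a full one."""
    # The requested ID must match one of the two known histories...
    if req_replid == m.replid:
        id_ok = True
    elif req_replid == m.replid2 and req_offset <= m.second_replid_offset:
        id_ok = True  # the old history is accepted only up to the ID switch
    else:
        id_ok = False
    # ...and the missing range must still be inside the backlog.
    return id_ok and m.backlog_start <= req_offset <= m.master_offset

m = MasterState(replid="new-id", replid2="old-id",
                second_replid_offset=5000, backlog_start=4000, master_offset=6000)
print(can_partial_resync(m, "old-id", 4500))   # replica of the former master
print(can_partial_resync(m, "unknown", 4500))  # unrelated history: full sync
```

A replica of the former master (asking with `old-id` at offset 4500) gets a partial sync; an instance with an unknown replication ID, or an offset that fell out of the backlog, forces a full synchronization.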
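The `min-replicas-to-write` / `min-replicas-max-lag` gating described above can be sketched as a pure function. Again this is an illustration, not the Redis implementation; `write_allowed` and its parameters are hypothetical names for the check the master performs using the per-second replica acknowledgments:

```python
# Illustrative sketch (NOT the Redis implementation): accept a write only if
# enough replicas acknowledged the replication stream recently enough.
import time

def write_allowed(replica_last_ack, min_replicas, max_lag, now):
    """replica_last_ack: timestamp of each replica's last REPLCONF ACK.
    A write is accepted when at least `min_replicas` replicas have a lag
    (now - last ack) not greater than `max_lag` seconds."""
    good = sum(1 for last_ack in replica_last_ack if now - last_ack <= max_lag)
    return good >= min_replicas

now = time.time()
acks = [now - 2, now - 4, now - 30]   # three replicas, one badly lagging
print(write_allowed(acks, min_replicas=2, max_lag=10, now=now))  # True
print(write_allowed(acks, min_replicas=3, max_lag=10, now=now))  # False
```

With `min-replicas-to-write 3` and `min-replicas-max-lag 10`, the lagging third replica causes the master to reject writes with an error, exactly the best-effort bound on the data-loss window described above.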
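The master-driven expire behavior (techniques 1 and 2 above) can also be modeled in a few lines. This toy `ReplicaDB` class is an assumption-laden sketch, not Redis code: it shows a replica that hides logically expired keys from reads using its own clock, while only the master's replicated `DEL` actually removes them from memory:

```python
# Illustrative sketch (NOT the Redis implementation): a replica never expires
# keys itself; reads consult the logical clock, deletion waits for the master.
class ReplicaDB:
    def __init__(self):
        self.data = {}     # key -> value
        self.expires = {}  # key -> absolute expiry time, replicated from master

    def get(self, key, now):
        # Read path: a logically expired key is reported as missing,
        # but it is NOT removed from memory.
        if key in self.expires and now >= self.expires[key]:
            return None
        return self.data.get(key)

    def apply_from_master(self, command, key, *args):
        # Only the replicated command stream mutates the data set.
        if command == "SET":
            self.data[key] = args[0]
        elif command == "PEXPIREAT":
            self.expires[key] = args[0]
        elif command == "DEL":   # synthesized by the master on expiry/eviction
            self.data.pop(key, None)
            self.expires.pop(key, None)

db = ReplicaDB()
db.apply_from_master("SET", "page:/home", "<html>...</html>")
db.apply_from_master("PEXPIREAT", "page:/home", 1000)
print(db.get("page:/home", now=999))   # value still visible
print(db.get("page:/home", now=1001))  # hidden from reads...
print("page:/home" in db.data)         # ...yet still in memory, awaiting DEL
```

This is why an HTML-fragment cache scaled with replicas never serves items past their TTL, even though the key may briefly remain in the replica's memory until the master's `DEL` arrives.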
{"questions":"redis High availability with Sentinel aliases title High availability with Redis Sentinel docs manual sentinel docs manual sentinel md weight 4 topics sentinel High availability for non clustered Redis","answers":"---\ntitle: \"High availability with Redis Sentinel\"\nlinkTitle: \"High availability with Sentinel\"\nweight: 4\ndescription: High availability for non-clustered Redis\naliases: [\n    \/topics\/sentinel,\n    \/docs\/manual\/sentinel,\n    \/docs\/manual\/sentinel.md\n]\n---\n\nRedis Sentinel provides high availability for Redis when not using [Redis Cluster](\/docs\/manual\/scaling). \n\nRedis Sentinel also provides other collateral tasks such as monitoring,\nnotifications and acts as a configuration provider for clients.\n\nThis is the full list of Sentinel capabilities at a macroscopic level (i.e. the *big picture*):\n\n* **Monitoring**. Sentinel constantly checks if your master and replica instances are working as expected.\n* **Notification**. Sentinel can notify the system administrator, or other computer programs, via an API, that something is wrong with one of the monitored Redis instances.\n* **Automatic failover**. If a master is not working as expected, Sentinel can start a failover process where a replica is promoted to master, the other additional replicas are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting.\n* **Configuration provider**. Sentinel acts as a source of authority for clients service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address.\n\n## Sentinel as a distributed system\n\nRedis Sentinel is a distributed system:\n\nSentinel itself is designed to run in a configuration where there are multiple Sentinel processes cooperating together. 
The advantage of having multiple Sentinel processes cooperating are the following:\n\n1. Failure detection is performed when multiple Sentinels agree about the fact a given master is no longer available. This lowers the probability of false positives.\n2. Sentinel works even if not all the Sentinel processes are working, making the system robust against failures. There is no fun in having a failover system which is itself a single point of failure, after all.\n\nThe sum of Sentinels, Redis instances (masters and replicas) and clients\nconnecting to Sentinel and Redis, are also a larger distributed system with\nspecific properties. In this document concepts will be introduced gradually\nstarting from basic information needed in order to understand the basic\nproperties of Sentinel, to more complex information (that are optional) in\norder to understand how exactly Sentinel works.\n\n## Sentinel quick start\n\n### Obtaining Sentinel\n\nThe current version of Sentinel is called **Sentinel 2**. 
It is a rewrite of\nthe initial Sentinel implementation using stronger and simpler-to-predict\nalgorithms (that are explained in this documentation).\n\nA stable release of Redis Sentinel is shipped since Redis 2.8.\n\nNew developments are performed in the *unstable* branch, and new features\nsometimes are back ported into the latest stable branch as soon as they are\nconsidered to be stable.\n\nRedis Sentinel version 1, shipped with Redis 2.6, is deprecated and should not be used.\n\n### Running Sentinel\n\nIf you are using the `redis-sentinel` executable (or if you have a symbolic\nlink with that name to the `redis-server` executable) you can run Sentinel\nwith the following command line:\n\n    redis-sentinel \/path\/to\/sentinel.conf\n\nOtherwise you can use directly the `redis-server` executable starting it in\nSentinel mode:\n\n    redis-server \/path\/to\/sentinel.conf --sentinel\n\nBoth ways work the same.\n\nHowever **it is mandatory** to use a configuration file when running Sentinel, as this file will be used by the system in order to save the current state that will be reloaded in case of restarts. Sentinel will simply refuse to start if no configuration file is given or if the configuration file path is not writable.\n\nSentinels by default run **listening for connections to TCP port 26379**, so\nfor Sentinels to work, port 26379 of your servers **must be open** to receive\nconnections from the IP addresses of the other Sentinel instances.\nOtherwise Sentinels can't talk and can't agree about what to do, so failover\nwill never be performed.\n\n### Fundamental things to know about Sentinel before deploying\n\n1. You need at least three Sentinel instances for a robust deployment.\n2. The three Sentinel instances should be placed into computers or virtual machines that are believed to fail in an independent way. So for example different physical servers or Virtual Machines executed on different availability zones.\n3. 
Sentinel + Redis distributed system does not guarantee that acknowledged writes are retained during failures, since Redis uses asynchronous replication. However there are ways to deploy Sentinel that make the window to lose writes limited to certain moments, while there are other less secure ways to deploy it.\n4. You need Sentinel support in your clients. Popular client libraries have Sentinel support, but not all.\n5. There is no HA setup which is safe if you don't test from time to time in development environments, or even better if you can, in production environments, if they work. You may have a misconfiguration that will become apparent only when it's too late (at 3am when your master stops working).\n6. **Sentinel, Docker, or other forms of Network Address Translation or Port Mapping should be mixed with care**: Docker performs port remapping, breaking Sentinel auto discovery of other Sentinel processes and the list of replicas for a master. Check the [section about _Sentinel and Docker_](#sentinel-docker-nat-and-possible-issues) later in this document for more information.\n\n### Configuring Sentinel\n\nThe Redis source distribution contains a file called `sentinel.conf`\nthat is a self-documented example configuration file you can use to\nconfigure Sentinel, however a typical minimal configuration file looks like the\nfollowing:\n\n    sentinel monitor mymaster 127.0.0.1 6379 2\n    sentinel down-after-milliseconds mymaster 60000\n    sentinel failover-timeout mymaster 180000\n    sentinel parallel-syncs mymaster 1\n\n    sentinel monitor resque 192.168.1.3 6380 4\n    sentinel down-after-milliseconds resque 10000\n    sentinel failover-timeout resque 180000\n    sentinel parallel-syncs resque 5\n\nYou only need to specify the masters to monitor, giving to each separated\nmaster (that may have any number of replicas) a different name. There is no\nneed to specify replicas, which are auto-discovered. 
Sentinel will update the\nconfiguration automatically with additional information about replicas (in\norder to retain the information in case of restart). The configuration is\nalso rewritten every time a replica is promoted to master during a failover\nand every time a new Sentinel is discovered.\n\nThe example configuration above basically monitors two sets of Redis\ninstances, each composed of a master and an undefined number of replicas.\nOne set of instances is called `mymaster`, and the other `resque`.\n\nThe meaning of the arguments of `sentinel monitor` statements is the following:\n\n    sentinel monitor <master-name> <ip> <port> <quorum>\n\nFor the sake of clarity, let's check line by line what the configuration\noptions mean:\n\nThe first line is used to tell Redis to monitor a master called *mymaster*,\nthat is at address 127.0.0.1 and port 6379, with a quorum of 2. Everything\nis pretty obvious but the **quorum** argument:\n\n* The **quorum** is the number of Sentinels that need to agree about the fact the master is not reachable, in order to really mark the master as failing, and eventually start a failover procedure if possible.\n* However **the quorum is only used to detect the failure**. In order to actually perform a failover, one of the Sentinels need to be elected leader for the failover and be authorized to proceed. 
This only happens with the vote of the **majority of the Sentinel processes**.\n\nSo for example if you have 5 Sentinel processes, and the quorum for a given\nmaster set to the value of 2, this is what happens:\n\n* If two Sentinels agree at the same time about the master being unreachable, one of the two will try to start a failover.\n* If there are at least a total of three Sentinels reachable, the failover will be authorized and will actually start.\n\nIn practical terms this means during failures **Sentinel never starts a failover if the majority of Sentinel processes are unable to talk** (aka no failover in the minority partition).\n\n### Other Sentinel options\n\nThe other options are almost always in the form:\n\n    sentinel <option_name> <master_name> <option_value>\n\nAnd are used for the following purposes:\n\n* `down-after-milliseconds` is the time in milliseconds an instance should not\nbe reachable (either does not reply to our PINGs or it is replying with an\nerror) for a Sentinel starting to think it is down.\n* `parallel-syncs` sets the number of replicas that can be reconfigured to use\nthe new master after a failover at the same time. The lower the number, the\nmore time it will take for the failover process to complete, however if the\nreplicas are configured to serve old data, you may not want all the replicas to\nre-synchronize with the master at the same time. While the replication\nprocess is mostly non blocking for a replica, there is a moment when it stops to\nload the bulk data from the master. 
You may want to make sure only one replica\nat a time is not reachable by setting this option to the value of 1.\n\nAdditional options are described in the rest of this document and\ndocumented in the example `sentinel.conf` file shipped with the Redis\ndistribution.\n\nConfiguration parameters can be modified at runtime:\n\n* Master-specific configuration parameters are modified using `SENTINEL SET`.\n* Global configuration parameters are modified using `SENTINEL CONFIG SET`.\n\nSee the [_Reconfiguring Sentinel at runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.\n\n### Example Sentinel deployments\n\nNow that you know the basic information about Sentinel, you may wonder where\nyou should place your Sentinel processes, how many Sentinel processes you need\nand so forth. This section shows a few example deployments.\n\nWe use ASCII art in order to show you configuration examples in a *graphical*\nformat, this is what the different symbols means:\n\n    +--------------------+\n    | This is a computer |\n    | or VM that fails   |\n    | independently. 
We  |\n    | call it a \"box\"    |\n    +--------------------+\n\nWe write inside the boxes what they are running:\n\n    +-------------------+\n    | Redis master M1   |\n    | Redis Sentinel S1 |\n    +-------------------+\n\nDifferent boxes are connected by lines, to show that they are able to talk:\n\n    +-------------+               +-------------+\n    | Sentinel S1 |---------------| Sentinel S2 |\n    +-------------+               +-------------+\n\nNetwork partitions are shown as interrupted lines using slashes:\n\n    +-------------+                +-------------+\n    | Sentinel S1 |------ \/\/ ------| Sentinel S2 |\n    +-------------+                +-------------+\n\nAlso note that:\n\n* Masters are called M1, M2, M3, ..., Mn.\n* Replicas are called R1, R2, R3, ..., Rn (R stands for *replica*).\n* Sentinels are called S1, S2, S3, ..., Sn.\n* Clients are called C1, C2, C3, ..., Cn.\n* When an instance changes role because of Sentinel actions, we put it inside square brackets, so [M1] means an instance that is now a master because of Sentinel intervention.\n\nNote that we will never show **setups where just two Sentinels are used**, since\nSentinels always need **to talk with the majority** in order to start a\nfailover.\n\n#### Example 1: just two Sentinels, DON'T DO THIS\n\n    +----+         +----+\n    | M1 |---------| R1 |\n    | S1 |         | S2 |\n    +----+         +----+\n\n    Configuration: quorum = 1\n\n* In this setup, if the master M1 fails, R1 will be promoted since the two Sentinels can reach agreement about the failure (obviously with quorum set to 1) and can also authorize a failover because the majority is two. So apparently it could superficially work, however check the next points to see why this setup is broken.\n* If the box where M1 is running stops working, also S1 stops working. 
The Sentinel running in the other box S2 will not be able to authorize a failover, so the system will become not available.\n\nNote that a majority is needed in order to order different failovers, and later propagate the latest configuration to all the Sentinels. Also note that the ability to failover in a single side of the above setup, without any agreement, would be very dangerous:\n\n    +----+           +------+\n    | M1 |----\/\/-----| [M1] |\n    | S1 |           | S2   |\n    +----+           +------+\n\nIn the above configuration we created two masters (assuming S2 could failover\nwithout authorization) in a perfectly symmetrical way. Clients may write\nindefinitely to both sides, and there is no way to understand when the\npartition heals what configuration is the right one, in order to prevent\na *permanent split brain condition*.\n\nSo please **deploy at least three Sentinels in three different boxes** always.\n\n#### Example 2: basic setup with three boxes\n\nThis is a very simple setup, that has the advantage to be simple to tune\nfor additional safety. It is based on three boxes, each box running both\na Redis process and a Sentinel process.\n\n\n           +----+\n           | M1 |\n           | S1 |\n           +----+\n              |\n    +----+    |    +----+\n    | R2 |----+----| R3 |\n    | S2 |         | S3 |\n    +----+         +----+\n\n    Configuration: quorum = 2\n\nIf the master M1 fails, S2 and S3 will agree about the failure and will\nbe able to authorize a failover, making clients able to continue.\n\nIn every Sentinel setup, as Redis uses asynchronous replication, there is\nalways the risk of losing some writes because a given acknowledged write\nmay not be able to reach the replica which is promoted to master. 
However in\nthe above setup there is a higher risk due to clients being partitioned away\nwith an old master, like in the following picture:\n\n             +----+\n             | M1 |\n             | S1 | <- C1 (writes will be lost)\n             +----+\n                |\n                \/\n                \/\n    +------+    |    +----+\n    | [M2] |----+----| R3 |\n    | S2   |         | S3 |\n    +------+         +----+\n\nIn this case a network partition isolated the old master M1, so the\nreplica R2 is promoted to master. However clients, like C1, that are\nin the same partition as the old master, may continue to write data\nto the old master. This data will be lost forever since when the partition\nwill heal, the master will be reconfigured as a replica of the new master,\ndiscarding its data set.\n\nThis problem can be mitigated using the following Redis replication\nfeature, that allows to stop accepting writes if a master detects that\nit is no longer able to transfer its writes to the specified number of replicas.\n\n    min-replicas-to-write 1\n    min-replicas-max-lag 10\n\nWith the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica. Since replication is asynchronous *not being able to write* actually means that the replica is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds.\n\nUsing this configuration, the old Redis master M1 in the above example, will become unavailable after 10 seconds. When the partition heals, the Sentinel configuration will converge to the new one, the client C1 will be able to fetch a valid configuration and will continue with the new master.\n\nHowever there is no free lunch. With this refinement, if the two replicas are\ndown, the master will stop accepting writes. 
It's a trade off.\n\n#### Example 3: Sentinel in the client boxes\n\nSometimes we have only two Redis boxes available, one for the master and\none for the replica. The configuration in the example 2 is not viable in\nthat case, so we can resort to the following, where Sentinels are placed\nwhere clients are:\n\n                +----+         +----+\n                | M1 |----+----| R1 |\n                |    |    |    |    |\n                +----+    |    +----+\n                          |\n             +------------+------------+\n             |            |            |\n             |            |            |\n          +----+        +----+      +----+\n          | C1 |        | C2 |      | C3 |\n          | S1 |        | S2 |      | S3 |\n          +----+        +----+      +----+\n\n          Configuration: quorum = 2\n\nIn this setup, the point of view Sentinels is the same as the clients: if\na master is reachable by the majority of the clients, it is fine.\nC1, C2, C3 here are generic clients, it does not mean that C1 identifies\na single client connected to Redis. It is more likely something like\nan application server, a Rails app, or something like that.\n\nIf the box where M1 and S1 are running fails, the failover will happen\nwithout issues, however it is easy to see that different network partitions\nwill result in different behaviors. 
For example Sentinel will not be able\nto setup if the network between the clients and the Redis servers is\ndisconnected, since the Redis master and replica will both be unavailable.\n\nNote that if C3 gets partitioned with M1 (hardly possible with\nthe network described above, but more likely possible with different\nlayouts, or because of failures at the software layer), we have a similar\nissue as described in Example 2, with the difference that here we have\nno way to break the symmetry, since there is just a replica and master, so\nthe master can't stop accepting queries when it is disconnected from its replica,\notherwise the master would never be available during replica failures.\n\nSo this is a valid setup but the setup in the Example 2 has advantages\nsuch as the HA system of Redis running in the same boxes as Redis itself\nwhich may be simpler to manage, and the ability to put a bound on the amount\nof time a master in the minority partition can receive writes.\n\n#### Example 4: Sentinel client side with less than three clients\n\nThe setup described in the Example 3 cannot be used if there are less than\nthree boxes in the client side (for example three web servers). In this\ncase we need to resort to a mixed setup like the following:\n\n                +----+         +----+\n                | M1 |----+----| R1 |\n                | S1 |    |    | S2 |\n                +----+    |    +----+\n                          |\n                   +------+-----+\n                   |            |\n                   |            |\n                +----+        +----+\n                | C1 |        | C2 |\n                | S3 |        | S4 |\n                +----+        +----+\n\n          Configuration: quorum = 3\n\nThis is similar to the setup in Example 3, but here we run four Sentinels\nin the four boxes we have available. 
If the master M1 becomes unavailable,
the other three Sentinels will perform the failover.

In theory this setup would still work after removing the box where C2 and S4
are running and setting the quorum to 2. However, it is unlikely that we want HA
on the Redis side without high availability in our application layer as well.

### Sentinel, Docker, NAT, and possible issues

Docker uses a technique called port mapping: programs running inside Docker
containers may be exposed with a different port compared to the one the
program believes to be using. This is useful in order to run multiple
containers using the same ports, at the same time, on the same server.

Docker is not the only software system where this happens; there are other
Network Address Translation setups where ports, and sometimes IP addresses
as well, may be remapped.

Remapping ports and addresses creates issues with Sentinel in two ways:

1. Sentinel auto-discovery of other Sentinels no longer works, since it is based on *hello* messages where each Sentinel announces the IP address and port at which it is listening for connections. However, Sentinels have no way to understand that an address or port is remapped, so they announce information that is not correct for other Sentinels to connect to them.
2.
Replicas are listed in the `INFO` output of a Redis master in a similar way: the address is detected by the master by checking the remote peer of the TCP connection, while the port is advertised by the replica itself during the handshake; however, the port may be wrong for the same reason as exposed in point 1.

Since Sentinels auto-detect replicas using the master's `INFO` output,
the detected replicas will not be reachable, and Sentinel will never be able to
fail over the master, since there are no good replicas from the point of view of
the system. This means there is currently no way to monitor with Sentinel a set of
master and replica instances deployed with Docker, **unless you instruct Docker
to map the ports 1:1**.

For the first problem, in case you want to run a set of Sentinel
instances using Docker with forwarded ports (or any other NAT setup where ports
are remapped), you can use the following two Sentinel configuration directives
in order to force Sentinel to announce a specific IP address and port:

    sentinel announce-ip <ip>
    sentinel announce-port <port>

Note that Docker has the ability to run in *host networking mode* (check the `--net=host` option for more information). This should create no issues since ports are not remapped in this setup.

### IP Addresses and DNS names

Older versions of Sentinel did not support host names and required IP addresses to be specified everywhere.
Starting with version 6.2, Sentinel has *optional* support for host names.

**This capability is disabled by default. If you're going to enable DNS/hostnames support, please note:**

1. The name resolution configuration on your Redis and Sentinel nodes must be reliable and be able to resolve addresses quickly. Unexpected delays in address resolution may have a negative impact on Sentinel.
2. You should use hostnames everywhere and avoid mixing hostnames and IP addresses.
To do that, use `replica-announce-ip <hostname>` and `sentinel announce-ip <hostname>` for all Redis and Sentinel instances, respectively.

Enabling the `resolve-hostnames` global configuration allows Sentinel to accept host names:

* As part of a `sentinel monitor` command
* As a replica address, if the replica uses a host name value for `replica-announce-ip`

Sentinel will accept host names as valid inputs and resolve them, but will still refer to IP addresses when announcing an instance, updating configuration files, etc.

Enabling the `announce-hostnames` global configuration makes Sentinel use host names instead. This affects replies to clients, values written in configuration files, the `REPLICAOF` command issued to replicas, etc.

This behavior may not be compatible with all Sentinel clients, which may explicitly expect an IP address.

Using host names may be useful when clients use TLS to connect to instances and require a name rather than an IP address in order to perform certificate ASN matching.

## A quick tutorial

In the next sections of this document, all the details about the [_Sentinel API_](#sentinel-api),
configuration and semantics will be covered incrementally. However, for people
who want to play with the system ASAP, this section is a tutorial that shows
how to configure and interact with 3 Sentinel instances.

Here we assume that the instances are executed at ports 5000, 5001, and 5002.
We also assume that you have a running Redis master at port 6379 with a
replica running at port 6380.
We will use the IPv4 loopback address 127.0.0.1
everywhere during the tutorial, assuming you are running the simulation
on your personal computer.

The three Sentinel configuration files should look like the following:

    port 5000
    sentinel monitor mymaster 127.0.0.1 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    sentinel parallel-syncs mymaster 1

The other two configuration files will be identical but using 5001 and 5002
as port numbers.

A few things to note about the above configuration:

* The master set is called `mymaster`. It identifies the master and its replicas. Since each *master set* has a different name, Sentinel can monitor different sets of masters and replicas at the same time.
* The quorum was set to the value of 2 (last argument of the `sentinel monitor` configuration directive).
* The `down-after-milliseconds` value is 5000 milliseconds, that is, 5 seconds, so the master will be detected as failing as soon as we don't receive any reply to our pings within this amount of time.

Once you start the three Sentinels, you'll see a few messages they log, like:

    +monitor master mymaster 127.0.0.1 6379 quorum 2

This is a Sentinel event, and you can receive these kinds of events via Pub/Sub
if you `SUBSCRIBE` to the event name, as specified later in the [_Pub/Sub Messages_ section](#pubsub-messages).

Sentinel generates and logs different events during failure detection and
failover.

Asking Sentinel about the state of a master
---

The most obvious thing to do with Sentinel to get started is to check if the
master it is monitoring is doing well:

    $ redis-cli -p 5000
    127.0.0.1:5000> sentinel master mymaster
     1) "name"
     2) "mymaster"
     3) "ip"
     4) "127.0.0.1"
     5) "port"
     6) "6379"
     7) "runid"
     8) "953ae6a589449c13ddefaee3538d356d287f509b"
     9) "flags"
    10) "master"
    11)
"link-pending-commands"
    12) "0"
    13) "link-refcount"
    14) "1"
    15) "last-ping-sent"
    16) "0"
    17) "last-ok-ping-reply"
    18) "735"
    19) "last-ping-reply"
    20) "735"
    21) "down-after-milliseconds"
    22) "5000"
    23) "info-refresh"
    24) "126"
    25) "role-reported"
    26) "master"
    27) "role-reported-time"
    28) "532439"
    29) "config-epoch"
    30) "1"
    31) "num-slaves"
    32) "1"
    33) "num-other-sentinels"
    34) "2"
    35) "quorum"
    36) "2"
    37) "failover-timeout"
    38) "60000"
    39) "parallel-syncs"
    40) "1"

As you can see, it prints a number of details about the master. There are
a few that are of particular interest to us:

1. `num-other-sentinels` is 2, so we know this Sentinel has already detected two more Sentinels for this master. If you check the logs you'll see the `+sentinel` events that were generated.
2. `flags` is just `master`. If the master were down we could expect to see the `s_down` or `o_down` flags as well here.
3. `num-slaves` is correctly set to 1, so Sentinel also detected that there is a replica attached to our master.

In order to explore more about this instance, you may want to try the following
two commands:

    SENTINEL replicas mymaster
    SENTINEL sentinels mymaster

The first will provide similar information about the replicas connected to the
master, and the second about the other Sentinels.

Obtaining the address of the current master
---

As we already specified, Sentinel also acts as a configuration provider for
clients that want to connect to a set of master and replicas.
Because of
possible failovers or reconfigurations, clients have no way to know which is
the currently active master for a given set of instances, so Sentinel exports
an API to answer this question:

    127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
    1) "127.0.0.1"
    2) "6379"

### Testing the failover

At this point our toy Sentinel deployment is ready to be tested. We can
just kill our master and check if the configuration changes. To do so
we can run:

    redis-cli -p 6379 DEBUG sleep 30

This command will make our master no longer reachable, sleeping for 30 seconds.
It basically simulates a master hanging for some reason.

If you check the Sentinel logs, you should be able to see a lot of action:

1. Each Sentinel detects the master is down with a `+sdown` event.
2. This event is later escalated to `+odown`, which means that multiple Sentinels agree about the fact that the master is not reachable.
3. The Sentinels vote for a Sentinel that will start the first failover attempt.
4. The failover happens.

If you ask again for the current master address of `mymaster`, you should
eventually get a different reply this time:

    127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
    1) "127.0.0.1"
    2) "6380"

So far so good... At this point you may jump ahead and create your Sentinel
deployment, or read on to understand all the Sentinel commands and internals.

## Sentinel API

Sentinel provides an API in order to inspect its state, check the health
of monitored masters and replicas, subscribe in order to receive specific
notifications, and change the Sentinel configuration at run time.

By default Sentinel runs using TCP port 26379 (note that 6379 is the normal
Redis port).
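Since Sentinel listens on that port speaking the standard Redis protocol (RESP), any RESP-capable client can talk to it. As a minimal illustration, here is how a `SENTINEL` command is encoded on the wire (`encode_resp_command` is a hypothetical helper shown only for this sketch; real client libraries do this for you):

```python
def encode_resp_command(*args: str) -> bytes:
    """Encode a command as a RESP array of bulk strings,
    the wire format Sentinel accepts on port 26379."""
    out = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        # Each argument is a bulk string: $<length>\r\n<bytes>\r\n
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

payload = encode_resp_command("SENTINEL", "get-master-addr-by-name", "mymaster")
```

Sending `payload` over a TCP connection to port 26379 is exactly what `redis-cli` does under the hood.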
Sentinels accept commands using the Redis protocol, so you can
use `redis-cli` or any other unmodified Redis client in order to talk with
Sentinel.

It is possible to directly query a Sentinel to check the state of
the monitored Redis instances from its point of view, to see what other
Sentinels it knows about, and so forth. Alternatively, using Pub/Sub, it is possible
to receive *push style* notifications from Sentinels every time some event
happens, like a failover, an instance entering an error condition, and
so forth.

### Sentinel commands

The `SENTINEL` command is the main API for Sentinel. The following is the list of its subcommands (the minimal version is noted where applicable):

* **SENTINEL CONFIG GET `<name>`** (`>= 6.2`) Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command.
* **SENTINEL CONFIG SET `<name>` `<value>`** (`>= 6.2`) Set the value of a global Sentinel configuration parameter.
* **SENTINEL CKQUORUM `<master name>`** Check if the current Sentinel configuration is able to reach the quorum needed to fail over a master, and the majority needed to authorize the failover. This command should be used in monitoring systems to check if a Sentinel deployment is ok.
* **SENTINEL FLUSHCONFIG** Force Sentinel to rewrite its configuration on disk, including the current Sentinel state. Normally Sentinel rewrites the configuration every time something changes in its state (in the context of the subset of the state which is persisted on disk across restarts). However sometimes it is possible that the configuration file is lost because of operation errors, disk failures, package upgrade scripts or configuration managers. In those cases a way to force Sentinel to rewrite the configuration file is handy.
This command works even if the previous configuration file is completely missing.\n* **SENTINEL FAILOVER `<master name>`** Force a failover as if the master was not reachable, and without asking for agreement to other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations).\n* **SENTINEL GET-MASTER-ADDR-BY-NAME `<master name>`** Return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master it returns the address and port of the promoted replica.\n* **SENTINEL INFO-CACHE** (`>= 3.2`) Return cached `INFO` output from masters and replicas.\n* **SENTINEL IS-MASTER-DOWN-BY-ADDR <ip> <port> <current-epoch> <runid>** Check if the master specified by ip:port is down from current Sentinel's point of view. This command is mostly for internal use.\n* **SENTINEL MASTER `<master name>`** Show the state and info of the specified master.\n* **SENTINEL MASTERS** Show a list of monitored masters and their state.\n* **SENTINEL MONITOR** Start Sentinel's monitoring. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.\n* **SENTINEL MYID** (`>= 6.2`) Return the ID of the Sentinel instance.\n* **SENTINEL PENDING-SCRIPTS** This command returns information about pending scripts.\n* **SENTINEL REMOVE** Stop Sentinel's monitoring. Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.\n* **SENTINEL REPLICAS `<master name>`** (`>= 5.0`) Show a list of replicas for this master, and their state.\n* **SENTINEL SENTINELS `<master name>`** Show a list of sentinel instances for this master, and their state.\n* **SENTINEL SET** Set Sentinel's monitoring configuration. 
Refer to the [_Reconfiguring Sentinel at Runtime_ section](#reconfiguring-sentinel-at-runtime) for more information.\n* **SENTINEL SIMULATE-FAILURE (crash-after-election|crash-after-promotion|help)** (`>= 3.2`) This command simulates different Sentinel crash scenarios.\n* **SENTINEL RESET `<pattern>`** This command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a failover in progress), and removes every replica and sentinel already discovered and associated with the master.\n\nFor connection management and administration purposes, Sentinel supports the following subset of Redis' commands:\n\n* **ACL** (`>= 6.2`) This command manages the Sentinel Access Control List. For more information refer to the [ACL](\/topics\/acl) documentation page and the [_Sentinel Access Control List authentication_](#sentinel-access-control-list-authentication).\n* **AUTH** (`>= 5.0.1`) Authenticate a client connection. For more information refer to the `AUTH` command and the [_Configuring Sentinel instances with authentication_ section](#configuring-sentinel-instances-with-authentication).\n* **CLIENT** This command manages client connections. For more information refer to its subcommands' pages.\n* **COMMAND** (`>= 6.2`) This command returns information about commands. For more information refer to the `COMMAND` command and its various subcommands.\n* **HELLO** (`>= 6.0`) Switch the connection's protocol. For more information refer to the `HELLO` command.\n* **INFO** Return information and statistics about the Sentinel server. For more information see the `INFO` command.\n* **PING** This command simply returns PONG.\n* **ROLE** This command returns the string \"sentinel\" and a list of monitored masters. 
For more information refer to the `ROLE` command.
* **SHUTDOWN** Shut down the Sentinel instance.

Lastly, Sentinel also supports the `SUBSCRIBE`, `UNSUBSCRIBE`, `PSUBSCRIBE` and `PUNSUBSCRIBE` commands. Refer to the [_Pub/Sub Messages_ section](#pubsub-messages) for more details.

### Reconfiguring Sentinel at Runtime

Starting with Redis version 2.8.4, Sentinel provides an API in order to add, remove, or change the configuration of a given master. Note that if you have multiple Sentinels you should apply the changes to all of your instances for Redis Sentinel to work properly. This means that changing the configuration of a single Sentinel does not automatically propagate the changes to the other Sentinels in the network.

The following is a list of `SENTINEL` subcommands used in order to update the configuration of a Sentinel instance.

* **SENTINEL MONITOR `<name>` `<ip>` `<port>` `<quorum>`** This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. It is identical to the `sentinel monitor` configuration directive in the `sentinel.conf` configuration file, with the difference that you can't use a hostname as `ip`; you need to provide an IPv4 or IPv6 address.
* **SENTINEL REMOVE `<name>`** is used in order to remove the specified master: the master will no longer be monitored, and will be totally removed from the internal state of the Sentinel, so it will no longer be listed by `SENTINEL masters` and so forth.
* **SENTINEL SET `<name>` [`<option>` `<value>` ...]** The SET command is very similar to the `CONFIG SET` command of Redis, and is used in order to change configuration parameters of a specific master. Multiple option / value pairs can be specified (or none at all).
All the configuration parameters that can be configured via `sentinel.conf` are also configurable using the SET command.

The following is an example of the `SENTINEL SET` command used to modify the `down-after-milliseconds` configuration of a master called `objects-cache-master`:

    SENTINEL SET objects-cache-master down-after-milliseconds 1000

As already stated, `SENTINEL SET` can be used to set all the configuration parameters that are settable in the startup configuration file. Moreover, it is possible to change just the master quorum configuration without removing and re-adding the master with `SENTINEL REMOVE` followed by `SENTINEL MONITOR`, but simply by using:

    SENTINEL SET objects-cache-master quorum 5

Note that there is no equivalent GET command, since `SENTINEL MASTER` provides all the configuration parameters in a simple-to-parse format (as an array of field/value pairs).

Starting with Redis version 6.2, Sentinel also allows getting and setting global configuration parameters, which were only supported in the configuration file prior to that.

* **SENTINEL CONFIG GET `<name>`** Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command.
* **SENTINEL CONFIG SET `<name>` `<value>`** Set the value of a global Sentinel configuration parameter.

Global parameters that can be manipulated include:

* `resolve-hostnames`, `announce-hostnames`. See [_IP addresses and DNS names_](#ip-addresses-and-dns-names).
* `announce-ip`, `announce-port`. See [_Sentinel, Docker, NAT, and possible issues_](#sentinel-docker-nat-and-possible-issues).
* `sentinel-user`, `sentinel-pass`. See [_Configuring Sentinel instances with authentication_](#configuring-sentinel-instances-with-authentication).

### Adding or removing Sentinels

Adding a new Sentinel to your deployment is a simple process because of the
auto-discovery mechanism implemented by Sentinel.
All you need to do is to
start the new Sentinel configured to monitor the currently active master.
Within 10 seconds the Sentinel will acquire the list of other Sentinels and
the set of replicas attached to the master.

If you need to add multiple Sentinels at once, it is suggested to add them
one after the other, waiting for all the other Sentinels to already know
about the first one before adding the next. This is useful in order to still
guarantee that majority can be achieved only on one side of a partition,
in case failures should happen while the new Sentinels are being added.

This can be easily achieved by adding every new Sentinel with a 30-second delay, and in the absence of network partitions.

At the end of the process it is possible to use the command
`SENTINEL MASTER mastername` in order to check if all the Sentinels agree about
the total number of Sentinels monitoring the master.

Removing a Sentinel is a bit more complex: **Sentinels never forget already seen
Sentinels**, even if they are not reachable for a long time, since we don't
want to dynamically change the majority needed to authorize a failover and
the creation of a new configuration number. So in order to remove a Sentinel
the following steps should be performed in the absence of network partitions:

1. Stop the Sentinel process of the Sentinel you want to remove.
2. Send a `SENTINEL RESET *` command to all the other Sentinel instances (instead of `*` you can use the exact master name if you want to reset just a single master). Do this one instance after the other, waiting at least 30 seconds between instances.
3. Check that all the Sentinels agree about the number of Sentinels currently active, by inspecting the output of `SENTINEL MASTER mastername` on every Sentinel.

### Removing the old master or unreachable replicas

Sentinels never forget about replicas of a given master, even when they are
unreachable for a long time.
This is useful, because Sentinels should be able
to correctly reconfigure a returning replica after a network partition or a
failure event.

Moreover, after a failover, the failed-over master is virtually added as a
replica of the new master; this way it will be reconfigured to replicate with
the new master as soon as it becomes available again.

However, sometimes you want to remove a replica (that may be the old master)
forever from the list of replicas monitored by Sentinels.

In order to do this, you need to send a `SENTINEL RESET mastername` command
to all the Sentinels: they'll refresh the list of replicas within the next
10 seconds, only adding the ones listed as correctly replicating in the
current master's `INFO` output.

### Pub/Sub messages

A client can use a Sentinel as a Redis-compatible Pub/Sub server
(but you can't use `PUBLISH`) in order to `SUBSCRIBE` or `PSUBSCRIBE` to
channels and get notified about specific events.

The channel name is the same as the name of the event. For instance the
channel named `+sdown` will receive all the notifications related to instances
entering an `SDOWN` condition (SDOWN means the instance is no longer reachable
from the point of view of the Sentinel you are querying).

To get all the messages simply subscribe using `PSUBSCRIBE *`.

The following is a list of channels and message formats you can receive using
this API.
The first word is the channel / event name, the rest is the format of the data.

Note: where *instance details* is specified it means that the following arguments are provided to identify the target instance:

    <instance-type> <name> <ip> <port> @ <master-name> <master-ip> <master-port>

The part identifying the master (from the @ argument to the end) is optional
and is only specified if the instance is not a master itself.

* **+reset-master** `<instance details>` -- The master was reset.
* **+slave** `<instance details>` -- A new replica was detected and attached.
* **+failover-state-reconf-slaves** `<instance details>` -- Failover state changed to `reconf-slaves` state.
* **+failover-detected** `<instance details>` -- A failover started by another Sentinel or any other external entity was detected (an attached replica turned into a master).
* **+slave-reconf-sent** `<instance details>` -- The leader Sentinel sent the `REPLICAOF` command to this instance in order to reconfigure it to replicate with the new master.
* **+slave-reconf-inprog** `<instance details>` -- The replica being reconfigured shows up as a replica of the new master ip:port pair, but the synchronization process is not yet complete.
* **+slave-reconf-done** `<instance details>` -- The replica is now synchronized with the new master.
* **-dup-sentinel** `<instance details>` -- One or more Sentinels for the specified master were removed as duplicates (this happens for instance when a Sentinel instance is restarted).
* **+sentinel** `<instance details>` -- A new Sentinel for this master was detected and attached.
* **+sdown** `<instance details>` -- The specified instance is now in Subjectively Down state.
* **-sdown** `<instance details>` -- The specified instance is no longer in Subjectively Down state.
* **+odown** `<instance details>` -- The specified instance is now in Objectively Down state.
* **-odown** `<instance details>` -- The specified instance is no longer in Objectively
Down state.
* **+new-epoch** `<instance details>` -- The current epoch was updated.
* **+try-failover** `<instance details>` -- New failover in progress, waiting to be elected by the majority.
* **+elected-leader** `<instance details>` -- Won the election for the specified epoch, can do the failover.
* **+failover-state-select-slave** `<instance details>` -- New failover state is `select-slave`: we are trying to find a suitable replica for promotion.
* **no-good-slave** `<instance details>` -- There is no good replica to promote. Currently we'll try again after some time, but probably this will change and the state machine will abort the failover entirely in this case.
* **selected-slave** `<instance details>` -- We found the specified good replica to promote.
* **failover-state-send-slaveof-noone** `<instance details>` -- We are trying to reconfigure the promoted replica as master, waiting for it to switch.
* **failover-end-for-timeout** `<instance details>` -- The failover terminated because of a timeout; replicas will eventually be configured to replicate with the new master anyway.
* **failover-end** `<instance details>` -- The failover terminated with success. All the replicas appear to be reconfigured to replicate with the new master.
* **switch-master** `<master name> <oldip> <oldport> <newip> <newport>` -- The master's new IP address and port are the specified ones after a configuration change. This is **the message most external users are interested in**.
* **+tilt** -- Tilt mode entered.
* **-tilt** -- Tilt mode exited.

### Handling of -BUSY state

The -BUSY error is returned by a Redis instance when a Lua script has been
running for longer than the configured Lua script time limit.
When this happens, before triggering a failover, Redis Sentinel will try to
send a `SCRIPT KILL` command, which will only succeed if the script was read-only.

If the instance is still in an error condition after this attempt, it will
eventually be failed over.

Replicas priority
---

Redis instances have a configuration parameter called `replica-priority`.
This information is exposed by Redis replica instances in their `INFO` output,
and Sentinel uses it in order to pick a replica among the ones that can be
used to fail over a master:

1. If the replica priority is set to 0, the replica is never promoted to master.
2. Replicas with a *lower* priority number are preferred by Sentinel.

For example, if there is a replica S1 in the same data center as the current
master, and another replica S2 in another data center, it is possible to set
S1 with a priority of 10 and S2 with a priority of 100, so that if the master
fails and both S1 and S2 are available, S1 will be preferred.

For more information about the way replicas are selected, please check the [_Replica selection and priority_ section](#replica-selection-and-priority) of this documentation.

### Sentinel and Redis authentication

When the master is configured to require authentication from clients,
as a security measure, replicas need to also be aware of the credentials in
order to authenticate with the master and create the master-replica connection
used for the asynchronous replication protocol.

### Redis Access Control List authentication

Starting with Redis 6, user authentication and permissions are managed with the [Access Control List (ACL)](/topics/acl).

In order for Sentinels to connect to Redis server instances when they are
configured with ACL, the Sentinel configuration must include the
following directives:

    sentinel auth-user <master-name> <username>
    sentinel auth-pass <master-name> <password>

Where `<username>` and `<password>` are the
username and password for accessing the group's instances. These credentials should be provisioned on all of the group's Redis instances with the minimal control permissions. For example:

    127.0.0.1:6379> ACL SETUSER sentinel-user ON >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill

### Redis password-only authentication

Until Redis 6, authentication is achieved using the following configuration directives:

* `requirepass` in the master, in order to set the authentication password, and to make sure the instance will not process requests from non-authenticated clients.
* `masterauth` in the replicas, in order for the replicas to authenticate with the master and correctly replicate data from it.

When Sentinel is used, there is not a single master, since after a failover
replicas may play the role of masters, and old masters can be reconfigured in
order to act as replicas, so what you want to do is to set the above directives
in all your instances, both masters and replicas.

This is also usually a sane setup, since you don't want to protect
data only on the master while having the same data accessible on the replicas.

However, in the uncommon case where you need a replica that is accessible
without authentication, you can still do it by setting up **a replica priority
of zero** (to prevent this replica from being promoted to master), and
configuring in this replica only the `masterauth` directive, without
using the `requirepass` directive, so that data will be readable by
unauthenticated clients.

In order for Sentinels to connect to Redis server instances when they are
configured with `requirepass`, the Sentinel configuration must include the
`sentinel auth-pass` directive, in the format:

    sentinel auth-pass <master-name> <password>

Configuring Sentinel instances with authentication
---

Sentinel instances themselves can be
secured by requiring clients to authenticate via the `AUTH` command. Starting with Redis 6.2, the [Access Control List (ACL)](\/topics\/acl) is available, whereas previous versions (starting with Redis 5.0.1) support password-only authentication. \n\nNote that Sentinel's authentication configuration should be **applied to each of the instances** in your deployment, and **all instances should use the same configuration**. Furthermore, ACL and password-only authentication should not be used together.\n\n### Sentinel Access Control List authentication\n\nThe first step in securing a Sentinel instance with ACL is preventing any unauthorized access to it. To do that, you'll need to disable the default superuser (or at the very least set it up with a strong password) and create a new one and allow it access to Pub\/Sub channels:\n\n    127.0.0.1:5000> ACL SETUSER admin ON >admin-password allchannels +@all\n    OK\n    127.0.0.1:5000> ACL SETUSER default off\n    OK\n\nThe default user is used by Sentinel to connect to other instances. You can provide the credentials of another superuser with the following configuration directives:\n\n    sentinel sentinel-user <username>\n    sentinel sentinel-pass <password>\n\nWhere `<username>` and `<password>` are the Sentinel's superuser and password, respectively (e.g. 
`admin` and `admin-password` in the example above).

Lastly, for authenticating incoming client connections, you can create a Sentinel restricted user profile such as the following:

    127.0.0.1:5000> ACL SETUSER sentinel-user ON >user-password -@all +auth +client|getname +client|id +client|setname +command +hello +ping +role +sentinel|get-master-addr-by-name +sentinel|master +sentinel|myid +sentinel|replicas +sentinel|sentinels

Refer to the documentation of your Sentinel client of choice for further information.

### Sentinel password-only authentication

To use Sentinel with password-only authentication, add the `requirepass` configuration directive to **all** your Sentinel instances as follows:

    requirepass "your_password_here"

When configured this way, Sentinels will do two things:

1. A password will be required from clients in order to send commands to Sentinels. This is obvious since this is how this configuration directive works in Redis in general.
2. The same password configured to access the local Sentinel will be used by this Sentinel instance in order to authenticate to all the other Sentinel instances it connects to.

This means that **you will have to configure the same `requirepass` password in all the Sentinel instances**. This way every Sentinel can talk with every other Sentinel without any need to configure, for each Sentinel, the password to access all the other Sentinels, which would be very impractical.

Before using this configuration, make sure your client library can send the `AUTH` command to Sentinel instances.

### Sentinel clients implementation

Sentinel requires explicit client support, unless the system is configured to execute a script that performs a transparent redirection of all the requests to the new master instance (virtual IP or other similar systems).
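To make "explicit client support" concrete, the following is a minimal sketch of the resolution loop a Sentinel-aware client performs: try each known Sentinel in turn, ask it for the current master address, and fall back to the next Sentinel when one is unreachable. The network layer is stubbed out as a callable, and all names here (`resolve_master`, `fake_query`) are hypothetical, not part of any real client library:

```python
def resolve_master(sentinels, query, master_name="mymaster"):
    """Ask each Sentinel in turn for the current master address.

    `query(addr, master_name)` stands in for sending
    SENTINEL get-master-addr-by-name over a real connection;
    it returns an (ip, port) tuple or raises ConnectionError.
    """
    for addr in sentinels:
        try:
            return query(addr, master_name)
        except ConnectionError:
            continue  # this Sentinel is unreachable; try the next one
    raise RuntimeError("no Sentinel could be reached")

# Simulated transport: the first Sentinel is down, the second answers.
def fake_query(addr, name):
    if addr == ("127.0.0.1", 5000):
        raise ConnectionError
    return ("127.0.0.1", 6380)

master = resolve_master([("127.0.0.1", 5000), ("127.0.0.1", 5001)], fake_query)
# A real client would now connect to `master` and verify the instance
# actually reports role:master before using it.
```

This is only the address-resolution step; real clients also re-resolve on connection errors and handle the role check.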
The topic of client libraries implementation is covered in the document [Sentinel clients guidelines](/topics/sentinel-clients).

## More advanced concepts

In the following sections we'll cover a few details about how Sentinel works, without resorting to implementation details and algorithms that will be covered in the final part of this document.

### SDOWN and ODOWN failure state

Redis Sentinel has two different concepts of *being down*: one is called a *Subjectively Down* condition (SDOWN), and is a down condition local to a given Sentinel instance; the other is called an *Objectively Down* condition (ODOWN), and is reached when enough Sentinels (at least the number configured as the `quorum` parameter of the monitored master) have an SDOWN condition, and get feedback from other Sentinels using the `SENTINEL is-master-down-by-addr` command.

From the point of view of a Sentinel, an SDOWN condition is reached when it does not receive a valid reply to PING requests for the number of milliseconds specified in the configuration as the `down-after-milliseconds` parameter.

An acceptable reply to PING is one of the following:

* PING replied with +PONG.
* PING replied with -LOADING error.
* PING replied with -MASTERDOWN error.

Any other reply (or no reply at all) is considered not valid. However, note that **a logical master that advertises itself as a replica in the INFO output is considered to be down**.

Note that SDOWN requires that no acceptable reply is received for the whole interval configured, so for instance if the interval is 30000 milliseconds (30 seconds) and we receive an acceptable ping reply every 29 seconds, the instance is considered to be working.

SDOWN is not enough to trigger a failover: it only means a single Sentinel believes a Redis instance is not available.
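The SDOWN rules above (which replies reset the timer, and the whole-interval requirement) can be sketched in a few lines of Python. This is a simplified illustration with hypothetical names, not Sentinel's actual implementation:

```python
ACCEPTABLE_REPLIES = {"+PONG", "-LOADING", "-MASTERDOWN"}

def record_reply(reply: str, now_ms: int, state: dict) -> None:
    # Only +PONG, -LOADING and -MASTERDOWN reset the SDOWN timer;
    # any other reply (or no reply) lets the interval keep growing.
    if reply.split()[0] in ACCEPTABLE_REPLIES:
        state["last_ok_reply_ms"] = now_ms

def is_sdown(last_ok_reply_ms: int, now_ms: int, down_after_ms: int) -> bool:
    # SDOWN holds only when no acceptable reply was seen for the
    # whole down-after-milliseconds interval.
    return now_ms - last_ok_reply_ms > down_after_ms
```

For instance, with `down-after-milliseconds` set to 30000, an acceptable reply every 29 seconds keeps `is_sdown` returning `False`, matching the 29-second example above.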
To trigger a failover, the ODOWN state must be reached.

To switch from SDOWN to ODOWN no strong consensus algorithm is used, just a form of gossip: if a given Sentinel gets reports that a master is not working from enough Sentinels **in a given time range**, the SDOWN is promoted to ODOWN. If the acknowledgment is later missing, the flag is cleared.

A stricter authorization that uses an actual majority is required in order to really start the failover, but no failover can be triggered without reaching the ODOWN state.

The ODOWN condition **only applies to masters**. For other kinds of instances Sentinel doesn't require any agreement, so the ODOWN state is never reached for replicas and other Sentinels; only SDOWN is.

However SDOWN has semantic implications too. For example, a replica in SDOWN state is not selected to be promoted by a Sentinel performing a failover.

Sentinels and replicas auto discovery
---

Sentinels stay connected with other Sentinels in order to reciprocally check the availability of each other, and to exchange messages. However, you don't need to configure a list of other Sentinel addresses in every Sentinel instance you run, as Sentinel uses the Redis instances' Pub/Sub capabilities in order to discover the other Sentinels that are monitoring the same masters and replicas.

This feature is implemented by sending *hello messages* into the channel named `__sentinel__:hello`.

Similarly, you don't need to configure the list of replicas attached to a master, as Sentinel will auto discover this list by querying Redis.

* Every Sentinel publishes a message to every monitored master and replica Pub/Sub channel `__sentinel__:hello`, every two seconds, announcing its presence with its ip, port and runid.
* Every Sentinel is subscribed to the Pub/Sub channel `__sentinel__:hello` of every master and replica, looking for unknown Sentinels.
When new Sentinels are detected, they are added as Sentinels of this master.
* Hello messages also include the full current configuration of the master. If the receiving Sentinel has a configuration for a given master which is older than the one received, it updates to the new configuration immediately.
* Before adding a new Sentinel to a master, a Sentinel always checks if there is already a Sentinel with the same runid or the same address (ip and port pair). In that case all the matching Sentinels are removed, and the new one added.

Sentinel reconfiguration of instances outside the failover procedure
---

Even when no failover is in progress, Sentinels will always try to set the current configuration on monitored instances. Specifically:

* Replicas (according to the current configuration) that claim to be masters will be configured as replicas to replicate with the current master.
* Replicas connected to a wrong master will be reconfigured to replicate with the right master.

For Sentinels to reconfigure replicas, the wrong configuration must be observed for some time, greater than the period used to broadcast new configurations.

This prevents Sentinels with a stale configuration (for example because they just rejoined from a partition) from trying to change the replicas' configuration before receiving an update.

Also note how the semantics of always trying to impose the current configuration makes the failover more resistant to partitions:

* Masters failed over are reconfigured as replicas when they return available.
* Replicas partitioned away during a partition are reconfigured once reachable.

The important lesson to remember about this section is: **Sentinel is a system where each process will always try to impose the last logical configuration to the set of monitored instances**.

### Replica selection and priority

When a Sentinel instance is ready to perform a failover, since the master is in `ODOWN` state and the Sentinel
received the authorization to failover from the majority of the known Sentinel instances, a suitable replica needs to be selected.

The replica selection process evaluates the following information about replicas:

1. Disconnection time from the master.
2. Replica priority.
3. Replication offset processed.
4. Run ID.

A replica that is found to be disconnected from the master for more than ten times the configured master timeout (`down-after-milliseconds` option), plus the time the master is also not available from the point of view of the Sentinel doing the failover, is considered to be not suitable for the failover and is skipped.

In more rigorous terms, a replica whose `INFO` output suggests it has been disconnected from the master for more than:

    (down-after-milliseconds * 10) + milliseconds_since_master_is_in_SDOWN_state

is considered to be unreliable and is disregarded entirely.

The replica selection only considers the replicas that passed the above test, and sorts them based on the above criteria, in the following order.

1. The replicas are sorted by `replica-priority` as configured in the `redis.conf` file of the Redis instance. A lower priority will be preferred.
2. If the priority is the same, the replication offset processed by the replica is checked, and the replica that received more data from the master is selected.
3. If multiple replicas have the same priority and processed the same data from the master, a further check is performed, selecting the replica with the lexicographically smaller run ID. Having a lower run ID is not a real advantage for a replica, but is useful in order to make the process of replica selection more deterministic, instead of selecting a random replica.

In most cases, `replica-priority` does not need to be set explicitly, so all instances will use the same default value.
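The three sorting criteria above boil down to a single composite sort key. Here is a minimal Python sketch (hypothetical field names, not Sentinel's actual code):

```python
from dataclasses import dataclass

@dataclass
class Replica:
    priority: int     # replica-priority from redis.conf (lower is preferred)
    repl_offset: int  # replication offset processed (higher is preferred)
    runid: str        # lexicographically smaller wins as final tie-breaker

def select_replica(candidates: list["Replica"]) -> "Replica":
    # Priority ascending, then offset descending, then run ID ascending.
    return min(candidates, key=lambda r: (r.priority, -r.repl_offset, r.runid))
```

With equal priorities, the replica with the largest replication offset wins; only when priority and offset both tie does the run ID decide, making the selection deterministic.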
If there is a particular fail-over preference, `replica-priority` must be set on all instances, including masters, as a master may become a replica at some future point in time, and it will then need the proper `replica-priority` settings.

A Redis instance can be configured with a special `replica-priority` of zero in order to **never be selected** by Sentinels as the new master. However, a replica configured in this way will still be reconfigured by Sentinels in order to replicate with the new master after a failover; the only difference is that it will never become a master itself.

## Algorithms and internals

In the following sections we will explore the details of Sentinel behavior. It is not strictly needed for users to be aware of all the details, but a deep understanding of Sentinel may help to deploy and operate Sentinel in a more effective way.

### Quorum

The previous sections showed that every master monitored by Sentinel is associated with a configured **quorum**. It specifies the number of Sentinel processes that need to agree about the unreachability or error condition of the master in order to trigger a failover.

However, after the failover is triggered, in order for the failover to actually be performed, **at least a majority of Sentinels must authorize the Sentinel to failover**. Sentinel never performs a failover in the partition where a minority of Sentinels exist.

Let's try to make things a bit more clear:

* Quorum: the number of Sentinel processes that need to detect an error condition in order for a master to be flagged as **ODOWN**.
* The failover is triggered by the **ODOWN** state.
* Once the failover is triggered, the Sentinel trying to failover is required to ask for authorization to a majority of Sentinels (or more than the majority if the quorum is set to a number greater than the majority).

The difference may seem subtle but is actually quite simple to understand and use.
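The interplay between the quorum (which triggers ODOWN) and the majority (which authorizes the failover) can be sketched as two small predicates. This is a simplified model for illustration, not Sentinel's actual code:

```python
def odown_reached(sdown_reports: int, quorum: int) -> bool:
    # ODOWN, and therefore the failover trigger, requires `quorum`
    # Sentinels agreeing that the master is in SDOWN.
    return sdown_reports >= quorum

def failover_authorized(votes: int, total_sentinels: int, quorum: int) -> bool:
    # Actually performing the failover always requires a majority of all
    # Sentinels, and at least `quorum` votes when the quorum is configured
    # to a value larger than the majority.
    majority = total_sentinels // 2 + 1
    return votes >= max(majority, quorum)
```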
For example, if you have 5 Sentinel instances and the quorum is set to 2, a failover will be triggered as soon as 2 Sentinels believe that the master is not reachable; however, one of the two Sentinels will be able to failover only if it gets authorization from at least 3 Sentinels.

If instead the quorum is configured to 5, all the Sentinels must agree about the master error condition, and the authorization from all Sentinels is required in order to failover.

This means that the quorum can be used to tune Sentinel in two ways:

1. If the quorum is set to a value smaller than the majority of Sentinels we deploy, we are basically making Sentinel more sensitive to master failures, triggering a failover as soon as even just a minority of Sentinels is no longer able to talk with the master.
2. If the quorum is set to a value greater than the majority of Sentinels, we are making Sentinel able to failover only when there is a very large number (larger than the majority) of well connected Sentinels which agree about the master being down.

### Configuration epochs

Sentinels require authorization from a majority in order to start a failover for a few important reasons:

When a Sentinel is authorized, it gets a unique **configuration epoch** for the master it is failing over. This is a number that will be used to version the new configuration after the failover is completed. Because a majority agreed that a given version was assigned to a given Sentinel, no other Sentinel will be able to use it. This means that every configuration of every failover is versioned with a unique version. We'll see why this is so important.

Moreover, Sentinels have a rule: if a Sentinel voted for another Sentinel for the failover of a given master, it will wait some time before trying to failover the same master again. This delay is the `2 * failover-timeout` you can configure in `sentinel.conf`.
This means that Sentinels will not try to failover the same master at the same time: the first to ask to be authorized will try; if it fails, another will try after some time, and so forth.

Redis Sentinel guarantees the *liveness* property that if a majority of Sentinels are able to talk, eventually one will be authorized to failover if the master is down.

Redis Sentinel also guarantees the *safety* property that every Sentinel will failover the same master using a different *configuration epoch*.

### Configuration propagation

Once a Sentinel is able to failover a master successfully, it will start to broadcast the new configuration so that the other Sentinels will update their information about a given master.

For a failover to be considered successful, it requires that the Sentinel was able to send the `REPLICAOF NO ONE` command to the selected replica, and that the switch to master was later observed in the `INFO` output of the master.

At this point, even if the reconfiguration of the replicas is in progress, the failover is considered to be successful, and all the Sentinels are required to start reporting the new configuration.

The way a new configuration is propagated is the reason why we need every Sentinel failover to be authorized with a different version number (configuration epoch).

Every Sentinel continuously broadcasts its version of the configuration of a master using Redis Pub/Sub messages, both in the master and in all the replicas. At the same time all the Sentinels wait for messages to see what configuration is advertised by the other Sentinels.

Configurations are broadcast in the `__sentinel__:hello` Pub/Sub channel.

Because every configuration has a different version number, the greater version always wins over smaller versions.

So for example the configuration for the master `mymaster` starts with all the Sentinels believing the master is at 192.168.1.50:6379. This configuration has version 1.
After some time a Sentinel is authorized to failover with version 2. If the failover is successful, it will start to broadcast a new configuration, let's say 192.168.1.50:9000, with version 2. All the other instances will see this configuration and will update their configuration accordingly, since the new configuration has a greater version.

This means that Sentinel guarantees a second liveness property: a set of Sentinels that are able to communicate will all converge to the same configuration with the higher version number.

Basically, if the net is partitioned, every partition will converge to the higher local configuration. In the special case of no partitions, there is a single partition and every Sentinel will agree about the configuration.

### Consistency under partitions

Redis Sentinel configurations are eventually consistent, so every partition will converge to the higher configuration available. However, in a real-world system using Sentinel there are three different players:

* Redis instances.
* Sentinel instances.
* Clients.

In order to define the behavior of the system we have to consider all three.

The following is a simple network where there are 3 nodes, each running a Redis instance and a Sentinel instance:

                +-------------+
                | Sentinel 1  |----- Client A
                | Redis 1 (M) |
                +-------------+
                        |
                        |
    +-------------+     |          +------------+
    | Sentinel 2  |-----+-- // ----| Sentinel 3 |----- Client B
    | Redis 2 (S) |                | Redis 3 (M)|
    +-------------+                +------------+

In this system the original state was that Redis 3 was the master, while Redis 1 and 2 were replicas.
A partition occurred isolating the old master. Sentinels 1 and 2 started a failover promoting Redis 1 as the new master.

The Sentinel properties guarantee that Sentinel 1 and 2 now have the new configuration for the master. However Sentinel 3 still has the old configuration, since it lives in a different partition.

We know that Sentinel 3 will get its configuration updated when the network partition heals; however, what happens during the partition if there are clients partitioned with the old master?

Clients will still be able to write to Redis 3, the old master. When the partition rejoins, Redis 3 will be turned into a replica of Redis 1, and all the data written during the partition will be lost.

Depending on your configuration, you may or may not want this scenario to happen:

* If you are using Redis as a cache, it could be handy that Client B is still able to write to the old master, even if its data will be lost.
* If you are using Redis as a store, this is not good and you need to configure the system in order to partially prevent this problem.

Since Redis is asynchronously replicated, there is no way to totally prevent data loss in this scenario; however, you can bound the divergence between Redis 3 and Redis 1 using the following Redis configuration options:

    min-replicas-to-write 1
    min-replicas-max-lag 10

With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica. Since replication is asynchronous, *not being able to write* actually means that the replica is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds.

Using this configuration, the Redis 3 in the above example will become unavailable after 10 seconds.
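The write-gating behavior of these two options can be sketched as a simple check, assuming the master tracks each replica's acknowledgment lag in seconds (a simplified model with hypothetical names, not the actual Redis implementation):

```python
def master_accepts_writes(replica_lags: list[float],
                          min_replicas_to_write: int = 1,
                          min_replicas_max_lag: float = 10.0) -> bool:
    """A master keeps accepting writes only while at least
    min-replicas-to-write replicas are connected and acknowledging
    within min-replicas-max-lag seconds."""
    good = [lag for lag in replica_lags if lag <= min_replicas_max_lag]
    return len(good) >= min_replicas_to_write
```

In the partition example above, the isolated Redis 3 has no replica acknowledging it, so once every lag exceeds 10 seconds the check fails and writes stop, bounding the data that Client B can lose.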
When the partition heals, the Sentinel 3 configuration will converge to the new one, and Client B will be able to fetch a valid configuration and continue.

In general, Redis + Sentinel as a whole are an **eventually consistent system** where the merge function is **last failover wins**, and the data from old masters is discarded to replicate the data of the current master, so there is always a window for losing acknowledged writes. This is due to Redis asynchronous replication and the discarding nature of the "virtual" merge function of the system. Note that this is not a limitation of Sentinel itself: even if you orchestrate the failover with a strongly consistent replicated state machine, the same properties still apply. There are only two ways to avoid losing acknowledged writes:

1. Use synchronous replication (and a proper consensus algorithm to run a replicated state machine).
2. Use an eventually consistent system where different versions of the same object can be merged.

Redis currently is not able to use either of the above systems, and doing so is outside the current development goals. However, there are proxies implementing solution "2" on top of Redis stores, such as SoundCloud [Roshi](https://github.com/soundcloud/roshi) or Netflix [Dynomite](https://github.com/Netflix/dynomite).

Sentinel persistent state
---

Sentinel state is persisted in the Sentinel configuration file. For example, every time a new configuration is received, or created (by leader Sentinels) for a master, the configuration is persisted on disk together with the configuration epoch.
This means that it is safe to stop and restart Sentinel processes.

### TILT mode

Redis Sentinel is heavily dependent on the computer time: for instance, in order to understand if an instance is available, it remembers the time of the latest successful reply to the PING command and compares it with the current time to understand how old it is.

However, if the computer time changes in an unexpected way, or if the computer is very busy, or the process is blocked for some reason, Sentinel may start to behave in an unexpected way.

The TILT mode is a special "protection" mode that a Sentinel can enter when something odd is detected that can lower the reliability of the system. The Sentinel timer interrupt is normally called 10 times per second, so we expect that more or less 100 milliseconds will elapse between two calls to the timer interrupt.

What a Sentinel does is register the previous time the timer interrupt was called, and compare it with the current call: if the time difference is negative or unexpectedly big (2 seconds or more), the TILT mode is entered (or, if it was already entered, the exit from TILT mode is postponed).

When in TILT mode the Sentinel will continue to monitor everything, but:

* It stops acting at all.
* It starts to reply negatively to `SENTINEL is-master-down-by-addr` requests, as the ability to detect a failure is no longer trusted.

If everything appears to be normal for 30 seconds, the TILT mode is exited.

In Sentinel TILT mode, if we send the INFO command, we could get the following response:

    $ redis-cli -p 26379
    127.0.0.1:26379> info
    (Other information from Sentinel server skipped.)

    # Sentinel
    sentinel_masters:1
    sentinel_tilt:0
    sentinel_tilt_since_seconds:-1
    sentinel_running_scripts:0
    sentinel_scripts_queue_length:0
    sentinel_simulate_failure_flags:0
    master0:name=mymaster,status=ok,address=127.0.0.1:6379,slaves=0,sentinels=1

The field
`sentinel_tilt_since_seconds` indicates how many seconds the Sentinel has already been in TILT mode. If it is not in TILT mode, the value will be -1.

Note that in some ways TILT mode could be replaced using the monotonic clock API that many kernels offer. However, it is still not clear whether this is a good solution, since the current system avoids issues in case the process is just suspended or not executed by the scheduler for a long time.

**A note about the word slave used in this man page**: Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.
connecting      Configuration provider    Sentinel acts as a source of authority for clients service discovery  clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service  If a failover occurs  Sentinels will report the new address      Sentinel as a distributed system  Redis Sentinel is a distributed system   Sentinel itself is designed to run in a configuration where there are multiple Sentinel processes cooperating together  The advantage of having multiple Sentinel processes cooperating are the following   1  Failure detection is performed when multiple Sentinels agree about the fact a given master is no longer available  This lowers the probability of false positives  2  Sentinel works even if not all the Sentinel processes are working  making the system robust against failures  There is no fun in having a failover system which is itself a single point of failure  after all   The sum of Sentinels  Redis instances  masters and replicas  and clients connecting to Sentinel and Redis  are also a larger distributed system with specific properties  In this document concepts will be introduced gradually starting from basic information needed in order to understand the basic properties of Sentinel  to more complex information  that are optional  in order to understand how exactly Sentinel works      Sentinel quick start      Obtaining Sentinel  The current version of Sentinel is called   Sentinel 2    It is a rewrite of the initial Sentinel implementation using stronger and simpler to predict algorithms  that are explained in this documentation    A stable release of Redis Sentinel is shipped since Redis 2 8   New developments are performed in the  unstable  branch  and new features sometimes are back ported into the latest stable branch as soon as they are considered to be stable   Redis Sentinel version 1  shipped with Redis 2 6  is deprecated and should not be used       Running Sentinel  If you are 
using the  redis sentinel  executable  or if you have a symbolic link with that name to the  redis server  executable  you can run Sentinel with the following command line       redis sentinel  path to sentinel conf  Otherwise you can use directly the  redis server  executable starting it in Sentinel mode       redis server  path to sentinel conf   sentinel  Both ways work the same   However   it is mandatory   to use a configuration file when running Sentinel  as this file will be used by the system in order to save the current state that will be reloaded in case of restarts  Sentinel will simply refuse to start if no configuration file is given or if the configuration file path is not writable   Sentinels by default run   listening for connections to TCP port 26379    so for Sentinels to work  port 26379 of your servers   must be open   to receive connections from the IP addresses of the other Sentinel instances  Otherwise Sentinels can t talk and can t agree about what to do  so failover will never be performed       Fundamental things to know about Sentinel before deploying  1  You need at least three Sentinel instances for a robust deployment  2  The three Sentinel instances should be placed into computers or virtual machines that are believed to fail in an independent way  So for example different physical servers or Virtual Machines executed on different availability zones  3  Sentinel   Redis distributed system does not guarantee that acknowledged writes are retained during failures  since Redis uses asynchronous replication  However there are ways to deploy Sentinel that make the window to lose writes limited to certain moments  while there are other less secure ways to deploy it  4  You need Sentinel support in your clients  Popular client libraries have Sentinel support  but not all  5  There is no HA setup which is safe if you don t test from time to time in development environments  or even better if you can  in production environments  if they work  
You may have a misconfiguration that will become apparent only when it s too late  at 3am when your master stops working   6    Sentinel  Docker  or other forms of Network Address Translation or Port Mapping should be mixed with care    Docker performs port remapping  breaking Sentinel auto discovery of other Sentinel processes and the list of replicas for a master  Check the  section about  Sentinel and Docker    sentinel docker nat and possible issues  later in this document for more information       Configuring Sentinel  The Redis source distribution contains a file called  sentinel conf  that is a self documented example configuration file you can use to configure Sentinel  however a typical minimal configuration file looks like the following       sentinel monitor mymaster 127 0 0 1 6379 2     sentinel down after milliseconds mymaster 60000     sentinel failover timeout mymaster 180000     sentinel parallel syncs mymaster 1      sentinel monitor resque 192 168 1 3 6380 4     sentinel down after milliseconds resque 10000     sentinel failover timeout resque 180000     sentinel parallel syncs resque 5  You only need to specify the masters to monitor  giving to each separated master  that may have any number of replicas  a different name  There is no need to specify replicas  which are auto discovered  Sentinel will update the configuration automatically with additional information about replicas  in order to retain the information in case of restart   The configuration is also rewritten every time a replica is promoted to master during a failover and every time a new Sentinel is discovered   The example configuration above basically monitors two sets of Redis instances  each composed of a master and an undefined number of replicas  One set of instances is called  mymaster   and the other  resque    The meaning of the arguments of  sentinel monitor  statements is the following       sentinel monitor  master name   ip   port   quorum   For the sake of clarity  
let s check line by line what the configuration options mean   The first line is used to tell Redis to monitor a master called  mymaster   that is at address 127 0 0 1 and port 6379  with a quorum of 2  Everything is pretty obvious but the   quorum   argument     The   quorum   is the number of Sentinels that need to agree about the fact the master is not reachable  in order to really mark the master as failing  and eventually start a failover procedure if possible    However   the quorum is only used to detect the failure    In order to actually perform a failover  one of the Sentinels need to be elected leader for the failover and be authorized to proceed  This only happens with the vote of the   majority of the Sentinel processes     So for example if you have 5 Sentinel processes  and the quorum for a given master set to the value of 2  this is what happens     If two Sentinels agree at the same time about the master being unreachable  one of the two will try to start a failover    If there are at least a total of three Sentinels reachable  the failover will be authorized and will actually start   In practical terms this means during failures   Sentinel never starts a failover if the majority of Sentinel processes are unable to talk    aka no failover in the minority partition        Other Sentinel options  The other options are almost always in the form       sentinel  option name   master name   option value   And are used for the following purposes      down after milliseconds  is the time in milliseconds an instance should not be reachable  either does not reply to our PINGs or it is replying with an error  for a Sentinel starting to think it is down     parallel syncs  sets the number of replicas that can be reconfigured to use the new master after a failover at the same time  The lower the number  the more time it will take for the failover process to complete  however if the replicas are configured to serve old data  you may not want all the replicas to 
re synchronize with the master at the same time  While the replication process is mostly non blocking for a replica  there is a moment when it stops to load the bulk data from the master  You may want to make sure only one replica at a time is not reachable by setting this option to the value of 1   Additional options are described in the rest of this document and documented in the example  sentinel conf  file shipped with the Redis distribution   Configuration parameters can be modified at runtime     Master specific configuration parameters are modified using  SENTINEL SET     Global configuration parameters are modified using  SENTINEL CONFIG SET    See the   Reconfiguring Sentinel at runtime  section   reconfiguring sentinel at runtime  for more information       Example Sentinel deployments  Now that you know the basic information about Sentinel  you may wonder where you should place your Sentinel processes  how many Sentinel processes you need and so forth  This section shows a few example deployments   We use ASCII art in order to show you configuration examples in a  graphical  format  this is what the different symbols means                                    This is a computer         or VM that fails           independently  We          call it a  box                                   We write inside the boxes what they are running                                   Redis master M1           Redis Sentinel S1                              Different boxes are connected by lines  to show that they are able to talk                                                           Sentinel S1                   Sentinel S2                                                      Network partitions are shown as interrupted lines using slashes                                                            Sentinel S1                    Sentinel S2                                                       Also note that     Masters are called M1  M2  M3       Mn    Replicas are 
called R1, R2, R3, ..., Rn (R stands for *replica*).
* Sentinels are called S1, S2, S3, ..., Sn.
* Clients are called C1, C2, C3, ..., Cn.
* When an instance changes role because of Sentinel actions, we put it inside square brackets, so [M1] means an instance that is now a master because of Sentinel intervention.

Note that we will never show **setups where just two Sentinels are used**, since Sentinels always need **to talk with the majority** in order to start a failover.

## Example 1: just two Sentinels, DON'T DO THIS

```
+----+         +----+
| M1 |---------| R1 |
| S1 |         | S2 |
+----+         +----+

Configuration: quorum = 1
```

In this setup, if the master M1 fails, R1 will be promoted since the two Sentinels can reach agreement about the failure (obviously with quorum set to 1) and can also authorize a failover because the majority is two. So apparently it could superficially work, however check the next points to see why this setup is broken.

If the box where M1 is running stops working, also S1 stops working. The Sentinel running in the other box, S2, will not be able to authorize a failover, so the system will become not available.

Note that a majority is needed in order to order different failovers, and later propagate the latest configuration to all the Sentinels. Also note that the ability to failover in a single side of the above setup, without any agreement, would be very dangerous:

```
+----+           +------+
| M1 |----//-----| [M1] |
| S1 |           | S2   |
+----+           +------+
```

In the above configuration we created two masters (assuming S2 could failover without authorization) in a perfectly symmetrical way. Clients may write indefinitely to both sides, and there is no way to understand when the partition heals what configuration is the right one, in order to prevent a *permanent split brain condition*.

So please **deploy at least three Sentinels in three different boxes**, always.

## Example 2: basic setup with three boxes

This is a very simple setup, that has the advantage of being simple to tune for additional safety. It is based on three boxes, each box running both a Redis process and a Sentinel process.

```
       +----+
       | M1 |
       | S1 |
       +----+
          |
+----+    |    +----+
| R2 |----+----| R3 |
| S2 |         | S3 |
+----+         +----+

Configuration: quorum = 2
```

If the master M1 fails, S2 and S3 will agree about the failure and will be able to authorize a failover, making clients able to continue.

In every Sentinel setup, as Redis uses asynchronous replication, there is always the risk of losing some writes because a given acknowledged write may not be able to reach the replica which is promoted to master. However in the above setup there is a higher risk due to clients being partitioned away with an old master, like in the following picture:

```
         +----+
         | M1 |
         | S1 | <- C1 (writes will be lost)
         +----+
            |
            /
            /
+------+    |    +----+
| [M2] |----+----| R3 |
| S2   |         | S3 |
+------+         +----+
```

In this case a network partition isolated the old master M1, so the replica R2 is promoted to master. However clients, like C1, that are in the same partition as the old master, may continue to write data to the old master. This data will be lost forever, since when the partition heals the master will be reconfigured as a replica of the new master, discarding its data set.

This problem can be mitigated using the following Redis replication feature, that allows to stop accepting writes if a master detects that it is no longer able to transfer its writes to the specified number of replicas:

```
min-replicas-to-write 1
min-replicas-max-lag 10
```

With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a
Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica. Since replication is asynchronous, *not being able to write* actually means that the replica is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `max-lag` number of seconds.

Using this configuration, the old Redis master M1 in the above example will become unavailable after 10 seconds. When the partition heals, the Sentinel configuration will converge to the new one, the client C1 will be able to fetch a valid configuration and will continue with the new master.

However there is no free lunch. With this refinement, if the two replicas are down, the master will stop accepting writes. It's a trade off.

## Example 3: Sentinel in the client boxes

Sometimes we have only two Redis boxes available, one for the master and one for the replica. The configuration in Example 2 is not viable in that case, so we can resort to the following, where Sentinels are placed where clients are:

```
            +----+         +----+
            | M1 |----+----| R1 |
            |    |    |    |    |
            +----+    |    +----+
                      |
         +------------+------------+
         |            |            |
         |            |            |
      +----+        +----+      +----+
      | C1 |        | C2 |      | C3 |
      | S1 |        | S2 |      | S3 |
      +----+        +----+      +----+

      Configuration: quorum = 2
```

In this setup, the point of view of the Sentinels is the same as that of the clients: if a master is reachable by the majority of the clients, it is fine. C1, C2, C3 here are generic clients; it does not mean that C1 identifies a single client connected to Redis. It is more likely something like an application server, a Rails app, or something like that.

If the box where M1 is running fails, the failover will happen without issues.
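The quorum-versus-majority distinction that drives all of these deployment examples can be sketched in a few lines of Python. This is a minimal illustration under our own naming, not Sentinel's actual implementation:

```python
# Illustration only: the quorum-vs-majority rule described in this
# section, reduced to arithmetic. Function names are ours, not Redis'.

def master_marked_down(agreeing_sentinels: int, quorum: int) -> bool:
    """Failure *detection*: at least `quorum` Sentinels must agree
    that the master is not reachable."""
    return agreeing_sentinels >= quorum

def failover_authorized(reachable_sentinels: int, total_sentinels: int) -> bool:
    """Failover *authorization*: the leader must be voted by a majority
    of all known Sentinel processes."""
    return reachable_sentinels >= total_sentinels // 2 + 1

# 5 Sentinels, quorum 2 (the example given earlier in this document):
assert master_marked_down(2, quorum=2)                # failure detected
assert failover_authorized(3, total_sentinels=5)      # majority: failover runs
assert not failover_authorized(2, total_sentinels=5)  # minority partition: no failover
```

With 5 Sentinels and quorum 2, two agreeing Sentinels are enough to detect the failure, but three reachable Sentinels are still required before any of them is allowed to act on it.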
However, it is easy to see that different network partitions will result in different behaviors. For example, Sentinel will not be able to operate at all if the network between the clients and the Redis servers is disconnected, since the Redis master and replica will both be unavailable.

Note that if C3 gets partitioned with M1 (hardly possible with the network described above, but more likely possible with different layouts, or because of failures at the software layer), we have a similar issue as described in Example 2, with the difference that here we have no way to break the symmetry, since there is just a replica and master, so the master can't stop accepting queries when it is disconnected from its replica, otherwise the master would never be available during replica failures.

So this is a valid setup, but the setup in Example 2 has advantages such as the HA system of Redis running in the same boxes as Redis itself, which may be simpler to manage, and the ability to put a bound on the amount of time a master in the minority partition can receive writes.

## Example 4: Sentinel client side with less than three clients

The setup described in Example 3 cannot be used if there are less than three boxes on the client side (for example three web servers). In this case we need to resort to a mixed setup like the following:

```
            +----+         +----+
            | M1 |----+----| R1 |
            | S1 |    |    | S2 |
            +----+    |    +----+
                      |
               +------+-----+
               |            |
               |            |
            +----+        +----+
            | C1 |        | C2 |
            | S3 |        | S4 |
            +----+        +----+

      Configuration: quorum = 3
```

This is similar to the setup in Example 3, but here we run four Sentinels in the four boxes we have available. If the master M1 becomes unavailable, the other three Sentinels will perform
the failover.

In theory this setup works removing the box where C2 and S4 are running, and setting the quorum to 2. However it is unlikely that we want HA on the Redis side without having high availability in our application layer.

## Sentinel, Docker, NAT, and possible issues

Docker uses a technique called port mapping: programs running inside Docker containers may be exposed with a different port compared to the one the program believes to be using. This is useful in order to run multiple containers using the same ports, at the same time, in the same server.

Docker is not the only software system where this happens: there are other Network Address Translation setups where ports may be remapped, and sometimes not only ports but also IP addresses.

Remapping ports and addresses creates issues with Sentinel in two ways:

1. Sentinel auto-discovery of other Sentinels no longer works, since it is based on *hello* messages where each Sentinel announces at which port and IP address it is listening for connections. However Sentinels have no way to understand that an address or port is remapped, so they announce information that is not correct for other Sentinels to connect.
2. Replicas are listed in the `INFO` output of a Redis master in a similar way: the address is detected by the master checking the remote peer of the TCP connection, while the port is advertised by the replica itself during the handshake; however the port may be wrong for the same reason as exposed in point 1.

Since Sentinels auto detect replicas using the master's `INFO` output information, the detected replicas will not be reachable, and Sentinel will never be able to failover the master, since there are no good replicas from the point of view of the system, so there is currently no way to monitor with Sentinel a set of master and replica instances deployed with Docker, **unless you instruct Docker to map the port 1:1**.

For the first problem, in case you want to run a set of Sentinel instances
using Docker with forwarded ports (or any other NAT setup where ports are remapped), you can use the following two Sentinel configuration directives in order to force Sentinel to announce a specific set of IP and port:

```
sentinel announce-ip <ip>
sentinel announce-port <port>
```

Note that Docker has the ability to run in *host networking mode* (check the `--net=host` option for more information). This should create no issues since ports are not remapped in this setup.

## IP Addresses and DNS names

Older versions of Sentinel did not support host names and required IP addresses to be specified everywhere. Starting with version 6.2, Sentinel has *optional* support for host names.

**This capability is disabled by default.** If you're going to enable DNS/hostnames support, please note:

1. The name resolution configuration on your Redis and Sentinel nodes must be reliable and be able to resolve addresses quickly. Unexpected delays in address resolution may have a negative impact on Sentinel.
2. You should use hostnames everywhere and avoid mixing hostnames and IP addresses. To do that, use `replica-announce-ip <hostname>` and `sentinel announce-ip <hostname>` for all Redis and Sentinel instances, respectively.

Enabling the `resolve-hostnames` global configuration allows Sentinel to accept host names:

* As part of a `sentinel monitor` command.
* As a replica address, if the replica uses a host name value for `replica-announce-ip`.

Sentinel will accept host names as valid inputs and resolve them, but will still refer to IP addresses when announcing an instance, updating configuration files, etc.

Enabling the `announce-hostnames` global configuration makes Sentinel use host names instead. This affects replies to clients, values written in configuration files, the `REPLICAOF` command issued to replicas, etc.

This behavior may not be compatible with all Sentinel clients, which may explicitly expect an IP address.

Using host names may be useful when clients use
TLS to connect to instances and require a name rather than an IP address in order to perform certificate ASN matching.

## A quick tutorial

In the next sections of this document, all the details about [Sentinel API](#sentinel-api), configuration and semantics will be covered incrementally. However for people that want to play with the system ASAP, this section is a tutorial that shows how to configure and interact with 3 Sentinel instances.

Here we assume that the instances are executed at port 5000, 5001, 5002. We also assume that you have a running Redis master at port 6379 with a replica running at port 6380. We will use the IPv4 loopback address 127.0.0.1 everywhere during the tutorial, assuming you are running the simulation on your personal computer.

The three Sentinel configuration files should look like the following:

```
port 5000
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
```

The other two configuration files will be identical but using 5001 and 5002 as port numbers.

A few things to note about the above configuration:

* The master set is called `mymaster`. It identifies the master and its replicas. Since each *master set* has a different name, Sentinel can monitor different sets of masters and replicas at the same time.
* The quorum was set to the value of 2 (last argument of the `sentinel monitor` configuration directive).
* The `down-after-milliseconds` value is 5000 milliseconds, that is 5 seconds, so masters will be detected as failing as soon as we don't receive any reply from our pings within this amount of time.

Once you start the three Sentinels, you'll see a few messages they log, like:

```
+monitor master mymaster 127.0.0.1 6379 quorum 2
```

This is a Sentinel event, and you can receive this kind of events via Pub/Sub if you `SUBSCRIBE` to the event name, as specified later in the [*Pub/Sub Messages* section](#pubsub-messages).

Sentinel generates and logs different events during failure detection and failover.

### Asking Sentinel about the state of a master

The most obvious thing to do with Sentinel to get started, is check if the master it is monitoring is doing well:

```
$ redis-cli -p 5000
127.0.0.1:5000> SENTINEL master mymaster
 1) "name"
 2) "mymaster"
 3) "ip"
 4) "127.0.0.1"
 5) "port"
 6) "6379"
 7) "runid"
 8) "953ae6a589449c13ddefaee3538d356d287f509b"
 9) "flags"
10) "master"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "735"
19) "last-ping-reply"
20) "735"
21) "down-after-milliseconds"
22) "5000"
23) "info-refresh"
24) "126"
25) "role-reported"
26) "master"
27) "role-reported-time"
28) "532439"
29) "config-epoch"
30) "1"
31) "num-slaves"
32) "1"
33) "num-other-sentinels"
34) "2"
35) "quorum"
36) "2"
37) "failover-timeout"
38) "60000"
39) "parallel-syncs"
40) "1"
```

As you can see, it prints a number of information about the master. There are a few that are of particular interest for us:

1. `num-other-sentinels` is 2, so we know the Sentinel already detected two more Sentinels for this master. If you check the logs you'll see the `+sentinel` events generated.
2. `flags` is just `master`. If the master was down we could expect to see the `s_down` or `o_down` flag as well here.
3. `num-slaves` is correctly set to 1, so Sentinel also detected that there is an attached replica to our master.

In order to explore more about this instance, you may want to try the following two commands:

```
SENTINEL replicas mymaster
SENTINEL sentinels mymaster
```

The first will provide similar information about the replicas connected to the master, and the second about the other Sentinels.

### Obtaining the address of the current master

As
we already specified, Sentinel also acts as a configuration provider for clients that want to connect to a set of master and replicas. Because of possible failovers or reconfigurations, clients have no idea about who is the currently active master for a given set of instances, so Sentinel exports an API to ask this question:

```
127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6379"
```

### Testing the failover

At this point our toy Sentinel deployment is ready to be tested. We can just kill our master and check if the configuration changes. To do so we can just do:

```
redis-cli -p 6379 DEBUG sleep 30
```

This command will make our master no longer reachable, sleeping for 30 seconds. It basically simulates a master hanging for some reason.

If you check the Sentinel logs, you should be able to see a lot of action:

1. Each Sentinel detects the master is down with an `+sdown` event.
2. This event is later escalated to `+odown`, which means that multiple Sentinels agree about the fact the master is not reachable.
3. Sentinels vote for a Sentinel that will start the first failover attempt.
4. The failover happens.

If you ask again what is the current master address for `mymaster`, eventually we should get a different reply this time:

```
127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6380"
```

So far so good. At this point you may jump to create your Sentinel deployment, or can read more to understand all the Sentinel commands and internals.

## Sentinel API

Sentinel provides an API in order to inspect its state, check the health of monitored masters and replicas, subscribe in order to receive specific notifications, and change the Sentinel configuration at run time.

By default Sentinel runs using TCP port 26379 (note that 6379 is the normal Redis port). Sentinels accept commands using the Redis protocol, so you can use `redis-cli` or any other unmodified Redis client in order to talk with Sentinel.
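Because Sentinel speaks the ordinary Redis wire protocol (RESP), even a hand-rolled client can query it. The sketch below (our own helper function, not part of any Redis client library) builds the exact bytes a client sends for a `SENTINEL` command; to run the query for real, you would open a TCP connection to port 26379 and write these bytes:

```python
# Minimal illustration: encode a SENTINEL command as a RESP array of bulk
# strings -- the same bytes redis-cli sends to a Sentinel on port 26379.

def encode_resp_command(*parts: str) -> bytes:
    """Encode a command as a RESP array of bulk strings."""
    out = [b"*%d\r\n" % len(parts)]          # array header: element count
    for part in parts:
        data = part.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))  # bulk string
    return b"".join(out)

wire = encode_resp_command("SENTINEL", "get-master-addr-by-name", "mymaster")
# wire == b'*3\r\n$8\r\nSENTINEL\r\n$23\r\nget-master-addr-by-name\r\n$8\r\nmymaster\r\n'
print(wire)
```

In practice you would simply use `redis-cli -p 26379` or an existing client library; the point is only that no Sentinel-specific protocol support is needed.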
It is possible to directly query a Sentinel to check what is the state of the monitored Redis instances from its point of view, to see what other Sentinels it knows, and so forth. Alternatively, using Pub/Sub, it is possible to receive *push style* notifications from Sentinels, every time some event happens, like a failover, or an instance entering an error condition, and so forth.

## Sentinel commands

The `SENTINEL` command is the main API for Sentinel. The following is the list of its subcommands (minimal version is noted where applicable):

* **SENTINEL CONFIG GET `<name>`** (`>= 6.2`) Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command.
* **SENTINEL CONFIG SET `<name>` `<value>`** (`>= 6.2`) Set the value of a global Sentinel configuration parameter.
* **SENTINEL CKQUORUM `<master name>`** Check if the current Sentinel configuration is able to reach the quorum needed to failover a master, and the majority needed to authorize the failover. This command should be used in monitoring systems to check if a Sentinel deployment is ok.
* **SENTINEL FLUSHCONFIG** Force Sentinel to rewrite its configuration on disk, including the current Sentinel state. Normally Sentinel rewrites the configuration every time something changes in its state (in the context of the subset of the state which is persisted on disk across restart). However sometimes it is possible that the configuration file is lost because of operation errors, disk failures, package upgrade scripts or configuration managers. In those cases a way to force Sentinel to rewrite the configuration file is handy. This command works even if the previous configuration file is completely missing.
* **SENTINEL FAILOVER `<master name>`** Force a failover as if the master was not reachable, and without asking for agreement from other Sentinels (however a new version of the configuration will be published so that the other
Sentinels will update their configurations).
* **SENTINEL GET-MASTER-ADDR-BY-NAME `<master name>`** Return the ip and port number of the master with that name. If a failover is in progress or terminated successfully for this master, it returns the address and port of the promoted replica.
* **SENTINEL INFO-CACHE** (`>= 3.2`) Return cached `INFO` output from masters and replicas.
* **SENTINEL IS-MASTER-DOWN-BY-ADDR `<ip>` `<port>` `<current-epoch>` `<runid>`** Check if the master specified by ip:port is down from the current Sentinel's point of view. This command is mostly for internal use.
* **SENTINEL MASTER `<master name>`** Show the state and info of the specified master.
* **SENTINEL MASTERS** Show a list of monitored masters and their state.
* **SENTINEL MONITOR** Start Sentinel's monitoring. Refer to the [*Reconfiguring Sentinel at Runtime* section](#reconfiguring-sentinel-at-runtime) for more information.
* **SENTINEL MYID** (`>= 6.2`) Return the ID of the Sentinel instance.
* **SENTINEL PENDING-SCRIPTS** This command returns information about pending scripts.
* **SENTINEL REMOVE** Stop Sentinel's monitoring. Refer to the [*Reconfiguring Sentinel at Runtime* section](#reconfiguring-sentinel-at-runtime) for more information.
* **SENTINEL REPLICAS `<master name>`** (`>= 5.0`) Show a list of replicas for this master, and their state.
* **SENTINEL SENTINELS `<master name>`** Show a list of sentinel instances for this master, and their state.
* **SENTINEL SET** Set Sentinel's monitoring configuration. Refer to the [*Reconfiguring Sentinel at Runtime* section](#reconfiguring-sentinel-at-runtime) for more information.
* **SENTINEL SIMULATE-FAILURE** (`crash-after-election` | `crash-after-promotion` | `help`) (`>= 3.2`) This command simulates different Sentinel crash scenarios.
* **SENTINEL RESET `<pattern>`** This command will reset all the masters with matching name. The pattern argument is a glob-style pattern. The reset process clears any previous state in a master (including a
failover in progress), and removes every replica and sentinel already discovered and associated with the master.

For connection management and administration purposes, Sentinel supports the following subset of Redis' commands:

* **ACL** (`>= 6.2`) This command manages the Sentinel Access Control List. For more information refer to the [ACL](/topics/acl) documentation page and the [*Sentinel Access Control List authentication*](#sentinel-access-control-list-authentication) section.
* **AUTH** (`>= 5.0.1`) Authenticate a client connection. For more information refer to the `AUTH` command and the [*Configuring Sentinel instances with authentication* section](#configuring-sentinel-instances-with-authentication).
* **CLIENT** This command manages client connections. For more information refer to its subcommands' pages.
* **COMMAND** (`>= 6.2`) This command returns information about commands. For more information refer to the `COMMAND` command and its various subcommands.
* **HELLO** (`>= 6.0`) Switch the connection's protocol. For more information refer to the `HELLO` command.
* **INFO** Return information and statistics about the Sentinel server. For more information see the `INFO` command.
* **PING** This command simply returns PONG.
* **ROLE** This command returns the string "sentinel" and a list of monitored masters. For more information refer to the `ROLE` command.
* **SHUTDOWN** Shut down the Sentinel instance.

Lastly, Sentinel also supports the `SUBSCRIBE`, `UNSUBSCRIBE`, `PSUBSCRIBE` and `PUNSUBSCRIBE` commands. Refer to the [*Pub/Sub Messages* section](#pubsub-messages) for more details.

## Reconfiguring Sentinel at Runtime

Starting with Redis version 2.8.4, Sentinel provides an API in order to add, remove, or change the configuration of a given master. Note that if you have multiple sentinels you should apply the changes to all of your instances for Redis Sentinel to work properly. This means that changing the configuration of a single Sentinel does not
automatically propagate the changes to the other Sentinels in the network.

The following is a list of `SENTINEL` subcommands used in order to update the configuration of a Sentinel instance:

* **SENTINEL MONITOR `<name>` `<ip>` `<port>` `<quorum>`** This command tells the Sentinel to start monitoring a new master with the specified name, ip, port, and quorum. It is identical to the `sentinel monitor` configuration directive in the `sentinel.conf` configuration file, with the difference that you can't use a hostname as `ip`, but you need to provide an IPv4 or IPv6 address.
* **SENTINEL REMOVE `<name>`** is used in order to remove the specified master: the master will no longer be monitored, and will be totally removed from the internal state of the Sentinel, so it will no longer be listed by `SENTINEL masters` and so forth.
* **SENTINEL SET `<name>` [`<option>` `<value>` ...]** The SET command is very similar to the `CONFIG SET` command of Redis, and is used in order to change configuration parameters of a specific master. Multiple option/value pairs can be specified (or none at all). All the configuration parameters that can be configured via `sentinel.conf` are also configurable using the SET command.

The following is an example of the `SENTINEL SET` command used in order to modify the `down-after-milliseconds` configuration of a master called `objects-cache`:

```
SENTINEL SET objects-cache-master down-after-milliseconds 1000
```

As already stated, `SENTINEL SET` can be used to set all the configuration parameters that are settable in the startup configuration file. Moreover it is possible to change just the master quorum configuration without removing and re-adding the master with `SENTINEL REMOVE` followed by `SENTINEL MONITOR`, but simply using:

```
SENTINEL SET objects-cache-master quorum 5
```

Note that there is no equivalent GET command, since `SENTINEL MASTER` provides all the configuration parameters in a simple to parse format (as a field/value pairs array).

Starting
with Redis version 6.2, Sentinel also allows getting and setting global configuration parameters, which were only supported in the configuration file prior to that.

* **SENTINEL CONFIG GET `<name>`** Get the current value of a global Sentinel configuration parameter. The specified name may be a wildcard, similar to the Redis `CONFIG GET` command.
* **SENTINEL CONFIG SET `<name>` `<value>`** Set the value of a global Sentinel configuration parameter.

Global parameters that can be manipulated include:

* `resolve-hostnames`, `announce-hostnames`. See [*IP addresses and DNS names*](#ip-addresses-and-dns-names).
* `announce-ip`, `announce-port`. See [*Sentinel, Docker, NAT, and possible issues*](#sentinel-docker-nat-and-possible-issues).
* `sentinel-user`, `sentinel-pass`. See [*Configuring Sentinel instances with authentication*](#configuring-sentinel-instances-with-authentication).

## Adding or removing Sentinels

Adding a new Sentinel to your deployment is a simple process because of the auto-discover mechanism implemented by Sentinel. All you need to do is to start the new Sentinel configured to monitor the currently active master. Within 10 seconds the Sentinel will acquire the list of other Sentinels and the set of replicas attached to the master.

If you need to add multiple Sentinels at once, it is suggested to add them one after the other, waiting for all the other Sentinels to already know about the first one before adding the next. This is useful in order to still guarantee that majority can be achieved only in one side of a partition, in the event that failures should happen in the process of adding new Sentinels.

This can be easily achieved by adding every new Sentinel with a 30 seconds delay, and during absence of network partitions.

At the end of the process it is possible to use the command `SENTINEL MASTER mastername` in order to check if all the Sentinels agree about the total number of Sentinels monitoring the master.

Removing a Sentinel is a bit
more complex: **Sentinels never forget already seen Sentinels**, even if they are not reachable for a long time, since we don't want to dynamically change the majority needed to authorize a failover and the creation of a new configuration number. So in order to remove a Sentinel, the following steps should be performed in absence of network partitions:

1. Stop the Sentinel process of the Sentinel you want to remove.
2. Send a `SENTINEL RESET *` command to all the other Sentinel instances (instead of `*` you can use the exact master name if you want to reset just a single master). One after the other, waiting at least 30 seconds between instances.
3. Check that all the Sentinels agree about the number of Sentinels currently active, by inspecting the output of `SENTINEL MASTER mastername` of every Sentinel.

## Removing the old master or unreachable replicas

Sentinels never forget about replicas of a given master, even when they are unreachable for a long time. This is useful, because Sentinels should be able to correctly reconfigure a returning replica after a network partition or a failure event.

Moreover, after a failover, the failed over master is virtually added as a replica of the new master; this way it will be reconfigured to replicate with the new master as soon as it will be available again.

However sometimes you want to remove a replica (that may be the old master) forever from the list of replicas monitored by Sentinels.

In order to do this, you need to send a `SENTINEL RESET mastername` command to all the Sentinels: they'll refresh the list of replicas within the next 10 seconds, only adding the ones listed as correctly replicating from the current master `INFO` output.

## Pub/Sub messages

A client can use a Sentinel as a Redis compatible Pub/Sub server (but you can't use `PUBLISH`) in order to `SUBSCRIBE` or `PSUBSCRIBE` to channels and get notified about specific events.

The channel name is the same as the name of the event. For instance the
channel named `+sdown` will receive all the notifications related to instances entering an `SDOWN` (SDOWN means the instance is no longer reachable from the point of view of the Sentinel you are querying) condition.

To get all the messages simply subscribe using `PSUBSCRIBE *`.

The following is a list of channels and message formats you can receive using this API. The first word is the channel / event name, the rest is the format of the data.

Note: where *instance details* is specified it means that the following arguments are provided to identify the target instance:

```
<instance-type> <name> <ip> <port> @ <master-name> <master-ip> <master-port>
```

The part identifying the master (from the @ argument to the end) is optional and is only specified if the instance is not a master itself.

* **+reset-master** `<instance details>` -- The master was reset.
* **+slave** `<instance details>` -- A new replica was detected and attached.
* **+failover-state-reconf-slaves** `<instance details>` -- Failover state changed to `reconf-slaves` state.
* **+failover-detected** `<instance details>` -- A failover started by another Sentinel or any other external entity was detected (an attached replica turned into a master).
* **+slave-reconf-sent** `<instance details>` -- The leader sentinel sent the `REPLICAOF` command to this instance in order to reconfigure it for the new replica.
* **+slave-reconf-inprog** `<instance details>` -- The replica being reconfigured showed to be a replica of the new master ip:port pair, but the synchronization process is not yet complete.
* **+slave-reconf-done** `<instance details>` -- The replica is now synchronized with the new master.
* **-dup-sentinel** `<instance details>` -- One or more sentinels for the specified master were removed as duplicated (this happens for instance when a Sentinel instance is restarted).
* **+sentinel** `<instance details>` -- A new sentinel for this master was detected and attached.
* **+sdown** `<instance details>` -- The specified instance is now in Subjectively Down state.
* **-sdown** `<instance details>` -- The specified instance is no longer in Subjectively Down state.
* **+odown** `<instance details>` -- The specified instance is now in Objectively Down state.
* **-odown** `<instance details>` -- The specified instance is no longer in Objectively Down state.
* **+new-epoch** `<instance details>` -- The current epoch was updated.
* **+try-failover** `<instance details>` -- New failover in progress, waiting to be elected by the majority.
* **+elected-leader** `<instance details>` -- Won the election for the specified epoch, can do the failover.
* **+failover-state-select-slave** `<instance details>` -- New failover state is `select-slave`: we are trying to find a suitable replica for promotion.
* **no-good-slave** `<instance details>` -- There is no good replica to promote. Currently we'll try after some time, but probably this will change and the state machine will abort the failover at all in this case.
* **selected-slave** `<instance details>` -- We found the specified good replica to promote.
* **failover-state-send-slaveof-noone** `<instance details>` -- We are trying to reconfigure the promoted replica as master, waiting for it to switch.
* **failover-end-for-timeout** `<instance details>` -- The failover terminated for timeout; replicas will eventually be configured to replicate with the new master anyway.
* **failover-end** `<instance details>` -- The failover terminated with success. All the replicas appear to be reconfigured to replicate with the new master.
* **switch-master** `<master name> <oldip> <oldport> <newip> <newport>` -- The master new IP and address is the specified one after a configuration change. This is **the message most external users are interested in**.
* **+tilt** -- Tilt mode entered.
* **-tilt** -- Tilt mode exited.

## Handling of -BUSY state

The -BUSY error is returned by a Redis instance when a Lua script is running for more time
Handling of -BUSY state
---

The -BUSY error is returned by a Redis instance when a Lua script is running for more time than the configured Lua script time limit. When this happens, before triggering a failover Redis Sentinel will try to send a `SCRIPT KILL` command, that will only succeed if the script was read-only.

If the instance is still in an error condition after this try, it will eventually be failed over.

Replicas priority
---

Redis instances have a configuration parameter called `replica-priority`. This information is exposed by Redis replica instances in their `INFO` output, and Sentinel uses it in order to pick a replica among the ones that can be used in order to failover a master:

1. If the replica priority is set to 0, the replica is never promoted to master.
2. Replicas with a *lower* priority number are preferred by Sentinel.

For example, if there is a replica S1 in the same data center as the current master, and another replica S2 in another data center, it is possible to set S1 with a priority of 10 and S2 with a priority of 100, so that if the master fails and both S1 and S2 are available, S1 will be preferred.

For more information about the way replicas are selected, please check the [Replica selection and priority](#replica-selection-and-priority) section of this documentation.

Sentinel and Redis authentication
---

When the master is configured to require authentication from clients, as a security measure, replicas need to also be aware of the credentials in order to authenticate with the master and create the master-replica connection used for the asynchronous replication protocol.

Redis Access Control List authentication
---

Starting with Redis 6, user authentication and permission is managed with the [Access Control List (ACL)](/topics/acl).

In order for Sentinels to connect to Redis server instances when they are configured with ACL, the Sentinel configuration must include the following directives:

    sentinel auth-user <master-name> <username>
    sentinel auth-pass <master-name> <password>

Where `<username>` and `<password>` are the username and password for accessing the group's instances. These credentials should be provisioned on all of the group's Redis instances with the minimal control permissions. For example:

    127.0.0.1:6379> ACL SETUSER sentinel-user ON >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill

Redis password-only authentication
---

Until Redis 6, authentication is achieved using the following configuration directives:

* `requirepass` in the master, in order to set the authentication password, and to make sure the instance will not process requests for non authenticated clients.
* `masterauth` in the replicas in order for the replicas to authenticate with the master in order to correctly replicate data from it.

When Sentinel is used, there is not a single master, since after a failover replicas may play the role of masters, and old masters can be reconfigured in order to act as replicas, so what you want to do is to set the above directives in all your instances, both masters and replicas.

This is also usually a sane setup since you don't want to protect data only in the master, having the same data accessible in the replicas.

However, in the uncommon case where you need a replica that is accessible without authentication, you can still do it by setting up **a replica priority of zero**, to prevent this replica from being promoted to master, and configuring in this replica only the `masterauth` directive, without using the `requirepass` directive, so that data will be readable by unauthenticated clients.

In order for Sentinels to connect to Redis server instances when they are configured with `requirepass`, the Sentinel configuration must include the `sentinel auth-pass` directive, in the format:

    sentinel auth-pass <master-name> <password>
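Putting the directives together, a minimal `sentinel.conf` fragment for a password-protected group could look like the following sketch (the master name `mymaster`, the address, the quorum of 2 and the password are placeholder values):

```
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel auth-pass mymaster your_password_here
```

The matching Redis instances would then set `requirepass your_password_here` on the master and `masterauth your_password_here` on the replicas, as described above.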
Configuring Sentinel instances with authentication
---

Sentinel instances themselves can be secured by requiring clients to authenticate via the `AUTH` command. Starting with Redis 6.2, the [Access Control List (ACL)](/topics/acl) is available, whereas previous versions (starting with Redis 5.0.1) support password-only authentication.

Note that Sentinel's authentication configuration should be **applied to each of the instances** in your deployment, and **all instances should use the same configuration**. Furthermore, ACL and password-only authentication should not be used together.

Sentinel Access Control List authentication
---

The first step in securing a Sentinel instance with ACL is preventing any unauthorized access to it. To do that, you'll need to disable the default superuser (or at the very least set it up with a strong password) and create a new one and allow it access to Pub/Sub channels:

    127.0.0.1:5000> ACL SETUSER admin ON >admin-password allchannels +@all
    OK
    127.0.0.1:5000> ACL SETUSER default off
    OK

The default user is used by Sentinel to connect to other instances. You can provide the credentials of another superuser with the following configuration directives:

    sentinel sentinel-user <username>
    sentinel sentinel-pass <password>

Where `<username>` and `<password>` are the Sentinel's superuser and password, respectively (e.g. `admin` and `admin-password` in the example above).

Lastly, for authenticating incoming client connections, you can create a Sentinel restricted user profile such as the following:

    127.0.0.1:5000> ACL SETUSER sentinel-user ON >user-password -@all +auth +client|getname +client|id +client|setname +command +hello +ping +role +sentinel|get-master-addr-by-name +sentinel|master +sentinel|myid +sentinel|replicas +sentinel|sentinels

Refer to the documentation of your Sentinel client of choice for further information.

Sentinel password-only authentication
---

To use Sentinel with password-only authentication, add the `requirepass` configuration directive to **all** your Sentinel instances as follows:

    requirepass "your_password_here"

When configured this way, Sentinels will do two things:

1. A password will be required from clients in order to send commands to Sentinels. This is obvious since this is how such configuration directive works in Redis in general.
2. Moreover the same password configured to access the local Sentinel will be used by this Sentinel instance in order to authenticate to all the other Sentinel instances it connects to.

This means that **you will have to configure the same `requirepass` password in all the Sentinel instances**. This way every Sentinel can talk with every other Sentinel without any need to configure for each Sentinel the password to access all the other Sentinels, which would be very impractical.

Before using this configuration, make sure your client library can send the `AUTH` command to Sentinel instances.

Sentinel clients implementation
---

Sentinel requires explicit client support, unless the system is configured to execute a script that performs a transparent redirection of all the requests to the new master instance (virtual IP or other similar systems). The topic of client libraries implementation is covered in the document [Sentinel clients guidelines](/topics/sentinel-clients).

More advanced concepts
---

In the following sections we'll cover a few details about how Sentinel works, without resorting to implementation details and algorithms that will be covered in the final part of this document.

SDOWN and ODOWN failure state
---

Redis Sentinel has two different concepts of *being down*: one is called a *Subjectively Down* condition (SDOWN) and is a down condition that is local to a given Sentinel instance. Another is called *Objectively Down* condition (ODOWN) and is reached when enough Sentinels (at least the number configured as the `quorum` parameter of the monitored master) have an SDOWN condition, and get feedback from other Sentinels using the `SENTINEL is-master-down-by-addr` command.
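As a toy model of the promotion just described (this is not Sentinel's actual implementation): a Sentinel that has flagged a master as SDOWN asks its peers, and promotes the flag to ODOWN only when the number of agreeing Sentinels, itself included, reaches the configured quorum.

```python
def odown_reached(local_sdown: bool, peer_sdown_reports: int, quorum: int) -> bool:
    """Toy SDOWN -> ODOWN promotion: the local Sentinel's own view plus
    the positive replies gathered from other Sentinels must reach the
    quorum configured for the monitored master."""
    if not local_sdown:          # ODOWN always implies a local SDOWN
        return False
    return 1 + peer_sdown_reports >= quorum

# With quorum 2, one agreeing peer is enough; with quorum 5 it is not.
print(odown_reached(True, 1, 2))   # True
print(odown_reached(True, 1, 5))   # False
```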
From the point of view of a Sentinel, an SDOWN condition is reached when it does not receive a valid reply to PING requests for the number of seconds specified in the configuration as the `down-after-milliseconds` parameter.

An acceptable reply to PING is one of the following:

* PING replied with +PONG.
* PING replied with -LOADING error.
* PING replied with -MASTERDOWN error.

Any other reply (or no reply at all) is considered non valid. However note that **a logical master that advertises itself as a replica in the INFO output is considered to be down**.

Note that SDOWN requires that no acceptable reply is received for the whole interval configured, so for instance if the interval is 30000 milliseconds (30 seconds) and we receive an acceptable ping reply every 29 seconds, the instance is considered to be working.

SDOWN is not enough to trigger a failover: it only means a single Sentinel believes a Redis instance is not available. To trigger a failover, the ODOWN state must be reached.

To switch from SDOWN to ODOWN no strong consensus algorithm is used, but just a form of gossip: if a given Sentinel gets reports that a master is not working from enough Sentinels **in a given time range**, the SDOWN is promoted to ODOWN. If this acknowledgment later goes missing, the flag is cleared.

A stricter authorization that uses an actual majority is required in order to really start the failover, but no failover can be triggered without reaching the ODOWN state.

The ODOWN condition **only applies to masters**. For other kinds of instances Sentinel doesn't require to act, so the ODOWN state is never reached for replicas and other sentinels, but only SDOWN is.

However SDOWN has also semantic implications. For example a replica in SDOWN state is not selected to be promoted by a Sentinel performing a failover.

Sentinels and replicas auto discovery
---

Sentinels stay connected with other Sentinels in order to reciprocally check the availability of each other, and to exchange messages. However you don't need to configure a list of other Sentinel addresses in every Sentinel instance you run, as Sentinel uses the Redis instances' Pub/Sub capabilities in order to discover the other Sentinels that are monitoring the same masters and replicas.

This feature is implemented by sending *hello messages* into the channel named `__sentinel__:hello`.

Similarly you don't need to configure the list of the replicas attached to a master, as Sentinel will auto discover this list querying Redis.

* Every Sentinel publishes a message to every monitored master and replica Pub/Sub channel `__sentinel__:hello`, every two seconds, announcing its presence with ip, port, runid.
* Every Sentinel is subscribed to the Pub/Sub channel `__sentinel__:hello` of every master and replica, looking for unknown sentinels. When new sentinels are detected, they are added as sentinels of this master.
* Hello messages also include the full current configuration of the master. If the receiving Sentinel has a configuration for a given master which is older than the one received, it updates to the new configuration immediately.
* Before adding a new sentinel to a master a Sentinel always checks if there is already a sentinel with the same runid or the same address (ip and port pair). In that case all the matching sentinels are removed, and the new one added.

Sentinel reconfiguration of instances outside the failover procedure
---

Even when no failover is in progress, Sentinels will always try to set the current configuration on monitored instances. Specifically:

* Replicas (according to the current configuration) that claim to be masters, will be configured as replicas to replicate with the current master.
* Replicas connected to a wrong master, will be reconfigured to replicate with the right master.
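The two checks above can be sketched as a single predicate over an instance that the current configuration lists as a replica (a toy model; the record fields and function name are my own, not Sentinel's internals):

```python
def needs_reconfig(instance: dict, master_addr: tuple) -> bool:
    """An instance needs to be reconfigured when it claims to be a
    master while Sentinel expects a replica, or when it replicates
    from an address other than the current master's."""
    if instance["role"] == "master":
        return True  # a supposed replica claiming to be a master
    return instance["master_addr"] != master_addr

current_master = ("192.168.1.50", 6379)
print(needs_reconfig({"role": "master", "master_addr": None}, current_master))                    # True
print(needs_reconfig({"role": "slave", "master_addr": ("10.0.0.9", 6379)}, current_master))       # True: wrong master
print(needs_reconfig({"role": "slave", "master_addr": ("192.168.1.50", 6379)}, current_master))   # False
```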
For Sentinels to reconfigure replicas, the wrong configuration must be observed for some time, that is greater than the period used to broadcast new configurations. This prevents Sentinels with a stale configuration (for example because they just rejoined from a partition) from trying to change the replicas' configuration before receiving an update.

Also note how the semantics of always trying to impose the current configuration makes the failover more resistant to partitions:

* Masters failed over are reconfigured as replicas when they return available.
* Replicas partitioned away during a partition are reconfigured once reachable.

The important lesson to remember about this section is: **Sentinel is a system where each process will always try to impose the last logical configuration to the set of monitored instances**.

Replica selection and priority
---

When a Sentinel instance is ready to perform a failover, since the master is in `ODOWN` state and the Sentinel received the authorization to failover from the majority of the Sentinel instances known, a suitable replica needs to be selected.

The replica selection process evaluates the following information about replicas:

1. Disconnection time from the master.
2. Replica priority.
3. Replication offset processed.
4. Run ID.

A replica that is found to be disconnected from the master for more than ten times the configured master timeout (`down-after-milliseconds` option), plus the time the master is also not available from the point of view of the Sentinel doing the failover, is considered to be not suitable for the failover and is skipped.

In more rigorous terms, a replica whose `INFO` output suggests it has been disconnected from the master for more than:

    (down-after-milliseconds * 10) + milliseconds_since_master_is_in_SDOWN_state

is considered to be unreliable and is disregarded entirely.

The replica selection only considers the replicas that passed the above test, and sorts them based on the above criteria, in the following order:

1. The replicas are sorted by `replica-priority` as configured in the `redis.conf` file of the Redis instance. A lower priority will be preferred.
2. If the priority is the same, the replication offset processed by the replica is checked, and the replica that received more data from the master is selected.
3. If multiple replicas have the same priority and processed the same data from the master, a further check is performed, selecting the replica with the lexicographically smaller run ID. Having a lower run ID is not a real advantage for a replica, but is useful in order to make the process of replica selection more deterministic, instead of resorting to select a random replica.

In most cases, `replica-priority` does not need to be set explicitly so all instances will use the same default value. If there is a particular fail-over preference, `replica-priority` must be set on all your instances, including masters, as a master may become a replica at some future point in time, and it will then need the proper `replica-priority` settings.

A Redis instance can be configured with a special `replica-priority` of zero in order to be **never selected** by Sentinels as the new master. However a replica configured in this way will still be reconfigured by Sentinels in order to replicate with the new master after a failover, the only difference is that it will never become a master itself.
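The three-step ordering described above maps naturally onto a composite sort key. A minimal sketch (the candidate fields are my own naming, and the disconnection filter and the priority-zero exclusion are assumed to have run already):

```python
def select_replica(candidates: list[dict]) -> dict:
    """Pick the promotion candidate: lowest replica-priority first,
    then highest replication offset, then lexicographically smallest
    run ID as the deterministic tie-breaker."""
    return min(
        candidates,
        key=lambda r: (r["priority"], -r["offset"], r["runid"]),
    )

replicas = [
    {"priority": 100, "offset": 5000, "runid": "c"},
    {"priority": 10,  "offset": 4000, "runid": "b"},
    {"priority": 10,  "offset": 4000, "runid": "a"},
]
print(select_replica(replicas)["runid"])  # "a": same priority and offset, smaller run ID wins
```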
Algorithms and internals
---

In the following sections we will explore the details of Sentinel behavior. It is not strictly needed for users to be aware of all the details, but a deep understanding of Sentinel may help to deploy and operate Sentinel in a more effective way.

Quorum
---

The previous sections showed that every master monitored by Sentinel is associated to a configured **quorum**. It specifies the number of Sentinel processes that need to agree about the unreachability or error condition of the master in order to trigger a failover.

However, after the failover is triggered, in order for the failover to actually be performed, **at least a majority of Sentinels must authorize the Sentinel to failover**. Sentinel never performs a failover in the partition where a minority of Sentinels exist.

Let's try to make things a bit more clear:

* Quorum: the number of Sentinel processes that need to detect an error condition in order for a master to be flagged as **ODOWN**.
* The failover is triggered by the **ODOWN** state.
* Once the failover is triggered, the Sentinel trying to failover is required to ask for authorization to a majority of Sentinels (or more than the majority if the quorum is set to a number greater than the majority).

The difference may seem subtle but is actually quite simple to understand and use. For example if you have 5 Sentinel instances, and the quorum is set to 2, a failover will be triggered as soon as 2 Sentinels believe that the master is not reachable, however one of the two Sentinels will be able to failover only if it gets authorization at least from 3 Sentinels.

If instead the quorum is configured to 5, all the Sentinels must agree about the master error condition, and the authorization from all Sentinels is required in order to failover.

This means that the quorum can be used to tune Sentinel in two ways:

1. If a quorum is set to a value smaller than the majority of Sentinels we deploy, we are basically making Sentinel more sensitive to master failures, triggering a failover as soon as even just a minority of Sentinels is no longer able to talk with the master.
2. If a quorum is set to a value greater than the majority of Sentinels, we are making Sentinel able to failover only when there is a very large number (larger than majority) of well connected Sentinels which agree about the master being down.

Configuration epochs
---

Sentinels require to get authorizations from a majority in order to start a failover for a few important reasons:

When a Sentinel is authorized, it gets a unique **configuration epoch** for the master it is failing over. This is a number that will be used to version the new configuration after the failover is completed. Because a majority agreed that a given version was assigned to a given Sentinel, no other Sentinel will be able to use it. This means that every configuration of every failover is versioned with a unique version. We'll see why this is so important.

Moreover Sentinels have a rule: if a Sentinel voted another Sentinel for the failover of a given master, it will wait some time to try to failover the same master again. This delay is the `2 * failover-timeout` you can configure in `sentinel.conf`. This means that Sentinels will not try to failover the same master at the same time: the first to ask to be authorized will try, if it fails another will try after some time, and so forth.

Redis Sentinel guarantees the *liveness* property that if a majority of Sentinels are able to talk, eventually one will be authorized to failover if the master is down.

Redis Sentinel also guarantees the *safety* property that every Sentinel will failover the same master using a different *configuration epoch*.

Configuration propagation
---

Once a Sentinel is able to failover a master successfully, it will start to broadcast the new configuration so that the other Sentinels will update their information about a given master.

For a failover to be considered successful, it requires that the Sentinel was able to send the `REPLICAOF NO ONE` command to the selected replica, and that the switch to master was later observed in the `INFO` output of the master.

At this point, even if the reconfiguration of the replicas is in progress, the failover is considered to be successful, and all the Sentinels are required to start reporting the new configuration.

The way a new configuration is propagated is the reason why we need that every Sentinel failover is authorized with a different version number (configuration epoch).
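Because every failover's configuration carries a unique epoch, reconciling two advertised configurations reduces to an epoch comparison. A minimal sketch (the field names are my own, not Sentinel's wire format):

```python
def merge_config(current: dict, received: dict) -> dict:
    """Adopt an advertised master configuration only when its
    configuration epoch is strictly greater than the local one."""
    return received if received["epoch"] > current["epoch"] else current

# The example used below in the text: mymaster moves from
# 192.168.1.50:6379 (version 1) to 192.168.1.50:9000 (version 2).
local = {"epoch": 1, "master": ("192.168.1.50", 6379)}
hello = {"epoch": 2, "master": ("192.168.1.50", 9000)}
print(merge_config(local, hello)["master"])  # ('192.168.1.50', 9000)
```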
Every Sentinel continuously broadcasts its version of the configuration of a master using Redis Pub/Sub messages, both in the master and all the replicas. At the same time all the Sentinels wait for messages to see what is the configuration advertised by the other Sentinels.

Configurations are broadcast in the `__sentinel__:hello` Pub/Sub channel.

Because every configuration has a different version number, the greater version always wins over smaller versions.

So for example the configuration for the master `mymaster` starts with all the Sentinels believing the master is at 192.168.1.50:6379. This configuration has version 1. After some time a Sentinel is authorized to failover with version 2. If the failover is successful, it will start to broadcast a new configuration, let's say 192.168.1.50:9000, with version 2. All the other instances will see this configuration and will update their configuration accordingly, since the new configuration has the greater version.

This means that Sentinel guarantees a second liveness property: a set of Sentinels that are able to communicate will all converge to the same configuration with the higher version number.

Basically if the net is partitioned, every partition will converge to the higher local configuration. In the special case of no partitions, there is a single partition and every Sentinel will agree about the configuration.

Consistency under partitions
---

Redis Sentinel configurations are eventually consistent, so every partition will converge to the higher configuration available. However in a real-world system using Sentinel there are three different players:

* Redis instances.
* Sentinel instances.
* Clients.

In order to define the behavior of the system we have to consider all three.

The following is a simple network where there are 3 nodes, each running a Redis instance, and a Sentinel instance:

                 +-------------+
                 | Sentinel 1  |----- Client A
                 | Redis 1 (M) |
                 +-------------+
                         |
                         |
    +-------------+      |           +------------+
    | Sentinel 2  |------+-- // -----| Sentinel 3 |----- Client B
    | Redis 2 (S) |                  | Redis 3 (M)|
    +-------------+                  +------------+

In this system the original state was that Redis 3 was the master, while Redis 1 and 2 were replicas. A partition occurred isolating the old master. Sentinels 1 and 2 started a failover promoting Redis 1 as the new master.

The Sentinel properties guarantee that Sentinel 1 and 2 now have the new configuration for the master. However Sentinel 3 still has the old configuration since it lives in a different partition.

We know that Sentinel 3 will get its configuration updated when the network partition heals, however what happens during the partition if there are clients partitioned with the old master?

Clients will still be able to write to Redis 3, the old master. When the partition rejoins, Redis 3 will be turned into a replica of Redis 1, and all the data written during the partition will be lost.

Depending on your configuration you may or may not want this scenario to happen:

* If you are using Redis as a cache, it could be handy that Client B is still able to write to the old master, even if its data will be lost.
* If you are using Redis as a store, this is not good and you need to configure the system in order to partially prevent this problem.

Since Redis is asynchronously replicated, there is no way to totally prevent data loss in this scenario, however you can bound the divergence between Redis 3 and Redis 1 using the following Redis configuration options:

    min-replicas-to-write 1
    min-replicas-max-lag 10

With the above configuration (please see the self-commented `redis.conf` example in the Redis distribution for more information) a Redis instance, when acting as a master, will stop accepting writes if it can't write to at least 1 replica. Since replication is asynchronous, *not being able to write* actually means that the replica is either disconnected, or is not sending us asynchronous acknowledges for more than the specified `min-replicas-max-lag` number of seconds.
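As a toy illustration of that bound (this is not Redis's implementation): a master counts only replicas whose last acknowledge is recent enough, and refuses writes when too few remain.

```python
def accepts_writes(replica_lags: list[float],
                   min_replicas_to_write: int = 1,
                   min_replicas_max_lag: float = 10.0) -> bool:
    """A master keeps accepting writes only while enough replicas have
    acknowledged within the configured lag window (in seconds)."""
    good = sum(1 for lag in replica_lags if lag <= min_replicas_max_lag)
    return good >= min_replicas_to_write

print(accepts_writes([2.0]))    # True: one replica, 2s lag
print(accepts_writes([12.5]))   # False: the only replica is lagging > 10s
print(accepts_writes([]))       # False: partitioned away from all replicas
```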
Using this configuration, the Redis 3 in the above example will become unavailable after 10 seconds. When the partition heals, the Sentinel 3 configuration will converge to the new one, and Client B will be able to fetch a valid configuration and continue.

In general Redis + Sentinel as a whole are an **eventually consistent system** where the merge function is **last failover wins**, and the data from old masters are discarded to replicate the data of the current master, so there is always a window for losing acknowledged writes. This is due to Redis asynchronous replication and the discarding nature of the "virtual" merge function of the system. Note that this is not a limitation of Sentinel itself, and if you orchestrate the failover with a strongly consistent replicated state machine, the same properties will still apply. There are only two ways to avoid losing acknowledged writes:

1. Use synchronous replication (and a proper consensus algorithm to run a replicated state machine).
2. Use an eventually consistent system where different versions of the same object can be merged.

Redis currently is not able to use any of the above systems, and it is currently outside the development goals. However there are proxies implementing solution "2" on top of Redis stores such as SoundCloud [Roshi](https://github.com/soundcloud/roshi), or Netflix [Dynomite](https://github.com/Netflix/dynomite).

Sentinel persistent state
---

Sentinel state is persisted in the sentinel configuration file. For example every time a new configuration is received, or created (leader Sentinels), for a master, the configuration is persisted on disk together with the configuration epoch. This means that it is safe to stop and restart Sentinel processes.

TILT mode
---

Redis Sentinel is heavily dependent on the computer time: for instance in order to understand if an instance is available it remembers the time of the latest successful reply to the PING command, and compares it with the current time to understand how old it is.

However if the computer time changes in an unexpected way, or if the computer is very busy, or the process blocked for some reason, Sentinel may start to behave in an unexpected way.

The TILT mode is a special "protection" mode that a Sentinel can enter when something odd is detected that can lower the reliability of the system. The Sentinel timer interrupt is normally called 10 times per second, so we expect that more or less 100 milliseconds will elapse between two calls to the timer interrupt.

What a Sentinel does is to register the previous time the timer interrupt was called, and compare it with the current call: if the time difference is negative or unexpectedly big (2 seconds or more) the TILT mode is entered (or if it was already entered the exit from the TILT mode is postponed).

When in TILT mode the Sentinel will continue to monitor everything, but:

* It stops acting at all.
* It starts to reply negatively to `SENTINEL is-master-down-by-addr` requests as the ability to detect a failure is no longer trusted.

If everything appears to be normal for 30 seconds, the TILT mode is exited.

In the Sentinel TILT mode, if we send the INFO command, we could get the following response:

    $ redis-cli -p 26379
    127.0.0.1:26379> info
    (Other information from Sentinel server skipped.)

    # Sentinel
    sentinel_masters:1
    sentinel_tilt:0
    sentinel_tilt_since_seconds:-1
    sentinel_running_scripts:0
    sentinel_scripts_queue_length:0
    sentinel_simulate_failure_flags:0
    master0:name=mymaster,status=ok,address=127.0.0.1:6379,slaves=0,sentinels=1

The field "sentinel_tilt_since_seconds" indicates how many seconds the Sentinel has already been in TILT mode. If it is not in TILT mode, the value will be -1.
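The clock check just described can be sketched as follows (a toy model; the 2-second threshold comes from the text above):

```python
def should_tilt(prev_tick: float, now: float, threshold: float = 2.0) -> bool:
    """Enter (or stay in) TILT mode when the time elapsed since the
    previous timer interrupt is negative (clock jumped backwards) or
    unexpectedly big (process stalled, or clock jumped forwards)."""
    delta = now - prev_tick
    return delta < 0 or delta >= threshold

print(should_tilt(100.0, 100.1))  # False: ~100ms, the expected cadence
print(should_tilt(100.0, 103.0))  # True: 3s gap, scheduler stall or clock jump
print(should_tilt(100.0, 99.0))   # True: clock moved backwards
```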
Note that in some ways TILT mode could be replaced using the monotonic clock API that many kernels offer. However it is still not clear whether this is a good solution, since the current system avoids issues in case the process is just suspended or not executed by the scheduler for a long time.

A note about the word slave used in this man page
---

Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.
{"questions":"redis weight 2 docs manual config Overview of redis conf the Redis configuration file aliases Configuration title Redis configuration","answers":"---\ntitle: \"Redis configuration\"\nlinkTitle: \"Configuration\"\nweight: 2\ndescription: >\n    Overview of redis.conf, the Redis configuration file\naliases: [\n    \/docs\/manual\/config\n    ]\n\n---\n\nRedis is able to start without a configuration file using a built-in default\nconfiguration, however this setup is only recommended for testing and\ndevelopment purposes.\n\nThe proper way to configure Redis is by providing a Redis configuration file,\nusually called `redis.conf`.\n\nThe `redis.conf` file contains a number of directives that have a very simple\nformat:\n\n    keyword argument1 argument2 ... argumentN\n\nThis is an example of a configuration directive:\n\n    replicaof 127.0.0.1 6380\n\nIt is possible to provide strings containing spaces as arguments using\n(double or single) quotes, as in the following example:\n\n    requirepass \"hello world\"\n\nSingle-quoted string can contain characters escaped by backslashes, and\ndouble-quoted strings can additionally include any ASCII symbols encoded using\nbackslashed hexadecimal notation \"\\\\xff\".\n\nThe list of configuration directives, and their meaning and intended usage\nis available in the self documented example redis.conf shipped into the\nRedis distribution.\n\n* The self documented [redis.conf for Redis 7.2](https:\/\/raw.githubusercontent.com\/redis\/redis\/7.2\/redis.conf).\n* The self documented [redis.conf for Redis 7.0](https:\/\/raw.githubusercontent.com\/redis\/redis\/7.0\/redis.conf).\n* The self documented [redis.conf for Redis 6.2](https:\/\/raw.githubusercontent.com\/redis\/redis\/6.2\/redis.conf).\n* The self documented [redis.conf for Redis 6.0](https:\/\/raw.githubusercontent.com\/redis\/redis\/6.0\/redis.conf).\n* The self documented [redis.conf for Redis 
5.0](https:\/\/raw.githubusercontent.com\/redis\/redis\/5.0\/redis.conf).\n* The self documented [redis.conf for Redis 4.0](https:\/\/raw.githubusercontent.com\/redis\/redis\/4.0\/redis.conf).\n* The self documented [redis.conf for Redis 3.2](https:\/\/raw.githubusercontent.com\/redis\/redis\/3.2\/redis.conf).\n* The self documented [redis.conf for Redis 3.0](https:\/\/raw.githubusercontent.com\/redis\/redis\/3.0\/redis.conf).\n* The self documented [redis.conf for Redis 2.8](https:\/\/raw.githubusercontent.com\/redis\/redis\/2.8\/redis.conf).\n* The self documented [redis.conf for Redis 2.6](https:\/\/raw.githubusercontent.com\/redis\/redis\/2.6\/redis.conf).\n* The self documented [redis.conf for Redis 2.4](https:\/\/raw.githubusercontent.com\/redis\/redis\/2.4\/redis.conf).\n\nPassing arguments via the command line\n---\n\nYou can also pass Redis configuration parameters\nusing the command line directly. This is very useful for testing purposes.\nThe following is an example that starts a new Redis instance using port 6380\nas a replica of the instance running at 127.0.0.1 port 6379.\n\n    .\/redis-server --port 6380 --replicaof 127.0.0.1 6379\n\nThe format of the arguments passed via the command line is exactly the same\nas the one used in the redis.conf file, with the exception that the keyword\nis prefixed with `--`.\n\nNote that internally this generates an in-memory temporary config file\n(possibly concatenating the config file passed by the user, if any) where\narguments are translated into the format of redis.conf.\n\nChanging Redis configuration while the server is running\n---\n\nIt is possible to reconfigure Redis on the fly without stopping and restarting\nthe service, or querying the current configuration programmatically using the\nspecial commands `CONFIG SET` and `CONFIG GET`.\n\nNot all of the configuration directives are supported in this way, but most\nare supported as expected.\nPlease refer to the `CONFIG SET` and `CONFIG GET` pages for more 
information.\n\nNote that modifying the configuration on the fly **has no effect on the\nredis.conf file**, so at the next restart of Redis the old configuration will\nbe used instead.\n\nMake sure to also modify the `redis.conf` file to match the configuration\nyou set using `CONFIG SET`.\nYou can do it manually, or you can use `CONFIG REWRITE`, which will automatically scan your `redis.conf` file and update the fields which don't match the current configuration value.\nFields that do not exist in the file but are set to their default value are not added.\nComments inside your configuration file are retained.\n\nConfiguring Redis as a cache\n---\n\nIf you plan to use Redis as a cache where every key will have an\nexpire set, you may consider using the following configuration instead\n(assuming a max memory limit of 2 megabytes as an example):\n\n    maxmemory 2mb\n    maxmemory-policy allkeys-lru\n\nIn this configuration there is no need for the application to set a\ntime to live for keys using the `EXPIRE` command (or equivalent) since\nall the keys will be evicted using an approximated LRU algorithm as soon\nas we hit the 2 megabyte memory limit.\n\nBasically, in this configuration Redis acts in a similar way to memcached.\nWe have more extensive documentation about using Redis as an LRU cache [here](\/topics\/lru-cache).","site":"redis"}
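The `allkeys-lru` eviction just described can be sketched in miniature. The following is an illustrative model only (the `LRUCache` class and its key-count capacity are inventions for this sketch): it implements an exact LRU, whereas Redis uses an approximated, sampling-based LRU and caps memory in bytes rather than number of keys.

```python
from collections import OrderedDict

# Conceptual sketch of allkeys-lru: an exact LRU cache keyed on recency.
# Redis itself approximates LRU by sampling keys, and enforces maxmemory
# in bytes; this toy caps the number of entries instead.
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()  # oldest (least recently used) first

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)  # refresh recency on overwrite
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used key

    def get(self, key):
        if key not in self.data:
            return None  # evicted or never set
        self.data.move_to_end(key)  # reads also refresh recency
        return self.data[key]

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # touch "a": "b" is now the least recently used
cache.set("c", 3)      # over capacity: "b" is evicted
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

As in the memcached-like setup above, the application never sets a time to live; writes simply displace the coldest keys, which is what `set` does here once capacity is exceeded.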
{"questions":"redis weight 7 How Redis writes data to disk aliases topics persistence title Redis persistence docs manual persistence Persistence topics persistence md docs manual persistence md","answers":"---\ntitle: Redis persistence\nlinkTitle: Persistence\nweight: 7\ndescription: How Redis writes data to disk\naliases: [\n    \/topics\/persistence,\n    \/topics\/persistence.md,\n    \/docs\/manual\/persistence,\n    \/docs\/manual\/persistence.md\n]\n---\n\nPersistence refers to the writing of data to durable storage, such as a solid-state disk (SSD). Redis provides a range of persistence options. These include:\n\n* **RDB** (Redis Database): RDB persistence performs point-in-time snapshots of your dataset at specified intervals.\n* **AOF** (Append Only File): AOF persistence logs every write operation received by the server. These operations can then be replayed again at server startup, reconstructing the original dataset. Commands are logged using the same format as the Redis protocol itself.\n* **No persistence**: You can disable persistence completely. This is sometimes used when caching.\n* **RDB + AOF**: You can also combine both AOF and RDB in the same instance.\n\nIf you'd rather not think about the tradeoffs between these different persistence strategies, you may want to consider [Redis Enterprise's persistence options](https:\/\/docs.redis.com\/latest\/rs\/databases\/configure\/database-persistence\/), which can be pre-configured using a UI.\n\nTo learn more about how to evaluate your Redis persistence strategy, read on.\n\n## RDB advantages\n\n* RDB is a very compact single-file point-in-time representation of your Redis data. RDB files are perfect for backups. For instance you may want to archive your RDB files every hour for the latest 24 hours, and to save an RDB snapshot every day for 30 days. 
This allows you to easily restore different versions of the data set in case of disasters.\n* RDB is very good for disaster recovery, being a single compact file that can be transferred to far data centers, or onto Amazon S3 (possibly encrypted).\n* RDB maximizes Redis performance since the only work the Redis parent process needs to do in order to persist is forking a child that will do all the rest. The parent process will never perform disk I\/O or the like.\n* RDB allows faster restarts with big datasets compared to AOF.\n* On replicas, RDB supports [partial resynchronizations after restarts and failovers](https:\/\/redis.io\/topics\/replication#partial-resynchronizations-after-restarts-and-failovers).\n\n## RDB disadvantages\n\n* RDB is NOT good if you need to minimize the chance of data loss in case Redis stops working (for example after a power outage). You can configure different *save points* where an RDB is produced (for instance after at least five minutes and 100 writes against the data set, you can have multiple save points). However you'll usually create an RDB snapshot every five minutes or more, so in case of Redis stopping working without a correct shutdown for any reason you should be prepared to lose the latest minutes of data.\n* RDB needs to fork() often in order to persist on disk using a child process. fork() can be time consuming if the dataset is big, and may result in Redis stopping serving clients for some milliseconds or even for one second if the dataset is very big and the CPU performance is not great. AOF also needs to fork() but less frequently and you can tune how often you want to rewrite your logs without any trade-off on durability.\n\n## AOF advantages\n\n* Using AOF Redis is much more durable: you can have different fsync policies: no fsync at all, fsync every second, fsync at every query. With the default policy of fsync every second, write performance is still great. 
fsync is performed using a background thread and the main thread will try hard to perform writes when no fsync is in progress, so you can only lose one second worth of writes.\n* The AOF log is an append-only log, so there are no seeks, nor corruption problems if there is a power outage. Even if the log ends with a half-written command for some reason (disk full or other reasons) the redis-check-aof tool is able to fix it easily.\n* Redis is able to automatically rewrite the AOF in background when it gets too big. The rewrite is completely safe as while Redis continues appending to the old file, a completely new one is produced with the minimal set of operations needed to create the current data set, and once this second file is ready Redis switches the two and starts appending to the new one.\n* AOF contains a log of all the operations one after the other in an easy to understand and parse format. You can even easily export an AOF file. For instance even if you've accidentally flushed everything using the `FLUSHALL` command, as long as no rewrite of the log was performed in the meantime, you can still save your data set just by stopping the server, removing the latest command, and restarting Redis again.\n\n## AOF disadvantages\n\n* AOF files are usually bigger than the equivalent RDB files for the same dataset.\n* AOF can be slower than RDB depending on the exact fsync policy. In general with fsync set to *every second* performance is still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. 
Still RDB is able to provide more guarantees about the maximum latency even in the case of a huge write load.\n\n**Redis < 7.0**\n\n* AOF can use a lot of memory if there are writes to the database during a rewrite (these are buffered in memory and written to the new AOF at the end).\n* All write commands that arrive during rewrite are written to disk twice.\n* Redis could freeze writing and fsyncing these write commands to the new AOF file at the end of the rewrite.\n  \nOk, so what should I use?\n---\n\nThe general indication is that you should use both persistence methods if\nyou want a degree of data safety comparable to what PostgreSQL can provide you.\n\nIf you care a lot about your data, but still can live with a few minutes of\ndata loss in case of disasters, you can simply use RDB alone.\n\nThere are many users using AOF alone, but we discourage it since having an\nRDB snapshot from time to time is a great idea for doing database backups,\nfor faster restarts, and in the event of bugs in the AOF engine.\n\nThe following sections will illustrate a few more details about the two persistence models.\n\n## Snapshotting\n\nBy default Redis saves snapshots of the dataset on disk, in a binary\nfile called `dump.rdb`. You can configure Redis to have it save the\ndataset every N seconds if there are at least M changes in the dataset,\nor you can manually call the `SAVE` or `BGSAVE` commands.\n\nFor example, this configuration will make Redis automatically dump the\ndataset to disk every 60 seconds if at least 1000 keys changed:\n\n    save 60 1000\n\nThis strategy is known as _snapshotting_.\n\n### How it works\n\nWhenever Redis needs to dump the dataset to disk, this is what happens:\n\n* Redis [forks](http:\/\/linux.die.net\/man\/2\/fork). 
We now have a child\nand a parent process.\n\n* The child starts to write the dataset to a temporary RDB file.\n\n* When the child is done writing the new RDB file, it replaces the old\none.\n\nThis method allows Redis to benefit from copy-on-write semantics.\n\n## Append-only file\n\nSnapshotting is not very durable. If your computer running Redis stops,\nyour power line fails, or you accidentally `kill -9` your instance, the\nlatest data written to Redis will be lost.  While this may not be a big\ndeal for some applications, there are use cases for full durability, and\nin these cases Redis snapshotting alone is not a viable option.\n\nThe _append-only file_ is an alternative, fully-durable strategy for\nRedis.  It became available in version 1.1.\n\nYou can turn on the AOF in your configuration file:\n\n    appendonly yes\n\nFrom now on, every time Redis receives a command that changes the\ndataset (e.g. `SET`) it will append it to the AOF.  When you restart\nRedis it will re-play the AOF to rebuild the state.\n\nSince Redis 7.0.0, Redis uses a multi-part AOF mechanism.\nThat is, the original single AOF file is split into a base file (at most one) and incremental files (there may be more than one).\nThe base file represents an initial (RDB or AOF format) snapshot of the data present when the AOF is [rewritten](#log-rewriting).\nThe incremental files contain incremental changes since the last base AOF file was created. All these files are put in a separate directory and are tracked by a manifest file.\n\n### Log rewriting\n\nThe AOF gets bigger and bigger as write operations are\nperformed.  For example, if you are incrementing a counter 100 times,\nyou'll end up with a single key in your dataset containing the final\nvalue, but 100 entries in your AOF. 
99 of those entries are not needed\nto rebuild the current state.\n\nThe rewrite is completely safe.\nWhile Redis continues appending to the old file,\na completely new one is produced with the minimal set of operations needed to create the current data set,\nand once this second file is ready Redis switches the two and starts appending to the new one.\n\nSo Redis supports an interesting feature: it is able to rebuild the AOF\nin the background without interrupting service to clients. Whenever\nyou issue a `BGREWRITEAOF`, Redis will write the shortest sequence of\ncommands needed to rebuild the current dataset in memory.  If you're\nusing the AOF with Redis 2.2 you'll need to run `BGREWRITEAOF` from time to\ntime. Redis 2.4 and later can trigger log rewriting automatically (see the\nexample configuration file for more information).\n\nSince Redis 7.0.0, when an AOF rewrite is scheduled, the Redis parent process opens a new incremental AOF file to continue writing.\nThe child process executes the rewrite logic and generates a new base AOF.\nRedis will use a temporary manifest file to track the newly generated base file and incremental file.\nWhen they are ready, Redis will perform an atomic replacement operation to make this temporary manifest file take effect.\nIn order to avoid the problem of creating many incremental files in case of repeated failures and retries of an AOF rewrite,\nRedis introduces an AOF rewrite limiting mechanism to ensure that failed AOF rewrites are retried at a slower and slower rate.\n\n### How durable is the append only file?\n\nYou can configure how many times Redis will\n[`fsync`](http:\/\/linux.die.net\/man\/2\/fsync) data on disk. There are\nthree options:\n\n* `appendfsync always`: `fsync` every time new commands are appended to the AOF. Very very slow, very safe. 
Note that the commands are appended to the AOF after a batch of commands from multiple clients or a pipeline are executed, so it means a single write and a single fsync (before sending the replies).\n* `appendfsync everysec`: `fsync` every second. Fast enough (since version 2.4 likely to be as fast as snapshotting), and you may lose 1 second of data if there is a disaster.\n* `appendfsync no`: Never `fsync`, just put your data in the hands of the Operating System. The fastest and least safe method. Normally Linux will flush data every 30 seconds with this configuration, but it's up to the kernel's exact tuning.\n\nThe suggested (and default) policy is to `fsync` every second. It is\nboth fast and relatively safe. The `always` policy is very slow in\npractice, but it supports group commit, so if there are multiple parallel\nwrites Redis will try to perform a single `fsync` operation.\n\n### What should I do if my AOF gets truncated?\n\nIt is possible the server crashed while writing the AOF file, or the\nvolume where the AOF file is stored was full at the time of writing. When this happens the\nAOF still contains consistent data representing a given point-in-time version\nof the dataset (that may be up to one second old with the default AOF fsync\npolicy), but the last command in the AOF could be truncated.\nThe latest major versions of Redis will be able to load the AOF anyway, just\ndiscarding the last non-well-formed command in the file. In this case the\nserver will emit a log like the following:\n\n```\n* Reading RDB preamble from AOF file...\n* Reading the remaining AOF tail...\n# !!! Warning: short read while loading the AOF file !!!\n# !!! 
Truncating the AOF at offset 439 !!!\n# AOF loaded anyway because aof-load-truncated is enabled\n```\n\nYou can change the default configuration to force Redis to stop in such\ncases if you want, but the default configuration is to continue regardless of\nthe fact that the last command in the file is not well-formed, in order to guarantee\navailability after a restart.\n\nOlder versions of Redis may not recover, and may require the following steps:\n\n* Make a backup copy of your AOF file.\n* Fix the original file using the `redis-check-aof` tool that ships with Redis:\n\n      $ redis-check-aof --fix <filename>\n\n* Optionally use `diff -u` to check what is the difference between the two files.\n* Restart the server with the fixed file.\n\n### What should I do if my AOF gets corrupted?\n\nIf the AOF file is not just truncated, but corrupted with invalid byte\nsequences in the middle, things are more complex. Redis will complain\nat startup and will abort:\n\n```\n* Reading the remaining AOF tail...\n# Bad file format reading the append only file: make a backup of your AOF file, then use .\/redis-check-aof --fix <filename>\n```\n\nThe best thing to do is to run the `redis-check-aof` utility, initially without\nthe `--fix` option, then understand the problem, jump to the given\noffset in the file, and see if it is possible to manually repair the file.\nThe AOF uses the same format as the Redis protocol and is quite simple to fix\nmanually. Otherwise it is possible to let the utility fix the file for us, but\nin that case all the AOF portion from the invalid part to the end of the\nfile may be discarded, leading to a massive amount of data loss if the\ncorruption happened to be in the initial part of the file.\n\n### How it works\n\nLog rewriting uses the same copy-on-write trick already in use for\nsnapshotting.  
This is how it works:\n\n**Redis >= 7.0**\n\n* Redis [forks](http:\/\/linux.die.net\/man\/2\/fork), so now we have a child\nand a parent process.\n\n* The child starts writing the new base AOF in a temporary file.\n\n* The parent opens a new increment AOF file to continue writing updates.\n  If the rewriting fails, the old base and increment files (if there are any) plus this newly opened increment file represent the complete updated dataset,\n  so we are safe.\n  \n* When the child is done rewriting the base file, the parent gets a signal,\nand uses the newly opened increment file and the child-generated base file to build a temporary manifest,\nand persists it.\n\n* Profit! Now Redis does an atomic exchange of the manifest files so that the result of this AOF rewrite takes effect. Redis also cleans up the old base file and any unused increment files.\n\n**Redis < 7.0**\n\n* Redis [forks](http:\/\/linux.die.net\/man\/2\/fork), so now we have a child\nand a parent process.\n\n* The child starts writing the new AOF in a temporary file.\n\n* The parent accumulates all the new changes in an in-memory buffer (but\nat the same time it writes the new changes in the old append-only file,\nso if the rewriting fails, we are safe).\n\n* When the child is done rewriting the file, the parent gets a signal,\nand appends the in-memory buffer at the end of the file generated by the\nchild.\n\n* Now Redis atomically renames the new file into the old one,\nand starts appending new data into the new file.\n\n### How can I switch to AOF, if I'm currently using dump.rdb snapshots?\n\nIf you want to enable AOF in a server that is currently using RDB snapshots, you need to convert the data by enabling AOF via the CONFIG command on the live server first.\n\n**IMPORTANT:** not following this procedure (e.g. 
just changing the config and restarting the server) can result in data loss!\n\n**Redis >= 2.2**\n\nPreparations:\n\n* Make a backup of your latest dump.rdb file.\n* Transfer this backup to a safe place.\n\nSwitch to AOF on live database:\n\n* Enable AOF: `redis-cli config set appendonly yes`\n* Optionally disable RDB: `redis-cli config set save \"\"`\n* Make sure writes are appended to the append only file correctly.\n* **IMPORTANT:** Update your `redis.conf` (potentially through `CONFIG REWRITE`) and ensure that it matches the configuration above.\n  If you forget this step, when you restart the server, the configuration changes will be lost and the server will start again with the old configuration, resulting in a loss of your data.\n\nNext time you restart the server:\n\n* Before restarting the server, wait for AOF rewrite to finish persisting the data.\n  You can do that by watching `INFO persistence`, waiting for `aof_rewrite_in_progress` and `aof_rewrite_scheduled` to be `0`, and validating that `aof_last_bgrewrite_status` is `ok`.\n* After restarting the server, check that your database contains the same number of keys it contained previously.\n\n**Redis 2.0**\n\n* Make a backup of your latest dump.rdb file.\n* Transfer this backup into a safe place.\n* Stop all the writes against the database!\n* Issue a `redis-cli BGREWRITEAOF`. This will create the append only file.\n* Stop the server when Redis has finished generating the AOF dump.\n* Edit redis.conf and enable append only file persistence.\n* Restart the server.\n* Make sure that your database contains the same number of keys it contained before the switch.\n* Make sure that writes are appended to the append only file correctly.\n\n## Interactions between AOF and RDB persistence\n\nRedis >= 2.4 makes sure to avoid triggering an AOF rewrite when an RDB\nsnapshotting operation is already in progress, or allowing a `BGSAVE` while the\nAOF rewrite is in progress. 
This prevents two Redis background processes\nfrom doing heavy disk I\/O at the same time.\n\nWhen snapshotting is in progress and the user explicitly requests a log\nrewrite operation using `BGREWRITEAOF` the server will reply with an OK\nstatus code telling the user the operation is scheduled, and the rewrite\nwill start once the snapshotting is completed.\n\nIn the case both AOF and RDB persistence are enabled and Redis restarts, the\nAOF file will be used to reconstruct the original dataset since it is\nguaranteed to be the most complete.\n\n## Backing up Redis data\n\nBefore starting this section, make sure to read the following sentence: **Make Sure to Backup Your Database**. Disks break, instances in the cloud disappear, and so forth: no backups means huge risk of data disappearing into \/dev\/null.\n\nRedis is very data backup friendly since you can copy RDB files while the\ndatabase is running: the RDB is never modified once produced, and while it\ngets produced it uses a temporary name and is renamed into its final destination\natomically using rename(2) only when the new snapshot is complete.\n\nThis means that copying the RDB file is completely safe while the server is\nrunning. This is what we suggest:\n\n* Create a cron job on your server creating hourly snapshots of the RDB file in one directory, and daily snapshots in a different directory.\n* Every time the cron script runs, make sure to call the `find` command to delete snapshots that are too old: for instance you can take hourly snapshots for the latest 48 hours, and daily snapshots for one or two months. 
Make sure to name the snapshots with date and time information.\n* At least one time every day make sure to transfer an RDB snapshot *outside your data center* or at least *outside the physical machine* running your Redis instance.\n\n### Backing up AOF persistence\n\nIf you run a Redis instance with only AOF persistence enabled, you can still perform backups.\nSince Redis 7.0.0, AOF files are split into multiple files which reside in a single directory determined by the `appenddirname` configuration.\nDuring normal operation all you need to do is copy\/tar the files in this directory to achieve a backup. However, if this is done during a [rewrite](#log-rewriting), you might end up with an invalid backup.\nTo work around this you must disable AOF rewrites during the backup:\n\n1. Turn off automatic rewrites with<br\/>\n   `CONFIG SET` `auto-aof-rewrite-percentage 0`<br\/>\n   Make sure you don't manually start a rewrite (using `BGREWRITEAOF`) during this time.\n2. Check there's no current rewrite in progress using<br\/>\n   `INFO` `persistence`<br\/>\n   and verifying `aof_rewrite_in_progress` is 0. If it's 1, then you'll need to wait for the rewrite to complete.\n3. Now you can safely copy the files in the `appenddirname` directory.\n4. Re-enable rewrites when done:<br\/>\n   `CONFIG SET` `auto-aof-rewrite-percentage <prev-value>`\n\n**Note:** If you want to minimize the time AOF rewrites are disabled you may create hard links to the files in `appenddirname` (in step 3 above) and then re-enable rewrites (step 4) after the hard links are created.\nNow you can copy\/tar the hardlinks and delete them when done. 
This works because Redis guarantees that it\nonly appends to files in this directory, or completely replaces them if necessary, so the content should be\nconsistent at any given point in time.\n\n\n**Note:** If you want to handle the case of the server being restarted during the backup and make sure no rewrite will automatically start after the restart you can change step 1 above to also persist the updated configuration via `CONFIG REWRITE`.\nJust make sure to re-enable automatic rewrites when done (step 4) and persist it with another `CONFIG REWRITE`.\n\nPrior to version 7.0.0 backing up the AOF file can be done simply by copying the aof file (like backing up the RDB snapshot). The file may lack the final part\nbut Redis will still be able to load it (see the previous sections about [truncated AOF files](#what-should-i-do-if-my-aof-gets-truncated)).\n\n\n## Disaster recovery\n\nDisaster recovery in the context of Redis is basically the same story as\nbackups, plus the ability to transfer those backups in many different external\ndata centers. This way data is secured even in the case of some catastrophic\nevent affecting the main data center where Redis is running and producing its\nsnapshots.\n\nWe'll review the most interesting disaster recovery techniques\nthat don't have too high costs.\n\n* Amazon S3 and other similar services are a good way for implementing your disaster recovery system. Simply transfer your daily or hourly RDB snapshot to S3 in an encrypted form. You can encrypt your data using `gpg -c` (in symmetric encryption mode). Make sure to store your password in many different safe places (for instance give a copy to the most important people of your organization). It is recommended to use multiple storage services for improved data safety.\n* Transfer your snapshots using SCP (part of SSH) to far servers. 
This is a fairly simple and safe route: get a small VPS in a place that is very far from you, install ssh there, and generate a ssh client key without passphrase, then add it in the `authorized_keys` file of your small VPS. You are ready to transfer backups in an automated fashion. Get at least two VPS in two different providers\nfor best results.\n\nIt is important to understand that this system can easily fail if not\nimplemented in the right way. At least, make absolutely sure that after the\ntransfer is completed you are able to verify the file size (that should match\nthe one of the file you copied) and possibly the SHA1 digest, if you are using\na VPS.\n\nYou also need some kind of independent alert system if the transfer of fresh\nbackups is not working for some reason.","site":"redis","answers_cleaned":"---
title: Redis persistence
linkTitle: Persistence
weight: 7
description: How Redis writes data to disk
aliases: [
    /topics/persistence,
    /topics/persistence.md,
    /docs/manual/persistence,
    /docs/manual/persistence.md
]
---

Persistence refers to the writing of data to durable storage, such as a solid-state disk (SSD). Redis provides a range of persistence options. These include:

* **RDB** (Redis Database): RDB persistence performs point-in-time snapshots of your dataset at specified intervals.
* **AOF** (Append Only File): AOF persistence logs every write operation received by the server. These operations can then be replayed again at server startup, reconstructing the original dataset. Commands are logged using the same format as the Redis protocol itself.
* **No persistence**: You can disable persistence completely. This is sometimes used when caching.
* **RDB + AOF**: You can also combine both AOF and RDB in the same instance.

If you'd rather not think about the tradeoffs between these different persistence strategies, you may want to consider [Redis Enterprise's persistence options](https://docs.redis.com/latest/rs/databases/configure/database-persistence/), which can be pre-configured using a UI.

To learn more about how to evaluate your Redis persistence strategy, read on.

## RDB advantages

* RDB is a very compact single-file point-in-time representation of your Redis data. RDB files are perfect for backups. For instance you may want to archive your RDB files every hour for the latest 24 hours, and to save an RDB snapshot every day for 30 days. This allows you to easily restore different versions of the data set in case of disasters.
* RDB is very good for disaster recovery, being a single compact file that can be transferred to far data centers, or onto Amazon S3 (possibly encrypted).
* RDB maximizes Redis performance since the only work the Redis parent process needs to do in order to persist is forking a child that will do all the rest. The parent process will never perform disk I/O or alike.
* RDB allows faster restarts with big datasets compared to AOF.
* On replicas, RDB supports [partial resynchronizations after restarts and failovers](https://redis.io/topics/replication#partial-resynchronizations-after-restarts-and-failovers).

## RDB disadvantages

* RDB is NOT good if you need to minimize the chance of data loss in case Redis stops working (for example after a power outage). You can configure different *save points* where an RDB is produced (for instance after at least five minutes and 100 writes against the data set, you can have multiple save points). However you'll usually create an RDB snapshot every five minutes or more, so in case of Redis stopping working without a correct shutdown for any reason you should be prepared to lose the latest minutes of data.
* RDB needs to fork() often in order to persist on disk using a child process. fork() can be time consuming if the dataset is big, and may result in Redis stopping serving clients for some milliseconds or even for one second if the dataset is very big and the CPU performance is not great. AOF also needs to fork() but less frequently and you can tune how often you want to rewrite your logs without any trade-off on durability.

## AOF advantages

* Using AOF Redis is much more durable: you can have different fsync policies: no fsync at all, fsync every second, fsync at every query. With the default policy of fsync every second, write performance is still great. fsync is performed using a background thread and the main thread will try hard to perform writes when no fsync is in progress, so you can only lose one second worth of writes.
* The AOF log is an append-only log, so there are no seeks, nor corruption problems if there is a power outage. Even if the log ends with a half-written command for some reason (disk full or other reasons) the redis-check-aof tool is able to fix it easily.
* Redis is able to automatically rewrite the AOF in background when it gets too big. The rewrite is completely safe as while Redis continues appending to the old file, a completely new one is produced with the minimal set of operations needed to create the current data set, and once this second file is ready Redis switches the two and starts appending to the new one.
* AOF contains a log of all the operations one after the other in an easy to understand and parse format. You can even easily export an AOF file. For instance even if you've accidentally flushed everything using the `FLUSHALL` command, as long as no rewrite of the log was performed in the meantime, you can still save your data set just by stopping the server, removing the latest command, and restarting Redis again.

## AOF disadvantages

* AOF files are usually bigger than the equivalent RDB files for the same dataset.
* AOF can be slower than RDB depending on the exact fsync policy. In general with fsync set to *every second* performance is still very high, and with fsync disabled it should be exactly as fast as RDB even under high load. Still RDB is able to provide more guarantees about the maximum latency even in the case of a huge write load.
* Redis < 7.0
    * AOF can use a lot of memory if there are writes to the database during a rewrite (these are buffered in memory and written to the new AOF at the end).
    * All write commands that arrive during rewrite are written to disk twice.
    * Redis could freeze writing and fsyncing these write commands to the new AOF file at the end of the rewrite.

## Ok, so what should I use?

The general indication you should use both persistence methods is if you want a degree of data safety comparable to what PostgreSQL can provide you.

If you care a lot about your data, but still can live with a few minutes of data loss in case of disasters, you can simply use RDB alone.

There are many users using AOF alone, but we discourage it since to have an RDB snapshot from time to time is a great idea for doing database backups, for faster restarts, and in the event of bugs in the AOF engine.

The following sections will illustrate a few more details about the two persistence models.

## Snapshotting

By default Redis saves snapshots of the dataset on disk, in a binary file called `dump.rdb`. You can configure Redis to have it save the dataset every N seconds if there are at least M changes in the dataset, or you can manually call the `SAVE` or `BGSAVE` commands.

For example, this configuration will make Redis automatically dump the dataset to disk every 60 seconds if at least 1000 keys changed:

```
save 60 1000
```

This strategy is known as *snapshotting*.

### How it works

Whenever Redis needs to dump the dataset to disk, this is what happens:

* Redis [forks](http://linux.die.net/man/2/fork). We now have a child and a parent process.
* The child starts to write the dataset to a temporary RDB file.
* When the child is done writing the new RDB file, it replaces the old one.

This method allows Redis to benefit from copy-on-write semantics.

## Append-only file

Snapshotting is not very durable. If your computer running Redis stops, your power line fails, or you
accidentally `kill -9` your instance, the latest data written to Redis will be lost. While this may not be a big deal for some applications, there are use cases for full durability, and in these cases Redis snapshotting alone is not a viable option.

The *append-only file* is an alternative, fully-durable strategy for Redis. It became available in version 1.1.

You can turn on the AOF in your configuration file:

```
appendonly yes
```

From now on, every time Redis receives a command that changes the dataset (e.g. `SET`) it will append it to the AOF. When you restart Redis it will re-play the AOF to rebuild the state.

Since Redis 7.0.0, Redis uses a multi part AOF mechanism. That is, the original single AOF file is split into base file (at most one) and incremental files (there may be more than one). The base file represents an initial (RDB or AOF format) snapshot of the data present when the AOF is rewritten. The incremental files contain incremental changes since the last base AOF file was created. All these files are put in a separate directory and are tracked by a manifest file.

### Log rewriting

The AOF gets bigger and bigger as write operations are performed. For example, if you are incrementing a counter 100 times, you'll end up with a single key in your dataset containing the final value, but 100 entries in your AOF. 99 of those entries are not needed to rebuild the current state.

The rewrite is completely safe. While Redis continues appending to the old file, a completely new one is produced with the minimal set of operations needed to create the current data set, and once this second file is ready Redis switches the two and starts appending to the new one.

So Redis supports an interesting feature: it is able to rebuild the AOF in the background without interrupting service to clients. Whenever you issue a `BGREWRITEAOF`, Redis will write the shortest sequence of commands needed to rebuild the current dataset in memory. If you're using the AOF with Redis 2.2 you'll need to run `BGREWRITEAOF` from time to time. Since Redis 2.4, log rewriting can be triggered automatically (see the example configuration file for more information).

Since Redis 7.0.0, when an AOF rewrite is scheduled, the Redis parent process opens a new incremental AOF file to continue writing. The child process executes the rewrite logic and generates a new base AOF. Redis will use a temporary manifest file to track the newly generated base file and incremental file. When they are ready, Redis will perform an atomic replacement operation to make this temporary manifest file take effect. In order to avoid the problem of creating many incremental files in case of repeated failures and retries of an AOF rewrite, Redis introduces an AOF rewrite limiting mechanism to ensure that failed AOF rewrites are retried at a slower and slower rate.

### How durable is the append only file?

You can configure how many times Redis will [`fsync`](http://linux.die.net/man/2/fsync) data on disk. There are three options:

* `appendfsync always`: `fsync` every time new commands are appended to the AOF. Very very slow, very safe. Note that the commands are appended to the AOF after a batch of commands from multiple clients or a pipeline are executed, so it means a single write and a single fsync (before sending the replies).
* `appendfsync everysec`: `fsync` every second. Fast enough (since version 2.4 likely to be as fast as snapshotting), and you may lose 1 second of data if there is a disaster.
* `appendfsync no`: Never `fsync`, just put your data in the hands of the Operating System. The faster and less safe method. Normally Linux will flush data every 30 seconds with this configuration, but it's up to the kernel's exact tuning.

The suggested (and default) policy is to `fsync` every second. It is both fast and relatively safe. The `always` policy is very slow in practice, but it supports group commit, so if there are multiple parallel writes Redis will try to perform a single `fsync` operation.

### What should I do if my AOF gets truncated?

It is possible the server crashed while writing the AOF file, or the volume where the AOF file is stored was full at the time of writing. When this happens the AOF still contains consistent data representing a given point-in-time version of the dataset (that may be old up to one second with the default AOF fsync policy), but the last command in the AOF could be truncated. The latest major versions of Redis will be able to load the AOF anyway, just discarding the last non well formed command in the file. In this case the server will emit a log like the following:

```
* Reading RDB preamble from AOF file...
* Reading the remaining AOF tail...
# !!! Warning: short read while loading the AOF file !!!
# !!! Truncating the AOF at offset 439 !!!
# AOF loaded anyway because aof-load-truncated is enabled
```

You can change the default configuration to force Redis to stop in such cases if you want, but the default configuration is to continue regardless of the fact the last command in the file is not well formed, in order to guarantee availability after a restart.

Older versions of Redis may not recover, and may require the following steps:

* Make a backup copy of your AOF file.
* Fix the original file using the `redis-check-aof` tool that ships with Redis:

      $ redis-check-aof --fix <filename>

* Optionally use `diff -u` to check what is the difference between two files.
* Restart the server with the fixed file.

### What should I do if my AOF gets corrupted?

If the AOF file is not just truncated, but corrupted with invalid byte sequences in the middle, things are more complex. Redis will complain at startup and will abort:

```
* Reading the remaining AOF tail...
# Bad file format reading the append only file: make a backup of your AOF file, then use ./redis-check-aof --fix <filename>
```

The best thing to do is to run the `redis-check-aof` utility, initially
without the `--fix` option, then understand the problem, jump to the given offset in the file, and see if it is possible to manually repair the file: the AOF uses the same format of the Redis protocol and is quite simple to fix manually. Otherwise it is possible to let the utility fix the file for us, but in that case all the AOF portion from the invalid part to the end of the file may be discarded, leading to a massive amount of data loss if the corruption happened to be in the initial part of the file.

### How it works

Log rewriting uses the same copy-on-write trick already in use for snapshotting. This is how it works:

**Redis >= 7.0**

* Redis [forks](http://linux.die.net/man/2/fork), so now we have a child and a parent process.
* The child starts writing the new base AOF in a temporary file.
* The parent opens a new increments AOF file to continue writing updates. If the rewriting fails, the old base and increment files (if there are any) plus this newly opened increment file represent the complete updated dataset, so we are safe.
* When the child is done rewriting the base file, the parent gets a signal, and uses the newly opened increment file and child generated base file to build a temp manifest, and persist it.
* Profit! Now Redis does an atomic exchange of the manifest files so that the result of this AOF rewrite takes effect. Redis also cleans up the old base file and any unused increment files.

**Redis < 7.0**

* Redis [forks](http://linux.die.net/man/2/fork), so now we have a child and a parent process.
* The child starts writing the new AOF in a temporary file.
* The parent accumulates all the new changes in an in-memory buffer (but at the same time it writes the new changes in the old append-only file, so if the rewriting fails, we are safe).
* When the child is done rewriting the file, the parent gets a signal, and appends the in-memory buffer at the end of the file generated by the child.
* Now Redis atomically renames the new file into the old one, and starts appending new data into the new file.

### How I can switch to AOF, if I'm currently using dump.rdb snapshots?

If you want to enable AOF in a server that is currently using RDB snapshots, you need to convert the data by enabling AOF via the CONFIG command on the live server first.

IMPORTANT: not following this procedure (e.g. just changing the config and restarting the server) can result in data loss!

**Redis >= 2.2**

Preparations:

* Make a backup of your latest dump.rdb file.
* Transfer this backup to a safe place.

Switch to AOF on the live database:

* Enable AOF: `redis-cli config set appendonly yes`
* Optionally disable RDB: `redis-cli config set save ""`
* Make sure writes are appended to the append only file correctly.
* IMPORTANT: Update your `redis.conf` (potentially through `CONFIG REWRITE`) and ensure that it matches the configuration above. If you forget this step, when you restart the server, the configuration changes will be lost and the server will start again with the old configuration, resulting in a loss of your data.

Next time you restart the server:

* Before restarting the server, wait for the AOF rewrite to finish persisting the data. You can do that by watching `INFO persistence`, waiting for `aof_rewrite_in_progress` and `aof_rewrite_scheduled` to be `0`, and validating that `aof_last_bgrewrite_status` is `ok`.
* After restarting the server, check that your database contains the same number of keys it contained previously.

**Redis 2.0**

* Make a backup of your latest dump.rdb file.
* Transfer this backup into a safe place.
* Stop all the writes against the database!
* Issue a `redis-cli BGREWRITEAOF`. This will create the append only file.
* Stop the server when Redis finished generating the AOF dump.
* Edit redis.conf and enable append only file persistence.
* Restart the server.
* Make sure that your database contains the same number of keys it contained before the switch.
* Make sure that writes are appended to the append only file correctly.

## Interactions between AOF and RDB persistence

Redis >= 2.4 makes sure to avoid triggering an AOF rewrite when an RDB snapshotting operation is already in progress, or allowing a `BGSAVE` while the AOF rewrite is in progress. This prevents two Redis background processes from doing heavy disk I/O at the same time.

When snapshotting is in progress and the user explicitly requests a log rewrite operation using `BGREWRITEAOF` the server will reply with an OK status code telling the user the operation is scheduled, and the rewrite will start once the snapshotting is completed.

In the case both AOF and RDB persistence are enabled and Redis restarts, the AOF file will be used to reconstruct the original dataset since it is guaranteed to be the most complete.

## Backing up Redis data

Before starting this section, make sure to read the following sentence: **Make Sure to Backup Your Database**. Disks break, instances in the cloud disappear, and so forth: no backups means huge risk of data disappearing into /dev/null.

Redis is very data backup friendly since you can copy RDB files while the database is running: the RDB is never modified once produced, and while it gets produced it uses a temporary name and is renamed into its final destination atomically using rename(2) only when the new snapshot is complete.

This means that copying the RDB file is completely safe while the server is running. This is what we suggest:

* Create a cron job in your server creating hourly snapshots of the RDB file in one directory, and daily snapshots in a different directory.
* Every time the cron script runs, make sure to call the `find` command to make sure too old snapshots are deleted: for instance you can take hourly snapshots for the latest 48 hours, and daily snapshots for one or two months. Make sure to name the snapshots with date and time information.
* At least one time every day make sure to transfer an RDB snapshot *outside
your data center* or at least *outside the physical machine* running your Redis instance.

### Backing up AOF persistence

If you run a Redis instance with only AOF persistence enabled, you can still perform backups. Since Redis 7.0.0, AOF files are split into multiple files which reside in a single directory determined by the `appenddirname` configuration. During normal operation all you need to do is copy/tar the files in this directory to achieve a backup. However, if this is done during a rewrite, you might end up with an invalid backup. To work around this you must disable AOF rewrites during the backup:

1. Turn off automatic rewrites with
   `CONFIG SET auto-aof-rewrite-percentage 0`
   Make sure you don't manually start a rewrite (using `BGREWRITEAOF`) during this time.
2. Check there's no current rewrite in progress using
   `INFO persistence`
   and verifying `aof_rewrite_in_progress` is 0. If it's 1, then you'll need to wait for the rewrite to complete.
3. Now you can safely copy the files in the `appenddirname` directory.
4. Re-enable rewrites when done:
   `CONFIG SET auto-aof-rewrite-percentage <prev-value>`

**Note:** If you want to minimize the time AOF rewrites are disabled you may create hard links to the files in `appenddirname` (in step 3 above), and then re-enable rewrites (step 4) after the hard links are created. Now you can copy/tar the hardlinks and delete them when done. This works because Redis guarantees that it only appends to files in this directory, or completely replaces them if necessary, so the content should be consistent at any given point in time.

**Note:** If you want to handle the case of the server being restarted during the backup and make sure no rewrite will automatically start after the restart, you can change step 1 above to also persist the updated configuration via `CONFIG REWRITE`. Just make sure to re-enable automatic rewrites when done (step 4) and persist it with another `CONFIG REWRITE`.

Prior to version 7.0.0, backing up the AOF file can be done simply by copying the aof file (like backing up the RDB snapshot). The file may lack the final part but Redis will still be able to load it (see the previous sections about [truncated AOF files](#what-should-i-do-if-my-aof-gets-truncated)).

## Disaster recovery

Disaster recovery in the context of Redis is basically the same story as backups, plus the ability to transfer those backups in many different external data centers. This way data is secured even in the case of some catastrophic event affecting the main data center where Redis is running and producing its snapshots.

We'll review the most interesting disaster recovery techniques that don't have too high costs.

* Amazon S3 and other similar services are a good way for implementing your disaster recovery system. Simply transfer your daily or hourly RDB snapshot to S3 in an encrypted form. You can encrypt your data using `gpg -c` (in symmetric encryption mode). Make sure to store your password in many different safe places (for instance give a copy to the most important people of your organization). It is recommended to use multiple storage services for improved data safety.
* Transfer your snapshots using SCP (part of SSH) to far servers. This is a fairly simple and safe route: get a small VPS in a place that is very far from you, install ssh there, and generate an ssh client key without passphrase, then add it in the `authorized_keys` file of your small VPS. You are ready to transfer backups in an automated fashion. Get at least two VPS in two different providers for best results.

It is important to understand that this system can easily fail if not implemented in the right way. At least, make absolutely sure that after the transfer is completed you are able to verify the file size (that should match the one of the file you copied) and possibly the SHA1 digest, if you are using a VPS.

You also need some kind of independent alert system if the transfer of fresh backups is not working for some reason."}
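The size-plus-digest verification recommended above can be sketched in a few lines. This is a minimal illustration, not part of the Redis tooling; the file paths are hypothetical, and in a real setup the second digest would be computed on the remote VPS (e.g. over ssh) rather than on a local copy.

```python
import hashlib
import os

def sha1_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream the file in chunks so large RDB snapshots are not loaded into memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_transfer(local: str, copy: str) -> bool:
    """Trust a transferred backup only if both the size and the SHA-1 digest match."""
    if os.path.getsize(local) != os.path.getsize(copy):
        return False
    return sha1_of(local) == sha1_of(copy)
```

A cron-driven transfer script would call `verify_transfer` after each `scp` and raise an alert through an independent channel when it returns `False`.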
{"questions":"redis weight 6 topics partitioning docs manual scaling md Scale with Redis Cluster aliases topics cluster tutorial docs manual scaling Horizontal scaling with Redis Cluster title Scale with Redis Cluster","answers":"---\ntitle: Scale with Redis Cluster\nlinkTitle: Scale with Redis Cluster\nweight: 6\ndescription: Horizontal scaling with Redis Cluster\naliases: [\n    \/topics\/cluster-tutorial,\n    \/topics\/partitioning,\n    \/docs\/manual\/scaling,\n    \/docs\/manual\/scaling.md\n]\n---\n\nRedis scales horizontally with a deployment topology called Redis Cluster. \nThis topic will teach you how to set up, test, and operate Redis Cluster in production.\nYou will learn about the availability and consistency characteristics of Redis Cluster from the end user's point of view.\n\nIf you plan to run a production Redis Cluster deployment or want to understand better how Redis Cluster works internally, consult the [Redis Cluster specification](\/topics\/cluster-spec). To learn how Redis Enterprise handles scaling, see [Linear Scaling with Redis Enterprise](https:\/\/redis.com\/redis-enterprise\/technology\/linear-scaling-redis-enterprise\/).\n\n## Redis Cluster 101\n\nRedis Cluster provides a way to run a Redis installation where data is automatically sharded across multiple Redis nodes. \nRedis Cluster also provides some degree of availability during partitions&mdash;in practical terms, the ability to continue operations when some nodes fail or are unable to communicate. 
\nHowever, the cluster will become unavailable in the event of larger failures (for example, when the majority of masters are unavailable).\n\nSo, with Redis Cluster, you get the ability to:\n\n* Automatically split your dataset among multiple nodes.\n* Continue operations when a subset of the nodes are experiencing failures or are unable to communicate with the rest of the cluster.\n\n#### Redis Cluster TCP ports\n\nEvery Redis Cluster node requires two open TCP connections: a Redis TCP port used to serve clients, e.g., 6379, and second port known as the _cluster bus port_. \nBy default, the cluster bus port is set by adding 10000 to the data port (e.g., 16379); however, you can override this in the `cluster-port` configuration.\n\nCluster bus is a node-to-node communication channel that uses a binary protocol, which is more suited to exchanging information between nodes due to\nlittle bandwidth and processing time. \nNodes use the cluster bus for failure detection, configuration updates, failover authorization, and so forth. \nClients should never try to communicate with the cluster bus port, but rather use the Redis command port. \nHowever, make sure you open both ports in your firewall, otherwise Redis cluster nodes won't be able to communicate.\n\nFor a Redis Cluster to work properly you need, for each node:\n\n1. The client communication port (usually 6379) used to communicate with clients and be open to all the clients that need to reach the cluster, plus all the other cluster nodes that use the client port for key migrations.\n2. 
The cluster bus port must be reachable from all the other cluster nodes.\n\nIf you don't open both TCP ports, your cluster will not work as expected.\n\n#### Redis Cluster and Docker\n\nCurrently, Redis Cluster does not support NATted environments and in general\nenvironments where IP addresses or TCP ports are remapped.\n\nDocker uses a technique called _port mapping_: programs running inside Docker containers may be exposed with a different port compared to the one the program believes to be using. \nThis is useful for running multiple containers using the same ports, at the same time, in the same server.\n\nTo make Docker compatible with Redis Cluster, you need to use Docker's _host networking mode_. \nPlease see the `--net=host` option in the [Docker documentation](https:\/\/docs.docker.com\/engine\/userguide\/networking\/dockernetworks\/) for more information.\n\n#### Redis Cluster data sharding\n\nRedis Cluster does not use consistent hashing, but a different form of sharding\nwhere every key is conceptually part of what we call a **hash slot**.\n\nThere are 16384 hash slots in Redis Cluster, and to compute the hash\nslot for a given key, we simply take the CRC16 of the key modulo\n16384.\n\nEvery node in a Redis Cluster is responsible for a subset of the hash slots,\nso, for example, you may have a cluster with 3 nodes, where:\n\n* Node A contains hash slots from 0 to 5500.\n* Node B contains hash slots from 5501 to 11000.\n* Node C contains hash slots from 11001 to 16383.\n\nThis makes it easy to add and remove cluster nodes. For example, if\nI want to add a new node D, I need to move some hash slots from nodes A, B, C\nto D. Similarly, if I want to remove node A from the cluster, I can just\nmove the hash slots served by A to B and C. 
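The hash-slot rule described above (CRC16 of the key modulo 16384, using the CRC16-CCITT/XMODEM variant given in the Redis Cluster specification, and hashing only a non-empty `{tag}` substring when one is present) can be sketched as follows. This is an illustrative reimplementation, not the server's C code:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the CRC variant specified for Redis Cluster key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """HASH_SLOT = CRC16(key) mod 16384; only a non-empty {tag} is hashed if present."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # ignore empty tags like "{}"
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag are guaranteed to land in the same slot:
assert hash_slot("user:{123}:profile") == hash_slot("user:{123}:account")
```

The running server will report the same slot via `CLUSTER KEYSLOT <key>`.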
Once node A is empty,\nI can remove it from the cluster completely.\n\nMoving hash slots from one node to another does not require stopping\nany operations; therefore, adding and removing nodes, or changing the percentage of hash slots held by a node, requires no downtime.\n\nRedis Cluster supports multiple key operations as long as all of the keys involved in a single command execution (or whole transaction, or Lua script\nexecution) belong to the same hash slot. The user can force multiple keys\nto be part of the same hash slot by using a feature called *hash tags*.\n\nHash tags are documented in the Redis Cluster specification, but the gist is\nthat if there is a substring between {} brackets in a key, only what is\ninside the brackets is hashed. For example, the keys `user:{123}:profile` and `user:{123}:account` are guaranteed to be in the same hash slot because they share the same hash tag. As a result, you can operate on these two keys in the same multi-key operation.\n\n#### Redis Cluster master-replica model\n\nTo remain available when a subset of master nodes are failing or are\nnot able to communicate with the majority of nodes, Redis Cluster uses a\nmaster-replica model where every hash slot has from 1 (the master itself) to N\nreplicas (N-1 additional replica nodes).\n\nIn our example cluster with nodes A, B, C, if node B fails the cluster is not\nable to continue, since we no longer have a way to serve hash slots in the\nrange 5501-11000.\n\nHowever, when the cluster is created (or at a later time), we add a replica\nnode to every master, so that the final cluster is composed of A, B, C\nthat are master nodes, and A1, B1, C1 that are replica nodes.\nThis way, the system can continue if node B fails.\n\nNode B1 replicates B, so if B fails, the cluster will promote node B1 as the new\nmaster and will continue to operate correctly.\n\nHowever, note that if nodes B and B1 fail at the same time, Redis Cluster will not be able to continue to operate.\n\n#### Redis 
Cluster consistency guarantees\n\nRedis Cluster does not guarantee **strong consistency**. In practical\nterms this means that under certain conditions it is possible that Redis\nCluster will lose writes that were acknowledged by the system to the client.\n\nThe first reason why Redis Cluster can lose writes is because it uses\nasynchronous replication. This means that during writes the following\nhappens:\n\n* Your client writes to the master B.\n* The master B replies OK to your client.\n* The master B propagates the write to its replicas B1, B2 and B3.\n\nAs you can see, B does not wait for an acknowledgement from B1, B2, B3 before\nreplying to the client, since this would be a prohibitive latency penalty\nfor Redis, so if your client writes something, B acknowledges the write,\nbut crashes before being able to send the write to its replicas, one of the\nreplicas (that did not receive the write) can be promoted to master, losing\nthe write forever.\n\nThis is very similar to what happens with most databases that are\nconfigured to flush data to disk every second, so it is a scenario you\nare already able to reason about because of past experiences with traditional\ndatabase systems not involving distributed systems. Similarly you can\nimprove consistency by forcing the database to flush data to disk before\nreplying to the client, but this usually results in prohibitively low\nperformance. That would be the equivalent of synchronous replication in\nthe case of Redis Cluster.\n\nBasically, there is a trade-off to be made between performance and consistency.\n\nRedis Cluster has support for synchronous writes when absolutely needed,\nimplemented via the `WAIT` command. This makes losing writes a lot less\nlikely. 
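The asynchronous write path described above can be illustrated with a toy model. This is not Redis code; the `Node` class and `write_async` function are hypothetical stand-ins that only show why a write the client saw acknowledged can vanish when the master crashes before replicating it:

```python
class Node:
    """A toy stand-in for a Redis instance: just a name and a key-value dict."""
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.alive = True

def write_async(master, replicas, key, value, crash_before_replicating=False):
    """Master acks the client first, then replicates; a crash in between loses the write."""
    master.data[key] = value
    ack = "OK"                      # the client sees success at this point
    if crash_before_replicating:
        master.alive = False        # e.g. power loss right after the ack
        return ack
    for r in replicas:
        r.data[key] = value
    return ack

b, b1 = Node("B"), Node("B1")
assert write_async(b, [b1], "k", "v1", crash_before_replicating=True) == "OK"
# B1 gets promoted to master, but the acknowledged write is gone:
assert "k" not in b1.data
```

With `WAIT`, the client would block until the given number of replicas acknowledged the write before treating it as durable, shrinking (but not closing) this window.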
However, note that Redis Cluster does not implement strong consistency\neven when synchronous replication is used: it is always possible, under more\ncomplex failure scenarios, that a replica that was not able to receive the write\nwill be elected as master.\n\nThere is another notable scenario where Redis Cluster will lose writes, that\nhappens during a network partition where a client is isolated with a minority\nof instances including at least a master.\n\nTake as an example our 6 nodes cluster composed of A, B, C, A1, B1, C1,\nwith 3 masters and 3 replicas. There is also a client, that we will call Z1.\n\nAfter a partition occurs, it is possible that in one side of the\npartition we have A, C, A1, B1, C1, and in the other side we have B and Z1.\n\nZ1 is still able to write to B, which will accept its writes. If the\npartition heals in a very short time, the cluster will continue normally.\nHowever, if the partition lasts enough time for B1 to be promoted to master\non the majority side of the partition, the writes that Z1 has sent to B\nin the meantime will be lost.\n\n\nThere is a **maximum window** to the amount of writes Z1 will be able\nto send to B: if enough time has elapsed for the majority side of the\npartition to elect a replica as master, every master node in the minority\nside will have stopped accepting writes.\n\n\nThis amount of time is a very important configuration directive of Redis\nCluster, and is called the **node timeout**.\n\nAfter node timeout has elapsed, a master node is considered to be failing,\nand can be replaced by one of its replicas.\nSimilarly, after node timeout has elapsed without a master node to be able\nto sense the majority of the other master nodes, it enters an error state\nand stops accepting writes.\n\n## Redis Cluster configuration parameters\n\nWe are about to create an example cluster deployment. 
\nBefore we continue, let's introduce the configuration parameters that Redis Cluster introduces\nin the `redis.conf` file.\n\n* **cluster-enabled `<yes\/no>`**: If yes, enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a standalone instance as usual.\n* **cluster-config-file `<filename>`**: Note that despite the name of this option, this is not a user editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. Often this file is rewritten and flushed on disk as a result of some message reception.\n* **cluster-node-timeout `<milliseconds>`**: The maximum amount of time a Redis Cluster node can be unavailable, without it being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its replicas. This parameter controls other important things in Redis Cluster. Notably, every node that can't reach the majority of master nodes for the specified amount of time, will stop accepting queries.\n* **cluster-slave-validity-factor `<factor>`**: If set to zero, a replica will always consider itself valid, and will therefore always try to failover a master, regardless of the amount of time the link between the master and the replica remained disconnected. If the value is positive, a maximum disconnection time is calculated as the *node timeout* value multiplied by the factor provided with this option, and if the node is a replica, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. 
For example, if the node timeout is set to 5 seconds and the validity factor is set to 10, a replica disconnected from the master for more than 50 seconds will not try to failover its master. Note that any value different from zero may result in Redis Cluster being unavailable after a master failure if no replica is able to fail it over. In that case the cluster will return to being available only when the original master rejoins the cluster.
* **cluster-migration-barrier `<count>`**: Minimum number of replicas a master will remain connected with, for another replica to migrate to a master which is no longer covered by any replica. See the section about replica migration in this tutorial for more information.
* **cluster-require-full-coverage `<yes/no>`**: If this is set to yes, as it is by default, the cluster stops accepting writes if some percentage of the key space is not covered by any node. If the option is set to no, the cluster will still serve queries even if only requests about a subset of keys can be processed.
* **cluster-allow-reads-when-down `<yes/no>`**: If this is set to no, as it is by default, a node in a Redis Cluster will stop serving all traffic when the cluster is marked as failed, either when a node can't reach a quorum of masters or when full coverage is not met. This prevents reading potentially inconsistent data from a node that is unaware of changes in the cluster. This option can be set to yes to allow reads from a node during the fail state, which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes.
It can also be useful when running Redis Cluster with only one or two shards, as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible.

## Create and use a Redis Cluster

To create and use a Redis Cluster, follow these steps:

* [Create a Redis Cluster](#create-a-redis-cluster)
* [Interact with the cluster](#interact-with-the-cluster)
* [Write an example app with redis-rb-cluster](#write-an-example-app-with-redis-rb-cluster)
* [Reshard the cluster](#reshard-the-cluster)
* [A more interesting example application](#a-more-interesting-example-application)
* [Test the failover](#test-the-failover)
* [Manual failover](#manual-failover)
* [Add a new node](#add-a-new-node)
* [Remove a node](#remove-a-node)
* [Replica migration](#replica-migration)
* [Upgrade nodes in a Redis Cluster](#upgrade-nodes-in-a-redis-cluster)
* [Migrate to Redis Cluster](#migrate-to-redis-cluster)

First, though, familiarize yourself with the requirements for creating a cluster.

#### Requirements to create a Redis Cluster

To create a cluster, the first thing you need is a few empty Redis instances running in _cluster mode_.

At minimum, set the following directives in the `redis.conf` file:

```
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
```

To enable cluster mode, set the `cluster-enabled` directive to `yes`.
Every instance also specifies the path of the file where the
configuration for this node is stored, which by default is `nodes.conf`.
This file is never touched by humans; it is simply generated at startup
by the Redis Cluster instances, and updated every time it is needed.

Note that the **minimal cluster** that works as expected must contain
at least three master nodes.
For deployment, we strongly recommend
a six-node cluster, with three masters and three replicas.

You can test this locally by creating a directory for each instance,
named after the port number the instance will run on.

For example:

```
mkdir cluster-test
cd cluster-test
mkdir 7000 7001 7002 7003 7004 7005
```

Create a `redis.conf` file inside each of the directories, from 7000 to 7005.
As a template for your configuration file, just use the small example above,
but make sure to replace the port number `7000` with the right port number
according to the directory name.

You can start each instance as follows, each running in a separate terminal tab:

```
cd 7000
redis-server ./redis.conf
```

You'll see from the logs that every node assigns itself a new ID:

    [82462] 26 Nov 11:56:55.329 * No cluster configuration found, I'm 97a3a64667477371c4479320d683e4c8db5858b1

This ID will be used forever by this specific instance, so that the instance
has a unique name in the context of the cluster. Every node
remembers every other node by these IDs, and not by IP or port.
IP addresses and ports may change, but the unique node identifier will never
change for the entire life of the node.
We call this identifier simply the **Node ID**.

#### Create a Redis Cluster

Now that we have a number of instances running, we need to create the cluster by writing some meaningful configuration to the nodes.

You can configure and execute individual instances manually or use the create-cluster script.
Let's go over how you do it manually.

To create the cluster, run:

    redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \
    127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
    --cluster-replicas 1

The command used here is **create**, since we want to create a new cluster.
The option `--cluster-replicas 1` means that we want a replica for every master created.

The other arguments are the list of addresses of the instances we want to use
to create the new cluster.

`redis-cli` will propose a configuration. Accept the proposed configuration by typing **yes**.
The cluster will be configured and *joined*, which means that instances will be
bootstrapped into talking with each other. Finally, if everything has gone well, you'll see a message like this:

    [OK] All 16384 slots covered

This means that there is at least one master instance serving each of the
16384 available slots.

If you don't want to create a Redis Cluster by configuring and executing
individual instances manually as explained above, there is a much simpler
system (though you won't learn the same amount of operational details).

Find the `utils/create-cluster` directory in the Redis distribution.
Inside there is a simple bash script called `create-cluster` (same name as
the directory that contains it). To start a 6-node cluster with 3 masters
and 3 replicas, just type the following commands:

1. `create-cluster start`
2. `create-cluster create`

Reply `yes` in step 2 when the `redis-cli` utility asks you to accept
the cluster layout.

You can now interact with the cluster; the first node will start at port 30001
by default. When you are done, stop the cluster with:

3. `create-cluster stop`

Please read the `README` inside this directory for more information on how
to run the script.

#### Interact with the cluster

To connect to Redis Cluster, you'll need a cluster-aware Redis client.
See the [documentation](/docs/clients) for your client of choice to determine its cluster support.

You can also test your Redis Cluster using the `redis-cli` command line utility:

```
$ redis-cli -c -p 7000
redis 127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7002
OK
redis 127.0.0.1:7002> set hello world
-> Redirected to slot [866] located at 127.0.0.1:7000
OK
redis 127.0.0.1:7000> get foo
-> Redirected to slot [12182] located at 127.0.0.1:7002
"bar"
redis 127.0.0.2:7002> get hello
-> Redirected to slot [866] located at 127.0.0.1:7000
"world"
```

If you created the cluster using the script, your nodes may listen
on different ports, starting from 30001 by default.

The `redis-cli` cluster support is very basic, so it always relies on the fact that
Redis Cluster nodes are able to redirect a client to the right node.
A serious client is able to do better than that, and cache the map between
hash slots and node addresses, to directly use the right connection to the
right node.
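Incidentally, the key-to-slot mapping is simple enough for any client to compute locally: Redis hashes the key with CRC16 (XModem variant) and takes the result modulo 16384. Here is a minimal Ruby sketch; the `crc16` and `hash_slot` helper names are ours for illustration, not taken from redis-rb-cluster, and the `{...}` hash-tag handling follows the Redis Cluster specification:

```ruby
# CRC16-CCITT (XModem): polynomial 0x1021, initial value 0, no reflection.
def crc16(data)
  crc = 0
  data.each_byte do |b|
    crc ^= b << 8
    8.times do
      crc = (crc & 0x8000).zero? ? (crc << 1) & 0xFFFF : ((crc << 1) ^ 0x1021) & 0xFFFF
    end
  end
  crc
end

# Map a key to one of the 16384 hash slots. If the key contains a
# non-empty {hash tag}, only the tag's content is hashed, so keys that
# share a tag always land in the same slot.
def hash_slot(key)
  s = key.index("{")
  if s
    e = key.index("}", s + 1)
    key = key[(s + 1)...e] if e && e > s + 1
  end
  crc16(key) % 16384
end

puts hash_slot("foo")    # 12182, matching the redirect shown above
puts hash_slot("hello")  # 866
puts hash_slot("{user:1000}.foo") == hash_slot("{user:1000}.bar")  # true
```

This is also why a client can build its slot map once and route every subsequent command directly, only falling back to redirects when the map is stale.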
The map is refreshed only when something changes in the cluster
configuration, for example after a failover or after the system administrator
has changed the cluster layout by adding or removing nodes.

#### Write an example app with redis-rb-cluster

Before moving on to showing how to operate the Redis Cluster, doing things
like a failover or a resharding, we need an example application, or at
least to understand the semantics of a simple Redis Cluster client
interaction.

This way we can run an example and at the same time try to make nodes
fail, or start a resharding, to see how Redis Cluster behaves under real-world
conditions. It is not very helpful to see what happens while nobody
is writing to the cluster.

This section explains some basic usage of
[redis-rb-cluster](https://github.com/antirez/redis-rb-cluster) showing two
examples.
The first is the following, and is the
[`example.rb`](https://github.com/antirez/redis-rb-cluster/blob/master/example.rb)
file inside the redis-rb-cluster distribution:

```
   1  require './cluster'
   2
   3  if ARGV.length != 2
   4      startup_nodes = [
   5          {:host => "127.0.0.1", :port => 7000},
   6          {:host => "127.0.0.1", :port => 7001}
   7      ]
   8  else
   9      startup_nodes = [
  10          {:host => ARGV[0], :port => ARGV[1].to_i}
  11      ]
  12  end
  13
  14  rc = RedisCluster.new(startup_nodes,32,:timeout => 0.1)
  15
  16  last = false
  17
  18  while not last
  19      begin
  20          last = rc.get("__last__")
  21          last = 0 if !last
  22      rescue => e
  23          puts "error #{e.to_s}"
  24          sleep 1
  25      end
  26  end
  27
  28  ((last.to_i+1)..1000000000).each{|x|
  29      begin
  30          rc.set("foo#{x}",x)
  31          puts rc.get("foo#{x}")
  32          rc.set("__last__",x)
  33      rescue => e
  34          puts "error #{e.to_s}"
  35      end
  36      sleep 0.1
  37  }
```

The application does a very simple thing: it sets keys in the form `foo<number>` to `number`, one after the other. So if you run the program the result is the
following stream of commands:

* SET foo0 0
* SET foo1 1
* SET foo2 2
* And so forth...

The program looks more complex than it normally would, because it is designed to
show errors on the screen instead of exiting with an exception, so every
operation performed against the cluster is wrapped in `begin` / `rescue` blocks.

**Line 14** is the first interesting line in the program. It creates the
Redis Cluster object, using as arguments the list of *startup nodes*, the maximum
number of connections this object is allowed to take against different nodes,
and finally the timeout after which a given operation is considered to have failed.

The startup nodes don't need to be all the nodes of the cluster. The important
thing is that at least one node is reachable. Also note that redis-rb-cluster
updates this list of startup nodes as soon as it is able to connect with the
first node. You should expect such behavior from any other serious client.

Now that we have the Redis Cluster object instance stored in the **rc** variable,
we are ready to use the object as if it were a normal Redis object instance.

This is exactly what happens in **lines 18 to 26**: when we restart the example
we don't want to start again with `foo0`, so we store the counter inside
Redis itself. The code above is designed to read this counter, or, if the
counter does not exist, to assign it the value of zero.

Note, however, that it is a while loop, as we want to try again and again even
if the cluster is down and is returning errors. Normal applications don't need
to be so careful.

**Lines 28 to 37** run the main loop where the keys are set, or
an error is displayed.

Note the `sleep` call at the end of the loop.
In your tests you can remove
the sleep if you want to write to the cluster as fast as possible (relative
to the fact that this is a busy loop without real parallelism, of course, so
you'll get roughly 10k ops/second under the best of conditions).

Normally writes are slowed down so that the example application is
easier for humans to follow.

Starting the application produces the following output:

```
ruby ./example.rb
1
2
3
4
5
6
7
8
9
^C (I stopped the program here)
```

This is not a very interesting program, and we'll use a better one in a moment,
but we can already see what happens during a resharding while the program
is running.

#### Reshard the cluster

Now we are ready to try a cluster resharding. To do this, please
keep the example.rb program running, so that you can see whether the
resharding has some impact on the running program. Also, you may want to
comment out the `sleep` call to generate some more serious write load
during resharding.

Resharding basically means moving hash slots from one set of nodes to another
set of nodes.
Like cluster creation, it is accomplished using the redis-cli utility.

To start a resharding, just type:

    redis-cli --cluster reshard 127.0.0.1:7000

You only need to specify a single node; redis-cli will find the other nodes
automatically.

Currently redis-cli is only able to reshard with the administrator's support:
you can't just tell it to move 5% of the slots from this node to the other one
(though this would be pretty trivial to implement). So it starts by asking
questions.
The first
is how much of a resharding you want to do:

    How many slots do you want to move (from 1 to 16384)?

We can try to reshard 1000 hash slots, which should already contain a
non-trivial amount of keys if the example is still running without the sleep
call.

Then redis-cli needs to know the target of the resharding, that is,
the node that will receive the hash slots.
I'll use the first master node, that is, 127.0.0.1:7000, but I need
to specify the Node ID of the instance. This was already printed in a
list by redis-cli, but I can always find the ID of a node with the following
command if I need to:

```
$ redis-cli -p 7000 cluster nodes | grep myself
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5460
```

OK, so my target node is 97a3a64667477371c4479320d683e4c8db5858b1.

Now you'll be asked which nodes you want to take those hash slots from.
I'll just type `all` in order to take a few hash slots from all the
other master nodes.

After the final confirmation you'll see a message for every slot that
redis-cli is going to move from one node to another, and a dot will be printed
for every actual key moved from one side to the other.

While the resharding is in progress you should be able to see your
example program running unaffected. You can stop and restart it multiple times
during the resharding if you want.

At the end of the resharding, you can test the health of the cluster with
the following command:

    redis-cli --cluster check 127.0.0.1:7000

All the slots will be covered as usual, but this time the master at
127.0.0.1:7000 will have more hash slots, something around 6461.

Resharding can also be performed automatically, without the need to manually
enter the parameters interactively.
This is possible using a command
line like the following:

    redis-cli --cluster reshard <host>:<port> --cluster-from <node-id> --cluster-to <node-id> --cluster-slots <number of slots> --cluster-yes

This allows you to build some automation if you are likely to reshard often;
however, currently there is no way for `redis-cli` to automatically
rebalance the cluster by checking the distribution of keys across the cluster
nodes and intelligently moving slots as needed. This feature will be added
in the future.

The `--cluster-yes` option instructs the cluster manager to automatically answer
"yes" to the command's prompts, allowing it to run in a non-interactive mode.
Note that this option can also be activated by setting the
`REDISCLI_CLUSTER_YES` environment variable.

#### A more interesting example application

The example application we wrote earlier is not very good.
It writes to the cluster in a simple way, without even checking whether what was
written is the right thing.

From our point of view the cluster receiving the writes could just always
write the key `foo` to `42` on every operation, and we would not notice at
all.

So in the `redis-rb-cluster` repository, there is a more interesting application
called `consistency-test.rb`. It uses a set of counters, by default 1000, and sends `INCR` commands in order to increment the counters.

However, instead of just writing, the application does two additional things:

* When a counter is updated using `INCR`, the application remembers the write.
* It also reads a random counter before every write, and checks whether the value is what we expect it to be, comparing it with the value it has in memory.

What this means is that this application is a simple **consistency checker**,
and is able to tell you whether the cluster lost some write, or whether it accepted
a write for which we did not receive an acknowledgment.
In the first case we'll
see a counter with a value that is smaller than the one we remember, while
in the second case the value will be greater.

Running the consistency-test application produces a line of output every
second:

```
$ ruby consistency-test.rb
925 R (0 err) | 925 W (0 err) |
5030 R (0 err) | 5030 W (0 err) |
9261 R (0 err) | 9261 W (0 err) |
13517 R (0 err) | 13517 W (0 err) |
17780 R (0 err) | 17780 W (0 err) |
22025 R (0 err) | 22025 W (0 err) |
25818 R (0 err) | 25818 W (0 err) |
```

The line shows the number of **R**eads and **W**rites performed, and the
number of errors (queries not accepted because the system was not
available).

If some inconsistency is found, new lines are added to the output.
This is what happens, for example, if I reset a counter manually while
the program is running:

```
$ redis-cli -h 127.0.0.1 -p 7000 set key_217 0
OK

(in the other tab I see...)

94774 R (0 err) | 94774 W (0 err) |
98821 R (0 err) | 98821 W (0 err) |
102886 R (0 err) | 102886 W (0 err) | 114 lost |
107046 R (0 err) | 107046 W (0 err) | 114 lost |
```

When I set the counter to 0 the real value was 114, so the program reports
114 lost writes (`INCR` commands that are not remembered by the cluster).

This program is much more interesting as a test case, so we'll use it
to test the Redis Cluster failover.

#### Test the failover

To trigger the failover, the simplest thing we can do (which is also
the semantically simplest failure that can occur in a distributed system)
is to crash a single process, in our case a single master.

During this test, you should keep a tab open with the consistency test
application running.

We can identify a master and crash it with the following command:

```
$ redis-cli -p 7000 cluster nodes | grep master
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385482984082 0 connected 5960-10921
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 master - 0 1385482983582 0 connected 11423-16383
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
```

OK, so 7000, 7001, and 7002 are masters. Let's crash node 7002 with the
**DEBUG SEGFAULT** command:

```
$ redis-cli -p 7002 debug segfault
Error: Server closed the connection
```

Now we can look at the output of the consistency test to see what it reported.

```
18849 R (0 err) | 18849 W (0 err) |
23151 R (0 err) | 23151 W (0 err) |
27302 R (0 err) | 27302 W (0 err) |

... many error warnings here ...

29659 R (578 err) | 29660 W (577 err) |
33749 R (578 err) | 33750 W (577 err) |
37918 R (578 err) | 37919 W (577 err) |
42077 R (578 err) | 42078 W (577 err) |
```

As you can see, during the failover the system was not able to accept 578 reads and 577 writes; however, no inconsistency was created in the database. This may
sound unexpected, as in the first part of this tutorial we stated that Redis
Cluster can lose writes during the failover because it uses asynchronous
replication. What we did not say is that this is not very likely to happen,
because Redis sends the reply to the client and the commands to replicate
to the replicas at about the same time, so there is a very small window in
which to lose data.
However, the fact that it is hard to trigger does not mean that it
is impossible, so this does not change the consistency guarantees provided
by Redis Cluster.

We can now check the cluster setup after the failover (note that
in the meantime I restarted the crashed instance so that it rejoined the
cluster as a replica):

```
$ redis-cli -p 7000 cluster nodes
3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385503418521 0 connected
a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385503419023 0 connected
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385503419023 3 connected 11423-16383
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385503417005 0 connected 5960-10921
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385503418016 3 connected
```

Now the masters are running on ports 7000, 7001 and 7005. What was previously
a master, that is, the Redis instance running on port 7002, is now a replica of
7005.

The output of the `CLUSTER NODES` command may look intimidating, but it is actually pretty simple, and is composed of the following tokens:

* Node ID
* ip:port
* flags: master, replica, myself, fail, ...
* If it is a replica, the Node ID of its master
* Time of the last pending PING still waiting for a reply
* Time of the last PONG received
* Configuration epoch for this node (see the Cluster specification)
* Status of the link to this node
* Slots served...

#### Manual failover

Sometimes it is useful to force a failover without actually causing any problem
on a master.
For example, to upgrade the Redis process of one of the
master nodes, it is a good idea to fail it over so that it turns into a replica,
with minimal impact on availability.

Manual failovers are supported by Redis Cluster using the `CLUSTER FAILOVER`
command, which must be executed on one of the replicas of the master you want
to fail over.

Manual failovers are special, and are safer compared to failovers resulting from
actual master failures. They occur in a way that avoids data loss in the
process, by switching clients from the original master to the new master only
when the system is sure that the new master processed all the replication stream
from the old one.

This is what you see in the replica log when you perform a manual failover:

    # Manual failover user request accepted.
    # Received replication offset for paused master manual failover: 347540
    # All master replication stream processed, manual failover can start.
    # Start of election delayed for 0 milliseconds (rank #0, offset 347540).
    # Starting a failover election for epoch 7545.
    # Failover election won: I'm the new master.

Basically, clients connected to the master we are failing over are stopped.
At the same time the master sends its replication offset to the replica, which
waits to reach that offset on its side. When the replication offset is reached,
the failover starts, and the old master is informed about the configuration
switch.
When the clients are unblocked on the old master, they are redirected
to the new master.

To promote a replica to master, it must first be known as a replica by a majority of the masters in the cluster.
Otherwise, it cannot win the failover election.
If the replica has just been added to the cluster (see [Add a new node as a replica](#add-a-new-node-as-a-replica)), you may need to wait a while before sending the `CLUSTER FAILOVER` command, to make sure the masters in the cluster are aware of the new replica.

#### Add a new node

Adding a new node is basically the process of adding an empty node and then
moving some data into it, if it is a new master, or telling it to
set up as a replica of a known node, if it is a replica.

We'll show both, starting with the addition of a new master instance.

In both cases the first step to perform is **adding an empty node**.

This is as simple as starting a new node on port 7006 (we already used
7000 to 7005 for our existing 6 nodes) with the same configuration
used for the other nodes, except for the port number. To conform with the
setup we used for the previous nodes:

* Create a new tab in your terminal application.
* Enter the `cluster-test` directory.
* Create a directory named `7006`.
* Create a redis.conf file inside, similar to the one used for the other nodes but using 7006 as the port number.
* Finally, start the server with `../redis-server ./redis.conf`.

At this point the server should be running.

Now we can use **redis-cli** as usual in order to add the node to
the existing cluster.

    redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000

As you can see, I used the **add-node** command, specifying the address of the
new node as the first argument, and the address of a random existing node in the
cluster as the second argument.

In practical terms redis-cli here did very little to help us: it just
sent a `CLUSTER MEET` message to the node, something that is also possible
to accomplish manually. However, redis-cli also checks the state of the
cluster before operating, so it is a good idea to always perform cluster
operations via redis-cli, even when you know how the internals work.

Now we can connect to the new node to see if it really joined the cluster:

```
redis 127.0.0.1:7006> cluster nodes
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385543178575 0 connected 5960-10921
3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385543179583 0 connected
f093c80dde814da99c5cf72a7dd01590792b783b :0 myself,master - 0 0 0 connected
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543178072 3 connected
a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385543178575 0 connected
97a3a64667477371c4479320d683e4c8db5858b1 127.0.0.1:7000 master - 0 1385543179080 0 connected 0-5959 10922-11422
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385543177568 3 connected 11423-16383
```

Note that since this node is already connected to the cluster, it is already
able to redirect client queries correctly and is, generally speaking, part of
the cluster. However, it has two peculiarities compared to the other masters:

* It holds no data, as it has no assigned hash slots.
* Because it is a master without assigned slots, it does not participate in the election process when a replica wants to become a master.

Now it is possible to assign hash slots to this node using the resharding
feature of `redis-cli`.
There is no point in showing this here, as we already did it in a previous
section; there is no difference, it is just a resharding with the empty
node as its target.

##### Add a new node as a replica

Adding a new replica can be performed in two ways.
The obvious one is to
use redis-cli again, but with the --cluster-slave option, like this:

    redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave

Note that the command line here is exactly like the one we used to add
a new master, so we are not specifying to which master we want to add
the replica. In this case, what happens is that redis-cli will add the new
node as a replica of a random master among the masters with the fewest replicas.

However, you can specify exactly which master you want to target with your
new replica with the following command line:

    redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave --cluster-master-id 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e

This way we assign the new replica to a specific master.

A more manual way to add a replica to a specific master is to add the new
node as an empty master, and then turn it into a replica using the
`CLUSTER REPLICATE` command. This also works if the node was added as a replica
but you want to move it to a different master.

For example, in order to add a replica for the node 127.0.0.1:7005, which is
currently serving hash slots in the range 11423-16383 and has the Node ID
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e, all I need to do is connect
to the new node (already added as an empty master) and send the command:

    redis 127.0.0.1:7006> cluster replicate 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e

That's it. Now we have a new replica for this set of hash slots, and all
the other nodes in the cluster already know about it (after the few seconds
needed to update their configuration).
We can verify with the following command:

```
$ redis-cli -p 7000 cluster nodes | grep slave | grep 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e
f093c80dde814da99c5cf72a7dd01590792b783b 127.0.0.1:7006 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617702 3 connected
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617198 3 connected
```

The node 3c3a0c... now has two replicas, running on ports 7002 (the existing one) and 7006 (the new one).

#### Remove a node

To remove a replica node, just use the `del-node` command of redis-cli:

    redis-cli --cluster del-node 127.0.0.1:7000 <node-id>

The first argument is just a random node in the cluster; the second argument
is the ID of the node you want to remove.

You can remove a master node in the same way as well, **however in order to
remove a master node it must be empty**. If the master is not empty, you need
to reshard its data away to all the other master nodes beforehand.

An alternative way to remove a master node is to perform a manual failover onto
one of its replicas and remove the node after it has turned into a replica of the
new master. Obviously this does not help when you want to reduce the actual
number of masters in your cluster; in that case, a resharding is needed.

There is a special scenario where you want to remove a failed node.
You should not use the `del-node` command, because it tries to connect to all nodes and you will encounter a "connection refused" error.
Instead, you can use the `call` command:

    redis-cli --cluster call 127.0.0.1:7000 cluster forget <node-id>

This command will execute the `CLUSTER FORGET` command on every node.
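To forget a failed node you first need its ID, which you can read off the `CLUSTER NODES` output: the flags are the comma-separated third field of each line, and a confirmed failure is marked with the `fail` flag. As an illustrative sketch (the `failed_node_ids` helper is ours, not part of redis-cli; the sample lines reuse node IDs from the outputs above):

```ruby
# Extract the IDs of nodes whose flags include "fail" from the raw
# output of `redis-cli cluster nodes`.
def failed_node_ids(cluster_nodes_output)
  cluster_nodes_output.each_line.filter_map do |line|
    node_id, _addr, flags = line.split
    node_id if flags && flags.split(",").include?("fail")
  end
end

sample = <<~NODES
  97a3a64667477371c4479320d683e4c8db5858b1 127.0.0.1:7000 myself,master - 0 0 0 connected 0-5460
  2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 master,fail - 0 1385482983582 0 connected 11423-16383
NODES

puts failed_node_ids(sample)  # 2938205e12de373867bf38f1ca29d31d0ddb3e46
```

Each ID found this way could then be passed to the `cluster forget` invocation shown above.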
#### Replica migration

In Redis Cluster, you can reconfigure a replica to replicate from a
different master at any time, just by using this command:

    CLUSTER REPLICATE <master-node-id>

However, there is a special scenario where you want replicas to move from one
master to another automatically, without the help of the system administrator.
The automatic reconfiguration of replicas is called *replica migration*, and is
able to improve the reliability of a Redis Cluster.

You can read the details of replica migration in the [Redis Cluster Specification](/topics/cluster-spec); here we'll only provide some information about the
general idea and what you should do in order to benefit from it.

The reason you may want to let your cluster replicas move from one master
to another under certain conditions is that, usually, the Redis Cluster is only
as resistant to failures as the number of replicas attached to a given master.

For example, a cluster where every master has a single replica can't continue
operations if the master and its replica fail at the same time, simply because
there is no other instance that has a copy of the hash slots the master was
serving. However, while net-splits are likely to isolate a number of nodes
at the same time, many other kinds of failures, like hardware or software failures
local to a single node, are a very notable class of failures that are unlikely
to happen at the same time. So it is possible that in your cluster, where
every master has a replica, the replica is killed at 4am, and the master is killed
at 6am. This would still result in a cluster that can no longer operate.

To improve the reliability of the system, we have the option of adding additional
replicas to every master, but this is expensive. Replica migration allows you to
add more replicas to just a few masters. Say you have 10 masters with 1 replica
each, for a total of 20 instances.
However you add, for example, 3 instances\nmore as replicas of some of your masters, so certain masters will have more\nthan a single replica.\n\nWith replicas migration what happens is that if a master is left without\nreplicas, a replica from a master that has multiple replicas will migrate to\nthe *orphaned* master. So after your replica goes down at 4am as in the example\nwe made above, another replica will take its place, and when the master\nwill fail as well at 5am, there is still a replica that can be elected so that\nthe cluster can continue to operate.\n\nSo what you should know about replicas migration in short?\n\n* The cluster will try to migrate a replica from the master that has the greatest number of replicas in a given moment.\n* To benefit from replica migration you have just to add a few more replicas to a single master in your cluster, it does not matter what master.\n* There is a configuration parameter that controls the replica migration feature that is called `cluster-migration-barrier`: you can read more about it in the example `redis.conf` file provided with Redis Cluster.\n\n#### Upgrade nodes in a Redis Cluster\n\nUpgrading replica nodes is easy since you just need to stop the node and restart\nit with an updated version of Redis. If there are clients scaling reads using\nreplica nodes, they should be able to reconnect to a different replica if a given\none is not available.\n\nUpgrading masters is a bit more complex, and the suggested procedure is:\n\n1. Use `CLUSTER FAILOVER` to trigger a manual failover of the master to one of its replicas.\n   (See the [Manual failover](#manual-failover) in this topic.)\n2. Wait for the master to turn into a replica.\n3. Finally upgrade the node as you do for replicas.\n4. 
If you want the master to be the node you just upgraded, trigger a new manual failover in order to turn back the upgraded node into a master.\n\nFollowing this procedure you should upgrade one node after the other until\nall the nodes are upgraded.\n\n#### Migrate to Redis Cluster\n\nUsers willing to migrate to Redis Cluster may have just a single master, or\nmay already using a preexisting sharding setup, where keys\nare split among N nodes, using some in-house algorithm or a sharding algorithm\nimplemented by their client library or Redis proxy.\n\nIn both cases it is possible to migrate to Redis Cluster easily, however\nwhat is the most important detail is if multiple-keys operations are used\nby the application, and how. There are three different cases:\n\n1. Multiple keys operations, or transactions, or Lua scripts involving multiple keys, are not used. Keys are accessed independently (even if accessed via transactions or Lua scripts grouping multiple commands, about the same key, together).\n2. Multiple keys operations, or transactions, or Lua scripts involving multiple keys are used but only with keys having the same **hash tag**, which means that the keys used together all have a `{...}` sub-string that happens to be identical. For example the following multiple keys operation is defined in the context of the same hash tag: `SUNION {user:1000}.foo {user:1000}.bar`.\n3. 
Multiple keys operations, or transactions, or Lua scripts involving multiple keys are used with key names not having an explicit, or the same, hash tag.\n\nThe third case is not handled by Redis Cluster: the application requires to\nbe modified in order to not use multi keys operations or only use them in\nthe context of the same hash tag.\n\nCase 1 and 2 are covered, so we'll focus on those two cases, that are handled\nin the same way, so no distinction will be made in the documentation.\n\nAssuming you have your preexisting data set split into N masters, where\nN=1 if you have no preexisting sharding, the following steps are needed\nin order to migrate your data set to Redis Cluster:\n\n1. Stop your clients. No automatic live-migration to Redis Cluster is currently possible. You may be able to do it orchestrating a live migration in the context of your application \/ environment.\n2. Generate an append only file for all of your N masters using the `BGREWRITEAOF` command, and waiting for the AOF file to be completely generated.\n3. Save your AOF files from aof-1 to aof-N somewhere. At this point you can stop your old instances if you wish (this is useful since in non-virtualized deployments you often need to reuse the same computers).\n4. Create a Redis Cluster composed of N masters and zero replicas. You'll add replicas later. Make sure all your nodes are using the append only file for persistence.\n5. Stop all the cluster nodes, substitute their append only file with your pre-existing append only files, aof-1 for the first node, aof-2 for the second node, up to aof-N.\n6. Restart your Redis Cluster nodes with the new AOF files. They'll complain that there are keys that should not be there according to their configuration.\n7. Use `redis-cli --cluster fix` command in order to fix the cluster so that keys will be migrated according to the hash slots each node is authoritative or not.\n8. 
Use `redis-cli --cluster check` at the end to make sure your cluster is ok.\n9. Restart your clients modified to use a Redis Cluster aware client library.\n\nThere is an alternative way to import data from external instances to a Redis\nCluster, which is to use the `redis-cli --cluster import` command.\n\nThe command moves all the keys of a running instance (deleting the keys from\nthe source instance) to the specified pre-existing Redis Cluster. However\nnote that if you use a Redis 2.8 instance as source instance the operation\nmay be slow since 2.8 does not implement migrate connection caching, so you\nmay want to restart your source instance with a Redis 3.x version before\nto perform such operation.\n\n \nStarting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.\n \n\n## Learn more\n\n* [Redis Cluster specification](\/topics\/cluster-spec)\n* [Linear Scaling with Redis Enterprise](https:\/\/redis.com\/redis-enterprise\/technology\/linear-scaling-redis-enterprise\/)\n* [Docker documentation](https:\/\/docs.docker.com\/engine\/userguide\/networking\/dockernetworks\/)\n","site":"redis","answers_cleaned":"    title  Scale with Redis Cluster linkTitle  Scale with Redis Cluster weight  6 description  Horizontal scaling with Redis Cluster aliases         topics cluster tutorial       topics partitioning       docs manual scaling       docs manual scaling md        Redis scales horizontally with a deployment topology called Redis Cluster   This topic will teach you how to set up  test  and operate Redis Cluster in production  You will learn about the availability and consistency characteristics of Redis Cluster from the end user s point of view   If you plan to run a production Redis Cluster deployment or want to understand better how Redis Cluster works 
internally  consult the  Redis Cluster specification   topics cluster spec   To learn how Redis Enterprise handles scaling  see  Linear Scaling with Redis Enterprise  https   redis com redis enterprise technology linear scaling redis enterprise        Redis Cluster 101  Redis Cluster provides a way to run a Redis installation where data is automatically sharded across multiple Redis nodes   Redis Cluster also provides some degree of availability during partitions mdash in practical terms  the ability to continue operations when some nodes fail or are unable to communicate   However  the cluster will become unavailable in the event of larger failures  for example  when the majority of masters are unavailable    So  with Redis Cluster  you get the ability to     Automatically split your dataset among multiple nodes    Continue operations when a subset of the nodes are experiencing failures or are unable to communicate with the rest of the cluster        Redis Cluster TCP ports  Every Redis Cluster node requires two open TCP connections  a Redis TCP port used to serve clients  e g   6379  and second port known as the  cluster bus port    By default  the cluster bus port is set by adding 10000 to the data port  e g   16379   however  you can override this in the  cluster port  configuration   Cluster bus is a node to node communication channel that uses a binary protocol  which is more suited to exchanging information between nodes due to little bandwidth and processing time   Nodes use the cluster bus for failure detection  configuration updates  failover authorization  and so forth   Clients should never try to communicate with the cluster bus port  but rather use the Redis command port   However  make sure you open both ports in your firewall  otherwise Redis cluster nodes won t be able to communicate   For a Redis Cluster to work properly you need  for each node   1  The client communication port  usually 6379  used to communicate with clients and be open to all 
the clients that need to reach the cluster  plus all the other cluster nodes that use the client port for key migrations  2  The cluster bus port must be reachable from all the other cluster nodes   If you don t open both TCP ports  your cluster will not work as expected        Redis Cluster and Docker  Currently  Redis Cluster does not support NATted environments and in general environments where IP addresses or TCP ports are remapped   Docker uses a technique called  port mapping   programs running inside Docker containers may be exposed with a different port compared to the one the program believes to be using   This is useful for running multiple containers using the same ports  at the same time  in the same server   To make Docker compatible with Redis Cluster  you need to use Docker s  host networking mode    Please see the    net host  option in the  Docker documentation  https   docs docker com engine userguide networking dockernetworks   for more information        Redis Cluster data sharding  Redis Cluster does not use consistent hashing  but a different form of sharding where every key is conceptually part of what we call a   hash slot     There are 16384 hash slots in Redis Cluster  and to compute the hash slot for a given key  we simply take the CRC16 of the key modulo 16384   Every node in a Redis Cluster is responsible for a subset of the hash slots  so  for example  you may have a cluster with 3 nodes  where     Node A contains hash slots from 0 to 5500    Node B contains hash slots from 5501 to 11000    Node C contains hash slots from 11001 to 16383   This makes it easy to add and remove cluster nodes  For example  if I want to add a new node D  I need to move some hash slots from nodes A  B  C to D  Similarly  if I want to remove node A from the cluster  I can just move the hash slots served by A to B and C  Once node A is empty  I can remove it from the cluster completely   Moving hash slots from a node to another does not require stopping any 
operations  therefore  adding and removing nodes  or changing the percentage of hash slots held by a node  requires no downtime   Redis Cluster supports multiple key operations as long as all of the keys involved in a single command execution  or whole transaction  or Lua script execution  belong to the same hash slot  The user can force multiple keys to be part of the same hash slot by using a feature called  hash tags    Hash tags are documented in the Redis Cluster specification  but the gist is that if there is a substring between    brackets in a key  only what is inside the string is hashed  For example  the keys  user  123  profile  and  user  123  account  are guaranteed to be in the same hash slot because they share the same hash tag  As a result  you can operate on these two keys in the same multi key operation        Redis Cluster master replica model  To remain available when a subset of master nodes are failing or are not able to communicate with the majority of nodes  Redis Cluster uses a master replica model where every hash slot has from 1  the master itself  to N replicas  N 1 additional replica nodes    In our example cluster with nodes A  B  C  if node B fails the cluster is not able to continue  since we no longer have a way to serve hash slots in the range 5501 11000   However  when the cluster is created  or at a later time   we add a replica node to every master  so that the final cluster is composed of A  B  C that are master nodes  and A1  B1  C1 that are replica nodes  This way  the system can continue if node B fails   Node B1 replicates B  and B fails  the cluster will promote node B1 as the new master and will continue to operate correctly   However  note that if nodes B and B1 fail at the same time  Redis Cluster will not be able to continue to operate        Redis Cluster consistency guarantees  Redis Cluster does not guarantee   strong consistency    In practical terms this means that under certain conditions it is possible that 
Redis Cluster will lose writes that were acknowledged by the system to the client   The first reason why Redis Cluster can lose writes is because it uses asynchronous replication  This means that during writes the following happens     Your client writes to the master B    The master B replies OK to your client    The master B propagates the write to its replicas B1  B2 and B3   As you can see  B does not wait for an acknowledgement from B1  B2  B3 before replying to the client  since this would be a prohibitive latency penalty for Redis  so if your client writes something  B acknowledges the write  but crashes before being able to send the write to its replicas  one of the replicas  that did not receive the write  can be promoted to master  losing the write forever   This is very similar to what happens with most databases that are configured to flush data to disk every second  so it is a scenario you are already able to reason about because of past experiences with traditional database systems not involving distributed systems  Similarly you can improve consistency by forcing the database to flush data to disk before replying to the client  but this usually results in prohibitively low performance  That would be the equivalent of synchronous replication in the case of Redis Cluster   Basically  there is a trade off to be made between performance and consistency   Redis Cluster has support for synchronous writes when absolutely needed  implemented via the  WAIT  command  This makes losing writes a lot less likely  However  note that Redis Cluster does not implement strong consistency even when synchronous replication is used  it is always possible  under more complex failure scenarios  that a replica that was not able to receive the write will be elected as master   There is another notable scenario where Redis Cluster will lose writes  that happens during a network partition where a client is isolated with a minority of instances including at least a master   
Take as an example our 6 nodes cluster composed of A  B  C  A1  B1  C1  with 3 masters and 3 replicas  There is also a client  that we will call Z1   After a partition occurs  it is possible that in one side of the partition we have A  C  A1  B1  C1  and in the other side we have B and Z1   Z1 is still able to write to B  which will accept its writes  If the partition heals in a very short time  the cluster will continue normally  However  if the partition lasts enough time for B1 to be promoted to master on the majority side of the partition  the writes that Z1 has sent to B in the meantime will be lost    There is a   maximum window   to the amount of writes Z1 will be able to send to B  if enough time has elapsed for the majority side of the partition to elect a replica as master  every master node in the minority side will have stopped accepting writes    This amount of time is a very important configuration directive of Redis Cluster  and is called the   node timeout     After node timeout has elapsed  a master node is considered to be failing  and can be replaced by one of its replicas  Similarly  after node timeout has elapsed without a master node to be able to sense the majority of the other master nodes  it enters an error state and stops accepting writes      Redis Cluster configuration parameters  We are about to create an example cluster deployment   Before we continue  let s introduce the configuration parameters that Redis Cluster introduces in the  redis conf  file       cluster enabled   yes no      If yes  enables Redis Cluster support in a specific Redis instance  Otherwise the instance starts as a standalone instance as usual      cluster config file   filename      Note that despite the name of this option  this is not a user editable configuration file  but the file where a Redis Cluster node automatically persists the cluster configuration  the state  basically  every time there is a change  in order to be able to re read it at startup  The 
file lists things like the other nodes in the cluster  their state  persistent variables  and so forth  Often this file is rewritten and flushed on disk as a result of some message reception      cluster node timeout   milliseconds      The maximum amount of time a Redis Cluster node can be unavailable  without it being considered as failing  If a master node is not reachable for more than the specified amount of time  it will be failed over by its replicas  This parameter controls other important things in Redis Cluster  Notably  every node that can t reach the majority of master nodes for the specified amount of time  will stop accepting queries      cluster slave validity factor   factor      If set to zero  a replica will always consider itself valid  and will therefore always try to failover a master  regardless of the amount of time the link between the master and the replica remained disconnected  If the value is positive  a maximum disconnection time is calculated as the  node timeout  value multiplied by the factor provided with this option  and if the node is a replica  it will not try to start a failover if the master link was disconnected for more than the specified amount of time  For example  if the node timeout is set to 5 seconds and the validity factor is set to 10  a replica disconnected from the master for more than 50 seconds will not try to failover its master  Note that any value different than zero may result in Redis Cluster being unavailable after a master failure if there is no replica that is able to failover it  In that case the cluster will return to being available only when the original master rejoins the cluster      cluster migration barrier   count      Minimum number of replicas a master will remain connected with  for another replica to migrate to a master which is no longer covered by any replica  See the appropriate section about replica migration in this tutorial for more information      cluster require full coverage   yes no 
     If this is set to yes  as it is by default  the cluster stops accepting writes if some percentage of the key space is not covered by any node  If the option is set to no  the cluster will still serve queries even if only requests about a subset of keys can be processed      cluster allow reads when down   yes no      If this is set to no  as it is by default  a node in a Redis Cluster will stop serving all traffic when the cluster is marked as failed  either when a node can t reach a quorum of masters or when full coverage is not met  This prevents reading potentially inconsistent data from a node that is unaware of changes in the cluster  This option can be set to yes to allow reads from a node during the fail state  which is useful for applications that want to prioritize read availability but still want to prevent inconsistent writes  It can also be used for when using Redis Cluster with only one or two shards  as it allows the nodes to continue serving writes when a master fails but automatic failover is impossible      Create and use a Redis Cluster  To create and use a Redis Cluster  follow these steps      Create a Redis Cluster   create a redis cluster     Interact with the cluster   interact with the cluster     Write an example app with redis rb cluster   write an example app with redis rb cluster     Reshard the cluster   reshard the cluster     A more interesting example application   a more interesting example application     Test the failover   test the failover     Manual failover   manual failover     Add a new node   add a new node     Remove a node   remove a node     Replica migration   replica migration     Upgrade nodes in a Redis Cluster   upgrade nodes in a redis cluster     Migrate to Redis Cluster   migrate to redis cluster   But  first  familiarize yourself with the requirements for creating a cluster        Requirements to create a Redis Cluster  To create a cluster  the first thing you need is to have a few empty Redis instances 
running in  cluster mode     At minimum  set the following directives in the  redis conf  file       port 7000 cluster enabled yes cluster config file nodes conf cluster node timeout 5000 appendonly yes      To enable cluster mode  set the  cluster enabled  directive to  yes   Every instance also contains the path of a file where the configuration for this node is stored  which by default is  nodes conf   This file is never touched by humans  it is simply generated at startup by the Redis Cluster instances  and updated every time it is needed   Note that the   minimal cluster   that works as expected must contain at least three master nodes  For deployment  we strongly recommend a six node cluster  with three masters and three replicas   You can test this locally by creating the following directories named after the port number of the instance you ll run inside any given directory   For example       mkdir cluster test cd cluster test mkdir 7000 7001 7002 7003 7004 7005      Create a  redis conf  file inside each of the directories  from 7000 to 7005  As a template for your configuration file just use the small example above  but make sure to replace the port number  7000  with the right port number according to the directory name    You can start each instance as follows  each running in a separate terminal tab       cd 7000 redis server   redis conf     You ll see from the logs that every node assigns itself a new ID        82462  26 Nov 11 56 55 329   No cluster configuration found  I m 97a3a64667477371c4479320d683e4c8db5858b1  This ID will be used forever by this specific instance in order for the instance to have a unique name in the context of the cluster  Every node remembers every other node using this IDs  and not by IP or port  IP addresses and ports may change  but the unique node identifier will never change for all the life of the node  We call this identifier simply   Node ID          Create a Redis Cluster  Now that we have a number of instances 
running  you need to create your cluster by writing some meaningful configuration to the nodes   You can configure and execute individual instances manually or use the create cluster script  Let s go over how you do it manually   To create the cluster  run       redis cli   cluster create 127 0 0 1 7000 127 0 0 1 7001       127 0 0 1 7002 127 0 0 1 7003 127 0 0 1 7004 127 0 0 1 7005         cluster replicas 1  The command used here is   create    since we want to create a new cluster  The option    cluster replicas 1  means that we want a replica for every master created   The other arguments are the list of addresses of the instances I want to use to create the new cluster    redis cli  will propose a configuration  Accept the proposed configuration by typing   yes    The cluster will be configured and  joined   which means that instances will be bootstrapped into talking with each other  Finally  if everything has gone well  you ll see a message like this        OK  All 16384 slots covered  This means that there is at least one master instance serving each of the 16384 available slots   If you don t want to create a Redis Cluster by configuring and executing individual instances manually as explained above  there is a much simpler system  but you ll not learn the same amount of operational details    Find the  utils create cluster  directory in the Redis distribution  There is a script called  create cluster  inside  same name as the directory it is contained into   it s a simple bash script  In order to start a 6 nodes cluster with 3 masters and 3 replicas just type the following commands   1   create cluster start  2   create cluster create   Reply to  yes  in step 2 when the  redis cli  utility wants you to accept the cluster layout   You can now interact with the cluster  the first node will start at port 30001 by default  When you are done  stop the cluster with   3   create cluster stop   Please read the  README  inside this directory for more information 
on how to run the script        Interact with the cluster  To connect to Redis Cluster  you ll need a cluster aware Redis client   See the  documentation   docs clients  for your client of choice to determine its cluster support   You can also test your Redis Cluster using the  redis cli  command line utility         redis cli  c  p 7000 redis 127 0 0 1 7000  set foo bar    Redirected to slot  12182  located at 127 0 0 1 7002 OK redis 127 0 0 1 7002  set hello world    Redirected to slot  866  located at 127 0 0 1 7000 OK redis 127 0 0 1 7000  get foo    Redirected to slot  12182  located at 127 0 0 1 7002  bar  redis 127 0 0 1 7002  get hello    Redirected to slot  866  located at 127 0 0 1 7000  world         If you created the cluster using the script  your nodes may listen on different ports  starting from 30001 by default    The  redis cli  cluster support is very basic  so it always uses the fact that Redis Cluster nodes are able to redirect a client to the right node  A serious client is able to do better than that  and cache the map between hash slots and nodes addresses  to directly use the right connection to the right node  The map is refreshed only when something changed in the cluster configuration  for example after a failover or after the system administrator changed the cluster layout by adding or removing nodes        Write an example app with redis rb cluster  Before going forward showing how to operate the Redis Cluster  doing things like a failover  or a resharding  we need to create some example application or at least to be able to understand the semantics of a simple Redis Cluster client interaction   In this way we can run an example and at the same time try to make nodes failing  or start a resharding  to see how Redis Cluster behaves under real world conditions  It is not very helpful to see what happens while nobody is writing to the cluster   This section explains some basic usage of  redis rb cluster  https   github com antirez redis rb 
cluster  showing two examples   The first is the following  and is the   example rb   https   github com antirez redis rb cluster blob master example rb  file inside the redis rb cluster distribution          1  require    cluster     2    3  if ARGV length    2    4      startup nodes        5            host     127 0 0 1    port    7000      6            host     127 0 0 1    port    7001     7           8  else    9      startup nodes       10            host    ARGV 0    port    ARGV 1  to i    11          12  end   13   14  rc   RedisCluster new startup nodes 32  timeout    0 1    15   16  last   false   17   18  while not last   19      begin   20          last   rc get    last       21          last   0 if  last   22      rescue    e   23          puts  error   e to s     24          sleep 1   25      end   26  end   27   28    last to i 1   1000000000  each  x    29      begin   30          rc set  foo  x   x    31          puts rc get  foo  x      32          rc set    last    x    33      rescue    e   34          puts  error   e to s     35      end   36      sleep 0 1   37         The application does a very simple thing  it sets keys in the form  foo number   to  number   one after the other  So if you run the program the result is the following stream of commands     SET foo0 0   SET foo1 1   SET foo2 2   And so forth     The program looks more complex than it should usually as it is designed to show errors on the screen instead of exiting with an exception  so every operation performed with the cluster is wrapped by  begin   rescue  blocks   The   line 14   is the first interesting line in the program  It creates the Redis Cluster object  using as argument a list of  startup nodes   the maximum number of connections this object is allowed to take against different nodes  and finally the timeout after a given operation is considered to be failed   The startup nodes don t need to be all the nodes of the cluster  The important thing is that at least 
one node is reachable  Also note that redis rb cluster updates this list of startup nodes as soon as it is able to connect with the first node  You should expect such a behavior with any other serious client   Now that we have the Redis Cluster object instance stored in the   rc   variable  we are ready to use the object like if it was a normal Redis object instance   This is exactly what happens in   line 18 to 26    when we restart the example we don t want to start again with  foo0   so we store the counter inside Redis itself  The code above is designed to read this counter  or if the counter does not exist  to assign it the value of zero   However note how it is a while loop  as we want to try again and again even if the cluster is down and is returning errors  Normal applications don t need to be so careful     Lines between 28 and 37   start the main loop where the keys are set or an error is displayed   Note the  sleep  call at the end of the loop  In your tests you can remove the sleep if you want to write to the cluster as fast as possible  relatively to the fact that this is a busy loop without real parallelism of course  so you ll get the usually 10k ops second in the best of the conditions    Normally writes are slowed down in order for the example application to be easier to follow by humans   Starting the application produces the following output       ruby   example rb 1 2 3 4 5 6 7 8 9  C  I stopped the program here       This is not a very interesting program and we ll use a better one in a moment but we can already see what happens during a resharding when the program is running        Reshard the cluster  Now we are ready to try a cluster resharding  To do this  please keep the example rb program running  so that you can see if there is some impact on the program running  Also  you may want to comment the  sleep  call to have some more serious write load during resharding   Resharding basically means to move hash slots from a set of nodes to 
Like cluster creation, it is accomplished using the redis-cli utility.

To start a resharding, just type:

    redis-cli --cluster reshard 127.0.0.1:7000

You only need to specify a single node, redis-cli will find the other nodes automatically.

Currently redis-cli is only able to reshard with the administrator support, you can't just say move 5% of slots from this node to the other one (but this is pretty trivial to implement). So it starts with questions. The first is how much of a resharding do you want to do:

    How many slots do you want to move (from 1 to 16384)?

We can try to reshard 1000 hash slots, that should already contain a non trivial amount of keys if the example is still running without the sleep call.

Then redis-cli needs to know what is the target of the resharding, that is, the node that will receive the hash slots. I'll use the first master node, that is, 127.0.0.1:7000, but I need to specify the Node ID of the instance. This was already printed in a list by redis-cli, but I can always find the ID of a node with the following command if I need:

    $ redis-cli -p 7000 cluster nodes | grep myself
    97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5460

Ok so my target node is 97a3a64667477371c4479320d683e4c8db5858b1.

Now you'll get asked from what nodes you want to take those keys. I'll just type `all` in order to take a bit of hash slots from all the other master nodes.

After the final confirmation you'll see a message for every slot that redis-cli is going to move from a node to another, and a dot will be printed for every actual key moved from one side to the other.

While the resharding is in progress you should be able to see your example program running unaffected. You can stop and restart it multiple times during the resharding if you want.

At the end of the resharding, you can test the health of the cluster with the following command:

    redis-cli --cluster check 127.0.0.1:7000

All the slots will be covered as usual, but this time the master at 127.0.0.1:7000 will have more hash slots, something around 6461.

Resharding can be performed automatically without the need to manually enter the parameters in an interactive way. This is possible using a command line like the following:

    redis-cli --cluster reshard <host>:<port> --cluster-from <node-id> --cluster-to <node-id> --cluster-slots <number of slots> --cluster-yes

This allows to build some automatism if you are likely to reshard often, however currently there is no way for `redis-cli` to automatically rebalance the cluster checking the distribution of keys across the cluster nodes and intelligently moving slots as needed. This feature will be added in the future.

The `--cluster-yes` option instructs the cluster manager to automatically answer "yes" to the command's prompts, allowing it to run in a non-interactive mode. Note that this option can also be activated by setting the `REDISCLI_CLUSTER_YES` environment variable.

## A more interesting example application

The example application we wrote early is not very good. It writes to the cluster in a simple way without even checking if what was written is the right thing.

From our point of view the cluster receiving the writes could just always write the key `foo` to `42` to every operation, and we would not notice at all.

So in the redis-rb-cluster repository, there is a more interesting application that is called `consistency-test.rb`. It uses a set of counters, by default 1000, and sends `INCR` commands in order to increment the counters.

However instead of just writing, the application does two additional things:

* When a counter is updated using `INCR`, the application remembers the write.
* It also reads a random counter before every write, and check if the value is what we expected it to be, comparing it with the value it has in memory.

What this means is that this application is a simple **consistency checker**, and is able to tell you if the cluster lost some write, or if it accepted a write that we did not receive acknowledgment for. In the first case we'll see a counter having a value that is smaller than the one we remember, while in the second case the value will be greater.
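The bookkeeping behind the checker can be sketched in a few lines of Ruby against an in-memory stand-in for the cluster (a Hash, so it runs without Redis); this is an illustration of the idea, not the actual consistency-test.rb code:

```ruby
cluster  = Hash.new(0)   # stands in for the Redis Cluster
expected = Hash.new(0)   # what the application remembers in memory

incr = lambda do |key|
  cluster[key] += 1      # the INCR executed by the cluster
  expected[key] += 1     # the write the application remembers
end

# Compare a counter with the remembered value, as the checker does on reads.
check = lambda do |key|
  if cluster[key] < expected[key]
    "#{expected[key] - cluster[key]} lost"
  elsif cluster[key] > expected[key]
    "#{cluster[key] - expected[key]} unacknowledged"
  end
end

100.times { incr.call("key_217") }
cluster["key_217"] = 0       # simulate resetting a counter by hand
puts check.call("key_217")   # prints "100 lost"
```

A stored value smaller than the remembered one means lost writes; a larger one means writes the cluster executed without the application receiving the acknowledgment.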
Running the consistency test application produces a line of output every second:

    $ ruby consistency-test.rb
    925 R (0 err) | 925 W (0 err) |
    5030 R (0 err) | 5030 W (0 err) |
    9261 R (0 err) | 9261 W (0 err) |
    13517 R (0 err) | 13517 W (0 err) |
    17780 R (0 err) | 17780 W (0 err) |
    22025 R (0 err) | 22025 W (0 err) |
    25818 R (0 err) | 25818 W (0 err) |

The line shows the number of **R**eads and **W**rites performed, and the number of errors (query not accepted because of errors since the system was not available).

If some inconsistency is found, new lines are added to the output. This is what happens, for example, if I reset a counter manually while the program is running:

    $ redis-cli -h 127.0.0.1 -p 7000 set key_217 0
    OK

(in the other tab I see)

    94774 R (0 err) | 94774 W (0 err) |
    98821 R (0 err) | 98821 W (0 err) |
    102886 R (0 err) | 102886 W (0 err) | 114 lost |
    107046 R (0 err) | 107046 W (0 err) | 114 lost |

When I set the counter to 0 the real value was 114, so the program reports 114 lost writes (`INCR` commands that are not remembered by the cluster).

This program is much more interesting as a test case, so we'll use it to test the Redis Cluster failover.

## Test the failover

To trigger the failover, the simplest thing we can do (that is also the semantically simplest failure that can occur in a distributed system) is to crash a single process, in our case a single master.

During this test, you should take a tab open with the consistency test application running.

We can identify a master and crash it with the following command:

    $ redis-cli -p 7000 cluster nodes | grep master
    3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385482984082 0 connected 5960-10921
    2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 master - 0 1385482983582 0 connected 11423-16383
    97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422

Ok, so 7000, 7001 and 7002 are masters. Let's crash node 7002 with the **DEBUG SEGFAULT** command:

    $ redis-cli -p 7002 debug segfault
    Error: Server closed the connection

Now we can look at the output of the consistency test to see what it reported:

    18849 R (0 err) | 18849 W (0 err) |
    23151 R (0 err) | 23151 W (0 err) |
    27302 R (0 err) | 27302 W (0 err) |

    ... many error warnings here ...

    29659 R (578 err) | 29660 W (577 err) |
    33749 R (578 err) | 33750 W (577 err) |
    37918 R (578 err) | 37919 W (577 err) |
    42077 R (578 err) | 42078 W (577 err) |

As you can see during the failover the system was not able to accept 578 reads and 577 writes, however no inconsistency was created in the database. This may sound unexpected as in the first part of this tutorial we stated that Redis Cluster can lose writes during the failover because it uses asynchronous replication. What we did not say is that this is not very likely to happen because Redis sends the reply to the client, and the commands to replicate to the replicas, about at the same time, so there is a very small window to lose data. However the fact that it is hard to trigger does not mean that it is impossible, so this does not change the consistency guarantees provided by Redis Cluster.

We can now check what is the cluster setup after the failover (note that in the meantime I restarted the crashed instance so that it rejoins the cluster as a replica):

    $ redis-cli -p 7000 cluster nodes
    3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385503418521 0 connected
    a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385503419023 0 connected
    97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
    3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385503419023 3 connected 11423-16383
    3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385503417005 0 connected 5960-10921
    2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385503418016 3 connected

Now the masters are running on ports 7000, 7001 and 7005. What was previously a master, that is the Redis instance running on port 7002, is now a replica of 7005.

The output of the `CLUSTER NODES` command may look intimidating, but it is actually pretty simple, and is composed of the following tokens:

* Node ID
* ip:port
* flags: master, replica, myself, fail, ...
* if it is a replica, the Node ID of the master
* Time of the last pending PING still waiting for a reply
* Time of the last PONG received
* Configuration epoch for this node (see the Cluster specification)
* Status of the link to this node
* Slots served
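As a quick illustration of those fields, here is a small Ruby sketch (not part of any official client) that splits one line of `CLUSTER NODES` output into the tokens listed above:

```ruby
# Split one CLUSTER NODES line into its documented fields. A minimal sketch;
# the real format has extra flags and corner cases not handled here.
def parse_node_line(line)
  id, addr, flags, master_id, ping, pong, epoch, state, *slots = line.split
  {
    id: id, addr: addr, flags: flags.split(","),
    master_id: master_id,            # "-" when the node is itself a master
    ping_sent: ping.to_i, pong_recv: pong.to_i,
    config_epoch: epoch.to_i, link_state: state,
    slots: slots                     # empty for replicas
  }
end

line = "3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 " \
       "master - 0 1385503417005 0 connected 5960-10921"
node = parse_node_line(line)
puts node[:flags].first   # prints "master"
puts node[:slots].first   # prints "5960-10921"
```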
## Manual failover

Sometimes it is useful to force a failover without actually causing any problem on a master. For example, to upgrade the Redis process of one of the master nodes it is a good idea to failover it to turn it into a replica with minimal impact on availability.

Manual failovers are supported by Redis Cluster using the `CLUSTER FAILOVER` command, that must be executed in one of the replicas of the master you want to failover.

Manual failovers are special and are safer compared to failovers resulting from actual master failures. They occur in a way that avoids data loss in the process, by switching clients from the original master to the new master only when the system is sure that the new master processed all the replication stream from the old one.

This is what you see in the replica log when you perform a manual failover:

    # Manual failover user request accepted.
    # Received replication offset for paused master manual failover: 347540
    # All master replication stream processed, manual failover can start.
    # Start of election delayed for 0 milliseconds (rank #0, offset 347540).
    # Starting a failover election for epoch 7545.
    # Failover election won: I'm the new master.

Basically clients connected to the master we are failing over are stopped. At the same time the master sends its replication offset to the replica, that waits to reach the offset on its side. When the replication offset is reached, the failover starts, and the old master is informed about the configuration switch. When the clients are unblocked on the old master, they are redirected to the new master.

To promote a replica to master, it must first be known as a replica by a majority of the masters in the cluster. Otherwise, it cannot win the failover election. If the replica has just been added to the cluster (see [Add a new node as a replica](#add-a-new-node-as-a-replica)), you may need to wait a while before sending the `CLUSTER FAILOVER` command, to make sure the masters in cluster are aware of the new replica.

## Add a new node

Adding a new node is basically the process of adding an empty node and then moving some data into it, in case it is a new master, or telling it to setup as a replica of a known node, in case it is a replica.

We'll show both, starting with the addition of a new master instance.

In both cases the first step to perform is **adding an empty node**.

This is as simple as to start a new node in port 7006 (we already used from 7000 to 7005 for our existing 6 nodes) with the same configuration used for the other nodes, except for the port number, so what you should do in order to conform with the setup we used for the previous nodes:

* Create a new tab in your terminal application.
* Enter the `cluster-test` directory.
* Create a directory named `7006`.
* Create a redis.conf file inside, similar to the one used for the
other nodes but using 7006 as port number.
* Finally start the server with `../redis-server ./redis.conf`.

At this point the server should be running.

Now we can use **redis-cli** as usual in order to add the node to the existing cluster:

    redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000

As you can see I used the **add-node** command specifying the address of the new node as first argument, and the address of a random existing node in the cluster as second argument.

In practical terms redis-cli here did very little to help us, it just sent a `CLUSTER MEET` message to the node, something that is also possible to accomplish manually. However redis-cli also checks the state of the cluster before to operate, so it is a good idea to perform cluster operations always via redis-cli even when you know how the internals work.

Now we can connect to the new node to see if it really joined the cluster:

    redis 127.0.0.1:7006> cluster nodes
    3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385543178575 0 connected 5960-10921
    3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385543179583 0 connected
    f093c80dde814da99c5cf72a7dd01590792b783b :0 myself,master - 0 0 0 connected
    2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543178072 3 connected
    a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385543178575 0 connected
    97a3a64667477371c4479320d683e4c8db5858b1 127.0.0.1:7000 master - 0 1385543179080 0 connected 0-5959 10922-11422
    3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385543177568 3 connected 11423-16383

Note that since this node is already connected to the cluster it is already able to redirect client queries correctly and is generally speaking part of the cluster. However it has two peculiarities compared to the other masters:
1. It holds no data as it has no assigned hash slots.
2. Because it is a master without assigned slots, it does not participate in the election process when a replica wants to become a master.

Now it is possible to assign hash slots to this node using the resharding feature of `redis-cli`. It is basically useless to show this as we already did in a previous section, there is no difference, it is just a resharding having as a target the empty node.

## Add a new node as a replica

Adding a new replica can be performed in two ways. The obvious one is to use redis-cli again, but with the --cluster-slave option, like this:

    redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave

Note that the command line here is exactly like the one we used to add a new master, so we are not specifying to which master we want to add the replica. In this case, what happens is that redis-cli will add the new node as replica of a random master among the masters with fewer replicas.

However you can specify exactly what master you want to target with your new replica with the following command line:

    redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000 --cluster-slave --cluster-master-id 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e

This way we assign the new replica to a specific master.

A more manual way to add a replica to a specific master is to add the new node as an empty master, and then turn it into a replica using the `CLUSTER REPLICATE` command. This also works if the node was added as a replica but you want to move it as a replica of a different master.

For example in order to add a replica for the node 127.0.0.1:7005 that is currently serving hash slots in the range 11423-16383, that has a Node ID 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e, all I need to do is to connect with the new node (already added as empty master) and send the command:

    redis 127.0.0.1:7006> cluster replicate 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e

That's it.
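The default placement described above (a random master among the masters with fewer replicas) can be sketched like this; illustrative Ruby, not redis-cli's actual code:

```ruby
# Pick a master among those with the fewest attached replicas, approximating
# redis-cli's default replica placement when no --cluster-master-id is given.
def pick_master(replicas_per_master)
  min = replicas_per_master.values.map(&:size).min
  candidates = replicas_per_master.select { |_, r| r.size == min }.keys
  candidates.sample   # random among the least-loaded masters
end

replicas = {
  "127.0.0.1:7000" => ["127.0.0.1:7003"],
  "127.0.0.1:7001" => ["127.0.0.1:7004"],
  "127.0.0.1:7005" => ["127.0.0.1:7002", "127.0.0.1:7006"],  # two replicas
}
puts pick_master(replicas)   # prints the 7000 or 7001 master, never 7005
```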
Now we have a new replica for this set of hash slots, and all the other nodes in the cluster already know (after a few seconds needed to update their config). We can verify with the following command:

    $ redis-cli -p 7000 cluster nodes | grep slave | grep 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e
    f093c80dde814da99c5cf72a7dd01590792b783b 127.0.0.1:7006 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617702 3 connected
    2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617198 3 connected

The node 3c3a0c... now has two replicas, running on ports 7002 (the existing one) and 7006 (the new one).

## Remove a node

To remove a replica node just use the del-node command of redis-cli:

    redis-cli --cluster del-node 127.0.0.1:7000 <node-id>

The first argument is just a random node in the cluster, the second argument is the ID of the node you want to remove.

You can remove a master node in the same way as well, **however in order to remove a master node it must be empty**. If the master is not empty you need to reshard data away from it to all the other master nodes before.

An alternative to remove a master node is to perform a manual failover of it over one of its replicas and remove the node after it turned into a replica of the new master. Obviously this does not help when you want to reduce the actual number of masters in your cluster, in that case, a resharding is needed.

There is a special scenario where you want to remove a failed node. You should not use the del-node command because it tries to connect to all nodes and you will encounter a "connection refused" error. Instead, you can use the call command:

    redis-cli --cluster call 127.0.0.1:7000 cluster forget <node-id>

This command will execute `CLUSTER FORGET` command on every node.

## Replica migration

In Redis Cluster, you can reconfigure a replica to replicate with a different master at any time just using this command:
    CLUSTER REPLICATE <master-node-id>

However there is a special scenario where you want replicas to move from one master to another one automatically, without the help of the system administrator. The automatic reconfiguration of replicas is called *replicas migration* and is able to improve the reliability of a Redis Cluster.

You can read the details of replicas migration in the [Redis Cluster Specification](/topics/cluster-spec), here we'll only provide some information about the general idea and what you should do in order to benefit from it.

The reason why you may want to let your cluster replicas to move from one master to another under certain condition, is that usually the Redis Cluster is as resistant to failures as the number of replicas attached to a given master.

For example a cluster where every master has a single replica can't continue operations if the master and its replica fail at the same time, simply because there is no other instance to have a copy of the hash slots the master was serving. However while net-splits are likely to isolate a number of nodes at the same time, many other kind of failures, like hardware or software failures local to a single node, are a very notable class of failures that are unlikely to happen at the same time, so it is possible that in your cluster where every master has a replica, the replica is killed at 4am, and the master is killed at 6am. This still will result in a cluster that can no longer operate.

To improve reliability of the system we have the option to add additional replicas to every master, but this is expensive. Replica migration allows to add more replicas to just a few masters. So you have 10 masters with 1 replica each, for a total of 20 instances. However you add, for example, 3 instances more as replicas of some of your masters, so certain masters will have more than a single replica.

With replicas migration what happens is that if a master is left without replicas, a replica from a master
that has multiple replicas will migrate to the *orphaned* master. So after your replica goes down at 4am as in the example we made above, another replica will take its place, and when the master will fail as well at 5am, there is still a replica that can be elected so that the cluster can continue to operate.

So what you should know about replicas migration in short?

* The cluster will try to migrate a replica from the master that has the greatest number of replicas in a given moment.
* To benefit from replica migration you have just to add a few more replicas to a single master in your cluster, it does not matter what master.
* There is a configuration parameter that controls the replica migration feature that is called `cluster-migration-barrier`: you can read more about it in the example `redis.conf` file provided with Redis Cluster.

## Upgrade nodes in a Redis Cluster

Upgrading replica nodes is easy since you just need to stop the node and restart it with an updated version of Redis. If there are clients scaling reads using replica nodes, they should be able to reconnect to a different replica if a given one is not available.

Upgrading masters is a bit more complex, and the suggested procedure is:

1. Use `CLUSTER FAILOVER` to trigger a manual failover of the master to one of its replicas. (See the [Manual failover](#manual-failover) in this topic.)
2. Wait for the master to turn into a replica.
3. Finally upgrade the node as you do for replicas.
4. If you want the master to be the node you just upgraded, trigger a new manual failover in order to turn back the upgraded node into a master.

Following this procedure you should upgrade one node after the other until all the nodes are upgraded.
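The procedure above can be expressed as a simple plan generator. The Ruby below only emits the order of operations (the `CLUSTER FAILOVER` steps would be run via redis-cli against the named nodes); it is an illustration, not a supported tool, and assumes one replica per master for brevity:

```ruby
# Generate the rolling-upgrade order for a cluster: replicas first, then each
# master is failed over, upgraded as a replica, and optionally failed back.
def upgrade_plan(masters)   # e.g. { "127.0.0.1:7000" => "127.0.0.1:7003" }
  steps = masters.values.map { |r| "upgrade replica #{r}" }
  masters.each do |master, replica|
    steps << "run CLUSTER FAILOVER on #{replica}"   # replica becomes master
    steps << "upgrade #{master} (now a replica)"
    steps << "run CLUSTER FAILOVER on #{master}"    # optional: restore roles
  end
  steps
end

upgrade_plan("127.0.0.1:7000" => "127.0.0.1:7003").each { |s| puts s }
```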
## Migrate to Redis Cluster

Users willing to migrate to Redis Cluster may have just a single master, or may already be using a preexisting sharding setup, where keys are split among N nodes, using some in-house algorithm or a sharding algorithm implemented by their client library or Redis proxy.

In both cases it is possible to migrate to Redis Cluster easily, however what is the most important detail is if multiple-keys operations are used by the application, and how. There are three different cases:

1. Multiple keys operations, or transactions, or Lua scripts involving multiple keys, are not used. Keys are accessed independently (even if accessed via transactions or Lua scripts grouping multiple commands, about the same key, together).
2. Multiple keys operations, or transactions, or Lua scripts involving multiple keys are used but only with keys having the same **hash tag**, which means that the keys used together all have a `{...}` sub-string that happens to be identical. For example the following multiple keys operation is defined in the context of the same hash tag: `SUNION {user:1000}.foo {user:1000}.bar`.
3. Multiple keys operations, or transactions, or Lua scripts involving multiple keys are used with key names not having an explicit, or the same, hash tag.

The third case is not handled by Redis Cluster: the application requires to be modified in order to not use multi keys operations or only use them in the context of the same hash tag.

Case 1 and 2 are covered, so we'll focus on those two cases, that are handled in the same way, so no distinction will be made in the documentation.

Assuming you have your preexisting data set split into N masters, where N=1 if you have no preexisting sharding, the following steps are needed in order to migrate your data set to Redis Cluster:

1. Stop your clients. No automatic live-migration to Redis Cluster is currently possible. You may be able to do it orchestrating a live migration in the context of your application / environment.
2. Generate an append only file for all of your N masters using the `BGREWRITEAOF` command, and waiting for the AOF file to be completely generated.
3. Save your AOF files from aof-1 to aof-N somewhere. At this point you can stop your old
instances if you wish, this is useful since in non virtualized deployments you often need to reuse the same computers.
4. Create a Redis Cluster composed of N masters and zero replicas. You'll add replicas later. Make sure all your nodes are using the append only file for persistence.
5. Stop all the cluster nodes, substitute their append only file with your pre-existing append only files, aof-1 for the first node, aof-2 for the second node, up to aof-N.
6. Restart your Redis Cluster nodes with the new AOF files. They'll complain that there are keys that should not be there according to their configuration.
7. Use `redis-cli --cluster fix` command in order to fix the cluster so that keys will be migrated according to the hash slots each node is authoritative or not.
8. Use `redis-cli --cluster check` at the end to make sure your cluster is ok.
9. Restart your clients modified to use a Redis Cluster aware client library.

There is an alternative way to import data from external instances to a Redis Cluster, which is to use the `redis-cli --cluster import` command.

The command moves all the keys of a running instance (deleting the keys from the source instance) to the specified pre-existing Redis Cluster. However note that if you use a Redis 2.8 instance as source instance the operation may be slow since 2.8 does not implement migrate connection caching, so you may want to restart your source instance with a Redis 3.x version before to perform such operation.

Starting with Redis 5, if not for backward compatibility, the Redis project no longer uses the word slave. Unfortunately in this command the word slave is part of the protocol, so we'll be able to remove such occurrences only when this API will be naturally deprecated.

## Learn more

* [Redis Cluster specification](/topics/cluster-spec)
* [Linear Scaling with Redis Enterprise](https://redis.com/redis-enterprise/technology/linear-scaling-redis-enterprise/)
* [Docker documentation](https://docs.docker.com/engine/userguide/networking/dockernetworks/)
"}
{"questions":"redis topics debugging Debugging aliases title Debugging weight 10 docs reference debugging md docs reference debugging A guide to debugging Redis server processes","answers":"---\ntitle: \"Debugging\"\nlinkTitle: \"Debugging\"\nweight: 10\ndescription: >\n    A guide to debugging Redis server processes\naliases: [\n    \/topics\/debugging,\n    \/docs\/reference\/debugging,\n    \/docs\/reference\/debugging.md\n]\n---\n\nRedis is developed with an emphasis on stability. We do our best with\nevery release to make sure you'll experience a stable product with no\ncrashes. However, if you ever need to debug the Redis process itself, read on.\n\nWhen Redis crashes, it produces a detailed report of what happened. However,\nsometimes looking at the crash report is not enough, nor is it possible for\nthe Redis core team to reproduce the issue independently. In this scenario, we\nneed help from the user who can reproduce the issue.\n\nThis guide shows how to use GDB to provide the information the\nRedis developers will need to track the bug more easily.\n\n## What is GDB?\n\nGDB is the Gnu Debugger: a program that is able to inspect the internal state\nof another program. Usually tracking and fixing a bug is an exercise in\ngathering more information about the state of the program at the moment the\nbug happens, so GDB is an extremely useful tool.\n\nGDB can be used in two ways:\n\n* It can attach to a running program and inspect the state of it at runtime.\n* It can inspect the state of a program that already terminated using what is called a *core file*, that is, the image of the memory at the time the program was running.\n\nFrom the point of view of investigating Redis bugs we need to use both of these\nGDB modes. 
The user able to reproduce the bug attaches GDB to their running Redis\ninstance, and when the crash happens, they create the `core` file that in turn\nthe developer will use to inspect the Redis internals at the time of the crash.\n\nThis way the developer can perform all the inspections in his or her computer\nwithout the help of the user, and the user is free to restart Redis in their\nproduction environment.\n\n## Compiling Redis without optimizations\n\nBy default Redis is compiled with the `-O2` switch, this means that compiler\noptimizations are enabled. This makes the Redis executable faster, but at the\nsame time it makes Redis (like any other program) harder to inspect using GDB.\n\nIt is better to attach GDB to Redis compiled without optimizations using the\n`make noopt` command (instead of just using the plain `make` command). However,\nif you have an already running Redis in production there is no need to recompile\nand restart it if this is going to create problems on your side. GDB still works\nagainst executables compiled with optimizations.\n\nYou should not be overly concerned at the loss of performance from compiling Redis\nwithout optimizations. It is unlikely that this will cause problems in your\nenvironment as Redis is not very CPU-bound.\n\n## Attaching GDB to a running process\n\nIf you have an already running Redis server, you can attach GDB to it, so that\nif Redis crashes it will be possible to both inspect the internals and generate\na `core dump` file.\n\nAfter you attach GDB to the Redis process it will continue running as usual without\nany loss of performance, so this is not a dangerous procedure.\n\nIn order to attach GDB the first thing you need is the *process ID* of the running\nRedis instance (the *pid* of the process). 
You can easily obtain it using\n`redis-cli`:\n\n    $ redis-cli info | grep process_id\n    process_id:58414\n\nIn the above example the process ID is **58414**.\n\nLogin into your Redis server.\n\n(Optional but recommended) Start **screen** or **tmux** or any other program that will make sure that your GDB session will not be closed if your ssh connection times out. You can learn more about screen in [this article](http:\/\/www.linuxjournal.com\/article\/6340).\n\nAttach GDB to the running Redis server by typing:\n\n    $ gdb <path-to-redis-executable> <pid>\n\nFor example:\n\n    $ gdb \/usr\/local\/bin\/redis-server 58414\n\nGDB will start and will attach to the running server printing something like the following:\n\n    Reading symbols for shared libraries + done\n    0x00007fff8d4797e6 in epoll_wait ()\n    (gdb)\n\nAt this point GDB is attached but **your Redis instance is blocked by GDB**. In\norder to let the Redis instance continue the execution just type **continue** at\nthe GDB prompt, and press enter.\n\n    (gdb) continue\n    Continuing.\n\nDone! Now your Redis instance has GDB attached. Now you can wait for the next crash. :)\n\nNow it's time to detach your screen\/tmux session, if you are running GDB using it, by\npressing **Ctrl-a a** key combination.\n\n## After the crash\n\nRedis has a command to simulate a segmentation fault (in other words a bad crash) using\nthe `DEBUG SEGFAULT` command (don't use it against a real production instance of course!).\nSo I'll use this command to crash my instance to show what happens in the GDB side:\n\n    (gdb) continue\n    Continuing.\n\n    Program received signal EXC_BAD_ACCESS, Could not access memory.\n    Reason: KERN_INVALID_ADDRESS at address: 0xffffffffffffffff\n    debugCommand (c=0x7ffc32005000) at debug.c:220\n    220         *((char*)-1) = 'x';\n\nAs you can see GDB detected that Redis crashed, and was even able to show me\nthe file name and line number causing the crash. 
This is already much better\nthan the Redis crash report back trace (containing just function names and\nbinary offsets).\n\n## Obtaining the stack trace\n\nThe first thing to do is to obtain a full stack trace with GDB. This is as\nsimple as using the **bt** command:\n\n    (gdb) bt\n    #0  debugCommand (c=0x7ffc32005000) at debug.c:220\n    #1  0x000000010d246d63 in call (c=0x7ffc32005000) at redis.c:1163\n    #2  0x000000010d247290 in processCommand (c=0x7ffc32005000) at redis.c:1305\n    #3  0x000000010d251660 in processInputBuffer (c=0x7ffc32005000) at networking.c:959\n    #4  0x000000010d251872 in readQueryFromClient (el=0x0, fd=5, privdata=0x7fff76f1c0b0, mask=220924512) at networking.c:1021\n    #5  0x000000010d243523 in aeProcessEvents (eventLoop=0x7fff6ce408d0, flags=220829559) at ae.c:352\n    #6  0x000000010d24373b in aeMain (eventLoop=0x10d429ef0) at ae.c:397\n    #7  0x000000010d2494ff in main (argc=1, argv=0x10d2b2900) at redis.c:2046\n\nThis shows the backtrace, but we also want to dump the processor registers using the **info registers** command:\n\n    (gdb) info registers\n    rax            0x0  0\n    rbx            0x7ffc32005000   140721147367424\n    rcx            0x10d2b0a60  4515891808\n    rdx            0x7fff76f1c0b0   140735188943024\n    rsi            0x10d299777  4515796855\n    rdi            0x0  0\n    rbp            0x7fff6ce40730   0x7fff6ce40730\n    rsp            0x7fff6ce40650   0x7fff6ce40650\n    r8             0x4f26b3f7   1327936503\n    r9             0x7fff6ce40718   140735020271384\n    r10            0x81 129\n    r11            0x10d430398  4517462936\n    r12            0x4b7c04f8babc0  1327936503000000\n    r13            0x10d3350a0  4516434080\n    r14            0x10d42d9f0  4517452272\n    r15            0x10d430398  4517462936\n    rip            0x10d26cfd4  0x10d26cfd4 <debugCommand+68>\n    eflags         0x10246  66118\n    cs             0x2b 43\n    ss             0x0  0\n    ds             0x0  0\n 
   es             0x0  0\n    fs             0x0  0\n    gs             0x0  0\n\nPlease **make sure to include** both of these outputs in your bug report.\n\n## Obtaining the core file\n\nThe next step is to generate the core dump, that is the image of the memory of the running Redis process. This is done using the `gcore` command:\n\n    (gdb) gcore\n    Saved corefile core.58414\n\nNow you have the core dump to send to the Redis developer, but **it is important\nto understand** that this happens to contain all the data that was inside the\nRedis instance at the time of the crash; Redis developers will make sure not to\nshare the content with anyone else, and will delete the file as soon as it is no\nlonger used for debugging purposes, but you are warned that by sending the core\nfile you are sending your data.\n\n## What to send to developers\n\nFinally you can send everything to the Redis core team:\n\n* The Redis executable you are using.\n* The stack trace produced by the **bt** command, and the registers dump.\n* The core file you generated with gdb.\n* Information about the operating system and GCC version, and Redis version you are using.\n\n## Thank you\n\nYour help is extremely important! Many issues can only be tracked this way. 
So\nthanks!","site":"redis"}
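The interactive GDB session described above can also be scripted end to end. As a minimal sketch (assuming `gdb` is installed and the example pid `58414` is replaced with your real Redis pid), a helper can assemble the batch-mode invocation that collects the same backtrace, register dump, and core file in one shot:

```python
# Sketch: build a non-interactive gdb invocation gathering the same
# artifacts the guide collects by hand (bt, info registers, gcore).
# 58414 is the example pid from the text; substitute your own.
import shlex

def gdb_collect_argv(pid: int) -> list[str]:
    """argv for a batch gdb session attached to `pid`."""
    return [
        "gdb", "-p", str(pid), "-batch",
        "-ex", "bt",              # full stack trace
        "-ex", "info registers",  # processor registers
        "-ex", "gcore",           # writes core.<pid> in the cwd
    ]

# Print a copy-pasteable shell command (this does not run gdb itself).
print(shlex.join(gdb_collect_argv(58414)))
```

Redirecting the command's output to a file gives you a single report to attach to the bug, alongside the core file `gcore` writes.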
{"questions":"redis topics acl aliases Redis Access Control List title ACL ACL docs manual security acl md docs manual security acl weight 1","answers":"---\ntitle: \"ACL\"\nlinkTitle: \"ACL\"\nweight: 1\ndescription: Redis Access Control List\naliases: [\n    \/topics\/acl,\n    \/docs\/manual\/security\/acl,\n    \/docs\/manual\/security\/acl.md\n]\n---\n\nThe Redis ACL, short for Access Control List, is the feature that allows certain\nconnections to be limited in terms of the commands that can be executed and the\nkeys that can be accessed. The way it works is that, after connecting, a client\nis required to provide a username and a valid password to authenticate. If authentication succeeds, the connection is associated with a given\nuser and the limits that user has. Redis can be configured so that new\nconnections are already authenticated with a \"default\" user (this is the\ndefault configuration). As a side effect, configuring the default user makes it\npossible to expose only a specific subset of functionality to connections\nthat are not explicitly authenticated.\n\nIn the default configuration, Redis 6 (the first version to have ACLs) works\nexactly like older versions of Redis. Every new connection is\ncapable of calling every possible command and accessing every key, so the\nACL feature is backward compatible with old clients and applications. Also\nthe old way to configure a password, using the **requirepass** configuration\ndirective, still works as expected. However, it now\nsets a password for the default user.\n\nThe Redis `AUTH` command was extended in Redis 6, so now it is possible to\nuse it in the two-argument form:\n\n    AUTH <username> <password>\n\nHere's an example of the old form:\n\n    AUTH <password>\n\nWhat happens is that the username used to authenticate is \"default\", so\njust specifying the password implies that we want to authenticate against\nthe default user. 
This provides backward compatibility.\n\n## When ACLs are useful\n\nBefore using ACLs, you may want to ask yourself what's the goal you want to\naccomplish by implementing this layer of protection. Normally there are\ntwo main goals that are well served by ACLs:\n\n1. You want to improve security by restricting the access to commands and keys, so that untrusted clients have no access and trusted clients have just the minimum access level to the database in order to perform the work needed. For instance, certain clients may just be able to execute read only commands.\n2. You want to improve operational safety, so that processes or humans accessing Redis are not allowed to damage the data or the configuration due to software errors or manual mistakes. For instance, there is no reason for a worker that fetches delayed jobs from Redis to be able to call the `FLUSHALL` command.\n\nAnother typical usage of ACLs is related to managed Redis instances. Redis is\noften provided as a managed service both by internal company teams that handle\nthe Redis infrastructure for the other internal customers they have, or is\nprovided in a software-as-a-service setup by cloud providers. In both \nsetups, we want to be sure that configuration commands are excluded for the\ncustomers.\n\n## Configure ACLs with the ACL command\n\nACLs are defined using a DSL (domain specific language) that describes what\na given user is allowed to do. Such rules are always implemented from the\nfirst to the last, left-to-right, because sometimes the order of the rules is\nimportant to understand what the user is really able to do.\n\nBy default there is a single user defined, called *default*. 
We\ncan use the `ACL LIST` command in order to check the currently active ACLs\nand verify what the configuration of a freshly started, defaults-configured\nRedis instance is:\n\n    > ACL LIST\n    1) \"user default on nopass ~* &* +@all\"\n\nThe command above reports the list of users in the same format that is\nused in the Redis configuration files, by translating the current ACLs set\nfor the users back into their description.\n\nThe first two words in each line are \"user\" followed by the username. The\nnext words are ACL rules that describe different things. We'll show how the rules work in detail, but for now it is enough to say that the default\nuser is configured to be active (on), to require no password (nopass), to\naccess every possible key (`~*`) and Pub\/Sub channel (`&*`), and be able to\ncall every possible command (`+@all`).\n\nAlso, in the special case of the default user, having the *nopass* rule means\nthat new connections are automatically authenticated with the default user\nwithout any explicit `AUTH` call needed.\n\n## ACL rules\n\nThe following is the list of valid ACL rules. Certain rules are just\nsingle words that are used in order to activate or remove a flag, or to\nperform a given change to the user ACL. Other rules are char prefixes that\nare concatenated with command or category names, key patterns, and\nso forth.\n\nEnable and disallow users:\n\n* `on`: Enable the user: it is possible to authenticate as this user.\n* `off`: Disallow the user: it's no longer possible to authenticate with this user; however, previously authenticated connections will still work. Note that if the default user is flagged as *off*, new connections will start as not authenticated and will require the user to send `AUTH` or `HELLO` with the AUTH option in order to authenticate in some way, regardless of the default user configuration.\n\nAllow and disallow commands:\n\n* `+<command>`: Add the command to the list of commands the user can call. 
Can be used with `|` for allowing subcommands (e.g. \"+config|get\").\n* `-<command>`: Remove the command from the list of commands the user can call. Starting with Redis 7.0, it can be used with `|` for blocking subcommands (e.g. \"-config|set\").\n* `+@<category>`: Add all the commands in the given category to the list of commands the user can call. Valid categories include @admin, @set, @sortedset, and so forth; see the full list by calling the `ACL CAT` command. The special category @all means all the commands, both the ones currently present in the server, and the ones that will be loaded in the future via modules.\n* `-@<category>`: Like `+@<category>` but removes the commands from the list of commands the client can call.\n* `+<command>|first-arg`: Allow a specific first argument of an otherwise disabled command. It is only supported on commands with no sub-commands, and is not allowed in negative form like -SELECT|1, only additive starting with \"+\". This feature is deprecated and may be removed in the future.\n* `allcommands`: Alias for +@all. Note that it implies the ability to execute all the future commands loaded via the modules system.\n* `nocommands`: Alias for -@all.\n\nAllow and disallow certain keys and key permissions:\n\n* `~<pattern>`: Add a pattern of keys that can be mentioned as part of commands. For instance `~*` allows all the keys. The pattern is a glob-style pattern like the one of `KEYS`. It is possible to specify multiple patterns.\n* `%R~<pattern>`: (Available in Redis 7.0 and later) Add the specified read key pattern. This behaves similarly to the regular key pattern but only grants permission to read from keys that match the given pattern. See [key permissions](#key-permissions) for more information.\n* `%W~<pattern>`: (Available in Redis 7.0 and later) Add the specified write key pattern. This behaves similarly to the regular key pattern but only grants permission to write to keys that match the given pattern. 
See [key permissions](#key-permissions) for more information.\n* `%RW~<pattern>`: (Available in Redis 7.0 and later) Alias for `~<pattern>`. \n* `allkeys`: Alias for `~*`.\n* `resetkeys`: Flush the list of allowed keys patterns. For instance the ACL `~foo:* ~bar:* resetkeys ~objects:*`, will only allow the client to access keys that match the pattern `objects:*`.\n\nAllow and disallow Pub\/Sub channels:\n\n* `&<pattern>`: (Available in Redis 6.2 and later) Add a glob style pattern of Pub\/Sub channels that can be accessed by the user. It is possible to specify multiple channel patterns. Note that pattern matching is done only for channels mentioned by `PUBLISH` and `SUBSCRIBE`, whereas `PSUBSCRIBE` requires a literal match between its channel patterns and those allowed for user.\n* `allchannels`: Alias for `&*` that allows the user to access all Pub\/Sub channels.\n* `resetchannels`: Flush the list of allowed channel patterns and disconnect the user's Pub\/Sub clients if these are no longer able to access their respective channels and\/or channel patterns.\n\nConfigure valid passwords for the user:\n\n* `><password>`: Add this password to the list of valid passwords for the user. For example `>mypass` will add \"mypass\" to the list of valid passwords.  This directive clears the *nopass* flag (see later). Every user can have any number of passwords.\n* `<<password>`: Remove this password from the list of valid passwords. Emits an error in case the password you are trying to remove is actually not set.\n* `#<hash>`: Add this SHA-256 hash value to the list of valid passwords for the user. This hash value will be compared to the hash of a password entered for an ACL user. This allows users to store hashes in the `acl.conf` file rather than storing cleartext passwords. Only SHA-256 hash values are accepted as the password hash must be 64 characters and only contain lowercase hexadecimal characters.\n* `!<hash>`: Remove this hash value from the list of valid passwords. 
This is useful when you do not know the password specified by the hash value but would like to remove the password from the user.\n* `nopass`: All the set passwords of the user are removed, and the user is flagged as requiring no password: it means that every password will work against this user. If this directive is used for the default user, every new connection will be immediately authenticated with the default user without any explicit AUTH command required. Note that the *resetpass* directive will clear this condition.\n* `resetpass`: Flushes the list of allowed passwords and removes the *nopass* status. After *resetpass*, the user has no associated passwords and there is no way to authenticate without adding some password (or setting it as *nopass* later).\n\n*Note: if a user is not flagged with nopass and has no list of valid passwords, that user is effectively impossible to use because there will be no way to log in as that user.*\n\nConfigure selectors for the user:\n\n* `(<rule list>)`: (Available in Redis 7.0 and later) Create a new selector to match rules against. Selectors are evaluated after the user permissions, and are evaluated according to the order they are defined. If a command matches either the user permissions or any selector, it is allowed. See [selectors](#selectors) for more information.\n* `clearselectors`: (Available in Redis 7.0 and later) Delete all of the selectors attached to the user.\n\nReset the user:\n\n* `reset` Performs the following actions: resetpass, resetkeys, resetchannels, allchannels (if acl-pubsub-default is set), off, clearselectors, -@all. The user returns to the same state it had immediately after its creation.\n\n## Create and edit user ACLs with the ACL SETUSER command\n\nUsers can be created and modified in two main ways:\n\n1. Using the ACL command and its `ACL SETUSER` subcommand.\n2. Modifying the server configuration, where users can be defined, and restarting the server. 
With an *external ACL file*, just call `ACL LOAD`.\n\nIn this section we'll learn how to define users using the `ACL` command.\nWith such knowledge, it will be trivial to do the same things via the\nconfiguration files. Defining users in the configuration deserves its own\nsection and will be discussed later separately.\n\nTo start, try the simplest `ACL SETUSER` command call:\n\n    > ACL SETUSER alice\n    OK\n\nThe `ACL SETUSER` command takes the username and a list of ACL rules to apply\nto the user. However, the above example did not specify any rule at all.\nThis will just create the user if it did not exist, using the defaults for new\nusers. If the user already exists, the command above will do nothing at all.\n\nCheck the new user's status:\n\n    > ACL LIST\n    1) \"user alice off resetchannels -@all\"\n    2) \"user default on nopass ~* &* +@all\"\n\nThe new user \"alice\" is:\n\n* In the off status, so `AUTH` will not work for the user \"alice\".\n* The user also has no passwords set.\n* Cannot access any command. Note that the user is created by default without the ability to access any command, so the `-@all` in the output above could be omitted; however, `ACL LIST` attempts to be explicit rather than implicit.\n* There are no key patterns that the user can access.\n* There are no Pub\/Sub channels that the user can access.\n\nNew users are created with restrictive permissions by default. Starting with Redis 6.2, ACL provides Pub\/Sub channels access management as well. To ensure backward compatibility with version 6.0 when upgrading to Redis 6.2, new users are granted the 'allchannels' permission by default. 
The default can be set to `resetchannels` via the `acl-pubsub-default` configuration directive.\n\nFrom Redis 7.0, the `acl-pubsub-default` value is set to `resetchannels` to restrict channel access by default, providing better security.\nThe default can be set to `allchannels` via the `acl-pubsub-default` configuration directive to be compatible with previous versions.\n\nSuch a user is completely useless. Let's try to define the user so that\nit is active, has a password, and can use only the `GET` command,\nand only on key names starting with the string \"cached:\".\n\n    > ACL SETUSER alice on >p1pp0 ~cached:* +get\n    OK\n\nNow the user can do something, but will refuse to do other things:\n\n    > AUTH alice p1pp0\n    OK\n    > GET foo\n    (error) NOPERM this user has no permissions to access one of the keys used as arguments\n    > GET cached:1234\n    (nil)\n    > SET cached:1234 zap\n    (error) NOPERM this user has no permissions to run the 'set' command\n\nThings are working as expected. In order to inspect the configuration of the\nuser alice (remember that user names are case sensitive), it is possible to\nuse `ACL GETUSER`, an alternative to `ACL LIST` whose output is designed to be\nmore suitable for programs to read, while remaining reasonably human readable.\n\n    > ACL GETUSER alice\n    1) \"flags\"\n    2) 1) \"on\"\n    3) \"passwords\"\n    4) 1) \"2d9c75...\"\n    5) \"commands\"\n    6) \"-@all +get\"\n    7) \"keys\"\n    8) \"~cached:*\"\n    9) \"channels\"\n    10) \"\"\n    11) \"selectors\"\n    12) (empty array)\n\n`ACL GETUSER` returns a field-value array that describes the user in more parsable terms. The output includes the set of flags, a list of key patterns, passwords, and so forth. 
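Since the RESP2 reply is a flat array of alternating field names and values, a client can pair the elements to recover a map. A minimal sketch, assuming a client library that hands back the reply as a flat Python list (the sample data is hand-copied from the output above, password hash abbreviated):

```python
# Sketch: pair up the flat field-value reply of ACL GETUSER (RESP2)
# into a dict. The reply mirrors the example output above; "2d9c75..."
# is the abbreviated password hash shown there.
reply = [
    "flags", ["on"],
    "passwords", ["2d9c75..."],
    "commands", "-@all +get",
    "keys", "~cached:*",
    "channels", "",
    "selectors", [],
]

# Elements at even indexes are field names, odd indexes are values.
user = dict(zip(reply[::2], reply[1::2]))
print(user["commands"])  # -@all +get
```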
The output is probably more readable if we use RESP3, so that it is returned as a map reply:\n\n    > ACL GETUSER alice\n    1# \"flags\" => 1~ \"on\"\n    2# \"passwords\" => 1) \"2d9c75273d72b32df726fb545c8a4edc719f0a95a6fd993950b10c474ad9c927\"\n    3# \"commands\" => \"-@all +get\"\n    4# \"keys\" => \"~cached:*\"\n    5# \"channels\" => \"\"\n    6# \"selectors\" => (empty array)\n\n*Note: from now on, we'll continue using the Redis default protocol, version 2*\n\nUsing another `ACL SETUSER` command (from a different user, because alice cannot run the `ACL` command), we can add multiple patterns to the user:\n\n    > ACL SETUSER alice ~objects:* ~items:* ~public:*\n    OK\n    > ACL LIST\n    1) \"user alice on #2d9c75... ~cached:* ~objects:* ~items:* ~public:* resetchannels -@all +get\"\n    2) \"user default on nopass ~* &* +@all\"\n\nThe user representation in memory is now as we expect it to be.\n\n## Multiple calls to ACL SETUSER\n\nIt is very important to understand what happens when `ACL SETUSER` is called\nmultiple times. What is critical to know is that every `ACL SETUSER` call will\nNOT reset the user, but will just apply the ACL rules to the existing user.\nThe user is reset only if it was not known before. In that case, a brand new\nuser is created with zeroed-ACLs. The user cannot do anything, is\ndisallowed, has no passwords, and so forth. This is the best default for safety.\n\nHowever later calls will just modify the user incrementally. 
For instance,\nthe following sequence:\n\n    > ACL SETUSER myuser +set\n    OK\n    > ACL SETUSER myuser +get\n    OK\n\nWill result in myuser being able to call both `GET` and `SET`:\n\n    > ACL LIST\n    1) \"user default on nopass ~* &* +@all\"\n    2) \"user myuser off resetchannels -@all +get +set\"\n\n## Command categories\n\nSetting user ACLs by specifying all the commands one after the other is\nreally annoying, so instead we do things like this:\n\n    > ACL SETUSER antirez on +@all -@dangerous >42a979... ~*\n\nBy saying +@all and -@dangerous, we included all the commands and later removed\nall the commands that are tagged as dangerous inside the Redis command table.\nNote that command categories **never include modules commands** with\nthe exception of +@all. If you say +@all, all the commands can be executed by\nthe user, even future commands loaded via the modules system. However, if you\nuse the ACL rule +@read or any other, the modules commands are always\nexcluded. This is very important because you should just trust the Redis\ninternal command table. Modules may expose dangerous things, so in\nthe case of an ACL that is just additive, that is, in the form of `+@all -...`,\nyou should be absolutely sure that you'll never include what you did not mean\nto.\n\nThe following is a list of command categories and their meanings:\n\n* **admin** - Administrative commands. Normal applications will never need to use\n  these. Includes `REPLICAOF`, `CONFIG`, `DEBUG`, `SAVE`, `MONITOR`, `ACL`, `SHUTDOWN`, etc.\n* **bitmap** - Data type: bitmaps related.\n* **blocking** - Potentially blocking the connection until released by another\n  command.\n* **connection** - Commands affecting the connection or other connections.\n  This includes `AUTH`, `SELECT`, `COMMAND`, `CLIENT`, `ECHO`, `PING`, etc.\n* **dangerous** - Potentially dangerous commands (each should be considered with care for\n  various reasons). 
This includes `FLUSHALL`, `MIGRATE`, `RESTORE`, `SORT`, `KEYS`,\n  `CLIENT`, `DEBUG`, `INFO`, `CONFIG`, `SAVE`, `REPLICAOF`, etc.\n* **geo** - Data type: geospatial indexes related.\n* **hash** - Data type: hashes related.\n* **hyperloglog** - Data type: hyperloglog related.\n* **fast** - Fast O(1) commands. May loop on the number of arguments, but not the\n  number of elements in the key.\n* **keyspace** - Writing or reading from keys, databases, or their metadata \n  in a type agnostic way. Includes `DEL`, `RESTORE`, `DUMP`, `RENAME`, `EXISTS`, `DBSIZE`,\n  `KEYS`, `EXPIRE`, `TTL`, `FLUSHALL`, etc. Commands that may modify the keyspace,\n  key, or metadata will also have the `write` category. Commands that only read\n  the keyspace, key, or metadata will have the `read` category.\n* **list** - Data type: lists related.\n* **pubsub** - PubSub-related commands.\n* **read** - Reading from keys (values or metadata). Note that commands that don't\n  interact with keys, will not have either `read` or `write`.\n* **scripting** - Scripting related.\n* **set** - Data type: sets related.\n* **sortedset** - Data type: sorted sets related.\n* **slow** - All commands that are not `fast`.\n* **stream** - Data type: streams related.\n* **string** - Data type: strings related.\n* **transaction** - `WATCH` \/ `MULTI` \/ `EXEC` related commands.\n* **write** - Writing to keys (values or metadata).\n\nRedis can also show you a list of all categories and the exact commands each category includes using the Redis `ACL CAT` command. 
It can be used in two forms:\n\n    ACL CAT -- Will just list all the categories available\n    ACL CAT <category-name> -- Will list all the commands inside the category\n\nExamples:\n\n     > ACL CAT\n     1) \"keyspace\"\n     2) \"read\"\n     3) \"write\"\n     4) \"set\"\n     5) \"sortedset\"\n     6) \"list\"\n     7) \"hash\"\n     8) \"string\"\n     9) \"bitmap\"\n    10) \"hyperloglog\"\n    11) \"geo\"\n    12) \"stream\"\n    13) \"pubsub\"\n    14) \"admin\"\n    15) \"fast\"\n    16) \"slow\"\n    17) \"blocking\"\n    18) \"dangerous\"\n    19) \"connection\"\n    20) \"transaction\"\n    21) \"scripting\"\n\nAs you can see, so far there are 21 distinct categories. Now let's check what\ncommand is part of the *geo* category:\n\n     > ACL CAT geo\n     1) \"geohash\"\n     2) \"georadius_ro\"\n     3) \"georadiusbymember\"\n     4) \"geopos\"\n     5) \"geoadd\"\n     6) \"georadiusbymember_ro\"\n     7) \"geodist\"\n     8) \"georadius\"\n     9) \"geosearch\"\n    10) \"geosearchstore\"\n\nNote that commands may be part of multiple categories. For example, an\nACL rule like `+@geo -@read` will result in certain geo commands to be\nexcluded because they are read-only commands.\n\n## Allow\/block subcommands\n\nStarting from Redis 7.0, subcommands can be allowed\/blocked just like other\ncommands (by using the separator `|` between the command and subcommand, for\nexample: `+config|get` or `-config|set`)\n\nThat is true for all commands except DEBUG. 
In order to allow\/block specific\nDEBUG subcommands, see the next section.\n\n## Allow the first-arg of a blocked command\n\n**Note: This feature is deprecated since Redis 7.0 and may be removed in the future.**\n\nSometimes the ability to exclude or include a command or a subcommand as a whole is not enough.\nMany deployments may not be happy providing the ability to execute a `SELECT` for any DB, but may\nstill want to be able to run `SELECT 0`.\n\nIn such a case we could alter the ACL of a user in the following way:\n\n    ACL SETUSER myuser -select +select|0\n\nFirst, remove the `SELECT` command and then add the allowed\nfirst-arg. Note that **it is not possible to do the reverse** since first-args\ncan only be added, not excluded. It is safer to specify all the first-args\nthat are valid for some user since it is possible that\nnew first-args may be added in the future.\n\nAnother example:\n\n    ACL SETUSER myuser -debug +debug|digest\n\nNote that first-arg matching may add some performance penalty; however, it is hard to measure even with synthetic benchmarks. 
The\nadditional CPU cost is only paid when such commands are called, and not when\nother commands are called.\n\nIt is possible to use this mechanism in order to allow subcommands in Redis\nversions prior to 7.0 (see above section).\n\n## +@all VS -@all\n\nIn the previous section, it was observed how it is possible to define command\nACLs based on adding\/removing single commands.\n\n## Selectors\n\nStarting with Redis 7.0, Redis supports adding multiple sets of rules that are evaluated independently of each other.\nThese secondary sets of permissions are called selectors and added by wrapping a set of rules within parentheses.\nIn order to execute a command, either the root permissions (rules defined outside of parenthesis) or any of the selectors (rules defined inside parenthesis) must match the given command.\nInternally, the root permissions are checked first followed by selectors in the order they were added.\n\nFor example, consider a user with the ACL rules `+GET ~key1 (+SET ~key2)`.\nThis user is able to execute `GET key1` and `SET key2 hello`, but not `GET key2` or `SET key1 world`.\n\nUnlike the user's root permissions, selectors cannot be modified after they are added.\nInstead, selectors can be removed with the `clearselectors` keyword, which removes all of the added selectors.\nNote that `clearselectors` does not remove the root permissions.\n\n## Key permissions\n\nStarting with Redis 7.0, key patterns can also be used to define how a command is able to touch a key.\nThis is achieved through rules that define key permissions.\nThe key permission rules take the form of `%(<permission>)~<pattern>`.\nPermissions are defined as individual characters that map to the following key permissions:\n\n* W (Write): The data stored within the key may be updated or deleted. \n* R (Read): User supplied data from the key is processed, copied or returned. 
Note that this does not include metadata such as size information (example `STRLEN`), type information (example `TYPE`), or information about whether a value exists within a collection (example `SISMEMBER`).

Permissions can be composed together by specifying multiple characters.
Specifying the permission as 'RW' is considered full access and is analogous to just passing in `~<pattern>`.

For a concrete example, consider a user with ACL rules `+@all ~app1:* (+@read ~app2:*)`.
This user has full access on `app1:*` and readonly access on `app2:*`.
However, some commands support reading data from one key, doing some transformation, and storing it into another key.
One such command is the `COPY` command, which copies the data from the source key into the destination key.
The example set of ACL rules is unable to handle a request copying data from `app2:user` into `app1:user`, since neither the root permission nor the selector fully matches the command.
However, using key selectors you can define a set of ACL rules that can handle this request: `+@all ~app1:* %R~app2:*`.
The first pattern is able to match `app1:user` and the second pattern is able to match `app2:user`.

Which type of permission is required for a command is documented through [key specifications](/topics/key-specs#logical-operation-flags).
The type of permission is based off the key's logical operation flags.
The insert, update, and delete flags map to the write key permission.
The access flag maps to the read key permission.
If the key has no logical operation flags, such as `EXISTS`, the user still needs either key read or key write permissions to execute the command.
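As an illustration of how these rules compose, the following is a small Python sketch, an approximation rather than the Redis implementation (`grants` and `key_allowed` are hypothetical names), modeling the `+@all ~app1:* %R~app2:*` example with glob matching:

```python
from fnmatch import fnmatchcase

# Approximate model of the grants in '+@all ~app1:* %R~app2:*'
# (hypothetical helper, not Redis source): each grant pairs a
# permission set with a glob pattern; a bare '~pattern' means 'RW'.
grants = [
    ({"R", "W"}, "app1:*"),  # ~app1:*
    ({"R"}, "app2:*"),       # %R~app2:*
]

def key_allowed(key: str, needed: set) -> bool:
    """Allow the access if every needed permission ('R' and/or 'W')
    is granted by at least one pattern that matches the key."""
    granted = set()
    for perms, pattern in grants:
        if fnmatchcase(key, pattern):
            granted |= perms
    return needed <= granted

# COPY app2:user app1:user reads the source and writes the destination:
print(key_allowed("app2:user", {"R"}))  # True  (%R~app2:* grants read)
print(key_allowed("app1:user", {"W"}))  # True  (~app1:* grants RW)
print(key_allowed("app2:user", {"W"}))  # False (app2:* is read-only)
```

Note that `fnmatchcase` only approximates Redis's glob-style matching; the point is that both key accesses of `COPY` must be covered by some grant for the command to run.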
Note: Side channels for accessing user data are ignored when it comes to evaluating whether read permissions are required to execute a command.
This means that some write commands that return metadata about the modified key only require write permission on the key to execute.
For example, consider the following two commands:

* `LPUSH key1 data`: modifies "key1" but only returns metadata about it (the size of the list after the push), so the command only requires write permission on "key1" to execute.
* `LPOP key2`: modifies "key2" but also returns data from it (the leftmost item in the list), so the command requires both read and write permission on "key2" to execute.

If an application needs to make sure no data is accessed from a key, including side channels, it's recommended to not provide any access to the key.

## How passwords are stored internally

Redis internally stores passwords hashed with SHA256. If you set a password
and check the output of `ACL LIST` or `ACL GETUSER`, you'll see a long hex
string that looks pseudo-random. Here is an example, because in the previous
examples, for the sake of brevity, the long hex string was trimmed:

    > ACL GETUSER default
    1) "flags"
    2) 1) "on"
    3) "passwords"
    4) 1) "2d9c75273d72b32df726fb545c8a4edc719f0a95a6fd993950b10c474ad9c927"
    5) "commands"
    6) "+@all"
    7) "keys"
    8) "~*"
    9) "channels"
    10) "&*"
    11) "selectors"
    12) (empty array)

Using SHA256 provides the ability to avoid storing the password in clear text
while still allowing for a very fast `AUTH` command, which is a very important
feature of Redis and is coherent with what clients expect from Redis.

However, ACL *passwords* are not really passwords. They are shared secrets
between the server and the client, because the password is
not an authentication token used by a human being.
For instance:

* There are no length limits; the password will just be memorized in some client software. There is no human that needs to recall a password in this context.
* The ACL password does not protect anything else. For example, it will never be the password for some email account.
* Often when you are able to access the hashed password itself, by having full access to the Redis commands of a given server, or corrupting the system itself, you already have access to what the password is protecting: the Redis instance stability and the data it contains.

For this reason, slowing down the password authentication, in order to use an
algorithm that uses time and space to make password cracking hard,
is a very poor choice. What we suggest instead is to generate strong
passwords, so that nobody will be able to crack them using a
dictionary or a brute force attack even if they have the hash. To do so, there is a special ACL
command `ACL GENPASS` that generates passwords using the system cryptographic pseudorandom
generator:

    > ACL GENPASS
    "dd721260bfe1b3d9601e7fbab36de6d04e2e67b0ef1c53de59d45950db0dd3cc"

The command outputs a 32-byte (256-bit) pseudorandom string converted to a
64-byte alphanumerical string. This is long enough to avoid attacks and short
enough to be easy to manage, cut & paste, store, and so forth. This is what
you should use in order to generate Redis passwords.

## Use an external ACL file

There are two ways to store users inside the Redis configuration:

1. Users can be specified directly inside the `redis.conf` file.
2. It is possible to specify an external ACL file.

The two methods are *mutually incompatible*, so Redis will ask you to use one
or the other. Specifying users inside `redis.conf` is
good for simple use cases.
When there are multiple users to define, in a
complex environment, we recommend you use the ACL file instead.

The format used inside `redis.conf` and in the external ACL file is exactly
the same, so it is trivial to switch from one to the other, and is
the following:

    user <username> ... acl rules ...

For instance:

    user worker +@list +@connection ~jobs:* on >ffa9203c493aa99

When you want to use an external ACL file, you are required to specify
the configuration directive called `aclfile`, like this:

    aclfile /etc/redis/users.acl

When you are just specifying a few users directly inside the `redis.conf`
file, you can use `CONFIG REWRITE` in order to store the new user configuration
inside the file by rewriting it.

The external ACL file, however, is more powerful. You can do the following:

* Use `ACL LOAD` if you modified the ACL file manually and you want Redis to reload the new configuration. Note that this command is able to load the file *only if all the users are correctly specified*. Otherwise, an error is reported to the user, and the old configuration will remain valid.
* Use `ACL SAVE` to save the current ACL configuration to the ACL file.

Note that `CONFIG REWRITE` does not also trigger `ACL SAVE`.
When you use
an ACL file, the configuration and the ACLs are handled separately.

## ACL rules for Sentinel and Replicas

In case you don't want to provide Redis replicas and Redis Sentinel instances
full access to your Redis instances, the following is the set of commands
that must be allowed in order for everything to work correctly.

For Sentinel, allow the user to access the following commands both in the master and replica instances:

* AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF, CONFIG, EXEC.

Sentinel does not need to access any key in the database but does use Pub/Sub, so the ACL rule would be the following (note: `AUTH` is not needed since it is always allowed):

    ACL SETUSER sentinel-user on >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill

Redis replicas require the following commands to be allowed on the master instance:

* PSYNC, REPLCONF, PING

No keys need to be accessed, so this translates to the following rules:

    ACL SETUSER replica-user on >somepassword +psync +replconf +ping

Note that you don't need to configure the replicas to allow the master to be able to execute any set of commands.
The master is always authenticated as the root user from the point of view of replicas.","site":"redis","answers_cleaned":"    title   ACL  linkTitle   ACL  weight  1 description  Redis Access Control List aliases         topics acl       docs manual security acl       docs manual security acl md        The Redis ACL  short for Access Control List  is the feature that allows certain connections to be limited in terms of the commands that can be executed and the keys that can be accessed  The way it works is that  after connecting  a client is required to provide a username and a valid password to authenticate  If authentication succeeded  the connection is associated with a given user and the limits the user has  Redis can be configured so that new connections are already authenticated with a  default  user  this is the default configuration   Configuring the default user has  as a side effect  the ability to provide only a specific subset of functionalities to connections that are not explicitly authenticated   In the default configuration  Redis 6  the first version to have ACLs  works exactly like older versions of Redis  Every new connection is capable of calling every possible command and accessing every key  so the ACL feature is backward compatible with old clients and applications  Also the old way to configure a password  using the   requirepass   configuration directive  still works as expected  However  it now sets a password for the default user   The Redis  AUTH  command was extended in Redis 6  so now it is possible to use it in the two arguments form       AUTH  username   password   Here s an example of the old form       AUTH  password   What happens is that the username used to authenticate is  default   so just specifying the password implies that we want to authenticate against the default user  This provides backward compatibility      When ACLs are useful  Before using ACLs  you may want to ask yourself what s the goal you want to accomplish 
by implementing this layer of protection  Normally there are two main goals that are well served by ACLs   1  You want to improve security by restricting the access to commands and keys  so that untrusted clients have no access and trusted clients have just the minimum access level to the database in order to perform the work needed  For instance  certain clients may just be able to execute read only commands  2  You want to improve operational safety  so that processes or humans accessing Redis are not allowed to damage the data or the configuration due to software errors or manual mistakes  For instance  there is no reason for a worker that fetches delayed jobs from Redis to be able to call the  FLUSHALL  command   Another typical usage of ACLs is related to managed Redis instances  Redis is often provided as a managed service both by internal company teams that handle the Redis infrastructure for the other internal customers they have  or is provided in a software as a service setup by cloud providers  In both  setups  we want to be sure that configuration commands are excluded for the customers      Configure ACLs with the ACL command  ACLs are defined using a DSL  domain specific language  that describes what a given user is allowed to do  Such rules are always implemented from the first to the last  left to right  because sometimes the order of the rules is important to understand what the user is really able to do   By default there is a single user defined  called  default   We can use the  ACL LIST  command in order to check the currently active ACLs and verify what the configuration of a freshly started  defaults configured Redis instance is         ACL LIST     1   user default on nopass         all   The command above reports the list of users in the same format that is used in the Redis configuration files  by translating the current ACLs set for the users back into their description   The first two words in each line are  user  followed by the 
username  The next words are ACL rules that describe different things  We ll show how the rules work in detail  but for now it is enough to say that the default user is configured to be active  on   to require no password  nopass   to access every possible key        and Pub Sub channel         and be able to call every possible command     all     Also  in the special case of the default user  having the  nopass  rule means that new connections are automatically authenticated with the default user without any explicit  AUTH  call needed      ACL rules  The following is the list of valid ACL rules  Certain rules are just single words that are used in order to activate or remove a flag  or to perform a given change to the user ACL  Other rules are char prefixes that are concatenated with command or category names  key patterns  and so forth   Enable and disallow users      on   Enable the user  it is possible to authenticate as this user     off   Disallow the user  it s no longer possible to authenticate with this user  however  previously authenticated connections will still work  Note that if the default user is flagged as  off   new connections will start as not authenticated and will require the user to send  AUTH  or  HELLO  with the AUTH option in order to authenticate in some way  regardless of the default user configuration   Allow and disallow commands        command    Add the command to the list of commands the user can call  Can be used with     for allowing subcommands  e g   config get         command    Remove the command to the list of commands the user can call  Starting Redis 7 0  it can be used with     for blocking subcommands  e g   config set          category    Add all the commands in such category to be called by the user  with valid categories being like  admin   set   sortedset      and so forth  see the full list by calling the  ACL CAT  command  The special category  all means all the commands  both the ones currently present in the 
server  and the ones that will be loaded in the future via modules        category    Like     category   but removes the commands from the list of commands the client can call       command  first arg   Allow a specific first argument of an otherwise disabled command  It is only supported on commands with no sub commands  and is not allowed as negative form like  SELECT 1  only additive starting with      This feature is deprecated and may be removed in the future     allcommands   Alias for   all  Note that it implies the ability to execute all the future commands loaded via the modules system     nocommands   Alias for   all   Allow and disallow certain keys and key permissions        pattern    Add a pattern of keys that can be mentioned as part of commands  For instance      allows all the keys  The pattern is a glob style pattern like the one of  KEYS   It is possible to specify multiple patterns      R  pattern     Available in Redis 7 0 and later  Add the specified read key pattern  This behaves similar to the regular key pattern but only grants permission to read from keys that match the given pattern  See  key permissions   key permissions  for more information      W  pattern     Available in Redis 7 0 and later  Add the specified write key pattern  This behaves similar to the regular key pattern but only grants permission to write to keys that match the given pattern  See  key permissions   key permissions  for more information      RW  pattern     Available in Redis 7 0 and later  Alias for    pattern        allkeys   Alias for          resetkeys   Flush the list of allowed keys patterns  For instance the ACL   foo    bar   resetkeys  objects     will only allow the client to access keys that match the pattern  objects      Allow and disallow Pub Sub channels        pattern     Available in Redis 6 2 and later  Add a glob style pattern of Pub Sub channels that can be accessed by the user  It is possible to specify multiple channel patterns  Note that 
pattern matching is done only for channels mentioned by  PUBLISH  and  SUBSCRIBE   whereas  PSUBSCRIBE  requires a literal match between its channel patterns and those allowed for user     allchannels   Alias for      that allows the user to access all Pub Sub channels     resetchannels   Flush the list of allowed channel patterns and disconnect the user s Pub Sub clients if these are no longer able to access their respective channels and or channel patterns   Configure valid passwords for the user        password    Add this password to the list of valid passwords for the user  For example   mypass  will add  mypass  to the list of valid passwords   This directive clears the  nopass  flag  see later   Every user can have any number of passwords       password    Remove this password from the list of valid passwords  Emits an error in case the password you are trying to remove is actually not set       hash    Add this SHA 256 hash value to the list of valid passwords for the user  This hash value will be compared to the hash of a password entered for an ACL user  This allows users to store hashes in the  acl conf  file rather than storing cleartext passwords  Only SHA 256 hash values are accepted as the password hash must be 64 characters and only contain lowercase hexadecimal characters       hash    Remove this hash value from the list of valid passwords  This is useful when you do not know the password specified by the hash value but would like to remove the password from the user     nopass   All the set passwords of the user are removed  and the user is flagged as requiring no password  it means that every password will work against this user  If this directive is used for the default user  every new connection will be immediately authenticated with the default user without any explicit AUTH command required  Note that the  resetpass  directive will clear this condition     resetpass   Flushes the list of allowed passwords and removes the  nopass  status  
After  resetpass   the user has no associated passwords and there is no way to authenticate without adding some password  or setting it as  nopass  later     Note  if a user is not flagged with nopass and has no list of valid passwords  that user is effectively impossible to use because there will be no way to log in as that user    Configure selectors for the user        rule list      Available in Redis 7 0 and later  Create a new selector to match rules against  Selectors are evaluated after the user permissions  and are evaluated according to the order they are defined  If a command matches either the user permissions or any selector  it is allowed  See  selectors   selectors  for more information     clearselectors    Available in Redis 7 0 and later  Delete all of the selectors attached to the user   Reset the user      reset  Performs the following actions  resetpass  resetkeys  resetchannels  allchannels  if acl pubsub default is set   off  clearselectors    all  The user returns to the same state it had immediately after its creation      Create and edit user ACLs with the ACL SETUSER command  Users can be created and modified in two main ways   1  Using the ACL command and its  ACL SETUSER  subcommand  2  Modifying the server configuration  where users can be defined  and restarting the server  With an  external ACL file   just call  ACL LOAD    In this section we ll learn how to define users using the  ACL  command  With such knowledge  it will be trivial to do the same things via the configuration files  Defining users in the configuration deserves its own section and will be discussed later separately   To start  try the simplest  ACL SETUSER  command call         ACL SETUSER alice     OK  The  ACL SETUSER  command takes the username and a list of ACL rules to apply to the user  However the above example did not specify any rule at all  This will just create the user if it did not exist  using the defaults for new users  If the user already exists  the 
command above will do nothing at all   Check the default user status         ACL LIST     1   user alice off resetchannels   all      2   user default on nopass         all   The new user  alice  is     In the off status  so  AUTH  will not work for the user  alice     The user also has no passwords set    Cannot access any command  Note that the user is created by default without the ability to access any command  so the    all  in the output above could be omitted  however   ACL LIST  attempts to be explicit rather than implicit    There are no key patterns that the user can access    There are no Pub Sub channels that the user can access   New users are created with restrictive permissions by default  Starting with Redis 6 2  ACL provides Pub Sub channels access management as well  To ensure backward compatibility with version 6 0 when upgrading to Redis 6 2  new users are granted the  allchannels  permission by default  The default can be set to  resetchannels  via the  acl pubsub default  configuration directive   From 7 0  The  acl pubsub default  value is set to  resetchannels  to restrict the channels access by default to provide better security  The default can be set to  allchannels  via the  acl pubsub default  configuration directive to be compatible with previous versions   Such user is completely useless  Let s try to define the user so that it is active  has a password  and can access with only the  GET  command to key names starting with the string  cached           ACL SETUSER alice on  p1pp0  cached    get     OK  Now the user can do something  but will refuse to do other things         AUTH alice p1pp0     OK       GET foo      error  NOPERM this user has no permissions to access one of the keys used as arguments       GET cached 1234      nil        SET cached 1234 zap      error  NOPERM this user has no permissions to run the  set  command  Things are working as expected  In order to inspect the configuration of the user alice  remember that 
user names are case sensitive   it is possible to use an alternative to  ACL LIST  which is designed to be more suitable for computers to read  while  ACL GETUSER  is more human readable         ACL GETUSER alice     1   flags      2  1   on      3   passwords      4  1   2d9c75         5   commands      6     all  get      7   keys      8    cached        9   channels      10         11   selectors      12   empty array   The  ACL GETUSER  returns a field value array that describes the user in more parsable terms  The output includes the set of flags  a list of key patterns  passwords  and so forth  The output is probably more readable if we use RESP3  so that it is returned as a map reply         ACL GETUSER alice     1   flags     1   on      2   passwords     1   2d9c75273d72b32df726fb545c8a4edc719f0a95a6fd993950b10c474ad9c927      3   commands        all  get      4   keys       cached        5   channels            6   selectors      empty array    Note  from now on  we ll continue using the Redis default protocol  version 2   Using another  ACL SETUSER  command  from a different user  because alice cannot run the  ACL  command   we can add multiple patterns to the user         ACL SETUSER alice  objects    items    public       OK       ACL LIST     1   user alice on  2d9c75     cached    objects    items    public   resetchannels   all  get      2   user default on nopass         all   The user representation in memory is now as we expect it to be      Multiple calls to ACL SETUSER  It is very important to understand what happens when  ACL SETUSER  is called multiple times  What is critical to know is that every  ACL SETUSER  call will NOT reset the user  but will just apply the ACL rules to the existing user  The user is reset only if it was not known before  In that case  a brand new user is created with zeroed ACLs  The user cannot do anything  is disallowed  has no passwords  and so forth  This is the best default for safety   However later calls will 
just modify the user incrementally  For instance  the following sequence         ACL SETUSER myuser  set     OK       ACL SETUSER myuser  get     OK  Will result in myuser being able to call both  GET  and  SET          ACL LIST     1   user default on nopass         all      2   user myuser off resetchannels   all  get  set      Command categories  Setting user ACLs by specifying all the commands one after the other is really annoying  so instead we do things like this         ACL SETUSER antirez on   all   dangerous  42a979        By saying   all and   dangerous  we included all the commands and later removed all the commands that are tagged as dangerous inside the Redis command table  Note that command categories   never include modules commands   with the exception of   all  If you say   all  all the commands can be executed by the user  even future commands loaded via the modules system  However if you use the ACL rule   read or any other  the modules commands are always excluded  This is very important because you should just trust the Redis internal command table  Modules may expose dangerous things and in the case of an ACL that is just additive  that is  in the form of    all       You should be absolutely sure that you ll never include what you did not mean to   The following is a list of command categories and their meanings       admin     Administrative commands  Normal applications will never need to use   these  Includes  REPLICAOF    CONFIG    DEBUG    SAVE    MONITOR    ACL    SHUTDOWN   etc      bitmap     Data type  bitmaps related      blocking     Potentially blocking the connection until released by another   command      connection     Commands affecting the connection or other connections    This includes  AUTH    SELECT    COMMAND    CLIENT    ECHO    PING   etc      dangerous     Potentially dangerous commands  each should be considered with care for   various reasons   This includes  FLUSHALL    MIGRATE    RESTORE    SORT    KEYS      
CLIENT    DEBUG    INFO    CONFIG    SAVE    REPLICAOF   etc      geo     Data type  geospatial indexes related      hash     Data type  hashes related      hyperloglog     Data type  hyperloglog related      fast     Fast O 1  commands  May loop on the number of arguments  but not the   number of elements in the key      keyspace     Writing or reading from keys  databases  or their metadata    in a type agnostic way  Includes  DEL    RESTORE    DUMP    RENAME    EXISTS    DBSIZE      KEYS    EXPIRE    TTL    FLUSHALL   etc  Commands that may modify the keyspace    key  or metadata will also have the  write  category  Commands that only read   the keyspace  key  or metadata will have the  read  category      list     Data type  lists related      pubsub     PubSub related commands      read     Reading from keys  values or metadata   Note that commands that don t   interact with keys  will not have either  read  or  write       scripting     Scripting related      set     Data type  sets related      sortedset     Data type  sorted sets related      slow     All commands that are not  fast       stream     Data type  streams related      string     Data type  strings related      transaction      WATCH     MULTI     EXEC  related commands      write     Writing to keys  values or metadata    Redis can also show you a list of all categories and the exact commands each category includes using the Redis  ACL CAT  command  It can be used in two forms       ACL CAT    Will just list all the categories available     ACL CAT  category name     Will list all the commands inside the category  Examples          ACL CAT      1   keyspace       2   read       3   write       4   set       5   sortedset       6   list       7   hash       8   string       9   bitmap      10   hyperloglog      11   geo      12   stream      13   pubsub      14   admin      15   fast      16   slow      17   blocking      18   dangerous      19   connection      20   transaction      21   
scripting   As you can see  so far there are 21 distinct categories  Now let s check what command is part of the  geo  category          ACL CAT geo      1   geohash       2   georadius ro       3   georadiusbymember       4   geopos       5   geoadd       6   georadiusbymember ro       7   geodist       8   georadius       9   geosearch      10   geosearchstore   Note that commands may be part of multiple categories  For example  an ACL rule like    geo   read  will result in certain geo commands to be excluded because they are read only commands      Allow block subcommands  Starting from Redis 7 0  subcommands can be allowed blocked just like other commands  by using the separator     between the command and subcommand  for example    config get  or   config set    That is true for all commands except DEBUG  In order to allow block specific DEBUG subcommands  see the next section      Allow the first arg of a blocked command    Note  This feature is deprecated since Redis 7 0 and may be removed in the future     Sometimes the ability to exclude or include a command or a subcommand as a whole is not enough  Many deployments may not be happy providing the ability to execute a  SELECT  for any DB  but may still want to be able to run  SELECT 0    In such case we could alter the ACL of a user in the following way       ACL SETUSER myuser  select  select 0  First  remove the  SELECT  command and then add the allowed first arg  Note that   it is not possible to do the reverse   since first args can be only added  not excluded  It is safer to specify all the first args that are valid for some user since it is possible that new first args may be added in the future   Another example       ACL SETUSER myuser  debug  debug digest  Note that first arg matching may add some performance penalty  however  it is hard to measure even with synthetic benchmarks  The additional CPU cost is only paid when such commands are called  and not when other commands are called   It is 
possible to use this mechanism in order to allow subcommands in Redis versions prior to 7 0  see above section         all VS   all  In the previous section  it was observed how it is possible to define command ACLs based on adding removing single commands      Selectors  Starting with Redis 7 0  Redis supports adding multiple sets of rules that are evaluated independently of each other  These secondary sets of permissions are called selectors and added by wrapping a set of rules within parentheses  In order to execute a command  either the root permissions  rules defined outside of parenthesis  or any of the selectors  rules defined inside parenthesis  must match the given command  Internally  the root permissions are checked first followed by selectors in the order they were added   For example  consider a user with the ACL rules   GET  key1   SET  key2    This user is able to execute  GET key1  and  SET key2 hello   but not  GET key2  or  SET key1 world    Unlike the user s root permissions  selectors cannot be modified after they are added  Instead  selectors can be removed with the  clearselectors  keyword  which removes all of the added selectors  Note that  clearselectors  does not remove the root permissions      Key permissions  Starting with Redis 7 0  key patterns can also be used to define how a command is able to touch a key  This is achieved through rules that define key permissions  The key permission rules take the form of     permission    pattern    Permissions are defined as individual characters that map to the following key permissions     W  Write   The data stored within the key may be updated or deleted     R  Read   User supplied data from the key is processed  copied or returned  Note that this does not include metadata such as size information  example  STRLEN    type information  example  TYPE   or information about whether a value exists within a collection  example  SISMEMBER      Permissions can be composed together by specifying 
multiple characters. Specifying the permission as `%RW` is considered full access and is analogous to just passing in `~<pattern>`.

For a concrete example, consider a user with ACL rules:

    +@all ~app1:* (+@read ~app2:*)

This user has full access on `app1:*` and readonly access on `app2:*`. However, some commands support reading data from one key, doing some transformation, and storing it into another key. One such command is the `COPY` command, which copies the data from the source key into the destination key. The example set of ACL rules is unable to handle a request copying data from `app2:user` into `app1:user`, since neither the root permission nor the selector fully matches the command. However, using key selectors you can define a set of ACL rules that can handle this request:

    +@all ~app1:* %R~app2:*

The first pattern is able to match `app1:user` and the second pattern is able to match `app2:user`.

Which type of permission is required for a command is documented through [key specifications](/topics/key-specs#logical-operation-flags). The type of permission is based off the key's logical operation flags:

* The insert, update, and delete flags map to the write key permission.
* The access flag maps to the read key permission.
* If the key has no logical operation flags, such as `EXISTS`, the user still needs either key read or key write permissions to execute the command.

Note: Side channels to accessing user data are ignored when it comes to evaluating whether read permissions are required to execute a command. This means that some write commands that return metadata about the modified key only require write permission on the key to execute. For example, consider the following two commands:

* `LPUSH key1 data`: modifies `key1` but only returns metadata about it (the size of the list after the push), so the command only requires write permission on `key1` to execute.
* `LPOP key2`: modifies `key2` but also returns data from it (the left-most item in the list), so the command requires both read and write permission on `key2` to execute.

If an application needs to make sure no data is accessed from a key (including side channels), it's recommended to not provide any access to the key.

## How passwords are stored internally

Redis internally stores passwords hashed with SHA256. If you set a password and check the output of `ACL LIST` or `ACL GETUSER`, you'll see a long hex string that looks pseudo random. Here is an example, because in the previous examples, for the sake of brevity, the long hex string was trimmed:

    > ACL GETUSER default
    1) "flags"
    2) 1) "on"
    3) "passwords"
    4) 1) "2d9c75273d72b32df726fb545c8a4edc719f0a95a6fd993950b10c474ad9c927"
    5) "commands"
    6) "+@all"
    7) "keys"
    8) "~*"
    9) "channels"
    10) "&*"
    11) "selectors"
    12) (empty array)

Using SHA256 provides the ability to avoid storing the password in clear text while still allowing for a very fast `AUTH` command, which is a very important feature of Redis and is coherent with what clients expect from Redis.

However ACL *passwords* are not really passwords. They are shared secrets between the server and the client, because the password is not an authentication token used by a human being. For instance:

* There are no length limits: the password will just be memorized in some client software. There is no human that needs to recall a password in this context.
* The ACL password does not protect any other thing. For example, it will never be the password for some email account.
* Often when you are able to access the hashed password itself, by having full access to the Redis commands of a given server, or corrupting the system itself, you already have access to what the password is protecting: the Redis instance stability and the data it contains.

For this reason, slowing down the password authentication, in order to use an algorithm that uses time and space to make password cracking hard, is a very poor choice.
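The storage scheme above is easy to reproduce: a single unsalted SHA256 of the configured password yields exactly the kind of 64-character hex string `ACL GETUSER` reports. A minimal sketch (the password value is made up for illustration; `acl_password_hash` is a hypothetical helper, not a Redis API):

```python
import hashlib

def acl_password_hash(password: str) -> str:
    """Hash a password the way Redis stores ACL passwords:
    one unsalted SHA256, rendered as 64 lowercase hex characters."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# AUTH can then be served by comparing digests, which is why
# authentication stays very fast.
stored = acl_password_hash("some-shared-secret")    # hypothetical secret
supplied = acl_password_hash("some-shared-secret")

print(len(stored))           # 64
print(stored == supplied)    # True: same secret, same digest
```

This also illustrates why a fast hash is acceptable here only because the secrets are expected to be long and random rather than human-chosen.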
What we suggest instead is to generate strong passwords, so that nobody will be able to crack it using a dictionary or a brute force attack even if they have the hash. To do so, there is a special ACL command `ACL GENPASS` that generates passwords using the system cryptographic pseudorandom generator:

    > ACL GENPASS
    "dd721260bfe1b3d9601e7fbab36de6d04e2e67b0ef1c53de59d45950db0dd3cc"

The command outputs a 32-byte (256-bit) pseudorandom string converted to a 64-byte alphanumerical string. This is long enough to avoid attacks and short enough to be easy to manage, cut & paste, store, and so forth. This is what you should use in order to generate Redis passwords.

## Use an external ACL file

There are two ways to store users inside the Redis configuration:

1. Users can be specified directly inside the `redis.conf` file.
2. It is possible to specify an external ACL file.

The two methods are *mutually incompatible*, so Redis will ask you to use one or the other. Specifying users inside `redis.conf` is good for simple use cases. When there are multiple users to define, in a complex environment, we recommend you use the ACL file instead.

The format used inside `redis.conf` and in the external ACL file is exactly the same, so it is trivial to switch from one to the other, and is the following:

    user <username> ... acl rules ...

For instance:

    user worker +@list +@connection ~jobs:* on >ffa9203c493aa99

When you want to use an external ACL file, you are required to specify the configuration directive called `aclfile`, like this:

    aclfile /etc/redis/users.acl

When you are just specifying a few users directly inside the `redis.conf` file, you can use `CONFIG REWRITE` in order to store the new user configuration inside the file by rewriting it.

The external ACL file however is more powerful. You can do the following:

* Use `ACL LOAD` if you modified the ACL file manually and you want Redis to reload the new configuration. Note that this command is able to load the file *only if all the users are correctly specified*. Otherwise, an error is reported to the user, and the old configuration will remain valid.
* Use `ACL SAVE` to save the current ACL configuration to the ACL file.

Note that `CONFIG REWRITE` does not also trigger `ACL SAVE`: when you use an ACL file, the configuration and the ACLs are handled separately.

## ACL rules for Sentinel and Replicas

In case you don't want to provide Redis replicas and Redis Sentinel instances full access to your Redis instances, the following is the set of commands that must be allowed in order for everything to work correctly.

For Sentinel, allow the user to access the following commands both in the master and replica instances:

* AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF, CONFIG, CLIENT, EXEC.

Sentinel does not need to access any key in the database but does use Pub/Sub, so the ACL rule would be the following (note: `AUTH` is not needed since it is always allowed):

    ACL SETUSER sentinel-user on >somepassword allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill

Redis replicas require the following commands to be allowed on the master instance:

* PSYNC, REPLCONF, PING

No keys need to be accessed, so this translates to the following rules:

    ACL setuser replica-user on >somepassword +psync +replconf +ping

Note that you don't need to configure the replicas to allow the master to be able to execute any set of commands. The master is always authenticated as the root user from the point of view of replicas."}
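The `ACL GENPASS` output described above is plain CSPRNG-derived hex, so an equivalent value can be produced client-side with the standard library. A sketch (Python's `secrets` module is a stand-in for the system cryptographic generator, not what Redis itself calls; the bit-rounding here is byte-granular, which matches only multiples of 8):

```python
import secrets

def genpass(bits: int = 256) -> str:
    """Roughly mimic ACL GENPASS [bits]: return a hex string carrying
    `bits` of CSPRNG entropy (Redis defaults to 256 bits -> 64 hex chars)."""
    nbytes = (bits + 7) // 8          # byte-granular approximation
    return secrets.token_hex(nbytes)

password = genpass()
print(len(password))   # 64
```

A value like this is then suitable as the `>password` part of an `ACL SETUSER` rule.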
{"questions":"redis docs manual security encryption TLS aliases docs manual security encryption md topics encryption title TLS Redis TLS support weight 1","answers":"---\ntitle: \"TLS\"\nlinkTitle: \"TLS\"\nweight: 1\ndescription: Redis TLS support\naliases: [\n    \/topics\/encryption,\n    \/docs\/manual\/security\/encryption,\n    \/docs\/manual\/security\/encryption.md\n]\n---\n\nSSL\/TLS is supported by Redis starting with version 6 as an optional feature\nthat needs to be enabled at compile time.\n\n## Getting Started\n\n### Building\n\nTo build with TLS support you'll need OpenSSL development libraries (e.g.\n`libssl-dev` on Debian\/Ubuntu).\n\nBuild Redis with the following command:\n\n```sh\nmake BUILD_TLS=yes\n```\n\n### Tests\n\nTo run Redis test suite with TLS, you'll need TLS support for TCL (i.e.\n`tcl-tls` package on Debian\/Ubuntu).\n\n1. Run `.\/utils\/gen-test-certs.sh` to generate a root CA and a server\n   certificate.\n\n2. Run `.\/runtest --tls` or `.\/runtest-cluster --tls` to run Redis and Redis\n   Cluster tests in TLS mode.\n\n### Running manually\n\nTo manually run a Redis server with TLS mode (assuming `gen-test-certs.sh` was\ninvoked so sample certificates\/keys are available):\n\n    .\/src\/redis-server --tls-port 6379 --port 0 \\\n        --tls-cert-file .\/tests\/tls\/redis.crt \\\n        --tls-key-file .\/tests\/tls\/redis.key \\\n        --tls-ca-cert-file .\/tests\/tls\/ca.crt\n\nTo connect to this Redis server with `redis-cli`:\n\n    .\/src\/redis-cli --tls \\\n        --cert .\/tests\/tls\/redis.crt \\\n        --key .\/tests\/tls\/redis.key \\\n        --cacert .\/tests\/tls\/ca.crt\n\n### Certificate configuration\n\nIn order to support TLS, Redis must be configured with a X.509 certificate and a\nprivate key. In addition, it is necessary to specify a CA certificate bundle\nfile or path to be used as a trusted root when validating certificates. To\nsupport DH based ciphers, a DH params file can also be configured. 
For example:\n\n```\ntls-cert-file \/path\/to\/redis.crt\ntls-key-file \/path\/to\/redis.key\ntls-ca-cert-file \/path\/to\/ca.crt\ntls-dh-params-file \/path\/to\/redis.dh\n```\n\n### TLS listening port\n\nThe `tls-port` configuration directive enables accepting SSL\/TLS connections on\nthe specified port. This is **in addition** to listening on `port` for TCP\nconnections, so it is possible to access Redis on different ports using TLS and\nnon-TLS connections simultaneously.\n\nYou may specify `port 0` to disable the non-TLS port completely. To enable only\nTLS on the default Redis port, use:\n\n```\nport 0\ntls-port 6379\n```\n\n### Client certificate authentication\n\nBy default, Redis uses mutual TLS and requires clients to authenticate with a\nvalid certificate (authenticated against trusted root CAs specified by\n`ca-cert-file` or `ca-cert-dir`).\n\nYou may use `tls-auth-clients no` to disable client authentication.\n\n### Replication\n\nA Redis master server handles connecting clients and replica servers in the same\nway, so the above `tls-port` and `tls-auth-clients` directives apply to\nreplication links as well.\n\nOn the replica server side, it is necessary to specify `tls-replication yes` to\nuse TLS for outgoing connections to the master.\n\n### Cluster\n\nWhen Redis Cluster is used, use `tls-cluster yes` in order to enable TLS for the\ncluster bus and cross-node connections.\n\n### Sentinel\n\nSentinel inherits its networking configuration from the common Redis\nconfiguration, so all of the above applies to Sentinel as well.\n\nWhen connecting to master servers, Sentinel will use the `tls-replication`\ndirective to determine if a TLS or non-TLS connection is required.\n\nIn addition, the very same `tls-replication` directive will determine whether Sentinel's\nport, that accepts connections from other Sentinels, will support TLS as well. That is,\nSentinel will be configured with `tls-port` if and only if `tls-replication` is enabled. 
\n\n### Additional configuration\n\nAdditional TLS configuration is available to control the choice of TLS protocol\nversions, ciphers and cipher suites, etc. Please consult the self documented\n`redis.conf` for more information.\n\n### Performance considerations\n\nTLS adds a layer to the communication stack with overheads due to writing\/reading to\/from an SSL connection, encryption\/decryption and integrity checks. Consequently, using TLS results in a decrease of the achievable throughput per Redis instance (for more information refer to this [discussion](https:\/\/github.com\/redis\/redis\/issues\/7595)).\n\n### Limitations\n\nI\/O threading is currently not supported with TLS.","site":"redis"}
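On the client side, the `--tls`/`--cert`/`--key`/`--cacert` flags shown for `redis-cli` correspond to an ordinary mutual-TLS client configuration. A minimal sketch of the equivalent stdlib `ssl` setup (the certificate paths are the sample ones produced by `gen-test-certs.sh` and are assumptions; the `load_*` calls are guarded so the sketch also runs where those files don't exist):

```python
import os
import ssl

def make_redis_tls_context(cert="./tests/tls/redis.crt",
                           key="./tests/tls/redis.key",
                           cacert="./tests/tls/ca.crt") -> ssl.SSLContext:
    """Build a client-side context for a mutual-TLS Redis connection:
    verify the server against our CA and present our own cert/key
    (needed because Redis defaults to tls-auth-clients yes)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    if all(os.path.exists(p) for p in (cert, key, cacert)):
        ctx.load_verify_locations(cafile=cacert)   # trust the test CA
        ctx.load_cert_chain(certfile=cert, keyfile=key)
    return ctx

ctx = make_redis_tls_context()
# The context would then wrap a TCP socket to the tls-port, e.g.:
#   with socket.create_connection(("localhost", 6379)) as s:
#       tls = ctx.wrap_socket(s, server_hostname="localhost")
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
```

Higher-level clients expose the same three knobs directly; the point is that nothing Redis-specific is happening at the TLS layer.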
{"questions":"redis title Redis security Security model and features in Redis aliases docs manual security Security weight 1 topics security docs manual security md","answers":"---\ntitle: \"Redis security\"\nlinkTitle: \"Security\"\nweight: 1\ndescription: Security model and features in Redis\naliases: [\n    \/topics\/security,\n    \/docs\/manual\/security,\n    \/docs\/manual\/security.md\n]\n---\n\nThis document provides an introduction to the topic of security from the point of\nview of Redis. It covers the access control provided by Redis, code security concerns,\nattacks that can be triggered from the outside by selecting malicious inputs, and\nother similar topics. \nYou can learn more about access control, data protection and encryption, secure Redis architectures, and secure deployment techniques by taking the [Redis University security course](https:\/\/university.redis.com\/courses\/ru330\/).\n\nFor security-related contacts, open an issue on GitHub, or when you feel it\nis really important to preserve the security of the communication, use the\nGPG key at the end of this document.\n\n## Security model\n\nRedis is designed to be accessed by trusted clients inside trusted environments.\nThis means that usually it is not a good idea to expose the Redis instance\ndirectly to the internet or, in general, to an environment where untrusted\nclients can directly access the Redis TCP port or UNIX socket.\n\nFor instance, in the common context of a web application implemented using Redis\nas a database, cache, or messaging system, the clients inside the front-end\n(web side) of the application will query Redis to generate pages or\nto perform operations requested or triggered by the web application user.\n\nIn this case, the web application mediates access between Redis and\nuntrusted clients (the user browsers accessing the web application).\n\nIn general, untrusted access to Redis should\nalways be mediated by a layer implementing ACLs, validating user 
input,\nand deciding what operations to perform against the Redis instance.\n\n## Network security\n\nAccess to the Redis port should be denied to everybody but trusted clients\nin the network, so the servers running Redis should be directly accessible\nonly by the computers implementing the application using Redis.\n\nIn the common case of a single computer directly exposed to the internet, such\nas a virtualized Linux instance (Linode, EC2, ...), the Redis port should be\nfirewalled to prevent access from the outside. Clients will still be able to\naccess Redis using the loopback interface.\n\nNote that it is possible to bind Redis to a single interface by adding a line\nlike the following to the **redis.conf** file:\n\n    bind 127.0.0.1\n\nFailing to protect the Redis port from the outside can have a big security\nimpact because of the nature of Redis. For instance, a single `FLUSHALL` command can be used by an external attacker to delete the whole data set.\n\n## Protected mode\n\nUnfortunately, many users fail to protect Redis instances from being accessed\nfrom external networks. Many instances are simply left exposed on the\ninternet with public IPs. Since version 3.2.0, Redis enters a special mode called **protected mode** when it is\nexecuted with the default configuration (binding all the interfaces) and\nwithout any password in order to access it. In this mode, Redis only replies to queries from the\nloopback interfaces, and replies to clients connecting from other\naddresses with an error that explains the problem and how to configure\nRedis properly.\n\nWe expect protected mode to seriously decrease the security issues caused\nby unprotected Redis instances executed without proper administration. 
However,\nthe system administrator can still ignore the error given by Redis and\ndisable protected mode or manually bind all the interfaces.\n\n## Authentication\n\nRedis provides two ways to authenticate clients.\nThe recommended authentication method, introduced in Redis 6, is via Access Control Lists, allowing named users to be created and assigned fine-grained permissions.\nRead more about Access Control Lists [here](\/docs\/management\/security\/acl\/).\n\nThe legacy authentication method is enabled by editing the **redis.conf** file, and providing a database password using the `requirepass` setting.\nThis password is then used by all clients.\n\nWhen the `requirepass` setting is enabled, Redis will refuse any query by\nunauthenticated clients. A client can authenticate itself by sending the\n**AUTH** command followed by the password.\n\nThe password is set by the system administrator in clear text inside the\nredis.conf file. It should be long enough to prevent brute force attacks\nfor two reasons:\n\n* Redis is very fast at serving queries. Many passwords per second can be tested by an external client.\n* The Redis password is stored in the **redis.conf** file and inside the client configuration. Since the system administrator does not need to remember it, the password can be very long.\n\nThe goal of the authentication layer is to optionally provide a layer of\nredundancy. 
If firewalling or any other system implemented to protect Redis\nfrom external attackers fail, an external client will still not be able to\naccess the Redis instance without knowledge of the authentication password.\n\nSince the `AUTH` command, like every other Redis command, is sent unencrypted, it\ndoes not protect against an attacker that has enough access to the network to\nperform eavesdropping.\n\n## TLS support\n\nRedis has optional support for TLS on all communication channels, including\nclient connections, replication links, and the Redis Cluster bus protocol.\n\n## Disallowing specific commands\n\nIt is possible to disallow commands in Redis or to rename them as an unguessable\nname, so that normal clients are limited to a specified set of commands.\n\nFor instance, a virtualized server provider may offer a managed Redis instance\nservice. In this context, normal users should probably not be able to\ncall the Redis **CONFIG** command to alter the configuration of the instance,\nbut the systems that provide and remove instances should be able to do so.\n\nIn this case, it is possible to either rename or completely shadow commands from\nthe command table. This feature is available as a statement that can be used\ninside the redis.conf configuration file. For example:\n\n    rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52\n\nIn the above example, the **CONFIG** command was renamed into an unguessable name.  It is also possible to completely disallow it (or any other command) by renaming it to the empty string, like in the following example:\n\n    rename-command CONFIG \"\"\n\n## Attacks triggered by malicious inputs from external clients\n\nThere is a class of attacks that an attacker can trigger from the outside even\nwithout external access to the instance. 
For example, an attacker might insert data into Redis that triggers pathological (worst case)\nalgorithm complexity on data structures implemented inside Redis internals.\n\nAn attacker could supply, via a web form, a set of strings that\nare known to hash to the same bucket in a hash table in order to turn the\nO(1) expected time (the average time) to the O(N) worst case. This can consume more\nCPU than expected and ultimately cause a Denial of Service.\n\nTo prevent this specific attack, Redis uses a per-execution, pseudo-random\nseed to the hash function.\n\nRedis implements the SORT command using the qsort algorithm. Currently,\nthe algorithm is not randomized, so it is possible to trigger a quadratic\nworst-case behavior by carefully selecting the right set of inputs.\n\n## String escaping and NoSQL injection\n\nThe Redis protocol has no concept of string escaping, so injection\nis impossible under normal circumstances using a normal client library.\nThe protocol uses prefixed-length strings and is completely binary safe.\n\nSince Lua scripts executed by the `EVAL` and `EVALSHA` commands follow the\nsame rules, those commands are also safe.\n\nWhile it would be a strange use case, the application should avoid composing the body of the Lua script from strings obtained from untrusted sources.\n\n## Code security\n\nIn a classical Redis setup, clients are allowed full access to the command set,\nbut accessing the instance should never result in the ability to control the\nsystem where Redis is running.\n\nInternally, Redis uses all the well-known practices for writing secure code to\nprevent buffer overflows, format bugs, and other memory corruption issues.\nHowever, the ability to control the server configuration using the **CONFIG**\ncommand allows the client to change the working directory of the program and\nthe name of the dump file. This allows clients to write RDB Redis files\nto random paths. 
This is [a security issue](http:\/\/antirez.com\/news\/96) that may lead to the ability to compromise the system and\/or run untrusted code as the same user as Redis is running.\n\nRedis does not require root privileges to run. It is recommended to\nrun it as an unprivileged *redis* user that is only used for this purpose.\n\n## GPG key\n\n```\n-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBF9FWioBEADfBiOE\/iKpj2EF\/cJ\/KzFX+jSBKa8SKrE\/9RE0faVF6OYnqstL\nS5ox\/o+yT45FdfFiRNDflKenjFbOmCbAdIys9Ta0iq6I9hs4sKfkNfNVlKZWtSVG\nW4lI6zO2Zyc2wLZonI+Q32dDiXWNcCEsmajFcddukPevj9vKMTJZtF79P2SylEPq\nmUuhMy\/jOt7q1ibJCj5srtaureBH9662t4IJMFjsEe+hiZ5v071UiQA6Tp7rxLqZ\nO6ZRzuamFP3xfy2Lz5NQ7QwnBH1ROabhJPoBOKCATCbfgFcM1Rj+9AOGfoDCOJKH\n7yiEezMqr9VbDrEmYSmCO4KheqwC0T06lOLIQC4nnwKopNO\/PN21mirCLHvfo01O\nH\/NUG1LZifOwAURbiFNF8Z3+L0csdhD8JnO+1nphjDHr0Xn9Vff2Vej030pRI\/9C\nSJ2s5fZUq8jK4n06sKCbqA4pekpbKyhRy3iuITKv7Nxesl4T\/uhkc9ccpAvbuD1E\nNczN1IH05jiMUMM3lC1A9TSvxSqflqI46TZU3qWLa9yg45kDC8Ryr39TY37LscQk\n9x3WwLLkuHeUurnwAk46fSj7+FCKTGTdPVw8v7XbvNOTDf8vJ3o2PxX1uh2P2BHs\n9L+E1P96oMkiEy1ug7gu8V+mKu5PAuD3QFzU3XCB93DpDakgtznRRXCkAQARAQAB\ntBtSZWRpcyBMYWJzIDxyZWRpc0ByZWRpcy5pbz6JAk4EEwEKADgWIQR5sNCo1OBf\nWO913l22qvOUq0evbgUCX0VaKgIbAwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAK\nCRC2qvOUq0evbpZaD\/4rN7xesDcAG4ec895Fqzk3w74W1\/K9lzRKZDwRsAqI+sAz\nZXvQMtWSxLfF2BITxLnHJXK5P+2Y6XlNgrn1GYwC1MsARyM9e1AzwDJHcXFkHU82\n2aALIMXGtiZs\/ejFh9ZSs5cgRlxBSqot\/uxXm9AvKEByhmIeHPZse\/Rc6e3qa57v\nOhCkVZB4ETx5iZrgA+gdmS8N7MXG0cEu5gJLacG57MHi+2WMOCU9Xfj6+Pqhw3qc\nE6lBinKcA\/LdgUJ1onK0JCnOG1YVHjuFtaisfPXvEmUBGaSGE6lM4J7lass\/OWps\nDd+oHCGI+VOGNx6AiBDZG8mZacu0\/7goRnOTdljJ93rKkj31I+6+j4xzkAC0IXW8\nLAP9Mmo9TGx0L5CaljykhW6z\/RK3qd7dAYE+i7e8J9PuQaGG5pjFzuW4vY45j0V\/\n9JUMKDaGbU5choGqsCpAVtAMFfIBj3UQ5LCt5zKyescKCUb9uifOLeeQ1vay3R9o\neRSD52YpRBpor0AyYxcLur\/pkHB0sSvXEfRZENQTohpY71rHSaFd3q1Hkk7lZl95\nm24NRlrJnjFmeSPKP22vqUYIwoGNUF\/D38UzvqHD8ltTPgkZc+Y+RRbVNqkQYiwW\nGH\/DigNB8r2sdkt+1EUu+YkYosxtzxpxxpYGKXYXx0uf+EZmRqRt\/OSHKnf2GLkC\nDQRfR
VoqARAApffsrDNo4JWjX3r6wHJJ8IpwnGEJ2IzGkg8f1Ofk2uKrjkII\/oIx\nsXC3EeauC1Plhs+m9GP\/SPY0LXmZ0OzGD\/S1yMpmBeBuXJ0gONDo+xCg1pKGshPs\n75XzpbggSOtEYR5S8Z46yCu7TGJRXBMGBhDgCfPVFBBNsnG5B0EeHXM4trqqlN6d\nPAcwtLnKPz\/Z+lloKR6bFXvYGuN5vjRXjcVYZLLCEwdV9iY5\/Opqk9sCluasb3t\/\nc2gcsLWWFnNz2desvb\/Y4ADJzxY+Um848DSR8IcdoArSsqmcCTiYvYC\/UU7XPVNk\nJrx\/HwgTVYiLGbtMB3u3fUpHW8SabdHc4xG3sx0LeIvl+JwHgx7yVhNYJEyOQfnE\nmfS97x6surXgTVLbWVjXKIJhoWnWbLP4NkBc27H4qo8wM\/IWH4SSXYNzFLlCDPnw\nvQZSel21qxdqAWaSxkKcymfMS4nVDhVj0jhlcTY3aZcHMjqoUB07p5+laJr9CCGv\n0Y0j0qT2aUO22A3kbv6H9c1Yjv8EI7eNz07aoH1oYU6ShsiaLfIqPfGYb7LwOFWi\nPSl0dCY7WJg2H6UHsV\/y2DwRr\/3oH0a9hv\/cvcMneMi3tpIkRwYFBPXEsIcoD9xr\nRI5dp8BBdO\/Nt+puoQq9oyialWnQK5+AY7ErW1yxjgie4PQ+XtN+85UAEQEAAYkC\nNgQYAQoAIBYhBHmw0KjU4F9Y73XeXbaq85SrR69uBQJfRVoqAhsMAAoJELaq85Sr\nR69uoV0QAIvlxAHYTjvH1lt5KbpVGs5gwIAnCMPxmaOXcaZ8V0Z1GEU+\/IztwV+N\nMYCBv1tYa7OppNs1pn75DhzoNAi+XQOVvU0OZgVJutthZe0fNDFGG9B4i\/cxRscI\nLd8TPQQNiZPBZ4ubcxbZyBinE9HsYUM49otHjsyFZ0GqTpyne+zBf1GAQoekxlKo\ntWSkkmW0x4qW6eiAmyo5lPS1bBjvaSc67i+6Bv5QkZa0UIkRqAzKN4zVvc2FyILz\n+7wVLCzWcXrJt8dOeS6Y\/Fjbhb6m7dtapUSETAKu6wJvSd9ndDUjFHD33NQIZ\/nL\nWaPbn01+e\/PHtUDmyZ2W2KbcdlIT9nb2uHrruqdCN04sXkID8E2m2gYMA+TjhC0Q\nJBJ9WPmdBeKH91R6wWDq6+HwOpgc\/9na+BHZXMG+qyEcvNHB5RJdiu2r1Haf6gHi\nFd6rJ6VzaVwnmKmUSKA2wHUuUJ6oxVJ1nFb7Aaschq8F79TAfee0iaGe9cP+xUHL\nzBDKwZ9PtyGfdBp1qNOb94sfEasWPftT26rLgKPFcroCSR2QCK5qHsMNCZL+u71w\nNnTtq9YZDRaQ2JAc6VDZCcgu+dLiFxVIi1PFcJQ31rVe16+AQ9zsafiNsxkPdZcY\nU9XKndQE028dGZv1E3S5BwpnikrUkWdxcYrVZ4fiNIy5I3My2yCe\n=J9BD\n-----END PGP PUBLIC KEY BLOCK-----\n```","site":"redis","answers_cleaned":"    title   Redis security  linkTitle   Security  weight  1 description  Security model and features in Redis aliases         topics security       docs manual security       docs manual security md        This document provides an introduction to the topic of security from the point of view of Redis  It covers the access control provided by Redis  code security concerns  attacks that can be 
triggered from the outside by selecting malicious inputs  and other similar topics   You can learn more about access control  data protection and encryption  secure Redis architectures  and secure deployment techniques by taking the  Redis University security course  https   university redis com courses ru330     For security related contacts  open an issue on GitHub  or when you feel it is really important to preserve the security of the communication  use the GPG key at the end of this document      Security model  Redis is designed to be accessed by trusted clients inside trusted environments  This means that usually it is not a good idea to expose the Redis instance directly to the internet or  in general  to an environment where untrusted clients can directly access the Redis TCP port or UNIX socket   For instance  in the common context of a web application implemented using Redis as a database  cache  or messaging system  the clients inside the front end  web side  of the application will query Redis to generate pages or to perform operations requested or triggered by the web application user   In this case  the web application mediates access between Redis and untrusted clients  the user browsers accessing the web application    In general  untrusted access to Redis should always be mediated by a layer implementing ACLs  validating user input  and deciding what operations to perform against the Redis instance      Network security  Access to the Redis port should be denied to everybody but trusted clients in the network  so the servers running Redis should be directly accessible only by the computers implementing the application using Redis   In the common case of a single computer directly exposed to the internet  such as a virtualized Linux instance  Linode  EC2        the Redis port should be firewalled to prevent access from the outside  Clients will still be able to access Redis using the loopback interface   Note that it is possible to bind Redis to a 
single interface by adding a line like the following to the `redis.conf` file:

    bind 127.0.0.1

Failing to protect the Redis port from the outside can have a big security impact because of the nature of Redis. For instance, a single `FLUSHALL` command can be used by an external attacker to delete the whole data set.

## Protected mode

Unfortunately, many users fail to protect Redis instances from being accessed from external networks. Many instances are simply left exposed on the internet with public IPs. Since version 3.2.0, Redis enters a special mode called **protected mode** when it is executed with the default configuration (binding all the interfaces) and without any password in order to access it. In this mode, Redis only replies to queries from the loopback interfaces, and replies to clients connecting from other addresses with an error that explains the problem and how to configure Redis properly.

We expect protected mode to seriously decrease the security issues caused by unprotected Redis instances executed without proper administration. However, the system administrator can still ignore the error given by Redis and disable protected mode or manually bind all the interfaces.

## Authentication

Redis provides two ways to authenticate clients. The recommended authentication method, introduced in Redis 6, is via Access Control Lists, allowing named users to be created and assigned fine-grained permissions. Read more about Access Control Lists [here](/docs/management/security/acl/).

The legacy authentication method is enabled by editing the `redis.conf` file and providing a database password using the `requirepass` setting. This password is then used by all clients.

When the `requirepass` setting is enabled, Redis will refuse any query by unauthenticated clients. A client can authenticate itself by sending the **AUTH** command followed by the password.

The password is set by the system administrator in clear text inside the redis.conf file. It should be long enough to prevent brute force attacks for two reasons:

* Redis is very fast at serving queries. Many passwords per second can be tested by an external client.
* The Redis password is stored in the `redis.conf` file and inside the client configuration. Since the system administrator does not need to remember it, the password can be very long.

The goal of the authentication layer is to optionally provide a layer of redundancy. If firewalling or any other system implemented to protect Redis from external attackers fails, an external client will still not be able to access the Redis instance without knowledge of the authentication password.

Since the `AUTH` command, like every other Redis command, is sent unencrypted, it does not protect against an attacker that has enough access to the network to perform eavesdropping.

## TLS support

Redis has optional support for TLS on all communication channels, including client connections, replication links, and the Redis Cluster bus protocol.

## Disallowing specific commands

It is possible to disallow commands in Redis or to rename them into an unguessable name, so that normal clients are limited to a specified set of commands.

For instance, a virtualized server provider may offer a managed Redis instance service. In this context, normal users should probably not be able to call the Redis `CONFIG` command to alter the configuration of the instance, but the systems that provide and remove instances should be able to do so.

In this case, it is possible to either rename or completely shadow commands from the command table. This feature is available as a statement that can be used inside the redis.conf configuration file. For example:

    rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52

In the above example, the `CONFIG` command was renamed into an unguessable name. It is also possible to completely disallow it (or any other command) by renaming it to the empty string, like in the following example:

    rename-command CONFIG ""

## Attacks triggered by malicious inputs from external clients

There is a class of attacks that an attacker can trigger from the outside even without external access to the instance. For example, an attacker might insert data into Redis that triggers pathological (worst case) algorithm complexity on data structures implemented inside Redis internals.

An attacker could supply, via a web form, a set of strings that are known to hash to the same bucket in a hash table in order to turn the O(1) expected time (the average time) into the O(N) worst case. This can consume more CPU than expected and ultimately cause a Denial of Service.

To prevent this specific attack, Redis uses a per-execution, pseudo-random seed to the hash function.

Redis implements the SORT command using the qsort algorithm. Currently, the algorithm is not randomized, so it is possible to trigger a quadratic worst-case behavior by carefully selecting the right set of inputs.

## String escaping and NoSQL injection

The Redis protocol has no concept of string escaping, so injection is impossible under normal circumstances using a normal client library. The protocol uses prefixed-length strings and is completely binary safe.

Since Lua scripts executed by the `EVAL` and `EVALSHA` commands follow the same rules, those commands are also safe.

While it would be a strange use case, the application should avoid composing the body of the Lua script from strings obtained from untrusted sources.

## Code security

In a classical Redis setup, clients are allowed full access to the command set, but accessing the instance should never result in the ability to control the system where Redis is running.

Internally, Redis uses all the well known practices for writing secure code to prevent buffer overflows, format bugs, and other memory corruption issues. However, the ability to control the server configuration using the `CONFIG` command allows the client to
change the working directory of the program and the name of the dump file  This allows clients to write RDB Redis files to random paths  This is  a security issue  http   antirez com news 96  that may lead to the ability to compromise the system and or run untrusted code as the same user as Redis is running   Redis does not require root privileges to run  It is recommended to run it as an unprivileged  redis  user that is only used for this purpose      GPG key           BEGIN PGP PUBLIC KEY BLOCK       mQINBF9FWioBEADfBiOE iKpj2EF cJ KzFX jSBKa8SKrE 9RE0faVF6OYnqstL S5ox o yT45FdfFiRNDflKenjFbOmCbAdIys9Ta0iq6I9hs4sKfkNfNVlKZWtSVG W4lI6zO2Zyc2wLZonI Q32dDiXWNcCEsmajFcddukPevj9vKMTJZtF79P2SylEPq mUuhMy jOt7q1ibJCj5srtaureBH9662t4IJMFjsEe hiZ5v071UiQA6Tp7rxLqZ O6ZRzuamFP3xfy2Lz5NQ7QwnBH1ROabhJPoBOKCATCbfgFcM1Rj 9AOGfoDCOJKH 7yiEezMqr9VbDrEmYSmCO4KheqwC0T06lOLIQC4nnwKopNO PN21mirCLHvfo01O H NUG1LZifOwAURbiFNF8Z3 L0csdhD8JnO 1nphjDHr0Xn9Vff2Vej030pRI 9C SJ2s5fZUq8jK4n06sKCbqA4pekpbKyhRy3iuITKv7Nxesl4T uhkc9ccpAvbuD1E NczN1IH05jiMUMM3lC1A9TSvxSqflqI46TZU3qWLa9yg45kDC8Ryr39TY37LscQk 9x3WwLLkuHeUurnwAk46fSj7 FCKTGTdPVw8v7XbvNOTDf8vJ3o2PxX1uh2P2BHs 9L E1P96oMkiEy1ug7gu8V mKu5PAuD3QFzU3XCB93DpDakgtznRRXCkAQARAQAB tBtSZWRpcyBMYWJzIDxyZWRpc0ByZWRpcy5pbz6JAk4EEwEKADgWIQR5sNCo1OBf WO913l22qvOUq0evbgUCX0VaKgIbAwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAK CRC2qvOUq0evbpZaD 4rN7xesDcAG4ec895Fqzk3w74W1 K9lzRKZDwRsAqI sAz ZXvQMtWSxLfF2BITxLnHJXK5P 2Y6XlNgrn1GYwC1MsARyM9e1AzwDJHcXFkHU82 2aALIMXGtiZs ejFh9ZSs5cgRlxBSqot uxXm9AvKEByhmIeHPZse Rc6e3qa57v OhCkVZB4ETx5iZrgA gdmS8N7MXG0cEu5gJLacG57MHi 2WMOCU9Xfj6 Pqhw3qc E6lBinKcA LdgUJ1onK0JCnOG1YVHjuFtaisfPXvEmUBGaSGE6lM4J7lass OWps Dd oHCGI VOGNx6AiBDZG8mZacu0 7goRnOTdljJ93rKkj31I 6 j4xzkAC0IXW8 LAP9Mmo9TGx0L5CaljykhW6z RK3qd7dAYE i7e8J9PuQaGG5pjFzuW4vY45j0V  9JUMKDaGbU5choGqsCpAVtAMFfIBj3UQ5LCt5zKyescKCUb9uifOLeeQ1vay3R9o eRSD52YpRBpor0AyYxcLur pkHB0sSvXEfRZENQTohpY71rHSaFd3q1Hkk7lZl95 m24NRlrJnjFmeSPKP22vqUYIwoGNUF D38UzvqHD8ltTPgkZc Y 
RRbVNqkQYiwW GH DigNB8r2sdkt 1EUu YkYosxtzxpxxpYGKXYXx0uf EZmRqRt OSHKnf2GLkC DQRfRVoqARAApffsrDNo4JWjX3r6wHJJ8IpwnGEJ2IzGkg8f1Ofk2uKrjkII oIx sXC3EeauC1Plhs m9GP SPY0LXmZ0OzGD S1yMpmBeBuXJ0gONDo xCg1pKGshPs 75XzpbggSOtEYR5S8Z46yCu7TGJRXBMGBhDgCfPVFBBNsnG5B0EeHXM4trqqlN6d PAcwtLnKPz Z lloKR6bFXvYGuN5vjRXjcVYZLLCEwdV9iY5 Opqk9sCluasb3t  c2gcsLWWFnNz2desvb Y4ADJzxY Um848DSR8IcdoArSsqmcCTiYvYC UU7XPVNk Jrx HwgTVYiLGbtMB3u3fUpHW8SabdHc4xG3sx0LeIvl JwHgx7yVhNYJEyOQfnE mfS97x6surXgTVLbWVjXKIJhoWnWbLP4NkBc27H4qo8wM IWH4SSXYNzFLlCDPnw vQZSel21qxdqAWaSxkKcymfMS4nVDhVj0jhlcTY3aZcHMjqoUB07p5 laJr9CCGv 0Y0j0qT2aUO22A3kbv6H9c1Yjv8EI7eNz07aoH1oYU6ShsiaLfIqPfGYb7LwOFWi PSl0dCY7WJg2H6UHsV y2DwRr 3oH0a9hv cvcMneMi3tpIkRwYFBPXEsIcoD9xr RI5dp8BBdO Nt puoQq9oyialWnQK5 AY7ErW1yxjgie4PQ XtN 85UAEQEAAYkC NgQYAQoAIBYhBHmw0KjU4F9Y73XeXbaq85SrR69uBQJfRVoqAhsMAAoJELaq85Sr R69uoV0QAIvlxAHYTjvH1lt5KbpVGs5gwIAnCMPxmaOXcaZ8V0Z1GEU  IztwV N MYCBv1tYa7OppNs1pn75DhzoNAi XQOVvU0OZgVJutthZe0fNDFGG9B4i cxRscI Ld8TPQQNiZPBZ4ubcxbZyBinE9HsYUM49otHjsyFZ0GqTpyne zBf1GAQoekxlKo tWSkkmW0x4qW6eiAmyo5lPS1bBjvaSc67i 6Bv5QkZa0UIkRqAzKN4zVvc2FyILz  7wVLCzWcXrJt8dOeS6Y Fjbhb6m7dtapUSETAKu6wJvSd9ndDUjFHD33NQIZ nL WaPbn01 e PHtUDmyZ2W2KbcdlIT9nb2uHrruqdCN04sXkID8E2m2gYMA TjhC0Q JBJ9WPmdBeKH91R6wWDq6 HwOpgc 9na BHZXMG qyEcvNHB5RJdiu2r1Haf6gHi Fd6rJ6VzaVwnmKmUSKA2wHUuUJ6oxVJ1nFb7Aaschq8F79TAfee0iaGe9cP xUHL zBDKwZ9PtyGfdBp1qNOb94sfEasWPftT26rLgKPFcroCSR2QCK5qHsMNCZL u71w NnTtq9YZDRaQ2JAc6VDZCcgu dLiFxVIi1PFcJQ31rVe16 AQ9zsafiNsxkPdZcY U9XKndQE028dGZv1E3S5BwpnikrUkWdxcYrVZ4fiNIy5I3My2yCe  J9BD      END PGP PUBLIC KEY BLOCK         "}
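Taken together, the measures described above (loopback binding, protected mode, a long `requirepass` password, and command renaming) can be sketched as a single `redis.conf` fragment. This is an illustrative sketch only, not a recommended production configuration; the password value is a placeholder, and `redis.conf` does not support inline comments, so each comment sits on its own line:

```
# Hardening sketch for redis.conf (illustrative placeholder values)

# Listen on the loopback interface only:
bind 127.0.0.1

# Refuse queries from non-loopback addresses when no password is set:
protected-mode yes

# Legacy authentication; use a very long random password:
requirepass replace-with-a-very-long-random-password

# Rename CONFIG to an unguessable name:
rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52

# Disallow FLUSHALL entirely by renaming it to the empty string:
rename-command FLUSHALL ""
```

With ACLs available (Redis 6 and later), fine-grained per-user permissions are the preferred alternative to a single shared `requirepass` password.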
{"questions":"redis title Redis CPU profiling topics performance on cpu Performance engineering guide for on CPU profiling and tracing CPU profiling aliases docs reference optimization cpu profiling weight 1","answers":"---\ntitle: \"Redis CPU profiling\"\nlinkTitle: \"CPU profiling\"\nweight: 1\ndescription: >\n    Performance engineering guide for on-CPU profiling and tracing\naliases: [\n    \/topics\/performance-on-cpu,\n    \/docs\/reference\/optimization\/cpu-profiling\n]\n---\n\n## Filling the performance checklist\n\nRedis is developed with a great emphasis on performance. We do our best with\nevery release to make sure you'll experience a very stable and fast product. \n\nNevertheless, if you're finding room to improve the efficiency of Redis or\nare pursuing a performance regression investigation you will need a concise,\nmethodical way of monitoring and analyzing Redis performance. \n\nTo do so you can rely on different methodologies (some more suited than others,\ndepending on the class of issues\/analysis we intend to make). A curated list\nof methodologies and their steps is enumerated by Brendan Gregg at the\n[following link](http:\/\/www.brendangregg.com\/methodology.html). \n\nWe recommend the Utilization Saturation and Errors (USE) Method for answering\nthe question of what is your bottleneck. Check the following mapping between\nsystem resource, metric, and tools for a practical deep dive:\n[USE method](http:\/\/www.brendangregg.com\/USEmethod\/use-rosetta.html). \n\n### Ensuring the CPU is your bottleneck\n\nThis guide assumes you've followed one of the above methodologies to perform a \ncomplete check of system health, and identified the bottleneck being the CPU. \n**If you have identified that most of the time is spent blocked on I\/O, locks,\ntimers, paging\/swapping, etc., this guide is not for you**. 
\n\n### Build Prerequisites\n\nFor a proper On-CPU analysis, Redis (and any dynamically loaded library like\nRedis Modules) requires stack traces to be available to tracers, which you may\nneed to fix first. \n\nBy default, Redis is compiled with the `-O2` switch (which we intend to keep\nduring profiling). This means that compiler optimizations are enabled. Many\ncompilers omit the frame pointer as a runtime optimization (saving a register),\nthus breaking frame pointer-based stack walking. This makes the Redis\nexecutable faster, but at the same time it makes Redis (like any other program)\nharder to trace, potentially wrongfully pinpointing on-CPU time to the last\navailable frame pointer of a call stack that can get a lot deeper (but\nimpossible to trace).\n\nIt's important that you ensure that:\n- debug information is present: compile option `-g`\n- frame pointer register is present: `-fno-omit-frame-pointer`\n- we still run with optimizations to get an accurate representation of production run times, meaning we will keep: `-O2`\n\nYou can do it as follows within the Redis main repo:\n\n    $ make REDIS_CFLAGS=\"-g -fno-omit-frame-pointer\"\n\n## A set of instruments to identify performance regressions and\/or potential **on-CPU performance** improvements \n\nThis document focuses specifically on **on-CPU** resource bottlenecks analysis,\nmeaning we're interested in understanding where threads are spending CPU cycles\nwhile running on-CPU and, as importantly, whether those cycles are effectively\nbeing used for computation or stalled waiting (not blocked!) for memory I\/O,\nand cache misses, etc.\n\nFor that we will rely on toolkits (perf, bcc tools), and hardware specific PMCs\n(Performance Monitoring Counters), to proceed with:\n\n- Hotspot analysis (perf or bcc tools): to profile code execution and determine which functions are consuming the most time and thus are targets for optimization. 
We'll present two options to collect, report, and visualize hotspots either with perf or bcc\/BPF tracing tools.\n\n- Call counts analysis: to count events including function calls, enabling us to correlate several calls\/components at once, relying on bcc\/BPF tracing tools.\n\n- Hardware event sampling: crucial for understanding CPU behavior, including memory I\/O, stall cycles, and cache misses.\n\n### Tool prerequisites\n\nThe following steps rely on Linux perf_events (aka [\"perf\"](https:\/\/man7.org\/linux\/man-pages\/man1\/perf.1.html)), [bcc\/BPF tracing tools](https:\/\/github.com\/iovisor\/bcc), and Brendan Gregg\u2019s [FlameGraph repo](https:\/\/github.com\/brendangregg\/FlameGraph).\n\nWe assume beforehand you have:\n\n- Installed the perf tool on your system. Most Linux distributions will likely package this as a package related to the kernel. More information about the perf tool can be found at the perf [wiki](https:\/\/perf.wiki.kernel.org\/).\n- Followed the install [bcc\/BPF](https:\/\/github.com\/iovisor\/bcc\/blob\/master\/INSTALL.md#installing-bcc) instructions to install the bcc toolkit on your machine.\n- Cloned Brendan Gregg\u2019s [FlameGraph repo](https:\/\/github.com\/brendangregg\/FlameGraph) and made accessible the `difffolded.pl` and `flamegraph.pl` files, to generate the collapsed stack traces and Flame Graphs.\n\n## Hotspot analysis with perf or eBPF (stack traces sampling)\n\nProfiling CPU usage by sampling stack traces at a timed interval is a fast and\neasy way to identify performance-critical code sections (hotspots).\n\n### Sampling stack traces using perf\n\nTo profile both user- and kernel-level stacks of redis-server for a specific\nlength of time, for example 60 seconds, at a sampling frequency of 999 samples\nper second:\n\n    $ perf record -g --pid $(pgrep redis-server) -F 999 -- sleep 60\n\n#### Displaying the recorded profile information using perf report\n\nBy default perf record will generate a perf.data file in the current 
working\ndirectory. \n\nYou can then report with a call-graph output (call chain, stack backtrace),\nwith a minimum call graph inclusion threshold of 0.5%, with:\n\n    $ perf report -g \"graph,0.5,caller\"\n\nSee the [perf report](https:\/\/man7.org\/linux\/man-pages\/man1\/perf-report.1.html)\ndocumentation for advanced filtering, sorting and aggregation capabilities.\n\n#### Visualizing the recorded profile information using Flame Graphs\n\n[Flame graphs](http:\/\/www.brendangregg.com\/flamegraphs.html) allow for a quick\nand accurate visualization of frequent code-paths. They can be generated using\nBrendan Gregg's open source programs on [github](https:\/\/github.com\/brendangregg\/FlameGraph),\nwhich create interactive SVGs from folded stack files.\n\nSpecifically, for perf we need to convert the generated perf.data into the\ncaptured stacks, and fold each of them into single lines. You can then render\nthe on-CPU flame graph with:\n\n    $ perf script > redis.perf.stacks\n    $ stackcollapse-perf.pl redis.perf.stacks > redis.folded.stacks\n    $ flamegraph.pl redis.folded.stacks > redis.svg\n\nBy default, perf script reads the perf.data file in the current working\ndirectory. 
See the [perf script](https:\/\/linux.die.net\/man\/1\/perf-script)\ndocumentation for advanced usage.\n\nSee [FlameGraph usage options](https:\/\/github.com\/brendangregg\/FlameGraph#options)\nfor more advanced stack trace visualizations (like the differential one).\n\n#### Archiving and sharing recorded profile information\n\nSo that analysis of the perf.data contents can be possible on a machine other\nthan the one on which collection happened, you need to export along with the\nperf.data file all object files with build-ids found in the record data file.\nThis can be easily done with the help of the\n[perf-archive.sh](https:\/\/github.com\/torvalds\/linux\/blob\/master\/tools\/perf\/perf-archive.sh)\nscript:\n\n    $ perf-archive.sh perf.data\n\nNow please run:\n\n    $ tar xvf perf.data.tar.bz2 -C ~\/.debug\n\non the machine where you need to run `perf report`.\n\n### Sampling stack traces using bcc\/BPF's profile\n    \nSimilarly to perf, as of Linux kernel 4.9, BPF-optimized profiling is now fully\navailable with the promise of lower overhead on CPU (as stack traces are\nfrequency counted in kernel context) and disk I\/O resources during profiling. \n\nApart from that, and relying solely on bcc\/BPF's profile tool, we have also\nremoved the perf.data and intermediate steps if stack traces analysis is our\nmain goal. You can use bcc's profile tool to output folded format directly, for\nflame graph generation:\n\n    $ \/usr\/share\/bcc\/tools\/profile -F 999 -f --pid $(pgrep redis-server) --duration 60 > redis.folded.stacks\n\nIn that manner, we've removed any preprocessing and can render the on-CPU flame\ngraph with a single command:\n\n    $ flamegraph.pl redis.folded.stacks > redis.svg\n\n## Call counts analysis with bcc\/BPF\n\nA function may consume significant CPU cycles either because its code is slow\nor because it's frequently called. 
To answer at what rate functions are being\ncalled, you can rely upon call counts analysis using BCC's `funccount` tool:\n\n    $ \/usr\/share\/bcc\/tools\/funccount 'redis-server:(call*|*Read*|*Write*)' --pid $(pgrep redis-server) --duration 60\n    Tracing 64 functions for \"redis-server:(call*|*Read*|*Write*)\"... Hit Ctrl-C to end.\n\n    FUNC                                    COUNT\n    call                                      334\n    handleClientsWithPendingWrites            388\n    clientInstallWriteHandler                 388\n    postponeClientRead                        514\n    handleClientsWithPendingReadsUsingThreads      735\n    handleClientsWithPendingWritesUsingThreads      735\n    prepareClientToWrite                     1442\n    Detaching...\n\n\nThe above output shows that, while tracing, the Redis's call() function was\ncalled 334 times, handleClientsWithPendingWrites() 388 times, etc.\n\n## Hardware event counting with Performance Monitoring Counters (PMCs)\n\nMany modern processors contain a performance monitoring unit (PMU) exposing\nPerformance Monitoring Counters (PMCs). PMCs are crucial for understanding CPU\nbehavior, including memory I\/O, stall cycles, and cache misses, and provide\nlow-level CPU performance statistics that aren't available anywhere else.\n\nThe design and functionality of a PMU is CPU-specific and you should assess\nyour CPU supported counters and features by using `perf list`. 
\n\nTo calculate the number of instructions per cycle, the number of micro ops\nexecuted, the number of cycles during which no micro ops were dispatched, the\nnumber stalled cycles on memory, including a per memory type stalls, for the\nduration of 60s, specifically for redis process: \n\n    $ perf stat -e \"cpu-clock,cpu-cycles,instructions,uops_executed.core,uops_executed.stall_cycles,cache-references,cache-misses,cycle_activity.stalls_total,cycle_activity.stalls_mem_any,cycle_activity.stalls_l3_miss,cycle_activity.stalls_l2_miss,cycle_activity.stalls_l1d_miss\" --pid $(pgrep redis-server) -- sleep 60\n\n    Performance counter stats for process id '3038':\n\n      60046.411437      cpu-clock (msec)          #    1.001 CPUs utilized          \n      168991975443      cpu-cycles                #    2.814 GHz                      (36.40%)\n      388248178431      instructions              #    2.30  insn per cycle           (45.50%)\n      443134227322      uops_executed.core        # 7379.862 M\/sec                    (45.51%)\n       30317116399      uops_executed.stall_cycles #  504.895 M\/sec                    (45.51%)\n         670821512      cache-references          #   11.172 M\/sec                    (45.52%)\n          23727619      cache-misses              #    3.537 % of all cache refs      (45.43%)\n       30278479141      cycle_activity.stalls_total #  504.251 M\/sec                    (36.33%)\n       19981138777      cycle_activity.stalls_mem_any #  332.762 M\/sec                    (36.33%)\n         725708324      cycle_activity.stalls_l3_miss #   12.086 M\/sec                    (36.33%)\n        8487905659      cycle_activity.stalls_l2_miss #  141.356 M\/sec                    (36.32%)\n       10011909368      cycle_activity.stalls_l1d_miss #  166.736 M\/sec                    (36.31%)\n\n      60.002765665 seconds time elapsed\n\nIt's important to know that there are two very different ways in which PMCs can\nbe used (counting and sampling), 
and we've focused solely on PMCs counting for\nthe sake of this analysis. Brendan Gregg clearly explains it on the following\n[link](http:\/\/www.brendangregg.com\/blog\/2017-05-04\/the-pmcs-of-ec2.html).\n","site":"redis"}
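The perf-based hotspot workflow in the record above can be condensed into a single command sequence. This is a sketch of the steps already shown, assuming a running redis-server, perf installed, and Brendan Gregg's FlameGraph scripts (`stackcollapse-perf.pl`, `flamegraph.pl`) available on the PATH; it is not runnable without those prerequisites:

```
# 1. Rebuild Redis keeping -O2 but retaining frame pointers and debug info
make REDIS_CFLAGS="-g -fno-omit-frame-pointer"

# 2. Sample user- and kernel-level stacks at 999 Hz for 60 seconds
perf record -g --pid "$(pgrep redis-server)" -F 999 -- sleep 60

# 3. Convert, fold, and render the on-CPU flame graph
perf script > redis.perf.stacks
stackcollapse-perf.pl redis.perf.stacks > redis.folded.stacks
flamegraph.pl redis.folded.stacks > redis.svg
```

With bcc/BPF available, steps 2 and 3 collapse into a single `profile -F 999 -f` invocation that emits folded stacks directly, as the record notes.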
{"questions":"redis docs reference optimization latency monitor Latency monitoring topics latency monitor aliases Discovering slow server events in Redis weight 1 title Redis latency monitoring","answers":"---\ntitle: \"Redis latency monitoring\"\nlinkTitle: \"Latency monitoring\"\nweight: 1\ndescription: Discovering slow server events in Redis\naliases: [\n    \/topics\/latency-monitor,\n    \/docs\/reference\/optimization\/latency-monitor\n]\n---\n\nRedis is often used for demanding use cases, where it\nserves a large number of queries per second per instance, but also has strict latency requirements for the average response\ntime and the worst-case latency.\n\nWhile Redis is an in-memory system, it deals with the operating system in\ndifferent ways, for example, in the context of persisting to disk.\nMoreover Redis implements a rich set of commands. Certain commands\nare fast and run in constant or logarithmic time. Other commands are slower\nO(N) commands that can cause latency spikes.\n\nFinally, Redis is single threaded. This is usually an advantage\nfrom the point of view of the amount of work it can perform per core, and in\nthe latency figures it is able to provide. However, it poses\na challenge for latency, since the single\nthread must be able to perform certain tasks incrementally, for\nexample key expiration, in a way that does not impact the other clients\nthat are served.\n\nFor all these reasons, Redis 2.8.13 introduced a new feature called\n**Latency Monitoring**, that helps the user to check and troubleshoot possible\nlatency problems. 
Latency monitoring is composed of the following conceptual\nparts:\n\n* Latency hooks that sample different latency-sensitive code paths.\n* Time series recording of latency spikes, split by different events.\n* Reporting engine to fetch raw data from the time series.\n* Analysis engine to provide human-readable reports and hints according to the measurements.\n\nThe rest of this document covers the latency monitoring subsystem\ndetails. For more information about the general topic of Redis\nand latency, see [Redis latency problems troubleshooting](\/topics\/latency).\n\n## Events and time series\n\nDifferent monitored code paths have different names and are called *events*.\nFor example, `command` is an event that measures latency spikes of possibly slow\ncommand executions, while `fast-command` is the event name for the monitoring\nof the O(1) and O(log N) commands. Other events are less generic and monitor\nspecific operations performed by Redis. For example, the `fork` event\nonly monitors the time taken by Redis to execute the `fork(2)` system call.\n\nA latency spike is an event that takes more time to run than the configured latency\nthreshold. There is a separate time series associated with every monitored\nevent. This is how the time series work:\n\n* Every time a latency spike happens, it is logged in the appropriate time series.\n* Every time series is composed of 160 elements.\n* Each element is a pair made of a Unix timestamp of the time the latency spike was measured and the number of milliseconds the event took to execute.\n* Latency spikes for the same event that occur in the same second are merged by taking the maximum latency. 
Even if continuous latency spikes are measured for a given event, which could happen with a low threshold, at least 160 seconds of history are available.\n* The all-time maximum latency is also recorded for every event.\n\nThe framework monitors and logs latency spikes in the execution time of these events:\n\n* `command`: regular commands.\n* `fast-command`: O(1) and O(log N) commands.\n* `fork`: the `fork(2)` system call.\n* `rdb-unlink-temp-file`: the `unlink(2)` system call.\n* `aof-fsync-always`: the `fsync(2)` system call when invoked by the `appendfsync always` policy.\n* `aof-write`: writing to the AOF - a catchall event for `write(2)` system calls.\n* `aof-write-pending-fsync`: the `write(2)` system call when there is a pending fsync.\n* `aof-write-active-child`: the `write(2)` system call when there are active child processes.\n* `aof-write-alone`: the `write(2)` system call when there is no pending fsync and no active child process.\n* `aof-fstat`: the `fstat(2)` system call.\n* `aof-rename`: the `rename(2)` system call for renaming the temporary file after completing `BGREWRITEAOF`.\n* `aof-rewrite-diff-write`: writing the differences accumulated while performing `BGREWRITEAOF`.\n* `active-defrag-cycle`: the active defragmentation cycle.\n* `expire-cycle`: the expiration cycle.\n* `eviction-cycle`: the eviction cycle.\n* `eviction-del`: deletes during the eviction cycle.\n\n## How to enable latency monitoring\n\nWhat is high latency for one use case may not be considered high latency for another. Some applications may require that all queries be served in less than 1 millisecond. For other applications, it may be acceptable for a small number of clients to experience a 2-second latency on occasion.\n\nThe first step to enable the latency monitor is to set a **latency threshold** in milliseconds. Only events that take longer than the specified threshold will be logged as latency spikes. The user should set the threshold according to their needs. 
For example, if the application requires a maximum acceptable latency of 100 milliseconds, the threshold should be set to log all the events blocking the server for a time equal or greater to 100 milliseconds.\n\nEnable the latency monitor at runtime in a production server\nwith the following command:\n\n    CONFIG SET latency-monitor-threshold 100\n\nMonitoring is turned off by default (threshold set to 0), even if the actual cost of latency monitoring is near zero. While the memory requirements of latency monitoring are very small, there is no good reason to raise the baseline memory usage of a Redis instance that is working well.\n\n## Report information with the LATENCY command\n\nThe user interface to the latency monitoring subsystem is the `LATENCY` command.\nLike many other Redis commands, `LATENCY` accepts subcommands that modify its behavior. These subcommands are:\n\n* `LATENCY LATEST` - returns the latest latency samples for all events.\n* `LATENCY HISTORY` - returns latency time series for a given event.\n* `LATENCY RESET` - resets latency time series data for one or more events.\n* `LATENCY GRAPH` - renders an ASCII-art graph of an event's latency samples.\n* `LATENCY DOCTOR` - replies with a human-readable latency analysis report.\n\nRefer to each subcommand's documentation page for further information.","site":"redis"}
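The merge-by-second and bounded-history rules described above can be illustrated with a small sketch. This is a toy Python model, not Redis's actual C implementation; the `LatencyTimeSeries` class and its method names are invented for illustration:

```python
import collections

class LatencyTimeSeries:
    """Toy model of the per-event latency time series described above:
    160 elements, same-second spikes merged by max, all-time max kept."""
    MAX_SAMPLES = 160

    def __init__(self):
        # Each element is a (unix_timestamp, latency_ms) pair; the deque
        # drops the oldest sample once 160 elements are stored.
        self.samples = collections.deque(maxlen=self.MAX_SAMPLES)
        self.all_time_max_ms = 0

    def record(self, unix_ts, latency_ms):
        self.all_time_max_ms = max(self.all_time_max_ms, latency_ms)
        if self.samples and self.samples[-1][0] == unix_ts:
            # Spikes for the same event in the same second are merged
            # by taking the maximum latency.
            _, prev_ms = self.samples.pop()
            latency_ms = max(prev_ms, latency_ms)
        self.samples.append((unix_ts, latency_ms))

series = LatencyTimeSeries()
series.record(1000, 120)
series.record(1000, 300)   # same second: merged, keeping max(120, 300)
series.record(1001, 150)
print(len(series.samples), series.samples[0], series.all_time_max_ms)
# → 2 (1000, 300) 300
```

Because each entry covers at most one second, even under a continuous stream of spikes the 160-element buffer preserves at least 160 seconds of history, matching the guarantee stated above.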
{"questions":"redis Latency diagnosis aliases Finding the causes of slow responses docs reference optimization latency topics latency title Diagnosing latency issues weight 1","answers":"---\ntitle: \"Diagnosing latency issues\"\nlinkTitle: \"Latency diagnosis\"\nweight: 1\ndescription: Finding the causes of slow responses\naliases: [\n    \/topics\/latency,\n    \/docs\/reference\/optimization\/latency\n]\n---\n\nThis document will help you understand what the problem could be if you\nare experiencing latency problems with Redis.\n\nIn this context *latency* is the maximum delay between the time a client\nissues a command and the time the reply to the command is received by the\nclient. Usually Redis processing time is extremely low, in the sub microsecond\nrange, but there are certain conditions leading to higher latency figures.\n\nI've little time, give me the checklist\n---\n\nThe following documentation is very important in order to run Redis in\na low latency fashion. However I understand that we are busy people, so\nlet's start with a quick checklist. If you fail following these steps, please\nreturn here to read the full documentation.\n\n1. Make sure you are not running slow commands that are blocking the server. Use the Redis [Slow Log feature](\/commands\/slowlog) to check this.\n2. For EC2 users, make sure you use HVM based modern EC2 instances, like m3.medium. Otherwise fork() is too slow.\n3. Transparent huge pages must be disabled from your kernel. Use `echo never > \/sys\/kernel\/mm\/transparent_hugepage\/enabled` to disable them, and restart your Redis process.\n4. If you are using a virtual machine, it is possible that you have an intrinsic latency that has nothing to do with Redis. Check the minimum latency you can expect from your runtime environment using `.\/redis-cli --intrinsic-latency 100`. Note: you need to run this command in *the server* not in the client.\n5. 
Enable and use the [Latency monitor](\/topics\/latency-monitor) feature of Redis in order to get a human-readable description of the latency events and causes in your Redis instance.\n\nIn general, use the following table for durability vs. latency\/performance tradeoffs, ordered from stronger safety to better latency.\n\n1. AOF + fsync always: this is very slow, you should use it only if you know what you are doing.\n2. AOF + fsync every second: this is a good compromise.\n3. AOF + fsync every second + no-appendfsync-on-rewrite option set to yes: this is like the above, but avoids fsyncing during rewrites, lowering the disk pressure.\n4. AOF + fsync never. Fsyncing is up to the kernel in this setup, with even less disk pressure and risk of latency spikes.\n5. RDB. Here you have a vast spectrum of tradeoffs depending on the save triggers you configure.\n\nAnd now for people with 15 minutes to spend, the details...\n\nMeasuring latency\n-----------------\n\nIf you are experiencing latency problems, you probably know how to measure\nit in the context of your application, or maybe your latency problem is very\nevident even macroscopically. However, redis-cli can be used to measure the\nlatency of a Redis server in milliseconds, just try:\n\n    redis-cli --latency -h `host` -p `port`\n\nUsing the internal Redis latency monitoring subsystem\n---\n\nSince Redis 2.8.13, Redis provides latency monitoring capabilities that\nare able to sample different execution paths to understand where the\nserver is blocking. This makes debugging of the problems illustrated in\nthis documentation much simpler, so we suggest enabling latency monitoring\nASAP. 
Please refer to the [Latency monitor documentation](\/topics\/latency-monitor).\n\nWhile the latency monitoring sampling and reporting capabilities will make\nit simpler to understand the source of latency in your Redis system, it is still\nadvised that you read this documentation extensively to better understand\nthe topic of Redis and latency spikes.\n\nLatency baseline\n----------------\n\nThere is a kind of latency that is inherently part of the environment where\nyou run Redis, that is the latency provided by your operating system kernel\nand, if you are using virtualization, by the hypervisor you are using.\n\nWhile this latency can't be removed it is important to study it because\nit is the baseline, or in other words, you won't be able to achieve a Redis\nlatency that is better than the latency that every process running in your\nenvironment will experience because of the kernel or hypervisor implementation\nor setup.\n\nWe call this kind of latency **intrinsic latency**, and `redis-cli` starting\nfrom Redis version 2.8.7 is able to measure it. This is an example run\nunder Linux 3.11.0 running on an entry level server.\n\nNote: the argument `100` is the number of seconds the test will be executed.\nThe more time we run the test, the more likely we'll be able to spot\nlatency spikes. 100 seconds is usually appropriate, however you may want\nto perform a few runs at different times. Please note that the test is CPU\nintensive and will likely saturate a single core in your system.\n\n    $ .\/redis-cli --intrinsic-latency 100\n    Max latency so far: 1 microseconds.\n    Max latency so far: 16 microseconds.\n    Max latency so far: 50 microseconds.\n    Max latency so far: 53 microseconds.\n    Max latency so far: 83 microseconds.\n    Max latency so far: 115 microseconds.\n\nNote: redis-cli in this special case needs to **run in the server** where you run or plan to run Redis, not in the client. 
In this special mode redis-cli does not connect to a Redis server at all: it will just try to measure the largest interval during which the kernel does not provide CPU time to the redis-cli process itself.\n\nIn the above example, the intrinsic latency of the system is just 0.115\nmilliseconds (or 115 microseconds), which is good news; however, keep in mind\nthat the intrinsic latency may change over time depending on the load of the\nsystem.\n\nVirtualized environments will not show such good numbers, especially with high\nload or if there are noisy neighbors. The following is a run on a Linode 4096\ninstance running Redis and Apache:\n\n    $ .\/redis-cli --intrinsic-latency 100\n    Max latency so far: 573 microseconds.\n    Max latency so far: 695 microseconds.\n    Max latency so far: 919 microseconds.\n    Max latency so far: 1606 microseconds.\n    Max latency so far: 3191 microseconds.\n    Max latency so far: 9243 microseconds.\n    Max latency so far: 9671 microseconds.\n\nHere we have an intrinsic latency of 9.7 milliseconds: this means that we can't expect better than that from Redis. However, other runs at different times in different virtualization environments with higher load or with noisy neighbors can easily show even worse values. We were able to measure up to 40 milliseconds in\nsystems otherwise apparently running normally.\n\nLatency induced by network and communication\n--------------------------------------------\n\nClients connect to Redis using a TCP\/IP connection or a Unix domain connection.\nThe typical latency of a 1 Gbit\/s network is about 200 us, while the latency\nwith a Unix domain socket can be as low as 30 us. It actually depends on your\nnetwork and system hardware. On top of the communication itself, the system\nadds some more latency (due to thread scheduling, CPU caches, NUMA placement,\netc ...). 
System-induced latencies are significantly higher in a virtualized\nenvironment than on a physical machine.\n\nThe consequence is that even if Redis processes most commands in the sub-microsecond\nrange, a client performing many roundtrips to the server will have to pay\nfor these network and system related latencies.\n\nAn efficient client will therefore try to limit the number of roundtrips by\npipelining several commands together. This is fully supported by the servers\nand most clients. Aggregated commands like MSET\/MGET can also be used for\nthat purpose. Starting with Redis 2.4, a number of commands also support\nvariadic parameters for all data types.\n\nHere are some guidelines:\n\n+ If you can afford it, prefer a physical machine over a VM to host the server.\n+ Do not systematically connect\/disconnect to the server (especially true\n  for web based applications). Keep your connections as long lived as possible.\n+ If your client is on the same host as the server, use Unix domain sockets.\n+ Prefer to use aggregated commands (MSET\/MGET), or commands with variadic\n  parameters (if possible) over pipelining.\n+ Prefer to use pipelining (if possible) over a sequence of roundtrips.\n+ Redis supports Lua server-side scripting to cover cases that are not suitable\n  for raw pipelining (for instance when the result of a command is an input for\n  the following commands).\n\nOn Linux, some people can achieve better latencies by playing with process\nplacement (taskset), cgroups, real-time priorities (chrt), NUMA\nconfiguration (numactl), or by using a low-latency kernel. Please note\nvanilla Redis is not really suitable to be bound on a **single** CPU core.\nRedis can fork background tasks that can be extremely CPU-consuming\nlike `BGSAVE` or `BGREWRITEAOF`. 
These tasks must **never** run on the same core\nas the main event loop.\n\nIn most situations, these kinds of system-level optimizations are not needed.\nOnly do them if you require them, and if you are familiar with them.\n\nSingle threaded nature of Redis\n-------------------------------\n\nRedis uses a *mostly* single-threaded design. This means that a single process\nserves all the client requests, using a technique called **multiplexing**.\nThis means that Redis can serve a single request in every given moment, so\nall the requests are served sequentially. This is very similar to how Node.js\nworks as well. However, both products are not often perceived as being slow.\nThis is caused in part by the small amount of time to complete a single request,\nbut primarily because these products are designed to not block on system calls,\nsuch as reading data from or writing data to a socket.\n\nI said that Redis is *mostly* single-threaded since actually from Redis 2.4\nwe use threads in Redis in order to perform some slow I\/O operations in the\nbackground, mainly related to disk I\/O, but this does not change the fact\nthat Redis serves all the requests using a single thread.\n\nLatency generated by slow commands\n----------------------------------\n\nA consequence of being single-threaded is that when a request is slow to serve,\nall the other clients will wait for this request to be served. When executing\nnormal commands, like `GET` or `SET` or `LPUSH` this is not a problem\nat all since these commands are executed in constant (and very small) time.\nHowever, there are commands operating on many elements, like `SORT`, `LREM`,\n`SUNION` and others. For instance, taking the intersection of two big sets\ncan take a considerable amount of time.\n\nThe algorithmic complexity of all commands is documented. 
A good practice\nis to systematically check it when using commands you are not familiar with.\n\nIf you have latency concerns you should either not use slow commands against\nvalues composed of many elements, or you should run a replica using Redis\nreplication where you run all your slow queries.\n\nIt is possible to monitor slow commands using the Redis\n[Slow Log feature](\/commands\/slowlog).\n\nAdditionally, you can use your favorite per-process monitoring program\n(top, htop, prstat, etc ...) to quickly check the CPU consumption of the\nmain Redis process. If it is high while the traffic is not, it is usually\na sign that slow commands are used.\n\n**IMPORTANT NOTE**: a VERY common source of latency generated by the execution\nof slow commands is the use of the `KEYS` command in production environments.\n`KEYS`, as documented in the Redis documentation, should only be used for\ndebugging purposes. Since Redis 2.8, new commands have been introduced to\niterate the key space and other large collections incrementally; please check\nthe `SCAN`, `SSCAN`, `HSCAN` and `ZSCAN` commands for more information.\n\nLatency generated by fork\n-------------------------\n\nIn order to generate the RDB file in the background, or to rewrite the Append Only File if AOF persistence is enabled, Redis has to fork background processes.\nThe fork operation (running in the main thread) can induce latency by itself.\n\nForking is an expensive operation on most Unix-like systems, since it involves\ncopying a good number of objects linked to the process. This is especially\ntrue for the page table associated with the virtual memory mechanism.\n\nFor instance, on a Linux\/AMD64 system, the memory is divided into 4 kB pages.\nTo convert virtual addresses to physical addresses, each process stores\na page table (actually represented as a tree) containing at least a pointer\nper page of the address space of the process. 
So a large 24 GB Redis instance\nrequires a page table of 24 GB \/ 4 kB * 8 = 48 MB.\n\nWhen a background save is performed, this instance will have to be forked,\nwhich will involve allocating and copying 48 MB of memory. It takes time\nand CPU, especially on virtual machines where allocation and initialization\nof a large memory chunk can be expensive.\n\nFork time in different systems\n------------------------------\n\nModern hardware is pretty fast at copying the page table, but Xen is not.\nThe problem with Xen is not virtualization-specific, but Xen-specific. For instance, using VMware or Virtual Box does not result in slow fork times.\nThe following is a table that compares fork time for different Redis instance\nsizes. Data is obtained performing a BGSAVE and looking at the `latest_fork_usec` field in the `INFO` command output.\n\nHowever, the good news is that **new types of EC2 HVM based instances are much\nbetter with fork times**, almost on par with physical servers, so for example\nusing m3.medium (or better) instances will provide good results.\n\n* **Linux beefy VM on VMware** 6.0GB RSS forked in 77 milliseconds (12.8 milliseconds per GB).\n* **Linux running on physical machine (Unknown HW)** 6.1GB RSS forked in 80 milliseconds (13.1 milliseconds per GB).\n* **Linux running on physical machine (Xeon @ 2.27GHz)** 6.9GB RSS forked in 62 milliseconds (9 milliseconds per GB).\n* **Linux VM on 6sync (KVM)** 360 MB RSS forked in 8.2 milliseconds (23.3 milliseconds per GB).\n* **Linux VM on EC2, old instance types (Xen)** 6.1GB RSS forked in 1460 milliseconds (239.3 milliseconds per GB).\n* **Linux VM on EC2, new instance types (Xen)** 1GB RSS forked in 10 milliseconds (10 milliseconds per GB).\n* **Linux VM on Linode (Xen)** 0.9GB RSS forked in 382 milliseconds (424 milliseconds per GB).\n\nAs you can see, certain VMs running on Xen have a performance hit of between one and two orders of magnitude. 
For EC2 users the suggestion is simple: use modern HVM based instances.\n\nLatency induced by transparent huge pages\n-----------------------------------------\n\nUnfortunately, when a Linux kernel has transparent huge pages enabled, Redis\nincurs a big latency penalty after the `fork` call is used in order to\npersist on disk. Huge pages are the cause of the following issue:\n\n1. Fork is called, two processes with shared huge pages are created.\n2. In a busy instance, a few event loop runs will cause commands to target a few thousand pages, causing the copy-on-write of almost the whole process memory.\n3. This will result in big latency and big memory usage.\n\nMake sure to **disable transparent huge pages** using the following command:\n\n    echo never > \/sys\/kernel\/mm\/transparent_hugepage\/enabled\n\nLatency induced by swapping (operating system paging)\n-----------------------------------------------------\n\nLinux (and many other modern operating systems) is able to relocate memory\npages from the memory to the disk, and vice versa, in order to use the\nsystem memory efficiently.\n\nIf a Redis page is moved by the kernel from the memory to the swap file, when\nthe data stored in this memory page is used by Redis (for example accessing\na key stored in this memory page) the kernel will stop the Redis process\nin order to move the page back into the main memory. This is a slow operation\ninvolving random I\/Os (compared to accessing a page that is already in memory)\nand will result in anomalous latency experienced by Redis clients.\n\nThe kernel relocates Redis memory pages on disk mainly for three reasons:\n\n* The system is under memory pressure since the running processes are demanding\nmore physical memory than the amount that is available. 
The simplest instance of\nthis problem is simply Redis using more memory than is available.\n* The Redis instance data set, or part of the data set, is mostly or completely idle\n(never accessed by clients), so the kernel could swap idle memory pages on disk.\nThis problem is very rare since even a moderately slow instance will touch all\nthe memory pages often, forcing the kernel to retain all the pages in memory.\n* Some processes are generating massive read or write I\/Os on the system. Because\nfiles are generally cached, it tends to put pressure on the kernel to increase\nthe filesystem cache, and therefore generate swapping activity. Please note it\nincludes Redis RDB and\/or AOF background threads which can produce large files.\n\nFortunately, Linux offers good tools to investigate the problem, so when latency\ndue to swapping is suspected, the simplest thing to do is to check whether\nthis is the case.\n\nThe first thing to do is to check the amount of Redis memory that is swapped\non disk. In order to do so, you need to obtain the Redis instance pid:\n\n    $ redis-cli info | grep process_id\n    process_id:5454\n\nNow enter the \/proc file system directory for this process:\n\n    $ cd \/proc\/5454\n\nHere you'll find a file called **smaps** that describes the memory layout of\nthe Redis process (assuming you are using Linux 2.6.16 or newer).\nThis file contains very detailed information about our process memory maps,\nand one field called **Swap** is exactly what we are looking for. 
However\nthere is not just a single swap field since the smaps file contains the\ndifferent memory maps of our Redis process (The memory layout of a process\nis more complex than a simple linear array of pages).\n\nSince we are interested in all the memory swapped by our process the first thing\nto do is to grep for the Swap field across all the file:\n\n    $ cat smaps | grep 'Swap:'\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                 12 kB\n    Swap:                156 kB\n    Swap:                  8 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  4 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  4 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  4 kB\n    Swap:                  4 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n    Swap:                  0 kB\n\nIf everything is 0 kB, or if there are sporadic 4k entries, everything is\nperfectly normal. Actually in our example instance (the one of a real web\nsite running Redis and serving hundreds of users every second) there are a\nfew entries that show more swapped pages. 
To investigate if this is a serious\nproblem or not we change our command in order to also print the size of the\nmemory map:\n\n    $ cat smaps | egrep '^(Swap|Size)'\n    Size:                316 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:                  0 kB\n    Size:                  8 kB\n    Swap:                  0 kB\n    Size:                 40 kB\n    Swap:                  0 kB\n    Size:                132 kB\n    Swap:                  0 kB\n    Size:             720896 kB\n    Swap:                 12 kB\n    Size:               4096 kB\n    Swap:                156 kB\n    Size:               4096 kB\n    Swap:                  8 kB\n    Size:               4096 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:                  0 kB\n    Size:               1272 kB\n    Swap:                  0 kB\n    Size:                  8 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:                  0 kB\n    Size:                 16 kB\n    Swap:                  0 kB\n    Size:                 84 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:                  0 kB\n    Size:                  8 kB\n    Swap:                  4 kB\n    Size:                  8 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:                  4 kB\n    Size:                144 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:                  4 kB\n    Size:                 12 kB\n    Swap:                  4 kB\n    Size:                108 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:       
           0 kB\n    Size:                272 kB\n    Swap:                  0 kB\n    Size:                  4 kB\n    Swap:                  0 kB\n\nAs you can see from the output, there is a map of 720896 kB\n(with just 12 kB swapped) and 156 kB more swapped in another map:\nbasically a very small amount of our memory is swapped so this is not\ngoing to create any problem at all.\n\nIf instead a non trivial amount of the process memory is swapped on disk your\nlatency problems are likely related to swapping. If this is the case with your\nRedis instance you can further verify it using the **vmstat** command:\n\n    $ vmstat 1\n    procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa\n     0  0   3980 697932 147180 1406456    0    0     2     2    2    0  4  4 91  0\n     0  0   3980 697428 147180 1406580    0    0     0     0 19088 16104  9  6 84  0\n     0  0   3980 697296 147180 1406616    0    0     0    28 18936 16193  7  6 87  0\n     0  0   3980 697048 147180 1406640    0    0     0     0 18613 15987  6  6 88  0\n     2  0   3980 696924 147180 1406656    0    0     0     0 18744 16299  6  5 88  0\n     0  0   3980 697048 147180 1406688    0    0     0     4 18520 15974  6  6 88  0\n    ^C\n\nThe interesting part of the output for our needs are the two columns **si**\nand **so**, that counts the amount of memory swapped from\/to the swap file. 
If you see non-zero counts in those two columns then there is swapping activity
in your system.

Finally, the **iostat** command can be used to check the global I/O activity of
the system.

    $ iostat -xk 1
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
              13.55    0.04    2.92    0.53    0.00   82.95

    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
    sda               0.77     0.00    0.01    0.00     0.40     0.00    73.65     0.00    3.62   2.58   0.00
    sdb               1.27     4.75    0.82    3.54    38.00    32.32    32.19     0.11   24.80   4.24   1.85

If your latency problem is due to Redis memory being swapped on disk, you need
to lower the memory pressure in your system, either by adding more RAM if Redis
is using more memory than is available, or by avoiding running other
memory-hungry processes on the same system.

Latency due to AOF and disk I/O
-------------------------------

Another source of latency is the Append Only File (AOF) support in Redis.
The AOF basically uses two system calls to accomplish its work: write(2),
used to write data to the append-only file, and fdatasync(2), used to flush
the kernel file buffers to disk in order to ensure the durability level
specified by the user.

Both the write(2) and fdatasync(2) calls can be a source of latency.
For instance, write(2) can block when there is a system-wide sync
in progress, or when the output buffers are full and the kernel requires
a flush to disk in order to accept new writes.

The fdatasync(2) call is a worse source of latency: with many combinations
of kernels and file systems it can take from a few milliseconds to
a few seconds to complete, especially when some other process is
doing I/O.
For this reason, since Redis 2.4, Redis performs the fdatasync(2) call
in a different thread when possible.

We'll see how configuration can affect the amount and source of latency
when using the AOF file.

The AOF can be configured to perform an fsync on disk in three different
ways using the **appendfsync** configuration option (this setting can be
modified at runtime using the **CONFIG SET** command).

* When appendfsync is set to the value of **no**, Redis performs no fsync.
In this configuration the only possible source of latency is write(2).
When write(2) blocks there is usually no remedy, since the disk simply cannot
cope with the speed at which Redis is receiving data; however this is
uncommon if the disk is not seriously slowed down by other processes doing
I/O.

* When appendfsync is set to the value of **everysec**, Redis performs an
fsync every second. It uses a different thread, and if the fsync is still
in progress Redis uses a buffer to delay the write(2) call for up to two seconds
(since write would block on Linux if an fsync is in progress against the
same file). However, if the fsync is taking too long, Redis will eventually
perform the write(2) call even while the fsync is still in progress, and this
can be a source of latency.

* When appendfsync is set to the value of **always**, an fsync is performed
at every write operation before replying back to the client with an OK code
(actually Redis will try to cluster many commands executed at the same time
into a single fsync). In this mode performance is very low in general, and
it is strongly recommended to use a fast disk and a file system implementation
that can perform the fsync in a short time.

Most Redis users will use either the **no** or **everysec** setting for the
appendfsync configuration directive.
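Either setting can be applied to a running instance without a restart, for example from redis-cli:

    CONFIG SET appendfsync everysec
    CONFIG GET appendfsync
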
The suggestion for minimum latency is
to avoid other processes doing I/O on the same system.
Using an SSD disk can help as well, but usually even non-SSD disks perform
well with the append-only file if the disk is otherwise idle, since Redis
writes to the append-only file without performing any seek.

If you want to investigate your latency issues related to the append-only
file you can use the strace command under Linux:

    sudo strace -p $(pidof redis-server) -T -e trace=fdatasync

The above command will show all the fdatasync(2) system calls performed by
Redis in the main thread. With the above command you'll not see the
fdatasync system calls performed by the background thread when the
appendfsync config option is set to **everysec**. To see those as well,
just add the -f switch to strace.

If you wish you can also see both the fdatasync and write system calls with the
following command:

    sudo strace -p $(pidof redis-server) -T -e trace=fdatasync,write

However, since write(2) is also used to write data to the client
sockets, this will likely show too many things unrelated to disk I/O.
Apparently there is no way to tell strace to just show slow system calls, so
I use the following command:

    sudo strace -f -p $(pidof redis-server) -T -e trace=fdatasync,write 2>&1 | grep -v '0.0' | grep -v unfinished

Latency generated by expires
----------------------------

Redis evicts expired keys in two ways:

+ A *lazy* way expires a key when it is requested by a command and is found to be already expired.
+ An *active* way expires a few keys every 100 milliseconds.

The active expiring is designed to be adaptive.
An expire cycle is started every 100 milliseconds (10 times per second), and will do the following:

+ Sample `ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP` keys, evicting all the keys already expired.
+ If more than 25% of the sampled keys were found to be expired, repeat.

Given that `ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP` is set to 20 by default, and the process is performed ten times per second, usually just 200 keys per second are actively expired. This is enough to clean the DB fast enough even when already expired keys are not accessed for a long time, so that the *lazy* algorithm does not help. At the same time, expiring just 200 keys per second has no effect on the latency of a Redis instance.

However the algorithm is adaptive and will loop if it finds more than 25% of the sampled keys already expired. Given that we run the algorithm ten times per second, the unlucky event is more than 25% of the keys in our random sample expiring *in the same second*.

Basically this means that **if the database has many, many keys expiring in the same second, and these make up at least 25% of the current population of keys with an expire set**, Redis can block in order to get the percentage of keys already expired below 25%.

This approach is needed in order to avoid using too much memory for keys that are already expired, and it is usually absolutely harmless since it is unusual for a large number of keys to expire in the same exact second; still, it is not impossible that the user used `EXPIREAT` extensively with the same Unix time.

In short: be aware that many keys expiring at the same moment can be a source of latency.

Redis software watchdog
---

Redis 2.6 introduced the *Redis Software Watchdog*, a debugging tool
designed to track those latency problems that, for one reason or another,
escaped an analysis using normal tools.

The software watchdog is an experimental feature.
While it is designed to
be used in production environments, care should be taken to back up the database
before proceeding, as it could possibly have unexpected interactions with the
normal execution of the Redis server.

It is important to use it only as a *last resort* when there is no way to track the issue by other means.

This is how this feature works:

* The user enables the software watchdog using the `CONFIG SET` command.
* Redis starts monitoring itself constantly.
* If Redis detects that the server is blocked in some operation that is not returning fast enough, and that may be the source of the latency issue, a low-level report about where the server is blocked is dumped on the log file.
* The user contacts the developers, writing a message in the Redis Google Group and including the watchdog report in the message.

Note that this feature cannot be enabled using the redis.conf file, because it is designed to be enabled only in already running instances and only for debugging purposes.

To enable the feature just use the following:

    CONFIG SET watchdog-period 500

The period is specified in milliseconds. In the above example I specified to log latency issues only if the server detects a delay of 500 milliseconds or greater. The minimum configurable period is 200 milliseconds.

When you are done with the software watchdog you can turn it off by setting the `watchdog-period` parameter to 0.
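That is:

    CONFIG SET watchdog-period 0
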
**Important:** remember to do this because keeping the instance with the watchdog turned on for a longer time than needed is generally not a good idea.\n\nThe following is an example of what you'll see printed in the log file once the software watchdog detects a delay longer than the configured one:\n\n    [8547 | signal handler] (1333114359)\n    --- WATCHDOG TIMER EXPIRED ---\n    \/lib\/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d]\n    \/lib\/libpthread.so.0(+0xf8f0) [0x7f16b5f158f0]\n    \/lib\/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d]\n    \/lib\/libc.so.6(usleep+0x34) [0x7f16b5c62844]\n    .\/redis-server(debugCommand+0x3e1) [0x43ab41]\n    .\/redis-server(call+0x5d) [0x415a9d]\n    .\/redis-server(processCommand+0x375) [0x415fc5]\n    .\/redis-server(processInputBuffer+0x4f) [0x4203cf]\n    .\/redis-server(readQueryFromClient+0xa0) [0x4204e0]\n    .\/redis-server(aeProcessEvents+0x128) [0x411b48]\n    .\/redis-server(aeMain+0x2b) [0x411dbb]\n    .\/redis-server(main+0x2b6) [0x418556]\n    \/lib\/libc.so.6(__libc_start_main+0xfd) [0x7f16b5ba1c4d]\n    .\/redis-server() [0x411099]\n    ------\n\nNote: in the example the **DEBUG SLEEP** command was used in order to block the server. 
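If you want to see a similar report from a disposable test instance (never a production one), the same kind of blockage can be reproduced by hand, assuming redis-cli can reach the instance:

    redis-cli CONFIG SET watchdog-period 200
    redis-cli DEBUG SLEEP 1

The one-second sleep in the command implementation exceeds the 200 millisecond watchdog period, so a report is dumped to the log.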
The stack trace is different if the server blocks in a different context.\n\nIf you happen to collect multiple watchdog stack traces you are encouraged to send everything to the Redis Google Group: the more traces we obtain, the simpler it will be to understand what the problem with your instance is.","site":"redis","answers_cleaned":"    title   Diagnosing latency issues  linkTitle   Latency diagnosis  weight  1 description  Finding the causes of slow responses aliases         topics latency       docs reference optimization latency        This document will help you understand what the problem could be if you are experiencing latency problems with Redis   In this context  latency  is the maximum delay between the time a client issues a command and the time the reply to the command is received by the client  Usually Redis processing time is extremely low  in the sub microsecond range  but there are certain conditions leading to higher latency figures   I ve little time  give me the checklist      The following documentation is very important in order to run Redis in a low latency fashion  However I understand that we are busy people  so let s start with a quick checklist  If you fail following these steps  please return here to read the full documentation   1  Make sure you are not running slow commands that are blocking the server  Use the Redis  Slow Log feature   commands slowlog  to check this  2  For EC2 users  make sure you use HVM based modern EC2 instances  like m3 medium  Otherwise fork   is too slow  3  Transparent huge pages must be disabled from your kernel  Use  echo never    sys kernel mm transparent hugepage enabled  to disable them  and restart your Redis process  4  If you are using a virtual machine  it is possible that you have an intrinsic latency that has nothing to do with Redis  Check the minimum latency you can expect from your runtime environment using    redis cli   intrinsic latency 100   Note  you need to run this command in  the server 
 not in the client  5  Enable and use the  Latency monitor   topics latency monitor  feature of Redis in order to get a human readable description of the latency events and causes in your Redis instance   In general  use the following table for durability VS latency performance tradeoffs  ordered from stronger safety to better latency   1  AOF   fsync always  this is very slow  you should use it only if you know what you are doing  2  AOF   fsync every second  this is a good compromise  3  AOF   fsync every second   no appendfsync on rewrite option set to yes  this is as the above  but avoids to fsync during rewrites to lower the disk pressure  4  AOF   fsync never  Fsyncing is up to the kernel in this setup  even less disk pressure and risk of latency spikes  5  RDB  Here you have a vast spectrum of tradeoffs depending on the save triggers you configure   And now for people with 15 minutes to spend  the details     Measuring latency                    If you are experiencing latency problems  you probably know how to measure it in the context of your application  or maybe your latency problem is very evident even macroscopically  However redis cli can be used to measure the latency of a Redis server in milliseconds  just try       redis cli   latency  h  host   p  port   Using the internal Redis latency monitoring subsystem      Since Redis 2 8 13  Redis provides latency monitoring capabilities that are able to sample different execution paths to understand where the server is blocking  This makes debugging of the problems illustrated in this documentation much simpler  so we suggest enabling latency monitoring ASAP  Please refer to the  Latency monitor documentation   topics latency monitor    While the latency monitoring sampling and reporting capabilities will make it simpler to understand the source of latency in your Redis system  it is still advised that you read this documentation extensively to better understand the topic of Redis and latency spikes   
Latency baseline                   There is a kind of latency that is inherently part of the environment where you run Redis  that is the latency provided by your operating system kernel and  if you are using virtualization  by the hypervisor you are using   While this latency can t be removed it is important to study it because it is the baseline  or in other words  you won t be able to achieve a Redis latency that is better than the latency that every process running in your environment will experience because of the kernel or hypervisor implementation or setup   We call this kind of latency   intrinsic latency    and  redis cli  starting from Redis version 2 8 7 is able to measure it  This is an example run under Linux 3 11 0 running on an entry level server   Note  the argument  100  is the number of seconds the test will be executed  The more time we run the test  the more likely we ll be able to spot latency spikes  100 seconds is usually appropriate  however you may want to perform a few runs at different times  Please note that the test is CPU intensive and will likely saturate a single core in your system           redis cli   intrinsic latency 100     Max latency so far  1 microseconds      Max latency so far  16 microseconds      Max latency so far  50 microseconds      Max latency so far  53 microseconds      Max latency so far  83 microseconds      Max latency so far  115 microseconds   Note  redis cli in this special case needs to   run in the server   where you run or plan to run Redis  not in the client  In this special mode redis cli does not connect to a Redis server at all  it will just try to measure the largest time the kernel does not provide CPU time to run to the redis cli process itself   In the above example  the intrinsic latency of the system is just 0 115 milliseconds  or 115 microseconds   which is a good news  however keep in mind that the intrinsic latency may change over time depending on the load of the system   Virtualized 
environments will not show so good numbers  especially with high load or if there are noisy neighbors  The following is a run on a Linode 4096 instance running Redis and Apache           redis cli   intrinsic latency 100     Max latency so far  573 microseconds      Max latency so far  695 microseconds      Max latency so far  919 microseconds      Max latency so far  1606 microseconds      Max latency so far  3191 microseconds      Max latency so far  9243 microseconds      Max latency so far  9671 microseconds   Here we have an intrinsic latency of 9 7 milliseconds  this means that we can t ask better than that to Redis  However other runs at different times in different virtualization environments with higher load or with noisy neighbors can easily show even worse values  We were able to measure up to 40 milliseconds in systems otherwise apparently running normally   Latency induced by network and communication                                               Clients connect to Redis using a TCP IP connection or a Unix domain connection  The typical latency of a 1 Gbit s network is about 200 us  while the latency with a Unix domain socket can be as low as 30 us  It actually depends on your network and system hardware  On top of the communication itself  the system adds some more latency  due to thread scheduling  CPU caches  NUMA placement  etc       System induced latencies are significantly higher on a virtualized environment than on a physical machine   The consequence is even if Redis processes most commands in sub microsecond range  a client performing many roundtrips to the server will have to pay for these network and system related latencies   An efficient client will therefore try to limit the number of roundtrips by pipelining several commands together  This is fully supported by the servers and most clients  Aggregated commands like MSET MGET can be also used for that purpose  Starting with Redis 2 4  a number of commands also support variadic parameters 
for all data types   Here are some guidelines     If you can afford it  prefer a physical machine over a VM to host the server    Do not systematically connect disconnect to the server  especially true   for web based applications   Keep your connections as long lived as possible    If your client is on the same host than the server  use Unix domain sockets    Prefer to use aggregated commands  MSET MGET   or commands with variadic   parameters  if possible  over pipelining    Prefer to use pipelining  if possible  over sequence of roundtrips    Redis supports Lua server side scripting to cover cases that are not suitable   for raw pipelining  for instance when the result of a command is an input for   the following commands    On Linux  some people can achieve better latencies by playing with process placement  taskset   cgroups  real time priorities  chrt   NUMA configuration  numactl   or by using a low latency kernel  Please note vanilla Redis is not really suitable to be bound on a   single   CPU core  Redis can fork background tasks that can be extremely CPU consuming like  BGSAVE  or  BGREWRITEAOF   These tasks must   never   run on the same core as the main event loop   In most situations  these kind of system level optimizations are not needed  Only do them if you require them  and if you are familiar with them   Single threaded nature of Redis                                  Redis uses a  mostly  single threaded design  This means that a single process serves all the client requests  using a technique called   multiplexing    This means that Redis can serve a single request in every given moment  so all the requests are served sequentially  This is very similar to how Node js works as well  However  both products are not often perceived as being slow  This is caused in part by the small amount of time to complete a single request  but primarily because these products are designed to not block on system calls  such as reading data from or writing data to 
a socket   I said that Redis is  mostly  single threaded since actually from Redis 2 4 we use threads in Redis in order to perform some slow I O operations in the background  mainly related to disk I O  but this does not change the fact that Redis serves all the requests using a single thread   Latency generated by slow commands                                     A consequence of being single thread is that when a request is slow to serve all the other clients will wait for this request to be served  When executing normal commands  like  GET  or  SET  or  LPUSH  this is not a problem at all since these commands are executed in constant  and very small  time  However there are commands operating on many elements  like  SORT    LREM    SUNION  and others  For instance taking the intersection of two big sets can take a considerable amount of time   The algorithmic complexity of all commands is documented  A good practice is to systematically check it when using commands you are not familiar with   If you have latency concerns you should either not use slow commands against values composed of many elements  or you should run a replica using Redis replication where you run all your slow queries   It is possible to monitor slow commands using the Redis  Slow Log feature   commands slowlog    Additionally  you can use your favorite per process monitoring program  top  htop  prstat  etc      to quickly check the CPU consumption of the main Redis process  If it is high while the traffic is not  it is usually a sign that slow commands are used     IMPORTANT NOTE    a VERY common source of latency generated by the execution of slow commands is the use of the  KEYS  command in production environments   KEYS   as documented in the Redis documentation  should only be used for debugging purposes  Since Redis 2 8 a new commands were introduced in order to iterate the key space and other large collections incrementally  please check the  SCAN    SSCAN    HSCAN  and  ZSCAN  
commands for more information   Latency generated by fork                            In order to generate the RDB file in background  or to rewrite the Append Only File if AOF persistence is enabled  Redis has to fork background processes  The fork operation  running in the main thread  can induce latency by itself   Forking is an expensive operation on most Unix like systems  since it involves copying a good number of objects linked to the process  This is especially true for the page table associated to the virtual memory mechanism   For instance on a Linux AMD64 system  the memory is divided in 4 kB pages  To convert virtual addresses to physical addresses  each process stores a page table  actually represented as a tree  containing at least a pointer per page of the address space of the process  So a large 24 GB Redis instance requires a page table of 24 GB   4 kB   8   48 MB   When a background save is performed  this instance will have to be forked  which will involve allocating and copying 48 MB of memory  It takes time and CPU  especially on virtual machines where allocation and initialization of a large memory chunk can be expensive   Fork time in different systems                                 Modern hardware is pretty fast at copying the page table  but Xen is not  The problem with Xen is not virtualization specific  but Xen specific  For instance using VMware or Virtual Box does not result into slow fork time  The following is a table that compares fork time for different Redis instance size  Data is obtained performing a BGSAVE and looking at the  latest fork usec  filed in the  INFO  command output   However the good news is that   new types of EC2 HVM based instances are much better with fork times    almost on par with physical servers  so for example using m3 medium  or better  instances will provide good results       Linux beefy VM on VMware   6 0GB RSS forked in 77 milliseconds  12 8 milliseconds per GB       Linux running on physical machine  
Unknown HW    6 1GB RSS forked in 80 milliseconds  13 1 milliseconds per GB      Linux running on physical machine  Xeon   2 27Ghz    6 9GB RSS forked into 62 milliseconds  9 milliseconds per GB       Linux VM on 6sync  KVM    360 MB RSS forked in 8 2 milliseconds  23 3 milliseconds per GB       Linux VM on EC2  old instance types  Xen    6 1GB RSS forked in 1460 milliseconds  239 3 milliseconds per GB       Linux VM on EC2  new instance types  Xen    1GB RSS forked in 10 milliseconds  10 milliseconds per GB       Linux VM on Linode  Xen    0 9GBRSS forked into 382 milliseconds  424 milliseconds per GB    As you can see certain VMs running on Xen have a performance hit that is between one order to two orders of magnitude  For EC2 users the suggestion is simple  use modern HVM based instances   Latency induced by transparent huge pages                                            Unfortunately when a Linux kernel has transparent huge pages enabled  Redis incurs to a big latency penalty after the  fork  call is used in order to persist on disk  Huge pages are the cause of the following issue   1  Fork is called  two processes with shared huge pages are created  2  In a busy instance  a few event loops runs will cause commands to target a few thousand of pages  causing the copy on write of almost the whole process memory  3  This will result in big latency and big memory usage   Make sure to   disable transparent huge pages   using the following command       echo never    sys kernel mm transparent hugepage enabled  Latency induced by swapping  operating system paging                                                         Linux  and many other modern operating systems  is able to relocate memory pages from the memory to the disk  and vice versa  in order to use the system memory efficiently   If a Redis page is moved by the kernel from the memory to the swap file  when the data stored in this memory page is used by Redis  for example accessing a key stored into this 
memory page  the kernel will stop the Redis process in order to move the page back into the main memory  This is a slow operation involving random I Os  compared to accessing a page that is already in memory  and will result into anomalous latency experienced by Redis clients   The kernel relocates Redis memory pages on disk mainly because of three reasons     The system is under memory pressure since the running processes are demanding more physical memory than the amount that is available  The simplest instance of this problem is simply Redis using more memory than is available    The Redis instance data set  or part of the data set  is mostly completely idle  never accessed by clients   so the kernel could swap idle memory pages on disk  This problem is very rare since even a moderately slow instance will touch all the memory pages often  forcing the kernel to retain all the pages in memory    Some processes are generating massive read or write I Os on the system  Because files are generally cached  it tends to put pressure on the kernel to increase the filesystem cache  and therefore generate swapping activity  Please note it includes Redis RDB and or AOF background threads which can produce large files   Fortunately Linux offers good tools to investigate the problem  so the simplest thing to do is when latency due to swapping is suspected is just to check if this is the case   The first thing to do is to checking the amount of Redis memory that is swapped on disk  In order to do so you need to obtain the Redis instance pid         redis cli info   grep process id     process id 5454  Now enter the  proc file system directory for this process         cd  proc 5454  Here you ll find a file called   smaps   that describes the memory layout of the Redis process  assuming you are using Linux 2 6 16 or newer   This file contains very detailed information about our process memory maps  and one field called   Swap   is exactly what we are looking for  However there is 
not just a single swap field since the smaps file contains the different memory maps of our Redis process  The memory layout of a process is more complex than a simple linear array of pages    Since we are interested in all the memory swapped by our process the first thing to do is to grep for the Swap field across all the file         cat smaps   grep  Swap       Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                  12 kB     Swap                 156 kB     Swap                   8 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                   4 kB     Swap                   0 kB     Swap                   0 kB     Swap                   4 kB     Swap                   0 kB     Swap                   0 kB     Swap                   4 kB     Swap                   4 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB     Swap                   0 kB  If everything is 0 kB  or if there are sporadic 4k entries  everything is perfectly normal  Actually in our example instance  the one of a real web site running Redis and serving hundreds of users every second  there are a few entries that show more swapped pages  To investigate if this is a serious problem or not we change our command in order to also print the size of the memory map         cat smaps   egrep    Swap Size       Size                 316 kB     Swap                   0 kB     Size                   4 kB     Swap                   0 kB     Size                   8 kB     Swap                   0 kB     Size                  40 kB     Swap               
    0 kB     Size                 132 kB     Swap                   0 kB     Size              720896 kB     Swap                  12 kB     Size                4096 kB     Swap                 156 kB     Size                4096 kB     Swap                   8 kB     Size                4096 kB     Swap                   0 kB     Size                   4 kB     Swap                   0 kB     Size                1272 kB     Swap                   0 kB     Size                   8 kB     Swap                   0 kB     Size                   4 kB     Swap                   0 kB     Size                  16 kB     Swap                   0 kB     Size                  84 kB     Swap                   0 kB     Size                   4 kB     Swap                   0 kB     Size                   4 kB     Swap                   0 kB     Size                   8 kB     Swap                   4 kB     Size                   8 kB     Swap                   0 kB     Size                   4 kB     Swap                   0 kB     Size                   4 kB     Swap                   4 kB     Size                 144 kB     Swap                   0 kB     Size                   4 kB     Swap                   0 kB     Size                   4 kB     Swap                   4 kB     Size                  12 kB     Swap                   4 kB     Size                 108 kB     Swap                   0 kB     Size                   4 kB     Swap                   0 kB     Size                   4 kB     Swap                   0 kB     Size                 272 kB     Swap                   0 kB     Size                   4 kB     Swap                   0 kB  As you can see from the output  there is a map of 720896 kB  with just 12 kB swapped  and 156 kB more swapped in another map  basically a very small amount of our memory is swapped so this is not going to create any problem at all   If instead a non trivial amount of the process memory is swapped on disk your latency 
problems are likely related to swapping  If this is the case with your Redis instance you can further verify it using the   vmstat   command         vmstat 1     procs            memory              swap        io      system       cpu          r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa      0  0   3980 697932 147180 1406456    0    0     2     2    2    0  4  4 91  0      0  0   3980 697428 147180 1406580    0    0     0     0 19088 16104  9  6 84  0      0  0   3980 697296 147180 1406616    0    0     0    28 18936 16193  7  6 87  0      0  0   3980 697048 147180 1406640    0    0     0     0 18613 15987  6  6 88  0      2  0   3980 696924 147180 1406656    0    0     0     0 18744 16299  6  5 88  0      0  0   3980 697048 147180 1406688    0    0     0     4 18520 15974  6  6 88  0      C  The interesting part of the output for our needs are the two columns   si   and   so    that counts the amount of memory swapped from to the swap file  If you see non zero counts in those two columns then there is swapping activity in your system   Finally  the   iostat   command can be used to check the global I O activity of the system         iostat  xk 1     avg cpu    user    nice  system  iowait   steal    idle               13 55    0 04    2 92    0 53    0 00   82 95      Device          rrqm s   wrqm s     r s     w s    rkB s    wkB s avgrq sz avgqu sz   await  svctm   util     sda               0 77     0 00    0 01    0 00     0 40     0 00    73 65     0 00    3 62   2 58   0 00     sdb               1 27     4 75    0 82    3 54    38 00    32 32    32 19     0 11   24 80   4 24   1 85  If your latency problem is due to Redis memory being swapped on disk you need to lower the memory pressure in your system  either adding more RAM if Redis is using more memory than the available  or avoiding running other memory hungry processes in the same system   Latency due to AOF and disk I O                                  Another source 
of latency is due to the Append Only File support on Redis. The AOF basically uses two system calls to accomplish its work. One is write(2) that is used in order to write data to the append only file, and the other one is fdatasync(2) that is used in order to flush the kernel file buffer on disk in order to ensure the durability level specified by the user.

Both the write(2) and fdatasync(2) calls can be a source of latency. For instance write(2) can block both when there is a system wide sync in progress, or when the output buffers are full and the kernel requires to flush on disk in order to accept new writes.

The fdatasync(2) call is a worse source of latency as with many combinations of kernels and file systems used it can take from a few milliseconds to a few seconds to complete, especially in the case of some other process doing I/O. For this reason when possible Redis does the fdatasync(2) call in a different thread since Redis 2.4.

We'll see how configuration can affect the amount and source of latency when using the AOF file.

The AOF can be configured to perform a fsync on disk in three different ways using the `appendfsync` configuration option (this setting can be modified at runtime using the `CONFIG SET` command).

* When appendfsync is set to the value of `no` Redis performs no fsync. In this configuration the only source of latency can be write(2). When this happens usually there is no solution since simply the disk can't cope with the speed at which Redis is receiving data, however this is uncommon if the disk is not seriously slowed down by other processes doing I/O.

* When appendfsync is set to the value of `everysec` Redis performs a fsync every second. It uses a different thread, and if the fsync is still in progress Redis uses a buffer to delay the write(2) call up to two seconds (since write would block on Linux if a fsync is in progress against the same file). However if the fsync is taking too long Redis will eventually perform
the write(2) call even if the fsync is still in progress, and this can be a source of latency.

* When appendfsync is set to the value of `always` a fsync is performed at every write operation before replying back to the client with an OK code (actually Redis will try to cluster many commands executed at the same time into a single fsync). In this mode performance is very low in general and it is strongly recommended to use a fast disk and a file system implementation that can perform the fsync in a short time.

Most Redis users will use either the `no` or `everysec` setting for the appendfsync configuration directive. The suggestion for minimum latency is to avoid other processes doing I/O in the same system. Using an SSD disk can help as well, but usually even non SSD disks perform well with the append only file if the disk is spare, as Redis writes to the append only file without performing any seek.

If you want to investigate your latency issues related to the append only file you can use the strace command under Linux:

```
sudo strace -p $(pidof redis-server) -T -e trace=fdatasync
```

The above command will show all the fdatasync(2) system calls performed by Redis in the main thread. With the above command you'll not see the fdatasync system calls performed by the background thread when the appendfsync config option is set to `everysec`. In order to do so just add the -f switch to strace.

If you wish you can also see both fdatasync and write system calls with the following command:

```
sudo strace -p $(pidof redis-server) -T -e trace=fdatasync,write
```

However since write(2) is also used in order to write data to the client sockets, this will likely show too many things unrelated to disk I/O. Apparently there is no way to tell strace to just show slow system calls, so I use the following command:

```
sudo strace -f -p $(pidof redis-server) -T -e trace=fdatasync,write 2>&1 | grep -v '0.0' | grep -v unfinished
```

## Latency generated by expires
Redis evicts expired keys in two ways:

* One *lazy* way expires a key when it is requested by a command, but it is found to be already expired.
* One *active* way expires a few keys every 100 milliseconds.

The active expiring is designed to be adaptive. An expire cycle is started every 100 milliseconds (10 times per second), and will do the following:

* Sample `ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP` keys, evicting all the keys already expired.
* If more than 25% of the keys were found expired, repeat.

Given that `ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP` is set to 20 by default, and the process is performed ten times per second, usually just 200 keys per second are actively expired. This is enough to clean the DB fast enough even when already expired keys are not accessed for a long time, so that the *lazy* algorithm does not help. At the same time expiring just 200 keys per second has no effect on the latency of a Redis instance.

However the algorithm is adaptive and will loop if it finds more than 25% of keys already expired in the set of sampled keys. But given that we run the algorithm ten times per second, the unlucky case is when more than 25% of the keys in our random sample are expiring *in the same second*.

Basically this means that **if the database has many, many keys expiring in the same second, and these make up at least 25% of the current population of keys with an expire set**, Redis can block in order to get the percentage of keys already expired below 25%.

This approach is needed in order to avoid using too much memory for keys that are already expired, and usually is absolutely harmless since it's strange that a big number of keys are going to expire in the same exact second, but it is not impossible that the user used `EXPIREAT` extensively with the same Unix time.

In short: be aware that many keys expiring at the same moment can be a source of latency.

## Redis software watchdog

Redis 2.6 introduces the
*Redis Software Watchdog*, that is a debugging tool designed to track those latency problems that for one reason or the other escaped an analysis using normal tools.

The software watchdog is an experimental feature. While it is designed to be used in production environments, care should be taken to backup the database before proceeding as it could possibly have unexpected interactions with the normal execution of the Redis server.

It is important to use it only as a *last resort* when there is no way to track the issue by other means.

This is how this feature works:

* The user enables the software watchdog using the `CONFIG SET` command.
* Redis starts monitoring itself constantly.
* If Redis detects that the server is blocked into some operation that is not returning fast enough, and that may be the source of the latency issue, a low level report about where the server is blocked is dumped on the log file.
* The user contacts the developers writing a message in the Redis Google Group, including the watchdog report in the message.

Note that this feature cannot be enabled using the redis.conf file, because it is designed to be enabled only in already running instances and only for debugging purposes.

To enable the feature just use the following:

```
CONFIG SET watchdog-period 500
```

The period is specified in milliseconds. In the above example I specified to log latency issues only if the server detects a delay of 500 milliseconds or greater. The minimum configurable period is 200 milliseconds.

When you are done with the software watchdog you can turn it off setting the `watchdog-period` parameter to 0. **Important:** remember to do this because keeping the instance with the watchdog turned on for a longer time than needed is generally not a good idea.

The following is an example of what you'll see printed in the log file once the software watchdog detects a delay longer than the configured one:

```
[8547 | signal handler] (1333114359)
--- WATCHDOG TIMER EXPIRED ---
/lib/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d]
/lib/libpthread.so.0(+0xf8f0) [0x7f16b5f158f0]
/lib/libc.so.6(nanosleep+0x2d) [0x7f16b5c2d39d]
/lib/libc.so.6(usleep+0x34) [0x7f16b5c62844]
redis-server(debugCommand+0x3e1) [0x43ab41]
redis-server(call+0x5d) [0x415a9d]
redis-server(processCommand+0x375) [0x415fc5]
redis-server(processInputBuffer+0x4f) [0x4203cf]
redis-server(readQueryFromClient+0xa0) [0x4204e0]
redis-server(aeProcessEvents+0x128) [0x411b48]
redis-server(aeMain+0x2b) [0x411dbb]
redis-server(main+0x2b6) [0x418556]
/lib/libc.so.6(__libc_start_main+0xfd) [0x7f16b5ba1c4d]
redis-server() [0x411099]
------
```

Note: in the example the `DEBUG SLEEP` command was used in order to block the server. The stack trace is different if the server blocks in a different context.

If you happen to collect multiple watchdog stack traces you are encouraged to send everything to the Redis Google Group: the more traces we obtain, the simpler it will be to understand what the problem with your instance is."}
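To make the sampling behavior of the adaptive expire cycle described earlier concrete, here is a small Ruby sketch. This is an illustrative model only, not the actual Redis C implementation; the `active_expire_cycle` helper and its hash-based toy database are inventions of this example.

```ruby
# Toy model of the adaptive active-expire cycle (illustrative only;
# not the real Redis C code).
LOOKUPS_PER_LOOP = 20 # mirrors the ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP default

# db maps key name -> absolute expire time. Each iteration samples up
# to 20 keys, deletes the expired ones, and repeats while more than
# 25% of the sampled keys turned out to be expired.
def active_expire_cycle(db, now)
  removed = 0
  until db.empty?
    sample = db.keys.sample([LOOKUPS_PER_LOOP, db.size].min)
    expired = sample.select { |k| db[k] <= now }
    expired.each { |k| db.delete(k) }
    removed += expired.size
    break if expired.size * 4 <= sample.size # stop once <= 25% of the sample was expired
  end
  removed
end

# Every key already expired: the cycle keeps looping until the db is empty.
db = (0...50).map { |i| ["key:#{i}", 1] }.to_h
puts active_expire_cycle(db, 100) # prints 50
```

Note how the loop only keeps going while the expired fraction of each sample exceeds 25%; with few expired keys it stops after a single sample, which is why the cost is normally bounded at roughly 200 evictions per second.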
{"questions":"redis docs reference optimization memory optimization topics memory optimization title Memory optimization Strategies for optimizing memory usage in Redis aliases Memory optimization weight 1","answers":"---\ntitle: Memory optimization\nlinkTitle: Memory optimization\ndescription: Strategies for optimizing memory usage in Redis\nweight: 1\naliases: [\n    \/topics\/memory-optimization,\n    \/docs\/reference\/optimization\/memory-optimization\n]\n---\n\n## Special encoding of small aggregate data types\n\nSince Redis 2.2 many data types are optimized to use less space up to a certain size.\nHashes, Lists, Sets composed of just integers, and Sorted Sets, when smaller than a given number of elements, and up to a maximum element size, are encoded in a very memory-efficient way that uses *up to 10 times less memory* (with 5 times less memory used being the average saving).\n\nThis is completely transparent from the point of view of the user and API.\nSince this is a CPU \/ memory tradeoff it is possible to tune the maximum \nnumber of elements and maximum element size for special encoded types \nusing the following redis.conf directives (defaults are shown):\n\n### Redis <= 6.2\n\n```\nhash-max-ziplist-entries 512\nhash-max-ziplist-value 64\nzset-max-ziplist-entries 128 \nzset-max-ziplist-value 64\nset-max-intset-entries 512\n```\n\n### Redis >= 7.0\n\n```\nhash-max-listpack-entries 512\nhash-max-listpack-value 64\nzset-max-listpack-entries 128\nzset-max-listpack-value 64\nset-max-intset-entries 512\n```\n\n### Redis >= 7.2\n\nThe following directives are also available:\n\n```\nset-max-listpack-entries 128\nset-max-listpack-value 64\n```\n\nIf a specially encoded value overflows the configured max size,\nRedis will automatically convert it into normal encoding.\nThis operation is very fast for small values,\nbut if you change the setting in order to use specially encoded values\nfor much larger aggregate types the suggestion is to run some \nbenchmarks 
and tests to check the conversion time.\n\n## Using 32-bit instances\n\n\nWhen Redis is compiled as a 32-bit target, it uses a lot less memory per key, since pointers are small,\nbut such an instance will be limited to 4 GB of maximum memory usage.\nTo compile Redis as 32-bit binary use *make 32bit*.\nRDB and AOF files are compatible between 32-bit and 64-bit instances\n(and between little and big endian of course) so you can switch from 32 to 64-bit, or the contrary, without problems.\n\n## Bit and byte level operations\n\nRedis 2.2 introduced new bit and byte level operations: `GETRANGE`, `SETRANGE`, `GETBIT` and `SETBIT`.\nUsing these commands you can treat the Redis string type as a random access array.\nFor instance, if you have an application where users are identified by a unique progressive integer number,\nyou can use a bitmap to save information about the subscription of users in a mailing list,\nsetting the bit for subscribed and clearing it for unsubscribed, or the other way around.\nWith 100 million users this data will take just 12 megabytes of RAM in a Redis instance.\nYou can do the same using `GETRANGE` and `SETRANGE` to store one byte of information for each user.\nThis is just an example but it is possible to model several problems in very little space with these new primitives.\n\n## Use hashes when possible\n\n\nSmall hashes are encoded in a very small space, so you should try representing your data using hashes whenever possible.\nFor instance, if you have objects representing users in a web application, \ninstead of using different keys for name, surname, email, password, use a single hash with all the required fields.\n\nIf you want to know more about this, read the next section.\n\n## Using hashes to abstract a very memory-efficient plain key-value store on top of Redis\n\nI understand the title of this section is a bit scary, but I'm going to explain in detail what this is about.\n\nBasically it is possible to model a plain key-value store 
using Redis\nwhere values can just be strings, which is not just more memory efficient\nthan Redis plain keys but also much more memory efficient than memcached.\n\nLet's start with some facts: a few keys use a lot more memory than a single key\ncontaining a hash with a few fields. How is this possible? We use a trick.\nIn theory to guarantee that we perform lookups in constant time\n(also known as O(1) in big O notation) there is the need to use a data structure\nwith a constant time complexity in the average case, like a hash table.\n\nBut many times hashes contain just a few fields. When hashes are small we can\ninstead just encode them in an O(N) data structure, like a linear\narray with length-prefixed key-value pairs. Since we do this only when N\nis small, the amortized time for `HGET` and `HSET` commands is still O(1): the\nhash will be converted into a real hash table as soon as the number of elements\nit contains grows too large (you can configure the limit in redis.conf).\n\nThis does not only work well from the point of view of time complexity, but\nalso from the point of view of constant times since a linear array of key-value pairs happens to play very well with the CPU cache (it has a better\ncache locality than a hash table).\n\nHowever since hash fields and values are not (always) represented as full-featured Redis objects, hash fields can't have an associated time to live\n(expire) like a real key, and can only contain a string. But we are okay with\nthis, this was the intention anyway when the hash data type API was\ndesigned (we trust simplicity more than features, so nested data structures\nare not allowed, as expires of single fields are not allowed).\n\nSo hashes are memory efficient. This is useful when using hashes\nto represent objects or to model other problems when there are groups of\nrelated fields. 
But what about if we have a plain key value business?\n\nImagine we want to use Redis as a cache for many small objects, which can be JSON encoded objects, small HTML fragments, simple key -> boolean values\nand so forth. Basically, anything is a string -> string map with small keys\nand values.\n\nNow let's assume the objects we want to cache are numbered, like:\n\n * object:102393\n * object:1234\n * object:5\n\nThis is what we can do. Every time we perform a\nSET operation to set a new value, we actually split the key into two parts,\none part used as a key, and the other part used as the field name for the hash. For instance, the\nobject named \"object:1234\" is actually split into:\n\n* a Key named object:12\n* a Field named 34\n\nSo we use all the characters but the last two for the key, and the final\ntwo characters for the hash field name. To set our key we use the following\ncommand:\n\n```\nHSET object:12 34 somevalue\n```\n\nAs you can see every hash will end up containing 100 fields, which is an optimal compromise between CPU and memory saved.\n\nThere is another important thing to note, with this schema\nevery hash will have more or\nless 100 fields regardless of the number of objects we cached. This is because our objects will always end with a number and not a random string. In some way, the final number can be considered as a form of implicit pre-sharding.\n\nWhat about small numbers? Like object:2? 
We handle this case using just\n\"object:\" as a key name, and the whole number as the hash field name.\nSo object:2 and object:10 will both end inside the key \"object:\", but one\nas field name \"2\" and one as \"10\".\n\nHow much memory do we save this way?\n\nI used the following Ruby program to test how this works:\n\n```ruby\nrequire 'rubygems'\nrequire 'redis'\n\nUSE_OPTIMIZATION = true\n\ndef hash_get_key_field(key)\n  s = key.split(':')\n  if s[1].length > 2\n    { key: s[0] + ':' + s[1][0..-3], field: s[1][-2..-1] }\n  else\n    { key: s[0] + ':', field: s[1] }\n  end\nend\n\ndef hash_set(r, key, value)\n  kf = hash_get_key_field(key)\n  r.hset(kf[:key], kf[:field], value)\nend\n\ndef hash_get(r, key)\n  kf = hash_get_key_field(key)\n  r.hget(kf[:key], kf[:field])\nend\n\nr = Redis.new\n(0..100_000).each do |id|\n  key = \"object:#{id}\"\n  if USE_OPTIMIZATION\n    hash_set(r, key, 'val')\n  else\n    r.set(key, 'val')\n  end\nend\n```\n\nThis is the result against a 64 bit instance of Redis 2.2:\n\n * USE_OPTIMIZATION set to true: 1.7 MB of used memory\n * USE_OPTIMIZATION set to false: 11 MB of used memory\n\nThis is an order of magnitude, I think this makes Redis more or less the most\nmemory efficient plain key value store out there.\n\n*WARNING*: for this to work, make sure that in your redis.conf you have\nsomething like this:\n\n```\nhash-max-zipmap-entries 256\n```\n\nAlso remember to set the following field according to the maximum size\nof your keys and values:\n\n```\nhash-max-zipmap-value 1024\n```\n\nEvery time a hash exceeds the number of elements or element size specified\nit will be converted into a real hash table, and the memory saving will be lost.\n\nYou may ask, why don't you do this implicitly in the normal key space so that\nI don't have to care? There are two reasons: one is that we tend to make\ntradeoffs explicit, and this is a clear tradeoff between many things: CPU,\nmemory, and max element size. 
The second is that the top-level key space must\nsupport a lot of interesting things like expires, LRU data, and so\nforth so it is not practical to do this in a general way.\n\nBut the Redis Way is that the user must understand how things work so that they can pick the best compromise and understand how the system will\nbehave exactly.\n\n## Memory allocation\n\nTo store user keys, Redis allocates at most as much memory as the `maxmemory`\nsetting enables (however there are small extra allocations possible).\n\nThe exact value can be set in the configuration file or set later via\n`CONFIG SET` (for more info, see [Using memory as an LRU cache](\/docs\/reference\/eviction)).\nThere are a few things that should be noted about how Redis manages memory:\n\n* Redis will not always free up (return) memory to the OS when keys are removed.\nThis is not something special about Redis, but it is how most malloc() implementations work.\nFor example, if you fill an instance with 5GB worth of data, and then\nremove the equivalent of 2GB of data, the Resident Set Size (also known as\nthe RSS, which is the number of memory pages consumed by the process)\nwill probably still be around 5GB, even if Redis will claim that the user\nmemory is around 3GB.  This happens because the underlying allocator can't easily release the memory.\nFor example, often most of the removed keys were allocated on the same pages as the other keys that still exist.\n* The previous point means that you need to provision memory based on your\n**peak memory usage**. If your workload from time to time requires 10GB, even if\nmost of the time 5GB could do, you need to provision for 10GB.\n* However allocators are smart and are able to reuse free chunks of memory,\nso after you free 2GB of your 5GB data set, when you start adding more keys\nagain, you'll see the RSS (Resident Set Size) stay steady and not grow\nmore, as you add up to 2GB of additional keys. 
The allocator is basically\ntrying to reuse the 2GB of memory previously (logically) freed.\n* Because of all this, the fragmentation ratio is not reliable when you\nhad a memory usage that at the peak is much larger than the currently used memory.\nThe fragmentation is calculated as the physical memory actually used (the RSS\nvalue) divided by the amount of memory currently in use (as the sum of all\nthe allocations performed by Redis). Because the RSS reflects the peak memory,\nwhen the (virtually) used memory is low since a lot of keys\/values were freed, but the RSS is high, the ratio `RSS \/ mem_used` will be very high.\n\nIf `maxmemory` is not set Redis will keep allocating memory as it sees\nfit and thus it can (gradually) eat up all your free memory.\nTherefore it is generally advisable to configure some limits. You may also\nwant to set `maxmemory-policy` to `noeviction` (which is *not* the default\nvalue in some older versions of Redis).\n\nIt makes Redis return an out-of-memory error for write commands if and when it reaches the \nlimit - which in turn may result in errors in the application but will not render the \nwhole machine dead because of memory starvation.","site":"redis","answers_cleaned":"    title  Memory optimization linkTitle  Memory optimization description  Strategies for optimizing memory usage in Redis weight  1 aliases         topics memory optimization       docs reference optimization memory optimization           Special encoding of small aggregate data types  Since Redis 2 2 many data types are optimized to use less space up to a certain size  Hashes  Lists  Sets composed of just integers  and Sorted Sets  when smaller than a given number of elements  and up to a maximum element size  are encoded in a very memory efficient way that uses  up to 10 times less memory   with 5 times less memory used being the average saving    This is completely transparent from the point of view of the user and API  Since this is a CPU   memory 
tradeoff it is possible to tune the maximum  number of elements and maximum element size for special encoded types  using the following redis conf directives  defaults are shown        Redis    6 2      hash max ziplist entries 512 hash max ziplist value 64 zset max ziplist entries 128  zset max ziplist value 64 set max intset entries 512          Redis    7 0      hash max listpack entries 512 hash max listpack value 64 zset max listpack entries 128 zset max listpack value 64 set max intset entries 512          Redis    7 2  The following directives are also available       set max listpack entries 128 set max listpack value 64      If a specially encoded value overflows the configured max size  Redis will automatically convert it into normal encoding  This operation is very fast for small values  but if you change the setting in order to use specially encoded values for much larger aggregate types the suggestion is to run some  benchmarks and tests to check the conversion time      Using 32 bit instances   When Redis is compiled as a 32 bit target  it uses a lot less memory per key  since pointers are small  but such an instance will be limited to 4 GB of maximum memory usage  To compile Redis as 32 bit binary use  make 32bit   RDB and AOF files are compatible between 32 bit and 64 bit instances  and between little and big endian of course  so you can switch from 32 to 64 bit  or the contrary  without problems      Bit and byte level operations  Redis 2 2 introduced new bit and byte level operations   GETRANGE    SETRANGE    GETBIT  and  SETBIT   Using these commands you can treat the Redis string type as a random access array  For instance  if you have an application where users are identified by a unique progressive integer number  you can use a bitmap to save information about the subscription of users in a mailing list  setting the bit for subscribed and clearing it for unsubscribed  or the other way around  With 100 million users this data will take just 12 
megabytes of RAM in a Redis instance  You can do the same using  GETRANGE  and  SETRANGE  to store one byte of information for each user  This is just an example but it is possible to model several problems in very little space with these new primitives      Use hashes when possible   Small hashes are encoded in a very small space  so you should try representing your data using hashes whenever possible  For instance  if you have objects representing users in a web application   instead of using different keys for name  surname  email  password  use a single hash with all the required fields   If you want to know more about this  read the next section      Using hashes to abstract a very memory efficient plain key value store on top of Redis  I understand the title of this section is a bit scary  but I m going to explain in detail what this is about   Basically it is possible to model a plain key value store using Redis where values can just be just strings  which is not just more memory efficient than Redis plain keys but also much more memory efficient than memcached   Let s start with some facts  a few keys use a lot more memory than a single key containing a hash with a few fields  How is this possible  We use a trick  In theory to guarantee that we perform lookups in constant time  also known as O 1  in big O notation  there is the need to use a data structure with a constant time complexity in the average case  like a hash table   But many times hashes contain just a few fields  When hashes are small we can instead just encode them in an O N  data structure  like a linear array with length prefixed key value pairs  Since we do this only when N is small  the amortized time for  HGET  and  HSET  commands is still O 1   the hash will be converted into a real hash table as soon as the number of elements it contains grows too large  you can configure the limit in redis conf    This does not only work well from the point of view of time complexity  but also from the 
point of view of constant times since a linear array of key value pairs happens to play very well with the CPU cache  it has a better cache locality than a hash table    However since hash fields and values are not  always  represented as full featured Redis objects  hash fields can t have an associated time to live  expire  like a real key  and can only contain a string  But we are okay with this  this was the intention anyway when the hash data type API was designed  we trust simplicity more than features  so nested data structures are not allowed  as expires of single fields are not allowed    So hashes are memory efficient  This is useful when using hashes to represent objects or to model other problems when there are group of related fields  But what about if we have a plain key value business   Imagine we want to use Redis as a cache for many small objects  which can be JSON encoded objects  small HTML fragments  simple key    boolean values and so forth  Basically  anything is a string    string map with small keys and values   Now let s assume the objects we want to cache are numbered  like      object 102393    object 1234    object 5  This is what we can do  Every time we perform a SET operation to set a new value  we actually split the key into two parts  one part used as a key  and the other part used as the field name for the hash  For instance  the object named  object 1234  is actually split into     a Key named object 12   a Field named 34  So we use all the characters but the last two for the key  and the final two characters for the hash field name  To set our key we use the following command       HSET object 12 34 somevalue      As you can see every hash will end up containing 100 fields  which is an optimal compromise between CPU and memory saved   There is another important thing to note  with this schema every hash will have more or less 100 fields regardless of the number of objects we cached  This is because our objects will always end with 
a number (and not a random string). In some way, the final number can be considered as a form of implicit pre-sharding.\n\nWhat about small numbers? Like object:2? We handle this case using just \"object:\" as a key name, and the whole number as the hash field name. So object:2 and object:10 will both end inside the key \"object:\", but one as field name \"2\" and one as \"10\".\n\nHow much memory do we save this way?\n\nI used the following Ruby program to test how this works:\n\n```ruby\nrequire 'rubygems'\nrequire 'redis'\n\nUSE_OPTIMIZATION = true\n\ndef hash_get_key_field(key)\n    s = key.split(\":\")\n    if s[1].length > 2\n        { :key => s[0] + \":\" + s[1][0..-3], :field => s[1][-2..-1] }\n    else\n        { :key => s[0] + \":\", :field => s[1] }\n    end\nend\n\ndef hash_set(r, key, value)\n    kf = hash_get_key_field(key)\n    r.hset(kf[:key], kf[:field], value)\nend\n\ndef hash_get(r, key, value)\n    kf = hash_get_key_field(key)\n    r.hget(kf[:key], kf[:field], value)\nend\n\nr = Redis.new\n(0..100_000).each do |id|\n    key = \"object:#{id}\"\n    if USE_OPTIMIZATION\n        hash_set(r, key, \"val\")\n    else\n        r.set(key, \"val\")\n    end\nend\n```\n\nThis is the result against a 64 bit instance of Redis 2.2:\n\n* USE_OPTIMIZATION set to true: 1.7 MB of used memory\n* USE_OPTIMIZATION set to false: 11 MB of used memory\n\nThis is an order of magnitude. I think this makes Redis more or less the most memory efficient plain key value store out there.\n\n*WARNING*: for this to work, make sure that in your redis.conf you have something like this:\n\n    hash-max-zipmap-entries 256\n\nAlso remember to set the following field according to the maximum size of your keys and values:\n\n    hash-max-zipmap-value 1024\n\nEvery time a hash exceeds the number of elements or element size specified it will be converted into a real hash table, and the memory saving will be lost.\n\nYou may ask, why don't you do this implicitly in the normal key space so that I don't have to care? There are two reasons: one is that we tend to make tradeoffs explicit, and this is a clear tradeoff between many things: CPU, memory, and max element size. The second is that the top level key space must support a lot of interesting things like expires, LRU data, and so forth so it is not practical to do this in a general way.\n\nBut the Redis Way is that the user must understand how things work so that he can pick the best compromise, and to understand how the system will behave exactly.\n\n## Memory allocation\n\nTo store user keys, Redis allocates at most as much memory as the `maxmemory` setting enables (however there are small extra allocations possible).\n\nThe exact value can be set in the configuration file or set later via `CONFIG SET` (for more info, see [Using memory as an LRU cache](\/docs\/reference\/eviction)).\n\nThere are a few things that should be noted about how Redis manages memory:\n\n* Redis will not always free up (return) memory to the OS when keys are removed. This is not something special about Redis, but it is how most malloc() implementations work. For example, if you fill an instance with 5GB worth of data, and then remove the equivalent of 2GB of data, the Resident Set Size (also known as the RSS, which is the number of memory pages consumed by the process) will probably still be around 5GB, even if Redis will claim that the user memory is around 3GB. This happens because the underlying allocator can't easily release the memory. For example, often most of the removed keys were allocated on the same pages as the other keys that still exist.\n* The previous point means that you need to provision memory based on your **peak memory usage**. If your workload from time to time requires 10GB, even if most of the time 5GB could do, you need to provision for 10GB.\n* However allocators are smart and are able to reuse free chunks of memory, so after you free 2GB of your 5GB data set, when you start adding more keys again, you'll see the RSS (Resident Set Size) stay steady and not grow more, as you add up to 2GB of additional keys. The allocator is basically trying to reuse the 2GB of memory previously (logically) freed.\n* Because of all this, the fragmentation ratio is not reliable when you had a memory usage that at the peak is much larger than the currently used memory. The fragmentation is calculated as the physical memory actually used (the RSS value) divided by the amount of memory currently in use (as the sum of all the allocations performed by Redis). Because the RSS reflects the peak memory, when the (virtually) used memory is low since a lot of keys\/values were freed, but the RSS is high, the ratio `RSS \/ mem_used` will be very high.\n\nIf `maxmemory` is not set Redis will keep allocating memory as it sees fit and thus it can (gradually) eat up all your free memory. Therefore it is generally advisable to configure some limits. You may also want to set `maxmemory-policy` to `noeviction` (which is *not* the default value in some older versions of Redis).\n\nIt makes Redis return an out of memory error for write commands if and when it reaches the limit, which in turn may result in errors in the application but will not render the whole machine dead because of memory starvation."}
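The key-splitting rule described in the record above (last two digits become the hash field, everything else stays in the bucket key) can be sketched in isolation, with no Redis connection needed. This is a minimal illustration; `bucket_for` is a hypothetical helper name, not part of any Redis API:

```ruby
# Split an "object:<id>" key into a hash key (bucket) and a field name.
# The last two digits become the field, so at most 100 fields share one
# bucket. Short ids ("object:2") fall back to the bare "object:" bucket
# with the whole number as the field.
def bucket_for(key)
  prefix, id = key.split(":")
  if id.length > 2
    [prefix + ":" + id[0..-3], id[-2..-1]]
  else
    [prefix + ":", id]
  end
end

p bucket_for("object:1234")  # => ["object:12", "34"]
p bucket_for("object:2")     # => ["object:", "2"]
```

Note that both object:2 and object:10 map to the same bucket "object:", matching the behavior the text describes.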
{"questions":"redis Using the redis benchmark utility on a Redis server docs reference optimization benchmarks md title Redis benchmark aliases Benchmarking topics benchmarks weight 1 docs reference optimization benchmarks","answers":"---\ntitle: \"Redis benchmark\"\nlinkTitle: \"Benchmarking\"\nweight: 1\ndescription: >\n    Using the redis-benchmark utility on a Redis server\naliases: [\n    \/topics\/benchmarks,\n    \/docs\/reference\/optimization\/benchmarks,\n    \/docs\/reference\/optimization\/benchmarks.md\n]\n---\n\nRedis includes the `redis-benchmark` utility that simulates running commands done\nby N clients while at the same time sending M total queries. The utility provides\na default set of tests, or you can supply a custom set of tests.\n\nThe following options are supported:\n\n    Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]\n\n     -h <hostname>      Server hostname (default 127.0.0.1)\n     -p <port>          Server port (default 6379)\n     -s <socket>        Server socket (overrides host and port)\n     -a <password>      Password for Redis Auth\n     -c <clients>       Number of parallel connections (default 50)\n     -n <requests>      Total number of requests (default 100000)\n     -d <size>          Data size of SET\/GET value in bytes (default 3)\n     --dbnum <db>       SELECT the specified db number (default 0)\n     -k <boolean>       1=keep alive 0=reconnect (default 1)\n     -r <keyspacelen>   Use random keys for SET\/GET\/INCR, random values for SADD\n      Using this option the benchmark will expand the string __rand_int__\n      inside an argument with a 12 digits number in the specified range\n      from 0 to keyspacelen-1. The substitution changes every time a command\n      is executed. Default tests use this to hit random keys in the\n      specified range.\n     -P <numreq>        Pipeline <numreq> requests. Default 1 (no pipeline).\n     -q                 Quiet. 
Just show query\/sec values\n     --csv              Output in CSV format\n     -l                 Loop. Run the tests forever\n     -t <tests>         Only run the comma separated list of tests. The test\n                        names are the same as the ones produced as output.\n     -I                 Idle mode. Just open N idle connections and wait.\n\nYou need to have a running Redis instance before launching the benchmark.\nYou can run the benchmarking utility like so:\n\n    redis-benchmark -q -n 100000\n\n### Running only a subset of the tests\n\nYou don't need to run all the default tests every time you execute `redis-benchmark`.\nFor example, to select only a subset of tests, use the `-t` option\nas in the following example:\n\n    $ redis-benchmark -t set,lpush -n 100000 -q\n    SET: 74239.05 requests per second\n    LPUSH: 79239.30 requests per second\n\nThis example runs the tests for the `SET` and `LPUSH` commands and uses quiet mode (see the `-q` switch).\n\nYou can even benchmark a specific command:\n\n    $ redis-benchmark -n 100000 -q script load \"redis.call('set','foo','bar')\"\n    script load redis.call('set','foo','bar'): 69881.20 requests per second\n\n### Selecting the size of the key space\n\nBy default, the benchmark runs against a single key. In Redis the difference\nbetween such a synthetic benchmark and a real one is not huge since it is an\nin-memory system; however, it is possible to stress cache misses and in general\nto simulate a more real-world workload by using a large key space.\n\nThis is obtained by using the `-r` switch. 
For instance if I want to run\none million SET operations, using a random key for every operation out of\n100k possible keys, I'll use the following command line:\n\n    $ redis-cli flushall\n    OK\n\n    $ redis-benchmark -t set -r 100000 -n 1000000\n    ====== SET ======\n      1000000 requests completed in 13.86 seconds\n      50 parallel clients\n      3 bytes payload\n      keep alive: 1\n\n    99.76% `<=` 1 milliseconds\n    99.98% `<=` 2 milliseconds\n    100.00% `<=` 3 milliseconds\n    100.00% `<=` 3 milliseconds\n    72144.87 requests per second\n\n    $ redis-cli dbsize\n    (integer) 99993\n\n### Using pipelining\n\nBy default every client (the benchmark simulates 50 clients if not otherwise\nspecified with `-c`) sends the next command only when the reply of the previous\ncommand is received; this means that the server will likely need a read call\nin order to read each command from every client. RTT is paid as well.\n\nRedis supports [pipelining](\/topics\/pipelining), so it is possible to send\nmultiple commands at once, a feature often exploited by real world applications.\nRedis pipelining is able to dramatically improve the number of operations per\nsecond a server is able to deliver.\n\nConsider this example of running the benchmark using a\npipelining of 16 commands:\n\n    $ redis-benchmark -n 1000000 -t set,get -P 16 -q\n    SET: 403063.28 requests per second\n    GET: 508388.41 requests per second\n\nUsing pipelining results in a significant increase in performance.\n\n### Pitfalls and misconceptions\n\nThe first point is obvious: the golden rule of a useful benchmark is to\nonly compare apples and apples. You can compare different versions of Redis on the same workload or the same version of Redis, but with\ndifferent options. 
If you plan to compare Redis to something else, then it is\nimportant to evaluate the functional and technical differences, and take them\ninto account.\n\n+ Redis is a server: all commands involve network or IPC round trips. It is meaningless to compare it to embedded data stores, because the cost of most operations is primarily in network\/protocol management.\n+ Redis commands return an acknowledgment for all usual commands. Some other data stores do not. Comparing Redis to stores involving one-way queries is only mildly useful.\n+ Naively iterating on synchronous Redis commands does not benchmark Redis itself, but rather measures your network (or IPC) latency and the client library intrinsic latency. To really test Redis, you need multiple connections (like redis-benchmark) and\/or to use pipelining to aggregate several commands and\/or multiple threads or processes.\n+ Redis is an in-memory data store with some optional persistence options. If you plan to compare it to transactional servers (MySQL, PostgreSQL, etc ...), then you should consider activating AOF and decide on a suitable fsync policy.\n+ Redis is, mostly, a single-threaded server from the POV of commands execution (actually modern versions of Redis use threads for different things). It is not designed to benefit from multiple CPU cores. People are supposed to launch several Redis instances to scale out on several cores if needed. It is not really fair to compare one single Redis instance to a multi-threaded data store.\n\nThe `redis-benchmark` program is a quick and useful way to get some figures and\nevaluate the performance of a Redis instance on given hardware. However,\nby default, it does not represent the maximum throughput a Redis instance can\nsustain. Actually, by using pipelining and a fast client (hiredis), it is fairly\neasy to write a program generating more throughput than redis-benchmark. 
The\ndefault behavior of redis-benchmark is to achieve throughput by exploiting\nconcurrency only (i.e. it creates several connections to the server).\nIt does not use pipelining or any parallelism at all (one pending query per\nconnection at most, and no multi-threading), if not explicitly enabled via\nthe `-P` parameter. So in some way, running `redis-benchmark` while triggering,\nfor example, a `BGSAVE` operation in the background at the same time will provide\nthe user with numbers closer to the *worst case* than to the best case.\n\nTo run a benchmark using pipelining mode (and achieve higher throughput),\nyou need to explicitly use the `-P` option. Please note that it is still a\nrealistic behavior since many Redis-based applications actively use\npipelining to improve performance. However you should use a pipeline size that\nis more or less the average pipeline length you'll be able to use in your\napplication in order to get realistic numbers.\n\nThe benchmark should apply the same operations, and work in the same way\nwith the multiple data stores you want to compare. It is absolutely pointless to\ncompare the result of redis-benchmark to the result of another benchmark\nprogram and extrapolate.\n\nFor instance, Redis and memcached in single-threaded mode can be compared on\nGET\/SET operations. Both are in-memory data stores, working mostly in the same\nway at the protocol level. Provided their respective benchmark application is\naggregating queries in the same way (pipelining) and uses a similar number of\nconnections, the comparison is actually meaningful.\n\nWhen you're benchmarking a high-performance, in-memory database like Redis,\nit may be difficult to saturate\nthe server. Sometimes, the performance bottleneck is on the client side,\nand not the server-side. 
In that case, the client (i.e., the benchmarking program itself)\nmust be fixed, or perhaps scaled out, to reach the maximum throughput.\n\n### Factors impacting Redis performance\n\nThere are multiple factors having direct consequences on Redis performance.\nWe mention them here, since they can alter the result of any benchmarks.\nPlease note, however, that a typical Redis instance running on a low end,\nuntuned box usually provides good enough performance for most applications.\n\n+ Network bandwidth and latency usually have a direct impact on the performance.\nIt is a good practice to use the ping program to quickly check the latency\nbetween the client and server hosts is normal before launching the benchmark.\nRegarding the bandwidth, it is generally useful to estimate\nthe throughput in Gbit\/s and compare it to the theoretical bandwidth\nof the network. For instance, a benchmark setting 4 KB strings\nin Redis at 100000 q\/s would actually consume 3.2 Gbit\/s of bandwidth\nand probably fit within a 10 Gbit\/s link, but not a 1 Gbit\/s one. In many real\nworld scenarios, Redis throughput is limited by the network well before being\nlimited by the CPU. To consolidate several high-throughput Redis instances\non a single server, it is worth considering putting a 10 Gbit\/s NIC\nor multiple 1 Gbit\/s NICs with TCP\/IP bonding.\n+ CPU is another very important factor. Being single-threaded, Redis favors\nfast CPUs with large caches and not many cores. At this game, Intel CPUs are\ncurrently the winners. It is not uncommon to get only half the performance on\nan AMD Opteron CPU compared to similar Nehalem EP\/Westmere EP\/Sandy Bridge\nIntel CPUs with Redis. When client and server run on the same box, the CPU is\nthe limiting factor with redis-benchmark.\n+ Speed of RAM and memory bandwidth seem less critical for global performance\nespecially for small objects. For large objects (>10 KB), it may become\nnoticeable though. 
Usually, it is not really cost-effective to buy expensive\nfast memory modules to optimize Redis.\n+ Redis runs slower on a VM compared to running without virtualization using\nthe same hardware. If you have the chance to run Redis on a physical machine\nthis is preferred. However this does not mean that Redis is slow in\nvirtualized environments; the delivered performance is still very good\nand most of the serious performance issues you may incur in virtualized\nenvironments are due to over-provisioning, non-local disks with high latency,\nor old hypervisor software that have slow `fork` syscall implementations.\n+ When the server and client benchmark programs run on the same box, both\nthe TCP\/IP loopback and unix domain sockets can be used. Depending on the\nplatform, unix domain sockets can achieve around 50% more throughput than\nthe TCP\/IP loopback (on Linux for instance). The default behavior of\nredis-benchmark is to use the TCP\/IP loopback.\n+ The performance benefit of unix domain sockets compared to TCP\/IP loopback\ntends to decrease when pipelining is heavily used (i.e. long pipelines).\n+ When an ethernet network is used to access Redis, aggregating commands using\npipelining is especially efficient when the size of the data is kept under\nthe ethernet packet size (about 1500 bytes). Actually, processing 10-byte,\n100-byte, or 1000-byte queries results in almost the same throughput.\nSee the graph below.\n\n    ![Data size impact](Data_size.png)\n\n+ On multi-CPU-socket servers, Redis performance becomes dependent on the\nNUMA configuration and process location. The most visible effect is that\nredis-benchmark results seem non-deterministic because client and server\nprocesses are distributed randomly on the cores. 
To get deterministic results,\nit is required to use process placement tools (on Linux: taskset or numactl).\nThe most efficient combination is always to put the client and server on two\ndifferent cores of the same CPU to benefit from the L3 cache.\nHere are some results of a 4 KB SET benchmark for 3 server CPUs (AMD Istanbul,\nIntel Nehalem EX, and Intel Westmere) with different relative placements.\nPlease note this benchmark is not meant to compare CPU models between themselves\n(the exact CPU models and frequencies are therefore not disclosed).\n\n    ![NUMA chart](NUMA_chart.gif)\n\n+ With high-end configurations, the number of client connections is also an\nimportant factor. Being based on epoll\/kqueue, the Redis event loop is quite\nscalable. Redis has already been benchmarked at more than 60000 connections,\nand was still able to sustain 50000 q\/s in these conditions. As a rule of thumb,\nan instance with 30000 connections can only process half the throughput\nachievable with 100 connections. Here is an example showing the throughput of\na Redis instance per number of connections:\n\n    ![connections chart](Connections_chart.png)\n\n+ With high-end configurations, it is possible to achieve higher throughput by\ntuning the NIC(s) configuration and associated interrupts. Best throughput\nis achieved by setting an affinity between Rx\/Tx NIC queues and CPU cores,\nand activating RPS (Receive Packet Steering) support. More information in this\n[thread](https:\/\/groups.google.com\/forum\/#!msg\/redis-db\/gUhc19gnYgc\/BruTPCOroiMJ).\nJumbo frames may also provide a performance boost when large objects are used.\n+ Depending on the platform, Redis can be compiled against different memory\nallocators (libc malloc, jemalloc, tcmalloc), which may have different behaviors\nin terms of raw speed, internal and external fragmentation.\nIf you did not compile Redis yourself, you can use the INFO command to check\nthe `mem_allocator` field. 
Please note most benchmarks do not run long enough to\ngenerate significant external fragmentation (contrary to production Redis\ninstances).\n\n### Other things to consider\n\nOne important goal of any benchmark is to get reproducible results, so they\ncan be compared to the results of other tests.\n\n+ A good practice is to try to run tests on isolated hardware as much as possible.\nIf it is not possible, then the system must be monitored to check the benchmark\nis not impacted by some external activity.\n+ Some configurations (desktops and laptops for sure, some servers as well)\nhave a variable CPU core frequency mechanism. The policy controlling this\nmechanism can be set at the OS level. Some CPU models are more aggressive than\nothers at adapting the frequency of the CPU cores to the workload. To get\nreproducible results, it is better to set the highest possible fixed frequency\nfor all the CPU cores involved in the benchmark.\n+ An important point is to size the system according to the benchmark.\nThe system must have enough RAM and must not swap. On Linux, do not forget\nto set the `overcommit_memory` parameter correctly. Please note 32 and 64 bit\nRedis instances do not have the same memory footprint.\n+ If you plan to use RDB or AOF for your benchmark, please check there is no other\nI\/O activity in the system. Avoid putting RDB or AOF files on NAS or NFS shares,\nor on any other devices impacting your network bandwidth and\/or latency\n(for instance, EBS on Amazon EC2).\n+ Set Redis logging level (loglevel parameter) to warning or notice. Avoid putting\nthe generated log file on a remote filesystem.\n+ Avoid using monitoring tools which can alter the result of the benchmark. For\ninstance using INFO at regular intervals to gather statistics is probably fine,\nbut MONITOR will impact the measured performance significantly.\n\n### Other Redis benchmarking tools\n\nThere are several third-party tools that can be used for benchmarking Redis. 
Refer to each tool's\ndocumentation for more information about its goals and capabilities.\n\n* [memtier_benchmark](https:\/\/github.com\/redislabs\/memtier_benchmark) from [Redis Ltd.](https:\/\/twitter.com\/RedisInc) is a NoSQL Redis and Memcache traffic generation and benchmarking tool.\n* [rpc-perf](https:\/\/github.com\/twitter\/rpc-perf) from [Twitter](https:\/\/twitter.com\/twitter) is a tool for benchmarking RPC services that supports Redis and Memcache.\n* [YCSB](https:\/\/github.com\/brianfrankcooper\/YCSB) from [Yahoo @Yahoo](https:\/\/twitter.com\/Yahoo) is a benchmarking framework with clients to many databases, including Redis. ","site":"redis","answers_cleaned":"    title   Redis benchmark  linkTitle   Benchmarking  weight  1 description        Using the redis benchmark utility on a Redis server aliases         topics benchmarks       docs reference optimization benchmarks       docs reference optimization benchmarks md        Redis includes the  redis benchmark  utility that simulates running commands done by N clients while at the same time sending M total queries  The utility provides a default set of tests  or you can supply a custom set of tests   The following options are supported       Usage  redis benchmark   h  host     p  port     c  clients     n  requests     k  boolean          h  hostname       Server hostname  default 127 0 0 1        p  port           Server port  default 6379        s  socket         Server socket  overrides host and port        a  password       Password for Redis Auth       c  clients        Number of parallel connections  default 50        n  requests       Total number of requests  default 100000        d  size           Data size of SET GET value in bytes  default 3         dbnum  db        SELECT the specified db number  default 0        k  boolean        1 keep alive 0 reconnect  default 1        r  keyspacelen    Use random keys for SET GET INCR  random values for SADD       Using this option the benchmark 
will expand the string   rand int         inside an argument with a 12 digits number in the specified range       from 0 to keyspacelen 1  The substitution changes every time a command       is executed  Default tests use this to hit random keys in the       specified range        P  numreq         Pipeline  numreq  requests  Default 1  no pipeline         q                 Quiet  Just show query sec values        csv              Output in CSV format       l                 Loop  Run the tests forever       t  tests          Only run the comma separated list of tests  The test                         names are the same as the ones produced as output        I                 Idle mode  Just open N idle connections and wait   You need to have a running Redis instance before launching the benchmark  You can run the benchmarking utility like so       redis benchmark  q  n 100000      Running only a subset of the tests  You don t need to run all the default tests every time you execute  redis benchmark   For example  to select only a subset of tests  use the   t  option as in the following example         redis benchmark  t set lpush  n 100000  q     SET  74239 05 requests per second     LPUSH  79239 30 requests per second  This example runs the tests for the  SET  and  LPUSH  commands and uses quiet mode  see the   q  switch    You can even benchmark a specific command         redis benchmark  n 100000  q script load  redis call  set   foo   bar        script load redis call  set   foo   bar    69881 20 requests per second      Selecting the size of the key space  By default  the benchmark runs against a single key  In Redis the difference between such a synthetic benchmark and a real one is not huge since it is an in memory system  however it is possible to stress cache misses and in general to simulate a more real world work load by using a large key space   This is obtained by using the   r  switch  For instance if I want to run one million SET operations  using a 
random key for every operation out of 100k possible keys  I ll use the following command line         redis cli flushall     OK        redis benchmark  t set  r 100000  n 1000000            SET              1000000 requests completed in 13 86 seconds       50 parallel clients       3 bytes payload       keep alive  1      99 76       1 milliseconds     99 98       2 milliseconds     100 00       3 milliseconds     100 00       3 milliseconds     72144 87 requests per second        redis cli dbsize      integer  99993      Using pipelining  By default every client  the benchmark simulates 50 clients if not otherwise specified with   c   sends the next command only when the reply of the previous command is received  this means that the server will likely need a read call in order to read each command from every client  Also RTT is paid as well   Redis supports  pipelining   topics pipelining   so it is possible to send multiple commands at once  a feature often exploited by real world applications  Redis pipelining is able to dramatically improve the number of operations per second a server is able do deliver   Consider this example of running the benchmark using a pipelining of 16 commands         redis benchmark  n 1000000  t set get  P 16  q     SET  403063 28 requests per second     GET  508388 41 requests per second  Using pipelining results in a significant increase in performance       Pitfalls and misconceptions  The first point is obvious  the golden rule of a useful benchmark is to only compare apples and apples  You can compare different versions of Redis on the same workload or the same version of Redis  but with different options  If you plan to compare Redis to something else  then it is important to evaluate the functional and technical differences  and take them in account     Redis is a server  all commands involve network or IPC round trips  It is meaningless to compare it to embedded data stores  because the cost of most operations is primarily in 
network protocol management    Redis commands return an acknowledgment for all usual commands  Some other data stores do not  Comparing Redis to stores involving one way queries is only mildly useful    Naively iterating on synchronous Redis commands does not benchmark Redis itself  but rather measure your network  or IPC  latency and the client library intrinsic latency  To really test Redis  you need multiple connections  like redis benchmark  and or to use pipelining to aggregate several commands and or multiple threads or processes    Redis is an in memory data store with some optional persistence options  If you plan to compare it to transactional servers  MySQL  PostgreSQL  etc       then you should consider activating AOF and decide on a suitable fsync policy    Redis is  mostly  a single threaded server from the POV of commands execution  actually modern versions of Redis use threads for different things   It is not designed to benefit from multiple CPU cores  People are supposed to launch several Redis instances to scale out on several cores if needed  It is not really fair to compare one single Redis instance to a multi threaded data store   The  redis benchmark  program is a quick and useful way to get some figures and evaluate the performance of a Redis instance on a given hardware  However  by default  it does not represent the maximum throughput a Redis instance can sustain  Actually  by using pipelining and a fast client  hiredis   it is fairly easy to write a program generating more throughput than redis benchmark  The default behavior of redis benchmark is to achieve throughput by exploiting concurrency only  i e  it creates several connections to the server   It does not use pipelining or any parallelism at all  one pending query per connection at most  and no multi threading   if not explicitly enabled via the   P  parameter  So in some way using  redis benchmark  and  triggering  for example  a  BGSAVE  operation in the background at the same 
time  will provide the user with numbers more near to the  worst case  than to the best case   To run a benchmark using pipelining mode  and achieve higher throughput   you need to explicitly use the  P option  Please note that it is still a realistic behavior since a lot of Redis based applications actively use pipelining to improve performance  However you should use a pipeline size that is more or less the average pipeline length you ll be able to use in your application in order to get realistic numbers   The benchmark should apply the same operations  and work in the same way with the multiple data stores you want to compare  It is absolutely pointless to compare the result of redis benchmark to the result of another benchmark program and extrapolate   For instance  Redis and memcached in single threaded mode can be compared on GET SET operations  Both are in memory data stores  working mostly in the same way at the protocol level  Provided their respective benchmark application is aggregating queries in the same way  pipelining  and use a similar number of connections  the comparison is actually meaningful   When you re benchmarking a high performance  in memory database like Redis  it may be difficult to saturate the server  Sometimes  the performance bottleneck is on the client side  and not the server side  In that case  the client  i e   the benchmarking program itself  must be fixed  or perhaps scaled out  to reach the maximum throughput       Factors impacting Redis performance  There are multiple factors having direct consequences on Redis performance  We mention them here  since they can alter the result of any benchmarks  Please note however  that a typical Redis instance running on a low end  untuned box usually provides good enough performance for most applications     Network bandwidth and latency usually have a direct impact on the performance  It is a good practice to use the ping program to quickly check the latency between the client and 
server hosts is normal before launching the benchmark  Regarding the bandwidth  it is generally useful to estimate the throughput in Gbit s and compare it to the theoretical bandwidth of the network  For instance a benchmark setting 4 KB strings in Redis at 100000 q s  would actually consume 3 2 Gbit s of bandwidth and probably fit within a 10 Gbit s link  but not a 1 Gbit s one  In many real world scenarios  Redis throughput is limited by the network well before being limited by the CPU  To consolidate several high throughput Redis instances on a single server  it worth considering putting a 10 Gbit s NIC or multiple 1 Gbit s NICs with TCP IP bonding    CPU is another very important factor  Being single threaded  Redis favors fast CPUs with large caches and not many cores  At this game  Intel CPUs are currently the winners  It is not uncommon to get only half the performance on an AMD Opteron CPU compared to similar Nehalem EP Westmere EP Sandy Bridge Intel CPUs with Redis  When client and server run on the same box  the CPU is the limiting factor with redis benchmark    Speed of RAM and memory bandwidth seem less critical for global performance especially for small objects  For large objects   10 KB   it may become noticeable though  Usually  it is not really cost effective to buy expensive fast memory modules to optimize Redis    Redis runs slower on a VM compared to running without virtualization using the same hardware  If you have the chance to run Redis on a physical machine this is preferred  However this does not mean that Redis is slow in virtualized environments  the delivered performances are still very good and most of the serious performance issues you may incur in virtualized environments are due to over provisioning  non local disks with high latency  or old hypervisor software that have slow  fork  syscall implementation    When the server and client benchmark programs run on the same box  both the TCP IP loopback and unix domain sockets can be 
used. Depending on the platform, unix domain sockets can achieve around 50% more throughput than the TCP\/IP loopback (on Linux for instance). The default behavior of redis-benchmark is to use the TCP\/IP loopback.\n* The performance benefit of unix domain sockets compared to TCP\/IP loopback tends to decrease when pipelining is heavily used (i.e. long pipelines).\n* When an ethernet network is used to access Redis, aggregating commands using pipelining is especially efficient when the size of the data is kept under the ethernet packet size (about 1500 bytes). Actually, processing 10-byte, 100-byte, or 1000-byte queries almost results in the same throughput. See the graph below.\n\n![Data size impact](Data_size.png)\n\n* On multi-CPU socket servers, Redis performance becomes dependent on the NUMA configuration and process location. The most visible effect is that redis-benchmark results seem non-deterministic because client and server processes are distributed randomly on the cores. To get deterministic results, it is required to use process placement tools (on Linux: taskset or numactl). The most efficient combination is always to put the client and server on two different cores of the same CPU to benefit from the L3 cache. Here are some results of a 4 KB SET benchmark for 3 server CPUs (AMD Istanbul, Intel Nehalem EX, and Intel Westmere) with different relative placements. Please note this benchmark is not meant to compare CPU models between themselves (the CPUs' exact models and frequencies are therefore not disclosed).\n\n![NUMA chart](NUMA_chart.gif)\n\n* With high-end configurations, the number of client connections is also an important factor. Being based on epoll\/kqueue, the Redis event loop is quite scalable. Redis has already been benchmarked at more than 60000 connections, and was still able to sustain 50000 q\/s in these conditions. As a rule of thumb, an instance with 30000 connections can only process half the throughput achievable with 100 connections. Here is an
example showing the throughput of a Redis instance per number of connections:\n\n![connections chart](Connections_chart.png)\n\n* With high-end configurations, it is possible to achieve higher throughput by tuning the NIC(s) configuration and associated interruptions. Best throughput is achieved by setting an affinity between Rx\/Tx NIC queues and CPU cores, and activating RPS (Receive Packet Steering) support. More information in this [thread](https:\/\/groups.google.com\/forum\/#!msg\/redis-db\/gUhc19gnYgc\/BruTPCOroiMJ). Jumbo frames may also provide a performance boost when large objects are used.\n* Depending on the platform, Redis can be compiled against different memory allocators (libc malloc, jemalloc, tcmalloc), which may have different behaviors in terms of raw speed and internal and external fragmentation. If you did not compile Redis yourself, you can use the INFO command to check the `mem_allocator` field. Please note most benchmarks do not run long enough to generate significant external fragmentation (contrary to production Redis instances).\n\n## Other things to consider\n\nOne important goal of any benchmark is to get reproducible results, so they can be compared to the results of other tests.\n\n* A good practice is to try to run tests on isolated hardware as much as possible. If it is not possible, then the system must be monitored to check that the benchmark is not impacted by some external activity.\n* Some configurations (desktops and laptops for sure, some servers as well) have a variable CPU core frequency mechanism. The policy controlling this mechanism can be set at the OS level. Some CPU models are more aggressive than others at adapting the frequency of the CPU cores to the workload. To get reproducible results, it is better to set the highest possible fixed frequency for all the CPU cores involved in the benchmark.\n* An important point is to size the system according to the benchmark. The system must have enough RAM and must not
swap. On Linux, do not forget to set the `overcommit_memory` parameter correctly. Please note 32-bit and 64-bit Redis instances do not have the same memory footprint.\n* If you plan to use RDB or AOF for your benchmark, please check there is no other I\/O activity in the system. Avoid putting RDB or AOF files on NAS or NFS shares, or on any other devices impacting your network bandwidth and\/or latency (for instance, EBS on Amazon EC2).\n* Set the Redis logging level (loglevel parameter) to warning or notice. Avoid putting the generated log file on a remote filesystem.\n* Avoid using monitoring tools which can alter the result of the benchmark. For instance, using INFO at regular intervals to gather statistics is probably fine, but MONITOR will impact the measured performance significantly.\n\n## Other Redis benchmarking tools\n\nThere are several third-party tools that can be used for benchmarking Redis. Refer to each tool's documentation for more information about its goals and capabilities.\n\n* [memtier_benchmark](https:\/\/github.com\/redislabs\/memtier_benchmark) from [Redis Ltd.](https:\/\/twitter.com\/RedisInc) is a NoSQL Redis and Memcache traffic generation and benchmarking tool.\n* [rpc-perf](https:\/\/github.com\/twitter\/rpc-perf) from [Twitter](https:\/\/twitter.com\/twitter) is a tool for benchmarking RPC services that supports Redis and Memcache.\n* [YCSB](https:\/\/github.com\/brianfrankcooper\/YCSB) from [Yahoo!](https:\/\/twitter.com\/Yahoo) is a benchmarking framework with clients to many databases, including Redis."}
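The pipelining point in the record above (aggregating commands so that many requests share one round trip) can be sketched with a toy latency model. This is a rough illustration only, not redis-benchmark output; the RTT value and batch sizes below are made-up assumptions, and server processing time is ignored:

```python
import math

def network_time_ms(num_commands: int, rtt_ms: float, pipeline_size: int = 1) -> float:
    """Round-trip cost model: the client pays one RTT per batch of
    pipelined commands instead of one RTT per command."""
    batches = math.ceil(num_commands / pipeline_size)
    return batches * rtt_ms

# 10,000 small commands over an assumed 0.1 ms loopback round trip:
print(network_time_ms(10_000, 0.1))        # no pipelining: one RTT each
print(network_time_ms(10_000, 0.1, 100))   # pipelines of 100: ~100x fewer RTTs
```

The model also shows why the benefit shrinks once pipelines are already long: going from batches of 100 to 1000 saves far less wall-clock time than going from 1 to 100.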
{"questions":"scikit-learn is the best approach for most users It will provide a stable version Installing scikit learn There are different ways to install scikit learn This installation instructions","answers":".. _installation-instructions:\n\n=======================\nInstalling scikit-learn\n=======================\n\nThere are different ways to install scikit-learn:\n\n* :ref:`Install the latest official release <install_official_release>`. This\n  is the best approach for most users. It will provide a stable version\n  and pre-built packages are available for most platforms.\n\n* Install the version of scikit-learn provided by your\n  :ref:`operating system or Python distribution <install_by_distribution>`.\n  This is a quick option for those who have operating systems or Python\n  distributions that distribute scikit-learn.\n  It might not provide the latest release version.\n\n* :ref:`Building the package from source\n  <install_bleeding_edge>`. This is best for users who want the\n  latest-and-greatest features and aren't afraid of running\n  brand-new code. This is also needed for users who wish to contribute to the\n  project.\n\n\n.. _install_official_release:\n\nInstalling the latest release\n=============================\n\n.. raw:: html\n\n  <style>\n    \/* Show caption on large screens *\/\n    @media screen and (min-width: 960px) {\n      .install-instructions .sd-tab-set {\n        --tab-caption-width: 20%;\n      }\n\n      .install-instructions .sd-tab-set.tabs-os::before {\n        content: \"Operating System\";\n      }\n\n      .install-instructions .sd-tab-set.tabs-package-manager::before {\n        content: \"Package Manager\";\n      }\n    }\n  <\/style>\n\n.. div:: install-instructions\n\n  .. tab-set::\n    :class: tabs-os\n\n    .. tab-item:: Windows\n      :class-label: tab-4\n\n      .. tab-set::\n        :class: tabs-package-manager\n\n        .. 
tab-item:: pip\n          :class-label: tab-6\n          :sync: package-manager-pip\n\n          Install the 64-bit version of Python 3, for instance from the\n          `official website <https:\/\/www.python.org\/downloads\/windows\/>`__.\n\n          Now create a `virtual environment (venv)\n          <https:\/\/docs.python.org\/3\/tutorial\/venv.html>`_ and install scikit-learn.\n          Note that the virtual environment is optional but strongly recommended, in\n          order to avoid potential conflicts with other packages.\n\n          .. prompt:: powershell\n\n            python -m venv sklearn-env\n            sklearn-env\\Scripts\\activate  # activate\n            pip install -U scikit-learn\n\n          In order to check your installation, you can use:\n\n          .. prompt:: powershell\n\n            python -m pip show scikit-learn  # show scikit-learn version and location\n            python -m pip freeze             # show all installed packages in the environment\n            python -c \"import sklearn; sklearn.show_versions()\"\n\n        .. tab-item:: conda\n          :class-label: tab-6\n          :sync: package-manager-conda\n\n          .. include:: .\/install_instructions_conda.rst\n\n    .. tab-item:: MacOS\n      :class-label: tab-4\n\n      .. tab-set::\n        :class: tabs-package-manager\n\n        .. tab-item:: pip\n          :class-label: tab-6\n          :sync: package-manager-pip\n\n          Install Python 3 using `homebrew <https:\/\/brew.sh\/>`_ (`brew install python`)\n          or by manually installing the package from the `official website\n          <https:\/\/www.python.org\/downloads\/macos\/>`__.\n\n          Now create a `virtual environment (venv)\n          <https:\/\/docs.python.org\/3\/tutorial\/venv.html>`_ and install scikit-learn.\n          Note that the virtual environment is optional but strongly recommended, in\n          order to avoid potential conflicts with other packages.\n\n          ..
prompt:: bash\n\n            python -m venv sklearn-env\n            source sklearn-env\/bin\/activate  # activate\n            pip install -U scikit-learn\n\n          In order to check your installation, you can use:\n\n          .. prompt:: bash\n\n            python -m pip show scikit-learn  # show scikit-learn version and location\n            python -m pip freeze             # show all installed packages in the environment\n            python -c \"import sklearn; sklearn.show_versions()\"\n\n        .. tab-item:: conda\n          :class-label: tab-6\n          :sync: package-manager-conda\n\n          .. include:: .\/install_instructions_conda.rst\n\n    .. tab-item:: Linux\n      :class-label: tab-4\n\n      .. tab-set::\n        :class: tabs-package-manager\n\n        .. tab-item:: pip\n          :class-label: tab-6\n          :sync: package-manager-pip\n\n          Python 3 is usually installed by default on most Linux distributions. To\n          check if you have it installed, try:\n\n          .. prompt:: bash\n\n            python3 --version\n            pip3 --version\n\n          If you don't have Python 3 installed, please install `python3` and\n          `python3-pip` from your distribution's package manager.\n\n          Now create a `virtual environment (venv)\n          <https:\/\/docs.python.org\/3\/tutorial\/venv.html>`_ and install scikit-learn.\n          Note that the virtual environment is optional but strongly recommended, in\n          order to avoid potential conflicts with other packages.\n\n          .. prompt:: bash\n\n            python3 -m venv sklearn-env\n            source sklearn-env\/bin\/activate  # activate\n            pip3 install -U scikit-learn\n\n          In order to check your installation, you can use:\n\n          .. 
prompt:: bash\n\n            python3 -m pip show scikit-learn  # show scikit-learn version and location\n            python3 -m pip freeze             # show all installed packages in the environment\n            python3 -c \"import sklearn; sklearn.show_versions()\"\n\n        .. tab-item:: conda\n          :class-label: tab-6\n          :sync: package-manager-conda\n\n          .. include:: .\/install_instructions_conda.rst\n\n\nUsing an isolated environment such as pip venv or conda makes it possible to\ninstall a specific version of scikit-learn with pip or conda and its dependencies\nindependently of any previously installed Python packages. In particular under Linux\nit is discouraged to install pip packages alongside the packages managed by the\npackage manager of the distribution (apt, dnf, pacman...).\n\nNote that you should always remember to activate the environment of your choice\nprior to running any Python command whenever you start a new terminal session.\n\nIf you have not installed NumPy or SciPy yet, you can also install these using\nconda or pip. When using pip, please ensure that *binary wheels* are used,\nand NumPy and SciPy are not recompiled from source, which can happen when using\nparticular configurations of operating system and hardware (such as Linux on\na Raspberry Pi).\n\nScikit-learn plotting capabilities (i.e., functions starting with `plot\\_`\nand classes ending with `Display`) require Matplotlib. The examples require\nMatplotlib and some examples require scikit-image, pandas, or seaborn. The\nminimum versions of scikit-learn dependencies are listed below along with their\npurpose.\n\n.. include:: min_dependency_table.rst\n\n..
warning::\n\n    Scikit-learn 0.20 was the last version to support Python 2.7 and Python 3.4.\n    Scikit-learn 0.21 supported Python 3.5-3.7.\n    Scikit-learn 0.22 supported Python 3.5-3.8.\n    Scikit-learn 0.23-0.24 required Python 3.6 or newer.\n    Scikit-learn 1.0 supported Python 3.7-3.10.\n    Scikit-learn 1.1, 1.2 and 1.3 support Python 3.8-3.12.\n    Scikit-learn 1.4 requires Python 3.9 or newer.\n\n.. _install_by_distribution:\n\nThird party distributions of scikit-learn\n=========================================\n\nSome third-party distributions provide versions of\nscikit-learn integrated with their package-management systems.\n\nThese can make installation and upgrading much easier for users since\nthe integration includes the ability to automatically install\ndependencies (numpy, scipy) that scikit-learn requires.\n\nThe following is an incomplete list of OS and python distributions\nthat provide their own version of scikit-learn.\n\nAlpine Linux\n------------\n\nAlpine Linux's package is provided through the `official repositories\n<https:\/\/pkgs.alpinelinux.org\/packages?name=py3-scikit-learn>`__ as\n``py3-scikit-learn`` for Python.\nIt can be installed by typing the following command:\n\n.. prompt:: bash\n\n  sudo apk add py3-scikit-learn\n\n\nArch Linux\n----------\n\nArch Linux's package is provided through the `official repositories\n<https:\/\/www.archlinux.org\/packages\/?q=scikit-learn>`_ as\n``python-scikit-learn`` for Python.\nIt can be installed by typing the following command:\n\n..
prompt:: bash\n\n  sudo pacman -S python-scikit-learn\n\n\nDebian\/Ubuntu\n-------------\n\nThe Debian\/Ubuntu package is split into three different packages called\n``python3-sklearn`` (python modules), ``python3-sklearn-lib`` (low-level\nimplementations and bindings), ``python-sklearn-doc`` (documentation).\nNote that scikit-learn requires Python 3, hence the need to use the `python3-`\nsuffixed package names.\nPackages can be installed using ``apt-get``:\n\n.. prompt:: bash\n\n  sudo apt-get install python3-sklearn python3-sklearn-lib python-sklearn-doc\n\n\nFedora\n------\n\nThe Fedora package is called ``python3-scikit-learn`` for the python 3 version,\nthe only one available in Fedora.\nIt can be installed using ``dnf``:\n\n.. prompt:: bash\n\n  sudo dnf install python3-scikit-learn\n\n\nNetBSD\n------\n\nscikit-learn is available via `pkgsrc-wip <http:\/\/pkgsrc-wip.sourceforge.net\/>`_:\nhttps:\/\/pkgsrc.se\/math\/py-scikit-learn\n\n\nMacPorts for Mac OSX\n--------------------\n\nThe MacPorts package is named ``py<XY>-scikits-learn``,\nwhere ``XY`` denotes the Python version.\nIt can be installed by typing the following\ncommand:\n\n.. prompt:: bash\n\n  sudo port install py39-scikit-learn\n\n\nAnaconda and Enthought Deployment Manager for all supported platforms\n---------------------------------------------------------------------\n\n`Anaconda <https:\/\/www.anaconda.com\/download>`_ and\n`Enthought Deployment Manager <https:\/\/assets.enthought.com\/downloads\/>`_\nboth ship with scikit-learn in addition to a large set of scientific\nPython libraries for Windows, Mac OSX and Linux.\n\nAnaconda offers scikit-learn as part of its free distribution.\n\n\nIntel Extension for Scikit-learn\n--------------------------------\n\nIntel maintains an optimized x86_64 package, available in PyPI (via `pip`),\nand in the `main`, `conda-forge` and `intel` conda channels:\n\n..
prompt:: bash\n\n  conda install scikit-learn-intelex\n\nThis package has an Intel optimized version of many estimators. Whenever\nan alternative implementation doesn't exist, the scikit-learn implementation\nis used as a fallback. Those optimized solvers come from the oneDAL\nC++ library and are optimized for the x86_64 architecture and for\nmulti-core Intel CPUs.\n\nNote that those solvers are not enabled by default; please refer to the\n`scikit-learn-intelex <https:\/\/intel.github.io\/scikit-learn-intelex\/latest\/what-is-patching.html>`_\ndocumentation for more details on usage scenarios. Direct export example:\n\n.. prompt:: python >>>\n\n  from sklearnex.neighbors import NearestNeighbors\n\nCompatibility with the standard scikit-learn solvers is checked by running the\nfull scikit-learn test suite via automated continuous integration as reported\non https:\/\/github.com\/intel\/scikit-learn-intelex. If you observe any issue\nwith `scikit-learn-intelex`, please report the issue on their\n`issue tracker <https:\/\/github.com\/intel\/scikit-learn-intelex\/issues>`__.\n\n\nWinPython for Windows\n---------------------\n\nThe `WinPython <https:\/\/winpython.github.io\/>`_ project distributes\nscikit-learn as an additional plugin.\n\n\nTroubleshooting\n===============\n\nIf you encounter unexpected failures when installing scikit-learn, you may submit\nan issue to the `issue tracker <https:\/\/github.com\/scikit-learn\/scikit-learn\/issues>`_.\nBefore that, please also make sure to check the following common issues.\n\n..
_windows_longpath:\n\nError caused by file path length limit on Windows\n-------------------------------------------------\n\nIt can happen that pip fails to install packages when reaching the default path\nsize limit of Windows if Python is installed in a nested location such as the\n`AppData` folder structure under the user home directory, for instance::\n\n    C:\\Users\\username>C:\\Users\\username\\AppData\\Local\\Microsoft\\WindowsApps\\python.exe -m pip install scikit-learn\n    Collecting scikit-learn\n    ...\n    Installing collected packages: scikit-learn\n    ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\\\\Users\\\\username\\\\AppData\\\\Local\\\\Packages\\\\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\\\\LocalCache\\\\local-packages\\\\Python37\\\\site-packages\\\\sklearn\\\\datasets\\\\tests\\\\data\\\\openml\\\\292\\\\api-v1-json-data-list-data_name-australian-limit-2-data_version-1-status-deactivated.json.gz'\n\nIn this case it is possible to lift that limit in the Windows registry by\nusing the ``regedit`` tool:\n\n#. Type \"regedit\" in the Windows start menu to launch ``regedit``.\n\n#. Go to the\n   ``Computer\\HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\FileSystem``\n   key.\n\n#. Edit the value of the ``LongPathsEnabled`` property of that key and set\n   it to 1.\n\n#. Reinstall scikit-learn (ignoring the previous broken installation):\n\n   .. 
prompt:: powershell\n\n      pip install --exists-action=i scikit-learn","site":"scikit-learn"}
{"questions":"scikit-learn html roadmap strike","answers":".. |ss| raw:: html\n\n   <strike>\n\n.. |se| raw:: html\n\n   <\/strike>\n\n.. _roadmap:\n\nRoadmap\n=======\n\nPurpose of this document\n------------------------\nThis document lists general directions that core contributors are interested\nin seeing developed in scikit-learn. The fact that an item is listed here is in\nno way a promise that it will happen, as resources are limited. Rather, it\nis an indication that help is welcomed on this topic.\n\nStatement of purpose: Scikit-learn in 2018\n------------------------------------------\nEleven years after the inception of Scikit-learn, much has changed in the\nworld of machine learning. Key changes include:\n\n* Computational tools: The exploitation of GPUs, distributed programming\n  frameworks like Scala\/Spark, etc.\n* High-level Python libraries for experimentation, processing and data\n  management: Jupyter notebook, Cython, Pandas, Dask, Numba...\n* Changes in the focus of machine learning research: artificial intelligence\n  applications (where input structure is key) with deep learning,\n  representation learning, reinforcement learning, domain transfer, etc.\n\nA more subtle change over the last decade is that, due to changing interests\nin ML, PhD students in machine learning are more likely to contribute to\nPyTorch, Dask, etc. than to Scikit-learn, so our contributor pool is very\ndifferent to a decade ago.\n\nScikit-learn remains very popular in practice for trying out canonical\nmachine learning techniques, particularly for applications in experimental\nscience and in data science. A lot of what we provide is now very mature.\nBut it can be costly to maintain, and we cannot therefore include arbitrary\nnew implementations. 
Yet Scikit-learn is also essential in defining an API\nframework for the development of interoperable machine learning components\nexternal to the core library.\n\n**Thus our main goals in this era are to**:\n\n* continue maintaining a high-quality, well-documented collection of canonical\n  tools for data processing and machine learning within the current scope\n  (i.e. rectangular data largely invariant to column and row order;\n  predicting targets with simple structure)\n* improve the ease for users to develop and publish external components\n* improve interoperability with modern data science tools (e.g. Pandas, Dask)\n  and infrastructures (e.g. distributed processing)\n\nMany of the more fine-grained goals can be found under the `API tag\n<https:\/\/github.com\/scikit-learn\/scikit-learn\/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3AAPI>`_\non the issue tracker.\n\nArchitectural \/ general goals\n-----------------------------\nThe list is numbered not as an indication of the order of priority, but to\nmake referring to specific points easier. Please add new entries only at the\nbottom. Note that the crossed out entries are already done, and we try to keep\nthe document up to date as we work on these issues.\n\n\n#. Improved handling of Pandas DataFrames\n\n   * document current handling\n\n#. Improved handling of categorical features\n\n   * Tree-based models should be able to handle both continuous and categorical\n     features :issue:`29437`.\n   * Handling mixtures of categorical and continuous variables\n\n#. Improved handling of missing data\n\n   * Making sure meta-estimators are lenient towards missing data by implementing\n     a common test.\n   * An amputation sample generator to make parts of a dataset go missing\n     :issue:`6284`\n\n#. More didactic documentation\n\n   * More and more options have been added to scikit-learn. 
As a result, the\n     documentation is crowded which makes it hard for beginners to get the big\n     picture. Some work could be done in prioritizing the information.\n\n#. Passing around information that is not (X, y): Feature properties\n\n   * Per-feature handling (e.g. \"is this a nominal \/ ordinal \/ English language\n     text?\") should also not need to be provided to estimator constructors,\n     ideally, but should be available as metadata alongside X. :issue:`8480`\n\n#. Passing around information that is not (X, y): Target information\n\n   * We have problems getting the full set of classes to all components when\n     the data is split\/sampled. :issue:`6231` :issue:`8100`\n   * We have no way to handle a mixture of categorical and continuous targets.\n\n#. Make it easier for external users to write Scikit-learn-compatible\n   components\n\n   * More self-sufficient running of scikit-learn-contrib or a similar resource\n\n#. Support resampling and sample reduction\n\n   * Allow subsampling of majority classes (in a pipeline?) :issue:`3855`\n\n#. Better interfaces for interactive development\n\n   * Improve the HTML visualisations of estimators via the `estimator_html_repr`.\n   * Include more plotting tools, not just as examples.\n\n#. Improved tools for model diagnostics and basic inference\n\n   * work on a unified interface for \"feature importance\"\n   * better ways to handle validation sets when fitting\n\n#. Better tools for selecting hyperparameters with transductive estimators\n\n   * Grid search and cross validation are not applicable to most clustering\n     tasks. Stability-based selection is more relevant.\n\n#. Better support for manual and automatic pipeline building\n\n   * Easier way to construct complex pipelines and valid search spaces\n     :issue:`7608` :issue:`5082` :issue:`8243`\n   * provide search ranges for common estimators??\n   * cf. `searchgrid <https:\/\/searchgrid.readthedocs.io\/en\/latest\/>`_\n\n#. 
Improved tracking of fitting\n\n   * Verbose is not very friendly and should use a standard logging library\n     :issue:`6929`, :issue:`78`\n   * Callbacks or a similar system would facilitate logging and early stopping\n\n#. Distributed parallelism\n\n   * Accept data which complies with ``__array_function__``\n\n#. A way forward for more out of core\n\n   * Dask enables easy out-of-core computation. While the Dask model probably\n     cannot be adapted to all machine-learning algorithms, most machine\n     learning is on smaller data than ETL, hence we can maybe adapt to very\n     large scale while supporting only a fraction of the patterns.\n\n#. Backwards-compatible de\/serialization of some estimators\n\n   * Currently serialization (with pickle) breaks across versions. While we may\n     not be able to get around other limitations of pickle regarding security\n     etc., it would be great to offer cross-version safety from version 1.0.\n     Note: Gael and Olivier think that this can cause a heavy maintenance\n     burden and we should manage the trade-offs. A possible alternative is\n     presented in the following point.\n\n#. 
Documentation and tooling for model lifecycle management\n\n   * Document good practices for model deployments and lifecycle: before\n     deploying a model, snapshot the code versions (numpy, scipy, scikit-learn,\n     custom code repo), the training script and an alias on how to retrieve\n     historical training data + snapshot a copy of a small validation set +\n     snapshot of the predictions (predicted probabilities for classifiers)\n     on that validation set.\n   * Documentation and tools to make it easy to manage upgrades of scikit-learn\n     versions:\n\n     * Try to load the old pickle; if it works, use the validation set\n       prediction snapshot to detect that the serialized model still behaves\n       the same;\n     * If joblib.load \/ pickle.load does not work, use the version-controlled\n       training script + historical training set to retrain the model and use\n       the validation set prediction snapshot to assert that it is possible to\n       recover the previous predictive performance: if this is not the case\n       there is probably a bug in scikit-learn that needs to be reported.\n\n#. Everything in scikit-learn should probably conform to our API contract.\n   We are still in the process of making decisions on some of these related\n   issues.\n\n   * `Pipeline <pipeline.Pipeline>` and `FeatureUnion` modify their input\n     parameters in fit. Fixing this requires making sure we have a good\n     grasp of their use cases to make sure all current functionality is\n     maintained. :issue:`8157` :issue:`7382`\n\n#. 
(Optional) Improve scikit-learn common test suite to make sure that (at\n   least for frequently used) models have stable predictions across versions\n   (to be discussed);\n\n   * Extend documentation to mention how to deploy models in Python-free\n     environments, for instance `ONNX <https:\/\/github.com\/onnx\/sklearn-onnx>`_,\n     and use the above best practices to assess predictive consistency between\n     scikit-learn and ONNX prediction functions on a validation set.\n   * Document good practices to detect temporal distribution drift for deployed\n     models and good practices for re-training on fresh data without causing\n     catastrophic predictive performance regressions.","site":"scikit-learn"}
{"questions":"scikit-learn There are several channels to connect with scikit learn developers for assistance feedback or contributions Note Communications on all channels should respect our announcementsandnotification Support","answers":"=======\nSupport\n=======\n\nThere are several channels to connect with scikit-learn developers for assistance, feedback, or contributions.\n\n**Note**: Communications on all channels should respect our `Code of Conduct <https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/main\/CODE_OF_CONDUCT.md>`_.\n\n\n.. _announcements_and_notification:\n\nMailing Lists\n=============\n\n- **Main Mailing List**: Join the primary discussion \n  platform for scikit-learn at `scikit-learn Mailing List       \n  <https:\/\/mail.python.org\/mailman\/listinfo\/scikitlearn>`_.\n\n- **Commit Updates**: Stay informed about repository \n  updates and test failures on the `scikit-learn-commits list \n  <https:\/\/lists.sourceforge.net\/lists\/listinfo\/scikit-learn-commits>`_.\n\n.. _user_questions:\n\nUser Questions\n==============\n\nIf you have questions, this is our general workflow.\n\n- **Stack Overflow**: Some scikit-learn developers support users using the \n  `[scikit-learn] <https:\/\/stackoverflow.com\/questions\/tagged\/scikit-learn>`_ \n  tag.\n\n- **General Machine Learning Queries**: For broader machine learning \n  discussions, visit `Stack Exchange <https:\/\/stats.stackexchange.com\/>`_.\n\nWhen posting questions:\n\n- Please use a descriptive question in the title field (e.g. 
no \"Please \n  help with scikit-learn!\" as this is not a question) \n\n- Provide detailed context, expected results, and actual observations.\n\n- Include code and data snippets (preferably minimalistic scripts, \n  up to ~20 lines).\n\n- Describe your data and preprocessing steps, including sample size, \n  feature types (categorical or numerical), and the target for supervised \n  learning tasks (classification type or regression).\n\n**Note**: Avoid asking user questions on the bug tracker to keep \nthe focus on development.\n\n- `GitHub Discussions <https:\/\/github.com\/scikit-learn\/scikit-learn\/discussions>`_\n  Usage questions, such as methodological ones\n\n- `Stack Overflow <https:\/\/stackoverflow.com\/questions\/tagged\/scikit-learn>`_\n  Programming\/user questions with the `[scikit-learn]` tag\n\n- `GitHub Bug Tracker <https:\/\/github.com\/scikit-learn\/scikit-learn\/issues>`_\n  Bug reports - Please do not ask usage questions on the issue tracker.\n\n- `Discord Server <https:\/\/discord.gg\/h9qyrK8Jc8>`_\n  Current pull requests - Post any specific PR-related questions on your PR, \n  and you can share a link to your PR on this server.\n\n.. _bug_tracker:\n\nBug Tracker\n===========\n\nEncountered a bug? Report it on our `issue tracker\n<https:\/\/github.com\/scikit-learn\/scikit-learn\/issues>`_.\n\nInclude in your report:\n\n- Steps or scripts to reproduce the bug.\n\n- Expected and observed outcomes.\n\n- Python or gdb tracebacks, if applicable.\n\n- The ideal bug report contains a :ref:`short reproducible code snippet\n  <minimal_reproducer>`; this way anyone can try to reproduce the bug easily.\n\n- If your snippet is longer than around 50 lines, please link to a \n  `gist <https:\/\/gist.github.com>`_ or a GitHub repo.\n\n**Tip**: Gists are Git repositories; you can push data files to them using Git.\n\n.. _social_media:\n\nSocial Media\n============\n\nscikit-learn has a presence on various social media platforms to share\nupdates with the community. 
The platforms are not monitored for user\nquestions.\n\n.. _gitter:\n\nGitter\n======\n\n**Note**: The scikit-learn Gitter room is no longer an active community. \nFor live discussions and support, please refer to the other channels \nmentioned in this document.\n\n.. _documentation_resources:\n\nDocumentation Resources\n=======================\n\nThis documentation is for |release|. Find documentation for other versions \n`here <https:\/\/scikit-learn.org\/dev\/versions.html>`__.\n\nOlder versions' printable PDF documentation is available `here\n<https:\/\/sourceforge.net\/projects\/scikit-learn\/files\/documentation\/>`_.\nBuilding the PDF documentation is no longer supported on the website,\nbut you can still generate it locally by following the\n:ref:`building documentation instructions <building_documentation>`.","site":"scikit-learn"}
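The bug-report guidance above asks for environment details. scikit-learn itself provides :func:`sklearn.show_versions` for this purpose; a stdlib-only fallback, sketched here with a hypothetical helper name, collects the basics:

```python
import platform
import sys


def environment_report():
    """Collect basic environment details worth pasting into a bug report.
    (When scikit-learn is installed, sklearn.show_versions() prints a
    fuller report including dependency versions.)"""
    return {
        "python": sys.version.split()[0],
        "implementation": platform.python_implementation(),
        "platform": platform.platform(),
    }


# Print one "key: value" line per entry, ready to paste into a report.
for key, value in environment_report().items():
    print(f"{key}: {value}")
```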
{"questions":"scikit-learn Summary of model persistence methods widths 25 50 50 header rows 1 modelpersistence Model persistence","answers":".. _model_persistence:\n\n=================\nModel persistence\n=================\n\n.. list-table:: Summary of model persistence methods\n   :widths: 25 50 50\n   :header-rows: 1\n\n   * - Persistence method\n     - Pros\n     - Risks \/ Cons\n   * - :ref:`ONNX <onnx_persistence>`\n     - * Serve models without a Python environment\n       * Serving and training environments independent of one another\n       * Most secure option\n     - * Not all scikit-learn models are supported\n       * Custom estimators require more work to support\n       * Original Python object is lost and cannot be reconstructed\n   * - :ref:`skops_persistence`\n     - * More secure than `pickle` based formats\n       * Contents can be partly validated without loading\n     - * Not as fast as `pickle` based formats\n       * Supports fewer types than `pickle` based formats\n       * Requires the same environment as the training environment\n   * - :mod:`pickle`\n     - * Native to Python\n       * Can serialize most Python objects\n       * Efficient memory usage with `protocol=5`\n     - * Loading can execute arbitrary code\n       * Requires the same environment as the training environment\n   * - :mod:`joblib`\n     - * Efficient memory usage\n       * Supports memory mapping\n       * Easy shortcuts for compression and decompression\n     - * Pickle based format\n       * Loading can execute arbitrary code\n       * Requires the same environment as the training environment\n   * - `cloudpickle`_\n     - * Can serialize non-packaged, custom Python code\n       * Loading efficiency comparable to :mod:`pickle` with `protocol=5`\n     - * Pickle based format\n       * Loading can execute arbitrary code\n       * No forward compatibility guarantees\n       * Requires the same environment as the training environment\n\nAfter training a scikit-learn 
model, it is desirable to have a way to persist\nthe model for future use without having to retrain. Based on your use-case,\nthere are a few different ways to persist a scikit-learn model, and here we\nhelp you decide which one suits you best. In order to make a decision, you need\nto answer the following questions:\n\n1. Do you need the Python object after persistence, or do you only need to\n   persist in order to serve the model and get predictions out of it?\n\nIf you only need to serve the model and no further investigation on the Python\nobject itself is required, then :ref:`ONNX <onnx_persistence>` might be the\nbest fit for you. Note that not all models are supported by ONNX.\n\nIn case ONNX is not suitable for your use-case, the next question is:\n\n2. Do you absolutely trust the source of the model, or are there any security\n   concerns regarding where the persisted model comes from?\n\nIf you have security concerns, then you should consider using :ref:`skops.io\n<skops_persistence>` which gives you back the Python object, but unlike\n`pickle` based persistence solutions, loading the persisted model doesn't\nautomatically allow arbitrary code execution. Note that this requires manual\ninvestigation of the persisted file, which :mod:`skops.io` allows you to do.\n\nThe other solutions assume you absolutely trust the source of the file to be\nloaded, as they are all susceptible to arbitrary code execution upon loading\nthe persisted file since they all use the pickle protocol under the hood.\n\n3. Do you care about the performance of loading the model, and sharing it\n   between processes where a memory mapped object on disk is beneficial?\n\nIf yes, then you can consider using :ref:`joblib <pickle_persistence>`. If this\nis not a major concern for you, then you can use the built-in :mod:`pickle`\nmodule.\n\n4. Did you try :mod:`pickle` or :mod:`joblib` and found that the model cannot\n   be persisted? 
It can happen for instance when you have user defined\n   functions in your model.\n\nIf yes, then you can use `cloudpickle`_ which can serialize certain objects\nwhich cannot be serialized by :mod:`pickle` or :mod:`joblib`.\n\n\nWorkflow Overview\n-----------------\n\nIn a typical workflow, the first step is to train the model using scikit-learn\nand scikit-learn compatible libraries. Note that support for scikit-learn and\nthird party estimators varies across the different persistence methods.\n\nTrain and Persist the Model\n...........................\n\nCreating an appropriate model depends on your use-case. As an example, here we\ntrain a :class:`sklearn.ensemble.HistGradientBoostingClassifier` on the iris\ndataset::\n\n  >>> from sklearn import ensemble\n  >>> from sklearn import datasets\n  >>> clf = ensemble.HistGradientBoostingClassifier()\n  >>> X, y = datasets.load_iris(return_X_y=True)\n  >>> clf.fit(X, y)\n  HistGradientBoostingClassifier()\n\nOnce the model is trained, you can persist it using your desired method, and\nthen you can load the model in a separate environment and get predictions from\nit given input data. Here there are two major paths depending on how you\npersist and plan to serve the model:\n\n- :ref:`ONNX <onnx_persistence>`: You need an `ONNX` runtime and an environment\n  with appropriate dependencies installed to load the model and use the runtime\n  to get predictions. This environment can be minimal and does not necessarily\n  even require Python to be installed to load the model and compute\n  predictions. Also note that `onnxruntime` typically requires much less RAM\n  than Python to compute predictions from small models.\n\n- :mod:`skops.io`, :mod:`pickle`, :mod:`joblib`, `cloudpickle`_: You need a\n  Python environment with the appropriate dependencies installed to load the\n  model and get predictions from it. 
This environment should have the same\n  **packages** and the same **versions** as the environment where the model was\n  trained. Note that none of these methods support loading a model trained with\n  a different version of scikit-learn, and possibly different versions of other\n  dependencies such as `numpy` and `scipy`. Another concern would be running\n  the persisted model on different hardware; in most cases you should be\n  able to load your persisted model on different hardware.\n\n\n.. _onnx_persistence:\n\nONNX\n----\n\nThe `ONNX`, or `Open Neural Network Exchange <https:\/\/onnx.ai\/>`__, format is\nbest suited to use-cases where one needs to persist the model and then use the\npersisted artifact to get predictions without the need to load the Python\nobject itself. It is also useful in cases where the serving environment needs\nto be lean and minimal, since the `ONNX` runtime does not require `python`.\n\n`ONNX` is a binary serialization of the model. It has been developed to improve\nthe usability of the interoperable representation of data models. It aims to\nfacilitate the conversion of data models between different machine learning\nframeworks, and to improve their portability on different computing\narchitectures. More details are available from the `ONNX tutorial\n<https:\/\/onnx.ai\/get-started.html>`__. To convert scikit-learn models to `ONNX`,\n`sklearn-onnx <http:\/\/onnx.ai\/sklearn-onnx\/>`__ has been developed. However,\nnot all scikit-learn models are supported, and it is limited to core\nscikit-learn and does not support most third party estimators. One can write a\ncustom converter for third party or custom estimators, but the documentation to\ndo that is sparse and it might be challenging to do so.\n\n.. 
dropdown:: Using ONNX\n\n  To convert the model to `ONNX` format, you need to give the converter some\n  information about the input as well, about which you can read more `here\n  <http:\/\/onnx.ai\/sklearn-onnx\/index.html>`__::\n\n      from skl2onnx import to_onnx\n      onx = to_onnx(clf, X[:1].astype(numpy.float32), target_opset=12)\n      with open(\"filename.onnx\", \"wb\") as f:\n          f.write(onx.SerializeToString())\n\n  You can load the model in Python and use the `ONNX` runtime to get\n  predictions::\n\n      from onnxruntime import InferenceSession\n      with open(\"filename.onnx\", \"rb\") as f:\n          onx = f.read()\n      sess = InferenceSession(onx, providers=[\"CPUExecutionProvider\"])\n      pred_ort = sess.run(None, {\"X\": X_test.astype(numpy.float32)})[0]\n\n.. _skops_persistence:\n\n`skops.io`\n----------\n\n:mod:`skops.io` avoids using :mod:`pickle` and only loads files which have types\nand references to functions which are trusted either by default or by the user.\nTherefore it provides a more secure format than :mod:`pickle`, :mod:`joblib`,\nand `cloudpickle`_.\n\n\n.. dropdown:: Using skops\n\n  The API is very similar to :mod:`pickle`, and you can persist your models as\n  explained in the `documentation\n  <https:\/\/skops.readthedocs.io\/en\/stable\/persistence.html>`__ using\n  :func:`skops.io.dump` and :func:`skops.io.dumps`::\n\n      import skops.io as sio\n      obj = sio.dump(clf, \"filename.skops\")\n\n  And you can load them back using :func:`skops.io.load` and\n  :func:`skops.io.loads`. However, you need to specify the types which are\n  trusted by you. 
You can get existing unknown types in a dumped object \/ file\n  using :func:`skops.io.get_untrusted_types`, and after checking its contents,\n  pass it to the load function::\n\n      unknown_types = sio.get_untrusted_types(file=\"filename.skops\")\n      # investigate the contents of unknown_types, and only load if you trust\n      # everything you see.\n      clf = sio.load(\"filename.skops\", trusted=unknown_types)\n\n  Please report issues and feature requests related to this format on the `skops\n  issue tracker <https:\/\/github.com\/skops-dev\/skops\/issues>`__.\n\n\n.. _pickle_persistence:\n\n`pickle`, `joblib`, and `cloudpickle`\n-------------------------------------\n\nThese three modules \/ packages use the `pickle` protocol under the hood, but\ncome with slight variations:\n\n- :mod:`pickle` is a module from the Python Standard Library. It can serialize\n  and deserialize any Python object, including custom Python classes and\n  objects.\n- :mod:`joblib` is more efficient than `pickle` when working with large machine\n  learning models or large numpy arrays.\n- `cloudpickle`_ can serialize certain objects which cannot be serialized by\n  :mod:`pickle` or :mod:`joblib`, such as user-defined functions and lambda\n  functions. This can happen, for instance, when using a\n  :class:`~sklearn.preprocessing.FunctionTransformer` with a custom\n  function to transform the data.\n\n.. 
dropdown:: Using `pickle`, `joblib`, or `cloudpickle`\n\n  Depending on your use case, you can choose one of these three methods to\n  persist and load your scikit-learn model, and they all follow the same API::\n\n      # Here you can replace pickle with joblib or cloudpickle\n      from pickle import dump\n      with open(\"filename.pkl\", \"wb\") as f:\n          dump(clf, f, protocol=5)\n\n  Using `protocol=5` is recommended to reduce memory usage and make it faster to\n  store and load any large NumPy array stored as a fitted attribute in the model.\n  You can alternatively pass `protocol=pickle.HIGHEST_PROTOCOL`, which is\n  equivalent to `protocol=5` in Python 3.8 and later (at the time of writing).\n\n  And later when needed, you can load the same object from the persisted file::\n\n      # Here you can replace pickle with joblib or cloudpickle\n      from pickle import load\n      with open(\"filename.pkl\", \"rb\") as f:\n          clf = load(f)\n\n.. _persistence_limitations:\n\nSecurity & Maintainability Limitations\n--------------------------------------\n\n:mod:`pickle` (and :mod:`joblib` and `cloudpickle`_ by extension) has\nmany documented security vulnerabilities by design and should only be used if\nthe artifact, i.e. the pickle file, comes from a trusted and verified\nsource. You should never load a pickle file from an untrusted source, similarly\nto how you should never execute code from an untrusted source.\n\nAlso note that arbitrary computations can be represented using the `ONNX`\nformat, and it is therefore recommended to serve models using `ONNX` in a\nsandboxed environment to safeguard against computational and memory exploits.\n\nAlso note that there are no supported ways to load a model trained with a\ndifferent version of scikit-learn. 
While using :mod:`skops.io`, :mod:`joblib`,\n:mod:`pickle`, or `cloudpickle`_, models saved using one version of\nscikit-learn might load in other versions; however, this is entirely\nunsupported and inadvisable. It should also be kept in mind that operations\nperformed on such data could give different and unexpected results, or even\ncrash your Python process.\n\nIn order to rebuild a similar model with future versions of scikit-learn,\nadditional metadata should be saved along with the pickled model:\n\n* The training data, e.g. a reference to an immutable snapshot\n* The Python source code used to generate the model\n* The versions of scikit-learn and its dependencies\n* The cross-validation score obtained on the training data\n\nThis should make it possible to check that the cross-validation score is in the\nsame range as before.\n\nAside from a few exceptions, persisted models should be portable across\noperating systems and hardware architectures assuming the same versions of\ndependencies and Python are used. If you encounter an estimator that is not\nportable, please open an issue on GitHub. Persisted models are often deployed\nin production using containers like Docker, in order to freeze the environment\nand dependencies.\n\nIf you want to know more about these issues, please refer to these talks:\n\n- `Adrin Jalali: Let's exploit pickle, and skops to the rescue! | PyData\n  Amsterdam 2023 <https:\/\/www.youtube.com\/watch?v=9w_H5OSTO9A>`__.\n- `Alex Gaynor: Pickles are for Delis, not Software - PyCon 2014\n  <https:\/\/pyvideo.org\/video\/2566\/pickles-are-for-delis-not-software>`__.\n\n\n.. _serving_environment:\n\nReplicating the training environment in production\n..................................................\n\nIf the versions of the dependencies used in training differ from those in\nproduction, it may result in unexpected behaviour and errors while using the\ntrained model. 
To prevent such\nsituations, it is recommended to use the same\ndependencies and versions in both the training and production environments.\nThese transitive dependencies can be pinned with the help of package management\ntools like `pip`, `mamba`, `conda`, `poetry`, `conda-lock`, `pixi`, etc.\n\nIt is not always possible to load a model trained with older versions of the\nscikit-learn library and its dependencies in an updated software environment.\nInstead, you might need to retrain the model with the new versions of all\nthe libraries. So when training a model, it is important to record the training\nrecipe (e.g. a Python script) and training set information, and metadata about\nall the dependencies to be able to automatically reconstruct the same training\nenvironment for the updated software.\n\n.. dropdown:: InconsistentVersionWarning\n\n  When an estimator is loaded with a scikit-learn version that is inconsistent\n  with the version the estimator was pickled with, a\n  :class:`~sklearn.exceptions.InconsistentVersionWarning` is raised. This warning\n  can be caught to obtain the original version the estimator was pickled with::\n\n    import pickle\n    import warnings\n\n    from sklearn.exceptions import InconsistentVersionWarning\n\n    warnings.simplefilter(\"error\", InconsistentVersionWarning)\n\n    try:\n        with open(\"model_from_previous_version.pickle\", \"rb\") as f:\n            est = pickle.load(f)\n    except InconsistentVersionWarning as w:\n        print(w.original_sklearn_version)\n\n\nServing the model artifact\n..........................\n\nThe last step after training a scikit-learn model is serving the model.\nOnce the trained model is successfully loaded, it can be served to manage\ndifferent prediction requests. 
This can involve deploying the model as a\nweb service using containerization, or other model deployment strategies,\naccording to the specifications.\n\n\nSummarizing the key points\n--------------------------\n\nBased on the different approaches for model persistence, the key points for\neach approach can be summarized as follows:\n\n* `ONNX`: It provides a uniform format for persisting any machine learning or\n  deep learning model (not only scikit-learn models) and is useful for model\n  inference (predictions). It can, however, result in compatibility issues with\n  different frameworks.\n* :mod:`skops.io`: Trained scikit-learn models can be easily shared and put\n  into production using :mod:`skops.io`. It is more secure than\n  alternative approaches based on :mod:`pickle` because it does not load\n  arbitrary code unless explicitly asked for by the user. Such code needs to be\n  packaged and importable in the target Python environment.\n* :mod:`joblib`: Efficient memory mapping techniques make it faster to use\n  the same persisted model in multiple Python processes, when\n  `mmap_mode=\"r\"` is used. It also gives easy shortcuts to compress and\n  decompress the persisted object without the need for extra code. However, it\n  may trigger the execution of malicious code when loading a model from an\n  untrusted source, as does any other pickle-based persistence mechanism.\n* :mod:`pickle`: It is native to Python and most Python objects can be\n  serialized and deserialized using :mod:`pickle`, including custom Python\n  classes and functions as long as they are defined in a package that can be\n  imported in the target environment. While :mod:`pickle` can be used to easily\n  save and load scikit-learn models, it may trigger the execution of malicious\n  code while loading a model from an untrusted source. 
:mod:`pickle` can also\n  be very efficient memory-wise if the model was persisted with `protocol=5`, but\n  it does not support memory mapping.\n* `cloudpickle`_: It has comparable loading efficiency to :mod:`pickle` and\n  :mod:`joblib` (without memory mapping), but offers additional flexibility to\n  serialize custom Python code such as lambda expressions and interactively\n  defined functions and classes. It might be a last resort to persist pipelines\n  with custom Python components such as a\n  :class:`sklearn.preprocessing.FunctionTransformer` that wraps a function\n  defined in the training script itself or more generally outside of any\n  importable Python package. Note that `cloudpickle`_ offers no forward\n  compatibility guarantees and you might need the same version of\n  `cloudpickle`_ to load the persisted model along with the same version of all\n  the libraries used to define the model. Like the other pickle-based persistence\n  mechanisms, it may trigger the execution of malicious code while loading\n  a model from an untrusted source.\n\n.. 
_cloudpickle: https:\/\/github.com\/cloudpipe\/cloudpickle","site":"scikit-learn"}
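The `protocol=5` recommendation discussed in the persistence guide above can be illustrated with a minimal, self-contained sketch. This is an illustration only: a plain dictionary stands in for a fitted estimator (no scikit-learn required), since any picklable object behaves the same way with respect to the protocol argument.

```python
import pickle

# A stand-in for a fitted model; any picklable object works the same way.
model = {"coef": [0.1, 0.2, 0.3], "intercept": 0.5}

# protocol=5 (available since Python 3.8) supports out-of-band buffers,
# which avoids extra memory copies for large NumPy arrays stored as
# fitted attributes on real estimators.
blob = pickle.dumps(model, protocol=5)
restored = pickle.loads(blob)
assert restored == model

# Since Python 3.8, HIGHEST_PROTOCOL has been 5, so the two spellings
# are equivalent at the time of writing.
assert pickle.HIGHEST_PROTOCOL >= 5
```

As noted in the guide, `pickle` can be swapped for `joblib` or `cloudpickle` here with the same dump/load API, at the cost of the same untrusted-source caveats.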
{"questions":"scikit-learn governance Scikit learn governance and decision making elements of our community interact scikit learn project to clarify how decisions are made and how the various This document establishes a decision making structure that takes into account The purpose of this document is to formalize the governance process used by the","answers":".. _governance:\n\n===========================================\nScikit-learn governance and decision-making\n===========================================\n\nThe purpose of this document is to formalize the governance process used by the\nscikit-learn project, to clarify how decisions are made and how the various\nelements of our community interact.\nThis document establishes a decision-making structure that takes into account\nfeedback from all members of the community and strives to find consensus, while\navoiding any deadlocks.\n\nThis is a meritocratic, consensus-based community project. Anyone with an\ninterest in the project can join the community, contribute to the project\ndesign and participate in the decision making process. This document describes\nhow that participation takes place and how to set about earning merit within\nthe project community.\n\nRoles And Responsibilities\n==========================\n\nWe distinguish between contributors, core contributors, and the technical\ncommittee. A key distinction between them is their voting rights: contributors\nhave no voting rights, whereas the other two groups all have voting rights,\nas well as permissions to the tools relevant to their roles.\n\nContributors\n------------\n\nContributors are community members who contribute in concrete ways to the\nproject. 
Anyone can become a contributor, and contributions can take many forms\n\u2013 not only code \u2013 as detailed in the :ref:`contributors guide <contributing>`.\nThere is no process to become a contributor: once somebody contributes to the\nproject in any way, they are a contributor.\n\nCore Contributors\n-----------------\n\nAll core contributors have the same voting rights and the right to propose\nnew members to any of the roles listed below. Their membership is represented\nas being an organization member on the scikit-learn `GitHub organization\n<https:\/\/github.com\/orgs\/scikit-learn\/people>`_.\n\nThey are also welcome to join our `monthly core contributor meetings\n<https:\/\/github.com\/scikit-learn\/administrative\/tree\/master\/meeting_notes>`_.\n\nNew members can be nominated by any existing member. Once they have been\nnominated, there will be a vote by the current core contributors. Voting on new\nmembers is one of the few activities that takes place on the project's private\nmailing list. While it is expected that most votes will be unanimous, a\ntwo-thirds majority of the cast votes is enough. The vote needs to be open for\nat least 1 week.\n\nCore contributors who have not contributed to the project, in a way\ncorresponding to their role, in the past 12 months will be asked if they want\nto become emeritus members and relinquish their rights until they become active\nagain. The list of\nmembers, active and emeritus (with dates at which they became active) is public\non the scikit-learn website. 
It is the responsibility of the active core\ncontributors to send such a yearly reminder email.\n\nThe following teams form the core contributors group:\n\n* **Contributor Experience Team**\n  The contributor experience team improves the experience of contributors by\n  helping with the triage of issues and pull requests, as well as noticing any\n  repeating patterns where people might struggle, and helping to improve\n  those aspects of the project.\n\n  To this end, they have the required permissions on GitHub to label and close\n  issues. :ref:`Their work <bug_triaging>` is crucial to improving\n  communication in the project and limiting the crowding of the issue tracker.\n\n  .. _communication_team:\n\n* **Communication Team**\n  Members of the communication team help with outreach and communication\n  for scikit-learn. The goal of the team is to develop public awareness of\n  scikit-learn, of its features and usage, as well as branding.\n\n  For this, they can operate the scikit-learn accounts on various social networks\n  and produce materials. They also have the required rights to our blog\n  repository and other relevant accounts and platforms.\n\n* **Documentation Team**\n  Members of the documentation team work on the project's documentation,\n  among other things. They might also be involved in other aspects of the\n  project, but their reviews of documentation contributions are considered\n  authoritative, and they can merge such contributions.\n\n  To this end, they have permissions to merge pull requests in scikit-learn's\n  repository.\n\n* **Maintainers Team**\n  Maintainers are community members who have shown that they are dedicated to the\n  continued development of the project through ongoing engagement with the\n  community. They have shown they can be trusted to maintain scikit-learn with\n  care. 
Being a maintainer allows contributors to more easily carry on with their\n  project-related activities by giving them direct access to the project's\n  repository. Maintainers are expected to review code contributions, merge\n  approved pull requests, cast votes for and against merging a pull-request,\n  and to be involved in deciding major changes to the API.\n\nTechnical Committee\n-------------------\n\nThe Technical Committee (TC) members are maintainers who have additional\nresponsibilities to ensure the smooth running of the project. TC members are\nexpected to participate in strategic planning, and approve changes to the\ngovernance model. The purpose of the TC is to ensure smooth progress from the\nbig-picture perspective. Indeed, changes that impact the full project require a\nsynthetic analysis and a consensus that is both explicit and informed. In cases\nwhere the core contributor community (which includes the TC members) fails to\nreach such a consensus in the required time frame, the TC is the entity to\nresolve the issue. Membership of the TC is by nomination by a core contributor.\nA nomination will result in discussion, which cannot take more than a month, and\nthen a vote by the core contributors which will stay open for a week. TC\nmembership votes are subject to a two-thirds majority of all cast votes as well\nas a simple majority approval of all the current TC members. TC members who do\nnot actively engage with the TC duties are expected to resign.\n\nThe Technical Committee of scikit-learn consists of :user:`Thomas Fan\n<thomasjpfan>`, :user:`Alexandre Gramfort <agramfort>`, :user:`Olivier Grisel\n<ogrisel>`, :user:`Adrin Jalali <adrinjalali>`, :user:`Andreas M\u00fcller\n<amueller>`, :user:`Joel Nothman <jnothman>` and :user:`Ga\u00ebl Varoquaux\n<GaelVaroquaux>`.\n\nDecision Making Process\n=======================\nDecisions about the future of the project are made through discussion with all\nmembers of the community.
All non-sensitive project management discussion takes\nplace on the project contributors' `mailing list <mailto:scikit-learn@python.org>`_\nand the `issue tracker <https:\/\/github.com\/scikit-learn\/scikit-learn\/issues>`_.\nOccasionally, sensitive discussion occurs on a private list.\n\nScikit-learn uses a \"consensus seeking\" process for making decisions. The group\ntries to find a resolution that has no open objections among core contributors.\nAt any point during the discussion, any core contributor can call for a vote,\nwhich will conclude one month from the call for the vote. Most votes have to be\nbacked by a :ref:`SLEP <slep>`. If no option can gather two thirds of the votes\ncast, the decision is escalated to the TC, which in turn will use consensus\nseeking with the fallback option of a simple majority vote if no consensus can\nbe found within a month. This is what we hereafter may refer to as \"**the\ndecision making process**\".\n\nDecisions (in addition to adding core contributors and TC membership as above)\nare made according to the following rules:\n\n* **Minor Documentation changes**, such as typo fixes, or addition \/ correction\n  of a sentence, but no change of the ``scikit-learn.org`` landing page or the\n  \u201cabout\u201d page: Requires +1 by a maintainer, no -1 by a maintainer (lazy\n  consensus), happens on the issue or pull request page. 
Maintainers are\n  expected to give \u201creasonable time\u201d to others to give their opinion on the\n  pull request if they're not confident others would agree.\n\n* **Code changes and major documentation changes**\n  require +1 by two maintainers, no -1 by a maintainer (lazy\n  consensus), happens on the issue or pull-request page.\n\n* **Changes to the API principles and changes to dependencies or supported\n  versions** happen via :ref:`slep` and follow the decision-making process\n  outlined above.\n\n* **Changes to the governance model** follow the process outlined in `SLEP020\n  <https:\/\/scikit-learn-enhancement-proposals.readthedocs.io\/en\/latest\/slep020\/proposal.html>`__.\n\nIf a veto -1 vote is cast on a lazy consensus, the proposer can appeal to the\ncommunity and maintainers and the change can be approved or rejected using\nthe decision-making procedure outlined above.\n\nGovernance Model Changes\n------------------------\n\nGovernance model changes occur through an enhancement proposal or a GitHub Pull\nRequest. An enhancement proposal will go through \"**the decision-making process**\"\ndescribed in the previous section. Alternatively, an author may propose a change\ndirectly to the governance model with a GitHub Pull Request. Logistically, an\nauthor can open a Draft Pull Request for feedback and follow up with a new\nrevised Pull Request for voting. Once that author is happy with the state of the\nPull Request, they can call for a vote on the public mailing list. During the\none-month voting period, the Pull Request cannot change. A Pull Request\nApproval will count as a positive vote, and a \"Request Changes\" review will\ncount as a negative vote. If two-thirds of the cast votes are positive, then\nthe governance model change is accepted.\n\n.. _slep:\n\nEnhancement proposals (SLEPs)\n==============================\nFor all votes, a proposal must have been made public and discussed before the\nvote.
Such proposal must be a consolidated document, in the form of a\n\"Scikit-Learn Enhancement Proposal\" (SLEP), rather than a long discussion on an\nissue. A SLEP must be submitted as a pull-request to `enhancement proposals\n<https:\/\/scikit-learn-enhancement-proposals.readthedocs.io>`_ using the `SLEP\ntemplate\n<https:\/\/scikit-learn-enhancement-proposals.readthedocs.io\/en\/latest\/slep_template.html>`_.\n`SLEP000\n<https:\/\/scikit-learn-enhancement-proposals.readthedocs.io\/en\/latest\/slep000\/proposal.html>`__\ndescribes the process in more detail.","site":"scikit-learn"}
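The two-thirds rule recurs throughout the governance document above (new-member votes, SLEP votes, governance-model votes). A small sketch can make the counting concrete. This is purely illustrative and not project tooling; it assumes "two-thirds majority of the cast votes" means *at least* two-thirds, and the function name `passes_two_thirds` is made up for this example.

```python
# Illustrative vote-counting sketch (an assumption, not official scikit-learn
# tooling): "Approve" counts as a positive vote, "Request Changes" as negative,
# and a proposal passes when positives are at least two-thirds of votes cast.
def passes_two_thirds(positive: int, negative: int) -> bool:
    cast = positive + negative
    if cast == 0:
        return False  # nothing was voted on, so nothing can be approved
    # Integer comparison 3*positive >= 2*cast avoids floating-point rounding.
    return 3 * positive >= 2 * cast

print(passes_two_thirds(10, 5))  # 10/15 is exactly two-thirds -> True
print(passes_two_thirds(6, 4))   # 6/10 is below two-thirds -> False
```

Using integers rather than `positive / cast >= 2 / 3` sidesteps rounding surprises at the exact threshold.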
{"questions":"scikit-learn Projects implementing the scikit learn estimator API are encouraged to use relatedprojects The which facilitates best practices for testing and documenting estimators Related Projects the","answers":".. _related_projects:\n\n=====================================\nRelated Projects\n=====================================\n\nProjects implementing the scikit-learn estimator API are encouraged to use\nthe `scikit-learn-contrib template <https:\/\/github.com\/scikit-learn-contrib\/project-template>`_\nwhich facilitates best practices for testing and documenting estimators.\nThe `scikit-learn-contrib GitHub organization <https:\/\/github.com\/scikit-learn-contrib\/scikit-learn-contrib>`_\nalso accepts high-quality contributions of repositories conforming to this\ntemplate.\n\nBelow is a list of sister-projects, extensions and domain specific packages.\n\nInteroperability and framework enhancements\n-------------------------------------------\n\nThese tools adapt scikit-learn for use with other technologies or otherwise\nenhance the functionality of scikit-learn's estimators.\n\n**Auto-ML**\n\n- `auto-sklearn <https:\/\/github.com\/automl\/auto-sklearn\/>`_\n  An automated machine learning toolkit and a drop-in replacement for a\n  scikit-learn estimator\n\n- `autoviml <https:\/\/github.com\/AutoViML\/Auto_ViML\/>`_\n  Automatically Build Multiple Machine Learning Models with a Single Line of Code.\n  Designed as a faster way to use scikit-learn models without having to preprocess data.\n\n- `TPOT <https:\/\/github.com\/rhiever\/tpot>`_\n  An automated machine learning toolkit that optimizes a series of scikit-learn\n  operators to design a machine learning pipeline, including data and feature\n  preprocessors as well as the estimators. Works as a drop-in replacement for a\n  scikit-learn estimator.\n\n- `Featuretools <https:\/\/github.com\/alteryx\/featuretools>`_\n  A framework to perform automated feature engineering. 
It can be used for\n  transforming temporal and relational datasets into feature matrices for\n  machine learning.\n\n- `EvalML <https:\/\/github.com\/alteryx\/evalml>`_\n  EvalML is an AutoML library which builds, optimizes, and evaluates\n  machine learning pipelines using domain-specific objective functions.\n  It incorporates multiple modeling libraries under one API, and\n  the objects that EvalML creates use an sklearn-compatible API.\n\n- `MLJAR AutoML <https:\/\/github.com\/mljar\/mljar-supervised>`_\n  Python package for AutoML on Tabular Data with Feature Engineering, \n  Hyper-Parameters Tuning, Explanations and Automatic Documentation.\n\n**Experimentation and model registry frameworks**\n\n- `MLFlow <https:\/\/mlflow.org\/>`_ MLflow is an open source platform to manage the ML\n  lifecycle, including experimentation, reproducibility, deployment, and a central\n  model registry.\n\n- `Neptune <https:\/\/neptune.ai\/>`_ Metadata store for MLOps,\n  built for teams that run a lot of experiments. It gives you a single\n  place to log, store, display, organize, compare, and query all your\n  model building metadata.\n\n- `Sacred <https:\/\/github.com\/IDSIA\/Sacred>`_ Tool to help you configure,\n  organize, log and reproduce experiments\n\n- `Scikit-Learn Laboratory\n  <https:\/\/skll.readthedocs.io\/en\/latest\/index.html>`_  A command-line\n  wrapper around scikit-learn that makes it easy to run machine learning\n  experiments with multiple learners and large feature sets.\n\n**Model inspection and visualization**\n\n- `dtreeviz <https:\/\/github.com\/parrt\/dtreeviz\/>`_ A python library for\n  decision tree visualization and model interpretation.\n\n- `sklearn-evaluation <https:\/\/github.com\/ploomber\/sklearn-evaluation>`_\n  Machine learning model evaluation made easy: plots, tables, HTML reports,\n  experiment tracking and Jupyter notebook analysis. 
Visual analysis, model\n  selection, evaluation and diagnostics.\n\n- `yellowbrick <https:\/\/github.com\/DistrictDataLabs\/yellowbrick>`_ A suite of\n  custom matplotlib visualizers for scikit-learn estimators to support visual feature\n  analysis, model selection, evaluation, and diagnostics.\n\n**Model export for production**\n\n- `sklearn-onnx <https:\/\/github.com\/onnx\/sklearn-onnx>`_ Serialization of many\n  Scikit-learn pipelines to `ONNX <https:\/\/onnx.ai\/>`_ for interchange and\n  prediction.\n\n- `skops.io <https:\/\/skops.readthedocs.io\/en\/stable\/persistence.html>`__ A\n  persistence model more secure than pickle, which can be used instead of\n  pickle in most common cases.\n\n- `sklearn2pmml <https:\/\/github.com\/jpmml\/sklearn2pmml>`_\n  Serialization of a wide variety of scikit-learn estimators and transformers\n  into PMML with the help of `JPMML-SkLearn <https:\/\/github.com\/jpmml\/jpmml-sklearn>`_\n  library.\n\n- `treelite <https:\/\/treelite.readthedocs.io>`_\n  Compiles tree-based ensemble models into C code for minimizing prediction\n  latency.\n\n- `emlearn <https:\/\/emlearn.org>`_\n  Implements scikit-learn estimators in C99 for embedded devices and microcontrollers.\n  Supports several classifier, regression and outlier detection models.\n\n**Model throughput**\n\n- `Intel(R) Extension for scikit-learn <https:\/\/github.com\/intel\/scikit-learn-intelex>`_\n  Mostly on high end Intel(R) hardware, accelerates some scikit-learn models\n  for both training and inference under certain circumstances. This project is\n  maintained by Intel(R) and scikit-learn's maintainers are not involved in the\n  development of this project. Also note that in some cases using the tools and\n  estimators under ``scikit-learn-intelex`` would give different results than\n  ``scikit-learn`` itself. 
If you encounter issues while using this project,\n  make sure you report potential issues in their respective repositories.\n\n**Interface to R with genomic applications**\n\n- `BiocSklearn <https:\/\/bioconductor.org\/packages\/BiocSklearn>`_\n  Exposes a small number of dimension reduction facilities as an illustration\n  of the basilisk protocol for interfacing python with R. Intended as a \n  springboard for more complete interop.\n\n\nOther estimators and tasks\n--------------------------\n\nNot everything belongs or is mature enough for the central scikit-learn\nproject. The following are projects providing interfaces similar to\nscikit-learn for additional learning algorithms, infrastructures\nand tasks.\n\n**Time series and forecasting**\n\n- `Darts <https:\/\/unit8co.github.io\/darts\/>`_ Darts is a Python library for\n  user-friendly forecasting and anomaly detection on time series. It contains a variety\n  of models, from classics such as ARIMA to deep neural networks. The forecasting\n  models can all be used in the same way, using fit() and predict() functions, similar\n  to scikit-learn.\n\n- `sktime <https:\/\/github.com\/alan-turing-institute\/sktime>`_ A scikit-learn compatible\n  toolbox for machine learning with time series including time series\n  classification\/regression and (supervised\/panel) forecasting.\n\n- `skforecast <https:\/\/github.com\/JoaquinAmatRodrigo\/skforecast>`_ A python library\n  that eases using scikit-learn regressors as multi-step forecasters. 
It also works\n  with any regressor compatible with the scikit-learn API.\n\n- `tslearn <https:\/\/github.com\/tslearn-team\/tslearn>`_ A machine learning library for\n  time series that offers tools for pre-processing and feature extraction as well as\n  dedicated models for clustering, classification and regression.\n\n**Gradient (tree) boosting**\n\nNote scikit-learn's own modern gradient boosting estimators\n:class:`~sklearn.ensemble.HistGradientBoostingClassifier` and\n:class:`~sklearn.ensemble.HistGradientBoostingRegressor`.\n\n- `XGBoost <https:\/\/github.com\/dmlc\/xgboost>`_ XGBoost is an optimized distributed\n  gradient boosting library designed to be highly efficient, flexible and portable.\n\n- `LightGBM <https:\/\/lightgbm.readthedocs.io>`_ LightGBM is a gradient boosting\n  framework that uses tree-based learning algorithms. It is designed to be distributed\n  and efficient.\n\n**Structured learning**\n\n- `HMMLearn <https:\/\/github.com\/hmmlearn\/hmmlearn>`_ Implementation of hidden\n  Markov models that was previously part of scikit-learn.\n\n- `pomegranate <https:\/\/github.com\/jmschrei\/pomegranate>`_ Probabilistic modelling\n  for Python, with an emphasis on hidden Markov models.\n\n**Deep neural networks etc.**\n\n- `skorch <https:\/\/github.com\/dnouri\/skorch>`_ A scikit-learn compatible\n  neural network library that wraps PyTorch.\n\n- `scikeras <https:\/\/github.com\/adriangb\/scikeras>`_ provides a wrapper around\n  Keras to interface it with scikit-learn.
SciKeras is the successor\n  of `tf.keras.wrappers.scikit_learn`.\n\n**Federated Learning**\n\n- `Flower <https:\/\/flower.dev\/>`_ A friendly federated learning framework with a\n  unified approach that can federate any workload, any ML framework, and any programming language.\n\n**Privacy Preserving Machine Learning**\n\n- `Concrete ML <https:\/\/github.com\/zama-ai\/concrete-ml\/>`_ A privacy preserving\n  ML framework built on top of `Concrete\n  <https:\/\/github.com\/zama-ai\/concrete>`_, with bindings to traditional ML\n  frameworks, thanks to fully homomorphic encryption. APIs of so-called\n  Concrete ML built-in models are very close to scikit-learn APIs.\n\n**Broad scope**\n\n- `mlxtend <https:\/\/github.com\/rasbt\/mlxtend>`_ Includes a number of additional\n  estimators as well as model visualization utilities.\n\n- `scikit-lego <https:\/\/github.com\/koaning\/scikit-lego>`_ A number of scikit-learn compatible\n  custom transformers, models and metrics, focusing on solving practical industry tasks.\n\n**Other regression and classification**\n\n- `py-earth <https:\/\/github.com\/scikit-learn-contrib\/py-earth>`_ Multivariate\n  adaptive regression splines\n\n- `gplearn <https:\/\/github.com\/trevorstephens\/gplearn>`_ Genetic Programming\n  for symbolic regression tasks.\n\n- `scikit-multilearn <https:\/\/github.com\/scikit-multilearn\/scikit-multilearn>`_\n  Multi-label classification with focus on label space manipulation.\n\n**Decomposition and clustering**\n\n- `lda <https:\/\/github.com\/lda-project\/lda\/>`_: Fast implementation of latent\n  Dirichlet allocation in Cython which uses `Gibbs sampling\n  <https:\/\/en.wikipedia.org\/wiki\/Gibbs_sampling>`_ to sample from the true\n  posterior distribution. 
(scikit-learn's\n  :class:`~sklearn.decomposition.LatentDirichletAllocation` implementation uses\n  `variational inference\n  <https:\/\/en.wikipedia.org\/wiki\/Variational_Bayesian_methods>`_ to sample from\n  a tractable approximation of a topic model's posterior distribution.)\n\n- `kmodes <https:\/\/github.com\/nicodv\/kmodes>`_ k-modes clustering algorithm for\n  categorical data, and several of its variations.\n\n- `hdbscan <https:\/\/github.com\/scikit-learn-contrib\/hdbscan>`_ HDBSCAN and Robust Single\n  Linkage clustering algorithms for robust variable density clustering.\n  As of scikit-learn version 1.3.0, there is :class:`~sklearn.cluster.HDBSCAN`.\n\n**Pre-processing**\n\n- `categorical-encoding\n  <https:\/\/github.com\/scikit-learn-contrib\/categorical-encoding>`_ A\n  library of sklearn compatible categorical variable encoders.\n  As of scikit-learn version 1.3.0, there is\n  :class:`~sklearn.preprocessing.TargetEncoder`.\n\n- `imbalanced-learn\n  <https:\/\/github.com\/scikit-learn-contrib\/imbalanced-learn>`_ Various\n  methods to under- and over-sample datasets.\n\n- `Feature-engine <https:\/\/github.com\/solegalli\/feature_engine>`_ A library\n  of sklearn compatible transformers for missing data imputation, categorical\n  encoding, variable transformation, discretization, outlier handling and more.\n  Feature-engine allows the application of preprocessing steps to selected groups\n  of variables and it is fully compatible with the Scikit-learn Pipeline.\n\n**Topological Data Analysis**\n\n- `giotto-tda <https:\/\/github.com\/giotto-ai\/giotto-tda>`_ A library for\n  `Topological Data Analysis\n  <https:\/\/en.wikipedia.org\/wiki\/Topological_data_analysis>`_ aiming to\n  provide a scikit-learn compatible API. 
It offers tools to transform data\n  inputs (point clouds, graphs, time series, images) into forms suitable for\n  computations of topological summaries, and components dedicated to\n  extracting sets of scalar features of topological origin, which can be used\n  alongside other feature extraction methods in scikit-learn.\n\nStatistical learning with Python\n--------------------------------\nOther packages useful for data analysis and machine learning.\n\n- `Pandas <https:\/\/pandas.pydata.org\/>`_ Tools for working with heterogeneous and\n  columnar data, relational queries, time series and basic statistics.\n\n- `statsmodels <https:\/\/www.statsmodels.org>`_ Estimating and analysing\n  statistical models. More focused on statistical tests and less on prediction\n  than scikit-learn.\n\n- `PyMC <https:\/\/www.pymc.io\/>`_ Bayesian statistical models and\n  fitting algorithms.\n\n- `Seaborn <https:\/\/stanford.edu\/~mwaskom\/software\/seaborn\/>`_ Visualization library based on\n  matplotlib. 
It provides a high-level interface for drawing attractive statistical graphics.\n\n- `scikit-survival <https:\/\/scikit-survival.readthedocs.io\/>`_ A library implementing\n  models to learn from censored time-to-event data (also called survival analysis).\n  Models are fully compatible with scikit-learn.\n\nRecommendation Engine packages\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n- `implicit <https:\/\/github.com\/benfred\/implicit>`_, Library for implicit\n  feedback datasets.\n\n- `lightfm <https:\/\/github.com\/lyst\/lightfm>`_ A Python\/Cython\n  implementation of a hybrid recommender system.\n\n- `Surprise Lib <https:\/\/surpriselib.com\/>`_ Library for explicit feedback\n  datasets.\n\nDomain specific packages\n~~~~~~~~~~~~~~~~~~~~~~~~\n\n- `scikit-network <https:\/\/scikit-network.readthedocs.io\/>`_ Machine learning on graphs.\n\n- `scikit-image <https:\/\/scikit-image.org\/>`_ Image processing and computer\n  vision in python.\n\n- `Natural language toolkit (nltk) <https:\/\/www.nltk.org\/>`_ Natural language\n  processing and some machine learning.\n\n- `gensim <https:\/\/radimrehurek.com\/gensim\/>`_  A library for topic modelling,\n  document indexing and similarity retrieval\n\n- `NiLearn <https:\/\/nilearn.github.io\/>`_ Machine learning for neuro-imaging.\n\n- `AstroML <https:\/\/www.astroml.org\/>`_  Machine learning for astronomy.\n\nTranslations of scikit-learn documentation\n------------------------------------------\n\nTranslation's purpose is to ease reading and understanding in languages\nother than English. Its aim is to help people who do not understand English\nor have doubts about its interpretation. 
Additionally, some people prefer\nto read documentation in their native language, but please bear in mind that\nthe only official documentation is the English one [#f1]_.\n\nThose translation efforts are community initiatives and we have no control\nover them.\nIf you want to contribute or report an issue with a translation, please\ncontact the authors of the translation.\nSome available translations are linked here to improve their dissemination\nand promote community efforts.\n\n- `Chinese translation <https:\/\/sklearn.apachecn.org\/>`_\n  (`source <https:\/\/github.com\/apachecn\/sklearn-doc-zh>`__)\n- `Persian translation <https:\/\/sklearn.ir\/>`_\n  (`source <https:\/\/github.com\/mehrdad-dev\/scikit-learn>`__)\n- `Spanish translation <https:\/\/qu4nt.github.io\/sklearn-doc-es\/>`_\n  (`source <https:\/\/github.com\/qu4nt\/sklearn-doc-es>`__)\n- `Korean translation <https:\/\/panda5176.github.io\/scikit-learn-korean\/>`_\n  (`source <https:\/\/github.com\/panda5176\/scikit-learn-korean>`__)\n\n\n.. rubric:: Footnotes\n\n.. 
[#f1] following `linux documentation Disclaimer\n   <https:\/\/www.kernel.org\/doc\/html\/latest\/translations\/index.html#disclaimer>`__","site":"scikit-learn","answers_cleaned":"    related projects                                         Related Projects                                        Projects implementing the scikit learn estimator API are encouraged to use the  scikit learn contrib template  https   github com scikit learn contrib project template    which facilitates best practices for testing and documenting estimators  The  scikit learn contrib GitHub organization  https   github com scikit learn contrib scikit learn contrib    also accepts high quality contributions of repositories conforming to this template   Below is a list of sister projects  extensions and domain specific packages   Interoperability and framework enhancements                                              These tools adapt scikit learn for use with other technologies or otherwise enhance the functionality of scikit learn s estimators     Auto ML       auto sklearn  https   github com automl auto sklearn       An automated machine learning toolkit and a drop in replacement for a   scikit learn estimator     autoviml  https   github com AutoViML Auto ViML       Automatically Build Multiple Machine Learning Models with a Single Line of Code    Designed as a faster way to use scikit learn models without having to preprocess data      TPOT  https   github com rhiever tpot      An automated machine learning toolkit that optimizes a series of scikit learn   operators to design a machine learning pipeline  including data and feature   preprocessors as well as the estimators  Works as a drop in replacement for a   scikit learn estimator      Featuretools  https   github com alteryx featuretools      A framework to perform automated feature engineering  It can be used for   transforming temporal and relational datasets into feature matrices for   machine learning      EvalML  https   
github com alteryx evalml      EvalML is an AutoML library which builds  optimizes  and evaluates   machine learning pipelines using domain specific objective functions    It incorporates multiple modeling libraries under one API  and   the objects that EvalML creates use an sklearn compatible API      MLJAR AutoML  https   github com mljar mljar supervised      Python package for AutoML on Tabular Data with Feature Engineering     Hyper Parameters Tuning  Explanations and Automatic Documentation     Experimentation and model registry frameworks       MLFlow  https   mlflow org     MLflow is an open source platform to manage the ML   lifecycle  including experimentation  reproducibility  deployment  and a central   model registry      Neptune  https   neptune ai     Metadata store for MLOps    built for teams that run a lot of experiments  It gives you a single   place to log  store  display  organize  compare  and query all your   model building metadata      Sacred  https   github com IDSIA Sacred    Tool to help you configure    organize  log and reproduce experiments     Scikit Learn Laboratory    https   skll readthedocs io en latest index html     A command line   wrapper around scikit learn that makes it easy to run machine learning   experiments with multiple learners and large feature sets     Model inspection and visualization       dtreeviz  https   github com parrt dtreeviz     A python library for   decision tree visualization and model interpretation      sklearn evaluation  https   github com ploomber sklearn evaluation      Machine learning model evaluation made easy  plots  tables  HTML reports    experiment tracking and Jupyter notebook analysis  Visual analysis  model   selection  evaluation and diagnostics      yellowbrick  https   github com DistrictDataLabs yellowbrick    A suite of   custom matplotlib visualizers for scikit learn estimators to support visual feature   analysis  model selection  evaluation  and diagnostics     Model export for 
production       sklearn onnx  https   github com onnx sklearn onnx    Serialization of many   Scikit learn pipelines to  ONNX  https   onnx ai     for interchange and   prediction      skops io  https   skops readthedocs io en stable persistence html     A   persistence model more secure than pickle  which can be used instead of   pickle in most common cases      sklearn2pmml  https   github com jpmml sklearn2pmml      Serialization of a wide variety of scikit learn estimators and transformers   into PMML with the help of  JPMML SkLearn  https   github com jpmml jpmml sklearn      library      treelite  https   treelite readthedocs io      Compiles tree based ensemble models into C code for minimizing prediction   latency      emlearn  https   emlearn org      Implements scikit learn estimators in C99 for embedded devices and microcontrollers    Supports several classifier  regression and outlier detection models     Model throughput       Intel R  Extension for scikit learn  https   github com intel scikit learn intelex      Mostly on high end Intel R  hardware  accelerates some scikit learn models   for both training and inference under certain circumstances  This project is   maintained by Intel R  and scikit learn s maintainers are not involved in the   development of this project  Also note that in some cases using the tools and   estimators under   scikit learn intelex   would give different results than     scikit learn   itself  If you encounter issues while using this project    make sure you report potential issues in their respective repositories     Interface to R with genomic applications       BiocSklearn  https   bioconductor org packages BiocSklearn      Exposes a small number of dimension reduction facilities as an illustration   of the basilisk protocol for interfacing python with R  Intended as a    springboard for more complete interop    Other estimators and tasks                             Not everything belongs or is mature enough for the 
central scikit learn project  The following are projects providing interfaces similar to scikit learn for additional learning algorithms  infrastructures and tasks     Time series and forecasting       Darts  https   unit8co github io darts     Darts is a Python library for   user friendly forecasting and anomaly detection on time series  It contains a variety   of models  from classics such as ARIMA to deep neural networks  The forecasting   models can all be used in the same way  using fit   and predict   functions  similar   to scikit learn      sktime  https   github com alan turing institute sktime    A scikit learn compatible   toolbox for machine learning with time series including time series   classification regression and  supervised panel  forecasting      skforecast  https   github com JoaquinAmatRodrigo skforecast    A python library   that eases using scikit learn regressors as multi step forecasters  It also works   with any regressor compatible with the scikit learn API      tslearn  https   github com tslearn team tslearn    A machine learning library for   time series that offers tools for pre processing and feature extraction as well as   dedicated models for clustering  classification and regression     Gradient  tree  boosting    Note scikit learn own modern gradient boosting estimators  class   sklearn ensemble HistGradientBoostingClassifier  and  class   sklearn ensemble HistGradientBoostingRegressor       XGBoost  https   github com dmlc xgboost    XGBoost is an optimized distributed   gradient boosting library designed to be highly efficient  flexible and portable      LightGBM  https   lightgbm readthedocs io    LightGBM is a gradient boosting   framework that uses tree based learning algorithms  It is designed to be distributed   and efficient     Structured learning       HMMLearn  https   github com hmmlearn hmmlearn    Implementation of hidden   markov models that was previously part of scikit learn      pomegranate  https   github com 
jmschrei pomegranate    Probabilistic modelling   for Python  with an emphasis on hidden Markov models     Deep neural networks etc        skorch  https   github com dnouri skorch    A scikit learn compatible   neural network library that wraps PyTorch      scikeras  https   github com adriangb scikeras    provides a wrapper around   Keras to interface it with scikit learn  SciKeras is the successor   of  tf keras wrappers scikit learn      Federated Learning       Flower  https   flower dev     A friendly federated learning framework with a   unified approach that can federate any workload  any ML framework  and any programming language     Privacy Preserving Machine Learning       Concrete ML  https   github com zama ai concrete ml     A privacy preserving   ML framework built on top of  Concrete    https   github com zama ai concrete     with bindings to traditional ML   frameworks  thanks to fully homomorphic encryption  APIs of so called   Concrete ML built in models are very close to scikit learn APIs     Broad scope       mlxtend  https   github com rasbt mlxtend    Includes a number of additional   estimators as well as model visualization utilities      scikit lego  https   github com koaning scikit lego    A number of scikit learn compatible   custom transformers  models and metrics  focusing on solving practical industry tasks     Other regression and classification       py earth  https   github com scikit learn contrib py earth    Multivariate   adaptive regression splines     gplearn  https   github com trevorstephens gplearn    Genetic Programming   for symbolic regression tasks      scikit multilearn  https   github com scikit multilearn scikit multilearn      Multi label classification with focus on label space manipulation     Decomposition and clustering       lda  https   github com lda project lda      Fast implementation of latent   Dirichlet allocation in Cython which uses  Gibbs sampling    https   en wikipedia org wiki Gibbs sampling    to 
sample from the true   posterior distribution   scikit learn s    class   sklearn decomposition LatentDirichletAllocation  implementation uses    variational inference    https   en wikipedia org wiki Variational Bayesian methods    to sample from   a tractable approximation of a topic model s posterior distribution       kmodes  https   github com nicodv kmodes    k modes clustering algorithm for   categorical data  and several of its variations      hdbscan  https   github com scikit learn contrib hdbscan    HDBSCAN and Robust Single   Linkage clustering algorithms for robust variable density clustering    As of scikit learn version 1 3 0  there is  class   sklearn cluster HDBSCAN      Pre processing       categorical encoding    https   github com scikit learn contrib categorical encoding    A   library of sklearn compatible categorical variable encoders    As of scikit learn version 1 3 0  there is    class   sklearn preprocessing TargetEncoder       imbalanced learn    https   github com scikit learn contrib imbalanced learn    Various   methods to under  and over sample datasets      Feature engine  https   github com solegalli feature engine    A library   of sklearn compatible transformers for missing data imputation  categorical   encoding  variable transformation  discretization  outlier handling and more    Feature engine allows the application of preprocessing steps to selected groups   of variables and it is fully compatible with the Scikit learn Pipeline     Topological Data Analysis       giotto tda  https   github com giotto ai giotto tda    A library for    Topological Data Analysis    https   en wikipedia org wiki Topological data analysis    aiming to   provide a scikit learn compatible API  It offers tools to transform data   inputs  point clouds  graphs  time series  images  into forms suitable for   computations of topological summaries  and components dedicated to   extracting sets of scalar features of topological origin  which can be used   
alongside other feature extraction methods in scikit learn   Statistical learning with Python                                  Other packages useful for data analysis and machine learning      Pandas  https   pandas pydata org     Tools for working with heterogeneous and   columnar data  relational queries  time series and basic statistics      statsmodels  https   www statsmodels org    Estimating and analysing   statistical models  More focused on statistical tests and less on prediction   than scikit learn      PyMC  https   www pymc io     Bayesian statistical models and   fitting algorithms      Seaborn  https   stanford edu  mwaskom software seaborn     Visualization library based on   matplotlib  It provides a high level interface for drawing attractive statistical graphics      scikit survival  https   scikit survival readthedocs io     A library implementing   models to learn from censored time to event data  also called survival analysis     Models are fully compatible with scikit learn   Recommendation Engine packages                                    implicit  https   github com benfred implicit     Library for implicit   feedback datasets      lightfm  https   github com lyst lightfm    A Python Cython   implementation of a hybrid recommender system      Surprise Lib  https   surpriselib com     Library for explicit feedback   datasets   Domain specific packages                              scikit network  https   scikit network readthedocs io     Machine learning on graphs      scikit image  https   scikit image org     Image processing and computer   vision in python      Natural language toolkit  nltk   https   www nltk org     Natural language   processing and some machine learning      gensim  https   radimrehurek com gensim      A library for topic modelling    document indexing and similarity retrieval     NiLearn  https   nilearn github io     Machine learning for neuro imaging      AstroML  https   www astroml org      Machine learning for 
astronomy   Translations of scikit learn documentation                                             Translation s purpose is to ease reading and understanding in languages other than English  Its aim is to help people who do not understand English or have doubts about its interpretation  Additionally  some people prefer to read documentation in their native language  but please bear in mind that the only official documentation is the English one   f1     Those translation efforts are community initiatives and we have no control on them  If you want to contribute or report an issue with the translation  please contact the authors of the translation  Some available translations are linked here to improve their dissemination and promote community efforts      Chinese translation  https   sklearn apachecn org         source  https   github com apachecn sklearn doc zh         Persian translation  https   sklearn ir         source  https   github com mehrdad dev scikit learn         Spanish translation  https   qu4nt github io sklearn doc es         source  https   github com qu4nt sklearn doc es         Korean translation  https   panda5176 github io scikit learn korean         source  https   github com panda5176 scikit learn korean           rubric   Footnotes       f1  following  linux documentation Disclaimer     https   www kernel org doc html latest translations index html disclaimer    "}
{"questions":"scikit-learn html h3 headings on this page are the questions make them rubric like h3 padding bottom 0 2rem border bottom 1px solid var pst color border font weight bold margin 2rem 0 1 15rem 0 style font size 1rem","answers":".. raw:: html\n\n  <style>\n    \/* h3 headings on this page are the questions; make them rubric-like *\/\n    h3 {\n      font-size: 1rem;\n      font-weight: bold;\n      padding-bottom: 0.2rem;\n      margin: 2rem 0 1.15rem 0;\n      border-bottom: 1px solid var(--pst-color-border);\n    }\n\n    \/* Increase top margin for first question in each section *\/\n    h2 + section > h3 {\n      margin-top: 2.5rem;\n    }\n\n    \/* Make the headerlinks a bit more visible *\/\n    h3 > a.headerlink {\n      font-size: 0.9rem;\n    }\n\n    \/* Remove the backlink decoration on the titles *\/\n    h2 > a.toc-backref,\n    h3 > a.toc-backref {\n      text-decoration: none;\n    }\n  <\/style>\n\n.. _faq:\n\n==========================\nFrequently Asked Questions\n==========================\n\n.. currentmodule:: sklearn\n\nHere we try to give some answers to questions that regularly pop up on the mailing list.\n\n.. contents:: Table of Contents\n  :local:\n  :depth: 2\n\n\nAbout the project\n-----------------\n\nWhat is the project name (a lot of people get it wrong)?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nscikit-learn, but not scikit or SciKit nor sci-kit learn.\nAlso not scikits.learn or scikits-learn, which were previously used.\n\nHow do you pronounce the project name?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nsy-kit learn. 
sci stands for science!\n\nWhy scikit?\n^^^^^^^^^^^\nThere are multiple scikits, which are scientific toolboxes built around SciPy.\nApart from scikit-learn, another popular one is `scikit-image <https:\/\/scikit-image.org\/>`_.\n\nDo you support PyPy?\n^^^^^^^^^^^^^^^^^^^^\n\nDue to limited maintainer resources and small number of users, using\nscikit-learn with `PyPy <https:\/\/pypy.org\/>`_ (an alternative Python\nimplementation with a built-in just-in-time compiler) is not officially\nsupported.\n\nHow can I obtain permission to use the images in scikit-learn for my work?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThe images contained in the `scikit-learn repository\n<https:\/\/github.com\/scikit-learn\/scikit-learn>`_ and the images generated within\nthe `scikit-learn documentation <https:\/\/scikit-learn.org\/stable\/index.html>`_\ncan be used via the `BSD 3-Clause License\n<https:\/\/github.com\/scikit-learn\/scikit-learn?tab=BSD-3-Clause-1-ov-file>`_ for\nyour work. Citations of scikit-learn are highly encouraged and appreciated. See\n:ref:`citing scikit-learn <citing-scikit-learn>`.\n\nImplementation decisions\n------------------------\n\nWhy is there no support for deep or reinforcement learning? Will there be such support in the future?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nDeep learning and reinforcement learning both require a rich vocabulary to\ndefine an architecture, with deep learning additionally requiring\nGPUs for efficient computing. However, neither of these fit within\nthe design constraints of scikit-learn. As a result, deep learning\nand reinforcement learning are currently out of scope for what\nscikit-learn seeks to achieve.\n\nYou can find more information about the addition of GPU support at\n`Will you add GPU support?`_.\n\nNote that scikit-learn currently implements a simple multilayer perceptron\nin :mod:`sklearn.neural_network`. 
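As an illustration of that existing module, a minimal sketch of the simple multilayer perceptron in :mod:`sklearn.neural_network` (the toy dataset and hyperparameters here are made up for the example, not a recommendation):

```python
# Minimal sketch: the simple multilayer perceptron shipped in
# sklearn.neural_network, fit on a synthetic toy dataset.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.score(X, y) > 0.7)  # training accuracy on the toy data
```

For anything beyond such small experiments, the dedicated deep learning frameworks mentioned below are the intended tools.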
We will only accept bug fixes for this module.\nIf you want to implement more complex deep learning models, please turn to\npopular deep learning frameworks such as\n`tensorflow <https:\/\/www.tensorflow.org\/>`_,\n`keras <https:\/\/keras.io\/>`_,\nand `pytorch <https:\/\/pytorch.org\/>`_.\n\n.. _adding_graphical_models:\n\nWill you add graphical models or sequence prediction to scikit-learn?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nNot in the foreseeable future.\nscikit-learn tries to provide a unified API for the basic tasks in machine\nlearning, with pipelines and meta-algorithms like grid search to tie\neverything together. The concepts, APIs, algorithms and\nexpertise required for structured learning are different from what\nscikit-learn has to offer. If we started doing arbitrary structured\nlearning, we'd need to redesign the whole package and the project\nwould likely collapse under its own weight.\n\nThere are two projects with APIs similar to scikit-learn that\ndo structured prediction:\n\n* `pystruct <https:\/\/pystruct.github.io\/>`_ handles general structured\n  learning (focuses on SSVMs on arbitrary graph structures with\n  approximate inference; defines the notion of sample as an instance of\n  the graph structure).\n\n* `seqlearn <https:\/\/larsmans.github.io\/seqlearn\/>`_ handles sequences only\n  (focuses on exact inference; has HMMs, but mostly for the sake of\n  completeness; treats a feature vector as a sample and uses an offset encoding\n  for the dependencies between feature vectors).\n\nWhy did you remove HMMs from scikit-learn?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nSee :ref:`adding_graphical_models`.\n\n\nWill you add GPU support?\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAdding GPU support by default would introduce heavy hardware-specific software\ndependencies and existing algorithms would need to be reimplemented. 
This would\nmake it both harder for the average user to install scikit-learn and harder for\nthe developers to maintain the code.\n\nHowever, since 2023, a limited but growing :ref:`list of scikit-learn\nestimators <array_api_supported>` can already run on GPUs if the input data is\nprovided as a PyTorch or CuPy array and if scikit-learn has been configured to\naccept such inputs as explained in :ref:`array_api`. This Array API support\nallows scikit-learn to run on GPUs without introducing heavy and\nhardware-specific software dependencies to the main package.\n\nMost estimators that rely on NumPy for their computationally intensive operations\ncan be considered for Array API support and therefore GPU support.\n\nHowever, not all scikit-learn estimators are amenable to efficiently running\non GPUs via the Array API for fundamental algorithmic reasons. For instance,\ntree-based models currently implemented with Cython in scikit-learn are\nfundamentally not array-based algorithms. Other algorithms such as k-means or\nk-nearest neighbors rely on array-based algorithms but are also implemented in\nCython. Cython is used to manually interleave consecutive array operations to\navoid introducing performance-killing memory access to large intermediate\narrays: this low-level algorithmic rewrite is called \"kernel fusion\" and cannot\nbe expressed via the Array API for the foreseeable future.\n\nAdding efficient GPU support to estimators that cannot be efficiently\nimplemented with the Array API would require designing and adopting a more\nflexible extension system for scikit-learn. 
This possibility is being\nconsidered in the following GitHub issue (under discussion):\n\n- https:\/\/github.com\/scikit-learn\/scikit-learn\/issues\/22438\n\n\nWhy do categorical variables need preprocessing in scikit-learn, compared to other tools?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nMost of scikit-learn assumes data is in NumPy arrays or SciPy sparse matrices\nof a single numeric dtype. These do not explicitly represent categorical\nvariables at present. Thus, unlike R's ``data.frames`` or :class:`pandas.DataFrame`,\nwe require explicit conversion of categorical features to numeric values, as\ndiscussed in :ref:`preprocessing_categorical_features`.\nSee also :ref:`sphx_glr_auto_examples_compose_plot_column_transformer_mixed_types.py` for an\nexample of working with heterogeneous (e.g. categorical and numeric) data.\n\nNote that recently, :class:`~sklearn.ensemble.HistGradientBoostingClassifier` and\n:class:`~sklearn.ensemble.HistGradientBoostingRegressor` gained native support for\ncategorical features through the option `categorical_features=\"from_dtype\"`. This\noption relies on inferring which columns of the data are categorical based on the\n:class:`pandas.CategoricalDtype` and :class:`polars.datatypes.Categorical` dtypes.\n\nDoes scikit-learn work natively with various types of dataframes?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nScikit-learn has limited support for :class:`pandas.DataFrame` and\n:class:`polars.DataFrame`. Scikit-learn estimators can accept both these dataframe types\nas input, and scikit-learn transformers can output dataframes using the `set_output`\nAPI. For more details, refer to\n:ref:`sphx_glr_auto_examples_miscellaneous_plot_set_output.py`.\n\nHowever, the internal computations in scikit-learn estimators rely on numerical\noperations that are more efficiently performed on homogeneous data structures such as\nNumPy arrays or SciPy sparse matrices. 
As a result, most scikit-learn estimators will\ninternally convert dataframe inputs into these homogeneous data structures. Similarly,\ndataframe outputs are generated from these homogeneous data structures.\n\nAlso note that :class:`~sklearn.compose.ColumnTransformer` makes it convenient to handle\nheterogeneous pandas dataframes by mapping homogeneous subsets of dataframe columns\nselected by name or dtype to dedicated scikit-learn transformers. Therefore\n:class:`~sklearn.compose.ColumnTransformer` is often used in the first step of\nscikit-learn pipelines when dealing with heterogeneous dataframes (see :ref:`pipeline`\nfor more details).\n\nSee also :ref:`sphx_glr_auto_examples_compose_plot_column_transformer_mixed_types.py`\nfor an example of working with heterogeneous (e.g. categorical and numeric) data.\n\nDo you plan to implement transform for target ``y`` in a pipeline?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nCurrently transform only works for features ``X`` in a pipeline. There's a\nlong-standing discussion about not being able to transform ``y`` in a pipeline.\nFollow on GitHub issue :issue:`4143`. Meanwhile, you can check out\n:class:`~compose.TransformedTargetRegressor`,\n`pipegraph <https:\/\/github.com\/mcasl\/PipeGraph>`_,\nand `imbalanced-learn <https:\/\/github.com\/scikit-learn-contrib\/imbalanced-learn>`_.\nNote that scikit-learn solved for the case where ``y``\nhas an invertible transformation applied before training\nand inverted after prediction. 
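A minimal sketch of that invertible-transformation case using :class:`~compose.TransformedTargetRegressor` (the toy data and the log/exp pair are illustrative):

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

X = np.arange(1, 21, dtype=float).reshape(-1, 1)
y = np.exp(0.1 * X.ravel())  # target that is linear after a log transform

# The regressor is fit on log(y), and predictions are passed back through
# exp, so the user never handles the transformed target directly.
reg = TransformedTargetRegressor(
    regressor=LinearRegression(), func=np.log, inverse_func=np.exp
)
reg.fit(X, y)
print(bool(np.allclose(reg.predict(X), y)))  # True: the relationship is exact
```

The resampling use cases discussed next cannot be expressed this way, since they change ``y`` only at training time.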
scikit-learn intends to solve for\nuse cases where ``y`` should be transformed at training time\nand not at test time, for resampling and similar uses, like at\n`imbalanced-learn <https:\/\/github.com\/scikit-learn-contrib\/imbalanced-learn>`_.\nIn general, these use cases can be solved\nwith a custom meta estimator rather than a :class:`~pipeline.Pipeline`.\n\nWhy are there so many different estimators for linear models?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nUsually, there is one classifier and one regressor per model type, e.g.\n:class:`~ensemble.GradientBoostingClassifier` and\n:class:`~ensemble.GradientBoostingRegressor`. Both have similar options and\nboth have the parameter `loss`, which is especially useful in the regression\ncase as it enables the estimation of conditional mean as well as conditional\nquantiles.\n\nFor linear models, there are many estimator classes which are very close to\neach other. Let us have a look at\n\n- :class:`~linear_model.LinearRegression`, no penalty\n- :class:`~linear_model.Ridge`, L2 penalty\n- :class:`~linear_model.Lasso`, L1 penalty (sparse models)\n- :class:`~linear_model.ElasticNet`, L1 + L2 penalty (less sparse models)\n- :class:`~linear_model.SGDRegressor` with `loss=\"squared_loss\"`\n\n**Maintainer perspective:**\nIn principle, they all do the same thing and differ only in the penalty they\nimpose. This, however, has a large impact on the way the underlying\noptimization problem is solved. In the end, this amounts to usage of different\nmethods and tricks from linear algebra. A special case is\n:class:`~linear_model.SGDRegressor`, which\ncomprises all four previous models and differs only in its optimization procedure.\nA further side effect is that the different estimators favor different data\nlayouts (`X` C-contiguous or F-contiguous, sparse csr or csc). 
This complexity\nof the seemingly simple linear models is the reason for having different\nestimator classes for different penalties.\n\n**User perspective:**\nFirst, the current design is inspired by the scientific literature where linear\nregression models with different regularization\/penalty were given different\nnames, e.g. *ridge regression*. Having different model classes with corresponding\nnames makes it easier for users to find those regression models.\nSecond, if all five of the above-mentioned linear models were unified into a single\nclass, there would be parameters with a lot of options like the ``solver``\nparameter. On top of that, there would be a lot of exclusive interactions\nbetween different parameters. For example, the possible options of the\nparameters ``solver``, ``precompute`` and ``selection`` would depend on the\nchosen values of the penalty parameters ``alpha`` and ``l1_ratio``.\n\n\nContributing\n------------\n\nHow can I contribute to scikit-learn?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nSee :ref:`contributing`. Before attempting to add a new algorithm, which is\nusually a major and lengthy undertaking, it is recommended to start with\n:ref:`known issues <new_contributors>`. Please do not contact the contributors\nof scikit-learn directly regarding contributing to scikit-learn.\n\nWhy is my pull request not getting any attention?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThe scikit-learn review process takes a significant amount of time, and\ncontributors should not be discouraged by a lack of activity or review on\ntheir pull request. 
We care a lot about getting things right\nthe first time, as maintenance and later change comes at a high cost.\nWe rarely release any \"experimental\" code, so all of our contributions\nwill be subject to high use immediately and should be of the highest\nquality possible initially.\n\nBeyond that, scikit-learn is limited in its reviewing bandwidth; many of the\nreviewers and core developers are working on scikit-learn on their own time.\nIf a review of your pull request comes slowly, it is likely because the\nreviewers are busy. We ask for your understanding and request that you\nnot close your pull request or discontinue your work solely because of\nthis reason.\n\n.. _new_algorithms_inclusion_criteria:\n\nWhat are the inclusion criteria for new algorithms?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nWe only consider well-established algorithms for inclusion. A rule of thumb is\nat least 3 years since publication, 200+ citations, and wide use and\nusefulness. A technique that provides a clear-cut improvement (e.g. an\nenhanced data structure or a more efficient approximation technique) on\na widely-used method will also be considered for inclusion.\n\nFrom the algorithms or techniques that meet the above criteria, only those\nwhich fit well within the current API of scikit-learn, that is a ``fit``,\n``predict\/transform`` interface and ordinarily having input\/output that is a\nnumpy array or sparse matrix, are accepted.\n\nThe contributor should support the importance of the proposed addition with\nresearch papers and\/or implementations in other similar packages, demonstrate\nits usefulness via common use-cases\/applications and corroborate performance\nimprovements, if any, with benchmarks and\/or plots. 
It is expected that the\nproposed algorithm should outperform the methods that are already implemented\nin scikit-learn at least in some areas.\n\nInclusion of a new algorithm speeding up an existing model is easier if:\n\n- it does not introduce new hyper-parameters (as it makes the library\n  more future-proof),\n- it is easy to document clearly when the contribution improves the speed\n  and when it does not, for instance, \"when ``n_features >>\n  n_samples``\",\n- benchmarks clearly show a speed up.\n\nAlso, note that your implementation need not be in scikit-learn to be used\ntogether with scikit-learn tools. You can implement your favorite algorithm\nin a scikit-learn compatible way, upload it to GitHub and let us know. We\nwill be happy to list it under :ref:`related_projects`. If you already have\na package on GitHub following the scikit-learn API, you may also be\ninterested to look at `scikit-learn-contrib\n<https:\/\/scikit-learn-contrib.github.io>`_.\n\n.. _selectiveness:\n\nWhy are you so selective on what algorithms you include in scikit-learn?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nCode comes with maintenance cost, and we need to balance the amount of\ncode we have with the size of the team (and add to this the fact that\ncomplexity scales non linearly with the number of features).\nThe package relies on core developers using their free time to\nfix bugs, maintain code and review contributions.\nAny algorithm that is added needs future attention by the developers,\nat which point the original author might long have lost interest.\nSee also :ref:`new_algorithms_inclusion_criteria`. 
For a great read about\nlong-term maintenance issues in open-source software, look at\n`the Executive Summary of Roads and Bridges\n<https:\/\/www.fordfoundation.org\/media\/2976\/roads-and-bridges-the-unseen-labor-behind-our-digital-infrastructure.pdf#page=8>`_.\n\n\nUsing scikit-learn\n------------------\n\nWhat's the best way to get help on scikit-learn usage?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n* General machine learning questions: use `Cross Validated\n  <https:\/\/stats.stackexchange.com\/>`_ with the ``[machine-learning]`` tag.\n\n* scikit-learn usage questions: use `Stack Overflow\n  <https:\/\/stackoverflow.com\/questions\/tagged\/scikit-learn>`_ with the\n  ``[scikit-learn]`` and ``[python]`` tags. You can alternatively use the `mailing list\n  <https:\/\/mail.python.org\/mailman\/listinfo\/scikit-learn>`_.\n\nPlease make sure to include a minimal reproduction code snippet (ideally shorter\nthan 10 lines) that highlights your problem on a toy dataset (for instance from\n:mod:`sklearn.datasets` or randomly generated with functions of ``numpy.random`` with\na fixed random seed). Please remove any line of code that is not necessary to\nreproduce your problem.\n\nThe problem should be reproducible by simply copy-pasting your code snippet in a Python\nshell with scikit-learn installed. Do not forget to include the import statements.\nMore guidance to write good reproduction code snippets can be found at:\nhttps:\/\/stackoverflow.com\/help\/mcve.\n\nIf your problem raises an exception that you do not understand (even after googling it),\nplease make sure to include the full traceback that you obtain when running the\nreproduction script.\n\nFor bug reports or feature requests, please make use of the\n`issue tracker on GitHub <https:\/\/github.com\/scikit-learn\/scikit-learn\/issues>`_.\n\n.. 
warning::\n  Please do not email any authors directly to ask for assistance, report bugs,\n  or for any other issue related to scikit-learn.\n\nHow should I save, export or deploy estimators for production?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nSee :ref:`model_persistence`.\n\nHow can I create a bunch object?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nBunch objects are sometimes used as an output for functions and methods. They\nextend dictionaries by enabling values to be accessed by key,\n`bunch[\"value_key\"]`, or by an attribute, `bunch.value_key`.\n\nThey should not be used as an input. Therefore you almost never need to create\na :class:`~utils.Bunch` object, unless you are extending scikit-learn's API.\n\nHow can I load my own datasets into a format usable by scikit-learn?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nGenerally, scikit-learn works on any numeric data stored as numpy arrays\nor scipy sparse matrices. Other types that are convertible to numeric\narrays such as :class:`pandas.DataFrame` are also acceptable.\n\nFor more information on loading your data files into these usable data\nstructures, please refer to :ref:`loading external datasets <external_datasets>`.\n\nHow do I deal with string data (or trees, graphs...)?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nscikit-learn estimators assume you'll feed them real-valued feature vectors.\nThis assumption is hard-coded in pretty much all of the library.\nHowever, you can feed non-numerical inputs to estimators in several ways.\n\nIf you have text documents, you can use term frequency features; see\n:ref:`text_feature_extraction` for the built-in *text vectorizers*.\nFor more general feature extraction from any kind of data, see\n:ref:`dict_feature_extraction` and :ref:`feature_hashing`.\n\nAnother common case is when you have non-numerical data and a custom distance\n(or similarity) metric on these data. 
Examples include strings with edit\ndistance (aka. Levenshtein distance), for instance, DNA or RNA sequences. These can be\nencoded as numbers, but doing so is painful and error-prone. Working with\ndistance metrics on arbitrary data can be done in two ways.\n\nFirstly, many estimators take precomputed distance\/similarity matrices, so if\nthe dataset is not too large, you can compute distances for all pairs of inputs.\nIf the dataset is large, you can use feature vectors with only one \"feature\",\nwhich is an index into a separate data structure, and supply a custom metric\nfunction that looks up the actual data in this data structure. For instance, to use\n:class:`~cluster.dbscan` with Levenshtein distances::\n\n    >>> import numpy as np\n    >>> from leven import levenshtein  # doctest: +SKIP\n    >>> from sklearn.cluster import dbscan\n    >>> data = [\"ACCTCCTAGAAG\", \"ACCTACTAGAAGTT\", \"GAATATTAGGCCGA\"]\n    >>> def lev_metric(x, y):\n    ...     i, j = int(x[0]), int(y[0])  # extract indices\n    ...     return levenshtein(data[i], data[j])\n    ...\n    >>> X = np.arange(len(data)).reshape(-1, 1)\n    >>> X\n    array([[0],\n           [1],\n           [2]])\n    >>> # We need to specify algorithm='brute' as the default assumes\n    >>> # a continuous feature space.\n    >>> dbscan(X, metric=lev_metric, eps=5, min_samples=2, algorithm='brute')  # doctest: +SKIP\n    (array([0, 1]), array([ 0,  0, -1]))\n\nNote that the example above uses the third-party edit distance package\n`leven <https:\/\/pypi.org\/project\/leven\/>`_. 
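The first approach mentioned above, a precomputed distance matrix, can be sketched as follows. This is a minimal illustration, not the documented recipe: the `levenshtein` helper below is a small pure-Python edit distance written here to stand in for a dedicated package such as `leven`, and `metric="precomputed"` tells `dbscan` to treat its input as pairwise distances.

```python
import numpy as np
from sklearn.cluster import dbscan

def levenshtein(a, b):
    # Classic dynamic-programming edit distance (one row at a time).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[-1] + 1,                # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

data = ["ACCTCCTAGAAG", "ACCTACTAGAAGTT", "GAATATTAGGCCGA"]

# Distance matrix over all pairs of inputs (fine for small datasets).
D = np.array([[levenshtein(a, b) for b in data] for a in data], dtype=float)

# metric="precomputed" makes dbscan interpret D as pairwise distances.
core_samples, labels = dbscan(D, metric="precomputed", eps=5, min_samples=2)
# The first two sequences cluster together; the third is left as noise (-1).
```

This avoids the index-into-data trick entirely, at the cost of computing and storing all pairwise distances up front.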
Similar tricks can be used,\nwith some care, for tree kernels, graph kernels, etc.\n\nWhy do I sometimes get a crash\/freeze with ``n_jobs > 1`` under OSX or Linux?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nSeveral scikit-learn tools such as :class:`~model_selection.GridSearchCV` and\n:class:`~model_selection.cross_val_score` rely internally on Python's\n:mod:`multiprocessing` module to parallelize execution\nonto several Python processes by passing ``n_jobs > 1`` as an argument.\n\nThe problem is that Python :mod:`multiprocessing` does a ``fork`` system call\nwithout following it with an ``exec`` system call for performance reasons. Many\nlibraries like (some versions of) Accelerate or vecLib under OSX, (some versions\nof) MKL, the OpenMP runtime of GCC, nvidia's Cuda (and probably many others),\nmanage their own internal thread pool. Upon a call to `fork`, the thread pool\nstate in the child process is corrupted: the thread pool believes it has many\nthreads while only the main thread state has been forked. It is possible to\nchange the libraries to make them detect when a fork happens and reinitialize\nthe thread pool in that case: we did that for OpenBLAS (merged upstream in\nmain since 0.2.10) and we contributed a `patch\n<https:\/\/gcc.gnu.org\/bugzilla\/show_bug.cgi?id=60035>`_ to GCC's OpenMP runtime\n(not yet reviewed).\n\nBut in the end the real culprit is Python's :mod:`multiprocessing` that does\n``fork`` without ``exec`` to reduce the overhead of starting and using new\nPython processes for parallel computing. Unfortunately this is a violation of\nthe POSIX standard and therefore some software editors like Apple refuse to\nconsider the lack of fork-safety in Accelerate and vecLib as a bug.\n\nIn Python 3.4+ it is now possible to configure :mod:`multiprocessing` to\nuse the ``\"forkserver\"`` or ``\"spawn\"`` start methods (instead of the default\n``\"fork\"``) to manage the process pools. 
To work around this issue when\nusing scikit-learn, you can set the ``JOBLIB_START_METHOD`` environment\nvariable to ``\"forkserver\"``. However the user should be aware that using\nthe ``\"forkserver\"`` method prevents :class:`joblib.Parallel` to call function\ninteractively defined in a shell session.\n\nIf you have custom code that uses :mod:`multiprocessing` directly instead of using\nit via :mod:`joblib` you can enable the ``\"forkserver\"`` mode globally for your\nprogram. Insert the following instructions in your main script::\n\n    import multiprocessing\n\n    # other imports, custom code, load data, define model...\n\n    if __name__ == \"__main__\":\n        multiprocessing.set_start_method(\"forkserver\")\n\n        # call scikit-learn utils with n_jobs > 1 here\n\nYou can find more default on the new start methods in the `multiprocessing\ndocumentation <https:\/\/docs.python.org\/3\/library\/multiprocessing.html#contexts-and-start-methods>`_.\n\n.. _faq_mkl_threading:\n\nWhy does my job use more cores than specified with ``n_jobs``?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThis is because ``n_jobs`` only controls the number of jobs for\nroutines that are parallelized with :mod:`joblib`, but parallel code can come\nfrom other sources:\n\n- some routines may be parallelized with OpenMP (for code written in C or\n  Cython),\n- scikit-learn relies a lot on numpy, which in turn may rely on numerical\n  libraries like MKL, OpenBLAS or BLIS which can provide parallel\n  implementations.\n\nFor more details, please refer to our :ref:`notes on parallelism <parallelism>`.\n\nHow do I set a ``random_state`` for an entire execution?\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nPlease refer to :ref:`randomness`.","site":"scikit-learn","answers_cleaned":"   raw   html     style         h3 headings on this page are the questions  make them rubric like        h3         font size  1rem        font weight  bold        padding 
bottom  0 2rem        margin  2rem 0 1 15rem 0        border bottom  1px solid var   pst color border                 Increase top margin for first question in each section        h2   section   h3         margin top  2 5rem                Make the headerlinks a bit more visible        h3   a headerlink         font size  0 9rem                Remove the backlink decoration on the titles        h2   a toc backref      h3   a toc backref         text decoration  none            style       faq                              Frequently Asked Questions                                currentmodule   sklearn  Here we try to give some answers to questions that regularly pop up on the mailing list      contents   Table of Contents    local     depth  2   About the project                    What is the project name  a lot of people get it wrong                                                            scikit learn  but not scikit or SciKit nor sci kit learn  Also not scikits learn or scikits learn  which were previously used   How do you pronounce the project name                                         sy kit learn  sci stands for science   Why scikit              There are multiple scikits  which are scientific toolboxes built around SciPy  Apart from scikit learn  another popular one is  scikit image  https   scikit image org       Do you support PyPy                        Due to limited maintainer resources and small number of users  using scikit learn with  PyPy  https   pypy org      an alternative Python implementation with a built in just in time compiler  is not officially supported   How can I obtain permission to use the images in scikit learn for my work                                                                              The images contained in the  scikit learn repository  https   github com scikit learn scikit learn    and the images generated within the  scikit learn documentation  https   scikit learn org stable index html    can be used via the  
BSD 3 Clause License  https   github com scikit learn scikit learn tab BSD 3 Clause 1 ov file    for your work  Citations of scikit learn are highly encouraged and appreciated  See  ref  citing scikit learn  citing scikit learn     Implementation decisions                           Why is there no support for deep or reinforcement learning  Will there be such support in the future                                                                                                         Deep learning and reinforcement learning both require a rich vocabulary to define an architecture  with deep learning additionally requiring GPUs for efficient computing  However  neither of these fit within the design constraints of scikit learn  As a result  deep learning and reinforcement learning are currently out of scope for what scikit learn seeks to achieve   You can find more information about the addition of GPU support at  Will you add GPU support      Note that scikit learn currently implements a simple multilayer perceptron in  mod  sklearn neural network   We will only accept bug fixes for this module  If you want to implement more complex deep learning models  please turn to popular deep learning frameworks such as  tensorflow  https   www tensorflow org       keras  https   keras io      and  pytorch  https   pytorch org           adding graphical models   Will you add graphical models or sequence prediction to scikit learn                                                                         Not in the foreseeable future  scikit learn tries to provide a unified API for the basic tasks in machine learning  with pipelines and meta algorithms like grid search to tie everything together  The required concepts  APIs  algorithms and expertise required for structured learning are different from what scikit learn has to offer  If we started doing arbitrary structured learning  we d need to redesign the whole package and the project would likely collapse under its own weight   
There are two projects with API similar to scikit learn that do structured prediction      pystruct  https   pystruct github io     handles general structured   learning  focuses on SSVMs on arbitrary graph structures with   approximate inference  defines the notion of sample as an instance of   the graph structure       seqlearn  https   larsmans github io seqlearn     handles sequences only    focuses on exact inference  has HMMs  but mostly for the sake of   completeness  treats a feature vector as a sample and uses an offset encoding   for the dependencies between feature vectors    Why did you remove HMMs from scikit learn                                             See  ref  adding graphical models     Will you add GPU support                             Adding GPU support by default would introduce heavy harware specific software dependencies and existing algorithms would need to be reimplemented  This would make it both harder for the average user to install scikit learn and harder for the developers to maintain the code   However  since 2023  a limited but growing  ref  list of scikit learn estimators  array api supported   can already run on GPUs if the input data is provided as a PyTorch or CuPy array and if scikit learn has been configured to accept such inputs as explained in  ref  array api   This Array API support allows scikit learn to run on GPUs without introducing heavy and hardware specific software dependencies to the main package   Most estimators that rely on NumPy for their computationally intensive operations can be considered for Array API support and therefore GPU support   However  not all scikit learn estimators are amenable to efficiently running on GPUs via the Array API for fundamental algorithmic reasons  For instance  tree based models currently implemented with Cython in scikit learn are fundamentally not array based algorithms  Other algorithms such as k means or k nearest neighbors rely on array based algorithms but are also 
implemented in Cython  Cython is used to manually interleave consecutive array operations to avoid introducing performance killing memory access to large intermediate arrays  this low level algorithmic rewrite is called  kernel fusion  and cannot be expressed via the Array API for the foreseeable future   Adding efficient GPU support to estimators that cannot be efficiently implemented with the Array API would require designing and adopting a more flexible extension system for scikit learn  This possibility is being considered in the following GitHub issue  under discussion      https   github com scikit learn scikit learn issues 22438   Why do categorical variables need preprocessing in scikit learn  compared to other tools                                                                                             Most of scikit learn assumes data is in NumPy arrays or SciPy sparse matrices of a single numeric dtype  These do not explicitly represent categorical variables at present  Thus  unlike R s   data frames   or  class  pandas DataFrame   we require explicit conversion of categorical features to numeric values  as discussed in  ref  preprocessing categorical features   See also  ref  sphx glr auto examples compose plot column transformer mixed types py  for an example of working with heterogeneous  e g  categorical and numeric  data   Note that recently   class   sklearn ensemble HistGradientBoostingClassifier  and  class   sklearn ensemble HistGradientBoostingRegressor  gained native support for categorical features through the option  categorical features  from dtype    This option relies on inferring which columns of the data are categorical based on the  class  pandas CategoricalDtype  and  class  polars datatypes Categorical  dtypes   Does scikit learn work natively with various types of dataframes                                                                     Scikit learn has limited support for  class  pandas DataFrame  and  class  polars 
DataFrame   Scikit learn estimators can accept both these dataframe types as input  and scikit learn transformers can output dataframes using the  set output  API  For more details  refer to  ref  sphx glr auto examples miscellaneous plot set output py    However  the internal computations in scikit learn estimators rely on numerical operations that are more efficiently performed on homogeneous data structures such as NumPy arrays or SciPy sparse matrices  As a result  most scikit learn estimators will internally convert dataframe inputs into these homogeneous data structures  Similarly  dataframe outputs are generated from these homogeneous data structures   Also note that  class   sklearn compose ColumnTransformer  makes it convenient to handle heterogeneous pandas dataframes by mapping homogeneous subsets of dataframe columns selected by name or dtype to dedicated scikit learn transformers  Therefore  class   sklearn compose ColumnTransformer  are often used in the first step of scikit learn pipelines when dealing with heterogeneous dataframes  see  ref  pipeline  for more details    See also  ref  sphx glr auto examples compose plot column transformer mixed types py  for an example of working with heterogeneous  e g  categorical and numeric  data   Do you plan to implement transform for target   y   in a pipeline                                                                     Currently transform only works for features   X   in a pipeline  There s a long standing discussion about not being able to transform   y   in a pipeline  Follow on GitHub issue  issue  4143   Meanwhile  you can check out  class   compose TransformedTargetRegressor    pipegraph  https   github com mcasl PipeGraph     and  imbalanced learn  https   github com scikit learn contrib imbalanced learn     Note that scikit learn solved for the case where   y   has an invertible transformation applied before training and inverted after prediction  scikit learn intends to solve for use cases 
where   y   should be transformed at training time and not at test time  for resampling and similar uses  like at  imbalanced learn  https   github com scikit learn contrib imbalanced learn     In general  these use cases can be solved with a custom meta estimator rather than a  class   pipeline Pipeline    Why are there so many different estimators for linear models                                                                Usually  there is one classifier and one regressor per model type  e g   class   ensemble GradientBoostingClassifier  and  class   ensemble GradientBoostingRegressor   Both have similar options and both have the parameter  loss   which is especially useful in the regression case as it enables the estimation of conditional mean as well as conditional quantiles   For linear models  there are many estimator classes which are very close to each other  Let us have a look at     class   linear model LinearRegression   no penalty    class   linear model Ridge   L2 penalty    class   linear model Lasso   L1 penalty  sparse models     class   linear model ElasticNet   L1   L2 penalty  less sparse models     class   linear model SGDRegressor  with  loss  squared loss      Maintainer perspective    They all do in principle the same and are different only by the penalty they impose  This  however  has a large impact on the way the underlying optimization problem is solved  In the end  this amounts to usage of different methods and tricks from linear algebra  A special case is  class   linear model SGDRegressor  which comprises all 4 previous models and is different by the optimization procedure  A further side effect is that the different estimators favor different data layouts   X  C contiguous or F contiguous  sparse csr or csc   This complexity of the seemingly simple linear models is the reason for having different estimator classes for different penalties     User perspective    First  the current design is inspired by the scientific literature 
where linear regression models with different regularization penalty were given different names  e g   ridge regression   Having different model classes with according names makes it easier for users to find those regression models  Secondly  if all the 5 above mentioned linear models were unified into a single class  there would be parameters with a lot of options like the   solver   parameter  On top of that  there would be a lot of exclusive interactions between different parameters  For example  the possible options of the parameters   solver      precompute   and   selection   would depend on the chosen values of the penalty parameters   alpha   and   l1 ratio      Contributing               How can I contribute to scikit learn                                        See  ref  contributing   Before wanting to add a new algorithm  which is usually a major and lengthy undertaking  it is recommended to start with  ref  known issues  new contributors    Please do not contact the contributors of scikit learn directly regarding contributing to scikit learn   Why is my pull request not getting any attention                                                     The scikit learn review process takes a significant amount of time  and contributors should not be discouraged by a lack of activity or review on their pull request  We care a lot about getting things right the first time  as maintenance and later change comes at a high cost  We rarely release any  experimental  code  so all of our contributions will be subject to high use immediately and should be of the highest quality possible initially   Beyond that  scikit learn is limited in its reviewing bandwidth  many of the reviewers and core developers are working on scikit learn on their own time  If a review of your pull request comes slowly  it is likely because the reviewers are busy  We ask for your understanding and request that you not close your pull request or discontinue your work solely because of this reason 
      new algorithms inclusion criteria   What are the inclusion criteria for new algorithms                                                       We only consider well established algorithms for inclusion  A rule of thumb is at least 3 years since publication  200  citations  and wide use and usefulness  A technique that provides a clear cut improvement  e g  an enhanced data structure or a more efficient approximation technique  on a widely used method will also be considered for inclusion   From the algorithms or techniques that meet the above criteria  only those which fit well within the current API of scikit learn  that is a   fit      predict transform   interface and ordinarily having input output that is a numpy array or sparse matrix  are accepted   The contributor should support the importance of the proposed addition with research papers and or implementations in other similar packages  demonstrate its usefulness via common use cases applications and corroborate performance improvements  if any  with benchmarks and or plots  It is expected that the proposed algorithm should outperform the methods that are already implemented in scikit learn at least in some areas   Inclusion of a new algorithm speeding up an existing model is easier if     it does not introduce new hyper parameters  as it makes the library   more future proof     it is easy to document clearly when the contribution improves the speed   and when it does not  for instance   when   n features      n samples       benchmarks clearly show a speed up   Also  note that your implementation need not be in scikit learn to be used together with scikit learn tools  You can implement your favorite algorithm in a scikit learn compatible way  upload it to GitHub and let us know  We will be happy to list it under  ref  related projects   If you already have a package on GitHub following the scikit learn API  you may also be interested to look at  scikit learn contrib  https   scikit learn contrib 
github io          selectiveness   Why are you so selective on what algorithms you include in scikit learn                                                                           Code comes with maintenance cost  and we need to balance the amount of code we have with the size of the team  and add to this the fact that complexity scales non linearly with the number of features   The package relies on core developers using their free time to fix bugs  maintain code and review contributions  Any algorithm that is added needs future attention by the developers  at which point the original author might long have lost interest  See also  ref  new algorithms inclusion criteria   For a great read about long term maintenance issues in open source software  look at  the Executive Summary of Roads and Bridges  https   www fordfoundation org media 2976 roads and bridges the unseen labor behind our digital infrastructure pdf page 8       Using scikit learn                     What s the best way to get help on scikit learn usage                                                            General machine learning questions  use  Cross Validated    https   stats stackexchange com     with the    machine learning    tag     scikit learn usage questions  use  Stack Overflow    https   stackoverflow com questions tagged scikit learn    with the      scikit learn    and    python    tags  You can alternatively use the  mailing list    https   mail python org mailman listinfo scikit learn      Please make sure to include a minimal reproduction code snippet  ideally shorter than 10 lines  that highlights your problem on a toy dataset  for instance from  mod  sklearn datasets  or randomly generated with functions of   numpy random   with a fixed random seed   Please remove any line of code that is not necessary to reproduce your problem   The problem should be reproducible by simply copy pasting your code snippet in a Python shell with scikit learn installed  Do not forget to include 
the import statements  More guidance to write good reproduction code snippets can be found at  https   stackoverflow com help mcve   If your problem raises an exception that you do not understand  even after googling it   please make sure to include the full traceback that you obtain when running the reproduction script   For bug reports or feature requests  please make use of the  issue tracker on GitHub  https   github com scikit learn scikit learn issues         warning     Please do not email any authors directly to ask for assistance  report bugs    or for any other issue related to scikit learn   How should I save  export or deploy estimators for production                                                                  See  ref  model persistence    How can I create a bunch object                                    Bunch objects are sometimes used as an output for functions and methods  They extend dictionaries by enabling values to be accessed by key   bunch  value key     or by an attribute   bunch value key    They should not be used as an input  Therefore you almost never need to create a  class   utils Bunch  object  unless you are extending scikit learn s API   How can I load my own datasets into a format usable by scikit learn                                                                        Generally  scikit learn works on any numeric data stored as numpy arrays or scipy sparse matrices  Other types that are convertible to numeric arrays such as  class  pandas DataFrame  are also acceptable   For more information on loading your data files into these usable data structures  please refer to  ref  loading external datasets  external datasets     How do I deal with string data  or trees  graphs                                                             scikit learn estimators assume you ll feed them real valued feature vectors  This assumption is hard coded in pretty much all of the library  However  you can feed non numerical inputs to 
estimators in several ways. If you have text documents, you can use term frequency features; see :ref:`text_feature_extraction` for the built-in *text vectorizers*. For more general feature extraction from any kind of data, see :ref:`dict_feature_extraction` and :ref:`feature_hashing`.\n\nAnother common case is when you have non-numerical data and a custom distance (or similarity) metric on these data. Examples include strings with edit distance (aka. Levenshtein distance), for instance DNA or RNA sequences. These can be encoded as numbers, but doing so is painful and error-prone. Working with distance metrics on arbitrary data can be done in two ways.\n\nFirstly, many estimators take precomputed distance\/similarity matrices, so if the dataset is not too large, you can compute distances for all pairs of inputs. If the dataset is large, you can use feature vectors with only one \"feature\", which is an index into a separate data structure, and supply a custom metric function that looks up the actual data in this data structure. For instance, to use :class:`cluster.dbscan` with Levenshtein distances::\n\n    >>> import numpy as np\n    >>> from leven import levenshtein       # doctest: +SKIP\n    >>> from sklearn.cluster import dbscan\n    >>> data = [\"ACCTCCTAGAAG\", \"ACCTACTAGAAGTT\", \"GAATATTAGGCCGA\"]\n    >>> def lev_metric(x, y):\n    ...     i, j = int(x[0]), int(y[0])  # extract indices\n    ...     return levenshtein(data[i], data[j])\n    ...\n    >>> X = np.arange(len(data)).reshape(-1, 1)\n    >>> X\n    array([[0],\n           [1],\n           [2]])\n    >>> # We need to specify algorithm='brute' as the default assumes\n    >>> # a continuous feature space.\n    >>> dbscan(X, metric=lev_metric, eps=5, min_samples=2, algorithm='brute')  # doctest: +SKIP\n    (array([0, 1]), array([ 0,  0, -1]))\n\nNote that the example above uses the third-party edit distance package `leven <https:\/\/pypi.org\/project\/leven\/>`_. Similar tricks can be used, with some care, for tree kernels, graph 
kernels, etc.\n\nWhy do I sometimes get a crash\/freeze with ``n_jobs > 1`` under OSX or Linux?\n-----------------------------------------------------------------------------\n\nSeveral scikit-learn tools such as :class:`model_selection.GridSearchCV` and :class:`model_selection.cross_val_score` rely internally on Python's :mod:`multiprocessing` module to parallelize execution onto several Python processes by passing ``n_jobs > 1`` as an argument.\n\nThe problem is that Python :mod:`multiprocessing` does a ``fork`` system call without following it with an ``exec`` system call for performance reasons. Many libraries like (some versions of) Accelerate or vecLib under OSX, (some versions of) MKL, the OpenMP runtime of GCC, nvidia's Cuda (and probably many others), manage their own internal thread pool. Upon a call to ``fork``, the thread pool state in the child process is corrupted: the thread pool believes it has many threads while only the main thread state has been forked. It is possible to change the libraries to make them detect when a fork happens and reinitialize the thread pool in that case: we did that for OpenBLAS (merged upstream in main since 0.2.10) and we contributed a `patch <https:\/\/gcc.gnu.org\/bugzilla\/show_bug.cgi?id=60035>`_ to GCC's OpenMP runtime (not yet reviewed).\n\nBut in the end the real culprit is Python's :mod:`multiprocessing` that does ``fork`` without ``exec`` to reduce the overhead of starting and using new Python processes for parallel computing. Unfortunately this is a violation of the POSIX standard and therefore some software editors like Apple refuse to consider the lack of fork-safety in Accelerate and vecLib as a bug.\n\nIn Python 3.4+, it is now possible to configure :mod:`multiprocessing` to use the ``'forkserver'`` or ``'spawn'`` start methods (instead of the default ``'fork'``) to manage the process pools. To work around this issue when using scikit-learn, you can set the ``JOBLIB_START_METHOD`` environment variable to ``'forkserver'``. 
However, the user should be aware that using the ``'forkserver'`` method prevents :class:`joblib.Parallel` from calling functions interactively defined in a shell session.\n\nIf you have custom code that uses :mod:`multiprocessing` directly instead of using it via :mod:`joblib`, you can enable the ``'forkserver'`` mode globally for your program. Insert the following instructions in your main script::\n\n    import multiprocessing\n\n    # other imports, custom code, load data, define model...\n\n    if __name__ == '__main__':\n        multiprocessing.set_start_method('forkserver')\n\n        # call scikit-learn utils with n_jobs > 1 here\n\nYou can find more details on the new start methods in the `multiprocessing documentation <https:\/\/docs.python.org\/3\/library\/multiprocessing.html#contexts-and-start-methods>`_.\n\n.. _faq_mkl_threading:\n\nWhy does my job use more cores than specified with ``n_jobs``?\n--------------------------------------------------------------\n\nThis is because ``n_jobs`` only controls the number of jobs for routines that are parallelized with :mod:`joblib`, but parallel code can come from other sources:\n\n- some routines may be parallelized with OpenMP (for code written in C or Cython),\n- scikit-learn relies a lot on numpy, which in turn may rely on numerical libraries like MKL, OpenBLAS or BLIS which can provide parallel implementations.\n\nFor more details, please refer to our :ref:`notes on parallelism <parallelism>`.\n\nHow do I set a ``random_state`` for an entire execution?\n--------------------------------------------------------\n\nPlease refer to :ref:`randomness`."}
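The index-into-a-data-structure trick described in the FAQ above can be sketched without scikit-learn or the third-party `leven` package; the pure-Python `levenshtein` below is an illustrative stand-in, and the final pairwise matrix is what an estimator accepting `metric='precomputed'` would consume:

```python
# Sketch of the "index into a separate data structure" trick, assuming a
# pure-Python edit distance as a stand-in for the `leven` package.

def levenshtein(a, b):
    """Dynamic-programming Levenshtein (edit) distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# The actual data lives outside the "feature matrix" ...
data = ["ACCTCCTAGAAG", "ACCTACTAGAAGTT", "GAATATTAGGCCGA"]

# ... and each sample's single "feature" is just its index into `data`.
X = [[k] for k in range(len(data))]

def lev_metric(x, y):
    i, j = int(x[0]), int(y[0])  # extract indices
    return levenshtein(data[i], data[j])

# Full pairwise distance matrix, as an estimator run with
# metric='precomputed' would consume it.
D = [[lev_metric(x, y) for y in X] for x in X]
```

The first two sequences end up 3 edits apart while the third is far from both, which is consistent with the clustering shown in the `dbscan` doctest.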
{"questions":"scikit-learn sklearn glossary Glossary of Common Terms and API Elements This glossary hopes to definitively represent the tacit and explicit conventions applied in Scikit learn and its API while providing a reference","answers":".. currentmodule:: sklearn\n\n.. _glossary:\n\n=========================================\nGlossary of Common Terms and API Elements\n=========================================\n\nThis glossary hopes to definitively represent the tacit and explicit\nconventions applied in Scikit-learn and its API, while providing a reference\nfor users and contributors. It aims to describe the concepts and either detail\ntheir corresponding API or link to other relevant parts of the documentation\nwhich do so. By linking to glossary entries from the API Reference and User\nGuide, we may minimize redundancy and inconsistency.\n\nWe begin by listing general concepts (and any that didn't fit elsewhere), but\nmore specific sets of related terms are listed below:\n:ref:`glossary_estimator_types`, :ref:`glossary_target_types`,\n:ref:`glossary_methods`, :ref:`glossary_parameters`,\n:ref:`glossary_attributes`, :ref:`glossary_sample_props`.\n\nGeneral Concepts\n================\n\n.. glossary::\n\n    1d\n    1d array\n        One-dimensional array. A NumPy array whose ``.shape`` has length 1.\n        A vector.\n\n    2d\n    2d array\n        Two-dimensional array. A NumPy array whose ``.shape`` has length 2.\n        Often represents a matrix.\n\n    API\n        Refers to both the *specific* interfaces for estimators implemented in\n        Scikit-learn and the *generalized* conventions across types of\n        estimators as described in this glossary and :ref:`overviewed in the\n        contributor documentation <api_overview>`.\n\n        The specific interfaces that constitute Scikit-learn's public API are\n        largely documented in :ref:`api_ref`. 
However, we less formally consider\n        anything as public API if none of the identifiers required to access it\n        begins with ``_``.  We generally try to maintain :term:`backwards\n        compatibility` for all objects in the public API.\n\n        Private API, including functions, modules and methods beginning ``_``\n        are not assured to be stable.\n\n    array-like\n        The most common data format for *input* to Scikit-learn estimators and\n        functions, array-like is any type object for which\n        :func:`numpy.asarray` will produce an array of appropriate shape\n        (usually 1 or 2-dimensional) of appropriate dtype (usually numeric).\n\n        This includes:\n\n        * a numpy array\n        * a list of numbers\n        * a list of length-k lists of numbers for some fixed length k\n        * a :class:`pandas.DataFrame` with all columns numeric\n        * a numeric :class:`pandas.Series`\n\n        It excludes:\n\n        * a :term:`sparse matrix`\n        * a sparse array\n        * an iterator\n        * a generator\n\n        Note that *output* from scikit-learn estimators and functions (e.g.\n        predictions) should generally be arrays or sparse matrices, or lists\n        thereof (as in multi-output :class:`tree.DecisionTreeClassifier`'s\n        ``predict_proba``). An estimator where ``predict()`` returns a list or\n        a `pandas.Series` is not valid.\n\n    attribute\n    attributes\n        We mostly use attribute to refer to how model information is stored on\n        an estimator during fitting.  Any public attribute stored on an\n        estimator instance is required to begin with an alphabetic character\n        and end in a single underscore if it is set in :term:`fit` or\n        :term:`partial_fit`.  These are what is documented under an estimator's\n        *Attributes* documentation.  
The information stored in attributes is\n        usually either: sufficient statistics used for prediction or\n        transformation; :term:`transductive` outputs such as :term:`labels_` or\n        :term:`embedding_`; or diagnostic data, such as\n        :term:`feature_importances_`.\n        Common attributes are listed :ref:`below <glossary_attributes>`.\n\n        A public attribute may have the same name as a constructor\n        :term:`parameter`, with a ``_`` appended.  This is used to store a\n        validated or estimated version of the user's input. For example,\n        :class:`decomposition.PCA` is constructed with an ``n_components``\n        parameter. From this, together with other parameters and the data,\n        PCA estimates the attribute ``n_components_``.\n\n        Further private attributes used in prediction\/transformation\/etc. may\n        also be set when fitting.  These begin with a single underscore and are\n        not assured to be stable for public access.\n\n        A public attribute on an estimator instance that does not end in an\n        underscore should be the stored, unmodified value of an ``__init__``\n        :term:`parameter` of the same name.  Because of this equivalence, these\n        are documented under an estimator's *Parameters* documentation.\n\n    backwards compatibility\n        We generally try to maintain backward compatibility (i.e. interfaces\n        and behaviors may be extended but not changed or removed) from release\n        to release but this comes with some exceptions:\n\n        Public API only\n            The behavior of objects accessed through private identifiers\n            (those beginning ``_``) may be changed arbitrarily between\n            versions.\n        As documented\n            We will generally assume that the users have adhered to the\n            documented parameter types and ranges. 
If the documentation asks\n            for a list and the user gives a tuple, we do not assure consistent\n            behavior from version to version.\n        Deprecation\n            Behaviors may change following a :term:`deprecation` period\n            (usually two releases long).  Warnings are issued using Python's\n            :mod:`warnings` module.\n        Keyword arguments\n            We may sometimes assume that all optional parameters (other than X\n            and y to :term:`fit` and similar methods) are passed as keyword\n            arguments only and may be positionally reordered.\n        Bug fixes and enhancements\n            Bug fixes and -- less often -- enhancements may change the behavior\n            of estimators, including the predictions of an estimator trained on\n            the same data and :term:`random_state`.  When this happens, we\n            attempt to note it clearly in the changelog.\n        Serialization\n            We make no assurances that pickling an estimator in one version\n            will allow it to be unpickled to an equivalent model in the\n            subsequent version.  (For estimators in the sklearn package, we\n            issue a warning when this unpickling is attempted, even if it may\n            happen to work.)  See :ref:`persistence_limitations`.\n        :func:`utils.estimator_checks.check_estimator`\n            We provide limited backwards compatibility assurances for the\n            estimator checks: we may add extra requirements on estimators\n            tested with this function, usually when these were informally\n            assumed but not formally tested.\n\n        Despite this informal contract with our users, the software is provided\n        as is, as stated in the license.  
When a release inadvertently\n        introduces changes that are not backward compatible, these are known\n        as software regressions.\n\n    callable\n        A function, class or an object which implements the ``__call__``\n        method; anything for which `callable()\n        <https:\/\/docs.python.org\/3\/library\/functions.html#callable>`_\n        returns True.\n\n    categorical feature\n        A categorical or nominal :term:`feature` is one that has a\n        finite set of discrete values across the population of data.\n        These are commonly represented as columns of integers or\n        strings. Strings will be rejected by most scikit-learn\n        estimators, and integers will be treated as ordinal or\n        count-valued. For use with most estimators, categorical\n        variables should be one-hot encoded. Notable exceptions include\n        tree-based models such as random forests and gradient boosting\n        models that often work better and faster with integer-coded\n        categorical variables.\n        :class:`~sklearn.preprocessing.OrdinalEncoder` helps encode\n        string-valued categorical features as ordinal integers, and\n        :class:`~sklearn.preprocessing.OneHotEncoder` can be used to\n        one-hot encode categorical features.\n        See also :ref:`preprocessing_categorical_features` and the\n        `categorical-encoding\n        <https:\/\/github.com\/scikit-learn-contrib\/category_encoders>`_\n        package for tools related to encoding categorical features.\n\n    clone\n    cloned\n        To copy an :term:`estimator instance` and create a new one with\n        identical :term:`parameters`, but without any fitted\n        :term:`attributes`, using :func:`~sklearn.base.clone`.\n\n        When ``fit`` is called, a :term:`meta-estimator` usually clones\n        a wrapped estimator instance before fitting the cloned instance.\n        (Exceptions, for legacy reasons, include\n        
:class:`~pipeline.Pipeline` and\n        :class:`~pipeline.FeatureUnion`.)\n\n        If the estimator's `random_state` parameter is an integer (or if the\n        estimator doesn't have a `random_state` parameter), an *exact clone*\n        is returned: the clone and the original estimator will give the exact\n        same results. Otherwise, a *statistical clone* is returned: the clone\n        might yield different results from the original estimator. More\n        details can be found in :ref:`randomness`.\n\n    common tests\n        This refers to the tests run on almost every estimator class in\n        Scikit-learn to check they comply with basic API conventions.  They are\n        available for external use through\n        :func:`utils.estimator_checks.check_estimator` or\n        :func:`utils.estimator_checks.parametrize_with_checks`, with most of the\n        implementation in ``sklearn\/utils\/estimator_checks.py``.\n\n        Note: Some exceptions to the common testing regime are currently\n        hard-coded into the library, but we hope to replace this by marking\n        exceptional behaviours on the estimator using semantic :term:`estimator\n        tags`.\n\n    cross-fitting\n    cross fitting\n        A resampling method that iteratively partitions data into mutually\n        exclusive subsets to fit two stages. During the first stage, the\n        mutually exclusive subsets enable predictions or transformations to be\n        computed on data not seen during training. The computed data is then\n        used in the second stage. 
The objective is to avoid having any\n        overfitting in the first stage introduce bias into the input data\n        distribution of the second stage.\n        For examples of its use, see: :class:`~preprocessing.TargetEncoder`,\n        :class:`~ensemble.StackingClassifier`,\n        :class:`~ensemble.StackingRegressor` and\n        :class:`~calibration.CalibratedClassifierCV`.\n\n    cross-validation\n    cross validation\n        A resampling method that iteratively partitions data into mutually\n        exclusive 'train' and 'test' subsets so model performance can be\n        evaluated on unseen data. This conserves data as it avoids the need to\n        hold out a 'validation' dataset and accounts for variability as multiple\n        rounds of cross validation are generally performed.\n        See :ref:`User Guide <cross_validation>` for more details.\n\n    deprecation\n        We use deprecation to slowly violate our :term:`backwards\n        compatibility` assurances, usually to:\n\n        * change the default value of a parameter; or\n        * remove a parameter, attribute, method, class, etc.\n\n        We will ordinarily issue a warning when a deprecated element is used,\n        although there may be limitations to this.  
For instance, we will raise\n        a warning when someone sets a parameter that has been deprecated, but\n        may not when they access that parameter's attribute on the estimator\n        instance.\n\n        See the :ref:`Contributors' Guide <contributing_deprecation>`.\n\n    dimensionality\n        May be used to refer to the number of :term:`features` (i.e.\n        :term:`n_features`), or columns in a 2d feature matrix.\n        Dimensions are, however, also used to refer to the length of a NumPy\n        array's shape, distinguishing a 1d array from a 2d matrix.\n\n    docstring\n        The embedded documentation for a module, class, function, etc., usually\n        in code as a string at the beginning of the object's definition, and\n        accessible as the object's ``__doc__`` attribute.\n\n        We try to adhere to `PEP257\n        <https:\/\/www.python.org\/dev\/peps\/pep-0257\/>`_, and follow `NumpyDoc\n        conventions <https:\/\/numpydoc.readthedocs.io\/en\/latest\/format.html>`_.\n\n    double underscore\n    double underscore notation\n        When specifying parameter names for nested estimators, ``__`` may be\n        used to separate between parent and child in some contexts. The most\n        common use is when setting parameters through a meta-estimator with\n        :term:`set_params` and hence in specifying a search grid in\n        :ref:`parameter search <grid_search>`. See :term:`parameter`.\n        It is also used in :meth:`pipeline.Pipeline.fit` for passing\n        :term:`sample properties` to the ``fit`` methods of estimators in\n        the pipeline.\n\n    dtype\n    data type\n        NumPy arrays assume a homogeneous data type throughout, available in\n        the ``.dtype`` attribute of an array (or sparse matrix). We generally\n        assume simple data types for scikit-learn data: float or integer.\n        We may support object or string data types for arrays before encoding\n        or vectorizing.  
Our estimators do not work with struct arrays, for\n        instance.\n\n        Our documentation can sometimes give information about the dtype\n        precision, e.g. `np.int32`, `np.int64`, etc. When the precision is\n        provided, it refers to the NumPy dtype. If an arbitrary precision is\n        used, the documentation will refer to dtype `integer` or `floating`.\n        Note that in this case, the precision can be platform dependent.\n        The `numeric` dtype refers to accepting both `integer` and `floating`.\n\n        When it comes to choosing between 64-bit dtype (i.e. `np.float64` and\n        `np.int64`) and 32-bit dtype (i.e. `np.float32` and `np.int32`), it\n        boils down to a trade-off between efficiency and precision. The 64-bit\n        types offer more accurate results due to their lower floating-point\n        error, but demand more computational resources, resulting in slower\n        operations and increased memory usage. In contrast, 32-bit types\n        promise enhanced operation speed and reduced memory consumption, but\n        introduce a larger floating-point error. The efficiency improvements are\n        dependent on lower-level optimizations such as vectorization,\n        single instruction, multiple data (SIMD), or cache optimization, but\n        crucially on the compatibility of the algorithm in use.\n\n        Specifically, the choice of precision should account for whether the\n        employed algorithm can effectively leverage `np.float32`. Some\n        algorithms, especially certain minimization methods, are exclusively\n        coded for `np.float64`, meaning that even if `np.float32` is passed, it\n        triggers an automatic conversion back to `np.float64`. 
This not only\n        negates the intended computational savings but also introduces\n        additional overhead, making operations with `np.float32` unexpectedly\n        slower and more memory-intensive due to this extra conversion step.\n\n    duck typing\n        We try to apply `duck typing\n        <https:\/\/en.wikipedia.org\/wiki\/Duck_typing>`_ to determine how to\n        handle some input values (e.g. checking whether a given estimator is\n        a classifier).  That is, we avoid using ``isinstance`` where possible,\n        and rely on the presence or absence of attributes to determine an\n        object's behaviour.  Some nuance is required when following this\n        approach:\n\n        * For some estimators, an attribute may only be available once it is\n          :term:`fitted`.  For instance, we cannot a priori determine if\n          :term:`predict_proba` is available in a grid search where the grid\n          includes alternating between a probabilistic and a non-probabilistic\n          predictor in the final step of the pipeline.  In the following, we\n          can only determine if ``clf`` is probabilistic after fitting it on\n          some data::\n\n              >>> from sklearn.model_selection import GridSearchCV\n              >>> from sklearn.linear_model import SGDClassifier\n              >>> clf = GridSearchCV(SGDClassifier(),\n              ...                    param_grid={'loss': ['log_loss', 'hinge']})\n\n          This means that we can only check for duck-typed attributes after\n          fitting, and that we must be careful to make :term:`meta-estimators`\n          only present attributes according to the state of the underlying\n          estimator after fitting.\n\n        * Checking if an attribute is present (using ``hasattr``) is in general\n          just as expensive as getting the attribute (``getattr`` or dot\n          notation).  In some cases, getting the attribute may indeed be\n          expensive (e.g. 
for some implementations of\n          :term:`feature_importances_`, which may suggest this is an API design\n          flaw).  So code which does ``hasattr`` followed by ``getattr`` should\n          be avoided; ``getattr`` within a try-except block is preferred.\n\n        * For determining some aspects of an estimator's expectations or\n          support for some feature, we use :term:`estimator tags` instead of\n          duck typing.\n\n    early stopping\n        This consists in stopping an iterative optimization method before the\n        convergence of the training loss, to avoid over-fitting. This is\n        generally done by monitoring the generalization score on a validation\n        set. When available, it is activated through the parameter\n        ``early_stopping`` or by setting a positive :term:`n_iter_no_change`.\n\n    estimator instance\n        We sometimes use this terminology to distinguish an :term:`estimator`\n        class from a constructed instance. For example, in the following,\n        ``cls`` is an estimator class, while ``est1`` and ``est2`` are\n        instances::\n\n            cls = RandomForestClassifier\n            est1 = cls()\n            est2 = RandomForestClassifier()\n\n    examples\n        We try to give examples of basic usage for most functions and\n        classes in the API:\n\n        * as doctests in their docstrings (i.e. within the ``sklearn\/`` library\n          code itself).\n        * as examples in the :ref:`example gallery <general_examples>`\n          rendered (using `sphinx-gallery\n          <https:\/\/sphinx-gallery.readthedocs.io\/>`_) from scripts in the\n          ``examples\/`` directory, exemplifying key features or parameters\n          of the estimator\/function.  
These should also be referenced from the\n          User Guide.\n        * sometimes in the :ref:`User Guide <user_guide>` (built from ``doc\/``)\n          alongside a technical description of the estimator.\n\n    experimental\n        An experimental tool is already usable but its public API, such as\n        default parameter values or fitted attributes, is still subject to\n        change in future versions without the usual :term:`deprecation`\n        warning policy.\n\n    evaluation metric\n    evaluation metrics\n        Evaluation metrics give a measure of how well a model performs.  We may\n        use this term specifically to refer to the functions in :mod:`~sklearn.metrics`\n        (disregarding :mod:`~sklearn.metrics.pairwise`), as distinct from the\n        :term:`score` method and the :term:`scoring` API used in cross\n        validation. See :ref:`model_evaluation`.\n\n        These functions usually accept a ground truth (or the raw data\n        where the metric evaluates clustering without a ground truth) and a\n        prediction, be it the output of :term:`predict` (``y_pred``),\n        of :term:`predict_proba` (``y_proba``), or of an arbitrary score\n        function including :term:`decision_function` (``y_score``).\n        Functions are usually named to end with ``_score`` if a greater\n        score indicates a better model, and ``_loss`` if a lesser score\n        indicates a better model.  This diversity of interface motivates\n        the scoring API.\n\n        Note that some estimators can calculate metrics that are not included\n        in :mod:`~sklearn.metrics` and are estimator-specific, notably model\n        likelihoods.\n\n    estimator tags\n        Estimator tags describe certain capabilities of an estimator.  
This would\n        enable some runtime behaviors based on estimator inspection, but it\n        also allows each estimator to be tested for appropriate invariances\n        while being excepted from other :term:`common tests`.\n\n        Some aspects of estimator tags are currently determined through\n        the :term:`duck typing` of methods like ``predict_proba`` and through\n        some special attributes on estimator objects:\n\n        For more detailed info, see :ref:`estimator_tags`.\n\n    feature\n    features\n    feature vector\n        In the abstract, a feature is a function (in its mathematical sense)\n        mapping a sampled object to a numeric or categorical quantity.\n        \"Feature\" is also commonly used to refer to these quantities, being the\n        individual elements of a vector representing a sample. In a data\n        matrix, features are represented as columns: each column contains the\n        result of applying a feature function to a set of samples.\n\n        Elsewhere features are known as attributes, predictors, regressors, or\n        independent variables.\n\n        Nearly all estimators in scikit-learn assume that features are numeric,\n        finite and not missing, even when they have semantically distinct\n        domains and distributions (categorical, ordinal, count-valued,\n        real-valued, interval). See also :term:`categorical feature` and\n        :term:`missing values`.\n\n        ``n_features`` indicates the number of features in a dataset.\n\n    fitting\n        Calling :term:`fit` (or :term:`fit_transform`, :term:`fit_predict`,\n        etc.) on an estimator.\n\n    fitted\n        The state of an estimator after :term:`fitting`.\n\n        There is no conventional procedure for checking if an estimator\n        is fitted.  However, an estimator that is not fitted:\n\n        * should raise :class:`exceptions.NotFittedError` when a prediction\n          method (:term:`predict`, :term:`transform`, etc.) 
is called.\n          (:func:`utils.validation.check_is_fitted` is used internally\n          for this purpose.)\n        * should not have any :term:`attributes` beginning with an alphabetic\n          character and ending with an underscore. (Note that a descriptor for\n          the attribute may still be present on the class, but hasattr should\n          return False)\n\n    function\n        We provide ad hoc function interfaces for many algorithms, while\n        :term:`estimator` classes provide a more consistent interface.\n\n        In particular, Scikit-learn may provide a function interface that fits\n        a model to some data and returns the learnt model parameters, as in\n        :func:`linear_model.enet_path`.  For transductive models, this also\n        returns the embedding or cluster labels, as in\n        :func:`manifold.spectral_embedding` or :func:`cluster.dbscan`.  Many\n        preprocessing transformers also provide a function interface, akin to\n        calling :term:`fit_transform`, as in\n        :func:`preprocessing.maxabs_scale`.  Users should be careful to avoid\n        :term:`data leakage` when making use of these\n        ``fit_transform``-equivalent functions.\n\n        We do not have a strict policy about when to or when not to provide\n        function forms of estimators, but maintainers should consider\n        consistency with existing interfaces, and whether providing a function\n        would lead users astray from best practices (as regards data leakage,\n        etc.)\n\n    gallery\n        See :term:`examples`.\n\n    hyperparameter\n    hyper-parameter\n        See :term:`parameter`.\n\n    impute\n    imputation\n        Most machine learning algorithms require that their inputs have no\n        :term:`missing values`, and will not work if this requirement is\n        violated. 
Algorithms that attempt to fill in (or impute) missing values\n        are referred to as imputation algorithms.\n\n    indexable\n        An :term:`array-like`, :term:`sparse matrix`, pandas DataFrame or\n        sequence (usually a list).\n\n    induction\n    inductive\n        Inductive (contrasted with :term:`transductive`) machine learning\n        builds a model of some data that can then be applied to new instances.\n        Most estimators in Scikit-learn are inductive, having :term:`predict`\n        and\/or :term:`transform` methods.\n\n    joblib\n        A Python library (https:\/\/joblib.readthedocs.io) used in Scikit-learn to\n        facilitate simple parallelism and caching.  Joblib is oriented towards\n        efficiently working with numpy arrays, such as through use of\n        :term:`memory mapping`. See :ref:`parallelism` for more\n        information.\n\n    label indicator matrix\n    multilabel indicator matrix\n    multilabel indicator matrices\n        The format used to represent multilabel data, where each row of a 2d\n        array or sparse matrix corresponds to a sample, each column\n        corresponds to a class, and each element is 1 if the sample is labeled\n        with the class and 0 if not.\n\n    leakage\n    data leakage\n        A problem in cross validation where generalization performance can be\n        over-estimated since knowledge of the test data was inadvertently\n        included in training a model.  This is a risk, for instance, when\n        applying a :term:`transformer` to the entirety of a dataset rather\n        than each training portion in a cross validation split.\n\n        We aim to provide interfaces (such as :mod:`~sklearn.pipeline` and\n        :mod:`~sklearn.model_selection`) that shield the user from data leakage.\n\n    memmapping\n    memory map\n    memory mapping\n        A memory efficiency strategy that keeps data on disk rather than\n        copying it into main memory.  
Memory maps can be created for arrays\n        that can be read, written, or both, using :obj:`numpy.memmap`. When\n        using :term:`joblib` to parallelize operations in Scikit-learn, it\n        may automatically memmap large arrays to reduce memory duplication\n        overhead in multiprocessing.\n\n    missing values\n        Most Scikit-learn estimators do not work with missing values. When they\n        do (e.g. in :class:`impute.SimpleImputer`), NaN is the preferred\n        representation of missing values in float arrays.  If the array has\n        integer dtype, NaN cannot be represented. For this reason, we support\n        specifying another ``missing_values`` value when :term:`imputation` or\n        learning can be performed in integer space.\n        :term:`Unlabeled data <unlabeled data>` is a special case of missing\n        values in the :term:`target`.\n\n    ``n_features``\n        The number of :term:`features`.\n\n    ``n_outputs``\n        The number of :term:`outputs` in the :term:`target`.\n\n    ``n_samples``\n        The number of :term:`samples`.\n\n    ``n_targets``\n        Synonym for :term:`n_outputs`.\n\n    narrative docs\n    narrative documentation\n        An alias for :ref:`User Guide <user_guide>`, i.e. documentation written\n        in ``doc\/modules\/``. 
Unlike the :ref:`API reference <api_ref>` provided\n        through docstrings, the User Guide aims to:\n\n        * group tools provided by Scikit-learn together thematically or in\n          terms of usage;\n        * motivate why someone would use each particular tool, often through\n          comparison;\n        * provide both intuitive and technical descriptions of tools;\n        * provide or link to :term:`examples` of using key features of a\n          tool.\n\n    np\n        A shorthand for Numpy due to the conventional import statement::\n\n            import numpy as np\n\n    online learning\n        Where a model is iteratively updated by receiving each batch of ground\n        truth :term:`targets` soon after making predictions on corresponding\n        batch of data.  Intrinsically, the model must be usable for prediction\n        after each batch. See :term:`partial_fit`.\n\n    out-of-core\n        An efficiency strategy where not all the data is stored in main memory\n        at once, usually by performing learning on batches of data. See\n        :term:`partial_fit`.\n\n    outputs\n        Individual scalar\/categorical variables per sample in the\n        :term:`target`.  For example, in multilabel classification each\n        possible label corresponds to a binary output. Also called *responses*,\n        *tasks* or *targets*.\n        See :term:`multiclass multioutput` and :term:`continuous multioutput`.\n\n    pair\n        A tuple of length two.\n\n    parameter\n    parameters\n    param\n    params\n        We mostly use *parameter* to refer to the aspects of an estimator that\n        can be specified in its construction. 
For example, ``max_depth`` and\n        ``random_state`` are parameters of :class:`~ensemble.RandomForestClassifier`.\n        Parameters to an estimator's constructor are stored unmodified as\n        attributes on the estimator instance, and conventionally start with an\n        alphabetic character and end with an alphanumeric character.  Each\n        estimator's constructor parameters are described in the estimator's\n        docstring.\n\n        We do not use parameters in the statistical sense, where parameters are\n        values that specify a model and can be estimated from data. What we\n        call parameters might be what statisticians call hyperparameters to the\n        model: aspects for configuring model structure that are often not\n        directly learnt from data.  However, our parameters are also used to\n        prescribe modeling operations that do not affect the learnt model, such\n        as :term:`n_jobs` for controlling parallelism.\n\n        When talking about the parameters of a :term:`meta-estimator`, we may\n        also be including the parameters of the estimators wrapped by the\n        meta-estimator.  Ordinarily, these nested parameters are denoted by\n        using a :term:`double underscore` (``__``) to separate between the\n        estimator-as-parameter and its parameter.  Thus ``clf =\n        BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=3))``\n        has a deep parameter ``estimator__max_depth`` with value ``3``,\n        which is accessible with ``clf.estimator.max_depth`` or\n        ``clf.get_params()['estimator__max_depth']``.\n\n        The list of parameters and their current values can be retrieved from\n        an :term:`estimator instance` using its :term:`get_params` method.\n\n        Between construction and fitting, parameters may be modified using\n        :term:`set_params`.  
To enable this, parameters are not ordinarily\n        validated or altered when the estimator is constructed, or when each\n        parameter is set. Parameter validation is performed when :term:`fit` is\n        called.\n\n        Common parameters are listed :ref:`below <glossary_parameters>`.\n\n    pairwise metric\n    pairwise metrics\n\n        In its broad sense, a pairwise metric defines a function for measuring\n        similarity or dissimilarity between two samples (with each ordinarily\n        represented as a :term:`feature vector`).  We particularly provide\n        implementations of distance metrics (as well as improper metrics like\n        Cosine Distance) through :func:`metrics.pairwise_distances`, and of\n        kernel functions (a constrained class of similarity functions) in\n        :func:`metrics.pairwise.pairwise_kernels`.  These can compute pairwise distance\n        matrices that are symmetric and hence store data redundantly.\n\n        See also :term:`precomputed` and :term:`metric`.\n\n        Note that for most distance metrics, we rely on implementations from\n        :mod:`scipy.spatial.distance`, but may reimplement for efficiency in\n        our context. The :class:`metrics.DistanceMetric` interface is used to implement\n        distance metrics for integration with efficient neighbors search.\n\n    pd\n        A shorthand for `Pandas <https:\/\/pandas.pydata.org>`_ due to the\n        conventional import statement::\n\n            import pandas as pd\n\n    precomputed\n        Where algorithms rely on :term:`pairwise metrics`, and can be computed\n        from pairwise metrics alone, we often allow the user to specify that\n        the :term:`X` provided is already in the pairwise (dis)similarity\n        space, rather than in a feature space.  
That is, when passed to\n        :term:`fit`, it is a square, symmetric matrix, with each vector\n        indicating (dis)similarity to every sample, and when passed to\n        prediction\/transformation methods, each row corresponds to a testing\n        sample and each column to a training sample.\n\n        Use of precomputed X is usually indicated by setting a ``metric``,\n        ``affinity`` or ``kernel`` parameter to the string 'precomputed'. If\n        this is the case, then the estimator should set the `pairwise`\n        estimator tag as True.\n\n    rectangular\n        Data that can be represented as a matrix with :term:`samples` on the\n        first axis and a fixed, finite set of :term:`features` on the second\n        is called rectangular.\n\n        This term excludes samples with non-vectorial structures, such as text,\n        an image of arbitrary size, a time series of arbitrary length, a set of\n        vectors, etc. The purpose of a :term:`vectorizer` is to produce\n        rectangular forms of such data.\n\n    sample\n    samples\n        We usually use this term as a noun to indicate a single feature vector.\n        Elsewhere a sample is called an instance, data point, or observation.\n        ``n_samples`` indicates the number of samples in a dataset, being the\n        number of rows in a data array :term:`X`.\n        Note that this definition is standard in machine learning and deviates from\n        statistics where it means *a set of individuals or objects collected or\n        selected*.\n\n    sample property\n    sample properties\n        A sample property is data for each sample (e.g. an array of length\n        n_samples) passed to an estimator method or a similar function,\n        alongside but distinct from the :term:`features` (``X``) and\n        :term:`target` (``y``). 
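A sample property such as a per-sample weight is passed to ``fit`` alongside ``X`` and ``y``. A minimal sketch, using :class:`~linear_model.LogisticRegression` (whose ``fit`` accepts ``sample_weight``) with arbitrary toy data:

```python
# Minimal sketch: a sample property (one value per sample) is passed to
# an estimator method alongside, but distinct from, X and y.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
# One weight per sample, aligned with the rows of X.
sample_weight = np.array([1.0, 1.0, 2.0, 2.0])

clf = LogisticRegression().fit(X, y, sample_weight=sample_weight)
assert list(clf.classes_) == [0, 1]
```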
The most prominent example is
        :term:`sample_weight`; see others at :ref:`glossary_sample_props`.

        As of version 0.19 we do not have a consistent approach to handling
        sample properties and their routing in :term:`meta-estimators`, though
        a ``fit_params`` parameter is often used.

    scikit-learn-contrib
        A venue for publishing Scikit-learn-compatible libraries that are
        broadly authorized by the core developers and the contrib community,
        but not maintained by the core developer team.
        See https://scikit-learn-contrib.github.io.

    scikit-learn enhancement proposals
    SLEP
    SLEPs
        Changes to the API principles and changes to dependencies or supported
        versions happen via a :ref:`SLEP <slep>` and follow the
        decision-making process outlined in :ref:`governance`.
        For all votes, a proposal must have been made public and discussed
        before the vote. Such a proposal must be a consolidated document, in
        the form of a "Scikit-Learn Enhancement Proposal" (SLEP), rather than
        a long discussion on an issue. A SLEP must be submitted as a
        pull-request to
        `enhancement proposals <https://scikit-learn-enhancement-proposals.readthedocs.io>`_ using the
        `SLEP template <https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep_template.html>`_.

    semi-supervised
    semi-supervised learning
    semisupervised
        Learning where the expected prediction (label or ground truth) is only
        available for some samples provided as training data when
        :term:`fitting` the model.
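A minimal sketch of semi-supervised fitting, using :class:`~semi_supervised.LabelPropagation` and the conventional ``-1`` marker for unlabeled samples (the toy data and kernel defaults here are illustrative assumptions):

```python
# Minimal sketch: unlabeled samples carry the conventional label -1,
# and a semi-supervised estimator infers labels for them during fit.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

X = np.array([[0.0], [0.5], [8.0], [9.0]])
y = np.array([0, -1, 1, -1])  # -1 marks the unlabeled samples

model = LabelPropagation().fit(X, y)
# transduction_ holds the labels assigned to all training samples;
# the originally labeled samples keep their labels.
assert set(model.transduction_) <= {0, 1}
assert model.transduction_[0] == 0
```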
We conventionally apply the label ``-1``
        to :term:`unlabeled` samples in semi-supervised classification.

    sparse matrix
    sparse graph
        A representation of two-dimensional numeric data that is more memory
        efficient than the corresponding dense numpy array where almost all
        elements are zero. We use the :mod:`scipy.sparse` framework, which
        provides several underlying sparse data representations, or *formats*.
        Some formats are more efficient than others for particular tasks, and
        when a particular format provides especial benefit, we try to document
        this fact in Scikit-learn parameter descriptions.

        Some sparse matrix formats (notably CSR, CSC, COO and LIL) distinguish
        between *implicit* and *explicit* zeros. Explicit zeros are stored
        (i.e. they consume memory in a ``data`` array) in the data structure,
        while implicit zeros correspond to every element not otherwise defined
        in explicit storage.

        Two semantics for sparse matrices are used in Scikit-learn:

        matrix semantics
            The sparse matrix is interpreted as an array with implicit and
            explicit zeros being interpreted as the number 0.  This is the
            interpretation most often adopted, e.g. when sparse matrices
            are used for feature matrices or :term:`multilabel indicator
            matrices`.
        graph semantics
            As with :mod:`scipy.sparse.csgraph`, explicit zeros are
            interpreted as the number 0, but implicit zeros indicate a masked
            or absent value, such as the absence of an edge between two
            vertices of a graph, where an explicit value indicates an edge's
            weight. This interpretation is adopted to represent connectivity
            in clustering, in representations of nearest neighborhoods
            (e.g. 
:func:`neighbors.kneighbors_graph`), and for precomputed
            distance representation where only distances in the neighborhood
            of each point are required.

        When working with sparse matrices, we assume that the matrix is sparse
        for a good reason, and avoid writing code that densifies a
        user-provided sparse matrix, instead maintaining sparsity or raising
        an error if not possible (i.e. if an estimator does not / cannot
        support sparse matrices).

    stateless
        An estimator is stateless if it does not store any information that is
        obtained during :term:`fit`. This information can be either parameters
        learned during :term:`fit` or statistics computed from the
        training data. An estimator is stateless if it has no :term:`attributes`
        apart from ones set in `__init__`. Calling :term:`fit` for these
        estimators will only validate the public :term:`attributes` passed
        in `__init__`.

    supervised
    supervised learning
        Learning where the expected prediction (label or ground truth) is
        available for each sample when :term:`fitting` the model, provided as
        :term:`y`.  This is the approach taken in a :term:`classifier` or
        :term:`regressor` among other estimators.

    target
    targets
        The *dependent variable* in :term:`supervised` (and
        :term:`semisupervised`) learning, passed as :term:`y` to an estimator's
        :term:`fit` method.  Also known as the *outcome variable*, *response
        variable*, *ground truth* or *label*. Scikit-learn works with targets
        that have minimal structure: a class from a finite set, a finite
        real-valued number, multiple classes, or multiple numbers. 
See :ref:`glossary_target_types`.\n\n    transduction\n    transductive\n        A transductive (contrasted with :term:`inductive`) machine learning\n        method is designed to model a specific dataset, but not to apply that\n        model to unseen data.  Examples include :class:`manifold.TSNE`,\n        :class:`cluster.AgglomerativeClustering` and\n        :class:`neighbors.LocalOutlierFactor`.\n\n    unlabeled\n    unlabeled data\n        Samples with an unknown ground truth when fitting; equivalently,\n        :term:`missing values` in the :term:`target`.  See also\n        :term:`semisupervised` and :term:`unsupervised` learning.\n\n    unsupervised\n    unsupervised learning\n        Learning where the expected prediction (label or ground truth) is not\n        available for each sample when :term:`fitting` the model, as in\n        :term:`clusterers` and :term:`outlier detectors`.  Unsupervised\n        estimators ignore any :term:`y` passed to :term:`fit`.\n\n.. _glossary_estimator_types:\n\nClass APIs and Estimator Types\n==============================\n\n.. glossary::\n\n    classifier\n    classifiers\n        A :term:`supervised` (or :term:`semi-supervised`) :term:`predictor`\n        with a finite set of discrete possible output values.\n\n        A classifier supports modeling some of :term:`binary`,\n        :term:`multiclass`, :term:`multilabel`, or :term:`multiclass\n        multioutput` targets.  
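A minimal sketch of how a classifier is used and distinguished from other estimators, here with :class:`~tree.DecisionTreeClassifier` standing in for any classifier (the toy data is an arbitrary assumption):

```python
# Minimal sketch of the classifier contract: fit/predict/score,
# a classes_ attribute after fitting, and is_classifier for detection.
from sklearn.base import is_classifier
from sklearn.tree import DecisionTreeClassifier

X = [[0.0], [1.0], [2.0], [3.0]]
y = ["a", "a", "b", "b"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
assert is_classifier(clf)                # distinguishes classifiers
assert list(clf.classes_) == ["a", "b"]  # stored after fitting
assert clf.score(X, y) == 1.0            # fit/predict/score interface
```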
Within scikit-learn, all classifiers support
        multi-class classification, defaulting to using a one-vs-rest
        strategy over the binary classification problem.

        Classifiers must store a :term:`classes_` attribute after fitting,
        and inherit from :class:`base.ClassifierMixin`, which sets
        their corresponding :term:`estimator tags` correctly.

        A classifier can be distinguished from other estimators with
        :func:`~base.is_classifier`.

        A classifier must implement:

        * :term:`fit`
        * :term:`predict`
        * :term:`score`

        It may also be appropriate to implement :term:`decision_function`,
        :term:`predict_proba` and :term:`predict_log_proba`.

    clusterer
    clusterers
        An :term:`unsupervised` :term:`predictor` with a finite set of discrete
        output values.

        A clusterer usually stores :term:`labels_` after fitting, and must do
        so if it is :term:`transductive`.

        A clusterer must implement:

        * :term:`fit`
        * :term:`fit_predict` if :term:`transductive`
        * :term:`predict` if :term:`inductive`

    density estimator
        An :term:`unsupervised` estimation of the input probability density
        function. Commonly used techniques are:

        * :ref:`kernel_density` - uses a kernel function, controlled by the
          bandwidth parameter, to represent density;
        * :ref:`Gaussian mixture <mixture>` - uses a mixture of Gaussian
          models to represent density.

    estimator
    estimators
        An object which manages the estimation and decoding of a model. 
The\n        model is estimated as a deterministic function of:\n\n        * :term:`parameters` provided in object construction or with\n          :term:`set_params`;\n        * the global :mod:`numpy.random` random state if the estimator's\n          :term:`random_state` parameter is set to None; and\n        * any data or :term:`sample properties` passed to the most recent\n          call to :term:`fit`, :term:`fit_transform` or :term:`fit_predict`,\n          or data similarly passed in a sequence of calls to\n          :term:`partial_fit`.\n\n        The estimated model is stored in public and private :term:`attributes`\n        on the estimator instance, facilitating decoding through prediction\n        and transformation methods.\n\n        Estimators must provide a :term:`fit` method, and should provide\n        :term:`set_params` and :term:`get_params`, although these are usually\n        provided by inheritance from :class:`base.BaseEstimator`.\n\n        The core functionality of some estimators may also be available as a\n        :term:`function`.\n\n    feature extractor\n    feature extractors\n        A :term:`transformer` which takes input where each sample is not\n        represented as an :term:`array-like` object of fixed length, and\n        produces an :term:`array-like` object of :term:`features` for each\n        sample (and thus a 2-dimensional array-like for a set of samples).  
In\n        other words, it (lossily) maps a non-rectangular data representation\n        into :term:`rectangular` data.\n\n        Feature extractors must implement at least:\n\n        * :term:`fit`\n        * :term:`transform`\n        * :term:`get_feature_names_out`\n\n    meta-estimator\n    meta-estimators\n    metaestimator\n    metaestimators\n        An :term:`estimator` which takes another estimator as a parameter.\n        Examples include :class:`pipeline.Pipeline`,\n        :class:`model_selection.GridSearchCV`,\n        :class:`feature_selection.SelectFromModel` and\n        :class:`ensemble.BaggingClassifier`.\n\n        In a meta-estimator's :term:`fit` method, any contained estimators\n        should be :term:`cloned` before they are fit (although FIXME: Pipeline\n        and FeatureUnion do not do this currently). An exception to this is\n        that an estimator may explicitly document that it accepts a pre-fitted\n        estimator (e.g. using ``prefit=True`` in\n        :class:`feature_selection.SelectFromModel`). One known issue with this\n        is that the pre-fitted estimator will lose its model if the\n        meta-estimator is cloned.  A meta-estimator should have ``fit`` called\n        before prediction, even if all contained estimators are pre-fitted.\n\n        In cases where a meta-estimator's primary behaviors (e.g.\n        :term:`predict` or :term:`transform` implementation) are functions of\n        prediction\/transformation methods of the provided *base estimator* (or\n        multiple base estimators), a meta-estimator should provide at least the\n        standard methods provided by the base estimator.  It may not be\n        possible to identify which methods are provided by the underlying\n        estimator until the meta-estimator has been :term:`fitted` (see also\n        :term:`duck typing`), for which\n        :func:`utils.metaestimators.available_if` may help.  
It\n        should also provide (or modify) the :term:`estimator tags` and\n        :term:`classes_` attribute provided by the base estimator.\n\n        Meta-estimators should be careful to validate data as minimally as\n        possible before passing it to an underlying estimator. This saves\n        computation time, and may, for instance, allow the underlying\n        estimator to easily work with data that is not :term:`rectangular`.\n\n    outlier detector\n    outlier detectors\n        An :term:`unsupervised` binary :term:`predictor` which models the\n        distinction between core and outlying samples.\n\n        Outlier detectors must implement:\n\n        * :term:`fit`\n        * :term:`fit_predict` if :term:`transductive`\n        * :term:`predict` if :term:`inductive`\n\n        Inductive outlier detectors may also implement\n        :term:`decision_function` to give a normalized inlier score where\n        outliers have score below 0.  :term:`score_samples` may provide an\n        unnormalized score per sample.\n\n    predictor\n    predictors\n        An :term:`estimator` supporting :term:`predict` and\/or\n        :term:`fit_predict`. 
This encompasses :term:`classifier`,\n        :term:`regressor`, :term:`outlier detector` and :term:`clusterer`.\n\n        In statistics, \"predictors\" refers to :term:`features`.\n\n    regressor\n    regressors\n        A :term:`supervised` (or :term:`semi-supervised`) :term:`predictor`\n        with :term:`continuous` output values.\n\n        Regressors inherit from :class:`base.RegressorMixin`, which sets their\n        :term:`estimator tags` correctly.\n\n        A regressor can be distinguished from other estimators with\n        :func:`~base.is_regressor`.\n\n        A regressor must implement:\n\n        * :term:`fit`\n        * :term:`predict`\n        * :term:`score`\n\n    transformer\n    transformers\n        An estimator supporting :term:`transform` and\/or :term:`fit_transform`.\n        A purely :term:`transductive` transformer, such as\n        :class:`manifold.TSNE`, may not implement ``transform``.\n\n    vectorizer\n    vectorizers\n        See :term:`feature extractor`.\n\nThere are further APIs specifically related to a small family of estimators,\nsuch as:\n\n.. glossary::\n\n    cross-validation splitter\n    CV splitter\n    cross-validation generator\n        A non-estimator family of classes used to split a dataset into a\n        sequence of train and test portions (see :ref:`cross_validation`),\n        by providing :term:`split` and :term:`get_n_splits` methods.\n        Note that unlike estimators, these do not have :term:`fit` methods\n        and do not provide :term:`set_params` or :term:`get_params`.\n        Parameter validation may be performed in ``__init__``.\n\n    cross-validation estimator\n        An estimator that has built-in cross-validation capabilities to\n        automatically select the best hyper-parameters (see the :ref:`User\n        Guide <grid_search>`). 
Some examples of cross-validation estimators
        are :class:`ElasticNetCV <linear_model.ElasticNetCV>` and
        :class:`LogisticRegressionCV <linear_model.LogisticRegressionCV>`.
        Cross-validation estimators are named `EstimatorCV` and tend to be
        roughly equivalent to `GridSearchCV(Estimator(), ...)`. The
        advantage of using a cross-validation estimator over the canonical
        :term:`estimator` class along with :ref:`grid search <grid_search>` is
        that they can take advantage of warm-starting by reusing precomputed
        results in the previous steps of the cross-validation process. This
        generally leads to speed improvements. An exception is the
        :class:`RidgeCV <linear_model.RidgeCV>` class, which can instead
        perform efficient Leave-One-Out (LOO) CV. By default, all these
        estimators, apart from :class:`RidgeCV <linear_model.RidgeCV>` with an
        LOO-CV, will be refitted on the full training dataset after finding the
        best combination of hyper-parameters.

    scorer
        A non-estimator callable object which evaluates an estimator on given
        test data, returning a number. Unlike :term:`evaluation metrics`,
        a greater returned number must correspond with a *better* score.
        See :ref:`scoring_parameter`.

Further examples:

* :class:`metrics.DistanceMetric`
* :class:`gaussian_process.kernels.Kernel`
* ``tree.Criterion``

.. _glossary_metadata_routing:

Metadata Routing
================

.. glossary::

    consumer
        An object which consumes :term:`metadata`. This object is usually an
        :term:`estimator`, a :term:`scorer`, or a :term:`CV splitter`. Consuming
        metadata means using it in calculations, e.g. using
        :term:`sample_weight` to calculate a certain type of score. 
Being a\n        consumer doesn't mean that the object always receives a certain\n        metadata, rather it means it can use it if it is provided.\n\n    metadata\n        Data which is related to the given :term:`X` and :term:`y` data, but\n        is not directly a part of the data, e.g. :term:`sample_weight` or\n        :term:`groups`, and is passed along to different objects and methods,\n        e.g. to a :term:`scorer` or a :term:`CV splitter`.\n\n    router\n        An object which routes metadata to :term:`consumers <consumer>`. This\n        object is usually a :term:`meta-estimator`, e.g.\n        :class:`~pipeline.Pipeline` or :class:`~model_selection.GridSearchCV`.\n        Some routers can also be a consumer. This happens for example when a\n        meta-estimator uses the given :term:`groups`, and it also passes it\n        along to some of its sub-objects, such as a :term:`CV splitter`.\n\nPlease refer to :ref:`Metadata Routing User Guide <metadata_routing>` for more\ninformation.\n\n.. _glossary_target_types:\n\nTarget Types\n============\n\n.. glossary::\n\n    binary\n        A classification problem consisting of two classes.  A binary target\n        may  be represented as for a :term:`multiclass` problem but with only two\n        labels.  A binary decision function is represented as a 1d array.\n\n        Semantically, one class is often considered the \"positive\" class.\n        Unless otherwise specified (e.g. using :term:`pos_label` in\n        :term:`evaluation metrics`), we consider the class label with the\n        greater value (numerically or lexicographically) as the positive class:\n        of labels [0, 1], 1 is the positive class; of [1, 2], 2 is the positive\n        class; of ['no', 'yes'], 'yes' is the positive class; of ['no', 'YES'],\n        'no' is the positive class.  
This affects the output of\n        :term:`decision_function`, for instance.\n\n        Note that a dataset sampled from a multiclass ``y`` or a continuous\n        ``y`` may appear to be binary.\n\n        :func:`~utils.multiclass.type_of_target` will return 'binary' for\n        binary input, or a similar array with only a single class present.\n\n    continuous\n        A regression problem where each sample's target is a finite floating\n        point number represented as a 1-dimensional array of floats (or\n        sometimes ints).\n\n        :func:`~utils.multiclass.type_of_target` will return 'continuous' for\n        continuous input, but if the data is all integers, it will be\n        identified as 'multiclass'.\n\n    continuous multioutput\n    continuous multi-output\n    multioutput continuous\n    multi-output continuous\n        A regression problem where each sample's target consists of ``n_outputs``\n        :term:`outputs`, each one a finite floating point number, for a\n        fixed int ``n_outputs > 1`` in a particular dataset.\n\n        Continuous multioutput targets are represented as multiple\n        :term:`continuous` targets, horizontally stacked into an array\n        of shape ``(n_samples, n_outputs)``.\n\n        :func:`~utils.multiclass.type_of_target` will return\n        'continuous-multioutput' for continuous multioutput input, but if the\n        data is all integers, it will be identified as\n        'multiclass-multioutput'.\n\n    multiclass\n    multi-class\n        A classification problem consisting of more than two classes.  A\n        multiclass target may be represented as a 1-dimensional array of\n        strings or integers.  A 2d column vector of integers (i.e. 
a\n        single output in :term:`multioutput` terms) is also accepted.\n\n        We do not officially support other orderable, hashable objects as class\n        labels, even if estimators may happen to work when given classification\n        targets of such type.\n\n        For semi-supervised classification, :term:`unlabeled` samples should\n        have the special label -1 in ``y``.\n\n        Within scikit-learn, all estimators supporting binary classification\n        also support multiclass classification, using One-vs-Rest by default.\n\n        A :class:`preprocessing.LabelEncoder` helps to canonicalize multiclass\n        targets as integers.\n\n        :func:`~utils.multiclass.type_of_target` will return 'multiclass' for\n        multiclass input. The user may also want to handle 'binary' input\n        identically to 'multiclass'.\n\n    multiclass multioutput\n    multi-class multi-output\n    multioutput multiclass\n    multi-output multi-class\n        A classification problem where each sample's target consists of\n        ``n_outputs`` :term:`outputs`, each a class label, for a fixed int\n        ``n_outputs > 1`` in a particular dataset.  Each output has a\n        fixed set of available classes, and each sample is labeled with a\n        class for each output. An output may be binary or multiclass, and in\n        the case where all outputs are binary, the target is\n        :term:`multilabel`.\n\n        Multiclass multioutput targets are represented as multiple\n        :term:`multiclass` targets, horizontally stacked into an array\n        of shape ``(n_samples, n_outputs)``.\n\n        XXX: For simplicity, we may not always support string class labels\n        for multiclass multioutput, and integer class labels should be used.\n\n        :mod:`~sklearn.multioutput` provides estimators which estimate multi-output\n        problems using multiple single-output estimators.  
This may not fully\n        account for dependencies among the different outputs, which methods\n        natively handling the multioutput case (e.g. decision trees, nearest\n        neighbors, neural networks) may do better.\n\n        :func:`~utils.multiclass.type_of_target` will return\n        'multiclass-multioutput' for multiclass multioutput input.\n\n    multilabel\n    multi-label\n        A :term:`multiclass multioutput` target where each output is\n        :term:`binary`.  This may be represented as a 2d (dense) array or\n        sparse matrix of integers, such that each column is a separate binary\n        target, where positive labels are indicated with 1 and negative labels\n        are usually -1 or 0.  Sparse multilabel targets are not supported\n        everywhere that dense multilabel targets are supported.\n\n        Semantically, a multilabel target can be thought of as a set of labels\n        for each sample.  While not used internally,\n        :class:`preprocessing.MultiLabelBinarizer` is provided as a utility to\n        convert from a list of sets representation to a 2d array or sparse\n        matrix. One-hot encoding a multiclass target with\n        :class:`preprocessing.LabelBinarizer` turns it into a multilabel\n        problem.\n\n        :func:`~utils.multiclass.type_of_target` will return\n        'multilabel-indicator' for multilabel input, whether sparse or dense.\n\n    multioutput\n    multi-output\n        A target where each sample has multiple classification\/regression\n        labels. See :term:`multiclass multioutput` and :term:`continuous\n        multioutput`. We do not currently support modelling mixed\n        classification and regression targets.\n\n.. _glossary_methods:\n\nMethods\n=======\n\n.. 
glossary::\n\n    ``decision_function``\n        In a fitted :term:`classifier` or :term:`outlier detector`, predicts a\n        \"soft\" score for each sample in relation to each class, rather than the\n        \"hard\" categorical prediction produced by :term:`predict`.  Its input\n        is usually only some observed data, :term:`X`.\n\n        If the estimator was not already :term:`fitted`, calling this method\n        should raise a :class:`exceptions.NotFittedError`.\n\n        Output conventions:\n\n        binary classification\n            A 1-dimensional array, where values strictly greater than zero\n            indicate the positive class (i.e. the last class in\n            :term:`classes_`).\n        multiclass classification\n            A 2-dimensional array, where the row-wise arg-maximum is the\n            predicted class.  Columns are ordered according to\n            :term:`classes_`.\n        multilabel classification\n            Scikit-learn is inconsistent in its representation of :term:`multilabel`\n            decision functions. It may be represented one of two ways:\n\n            - List of 2d arrays, each array of shape: (`n_samples`, 2), like in\n              multiclass multioutput. List is of length `n_labels`.\n\n            - Single 2d array of shape (`n_samples`, `n_labels`), with each\n              'column' in the array corresponding to the individual binary\n              classification decisions. 
This is identical to the\n              multiclass classification format, though its semantics differ: it\n              should be interpreted, like in the binary case, by thresholding at\n              0.\n\n        multioutput classification\n            A list of 2d arrays, corresponding to each multiclass decision\n            function.\n        outlier detection\n            A 1-dimensional array, where a value greater than or equal to zero\n            indicates an inlier.\n\n    ``fit``\n        The ``fit`` method is provided on every estimator. It usually takes some\n        :term:`samples` ``X``, :term:`targets` ``y`` if the model is supervised,\n        and potentially other :term:`sample properties` such as\n        :term:`sample_weight`.  It should:\n\n        * clear any prior :term:`attributes` stored on the estimator, unless\n          :term:`warm_start` is used;\n        * validate and interpret any :term:`parameters`, ideally raising an\n          error if invalid;\n        * validate the input data;\n        * estimate and store model attributes from the estimated parameters and\n          provided data; and\n        * return the now :term:`fitted` estimator to facilitate method\n          chaining.\n\n        :ref:`glossary_target_types` describes possible formats for ``y``.\n\n    ``fit_predict``\n        Used especially for :term:`unsupervised`, :term:`transductive`\n        estimators, this fits the model and returns the predictions (similar to\n        :term:`predict`) on the training data. In clusterers, these predictions\n        are also stored in the :term:`labels_` attribute, and the output of\n        ``.fit_predict(X)`` is usually equivalent to ``.fit(X).predict(X)``.\n        The parameters to ``fit_predict`` are the same as those to ``fit``.\n\n    ``fit_transform``\n        A method on :term:`transformers` which fits the estimator and returns\n        the transformed training data. 
It takes parameters as in :term:`fit`\n        and its output should have the same shape as calling ``.fit(X,\n        ...).transform(X)``. There are nonetheless rare cases where\n        ``.fit_transform(X, ...)`` and ``.fit(X, ...).transform(X)`` do not\n        return the same value, wherein training data needs to be handled\n        differently (due to model blending in stacked ensembles, for instance;\n        such cases should be clearly documented).\n        :term:`Transductive <transductive>` transformers may also provide\n        ``fit_transform`` but not :term:`transform`.\n\n        One reason to implement ``fit_transform`` is that performing ``fit``\n        and ``transform`` separately would be less efficient than together.\n        :class:`base.TransformerMixin` provides a default implementation,\n        providing a consistent interface across transformers where\n        ``fit_transform`` is or is not specialized.\n\n        In :term:`inductive` learning -- where the goal is to learn a\n        generalized model that can be applied to new data -- users should be\n        careful not to apply ``fit_transform`` to the entirety of a dataset\n        (i.e. training and test data together) before further modelling, as\n        this results in :term:`data leakage`.\n\n    ``get_feature_names_out``\n        Primarily for :term:`feature extractors`, but also used for other\n        transformers to provide string names for each column in the output of\n        the estimator's :term:`transform` method.  It outputs an array of\n        strings and may take an array-like of strings as input, corresponding\n        to the names of input columns from which output column names can\n        be generated.  If `input_features` is not passed in, then the\n        `feature_names_in_` attribute will be used. 
If the\n        `feature_names_in_` attribute is not defined, then the\n        input names are named `[x0, x1, ..., x(n_features_in_ - 1)]`.\n\n    ``get_n_splits``\n        On a :term:`CV splitter` (not an estimator), returns the number of\n        elements one would get if iterating through the return value of\n        :term:`split` given the same parameters.  Takes the same parameters as\n        split.\n\n    ``get_params``\n        Gets all :term:`parameters`, and their values, that can be set using\n        :term:`set_params`.  A parameter ``deep`` can be used: when set to\n        False, only the parameters whose names do not include ``__`` are\n        returned, i.e. those not due to indirection via contained estimators.\n\n        Most estimators adopt the definition from :class:`base.BaseEstimator`,\n        which simply adopts the parameters defined for ``__init__``.\n        :class:`pipeline.Pipeline`, among others, reimplements ``get_params``\n        to declare the estimators named in its ``steps`` parameters as\n        themselves being parameters.\n\n    ``partial_fit``\n        Facilitates fitting an estimator in an online fashion.  Unlike ``fit``,\n        repeatedly calling ``partial_fit`` does not clear the model, but\n        updates it with the data provided. The portion of data\n        provided to ``partial_fit`` may be called a mini-batch.\n        Each mini-batch must be of consistent shape, etc. In iterative\n        estimators, ``partial_fit`` often only performs a single iteration.\n\n        ``partial_fit`` may also be used for :term:`out-of-core` learning,\n        although usually limited to the case where learning can be performed\n        online, i.e. 
the model is usable after each ``partial_fit`` and there\n        is no separate processing needed to finalize the model.\n        :class:`cluster.Birch` introduces the convention that calling\n        ``partial_fit(X)`` will produce a model that is not finalized, but the\n        model can be finalized by calling ``partial_fit()`` i.e. without\n        passing a further mini-batch.\n\n        Generally, estimator parameters should not be modified between calls\n        to ``partial_fit``, although ``partial_fit`` should validate them\n        as well as the new mini-batch of data.  In contrast, ``warm_start``\n        is used to repeatedly fit the same estimator with the same data\n        but varying parameters.\n\n        Like ``fit``, ``partial_fit`` should return the estimator object.\n\n        To clear the model, a new estimator should be constructed, for instance\n        with :func:`base.clone`.\n\n        NOTE: Using ``partial_fit`` after ``fit`` results in undefined behavior.\n\n    ``predict``\n        Makes a prediction for each sample, usually only taking :term:`X` as\n        input (but see under regressor output conventions below). In a\n        :term:`classifier` or :term:`regressor`, this prediction is in the same\n        target space used in fitting (e.g. one of {'red', 'amber', 'green'} if\n        the ``y`` in fitting consisted of these strings).  Despite this, even\n        when ``y`` passed to :term:`fit` is a list or other array-like, the\n        output of ``predict`` should always be an array or sparse matrix. 
In a\n        :term:`clusterer` or :term:`outlier detector` the prediction is an\n        integer.\n\n        If the estimator was not already :term:`fitted`, calling this method\n        should raise a :class:`exceptions.NotFittedError`.\n\n        Output conventions:\n\n        classifier\n            An array of shape ``(n_samples,)`` or ``(n_samples, n_outputs)``.\n            :term:`Multilabel <multilabel>` data may be represented as a sparse\n            matrix if a sparse matrix was used in fitting. Each element should\n            be one of the values in the classifier's :term:`classes_`\n            attribute.\n\n        clusterer\n            An array of shape ``(n_samples,)`` where each value is from 0 to\n            ``n_clusters - 1`` if the corresponding sample is clustered,\n            and -1 if the sample is not clustered, as in\n            :func:`cluster.dbscan`.\n\n        outlier detector\n            An array of shape ``(n_samples,)`` where each value is -1 for an\n            outlier and 1 otherwise.\n\n        regressor\n            A numeric array of shape ``(n_samples,)``, usually float64.\n            Some regressors have extra options in their ``predict`` method,\n            allowing them to return standard deviation (``return_std=True``)\n            or covariance (``return_cov=True``) relative to the predicted\n            value.  In this case, the return value is a tuple of arrays\n            corresponding to (prediction mean, std, cov) as required.\n\n    ``predict_log_proba``\n        The natural logarithm of the output of :term:`predict_proba`, provided\n        to facilitate numerical stability.\n\n    ``predict_proba``\n        A method in :term:`classifiers` and :term:`clusterers` that can\n        return probability estimates for each class\/cluster.  
Its input is\n        usually only some observed data, :term:`X`.\n\n        If the estimator was not already :term:`fitted`, calling this method\n        should raise a :class:`exceptions.NotFittedError`.\n\n        Output conventions are like those for :term:`decision_function` except\n        in the :term:`binary` classification case, where one column is output\n        for each class (while ``decision_function`` outputs a 1d array). For\n        binary and multiclass predictions, each row should add to 1.\n\n        Like other methods, ``predict_proba`` should only be present when the\n        estimator can make probabilistic predictions (see :term:`duck typing`).\n        This means that the presence of the method may depend on estimator\n        parameters (e.g. in :class:`linear_model.SGDClassifier`) or training\n        data (e.g. in :class:`model_selection.GridSearchCV`) and may only\n        appear after fitting.\n\n    ``score``\n        A method on an estimator, usually a :term:`predictor`, which evaluates\n        its predictions on a given dataset, and returns a single numerical\n        score.  A greater return value should indicate better predictions;\n        accuracy is used for classifiers and R^2 for regressors by default.\n\n        If the estimator was not already :term:`fitted`, calling this method\n        should raise a :class:`exceptions.NotFittedError`.\n\n        Some estimators implement a custom, estimator-specific score function,\n        often the likelihood of the data under the model.\n\n    ``score_samples``\n        A method that returns a score for each given sample. The exact\n        definition of *score* varies from one class to another. 
In the case of\n        density estimation, it can be the log density model on the data, and in\n        the case of outlier detection, it can be the opposite of the outlier\n        factor of the data.\n\n        If the estimator was not already :term:`fitted`, calling this method\n        should raise a :class:`exceptions.NotFittedError`.\n\n    ``set_params``\n        Available in any estimator, takes keyword arguments corresponding to\n        keys in :term:`get_params`.  Each is provided a new value to assign\n        such that calling ``get_params`` after ``set_params`` will reflect the\n        changed :term:`parameters`.  Most estimators use the implementation in\n        :class:`base.BaseEstimator`, which handles nested parameters and\n        otherwise sets the parameter as an attribute on the estimator.\n        The method is overridden in :class:`pipeline.Pipeline` and related\n        estimators.\n\n    ``split``\n        On a :term:`CV splitter` (not an estimator), this method accepts\n        parameters (:term:`X`, :term:`y`, :term:`groups`), where all may be\n        optional, and returns an iterator over ``(train_idx, test_idx)``\n        pairs.  Each of {train,test}_idx is a 1d integer array, with values\n        from 0 to ``X.shape[0] - 1`` of any length, such that no values\n        appear in both some ``train_idx`` and its corresponding ``test_idx``.\n\n    ``transform``\n        In a :term:`transformer`, transforms the input, usually only :term:`X`,\n        into some transformed space (conventionally notated as :term:`Xt`).\n        Output is an array or sparse matrix of length :term:`n_samples` and\n        with the number of columns fixed after :term:`fitting`.\n\n        If the estimator was not already :term:`fitted`, calling this method\n        should raise a :class:`exceptions.NotFittedError`.\n\n.. 
_glossary_parameters:\n\nParameters\n==========\n\nThese common parameter names, specifically used in estimator construction\n(see concept :term:`parameter`), sometimes also appear as parameters of\nfunctions or non-estimator constructors.\n\n.. glossary::\n\n    ``class_weight``\n        Used to specify sample weights when fitting classifiers as a function\n        of the :term:`target` class.  Where :term:`sample_weight` is also\n        supported and given, it is multiplied by the ``class_weight``\n        contribution. Similarly, where ``class_weight`` is used in\n        :term:`multioutput` (including :term:`multilabel`) tasks, the weights\n        are multiplied across outputs (i.e. columns of ``y``).\n\n        By default, all samples have equal weight such that classes are\n        effectively weighted by their prevalence in the training data.\n        This could be achieved explicitly with ``class_weight={label1: 1,\n        label2: 1, ...}`` for all class labels.\n\n        More generally, ``class_weight`` is specified as a dict mapping class\n        labels to weights (``{class_label: weight}``), such that each sample\n        of the named class is given that weight.\n\n        ``class_weight='balanced'`` can be used to give all classes\n        equal weight by giving each sample a weight inversely related\n        to its class's prevalence in the training data:\n        ``n_samples \/ (n_classes * np.bincount(y))``. 
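The ``'balanced'`` heuristic can be checked directly against that formula with :func:`utils.class_weight.compute_class_weight` (a minimal sketch on a toy imbalanced target):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0, 0, 0, 1])  # class 0 is three times as frequent as class 1
weights = compute_class_weight(class_weight="balanced",
                               classes=np.array([0, 1]), y=y)

# n_samples / (n_classes * np.bincount(y)) = 4 / (2 * [3, 1])
assert np.allclose(weights, [4 / 6, 4 / 2])
```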
Class weights will be\n        used differently depending on the algorithm: for linear models (such\n        as linear SVM or logistic regression), the class weights will alter the\n        loss function by weighting the loss of each sample by its class weight.\n        For tree-based algorithms, the class weights will be used for\n        reweighting the splitting criterion.\n        **Note** however that this rebalancing does not take the weight of\n        samples in each class into account.\n\n        For multioutput classification, a list of dicts is used to specify\n        weights for each output. For example, for four-class multilabel\n        classification weights should be ``[{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1,\n        1: 1}, {0: 1, 1: 1}]`` instead of ``[{1:1}, {2:5}, {3:1}, {4:1}]``.\n\n        The ``class_weight`` parameter is validated and interpreted with\n        :func:`utils.class_weight.compute_class_weight`.\n\n    ``cv``\n        Determines a cross validation splitting strategy, as used in\n        cross-validation based routines. ``cv`` is also available in estimators\n        such as :class:`multioutput.ClassifierChain` or\n        :class:`calibration.CalibratedClassifierCV` which use the predictions\n        of one estimator as training data for another, to not overfit the\n        training supervision.\n\n        Possible inputs for ``cv`` are usually:\n\n        - An integer, specifying the number of folds in K-fold cross\n          validation. K-fold will be stratified over classes if the estimator\n          is a classifier (determined by :func:`base.is_classifier`) and the\n          :term:`targets` may represent a binary or multiclass (but not\n          multioutput) classification problem (determined by\n          :func:`utils.multiclass.type_of_target`).\n        - A :term:`cross-validation splitter` instance. 
Refer to the\n          :ref:`User Guide <cross_validation>` for splitters available\n          within Scikit-learn.\n        - An iterable yielding train\/test splits.\n\n        With some exceptions (especially where not using cross validation at\n        all is an option), the default is 5-fold.\n\n        ``cv`` values are validated and interpreted with\n        :func:`model_selection.check_cv`.\n\n    ``kernel``\n        Specifies the kernel function to be used by Kernel Method algorithms.\n        For example, the estimators :class:`svm.SVC` and\n        :class:`gaussian_process.GaussianProcessClassifier` both have a\n        ``kernel`` parameter that takes the name of the kernel to use as string\n        or a callable kernel function used to compute the kernel matrix. For\n        more reference, see the :ref:`kernel_approximation` and the\n        :ref:`gaussian_process` user guides.\n\n    ``max_iter``\n        For estimators involving iterative optimization, this determines the\n        maximum number of iterations to be performed in :term:`fit`.  If\n        ``max_iter`` iterations are run without convergence, a\n        :class:`exceptions.ConvergenceWarning` should be raised.  Note that the\n        interpretation of \"a single iteration\" is inconsistent across\n        estimators: some, but not all, use it to mean a single epoch (i.e. a\n        pass over every sample in the data).\n\n        FIXME perhaps we should have some common tests about the relationship\n        between ConvergenceWarning and max_iter.\n\n    ``memory``\n        Some estimators make use of :class:`joblib.Memory` to\n        store partial solutions during fitting. Thus when ``fit`` is called\n        again, those partial solutions have been memoized and can be reused.\n\n        A ``memory`` parameter can be specified as a string with a path to a\n        directory, or a :class:`joblib.Memory` instance (or an object with a\n        similar interface, i.e. 
a ``cache`` method) can be used.\n\n        ``memory`` values are validated and interpreted with\n        :func:`utils.validation.check_memory`.\n\n    ``metric``\n        As a parameter, this is the scheme for determining the distance between\n        two data points.  See :func:`metrics.pairwise_distances`.  In practice,\n        for some algorithms, an improper distance metric (one that does not\n        obey the triangle inequality, such as Cosine Distance) may be used.\n\n        XXX: hierarchical clustering uses ``affinity`` with this meaning.\n\n        We also use *metric* to refer to :term:`evaluation metrics`, but avoid\n        using this sense as a parameter name.\n\n    ``n_components``\n        The number of features which a :term:`transformer` should transform the\n        input into. See :term:`components_` for the special case of affine\n        projection.\n\n    ``n_iter_no_change``\n        Number of iterations with no improvement to wait before stopping the\n        iterative procedure. This is also known as a *patience* parameter. It\n        is typically used with :term:`early stopping` to avoid stopping too\n        early.\n\n    ``n_jobs``\n        This parameter is used to specify how many concurrent processes or\n        threads should be used for routines that are parallelized with\n        :term:`joblib`.\n\n        ``n_jobs`` is an integer, specifying the maximum number of concurrently\n        running workers. If 1 is given, no joblib parallelism is used at all,\n        which is useful for debugging. If set to -1, all CPUs are used. For\n        ``n_jobs`` below -1, (n_cpus + 1 + n_jobs) are used. 
For example with\n        ``n_jobs=-2``, all CPUs but one are used.\n\n        ``n_jobs`` is ``None`` by default, which means *unset*; it will\n        generally be interpreted as ``n_jobs=1``, unless the current\n        :class:`joblib.Parallel` backend context specifies otherwise.\n\n        Note that even if ``n_jobs=1``, low-level parallelism (via Numpy and OpenMP)\n        might be used in some configurations.\n\n        For more details on the use of ``joblib`` and its interactions with\n        scikit-learn, please refer to our :ref:`parallelism notes\n        <parallelism>`.\n\n    ``pos_label``\n        Value with which positive labels must be encoded in binary\n        classification problems in which the positive class is not assumed.\n        This value is typically required to compute asymmetric evaluation\n        metrics such as precision and recall.\n\n    ``random_state``\n        Whenever randomization is part of a Scikit-learn algorithm, a\n        ``random_state`` parameter may be provided to control the random number\n        generator used.  Note that the mere presence of ``random_state`` doesn't\n        mean that randomization is always used, as it may be dependent on\n        another parameter, e.g. ``shuffle``, being set.\n\n        The passed value will have an effect on the reproducibility of the\n        results returned by the function (:term:`fit`, :term:`split`, or any\n        other function like :func:`~sklearn.cluster.k_means`). 
`random_state`'s\n        value may be:\n\n        None (default)\n            Use the global random state instance from :mod:`numpy.random`.\n            Calling the function multiple times will reuse\n            the same instance, and will produce different results.\n\n        An integer\n            Use a new random number generator seeded by the given integer.\n            Using an int will produce the same results across different calls.\n            However, it may be\n            worthwhile checking that your results are stable across a\n            number of different distinct random seeds. Popular integer\n            random seeds are 0 and `42\n            <https:\/\/en.wikipedia.org\/wiki\/Answer_to_the_Ultimate_Question_of_Life%2C_the_Universe%2C_and_Everything>`_.\n            Integer values must be in the range `[0, 2**32 - 1]`.\n\n        A :class:`numpy.random.RandomState` instance\n            Use the provided random state, only affecting other users\n            of that same random state instance. Calling the function\n            multiple times will reuse the same instance, and\n            will produce different results.\n\n        :func:`utils.check_random_state` is used internally to validate the\n        input ``random_state`` and return a :class:`~numpy.random.RandomState`\n        instance.\n\n        For more details on how to control the randomness of scikit-learn\n        objects and avoid common pitfalls, you may refer to :ref:`randomness`.\n\n    ``scoring``\n        Specifies the score function to be maximized (usually by :ref:`cross\n        validation <cross_validation>`), or -- in some cases -- multiple score\n        functions to be reported. The score function can be a string accepted\n        by :func:`metrics.get_scorer` or a callable :term:`scorer`, not to be\n        confused with an :term:`evaluation metric`, as the latter have a more\n        diverse API.  
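As a concrete illustration, a scorer name string behaves the same as falling back to the estimator's own ``score`` method whenever the two compute the same metric (a sketch using :func:`model_selection.cross_val_score`; the dataset and estimator choices here are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=60, random_state=0)
clf = LogisticRegression(max_iter=1000)

# "accuracy" is a string accepted by metrics.get_scorer
acc = cross_val_score(clf, X, y, cv=3, scoring="accuracy")

# scoring=None falls back to the classifier's own score method (also accuracy)
default = cross_val_score(clf, X, y, cv=3, scoring=None)

assert acc.shape == (3,)
assert np.allclose(acc, default)
```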
``scoring`` may also be set to None, in which case the\n        estimator's :term:`score` method is used.  See :ref:`scoring_parameter`\n        in the User Guide.\n\n        Where multiple metrics can be evaluated, ``scoring`` may be given\n        either as a list of unique strings, a dictionary with names as keys and\n        callables as values, or a callable that returns a dictionary. Note that\n        this does *not* specify which score function is to be maximized, and\n        another parameter such as ``refit`` may be used for this purpose.\n\n        The ``scoring`` parameter is validated and interpreted using\n        :func:`metrics.check_scoring`.\n\n    ``verbose``\n        Logging is not handled very consistently in Scikit-learn at present,\n        but when it is provided as an option, the ``verbose`` parameter is\n        usually available to choose no logging (set to False). Any True value\n        should enable some logging, but larger integers (e.g. above 10) may be\n        needed for full verbosity.  Verbose logs are usually printed to\n        Standard Output.\n        Estimators should not produce any output on Standard Output with the\n        default ``verbose`` setting.\n\n    ``warm_start``\n        When fitting an estimator repeatedly on the same dataset, but for\n        multiple parameter values (such as to find the value maximizing\n        performance as in :ref:`grid search <grid_search>`), it may be possible\n        to reuse aspects of the model learned from the previous parameter value,\n        saving time.  When ``warm_start`` is true, the existing :term:`fitted`\n        model :term:`attributes` are used to initialize the new model\n        in a subsequent call to :term:`fit`.\n\n        Note that this is only applicable for some models and some\n        parameters, and even some orders of parameter values. 
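One common case is growing an ensemble: a sketch with :class:`ensemble.RandomForestClassifier` (chosen here for illustration), where refitting after increasing ``n_estimators`` trains only the additional trees rather than starting over:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=100, random_state=0)

clf = RandomForestClassifier(n_estimators=5, warm_start=True, random_state=0)
clf.fit(X, y)
assert len(clf.estimators_) == 5

clf.set_params(n_estimators=8)
clf.fit(X, y)  # trains only the 3 new trees; the first 5 are kept
assert len(clf.estimators_) == 8
```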
In general, there\n        is an interaction between ``warm_start`` and the parameter controlling\n        the number of iterations of the estimator.\n\n        For estimators imported from :mod:`~sklearn.ensemble`,\n        ``warm_start`` will interact with ``n_estimators`` or ``max_iter``.\n        For these models, the number of iterations, reported via\n        ``len(estimators_)`` or ``n_iter_``, corresponds to the total number of\n        estimators\/iterations learnt since the initialization of the model.\n        Thus, if a model was already initialized with `N` estimators, and `fit`\n        is called with ``n_estimators`` or ``max_iter`` set to `M`, the model\n        will train `M - N` new estimators.\n\n        Other models, usually using gradient-based solvers, have a different\n        behavior. They all expose a ``max_iter`` parameter. The reported\n        ``n_iter_`` corresponds to the number of iterations performed during the\n        last call to ``fit`` and will be at most ``max_iter``. Thus, the count\n        does not reflect the state of the estimator prior to that call.\n\n        :term:`partial_fit` also retains the model between calls, but differs:\n        with ``warm_start`` the parameters change and the data is\n        (more-or-less) constant across calls to ``fit``; with ``partial_fit``,\n        the mini-batch of data changes and model parameters stay fixed.\n\n        There are cases where you want to use ``warm_start`` to fit on\n        different, but closely related data. For example, one may initially fit\n        to a subset of the data, then fine-tune the parameter search on the\n        full dataset. For classification, all data in a sequence of\n        ``warm_start`` calls to ``fit`` must include samples from each class.\n\n.. _glossary_attributes:\n\nAttributes\n==========\n\nSee concept :term:`attribute`.\n\n.. 
glossary::\n\n    ``classes_``\n        A list of class labels known to the :term:`classifier`, mapping each\n        label to a numerical index used in the model representation or output.\n        For instance, the array output from :term:`predict_proba` has columns\n        aligned with ``classes_``. For :term:`multi-output` classifiers,\n        ``classes_`` should be a list of lists, with one class listing for\n        each output.  For each output, the classes should be sorted\n        (numerically, or lexicographically for strings).\n\n        ``classes_`` and the mapping to indices is often managed with\n        :class:`preprocessing.LabelEncoder`.\n\n    ``components_``\n        An affine transformation matrix of shape ``(n_components, n_features)``\n        used in many linear :term:`transformers` where :term:`n_components` is\n        the number of output features and :term:`n_features` is the number of\n        input features.\n\n        See also :term:`coef_` which is a similar attribute for linear\n        predictors.\n\n    ``coef_``\n        The weight\/coefficient matrix of a generalized linear model\n        :term:`predictor`, of shape ``(n_features,)`` for binary classification\n        and single-output regression, ``(n_classes, n_features)`` for\n        multiclass classification and ``(n_targets, n_features)`` for\n        multi-output regression. 
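These shapes can be checked with :class:`linear_model.LinearRegression` (a minimal sketch; the synthetic targets are purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(10, 4)
y_single = X @ np.array([1.0, 2.0, 3.0, 4.0])     # single-output target
y_multi = np.column_stack([y_single, -y_single])  # two output targets

# (n_features,) for single-output regression
assert LinearRegression().fit(X, y_single).coef_.shape == (4,)
# (n_targets, n_features) for multi-output regression
assert LinearRegression().fit(X, y_multi).coef_.shape == (2, 4)
```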
Note this does not include the intercept\n        (or bias) term, which is stored in ``intercept_``.\n\n        When ``coef_`` is available, ``feature_importances_`` is not usually\n        provided as well, but it can be calculated as the norm of each\n        feature's entry in ``coef_``.\n\n        See also :term:`components_` which is a similar attribute for linear\n        transformers.\n\n    ``embedding_``\n        An embedding of the training data in :ref:`manifold learning\n        <manifold>` estimators, with shape ``(n_samples, n_components)``,\n        identical to the output of :term:`fit_transform`.  See also\n        :term:`labels_`.\n\n    ``n_iter_``\n        The number of iterations actually performed when fitting an iterative\n        estimator that may stop upon convergence. See also :term:`max_iter`.\n\n    ``feature_importances_``\n        A vector of shape ``(n_features,)`` available in some\n        :term:`predictors` to provide a relative measure of the importance of\n        each feature in the predictions of the model.\n\n    ``labels_``\n        A vector containing a cluster label for each sample of the training\n        data in :term:`clusterers`, identical to the output of\n        :term:`fit_predict`.  See also :term:`embedding_`.\n\n.. _glossary_sample_props:\n\nData and sample properties\n==========================\n\nSee concept :term:`sample property`.\n\n.. glossary::\n\n    ``groups``\n        Used in cross-validation routines to identify samples that are correlated.\n        Each value is an identifier such that, in a supporting\n        :term:`CV splitter`, samples from some ``groups`` value may not\n        appear in both a training set and its corresponding test set.\n        See :ref:`group_cv`.\n\n    ``sample_weight``\n        A relative weight for each sample.  
Intuitively, if all weights are\n        integers, a weighted model or score should be equivalent to that\n        calculated when repeating the sample the number of times specified in\n        the weight.  Weights may be specified as floats, so that sample weights\n        are usually equivalent up to a constant positive scaling factor.\n\n        FIXME Is this interpretation always the case in practice? We have no\n        common tests.\n\n        Some estimators, such as decision trees, support negative weights.\n        FIXME: This feature or its absence may not be tested or documented in\n        many estimators.\n\n        This is not entirely the case where other parameters of the model\n        consider the number of samples in a region, as with ``min_samples`` in\n        :class:`cluster.DBSCAN`.  In this case, a count of samples becomes\n        a sum of their weights.\n\n        In classification, sample weights can also be specified as a function\n        of class with the :term:`class_weight` estimator :term:`parameter`.\n\n    ``X``\n        Denotes data that is observed at training and prediction time, used as\n        independent variables in learning.  The notation is uppercase to denote\n        that it is ordinarily a matrix (see :term:`rectangular`).\n        When a matrix, each sample may be represented by a :term:`feature`\n        vector, or a vector of :term:`precomputed` (dis)similarity with each\n        training sample. ``X`` may also not be a matrix, and may require a\n        :term:`feature extractor` or a :term:`pairwise metric` to turn it into\n        one before learning a model.\n\n    ``Xt``\n        Shorthand for \"transformed :term:`X`\".\n\n    ``y``\n    ``Y``\n        Denotes data that may be observed at training time as the dependent\n        variable in learning, but which is unavailable at prediction time, and\n        is usually the :term:`target` of prediction.  
The notation may be\n        uppercase to denote that it is a matrix, representing\n        :term:`multi-output` targets, for instance; but usually we use ``y``\n        and sometimes do so even when multiple outputs are assumed.
        access it begins with ``_``. We generally try to maintain
        :term:`backwards compatibility` for all objects in the public API.

        Private API, including functions, modules and methods beginning with
        ``_``, are not assured to be stable.

    array-like
        The most common data format for *input* to Scikit-learn estimators and
        functions, array-like is any type object for which
        :func:`numpy.asarray` will produce an array of appropriate shape
        (usually 1 or 2 dimensional) of appropriate dtype (usually numeric).

        This includes:

        * a numpy array
        * a list of numbers
        * a list of length-k lists of numbers for some fixed length k
        * a :class:`pandas.DataFrame` with all columns numeric
        * a numeric :class:`pandas.Series`

        It excludes:

        * a :term:`sparse matrix`
        * a sparse array
        * an iterator
        * a generator

        Note that *output* from scikit-learn estimators and functions (e.g.
        predictions) should generally be arrays or sparse matrices, or lists
        thereof (as in multi-output :class:`tree.DecisionTreeClassifier`'s
        ``predict_proba``). An estimator where ``predict()`` returns a list or
        a ``pandas.Series`` is not valid.

    attribute
    attributes
        We mostly use attribute to refer to how model information is stored on
        an estimator during fitting. Any public attribute stored on an
        estimator instance is required to begin with an alphabetic character
        and end in a single underscore if it is set in :term:`fit` or
        :term:`partial_fit`. These are what is documented under an estimator's
        *Attributes* documentation. The information stored in attributes is
        usually either: sufficient statistics used for prediction or
        transformation; :term:`transductive` outputs such as :term:`labels` or
        :term:`embedding`; or diagnostic data, such as
        :term:`feature_importances_`.
        Common attributes are listed :ref:`below <glossary_attributes>`.

        A public attribute may have the same name as a constructor
        :term:`parameter`, with a ``_`` appended. This is used to store a
        validated or estimated version of the user's input. For example,
        :class:`decomposition.PCA` is constructed with an ``n_components``
        parameter. From this, together with other parameters and the data,
        PCA estimates the attribute ``n_components_``.

        Further private attributes used in prediction/transformation/etc. may
        also be set when fitting. These begin with a single underscore and are
        not assured to be stable for public access.

        A public attribute on an estimator instance that does not end in an
        underscore should be the stored, unmodified value of an ``__init__``
        :term:`parameter` of the same name. Because of this equivalence, these
        are documented under an estimator's *Parameters* documentation.

    backwards compatibility
        We generally try to maintain backward compatibility (i.e. interfaces
        and behaviors may be extended but not changed or removed) from release
        to release, but this comes with some exceptions:

        Public API only
            The behavior of objects accessed through private identifiers
            (those beginning with ``_``) may be changed arbitrarily between
            versions.

        As documented
            We will generally assume that the users have adhered to the
            documented parameter types and ranges. If the documentation asks
            for a list and the user gives a tuple, we do not assure consistent
            behavior from version to version.

        Deprecation
            Behaviors may change following a :term:`deprecation` period
            (usually two releases long). Warnings are issued using Python's
            :mod:`warnings` module.

        Keyword arguments
            We may sometimes assume that all optional parameters (other than X
            and y to :term:`fit` and similar methods) are passed as keyword
            arguments only and may be positionally reordered.

        Bug fixes and enhancements
            Bug fixes and, less often, enhancements may change the behavior
            of estimators, including the predictions of an estimator trained
            on the same data and :term:`random_state`. When this happens, we
            attempt to note it clearly in the changelog.

        Serialization
            We make no assurances that pickling an estimator in one version
            will allow it to be unpickled to an equivalent model in the
            subsequent version. (For estimators in the sklearn package, we
            issue a warning when this unpickling is attempted, even if it may
            happen to work.) See :ref:`persistence_limitations`.

        :func:`utils.estimator_checks.check_estimator`
            We provide limited backwards compatibility assurances for the
            estimator checks: we may add extra requirements on estimators
            tested with this function, usually when these were informally
            assumed but not formally tested.

        Despite this informal contract with our users, the software is
        provided as is, as stated in the license. When a release inadvertently
        introduces changes that are not backward compatible, these are known
        as software regressions.

    callable
        A function, class or an object which implements the ``__call__``
        method; anything that returns True when used as the argument of
        `callable() <https://docs.python.org/3/library/functions.html#callable>`_.

    categorical feature
        A categorical or nominal :term:`feature` is one that has a
        finite set of discrete values across the population of data.
        These are commonly represented as columns of integers or
        strings. Strings
will be rejected by most scikit learn         estimators  and integers will be treated as ordinal or         count valued  For the use with most estimators  categorical         variables should be one hot encoded  Notable exceptions include         tree based models such as random forests and gradient boosting         models that often work better and faster with integer coded         categorical variables           class   sklearn preprocessing OrdinalEncoder  helps encoding         string valued categorical features as ordinal integers  and          class   sklearn preprocessing OneHotEncoder  can be used to         one hot encode categorical features          See also  ref  preprocessing categorical features  and the          categorical encoding          https   github com scikit learn contrib category encoders            package for tools related to encoding categorical features       clone     cloned         To copy an  term  estimator instance  and create a new one with         identical  term  parameters   but without any fitted          term  attributes   using  func   sklearn base clone            When   fit   is called  a  term  meta estimator  usually clones         a wrapped estimator instance before fitting the cloned instance           Exceptions  for legacy reasons  include          class   pipeline Pipeline  and          class   pipeline FeatureUnion             If the estimator s  random state  parameter is an integer  or if the         estimator doesn t have a  random state  parameter   an  exact clone          is returned  the clone and the original estimator will give the exact         same results  Otherwise   statistical clone  is returned  the clone         might yield different results from the original estimator  More         details can be found in  ref  randomness        common tests         This refers to the tests run on almost every estimator class in         Scikit learn to check they comply with basic API conventions   They are      
   available for external use through          func  utils estimator checks check estimator  or          func  utils estimator checks parametrize with checks   with most of the         implementation in   sklearn utils estimator checks py             Note  Some exceptions to the common testing regime are currently         hard coded into the library  but we hope to replace this by marking         exceptional behaviours on the estimator using semantic  term  estimator         tags        cross fitting     cross fitting         A resampling method that iteratively partitions data into mutually         exclusive subsets to fit two stages  During the first stage  the         mutually exclusive subsets enable predictions or transformations to be         computed on data not seen during training  The computed data is then         used in the second stage  The objective is to avoid having any         overfitting in the first stage introduce bias into the input data         distribution of the second stage          For examples of its use  see   class   preprocessing TargetEncoder            class   ensemble StackingClassifier            class   ensemble StackingRegressor  and          class   calibration CalibratedClassifierCV        cross validation     cross validation         A resampling method that iteratively partitions data into mutually         exclusive  train  and  test  subsets so model performance can be         evaluated on unseen data  This conserves data as avoids the need to hold         out a  validation  dataset and accounts for variability as multiple         rounds of cross validation are generally performed          See  ref  User Guide  cross validation   for more details       deprecation         We use deprecation to slowly violate our  term  backwards         compatibility  assurances  usually to             change the default value of a parameter  or           remove a parameter  attribute  method  class  etc           We will ordinarily issue a 
        warning when a deprecated element is used,
        although there may be limitations to this. For instance, we will raise
        a warning when someone sets a parameter that has been deprecated, but
        may not when they access that parameter's attribute on the estimator
        instance.

        See the :ref:`Contributors' Guide <contributing_deprecation>`.

    dimensionality
        May be used to refer to the number of :term:`features` (i.e.
        :term:`n_features`), or columns in a 2d feature matrix.
        Dimensions are, however, also used to refer to the length of a NumPy
        array's shape, distinguishing a 1d array from a 2d matrix.

    docstring
        The embedded documentation for a module, class, function, etc.,
        usually in code as a string at the beginning of the object's
        definition, and accessible as the object's ``__doc__`` attribute.

        We try to adhere to `PEP257
        <https://www.python.org/dev/peps/pep-0257>`_, and follow `NumpyDoc
        conventions <https://numpydoc.readthedocs.io/en/latest/format.html>`_.

    double underscore
    double underscore notation
        When specifying parameter names for nested estimators, ``__`` may be
        used to separate between parent and child in some contexts. The most
        common use is when setting parameters through a meta-estimator with
        :term:`set_params` and hence in specifying a search grid in
        :ref:`parameter search <grid_search>`. See :term:`parameter`.
        It is also used in :meth:`pipeline.Pipeline.fit` for passing
        :term:`sample properties` to the ``fit`` methods of estimators in
        the pipeline.

    dtype
    data type
        NumPy arrays assume a homogeneous data type throughout, available in
        the ``.dtype`` attribute of an array (or sparse matrix). We generally
        assume simple data types for scikit-learn data: float or integer.
        We may support object or string data types for arrays before encoding
        or vectorizing. Our estimators do not work with struct arrays, for
        instance.

        Our documentation can sometimes give information about the dtype
        precision, e.g. ``np.int32``, ``np.int64``, etc. When the precision is
        provided, it refers to the NumPy dtype. If an arbitrary precision is
        used, the documentation will refer to dtype ``integer`` or
        ``floating``. Note that in this case, the precision can be platform
        dependent. The ``numeric`` dtype refers to accepting both ``integer``
        and ``floating``.

        When it comes to choosing between 64-bit dtype (i.e. ``np.float64``
        and ``np.int64``) and 32-bit dtype (i.e. ``np.float32`` and
        ``np.int32``), it boils down to a trade-off between efficiency and
        precision. The 64-bit types offer more accurate results due to their
        lower floating-point error, but demand more computational resources,
        resulting in slower operations and increased memory usage. In
        contrast, 32-bit types promise enhanced operation speed and reduced
        memory consumption, but introduce a larger floating-point error. The
        efficiency improvements are dependent on lower-level optimizations
        such as vectorization, single instruction multiple dispatch (SIMD), or
        cache optimization, but crucially on the compatibility of the
        algorithm in use.

        Specifically, the choice of precision should account for whether the
        employed algorithm can effectively leverage ``np.float32``. Some
        algorithms, especially certain minimization methods, are exclusively
        coded for ``np.float64``, meaning that even if ``np.float32`` is
        passed, it triggers an automatic conversion back to ``np.float64``.
        This not only negates the intended computational savings but also
        introduces additional overhead, making operations with ``np.float32``
        unexpectedly slower and more memory intensive due to this extra
        conversion step.
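        The memory/precision trade-off above can be seen directly with plain
        NumPy (a minimal illustration, outside scikit-learn itself)::

            import numpy as np

            x64 = np.linspace(0.0, 1.0, 1_000_000, dtype=np.float64)
            x32 = x64.astype(np.float32)

            # 32-bit floats take half the memory of 64-bit floats.
            print(x64.nbytes, x32.nbytes)    # 8000000 4000000

            # ...but carry a larger floating-point error (machine epsilon).
            print(np.finfo(np.float64).eps)  # ~2.2e-16
            print(np.finfo(np.float32).eps)  # ~1.2e-07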
    duck typing         We try to apply  duck typing          https   en wikipedia org wiki Duck typing    to determine how to         handle some input values  e g  checking whether a given estimator is         a classifier    That is  we avoid using   isinstance   where possible          and rely on the presence or absence of attributes to determine an         object s behaviour   Some nuance is required when following this         approach             For some estimators  an attribute may only be available once it is            term  fitted    For instance  we cannot a priori determine if            term  predict proba  is available in a grid search where the grid           includes alternating between a probabilistic and a non probabilistic           predictor in the final step of the pipeline   In the following  we           can only determine if   clf   is probabilistic after fitting it on           some data                      from sklearn model selection import GridSearchCV                   from sklearn linear model import SGDClassifier                   clf   GridSearchCV SGDClassifier                                         param grid   loss     log loss    hinge                This means that we can only check for duck typed attributes after           fitting  and that we must be careful to make  term  meta estimators            only present attributes according to the state of the underlying           estimator after fitting             Checking if an attribute is present  using   hasattr    is in general           just as expensive as getting the attribute    getattr   or dot           notation    In some cases  getting the attribute may indeed be           expensive  e g  for some implementations of            term  feature importances    which may suggest this is an API design           flaw    So code which does   hasattr   followed by   getattr   should           be avoided    getattr   within a try except block is preferred             For 
determining some aspects of an estimator s expectations or           support for some feature  we use  term  estimator tags  instead of           duck typing       early stopping         This consists in stopping an iterative optimization method before the         convergence of the training loss  to avoid over fitting  This is         generally done by monitoring the generalization score on a validation         set  When available  it is activated through the parameter           early stopping   or by setting a positive  term  n iter no change        estimator instance         We sometimes use this terminology to distinguish an  term  estimator          class from a constructed instance  For example  in the following            cls   is an estimator class  while   est1   and   est2   are         instances                cls   RandomForestClassifier             est1   cls               est2   RandomForestClassifier        examples         We try to give examples of basic usage for most functions and         classes in the API             as doctests in their docstrings  i e  within the   sklearn    library           code itself             as examples in the  ref  example gallery  general examples             rendered  using  sphinx gallery            https   sphinx gallery readthedocs io      from scripts in the             examples    directory  exemplifying key features or parameters           of the estimator function   These should also be referenced from the           User Guide            sometimes in the  ref  User Guide  user guide    built from   doc               alongside a technical description of the estimator       experimental         An experimental tool is already usable but its public API  such as         default parameter values or fitted attributes  is still subject to         change in future versions without the usual  term  deprecation          warning policy       evaluation metric     evaluation metrics         Evaluation metrics give a 
measure of how well a model performs   We may         use this term specifically to refer to the functions in  mod   sklearn metrics           disregarding  mod   sklearn metrics pairwise    as distinct from the          term  score  method and the  term  scoring  API used in cross         validation  See  ref  model evaluation            These functions usually accept a ground truth  or the raw data         where the metric evaluates clustering without a ground truth  and a         prediction  be it the output of  term  predict     y pred             of  term  predict proba     y proba     or of an arbitrary score         function including  term  decision function     y score             Functions are usually named to end with    score   if a greater         score indicates a better model  and    loss   if a lesser score         indicates a better model   This diversity of interface motivates         the scoring API           Note that some estimators can calculate metrics that are not included         in  mod   sklearn metrics  and are estimator specific  notably model         likelihoods       estimator tags         Estimator tags describe certain capabilities of an estimator   This would         enable some runtime behaviors based on estimator inspection  but it         also allows each estimator to be tested for appropriate invariances         while being excepted from other  term  common tests            Some aspects of estimator tags are currently determined through         the  term  duck typing  of methods like   predict proba   and through         some special attributes on estimator objects           For more detailed info  see  ref  estimator tags        feature     features     feature vector         In the abstract  a feature is a function  in its mathematical sense          mapping a sampled object to a numeric or categorical quantity           Feature  is also commonly used to refer to these quantities  being the         individual elements of a 
vector representing a sample  In a data         matrix  features are represented as columns  each column contains the         result of applying a feature function to a set of samples           Elsewhere features are known as attributes  predictors  regressors  or         independent variables           Nearly all estimators in scikit learn assume that features are numeric          finite and not missing  even when they have semantically distinct         domains and distributions  categorical  ordinal  count valued          real valued  interval   See also  term  categorical feature  and          term  missing values              n features   indicates the number of features in a dataset       fitting         Calling  term  fit   or  term  fit transform    term  fit predict           etc   on an estimator       fitted         The state of an estimator after  term  fitting            There is no conventional procedure for checking if an estimator         is fitted   However  an estimator that is not fitted             should raise  class  exceptions NotFittedError  when a prediction           method   term  predict    term  transform   etc   is called              func  utils validation check is fitted  is used internally           for this purpose             should not have any  term  attributes  beginning with an alphabetic           character and ending with an underscore   Note that a descriptor for           the attribute may still be present on the class  but hasattr should           return False       function         We provide ad hoc function interfaces for many algorithms  while          term  estimator  classes provide a more consistent interface           In particular  Scikit learn may provide a function interface that fits         a model to some data and returns the learnt model parameters  as in          func  linear model enet path    For transductive models  this also         returns the embedding or cluster labels  as in          func  manifold 
        spectral_embedding or :func:`cluster.dbscan`. Many
        preprocessing transformers also provide a function interface, akin to
        calling :term:`fit_transform`, as in
        :func:`preprocessing.maxabs_scale`. Users should be careful to avoid
        :term:`data leakage` when making use of these
        ``fit_transform``-equivalent functions.

        We do not have a strict policy about when to or when not to provide
        function forms of estimators, but maintainers should consider
        consistency with existing interfaces, and whether providing a function
        would lead users astray from best practices (as regards data leakage,
        etc.).

    gallery
        See :term:`examples`.

    hyperparameter
    hyper-parameter
        See :term:`parameter`.

    impute
    imputation
        Most machine learning algorithms require that their inputs have no
        :term:`missing values`, and will not work if this requirement is
        violated. Algorithms that attempt to fill in (or impute) missing
        values are referred to as imputation algorithms.

    indexable
        An :term:`array-like`, :term:`sparse matrix`, pandas DataFrame or
        sequence (usually a list).

    induction
    inductive
        Inductive (contrasted with :term:`transductive`) machine learning
        builds a model of some data that can then be applied to new instances.
        Most estimators in Scikit-learn are inductive, having :term:`predict`
        and/or :term:`transform` methods.

    joblib
        A Python library (https://joblib.readthedocs.io) used in Scikit-learn
        to facilitate simple parallelism and caching. Joblib is oriented
        towards efficiently working with numpy arrays, such as through use of
        :term:`memory mapping`. See :ref:`parallelism` for more information.

    label indicator matrix
    multilabel indicator matrix
    multilabel indicator matrices
        The format used to represent multilabel data, where
each row of a 2d         array or sparse matrix corresponds to a sample  each column         corresponds to a class  and each element is 1 if the sample is labeled         with the class and 0 if not       leakage     data leakage         A problem in cross validation where generalization performance can be         over estimated since knowledge of the test data was inadvertently         included in training a model   This is a risk  for instance  when         applying a  term  transformer  to the entirety of a dataset rather         than each training portion in a cross validation split           We aim to provide interfaces  such as  mod   sklearn pipeline  and          mod   sklearn model selection   that shield the user from data leakage       memmapping     memory map     memory mapping         A memory efficiency strategy that keeps data on disk rather than         copying it into main memory   Memory maps can be created for arrays         that can be read  written  or both  using  obj  numpy memmap   When         using  term  joblib  to parallelize operations in Scikit learn  it         may automatically memmap large arrays to reduce memory duplication         overhead in multiprocessing       missing values         Most Scikit learn estimators do not work with missing values  When they         do  e g  in  class  impute SimpleImputer    NaN is the preferred         representation of missing values in float arrays   If the array has         integer dtype  NaN cannot be represented  For this reason  we support         specifying another   missing values   value when  term  imputation  or         learning can be performed in integer space           term  Unlabeled data  unlabeled data   is a special case of missing         values in the  term  target          n features           The number of  term  features          n outputs           The number of  term  outputs  in the  term  target          n samples           The number of  term  samples          n 
targets           Synonym for  term  n outputs        narrative docs     narrative documentation         An alias for  ref  User Guide  user guide    i e  documentation written         in   doc modules     Unlike the  ref  API reference  api ref   provided         through docstrings  the User Guide aims to             group tools provided by Scikit learn together thematically or in           terms of usage            motivate why someone would use each particular tool  often through           comparison            provide both intuitive and technical descriptions of tools            provide or link to  term  examples  of using key features of a           tool       np         A shorthand for Numpy due to the conventional import statement                import numpy as np      online learning         Where a model is iteratively updated by receiving each batch of ground         truth  term  targets  soon after making predictions on corresponding         batch of data   Intrinsically  the model must be usable for prediction         after each batch  See  term  partial fit        out of core         An efficiency strategy where not all the data is stored in main memory         at once  usually by performing learning on batches of data  See          term  partial fit        outputs         Individual scalar categorical variables per sample in the          term  target    For example  in multilabel classification each         possible label corresponds to a binary output  Also called  responses            tasks  or  targets           See  term  multiclass multioutput  and  term  continuous multioutput        pair         A tuple of length two       parameter     parameters     param     params         We mostly use  parameter  to refer to the aspects of an estimator that         can be specified in its construction  For example    max depth   and           random state   are parameters of  class   ensemble RandomForestClassifier           Parameters to an estimator s 
        constructor are stored unmodified as
        attributes on the estimator instance, and conventionally start with an
        alphabetic character and end with an alphanumeric character. Each
        estimator's constructor parameters are described in the estimator's
        docstring.

        We do not use parameters in the statistical sense, where parameters
        are values that specify a model and can be estimated from data. What
        we call parameters might be what statisticians call hyperparameters to
        the model: aspects for configuring model structure that are often not
        directly learnt from data. However, our parameters are also used to
        prescribe modeling operations that do not affect the learnt model,
        such as :term:`n_jobs` for controlling parallelism.

        When talking about the parameters of a :term:`meta-estimator`, we may
        also be including the parameters of the estimators wrapped by the
        meta-estimator. Ordinarily, these nested parameters are denoted by
        using a :term:`double underscore` (``__``) to separate between the
        estimator-as-parameter and its parameter. Thus ``clf =
        BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=3))``
        has a deep parameter ``estimator__max_depth`` with value ``3``,
        which is accessible with ``clf.estimator.max_depth`` or
        ``clf.get_params()['estimator__max_depth']``.

        The list of parameters and their current values can be retrieved from
        an :term:`estimator instance` using its :term:`get_params` method.

        Between construction and fitting, parameters may be modified using
        :term:`set_params`. To enable this, parameters are not ordinarily
        validated or altered when the estimator is constructed, or when each
        parameter is set. Parameter validation is performed when :term:`fit`
        is called.

        Common parameters are listed :ref:`below <glossary_parameters>`.
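        The nested-parameter conventions can be exercised directly (a minimal
        sketch; it assumes scikit-learn >= 1.2, where the Bagging wrapper's
        constructor parameter is named ``estimator``)::

            from sklearn.ensemble import BaggingClassifier
            from sklearn.tree import DecisionTreeClassifier

            clf = BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=3))

            # get_params flattens nested parameters into double-underscore keys.
            print(clf.get_params()["estimator__max_depth"])  # 3

            # set_params can reach into the wrapped estimator the same way;
            # no validation happens until fit is called.
            clf.set_params(estimator__max_depth=5)
            print(clf.estimator.max_depth)  # 5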
    pairwise metric
    pairwise metrics
        In its broad sense, a pairwise metric defines a function for measuring
        similarity or dissimilarity between two samples (with each ordinarily
        represented as a :term:`feature vector`). We particularly provide
        implementations of distance metrics (as well as improper metrics like
        Cosine Distance) through :func:`metrics.pairwise_distances`, and of
        kernel functions (a constrained class of similarity functions) in
        :func:`metrics.pairwise.pairwise_kernels`. These can compute pairwise
        distance matrices that are symmetric and hence store data redundantly.

        See also :term:`precomputed` and :term:`metric`.

        Note that for most distance metrics, we rely on implementations from
        :mod:`scipy.spatial.distance`, but may reimplement for efficiency in
        our context. The :class:`metrics.DistanceMetric` interface is used to
        implement distance metrics for integration with efficient neighbors
        search.

    pd
        A shorthand for `Pandas <https://pandas.pydata.org>`_ due to the
        conventional import statement::

            import pandas as pd

    precomputed
        Where algorithms rely on :term:`pairwise metrics`, and can be computed
        from pairwise metrics alone, we often allow the user to specify that
        the :term:`X` provided is already in the pairwise (dis)similarity
        space, rather than in a feature space. That is, when passed to
        :term:`fit`, it is a square, symmetric matrix, with each vector
        indicating (dis)similarity to every sample, and when passed to
        prediction/transformation methods, each row corresponds to a testing
        sample and each column to a training sample.

        Use of precomputed X is usually indicated by setting a ``metric``,
        ``affinity`` or ``kernel`` parameter to the string 'precomputed'. If
        this is the case, then the estimator should set the ``pairwise``
         estimator tag as True       rectangular         Data that can be represented as a matrix with  term  samples  on the         first axis and a fixed  finite set of  term  features  on the second         is called rectangular           This term excludes samples with non vectorial structures  such as text          an image of arbitrary size  a time series of arbitrary length  a set of         vectors  etc  The purpose of a  term  vectorizer  is to produce         rectangular forms of such data       sample     samples         We usually use this term as a noun to indicate a single feature vector          Elsewhere a sample is called an instance  data point  or observation            n samples   indicates the number of samples in a dataset  being the         number of rows in a data array  term  X           Note that this definition is standard in machine learning and deviates from         statistics where it means  a set of individuals or objects collected or         selected        sample property     sample properties         A sample property is data for each sample  e g  an array of length         n samples  passed to an estimator method or a similar function          alongside but distinct from the  term  features     X    and          term  target     y     The most prominent example is          term  sample weight   see others at  ref  glossary sample props            As of version 0 19 we do not have a consistent approach to handling         sample properties and their routing in  term  meta estimators   though         a   fit params   parameter is often used       scikit learn contrib         A venue for publishing Scikit learn compatible libraries that are         broadly authorized by the core developers and the contrib community          but not maintained by the core developer team          See https   scikit learn contrib github io       scikit learn enhancement proposals     SLEP     SLEPs         Changes to the API principles and changes to 
        dependencies or supported
        versions happen via a :ref:`SLEP <slep>` and follow the
        decision-making process outlined in :ref:`governance`.

        For all votes, a proposal must have been made public and discussed
        before the vote. Such a proposal must be a consolidated document, in
        the form of a "Scikit-Learn Enhancement Proposal" (SLEP), rather than
        a long discussion on an issue. A SLEP must be submitted as a pull
        request to `enhancement proposals
        <https://scikit-learn-enhancement-proposals.readthedocs.io>`_ using
        the `SLEP template
        <https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep_template.html>`_.

    semi-supervised
    semi-supervised learning
    semisupervised
        Learning where the expected prediction (label or ground truth) is only
        available for some samples provided as training data when
        :term:`fitting` the model. We conventionally apply the label ``-1``
        to :term:`unlabeled` samples in semi-supervised classification.

    sparse matrix
    sparse graph
        A representation of two-dimensional numeric data that is more memory
        efficient than the corresponding dense numpy array where almost all
        elements are zero. We use the :mod:`scipy.sparse` framework, which
        provides several underlying sparse data representations, or *formats*.
        Some formats are more efficient than others for particular tasks, and
        when a particular format provides especial benefit, we try to document
        this fact in Scikit-learn parameter descriptions.

        Some sparse matrix formats (notably CSR, CSC, COO and LIL) distinguish
        between *implicit* and *explicit* zeros. Explicit zeros are stored
        (i.e. they consume memory in a ``data`` array) in the data structure,
        while implicit zeros correspond to every element not otherwise defined
        in explicit storage.

        Two semantics for sparse matrices are used in
Scikit-learn:

    matrix semantics
        The sparse matrix is interpreted as an array with implicit and explicit zeros being interpreted as the number 0. This is the interpretation most often adopted, e.g. when sparse matrices are used for feature matrices or multilabel indicator matrices.

    graph semantics
        As with ``scipy.sparse.csgraph``, explicit zeros are interpreted as the number 0, but implicit zeros indicate a masked or absent value, such as the absence of an edge between two vertices of a graph, where an explicit value indicates an edge's weight. This interpretation is adopted to represent connectivity in clustering, in representations of nearest neighborhoods (e.g. ``neighbors.kneighbors_graph``), and for precomputed distance representation where only distances in the neighborhood of each point are required.

    When working with sparse matrices, we assume that it is sparse for a good reason, and avoid writing code that densifies a user-provided sparse matrix, instead maintaining sparsity or raising an error if not possible (i.e. if an estimator does not / cannot support sparse matrices).

stateless
    An estimator is stateless if it does not store any information that is obtained during fit. This information can be either parameters learned during fit or statistics computed from the training data. An estimator is stateless if it has no attributes apart from ones set in ``__init__``. Calling fit for these estimators will only validate the public attributes passed in ``__init__``.

supervised, supervised learning
    Learning where the expected prediction (label or ground truth) is available for each sample when
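The implicit/explicit zero distinction above can be demonstrated directly with ``scipy.sparse``; this is a standalone sketch, not part of the glossary itself:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Converting a dense array stores only the nonzero entries (zeros stay implicit).
dense = np.array([[0.0, 1.0], [2.0, 0.0]])
implicit = csr_matrix(dense)
print(implicit.nnz)  # 2 stored values

# Building from coordinates can store an *explicit* zero, which consumes
# memory in the .data array even though its value is 0.
rows, cols, vals = [0, 0, 1], [0, 1, 0], [0.0, 1.0, 2.0]
explicit = csr_matrix((vals, (rows, cols)), shape=(2, 2))
print(explicit.nnz)  # 3 stored values, one of them an explicit zero

# eliminate_zeros() drops explicit zeros, recovering pure matrix semantics.
explicit.eliminate_zeros()
print(explicit.nnz)  # 2
```

Under graph semantics, calling ``eliminate_zeros()`` would be destructive: an explicit zero may encode a real edge of weight 0, distinct from "no edge".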
fitting  the model  provided as          term  y    This is the approach taken in a  term  classifier  or          term  regressor  among other estimators       target     targets         The  dependent variable  in  term  supervised   and          term  semisupervised   learning  passed as  term  y  to an estimator s          term  fit  method   Also known as  dependent variable    outcome         variable    response variable    ground truth  or  label   Scikit learn         works with targets that have minimal structure  a class from a finite         set  a finite real valued number  multiple classes  or multiple         numbers  See  ref  glossary target types        transduction     transductive         A transductive  contrasted with  term  inductive   machine learning         method is designed to model a specific dataset  but not to apply that         model to unseen data   Examples include  class  manifold TSNE            class  cluster AgglomerativeClustering  and          class  neighbors LocalOutlierFactor        unlabeled     unlabeled data         Samples with an unknown ground truth when fitting  equivalently           term  missing values  in the  term  target    See also          term  semisupervised  and  term  unsupervised  learning       unsupervised     unsupervised learning         Learning where the expected prediction  label or ground truth  is not         available for each sample when  term  fitting  the model  as in          term  clusterers  and  term  outlier detectors    Unsupervised         estimators ignore any  term  y  passed to  term  fit        glossary estimator types   Class APIs and Estimator Types                                    glossary        classifier     classifiers         A  term  supervised   or  term  semi supervised    term  predictor          with a finite set of discrete possible output values           A classifier supports modeling some of  term  binary            term  multiclass    term  multilabel   or  
term  multiclass         multioutput  targets   Within scikit learn  all classifiers support         multi class classification  defaulting to using a one vs rest         strategy over the binary classification problem           Classifiers must store a  term  classes   attribute after fitting          and inherit from  class  base ClassifierMixin   which sets         their corresponding  term  estimator tags  correctly           A classifier can be distinguished from other estimators with          func   base is classifier            A classifier must implement              term  fit             term  predict             term  score           It may also be appropriate to implement  term  decision function            term  predict proba  and  term  predict log proba        clusterer     clusterers         A  term  unsupervised   term  predictor  with a finite set of discrete         output values           A clusterer usually stores  term  labels   after fitting  and must do         so if it is  term  transductive            A clusterer must implement              term  fit             term  fit predict  if  term  transductive             term  predict  if  term  inductive       density estimator         An  term  unsupervised  estimation of input probability density         function  Commonly used techniques are              ref  kernel density    uses a kernel function  controlled by the           bandwidth parameter to represent density             ref  Gaussian mixture  mixture     uses mixture of Gaussian models           to represent density       estimator     estimators         An object which manages the estimation and decoding of a model  The         model is estimated as a deterministic function of              term  parameters  provided in object construction or with            term  set params             the global  mod  numpy random  random state if the estimator s            term  random state  parameter is set to None  and           any data or  
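As a sketch of the classifier contract described above (``fit`` and ``predict``, a ``classes_`` attribute set during fitting, and inheritance from ``ClassifierMixin``), the ``MajorityClassifier`` below is a hypothetical toy, not a scikit-learn estimator:

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin, is_classifier

class MajorityClassifier(ClassifierMixin, BaseEstimator):
    """Toy classifier: always predicts the most frequent training class."""

    def fit(self, X, y):
        y = np.asarray(y)
        # Classifiers must store a classes_ attribute after fitting.
        self.classes_, counts = np.unique(y, return_counts=True)
        self.majority_ = self.classes_[np.argmax(counts)]
        return self  # fit returns the estimator itself to allow chaining

    def predict(self, X):
        return np.full(len(X), self.majority_)

clf = MajorityClassifier().fit([[0], [1], [2]], ["a", "b", "b"])
print(is_classifier(clf))      # True, via ClassifierMixin
print(clf.predict([[5], [6]])) # ['b' 'b']
```

``ClassifierMixin`` also supplies the default accuracy-based ``score`` method, so the required ``fit``/``predict``/``score`` triple is complete.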
term  sample properties  passed to the most recent           call to  term  fit    term  fit transform  or  term  fit predict             or data similarly passed in a sequence of calls to            term  partial fit            The estimated model is stored in public and private  term  attributes          on the estimator instance  facilitating decoding through prediction         and transformation methods           Estimators must provide a  term  fit  method  and should provide          term  set params  and  term  get params   although these are usually         provided by inheritance from  class  base BaseEstimator            The core functionality of some estimators may also be available as a          term  function        feature extractor     feature extractors         A  term  transformer  which takes input where each sample is not         represented as an  term  array like  object of fixed length  and         produces an  term  array like  object of  term  features  for each         sample  and thus a 2 dimensional array like for a set of samples    In         other words  it  lossily  maps a non rectangular data representation         into  term  rectangular  data           Feature extractors must implement at least              term  fit             term  transform             term  get feature names out       meta estimator     meta estimators     metaestimator     metaestimators         An  term  estimator  which takes another estimator as a parameter          Examples include  class  pipeline Pipeline            class  model selection GridSearchCV            class  feature selection SelectFromModel  and          class  ensemble BaggingClassifier            In a meta estimator s  term  fit  method  any contained estimators         should be  term  cloned  before they are fit  although FIXME  Pipeline         and FeatureUnion do not do this currently   An exception to this is         that an estimator may explicitly document that it accepts a pre 
fitted         estimator  e g  using   prefit True   in          class  feature selection SelectFromModel    One known issue with this         is that the pre fitted estimator will lose its model if the         meta estimator is cloned   A meta estimator should have   fit   called         before prediction  even if all contained estimators are pre fitted           In cases where a meta estimator s primary behaviors  e g           term  predict  or  term  transform  implementation  are functions of         prediction transformation methods of the provided  base estimator   or         multiple base estimators   a meta estimator should provide at least the         standard methods provided by the base estimator   It may not be         possible to identify which methods are provided by the underlying         estimator until the meta estimator has been  term  fitted   see also          term  duck typing    for which          func  utils metaestimators available if  may help   It         should also provide  or modify  the  term  estimator tags  and          term  classes   attribute provided by the base estimator           Meta estimators should be careful to validate data as minimally as         possible before passing it to an underlying estimator  This saves         computation time  and may  for instance  allow the underlying         estimator to easily work with data that is not  term  rectangular        outlier detector     outlier detectors         An  term  unsupervised  binary  term  predictor  which models the         distinction between core and outlying samples           Outlier detectors must implement              term  fit             term  fit predict  if  term  transductive             term  predict  if  term  inductive           Inductive outlier detectors may also implement          term  decision function  to give a normalized inlier score where         outliers have score below 0    term  score samples  may provide an         unnormalized score per 
sample.

predictor, predictors
    An estimator supporting predict and/or fit_predict. This encompasses classifier, regressor, outlier detector and clusterer. In statistics, "predictors" refers to features.

regressor, regressors
    A supervised (or semi-supervised) predictor with continuous output values.

    Regressors inherit from ``base.RegressorMixin``, which sets their estimator tags correctly.

    A regressor can be distinguished from other estimators with ``base.is_regressor``.

    A regressor must implement:

    * fit
    * predict
    * score

transformer, transformers
    An estimator supporting transform and/or fit_transform. A purely transductive transformer, such as ``manifold.TSNE``, may not implement ``transform``.

vectorizer, vectorizers
    See feature extractor.

There are further APIs specifically related to a small family of estimators, such as:

cross-validation splitter, CV splitter, cross-validation generator
    A non-estimator family of classes used to split a dataset into a sequence of train and test portions (see Cross-validation), by providing ``split`` and ``get_n_splits`` methods. Note that unlike estimators, these do not have ``fit`` methods and do not provide ``set_params`` or ``get_params``. Parameter validation may be performed in ``__init__``.

cross-validation estimator
    An estimator that has built-in cross-validation capabilities to automatically select the best hyper-parameters (see the User Guide on grid search). Some examples of cross-validation estimators are
linear model ElasticNetCV   and          class  LogisticRegressionCV  linear model LogisticRegressionCV            Cross validation estimators are named  EstimatorCV  and tend to be         roughly equivalent to  GridSearchCV Estimator           The         advantage of using a cross validation estimator over the canonical          term  estimator  class along with  ref  grid search  grid search   is         that they can take advantage of warm starting by reusing precomputed         results in the previous steps of the cross validation process  This         generally leads to speed improvements  An exception is the          class  RidgeCV  linear model RidgeCV   class  which can instead         perform efficient Leave One Out  LOO  CV  By default  all these         estimators  apart from  class  RidgeCV  linear model RidgeCV   with an         LOO CV  will be refitted on the full training dataset after finding the         best combination of hyper parameters       scorer         A non estimator callable object which evaluates an estimator on given         test data  returning a number  Unlike  term  evaluation metrics           a greater returned number must correspond with a  better  score          See  ref  scoring parameter    Further examples      class  metrics DistanceMetric     class  gaussian process kernels Kernel      tree Criterion        glossary metadata routing   Metadata Routing                      glossary        consumer         An object which consumes  term  metadata   This object is usually an          term  estimator   a  term  scorer   or a  term  CV splitter   Consuming         metadata means using it in calculations  e g  using          term  sample weight  to calculate a certain type of score  Being a         consumer doesn t mean that the object always receives a certain         metadata  rather it means it can use it if it is provided       metadata         Data which is related to the given  term  X  and  term  y  data  but         is 
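The CV-splitter API described earlier (``split`` and ``get_n_splits``, no ``fit``) can be exercised with ``KFold``, for instance:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(12).reshape(6, 2)
cv = KFold(n_splits=3)

# Unlike estimators, a splitter has no fit(); it only describes partitions.
print(cv.get_n_splits(X))  # 3
for train_idx, test_idx in cv.split(X):
    # Each fold yields disjoint 1d integer index arrays.
    print(train_idx, test_idx)
```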
not directly a part of the data, e.g. ``sample_weight`` or ``groups``, and is passed along to different objects and methods, e.g. to a scorer or a CV splitter.

router
    An object which routes metadata to consumers. This object is usually a meta-estimator, e.g. ``pipeline.Pipeline`` or ``model_selection.GridSearchCV``. Some routers can also be a consumer. This happens for example when a meta-estimator uses the given ``groups``, and it also passes it along to some of its sub-objects, such as a CV splitter.

Please refer to the Metadata Routing User Guide for more information.

Target Types

binary
    A classification problem consisting of two classes. A binary target may be represented as for a multiclass problem but with only two labels. A binary decision function is represented as a 1d array.

    Semantically, one class is often considered the "positive" class. Unless otherwise specified (e.g. using ``pos_label`` in evaluation metrics), we consider the class label with the greater value (numerically or lexicographically) as the positive class: of labels [0, 1], 1 is the positive class; of [1, 2], 2 is the positive class; of ['no', 'yes'], 'yes' is the positive class; of ['no', 'YES'], 'no' is the positive class. This affects the output of decision_function, for instance.

    Note that a dataset sampled from a multiclass ``y`` or a continuous ``y`` may appear to be binary.

    ``utils.multiclass.type_of_target`` will return 'binary' for binary input, or a similar array with only a single class present.

continuous
    A regression problem where each sample's target is a finite floating point number represented as a 1-dimensional array of floats (or sometimes ints).

    ``utils.multiclass.type_of_target`` will return 'continuous' for continuous input, but if the data is all integers, it will be identified as 'multiclass'.

continuous multioutput, continuous multi-output, multioutput continuous, multi-output continuous
    A regression problem where each sample's target consists of ``n_outputs`` outputs, each one a finite floating point number, for a fixed int ``n_outputs > 1`` in a particular dataset.

    Continuous multioutput targets are represented as multiple continuous targets, horizontally stacked into an array of shape ``(n_samples, n_outputs)``.

    ``utils.multiclass.type_of_target`` will return 'continuous-multioutput' for continuous multioutput input, but if the data is all integers, it will be identified as 'multiclass-multioutput'.

multiclass, multi-class
    A classification problem consisting of more than two classes. A multiclass target may be represented as a 1-dimensional array of strings or integers. A 2d column vector of integers (i.e. a single output in multioutput terms) is also accepted.

    We do not officially support other orderable, hashable objects as class labels, even if estimators may happen to work when given classification targets of such type.

    For semi-supervised classification, unlabeled samples should have the special label -1 in ``y``.

    Within scikit-learn, all estimators supporting binary classification also support multiclass classification, using One-vs-Rest by default.

    A ``preprocessing.LabelEncoder`` helps to canonicalize multiclass targets as integers.

    ``utils.multiclass.type_of_target`` will return 'multiclass' for
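The behaviour of ``utils.multiclass.type_of_target`` for the target types above can be checked directly:

```python
from sklearn.utils.multiclass import type_of_target

print(type_of_target([0, 1, 0, 1]))  # 'binary'
print(type_of_target([1, 1, 1]))     # 'binary' (a single class present)
print(type_of_target([1, 2, 3]))     # 'multiclass'
print(type_of_target([0.5, 1.2]))    # 'continuous' (non-integer floats)
print(type_of_target([[1.0, 2.0], [3.0, 4.5]]))  # 'continuous-multioutput'
```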
multiclass input  The user may also want to handle  binary  input         identically to  multiclass        multiclass multioutput     multi class multi output     multioutput multiclass     multi output multi class         A classification problem where each sample s target consists of           n outputs    term  outputs   each a class label  for a fixed int           n outputs   1   in a particular dataset   Each output has a         fixed set of available classes  and each sample is labeled with a         class for each output  An output may be binary or multiclass  and in         the case where all outputs are binary  the target is          term  multilabel            Multiclass multioutput targets are represented as multiple          term  multiclass  targets  horizontally stacked into an array         of shape    n samples  n outputs              XXX  For simplicity  we may not always support string class labels         for multiclass multioutput  and integer class labels should be used            mod   sklearn multioutput  provides estimators which estimate multi output         problems using multiple single output estimators   This may not fully         account for dependencies among the different outputs  which methods         natively handling the multioutput case  e g  decision trees  nearest         neighbors  neural networks  may do better            func   utils multiclass type of target  will return          multiclass multioutput  for multiclass multioutput input       multilabel     multi label         A  term  multiclass multioutput  target where each output is          term  binary    This may be represented as a 2d  dense  array or         sparse matrix of integers  such that each column is a separate binary         target  where positive labels are indicated with 1 and negative labels         are usually  1 or 0   Sparse multilabel targets are not supported         everywhere that dense multilabel targets are supported           Semantically  
a multilabel target can be thought of as a set of labels         for each sample   While not used internally           class  preprocessing MultiLabelBinarizer  is provided as a utility to         convert from a list of sets representation to a 2d array or sparse         matrix  One hot encoding a multiclass target with          class  preprocessing LabelBinarizer  turns it into a multilabel         problem            func   utils multiclass type of target  will return          multilabel indicator  for multilabel input  whether sparse or dense       multioutput     multi output         A target where each sample has multiple classification regression         labels  See  term  multiclass multioutput  and  term  continuous         multioutput   We do not currently support modelling mixed         classification and regression targets       glossary methods   Methods             glossary          decision function           In a fitted  term  classifier  or  term  outlier detector   predicts a          soft  score for each sample in relation to each class  rather than the          hard  categorical prediction produced by  term  predict    Its input         is usually only some observed data   term  X            If the estimator was not already  term  fitted   calling this method         should raise a  class  exceptions NotFittedError            Output conventions           binary classification             A 1 dimensional array  where values strictly greater than zero             indicate the positive class  i e  the last class in              term  classes             multiclass classification             A 2 dimensional array  where the row wise arg maximum is the             predicted class   Columns are ordered according to              term  classes            multilabel classification             Scikit learn is inconsistent in its representation of  term  multilabel              decision functions  It may be represented one of two ways                 List of 
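``preprocessing.MultiLabelBinarizer``, mentioned above, converts a list-of-sets multilabel representation into the 2d indicator format:

```python
from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
# Each sample is a set of labels; columns follow sorted class order.
Y = mlb.fit_transform([{"news", "sport"}, {"news"}, set()])
print(mlb.classes_)  # ['news' 'sport']
print(Y)             # rows: [1 1], [1 0], [0 0]
```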
2d arrays  each array of shape    n samples   2   like in               multiclass multioutput  List is of length  n labels                  Single 2d array of shape   n samples    n labels    with each                column  in the array corresponding to the individual binary               classification decisions  This is identical to the               multiclass classification format  though its semantics differ  it               should be interpreted  like in the binary case  by thresholding at               0           multioutput classification             A list of 2d arrays  corresponding to each multiclass decision             function          outlier detection             A 1 dimensional array  where a value greater than or equal to zero             indicates an inlier         fit           The   fit   method is provided on every estimator  It usually takes some          term  samples    X     term  targets    y   if the model is supervised          and potentially other  term  sample properties  such as          term  sample weight    It should             clear any prior  term  attributes  stored on the estimator  unless            term  warm start  is used            validate and interpret any  term  parameters   ideally raising an           error if invalid            validate the input data            estimate and store model attributes from the estimated parameters and           provided data  and           return the now  term  fitted  estimator to facilitate method           chaining            ref  glossary target types  describes possible formats for   y           fit predict           Used especially for  term  unsupervised    term  transductive          estimators  this fits the model and returns the predictions  similar to          term  predict   on the training data  In clusterers  these predictions         are also stored in the  term  labels   attribute  and the output of            fit predict X    is usually equivalent to    fit X  
predict X             The parameters to   fit predict   are the same as those to   fit           fit transform           A method on  term  transformers  which fits the estimator and returns         the transformed training data  It takes parameters as in  term  fit          and its output should have the same shape as calling    fit X               transform X     There are nonetheless rare cases where            fit transform X         and    fit X       transform X    do not         return the same value  wherein training data needs to be handled         differently  due to model blending in stacked ensembles  for instance          such cases should be clearly documented            term  Transductive  transductive   transformers may also provide           fit transform   but not  term  transform            One reason to implement   fit transform   is that performing   fit           and   transform   separately would be less efficient than together           class  base TransformerMixin  provides a default implementation          providing a consistent interface across transformers where           fit transform   is or is not specialized           In  term  inductive  learning    where the goal is to learn a         generalized model that can be applied to new data    users should be         careful not to apply   fit transform   to the entirety of a dataset          i e  training and test data together  before further modelling  as         this results in  term  data leakage          get feature names out           Primarily for  term  feature extractors   but also used for other         transformers to provide string names for each column in the output of         the estimator s  term  transform  method   It outputs an array of         strings and may take an array like of strings as input  corresponding         to the names of input columns from which output column names can         be generated   If  input features  is not passed in  then the          feature 
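The warning above against applying ``fit_transform`` to training and test data together can be made concrete with a scaler; an illustrative sketch:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.normal(loc=5.0, size=(100, 3))
X_train, X_test = train_test_split(X, random_state=0)

scaler = StandardScaler()
# Fit statistics (mean, scale) on the training split only...
X_train_t = scaler.fit_transform(X_train)
# ...then reuse them on test data via transform(), never fit_transform(),
# so no information from the test set leaks into the model.
X_test_t = scaler.transform(X_test)
print(X_train_t.mean(axis=0).round(6))  # approximately [0. 0. 0.]
```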
names in   attribute will be used  If the          feature names in   attribute is not defined  then the         input names are named   x0  x1       x n features in    1            get n splits           On a  term  CV splitter   not an estimator   returns the number of         elements one would get if iterating through the return value of          term  split  given the same parameters   Takes the same parameters as         split         get params           Gets all  term  parameters   and their values  that can be set using          term  set params    A parameter   deep   can be used  when set to         False to only return those parameters not including         i e   not         due to indirection via contained estimators           Most estimators adopt the definition from  class  base BaseEstimator           which simply adopts the parameters defined for     init               class  pipeline Pipeline   among others  reimplements   get params           to declare the estimators named in its   steps   parameters as         themselves being parameters         partial fit           Facilitates fitting an estimator in an online fashion   Unlike   fit            repeatedly calling   partial fit   does not clear the model  but         updates it with the data provided  The portion of data         provided to   partial fit   may be called a mini batch          Each mini batch must be of consistent shape  etc  In iterative         estimators    partial fit   often only performs a single iteration             partial fit   may also be used for  term  out of core  learning          although usually limited to the case where learning can be performed         online  i e  the model is usable after each   partial fit   and there         is no separate processing needed to finalize the model           class  cluster Birch  introduces the convention that calling           partial fit X    will produce a model that is not finalized  but the         model can be finalized 
by calling   partial fit     i e  without         passing a further mini batch           Generally  estimator parameters should not be modified between calls         to   partial fit    although   partial fit   should validate them         as well as the new mini batch of data   In contrast    warm start           is used to repeatedly fit the same estimator with the same data         but varying parameters           Like   fit      partial fit   should return the estimator object           To clear the model  a new estimator should be constructed  for instance         with  func  base clone            NOTE  Using   partial fit   after   fit   results in undefined behavior         predict           Makes a prediction for each sample  usually only taking  term  X  as         input  but see under regressor output conventions below   In a          term  classifier  or  term  regressor   this prediction is in the same         target space used in fitting  e g  one of   red    amber    green   if         the   y   in fitting consisted of these strings    Despite this  even         when   y   passed to  term  fit  is a list or other array like  the         output of   predict   should always be an array or sparse matrix  In a          term  clusterer  or  term  outlier detector  the prediction is an         integer           If the estimator was not already  term  fitted   calling this method         should raise a  class  exceptions NotFittedError            Output conventions           classifier             An array of shape    n samples        n samples  n outputs                  term  Multilabel  multilabel   data may be represented as a sparse             matrix if a sparse matrix was used in fitting  Each element should             be one of the values in the classifier s  term  classes               attribute           clusterer             An array of shape    n samples     where each value is from 0 to               n clusters   1   if the corresponding sample 
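Online fitting via ``partial_fit`` might look like the following with ``SGDClassifier``; note that all possible ``classes`` must be declared on the first call, since later mini-batches need not contain every class:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
classes = np.array([0, 1])

clf = SGDClassifier(random_state=0)
for start in range(0, len(X), 2):
    # Each call updates the model with one mini-batch; it never clears it.
    clf.partial_fit(X[start:start + 2], y[start:start + 2], classes=classes)

print(clf.predict(X).shape)  # (4,)
```

To discard the model and start over, construct a fresh estimator (e.g. with ``base.clone``) rather than calling ``partial_fit`` on new data and expecting a reset.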
is clustered              and  1 if the sample is not clustered  as in              func  cluster dbscan            outlier detector             An array of shape    n samples     where each value is  1 for an             outlier and 1 otherwise           regressor             A numeric array of shape    n samples      usually float64              Some regressors have extra options in their   predict   method              allowing them to return standard deviation    return std True                or covariance    return cov True    relative to the predicted             value   In this case  the return value is a tuple of arrays             corresponding to  prediction mean  std  cov  as required         predict log proba           The natural logarithm of the output of  term  predict proba   provided         to facilitate numerical stability         predict proba           A method in  term  classifiers  and  term  clusterers  that can         return probability estimates for each class cluster   Its input is         usually only some observed data   term  X            If the estimator was not already  term  fitted   calling this method         should raise a  class  exceptions NotFittedError            Output conventions are like those for  term  decision function  except         in the  term  binary  classification case  where one column is output         for each class  while   decision function   outputs a 1d array   For         binary and multiclass predictions  each row should add to 1           Like other methods    predict proba   should only be present when the         estimator can make probabilistic predictions  see  term  duck typing            This means that the presence of the method may depend on estimator         parameters  e g  in  class  linear model SGDClassifier   or training         data  e g  in  class  model selection GridSearchCV   and may only         appear after fitting         score           A method on an estimator  usually a  term  
predictor, which evaluates its predictions on a given dataset, and returns a single numerical score. A greater return value should indicate better predictions; accuracy is used for classifiers and R² for regressors by default.

    If the estimator was not already fitted, calling this method should raise an ``exceptions.NotFittedError``.

    Some estimators implement a custom, estimator-specific score function, often the likelihood of the data under the model.

score_samples
    A method that returns a score for each given sample. The exact definition of "score" varies from one class to another. In the case of density estimation, it can be the log density model on the data, and in the case of outlier detection, it can be the opposite of the outlier factor of the data.

    If the estimator was not already fitted, calling this method should raise an ``exceptions.NotFittedError``.

set_params
    Available in any estimator, takes keyword arguments corresponding to keys in ``get_params``. Each is provided a new value to assign such that calling ``get_params`` after ``set_params`` will reflect the changed parameters. Most estimators use the implementation in ``base.BaseEstimator``, which handles nested parameters and otherwise sets the parameter as an attribute on the estimator. The method is overridden in ``pipeline.Pipeline`` and related estimators.

split
    On a CV splitter (not an estimator), this method accepts parameters (``X``, ``y``, ``groups``), where all may be optional, and returns an iterator over ``(train_idx, test_idx)`` pairs. Each of {train, test}_idx is a 1d integer array, with values from 0 to ``X.shape[0] - 1`` of any length, such that no values appear in
both some   train idx   and its corresponding   test idx           transform           In a  term  transformer   transforms the input  usually only  term  X           into some transformed space  conventionally notated as  term  Xt            Output is an array or sparse matrix of length  term  n samples  and         with the number of columns fixed after  term  fitting            If the estimator was not already  term  fitted   calling this method         should raise a  class  exceptions NotFittedError        glossary parameters   Parameters             These common parameter names  specifically used in estimator construction  see concept  term  parameter    sometimes also appear as parameters of functions or non estimator constructors      glossary          class weight           Used to specify sample weights when fitting classifiers as a function         of the  term  target  class   Where  term  sample weight  is also         supported and given  it is multiplied by the   class weight           contribution  Similarly  where   class weight   is used in a          term  multioutput   including  term  multilabel   tasks  the weights         are multiplied across outputs  i e  columns of   y              By default  all samples have equal weight such that classes are         effectively weighted by their prevalence in the training data          This could be achieved explicitly with   class weight  label1  1          label2  1         for all class labels           More generally    class weight   is specified as a dict mapping class         labels to weights     class label  weight      such that each sample         of the named class is given that weight             class weight  balanced    can be used to give all classes         equal weight by giving each sample a weight inversely related         to its class s prevalence in the training data            n samples    n classes   np bincount y      Class weights will be         used differently depending on 
the algorithm  for linear models  such         as linear SVM or logistic regression   the class weights will alter the         loss function by weighting the loss of each sample by its class weight          For tree based algorithms  the class weights will be used for         reweighting the splitting criterion            Note   however that this rebalancing does not take the weight of         samples in each class into account           For multioutput classification  a list of dicts is used to specify         weights for each output  For example  for four class multilabel         classification weights should be     0  1  1  1    0  1  1  5    0  1          1  1    0  1  1  1     instead of     1 1    2 5    3 1    4 1               The   class weight   parameter is validated and interpreted with          func  utils class weight compute class weight          cv           Determines a cross validation splitting strategy  as used in         cross validation based routines    cv   is also available in estimators         such as  class  multioutput ClassifierChain  or          class  calibration CalibratedClassifierCV  which use the predictions         of one estimator as training data for another  to not overfit the         training supervision           Possible inputs for   cv   are usually             An integer  specifying the number of folds in K fold cross           validation  K fold will be stratified over classes if the estimator           is a classifier  determined by  func  base is classifier   and the            term  targets  may represent a binary or multiclass  but not           multioutput  classification problem  determined by            func  utils multiclass type of target              A  term  cross validation splitter  instance  Refer to the            ref  User Guide  cross validation   for splitters available           within Scikit learn            An iterable yielding train test splits           With some exceptions  especially where not 
using cross validation at         all is an option   the default is 5 fold             cv   values are validated and interpreted with          func  model selection check cv          kernel           Specifies the kernel function to be used by Kernel Method algorithms          For example  the estimators  class  svm SVC  and          class  gaussian process GaussianProcessClassifier  both have a           kernel   parameter that takes the name of the kernel to use as string         or a callable kernel function used to compute the kernel matrix  For         more reference  see the  ref  kernel approximation  and the          ref  gaussian process  user guides         max iter           For estimators involving iterative optimization  this determines the         maximum number of iterations to be performed in  term  fit    If           max iter   iterations are run without convergence  a          class  exceptions ConvergenceWarning  should be raised   Note that the         interpretation of  a single iteration  is inconsistent across         estimators  some  but not all  use it to mean a single epoch  i e  a         pass over every sample in the data            FIXME perhaps we should have some common tests about the relationship         between ConvergenceWarning and max iter         memory           Some estimators make use of  class  joblib Memory  to         store partial solutions during fitting  Thus when   fit   is called         again  those partial solutions have been memoized and can be reused           A   memory   parameter can be specified as a string with a path to a         directory  or a  class  joblib Memory  instance  or an object with a         similar interface  i e  a   cache   method  can be used             memory   values are validated and interpreted with          func  utils validation check memory          metric           As a parameter  this is the scheme for determining the distance between         two data points   See  func  
metrics pairwise distances    In practice          for some algorithms  an improper distance metric  one that does not         obey the triangle inequality  such as Cosine Distance  may be used           XXX  hierarchical clustering uses   affinity   with this meaning           We also use  metric  to refer to  term  evaluation metrics   but avoid         using this sense as a parameter name         n components           The number of features which a  term  transformer  should transform the         input into  See  term  components   for the special case of affine         projection         n iter no change           Number of iterations with no improvement to wait before stopping the         iterative procedure  This is also known as a  patience  parameter  It         is typically used with  term  early stopping  to avoid stopping too         early         n jobs           This parameter is used to specify how many concurrent processes or         threads should be used for routines that are parallelized with          term  joblib              n jobs   is an integer  specifying the maximum number of concurrently         running workers  If 1 is given  no joblib parallelism is used at all          which is useful for debugging  If set to  1  all CPUs are used  For           n jobs   below  1   n cpus   1   n jobs  are used  For example with           n jobs  2    all CPUs but one are used             n jobs   is   None   by default  which means  unset   it will         generally be interpreted as   n jobs 1    unless the current          class  joblib Parallel  backend context specifies otherwise           Note that even if   n jobs 1    low level parallelism  via Numpy and OpenMP          might be used in some configuration           For more details on the use of   joblib   and its interactions with         scikit learn  please refer to our  ref  parallelism notes          parallelism           pos label           Value with which positive labels must be encoded 
in binary         classification problems in which the positive class is not assumed          This value is typically required to compute asymmetric evaluation         metrics such as precision and recall         random state           Whenever randomization is part of a Scikit learn algorithm  a           random state   parameter may be provided to control the random number         generator used   Note that the mere presence of   random state   doesn t         mean that randomization is always used  as it may be dependent on         another parameter  e g    shuffle    being set           The passed value will have an effect on the reproducibility of the         results returned by the function   term  fit    term  split   or any         other function like  func   sklearn cluster k means     random state  s         value may be           None  default              Use the global random state instance from  mod  numpy random               Calling the function multiple times will reuse             the same instance  and will produce different results           An integer             Use a new random number generator seeded by the given integer              Using an int will produce the same results across different calls              However  it may be             worthwhile checking that your results are stable across a             number of different distinct random seeds  Popular integer             random seeds are 0 and  42              https   en wikipedia org wiki Answer to the Ultimate Question of Life 2C the Universe 2C and Everything                 Integer values must be in the range   0  2  32   1             A  class  numpy random RandomState  instance             Use the provided random state  only affecting other users             of that same random state instance  Calling the function             multiple times will reuse the same instance  and             will produce different results            func  utils check random state  is used internally 
to validate the         input   random state   and return a  class   numpy random RandomState          instance           For more details on how to control the randomness of scikit learn         objects and avoid common pitfalls  you may refer to  ref  randomness          scoring           Specifies the score function to be maximized  usually by  ref  cross         validation  cross validation     or    in some cases    multiple score         functions to be reported  The score function can be a string accepted         by  func  metrics get scorer  or a callable  term  scorer   not to be         confused with an  term  evaluation metric   as the latter have a more         diverse API     scoring   may also be set to None  in which case the         estimator s  term  score  method is used   See  ref  scoring parameter          in the User Guide           Where multiple metrics can be evaluated    scoring   may be given         either as a list of unique strings  a dictionary with names as keys and         callables as values or a callable that returns a dictionary  Note that         this does  not  specify which score function is to be maximized  and         another parameter such as   refit   maybe used for this purpose            The   scoring   parameter is validated and interpreted using          func  metrics check scoring          verbose           Logging is not handled very consistently in Scikit learn at present          but when it is provided as an option  the   verbose   parameter is         usually available to choose no logging  set to False   Any True value         should enable some logging  but larger integers  e g  above 10  may be         needed for full verbosity   Verbose logs are usually printed to         Standard Output          Estimators should not produce any output on Standard Output with the         default   verbose   setting         warm start            When fitting an estimator repeatedly on the same dataset  but for         
multiple parameter values  such as to find the value maximizing         performance as in  ref  grid search  grid search     it may be possible         to reuse aspects of the model learned from the previous parameter value          saving time   When   warm start   is true  the existing  term  fitted          model  term  attributes  are used to initialize the new model         in a subsequent call to  term  fit            Note that this is only applicable for some models and some         parameters  and even some orders of parameter values  In general  there         is an interaction between   warm start   and the parameter controlling         the number of iterations of the estimator           For estimators imported from  mod   sklearn ensemble             warm start   will interact with   n estimators   or   max iter            For these models  the number of iterations  reported via           len estimators     or   n iter     corresponds the total number of         estimators iterations learnt since the initialization of the model          Thus  if a model was already initialized with  N  estimators  and  fit          is called with   n estimators   or   max iter   set to  M   the model         will train  M   N  new estimators           Other models  usually using gradient based solvers  have a different         behavior  They all expose a   max iter   parameter  The reported           n iter    corresponds to the number of iteration done during the last         call to   fit   and will be at most   max iter    Thus  we do not         consider the state of the estimator since the initialization            term  partial fit  also retains the model between calls  but differs          with   warm start   the parameters change and the data is          more or less  constant across calls to   fit    with   partial fit            the mini batch of data changes and model parameters stay fixed           There are cases where you want to use   warm start   to fit on 
        different  but closely related data  For example  one may initially fit         to a subset of the data  then fine tune the parameter search on the         full dataset  For classification  all data in a sequence of           warm start   calls to   fit   must include samples from each class       glossary attributes   Attributes             See concept  term  attribute       glossary          classes            A list of class labels known to the  term  classifier   mapping each         label to a numerical index used in the model representation our output          For instance  the array output from  term  predict proba  has columns         aligned with   classes     For  term  multi output  classifiers            classes    should be a list of lists  with one class listing for         each output   For each output  the classes should be sorted          numerically  or lexicographically for strings              classes    and the mapping to indices is often managed with          class  preprocessing LabelEncoder          components            An affine transformation matrix of shape    n components  n features            used in many linear  term  transformers  where  term  n components  is         the number of output features and  term  n features  is the number of         input features           See also  term  components   which is a similar attribute for linear         predictors         coef            The weight coefficient matrix of a generalized linear model          term  predictor   of shape    n features     for binary classification         and single output regression     n classes  n features    for         multiclass classification and    n targets  n features    for         multi output regression  Note this does not include the intercept          or bias  term  which is stored in   intercept              When available    feature importances    is not usually provided as         well  but can be calculated as the  norm of each feature s 
entry in           coef              See also  term  components   which is a similar attribute for linear         transformers         embedding            An embedding of the training data in  ref  manifold learning          manifold   estimators  with shape    n samples  n components             identical to the output of  term  fit transform    See also          term  labels           n iter            The number of iterations actually performed when fitting an iterative         estimator that may stop upon convergence  See also  term  max iter          feature importances            A vector of shape    n features     available in some          term  predictors  to provide a relative measure of the importance of         each feature in the predictions of the model         labels            A vector containing a cluster label for each sample of the training         data in  term  clusterers   identical to the output of          term  fit predict    See also  term  embedding         glossary sample props   Data and sample properties                             See concept  term  sample property       glossary          groups           Used in cross validation routines to identify samples that are correlated          Each value is an identifier such that  in a supporting          term  CV splitter   samples from some   groups   value may not         appear in both a training set and its corresponding test set          See  ref  group cv          sample weight           A relative weight for each sample   Intuitively  if all weights are         integers  a weighted model or score should be equivalent to that         calculated when repeating the sample the number of times specified in         the weight   Weights may be specified as floats  so that sample weights         are usually equivalent up to a constant positive scaling factor           FIXME  Is this interpretation always the case in practice  We have no         common tests           Some estimators  such 
as decision trees  support negative weights          FIXME  This feature or its absence may not be tested or documented in         many estimators           This is not entirely the case where other parameters of the model         consider the number of samples in a region  as with   min samples   in          class  cluster DBSCAN    In this case  a count of samples becomes         to a sum of their weights           In classification  sample weights can also be specified as a function         of class with the  term  class weight  estimator  term  parameter          X           Denotes data that is observed at training and prediction time  used as         independent variables in learning   The notation is uppercase to denote         that it is ordinarily a matrix  see  term  rectangular            When a matrix  each sample may be represented by a  term  feature          vector  or a vector of  term  precomputed   dis similarity with each         training sample    X   may also not be a matrix  and may require a          term  feature extractor  or a  term  pairwise metric  to turn it into         one before learning a model         Xt           Shorthand for  transformed  term  X           y         Y           Denotes data that may be observed at training time as the dependent         variable in learning  but which is unavailable at prediction time  and         is usually the  term  target  of prediction   The notation may be         uppercase to denote that it is a matrix  representing          term  multi output  targets  for instance  but usually we use   y           and sometimes do so even when multiple outputs are assumed "}
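The ``get_params``/``set_params`` contract described in the glossary above can be sketched as follows; this is a minimal illustration (assuming scikit-learn is installed), using the standard ``<step>__<param>`` convention for nested parameters:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Flat parameters: set_params changes what get_params reports.
est = LogisticRegression(C=1.0)
est.set_params(C=0.5)
assert est.get_params()["C"] == 0.5

# Nested parameters use the <step>__<param> convention handled by
# BaseEstimator and Pipeline.
pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
pipe.set_params(clf__C=10.0)
assert pipe.get_params()["clf__C"] == 10.0
```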
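The ``split`` method of a CV splitter and the ``groups`` sample property described above can be demonstrated together with :class:`model_selection.GroupKFold`; the tiny arrays here are made up for illustration:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(12).reshape(6, 2)
y = np.array([0, 1, 0, 1, 0, 1])
groups = np.array(["a", "a", "b", "b", "c", "c"])

splitter = GroupKFold(n_splits=3)
for train_idx, test_idx in splitter.split(X, y, groups):
    # train/test indices are disjoint 1d integer arrays ...
    assert set(train_idx).isdisjoint(test_idx)
    # ... and no group value appears on both sides of a split
    assert set(groups[train_idx]).isdisjoint(groups[test_idx])
```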
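The ``class_weight='balanced'`` formula quoted in the glossary, ``n_samples / (n_classes * np.bincount(y))``, can be checked directly against :func:`utils.class_weight.compute_class_weight`; the labels below are made up for illustration:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0, 0, 0, 1])  # class 0 is three times as prevalent as class 1
classes = np.array([0, 1])

weights = compute_class_weight("balanced", classes=classes, y=y)
# "balanced" implements n_samples / (n_classes * np.bincount(y)),
# so the rarer class receives the larger weight.
expected = len(y) / (len(classes) * np.bincount(y))
assert np.allclose(weights, expected)
```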
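The most common ``cv`` input, an integer fold count, can be sketched with :func:`model_selection.cross_val_score` (``max_iter=1000`` here is just to ensure convergence on this dataset):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# An integer cv requests K-fold splitting (stratified over classes here,
# since the estimator is a classifier).
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
assert len(scores) == 5  # one score per fold
```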
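The three accepted ``random_state`` value kinds are normalized by :func:`utils.check_random_state`, as the glossary notes; a minimal sketch of the int and instance cases:

```python
import numpy as np
from sklearn.utils import check_random_state

# An int seeds a fresh RandomState: repeated calls reproduce the same stream.
assert check_random_state(42).randint(100) == check_random_state(42).randint(100)

# An existing RandomState instance is passed through unchanged, so successive
# draws continue the same stream (and therefore differ between calls).
rng = np.random.RandomState(0)
assert check_random_state(rng) is rng
```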
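The distinction the ``scoring`` entry draws between a :term:`scorer` and a plain evaluation metric can be sketched with :func:`metrics.get_scorer`; the dataset choice is incidental:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import get_scorer

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# A scorer is called as scorer(estimator, X, y), unlike a plain metric,
# which is called as metric(y_true, y_pred).
accuracy = get_scorer("accuracy")(clf, X, y)
assert 0.0 <= accuracy <= 1.0
```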
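The ensemble ``warm_start`` arithmetic described above (a model initialized with ``N`` estimators trains only ``M - N`` new ones) can be sketched with :class:`ensemble.GradientBoostingClassifier` on a synthetic dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(random_state=0)

clf = GradientBoostingClassifier(n_estimators=5, warm_start=True,
                                 random_state=0)
clf.fit(X, y)                    # trains 5 estimators
clf.set_params(n_estimators=8)
clf.fit(X, y)                    # trains only 8 - 5 = 3 new estimators
# len(estimators_) reports the total learnt since initialization
assert len(clf.estimators_) == 8
```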
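The ``sample_weight`` intuition stated in the glossary, that an integer weight is equivalent to repeating a sample, holds exactly for least-squares regression; a sketch with made-up data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 3.0])

# Giving the last sample a weight of 2 ...
weighted = LinearRegression().fit(X, y, sample_weight=[1, 1, 2])
# ... is equivalent to repeating it once in the training data.
repeated = LinearRegression().fit(np.vstack([X, X[2:]]), np.append(y, y[2]))
assert np.allclose(weighted.coef_, repeated.coef_)
assert np.allclose(weighted.intercept_, repeated.intercept_)
```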
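The ``classes_`` ordering rule (sorted, lexicographically for strings) and its alignment with :term:`predict_proba` columns can be sketched as follows; the labels are made up for illustration:

```python
from sklearn.linear_model import LogisticRegression

X = [[0.0], [1.0], [2.0], [3.0]]
y = ["spam", "ham", "spam", "ham"]

clf = LogisticRegression().fit(X, y)
# String labels are sorted lexicographically in classes_ ...
assert list(clf.classes_) == ["ham", "spam"]
# ... and predict_proba columns align with that ordering.
assert clf.predict_proba(X).shape[1] == len(clf.classes_)
```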
{"questions":"scikit-learn About us This project was started in 2007 as a Google Summer of Code project by about History","answers":".. _about:\n\n========\nAbout us\n========\n\nHistory\n=======\n\nThis project was started in 2007 as a Google Summer of Code project by\nDavid Cournapeau. Later that year, Matthieu Brucher started working on this project \nas part of his thesis.\n\nIn 2010 Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort and Vincent\nMichel of INRIA took leadership of the project and made the first public\nrelease, February the 1st 2010. Since then, several releases have appeared\nfollowing an approximately 3-month cycle, and a thriving international\ncommunity has been leading the development. As a result, INRIA holds the\ncopyright over the work done by people who were employed by INRIA at the\ntime of the contribution.\n\nGovernance\n==========\n\nThe decision making process and governance structure of scikit-learn, like roles and responsibilities, is laid out in the :ref:`governance document <governance>`.\n\n.. The \"author\" anchors below is there to ensure that old html links (in\n   the form of \"about.html#author\" still work)\n\n.. _authors:\n\nThe people behind scikit-learn\n==============================\n\nScikit-learn is a community project, developed by a large group of\npeople, all across the world. A few core contributor teams, listed below, have\ncentral roles, however a more complete list of contributors can be found `on\ngithub\n<https:\/\/github.com\/scikit-learn\/scikit-learn\/graphs\/contributors>`__.\n\nActive Core Contributors\n------------------------\n\nMaintainers Team\n................\n\nThe following people are currently maintainers, in charge of\nconsolidating scikit-learn's development and maintenance:\n\n.. include:: maintainers.rst\n\n.. 
note::\n\n  Please do not email the authors directly to ask for assistance or report issues.\n  Instead, please see `What's the best way to ask questions about scikit-learn\n  <https:\/\/scikit-learn.org\/stable\/faq.html#what-s-the-best-way-to-get-help-on-scikit-learn-usage>`_\n  in the FAQ.\n\n.. seealso::\n\n  How you can :ref:`contribute to the project <contributing>`.\n\nDocumentation Team\n..................\n\nThe following people help with documenting the project:\n\n.. include:: documentation_team.rst\n\nContributor Experience Team\n...........................\n\nThe following people are active contributors who also help with\n:ref:`triaging issues <bug_triaging>`, PRs, and general\nmaintenance:\n\n.. include:: contributor_experience_team.rst\n\nCommunication Team\n..................\n\nThe following people help with :ref:`communication around scikit-learn\n<communication_team>`.\n\n.. include:: communication_team.rst\n\nEmeritus Core Contributors\n--------------------------\n\nEmeritus Maintainers Team\n.........................\n\nThe following people have been active contributors in the past, but are no\nlonger active in the project:\n\n.. include:: maintainers_emeritus.rst\n\nEmeritus Communication Team\n...........................\n\nThe following people have been active in the communication team in the\npast, but no longer have communication responsibilities:\n\n.. include:: communication_team_emeritus.rst\n\nEmeritus Contributor Experience Team\n....................................\n\nThe following people have been active in the contributor experience team in the\npast:\n\n.. include:: contributor_experience_team_emeritus.rst\n\n.. _citing-scikit-learn:\n\nCiting scikit-learn\n===================\n\nIf you use scikit-learn in a scientific publication, we would appreciate\ncitations to the following paper:\n\n`Scikit-learn: Machine Learning in Python\n<https:\/\/jmlr.csail.mit.edu\/papers\/v12\/pedregosa11a.html>`_, Pedregosa\n*et al.*, JMLR 12, pp. 
2825-2830, 2011.\n\nBibtex entry::\n\n  @article{scikit-learn,\n    title={Scikit-learn: Machine Learning in {P}ython},\n    author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.\n            and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.\n            and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and\n            Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},\n    journal={Journal of Machine Learning Research},\n    volume={12},\n    pages={2825--2830},\n    year={2011}\n  }\n\nIf you want to cite scikit-learn for its API or design, you may also want to consider the\nfollowing paper:\n\n:arxiv:`API design for machine learning software: experiences from the scikit-learn\nproject <1309.0238>`, Buitinck *et al.*, 2013.\n\nBibtex entry::\n\n  @inproceedings{sklearn_api,\n    author    = {Lars Buitinck and Gilles Louppe and Mathieu Blondel and\n                  Fabian Pedregosa and Andreas Mueller and Olivier Grisel and\n                  Vlad Niculae and Peter Prettenhofer and Alexandre Gramfort\n                  and Jaques Grobler and Robert Layton and Jake VanderPlas and\n                  Arnaud Joly and Brian Holt and Ga{\\\"{e}}l Varoquaux},\n    title     = {{API} design for machine learning software: experiences from the scikit-learn\n                  project},\n    booktitle = {ECML PKDD Workshop: Languages for Data Mining and Machine Learning},\n    year      = {2013},\n    pages = {108--122},\n  }\n\nArtwork\n=======\n\nHigh quality PNG and SVG logos are available in the `doc\/logos\/\n<https:\/\/github.com\/scikit-learn\/scikit-learn\/tree\/main\/doc\/logos>`_\nsource directory.\n\n.. 
image:: images\/scikit-learn-logo-notext.png\n  :align: center\n\nFunding\n=======\n\nScikit-learn is a community driven project, however institutional and private\ngrants help to assure its sustainability.\n\nThe project would like to thank the following funders.\n\n...................................\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `:probabl. <https:\/\/probabl.ai>`_ employs Adrin Jalali, Arturo Amor,\n    Fran\u00e7ois Goupil, Guillaume Lemaitre, J\u00e9r\u00e9mie du Boisberranger, Lo\u00efc Est\u00e8ve,\n    Olivier Grisel, and Stefanie Senger.\n\n  .. div:: image-box\n\n    .. image:: images\/probabl.png\n      :target: https:\/\/probabl.ai\n\n..........\n\n.. |chanel| image:: images\/chanel.png\n  :target: https:\/\/www.chanel.com\n\n.. |axa| image:: images\/axa.png\n  :target: https:\/\/www.axa.fr\/\n\n.. |bnp| image:: images\/bnp.png\n  :target: https:\/\/www.bnpparibascardif.com\/\n\n.. |dataiku| image:: images\/dataiku.png\n  :target: https:\/\/www.dataiku.com\/\n\n.. |nvidia| image:: images\/nvidia.png\n  :target: https:\/\/www.nvidia.com\n\n.. |inria| image:: images\/inria-logo.jpg\n  :target: https:\/\/www.inria.fr\n\n.. raw:: html\n\n  <style>\n    table.image-subtable tr {\n      border-color: transparent;\n    }\n\n    table.image-subtable td {\n      width: 50%;\n      vertical-align: middle;\n      text-align: center;\n    }\n\n    table.image-subtable td img {\n      max-height: 40px !important;\n      max-width: 90% !important;\n    }\n  <\/style>\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    The `Members <https:\/\/scikit-learn.fondation-inria.fr\/en\/home\/#sponsors>`_ of\n    the `Scikit-learn Consortium at Inria Foundation\n    <https:\/\/scikit-learn.fondation-inria.fr\/en\/home\/>`_ help at maintaining and\n    improving the project through their financial support.\n\n  .. div:: image-box\n\n    .. 
table::\n      :class: image-subtable\n\n      +----------+-----------+\n      |       |chanel|       |\n      +----------+-----------+\n      |  |axa|   |    |bnp|  |\n      +----------+-----------+\n      |       |nvidia|       |\n      +----------+-----------+\n      |       |dataiku|      |\n      +----------+-----------+\n      |        |inria|       |\n      +----------+-----------+\n\n..........\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `NVidia <https:\/\/nvidia.com>`_ funds Tim Head since 2022\n    and is part of the scikit-learn consortium at Inria.\n\n  .. div:: image-box\n\n    .. image:: images\/nvidia.png\n      :target: https:\/\/nvidia.com\n\n..........\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `Microsoft <https:\/\/microsoft.com\/>`_ funds Andreas M\u00fcller since 2020.\n\n  .. div:: image-box\n\n    .. image:: images\/microsoft.png\n      :target: https:\/\/microsoft.com\n\n...........\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `Quansight Labs <https:\/\/labs.quansight.org>`_ funds Lucy Liu since 2022.\n\n  .. div:: image-box\n\n    .. image:: images\/quansight-labs.png\n      :target: https:\/\/labs.quansight.org\n\n...........\n\n.. |czi| image:: images\/czi.png\n  :target: https:\/\/chanzuckerberg.com\n\n.. |wellcome| image:: images\/wellcome-trust.png\n  :target: https:\/\/wellcome.org\/\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `The Chan-Zuckerberg Initiative <https:\/\/chanzuckerberg.com\/>`_ and\n    `Wellcome Trust <https:\/\/wellcome.org\/>`_ fund scikit-learn through the\n    `Essential Open Source Software for Science (EOSS) <https:\/\/chanzuckerberg.com\/eoss\/>`_\n    cycle 6.\n\n    It supports Lucy Liu and diversity & inclusion initiatives that will\n    be announced in the future.\n\n  .. div:: image-box\n\n    .. 
table::\n      :class: image-subtable\n\n      +----------+----------------+\n      |  |czi|   |    |wellcome|  |\n      +----------+----------------+\n\n...........\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `Tidelift <https:\/\/tidelift.com\/>`_ supports the project via their service\n    agreement.\n\n  .. div:: image-box\n\n    .. image:: images\/Tidelift-logo-on-light.svg\n      :target: https:\/\/tidelift.com\/\n\n...........\n\n\nPast Sponsors\n-------------\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `Quansight Labs <https:\/\/labs.quansight.org>`_ funded Meekail Zain in 2022 and 2023,\n    and funded Thomas J. Fan from 2021 to 2023.\n\n  .. div:: image-box\n\n    .. image:: images\/quansight-labs.png\n      :target: https:\/\/labs.quansight.org\n\n...........\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `Columbia University <https:\/\/columbia.edu\/>`_ funded Andreas M\u00fcller\n    (2016-2020).\n\n  .. div:: image-box\n\n    .. image:: images\/columbia.png\n      :target: https:\/\/columbia.edu\n\n........\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `The University of Sydney <https:\/\/sydney.edu.au\/>`_ funded Joel Nothman\n    (2017-2021).\n\n  .. div:: image-box\n\n    .. image:: images\/sydney-primary.jpeg\n      :target: https:\/\/sydney.edu.au\/\n\n...........\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    Andreas M\u00fcller received a grant to improve scikit-learn from the\n    `Alfred P. Sloan Foundation <https:\/\/sloan.org>`_ .\n    This grant supported the position of Nicolas Hug and Thomas J. Fan.\n\n  .. div:: image-box\n\n    .. image:: images\/sloan_banner.png\n      :target: https:\/\/sloan.org\/\n\n.............\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `INRIA <https:\/\/www.inria.fr>`_ actively supports this project. 
It has\n    provided funding for Fabian Pedregosa (2010-2012), Jaques Grobler\n    (2012-2013) and Olivier Grisel (2013-2017) to work on this project\n    full-time. It also hosts coding sprints and other events.\n\n  .. div:: image-box\n\n    .. image:: images\/inria-logo.jpg\n      :target: https:\/\/www.inria.fr\n\n.....................\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `Paris-Saclay Center for Data Science <http:\/\/www.datascience-paris-saclay.fr\/>`_\n    funded one year for a developer to work on the project full-time (2014-2015), 50%\n    of the time of Guillaume Lemaitre (2016-2017) and 50% of the time of Joris van den\n    Bossche (2017-2018).\n\n  .. div:: image-box\n\n    .. image:: images\/cds-logo.png\n      :target: http:\/\/www.datascience-paris-saclay.fr\/\n\n..........................\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `NYU Moore-Sloan Data Science Environment <https:\/\/cds.nyu.edu\/mooresloan\/>`_\n    funded Andreas Mueller (2014-2016) to work on this project. The Moore-Sloan\n    Data Science Environment also funds several students to work on the project\n    part-time.\n\n  .. div:: image-box\n\n    .. image:: images\/nyu_short_color.png\n      :target: https:\/\/cds.nyu.edu\/mooresloan\/\n\n........................\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `T\u00e9l\u00e9com Paristech <https:\/\/www.telecom-paristech.fr\/>`_ funded Manoj Kumar\n    (2014), Tom Dupr\u00e9 la Tour (2015), Raghav RV (2015-2017), Thierry Guillemot\n    (2016-2017) and Albert Thomas (2017) to work on scikit-learn.\n\n  .. div:: image-box\n\n    .. image:: images\/telecom.png\n      :target: https:\/\/www.telecom-paristech.fr\/\n\n.....................\n\n.. div:: sk-text-image-grid-small\n\n  .. 
div:: text-box\n\n    `The Labex DigiCosme <https:\/\/digicosme.lri.fr>`_ funded Nicolas Goix\n    (2015-2016), Tom Dupr\u00e9 la Tour (2015-2016 and 2017-2018), Mathurin Massias\n    (2018-2019) to work part time on scikit-learn during their PhDs. It also\n    funded a scikit-learn coding sprint in 2015.\n\n  .. div:: image-box\n\n    .. image:: images\/digicosme.png\n      :target: https:\/\/digicosme.lri.fr\n\n.....................\n\n.. div:: sk-text-image-grid-small\n\n  .. div:: text-box\n\n    `The Chan-Zuckerberg Initiative <https:\/\/chanzuckerberg.com\/>`_ funded Nicolas\n    Hug to work full-time on scikit-learn in 2020.\n\n  .. div:: image-box\n\n    .. image:: images\/czi.png\n      :target: https:\/\/chanzuckerberg.com\n\n......................\n\nThe following students were sponsored by `Google\n<https:\/\/opensource.google\/>`_ to work on scikit-learn through\nthe `Google Summer of Code <https:\/\/en.wikipedia.org\/wiki\/Google_Summer_of_Code>`_\nprogram.\n\n- 2007 - David Cournapeau\n- 2011 - `Vlad Niculae`_\n- 2012 - `Vlad Niculae`_, Immanuel Bayer\n- 2013 - Kemal Eren, Nicolas Tr\u00e9segnie\n- 2014 - Hamzeh Alsalhi, Issam Laradji, Maheshakya Wijewardena, Manoj Kumar\n- 2015 - `Raghav RV <https:\/\/github.com\/raghavrv>`_, Wei Xue\n- 2016 - `Nelson Liu <http:\/\/nelsonliu.me>`_, `YenChen Lin <https:\/\/yenchenlin.me\/>`_\n\n.. _Vlad Niculae: https:\/\/vene.ro\/\n\n...................\n\nThe `NeuroDebian <http:\/\/neuro.debian.net>`_ project providing `Debian\n<https:\/\/www.debian.org\/>`_ packaging and contributions is supported by\n`Dr. James V. Haxby <http:\/\/haxbylab.dartmouth.edu\/>`_ (`Dartmouth\nCollege <https:\/\/pbs.dartmouth.edu\/>`_).\n\n...................\n\nThe following organizations funded the scikit-learn consortium at Inria in\nthe past:\n\n.. |msn| image:: images\/microsoft.png\n  :target: https:\/\/www.microsoft.com\/\n\n.. 
|bcg| image:: images\/bcg.png\n  :target: https:\/\/www.bcg.com\/beyond-consulting\/bcg-gamma\/default.aspx\n\n.. |fujitsu| image:: images\/fujitsu.png\n  :target: https:\/\/www.fujitsu.com\/global\/\n\n.. |aphp| image:: images\/logo_APHP_text.png\n  :target: https:\/\/aphp.fr\/\n\n.. |hf| image:: images\/huggingface_logo-noborder.png\n  :target: https:\/\/huggingface.co\n\n.. raw:: html\n\n  <style>\n    div.image-subgrid img {\n      max-height: 50px;\n      max-width: 90%;\n    }\n  <\/style>\n\n.. grid:: 2 2 4 4\n  :class-row: image-subgrid\n  :gutter: 1\n\n  .. grid-item::\n    :class: sd-text-center\n    :child-align: center\n\n    |msn|\n\n  .. grid-item::\n    :class: sd-text-center\n    :child-align: center\n\n    |bcg|\n\n  .. grid-item::\n    :class: sd-text-center\n    :child-align: center\n\n    |fujitsu|\n\n  .. grid-item::\n    :class: sd-text-center\n    :child-align: center\n\n    |aphp|\n\n  .. grid-item::\n    :class: sd-text-center\n    :child-align: center\n\n    |hf|\n\nCoding Sprints\n==============\n\nThe scikit-learn project has a long history of `open source coding sprints\n<https:\/\/blog.scikit-learn.org\/events\/sprints-value\/>`_ with over 50 sprint\nevents from 2010 to present day. There are scores of sponsors who contributed\nto costs which include venue, food, travel, developer time and more. See\n`scikit-learn sprints <https:\/\/blog.scikit-learn.org\/sprints\/>`_ for a full\nlist of events.\n\nDonating to the project\n=======================\n\nIf you are interested in donating to the project or to one of our code-sprints,\nplease donate via the `NumFOCUS Donations Page\n<https:\/\/numfocus.org\/donate-to-scikit-learn>`_.\n\n.. 
raw:: html\n\n  <p class=\"text-center\">\n    <a class=\"btn sk-btn-orange mb-1\" href=\"https:\/\/numfocus.org\/donate-to-scikit-learn\">\n      Help us, <strong>donate!<\/strong>\n    <\/a>\n  <\/p>\n\nAll donations will be handled by `NumFOCUS <https:\/\/numfocus.org\/>`_, a non-profit\norganization which is managed by a board of `Scipy community members\n<https:\/\/numfocus.org\/board.html>`_. NumFOCUS's mission is to foster scientific\ncomputing software, in particular in Python. As a fiscal home of scikit-learn, it\nensures that money is available when needed to keep the project funded and available\nwhile in compliance with tax regulations.\n\nThe received donations for the scikit-learn project will mostly go towards covering\ntravel expenses for code sprints, as well as towards the organization budget of the\nproject [#f1]_.\n\n.. rubric:: Notes\n\n.. [#f1] Regarding the organization budget, in particular, we might use some of\n  the donated funds to pay for other project expenses such as DNS,\n  hosting or continuous integration services.\n\n\nInfrastructure support\n======================\n\nWe would also like to thank `Microsoft Azure <https:\/\/azure.microsoft.com\/en-us\/>`_,\n`Cirrus CI <https:\/\/cirrus-ci.org>`_, `CircleCI <https:\/\/circleci.com\/>`_ for free CPU\ntime on their Continuous Integration servers, and `Anaconda Inc. 
<https:\/\/www.anaconda.com>`_\nfor the storage they provide for our staging and nightly builds.","site":"scikit-learn"}
{"questions":"scikit-learn sklearn contributors rst releasenotes15 Version 1 5","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n.. _release_notes_1_5:\n\n===========\nVersion 1.5\n===========\n\nFor a short description of the main highlights of the release, please refer to\n:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_5_0.py`.\n\n.. include:: changelog_legend.inc\n\n.. _changes_1_5_2:\n\nVersion 1.5.2\n=============\n\n**September 2024**\n\nChanges impacting many modules\n------------------------------\n\n- |Fix| Fixed a performance regression in a few Cython modules in\n  `sklearn._loss`, `sklearn.manifold`, `sklearn.metrics` and `sklearn.utils`,\n  which were built without OpenMP support.\n  :pr:`29694` by :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\nChangelog\n---------\n\n:mod:`sklearn.calibration`\n..........................\n\n- |Fix| Raise an error when :class:`~sklearn.model_selection.LeaveOneOut` is used in\n  `cv`, matching what would happen if `KFold(n_splits=n_samples)` was used.\n  :pr:`29545` by :user:`Lucy Liu <lucyleeow>`.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| Fixed :class:`compose.TransformedTargetRegressor` not to raise `UserWarning` if\n  transform output is set to `pandas` or `polars`, since it isn't a transformer.\n  :pr:`29401` by :user:`Stefanie Senger <StefanieSenger>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Increase the rank deficiency threshold in the whitening step of\n  :class:`decomposition.FastICA` with `whiten_solver=\"eigh\"` to improve the\n  platform-agnosticity of the estimator.\n  :pr:`29612` by :user:`Olivier Grisel <ogrisel>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fix a regression in :func:`metrics.accuracy_score` and in\n  :func:`metrics.zero_one_loss` causing an error for Array API dispatch with multilabel\n  inputs.\n  :pr:`29336` by :user:`Edoardo Abati 
<EdAbati>`.\n\n:mod:`sklearn.svm`\n..................\n\n- |Fix| Fixed a regression in :class:`svm.SVC` and :class:`svm.SVR` such that we accept\n  `C=float(\"inf\")`.\n  :pr:`29780` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n.. _changes_1_5_1:\n\nVersion 1.5.1\n=============\n\n**July 2024**\n\nChanges impacting many modules\n------------------------------\n\n- |Fix| Fixed a regression in the validation of the input data of all estimators where\n  an unexpected error was raised when passing a DataFrame backed by a read-only buffer.\n  :pr:`29018` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Fixed a regression causing a dead-lock at import time in some settings.\n  :pr:`29235` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\nChangelog\n---------\n\n:mod:`sklearn.compose`\n......................\n\n- |Efficiency| Fix a performance regression in :class:`compose.ColumnTransformer`\n  where the full input data was copied for each transformer when `n_jobs > 1`.\n  :pr:`29330` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fix a regression in :func:`metrics.r2_score`. 
Passing torch CPU tensors\n  with array API dispatch disabled would complain about non-CPU devices\n  instead of implicitly converting those inputs to regular NumPy arrays.\n  :pr:`29119` by :user:`Olivier Grisel`.\n\n- |Fix| Fix a regression in\n  :func:`metrics.zero_one_loss` causing an error for Array API dispatch with multilabel\n  inputs.\n  :pr:`29269` by :user:`Yaroslav Korobko <Tialo>`.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Fix| Fix a regression in :class:`model_selection.GridSearchCV` for parameter\n  grids that have heterogeneous parameter values.\n  :pr:`29078` by :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\n- |Fix| Fix a regression in :class:`model_selection.GridSearchCV` for parameter\n  grids that have estimators as parameter values.\n  :pr:`29179` by :user:`Marco Gorelli <MarcoGorelli>`.\n\n- |Fix| Fix a regression in :class:`model_selection.GridSearchCV` for parameter\n  grids that have arrays of different sizes as parameter values.\n  :pr:`29314` by :user:`Marco Gorelli <MarcoGorelli>`.\n\n:mod:`sklearn.tree`\n...................\n\n- |Fix| Fix an issue in :func:`tree.export_graphviz` and :func:`tree.plot_tree`\n  that could potentially result in an exception or wrong results on 32-bit OSes.\n  :pr:`29327` by :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |API| :func:`utils.validation.check_array` has a new parameter, `force_writeable`, to\n  control the writeability of the output array. If set to `True`, the output array will\n  be guaranteed to be writeable and a copy will be made if the input array is read-only.\n  If set to `False`, no guarantee is made about the writeability of the output array.\n  :pr:`29018` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n.. 
_changes_1_5:\n\nVersion 1.5.0\n=============\n\n**May 2024**\n\nSecurity\n--------\n\n- |Fix| :class:`feature_extraction.text.CountVectorizer` and\n  :class:`feature_extraction.text.TfidfVectorizer` no longer store discarded\n  tokens from the training set in their `stop_words_` attribute. This attribute\n  would hold tokens that were too frequent (above `max_df`) as well as tokens\n  that were too rare (below `min_df`). This fixes a potential security issue\n  (data leak) if the discarded rare tokens hold sensitive information from the\n  training set without the model developer's knowledge.\n\n  Note: users of those classes are encouraged to either retrain their pipelines\n  with the new scikit-learn version or to manually clear the `stop_words_`\n  attribute from previously trained instances of those transformers. This\n  attribute was designed only for model inspection purposes and has no impact\n  on the behavior of the transformers.\n  :pr:`28823` by :user:`Olivier Grisel <ogrisel>`.\n\nChanged models\n--------------\n\n- |Efficiency| The subsampling in :class:`preprocessing.QuantileTransformer` is now\n  more efficient for dense arrays but the fitted quantiles and the results of\n  `transform` may be slightly different from before (keeping the same statistical\n  properties).\n  :pr:`27344` by :user:`Xuefeng Xu <xuefeng-xu>`.\n\n- |Enhancement| :class:`decomposition.PCA`, :class:`decomposition.SparsePCA`\n  and :class:`decomposition.TruncatedSVD` now set the sign of the `components_`\n  attribute based on the component values instead of using the transformed data\n  as a reference. 
This change is needed to be able to offer consistent component\n  signs across all `PCA` solvers, including the new\n  `svd_solver=\"covariance_eigh\"` option introduced in this release.\n\nChanges impacting many modules\n------------------------------\n\n- |Fix| Raise `ValueError` with an informative error message when passing 1D\n  sparse arrays to methods that expect 2D sparse inputs.\n  :pr:`28988` by :user:`Olivier Grisel <ogrisel>`.\n\n- |API| The name of the input of the `inverse_transform` method of estimators has been\n  standardized to `X`. As a consequence, `Xt` is deprecated and will be removed in\n  version 1.7 in the following estimators: :class:`cluster.FeatureAgglomeration`,\n  :class:`decomposition.MiniBatchNMF`, :class:`decomposition.NMF`,\n  :class:`model_selection.GridSearchCV`, :class:`model_selection.RandomizedSearchCV`,\n  :class:`pipeline.Pipeline` and :class:`preprocessing.KBinsDiscretizer`.\n  :pr:`28756` by :user:`Will Dean <wd60622>`.\n\nSupport for Array API\n---------------------\n\nAdditional estimators and functions have been updated to include support for all\n`Array API <https:\/\/data-apis.org\/array-api\/latest\/>`_ compliant inputs.\n\nSee :ref:`array_api` for more details.\n\n**Functions:**\n\n- :func:`sklearn.metrics.r2_score` now supports Array API compliant inputs.\n  :pr:`27904` by :user:`Eric Lindgren <elindgren>`, :user:`Franck Charras <fcharras>`,\n  :user:`Olivier Grisel <ogrisel>` and :user:`Tim Head <betatim>`.\n\n**Classes:**\n\n- :class:`linear_model.Ridge` now supports the Array API for the `svd` solver.\n  See :ref:`array_api` for more details.\n  :pr:`27800` by :user:`Franck Charras <fcharras>`, :user:`Olivier Grisel <ogrisel>`\n  and :user:`Tim Head <betatim>`.\n\nSupport for building with Meson\n-------------------------------\n\nFrom scikit-learn 1.5 onwards, Meson is the main supported way to build\nscikit-learn, see :ref:`Building from source <install_bleeding_edge>` for more\ndetails.\n\nUnless we discover 
a major blocker, setuptools support will be dropped in\nscikit-learn 1.6. The 1.5.x releases will support building scikit-learn with\nsetuptools.\n\nMeson support for building scikit-learn was added in :pr:`28040` by\n:user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\nMetadata Routing\n----------------\n\nThe following models now support metadata routing in one or more of their\nmethods. Refer to the :ref:`Metadata Routing User Guide <metadata_routing>` for\nmore details.\n\n- |Feature| :class:`impute.IterativeImputer` now supports metadata routing in\n  its `fit` method. :pr:`28187` by :user:`Stefanie Senger <StefanieSenger>`.\n\n- |Feature| :class:`ensemble.BaggingClassifier` and :class:`ensemble.BaggingRegressor`\n  now support metadata routing. The fit methods now\n  accept ``**fit_params`` which are passed to the underlying estimators\n  via their `fit` methods.\n  :pr:`28432` by :user:`Adam Li <adam2392>` and\n  :user:`Benjamin Bossan <BenjaminBossan>`.\n\n- |Feature| :class:`linear_model.RidgeCV` and\n  :class:`linear_model.RidgeClassifierCV` now support metadata routing in\n  their `fit` method and route metadata to the underlying\n  :class:`model_selection.GridSearchCV` object or the underlying scorer.\n  :pr:`27560` by :user:`Omar Salman <OmarManzoor>`.\n\n- |Feature| :class:`GraphicalLassoCV` now supports metadata routing in its\n  `fit` method and routes metadata to the CV splitter.\n  :pr:`27566` by :user:`Omar Salman <OmarManzoor>`.\n\n- |Feature| :class:`linear_model.RANSACRegressor` now supports metadata routing\n  in its ``fit``, ``score`` and ``predict`` methods and routes metadata to its\n  underlying estimator's ``fit``, ``score`` and ``predict`` methods.\n  :pr:`28261` by :user:`Stefanie Senger <StefanieSenger>`.\n\n- |Feature| :class:`ensemble.VotingClassifier` and\n  :class:`ensemble.VotingRegressor` now support metadata routing and pass\n  ``**fit_params`` to the underlying estimators via their `fit` methods.\n  :pr:`27584` by :user:`Stefanie 
Senger <StefanieSenger>`.

- |Feature| :class:`pipeline.FeatureUnion` now supports metadata routing in its
  ``fit`` and ``fit_transform`` methods and routes metadata to the underlying
  transformers' ``fit`` and ``fit_transform`` methods.
  :pr:`28205` by :user:`Stefanie Senger <StefanieSenger>`.

- |Fix| Fix an issue when resolving default routing requests set via class
  attributes.
  :pr:`28435` by `Adrin Jalali`_.

- |Fix| Fix an issue when `set_{method}_request` methods are used as unbound
  methods, which can happen if one tries to decorate them.
  :pr:`28651` by `Adrin Jalali`_.

- |Fix| Prevent a `RecursionError` when estimators with the default `scoring`
  parameter (`None`) route metadata.
  :pr:`28712` by :user:`Stefanie Senger <StefanieSenger>`.

Changelog
---------

..
    Entries should be grouped by module (in alphabetic order) and prefixed with
    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
    |Fix| or |API| (see whats_new.rst for descriptions).
    Entries should be ordered by those labels (e.g.
|Fix| after |Efficiency|).
    Changes not specific to a module should be listed under *Multiple Modules*
    or *Miscellaneous*.
    Entries should end with:
    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
    where 123456 is the *pull request* number, not the issue number.

:mod:`sklearn.calibration`
..........................

- |Fix| Fixed a regression in :class:`calibration.CalibratedClassifierCV` where
  an error was wrongly raised with string targets.
  :pr:`28843` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.cluster`
......................

- |Fix| The :class:`cluster.MeanShift` class now properly converges for constant data.
  :pr:`28951` by :user:`Akihiro Kuno <akikuno>`.

- |Fix| Create a copy of a precomputed sparse matrix within the `fit` method of
  :class:`~cluster.OPTICS` to avoid in-place modification of the sparse matrix.
  :pr:`28491` by :user:`Thanh Lam Dang <lamdang2k>`.

- |Fix| :class:`cluster.HDBSCAN` now supports all metrics supported by
  :func:`sklearn.metrics.pairwise_distances` when `algorithm="brute"` or `"auto"`.
  :pr:`28664` by :user:`Manideep Yenugula <myenugula>`.

:mod:`sklearn.compose`
......................

- |Feature| A fitted :class:`compose.ColumnTransformer` now implements `__getitem__`,
  which returns the fitted transformers by name.
:pr:`27990` by `Thomas Fan`_.

- |Enhancement| :class:`compose.TransformedTargetRegressor` now raises an error in `fit`
  if only `inverse_func` is provided without `func` (which would default to identity)
  being explicitly set as well.
  :pr:`28483` by :user:`Stefanie Senger <StefanieSenger>`.

- |Enhancement| :class:`compose.ColumnTransformer` can now expose the "remainder"
  columns in the fitted `transformers_` attribute as column names or boolean
  masks, rather than column indices.
  :pr:`27657` by :user:`Jérôme Dockès <jeromedockes>`.

- |Fix| Fixed a bug in :class:`compose.ColumnTransformer` with `n_jobs > 1`, where the
  intermediate selected columns were passed to the transformers as read-only arrays.
  :pr:`28822` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.cross_decomposition`
..................................

- |Fix| The `coef_` fitted attribute of :class:`cross_decomposition.PLSRegression`
  now takes into account both the scale of `X` and `Y` when `scale=True`.
Note that
  the previous predicted values were not affected by this bug.
  :pr:`28612` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| Deprecates `Y` in favor of `y` in the `fit`, `transform` and
  `inverse_transform` methods of
  :class:`cross_decomposition.PLSRegression`,
  :class:`cross_decomposition.PLSCanonical`,
  :class:`cross_decomposition.CCA`,
  and :class:`cross_decomposition.PLSSVD`.
  `Y` will be removed in version 1.7.
  :pr:`28604` by :user:`David Leon <davidleon123>`.

:mod:`sklearn.datasets`
.......................

- |Enhancement| Adds optional arguments `n_retries` and `delay` to functions
  :func:`datasets.fetch_20newsgroups`,
  :func:`datasets.fetch_20newsgroups_vectorized`,
  :func:`datasets.fetch_california_housing`,
  :func:`datasets.fetch_covtype`,
  :func:`datasets.fetch_kddcup99`,
  :func:`datasets.fetch_lfw_pairs`,
  :func:`datasets.fetch_lfw_people`,
  :func:`datasets.fetch_olivetti_faces`,
  :func:`datasets.fetch_rcv1`,
  and :func:`datasets.fetch_species_distributions`.
  By default, the functions will retry up to 3 times in case of network failures.
  :pr:`28160` by :user:`Zhehao Liu <MaxwellLZH>` and
  :user:`Filip Karlo Došilović <fkdosilovic>`.

:mod:`sklearn.decomposition`
............................

- |Efficiency| :class:`decomposition.PCA` with `svd_solver="full"` now assigns
  a contiguous `components_` attribute instead of a non-contiguous slice of
  the singular vectors.
When `n_components << n_features`, this can save
  memory and, more importantly, speed up subsequent calls to the `transform`
  method by more than an order of magnitude by leveraging the cache locality of
  BLAS GEMM on contiguous arrays.
  :pr:`27491` by :user:`Olivier Grisel <ogrisel>`.

- |Enhancement| :class:`~decomposition.PCA` now automatically selects the ARPACK solver
  for sparse inputs when `svd_solver="auto"` instead of raising an error.
  :pr:`28498` by :user:`Thanh Lam Dang <lamdang2k>`.

- |Enhancement| :class:`decomposition.PCA` now supports a new solver option
  named `svd_solver="covariance_eigh"`, which offers an order of magnitude
  speed-up and reduced memory usage for datasets with a large number of data
  points and a small number of features (say, `n_samples >> 1000 >
  n_features`). The `svd_solver="auto"` option has been updated to use the new
  solver automatically for such datasets. This solver also accepts sparse input
  data.
  :pr:`27491` by :user:`Olivier Grisel <ogrisel>`.

- |Fix| :class:`decomposition.PCA` fitted with `svd_solver="arpack"`,
  `whiten=True` and a value for `n_components` that is larger than the rank of
  the training set no longer returns infinite values when transforming
  hold-out data.
  :pr:`27491` by :user:`Olivier Grisel <ogrisel>`.

:mod:`sklearn.dummy`
....................

- |Enhancement| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor` now
  have the `n_features_in_` and `feature_names_in_` attributes after `fit`.
  :pr:`27937` by :user:`Marco vd Boom <tvdboom>`.

:mod:`sklearn.ensemble`
.......................

- |Efficiency| Improves the runtime of `predict` of
  :class:`ensemble.HistGradientBoostingClassifier` by avoiding a call to `predict_proba`.
  :pr:`27844` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Efficiency| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` are now a tiny
bit faster by
  pre-sorting the data before finding the thresholds for binning.
  :pr:`28102` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Fix| Fixes a bug in :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` when `monotonic_cst` is specified
  for non-categorical features.
  :pr:`28925` by :user:`Xiao Yuan <yuanx749>`.

:mod:`sklearn.feature_extraction`
.................................

- |Efficiency| :class:`feature_extraction.text.TfidfTransformer` is now faster
  and more memory-efficient by using a NumPy vector instead of a sparse matrix
  for storing the inverse document frequency.
  :pr:`18843` by :user:`Paolo Montesel <thebabush>`.

- |Enhancement| :class:`feature_extraction.text.TfidfTransformer` now preserves
  the data type of the input matrix if it is `np.float64` or `np.float32`.
  :pr:`28136` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.feature_selection`
................................

- |Enhancement| :func:`feature_selection.mutual_info_regression` and
  :func:`feature_selection.mutual_info_classif` now support the `n_jobs` parameter.
  :pr:`28085` by :user:`Neto Menoci <netomenoci>` and
  :user:`Florin Andrei <FlorinAndrei>`.

- |Enhancement| The `cv_results_` attribute of :class:`feature_selection.RFECV` has
  a new key, `n_features`, containing an array with the number of features selected
  at each step.
  :pr:`28670` by :user:`Miguel Silva <miguelcsilva>`.

:mod:`sklearn.impute`
.....................

- |Enhancement| :class:`impute.SimpleImputer` now supports custom strategies
  by passing a function in place of a strategy name.
  :pr:`28053` by :user:`Mark Elliot <mark-thm>`.

:mod:`sklearn.inspection`
.........................

- |Fix| :meth:`inspection.DecisionBoundaryDisplay.from_estimator` no longer
  warns about missing feature names when provided a `polars.DataFrame`.
  :pr:`28718` by :user:`Patrick Wang
<patrickkwang>`.

:mod:`sklearn.linear_model`
...........................

- |Enhancement| The `"newton-cg"` solver in :class:`linear_model.LogisticRegression` and
  :class:`linear_model.LogisticRegressionCV` now emits information when `verbose` is
  set to positive values.
  :pr:`27526` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Fix| :class:`linear_model.ElasticNet`, :class:`linear_model.ElasticNetCV`,
  :class:`linear_model.Lasso` and :class:`linear_model.LassoCV` now explicitly don't
  accept large sparse data formats.
  :pr:`27576` by :user:`Stefanie Senger <StefanieSenger>`.

- |Fix| :class:`linear_model.RidgeCV` and :class:`linear_model.RidgeClassifierCV` now
  correctly pass `sample_weight` to the underlying scorer when `cv` is None.
  :pr:`27560` by :user:`Omar Salman <OmarManzoor>`.

- |Fix| The `n_nonzero_coefs_` attribute of :class:`linear_model.OrthogonalMatchingPursuit`
  will now always be `None` when `tol` is set, as `n_nonzero_coefs` is ignored in
  this case. :pr:`28557` by :user:`Lucy Liu <lucyleeow>`.

- |API| :class:`linear_model.RidgeCV` and :class:`linear_model.RidgeClassifierCV`
  now allow `alpha=0` when `cv != None`, which is consistent with
  :class:`linear_model.Ridge` and :class:`linear_model.RidgeClassifier`.
  :pr:`28425` by :user:`Lucy Liu <lucyleeow>`.

- |API| Passing `average=0` to disable averaging is deprecated in
  :class:`linear_model.PassiveAggressiveClassifier`,
  :class:`linear_model.PassiveAggressiveRegressor`,
  :class:`linear_model.SGDClassifier`, :class:`linear_model.SGDRegressor` and
  :class:`linear_model.SGDOneClassSVM`. Pass `average=False` instead.
  :pr:`28582` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| The `multi_class` parameter was deprecated in
  :class:`linear_model.LogisticRegression` and
  :class:`linear_model.LogisticRegressionCV`.
`multi_class` will be removed in 1.7;
  internally, for 3 or more classes, the multinomial scheme will always be used.
  If you still want to use the one-vs-rest scheme, you can use
  `OneVsRestClassifier(LogisticRegression(..))`.
  :pr:`28703` by :user:`Christian Lorentzen <lorentzenchr>`.

- |API| `store_cv_values` and `cv_values_` are deprecated in favor of
  `store_cv_results` and `cv_results_` in :class:`~linear_model.RidgeCV` and
  :class:`~linear_model.RidgeClassifierCV`.
  :pr:`28915` by :user:`Lucy Liu <lucyleeow>`.

:mod:`sklearn.manifold`
.......................

- |API| Deprecates `n_iter` in favor of `max_iter` in :class:`manifold.TSNE`.
  `n_iter` will be removed in version 1.7. This makes :class:`manifold.TSNE`
  consistent with the rest of the estimators. :pr:`28471` by
  :user:`Lucy Liu <lucyleeow>`.

:mod:`sklearn.metrics`
......................

- |Feature| :func:`metrics.pairwise_distances` now supports computing pairwise distances
  for non-numeric arrays as well. This is supported through custom metrics only.
  :pr:`27456` by :user:`Venkatachalam N <venkyyuvy>`, :user:`Kshitij Mathur <Kshitij68>`
  and :user:`Julian Libiseller-Egger <julibeg>`.

- |Feature| :func:`sklearn.metrics.check_scoring` now returns a multi-metric scorer
  when `scoring` is a `dict`, `set`, `tuple`, or `list`.
:pr:`28360` by `Thomas Fan`_.

- |Feature| :func:`metrics.d2_log_loss_score` has been added, which
  calculates the D^2 score for the log loss.
  :pr:`28351` by :user:`Omar Salman <OmarManzoor>`.

- |Efficiency| Improve the efficiency of the functions :func:`~metrics.brier_score_loss`,
  :func:`~calibration.calibration_curve`, :func:`~metrics.det_curve`,
  :func:`~metrics.precision_recall_curve` and
  :func:`~metrics.roc_curve` when the `pos_label` argument is specified.
  Also improve the efficiency of the methods `from_estimator`
  and `from_predictions` in :class:`~metrics.RocCurveDisplay`,
  :class:`~metrics.PrecisionRecallDisplay`, :class:`~metrics.DetCurveDisplay` and
  :class:`~calibration.CalibrationDisplay`.
  :pr:`28051` by :user:`Pierre de Fréminville <pidefrem>`.

- |Fix| :func:`metrics.classification_report` now shows only accuracy and not
  micro-average when the input is a subset of labels.
  :pr:`28399` by :user:`Vineet Joshi <vjoshi253>`.

- |Fix| Fix an OpenBLAS 0.3.26 dead-lock on Windows in pairwise distances
  computation. This is likely to affect neighbor-based algorithms.
  :pr:`28692` by :user:`Loïc Estève <lesteve>`.

- |API| :func:`metrics.precision_recall_curve` deprecated the keyword argument
  `probas_pred` in favor of `y_score`. `probas_pred` will be removed in version 1.7.
  :pr:`28092` by :user:`Adam Li <adam2392>`.

- |API| :func:`metrics.brier_score_loss` deprecated the keyword argument `y_prob`
  in favor of `y_proba`.
`y_prob` will be removed in version 1.7.
  :pr:`28092` by :user:`Adam Li <adam2392>`.

- |API| For classifiers and classification metrics, labels encoded as bytes
  are deprecated and will raise an error in v1.7.
  :pr:`18555` by :user:`Kaushik Amar Das <cozek>`.

:mod:`sklearn.mixture`
......................

- |Fix| The `converged_` attribute of :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture` now reflects the convergence status of
  the best fit, whereas it was previously `True` if any of the fits converged.
  :pr:`26837` by :user:`Krsto Proroković <krstopro>`.

:mod:`sklearn.model_selection`
..............................

- |MajorFeature| :class:`model_selection.TunedThresholdClassifierCV` finds
  the decision threshold of a binary classifier that maximizes a
  classification metric through cross-validation.
  :class:`model_selection.FixedThresholdClassifier` is an alternative when one wants
  to use a fixed decision threshold without any tuning scheme.
  :pr:`26120` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Enhancement| :term:`CV splitters <CV splitter>` that ignore the group parameter now
  raise a warning when groups are passed in to :term:`split`. :pr:`28210` by
  `Thomas Fan`_.

- |Enhancement| The HTML diagram representation of
  :class:`~model_selection.GridSearchCV`,
  :class:`~model_selection.RandomizedSearchCV`,
  :class:`~model_selection.HalvingGridSearchCV`, and
  :class:`~model_selection.HalvingRandomSearchCV` will show the best estimator when
  `refit=True`. :pr:`28722` by :user:`Yao Xiao <Charlie-XIAO>` and `Thomas Fan`_.

- |Fix| The ``cv_results_`` attribute of :class:`model_selection.GridSearchCV` now
  returns masked arrays of the appropriate NumPy dtype, as opposed to always returning
  dtype ``object``.
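The dtype change can be observed directly (a minimal sketch; with scikit-learn >= 1.5 the `param_C` column below is a float-typed masked array rather than dtype `object`):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(random_state=0)
gs = GridSearchCV(LogisticRegression(), {"C": [0.1, 1.0]}, cv=2).fit(X, y)

# Each `param_*` column of cv_results_ is a masked array; its dtype now
# reflects the parameter values instead of always being `object`.
param_c = gs.cv_results_["param_C"]
print(param_c.dtype, list(param_c))
```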
:pr:`28352` by :user:`Marco Gorelli <MarcoGorelli>`.

- |Fix| :func:`model_selection.train_test_split` works with Array API inputs.
  Previously, indexing was not handled correctly, leading to exceptions when using strict
  implementations of the Array API like CuPy.
  :pr:`28407` by :user:`Tim Head <betatim>`.

:mod:`sklearn.multioutput`
..........................

- |Enhancement| A `chain_method` parameter was added to :class:`multioutput.ClassifierChain`.
  :pr:`27700` by :user:`Lucy Liu <lucyleeow>`.

:mod:`sklearn.neighbors`
........................

- |Fix| Fixes :class:`neighbors.NeighborhoodComponentsAnalysis` such that
  `get_feature_names_out` returns the correct number of feature names.
  :pr:`28306` by :user:`Brendan Lu <brendanlu>`.

:mod:`sklearn.pipeline`
.......................

- |Feature| :class:`pipeline.FeatureUnion` can now use the
  `verbose_feature_names_out` attribute. If `True`, `get_feature_names_out`
  will prefix all feature names with the name of the transformer
  that generated that feature.
If `False`, `get_feature_names_out` will not
  prefix any feature names and will error if feature names are not unique.
  :pr:`25991` by :user:`Jiawei Zhang <jiawei-zhang-a>`.

:mod:`sklearn.preprocessing`
............................

- |Enhancement| :class:`preprocessing.QuantileTransformer` and
  :func:`preprocessing.quantile_transform` now support disabling
  subsampling explicitly.
  :pr:`27636` by :user:`Ralph Urlus <rurlus>`.

:mod:`sklearn.tree`
...................

- |Enhancement| Plotting trees in matplotlib via :func:`tree.plot_tree` now
  shows a "True/False" label to indicate the direction the samples traverse
  given the split condition.
  :pr:`28552` by :user:`Adam Li <adam2392>`.

:mod:`sklearn.utils`
....................

- |Fix| :func:`~utils._safe_indexing` now works correctly for polars DataFrames when
  `axis=0` and supports indexing polars Series.
  :pr:`28521` by :user:`Yao Xiao <Charlie-XIAO>`.

- |API| :data:`utils.IS_PYPY` is deprecated and will be removed in version 1.7.
  :pr:`28768` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| :func:`utils.tosequence` is deprecated and will be removed in version 1.7.
  :pr:`28763` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| :class:`utils.parallel_backend` and :func:`utils.register_parallel_backend` are
  deprecated and will be removed in version 1.7. Use `joblib.parallel_backend` and
  `joblib.register_parallel_backend` instead.
  :pr:`28847` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| Raise an informative warning message in :func:`~utils.multiclass.type_of_target`
  when labels are represented as bytes.
For classifiers and classification metrics, labels encoded
  as bytes are deprecated and will raise an error in v1.7.
  :pr:`18555` by :user:`Kaushik Amar Das <cozek>`.

- |API| :func:`utils.estimator_checks.check_estimator_sparse_data` was split into two
  functions: :func:`utils.estimator_checks.check_estimator_sparse_matrix` and
  :func:`utils.estimator_checks.check_estimator_sparse_array`.
  :pr:`27576` by :user:`Stefanie Senger <StefanieSenger>`.

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 1.4, including:

101AlexMartin, Abdulaziz Aloqeely, Adam J. Stewart, Adam Li, Adarsh Wase,
Adeyemi Biola, Aditi Juneja, Adrin Jalali, Advik Sinha, Aisha, Akash
Srivastava, Akihiro Kuno, Alan Guedes, Alberto Torres, Alexis IMBERT, alexqiao,
Ana Paula Gomes, Anderson Nelson, Andrei Dzis, Arif Qodari, Arnaud Capitaine,
Arturo Amor, Aswathavicky, Audrey Flanders, awwwyan, baggiponte, Bharat
Raghunathan, bme-git, brdav, Brendan Lu, Brigitta Sipőcz, Bruno, Cailean
Carter, Cemlyn, Christian Lorentzen, Christian Veenhuis, Cindy Liang, Claudio
Salvatore Arcidiacono, Connor Boyle, Conrad Stevens, crispinlogan, David
Matthew Cherney, Davide Chicco, davidleon123, dependabot[bot], DerWeh, dinga92,
Dipan Banik, Drew Craeton, Duarte São José, DUONG, Eddie Bergman, Edoardo
Abati, Egehan Gunduz, Emad Izadifar, EmilyXinyi, Erich Schubert, Evelyn, Filip
Karlo Došilović, Franck Charras, Gael Varoquaux, Gönül Aycı, Guillaume
Lemaitre, Gyeongjae Choi, Harmanan Kohli, Hong Xiang Yue, Ian Faust, Ilya
Komarov, itsaphel, Ivan Wiryadi, Jack Bowyer, Javier Marin Tur, Jérémie du
Boisberranger, Jérôme Dockès, Jiawei Zhang, João Morais, Joe Cainey, Joel
Nothman, Johanna Bayer, John Cant, John Enblom, John Hopfensperger, jpcars,
jpienaar-tuks, Julian Chan, Julian Libiseller-Egger, Julien
Jerphanion,
KanchiMoe, Kaushik Amar Das, keyber, Koustav Ghosh, kraktus, Krsto Proroković,
Lars, ldwy4, LeoGrin, lihaitao, Linus Sommer, Loic Esteve, Lucy Liu, Lukas
Geiger, m-maggi, manasimj, Manuel Labbé, Manuel Morales, Marco Edward Gorelli,
Marco Wolsza, Maren Westermann, Marija Vlajic, Mark Elliot, Martin Helm,
Mateusz Sokół, mathurinm, Mavs, Michael Dawson, Michael Higgins, Michael Mayer,
miguelcsilva, Miki Watanabe, Mohammed Hamdy, myenugula, Nathan Goldbaum, Naziya
Mahimkar, nbrown-ScottLogic, Neto, Nithish Bolleddula, notPlancha, Olivier
Grisel, Omar Salman, ParsifalXu, Patrick Wang, Pierre de Fréminville, Piotr,
Priyank Shroff, Priyansh Gupta, Priyash Shah, Puneeth K, Rahil Parikh, raisadz,
Raj Pulapakura, Ralf Gommers, Ralph Urlus, Randolf Scholz, renaissance0ne,
Reshama Shaikh, Richard Barnes, Robert Pollak, Roberto Rosati, Rodrigo Romero,
rwelsch427, Saad Mahmood, Salim Dohri, Sandip Dutta, SarahRemus,
scikit-learn-bot, Shaharyar Choudhry, Shubham, sperret6, Stefanie Senger,
Steffen Schneider, Suha Siddiqui, Thanh Lam DANG, thebabush, Thomas, Thomas J.
Fan, Thomas Lazarus, Tialo, Tim Head, Tuhin Sharma, Tushar Parimi,
VarunChaduvula, Vineet Joshi, virchan, Waël Boukhobza, Weyb, Will Dean, Xavier
Beltran, Xiao Yuan, Xuefeng Xu, Yao Xiao, yareyaredesuyo, Ziad Amerr, Štěpán
Sršeň
and  sklearn utils     which were built without OpenMP support     pr  29694  by  user  Lo c Est vce  lesteve     Changelog             mod  sklearn calibration                                 Fix  Raise error when  class   sklearn model selection LeaveOneOut  used in    cv   matching what would happen if  KFold n splits n samples   was used     pr  29545  by  user  Lucy Liu  lucyleeow     mod  sklearn compose                             Fix  Fixed  class  compose TransformedTargetRegressor  not to raise  UserWarning  if   transform output is set to  pandas  or  polars   since it isn t a transformer     pr  29401  by  user  Stefanie Senger  StefanieSenger      mod  sklearn decomposition                                   Fix  Increase rank defficiency threshold in the whitening step of    class  decomposition FastICA  with  whiten solver  eigh   to improve the   platform agnosticity of the estimator     pr  29612  by  user  Olivier Grisel  ogrisel      mod  sklearn metrics                             Fix  Fix a regression in  func  metrics accuracy score  and in    func  metrics zero one loss  causing an error for Array API dispatch with multilabel   inputs     pr  29336  by  user  Edoardo Abati  EdAbati      mod  sklearn svm                         Fix  Fixed a regression in  class  svm SVC  and  class  svm SVR  such that we accept    C float  inf        pr  29780  by  user  Guillaume Lemaitre  glemaitre         changes 1 5 1   Version 1 5 1                  July 2024    Changes impacting many modules                                    Fix  Fixed a regression in the validation of the input data of all estimators where   an unexpected error was raised when passing a DataFrame backed by a read only buffer     pr  29018  by  user  J r mie du Boisberranger  jeremiedbb        Fix  Fixed a regression causing a dead lock at import time in some settings     pr  29235  by  user  J r mie du Boisberranger  jeremiedbb     Changelog             mod  sklearn compose              
               Efficiency  Fix a performance regression in  class  compose ColumnTransformer    where the full input data was copied for each transformer when  n jobs   1      pr  29330  by  user  J r mie du Boisberranger  jeremiedbb      mod  sklearn metrics                             Fix  Fix a regression in  func  metrics r2 score   Passing torch CPU tensors   with array API dispatched disabled would complain about non CPU devices   instead of implicitly converting those inputs as regular NumPy arrays     pr  29119  by  user  Olivier Grisel       Fix  Fix a regression in    func  metrics zero one loss  causing an error for Array API dispatch with multilabel   inputs     pr  29269  by  user  Yaroslav Korobko  Tialo      mod  sklearn model selection                                     Fix  Fix a regression in  class  model selection GridSearchCV  for parameter   grids that have heterogeneous parameter values     pr  29078  by  user  Lo c Est ve  lesteve        Fix  Fix a regression in  class  model selection GridSearchCV  for parameter   grids that have estimators as parameter values     pr  29179  by  user  Marco Gorelli MarcoGorelli        Fix  Fix a regression in  class  model selection GridSearchCV  for parameter   grids that have arrays of different sizes as parameter values     pr  29314  by  user  Marco Gorelli MarcoGorelli      mod  sklearn tree                          Fix  Fix an issue in  func  tree export graphviz  and  func  tree plot tree    that could potentially result in exception or wrong results on 32bit OSes     pr  29327  by  user  Lo c Est ve lesteve      mod  sklearn utils                           API   func  utils validation check array  has a new parameter   force writeable   to   control the writeability of the output array  If set to  True   the output array will   be guaranteed to be writeable and a copy will be made if the input array is read only    If set to  False   no guarantee is made about the writeability of the output array   
  pr  29018  by  user  J r mie du Boisberranger  jeremiedbb         changes 1 5   Version 1 5 0                  May 2024    Security              Fix   class  feature extraction text CountVectorizer  and    class  feature extraction text TfidfVectorizer  no longer store discarded   tokens from the training set in their  stop words   attribute  This attribute   would hold too frequent  above  max df   but also too rare tokens  below    min df    This fixes a potential security issue  data leak  if the discarded   rare tokens hold sensitive information from the training set without the   model developer s knowledge     Note  users of those classes are encouraged to either retrain their pipelines   with the new scikit learn version or to manually clear the  stop words     attribute from previously trained instances of those transformers  This   attribute was designed only for model inspection purposes and has no impact   on the behavior of the transformers     pr  28823  by  user  Olivier Grisel  ogrisel     Changed models                    Efficiency  The subsampling in  class  preprocessing QuantileTransformer  is now   more efficient for dense arrays but the fitted quantiles and the results of    transform  may be slightly different than before  keeping the same statistical   properties      pr  27344  by  user  Xuefeng Xu  xuefeng xu        Enhancement   class  decomposition PCA    class  decomposition SparsePCA    and  class  decomposition TruncatedSVD  now set the sign of the  components     attribute based on the component values instead of using the transformed data   as reference  This change is needed to be able to offer consistent component   signs across all  PCA  solvers  including the new    svd solver  covariance eigh   option introduced in this release   Changes impacting many modules                                    Fix  Raise  ValueError  with an informative error message when passing 1D   sparse arrays to methods that expect 2D sparse inputs     
pr  28988  by  user  Olivier Grisel  ogrisel        API  The name of the input of the  inverse transform  method of estimators has been   standardized to  X   As a consequence   Xt  is deprecated and will be removed in   version 1 7 in the following estimators   class  cluster FeatureAgglomeration      class  decomposition MiniBatchNMF    class  decomposition NMF      class  model selection GridSearchCV    class  model selection RandomizedSearchCV      class  pipeline Pipeline  and  class  preprocessing KBinsDiscretizer      pr  28756  by  user  Will Dean  wd60622     Support for Array API                        Additional estimators and functions have been updated to include support for all  Array API  https   data apis org array api latest     compliant inputs   See  ref  array api  for more details     Functions        func  sklearn metrics r2 score  now supports Array API compliant inputs     pr  27904  by  user  Eric Lindgren  elindgren     user  Franck Charras  fcharras       user  Olivier Grisel  ogrisel   and  user  Tim Head  betatim       Classes        class  linear model Ridge  now supports the Array API for the  svd  solver    See  ref  array api  for more details     pr  27800  by  user  Franck Charras  fcharras     user  Olivier Grisel  ogrisel     and  user  Tim Head  betatim     Support for building with Meson                                  From scikit learn 1 5 onwards  Meson is the main supported way to build scikit learn  see  ref  Building from source  install bleeding edge   for more details   Unless we discover a major blocker  setuptools support will be dropped in scikit learn 1 6  The 1 5 x releases will support building scikit learn with setuptools   Meson support for building scikit learn was added in  pr  28040  by  user  Lo c Est ve  lesteve    Metadata Routing                   The following models now support metadata routing in one or more or their methods  Refer to the  ref  Metadata Routing User Guide  metadata routing   for more 
details      Feature   class  impute IterativeImputer  now supports metadata routing in   its  fit  method   pr  28187  by  user  Stefanie Senger  StefanieSenger        Feature   class  ensemble BaggingClassifier  and  class  ensemble BaggingRegressor    now support metadata routing  The fit methods now   accept     fit params   which are passed to the underlying estimators   via their  fit  methods     pr  28432  by  user  Adam Li  adam2392   and    user  Benjamin Bossan  BenjaminBossan        Feature   class  linear model RidgeCV  and    class  linear model RidgeClassifierCV  now support metadata routing in   their  fit  method and route metadata to the underlying    class  model selection GridSearchCV  object or the underlying scorer     pr  27560  by  user  Omar Salman  OmarManzoor        Feature   class  GraphicalLassoCV  now supports metadata routing in it s    fit  method and routes metadata to the CV splitter     pr  27566  by  user  Omar Salman  OmarManzoor        Feature   class  linear model RANSACRegressor  now supports metadata routing   in its   fit      score   and   predict   methods and route metadata to its   underlying estimator s    fit      score   and   predict   methods     pr  28261  by  user  Stefanie Senger  StefanieSenger        Feature   class  ensemble VotingClassifier  and    class  ensemble VotingRegressor  now support metadata routing and pass       fit params   to the underlying estimators via their  fit  methods     pr  27584  by  user  Stefanie Senger  StefanieSenger        Feature   class  pipeline FeatureUnion  now supports metadata routing in its     fit   and   fit transform   methods and route metadata to the underlying   transformers    fit   and   fit transform       pr  28205  by  user  Stefanie Senger  StefanieSenger        Fix  Fix an issue when resolving default routing requests set via class   attributes     pr  28435  by  Adrin Jalali        Fix  Fix an issue when  set  method  request  methods are used as unbound   
  methods, which can happen if one tries to decorate them.
  :pr:`28651` by `Adrin Jalali`_.

- |FIX| Prevent a `RecursionError` when estimators with the default `scoring`
  param (`None`) route metadata.
  :pr:`28712` by :user:`Stefanie Senger <StefanieSenger>`.

Changelog
---------

..
    Entries should be grouped by module (in alphabetic order) and prefixed with
    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
    |Fix| or |API| (see whats_new.rst for descriptions).
    Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
    Changes not specific to a module should be listed under *Multiple Modules*
    or *Miscellaneous*.
    Entries should end with:
    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`,
    where 123456 is the *pull request* number, not the issue number.

:mod:`sklearn.calibration`
..........................

- |Fix| Fixed a regression in :class:`calibration.CalibratedClassifierCV` where
  an error was wrongly raised with string targets.
  :pr:`28843` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.cluster`
......................

- |Fix| The :class:`cluster.MeanShift` class now properly converges for constant
  data. :pr:`28951` by :user:`Akihiro Kuno <akikuno>`.

- |FIX| Create copy of precomputed sparse matrix within the `fit` method of
  :class:`cluster.OPTICS` to avoid in-place modification of the sparse matrix.
  :pr:`28491` by :user:`Thanh Lam Dang <lamdang2k>`.

- |Fix| :class:`cluster.HDBSCAN` now supports all metrics supported by
  :func:`sklearn.metrics.pairwise_distances` when `algorithm="brute"` or `"auto"`.
  :pr:`28664` by :user:`Manideep Yenugula <myenugula>`.

:mod:`sklearn.compose`
......................

- |Feature| A fitted :class:`compose.ColumnTransformer` now implements `__getitem__`
  which returns the fitted transformers by name. :pr:`27990` by `Thomas Fan`_.

- |Enhancement| :class:`compose.TransformedTargetRegressor` now raises an error in `fit` if
  only `inverse_func` is provided without `func` (that would default to identity)
  being explicitly set as well.
  :pr:`28483` by :user:`Stefanie Senger <StefanieSenger>`.

- |Enhancement| :class:`compose.ColumnTransformer` can now expose the `remainder`
  columns in the fitted `transformers_` attribute as column names or boolean
  masks, rather than column indices.
  :pr:`27657` by :user:`Jérôme Dockès <jeromedockes>`.

- |Fix| Fixed a bug in :class:`compose.ColumnTransformer` with `n_jobs > 1`, where the
  intermediate selected columns were passed to the transformers as read-only arrays.
  :pr:`28822` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.cross_decomposition`
..................................

- |Fix| The `coef_` fitted attribute of :class:`cross_decomposition.PLSRegression`
  now takes into account both the scale of `X` and `Y` when `scale=True`. Note that
  the previous predicted values were not affected by this bug.
  :pr:`28612` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| Deprecates `Y` in favor of `y` in the methods fit, transform and
  inverse_transform of :class:`cross_decomposition.PLSRegression`,
  :class:`cross_decomposition.PLSCanonical`, :class:`cross_decomposition.CCA`
  and :class:`cross_decomposition.PLSSVD`. `Y` will be removed in version 1.7.
  :pr:`28604` by :user:`David Leon <davidleon123>`.

:mod:`sklearn.datasets`
.......................

- |Enhancement| Adds optional arguments `n_retries` and `delay` to functions
  :func:`datasets.fetch_20newsgroups`,
  :func:`datasets.fetch_20newsgroups_vectorized`,
  :func:`datasets.fetch_california_housing`,
  :func:`datasets.fetch_covtype`,
  :func:`datasets.fetch_kddcup99`,
  :func:`datasets.fetch_lfw_pairs`,
  :func:`datasets.fetch_lfw_people`,
  :func:`datasets.fetch_olivetti_faces`,
  :func:`datasets.fetch_rcv1`
  and :func:`datasets.fetch_species_distributions`.
  By default, the functions will retry up to 3 times in case of network failures.
  :pr:`28160`
  by :user:`Zhehao Liu <MaxwellLZH>` and
  :user:`Filip Karlo Došilović <fkdosilovic>`.

:mod:`sklearn.decomposition`
............................

- |Efficiency| :class:`decomposition.PCA` with `svd_solver="full"` now assigns
  a contiguous `components_` attribute instead of a non-contiguous slice of
  the singular vectors. When `n_components < n_features`, this can save some
  memory and, more importantly, help speed up subsequent calls to the `transform`
  method by more than an order of magnitude by leveraging cache locality of
  BLAS GEMM on contiguous arrays.
  :pr:`27491` by :user:`Olivier Grisel <ogrisel>`.

- |Enhancement| :class:`decomposition.PCA` now automatically selects the ARPACK
  solver for sparse inputs when `svd_solver="auto"` instead of raising an error.
  :pr:`28498` by :user:`Thanh Lam Dang <lamdang2k>`.

- |Enhancement| :class:`decomposition.PCA` now supports a new solver option
  named `svd_solver="covariance_eigh"` which offers an order of magnitude
  speed-up and reduced memory usage for datasets with a large number of data
  points and a small number of features (say, `n_samples >> 1000 > n_features`).
  The `svd_solver="auto"` option has been updated to use the new
  solver automatically for such datasets. This solver also accepts sparse input
  data.
  :pr:`27491` by :user:`Olivier Grisel <ogrisel>`.

- |Fix| :class:`decomposition.PCA` fit with `svd_solver="arpack"`,
  `whiten=True` and a value for `n_components` that is larger than the rank of
  the training set, no longer returns infinite values when transforming
  hold-out data.
  :pr:`27491` by :user:`Olivier Grisel <ogrisel>`.

:mod:`sklearn.dummy`
....................

- |Enhancement| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor` now
  have the `n_features_in_` and `feature_names_in_` attributes after `fit`.
  :pr:`27937` by :user:`Marco vd Boom <tvdboom>`.

:mod:`sklearn.ensemble`
.......................

- |Efficiency| Improves runtime of `predict` of
  :class:`ensemble.HistGradientBoostingClassifier` by avoiding a call to
  `predict_proba`.
  :pr:`27844` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Efficiency| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` are now a tiny bit faster by
  pre-sorting the data before finding the thresholds for binning.
  :pr:`28102` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Fix| Fixes a bug in :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` when `monotonic_cst` is specified
  for non-categorical features.
  :pr:`28925` by :user:`Xiao Yuan <yuanx749>`.

:mod:`sklearn.feature_extraction`
.................................

- |Efficiency| :class:`feature_extraction.text.TfidfTransformer` is now faster
  and more memory-efficient by using a NumPy vector instead of a sparse matrix
  for storing the inverse document frequency.
  :pr:`18843` by :user:`Paolo Montesel <thebabush>`.

- |Enhancement| :class:`feature_extraction.text.TfidfTransformer` now preserves
  the data type of the input matrix if it is `np.float64` or `np.float32`.
  :pr:`28136` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.feature_selection`
................................

- |Enhancement| :func:`feature_selection.mutual_info_regression` and
  :func:`feature_selection.mutual_info_classif` now support the `n_jobs` parameter.
  :pr:`28085` by :user:`Neto Menoci <netomenoci>` and
  :user:`Florin Andrei <FlorinAndrei>`.

- |Enhancement| The `cv_results_` attribute of :class:`feature_selection.RFECV` has
  a new key, `n_features`, containing an array with the number of features selected
  at each step.
  :pr:`28670` by :user:`Miguel Silva <miguelcsilva>`.

:mod:`sklearn.impute`
.....................

- |Enhancement| :class:`impute.SimpleImputer` now supports custom strategies
  by passing a function in place of a strategy name.
  :pr:`28053` by :user:`Mark Elliot <mark-thm>`.

:mod:`sklearn.inspection`
.........................

- |Fix| :meth:`inspection.DecisionBoundaryDisplay.from_estimator` no longer
  warns about missing feature names when provided a `polars.DataFrame`.
  :pr:`28718` by :user:`Patrick Wang <patrickkwang>`.

:mod:`sklearn.linear_model`
...........................

- |Enhancement| Solver `"newton-cg"` in :class:`linear_model.LogisticRegression` and
  :class:`linear_model.LogisticRegressionCV` now emits information when `verbose` is
  set to positive values.
  :pr:`27526` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Fix| :class:`linear_model.ElasticNet`, :class:`linear_model.ElasticNetCV`,
  :class:`linear_model.Lasso` and :class:`linear_model.LassoCV` now explicitly don't
  accept large sparse data formats.
  :pr:`27576` by :user:`Stefanie Senger <StefanieSenger>`.

- |Fix| :class:`linear_model.RidgeCV` and :class:`RidgeClassifierCV` correctly pass
  `sample_weight` to the underlying scorer when `cv` is None.
  :pr:`27560` by :user:`Omar Salman <OmarManzoor>`.

- |Fix| The `n_nonzero_coefs_` attribute in :class:`linear_model.OrthogonalMatchingPursuit`
  will now always be `None` when `tol` is set, as `n_nonzero_coefs` is ignored in
  this case. :pr:`28557` by :user:`Lucy Liu <lucyleeow>`.

- |API| :class:`linear_model.RidgeCV` and :class:`linear_model.RidgeClassifierCV`
  will now allow `alpha=0` when `cv != None`, which is consistent with
  :class:`linear_model.Ridge` and :class:`linear_model.RidgeClassifier`.
  :pr:`28425` by :user:`Lucy Liu <lucyleeow>`.

- |API| Passing `average=0` to disable averaging is deprecated in
  :class:`linear_model.PassiveAggressiveClassifier`,
  :class:`linear_model.PassiveAggressiveRegressor`,
  :class:`linear_model.SGDClassifier`, :class:`linear_model.SGDRegressor` and
  :class:`linear_model.SGDOneClassSVM`. Pass `average=False` instead.
  :pr:`28582` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| Parameter `multi_class` was deprecated in
  :class:`linear_model.LogisticRegression` and
  :class:`linear_model.LogisticRegressionCV`. `multi_class` will be removed in 1.7,
  and internally, for 3 and more classes, it will always use multinomial.
  If you still want to use the one-vs-rest scheme, you can use
  `OneVsRestClassifier(LogisticRegression())`.
  :pr:`28703` by :user:`Christian Lorentzen <lorentzenchr>`.

- |API| `store_cv_values` and `cv_values_` are deprecated in favor of
  `store_cv_results` and `cv_results_` in `linear_model.RidgeCV` and
  `linear_model.RidgeClassifierCV`.
  :pr:`28915` by :user:`Lucy Liu <lucyleeow>`.

:mod:`sklearn.manifold`
.......................

- |API| Deprecates `n_iter` in favor of `max_iter` in :class:`manifold.TSNE`.
  `n_iter` will be removed in version 1.7. This makes :class:`manifold.TSNE`
  consistent with the rest of the estimators. :pr:`28471` by
  :user:`Lucy Liu <lucyleeow>`.

:mod:`sklearn.metrics`
......................

- |Feature| :func:`metrics.pairwise_distances` accepts calculating pairwise distances
  for non-numeric arrays as well. This is supported through custom metrics only.
  :pr:`27456` by :user:`Venkatachalam N <venkyyuvy>`, :user:`Kshitij Mathur <Kshitij68>`
  and :user:`Julian Libiseller-Egger <julibeg>`.

- |Feature| :func:`sklearn.metrics.check_scoring` now returns a multi-metric scorer
  when `scoring` is a `dict`, `set`, `tuple` or `list`. :pr:`28360` by `Thomas Fan`_.

- |Feature| :func:`metrics.d2_log_loss_score` has been added which
  calculates the D^2 score for the log loss.
  :pr:`28351` by :user:`Omar Salman <OmarManzoor>`.

- |Efficiency| Improve efficiency of functions :func:`~metrics.brier_score_loss`,
  :func:`~calibration.calibration_curve`, :func:`~metrics.det_curve`,
  :func:`~metrics.precision_recall_curve` and :func:`~metrics.roc_curve` when the
  `pos_label` argument is specified.
  Also improve efficiency of methods `from_estimator`
  and `from_predictions` in :class:`~metrics.RocCurveDisplay`,
  :class:`~metrics.PrecisionRecallDisplay`, :class:`~metrics.DetCurveDisplay` and
  :class:`~calibration.CalibrationDisplay`.
  :pr:`28051` by :user:`Pierre de Fréminville <pidefrem>`.

- |Fix| :func:`metrics.classification_report` now shows only accuracy and not
  micro-average when input is a subset of labels.
  :pr:`28399` by :user:`Vineet Joshi <vjoshi253>`.

- |Fix| Fix OpenBLAS 0.3.26 dead-lock on Windows in pairwise distances
  computation. This is likely to affect neighbor-based algorithms.
  :pr:`28692` by :user:`Loïc Estève <lesteve>`.

- |API| :func:`metrics.precision_recall_curve` deprecated the keyword argument
  `probas_pred` in favor of `y_score`. `probas_pred` will be removed in version 1.7.
  :pr:`28092` by :user:`Adam Li <adam2392>`.

- |API| :func:`metrics.brier_score_loss` deprecated the keyword argument `y_prob`
  in favor of `y_proba`. `y_prob` will be removed in version 1.7.
  :pr:`28092` by :user:`Adam Li <adam2392>`.

- |API| For classifiers and classification metrics, labels encoded as bytes
  are deprecated and will raise an error in v1.7.
  :pr:`18555` by :user:`Kaushik Amar Das <cozek>`.

:mod:`sklearn.mixture`
......................

- |Fix| The `converged_` attribute of :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture` now reflects the convergence status of
  the best fit, whereas it was previously `True` if any of the fits converged.
  :pr:`26837` by :user:`Krsto Proroković <krstopro>`.

:mod:`sklearn.model_selection`
..............................

- |MajorFeature| :class:`model_selection.TunedThresholdClassifierCV` finds
  the decision threshold of a binary classifier that maximizes a
  classification metric through cross-validation.
  :class:`model_selection.FixedThresholdClassifier` is an alternative when one wants
  to use a fixed decision threshold without any tuning scheme.
  :pr:`26120` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Enhancement| :term:`CV splitters <CV splitter>` that ignore the group parameter now
  raise a warning when groups are passed in
  to :term:`split`. :pr:`28210` by
  `Thomas Fan`_.

- |Enhancement| The HTML diagram representation of
  :class:`~model_selection.GridSearchCV`,
  :class:`~model_selection.RandomizedSearchCV`,
  :class:`~model_selection.HalvingGridSearchCV` and
  :class:`~model_selection.HalvingRandomSearchCV` will show the best estimator when
  `refit=True`. :pr:`28722` by :user:`Yao Xiao <Charlie-XIAO>` and `Thomas Fan`_.

- |Fix| The `cv_results_` attribute of :class:`model_selection.GridSearchCV` now
  returns masked arrays of the appropriate NumPy dtype, as opposed to always returning
  dtype `object`. :pr:`28352` by :user:`Marco Gorelli <MarcoGorelli>`.

- |Fix| :func:`model_selection.train_test_split` works with Array API inputs.
  Previously indexing was not handled correctly, leading to exceptions when using
  strict implementations of the Array API like CuPy.
  :pr:`28407` by :user:`Tim Head <betatim>`.

:mod:`sklearn.multioutput`
..........................

- |Enhancement| The `chain_method` parameter was added to
  :class:`multioutput.ClassifierChain`.
  :pr:`27700` by :user:`Lucy Liu <lucyleeow>`.

:mod:`sklearn.neighbors`
........................

- |Fix| Fixes :class:`neighbors.NeighborhoodComponentsAnalysis` such that
  `get_feature_names_out` returns the correct number of feature names.
  :pr:`28306` by :user:`Brendan Lu <brendanlu>`.

:mod:`sklearn.pipeline`
.......................

- |Feature| :class:`pipeline.FeatureUnion` can now use the
  `verbose_feature_names_out` attribute. If `True`, `get_feature_names_out`
  will prefix all feature names with the name of the transformer
  that generated that feature. If `False`, `get_feature_names_out` will not
  prefix any feature names and will error if feature names are not unique.
  :pr:`25991` by :user:`Jiawei Zhang <jiawei-zhang-a>`.

:mod:`sklearn.preprocessing`
............................

- |Enhancement| :class:`preprocessing.QuantileTransformer` and
  :func:`preprocessing.quantile_transform` now support disabling
  subsampling explicitly.
  :pr:`27636` by :user:`Ralph Urlus <rurlus>`.

:mod:`sklearn.tree`
...................

- |Enhancement| Plotting trees in matplotlib via :func:`tree.plot_tree` now
  shows a "True/False" label to indicate the directionality the samples traverse
  given the split condition.
  :pr:`28552` by :user:`Adam Li <adam2392>`.

:mod:`sklearn.utils`
....................

- |Fix| :func:`utils._safe_indexing` now works correctly for polars DataFrame when
  `axis=0` and supports indexing polars Series.
  :pr:`28521` by :user:`Yao Xiao <Charlie-XIAO>`.

- |API| :data:`utils.IS_PYPY` is deprecated and will be removed in version 1.7.
  :pr:`28768` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| :func:`utils.tosequence` is deprecated and will be removed in version 1.7.
  :pr:`28763` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| :class:`utils.parallel_backend` and :func:`utils.register_parallel_backend` are
  deprecated and will be removed in version 1.7. Use `joblib.parallel_backend` and
  `joblib.register_parallel_backend` instead.
  :pr:`28847` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| Raise an informative warning message in :func:`~utils.multiclass.type_of_target`
  when the target is represented as bytes. For classifiers and classification metrics,
  labels encoded as bytes are deprecated and will raise an error in v1.7.
  :pr:`18555` by :user:`Kaushik Amar Das <cozek>`.

- |API| :func:`utils.estimator_checks.check_estimator_sparse_data` was split into two
  functions: :func:`utils.estimator_checks.check_estimator_sparse_matrix` and
  :func:`utils.estimator_checks.check_estimator_sparse_array`.
  :pr:`27576` by :user:`Stefanie Senger <StefanieSenger>`.

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of the
project since version 1.4, including:

101AlexMartin, Abdulaziz Aloqeely, Adam J. Stewart, Adam Li, Adarsh Wase,
Adeyemi Biola, Aditi Juneja, Adrin
Jalali, Advik Sinha, Aisha, Akash Srivastava, Akihiro Kuno, Alan Guedes,
Alberto Torres, Alexis IMBERT, alexqiao, Ana Paula Gomes, Anderson Nelson,
Andrei Dzis, Arif Qodari, Arnaud Capitaine, Arturo Amor, Aswathavicky,
Audrey Flanders, awwwyan, baggiponte, Bharat Raghunathan, bme-git, brdav,
Brendan Lu, Brigitta Sipőcz, Bruno, Cailean Carter, Cemlyn, Christian Lorentzen,
Christian Veenhuis, Cindy Liang, Claudio Salvatore Arcidiacono, Connor Boyle,
Conrad Stevens, crispinlogan, David Matthew Cherney, Davide Chicco, davidleon123,
dependabot[bot], DerWeh, dinga92, Dipan Banik, Drew Craeton, Duarte São José,
DUONG, Eddie Bergman, Edoardo Abati, Egehan Gunduz, Emad Izadifar, EmilyXinyi,
Erich Schubert, Evelyn, Filip Karlo Došilović, Franck Charras, Gael Varoquaux,
Gönül Aycı, Guillaume Lemaitre, Gyeongjae Choi, Harmanan Kohli, Hong Xiang Yue,
Ian Faust, Ilya Komarov, itsaphel, Ivan Wiryadi, Jack Bowyer, Javier Marin Tur,
Jérémie du Boisberranger, Jérôme Dockès, Jiawei Zhang, João Morais, Joe Cainey,
Joel Nothman, Johanna Bayer, John Cant, John Enblom, John Hopfensperger, jpcars,
jpienaar-tuks, Julian Chan, Julian Libiseller-Egger, Julien Jerphanion, KanchiMoe,
Kaushik Amar Das, keyber, Koustav Ghosh, kraktus, Krsto Proroković, Lars, ldwy4,
LeoGrin, lihaitao, Linus Sommer, Loic Esteve, Lucy Liu, Lukas Geiger, m-maggi,
manasimj, Manuel Labbé, Manuel Morales, Marco Edward Gorelli, Marco Wolsza,
Maren Westermann, Marija Vlajic, Mark Elliot, Martin Helm, Mateusz Sokół,
mathurinm, Mavs, Michael Dawson, Michael Higgins, Michael Mayer, miguelcsilva,
Miki Watanabe, Mohammed Hamdy, myenugula, Nathan Goldbaum, Naziya Mahimkar,
nbrown-ScottLogic, Neto, Nithish Bolleddula, notPlancha, Olivier Grisel,
Omar Salman, ParsifalXu, Patrick Wang, Pierre de Fréminville, Piotr,
Priyank Shroff, Priyansh Gupta, Priyash Shah, Puneeth K, Rahil Parikh, raisadz,
Raj Pulapakura, Ralf Gommers, Ralph Urlus, Randolf Scholz, renaissance0ne,
Reshama Shaikh, Richard Barnes, Robert Pollak, Roberto Rosati,
Rodrigo Romero, rwelsch427, Saad Mahmood, Salim Dohri, Sandip Dutta, SarahRemus,
scikit-learn-bot, Shaharyar Choudhry, Shubham, sperret6, Stefanie Senger,
Steffen Schneider, Suha Siddiqui, Thanh Lam DANG, thebabush, Thomas,
Thomas J. Fan, Thomas Lazarus, Tialo, Tim Head, Tuhin Sharma, Tushar Parimi,
VarunChaduvula, Vineet Joshi, virchan, Waël Boukhobza, Weyb, Will Dean,
Xavier Beltran, Xiao Yuan, Xuefeng Xu, Yao Xiao, yareyaredesuyo, Ziad Amerr,
Štěpán Sršeň"}
{"questions":"scikit-learn sklearn contributors rst Version 1 4 releasenotes14","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n.. _release_notes_1_4:\n\n===========\nVersion 1.4\n===========\n\nFor a short description of the main highlights of the release, please refer to\n:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_4_0.py`.\n\n.. include:: changelog_legend.inc\n\n.. _changes_1_4_2:\n\nVersion 1.4.2\n=============\n\n**April 2024**\n\nThis release only includes support for numpy 2.\n\n.. _changes_1_4_1:\n\nVersion 1.4.1\n=============\n\n**February 2024**\n\nChanged models\n--------------\n\n- |API| The `tree_.value` attribute in :class:`tree.DecisionTreeClassifier`,\n  :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier` and\n  :class:`tree.ExtraTreeRegressor` changed from an weighted absolute count\n  of number of samples to a weighted fraction of the total number of samples.\n  :pr:`27639` by :user:`Samuel Ronsin <samronsin>`.\n\nMetadata Routing\n----------------\n\n- |FIX| Fix routing issue with :class:`~compose.ColumnTransformer` when used\n  inside another meta-estimator.\n  :pr:`28188` by `Adrin Jalali`_.\n\n- |Fix| No error is raised when no metadata is passed to a metaestimator that\n  includes a sub-estimator which doesn't support metadata routing.\n  :pr:`28256` by `Adrin Jalali`_.\n\n- |Fix| Fix :class:`multioutput.MultiOutputRegressor` and\n  :class:`multioutput.MultiOutputClassifier` to work with estimators that don't\n  consume any metadata when metadata routing is enabled.\n  :pr:`28240` by `Adrin Jalali`_.\n\nDataFrame Support\n-----------------\n\n- |Enhancement| |Fix| Pandas and Polars dataframe are validated directly without\n  ducktyping checks.\n  :pr:`28195` by `Thomas Fan`_.\n\nChanges impacting many modules\n------------------------------\n\n- |Efficiency| |Fix| Partial revert of :pr:`28191` to avoid a performance regression for\n  estimators relying on euclidean 
pairwise computation with\n  sparse matrices. The impacted estimators are:\n\n  - :func:`sklearn.metrics.pairwise_distances_argmin`\n  - :func:`sklearn.metrics.pairwise_distances_argmin_min`\n  - :class:`sklearn.cluster.AffinityPropagation`\n  - :class:`sklearn.cluster.Birch`\n  - :class:`sklearn.cluster.SpectralClustering`\n  - :class:`sklearn.neighbors.KNeighborsClassifier`\n  - :class:`sklearn.neighbors.KNeighborsRegressor`\n  - :class:`sklearn.neighbors.RadiusNeighborsClassifier`\n  - :class:`sklearn.neighbors.RadiusNeighborsRegressor`\n  - :class:`sklearn.neighbors.LocalOutlierFactor`\n  - :class:`sklearn.neighbors.NearestNeighbors`\n  - :class:`sklearn.manifold.Isomap`\n  - :class:`sklearn.manifold.TSNE`\n  - :func:`sklearn.manifold.trustworthiness`\n\n  :pr:`28235` by :user:`Julien Jerphanion <jjerphan>`.\n\n- |Fix| Fixes a bug for all scikit-learn transformers when using `set_output` with\n  `transform` set to `pandas` or `polars`. The bug could lead to wrong naming of the\n  columns of the returned dataframe.\n  :pr:`28262` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| When users try to use a method in :class:`~ensemble.StackingClassifier`,\n  :class:`~ensemble.StackingClassifier`, :class:`~ensemble.StackingClassifier`,\n  :class:`~feature_selection.SelectFromModel`, :class:`~feature_selection.RFE`,\n  :class:`~semi_supervised.SelfTrainingClassifier`,\n  :class:`~multiclass.OneVsOneClassifier`, :class:`~multiclass.OutputCodeClassifier` or\n  :class:`~multiclass.OneVsRestClassifier` that their sub-estimators don't implement,\n  the `AttributeError` now reraises in the traceback.\n  :pr:`28167` by :user:`Stefanie Senger <StefanieSenger>`.\n\nChangelog\n---------\n\n:mod:`sklearn.calibration`\n..........................\n\n- |Fix| `calibration.CalibratedClassifierCV` supports :term:`predict_proba` with\n  float32 output from the inner estimator. 
:pr:`28247` by `Thomas Fan`_.\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| :class:`cluster.AffinityPropagation` now avoids assigning multiple different\n  clusters for equal points.\n  :pr:`28121` by :user:`Pietro Peterlongo <pietroppeter>` and\n  :user:`Yao Xiao <Charlie-XIAO>`.\n\n- |Fix| Avoid infinite loop in :class:`cluster.KMeans` when the number of clusters is\n  larger than the number of non-duplicate samples.\n  :pr:`28165` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| :class:`compose.ColumnTransformer` now transform into a polars dataframe when\n  `verbose_feature_names_out=True` and the transformers internally used several times\n  the same columns. Previously, it would raise a due to duplicated column names.\n  :pr:`28262` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Fix| :class:`HistGradientBoostingClassifier` and\n  :class:`HistGradientBoostingRegressor` when fitted on `pandas` `DataFrame`\n  with extension dtypes, for example `pd.Int64Dtype`\n  :pr:`28385` by :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\n- |Fix| Fixes error message raised by :class:`ensemble.VotingClassifier` when the\n  target is multilabel or multiclass-multioutput in a DataFrame format.\n  :pr:`27702` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.impute`\n.....................\n\n- |Fix|: :class:`impute.SimpleImputer` now raises an error in `.fit` and\n  `.transform` if `fill_value` can not be cast to input value dtype with\n  `casting='same_kind'`.\n  :pr:`28365` by :user:`Leo Grinsztajn <LeoGrin>`.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |Fix| :func:`inspection.permutation_importance` now handles properly `sample_weight`\n  together with subsampling (i.e. 
`max_features` < 1.0).\n  :pr:`28184` by :user:`Michael Mayer <mayer79>`.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Fix| :class:`linear_model.ARDRegression` now handles pandas input types\n  for `predict(X, return_std=True)`.\n  :pr:`28377` by :user:`Eddie Bergman <eddiebergman>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Fix| make :class:`preprocessing.FunctionTransformer` more lenient and overwrite\n  output column names with the `get_feature_names_out` in the following cases:\n  (i) the input and output column names remain the same (happen when using NumPy\n  `ufunc`); (ii) the input column names are numbers; (iii) the output will be set to\n  Pandas or Polars dataframe.\n  :pr:`28241` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| :class:`preprocessing.FunctionTransformer` now also warns when `set_output`\n  is called with `transform=\"polars\"` and `func` does not return a Polars dataframe or\n  `feature_names_out` is not specified.\n  :pr:`28263` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| :class:`preprocessing.TargetEncoder` no longer fails when\n  `target_type=\"continuous\"` and the input is read-only. In particular, it now\n  works with pandas copy-on-write mode enabled.\n  :pr:`28233` by :user:`John Hopfensperger <s-banach>`.\n\n:mod:`sklearn.tree`\n...................\n\n- |Fix| :class:`tree.DecisionTreeClassifier` and\n  :class:`tree.DecisionTreeRegressor` are handling missing values properly. 
The internal\n  criterion was not initialized when no missing values were present in the data, leading\n  to potentially wrong criterion values.\n  :pr:`28295` by :user:`Guillaume Lemaitre <glemaitre>` and\n  :pr:`28327` by :user:`Adam Li <adam2392>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |Enhancement| |Fix| :func:`utils.metaestimators.available_if` now reraises the error\n  from the `check` function as the cause of the `AttributeError`.\n  :pr:`28198` by `Thomas Fan`_.\n\n- |Fix| :func:`utils._safe_indexing` now raises a `ValueError` when `X` is a Python list\n  and `axis=1`, as documented in the docstring.\n  :pr:`28222` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n.. _changes_1_4:\n\nVersion 1.4.0\n=============\n\n**January 2024**\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- |Efficiency| :class:`linear_model.LogisticRegression` and\n  :class:`linear_model.LogisticRegressionCV` now have much better convergence for\n  solvers `\"lbfgs\"` and `\"newton-cg\"`. Both solvers can now reach much higher precision\n  for the coefficients depending on the specified `tol`. Additionally, lbfgs can\n  make better use of `tol`, i.e., stop sooner or reach higher precision.\n  Note: The lbfgs is the default solver, so this change might effect many models.\n  This change also means that with this new version of scikit-learn, the resulting\n  coefficients `coef_` and `intercept_` of your models will change for these two\n  solvers (when fit on the same data again). 
The amount of change depends on the\n  specified `tol`, for small values you will get more precise results.\n  :pr:`26721` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Fix| fixes a memory leak seen in PyPy for estimators using the Cython loss functions.\n  :pr:`27670` by :user:`Guillaume Lemaitre <glemaitre>`.\n\nChanges impacting all modules\n-----------------------------\n\n- |MajorFeature| Transformers now support polars output with\n  `set_output(transform=\"polars\")`.\n  :pr:`27315` by `Thomas Fan`_.\n\n- |Enhancement| All estimators now recognizes the column names from any dataframe\n  that adopts the\n  `DataFrame Interchange Protocol <https:\/\/data-apis.org\/dataframe-protocol\/latest\/purpose_and_scope.html>`__.\n  Dataframes that return a correct representation through `np.asarray(df)` is expected\n  to work with our estimators and functions.\n  :pr:`26464` by `Thomas Fan`_.\n\n- |Enhancement| The HTML representation of estimators now includes a link to the\n  documentation and is color-coded to denote whether the estimator is fitted or\n  not (unfitted estimators are orange, fitted estimators are blue).\n  :pr:`26616` by :user:`Riccardo Cappuzzo <rcap107>`,\n  :user:`Ines Ibnukhsein <Ines1999>`, :user:`Gael Varoquaux <GaelVaroquaux>`,\n  `Joel Nothman`_ and :user:`Lilian Boulard <LilianBoulard>`.\n\n- |Fix| Fixed a bug in most estimators and functions where setting a parameter to\n  a large integer would cause a `TypeError`.\n  :pr:`26648` by :user:`Naoise Holohan <naoise-h>`.\n\nMetadata Routing\n----------------\n\nThe following models now support metadata routing in one or more or their\nmethods. 
Refer to the :ref:`Metadata Routing User Guide <metadata_routing>` for
more details.

- |Feature| :class:`linear_model.LarsCV` and :class:`linear_model.LassoLarsCV` now
  support metadata routing in their `fit` method and route metadata to the CV
  splitter.
  :pr:`27538` by :user:`Omar Salman <OmarManzoor>`.

- |Feature| :class:`multiclass.OneVsRestClassifier`,
  :class:`multiclass.OneVsOneClassifier` and
  :class:`multiclass.OutputCodeClassifier` now support metadata routing in
  their ``fit`` and ``partial_fit``, and route metadata to the underlying
  estimator's ``fit`` and ``partial_fit``.
  :pr:`27308` by :user:`Stefanie Senger <StefanieSenger>`.

- |Feature| :class:`pipeline.Pipeline` now supports metadata routing according
  to the :ref:`metadata routing user guide <metadata_routing>`.
  :pr:`26789` by `Adrin Jalali`_.

- |Feature| :func:`~model_selection.cross_validate`,
  :func:`~model_selection.cross_val_score`, and
  :func:`~model_selection.cross_val_predict` now support metadata routing. The
  metadata is routed to the estimator's `fit`, the scorer, and the CV
  splitter's `split`, and is accepted via the new `params` parameter.
  `fit_params` is deprecated and will be removed in version 1.6. The `groups`
  parameter is also not accepted as a separate argument when metadata routing
  is enabled and should be passed via the `params` parameter.
  :pr:`26896` by `Adrin Jalali`_.

- |Feature| :class:`~model_selection.GridSearchCV`,
  :class:`~model_selection.RandomizedSearchCV`,
  :class:`~model_selection.HalvingGridSearchCV`, and
  :class:`~model_selection.HalvingRandomSearchCV` now support metadata routing
  in their ``fit`` and ``score``, and route metadata to the underlying
  estimator's ``fit``, the CV splitter, and the scorer.
  :pr:`27058` by `Adrin Jalali`_.

- |Feature| :class:`~compose.ColumnTransformer` now supports metadata routing
  according to the :ref:`metadata routing user guide <metadata_routing>`.
  :pr:`27005` by `Adrin Jalali`_.

- |Feature| :class:`linear_model.LogisticRegressionCV` now supports
  metadata routing. :meth:`linear_model.LogisticRegressionCV.fit` now
  accepts ``**params`` which are passed to the underlying splitter and
  scorer. :meth:`linear_model.LogisticRegressionCV.score` now accepts
  ``**score_params`` which are passed to the underlying scorer.
  :pr:`26525` by :user:`Omar Salman <OmarManzoor>`.

- |Feature| :class:`feature_selection.SelectFromModel` now supports metadata
  routing in `fit` and `partial_fit`.
  :pr:`27490` by :user:`Stefanie Senger <StefanieSenger>`.

- |Feature| :class:`linear_model.OrthogonalMatchingPursuitCV` now supports
  metadata routing. 
Its `fit` now accepts ``**fit_params``, which are passed to
  the underlying splitter.
  :pr:`27500` by :user:`Stefanie Senger <StefanieSenger>`.

- |Feature| :class:`linear_model.ElasticNetCV`, :class:`linear_model.LassoCV`,
  :class:`linear_model.MultiTaskElasticNetCV` and
  :class:`linear_model.MultiTaskLassoCV`
  now support metadata routing and route metadata to the CV splitter.
  :pr:`27478` by :user:`Omar Salman <OmarManzoor>`.

- |Fix| All meta-estimators for which metadata routing is not yet implemented
  now raise a `NotImplementedError` on `get_metadata_routing` and on `fit` if
  metadata routing is enabled and any metadata is passed to them.
  :pr:`27389` by `Adrin Jalali`_.


Support for SciPy sparse arrays
-------------------------------

Several estimators now support SciPy sparse arrays. The following functions
and classes are impacted:

**Functions:**

- :func:`cluster.compute_optics_graph` in :pr:`27104` by
  :user:`Maren Westermann <marenwestermann>` and in :pr:`27250` by
  :user:`Yao Xiao <Charlie-XIAO>`;
- :func:`cluster.kmeans_plusplus` in :pr:`27179` by :user:`Nurseit Kamchyev <Bncer>`;
- :func:`decomposition.non_negative_factorization` in :pr:`27100` by
  :user:`Isaac Virshup <ivirshup>`;
- :func:`feature_selection.f_regression` in :pr:`27239` by
  :user:`Yaroslav Korobko <Tialo>`;
- :func:`feature_selection.r_regression` in :pr:`27239` by
  :user:`Yaroslav Korobko <Tialo>`;
- :func:`manifold.trustworthiness` in :pr:`27250` by :user:`Yao Xiao <Charlie-XIAO>`;
- :func:`manifold.spectral_embedding` in :pr:`27240` by :user:`Yao Xiao <Charlie-XIAO>`;
- :func:`metrics.pairwise_distances` in :pr:`27250` by :user:`Yao Xiao <Charlie-XIAO>`;
- :func:`metrics.pairwise_distances_chunked` in :pr:`27250` by
  :user:`Yao Xiao <Charlie-XIAO>`;
- :func:`metrics.pairwise.pairwise_kernels` in :pr:`27250` by
  :user:`Yao Xiao <Charlie-XIAO>`;
- :func:`utils.multiclass.type_of_target` in :pr:`27274` by
  :user:`Yao Xiao <Charlie-XIAO>`.

**Classes:**

- 
:class:`cluster.HDBSCAN` in :pr:`27250` by :user:`Yao Xiao <Charlie-XIAO>`;\n- :class:`cluster.KMeans` in :pr:`27179` by :user:`Nurseit Kamchyev <Bncer>`;\n- :class:`cluster.MiniBatchKMeans` in :pr:`27179` by :user:`Nurseit Kamchyev <Bncer>`;\n- :class:`cluster.OPTICS` in :pr:`27104` by\n  :user:`Maren Westermann <marenwestermann>` and in :pr:`27250` by\n  :user:`Yao Xiao <Charlie-XIAO>`;\n- :class:`cluster.SpectralClustering` in :pr:`27161` by\n  :user:`Bharat Raghunathan <bharatr21>`;\n- :class:`decomposition.MiniBatchNMF` in :pr:`27100` by\n  :user:`Isaac Virshup <ivirshup>`;\n- :class:`decomposition.NMF` in :pr:`27100` by :user:`Isaac Virshup <ivirshup>`;\n- :class:`feature_extraction.text.TfidfTransformer` in :pr:`27219` by\n  :user:`Yao Xiao <Charlie-XIAO>`;\n- :class:`manifold.Isomap` in :pr:`27250` by :user:`Yao Xiao <Charlie-XIAO>`;\n- :class:`manifold.SpectralEmbedding` in :pr:`27240` by :user:`Yao Xiao <Charlie-XIAO>`;\n- :class:`manifold.TSNE` in :pr:`27250` by :user:`Yao Xiao <Charlie-XIAO>`;\n- :class:`impute.SimpleImputer` in :pr:`27277` by :user:`Yao Xiao <Charlie-XIAO>`;\n- :class:`impute.IterativeImputer` in :pr:`27277` by :user:`Yao Xiao <Charlie-XIAO>`;\n- :class:`impute.KNNImputer` in :pr:`27277` by :user:`Yao Xiao <Charlie-XIAO>`;\n- :class:`kernel_approximation.PolynomialCountSketch` in  :pr:`27301` by\n  :user:`Lohit SundaramahaLingam <lohitslohit>`;\n- :class:`neural_network.BernoulliRBM` in :pr:`27252` by\n  :user:`Yao Xiao <Charlie-XIAO>`;\n- :class:`preprocessing.PolynomialFeatures` in :pr:`27166` by\n  :user:`Mohit Joshi <work-mohit>`;\n- :class:`random_projection.GaussianRandomProjection` in :pr:`27314` by\n  :user:`Stefanie Senger <StefanieSenger>`;\n- :class:`random_projection.SparseRandomProjection` in :pr:`27314` by\n  :user:`Stefanie Senger <StefanieSenger>`.\n\nSupport for Array API\n---------------------\n\nSeveral estimators and functions support the\n`Array API <https:\/\/data-apis.org\/array-api\/latest\/>`_. 
Such changes allow using
the estimators and functions with other libraries such as JAX, CuPy, and PyTorch.
This therefore enables some GPU-accelerated computations.

See :ref:`array_api` for more details.

**Functions:**

- :func:`sklearn.metrics.accuracy_score` and :func:`sklearn.metrics.zero_one_loss` in
  :pr:`27137` by :user:`Edoardo Abati <EdAbati>`;
- :func:`sklearn.model_selection.train_test_split` in :pr:`26855` by `Tim Head`_;
- :func:`~utils.multiclass.is_multilabel` in :pr:`27601` by
  :user:`Yaroslav Korobko <Tialo>`.

**Classes:**

- :class:`decomposition.PCA` for the `full` and `randomized` solvers (with QR power
  iterations) in :pr:`26315`, :pr:`27098` and :pr:`27431` by
  :user:`Mateusz Sokół <mtsokol>`, :user:`Olivier Grisel <ogrisel>` and
  :user:`Edoardo Abati <EdAbati>`;
- :class:`preprocessing.KernelCenterer` in :pr:`27556` by
  :user:`Edoardo Abati <EdAbati>`;
- :class:`preprocessing.MaxAbsScaler` in :pr:`27110` by :user:`Edoardo Abati <EdAbati>`;
- :class:`preprocessing.MinMaxScaler` in :pr:`26243` by `Tim Head`_;
- :class:`preprocessing.Normalizer` in :pr:`27558` by :user:`Edoardo Abati <EdAbati>`.

Private Loss Function Module
----------------------------

- |Fix| The gradient computation of the binomial log loss is now numerically
  more stable for inputs (raw predictions) that are very large in absolute value.
  Before, it could result in `np.nan`. 
Among the models that benefit from this change are
  :class:`ensemble.GradientBoostingClassifier`,
  :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`linear_model.LogisticRegression`.
  :pr:`28048` by :user:`Christian Lorentzen <lorentzenchr>`.

Changelog
---------

..
    Entries should be grouped by module (in alphabetic order) and prefixed with
    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
    |Fix| or |API| (see whats_new.rst for descriptions).
    Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
    Changes not specific to a module should be listed under *Multiple Modules*
    or *Miscellaneous*.
    Entries should end with:
    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
    where 123456 is the *pull request* number, not the issue number.


:mod:`sklearn.base`
...................

- |Enhancement| :meth:`base.ClusterMixin.fit_predict` and
  :meth:`base.OutlierMixin.fit_predict` now accept ``**kwargs`` which are
  passed to the ``fit`` method of the estimator.
  :pr:`26506` by `Adrin Jalali`_.

- |Enhancement| :meth:`base.TransformerMixin.fit_transform` and
  :meth:`base.OutlierMixin.fit_predict` now raise a warning if ``transform`` /
  ``predict`` consume metadata, but no custom ``fit_transform`` / ``fit_predict``
  is defined in the class inheriting from them correspondingly.
  :pr:`26831` by `Adrin Jalali`_.

- |Enhancement| :func:`base.clone` now supports `dict` as input and creates a
  copy.
  :pr:`26786` by `Adrin Jalali`_.

- |API| :func:`~utils.metadata_routing.process_routing` now has a different
  signature. 
The first two (the object and the method) are positional only,
  and all metadata are passed as keyword arguments.
  :pr:`26909` by `Adrin Jalali`_.

:mod:`sklearn.calibration`
..........................

- |Enhancement| The internal objective and gradient of the `sigmoid` method
  of :class:`calibration.CalibratedClassifierCV` have been replaced by the
  private loss module.
  :pr:`27185` by :user:`Omar Salman <OmarManzoor>`.

:mod:`sklearn.cluster`
......................

- |Fix| The `degree` parameter in the :class:`cluster.SpectralClustering`
  constructor now accepts real values instead of only integral values in
  accordance with the `degree` parameter of the
  :class:`sklearn.metrics.pairwise.polynomial_kernel`.
  :pr:`27668` by :user:`Nolan McMahon <NolantheNerd>`.

- |Fix| Fixes a bug in :class:`cluster.OPTICS` where the cluster correction based
  on predecessor was not using the right indexing. It would lead to inconsistent results
  dependent on the order of the data.
  :pr:`26459` by :user:`Haoying Zhang <stevezhang1999>` and
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| Improve error message when checking the number of connected components
  in the `fit` method of :class:`cluster.HDBSCAN`.
  :pr:`27678` by :user:`Ganesh Tata <tataganesh>`.

- |Fix| Create a copy of a precomputed sparse matrix within the
  `fit` method of :class:`cluster.DBSCAN` to avoid in-place modification of
  the sparse matrix.
  :pr:`27651` by :user:`Ganesh Tata <tataganesh>`.

- |Fix| :class:`cluster.HDBSCAN` now raises a proper `ValueError` when
  `metric="precomputed"` and storing centers is requested via the
  `store_centers` parameter.
  :pr:`27898` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| `kdtree` and `balltree` values are now deprecated and are renamed as
  `kd_tree` and `ball_tree` respectively for the `algorithm` parameter of
  :class:`cluster.HDBSCAN` ensuring consistency in naming convention.
  `kdtree` and `balltree` values will be removed in 
1.6.
  :pr:`26744` by :user:`Shreesha Kumar Bhat <Shreesha3112>`.

- |Fix| :func:`cluster.spectral_clustering` and :class:`cluster.SpectralClustering`
  now raise an explicit error message indicating that sparse matrices and arrays
  with `np.int64` indices are not supported.
  :pr:`27240` by :user:`Yao Xiao <Charlie-XIAO>`.

- |API| The option `metric=None` in
  :class:`cluster.AgglomerativeClustering` and :class:`cluster.FeatureAgglomeration`
  is deprecated in version 1.4 and will be removed in version 1.6. Use the default
  value instead.
  :pr:`27828` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.compose`
......................

- |MajorFeature| Adds `polars <https://www.pola.rs>`__ input support to
  :class:`compose.ColumnTransformer` through the `DataFrame Interchange Protocol
  <https://data-apis.org/dataframe-protocol/latest/purpose_and_scope.html>`__.
  The minimum supported version for polars is `0.19.12`.
  :pr:`26683` by `Thomas Fan`_.

- |API| |Fix| :class:`~compose.ColumnTransformer` now replaces `"passthrough"`
  with a corresponding :class:`~preprocessing.FunctionTransformer` in the
  fitted ``transformers_`` attribute.
  :pr:`27204` by `Adrin Jalali`_.

- |API| Outputs that use pandas extension dtypes and contain `pd.NA` in
  :class:`~compose.ColumnTransformer` now result in a `FutureWarning` and will
  cause a `ValueError` in version 1.6, unless the output container has been
  configured as "pandas" with `set_output(transform="pandas")`. Before, such
  outputs resulted in numpy arrays of dtype `object` containing `pd.NA` which
  could not be converted to numpy floats and caused errors when passed to other
  scikit-learn estimators.
  :pr:`27734` by :user:`Jérôme Dockès <jeromedockes>`.

:mod:`sklearn.covariance`
.........................

- |Enhancement| Allow :func:`covariance.shrunk_covariance` to process
  multiple covariance matrices at once by handling nd-arrays.
  :pr:`25275` by :user:`Quentin Barthélemy <qbarthelemy>`.

:mod:`sklearn.datasets`
.......................

- |Enhancement| :func:`datasets.make_sparse_spd_matrix` now uses a more memory-
  efficient sparse layout. It also accepts a new keyword `sparse_format` that allows
  specifying the output format of the sparse matrix. By default `sparse_format=None`,
  which returns a dense numpy ndarray as before.
  :pr:`27438` by :user:`Yao Xiao <Charlie-XIAO>`.

- |Fix| :func:`datasets.dump_svmlight_file` now does not raise `ValueError` when `X`
  is read-only, e.g., a `numpy.memmap` instance.
  :pr:`28111` by :user:`Yao Xiao <Charlie-XIAO>`.

- |API| :func:`datasets.make_sparse_spd_matrix` deprecated the keyword argument ``dim``
  in favor of ``n_dim``. 
``dim`` will be removed in version 1.6.
  :pr:`27718` by :user:`Adam Li <adam2392>`.

:mod:`sklearn.decomposition`
............................

- |Feature| :class:`decomposition.PCA` now supports :class:`scipy.sparse.sparray`
  and :class:`scipy.sparse.spmatrix` inputs when using the `arpack` solver.
  When used on sparse data like :func:`datasets.fetch_20newsgroups_vectorized` this
  can lead to speed-ups of 100x (single threaded) and 70x lower memory usage.
  Based on :user:`Alexander Tarashansky <atarashansky>`'s implementation in
  `scanpy <https://github.com/scverse/scanpy>`_.
  :pr:`18689` by :user:`Isaac Virshup <ivirshup>` and
  :user:`Andrey Portnoy <andportnoy>`.

- |Enhancement| An `"auto"` option was added to the `n_components` parameter of
  :func:`decomposition.non_negative_factorization`, :class:`decomposition.NMF` and
  :class:`decomposition.MiniBatchNMF` to automatically infer the number of components
  from W or H shapes when using a custom initialization. The default value of this
  parameter will change from `None` to `"auto"` in version 1.6.
  :pr:`26634` by :user:`Alexandre Landeau <AlexL>` and :user:`Alexandre Vigny <avigny>`.

- |Fix| :func:`decomposition.dict_learning_online` no longer ignores the parameter
  `max_iter`.
  :pr:`27834` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| The `degree` parameter in the :class:`decomposition.KernelPCA`
  constructor now accepts real values instead of only integral values in
  accordance with the `degree` parameter of the
  :class:`sklearn.metrics.pairwise.polynomial_kernel`.
  :pr:`27668` by :user:`Nolan McMahon <NolantheNerd>`.

- |API| The option `max_iter=None` in
  :class:`decomposition.MiniBatchDictionaryLearning`,
  :class:`decomposition.MiniBatchSparsePCA`, and
  :func:`decomposition.dict_learning_online` is deprecated and will be removed in
  version 1.6. 
Use the default value instead.
  :pr:`27834` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.ensemble`
.......................

- |MajorFeature| :class:`ensemble.RandomForestClassifier` and
  :class:`ensemble.RandomForestRegressor` support missing values when
  the criterion is `gini`, `entropy`, or `log_loss` for classification,
  or `squared_error`, `friedman_mse`, or `poisson` for regression.
  :pr:`26391` by `Thomas Fan`_.

- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` support
  `categorical_features="from_dtype"`, which treats columns with Pandas or
  Polars Categorical dtype as categories in the algorithm.
  `categorical_features="from_dtype"` will become the default in v1.6.
  Categorical features no longer need to be encoded with numbers. When
  categorical features are numbers, the maximum value no longer needs to be
  smaller than `max_bins`; only the number of (unique) categories must be
  smaller than `max_bins`.
  :pr:`26411` by `Thomas Fan`_ and :pr:`27835` by :user:`Jérôme Dockès <jeromedockes>`.

- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` got the new parameter
  `max_features` to specify the proportion of randomly chosen features considered
  in each split.
  :pr:`27139` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`, :class:`ensemble.ExtraTreesClassifier`
  and :class:`ensemble.ExtraTreesRegressor` now support monotonic constraints,
  useful when features are supposed to have a positive/negative effect on the target.
  Missing values in the train data and multi-output targets are not supported.
  :pr:`13649` by :user:`Samuel Ronsin <samronsin>`,
  initiated by :user:`Patrick O'Reilly <pat-oreilly>`.

- |Efficiency| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` are now a bit faster by reusing
  the parent node's histogram as a child node's histogram in the subtraction trick.
  In effect, less memory has to be allocated and deallocated.
  :pr:`27865` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Efficiency| :class:`ensemble.GradientBoostingClassifier` is faster,
  for binary and in particular for multiclass problems, thanks to the private loss
  function module.
  :pr:`26278` and :pr:`28095` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Efficiency| Improves runtime and memory usage for
  :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor` when trained on sparse data.
  :pr:`26957` by `Thomas Fan`_.

- |Efficiency| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` are now faster when `scoring`
  is a predefined metric listed in :func:`metrics.get_scorer_names` and
  early stopping is enabled.
  :pr:`26163` by `Thomas Fan`_.

- |Enhancement| A fitted attribute, ``estimators_samples_``, was added to all forest
  classes, including
  :class:`ensemble.RandomForestClassifier`, :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.ExtraTreesClassifier` and :class:`ensemble.ExtraTreesRegressor`,
  which allows retrieving the training sample indices used for each tree estimator.
  :pr:`26736` by :user:`Adam Li <adam2392>`.

- |Fix| Fixes :class:`ensemble.IsolationForest` when the input is a sparse matrix and
  `contamination` is set to a float value.
  :pr:`27645` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :class:`ensemble.RandomForestRegressor` and
  :class:`ensemble.ExtraTreesRegressor` now raise a `ValueError` when the OOB score is
  requested for a multioutput model whose targets all round to integers, as such a
  model was previously misrecognized as a multiclass problem.
  :pr:`27817` by :user:`Daniele Ongari <danieleongari>`.

- |Fix| Changes estimator tags to acknowledge that
  :class:`ensemble.VotingClassifier`, :class:`ensemble.VotingRegressor`,
  :class:`ensemble.StackingClassifier`, and :class:`ensemble.StackingRegressor`
  support missing values if all `estimators` support missing values.
  :pr:`27710` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| Support loading pickles of :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` when the pickle has
  been generated on a platform with a different bitness. A typical example is
  to train and pickle the model on a 64-bit machine and load the model on a 32-bit
  machine for prediction.
  :pr:`28074` by :user:`Christian Lorentzen <lorentzenchr>` and
  :user:`Loïc Estève <lesteve>`.

- |API| In :class:`ensemble.AdaBoostClassifier`, the `algorithm` argument `SAMME.R` was
  deprecated and will be removed in 1.6.
  :pr:`26830` by :user:`Stefanie Senger <StefanieSenger>`.

:mod:`sklearn.feature_extraction`
.................................

- |API| Changed error type from :class:`AttributeError` to
  :class:`exceptions.NotFittedError` in unfitted instances of
  :class:`feature_extraction.DictVectorizer` for the following methods:
  :func:`feature_extraction.DictVectorizer.inverse_transform`,
  :func:`feature_extraction.DictVectorizer.restrict`,
  :func:`feature_extraction.DictVectorizer.transform`.
  :pr:`24838` by :user:`Lorenz Hertel <LoHertel>`.

:mod:`sklearn.feature_selection`
................................

- |Enhancement| :class:`feature_selection.SelectKBest`,
  :class:`feature_selection.SelectPercentile`, and
  :class:`feature_selection.GenericUnivariateSelect` now support unsupervised
  feature selection by providing a `score_func` taking `X` and `y=None`.
  :pr:`27721` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Enhancement| 
:class:`feature_selection.SelectKBest` and
  :class:`feature_selection.GenericUnivariateSelect` with `mode='k_best'`
  now show a warning when `k` is greater than the number of features.
  :pr:`27841` by `Thomas Fan`_.

- |Fix| :class:`feature_selection.RFE` and :class:`feature_selection.RFECV` do
  not check for nans during input validation.
  :pr:`21807` by `Thomas Fan`_.

:mod:`sklearn.inspection`
.........................

- |Enhancement| :class:`inspection.DecisionBoundaryDisplay` now accepts a parameter
  `class_of_interest` to select the class of interest when plotting the response
  provided by `response_method="predict_proba"` or
  `response_method="decision_function"`. It allows plotting the decision boundary for
  both binary and multiclass classifiers.
  :pr:`27291` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :meth:`inspection.DecisionBoundaryDisplay.from_estimator` and
  :class:`inspection.PartialDependenceDisplay.from_estimator` now return the correct
  type for subclasses.
  :pr:`27675` by :user:`John Cant <johncant>`.

- |API| :class:`inspection.DecisionBoundaryDisplay` raises an `AttributeError` instead
  of a `ValueError` when an estimator does not implement the requested response method.
  :pr:`27291` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.kernel_ridge`
...........................

- |Fix| The `degree` parameter in the :class:`kernel_ridge.KernelRidge`
  constructor now accepts real values instead of only integral values in
  accordance with the `degree` parameter of the
  :class:`sklearn.metrics.pairwise.polynomial_kernel`.
  :pr:`27668` by :user:`Nolan McMahon <NolantheNerd>`.

:mod:`sklearn.linear_model`
...........................

- |Efficiency| :class:`linear_model.LogisticRegression` and
  :class:`linear_model.LogisticRegressionCV` now have much better convergence for
  solvers `"lbfgs"` and `"newton-cg"`. 
Both solvers can now reach much higher precision\n  for the coefficients depending on the specified `tol`. Additionally, lbfgs can\n  make better use of `tol`, i.e., stop sooner or reach higher precision. This is\n  accomplished by better scaling of the objective function, i.e., using average per\n  sample losses instead of sum of per sample losses.\n  :pr:`26721` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Efficiency| :class:`linear_model.LogisticRegression` and\n  :class:`linear_model.LogisticRegressionCV` with solver `\"newton-cg\"` can now be\n  considerably faster for some data and parameter settings. This is accomplished by a\n  better line search convergence check for negligible loss improvements that takes into\n  account gradient information.\n  :pr:`26721` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Efficiency| Solver `\"newton-cg\"` in :class:`linear_model.LogisticRegression` and\n  :class:`linear_model.LogisticRegressionCV` uses a little less memory. The effect is\n  proportional to the number of coefficients (`n_features * n_classes`).\n  :pr:`27417` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Fix| Ensure that the `sigma_` attribute of\n  :class:`linear_model.ARDRegression` and :class:`linear_model.BayesianRidge`\n  always has a `float32` dtype when fitted on `float32` data, even with the\n  type promotion rules of NumPy 2.\n  :pr:`27899` by :user:`Olivier Grisel <ogrisel>`.\n\n- |API| The attribute `loss_function_` of :class:`linear_model.SGDClassifier` and\n  :class:`linear_model.SGDOneClassSVM` has been deprecated and will be removed in\n  version 1.6.\n  :pr:`27979` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Efficiency| Computing pairwise distances via :class:`metrics.DistanceMetric`\n  for CSR x CSR,  Dense x CSR, and CSR x Dense datasets is now 1.5x faster.\n  :pr:`26765` by :user:`Meekail Zain <micky774>`.\n\n- |Efficiency| Computing distances via 
:class:`metrics.DistanceMetric`
  for CSR x CSR, Dense x CSR, and CSR x Dense now uses ~50% less memory,
  and outputs distances in the same dtype as the provided data.
  :pr:`27006` by :user:`Meekail Zain <micky774>`.

- |Enhancement| Improve the rendering of the plot obtained with the
  :class:`metrics.PrecisionRecallDisplay` and :class:`metrics.RocCurveDisplay`
  classes. The x- and y-axis limits are set to [0, 1] and the aspect ratio between
  both axes is set to 1 to get a square plot.
  :pr:`26366` by :user:`Mojdeh Rastgoo <mrastgoo>`.

- |Enhancement| Added `neg_root_mean_squared_log_error_scorer` as a scorer.
  :pr:`26734` by :user:`Alejandro Martin Gil <101AlexMartin>`.

- |Enhancement| :func:`metrics.confusion_matrix` now warns when only one label was
  found in `y_true` and `y_pred`.
  :pr:`27650` by :user:`Lucy Liu <lucyleeow>`.

- |Fix| Computing pairwise distances with :func:`metrics.pairwise.euclidean_distances`
  no longer raises an exception when `X` is provided as a `float64` array and
  `X_norm_squared` as a `float32` array.
  :pr:`27624` by :user:`Jérôme Dockès <jeromedockes>`.

- |Fix| :func:`f1_score` now provides correct values when handling various
  cases in which division by zero occurs by using a formulation that does not
  depend on the precision and recall values.
  :pr:`27577` by :user:`Omar Salman <OmarManzoor>` and
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :func:`metrics.make_scorer` now raises an error when using a regressor on a
  scorer requesting a non-thresholded decision function (from `decision_function` or
  `predict_proba`). 
Such scorers are specific to classification.
  :pr:`26840` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :meth:`metrics.DetCurveDisplay.from_predictions`,
  :class:`metrics.PrecisionRecallDisplay.from_predictions`,
  :class:`metrics.PredictionErrorDisplay.from_predictions`, and
  :class:`metrics.RocCurveDisplay.from_predictions` now return the correct type
  for subclasses.
  :pr:`27675` by :user:`John Cant <johncant>`.

- |API| Deprecated `needs_threshold` and `needs_proba` from :func:`metrics.make_scorer`.
  These parameters will be removed in version 1.6. Instead, use `response_method`, which
  accepts `"predict"`, `"predict_proba"` or `"decision_function"`, or a list of such
  values. `needs_proba=True` is equivalent to `response_method="predict_proba"` and
  `needs_threshold=True` is equivalent to
  `response_method=("decision_function", "predict_proba")`.
  :pr:`26840` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| The `squared` parameter of :func:`metrics.mean_squared_error` and
  :func:`metrics.mean_squared_log_error` is deprecated and will be removed in 1.6.
  Use the new functions :func:`metrics.root_mean_squared_error` and
  :func:`metrics.root_mean_squared_log_error` instead.
  :pr:`26734` by :user:`Alejandro Martin Gil <101AlexMartin>`.

:mod:`sklearn.model_selection`
..............................

- |Enhancement| :func:`model_selection.learning_curve` raises a warning when
  every cross-validation fold fails.
  :pr:`26299` by :user:`Rahil Parikh <rprkh>`.

- |Fix| :class:`model_selection.GridSearchCV`,
  :class:`model_selection.RandomizedSearchCV`, and
  :class:`model_selection.HalvingGridSearchCV` now don't change the given
  object in the parameter grid if it's an estimator.
  :pr:`26786` by `Adrin Jalali`_.

:mod:`sklearn.multioutput`
..........................

- |Enhancement| Add method `predict_log_proba` to :class:`multioutput.ClassifierChain`.
  :pr:`27720` by :user:`Guillaume 
Lemaitre <glemaitre>`.

:mod:`sklearn.neighbors`
........................

- |Efficiency| :meth:`sklearn.neighbors.KNeighborsRegressor.predict` and
  :meth:`sklearn.neighbors.KNeighborsClassifier.predict_proba` now efficiently support
  pairs of dense and sparse datasets.
  :pr:`27018` by :user:`Julien Jerphanion <jjerphan>`.

- |Efficiency| The performance of :meth:`neighbors.RadiusNeighborsClassifier.predict`
  and of :meth:`neighbors.RadiusNeighborsClassifier.predict_proba` has been improved
  when `radius` is large and `algorithm="brute"` with non-Euclidean metrics.
  :pr:`26828` by :user:`Omar Salman <OmarManzoor>`.

- |Fix| Improve error message for :class:`neighbors.LocalOutlierFactor`
  when it is invoked with `n_samples=n_neighbors`.
  :pr:`23317` by :user:`Bharat Raghunathan <bharatr21>`.

- |Fix| :meth:`neighbors.KNeighborsClassifier.predict` and
  :meth:`neighbors.KNeighborsClassifier.predict_proba` now raise an error when the
  weights of all neighbors of some sample are zero. 
This can happen when `weights`
  is a user-defined function.
  :pr:`26410` by :user:`Yao Xiao <Charlie-XIAO>`.

- |API| :class:`neighbors.KNeighborsRegressor` now accepts
  :class:`metrics.DistanceMetric` objects directly via the `metric` keyword
  argument allowing for the use of accelerated third-party
  :class:`metrics.DistanceMetric` objects.
  :pr:`26267` by :user:`Meekail Zain <micky774>`.

:mod:`sklearn.preprocessing`
............................

- |Efficiency| :class:`preprocessing.OrdinalEncoder` avoids calculating
  missing indices twice to improve efficiency.
  :pr:`27017` by :user:`Xuefeng Xu <xuefeng-xu>`.

- |Efficiency| Improves efficiency in :class:`preprocessing.OneHotEncoder` and
  :class:`preprocessing.OrdinalEncoder` in checking `nan`.
  :pr:`27760` by :user:`Xuefeng Xu <xuefeng-xu>`.

- |Enhancement| Improves warnings in :class:`preprocessing.FunctionTransformer` when
  `func` returns a pandas dataframe and the output is configured to be pandas.
  :pr:`26944` by `Thomas Fan`_.

- |Enhancement| :class:`preprocessing.TargetEncoder` now supports `target_type`
  'multiclass'.
  :pr:`26674` by :user:`Lucy Liu <lucyleeow>`.

- |Fix| :class:`preprocessing.OneHotEncoder` and :class:`preprocessing.OrdinalEncoder`
  raise an exception when `nan` is a category and is not the last in the
  user-provided categories.
  :pr:`27309` by :user:`Xuefeng Xu <xuefeng-xu>`.

- |Fix| :class:`preprocessing.OneHotEncoder` and :class:`preprocessing.OrdinalEncoder`
  raise an exception if the user-provided categories contain duplicates.
  :pr:`27328` by :user:`Xuefeng Xu <xuefeng-xu>`.

- |Fix| :class:`preprocessing.FunctionTransformer` raises an error at `transform` if
  the output of `get_feature_names_out` is not consistent with the column names of the
  output container if those are defined.
  :pr:`27801` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| Raise a `NotFittedError` in :class:`preprocessing.OrdinalEncoder` 
when calling
  `transform` without calling `fit`, since `categories` always needs to be checked.
  :pr:`27821` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.tree`
...................

- |Feature| :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`,
  :class:`tree.ExtraTreeClassifier` and :class:`tree.ExtraTreeRegressor` now support
  monotonic constraints, useful when features are supposed to have a positive/negative
  effect on the target. Missing values in the train data and multi-output targets are
  not supported.
  :pr:`13649` by :user:`Samuel Ronsin <samronsin>`, initiated by
  :user:`Patrick O'Reilly <pat-oreilly>`.

:mod:`sklearn.utils`
....................

- |Enhancement| :func:`sklearn.utils.estimator_html_repr` dynamically adapts
  diagram colors based on the browser's `prefers-color-scheme`, providing
  improved adaptability to dark mode environments.
  :pr:`26862` by :user:`Andrew Goh Yisheng <9y5>`, `Thomas Fan`_, `Adrin
  Jalali`_.

- |Enhancement| :class:`~utils.metadata_routing.MetadataRequest` and
  :class:`~utils.metadata_routing.MetadataRouter` now have a ``consumes`` method
  which can be used to check whether a given set of parameters would be consumed.
  :pr:`26831` by `Adrin Jalali`_.

- |Enhancement| Make :func:`sklearn.utils.check_array` attempt to output
  `int32`-indexed CSR and COO arrays when converting from DIA arrays if the number of
  non-zero entries is small enough. This ensures that estimators implemented in Cython
  that do not accept `int64`-indexed sparse data structures now consistently
  accept the same sparse input formats for SciPy sparse matrices and arrays.
  :pr:`27372` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :func:`sklearn.utils.check_array` now accepts both matrix and array from
  the sparse SciPy module. 
The previous implementation would fail when `copy=True` by\n  calling the NumPy-specific `np.may_share_memory`, which does not work with SciPy\n  sparse arrays and does not return the correct result for SciPy sparse matrices.\n  :pr:`27336` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| :func:`~utils.estimator_checks.check_estimators_pickle` with\n  `readonly_memmap=True` now relies on joblib's own capability to allocate\n  aligned memory mapped arrays when loading a serialized estimator instead of\n  calling a dedicated private function that would crash when OpenBLAS\n  misdetects the CPU architecture.\n  :pr:`27614` by :user:`Olivier Grisel <ogrisel>`.\n\n- |Fix| The error message in :func:`~utils.check_array` when a sparse matrix is\n  passed but `accept_sparse` is `False` now suggests using `.toarray()` and not\n  `X.toarray()`.\n  :pr:`27757` by :user:`Lucy Liu <lucyleeow>`.\n\n- |Fix| :func:`~utils.check_array` now outputs the right error message\n  when the input is a Series instead of a DataFrame.\n  :pr:`28090` by :user:`Stan Furrer <stanFurrer>` and :user:`Yao Xiao <Charlie-XIAO>`.\n\n- |API| :func:`sklearn.extmath.log_logistic` is deprecated and will be removed in 1.6.\n  Use `-np.logaddexp(0, -x)` instead.\n  :pr:`27544` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n.. 
rubric:: Code and documentation contributors\n\nThanks to everyone who has contributed to the maintenance and improvement of\nthe project since version 1.3, including:\n\n101AlexMartin, Abhishek Singh Kushwah, Adam Li, Adarsh Wase, Adrin Jalali,\nAdvik Sinha, Alex, Alexander Al-Feghali, Alexis IMBERT, AlexL, Alex Molas, Anam\nFatima, Andrew Goh, andyscanzio, Aniket Patil, Artem Kislovskiy, Arturo Amor,\nashah002, avm19, Ben Holmes, Ben Mares, Benoit Chevallier-Mames, Bharat\nRaghunathan, Binesh Bannerjee, Brendan Lu, Brevin Kunde, Camille Troillard,\nCarlo Lemos, Chad Parmet, Christian Clauss, Christian Lorentzen, Christian\nVeenhuis, Christos Aridas, Cindy Liang, Claudio Salvatore Arcidiacono, Connor\nBoyle, cynthias13w, DaminK, Daniele Ongari, Daniel Schmitz, Daniel Tinoco,\nDavid Brochart, Deborah L. Haar, DevanshKyada27, Dimitri Papadopoulos Orfanos,\nDmitry Nesterov, DUONG, Edoardo Abati, Eitan Hemed, Elabonga Atuo, Elisabeth\nG\u00fcnther, Emma Carballal, Emmanuel Ferdman, epimorphic, Erwan Le Floch, Fabian\nEgli, Filip Karlo Do\u0161ilovi\u0107, Florian Idelberger, Franck Charras, Gael\nVaroquaux, Ganesh Tata, Gleb Levitski, Guillaume Lemaitre, Haoying Zhang,\nHarmanan Kohli, Ily, ioangatop, IsaacTrost, Isaac Virshup, Iwona Zdzieblo,\nJakub Kaczmarzyk, James McDermott, Jarrod Millman, JB Mountford, J\u00e9r\u00e9mie du\nBoisberranger, J\u00e9r\u00f4me Dock\u00e8s, Jiawei Zhang, Joel Nothman, John Cant, John\nHopfensperger, Jona Sassenhagen, Jon Nordby, Julien Jerphanion, Kennedy Waweru,\nkevin moore, Kian Eliasi, Kishan Ved, Konstantinos Pitas, Koustav Ghosh, Kushan\nSharma, ldwy4, Linus, Lohit SundaramahaLingam, Loic Esteve, Lorenz, Louis\nFouquet, Lucy Liu, Luis Silvestrin, Luk\u00e1\u0161 Folwarczn\u00fd, Lukas Geiger, Malte\nLondschien, Marcus Fraa\u00df, Marek Hanu\u0161, Maren Westermann, Mark Elliot, Martin\nLarralde, Mateusz Sok\u00f3\u0142, mathurinm, mecopur, Meekail Zain, Michael Higgins,\nMiki Watanabe, Milton Gomez, MN193, Mohammed Hamdy, Mohit 
Joshi, mrastgoo,\nNaman Dhingra, Naoise Holohan, Narendra Singh dangi, Noa Malem-Shinitski,\nNolan, Nurseit Kamchyev, Oleksii Kachaiev, Olivier Grisel, Omar Salman, partev,\nPeter Hull, Peter Steinbach, Pierre de Fr\u00e9minville, Pooja Subramaniam, Puneeth\nK, qmarcou, Quentin Barth\u00e9lemy, Rahil Parikh, Rahul Mahajan, Raj Pulapakura,\nRaphael, Ricardo Peres, Riccardo Cappuzzo, Roman Lutz, Salim Dohri, Samuel O.\nRonsin, Sandip Dutta, Sayed Qaiser Ali, scaja, scikit-learn-bot, Sebastian\nBerg, Shreesha Kumar Bhat, Shubhal Gupta, S\u00f8ren Fuglede J\u00f8rgensen, Stefanie\nSenger, Tamara, Tanjina Afroj, THARAK HEGDE, thebabush, Thomas J. Fan, Thomas\nRoehr, Tialo, Tim Head, tongyu, Venkatachalam N, Vijeth Moudgalya, Vincent M,\nVivek Reddy P, Vladimir Fokow, Xiao Yuan, Xuefeng Xu, Yang Tao, Yao Xiao,\nYuchen Zhou, Yuusuke Hiramatsu
 SAMME R  was   deprecated and will be removed in 1 6     pr  26830  by  user  Stefanie Senger  StefanieSenger      mod  sklearn feature extraction                                        API  Changed error type from  class  AttributeError  to    class  exceptions NotFittedError  in unfitted instances of    class  feature extraction DictVectorizer  for the following methods     func  feature extraction DictVectorizer inverse transform      func  feature extraction DictVectorizer restrict      func  feature extraction DictVectorizer transform      pr  24838  by  user  Lorenz Hertel  LoHertel      mod  sklearn feature selection                                       Enhancement   class  feature selection SelectKBest      class  feature selection SelectPercentile   and    class  feature selection GenericUnivariateSelect  now support unsupervised   feature selection by providing a  score func  taking  X  and  y None      pr  27721  by  user  Guillaume Lemaitre  glemaitre        Enhancement   class  feature selection SelectKBest  and    class  feature selection GenericUnivariateSelect  with  mode  k best     now shows a warning when  k  is greater than the number of features     pr  27841  by  Thomas Fan        Fix   class  feature selection RFE  and  class  feature selection RFECV  do   not check for nans during input validation     pr  21807  by  Thomas Fan      mod  sklearn inspection                                Enhancement   class  inspection DecisionBoundaryDisplay  now accepts a parameter    class of interest  to select the class of interest when plotting the response   provided by  response method  predict proba   or    response method  decision function    It allows to plot the decision boundary for   both binary and multiclass classifiers     pr  27291  by  user  Guillaume Lemaitre  glemaitre        Fix   meth  inspection DecisionBoundaryDisplay from estimator  and    class  inspection PartialDependenceDisplay from estimator  now return the correct   type for 
subclasses     pr  27675  by  user  John Cant  johncant        API   class  inspection DecisionBoundaryDisplay  raise an  AttributeError  instead   of a  ValueError  when an estimator does not implement the requested response method     pr  27291  by  user  Guillaume Lemaitre  glemaitre      mod  sklearn kernel ridge                                  Fix  The  degree  parameter in the  class  kernel ridge KernelRidge    constructor now accepts real values instead of only integral values in   accordance with the  degree  parameter of the    class  sklearn metrics pairwise polynomial kernel      pr  27668  by  user  Nolan McMahon  NolantheNerd      mod  sklearn linear model                                  Efficiency   class  linear model LogisticRegression  and    class  linear model LogisticRegressionCV  now have much better convergence for   solvers   lbfgs   and   newton cg    Both solvers can now reach much higher precision   for the coefficients depending on the specified  tol   Additionally  lbfgs can   make better use of  tol   i e   stop sooner or reach higher precision  This is   accomplished by better scaling of the objective function  i e   using average per   sample losses instead of sum of per sample losses     pr  26721  by  user  Christian Lorentzen  lorentzenchr        Efficiency   class  linear model LogisticRegression  and    class  linear model LogisticRegressionCV  with solver   newton cg   can now be   considerably faster for some data and parameter settings  This is accomplished by a   better line search convergence check for negligible loss improvements that takes into   account gradient information     pr  26721  by  user  Christian Lorentzen  lorentzenchr        Efficiency  Solver   newton cg   in  class  linear model LogisticRegression  and    class  linear model LogisticRegressionCV  uses a little less memory  The effect is   proportional to the number of coefficients   n features   n classes       pr  27417  by  user  Christian Lorentzen  
lorentzenchr        Fix  Ensure that the  sigma   attribute of    class  linear model ARDRegression  and  class  linear model BayesianRidge    always has a  float32  dtype when fitted on  float32  data  even with the   type promotion rules of NumPy 2     pr  27899  by  user  Olivier Grisel  ogrisel        API  The attribute  loss function   of  class  linear model SGDClassifier  and    class  linear model SGDOneClassSVM  has been deprecated and will be removed in   version 1 6     pr  27979  by  user  Christian Lorentzen  lorentzenchr      mod  sklearn metrics                             Efficiency  Computing pairwise distances via  class  metrics DistanceMetric    for CSR x CSR   Dense x CSR  and CSR x Dense datasets is now 1 5x faster     pr  26765  by  user  Meekail Zain  micky774        Efficiency  Computing distances via  class  metrics DistanceMetric    for CSR x CSR  Dense x CSR  and CSR x Dense now uses  50  less memory    and outputs distances in the same dtype as the provided data     pr  27006  by  user  Meekail Zain  micky774        Enhancement  Improve the rendering of the plot obtained with the    class  metrics PrecisionRecallDisplay  and  class  metrics RocCurveDisplay    classes  the x  and y axis limits are set to  0  1  and the aspect ratio between   both axis is set to be 1 to get a square plot     pr  26366  by  user  Mojdeh Rastgoo  mrastgoo        Enhancement  Added  neg root mean squared log error scorer  as scorer    pr  26734  by  user  Alejandro Martin Gil  101AlexMartin        Enhancement   func  metrics confusion matrix  now warns when only one label was   found in  y true  and  y pred      pr  27650  by  user  Lucy Liu  lucyleeow        Fix  computing pairwise distances with  func  metrics pairwise euclidean distances    no longer raises an exception when  X  is provided as a  float64  array and    X norm squared  as a  float32  array     pr  27624  by  user  J r me Dock s  jeromedockes        Fix   func  f1 score  now provides correct 
values when handling various   cases in which division by zero occurs by using a formulation that does not   depend on the precision and recall values     pr  27577  by  user  Omar Salman  OmarManzoor   and    user  Guillaume Lemaitre  glemaitre        Fix   func  metrics make scorer  now raises an error when using a regressor on a   scorer requesting a non thresholded decision function  from  decision function  or    predict proba    Such scorer are specific to classification     pr  26840  by  user  Guillaume Lemaitre  glemaitre        Fix   meth  metrics DetCurveDisplay from predictions      class  metrics PrecisionRecallDisplay from predictions      class  metrics PredictionErrorDisplay from predictions   and    class  metrics RocCurveDisplay from predictions  now return the correct type   for subclasses     pr  27675  by  user  John Cant  johncant        API  Deprecated  needs threshold  and  needs proba  from  func  metrics make scorer     These parameters will be removed in version 1 6  Instead  use  response method  that   accepts   predict      predict proba   or   decision function   or a list of such   values   needs proba True  is equivalent to  response method  predict proba   and    needs threshold True  is equivalent to    response method   decision function    predict proba        pr  26840  by  user  Guillaume Lemaitre  glemaitre        API  The  squared  parameter of  func  metrics mean squared error  and    func  metrics mean squared log error  is deprecated and will be removed in 1 6    Use the new functions  func  metrics root mean squared error  and    func  metrics root mean squared log error  instead     pr  26734  by  user  Alejandro Martin Gil  101AlexMartin      mod  sklearn model selection                                     Enhancement   func  model selection learning curve  raises a warning when   every cross validation fold fails     pr  26299  by  user  Rahil Parikh  rprkh        Fix   class  model selection GridSearchCV      class  
model selection RandomizedSearchCV   and    class  model selection HalvingGridSearchCV  now don t change the given   object in the parameter grid if it s an estimator     pr  26786  by  Adrin Jalali      mod  sklearn multioutput                                 Enhancement  Add method  predict log proba  to  class  multioutput ClassifierChain      pr  27720  by  user  Guillaume Lemaitre  glemaitre      mod  sklearn neighbors                               Efficiency   meth  sklearn neighbors KNeighborsRegressor predict  and    meth  sklearn neighbors KNeighborsClassifier predict proba  now efficiently support   pairs of dense and sparse datasets     pr  27018  by  user  Julien Jerphanion  jjerphan        Efficiency  The performance of  meth  neighbors RadiusNeighborsClassifier predict    and of  meth  neighbors RadiusNeighborsClassifier predict proba  has been improved   when  radius  is large and  algorithm  brute   with non Euclidean metrics     pr  26828  by  user  Omar Salman  OmarManzoor        Fix  Improve error message for  class  neighbors LocalOutlierFactor    when it is invoked with  n samples n neighbors      pr  23317  by  user  Bharat Raghunathan  bharatr21        Fix   meth  neighbors KNeighborsClassifier predict  and    meth  neighbors KNeighborsClassifier predict proba  now raises an error when the   weights of all neighbors of some sample are zero  This can happen when  weights    is a user defined function     pr  26410  by  user  Yao Xiao  Charlie XIAO        API   class  neighbors KNeighborsRegressor  now accepts    class  metrics DistanceMetric  objects directly via the  metric  keyword   argument allowing for the use of accelerated third party    class  metrics DistanceMetric  objects     pr  26267  by  user  Meekail Zain  micky774      mod  sklearn preprocessing                                   Efficiency   class  preprocessing OrdinalEncoder  avoids calculating   missing indices twice to improve efficiency     pr  27017  by  user  Xuefeng Xu  
xuefeng xu        Efficiency  Improves efficiency in  class  preprocessing OneHotEncoder  and    class  preprocessing OrdinalEncoder  in checking  nan      pr  27760  by  user  Xuefeng Xu  xuefeng xu        Enhancement  Improves warnings in  class  preprocessing FunctionTransformer  when    func  returns a pandas dataframe and the output is configured to be pandas     pr  26944  by  Thomas Fan        Enhancement   class  preprocessing TargetEncoder  now supports  target type     multiclass      pr  26674  by  user  Lucy Liu  lucyleeow        Fix   class  preprocessing OneHotEncoder  and  class  preprocessing OrdinalEncoder    raise an exception when  nan  is a category and is not the last in the user s   provided categories     pr  27309  by  user  Xuefeng Xu  xuefeng xu        Fix   class  preprocessing OneHotEncoder  and  class  preprocessing OrdinalEncoder    raise an exception if the user provided categories contain duplicates     pr  27328  by  user  Xuefeng Xu  xuefeng xu        Fix   class  preprocessing FunctionTransformer  raises an error at  transform  if   the output of  get feature names out  is not consistent with the column names of the   output container if those are defined     pr  27801  by  user  Guillaume Lemaitre  glemaitre        Fix  Raise a  NotFittedError  in  class  preprocessing OrdinalEncoder  when calling    transform  without calling  fit  since  categories  always requires to be checked     pr  27821  by  user  Guillaume Lemaitre  glemaitre      mod  sklearn tree                          Feature   class  tree DecisionTreeClassifier    class  tree DecisionTreeRegressor      class  tree ExtraTreeClassifier  and  class  tree ExtraTreeRegressor  now support   monotonic constraints  useful when features are supposed to have a positive negative   effect on the target  Missing values in the train data and multi output targets are   not supported     pr  13649  by  user  Samuel Ronsin  samronsin    initiated by    user  Patrick O Reilly  pat 
oreilly      mod  sklearn utils                           Enhancement   func  sklearn utils estimator html repr  dynamically adapts   diagram colors based on the browser s  prefers color scheme   providing   improved adaptability to dark mode environments     pr  26862  by  user  Andrew Goh Yisheng  9y5     Thomas Fan     Adrin   Jalali        Enhancement   class   utils metadata routing MetadataRequest  and    class   utils metadata routing MetadataRouter  now have a   consumes   method   which can be used to check whether a given set of parameters would be consumed     pr  26831  by  Adrin Jalali        Enhancement  Make  func  sklearn utils check array  attempt to output    int32  indexed CSR and COO arrays when converting from DIA arrays if the number of   non zero entries is small enough  This ensures that estimators implemented in Cython   and that do not accept  int64  indexed sparse datastucture  now consistently   accept the same sparse input formats for SciPy sparse matrices and arrays     pr  27372  by  user  Guillaume Lemaitre  glemaitre        Fix   func  sklearn utils check array  should accept both matrix and array from   the sparse SciPy module  The previous implementation would fail if  copy True  by   calling specific NumPy  np may share memory  that does not work with SciPy sparse   array and does not return the correct result for SciPy sparse matrix     pr  27336  by  user  Guillaume Lemaitre  glemaitre        Fix   func   utils estimator checks check estimators pickle  with    readonly memmap True  now relies on joblib s own capability to allocate   aligned memory mapped arrays when loading a serialized estimator instead of   calling a dedicated private function that would crash when OpenBLAS   misdetects the CPU architecture     pr  27614  by  user  Olivier Grisel  ogrisel        Fix  Error message in  func   utils check array  when a sparse matrix was   passed but  accept sparse  is  False  now suggests to use   toarray    and not    X 
toarray        pr  27757  by  user  Lucy Liu  lucyleeow        Fix  Fix the function  func   utils check array  to output the right error message   when the input is a Series instead of a DataFrame     pr  28090  by  user  Stan Furrer  stanFurrer   and  user  Yao Xiao  Charlie XIAO        API   func  sklearn extmath log logistic  is deprecated and will be removed in 1 6    Use   np logaddexp 0   x   instead     pr  27544  by  user  Christian Lorentzen  lorentzenchr        rubric   Code and documentation contributors  Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1 3  including   101AlexMartin  Abhishek Singh Kushwah  Adam Li  Adarsh Wase  Adrin Jalali  Advik Sinha  Alex  Alexander Al Feghali  Alexis IMBERT  AlexL  Alex Molas  Anam Fatima  Andrew Goh  andyscanzio  Aniket Patil  Artem Kislovskiy  Arturo Amor  ashah002  avm19  Ben Holmes  Ben Mares  Benoit Chevallier Mames  Bharat Raghunathan  Binesh Bannerjee  Brendan Lu  Brevin Kunde  Camille Troillard  Carlo Lemos  Chad Parmet  Christian Clauss  Christian Lorentzen  Christian Veenhuis  Christos Aridas  Cindy Liang  Claudio Salvatore Arcidiacono  Connor Boyle  cynthias13w  DaminK  Daniele Ongari  Daniel Schmitz  Daniel Tinoco  David Brochart  Deborah L  Haar  DevanshKyada27  Dimitri Papadopoulos Orfanos  Dmitry Nesterov  DUONG  Edoardo Abati  Eitan Hemed  Elabonga Atuo  Elisabeth G nther  Emma Carballal  Emmanuel Ferdman  epimorphic  Erwan Le Floch  Fabian Egli  Filip Karlo Do ilovi   Florian Idelberger  Franck Charras  Gael Varoquaux  Ganesh Tata  Gleb Levitski  Guillaume Lemaitre  Haoying Zhang  Harmanan Kohli  Ily  ioangatop  IsaacTrost  Isaac Virshup  Iwona Zdzieblo  Jakub Kaczmarzyk  James McDermott  Jarrod Millman  JB Mountford  J r mie du Boisberranger  J r me Dock s  Jiawei Zhang  Joel Nothman  John Cant  John Hopfensperger  Jona Sassenhagen  Jon Nordby  Julien Jerphanion  Kennedy Waweru  kevin moore  Kian Eliasi  Kishan Ved  Konstantinos Pitas  
Koustav Ghosh  Kushan Sharma  ldwy4  Linus  Lohit SundaramahaLingam  Loic Esteve  Lorenz  Louis Fouquet  Lucy Liu  Luis Silvestrin  Luk   Folwarczn   Lukas Geiger  Malte Londschien  Marcus Fraa   Marek Hanu   Maren Westermann  Mark Elliot  Martin Larralde  Mateusz Sok    mathurinm  mecopur  Meekail Zain  Michael Higgins  Miki Watanabe  Milton Gomez  MN193  Mohammed Hamdy  Mohit Joshi  mrastgoo  Naman Dhingra  Naoise Holohan  Narendra Singh dangi  Noa Malem Shinitski  Nolan  Nurseit Kamchyev  Oleksii Kachaiev  Olivier Grisel  Omar Salman  partev  Peter Hull  Peter Steinbach  Pierre de Fr minville  Pooja Subramaniam  Puneeth K  qmarcou  Quentin Barth lemy  Rahil Parikh  Rahul Mahajan  Raj Pulapakura  Raphael  Ricardo Peres  Riccardo Cappuzzo  Roman Lutz  Salim Dohri  Samuel O  Ronsin  Sandip Dutta  Sayed Qaiser Ali  scaja  scikit learn bot  Sebastian Berg  Shreesha Kumar Bhat  Shubhal Gupta  S ren Fuglede J rgensen  Stefanie Senger  Tamara  Tanjina Afroj  THARAK HEGDE  thebabush  Thomas J  Fan  Thomas Roehr  Tialo  Tim Head  tongyu  Venkatachalam N  Vijeth Moudgalya  Vincent M  Vivek Reddy P  Vladimir Fokow  Xiao Yuan  Xuefeng Xu  Yang Tao  Yao Xiao  Yuchen Zhou  Yuusuke Hiramatsu"}
{"questions":"scikit-learn sklearn contributors rst changes019 Version 0 19","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n============\nVersion 0.19\n============\n\n.. _changes_0_19:\n\nVersion 0.19.2\n==============\n\n**July, 2018**\n\nThis release is exclusively in order to support Python 3.7.\n\nRelated changes\n---------------\n\n- ``n_iter_`` may vary from previous releases in\n  :class:`linear_model.LogisticRegression` with ``solver='lbfgs'`` and\n  :class:`linear_model.HuberRegressor`.  For Scipy <= 1.0.0, the optimizer could\n  perform more than the requested maximum number of iterations. Now both\n  estimators will report at most ``max_iter`` iterations even if more were\n  performed. :issue:`10723` by `Joel Nothman`_.\n\nVersion 0.19.1\n==============\n\n**October 23, 2017**\n\nThis is a bug-fix release with some minor documentation improvements and\nenhancements to features released in 0.19.0.\n\nNote there may be minor differences in TSNE output in this release (due to\n:issue:`9623`), in the case where multiple samples have equal distance to some\nsample.\n\nChangelog\n---------\n\nAPI changes\n...........\n\n- Reverted the addition of ``metrics.ndcg_score`` and ``metrics.dcg_score``\n  which had been merged into version 0.19.0 by error.  The implementations\n  were broken and undocumented.\n\n- ``return_train_score`` which was added to\n  :class:`model_selection.GridSearchCV`,\n  :class:`model_selection.RandomizedSearchCV` and\n  :func:`model_selection.cross_validate` in version 0.19.0 will be changing its\n  default value from True to False in version 0.21.  We found that calculating\n  training score could have a great effect on cross validation runtime in some\n  cases.  
Users should explicitly set ``return_train_score`` to False if\n  prediction or scoring functions are slow, resulting in a deleterious effect\n  on CV runtime, or to True if they wish to use the calculated scores.\n  :issue:`9677` by :user:`Kumar Ashutosh <thechargedneutron>` and `Joel\n  Nothman`_.\n\n- ``correlation_models`` and ``regression_models`` from the legacy gaussian\n  processes implementation have been belatedly deprecated. :issue:`9717` by\n  :user:`Kumar Ashutosh <thechargedneutron>`.\n\nBug fixes\n.........\n\n- Avoid integer overflows in :func:`metrics.matthews_corrcoef`.\n  :issue:`9693` by :user:`Sam Steingold <sam-s>`.\n\n- Fixed a bug in the objective function for :class:`manifold.TSNE` (both exact\n  and with the Barnes-Hut approximation) when ``n_components >= 3``.\n  :issue:`9711` by :user:`goncalo-rodrigues`.\n\n- Fix regression in :func:`model_selection.cross_val_predict` where it\n  raised an error with ``method='predict_proba'`` for some probabilistic\n  classifiers. :issue:`9641` by :user:`James Bourbeau <jrbourbeau>`.\n\n- Fixed a bug where :func:`datasets.make_classification` modified its input\n  ``weights``. :issue:`9865` by :user:`Sachin Kelkar <s4chin>`.\n\n- :class:`model_selection.StratifiedShuffleSplit` now works with multioutput\n  multiclass or multilabel data with more than 1000 columns.  :issue:`9922` by\n  :user:`Charlie Brummitt <crbrummitt>`.\n\n- Fixed a bug with nested and conditional parameter setting, e.g. setting a\n  pipeline step and its parameter at the same time. :issue:`9945` by `Andreas\n  M\u00fcller`_ and `Joel Nothman`_.\n\nRegressions in 0.19.0 fixed in 0.19.1:\n\n- Fixed a bug where parallelised prediction in random forests was not\n  thread-safe and could (rarely) result in arbitrary errors. :issue:`9830` by\n  `Joel Nothman`_.\n\n- Fix regression in :func:`model_selection.cross_val_predict` where it no\n  longer accepted ``X`` as a list. 
:issue:`9600` by :user:`Rasul Kerimov\n  <CoderINusE>`.\n\n- Fixed handling of :func:`model_selection.cross_val_predict` for binary\n  classification with ``method='decision_function'``. :issue:`9593` by\n  :user:`Reiichiro Nakano <reiinakano>` and core devs.\n\n- Fix regression in :class:`pipeline.Pipeline` where it no longer accepted\n  ``steps`` as a tuple. :issue:`9604` by :user:`Joris Van den Bossche\n  <jorisvandenbossche>`.\n\n- Fix bug where ``n_iter`` was not properly deprecated, leaving ``n_iter``\n  unavailable for interim use in\n  :class:`linear_model.SGDClassifier`, :class:`linear_model.SGDRegressor`,\n  :class:`linear_model.PassiveAggressiveClassifier`,\n  :class:`linear_model.PassiveAggressiveRegressor` and\n  :class:`linear_model.Perceptron`. :issue:`9558` by `Andreas M\u00fcller`_.\n\n- Dataset fetchers make sure temporary files are closed before removing them,\n  which caused errors on Windows. :issue:`9847` by :user:`Joan Massich <massich>`.\n\n- Fixed a regression in :class:`manifold.TSNE` where it no longer supported\n  metrics other than 'euclidean' and 'precomputed'. :issue:`9623` by :user:`Oli\n  Blum <oliblum90>`.\n\nEnhancements\n............\n\n- Our test suite and :func:`utils.estimator_checks.check_estimator` can now be\n  run without Nose installed. :issue:`9697` by :user:`Joan Massich <massich>`.\n\n- To improve usability of version 0.19's :class:`pipeline.Pipeline`\n  caching, ``memory`` now allows ``joblib.Memory`` instances.\n  This makes use of the new :func:`utils.validation.check_memory` helper.\n  :issue:`9584` by :user:`Kumar Ashutosh <thechargedneutron>`.\n\n- Some fixes to examples: :issue:`9750`, :issue:`9788`, :issue:`9815`\n\n- Made a FutureWarning in SGD-based estimators less verbose. 
:issue:`9802` by\n  :user:`Vrishank Bhardwaj <vrishank97>`.\n\nCode and Documentation Contributors\n-----------------------------------\n\nWith thanks to:\n\nJoel Nothman, Loic Esteve, Andreas Mueller, Kumar Ashutosh,\nVrishank Bhardwaj, Hanmin Qin, Rasul Kerimov, James Bourbeau,\nNagarjuna Kumar, Nathaniel Saul, Olivier Grisel, Roman\nYurchak, Reiichiro Nakano, Sachin Kelkar, Sam Steingold,\nYaroslav Halchenko, diegodlh, felix, goncalo-rodrigues,\njkleint, oliblum90, pasbi, Anthony Gitter, Ben Lawson, Charlie\nBrummitt, Didi Bar-Zev, Gael Varoquaux, Joan Massich, Joris\nVan den Bossche, nielsenmarkus11\n\n\nVersion 0.19\n============\n\n**August 12, 2017**\n\nHighlights\n----------\n\nWe are excited to release a number of great new features including\n:class:`neighbors.LocalOutlierFactor` for anomaly detection,\n:class:`preprocessing.QuantileTransformer` for robust feature transformation,\nand the :class:`multioutput.ClassifierChain` meta-estimator to simply account\nfor dependencies between classes in multilabel problems. We have some new\nalgorithms in existing estimators, such as multiplicative update in\n:class:`decomposition.NMF` and multinomial\n:class:`linear_model.LogisticRegression` with L1 loss (use ``solver='saga'``).\n\nCross validation is now able to return the results from multiple metric\nevaluations. The new :func:`model_selection.cross_validate` can return many\nscores on the test data as well as training set performance and timings, and we\nhave extended the ``scoring`` and ``refit`` parameters for grid\/randomized\nsearch :ref:`to handle multiple metrics <multimetric_grid_search>`.\n\nYou can also learn faster.  For instance, the :ref:`new option to cache\ntransformations <pipeline_cache>` in :class:`pipeline.Pipeline` makes grid\nsearch over pipelines including slow transformations much more efficient.  
And\nyou can predict faster: if you're sure you know what you're doing, you can turn\noff validating that the input is finite using :func:`config_context`.\n\nWe've made some important fixes too.  We've fixed a longstanding implementation\nerror in :func:`metrics.average_precision_score`, so please be cautious with\nprior results reported from that function.  A number of errors in the\n:class:`manifold.TSNE` implementation have been fixed, particularly in the\ndefault Barnes-Hut approximation.  :class:`semi_supervised.LabelSpreading` and\n:class:`semi_supervised.LabelPropagation` have had substantial fixes.\nLabelPropagation was previously broken. LabelSpreading should now correctly\nrespect its alpha parameter.\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- :class:`cluster.KMeans` with sparse X and initial centroids given (bug fix)\n- :class:`cross_decomposition.PLSRegression`\n  with ``scale=True`` (bug fix)\n- :class:`ensemble.GradientBoostingClassifier` and\n  :class:`ensemble.GradientBoostingRegressor` where ``min_impurity_split`` is used (bug fix)\n- gradient boosting ``loss='quantile'`` (bug fix)\n- :class:`ensemble.IsolationForest` (bug fix)\n- :class:`feature_selection.SelectFdr` (bug fix)\n- :class:`linear_model.RANSACRegressor` (bug fix)\n- :class:`linear_model.LassoLars` (bug fix)\n- :class:`linear_model.LassoLarsIC` (bug fix)\n- :class:`manifold.TSNE` (bug fix)\n- :class:`neighbors.NearestCentroid` (bug fix)\n- :class:`semi_supervised.LabelSpreading` (bug fix)\n- :class:`semi_supervised.LabelPropagation` (bug fix)\n- tree based models where ``min_weight_fraction_leaf`` is used (enhancement)\n- :class:`model_selection.StratifiedKFold` with ``shuffle=True``\n  (this change, due to :issue:`7823` was not 
mentioned in the release notes at\n  the time)\n\nDetails are listed in the changelog below.\n\n(While we are trying to better inform users by providing this information, we\ncannot assure that this list is complete.)\n\nChangelog\n---------\n\nNew features\n............\n\nClassifiers and regressors\n\n- Added :class:`multioutput.ClassifierChain` for multi-label\n  classification. By :user:`Adam Kleczewski <adamklec>`.\n\n- Added solver ``'saga'`` that implements the improved version of Stochastic\n  Average Gradient, in :class:`linear_model.LogisticRegression` and\n  :class:`linear_model.Ridge`. It allows the use of L1 penalty with\n  multinomial logistic loss, and behaves marginally better than 'sag'\n  during the first epochs of ridge and logistic regression.\n  :issue:`8446` by `Arthur Mensch`_.\n\nOther estimators\n\n- Added the :class:`neighbors.LocalOutlierFactor` class for anomaly\n  detection based on nearest neighbors.\n  :issue:`5279` by `Nicolas Goix`_ and `Alexandre Gramfort`_.\n\n- Added :class:`preprocessing.QuantileTransformer` class and\n  :func:`preprocessing.quantile_transform` function for features\n  normalization based on quantiles.\n  :issue:`8363` by :user:`Denis Engemann <dengemann>`,\n  :user:`Guillaume Lemaitre <glemaitre>`, `Olivier Grisel`_, `Raghav RV`_,\n  :user:`Thierry Guillemot <tguillemot>`, and `Gael Varoquaux`_.\n\n- The new solver ``'mu'`` implements a Multiplicative Update in\n  :class:`decomposition.NMF`, allowing the optimization of all\n  beta-divergences, including the Frobenius norm, the generalized\n  Kullback-Leibler divergence and the Itakura-Saito divergence.\n  :issue:`5295` by `Tom Dupre la Tour`_.\n\nModel selection and evaluation\n\n- :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` now support simultaneous\n  evaluation of multiple metrics. Refer to the\n  :ref:`multimetric_grid_search` section of the user guide for more\n  information. 
:issue:`7388` by `Raghav RV`_\n\n- Added the :func:`model_selection.cross_validate` which allows evaluation\n  of multiple metrics. This function returns a dict with more useful\n  information from cross-validation such as the train scores, fit times and\n  score times.\n  Refer to :ref:`multimetric_cross_validation` section of the userguide\n  for more information. :issue:`7388` by `Raghav RV`_\n\n- Added :func:`metrics.mean_squared_log_error`, which computes\n  the mean square error of the logarithmic transformation of targets,\n  particularly useful for targets with an exponential trend.\n  :issue:`7655` by :user:`Karan Desai <karandesai-96>`.\n\n- Added :func:`metrics.dcg_score` and :func:`metrics.ndcg_score`, which\n  compute Discounted cumulative gain (DCG) and Normalized discounted\n  cumulative gain (NDCG).\n  :issue:`7739` by :user:`David Gasquez <davidgasquez>`.\n\n- Added the :class:`model_selection.RepeatedKFold` and\n  :class:`model_selection.RepeatedStratifiedKFold`.\n  :issue:`8120` by `Neeraj Gangwar`_.\n\nMiscellaneous\n\n- Validation that input data contains no NaN or inf can now be suppressed\n  using :func:`config_context`, at your own risk. This will save on runtime,\n  and may be particularly useful for prediction time. :issue:`7548` by\n  `Joel Nothman`_.\n\n- Added a test to ensure parameter listing in docstrings match the\n  function\/class signature. :issue:`9206` by `Alexandre Gramfort`_ and\n  `Raghav RV`_.\n\nEnhancements\n............\n\nTrees and ensembles\n\n- The ``min_weight_fraction_leaf`` constraint in tree construction is now\n  more efficient, taking a fast path to declare a node a leaf if its weight\n  is less than 2 * the minimum. Note that the constructed tree will be\n  different from previous versions where ``min_weight_fraction_leaf`` is\n  used. 
:issue:`7441` by :user:`Nelson Liu <nelson-liu>`.\n\n- :class:`ensemble.GradientBoostingClassifier` and :class:`ensemble.GradientBoostingRegressor`\n  now support sparse input for prediction.\n  :issue:`6101` by :user:`Ibraim Ganiev <olologin>`.\n\n- :class:`ensemble.VotingClassifier` now allows changing estimators by using\n  :meth:`ensemble.VotingClassifier.set_params`. An estimator can also be\n  removed by setting it to ``None``.\n  :issue:`7674` by :user:`Yichuan Liu <yl565>`.\n\n- :func:`tree.export_graphviz` now shows configurable number of decimal\n  places. :issue:`8698` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- Added ``flatten_transform`` parameter to :class:`ensemble.VotingClassifier`\n  to change output shape of `transform` method to 2 dimensional.\n  :issue:`7794` by :user:`Ibraim Ganiev <olologin>` and\n  :user:`Herilalaina Rakotoarison <herilalaina>`.\n\nLinear, kernelized and related models\n\n- :class:`linear_model.SGDClassifier`, :class:`linear_model.SGDRegressor`,\n  :class:`linear_model.PassiveAggressiveClassifier`,\n  :class:`linear_model.PassiveAggressiveRegressor` and\n  :class:`linear_model.Perceptron` now expose ``max_iter`` and\n  ``tol`` parameters, to handle convergence more precisely.\n  ``n_iter`` parameter is deprecated, and the fitted estimator exposes\n  a ``n_iter_`` attribute, with actual number of iterations before\n  convergence. :issue:`5036` by `Tom Dupre la Tour`_.\n\n- Added ``average`` parameter to perform weight averaging in\n  :class:`linear_model.PassiveAggressiveClassifier`. 
:issue:`4939`
  by :user:`Andrea Esuli <aesuli>`.

- :class:`linear_model.RANSACRegressor` no longer throws an error
  when calling ``fit`` if no inliers are found in its first iteration.
  Furthermore, causes of skipped iterations are tracked in newly added
  attributes, ``n_skips_*``.
  :issue:`7914` by :user:`Michael Horrell <mthorrell>`.

- In :class:`gaussian_process.GaussianProcessRegressor`, method ``predict``
  is a lot faster with ``return_std=True``. :issue:`8591` by
  :user:`Hadrien Bertrand <hbertrand>`.

- Added ``return_std`` to the ``predict`` method of
  :class:`linear_model.ARDRegression` and
  :class:`linear_model.BayesianRidge`.
  :issue:`7838` by :user:`Sergey Feldman <sergeyf>`.

- Memory usage enhancements: Prevent cast from float32 to float64 in:
  :class:`linear_model.MultiTaskElasticNet`;
  :class:`linear_model.LogisticRegression` when using the newton-cg solver;
  and :class:`linear_model.Ridge` when using the svd, sparse_cg, cholesky or
  lsqr solvers. :issue:`8835`, :issue:`8061` by :user:`Joan Massich <massich>`,
  :user:`Nicolas Cordier <ncordier>`, and :user:`Thierry Guillemot <tguillemot>`.

Other predictors

- Custom metrics for the :mod:`sklearn.neighbors` binary trees now have
  fewer constraints: they must take two 1d-arrays and return a float.
  :issue:`6288` by `Jake Vanderplas`_.

- ``algorithm='auto'`` in :mod:`sklearn.neighbors` estimators now chooses the
  most appropriate algorithm for all input types and metrics. :issue:`9145` by
  :user:`Herilalaina Rakotoarison <herilalaina>` and :user:`Reddy Chinthala
  <preddy5>`.

Decomposition, manifold learning and clustering

- :class:`cluster.MiniBatchKMeans` and :class:`cluster.KMeans`
  now use significantly less memory when assigning data points to their
  nearest cluster center.
:issue:`7721` by :user:`Jon Crall <Erotemic>`.

- :class:`decomposition.PCA`, :class:`decomposition.IncrementalPCA` and
  :class:`decomposition.TruncatedSVD` now expose the singular values
  from the underlying SVD. They are stored in the attribute
  ``singular_values_``, like in :class:`decomposition.IncrementalPCA`.
  :issue:`7685` by :user:`Tommy Löfstedt <tomlof>`.

- :class:`decomposition.NMF` is now faster when ``beta_loss=0``.
  :issue:`9277` by :user:`hongkahjun`.

- Memory improvements for method ``barnes_hut`` in :class:`manifold.TSNE`.
  :issue:`7089` by :user:`Thomas Moreau <tomMoral>` and `Olivier Grisel`_.

- Optimization schedule improvements for Barnes-Hut :class:`manifold.TSNE`
  so the results are closer to those from the reference implementation
  `lvdmaaten/bhtsne <https://github.com/lvdmaaten/bhtsne>`_. By :user:`Thomas
  Moreau <tomMoral>` and `Olivier Grisel`_.

- Memory usage enhancements: Prevent cast from float32 to float64 in
  :class:`decomposition.PCA` and
  `decomposition.randomized_svd_low_rank`.
  :issue:`9067` by `Raghav RV`_.

Preprocessing and feature selection

- Added a ``norm_order`` parameter to :class:`feature_selection.SelectFromModel`
  to enable selection of the norm order when ``coef_`` is more than 1D.
  :issue:`6181` by :user:`Antoine Wendlinger <antoinewdg>`.

- Added the ability to use sparse matrices in :func:`feature_selection.f_regression`
  with ``center=True``. :issue:`8065` by :user:`Daniel LeJeune <acadiansith>`.

- Small performance improvement to n-gram creation in
  :mod:`sklearn.feature_extraction.text` by binding methods for loops and
  special-casing unigrams. :issue:`7567` by :user:`Jaye Doepke <jtdoepke>`.

- Relaxed the assumption on the data for the
  :class:`kernel_approximation.SkewedChi2Sampler`.
Since the Skewed-Chi2\n  kernel is defined on the open interval :math:`(-skewedness; +\\infty)^d`,\n  the transform function should not check whether ``X < 0`` but whether ``X <\n  -self.skewedness``. :issue:`7573` by :user:`Romain Brault <RomainBrault>`.\n\n- Made default kernel parameters kernel-dependent in\n  :class:`kernel_approximation.Nystroem`.\n  :issue:`5229` by :user:`Saurabh Bansod <mth4saurabh>` and `Andreas M\u00fcller`_.\n\nModel evaluation and meta-estimators\n\n- :class:`pipeline.Pipeline` is now able to cache transformers\n  within a pipeline by using the ``memory`` constructor parameter.\n  :issue:`7990` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- :class:`pipeline.Pipeline` steps can now be accessed as attributes of its\n  ``named_steps`` attribute. :issue:`8586` by :user:`Herilalaina\n  Rakotoarison <herilalaina>`.\n\n- Added ``sample_weight`` parameter to :meth:`pipeline.Pipeline.score`.\n  :issue:`7723` by :user:`Mikhail Korobov <kmike>`.\n\n- Added ability to set ``n_jobs`` parameter to :func:`pipeline.make_union`.\n  A ``TypeError`` will be raised for any other kwargs. :issue:`8028`\n  by :user:`Alexander Booth <alexandercbooth>`.\n\n- :class:`model_selection.GridSearchCV`,\n  :class:`model_selection.RandomizedSearchCV` and\n  :func:`model_selection.cross_val_score` now allow estimators with callable\n  kernels which were previously prohibited.\n  :issue:`8005` by `Andreas M\u00fcller`_ .\n\n- :func:`model_selection.cross_val_predict` now returns output of the\n  correct shape for all values of the argument ``method``.\n  :issue:`7863` by :user:`Aman Dalmia <dalmia>`.\n\n- Added ``shuffle`` and ``random_state`` parameters to shuffle training\n  data before taking prefixes of it based on training sizes in\n  :func:`model_selection.learning_curve`.\n  :issue:`7506` by :user:`Narine Kokhlikyan <NarineK>`.\n\n- :class:`model_selection.StratifiedShuffleSplit` now works with multioutput\n  multiclass (or multilabel) data.  
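The pipeline caching and ``named_steps`` attribute access described above can be sketched as follows (the step names and estimators are arbitrary examples):

```python
from shutil import rmtree
from tempfile import mkdtemp

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=100, random_state=0)

cachedir = mkdtemp()
# Fitted transformers are cached in `memory`, so refitting the pipeline
# with an unchanged first step can reuse the cached result.
pipe = Pipeline([('scale', StandardScaler()),
                 ('clf', LogisticRegression())],
                memory=cachedir)
pipe.fit(X, y)

# Steps are also reachable as attributes of `named_steps`
print(pipe.named_steps.scale is pipe.named_steps['scale'])  # True
rmtree(cachedir)
```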
:issue:`9044` by `Vlad Niculae`_.

- Speed improvements to :class:`model_selection.StratifiedShuffleSplit`.
  :issue:`5991` by :user:`Arthur Mensch <arthurmensch>` and `Joel Nothman`_.

- Add ``shuffle`` parameter to :func:`model_selection.train_test_split`.
  :issue:`8845` by :user:`themrmax <themrmax>`.

- :class:`multioutput.MultiOutputRegressor` and :class:`multioutput.MultiOutputClassifier`
  now support online learning using ``partial_fit``.
  :issue:`8053` by :user:`Peng Yu <yupbank>`.

- Add ``max_train_size`` parameter to :class:`model_selection.TimeSeriesSplit`.
  :issue:`8282` by :user:`Aman Dalmia <dalmia>`.

- More clustering metrics are now available through :func:`metrics.get_scorer`
  and ``scoring`` parameters. :issue:`8117` by `Raghav RV`_.

- A scorer based on :func:`metrics.explained_variance_score` is also available.
  :issue:`9259` by :user:`Hanmin Qin <qinhanmin2014>`.

Metrics

- :func:`metrics.matthews_corrcoef` now supports multiclass classification.
  :issue:`8094` by :user:`Jon Crall <Erotemic>`.

- Add ``sample_weight`` parameter to :func:`metrics.cohen_kappa_score`.
  :issue:`8335` by :user:`Victor Poughon <vpoughon>`.

Miscellaneous

- :func:`utils.estimator_checks.check_estimator` now attempts to ensure that
  methods transform, predict, etc. do not set attributes on the estimator.
  :issue:`7533` by :user:`Ekaterina Krivich <kiote>`.

- Added type checking to the ``accept_sparse`` parameter in
  :mod:`sklearn.utils.validation` methods. This parameter now accepts only
  boolean, string, or list/tuple of strings.
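A minimal sketch of the multiclass support in :func:`metrics.matthews_corrcoef` noted above (the label vectors are an arbitrary example):

```python
from sklearn.metrics import matthews_corrcoef

# Three classes; MCC is no longer restricted to binary problems
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 0]

mcc = matthews_corrcoef(y_true, y_pred)
print(mcc)  # between -1 and 1; equals 1.0 only for perfect prediction
```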
``accept_sparse=None`` is deprecated and
  should be replaced by ``accept_sparse=False``.
  :issue:`7880` by :user:`Josh Karnofsky <jkarno>`.

- Make it possible to load a chunk of an svmlight formatted file by
  passing a range of bytes to :func:`datasets.load_svmlight_file`.
  :issue:`935` by :user:`Olivier Grisel <ogrisel>`.

- :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`
  now accept non-finite features. :issue:`8931` by :user:`Attractadore`.

Bug fixes
.........

Trees and ensembles

- Fixed a memory leak in trees when using ``criterion='mae'``.
  :issue:`8002` by `Raghav RV`_.

- Fixed a bug where :class:`ensemble.IsolationForest` used an
  incorrect formula for the average path length.
  :issue:`8549` by `Peter Wang <https://github.com/PTRWang>`_.

- Fixed a bug where :class:`ensemble.AdaBoostClassifier` threw a
  ``ZeroDivisionError`` while fitting data with single class labels.
  :issue:`7501` by :user:`Dominik Krzeminski <dokato>`.

- Fixed a bug in :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor` where a float being compared
  to ``0.0`` using ``==`` caused a divide by zero error.
:issue:`7970` by\n  :user:`He Chen <chenhe95>`.\n\n- Fix a bug where :class:`ensemble.GradientBoostingClassifier` and\n  :class:`ensemble.GradientBoostingRegressor` ignored the\n  ``min_impurity_split`` parameter.\n  :issue:`8006` by :user:`Sebastian P\u00f6lsterl <sebp>`.\n\n- Fixed ``oob_score`` in :class:`ensemble.BaggingClassifier`.\n  :issue:`8936` by :user:`Michael Lewis <mlewis1729>`\n\n- Fixed excessive memory usage in prediction for random forests estimators.\n  :issue:`8672` by :user:`Mike Benfield <mikebenfield>`.\n\n- Fixed a bug where ``sample_weight`` as a list broke random forests in Python 2\n  :issue:`8068` by :user:`xor`.\n\n- Fixed a bug where :class:`ensemble.IsolationForest` fails when\n  ``max_features`` is less than 1.\n  :issue:`5732` by :user:`Ishank Gulati <IshankGulati>`.\n\n- Fix a bug where gradient boosting with ``loss='quantile'`` computed\n  negative errors for negative values of ``ytrue - ypred`` leading to wrong\n  values when calling ``__call__``.\n  :issue:`8087` by :user:`Alexis Mignon <AlexisMignon>`\n\n- Fix a bug where :class:`ensemble.VotingClassifier` raises an error\n  when a numpy array is passed in for weights. :issue:`7983` by\n  :user:`Vincent Pham <vincentpham1991>`.\n\n- Fixed a bug where :func:`tree.export_graphviz` raised an error\n  when the length of features_names does not match n_features in the decision\n  tree. :issue:`8512` by :user:`Li Li <aikinogard>`.\n\nLinear, kernelized and related models\n\n- Fixed a bug where :func:`linear_model.RANSACRegressor.fit` may run until\n  ``max_iter`` if it finds a large inlier group early. :issue:`8251` by\n  :user:`aivision2020`.\n\n- Fixed a bug where :class:`naive_bayes.MultinomialNB` and\n  :class:`naive_bayes.BernoulliNB` failed when ``alpha=0``. 
:issue:`5814` by\n  :user:`Yichuan Liu <yl565>` and :user:`Herilalaina Rakotoarison\n  <herilalaina>`.\n\n- Fixed a bug where :class:`linear_model.LassoLars` does not give\n  the same result as the LassoLars implementation available\n  in R (lars library). :issue:`7849` by :user:`Jair Montoya Martinez <jmontoyam>`.\n\n- Fixed a bug in `linear_model.RandomizedLasso`,\n  :class:`linear_model.Lars`, :class:`linear_model.LassoLars`,\n  :class:`linear_model.LarsCV` and :class:`linear_model.LassoLarsCV`,\n  where the parameter ``precompute`` was not used consistently across\n  classes, and some values proposed in the docstring could raise errors.\n  :issue:`5359` by `Tom Dupre la Tour`_.\n\n- Fix inconsistent results between :class:`linear_model.RidgeCV` and\n  :class:`linear_model.Ridge` when using ``normalize=True``. :issue:`9302`\n  by `Alexandre Gramfort`_.\n\n- Fix a bug where :func:`linear_model.LassoLars.fit` sometimes\n  left ``coef_`` as a list, rather than an ndarray.\n  :issue:`8160` by :user:`CJ Carey <perimosocordiae>`.\n\n- Fix :func:`linear_model.BayesianRidge.fit` to return\n  ridge parameter ``alpha_`` and ``lambda_`` consistent with calculated\n  coefficients ``coef_`` and ``intercept_``.\n  :issue:`8224` by :user:`Peter Gedeck <gedeck>`.\n\n- Fixed a bug in :class:`svm.OneClassSVM` where it returned floats instead of\n  integer classes. :issue:`8676` by :user:`Vathsala Achar <VathsalaAchar>`.\n\n- Fix AIC\/BIC criterion computation in :class:`linear_model.LassoLarsIC`.\n  :issue:`9022` by `Alexandre Gramfort`_ and :user:`Mehmet Basbug <mehmetbasbug>`.\n\n- Fixed a memory leak in our LibLinear implementation. :issue:`9024` by\n  :user:`Sergei Lebedev <superbobry>`\n\n- Fix bug where stratified CV splitters did not work with\n  :class:`linear_model.LassoCV`. 
:issue:`8973` by
  :user:`Paulo Haddad <paulochf>`.

- Fixed a bug in :class:`gaussian_process.GaussianProcessRegressor`
  where predicting the standard deviation or covariance without a prior
  ``fit`` would fail with an uninformative error by default.
  :issue:`6573` by :user:`Quazi Marufur Rahman <qmaruf>` and
  `Manoj Kumar`_.

Other predictors

- Fix `semi_supervised.BaseLabelPropagation` to correctly implement
  ``LabelPropagation`` and ``LabelSpreading`` as done in the referenced
  papers. :issue:`9239`
  by :user:`Andre Ambrosio Boechat <boechat107>`, :user:`Utkarsh Upadhyay
  <musically-ut>`, and `Joel Nothman`_.

Decomposition, manifold learning and clustering

- Fixed the implementation of :class:`manifold.TSNE`:

  - The ``early_exaggeration`` parameter had no effect and is now used for
    the first 250 optimization iterations.
  - Fixed the ``AssertionError: Tree consistency failed`` exception
    reported in :issue:`8992`.
  - Improved the learning schedule to match the one from the reference
    implementation `lvdmaaten/bhtsne <https://github.com/lvdmaaten/bhtsne>`_.

  By :user:`Thomas Moreau <tomMoral>` and `Olivier Grisel`_.

- Fix a bug in :class:`decomposition.LatentDirichletAllocation`
  where the ``perplexity`` method was returning incorrect results because
  the ``transform`` method returns normalized document topic distributions
  as of version 0.18.
:issue:`7954` by :user:`Gary Foreman <garyForeman>`.\n\n- Fix output shape and bugs with n_jobs > 1 in\n  :class:`decomposition.SparseCoder` transform and\n  :func:`decomposition.sparse_encode`\n  for one-dimensional data and one component.\n  This also impacts the output shape of :class:`decomposition.DictionaryLearning`.\n  :issue:`8086` by `Andreas M\u00fcller`_.\n\n- Fixed the implementation of ``explained_variance_``\n  in :class:`decomposition.PCA`,\n  `decomposition.RandomizedPCA` and\n  :class:`decomposition.IncrementalPCA`.\n  :issue:`9105` by `Hanmin Qin <https:\/\/github.com\/qinhanmin2014>`_.\n\n- Fixed the implementation of ``noise_variance_`` in :class:`decomposition.PCA`.\n  :issue:`9108` by `Hanmin Qin <https:\/\/github.com\/qinhanmin2014>`_.\n\n- Fixed a bug where :class:`cluster.DBSCAN` gives incorrect\n  result when input is a precomputed sparse matrix with initial\n  rows all zero. :issue:`8306` by :user:`Akshay Gupta <Akshay0724>`\n\n- Fix a bug regarding fitting :class:`cluster.KMeans` with a sparse\n  array X and initial centroids, where X's means were unnecessarily being\n  subtracted from the centroids. :issue:`7872` by :user:`Josh Karnofsky <jkarno>`.\n\n- Fixes to the input validation in :class:`covariance.EllipticEnvelope`.\n  :issue:`8086` by `Andreas M\u00fcller`_.\n\n- Fixed a bug in :class:`covariance.MinCovDet` where inputting data\n  that produced a singular covariance matrix would cause the helper method\n  ``_c_step`` to throw an exception.\n  :issue:`3367` by :user:`Jeremy Steward <ThatGeoGuy>`\n\n- Fixed a bug in :class:`manifold.TSNE` affecting convergence of the\n  gradient descent. :issue:`8768` by :user:`David DeTomaso <deto>`.\n\n- Fixed a bug in :class:`manifold.TSNE` where it stored the incorrect\n  ``kl_divergence_``. :issue:`6507` by :user:`Sebastian Saeger <ssaeger>`.\n\n- Fixed improper scaling in :class:`cross_decomposition.PLSRegression`\n  with ``scale=True``. 
:issue:`7819` by :user:`jayzed82 <jayzed82>`.\n\n- :class:`cluster.SpectralCoclustering` and\n  :class:`cluster.SpectralBiclustering` ``fit`` method conforms\n  with API by accepting ``y`` and returning the object.  :issue:`6126`,\n  :issue:`7814` by :user:`Laurent Direr <ldirer>` and :user:`Maniteja\n  Nandana <maniteja123>`.\n\n- Fix bug where :mod:`sklearn.mixture` ``sample`` methods did not return as many\n  samples as requested. :issue:`7702` by :user:`Levi John Wolf <ljwolf>`.\n\n- Fixed the shrinkage implementation in :class:`neighbors.NearestCentroid`.\n  :issue:`9219` by `Hanmin Qin <https:\/\/github.com\/qinhanmin2014>`_.\n\nPreprocessing and feature selection\n\n- For sparse matrices, :func:`preprocessing.normalize` with ``return_norm=True``\n  will now raise a ``NotImplementedError`` with 'l1' or 'l2' norm and with\n  norm 'max' the norms returned will be the same as for dense matrices.\n  :issue:`7771` by `Ang Lu <https:\/\/github.com\/luang008>`_.\n\n- Fix a bug where :class:`feature_selection.SelectFdr` did not\n  exactly implement Benjamini-Hochberg procedure. It formerly may have\n  selected fewer features than it should.\n  :issue:`7490` by :user:`Peng Meng <mpjlu>`.\n\n- Fixed a bug where `linear_model.RandomizedLasso` and\n  `linear_model.RandomizedLogisticRegression` breaks for\n  sparse input. :issue:`8259` by :user:`Aman Dalmia <dalmia>`.\n\n- Fix a bug where :class:`feature_extraction.FeatureHasher`\n  mandatorily applied a sparse random projection to the hashed features,\n  preventing the use of\n  :class:`feature_extraction.text.HashingVectorizer` in a\n  pipeline with  :class:`feature_extraction.text.TfidfTransformer`.\n  :issue:`7565` by :user:`Roman Yurchak <rth>`.\n\n- Fix a bug where :class:`feature_selection.mutual_info_regression` did not\n  correctly use ``n_neighbors``. 
:issue:`8181` by :user:`Guillaume Lemaitre\n  <glemaitre>`.\n\nModel evaluation and meta-estimators\n\n- Fixed a bug where `model_selection.BaseSearchCV.inverse_transform`\n  returns ``self.best_estimator_.transform()`` instead of\n  ``self.best_estimator_.inverse_transform()``.\n  :issue:`8344` by :user:`Akshay Gupta <Akshay0724>` and :user:`Rasmus Eriksson <MrMjauh>`.\n\n- Added ``classes_`` attribute to :class:`model_selection.GridSearchCV`,\n  :class:`model_selection.RandomizedSearchCV`,  `grid_search.GridSearchCV`,\n  and  `grid_search.RandomizedSearchCV` that matches the ``classes_``\n  attribute of ``best_estimator_``. :issue:`7661` and :issue:`8295`\n  by :user:`Alyssa Batula <abatula>`, :user:`Dylan Werner-Meier <unautre>`,\n  and :user:`Stephen Hoover <stephen-hoover>`.\n\n- Fixed a bug where :func:`model_selection.validation_curve`\n  reused the same estimator for each parameter value.\n  :issue:`7365` by :user:`Aleksandr Sandrovskii <Sundrique>`.\n\n- :func:`model_selection.permutation_test_score` now works with Pandas\n  types. :issue:`5697` by :user:`Stijn Tonk <equialgo>`.\n\n- Several fixes to input validation in\n  :class:`multiclass.OutputCodeClassifier`\n  :issue:`8086` by `Andreas M\u00fcller`_.\n\n- :class:`multiclass.OneVsOneClassifier`'s ``partial_fit`` now ensures all\n  classes are provided up-front. :issue:`6250` by\n  :user:`Asish Panda <kaichogami>`.\n\n- Fix :func:`multioutput.MultiOutputClassifier.predict_proba` to return a\n  list of 2d arrays, rather than a 3d array. In the case where different\n  target columns had different numbers of classes, a ``ValueError`` would be\n  raised on trying to stack matrices with different dimensions.\n  :issue:`8093` by :user:`Peter Bull <pjbull>`.\n\n- Cross validation now works with Pandas datatypes that have a\n  read-only index. 
:issue:`9507` by `Loic Esteve`_.\n\nMetrics\n\n- :func:`metrics.average_precision_score` no longer linearly\n  interpolates between operating points, and instead weighs precisions\n  by the change in recall since the last operating point, as per the\n  `Wikipedia entry <https:\/\/en.wikipedia.org\/wiki\/Average_precision>`_.\n  (`#7356 <https:\/\/github.com\/scikit-learn\/scikit-learn\/pull\/7356>`_). By\n  :user:`Nick Dingwall <ndingwall>` and `Gael Varoquaux`_.\n\n- Fix a bug in `metrics.classification._check_targets`\n  which would return ``'binary'`` if ``y_true`` and ``y_pred`` were\n  both ``'binary'`` but the union of ``y_true`` and ``y_pred`` was\n  ``'multiclass'``. :issue:`8377` by `Loic Esteve`_.\n\n- Fixed an integer overflow bug in :func:`metrics.confusion_matrix` and\n  hence :func:`metrics.cohen_kappa_score`. :issue:`8354`, :issue:`7929`\n  by `Joel Nothman`_ and :user:`Jon Crall <Erotemic>`.\n\n- Fixed passing of ``gamma`` parameter to the ``chi2`` kernel in\n  :func:`metrics.pairwise.pairwise_kernels` :issue:`5211` by\n  :user:`Nick Rhinehart <nrhine1>`,\n  :user:`Saurabh Bansod <mth4saurabh>` and `Andreas M\u00fcller`_.\n\nMiscellaneous\n\n- Fixed a bug when :func:`datasets.make_classification` fails\n  when generating more than 30 features. :issue:`8159` by\n  :user:`Herilalaina Rakotoarison <herilalaina>`.\n\n- Fixed a bug where :func:`datasets.make_moons` gives an\n  incorrect result when ``n_samples`` is odd.\n  :issue:`8198` by :user:`Josh Levy <levy5674>`.\n\n- Some ``fetch_`` functions in :mod:`sklearn.datasets` were ignoring the\n  ``download_if_missing`` keyword. :issue:`7944` by :user:`Ralf Gommers <rgommers>`.\n\n- Fix estimators to accept a ``sample_weight`` parameter of type\n  ``pandas.Series`` in their ``fit`` function. :issue:`7825` by\n  `Kathleen Chen`_.\n\n- Fix a bug in cases where ``numpy.cumsum`` may be numerically unstable,\n  raising an exception if instability is identified. 
:issue:`7376` and\n  :issue:`7331` by `Joel Nothman`_ and :user:`yangarbiter`.\n\n- Fix a bug where `base.BaseEstimator.__getstate__`\n  obstructed pickling customizations of child-classes, when used in a\n  multiple inheritance context.\n  :issue:`8316` by :user:`Holger Peters <HolgerPeters>`.\n\n- Update Sphinx-Gallery from 0.1.4 to 0.1.7 for resolving links in\n  documentation build with Sphinx>1.5 :issue:`8010`, :issue:`7986` by\n  :user:`Oscar Najera <Titan-C>`\n\n- Add ``data_home`` parameter to :func:`sklearn.datasets.fetch_kddcup99`.\n  :issue:`9289` by `Loic Esteve`_.\n\n- Fix dataset loaders using Python 3 version of makedirs to also work in\n  Python 2. :issue:`9284` by :user:`Sebastin Santy <SebastinSanty>`.\n\n- Several minor issues were fixed with thanks to the alerts of\n  `lgtm.com <https:\/\/lgtm.com\/>`_. :issue:`9278` by :user:`Jean Helie <jhelie>`,\n  among others.\n\nAPI changes summary\n-------------------\n\nTrees and ensembles\n\n- Gradient boosting base models are no longer estimators. By `Andreas M\u00fcller`_.\n\n- All tree based estimators now accept a ``min_impurity_decrease``\n  parameter in lieu of the ``min_impurity_split``, which is now deprecated.\n  The ``min_impurity_decrease`` helps stop splitting the nodes in which\n  the weighted impurity decrease from splitting is no longer at least\n  ``min_impurity_decrease``. :issue:`8449` by `Raghav RV`_.\n\nLinear, kernelized and related models\n\n- ``n_iter`` parameter is deprecated in :class:`linear_model.SGDClassifier`,\n  :class:`linear_model.SGDRegressor`,\n  :class:`linear_model.PassiveAggressiveClassifier`,\n  :class:`linear_model.PassiveAggressiveRegressor` and\n  :class:`linear_model.Perceptron`. 
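The ``n_iter`` deprecation above means convergence is now controlled via ``max_iter`` and ``tol``; a minimal sketch (the data and parameter values are arbitrary examples):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=200, random_state=0)

# n_iter is deprecated; use max_iter as an upper bound on the number of
# epochs and tol as the stopping criterion instead
clf = SGDClassifier(max_iter=1000, tol=1e-3, random_state=0).fit(X, y)
print(clf.n_iter_)  # actual number of epochs run before stopping
```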
By `Tom Dupre la Tour`_.

Other predictors

- `neighbors.LSHForest` has been deprecated and will be
  removed in 0.21 due to poor performance.
  :issue:`9078` by :user:`Laurent Direr <ldirer>`.

- :class:`neighbors.NearestCentroid` no longer purports to support
  ``metric='precomputed'``, which now raises an error. :issue:`8515` by
  :user:`Sergul Aydore <sergulaydore>`.

- The ``alpha`` parameter of :class:`semi_supervised.LabelPropagation` now
  has no effect and is deprecated, to be removed in 0.21. :issue:`9239`
  by :user:`Andre Ambrosio Boechat <boechat107>`, :user:`Utkarsh Upadhyay
  <musically-ut>`, and `Joel Nothman`_.

Decomposition, manifold learning and clustering

- Deprecate the ``doc_topic_distr`` argument of the ``perplexity`` method
  in :class:`decomposition.LatentDirichletAllocation` because the
  user no longer has access to the unnormalized document topic distribution
  needed for the perplexity calculation. :issue:`7954` by
  :user:`Gary Foreman <garyForeman>`.

- The ``n_topics`` parameter of :class:`decomposition.LatentDirichletAllocation`
  has been renamed to ``n_components`` and will be removed in version 0.21.
  :issue:`8922` by :user:`Attractadore`.

- :meth:`decomposition.SparsePCA.transform`'s ``ridge_alpha`` parameter is
  deprecated in favor of the class parameter.
  :issue:`8137` by :user:`Naoya Kanai <naoyak>`.

- :class:`cluster.DBSCAN` now has a ``metric_params`` parameter.
  :issue:`8139` by :user:`Naoya Kanai <naoyak>`.

Preprocessing and feature selection

- :class:`feature_selection.SelectFromModel` now has a ``partial_fit``
  method only if the underlying estimator does. By `Andreas Müller`_.

- :class:`feature_selection.SelectFromModel` now validates the ``threshold``
  parameter and sets the ``threshold_`` attribute during the call to
  ``fit``, and no longer during the call to ``transform``.
By `Andreas\n  M\u00fcller`_.\n\n- The ``non_negative`` parameter in :class:`feature_extraction.FeatureHasher`\n  has been deprecated, and replaced with a more principled alternative,\n  ``alternate_sign``.\n  :issue:`7565` by :user:`Roman Yurchak <rth>`.\n\n- `linear_model.RandomizedLogisticRegression`,\n  and `linear_model.RandomizedLasso` have been deprecated and will\n  be removed in version 0.21.\n  :issue:`8995` by :user:`Ramana.S <sentient07>`.\n\nModel evaluation and meta-estimators\n\n- Deprecate the ``fit_params`` constructor input to the\n  :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` in favor\n  of passing keyword parameters to the ``fit`` methods\n  of those classes. Data-dependent parameters needed for model\n  training should be passed as keyword arguments to ``fit``,\n  and conforming to this convention will allow the hyperparameter\n  selection classes to be used with tools such as\n  :func:`model_selection.cross_val_predict`.\n  :issue:`2879` by :user:`Stephen Hoover <stephen-hoover>`.\n\n- In version 0.21, the default behavior of splitters that use the\n  ``test_size`` and ``train_size`` parameter will change, such that\n  specifying ``train_size`` alone will cause ``test_size`` to be the\n  remainder. :issue:`7459` by :user:`Nelson Liu <nelson-liu>`.\n\n- :class:`multiclass.OneVsRestClassifier` now has ``partial_fit``,\n  ``decision_function`` and ``predict_proba`` methods only when the\n  underlying estimator does.  :issue:`7812` by `Andreas M\u00fcller`_ and\n  :user:`Mikhail Korobov <kmike>`.\n\n- :class:`multiclass.OneVsRestClassifier` now has a ``partial_fit`` method\n  only if the underlying estimator does.  By `Andreas M\u00fcller`_.\n\n- The ``decision_function`` output shape for binary classification in\n  :class:`multiclass.OneVsRestClassifier` and\n  :class:`multiclass.OneVsOneClassifier` is now ``(n_samples,)`` to conform\n  to scikit-learn conventions. 
:issue:`9100` by `Andreas Müller`_.

- The :func:`multioutput.MultiOutputClassifier.predict_proba`
  function used to return a 3d array (``n_samples``, ``n_classes``,
  ``n_outputs``). In the case where different target columns had different
  numbers of classes, a ``ValueError`` would be raised on trying to stack
  matrices with different dimensions. This function now returns a list of
  arrays where the length of the list is ``n_outputs``, and each array is
  (``n_samples``, ``n_classes``) for that particular output.
  :issue:`8093` by :user:`Peter Bull <pjbull>`.

- The ``named_steps`` attribute of :class:`pipeline.Pipeline` is now a
  :class:`utils.Bunch` rather than a plain ``dict``, to enable tab
  completion in interactive environments. In case of a conflict between a
  ``named_steps`` key and a ``dict`` attribute, ``dict`` behavior is
  prioritized.
  :issue:`8481` by :user:`Herilalaina Rakotoarison <herilalaina>`.

Miscellaneous

- Deprecate the ``y`` parameter in ``transform`` and ``inverse_transform``.
  These methods should not accept a ``y`` parameter, as they are used at
  prediction time.
  :issue:`8174` by :user:`Tahar Zanouda <tzano>`, `Alexandre Gramfort`_
  and `Raghav RV`_.

- SciPy >= 0.13.3 and NumPy >= 1.8.2 are now the minimum supported versions
  for scikit-learn. The following backported functions in
  :mod:`sklearn.utils` have been removed or deprecated accordingly.
  :issue:`8854` and :issue:`8874` by :user:`Naoya Kanai <naoyak>`

- The ``store_covariances`` and ``covariances_`` parameters of
  :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`
  have been renamed to ``store_covariance`` and ``covariance_`` to be
  consistent with the corresponding parameter names of
  :class:`discriminant_analysis.LinearDiscriminantAnalysis`. They will be
  removed in version 0.21.
:issue:`7998` by :user:`Jiacheng <mrbeann>`\n\n  Removed in 0.19:\n\n  - ``utils.fixes.argpartition``\n  - ``utils.fixes.array_equal``\n  - ``utils.fixes.astype``\n  - ``utils.fixes.bincount``\n  - ``utils.fixes.expit``\n  - ``utils.fixes.frombuffer_empty``\n  - ``utils.fixes.in1d``\n  - ``utils.fixes.norm``\n  - ``utils.fixes.rankdata``\n  - ``utils.fixes.safe_copy``\n\n  Deprecated in 0.19, to be removed in 0.21:\n\n  - ``utils.arpack.eigs``\n  - ``utils.arpack.eigsh``\n  - ``utils.arpack.svds``\n  - ``utils.extmath.fast_dot``\n  - ``utils.extmath.logsumexp``\n  - ``utils.extmath.norm``\n  - ``utils.extmath.pinvh``\n  - ``utils.graph.graph_laplacian``\n  - ``utils.random.choice``\n  - ``utils.sparsetools.connected_components``\n  - ``utils.stats.rankdata``\n\n- Estimators with both methods ``decision_function`` and ``predict_proba``\n  are now required to have a monotonic relation between them. The\n  method ``check_decision_proba_consistency`` has been added in\n  **utils.estimator_checks** to check their consistency.\n  :issue:`7578` by :user:`Shubham Bhardwaj <shubham0704>`\n\n- All checks in ``utils.estimator_checks``, in particular\n  :func:`utils.estimator_checks.check_estimator` now accept estimator\n  instances. Most other checks do not accept\n  estimator classes any more. :issue:`9019` by `Andreas M\u00fcller`_.\n\n- Ensure that estimators' attributes ending with ``_`` are not set\n  in the constructor but only in the ``fit`` method. 
Most notably,\n  ensemble estimators (deriving from `ensemble.BaseEnsemble`)\n  now only have ``self.estimators_`` available after ``fit``.\n  :issue:`7464` by `Lars Buitinck`_ and `Loic Esteve`_.\n\n\nCode and Documentation Contributors\n-----------------------------------\n\nThanks to everyone who has contributed to the maintenance and improvement of the\nproject since version 0.18, including:\n\nJoel Nothman, Loic Esteve, Andreas Mueller, Guillaume Lemaitre, Olivier Grisel,\nHanmin Qin, Raghav RV, Alexandre Gramfort, themrmax, Aman Dalmia, Gael\nVaroquaux, Naoya Kanai, Tom Dupr\u00e9 la Tour, Rishikesh, Nelson Liu, Taehoon Lee,\nNelle Varoquaux, Aashil, Mikhail Korobov, Sebastin Santy, Joan Massich, Roman\nYurchak, RAKOTOARISON Herilalaina, Thierry Guillemot, Alexandre Abadie, Carol\nWilling, Balakumaran Manoharan, Josh Karnofsky, Vlad Niculae, Utkarsh Upadhyay,\nDmitry Petrov, Minghui Liu, Srivatsan, Vincent Pham, Albert Thomas, Jake\nVanderPlas, Attractadore, JC Liu, alexandercbooth, chkoar, \u00d3scar N\u00e1jera,\nAarshay Jain, Kyle Gilliam, Ramana Subramanyam, CJ Carey, Clement Joudet, David\nRobles, He Chen, Joris Van den Bossche, Karan Desai, Katie Luangkote, Leland\nMcInnes, Maniteja Nandana, Michele Lacchia, Sergei Lebedev, Shubham Bhardwaj,\nakshay0724, omtcyfz, rickiepark, waterponey, Vathsala Achar, jbDelafosse, Ralf\nGommers, Ekaterina Krivich, Vivek Kumar, Ishank Gulati, Dave Elliott, ldirer,\nReiichiro Nakano, Levi John Wolf, Mathieu Blondel, Sid Kapur, Dougal J.\nSutherland, midinas, mikebenfield, Sourav Singh, Aseem Bansal, Ibraim Ganiev,\nStephen Hoover, AishwaryaRK, Steven C. Howell, Gary Foreman, Neeraj Gangwar,\nTahar, Jon Crall, dokato, Kathy Chen, ferria, Thomas Moreau, Charlie Brummitt,\nNicolas Goix, Adam Kleczewski, Sam Shleifer, Nikita Singh, Basil Beirouti,\nGiorgio Patrini, Manoj Kumar, Rafael Possas, James Bourbeau, James A. 
Bednar,
Janine Harper, Jaye, Jean Helie, Jeremy Steward, Artsiom, John Wei, Jonathan
LIgo, Jonathan Rahn, seanpwilliams, Arthur Mensch, Josh Levy, Julian Kuhlmann,
Julien Aubert, Jörn Hees, Kai, shivamgargsya, Kat Hempstalk, Kaushik
Lakshmikanth, Kennedy, Kenneth Lyons, Kenneth Myers, Kevin Yap, Kirill Bobyrev,
Konstantin Podshumok, Arthur Imbert, Lee Murray, toastedcornflakes, Lera, Li
Li, Arthur Douillard, Mainak Jas, tobycheese, Manraj Singh, Manvendra Singh,
Marc Meketon, MarcoFalke, Matthew Brett, Matthias Gilch, Mehul Ahuja, Melanie
Goetz, Meng, Peng, Michael Dezube, Michal Baumgartner, vibrantabhi19, Artem
Golubin, Milen Paskov, Antonin Carette, Morikko, MrMjauh, NALEPA Emmanuel,
Namiya, Antoine Wendlinger, Narine Kokhlikyan, NarineK, Nate Guerin, Angus
Williams, Ang Lu, Nicole Vavrova, Nitish Pandey, Okhlopkov Daniil Olegovich,
Andy Craze, Om Prakash, Parminder Singh, Patrick Carlson, Patrick Pei, Paul
Ganssle, Paulo Haddad, Paweł Lorek, Peng Yu, Pete Bachant, Peter Bull, Peter
Csizsek, Peter Wang, Pieter Arthur de Jong, Ping-Yao, Chang, Preston Parry,
Puneet Mathur, Quentin Hibon, Andrew Smith, Andrew Jackson, 1kastner, Rameshwar
Bhaskaran, Rebecca Bilbro, Remi Rampin, Andrea Esuli, Rob Hall, Robert
Bradshaw, Romain Brault, Aman Pratik, Ruifeng Zheng, Russell Smith, Sachin
Agarwal, Sailesh Choyal, Samson Tan, Samuël Weber, Sarah Brown, Sebastian
Pölsterl, Sebastian Raschka, Sebastian Saeger, Alyssa Batula, Abhyuday Pratap
Singh, Sergey Feldman, Sergul Aydore, Sharan Yalburgi, willduan, Siddharth
Gupta, Sri Krishna, Almer, Stijn Tonk, Allen Riddell, Theofilos Papapanagiotou,
Alison, Alexis Mignon, Tommy Boucher, Tommy Löfstedt, Toshihiro Kamishima,
Tyler Folkman, Tyler Lanigan, Alexander Junge, Varun Shenoy, Victor Poughon,
Vilhelm von Ehrenheim, Aleksandr Sandrovskii, Alan Yee, Vlasios Vasileiou,
Warut Vijitbenjaronk, Yang Zhang, Yaroslav Halchenko, Yichuan Liu, Yuichi
Fujikawa, affanv14, aivision2020,
xor, andreh7, brady salz, campustrampus,
Agamemnon Krasoulis, ditenberg, elena-sharova, filipj8, fukatani, gedeck,
guiniol, guoci, hakaa1, hongkahjun, i-am-xhy, jakirkham, jaroslaw-weber,
jayzed82, jeroko, jmontoyam, jonathan.striebel, josephsalmon, jschendel,
leereeves, martin-hahn, mathurinm, mehak-sachdeva, mlewis1729, mlliou112,
mthorrell, ndingwall, nuffe, yangarbiter, plagree, pldtc325, Breno Freitas,
Brett Olsen, Brian A. Alfano, Brian Burns, polmauri, Brandon Carter, Charlton
Austin, Chayant T15h, Chinmaya Pancholi, Christian Danielsen, Chung Yen,
Chyi-Kwei Yau, pravarmahajan, DOHMATOB Elvis, Daniel LeJeune, Daniel Hnyk,
Darius Morawiec, David DeTomaso, David Gasquez, David Haberthür, David
Heryanto, David Kirkby, David Nicholson, rashchedrin, Deborah Gertrude Digges,
Denis Engemann, Devansh D, Dickson, Bob Baxley, Don86, E. Lynch-Klarup, Ed
Rogers, Elizabeth Ferriss, Ellen-Co2, Fabian Egli, Fang-Chieh Chou, Bing Tian
Dai, Greg Stupp, Grzegorz Szpak, Bertrand Thirion, Hadrien Bertrand, Harizo
Rajaona, zxcvbnius, Henry Lin, Holger Peters, Icyblade Dai, Igor
Andriushchenko, Ilya, Isaac Laughlin, Iván Vallés, Aurélien Bellet, JPFrancoia,
Jacob Schreiber, Asish Mahapatra
validation now works with Pandas datatypes that have a   read only index   issue  9507  by  Loic Esteve     Metrics     func  metrics average precision score  no longer linearly   interpolates between operating points  and instead weighs precisions   by the change in recall since the last operating point  as per the    Wikipedia entry  https   en wikipedia org wiki Average precision          7356  https   github com scikit learn scikit learn pull 7356      By    user  Nick Dingwall  ndingwall   and  Gael Varoquaux       Fix a bug in  metrics classification  check targets    which would return    binary    if   y true   and   y pred   were   both    binary    but the union of   y true   and   y pred   was      multiclass      issue  8377  by  Loic Esteve       Fixed an integer overflow bug in  func  metrics confusion matrix  and   hence  func  metrics cohen kappa score    issue  8354    issue  7929    by  Joel Nothman   and  user  Jon Crall  Erotemic       Fixed passing of   gamma   parameter to the   chi2   kernel in    func  metrics pairwise pairwise kernels   issue  5211  by    user  Nick Rhinehart  nrhine1       user  Saurabh Bansod  mth4saurabh   and  Andreas M ller     Miscellaneous    Fixed a bug when  func  datasets make classification  fails   when generating more than 30 features   issue  8159  by    user  Herilalaina Rakotoarison  herilalaina       Fixed a bug where  func  datasets make moons  gives an   incorrect result when   n samples   is odd     issue  8198  by  user  Josh Levy  levy5674       Some   fetch    functions in  mod  sklearn datasets  were ignoring the     download if missing   keyword   issue  7944  by  user  Ralf Gommers  rgommers       Fix estimators to accept a   sample weight   parameter of type     pandas Series   in their   fit   function   issue  7825  by    Kathleen Chen       Fix a bug in cases where   numpy cumsum   may be numerically unstable    raising an exception if instability is identified   issue  7376  and    issue  7331 
 by  Joel Nothman   and  user  yangarbiter      Fix a bug where  base BaseEstimator   getstate      obstructed pickling customizations of child classes  when used in a   multiple inheritance context     issue  8316  by  user  Holger Peters  HolgerPeters       Update Sphinx Gallery from 0 1 4 to 0 1 7 for resolving links in   documentation build with Sphinx 1 5  issue  8010    issue  7986  by    user  Oscar Najera  Titan C      Add   data home   parameter to  func  sklearn datasets fetch kddcup99      issue  9289  by  Loic Esteve       Fix dataset loaders using Python 3 version of makedirs to also work in   Python 2   issue  9284  by  user  Sebastin Santy  SebastinSanty       Several minor issues were fixed with thanks to the alerts of    lgtm com  https   lgtm com       issue  9278  by  user  Jean Helie  jhelie      among others   API changes summary                      Trees and ensembles    Gradient boosting base models are no longer estimators  By  Andreas M ller       All tree based estimators now accept a   min impurity decrease     parameter in lieu of the   min impurity split    which is now deprecated    The   min impurity decrease   helps stop splitting the nodes in which   the weighted impurity decrease from splitting is no longer at least     min impurity decrease     issue  8449  by  Raghav RV     Linear  kernelized and related models      n iter   parameter is deprecated in  class  linear model SGDClassifier      class  linear model SGDRegressor      class  linear model PassiveAggressiveClassifier      class  linear model PassiveAggressiveRegressor  and    class  linear model Perceptron   By  Tom Dupre la Tour     Other predictors     neighbors LSHForest  has been deprecated and will be   removed in 0 21 due to poor performance     issue  9078  by  user  Laurent Direr  ldirer        class  neighbors NearestCentroid  no longer purports to support     metric  precomputed    which now raises an error   issue  8515  by    user  Sergul Aydore  sergulaydore 
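The ``min_impurity_decrease`` entry above is easiest to read alongside the quantity it thresholds. Below is a minimal plain-Python sketch of the weighted impurity decrease of a candidate split, following the formula documented for the tree estimators; the helper function name is illustrative, not scikit-learn API:

```python
def weighted_impurity_decrease(n, n_t, n_t_l, n_t_r,
                               impurity, left_impurity, right_impurity):
    """Weighted impurity decrease of a candidate split (illustrative helper).

    Follows the documented formula
        N_t / N * (impurity - N_t_R / N_t * right_impurity
                            - N_t_L / N_t * left_impurity)
    where n is the total sample count, n_t the samples at the node,
    and n_t_l / n_t_r the samples sent to the left / right child.
    """
    return (n_t / n) * (impurity
                        - (n_t_r / n_t) * right_impurity
                        - (n_t_l / n_t) * left_impurity)


# A pure split of a 50/50 node (both children end up with zero impurity)
# keeps the node's full weighted impurity as the decrease:
decrease = weighted_impurity_decrease(
    n=100, n_t=100, n_t_l=50, n_t_r=50,
    impurity=0.5, left_impurity=0.0, right_impurity=0.0)
print(decrease)  # 0.5

# A split whose children are as impure as the parent yields no decrease,
# and would be rejected whenever min_impurity_decrease > 0:
print(weighted_impurity_decrease(100, 40, 30, 10, 0.5, 0.5, 0.5))  # 0.0
```

A split is kept only when this quantity is at least ``min_impurity_decrease``, which is why small positive values prune only near-useless splits.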
- The ``alpha`` parameter of :class:`semi_supervised.LabelPropagation` now
  has no effect and is deprecated to be removed in 0.21. :issue:`9239`
  by :user:`Andre Ambrosio Boechat <boechat107>`, :user:`Utkarsh Upadhyay
  <musically-ut>`, and `Joel Nothman`_.

Decomposition, manifold learning and clustering

- Deprecate the ``doc_topic_distr`` argument of the ``perplexity`` method
  in :class:`decomposition.LatentDirichletAllocation` because the
  user no longer has access to the unnormalized document topic distribution
  needed for the perplexity calculation. :issue:`7954` by
  :user:`Gary Foreman <garyForeman>`.

- The ``n_topics`` parameter of :class:`decomposition.LatentDirichletAllocation`
  has been renamed to ``n_components`` and will be removed in version 0.21.
  :issue:`8922` by :user:`Attractadore <Attractadore>`.

- :meth:`decomposition.SparsePCA.transform`'s ``ridge_alpha`` parameter is
  deprecated in preference for class parameter.
  :issue:`8137` by :user:`Naoya Kanai <naoyak>`.

- :class:`cluster.DBSCAN` now has a ``metric_params`` parameter.
  :issue:`8139` by :user:`Naoya Kanai <naoyak>`.

Preprocessing and feature selection

- :class:`feature_selection.SelectFromModel` now has a ``partial_fit``
  method only if the underlying estimator does. By `Andreas Müller`_.

- :class:`feature_selection.SelectFromModel` now validates the ``threshold``
  parameter and sets the ``threshold_`` attribute during the call to
  ``fit``, and no longer during the call to ``transform``. By `Andreas
  Müller`_.

- The ``non_negative`` parameter in :class:`feature_extraction.FeatureHasher`
  has been deprecated, and replaced with a more principled alternative,
  ``alternate_sign``.
  :issue:`7565` by :user:`Roman Yurchak <rth>`.

- `linear_model.RandomizedLogisticRegression`
  and `linear_model.RandomizedLasso` have been deprecated and will
  be removed in version 0.21.
  :issue:`8995` by :user:`Ramana.S <sentient07>`.

Model evaluation and meta-estimators

- Deprecate the ``fit_params`` constructor input to the
  :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` in favor
  of passing keyword parameters to the ``fit`` methods
  of those classes. Data-dependent parameters needed for model
  training should be passed as keyword arguments to ``fit``,
  and conforming to this convention will allow the hyperparameter
  selection classes to be used with tools such as
  :func:`model_selection.cross_val_predict`.
  :issue:`2879` by :user:`Stephen Hoover <stephen-hoover>`.

- In version 0.21, the default behavior of splitters that use the
  ``test_size`` and ``train_size`` parameter will change, such that
  specifying ``train_size`` alone will cause ``test_size`` to be the
  remainder. :issue:`7459` by :user:`Nelson Liu <nelson-liu>`.

- :class:`multiclass.OneVsRestClassifier` now has ``partial_fit``,
  ``decision_function`` and ``predict_proba`` methods only when the
  underlying estimator does.
  :issue:`7812` by `Andreas Müller`_ and
  :user:`Mikhail Korobov <kmike>`.

- :class:`multiclass.OneVsRestClassifier` now has a ``partial_fit`` method
  only if the underlying estimator does. By `Andreas Müller`_.

- The ``decision_function`` output shape for binary classification in
  :class:`multiclass.OneVsRestClassifier` and
  :class:`multiclass.OneVsOneClassifier` is now ``(n_samples,)`` to conform
  to scikit-learn conventions. :issue:`9100` by `Andreas Müller`_.

- The :func:`multioutput.MultiOutputClassifier.predict_proba`
  function used to return a 3d array (``n_samples``, ``n_classes``,
  ``n_outputs``). In the case where different target columns had different
  numbers of classes, a ``ValueError`` would be raised on trying to stack
  matrices with different dimensions. This function now returns a list of
  arrays where the length of the list is ``n_outputs``, and each array is
  (``n_samples``, ``n_classes``) for that particular output.
  :issue:`8093` by :user:`Peter Bull <pjbull>`.

- Replace attribute ``named_steps`` ``dict`` to :class:`utils.Bunch`
  in :class:`pipeline.Pipeline` to enable tab completion in interactive
  environment. When a step name conflicts with an existing ``dict``
  attribute, the ``dict`` behavior is prioritized.
  :issue:`8481` by :user:`Herilalaina Rakotoarison <herilalaina>`.

Miscellaneous

- Deprecate the ``y`` parameter in ``transform`` and ``inverse_transform``.
  These methods should not accept a ``y`` parameter, as they are used at
  prediction time.
  :issue:`8174` by :user:`Tahar Zanouda <tzano>`, `Alexandre Gramfort`_
  and `Raghav RV`_.

- SciPy >= 0.13.3 and NumPy >= 1.8.2 are now the minimum supported versions
  for scikit-learn. The following backported functions in
  :mod:`sklearn.utils` have been removed or deprecated accordingly.
  :issue:`8854` and :issue:`8874` by :user:`Naoya Kanai <naoyak>`

- The ``store_covariances`` and ``covariances_`` parameters of
  :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`
  have been renamed to ``store_covariance`` and ``covariance_`` to be
  consistent with the corresponding parameter names of the
  :class:`discriminant_analysis.LinearDiscriminantAnalysis`. They will be
  removed in version 0.21. :issue:`7998` by :user:`Jiacheng <mrbeann>`

  Removed in 0.19:

  - ``utils.fixes.argpartition``
  - ``utils.fixes.array_equal``
  - ``utils.fixes.astype``
  - ``utils.fixes.bincount``
  - ``utils.fixes.expit``
  - ``utils.fixes.frombuffer_empty``
  - ``utils.fixes.in1d``
  - ``utils.fixes.norm``
  - ``utils.fixes.rankdata``
  - ``utils.fixes.safe_copy``

  Deprecated in 0.19, to be removed in 0.21:

  - ``utils.arpack.eigs``
  - ``utils.arpack.eigsh``
  - ``utils.arpack.svds``
  - ``utils.extmath.fast_dot``
  - ``utils.extmath.logsumexp``
  - ``utils.extmath.norm``
  - ``utils.extmath.pinvh``
  - ``utils.graph.graph_laplacian``
  - ``utils.random.choice``
  - ``utils.sparsetools.connected_components``
  - ``utils.stats.rankdata``

- Estimators with both methods ``decision_function`` and ``predict_proba``
  are now required to have a monotonic relation between them. The
  method ``check_decision_proba_consistency`` has been added in
  ``utils.estimator_checks`` to check their consistency.
  :issue:`7578` by :user:`Shubham Bhardwaj <shubham0704>`

- All checks in ``utils.estimator_checks``, in particular
  :func:`utils.estimator_checks.check_estimator` now accept estimator
  instances. Most other checks do not accept
  estimator classes any more. :issue:`9019` by `Andreas Müller`_.

- Ensure that estimators' attributes ending with ``_`` are not set
  in the constructor but only in the ``fit`` method. Most notably,
  ensemble estimators (deriving from :class:`ensemble.BaseEnsemble`)
  now only have ``self.estimators_`` available after ``fit``.
  :issue:`7464` by `Lars Buitinck`_ and `Loic Esteve`_.

Code and Documentation Contributors
-----------------------------------

Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.18, including:

Joel Nothman, Loic Esteve, Andreas Mueller, Guillaume Lemaitre, Olivier
Grisel, Hanmin Qin, Raghav RV, Alexandre Gramfort, themrmax, Aman Dalmia,
Gael Varoquaux, Naoya Kanai, Tom Dupré la Tour, Rishikesh, Nelson Liu,
Taehoon Lee, Nelle Varoquaux, Aashil, Mikhail Korobov, Sebastin Santy, Joan
Massich, Roman Yurchak, RAKOTOARISON Herilalaina, Thierry Guillemot,
Alexandre Abadie, Carol Willing, Balakumaran Manoharan, Josh Karnofsky, Vlad
Niculae, Utkarsh Upadhyay, Dmitry Petrov, Minghui Liu, Srivatsan, Vincent
Pham, Albert Thomas, Jake VanderPlas, Attractadore, JC Liu, alexandercbooth,
chkoar, Óscar Nájera, Aarshay Jain, Kyle Gilliam, Ramana Subramanyam, CJ
Carey, Clement Joudet, David Robles, He Chen, Joris Van den Bossche, Karan
Desai, Katie Luangkote, Leland McInnes, Maniteja Nandana, Michele Lacchia,
Sergei Lebedev, Shubham Bhardwaj, akshay0724, omtcyfz, rickiepark,
waterponey, Vathsala Achar, jbDelafosse, Ralf Gommers, Ekaterina Krivich,
Vivek Kumar, Ishank Gulati, Dave Elliott, ldirer, Reiichiro Nakano, Levi
John Wolf, Mathieu Blondel, Sid Kapur, Dougal J. Sutherland, midinas,
mikebenfield, Sourav Singh, Aseem Bansal, Ibraim Ganiev, Stephen Hoover,
AishwaryaRK, Steven C. Howell, Gary Foreman, Neeraj Gangwar, Tahar, Jon
Crall, dokato, Kathy Chen, ferria, Thomas Moreau, Charlie Brummitt, Nicolas
Goix, Adam Kleczewski, Sam Shleifer, Nikita Singh, Basil Beirouti, Giorgio
Patrini, Manoj Kumar, Rafael Possas, James Bourbeau, James A. Bednar, Janine
Harper, Jaye, Jean Helie, Jeremy Steward, Artsiom, John Wei, Jonathan LIgo,
Jonathan Rahn, seanpwilliams, Arthur Mensch, Josh Levy, Julian Kuhlmann,
Julien Aubert, Jörn Hees, Kai, shivamgargsya, Kat Hempstalk, Kaushik
Lakshmikanth, Kennedy, Kenneth Lyons, Kenneth Myers, Kevin Yap, Kirill
Bobyrev, Konstantin Podshumok, Arthur Imbert, Lee Murray, toastedcornflakes,
Lera, Li Li, Arthur Douillard, Mainak Jas, tobycheese, Manraj Singh,
Manvendra Singh, Marc Meketon, MarcoFalke, Matthew Brett, Matthias Gilch,
Mehul Ahuja, Melanie Goetz, Meng, Peng, Michael Dezube, Michal Baumgartner,
vibrantabhi19, Artem Golubin, Milen Paskov, Antonin Carette, Morikko,
MrMjauh, NALEPA Emmanuel, Namiya, Antoine Wendlinger, Narine Kokhlikyan,
NarineK, Nate Guerin, Angus Williams, Ang Lu, Nicole Vavrova, Nitish Pandey,
Okhlopkov Daniil Olegovich, Andy Craze, Om Prakash, Parminder Singh, Patrick
Carlson, Patrick Pei, Paul Ganssle, Paulo Haddad, Paweł Lorek, Peng Yu, Pete
Bachant, Peter Bull, Peter Csizsek, Peter Wang, Pieter Arthur de Jong,
Ping-Yao, Chang, Preston Parry, Puneet Mathur, Quentin Hibon, Andrew Smith,
Andrew Jackson, 1kastner, Rameshwar Bhaskaran, Rebecca Bilbro, Remi Rampin,
Andrea Esuli, Rob Hall, Robert Bradshaw, Romain Brault, Aman Pratik, Ruifeng
Zheng, Russell Smith, Sachin Agarwal, Sailesh Choyal, Samson Tan, Samuël
Weber, Sarah Brown, Sebastian Pölsterl, Sebastian Raschka, Sebastian Saeger,
Alyssa Batula, Abhyuday Pratap Singh, Sergey Feldman, Sergul Aydore, Sharan
Yalburgi, willduan, Siddharth Gupta, Sri Krishna, Almer, Stijn Tonk, Allen
Riddell, Theofilos Papapanagiotou, Alison, Alexis Mignon, Tommy Boucher,
Tommy Löfstedt, Toshihiro Kamishima, Tyler Folkman, Tyler Lanigan, Alexander
Junge, Varun Shenoy, Victor Poughon, Vilhelm von Ehrenheim, Aleksandr
Sandrovskii, Alan Yee, Vlasios Vasileiou, Warut Vijitbenjaronk, Yang Zhang,
Yaroslav Halchenko, Yichuan Liu, Yuichi Fujikawa, affanv14, aivision2020,
xor, andreh7, brady salz, campustrampus, Agamemnon Krasoulis, ditenberg,
elena sharova, filipj8, fukatani, gedeck, guiniol, guoci, hakaa1,
hongkahjun, i-am-xhy, jakirkham, jaroslaw weber, jayzed82, jeroko,
jmontoyam, jonathan striebel, josephsalmon, jschendel, leereeves, martin
hahn, mathurinm, mehak sachdeva, mlewis1729, mlliou112, mthorrell,
ndingwall, nuffe, yangarbiter, plagree, pldtc325, Breno Freitas, Brett
Olsen, Brian A. Alfano, Brian Burns, polmauri, Brandon Carter, Charlton
Austin, Chayant T15h, Chinmaya Pancholi, Christian Danielsen, Chung Yen,
Chyi-Kwei Yau, pravarmahajan, DOHMATOB Elvis, Daniel LeJeune, Daniel Hnyk,
Darius Morawiec, David DeTomaso, David Gasquez, David Haberthür, David
Heryanto, David Kirkby, David Nicholson, rashchedrin, Deborah Gertrude
Digges, Denis Engemann, Devansh D., Dickson, Bob Baxley, Don86, E.
Lynch-Klarup, Ed Rogers, Elizabeth Ferriss, Ellen Co2, Fabian Egli,
Fang-Chieh Chou, Bing Tian Dai, Greg Stupp, Grzegorz Szpak, Bertrand
Thirion, Hadrien Bertrand, Harizo Rajaona, zxcvbnius, Henry Lin, Holger
Peters, Icyblade Dai, Igor Andriushchenko, Ilya, Isaac Laughlin, Iván
Vallés, Aurélien Bellet, JPFrancoia, Jacob Schreiber, Asish Mahapatra
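One of the 0.19 API changes listed above, the requirement that ``decision_function`` and ``predict_proba`` be monotonically related, amounts to requiring that the two score vectors rank samples identically. A minimal numpy sketch of that idea, assuming no tied scores (this is not the actual ``check_decision_proba_consistency`` implementation):

```python
import numpy as np


def ranks_agree(decision_scores, positive_proba):
    """True when two score vectors induce the same sample ordering,
    i.e. the mapping between them is monotonic (assumes no ties)."""
    return np.array_equal(np.argsort(decision_scores),
                          np.argsort(positive_proba))


# A sigmoid is monotonic, so decision scores and positive-class
# probabilities rank the samples identically:
scores = np.array([-2.0, 0.5, 1.5, -0.3])
proba = 1.0 / (1.0 + np.exp(-scores))   # monotonic transform of the scores
print(ranks_agree(scores, proba))        # True

# A non-monotonic mapping (here: probabilities shuffled relative to the
# scores) breaks the required consistency:
print(ranks_agree(scores, proba[::-1]))  # False
```

Estimators whose two outputs disagree in this way would now fail the estimator checks.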
{"questions":"scikit-learn sklearn contributors rst Version 0 18","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n============\nVersion 0.18\n============\n\n.. warning::\n\n    Scikit-learn 0.18 is the last major release of scikit-learn to support Python 2.6.\n    Later versions of scikit-learn will require Python 2.7 or above.\n\n\n.. _changes_0_18_2:\n\nVersion 0.18.2\n==============\n\n**June 20, 2017**\n\nChangelog\n---------\n\n- Fixes for compatibility with NumPy 1.13.0: :issue:`7946` :issue:`8355` by\n  `Loic Esteve`_.\n\n- Minor compatibility changes in the examples :issue:`9010` :issue:`8040`\n  :issue:`9149`.\n\nCode Contributors\n-----------------\nAman Dalmia, Loic Esteve, Nate Guerin, Sergei Lebedev\n\n\n.. _changes_0_18_1:\n\nVersion 0.18.1\n==============\n\n**November 11, 2016**\n\nChangelog\n---------\n\nEnhancements\n............\n\n- Improved ``sample_without_replacement`` speed by utilizing\n  numpy.random.permutation for most cases. As a result,\n  samples may differ in this release for a fixed random state.\n  Affected estimators:\n\n  - :class:`ensemble.BaggingClassifier`\n  - :class:`ensemble.BaggingRegressor`\n  - :class:`linear_model.RANSACRegressor`\n  - :class:`model_selection.RandomizedSearchCV`\n  - :class:`random_projection.SparseRandomProjection`\n\n  This also affects the :meth:`datasets.make_classification`\n  method.\n\nBug fixes\n.........\n\n- Fix issue where ``min_grad_norm`` and ``n_iter_without_progress``\n  parameters were not being utilised by :class:`manifold.TSNE`.\n  :issue:`6497` by :user:`Sebastian S\u00e4ger <ssaeger>`\n\n- Fix bug for svm's decision values when ``decision_function_shape``\n  is ``ovr`` in :class:`svm.SVC`.\n  :class:`svm.SVC`'s decision_function was incorrect from versions\n  0.17.0 through 0.18.0.\n  :issue:`7724` by `Bing Tian Dai`_\n\n- Attribute ``explained_variance_ratio`` of\n  :class:`discriminant_analysis.LinearDiscriminantAnalysis` calculated\n  with SVD and Eigen 
solver are now of the same length. :issue:`7632`\n  by :user:`JPFrancoia <JPFrancoia>`\n\n- Fixes issue in :ref:`univariate_feature_selection` where score\n  functions were not accepting multi-label targets. :issue:`7676`\n  by :user:`Mohammed Affan <affanv14>`\n\n- Fixed setting parameters when calling ``fit`` multiple times on\n  :class:`feature_selection.SelectFromModel`. :issue:`7756` by `Andreas M\u00fcller`_\n\n- Fixes issue in ``partial_fit`` method of\n  :class:`multiclass.OneVsRestClassifier` when number of classes used in\n  ``partial_fit`` was less than the total number of classes in the\n  data. :issue:`7786` by `Srivatsan Ramesh`_\n\n- Fixes issue in :class:`calibration.CalibratedClassifierCV` where\n  the sum of probabilities of each class for a data was not 1, and\n  ``CalibratedClassifierCV`` now handles the case where the training set\n  has less number of classes than the total data. :issue:`7799` by\n  `Srivatsan Ramesh`_\n\n- Fix a bug where :class:`sklearn.feature_selection.SelectFdr` did not\n  exactly implement Benjamini-Hochberg procedure. It formerly may have\n  selected fewer features than it should.\n  :issue:`7490` by :user:`Peng Meng <mpjlu>`.\n\n- :class:`sklearn.manifold.LocallyLinearEmbedding` now correctly handles\n  integer inputs. :issue:`6282` by `Jake Vanderplas`_.\n\n- The ``min_weight_fraction_leaf`` parameter of tree-based classifiers and\n  regressors now assumes uniform sample weights by default if the\n  ``sample_weight`` argument is not passed to the ``fit`` function.\n  Previously, the parameter was silently ignored. :issue:`7301`\n  by :user:`Nelson Liu <nelson-liu>`.\n\n- Numerical issue with :class:`linear_model.RidgeCV` on centered data when\n  `n_features > n_samples`. 
:issue:`6178` by `Bertrand Thirion`_\n\n- Tree splitting criterion classes' cloning\/pickling is now memory safe\n  :issue:`7680` by :user:`Ibraim Ganiev <olologin>`.\n\n- Fixed a bug where :class:`decomposition.NMF` sets its ``n_iters_``\n  attribute in `transform()`. :issue:`7553` by :user:`Ekaterina\n  Krivich <kiote>`.\n\n- :class:`sklearn.linear_model.LogisticRegressionCV` now correctly handles\n  string labels. :issue:`5874` by `Raghav RV`_.\n\n- Fixed a bug where :func:`sklearn.model_selection.train_test_split` raised\n  an error when ``stratify`` is a list of string labels. :issue:`7593` by\n  `Raghav RV`_.\n\n- Fixed a bug where :class:`sklearn.model_selection.GridSearchCV` and\n  :class:`sklearn.model_selection.RandomizedSearchCV` were not pickleable\n  because of a pickling bug in ``np.ma.MaskedArray``. :issue:`7594` by\n  `Raghav RV`_.\n\n- All cross-validation utilities in :mod:`sklearn.model_selection` now\n  permit one time cross-validation splitters for the ``cv`` parameter. Also\n  non-deterministic cross-validation splitters (where multiple calls to\n  ``split`` produce dissimilar splits) can be used as ``cv`` parameter.\n  The :class:`sklearn.model_selection.GridSearchCV` will cross-validate each\n  parameter setting on the split produced by the first ``split`` call\n  to the cross-validation splitter.  :issue:`7660` by `Raghav RV`_.\n\n- Fix bug where :meth:`preprocessing.MultiLabelBinarizer.fit_transform`\n  returned an invalid CSR matrix.\n  :issue:`7750` by :user:`CJ Carey <perimosocordiae>`.\n\n- Fixed a bug where :func:`metrics.pairwise.cosine_distances` could return a\n  small negative distance. 
:issue:`7732` by :user:`Artsion <asanakoy>`.\n\nAPI changes summary\n-------------------\n\nTrees and forests\n\n- The ``min_weight_fraction_leaf`` parameter of tree-based classifiers and\n  regressors now assumes uniform sample weights by default if the\n  ``sample_weight`` argument is not passed to the ``fit`` function.\n  Previously, the parameter was silently ignored. :issue:`7301` by :user:`Nelson\n  Liu <nelson-liu>`.\n\n- Tree splitting criterion classes' cloning\/pickling is now memory safe.\n  :issue:`7680` by :user:`Ibraim Ganiev <olologin>`.\n\n\nLinear, kernelized and related models\n\n- Length of ``explained_variance_ratio`` of\n  :class:`discriminant_analysis.LinearDiscriminantAnalysis`\n  changed for both Eigen and SVD solvers. The attribute has now a length\n  of min(n_components, n_classes - 1). :issue:`7632`\n  by :user:`JPFrancoia <JPFrancoia>`\n\n- Numerical issue with :class:`linear_model.RidgeCV` on centered data when\n  ``n_features > n_samples``. :issue:`6178` by `Bertrand Thirion`_\n\n.. _changes_0_18:\n\nVersion 0.18\n============\n\n**September 28, 2016**\n\n.. _model_selection_changes:\n\nModel Selection Enhancements and API Changes\n--------------------------------------------\n\n- **The model_selection module**\n\n  The new module :mod:`sklearn.model_selection`, which groups together the\n  functionalities of formerly `sklearn.cross_validation`,\n  `sklearn.grid_search` and `sklearn.learning_curve`, introduces new\n  possibilities such as nested cross-validation and better manipulation of\n  parameter searches with Pandas.\n\n  Many things will stay the same but there are some key differences. Read\n  below to know more about the changes.\n\n- **Data-independent CV splitters enabling nested cross-validation**\n\n  The new cross-validation splitters, defined in the\n  :mod:`sklearn.model_selection`, are no longer initialized with any\n  data-dependent parameters such as ``y``. 
Instead they expose a\n  `split` method that takes in the data and yields a generator for the\n  different splits.\n\n  This change makes it possible to use the cross-validation splitters to\n  perform nested cross-validation, facilitated by\n  :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` utilities.\n\n- **The enhanced cv_results_ attribute**\n\n  The new ``cv_results_`` attribute (of :class:`model_selection.GridSearchCV`\n  and :class:`model_selection.RandomizedSearchCV`) introduced in lieu of the\n  ``grid_scores_`` attribute is a dict of 1D arrays with elements in each\n  array corresponding to the parameter settings (i.e. search candidates).\n\n  The ``cv_results_`` dict can be easily imported into ``pandas`` as a\n  ``DataFrame`` for exploring the search results.\n\n  The ``cv_results_`` arrays include scores for each cross-validation split\n  (with keys such as ``'split0_test_score'``), as well as their mean\n  (``'mean_test_score'``) and standard deviation (``'std_test_score'``).\n\n  The ranks for the search candidates (based on their mean\n  cross-validation score) is available at ``cv_results_['rank_test_score']``.\n\n  The parameter values for each parameter is stored separately as numpy\n  masked object arrays. The value, for that search candidate, is masked if\n  the corresponding parameter is not applicable. Additionally a list of all\n  the parameter dicts are stored at ``cv_results_['params']``.\n\n- **Parameters n_folds and n_iter renamed to n_splits**\n\n  Some parameter names have changed:\n  The ``n_folds`` parameter in new :class:`model_selection.KFold`,\n  :class:`model_selection.GroupKFold` (see below for the name change),\n  and :class:`model_selection.StratifiedKFold` is now renamed to\n  ``n_splits``. 
The ``n_iter`` parameter in\n  :class:`model_selection.ShuffleSplit`, the new class\n  :class:`model_selection.GroupShuffleSplit` and\n  :class:`model_selection.StratifiedShuffleSplit` is now renamed to\n  ``n_splits``.\n\n- **Rename of splitter classes which accepts group labels along with data**\n\n  The cross-validation splitters ``LabelKFold``,\n  ``LabelShuffleSplit``, ``LeaveOneLabelOut`` and ``LeavePLabelOut`` have\n  been renamed to :class:`model_selection.GroupKFold`,\n  :class:`model_selection.GroupShuffleSplit`,\n  :class:`model_selection.LeaveOneGroupOut` and\n  :class:`model_selection.LeavePGroupsOut` respectively.\n\n  Note the change from singular to plural form in\n  :class:`model_selection.LeavePGroupsOut`.\n\n- **Fit parameter labels renamed to groups**\n\n  The ``labels`` parameter in the `split` method of the newly renamed\n  splitters :class:`model_selection.GroupKFold`,\n  :class:`model_selection.LeaveOneGroupOut`,\n  :class:`model_selection.LeavePGroupsOut`,\n  :class:`model_selection.GroupShuffleSplit` is renamed to ``groups``\n  following the new nomenclature of their class names.\n\n- **Parameter n_labels renamed to n_groups**\n\n  The parameter ``n_labels`` in the newly renamed\n  :class:`model_selection.LeavePGroupsOut` is changed to ``n_groups``.\n\n- Training scores and Timing information\n\n  ``cv_results_`` also includes the training scores for each\n  cross-validation split (with keys such as ``'split0_train_score'``), as\n  well as their mean (``'mean_train_score'``) and standard deviation\n  (``'std_train_score'``). 
To avoid the cost of evaluating training score,\n  set ``return_train_score=False``.\n\n  Additionally the mean and standard deviation of the times taken to split,\n  train and score the model across all the cross-validation splits is\n  available at the key ``'mean_time'`` and ``'std_time'`` respectively.\n\nChangelog\n---------\n\nNew features\n............\n\nClassifiers and Regressors\n\n- The Gaussian Process module has been reimplemented and now offers classification\n  and regression estimators through :class:`gaussian_process.GaussianProcessClassifier`\n  and  :class:`gaussian_process.GaussianProcessRegressor`. Among other things, the new\n  implementation supports kernel engineering, gradient-based hyperparameter optimization or\n  sampling of functions from GP prior and GP posterior. Extensive documentation and\n  examples are provided. By `Jan Hendrik Metzen`_.\n\n- Added new supervised learning algorithm: :ref:`Multi-layer Perceptron <multilayer_perceptron>`\n  :issue:`3204` by :user:`Issam H. Laradji <IssamLaradji>`\n\n- Added :class:`linear_model.HuberRegressor`, a linear model robust to outliers.\n  :issue:`5291` by `Manoj Kumar`_.\n\n- Added the :class:`multioutput.MultiOutputRegressor` meta-estimator. It\n  converts single output regressors to multi-output regressors by fitting\n  one regressor per output. By :user:`Tim Head <betatim>`.\n\nOther estimators\n\n- New :class:`mixture.GaussianMixture` and :class:`mixture.BayesianGaussianMixture`\n  replace former mixture models, employing faster inference\n  for sounder results. :issue:`7295` by :user:`Wei Xue <xuewei4d>` and\n  :user:`Thierry Guillemot <tguillemot>`.\n\n- Class `decomposition.RandomizedPCA` is now factored into :class:`decomposition.PCA`\n  and it is available calling with parameter ``svd_solver='randomized'``.\n  The default number of ``n_iter`` for ``'randomized'`` has changed to 4. The old\n  behavior of PCA is recovered by ``svd_solver='full'``. 
An additional solver\n  calls ``arpack`` and performs truncated (non-randomized) SVD. By default,\n  the best solver is selected depending on the size of the input and the\n  number of components requested. :issue:`5299` by :user:`Giorgio Patrini <giorgiop>`.\n\n- Added two functions for mutual information estimation:\n  :func:`feature_selection.mutual_info_classif` and\n  :func:`feature_selection.mutual_info_regression`. These functions can be\n  used in :class:`feature_selection.SelectKBest` and\n  :class:`feature_selection.SelectPercentile` as score functions.\n  By :user:`Andrea Bravi <AndreaBravi>` and :user:`Nikolay Mayorov <nmayorov>`.\n\n- Added the :class:`ensemble.IsolationForest` class for anomaly detection based on\n  random forests. By `Nicolas Goix`_.\n\n- Added ``algorithm=\"elkan\"`` to :class:`cluster.KMeans` implementing\n  Elkan's fast K-Means algorithm. By `Andreas M\u00fcller`_.\n\nModel selection and evaluation\n\n- Added :func:`metrics.fowlkes_mallows_score`, the Fowlkes Mallows\n  Index which measures the similarity of two clusterings of a set of points\n  By :user:`Arnaud Fouchet <afouchet>` and :user:`Thierry Guillemot <tguillemot>`.\n\n- Added `metrics.calinski_harabaz_score`, which computes the Calinski\n  and Harabaz score to evaluate the resulting clustering of a set of points.\n  By :user:`Arnaud Fouchet <afouchet>` and :user:`Thierry Guillemot <tguillemot>`.\n\n- Added new cross-validation splitter\n  :class:`model_selection.TimeSeriesSplit` to handle time series data.\n  :issue:`6586` by :user:`YenChen Lin <yenchenlin>`\n\n- The cross-validation iterators are replaced by cross-validation splitters\n  available from :mod:`sklearn.model_selection`, allowing for nested\n  cross-validation. See :ref:`model_selection_changes` for more information.\n  :issue:`4294` by `Raghav RV`_.\n\nEnhancements\n............\n\nTrees and ensembles\n\n- Added a new splitting criterion for :class:`tree.DecisionTreeRegressor`,\n  the mean absolute error. 
This criterion can also be used in\n  :class:`ensemble.ExtraTreesRegressor`,\n  :class:`ensemble.RandomForestRegressor`, and the gradient boosting\n  estimators. :issue:`6667` by :user:`Nelson Liu <nelson-liu>`.\n\n- Added a weighted impurity-based early stopping criterion for decision tree\n  growth. :issue:`6954` by :user:`Nelson Liu <nelson-liu>`.\n\n- The random forest, extra tree and decision tree estimators now have a\n  method ``decision_path`` which returns the decision path of samples in\n  the tree. By `Arnaud Joly`_.\n\n- A new example has been added unveiling the decision tree structure.\n  By `Arnaud Joly`_.\n\n- Random forest, extra trees, decision trees and gradient boosting estimators\n  accept the parameters ``min_samples_split`` and ``min_samples_leaf``\n  provided as a percentage of the training samples. By :user:`yelite <yelite>` and `Arnaud Joly`_.\n\n- Gradient boosting estimators accept the parameter ``criterion`` to specify\n  the splitting criterion used when building decision trees.\n  :issue:`6667` by :user:`Nelson Liu <nelson-liu>`.\n\n- The memory footprint is reduced (sometimes greatly) for\n  `ensemble.bagging.BaseBagging` and classes that inherit from it,\n  i.e., :class:`ensemble.BaggingClassifier`,\n  :class:`ensemble.BaggingRegressor`, and :class:`ensemble.IsolationForest`,\n  by dynamically generating the attribute ``estimators_samples_`` only when it is\n  needed. By :user:`David Staub <staubda>`.\n\n- Added ``n_jobs`` and ``sample_weight`` parameters for\n  :class:`ensemble.VotingClassifier` to fit underlying estimators in parallel.\n  :issue:`5805` by :user:`Ibraim Ganiev <olologin>`.\n\nLinear, kernelized and related models\n\n- In :class:`linear_model.LogisticRegression`, the SAG solver is now\n  available in the multinomial case. 
:issue:`5251` by `Tom Dupre la Tour`_.\n\n- :class:`linear_model.RANSACRegressor`, :class:`svm.LinearSVC` and\n  :class:`svm.LinearSVR` now support ``sample_weight``.\n  By :user:`Imaculate <Imaculate>`.\n\n- Added a parameter ``loss`` to :class:`linear_model.RANSACRegressor` to measure the\n  error on the samples for every trial. By `Manoj Kumar`_.\n\n- Prediction of out-of-sample events with Isotonic Regression\n  (:class:`isotonic.IsotonicRegression`) is now much faster (over 1000x in tests with synthetic\n  data). By :user:`Jonathan Arfa <jarfa>`.\n\n- Isotonic regression (:class:`isotonic.IsotonicRegression`) now uses a better algorithm to avoid\n  `O(n^2)` behavior in pathological cases, and is also generally faster\n  (:issue:`6691`). By `Antony Lee`_.\n\n- :class:`naive_bayes.GaussianNB` now accepts data-independent class priors\n  through the parameter ``priors``. By :user:`Guillaume Lemaitre <glemaitre>`.\n\n- :class:`linear_model.ElasticNet` and :class:`linear_model.Lasso`\n  now work with ``np.float32`` input data without converting it\n  into ``np.float64``. This reduces memory\n  consumption. :issue:`6913` by :user:`YenChen Lin <yenchenlin>`.\n\n- :class:`semi_supervised.LabelPropagation` and :class:`semi_supervised.LabelSpreading`\n  now accept arbitrary kernel functions in addition to the strings ``knn`` and ``rbf``.\n  :issue:`5762` by :user:`Utkarsh Upadhyay <musically-ut>`.\n\nDecomposition, manifold learning and clustering\n\n- Added an ``inverse_transform`` function to :class:`decomposition.NMF` to compute\n  the data matrix of original shape. 
By :user:`Anish Shah <AnishShah>`.\n\n- :class:`cluster.KMeans` and :class:`cluster.MiniBatchKMeans` now work\n  with ``np.float32`` and ``np.float64`` input data without converting it.\n  This reduces memory consumption when using ``np.float32``.\n  :issue:`6846` by :user:`Sebastian S\u00e4ger <ssaeger>` and\n  :user:`YenChen Lin <yenchenlin>`.\n\nPreprocessing and feature selection\n\n- :class:`preprocessing.RobustScaler` now accepts a ``quantile_range`` parameter.\n  :issue:`5929` by :user:`Konstantin Podshumok <podshumok>`.\n\n- :class:`feature_extraction.FeatureHasher` now accepts string values.\n  :issue:`6173` by :user:`Ryad Zenine <ryadzenine>` and\n  :user:`Devashish Deshpande <dsquareindia>`.\n\n- Keyword arguments can now be supplied to ``func`` in\n  :class:`preprocessing.FunctionTransformer` by means of the ``kw_args``\n  parameter. By `Brian McFee`_.\n\n- :class:`feature_selection.SelectKBest` and :class:`feature_selection.SelectPercentile`\n  now accept score functions that take ``X`` and ``y`` as input and return only the scores.\n  By :user:`Nikolay Mayorov <nmayorov>`.\n\nModel evaluation and meta-estimators\n\n- :class:`multiclass.OneVsOneClassifier` and :class:`multiclass.OneVsRestClassifier`\n  now support ``partial_fit``. By :user:`Asish Panda <kaichogami>` and\n  :user:`Philipp Dowling <phdowling>`.\n\n- Added support for substituting or disabling :class:`pipeline.Pipeline`\n  and :class:`pipeline.FeatureUnion` components using the ``set_params``\n  interface that powers `sklearn.grid_search`.\n  See :ref:`sphx_glr_auto_examples_compose_plot_compare_reduction.py`.\n  By `Joel Nothman`_ and :user:`Robert McGibbon <rmcgibbo>`.\n\n- The new ``cv_results_`` attribute of :class:`model_selection.GridSearchCV`\n  (and :class:`model_selection.RandomizedSearchCV`) can be easily imported\n  into pandas as a ``DataFrame``. See :ref:`model_selection_changes` for\n  more information. 
:issue:`6697` by `Raghav RV`_.\n\n- Generalization of :func:`model_selection.cross_val_predict`.\n  One can pass method names such as `predict_proba` to be used in the\n  cross-validation framework instead of the default `predict`.\n  By :user:`Ori Ziv <zivori>` and :user:`Sears Merritt <merritts>`.\n\n- The training scores and the time taken for training followed by scoring for\n  each search candidate are now available in the ``cv_results_`` dict.\n  See :ref:`model_selection_changes` for more information.\n  :issue:`7325` by :user:`Eugene Chen <eyc88>` and `Raghav RV`_.\n\nMetrics\n\n- Added a ``labels`` flag to :func:`metrics.log_loss` to explicitly provide\n  the labels when the number of classes in ``y_true`` and ``y_pred`` differs.\n  :issue:`7239` by :user:`Hong Guangguo <hongguangguo>` with help from\n  :user:`Mads Jensen <indianajensen>` and :user:`Nelson Liu <nelson-liu>`.\n\n- Support sparse contingency matrices in cluster evaluation\n  (`metrics.cluster.supervised`) to scale to a large number of\n  clusters.\n  :issue:`7419` by :user:`Gregory Stupp <stuppie>` and `Joel Nothman`_.\n\n- Added a ``sample_weight`` parameter to :func:`metrics.matthews_corrcoef`.\n  By :user:`Jatin Shah <jatinshah>` and `Raghav RV`_.\n\n- Sped up :func:`metrics.silhouette_score` by using vectorized operations.\n  By `Manoj Kumar`_.\n\n- Added a ``sample_weight`` parameter to :func:`metrics.confusion_matrix`.\n  By :user:`Bernardo Stein <DanielSidhion>`.\n\nMiscellaneous\n\n- Added an ``n_jobs`` parameter to :class:`feature_selection.RFECV` to compute\n  the score on the test folds in parallel. By `Manoj Kumar`_.\n\n- The codebase no longer contains C\/C++ files generated by Cython: they are\n  generated during the build. Distribution packages will still contain generated\n  C\/C++ files. 
By :user:`Arthur Mensch <arthurmensch>`.\n\n- Reduced the memory usage for 32-bit float input arrays of\n  `utils.sparse_func.mean_variance_axis` and\n  `utils.sparse_func.incr_mean_variance_axis` by supporting Cython\n  fused types. By :user:`YenChen Lin <yenchenlin>`.\n\n- `ignore_warnings` now accepts a ``category`` argument to ignore only\n  the warnings of a specified type. By :user:`Thierry Guillemot <tguillemot>`.\n\n- Added a ``return_X_y`` parameter, which returns ``(data, target)`` as a tuple, to\n  the :func:`datasets.load_iris` dataset\n  :issue:`7049`,\n  :func:`datasets.load_breast_cancer` dataset\n  :issue:`7152`,\n  :func:`datasets.load_digits` dataset,\n  :func:`datasets.load_diabetes` dataset,\n  :func:`datasets.load_linnerud` dataset,\n  `datasets.load_boston` dataset\n  :issue:`7154` by\n  :user:`Manvendra Singh <manu-chroma>`.\n\n- Simplified the ``clone`` function and deprecated support for estimators\n  that modify parameters in ``__init__``. :issue:`5540` by `Andreas M\u00fcller`_.\n\n- When unpickling a scikit-learn estimator in a different version than the one\n  the estimator was trained with, a ``UserWarning`` is raised; see :ref:`the documentation\n  on model persistence <persistence_limitations>` for more details. (:issue:`7248`)\n  By `Andreas M\u00fcller`_.\n\nBug fixes\n.........\n\nTrees and ensembles\n\n- Random forest, extra trees, decision trees and gradient boosting\n  no longer accept ``min_samples_split=1``, as at least 2 samples\n  are required to split a decision tree node. By `Arnaud Joly`_.\n\n- :class:`ensemble.VotingClassifier` now raises ``NotFittedError`` if ``predict``,\n  ``transform`` or ``predict_proba`` are called on the non-fitted estimator.\n  By `Sebastian Raschka`_.\n\n- Fix bug where :class:`ensemble.AdaBoostClassifier` and\n  :class:`ensemble.AdaBoostRegressor` would perform poorly if the\n  ``random_state`` was fixed\n  (:issue:`7411`). 
By `Joel Nothman`_.\n\n- Fix bug in ensembles with randomization where the ensemble would not\n  set ``random_state`` on base estimators in a pipeline or similar nesting\n  (:issue:`7411`). Note that results for :class:`ensemble.BaggingClassifier`,\n  :class:`ensemble.BaggingRegressor`, :class:`ensemble.AdaBoostClassifier`\n  and :class:`ensemble.AdaBoostRegressor` will now differ from previous\n  versions. By `Joel Nothman`_.\n\nLinear, kernelized and related models\n\n- Fixed incorrect gradient computation for ``loss='squared_epsilon_insensitive'`` in\n  :class:`linear_model.SGDClassifier` and :class:`linear_model.SGDRegressor`\n  (:issue:`6764`). By :user:`Wenhua Yang <geekoala>`.\n\n- Fix bug in :class:`linear_model.LogisticRegressionCV` where\n  ``solver='liblinear'`` did not accept ``class_weight='balanced'``\n  (:issue:`6817`). By `Tom Dupre la Tour`_.\n\n- Fix bug in :class:`neighbors.RadiusNeighborsClassifier` where an error\n  occurred when there were outliers being labelled and a weight function\n  specified (:issue:`6902`). By\n  `LeonieBorne <https:\/\/github.com\/LeonieBorne>`_.\n\n- Fix the :class:`linear_model.ElasticNet` sparse decision function to match\n  the dense output in the multioutput case.\n\nDecomposition, manifold learning and clustering\n\n- The default number of `iterated_power` iterations in `decomposition.RandomizedPCA` is now 4 instead of 3.\n  :issue:`5141` by :user:`Giorgio Patrini <giorgiop>`.\n\n- :func:`utils.extmath.randomized_svd` performs 4 power iterations by default, instead of 0.\n  In practice this is enough for obtaining a good approximation of the\n  true eigenvalues\/vectors in the presence of noise. When `n_components` is\n  small (``< .1 * min(X.shape)``), `n_iter` is set to 7, unless the user specifies\n  a higher number. 
This improves precision with few components.\n  :issue:`5299` by :user:`Giorgio Patrini <giorgiop>`.\n\n- Whiten\/non-whiten inconsistency between components of :class:`decomposition.PCA`\n  and `decomposition.RandomizedPCA` (now factored into PCA, see the\n  New features) is fixed. `components_` are stored with no whitening.\n  :issue:`5299` by :user:`Giorgio Patrini <giorgiop>`.\n\n- Fixed bug in :func:`manifold.spectral_embedding` where the diagonal of the unnormalized\n  Laplacian matrix was incorrectly set to 1. :issue:`4995` by :user:`Peter Fischer <yanlend>`.\n\n- Fixed incorrect initialization of `utils.arpack.eigsh` on all\n  occurrences. Affects `cluster.bicluster.SpectralBiclustering`,\n  :class:`decomposition.KernelPCA`, :class:`manifold.LocallyLinearEmbedding`,\n  and :class:`manifold.SpectralEmbedding` (:issue:`5012`). By\n  :user:`Peter Fischer <yanlend>`.\n\n- The attribute ``explained_variance_ratio_`` calculated with the SVD solver\n  of :class:`discriminant_analysis.LinearDiscriminantAnalysis` now returns\n  correct results. By :user:`JPFrancoia <JPFrancoia>`.\n\nPreprocessing and feature selection\n\n- `preprocessing.data._transform_selected` now always passes a copy\n  of ``X`` to the transform function when ``copy=True`` (:issue:`7194`). By `Caio\n  Oliveira <https:\/\/github.com\/caioaao>`_.\n\nModel evaluation and meta-estimators\n\n- :class:`model_selection.StratifiedKFold` now raises an error if the number\n  of members in any class is less than ``n_folds``.\n  :issue:`6182` by :user:`Devashish Deshpande <dsquareindia>`.\n\n- Fixed bug in :class:`model_selection.StratifiedShuffleSplit`\n  where train and test samples could overlap in some edge cases;\n  see :issue:`6121` for\n  more details. By `Loic Esteve`_.\n\n- Fix in :class:`sklearn.model_selection.StratifiedShuffleSplit` to\n  return splits of size ``train_size`` and ``test_size`` in all cases\n  (:issue:`6472`). 
By `Andreas M\u00fcller`_.\n\n- Cross-validation of :class:`multiclass.OneVsOneClassifier` and\n  :class:`multiclass.OneVsRestClassifier` now works with precomputed kernels.\n  :issue:`7350` by :user:`Russell Smith <rsmith54>`.\n\n- Fix incomplete ``predict_proba`` method delegation from\n  :class:`model_selection.GridSearchCV` to\n  :class:`linear_model.SGDClassifier` (:issue:`7159`)\n  by `Yichuan Liu <https:\/\/github.com\/yl565>`_.\n\nMetrics\n\n- Fix bug in :func:`metrics.silhouette_score` in which clusters of\n  size 1 were incorrectly scored. They should get a score of 0.\n  By `Joel Nothman`_.\n\n- Fix bug in :func:`metrics.silhouette_samples` so that it now works with\n  arbitrary labels, not just those ranging from 0 to ``n_clusters - 1``.\n\n- Fix bug where expected and adjusted mutual information were incorrect if\n  cluster contingency cells exceeded ``2**16``. By `Joel Nothman`_.\n\n- :func:`metrics.pairwise_distances` now converts arrays to\n  boolean arrays when required in ``scipy.spatial.distance``.\n  :issue:`5460` by `Tom Dupre la Tour`_.\n\n- Fix sparse input support in :func:`metrics.silhouette_score` as well as\n  in the example examples\/text\/document_clustering.py. By :user:`YenChen Lin <yenchenlin>`.\n\n- :func:`metrics.roc_curve` and :func:`metrics.precision_recall_curve` no\n  longer round ``y_score`` values when creating ROC curves; this was causing\n  problems for users with very small differences in scores (:issue:`7353`).\n\nMiscellaneous\n\n- `model_selection.tests._search._check_param_grid` now works correctly with all types\n  that extend\/implement `Sequence` (except strings), including range (Python 3.x) and xrange\n  (Python 2.x). 
:issue:`7323` by Viacheslav Kovalevskyi.\n\n- :func:`utils.extmath.randomized_range_finder` is more numerically stable when many\n  power iterations are requested, since it applies LU normalization by default.\n  If ``n_iter < 2``, numerical issues are unlikely, so no normalization is applied.\n  Other normalization options are available: ``'none'``, ``'LU'`` and ``'QR'``.\n  :issue:`5141` by :user:`Giorgio Patrini <giorgiop>`.\n\n- Fix a bug where some formats of ``scipy.sparse`` matrix, and estimators\n  with them as parameters, could not be passed to :func:`base.clone`.\n  By `Loic Esteve`_.\n\n- :func:`datasets.load_svmlight_file` can now read long int QID values.\n  :issue:`7101` by :user:`Ibraim Ganiev <olologin>`.\n\n\nAPI changes summary\n-------------------\n\nLinear, kernelized and related models\n\n- ``residual_metric`` has been deprecated in :class:`linear_model.RANSACRegressor`.\n  Use ``loss`` instead. By `Manoj Kumar`_.\n\n- Access to the public attributes ``.X_`` and ``.y_`` has been deprecated in\n  :class:`isotonic.IsotonicRegression`. 
By :user:`Jonathan Arfa <jarfa>`.\n\nDecomposition, manifold learning and clustering\n\n- The old `mixture.DPGMM` is deprecated in favor of the new\n  :class:`mixture.BayesianGaussianMixture` (with the parameter\n  ``weight_concentration_prior_type='dirichlet_process'``).\n  The new class solves the computational\n  problems of the old class and computes the Gaussian mixture with a\n  Dirichlet process prior faster than before.\n  :issue:`7295` by :user:`Wei Xue <xuewei4d>` and :user:`Thierry Guillemot <tguillemot>`.\n\n- The old `mixture.VBGMM` is deprecated in favor of the new\n  :class:`mixture.BayesianGaussianMixture` (with the parameter\n  ``weight_concentration_prior_type='dirichlet_distribution'``).\n  The new class solves the computational\n  problems of the old class and computes the Variational Bayesian Gaussian\n  mixture faster than before.\n  :issue:`6651` by :user:`Wei Xue <xuewei4d>` and :user:`Thierry Guillemot <tguillemot>`.\n\n- The old `mixture.GMM` is deprecated in favor of the new\n  :class:`mixture.GaussianMixture`. The new class computes the Gaussian mixture\n  faster than before, and some of its computational problems have been solved.\n  :issue:`6666` by :user:`Wei Xue <xuewei4d>` and :user:`Thierry Guillemot <tguillemot>`.\n\nModel evaluation and meta-estimators\n\n- The `sklearn.cross_validation`, `sklearn.grid_search` and\n  `sklearn.learning_curve` modules have been deprecated, and the classes and\n  functions have been reorganized into the :mod:`sklearn.model_selection`\n  module. 
See :ref:`model_selection_changes` for more information.\n  :issue:`4294` by `Raghav RV`_.\n\n- The ``grid_scores_`` attribute of :class:`model_selection.GridSearchCV`\n  and :class:`model_selection.RandomizedSearchCV` is deprecated in favor of\n  the attribute ``cv_results_``.\n  See :ref:`model_selection_changes` for more information.\n  :issue:`6697` by `Raghav RV`_.\n\n- The ``n_iter`` and ``n_folds`` parameters of the old CV splitters are replaced\n  by the new parameter ``n_splits``, which provides a consistent\n  and unambiguous interface to represent the number of train-test splits.\n  :issue:`7187` by :user:`YenChen Lin <yenchenlin>`.\n\n- The ``classes`` parameter was renamed to ``labels`` in\n  :func:`metrics.hamming_loss`. :issue:`7260` by :user:`Sebasti\u00e1n Vanrell <srvanrell>`.\n\n- The splitter classes ``LabelKFold``, ``LabelShuffleSplit``,\n  ``LeaveOneLabelOut`` and ``LeavePLabelOut`` are renamed to\n  :class:`model_selection.GroupKFold`,\n  :class:`model_selection.GroupShuffleSplit`,\n  :class:`model_selection.LeaveOneGroupOut`\n  and :class:`model_selection.LeavePGroupsOut` respectively.\n  Also, the parameter ``labels`` in the `split` method of the newly\n  renamed splitters :class:`model_selection.LeaveOneGroupOut` and\n  :class:`model_selection.LeavePGroupsOut` is renamed to\n  ``groups``. Additionally, in :class:`model_selection.LeavePGroupsOut`,\n  the parameter ``n_labels`` is renamed to ``n_groups``.\n  :issue:`6660` by `Raghav RV`_.\n\n- Error and loss names for ``scoring`` parameters are now prefixed by\n  ``'neg_'``, such as ``neg_mean_squared_error``. 
The unprefixed versions\n  are deprecated and will be removed in version 0.20.\n  :issue:`7261` by :user:`Tim Head <betatim>`.\n\nCode Contributors\n-----------------\nAditya Joshi, Alejandro, Alexander Fabisch, Alexander Loginov, Alexander\nMinyushkin, Alexander Rudy, Alexandre Abadie, Alexandre Abraham, Alexandre\nGramfort, Alexandre Saint, alexfields, Alvaro Ulloa, alyssaq, Amlan Kar,\nAndreas Mueller, andrew giessel, Andrew Jackson, Andrew McCulloh, Andrew\nMurray, Anish Shah, Arafat, Archit Sharma, Ariel Rokem, Arnaud Joly, Arnaud\nRachez, Arthur Mensch, Ash Hoover, asnt, b0noI, Behzad Tabibian, Bernardo,\nBernhard Kratzwald, Bhargav Mangipudi, blakeflei, Boyuan Deng, Brandon Carter,\nBrett Naul, Brian McFee, Caio Oliveira, Camilo Lamus, Carol Willing, Cass,\nCeShine Lee, Charles Truong, Chyi-Kwei Yau, CJ Carey, codevig, Colin Ni, Dan\nShiebler, Daniel, Daniel Hnyk, David Ellis, David Nicholson, David Staub, David\nThaler, David Warshaw, Davide Lasagna, Deborah, definitelyuncertain, Didi\nBar-Zev, djipey, dsquareindia, edwinENSAE, Elias Kuthe, Elvis DOHMATOB, Ethan\nWhite, Fabian Pedregosa, Fabio Ticconi, fisache, Florian Wilhelm, Francis,\nFrancis O'Donovan, Gael Varoquaux, Ganiev Ibraim, ghg, Gilles Louppe, Giorgio\nPatrini, Giovanni Cherubin, Giovanni Lanzani, Glenn Qian, Gordon\nMohr, govin-vatsan, Graham Clenaghan, Greg Reda, Greg Stupp, Guillaume\nLemaitre, Gustav M\u00f6rtberg, halwai, Harizo Rajaona, Harry Mavroforakis,\nhashcode55, hdmetor, Henry Lin, Hobson Lane, Hugo Bowne-Anderson,\nIgor Andriushchenko, Imaculate, Inki Hwang, Isaac Sijaranamual,\nIshank Gulati, Issam Laradji, Iver Jordal, jackmartin, Jacob Schreiber, Jake\nVanderplas, James Fiedler, James Routley, Jan Zikes, Janna Brettingen, jarfa, Jason\nLaska, jblackburne, jeff levesque, Jeffrey Blackburne, Jeffrey04, Jeremy Hintz,\njeremynixon, Jeroen, Jessica Yung, Jill-J\u00eann Vie, Jimmy Jia, Jiyuan Qian, Joel\nNothman, johannah, John, John Boersma, John Kirkham, John 
Moeller,\njonathan.striebel, joncrall, Jordi, Joseph Munoz, Joshua Cook, JPFrancoia,\njrfiedler, JulianKahnert, juliathebrave, kaichogami, KamalakerDadi, Kenneth\nLyons, Kevin Wang, kingjr, kjell, Konstantin Podshumok, Kornel Kielczewski,\nKrishna Kalyan, krishnakalyan3, Kvle Putnam, Kyle Jackson, Lars Buitinck,\nldavid, LeiG, LeightonZhang, Leland McInnes, Liang-Chi Hsieh, Lilian Besson,\nlizsz, Loic Esteve, Louis Tiao, L\u00e9onie Borne, Mads Jensen, Maniteja Nandana,\nManoj Kumar, Manvendra Singh, Marco, Mario Krell, Mark Bao, Mark Szepieniec,\nMartin Madsen, MartinBpr, MaryanMorel, Massil, Matheus, Mathieu Blondel,\nMathieu Dubois, Matteo, Matthias Ekman, Max Moroz, Michael Scherer, michiaki\nariga, Mikhail Korobov, Moussa Taifi, mrandrewandrade, Mridul Seth, nadya-p,\nNaoya Kanai, Nate George, Nelle Varoquaux, Nelson Liu, Nick James,\nNickleDave, Nico, Nicolas Goix, Nikolay Mayorov, ningchi, nlathia,\nokbalefthanded, Okhlopkov, Olivier Grisel, Panos Louridas, Paul Strickland,\nPerrine Letellier, pestrickland, Peter Fischer, Pieter, Ping-Yao, Chang,\npracticalswift, Preston Parry, Qimu Zheng, Rachit Kansal, Raghav RV,\nRalf Gommers, Ramana.S, Rammig, Randy Olson, Rob Alexander, Robert Lutz,\nRobin Schucker, Rohan Jain, Ruifeng Zheng, Ryan Yu, R\u00e9my L\u00e9one, saihttam,\nSaiwing Yeung, Sam Shleifer, Samuel St-Jean, Sartaj Singh, Sasank Chilamkurthy,\nsaurabh.bansod, Scott Andrews, Scott Lowe, seales, Sebastian Raschka, Sebastian\nSaeger, Sebasti\u00e1n Vanrell, Sergei Lebedev, shagun Sodhani, shanmuga cv,\nShashank Shekhar, shawpan, shengxiduan, Shota, shuckle16, Skipper Seabold,\nsklearn-ci, SmedbergM, srvanrell, S\u00e9bastien Lerique, Taranjeet, themrmax,\nThierry, Thierry Guillemot, Thomas, Thomas Hallock, Thomas Moreau, Tim Head,\ntKammy, toastedcornflakes, Tom, TomDLT, Toshihiro Kamishima, tracer0tong, Trent\nHauck, trevorstephens, Tue Vo, Varun, Varun Jewalikar, Viacheslav, Vighnesh\nBirodkar, Vikram, Villu Ruusmann, Vinayak Mehta, walter, 
waterponey, Wenhua\nYang, Wenjian Huang, Will Welch, wyseguy7, xyguo, yanlend, Yaroslav Halchenko,\nyelite, Yen, YenChenLin, Yichuan Liu, Yoav Ram, Yoshiki, Zheng RuiFeng, zivori, \u00d3scar N\u00e1jera
class  discriminant analysis LinearDiscriminantAnalysis  calculated   with SVD and Eigen solver are now of the same length   issue  7632    by  user  JPFrancoia  JPFrancoia      Fixes issue in  ref  univariate feature selection  where score   functions were not accepting multi label targets   issue  7676    by  user  Mohammed Affan  affanv14      Fixed setting parameters when calling   fit   multiple times on    class  feature selection SelectFromModel    issue  7756  by  Andreas M ller      Fixes issue in   partial fit   method of    class  multiclass OneVsRestClassifier  when number of classes used in     partial fit   was less than the total number of classes in the   data   issue  7786  by  Srivatsan Ramesh      Fixes issue in  class  calibration CalibratedClassifierCV  where   the sum of probabilities of each class for a data was not 1  and     CalibratedClassifierCV   now handles the case where the training set   has less number of classes than the total data   issue  7799  by    Srivatsan Ramesh      Fix a bug where  class  sklearn feature selection SelectFdr  did not   exactly implement Benjamini Hochberg procedure  It formerly may have   selected fewer features than it should     issue  7490  by  user  Peng Meng  mpjlu        class  sklearn manifold LocallyLinearEmbedding  now correctly handles   integer inputs   issue  6282  by  Jake Vanderplas       The   min weight fraction leaf   parameter of tree based classifiers and   regressors now assumes uniform sample weights by default if the     sample weight   argument is not passed to the   fit   function    Previously  the parameter was silently ignored   issue  7301    by  user  Nelson Liu  nelson liu       Numerical issue with  class  linear model RidgeCV  on centered data when    n features   n samples    issue  6178  by  Bertrand Thirion      Tree splitting criterion classes  cloning pickling is now memory safe    issue  7680  by  user  Ibraim Ganiev  olologin       Fixed a bug where  class  
decomposition NMF  sets its   n iters      attribute in  transform      issue  7553  by  user  Ekaterina   Krivich  kiote        class  sklearn linear model LogisticRegressionCV  now correctly handles   string labels   issue  5874  by  Raghav RV       Fixed a bug where  func  sklearn model selection train test split  raised   an error when   stratify   is a list of string labels   issue  7593  by    Raghav RV       Fixed a bug where  class  sklearn model selection GridSearchCV  and    class  sklearn model selection RandomizedSearchCV  were not pickleable   because of a pickling bug in   np ma MaskedArray     issue  7594  by    Raghav RV       All cross validation utilities in  mod  sklearn model selection  now   permit one time cross validation splitters for the   cv   parameter  Also   non deterministic cross validation splitters  where multiple calls to     split   produce dissimilar splits  can be used as   cv   parameter    The  class  sklearn model selection GridSearchCV  will cross validate each   parameter setting on the split produced by the first   split   call   to the cross validation splitter    issue  7660  by  Raghav RV       Fix bug where  meth  preprocessing MultiLabelBinarizer fit transform    returned an invalid CSR matrix     issue  7750  by  user  CJ Carey  perimosocordiae       Fixed a bug where  func  metrics pairwise cosine distances  could return a   small negative distance   issue  7732  by  user  Artsion  asanakoy     API changes summary                      Trees and forests    The   min weight fraction leaf   parameter of tree based classifiers and   regressors now assumes uniform sample weights by default if the     sample weight   argument is not passed to the   fit   function    Previously  the parameter was silently ignored   issue  7301  by  user  Nelson   Liu  nelson liu       Tree splitting criterion classes  cloning pickling is now memory safe     issue  7680  by  user  Ibraim Ganiev  olologin      Linear  kernelized and related 
models    Length of   explained variance ratio   of    class  discriminant analysis LinearDiscriminantAnalysis    changed for both Eigen and SVD solvers  The attribute has now a length   of min n components  n classes   1    issue  7632    by  user  JPFrancoia  JPFrancoia      Numerical issue with  class  linear model RidgeCV  on centered data when     n features   n samples     issue  6178  by  Bertrand Thirion        changes 0 18   Version 0 18                 September 28  2016        model selection changes   Model Selection Enhancements and API Changes                                                   The model selection module      The new module  mod  sklearn model selection   which groups together the   functionalities of formerly  sklearn cross validation      sklearn grid search  and  sklearn learning curve   introduces new   possibilities such as nested cross validation and better manipulation of   parameter searches with Pandas     Many things will stay the same but there are some key differences  Read   below to know more about the changes       Data independent CV splitters enabling nested cross validation      The new cross validation splitters  defined in the    mod  sklearn model selection   are no longer initialized with any   data dependent parameters such as   y    Instead they expose a    split  method that takes in the data and yields a generator for the   different splits     This change makes it possible to use the cross validation splitters to   perform nested cross validation  facilitated by    class  model selection GridSearchCV  and    class  model selection RandomizedSearchCV  utilities       The enhanced cv results  attribute      The new   cv results    attribute  of  class  model selection GridSearchCV    and  class  model selection RandomizedSearchCV   introduced in lieu of the     grid scores    attribute is a dict of 1D arrays with elements in each   array corresponding to the parameter settings  i e  search candidates      The   
cv results    dict can be easily imported into   pandas   as a     DataFrame   for exploring the search results     The   cv results    arrays include scores for each cross validation split    with keys such as    split0 test score      as well as their mean       mean test score     and standard deviation     std test score         The ranks for the search candidates  based on their mean   cross validation score  is available at   cv results   rank test score         The parameter values for each parameter is stored separately as numpy   masked object arrays  The value  for that search candidate  is masked if   the corresponding parameter is not applicable  Additionally a list of all   the parameter dicts are stored at   cv results   params           Parameters n folds and n iter renamed to n splits      Some parameter names have changed    The   n folds   parameter in new  class  model selection KFold      class  model selection GroupKFold   see below for the name change     and  class  model selection StratifiedKFold  is now renamed to     n splits    The   n iter   parameter in    class  model selection ShuffleSplit   the new class    class  model selection GroupShuffleSplit  and    class  model selection StratifiedShuffleSplit  is now renamed to     n splits         Rename of splitter classes which accepts group labels along with data      The cross validation splitters   LabelKFold        LabelShuffleSplit      LeaveOneLabelOut   and   LeavePLabelOut   have   been renamed to  class  model selection GroupKFold      class  model selection GroupShuffleSplit      class  model selection LeaveOneGroupOut  and    class  model selection LeavePGroupsOut  respectively     Note the change from singular to plural form in    class  model selection LeavePGroupsOut        Fit parameter labels renamed to groups      The   labels   parameter in the  split  method of the newly renamed   splitters  class  model selection GroupKFold      class  model selection LeaveOneGroupOut   
**Fit parameter labels renamed to groups**

The ``labels`` parameter in the ``split`` method of the newly renamed
splitters :class:`model_selection.GroupKFold`,
:class:`model_selection.LeaveOneGroupOut`,
:class:`model_selection.LeavePGroupsOut`,
:class:`model_selection.GroupShuffleSplit` is renamed to ``groups``
following the new nomenclature of their class names.

**Parameter n_labels renamed to n_groups**

The parameter ``n_labels`` in the newly renamed
:class:`model_selection.LeavePGroupsOut` is changed to ``n_groups``.

**Training scores and Timing information**

``cv_results_`` also includes the training scores for each
cross-validation split (with keys such as ``'split0_train_score'``), as
well as their mean (``'mean_train_score'``) and standard deviation
(``'std_train_score'``). To avoid the cost of evaluating training score,
set ``return_train_score=False``.

Additionally the mean and standard deviation of the times taken to split,
train and score the model across all the cross-validation splits is
available at the key ``'mean_time'`` and ``'std_time'`` respectively.

Changelog
---------

New features
............

Classifiers and Regressors

- The Gaussian Process module has been reimplemented and now offers classification
  and regression estimators through :class:`gaussian_process.GaussianProcessClassifier`
  and :class:`gaussian_process.GaussianProcessRegressor`. Among other things, the new
  implementation supports kernel engineering, gradient-based hyperparameter optimization or
  sampling of functions from GP prior and GP posterior. Extensive documentation and
  examples are provided. By `Jan Hendrik Metzen`_.

- Added new supervised learning algorithm: :ref:`Multi-layer Perceptron <multilayer_perceptron>`
  :issue:`3204` by :user:`Issam H. Laradji <IssamLaradji>`

- Added :class:`linear_model.HuberRegressor`, a linear model robust to outliers.
  :issue:`5291` by `Manoj Kumar`_.

- Added the :class:`multioutput.MultiOutputRegressor` meta-estimator. It
  converts single output regressors to multi-output regressors by fitting
  one regressor per output. By :user:`Tim Head <betatim>`.
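The ``cv_results_`` layout described in the model selection notes above can be
explored interactively. A hedged sketch, assuming a recent scikit-learn and
pandas (note that released versions expose timing under ``'mean_fit_time'``
and ``'mean_score_time'`` rather than a single ``'mean_time'`` key):

```python
# Load cv_results_ into a DataFrame; one row per search candidate.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3,
                      return_train_score=True).fit(X, y)
df = pd.DataFrame(search.cv_results_)
# per-split scores, their mean/std, and the candidate ranking
print(df[["param_C", "mean_test_score", "std_test_score",
          "mean_train_score", "rank_test_score"]])
```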
Other estimators

- New :class:`mixture.GaussianMixture` and :class:`mixture.BayesianGaussianMixture`
  replace former mixture models, employing faster inference
  for sounder results. :issue:`7295` by :user:`Wei Xue <xuewei4d>` and
  :user:`Thierry Guillemot <tguillemot>`.

- Class :class:`decomposition.RandomizedPCA` is now factored into :class:`decomposition.PCA`
  and it is available calling with parameter ``svd_solver='randomized'``.
  The default number of ``n_iter`` for ``'randomized'`` has changed to 4. The old
  behavior of PCA is recovered by ``svd_solver='full'``. An additional solver
  calls ``arpack`` and performs truncated (non-randomized) SVD. By default,
  the best solver is selected depending on the size of the input and the
  number of components requested. :issue:`5299` by :user:`Giorgio Patrini <giorgiop>`.

- Added two functions for mutual information estimation:
  :func:`feature_selection.mutual_info_classif` and
  :func:`feature_selection.mutual_info_regression`. These functions can be
  used in :class:`feature_selection.SelectKBest` and
  :class:`feature_selection.SelectPercentile` as score functions.
  By :user:`Andrea Bravi <AndreaBravi>` and :user:`Nikolay Mayorov <nmayorov>`.

- Added the :class:`ensemble.IsolationForest` class for anomaly detection based on
  random forests. By `Nicolas Goix`_.

- Added ``algorithm="elkan"`` to :class:`cluster.KMeans` implementing
  Elkan's fast K-Means algorithm. By `Andreas Müller`_.

Model selection and evaluation

- Added :func:`metrics.fowlkes_mallows_score`, the Fowlkes Mallows
  Index which measures the similarity of two clusterings of a set of points.
  By :user:`Arnaud Fouchet <afouchet>` and :user:`Thierry Guillemot <tguillemot>`.

- Added ``metrics.calinski_harabaz_score``, which computes the Calinski
  and Harabaz score to evaluate the resulting clustering of a set of points.
  By :user:`Arnaud Fouchet <afouchet>` and :user:`Thierry Guillemot <tguillemot>`.
- Added new cross-validation splitter
  :class:`model_selection.TimeSeriesSplit` to handle time series data.
  :issue:`6586` by :user:`YenChen Lin <yenchenlin>`

- The cross-validation iterators are replaced by cross-validation splitters
  available from :mod:`sklearn.model_selection`, allowing for nested
  cross-validation. See :ref:`model_selection_changes` for more information.
  :issue:`4294` by `Raghav RV`_.

Enhancements
............

Trees and ensembles

- Added a new splitting criterion for :class:`tree.DecisionTreeRegressor`,
  the mean absolute error. This criterion can also be used in
  :class:`ensemble.ExtraTreesRegressor`,
  :class:`ensemble.RandomForestRegressor`, and the gradient boosting
  estimators. :issue:`6667` by :user:`Nelson Liu <nelson-liu>`.

- Added weighted impurity-based early stopping criterion for decision tree
  growth. :issue:`6954` by :user:`Nelson Liu <nelson-liu>`

- The random forest, extra tree and decision tree estimators now has a
  method ``decision_path`` which returns the decision path of samples in
  the tree. By `Arnaud Joly`_.

- A new example has been added unveiling the decision tree structure.
  By `Arnaud Joly`_.

- Random forest, extra trees, decision trees and gradient boosting estimator
  accept the parameter ``min_samples_split`` and ``min_samples_leaf``
  provided as a percentage of the training samples. By :user:`yelite <yelite>`
  and `Arnaud Joly`_.

- Gradient boosting estimators accept the parameter ``criterion`` to specify
  to splitting criterion used in built decision trees.
  :issue:`6667` by :user:`Nelson Liu <nelson-liu>`.

- The memory footprint is reduced (sometimes greatly) for
  ``ensemble.bagging.BaseBagging`` and classes that inherit from it,
  i.e, :class:`ensemble.BaggingClassifier`,
  :class:`ensemble.BaggingRegressor`, and :class:`ensemble.IsolationForest`,
  by dynamically generating attribute ``estimators_samples_`` only when it is
  needed. By :user:`David Staub <staubda>`.

- Added ``n_jobs`` and ``sample_weight`` parameters for
  :class:`ensemble.VotingClassifier` to fit underlying estimators in parallel.
  :issue:`5805` by :user:`Ibraim Ganiev <olologin>`.
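The new ``decision_path`` method mentioned above can be sketched as follows
(a hedged example assuming a recent scikit-learn; the depth and sample
selection are arbitrary):

```python
# decision_path returns a sparse indicator matrix: entry (i, j) is 1 if
# sample i visits node j on its way from the root to a leaf.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
path = tree.decision_path(X[:3])
print(path.shape)  # (3 samples, tree.tree_.node_count nodes)
```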
Linear, kernelized and related models

- In :class:`linear_model.LogisticRegression`, the SAG solver is now
  available in the multinomial case. :issue:`5251` by `Tom Dupre la Tour`_.

- :class:`linear_model.RANSACRegressor`, :class:`svm.LinearSVC` and
  :class:`svm.LinearSVR` now support ``sample_weight``.
  By :user:`Imaculate <Imaculate>`.

- Add parameter ``loss`` to :class:`linear_model.RANSACRegressor` to measure the
  error on the samples for every trial. By `Manoj Kumar`_.

- Prediction of out-of-sample events with Isotonic Regression
  (:class:`isotonic.IsotonicRegression`) is now much faster (over 1000x in
  tests with synthetic data). By :user:`Jonathan Arfa <jarfa>`.

- Isotonic regression (:class:`isotonic.IsotonicRegression`) now uses a better
  algorithm to avoid `O(n^2)` behavior in pathological cases, and is also
  generally faster (:issue:`6691`). By Antony Lee.

- :class:`naive_bayes.GaussianNB` now accepts data-independent class priors
  through the parameter ``priors``. By :user:`Guillaume Lemaitre <glemaitre>`.

- :class:`linear_model.ElasticNet` and :class:`linear_model.Lasso`
  now works with ``np.float32`` input data without converting it
  into ``np.float64``. This allows to reduce the memory
  consumption. :issue:`6913` by :user:`YenChen Lin <yenchenlin>`.

- :class:`semi_supervised.LabelPropagation` and :class:`semi_supervised.LabelSpreading`
  now accept arbitrary kernel functions in addition to strings ``knn`` and ``rbf``.
  :issue:`5762` by :user:`Utkarsh Upadhyay <musically-ut>`.

Decomposition, manifold learning and clustering

- Added ``inverse_transform`` function to :class:`decomposition.NMF` to compute
  data matrix of original shape. By :user:`Anish Shah <AnishShah>`.

- :class:`cluster.KMeans` and :class:`cluster.MiniBatchKMeans` now works
  with ``np.float32`` and ``np.float64`` input data without converting it.
  This allows to reduce the memory consumption by using ``np.float32``.
  :issue:`6846` by :user:`Sebastian Säger <ssaeger>` and
  :user:`YenChen Lin <yenchenlin>`.
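The NMF ``inverse_transform`` addition above can be illustrated with a small,
hedged sketch (random non-negative data, arbitrary hyperparameters; assumes a
recent scikit-learn):

```python
# Reconstruct a data matrix of the original shape from the NMF factorization.
import numpy as np
from sklearn.decomposition import NMF

X = np.abs(np.random.RandomState(0).rand(6, 4))
nmf = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = nmf.fit_transform(X)          # (6, 2) activations
X_back = nmf.inverse_transform(W)  # approximately W @ components_, shape (6, 4)
print(X_back.shape)
```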
Preprocessing and feature selection

- :class:`preprocessing.RobustScaler` now accepts ``quantile_range`` parameter.
  :issue:`5929` by :user:`Konstantin Podshumok <podshumok>`.

- :class:`feature_extraction.FeatureHasher` now accepts string values.
  :issue:`6173` by :user:`Ryad Zenine <ryadzenine>` and
  :user:`Devashish Deshpande <dsquareindia>`.

- Keyword arguments can now be supplied to ``func`` in
  :class:`preprocessing.FunctionTransformer` by means of the ``kw_args``
  parameter. By `Brian McFee`_.

- :class:`feature_selection.SelectKBest` and :class:`feature_selection.SelectPercentile`
  now accept score functions that take X, y as input and return only the scores.
  By :user:`Nikolay Mayorov <nmayorov>`.

Model evaluation and meta-estimators

- :class:`multiclass.OneVsOneClassifier` and :class:`multiclass.OneVsRestClassifier`
  now support ``partial_fit``. By :user:`Asish Panda <kaichogami>` and
  :user:`Philipp Dowling <phdowling>`.

- Added support for substituting or disabling :class:`pipeline.Pipeline`
  and :class:`pipeline.FeatureUnion` components using the ``set_params``
  interface that powers ``sklearn.grid_search``.
  See :ref:`sphx_glr_auto_examples_compose_plot_compare_reduction.py`.
  By `Joel Nothman`_ and :user:`Robert McGibbon <rmcgibbo>`.

- The new ``cv_results_`` attribute of :class:`model_selection.GridSearchCV`
  and :class:`model_selection.RandomizedSearchCV` can be easily imported
  into pandas as a ``DataFrame``. Ref :ref:`model_selection_changes` for
  more information. :issue:`6697` by `Raghav RV`_.

- Generalization of :func:`model_selection.cross_val_predict`.
  One can pass method names such as `predict_proba` to be used in the cross
  validation framework instead of the default `predict`.
  By :user:`Ori Ziv <zivori>` and :user:`Sears Merritt <merritts>`.

- The training scores and time taken for training followed by scoring for
  each search candidate are now available at the ``cv_results_`` dict.
  See :ref:`model_selection_changes` for more information.
  :issue:`7325` by :user:`Eugene Chen <eyc88>` and `Raghav RV`_.
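The generalized ``cross_val_predict`` described above can be sketched as
follows (hedged; assumes a recent scikit-learn, and ``max_iter`` is only set
to silence convergence warnings):

```python
# Collect out-of-fold probability predictions instead of plain labels.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=3, method="predict_proba")
print(proba.shape)  # (n_samples, n_classes); each row sums to 1
```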
Metrics

- Added ``labels`` flag to :class:`metrics.log_loss` to explicitly provide
  the labels when the number of classes in ``y_true`` and ``y_pred`` differ.
  :issue:`7239` by :user:`Hong Guangguo <hongguangguo>` with help from
  :user:`Mads Jensen <indianajensen>` and :user:`Nelson Liu <nelson-liu>`.

- Support sparse contingency matrices in cluster evaluation
  (``metrics.cluster.supervised``) to scale to a large number of
  clusters. :issue:`7419` by :user:`Gregory Stupp <stuppie>` and `Joel Nothman`_.

- Add ``sample_weight`` parameter to :func:`metrics.matthews_corrcoef`.
  By :user:`Jatin Shah <jatinshah>` and `Raghav RV`_.

- Speed up :func:`metrics.silhouette_score` by using vectorized operations.
  By `Manoj Kumar`_.

- Add ``sample_weight`` parameter to :func:`metrics.confusion_matrix`.
  By :user:`Bernardo Stein <DanielSidhion>`.

Miscellaneous

- Added ``n_jobs`` parameter to :class:`feature_selection.RFECV` to compute
  the score on the test folds in parallel. By `Manoj Kumar`_.

- Codebase does not contain C/C++ cython generated files: they are
  generated during build. Distribution packages will still contain generated
  C/C++ files. By :user:`Arthur Mensch <arthurmensch>`.

- Reduce the memory usage for 32-bit float input arrays of
  ``utils.sparse_func.mean_variance_axis`` and
  ``utils.sparse_func.incr_mean_variance_axis`` by supporting cython
  fused types. By :user:`YenChen Lin <yenchenlin>`.

- The ``ignore_warnings`` now accept a category argument to ignore only
  the warnings of a specified type. By :user:`Thierry Guillemot <tguillemot>`.
- Added parameter ``return_X_y`` and return type ``(data, target)`` tuple
  option to :func:`datasets.load_iris` dataset (:issue:`7049`),
  :func:`datasets.load_breast_cancer` dataset (:issue:`7152`),
  :func:`datasets.load_digits` dataset,
  :func:`datasets.load_diabetes` dataset,
  :func:`datasets.load_linnerud` dataset,
  ``datasets.load_boston`` dataset (:issue:`7154`) by
  :user:`Manvendra Singh <manu-chroma>`.

- Simplification of the ``clone`` function; deprecate support for estimators
  that modify parameters in ``__init__``. :issue:`5540` by `Andreas Müller`_.

- When unpickling a scikit-learn estimator in a different version than the one
  the estimator was trained with, a ``UserWarning`` is raised, see
  :ref:`the documentation on model persistence <persistence_limitations>`
  for more details. (:issue:`7248`). By `Andreas Müller`_.

Bug fixes
.........

Trees and ensembles

- Random forest, extra trees, decision trees and gradient boosting
  won't accept anymore ``min_samples_split=1`` as at least 2 samples
  are required to split a decision tree node. By `Arnaud Joly`_.

- :class:`ensemble.VotingClassifier` now raises ``NotFittedError`` if ``predict``,
  ``transform`` or ``predict_proba`` are called on the non-fitted estimator.
  by `Sebastian Raschka`_.

- Fix bug where :class:`ensemble.AdaBoostClassifier` and
  :class:`ensemble.AdaBoostRegressor` would perform poorly if the
  ``random_state`` was fixed (:issue:`7411`). By `Joel Nothman`_.

- Fix bug in ensembles with randomization where the ensemble would not
  set ``random_state`` on base estimators in a pipeline or similar nesting.
  (:issue:`7411`). Note, results for :class:`ensemble.BaggingClassifier`,
  :class:`ensemble.BaggingRegressor`, :class:`ensemble.AdaBoostClassifier`
  and :class:`ensemble.AdaBoostRegressor` will now differ from previous
  versions. By `Joel Nothman`_.

Linear, kernelized and related models

- Fixed incorrect gradient computation for ``loss='squared_epsilon_insensitive'`` in
  :class:`linear_model.SGDClassifier` and :class:`linear_model.SGDRegressor`
  (:issue:`6764`). By :user:`Wenhua Yang <geekoala>`.
- Fix bug in :class:`linear_model.LogisticRegressionCV` where
  ``solver='liblinear'`` did not accept ``class_weights='balanced'``.
  (:issue:`6817`). By `Tom Dupre la Tour`_.

- Fix bug in :class:`neighbors.RadiusNeighborsClassifier` where an error
  occurred when there were outliers being labelled and a weight function
  specified (:issue:`6902`). By
  `LeonieBorne <https://github.com/LeonieBorne>`_.

- Fix :class:`linear_model.ElasticNet` sparse decision function to match
  output with dense in the multioutput case.

Decomposition, manifold learning and clustering

- ``decomposition.RandomizedPCA`` default number of `iterated_power` is 4
  instead of 3. :issue:`5141` by :user:`Giorgio Patrini <giorgiop>`.

- :func:`utils.extmath.randomized_svd` performs 4 power iterations by default,
  instead of 0.
  In practice this is enough for obtaining a good approximation of the
  true eigenvalues/vectors in the presence of noise. When `n_components` is
  small (``< .1 * min(X.shape)``) `n_iter` is set to 7, unless the user specifies
  a higher number. This improves precision with few components.
  :issue:`5299` by :user:`Giorgio Patrini <giorgiop>`.

- Whiten/non-whiten inconsistency between components of :class:`decomposition.PCA`
  and ``decomposition.RandomizedPCA`` (now factored into PCA, see the
  New features) is fixed. `components_` are stored with no whitening.
  :issue:`5299` by :user:`Giorgio Patrini <giorgiop>`.

- Fixed bug in :func:`manifold.spectral_embedding` where diagonal of unnormalized
  Laplacian matrix was incorrectly set to 1. :issue:`4995` by
  :user:`Peter Fischer <yanlend>`.

- Fixed incorrect initialization of ``utils.arpack.eigsh`` on all
  occurrences. Affects ``cluster.bicluster.SpectralBiclustering``,
  :class:`decomposition.KernelPCA`, :class:`manifold.LocallyLinearEmbedding`,
  and :class:`manifold.SpectralEmbedding` (:issue:`5012`). By
  :user:`Peter Fischer <yanlend>`.

- Attribute ``explained_variance_ratio_`` calculated with the SVD solver
  of :class:`discriminant_analysis.LinearDiscriminantAnalysis` now returns
  correct results. By :user:`JPFrancoia <JPFrancoia>`.
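The randomized solver discussed above (the successor to ``RandomizedPCA``)
can be sketched as follows; a hedged example with random data and arbitrary
``iterated_power``, assuming a recent scikit-learn:

```python
# PCA with the 'randomized' solver; 'full' would recover the exact SVD.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.RandomState(0).rand(100, 20)
pca = PCA(n_components=5, svd_solver="randomized", iterated_power=7,
          random_state=0).fit(X)
print(pca.explained_variance_ratio_.sum())  # fraction of variance kept
```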
Preprocessing and feature selection

- ``preprocessing.data._transform_selected`` now always passes a copy
  of ``X`` to transform function when ``copy=True`` (:issue:`7194`). By
  `Caio Oliveira <https://github.com/caioaao>`_.

Model evaluation and meta-estimators

- :class:`model_selection.StratifiedKFold` now raises error if all n_labels
  for individual classes is less than n_folds.
  :issue:`6182` by :user:`Devashish Deshpande <dsquareindia>`.

- Fixed bug in :class:`model_selection.StratifiedShuffleSplit`
  where train and test sample could overlap in some edge cases,
  see :issue:`6121` for more details. By `Loic Esteve`_.

- Fix in :class:`sklearn.model_selection.StratifiedShuffleSplit` to
  return splits of size ``train_size`` and ``test_size`` in all cases
  (:issue:`6472`). By `Andreas Müller`_.

- Cross-validation of :class:`multiclass.OneVsOneClassifier` and
  :class:`multiclass.OneVsRestClassifier` now works with precomputed kernels.
  :issue:`7350` by :user:`Russell Smith <rsmith54>`.

- Fix incomplete ``predict_proba`` method delegation from
  :class:`model_selection.GridSearchCV` to
  :class:`linear_model.SGDClassifier` (:issue:`7159`)
  by `Yichuan Liu <https://github.com/yl565>`_.

Metrics

- Fix bug in :func:`metrics.silhouette_score` in which clusters of
  size 1 were incorrectly scored. They should get a score of 0.
  By `Joel Nothman`_.

- Fix bug in :func:`metrics.silhouette_samples` so that it now works with
  arbitrary labels, not just those ranging from 0 to n_clusters - 1.

- Fix bug where expected and adjusted mutual information were incorrect if
  cluster contingency cells exceeded ``2**16``. By `Joel Nothman`_.

- :func:`metrics.pairwise_distances` now converts arrays to
  boolean arrays when required in ``scipy.spatial.distance``.
  :issue:`5460` by `Tom Dupre la Tour`_.

- Fix sparse input support in :func:`metrics.silhouette_score` as well as
  example ``examples/text/document_clustering.py``.
  By :user:`YenChen Lin <yenchenlin>`.
- :func:`metrics.roc_curve` and :func:`metrics.precision_recall_curve` no
  longer round ``y_score`` values when creating ROC curves; this was causing
  problems for users with very small differences in scores (:issue:`7353`).

Miscellaneous

- ``model_selection.tests._search._check_param_grid`` now works correctly
  with all types that extends/implements `Sequence` (except string),
  including range (Python 3.x) and xrange (Python 2.x).
  :issue:`7323` by Viacheslav Kovalevskyi.

- :func:`utils.extmath.randomized_range_finder` is more numerically stable
  when many power iterations are requested, since it applies LU normalization
  by default.
  If ``n_iter<2`` numerical issues are unlikely, thus no normalization is
  applied. Other normalization options are available: ``'none'``, ``'LU'``
  and ``'QR'``. :issue:`5141` by :user:`Giorgio Patrini <giorgiop>`.

- Fix a bug where some formats of ``scipy.sparse`` matrix, and estimators
  with them as parameters, could not be passed to :func:`base.clone`.
  By `Loic Esteve`_.

- :func:`datasets.load_svmlight_file` now is able to read long int QID values.
  :issue:`7101` by :user:`Ibraim Ganiev <olologin>`.
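Among the model selection fixes above, ``StratifiedShuffleSplit`` now returns
splits of exactly the requested sizes. A hedged sketch with toy data,
assuming a recent scikit-learn:

```python
# Splits honor train_size/test_size exactly while preserving class ratios.
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

y = np.array([0] * 8 + [1] * 4)
X = np.zeros((12, 1))
sss = StratifiedShuffleSplit(n_splits=3, train_size=8, test_size=4,
                             random_state=0)
for train_idx, test_idx in sss.split(X, y):
    print(len(train_idx), len(test_idx))  # 8 4 on every split
```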
API changes summary
-------------------

Linear, kernelized and related models

- ``residual_metric`` has been deprecated in :class:`linear_model.RANSACRegressor`.
  Use ``loss`` instead. By `Manoj Kumar`_.

- Access to public attributes ``.X_`` and ``.y_`` has been deprecated in
  :class:`isotonic.IsotonicRegression`. By :user:`Jonathan Arfa <jarfa>`.

Decomposition, manifold learning and clustering

- The old ``mixture.DPGMM`` is deprecated in favor of the new
  :class:`mixture.BayesianGaussianMixture` (with the parameter
  ``weight_concentration_prior_type='dirichlet_process'``).
  The new class solves the computational
  problems of the old class and computes the Gaussian mixture with a
  Dirichlet process prior faster than before.
  :issue:`7295` by :user:`Wei Xue <xuewei4d>` and
  :user:`Thierry Guillemot <tguillemot>`.

- The old ``mixture.VBGMM`` is deprecated in favor of the new
  :class:`mixture.BayesianGaussianMixture` (with the parameter
  ``weight_concentration_prior_type='dirichlet_distribution'``).
  The new class solves the computational
  problems of the old class and computes the Variational Bayesian Gaussian
  mixture faster than before.
  :issue:`6651` by :user:`Wei Xue <xuewei4d>` and
  :user:`Thierry Guillemot <tguillemot>`.

- The old ``mixture.GMM`` is deprecated in favor of the new
  :class:`mixture.GaussianMixture`. The new class computes the Gaussian mixture
  faster than before and some of computational problems have been solved.
  :issue:`6666` by :user:`Wei Xue <xuewei4d>` and
  :user:`Thierry Guillemot <tguillemot>`.

Model evaluation and meta-estimators

- The ``sklearn.cross_validation``, ``sklearn.grid_search`` and
  ``sklearn.learning_curve`` have been deprecated and the classes and
  functions have been reorganized into the :mod:`sklearn.model_selection`
  module. Ref :ref:`model_selection_changes` for more information.
  :issue:`4294` by `Raghav RV`_.

- The ``grid_scores_`` attribute of :class:`model_selection.GridSearchCV`
  and :class:`model_selection.RandomizedSearchCV` is deprecated in favor of
  the attribute ``cv_results_``.
  Ref :ref:`model_selection_changes` for more information.
  :issue:`6697` by `Raghav RV`_.

- The parameters ``n_iter`` or ``n_folds`` in old CV splitters are replaced
  by the new parameter ``n_splits`` since it can provide a consistent
  and unambiguous interface to represent the number of train-test splits.
  :issue:`7187` by :user:`YenChen Lin <yenchenlin>`.

- ``classes`` parameter was renamed to ``labels`` in
  :func:`metrics.hamming_loss`. :issue:`7260` by
  :user:`Sebastián Vanrell <srvanrell>`.
- The splitter classes ``LabelKFold``, ``LabelShuffleSplit``,
  ``LeaveOneLabelOut`` and ``LeavePLabelsOut`` are renamed to
  :class:`model_selection.GroupKFold`,
  :class:`model_selection.GroupShuffleSplit`,
  :class:`model_selection.LeaveOneGroupOut`
  and :class:`model_selection.LeavePGroupsOut` respectively.
  Also the parameter ``labels`` in the ``split`` method of the newly
  renamed splitters :class:`model_selection.LeaveOneGroupOut` and
  :class:`model_selection.LeavePGroupsOut` is renamed to
  ``groups``. Additionally in :class:`model_selection.LeavePGroupsOut`,
  the parameter ``n_labels`` is renamed to ``n_groups``.
  :issue:`6660` by `Raghav RV`_.

- Error and loss names for ``scoring`` parameters are now prefixed by
  ``'neg_'``, such as ``neg_mean_squared_error``. The unprefixed versions
  are deprecated and will be removed in version 0.20.
  :issue:`7261` by :user:`Tim Head <betatim>`.
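The ``'neg_'`` scorer naming above can be sketched as follows (hedged;
assumes a recent scikit-learn, where the unprefixed names have already been
removed):

```python
# Error metrics are negated so that greater is always better for scorers.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)
scores = cross_val_score(LinearRegression(), X, y, cv=3,
                         scoring="neg_mean_squared_error")
print(scores)  # negated MSE: all values are <= 0
```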
Code Contributors
-----------------

Aditya Joshi, Alejandro, Alexander Fabisch, Alexander Loginov, Alexander
Minyushkin, Alexander Rudy, Alexandre Abadie, Alexandre Abraham, Alexandre
Gramfort, Alexandre Saint, alexfields, Alvaro Ulloa, alyssaq, Amlan Kar,
Andreas Mueller, andrew giessel, Andrew Jackson, Andrew McCulloh, Andrew
Murray, Anish Shah, Arafat, Archit Sharma, Ariel Rokem, Arnaud Joly, Arnaud
Rachez, Arthur Mensch, Ash Hoover, asnt, b0noI, Behzad Tabibian, Bernardo,
Bernhard Kratzwald, Bhargav Mangipudi, blakeflei, Boyuan Deng, Brandon Carter,
Brett Naul, Brian McFee, Caio Oliveira, Camilo Lamus, Carol Willing, Cass,
CeShine Lee, Charles Truong, Chyi-Kwei Yau, CJ Carey, codevig, Colin Ni, Dan
Shiebler, Daniel, Daniel Hnyk, David Ellis, David Nicholson, David Staub,
David Thaler, David Warshaw, Davide Lasagna, Deborah, definitelyuncertain,
Didi Bar-Zev, djipey, dsquareindia, edwinENSAE, Elias Kuthe, Elvis DOHMATOB,
Ethan White, Fabian Pedregosa, Fabio Ticconi, fisache, Florian Wilhelm,
Francis, Francis O'Donovan, Gael Varoquaux, Ganiev Ibraim, ghg, Gilles Louppe,
Giorgio Patrini, Giovanni Cherubin, Giovanni Lanzani, Glenn Qian, Gordon Mohr,
govin-vatsan, Graham Clenaghan, Greg Reda, Greg Stupp, Guillaume Lemaitre,
Gustav Mörtberg, halwai, Harizo Rajaona, Harry Mavroforakis, hashcode55,
hdmetor, Henry Lin, Hobson Lane, Hugo Bowne-Anderson, Igor Andriushchenko,
Imaculate, Inki Hwang, Isaac Sijaranamual, Ishank Gulati, Issam Laradji, Iver
Jordal, jackmartin, Jacob Schreiber, Jake Vanderplas, James Fiedler, James
Routley, Jan Zikes, Janna Brettingen, jarfa, Jason Laska, jblackburne, jeff
levesque, Jeffrey Blackburne, Jeffrey04, Jeremy Hintz, jeremynixon, Jeroen,
Jessica Yung, Jill-Jênn Vie, Jimmy Jia, Jiyuan Qian, Joel Nothman, johannah,
John, John Boersma, John Kirkham, John Moeller, jonathan striebel, joncrall,
Jordi, Joseph Munoz, Joshua Cook, JPFrancoia, jrfiedler, JulianKahnert,
juliathebrave, kaichogami, KamalakerDadi, Kenneth Lyons, Kevin Wang, kingjr,
kjell, Konstantin Podshumok, Kornel Kielczewski, Krishna Kalyan,
krishnakalyan3, Kvle Putnam, Kyle Jackson, Lars Buitinck, ldavid, LeiG,
LeightonZhang, Leland McInnes, Liang-Chi Hsieh, Lilian Besson, lizsz, Loic
Esteve, Louis Tiao, Léonie Borne, Mads Jensen, Maniteja Nandana, Manoj Kumar,
Manvendra Singh, Marco, Mario Krell, Mark Bao, Mark Szepieniec, Martin Madsen,
MartinBpr, MaryanMorel, Massil, Matheus, Mathieu Blondel, Mathieu Dubois,
Matteo, Matthias Ekman, Max Moroz, Michael Scherer, michiaki ariga, Mikhail
Korobov, Moussa Taifi, mrandrewandrade, Mridul Seth, nadya-p, Naoya Kanai,
Nate George, Nelle Varoquaux, Nelson Liu, Nick James, NickleDave, Nico,
Nicolas Goix, Nikolay Mayorov, ningchi, nlathia, okbalefthanded, Okhlopkov,
Olivier Grisel, Panos Louridas, Paul Strickland, Perrine Letellier,
pestrickland, Peter Fischer, Pieter, Ping-Yao Chang, practicalswift, Preston
Parry, Qimu Zheng, Rachit Kansal, Raghav RV, Ralf Gommers, Ramana S, Rammig,
Randy Olson, Rob Alexander, Robert Lutz, Robin Schucker, Rohan Jain, Ruifeng
Zheng, Ryan Yu, Rémy Léone, saihttam, Saiwing Yeung, Sam Shleifer, Samuel
St-Jean, Sartaj Singh, Sasank Chilamkurthy, saurabh bansod, Scott Andrews,
Scott Lowe, seales, Sebastian Raschka, Sebastian Saeger, Sebastián Vanrell,
Sergei Lebedev, shagun Sodhani, shanmuga cv, Shashank Shekhar, shawpan,
shengxiduan, Shota, shuckle16, Skipper Seabold, sklearn-ci, SmedbergM,
srvanrell, Sébastien Lerique, Taranjeet, themrmax, Thierry, Thierry Guillemot,
Thomas, Thomas Hallock, Thomas Moreau, Tim Head, tKammy, toastedcornflakes,
Tom, TomDLT, Toshihiro Kamishima, tracer0tong, Trent Hauck, trevorstephens,
Tue Vo, Varun, Varun Jewalikar, Viacheslav, Vighnesh Birodkar, Vikram, Villu
Ruusmann, Vinayak Mehta, walter, waterponey, Wenhua Yang, Wenjian Huang, Will
Welch, wyseguy7, xyguo, yanlend, Yaroslav Halchenko, yelite, Yen, YenChenLin,
Yichuan Liu, Yoav Ram, Yoshiki, Zheng RuiFeng, zivori, Óscar Nájera"}
{"questions":"scikit-learn sklearn contributors rst Version 0 24 releasenotes024","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n.. _release_notes_0_24:\n\n============\nVersion 0.24\n============\n\nFor a short description of the main highlights of the release, please refer to\n:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_0_24_0.py`.\n\n.. include:: changelog_legend.inc\n\n.. _changes_0_24_2:\n\nVersion 0.24.2\n==============\n\n**April 2021**\n\nChangelog\n---------\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| `compose.ColumnTransformer.get_feature_names` does not call\n  `get_feature_names` on transformers with an empty column selection.\n  :pr:`19579` by `Thomas Fan`_.\n\n:mod:`sklearn.cross_decomposition`\n..................................\n\n- |Fix| Fixed a regression in :class:`cross_decomposition.CCA`. :pr:`19646`\n  by `Thomas Fan`_.\n\n- |Fix| :class:`cross_decomposition.PLSRegression` raises warning for\n  constant y residuals instead of a `StopIteration` error. :pr:`19922`\n  by `Thomas Fan`_.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Fixed a bug in :class:`decomposition.KernelPCA`'s\n  ``inverse_transform``.  
:pr:`19732` by :user:`Kei Ishikawa <kstoneriv3>`.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Fix| Fixed a bug in :class:`ensemble.HistGradientBoostingRegressor` `fit`\n  with `sample_weight` parameter and `least_absolute_deviation` loss function.\n  :pr:`19407` by :user:`Vadim Ushtanit <vadim-ushtanit>`.\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Fix| Fixed a bug to support multiple strings for a category when\n  `sparse=False` in :class:`feature_extraction.DictVectorizer`.\n  :pr:`19982` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.gaussian_process`\n...............................\n\n- |Fix| Avoid explicitly forming inverse covariance matrix in\n  :class:`gaussian_process.GaussianProcessRegressor` when set to output\n  standard deviation. With certain covariance matrices this inverse is unstable\n  to compute explicitly. Calling Cholesky solver mitigates this issue in\n  computation.\n  :pr:`19939` by :user:`Ian Halvic <iwhalvic>`.\n\n- |Fix| Avoid division by zero when scaling constant target in\n  :class:`gaussian_process.GaussianProcessRegressor`. It was due to a std. dev.\n  equal to 0. Now, such case is detected and the std. dev. is affected to 1\n  avoiding a division by zero and thus the presence of NaN values in the\n  normalized target.\n  :pr:`19703` by :user:`sobkevich`, :user:`Boris Villaz\u00f3n-Terrazas <boricles>`\n  and :user:`Alexandr Fonari <afonari>`.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Fix|: Fixed a bug in :class:`linear_model.LogisticRegression`: the\n  sample_weight object is not modified anymore. 
:pr:`19182` by\n  :user:`Yosuke KOBAYASHI <m7142yosuke>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| :func:`metrics.top_k_accuracy_score` now supports multiclass\n  problems where only two classes appear in `y_true` and all the classes\n  are specified in `labels`.\n  :pr:`19721` by :user:`Joris Clement <flyingdutchman23>`.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Fix| :class:`model_selection.RandomizedSearchCV` and\n  :class:`model_selection.GridSearchCV` now correctly shows the score for\n  single metrics and verbose > 2. :pr:`19659` by `Thomas Fan`_.\n\n- |Fix| Some values in the `cv_results_` attribute of\n  :class:`model_selection.HalvingRandomSearchCV` and\n  :class:`model_selection.HalvingGridSearchCV` were not properly converted to\n  numpy arrays. :pr:`19211` by `Nicolas Hug`_.\n\n- |Fix| The `fit` method of the successive halving parameter search\n  (:class:`model_selection.HalvingGridSearchCV`, and\n  :class:`model_selection.HalvingRandomSearchCV`) now correctly handles the\n  `groups` parameter. :pr:`19847` by :user:`Xiaoyu Chai <xiaoyuchai>`.\n\n:mod:`sklearn.multioutput`\n..........................\n\n- |Fix| :class:`multioutput.MultiOutputRegressor` now works with estimators\n  that dynamically define `predict` during fitting, such as\n  :class:`ensemble.StackingRegressor`. :pr:`19308` by `Thomas Fan`_.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Fix| Validate the constructor parameter `handle_unknown` in\n  :class:`preprocessing.OrdinalEncoder` to only allow for `'error'` and\n  `'use_encoded_value'` strategies.\n  :pr:`19234` by `Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Fix encoder categories having dtype='S'\n  :class:`preprocessing.OneHotEncoder` and\n  :class:`preprocessing.OrdinalEncoder`.\n  :pr:`19727` by :user:`Andrew Delong <andrewdelong>`.\n\n- |Fix| :meth:`preprocessing.OrdinalEncoder.transform` correctly handles\n  unknown values for string dtypes. 
:pr:`19888` by `Thomas Fan`_.\n\n- |Fix| :meth:`preprocessing.OneHotEncoder.fit` no longer alters the `drop`\n  parameter. :pr:`19924` by `Thomas Fan`_.\n\n:mod:`sklearn.semi_supervised`\n..............................\n\n- |Fix| Avoid NaN during label propagation in\n  :class:`~sklearn.semi_supervised.LabelPropagation`.\n  :pr:`19271` by :user:`Zhaowei Wang <ThuWangzw>`.\n\n:mod:`sklearn.tree`\n...................\n\n- |Fix| Fix a bug in `fit` of `tree.BaseDecisionTree` that caused\n  segmentation faults under certain conditions. `fit` now deep copies the\n  `Criterion` object to prevent shared concurrent accesses.\n  :pr:`19580` by :user:`Samuel Brice <samdbrice>` and\n  :user:`Alex Adamson <aadamson>` and\n  :user:`Wil Yegelwel <wyegelwel>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |Fix| Better contains the CSS provided by :func:`utils.estimator_html_repr`\n  by giving CSS ids to the html representation. :pr:`19417` by `Thomas Fan`_.\n\n.. _changes_0_24_1:\n\nVersion 0.24.1\n==============\n\n**January 2021**\n\nPackaging\n---------\n\nThe 0.24.0 scikit-learn wheels were not working with MacOS <1.15 due to\n`libomp`. The version of `libomp` used to build the wheels was too recent for\nolder macOS versions. This issue has been fixed for 0.24.1 scikit-learn wheels.\nScikit-learn wheels published on PyPI.org now officially support macOS 10.13\nand later.\n\nChangelog\n---------\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fix numerical stability bug that could happen in\n  :func:`metrics.adjusted_mutual_info_score` and\n  :func:`metrics.mutual_info_score` with NumPy 1.20+.\n  :pr:`19179` by `Thomas Fan`_.\n\n:mod:`sklearn.semi_supervised`\n..............................\n\n- |Fix| :class:`semi_supervised.SelfTrainingClassifier` is now accepting\n  meta-estimator (e.g. :class:`ensemble.StackingClassifier`). 
The validation\n  of this estimator is done on the fitted estimator, once the existence of\n  the method `predict_proba` is known.\n  :pr:`19126` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n.. _changes_0_24:\n\nVersion 0.24.0\n==============\n\n**December 2020**\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- |Fix| :class:`decomposition.KernelPCA` behaviour is now more consistent\n  between 32-bits and 64-bits data when the kernel has small positive\n  eigenvalues.\n\n- |Fix| :class:`decomposition.TruncatedSVD` becomes deterministic by exposing\n  a `random_state` parameter.\n\n- |Fix| :class:`linear_model.Perceptron` when `penalty='elasticnet'`.\n\n- |Fix| Change in the random sampling procedures for the center initialization\n  of :class:`cluster.KMeans`.\n\nDetails are listed in the changelog below.\n\n(While we are trying to better inform users by providing this information, we\ncannot assure that this list is complete.)\n\nChangelog\n---------\n\n:mod:`sklearn.base`\n...................\n\n- |Fix| :meth:`base.BaseEstimator.get_params` will now raise an\n  `AttributeError` if a parameter cannot be retrieved as\n  an instance attribute. Previously it would return `None`.\n  :pr:`17448` by :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.\n\n:mod:`sklearn.calibration`\n..........................\n\n- |Efficiency| :meth:`calibration.CalibratedClassifierCV.fit` now supports\n  parallelization via `joblib.Parallel` using argument `n_jobs`.\n  :pr:`17107` by :user:`Julien Jerphanion <jjerphan>`.\n\n- |Enhancement| Allow :class:`calibration.CalibratedClassifierCV` use with\n  prefit :class:`pipeline.Pipeline` where the data `X` is not array-like, a\n  sparse matrix, or a dataframe at the start. 
:pr:`17546` by\n  :user:`Lucy Liu <lucyleeow>`.\n\n- |Enhancement| Add `ensemble` parameter to\n  :class:`calibration.CalibratedClassifierCV`, which enables implementation\n  of calibration via an ensemble of calibrators (current method) or\n  just one calibrator using all the data (similar to the built-in feature of\n  :mod:`sklearn.svm` estimators with the `probability=True` parameter).\n  :pr:`17856` by :user:`Lucy Liu <lucyleeow>` and\n  :user:`Andrea Esuli <aesuli>`.\n\n:mod:`sklearn.cluster`\n......................\n\n- |Enhancement| :class:`cluster.AgglomerativeClustering` has a new parameter\n  `compute_distances`. When set to `True`, distances between clusters are\n  computed and stored in the `distances_` attribute even when the parameter\n  `distance_threshold` is not used. This new parameter is useful to produce\n  dendrogram visualizations, but introduces a computational and memory\n  overhead. :pr:`17984` by :user:`Michael Riedmann <mriedmann>`,\n  :user:`Emilie Delattre <EmilieDel>`, and\n  :user:`Francesco Casalegno <FrancescoCasalegno>`.\n\n- |Enhancement| :class:`cluster.SpectralClustering` and\n  :func:`cluster.spectral_clustering` have a new keyword argument `verbose`.\n  When set to `True`, additional messages will be displayed which can aid with\n  debugging. :pr:`18052` by :user:`Sean O. Stalley <sstalley>`.\n\n- |Enhancement| Added :func:`cluster.kmeans_plusplus` as a public function.\n  Initialization by KMeans++ can now be called separately to generate\n  initial cluster centroids. :pr:`17937` by :user:`g-walsh`.\n\n- |API| :class:`cluster.MiniBatchKMeans` attributes, `counts_` and\n  `init_size_`, are deprecated and will be removed in 1.1 (renaming of 0.26).\n  :pr:`17864` by :user:`Jérémie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| :class:`compose.ColumnTransformer` will skip transformers when the\n  column selector is a list of bools that are False. 
:pr:`17616` by\n  `Thomas Fan`_.\n\n- |Fix| :class:`compose.ColumnTransformer` now displays the remainder in the\n  diagram display. :pr:`18167` by `Thomas Fan`_.\n\n- |Fix| :class:`compose.ColumnTransformer` enforces strict count and order\n  of column names between `fit` and `transform` by raising an error instead\n  of a warning, following the deprecation cycle.\n  :pr:`18256` by :user:`Madhura Jayratne <madhuracj>`.\n\n:mod:`sklearn.covariance`\n.........................\n\n- |API| Deprecates `cv_alphas_` in favor of `cv_results_['alphas']` and\n  `grid_scores_` in favor of split scores in `cv_results_` in\n  :class:`covariance.GraphicalLassoCV`. `cv_alphas_` and `grid_scores_` will be\n  removed in version 1.1 (renaming of 0.26).\n  :pr:`16392` by `Thomas Fan`_.\n\n:mod:`sklearn.cross_decomposition`\n..................................\n\n- |Fix| Fixed a bug in :class:`cross_decomposition.PLSSVD` which would\n  sometimes return components in the reversed order of importance.\n  :pr:`17095` by `Nicolas Hug`_.\n\n- |Fix| Fixed a bug in :class:`cross_decomposition.PLSSVD`,\n  :class:`cross_decomposition.CCA`, and\n  :class:`cross_decomposition.PLSCanonical`, which would lead to incorrect\n  predictions for `est.transform(Y)` when the training data is single-target.\n  :pr:`17095` by `Nicolas Hug`_.\n\n- |Fix| Increases the stability of :class:`cross_decomposition.CCA`.\n  :pr:`18746` by `Thomas Fan`_.\n\n- |API| The bounds of the `n_components` parameter are now restricted:\n\n  - into `[1, min(n_samples, n_features, n_targets)]`, for\n    :class:`cross_decomposition.PLSSVD`, :class:`cross_decomposition.CCA`,\n    and :class:`cross_decomposition.PLSCanonical`.\n  - into `[1, n_features]` for :class:`cross_decomposition.PLSRegression`.\n\n  An error will be raised in 1.1 (renaming of 0.26).\n  :pr:`17095` by `Nicolas Hug`_.\n\n- |API| For :class:`cross_decomposition.PLSSVD`,\n  :class:`cross_decomposition.CCA`, and\n  :class:`cross_decomposition.PLSCanonical`, the 
`x_scores_` and `y_scores_`\n  attributes were deprecated and will be removed in 1.1 (renaming of 0.26).\n  They can be retrieved by calling `transform` on the training data.\n  The `norm_y_weights` attribute will also be removed.\n  :pr:`17095` by `Nicolas Hug`_.\n\n- |API| For :class:`cross_decomposition.PLSRegression`,\n  :class:`cross_decomposition.PLSCanonical`,\n  :class:`cross_decomposition.CCA`, and\n  :class:`cross_decomposition.PLSSVD`, the `x_mean_`, `y_mean_`, `x_std_`, and\n  `y_std_` attributes were deprecated and will be removed in 1.1\n  (renaming of 0.26).\n  :pr:`18768` by :user:`Maren Westermann <marenwestermann>`.\n\n- |Fix| :class:`decomposition.TruncatedSVD` becomes deterministic by using the\n  `random_state`. It controls the weights' initialization of the underlying\n  ARPACK solver.\n  :pr:`18302` by :user:`Gaurav Desai <gauravkdesai>` and\n  :user:`Ivan Panico <FollowKenny>`.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Feature| :func:`datasets.fetch_openml` now validates the md5 checksum of arff\n  files downloaded or cached to ensure data integrity.\n  :pr:`14800` by :user:`Shashank Singh <shashanksingh28>` and `Joel Nothman`_.\n\n- |Enhancement| :func:`datasets.fetch_openml` now allows argument `as_frame`\n  to be 'auto', which tries to convert returned data to a pandas DataFrame\n  unless data is sparse.\n  :pr:`17396` by :user:`Jiaxiang <fujiaxiang>`.\n\n- |Enhancement| :func:`datasets.fetch_covtype` now supports the optional\n  argument `as_frame`; when it is set to True, the returned Bunch object's\n  `data` and `frame` members are pandas DataFrames, and the `target` member is\n  a pandas Series.\n  :pr:`17491` by :user:`Alex Liang <tianchuliang>`.\n\n- |Enhancement| :func:`datasets.fetch_kddcup99` now supports the optional\n  argument `as_frame`; when it is set to True, the returned Bunch object's\n  `data` and `frame` members are pandas DataFrames, and the `target` member is\n  a pandas Series.\n  :pr:`18280` by 
:user:`Alex Liang <tianchuliang>` and\n  `Guillaume Lemaitre`_.\n\n- |Enhancement| :func:`datasets.fetch_20newsgroups_vectorized` now supports\n  loading as a pandas ``DataFrame`` by setting ``as_frame=True``.\n  :pr:`17499` by :user:`Brigitta Sipőcz <bsipocz>` and\n  `Guillaume Lemaitre`_.\n\n- |API| The default value of `as_frame` in :func:`datasets.fetch_openml` is\n  changed from False to 'auto'.\n  :pr:`17610` by :user:`Jiaxiang <fujiaxiang>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |API| For :class:`decomposition.NMF`,\n  the default `init` value, when `init=None` and\n  `n_components <= min(n_samples, n_features)`, will be changed from\n  `'nndsvd'` to `'nndsvda'` in 1.1 (renaming of 0.26).\n  :pr:`18525` by :user:`Chiara Marmo <cmarmo>`.\n\n- |Enhancement| :class:`decomposition.FactorAnalysis` now supports the optional\n  argument `rotation`, which can take the value `None`, `'varimax'` or\n  `'quartimax'`. :pr:`11064` by :user:`Jona Sassenhagen <jona-sassenhagen>`.\n\n- |Enhancement| :class:`decomposition.NMF` now supports the optional parameter\n  `regularization`, which can take the values `None`, `'components'`,\n  `'transformation'` or `'both'`, in accordance with\n  :func:`decomposition.non_negative_factorization`.\n  :pr:`17414` by :user:`Bharat Raghunathan <bharatr21>`.\n\n- |Fix| :class:`decomposition.KernelPCA` behaviour is now more consistent\n  between 32-bits and 64-bits data input when the kernel has small positive\n  eigenvalues. Small positive eigenvalues were not correctly discarded for\n  32-bits data.\n  :pr:`18149` by :user:`Sylvain Marié <smarie>`.\n\n- |Fix| Fix :class:`decomposition.SparseCoder` such that it follows the\n  scikit-learn API and supports cloning. 
The attribute `components_` is\n  deprecated in 0.24 and will be removed in 1.1 (renaming of 0.26).\n  This attribute was redundant with the `dictionary` attribute and constructor\n  parameter.\n  :pr:`17679` by :user:`Xavier Dupré <sdpython>`.\n\n- |Fix| :meth:`decomposition.TruncatedSVD.fit_transform` consistently returns\n  the same as :meth:`decomposition.TruncatedSVD.fit` followed by\n  :meth:`decomposition.TruncatedSVD.transform`.\n  :pr:`18528` by :user:`Albert Villanova del Moral <albertvillanova>` and\n  :user:`Ruifeng Zheng <zhengruifeng>`.\n\n:mod:`sklearn.discriminant_analysis`\n....................................\n\n- |Enhancement| :class:`discriminant_analysis.LinearDiscriminantAnalysis` can\n  now use a custom covariance estimate by setting the `covariance_estimator`\n  parameter. :pr:`14446` by :user:`Hugo Richard <hugorichard>`.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |MajorFeature| :class:`ensemble.HistGradientBoostingRegressor` and\n  :class:`ensemble.HistGradientBoostingClassifier` now have native\n  support for categorical features with the `categorical_features`\n  parameter. :pr:`18394` by `Nicolas Hug`_ and `Thomas Fan`_.\n\n- |Feature| :class:`ensemble.HistGradientBoostingRegressor` and\n  :class:`ensemble.HistGradientBoostingClassifier` now support the\n  method `staged_predict`, which allows monitoring of each stage.\n  :pr:`16985` by :user:`Hao Chun Chang <haochunchang>`.\n\n- |Efficiency| Break cyclic references in the tree nodes used internally in\n  :class:`ensemble.HistGradientBoostingRegressor` and\n  :class:`ensemble.HistGradientBoostingClassifier` to allow for the timely\n  garbage collection of large intermediate data structures and to improve memory\n  usage in `fit`. 
:pr:`18334` by `Olivier Grisel`_, `Nicolas Hug`_, `Thomas\n  Fan`_ and `Andreas Müller`_.\n\n- |Efficiency| Histogram initialization is now done in parallel in\n  :class:`ensemble.HistGradientBoostingRegressor` and\n  :class:`ensemble.HistGradientBoostingClassifier` which results in speed\n  improvement for problems that build a lot of nodes on multicore machines.\n  :pr:`18341` by `Olivier Grisel`_, `Nicolas Hug`_, `Thomas Fan`_, and\n  :user:`Egor Smirnov <SmirnovEgorRu>`.\n\n- |Fix| Fixed a bug in\n  :class:`ensemble.HistGradientBoostingRegressor` and\n  :class:`ensemble.HistGradientBoostingClassifier` so that they now accept data\n  with `uint8` dtype in `predict`. :pr:`18410` by `Nicolas Hug`_.\n\n- |API| The attribute ``n_classes_`` is now deprecated in\n  :class:`ensemble.GradientBoostingRegressor` and returns `1`.\n  :pr:`17702` by :user:`Simona Maggio <simonamaggio>`.\n\n- |API| Mean absolute error ('mae') is now deprecated for the parameter\n  ``criterion`` in :class:`ensemble.GradientBoostingRegressor` and\n  :class:`ensemble.GradientBoostingClassifier`.\n  :pr:`18326` by :user:`Madhura Jayaratne <madhuracj>`.\n\n:mod:`sklearn.exceptions`\n.........................\n\n- |API| `exceptions.ChangedBehaviorWarning` and\n  `exceptions.NonBLASDotWarning` are deprecated and will be removed in\n  1.1 (renaming of 0.26).\n  :pr:`17804` by `Adrin Jalali`_.\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Enhancement| :class:`feature_extraction.DictVectorizer` accepts multiple\n  values for one categorical feature. 
:pr:`17367` by :user:`Peng Yu <yupbank>`\n  and :user:`Chiara Marmo <cmarmo>`.\n\n- |Fix| :class:`feature_extraction.text.CountVectorizer` raises an error if a\n  custom token pattern capturing more than one group is provided.\n  :pr:`15427` by :user:`Gangesh Gudmalwar <ggangesh>` and\n  :user:`Erin R Hoffman <hoffm386>`.\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Feature| Added :class:`feature_selection.SequentialFeatureSelector`\n  which implements forward and backward sequential feature selection.\n  :pr:`6545` by `Sebastian Raschka`_ and :pr:`17159` by `Nicolas Hug`_.\n\n- |Feature| A new parameter `importance_getter` was added to\n  :class:`feature_selection.RFE`, :class:`feature_selection.RFECV` and\n  :class:`feature_selection.SelectFromModel`, allowing the user to specify an\n  attribute name/path or a `callable` for extracting feature importance from\n  the estimator. :pr:`15361` by :user:`Venkatachalam N <venkyyuvy>`.\n\n- |Efficiency| Reduce memory footprint in\n  :func:`feature_selection.mutual_info_classif`\n  and :func:`feature_selection.mutual_info_regression` by calling\n  :class:`neighbors.KDTree` for counting nearest neighbors. 
:pr:`17878` by\n  :user:`Noel Rogers <noelano>`.\n\n- |Enhancement| :class:`feature_selection.RFE` now supports passing\n  `n_features_to_select` as a float representing the\n  percentage of features to select.\n  :pr:`17090` by :user:`Lisa Schwetlick <lschwetlick>` and\n  :user:`Marija Vlajic Wheeler <marijavlajic>`.\n\n:mod:`sklearn.gaussian_process`\n...............................\n\n- |Enhancement| A new method\n  `gaussian_process.kernel._check_bounds_params` is called after\n  fitting a Gaussian Process and raises a ``ConvergenceWarning`` if the bounds\n  of the hyperparameters are too tight.\n  :issue:`12638` by :user:`Sylvain Lannuzel <SylvainLan>`.\n\n:mod:`sklearn.impute`\n.....................\n\n- |Feature| :class:`impute.SimpleImputer` now supports a list of strings\n  when ``strategy='most_frequent'`` or ``strategy='constant'``.\n  :pr:`17526` by :user:`Ayako YAGI <yagi-3>` and\n  :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.\n\n- |Feature| Added method :meth:`impute.SimpleImputer.inverse_transform` to\n  revert imputed data to original when instantiated with\n  ``add_indicator=True``. :pr:`17612` by :user:`Srimukh Sripada <d3b0unce>`.\n\n- |Fix| Replace the default values of the `min_value` and `max_value`\n  parameters in :class:`impute.IterativeImputer` with `-np.inf` and `np.inf`,\n  respectively, instead of `None`. 
However, the behaviour of the class does not\n  change since `None` was defaulting to these values already.\n  :pr:`16493` by :user:`Darshan N <DarshanGowda0>`.\n\n- |Fix| :class:`impute.IterativeImputer` will not attempt to set the\n  estimator's `random_state` attribute, allowing it to be used with more\n  external classes.\n  :pr:`15636` by :user:`David Cortes <david-cortes>`.\n\n- |Efficiency| :class:`impute.SimpleImputer` is now faster with `object` dtype\n  arrays when `strategy='most_frequent'`.\n  :pr:`18987` by :user:`David Katz <DavidKatz-il>`.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |Feature| :func:`inspection.partial_dependence` and\n  `inspection.plot_partial_dependence` now support calculating and\n  plotting Individual Conditional Expectation (ICE) curves controlled by the\n  ``kind`` parameter.\n  :pr:`16619` by :user:`Madhura Jayratne <madhuracj>`.\n\n- |Feature| Add `sample_weight` parameter to\n  :func:`inspection.permutation_importance`. :pr:`16906` by\n  :user:`Roei Kahny <RoeiKa>`.\n\n- |API| Positional arguments are deprecated in\n  :meth:`inspection.PartialDependenceDisplay.plot` and will error in 1.1\n  (renaming of 0.26).\n  :pr:`18293` by `Thomas Fan`_.\n\n:mod:`sklearn.isotonic`\n.......................\n\n- |Feature| Expose fitted attributes ``X_thresholds_`` and ``y_thresholds_``\n  that hold the de-duplicated interpolation thresholds of an\n  :class:`isotonic.IsotonicRegression` instance for model inspection purposes.\n  :pr:`16289` by :user:`Masashi Kishimoto <kishimoto-banana>` and\n  :user:`Olivier Grisel <ogrisel>`.\n\n- |Enhancement| :class:`isotonic.IsotonicRegression` now accepts a 2d array with\n  1 feature as input array. 
:pr:`17379` by :user:`Jiaxiang <fujiaxiang>`.\n\n- |Fix| Add tolerance when determining duplicate X values to prevent\n  inf values from being predicted by :class:`isotonic.IsotonicRegression`.\n  :pr:`18639` by :user:`Lucy Liu <lucyleeow>`.\n\n:mod:`sklearn.kernel_approximation`\n...................................\n\n- |Feature| Added class :class:`kernel_approximation.PolynomialCountSketch`\n  which implements the Tensor Sketch algorithm for polynomial kernel feature\n  map approximation.\n  :pr:`13003` by :user:`Daniel López Sánchez <lopeLH>`.\n\n- |Efficiency| :class:`kernel_approximation.Nystroem` now supports\n  parallelization via `joblib.Parallel` using argument `n_jobs`.\n  :pr:`18545` by :user:`Laurenz Reitsam <LaurenzReitsam>`.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Feature| :class:`linear_model.LinearRegression` now forces coefficients\n  to be all positive when ``positive`` is set to ``True``.\n  :pr:`17578` by :user:`Joseph Knox <jknox13>`,\n  :user:`Nelle Varoquaux <NelleV>` and :user:`Chiara Marmo <cmarmo>`.\n\n- |Enhancement| :class:`linear_model.RidgeCV` now supports finding an optimal\n  regularization value `alpha` for each target separately by setting\n  ``alpha_per_target=True``. This is only supported when using the default\n  efficient leave-one-out cross-validation scheme ``cv=None``. :pr:`6624` by\n  :user:`Marijn van Vliet <wmvanvliet>`.\n\n- |Fix| Fixed a bug in :class:`linear_model.TheilSenRegressor` where\n  `predict` and `score` would fail when `fit_intercept=False` and there was\n  one feature during fitting. 
:pr:`18121` by `Thomas Fan`_.\n\n- |Fix| Fixed a bug in :class:`linear_model.ARDRegression` where `predict`\n  was raising an error when `normalize=True` and `return_std=True` because\n  `X_offset_` and `X_scale_` were undefined.\n  :pr:`18607` by :user:`fhaselbeck <fhaselbeck>`.\n\n- |Fix| Added the missing `l1_ratio` parameter in\n  :class:`linear_model.Perceptron`, to be used when `penalty='elasticnet'`.\n  This changes the default from 0 to 0.15. :pr:`18622` by\n  :user:`Haesun Park <rickiepark>`.\n\n:mod:`sklearn.manifold`\n.......................\n\n- |Efficiency| Fixed :issue:`10493`: improved Local Linear Embedding (LLE),\n  which raised a `MemoryError` exception when used with large inputs.\n  :pr:`17997` by :user:`Bertrand Maisonneuve <bmaisonn>`.\n\n- |Enhancement| Add `square_distances` parameter to :class:`manifold.TSNE`,\n  which provides backward compatibility during deprecation of legacy squaring\n  behavior. Distances will be squared by default in 1.1 (renaming of 0.26),\n  and this parameter will be removed in 1.3. :pr:`17662` by\n  :user:`Joshua Newton <joshuacwnewton>`.\n\n- |Fix| :class:`manifold.MDS` now correctly sets its `_pairwise` attribute.\n  :pr:`18278` by `Thomas Fan`_.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Feature| Added :func:`metrics.cluster.pair_confusion_matrix` implementing\n  the confusion matrix arising from pairs of elements from two clusterings.\n  :pr:`17412` by :user:`Uwe F Mayer <ufmayer>`.\n\n- |Feature| New metric :func:`metrics.top_k_accuracy_score`. 
It's a\n  generalization of :func:`metrics.accuracy_score`; the difference is\n  that a prediction is considered correct as long as the true label is\n  associated with one of the `k` highest predicted scores.\n  :func:`metrics.accuracy_score` is the special case of `k = 1`.\n  :pr:`16625` by :user:`Geoffrey Bolmier <gbolmier>`.\n\n- |Feature| Added :func:`metrics.det_curve` to compute the Detection Error\n  Tradeoff curve classification metric.\n  :pr:`10591` by :user:`Jeremy Karnowski <jkarnows>` and\n  :user:`Daniel Mohns <dmohns>`.\n\n- |Feature| Added `metrics.plot_det_curve` and\n  :class:`metrics.DetCurveDisplay` to ease the plotting of DET curves.\n  :pr:`18176` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Feature| Added :func:`metrics.mean_absolute_percentage_error` metric and\n  the associated scorer for regression problems. :issue:`10708` fixed with the\n  PR :pr:`15007` by :user:`Ashutosh Hathidara <ashutosh1919>`. The scorer and\n  some practical test cases were taken from PR :pr:`10711` by\n  :user:`Mohamed Ali Jamaoui <mohamed-ali>`.\n\n- |Feature| Added :func:`metrics.rand_score` implementing the (unadjusted)\n  Rand index.\n  :pr:`17412` by :user:`Uwe F Mayer <ufmayer>`.\n\n- |Feature| `metrics.plot_confusion_matrix` now supports making the colorbar\n  optional in the matplotlib plot by setting `colorbar=False`. :pr:`17192` by\n  :user:`Avi Gupta <avigupta2612>`.\n\n- |Enhancement| Add `sample_weight` parameter to\n  :func:`metrics.median_absolute_error`. 
:pr:`17225` by\n  :user:`Lucy Liu <lucyleeow>`.\n\n- |Enhancement| Add `pos_label` parameter in\n  `metrics.plot_precision_recall_curve` in order to specify the positive\n  class to be used when computing the precision and recall statistics.\n  :pr:`17569` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Enhancement| Add `pos_label` parameter in\n  `metrics.plot_roc_curve` in order to specify the positive\n  class to be used when computing the roc auc statistics.\n  :pr:`17651` by :user:`Clara Matos <claramatos>`.\n\n- |Fix| Fixed a bug in\n  :func:`metrics.classification_report` which was raising AttributeError\n  when called with `output_dict=True` for 0-length values.\n  :pr:`17777` by :user:`Shubhanshu Mishra <napsternxg>`.\n\n- |Fix| Fixed a bug in\n  :func:`metrics.jaccard_score` which recommended the `zero_division`\n  parameter when called with no true or predicted samples.\n  :pr:`17826` by :user:`Richard Decal <crypdick>` and\n  :user:`Joseph Willard <josephwillard>`.\n\n- |Fix| Fixed a bug in :func:`metrics.hinge_loss` where an error occurred when\n  ``y_true`` was missing some labels that were provided explicitly in the\n  ``labels`` parameter.\n  :pr:`17935` by :user:`Cary Goltermann <Ultramann>`.\n\n- |Fix| Fix scorers that accept a `pos_label` parameter and compute their\n  metrics from values returned by `decision_function` or `predict_proba`.\n  Previously, they would return erroneous values when `pos_label` did not\n  correspond to `classifier.classes_[1]`. 
This is especially important when training\n  classifiers directly with string-labeled target classes.\n  :pr:`18114` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Fixed bug in `metrics.plot_confusion_matrix` where an error occurs\n  when `y_true` contains labels that were not previously seen by the classifier\n  while the `labels` and `display_labels` parameters are set to `None`.\n  :pr:`18405` by :user:`Thomas J. Fan <thomasjpfan>` and\n  :user:`Yakov Pchelintsev <kyouma>`.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |MajorFeature| Added (experimental) parameter search estimators\n  :class:`model_selection.HalvingRandomSearchCV` and\n  :class:`model_selection.HalvingGridSearchCV` which implement Successive\n  Halving, and can be used as drop-in replacements for\n  :class:`model_selection.RandomizedSearchCV` and\n  :class:`model_selection.GridSearchCV`. :pr:`13900` by `Nicolas Hug`_, `Joel\n  Nothman`_ and `Andreas Müller`_.\n\n- |Feature| :class:`model_selection.RandomizedSearchCV` and\n  :class:`model_selection.GridSearchCV` now have the method ``score_samples``.\n  :pr:`17478` by :user:`Teon Brooks <teonbrooks>` and\n  :user:`Mohamed Maskani <maskani-moh>`.\n\n- |Enhancement| :class:`model_selection.TimeSeriesSplit` has two new keyword\n  arguments `test_size` and `gap`. `test_size` allows the out-of-sample\n  time series length to be fixed for all folds. 
`gap` removes a fixed number of\n  samples between the train and test set on each fold.\n  :pr:`13204` by :user:`Kyle Kosic <kykosic>`.\n\n- |Enhancement| :func:`model_selection.permutation_test_score` and\n  :func:`model_selection.validation_curve` now accept `fit_params`\n  to pass additional estimator parameters.\n  :pr:`18527` by :user:`Gaurav Dhingra <gxyd>`,\n  :user:`Julien Jerphanion <jjerphan>` and :user:`Amanda Dsouza <amy12xx>`.\n\n- |Enhancement| :func:`model_selection.cross_val_score`,\n  :func:`model_selection.cross_validate`,\n  :class:`model_selection.GridSearchCV`, and\n  :class:`model_selection.RandomizedSearchCV` allow the estimator to fail\n  scoring and replace the score with `error_score`. If `error_score="raise"`,\n  the error will be raised.\n  :pr:`18343` by `Guillaume Lemaitre`_ and :user:`Devi Sandeep <dsandeep0138>`.\n\n- |Enhancement| :func:`model_selection.learning_curve` now accepts `fit_params`\n  to pass additional estimator parameters.\n  :pr:`18595` by :user:`Amanda Dsouza <amy12xx>`.\n\n- |Fix| Fixed the `len` of :class:`model_selection.ParameterSampler` when\n  all distributions are lists and `n_iter` is more than the number of unique\n  parameter combinations. 
:pr:`18222` by `Nicolas Hug`_.\n\n- |Fix| A fix to raise a warning when one or more CV splits of\n  :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` result in non-finite scores.\n  :pr:`18266` by :user:`Subrat Sahu <subrat93>`,\n  :user:`Nirvan <Nirvan101>` and :user:`Arthur Book <ArthurBook>`.\n\n- |Enhancement| :class:`model_selection.GridSearchCV`,\n  :class:`model_selection.RandomizedSearchCV` and\n  :func:`model_selection.cross_validate` support `scoring` being a callable\n  returning a dictionary of multiple metric name/value associations.\n  :pr:`15126` by `Thomas Fan`_.\n\n:mod:`sklearn.multiclass`\n.........................\n\n- |Enhancement| :class:`multiclass.OneVsOneClassifier` now accepts\n  inputs with missing values. Hence, estimators which can handle\n  missing values (e.g. a pipeline with an imputation step) can be used as\n  estimators for multiclass wrappers.\n  :pr:`17987` by :user:`Venkatachalam N <venkyyuvy>`.\n\n- |Fix| A fix to allow :class:`multiclass.OutputCodeClassifier` to accept\n  sparse input data in its `fit` and `predict` methods. The check for\n  validity of the input is now delegated to the base estimator.\n  :pr:`17233` by :user:`Zolisa Bleki <zoj613>`.\n\n:mod:`sklearn.multioutput`\n..........................\n\n- |Enhancement| :class:`multioutput.MultiOutputClassifier` and\n  :class:`multioutput.MultiOutputRegressor` now accept inputs\n  with missing values. 
Hence, estimators which can handle missing\n  values (e.g. a pipeline with an imputation step, or HistGradientBoosting\n  estimators) can be used as estimators for multioutput wrappers.\n  :pr:`17987` by :user:`Venkatachalam N <venkyyuvy>`.\n\n- |Fix| A fix to accept tuples for the ``order`` parameter\n  in :class:`multioutput.ClassifierChain`.\n  :pr:`18124` by :user:`Gus Brocchini <boldloop>` and\n  :user:`Amanda Dsouza <amy12xx>`.\n\n:mod:`sklearn.naive_bayes`\n..........................\n\n- |Enhancement| Adds a parameter `min_categories` to\n  :class:`naive_bayes.CategoricalNB` that allows a minimum number of categories\n  per feature to be specified. This allows categories unseen during training\n  to be accounted for.\n  :pr:`16326` by :user:`George Armstrong <gwarmstrong>`.\n\n- |API| The attributes ``coef_`` and ``intercept_`` are now deprecated in\n  :class:`naive_bayes.MultinomialNB`, :class:`naive_bayes.ComplementNB`,\n  :class:`naive_bayes.BernoulliNB` and :class:`naive_bayes.CategoricalNB`,\n  and will be removed in v1.1 (renaming of 0.26).\n  :pr:`17427` by :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Efficiency| Speed up ``seuclidean``, ``wminkowski``, ``mahalanobis`` and\n  ``haversine`` metrics in `neighbors.DistanceMetric` by avoiding\n  unexpected GIL acquisition in Cython when setting ``n_jobs>1`` in\n  :class:`neighbors.KNeighborsClassifier`,\n  :class:`neighbors.KNeighborsRegressor`,\n  :class:`neighbors.RadiusNeighborsClassifier`,\n  :class:`neighbors.RadiusNeighborsRegressor`,\n  :func:`metrics.pairwise_distances`\n  and by validating data out of loops.\n  :pr:`17038` by :user:`Wenbo Zhao <webber26232>`.\n\n- |Efficiency| `neighbors.NeighborsBase` benefits from an improved\n  `algorithm='auto'` heuristic. 
In addition to the previous set of rules,\n  now, when the number of features exceeds 15, `brute` is selected, assuming\n  the data intrinsic dimensionality is too high for tree-based methods.\n  :pr:`17148` by :user:`Geoffrey Bolmier <gbolmier>`.\n\n- |Fix| `neighbors.BinaryTree`\n  will raise a `ValueError` when fitting on a data array having points with\n  different dimensions.\n  :pr:`18691` by :user:`Chiara Marmo <cmarmo>`.\n\n- |Fix| :class:`neighbors.NearestCentroid` with a numerical `shrink_threshold`\n  will raise a `ValueError` when fitting on data with all constant features.\n  :pr:`18370` by :user:`Trevor Waite <trewaite>`.\n\n- |Fix| In methods `radius_neighbors` and\n  `radius_neighbors_graph` of :class:`neighbors.NearestNeighbors`,\n  :class:`neighbors.RadiusNeighborsClassifier`,\n  :class:`neighbors.RadiusNeighborsRegressor`, and\n  :class:`neighbors.RadiusNeighborsTransformer`, using `sort_results=True` now\n  correctly sorts the results even when fitting with the "brute" algorithm.\n  :pr:`18612` by `Tom Dupre la Tour`_.\n\n:mod:`sklearn.neural_network`\n.............................\n\n- |Efficiency| Neural net training and prediction are now a little faster.\n  :pr:`17603`, :pr:`17604`, :pr:`17606`, :pr:`17608`, :pr:`17609`, :pr:`17633`,\n  :pr:`17661`, :pr:`17932` by :user:`Alex Henrie <alexhenrie>`.\n\n- |Enhancement| Avoid converting float32 input to float64 in\n  :class:`neural_network.BernoulliRBM`.\n  :pr:`16352` by :user:`Arthur Imbert <Henley13>`.\n\n- |Enhancement| Support 32-bit computations in\n  :class:`neural_network.MLPClassifier` and\n  :class:`neural_network.MLPRegressor`.\n  :pr:`17759` by :user:`Srimukh Sripada <d3b0unce>`.\n\n- |Fix| Fix method :meth:`neural_network.MLPClassifier.fit`\n  not iterating to ``max_iter`` if warm started.\n  :pr:`18269` by :user:`Norbert Preining <norbusan>` and\n  :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.pipeline`\n.......................\n\n- |Enhancement| References to 
transformers passed through ``transformer_weights``\n  to :class:`pipeline.FeatureUnion` that aren't present in ``transformer_list``\n  will raise a ``ValueError``.\n  :pr:`17876` by :user:`Cary Goltermann <Ultramann>`.\n\n- |Fix| A slice of a :class:`pipeline.Pipeline` now inherits the parameters of\n  the original pipeline (`memory` and `verbose`).\n  :pr:`18429` by :user:`Albert Villanova del Moral <albertvillanova>` and\n  :user:`Paweł Biernat <pwl>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Feature| :class:`preprocessing.OneHotEncoder` now supports missing\n  values by treating them as a category. :pr:`17317` by `Thomas Fan`_.\n\n- |Feature| Add a new ``handle_unknown`` parameter with a\n  ``use_encoded_value`` option, along with a new ``unknown_value`` parameter,\n  to :class:`preprocessing.OrdinalEncoder` to allow unknown categories during\n  transform and set the encoded value of the unknown categories.\n  :pr:`17406` by :user:`Felix Wick <FelixWick>` and :pr:`18406` by\n  `Nicolas Hug`_.\n\n- |Feature| Add ``clip`` parameter to :class:`preprocessing.MinMaxScaler`,\n  which clips the transformed values of test data to ``feature_range``.\n  :pr:`17833` by :user:`Yashika Sharma <yashika51>`.\n\n- |Feature| Add ``sample_weight`` parameter to\n  :class:`preprocessing.StandardScaler`. Allows setting\n  individual weights for each sample. :pr:`18510` and\n  :pr:`18447` and :pr:`16066` and :pr:`18682` by\n  :user:`Maria Telenczuk <maikia>` and :user:`Albert Villanova <albertvillanova>`\n  and :user:`panpiort8` and :user:`Alex Gramfort <agramfort>`.\n\n- |Enhancement| Verbose output of :class:`model_selection.GridSearchCV` has\n  been improved for readability. :pr:`16935` by :user:`Raghav Rajagopalan\n  <raghavrv>` and :user:`Chiara Marmo <cmarmo>`.\n\n- |Enhancement| Add ``unit_variance`` to :class:`preprocessing.RobustScaler`,\n  which scales output data such that normally distributed features have a\n  variance of 1. 
:pr:`17193` by :user:`Lucy Liu <lucyleeow>` and\n  :user:`Mabel Villalba <mabelvj>`.\n\n- |Enhancement| Add `dtype` parameter to\n  :class:`preprocessing.KBinsDiscretizer`.\n  :pr:`16335` by :user:`Arthur Imbert <Henley13>`.\n\n- |Fix| Raise error on\n  :meth:`sklearn.preprocessing.OneHotEncoder.inverse_transform`\n  when `handle_unknown='error'` and `drop=None` for samples\n  encoded as all zeros. :pr:`14982` by\n  :user:`Kevin Winata <kwinata>`.\n\n:mod:`sklearn.semi_supervised`\n..............................\n\n- |MajorFeature| Added :class:`semi_supervised.SelfTrainingClassifier`, a\n  meta-classifier that allows any supervised classifier to function as a\n  semi-supervised classifier that can learn from unlabeled data. :issue:`11682`\n  by :user:`Oliver Rausch <orausch>` and :user:`Patrice Becker <pr0duktiv>`.\n\n- |Fix| Fix incorrect encoding when using unicode string dtypes in\n  :class:`preprocessing.OneHotEncoder` and\n  :class:`preprocessing.OrdinalEncoder`. :pr:`15763` by `Thomas Fan`_.\n\n:mod:`sklearn.svm`\n..................\n\n- |Enhancement| invoke SciPy BLAS API for SVM kernel function in ``fit``,\n  ``predict`` and related methods of :class:`svm.SVC`, :class:`svm.NuSVC`,\n  :class:`svm.SVR`, :class:`svm.NuSVR`, :class:`svm.OneClassSVM`.\n  :pr:`16530` by :user:`Shuhua Fan <jim0421>`.\n\n:mod:`sklearn.tree`\n...................\n\n- |Feature| :class:`tree.DecisionTreeRegressor` now supports the new splitting\n  criterion ``'poisson'`` useful for modeling count data. :pr:`17386` by\n  :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Enhancement| :func:`tree.plot_tree` now uses colors from the matplotlib\n  configuration settings. 
:pr:`17187` by `Andreas M\u00fcller`_.\n\n- |API| The parameter ``X_idx_sorted`` is now deprecated in\n  :meth:`tree.DecisionTreeClassifier.fit` and\n  :meth:`tree.DecisionTreeRegressor.fit`, and has not effect.\n  :pr:`17614` by :user:`Juan Carlos Alfaro Jim\u00e9nez <alfaro96>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |Enhancement| Add ``check_methods_sample_order_invariance`` to\n  :func:`~utils.estimator_checks.check_estimator`, which checks that\n  estimator methods are invariant if applied to the same dataset\n  with different sample order :pr:`17598` by :user:`Jason Ngo <ngojason9>`.\n\n- |Enhancement| Add support for weights in\n  `utils.sparse_func.incr_mean_variance_axis`.\n  By :user:`Maria Telenczuk <maikia>` and :user:`Alex Gramfort <agramfort>`.\n\n- |Fix| Raise ValueError with clear error message in :func:`utils.check_array`\n  for sparse DataFrames with mixed types.\n  :pr:`17992` by :user:`Thomas J. Fan <thomasjpfan>` and\n  :user:`Alex Shacked <alexshacked>`.\n\n- |Fix| Allow serialized tree based models to be unpickled on a machine\n  with different endianness.\n  :pr:`17644` by :user:`Qi Zhang <qzhang90>`.\n\n- |Fix| Check that we raise proper error when axis=1 and the\n  dimensions do not match in `utils.sparse_func.incr_mean_variance_axis`.\n  By :user:`Alex Gramfort <agramfort>`.\n\nMiscellaneous\n.............\n\n- |Enhancement| Calls to ``repr`` are now faster\n  when `print_changed_only=True`, especially with meta-estimators.\n  :pr:`18508` by :user:`Nathan C. <Xethan>`.\n\n.. 
rubric:: Code and documentation contributors\n\nThanks to everyone who has contributed to the maintenance and improvement of\nthe project since version 0.23, including:\n\nAbo7atm, Adam Spannbauer, Adrin Jalali, adrinjalali, Agamemnon Krasoulis,\nAkshay Deodhar, Albert Villanova del Moral, Alessandro Gentile, Alex Henrie,\nAlex Itkes, Alex Liang, Alexander Lenail, alexandracraciun, Alexandre Gramfort,\nalexshacked, Allan D Butler, Amanda Dsouza, amy12xx, Anand Tiwari, Anderson\nNelson, Andreas Mueller, Ankit Choraria, Archana Subramaniyan, Arthur Imbert,\nAshutosh Hathidara, Ashutosh Kushwaha, Atsushi Nukariya, Aura Munoz, AutoViz\nand Auto_ViML, Avi Gupta, Avinash Anakal, Ayako YAGI, barankarakus,\nbarberogaston, beatrizsmg, Ben Mainye, Benjamin Bossan, Benjamin Pedigo, Bharat\nRaghunathan, Bhavika Devnani, Biprateep Dey, bmaisonn, Bo Chang, Boris\nVillaz\u00f3n-Terrazas, brigi, Brigitta Sip\u0151cz, Bruno Charron, Byron Smith, Cary\nGoltermann, Cat Chenal, CeeThinwa, chaitanyamogal, Charles Patel, Chiara Marmo,\nChristian Kastner, Christian Lorentzen, Christoph Deil, Christos Aridas, Clara\nMatos, clmbst, Coelhudo, crispinlogan, Cristina Mulas, Daniel L\u00f3pez, Daniel\nMohns, darioka, Darshan N, david-cortes, Declan O'Neill, Deeksha Madan,\nElizabeth DuPre, Eric Fiegel, Eric Larson, Erich Schubert, Erin Khoo, Erin R\nHoffman, eschibli, Felix Wick, fhaselbeck, Forrest Koch, Francesco Casalegno,\nFrans Larsson, Gael Varoquaux, Gaurav Desai, Gaurav Sheni, genvalen, Geoffrey\nBolmier, George Armstrong, George Kiragu, Gesa Stupperich, Ghislain Antony\nVaillant, Gim Seng, Gordon Walsh, Gregory R. 
Lee, Guillaume Chevalier,
Guillaume Lemaitre, Haesun Park, Hannah Bohle, Hao Chun Chang, Harry Scholes,
Harsh Soni, Henry, Hirofumi Suzuki, Hitesh Somani, Hoda1394, Hugo Le Moine,
hugorichard, indecisiveuser, Isuru Fernando, Ivan Wiryadi, j0rd1smit, Jaehyun
Ahn, Jake Tae, James Hoctor, Jan Vesely, Jeevan Anand Anne, JeroenPeterBos,
JHayes, Jiaxiang, Jie Zheng, Jigna Panchal, jim0421, Jin Li, Joaquin
Vanschoren, Joel Nothman, Jona Sassenhagen, Jonathan, Jorge Gorbe Moya, Joseph
Lucas, Joshua Newton, Juan Carlos Alfaro Jiménez, Julien Jerphanion, Justin
Huber, Jérémie du Boisberranger, Kartik Chugh, Katarina Slama, kaylani2,
Kendrick Cetina, Kenny Huynh, Kevin Markham, Kevin Winata, Kiril Isakov,
kishimoto, Koki Nishihara, Krum Arnaudov, Kyle Kosic, Lauren Oldja, Laurenz
Reitsam, Lisa Schwetlick, Louis Douge, Louis Guitton, Lucy Liu, Madhura
Jayaratne, maikia, Manimaran, Manuel López-Ibáñez, Maren Westermann, Maria
Telenczuk, Mariam-ke, Marijn van Vliet, Markus Löning, Martin Scheubrein,
Martina G. Vilas, Martina Megasari, Mateusz Górski, mathschy, mathurinm,
Matthias Bussonnier, Max Del Giudice, Michael, Milan Straka, Muoki Caleb, N.
Haiat, Nadia Tahiri, Ph. D, Naoki Hamada, Neil Botelho, Nicolas Hug, Nils
Werner, noelano, Norbert Preining, oj_lappi, Oleh Kozynets, Olivier Grisel,
Pankaj Jindal, Pardeep Singh, Parthiv Chigurupati, Patrice Becker, Pete Green,
pgithubs, Poorna Kumar, Prabakaran Kumaresshan, Probinette4, pspachtholz,
pwalchessen, Qi Zhang, rachel fischoff, Rachit Toshniwal, Rafey Iqbal Rahman,
Rahul Jakhar, Ram Rachum, RamyaNP, rauwuckl, Ravi Kiran Boggavarapu, Ray Bell,
Reshama Shaikh, Richard Decal, Rishi Advani, Rithvik Rao, Rob Romijnders, roei,
Romain Tavenard, Roman Yurchak, Ruby Werman, Ryotaro Tsukada, sadak, Saket
Khandelwal, Sam, Sam Ezebunandu, Sam Kimbinyi, Sarah Brown, Saurabh Jain, Sean
O.
Stalley, Sergio, Shail Shah, Shane Keller, Shao Yang Hong, Shashank Singh,
Shooter23, Shubhanshu Mishra, simonamaggio, Soledad Galli, Srimukh Sripada,
Stephan Steinfurt, subrat93, Sunitha Selvan, Swier, Sylvain Marié, SylvainLan,
t-kusanagi2, Teon L Brooks, Terence Honles, Thijs van den Berg, Thomas J Fan,
Thomas J. Fan, Thomas S Benjamin, Thomas9292, Thorben Jensen, tijanajovanovic,
Timo Kaufmann, tnwei, Tom Dupré la Tour, Trevor Waite, ufmayer, Umberto Lupo,
Venkatachalam N, Vikas Pandey, Vinicius Rios Fuck, Violeta, watchtheblur, Wenbo
Zhao, willpeppo, xavier dupré, Xethan, Xue Qianming, xun-tang, yagi-3, Yakov
Pchelintsev, Yashika Sharma, Yi-Yan Ge, Yue Wu, Yutaro Ikeda, Zaccharie Ramzi,
zoj613, Zhao Feng.
the associated scorer for regression problems   issue  10708  fixed with the   PR  pr  15007  by  user  Ashutosh Hathidara  ashutosh1919    The scorer and   some practical test cases were taken from PR  pr  10711  by    user  Mohamed Ali Jamaoui  mohamed ali        Feature  Added  func  metrics rand score  implementing the  unadjusted    Rand index     pr  17412  by  user  Uwe F Mayer  ufmayer        Feature   metrics plot confusion matrix  now supports making colorbar   optional in the matplotlib plot by setting  colorbar False    pr  17192  by    user  Avi Gupta  avigupta2612       Enhancement  Add  sample weight  parameter to    func  metrics median absolute error    pr  17225  by    user  Lucy Liu  lucyleeow        Enhancement  Add  pos label  parameter in    metrics plot precision recall curve  in order to specify the positive   class to be used when computing the precision and recall statistics     pr  17569  by  user  Guillaume Lemaitre  glemaitre        Enhancement  Add  pos label  parameter in    metrics plot roc curve  in order to specify the positive   class to be used when computing the roc auc statistics     pr  17651  by  user  Clara Matos  claramatos        Fix  Fixed a bug in    func  metrics classification report  which was raising AttributeError   when called with  output dict True  for 0 length values     pr  17777  by  user  Shubhanshu Mishra  napsternxg        Fix  Fixed a bug in    func  metrics classification report  which was raising AttributeError   when called with  output dict True  for 0 length values     pr  17777  by  user  Shubhanshu Mishra  napsternxg        Fix  Fixed a bug in    func  metrics jaccard score  which recommended the  zero division    parameter when called with no true or predicted samples     pr  17826  by  user  Richard Decal  crypdick   and    user  Joseph Willard  josephwillard       Fix  bug in  func  metrics hinge loss  where error occurs when     y true   is missing some labels that are provided explicitly in the 
    labels   parameter     pr  17935  by  user  Cary Goltermann  Ultramann        Fix  Fix scorers that accept a pos label parameter and compute their metrics   from values returned by  decision function  or  predict proba   Previously    they would return erroneous values when pos label was not corresponding to    classifier classes  1    This is especially important when training   classifiers directly with string labeled target classes     pr  18114  by  user  Guillaume Lemaitre  glemaitre        Fix  Fixed bug in  metrics plot confusion matrix  where error occurs   when  y true  contains labels that were not previously seen by the classifier   while the  labels  and  display labels  parameters are set to  None      pr  18405  by  user  Thomas J  Fan  thomasjpfan   and    user  Yakov Pchelintsev  kyouma      mod  sklearn model selection                                     MajorFeature  Added  experimental  parameter search estimators    class  model selection HalvingRandomSearchCV  and    class  model selection HalvingGridSearchCV  which implement Successive   Halving  and can be used as a drop in replacements for    class  model selection RandomizedSearchCV  and    class  model selection GridSearchCV    pr  13900  by  Nicolas Hug     Joel   Nothman   and  Andreas M ller        Feature   class  model selection RandomizedSearchCV  and    class  model selection GridSearchCV  now have the method   score samples      pr  17478  by  user  Teon Brooks  teonbrooks   and    user  Mohamed Maskani  maskani moh        Enhancement   class  model selection TimeSeriesSplit  has two new keyword   arguments  test size  and  gap    test size  allows the out of sample   time series length to be fixed for all folds   gap  removes a fixed number of   samples between the train and test set on each fold     pr  13204  by  user  Kyle Kosic  kykosic        Enhancement   func  model selection permutation test score  and    func  model selection validation curve  now accept fit params   
to pass additional estimator parameters     pr  18527  by  user  Gaurav Dhingra  gxyd       user  Julien Jerphanion  jjerphan   and  user  Amanda Dsouza  amy12xx        Enhancement   func  model selection cross val score      func  model selection cross validate      class  model selection GridSearchCV   and    class  model selection RandomizedSearchCV  allows estimator to fail scoring   and replace the score with  error score   If  error score  raise    the error   will be raised     pr  18343  by  Guillaume Lemaitre   and  user  Devi Sandeep  dsandeep0138        Enhancement   func  model selection learning curve  now accept fit params   to pass additional estimator parameters     pr  18595  by  user  Amanda Dsouza  amy12xx        Fix  Fixed the  len  of  class  model selection ParameterSampler  when   all distributions are lists and  n iter  is more than the number of unique   parameter combinations   pr  18222  by  Nicolas Hug        Fix  A fix to raise warning when one or more CV splits of    class  model selection GridSearchCV  and    class  model selection RandomizedSearchCV  results in non finite scores     pr  18266  by  user  Subrat Sahu  subrat93       user  Nirvan  Nirvan101   and  user  Arthur Book  ArthurBook        Enhancement   class  model selection GridSearchCV      class  model selection RandomizedSearchCV  and    func  model selection cross validate  support  scoring  being a callable   returning a dictionary of of multiple metric names values association     pr  15126  by  Thomas Fan      mod  sklearn multiclass                                Enhancement   class  multiclass OneVsOneClassifier  now accepts   the inputs with missing values  Hence  estimators which can handle   missing values  may be a pipeline with imputation step  can be used as   a estimator for multiclass wrappers     pr  17987  by  user  Venkatachalam N  venkyyuvy        Fix  A fix to allow  class  multiclass OutputCodeClassifier  to accept   sparse input data in its  fit  and 
 predict  methods  The check for   validity of the input is now delegated to the base estimator     pr  17233  by  user  Zolisa Bleki  zoj613      mod  sklearn multioutput                                 Enhancement   class  multioutput MultiOutputClassifier  and    class  multioutput MultiOutputRegressor  now accepts the inputs   with missing values  Hence  estimators which can handle missing   values  may be a pipeline with imputation step  HistGradientBoosting   estimators  can be used as a estimator for multiclass wrappers     pr  17987  by  user  Venkatachalam N  venkyyuvy        Fix  A fix to accept tuples for the   order   parameter   in  class  multioutput ClassifierChain      pr  18124  by  user  Gus Brocchini  boldloop   and    user  Amanda Dsouza  amy12xx      mod  sklearn naive bayes                                 Enhancement  Adds a parameter  min categories  to    class  naive bayes CategoricalNB  that allows a minimum number of categories   per feature to be specified  This allows categories unseen during training   to be accounted for     pr  16326  by  user  George Armstrong  gwarmstrong        API  The attributes   coef    and   intercept    are now deprecated in    class  naive bayes MultinomialNB    class  naive bayes ComplementNB      class  naive bayes BernoulliNB  and  class  naive bayes CategoricalNB     and will be removed in v1 1  renaming of 0 26      pr  17427  by  user  Juan Carlos Alfaro Jim nez  alfaro96      mod  sklearn neighbors                               Efficiency  Speed up   seuclidean      wminkowski      mahalanobis   and     haversine   metrics in  neighbors DistanceMetric  by avoiding   unexpected GIL acquiring in Cython when setting   n jobs 1   in    class  neighbors KNeighborsClassifier      class  neighbors KNeighborsRegressor      class  neighbors RadiusNeighborsClassifier      class  neighbors RadiusNeighborsRegressor      func  metrics pairwise distances    and by validating data out of loops     pr  17038  by  
user  Wenbo Zhao  webber26232        Efficiency   neighbors NeighborsBase  benefits of an improved    algorithm    auto   heuristic  In addition to the previous set of rules    now  when the number of features exceeds 15   brute  is selected  assuming   the data intrinsic dimensionality is too high for tree based methods     pr  17148  by  user  Geoffrey Bolmier  gbolmier        Fix   neighbors BinaryTree    will raise a  ValueError  when fitting on data array having points with   different dimensions     pr  18691  by  user  Chiara Marmo  cmarmo        Fix   class  neighbors NearestCentroid  with a numerical  shrink threshold    will raise a  ValueError  when fitting on data with all constant features     pr  18370  by  user  Trevor Waite  trewaite        Fix  In  methods  radius neighbors  and    radius neighbors graph  of  class  neighbors NearestNeighbors      class  neighbors RadiusNeighborsClassifier      class  neighbors RadiusNeighborsRegressor   and    class  neighbors RadiusNeighborsTransformer   using  sort results True  now   correctly sorts the results even when fitting with the  brute  algorithm     pr  18612  by  Tom Dupre la Tour      mod  sklearn neural network                                    Efficiency  Neural net training and prediction are now a little faster     pr  17603    pr  17604    pr  17606    pr  17608    pr  17609    pr  17633      pr  17661    pr  17932  by  user  Alex Henrie  alexhenrie        Enhancement  Avoid converting float32 input to float64 in    class  neural network BernoulliRBM      pr  16352  by  user  Arthur Imbert  Henley13        Enhancement  Support 32 bit computations in    class  neural network MLPClassifier  and    class  neural network MLPRegressor      pr  17759  by  user  Srimukh Sripada  d3b0unce        Fix  Fix method   meth  neural network MLPClassifier fit    not iterating to   max iter   if warm started     pr  18269  by  user  Norbert Preining  norbusan   and    user  Guillaume Lemaitre  glemaitre      
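Two of the additions described above are easy to exercise in a few lines. The sketch below assumes scikit-learn >= 0.24 and uses only the APIs named in the entries: the new `metrics.top_k_accuracy_score` metric, and the new `test_size` and `gap` keyword arguments of `model_selection.TimeSeriesSplit` (the sample data is illustrative, not from the release notes):

```python
import numpy as np
from sklearn.metrics import top_k_accuracy_score
from sklearn.model_selection import TimeSeriesSplit

# top_k_accuracy_score: a prediction counts as correct when the true label
# is among the k highest predicted scores (k=1 reduces to accuracy_score).
y_true = np.array([0, 1, 2, 2])
y_score = np.array([[0.5, 0.2, 0.2],
                    [0.3, 0.4, 0.2],
                    [0.2, 0.4, 0.3],
                    [0.7, 0.2, 0.1]])
# 3 of the 4 true labels appear in the top 2 scores -> 0.75
print(top_k_accuracy_score(y_true, y_score, k=2))

# TimeSeriesSplit: test_size fixes the out-of-sample length of every fold,
# gap drops samples between each train window and its test window.
X = np.arange(12).reshape(-1, 1)
for train_idx, test_idx in TimeSeriesSplit(n_splits=2, test_size=3, gap=1).split(X):
    print("train:", train_idx, "test:", test_idx)
```

With 12 samples, 2 splits, `test_size=3` and `gap=1`, each test fold is exactly 3 samples long and one sample is withheld between training and test data on each fold.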
:mod:`sklearn.pipeline`
.......................

- |Enhancement| References to transformers passed through `transformer_weights`
  to :class:`pipeline.FeatureUnion` that aren't present in `transformer_list`
  will raise a `ValueError`.
  :pr:`17876` by :user:`Cary Goltermann <Ultramann>`.

- |Fix| A slice of a :class:`pipeline.Pipeline` now inherits the parameters of
  the original pipeline (`memory` and `verbose`).
  :pr:`18429` by :user:`Albert Villanova del Moral <albertvillanova>` and
  :user:`Paweł Biernat <pwl>`.

:mod:`sklearn.preprocessing`
............................

- |Feature| :class:`preprocessing.OneHotEncoder` now supports missing
  values by treating them as a category. :pr:`17317` by `Thomas Fan`_.

- |Feature| Add a new `handle_unknown` parameter with a
  `use_encoded_value` option, along with a new `unknown_value` parameter,
  to :class:`preprocessing.OrdinalEncoder` to allow unknown categories during
  transform and set the encoded value of the unknown categories.
  :pr:`17406` by :user:`Felix Wick <FelixWick>` and :pr:`18406` by
  `Nicolas Hug`_.

- |Feature| Add `clip` parameter to :class:`preprocessing.MinMaxScaler`,
  which clips the transformed values of test data to `feature_range`.
  :pr:`17833` by :user:`Yashika Sharma <yashika51>`.

- |Feature| Add `sample_weight` parameter to
  :class:`preprocessing.StandardScaler`. Allows setting
  individual weights for each sample. :pr:`18510` and
  :pr:`18447` and :pr:`16066` and :pr:`18682` by
  :user:`Maria Telenczuk <maikia>` and :user:`Albert Villanova <albertvillanova>`
  and :user:`panpiort8` and :user:`Alex Gramfort <agramfort>`.

- |Enhancement| Verbose output of :class:`model_selection.GridSearchCV` has
  been improved for readability. :pr:`16935` by :user:`Raghav Rajagopalan
  <raghavrv>` and :user:`Chiara Marmo <cmarmo>`.

- |Enhancement| Add `unit_variance` to :class:`preprocessing.RobustScaler`,
  which scales output data such that normally distributed features have a
  variance of 1. :pr:`17193` by :user:`Lucy Liu <lucyleeow>` and
  :user:`Mabel Villalba <mabelvj>`.

- |Enhancement| Add `dtype` parameter to
  :class:`preprocessing.KBinsDiscretizer`.
  :pr:`16335` by :user:`Arthur Imbert <Henley13>`.

- |Fix| Raise error on
  :meth:`sklearn.preprocessing.OneHotEncoder.inverse_transform`
  when `handle_unknown='error'` and `drop=None` for samples
  encoded as all zeros. :pr:`14982` by
  :user:`Kevin Winata <kwinata>`.

:mod:`sklearn.semi_supervised`
..............................

- |MajorFeature| Added :class:`semi_supervised.SelfTrainingClassifier`, a
  meta-classifier that allows any supervised classifier to function as a
  semi-supervised classifier that can learn from unlabeled data. :issue:`11682`
  by :user:`Oliver Rausch <orausch>` and :user:`Patrice Becker <pr0duktiv>`.

- |Fix| Fix incorrect encoding when using unicode string dtypes in
  :class:`preprocessing.OneHotEncoder` and
  :class:`preprocessing.OrdinalEncoder`. :pr:`15763` by `Thomas Fan`_.

:mod:`sklearn.svm`
..................

- |Enhancement| invoke SciPy BLAS API for SVM kernel function in `fit`,
  `predict` and related methods of :class:`svm.SVC`, :class:`svm.NuSVC`,
  :class:`svm.SVR`, :class:`svm.NuSVR`, :class:`svm.OneClassSVM`.
  :pr:`16530` by :user:`Shuhua Fan <jim0421>`.

:mod:`sklearn.tree`
...................

- |Feature| :class:`tree.DecisionTreeRegressor` now supports the new splitting
  criterion `'poisson'` useful for modeling count data. :pr:`17386` by
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Enhancement| :func:`tree.plot_tree` now uses colors from the matplotlib
  configuration settings. :pr:`17187` by `Andreas Müller`_.

- |API| The parameter `X_idx_sorted` is now deprecated in
  :meth:`tree.DecisionTreeClassifier.fit` and
  :meth:`tree.DecisionTreeRegressor.fit`, and has no effect.
  :pr:`17614` by :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.

:mod:`sklearn.utils`
....................

- |Enhancement| Add `check_methods_sample_order_invariance` to
  :func:`~utils.estimator_checks.check_estimator`, which checks that
  estimator methods are invariant if applied to the same dataset
  with different sample order. :pr:`17598` by :user:`Jason Ngo <ngojason9>`.

- |Enhancement| Add support for weights in
  `utils.sparsefuncs.incr_mean_variance_axis`.
  By :user:`Maria Telenczuk <maikia>` and :user:`Alex Gramfort <agramfort>`.

- |Fix| Raise ValueError with clear error message in :func:`utils.check_array`
  for sparse DataFrames with mixed types.
  :pr:`17992` by :user:`Thomas J. Fan <thomasjpfan>` and
  :user:`Alex Shacked <alexshacked>`.

- |Fix| Allow serialized tree-based models to be unpickled on a machine
  with different endianness.
  :pr:`17644` by :user:`Qi Zhang <qzhang90>`.

- |Fix| Check that we raise proper error when axis=1 and the
  dimensions do not match in `utils.sparsefuncs.incr_mean_variance_axis`.
  By :user:`Alex Gramfort <agramfort>`.

Miscellaneous
.............

- |Enhancement| Calls to `repr` are now faster
  when `print_changed_only=True`, especially with meta-estimators.
  :pr:`18508` by :user:`Nathan C <Xethan>`.

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.23, including:

Abo7atm, Adam Spannbauer, Adrin Jalali, adrinjalali, Agamemnon Krasoulis, Akshay Deodhar, Albert Villanova del Moral, Alessandro Gentile, Alex Henrie, Alex Itkes, Alex Liang, Alexander Lenail, alexandracraciun, Alexandre Gramfort, alexshacked, Allan D Butler, Amanda Dsouza, amy12xx, Anand Tiwari, Anderson Nelson, Andreas Mueller, Ankit Choraria, Archana Subramaniyan, Arthur Imbert, Ashutosh Hathidara, Ashutosh Kushwaha, Atsushi Nukariya, Aura Munoz, AutoViz and Auto_ViML, Avi Gupta, Avinash Anakal, Ayako YAGI, barankarakus, barberogaston, beatrizsmg, Ben Mainye, Benjamin Bossan, Benjamin Pedigo, Bharat Raghunathan, Bhavika Devnani, Biprateep Dey, bmaisonn, Bo Chang, Boris Villazón-Terrazas, brigi, Brigitta Sipőcz, Bruno Charron, Byron Smith, Cary Goltermann, Cat Chenal, CeeThinwa, chaitanyamogal, Charles Patel, Chiara Marmo, Christian Kastner, Christian Lorentzen, Christoph Deil, Christos Aridas, Clara Matos, clmbst, Coelhudo, crispinlogan, Cristina Mulas, Daniel López, Daniel Mohns, darioka, Darshan N, david-cortes, Declan O'Neill, Deeksha Madan, Elizabeth DuPre, Eric Fiegel, Eric Larson, Erich Schubert, Erin Khoo, Erin R Hoffman, eschibli, Felix Wick, fhaselbeck, Forrest Koch, Francesco Casalegno, Frans Larsson, Gael Varoquaux, Gaurav Desai, Gaurav Sheni, genvalen, Geoffrey Bolmier, George Armstrong, George Kiragu, Gesa Stupperich, Ghislain Antony Vaillant, Gim Seng, Gordon Walsh, Gregory R. Lee, Guillaume Chevalier, Guillaume Lemaitre, Haesun Park, Hannah Bohle, Hao Chun Chang, Harry Scholes, Harsh Soni, Henry, Hirofumi Suzuki, Hitesh Somani, Hoda1394, Hugo Le Moine, hugorichard, indecisiveuser, Isuru Fernando, Ivan Wiryadi, j0rd1smit, Jaehyun Ahn, Jake Tae, James Hoctor, Jan Vesely, Jeevan Anand Anne, JeroenPeterBos, JHayes, Jiaxiang, Jie Zheng, Jigna Panchal, jim0421, Jin Li, Joaquin Vanschoren, Joel Nothman, Jona Sassenhagen, Jonathan, Jorge Gorbe-Moya, Joseph Lucas, Joshua Newton, Juan Carlos Alfaro Jiménez, Julien Jerphanion, Justin Huber, Jérémie du Boisberranger, Kartik Chugh, Katarina Slama, kaylani2, Kendrick Cetina, Kenny Huynh, Kevin Markham, Kevin Winata, Kiril Isakov, kishimoto, Koki Nishihara, Krum Arnaudov, Kyle Kosic, Lauren Oldja, Laurenz Reitsam, Lisa Schwetlick, Louis Douge, Louis Guitton, Lucy Liu, Madhura Jayaratne, maikia, Manimaran, Manuel López-Ibáñez, Maren Westermann, Maria Telenczuk, Mariam-ke, Marijn van Vliet, Markus Löning, Martin Scheubrein, Martina G. Vilas, Martina Megasari, Mateusz Górski, mathschy, mathurinm, Matthias Bussonnier, Max Del Giudice, Michael, Milan Straka, Muoki Caleb, N. Haiat, Nadia Tahiri, Ph.D., Naoki Hamada, Neil Botelho, Nicolas Hug, Nils Werner, noelano, Norbert Preining, oj-lappi, Oleh Kozynets, Olivier Grisel, Pankaj Jindal, Pardeep Singh, Parthiv Chigurupati, Patrice Becker, Pete Green, pgithubs, Poorna Kumar, Prabakaran Kumaresshan, Probinette4, pspachtholz, pwalchessen, Qi Zhang, rachel fischoff, Rachit Toshniwal, Rafey Iqbal Rahman, Rahul Jakhar, Ram Rachum, RamyaNP, rauwuckl, Ravi Kiran Boggavarapu, Ray Bell, Reshama Shaikh, Richard Decal, Rishi Advani, Rithvik Rao, Rob Romijnders, roei, Romain Tavenard, Roman Yurchak, Ruby Werman, Ryotaro Tsukada, sadak, Saket Khandelwal, Sam, Sam Ezebunandu, Sam Kimbinyi, Sarah Brown, Saurabh Jain, Sean O. Stalley, Sergio, Shail Shah, Shane Keller, Shao Yang Hong, Shashank Singh, Shooter23, Shubhanshu Mishra, simonamaggio, Soledad Galli, Srimukh Sripada, Stephan Steinfurt, subrat93, Sunitha Selvan, Swier, Sylvain Marié, SylvainLan, t-kusanagi2, Teon L Brooks, Terence Honles, Thijs van den Berg, Thomas J Fan, Thomas J. Fan, Thomas S Benjamin, Thomas9292, Thorben Jensen, tijanajovanovic, Timo Kaufmann, tnwei, Tom Dupré la Tour, Trevor Waite, ufmayer, Umberto Lupo, Venkatachalam N, Vikas Pandey, Vinicius Rios Fuck, Violeta, watchtheblur, Wenbo Zhao, willpeppo, xavier dupré, Xethan, Xue Qianming, xun-tang, yagi-3, Yakov Pchelintsev, Yashika Sharma, Yi Yan Ge, Yue Wu, Yutaro Ikeda, Zaccharie Ramzi, zoj613, Zhao Feng"}
{"questions":"scikit-learn sklearn contributors rst releasenotes13 Version 1 3","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n.. _release_notes_1_3:\n\n===========\nVersion 1.3\n===========\n\nFor a short description of the main highlights of the release, please refer to\n:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_3_0.py`.\n\n.. include:: changelog_legend.inc\n\n.. _changes_1_3_2:\n\nVersion 1.3.2\n=============\n\n**October 2023**\n\nChangelog\n---------\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Fix| All dataset fetchers now accept `data_home` as any object that implements\n  the :class:`os.PathLike` interface, for instance, :class:`pathlib.Path`.\n  :pr:`27468` by :user:`Yao Xiao <Charlie-XIAO>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Fixes a bug in :class:`decomposition.KernelPCA` by forcing the output of\n  the internal :class:`preprocessing.KernelCenterer` to be a default array. When the\n  arpack solver is used, it expects an array with a `dtype` attribute.\n  :pr:`27583` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fixes a bug for metrics using `zero_division=np.nan`\n  (e.g. :func:`~metrics.precision_score`) within a parallel loop\n  (e.g. :func:`~model_selection.cross_val_score`) where the singleton for `np.nan`\n  will be different in the sub-processes.\n  :pr:`27573` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.tree`\n...................\n\n- |Fix| Do not leak data via non-initialized memory in decision tree pickle files and make\n  the generation of those files deterministic. :pr:`27580` by :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\n\n.. 
_changes_1_3_1:\n\nVersion 1.3.1\n=============\n\n**September 2023**\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- |Fix| Ridge models with `solver='sparse_cg'` may have slightly different\n  results with scipy>=1.12, because of an underlying change in the scipy solver\n  (see `scipy#18488 <https:\/\/github.com\/scipy\/scipy\/pull\/18488>`_ for more\n  details)\n  :pr:`26814` by :user:`Lo\u00efc Est\u00e8ve <lesteve>`\n\nChanges impacting all modules\n-----------------------------\n\n- |Fix| The `set_output` API correctly works with list input. :pr:`27044` by\n  `Thomas Fan`_.\n\nChangelog\n---------\n\n:mod:`sklearn.calibration`\n..........................\n\n- |Fix| :class:`calibration.CalibratedClassifierCV` can now handle models that\n  produce large prediction scores. 
Before it was numerically unstable.\n  :pr:`26913` by :user:`Omar Salman <OmarManzoor>`.\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| :class:`cluster.BisectingKMeans` could crash when predicting on data\n  with a different scale than the data used to fit the model.\n  :pr:`27167` by `Olivier Grisel`_.\n\n- |Fix| :class:`cluster.BisectingKMeans` now works with data that has a single feature.\n  :pr:`27243` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.cross_decomposition`\n..................................\n\n- |Fix| :class:`cross_decomposition.PLSRegression` now automatically ravels the output\n  of `predict` if fitted with one dimensional `y`.\n  :pr:`26602` by :user:`Yao Xiao <Charlie-XIAO>`.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Fix| Fix a bug in :class:`ensemble.AdaBoostClassifier` with `algorithm=\"SAMME\"`\n  where the decision function of each weak learner should be symmetric (i.e.\n  the sum of the scores should sum to zero for a sample).\n  :pr:`26521` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Fix| :func:`feature_selection.mutual_info_regression` now correctly computes the\n  result when `X` is of integer dtype. :pr:`26748` by :user:`Yao Xiao <Charlie-XIAO>`.\n\n:mod:`sklearn.impute`\n.....................\n\n- |Fix| :class:`impute.KNNImputer` now correctly adds a missing indicator column in\n  ``transform`` when ``add_indicator`` is set to ``True`` and missing values are observed\n  during ``fit``. 
:pr:`26600` by :user:`Shreesha Kumar Bhat <Shreesha3112>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Scorers used with :func:`metrics.get_scorer` handle properly\n  multilabel-indicator matrix.\n  :pr:`27002` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.mixture`\n......................\n\n- |Fix| The initialization of :class:`mixture.GaussianMixture` from user-provided\n  `precisions_init` for `covariance_type` of `full` or `tied` was not correct,\n  and has been fixed.\n  :pr:`26416` by :user:`Yang Tao <mchikyt3>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Fix| :meth:`neighbors.KNeighborsClassifier.predict` no longer raises an\n  exception for `pandas.DataFrames` input.\n  :pr:`26772` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Reintroduce `sklearn.neighbors.BallTree.valid_metrics` and\n  `sklearn.neighbors.KDTree.valid_metrics` as public class attributes.\n  :pr:`26754` by :user:`Julien Jerphanion <jjerphan>`.\n\n- |Fix| :class:`sklearn.model_selection.HalvingRandomSearchCV` no longer raises\n  when the input to the `param_distributions` parameter is a list of dicts.\n  :pr:`26893` by :user:`Stefanie Senger <StefanieSenger>`.\n\n- |Fix| Neighbors based estimators now correctly work when `metric=\"minkowski\"` and the\n  metric parameter `p` is in the range `0 < p < 1`, regardless of the `dtype` of `X`.\n  :pr:`26760` by :user:`Shreesha Kumar Bhat <Shreesha3112>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Fix| :class:`preprocessing.LabelEncoder` correctly accepts `y` as a keyword\n  argument. 
:pr:`26940` by `Thomas Fan`_.\n\n- |Fix| :class:`preprocessing.OneHotEncoder` shows a more informative error message\n  when `sparse_output=True` and the output is configured to be pandas.\n  :pr:`26931` by `Thomas Fan`_.\n\n:mod:`sklearn.tree`\n...................\n\n- |Fix| :func:`tree.plot_tree` now accepts `class_names=True` as documented.\n  :pr:`26903` by :user:`Thomas Roehr <2maz>`\n\n- |Fix| The `feature_names` parameter of :func:`tree.plot_tree` now accepts any kind of\n  array-like instead of just a list. :pr:`27292` by :user:`Rahil Parikh <rprkh>`.\n\n.. _changes_1_3:\n\nVersion 1.3.0\n=============\n\n**June 2023**\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- |Enhancement| :meth:`multiclass.OutputCodeClassifier.predict` now uses a more\n  efficient pairwise distance reduction. As a consequence, the tie-breaking\n  strategy is different and thus the predicted labels may be different.\n  :pr:`25196` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Enhancement| The `fit_transform` method of :class:`decomposition.DictionaryLearning`\n  is more efficient but may produce different results as in previous versions when\n  `transform_algorithm` is not the same as `fit_algorithm` and the number of iterations\n  is small. 
:pr:`24871` by :user:`Omar Salman <OmarManzoor>`.\n\n- |Enhancement| The `sample_weight` parameter now will be used in centroids\n  initialization for :class:`cluster.KMeans`, :class:`cluster.BisectingKMeans`\n  and :class:`cluster.MiniBatchKMeans`.\n  This change will break backward compatibility, since numbers generated\n  from same random seeds will be different.\n  :pr:`25752` by :user:`Gleb Levitski <glevv>`,\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`,\n  :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Treat more consistently small values in the `W` and `H` matrices during the\n  `fit` and `transform` steps of :class:`decomposition.NMF` and\n  :class:`decomposition.MiniBatchNMF` which can produce different results than previous\n  versions. :pr:`25438` by :user:`Yotam Avidar-Constantini <yotamcons>`.\n\n- |Fix| :class:`decomposition.KernelPCA` may produce different results through\n  `inverse_transform` if `gamma` is `None`. Now it will be chosen correctly as\n  `1\/n_features` of the data that it is fitted on, while previously it might be\n  incorrectly chosen as `1\/n_features` of the data passed to `inverse_transform`.\n  A new attribute `gamma_` is provided for revealing the actual value of `gamma`\n  used each time the kernel is called.\n  :pr:`26337` by :user:`Yao Xiao <Charlie-XIAO>`.\n\nChanged displays\n----------------\n\n- |Enhancement| :class:`model_selection.LearningCurveDisplay` displays both the\n  train and test curves by default. You can set `score_type=\"test\"` to keep the\n  past behaviour.\n  :pr:`25120` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| :class:`model_selection.ValidationCurveDisplay` now accepts passing a\n  list to the `param_range` parameter.\n  :pr:`27311` by :user:`Arturo Amor <ArturoAmorQ>`.\n\nChanges impacting all modules\n-----------------------------\n\n- |Enhancement| The `get_feature_names_out` method of the following classes now\n  raises a `NotFittedError` if the instance is not fitted. 
This ensures the error is\n  consistent in all estimators with the `get_feature_names_out` method.\n\n  - :class:`impute.MissingIndicator`\n  - :class:`feature_extraction.DictVectorizer`\n  - :class:`feature_extraction.text.TfidfTransformer`\n  - :class:`feature_selection.GenericUnivariateSelect`\n  - :class:`feature_selection.RFE`\n  - :class:`feature_selection.RFECV`\n  - :class:`feature_selection.SelectFdr`\n  - :class:`feature_selection.SelectFpr`\n  - :class:`feature_selection.SelectFromModel`\n  - :class:`feature_selection.SelectFwe`\n  - :class:`feature_selection.SelectKBest`\n  - :class:`feature_selection.SelectPercentile`\n  - :class:`feature_selection.SequentialFeatureSelector`\n  - :class:`feature_selection.VarianceThreshold`\n  - :class:`kernel_approximation.AdditiveChi2Sampler`\n  - :class:`impute.IterativeImputer`\n  - :class:`impute.KNNImputer`\n  - :class:`impute.SimpleImputer`\n  - :class:`isotonic.IsotonicRegression`\n  - :class:`preprocessing.Binarizer`\n  - :class:`preprocessing.KBinsDiscretizer`\n  - :class:`preprocessing.MaxAbsScaler`\n  - :class:`preprocessing.MinMaxScaler`\n  - :class:`preprocessing.Normalizer`\n  - :class:`preprocessing.OrdinalEncoder`\n  - :class:`preprocessing.PowerTransformer`\n  - :class:`preprocessing.QuantileTransformer`\n  - :class:`preprocessing.RobustScaler`\n  - :class:`preprocessing.SplineTransformer`\n  - :class:`preprocessing.StandardScaler`\n  - :class:`random_projection.GaussianRandomProjection`\n  - :class:`random_projection.SparseRandomProjection`\n\n  The `NotFittedError` displays an informative message asking to fit the instance\n  with the appropriate arguments.\n\n  :pr:`25294`, :pr:`25308`, :pr:`25291`, :pr:`25367`, :pr:`25402`,\n  by :user:`John Pangas <jpangas>`, :user:`Rahil Parikh <rprkh>`,\n  and :user:`Alex Buzenet <albuzenet>`.\n\n- |Enhancement| Added a multi-threaded Cython routine to compute squared\n  Euclidean distances (sometimes followed by a fused reduction operation) for a\n  pair 
of datasets consisting of a sparse CSR matrix and a dense NumPy array.\n\n  This can improve the performance of the following functions and estimators:\n\n  - :func:`sklearn.metrics.pairwise_distances_argmin`\n  - :func:`sklearn.metrics.pairwise_distances_argmin_min`\n  - :class:`sklearn.cluster.AffinityPropagation`\n  - :class:`sklearn.cluster.Birch`\n  - :class:`sklearn.cluster.MeanShift`\n  - :class:`sklearn.cluster.OPTICS`\n  - :class:`sklearn.cluster.SpectralClustering`\n  - :func:`sklearn.feature_selection.mutual_info_regression`\n  - :class:`sklearn.neighbors.KNeighborsClassifier`\n  - :class:`sklearn.neighbors.KNeighborsRegressor`\n  - :class:`sklearn.neighbors.RadiusNeighborsClassifier`\n  - :class:`sklearn.neighbors.RadiusNeighborsRegressor`\n  - :class:`sklearn.neighbors.LocalOutlierFactor`\n  - :class:`sklearn.neighbors.NearestNeighbors`\n  - :class:`sklearn.manifold.Isomap`\n  - :class:`sklearn.manifold.LocallyLinearEmbedding`\n  - :class:`sklearn.manifold.TSNE`\n  - :func:`sklearn.manifold.trustworthiness`\n  - :class:`sklearn.semi_supervised.LabelPropagation`\n  - :class:`sklearn.semi_supervised.LabelSpreading`\n\n  A typical example of this performance improvement happens when passing a sparse\n  CSR matrix to the `predict` or `transform` method of estimators that rely on\n  a dense NumPy representation to store their fitted parameters (or the reverse).\n\n  For instance, :meth:`sklearn.neighbors.NearestNeighbors.kneighbors` is now up\n  to 2 times faster for this case on commonly available laptops.\n\n  :pr:`25044` by :user:`Julien Jerphanion <jjerphan>`.\n\n- |Enhancement| All estimators that internally rely on OpenMP multi-threading\n  (via Cython) now use a number of threads equal to the number of physical\n  (instead of logical) cores by default.
In the past, we observed that using as\n  many threads as logical cores on SMT hosts could sometimes cause severe\n  performance problems depending on the algorithms and the shape of the data.\n  Note that it is still possible to manually adjust the number of threads used\n  by OpenMP as documented in :ref:`parallelism`.\n\n  :pr:`26082` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>` and\n  :user:`Olivier Grisel <ogrisel>`.\n\nExperimental \/ Under Development\n--------------------------------\n\n- |MajorFeature| :ref:`Metadata routing <metadata_routing>`'s related base\n  methods are included in this release. This feature is only available via the\n  `enable_metadata_routing` feature flag which can be enabled using\n  :func:`sklearn.set_config` and :func:`sklearn.config_context`. For now this\n  feature is mostly useful for third party developers to prepare their code\n  base for metadata routing, and we strongly recommend that they also hide it\n  behind the same feature flag, rather than having it enabled by default.\n  :pr:`24027` by `Adrin Jalali`_, :user:`Benjamin Bossan <BenjaminBossan>`, and\n  :user:`Omar Salman <OmarManzoor>`.\n\nChangelog\n---------\n\n..\n    Entries should be grouped by module (in alphabetic order) and prefixed with\n    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,\n    |Fix| or |API| (see whats_new.rst for descriptions).\n    Entries should be ordered by those labels (e.g. 
|Fix| after |Efficiency|).\n    Changes not specific to a module should be listed under *Multiple Modules*\n    or *Miscellaneous*.\n    Entries should end with:\n    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.\n    where 123456 is the *pull request* number, not the issue number.\n\n`sklearn`\n.........\n\n- |Feature| Added a new option `skip_parameter_validation` to the function\n  :func:`sklearn.set_config` and context manager :func:`sklearn.config_context` that\n  allows skipping the validation of the parameters passed to estimators and public\n  functions. This can be useful to speed up the code but should be used with care\n  because it can lead to unexpected behaviors or raise obscure error messages when\n  setting invalid parameters.\n  :pr:`25815` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.base`\n...................\n\n- |Feature| A `__sklearn_clone__` protocol is now available to override the\n  default behavior of :func:`base.clone`. :pr:`24568` by `Thomas Fan`_.\n\n- |Fix| :class:`base.TransformerMixin` now keeps a namedtuple's class\n  if `transform` returns a namedtuple. :pr:`26121` by `Thomas Fan`_.\n\n:mod:`sklearn.calibration`\n..........................\n\n- |Fix| :class:`calibration.CalibratedClassifierCV` no longer enforces sample\n  alignment on `fit_params`. :pr:`25805` by `Adrin Jalali`_.\n\n:mod:`sklearn.cluster`\n......................\n\n- |MajorFeature| Added :class:`cluster.HDBSCAN`, a modern hierarchical density-based\n  clustering algorithm. Similarly to :class:`cluster.OPTICS`, it can be seen as a\n  generalization of :class:`cluster.DBSCAN` by allowing for hierarchical instead of flat\n  clustering; however, it varies in its approach from :class:`cluster.OPTICS`.
This\n  algorithm is very robust with respect to its hyperparameters' values and can\n  be used on a wide variety of data without much, if any, tuning.\n\n  This implementation is an adaptation from the original implementation of HDBSCAN in\n  `scikit-learn-contrib\/hdbscan <https:\/\/github.com\/scikit-learn-contrib\/hdbscan>`_,\n  by :user:`Leland McInnes <lmcinnes>` et al.\n\n  :pr:`26385` by :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| The `sample_weight` parameter is now used in centroid\n  initialization for :class:`cluster.KMeans`, :class:`cluster.BisectingKMeans`\n  and :class:`cluster.MiniBatchKMeans`.\n  This change will break backward compatibility, since numbers generated\n  from the same random seed will be different.\n  :pr:`25752` by :user:`Gleb Levitski <glevv>`,\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`,\n  :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| :class:`cluster.KMeans`, :class:`cluster.MiniBatchKMeans` and\n  :func:`cluster.k_means` now correctly handle the combination of `n_init=\"auto\"`\n  and `init` being an array-like, running one initialization in that case.\n  :pr:`26657` by :user:`Binesh Bannerjee <bnsh>`.\n\n- |API| The `sample_weight` parameter in `predict` for\n  :meth:`cluster.KMeans.predict` and :meth:`cluster.MiniBatchKMeans.predict`\n  is now deprecated and will be removed in v1.5.\n  :pr:`25251` by :user:`Gleb Levitski <glevv>`.\n\n- |API| The `Xred` argument in :func:`cluster.FeatureAgglomeration.inverse_transform`\n  is renamed to `Xt` and will be removed in v1.5.
:pr:`26503` by `Adrin Jalali`_.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| :class:`compose.ColumnTransformer` raises an informative error when the individual\n  transformers of `ColumnTransformer` output pandas dataframes with indexes that are\n  not consistent with each other and the output is configured to be pandas.\n  :pr:`26286` by `Thomas Fan`_.\n\n- |Fix| :class:`compose.ColumnTransformer` correctly sets the output of the\n  remainder when `set_output` is called. :pr:`26323` by `Thomas Fan`_.\n\n:mod:`sklearn.covariance`\n.........................\n\n- |Fix| Allows `alpha=0` in :class:`covariance.GraphicalLasso` to be\n  consistent with :func:`covariance.graphical_lasso`.\n  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.\n\n- |Fix| :func:`covariance.empirical_covariance` now gives an informative\n  error message when input is not appropriate.\n  :pr:`26108` by :user:`Quentin Barth\u00e9lemy <qbarthelemy>`.\n\n- |API| Deprecates `cov_init` in :func:`covariance.graphical_lasso` in 1.3 since\n  the parameter has no effect. 
It will be removed in 1.5.\n  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.\n\n- |API| Adds `costs_` fitted attribute in :class:`covariance.GraphicalLasso` and\n  :class:`covariance.GraphicalLassoCV`.\n  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.\n\n- |API| Adds `covariance` parameter in :class:`covariance.GraphicalLasso`.\n  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.\n\n- |API| Adds `eps` parameter in :class:`covariance.GraphicalLasso`,\n  :func:`covariance.graphical_lasso`, and :class:`covariance.GraphicalLassoCV`.\n  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Enhancement| Allows overwriting the parameters used to open the ARFF file using\n  the parameter `read_csv_kwargs` in :func:`datasets.fetch_openml` when using the\n  pandas parser.\n  :pr:`26433` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| :func:`datasets.fetch_openml` returns improved data types when\n  `as_frame=True` and `parser=\"liac-arff\"`. :pr:`26386` by `Thomas Fan`_.\n\n- |Fix| Following the ARFF specs, only the marker `\"?\"` is now considered a missing\n  value when opening ARFF files fetched using :func:`datasets.fetch_openml` when using\n  the pandas parser.
The parameter `read_csv_kwargs` allows overwriting this behaviour.\n  :pr:`26551` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| :func:`datasets.fetch_openml` will consistently use `np.nan` as the missing marker\n  with both parsers `\"pandas\"` and `\"liac-arff\"`.\n  :pr:`26579` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |API| The `data_transposed` argument of :func:`datasets.make_sparse_coded_signal`\n  is deprecated and will be removed in v1.5.\n  :pr:`25784` by :user:`J\u00e9r\u00e9mie du Boisberranger`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Efficiency| :class:`decomposition.MiniBatchDictionaryLearning` and\n  :class:`decomposition.MiniBatchSparsePCA` are now faster for small batch sizes by\n  avoiding duplicate validations.\n  :pr:`25490` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Enhancement| :class:`decomposition.DictionaryLearning` now accepts the parameter\n  `callback` for consistency with the function :func:`decomposition.dict_learning`.\n  :pr:`24871` by :user:`Omar Salman <OmarManzoor>`.\n\n- |Fix| Treat small values in the `W` and `H` matrices more consistently during the\n  `fit` and `transform` steps of :class:`decomposition.NMF` and\n  :class:`decomposition.MiniBatchNMF`, which can produce different results than previous\n  versions. :pr:`25438` by :user:`Yotam Avidar-Constantini <yotamcons>`.\n\n- |API| The `W` argument in :func:`decomposition.NMF.inverse_transform` and\n  :class:`decomposition.MiniBatchNMF.inverse_transform` is renamed to `Xt` and\n  will be removed in v1.5. :pr:`26503` by `Adrin Jalali`_.\n\n:mod:`sklearn.discriminant_analysis`\n....................................\n\n- |Enhancement| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now\n  supports `PyTorch <https:\/\/pytorch.org\/>`__. See\n  :ref:`array_api` for more details.
:pr:`25956` by `Thomas Fan`_.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Feature| :class:`ensemble.HistGradientBoostingRegressor` now supports\n  the Gamma deviance loss via `loss=\"gamma\"`.\n  Using the Gamma deviance as loss function comes in handy for modelling\n  skewed-distributed, strictly positive-valued targets.\n  :pr:`22409` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Feature| Compute a custom out-of-bag score by passing a callable to\n  :class:`ensemble.RandomForestClassifier`, :class:`ensemble.RandomForestRegressor`,\n  :class:`ensemble.ExtraTreesClassifier` and :class:`ensemble.ExtraTreesRegressor`.\n  :pr:`25177` by `Tim Head`_.\n\n- |Feature| :class:`ensemble.GradientBoostingClassifier` now exposes\n  out-of-bag scores via the `oob_scores_` or `oob_score_` attributes.\n  :pr:`24882` by :user:`Ashwin Mathur <awinml>`.\n\n- |Efficiency| :class:`ensemble.IsolationForest` predict time is now faster\n  (typically by a factor of 8 or more). Internally, the estimator now precomputes\n  decision path lengths per tree at `fit` time. It is therefore not possible\n  to load an estimator trained with scikit-learn 1.2 to make it predict with\n  scikit-learn 1.3: retraining with scikit-learn 1.3 is required.\n  :pr:`25186` by :user:`Felipe Breve Siola <fsiola>`.\n\n- |Efficiency| :class:`ensemble.RandomForestClassifier` and\n  :class:`ensemble.RandomForestRegressor` with `warm_start=True` now only\n  recompute out-of-bag scores when there are actually more `n_estimators`\n  in subsequent `fit` calls.\n  :pr:`26318` by :user:`Joshua Choo Yun Keat <choo8>`.\n\n- |Enhancement| :class:`ensemble.BaggingClassifier` and\n  :class:`ensemble.BaggingRegressor` expose the `allow_nan` tag from the\n  underlying estimator.
:pr:`25506` by `Thomas Fan`_.\n\n- |Fix| :meth:`ensemble.RandomForestClassifier.fit` sets `max_samples = 1`\n  when `max_samples` is a float and `round(n_samples * max_samples) < 1`.\n  :pr:`25601` by :user:`Jan Fidor <JanFidor>`.\n\n- |Fix| :meth:`ensemble.IsolationForest.fit` no longer warns about missing\n  feature names when called with `contamination` other than `\"auto\"` on a pandas\n  dataframe.\n  :pr:`25931` by :user:`Yao Xiao <Charlie-XIAO>`.\n\n- |Fix| :class:`ensemble.HistGradientBoostingRegressor` and\n  :class:`ensemble.HistGradientBoostingClassifier` treat negative values for\n  categorical features consistently as missing values, following LightGBM's and\n  pandas' conventions.\n  :pr:`25629` by `Thomas Fan`_.\n\n- |Fix| Fix deprecation of `base_estimator` in :class:`ensemble.AdaBoostClassifier`\n  and :class:`ensemble.AdaBoostRegressor` that was introduced in :pr:`23819`.\n  :pr:`26242` by :user:`Marko Toplak <markotoplak>`.\n\n:mod:`sklearn.exceptions`\n.........................\n\n- |Feature| Added :class:`exceptions.InconsistentVersionWarning` which is raised\n  when a scikit-learn estimator is unpickled with a scikit-learn version that is\n  inconsistent with the scikit-learn version the estimator was pickled with.\n  :pr:`25297` by `Thomas Fan`_.\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |API| :class:`feature_extraction.image.PatchExtractor` now follows the\n  transformer API of scikit-learn. This class is defined as a stateless transformer,\n  meaning that it is not required to call `fit` before calling `transform`.\n  Parameter validation only happens at `fit` time.\n  :pr:`24230` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Enhancement| All selectors in :mod:`sklearn.feature_selection` will preserve\n  a DataFrame's dtype when transformed.
:pr:`25102` by `Thomas Fan`_.\n\n- |Fix| :class:`feature_selection.SequentialFeatureSelector`'s `cv` parameter\n  now supports generators. :pr:`25973` by :user:`Yao Xiao <Charlie-XIAO>`.\n\n:mod:`sklearn.impute`\n.....................\n\n- |Enhancement| Added the parameter `fill_value` to :class:`impute.IterativeImputer`.\n  :pr:`25232` by :user:`Thijs van Weezel <ValueInvestorThijs>`.\n\n- |Fix| :class:`impute.IterativeImputer` now correctly preserves the Pandas\n  Index when `set_config(transform_output=\"pandas\")` is set. :pr:`26454` by `Thomas Fan`_.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |Enhancement| Added support for `sample_weight` in\n  :func:`inspection.partial_dependence` and\n  :meth:`inspection.PartialDependenceDisplay.from_estimator`. This allows for\n  weighted averaging when aggregating over each value of the grid on which the\n  inspection is performed. The option is only available when `method` is set to `brute`.\n  :pr:`25209` and :pr:`26644` by :user:`Carlo Lemos <vitaliset>`.\n\n- |API| :func:`inspection.partial_dependence` returns a :class:`utils.Bunch` with\n  a new key: `grid_values`.
The `values` key is deprecated in favor of `grid_values`\n  and the `values` key will be removed in 1.5.\n  :pr:`21809` and :pr:`25732` by `Thomas Fan`_.\n\n:mod:`sklearn.kernel_approximation`\n...................................\n\n- |Fix| :class:`kernel_approximation.AdditiveChi2Sampler` is now stateless.\n  The `sample_interval_` attribute is deprecated and will be removed in 1.5.\n  :pr:`25190` by :user:`Vincent Maladi\u00e8re <Vincent-Maladiere>`.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Efficiency| Avoid data scaling when `sample_weight=None` and other\n  unnecessary data copies and unexpected dense to sparse data conversion in\n  :class:`linear_model.LinearRegression`.\n  :pr:`26207` by :user:`Olivier Grisel <ogrisel>`.\n\n- |Enhancement| :class:`linear_model.SGDClassifier`,\n  :class:`linear_model.SGDRegressor` and :class:`linear_model.SGDOneClassSVM`\n  now preserve dtype for `numpy.float32`.\n  :pr:`25587` by :user:`Omar Salman <OmarManzoor>`.\n\n- |Enhancement| The `n_iter_` attribute has been included in\n  :class:`linear_model.ARDRegression` to expose the actual number of iterations\n  required to reach the stopping criterion.\n  :pr:`25697` by :user:`John Pangas <jpangas>`.\n\n- |Fix| Use a more robust criterion to detect convergence of\n  :class:`linear_model.LogisticRegression` with `penalty=\"l1\"` and `solver=\"liblinear\"`\n  on linearly separable problems.\n  :pr:`25214` by `Tom Dupre la Tour`_.\n\n- |Fix| Fix a crash when calling `fit` on\n  :class:`linear_model.LogisticRegression` with `solver=\"newton-cholesky\"` and\n  `max_iter=0` which failed to inspect the state of the model prior to the\n  first parameter update.\n  :pr:`26653` by :user:`Olivier Grisel <ogrisel>`.\n\n- |API| Deprecates `n_iter` in favor of `max_iter` in\n  :class:`linear_model.BayesianRidge` and :class:`linear_model.ARDRegression`.\n  `n_iter` will be removed in scikit-learn 1.5. 
This change makes those\n  estimators consistent with the rest of the estimators.\n  :pr:`25697` by :user:`John Pangas <jpangas>`.\n\n:mod:`sklearn.manifold`\n.......................\n\n- |Fix| :class:`manifold.Isomap` now correctly preserves the Pandas\n  Index when `set_config(transform_output=\"pandas\")` is set. :pr:`26454` by `Thomas Fan`_.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Feature| Adds `zero_division=np.nan` to multiple classification metrics:\n  :func:`metrics.precision_score`, :func:`metrics.recall_score`,\n  :func:`metrics.f1_score`, :func:`metrics.fbeta_score`,\n  :func:`metrics.precision_recall_fscore_support`,\n  :func:`metrics.classification_report`. When `zero_division=np.nan` and there is a\n  zero division, the metric is undefined and is excluded from averaging. When not used\n  for averages, the value returned is `np.nan`.\n  :pr:`25531` by :user:`Marc Torrellas Socastro <marctorsoc>`.\n\n- |Feature| :func:`metrics.average_precision_score` now supports the\n  multiclass case.\n  :pr:`17388` by :user:`Geoffrey Bolmier <gbolmier>` and\n  :pr:`24769` by :user:`Ashwin Mathur <awinml>`.\n\n- |Efficiency| The computation of the expected mutual information in\n  :func:`metrics.adjusted_mutual_info_score` is now faster when the number of\n  unique labels is large and its memory usage is reduced in general.\n  :pr:`25713` by :user:`Kshitij Mathur <Kshitij68>`,\n  :user:`Guillaume Lemaitre <glemaitre>`, :user:`Omar Salman <OmarManzoor>` and\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Enhancement| :func:`metrics.silhouette_samples` now accepts a sparse\n  matrix of pairwise distances between samples, or a feature array.\n  :pr:`18723` by :user:`Sahil Gupta <sahilgupta2105>` and\n  :pr:`24677` by :user:`Ashwin Mathur <awinml>`.\n\n- |Enhancement| A new parameter `drop_intermediate` was added to\n  :func:`metrics.precision_recall_curve`,\n  :func:`metrics.PrecisionRecallDisplay.from_estimator`,\n  
:func:`metrics.PrecisionRecallDisplay.from_predictions`,\n  which drops some suboptimal thresholds to create lighter precision-recall\n  curves.\n  :pr:`24668` by :user:`dberenbaum`.\n\n- |Enhancement| :meth:`metrics.RocCurveDisplay.from_estimator` and\n  :meth:`metrics.RocCurveDisplay.from_predictions` now accept two new keywords,\n  `plot_chance_level` and `chance_level_kw`, to plot the baseline chance\n  level. This line is exposed in the `chance_level_` attribute.\n  :pr:`25987` by :user:`Yao Xiao <Charlie-XIAO>`.\n\n- |Enhancement| :meth:`metrics.PrecisionRecallDisplay.from_estimator` and\n  :meth:`metrics.PrecisionRecallDisplay.from_predictions` now accept two new\n  keywords, `plot_chance_level` and `chance_level_kw`, to plot the baseline\n  chance level. This line is exposed in the `chance_level_` attribute.\n  :pr:`26019` by :user:`Yao Xiao <Charlie-XIAO>`.\n\n- |Fix| :func:`metrics.pairwise.manhattan_distances` now supports readonly sparse datasets.\n  :pr:`25432` by :user:`Julien Jerphanion <jjerphan>`.\n\n- |Fix| Fixed :func:`metrics.classification_report` so that empty input will return\n  `np.nan`. Previously, \"macro avg\" and \"weighted avg\" would inconsistently return\n  e.g. `f1-score=np.nan` and `f1-score=0.0`. Now, they\n  both return `np.nan`.\n  :pr:`25531` by :user:`Marc Torrellas Socastro <marctorsoc>`.\n\n- |Fix| :func:`metrics.ndcg_score` now gives a meaningful error message for input of\n  length 1.\n  :pr:`25672` by :user:`Lene Preuss <lene>` and :user:`Wei-Chun Chu <wcchu>`.\n\n- |Fix| :func:`metrics.log_loss` raises a warning if the values of the parameter\n  `y_pred` are not normalized, instead of actually normalizing them in the metric.\n  Starting from 1.5 this will raise an error.\n  :pr:`25299` by :user:`Omar Salman <OmarManzoor>`.\n\n- |Fix| In :func:`metrics.roc_curve`, use the threshold value `np.inf` instead of\n  the arbitrary `max(y_score) + 1`.
This threshold is associated with the ROC curve point\n  `tpr=0` and `fpr=0`.\n  :pr:`26194` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| The `'matching'` metric has been removed when using SciPy>=1.9\n  to be consistent with `scipy.spatial.distance` which does not support\n  `'matching'` anymore.\n  :pr:`26264` by :user:`Barata T. Onggo <magnusbarata>`\n\n- |API| The `eps` parameter of the :func:`metrics.log_loss` has been deprecated and\n  will be removed in 1.5. :pr:`25299` by :user:`Omar Salman <OmarManzoor>`.\n\n:mod:`sklearn.gaussian_process`\n...............................\n\n- |Fix| :class:`gaussian_process.GaussianProcessRegressor` has a new argument\n  `n_targets`, which is used to decide the number of outputs when sampling\n  from the prior distributions. :pr:`23099` by :user:`Zhehao Liu <MaxwellLZH>`.\n\n:mod:`sklearn.mixture`\n......................\n\n- |Efficiency| :class:`mixture.GaussianMixture` is more efficient now and will bypass\n  unnecessary initialization if the weights, means, and precisions are\n  given by users.\n  :pr:`26021` by :user:`Jiawei Zhang <jiawei-zhang-a>`.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |MajorFeature| Added the class :class:`model_selection.ValidationCurveDisplay`\n  that allows easy plotting of validation curves obtained by the function\n  :func:`model_selection.validation_curve`.\n  :pr:`25120` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |API| The parameter `log_scale` in the class\n  :class:`model_selection.LearningCurveDisplay` has been deprecated in 1.3 and\n  will be removed in 1.5. 
The default scale can be overridden by setting it\n  directly on the `ax` object and will be set automatically from the spacing\n  of the data points otherwise.\n  :pr:`25120` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Enhancement| :func:`model_selection.cross_validate` accepts a new parameter\n  `return_indices` to return the train-test indices of each cv split.\n  :pr:`25659` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.multioutput`\n..........................\n\n- |Fix| :func:`getattr` on :meth:`multioutput.MultiOutputRegressor.partial_fit`\n  and :meth:`multioutput.MultiOutputClassifier.partial_fit` now correctly raise\n  an `AttributeError` if done before calling `fit`. :pr:`26333` by `Adrin\n  Jalali`_.\n\n:mod:`sklearn.naive_bayes`\n..........................\n\n- |Fix| :class:`naive_bayes.GaussianNB` no longer raises a `ZeroDivisionError`\n  when the provided `sample_weight` reduces the problem to a single class in `fit`.\n  :pr:`24140` by :user:`Jonathan Ohayon <Johayon>` and :user:`Chiara Marmo <cmarmo>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Enhancement| The performance of :meth:`neighbors.KNeighborsClassifier.predict`\n  and of :meth:`neighbors.KNeighborsClassifier.predict_proba` has been improved\n  when `n_neighbors` is large and `algorithm=\"brute\"` with non-Euclidean metrics.\n  :pr:`24076` by :user:`Meekail Zain <micky774>`, :user:`Julien Jerphanion <jjerphan>`.\n\n- |Fix| Remove support for `KulsinskiDistance` in :class:`neighbors.BallTree`. This\n  dissimilarity is not a metric and cannot be supported by the BallTree.\n  :pr:`25417` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |API| The support for metrics other than `euclidean` and `manhattan` and for\n  callables in :class:`neighbors.NearestNeighbors` is deprecated and will be removed in\n  version 1.5.
:pr:`24083` by :user:`Valentin Laurent <Valentin-Laurent>`.\n\n:mod:`sklearn.neural_network`\n.............................\n\n- |Fix| :class:`neural_network.MLPRegressor` and :class:`neural_network.MLPClassifier`\n  report the correct `n_iter_` when `warm_start=True`. It corresponds to the number\n  of iterations performed on the current call to `fit` instead of the total number\n  of iterations performed since the initialization of the estimator.\n  :pr:`25443` by :user:`Marvin Krawutschke <Marvvxi>`.\n\n:mod:`sklearn.pipeline`\n.......................\n\n- |Feature| :class:`pipeline.FeatureUnion` can now use indexing notation (e.g.\n  `feature_union[\"scalar\"]`) to access transformers by name. :pr:`25093` by\n  `Thomas Fan`_.\n\n- |Feature| :class:`pipeline.FeatureUnion` can now access the\n  `feature_names_in_` attribute if the `X` value seen during `.fit` has a\n  `columns` attribute and all columns are strings, e.g. when `X` is a\n  `pandas.DataFrame`.\n  :pr:`25220` by :user:`Ian Thompson <it176131>`.\n\n- |Fix| :meth:`pipeline.Pipeline.fit_transform` now raises an `AttributeError`\n  if the last step of the pipeline does not support `fit_transform`.\n  :pr:`26325` by `Adrin Jalali`_.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |MajorFeature| Introduces :class:`preprocessing.TargetEncoder` which is a\n  categorical encoding based on target mean conditioned on the value of the\n  category. :pr:`25334` by `Thomas Fan`_.\n\n- |Feature| :class:`preprocessing.OrdinalEncoder` now supports grouping\n  infrequent categories into a single feature. Grouping infrequent categories\n  is enabled by specifying how to select infrequent categories with\n  `min_frequency` or `max_categories`. :pr:`25677` by `Thomas Fan`_.\n\n- |Enhancement| :class:`preprocessing.PolynomialFeatures` now calculates the\n  number of expanded terms a-priori when dealing with sparse `csr` matrices\n  in order to optimize the choice of `dtype` for `indices` and `indptr`.
It\n  can now output `csr` matrices with `np.int32` `indices\/indptr` components\n  when there are few enough elements, and will automatically use `np.int64`\n  for sufficiently large matrices.\n  :pr:`20524` by :user:`niuk-a <niuk-a>` and\n  :pr:`23731` by :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| A new parameter `sparse_output` was added to\n  :class:`preprocessing.SplineTransformer`, available as of SciPy 1.8. If\n  `sparse_output=True`, :class:`preprocessing.SplineTransformer` returns a sparse\n  CSR matrix. :pr:`24145` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Enhancement| Adds a `feature_name_combiner` parameter to\n  :class:`preprocessing.OneHotEncoder`. This specifies a custom callable to\n  create feature names to be returned by\n  :meth:`preprocessing.OneHotEncoder.get_feature_names_out`. The callable\n  combines input arguments `(input_feature, category)` to a string.\n  :pr:`22506` by :user:`Mario Kostelac <mariokostelac>`.\n\n- |Enhancement| Added support for `sample_weight` in\n  :class:`preprocessing.KBinsDiscretizer`. This allows specifying the parameter\n  `sample_weight` for each sample to be used while fitting. The option is only\n  available when `strategy` is set to `quantile` or `kmeans`.\n  :pr:`24935` by :user:`Seladus <seladus>`, :user:`Guillaume Lemaitre <glemaitre>`, and\n  :user:`Dea Mar\u00eda L\u00e9on <deamarialeon>`, :pr:`25257` by :user:`Gleb Levitski <glevv>`.\n\n- |Enhancement| Subsampling through the `subsample` parameter can now be used in\n  :class:`preprocessing.KBinsDiscretizer` regardless of the strategy used.\n  :pr:`26424` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| :class:`preprocessing.PowerTransformer` now correctly preserves the Pandas\n  Index when `set_config(transform_output=\"pandas\")` is set.
:pr:`26454` by `Thomas Fan`_.\n\n- |Fix| :class:`preprocessing.PowerTransformer` now correctly raises an error when\n  using `method=\"box-cox\"` on data with a constant `np.nan` column.\n  :pr:`26400` by :user:`Yao Xiao <Charlie-XIAO>`.\n\n- |Fix| :class:`preprocessing.PowerTransformer` with `method=\"yeo-johnson\"` now leaves\n  constant features unchanged instead of transforming with an arbitrary value for\n  the `lambdas_` fitted parameter.\n  :pr:`26566` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |API| The default value of the `subsample` parameter of\n  :class:`preprocessing.KBinsDiscretizer` will change from `None` to `200_000` in\n  version 1.5 when `strategy=\"kmeans\"` or `strategy=\"uniform\"`.\n  :pr:`26424` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.svm`\n..................\n\n- |API| The `dual` parameter now accepts the `auto` option for\n  :class:`svm.LinearSVC` and :class:`svm.LinearSVR`.\n  :pr:`26093` by :user:`Gleb Levitski <glevv>`.\n\n:mod:`sklearn.tree`\n...................\n\n- |MajorFeature| :class:`tree.DecisionTreeRegressor` and\n  :class:`tree.DecisionTreeClassifier` support missing values when\n  `splitter='best'` and criterion is `gini`, `entropy`, or `log_loss`\n  for classification, or `squared_error`, `friedman_mse`, or `poisson`\n  for regression. :pr:`23595`, :pr:`26376` by `Thomas Fan`_.\n\n- |Enhancement| Adds a `class_names` parameter to\n  :func:`tree.export_text`.
This allows specifying the parameter `class_names`\n  for each target class in ascending numerical order.\n  :pr:`25387` by :user:`William M <Akbeeh>` and :user:`crispinlogan <crispinlogan>`.\n\n- |Fix| :func:`tree.export_graphviz` and :func:`tree.export_text` now accept\n  `feature_names` and `class_names` as array-like rather than lists.\n  :pr:`26289` by :user:`Yao Xiao <Charlie-XIAO>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |Fix| Fixes :func:`utils.check_array` to properly convert pandas\n  extension arrays. :pr:`25813` and :pr:`26106` by `Thomas Fan`_.\n\n- |Fix| :func:`utils.check_array` now supports pandas DataFrames with\n  extension arrays and object dtypes by returning an ndarray with object dtype.\n  :pr:`25814` by `Thomas Fan`_.\n\n- |API| `utils.estimator_checks.check_transformers_unfitted_stateless` has been\n  introduced to ensure stateless transformers don't raise `NotFittedError`\n  during `transform` with no prior call to `fit` or `fit_transform`.\n  :pr:`25190` by :user:`Vincent Maladi\u00e8re <Vincent-Maladiere>`.\n\n- |API| A `FutureWarning` is now raised when instantiating a class which inherits from\n  a deprecated base class (i.e. decorated by :class:`utils.deprecated`) and which\n  overrides the `__init__` method.\n  :pr:`25733` by :user:`Brigitta Sip\u0151cz <bsipocz>` and\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.semi_supervised`\n..............................\n\n- |Enhancement| :meth:`semi_supervised.LabelSpreading.fit` and\n  :meth:`semi_supervised.LabelPropagation.fit` now accept sparse matrices.\n  :pr:`19664` by :user:`Kaushik Amar Das <cozek>`.\n\nMiscellaneous\n.............\n\n- |Enhancement| Replace obsolete exceptions `EnvironmentError`, `IOError` and\n  `WindowsError`.\n  :pr:`26466` by :user:`Dimitri Papadopoulos ORfanos <DimitriPapadopoulos>`.\n\n.. 
rubric:: Code and documentation contributors\n\nThanks to everyone who has contributed to the maintenance and improvement of\nthe project since version 1.2, including:\n\n2357juan, Abhishek Singh Kushwah, Adam Handke, Adam Kania, Adam Li, adienes,\nAdmir Demiraj, adoublet, Adrin Jalali, A.H.Mansouri, Ahmedbgh, Ala-Na, Alex\nBuzenet, AlexL, Ali H. El-Kassas, amay, Andr\u00e1s Simon, Andr\u00e9 Pedersen, Andrew\nWang, Ankur Singh, annegnx, Ansam Zedan, Anthony22-dev, Artur Hermano, Arturo\nAmor, as-90, ashah002, Ashish Dutt, Ashwin Mathur, AymericBasset, Azaria\nGebremichael, Barata Tripramudya Onggo, Benedek Harsanyi, Benjamin Bossan,\nBharat Raghunathan, Binesh Bannerjee, Boris Feld, Brendan Lu, Brevin Kunde,\ncache-missing, Camille Troillard, Carla J, carlo, Carlo Lemos, c-git, Changyao\nChen, Chiara Marmo, Christian Lorentzen, Christian Veenhuis, Christine P. Chai,\ncrispinlogan, Da-Lan, DanGonite57, Dave Berenbaum, davidblnc, david-cortes,\nDayne, Dea Mar\u00eda L\u00e9on, Denis, Dimitri Papadopoulos Orfanos, Dimitris\nLitsidis, Dmitry Nesterov, Dominic Fox, Dominik Prodinger, Edern, Ekaterina\nButyugina, Elabonga Atuo, Emir, farhan khan, Felipe Siola, futurewarning, Gael\nVaroquaux, genvalen, Gleb Levitski, Guillaume Lemaitre, gunesbayir, Haesun\nPark, hujiahong726, i-aki-y, Ian Thompson, Ido M, Ily, Irene, Jack McIvor,\njakirkham, James Dean, JanFidor, Jarrod Millman, JB Mountford, J\u00e9r\u00e9mie du\nBoisberranger, Jessicakk0711, Jiawei Zhang, Joey Ortiz, JohnathanPi, John\nPangas, Joshua Choo Yun Keat, Joshua Hedlund, JuliaSchoepp, Julien Jerphanion,\njygerardy, ka00ri, Kaushik Amar Das, Kento Nozawa, Kian Eliasi, Kilian Kluge,\nLene Preuss, Linus, Logan Thomas, Loic Esteve, Louis Fouquet, Lucy Liu, Madhura\nJayaratne, Marc Torrellas Socastro, Maren Westermann, Mario Kostelac, Mark\nHarfouche, Marko Toplak, Marvin Krawutschke, Masanori Kanazu, mathurinm, Matt\nHaberland, Max Halford, maximeSaur, Maxwell Liu, m. 
bou, mdarii, Meekail Zain,\nMikhail Iljin, murezzda, Nawazish Alam, Nicola Fanelli, Nightwalkx, Nikolay\nPetrov, Nishu Choudhary, NNLNR, npache, Olivier Grisel, Omar Salman, ouss1508,\nPAB, Pandata, partev, Peter Piontek, Phil, pnucci, Pooja M, Pooja Subramaniam,\nprecondition, Quentin Barth\u00e9lemy, Rafal Wojdyla, Raghuveer Bhat, Rahil Parikh,\nRalf Gommers, ram vikram singh, Rushil Desai, Sadra Barikbin, SANJAI_3, Sashka\nWarner, Scott Gigante, Scott Gustafson, searchforpassion, Seoeun\nHong, Shady el Gewily, Shiva chauhan, Shogo Hida, Shreesha Kumar Bhat, sonnivs,\nSortofamudkip, Stanislav (Stanley) Modrak, Stefanie Senger, Steven Van\nVaerenbergh, Tabea Kossen, Th\u00e9ophile Baranger, Thijs van Weezel, Thomas A\nCaswell, Thomas Germer, Thomas J. Fan, Tim Head, Tim P, Tom Dupr\u00e9 la Tour,\ntomiock, tspeng, Valentin Laurent, Veghit, VIGNESH D, Vijeth Moudgalya, Vinayak\nMehta, Vincent M, Vincent-violet, Vyom Pathak, William M, windiana42, Xiao\nYuan, Yao Xiao, Yaroslav Halchenko, Yotam Avidar-Constantini, Yuchen Zhou,\nYusuf Raji, zeeshan lone","site":"scikit-learn","answers_cleaned":"   include    contributors rst     currentmodule   sklearn      release notes 1 3               Version 1 3              For a short description of the main highlights of the release  please refer to  ref  sphx glr auto examples release highlights plot release highlights 1 3 0 py       include   changelog legend inc      changes 1 3 2   Version 1 3 2                  October 2023    Changelog             mod  sklearn datasets                              Fix  All dataset fetchers now accept  data home  as any object that implements   the  class  os PathLike  interface  for instance   class  pathlib Path      pr  27468  by  user  Yao Xiao  Charlie XIAO      mod  sklearn decomposition                                   Fix  Fixes a bug in  class  decomposition KernelPCA  by forcing the output of   the internal  class  preprocessing KernelCenterer  to be a default array  When the  
`arpack` solver is used, it expects an array with a `dtype` attribute.
  :pr:`27583` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.metrics`
......................

- |Fix| Fixes a bug for metrics using `zero_division=np.nan`
  (e.g. :func:`metrics.precision_score`) within a parallel loop
  (e.g. :func:`model_selection.cross_val_score`) where the singleton for
  `np.nan` will be different in the sub-processes.
  :pr:`27573` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.tree`
...................

- |Fix| Do not leak data via non-initialized memory in decision tree pickle
  files and make the generation of those files deterministic.
  :pr:`27580` by :user:`Loïc Estève <lesteve>`.

.. _changes_1_3_1:

Version 1.3.1
=============

**September 2023**

Changed models
--------------

The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.

- |Fix| Ridge models with `solver="sparse_cg"` may have slightly different
  results with scipy>=1.12, because of an underlying change in the scipy solver
  (see `scipy#18488 <https://github.com/scipy/scipy/pull/18488>`_ for more
  details).
  :pr:`26814` by :user:`Loïc Estève <lesteve>`

Changes impacting all modules
-----------------------------

- |Fix| The `set_output` API correctly works with list input. :pr:`27044` by
  `Thomas Fan`_.

Changelog
---------

:mod:`sklearn.calibration`
..........................

- |Fix| :class:`calibration.CalibratedClassifierCV` can now handle models that
  produce large prediction scores. Before it was numerically unstable.
  :pr:`26913` by :user:`Omar Salman <OmarManzoor>`.

:mod:`sklearn.cluster`
......................

- |Fix| :class:`cluster.BisectingKMeans` could crash when predicting on data
  with a different scale than the data used to fit the model.
  :pr:`27167` by
`Olivier Grisel`_.

- |Fix| :class:`cluster.BisectingKMeans` now works with data that has a single
  feature.
  :pr:`27243` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.cross_decomposition`
..................................

- |Fix| :class:`cross_decomposition.PLSRegression` now automatically ravels the
  output of `predict` if fitted with one dimensional `y`.
  :pr:`26602` by :user:`Yao Xiao <Charlie-XIAO>`.

:mod:`sklearn.ensemble`
.......................

- |Fix| Fix a bug in :class:`ensemble.AdaBoostClassifier` with
  `algorithm="SAMME"` where the decision function of each weak learner should
  be symmetric (i.e. the sum of the scores should sum to zero for a sample).
  :pr:`26521` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.feature_selection`
................................

- |Fix| :func:`feature_selection.mutual_info_regression` now correctly computes
  the result when `X` is of integer dtype.
  :pr:`26748` by :user:`Yao Xiao <Charlie-XIAO>`.

:mod:`sklearn.impute`
.....................

- |Fix| :class:`impute.KNNImputer` now correctly adds a missing indicator
  column in `transform` when `add_indicator` is set to `True` and missing
  values are observed during `fit`.
  :pr:`26600` by :user:`Shreesha Kumar Bhat <Shreesha3112>`.

:mod:`sklearn.metrics`
......................

- |Fix| Scorers used with :func:`metrics.get_scorer` now properly handle a
  multilabel indicator matrix.
  :pr:`27002` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.mixture`
......................

- |Fix| The initialization of :class:`mixture.GaussianMixture` from
  user-provided `precisions_init` for `covariance_type` of `full` or `tied`
  was not correct and has been fixed.
  :pr:`26416` by :user:`Yang Tao <mchikyt3>`.

:mod:`sklearn.neighbors`
........................

- |Fix| :meth:`neighbors.KNeighborsClassifier.predict` no longer raises an
  exception for `pandas.DataFrame` input.
  :pr:`26772` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| Reintroduce `sklearn.neighbors.BallTree.valid_metrics` and
  `sklearn.neighbors.KDTree.valid_metrics` as public class attributes.
  :pr:`26754` by :user:`Julien Jerphanion <jjerphan>`.

- |Fix| :class:`sklearn.model_selection.HalvingRandomSearchCV` no longer raises
  when the input to the `param_distributions` parameter is a list of dicts.
  :pr:`26893` by :user:`Stefanie Senger <StefanieSenger>`.

- |Fix| Neighbors based estimators now correctly work when `metric="minkowski"`
  and the metric parameter `p` is in the range `0 < p < 1`, regardless of the
  `dtype` of `X`.
  :pr:`26760` by :user:`Shreesha Kumar Bhat <Shreesha3112>`.

:mod:`sklearn.preprocessing`
............................

- |Fix| :class:`preprocessing.LabelEncoder` correctly accepts `y` as a keyword
  argument. :pr:`26940` by `Thomas Fan`_.

- |Fix| :class:`preprocessing.OneHotEncoder` shows a more informative error
  message when `sparse_output=True` and the output is configured to be pandas.
  :pr:`26931` by `Thomas Fan`_.

:mod:`sklearn.tree`
...................

- |Fix| :func:`tree.plot_tree` now accepts `class_names=True` as documented.
  :pr:`26903` by :user:`Thomas Roehr <2maz>`.

- |Fix| The `feature_names` parameter of :func:`tree.plot_tree` now accepts
  any kind of array-like instead of just a list.
  :pr:`27292` by :user:`Rahil Parikh <rprkh>`.

.. _changes_1_3:

Version 1.3.0
=============

**June 2023**

Changed models
--------------

The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.

- |Enhancement| :meth:`multiclass.OutputCodeClassifier.predict` now uses a more
  efficient pairwise distance reduction. As a consequence, the tie-breaking
  strategy is different and thus the predicted labels may be different.
  :pr:`25196` by :user:`Guillaume Lemaitre <glemaitre>`.
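To make the tie-breaking change above concrete, here is a minimal sketch exercising :meth:`multiclass.OutputCodeClassifier.predict`; the dataset, base estimator and parameter values are illustrative choices, not taken from the changelog:

```python
# Illustrative sketch: in 1.3, OutputCodeClassifier.predict uses a pairwise
# distance reduction internally, so tie-breaking among equally close code
# words (and thus some predicted labels) may differ from 1.2.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier

X, y = load_iris(return_X_y=True)
clf = OutputCodeClassifier(
    estimator=LogisticRegression(max_iter=1000),  # any binary classifier works
    code_size=2,      # code-book width relative to the number of classes
    random_state=0,   # fixes the randomly generated code book
).fit(X, y)

pred = clf.predict(X[:5])
print(pred.shape)
```

Re-running such a snippet under 1.2 and 1.3 may yield different labels only for samples equidistant from several code words; the public API is unchanged.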
- |Enhancement| The `fit_transform` method of
  :class:`decomposition.DictionaryLearning` is more efficient but may produce
  different results than in previous versions when `transform_algorithm` is not
  the same as `fit_algorithm` and the number of iterations is small.
  :pr:`24871` by :user:`Omar Salman <OmarManzoor>`.

- |Enhancement| The `sample_weight` parameter now will be used in centroids
  initialization for :class:`cluster.KMeans`, :class:`cluster.BisectingKMeans`
  and :class:`cluster.MiniBatchKMeans`.
  This change will break backward compatibility, since numbers generated
  from same random seeds will be different.
  :pr:`25752` by :user:`Gleb Levitski <glevv>`,
  :user:`Jérémie du Boisberranger <jeremiedbb>`,
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| Treat more consistently small values in the `W` and `H` matrices during
  the `fit` and `transform` steps of :class:`decomposition.NMF` and
  :class:`decomposition.MiniBatchNMF`, which can produce different results than
  previous versions.
  :pr:`25438` by :user:`Yotam Avidar-Constantini <yotamcons>`.

- |Fix| :class:`decomposition.KernelPCA` may produce different results through
  `inverse_transform` if `gamma` is `None`. Now it will be chosen correctly as
  `1/n_features` of the data that it is fitted on, while previously it might be
  incorrectly chosen as `1/n_features` of the data passed to `inverse_transform`.
  A new attribute `gamma_` is provided for revealing the actual value of `gamma`
  used each time the kernel is called.
  :pr:`26337` by :user:`Yao Xiao <Charlie-XIAO>`.

Changed displays
----------------

- |Enhancement| :class:`model_selection.LearningCurveDisplay` displays both the
  train and test curves by default. You can set `score_type="test"` to keep the
  past behaviour.
  :pr:`25120` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :class:`model_selection.ValidationCurveDisplay` now accepts passing a
  list to the `param_range` parameter.
  :pr:`27311` by :user:`Arturo Amor <ArturoAmorQ>`.

Changes impacting all modules
-----------------------------

- |Enhancement| The `get_feature_names_out` method of the following classes now
  raises a `NotFittedError` if the instance is not fitted. This ensures the
  error is consistent in all estimators with the `get_feature_names_out`
  method:

  - :class:`impute.MissingIndicator`
  - :class:`feature_extraction.DictVectorizer`
  - :class:`feature_extraction.text.TfidfTransformer`
  - :class:`feature_selection.GenericUnivariateSelect`
  - :class:`feature_selection.RFE`
  - :class:`feature_selection.RFECV`
  - :class:`feature_selection.SelectFdr`
  - :class:`feature_selection.SelectFpr`
  - :class:`feature_selection.SelectFromModel`
  - :class:`feature_selection.SelectFwe`
  - :class:`feature_selection.SelectKBest`
  - :class:`feature_selection.SelectPercentile`
  - :class:`feature_selection.SequentialFeatureSelector`
  - :class:`feature_selection.VarianceThreshold`
  - :class:`kernel_approximation.AdditiveChi2Sampler`
  - :class:`impute.IterativeImputer`
  - :class:`impute.KNNImputer`
  - :class:`impute.SimpleImputer`
  - :class:`isotonic.IsotonicRegression`
  - :class:`preprocessing.Binarizer`
  - :class:`preprocessing.KBinsDiscretizer`
  - :class:`preprocessing.MaxAbsScaler`
  - :class:`preprocessing.MinMaxScaler`
  - :class:`preprocessing.Normalizer`
  - :class:`preprocessing.OrdinalEncoder`
  - :class:`preprocessing.PowerTransformer`
  - :class:`preprocessing.QuantileTransformer`
  - :class:`preprocessing.RobustScaler`
  - :class:`preprocessing.SplineTransformer`
  - :class:`preprocessing.StandardScaler`
  - :class:`random_projection.GaussianRandomProjection`
  - :class:`random_projection.SparseRandomProjection`

  The `NotFittedError` displays an informative message asking to fit the
  instance with the appropriate arguments.

  :pr:`25294`, :pr:`25308`, :pr:`25291`, :pr:`25367`, :pr:`25402`
  by :user:`John Pangas <jpangas>`, :user:`Rahil Parikh <rprkh>`,
  and :user:`Alex Buzenet <albuzenet>`.

- |Enhancement| Added a multi-threaded Cython routine to compute squared
  Euclidean distances (sometimes followed by a fused reduction operation) for a
  pair of datasets consisting of a sparse CSR matrix and a dense NumPy array.

  This can improve the performance of following functions and estimators:

  - :func:`sklearn.metrics.pairwise_distances_argmin`
  - :func:`sklearn.metrics.pairwise_distances_argmin_min`
  - :class:`sklearn.cluster.AffinityPropagation`
  - :class:`sklearn.cluster.Birch`
  - :class:`sklearn.cluster.MeanShift`
  - :class:`sklearn.cluster.OPTICS`
  - :class:`sklearn.cluster.SpectralClustering`
  - :func:`sklearn.feature_selection.mutual_info_regression`
  - :class:`sklearn.neighbors.KNeighborsClassifier`
  - :class:`sklearn.neighbors.KNeighborsRegressor`
  - :class:`sklearn.neighbors.RadiusNeighborsClassifier`
  - :class:`sklearn.neighbors.RadiusNeighborsRegressor`
  - :class:`sklearn.neighbors.LocalOutlierFactor`
  - :class:`sklearn.neighbors.NearestNeighbors`
  - :class:`sklearn.manifold.Isomap`
  - :class:`sklearn.manifold.LocallyLinearEmbedding`
  - :class:`sklearn.manifold.TSNE`
  - :func:`sklearn.manifold.trustworthiness`
  - :class:`sklearn.semi_supervised.LabelPropagation`
  - :class:`sklearn.semi_supervised.LabelSpreading`

  A typical example of this performance improvement happens when passing a
  sparse CSR matrix to the `predict` or `transform` method of estimators that
  rely on a dense NumPy representation to store their fitted parameters (or the
  reverse).

  For instance, :meth:`sklearn.neighbors.NearestNeighbors.kneighbors` is now up
  to 2 times faster for this case on commonly available laptops.

  :pr:`25044` by :user:`Julien Jerphanion <jjerphan>`.

- |Enhancement| All estimators that internally rely on OpenMP multi-threading
  (via Cython) now use a number of threads equal to the number of physical
  (instead of logical) cores by default. In the past, we observed that using as
  many threads as logical
cores on SMT hosts could sometimes cause severe
  performance problems depending on the algorithms and the shape of the data.
  Note that it is still possible to manually adjust the number of threads used
  by OpenMP as documented in :ref:`parallelism`.
  :pr:`26082` by :user:`Jérémie du Boisberranger <jeremiedbb>` and
  :user:`Olivier Grisel <ogrisel>`.

Experimental / Under Development
--------------------------------

- |MajorFeature| :ref:`Metadata routing <metadata_routing>`'s related base
  methods are included in this release. This feature is only available via the
  `enable_metadata_routing` feature flag which can be enabled using
  :func:`sklearn.set_config` and :func:`sklearn.config_context`. For now this
  feature is mostly useful for third-party developers to prepare their code
  base for metadata routing, and we strongly recommend that they also hide it
  behind the same feature flag, rather than having it enabled by default.
  :pr:`24027` by `Adrin Jalali`_, :user:`Benjamin Bossan <BenjaminBossan>`,
  and :user:`Omar Salman <OmarManzoor>`.

Changelog
---------

..
    Entries should be grouped by module (in alphabetic order) and prefixed with
    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
    |Fix| or |API| (see whats_new.rst for descriptions).
    Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
    Changes not specific to a module should be listed under *Multiple Modules*
    or *Miscellaneous*.
    Entries should end with:
    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
    where 123456 is the *pull request* number, not the issue number.

`sklearn`
.........

- |Feature| Added a new option `skip_parameter_validation` to the function
  :func:`sklearn.set_config` and context manager
  :func:`sklearn.config_context` that allows skipping the validation of the
  parameters passed to the estimators and public functions. This can be useful
  to speed up the code but should be used with care
because it can lead
  to unexpected behaviors or raise obscure error messages when setting invalid
  parameters.
  :pr:`25815` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.base`
...................

- |Feature| A `__sklearn_clone__` protocol is now available to override the
  default behavior of :func:`base.clone`. :pr:`24568` by `Thomas Fan`_.

- |Fix| :class:`base.TransformerMixin` now correctly keeps a namedtuple's class
  if `transform` returns a namedtuple. :pr:`26121` by `Thomas Fan`_.

:mod:`sklearn.calibration`
..........................

- |Fix| :class:`calibration.CalibratedClassifierCV` now does not enforce sample
  alignment on `fit_params`.
  :pr:`25805` by `Adrin Jalali`_.

:mod:`sklearn.cluster`
......................

- |MajorFeature| Added :class:`cluster.HDBSCAN`, a modern hierarchical
  density-based clustering algorithm. Similarly to :class:`cluster.OPTICS`, it
  can be seen as a generalization of :class:`cluster.DBSCAN` by allowing for
  hierarchical instead of flat clustering; however, it varies in its approach
  from :class:`cluster.OPTICS`. This algorithm is very robust with respect to
  its hyperparameters' values and can be used on a wide variety of data without
  much, if any, tuning.

  This implementation is an adaptation from the original implementation of
  HDBSCAN in
  `scikit-learn-contrib/hdbscan <https://github.com/scikit-learn-contrib/hdbscan>`_,
  by :user:`Leland McInnes <lmcinnes>` et al.

  :pr:`26385` by :user:`Meekail Zain <micky774>`.

- |Enhancement| The `sample_weight` parameter now will be used in centroids
  initialization for :class:`cluster.KMeans`, :class:`cluster.BisectingKMeans`
  and :class:`cluster.MiniBatchKMeans`.
  This change will break backward compatibility, since numbers generated
  from same random seeds will be different.
  :pr:`25752` by :user:`Gleb Levitski <glevv>`,
  :user:`Jérémie du Boisberranger <jeremiedbb>`,
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :class:`cluster.KMeans`,
:class:`cluster.MiniBatchKMeans` and
  :func:`cluster.k_means` now correctly handle the combination of
  `n_init="auto"` and `init` being an array-like, running one initialization in
  that case.
  :pr:`26657` by :user:`Binesh Bannerjee <bnsh>`.

- |API| The `sample_weight` parameter in `predict` for
  :meth:`cluster.KMeans.predict` and :meth:`cluster.MiniBatchKMeans.predict`
  is now deprecated and will be removed in v1.5.
  :pr:`25251` by :user:`Gleb Levitski <glevv>`.

- |API| The `Xred` argument in
  :func:`cluster.FeatureAgglomeration.inverse_transform` is renamed to `Xt`
  and will be removed in v1.5. :pr:`26503` by `Adrin Jalali`_.

:mod:`sklearn.compose`
......................

- |Fix| :class:`compose.ColumnTransformer` raises an informative error when the
  individual transformers of `ColumnTransformer` output pandas dataframes with
  indexes that are not consistent with each other and the output is configured
  to be pandas.
  :pr:`26286` by `Thomas Fan`_.

- |Fix| :class:`compose.ColumnTransformer` correctly sets the output of the
  remainder when `set_output` is called. :pr:`26323` by `Thomas Fan`_.

:mod:`sklearn.covariance`
.........................

- |Fix| Allows `alpha=0` in :class:`covariance.GraphicalLasso` to be
  consistent with :func:`covariance.graphical_lasso`.
  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.

- |Fix| :func:`covariance.empirical_covariance` now gives an informative
  error message when input is not appropriate.
  :pr:`26108` by :user:`Quentin Barthélemy <qbarthelemy>`.

- |API| Deprecates `cov_init` in :func:`covariance.graphical_lasso` in 1.3
  since the parameter has no effect. It will be removed in 1.5.
  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.

- |API| Adds `costs_` fitted attribute in :class:`covariance.GraphicalLasso`
  and :class:`covariance.GraphicalLassoCV`.
  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.

- |API| Adds `covariance` parameter in :class:`covariance.GraphicalLasso`.
  :pr:`26033` by
:user:`Genesis Valencia <genvalen>`.

- |API| Adds `eps` parameter in :class:`covariance.GraphicalLasso`,
  :func:`covariance.graphical_lasso`, and :class:`covariance.GraphicalLassoCV`.
  :pr:`26033` by :user:`Genesis Valencia <genvalen>`.

:mod:`sklearn.datasets`
.......................

- |Enhancement| Allows to overwrite the parameters used to open the ARFF file
  using the parameter `read_csv_kwargs` in :func:`datasets.fetch_openml` when
  using the pandas parser.
  :pr:`26433` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :func:`datasets.fetch_openml` returns improved data types when
  `as_frame=True` and `parser="liac-arff"`. :pr:`26386` by `Thomas Fan`_.

- |Fix| Following the ARFF specs, only the marker `"?"` is now considered as a
  missing value when opening ARFF files fetched using
  :func:`datasets.fetch_openml` when using the pandas parser. The parameter
  `read_csv_kwargs` allows to overwrite this behaviour.
  :pr:`26551` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :func:`datasets.fetch_openml` will consistently use `np.nan` as missing
  marker with both parsers, `"pandas"` and `"liac-arff"`.
  :pr:`26579` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| The `data_transposed` argument of
  :func:`datasets.make_sparse_coded_signal` is deprecated and will be removed
  in v1.5.
  :pr:`25784` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.decomposition`
............................

- |Efficiency| :class:`decomposition.MiniBatchDictionaryLearning` and
  :class:`decomposition.MiniBatchSparsePCA` are now faster for small batch
  sizes by avoiding duplicate validations.
  :pr:`25490` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Enhancement| :class:`decomposition.DictionaryLearning` now accepts the
  parameter `callback` for consistency with the function
  :func:`decomposition.dict_learning`.
  :pr:`24871` by :user:`Omar Salman <OmarManzoor>`.

- |Fix| Treat more consistently small values in the `W` and `H` matrices during
  the `fit`
and `transform` steps of :class:`decomposition.NMF` and
  :class:`decomposition.MiniBatchNMF`, which can produce different results than
  previous versions.
  :pr:`25438` by :user:`Yotam Avidar-Constantini <yotamcons>`.

- |API| The `W` argument in :func:`decomposition.NMF.inverse_transform` and
  :class:`decomposition.MiniBatchNMF.inverse_transform` is renamed to `Xt` and
  will be removed in v1.5. :pr:`26503` by `Adrin Jalali`_.

:mod:`sklearn.discriminant_analysis`
....................................

- |Enhancement| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now
  supports `PyTorch <https://pytorch.org/>`_. See :ref:`array_api` for more
  details. :pr:`25956` by `Thomas Fan`_.

:mod:`sklearn.ensemble`
.......................

- |Feature| :class:`ensemble.HistGradientBoostingRegressor` now supports
  the Gamma deviance loss via `loss="gamma"`.
  Using the Gamma deviance as loss function comes in handy for modelling
  skewed-distributed, strictly positive valued targets.
  :pr:`22409` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| Compute a custom out-of-bag score by passing a callable to
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.ExtraTreesClassifier` and
  :class:`ensemble.ExtraTreesRegressor`.
  :pr:`25177` by `Tim Head`_.

- |Feature| :class:`ensemble.GradientBoostingClassifier` now exposes
  out-of-bag scores via the `oob_scores_` or `oob_score_` attributes.
  :pr:`24882` by :user:`Ashwin Mathur <awinml>`.

- |Efficiency| :class:`ensemble.IsolationForest` predict time is now faster
  (typically by a factor of 8 or more). Internally, the estimator now
  precomputes decision path lengths per tree at `fit` time. It is therefore
  not possible to load an estimator trained with scikit-learn 1.2 to make it
  predict with scikit-learn 1.3; retraining with scikit-learn 1.3 is required.
  :pr:`25186` by :user:`Felipe Breve Siola <fsiola>`.

- |Efficiency| :class:`ensemble.RandomForestClassifier` and
  :class:`ensemble.RandomForestRegressor` with `warm_start=True` now only
  recompute out-of-bag scores when there are actually more `n_estimators`
  in subsequent `fit` calls.
  :pr:`26318` by :user:`Joshua Choo Yun Keat <choo8>`.

- |Enhancement| :class:`ensemble.BaggingClassifier` and
  :class:`ensemble.BaggingRegressor` expose the `allow_nan` tag from the
  underlying estimator. :pr:`25506` by `Thomas Fan`_.

- |Fix| :meth:`ensemble.RandomForestClassifier.fit` sets `max_samples = 1`
  when `max_samples` is a float and `round(n_samples * max_samples) < 1`.
  :pr:`25601` by :user:`Jan Fidor <JanFidor>`.

- |Fix| :meth:`ensemble.IsolationForest.fit` no longer warns about missing
  feature names when called with `contamination` not `"auto"` on a pandas
  dataframe.
  :pr:`25931` by :user:`Yao Xiao <Charlie-XIAO>`.

- |Fix| :class:`ensemble.HistGradientBoostingRegressor` and
  :class:`ensemble.HistGradientBoostingClassifier` treat negative values for
  categorical features consistently as missing values, following LightGBM's
  and pandas' conventions.
  :pr:`25629` by `Thomas Fan`_.

- |Fix| Fix deprecation of `base_estimator` in
  :class:`ensemble.AdaBoostClassifier` and :class:`ensemble.AdaBoostRegressor`
  that was introduced in :pr:`23819`.
  :pr:`26242` by :user:`Marko Toplak <markotoplak>`.

:mod:`sklearn.exceptions`
.........................

- |Feature| Added :class:`exceptions.InconsistentVersionWarning` which is
  raised when a scikit-learn estimator is unpickled with a scikit-learn version
  that is inconsistent with the scikit-learn version the estimator was pickled
  with.
  :pr:`25297` by `Thomas Fan`_.

:mod:`sklearn.feature_extraction`
.................................

- |API| :class:`feature_extraction.image.PatchExtractor` now follows the
  transformer API of scikit-learn. This class is defined as a stateless
  transformer, meaning that it is not required to call `fit` before calling
  `transform`. Parameter validation
only happens at `fit` time.
  :pr:`24230` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.feature_selection`
................................

- |Enhancement| All selectors in :mod:`sklearn.feature_selection` will preserve
  a DataFrame's dtype when transformed. :pr:`25102` by `Thomas Fan`_.

- |Fix| :class:`feature_selection.SequentialFeatureSelector`'s `cv` parameter
  now supports generators. :pr:`25973` by :user:`Yao Xiao <Charlie-XIAO>`.

:mod:`sklearn.impute`
.....................

- |Enhancement| Added the parameter `fill_value` to
  :class:`impute.IterativeImputer`.
  :pr:`25232` by :user:`Thijs van Weezel <ValueInvestorThijs>`.

- |Fix| :class:`impute.IterativeImputer` now correctly preserves the Pandas
  Index when `set_config(transform_output="pandas")`.
  :pr:`26454` by `Thomas Fan`_.

:mod:`sklearn.inspection`
.........................

- |Enhancement| Added support for `sample_weight` in
  :func:`inspection.partial_dependence` and
  :meth:`inspection.PartialDependenceDisplay.from_estimator`. This allows for
  weighted averaging when aggregating for each value of the grid we are making
  the inspection on. The option is only available when `method` is set to
  `brute`.
  :pr:`25209` and :pr:`26644` by :user:`Carlo Lemos <vitaliset>`.

- |API| :func:`inspection.partial_dependence` returns a :class:`utils.Bunch`
  with new key: `grid_values`. The `values` key is deprecated in favor of
  `grid_values` and the `values` key will be removed in 1.5.
  :pr:`21809` and :pr:`25732` by `Thomas Fan`_.

:mod:`sklearn.kernel_approximation`
...................................

- |Fix| :class:`kernel_approximation.AdditiveChi2Sampler` is now stateless.
  The `sample_interval_` attribute is deprecated and will be removed in 1.5.
  :pr:`25190` by :user:`Vincent Maladière <Vincent-Maladiere>`.

:mod:`sklearn.linear_model`
...........................

- |Efficiency| Avoid data scaling when `sample_weight=None` and other
  unnecessary data copies and unexpected dense to
sparse data conversion in
  :class:`linear_model.LinearRegression`.
  :pr:`26207` by :user:`Olivier Grisel <ogrisel>`.

- |Enhancement| :class:`linear_model.SGDClassifier`,
  :class:`linear_model.SGDRegressor` and :class:`linear_model.SGDOneClassSVM`
  now preserve dtype for `numpy.float32`.
  :pr:`25587` by :user:`Omar Salman <OmarManzoor>`.

- |Enhancement| The `n_iter_` attribute has been included in
  :class:`linear_model.ARDRegression` to expose the actual number of iterations
  required to reach the stopping criterion.
  :pr:`25697` by :user:`John Pangas <jpangas>`.

- |Fix| Use a more robust criterion to detect convergence of
  :class:`linear_model.LogisticRegression` with `penalty="l1"` and
  `solver="liblinear"` on linearly separable problems.
  :pr:`25214` by `Tom Dupre la Tour`_.

- |Fix| Fix a crash when calling `fit` on
  :class:`linear_model.LogisticRegression` with `solver="newton-cholesky"` and
  `max_iter=0`, which failed to inspect the state of the model prior to the
  first parameter update.
  :pr:`26653` by :user:`Olivier Grisel <ogrisel>`.

- |API| Deprecates `n_iter` in favor of `max_iter` in
  :class:`linear_model.BayesianRidge` and :class:`linear_model.ARDRegression`.
  `n_iter` will be removed in scikit-learn 1.5. This change makes those
  estimators consistent with the rest of estimators.
  :pr:`25697` by :user:`John Pangas <jpangas>`.

:mod:`sklearn.manifold`
.......................

- |Fix| :class:`manifold.Isomap` now correctly preserves the Pandas
  Index when `set_config(transform_output="pandas")`.
  :pr:`26454` by `Thomas Fan`_.

:mod:`sklearn.metrics`
......................

- |Feature| Adds `zero_division=np.nan` to multiple classification metrics:
  :func:`metrics.precision_score`, :func:`metrics.recall_score`,
  :func:`metrics.f1_score`, :func:`metrics.fbeta_score`,
  :func:`metrics.precision_recall_fscore_support`,
  :func:`metrics.classification_report`. When `zero_division=np.nan` and there
  is a zero division, the metric
is undefined and is excluded from averaging. When
  not used for averages, the value returned is `np.nan`.
  :pr:`25531` by :user:`Marc Torrellas Socastro <marctorsoc>`.

- |Feature| :func:`metrics.average_precision_score` now supports the
  multiclass case.
  :pr:`17388` by :user:`Geoffrey Bolmier <gbolmier>` and
  :pr:`24769` by :user:`Ashwin Mathur <awinml>`.

- |Efficiency| The computation of the expected mutual information in
  :func:`metrics.adjusted_mutual_info_score` is now faster when the number of
  unique labels is large and its memory usage is reduced in general.
  :pr:`25713` by :user:`Kshitij Mathur <Kshitij68>`,
  :user:`Guillaume Lemaitre <glemaitre>`, :user:`Omar Salman <OmarManzoor>` and
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Enhancement| :func:`metrics.silhouette_samples` now accepts a sparse
  matrix of pairwise distances between samples, or a feature array.
  :pr:`18723` by :user:`Sahil Gupta <sahilgupta2105>` and
  :pr:`24677` by :user:`Ashwin Mathur <awinml>`.

- |Enhancement| A new parameter `drop_intermediate` was added to
  :func:`metrics.precision_recall_curve`,
  :func:`metrics.PrecisionRecallDisplay.from_estimator`,
  :func:`metrics.PrecisionRecallDisplay.from_predictions`,
  which drops some suboptimal thresholds to create lighter precision-recall
  curves.
  :pr:`24668` by :user:`dberenbaum`.

- |Enhancement| :meth:`metrics.RocCurveDisplay.from_estimator` and
  :meth:`metrics.RocCurveDisplay.from_predictions` now accept two new
  keywords, `plot_chance_level` and `chance_level_kw`, to plot the baseline
  chance level. This line is exposed in the `chance_level_` attribute.
  :pr:`25987` by :user:`Yao Xiao <Charlie-XIAO>`.

- |Enhancement| :meth:`metrics.PrecisionRecallDisplay.from_estimator` and
  :meth:`metrics.PrecisionRecallDisplay.from_predictions` now accept two new
  keywords, `plot_chance_level` and `chance_level_kw`, to plot the baseline
  chance level. This line is exposed in the `chance_level_` attribute.
  :pr:`26019` by :user:`Yao Xiao <Charlie-XIAO>`.

- |Fix| :func:`metrics.pairwise.manhattan_distances` now supports readonly
  sparse datasets.
  :pr:`25432` by :user:`Julien Jerphanion <jjerphan>`.

- |Fix| Fixed :func:`metrics.classification_report` so that empty input will
  return `np.nan`. Previously, "macro avg" and "weighted avg" would return
  e.g. `f1-score=np.nan` and `f1-score=0.0`, being inconsistent. Now, they
  both return `np.nan`.
  :pr:`25531` by :user:`Marc Torrellas Socastro <marctorsoc>`.

- |Fix| :func:`metrics.ndcg_score` now gives a meaningful error message for
  input of length 1.
  :pr:`25672` by :user:`Lene Preuss <lene>` and :user:`Wei-Chun Chu <wcchu>`.

- |Fix| :func:`metrics.log_loss` raises a warning if the values of the
  parameter `y_pred` are not normalized, instead of actually normalizing them
  in the metric. Starting from 1.5 this will raise an error.
  :pr:`25299` by :user:`Omar Salman <OmarManzoor>`.

- |Fix| In :func:`metrics.roc_curve`, use the threshold value `np.inf` instead
  of arbitrary `max(y_score) + 1`. This threshold is associated with the ROC
  curve point `tpr=0` and `fpr=0`.
  :pr:`26194` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| The `'matching'` metric has been removed when using SciPy>=1.9
  to be consistent with `scipy.spatial.distance` which does not support
  `'matching'` anymore.
  :pr:`26264` by :user:`Barata T. Onggo <magnusbarata>`.

- |API| The `eps` parameter of the :func:`metrics.log_loss` has been deprecated
  and will be removed in 1.5.
  :pr:`25299` by :user:`Omar Salman <OmarManzoor>`.

:mod:`sklearn.gaussian_process`
...............................

- |Fix| :class:`gaussian_process.GaussianProcessRegressor` has a new argument
  `n_targets`, which is used to decide the number of outputs when sampling
  from the prior distributions.
  :pr:`23099` by :user:`Zhehao Liu <MaxwellLZH>`.

:mod:`sklearn.mixture`
......................

- |Efficiency| :class:`mixture.GaussianMixture` is more efficient now and
will bypass   unnecessary initialization if the weights  means  and precisions are   given by users     pr  26021  by  user  Jiawei Zhang  jiawei zhang a      mod  sklearn model selection                                     MajorFeature  Added the class  class  model selection ValidationCurveDisplay    that allows easy plotting of validation curves obtained by the function    func  model selection validation curve      pr  25120  by  user  Guillaume Lemaitre  glemaitre        API  The parameter  log scale  in the class    class  model selection LearningCurveDisplay  has been deprecated in 1 3 and   will be removed in 1 5  The default scale can be overridden by setting it   directly on the  ax  object and will be set automatically from the spacing   of the data points otherwise     pr  25120  by  user  Guillaume Lemaitre  glemaitre        Enhancement   func  model selection cross validate  accepts a new parameter    return indices  to return the train test indices of each cv split     pr  25659  by  user  Guillaume Lemaitre  glemaitre      mod  sklearn multioutput                                 Fix   func  getattr  on  meth  multioutput MultiOutputRegressor partial fit    and  meth  multioutput MultiOutputClassifier partial fit  now correctly raise   an  AttributeError  if done before calling  fit    pr  26333  by  Adrin   Jalali      mod  sklearn naive bayes                                 Fix   class  naive bayes GaussianNB  does not raise anymore a  ZeroDivisionError    when the provided  sample weight  reduces the problem to a single class in  fit      pr  24140  by  user  Jonathan Ohayon  Johayon   and  user  Chiara Marmo  cmarmo      mod  sklearn neighbors                               Enhancement  The performance of  meth  neighbors KNeighborsClassifier predict    and of  meth  neighbors KNeighborsClassifier predict proba  has been improved   when  n neighbors  is large and  algorithm  brute   with non Euclidean metrics     pr  24076  by  user  Meekail Zain  
micky774     user  Julien Jerphanion  jjerphan        Fix  Remove support for  KulsinskiDistance  in  class  neighbors BallTree   This   dissimilarity is not a metric and cannot be supported by the BallTree     pr  25417  by  user  Guillaume Lemaitre  glemaitre        API  The support for metrics other than  euclidean  and  manhattan  and for   callables in  class  neighbors NearestNeighbors  is deprecated and will be removed in   version 1 5   pr  24083  by  user  Valentin Laurent  Valentin Laurent      mod  sklearn neural network                                    Fix   class  neural network MLPRegressor  and  class  neural network MLPClassifier    reports the right  n iter   when  warm start True   It corresponds to the number   of iterations performed on the current call to  fit  instead of the total number   of iterations performed since the initialization of the estimator     pr  25443  by  user  Marvin Krawutschke  Marvvxi      mod  sklearn pipeline                              Feature   class  pipeline FeatureUnion  can now use indexing notation  e g     feature union  scalar     to access transformers by name   pr  25093  by    Thomas Fan        Feature   class  pipeline FeatureUnion  can now access the    feature names in   attribute if the  X  value seen during   fit  has a    columns  attribute and all columns are strings  e g  when  X  is a    pandas DataFrame     pr  25220  by  user  Ian Thompson  it176131        Fix   meth  pipeline Pipeline fit transform  now raises an  AttributeError    if the last step of the pipeline does not support  fit transform      pr  26325  by  Adrin Jalali      mod  sklearn preprocessing                                   MajorFeature  Introduces  class  preprocessing TargetEncoder  which is a   categorical encoding based on target mean conditioned on the value of the   category   pr  25334  by  Thomas Fan        Feature   class  preprocessing OrdinalEncoder  now supports grouping   infrequent categories into a single 
feature  Grouping infrequent categories   is enabled by specifying how to select infrequent categories with    min frequency  or  max categories    pr  25677  by  Thomas Fan        Enhancement   class  preprocessing PolynomialFeatures  now calculates the   number of expanded terms a priori when dealing with sparse  csr  matrices   in order to optimize the choice of  dtype  for  indices  and  indptr   It   can now output  csr  matrices with  np int32   indices indptr  components   when there are few enough elements  and will automatically use  np int64    for sufficiently large matrices     pr  20524  by  user  niuk a  niuk a   and    pr  23731  by  user  Meekail Zain  micky774       Enhancement  A new parameter  sparse output  was added to    class  preprocessing SplineTransformer   available as of SciPy 1 8  If    sparse output True    class  preprocessing SplineTransformer  returns a sparse   CSR matrix   pr  24145  by  user  Christian Lorentzen  lorentzenchr        Enhancement  Adds a  feature name combiner  parameter to    class  preprocessing OneHotEncoder   This specifies a custom callable to   create feature names to be returned by    meth  preprocessing OneHotEncoder get feature names out   The callable   combines input arguments   input feature  category   to a string     pr  22506  by  user  Mario Kostelac  mariokostelac        Enhancement  Added support for  sample weight  in    class  preprocessing KBinsDiscretizer   This allows specifying the parameter    sample weight  for each sample to be used while fitting  The option is only   available when  strategy  is set to  quantile  and  kmeans      pr  24935  by  user  Seladus  seladus     user  Guillaume Lemaitre  glemaitre    and    user  Dea Mar a L on  deamarialeon     pr  25257  by  user  Gleb Levitski  glevv        Enhancement  Subsampling through the  subsample  parameter can now be used in    class  preprocessing KBinsDiscretizer  regardless of the strategy used     pr  26424  by  user  J r mie du 
Boisberranger  jeremiedbb        Fix   class  preprocessing PowerTransformer  now correctly preserves the Pandas   Index when the  set config transform output  pandas      pr  26454  by  Thomas Fan        Fix   class  preprocessing PowerTransformer  now correctly raises error when   using  method  box cox   on data with a constant  np nan  column     pr  26400  by  user  Yao Xiao  Charlie XIAO        Fix   class  preprocessing PowerTransformer  with  method  yeo johnson   now leaves   constant features unchanged instead of transforming with an arbitrary value for   the  lambdas   fitted parameter     pr  26566  by  user  J r mie du Boisberranger  jeremiedbb        API  The default value of the  subsample  parameter of    class  preprocessing KBinsDiscretizer  will change from  None  to  200 000  in   version 1 5 when  strategy  kmeans   or  strategy  uniform       pr  26424  by  user  J r mie du Boisberranger  jeremiedbb      mod  sklearn svm                         API   dual  parameter now accepts  auto  option for    class  svm LinearSVC  and  class  svm LinearSVR      pr  26093  by  user  Gleb Levitski  glevv      mod  sklearn tree                          MajorFeature   class  tree DecisionTreeRegressor  and    class  tree DecisionTreeClassifier  support missing values when    splitter  best   and criterion is  gini    entropy   or  log loss     for classification or  squared error    friedman mse   or  poisson    for regression   pr  23595    pr  26376  by  Thomas Fan        Enhancement  Adds a  class names  parameter to    func  tree export text   This allows specifying the parameter  class names    for each target class in ascending numerical order     pr  25387  by  user  William M  Akbeeh   and  user  crispinlogan  crispinlogan        Fix   func  tree export graphviz  and  func  tree export text  now accepts    feature names  and  class names  as array like rather than lists     pr  26289  by  user  Yao Xiao  Charlie XIAO     mod  sklearn utils            
               FIX  Fixes  func  utils check array  to properly convert pandas   extension arrays   pr  25813  and  pr  26106  by  Thomas Fan        Fix   func  utils check array  now supports pandas DataFrames with   extension arrays and object dtypes by return an ndarray with object dtype     pr  25814  by  Thomas Fan        API   utils estimator checks check transformers unfitted stateless  has been   introduced to ensure stateless transformers don t raise  NotFittedError    during  transform  with no prior call to  fit  or  fit transform      pr  25190  by  user  Vincent Maladi re  Vincent Maladiere        API  A  FutureWarning  is now raised when instantiating a class which inherits from   a deprecated base class  i e  decorated by  class  utils deprecated   and which   overrides the    init    method     pr  25733  by  user  Brigitta Sip cz  bsipocz   and    user  J r mie du Boisberranger  jeremiedbb      mod  sklearn semi supervised                                     Enhancement   meth  semi supervised LabelSpreading fit  and    meth  semi supervised LabelPropagation fit  now accepts sparse metrics     pr  19664  by  user  Kaushik Amar Das  cozek     Miscellaneous                   Enhancement  Replace obsolete exceptions  EnvironmentError    IOError  and    WindowsError      pr  26466  by  user  Dimitri Papadopoulos ORfanos  DimitriPapadopoulos        rubric   Code and documentation contributors  Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1 2  including   2357juan  Abhishek Singh Kushwah  Adam Handke  Adam Kania  Adam Li  adienes  Admir Demiraj  adoublet  Adrin Jalali  A H Mansouri  Ahmedbgh  Ala Na  Alex Buzenet  AlexL  Ali H  El Kassas  amay  Andr s Simon  Andr  Pedersen  Andrew Wang  Ankur Singh  annegnx  Ansam Zedan  Anthony22 dev  Artur Hermano  Arturo Amor  as 90  ashah002  Ashish Dutt  Ashwin Mathur  AymericBasset  Azaria Gebremichael  Barata Tripramudya Onggo  Benedek Harsanyi  Benjamin 
Bossan  Bharat Raghunathan  Binesh Bannerjee  Boris Feld  Brendan Lu  Brevin Kunde  cache missing  Camille Troillard  Carla J  carlo  Carlo Lemos  c git  Changyao Chen  Chiara Marmo  Christian Lorentzen  Christian Veenhuis  Christine P  Chai  crispinlogan  Da Lan  DanGonite57  Dave Berenbaum  davidblnc  david cortes  Dayne  Dea Mar a L on  Denis  Dimitri Papadopoulos Orfanos  Dimitris Litsidis  Dmitry Nesterov  Dominic Fox  Dominik Prodinger  Edern  Ekaterina Butyugina  Elabonga Atuo  Emir  farhan khan  Felipe Siola  futurewarning  Gael Varoquaux  genvalen  Gleb Levitski  Guillaume Lemaitre  gunesbayir  Haesun Park  hujiahong726  i aki y  Ian Thompson  Ido M  Ily  Irene  Jack McIvor  jakirkham  James Dean  JanFidor  Jarrod Millman  JB Mountford  J r mie du Boisberranger  Jessicakk0711  Jiawei Zhang  Joey Ortiz  JohnathanPi  John Pangas  Joshua Choo Yun Keat  Joshua Hedlund  JuliaSchoepp  Julien Jerphanion  jygerardy  ka00ri  Kaushik Amar Das  Kento Nozawa  Kian Eliasi  Kilian Kluge  Lene Preuss  Linus  Logan Thomas  Loic Esteve  Louis Fouquet  Lucy Liu  Madhura Jayaratne  Marc Torrellas Socastro  Maren Westermann  Mario Kostelac  Mark Harfouche  Marko Toplak  Marvin Krawutschke  Masanori Kanazu  mathurinm  Matt Haberland  Max Halford  maximeSaur  Maxwell Liu  m  bou  mdarii  Meekail Zain  Mikhail Iljin  murezzda  Nawazish Alam  Nicola Fanelli  Nightwalkx  Nikolay Petrov  Nishu Choudhary  NNLNR  npache  Olivier Grisel  Omar Salman  ouss1508  PAB  Pandata  partev  Peter Piontek  Phil  pnucci  Pooja M  Pooja Subramaniam  precondition  Quentin Barth lemy  Rafal Wojdyla  Raghuveer Bhat  Rahil Parikh  Ralf Gommers  ram vikram singh  Rushil Desai  Sadra Barikbin  SANJAI 3  Sashka Warner  Scott Gigante  Scott Gustafson  searchforpassion  Seoeun Hong  Shady el Gewily  Shiva chauhan  Shogo Hida  Shreesha Kumar Bhat  sonnivs  Sortofamudkip  Stanislav  Stanley  Modrak  Stefanie Senger  Steven Van Vaerenbergh  Tabea Kossen  Th ophile Baranger  Thijs van Weezel  Thomas A Caswell 
 Thomas Germer  Thomas J  Fan  Tim Head  Tim P  Tom Dupr  la Tour  tomiock  tspeng  Valentin Laurent  Veghit  VIGNESH D  Vijeth Moudgalya  Vinayak Mehta  Vincent M  Vincent violet  Vyom Pathak  William M  windiana42  Xiao Yuan  Yao Xiao  Yaroslav Halchenko  Yotam Avidar Constantini  Yuchen Zhou  Yusuf Raji  zeeshan lone"}
{"questions":"scikit-learn sklearn contributors rst Version 0 20","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n============\nVersion 0.20\n============\n\n.. warning::\n\n    Version 0.20 is the last version of scikit-learn to support Python 2.7 and Python 3.4.\n    Scikit-learn 0.21 will require Python 3.5 or higher.\n\n.. include:: changelog_legend.inc\n\n.. _changes_0_20_4:\n\nVersion 0.20.4\n==============\n\n**July 30, 2019**\n\nThis is a bug-fix release with some bug fixes applied to version 0.20.3.\n\nChangelog\n---------\n\nThe bundled version of joblib was upgraded from 0.13.0 to 0.13.2.\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans` where KMeans++ initialisation\n  could rarely result in an IndexError. :issue:`11756` by `Joel Nothman`_.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| Fixed an issue in :class:`compose.ColumnTransformer` where using\n  DataFrames whose column order differs between ``fit`` and\n  ``transform`` could lead to silently passing incorrect columns to the\n  ``remainder`` transformer.\n  :pr:`14237` by :user:`Andreas Schuderer <schuderer>`.\n\n:mod:`sklearn.cross_decomposition`\n..................................\n\n- |Fix| Fixed a bug in :class:`cross_decomposition.CCA` improving numerical\n  stability when `Y` is close to zero. :pr:`13903` by `Thomas Fan`_.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Fix| Fixed a bug where :class:`model_selection.StratifiedKFold`\n  shuffles each class's samples with the same ``random_state``,\n  making ``shuffle=True`` ineffective.\n  :issue:`13124` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Fix| Fixed a bug in :class:`neighbors.KernelDensity` which could not be\n  restored from a pickle if ``sample_weight`` had been used.\n  :issue:`13772` by :user:`Aditya Vyas <aditya1702>`.\n\n.. 
_changes_0_20_3:\n\nVersion 0.20.3\n==============\n\n**March 1, 2019**\n\nThis is a bug-fix release with some minor documentation improvements and\nenhancements to features released in 0.20.0.\n\nChangelog\n---------\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans` where computation was single\n  threaded when `n_jobs > 1` or `n_jobs = -1`.\n  :issue:`12949` by :user:`Prabakaran Kumaresshan <nixphix>`.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| Fixed a bug in :class:`compose.ColumnTransformer` to handle\n  negative indexes in the columns list of the transformers.\n  :issue:`12946` by :user:`Pierre Tallotte <pierretallotte>`.\n\n:mod:`sklearn.covariance`\n.........................\n\n- |Fix| Fixed a regression in :func:`covariance.graphical_lasso` so that\n  the case `n_features=2` is handled correctly. :issue:`13276` by\n  :user:`Aur\u00e9lien Bellet <bellet>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Fixed a bug in :func:`decomposition.sparse_encode` where computation was single\n  threaded when `n_jobs > 1` or `n_jobs = -1`.\n  :issue:`13005` by :user:`Prabakaran Kumaresshan <nixphix>`.\n\n:mod:`sklearn.datasets`\n............................\n\n- |Efficiency| :func:`sklearn.datasets.fetch_openml` now loads data by\n  streaming, avoiding high memory usage.  :issue:`13312` by `Joris Van den\n  Bossche`_.\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Fix| Fixed a bug in :class:`feature_extraction.text.CountVectorizer` which\n  would result in the sparse feature matrix having conflicting `indptr` and\n  `indices` precisions under very large vocabularies. 
:issue:`11295` by\n  :user:`Gabriel Vacaliuc <gvacaliuc>`.\n\n:mod:`sklearn.impute`\n.....................\n\n- |Fix| add support for non-numeric data in\n  :class:`sklearn.impute.MissingIndicator` which was not supported while\n  :class:`sklearn.impute.SimpleImputer` was supporting this for some\n  imputation strategies.\n  :issue:`13046` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Fix| Fixed a bug in :class:`linear_model.MultiTaskElasticNet` and\n  :class:`linear_model.MultiTaskLasso` which were breaking when\n  ``warm_start = True``. :issue:`12360` by :user:`Aakanksha Joshi <joaak>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Fix| Fixed a bug in :class:`preprocessing.KBinsDiscretizer` where\n  ``strategy='kmeans'`` fails with an error during transformation due to unsorted\n  bin edges. :issue:`13134` by :user:`Sandro Casagrande <SandroCasagrande>`.\n\n- |Fix| Fixed a bug in :class:`preprocessing.OneHotEncoder` where the\n  deprecation of ``categorical_features`` was handled incorrectly in\n  combination with ``handle_unknown='ignore'``.\n  :issue:`12881` by `Joris Van den Bossche`_.\n\n- |Fix| Bins whose width are too small (i.e., <= 1e-8) are removed\n  with a warning in :class:`preprocessing.KBinsDiscretizer`.\n  :issue:`13165` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n:mod:`sklearn.svm`\n..................\n\n- |FIX| Fixed a bug in :class:`svm.SVC`, :class:`svm.NuSVC`, :class:`svm.SVR`,\n  :class:`svm.NuSVR` and :class:`svm.OneClassSVM` where the ``scale`` option\n  of parameter ``gamma`` is erroneously defined as\n  ``1 \/ (n_features * X.std())``. 
It's now defined as\n  ``1 \/ (n_features * X.var())``.\n  :issue:`13221` by :user:`Hanmin Qin <qinhanmin2014>`.\n\nCode and Documentation Contributors\n-----------------------------------\n\nWith thanks to:\n\nAdrin Jalali, Agamemnon Krasoulis, Albert Thomas, Andreas Mueller, Aur\u00e9lien\nBellet, bertrandhaut, Bharat Raghunathan, Dowon, Emmanuel Arias, Fibinse\nXavier, Finn O'Shea, Gabriel Vacaliuc, Gael Varoquaux, Guillaume Lemaitre,\nHanmin Qin, joaak, Joel Nothman, Joris Van den Bossche, J\u00e9r\u00e9mie M\u00e9hault, kms15,\nKossori Aruku, Lakshya KD, maikia, Manuel L\u00f3pez-Ib\u00e1\u00f1ez, Marco Gorelli,\nMarcoGorelli, mferrari3, Micka\u00ebl Schoentgen, Nicolas Hug, pavlos kallis, Pierre\nGlaser, pierretallotte, Prabakaran Kumaresshan, Reshama Shaikh, Rohit Kapoor,\nRoman Yurchak, SandroCasagrande, Tashay Green, Thomas Fan, Vishaal Kapoor,\nZhuyi Xue, Zijie (ZJ) Poh\n\n.. _changes_0_20_2:\n\nVersion 0.20.2\n==============\n\n**December 20, 2018**\n\nThis is a bug-fix release with some minor documentation improvements and\nenhancements to features released in 0.20.0.\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. 
This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- :mod:`sklearn.neighbors` when ``metric=='jaccard'`` (bug fix)\n- use of ``'seuclidean'`` or ``'mahalanobis'`` metrics in some cases (bug fix)\n\nChangelog\n---------\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| Fixed an issue in :func:`compose.make_column_transformer` which raises\n  unexpected error when columns is pandas Index or pandas Series.\n  :issue:`12704` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fixed a bug in :func:`metrics.pairwise_distances` and\n  :func:`metrics.pairwise_distances_chunked` where parameters ``V`` of\n  ``\"seuclidean\"`` and ``VI`` of ``\"mahalanobis\"`` metrics were computed after\n  the data was split into chunks instead of being pre-computed on whole data.\n  :issue:`12701` by :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Fix| Fixed `sklearn.neighbors.DistanceMetric` jaccard distance\n  function to return 0 when two all-zero vectors are compared.\n  :issue:`12685` by :user:`Thomas Fan <thomasjpfan>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |Fix| Calling :func:`utils.check_array` on `pandas.Series` with categorical\n  data, which raised an error in 0.20.0, now returns the expected output again.\n  :issue:`12699` by `Joris Van den Bossche`_.\n\nCode and Documentation Contributors\n-----------------------------------\n\nWith thanks to:\n\n\nadanhawth, Adrin Jalali, Albert Thomas, Andreas Mueller, Dan Stine, Feda Curic,\nHanmin Qin, Jan S, jeremiedbb, Joel Nothman, Joris Van den Bossche,\njosephsalmon, Katrin Leinweber, Loic Esteve, Muhammad Hassaan Rafique, Nicolas\nHug, Olivier Grisel, Paul Paczuski, Reshama Shaikh, Sam Waterbury, Shivam\nKotwalia, Thomas Fan\n\n.. 
_changes_0_20_1:\n\nVersion 0.20.1\n==============\n\n**November 21, 2018**\n\nThis is a bug-fix release with some minor documentation improvements and\nenhancements to features released in 0.20.0. Note that we also include some\nAPI changes in this release, so you might get some extra warnings after\nupdating from 0.20.0 to 0.20.1.\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- :class:`decomposition.IncrementalPCA` (bug fix)\n\nChangelog\n---------\n\n:mod:`sklearn.cluster`\n......................\n\n- |Efficiency| make :class:`cluster.MeanShift` no longer try to do nested\n  parallelism as the overhead would hurt performance significantly when\n  ``n_jobs > 1``.\n  :issue:`12159` by :user:`Olivier Grisel <ogrisel>`.\n\n- |Fix| Fixed a bug in :class:`cluster.DBSCAN` with precomputed sparse neighbors\n  graph, which would add explicitly zeros on the diagonal even when already\n  present. :issue:`12105` by `Tom Dupre la Tour`_.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| Fixed an issue in :class:`compose.ColumnTransformer` when stacking\n  columns with types not convertible to a numeric.\n  :issue:`11912` by :user:`Adrin Jalali <adrinjalali>`.\n\n- |API| :class:`compose.ColumnTransformer` now applies the ``sparse_threshold``\n  even if all transformation results are sparse. 
:issue:`12304` by `Andreas\n  M\u00fcller`_.\n\n- |API| :func:`compose.make_column_transformer` now expects\n  ``(transformer, columns)`` instead of ``(columns, transformer)`` to keep\n  consistent with :class:`compose.ColumnTransformer`.\n  :issue:`12339` by :user:`Adrin Jalali <adrinjalali>`.\n\n:mod:`sklearn.datasets`\n............................\n\n- |Fix| :func:`datasets.fetch_openml` to correctly use the local cache.\n  :issue:`12246` by :user:`Jan N. van Rijn <janvanrijn>`.\n\n- |Fix| :func:`datasets.fetch_openml` to correctly handle ignore attributes and\n  row id attributes. :issue:`12330` by :user:`Jan N. van Rijn <janvanrijn>`.\n\n- |Fix| Fixed integer overflow in :func:`datasets.make_classification`\n  for values of ``n_informative`` parameter larger than 64.\n  :issue:`10811` by :user:`Roman Feldbauer <VarIr>`.\n\n- |Fix| Fixed olivetti faces dataset ``DESCR`` attribute to point to the right\n  location in :func:`datasets.fetch_olivetti_faces`. :issue:`12441` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`\n\n- |Fix| :func:`datasets.fetch_openml` to retry downloading when reading\n  from local cache fails. 
:issue:`12517` by :user:`Thomas Fan <thomasjpfan>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Fixed a regression in :class:`decomposition.IncrementalPCA` where\n  0.20.0 raised an error if the number of samples in the final batch for\n  fitting IncrementalPCA was smaller than n_components.\n  :issue:`12234` by :user:`Ming Li <minggli>`.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Fix| Fixed a bug mostly affecting :class:`ensemble.RandomForestClassifier`\n  where ``class_weight='balanced_subsample'`` failed with more than 32 classes.\n  :issue:`12165` by `Joel Nothman`_.\n\n- |Fix| Fixed a bug affecting :class:`ensemble.BaggingClassifier`,\n  :class:`ensemble.BaggingRegressor` and :class:`ensemble.IsolationForest`,\n  where ``max_features`` was sometimes rounded down to zero.\n  :issue:`12388` by :user:`Connor Tann <Connossor>`.\n\n:mod:`sklearn.feature_extraction`\n..................................\n\n- |Fix| Fixed a regression in v0.20.0 where\n  :func:`feature_extraction.text.CountVectorizer` and other text vectorizers\n  could error during stop words validation with custom preprocessors\n  or tokenizers. :issue:`12393` by `Roman Yurchak`_.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Fix| :class:`linear_model.SGDClassifier` and variants\n  with ``early_stopping=True`` would not use a consistent validation\n  split in the multiclass case and this would cause a crash when using\n  those estimators as part of parallel parameter search or cross-validation.\n  :issue:`12122` by :user:`Olivier Grisel <ogrisel>`.\n\n- |Fix| Fixed a bug affecting :class:`linear_model.SGDClassifier` in the multiclass\n  case. Each one-versus-all step is run in a :class:`joblib.Parallel` call and\n  mutating a common parameter, causing a segmentation fault if called within a\n  backend using processes and not threads. We now use ``require=sharedmem``\n  at the :class:`joblib.Parallel` instance creation. 
:issue:`12518` by\n  :user:`Pierre Glaser <pierreglaser>` and :user:`Olivier Grisel <ogrisel>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fixed a bug in `metrics.pairwise.pairwise_distances_argmin_min`\n  which returned the square root of the distance when the metric parameter was\n  set to \"euclidean\". :issue:`12481` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Fixed a bug in `metrics.pairwise.pairwise_distances_chunked`\n  which didn't ensure the diagonal is zero for euclidean distances.\n  :issue:`12612` by :user:`Andreas M\u00fcller <amueller>`.\n\n- |API| The `metrics.calinski_harabaz_score` has been renamed to\n  :func:`metrics.calinski_harabasz_score` and will be removed in version 0.23.\n  :issue:`12211` by :user:`Lisa Thomas <LisaThomas9>`,\n  :user:`Mark Hannel <markhannel>` and :user:`Melissa Ferrari <mferrari3>`.\n\n:mod:`sklearn.mixture`\n........................\n\n- |Fix| Ensure that the ``fit_predict`` method of\n  :class:`mixture.GaussianMixture` and :class:`mixture.BayesianGaussianMixture`\n  always yield assignments consistent with ``fit`` followed by ``predict`` even\n  if the convergence criterion is too loose or not met. :issue:`12451`\n  by :user:`Olivier Grisel <ogrisel>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Fix| force the parallelism backend to :code:`threading` for\n  :class:`neighbors.KDTree` and :class:`neighbors.BallTree` in Python 2.7 to\n  avoid pickling errors caused by the serialization of their methods.\n  :issue:`12171` by :user:`Thomas Moreau <tomMoral>`.\n\n:mod:`sklearn.preprocessing`\n.............................\n\n- |Fix| Fixed bug in :class:`preprocessing.OrdinalEncoder` when passing\n  manually specified categories. :issue:`12365` by `Joris Van den Bossche`_.\n\n- |Fix| Fixed bug in :class:`preprocessing.KBinsDiscretizer` where the\n  ``transform`` method mutates the ``_encoder`` attribute. The ``transform``\n  method is now thread safe. 
:issue:`12514` by\n  :user:`Hanmin Qin <qinhanmin2014>`.\n\n- |Fix| Fixed a bug in :class:`preprocessing.PowerTransformer` where the\n  Yeo-Johnson transform was incorrect for lambda parameters outside of `[0, 2]`\n  :issue:`12522` by :user:`Nicolas Hug<NicolasHug>`.\n\n- |Fix| Fixed a bug in :class:`preprocessing.OneHotEncoder` where transform\n  failed when set to ignore unknown numpy strings of different lengths\n  :issue:`12471` by :user:`Gabriel Marzinotto<GMarzinotto>`.\n\n- |API| The default value of the :code:`method` argument in\n  :func:`preprocessing.power_transform` will be changed from :code:`box-cox`\n  to :code:`yeo-johnson` to match :class:`preprocessing.PowerTransformer`\n  in version 0.23. A FutureWarning is raised when the default value is used.\n  :issue:`12317` by :user:`Eric Chang <chang>`.\n\n:mod:`sklearn.utils`\n........................\n\n- |Fix| Use float64 for mean accumulator to avoid floating point\n  precision issues in :class:`preprocessing.StandardScaler` and\n  :class:`decomposition.IncrementalPCA` when using float32 datasets.\n  :issue:`12338` by :user:`bauks <bauks>`.\n\n- |Fix| Calling :func:`utils.check_array` on `pandas.Series`, which\n  raised an error in 0.20.0, now returns the expected output again.\n  :issue:`12625` by `Andreas M\u00fcller`_\n\nMiscellaneous\n.............\n\n- |Fix| When using site joblib by setting the environment variable\n  `SKLEARN_SITE_JOBLIB`, added compatibility with joblib 0.11 in addition\n  to 0.12+. :issue:`12350` by `Joel Nothman`_ and `Roman Yurchak`_.\n\n- |Fix| Make sure to avoid raising ``FutureWarning`` when calling\n  ``np.vstack`` with numpy 1.16 and later (use list comprehensions\n  instead of generator expressions in many locations of the scikit-learn\n  code base). 
:issue:`12467` by :user:`Olivier Grisel <ogrisel>`.\n\n- |API| Removed all mentions of ``sklearn.externals.joblib``, and deprecated\n  joblib methods exposed in ``sklearn.utils``, except for\n  :func:`utils.parallel_backend` and :func:`utils.register_parallel_backend`,\n  which allow users to configure parallel computation in scikit-learn.\n  Other functionalities are part of the `joblib <https:\/\/joblib.readthedocs.io\/>`_\n  package and should be used directly, by installing it.\n  The goal of this change is to prepare for\n  unvendoring joblib in a future version of scikit-learn.\n  :issue:`12345` by :user:`Thomas Moreau <tomMoral>`.\n\nCode and Documentation Contributors\n-----------------------------------\n\nWith thanks to:\n\n^__^, Adrin Jalali, Andrea Navarrete, Andreas Mueller,\nbauks, BenjaStudio, Cheuk Ting Ho, Connossor,\nCorey Levinson, Dan Stine, daten-kieker, Denis Kataev,\nDillon Gardner, Dmitry Vukolov, Dougal J. Sutherland, Edward J Brown,\nEric Chang, Federico Caselli, Gabriel Marzinotto, Gael Varoquaux,\nGauravAhlawat, Gustavo De Mari Pereira, Hanmin Qin, haroldfox,\nJackLangerman, Jacopo Notarstefano, janvanrijn, jdethurens,\njeremiedbb, Joel Nothman, Joris Van den Bossche, Koen,\nKushal Chauhan, Lee Yi Jie Joel, Lily Xiong, mail-liam,\nMark Hannel, melsyt, Ming Li, Nicholas Smith,\nNicolas Hug, Nikolay Shebanov, Oleksandr Pavlyk, Olivier Grisel,\nPeter Hausamann, Pierre Glaser, Pulkit Maloo, Quentin Batista,\nRadostin Stoyanov, Ramil Nugmanov, Rebekah Kim, Reshama Shaikh,\nRohan Singh, Roman Feldbauer, Roman Yurchak, Roopam Sharma,\nSam Waterbury, Scott Lowe, Sebastian Raschka, Stephen Tierney,\nSylvainLan, TakingItCasual, Thomas Fan, Thomas Moreau,\nTom Dupr\u00e9 la Tour, Tulio Casagrande, Utkarsh Upadhyay, Xing Han Lu,\nYaroslav Halchenko, Zach Miller\n\n\n.. 
_changes_0_20:\n\nVersion 0.20.0\n==============\n\n**September 25, 2018**\n\nThis release packs in a mountain of bug fixes, features and enhancements for\nthe Scikit-learn library, and improvements to the documentation and examples.\nThanks to our contributors!\n\nThis release is dedicated to the memory of Raghav Rajagopalan.\n\nHighlights\n----------\n\nWe have tried to improve our support for common data-science use-cases\nincluding missing values, categorical variables, heterogeneous data, and\nfeatures\/targets with unusual distributions.\nMissing values in features, represented by NaNs, are now accepted in\ncolumn-wise preprocessing such as scalers. Each feature is fitted disregarding\nNaNs, and data containing NaNs can be transformed. The new :mod:`sklearn.impute`\nmodule provides estimators for learning despite missing data.\n\n:class:`~compose.ColumnTransformer` handles the case where different features\nor columns of a pandas.DataFrame need different preprocessing.\nString or pandas Categorical columns can now be encoded with\n:class:`~preprocessing.OneHotEncoder` or\n:class:`~preprocessing.OrdinalEncoder`.\n\n:class:`~compose.TransformedTargetRegressor` helps when the regression target\nneeds to be transformed to be modeled. :class:`~preprocessing.PowerTransformer`\nand :class:`~preprocessing.KBinsDiscretizer` join\n:class:`~preprocessing.QuantileTransformer` as non-linear transformations.\n\nBeyond this, we have added :term:`sample_weight` support to several estimators\n(including :class:`~cluster.KMeans`, :class:`~linear_model.BayesianRidge` and\n:class:`~neighbors.KernelDensity`) and improved stopping criteria in others\n(including :class:`~neural_network.MLPRegressor`,\n:class:`~ensemble.GradientBoostingRegressor` and\n:class:`~linear_model.SGDRegressor`).\n\nThis release is also the first to be accompanied by a :ref:`glossary` developed\nby `Joel Nothman`_. 
The glossary is a reference resource to help users and\ncontributors become familiar with the terminology and conventions used in\nScikit-learn.\n\nSorry if your contribution didn't make it into the highlights. There's a lot\nhere...\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- :class:`cluster.MeanShift` (bug fix)\n- :class:`decomposition.IncrementalPCA` in Python 2 (bug fix)\n- :class:`decomposition.SparsePCA` (bug fix)\n- :class:`ensemble.GradientBoostingClassifier` (bug fix affecting feature importances)\n- :class:`isotonic.IsotonicRegression` (bug fix)\n- :class:`linear_model.ARDRegression` (bug fix)\n- :class:`linear_model.LogisticRegressionCV` (bug fix)\n- :class:`linear_model.OrthogonalMatchingPursuit` (bug fix)\n- :class:`linear_model.PassiveAggressiveClassifier` (bug fix)\n- :class:`linear_model.PassiveAggressiveRegressor` (bug fix)\n- :class:`linear_model.Perceptron` (bug fix)\n- :class:`linear_model.SGDClassifier` (bug fix)\n- :class:`linear_model.SGDRegressor` (bug fix)\n- :class:`metrics.roc_auc_score` (bug fix)\n- :class:`metrics.roc_curve` (bug fix)\n- `neural_network.BaseMultilayerPerceptron` (bug fix)\n- :class:`neural_network.MLPClassifier` (bug fix)\n- :class:`neural_network.MLPRegressor` (bug fix)\n- The v0.19.0 release notes failed to mention a backwards incompatibility with\n  :class:`model_selection.StratifiedKFold` when ``shuffle=True`` due to\n  :issue:`7823`.\n\nDetails are listed in the changelog below.\n\n(While we are trying to better inform users by providing this information, we\ncannot assure that this list is complete.)\n\nKnown Major Bugs\n----------------\n\n* :issue:`11924`: :class:`linear_model.LogisticRegressionCV` with\n  `solver='lbfgs'` and `multi_class='multinomial'` 
may be non-deterministic or\n  otherwise broken on macOS. This appears to be the case on Travis CI servers,\n  but has not been confirmed on personal MacBooks! This issue has been present\n  in previous releases.\n\n* :issue:`9354`: :func:`metrics.pairwise.euclidean_distances` (which is used\n  several times throughout the library) gives results with poor precision,\n  which particularly affects its use with 32-bit float inputs. This became\n  more problematic in versions 0.18 and 0.19 when some algorithms were changed\n  to avoid casting 32-bit data into 64-bit.\n\nChangelog\n---------\n\nSupport for Python 3.3 has been officially dropped.\n\n\n:mod:`sklearn.cluster`\n......................\n\n- |MajorFeature| :class:`cluster.AgglomerativeClustering` now supports Single\n  Linkage clustering via ``linkage='single'``. :issue:`9372` by :user:`Leland\n  McInnes <lmcinnes>` and :user:`Steve Astels <sastels>`.\n\n- |Feature| :class:`cluster.KMeans` and :class:`cluster.MiniBatchKMeans` now support\n  sample weights via a new ``sample_weight`` parameter in the ``fit`` method.\n  :issue:`10933` by :user:`Johannes Hansen <jnhansen>`.\n\n- |Efficiency| :class:`cluster.KMeans`, :class:`cluster.MiniBatchKMeans` and\n  :func:`cluster.k_means` passed with ``algorithm='full'`` now enforce\n  row-major ordering, improving runtime.\n  :issue:`10471` by :user:`Gaurav Dhingra <gxyd>`.\n\n- |Efficiency| :class:`cluster.DBSCAN` is now parallelized according to ``n_jobs``\n  regardless of ``algorithm``.\n  :issue:`8003` by :user:`Jo\u00ebl Billaud <recamshak>`.\n\n- |Enhancement| :class:`cluster.KMeans` now gives a warning if the number of\n  distinct clusters found is smaller than ``n_clusters``. 
This may occur when\n  the number of distinct points in the data set is actually smaller than the\n  number of clusters one is looking for.\n  :issue:`10059` by :user:`Christian Braune <christianbraune79>`.\n\n- |Fix| Fixed a bug where the ``fit`` method of\n  :class:`cluster.AffinityPropagation` stored cluster\n  centers as a 3d array instead of a 2d array in case of non-convergence. For the\n  same class, fixed undefined and arbitrary behavior in case of training data\n  where all samples had equal similarity.\n  :issue:`9612` by :user:`Jonatan Samoocha <jsamoocha>`.\n\n- |Fix| Fixed a bug in :func:`cluster.spectral_clustering` where the normalization of\n  the spectrum was using a division instead of a multiplication. :issue:`8129`\n  by :user:`Jan Margeta <jmargeta>`, :user:`Guillaume Lemaitre <glemaitre>`,\n  and :user:`Devansh D. <devanshdalal>`.\n\n- |Fix| Fixed a bug in `cluster.k_means_elkan` where the returned\n  ``iteration`` was 1 less than the correct value. Also added the missing\n  ``n_iter_`` attribute in the docstring of :class:`cluster.KMeans`.\n  :issue:`11353` by :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Fixed a bug in :func:`cluster.mean_shift` where the assigned labels\n  were not deterministic if there were multiple clusters with the same\n  intensities.\n  :issue:`11901` by :user:`Adrin Jalali <adrinjalali>`.\n\n- |API| Deprecate the unused ``pooling_func`` parameter in\n  :class:`cluster.AgglomerativeClustering`.\n  :issue:`9875` by :user:`Kumar Ashutosh <thechargedneutron>`.\n\n\n:mod:`sklearn.compose`\n......................\n\n- New module.\n\n- |MajorFeature| Added :class:`compose.ColumnTransformer`, which allows\n  applying different transformers to different columns of arrays or pandas\n  DataFrames. 
:issue:`9012` by `Andreas M\u00fcller`_ and `Joris Van den Bossche`_,\n  and :issue:`11315` by :user:`Thomas Fan <thomasjpfan>`.\n\n- |MajorFeature| Added the :class:`compose.TransformedTargetRegressor` which\n  transforms the target y before fitting a regression model. The predictions\n  are mapped back to the original space via an inverse transform. :issue:`9041`\n  by `Andreas M\u00fcller`_ and :user:`Guillaume Lemaitre <glemaitre>`.\n\n\n\n:mod:`sklearn.covariance`\n.........................\n\n- |Efficiency| Runtime improvements to :class:`covariance.GraphicalLasso`.\n  :issue:`9858` by :user:`Steven Brown <stevendbrown>`.\n\n- |API| The `covariance.graph_lasso`,\n  `covariance.GraphLasso` and `covariance.GraphLassoCV` have been\n  renamed to :func:`covariance.graphical_lasso`,\n  :class:`covariance.GraphicalLasso` and :class:`covariance.GraphicalLassoCV`\n  respectively and will be removed in version 0.22.\n  :issue:`9993` by :user:`Artiem Krinitsyn <artiemq>`\n\n\n:mod:`sklearn.datasets`\n.......................\n\n- |MajorFeature| Added :func:`datasets.fetch_openml` to fetch datasets from\n  `OpenML <https:\/\/openml.org>`_. OpenML is a free, open data sharing platform\n  and will be used instead of mldata as it provides better service availability.\n  :issue:`9908` by `Andreas M\u00fcller`_ and :user:`Jan N. van Rijn <janvanrijn>`.\n\n- |Feature| In :func:`datasets.make_blobs`, one can now pass a list to the\n  ``n_samples`` parameter to indicate the number of samples to generate per\n  cluster. 
:issue:`8617` by :user:`Maskani Filali Mohamed <maskani-moh>` and\n  :user:`Konstantinos Katrioplas <kkatrio>`.\n\n- |Feature| Add a ``filename`` attribute to the :mod:`sklearn.datasets` loaders that\n  have a CSV file.\n  :issue:`9101` by :user:`alex-33 <alex-33>`\n  and :user:`Maskani Filali Mohamed <maskani-moh>`.\n\n- |Feature| The ``return_X_y`` parameter has been added to several dataset loaders.\n  :issue:`10774` by :user:`Chris Catalfo <ccatalfo>`.\n\n- |Fix| Fixed a bug in `datasets.load_boston` which had a wrong data\n  point. :issue:`10795` by :user:`Takeshi Yoshizawa <tarcusx>`.\n\n- |Fix| Fixed a bug in :func:`datasets.load_iris` which had two wrong data points.\n  :issue:`11082` by :user:`Sadhana Srinivasan <rotuna>`\n  and :user:`Hanmin Qin <qinhanmin2014>`.\n\n- |Fix| Fixed a bug in :func:`datasets.fetch_kddcup99`, where data were not\n  properly shuffled. :issue:`9731` by `Nicolas Goix`_.\n\n- |Fix| Fixed a bug in :func:`datasets.make_circles`, where an odd number of\n  data points could not be generated. :issue:`10045` by :user:`Christian Braune\n  <christianbraune79>`.\n\n- |API| Deprecated `sklearn.datasets.fetch_mldata` to be removed in\n  version 0.22. mldata.org is no longer operational. Until removal it will\n  remain possible to load cached datasets. :issue:`11466` by `Joel Nothman`_.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Feature| :func:`decomposition.dict_learning` functions and models now\n  support positivity constraints. This applies to the dictionary and sparse\n  code. :issue:`6374` by :user:`John Kirkham <jakirkham>`.\n\n- |Feature| |Fix| :class:`decomposition.SparsePCA` now exposes\n  ``normalize_components``. When set to True, the train and test data are\n  centered with the train mean respectively during the fit phase and the\n  transform phase. This fixes the behavior of SparsePCA. When set to False,\n  which is the default, the previous abnormal behaviour still holds. 
The False\n  value is for backward compatibility and should not be used. :issue:`11585`\n  by :user:`Ivan Panico <FollowKenny>`.\n\n- |Efficiency| Efficiency improvements in :func:`decomposition.dict_learning`.\n  :issue:`11420` and others by :user:`John Kirkham <jakirkham>`.\n\n- |Fix| Fix for uninformative error in :class:`decomposition.IncrementalPCA`:\n  now an error is raised if the number of components is larger than the\n  chosen batch size. The ``n_components=None`` case was adapted accordingly.\n  :issue:`6452` by :user:`Wally Gauze <wallygauze>`.\n\n- |Fix| Fixed a bug where the ``partial_fit`` method of\n  :class:`decomposition.IncrementalPCA` used integer division instead of float\n  division on Python 2.\n  :issue:`9492` by :user:`James Bourbeau <jrbourbeau>`.\n\n- |Fix| In :class:`decomposition.PCA`, selecting an ``n_components`` value greater\n  than the number of samples now raises an error. Similarly, the\n  ``n_components=None`` case now selects the minimum of ``n_samples`` and\n  ``n_features``.\n  :issue:`8484` by :user:`Wally Gauze <wallygauze>`.\n\n- |Fix| Fixed a bug in :class:`decomposition.PCA` where users would get an\n  unexpected error with large datasets when ``n_components='mle'`` on Python 3\n  versions.\n  :issue:`9886` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n- |Fix| Fixed an underflow in calculating KL-divergence for\n  :class:`decomposition.NMF`. :issue:`10142` by `Tom Dupre la Tour`_.\n\n- |Fix| Fixed a bug in :class:`decomposition.SparseCoder` when running OMP\n  sparse coding in parallel using read-only memory mapped datastructures.\n  :issue:`5956` by :user:`Vighnesh Birodkar <vighneshbirodkar>` and\n  :user:`Olivier Grisel <ogrisel>`.\n\n\n:mod:`sklearn.discriminant_analysis`\n....................................\n\n- |Efficiency| Memory usage improvement for `_class_means` and\n  `_class_cov` in :mod:`sklearn.discriminant_analysis`. 
:issue:`10898` by\n  :user:`Nanxin Chen <bobchennan>`.\n\n\n:mod:`sklearn.dummy`\n....................\n\n- |Feature| :class:`dummy.DummyRegressor` now has a ``return_std`` option in its\n  ``predict`` method. The returned standard deviations will be zeros.\n\n- |Feature| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor` now\n  only require X to be an object with finite length or shape. :issue:`9832` by\n  :user:`Vrishank Bhardwaj <vrishank97>`.\n\n- |Feature| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`\n  can now be scored without supplying test samples.\n  :issue:`11951` by :user:`R\u00fcdiger Busche <JarnoRFB>`.\n\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Feature| :class:`ensemble.BaggingRegressor` and\n  :class:`ensemble.BaggingClassifier` can now be fit with missing\/non-finite\n  values in X and\/or multi-output Y to support wrapping pipelines that perform\n  their own imputation. :issue:`9707` by :user:`Jimmy Wan <jimmywan>`.\n\n- |Feature| :class:`ensemble.GradientBoostingClassifier` and\n  :class:`ensemble.GradientBoostingRegressor` now support early stopping\n  via ``n_iter_no_change``, ``validation_fraction`` and ``tol``. :issue:`7071`\n  by `Raghav RV`_\n\n- |Feature| Added ``named_estimators_`` parameter in\n  :class:`ensemble.VotingClassifier` to access fitted estimators.\n  :issue:`9157` by :user:`Herilalaina Rakotoarison <herilalaina>`.\n\n- |Fix| Fixed a bug when fitting :class:`ensemble.GradientBoostingClassifier` or\n  :class:`ensemble.GradientBoostingRegressor` with ``warm_start=True`` which\n  previously raised a segmentation fault due to a non-conversion of CSC matrix\n  into CSR format expected by ``decision_function``. Similarly, Fortran-ordered\n  arrays are converted to C-ordered arrays in the dense case. 
:issue:`9991` by\n  :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingRegressor`\n  and :class:`ensemble.GradientBoostingClassifier` to have\n  feature importances summed and then normalized, rather than normalizing on a\n  per-tree basis. The previous behavior over-weighted the Gini importance of\n  features that appear in later stages. This issue only affected feature\n  importances. :issue:`11176` by :user:`Gil Forsyth <gforsyth>`.\n\n- |API| The default value of the ``n_estimators`` parameter of\n  :class:`ensemble.RandomForestClassifier`, :class:`ensemble.RandomForestRegressor`,\n  :class:`ensemble.ExtraTreesClassifier`, :class:`ensemble.ExtraTreesRegressor`,\n  and :class:`ensemble.RandomTreesEmbedding` will change from 10 in version 0.20\n  to 100 in 0.22. A FutureWarning is raised when the default value is used.\n  :issue:`11542` by :user:`Anna Ayzenshtat <annaayzenshtat>`.\n\n- |API| In classes derived from `ensemble.BaseBagging`, the attribute\n  ``estimators_samples_`` will now return a list of arrays containing the\n  indices selected for each bootstrap, instead of a list of arrays containing\n  the mask of the samples selected for each bootstrap. Indices allow repeating\n  samples, while masks do not.\n  :issue:`9524` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Fixed a bug in `ensemble.BaseBagging` where one could not\n  deterministically reproduce the ``fit`` result using the object attributes\n  when ``random_state`` is set. :issue:`9723` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Feature| Enable the call to `get_feature_names` in unfitted\n  :class:`feature_extraction.text.CountVectorizer` initialized with a\n  vocabulary. 
:issue:`10908` by :user:`Mohamed Maskani <maskani-moh>`.\n\n- |Enhancement| ``idf_`` can now be set on a\n  :class:`feature_extraction.text.TfidfTransformer`.\n  :issue:`10899` by :user:`Sergey Melderis <serega>`.\n\n- |Fix| Fixed a bug in :func:`feature_extraction.image.extract_patches_2d` which\n  would throw an exception if ``max_patches`` was greater than or equal to the\n  number of all possible patches rather than simply returning the number of\n  possible patches. :issue:`10101` by :user:`Varun Agrawal <varunagrawal>`\n\n- |Fix| Fixed a bug in :class:`feature_extraction.text.CountVectorizer`,\n  :class:`feature_extraction.text.TfidfVectorizer`,\n  :class:`feature_extraction.text.HashingVectorizer` to support 64 bit sparse\n  array indexing necessary to process large datasets with more than 2\u00b710\u2079 tokens\n  (words or n-grams). :issue:`9147` by :user:`Claes-Fredrik Mannby <mannby>`\n  and `Roman Yurchak`_.\n\n- |Fix| Fixed bug in :class:`feature_extraction.text.TfidfVectorizer` which\n  was ignoring the parameter ``dtype``. In addition,\n  :class:`feature_extraction.text.TfidfTransformer` will preserve ``dtype``\n  for floating and raise a warning if ``dtype`` requested is integer.\n  :issue:`10441` by :user:`Mayur Kulkarni <maykulkarni>` and\n  :user:`Guillaume Lemaitre <glemaitre>`.\n\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Feature| Added select K best features functionality to\n  :class:`feature_selection.SelectFromModel`.\n  :issue:`6689` by :user:`Nihar Sheth <nsheth12>` and\n  :user:`Quazi Rahman <qmaruf>`.\n\n- |Feature| Added ``min_features_to_select`` parameter to\n  :class:`feature_selection.RFECV` to bound evaluated features counts.\n  :issue:`11293` by :user:`Brent Yi <brentyi>`.\n\n- |Feature| :class:`feature_selection.RFECV`'s fit method now supports\n  :term:`groups`.  
:issue:`9656` by :user:`Adam Greenhall <adamgreenhall>`.\n\n- |Fix| Fixed computation of ``n_features_to_compute`` for an edge case with tied\n  CV scores in :class:`feature_selection.RFECV`.\n  :issue:`9222` by :user:`Nick Hoh <nickypie>`.\n\n:mod:`sklearn.gaussian_process`\n...............................\n\n- |Efficiency| In :class:`gaussian_process.GaussianProcessRegressor`, the\n  ``predict`` method is faster when using ``return_std=True``, in particular\n  when called several times in a row. :issue:`9234` by :user:`andrewww <andrewww>`\n  and :user:`Minghui Liu <minghui-liu>`.\n\n\n:mod:`sklearn.impute`\n.....................\n\n- New module, adopting ``preprocessing.Imputer`` as\n  :class:`impute.SimpleImputer` with minor changes (see under preprocessing\n  below).\n\n- |MajorFeature| Added :class:`impute.MissingIndicator` which generates a\n  binary indicator for missing values. :issue:`8075` by :user:`Maniteja Nandana\n  <maniteja123>` and :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Feature| The :class:`impute.SimpleImputer` has a new strategy,\n  ``'constant'``, to complete missing values with a fixed one, given by the\n  ``fill_value`` parameter. This strategy supports numeric and non-numeric\n  data, and so does the ``'most_frequent'`` strategy now. 
:issue:`11211` by\n  :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n\n:mod:`sklearn.isotonic`\n.......................\n\n- |Fix| Fixed a bug in :class:`isotonic.IsotonicRegression` which incorrectly\n  combined weights when fitting a model to data involving points with\n  identical X values.\n  :issue:`9484` by :user:`Dallas Card <dallascard>`.\n\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Feature| :class:`linear_model.SGDClassifier`,\n  :class:`linear_model.SGDRegressor`,\n  :class:`linear_model.PassiveAggressiveClassifier`,\n  :class:`linear_model.PassiveAggressiveRegressor` and\n  :class:`linear_model.Perceptron` now expose ``early_stopping``,\n  ``validation_fraction`` and ``n_iter_no_change`` parameters, to stop\n  optimization by monitoring the score on a validation set. A new learning rate\n  ``\"adaptive\"`` strategy divides the learning rate by 5 each time\n  ``n_iter_no_change`` consecutive epochs fail to improve the model.\n  :issue:`9043` by `Tom Dupre la Tour`_.\n\n- |Feature| Add a `sample_weight` parameter to the fit method of\n  :class:`linear_model.BayesianRidge` for weighted linear regression.\n  :issue:`10112` by :user:`Peter St. 
John <pstjohn>`.\n\n- |Fix| Fixed a bug in `logistic.logistic_regression_path` to ensure\n  that the returned coefficients are correct when ``multi_class='multinomial'``.\n  Previously, some of the coefficients would override each other, leading to\n  incorrect results in :class:`linear_model.LogisticRegressionCV`.\n  :issue:`11724` by :user:`Nicolas Hug <NicolasHug>`.\n\n- |Fix| Fixed a bug in :class:`linear_model.LogisticRegression` where, when using\n  the parameter ``multi_class='multinomial'``, the ``predict_proba`` method was\n  returning incorrect probabilities in the case of binary outcomes.\n  :issue:`9939` by :user:`Roger Westover <rwolst>`.\n\n- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where the\n  ``score`` method always computed accuracy, not the metric given by\n  the ``scoring`` parameter.\n  :issue:`10998` by :user:`Thomas Fan <thomasjpfan>`.\n\n- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where the\n  'ovr' strategy was always used to compute cross-validation scores in the\n  multiclass setting, even if ``'multinomial'`` was set.\n  :issue:`8720` by :user:`William de Vazelhes <wdevazelhes>`.\n\n- |Fix| Fixed a bug in :class:`linear_model.OrthogonalMatchingPursuit` that was\n  broken when setting ``normalize=False``.\n  :issue:`10071` by `Alexandre Gramfort`_.\n\n- |Fix| Fixed a bug in :class:`linear_model.ARDRegression` which caused\n  incorrectly updated estimates for the standard deviation and the\n  coefficients. 
:issue:`10153` by :user:`J\u00f6rg D\u00f6pfert <jdoepfert>`.\n\n- |Fix| Fixed a bug in :class:`linear_model.ARDRegression` and\n  :class:`linear_model.BayesianRidge` which caused NaN predictions when fitted\n  with a constant target.\n  :issue:`10095` by :user:`J\u00f6rg D\u00f6pfert <jdoepfert>`.\n\n- |Fix| Fixed a bug in :class:`linear_model.RidgeClassifierCV` where\n  the parameter ``store_cv_values`` was not implemented though\n  it was documented in ``cv_values`` as a way to set up the storage\n  of cross-validation values for different alphas. :issue:`10297` by\n  :user:`Mabel Villalba-Jim\u00e9nez <mabelvj>`.\n\n- |Fix| Fixed a bug in :class:`linear_model.ElasticNet` which caused the input\n  to be overridden when using parameter ``copy_X=True`` and\n  ``check_input=False``. :issue:`10581` by :user:`Yacine Mazari <ymazari>`.\n\n- |Fix| Fixed a bug in :class:`sklearn.linear_model.Lasso`\n  where the coefficient had the wrong shape when ``fit_intercept=False``.\n  :issue:`10687` by :user:`Martin Hahn <martin-hahn>`.\n\n- |Fix| Fixed a bug in :class:`sklearn.linear_model.LogisticRegression` where\n  fitting with ``multi_class='multinomial'`` and binary output was broken when\n  ``warm_start=True``.\n  :issue:`10836` by :user:`Aishwarya Srinivasan <aishgrt1>`.\n\n- |Fix| Fixed a bug in :class:`linear_model.RidgeCV` where using integer\n  ``alphas`` raised an error.\n  :issue:`10397` by :user:`Mabel Villalba-Jim\u00e9nez <mabelvj>`.\n\n- |Fix| Fixed the condition triggering gap computation in\n  :class:`linear_model.Lasso` and :class:`linear_model.ElasticNet` when working\n  with sparse matrices. :issue:`10992` by `Alexandre Gramfort`_.\n\n- |Fix| Fixed a bug in :class:`linear_model.SGDClassifier`,\n  :class:`linear_model.SGDRegressor`,\n  :class:`linear_model.PassiveAggressiveClassifier`,\n  :class:`linear_model.PassiveAggressiveRegressor` and\n  :class:`linear_model.Perceptron`, where the stopping criterion was stopping\n  the algorithm before convergence. 
A parameter ``n_iter_no_change`` was added\n  and set by default to 5. Previous behavior is equivalent to setting the\n  parameter to 1. :issue:`9043` by `Tom Dupre la Tour`_.\n\n- |Fix| Fixed a bug where liblinear and libsvm-based estimators would segfault\n  if passed a scipy.sparse matrix with 64-bit indices. They now raise a\n  ValueError.\n  :issue:`11327` by :user:`Karan Dhingra <kdhingra307>` and `Joel Nothman`_.\n\n- |API| The default values of the ``solver`` and ``multi_class`` parameters of\n  :class:`linear_model.LogisticRegression` will change respectively from\n  ``'liblinear'`` and ``'ovr'`` in version 0.20 to ``'lbfgs'`` and\n  ``'auto'`` in version 0.22. A FutureWarning is raised when the default\n  values are used. :issue:`11905` by `Tom Dupre la Tour`_ and `Joel Nothman`_.\n\n- |API| Deprecate ``positive=True`` option in :class:`linear_model.Lars` as\n  the underlying implementation is broken. Use :class:`linear_model.Lasso`\n  instead. :issue:`9837` by `Alexandre Gramfort`_.\n\n- |API| ``n_iter_`` may vary from previous releases in\n  :class:`linear_model.LogisticRegression` with ``solver='lbfgs'`` and\n  :class:`linear_model.HuberRegressor`. For Scipy <= 1.0.0, the optimizer could\n  perform more than the requested maximum number of iterations. Now both\n  estimators will report at most ``max_iter`` iterations even if more were\n  performed. :issue:`10723` by `Joel Nothman`_.\n\n\n:mod:`sklearn.manifold`\n.......................\n\n- |Efficiency| Speed improvements for both 'exact' and 'barnes_hut' methods in\n  :class:`manifold.TSNE`. :issue:`10593` and :issue:`10610` by\n  `Tom Dupre la Tour`_.\n\n- |Feature| Support sparse input in :meth:`manifold.Isomap.fit`.\n  :issue:`8554` by :user:`Leland McInnes <lmcinnes>`.\n\n- |Feature| `manifold.t_sne.trustworthiness` accepts metrics other than\n  Euclidean. 
:issue:`9775` by :user:`William de Vazelhes <wdevazelhes>`.\n\n- |Fix| Fixed a bug in :func:`manifold.spectral_embedding` where the\n  normalization of the spectrum was using a division instead of a\n  multiplication. :issue:`8129` by :user:`Jan Margeta <jmargeta>`,\n  :user:`Guillaume Lemaitre <glemaitre>`, and :user:`Devansh D.\n  <devanshdalal>`.\n\n- |API| |Feature| Deprecate ``precomputed`` parameter in function\n  `manifold.t_sne.trustworthiness`. Instead, the new parameter ``metric``\n  should be used with any compatible metric including 'precomputed', in which\n  case the input matrix ``X`` should be a matrix of pairwise distances or\n  squared distances. :issue:`9775` by :user:`William de Vazelhes\n  <wdevazelhes>`.\n\n\n:mod:`sklearn.metrics`\n......................\n\n- |MajorFeature| Added the :func:`metrics.davies_bouldin_score` metric for\n  evaluation of clustering models without a ground truth. :issue:`10827` by\n  :user:`Luis Osa <logc>`.\n\n- |MajorFeature| Added the :func:`metrics.balanced_accuracy_score` metric and\n  a corresponding ``'balanced_accuracy'`` scorer for binary and multiclass\n  classification. :issue:`8066` by :user:`xyguo` and :user:`Aman Dalmia\n  <dalmia>`, and :issue:`10587` by `Joel Nothman`_.\n\n- |Feature| Partial AUC is available via ``max_fpr`` parameter in\n  :func:`metrics.roc_auc_score`. :issue:`3840` by\n  :user:`Alexander Niederb\u00fchl <Alexander-N>`.\n\n- |Feature| A scorer based on :func:`metrics.brier_score_loss` is also\n  available. 
:issue:`9521` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n- |Feature| Added control over the normalization in\n  :func:`metrics.normalized_mutual_info_score` and\n  :func:`metrics.adjusted_mutual_info_score` via the ``average_method``\n  parameter. In version 0.22, the default normalizer for each will become\n  the *arithmetic* mean of the entropies of each clustering. :issue:`11124` by\n  :user:`Arya McCarthy <aryamccarthy>`.\n\n- |Feature| Added ``output_dict`` parameter in :func:`metrics.classification_report`\n  to return classification statistics as dictionary.\n  :issue:`11160` by :user:`Dan Barkhorn <danielbarkhorn>`.\n\n- |Feature| :func:`metrics.classification_report` now reports all applicable averages on\n  the given data, including micro, macro and weighted average as well as samples\n  average for multilabel data. :issue:`11679` by :user:`Alexander Pacha <apacha>`.\n\n- |Feature| :func:`metrics.average_precision_score` now supports binary\n  ``y_true`` other than ``{0, 1}`` or ``{-1, 1}`` through ``pos_label``\n  parameter. :issue:`9980` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n- |Feature| :func:`metrics.label_ranking_average_precision_score` now supports\n  ``sample_weight``.\n  :issue:`10845` by :user:`Jose Perez-Parras Toledano <jopepato>`.\n\n- |Feature| Add ``dense_output`` parameter to :func:`metrics.pairwise.linear_kernel`.\n  When False and both inputs are sparse, will return a sparse matrix.\n  :issue:`10999` by :user:`Taylor G Smith <tgsmith61591>`.\n\n- |Efficiency| :func:`metrics.silhouette_score` and\n  :func:`metrics.silhouette_samples` are more memory efficient and run\n  faster. 
This avoids some reported freezes and MemoryErrors.\n  :issue:`11135` by `Joel Nothman`_.\n\n- |Fix| Fixed a bug in :func:`metrics.precision_recall_fscore_support`\n  when a truncated `range(n_labels)` is passed as the value for `labels`.\n  :issue:`10377` by :user:`Gaurav Dhingra <gxyd>`.\n\n- |Fix| Fixed a bug due to floating point error in\n  :func:`metrics.roc_auc_score` with non-integer sample weights. :issue:`9786`\n  by :user:`Hanmin Qin <qinhanmin2014>`.\n\n- |Fix| Fixed a bug where :func:`metrics.roc_curve` sometimes starts on the y-axis\n  instead of (0, 0), which is inconsistent with the documentation and other\n  implementations. Note that this will not influence the result from\n  :func:`metrics.roc_auc_score`. :issue:`10093` by :user:`alexryndin\n  <alexryndin>` and :user:`Hanmin Qin <qinhanmin2014>`.\n\n- |Fix| Fixed an integer overflow in :func:`metrics.mutual_info_score` by\n  casting the product to a 64-bit integer.\n  :issue:`9772` by :user:`Kumar Ashutosh <thechargedneutron>`.\n\n- |Fix| Fixed a bug where :func:`metrics.average_precision_score` would sometimes\n  return ``nan`` when ``sample_weight`` contained 0.\n  :issue:`9980` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n- |Fix| Fixed a bug in :func:`metrics.fowlkes_mallows_score` to avoid integer\n  overflow. The return value of `contingency_matrix` is cast to `int64` and the\n  product of square roots is computed rather than the square root of the product.\n  :issue:`9515` by :user:`Alan Liddell <aliddell>` and\n  :user:`Manh Dao <manhdao>`.\n\n- |API| Deprecate the ``reorder`` parameter in :func:`metrics.auc` as it's no\n  longer required for :func:`metrics.roc_auc_score`. Moreover, using\n  ``reorder=True`` can hide bugs due to floating point error in the input.\n  :issue:`9851` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n- |API| In :func:`metrics.normalized_mutual_info_score` and\n  :func:`metrics.adjusted_mutual_info_score`, warn that\n  ``average_method`` will have a new default value. 
In version 0.22, the\n  default normalizer for each will become the *arithmetic* mean of the\n  entropies of each clustering. Currently,\n  :func:`metrics.normalized_mutual_info_score` uses the default of\n  ``average_method='geometric'``, and\n  :func:`metrics.adjusted_mutual_info_score` uses the default of\n  ``average_method='max'`` to match their behaviors in version 0.19.\n  :issue:`11124` by :user:`Arya McCarthy <aryamccarthy>`.\n\n- |API| The ``batch_size`` parameter to :func:`metrics.pairwise_distances_argmin_min`\n  and :func:`metrics.pairwise_distances_argmin` is deprecated to be removed in\n  v0.22. It no longer has any effect, as batch size is determined by the global\n  ``working_memory`` config. See :ref:`working_memory`. :issue:`10280` by `Joel\n  Nothman`_ and :user:`Aman Dalmia <dalmia>`.\n\n\n:mod:`sklearn.mixture`\n......................\n\n- |Feature| Added function :term:`fit_predict` to :class:`mixture.GaussianMixture`\n  and :class:`mixture.BayesianGaussianMixture`, which is essentially equivalent to\n  calling :term:`fit` and :term:`predict`. :issue:`10336` by :user:`Shu Haoran\n  <haoranShu>` and :user:`Andrew Peng <Andrew-peng>`.\n\n- |Fix| Fixed a bug in `mixture.BaseMixture` where the reported `n_iter_` was\n  missing an iteration. It affected :class:`mixture.GaussianMixture` and\n  :class:`mixture.BayesianGaussianMixture`. :issue:`10740` by :user:`Erich\n  Schubert <kno10>` and :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Fixed a bug in `mixture.BaseMixture` and its subclasses\n  :class:`mixture.GaussianMixture` and :class:`mixture.BayesianGaussianMixture`\n  where the ``lower_bound_`` was not the max lower bound across all\n  initializations (when ``n_init > 1``), but just the lower bound of the last\n  initialization. 
:issue:`10869` by :user:`Aur\u00e9lien G\u00e9ron <ageron>`.\n\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Feature| Add `return_estimator` parameter in\n  :func:`model_selection.cross_validate` to return estimators fitted on each\n  split. :issue:`9686` by :user:`Aur\u00e9lien Bellet <bellet>`.\n\n- |Feature| New ``refit_time_`` attribute will be stored in\n  :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` if ``refit`` is set to ``True``.\n  This will allow measuring the complete time it takes to perform\n  hyperparameter optimization and refitting the best model on the whole\n  dataset. :issue:`11310` by :user:`Matthias Feurer <mfeurer>`.\n\n- |Feature| Expose `error_score` parameter in\n  :func:`model_selection.cross_validate`,\n  :func:`model_selection.cross_val_score`,\n  :func:`model_selection.learning_curve` and\n  :func:`model_selection.validation_curve` to control the behavior triggered\n  when an error occurs in `model_selection._fit_and_score`.\n  :issue:`11576` by :user:`Samuel O. Ronsin <samronsin>`.\n\n- |Feature| `BaseSearchCV` now has an experimental, private interface to\n  support customized parameter search strategies, through its ``_run_search``\n  method. See the implementations in :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` and please provide feedback if\n  you use this. Note that we do not assure the stability of this API beyond\n  version 0.20. :issue:`9599` by `Joel Nothman`_\n\n- |Enhancement| Add improved error message in\n  :func:`model_selection.cross_val_score` when multiple metrics are passed in\n  ``scoring`` keyword. 
:issue:`11006` by :user:`Ming Li <minggli>`.\n\n- |API| The default number of cross-validation folds ``cv`` and the default\n  number of splits ``n_splits`` in the :class:`model_selection.KFold`-like\n  splitters will change from 3 to 5 in 0.22 as 3-fold has a lot of variance.\n  :issue:`11557` by :user:`Alexandre Boucaud <aboucaud>`.\n\n- |API| The default of ``iid`` parameter of :class:`model_selection.GridSearchCV`\n  and :class:`model_selection.RandomizedSearchCV` will change from ``True`` to\n  ``False`` in version 0.22 to correspond to the standard definition of\n  cross-validation, and the parameter will be removed in version 0.24\n  altogether. This parameter is of greatest practical significance where the\n  sizes of different test sets in cross-validation were very unequal, i.e. in\n  group-based CV strategies. :issue:`9085` by :user:`Laurent Direr <ldirer>`\n  and `Andreas M\u00fcller`_.\n\n- |API| The default value of the ``error_score`` parameter in\n  :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` will change to ``np.NaN`` in\n  version 0.22. :issue:`10677` by :user:`Kirill Zhdanovich <Zhdanovich>`.\n\n- |API| Changed ValueError exception raised in\n  :class:`model_selection.ParameterSampler` to a UserWarning for case where the\n  class is instantiated with a greater value of ``n_iter`` than the total space\n  of parameters in the parameter grid. ``n_iter`` now acts as an upper bound on\n  iterations. :issue:`10982` by :user:`Juliet Lawton <julietcl>`\n\n- |API| Invalid input for :class:`model_selection.ParameterGrid` now\n  raises TypeError.\n  :issue:`10928` by :user:`Solutus Immensus <solutusimmensus>`\n\n\n:mod:`sklearn.multioutput`\n..........................\n\n- |MajorFeature| Added :class:`multioutput.RegressorChain` for multi-target\n  regression. 
:issue:`9257` by :user:`Kumar Ashutosh <thechargedneutron>`.\n\n\n:mod:`sklearn.naive_bayes`\n..........................\n\n- |MajorFeature| Added :class:`naive_bayes.ComplementNB`, which implements the\n  Complement Naive Bayes classifier described in Rennie et al. (2003).\n  :issue:`8190` by :user:`Michael A. Alcorn <airalcorn2>`.\n\n- |Feature| Add `var_smoothing` parameter in :class:`naive_bayes.GaussianNB`\n  to give precise control over the variance calculation.\n  :issue:`9681` by :user:`Dmitry Mottl <Mottl>`.\n\n- |Fix| Fixed a bug in :class:`naive_bayes.GaussianNB` which incorrectly\n  raised an error for a prior list which summed to 1.\n  :issue:`10005` by :user:`Gaurav Dhingra <gxyd>`.\n\n- |Fix| Fixed a bug in :class:`naive_bayes.MultinomialNB` which did not accept\n  vector-valued pseudocounts (alpha).\n  :issue:`10346` by :user:`Tobias Madsen <TobiasMadsen>`\n\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Efficiency| :class:`neighbors.RadiusNeighborsRegressor` and\n  :class:`neighbors.RadiusNeighborsClassifier` are now\n  parallelized according to ``n_jobs`` regardless of ``algorithm``.\n  :issue:`10887` by :user:`Jo\u00ebl Billaud <recamshak>`.\n\n- |Efficiency| :mod:`sklearn.neighbors` query methods are now more\n  memory efficient when ``algorithm='brute'``.\n  :issue:`11136` by `Joel Nothman`_ and :user:`Aman Dalmia <dalmia>`.\n\n- |Feature| Add ``sample_weight`` parameter to the fit method of\n  :class:`neighbors.KernelDensity` to enable weighting in kernel density\n  estimation.\n  :issue:`4394` by :user:`Samuel O. Ronsin <samronsin>`.\n\n- |Feature| Novelty detection with :class:`neighbors.LocalOutlierFactor`:\n  Add a ``novelty`` parameter to :class:`neighbors.LocalOutlierFactor`. When\n  ``novelty`` is set to True, :class:`neighbors.LocalOutlierFactor` can then\n  be used for novelty detection, i.e. predict on new unseen data. Available\n  prediction methods are ``predict``, ``decision_function`` and\n  ``score_samples``.
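A minimal sketch of this workflow on synthetic data (the training cluster and the query points below are illustrative assumptions, not part of the original entry):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(42)
X_train = 0.3 * rng.randn(100, 2)      # dense inlier cluster near the origin

# novelty=True enables predict/decision_function/score_samples on unseen data.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof.fit(X_train)

X_new = np.array([[0.0, 0.0], [4.0, 4.0]])
pred = lof.predict(X_new)              # +1 for inliers, -1 for outliers
```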
By default, ``novelty`` is set to ``False``, and only\n  the ``fit_predict`` method is available.\n  By :user:`Albert Thomas <albertcthomas>`.\n\n- |Fix| Fixed a bug in :class:`neighbors.NearestNeighbors` where fitting a\n  NearestNeighbors model fails when a) the distance metric used is a\n  callable and b) the input to the NearestNeighbors model is sparse.\n  :issue:`9579` by :user:`Thomas Kober <tttthomasssss>`.\n\n- |Fix| Fixed a bug so ``predict`` in\n  :class:`neighbors.RadiusNeighborsRegressor` can handle empty neighbor set\n  when using non uniform weights. Also raises a new warning when no neighbors\n  are found for samples. :issue:`9655` by :user:`Andreas Bjerre-Nielsen\n  <abjer>`.\n\n- |Fix| |Efficiency| Fixed a bug in ``KDTree`` construction that results in\n  faster construction and querying times.\n  :issue:`11556` by :user:`Jake VanderPlas <jakevdp>`\n\n- |Fix| Fixed a bug in :class:`neighbors.KDTree` and :class:`neighbors.BallTree` where\n  pickled tree objects would change their type to the super class `BinaryTree`.\n  :issue:`11774` by :user:`Nicolas Hug <NicolasHug>`.\n\n\n:mod:`sklearn.neural_network`\n.............................\n\n- |Feature| Add `n_iter_no_change` parameter in\n  `neural_network.BaseMultilayerPerceptron`,\n  :class:`neural_network.MLPRegressor`, and\n  :class:`neural_network.MLPClassifier` to give control over\n  maximum number of epochs to not meet ``tol`` improvement.\n  :issue:`9456` by :user:`Nicholas Nadeau <nnadeau>`.\n\n- |Fix| Fixed a bug in `neural_network.BaseMultilayerPerceptron`,\n  :class:`neural_network.MLPRegressor`, and\n  :class:`neural_network.MLPClassifier` with new ``n_iter_no_change``\n  parameter now at 10 from previously hardcoded 2.\n  :issue:`9456` by :user:`Nicholas Nadeau <nnadeau>`.\n\n- |Fix| Fixed a bug in :class:`neural_network.MLPRegressor` where fitting\n  quit unexpectedly early due to local minima or fluctuations.\n  :issue:`9456` by :user:`Nicholas Nadeau 
<nnadeau>`\n\n\n:mod:`sklearn.pipeline`\n.......................\n\n- |Feature| The ``predict`` method of :class:`pipeline.Pipeline` now passes\n  keyword arguments on to the pipeline's last estimator, enabling the use of\n  parameters such as ``return_std`` in a pipeline with caution.\n  :issue:`9304` by :user:`Breno Freitas <brenolf>`.\n\n- |API| :class:`pipeline.FeatureUnion` now supports ``'drop'`` as a transformer\n  to drop features. :issue:`11144` by :user:`Thomas Fan <thomasjpfan>`.\n\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |MajorFeature| Expanded :class:`preprocessing.OneHotEncoder` to allow to\n  encode categorical string features as a numeric array using a one-hot (or\n  dummy) encoding scheme, and added :class:`preprocessing.OrdinalEncoder` to\n  convert to ordinal integers. Those two classes now handle encoding of all\n  feature types (also handles string-valued features) and derives the\n  categories based on the unique values in the features instead of the maximum\n  value in the features. :issue:`9151` and :issue:`10521` by :user:`Vighnesh\n  Birodkar <vighneshbirodkar>` and `Joris Van den Bossche`_.\n\n- |MajorFeature| Added :class:`preprocessing.KBinsDiscretizer` for turning\n  continuous features into categorical or one-hot encoded\n  features. :issue:`7668`, :issue:`9647`, :issue:`10195`,\n  :issue:`10192`, :issue:`11272`, :issue:`11467` and :issue:`11505`.\n  by :user:`Henry Lin <hlin117>`, `Hanmin Qin`_,\n  `Tom Dupre la Tour`_ and :user:`Giovanni Giuseppe Costa <ggc87>`.\n\n- |MajorFeature| Added :class:`preprocessing.PowerTransformer`, which\n  implements the Yeo-Johnson and Box-Cox power transformations. Power\n  transformations try to find a set of feature-wise parametric transformations\n  to approximately map data to a Gaussian distribution centered at zero and\n  with unit variance. 
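As a rough illustration (not taken from the changelog itself), the Box-Cox branch of the transform maps strictly positive ``x`` to ``(x**lmbda - 1) / lmbda`` for ``lmbda != 0`` and to ``log(x)`` for ``lmbda == 0``; the sketch below fixes ``lmbda`` by hand, whereas :class:`preprocessing.PowerTransformer` estimates it per feature by maximum likelihood and then standardizes the result:

```python
import numpy as np

def box_cox(x, lmbda):
    """Box-Cox transform for strictly positive data:
    (x**lmbda - 1) / lmbda if lmbda != 0, else log(x)."""
    x = np.asarray(x, dtype=float)
    if lmbda == 0:
        return np.log(x)
    return (x ** lmbda - 1.0) / lmbda

# A log-normal (right-skewed) sample becomes roughly symmetric with lmbda=0,
# since the log of a log-normal variable is normally distributed.
rng = np.random.RandomState(0)
skewed = rng.lognormal(size=1000)
symmetric = box_cox(skewed, lmbda=0.0)
```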
This is useful as a variance-stabilizing transformation\n  in situations where normality and homoscedasticity are desirable.\n  :issue:`10210` by :user:`Eric Chang <chang>` and :user:`Maniteja\n  Nandana <maniteja123>`, and :issue:`11520` by :user:`Nicolas Hug\n  <nicolashug>`.\n\n- |MajorFeature| NaN values are ignored and handled in the following\n  preprocessing methods:\n  :class:`preprocessing.MaxAbsScaler`,\n  :class:`preprocessing.MinMaxScaler`,\n  :class:`preprocessing.RobustScaler`,\n  :class:`preprocessing.StandardScaler`,\n  :class:`preprocessing.PowerTransformer`,\n  :class:`preprocessing.QuantileTransformer` classes and\n  :func:`preprocessing.maxabs_scale`,\n  :func:`preprocessing.minmax_scale`,\n  :func:`preprocessing.robust_scale`,\n  :func:`preprocessing.scale`,\n  :func:`preprocessing.power_transform`,\n  :func:`preprocessing.quantile_transform` functions respectively addressed in\n  issues :issue:`11011`, :issue:`11005`, :issue:`11308`, :issue:`11206`,\n  :issue:`11306`, and :issue:`10437`.\n  By :user:`Lucija Gregov <LucijaGregov>` and\n  :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Feature| :class:`preprocessing.PolynomialFeatures` now supports sparse\n  input. 
:issue:`10452` by :user:`Aman Dalmia <dalmia>` and `Joel Nothman`_.\n\n- |Feature| :class:`preprocessing.RobustScaler` and\n  :func:`preprocessing.robust_scale` can be fitted using sparse matrices.\n  :issue:`11308` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Feature| :class:`preprocessing.OneHotEncoder` now supports the\n  `get_feature_names` method to obtain the transformed feature names.\n  :issue:`10181` by :user:`Nirvan Anjirbag <Nirvan101>` and\n  `Joris Van den Bossche`_.\n\n- |Feature| A parameter ``check_inverse`` was added to\n  :class:`preprocessing.FunctionTransformer` to ensure that ``func`` and\n  ``inverse_func`` are the inverse of each other.\n  :issue:`9399` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Feature| The ``transform`` method of :class:`sklearn.preprocessing.MultiLabelBinarizer`\n  now ignores any unknown classes. A warning is raised stating the unknown\n  classes found, which are ignored.\n  :issue:`10913` by :user:`Rodrigo Agundez <rragundez>`.\n\n- |Fix| Fixed bugs in :class:`preprocessing.LabelEncoder` which would\n  sometimes throw errors when ``transform`` or ``inverse_transform`` was called\n  with empty arrays. :issue:`10458` by :user:`Mayur Kulkarni <maykulkarni>`.\n\n- |Fix| Fix ValueError in :class:`preprocessing.LabelEncoder` when using\n  ``inverse_transform`` on unseen labels. :issue:`9816` by :user:`Charlie Newey\n  <newey01c>`.\n\n- |Fix| Fix bug in :class:`preprocessing.OneHotEncoder` which discarded the\n  ``dtype`` when returning a sparse matrix output.\n  :issue:`11042` by :user:`Daniel Morales <DanielMorales9>`.\n\n- |Fix| Fix ``fit`` and ``partial_fit`` in\n  :class:`preprocessing.StandardScaler` in the rare case when ``with_mean=False``\n  and ``with_std=False``, which crashed when ``fit`` was called more than once and\n  gave inconsistent results for ``mean_`` depending on whether the input was a\n  sparse or a dense matrix. ``mean_`` will be set to ``None`` with both sparse and dense\n  inputs.
``n_samples_seen_`` will also be reported for both input types.\n  :issue:`11235` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |API| Deprecate ``n_values`` and ``categorical_features`` parameters and\n  ``active_features_``, ``feature_indices_`` and ``n_values_`` attributes\n  of :class:`preprocessing.OneHotEncoder`. The ``n_values`` parameter can be\n  replaced with the new ``categories`` parameter, and the attributes with the\n  new ``categories_`` attribute. Selecting the categorical features with\n  the ``categorical_features`` parameter is now better supported using the\n  :class:`compose.ColumnTransformer`.\n  :issue:`10521` by `Joris Van den Bossche`_.\n\n- |API| Deprecate `preprocessing.Imputer` and move\n  the corresponding module to :class:`impute.SimpleImputer`.\n  :issue:`9726` by :user:`Kumar Ashutosh\n  <thechargedneutron>`.\n\n- |API| The ``axis`` parameter that was in\n  `preprocessing.Imputer` is no longer present in\n  :class:`impute.SimpleImputer`. The behavior is equivalent\n  to ``axis=0`` (impute along columns). Row-wise\n  imputation can be performed with FunctionTransformer\n  (e.g., ``FunctionTransformer(lambda X:\n  SimpleImputer().fit_transform(X.T).T)``). :issue:`10829`\n  by :user:`Guillaume Lemaitre <glemaitre>` and\n  :user:`Gilberto Olimpio <gilbertoolimpio>`.\n\n- |API| The NaN marker for the missing values has been changed\n  between the `preprocessing.Imputer` and the\n  `impute.SimpleImputer`.\n  ``missing_values='NaN'`` should now be\n  ``missing_values=np.nan``.
:issue:`11211` by\n  :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n- |API| In :class:`preprocessing.FunctionTransformer`, the default of\n  ``validate`` will change from ``True`` to ``False`` in 0.22.\n  :issue:`10655` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n\n:mod:`sklearn.svm`\n..................\n\n- |Fix| Fixed a bug in :class:`svm.SVC` where, when the argument ``kernel`` is\n  unicode in Python 2, the ``predict_proba`` method raised an\n  unexpected TypeError given dense inputs.\n  :issue:`10412` by :user:`Jiongyan Zhang <qmick>`.\n\n- |API| Deprecate ``random_state`` parameter in :class:`svm.OneClassSVM` as\n  the underlying implementation is not random.\n  :issue:`9497` by :user:`Albert Thomas <albertcthomas>`.\n\n- |API| The default value of the ``gamma`` parameter of :class:`svm.SVC`,\n  :class:`~svm.NuSVC`, :class:`~svm.SVR`, :class:`~svm.NuSVR`,\n  :class:`~svm.OneClassSVM` will change from ``'auto'`` to ``'scale'`` in\n  version 0.22 to account better for unscaled features. :issue:`8361` by\n  :user:`Gaurav Dhingra <gxyd>` and :user:`Ting Neo <neokt>`.\n\n\n:mod:`sklearn.tree`\n...................\n\n- |Enhancement| Although private (and hence not assured API stability),\n  `tree._criterion.ClassificationCriterion` and\n  `tree._criterion.RegressionCriterion` may now be cimported and\n  extended. :issue:`10325` by :user:`Camil Staps <camilstaps>`.\n\n- |Fix| Fixed a bug in `tree.BaseDecisionTree` with `splitter=\"best\"`\n  where the split threshold could become infinite when values in X were\n  near infinite. :issue:`10536` by :user:`Jonathan Ohayon <Johayon>`.\n\n- |Fix| Fixed a bug in `tree.MAE` to ensure sample weights are being\n  used during the calculation of tree MAE impurity.
Previous behaviour could\n  cause suboptimal splits to be chosen since the impurity calculation\n  considered all samples to be of equal weight importance.\n  :issue:`11464` by :user:`John Stott <JohnStott>`.\n\n\n:mod:`sklearn.utils`\n....................\n\n- |Feature| :func:`utils.check_array` and :func:`utils.check_X_y` now have\n  ``accept_large_sparse`` to control whether scipy.sparse matrices with 64-bit\n  indices should be rejected.\n  :issue:`11327` by :user:`Karan Dhingra <kdhingra307>` and `Joel Nothman`_.\n\n- |Efficiency| |Fix| Avoid copying the data in :func:`utils.check_array` when\n  the input data is a memmap (and ``copy=False``). :issue:`10663` by\n  :user:`Arthur Mensch <arthurmensch>` and :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\n- |API| :func:`utils.check_array` now yields a ``FutureWarning`` indicating\n  that arrays of bytes\/strings will be interpreted as decimal numbers\n  beginning in version 0.22. :issue:`10229` by :user:`Ryan Lee <rtlee9>`\n\n\nMultiple modules\n................\n\n- |Feature| |API| More consistent outlier detection API:\n  Add a ``score_samples`` method in :class:`svm.OneClassSVM`,\n  :class:`ensemble.IsolationForest`, :class:`neighbors.LocalOutlierFactor`,\n  :class:`covariance.EllipticEnvelope`. It allows access to the raw score\n  functions from the original papers. A new ``offset_`` parameter links the\n  ``score_samples`` and ``decision_function`` methods.\n  The ``contamination`` parameter of :class:`ensemble.IsolationForest` and\n  :class:`neighbors.LocalOutlierFactor` ``decision_function`` methods is used\n  to define this ``offset_`` such that outliers (resp. inliers) have negative (resp.\n  positive) ``decision_function`` values. By default, ``contamination`` is\n  kept at 0.1 during a deprecation period.
In 0.22, it will be set to \"auto\",\n  thus using method-specific score offsets.\n  In the :class:`covariance.EllipticEnvelope` ``decision_function`` method, the\n  ``raw_values`` parameter is deprecated as the shifted Mahalanobis distance\n  will always be returned in 0.22. :issue:`9015` by `Nicolas Goix`_.\n\n- |Feature| |API| A ``behaviour`` parameter has been introduced in :class:`ensemble.IsolationForest`\n  to ensure backward compatibility.\n  In the old behaviour, the ``decision_function`` is independent of the ``contamination``\n  parameter. A threshold attribute depending on the ``contamination`` parameter is thus\n  used.\n  In the new behaviour, the ``decision_function`` is dependent on the ``contamination``\n  parameter, in such a way that 0 becomes its natural threshold to detect outliers.\n  Setting behaviour to \"old\" is deprecated and will not be possible in version 0.22.\n  Besides, the ``behaviour`` parameter will be removed in 0.24.\n  :issue:`11553` by `Nicolas Goix`_.\n\n- |API| Added convergence warning to :class:`svm.LinearSVC` and\n  :class:`linear_model.LogisticRegression` when ``verbose`` is set to 0.\n  :issue:`10881` by :user:`Alexandre Sevin <AlexandreSev>`.\n\n- |API| Changed warning type from :class:`UserWarning` to\n  :class:`exceptions.ConvergenceWarning` for failing convergence in\n  `linear_model.logistic_regression_path`,\n  :class:`linear_model.RANSACRegressor`, :func:`linear_model.ridge_regression`,\n  :class:`gaussian_process.GaussianProcessRegressor`,\n  :class:`gaussian_process.GaussianProcessClassifier`,\n  :func:`decomposition.fastica`, :class:`cross_decomposition.PLSCanonical`,\n  :class:`cluster.AffinityPropagation`, and :class:`cluster.Birch`.\n  :issue:`10306` by :user:`Jonathan Siebert <jotasi>`.\n\n\nMiscellaneous\n.............\n\n- |MajorFeature| A new configuration parameter, ``working_memory``, was added\n  to control memory consumption limits in chunked operations, such as the new\n  
:func:`metrics.pairwise_distances_chunked`. See :ref:`working_memory`.\n  :issue:`10280` by `Joel Nothman`_ and :user:`Aman Dalmia <dalmia>`.\n\n- |Feature| The version of :mod:`joblib` bundled with Scikit-learn is now 0.12.\n  This uses a new default multiprocessing implementation, named `loky\n  <https:\/\/github.com\/tomMoral\/loky>`_. While this may incur some memory and\n  communication overhead, it should provide greater cross-platform stability\n  than relying on Python standard library multiprocessing. :issue:`11741` by\n  the Joblib developers, especially :user:`Thomas Moreau <tomMoral>` and\n  `Olivier Grisel`_.\n\n- |Feature| An environment variable to use the site joblib instead of the\n  vendored one was added (:ref:`environment_variable`). The main API of joblib\n  is now exposed in :mod:`sklearn.utils`.\n  :issue:`11166` by `Gael Varoquaux`_.\n\n- |Feature| Add almost complete PyPy 3 support. Known unsupported\n  functionalities are :func:`datasets.load_svmlight_file`,\n  :class:`feature_extraction.FeatureHasher` and\n  :class:`feature_extraction.text.HashingVectorizer`. For running on PyPy,\n  PyPy3-v5.10+, Numpy 1.14.0+, and scipy 1.1.0+ are required.\n  :issue:`11010` by :user:`Ronan Lamy <rlamy>` and `Roman Yurchak`_.\n\n- |Feature| A utility method :func:`sklearn.show_versions()` was added to\n  print out information relevant for debugging. It includes the user system,\n  the Python executable, the version of the main libraries and BLAS binding\n  information. :issue:`11596` by :user:`Alexandre Boucaud <aboucaud>`\n\n- |Fix| Fixed a bug when setting parameters on meta-estimator, involving both\n  a wrapped estimator and its parameter. :issue:`9999` by :user:`Marcus Voss\n  <marcus-voss>` and `Joel Nothman`_.\n\n- |Fix| Fixed a bug where calling :func:`sklearn.base.clone` was not thread\n  safe and could result in a \"pop from empty list\" error. 
:issue:`9569`\n  by `Andreas M\u00fcller`_.\n\n- |API| The default value of ``n_jobs`` is changed from ``1`` to ``None`` in\n  all related functions and classes. ``n_jobs=None`` means ``unset``. It will\n  generally be interpreted as ``n_jobs=1``, unless the current\n  ``joblib.Parallel`` backend context specifies otherwise (See\n  :term:`Glossary <n_jobs>` for additional information). Note that this change\n  happens immediately (i.e., without a deprecation cycle).\n  :issue:`11741` by `Olivier Grisel`_.\n\n- |Fix| Fixed a bug in validation helpers where passing a Dask DataFrame results\n  in an error. :issue:`12462` by :user:`Zachariah Miller <zwmiller>`\n\nChanges to estimator checks\n---------------------------\n\nThese changes mostly affect library developers.\n\n- Checks for transformers now apply if the estimator implements\n  :term:`transform`, regardless of whether it inherits from\n  :class:`sklearn.base.TransformerMixin`. :issue:`10474` by `Joel Nothman`_.\n\n- Classifiers are now checked for consistency between :term:`decision_function`\n  and categorical predictions.\n  :issue:`10500` by :user:`Narine Kokhlikyan <NarineK>`.\n\n- Allow tests in :func:`utils.estimator_checks.check_estimator` to test functions\n  that accept pairwise data.\n  :issue:`9701` by :user:`Kyle Johnson <gkjohns>`\n\n- Allow :func:`utils.estimator_checks.check_estimator` to check that there are no\n  private settings apart from parameters during estimator initialization.\n  :issue:`9378` by :user:`Herilalaina Rakotoarison <herilalaina>`\n\n- The set of checks in :func:`utils.estimator_checks.check_estimator` now includes a\n  ``check_set_params`` test which checks that ``set_params`` is equivalent to\n  passing parameters in ``__init__`` and warns if it encounters parameter\n  validation. :issue:`7738` by :user:`Alvin Chiang <absolutelyNoWarranty>`\n\n- Add invariance tests for clustering metrics.
:issue:`8102` by :user:`Ankita\n  Sinha <anki08>` and :user:`Guillaume Lemaitre <glemaitre>`.\n\n- Add ``check_methods_subset_invariance`` to\n  :func:`~utils.estimator_checks.check_estimator`, which checks that\n  estimator methods are invariant if applied to a data subset.\n  :issue:`10428` by :user:`Jonathan Ohayon <Johayon>`\n\n- Add tests in :func:`utils.estimator_checks.check_estimator` to check that an\n  estimator can handle read-only memmap input data. :issue:`10663` by\n  :user:`Arthur Mensch <arthurmensch>` and :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\n- ``check_sample_weights_pandas_series`` now uses 8 rather than 6 samples\n  to accommodate for the default number of clusters in :class:`cluster.KMeans`.\n  :issue:`10933` by :user:`Johannes Hansen <jnhansen>`.\n\n- Estimators are now checked for whether ``sample_weight=None`` equates to\n  ``sample_weight=np.ones(...)``.\n  :issue:`11558` by :user:`Sergul Aydore <sergulaydore>`.\n\n\nCode and Documentation Contributors\n-----------------------------------\n\nThanks to everyone who has contributed to the maintenance and improvement of the\nproject since version 0.19, including:\n\n211217613, Aarshay Jain, absolutelyNoWarranty, Adam Greenhall, Adam Kleczewski,\nAdam Richie-Halford, adelr, AdityaDaflapurkar, Adrin Jalali, Aidan Fitzgerald,\naishgrt1, Akash Shivram, Alan Liddell, Alan Yee, Albert Thomas, Alexander\nLenail, Alexander-N, Alexandre Boucaud, Alexandre Gramfort, Alexandre Sevin,\nAlex Egg, Alvaro Perez-Diaz, Amanda, Aman Dalmia, Andreas Bjerre-Nielsen,\nAndreas Mueller, Andrew Peng, Angus Williams, Aniruddha Dave, annaayzenshtat,\nAnthony Gitter, Antonio Quinonez, Anubhav Marwaha, Arik Pamnani, Arthur Ozga,\nArtiem K, Arunava, Arya McCarthy, Attractadore, Aur\u00e9lien Bellet, Aur\u00e9lien\nGeron, Ayush Gupta, Balakumaran Manoharan, Bangda Sun, Barry Hart, Bastian\nVenthur, Ben Lawson, Benn Roth, Breno Freitas, Brent Yi, brett koonce, Caio\nOliveira, Camil Staps, cclauss, Chady Kamar, Charlie 
Brummitt, Charlie Newey,\nchris, Chris, Chris Catalfo, Chris Foster, Chris Holdgraf, Christian Braune,\nChristian Hirsch, Christian Hogan, Christopher Jenness, Clement Joudet, cnx,\ncwitte, Dallas Card, Dan Barkhorn, Daniel, Daniel Ferreira, Daniel Gomez,\nDaniel Klevebring, Danielle Shwed, Daniel Mohns, Danil Baibak, Darius Morawiec,\nDavid Beach, David Burns, David Kirkby, David Nicholson, David Pickup, Derek,\nDidi Bar-Zev, diegodlh, Dillon Gardner, Dillon Niederhut, dilutedsauce,\ndlovell, Dmitry Mottl, Dmitry Petrov, Dor Cohen, Douglas Duhaime, Ekaterina\nTuzova, Eric Chang, Eric Dean Sanchez, Erich Schubert, Eunji, Fang-Chieh Chou,\nFarahSaeed, felix, F\u00e9lix Raimundo, fenx, filipj8, FrankHui, Franz Wompner,\nFreija Descamps, frsi, Gabriele Calvo, Gael Varoquaux, Gaurav Dhingra, Georgi\nPeev, Gil Forsyth, Giovanni Giuseppe Costa, gkevinyen5418, goncalo-rodrigues,\nGryllos Prokopis, Guillaume Lemaitre, Guillaume \"Vermeille\" Sanchez, Gustavo De\nMari Pereira, hakaa1, Hanmin Qin, Henry Lin, Hong, Honghe, Hossein Pourbozorg,\nHristo, Hunan Rostomyan, iampat, Ivan PANICO, Jaewon Chung, Jake VanderPlas,\njakirkham, James Bourbeau, James Malcolm, Jamie Cox, Jan Koch, Jan Margeta, Jan\nSchl\u00fcter, janvanrijn, Jason Wolosonovich, JC Liu, Jeb Bearer, jeremiedbb, Jimmy\nWan, Jinkun Wang, Jiongyan Zhang, jjabl, jkleint, Joan Massich, Jo\u00ebl Billaud,\nJoel Nothman, Johannes Hansen, JohnStott, Jonatan Samoocha, Jonathan Ohayon,\nJ\u00f6rg D\u00f6pfert, Joris Van den Bossche, Jose Perez-Parras Toledano, josephsalmon,\njotasi, jschendel, Julian Kuhlmann, Julien Chaumond, julietcl, Justin Shenk,\nKarl F, Kasper Primdal Lauritzen, Katrin Leinweber, Kirill, ksemb, Kuai Yu,\nKumar Ashutosh, Kyeongpil Kang, Kye Taylor, kyledrogo, Leland McInnes, L\u00e9o DS,\nLiam Geron, Liutong Zhou, Lizao Li, lkjcalc, Loic Esteve, louib, Luciano Viola,\nLucija Gregov, Luis Osa, Luis Pedro Coelho, Luke M Craig, Luke Persola, Mabel,\nMabel Villalba, Maniteja Nandana, MarkIwanchyshyn, 
Mark Roth, Markus M\u00fcller,\nMarsGuy, Martin Gubri, martin-hahn, martin-kokos, mathurinm, Matthias Feurer,\nMax Copeland, Mayur Kulkarni, Meghann Agarwal, Melanie Goetz, Michael A.\nAlcorn, Minghui Liu, Ming Li, Minh Le, Mohamed Ali Jamaoui, Mohamed Maskani,\nMohammad Shahebaz, Muayyad Alsadi, Nabarun Pal, Nagarjuna Kumar, Naoya Kanai,\nNarendran Santhanam, NarineK, Nathaniel Saul, Nathan Suh, Nicholas Nadeau,\nP.Eng.,  AVS, Nick Hoh, Nicolas Goix, Nicolas Hug, Nicolau Werneck,\nnielsenmarkus11, Nihar Sheth, Nikita Titov, Nilesh Kevlani, Nirvan Anjirbag,\nnotmatthancock, nzw, Oleksandr Pavlyk, oliblum90, Oliver Rausch, Olivier\nGrisel, Oren Milman, Osaid Rehman Nasir, pasbi, Patrick Fernandes, Patrick\nOlden, Paul Paczuski, Pedro Morales, Peter, Peter St. John, pierreablin,\npietruh, Pinaki Nath Chowdhury, Piotr Szyma\u0144ski, Pradeep Reddy Raamana, Pravar\nD Mahajan, pravarmahajan, QingYing Chen, Raghav RV, Rajendra arora,\nRAKOTOARISON Herilalaina, Rameshwar Bhaskaran, RankyLau, Rasul Kerimov,\nReiichiro Nakano, Rob, Roman Kosobrodov, Roman Yurchak, Ronan Lamy, rragundez,\nR\u00fcdiger Busche, Ryan, Sachin Kelkar, Sagnik Bhattacharya, Sailesh Choyal, Sam\nRadhakrishnan, Sam Steingold, Samuel Bell, Samuel O. 
Ronsin, Saqib Nizam\nShamsi, SATISH J, Saurabh Gupta, Scott Gigante, Sebastian Flennerhag, Sebastian\nRaschka, Sebastien Dubois, S\u00e9bastien Lerique, Sebastin Santy, Sergey Feldman,\nSergey Melderis, Sergul Aydore, Shahebaz, Shalil Awaley, Shangwu Yao, Sharad\nVijalapuram, Sharan Yalburgi, shenhanc78, Shivam Rastogi, Shu Haoran, siftikha,\nSinclert P\u00e9rez, SolutusImmensus, Somya Anand, srajan paliwal, Sriharsha Hatwar,\nSri Krishna, Stefan van der Walt, Stephen McDowell, Steven Brown, syonekura,\nTaehoon Lee, Takanori Hayashi, tarcusx, Taylor G Smith, theriley106, Thomas,\nThomas Fan, Thomas Heavey, Tobias Madsen, tobycheese, Tom Augspurger, Tom Dupr\u00e9\nla Tour, Tommy, Trevor Stephens, Trishnendu Ghorai, Tulio Casagrande,\ntwosigmajab, Umar Farouk Umar, Urvang Patel, Utkarsh Upadhyay, Vadim\nMarkovtsev, Varun Agrawal, Vathsala Achar, Vilhelm von Ehrenheim, Vinayak\nMehta, Vinit, Vinod Kumar L, Viraj Mavani, Viraj Navkal, Vivek Kumar, Vlad\nNiculae, vqean3, Vrishank Bhardwaj, vufg, wallygauze, Warut Vijitbenjaronk,\nwdevazelhes, Wenhao Zhang, Wes Barnett, Will, William de Vazelhes, Will\nRosenfeld, Xin Xiong, Yiming (Paul) Li, ymazari, Yufeng, Zach Griffith, Z\u00e9\nVin\u00edcius, Zhenqing Hu, Zhiqing Xiao, Zijie (ZJ) Poh\n\n.. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\nVersion 0.20\n============\n\n.. warning::\n\n    Version 0.20 is the last version of scikit-learn to support Python 2.7 and Python 3.4.\n    Scikit-learn 0.21 will require Python 3.5 or higher.\n\n.. include:: changelog_legend.inc\n\n.. _changes_0_20_4:\n\nVersion 0.20.4\n==============\n\n**July 30, 2019**\n\nThis is a bug-fix release with some bug fixes applied to version 0.20.3.\n\nChangelog\n---------\n\nThe bundled version of joblib was upgraded from 0.13.0 to 0.13.2.\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans` where KMeans++ initialisation\n  could rarely result in an IndexError.
  :issue:`11756` by `Joel Nothman`_.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| Fixed an issue in :class:`compose.ColumnTransformer` where using\n  DataFrames whose column order differs between :func:`fit` and\n  :func:`transform` could lead to silently passing incorrect columns to the\n  ``remainder`` transformer.\n  :pr:`14237` by :user:`Andreas Schuderer <schuderer>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Fixed a bug in :class:`cross_decomposition.CCA` improving numerical\n  stability when `Y` is close to zero. :pr:`13903` by `Thomas Fan`_.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Fix| Fixed a bug where :class:`model_selection.StratifiedKFold` shuffles\n  each class's samples with the same ``random_state``, making ``shuffle=True``\n  ineffective.\n  :issue:`13124` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Fix| Fixed a bug in :class:`neighbors.KernelDensity` which could not be\n  restored from a pickle if ``sample_weight`` had been used.\n  :issue:`13772` by :user:`Aditya Vyas <aditya1702>`.\n\n.. _changes_0_20_3:\n\nVersion 0.20.3\n==============\n\n**March 1, 2019**\n\nThis is a bug-fix release with some minor documentation improvements and\nenhancements to features released in 0.20.0.\n\nChangelog\n---------\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans` where computation was single\n  threaded when `n_jobs > 1` or `n_jobs = -1`.\n  :issue:`12949` by :user:`Prabakaran Kumaresshan <nixphix>`.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| Fixed a bug in :class:`compose.ColumnTransformer` to handle negative\n  indexes in the columns list of the transformers.\n  :issue:`12946` by :user:`Pierre Tallotte <pierretallotte>`.\n\n:mod:`sklearn.covariance`\n.........................\n\n- |Fix| Fixed a regression in :func:`covariance.graphical_lasso` so that\n  the case `n_features=2` is handled correctly. :issue:`13276` by\n  :user:`Aur\u00e9lien Bellet <bellet>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Fixed a bug in :func:`decomposition.sparse_encode` where computation was\n  single threaded when `n_jobs > 1` or `n_jobs = -1`.\n  :issue:`13005` by :user:`Prabakaran Kumaresshan <nixphix>`.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Efficiency| :func:`sklearn.datasets.fetch_openml` now loads data by\n  streaming, avoiding high memory usage. :issue:`13312` by `Joris Van den\n  Bossche`_.\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Fix| Fixed a bug in :class:`feature_extraction.text.CountVectorizer` which\n  would result in the sparse feature matrix having conflicting `indptr` and\n  `indices` precisions under very large vocabularies. :issue:`11295` by\n  :user:`Gabriel Vacaliuc <gvacaliuc>`.\n\n:mod:`sklearn.impute`\n.....................\n\n- |Fix| Add support for non-numeric data in\n  :class:`sklearn.impute.MissingIndicator`, which was not supported while\n  :class:`sklearn.impute.SimpleImputer` was supporting this for some\n  imputation strategies.\n  :issue:`13046` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Fix| Fixed a bug in :class:`linear_model.MultiTaskElasticNet` and\n  :class:`linear_model.MultiTaskLasso` which were breaking when\n  ``warm_start=True``. :issue:`12360` by :user:`Aakanksha Joshi <joaak>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Fix| Fixed a bug in :class:`preprocessing.KBinsDiscretizer` where\n  ``strategy='kmeans'`` fails with an error during transformation due to\n  unsorted bin edges. :issue:`13134` by :user:`Sandro Casagrande\n  <SandroCasagrande>`.\n\n- |Fix| Fixed a bug in :class:`preprocessing.OneHotEncoder` where the\n  deprecation of ``categorical_features`` was handled incorrectly in\n  combination with ``handle_unknown='ignore'``.\n  :issue:`12881` by `Joris Van den Bossche`_.\n\n- |Fix| Bins whose width are too small (i.e., <= 1e-8) are removed with a\n  warning in :class:`preprocessing.KBinsDiscretizer`.\n  :issue:`13165` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n:mod:`sklearn.svm`\n..................\n\n- |Fix| Fixed a bug in :class:`svm.SVC`, :class:`svm.NuSVC`, :class:`svm.SVR`,\n  :class:`svm.NuSVR` and :class:`svm.OneClassSVM` where the ``scale`` option\n  of parameter ``gamma`` is erroneously defined as ``1 / (n_features *\n  X.std())``. It's now defined as ``1 / (n_features * X.var())``.\n  :issue:`13221` by :user:`Hanmin Qin <qinhanmin2014>`.\n\nCode and Documentation Contributors\n-----------------------------------\n\nWith thanks to:\n\nAdrin Jalali, Agamemnon Krasoulis, Albert Thomas, Andreas Mueller, Aur\u00e9lien\nBellet, bertrandhaut, Bharat Raghunathan, Dowon, Emmanuel Arias, Fibinse\nXavier, Finn O'Shea, Gabriel Vacaliuc, Gael Varoquaux, Guillaume Lemaitre,\nHanmin Qin, joaak, Joel Nothman, Joris Van den Bossche, J\u00e9r\u00e9mie M\u00e9hault,\nkms15, Kossori Aruku, Lakshya KD, maikia, Manuel L\u00f3pez-Ib\u00e1\u00f1ez, Marco Gorelli,\nMarcoGorelli, mferrari3, Micka\u00ebl Schoentgen, Nicolas Hug, pavlos kallis,\nPierre Glaser, pierretallotte, Prabakaran Kumaresshan, Reshama Shaikh, Rohit\nKapoor, Roman Yurchak, SandroCasagrande, Tashay Green, Thomas Fan, Vishaal\nKapoor, Zhuyi Xue, Zijie (ZJ) Poh\n\n.. _changes_0_20_2:\n\nVersion 0.20.2\n==============\n\n**December 20, 2018**\n\nThis is a bug-fix release with some minor documentation improvements and\nenhancements to features released in 0.20.0.\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- :mod:`sklearn.neighbors` when ``metric=='jaccard'`` (bug fix)\n- use of ``'seuclidean'`` or ``'mahalanobis'`` metrics in some cases (bug fix)\n\nChangelog\n---------\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| Fixed an issue in :func:`compose.make_column
transformer  which raises   unexpected error when columns is pandas Index or pandas Series     issue  12704  by  user  Hanmin Qin  qinhanmin2014      mod  sklearn metrics                             Fix  Fixed a bug in  func  metrics pairwise distances  and    func  metrics pairwise distances chunked  where parameters   V   of      seuclidean    and   VI   of    mahalanobis    metrics were computed after   the data was split into chunks instead of being pre computed on whole data     issue  12701  by  user  Jeremie du Boisberranger  jeremiedbb      mod  sklearn neighbors                               Fix  Fixed  sklearn neighbors DistanceMetric  jaccard distance   function to return 0 when two all zero vectors are compared     issue  12685  by  user  Thomas Fan  thomasjpfan      mod  sklearn utils                           Fix  Calling  func  utils check array  on  pandas Series  with categorical   data  which raised an error in 0 20 0  now returns the expected output again     issue  12699  by  Joris Van den Bossche     Code and Documentation Contributors                                      With thanks to    adanhawth  Adrin Jalali  Albert Thomas  Andreas Mueller  Dan Stine  Feda Curic  Hanmin Qin  Jan S  jeremiedbb  Joel Nothman  Joris Van den Bossche  josephsalmon  Katrin Leinweber  Loic Esteve  Muhammad Hassaan Rafique  Nicolas Hug  Olivier Grisel  Paul Paczuski  Reshama Shaikh  Sam Waterbury  Shivam Kotwalia  Thomas Fan      changes 0 20 1   Version 0 20 1                   November 21  2018    This is a bug fix release with some minor documentation improvements and enhancements to features released in 0 20 0  Note that we also include some API changes in this release  so you might get some extra warnings after updating from 0 20 0 to 0 20 1   Changed models                 The following estimators and functions  when fit with the same data and parameters  may produce different models from the previous version  This often occurs due to changes in the 
modelling logic  bug fixes or enhancements   or in random sampling procedures      class  decomposition IncrementalPCA   bug fix   Changelog             mod  sklearn cluster                             Efficiency  make  class  cluster MeanShift  no longer try to do nested   parallelism as the overhead would hurt performance significantly when     n jobs   1       issue  12159  by  user  Olivier Grisel  ogrisel        Fix  Fixed a bug in  class  cluster DBSCAN  with precomputed sparse neighbors   graph  which would add explicitly zeros on the diagonal even when already   present   issue  12105  by  Tom Dupre la Tour      mod  sklearn compose                             Fix  Fixed an issue in  class  compose ColumnTransformer  when stacking   columns with types not convertible to a numeric     issue  11912  by  user  Adrin Jalali  adrinjalali        API   class  compose ColumnTransformer  now applies the   sparse threshold     even if all transformation results are sparse   issue  12304  by  Andreas   M ller        API   func  compose make column transformer  now expects      transformer  columns    instead of    columns  transformer    to keep   consistent with  class  compose ColumnTransformer      issue  12339  by  user  Adrin Jalali  adrinjalali      mod  sklearn datasets                                   Fix   func  datasets fetch openml  to correctly use the local cache     issue  12246  by  user  Jan N  van Rijn  janvanrijn        Fix   func  datasets fetch openml  to correctly handle ignore attributes and   row id attributes   issue  12330  by  user  Jan N  van Rijn  janvanrijn        Fix  Fixed integer overflow in  func  datasets make classification    for values of   n informative   parameter larger than 64     issue  10811  by  user  Roman Feldbauer  VarIr        Fix  Fixed olivetti faces dataset   DESCR   attribute to point to the right   location in  func  datasets fetch olivetti faces    issue  12441  by    user  J r mie du Boisberranger  jeremiedbb     
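As a refresher on the :func:`compose.make_column_transformer` API change above
(tuples are now ``(transformer, columns)``), a minimal sketch; it assumes
scikit-learn >= 0.20.1 and NumPy are installed, and the data values are
illustrative only:

```python
import numpy as np
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Two numeric columns and one categorical column (encoded as 0/1).
X = np.array([[0.0, 10.0, 0],
              [1.0, 20.0, 1],
              [2.0, 30.0, 0]])

# Since 0.20.1, each tuple is (transformer, columns), not (columns, transformer).
ct = make_column_transformer(
    (StandardScaler(), [0, 1]),  # scale the two numeric columns
    (OneHotEncoder(), [2]),      # one-hot encode the categorical column
)
Xt = ct.fit_transform(X)
print(Xt.shape)  # 2 scaled columns + 2 one-hot columns -> (3, 4)
```

The same tuples can be passed to :class:`compose.ColumnTransformer` directly as
``("name", transformer, columns)`` triples; ``make_column_transformer`` simply
generates the names.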
- |Fix| :func:`datasets.fetch_openml` to retry downloading when reading
  from local cache fails. :issue:`12517` by :user:`Thomas Fan <thomasjpfan>`.

:mod:`sklearn.decomposition`
............................

- |Fix| Fixed a regression in :class:`decomposition.IncrementalPCA` where
  0.20.0 raised an error if the number of samples in the final batch for
  fitting IncrementalPCA was smaller than ``n_components``.
  :issue:`12234` by :user:`Ming Li <minggli>`.

:mod:`sklearn.ensemble`
.......................

- |Fix| Fixed a bug mostly affecting :class:`ensemble.RandomForestClassifier`
  where ``class_weight='balanced_subsample'`` failed with more than 32 classes.
  :issue:`12165` by `Joel Nothman`_.

- |Fix| Fixed a bug affecting :class:`ensemble.BaggingClassifier`,
  :class:`ensemble.BaggingRegressor` and :class:`ensemble.IsolationForest`,
  where ``max_features`` was sometimes rounded down to zero.
  :issue:`12388` by :user:`Connor Tann <Connossor>`.

:mod:`sklearn.feature_extraction`
.................................

- |Fix| Fixed a regression in v0.20.0 where
  :func:`feature_extraction.text.CountVectorizer` and other text vectorizers
  could error during stop words validation with custom preprocessors
  or tokenizers. :issue:`12393` by `Roman Yurchak`_.

:mod:`sklearn.linear_model`
...........................

- |Fix| :class:`linear_model.SGDClassifier` and variants
  with ``early_stopping=True`` would not use a consistent validation
  split in the multiclass case and this would cause a crash when using
  those estimators as part of parallel parameter search or cross-validation.
  :issue:`12122` by :user:`Olivier Grisel <ogrisel>`.

- |Fix| Fixed a bug affecting :class:`linear_model.SGDClassifier` in the
  multiclass case. Each one-vs-all step is run in a :class:`joblib.Parallel`
  call and mutating a common parameter, causing a segmentation fault if called
  within a backend using processes and not threads. We now use
  ``require=sharedmem`` at the :class:`joblib.Parallel` instance creation.
  :issue:`12518` by :user:`Pierre Glaser <pierreglaser>` and
  :user:`Olivier Grisel <ogrisel>`.

:mod:`sklearn.metrics`
......................

- |Fix| Fixed a bug in `metrics.pairwise.pairwise_distances_argmin_min`
  which returned the square root of the distance when the metric parameter was
  set to "euclidean". :issue:`12481` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| Fixed a bug in `metrics.pairwise.pairwise_distances_chunked`
  which didn't ensure the diagonal is zero for euclidean distances.
  :issue:`12612` by :user:`Andreas Müller <amueller>`.

- |API| The `metrics.calinski_harabaz_score` has been renamed to
  :func:`metrics.calinski_harabasz_score` and will be removed in version 0.23.
  :issue:`12211` by :user:`Lisa Thomas <LisaThomas9>`,
  :user:`Mark Hannel <markhannel>` and :user:`Melissa Ferrari <mferrari3>`.

:mod:`sklearn.mixture`
......................

- |Fix| Ensure that the ``fit_predict`` method of
  :class:`mixture.GaussianMixture` and :class:`mixture.BayesianGaussianMixture`
  always yield assignments consistent with ``fit`` followed by ``predict`` even
  if the convergence criterion is too loose or not met. :issue:`12451`
  by :user:`Olivier Grisel <ogrisel>`.

:mod:`sklearn.neighbors`
........................

- |Fix| force the parallelism backend to :code:`threading` for
  :class:`neighbors.KDTree` and :class:`neighbors.BallTree` in Python 2.7 to
  avoid pickling errors caused by the serialization of their methods.
  :issue:`12171` by :user:`Thomas Moreau <tomMoral>`.

:mod:`sklearn.preprocessing`
............................

- |Fix| Fixed bug in :class:`preprocessing.OrdinalEncoder` when passing
  manually specified categories. :issue:`12365` by `Joris Van den Bossche`_.

- |Fix| Fixed bug in :class:`preprocessing.KBinsDiscretizer` where the
  ``transform`` method mutates the ``_encoder`` attribute. The ``transform``
  method is now thread safe. :issue:`12514` by
  :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed a bug in :class:`preprocessing.PowerTransformer` where the
  Yeo-Johnson transform was incorrect for lambda parameters outside of
  ``[0, 2]``. :issue:`12522` by :user:`Nicolas Hug <NicolasHug>`.

- |Fix| Fixed a bug in :class:`preprocessing.OneHotEncoder` where transform
  failed when set to ignore unknown numpy strings of different lengths.
  :issue:`12471` by :user:`Gabriel Marzinotto <GMarzinotto>`.

- |API| The default value of the :code:`method` argument in
  :func:`preprocessing.power_transform` will be changed from :code:`box-cox`
  to :code:`yeo-johnson` to match :class:`preprocessing.PowerTransformer`
  in version 0.23. A FutureWarning is raised when the default value is used.
  :issue:`12317` by :user:`Eric Chang <chang>`.

:mod:`sklearn.utils`
....................

- |Fix| Use float64 for mean accumulator to avoid floating point
  precision issues in :class:`preprocessing.StandardScaler` and
  :class:`decomposition.IncrementalPCA` when using float32 datasets.
  :issue:`12338` by :user:`bauks <bauks>`.

- |Fix| Calling :func:`utils.check_array` on ``pandas.Series``, which
  raised an error in 0.20.0, now returns the expected output again.
  :issue:`12625` by `Andreas Müller`_.

Miscellaneous
-------------

- |Fix| When using site joblib by setting the environment variable
  ``SKLEARN_SITE_JOBLIB``, added compatibility with joblib 0.11 in addition
  to 0.12. :issue:`12350` by `Joel Nothman`_ and `Roman Yurchak`_.

- |Fix| Make sure to avoid raising ``FutureWarning`` when calling
  ``np.vstack`` with numpy 1.16 and later (use list comprehensions
  instead of generator expressions in many locations of the scikit-learn
  code base). :issue:`12467` by :user:`Olivier Grisel <ogrisel>`.

- |API| Removed all mentions of ``sklearn.externals.joblib``, and deprecated
  joblib methods exposed in ``sklearn.utils``, except for
  :func:`utils.parallel_backend` and :func:`utils.register_parallel_backend`,
  which allow users to configure parallel computation in scikit-learn.
  Other functionalities are part of the `joblib <https://joblib.readthedocs.io/>`_
  package and should be used directly, by installing it.
  The goal of this change is to prepare for
  unvendoring joblib in a future version of scikit-learn.
  :issue:`12345` by :user:`Thomas Moreau <tomMoral>`.

Code and Documentation Contributors
-----------------------------------

With thanks to:

Adrin Jalali, Andrea Navarrete, Andreas Mueller, bauks, BenjaStudio,
Cheuk Ting Ho, Connossor, Corey Levinson, Dan Stine, daten-kieker,
Denis Kataev, Dillon Gardner, Dmitry Vukolov, Dougal J. Sutherland,
Edward J Brown, Eric Chang, Federico Caselli, Gabriel Marzinotto,
Gael Varoquaux, GauravAhlawat, Gustavo De Mari Pereira, Hanmin Qin,
haroldfox, JackLangerman, Jacopo Notarstefano, janvanrijn, jdethurens,
jeremiedbb, Joel Nothman, Joris Van den Bossche, Koen, Kushal Chauhan,
Lee Yi Jie Joel, Lily Xiong, mail-liam, Mark Hannel, melsyt, Ming Li,
Nicholas Smith, Nicolas Hug, Nikolay Shebanov, Oleksandr Pavlyk,
Olivier Grisel, Peter Hausamann, Pierre Glaser, Pulkit Maloo,
Quentin Batista, Radostin Stoyanov, Ramil Nugmanov, Rebekah Kim,
Reshama Shaikh, Rohan Singh, Roman Feldbauer, Roman Yurchak,
Roopam Sharma, Sam Waterbury, Scott Lowe, Sebastian Raschka,
Stephen Tierney, SylvainLan, TakingItCasual, Thomas Fan, Thomas Moreau,
Tom Dupré la Tour, Tulio Casagrande, Utkarsh Upadhyay, Xing Han Lu,
Yaroslav Halchenko, Zach Miller

.. _changes_0_20:

Version 0.20.0
==============

**September 25, 2018**

This release packs in a mountain of bug fixes, features and enhancements for
the Scikit-learn library, and improvements to the documentation and examples.
Thanks to our contributors!

This release is dedicated to the memory of Raghav Rajagopalan.

Highlights
----------

We have tried to improve our support for common data-science use-cases
including missing values, categorical variables, heterogeneous data, and
features/targets with unusual distributions.
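One tool for unusually distributed features, :class:`preprocessing.PowerTransformer`,
is built on the Yeo-Johnson transform (whose behaviour for lambda outside
``[0, 2]`` was fixed in 0.20.1 above). A minimal NumPy sketch of the transform
itself, for illustration only and not the library's implementation:

```python
import numpy as np

def yeo_johnson(x, lmbda):
    """Yeo-Johnson transform of a 1-D array for a fixed lambda."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    if abs(lmbda) > 1e-12:                # lambda != 0 branch
        out[pos] = ((x[pos] + 1.0) ** lmbda - 1.0) / lmbda
    else:                                 # lambda == 0 branch
        out[pos] = np.log1p(x[pos])
    if abs(lmbda - 2.0) > 1e-12:          # lambda != 2 branch
        out[~pos] = -(((-x[~pos] + 1.0) ** (2.0 - lmbda) - 1.0) / (2.0 - lmbda))
    else:                                 # lambda == 2 branch
        out[~pos] = -np.log1p(-x[~pos])
    return out

# lambda = 3 lies outside [0, 2], the regime the 0.20.1 fix addressed.
print(yeo_johnson([-1.0, 0.0, 1.0], 3.0))
```

:class:`preprocessing.PowerTransformer` additionally estimates lambda by
maximum likelihood and can standardize the output; the sketch only shows the
pointwise transform.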
Missing values in features, represented by NaNs, are now accepted in
column-wise preprocessing such as scalers. Each feature is fitted disregarding
NaNs, and data containing NaNs can be transformed. The new
:mod:`sklearn.impute` module provides estimators for learning despite missing
data.

:class:`~compose.ColumnTransformer` handles the case where different features
or columns of a pandas DataFrame need different preprocessing. String or
pandas Categorical columns can now be encoded with
:class:`~preprocessing.OneHotEncoder` or :class:`~preprocessing.OrdinalEncoder`.

:class:`~compose.TransformedTargetRegressor` helps when the regression target
needs to be transformed to be modeled. :class:`~preprocessing.PowerTransformer`
and :class:`~preprocessing.KBinsDiscretizer` join
:class:`~preprocessing.QuantileTransformer` as non-linear transformations.

Beyond this, we have added :term:`sample_weight` support to several estimators
(including :class:`~cluster.KMeans`, :class:`~linear_model.BayesianRidge` and
:class:`~neighbors.KernelDensity`) and improved stopping criteria in others
(including :class:`~neural_network.MLPRegressor`,
:class:`~ensemble.GradientBoostingRegressor` and
:class:`~linear_model.SGDRegressor`).

This release is also the first to be accompanied by a :ref:`glossary` developed
by `Joel Nothman`_. The glossary is a reference resource to help users and
contributors become familiar with the terminology and conventions used in
Scikit-learn.

Sorry if your contribution didn't make it into the highlights. There's a lot
here...

Changed models
--------------

The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.

- :class:`cluster.MeanShift` (bug fix)
- :class:`decomposition.IncrementalPCA` in Python 2 (bug fix)
- :class:`decomposition.SparsePCA` (bug fix)
- :class:`ensemble.GradientBoostingClassifier` (bug fix affecting feature
  importances)
- :class:`isotonic.IsotonicRegression` (bug fix)
- :class:`linear_model.ARDRegression` (bug fix)
- :class:`linear_model.LogisticRegressionCV` (bug fix)
- :class:`linear_model.OrthogonalMatchingPursuit` (bug fix)
- :class:`linear_model.PassiveAggressiveClassifier` (bug fix)
- :class:`linear_model.PassiveAggressiveRegressor` (bug fix)
- :class:`linear_model.Perceptron` (bug fix)
- :class:`linear_model.SGDClassifier` (bug fix)
- :class:`linear_model.SGDRegressor` (bug fix)
- :func:`metrics.roc_auc_score` (bug fix)
- :func:`metrics.roc_curve` (bug fix)
- `neural_network.BaseMultilayerPerceptron` (bug fix)
- :class:`neural_network.MLPClassifier` (bug fix)
- :class:`neural_network.MLPRegressor` (bug fix)

The v0.19.0 release notes failed to mention a backwards incompatibility with
:class:`model_selection.StratifiedKFold` when ``shuffle=True`` due to
:issue:`7823`. Details are listed in the changelog below.

(While we are trying to better inform users by providing this information, we
cannot assure that this list is complete.)

Known Major Bugs
----------------

- :issue:`11924`: :class:`linear_model.LogisticRegressionCV` with
  ``solver='lbfgs'`` and ``multi_class='multinomial'`` may be non-deterministic
  or otherwise broken on macOS. This appears to be the case on Travis CI
  servers, but has not been confirmed on personal MacBooks. This issue has been
  present in previous releases.

- :issue:`9354`: :func:`metrics.pairwise.euclidean_distances` (which is used
  several times throughout the library) gives results with poor precision,
  which particularly affects its use with 32-bit float inputs. This became
  more problematic in versions 0.18 and 0.19 when some algorithms were changed
  to avoid casting 32-bit data into 64-bit.

Changelog
---------

Support for Python 3.3 has been officially dropped.

:mod:`sklearn.cluster`
......................

- |MajorFeature| :class:`cluster.AgglomerativeClustering` now supports Single
  Linkage clustering via ``linkage='single'``. :issue:`9372` by :user:`Leland
  McInnes <lmcinnes>` and :user:`Steve Astels <sastels>`.

- |Feature| :class:`cluster.KMeans` and :class:`cluster.MiniBatchKMeans` now
  support sample weights via new parameter ``sample_weight`` in the ``fit``
  function. :issue:`10933` by :user:`Johannes Hansen <jnhansen>`.

- |Efficiency| :class:`cluster.KMeans`, :class:`cluster.MiniBatchKMeans` and
  :func:`cluster.k_means` passed with ``algorithm='full'`` now enforces
  row-major ordering, improving runtime.
  :issue:`10471` by :user:`Gaurav Dhingra <gxyd>`.

- |Efficiency| :class:`cluster.DBSCAN` now is parallelized according to
  ``n_jobs`` regardless of ``algorithm``.
  :issue:`8003` by :user:`Joël Billaud <recamshak>`.

- |Enhancement| :class:`cluster.KMeans` now gives a warning if the number of
  distinct clusters found is smaller than ``n_clusters``. This may occur when
  the number of distinct points in the data set is actually smaller than the
  number of clusters one is looking for.
  :issue:`10059` by :user:`Christian Braune <christianbraune79>`.

- |Fix| Fixed a bug where the ``fit`` method of
  :class:`cluster.AffinityPropagation` stored cluster centers as a 3d array
  instead of a 2d array in case of non-convergence. For the same class, fixed
  undefined and arbitrary behavior in case of training data where all samples
  had equal similarity.
  :issue:`9612`. By :user:`Jonatan Samoocha <jsamoocha>`.

- |Fix| Fixed a bug in :func:`cluster.spectral_clustering` where the
  normalization of the spectrum was using a division instead of a
  multiplication. :issue:`8129` by :user:`Jan Margeta <jmargeta>`,
  :user:`Guillaume Lemaitre <glemaitre>`, and :user:`Devansh D. <devanshdalal>`.

- |Fix| Fixed a bug in `cluster.k_means_elkan` where the returned
  ``iteration`` was 1 less than the correct value. Also added the missing
  ``n_iter_`` attribute in the docstring of :class:`cluster.KMeans`.
  :issue:`11353` by :user:`Jeremie du Boisberranger <jeremiedbb>`.

- |Fix| Fixed a bug in :func:`cluster.mean_shift` where the assigned labels
  were not deterministic if there were multiple clusters with the same
  intensities.
  :issue:`11901` by :user:`Adrin Jalali <adrinjalali>`.

- |API| Deprecate ``pooling_func`` unused parameter in
  :class:`cluster.AgglomerativeClustering`.
  :issue:`9875` by :user:`Kumar Ashutosh <thechargedneutron>`.

:mod:`sklearn.compose`
......................

New module.

- |MajorFeature| Added :class:`compose.ColumnTransformer`, which allows to
  apply different transformers to different columns of arrays or pandas
  DataFrames. :issue:`9012` by `Andreas Müller`_ and `Joris Van den Bossche`_,
  and :issue:`11315` by :user:`Thomas Fan <thomasjpfan>`.

- |MajorFeature| Added the :class:`compose.TransformedTargetRegressor` which
  transforms the target y before fitting a regression model. The predictions
  are mapped back to the original space via an inverse transform.
  :issue:`9041` by `Andreas Müller`_ and
  :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.covariance`
.........................

- |Efficiency| Runtime improvements to :class:`covariance.GraphicalLasso`.
  :issue:`9858` by :user:`Steven Brown <stevendbrown>`.

- |API| The `covariance.graph_lasso`, `covariance.GraphLasso` and
  `covariance.GraphLassoCV` have been renamed to
  :func:`covariance.graphical_lasso`, :class:`covariance.GraphicalLasso` and
  :class:`covariance.GraphicalLassoCV` respectively and will be removed in
  version 0.22.
  :issue:`9993` by :user:`Artiem Krinitsyn <artiemq>`.

:mod:`sklearn.datasets`
.......................

- |MajorFeature| Added :func:`datasets.fetch_openml` to fetch datasets from
  `OpenML <https://openml.org>`_. OpenML is a free, open data sharing platform
  and will be used instead of mldata as it provides better service
  availability. :issue:`9908` by `Andreas Müller`_ and
  :user:`Jan N. van Rijn <janvanrijn>`.
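The new ``sample_weight`` support in :class:`cluster.KMeans` noted above can be
sketched as follows; this assumes scikit-learn >= 0.20 and NumPy, and uses toy
1-D data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious 1-D clusters around 0 and 10.
X = np.array([[0.0], [0.1], [10.0], [10.1]])

# Since 0.20, fit accepts per-sample weights (uniform here, so the result
# matches unweighted k-means).
km = KMeans(n_clusters=2, random_state=0, n_init=10).fit(
    X, sample_weight=[1.0, 1.0, 1.0, 1.0]
)
print(sorted(km.cluster_centers_.ravel()))  # centers near 0.05 and 10.05
```

Non-uniform weights shift each center toward its heavier samples, which is
useful when observations represent aggregated counts.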
- |Feature| In :func:`datasets.make_blobs`, one can now pass a list to the
  ``n_samples`` parameter to indicate the number of samples to generate per
  cluster. :issue:`8617` by :user:`Maskani Filali Mohamed <maskani-moh>` and
  :user:`Konstantinos Katrioplas <kkatrio>`.

- |Feature| Add ``filename`` attribute to dataset loaders of
  :mod:`sklearn.datasets` that have a CSV file.
  :issue:`9101` by :user:`alex-33 <alex-33>`
  and :user:`Maskani Filali Mohamed <maskani-moh>`.

- |Feature| ``return_X_y`` parameter has been added to several dataset loaders.
  :issue:`10774` by :user:`Chris Catalfo <ccatalfo>`.

- |Fix| Fixed a bug in `datasets.load_boston` which had a wrong data
  point. :issue:`10795` by :user:`Takeshi Yoshizawa <tarcusx>`.

- |Fix| Fixed a bug in :func:`datasets.load_iris` which had two wrong data
  points. :issue:`11082` by :user:`Sadhana Srinivasan <rotuna>` and
  :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed a bug in :func:`datasets.fetch_kddcup99`, where data were not
  properly shuffled. :issue:`9731` by `Nicolas Goix`_.

- |Fix| Fixed a bug in :func:`datasets.make_circles`, where no odd number of
  data points could be generated. :issue:`10045` by :user:`Christian Braune
  <christianbraune79>`.

- |API| Deprecated `sklearn.datasets.fetch_mldata` to be removed in
  version 0.22. mldata.org is no longer operational. Until removal it will
  remain possible to load cached datasets. :issue:`11466` by `Joel Nothman`_.

:mod:`sklearn.decomposition`
............................

- |Feature| :func:`decomposition.dict_learning` functions and models now
  support positivity constraints. This applies to the dictionary and sparse
  code. :issue:`6374` by :user:`John Kirkham <jakirkham>`.

- |Feature| |Fix| :class:`decomposition.SparsePCA` now exposes
  ``normalize_components``. When set to True, the train and test data are
  centered with the train mean respectively during the fit phase and the
  transform phase. This fixes the behavior of SparsePCA. When set to False
  (which is the default), the previous abnormal behaviour still holds. The
  False value is for backward compatibility and should not be used.
  :issue:`11585` by :user:`Ivan Panico <FollowKenny>`.

- |Efficiency| Efficiency improvements in :func:`decomposition.dict_learning`.
  :issue:`11420` and others by :user:`John Kirkham <jakirkham>`.

- |Fix| Fix for uninformative error in :class:`decomposition.IncrementalPCA`:
  now an error is raised if the number of components is larger than the
  chosen batch size. The ``n_components=None`` case was adapted accordingly.
  :issue:`6452`. By :user:`Wally Gauze <wallygauze>`.

- |Fix| Fixed a bug where the ``partial_fit`` method of
  :class:`decomposition.IncrementalPCA` used integer division instead of float
  division on Python 2.
  :issue:`9492` by :user:`James Bourbeau <jrbourbeau>`.

- |Fix| In :class:`decomposition.PCA`, selecting an ``n_components`` parameter
  greater than the number of samples now raises an error. Similarly, the
  ``n_components=None`` case now selects the minimum of ``n_samples`` and
  ``n_features``. :issue:`8484` by :user:`Wally Gauze <wallygauze>`.

- |Fix| Fixed a bug in :class:`decomposition.PCA` where users will get
  unexpected error with large datasets when ``n_components='mle'`` on Python 3
  versions.
  :issue:`9886` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed an underflow in calculating KL-divergence for
  :class:`decomposition.NMF`. :issue:`10142` by `Tom Dupre la Tour`_.

- |Fix| Fixed a bug in :class:`decomposition.SparseCoder` when running OMP
  sparse coding in parallel using read-only memory-mapped datastructures.
  :issue:`5956` by :user:`Vighnesh Birodkar <vighneshbirodkar>` and
  :user:`Olivier Grisel <ogrisel>`.

:mod:`sklearn.discriminant_analysis`
....................................

- |Efficiency| Memory usage improvement for `_class_means` and
  `_class_cov` in :mod:`sklearn.discriminant_analysis`. :issue:`10898` by
  :user:`Nanxin Chen <bobchennan>`.

:mod:`sklearn.dummy`
....................

- |Feature| :class:`dummy.DummyRegressor` now has a ``return_std`` option in
  its ``predict`` method. The returned standard deviations will be zeros.

- |Feature| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`
  now only require X to be an object with finite length or shape.
  :issue:`9832` by :user:`Vrishank Bhardwaj <vrishank97>`.

- |Feature| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`
  can now be scored without supplying test samples.
  :issue:`11951` by :user:`Rüdiger Busche <JarnoRFB>`.

:mod:`sklearn.ensemble`
.......................

- |Feature| :class:`ensemble.BaggingRegressor` and
  :class:`ensemble.BaggingClassifier` can now be fit with missing/non-finite
  values in X and/or multi-output Y to support wrapping pipelines that perform
  their own imputation.
  :issue:`9707` by :user:`Jimmy Wan <jimmywan>`.

- |Feature| :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor` now support early stopping
  via ``n_iter_no_change``, ``validation_fraction`` and ``tol``. :issue:`7071`
  by `Raghav RV`_.

- |Feature| Added ``named_estimators_`` parameter in
  :class:`ensemble.VotingClassifier` to access fitted estimators.
  :issue:`9157` by :user:`Herilalaina Rakotoarison <herilalaina>`.

- |Fix| Fixed a bug when fitting :class:`ensemble.GradientBoostingClassifier`
  or :class:`ensemble.GradientBoostingRegressor` with ``warm_start=True``
  which previously raised a segmentation fault due to a non-conversion of CSC
  matrix into CSR format expected by ``decision_function``. Similarly,
  Fortran-ordered arrays are converted to C-ordered arrays in the dense case.
  :issue:`9991` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingRegressor`
  and :class:`ensemble.GradientBoostingClassifier` to have
  feature importances summed and then normalized, rather than normalizing on a
  per-tree basis. The previous behavior over-weighted the Gini importance of
  features that appear in later stages. This issue only affected feature
  importances. :issue:`11176` by :user:`Gil Forsyth <gforsyth>`.

- |API| The default value of the ``n_estimators`` parameter of
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.ExtraTreesClassifier`,
  :class:`ensemble.ExtraTreesRegressor`, and
  :class:`ensemble.RandomTreesEmbedding` will change from 10 in version 0.20
  to 100 in 0.22. A FutureWarning is raised when the default value is used.
  :issue:`11542` by :user:`Anna Ayzenshtat <annaayzenshtat>`.

- |API| Classes derived from `ensemble.BaseBagging`: the attribute
  ``estimators_samples_`` will return a list of arrays containing the indices
  selected for each bootstrap instead of a list of arrays containing the mask
  of the samples selected for each bootstrap. Indices allow repeating samples
  while masks do not allow this functionality.
  :issue:`9524` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| `ensemble.BaseBagging` where one could not deterministically
  reproduce the ``fit`` result using the object attributes when
  ``random_state`` is set.
  :issue:`9723` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.feature_extraction`
.................................

- |Feature| Enable the call to `get_feature_names` in unfitted
  :class:`feature_extraction.text.CountVectorizer` initialized with a
  vocabulary. :issue:`10908` by :user:`Mohamed Maskani <maskani-moh>`.

- |Enhancement| ``idf_`` can now be set on a
  :class:`feature_extraction.text.TfidfTransformer`.
  :issue:`10899` by :user:`Sergey Melderis <serega>`.

- |Fix| Fixed a bug in :func:`feature_extraction.image.extract_patches_2d`
  which would throw an exception if ``max_patches`` was greater than or equal
  to the number of all possible patches rather than simply returning the
  number of possible patches.
  :issue:`10101` by :user:`Varun Agrawal <varunagrawal>`.
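The gradient-boosting feature-importance fix above changes only the order of
aggregation; a NumPy sketch of the two schemes with illustrative numbers
(hypothetical per-tree importance contributions, not real model output):

```python
import numpy as np

# Unnormalized importance contributions for 2 features across 3 trees:
# early trees typically contribute more total impurity reduction.
per_tree = np.array([[4.0, 0.0],   # early, large tree
                     [1.0, 1.0],
                     [0.0, 2.0]])  # later, smaller tree

# Old behaviour: normalize each tree, then average. Every tree gets equal
# weight, so features used in small, late trees are over-weighted.
old = (per_tree / per_tree.sum(axis=1, keepdims=True)).mean(axis=0)

# Fixed behaviour (0.20): sum raw importances across trees, normalize once.
new = per_tree.sum(axis=0) / per_tree.sum()

print(old)  # [0.5   0.5  ]
print(new)  # [0.625 0.375]
```

Under the fix, a tree's influence on the reported importances is proportional
to how much total impurity reduction it contributed, not one vote per tree.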
 Fix  Fixed a bug in  class  feature extraction text CountVectorizer      class  feature extraction text TfidfVectorizer      class  feature extraction text HashingVectorizer  to support 64 bit sparse   array indexing necessary to process large datasets with more than 2 10  tokens    words or n grams    issue  9147  by  user  Claes Fredrik Mannby  mannby     and  Roman Yurchak        Fix  Fixed bug in  class  feature extraction text TfidfVectorizer  which   was ignoring the parameter   dtype    In addition     class  feature extraction text TfidfTransformer  will preserve   dtype     for floating and raise a warning if   dtype   requested is integer     issue  10441  by  user  Mayur Kulkarni  maykulkarni   and    user  Guillaume Lemaitre  glemaitre       mod  sklearn feature selection                                       Feature  Added select K best features functionality to    class  feature selection SelectFromModel      issue  6689  by  user  Nihar Sheth  nsheth12   and    user  Quazi Rahman  qmaruf        Feature  Added   min features to select   parameter to    class  feature selection RFECV  to bound evaluated features counts     issue  11293  by  user  Brent Yi  brentyi        Feature   class  feature selection RFECV  s fit method now supports    term  groups     issue  9656  by  user  Adam Greenhall  adamgreenhall        Fix  Fixed computation of   n features to compute   for edge case with tied   CV scores in  class  feature selection RFECV      issue  9222  by  user  Nick Hoh  nickypie      mod  sklearn gaussian process                                      Efficiency  In  class  gaussian process GaussianProcessRegressor   method     predict   is faster when using   return std True   in particular more when   called several times in a row   issue  9234  by  user  andrewww  andrewww     and  user  Minghui Liu  minghui liu       mod  sklearn impute                           New module  adopting   preprocessing Imputer   as    class  impute SimpleImputer  
with minor changes (see under preprocessing below).

- |MajorFeature| Added :class:`impute.MissingIndicator` which generates a
  binary indicator for missing values. :issue:`8075` by :user:`Maniteja Nandana
  <maniteja123>` and :user:`Guillaume Lemaitre <glemaitre>`.

- |Feature| The :class:`impute.SimpleImputer` has a new strategy,
  ``'constant'``, to complete missing values with a fixed one, given by the
  ``fill_value`` parameter. This strategy supports numeric and non-numeric
  data, and so does the ``'most_frequent'`` strategy now. :issue:`11211` by
  :user:`Jeremie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.isotonic`
.......................

- |Fix| Fixed a bug in :class:`isotonic.IsotonicRegression` which incorrectly
  combined weights when fitting a model to data involving points with
  identical X values.
  :issue:`9484` by :user:`Dallas Card <dallascard>`.

:mod:`sklearn.linear_model`
...........................

- |Feature| :class:`linear_model.SGDClassifier`,
  :class:`linear_model.SGDRegressor`,
  :class:`linear_model.PassiveAggressiveClassifier`,
  :class:`linear_model.PassiveAggressiveRegressor` and
  :class:`linear_model.Perceptron` now expose ``early_stopping``,
  ``validation_fraction`` and ``n_iter_no_change`` parameters, to stop
  optimization by monitoring the score on a validation set. A new learning
  rate ``'adaptive'`` strategy divides the learning rate by 5 each time
  ``n_iter_no_change`` consecutive epochs fail to improve the model.
  :issue:`9043` by `Tom Dupre la Tour`_.

- |Feature| Add ``sample_weight`` parameter to the fit method of
  :class:`linear_model.BayesianRidge` for weighted linear regression.
  :issue:`10112` by :user:`Peter St. John <pstjohn>`.

- |Fix| Fixed a bug in ``logistic.logistic_regression_path`` to ensure
  that the returned coefficients are correct when ``multi_class='multinomial'``.
  Previously, some of the coefficients would override each other, leading to
  incorrect results in :class:`linear_model.LogisticRegressionCV`.
  :issue:`11724` by :user:`Nicolas Hug <NicolasHug>`.

- |Fix| Fixed a bug in :class:`linear_model.LogisticRegression` where, when
  using the parameter ``multi_class='multinomial'``, the ``predict_proba``
  method was returning incorrect probabilities in the case of binary outcomes.
  :issue:`9939` by :user:`Roger Westover <rwolst>`.

- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where the
  ``score`` method always computed accuracy, not the metric given by
  the ``scoring`` parameter.
  :issue:`10998` by :user:`Thomas Fan <thomasjpfan>`.

- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where the
  'ovr' strategy was always used to compute cross-validation scores in the
  multiclass setting, even if ``'multinomial'`` was set.
  :issue:`8720` by :user:`William de Vazelhes <wdevazelhes>`.

- |Fix| Fixed a bug in :class:`linear_model.OrthogonalMatchingPursuit` that was
  broken when setting ``normalize=False``.
  :issue:`10071` by `Alexandre Gramfort`_.

- |Fix| Fixed a bug in :class:`linear_model.ARDRegression` which caused
  incorrectly updated estimates for the standard deviation and the
  coefficients. :issue:`10153` by :user:`Jörg Döpfert <jdoepfert>`.

- |Fix| Fixed a bug in :class:`linear_model.ARDRegression` and
  :class:`linear_model.BayesianRidge` which caused NaN predictions when fitted
  with a constant target.
  :issue:`10095` by :user:`Jörg Döpfert <jdoepfert>`.

- |Fix| Fixed a bug in :class:`linear_model.RidgeClassifierCV` where
  the parameter ``store_cv_values`` was not implemented though
  it was documented in ``cv_values`` as a way to set up the storage
  of cross-validation values for different alphas. :issue:`10297` by
  :user:`Mabel Villalba Jiménez <mabelvj>`.

- |Fix| Fixed a bug in :class:`linear_model.ElasticNet` which caused the input
  to be overridden when using parameter ``copy_X=True`` and
  ``check_input=False``. :issue:`10581` by :user:`Yacine Mazari <ymazari>`.

- |Fix| Fixed a bug in :class:`linear_model.Lasso`
  where the coefficient had wrong shape when ``fit_intercept=False``.
  :issue:`10687` by :user:`Martin Hahn <martin-hahn>`.

- |Fix| Fixed a bug in :class:`linear_model.LogisticRegression` where
  ``multi_class='multinomial'`` with binary output and ``warm_start=True``
  produced incorrect results.
  :issue:`10836` by :user:`Aishwarya Srinivasan <aishgrt1>`.

- |Fix| Fixed a bug in :class:`linear_model.RidgeCV` where using integer
  ``alphas`` raised an error.
  :issue:`10397` by :user:`Mabel Villalba Jiménez <mabelvj>`.

- |Fix| Fixed condition triggering gap computation in
  :class:`linear_model.Lasso` and :class:`linear_model.ElasticNet` when working
  with sparse matrices. :issue:`10992` by `Alexandre Gramfort`_.

- |Fix| Fixed a bug in :class:`linear_model.SGDClassifier`,
  :class:`linear_model.SGDRegressor`,
  :class:`linear_model.PassiveAggressiveClassifier`,
  :class:`linear_model.PassiveAggressiveRegressor` and
  :class:`linear_model.Perceptron`, where the stopping criterion was stopping
  the algorithm before convergence. A parameter ``n_iter_no_change`` was added
  and set by default to 5. Previous behavior is equivalent to setting the
  parameter to 1. :issue:`9043` by `Tom Dupre la Tour`_.

- |Fix| Fixed a bug where liblinear and libsvm-based estimators would segfault
  if passed a scipy.sparse matrix with 64-bit indices. They now raise a
  ValueError.
  :issue:`11327` by :user:`Karan Dhingra <kdhingra307>` and `Joel Nothman`_.

- |API| The default values of the ``solver`` and ``multi_class`` parameters of
  :class:`linear_model.LogisticRegression` will change respectively from
  ``'liblinear'`` and ``'ovr'`` in version 0.20 to ``'lbfgs'`` and
  ``'auto'`` in version 0.22. A FutureWarning is raised when the default
  values are used. :issue:`11905` by `Tom Dupre la Tour`_ and `Joel Nothman`_.

- |API| Deprecate ``positive=True`` option in :class:`linear_model.Lars` as
  the underlying implementation is broken. Use
  :class:`linear_model.Lasso` instead. :issue:`9837` by `Alexandre Gramfort`_.

- |API| ``n_iter_`` may vary from previous releases in
  :class:`linear_model.LogisticRegression` with ``solver='lbfgs'`` and
  :class:`linear_model.HuberRegressor`. For Scipy <= 1.0.0, the optimizer could
  perform more than the requested maximum number of iterations. Now both
  estimators will report at most ``max_iter`` iterations even if more were
  performed. :issue:`10723` by `Joel Nothman`_.

:mod:`sklearn.manifold`
.......................

- |Efficiency| Speed improvements for both 'exact' and 'barnes_hut' methods in
  :class:`manifold.TSNE`. :issue:`10593` and :issue:`10610` by
  `Tom Dupre la Tour`_.

- |Feature| Support sparse input in :meth:`manifold.Isomap.fit`.
  :issue:`8554` by :user:`Leland McInnes <lmcinnes>`.

- |Feature| ``manifold.t_sne.trustworthiness`` accepts metrics other than
  Euclidean. :issue:`9775` by :user:`William de Vazelhes <wdevazelhes>`.

- |Fix| Fixed a bug in :func:`manifold.spectral_embedding` where the
  normalization of the spectrum was using a division instead of a
  multiplication. :issue:`8129` by :user:`Jan Margeta <jmargeta>`,
  :user:`Guillaume Lemaitre <glemaitre>` and :user:`Devansh D. <devanshdalal>`.

- |API| |Feature| Deprecate ``precomputed`` parameter in function
  ``manifold.t_sne.trustworthiness``. Instead, the new parameter ``metric``
  should be used with any compatible metric including 'precomputed', in which
  case the input matrix ``X`` should be a matrix of pairwise distances or
  squared distances. :issue:`9775` by
  :user:`William de Vazelhes <wdevazelhes>`.

:mod:`sklearn.metrics`
......................

- |MajorFeature| Added the :func:`metrics.davies_bouldin_score` metric for
  evaluation of clustering models without a ground truth. :issue:`10827` by
  :user:`Luis Osa <logc>`.

- |MajorFeature| Added the :func:`metrics.balanced_accuracy_score` metric and
  a corresponding ``'balanced_accuracy'`` scorer for binary and multiclass
  classification. :issue:`8066` by :user:`xyguo` and :user:`Aman Dalmia
  <dalmia>`, and :issue:`10587` by `Joel Nothman`_.

- |Feature| Partial AUC is available via ``max_fpr`` parameter in
  :func:`metrics.roc_auc_score`. :issue:`3840` by
  :user:`Alexander Niederbühl <Alexander-N>`.

- |Feature| A scorer based on :func:`metrics.brier_score_loss` is also
  available. :issue:`9521` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Feature| Added control over the normalization in
  :func:`metrics.normalized_mutual_info_score` and
  :func:`metrics.adjusted_mutual_info_score` via the ``average_method``
  parameter. In version 0.22, the default normalizer for each will become
  the *arithmetic* mean of the entropies of each clustering. :issue:`11124` by
  :user:`Arya McCarthy <aryamccarthy>`.

- |Feature| Added ``output_dict`` parameter in
  :func:`metrics.classification_report` to return classification statistics as
  a dictionary.
  :issue:`11160` by :user:`Dan Barkhorn <danielbarkhorn>`.

- |Feature| :func:`metrics.classification_report` now reports all applicable
  averages on the given data, including micro, macro and weighted average as
  well as samples average for multilabel data.
  :issue:`11679` by :user:`Alexander Pacha <apacha>`.

- |Feature| :func:`metrics.average_precision_score` now supports binary
  ``y_true`` other than ``{0, 1}`` or ``{-1, 1}`` through the ``pos_label``
  parameter. :issue:`9980` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Feature| :func:`metrics.label_ranking_average_precision_score` now supports
  ``sample_weight``.
  :issue:`10845` by :user:`Jose Perez-Parras Toledano <jopepato>`.
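As a quick illustration of the new ``balanced_accuracy_score`` and the
``output_dict`` option of ``classification_report`` above (a minimal sketch;
behavior as in current scikit-learn releases):

```python
from sklearn.metrics import balanced_accuracy_score, classification_report

y_true = [0, 0, 0, 1]
y_pred = [0, 0, 0, 0]

# Balanced accuracy is the mean of per-class recall: (3/3 + 0/1) / 2 = 0.5,
# whereas plain accuracy would be a misleading 0.75 on this skewed data.
print(balanced_accuracy_score(y_true, y_pred))  # 0.5

# output_dict=True returns the report as a nested dict instead of a string.
report = classification_report(y_true, y_pred, output_dict=True)
print(report["0"]["recall"])  # 1.0
```

The dictionary form is convenient for logging individual per-class statistics
rather than parsing the formatted text report.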
- |Feature| Add ``dense_output`` parameter to
  :func:`metrics.pairwise.linear_kernel`. When False and both inputs are
  sparse, will return a sparse matrix.
  :issue:`10999` by :user:`Taylor G Smith <tgsmith61591>`.

- |Efficiency| :func:`metrics.silhouette_score` and
  :func:`metrics.silhouette_samples` are more memory efficient and run
  faster. This avoids some reported freezes and MemoryErrors.
  :issue:`11135` by `Joel Nothman`_.

- |Fix| Fixed a bug in :func:`metrics.precision_recall_fscore_support`
  when truncated ``range(n_labels)`` is passed as value for ``labels``.
  :issue:`10377` by :user:`Gaurav Dhingra <gxyd>`.

- |Fix| Fixed a bug due to floating point error in
  :func:`metrics.roc_auc_score` with non-integer sample weights. :issue:`9786`
  by :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed a bug where :func:`metrics.roc_curve` sometimes starts on the
  y-axis instead of (0, 0), which is inconsistent with the document and other
  implementations. Note that this will not influence the result from
  :func:`metrics.roc_auc_score`. :issue:`10093` by :user:`alexryndin
  <alexryndin>` and :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed a bug to avoid integer overflow. Casted product to 64-bit
  integer in :func:`metrics.mutual_info_score`.
  :issue:`9772` by :user:`Kumar Ashutosh <thechargedneutron>`.

- |Fix| Fixed a bug where :func:`metrics.average_precision_score` will
  sometimes return ``nan`` when ``sample_weight`` contains 0.
  :issue:`9980` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed a bug in :func:`metrics.fowlkes_mallows_score` to avoid integer
  overflow. Casted return value of ``contingency_matrix`` to ``int64`` and
  computed product of square roots rather than square root of product.
  :issue:`9515` by :user:`Alan Liddell <aliddell>` and
  :user:`Manh Dao <manhdao>`.

- |API| Deprecate ``reorder`` parameter in :func:`metrics.auc` as it's no
  longer required for :func:`metrics.roc_auc_score`. Moreover, using
  ``reorder=True`` can hide bugs due to floating point error in the input.
  :issue:`9851` by :user:`Hanmin Qin <qinhanmin2014>`.

- |API| In :func:`metrics.normalized_mutual_info_score` and
  :func:`metrics.adjusted_mutual_info_score`, warn that
  ``average_method`` will have a new default value. In version 0.22, the
  default normalizer for each will become the *arithmetic* mean of the
  entropies of each clustering. Currently,
  :func:`metrics.normalized_mutual_info_score` uses the default of
  ``average_method='geometric'``, and
  :func:`metrics.adjusted_mutual_info_score` uses the default of
  ``average_method='max'`` to match their behaviors in version 0.19.
  :issue:`11124` by :user:`Arya McCarthy <aryamccarthy>`.

- |API| The ``batch_size`` parameter to
  :func:`metrics.pairwise_distances_argmin_min` and
  :func:`metrics.pairwise_distances_argmin` is deprecated, to be removed in
  v0.22. It no longer has any effect, as batch size is determined by the
  global ``working_memory`` config. See :ref:`working_memory`. :issue:`10280`
  by `Joel Nothman`_ and :user:`Aman Dalmia <dalmia>`.

:mod:`sklearn.mixture`
......................

- |Feature| Added function :term:`fit_predict` to
  :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture`, which is essentially equivalent to
  calling :term:`fit` and :term:`predict`. :issue:`10336` by :user:`Shu Haoran
  <haoranShu>` and :user:`Andrew Peng <Andrew-peng>`.

- |Fix| Fixed a bug in ``mixture.BaseMixture`` where the reported ``n_iter_``
  was missing an iteration. It affected :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture`. :issue:`10740` by :user:`Erich
  Schubert <kno10>` and :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| Fixed a bug in ``mixture.BaseMixture`` and its subclasses
  :class:`mixture.GaussianMixture` and :class:`mixture.BayesianGaussianMixture`
  where the ``lower_bound_`` was not the max lower bound across all
  initializations (when ``n_init > 1``), but just the lower bound of the last
  initialization. :issue:`10869` by :user:`Aurélien Géron <ageron>`.

:mod:`sklearn.model_selection`
..............................

- |Feature| Add ``return_estimator`` parameter in
  :func:`model_selection.cross_validate` to return estimators fitted on each
  split. :issue:`9686` by :user:`Aurélien Bellet <bellet>`.

- |Feature| New ``refit_time_`` attribute will be stored in
  :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` if ``refit`` is set to ``True``.
  This will allow measuring the complete time it takes to perform
  hyperparameter optimization and refitting the best model on the whole
  dataset. :issue:`11310` by :user:`Matthias Feurer <mfeurer>`.

- |Feature| Expose ``error_score`` parameter in
  :func:`model_selection.cross_validate`,
  :func:`model_selection.cross_val_score`,
  :func:`model_selection.learning_curve` and
  :func:`model_selection.validation_curve` to control the behavior triggered
  when an error occurs in ``model_selection._fit_and_score``.
  :issue:`11576` by :user:`Samuel O. Ronsin <samronsin>`.

- |Feature| ``BaseSearchCV`` now has an experimental, private interface to
  support customized parameter search strategies, through its ``_run_search``
  method. See the implementations in :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` and please provide feedback if
  you use this. Note that we do not assure the stability of this API beyond
  version 0.20. :issue:`9599` by `Joel Nothman`_.

- |Enhancement| Add improved error message in
  :func:`model_selection.cross_val_score` when multiple metrics are passed in
  the ``scoring`` keyword. :issue:`11006` by :user:`Ming Li <minggli>`.

- |API| The default number of cross-validation folds ``cv`` and the default
  number of splits ``n_splits`` in the :class:`model_selection.KFold`-like
  splitters will change from 3 to 5 in 0.22, as 3-fold has a lot of variance.
  :issue:`11557` by :user:`Alexandre Boucaud <aboucaud>`.

- |API|
  The default of the ``iid`` parameter of :class:`model_selection.GridSearchCV`
  and :class:`model_selection.RandomizedSearchCV` will change from ``True`` to
  ``False`` in version 0.22 to correspond to the standard definition of
  cross-validation, and the parameter will be removed in version 0.24
  altogether. This parameter is of greatest practical significance where the
  sizes of different test sets in cross-validation were very unequal, i.e. in
  group-based CV strategies. :issue:`9085` by :user:`Laurent Direr <ldirer>`
  and `Andreas Müller`_.

- |API| The default value of the ``error_score`` parameter in
  :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` will change to ``np.NaN`` in
  version 0.22. :issue:`10677` by :user:`Kirill Zhdanovich <Zhdanovich>`.

- |API| Changed the ValueError exception raised in
  :class:`model_selection.ParameterSampler` to a UserWarning for the case
  where the class is instantiated with a greater value of ``n_iter`` than the
  total space of parameters in the parameter grid. ``n_iter`` now acts as an
  upper bound on iterations. :issue:`10982` by :user:`Juliet Lawton <julietcl>`.

- |API| Invalid input for :class:`model_selection.ParameterGrid` now
  raises TypeError.
  :issue:`10928` by :user:`Solutus Immensus <solutusimmensus>`.

:mod:`sklearn.multioutput`
..........................

- |MajorFeature| Added :class:`multioutput.RegressorChain` for multi-target
  regression. :issue:`9257` by :user:`Kumar Ashutosh <thechargedneutron>`.

:mod:`sklearn.naive_bayes`
..........................

- |MajorFeature| Added :class:`naive_bayes.ComplementNB`, which implements the
  Complement Naive Bayes classifier described in Rennie et al. (2003).
  :issue:`8190` by :user:`Michael A. Alcorn <airalcorn2>`.

- |Feature| Add ``var_smoothing`` parameter in :class:`naive_bayes.GaussianNB`
  to give precise control over variance calculation.
  :issue:`9681` by :user:`Dmitry Mottl <Mottl>`.

- |Fix| Fixed a bug in :class:`naive_bayes.GaussianNB` which incorrectly
  raised an error for a prior list which summed to 1.
  :issue:`10005` by :user:`Gaurav Dhingra <gxyd>`.

- |Fix| Fixed a bug in :class:`naive_bayes.MultinomialNB` which did not accept
  vector-valued pseudocounts (``alpha``).
  :issue:`10346` by :user:`Tobias Madsen <TobiasMadsen>`.

:mod:`sklearn.neighbors`
........................

- |Efficiency| :class:`neighbors.RadiusNeighborsRegressor` and
  :class:`neighbors.RadiusNeighborsClassifier` are now
  parallelized according to ``n_jobs`` regardless of ``algorithm``.
  :issue:`10887` by :user:`Joël Billaud <recamshak>`.

- |Efficiency| :mod:`sklearn.neighbors` query methods are now more
  memory efficient when ``algorithm='brute'``.
  :issue:`11136` by `Joel Nothman`_ and :user:`Aman Dalmia <dalmia>`.

- |Feature| Add ``sample_weight`` parameter to the fit method of
  :class:`neighbors.KernelDensity` to enable weighting in kernel density
  estimation.
  :issue:`4394` by :user:`Samuel O. Ronsin <samronsin>`.

- |Feature| Novelty detection with :class:`neighbors.LocalOutlierFactor`:
  Add a ``novelty`` parameter to :class:`neighbors.LocalOutlierFactor`. When
  ``novelty`` is set to True, :class:`neighbors.LocalOutlierFactor` can then
  be used for novelty detection, i.e. predict on new unseen data. Available
  prediction methods are ``predict``, ``decision_function`` and
  ``score_samples``. By default, ``novelty`` is set to ``False``, and only
  the ``fit_predict`` method is available.
  By :user:`Albert Thomas <albertcthomas>`.

- |Fix| Fixed a bug in :class:`neighbors.NearestNeighbors` where fitting a
  NearestNeighbors model fails when a) the distance metric used is a
  callable and b) the input to the NearestNeighbors model is sparse.
  :issue:`9579` by :user:`Thomas Kober <tttthomasssss>`.

- |Fix| Fixed a bug so ``predict`` in
  :class:`neighbors.RadiusNeighborsRegressor` can handle an empty neighbor set
  when using non-uniform weights. Also raises a new warning when no neighbors
  are found for samples. :issue:`9655` by
  :user:`Andreas Bjerre-Nielsen <abjer>`.

- |Fix| |Efficiency| Fixed a bug in ``KDTree`` construction that results in
  faster construction and querying times.
  :issue:`11556` by :user:`Jake VanderPlas <jakevdp>`.

- |Fix| Fixed a bug in :class:`neighbors.KDTree` and :class:`neighbors.BallTree`
  where pickled tree objects would change their type to the super class
  ``BinaryTree``.
  :issue:`11774` by :user:`Nicolas Hug <NicolasHug>`.

:mod:`sklearn.neural_network`
.............................

- |Feature| Add ``n_iter_no_change`` parameter in
  ``neural_network.BaseMultilayerPerceptron``,
  :class:`neural_network.MLPRegressor`, and
  :class:`neural_network.MLPClassifier` to give control over the
  maximum number of epochs to not meet ``tol`` improvement.
  :issue:`9456` by :user:`Nicholas Nadeau <nnadeau>`.

- |Fix| Fixed a bug in ``neural_network.BaseMultilayerPerceptron``,
  :class:`neural_network.MLPRegressor`, and
  :class:`neural_network.MLPClassifier` with the new ``n_iter_no_change``
  parameter, now at 10 from the previously hardcoded 2.
  :issue:`9456` by :user:`Nicholas Nadeau <nnadeau>`.

- |Fix| Fixed a bug in :class:`neural_network.MLPRegressor` where fitting
  quit unexpectedly early due to local minima or fluctuations.
  :issue:`9456` by :user:`Nicholas Nadeau <nnadeau>`.

:mod:`sklearn.pipeline`
.......................

- |Feature| The ``predict`` method of :class:`pipeline.Pipeline` now passes
  keyword arguments on to the pipeline's last estimator, enabling the use of
  parameters such as ``return_std`` in a pipeline, with caution.
  :issue:`9304` by :user:`Breno Freitas <brenolf>`.

- |API| :class:`pipeline.FeatureUnion` now supports ``'drop'`` as a transformer
  to drop features. :issue:`11144` by :user:`Thomas Fan <thomasjpfan>`.

:mod:`sklearn.preprocessing`
............................

- |MajorFeature| Expanded :class:`preprocessing.OneHotEncoder` to allow to
  encode categorical string features as a numeric
  array using a one-hot (or dummy) encoding scheme, and added
  :class:`preprocessing.OrdinalEncoder` to convert to ordinal integers. Those
  two classes now handle encoding of all feature types (also handles
  string-valued features) and derive the categories based on the unique values
  in the features instead of the maximum value in the features. :issue:`9151`
  and :issue:`10521` by :user:`Vighnesh Birodkar <vighneshbirodkar>` and
  `Joris Van den Bossche`_.

- |MajorFeature| Added :class:`preprocessing.KBinsDiscretizer` for turning
  continuous features into categorical or one-hot encoded
  features. :issue:`7668`, :issue:`9647`, :issue:`10195`,
  :issue:`10192`, :issue:`11272`, :issue:`11467` and :issue:`11505`
  by :user:`Henry Lin <hlin117>`, `Hanmin Qin`_,
  `Tom Dupre la Tour`_ and :user:`Giovanni Giuseppe Costa <ggc87>`.

- |MajorFeature| Added :class:`preprocessing.PowerTransformer`, which
  implements the Yeo-Johnson and Box-Cox power transformations. Power
  transformations try to find a set of feature-wise parametric transformations
  to approximately map data to a Gaussian distribution centered at zero and
  with unit variance. This is useful as a variance-stabilizing transformation
  in situations where normality and homoscedasticity are desirable.
  :issue:`10210` by :user:`Eric Chang <chang>` and :user:`Maniteja
  Nandana <maniteja123>`, and :issue:`11520` by :user:`Nicolas Hug
  <nicolashug>`.

- |MajorFeature| NaN values are ignored and handled in the following
  preprocessing methods:
  :class:`preprocessing.MaxAbsScaler`,
  :class:`preprocessing.MinMaxScaler`,
  :class:`preprocessing.RobustScaler`,
  :class:`preprocessing.StandardScaler`,
  :class:`preprocessing.PowerTransformer`,
  :class:`preprocessing.QuantileTransformer` classes and
  :func:`preprocessing.maxabs_scale`,
  :func:`preprocessing.minmax_scale`,
  :func:`preprocessing.robust_scale`,
  :func:`preprocessing.scale`,
  :func:`preprocessing.power_transform`,
  :func:`preprocessing.quantile_transform` functions, respectively addressed
  in issues :issue:`11011`, :issue:`11005`, :issue:`11308`, :issue:`11206`,
  :issue:`11306`, and :issue:`10437`.
  By :user:`Lucija Gregov <LucijaGregov>` and
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Feature| :class:`preprocessing.PolynomialFeatures` now supports sparse
  input. :issue:`10452` by :user:`Aman Dalmia <dalmia>` and `Joel Nothman`_.

- |Feature| :class:`preprocessing.RobustScaler` and
  :func:`preprocessing.robust_scale` can be fitted using sparse matrices.
  :issue:`11308` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Feature| :class:`preprocessing.OneHotEncoder` now supports the
  ``get_feature_names`` method to obtain the transformed feature names.
  :issue:`10181` by :user:`Nirvan Anjirbag <Nirvan101>` and
  `Joris Van den Bossche`_.

- |Feature| A parameter ``check_inverse`` was added to
  :class:`preprocessing.FunctionTransformer` to ensure that ``func`` and
  ``inverse_func`` are the inverse of each other.
  :issue:`9399` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Feature| The ``transform`` method of
  :class:`preprocessing.MultiLabelBinarizer` now ignores any unknown classes.
  A warning is raised stating the unknown classes found, which are ignored.
  :issue:`10913` by :user:`Rodrigo Agundez <rragundez>`.

- |Fix| Fixed bugs in :class:`preprocessing.LabelEncoder` which would
  sometimes throw errors when ``transform`` or ``inverse_transform`` was
  called with empty arrays.
  :issue:`10458` by :user:`Mayur Kulkarni <maykulkarni>`.

- |Fix| Fix ValueError in :class:`preprocessing.LabelEncoder` when using
  ``inverse_transform`` on unseen labels. :issue:`9816` by
  :user:`Charlie Newey <newey01c>`.

- |Fix| Fix bug in :class:`preprocessing.OneHotEncoder` which discarded the
  ``dtype`` when returning a sparse matrix output.
  :issue:`11042` by :user:`Daniel Morales <DanielMorales9>`.

- |Fix| Fix ``fit`` and ``partial_fit`` in
  :class:`preprocessing.StandardScaler` in
  the rare case when ``with_mean=False`` and ``with_std=False``, which was
  crashing when calling ``fit`` more than once and giving inconsistent results
  for ``mean_`` depending on whether the input was a sparse or a dense matrix.
  ``mean_`` will be set to ``None`` with both sparse and dense inputs.
  ``n_samples_seen_`` will be also reported for both input types.
  :issue:`11235` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| Deprecate ``n_values`` and ``categorical_features`` parameters and
  ``active_features_``, ``feature_indices_`` and ``n_values_`` attributes
  of :class:`preprocessing.OneHotEncoder`. The ``n_values`` parameter can be
  replaced with the new ``categories`` parameter, and the attributes with the
  new ``categories_`` attribute. Selecting the categorical features with
  the ``categorical_features`` parameter is now better supported using the
  :class:`compose.ColumnTransformer`.
  :issue:`10521` by `Joris Van den Bossche`_.

- |API| Deprecate ``preprocessing.Imputer`` and move
  the corresponding module to :class:`impute.SimpleImputer`.
  :issue:`9726` by :user:`Kumar Ashutosh <thechargedneutron>`.

- |API| The ``axis`` parameter that was in
  ``preprocessing.Imputer`` is no longer present in
  :class:`impute.SimpleImputer`. The behavior is equivalent
  to ``axis=0`` (impute along columns). Row-wise
  imputation can be performed with FunctionTransformer
  (e.g., ``FunctionTransformer(lambda X: SimpleImputer().fit_transform(X.T).T)``).
  :issue:`10829` by :user:`Guillaume Lemaitre <glemaitre>` and
  :user:`Gilberto Olimpio <gilbertoolimpio>`.

- |API| The NaN marker for missing values has been changed
  between ``preprocessing.Imputer`` and
  :class:`impute.SimpleImputer`. ``missing_values='NaN'`` should now be
  ``missing_values=np.nan``. :issue:`11211` by
  :user:`Jeremie du Boisberranger <jeremiedbb>`.

- |API| In :class:`preprocessing.FunctionTransformer`, the default of
  ``validate`` will change from ``True`` to ``False`` in 0.22.
  :issue:`10655` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.svm`
..................

- |Fix| Fixed a bug in :class:`svm.SVC` where, when the argument ``kernel`` is
  unicode in Python 2, the ``predict_proba`` method was raising an
  unexpected TypeError given dense inputs.
  :issue:`10412` by :user:`Jiongyan Zhang <qmick>`.

- |API| Deprecate ``random_state`` parameter in :class:`svm.OneClassSVM` as
  the underlying implementation is not random.
  :issue:`9497` by :user:`Albert Thomas <albertcthomas>`.

- |API| The default value of the ``gamma`` parameter of :class:`svm.SVC`,
  :class:`svm.NuSVC`, :class:`svm.SVR`, :class:`svm.NuSVR`, and
  :class:`svm.OneClassSVM` will change from ``'auto'`` to ``'scale'`` in
  version 0.22 to account better for unscaled features. :issue:`8361` by
  :user:`Gaurav Dhingra <gxyd>` and :user:`Ting Neo <neokt>`.

:mod:`sklearn.tree`
...................

- |Enhancement| Although private (and hence not assured API stability),
  ``tree._criterion.ClassificationCriterion`` and
  ``tree._criterion.RegressionCriterion`` may now be cimported and
  extended. :issue:`10325` by :user:`Camil Staps <camilstaps>`.

- |Fix| Fixed a bug in ``tree.BaseDecisionTree`` with ``splitter="best"``
  where the split threshold could become infinite when values in X were
  near-infinite. :issue:`10536` by :user:`Jonathan Ohayon <Johayon>`.

- |Fix| Fixed a bug in ``tree.MAE`` to ensure sample weights are being
  used during the calculation of tree MAE impurity. Previous behaviour could
  cause suboptimal splits to be chosen since the impurity calculation
  considered all samples to be of equal weight importance.
  :issue:`11464` by :user:`John Stott <JohnStott>`.

:mod:`sklearn.utils`
....................

- |Feature| :func:`utils.check_array` and :func:`utils.check_X_y` now have
  ``accept_large_sparse`` to control whether scipy.sparse matrices with 64-bit
  indices should be rejected.
  :issue:`11327` by :user:`Karan Dhingra <kdhingra307>` and `Joel Nothman`_.

- |Efficiency|
  |Fix| Avoid copying the data in :func:`utils.check_array` when
  the input data is a memmap (and ``copy=False``). :issue:`10663` by
  :user:`Arthur Mensch <arthurmensch>` and :user:`Loïc Estève <lesteve>`.

- |API| :func:`utils.check_array` yields a ``FutureWarning`` indicating
  that arrays of bytes/strings will be interpreted as decimal numbers
  beginning in version 0.22. :issue:`10229` by :user:`Ryan Lee <rtlee9>`.

Multiple modules
................

- |Feature| |API| More consistent outlier detection API:
  Add a ``score_samples`` method in :class:`svm.OneClassSVM`,
  :class:`ensemble.IsolationForest`, :class:`neighbors.LocalOutlierFactor`,
  :class:`covariance.EllipticEnvelope`. It allows to access raw score
  functions from original papers. A new ``offset_`` parameter allows to link
  the ``score_samples`` and ``decision_function`` methods.
  The ``contamination`` parameter of :class:`ensemble.IsolationForest` and
  :class:`neighbors.LocalOutlierFactor` ``decision_function`` methods is used
  to define this ``offset_`` such that outliers (resp. inliers) have negative
  (resp. positive) ``decision_function`` values. By default, ``contamination``
  is kept unchanged at 0.1 for a deprecation period. In 0.22, it will be set
  to "auto", thus using method-specific score offsets.
  In the :class:`covariance.EllipticEnvelope` ``decision_function`` method,
  the ``raw_values`` parameter is deprecated as the shifted Mahalanobis
  distance will always be returned in 0.22. :issue:`9015` by `Nicolas Goix`_.

- |Feature| |API| A ``behaviour`` parameter has been introduced in
  :class:`ensemble.IsolationForest` to ensure backward compatibility.
  In the old behaviour, the ``decision_function`` is independent of the
  ``contamination`` parameter. A threshold attribute depending on the
  ``contamination`` parameter is thus used.
  In the new behaviour the ``decision_function`` is dependent on the
  ``contamination`` parameter, in such a way that 0 becomes its natural
  threshold to detect outliers.
  Setting behaviour to 'old' is deprecated and will not be possible in version
  0.22. Besides, the behaviour parameter will be removed in 0.24.
  :issue:`11553` by `Nicolas Goix`_.

- |API| Added a convergence warning to :class:`svm.LinearSVC` and
  :class:`linear_model.LogisticRegression` when ``verbose`` is set to 0.
  :issue:`10881` by :user:`Alexandre Sevin <AlexandreSev>`.

- |API| Changed warning type from ``UserWarning`` to
  :class:`exceptions.ConvergenceWarning` for failing convergence in
  ``linear_model.logistic_regression_path``,
  :class:`linear_model.RANSACRegressor`, :func:`linear_model.ridge_regression`,
  :class:`gaussian_process.GaussianProcessRegressor`,
  :class:`gaussian_process.GaussianProcessClassifier`,
  :func:`decomposition.fastica`, :class:`cross_decomposition.PLSCanonical`,
  :class:`cluster.AffinityPropagation`, and :class:`cluster.Birch`.
  :issue:`10306` by :user:`Jonathan Siebert <jotasi>`.

Miscellaneous
.............

- |MajorFeature| A new configuration parameter, ``working_memory``, was added
  to control memory consumption limits in chunked operations, such as the new
  :func:`metrics.pairwise_distances_chunked`. See :ref:`working_memory`.
  :issue:`10280` by `Joel Nothman`_ and :user:`Aman Dalmia <dalmia>`.

- |Feature| The version of :mod:`joblib` bundled with Scikit-learn is now
  0.12. This uses a new default multiprocessing implementation, named `loky
  <https://github.com/tomMoral/loky>`_. While this may incur some memory and
  communication overhead, it should provide greater cross-platform stability
  than relying on Python standard library multiprocessing. :issue:`11741` by
  the Joblib developers, especially :user:`Thomas Moreau <tomMoral>` and
  `Olivier Grisel`_.

- |Feature| An environment variable to use the site joblib instead of the
  vendored one was added (:ref:`environment_variable`). The main API of joblib
  is now exposed in :mod:`sklearn.utils`.
  :issue:`11166` by `Gael Varoquaux`_.
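The new ``working_memory`` setting and ``pairwise_distances_chunked`` above
can be sketched as follows (a minimal example assuming a current scikit-learn
release; the 1 MB limit here is arbitrary):

```python
import numpy as np
import sklearn
from sklearn.metrics import pairwise_distances, pairwise_distances_chunked

rng = np.random.RandomState(0)
X = rng.rand(100, 3)

# Limit chunked operations to roughly 1 MB of temporary memory; each
# yielded block is a contiguous slice of rows of the distance matrix.
with sklearn.config_context(working_memory=1):
    chunks = list(pairwise_distances_chunked(X))

# Stacking the chunks recovers the full pairwise distance matrix.
D = np.vstack(chunks)
assert D.shape == (100, 100)
assert np.allclose(D, pairwise_distances(X))
```

Using ``sklearn.set_config(working_memory=...)`` instead of the context
manager applies the limit globally.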
- |Feature| Add almost complete PyPy 3 support. Known unsupported
  functionalities are :func:`datasets.load_svmlight_file`,
  :class:`feature_extraction.FeatureHasher` and
  :class:`feature_extraction.text.HashingVectorizer`. For running on PyPy,
  PyPy3-v5.10+, Numpy 1.14.0+ and scipy 1.1.0+ are required.
  :issue:`11010` by :user:`Ronan Lamy <rlamy>` and `Roman Yurchak`_.

- |Feature| A utility method :func:`sklearn.show_versions` was added to
  print out information relevant for debugging. It includes the user system,
  the Python executable, the version of the main libraries and BLAS binding
  information. :issue:`11596` by :user:`Alexandre Boucaud <aboucaud>`.

- |Fix| Fixed a bug when setting parameters on a meta-estimator, involving
  both a wrapped estimator and its parameter. :issue:`9999` by
  :user:`Marcus Voss <marcus-voss>` and `Joel Nothman`_.

- |Fix| Fixed a bug where calling :func:`sklearn.base.clone` was not thread
  safe and could result in a "pop from empty list" error. :issue:`9569`
  by `Andreas Müller`_.

- |API| The default value of ``n_jobs`` is changed from ``1`` to ``None`` in
  all related functions and classes. ``n_jobs=None`` means *unset*. It will
  generally be interpreted as ``n_jobs=1``, unless the current
  ``joblib.Parallel`` backend context specifies otherwise. See
  :term:`Glossary <n_jobs>` for additional information. Note that this change
  happens immediately (i.e., without a deprecation cycle).
  :issue:`11741` by `Olivier Grisel`_.

- |Fix| Fixed a bug in validation helpers where passing a Dask DataFrame
  results in an error. :issue:`12462` by :user:`Zachariah Miller <zwmiller>`.

Changes to estimator checks
---------------------------

These changes mostly affect library developers.

- Checks for transformers now apply if the estimator implements
  :term:`transform`, regardless of whether it inherits from
  :class:`sklearn.base.TransformerMixin`. :issue:`10474` by `Joel Nothman`_.

- Classifiers are now checked for
  consistency between :term:`decision_function`
  and categorical predictions.
  :issue:`10500` by :user:`Narine Kokhlikyan <NarineK>`.

- Allow tests in :func:`utils.estimator_checks.check_estimator` to test
  functions that accept pairwise data.
  :issue:`9701` by :user:`Kyle Johnson <gkjohns>`.

- Allow :func:`utils.estimator_checks.check_estimator` to check that there is
  no private settings apart from parameters during estimator initialization.
  :issue:`9378` by :user:`Herilalaina Rakotoarison <herilalaina>`.

- The set of checks in :func:`utils.estimator_checks.check_estimator` now
  includes a ``check_set_params`` test which checks that ``set_params`` is
  equivalent to passing parameters in ``__init__`` and warns if it encounters
  parameter validation. :issue:`7738` by
  :user:`Alvin Chiang <absolutelyNoWarranty>`.

- Add invariance tests for clustering metrics. :issue:`8102` by
  :user:`Ankita Sinha <anki08>` and :user:`Guillaume Lemaitre <glemaitre>`.

- Add ``check_methods_subset_invariance`` to
  :func:`utils.estimator_checks.check_estimator`, which checks that estimator
  methods are invariant if applied to a data subset.
  :issue:`10428` by :user:`Jonathan Ohayon <Johayon>`.

- Add tests in :func:`utils.estimator_checks.check_estimator` to check that
  an estimator can handle read-only memmap input data. :issue:`10663` by
  :user:`Arthur Mensch <arthurmensch>` and :user:`Loïc Estève <lesteve>`.

- ``check_sample_weights_pandas_series`` now uses 8 rather than 6 samples
  to accommodate for the default number of clusters in
  :class:`cluster.KMeans`. :issue:`10933` by
  :user:`Johannes Hansen <jnhansen>`.

- Estimators are now checked for whether ``sample_weight=None`` equates to
  ``sample_weight=np.ones(...)``.
  :issue:`11558` by :user:`Sergul Aydore <sergulaydore>`.

Code and Documentation Contributors
-----------------------------------

Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.19,
including 211217613, Aarshay Jain, absolutelyNoWarranty, Adam Greenhall,
Adam Kleczewski, Adam Richie-Halford, adelr, AdityaDaflapurkar, Adrin Jalali,
Aidan Fitzgerald, aishgrt1, Akash Shivram, Alan Liddell, Alan Yee,
Albert Thomas, Alexander Lenail, Alexander N, Alexandre Boucaud,
Alexandre Gramfort, Alexandre Sevin, Alex Egg, Alvaro Perez-Diaz, Amanda,
Aman Dalmia, Andreas Bjerre-Nielsen, Andreas Mueller, Andrew Peng,
Angus Williams, Aniruddha Dave, annaayzenshtat, Anthony Gitter,
Antonio Quinonez, Anubhav Marwaha, Arik Pamnani, Arthur Ozga, Artiem K,
Arunava, Arya McCarthy, Attractadore, Aurélien Bellet, Aurélien Geron,
Ayush Gupta, Balakumaran Manoharan, Bangda Sun, Barry Hart, Bastian Venthur,
Ben Lawson, Benn Roth, Breno Freitas, Brent Yi, brett koonce, Caio Oliveira,
Camil Staps, cclauss, Chady Kamar, Charlie Brummitt, Charlie Newey, chris,
Chris, Chris Catalfo, Chris Foster, Chris Holdgraf, Christian Braune,
Christian Hirsch, Christian Hogan, Christopher Jenness, Clement Joudet, cnx,
cwitte, Dallas Card, Dan Barkhorn, Daniel, Daniel Ferreira, Daniel Gomez,
Daniel Klevebring, Danielle Shwed, Daniel Mohns, Danil Baibak,
Darius Morawiec, David Beach, David Burns, David Kirkby, David Nicholson,
David Pickup, Derek, Didi Bar-Zev, diegodlh, Dillon Gardner,
Dillon Niederhut, dilutedsauce, dlovell, Dmitry Mottl, Dmitry Petrov,
Dor Cohen, Douglas Duhaime, Ekaterina Tuzova, Eric Chang, Eric Dean Sanchez,
Erich Schubert, Eunji, Fang-Chieh Chou, FarahSaeed, felix, Félix Raimundo,
fenx, filipj8, FrankHui, Franz Wompner, Freija Descamps, frsi,
Gabriele Calvo, Gael Varoquaux, Gaurav Dhingra, Georgi Peev, Gil Forsyth,
Giovanni Giuseppe Costa, gkevinyen5418, goncalo-rodrigues, Gryllos Prokopis,
Guillaume Lemaitre, Guillaume "Vermeille" Sanchez, Gustavo De Mari Pereira,
hakaa1, Hanmin Qin, Henry Lin, Hong, Honghe, Hossein Pourbozorg, Hristo,
Hunan Rostomyan, iampat, Ivan PANICO, Jaewon Chung, Jake VanderPlas,
jakirkham, James Bourbeau, James Malcolm, Jamie Cox, Jan Koch, Jan Margeta,
Jan Schlüter, janvanrijn, Jason Wolosonovich, JC Liu, Jeb Bearer, jeremiedbb,
Jimmy Wan, Jinkun Wang, Jiongyan Zhang, jjabl, jkleint, Joan Massich,
Joël Billaud, Joel Nothman, Johannes Hansen, JohnStott, Jonatan Samoocha,
Jonathan Ohayon, Jörg Döpfert, Joris Van den Bossche,
Jose Perez-Parras Toledano, josephsalmon, jotasi, jschendel,
Julian Kuhlmann, Julien Chaumond, julietcl, Justin Shenk, Karl F,
Kasper Primdal Lauritzen, Katrin Leinweber, Kirill, ksemb, Kuai Yu,
Kumar Ashutosh, Kyeongpil Kang, Kye Taylor, kyledrogo, Leland McInnes,
Léo DS, Liam Geron, Liutong Zhou, Lizao Li, lkjcalc, Loic Esteve, louib,
Luciano Viola, Lucija Gregov, Luis Osa, Luis Pedro Coelho, Luke M Craig,
Luke Persola, Mabel, Mabel Villalba, Maniteja Nandana, MarkIwanchyshyn,
Mark Roth, Markus Müller, MarsGuy, Martin Gubri, martin-hahn, martin-kokos,
mathurinm, Matthias Feurer, Max Copeland, Mayur Kulkarni, Meghann Agarwal,
Melanie Goetz, Michael A. Alcorn, Minghui Liu, Ming Li, Minh Le,
Mohamed Ali Jamaoui, Mohamed Maskani, Mohammad Shahebaz, Muayyad Alsadi,
Nabarun Pal, Nagarjuna Kumar, Naoya Kanai, Narendran Santhanam, NarineK,
Nathaniel Saul, Nathan Suh, Nicholas Nadeau P.Eng. AVS, Nick Hoh,
Nicolas Goix, Nicolas Hug, Nicolau Werneck, nielsenmarkus11, Nihar Sheth,
Nikita Titov, Nilesh Kevlani, Nirvan Anjirbag, notmatthancock, nzw,
Oleksandr Pavlyk, oliblum90, Oliver Rausch, Olivier Grisel, Oren Milman,
Osaid Rehman Nasir, pasbi, Patrick Fernandes, Patrick Olden, Paul Paczuski,
Pedro Morales, Peter, Peter St. John, pierreablin, pietruh,
Pinaki Nath Chowdhury, Piotr Szymański, Pradeep Reddy Raamana,
Pravar D Mahajan, pravarmahajan, QingYing Chen, Raghav RV, Rajendra arora,
RAKOTOARISON Herilalaina, Rameshwar Bhaskaran, RankyLau, Rasul Kerimov,
Reiichiro Nakano, Rob, Roman Kosobrodov, Roman Yurchak, Ronan Lamy,
rragundez, Rüdiger Busche, Ryan, Sachin Kelkar, Sagnik Bhattacharya,
Sailesh Choyal, Sam Radhakrishnan, Sam Steingold, Samuel Bell,
Samuel O. Ronsin, Saqib Nizam Shamsi, SATISH J, Saurabh Gupta,
Scott Gigante, Sebastian Flennerhag, Sebastian Raschka, Sebastien Dubois,
Sébastien Lerique, Sebastin Santy, Sergey Feldman, Sergey Melderis,
Sergul Aydore, Shahebaz, Shalil Awaley, Shangwu Yao, Sharad Vijalapuram,
Sharan Yalburgi, shenhanc78, Shivam Rastogi, Shu Haoran, siftikha,
Sinclert Pérez, SolutusImmensus, Somya Anand, srajan paliwal,
Sriharsha Hatwar, Sri Krishna, Stefan van der Walt, Stephen McDowell,
Steven Brown, syonekura, Taehoon Lee, Takanori Hayashi, tarcusx,
Taylor G Smith, theriley106, Thomas, Thomas Fan, Thomas Heavey,
Tobias Madsen, tobycheese, Tom Augspurger, Tom Dupré la Tour, Tommy,
Trevor Stephens, Trishnendu Ghorai, Tulio Casagrande, twosigmajab,
Umar Farouk Umar, Urvang Patel, Utkarsh Upadhyay, Vadim Markovtsev,
Varun Agrawal, Vathsala Achar, Vilhelm von Ehrenheim, Vinayak Mehta, Vinit,
Vinod Kumar L, Viraj Mavani, Viraj Navkal, Vivek Kumar, Vlad Niculae,
vqean3, Vrishank Bhardwaj, vufg, wallygauze, Warut Vijitbenjaronk,
wdevazelhes, Wenhao Zhang, Wes Barnett, Will, William de Vazelhes,
Will Rosenfeld, Xin Xiong, Yiming (Paul) Li, ymazari, Yufeng,
Zach Griffith, Zé Vinícius, Zhenqing Hu, Zhiqing Xiao, Zijie (ZJ) Poh"}
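The estimator-check battery described in the section above is driven by `check_estimator`; a minimal sketch running it against a built-in transformer (note that on recent scikit-learn versions the function takes an estimator *instance*):

```python
from sklearn.preprocessing import StandardScaler
from sklearn.utils.estimator_checks import check_estimator

# check_estimator runs the full API test battery (set_params round-trips,
# read-only memmap inputs, sample_weight handling, ...) and raises an
# AssertionError on the first failing check.
check_estimator(StandardScaler())
print("all checks passed")
```

Library developers typically call this from their test suite so that any new check added in a scikit-learn release is picked up automatically.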
{"questions":"scikit-learn sklearn contributors rst changeloglegend inc Version 0 21","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n============\nVersion 0.21\n============\n\n.. include:: changelog_legend.inc\n\n.. _changes_0_21_3:\n\nVersion 0.21.3\n==============\n\n**July 30, 2019**\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- The v0.20.0 release notes failed to mention a backwards incompatibility in\n  :func:`metrics.make_scorer` when `needs_proba=True` and `y_true` is binary.\n  Now, the scorer function is supposed to accept a 1D `y_pred` (i.e.,\n  probability of the positive class, shape `(n_samples,)`), instead of a 2D\n  `y_pred` (i.e., shape `(n_samples, 2)`).\n\nChangelog\n---------\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans` where computation with\n  `init='random'` was single threaded for `n_jobs > 1` or `n_jobs = -1`.\n  :pr:`12955` by :user:`Prabakaran Kumaresshan <nixphix>`.\n\n- |Fix| Fixed a bug in :class:`cluster.OPTICS` where users were unable to pass\n  float `min_samples` and `min_cluster_size`. :pr:`14496` by\n  :user:`Fabian Klopfer <someusername1>`\n  and :user:`Hanmin Qin <qinhanmin2014>`.\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans` where KMeans++ initialisation\n  could rarely result in an IndexError. 
:issue:`11756` by `Joel Nothman`_.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| Fixed an issue in :class:`compose.ColumnTransformer` where using\n  DataFrames whose column order differs between ``fit`` and\n  ``transform`` could lead to silently passing incorrect columns to the\n  ``remainder`` transformer.\n  :pr:`14237` by :user:`Andreas Schuderer <schuderer>`.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Fix| :func:`datasets.fetch_california_housing`,\n  :func:`datasets.fetch_covtype`,\n  :func:`datasets.fetch_kddcup99`, :func:`datasets.fetch_olivetti_faces`,\n  :func:`datasets.fetch_rcv1`, and :func:`datasets.fetch_species_distributions`\n  try to persist the previously cached data using the new ``joblib`` if the cached\n  data was persisted using the deprecated ``sklearn.externals.joblib``. This\n  behavior is set to be deprecated and removed in v0.23.\n  :pr:`14197` by `Adrin Jalali`_.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Fix| Fix zero division error in :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor`.\n  :pr:`14024` by :user:`Nicolas Hug <NicolasHug>`.\n\n:mod:`sklearn.impute`\n.....................\n\n- |Fix| Fixed a bug in :class:`impute.SimpleImputer` and\n  :class:`impute.IterativeImputer` so that no errors are thrown when there are\n  missing values in training data. :pr:`13974` by :user:`Frank Hoang <fhoang7>`.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |Fix| Fixed a bug in `inspection.plot_partial_dependence` where\n  ``target`` parameter was not being taken into account for multiclass problems.\n  :pr:`14393` by :user:`Guillem G. Subies <guillemgsubies>`.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where\n  ``refit=False`` would fail depending on the ``'multiclass'`` and\n  ``'penalty'`` parameters (regression introduced in 0.21). 
:pr:`14087` by\n  `Nicolas Hug`_.\n\n- |Fix| Compatibility fix for :class:`linear_model.ARDRegression` and\n  Scipy>=1.3.0. Adapts to upstream changes to the default `pinvh` cutoff\n  threshold which otherwise results in poor accuracy in some cases.\n  :pr:`14067` by :user:`Tim Staley <timstaley>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Fix| Fixed a bug in :class:`neighbors.NeighborhoodComponentsAnalysis` where\n  the validation of initial parameters ``n_components``, ``max_iter`` and\n  ``tol`` required too strict types. :pr:`14092` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.tree`\n...................\n\n- |Fix| Fixed bug in :func:`tree.export_text` when the tree has one feature and\n  a single feature name is passed in. :pr:`14053` by `Thomas Fan`.\n\n- |Fix| Fixed an issue with :func:`tree.plot_tree` where it displayed\n  entropy calculations even for `gini` criterion in DecisionTreeClassifiers.\n  :pr:`13947` by :user:`Frank Hoang <fhoang7>`.\n\n.. _changes_0_21_2:\n\nVersion 0.21.2\n==============\n\n**24 May 2019**\n\nChangelog\n---------\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Fixed a bug in :class:`cross_decomposition.CCA` improving numerical\n  stability when `Y` is close to zero. :pr:`13903` by `Thomas Fan`_.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fixed a bug in :func:`metrics.pairwise.euclidean_distances` where a\n  part of the distance matrix was left un-instanciated for sufficiently large\n  float32 datasets (regression introduced in 0.21). :pr:`13910` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Fix| Fixed a bug in :class:`preprocessing.OneHotEncoder` where the new\n  `drop` parameter was not reflected in `get_feature_names`. 
:pr:`13894`\n  by :user:`James Myatt <jamesmyatt>`.\n\n\n`sklearn.utils.sparsefuncs`\n...........................\n\n- |Fix| Fixed a bug where `min_max_axis` would fail on 32-bit systems\n  for certain large inputs. This affects :class:`preprocessing.MaxAbsScaler`,\n  :func:`preprocessing.normalize` and :class:`preprocessing.LabelBinarizer`.\n  :pr:`13741` by :user:`Roddy MacSween <rlms>`.\n\n.. _changes_0_21_1:\n\nVersion 0.21.1\n==============\n\n**17 May 2019**\n\nThis is a bug-fix release to primarily resolve some packaging issues in version\n0.21.0. It also includes minor documentation improvements and some bug fixes.\n\nChangelog\n---------\n\n:mod:`sklearn.inspection`\n.........................\n\n- |Fix| Fixed a bug in :func:`inspection.partial_dependence` to only check\n  classifier and not regressor for the multiclass-multioutput case.\n  :pr:`14309` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fixed a bug in :class:`metrics.pairwise_distances` where it would raise\n  ``AttributeError`` for boolean metrics when ``X`` had a boolean dtype and\n  ``Y == None``.\n  :issue:`13864` by :user:`Paresh Mathur <rick2047>`.\n\n- |Fix| Fixed two bugs in :class:`metrics.pairwise_distances` when\n  ``n_jobs > 1``. First it used to return a distance matrix with same dtype as\n  input, even for integer dtype. Then the diagonal was not zeros for euclidean\n  metric when ``Y`` is ``X``. :issue:`13877` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Fix| Fixed a bug in :class:`neighbors.KernelDensity` which could not be\n  restored from a pickle if ``sample_weight`` had been used.\n  :issue:`13772` by :user:`Aditya Vyas <aditya1702>`.\n\n\n.. 
_changes_0_21:\n\nVersion 0.21.0\n==============\n\n**May 2019**\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- :class:`discriminant_analysis.LinearDiscriminantAnalysis` for multiclass\n  classification. |Fix|\n- :class:`discriminant_analysis.LinearDiscriminantAnalysis` with 'eigen'\n  solver. |Fix|\n- :class:`linear_model.BayesianRidge` |Fix|\n- Decision trees and derived ensembles when both `max_depth` and\n  `max_leaf_nodes` are set. |Fix|\n- :class:`linear_model.LogisticRegression` and\n  :class:`linear_model.LogisticRegressionCV` with 'saga' solver. |Fix|\n- :class:`ensemble.GradientBoostingClassifier` |Fix|\n- :class:`sklearn.feature_extraction.text.HashingVectorizer`,\n  :class:`sklearn.feature_extraction.text.TfidfVectorizer`, and\n  :class:`sklearn.feature_extraction.text.CountVectorizer` |Fix|\n- :class:`neural_network.MLPClassifier` |Fix|\n- :func:`svm.SVC.decision_function` and\n  :func:`multiclass.OneVsOneClassifier.decision_function`. |Fix|\n- :class:`linear_model.SGDClassifier` and any derived classifiers. |Fix|\n- Any model using the `linear_model._sag.sag_solver` function with a `0`\n  seed, including :class:`linear_model.LogisticRegression`,\n  :class:`linear_model.LogisticRegressionCV`, :class:`linear_model.Ridge`,\n  and :class:`linear_model.RidgeCV` with 'sag' solver. |Fix|\n- :class:`linear_model.RidgeCV` when using leave-one-out cross-validation\n  with sparse inputs. 
|Fix|\n\n\nDetails are listed in the changelog below.\n\n(While we are trying to better inform users by providing this information, we\ncannot assure that this list is complete.)\n\nKnown Major Bugs\n----------------\n\n* The default `max_iter` for :class:`linear_model.LogisticRegression` is too\n  small for many solvers given the default `tol`. In particular, we\n  accidentally changed the default `max_iter` for the liblinear solver from\n  1000 to 100 iterations in :pr:`3591` released in version 0.16.\n  In a future release we hope to choose better default `max_iter` and `tol`\n  heuristically depending on the solver (see :pr:`13317`).\n\nChangelog\n---------\n\nSupport for Python 3.4 and below has been officially dropped.\n\n..\n    Entries should be grouped by module (in alphabetic order) and prefixed with\n    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,\n    |Fix| or |API| (see whats_new.rst for descriptions).\n    Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).\n    Changes not specific to a module should be listed under *Multiple Modules*\n    or *Miscellaneous*.\n    Entries should end with:\n    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.\n    where 123456 is the *pull request* number, not the issue number.\n\n:mod:`sklearn.base`\n...................\n\n- |API| The R2 score used when calling ``score`` on a regressor will use\n  ``multioutput='uniform_average'`` from version 0.23 to keep consistent with\n  :func:`metrics.r2_score`. 
This will influence the ``score`` method of all\n  the multioutput regressors (except for\n  :class:`multioutput.MultiOutputRegressor`).\n  :pr:`13157` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n:mod:`sklearn.calibration`\n..........................\n\n- |Enhancement| Added support to bin the data passed into\n  :class:`calibration.calibration_curve` by quantiles instead of uniformly\n  between 0 and 1.\n  :pr:`13086` by :user:`Scott Cole <srcole>`.\n\n- |Enhancement| Allow n-dimensional arrays as input for\n  `calibration.CalibratedClassifierCV`. :pr:`13485` by\n  :user:`William de Vazelhes <wdevazelhes>`.\n\n:mod:`sklearn.cluster`\n......................\n\n- |MajorFeature| A new clustering algorithm: :class:`cluster.OPTICS`: an\n  algorithm related to :class:`cluster.DBSCAN`, that has hyperparameters easier\n  to set and that scales better, by :user:`Shane <espg>`,\n  `Adrin Jalali`_, :user:`Erich Schubert <kno10>`, `Hanmin Qin`_, and\n  :user:`Assia Benbihi <assiaben>`.\n\n- |Fix| Fixed a bug where :class:`cluster.Birch` could occasionally raise an\n  AttributeError. :pr:`13651` by `Joel Nothman`_.\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans` where empty clusters weren't\n  correctly relocated when using sample weights. 
:pr:`13486` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |API| The ``n_components_`` attribute in :class:`cluster.AgglomerativeClustering`\n  and :class:`cluster.FeatureAgglomeration` has been renamed to\n  ``n_connected_components_``.\n  :pr:`13427` by :user:`Stephane Couvreur <scouvreur>`.\n\n- |Enhancement| :class:`cluster.AgglomerativeClustering` and\n  :class:`cluster.FeatureAgglomeration` now accept a ``distance_threshold``\n  parameter which can be used to find the clusters instead of ``n_clusters``.\n  :issue:`9069` by :user:`Vathsala Achar <VathsalaAchar>` and `Adrin Jalali`_.\n\n:mod:`sklearn.compose`\n......................\n\n- |API| :class:`compose.ColumnTransformer` is no longer an experimental\n  feature. :pr:`13835` by :user:`Hanmin Qin <qinhanmin2014>`.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Fix| Added support for 64-bit group IDs and pointers in SVMLight files.\n  :pr:`10727` by :user:`Bryan K Woods <bryan-woods>`.\n\n- |Fix| :func:`datasets.load_sample_images` returns images with a deterministic\n  order. 
:pr:`13250` by :user:`Thomas Fan <thomasjpfan>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Enhancement| :class:`decomposition.KernelPCA` now has deterministic output\n  (resolved sign ambiguity in eigenvalue decomposition of the kernel matrix).\n  :pr:`13241` by :user:`Aur\u00e9lien Bellet <bellet>`.\n\n- |Fix| Fixed a bug in :class:`decomposition.KernelPCA`, `fit().transform()`\n  now produces the correct output (the same as `fit_transform()`) in case\n  of non-removed zero eigenvalues (`remove_zero_eig=False`).\n  `fit_inverse_transform` was also accelerated by using the same trick as\n  `fit_transform` to compute the transform of `X`.\n  :pr:`12143` by :user:`Sylvain Mari\u00e9 <smarie>`\n\n- |Fix| Fixed a bug in :class:`decomposition.NMF` where `init = 'nndsvd'`,\n  `init = 'nndsvda'`, and `init = 'nndsvdar'` are allowed when\n  `n_components < n_features` instead of\n  `n_components <= min(n_samples, n_features)`.\n  :pr:`11650` by :user:`Hossein Pourbozorg <hossein-pourbozorg>` and\n  :user:`Zijie (ZJ) Poh <zjpoh>`.\n\n- |API| The default value of the :code:`init` argument in\n  :func:`decomposition.non_negative_factorization` will change from\n  :code:`random` to :code:`None` in version 0.23 to make it consistent with\n  :class:`decomposition.NMF`. A FutureWarning is raised when\n  the default value is used.\n  :pr:`12988` by :user:`Zijie (ZJ) Poh <zjpoh>`.\n\n:mod:`sklearn.discriminant_analysis`\n....................................\n\n- |Enhancement| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now\n  preserves ``float32`` and ``float64`` dtypes. 
:pr:`8769` and\n  :pr:`11000` by :user:`Thibault Sejourne <thibsej>`\n\n- |Fix| A ``ChangedBehaviourWarning`` is now raised when\n  :class:`discriminant_analysis.LinearDiscriminantAnalysis` is given as\n  parameter ``n_components > min(n_features, n_classes - 1)``, and\n  ``n_components`` is changed to ``min(n_features, n_classes - 1)`` if so.\n  Previously the change was made, but silently. :pr:`11526` by\n  :user:`William de Vazelhes<wdevazelhes>`.\n\n- |Fix| Fixed a bug in :class:`discriminant_analysis.LinearDiscriminantAnalysis`\n  where the predicted probabilities would be incorrectly computed in the\n  multiclass case. :pr:`6848`, by :user:`Agamemnon Krasoulis\n  <agamemnonc>` and `Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Fixed a bug in :class:`discriminant_analysis.LinearDiscriminantAnalysis`\n  where the predicted probabilities would be incorrectly computed with ``eigen``\n  solver. :pr:`11727`, by :user:`Agamemnon Krasoulis\n  <agamemnonc>`.\n\n:mod:`sklearn.dummy`\n....................\n\n- |Fix| Fixed a bug in :class:`dummy.DummyClassifier` where the\n  ``predict_proba`` method was returning int32 array instead of\n  float64 for the ``stratified`` strategy. :pr:`13266` by\n  :user:`Christos Aridas<chkoar>`.\n\n- |Fix| Fixed a bug in :class:`dummy.DummyClassifier` where it was throwing a\n  dimension mismatch error in prediction time if a column vector ``y`` with\n  ``shape=(n, 1)`` was given at ``fit`` time. :pr:`13545` by :user:`Nick\n  Sorros <nsorros>` and `Adrin Jalali`_.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |MajorFeature| Add two new implementations of\n  gradient boosting trees: :class:`ensemble.HistGradientBoostingClassifier`\n  and :class:`ensemble.HistGradientBoostingRegressor`. 
The implementation of\n  these estimators is inspired by\n  `LightGBM <https:\/\/github.com\/Microsoft\/LightGBM>`_ and can be orders of\n  magnitude faster than :class:`ensemble.GradientBoostingRegressor` and\n  :class:`ensemble.GradientBoostingClassifier` when the number of samples is\n  larger than tens of thousands of samples. The API of these new estimators\n  is slightly different, and some of the features from\n  :class:`ensemble.GradientBoostingClassifier` and\n  :class:`ensemble.GradientBoostingRegressor` are not yet supported.\n\n  These new estimators are experimental, which means that their results or\n  their API might change without any deprecation cycle. To use them, you\n  need to explicitly import ``enable_hist_gradient_boosting``::\n\n    >>> # explicitly require this experimental feature\n    >>> from sklearn.experimental import enable_hist_gradient_boosting  # noqa\n    >>> # now you can import normally from sklearn.ensemble\n    >>> from sklearn.ensemble import HistGradientBoostingClassifier\n\n  .. note::\n      Update: since version 1.0, these estimators are not experimental\n      anymore and you don't need to use `from sklearn.experimental import\n      enable_hist_gradient_boosting`.\n\n  :pr:`12807` by :user:`Nicolas Hug<NicolasHug>`.\n\n- |Feature| Add :class:`ensemble.VotingRegressor`\n  which provides an equivalent of :class:`ensemble.VotingClassifier`\n  for regression problems.\n  :pr:`12513` by :user:`Ramil Nugmanov <stsouko>` and\n  :user:`Mohamed Ali Jamaoui <mohamed-ali>`.\n\n- |Efficiency| Make :class:`ensemble.IsolationForest` prefer threads over\n  processes when running with ``n_jobs > 1`` as the underlying decision tree\n  fit calls do release the GIL. This changes reduces memory usage and\n  communication overhead. :pr:`12543` by :user:`Isaac Storch <istorch>`\n  and `Olivier Grisel`_.\n\n- |Efficiency| Make :class:`ensemble.IsolationForest` more memory efficient\n  by avoiding keeping in memory each tree prediction. 
:pr:`13260` by\n  `Nicolas Goix`_.\n\n- |Efficiency| :class:`ensemble.IsolationForest` now uses chunks of data at\n  prediction step, thus capping the memory usage. :pr:`13283` by\n  `Nicolas Goix`_.\n\n- |Efficiency| :class:`sklearn.ensemble.GradientBoostingClassifier` and\n  :class:`sklearn.ensemble.GradientBoostingRegressor` now keep the\n  input ``y`` as ``float64`` to avoid it being copied internally by trees.\n  :pr:`13524` by `Adrin Jalali`_.\n\n- |Enhancement| Minimized the validation of X in\n  :class:`ensemble.AdaBoostClassifier` and :class:`ensemble.AdaBoostRegressor`\n  :pr:`13174` by :user:`Christos Aridas <chkoar>`.\n\n- |Enhancement| :class:`ensemble.IsolationForest` now exposes ``warm_start``\n  parameter, allowing iterative addition of trees to an isolation\n  forest. :pr:`13496` by :user:`Peter Marko <petibear>`.\n\n- |Fix| The values of ``feature_importances_`` in all random forest based\n  models (i.e.\n  :class:`ensemble.RandomForestClassifier`,\n  :class:`ensemble.RandomForestRegressor`,\n  :class:`ensemble.ExtraTreesClassifier`,\n  :class:`ensemble.ExtraTreesRegressor`,\n  :class:`ensemble.RandomTreesEmbedding`,\n  :class:`ensemble.GradientBoostingClassifier`, and\n  :class:`ensemble.GradientBoostingRegressor`) now:\n\n  - sum up to ``1``\n  - all the single node trees in feature importance calculation are ignored\n  - in case all trees have only one single node (i.e. a root node),\n    feature importances will be an array of all zeros.\n\n  :pr:`13636` and :pr:`13620` by `Adrin Jalali`_.\n\n- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` and\n  :class:`ensemble.GradientBoostingRegressor`, which didn't support\n  scikit-learn estimators as the initial estimator. Also added support of\n  initial estimator which does not support sample weights. 
:pr:`12436` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>` and :pr:`12983` by\n  :user:`Nicolas Hug<NicolasHug>`.\n\n- |Fix| Fixed the output of the average path length computed in\n  :class:`ensemble.IsolationForest` when the input is either 0, 1 or 2.\n  :pr:`13251` by :user:`Albert Thomas <albertcthomas>`\n  and :user:`joshuakennethjones <joshuakennethjones>`.\n\n- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` where\n  the gradients would be incorrectly computed in multiclass classification\n  problems. :pr:`12715` by :user:`Nicolas Hug<NicolasHug>`.\n\n- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` where\n  validation sets for early stopping were not sampled with stratification.\n  :pr:`13164` by :user:`Nicolas Hug<NicolasHug>`.\n\n- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` where\n  the default initial prediction of a multiclass classifier would predict the\n  classes priors instead of the log of the priors. :pr:`12983` by\n  :user:`Nicolas Hug<NicolasHug>`.\n\n- |Fix| Fixed a bug in :class:`ensemble.RandomForestClassifier` where the\n  ``predict`` method would error for multiclass multioutput forests models\n  if any targets were strings. :pr:`12834` by :user:`Elizabeth Sander\n  <elsander>`.\n\n- |Fix| Fixed a bug in `ensemble.gradient_boosting.LossFunction` and\n  `ensemble.gradient_boosting.LeastSquaresError` where the default\n  value of ``learning_rate`` in ``update_terminal_regions`` is not consistent\n  with the document and the caller functions. 
Note however that directly using\n  these loss functions is deprecated.\n  :pr:`6463` by :user:`movelikeriver <movelikeriver>`.\n\n- |Fix| `ensemble.partial_dependence` (and consequently the new\n  version :func:`sklearn.inspection.partial_dependence`) now takes sample\n  weights into account for the partial dependence computation when the\n  gradient boosting model has been trained with sample weights.\n  :pr:`13193` by :user:`Samuel O. Ronsin <samronsin>`.\n\n- |API| `ensemble.partial_dependence` and\n  `ensemble.plot_partial_dependence` are now deprecated in favor of\n  :func:`inspection.partial_dependence<sklearn.inspection.partial_dependence>`\n  and\n  `inspection.plot_partial_dependence<sklearn.inspection.plot_partial_dependence>`.\n  :pr:`12599` by :user:`Trevor Stephens<trevorstephens>` and\n  :user:`Nicolas Hug<NicolasHug>`.\n\n- |Fix| :class:`ensemble.VotingClassifier` and\n  :class:`ensemble.VotingRegressor` were failing during ``fit`` when one\n  of the estimators was set to ``None`` and ``sample_weight`` was not ``None``.\n  :pr:`13779` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |API| :class:`ensemble.VotingClassifier` and\n  :class:`ensemble.VotingRegressor` accept ``'drop'`` to disable an estimator\n  in addition to ``None`` to be consistent with other estimators (i.e.,\n  :class:`pipeline.FeatureUnion` and :class:`compose.ColumnTransformer`).\n  :pr:`13780` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n`sklearn.externals`\n...................\n\n- |API| Deprecated `externals.six` since we have dropped support for\n  Python 2.7. 
:pr:`12916` by :user:`Hanmin Qin <qinhanmin2014>`.

:mod:`sklearn.feature_extraction`
.................................

- |Fix| If ``input='file'`` or ``input='filename'``, and a callable is given as
  the ``analyzer``, :class:`sklearn.feature_extraction.text.HashingVectorizer`,
  :class:`sklearn.feature_extraction.text.TfidfVectorizer`, and
  :class:`sklearn.feature_extraction.text.CountVectorizer` now read the data
  from the file(s) and then pass it to the given ``analyzer``, instead of
  passing the file name(s) or the file object(s) to the analyzer.
  :pr:`13641` by `Adrin Jalali`_.

:mod:`sklearn.impute`
.....................

- |MajorFeature| Added :class:`impute.IterativeImputer`, which is a strategy
  for imputing missing values by modeling each feature with missing values as a
  function of other features in a round-robin fashion. :pr:`8478` and
  :pr:`12177` by :user:`Sergey Feldman <sergeyf>` and :user:`Ben Lawson
  <benlawson>`.

  The API of IterativeImputer is experimental and subject to change without any
  deprecation cycle. To use it, you need to explicitly import
  ``enable_iterative_imputer``::

    >>> from sklearn.experimental import enable_iterative_imputer  # noqa
    >>> # now you can import normally from sklearn.impute
    >>> from sklearn.impute import IterativeImputer


- |Feature| The :class:`impute.SimpleImputer` and
  :class:`impute.IterativeImputer` have a new parameter ``'add_indicator'``,
  which simply stacks a :class:`impute.MissingIndicator` transform into the
  output of the imputer's transform. That allows a predictive estimator to
  account for missingness. :pr:`12583`, :pr:`13601` by :user:`Danylo Baibak
  <DanilBaibak>`.

- |Fix| In :class:`impute.MissingIndicator` avoid implicit densification by
  raising an exception if input is sparse and the ``missing_values`` property
  is set to 0.
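As a minimal sketch of the ``add_indicator`` behaviour described above (the toy data here is made up for illustration)::

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Made-up toy data: the first feature has one missing value.
X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, 6.0]])

# add_indicator=True stacks a MissingIndicator column for each feature
# that contains missing values, so a downstream estimator can account
# for missingness.
imputer = SimpleImputer(strategy="mean", add_indicator=True)
Xt = imputer.fit_transform(X)

# Two imputed feature columns plus one indicator column.
print(Xt.shape)  # (3, 3)
```

The missing entry is replaced by the column mean (here 4.0) and the appended indicator column is 1 exactly where a value was missing.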
:pr:`13240` by :user:`Bartosz Telenczuk <btel>`.

- |Fix| Fixed two bugs in :class:`impute.MissingIndicator`. First, when
  ``X`` is sparse, all the non-zero non-missing values used to become
  explicit False in the transformed data. Then, when
  ``features='missing-only'``, all features used to be kept if there were no
  missing values at all. :pr:`13562` by :user:`Jérémie du Boisberranger
  <jeremiedbb>`.

:mod:`sklearn.inspection`
.........................

(new subpackage)

- |Feature| Partial dependence plots
  (`inspection.plot_partial_dependence`) are now supported for
  any regressor or classifier (provided that they have a `predict_proba`
  method). :pr:`12599` by :user:`Trevor Stephens <trevorstephens>` and
  :user:`Nicolas Hug <NicolasHug>`.

:mod:`sklearn.isotonic`
.......................

- |Feature| Allow different dtypes (such as float32) in
  :class:`isotonic.IsotonicRegression`.
  :pr:`8769` by :user:`Vlad Niculae <vene>`

:mod:`sklearn.linear_model`
...........................

- |Enhancement| :class:`linear_model.Ridge` now preserves ``float32`` and
  ``float64`` dtypes. :issue:`8769` and :issue:`11000` by
  :user:`Guillaume Lemaitre <glemaitre>`, and :user:`Joan Massich <massich>`

- |Feature| :class:`linear_model.LogisticRegression` and
  :class:`linear_model.LogisticRegressionCV` now support Elastic-Net penalty,
  with the 'saga' solver. :pr:`11646` by :user:`Nicolas Hug <NicolasHug>`.

- |Feature| Added :func:`linear_model.lars_path_gram`, which is
  :func:`linear_model.lars_path` in the sufficient stats mode, allowing
  users to compute :func:`linear_model.lars_path` without providing
  ``X`` and ``y``.
:pr:`11699` by :user:`Kuai Yu <yukuairoy>`.

- |Efficiency| `linear_model.make_dataset` now preserves
  ``float32`` and ``float64`` dtypes, reducing memory consumption in stochastic
  gradient, SAG and SAGA solvers.
  :pr:`8769` and :pr:`11000` by
  :user:`Nelle Varoquaux <NelleV>`, :user:`Arthur Imbert <Henley13>`,
  :user:`Guillaume Lemaitre <glemaitre>`, and :user:`Joan Massich <massich>`

- |Enhancement| :class:`linear_model.LogisticRegression` now supports an
  unregularized objective when ``penalty='none'`` is passed. This is
  equivalent to setting ``C=np.inf`` with l2 regularization. Not supported
  by the liblinear solver. :pr:`12860` by :user:`Nicolas Hug
  <NicolasHug>`.

- |Enhancement| `sparse_cg` solver in :class:`linear_model.Ridge`
  now supports fitting the intercept (i.e. ``fit_intercept=True``) when
  inputs are sparse. :pr:`13336` by :user:`Bartosz Telenczuk <btel>`.

- |Enhancement| The coordinate descent solver used in `Lasso`, `ElasticNet`,
  etc. now issues a `ConvergenceWarning` when it completes without meeting the
  desired tolerance.
  :pr:`11754` and :pr:`13397` by :user:`Brent Fagan <brentfagan>` and
  :user:`Adrin Jalali <adrinjalali>`.

- |Fix| Fixed a bug in :class:`linear_model.LogisticRegression` and
  :class:`linear_model.LogisticRegressionCV` with 'saga' solver, where the
  weights would not be correctly updated in some cases.
  :pr:`11646` by `Tom Dupre la Tour`_.

- |Fix| Fixed the posterior mean, posterior covariance and returned
  regularization parameters in :class:`linear_model.BayesianRidge`. The
  posterior mean and the posterior covariance were not the ones computed
  with the last update of the regularization parameters and the returned
  regularization parameters were not the final ones. Also fixed the formula of
  the log marginal likelihood used to compute the score when
  `compute_score=True`.
:pr:`12174` by
  :user:`Albert Thomas <albertcthomas>`.

- |Fix| Fixed a bug in :class:`linear_model.LassoLarsIC`, where user input
  ``copy_X=False`` at instance creation would be overridden by default
  parameter value ``copy_X=True`` in ``fit``.
  :pr:`12972` by :user:`Lucio Fernandez-Arjona <luk-f-a>`

- |Fix| Fixed a bug in :class:`linear_model.LinearRegression` that
  was not returning the same coefficients and intercepts with
  ``fit_intercept=True`` in the sparse and dense case.
  :pr:`13279` by `Alexandre Gramfort`_

- |Fix| Fixed a bug in :class:`linear_model.HuberRegressor` that was
  broken when ``X`` was of dtype bool. :pr:`13328` by `Alexandre Gramfort`_.

- |Fix| Fixed a performance issue of ``saga`` and ``sag`` solvers when called
  in a :class:`joblib.Parallel` setting with ``n_jobs > 1`` and
  ``backend="threading"``, causing them to perform worse than in the sequential
  case. :pr:`13389` by :user:`Pierre Glaser <pierreglaser>`.

- |Fix| Fixed a bug in
  `linear_model.stochastic_gradient.BaseSGDClassifier` that was not
  deterministic when trained in a multi-class setting on several threads.
  :pr:`13422` by :user:`Clément Doumouro <ClemDoum>`.

- |Fix| Fixed bug in :func:`linear_model.ridge_regression`,
  :class:`linear_model.Ridge` and
  :class:`linear_model.RidgeClassifier` that
  caused an unhandled exception for arguments ``return_intercept=True`` and
  ``solver=auto`` (default) or any other solver different from ``sag``.
  :pr:`13363` by :user:`Bartosz Telenczuk <btel>`

- |Fix| :func:`linear_model.ridge_regression` will now raise an exception
  if ``return_intercept=True`` and solver is different from ``sag``. Previously,
  only a warning was issued.
:pr:`13363` by :user:`Bartosz Telenczuk <btel>`

- |Fix| :func:`linear_model.ridge_regression` will choose ``sparse_cg``
  solver for sparse inputs when ``solver=auto`` and ``sample_weight``
  is provided (previously `cholesky` solver was selected).
  :pr:`13363` by :user:`Bartosz Telenczuk <btel>`

- |API| The use of :class:`linear_model.lars_path` with ``X=None``
  while passing ``Gram`` is deprecated in version 0.21 and will be removed
  in version 0.23. Use :class:`linear_model.lars_path_gram` instead.
  :pr:`11699` by :user:`Kuai Yu <yukuairoy>`.

- |API| `linear_model.logistic_regression_path` is deprecated
  in version 0.21 and will be removed in version 0.23.
  :pr:`12821` by :user:`Nicolas Hug <NicolasHug>`.

- |Fix| :class:`linear_model.RidgeCV` with leave-one-out cross-validation
  now correctly fits an intercept when ``fit_intercept=True`` and the design
  matrix is sparse. :issue:`13350` by :user:`Jérôme Dockès <jeromedockes>`

:mod:`sklearn.manifold`
.......................

- |Efficiency| Make :func:`manifold.trustworthiness` use an inverted index
  instead of an `np.where` lookup to find the rank of neighbors in the input
  space. This improves efficiency in particular when computed with
  lots of neighbors and/or small datasets.
  :pr:`9907` by :user:`William de Vazelhes <wdevazelhes>`.

:mod:`sklearn.metrics`
......................

- |Feature| Added the :func:`metrics.max_error` metric and a corresponding
  ``'max_error'`` scorer for single output regression.
  :pr:`12232` by :user:`Krishna Sangeeth <whiletruelearn>`.

- |Feature| Add :func:`metrics.multilabel_confusion_matrix`, which calculates a
  confusion matrix with true positive, false positive, false negative and true
  negative counts for each class.
This facilitates the calculation of set-wise
  metrics such as recall, specificity, fall out and miss rate.
  :pr:`11179` by :user:`Shangwu Yao <ShangwuYao>` and `Joel Nothman`_.

- |Feature| :func:`metrics.jaccard_score` has been added to calculate the
  Jaccard coefficient as an evaluation metric for binary, multilabel and
  multiclass tasks, with an interface analogous to :func:`metrics.f1_score`.
  :pr:`13151` by :user:`Gaurav Dhingra <gxyd>` and `Joel Nothman`_.

- |Feature| Added :func:`metrics.pairwise.haversine_distances` which can be
  accessed with `metric='haversine'` through :func:`metrics.pairwise_distances`
  and estimators. (Haversine distance was previously available for nearest
  neighbors calculation.) :pr:`12568` by :user:`Wei Xue <xuewei4d>`,
  :user:`Emmanuel Arias <eamanu>` and `Joel Nothman`_.

- |Efficiency| Faster :func:`metrics.pairwise_distances` with `n_jobs`
  > 1 by using a thread-based backend, instead of process-based backends.
  :pr:`8216` by :user:`Pierre Glaser <pierreglaser>` and
  :user:`Romuald Menuet <zanospi>`

- |Efficiency| The pairwise manhattan distances with sparse input now use the
  BLAS shipped with scipy instead of the bundled BLAS. :pr:`12732` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`

- |Enhancement| Use label `accuracy` instead of `micro-average` on
  :func:`metrics.classification_report` to avoid confusion.
`micro-average` is
  only shown for multi-label or multi-class with a subset of classes because
  it is otherwise identical to accuracy.
  :pr:`12334` by :user:`Emmanuel Arias <eamanu@eamanu.com>`,
  `Joel Nothman`_ and `Andreas Müller`_

- |Enhancement| Added `beta` parameter to
  :func:`metrics.homogeneity_completeness_v_measure` and
  :func:`metrics.v_measure_score` to configure the
  tradeoff between homogeneity and completeness.
  :pr:`13607` by :user:`Stephane Couvreur <scouvreur>`
  and :user:`Ivan Sanchez <ivsanro1>`.

- |Fix| The metric :func:`metrics.r2_score` is degenerate with a single sample
  and now it returns NaN and raises :class:`exceptions.UndefinedMetricWarning`.
  :pr:`12855` by :user:`Pawel Sendyk <psendyk>`.

- |Fix| Fixed a bug where :func:`metrics.brier_score_loss` would sometimes
  return an incorrect result when there's only one class in ``y_true``.
  :pr:`13628` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed a bug in :func:`metrics.label_ranking_average_precision_score`
  where sample_weight wasn't taken into account for samples with degenerate
  labels.
  :pr:`13447` by :user:`Dan Ellis <dpwe>`.

- |API| The parameter ``labels`` in :func:`metrics.hamming_loss` is deprecated
  in version 0.21 and will be removed in version 0.23. :pr:`10580` by
  :user:`Reshama Shaikh <reshamas>` and :user:`Sandra Mitrovic <SandraMNE>`.

- |Fix| The function :func:`metrics.pairwise.euclidean_distances`, and
  therefore several estimators with ``metric='euclidean'``, suffered from
  numerical precision issues with ``float32`` features. Precision has been
  increased at the cost of a small drop of performance. :pr:`13554` by
  :user:`Celelibi` and :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| `metrics.jaccard_similarity_score` is deprecated in favour of
  the more consistent :func:`metrics.jaccard_score`.
The former behavior for
  binary and multiclass targets is broken.
  :pr:`13151` by `Joel Nothman`_.

:mod:`sklearn.mixture`
......................

- |Fix| Fixed a bug in `mixture.BaseMixture` and therefore on estimators
  based on it, i.e. :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture`, where ``fit_predict`` and
  ``fit.predict`` were not equivalent. :pr:`13142` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.


:mod:`sklearn.model_selection`
..............................

- |Feature| Classes :class:`~model_selection.GridSearchCV` and
  :class:`~model_selection.RandomizedSearchCV` now allow for refit=callable
  to add flexibility in identifying the best estimator.
  See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_refit_callable.py`.
  :pr:`11354` by :user:`Wenhao Zhang <wenhaoz@ucla.edu>`,
  `Joel Nothman`_ and :user:`Adrin Jalali <adrinjalali>`.

- |Enhancement| Classes :class:`~model_selection.GridSearchCV`,
  :class:`~model_selection.RandomizedSearchCV`, and methods
  :func:`~model_selection.cross_val_score`,
  :func:`~model_selection.cross_val_predict`,
  :func:`~model_selection.cross_validate`, now print train scores when
  `return_train_scores` is True and `verbose` > 2.
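The ``refit=callable`` mechanism mentioned above can be sketched as follows: the callable receives ``cv_results_`` and returns the index of the candidate to refit. The selection rule and data below are made up for illustration::

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

def lowest_c_within_1pct(cv_results):
    # Hypothetical rule: among candidates within 0.01 of the best mean
    # test score, prefer the smallest C (strongest regularization).
    scores = cv_results["mean_test_score"]
    candidates = np.flatnonzero(scores >= scores.max() - 0.01)
    Cs = np.asarray(cv_results["param_C"], dtype=float)
    return candidates[np.argmin(Cs[candidates])]

X, y = make_classification(n_samples=200, random_state=0)
search = GridSearchCV(LogisticRegression(solver="liblinear"),
                      {"C": [0.01, 0.1, 1.0, 10.0]},
                      refit=lowest_c_within_1pct, cv=5)
search.fit(X, y)
print(search.best_index_)  # index chosen by the callable
```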
For
  :func:`~model_selection.learning_curve`, and
  :func:`~model_selection.validation_curve` only the latter is required.
  :pr:`12613` and :pr:`12669` by :user:`Marc Torrellas <marctorrellas>`.

- |Enhancement| Some :term:`CV splitter` classes and
  `model_selection.train_test_split` now raise ``ValueError`` when the
  resulting training set is empty.
  :pr:`12861` by :user:`Nicolas Hug <NicolasHug>`.

- |Fix| Fixed a bug where :class:`model_selection.StratifiedKFold`
  shuffles each class's samples with the same ``random_state``,
  making ``shuffle=True`` ineffective.
  :pr:`13124` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Added ability for :func:`model_selection.cross_val_predict` to handle
  multi-label (and multioutput-multiclass) targets with ``predict_proba``-type
  methods. :pr:`8773` by :user:`Stephen Hoover <stephen-hoover>`.

- |Fix| Fixed an issue in :func:`~model_selection.cross_val_predict` where
  `method="predict_proba"` always returned `0.0` when one of the classes was
  excluded in a cross-validation fold.
  :pr:`13366` by :user:`Guillaume Fournier <gfournier>`

:mod:`sklearn.multiclass`
.........................

- |Fix| Fixed an issue in :func:`multiclass.OneVsOneClassifier.decision_function`
  where the decision_function value of a given sample was different depending on
  whether the decision_function was evaluated on the sample alone or on a batch
  containing this same sample due to the scaling used in decision_function.
  :pr:`10440` by :user:`Jonathan Ohayon <Johayon>`.

:mod:`sklearn.multioutput`
..........................

- |Fix| Fixed a bug in :class:`multioutput.MultiOutputClassifier` where the
  `predict_proba` method incorrectly checked for the `predict_proba` attribute
  in the estimator object.
  :pr:`12222` by :user:`Rebekah Kim <rebekahkim>`

:mod:`sklearn.neighbors`
........................

- |MajorFeature| Added :class:`neighbors.NeighborhoodComponentsAnalysis` for
  metric
learning, which implements the Neighborhood Components Analysis
  algorithm. :pr:`10058` by :user:`William de Vazelhes <wdevazelhes>` and
  :user:`John Chiotellis <johny-c>`.

- |API| Methods in :class:`neighbors.NearestNeighbors`:
  :func:`~neighbors.NearestNeighbors.kneighbors`,
  :func:`~neighbors.NearestNeighbors.radius_neighbors`,
  :func:`~neighbors.NearestNeighbors.kneighbors_graph`,
  :func:`~neighbors.NearestNeighbors.radius_neighbors_graph`
  now raise ``NotFittedError``, rather than ``AttributeError``,
  when called before ``fit``. :pr:`12279` by :user:`Krishna Sangeeth
  <whiletruelearn>`.

:mod:`sklearn.neural_network`
.............................

- |Fix| Fixed a bug in :class:`neural_network.MLPClassifier` and
  :class:`neural_network.MLPRegressor` where the option :code:`shuffle=False`
  was being ignored. :pr:`12582` by :user:`Sam Waterbury <samwaterbury>`.

- |Fix| Fixed a bug in :class:`neural_network.MLPClassifier` where
  validation sets for early stopping were not sampled with stratification. In
  the multilabel case however, splits are still not stratified.
  :pr:`13164` by :user:`Nicolas Hug<NicolasHug>`.

:mod:`sklearn.pipeline`
.......................

- |Feature| :class:`pipeline.Pipeline` can now use indexing notation (e.g.
  ``my_pipeline[0:-1]``) to extract a subsequence of steps as another Pipeline
  instance. A Pipeline can also be indexed directly to extract a particular
  step (e.g. ``my_pipeline['svc']``), rather than accessing ``named_steps``.
  :pr:`2568` by `Joel Nothman`_.

- |Feature| Added optional parameter ``verbose`` in :class:`pipeline.Pipeline`,
  :class:`compose.ColumnTransformer` and :class:`pipeline.FeatureUnion`
  and corresponding ``make_`` helpers for showing progress and timing of
  each step.
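The new ``Pipeline`` indexing described above can be sketched briefly (step names here are arbitrary)::

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = Pipeline([("scale", StandardScaler()), ("svc", SVC())])

# Indexing by name or position returns the step's estimator itself.
assert pipe["svc"] is pipe.named_steps["svc"]
assert pipe[0] is pipe.named_steps["scale"]

# Slicing returns a new Pipeline containing a subsequence of steps.
sub = pipe[:-1]
print([name for name, _ in sub.steps])  # ['scale']
```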
:pr:`11364` by :user:`Baze Petrushev <petrushev>`,
  :user:`Karan Desai <karandesai-96>`, `Joel Nothman`_, and
  :user:`Thomas Fan <thomasjpfan>`.

- |Enhancement| :class:`pipeline.Pipeline` now supports using ``'passthrough'``
  as a transformer, with the same effect as ``None``.
  :pr:`11144` by :user:`Thomas Fan <thomasjpfan>`.

- |Enhancement| :class:`pipeline.Pipeline` implements ``__len__`` and
  therefore ``len(pipeline)`` returns the number of steps in the pipeline.
  :pr:`13439` by :user:`Lakshya KD <LakshKD>`.

:mod:`sklearn.preprocessing`
............................

- |Feature| :class:`preprocessing.OneHotEncoder` now supports dropping one
  feature per category with a new ``drop`` parameter. :pr:`12908` by
  :user:`Drew Johnston <drewmjohnston>`.

- |Efficiency| :class:`preprocessing.OneHotEncoder` and
  :class:`preprocessing.OrdinalEncoder` now handle pandas DataFrames more
  efficiently. :pr:`13253` by :user:`maikia`.

- |Efficiency| Make :class:`preprocessing.MultiLabelBinarizer` cache class
  mappings instead of calculating them every time on the fly.
  :pr:`12116` by :user:`Ekaterina Krivich <kiote>` and `Joel Nothman`_.

- |Efficiency| :class:`preprocessing.PolynomialFeatures` now supports
  compressed sparse row (CSR) matrices as input for degrees 2 and 3. This is
  typically much faster than the dense case as it scales with matrix density
  and expansion degree (on the order of density^degree), and is much, much
  faster than the compressed sparse column (CSC) case.
  :pr:`12197` by :user:`Andrew Nystrom <awnystrom>`.

- |Efficiency| Speed improvement in :class:`preprocessing.PolynomialFeatures`,
  in the dense case. Also added a new parameter ``order`` which controls output
  order for further performance gains.
:pr:`12251` by `Tom Dupre la Tour`_.

- |Fix| Fixed the calculation overflow when using a float16 dtype with
  :class:`preprocessing.StandardScaler`.
  :pr:`13007` by :user:`Raffaello Baluyot <baluyotraf>`

- |Fix| Fixed a bug in :class:`preprocessing.QuantileTransformer` and
  :func:`preprocessing.quantile_transform` to force n_quantiles to be at most
  equal to n_samples. Values of n_quantiles larger than n_samples were either
  useless or resulting in a wrong approximation of the cumulative distribution
  function estimator. :pr:`13333` by :user:`Albert Thomas <albertcthomas>`.

- |API| The default value of `copy` in :func:`preprocessing.quantile_transform`
  will change from False to True in 0.23 in order to make it more consistent
  with the default `copy` values of other functions in
  :mod:`sklearn.preprocessing` and prevent unexpected side effects by modifying
  the value of `X` inplace.
  :pr:`13459` by :user:`Hunter McGushion <HunterMcGushion>`.

:mod:`sklearn.svm`
..................

- |Fix| Fixed an issue in :func:`svm.SVC.decision_function` when
  ``decision_function_shape='ovr'``. The decision_function value of a given
  sample was different depending on whether the decision_function was evaluated
  on the sample alone or on a batch containing this same sample due to the
  scaling used in decision_function.
  :pr:`10440` by :user:`Jonathan Ohayon <Johayon>`.

:mod:`sklearn.tree`
...................

- |Feature| Decision Trees can now be plotted with matplotlib using
  `tree.plot_tree` without relying on the ``dot`` library,
  removing a hard-to-install dependency.
:pr:`8508` by `Andreas Müller`_.

- |Feature| Decision Trees can now be exported in a human-readable
  textual format using :func:`tree.export_text`.
  :pr:`6261` by :user:`Giuseppe Vettigli <JustGlowing>`.

- |Feature| ``get_n_leaves()`` and ``get_depth()`` have been added to
  `tree.BaseDecisionTree` and consequently all estimators based
  on it, including :class:`tree.DecisionTreeClassifier`,
  :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier`,
  and :class:`tree.ExtraTreeRegressor`.
  :pr:`12300` by :user:`Adrin Jalali <adrinjalali>`.

- |Fix| Trees and forests did not previously `predict` multi-output
  classification targets with string labels, despite accepting them in `fit`.
  :pr:`11458` by :user:`Mitar Milutinovic <mitar>`.

- |Fix| Fixed an issue with `tree.BaseDecisionTree`
  and consequently all estimators based
  on it, including :class:`tree.DecisionTreeClassifier`,
  :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier`,
  and :class:`tree.ExtraTreeRegressor`, where they used to exceed the given
  ``max_depth`` by 1 while expanding the tree if ``max_leaf_nodes`` and
  ``max_depth`` were both specified by the user. Please note that this also
  affects all ensemble methods using decision trees.
  :pr:`12344` by :user:`Adrin Jalali <adrinjalali>`.

:mod:`sklearn.utils`
....................

- |Feature| :func:`utils.resample` now accepts a ``stratify`` parameter for
  sampling according to class distributions. :pr:`13549` by :user:`Nicolas
  Hug <NicolasHug>`.

- |API| Deprecated ``warn_on_dtype`` parameter from :func:`utils.check_array`
  and :func:`utils.check_X_y`.
Added explicit warning for dtype conversion
  in `check_pairwise_arrays` if the ``metric`` being passed is a
  pairwise boolean metric.
  :pr:`13382` by :user:`Prathmesh Savale <praths007>`.

Multiple modules
................

- |MajorFeature| The `__repr__()` method of all estimators (used when calling
  `print(estimator)`) has been entirely re-written, building on Python's
  pretty printing standard library. All parameters are printed by default,
  but this can be altered with the ``print_changed_only`` option in
  :func:`sklearn.set_config`. :pr:`11705` by :user:`Nicolas Hug
  <NicolasHug>`.

- |MajorFeature| Add estimator tags: these are annotations of estimators
  that allow programmatic inspection of their capabilities, such as sparse
  matrix support, supported output types and supported methods. Estimator
  tags also determine the tests that are run on an estimator when
  `check_estimator` is called. Read more in the :ref:`User Guide
  <estimator_tags>`. :pr:`8022` by :user:`Andreas Müller <amueller>`.

- |Efficiency| Memory copies are avoided when casting arrays to a different
  dtype in multiple estimators. :pr:`11973` by :user:`Roman Yurchak
  <rth>`.

- |Fix| Fixed a bug in the implementation of the `our_rand_r`
  helper function that was not behaving consistently across platforms.
  :pr:`13422` by :user:`Madhura Parikh <jdnc>` and
  :user:`Clément Doumouro <ClemDoum>`.


Miscellaneous
.............

- |Enhancement| Joblib is no longer vendored in scikit-learn, and becomes a
  dependency.
The minimal supported version is joblib 0.11; however, using
  version >= 0.13 is strongly recommended.
  :pr:`13531` by :user:`Roman Yurchak <rth>`.


Changes to estimator checks
---------------------------

These changes mostly affect library developers.

- Add ``check_fit_idempotent`` to
  :func:`~utils.estimator_checks.check_estimator`, which checks that
  when `fit` is called twice with the same data, the output of
  `predict`, `predict_proba`, `transform`, and `decision_function` does not
  change. :pr:`12328` by :user:`Nicolas Hug <NicolasHug>`

- Many checks can now be disabled or configured with :ref:`estimator_tags`.
  :pr:`8022` by :user:`Andreas Müller <amueller>`.

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of the
project since version 0.20, including:

adanhawth, Aditya Vyas, Adrin Jalali, Agamemnon Krasoulis, Albert Thomas,
Alberto Torres, Alexandre Gramfort, amourav, Andrea Navarrete, Andreas Mueller,
Andrew Nystrom, assiaben, Aurélien Bellet, Bartosz Michałowski, Bartosz
Telenczuk, bauks, BenjaStudio, bertrandhaut, Bharat Raghunathan, brentfagan,
Bryan Woods, Cat Chenal, Cheuk Ting Ho, Chris Choe, Christos Aridas, Clément
Doumouro, Cole Smith, Connossor, Corey Levinson, Dan Ellis, Dan Stine, Danylo
Baibak, daten-kieker, Denis Kataev, Didi Bar-Zev, Dillon Gardner, Dmitry Mottl,
Dmitry Vukolov, Dougal J.
Sutherland, Dowon, drewmjohnston, Dror Atariah,
Edward J Brown, Ekaterina Krivich, Elizabeth Sander, Emmanuel Arias, Eric
Chang, Eric Larson, Erich Schubert, esvhd, Falak, Feda Curic, Federico Caselli,
Frank Hoang, Fibinse Xavier, Finn O'Shea, Gabriel Marzinotto, Gabriel Vacaliuc,
Gabriele Calvo, Gael Varoquaux, GauravAhlawat, Giuseppe Vettigli, Greg Gandenberger,
Guillaume Fournier, Guillaume Lemaitre, Gustavo De Mari Pereira, Hanmin Qin,
haroldfox, hhu-luqi, Hunter McGushion, Ian Sanders, JackLangerman, Jacopo
Notarstefano, jakirkham, James Bourbeau, Jan Koch, Jan S, janvanrijn, Jarrod
Millman, jdethurens, jeremiedbb, JF, joaak, Joan Massich, Joel Nothman,
Jonathan Ohayon, Joris Van den Bossche, josephsalmon, Jérémie Méhault, Katrin
Leinweber, ken, kms15, Koen, Kossori Aruku, Krishna Sangeeth, Kuai Yu, Kulbear,
Kushal Chauhan, Kyle Jackson, Lakshya KD, Leandro Hermida, Lee Yi Jie Joel,
Lily Xiong, Lisa Sarah Thomas, Loic Esteve, louib, luk-f-a, maikia, mail-liam,
Manimaran, Manuel López-Ibáñez, Marc Torrellas, Marco Gaido, Marco Gorelli,
MarcoGorelli, marineLM, Mark Hannel, Martin Gubri, Masstran, mathurinm, Matthew
Roeschke, Max Copeland, melsyt, mferrari3, Mickaël Schoentgen, Ming Li, Mitar,
Mohammad Aftab, Mohammed AbdelAal, Mohammed Ibraheem, Muhammad Hassaan Rafique,
mwestt, Naoya Iijima, Nicholas Smith, Nicolas Goix, Nicolas Hug, Nikolay
Shebanov, Oleksandr Pavlyk, Oliver Rausch, Olivier Grisel, Orestis, Osman, Owen
Flanagan, Paul Paczuski, Pavel Soriano, pavlos kallis, Pawel Sendyk, peay,
Peter, Peter Cock, Peter Hausamann, Peter Marko, Pierre Glaser, pierretallotte,
Pim de Haan, Piotr Szymański, Prabakaran Kumaresshan, Pradeep Reddy Raamana,
Prathmesh Savale, Pulkit Maloo, Quentin Batista, Radostin Stoyanov, Raf
Baluyot, Rajdeep Dua, Ramil Nugmanov, Raúl García Calvo, Rebekah Kim, Reshama
Shaikh, Rohan Lekhwani, Rohan Singh, Rohan Varma, Rohit Kapoor, Roman
Feldbauer, Roman
Yurchak, Romuald M, Roopam Sharma, Ryan, Rüdiger Busche, Sam
Waterbury, Samuel O. Ronsin, SandroCasagrande, Scott Cole, Scott Lowe,
Sebastian Raschka, Shangwu Yao, Shivam Kotwalia, Shiyu Duan, smarie, Sriharsha
Hatwar, Stephen Hoover, Stephen Tierney, Stéphane Couvreur, surgan12,
SylvainLan, TakingItCasual, Tashay Green, thibsej, Thomas Fan, Thomas J Fan,
Thomas Moreau, Tom Dupré la Tour, Tommy, Tulio Casagrande, Umar Farouk Umar,
Utkarsh Upadhyay, Vinayak Mehta, Vishaal Kapoor, Vivek Kumar, Vlad Niculae,
vqean3, Wenhao Zhang, William de Vazelhes, xhan, Xing Han Lu, xinyuliu12,
Yaroslav Halchenko, Zach Griffith, Zach Miller, Zayd Hammoudeh, Zhuyi Xue,
Zijie (ZJ) Poh, ^__^
   user  Fabian Klopfer  someusername1     and  user  Hanmin Qin  qinhanmin2014        Fix  Fixed a bug in  class  cluster KMeans  where KMeans   initialisation   could rarely result in an IndexError   issue  11756  by  Joel Nothman      mod  sklearn compose                             Fix  Fixed an issue in  class  compose ColumnTransformer  where using   DataFrames whose column order differs between  func   fit   and    func   transform   could lead to silently passing incorrect columns to the     remainder   transformer     pr  14237  by  Andreas Schuderer  schuderer      mod  sklearn datasets                              Fix   func  datasets fetch california housing      func  datasets fetch covtype      func  datasets fetch kddcup99    func  datasets fetch olivetti faces      func  datasets fetch rcv1   and  func  datasets fetch species distributions    try to persist the previously cache using the new   joblib   if the cached   data was persisted using the deprecated   sklearn externals joblib    This   behavior is set to be deprecated and removed in v0 23     pr  14197  by  Adrin Jalali      mod  sklearn ensemble                              Fix  Fix zero division error in  class  ensemble HistGradientBoostingClassifier  and    class  ensemble HistGradientBoostingRegressor      pr  14024  by  Nicolas Hug  NicolasHug      mod  sklearn impute                            Fix  Fixed a bug in  class  impute SimpleImputer  and    class  impute IterativeImputer  so that no errors are thrown when there are   missing values in training data   pr  13974  by  Frank Hoang  fhoang7      mod  sklearn inspection                                Fix  Fixed a bug in  inspection plot partial dependence  where     target   parameter was not being taken into account for multiclass problems     pr  14393  by  user  Guillem G  Subies  guillemgsubies      mod  sklearn linear model                                  Fix  Fixed a bug in  class  linear model LogisticRegressionCV  where    
 refit False   would fail depending on the    multiclass    and      penalty    parameters  regression introduced in 0 21    pr  14087  by    Nicolas Hug        Fix  Compatibility fix for  class  linear model ARDRegression  and   Scipy  1 3 0  Adapts to upstream changes to the default  pinvh  cutoff   threshold which otherwise results in poor accuracy in some cases     pr  14067  by  user  Tim Staley  timstaley      mod  sklearn neighbors                               Fix  Fixed a bug in  class  neighbors NeighborhoodComponentsAnalysis  where   the validation of initial parameters   n components      max iter   and     tol   required too strict types   pr  14092  by    user  J r mie du Boisberranger  jeremiedbb      mod  sklearn tree                          Fix  Fixed bug in  func  tree export text  when the tree has one feature and   a single feature name is passed in   pr  14053  by  Thomas Fan       Fix  Fixed an issue with  func  tree plot tree  where it displayed   entropy calculations even for  gini  criterion in DecisionTreeClassifiers     pr  13947  by  user  Frank Hoang  fhoang7         changes 0 21 2   Version 0 21 2                   24 May 2019    Changelog             mod  sklearn decomposition                                   Fix  Fixed a bug in  class  cross decomposition CCA  improving numerical   stability when  Y  is close to zero   pr  13903  by  Thomas Fan      mod  sklearn metrics                             Fix  Fixed a bug in  func  metrics pairwise euclidean distances  where a   part of the distance matrix was left un instanciated for sufficiently large   float32 datasets  regression introduced in 0 21    pr  13910  by    user  J r mie du Boisberranger  jeremiedbb      mod  sklearn preprocessing                                   Fix  Fixed a bug in  class  preprocessing OneHotEncoder  where the new    drop  parameter was not reflected in  get feature names    pr  13894    by  user  James Myatt  jamesmyatt       sklearn utils sparsefuncs    
- |Fix| Fixed a bug where ``min_max_axis`` would fail on 32-bit systems
  for certain large inputs. This affects :class:`preprocessing.MaxAbsScaler`,
  :func:`preprocessing.normalize` and :class:`preprocessing.LabelBinarizer`.
  :pr:`13741` by :user:`Roddy MacSween <rlms>`.

.. _changes_0_21_1:

Version 0.21.1
==============

**17 May 2019**

This is a bug fix release to primarily resolve some packaging issues in
version 0.21.0. It also includes minor documentation improvements and some
bug fixes.

Changelog
---------

:mod:`sklearn.inspection`
.........................

- |Fix| Fixed a bug in :func:`inspection.partial_dependence` to only check
  classifier and not regressor for the multiclass-multioutput case.
  :pr:`14309` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.metrics`
......................

- |Fix| Fixed a bug in :class:`metrics.pairwise_distances` where it would raise
  ``AttributeError`` for boolean metrics when ``X`` had a boolean dtype and
  ``Y == None``. :issue:`13864` by :user:`Paresh Mathur <rick2047>`.

- |Fix| Fixed two bugs in :class:`metrics.pairwise_distances` when
  ``n_jobs > 1``. First it used to return a distance matrix with the same
  dtype as input, even for integer dtype. Then the diagonal was not zeros for
  euclidean metric when ``Y`` is ``X``. :issue:`13877` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.neighbors`
........................

- |Fix| Fixed a bug in :class:`neighbors.KernelDensity` which could not be
  restored from a pickle if ``sample_weight`` had been used.
  :issue:`13772` by :user:`Aditya Vyas <aditya1702>`.

.. _changes_0_21:

Version 0.21.0
==============

**May 2019**

Changed models
--------------

The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.
- :class:`discriminant_analysis.LinearDiscriminantAnalysis` for multiclass
  classification |Fix|
- :class:`discriminant_analysis.LinearDiscriminantAnalysis` with 'eigen'
  solver |Fix|
- :class:`linear_model.BayesianRidge` |Fix|
- Decision trees and derived ensembles when both ``max_depth`` and
  ``max_leaf_nodes`` are set |Fix|
- :class:`linear_model.LogisticRegression` and
  :class:`linear_model.LogisticRegressionCV` with 'saga' solver |Fix|
- :class:`ensemble.GradientBoostingClassifier` |Fix|
- :class:`sklearn.feature_extraction.text.HashingVectorizer`,
  :class:`sklearn.feature_extraction.text.TfidfVectorizer`, and
  :class:`sklearn.feature_extraction.text.CountVectorizer` |Fix|
- :class:`neural_network.MLPClassifier` |Fix|
- :func:`svm.SVC.decision_function` and
  :func:`multiclass.OneVsOneClassifier.decision_function` |Fix|
- :class:`linear_model.SGDClassifier` and any derived classifiers |Fix|
- Any model using the `linear_model.sag.sag_solver` function with a ``0``
  seed, including :class:`linear_model.LogisticRegression`,
  :class:`linear_model.LogisticRegressionCV`, :class:`linear_model.Ridge`,
  and :class:`linear_model.RidgeCV` with 'sag' solver |Fix|
- :class:`linear_model.RidgeCV` when using leave-one-out cross-validation
  with sparse inputs |Fix|

Details are listed in the changelog below.

(While we are trying to better inform users by providing this information, we
cannot assure that this list is complete.)

Known Major Bugs
----------------

* The default ``max_iter`` for :class:`linear_model.LogisticRegression` is too
  small for many solvers given the default ``tol``. In particular, we
  accidentally changed the default ``max_iter`` for the liblinear solver from
  1000 to 100 iterations in :pr:`3591`, released in version 0.16.
  In a future release we hope to choose better default ``max_iter`` and ``tol``
  heuristically depending on the solver (see :pr:`13317`).

Changelog
---------

Support for Python 3.4 and below has been officially dropped.
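The ``max_iter`` caveat in the Known Major Bugs note above can be sidestepped by setting the parameter explicitly rather than relying on the default. A minimal sketch (the dataset and parameter values are illustrative, not from the changelog):

```python
# Sketch for the "Known Major Bugs" note above: the liblinear solver's
# default max_iter was accidentally lowered from 1000 to 100, so passing
# max_iter (and tol, if needed) explicitly avoids silent early stopping.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Explicit values instead of the (too small) defaults discussed above.
clf = LogisticRegression(solver="liblinear", max_iter=1000, tol=1e-4)
clf.fit(X, y)
print(clf.n_iter_)  # iterations actually used by the solver
```

Checking ``n_iter_`` after fitting (or watching for ``ConvergenceWarning``) is a quick way to tell whether the solver actually converged within the budget.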
..
    Entries should be grouped by module (in alphabetic order) and prefixed with
    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
    |Fix| or |API| (see whats_new.rst for descriptions).
    Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
    Changes not specific to a module should be listed under *Multiple Modules*
    or *Miscellaneous*.
    Entries should end with:
    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
    where 123456 is the *pull request* number, not the issue number.

:mod:`sklearn.base`
...................

- |API| The R2 score used when calling ``score`` on a regressor will use
  ``multioutput='uniform_average'`` from version 0.23 to keep consistent with
  :func:`metrics.r2_score`. This will influence the ``score`` method of all
  the multioutput regressors (except for
  :class:`multioutput.MultiOutputRegressor`). :pr:`13157` by
  :user:`Hanmin Qin <qinhanmin2014>`.

:mod:`sklearn.calibration`
..........................

- |Enhancement| Added support to bin the data passed into
  :class:`calibration.calibration_curve` by quantiles instead of uniformly
  between 0 and 1. :pr:`13086` by :user:`Scott Cole <srcole>`.

- |Enhancement| Allow n-dimensional arrays as input for
  `calibration.CalibratedClassifierCV`. :pr:`13485` by
  :user:`William de Vazelhes <wdevazelhes>`.

:mod:`sklearn.cluster`
......................

- |MajorFeature| A new clustering algorithm: :class:`cluster.OPTICS`: an
  algorithm related to :class:`cluster.DBSCAN`, that has hyperparameters easier
  to set and that scales better, by :user:`Shane <espg>`, `Adrin Jalali`_,
  :user:`Erich Schubert <kno10>`, `Hanmin Qin`_, and
  :user:`Assia Benbihi <assiaben>`.

- |Fix| Fixed a bug where :class:`cluster.Birch` could occasionally raise an
  AttributeError. :pr:`13651` by `Joel Nothman`_.

- |Fix| Fixed a bug in :class:`cluster.KMeans` where empty clusters weren't
  correctly relocated when using sample weights. :pr:`13486`
  by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| The ``n_components_`` attribute in
  :class:`cluster.AgglomerativeClustering` and
  :class:`cluster.FeatureAgglomeration` has been renamed to
  ``n_connected_components_``.
  :pr:`13427` by :user:`Stephane Couvreur <scouvreur>`.

- |Enhancement| :class:`cluster.AgglomerativeClustering` and
  :class:`cluster.FeatureAgglomeration` now accept a ``distance_threshold``
  parameter which can be used to find the clusters instead of ``n_clusters``.
  :issue:`9069` by :user:`Vathsala Achar <VathsalaAchar>` and `Adrin Jalali`_.

:mod:`sklearn.compose`
......................

- |API| :class:`compose.ColumnTransformer` is no longer an experimental
  feature. :pr:`13835` by :user:`Hanmin Qin <qinhanmin2014>`.

:mod:`sklearn.datasets`
.......................

- |Fix| Added support for 64-bit group IDs and pointers in SVMLight files.
  :pr:`10727` by :user:`Bryan K Woods <bryan-woods>`.

- |Fix| :func:`datasets.load_sample_images` returns images with a deterministic
  order. :pr:`13250` by :user:`Thomas Fan <thomasjpfan>`.

:mod:`sklearn.decomposition`
............................

- |Enhancement| :class:`decomposition.KernelPCA` now has deterministic output
  (resolved sign ambiguity in eigenvalue decomposition of the kernel matrix).
  :pr:`13241` by :user:`Aurélien Bellet <bellet>`.

- |Fix| Fixed a bug in :class:`decomposition.KernelPCA`, ``fit().transform()``
  now produces the correct output (the same as ``fit_transform()``) in case
  of non-removed zero eigenvalues (``remove_zero_eig=False``).
  ``fit_inverse_transform`` was also accelerated by using the same trick as
  ``fit_transform`` to compute the transform of ``X``.
  :pr:`12143` by :user:`Sylvain Marié <smarie>`.

- |Fix| Fixed a bug in :class:`decomposition.NMF` where ``init = 'nndsvd'``,
  ``init = 'nndsvda'``, and ``init = 'nndsvdar'`` are allowed when
  ``n_components < n_features`` instead of
  ``n_components <= min(n_samples, n_features)``.
  :pr:`11650` by :user:`Hossein Pourbozorg <hossein-pourbozorg>` and
  :user:`Zijie (ZJ) Poh <zjpoh>`.

- |API| The default value of the :code:`init` argument in
  :func:`decomposition.non_negative_factorization` will change from
  :code:`random` to :code:`None` in version 0.23 to make it consistent with
  :class:`decomposition.NMF`. A FutureWarning is raised when
  the default value is used.
  :pr:`12988` by :user:`Zijie (ZJ) Poh <zjpoh>`.

:mod:`sklearn.discriminant_analysis`
....................................

- |Enhancement| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now
  preserves ``float32`` and ``float64`` dtypes. :pr:`8769` and
  :pr:`11000` by :user:`Thibault Sejourne <thibsej>`.

- |Fix| A ``ChangedBehaviourWarning`` is now raised when
  :class:`discriminant_analysis.LinearDiscriminantAnalysis` is given as
  parameter ``n_components > min(n_features, n_classes - 1)``, and
  ``n_components`` is changed to ``min(n_features, n_classes - 1)`` if so.
  Previously the change was made, but silently. :pr:`11526` by
  :user:`William de Vazelhes <wdevazelhes>`.

- |Fix| Fixed a bug in
  :class:`discriminant_analysis.LinearDiscriminantAnalysis` where the
  predicted probabilities would be incorrectly computed in the multiclass
  case. :pr:`6848`, by :user:`Agamemnon Krasoulis <agamemnonc>` and
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| Fixed a bug in
  :class:`discriminant_analysis.LinearDiscriminantAnalysis` where the
  predicted probabilities would be incorrectly computed with ``eigen``
  solver. :pr:`11727`, by :user:`Agamemnon Krasoulis <agamemnonc>`.

:mod:`sklearn.dummy`
....................

- |Fix| Fixed a bug in :class:`dummy.DummyClassifier` where the
  ``predict_proba`` method was returning int32 array instead of
  float64 for the ``stratified`` strategy. :pr:`13266` by
  :user:`Christos Aridas <chkoar>`.

- |Fix| Fixed a bug in :class:`dummy.DummyClassifier` where it was throwing a
  dimension mismatch error in prediction time if a column vector ``y`` with
  ``shape=(n, 1)`` was given at ``fit`` time. :pr:`13545` by :user:`Nick
  Sorros <nsorros>` and `Adrin Jalali`_.

:mod:`sklearn.ensemble`
.......................

- |MajorFeature| Add two new implementations of
  gradient boosting trees: :class:`ensemble.HistGradientBoostingClassifier`
  and :class:`ensemble.HistGradientBoostingRegressor`. The implementation of
  these estimators is inspired by
  `LightGBM <https://github.com/Microsoft/LightGBM>`_ and can be orders of
  magnitude faster than :class:`ensemble.GradientBoostingRegressor` and
  :class:`ensemble.GradientBoostingClassifier` when the number of samples is
  larger than tens of thousands of samples. The API of these new estimators
  is slightly different, and some of the features from
  :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor` are not yet supported.

  These new estimators are experimental, which means that their results or
  their API might change without any deprecation cycle. To use them, you
  need to explicitly import ``enable_hist_gradient_boosting``::

    >>> # explicitly require this experimental feature
    >>> from sklearn.experimental import enable_hist_gradient_boosting  # noqa
    >>> # now you can import normally from sklearn.ensemble
    >>> from sklearn.ensemble import HistGradientBoostingClassifier

  .. note::

    Update: since version 1.0 these estimators are not experimental
    anymore and you don't need to use ``from sklearn.experimental import
    enable_hist_gradient_boosting``.

  :pr:`12807` by :user:`Nicolas Hug <NicolasHug>`.

- |Feature| Add :class:`ensemble.VotingRegressor`,
  which provides an equivalent of :class:`ensemble.VotingClassifier`
  for regression problems.
  :pr:`12513` by :user:`Ramil Nugmanov <stsouko>` and
  :user:`Mohamed Ali Jamaoui <mohamed-ali>`.

- |Efficiency| Make :class:`ensemble.IsolationForest` prefer threads over
  processes when running with ``n_jobs > 1``, as the underlying decision tree
  fit calls do
  release the GIL. This change reduces memory usage and
  communication overhead. :pr:`12543` by :user:`Isaac Storch <istorch>`
  and `Olivier Grisel`_.

- |Efficiency| Make :class:`ensemble.IsolationForest` more memory efficient
  by avoiding keeping in memory each tree prediction. :pr:`13260` by
  `Nicolas Goix`_.

- |Efficiency| :class:`ensemble.IsolationForest` now uses chunks of data at
  prediction step, thus capping the memory usage. :pr:`13283` by
  `Nicolas Goix`_.

- |Efficiency| :class:`sklearn.ensemble.GradientBoostingClassifier` and
  :class:`sklearn.ensemble.GradientBoostingRegressor` now keep the
  input ``y`` as ``float64`` to avoid it being copied internally by trees.
  :pr:`13524` by `Adrin Jalali`_.

- |Enhancement| Minimized the validation of X in
  :class:`ensemble.AdaBoostClassifier` and :class:`ensemble.AdaBoostRegressor`.
  :pr:`13174` by :user:`Christos Aridas <chkoar>`.

- |Enhancement| :class:`ensemble.IsolationForest` now exposes a ``warm_start``
  parameter, allowing iterative addition of trees to an isolation
  forest. :pr:`13496` by :user:`Peter Marko <petibear>`.

- |Fix| The values of ``feature_importances_`` in all random forest based
  models (i.e.
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.ExtraTreesClassifier`,
  :class:`ensemble.ExtraTreesRegressor`,
  :class:`ensemble.RandomTreesEmbedding`,
  :class:`ensemble.GradientBoostingClassifier`, and
  :class:`ensemble.GradientBoostingRegressor`) now:

  - sum up to ``1``
  - all the single node trees in feature importance calculation are ignored
  - in case all trees have only one single node (i.e. a root node),
    feature importances will be an array of all zeros.

  :pr:`13636` and :pr:`13620` by `Adrin Jalali`_.

- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor`, which didn't support
  scikit-learn estimators as the initial estimator. Also added support of
  initial estimator which does not support sample weights. :pr:`12436` by
  :user:`Jérémie du Boisberranger <jeremiedbb>` and :pr:`12983` by
  :user:`Nicolas Hug <NicolasHug>`.

- |Fix| Fixed the output of the average path length computed in
  :class:`ensemble.IsolationForest` when the input is either 0, 1 or 2.
  :pr:`13251` by :user:`Albert Thomas <albertcthomas>`
  and :user:`joshuakennethjones <joshuakennethjones>`.

- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` where
  the gradients would be incorrectly computed in multiclass classification
  problems. :pr:`12715` by :user:`Nicolas Hug <NicolasHug>`.

- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` where
  validation sets for early stopping were not sampled with stratification.
  :pr:`13164` by :user:`Nicolas Hug <NicolasHug>`.

- |Fix| Fixed a bug in :class:`ensemble.GradientBoostingClassifier` where
  the default initial prediction of a multiclass classifier would predict the
  classes priors instead of the log of the priors. :pr:`12983` by
  :user:`Nicolas Hug <NicolasHug>`.

- |Fix| Fixed a bug in :class:`ensemble.RandomForestClassifier` where the
  ``predict`` method would error for multiclass multioutput forests models
  if any targets were strings. :pr:`12834` by :user:`Elizabeth Sander
  <elsander>`.

- |Fix| Fixed a bug in `ensemble.gradient_boosting.LossFunction` and
  `ensemble.gradient_boosting.LeastSquaresError` where the default
  value of ``learning_rate`` in ``update_terminal_regions`` is not consistent
  with the document and the caller functions. Note however that directly using
  these loss functions is deprecated.
  :pr:`6463` by :user:`movelikeriver <movelikeriver>`.

- |Fix| `ensemble.partial_dependence` (and consequently the new
  version :func:`sklearn.inspection.partial_dependence`) now takes sample
  weights into account for the partial dependence computation when the
  gradient boosting model has been trained with sample weights.
  :pr:`13193` by
  :user:`Samuel O. Ronsin <samronsin>`.

- |API| `ensemble.partial_dependence` and
  `ensemble.plot_partial_dependence` are now deprecated in favor of
  :func:`inspection.partial_dependence <sklearn.inspection.partial_dependence>`
  and
  `inspection.plot_partial_dependence <sklearn.inspection.plot_partial_dependence>`.
  :pr:`12599` by :user:`Trevor Stephens <trevorstephens>` and
  :user:`Nicolas Hug <NicolasHug>`.

- |Fix| :class:`ensemble.VotingClassifier` and
  :class:`ensemble.VotingRegressor` were failing during ``fit`` if one
  of the estimators was set to ``None`` and ``sample_weight`` was not ``None``.
  :pr:`13779` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| :class:`ensemble.VotingClassifier` and
  :class:`ensemble.VotingRegressor` accept ``'drop'`` to disable an estimator
  in addition to ``None``, to be consistent with other estimators (i.e.,
  :class:`pipeline.FeatureUnion` and :class:`compose.ColumnTransformer`).
  :pr:`13780` by :user:`Guillaume Lemaitre <glemaitre>`.

``sklearn.externals``
.....................

- |API| Deprecated `externals.six` since we have dropped support for
  Python 2.7. :pr:`12916` by :user:`Hanmin Qin <qinhanmin2014>`.

:mod:`sklearn.feature_extraction`
.................................

- |Fix| If ``input='file'`` or ``input='filename'``, and a callable is given as
  the ``analyzer``,
  :class:`sklearn.feature_extraction.text.HashingVectorizer`,
  :class:`sklearn.feature_extraction.text.TfidfVectorizer`, and
  :class:`sklearn.feature_extraction.text.CountVectorizer` now read the data
  from the file(s) and then pass it to the given ``analyzer``, instead of
  passing the file name(s) or the file object(s) to the analyzer.
  :pr:`13641` by `Adrin Jalali`_.

:mod:`sklearn.impute`
.....................

- |MajorFeature| Added :class:`impute.IterativeImputer`, which is a strategy
  for imputing missing values by modeling each feature with missing values as a
  function of other features in a round-robin fashion. :pr:`8478` and
  :pr:`12177` by :user:`Sergey Feldman <sergeyf>` and :user:`Ben Lawson
  <benlawson>`.

  The API of IterativeImputer is experimental and subject to change without any
  deprecation cycle. To use them, you need to explicitly import
  ``enable_iterative_imputer``::

    >>> from sklearn.experimental import enable_iterative_imputer  # noqa
    >>> # now you can import normally from sklearn.impute
    >>> from sklearn.impute import IterativeImputer

- |Feature| The :class:`impute.SimpleImputer` and
  :class:`impute.IterativeImputer` have a new parameter ``'add_indicator'``,
  which simply stacks a :class:`impute.MissingIndicator` transform into the
  output of the imputer's transform. That allows a predictive estimator to
  account for missingness. :pr:`12583`, :pr:`13601` by :user:`Danylo Baibak
  <DanilBaibak>`.

- |Fix| In :class:`impute.MissingIndicator` avoid implicit densification by
  raising an exception if input is sparse and ``missing_values`` property
  is set to 0. :pr:`13240` by :user:`Bartosz Telenczuk <btel>`.

- |Fix| Fixed two bugs in :class:`impute.MissingIndicator`. First, when
  ``X`` is sparse, all the non-zero non missing values used to become
  explicit False in the transformed data. Then, when
  ``features='missing-only'``, all features used to be kept if there were no
  missing values at all. :pr:`13562` by :user:`Jérémie du Boisberranger
  <jeremiedbb>`.

:mod:`sklearn.inspection`
.........................

(new subpackage)

- |Feature| Partial dependence plots
  (`inspection.plot_partial_dependence`) are now supported for
  any regressor or classifier (provided that they have a ``predict_proba``
  method). :pr:`12599` by :user:`Trevor Stephens <trevorstephens>` and
  :user:`Nicolas Hug <NicolasHug>`.

:mod:`sklearn.isotonic`
.......................

- |Feature| Allow different dtypes (such as float32) in
  :class:`isotonic.IsotonicRegression`.
  :pr:`8769` by :user:`Vlad Niculae <vene>`.

:mod:`sklearn.linear_model`
...........................
- |Enhancement| :class:`linear_model.Ridge` now preserves ``float32`` and
  ``float64`` dtypes. :issue:`8769` and :issue:`11000` by
  :user:`Guillaume Lemaitre <glemaitre>` and :user:`Joan Massich <massich>`.

- |Feature| :class:`linear_model.LogisticRegression` and
  :class:`linear_model.LogisticRegressionCV` now support Elastic-Net penalty,
  with the 'saga' solver. :pr:`11646` by :user:`Nicolas Hug <NicolasHug>`.

- |Feature| Added :class:`linear_model.lars_path_gram`, which is
  :class:`linear_model.lars_path` in the sufficient stats mode, allowing
  users to compute :class:`linear_model.lars_path` without providing
  ``X`` and ``y``. :pr:`11699` by :user:`Kuai Yu <yukuairoy>`.

- |Efficiency| `linear_model.make_dataset` now preserves
  ``float32`` and ``float64`` dtypes, reducing memory consumption in stochastic
  gradient, SAG and SAGA solvers.
  :pr:`8769` and :pr:`11000` by
  :user:`Nelle Varoquaux <NelleV>`, :user:`Arthur Imbert <Henley13>`,
  :user:`Guillaume Lemaitre <glemaitre>`, and :user:`Joan Massich <massich>`.

- |Enhancement| :class:`linear_model.LogisticRegression` now supports an
  unregularized objective when ``penalty='none'`` is passed. This is
  equivalent to setting ``C=np.inf`` with l2 regularization. Not supported
  by the liblinear solver. :pr:`12860` by :user:`Nicolas Hug
  <NicolasHug>`.

- |Enhancement| ``sparse_cg`` solver in :class:`linear_model.Ridge`
  now supports fitting the intercept (i.e. ``fit_intercept=True``) when
  inputs are sparse. :pr:`13336` by :user:`Bartosz Telenczuk <btel>`.

- |Enhancement| The coordinate descent solver used in `Lasso`, `ElasticNet`,
  etc. now issues a `ConvergenceWarning` when it completes without meeting the
  desired tolerance.
  :pr:`11754` and :pr:`13397` by :user:`Brent Fagan <brentfagan>` and
  :user:`Adrin Jalali <adrinjalali>`.

- |Fix| Fixed a bug in :class:`linear_model.LogisticRegression` and
  :class:`linear_model.LogisticRegressionCV` with 'saga' solver, where the
  weights would not be
  correctly updated in some cases.
  :pr:`11646` by `Tom Dupre la Tour`_.

- |Fix| Fixed the posterior mean, posterior covariance and returned
  regularization parameters in :class:`linear_model.BayesianRidge`. The
  posterior mean and the posterior covariance were not the ones computed
  with the last update of the regularization parameters, and the returned
  regularization parameters were not the final ones. Also fixed the formula of
  the log marginal likelihood used to compute the score when
  ``compute_score=True``. :pr:`12174` by
  :user:`Albert Thomas <albertcthomas>`.

- |Fix| Fixed a bug in :class:`linear_model.LassoLarsIC`, where user input
  ``copy_X=False`` at instance creation would be overridden by default
  parameter value ``copy_X=True`` in ``fit``.
  :pr:`12972` by :user:`Lucio Fernandez-Arjona <luk-f-a>`.

- |Fix| Fixed a bug in :class:`linear_model.LinearRegression` that
  was not returning the same coefficients and intercepts with
  ``fit_intercept=True`` in sparse and dense case.
  :pr:`13279` by `Alexandre Gramfort`_.

- |Fix| Fixed a bug in :class:`linear_model.HuberRegressor` that was
  broken when ``X`` was of dtype bool. :pr:`13328` by `Alexandre Gramfort`_.

- |Fix| Fixed a performance issue of ``saga`` and ``sag`` solvers when called
  in a :class:`joblib.Parallel` setting with ``n_jobs > 1`` and
  ``backend="threading"``, causing them to perform worse than in the sequential
  case. :pr:`13389` by :user:`Pierre Glaser <pierreglaser>`.

- |Fix| Fixed a bug in
  `linear_model.stochastic_gradient.BaseSGDClassifier` that was not
  deterministic when trained in a multi-class setting on several threads.
  :pr:`13422` by :user:`Clément Doumouro <ClemDoum>`.

- |Fix| Fixed bug in :func:`linear_model.ridge_regression`,
  :class:`linear_model.Ridge` and
  :class:`linear_model.RidgeClassifier` that
  caused unhandled exception for arguments ``return_intercept=True`` and
  ``solver=auto`` (default) or any other solver different from ``sag``.
  :pr:`13363`
  by :user:`Bartosz Telenczuk <btel>`.

- |Fix| :func:`linear_model.ridge_regression` will now raise an exception
  if ``return_intercept=True`` and solver is different from ``sag``. Previously,
  only a warning was issued. :pr:`13363` by :user:`Bartosz Telenczuk <btel>`.

- |Fix| :func:`linear_model.ridge_regression` will choose ``sparse_cg``
  solver for sparse inputs when ``solver=auto`` and ``sample_weight``
  is provided (previously `cholesky` solver was selected).
  :pr:`13363` by :user:`Bartosz Telenczuk <btel>`.

- |API| The use of :class:`linear_model.lars_path` with ``X=None``
  while passing ``Gram`` is deprecated in version 0.21 and will be removed
  in version 0.23. Use :class:`linear_model.lars_path_gram` instead.
  :pr:`11699` by :user:`Kuai Yu <yukuairoy>`.

- |API| `linear_model.logistic_regression_path` is deprecated
  in version 0.21 and will be removed in version 0.23.
  :pr:`12821` by :user:`Nicolas Hug <NicolasHug>`.

- |Fix| :class:`linear_model.RidgeCV` with leave-one-out cross-validation
  now correctly fits an intercept when ``fit_intercept=True`` and the design
  matrix is sparse. :issue:`13350` by :user:`Jérôme Dockès <jeromedockes>`.

:mod:`sklearn.manifold`
.......................

- |Efficiency| Make :func:`manifold.trustworthiness` use an inverted index
  instead of an `np.where` lookup to find the rank of neighbors in the input
  space. This improves efficiency in particular when computed with
  lots of neighbors and/or small datasets.
  :pr:`9907` by :user:`William de Vazelhes <wdevazelhes>`.

:mod:`sklearn.metrics`
......................

- |Feature| Added the :func:`metrics.max_error` metric and a corresponding
  ``'max_error'`` scorer for single output regression.
  :pr:`12232` by :user:`Krishna Sangeeth <whiletruelearn>`.

- |Feature| Add :func:`metrics.multilabel_confusion_matrix`, which calculates a
  confusion matrix with true positive, false positive, false negative and true
  negative counts for each class. This facilitates the
  calculation of set-wise
  metrics such as recall, specificity, fall out and miss rate.
  :pr:`11179` by :user:`Shangwu Yao <ShangwuYao>` and `Joel Nothman`_.

- |Feature| :func:`metrics.jaccard_score` has been added to calculate the
  Jaccard coefficient as an evaluation metric for binary, multilabel and
  multiclass tasks, with an interface analogous to :func:`metrics.f1_score`.
  :pr:`13151` by :user:`Gaurav Dhingra <gxyd>` and `Joel Nothman`_.

- |Feature| Added :func:`metrics.pairwise.haversine_distances` which can be
  accessed with ``metric='pairwise'`` through :func:`metrics.pairwise_distances`
  and estimators. (Haversine distance was previously available for nearest
  neighbors calculation.) :pr:`12568` by :user:`Wei Xue <xuewei4d>`,
  :user:`Emmanuel Arias <eamanu>` and `Joel Nothman`_.

- |Efficiency| Faster :func:`metrics.pairwise_distances` with ``n_jobs``
  > 1 by using a thread-based backend, instead of process-based backends.
  :pr:`8216` by :user:`Pierre Glaser <pierreglaser>` and
  :user:`Romuald Menuet <zanospi>`.

- |Efficiency| The pairwise manhattan distances with sparse input now uses the
  BLAS shipped with scipy instead of the bundled BLAS. :pr:`12732` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Enhancement| Use label `accuracy` instead of `micro-average` on
  :func:`metrics.classification_report` to avoid confusion. `micro-average` is
  only shown for multi-label or multi-class with a subset of classes, because
  it is otherwise identical to accuracy.
  :pr:`12334` by :user:`Emmanuel Arias <eamanu@eamanu.com>`,
  `Joel Nothman`_ and `Andreas Müller`_.

- |Enhancement| Added `beta` parameter to
  :func:`metrics.homogeneity_completeness_v_measure` and
  :func:`metrics.v_measure_score` to configure the
  tradeoff between homogeneity and completeness.
  :pr:`13607` by :user:`Stephane Couvreur <scouvreur>`
  and :user:`Ivan Sanchez <ivsanro1>`.

- |Fix| The metric :func:`metrics.r2_score` is degenerate with a single sample
  and now it
  returns NaN and raises :class:`exceptions.UndefinedMetricWarning`.
  :pr:`12855` by :user:`Pawel Sendyk <psendyk>`.

- |Fix| Fixed a bug where :func:`metrics.brier_score_loss` will sometimes
  return incorrect result when there's only one class in ``y_true``.
  :pr:`13628` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Fixed a bug in :func:`metrics.label_ranking_average_precision_score`
  where sample_weight wasn't taken into account for samples with degenerate
  labels.
  :pr:`13447` by :user:`Dan Ellis <dpwe>`.

- |API| The parameter ``labels`` in :func:`metrics.hamming_loss` is deprecated
  in version 0.21 and will be removed in version 0.23. :pr:`10580` by
  :user:`Reshama Shaikh <reshamas>` and :user:`Sandra Mitrovic <SandraMNE>`.

- |Fix| The function :func:`metrics.pairwise.euclidean_distances`, and
  therefore several estimators with ``metric='euclidean'``, suffered from
  numerical precision issues with ``float32`` features. Precision has been
  increased at the cost of a small drop of performance. :pr:`13554` by
  :user:`Celelibi` and :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| `metrics.jaccard_similarity_score` is deprecated in favour of
  the more consistent :func:`metrics.jaccard_score`. The former behavior for
  binary and multiclass targets is broken.
  :pr:`13151` by `Joel Nothman`_.

:mod:`sklearn.mixture`
......................

- |Fix| Fixed a bug in `mixture.BaseMixture` and therefore on estimators
  based on it, i.e. :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture`, where ``fit_predict`` and
  ``fit.predict`` were not equivalent. :pr:`13142` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.model_selection`
..............................

- |Feature| Classes :class:`~model_selection.GridSearchCV` and
  :class:`~model_selection.RandomizedSearchCV` now allow for refit=callable
  to add flexibility in identifying the best estimator.
  See
  :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_refit_callable.py`.
  :pr:`11354` by :user:`Wenhao Zhang <wenhaoz-ucla-edu>`,
  `Joel Nothman`_ and :user:`Adrin Jalali <adrinjalali>`.

- |Enhancement| Classes :class:`~model_selection.GridSearchCV`,
  :class:`~model_selection.RandomizedSearchCV`, and methods
  :func:`~model_selection.cross_val_score`,
  :func:`~model_selection.cross_val_predict`, and
  :func:`~model_selection.cross_validate` now print train scores when
  `return_train_scores` is True and `verbose` > 2. For
  :func:`~model_selection.learning_curve` and
  :func:`~model_selection.validation_curve` only the latter is required.
  :pr:`12613` and :pr:`12669` by :user:`Marc Torrellas <marctorrellas>`.

- |Enhancement| Some :term:`CV splitter` classes and
  `model_selection.train_test_split` now raise ``ValueError`` when the
  resulting training set is empty.
  :pr:`12861` by :user:`Nicolas Hug <NicolasHug>`.

- |Fix| Fixed a bug where :class:`model_selection.StratifiedKFold`
  shuffles each class's samples with the same ``random_state``,
  making ``shuffle=True`` ineffective.
  :pr:`13124` by :user:`Hanmin Qin <qinhanmin2014>`.

- |Fix| Added ability for :func:`model_selection.cross_val_predict` to handle
  multi-label (and multioutput-multiclass) targets with ``predict_proba``-type
  methods. :pr:`8773` by :user:`Stephen Hoover <stephen-hoover>`.

- |Fix| Fixed an issue in :func:`~model_selection.cross_val_predict` where
  ``method="predict_proba"`` returned always ``0.0`` when one of the classes
  was excluded in a cross-validation fold.
  :pr:`13366` by :user:`Guillaume Fournier <gfournier>`.

:mod:`sklearn.multiclass`
.........................

- |Fix| Fixed an issue in
  :func:`multiclass.OneVsOneClassifier.decision_function` where the decision
  function value of a given sample was different depending on whether the
  decision function was evaluated on the sample alone or on a batch containing
  this same sample due to the scaling used in decision_function.
  :pr:`10440` by :user:`Jonathan Ohayon <Johayon>`.

:mod:`sklearn.multioutput`
..........................

- |Fix| Fixed a bug in :class:`multioutput.MultiOutputClassifier` where the
  `predict_proba` method incorrectly checked for `predict_proba` attribute in
  the estimator object.
  :pr:`12222` by :user:`Rebekah Kim <rebekahkim>`.

:mod:`sklearn.neighbors`
........................

- |MajorFeature| Added :class:`neighbors.NeighborhoodComponentsAnalysis` for
  metric learning, which implements the Neighborhood Components Analysis
  algorithm. :pr:`10058` by :user:`William de Vazelhes <wdevazelhes>` and
  :user:`John Chiotellis <johny-c>`.

- |API| Methods in :class:`neighbors.NearestNeighbors`:
  :func:`~neighbors.NearestNeighbors.kneighbors`,
  :func:`~neighbors.NearestNeighbors.radius_neighbors`,
  :func:`~neighbors.NearestNeighbors.kneighbors_graph`,
  :func:`~neighbors.NearestNeighbors.radius_neighbors_graph`
  now raise ``NotFittedError``, rather than ``AttributeError``,
  when called before ``fit``. :pr:`12279` by :user:`Krishna Sangeeth
  <whiletruelearn>`.

:mod:`sklearn.neural_network`
.............................

- |Fix| Fixed a bug in :class:`neural_network.MLPClassifier` and
  :class:`neural_network.MLPRegressor` where the option :code:`shuffle=False`
  was being ignored. :pr:`12582` by :user:`Sam Waterbury <samwaterbury>`.

- |Fix| Fixed a bug in :class:`neural_network.MLPClassifier` where
  validation sets for early stopping were not sampled with stratification. In
  the multilabel case however, splits are still not stratified.
  :pr:`13164` by :user:`Nicolas Hug <NicolasHug>`.

:mod:`sklearn.pipeline`
.......................

- |Feature| :class:`pipeline.Pipeline` can now use indexing notation (e.g.
  ``my_pipeline[0:-1]``) to extract a subsequence of steps as another Pipeline
  instance. A Pipeline can also be indexed directly to extract a particular
  step (e.g. ``my_pipeline['svc']``), rather than accessing ``named_steps``.
  :pr:`2568` by `Joel Nothman`_.

- |Feature| Added
optional parameter   verbose   in  class  pipeline Pipeline      class  compose ColumnTransformer  and  class  pipeline FeatureUnion    and corresponding   make    helpers for showing progress and timing of   each step   pr  11364  by  user  Baze Petrushev  petrushev       user  Karan Desai  karandesai 96     Joel Nothman    and    user  Thomas Fan  thomasjpfan        Enhancement   class  pipeline Pipeline  now supports using    passthrough      as a transformer  with the same effect as   None       pr  11144  by  user  Thomas Fan  thomasjpfan        Enhancement   class  pipeline Pipeline   implements     len     and   therefore   len pipeline    returns the number of steps in the pipeline     pr  13439  by  user  Lakshya KD  LakshKD      mod  sklearn preprocessing                                   Feature   class  preprocessing OneHotEncoder  now supports dropping one   feature per category with a new drop parameter   pr  12908  by    user  Drew Johnston  drewmjohnston        Efficiency   class  preprocessing OneHotEncoder  and    class  preprocessing OrdinalEncoder  now handle pandas DataFrames more   efficiently   pr  13253  by  user  maikia       Efficiency  Make  class  preprocessing MultiLabelBinarizer  cache class   mappings instead of calculating it every time on the fly     pr  12116  by  user  Ekaterina Krivich  kiote   and  Joel Nothman        Efficiency   class  preprocessing PolynomialFeatures  now supports   compressed sparse row  CSR  matrices as input for degrees 2 and 3  This is   typically much faster than the dense case as it scales with matrix density   and expansion degree  on the order of density degree   and is much  much   faster than the compressed sparse column  CSC  case     pr  12197  by  user  Andrew Nystrom  awnystrom        Efficiency  Speed improvement in  class  preprocessing PolynomialFeatures     in the dense case  Also added a new parameter   order   which controls output   order for further speed performances   pr  12251  by  
Tom Dupre la Tour        Fix  Fixed the calculation overflow when using a float16 dtype with    class  preprocessing StandardScaler      pr  13007  by  user  Raffaello Baluyot  baluyotraf       Fix  Fixed a bug in  class  preprocessing QuantileTransformer  and    func  preprocessing quantile transform  to force n quantiles to be at most   equal to n samples  Values of n quantiles larger than n samples were either   useless or resulting in a wrong approximation of the cumulative distribution   function estimator   pr  13333  by  user  Albert Thomas  albertcthomas        API  The default value of  copy  in  func  preprocessing quantile transform    will change from False to True in 0 23 in order to make it more consistent   with the default  copy  values of other functions in    mod  sklearn preprocessing  and prevent unexpected side effects by modifying   the value of  X  inplace     pr  13459  by  user  Hunter McGushion  HunterMcGushion      mod  sklearn svm                         Fix  Fixed an issue in  func  svm SVC decision function  when     decision function shape  ovr     The decision function value of a given   sample was different depending on whether the decision function was evaluated   on the sample alone or on a batch containing this same sample due to the   scaling used in decision function     pr  10440  by  user  Jonathan Ohayon  Johayon      mod  sklearn tree                          Feature  Decision Trees can now be plotted with matplotlib using    tree plot tree  without relying on the   dot   library    removing a hard to install dependency   pr  8508  by  Andreas M ller        Feature  Decision Trees can now be exported in a human readable   textual format using  func  tree export text      pr  6261  by  Giuseppe Vettigli  JustGlowing        Feature    get n leaves     and   get depth     have been added to    tree BaseDecisionTree  and consequently all estimators based   on it  including  class  tree DecisionTreeClassifier      class  tree 
DecisionTreeRegressor    class  tree ExtraTreeClassifier     and  class  tree ExtraTreeRegressor      pr  12300  by  user  Adrin Jalali  adrinjalali        Fix  Trees and forests did not previously  predict  multi output   classification targets with string labels  despite accepting them in  fit      pr  11458  by  user  Mitar Milutinovic  mitar        Fix  Fixed an issue with  tree BaseDecisionTree    and consequently all estimators based   on it  including  class  tree DecisionTreeClassifier      class  tree DecisionTreeRegressor    class  tree ExtraTreeClassifier     and  class  tree ExtraTreeRegressor   where they used to exceed the given     max depth   by 1 while expanding the tree if   max leaf nodes   and     max depth   were both specified by the user  Please note that this also   affects all ensemble methods using decision trees     pr  12344  by  user  Adrin Jalali  adrinjalali      mod  sklearn utils                           Feature   func  utils resample  now accepts a   stratify   parameter for   sampling according to class distributions   pr  13549  by  user  Nicolas   Hug  NicolasHug        API  Deprecated   warn on dtype   parameter from  func  utils check array    and  func  utils check X y   Added explicit warning for dtype conversion   in  check pairwise arrays  if the   metric   being passed is a   pairwise boolean metric     pr  13382  by  user  Prathmesh Savale  praths007     Multiple modules                      MajorFeature  The    repr      method of all estimators  used when calling    print estimator    has been entirely re written  building on Python s   pretty printing standard library  All parameters are printed by default    but this can be altered with the   print changed only   option in    func  sklearn set config    pr  11705  by  user  Nicolas Hug    NicolasHug        MajorFeature  Add estimators tags  these are annotations of estimators   that allow programmatic inspection of their capabilities  such as sparse   matrix support 
 supported output types and supported methods  Estimator   tags also determine the tests that are run on an estimator when    check estimator  is called  Read more in the  ref  User Guide    estimator tags     pr  8022  by  user  Andreas M ller  amueller        Efficiency  Memory copies are avoided when casting arrays to a different   dtype in multiple estimators   pr  11973  by  user  Roman Yurchak    rth        Fix  Fixed a bug in the implementation of the  our rand r    helper function that was not behaving consistently across platforms     pr  13422  by  user  Madhura Parikh  jdnc   and    user  Cl ment Doumouro  ClemDoum      Miscellaneous                   Enhancement  Joblib is no longer vendored in scikit learn  and becomes a   dependency  Minimal supported version is joblib 0 11  however using   version    0 13 is strongly recommended     pr  13531  by  user  Roman Yurchak  rth      Changes to estimator checks                              These changes mostly affect library developers     Add   check fit idempotent   to    func   utils estimator checks check estimator   which checks that   when  fit  is called twice with the same data  the output of    predict    predict proba    transform   and  decision function  does not   change   pr  12328  by  user  Nicolas Hug  NicolasHug      Many checks can now be disabled or configured with  ref  estimator tags      pr  8022  by  user  Andreas M ller  amueller        rubric   Code and documentation contributors  Thanks to everyone who has contributed to the maintenance and improvement of the project since version 0 20  including   adanhawth  Aditya Vyas  Adrin Jalali  Agamemnon Krasoulis  Albert Thomas  Alberto Torres  Alexandre Gramfort  amourav  Andrea Navarrete  Andreas Mueller  Andrew Nystrom  assiaben  Aur lien Bellet  Bartosz Micha owski  Bartosz Telenczuk  bauks  BenjaStudio  bertrandhaut  Bharat Raghunathan  brentfagan  Bryan Woods  Cat Chenal  Cheuk Ting Ho  Chris Choe  Christos Aridas  Cl ment Doumouro  
Cole Smith  Connossor  Corey Levinson  Dan Ellis  Dan Stine  Danylo Baibak  daten kieker  Denis Kataev  Didi Bar Zev  Dillon Gardner  Dmitry Mottl  Dmitry Vukolov  Dougal J  Sutherland  Dowon  drewmjohnston  Dror Atariah  Edward J Brown  Ekaterina Krivich  Elizabeth Sander  Emmanuel Arias  Eric Chang  Eric Larson  Erich Schubert  esvhd  Falak  Feda Curic  Federico Caselli  Frank Hoang  Fibinse Xavier   Finn O Shea  Gabriel Marzinotto  Gabriel Vacaliuc  Gabriele Calvo  Gael Varoquaux  GauravAhlawat  Giuseppe Vettigli  Greg Gandenberger  Guillaume Fournier  Guillaume Lemaitre  Gustavo De Mari Pereira  Hanmin Qin  haroldfox  hhu luqi  Hunter McGushion  Ian Sanders  JackLangerman  Jacopo Notarstefano  jakirkham  James Bourbeau  Jan Koch  Jan S  janvanrijn  Jarrod Millman  jdethurens  jeremiedbb  JF  joaak  Joan Massich  Joel Nothman  Jonathan Ohayon  Joris Van den Bossche  josephsalmon  J r mie M hault  Katrin Leinweber  ken  kms15  Koen  Kossori Aruku  Krishna Sangeeth  Kuai Yu  Kulbear  Kushal Chauhan  Kyle Jackson  Lakshya KD  Leandro Hermida  Lee Yi Jie Joel  Lily Xiong  Lisa Sarah Thomas  Loic Esteve  louib  luk f a  maikia  mail liam  Manimaran  Manuel L pez Ib  ez  Marc Torrellas  Marco Gaido  Marco Gorelli  MarcoGorelli  marineLM  Mark Hannel  Martin Gubri  Masstran  mathurinm  Matthew Roeschke  Max Copeland  melsyt  mferrari3  Micka l Schoentgen  Ming Li  Mitar  Mohammad Aftab  Mohammed AbdelAal  Mohammed Ibraheem  Muhammad Hassaan Rafique  mwestt  Naoya Iijima  Nicholas Smith  Nicolas Goix  Nicolas Hug  Nikolay Shebanov  Oleksandr Pavlyk  Oliver Rausch  Olivier Grisel  Orestis  Osman  Owen Flanagan  Paul Paczuski  Pavel Soriano  pavlos kallis  Pawel Sendyk  peay  Peter  Peter Cock  Peter Hausamann  Peter Marko  Pierre Glaser  pierretallotte  Pim de Haan  Piotr Szyma ski  Prabakaran Kumaresshan  Pradeep Reddy Raamana  Prathmesh Savale  Pulkit Maloo  Quentin Batista  Radostin Stoyanov  Raf Baluyot  Rajdeep Dua  Ramil Nugmanov  Ra l Garc a Calvo  Rebekah Kim  
Reshama Shaikh  Rohan Lekhwani  Rohan Singh  Rohan Varma  Rohit Kapoor  Roman Feldbauer  Roman Yurchak  Romuald M  Roopam Sharma  Ryan  R diger Busche  Sam Waterbury  Samuel O  Ronsin  SandroCasagrande  Scott Cole  Scott Lowe  Sebastian Raschka  Shangwu Yao  Shivam Kotwalia  Shiyu Duan  smarie  Sriharsha Hatwar  Stephen Hoover  Stephen Tierney  St phane Couvreur  surgan12  SylvainLan  TakingItCasual  Tashay Green  thibsej  Thomas Fan  Thomas J Fan  Thomas Moreau  Tom Dupr  la Tour  Tommy  Tulio Casagrande  Umar Farouk Umar  Utkarsh Upadhyay  Vinayak Mehta  Vishaal Kapoor  Vivek Kumar  Vlad Niculae  vqean3  Wenhao Zhang  William de Vazelhes  xhan  Xing Han Lu  xinyuliu12  Yaroslav Halchenko  Zach Griffith  Zach Miller  Zayd Hammoudeh  Zhuyi Xue  Zijie  ZJ  Poh      "}
{"questions":"scikit-learn sklearn contributors rst releasenotes12 Version 1 2","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n.. _release_notes_1_2:\n\n===========\nVersion 1.2\n===========\n\nFor a short description of the main highlights of the release, please refer to\n:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_2_0.py`.\n\n.. include:: changelog_legend.inc\n\n.. _changes_1_2_2:\n\nVersion 1.2.2\n=============\n\n**March 2023**\n\nChangelog\n---------\n\n:mod:`sklearn.base`\n...................\n\n- |Fix| When `set_output(transform=\"pandas\")`, :class:`base.TransformerMixin` maintains\n  the index if the :term:`transform` output is already a DataFrame. :pr:`25747` by\n  `Thomas Fan`_.\n\n:mod:`sklearn.calibration`\n..........................\n\n- |Fix| A deprecation warning is raised when using the `base_estimator__` prefix to\n  set parameters of the estimator used in :class:`calibration.CalibratedClassifierCV`.\n  :pr:`25477` by :user:`Tim Head <betatim>`.\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| Fixed a bug in :class:`cluster.BisectingKMeans`, preventing `fit` to randomly\n  fail due to a permutation of the labels when running multiple inits.\n  :pr:`25563` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| Fixes a bug in :class:`compose.ColumnTransformer` which now supports\n  empty selection of columns when `set_output(transform=\"pandas\")`.\n  :pr:`25570` by `Thomas Fan`_.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Fix| A deprecation warning is raised when using the `base_estimator__` prefix\n  to set parameters of the estimator used in :class:`ensemble.AdaBoostClassifier`,\n  :class:`ensemble.AdaBoostRegressor`, :class:`ensemble.BaggingClassifier`,\n  and :class:`ensemble.BaggingRegressor`.\n  :pr:`25477` by :user:`Tim Head 
<betatim>`.\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Fix| Fixed a regression where a negative `tol` would not be accepted any more by\n  :class:`feature_selection.SequentialFeatureSelector`.\n  :pr:`25664` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |Fix| Raise a more informative error message in :func:`inspection.partial_dependence`\n  when dealing with mixed data type categories that cannot be sorted by\n  :func:`numpy.unique`. This problem usually happens when categories are `str` and\n  missing values are present using `np.nan`.\n  :pr:`25774` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.isotonic`\n.......................\n\n- |Fix| Fixes a bug in :class:`isotonic.IsotonicRegression` where\n  :meth:`isotonic.IsotonicRegression.predict` would return a pandas DataFrame\n  when the global configuration sets `transform_output=\"pandas\"`.\n  :pr:`25500` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Fix| `preprocessing.OneHotEncoder.drop_idx_` now properly\n  references the dropped category in the `categories_` attribute\n  when there are infrequent categories. :pr:`25589` by `Thomas Fan`_.\n\n- |Fix| :class:`preprocessing.OrdinalEncoder` now correctly supports\n  `encoded_missing_value` or `unknown_value` set to a categories' cardinality\n  when there are missing values in the training data. 
:pr:`25704` by `Thomas Fan`_.\n\n:mod:`sklearn.tree`\n...................\n\n- |Fix| Fixed a regression in :class:`tree.DecisionTreeClassifier`,\n  :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier` and\n  :class:`tree.ExtraTreeRegressor` where an error was no longer raised in version\n  1.2 when `min_sample_split=1`.\n  :pr:`25744` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |Fix| Fixes a bug in :func:`utils.check_array` which now correctly performs\n  non-finite validation with the Array API specification. :pr:`25619` by\n  `Thomas Fan`_.\n\n- |Fix| :func:`utils.multiclass.type_of_target` can identify pandas\n  nullable data types as classification targets. :pr:`25638` by `Thomas Fan`_.\n\n.. _changes_1_2_1:\n\nVersion 1.2.1\n=============\n\n**January 2023**\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- |Fix| The fitted components in\n  :class:`decomposition.MiniBatchDictionaryLearning` might differ. The online\n  updates of the sufficient statistics now properly take the sizes of the\n  batches into account.\n  :pr:`25354` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| The `categories_` attribute of :class:`preprocessing.OneHotEncoder` now\n  always contains an array of `object`s when using predefined categories that\n  are strings. Predefined categories encoded as bytes will no longer work\n  with `X` encoded as strings. 
:pr:`25174` by :user:`Tim Head <betatim>`.\n\nChanges impacting all modules\n-----------------------------\n\n- |Fix| Support `pandas.Int64` dtyped `y` for classifiers and regressors.\n  :pr:`25089` by :user:`Tim Head <betatim>`.\n\n- |Fix| Remove spurious warnings for estimators internally using neighbors search methods.\n  :pr:`25129` by :user:`Julien Jerphanion <jjerphan>`.\n\n- |Fix| Fix a bug where the current configuration was ignored in estimators using\n  `n_jobs > 1`. This bug was triggered for tasks dispatched by the auxiliary\n  thread of `joblib` as :func:`sklearn.get_config` used to access an empty thread\n  local configuration instead of the configuration visible from the thread where\n  `joblib.Parallel` was first called.\n  :pr:`25363` by :user:`Guillaume Lemaitre <glemaitre>`.\n\nChangelog\n---------\n\n:mod:`sklearn.base`\n...................\n\n- |Fix| Fix a regression in `BaseEstimator.__getstate__` that would prevent\n  certain estimators to be pickled when using Python 3.11. :pr:`25188` by\n  :user:`Benjamin Bossan <BenjaminBossan>`.\n\n- |Fix| Inheriting from :class:`base.TransformerMixin` will only wrap the `transform`\n  method if the class defines `transform` itself. 
:pr:`25295` by `Thomas Fan`_.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Fix| Fixes an inconsistency in :func:`datasets.fetch_openml` between liac-arff\n  and pandas parser when a leading space is introduced after the delimiter.\n  The ARFF spec requires ignoring the leading space.\n  :pr:`25312` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Fixes a bug in :func:`datasets.fetch_openml` when using `parser=\"pandas\"`\n  where single quote and backslash escape characters were not properly handled.\n  :pr:`25511` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Fixed a bug in :class:`decomposition.MiniBatchDictionaryLearning` where the\n  online updates of the sufficient statistics were not correct when calling\n  `partial_fit` on batches of different sizes.\n  :pr:`25354` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| :class:`decomposition.DictionaryLearning` better supports readonly NumPy\n  arrays. In particular, it better supports large datasets which are memory-mapped\n  when it is used with coordinate descent algorithms (i.e. when `fit_algorithm='cd'`).\n  :pr:`25172` by :user:`Julien Jerphanion <jjerphan>`.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Fix| :class:`ensemble.RandomForestClassifier`,\n  :class:`ensemble.RandomForestRegressor`, :class:`ensemble.ExtraTreesClassifier`\n  and :class:`ensemble.ExtraTreesRegressor` now support sparse readonly datasets.\n  :pr:`25341` by :user:`Julien Jerphanion <jjerphan>`\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Fix| :class:`feature_extraction.FeatureHasher` raises an informative error\n  when the input is a list of strings. 
:pr:`25094` by `Thomas Fan`_.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Fix| Fix a regression in :class:`linear_model.SGDClassifier` and\n  :class:`linear_model.SGDRegressor` that makes them unusable with the\n  `verbose` parameter set to a value greater than 0.\n  :pr:`25250` by :user:`J\u00e9r\u00e9mie Du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.manifold`\n.......................\n\n- |Fix| :class:`manifold.TSNE` now works correctly when output type is\n  set to pandas. :pr:`25370` by :user:`Tim Head <betatim>`.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Fix| :func:`model_selection.cross_validate` with multimetric scoring: when\n  some scorers fail, the non-failing scorers now return proper scores instead\n  of `error_score` values.\n  :pr:`23101` by :user:`Andr\u00e1s Simon <simonandras>` and `Thomas Fan`_.\n\n:mod:`sklearn.neural_network`\n.............................\n\n- |Fix| :class:`neural_network.MLPClassifier` and :class:`neural_network.MLPRegressor`\n  no longer raise warnings when fitting data with feature names.\n  :pr:`24873` by :user:`Tim Head <betatim>`.\n\n- |Fix| Improves error message in :class:`neural_network.MLPClassifier` and\n  :class:`neural_network.MLPRegressor`, when `early_stopping=True` and\n  `partial_fit` is called. :pr:`25694` by `Thomas Fan`_.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Fix| :meth:`preprocessing.FunctionTransformer.inverse_transform` correctly\n  supports DataFrames that are all numerical when `check_inverse=True`.\n  :pr:`25274` by `Thomas Fan`_.\n\n- |Fix| :meth:`preprocessing.SplineTransformer.get_feature_names_out` correctly\n  returns feature names when `extrapolation=\"periodic\"`. 
:pr:`25296` by\n  `Thomas Fan`_.\n\n:mod:`sklearn.tree`\n...................\n\n- |Fix| :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`,\n  :class:`tree.ExtraTreeClassifier` and :class:`tree.ExtraTreeRegressor`\n  now support sparse readonly datasets.\n  :pr:`25341` by :user:`Julien Jerphanion <jjerphan>`\n\n:mod:`sklearn.utils`\n....................\n\n- |Fix| Restore :func:`utils.check_array`'s behaviour for pandas Series of type\n  boolean. The type is maintained, instead of converting to `float64`.\n  :pr:`25147` by :user:`Tim Head <betatim>`.\n\n- |API| `utils.fixes.delayed` is deprecated in 1.2.1 and will be removed\n  in 1.5. Instead, import :func:`utils.parallel.delayed` and use it in\n  conjunction with the newly introduced :func:`utils.parallel.Parallel`\n  to ensure proper propagation of the scikit-learn configuration to\n  the workers.\n  :pr:`25363` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n.. _changes_1_2:\n\nVersion 1.2.0\n=============\n\n**December 2022**\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- |Enhancement| The default `eigen_tol` for :class:`cluster.SpectralClustering`,\n  :class:`manifold.SpectralEmbedding`, :func:`cluster.spectral_clustering`,\n  and :func:`manifold.spectral_embedding` is now `None` when using the `'amg'`\n  or `'lobpcg'` solvers. This change improves numerical stability of the\n  solver, but may result in a different model.\n\n- |Enhancement| :class:`linear_model.GammaRegressor`,\n  :class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor`\n  can reach higher precision with the lbfgs solver, in particular when `tol` is set\n  to a tiny value. 
Moreover, `verbose` is now properly propagated to L-BFGS-B.\n  :pr:`23619` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Enhancement| The default value for `eps` :func:`metrics.log_loss` has changed\n  from `1e-15` to `\"auto\"`. `\"auto\"` sets `eps` to `np.finfo(y_pred.dtype).eps`.\n  :pr:`24354` by :user:`Safiuddin Khaja <Safikh>` and :user:`gsiisg <gsiisg>`.\n\n- |Fix| Make sign of `components_` deterministic in :class:`decomposition.SparsePCA`.\n  :pr:`23935` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| The `components_` signs in :class:`decomposition.FastICA` might differ.\n  It is now consistent and deterministic with all SVD solvers.\n  :pr:`22527` by :user:`Meekail Zain <micky774>` and `Thomas Fan`_.\n\n- |Fix| The condition for early stopping has now been changed in\n  `linear_model._sgd_fast._plain_sgd` which is used by\n  :class:`linear_model.SGDRegressor` and :class:`linear_model.SGDClassifier`. The old\n  condition did not disambiguate between\n  training and validation set and had an effect of overscaling the error tolerance.\n  This has been fixed in :pr:`23798` by :user:`Harsh Agrawal <Harsh14901>`.\n\n- |Fix| For :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` ranks corresponding to nan\n  scores will all be set to the maximum possible rank.\n  :pr:`24543` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |API| The default value of `tol` was changed from `1e-3` to `1e-4` for\n  :func:`linear_model.ridge_regression`, :class:`linear_model.Ridge` and\n  :class:`linear_model.RidgeClassifier`.\n  :pr:`24465` by :user:`Christian Lorentzen <lorentzenchr>`.\n\nChanges impacting all modules\n-----------------------------\n\n- |MajorFeature| The `set_output` API has been adopted by all transformers.\n  Meta-estimators that contain transformers such as :class:`pipeline.Pipeline`\n  or :class:`compose.ColumnTransformer` also define a `set_output`.\n  For details, see\n  `SLEP018 
<https:\/\/scikit-learn-enhancement-proposals.readthedocs.io\/en\/latest\/slep018\/proposal.html>`__.\n  :pr:`23734` and :pr:`24699` by `Thomas Fan`_.\n\n- |Efficiency| Low-level routines for reductions on pairwise distances\n  for dense float32 datasets have been refactored. The following functions\n  and estimators now benefit from improved performances in terms of hardware\n  scalability and speed-ups:\n\n  - :func:`sklearn.metrics.pairwise_distances_argmin`\n  - :func:`sklearn.metrics.pairwise_distances_argmin_min`\n  - :class:`sklearn.cluster.AffinityPropagation`\n  - :class:`sklearn.cluster.Birch`\n  - :class:`sklearn.cluster.MeanShift`\n  - :class:`sklearn.cluster.OPTICS`\n  - :class:`sklearn.cluster.SpectralClustering`\n  - :func:`sklearn.feature_selection.mutual_info_regression`\n  - :class:`sklearn.neighbors.KNeighborsClassifier`\n  - :class:`sklearn.neighbors.KNeighborsRegressor`\n  - :class:`sklearn.neighbors.RadiusNeighborsClassifier`\n  - :class:`sklearn.neighbors.RadiusNeighborsRegressor`\n  - :class:`sklearn.neighbors.LocalOutlierFactor`\n  - :class:`sklearn.neighbors.NearestNeighbors`\n  - :class:`sklearn.manifold.Isomap`\n  - :class:`sklearn.manifold.LocallyLinearEmbedding`\n  - :class:`sklearn.manifold.TSNE`\n  - :func:`sklearn.manifold.trustworthiness`\n  - :class:`sklearn.semi_supervised.LabelPropagation`\n  - :class:`sklearn.semi_supervised.LabelSpreading`\n\n  For instance :meth:`sklearn.neighbors.NearestNeighbors.kneighbors` and\n  :meth:`sklearn.neighbors.NearestNeighbors.radius_neighbors`\n  can respectively be up to \u00d720 and \u00d75 faster than previously on a laptop.\n\n  Moreover, implementations of those two algorithms are now suitable\n  for machine with many cores, making them usable for datasets consisting\n  of millions of samples.\n\n  :pr:`23865` by :user:`Julien Jerphanion <jjerphan>`.\n\n- |Enhancement| Finiteness checks (detection of NaN and infinite values) in all\n  estimators are now significantly more efficient for 
float32 data by leveraging\n  NumPy's SIMD optimized primitives.\n  :pr:`23446` by :user:`Meekail Zain <micky774>`\n\n- |Enhancement| Finiteness checks (detection of NaN and infinite values) in all\n  estimators are now faster by utilizing a more efficient stop-on-first\n  second-pass algorithm.\n  :pr:`23197` by :user:`Meekail Zain <micky774>`\n\n- |Enhancement| Support for combinations of dense and sparse datasets pairs\n  for all distance metrics and for float32 and float64 datasets has been added\n  or has seen its performance improved for the following estimators:\n\n  - :func:`sklearn.metrics.pairwise_distances_argmin`\n  - :func:`sklearn.metrics.pairwise_distances_argmin_min`\n  - :class:`sklearn.cluster.AffinityPropagation`\n  - :class:`sklearn.cluster.Birch`\n  - :class:`sklearn.cluster.SpectralClustering`\n  - :class:`sklearn.neighbors.KNeighborsClassifier`\n  - :class:`sklearn.neighbors.KNeighborsRegressor`\n  - :class:`sklearn.neighbors.RadiusNeighborsClassifier`\n  - :class:`sklearn.neighbors.RadiusNeighborsRegressor`\n  - :class:`sklearn.neighbors.LocalOutlierFactor`\n  - :class:`sklearn.neighbors.NearestNeighbors`\n  - :class:`sklearn.manifold.Isomap`\n  - :class:`sklearn.manifold.TSNE`\n  - :func:`sklearn.manifold.trustworthiness`\n\n  :pr:`23604` and :pr:`23585` by :user:`Julien Jerphanion <jjerphan>`,\n  :user:`Olivier Grisel <ogrisel>`, and `Thomas Fan`_,\n  :pr:`24556` by :user:`Vincent Maladi\u00e8re <Vincent-Maladiere>`.\n\n- |Fix| Systematically check the sha256 digest of dataset tarballs used in code\n  examples in the documentation.\n  :pr:`24617` by :user:`Olivier Grisel <ogrisel>` and `Thomas Fan`_. 
Thanks to\n  `Sim4n6 <https:\/\/huntr.dev\/users\/sim4n6>`_ for the report.\n\nChangelog\n---------\n\n..\n    Entries should be grouped by module (in alphabetic order) and prefixed with\n    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,\n    |Fix| or |API| (see whats_new.rst for descriptions).\n    Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).\n    Changes not specific to a module should be listed under *Multiple Modules*\n    or *Miscellaneous*.\n    Entries should end with:\n    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.\n    where 123456 is the *pull request* number, not the issue number.\n\n:mod:`sklearn.base`\n...................\n\n- |Enhancement| Introduces :class:`base.ClassNamePrefixFeaturesOutMixin` and\n  :class:`base.OneToOneFeatureMixin` mixins that define\n  :term:`get_feature_names_out` for common transformer use cases.\n  :pr:`24688` by `Thomas Fan`_.\n\n:mod:`sklearn.calibration`\n..........................\n\n- |API| Rename `base_estimator` to `estimator` in\n  :class:`calibration.CalibratedClassifierCV` to improve readability and consistency.\n  The parameter `base_estimator` is deprecated and will be removed in 1.4.\n  :pr:`22054` by :user:`Kevin Roice <kevroi>`.\n\n:mod:`sklearn.cluster`\n......................\n\n- |Efficiency| :class:`cluster.KMeans` with `algorithm=\"lloyd\"` is now faster\n  and uses less memory. :pr:`24264` by\n  :user:`Vincent Maladiere <Vincent-Maladiere>`.\n\n- |Enhancement| The `predict` and `fit_predict` methods of :class:`cluster.OPTICS` now\n  accept sparse data type for input data. :pr:`14736` by :user:`Hunt Zhan <huntzhan>`,\n  :pr:`20802` by :user:`Brandon Pokorny <Clickedbigfoot>`,\n  and :pr:`22965` by :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| :class:`cluster.Birch` now preserves dtype for `numpy.float32`\n  inputs. 
:pr:`22968` by :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| :class:`cluster.KMeans` and :class:`cluster.MiniBatchKMeans`\n  now accept a new `'auto'` option for `n_init` which changes the number of\n  random initializations to one when using `init='k-means++'` for efficiency.\n  This begins deprecation for the default values of `n_init` in the two classes\n  and both will have their defaults changed to `n_init='auto'` in 1.4.\n  :pr:`23038` by :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| :class:`cluster.SpectralClustering` and\n  :func:`cluster.spectral_clustering` now propagate the `eigen_tol` parameter\n  to all choices of `eigen_solver`. Includes a new option `eigen_tol=\"auto\"`\n  and begins deprecation to change the default from `eigen_tol=0` to\n  `eigen_tol=\"auto\"` in version 1.3.\n  :pr:`23210` by :user:`Meekail Zain <micky774>`.\n\n- |Fix| :class:`cluster.KMeans` now supports readonly attributes when predicting.\n  :pr:`24258` by `Thomas Fan`_\n\n- |API| The `affinity` attribute is now deprecated for\n  :class:`cluster.AgglomerativeClustering` and will be renamed to `metric` in v1.4.\n  :pr:`23470` by :user:`Meekail Zain <micky774>`.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Enhancement| Introduce the new parameter `parser` in\n  :func:`datasets.fetch_openml`. `parser=\"pandas\"` allows using the very CPU\n  and memory efficient `pandas.read_csv` parser to load dense ARFF\n  formatted dataset files. 
It is possible to pass `parser=\"liac-arff\"`\n  to use the old LIAC parser.\n  When `parser=\"auto\"`, dense datasets are loaded with \"pandas\" and sparse\n  datasets are loaded with \"liac-arff\".\n  Currently, `parser=\"liac-arff\"` by default and will change to `parser=\"auto\"`\n  in version 1.4\n  :pr:`21938` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Enhancement| :func:`datasets.dump_svmlight_file` is now accelerated with a\n  Cython implementation, providing 2-4x speedups.\n  :pr:`23127` by :user:`Meekail Zain <micky774>`\n\n- |Enhancement| Path-like objects, such as those created with pathlib are now\n  allowed as paths in :func:`datasets.load_svmlight_file` and\n  :func:`datasets.load_svmlight_files`.\n  :pr:`19075` by :user:`Carlos Ramos Carre\u00f1o <vnmabus>`.\n\n- |Fix| Make sure that :func:`datasets.fetch_lfw_people` and\n  :func:`datasets.fetch_lfw_pairs` internally crops images based on the\n  `slice_` parameter.\n  :pr:`24951` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Efficiency| :func:`decomposition.FastICA.fit` has been optimised w.r.t\n  its memory footprint and runtime.\n  :pr:`22268` by :user:`MohamedBsh <Bsh>`.\n\n- |Enhancement| :class:`decomposition.SparsePCA` and\n  :class:`decomposition.MiniBatchSparsePCA` now implements an `inverse_transform`\n  function.\n  :pr:`23905` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Enhancement| :class:`decomposition.FastICA` now allows the user to select\n  how whitening is performed through the new `whiten_solver` parameter, which\n  supports `svd` and `eigh`. 
`whiten_solver` defaults to `svd` although `eigh`\n  may be faster and more memory efficient in cases where\n  `num_features > num_samples`.\n  :pr:`11860` by :user:`Pierre Ablin <pierreablin>`,\n  :pr:`22527` by :user:`Meekail Zain <micky774>` and `Thomas Fan`_.\n\n- |Enhancement| :class:`decomposition.LatentDirichletAllocation` now preserves dtype\n  for `numpy.float32` input. :pr:`24528` by :user:`Takeshi Oura <takoika>` and\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Make sign of `components_` deterministic in :class:`decomposition.SparsePCA`.\n  :pr:`23935` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |API| The `n_iter` parameter of :class:`decomposition.MiniBatchSparsePCA` is\n  deprecated and replaced by the parameters `max_iter`, `tol`, and\n  `max_no_improvement` to be consistent with\n  :class:`decomposition.MiniBatchDictionaryLearning`. `n_iter` will be removed\n  in version 1.3. :pr:`23726` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |API| The `n_features_` attribute of\n  :class:`decomposition.PCA` is deprecated in favor of\n  `n_features_in_` and will be removed in 1.4. :pr:`24421` by\n  :user:`Kshitij Mathur <Kshitij68>`.\n\n:mod:`sklearn.discriminant_analysis`\n....................................\n\n- |MajorFeature| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now\n  supports the `Array API <https:\/\/data-apis.org\/array-api\/latest\/>`_ for\n  `solver=\"svd\"`. Array API support is considered experimental and might evolve\n  without being subjected to our usual rolling deprecation cycle policy. See\n  :ref:`array_api` for more details. 
:pr:`22554` by `Thomas Fan`_.\n\n- |Fix| Validate parameters only in `fit` and not in `__init__`\n  for :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`.\n  :pr:`24218` by :user:`Stefanie Molin <stefmolin>`.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor` now support\n  interaction constraints via the argument `interaction_cst` of their\n  constructors.\n  :pr:`21020` by :user:`Christian Lorentzen <lorentzenchr>`.\n  Using interaction constraints also makes fitting faster.\n  :pr:`24856` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Feature| Adds `class_weight` to :class:`ensemble.HistGradientBoostingClassifier`.\n  :pr:`22014` by `Thomas Fan`_.\n\n- |Efficiency| Improve runtime performance of :class:`ensemble.IsolationForest`\n  by avoiding data copies. :pr:`23252` by :user:`Zhehao Liu <MaxwellLZH>`.\n\n- |Enhancement| :class:`ensemble.StackingClassifier` now accepts any kind of\n  base estimator.\n  :pr:`24538` by :user:`Guillem G Subies <GuillemGSubies>`.\n\n- |Enhancement| Make it possible to pass the `categorical_features` parameter\n  of :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor` as feature names.\n  :pr:`24889` by :user:`Olivier Grisel <ogrisel>`.\n\n- |Enhancement| :class:`ensemble.StackingClassifier` now supports\n  multilabel-indicator targets.\n  :pr:`24146` by :user:`Nicolas Peretti <nicoperetti>`,\n  :user:`Nestor Navarro <nestornav>`, :user:`Nati Tomattis <natitomattis>`,\n  and :user:`Vincent Maladiere <Vincent-Maladiere>`.\n\n- |Enhancement| :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor` now accept their\n  `monotonic_cst` parameter to be passed as a dictionary in addition\n  to the previously supported array-like format.\n  Such a dictionary has feature names as keys and one of `-1`, `0`, 
`1`\n  as value to specify monotonicity constraints for each feature.\n  :pr:`24855` by :user:`Olivier Grisel <ogrisel>`.\n\n- |Enhancement| Interaction constraints for\n  :class:`ensemble.HistGradientBoostingClassifier`\n  and :class:`ensemble.HistGradientBoostingRegressor` can now be specified\n  as strings for two common cases: \"no_interactions\" and \"pairwise\" interactions.\n  :pr:`24849` by :user:`Tim Head <betatim>`.\n\n- |Fix| Fixed the issue where :class:`ensemble.AdaBoostClassifier` outputs\n  NaN in feature importance when fitted with very small sample weight.\n  :pr:`20415` by :user:`Zhehao Liu <MaxwellLZH>`.\n\n- |Fix| :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor` no longer error when predicting\n  on categories encoded as negative values and instead consider them a member\n  of the \"missing category\". :pr:`24283` by `Thomas Fan`_.\n\n- |Fix| :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor`, with `verbose>=1`, print detailed\n  timing information on computing histograms and finding best splits. 
The time spent in\n  the root node was previously missing and is now included in the printed information.\n  :pr:`24894` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |API| Rename the constructor parameter `base_estimator` to `estimator` in\n  the following classes:\n  :class:`ensemble.BaggingClassifier`,\n  :class:`ensemble.BaggingRegressor`,\n  :class:`ensemble.AdaBoostClassifier`,\n  :class:`ensemble.AdaBoostRegressor`.\n  `base_estimator` is deprecated in 1.2 and will be removed in 1.4.\n  :pr:`23819` by :user:`Adrian Trujillo <trujillo9616>` and\n  :user:`Edoardo Abati <EdAbati>`.\n\n- |API| Rename the fitted attribute `base_estimator_` to `estimator_` in\n  the following classes:\n  :class:`ensemble.BaggingClassifier`,\n  :class:`ensemble.BaggingRegressor`,\n  :class:`ensemble.AdaBoostClassifier`,\n  :class:`ensemble.AdaBoostRegressor`,\n  :class:`ensemble.RandomForestClassifier`,\n  :class:`ensemble.RandomForestRegressor`,\n  :class:`ensemble.ExtraTreesClassifier`,\n  :class:`ensemble.ExtraTreesRegressor`,\n  :class:`ensemble.RandomTreesEmbedding`,\n  :class:`ensemble.IsolationForest`.\n  `base_estimator_` is deprecated in 1.2 and will be removed in 1.4.\n  :pr:`23819` by :user:`Adrian Trujillo <trujillo9616>` and\n  :user:`Edoardo Abati <EdAbati>`.\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Fix| Fix a bug in :func:`feature_selection.mutual_info_regression` and\n  :func:`feature_selection.mutual_info_classif`, where the continuous features\n  in `X` are now scaled to unit variance independently of whether the target `y` is\n  continuous or discrete.\n  :pr:`24747` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.gaussian_process`\n...............................\n\n- |Fix| Fix :class:`gaussian_process.kernels.Matern` gradient computation with\n  `nu=0.5` for PyPy (and possibly other non-CPython interpreters). 
:pr:`24245`\n  by :user:`Loïc Estève <lesteve>`.\n\n- |Fix| The `fit` method of :class:`gaussian_process.GaussianProcessRegressor`\n  will not modify the input X in case a custom kernel is used, with a `diag`\n  method that returns part of the input X. :pr:`24405`\n  by :user:`Omar Salman <OmarManzoor>`.\n\n:mod:`sklearn.impute`\n.....................\n\n- |Enhancement| Added `keep_empty_features` parameter to\n  :class:`impute.SimpleImputer`, :class:`impute.KNNImputer` and\n  :class:`impute.IterativeImputer`, preventing removal of features\n  containing only missing values when transforming.\n  :pr:`16695` by :user:`Vitor Santa Rosa <vitorsrg>`.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |MajorFeature| Extended :func:`inspection.partial_dependence` and\n  :class:`inspection.PartialDependenceDisplay` to handle categorical features.\n  :pr:`18298` by :user:`Madhura Jayaratne <madhuracj>` and\n  :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| :class:`inspection.DecisionBoundaryDisplay` now raises an error if the input\n  data is not 2-dimensional.\n  :pr:`25077` by :user:`Arturo Amor <ArturoAmorQ>`.\n\n:mod:`sklearn.kernel_approximation`\n...................................\n\n- |Enhancement| :class:`kernel_approximation.RBFSampler` now preserves\n  dtype for `numpy.float32` inputs. :pr:`24317` by :user:`Tim Head <betatim>`.\n\n- |Enhancement| :class:`kernel_approximation.SkewedChi2Sampler` now preserves\n  dtype for `numpy.float32` inputs. 
:pr:`24350` by :user:`Rahil Parikh <rprkh>`.\n\n- |Enhancement| :class:`kernel_approximation.RBFSampler` now accepts a\n  `'scale'` option for parameter `gamma`.\n  :pr:`24755` by :user:`Gleb Levitski <GLevV>`.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Enhancement| :class:`linear_model.LogisticRegression`,\n  :class:`linear_model.LogisticRegressionCV`, :class:`linear_model.GammaRegressor`,\n  :class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor` gained\n  a new solver `solver=\"newton-cholesky\"`. This is a 2nd order (Newton) optimisation\n  routine that uses a Cholesky decomposition of the Hessian matrix.\n  When `n_samples >> n_features`, the `\"newton-cholesky\"` solver has been observed to\n  converge both faster and to a higher precision solution than the `\"lbfgs\"` solver on\n  problems with one-hot encoded categorical variables with some rare categorical\n  levels.\n  :pr:`24637` and :pr:`24767` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Enhancement| :class:`linear_model.GammaRegressor`,\n  :class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor`\n  can reach higher precision with the lbfgs solver, in particular when `tol` is set\n  to a tiny value. Moreover, `verbose` is now properly propagated to L-BFGS-B.\n  :pr:`23619` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Fix| :class:`linear_model.SGDClassifier` and :class:`linear_model.SGDRegressor` will\n  raise an error when all the validation samples have zero sample weight.\n  :pr:`23275` by :user:`Zhehao Liu <MaxwellLZH>`.\n\n- |Fix| :class:`linear_model.SGDOneClassSVM` no longer performs parameter\n  validation in the constructor. All validation is now handled in `fit()` and\n  `partial_fit()`.\n  :pr:`24433` by :user:`Yogendrasingh <iofall>`, :user:`Arisa Y. 
<arisayosh>`\n  and :user:`Tim Head <betatim>`.\n\n- |Fix| Fix average loss calculation when early stopping is enabled in\n  :class:`linear_model.SGDRegressor` and :class:`linear_model.SGDClassifier`.\n  Also updated the condition for early stopping accordingly.\n  :pr:`23798` by :user:`Harsh Agrawal <Harsh14901>`.\n\n- |API| The default value for the `solver` parameter in\n  :class:`linear_model.QuantileRegressor` will change from `\"interior-point\"`\n  to `\"highs\"` in version 1.4.\n  :pr:`23637` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |API| String option `\"none\"` is deprecated for `penalty` argument\n  in :class:`linear_model.LogisticRegression`, and will be removed in version 1.4.\n  Use `None` instead. :pr:`23877` by :user:`Zhehao Liu <MaxwellLZH>`.\n\n- |API| The default value of `tol` was changed from `1e-3` to `1e-4` for\n  :func:`linear_model.ridge_regression`, :class:`linear_model.Ridge` and\n  :class:`linear_model.RidgeClassifier`.\n  :pr:`24465` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n:mod:`sklearn.manifold`\n.......................\n\n- |Feature| Adds option to use the normalized stress in :class:`manifold.MDS`. This is\n  enabled by setting the new `normalize` parameter to `True`.\n  :pr:`10168` by :user:`\u0141ukasz Borchmann <Borchmann>`,\n  :pr:`12285` by :user:`Matthias Miltenberger <mattmilten>`,\n  :pr:`13042` by :user:`Matthieu Parizy <matthieu-pa>`,\n  :pr:`18094` by :user:`Roth E Conrad <rotheconrad>` and\n  :pr:`22562` by :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| Adds `eigen_tol` parameter to\n  :class:`manifold.SpectralEmbedding`. Both :func:`manifold.spectral_embedding`\n  and :class:`manifold.SpectralEmbedding` now propagate `eigen_tol` to all\n  choices of `eigen_solver`. 
Includes a new option `eigen_tol=\"auto\"`\n  and begins deprecation to change the default from `eigen_tol=0` to\n  `eigen_tol=\"auto\"` in version 1.3.\n  :pr:`23210` by :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| :class:`manifold.Isomap` now preserves\n  dtype for `np.float32` inputs. :pr:`24714` by :user:`Rahil Parikh <rprkh>`.\n\n- |API| Added an `\"auto\"` option to the `normalized_stress` argument in\n  :class:`manifold.MDS` and :func:`manifold.smacof`. Note that\n  `normalized_stress` is only valid for non-metric MDS, therefore the `\"auto\"`\n  option enables `normalized_stress` when `metric=False` and disables it when\n  `metric=True`. `\"auto\"` will become the default value for `normalized_stress`\n  in version 1.4.\n  :pr:`23834` by :user:`Meekail Zain <micky774>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Feature| :func:`metrics.ConfusionMatrixDisplay.from_estimator`,\n  :func:`metrics.ConfusionMatrixDisplay.from_predictions`, and\n  :meth:`metrics.ConfusionMatrixDisplay.plot` accept a `text_kw` parameter which is\n  passed to matplotlib's `text` function. :pr:`24051` by `Thomas Fan`_.\n\n- |Feature| :func:`metrics.class_likelihood_ratios` is added to compute the positive and\n  negative likelihood ratios derived from the confusion matrix\n  of a binary classification problem. :pr:`22518` by\n  :user:`Arturo Amor <ArturoAmorQ>`.\n\n- |Feature| Add :class:`metrics.PredictionErrorDisplay` to plot residuals vs\n  predicted and actual vs predicted to qualitatively assess the behavior of a\n  regressor. The display can be created with the class methods\n  :func:`metrics.PredictionErrorDisplay.from_estimator` and\n  :func:`metrics.PredictionErrorDisplay.from_predictions`. 
:pr:`18020` by\n  :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Feature| :func:`metrics.roc_auc_score` now supports micro-averaging\n  (`average=\"micro\"`) for the One-vs-Rest multiclass case (`multi_class=\"ovr\"`).\n  :pr:`24338` by :user:`Arturo Amor <ArturoAmorQ>`.\n\n- |Enhancement| Adds an `\"auto\"` option to `eps` in :func:`metrics.log_loss`.\n  This option will automatically set the `eps` value depending on the data\n  type of `y_pred`. In addition, the default value of `eps` is changed from\n  `1e-15` to the new `\"auto\"` option.\n  :pr:`24354` by :user:`Safiuddin Khaja <Safikh>` and :user:`gsiisg <gsiisg>`.\n\n- |Fix| Allows `csr_matrix` as input for the parameter `y_true` of\n  the :func:`metrics.label_ranking_average_precision_score` metric.\n  :pr:`23442` by :user:`Sean Atukorala <ShehanAT>`.\n\n- |Fix| :func:`metrics.ndcg_score` will now trigger a warning when the `y_true`\n  value contains a negative value. Users may still use negative values, but the\n  result may not be between 0 and 1. Starting in v1.4, passing in negative\n  values for `y_true` will raise an error.\n  :pr:`22710` by :user:`Conroy Trinh <trinhcon>` and\n  :pr:`23461` by :user:`Meekail Zain <micky774>`.\n\n- |Fix| :func:`metrics.log_loss` with `eps=0` now returns a correct value of 0 or\n  `np.inf` instead of `nan` for predictions at the boundaries (0 or 1). 
It also accepts\n  integer input.\n  :pr:`24365` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |API| The parameter `sum_over_features` of\n  :func:`metrics.pairwise.manhattan_distances` is deprecated and will be removed in 1.4.\n  :pr:`24630` by :user:`Rushil Desai <rusdes>`.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Feature| Added the class :class:`model_selection.LearningCurveDisplay`\n  that allows easy plotting of learning curves obtained by\n  :func:`model_selection.learning_curve`.\n  :pr:`24084` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| For all `SearchCV` classes and scipy >= 1.10, rank corresponding to a\n  nan score is correctly set to the maximum possible rank, rather than\n  `np.iinfo(np.int32).min`. :pr:`24141` by :user:`Loïc Estève <lesteve>`.\n\n- |Fix| In both :class:`model_selection.HalvingGridSearchCV` and\n  :class:`model_selection.HalvingRandomSearchCV` parameter\n  combinations with a NaN score now share the lowest rank.\n  :pr:`24539` by :user:`Tim Head <betatim>`.\n\n- |Fix| For :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` ranks corresponding to nan\n  scores will all be set to the maximum possible rank.\n  :pr:`24543` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.multioutput`\n..........................\n\n- |Feature| Added boolean `verbose` flag to classes:\n  :class:`multioutput.ClassifierChain` and :class:`multioutput.RegressorChain`.\n  :pr:`23977` by :user:`Eric Fiegel <efiegel>`,\n  :user:`Chiara Marmo <cmarmo>`,\n  :user:`Lucy Liu <lucyleeow>`, and\n  :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.naive_bayes`\n..........................\n\n- |Feature| Add a method `predict_joint_log_proba` to all naive Bayes classifiers.\n  :pr:`23683` by :user:`Andrey Melnik <avm19>`.\n\n- |Enhancement| A new parameter `force_alpha` was added to\n  :class:`naive_bayes.BernoulliNB`, 
:class:`naive_bayes.ComplementNB`,\n  :class:`naive_bayes.CategoricalNB`, and :class:`naive_bayes.MultinomialNB`,\n  allowing the user to set the parameter `alpha` to a value greater than or equal\n  to 0, including very small numbers that were previously automatically replaced\n  by `1e-10`.\n  :pr:`16747` by :user:`arka204`,\n  :pr:`18805` by :user:`hongshaoyang`,\n  :pr:`22269` by :user:`Meekail Zain <micky774>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Feature| Adds a new function :func:`neighbors.sort_graph_by_row_values` to\n  sort a CSR sparse graph such that each row is stored with increasing values.\n  This is useful to improve efficiency when using precomputed sparse distance\n  matrices in a variety of estimators and avoid an `EfficiencyWarning`.\n  :pr:`23139` by `Tom Dupre la Tour`_.\n\n- |Efficiency| :class:`neighbors.NearestCentroid` is faster and requires\n  less memory as it better leverages CPUs' caches to compute predictions.\n  :pr:`24645` by :user:`Olivier Grisel <ogrisel>`.\n\n- |Enhancement| The :class:`neighbors.KernelDensity` bandwidth parameter can now be\n  set using Scott's and Silverman's estimation methods.\n  :pr:`10468` by :user:`Ruben <icfly2>` and :pr:`22993` by\n  :user:`Jovan Stojanovic <jovan-stojanovic>`.\n\n- |Enhancement| `neighbors.NeighborsBase` now accepts\n  Minkowski semi-metric (i.e. 
when :math:`0 < p < 1` for\n  `metric=\"minkowski\"`) for `algorithm=\"auto\"` or `algorithm=\"brute\"`.\n  :pr:`24750` by :user:`Rudresh Veerkhare <RudreshVeerkhare>`\n\n- |Fix| :class:`neighbors.NearestCentroid` now raises an informative error message at fit-time\n  instead of failing with a low-level error message at predict-time.\n  :pr:`23874` by :user:`Juan Gomez <2357juan>`.\n\n- |Fix| Set `n_jobs=None` by default (instead of `1`) for\n  :class:`neighbors.KNeighborsTransformer` and\n  :class:`neighbors.RadiusNeighborsTransformer`.\n  :pr:`24075` by :user:`Valentin Laurent <Valentin-Laurent>`.\n\n- |Enhancement| :class:`neighbors.LocalOutlierFactor` now preserves\n  dtype for `numpy.float32` inputs.\n  :pr:`22665` by :user:`Julien Jerphanion <jjerphan>`.\n\n:mod:`sklearn.neural_network`\n.............................\n\n- |Fix| :class:`neural_network.MLPClassifier` and\n  :class:`neural_network.MLPRegressor` always expose the parameters `best_loss_`,\n  `validation_scores_`, and `best_validation_score_`. `best_loss_` is set to\n  `None` when `early_stopping=True`, while `validation_scores_` and\n  `best_validation_score_` are set to `None` when `early_stopping=False`.\n  :pr:`24683` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.pipeline`\n.......................\n\n- |Enhancement| :meth:`pipeline.FeatureUnion.get_feature_names_out` can now\n  be used when one of the transformers in the :class:`pipeline.FeatureUnion` is\n  `\"passthrough\"`. 
:pr:`24058` by :user:`Diederik Perdok <diederikwp>`\n\n- |Enhancement| The :class:`pipeline.FeatureUnion` class now has a `named_transformers`\n  attribute for accessing transformers by name.\n  :pr:`20331` by :user:`Christopher Flynn <crflynn>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Enhancement| :class:`preprocessing.FunctionTransformer` will always try to set\n  `n_features_in_` and `feature_names_in_` regardless of the `validate` parameter.\n  :pr:`23993` by `Thomas Fan`_.\n\n- |Fix| :class:`preprocessing.LabelEncoder` correctly encodes NaNs in `transform`.\n  :pr:`22629` by `Thomas Fan`_.\n\n- |API| The `sparse` parameter of :class:`preprocessing.OneHotEncoder`\n  is now deprecated and will be removed in version 1.4. Use `sparse_output` instead.\n  :pr:`24412` by :user:`Rushil Desai <rusdes>`.\n\n:mod:`sklearn.svm`\n..................\n\n- |API| The `class_weight_` attribute is now deprecated for\n  :class:`svm.NuSVR`, :class:`svm.SVR`, :class:`svm.OneClassSVM`.\n  :pr:`22898` by :user:`Meekail Zain <micky774>`.\n\n:mod:`sklearn.tree`\n...................\n\n- |Enhancement| :func:`tree.plot_tree`, :func:`tree.export_graphviz` now uses\n  a lower case `x[i]` to represent feature `i`. 
:pr:`23480` by `Thomas Fan`_.\n\n:mod:`sklearn.utils`\n....................\n\n- |Feature| A new module exposes development tools to discover estimators (i.e.\n  :func:`utils.discovery.all_estimators`), displays (i.e.\n  :func:`utils.discovery.all_displays`) and functions (i.e.\n  :func:`utils.discovery.all_functions`) in scikit-learn.\n  :pr:`21469` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Enhancement| :func:`utils.extmath.randomized_svd` now accepts an argument,\n  `lapack_svd_driver`, to specify the LAPACK driver used in the internal\n  deterministic SVD used by the randomized SVD algorithm.\n  :pr:`20617` by :user:`Srinath Kailasa <skailasa>`.\n\n- |Enhancement| :func:`utils.validation.column_or_1d` now accepts a `dtype`\n  parameter to specify `y`'s dtype. :pr:`22629` by `Thomas Fan`_.\n\n- |Enhancement| `utils.extmath.cartesian` now accepts arrays with different\n  `dtype` and will cast the output to the most permissive `dtype`.\n  :pr:`25067` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| :func:`utils.multiclass.type_of_target` now properly handles sparse matrices.\n  :pr:`14862` by :user:`Léonard Binet <leonardbinet>`.\n\n- |Fix| HTML representation no longer errors when an estimator class is a value in\n  `get_params`. :pr:`24512` by `Thomas Fan`_.\n\n- |Fix| :func:`utils.estimator_checks.check_estimator` now takes into account\n  the `requires_positive_X` tag correctly. :pr:`24667` by `Thomas Fan`_.\n\n- |Fix| :func:`utils.check_array` now supports Pandas Series with `pd.NA`\n  by raising a better error message or returning a compatible `ndarray`.\n  :pr:`25080` by `Thomas Fan`_.\n\n- |API| The extra keyword parameters of :func:`utils.extmath.density` are deprecated\n  and will be removed in 1.4.\n  :pr:`24523` by :user:`Mia Bajic <clytaemnestra>`.\n\n.. 
rubric:: Code and documentation contributors\n\nThanks to everyone who has contributed to the maintenance and improvement of\nthe project since version 1.1, including:\n\n2357juan, 3lLobo, Adam J. Stewart, Adam Kania, Adam Li, Aditya Anulekh, Admir\nDemiraj, adoublet, Adrin Jalali, Ahmedbgh, Aiko, Akshita Prasanth, Ala-Na,\nAlessandro Miola, Alex, Alexandr, Alexandre Perez-Lebel, Alex Buzenet, Ali H.\nEl-Kassas, aman kumar, Amit Bera, Andr\u00e1s Simon, Andreas Grivas, Andreas\nMueller, Andrew Wang, angela-maennel, Aniket Shirsat, Anthony22-dev, Antony\nLee, anupam, Apostolos Tsetoglou, Aravindh R, Artur Hermano, Arturo Amor,\nas-90, ashah002, Ashwin Mathur, avm19, Azaria Gebremichael, b0rxington, Badr\nMOUFAD, Bardiya Ak, Bart\u0142omiej Go\u0144da, BdeGraaff, Benjamin Bossan, Benjamin\nCarter, berkecanrizai, Bernd Fritzke, Bhoomika, Biswaroop Mitra, Brandon TH\nChen, Brett Cannon, Bsh, cache-missing, carlo, Carlos Ramos Carre\u00f1o, ceh,\nchalulu, Changyao Chen, Charles Zablit, Chiara Marmo, Christian Lorentzen,\nChristian Ritter, Christian Veenhuis, christianwaldmann, Christine P. 
Chai,\nClaudio Salvatore Arcidiacono, Cl\u00e9ment Verrier, crispinlogan, Da-Lan,\nDanGonite57, Daniela Fernandes, DanielGaerber, darioka, Darren Nguyen,\ndavidblnc, david-cortes, David Gilbertson, David Poznik, Dayne, Dea Mar\u00eda\nL\u00e9on, Denis, Dev Khant, Dhanshree Arora, Diadochokinetic, diederikwp, Dimitri\nPapadopoulos Orfanos, Dimitris Litsidis, drewhogg, Duarte OC, Dwight Lindquist,\nEden Brekke, Edern, Edoardo Abati, Eleanore Denies, EliaSchiavon, Emir,\nErmolaevPA, Fabrizio Damicelli, fcharras, Felipe Siola, Flynn,\nfrancesco-tuveri, Franck Charras, ftorres16, Gael Varoquaux, Geevarghese\nGeorge, genvalen, GeorgiaMayDay, Gianr Lazz, Gleb Levitski, Gl\u00f2ria Maci\u00e0\nMu\u00f1oz, Guillaume Lemaitre, Guillem Garc\u00eda Subies, Guitared, gunesbayir,\nHaesun Park, Hansin Ahuja, Hao Chun Chang, Harsh Agrawal, harshit5674,\nhasan-yaman, henrymooresc, Henry Sorsky, Hristo Vrigazov, htsedebenham, humahn,\ni-aki-y, Ian Thompson, Ido M, Iglesys, Iliya Zhechev, Irene, ivanllt, Ivan\nSedykh, Jack McIvor, jakirkham, JanFidor, Jason G, J\u00e9r\u00e9mie du Boisberranger,\nJiten Sidhpura, jkarolczak, Jo\u00e3o David, JohnathanPi, John Koumentis, John P,\nJohn Pangas, johnthagen, Jordan Fleming, Joshua Choo Yun Keat, Jovan\nStojanovic, Juan Carlos Alfaro Jim\u00e9nez, juanfe88, Juan Felipe Arias,\nJuliaSchoepp, Julien Jerphanion, jygerardy, ka00ri, Kanishk Sachdev, Kanissh,\nKaushik Amar Das, Kendall, Kenneth Prabakaran, Kento Nozawa, kernc, Kevin\nRoice, Kian Eliasi, Kilian Kluge, Kilian Lieret, Kirandevraj, Kraig, krishna\nkumar, krishna vamsi, Kshitij Kapadni, Kshitij Mathur, Lauren Burke, L\u00e9onard\nBinet, lingyi1110, Lisa Casino, Logan Thomas, Loic Esteve, Luciano Mantovani,\nLucy Liu, Maascha, Madhura Jayaratne, madinak, Maksym, Malte S. 
Kurz, Mansi\nAgrawal, Marco Edward Gorelli, Marco Wurps, Maren Westermann, Maria Telenczuk,\nMario Kostelac, martin-kokos, Marvin Krawutschke, Masanori Kanazu, mathurinm,\nMatt Haberland, mauroantonioserrano, Max Halford, Maxi Marufo, maximeSaur,\nMaxim Smolskiy, Maxwell, m. bou, Meekail Zain, Mehgarg, mehmetcanakbay, Mia\nBaji\u0107, Michael Flaks, Michael Hornstein, Michel de Ruiter, Michelle Paradis,\nMikhail Iljin, Misa Ogura, Moritz Wilksch, mrastgoo, Naipawat Poolsawat, Naoise\nHolohan, Nass, Nathan Jacobi, Nawazish Alam, Nguy\u1ec5n V\u0103n Di\u1ec5n, Nicola\nFanelli, Nihal Thukarama Rao, Nikita Jare, nima10khodaveisi, Nima Sarajpoor,\nnitinramvelraj, NNLNR, npache, Nwanna-Joseph, Nymark Kho, o-holman, Olivier\nGrisel, Olle Lukowski, Omar Hassoun, Omar Salman, osman tamer, ouss1508,\nOyindamola Olatunji, PAB, Pandata, partev, Paulo Sergio  Soares, Petar\nMlinari\u0107, Peter Jansson, Peter Steinbach, Philipp Jung, Piet Br\u00f6mmel, Pooja\nM, Pooja Subramaniam, priyam kakati, puhuk, Rachel Freeland, Rachit Keerti Das,\nRafal Wojdyla, Raghuveer Bhat, Rahil Parikh, Ralf Gommers, ram vikram singh,\nRavi Makhija, Rehan Guha, Reshama Shaikh, Richard Klima, Rob Crockett, Robert\nHommes, Robert Juergens, Robin Lenz, Rocco Meli, Roman4oo, Ross Barnowski,\nRowan Mankoo, Rudresh Veerkhare, Rushil Desai, Sabri Monaf Sabri, Safikh,\nSafiuddin Khaja, Salahuddin, Sam Adam Day, Sandra Yojana Meneses, Sandro\nEphrem, Sangam, SangamSwadik, SANJAI_3, SarahRemus, Sashka Warner, SavkoMax,\nScott Gigante, Scott Gustafson, Sean Atukorala, sec65, SELEE, seljaks, Shady el\nGewily, Shane, shellyfung, Shinsuke Mori, Shiva chauhan, Shoaib Khan, Shogo\nHida, Shrankhla Srivastava, Shuangchi He, Simon, sonnivs, Sortofamudkip,\nSrinath Kailasa, Stanislav (Stanley) Modrak, Stefanie Molin, stellalin7,\nSt\u00e9phane Collot, Steven Van Vaerenbergh, Steve Schmerler, Sven Stehle, Tabea\nKossen, TheDevPanda, the-syd-sre, Thijs van Weezel, Thomas Bonald, Thomas\nGermer, Thomas J. 
Fan, Ti-Ion, Tim Head, Timofei Kornev, toastedyeast, Tobias\nPitters, Tom Dupr\u00e9 la Tour, tomiock, Tom Mathews, Tom McTiernan, tspeng, Tyler\nEgashira, Valentin Laurent, Varun Jain, Vera Komeyer, Vicente Reyes-Puerta,\nVinayak Mehta, Vincent M, Vishal, Vyom Pathak, wattai, wchathura, WEN Hao,\nWilliam M, x110, Xiao Yuan, Xunius, yanhong-zhao-ef, Yusuf Raji, Z Adil Khwaja,\nzeeshan lone","site":"scikit-learn","answers_cleaned":"   include    contributors rst     currentmodule   sklearn      release notes 1 2               Version 1 2              For a short description of the main highlights of the release  please refer to  ref  sphx glr auto examples release highlights plot release highlights 1 2 0 py       include   changelog legend inc      changes 1 2 2   Version 1 2 2                  March 2023    Changelog             mod  sklearn base                          Fix  When  set output transform  pandas      class  base TransformerMixin  maintains   the index if the  term  transform  output is already a DataFrame   pr  25747  by    Thomas Fan      mod  sklearn calibration                                 Fix  A deprecation warning is raised when using the  base estimator    prefix to   set parameters of the estimator used in  class  calibration CalibratedClassifierCV      pr  25477  by  user  Tim Head  betatim      mod  sklearn cluster                             Fix  Fixed a bug in  class  cluster BisectingKMeans   preventing  fit  to randomly   fail due to a permutation of the labels when running multiple inits     pr  25563  by  user  J r mie du Boisberranger  jeremiedbb      mod  sklearn compose                             Fix  Fixes a bug in  class  compose ColumnTransformer  which now supports   empty selection of columns when  set output transform  pandas        pr  25570  by  Thomas Fan      mod  sklearn ensemble                              Fix  A deprecation warning is raised when using the  base estimator    prefix   to set parameters of the 
estimator used in  class  ensemble AdaBoostClassifier      class  ensemble AdaBoostRegressor    class  ensemble BaggingClassifier     and  class  ensemble BaggingRegressor      pr  25477  by  user  Tim Head  betatim      mod  sklearn feature selection                                       Fix  Fixed a regression where a negative  tol  would not be accepted any more by    class  feature selection SequentialFeatureSelector      pr  25664  by  user  J r mie du Boisberranger  jeremiedbb      mod  sklearn inspection                                Fix  Raise a more informative error message in  func  inspection partial dependence    when dealing with mixed data type categories that cannot be sorted by    func  numpy unique   This problem usually happen when categories are  str  and   missing values are present using  np nan      pr  25774  by  user  Guillaume Lemaitre  glemaitre      mod  sklearn isotonic                              Fix  Fixes a bug in  class  isotonic IsotonicRegression  where    meth  isotonic IsotonicRegression predict  would return a pandas DataFrame   when the global configuration sets  transform output  pandas       pr  25500  by  user  Guillaume Lemaitre  glemaitre      mod  sklearn preprocessing                                   Fix   preprocessing OneHotEncoder drop idx   now properly   references the dropped category in the  categories   attribute   when there are infrequent categories   pr  25589  by  Thomas Fan        Fix   class  preprocessing OrdinalEncoder  now correctly supports    encoded missing value  or  unknown value  set to a categories  cardinality   when there is missing values in the training data   pr  25704  by  Thomas Fan      mod  sklearn tree                          Fix  Fixed a regression in  class  tree DecisionTreeClassifier      class  tree DecisionTreeRegressor    class  tree ExtraTreeClassifier  and    class  tree ExtraTreeRegressor  where an error was no longer raised in version   1 2 when  min sample split 1      
pr  25744  by  user  J r mie du Boisberranger  jeremiedbb      mod  sklearn utils                           Fix  Fixes a bug in  func  utils check array  which now correctly performs   non finite validation with the Array API specification   pr  25619  by    Thomas Fan        Fix   func  utils multiclass type of target  can identify pandas   nullable data types as classification targets   pr  25638  by  Thomas Fan         changes 1 2 1   Version 1 2 1                  January 2023    Changed models                 The following estimators and functions  when fit with the same data and parameters  may produce different models from the previous version  This often occurs due to changes in the modelling logic  bug fixes or enhancements   or in random sampling procedures      Fix  The fitted components in    class  decomposition MiniBatchDictionaryLearning  might differ  The online   updates of the sufficient statistics now properly take the sizes of the   batches into account     pr  25354  by  user  J r mie du Boisberranger  jeremiedbb        Fix  The  categories   attribute of  class  preprocessing OneHotEncoder  now   always contains an array of  object s when using predefined categories that   are strings  Predefined categories encoded as bytes will no longer work   with  X  encoded as strings   pr  25174  by  user  Tim Head  betatim     Changes impacting all modules                                   Fix  Support  pandas Int64  dtyped  y  for classifiers and regressors     pr  25089  by  user  Tim Head  betatim        Fix  Remove spurious warnings for estimators internally using neighbors search methods     pr  25129  by  user  Julien Jerphanion  jjerphan        Fix  Fix a bug where the current configuration was ignored in estimators using    n jobs   1   This bug was triggered for tasks dispatched by the auxiliary   thread of  joblib  as  func  sklearn get config  used to access an empty thread   local configuration instead of the configuration visible from the 
Changelog
---------

:mod:`sklearn.base`
...................

- |Fix| Fix a regression in ``BaseEstimator.__getstate__`` that would prevent
  certain estimators to be pickled when using Python 3.11.
  :pr:`25188` by :user:`Benjamin Bossan <BenjaminBossan>`.

- |Fix| Inheriting from :class:`base.TransformerMixin` will only wrap the
  ``transform`` method if the class defines ``transform`` itself.
  :pr:`25295` by `Thomas Fan`_.

:mod:`sklearn.datasets`
.......................

- |Fix| Fixes an inconsistency in :func:`datasets.fetch_openml` between liac-arff
  and pandas parser when a leading space is introduced after the delimiter.
  The ARFF specs requires to ignore the leading space.
  :pr:`25312` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| Fixes a bug in :func:`datasets.fetch_openml` when using ``parser="pandas"``
  where single quote and backslash escape characters were not properly handled.
  :pr:`25511` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.decomposition`
............................

- |Fix| Fixed a bug in :class:`decomposition.MiniBatchDictionaryLearning` where the
  online updates of the sufficient statistics were not correct when calling
  ``partial_fit`` on batches of different sizes.
  :pr:`25354` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| :class:`decomposition.DictionaryLearning` better supports readonly NumPy
  arrays. In particular, it better supports large datasets which are memory-mapped
  when it is used with coordinate descent algorithms (i.e. when ``fit_algorithm='cd'``).
  :pr:`25172` by :user:`Julien Jerphanion <jjerphan>`.

:mod:`sklearn.ensemble`
.......................

- |Fix| :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`, :class:`ensemble.ExtraTreesClassifier`
  and :class:`ensemble.ExtraTreesRegressor` now support sparse readonly datasets.
  :pr:`25341` by :user:`Julien Jerphanion <jjerphan>`.

:mod:`sklearn.feature_extraction`
.................................

- |Fix| :class:`feature_extraction.FeatureHasher` raises an informative error
  when the input is a list of strings. :pr:`25094` by `Thomas Fan`_.

:mod:`sklearn.linear_model`
...........................

- |Fix| Fix a regression in :class:`linear_model.SGDClassifier` and
  :class:`linear_model.SGDRegressor` that makes them unusable with the
  ``verbose`` parameter set to a value greater than 0.
  :pr:`25250` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.manifold`
.......................

- |Fix| :class:`manifold.TSNE` now works correctly when output type is
  set to pandas. :pr:`25370` by :user:`Tim Head <betatim>`.

:mod:`sklearn.model_selection`
..............................

- |Fix| :func:`model_selection.cross_validate` with multimetric scoring in
  case of some failing scorers the non-failing scorers now returns proper
  scores instead of ``error_score`` values.
  :pr:`23101` by :user:`András Simon <simonandras>` and `Thomas Fan`_.

:mod:`sklearn.neural_network`
.............................

- |Fix| :class:`neural_network.MLPClassifier` and :class:`neural_network.MLPRegressor`
  no longer raise warnings when fitting data with feature names.
  :pr:`24873` by :user:`Tim Head <betatim>`.

- |Fix| Improves error message in :class:`neural_network.MLPClassifier` and
  :class:`neural_network.MLPRegressor`, when ``early_stopping=True`` and
  ``partial_fit`` is called. :pr:`25694` by `Thomas Fan`_.

:mod:`sklearn.preprocessing`
............................

- |Fix| :meth:`preprocessing.FunctionTransformer.inverse_transform` correctly
  supports DataFrames that are all numerical when ``check_inverse=True``.
  :pr:`25274` by `Thomas Fan`_.

- |Fix| :meth:`preprocessing.SplineTransformer.get_feature_names_out` correctly
  returns feature names when ``extrapolation="periodic"``.
  :pr:`25296` by `Thomas Fan`_.

:mod:`sklearn.tree`
...................

- |Fix| :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`,
  :class:`tree.ExtraTreeClassifier` and :class:`tree.ExtraTreeRegressor`
  now support sparse readonly datasets.
  :pr:`25341` by :user:`Julien Jerphanion <jjerphan>`.
:mod:`sklearn.utils`
....................

- |Fix| Restore :func:`utils.check_array`'s behaviour for pandas Series of type
  boolean. The type is maintained, instead of converting to ``float64``.
  :pr:`25147` by :user:`Tim Head <betatim>`.

- |API| ``utils.fixes.delayed`` is deprecated in 1.2.1 and will be removed
  in 1.5. Instead, import :func:`utils.parallel.delayed` and use it in
  conjunction with the newly introduced :func:`utils.parallel.Parallel`
  to ensure proper propagation of the scikit-learn configuration to
  the workers. :pr:`25363` by :user:`Guillaume Lemaitre <glemaitre>`.

.. _changes_1_2:

Version 1.2.0
=============

**December 2022**

Changed models
--------------

The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.

- |Enhancement| The default ``eigen_tol`` for :class:`cluster.SpectralClustering`,
  :class:`manifold.SpectralEmbedding`, :func:`cluster.spectral_clustering`,
  and :func:`manifold.spectral_embedding` is now ``None`` when using the ``'amg'``
  or ``'lobpcg'`` solvers. This change improves numerical stability of the
  solver, but may result in a different model.

- |Enhancement| :class:`linear_model.GammaRegressor`,
  :class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor`
  can reach higher precision with the lbfgs solver, in particular when ``tol`` is set
  to a tiny value. Moreover, ``verbose`` is now properly propagated to L-BFGS-B.
  :pr:`23619` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Enhancement| The default value for ``eps`` :func:`metrics.log_loss` has changed
  from ``1e-15`` to ``"auto"``. ``"auto"`` sets ``eps`` to ``np.finfo(y_pred.dtype).eps``.
  :pr:`24354` by :user:`Safiuddin Khaja <Safikh>` and :user:`gsiisg <gsiisg>`.

- |Fix| Make sign of ``components_`` deterministic in :class:`decomposition.SparsePCA`.
  :pr:`23935` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| The ``components_`` signs in :class:`decomposition.FastICA` might differ.
  It is now consistent and deterministic with all SVD solvers.
  :pr:`22527` by :user:`Meekail Zain <micky774>` and `Thomas Fan`_.

- |Fix| The condition for early stopping has now been changed in
  ``linear_model._sgd_fast._plain_sgd`` which is used by
  :class:`linear_model.SGDRegressor` and :class:`linear_model.SGDClassifier`. The
  old condition did not disambiguate between training and validation set and had
  an effect of overscaling the error tolerance.
  This has been fixed in :pr:`23798` by :user:`Harsh Agrawal <Harsh14901>`.

- |Fix| For :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` ranks corresponding to nan
  scores will all be set to the maximum possible rank.
  :pr:`24543` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| The default value of ``tol`` was changed from ``1e-3`` to ``1e-4`` for
  :func:`linear_model.ridge_regression`, :class:`linear_model.Ridge` and
  :class:`linear_model.RidgeClassifier`.
  :pr:`24465` by :user:`Christian Lorentzen <lorentzenchr>`.

Changes impacting all modules
-----------------------------

- |MajorFeature| The ``set_output`` API has been adopted by all transformers.
  Meta-estimators that contain transformers such as :class:`pipeline.Pipeline`
  or :class:`compose.ColumnTransformer` also define a ``set_output``.
  For details, see
  `SLEP018 <https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep018/proposal.html>`__.
  :pr:`23734` and :pr:`24699` by `Thomas Fan`_.
- |Efficiency| Low-level routines for reductions on pairwise distances
  for dense float32 datasets have been refactored. The following functions
  and estimators now benefit from improved performances in terms of hardware
  scalability and speed-ups:

  - :func:`sklearn.metrics.pairwise_distances_argmin`
  - :func:`sklearn.metrics.pairwise_distances_argmin_min`
  - :class:`sklearn.cluster.AffinityPropagation`
  - :class:`sklearn.cluster.Birch`
  - :class:`sklearn.cluster.MeanShift`
  - :class:`sklearn.cluster.OPTICS`
  - :class:`sklearn.cluster.SpectralClustering`
  - :func:`sklearn.feature_selection.mutual_info_regression`
  - :class:`sklearn.neighbors.KNeighborsClassifier`
  - :class:`sklearn.neighbors.KNeighborsRegressor`
  - :class:`sklearn.neighbors.RadiusNeighborsClassifier`
  - :class:`sklearn.neighbors.RadiusNeighborsRegressor`
  - :class:`sklearn.neighbors.LocalOutlierFactor`
  - :class:`sklearn.neighbors.NearestNeighbors`
  - :class:`sklearn.manifold.Isomap`
  - :class:`sklearn.manifold.LocallyLinearEmbedding`
  - :class:`sklearn.manifold.TSNE`
  - :func:`sklearn.manifold.trustworthiness`
  - :class:`sklearn.semi_supervised.LabelPropagation`
  - :class:`sklearn.semi_supervised.LabelSpreading`

  For instance :meth:`sklearn.neighbors.NearestNeighbors.kneighbors` and
  :meth:`sklearn.neighbors.NearestNeighbors.radius_neighbors`
  can respectively be up to 20x and 5x faster than previously on a laptop.

  Moreover, implementations of those two algorithms are now suitable
  for machines with many cores, making them usable for datasets consisting
  of millions of samples.

  :pr:`23865` by :user:`Julien Jerphanion <jjerphan>`.

- |Enhancement| Finiteness checks (detection of NaN and infinite values) in all
  estimators are now significantly more efficient for float32 data by leveraging
  NumPy's SIMD optimized primitives.
  :pr:`23446` by :user:`Meekail Zain <micky774>`.

- |Enhancement| Finiteness checks (detection of NaN and infinite values) in all
  estimators are now faster by utilizing a more efficient stop-on-first
  second-pass algorithm.
  :pr:`23197` by :user:`Meekail Zain <micky774>`.

- |Enhancement| Support for combinations of dense and sparse datasets pairs
  for all distance metrics and for float32 and float64 datasets has been added
  or has seen its performance improved for the following estimators:

  - :func:`sklearn.metrics.pairwise_distances_argmin`
  - :func:`sklearn.metrics.pairwise_distances_argmin_min`
  - :class:`sklearn.cluster.AffinityPropagation`
  - :class:`sklearn.cluster.Birch`
  - :class:`sklearn.cluster.SpectralClustering`
  - :class:`sklearn.neighbors.KNeighborsClassifier`
  - :class:`sklearn.neighbors.KNeighborsRegressor`
  - :class:`sklearn.neighbors.RadiusNeighborsClassifier`
  - :class:`sklearn.neighbors.RadiusNeighborsRegressor`
  - :class:`sklearn.neighbors.LocalOutlierFactor`
  - :class:`sklearn.neighbors.NearestNeighbors`
  - :class:`sklearn.manifold.Isomap`
  - :class:`sklearn.manifold.TSNE`
  - :func:`sklearn.manifold.trustworthiness`

  :pr:`23604` and :pr:`23585` by :user:`Julien Jerphanion <jjerphan>`,
  :user:`Olivier Grisel <ogrisel>`, and `Thomas Fan`_,
  :pr:`24556` by :user:`Vincent Maladière <Vincent-Maladiere>`.

- |Fix| Systematically check the sha256 digest of dataset tarballs used in code
  examples in the documentation.
  :pr:`24617` by :user:`Olivier Grisel <ogrisel>` and `Thomas Fan`_. Thanks to
  `Sim4n6 <https://huntr.dev/users/sim4n6>`__ for the report.

Changelog
---------

..
    Entries should be grouped by module (in alphabetic order) and prefixed with
    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
    |Fix| or |API| (see whats_new.rst for descriptions).
    Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
    Changes not specific to a module should be listed under *Multiple Modules*
    or *Miscellaneous*.
    Entries should end with:
    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
    where 123456 is the *pull request* number, not the issue number.
:mod:`sklearn.base`
...................

- |Enhancement| Introduces :class:`base.ClassNamePrefixFeaturesOutMixin` and
  :class:`base.ClassNamePrefixFeaturesOutMixin` mixins that define
  :term:`get_feature_names_out` for common transformer use cases.
  :pr:`24688` by `Thomas Fan`_.

:mod:`sklearn.calibration`
..........................

- |API| Rename ``base_estimator`` to ``estimator`` in
  :class:`calibration.CalibratedClassifierCV` to improve readability and consistency.
  The parameter ``base_estimator`` is deprecated and will be removed in 1.4.
  :pr:`22054` by :user:`Kevin Roice <kevroi>`.

:mod:`sklearn.cluster`
......................

- |Efficiency| :class:`cluster.KMeans` with ``algorithm="lloyd"`` is now faster
  and uses less memory. :pr:`24264` by
  :user:`Vincent Maladière <Vincent-Maladiere>`.

- |Enhancement| The ``predict`` and ``fit_predict`` methods of
  :class:`cluster.OPTICS` now accept sparse data type for input data.
  :pr:`14736` by :user:`Hunt Zhan <huntzhan>`,
  :pr:`20802` by :user:`Brandon Pokorny <Clickedbigfoot>`,
  and :pr:`22965` by :user:`Meekail Zain <micky774>`.

- |Enhancement| :class:`cluster.Birch` now preserves dtype for ``numpy.float32``
  inputs. :pr:`22968` by :user:`Meekail Zain <micky774>`.

- |Enhancement| :class:`cluster.KMeans` and :class:`cluster.MiniBatchKMeans`
  now accept a new ``'auto'`` option for ``n_init`` which changes the number of
  random initializations to one when using ``init='k-means++'`` for efficiency.
  This begins deprecation for the default values of ``n_init`` in the two classes
  and both will have their defaults changed to ``n_init='auto'`` in 1.4.
  :pr:`23038` by :user:`Meekail Zain <micky774>`.

- |Enhancement| :class:`cluster.SpectralClustering` and
  :func:`cluster.spectral_clustering` now propagates the ``eigen_tol`` parameter
  to all choices of ``eigen_solver``. Includes a new option ``eigen_tol="auto"``
  and begins deprecation to change the default from ``eigen_tol=0`` to
  ``eigen_tol="auto"`` in version 1.3.
  :pr:`23210` by :user:`Meekail Zain <micky774>`.

- |Fix| :class:`cluster.KMeans` now supports readonly attributes when predicting.
  :pr:`24258` by `Thomas Fan`_.

- |API| The ``affinity`` attribute is now deprecated for
  :class:`cluster.AgglomerativeClustering` and will be renamed to ``metric`` in v1.4.
  :pr:`23470` by :user:`Meekail Zain <micky774>`.

:mod:`sklearn.datasets`
.......................

- |Enhancement| Introduce the new parameter ``parser`` in
  :func:`datasets.fetch_openml`. ``parser="pandas"`` allows to use the very CPU
  and memory efficient ``pandas.read_csv`` parser to load dense ARFF
  formatted dataset files. It is possible to pass ``parser="liac-arff"``
  to use the old LIAC parser.
  When ``parser="auto"``, dense datasets are loaded with ``pandas`` and sparse
  datasets are loaded with ``liac-arff``.
  Currently, ``parser="liac-arff"`` by default and will change to ``parser="auto"``
  in version 1.4. :pr:`21938` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Enhancement| :func:`datasets.dump_svmlight_file` is now accelerated with a
  Cython implementation, providing 2-4x speedups.
  :pr:`23127` by :user:`Meekail Zain <micky774>`.

- |Enhancement| Path-like objects, such as those created with pathlib, are now
  allowed as paths in :func:`datasets.load_svmlight_file` and
  :func:`datasets.load_svmlight_files`.
  :pr:`19075` by :user:`Carlos Ramos Carreño <vnmabus>`.

- |Fix| Make sure that :func:`datasets.fetch_lfw_people` and
  :func:`datasets.fetch_lfw_pairs` internally crops images based on the
  ``slice_`` parameter. :pr:`24951` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.decomposition`
............................

- |Efficiency| :func:`decomposition.FastICA.fit` has been optimised w.r.t.
  its memory footprint and runtime.
  :pr:`22268` by :user:`MohamedBsh <Bsh>`.

- |Enhancement| :class:`decomposition.SparsePCA` and
  :class:`decomposition.MiniBatchSparsePCA` now implement an ``inverse_transform``
  function. :pr:`23905` by :user:`Guillaume Lemaitre <glemaitre>`.
- |Enhancement| :class:`decomposition.FastICA` now allows the user to select
  how whitening is performed through the new ``whiten_solver`` parameter, which
  supports ``svd`` and ``eigh``. ``whiten_solver`` defaults to ``svd`` although
  ``eigh`` may be faster and more memory efficient in cases where
  ``num_features > num_samples``.
  :pr:`11860` by :user:`Pierre Ablin <pierreablin>`,
  :pr:`22527` by :user:`Meekail Zain <micky774>` and `Thomas Fan`_.

- |Enhancement| :class:`decomposition.LatentDirichletAllocation` now preserves dtype
  for ``numpy.float32`` input. :pr:`24528` by :user:`Takeshi Oura <takoika>` and
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| Make sign of ``components_`` deterministic in :class:`decomposition.SparsePCA`.
  :pr:`23935` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| The ``n_iter`` parameter of :class:`decomposition.MiniBatchSparsePCA` is
  deprecated and replaced by the parameters ``max_iter``, ``tol``, and
  ``max_no_improvement`` to be consistent with
  :class:`decomposition.MiniBatchDictionaryLearning`. ``n_iter`` will be removed
  in version 1.3. :pr:`23726` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| The ``n_features_`` attribute of
  :class:`decomposition.PCA` is deprecated in favor of
  ``n_features_in_`` and will be removed in 1.4. :pr:`24421` by
  :user:`Kshitij Mathur <Kshitij68>`.

:mod:`sklearn.discriminant_analysis`
....................................

- |MajorFeature| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now
  supports the `Array API <https://data-apis.org/array-api/latest/>`_ for
  ``solver="svd"``. Array API support is considered experimental and might evolve
  without being subjected to our usual rolling deprecation cycle policy. See
  :ref:`array_api` for more details. :pr:`22554` by `Thomas Fan`_.

- |Fix| Validate parameters only in ``fit`` and not in ``__init__``
  for :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`.
  :pr:`24218` by :user:`Stefanie Molin <stefmolin>`.

:mod:`sklearn.ensemble`
.......................

- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` now support
  interaction constraints via the argument ``interaction_cst`` of their
  constructors. :pr:`21020` by :user:`Christian Lorentzen <lorentzenchr>`.
  Using interaction constraints also makes fitting faster.
  :pr:`24856` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| Adds ``class_weight`` to :class:`ensemble.HistGradientBoostingClassifier`.
  :pr:`22014` by `Thomas Fan`_.

- |Efficiency| Improve runtime performance of :class:`ensemble.IsolationForest`
  by avoiding data copies. :pr:`23252` by :user:`Zhehao Liu <MaxwellLZH>`.

- |Enhancement| :class:`ensemble.StackingClassifier` now accepts any kind of
  base estimator.
  :pr:`24538` by :user:`Guillem G. Subies <GuillemGSubies>`.

- |Enhancement| Make it possible to pass the ``categorical_features`` parameter
  of :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` as feature names.
  :pr:`24889` by :user:`Olivier Grisel <ogrisel>`.

- |Enhancement| :class:`ensemble.StackingClassifier` now supports
  multilabel-indicator target.
  :pr:`24146` by :user:`Nicolas Peretti <nicoperetti>`,
  :user:`Nestor Navarro <nestornav>`, :user:`Nati Tomattis <natitomattis>`,
  and :user:`Vincent Maladière <Vincent-Maladiere>`.

- |Enhancement| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` now accept their
  ``monotonic_cst`` parameter to be passed as a dictionary in addition
  to the previously supported array-like format.
  Such a dictionary has feature names as keys and one of ``-1``, ``0``, ``1``
  as value to specify monotonicity constraints for each feature.
  :pr:`24855` by :user:`Olivier Grisel <ogrisel>`.

- |Enhancement| Interaction constraints for
  :class:`ensemble.HistGradientBoostingClassifier`
  and :class:`ensemble.HistGradientBoostingRegressor` can now be specified
  as strings for two common cases: "no interactions" and "pairwise" interactions.
  :pr:`24849` by :user:`Tim Head <betatim>`.

- |Fix| Fixed the issue where :class:`ensemble.AdaBoostClassifier` outputs
  NaN in feature importance when fitted with very small sample weight.
  :pr:`20415` by :user:`Zhehao Liu <MaxwellLZH>`.

- |Fix| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` no longer error when predicting
  on categories encoded as negative values and instead consider them a member
  of the "missing category". :pr:`24283` by `Thomas Fan`_.

- |Fix| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor`, with ``verbose>=1``, print
  detailed timing information on computing histograms and finding best splits.
  The time spent in the root node was previously missing and is now included in
  the printed information.
  :pr:`24894` by :user:`Christian Lorentzen <lorentzenchr>`.

- |API| Rename the constructor parameter ``base_estimator`` to ``estimator`` in
  the following classes:
  :class:`ensemble.BaggingClassifier`,
  :class:`ensemble.BaggingRegressor`,
  :class:`ensemble.AdaBoostClassifier`,
  :class:`ensemble.AdaBoostRegressor`.
  ``base_estimator`` is deprecated in 1.2 and will be removed in 1.4.
  :pr:`23819` by :user:`Adrian Trujillo <trujillo9616>` and
  :user:`Edoardo Abati <EdAbati>`.

- |API| Rename the fitted attribute ``base_estimator_`` to ``estimator_`` in
  the following classes:
  :class:`ensemble.BaggingClassifier`,
  :class:`ensemble.BaggingRegressor`,
  :class:`ensemble.AdaBoostClassifier`,
  :class:`ensemble.AdaBoostRegressor`,
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.ExtraTreesClassifier`,
  :class:`ensemble.ExtraTreesRegressor`,
  :class:`ensemble.RandomTreesEmbedding`,
  :class:`ensemble.IsolationForest`.
  ``base_estimator_`` is deprecated in 1.2 and will be removed in 1.4.
  :pr:`23819` by :user:`Adrian Trujillo <trujillo9616>` and
  :user:`Edoardo Abati <EdAbati>`.
:mod:`sklearn.feature_selection`
................................

- |Fix| Fix a bug in :func:`feature_selection.mutual_info_regression` and
  :func:`feature_selection.mutual_info_classif`, where the continuous features
  in ``X`` should be scaled to a unit variance independently if the target ``y`` is
  continuous or discrete.
  :pr:`24747` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.gaussian_process`
...............................

- |Fix| Fix :class:`gaussian_process.kernels.Matern` gradient computation with
  ``nu=0.5`` for PyPy (and possibly other non CPython interpreters). :pr:`24245`
  by :user:`Loïc Estève <lesteve>`.

- |Fix| The ``fit`` method of :class:`gaussian_process.GaussianProcessRegressor`
  will not modify the input X in case a custom kernel is used, with a ``diag``
  method that returns part of the input X. :pr:`24405`
  by :user:`Omar Salman <OmarManzoor>`.

:mod:`sklearn.impute`
.....................

- |Enhancement| Added ``keep_empty_features`` parameter to
  :class:`impute.SimpleImputer`, :class:`impute.KNNImputer` and
  :class:`impute.IterativeImputer`, preventing removal of features
  containing only missing values when transforming.
  :pr:`16695` by :user:`Vitor Santa Rosa <vitorsrg>`.

:mod:`sklearn.inspection`
.........................

- |MajorFeature| Extended :func:`inspection.partial_dependence` and
  :class:`inspection.PartialDependenceDisplay` to handle categorical features.
  :pr:`18298` by :user:`Madhura Jayaratne <madhuracj>` and
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :class:`inspection.DecisionBoundaryDisplay` now raises error if input
  data is not 2-dimensional.
  :pr:`25077` by :user:`Arturo Amor <ArturoAmorQ>`.

:mod:`sklearn.kernel_approximation`
...................................

- |Enhancement| :class:`kernel_approximation.RBFSampler` now preserves
  dtype for ``numpy.float32`` inputs. :pr:`24317` by :user:`Tim Head <betatim>`.

- |Enhancement| :class:`kernel_approximation.SkewedChi2Sampler` now preserves
  dtype for ``numpy.float32`` inputs. :pr:`24350` by :user:`Rahil Parikh <rprkh>`.

- |Enhancement| :class:`kernel_approximation.RBFSampler` now accepts
  ``'scale'`` option for parameter ``gamma``.
  :pr:`24755` by :user:`Gleb Levitski <GLevV>`.

:mod:`sklearn.linear_model`
...........................

- |Enhancement| :class:`linear_model.LogisticRegression`,
  :class:`linear_model.LogisticRegressionCV`, :class:`linear_model.GammaRegressor`,
  :class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor`
  got a new solver ``solver="newton-cholesky"``. This is a 2nd order (Newton)
  optimisation routine that uses a Cholesky decomposition of the hessian matrix.
  When ``n_samples >> n_features``, the ``"newton-cholesky"`` solver has been
  observed to converge both faster and to a higher precision solution than the
  ``"lbfgs"`` solver on problems with one-hot encoded categorical variables with
  some rare categorical levels.
  :pr:`24637` and :pr:`24767` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Enhancement| :class:`linear_model.GammaRegressor`,
  :class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor`
  can reach higher precision with the lbfgs solver, in particular when ``tol`` is
  set to a tiny value. Moreover, ``verbose`` is now properly propagated to
  L-BFGS-B. :pr:`23619` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Fix| :class:`linear_model.SGDClassifier` and :class:`linear_model.SGDRegressor`
  will raise an error when all the validation samples have zero sample weight.
  :pr:`23275` by :user:`Zhehao Liu <MaxwellLZH>`.

- |Fix| :class:`linear_model.SGDOneClassSVM` no longer performs parameter
  validation in the constructor. All validation is now handled in ``fit()`` and
  ``partial_fit()``.
  :pr:`24433` by :user:`Yogendrasingh <iofall>`, :user:`Arisa Y. <arisayosh>`
  and :user:`Tim Head <betatim>`.

- |Fix| Fix average loss calculation when early stopping is enabled in
  :class:`linear_model.SGDRegressor` and :class:`linear_model.SGDClassifier`.
  Also updated the condition for early stopping accordingly.
  :pr:`23798` by :user:`Harsh Agrawal <Harsh14901>`.

- |API| The default value for the ``solver`` parameter in
  :class:`linear_model.QuantileRegressor` will change from ``"interior-point"``
  to ``"highs"`` in version 1.4.
  :pr:`23637` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| String option ``"none"`` is deprecated for ``penalty`` argument
  in :class:`linear_model.LogisticRegression`, and will be removed in version 1.4.
  Use ``None`` instead. :pr:`23877` by :user:`Zhehao Liu <MaxwellLZH>`.

- |API| The default value of ``tol`` was changed from ``1e-3`` to ``1e-4`` for
  :func:`linear_model.ridge_regression`, :class:`linear_model.Ridge` and
  :class:`linear_model.RidgeClassifier`.
  :pr:`24465` by :user:`Christian Lorentzen <lorentzenchr>`.

:mod:`sklearn.manifold`
.......................

- |Feature| Adds option to use the normalized stress in :class:`manifold.MDS`.
  This is enabled by setting the new ``normalize`` parameter to ``True``.
  :pr:`10168` by :user:`Łukasz Borchmann <Borchmann>`,
  :pr:`12285` by :user:`Matthias Miltenberger <mattmilten>`,
  :pr:`13042` by :user:`Matthieu Parizy <matthieu-pa>`,
  :pr:`18094` by :user:`Roth E Conrad <rotheconrad>` and
  :pr:`22562` by :user:`Meekail Zain <micky774>`.

- |Enhancement| Adds ``eigen_tol`` parameter to
  :class:`manifold.SpectralEmbedding`. Both :func:`manifold.spectral_embedding`
  and :class:`manifold.SpectralEmbedding` now propagate ``eigen_tol`` to all
  choices of ``eigen_solver``. Includes a new option ``eigen_tol="auto"``
  and begins deprecation to change the default from ``eigen_tol=0`` to
  ``eigen_tol="auto"`` in version 1.3.
  :pr:`23210` by :user:`Meekail Zain <micky774>`.

- |Enhancement| :class:`manifold.Isomap` now preserves
  dtype for ``np.float32`` inputs. :pr:`24714` by :user:`Rahil Parikh <rprkh>`.

- |API| Added an ``"auto"`` option to the ``normalized_stress`` argument in
  :class:`manifold.MDS` and :func:`manifold.smacof`. Note that
  ``normalized_stress`` is only valid for non-metric MDS, therefore the ``"auto"``
  option enables ``normalized_stress`` when ``metric=False`` and disables it when
  ``metric=True``. ``"auto"`` will become the default value for
  ``normalized_stress`` in version 1.4.
  :pr:`23834` by :user:`Meekail Zain <micky774>`.
:mod:`sklearn.metrics`
......................

- |Feature| :func:`metrics.ConfusionMatrixDisplay.from_estimator`,
  :func:`metrics.ConfusionMatrixDisplay.from_predictions`, and
  :meth:`metrics.ConfusionMatrixDisplay.plot` accepts a ``text_kw`` parameter
  which is passed to matplotlib's ``text`` function.
  :pr:`24051` by `Thomas Fan`_.

- |Feature| :func:`metrics.class_likelihood_ratios` is added to compute the
  positive and negative likelihood ratios derived from the confusion matrix
  of a binary classification problem. :pr:`22518` by
  :user:`Arturo Amor <ArturoAmorQ>`.

- |Feature| Add :class:`metrics.PredictionErrorDisplay` to plot residuals vs
  predicted and actual vs predicted to qualitatively assess the behavior of a
  regressor. The display can be created with the class methods
  :func:`metrics.PredictionErrorDisplay.from_estimator` and
  :func:`metrics.PredictionErrorDisplay.from_predictions`. :pr:`18020` by
  :user:`Guillaume Lemaitre <glemaitre>`.

- |Feature| :func:`metrics.roc_auc_score` now supports micro-averaging
  (``average="micro"``) for the One-vs-Rest multiclass case (``multi_class="ovr"``).
  :pr:`24338` by :user:`Arturo Amor <ArturoAmorQ>`.

- |Enhancement| Adds an ``"auto"`` option to ``eps`` in :func:`metrics.log_loss`.
  This option will automatically set the ``eps`` value depending on the data
  type of ``y_pred``. In addition, the default value of ``eps`` is changed from
  ``1e-15`` to the new ``"auto"`` option.
  :pr:`24354` by :user:`Safiuddin Khaja <Safikh>` and :user:`gsiisg <gsiisg>`.

- |Fix| Allows ``csr_matrix`` as input for parameter ``y_true`` of
  the :func:`metrics.label_ranking_average_precision_score` metric.
  :pr:`23442` by :user:`Sean Atukorala <ShehanAT>`.

- |Fix| :func:`metrics.ndcg_score` will now trigger a warning when the ``y_true``
  value contains a negative value. Users may still use negative values, but the
  result may not be between 0 and 1. Starting in v1.4, passing in negative
  values for ``y_true`` will raise an error.
  :pr:`22710` by :user:`Conroy Trinh <trinhcon>` and
  :pr:`23461` by :user:`Meekail Zain <micky774>`.

- |Fix| :func:`metrics.log_loss` with ``eps=0`` now returns a correct value of 0 or
  ``np.inf`` instead of ``nan`` for predictions at the boundaries (0 or 1). It also
  accepts integer input.
  :pr:`24365` by :user:`Christian Lorentzen <lorentzenchr>`.

- |API| The parameter ``sum_over_features`` of
  :func:`metrics.pairwise.manhattan_distances` is deprecated and will be removed
  in 1.4. :pr:`24630` by :user:`Rushil Desai <rusdes>`.

:mod:`sklearn.model_selection`
..............................

- |Feature| Added the class :class:`model_selection.LearningCurveDisplay`
  that allows to make easy plotting of learning curves obtained by the function
  :func:`model_selection.learning_curve`.
  :pr:`24084` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| For all ``SearchCV`` classes and scipy >= 1.10, rank corresponding to a
  nan score is correctly set to the maximum possible rank, rather than
  ``np.iinfo(np.int32).min``. :pr:`24141` by :user:`Loïc Estève <lesteve>`.

- |Fix| In both :class:`model_selection.HalvingGridSearchCV` and
  :class:`model_selection.HalvingRandomSearchCV` parameter
  combinations with a NaN score now share the lowest rank.
  :pr:`24539` by :user:`Tim Head <betatim>`.

- |Fix| For :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` ranks corresponding to nan
  scores will all be set to the maximum possible rank.
  :pr:`24543` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.multioutput`
..........................

- |Feature| Added boolean ``verbose`` flag to classes:
  :class:`multioutput.ClassifierChain` and :class:`multioutput.RegressorChain`.
  :pr:`23977` by :user:`Eric Fiegel <efiegel>`,
  :user:`Chiara Marmo <cmarmo>`,
  :user:`Lucy Liu <lucyleeow>`, and
  :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.naive_bayes`
..........................

- |Feature| Add methods ``predict_joint_log_proba`` to all naive Bayes classifiers.
  :pr:`23683` by :user:`Andrey Melnik <avm19>`.

- |Enhancement| A new parameter ``force_alpha`` was added to
  :class:`naive_bayes.BernoulliNB`, :class:`naive_bayes.ComplementNB`,
  :class:`naive_bayes.CategoricalNB`, and :class:`naive_bayes.MultinomialNB`,
  allowing user to set parameter alpha to a very small number, greater or equal
  0, which was earlier automatically changed to ``1e-10`` instead.
  :pr:`16747` by :user:`arka204`,
  :pr:`18805` by :user:`hongshaoyang`,
  :pr:`22269` by :user:`Meekail Zain <micky774>`.

:mod:`sklearn.neighbors`
........................

- |Feature| Adds new function :func:`neighbors.sort_graph_by_row_values` to
  sort a CSR sparse graph such that each row is stored with increasing values.
  This is useful to improve efficiency when using precomputed sparse distance
  matrices in a variety of estimators and avoid an ``EfficiencyWarning``.
  :pr:`23139` by `Tom Dupre la Tour`_.

- |Efficiency| :class:`neighbors.NearestCentroid` is faster and requires
  less memory as it better leverages CPUs' caches to compute predictions.
  :pr:`24645` by :user:`Olivier Grisel <ogrisel>`.

- |Enhancement| :class:`neighbors.KernelDensity` bandwidth parameter now accepts
  definition using Scott's and Silverman's estimation methods.
  :pr:`10468` by :user:`Ruben <icfly2>` and :pr:`22993` by
  :user:`Jovan Stojanovic <jovan-stojanovic>`.

- |Enhancement| ``neighbors.NeighborsBase`` now accepts
  Minkowski semi-metric (i.e. when :math:`0 < p < 1` for
  ``metric="minkowski"``) for ``algorithm="auto"`` or ``algorithm="brute"``.
  :pr:`24750` by :user:`Rudresh Veerkhare <RudreshVeerkhare>`.
RudreshVeerkhare       Fix   class  neighbors NearestCentroid  now raises an informative error message at fit time   instead of failing with a low level error message at predict time     pr  23874  by  user  Juan Gomez  2357juan        Fix  Set  n jobs None  by default  instead of  1   for    class  neighbors KNeighborsTransformer  and    class  neighbors RadiusNeighborsTransformer      pr  24075  by  user  Valentin Laurent  Valentin Laurent        Enhancement   class  neighbors LocalOutlierFactor  now preserves   dtype for  numpy float32  inputs     pr  22665  by  user  Julien Jerphanion  jjerphan      mod  sklearn neural network                                    Fix   class  neural network MLPClassifier  and    class  neural network MLPRegressor  always expose the parameters  best loss       validation scores    and  best validation score     best loss   is set to    None  when  early stopping True   while  validation scores   and    best validation score   are set to  None  when  early stopping False      pr  24683  by  user  Guillaume Lemaitre  glemaitre      mod  sklearn pipeline                              Enhancement   meth  pipeline FeatureUnion get feature names out  can now   be used when one of the transformers in the  class  pipeline FeatureUnion  is     passthrough     pr  24058  by  user  Diederik Perdok  diederikwp       Enhancement  The  class  pipeline FeatureUnion  class now has a  named transformers    attribute for accessing transformers by name     pr  20331  by  user  Christopher Flynn  crflynn      mod  sklearn preprocessing                                   Enhancement   class  preprocessing FunctionTransformer  will always try to set    n features in   and  feature names in   regardless of the  validate  parameter     pr  23993  by  Thomas Fan        Fix   class  preprocessing LabelEncoder  correctly encodes NaNs in  transform      pr  22629  by  Thomas Fan        API  The  sparse  parameter of  class  preprocessing OneHotEncoder    is 
now deprecated and will be removed in version 1 4  Use  sparse output  instead     pr  24412  by  user  Rushil Desai  rusdes      mod  sklearn svm                         API  The  class weight   attribute is now deprecated for    class  svm NuSVR    class  svm SVR    class  svm OneClassSVM      pr  22898  by  user  Meekail Zain  micky774      mod  sklearn tree                          Enhancement   func  tree plot tree    func  tree export graphviz  now uses   a lower case  x i   to represent feature  i    pr  23480  by  Thomas Fan      mod  sklearn utils                           Feature  A new module exposes development tools to discover estimators  i e     func  utils discovery all estimators    displays  i e     func  utils discovery all displays   and functions  i e     func  utils discovery all functions   in scikit learn     pr  21469  by  user  Guillaume Lemaitre  glemaitre        Enhancement   func  utils extmath randomized svd  now accepts an argument     lapack svd driver   to specify the lapack driver used in the internal   deterministic SVD used by the randomized SVD algorithm     pr  20617  by  user  Srinath Kailasa  skailasa       Enhancement   func  utils validation column or 1d  now accepts a  dtype    parameter to specific  y  s dtype   pr  22629  by  Thomas Fan        Enhancement   utils extmath cartesian  now accepts arrays with different    dtype  and will cast the output to the most permissive  dtype      pr  25067  by  user  Guillaume Lemaitre  glemaitre        Fix   func  utils multiclass type of target  now properly handles sparse matrices     pr  14862  by  user  L onard Binet  leonardbinet        Fix  HTML representation no longer errors when an estimator class is a value in    get params    pr  24512  by  Thomas Fan        Fix   func  utils estimator checks check estimator  now takes into account   the  requires positive X  tag correctly   pr  24667  by  Thomas Fan        Fix   func  utils check array  now supports Pandas Series with  
pd NA    by raising a better error message or returning a compatible  ndarray      pr  25080  by  Thomas Fan        API  The extra keyword parameters of  func  utils extmath density  are deprecated   and will be removed in 1 4     pr  24523  by  user  Mia Bajic  clytaemnestra        rubric   Code and documentation contributors  Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1 1  including   2357juan  3lLobo  Adam J  Stewart  Adam Kania  Adam Li  Aditya Anulekh  Admir Demiraj  adoublet  Adrin Jalali  Ahmedbgh  Aiko  Akshita Prasanth  Ala Na  Alessandro Miola  Alex  Alexandr  Alexandre Perez Lebel  Alex Buzenet  Ali H  El Kassas  aman kumar  Amit Bera  Andr s Simon  Andreas Grivas  Andreas Mueller  Andrew Wang  angela maennel  Aniket Shirsat  Anthony22 dev  Antony Lee  anupam  Apostolos Tsetoglou  Aravindh R  Artur Hermano  Arturo Amor  as 90  ashah002  Ashwin Mathur  avm19  Azaria Gebremichael  b0rxington  Badr MOUFAD  Bardiya Ak  Bart omiej Go da  BdeGraaff  Benjamin Bossan  Benjamin Carter  berkecanrizai  Bernd Fritzke  Bhoomika  Biswaroop Mitra  Brandon TH Chen  Brett Cannon  Bsh  cache missing  carlo  Carlos Ramos Carre o  ceh  chalulu  Changyao Chen  Charles Zablit  Chiara Marmo  Christian Lorentzen  Christian Ritter  Christian Veenhuis  christianwaldmann  Christine P  Chai  Claudio Salvatore Arcidiacono  Cl ment Verrier  crispinlogan  Da Lan  DanGonite57  Daniela Fernandes  DanielGaerber  darioka  Darren Nguyen  davidblnc  david cortes  David Gilbertson  David Poznik  Dayne  Dea Mar a L on  Denis  Dev Khant  Dhanshree Arora  Diadochokinetic  diederikwp  Dimitri Papadopoulos Orfanos  Dimitris Litsidis  drewhogg  Duarte OC  Dwight Lindquist  Eden Brekke  Edern  Edoardo Abati  Eleanore Denies  EliaSchiavon  Emir  ErmolaevPA  Fabrizio Damicelli  fcharras  Felipe Siola  Flynn  francesco tuveri  Franck Charras  ftorres16  Gael Varoquaux  Geevarghese George  genvalen  GeorgiaMayDay  Gianr Lazz  Gleb Levitski  Gl 
ria Maci  Mu oz  Guillaume Lemaitre  Guillem Garc a Subies  Guitared  gunesbayir  Haesun Park  Hansin Ahuja  Hao Chun Chang  Harsh Agrawal  harshit5674  hasan yaman  henrymooresc  Henry Sorsky  Hristo Vrigazov  htsedebenham  humahn  i aki y  Ian Thompson  Ido M  Iglesys  Iliya Zhechev  Irene  ivanllt  Ivan Sedykh  Jack McIvor  jakirkham  JanFidor  Jason G  J r mie du Boisberranger  Jiten Sidhpura  jkarolczak  Jo o David  JohnathanPi  John Koumentis  John P  John Pangas  johnthagen  Jordan Fleming  Joshua Choo Yun Keat  Jovan Stojanovic  Juan Carlos Alfaro Jim nez  juanfe88  Juan Felipe Arias  JuliaSchoepp  Julien Jerphanion  jygerardy  ka00ri  Kanishk Sachdev  Kanissh  Kaushik Amar Das  Kendall  Kenneth Prabakaran  Kento Nozawa  kernc  Kevin Roice  Kian Eliasi  Kilian Kluge  Kilian Lieret  Kirandevraj  Kraig  krishna kumar  krishna vamsi  Kshitij Kapadni  Kshitij Mathur  Lauren Burke  L onard Binet  lingyi1110  Lisa Casino  Logan Thomas  Loic Esteve  Luciano Mantovani  Lucy Liu  Maascha  Madhura Jayaratne  madinak  Maksym  Malte S  Kurz  Mansi Agrawal  Marco Edward Gorelli  Marco Wurps  Maren Westermann  Maria Telenczuk  Mario Kostelac  martin kokos  Marvin Krawutschke  Masanori Kanazu  mathurinm  Matt Haberland  mauroantonioserrano  Max Halford  Maxi Marufo  maximeSaur  Maxim Smolskiy  Maxwell  m  bou  Meekail Zain  Mehgarg  mehmetcanakbay  Mia Baji   Michael Flaks  Michael Hornstein  Michel de Ruiter  Michelle Paradis  Mikhail Iljin  Misa Ogura  Moritz Wilksch  mrastgoo  Naipawat Poolsawat  Naoise Holohan  Nass  Nathan Jacobi  Nawazish Alam  Nguy n V n Di n  Nicola Fanelli  Nihal Thukarama Rao  Nikita Jare  nima10khodaveisi  Nima Sarajpoor  nitinramvelraj  NNLNR  npache  Nwanna Joseph  Nymark Kho  o holman  Olivier Grisel  Olle Lukowski  Omar Hassoun  Omar Salman  osman tamer  ouss1508  Oyindamola Olatunji  PAB  Pandata  partev  Paulo Sergio  Soares  Petar Mlinari   Peter Jansson  Peter Steinbach  Philipp Jung  Piet Br mmel  Pooja M  Pooja Subramaniam  priyam 
kakati  puhuk  Rachel Freeland  Rachit Keerti Das  Rafal Wojdyla  Raghuveer Bhat  Rahil Parikh  Ralf Gommers  ram vikram singh  Ravi Makhija  Rehan Guha  Reshama Shaikh  Richard Klima  Rob Crockett  Robert Hommes  Robert Juergens  Robin Lenz  Rocco Meli  Roman4oo  Ross Barnowski  Rowan Mankoo  Rudresh Veerkhare  Rushil Desai  Sabri Monaf Sabri  Safikh  Safiuddin Khaja  Salahuddin  Sam Adam Day  Sandra Yojana Meneses  Sandro Ephrem  Sangam  SangamSwadik  SANJAI 3  SarahRemus  Sashka Warner  SavkoMax  Scott Gigante  Scott Gustafson  Sean Atukorala  sec65  SELEE  seljaks  Shady el Gewily  Shane  shellyfung  Shinsuke Mori  Shiva chauhan  Shoaib Khan  Shogo Hida  Shrankhla Srivastava  Shuangchi He  Simon  sonnivs  Sortofamudkip  Srinath Kailasa  Stanislav  Stanley  Modrak  Stefanie Molin  stellalin7  St phane Collot  Steven Van Vaerenbergh  Steve Schmerler  Sven Stehle  Tabea Kossen  TheDevPanda  the syd sre  Thijs van Weezel  Thomas Bonald  Thomas Germer  Thomas J  Fan  Ti Ion  Tim Head  Timofei Kornev  toastedyeast  Tobias Pitters  Tom Dupr  la Tour  tomiock  Tom Mathews  Tom McTiernan  tspeng  Tyler Egashira  Valentin Laurent  Varun Jain  Vera Komeyer  Vicente Reyes Puerta  Vinayak Mehta  Vincent M  Vishal  Vyom Pathak  wattai  wchathura  WEN Hao  William M  x110  Xiao Yuan  Xunius  yanhong zhao ef  Yusuf Raji  Z Adil Khwaja  zeeshan lone"}
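The `neighbors` entry above accepts the Minkowski semi-metric (0 < p < 1). A small stdlib-only sketch (no scikit-learn required, the function name `minkowski` is illustrative) shows why it is only a *semi*-metric: for p < 1 the triangle inequality fails.

```python
def minkowski(x, y, p):
    # (sum_i |x_i - y_i|**p) ** (1/p); a true metric only for p >= 1
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

a, b, c = (0.0, 0.0), (1.0, 0.0), (1.0, 1.0)
p = 0.5
d_ac = minkowski(a, c, p)  # (1 + 1) ** 2 = 4.0
d_ab = minkowski(a, b, p)  # 1.0
d_bc = minkowski(b, c, p)  # 1.0
# Triangle inequality violated: d(a, c) > d(a, b) + d(b, c)
print(d_ac > d_ab + d_bc)  # True
```

This is why such values of `p` are supported only for `algorithm="auto"` or `algorithm="brute"`: tree-based index structures rely on the triangle inequality.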
{"questions":"scikit-learn sklearn contributors rst Version 1 0 releasenotes10","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n.. _release_notes_1_0:\n\n===========\nVersion 1.0\n===========\n\nFor a short description of the main highlights of the release, please refer to\n:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_0_0.py`.\n\n.. include:: changelog_legend.inc\n\n.. _changes_1_0_2:\n\nVersion 1.0.2\n=============\n\n**December 2021**\n\n- |Fix| :class:`cluster.Birch`,\n  :class:`feature_selection.RFECV`, :class:`ensemble.RandomForestRegressor`,\n  :class:`ensemble.RandomForestClassifier`,\n  :class:`ensemble.GradientBoostingRegressor`, and\n  :class:`ensemble.GradientBoostingClassifier` do not raise warning when fitted\n  on a pandas DataFrame anymore. :pr:`21578` by `Thomas Fan`_.\n\nChangelog\n---------\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| Fixed an infinite loop in :class:`cluster.SpectralClustering` by\n  moving an iteration counter from try to except.\n  :pr:`21271` by :user:`Tyler Martin <martintb>`.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Fix| :func:`datasets.fetch_openml` is now thread safe. Data is first\n  downloaded to a temporary subfolder and then renamed.\n  :pr:`21833` by :user:`Siavash Rezazadeh <siavrez>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Fixed the constraint on the objective function of\n  :class:`decomposition.DictionaryLearning`,\n  :class:`decomposition.MiniBatchDictionaryLearning`, :class:`decomposition.SparsePCA`\n  and :class:`decomposition.MiniBatchSparsePCA` to be convex and match the referenced\n  article. 
:pr:`19210` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Fix| :class:`ensemble.RandomForestClassifier`,\n  :class:`ensemble.RandomForestRegressor`,\n  :class:`ensemble.ExtraTreesClassifier`, :class:`ensemble.ExtraTreesRegressor`,\n  and :class:`ensemble.RandomTreesEmbedding` now raise a ``ValueError`` when\n  ``bootstrap=False`` and ``max_samples`` is not ``None``.\n  :pr:`21295` by :user:`Haoyin Xu <PSSF23>`.\n\n- |Fix| Solve a bug in :class:`ensemble.GradientBoostingClassifier` where the\n  exponential loss was computing the positive gradient instead of the\n  negative one.\n  :pr:`22050` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Fix| Fixed :class:`feature_selection.SelectFromModel` by improving support\n  for base estimators that do not set `feature_names_in_`. :pr:`21991` by\n  `Thomas Fan`_.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Fix| Fix a bug in :class:`linear_model.RidgeClassifierCV` where the method\n  `predict` was performing an `argmax` on the scores obtained from\n  `decision_function` instead of returning the multilabel indicator matrix.\n  :pr:`19869` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| :class:`linear_model.LassoLarsIC` now correctly computes AIC\n  and BIC. An error is now raised when `n_features > n_samples` and\n  when the noise variance is not provided.\n  :pr:`21481` by :user:`Guillaume Lemaitre <glemaitre>` and\n  :user:`Andr\u00e9s Babino <ababino>`.\n\n:mod:`sklearn.manifold`\n.......................\n\n- |Fix| Fixed an unnecessary error when fitting :class:`manifold.Isomap` with a\n  precomputed dense distance matrix where the neighbors graph has multiple\n  disconnected components. 
:pr:`21915` by `Tom Dupre la Tour`_.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| All :class:`sklearn.metrics.DistanceMetric` subclasses now correctly support\n  read-only buffer attributes.\n  This fixes a regression introduced in 1.0.0 with respect to 0.24.2.\n  :pr:`21694` by :user:`Julien Jerphanion <jjerphan>`.\n\n- |Fix| `sklearn.metrics.MinkowskiDistance` now accepts a weight\n  parameter that makes it possible to write code that behaves consistently both\n  with scipy 1.8 and earlier versions. In turn this means that all\n  neighbors-based estimators (except those that use `algorithm=\"kd_tree\"`) now\n  accept a weight parameter with `metric=\"minkowski\"` to yield results that\n  are always consistent with `scipy.spatial.distance.cdist`.\n  :pr:`21741` by :user:`Olivier Grisel <ogrisel>`.\n\n:mod:`sklearn.multiclass`\n.........................\n\n- |Fix| :meth:`multiclass.OneVsRestClassifier.predict_proba` does not error when\n  fitted on constant integer targets. :pr:`21871` by `Thomas Fan`_.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Fix| :class:`neighbors.KDTree` and :class:`neighbors.BallTree` correctly support\n  read-only buffer attributes. :pr:`21845` by `Thomas Fan`_.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Fix| Fixes compatibility bug with NumPy 1.22 in :class:`preprocessing.OneHotEncoder`.\n  :pr:`21517` by `Thomas Fan`_.\n\n:mod:`sklearn.tree`\n...................\n\n- |Fix| Prevents :func:`tree.plot_tree` from drawing out of the boundary of\n  the figure. :pr:`21917` by `Thomas Fan`_.\n\n- |Fix| Support loading pickles of decision tree models when the pickle has\n  been generated on a platform with a different bitness. A typical example is\n  to train and pickle the model on 64 bit machine and load the model on a 32\n  bit machine for prediction. 
:pr:`21552` by :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |Fix| :func:`utils.estimator_html_repr` now escapes all the estimator\n  descriptions in the generated HTML. :pr:`21493` by\n  :user:`Aur\u00e9lien Geron <ageron>`.\n\n.. _changes_1_0_1:\n\nVersion 1.0.1\n=============\n\n**October 2021**\n\nFixed models\n------------\n\n- |Fix| Non-fit methods in the following classes do not raise a UserWarning\n  when fitted on DataFrames with valid feature names:\n  :class:`covariance.EllipticEnvelope`, :class:`ensemble.IsolationForest`,\n  :class:`ensemble.AdaBoostClassifier`, :class:`neighbors.KNeighborsClassifier`,\n  :class:`neighbors.KNeighborsRegressor`,\n  :class:`neighbors.RadiusNeighborsClassifier`,\n  :class:`neighbors.RadiusNeighborsRegressor`. :pr:`21199` by `Thomas Fan`_.\n\n:mod:`sklearn.calibration`\n..........................\n\n- |Fix| Fixed :class:`calibration.CalibratedClassifierCV` to take into account\n  `sample_weight` when computing the base estimator prediction when\n  `ensemble=False`.\n  :pr:`20638` by :user:`Julien Bohn\u00e9 <JulienB-78>`.\n\n- |Fix| Fixed a bug in :class:`calibration.CalibratedClassifierCV` with\n  `method=\"sigmoid\"` that was ignoring the `sample_weight` when computing\n  the Bayesian priors.\n  :pr:`21179` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans`, ensuring reproducibility and equivalence\n  between sparse and dense input. 
:pr:`21195`\n  by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Fix| Fixed a bug that could produce a segfault in rare cases for\n  :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor`.\n  :pr:`21130` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n:mod:`sklearn.gaussian_process`\n...............................\n\n- |Fix| Compute `y_std` properly with multi-target in\n  :class:`sklearn.gaussian_process.GaussianProcessRegressor` allowing\n  proper normalization in multi-target scenarios.\n  :pr:`20761` by :user:`Patrick de C. T. R. Ferreira <patrickctrf>`.\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Efficiency| Fixed an efficiency regression introduced in version 1.0.0 in the\n  `transform` method of :class:`feature_extraction.text.CountVectorizer` which no\n  longer checks for uppercase characters in the provided vocabulary. :pr:`21251`\n  by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Fixed a bug in :class:`feature_extraction.text.CountVectorizer` and\n  :class:`feature_extraction.text.TfidfVectorizer` by raising an\n  error when 'min_df' or 'max_df' are floating-point numbers greater than 1.\n  :pr:`20752` by :user:`Alek Lefebvre <AlekLefebvre>`.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Fix| Improves stability of :class:`linear_model.LassoLars` for different\n  versions of openblas. 
:pr:`21340` by `Thomas Fan`_.\n\n- |Fix| :class:`linear_model.LogisticRegression` now raises a better error\n  message when the solver does not support sparse matrices with int64 indices.\n  :pr:`21093` by `Tom Dupre la Tour`_.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Fix| :class:`neighbors.KNeighborsClassifier`,\n  :class:`neighbors.KNeighborsRegressor`,\n  :class:`neighbors.RadiusNeighborsClassifier`,\n  :class:`neighbors.RadiusNeighborsRegressor` with `metric=\"precomputed\"` raises\n  an error for `bsr` and `dok` sparse matrices in methods: `fit`, `kneighbors`\n  and `radius_neighbors`, due to handling of explicit zeros in `bsr` and `dok`\n  :term:`sparse graph` formats. :pr:`21199` by `Thomas Fan`_.\n\n:mod:`sklearn.pipeline`\n.......................\n\n- |Fix| :meth:`pipeline.Pipeline.get_feature_names_out` correctly passes feature\n  names out from one step of a pipeline to the next. :pr:`21351` by\n  `Thomas Fan`_.\n\n:mod:`sklearn.svm`\n..................\n\n- |Fix| :class:`svm.SVC` and :class:`svm.SVR` check for an inconsistency\n  in its internal representation and raise an error instead of segfaulting.\n  This fix also resolves\n  `CVE-2020-28975 <https:\/\/nvd.nist.gov\/vuln\/detail\/CVE-2020-28975>`__.\n  :pr:`21336` by `Thomas Fan`_.\n\n:mod:`sklearn.utils`\n....................\n\n- |Enhancement| `utils.validation._check_sample_weight` can perform a\n  non-negativity check on the sample weights. 
It can be turned on\n  using the `only_non_negative` bool parameter.\n  Estimators that check for non-negative weights are updated:\n  :class:`linear_model.LinearRegression` (here the previous\n  error message was misleading),\n  :class:`ensemble.AdaBoostClassifier`,\n  :class:`ensemble.AdaBoostRegressor`,\n  :class:`neighbors.KernelDensity`.\n  :pr:`20880` by :user:`Guillaume Lemaitre <glemaitre>`\n  and :user:`Andr\u00e1s Simon <simonandras>`.\n\n- |Fix| Solve a bug in ``sklearn.utils.metaestimators.if_delegate_has_method``\n  where the underlying check for an attribute did not work with NumPy arrays.\n  :pr:`21145` by :user:`Zahlii <Zahlii>`.\n\nMiscellaneous\n.............\n\n- |Fix| Fitting an estimator on a dataset that has no feature names, that was previously\n  fitted on a dataset with feature names no longer keeps the old feature names stored in\n  the `feature_names_in_` attribute. :pr:`21389` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n.. _changes_1_0:\n\nVersion 1.0.0\n=============\n\n**September 2021**\n\nMinimal dependencies\n--------------------\n\nVersion 1.0.0 of scikit-learn requires python 3.7+, numpy 1.14.6+ and\nscipy 1.1.0+. Optional minimal dependency is matplotlib 2.2.2+.\n\nEnforcing keyword-only arguments\n--------------------------------\n\nIn an effort to promote clear and non-ambiguous use of the library, most\nconstructor and function parameters must now be passed as keyword arguments\n(i.e. using the `param=value` syntax) instead of positional. If a keyword-only\nparameter is used as positional, a `TypeError` is now raised.\n:issue:`15005` :pr:`20002` by `Joel Nothman`_, `Adrin Jalali`_, `Thomas Fan`_,\n`Nicolas Hug`_, and `Tom Dupre la Tour`_. 
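The keyword-only enforcement described above uses Python's bare `*` marker in function signatures. A minimal stdlib-only sketch of the mechanism (`fit_widget` is a made-up name, not a scikit-learn API):

```python
# Parameters after the bare `*` can only be passed by keyword, which is
# the same Python mechanism scikit-learn uses to enforce `param=value`.
def fit_widget(X, *, alpha=1.0, normalize=False):
    return {"alpha": alpha, "normalize": normalize}

print(fit_widget([[1.0]], alpha=0.5))  # keyword syntax: accepted
try:
    fit_widget([[1.0]], 0.5)           # positional: raises TypeError
except TypeError:
    print("positional use of a keyword-only parameter is rejected")
```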
See `SLEP009\n<https:\/\/scikit-learn-enhancement-proposals.readthedocs.io\/en\/latest\/slep009\/proposal.html>`_\nfor more details.\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- |Fix| :class:`manifold.TSNE` now avoids numerical underflow issues during\n  affinity matrix computation.\n\n- |Fix| :class:`manifold.Isomap` now connects disconnected components of the\n  neighbors graph along some minimum distance pairs, instead of changing\n  every infinite distance to zero.\n\n- |Fix| The splitting criterion of :class:`tree.DecisionTreeClassifier` and\n  :class:`tree.DecisionTreeRegressor` can be impacted by a fix in the handling\n  of rounding errors. Previously some extra spurious splits could occur.\n\n- |Fix| :func:`model_selection.train_test_split` with a `stratify` parameter\n  and :class:`model_selection.StratifiedShuffleSplit` may lead to slightly\n  different results.\n\nDetails are listed in the changelog below.\n\n(While we are trying to better inform users by providing this information, we\ncannot assure that this list is complete.)\n\n\nChangelog\n---------\n\n..\n    Entries should be grouped by module (in alphabetic order) and prefixed with\n    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,\n    |Fix| or |API| (see whats_new.rst for descriptions).\n    Entries should be ordered by those labels (e.g. 
|Fix| after |Efficiency|).\n    Changes not specific to a module should be listed under *Multiple Modules*\n    or *Miscellaneous*.\n    Entries should end with:\n    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.\n    where 123456 is the *pull request* number, not the issue number.\n\n- |API| The option for using the squared error via ``loss`` and\n  ``criterion`` parameters was made more consistent. The preferred way is by\n  setting the value to `\"squared_error\"`. Old option names are still valid,\n  produce the same models, but are deprecated and will be removed in version\n  1.2.\n  :pr:`19310` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n  - For :class:`ensemble.ExtraTreesRegressor`, `criterion=\"mse\"` is deprecated,\n    use `\"squared_error\"` instead which is now the default.\n\n  - For :class:`ensemble.GradientBoostingRegressor`, `loss=\"ls\"` is deprecated,\n    use `\"squared_error\"` instead which is now the default.\n\n  - For :class:`ensemble.RandomForestRegressor`, `criterion=\"mse\"` is deprecated,\n    use `\"squared_error\"` instead which is now the default.\n\n  - For :class:`ensemble.HistGradientBoostingRegressor`, `loss=\"least_squares\"`\n    is deprecated, use `\"squared_error\"` instead which is now the default.\n\n  - For :class:`linear_model.RANSACRegressor`, `loss=\"squared_loss\"` is\n    deprecated, use `\"squared_error\"` instead.\n\n  - For :class:`linear_model.SGDRegressor`, `loss=\"squared_loss\"` is\n    deprecated, use `\"squared_error\"` instead which is now the default.\n\n  - For :class:`tree.DecisionTreeRegressor`, `criterion=\"mse\"` is deprecated,\n    use `\"squared_error\"` instead which is now the default.\n\n  - For :class:`tree.ExtraTreeRegressor`, `criterion=\"mse\"` is deprecated,\n    use `\"squared_error\"` instead which is now the default.\n\n- |API| The option for using the absolute error via ``loss`` and\n  ``criterion`` parameters was made more consistent. 
The preferred way is by\n  setting the value to `\"absolute_error\"`. Old option names are still valid,\n  produce the same models, but are deprecated and will be removed in version\n  1.2.\n  :pr:`19733` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n  - For :class:`ensemble.ExtraTreesRegressor`, `criterion=\"mae\"` is deprecated,\n    use `\"absolute_error\"` instead.\n\n  - For :class:`ensemble.GradientBoostingRegressor`, `loss=\"lad\"` is deprecated,\n    use `\"absolute_error\"` instead.\n\n  - For :class:`ensemble.RandomForestRegressor`, `criterion=\"mae\"` is deprecated,\n    use `\"absolute_error\"` instead.\n\n  - For :class:`ensemble.HistGradientBoostingRegressor`,\n    `loss=\"least_absolute_deviation\"` is deprecated, use `\"absolute_error\"`\n    instead.\n\n  - For :class:`linear_model.RANSACRegressor`, `loss=\"absolute_loss\"` is\n    deprecated, use `\"absolute_error\"` instead which is now the default.\n\n  - For :class:`tree.DecisionTreeRegressor`, `criterion=\"mae\"` is deprecated,\n    use `\"absolute_error\"` instead.\n\n  - For :class:`tree.ExtraTreeRegressor`, `criterion=\"mae\"` is deprecated,\n    use `\"absolute_error\"` instead.\n\n- |API| `np.matrix` usage is deprecated in 1.0 and will raise a `TypeError` in\n  1.2. :pr:`20165` by `Thomas Fan`_.\n\n- |API| :term:`get_feature_names_out` has been added to the transformer API\n  to get the names of the output features. `get_feature_names` has in\n  turn been deprecated. :pr:`18444` by `Thomas Fan`_.\n\n- |API| All estimators store `feature_names_in_` when fitted on pandas Dataframes.\n  These feature names are compared to names seen in non-`fit` methods, e.g.\n  `transform` and will raise a `FutureWarning` if they are not consistent.\n  These ``FutureWarning`` s will become ``ValueError`` s in 1.2. :pr:`18010` by\n  `Thomas Fan`_.\n\n:mod:`sklearn.base`\n...................\n\n- |Fix| :func:`config_context` is now threadsafe. 
:pr:`18736` by `Thomas Fan`_.\n\n:mod:`sklearn.calibration`\n..........................\n\n- |Feature| :class:`calibration.CalibrationDisplay` added to plot\n  calibration curves. :pr:`17443` by :user:`Lucy Liu <lucyleeow>`.\n\n- |Fix| The ``predict`` and ``predict_proba`` methods of\n  :class:`calibration.CalibratedClassifierCV` can now properly be used on\n  prefitted pipelines. :pr:`19641` by :user:`Alek Lefebvre <AlekLefebvre>`.\n\n- |Fix| Fixed an error when using a :class:`ensemble.VotingClassifier`\n  as `base_estimator` in :class:`calibration.CalibratedClassifierCV`.\n  :pr:`20087` by :user:`Cl\u00e9ment Fauchereau <clement-f>`.\n\n\n:mod:`sklearn.cluster`\n......................\n\n- |Efficiency| The ``\"k-means++\"`` initialization of :class:`cluster.KMeans`\n  and :class:`cluster.MiniBatchKMeans` is now faster, especially in multicore\n  settings. :pr:`19002` by :user:`Jon Crall <Erotemic>` and :user:`J\u00e9r\u00e9mie du\n  Boisberranger <jeremiedbb>`.\n\n- |Efficiency| :class:`cluster.KMeans` with `algorithm='elkan'` is now faster\n  in multicore settings. :pr:`19052` by\n  :user:`Yusuke Nagasaka <YusukeNagasaka>`.\n\n- |Efficiency| :class:`cluster.MiniBatchKMeans` is now faster in multicore\n  settings. :pr:`17622` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Efficiency| :class:`cluster.OPTICS` can now cache the output of the\n  computation of the tree, using the `memory` parameter.  :pr:`19024` by\n  :user:`Frankie Robertson <frankier>`.\n\n- |Enhancement| The `predict` and `fit_predict` methods of\n  :class:`cluster.AffinityPropagation` now accept sparse data type for input\n  data.\n  :pr:`20117` by :user:`Venkatachalam Natchiappan <venkyyuvy>`.\n\n- |Fix| Fixed a bug in :class:`cluster.MiniBatchKMeans` where the sample\n  weights were partially ignored when the input is sparse. 
:pr:`17622` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Improved convergence detection based on center change in\n  :class:`cluster.MiniBatchKMeans` which was almost never achievable.\n  :pr:`17622` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| :class:`cluster.AgglomerativeClustering` now supports read-only\n  memory-mapped datasets.\n  :pr:`19883` by :user:`Julien Jerphanion <jjerphan>`.\n\n- |Fix| :class:`cluster.AgglomerativeClustering` correctly connects components\n  when connectivity and affinity are both precomputed and the number\n  of connected components is greater than 1. :pr:`20597` by\n  `Thomas Fan`_.\n\n- |Fix| :class:`cluster.FeatureAgglomeration` does not accept a ``**params`` kwarg in\n  the ``fit`` function anymore, resulting in a more concise error message. :pr:`20899`\n  by :user:`Adam Li <adam2392>`.\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans`, ensuring reproducibility and equivalence\n  between sparse and dense input. :pr:`20200`\n  by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |API| :class:`cluster.Birch` attributes, `fit_` and `partial_fit_`, are\n  deprecated and will be removed in 1.2. :pr:`19297` by `Thomas Fan`_.\n\n- |API| the default value for the `batch_size` parameter of\n  :class:`cluster.MiniBatchKMeans` was changed from 100 to 1024 due to\n  efficiency reasons. The `n_iter_` attribute of\n  :class:`cluster.MiniBatchKMeans` now reports the number of started epochs and\n  the `n_steps_` attribute reports the number of mini batches processed.\n  :pr:`17622` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |API| :func:`cluster.spectral_clustering` raises an improved error when passed\n  a `np.matrix`. :pr:`20560` by `Thomas Fan`_.\n\n:mod:`sklearn.compose`\n......................\n\n- |Enhancement| :class:`compose.ColumnTransformer` now records the output\n  of each transformer in `output_indices_`. 
:pr:`18393` by\n  :user:`Luca Bittarello <lbittarello>`.\n\n- |Enhancement| :class:`compose.ColumnTransformer` now allows DataFrame input to\n  have its columns appear in a changed order in `transform`. Further, columns that\n  are dropped will not be required in transform, and additional columns will be\n  ignored if `remainder='drop'`. :pr:`19263` by `Thomas Fan`_.\n\n- |Enhancement| Adds `**predict_params` keyword argument to\n  :meth:`compose.TransformedTargetRegressor.predict` that passes keyword\n  argument to the regressor.\n  :pr:`19244` by :user:`Ricardo <ricardojnf>`.\n\n- |Fix| `compose.ColumnTransformer.get_feature_names` supports\n  non-string feature names returned by any of its transformers. However, note\n  that ``get_feature_names`` is deprecated, use ``get_feature_names_out``\n  instead. :pr:`18459` by :user:`Albert Villanova del Moral <albertvillanova>`\n  and :user:`Alonso Silva Allende <alonsosilvaallende>`.\n\n- |Fix| :class:`compose.TransformedTargetRegressor` now takes nD targets with\n  an adequate transformer.\n  :pr:`18898` by :user:`Oras Phongpanagnam <panangam>`.\n\n- |API| Adds `verbose_feature_names_out` to :class:`compose.ColumnTransformer`.\n  This flag controls the prefixing of feature names out in\n  :term:`get_feature_names_out`. :pr:`18444` and :pr:`21080` by `Thomas Fan`_.\n\n:mod:`sklearn.covariance`\n.........................\n\n- |Fix| Adds arrays check to :func:`covariance.ledoit_wolf` and\n  :func:`covariance.ledoit_wolf_shrinkage`. :pr:`20416` by :user:`Hugo Defois\n  <defoishugo>`.\n\n- |API| Deprecates the following keys in `cv_results_`: `'mean_score'`,\n  `'std_score'`, and `'split(k)_score'` in favor of `'mean_test_score'`,\n  `'std_test_score'`, and `'split(k)_test_score'`. :pr:`20583` by `Thomas Fan`_.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Enhancement| :func:`datasets.fetch_openml` now supports categories with\n  missing values when returning a pandas dataframe. 
:pr:`19365` by
  `Thomas Fan`_ and :user:`Amanda Dsouza <amy12xx>` and
  :user:`EL-ATEIF Sara <elateifsara>`.

- |Enhancement| :func:`datasets.fetch_kddcup99` raises a better message
  when the cached file is invalid. :pr:`19669` by `Thomas Fan`_.

- |Enhancement| Replace usages of ``__file__`` related to resource file I/O
  with ``importlib.resources`` to avoid the assumption that these resource
  files (e.g. ``iris.csv``) already exist on a filesystem, and by extension
  to enable compatibility with tools such as ``PyOxidizer``.
  :pr:`20297` by :user:`Jack Liu <jackzyliu>`.

- |Fix| Shorten data file names in the openml tests to better support
  installing on Windows and its default 260 character limit on file names.
  :pr:`20209` by `Thomas Fan`_.

- |Fix| :func:`datasets.fetch_kddcup99` returns dataframes when
  `return_X_y=True` and `as_frame=True`. :pr:`19011` by `Thomas Fan`_.

- |API| Deprecates `datasets.load_boston` in 1.0; it will be removed
  in 1.2. Alternative code snippets to load similar datasets are provided.
  Please refer to the docstring of the function for details.
  :pr:`20729` by `Guillaume Lemaitre`_.


:mod:`sklearn.decomposition`
............................

- |Enhancement| added a new approximate solver (randomized SVD, available with
  `eigen_solver='randomized'`) to :class:`decomposition.KernelPCA`. This
  significantly accelerates computation when the number of samples is much
  larger than the desired number of components.
  :pr:`12069` by :user:`Sylvain Marié <smarie>`.

- |Fix| Fixes incorrect multiple data-conversion warnings when clustering
  boolean data. :pr:`19046` by :user:`Surya Prakash <jdsurya>`.

- |Fix| Fixed :func:`decomposition.dict_learning`, used by
  :class:`decomposition.DictionaryLearning`, to ensure determinism of the
  output. Achieved by flipping signs of the SVD output which is used to
  initialize the code. 
:pr:`18433` by :user:`Bruno Charron <brcharron>`.\n\n- |Fix| Fixed a bug in :class:`decomposition.MiniBatchDictionaryLearning`,\n  :class:`decomposition.MiniBatchSparsePCA` and\n  :func:`decomposition.dict_learning_online` where the update of the dictionary\n  was incorrect. :pr:`19198` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Fixed a bug in :class:`decomposition.DictionaryLearning`,\n  :class:`decomposition.SparsePCA`,\n  :class:`decomposition.MiniBatchDictionaryLearning`,\n  :class:`decomposition.MiniBatchSparsePCA`,\n  :func:`decomposition.dict_learning` and\n  :func:`decomposition.dict_learning_online` where the restart of unused atoms\n  during the dictionary update was not working as expected. :pr:`19198` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |API| In :class:`decomposition.DictionaryLearning`,\n  :class:`decomposition.MiniBatchDictionaryLearning`,\n  :func:`decomposition.dict_learning` and\n  :func:`decomposition.dict_learning_online`, `transform_alpha` will be equal\n  to `alpha` instead of 1.0 by default starting from version 1.2 :pr:`19159` by\n  :user:`Beno\u00eet Mal\u00e9zieux <bmalezieux>`.\n\n- |API| Rename variable names in :class:`decomposition.KernelPCA` to improve\n  readability. `lambdas_` and `alphas_` are renamed to `eigenvalues_`\n  and `eigenvectors_`, respectively. `lambdas_` and `alphas_` are\n  deprecated and will be removed in 1.2.\n  :pr:`19908` by :user:`Kei Ishikawa <kstoneriv3>`.\n\n- |API| The `alpha` and `regularization` parameters of :class:`decomposition.NMF` and\n  :func:`decomposition.non_negative_factorization` are deprecated and will be removed\n  in 1.2. Use the new parameters `alpha_W` and `alpha_H` instead. 
:pr:`20512` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.dummy`
....................

- |API| Attribute `n_features_in_` in :class:`dummy.DummyClassifier` and
  :class:`dummy.DummyRegressor` is deprecated and will be removed in 1.2.
  :pr:`20960` by `Thomas Fan`_.

:mod:`sklearn.ensemble`
.......................

- |Enhancement| :class:`~sklearn.ensemble.HistGradientBoostingClassifier` and
  :class:`~sklearn.ensemble.HistGradientBoostingRegressor` take cgroups quotas
  into account when deciding the number of threads used by OpenMP. This
  avoids performance problems caused by over-subscription when using those
  classes in a docker container for instance. :pr:`20477`
  by `Thomas Fan`_.

- |Enhancement| :class:`~sklearn.ensemble.HistGradientBoostingClassifier` and
  :class:`~sklearn.ensemble.HistGradientBoostingRegressor` are no longer
  experimental. They are now considered stable and are subject to the same
  deprecation cycles as all other estimators. :pr:`19799` by `Nicolas Hug`_.

- |Enhancement| Improve the HTML rendering of the
  :class:`ensemble.StackingClassifier` and :class:`ensemble.StackingRegressor`.
  :pr:`19564` by `Thomas Fan`_.

- |Enhancement| Added Poisson criterion to
  :class:`ensemble.RandomForestRegressor`. :pr:`19836` by :user:`Brian Sun
  <bsun94>`.

- |Fix| Disallow computing the out-of-bag (OOB) score in
  :class:`ensemble.RandomForestClassifier` and
  :class:`ensemble.ExtraTreesClassifier` with a multiclass-multioutput target,
  since scikit-learn does not provide any metric supporting this type of
  target. 
Additional private refactoring was performed.\n  :pr:`19162` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Improve numerical precision for weights boosting in\n  :class:`ensemble.AdaBoostClassifier` and :class:`ensemble.AdaBoostRegressor`\n  to avoid underflows.\n  :pr:`10096` by :user:`Fenil Suchak <fenilsuchak>`.\n\n- |Fix| Fixed the range of the argument ``max_samples`` to be ``(0.0, 1.0]``\n  in :class:`ensemble.RandomForestClassifier`,\n  :class:`ensemble.RandomForestRegressor`, where `max_samples=1.0` is\n  interpreted as using all `n_samples` for bootstrapping. :pr:`20159` by\n  :user:`murata-yu`.\n\n- |Fix| Fixed a bug in :class:`ensemble.AdaBoostClassifier` and\n  :class:`ensemble.AdaBoostRegressor` where the `sample_weight` parameter\n  got overwritten during `fit`.\n  :pr:`20534` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |API| Removes `tol=None` option in\n  :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor`. Please use `tol=0` for\n  the same behavior. :pr:`19296` by `Thomas Fan`_.\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Fix| Fixed a bug in :class:`feature_extraction.text.HashingVectorizer`\n  where some input strings would result in negative indices in the transformed\n  data. 
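The fixed invariant can be checked directly (the input strings and `n_features` are arbitrary):

```python
from sklearn.feature_extraction.text import HashingVectorizer

vec = HashingVectorizer(n_features=16)
X = vec.transform(["some input strings", "hash to a fixed-size space"])
# After the fix, every stored column index is valid:
# non-negative and strictly below n_features
assert X.shape == (2, 16)
assert X.indices.min() >= 0 and X.indices.max() < 16
```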
:pr:`19035` by :user:`Liu Yu <ly648499246>`.\n\n- |Fix| Fixed a bug in :class:`feature_extraction.DictVectorizer` by raising an\n  error with unsupported value type.\n  :pr:`19520` by :user:`Jeff Zhao <kamiyaa>`.\n\n- |Fix| Fixed a bug in :func:`feature_extraction.image.img_to_graph`\n  and :func:`feature_extraction.image.grid_to_graph` where singleton connected\n  components were not handled properly, resulting in a wrong vertex indexing.\n  :pr:`18964` by `Bertrand Thirion`_.\n\n- |Fix| Raise a warning in :class:`feature_extraction.text.CountVectorizer`\n  with `lowercase=True` when there are vocabulary entries with uppercase\n  characters to avoid silent misses in the resulting feature vectors.\n  :pr:`19401` by :user:`Zito Relova <zitorelova>`\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Feature| :func:`feature_selection.r_regression` computes Pearson's R\n  correlation coefficients between the features and the target.\n  :pr:`17169` by :user:`Dmytro Lituiev <DSLituiev>`\n  and :user:`Julien Jerphanion <jjerphan>`.\n\n- |Enhancement| :func:`feature_selection.RFE.fit` accepts additional estimator\n  parameters that are passed directly to the estimator's `fit` method.\n  :pr:`20380` by :user:`Iv\u00e1n Pulido <ijpulidos>`, :user:`Felipe Bidu <fbidu>`,\n  :user:`Gil Rutter <g-rutter>`, and :user:`Adrin Jalali <adrinjalali>`.\n\n- |FIX| Fix a bug in :func:`isotonic.isotonic_regression` where the\n  `sample_weight` passed by a user were overwritten during ``fit``.\n  :pr:`20515` by :user:`Carsten Allefeld <allefeld>`.\n\n- |Fix| Change :func:`feature_selection.SequentialFeatureSelector` to\n  allow for unsupervised modelling so that the `fit` signature need not\n  do any `y` validation and allow for `y=None`.\n  :pr:`19568` by :user:`Shyam Desai <ShyamDesai>`.\n\n- |API| Raises an error in :class:`feature_selection.VarianceThreshold`\n  when the variance threshold is negative.\n  :pr:`20207` by :user:`Tomohiro Endo 
<europeanplaice>`\n\n- |API| Deprecates `grid_scores_` in favor of split scores in `cv_results_` in\n  :class:`feature_selection.RFECV`. `grid_scores_` will be removed in\n  version 1.2.\n  :pr:`20161` by :user:`Shuhei Kayawari <wowry>` and :user:`arka204`.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |Enhancement| Add `max_samples` parameter in\n  :func:`inspection.permutation_importance`. It enables to draw a subset of the\n  samples to compute the permutation importance. This is useful to keep the\n  method tractable when evaluating feature importance on large datasets.\n  :pr:`20431` by :user:`Oliver Pfaffel <o1iv3r>`.\n\n- |Enhancement| Add kwargs to format ICE and PD lines separately in partial\n  dependence plots `inspection.plot_partial_dependence` and\n  :meth:`inspection.PartialDependenceDisplay.plot`. :pr:`19428` by :user:`Mehdi\n  Hamoumi <mhham>`.\n\n- |Fix| Allow multiple scorers input to\n  :func:`inspection.permutation_importance`. :pr:`19411` by :user:`Simona\n  Maggio <simonamaggio>`.\n\n- |API| :class:`inspection.PartialDependenceDisplay` exposes a class method:\n  :func:`~inspection.PartialDependenceDisplay.from_estimator`.\n  `inspection.plot_partial_dependence` is deprecated in favor of the\n  class method and will be removed in 1.2. :pr:`20959` by `Thomas Fan`_.\n\n:mod:`sklearn.kernel_approximation`\n...................................\n\n- |Fix| Fix a bug in :class:`kernel_approximation.Nystroem`\n  where the attribute `component_indices_` did not correspond to the subset of\n  sample indices used to generate the approximated kernel. 
:pr:`20554` by
  :user:`Xiangyin Kong <kxytim>`.

:mod:`sklearn.linear_model`
...........................

- |MajorFeature| Added :class:`linear_model.QuantileRegressor` which implements
  linear quantile regression with L1 penalty.
  :pr:`9978` by :user:`David Dale <avidale>` and
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| The new :class:`linear_model.SGDOneClassSVM` provides an SGD
  implementation of the linear One-Class SVM. Combined with kernel
  approximation techniques, this implementation approximates the solution of
  a kernelized One Class SVM while benefitting from a linear
  complexity in the number of samples.
  :pr:`10027` by :user:`Albert Thomas <albertcthomas>`.

- |Feature| Added `sample_weight` parameter to
  :class:`linear_model.LassoCV` and :class:`linear_model.ElasticNetCV`.
  :pr:`16449` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| Added new solver `lbfgs` (available with `solver="lbfgs"`)
  and `positive` argument to :class:`linear_model.Ridge`. When `positive` is
  set to `True`, it forces the coefficients to be positive (only supported by
  `lbfgs`). :pr:`20231` by :user:`Toshihiro Nakae <tnakae>`.

- |Efficiency| The implementation of :class:`linear_model.LogisticRegression`
  has been optimised for dense matrices when using `solver='newton-cg'` and
  `multi_class!='multinomial'`.
  :pr:`19571` by :user:`Julien Jerphanion <jjerphan>`.

- |Enhancement| `fit` method preserves dtype for numpy.float32 in
  :class:`linear_model.Lars`, :class:`linear_model.LassoLars`,
  :class:`linear_model.LassoLarsIC`, :class:`linear_model.LarsCV` and
  :class:`linear_model.LassoLarsCV`. :pr:`20155` by :user:`Takeshi Oura
  <takoika>`.

- |Enhancement| Validate user-supplied gram matrix passed to linear models
  via the `precompute` argument. :pr:`19004` by :user:`Adam Midvidy <amidvidy>`.

- |Fix| :meth:`linear_model.ElasticNet.fit` no longer modifies `sample_weight`
  in place. 
:pr:`19055` by `Thomas Fan`_.

- |Fix| :class:`linear_model.Lasso` and :class:`linear_model.ElasticNet` now
  report a `dual_gap_` consistent with their objective. :pr:`19172`
  by :user:`Mathurin Massias <mathurinm>`.

- |Fix| `sample_weight` is now fully taken into account in linear models
  when `normalize=True` for both feature centering and feature
  scaling.
  :pr:`19426` by :user:`Alexandre Gramfort <agramfort>` and
  :user:`Maria Telenczuk <maikia>`.

- |Fix| Points with residuals equal to ``residual_threshold`` are now considered
  inliers for :class:`linear_model.RANSACRegressor`. This allows fitting
  a model perfectly on some datasets when `residual_threshold=0`.
  :pr:`19499` by :user:`Gregory Strubel <gregorystrubel>`.

- |Fix| Sample weight invariance for :class:`linear_model.Ridge` was fixed in
  :pr:`19616` by :user:`Oliver Grisel <ogrisel>` and :user:`Christian Lorentzen
  <lorentzenchr>`.

- |Fix| The dictionary `params` in :func:`linear_model.enet_path` and
  :func:`linear_model.lasso_path` should only contain parameters of the
  coordinate descent solver. Otherwise, an error will be raised.
  :pr:`19391` by :user:`Shao Yang Hong <hongshaoyang>`.

- |API| Raise a warning in :class:`linear_model.RANSACRegressor` that from
  version 1.2, `min_samples` needs to be set explicitly for models other than
  :class:`linear_model.LinearRegression`. :pr:`19390` by :user:`Shao Yang Hong
  <hongshaoyang>`.

- |API| The parameter ``normalize`` of :class:`linear_model.LinearRegression`
  is deprecated and will be removed in 1.2. Motivation for this deprecation:
  the ``normalize`` parameter had no effect if ``fit_intercept`` was set
  to False and therefore was deemed confusing. 
The behavior of the deprecated\n  ``LinearModel(normalize=True)`` can be reproduced with a\n  :class:`~sklearn.pipeline.Pipeline` with ``LinearModel`` (where\n  ``LinearModel`` is :class:`~linear_model.LinearRegression`,\n  :class:`~linear_model.Ridge`, :class:`~linear_model.RidgeClassifier`,\n  :class:`~linear_model.RidgeCV` or :class:`~linear_model.RidgeClassifierCV`)\n  as follows: ``make_pipeline(StandardScaler(with_mean=False),\n  LinearModel())``. The ``normalize`` parameter in\n  :class:`~linear_model.LinearRegression` was deprecated in :pr:`17743` by\n  :user:`Maria Telenczuk <maikia>` and :user:`Alexandre Gramfort <agramfort>`.\n  Same for :class:`~linear_model.Ridge`,\n  :class:`~linear_model.RidgeClassifier`, :class:`~linear_model.RidgeCV`, and\n  :class:`~linear_model.RidgeClassifierCV`, in: :pr:`17772` by :user:`Maria\n  Telenczuk <maikia>` and :user:`Alexandre Gramfort <agramfort>`. Same for\n  :class:`~linear_model.BayesianRidge`, :class:`~linear_model.ARDRegression`\n  in: :pr:`17746` by :user:`Maria Telenczuk <maikia>`. Same for\n  :class:`~linear_model.Lasso`, :class:`~linear_model.LassoCV`,\n  :class:`~linear_model.ElasticNet`, :class:`~linear_model.ElasticNetCV`,\n  :class:`~linear_model.MultiTaskLasso`,\n  :class:`~linear_model.MultiTaskLassoCV`,\n  :class:`~linear_model.MultiTaskElasticNet`,\n  :class:`~linear_model.MultiTaskElasticNetCV`, in: :pr:`17785` by :user:`Maria\n  Telenczuk <maikia>` and :user:`Alexandre Gramfort <agramfort>`.\n\n- |API| The ``normalize`` parameter of\n  :class:`~linear_model.OrthogonalMatchingPursuit` and\n  :class:`~linear_model.OrthogonalMatchingPursuitCV` will default to False in\n  1.2 and will be removed in 1.4. :pr:`17750` by :user:`Maria Telenczuk\n  <maikia>` and :user:`Alexandre Gramfort <agramfort>`. 
Same for\n  :class:`~linear_model.Lars` :class:`~linear_model.LarsCV`\n  :class:`~linear_model.LassoLars` :class:`~linear_model.LassoLarsCV`\n  :class:`~linear_model.LassoLarsIC`, in :pr:`17769` by :user:`Maria Telenczuk\n  <maikia>` and :user:`Alexandre Gramfort <agramfort>`.\n\n- |API| Keyword validation has moved from `__init__` and `set_params` to `fit`\n  for the following estimators conforming to scikit-learn's conventions:\n  :class:`~linear_model.SGDClassifier`,\n  :class:`~linear_model.SGDRegressor`,\n  :class:`~linear_model.SGDOneClassSVM`,\n  :class:`~linear_model.PassiveAggressiveClassifier`, and\n  :class:`~linear_model.PassiveAggressiveRegressor`.\n  :pr:`20683` by `Guillaume Lemaitre`_.\n\n:mod:`sklearn.manifold`\n.......................\n\n- |Enhancement| Implement `'auto'` heuristic for the `learning_rate` in\n  :class:`manifold.TSNE`. It will become default in 1.2. The default\n  initialization will change to `pca` in 1.2. PCA initialization will\n  be scaled to have standard deviation 1e-4 in 1.2.\n  :pr:`19491` by :user:`Dmitry Kobak <dkobak>`.\n\n- |Fix| Change numerical precision to prevent underflow issues\n  during affinity matrix computation for :class:`manifold.TSNE`.\n  :pr:`19472` by :user:`Dmitry Kobak <dkobak>`.\n\n- |Fix| :class:`manifold.Isomap` now uses `scipy.sparse.csgraph.shortest_path`\n  to compute the graph shortest path. It also connects disconnected components\n  of the neighbors graph along some minimum distance pairs, instead of changing\n  every infinite distances to zero. :pr:`20531` by `Roman Yurchak`_ and `Tom\n  Dupre la Tour`_.\n\n- |Fix| Decrease the numerical default tolerance in the lobpcg call\n  in :func:`manifold.spectral_embedding` to prevent numerical instability.\n  :pr:`21194` by :user:`Andrew Knyazev <lobpcg>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Feature| :func:`metrics.mean_pinball_loss` exposes the pinball loss for\n  quantile regression. 
:pr:`19415` by :user:`Xavier Dupr\u00e9 <sdpython>`\n  and :user:`Oliver Grisel <ogrisel>`.\n\n- |Feature| :func:`metrics.d2_tweedie_score` calculates the D^2 regression\n  score for Tweedie deviances with power parameter ``power``. This is a\n  generalization of the `r2_score` and can be interpreted as percentage of\n  Tweedie deviance explained.\n  :pr:`17036` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Feature|  :func:`metrics.mean_squared_log_error` now supports\n  `squared=False`.\n  :pr:`20326` by :user:`Uttam kumar <helper-uttam>`.\n\n- |Efficiency| Improved speed of :func:`metrics.confusion_matrix` when labels\n  are integral.\n  :pr:`9843` by :user:`Jon Crall <Erotemic>`.\n\n- |Enhancement| A fix to raise an error in :func:`metrics.hinge_loss` when\n  ``pred_decision`` is 1d whereas it is a multiclass classification or when\n  ``pred_decision`` parameter is not consistent with the ``labels`` parameter.\n  :pr:`19643` by :user:`Pierre Attard <PierreAttard>`.\n\n- |Fix| :meth:`metrics.ConfusionMatrixDisplay.plot` uses the correct max\n  for colormap. :pr:`19784` by `Thomas Fan`_.\n\n- |Fix| Samples with zero `sample_weight` values do not affect the results\n  from :func:`metrics.det_curve`, :func:`metrics.precision_recall_curve`\n  and :func:`metrics.roc_curve`.\n  :pr:`18328` by :user:`Albert Villanova del Moral <albertvillanova>` and\n  :user:`Alonso Silva Allende <alonsosilvaallende>`.\n\n- |Fix| avoid overflow in :func:`metrics.adjusted_rand_score` with\n  large amount of data. 
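Basic usage of the metric touched by this fix (labels are toy values; the overflow itself only appears on very large inputs):

```python
from sklearn.metrics import adjusted_rand_score

# ARI is invariant to label permutation: these two partitions agree
# perfectly even though the label values differ
score = adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0])
assert abs(score - 1.0) < 1e-12
```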
:pr:`20312` by :user:`Divyanshu Deoli
  <divyanshudeoli>`.

- |API| :class:`metrics.ConfusionMatrixDisplay` exposes two class methods
  :func:`~metrics.ConfusionMatrixDisplay.from_estimator` and
  :func:`~metrics.ConfusionMatrixDisplay.from_predictions` allowing one to create
  a confusion matrix plot using an estimator or the predictions.
  `metrics.plot_confusion_matrix` is deprecated in favor of these two
  class methods and will be removed in 1.2.
  :pr:`18543` by `Guillaume Lemaitre`_.

- |API| :class:`metrics.PrecisionRecallDisplay` exposes two class methods
  :func:`~metrics.PrecisionRecallDisplay.from_estimator` and
  :func:`~metrics.PrecisionRecallDisplay.from_predictions` allowing one to create
  a precision-recall curve using an estimator or the predictions.
  `metrics.plot_precision_recall_curve` is deprecated in favor of these
  two class methods and will be removed in 1.2.
  :pr:`20552` by `Guillaume Lemaitre`_.

- |API| :class:`metrics.DetCurveDisplay` exposes two class methods
  :func:`~metrics.DetCurveDisplay.from_estimator` and
  :func:`~metrics.DetCurveDisplay.from_predictions` allowing one to create
  a DET curve plot using an estimator or the predictions.
  `metrics.plot_det_curve` is deprecated in favor of these two
  class methods and will be removed in 1.2.
  :pr:`19278` by `Guillaume Lemaitre`_.

:mod:`sklearn.mixture`
......................

- |Fix| Ensure that the best parameters are set appropriately
  in the case of divergence for :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture`.
  :pr:`20030` by :user:`Tingshan Liu <tliu68>` and
  :user:`Benjamin Pedigo <bdpedigo>`.

:mod:`sklearn.model_selection`
..............................

- |Feature| added :class:`model_selection.StratifiedGroupKFold`, which combines
  :class:`model_selection.StratifiedKFold` and
  :class:`model_selection.GroupKFold`, providing the ability to split data
  preserving the distribution of classes in 
each split while keeping each\n  group within a single split.\n  :pr:`18649` by :user:`Leandro Hermida <hermidalc>` and\n  :user:`Rodion Martynov <marrodion>`.\n\n- |Enhancement| warn only once in the main process for per-split fit failures\n  in cross-validation. :pr:`20619` by :user:`Lo\u00efc Est\u00e8ve <lesteve>`\n\n- |Enhancement| The `model_selection.BaseShuffleSplit` base class is\n  now public. :pr:`20056` by :user:`pabloduque0`.\n\n- |Fix| Avoid premature overflow in :func:`model_selection.train_test_split`.\n  :pr:`20904` by :user:`Tomasz Jakubek <t-jakubek>`.\n\n:mod:`sklearn.naive_bayes`\n..........................\n\n- |Fix| The `fit` and `partial_fit` methods of the discrete naive Bayes\n  classifiers (:class:`naive_bayes.BernoulliNB`,\n  :class:`naive_bayes.CategoricalNB`, :class:`naive_bayes.ComplementNB`,\n  and :class:`naive_bayes.MultinomialNB`) now correctly handle the degenerate\n  case of a single class in the training set.\n  :pr:`18925` by :user:`David Poznik <dpoznik>`.\n\n- |API| The attribute ``sigma_`` is now deprecated in\n  :class:`naive_bayes.GaussianNB` and will be removed in 1.2.\n  Use ``var_`` instead.\n  :pr:`18842` by :user:`Hong Shao Yang <hongshaoyang>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Enhancement| The creation of :class:`neighbors.KDTree` and\n  :class:`neighbors.BallTree` has been improved for their worst-cases time\n  complexity from :math:`\\mathcal{O}(n^2)` to :math:`\\mathcal{O}(n)`.\n  :pr:`19473` by :user:`jiefangxuanyan <jiefangxuanyan>` and\n  :user:`Julien Jerphanion <jjerphan>`.\n\n- |FIX| `neighbors.DistanceMetric` subclasses now support readonly\n  memory-mapped datasets. 
:pr:`19883` by :user:`Julien Jerphanion <jjerphan>`.

- |FIX| :class:`neighbors.NearestNeighbors`, :class:`neighbors.KNeighborsClassifier`,
  :class:`neighbors.RadiusNeighborsClassifier`, :class:`neighbors.KNeighborsRegressor`
  and :class:`neighbors.RadiusNeighborsRegressor` no longer validate `weights` in
  `__init__`; `weights` is validated in `fit` instead. :pr:`20072` by
  :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.

- |API| The parameter `kwargs` of :class:`neighbors.RadiusNeighborsClassifier` is
  deprecated and will be removed in 1.2.
  :pr:`20842` by :user:`Juan Martín Loyola <jmloyola>`.

:mod:`sklearn.neural_network`
.............................

- |Fix| :class:`neural_network.MLPClassifier` and
  :class:`neural_network.MLPRegressor` now correctly support continued training
  when loading from a pickled file. :pr:`19631` by `Thomas Fan`_.

:mod:`sklearn.pipeline`
.......................

- |API| The `predict_proba` and `predict_log_proba` methods of
  :class:`pipeline.Pipeline` now support passing prediction kwargs to the final
  estimator. 
:pr:`19790` by :user:`Christopher Flynn <crflynn>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Feature| The new :class:`preprocessing.SplineTransformer` is a feature\n  preprocessing tool for the generation of B-splines, parametrized by the\n  polynomial ``degree`` of the splines, number of knots ``n_knots`` and knot\n  positioning strategy ``knots``.\n  :pr:`18368` by :user:`Christian Lorentzen <lorentzenchr>`.\n  :class:`preprocessing.SplineTransformer` also supports periodic\n  splines via the ``extrapolation`` argument.\n  :pr:`19483` by :user:`Malte Londschien <mlondschien>`.\n  :class:`preprocessing.SplineTransformer` supports sample weights for\n  knot position strategy ``\"quantile\"``.\n  :pr:`20526` by :user:`Malte Londschien <mlondschien>`.\n\n- |Feature| :class:`preprocessing.OrdinalEncoder` supports passing through\n  missing values by default. :pr:`19069` by `Thomas Fan`_.\n\n- |Feature| :class:`preprocessing.OneHotEncoder` now supports\n  `handle_unknown='ignore'` and dropping categories. :pr:`19041` by\n  `Thomas Fan`_.\n\n- |Feature| :class:`preprocessing.PolynomialFeatures` now supports passing\n  a tuple to `degree`, i.e. `degree=(min_degree, max_degree)`.\n  :pr:`20250` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Efficiency| :class:`preprocessing.StandardScaler` is faster and more memory\n  efficient. :pr:`20652` by `Thomas Fan`_.\n\n- |Efficiency| Changed ``algorithm`` argument for :class:`cluster.KMeans` in\n  :class:`preprocessing.KBinsDiscretizer` from ``auto`` to ``full``.\n  :pr:`19934` by :user:`Gleb Levitskiy <GLevV>`.\n\n- |Efficiency| The implementation of `fit` for\n  :class:`preprocessing.PolynomialFeatures` transformer is now faster. This is\n  especially noticeable on large sparse input. :pr:`19734` by :user:`Fred\n  Robinson <frrad>`.\n\n- |Fix| The :func:`preprocessing.StandardScaler.inverse_transform` method\n  now raises error when the input data is 1D. 
:pr:`19752` by :user:`Zhehao Liu
  <Max1993Liu>`.

- |Fix| :func:`preprocessing.scale`, :class:`preprocessing.StandardScaler`
  and similar scalers detect near-constant features to avoid scaling them to
  very large values. This problem happens in particular when using a scaler on
  sparse data with a constant column with sample weights, in which case
  centering is typically disabled. :pr:`19527` by :user:`Oliver Grisel
  <ogrisel>` and :user:`Maria Telenczuk <maikia>` and :pr:`19788` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| :meth:`preprocessing.StandardScaler.inverse_transform` now
  correctly handles integer dtypes. :pr:`19356` by :user:`makoeppel`.

- |Fix| :meth:`preprocessing.OrdinalEncoder.inverse_transform` does not
  support sparse matrices and now raises an appropriate error message.
  :pr:`19879` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| The `fit` method of :class:`preprocessing.OrdinalEncoder` no longer
  raises an error when `handle_unknown='ignore'` and unknown categories are
  given to `fit`.
  :pr:`19906` by :user:`Zhehao Liu <MaxwellLZH>`.

- |Fix| Fix a regression in :class:`preprocessing.OrdinalEncoder` where large
  Python numbers would raise an error due to overflow when cast to a C type
  (`np.float64` or `np.int64`).
  :pr:`20727` by `Guillaume Lemaitre`_.

- |Fix| :class:`preprocessing.FunctionTransformer` does not set `n_features_in_`
  based on the input to `inverse_transform`. :pr:`20961` by `Thomas Fan`_.

- |API| The `n_input_features_` attribute of
  :class:`preprocessing.PolynomialFeatures` is deprecated in favor of
  `n_features_in_` and will be removed in 1.2. 
:pr:`20240` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.svm`\n...................\n\n- |API| The parameter `**params` of :func:`svm.OneClassSVM.fit` is\n  deprecated and will be removed in 1.2.\n  :pr:`20843` by :user:`Juan Mart\u00edn Loyola <jmloyola>`.\n\n:mod:`sklearn.tree`\n...................\n\n- |Enhancement| Add `fontname` argument in :func:`tree.export_graphviz`\n  for non-English characters. :pr:`18959` by :user:`Zero <Zeroto521>`\n  and :user:`wstates <wstates>`.\n\n- |Fix| Improves compatibility of :func:`tree.plot_tree` with high DPI screens.\n  :pr:`20023` by `Thomas Fan`_.\n\n- |Fix| Fixed a bug in :class:`tree.DecisionTreeClassifier`,\n  :class:`tree.DecisionTreeRegressor` where a node could be split whereas it\n  should not have been due to incorrect handling of rounding errors.\n  :pr:`19336` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |API| The `n_features_` attribute of :class:`tree.DecisionTreeClassifier`,\n  :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier` and\n  :class:`tree.ExtraTreeRegressor` is deprecated in favor of `n_features_in_`\n  and will be removed in 1.2. :pr:`20272` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |Enhancement| Deprecated the default value of the `random_state=0` in\n  :func:`~sklearn.utils.extmath.randomized_svd`. 
Starting in 1.2,
  the default value of `random_state` will be set to `None`.
  :pr:`19459` by :user:`Cindy Bezuidenhout <cinbez>` and
  :user:`Clifford Akai-Nettey <cliffordEmmanuel>`.

- |Enhancement| Added helper decorator :func:`utils.metaestimators.available_if`
  to provide flexibility in metaestimators, making methods available or
  unavailable on the basis of state, in a more readable way.
  :pr:`19948` by `Joel Nothman`_.

- |Enhancement| :func:`utils.validation.check_is_fitted` now uses
  ``__sklearn_is_fitted__`` if available, instead of checking for attributes
  ending with an underscore. This also makes :class:`pipeline.Pipeline` and
  :class:`preprocessing.FunctionTransformer` pass
  ``check_is_fitted(estimator)``. :pr:`20657` by `Adrin Jalali`_.

- |Fix| Fixed a bug in :func:`utils.sparsefuncs.mean_variance_axis` where the
  precision of the computed variance was very poor when the real variance is
  exactly zero. :pr:`19766` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| The docstrings of properties that are decorated with
  :func:`utils.deprecated` are now properly wrapped. :pr:`20385` by `Thomas
  Fan`_.

- |Fix| `utils.stats._weighted_percentile` now correctly ignores
  zero-weighted observations smaller than the smallest observation with
  positive weight for ``percentile=0``. Affected classes are
  :class:`dummy.DummyRegressor` for ``quantile=0`` and
  `ensemble.HuberLossFunction` for ``alpha=0``.
  :pr:`20528` by :user:`Malte Londschien <mlondschien>`.

- |Fix| :func:`utils._safe_indexing` explicitly takes a dataframe copy when
  integer indices are provided, avoiding a warning from pandas. This
  warning was previously raised in resampling utilities and functions using
  those utilities (e.g. 
:func:`model_selection.train_test_split`,
  :func:`model_selection.cross_validate`,
  :func:`model_selection.cross_val_score`,
  :func:`model_selection.cross_val_predict`).
  :pr:`20673` by :user:`Joris Van den Bossche <jorisvandenbossche>`.

- |Fix| Fix a regression in `utils.is_scalar_nan` where large Python
  numbers would raise an error due to overflow in C types (`np.float64` or
  `np.int64`).
  :pr:`20727` by `Guillaume Lemaitre`_.

- |Fix| Support for `np.matrix` is deprecated in
  :func:`~sklearn.utils.check_array` in 1.0 and will raise a `TypeError` in
  1.2. :pr:`20165` by `Thomas Fan`_.

- |API| `utils._testing.assert_warns` and `utils._testing.assert_warns_message`
  are deprecated in 1.0 and will be removed in 1.2. Use the `pytest.warns`
  context manager instead. Note that these functions were neither documented
  nor part of the public API. :pr:`20521` by :user:`Olivier Grisel <ogrisel>`.

- |API| Fixed several bugs in `utils.graph.graph_shortest_path`, which is
  now deprecated. Use `scipy.sparse.csgraph.shortest_path` instead. :pr:`20531`
  by `Tom Dupre la Tour`_.

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.24, including:

Abdulelah S. Al Mesfer, Abhinav Gupta, Adam J. 
Stewart, Adam Li, Adam Midvidy,
Adrian Garcia Badaracco, Adrian Sadłocha, Adrin Jalali, Agamemnon Krasoulis,
Alberto Rubiales, Albert Thomas, Albert Villanova del Moral, Alek Lefebvre,
Alessia Marcolini, Alexandr Fonari, Alihan Zihna, Aline Ribeiro de Almeida,
Amanda, Amanda Dsouza, Amol Deshmukh, Ana Pessoa, Anavelyz, Andreas Mueller,
Andrew Delong, Ashish, Ashvith Shetty, Atsushi Nukariya, Aurélien Geron, Avi
Gupta, Ayush Singh, baam, BaptBillard, Benjamin Pedigo, Bertrand Thirion,
Bharat Raghunathan, bmalezieux, Brian Rice, Brian Sun, Bruno Charron, Bryan
Chen, bumblebee, caherrera-meli, Carsten Allefeld, CeeThinwa, Chiara Marmo,
chrissobel, Christian Lorentzen, Christopher Yeh, Chuliang Xiao, Clément
Fauchereau, cliffordEmmanuel, Conner Shen, Connor Tann, David Dale, David Katz,
David Poznik, Dimitri Papadopoulos Orfanos, Divyanshu Deoli, dmallia17,
Dmitry Kobak, DS_anas, Eduardo Jardim, EdwinWenink, EL-ATEIF Sara, Eleni
Markou, EricEllwanger, Eric Fiegel, Erich Schubert, Ezri-Mudde, Fatos Morina,
Felipe Rodrigues, Felix Hafner, Fenil Suchak, flyingdutchman23, Flynn, Fortune
Uwha, Francois Berenger, Frankie Robertson, Frans Larsson, Frederick Robinson,
frellwan, Gabriel S Vicente, Gael Varoquaux, genvalen, Geoffrey Thomas,
geroldcsendes, Gleb Levitskiy, Glen, Glòria Macià Muñoz, gregorystrubel,
groceryheist, Guillaume Lemaitre, guiweber, Haidar Almubarak, Hans Moritz
Günther, Haoyin Xu, Harris Mirza, Harry Wei, Harutaka Kawamura, Hassan
Alsawadi, Helder Geovane Gomes de Lima, Hugo DEFOIS, Igor Ilic, Ikko Ashimine,
Isaack Mungui, Ishaan Bhat, Ishan Mishra, Iván Pulido, iwhalvic, J Alexander,
Jack Liu, James Alan Preiss, James Budarz, James Lamb, Jannik, Jeff Zhao,
Jennifer Maldonado, Jérémie du Boisberranger, Jesse Lima, Jianzhu Guo, jnboehm,
Joel Nothman, JohanWork, John Paton, Jonathan Schneider, Jon Crall, Jon Haitz
Legarreta Gorroño, Joris Van den Bossche, José Manuel
Nápoles Duarte, Juan
Carlos Alfaro Jiménez, Juan Martin Loyola, Julien Jerphanion, Julio Batista
Silva, julyrashchenko, JVM, Kadatatlu Kishore, Karen Palacio, Kei Ishikawa,
kmatt10, kobaski, Kot271828, Kunj, KurumeYuta, kxytim, lacrosse91, LalliAcqua,
Laveen Bagai, Leonardo Rocco, Leonardo Uieda, Leopoldo Corona, Loic Esteve,
LSturtew, Luca Bittarello, Luccas Quadros, Lucy Jiménez, Lucy Liu, ly648499246,
Mabu Manaileng, Manimaran, makoeppel, Marco Gorelli, Maren Westermann,
Mariangela, Maria Telenczuk, marielaraj, Martin Hirzel, Mateo Noreña, Mathieu
Blondel, Mathis Batoul, mathurinm, Matthew Calcote, Maxime Prieur, Maxwell,
Mehdi Hamoumi, Mehmet Ali Özer, Miao Cai, Michal Karbownik, michalkrawczyk,
Mitzi, mlondschien, Mohamed Haseeb, Mohamed Khoualed, Muhammad Jarir Kanji,
murata-yu, Nadim Kawwa, Nanshan Li, naozin555, Nate Parsons, Neal Fultz, Nic
Annau, Nicolas Hug, Nicolas Miller, Nico Stefani, Nigel Bosch, Nikita Titov,
Nodar Okroshiashvili, Norbert Preining, novaya, Ogbonna Chibuike Stephen,
OGordon100, Oliver Pfaffel, Olivier Grisel, Oras Phongpanangam, Pablo Duque,
Pablo Ibieta-Jimenez, Patric Lacouth, Paulo S. Costa, Paweł Olszewski, Peter
Dye, PierreAttard, Pierre-Yves Le Borgne, PranayAnchuri, Prince Canuma,
putschblos, qdeffense, RamyaNP, ranjanikrishnan, Ray Bell, Rene Jean Corneille,
Reshama Shaikh, ricardojnf, RichardScottOZ, Rodion Martynov, Rohan Paul, Roman
Lutz, Roman Yurchak, Samuel Brice, Sandy Khosasi, Sean Benhur J, Sebastian
Flores, Sebastian Pölsterl, Shao Yang Hong, shinehide, shinnar, shivamgargsya,
Shooter23, Shuhei Kayawari, Shyam Desai, simonamaggio, Sina Tootoonian,
solosilence, Steven Kolawole, Steve Stagg, Surya Prakash, swpease, Sylvain
Marié, Takeshi Oura, Terence Honles, TFiFiE, Thomas A Caswell, Thomas J.
Fan,
Tim Gates, TimotheeMathieu, Timothy Wolodzko, Tim Vink, t-jakubek, t-kusanagi,
tliu68, Tobias Uhmann, tom1092, Tomás Moreyra, Tomás Ronald Hughes, Tom
Dupré la Tour, Tommaso Di Noto, Tomohiro Endo, TONY GEORGE, Toshihiro NAKAE,
tsuga, Uttam kumar, vadim-ushtanit, Vangelis Gkiastas, Venkatachalam N, Vilém
Zouhar, Vinicius Rios Fuck, Vlasovets, waijean, Whidou, xavier dupré,
xiaoyuchai, Yasmeen Alsaedy, yoch, Yosuke KOBAYASHI, Yu Feng, YusukeNagasaka,
yzhenman, Zero, ZeyuSun, ZhaoweiWang, Zito, Zito Relova
MiniBatchDictionaryLearning    class  decomposition SparsePCA    and  class  decomposition MiniBatchSparsePCA  to be convex and match the referenced   article   pr  19210  by  user  J r mie du Boisberranger  jeremiedbb      mod  sklearn ensemble                              Fix   class  ensemble RandomForestClassifier      class  ensemble RandomForestRegressor      class  ensemble ExtraTreesClassifier    class  ensemble ExtraTreesRegressor     and  class  ensemble RandomTreesEmbedding  now raise a   ValueError   when     bootstrap False   and   max samples   is not   None       pr  21295   user  Haoyin Xu  PSSF23        Fix  Solve a bug in  class  ensemble GradientBoostingClassifier  where the   exponential loss was computing the positive gradient instead of the   negative one     pr  22050  by  user  Guillaume Lemaitre  glemaitre      mod  sklearn feature selection                                       Fix  Fixed  class  feature selection SelectFromModel  by improving support   for base estimators that do not set  feature names in     pr  21991  by    Thomas Fan      mod  sklearn impute                            Fix  Fix a bug in  class  linear model RidgeClassifierCV  where the method    predict  was performing an  argmax  on the scores obtained from    decision function  instead of returning the multilabel indicator matrix     pr  19869  by  user  Guillaume Lemaitre  glemaitre      mod  sklearn linear model                                  Fix   class  linear model LassoLarsIC  now correctly computes AIC   and BIC  An error is now raised when  n features   n samples  and   when the noise variance is not provided     pr  21481  by  user  Guillaume Lemaitre  glemaitre   and    user  Andr s Babino  ababino      mod  sklearn manifold                              Fix  Fixed an unnecessary error when fitting  class  manifold Isomap  with a   precomputed dense distance matrix where the neighbors graph has multiple   disconnected components   pr  21915  by  Tom Dupre 
la Tour      mod  sklearn metrics                             Fix  All  class  sklearn metrics DistanceMetric  subclasses now correctly support   read only buffer attributes    This fixes a regression introduced in 1 0 0 with respect to 0 24 2     pr  21694  by  user  Julien Jerphanion  jjerphan        Fix  All  sklearn metrics MinkowskiDistance  now accepts a weight   parameter that makes it possible to write code that behaves consistently both   with scipy 1 8 and earlier versions  In turns this means that all   neighbors based estimators  except those that use  algorithm  kd tree    now   accept a weight parameter with  metric  minknowski   to yield results that   are always consistent with  scipy spatial distance cdist      pr  21741  by  user  Olivier Grisel  ogrisel      mod  sklearn multiclass                                Fix   meth  multiclass OneVsRestClassifier predict proba  does not error when   fitted on constant integer targets   pr  21871  by  Thomas Fan      mod  sklearn neighbors                               Fix   class  neighbors KDTree  and  class  neighbors BallTree  correctly supports   read only buffer attributes   pr  21845  by  Thomas Fan      mod  sklearn preprocessing                                   Fix  Fixes compatibility bug with NumPy 1 22 in  class  preprocessing OneHotEncoder      pr  21517  by  Thomas Fan      mod  sklearn tree                          Fix  Prevents  func  tree plot tree  from drawing out of the boundary of   the figure   pr  21917  by  Thomas Fan        Fix  Support loading pickles of decision tree models when the pickle has   been generated on a platform with a different bitness  A typical example is   to train and pickle the model on 64 bit machine and load the model on a 32   bit machine for prediction   pr  21552  by  user  Lo c Est ve  lesteve      mod  sklearn utils                           Fix   func  utils estimator html repr  now escapes all the estimator   descriptions in the generated HTML   pr  
21493  by    user  Aur lien Geron  ageron         changes 1 0 1   Version 1 0 1                  October 2021    Fixed models                  Fix  Non fit methods in the following classes do not raise a UserWarning   when fitted on DataFrames with valid feature names     class  covariance EllipticEnvelope    class  ensemble IsolationForest      class  ensemble AdaBoostClassifier    class  neighbors KNeighborsClassifier      class  neighbors KNeighborsRegressor      class  neighbors RadiusNeighborsClassifier      class  neighbors RadiusNeighborsRegressor    pr  21199  by  Thomas Fan      mod  sklearn calibration                                 Fix  Fixed  class  calibration CalibratedClassifierCV  to take into account    sample weight  when computing the base estimator prediction when    ensemble False      pr  20638  by  user  Julien Bohn   JulienB 78        Fix  Fixed a bug in  class  calibration CalibratedClassifierCV  with    method  sigmoid   that was ignoring the  sample weight  when computing the   the Bayesian priors     pr  21179  by  user  Guillaume Lemaitre  glemaitre      mod  sklearn cluster                             Fix  Fixed a bug in  class  cluster KMeans   ensuring reproducibility and equivalence   between sparse and dense input   pr  21195    by  user  J r mie du Boisberranger  jeremiedbb      mod  sklearn ensemble                              Fix  Fixed a bug that could produce a segfault in rare cases for    class  ensemble HistGradientBoostingClassifier  and    class  ensemble HistGradientBoostingRegressor      pr  21130   user  Christian Lorentzen  lorentzenchr      mod  sklearn gaussian process                                      Fix  Compute  y std  properly with multi target in    class  sklearn gaussian process GaussianProcessRegressor  allowing   proper normalization in multi target scene     pr  20761  by  user  Patrick de C  T  R  Ferreira  patrickctrf      mod  sklearn feature extraction                                        
Efficiency  Fixed an efficiency regression introduced in version 1 0 0 in the    transform  method of  class  feature extraction text CountVectorizer  which no   longer checks for uppercase characters in the provided vocabulary   pr  21251    by  user  J r mie du Boisberranger  jeremiedbb        Fix  Fixed a bug in  class  feature extraction text CountVectorizer  and    class  feature extraction text TfidfVectorizer  by raising an   error when  min idf  or  max idf  are floating point numbers greater than 1     pr  20752  by  user  Alek Lefebvre  AlekLefebvre      mod  sklearn linear model                                  Fix  Improves stability of  class  linear model LassoLars  for different   versions of openblas   pr  21340  by  Thomas Fan        Fix   class  linear model LogisticRegression  now raises a better error   message when the solver does not support sparse matrices with int64 indices     pr  21093  by  Tom Dupre la Tour      mod  sklearn neighbors                               Fix   class  neighbors KNeighborsClassifier      class  neighbors KNeighborsRegressor      class  neighbors RadiusNeighborsClassifier      class  neighbors RadiusNeighborsRegressor  with  metric  precomputed   raises   an error for  bsr  and  dok  sparse matrices in methods   fit    kneighbors    and  radius neighbors   due to handling of explicit zeros in  bsr  and  dok     term  sparse graph  formats   pr  21199  by  Thomas Fan      mod  sklearn pipeline                              Fix   meth  pipeline Pipeline get feature names out  correctly passes feature   names out from one step of a pipeline to the next   pr  21351  by    Thomas Fan      mod  sklearn svm                         Fix   class  svm SVC  and  class  svm SVR  check for an inconsistency   in its internal representation and raise an error instead of segfaulting    This fix also resolves    CVE 2020 28975  https   nvd nist gov vuln detail CVE 2020 28975         pr  21336  by  Thomas Fan      mod  sklearn utils   
                        Enhancement   utils validation  check sample weight  can perform a   non negativity check on the sample weights  It can be turned on   using the only non negative bool parameter    Estimators that check for non negative weights are updated     func  linear model LinearRegression   here the previous   error message was misleading      func  ensemble AdaBoostClassifier      func  ensemble AdaBoostRegressor      func  neighbors KernelDensity      pr  20880  by  user  Guillaume Lemaitre  glemaitre     and  user  Andr s Simon  simonandras        Fix  Solve a bug in   sklearn utils metaestimators if delegate has method     where the underlying check for an attribute did not work with NumPy arrays     pr  21145  by  user  Zahlii  Zahlii     Miscellaneous                   Fix  Fitting an estimator on a dataset that has no feature names  that was previously   fitted on a dataset with feature names no longer keeps the old feature names stored in   the  feature names in   attribute   pr  21389  by    user  J r mie du Boisberranger  jeremiedbb         changes 1 0   Version 1 0 0                  September 2021    Minimal dependencies                       Version 1 0 0 of scikit learn requires python 3 7   numpy 1 14 6  and scipy 1 1 0   Optional minimal dependency is matplotlib 2 2 2    Enforcing keyword only arguments                                   In an effort to promote clear and non ambiguous use of the library  most constructor and function parameters must now be passed as keyword arguments  i e  using the  param value  syntax  instead of positional  If a keyword only parameter is used as positional  a  TypeError  is now raised   issue  15005   pr  20002  by  Joel Nothman     Adrin Jalali     Thomas Fan     Nicolas Hug    and  Tom Dupre la Tour    See  SLEP009  https   scikit learn enhancement proposals readthedocs io en latest slep009 proposal html    for more details   Changed models                 The following estimators and functions  
when fit with the same data and parameters  may produce different models from the previous version  This often occurs due to changes in the modelling logic  bug fixes or enhancements   or in random sampling procedures      Fix   class  manifold TSNE  now avoids numerical underflow issues during   affinity matrix computation      Fix   class  manifold Isomap  now connects disconnected components of the   neighbors graph along some minimum distance pairs  instead of changing   every infinite distances to zero      Fix  The splitting criterion of  class  tree DecisionTreeClassifier  and    class  tree DecisionTreeRegressor  can be impacted by a fix in the handling   of rounding errors  Previously some extra spurious splits could occur      Fix   func  model selection train test split  with a  stratify  parameter   and  class  model selection StratifiedShuffleSplit  may lead to slightly   different results   Details are listed in the changelog below    While we are trying to better inform users by providing this information  we cannot assure that this list is complete     Changelog                   Entries should be grouped by module  in alphabetic order  and prefixed with     one of the labels   MajorFeature    Feature    Efficiency    Enhancement        Fix  or  API   see whats new rst for descriptions       Entries should be ordered by those labels  e g   Fix  after  Efficiency        Changes not specific to a module should be listed under  Multiple Modules      or  Miscellaneous       Entries should end with       pr  123456  by  user  Joe Bloggs  joeongithub        where 123456 is the  pull request  number  not the issue number      API  The option for using the squared error via   loss   and     criterion   parameters was made more consistent  The preferred way is by   setting the value to   squared error    Old option names are still valid    produce the same models  but are deprecated and will be removed in version   1 2     pr  19310  by  user  Christian 
Lorentzen  lorentzenchr         For  class  ensemble ExtraTreesRegressor    criterion  mse   is deprecated      use   squared error   instead which is now the default       For  class  ensemble GradientBoostingRegressor    loss  ls   is deprecated      use   squared error   instead which is now the default       For  class  ensemble RandomForestRegressor    criterion  mse   is deprecated      use   squared error   instead which is now the default       For  class  ensemble HistGradientBoostingRegressor    loss  least squares       is deprecated  use   squared error   instead which is now the default       For  class  linear model RANSACRegressor    loss  squared loss   is     deprecated  use   squared error   instead       For  class  linear model SGDRegressor    loss  squared loss   is     deprecated  use   squared error   instead which is now the default       For  class  tree DecisionTreeRegressor    criterion  mse   is deprecated      use   squared error   instead which is now the default       For  class  tree ExtraTreeRegressor    criterion  mse   is deprecated      use   squared error   instead which is now the default      API  The option for using the absolute error via   loss   and     criterion   parameters was made more consistent  The preferred way is by   setting the value to   absolute error    Old option names are still valid    produce the same models  but are deprecated and will be removed in version   1 2     pr  19733  by  user  Christian Lorentzen  lorentzenchr         For  class  ensemble ExtraTreesRegressor    criterion  mae   is deprecated      use   absolute error   instead       For  class  ensemble GradientBoostingRegressor    loss  lad   is deprecated      use   absolute error   instead       For  class  ensemble RandomForestRegressor    criterion  mae   is deprecated      use   absolute error   instead       For  class  ensemble HistGradientBoostingRegressor        loss  least absolute deviation   is deprecated  use   absolute error     
  instead       For  class  linear model RANSACRegressor    loss  absolute loss   is     deprecated  use   absolute error   instead which is now the default       For  class  tree DecisionTreeRegressor    criterion  mae   is deprecated      use   absolute error   instead       For  class  tree ExtraTreeRegressor    criterion  mae   is deprecated      use   absolute error   instead      API   np matrix  usage is deprecated in 1 0 and will raise a  TypeError  in   1 2   pr  20165  by  Thomas Fan        API   term  get feature names out  has been added to the transformer API   to get the names of the output features   get feature names  has in   turn been deprecated   pr  18444  by  Thomas Fan        API  All estimators store  feature names in   when fitted on pandas Dataframes    These feature names are compared to names seen in non  fit  methods  e g     transform  and will raise a  FutureWarning  if they are not consistent    These   FutureWarning   s will become   ValueError   s in 1 2   pr  18010  by    Thomas Fan      mod  sklearn base                          Fix   func  config context  is now threadsafe   pr  18736  by  Thomas Fan      mod  sklearn calibration                                 Feature   func  calibration CalibrationDisplay  added to plot   calibration curves   pr  17443  by  user  Lucy Liu  lucyleeow        Fix  The   predict   and   predict proba   methods of    class  calibration CalibratedClassifierCV  can now properly be used on   prefitted pipelines   pr  19641  by  user  Alek Lefebvre  AlekLefebvre        Fix  Fixed an error when using a  class  ensemble VotingClassifier    as  base estimator  in  class  calibration CalibratedClassifierCV      pr  20087  by  user  Cl ment Fauchereau  clement f       mod  sklearn cluster                             Efficiency  The    k means      initialization of  class  cluster KMeans    and  class  cluster MiniBatchKMeans  is now faster  especially in multicore   settings   pr  19002  by  user  Jon Crall 
 Erotemic   and  user  J r mie du   Boisberranger  jeremiedbb        Efficiency   class  cluster KMeans  with  algorithm  elkan   is now faster   in multicore settings   pr  19052  by    user  Yusuke Nagasaka  YusukeNagasaka        Efficiency   class  cluster MiniBatchKMeans  is now faster in multicore   settings   pr  17622  by  user  J r mie du Boisberranger  jeremiedbb        Efficiency   class  cluster OPTICS  can now cache the output of the   computation of the tree  using the  memory  parameter    pr  19024  by    user  Frankie Robertson  frankier        Enhancement  The  predict  and  fit predict  methods of    class  cluster AffinityPropagation  now accept sparse data type for input   data     pr  20117  by  user  Venkatachalam Natchiappan  venkyyuvy       Fix  Fixed a bug in  class  cluster MiniBatchKMeans  where the sample   weights were partially ignored when the input is sparse   pr  17622  by    user  J r mie du Boisberranger  jeremiedbb        Fix  Improved convergence detection based on center change in    class  cluster MiniBatchKMeans  which was almost never achievable     pr  17622  by  user  J r mie du Boisberranger  jeremiedbb        FIX   class  cluster AgglomerativeClustering  now supports readonly   memory mapped datasets     pr  19883  by  user  Julien Jerphanion  jjerphan        Fix   class  cluster AgglomerativeClustering  correctly connects components   when connectivity and affinity are both precomputed and the number   of connected components is greater than 1   pr  20597  by    Thomas Fan        Fix   class  cluster FeatureAgglomeration  does not accept a     params   kwarg in   the   fit   function anymore  resulting in a more concise error message   pr  20899    by  user  Adam Li  adam2392        Fix  Fixed a bug in  class  cluster KMeans   ensuring reproducibility and equivalence   between sparse and dense input   pr  20200    by  user  J r mie du Boisberranger  jeremiedbb        API   class  cluster Birch  attributes   fit   and  
partial fit    are   deprecated and will be removed in 1 2   pr  19297  by  Thomas Fan        API  the default value for the  batch size  parameter of    class  cluster MiniBatchKMeans  was changed from 100 to 1024 due to   efficiency reasons  The  n iter   attribute of    class  cluster MiniBatchKMeans  now reports the number of started epochs and   the  n steps   attribute reports the number of mini batches processed     pr  17622  by  user  J r mie du Boisberranger  jeremiedbb        API   func  cluster spectral clustering  raises an improved error when passed   a  np matrix    pr  20560  by  Thomas Fan      mod  sklearn compose                             Enhancement   class  compose ColumnTransformer  now records the output   of each transformer in  output indices     pr  18393  by    user  Luca Bittarello  lbittarello        Enhancement   class  compose ColumnTransformer  now allows DataFrame input to   have its columns appear in a changed order in  transform   Further  columns that   are dropped will not be required in transform  and additional columns will be   ignored if  remainder  drop     pr  19263  by  Thomas Fan        Enhancement  Adds    predict params  keyword argument to    meth  compose TransformedTargetRegressor predict  that passes keyword   argument to the regressor     pr  19244  by  user  Ricardo  ricardojnf        FIX   compose ColumnTransformer get feature names  supports   non string feature names returned by any of its transformers  However  note   that   get feature names   is deprecated  use   get feature names out     instead   pr  18459  by  user  Albert Villanova del Moral  albertvillanova     and  user  Alonso Silva Allende  alonsosilvaallende        Fix   class  compose TransformedTargetRegressor  now takes nD targets with   an adequate transformer     pr  18898  by  user  Oras Phongpanagnam  panangam        API  Adds  verbose feature names out  to  class  compose ColumnTransformer     This flag controls the prefixing of feature 
names out in    term  get feature names out    pr  18444  and  pr  21080  by  Thomas Fan      mod  sklearn covariance                                Fix  Adds arrays check to  func  covariance ledoit wolf  and    func  covariance ledoit wolf shrinkage    pr  20416  by  user  Hugo Defois    defoishugo        API  Deprecates the following keys in  cv results      mean score        std score    and   split k  score   in favor of   mean test score       std test score    and   split k  test score     pr  20583  by  Thomas Fan      mod  sklearn datasets                              Enhancement   func  datasets fetch openml  now supports categories with   missing values when returning a pandas dataframe   pr  19365  by    Thomas Fan   and  user  Amanda Dsouza  amy12xx   and    user  EL ATEIF Sara  elateifsara        Enhancement   func  datasets fetch kddcup99  raises a better message   when the cached file is invalid   pr  19669   Thomas Fan        Enhancement  Replace usages of     file     related to resource file I O   with   importlib resources   to avoid the assumption that these resource   files  e g    iris csv    already exist on a filesystem  and by extension   to enable compatibility with tools such as   PyOxidizer       pr  20297  by  user  Jack Liu  jackzyliu        Fix  Shorten data file names in the openml tests to better support   installing on Windows and its default 260 character limit on file names     pr  20209  by  Thomas Fan        Fix   func  datasets fetch kddcup99  returns dataframes when    return X y True  and  as frame True    pr  19011  by  Thomas Fan        API  Deprecates  datasets load boston  in 1 0 and it will be removed   in 1 2  Alternative code snippets to load similar datasets are provided    Please report to the docstring of the function for details     pr  20729  by  Guillaume Lemaitre       mod  sklearn decomposition                                   Enhancement  added a new approximate solver  randomized SVD  available with    
eigen solver  randomized    to  class  decomposition KernelPCA   This   significantly accelerates computation when the number of samples is much   larger than the desired number of components     pr  12069  by  user  Sylvain Mari   smarie        Fix  Fixes incorrect multiple data conversion warnings when clustering   boolean data   pr  19046  by  user  Surya Prakash  jdsurya        Fix  Fixed  func  decomposition dict learning   used by    class  decomposition DictionaryLearning   to ensure determinism of the   output  Achieved by flipping signs of the SVD output which is used to   initialize the code   pr  18433  by  user  Bruno Charron  brcharron        Fix  Fixed a bug in  class  decomposition MiniBatchDictionaryLearning      class  decomposition MiniBatchSparsePCA  and    func  decomposition dict learning online  where the update of the dictionary   was incorrect   pr  19198  by  user  J r mie du Boisberranger  jeremiedbb        Fix  Fixed a bug in  class  decomposition DictionaryLearning      class  decomposition SparsePCA      class  decomposition MiniBatchDictionaryLearning      class  decomposition MiniBatchSparsePCA      func  decomposition dict learning  and    func  decomposition dict learning online  where the restart of unused atoms   during the dictionary update was not working as expected   pr  19198  by    user  J r mie du Boisberranger  jeremiedbb        API  In  class  decomposition DictionaryLearning      class  decomposition MiniBatchDictionaryLearning      func  decomposition dict learning  and    func  decomposition dict learning online    transform alpha  will be equal   to  alpha  instead of 1 0 by default starting from version 1 2  pr  19159  by    user  Beno t Mal zieux  bmalezieux        API  Rename variable names in  class  decomposition KernelPCA  to improve   readability   lambdas   and  alphas   are renamed to  eigenvalues     and  eigenvectors    respectively   lambdas   and  alphas   are   deprecated and will be removed in 1 2     
pr  19908  by  user  Kei Ishikawa  kstoneriv3        API  The  alpha  and  regularization  parameters of  class  decomposition NMF  and    func  decomposition non negative factorization  are deprecated and will be removed   in 1 2  Use the new parameters  alpha W  and  alpha H  instead   pr  20512  by    user  J r mie du Boisberranger  jeremiedbb      mod  sklearn dummy                           API  Attribute  n features in   in  class  dummy DummyRegressor  and    class  dummy DummyRegressor  is deprecated and will be removed in 1 2     pr  20960  by  Thomas Fan      mod  sklearn ensemble                              Enhancement   class   sklearn ensemble HistGradientBoostingClassifier  and    class   sklearn ensemble HistGradientBoostingRegressor  take cgroups quotas   into account when deciding the number of threads used by OpenMP  This   avoids performance problems caused by over subscription when using those   classes in a docker container for instance   pr  20477    by  Thomas Fan        Enhancement   class   sklearn ensemble HistGradientBoostingClassifier  and    class   sklearn ensemble HistGradientBoostingRegressor  are no longer   experimental  They are now considered stable and are subject to the same   deprecation cycles as all other estimators   pr  19799  by  Nicolas Hug        Enhancement  Improve the HTML rendering of the    class  ensemble StackingClassifier  and  class  ensemble StackingRegressor      pr  19564  by  Thomas Fan        Enhancement  Added Poisson criterion to    class  ensemble RandomForestRegressor    pr  19836  by  user  Brian Sun    bsun94        Fix  Do not allow to compute out of bag  OOB  score in    class  ensemble RandomForestClassifier  and    class  ensemble ExtraTreesClassifier  with multiclass multioutput target   since scikit learn does not provide any metric supporting this type of   target  Additional private refactoring was performed     pr  19162  by  user  Guillaume Lemaitre  glemaitre        Fix  Improve numerical 
  precision for weights boosting in :class:`ensemble.AdaBoostClassifier` and
  :class:`ensemble.AdaBoostRegressor` to avoid underflows. :pr:`10096` by
  :user:`Fenil Suchak <fenilsuchak>`.

- |Fix| Fixed the range of the argument `max_samples` to be `(0.0, 1.0]` in
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`, where `max_samples=1.0` is
  interpreted as using all `n_samples` for bootstrapping. :pr:`20159` by
  :user:`murata-yu`.

- |Fix| Fixed a bug in :class:`ensemble.AdaBoostClassifier` and
  :class:`ensemble.AdaBoostRegressor` where the `sample_weight` parameter
  got overwritten during `fit`. :pr:`20534` by
  :user:`Guillaume Lemaitre <glemaitre>`.

- |API| Removes `tol=None` option in
  :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor`. Please use `tol=0` for
  the same behavior. :pr:`19296` by `Thomas Fan`_.

:mod:`sklearn.feature_extraction`
.................................

- |Fix| Fixed a bug in :class:`feature_extraction.text.HashingVectorizer`
  where some input strings would result in negative indices in the
  transformed data. :pr:`19035` by :user:`Liu Yu <ly648499246>`.

- |Fix| Fixed a bug in :class:`feature_extraction.DictVectorizer` by raising
  an error with unsupported value type. :pr:`19520` by
  :user:`Jeff Zhao <kamiyaa>`.

- |Fix| Fixed a bug in :func:`feature_extraction.image.img_to_graph` and
  :func:`feature_extraction.image.grid_to_graph` where singleton connected
  components were not handled properly, resulting in a wrong vertex indexing.
  :pr:`18964` by `Bertrand Thirion`_.

- |Fix| Raise a warning in :class:`feature_extraction.text.CountVectorizer`
  with `lowercase=True` when there are vocabulary entries with uppercase
  characters to avoid silent misses in the resulting feature vectors.
  :pr:`19401` by :user:`Zito Relova <zitorelova>`.

:mod:`sklearn.feature_selection`
................................

- |Feature| :func:`feature_selection.r_regression` computes Pearson's R
  correlation coefficients between the features and the target. :pr:`17169`
  by :user:`Dmytro Lituiev <DSLituiev>` and
  :user:`Julien Jerphanion <jjerphan>`.

- |Enhancement| :meth:`feature_selection.RFE.fit` accepts additional
  estimator parameters that are passed directly to the estimator's `fit`
  method. :pr:`20380` by :user:`Iván Pulido <ijpulidos>`,
  :user:`Felipe Bidu <fbidu>`, :user:`Gil Rutter <g-rutter>`, and
  :user:`Adrin Jalali <adrinjalali>`.

- |Fix| Fix a bug in :func:`isotonic.isotonic_regression` where the
  `sample_weight` passed by a user were overwritten during `fit`. :pr:`20515`
  by :user:`Carsten Allefeld <allefeld>`.

- |Fix| Change :class:`feature_selection.SequentialFeatureSelector` to allow
  for unsupervised modelling so that the `fit` signature need not do any `y`
  validation and allow for `y=None`. :pr:`19568` by
  :user:`Shyam Desai <ShyamDesai>`.

- |API| Raises an error in :class:`feature_selection.VarianceThreshold` when
  the variance threshold is negative. :pr:`20207` by
  :user:`Tomohiro Endo <europeanplaice>`.

- |API| Deprecates `grid_scores_` in favor of split scores in `cv_results_`
  in :class:`feature_selection.RFECV`. `grid_scores_` will be removed in
  version 1.2. :pr:`20161` by :user:`Shuhei Kayawari <wowry>` and
  :user:`arka204`.

:mod:`sklearn.inspection`
.........................

- |Enhancement| Add `max_samples` parameter in
  :func:`inspection.permutation_importance`. It enables to draw a subset of
  the samples to compute the permutation importance. This is useful to keep
  the method tractable when evaluating feature importance on large datasets.
  :pr:`20431` by :user:`Oliver Pfaffel <o1iv3r>`.

- |Enhancement| Add kwargs to format ICE and PD lines separately in partial
  dependence plots `inspection.plot_partial_dependence` and
  :meth:`inspection.PartialDependenceDisplay.plot`. :pr:`19428` by
  :user:`Mehdi Hamoumi <mhham>`.

- |Fix| Allow multiple scorers
  input to :func:`inspection.permutation_importance`. :pr:`19411` by
  :user:`Simona Maggio <simonamaggio>`.

- |API| :class:`inspection.PartialDependenceDisplay` exposes a class method:
  :func:`~inspection.PartialDependenceDisplay.from_estimator`.
  `inspection.plot_partial_dependence` is deprecated in favor of the class
  method and will be removed in 1.2. :pr:`20959` by `Thomas Fan`_.

:mod:`sklearn.kernel_approximation`
...................................

- |Fix| Fix a bug in :class:`kernel_approximation.Nystroem` where the
  attribute `component_indices_` did not correspond to the subset of sample
  indices used to generate the approximated kernel. :pr:`20554` by
  :user:`Xiangyin Kong <kxytim>`.

:mod:`sklearn.linear_model`
...........................

- |MajorFeature| Added :class:`linear_model.QuantileRegressor` which
  implements linear quantile regression with L1 penalty. :pr:`9978` by
  :user:`David Dale <avidale>` and
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| The new :class:`linear_model.SGDOneClassSVM` provides an SGD
  implementation of the linear One-Class SVM. Combined with kernel
  approximation techniques, this implementation approximates the solution of
  a kernelized One-Class SVM while benefitting from a linear complexity in
  the number of samples. :pr:`10027` by
  :user:`Albert Thomas <albertcthomas>`.

- |Feature| Added `sample_weight` parameter to :class:`linear_model.LassoCV`
  and :class:`linear_model.ElasticNetCV`. :pr:`16449` by
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| Added new solver `lbfgs` (available with `solver="lbfgs"`) and
  `positive` argument to :class:`linear_model.Ridge`. When `positive` is set
  to `True`, forces the coefficients to be positive (only supported by
  `lbfgs`). :pr:`20231` by :user:`Toshihiro Nakae <tnakae>`.

- |Efficiency| The implementation of :class:`linear_model.LogisticRegression`
  has been optimised for dense matrices when using `solver='newton-cg'` and
  `multi_class='multinomial'`. :pr:`19571` by
  :user:`Julien Jerphanion <jjerphan>`.

- |Enhancement| `fit` method preserves dtype for numpy.float32 in
  :class:`linear_model.Lars`, :class:`linear_model.LassoLars`,
  :class:`linear_model.LassoLarsIC`, :class:`linear_model.LarsCV` and
  :class:`linear_model.LassoLarsCV`. :pr:`20155` by
  :user:`Takeshi Oura <takoika>`.

- |Enhancement| Validate user-supplied gram matrix passed to linear models
  via the `precompute` argument. :pr:`19004` by
  :user:`Adam Midvidy <amidvidy>`.

- |Fix| :meth:`linear_model.ElasticNet.fit` no longer modifies
  `sample_weight` in place. :pr:`19055` by `Thomas Fan`_.

- |Fix| :class:`linear_model.Lasso` and :class:`linear_model.ElasticNet` no
  longer have a `dual_gap_` not corresponding to their objective. :pr:`19172`
  by :user:`Mathurin Massias <mathurinm>`.

- |Fix| `sample_weight` are now fully taken into account in linear models
  when `normalize=True` for both feature centering and feature scaling.
  :pr:`19426` by :user:`Alexandre Gramfort <agramfort>` and
  :user:`Maria Telenczuk <maikia>`.

- |Fix| Points with residuals equal to `residual_threshold` are now
  considered as inliers for :class:`linear_model.RANSACRegressor`. This
  allows fitting a model perfectly on some datasets when
  `residual_threshold=0`. :pr:`19499` by
  :user:`Gregory Strubel <gregorystrubel>`.

- |Fix| Sample weight invariance for :class:`linear_model.Ridge` was fixed
  in :pr:`19616` by :user:`Olivier Grisel <ogrisel>` and
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Fix| The dictionary `params` in :func:`linear_model.enet_path` and
  :func:`linear_model.lasso_path` should only contain parameter of the
  coordinate descent solver. Otherwise, an error will be raised. :pr:`19391`
  by :user:`Shao Yang Hong <hongshaoyang>`.

- |API| Raise a warning in :class:`linear_model.RANSACRegressor` that from
  version 1.2, `min_samples` need to be set explicitly for models other than
  :class:`linear_model.LinearRegression`. :pr:`19390` by
  :user:`Shao Yang Hong <hongshaoyang>`.

- |API| The parameter `normalize` of :class:`linear_model.LinearRegression`
  is deprecated and will be removed in 1.2. Motivation for this deprecation:
  the `normalize` parameter did not take any effect if `fit_intercept` was
  set to False and therefore was deemed confusing. The behavior of the
  deprecated `LinearModel(normalize=True)` can be reproduced with a
  :class:`~sklearn.pipeline.Pipeline` with `LinearModel` (where `LinearModel`
  is :class:`~linear_model.LinearRegression`, :class:`~linear_model.Ridge`,
  :class:`~linear_model.RidgeClassifier`, :class:`~linear_model.RidgeCV` or
  :class:`~linear_model.RidgeClassifierCV`) as follows:
  `make_pipeline(StandardScaler(with_mean=False), LinearModel())`. The
  `normalize` parameter in :class:`~linear_model.LinearRegression` was
  deprecated in :pr:`17743` by :user:`Maria Telenczuk <maikia>` and
  :user:`Alexandre Gramfort <agramfort>`. Same for
  :class:`~linear_model.Ridge`, :class:`~linear_model.RidgeClassifier`,
  :class:`~linear_model.RidgeCV`, and
  :class:`~linear_model.RidgeClassifierCV`, in :pr:`17772` by
  :user:`Maria Telenczuk <maikia>` and
  :user:`Alexandre Gramfort <agramfort>`. Same for
  :class:`~linear_model.BayesianRidge`,
  :class:`~linear_model.ARDRegression` in :pr:`17746` by
  :user:`Maria Telenczuk <maikia>`. Same for :class:`~linear_model.Lasso`,
  :class:`~linear_model.LassoCV`, :class:`~linear_model.ElasticNet`,
  :class:`~linear_model.ElasticNetCV`,
  :class:`~linear_model.MultiTaskLasso`,
  :class:`~linear_model.MultiTaskLassoCV`,
  :class:`~linear_model.MultiTaskElasticNet`,
  :class:`~linear_model.MultiTaskElasticNetCV`, in :pr:`17785` by
  :user:`Maria Telenczuk <maikia>` and
  :user:`Alexandre Gramfort <agramfort>`.

- |API| The `normalize` parameter of
  :class:`~linear_model.OrthogonalMatchingPursuit` and
  :class:`~linear_model.OrthogonalMatchingPursuitCV` will default to False
  in 1.2
  and will be removed in 1.4. :pr:`17750` by
  :user:`Maria Telenczuk <maikia>` and
  :user:`Alexandre Gramfort <agramfort>`. Same for
  :class:`~linear_model.Lars`, :class:`~linear_model.LarsCV`,
  :class:`~linear_model.LassoLars`, :class:`~linear_model.LassoLarsCV`,
  :class:`~linear_model.LassoLarsIC`, in :pr:`17769` by
  :user:`Maria Telenczuk <maikia>` and
  :user:`Alexandre Gramfort <agramfort>`.

- |API| Keyword validation has moved from `__init__` and `set_params` to
  `fit` for the following estimators conforming to scikit-learn's
  conventions: :class:`~linear_model.SGDClassifier`,
  :class:`~linear_model.SGDRegressor`,
  :class:`~linear_model.SGDOneClassSVM`,
  :class:`~linear_model.PassiveAggressiveClassifier`, and
  :class:`~linear_model.PassiveAggressiveRegressor`. :pr:`20683` by
  `Guillaume Lemaitre`_.

:mod:`sklearn.manifold`
.......................

- |Enhancement| Implement `'auto'` heuristic for the `learning_rate` in
  :class:`manifold.TSNE`. It will become default in 1.2. The default
  initialization will change to `pca` in 1.2. PCA initialization will be
  scaled to have standard deviation 1e-4 in 1.2. :pr:`19491` by
  :user:`Dmitry Kobak <dkobak>`.

- |Fix| Change numerical precision to prevent underflow issues during
  affinity matrix computation for :class:`manifold.TSNE`. :pr:`19472` by
  :user:`Dmitry Kobak <dkobak>`.

- |Fix| :class:`manifold.Isomap` now uses
  `scipy.sparse.csgraph.shortest_path` to compute the graph shortest path.
  It also connects disconnected components of the neighbors graph along some
  minimum distance pairs, instead of changing every infinite distances to
  zero. :pr:`20531` by `Roman Yurchak`_ and `Tom Dupré la Tour`_.

- |Fix| Decrease the numerical default tolerance in the lobpcg call in
  :func:`manifold.spectral_embedding` to prevent numerical instability.
  :pr:`21194` by :user:`Andrew Knyazev <lobpcg>`.

:mod:`sklearn.metrics`
......................

- |Feature| :func:`metrics.mean_pinball_loss` exposes the
  pinball loss for quantile regression. :pr:`19415` by
  :user:`Xavier Dupré <sdpython>` and :user:`Olivier Grisel <ogrisel>`.

- |Feature| :func:`metrics.d2_tweedie_score` calculates the :math:`D^2`
  regression score for Tweedie deviances with power parameter `power`. This
  is a generalization of the `r2_score` and can be interpreted as percentage
  of Tweedie deviance explained. :pr:`17036` by
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| :func:`metrics.mean_squared_log_error` now supports
  `squared=False`. :pr:`20326` by :user:`Uttam kumar <helper-uttam>`.

- |Efficiency| Improved speed of :func:`metrics.confusion_matrix` when
  labels are integral. :pr:`9843` by :user:`Jon Crall <Erotemic>`.

- |Enhancement| A fix to raise an error in :func:`metrics.hinge_loss` when
  `pred_decision` is 1d whereas it is a multiclass classification or when
  the `pred_decision` parameter is not consistent with the `labels`
  parameter. :pr:`19643` by :user:`Pierre Attard <PierreAttard>`.

- |Fix| :meth:`metrics.ConfusionMatrixDisplay.plot` uses the correct max
  for colormap. :pr:`19784` by `Thomas Fan`_.

- |Fix| Samples with zero `sample_weight` values do not affect the results
  from :func:`metrics.det_curve`, :func:`metrics.precision_recall_curve`
  and :func:`metrics.roc_curve`. :pr:`18328` by
  :user:`Albert Villanova del Moral <albertvillanova>` and
  :user:`Alonso Silva Allende <alonsosilvaallende>`.

- |Fix| avoid overflow in :func:`metrics.adjusted_rand_score` with large
  amount of data. :pr:`20312` by :user:`Divyanshu Deoli <divyanshudeoli>`.

- |API| :class:`metrics.ConfusionMatrixDisplay` exposes two class methods
  :func:`~metrics.ConfusionMatrixDisplay.from_estimator` and
  :func:`~metrics.ConfusionMatrixDisplay.from_predictions` allowing to
  create a confusion matrix plot using an estimator or the predictions.
  `metrics.plot_confusion_matrix` is deprecated in favor of these two class
  methods and will be removed in 1.2. :pr:`18543` by
  `Guillaume Lemaitre`_.

- |API| :class:`metrics.PrecisionRecallDisplay` exposes two class methods
  :func:`~metrics.PrecisionRecallDisplay.from_estimator` and
  :func:`~metrics.PrecisionRecallDisplay.from_predictions` allowing to
  create a precision-recall curve using an estimator or the predictions.
  `metrics.plot_precision_recall_curve` is deprecated in favor of these two
  class methods and will be removed in 1.2. :pr:`20552` by
  `Guillaume Lemaitre`_.

- |API| :class:`metrics.DetCurveDisplay` exposes two class methods
  :func:`~metrics.DetCurveDisplay.from_estimator` and
  :func:`~metrics.DetCurveDisplay.from_predictions` allowing to create a
  DET curve plot using an estimator or the predictions.
  `metrics.plot_det_curve` is deprecated in favor of these two class methods
  and will be removed in 1.2. :pr:`19278` by `Guillaume Lemaitre`_.

:mod:`sklearn.mixture`
......................

- |Fix| Ensure that the best parameters are set appropriately in the case of
  divergence for :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture`. :pr:`20030` by
  :user:`Tingshan Liu <tliu68>` and :user:`Benjamin Pedigo <bdpedigo>`.

:mod:`sklearn.model_selection`
..............................

- |Feature| added :class:`model_selection.StratifiedGroupKFold`, that
  combines :class:`model_selection.StratifiedKFold` and
  :class:`model_selection.GroupKFold`, providing an ability to split data
  preserving the distribution of classes in each split while keeping each
  group within a single split. :pr:`18649` by
  :user:`Leandro Hermida <hermidalc>` and
  :user:`Rodion Martynov <marrodion>`.

- |Enhancement| warn only once in the main process for per-split fit
  failures in cross-validation. :pr:`20619` by :user:`Loïc Estève <lesteve>`.

- |Enhancement| The `model_selection.BaseShuffleSplit` base class is now
  public. :pr:`20056` by :user:`pabloduque0`.

- |Fix| Avoid premature overflow in :func:`model_selection.train_test_split`.
  :pr:`20904` by :user:`Tomasz Jakubek <t-jakubek>`.

:mod:`sklearn.naive_bayes`
..........................

- |Fix| The `fit` and `partial_fit` methods of the discrete naive Bayes
  classifiers (:class:`naive_bayes.BernoulliNB`,
  :class:`naive_bayes.CategoricalNB`, :class:`naive_bayes.ComplementNB`,
  and :class:`naive_bayes.MultinomialNB`) now correctly handle the
  degenerate case of a single class in the training set. :pr:`18925` by
  :user:`David Poznik <dpoznik>`.

- |API| The attribute `sigma_` is now deprecated in
  :class:`naive_bayes.GaussianNB` and will be removed in 1.2. Use `var_`
  instead. :pr:`18842` by :user:`Hong Shao Yang <hongshaoyang>`.

:mod:`sklearn.neighbors`
........................

- |Enhancement| The creation of :class:`neighbors.KDTree` and
  :class:`neighbors.BallTree` has been improved for their worst-case time
  complexity from :math:`\mathcal{O}(n^2)` to :math:`\mathcal{O}(n)`.
  :pr:`19473` by :user:`jiefangxuanyan <jiefangxuanyan>` and
  :user:`Julien Jerphanion <jjerphan>`.

- |Fix| `neighbors.DistanceMetric` subclasses now support readonly
  memory-mapped datasets. :pr:`19883` by
  :user:`Julien Jerphanion <jjerphan>`.

- |Fix| :class:`neighbors.NearestNeighbors`,
  :class:`neighbors.KNeighborsClassifier`,
  :class:`neighbors.RadiusNeighborsClassifier`,
  :class:`neighbors.KNeighborsRegressor` and
  :class:`neighbors.RadiusNeighborsRegressor` do not validate `weights` in
  `__init__` and validates `weights` in `fit` instead. :pr:`20072` by
  :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.

- |API| The parameter `kwargs` of
  :class:`neighbors.RadiusNeighborsClassifier` is deprecated and will be
  removed in 1.2. :pr:`20842` by :user:`Juan Martín Loyola <jmloyola>`.

:mod:`sklearn.neural_network`
.............................

- |Fix| :class:`neural_network.MLPClassifier` and
  :class:`neural_network.MLPRegressor` now correctly support continued
  training when loading from a pickled file. :pr:`19631` by `Thomas Fan`_.

:mod:`sklearn.pipeline`
.......................

- |API| The `predict_proba` and `predict_log_proba` methods of the
  :class:`pipeline.Pipeline` now support passing prediction kwargs to the
  final estimator. :pr:`19790` by :user:`Christopher Flynn <crflynn>`.

:mod:`sklearn.preprocessing`
............................

- |Feature| The new :class:`preprocessing.SplineTransformer` is a feature
  preprocessing tool for the generation of B-splines, parametrized by the
  polynomial `degree` of the splines, number of knots `n_knots` and knot
  positioning strategy `knots`. :pr:`18368` by
  :user:`Christian Lorentzen <lorentzenchr>`.
  :class:`preprocessing.SplineTransformer` also supports periodic splines
  via the `extrapolation` argument. :pr:`19483` by
  :user:`Malte Londschien <mlondschien>`.
  :class:`preprocessing.SplineTransformer` supports sample weights for knot
  position strategy `"quantile"`. :pr:`20526` by
  :user:`Malte Londschien <mlondschien>`.

- |Feature| :class:`preprocessing.OrdinalEncoder` supports passing through
  missing values by default. :pr:`19069` by `Thomas Fan`_.

- |Feature| :class:`preprocessing.OneHotEncoder` now supports
  `handle_unknown='ignore'` and dropping categories. :pr:`19041` by
  `Thomas Fan`_.

- |Feature| :class:`preprocessing.PolynomialFeatures` now supports passing
  a tuple to `degree`, i.e. `degree=(min_degree, max_degree)`. :pr:`20250`
  by :user:`Christian Lorentzen <lorentzenchr>`.

- |Efficiency| :class:`preprocessing.StandardScaler` is faster and more
  memory efficient. :pr:`20652` by `Thomas Fan`_.

- |Efficiency| Changed `algorithm` argument for :class:`cluster.KMeans` in
  :class:`preprocessing.KBinsDiscretizer` from `auto` to `full`. :pr:`19934`
  by :user:`Gleb Levitskiy <GLevV>`.

- |Efficiency| The implementation of `fit` for
  :class:`preprocessing.PolynomialFeatures` transformer is now faster. This
  is especially noticeable on large sparse input. :pr:`19734` by
  :user:`Fred Robinson <frrad>`.

- |Fix|
  The :func:`preprocessing.StandardScaler.inverse_transform` method now
  raises error when the input data is 1D. :pr:`19752` by
  :user:`Zhehao Liu <Max1993Liu>`.

- |Fix| :func:`preprocessing.scale`, :class:`preprocessing.StandardScaler`
  and similar scalers detect near-constant features to avoid scaling them to
  very large values. This problem happens in particular when using a scaler
  on sparse data with a constant column with sample weights, in which case
  centering is typically disabled. :pr:`19527` by
  :user:`Olivier Grisel <ogrisel>` and :user:`Maria Telenczuk <maikia>` and
  :pr:`19788` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| :meth:`preprocessing.StandardScaler.inverse_transform` now correctly
  handles integer dtypes. :pr:`19356` by :user:`makoeppel`.

- |Fix| :meth:`preprocessing.OrdinalEncoder.inverse_transform` is not
  supporting sparse matrix and raises the appropriate error message.
  :pr:`19879` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| The `fit` method of :class:`preprocessing.OrdinalEncoder` will not
  raise error when `handle_unknown='ignore'` and unknown categories are
  given to `fit`. :pr:`19906` by :user:`Zhehao Liu <MaxwellLZH>`.

- |Fix| Fix a regression in :class:`preprocessing.OrdinalEncoder` where
  large Python numeric would raise an error due to overflow when casted to
  C type (`np.float64` or `np.int64`). :pr:`20727` by
  `Guillaume Lemaitre`_.

- |Fix| :class:`preprocessing.FunctionTransformer` does not set
  `n_features_in_` based on the input to `inverse_transform`. :pr:`20961`
  by `Thomas Fan`_.

- |API| The `n_input_features_` attribute of
  :class:`preprocessing.PolynomialFeatures` is deprecated in favor of
  `n_features_in_` and will be removed in 1.2. :pr:`20240` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.svm`
..................

- |API| The parameter `**params` of :func:`svm.OneClassSVM.fit` is
  deprecated and will be removed in 1.2. :pr:`20843` by
  :user:`Juan Martín Loyola <jmloyola>`.

:mod:`sklearn.tree`
...................

- |Enhancement| Add `fontname` argument in :func:`tree.export_graphviz` for
  non-English characters. :pr:`18959` by :user:`Zero <Zeroto521>` and
  :user:`wstates <wstates>`.

- |Fix| Improves compatibility of :func:`tree.plot_tree` with high DPI
  screens. :pr:`20023` by `Thomas Fan`_.

- |Fix| Fixed a bug in :class:`tree.DecisionTreeClassifier`,
  :class:`tree.DecisionTreeRegressor` where a node could be split whereas it
  should not have been due to incorrect handling of rounding errors.
  :pr:`19336` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |API| The `n_features_` attribute of :class:`tree.DecisionTreeClassifier`,
  :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier`
  and :class:`tree.ExtraTreeRegressor` is deprecated in favor of
  `n_features_in_` and will be removed in 1.2. :pr:`20272` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.utils`
....................

- |Enhancement| Deprecated the default value of the `random_state=0` in
  :func:`~sklearn.utils.extmath.randomized_svd`. Starting in 1.2, the
  default value of `random_state` will be set to `None`. :pr:`19459` by
  :user:`Cindy Bezuidenhout <cinbez>` and
  :user:`Clifford Akai-Nettey <cliffordEmmanuel>`.

- |Enhancement| Added helper decorator
  :func:`utils.metaestimators.available_if` to provide flexibility in
  metaestimators making methods available or unavailable on the basis of
  state, in a more readable way. :pr:`19948` by `Joel Nothman`_.

- |Enhancement| :func:`utils.validation.check_is_fitted` now uses
  `__sklearn_is_fitted__` if available, instead of checking for attributes
  ending with an underscore. This also makes :class:`pipeline.Pipeline` and
  :class:`preprocessing.FunctionTransformer` pass
  `check_is_fitted(estimator)`. :pr:`20657` by `Adrin Jalali`_.

- |Fix| Fixed a bug in :func:`utils.sparsefuncs.mean_variance_axis` where
  the precision of the computed
  variance was very poor when the real variance is exactly zero.
  :pr:`19766` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| The docstrings of properties that are decorated with
  :func:`utils.deprecated` are now properly wrapped. :pr:`20385` by
  `Thomas Fan`_.

- |Fix| `utils.stats._weighted_percentile` now correctly ignores
  zero-weighted observations smaller than the smallest observation with
  positive weight for `percentile=0`. Affected classes are
  :class:`dummy.DummyRegressor` for `quantile=0` and
  `ensemble.HuberLossFunction` and `ensemble.QuantileLossFunction` for
  `alpha=0`. :pr:`20528` by :user:`Malte Londschien <mlondschien>`.

- |Fix| `utils._safe_indexing` explicitly takes a dataframe copy when
  integer indices are provided, avoiding to raise a warning from Pandas.
  This warning was previously raised in resampling utilities and functions
  using those utilities (e.g. :func:`model_selection.train_test_split`,
  :func:`model_selection.cross_validate`,
  :func:`model_selection.cross_val_score`,
  :func:`model_selection.cross_val_predict`). :pr:`20673` by
  :user:`Joris Van den Bossche <jorisvandenbossche>`.

- |Fix| Fix a regression in `utils.is_scalar_nan` where large Python
  numbers would raise an error due to overflow in C types (`np.float64` or
  `np.int64`). :pr:`20727` by `Guillaume Lemaitre`_.

- |Fix| Support for `np.matrix` is deprecated in
  :func:`~sklearn.utils.check_array` in 1.0 and will raise a `TypeError`
  in 1.2. :pr:`20165` by `Thomas Fan`_.

- |API| `utils._testing.assert_warns` and
  `utils._testing.assert_warns_message` are deprecated in 1.0 and will be
  removed in 1.2. Use the `pytest.warns` context manager instead. Note that
  these functions were neither documented nor part of the public API.
  :pr:`20521` by :user:`Olivier Grisel <ogrisel>`.

- |API| Fixed several bugs in `utils.graph.graph_shortest_path`, which is
  now deprecated. Use `scipy.sparse.csgraph.shortest_path` instead.
  :pr:`20531` by `Tom Dupré la Tour`_.

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.24, including:

Abdulelah S. Al Mesfer, Abhinav Gupta, Adam J. Stewart, Adam Li, Adam
Midvidy, Adrian Garcia Badaracco, Adrian Sadłocha, Adrin Jalali, Agamemnon
Krasoulis, Alberto Rubiales, Albert Thomas, Albert Villanova del Moral, Alek
Lefebvre, Alessia Marcolini, Alexandr Fonari, Alihan Zihna, Aline Ribeiro de
Almeida, Amanda, Amanda Dsouza, Amol Deshmukh, Ana Pessoa, Anavelyz, Andreas
Mueller, Andrew Delong, Ashish, Ashvith Shetty, Atsushi Nukariya, Aurélien
Geron, Avi Gupta, Ayush Singh, baam, BaptBillard, Benjamin Pedigo, Bertrand
Thirion, Bharat Raghunathan, bmalezieux, Brian Rice, Brian Sun, Bruno
Charron, Bryan Chen, bumblebee, caherrera-meli, Carsten Allefeld, CeeThinwa,
Chiara Marmo, chrissobel, Christian Lorentzen, Christopher Yeh, Chuliang
Xiao, Clément Fauchereau, cliffordEmmanuel, Conner Shen, Connor Tann, David
Dale, David Katz, David Poznik, Dimitri Papadopoulos Orfanos, Divyanshu
Deoli, dmallia17, Dmitry Kobak, DS_anas, Eduardo Jardim, EdwinWenink,
EL-ATEIF Sara, Eleni Markou, EricEllwanger, Eric Fiegel, Erich Schubert,
Ezri Mudde, Fatos Morina, Felipe Rodrigues, Felix Hafner, Fenil Suchak,
flyingdutchman23, Flynn, Fortune Uwha, Francois Berenger, Frankie Robertson,
Frans Larsson, Frederick Robinson, frellwan, Gabriel S Vicente, Gael
Varoquaux, genvalen, Geoffrey Thomas, geroldcsendes, Gleb Levitskiy, Glen,
Glòria Macià Muñoz, gregorystrubel, groceryheist, Guillaume Lemaitre,
guiweber, Haidar Almubarak, Hans Moritz Günther, Haoyin Xu, Harris Mirza,
Harry Wei, Harutaka Kawamura, Hassan Alsawadi, Helder Geovane Gomes de Lima,
Hugo DEFOIS, Igor Ilic, Ikko Ashimine, Isaack Mungui, Ishaan Bhat, Ishan
Mishra, Iván Pulido, iwhalvic, J Alexander, Jack Liu, James Alan Preiss,
James Budarz, James Lamb, Jannik, Jeff Zhao, Jennifer Maldonado, Jérémie du
Boisberranger, Jesse Lima, Jianzhu Guo, jnboehm, Joel Nothman, JohanWork,
John Paton, Jonathan Schneider, Jon Crall, Jon Haitz Legarreta Gorroño,
Joris Van den Bossche, José Manuel Nápoles Duarte, Juan Carlos Alfaro
Jiménez, Juan Martin Loyola, Julien Jerphanion, Julio Batista Silva,
julyrashchenko, JVM, Kadatatlu Kishore, Karen Palacio, Kei Ishikawa,
kmatt10, kobaski, Kot271828, Kunj, KurumeYuta, kxytim, lacrosse91,
LalliAcqua, Laveen Bagai, Leonardo Rocco, Leonardo Uieda, Leopoldo Corona,
Loic Esteve, LSturtew, Luca Bittarello, Luccas Quadros, Lucy Jiménez, Lucy
Liu, ly648499246, Mabu Manaileng, Manimaran, makoeppel, Marco Gorelli, Maren
Westermann, Mariangela, Maria Telenczuk, marielaraj, Martin Hirzel, Mateo
Noreña, Mathieu Blondel, Mathis Batoul, mathurinm, Matthew Calcote, Maxime
Prieur, Maxwell, Mehdi Hamoumi, Mehmet Ali Özer, Miao Cai, Michal Karbownik,
michalkrawczyk, Mitzi, mlondschien, Mohamed Haseeb, Mohamed Khoualed,
Muhammad Jarir Kanji, murata-yu, Nadim Kawwa, Nanshan Li, naozin555, Nate
Parsons, Neal Fultz, Nic Annau, Nicolas Hug, Nicolas Miller, Nico Stefani,
Nigel Bosch, Nikita Titov, Nodar Okroshiashvili, Norbert Preining, novaya,
Ogbonna Chibuike Stephen, OGordon100, Oliver Pfaffel, Olivier Grisel, Oras
Phongpanangam, Pablo Duque, Pablo Ibieta Jimenez, Patric Lacouth, Paulo S.
Costa, Paweł Olszewski, Peter Dye, PierreAttard, Pierre-Yves Le Borgne,
PranayAnchuri, Prince Canuma, putschblos, qdeffense, RamyaNP,
ranjanikrishnan, Ray Bell, Rene Jean Corneille, Reshama Shaikh, ricardojnf,
RichardScottOZ, Rodion Martynov, Rohan Paul, Roman Lutz, Roman Yurchak,
Samuel Brice, Sandy Khosasi, Sean Benhur J, Sebastian Flores, Sebastian
Pölsterl, Shao Yang Hong, shinehide, shinnar, shivamgargsya, Shooter23,
Shuhei Kayawari, Shyam Desai, simonamaggio, Sina Tootoonian, solosilence,
Steven Kolawole, Steve Stagg, Surya Prakash, swpease, Sylvain Marié, Takeshi
Oura, Terence Honles, TFiFiE, Thomas A Caswell, Thomas J. Fan, Tim Gates,
TimotheeMathieu, Timothy Wolodzko, Tim Vink, t-jakubek, t-kusanagi, tliu68,
Tobias Uhmann, tom1092, Tomás Moreyra, Tomás Ronald Hughes, Tom Dupré la
Tour, Tommaso Di Noto, Tomohiro Endo, TONY GEORGE, Toshihiro NAKAE, tsuga,
Uttam kumar, vadim-ushtanit, Vangelis Gkiastas, Venkatachalam N, Vilém
Zouhar, Vinicius Rios Fuck, Vlasovets, waijean, Whidou, xavier dupré,
xiaoyuchai, Yasmeen Alsaedy, yoch, Yosuke KOBAYASHI, Yu Feng,
YusukeNagasaka, yzhenman, Zero, ZeyuSun, ZhaoweiWang, Zito, Zito Relova
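The changelog above notes that the deprecated `LinearModel(normalize=True)`
behavior can be reproduced with
`make_pipeline(StandardScaler(with_mean=False), LinearModel())`. A minimal
sketch of that replacement (the toy data and the choice of
`LinearRegression` as the `LinearModel` are illustrative assumptions, not
part of the release notes):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy data: the first feature is exactly linear in the target.
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 15.0], [4.0, 30.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])

# Recommended replacement for the removed `normalize=True` option:
# scale features inside a pipeline instead of in the estimator itself.
model = make_pipeline(StandardScaler(with_mean=False), LinearRegression())
model.fit(X, y)

print(round(model.score(X, y), 5))  # → 1.0 (perfect fit on this toy data)
```

Because scaling columns by positive constants preserves the column space,
the pipeline fits the same predictions as the unscaled model here; the
pipeline form simply makes the preprocessing explicit and cross-validation
safe.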
{"questions":"scikit-learn sklearn contributors rst releasenotes023 Version 0 23","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n.. _release_notes_0_23:\n\n============\nVersion 0.23\n============\n\nFor a short description of the main highlights of the release, please refer to\n:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_0_23_0.py`.\n\n.. include:: changelog_legend.inc\n\n.. _changes_0_23_2:\n\nVersion 0.23.2\n==============\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- |Fix| ``inertia_`` attribute of :class:`cluster.KMeans` and\n  :class:`cluster.MiniBatchKMeans`.\n\nDetails are listed in the changelog below.\n\n(While we are trying to better inform users by providing this information, we\ncannot assure that this list is complete.)\n\nChangelog\n---------\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans` where rounding errors could\n  prevent convergence to be declared when `tol=0`. :pr:`17959` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans` and\n  :class:`cluster.MiniBatchKMeans` where the reported inertia was incorrectly\n  weighted by the sample weights. :pr:`17848` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Fixed a bug in :class:`cluster.MeanShift` with `bin_seeding=True`. 
When\n  the estimated bandwidth is 0, the behavior is equivalent to\n  `bin_seeding=False`.\n  :pr:`17742` by :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Fixed a bug in :class:`cluster.AffinityPropagation`, that\n  gives incorrect clusters when the array dtype is float32.\n  :pr:`17995` by :user:`Thomaz Santana  <Wikilicious>` and\n  :user:`Amanda Dsouza <amy12xx>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Fixed a bug in\n  :func:`decomposition.MiniBatchDictionaryLearning.partial_fit` which should\n  update the dictionary by iterating only once over a mini-batch.\n  :pr:`17433` by :user:`Chiara Marmo <cmarmo>`.\n\n- |Fix| Avoid overflows on Windows in\n  :func:`decomposition.IncrementalPCA.partial_fit` for large ``batch_size`` and\n  ``n_samples`` values.\n  :pr:`17985` by :user:`Alan Butler <aldee153>` and\n  :user:`Amanda Dsouza <amy12xx>`.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |Fix| Fixed bug in `ensemble.MultinomialDeviance` where the\n  average of logloss was incorrectly calculated as sum of logloss.\n  :pr:`17694` by :user:`Markus Rempfler <rempfler>` and\n  :user:`Tsutomu Kusanagi <t-kusanagi2>`.\n\n- |Fix| Fixes :class:`ensemble.StackingClassifier` and\n  :class:`ensemble.StackingRegressor` compatibility with estimators that\n  do not define `n_features_in_`. :pr:`17357` by `Thomas Fan`_.\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Fix| Fixes bug in :class:`feature_extraction.text.CountVectorizer` where\n  sample order invariance was broken when `max_features` was set and features\n  had the same count. :pr:`18016` by `Thomas Fan`_, `Roman Yurchak`_, and\n  `Joel Nothman`_.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Fix| :func:`linear_model.lars_path` does not overwrite `X` when\n  `X_copy=True` and `Gram='auto'`. 
:pr:`17914` by `Thomas Fan`_.\n\n:mod:`sklearn.manifold`\n.......................\n\n- |Fix| Fixed a bug where :func:`metrics.pairwise_distances` would raise an\n  error if ``metric='seuclidean'`` and ``X`` is not type ``np.float64``.\n  :pr:`15730` by :user:`Forrest Koch <ForrestCKoch>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fixed a bug in :func:`metrics.mean_squared_error` where the\n  average of multiple RMSE values was incorrectly calculated as the root of the\n  average of multiple MSE values.\n  :pr:`17309` by :user:`Swier Heeres <swierh>`.\n\n:mod:`sklearn.pipeline`\n.......................\n\n- |Fix| :class:`pipeline.FeatureUnion` raises a deprecation warning when\n  `None` is included in `transformer_list`. :pr:`17360` by `Thomas Fan`_.\n\n:mod:`sklearn.utils`\n....................\n\n- |Fix| Fix :func:`utils.estimator_checks.check_estimator` so that all test\n  cases support the `binary_only` estimator tag.\n  :pr:`17812` by :user:`Bruno Charron <brcharron>`.\n\n.. _changes_0_23_1:\n\nVersion 0.23.1\n==============\n\n**May 18 2020**\n\nChangelog\n---------\n\n:mod:`sklearn.cluster`\n......................\n\n- |Efficiency| :class:`cluster.KMeans` efficiency has been improved for very\n  small datasets. In particular it cannot spawn idle threads any more.\n  :pr:`17210` and :pr:`17235` by :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Fixed a bug in :class:`cluster.KMeans` where the sample weights\n  provided by the user were modified in place. :pr:`17204` by\n  :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n\nMiscellaneous\n.............\n\n- |Fix| Fixed a bug in the `repr` of third-party estimators that use a\n  `**kwargs` parameter in their constructor, when `changed_only` is True\n  which is now the default. :pr:`17205` by `Nicolas Hug`_.\n\n.. 
_changes_0_23:\n\nVersion 0.23.0\n==============\n\n**May 12 2020**\n\n\nEnforcing keyword-only arguments\n--------------------------------\n\nIn an effort to promote clear and non-ambiguous use of the library, most\nconstructor and function parameters are now expected to be passed as keyword\narguments (i.e. using the `param=value` syntax) instead of positional. To\nease the transition, a `FutureWarning` is raised if a keyword-only parameter\nis used as positional. In version 1.0 (renaming of 0.25), these parameters\nwill be strictly keyword-only, and a `TypeError` will be raised.\n:issue:`15005` by `Joel Nothman`_, `Adrin Jalali`_, `Thomas Fan`_, and\n`Nicolas Hug`_. See `SLEP009\n<https:\/\/scikit-learn-enhancement-proposals.readthedocs.io\/en\/latest\/slep009\/proposal.html>`_\nfor more details.\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- |Fix| :class:`ensemble.BaggingClassifier`, :class:`ensemble.BaggingRegressor`,\n  and :class:`ensemble.IsolationForest`.\n- |Fix| :class:`cluster.KMeans` with ``algorithm=\"elkan\"`` and\n  ``algorithm=\"full\"``.\n- |Fix| :class:`cluster.Birch`\n- |Fix| `compose.ColumnTransformer.get_feature_names`\n- |Fix| :func:`compose.ColumnTransformer.fit`\n- |Fix| :func:`datasets.make_multilabel_classification`\n- |Fix| :class:`decomposition.PCA` with `n_components='mle'`\n- |Enhancement| :class:`decomposition.NMF` and\n  :func:`decomposition.non_negative_factorization` with float32 dtype input.\n- |Fix| :func:`decomposition.KernelPCA.inverse_transform`\n- |API| :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor`\n- |Fix| ``estimator_samples_`` in :class:`ensemble.BaggingClassifier`,\n  :class:`ensemble.BaggingRegressor` and 
:class:`ensemble.IsolationForest`\n- |Fix| :class:`ensemble.StackingClassifier` and\n  :class:`ensemble.StackingRegressor` with `sample_weight`\n- |Fix| :class:`gaussian_process.GaussianProcessRegressor`\n- |Fix| :class:`linear_model.RANSACRegressor` with ``sample_weight``.\n- |Fix| :class:`linear_model.RidgeClassifierCV`\n- |Fix| :func:`metrics.mean_squared_error` with `squared` and\n  `multioutput='raw_values'`.\n- |Fix| :func:`metrics.mutual_info_score` with negative scores.\n- |Fix| :func:`metrics.confusion_matrix` with zero length `y_true` and `y_pred`\n- |Fix| :class:`neural_network.MLPClassifier`\n- |Fix| :class:`preprocessing.StandardScaler` with `partial_fit` and sparse\n  input.\n- |Fix| :class:`preprocessing.Normalizer` with norm='max'\n- |Fix| Any model using the `svm.libsvm` or the `svm.liblinear` solver,\n  including :class:`svm.LinearSVC`, :class:`svm.LinearSVR`,\n  :class:`svm.NuSVC`, :class:`svm.NuSVR`, :class:`svm.OneClassSVM`,\n  :class:`svm.SVC`, :class:`svm.SVR`, :class:`linear_model.LogisticRegression`.\n- |Fix| :class:`tree.DecisionTreeClassifier`, :class:`tree.ExtraTreeClassifier` and\n  :class:`ensemble.GradientBoostingClassifier` as well as ``predict`` method of\n  :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeRegressor`, and\n  :class:`ensemble.GradientBoostingRegressor` and read-only float32 input in\n  ``predict``, ``decision_path`` and ``predict_proba``.\n\nDetails are listed in the changelog below.\n\n(While we are trying to better inform users by providing this information, we\ncannot assure that this list is complete.)\n\nChangelog\n---------\n\n..\n    Entries should be grouped by module (in alphabetic order) and prefixed with\n    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,\n    |Fix| or |API| (see whats_new.rst for descriptions).\n    Entries should be ordered by those labels (e.g. 
|Fix| after |Efficiency|).\n    Changes not specific to a module should be listed under *Multiple Modules*\n    or *Miscellaneous*.\n    Entries should end with:\n    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.\n    where 123456 is the *pull request* number, not the issue number.\n\n:mod:`sklearn.cluster`\n......................\n\n- |Efficiency| :class:`cluster.Birch` implementation of the predict method\n  avoids high memory footprint by calculating the distances matrix using\n  a chunked scheme.\n  :pr:`16149` by :user:`Jeremie du Boisberranger <jeremiedbb>` and\n  :user:`Alex Shacked <alexshacked>`.\n\n- |Efficiency| |MajorFeature| The critical parts of :class:`cluster.KMeans`\n  have a more optimized implementation. Parallelism is now over the data\n  instead of over initializations, allowing better scalability. :pr:`11950` by\n  :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n- |Enhancement| :class:`cluster.KMeans` now supports sparse data when\n  ``algorithm=\"elkan\"``. :pr:`11950` by\n  :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n- |Enhancement| :class:`cluster.AgglomerativeClustering` has a faster and more\n  memory efficient implementation of single linkage clustering.\n  :pr:`11514` by :user:`Leland McInnes <lmcinnes>`.\n\n- |Fix| :class:`cluster.KMeans` with ``algorithm=\"elkan\"`` now converges with\n  ``tol=0`` as with the default ``algorithm=\"full\"``. :pr:`16075` by\n  :user:`Erich Schubert <kno10>`.\n\n- |Fix| Fixed a bug in :class:`cluster.Birch` where the `n_clusters` parameter\n  could not have a `np.int64` type. :pr:`16484`\n  by :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n- |Fix| :class:`cluster.AgglomerativeClustering` now raises a specific error\n  when the distance matrix is not square and `affinity=precomputed`.\n  :pr:`16257` by :user:`Simona Maggio <simonamaggio>`.\n\n- |API| The ``n_jobs`` parameter of :class:`cluster.KMeans`,\n  :class:`cluster.SpectralCoclustering` and\n  :class:`cluster.SpectralBiclustering` is deprecated. 
They now use OpenMP\n  based parallelism. For more details on how to control the number of threads,\n  please refer to our :ref:`parallelism` notes. :pr:`11950` by\n  :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n- |API| The ``precompute_distances`` parameter of :class:`cluster.KMeans` is\n  deprecated. It has no effect. :pr:`11950` by\n  :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n- |API| The ``random_state`` parameter has been added to\n  :class:`cluster.AffinityPropagation`. :pr:`16801` by :user:`rcwoolston`\n  and :user:`Chiara Marmo <cmarmo>`.\n\n:mod:`sklearn.compose`\n......................\n\n- |Efficiency| :class:`compose.ColumnTransformer` is now faster when working\n  with dataframes and strings are used to specify subsets of data for\n  transformers. :pr:`16431` by `Thomas Fan`_.\n\n- |Enhancement| :class:`compose.ColumnTransformer` method ``get_feature_names``\n  now supports `'passthrough'` columns, with the feature name being either\n  the column name for a dataframe, or `'xi'` for column index `i`.\n  :pr:`14048` by :user:`Lewis Ball <lrjball>`.\n\n- |Fix| :class:`compose.ColumnTransformer` method ``get_feature_names`` now\n  returns correct results when one of the transformer steps applies on an\n  empty list of columns. :pr:`15963` by `Roman Yurchak`_.\n\n- |Fix| :func:`compose.ColumnTransformer.fit` will error when selecting\n  a column name that is not unique in the dataframe. :pr:`16431` by\n  `Thomas Fan`_.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Efficiency| :func:`datasets.fetch_openml` has reduced memory usage because\n  it no longer stores the full dataset text stream in memory. :pr:`16084` by\n  `Joel Nothman`_.\n\n- |Feature| :func:`datasets.fetch_california_housing` now supports\n  heterogeneous data using pandas by setting `as_frame=True`. 
:pr:`15950`\n  by :user:`Stephanie Andrews <gitsteph>` and\n  :user:`Reshama Shaikh <reshamas>`.\n\n- |Feature| embedded dataset loaders :func:`datasets.load_breast_cancer`,\n  :func:`datasets.load_diabetes`, :func:`datasets.load_digits`,\n  :func:`datasets.load_iris`, :func:`datasets.load_linnerud` and\n  :func:`datasets.load_wine` now support loading as a pandas ``DataFrame`` by\n  setting `as_frame=True`. :pr:`15980` by :user:`wconnell` and\n  :user:`Reshama Shaikh <reshamas>`.\n\n- |Enhancement| Added ``return_centers`` parameter  in\n  :func:`datasets.make_blobs`, which can be used to return\n  centers for each cluster.\n  :pr:`15709` by :user:`shivamgargsya` and\n  :user:`Venkatachalam N <venkyyuvy>`.\n\n- |Enhancement| Functions :func:`datasets.make_circles` and\n  :func:`datasets.make_moons` now accept two-element tuple.\n  :pr:`15707` by :user:`Maciej J Mikulski <mjmikulski>`.\n\n- |Fix| :func:`datasets.make_multilabel_classification` now generates\n  `ValueError` for arguments `n_classes < 1` OR `length < 1`.\n  :pr:`16006` by :user:`Rushabh Vasani <rushabh-v>`.\n\n- |API| The `StreamHandler` was removed from `sklearn.logger` to avoid\n  double logging of messages in common cases where a handler is attached\n  to the root logger, and to follow the Python logging documentation\n  recommendation for libraries to leave the log message handling to\n  users and application code. :pr:`16451` by :user:`Christoph Deil <cdeil>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Enhancement| :class:`decomposition.NMF` and\n  :func:`decomposition.non_negative_factorization` now preserves float32 dtype.\n  :pr:`16280` by :user:`Jeremie du Boisberranger <jeremiedbb>`.\n\n- |Enhancement| :func:`decomposition.TruncatedSVD.transform` is now faster on\n  given sparse ``csc`` matrices. 
:pr:`16837` by :user:`wornbb`.\n\n- |Fix| :class:`decomposition.PCA` with a float `n_components` parameter will\n  exclusively choose the components that explain the variance greater than\n  `n_components`. :pr:`15669` by :user:`Krishna Chaitanya <krishnachaitanya9>`.\n\n- |Fix| :class:`decomposition.PCA` with `n_components='mle'` now correctly\n  handles small eigenvalues, and does not infer 0 as the correct number of\n  components. :pr:`16224` by :user:`Lisa Schwetlick <lschwetlick>`, and\n  :user:`Gelavizh Ahmadi <gelavizh1>` and :user:`Marija Vlajic Wheeler\n  <marijavlajic>` and :pr:`16841` by `Nicolas Hug`_.\n\n- |Fix| :class:`decomposition.KernelPCA` method ``inverse_transform`` now\n  applies the correct inverse transform to the transformed data. :pr:`16655`\n  by :user:`Lewis Ball <lrjball>`.\n\n- |Fix| Fixed a bug that was causing :class:`decomposition.KernelPCA` to\n  sometimes raise `invalid value encountered in multiply` during `fit`.\n  :pr:`16718` by :user:`Gui Miotto <gui-miotto>`.\n\n- |Feature| Added `n_components_` attribute to :class:`decomposition.SparsePCA`\n  and :class:`decomposition.MiniBatchSparsePCA`. :pr:`16981` by\n  :user:`Mateusz G\u00f3rski <Reksbril>`.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor` now support\n  :term:`sample_weight`. :pr:`14696` by `Adrin Jalali`_ and `Nicolas Hug`_.\n\n- |Feature| Early stopping in\n  :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor` is now determined with a\n  new `early_stopping` parameter instead of `n_iter_no_change`. The default\n  value is 'auto', which enables early stopping if there are at least 10,000\n  samples in the training set. 
:pr:`14516` by :user:`Johann Faouzi\n  <johannfaouzi>`.\n\n- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor` now support monotonic\n  constraints, useful when features are supposed to have a positive\/negative\n  effect on the target. :pr:`15582` by `Nicolas Hug`_.\n\n- |API| Added boolean `verbose` flag to classes:\n  :class:`ensemble.VotingClassifier` and :class:`ensemble.VotingRegressor`.\n  :pr:`16069` by :user:`Sam Bail <spbail>`,\n  :user:`Hanna Bruce MacDonald <hannahbrucemacdonald>`,\n  :user:`Reshama Shaikh <reshamas>`, and\n  :user:`Chiara Marmo <cmarmo>`.\n\n- |Fix| Fixed a bug in :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor` that would not respect the\n  `max_leaf_nodes` parameter if that criterion was reached at the same time as\n  the `max_depth` criterion. :pr:`16183` by `Nicolas Hug`_.\n\n- |Fix| Changed the convention for the `max_depth` parameter of\n  :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor`. 
The depth now corresponds to\n  the number of edges to go from the root to the deepest leaf.\n  Stumps (trees with one split) are now allowed.\n  :pr:`16182` by :user:`Santhosh B <santhoshbala18>`\n\n- |Fix| Fixed a bug in :class:`ensemble.BaggingClassifier`,\n  :class:`ensemble.BaggingRegressor` and :class:`ensemble.IsolationForest`\n  where the attribute `estimators_samples_` did not generate the proper indices\n  used during `fit`.\n  :pr:`16437` by :user:`Jin-Hwan CHO <chofchof>`.\n\n- |Fix| Fixed a bug in :class:`ensemble.StackingClassifier` and\n  :class:`ensemble.StackingRegressor` where the `sample_weight`\n  argument was not being passed to `cross_val_predict` when\n  evaluating the base estimators on cross-validation folds\n  to obtain the input to the meta estimator.\n  :pr:`16539` by :user:`Bill DeRose <wderose>`.\n\n- |Feature| Added additional option `loss=\"poisson\"` to\n  :class:`ensemble.HistGradientBoostingRegressor`, which adds Poisson deviance\n  with log-link useful for modeling count data.\n  :pr:`16692` by :user:`Christian Lorentzen <lorentzenchr>`\n\n- |Fix| Fixed a bug where :class:`ensemble.HistGradientBoostingRegressor` and\n  :class:`ensemble.HistGradientBoostingClassifier` would fail with multiple\n  calls to fit when `warm_start=True`, `early_stopping=True`, and there is no\n  validation set. :pr:`16663` by `Thomas Fan`_.\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Efficiency| :class:`feature_extraction.text.CountVectorizer` now sorts\n  features after pruning them by document frequency. This improves performances\n  for datasets with large vocabularies combined with ``min_df`` or ``max_df``.\n  :pr:`15834` by :user:`Santiago M. 
Mola <smola>`.\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Enhancement| Added support for multioutput data in\n  :class:`feature_selection.RFE` and :class:`feature_selection.RFECV`.\n  :pr:`16103` by :user:`Divyaprabha M <divyaprabha123>`.\n\n- |API| Adds :class:`feature_selection.SelectorMixin` back to public API.\n  :pr:`16132` by :user:`trimeta`.\n\n:mod:`sklearn.gaussian_process`\n...............................\n\n- |Enhancement| :func:`gaussian_process.kernels.Matern` returns the RBF kernel when ``nu=np.inf``.\n  :pr:`15503` by :user:`Sam Dixon <sam-dixon>`.\n\n- |Fix| Fixed bug in :class:`gaussian_process.GaussianProcessRegressor` that\n  caused predicted standard deviations to only be between 0 and 1 when\n  WhiteKernel is not used. :pr:`15782`\n  by :user:`plgreenLIRU`.\n\n:mod:`sklearn.impute`\n.....................\n\n- |Enhancement| :class:`impute.IterativeImputer` accepts both scalar and array-like inputs for\n  ``max_value`` and ``min_value``. Array-like inputs allow a different max and min to be specified\n  for each feature. :pr:`16403` by :user:`Narendra Mukherjee <narendramukherjee>`.\n\n- |Enhancement| :class:`impute.SimpleImputer`, :class:`impute.KNNImputer`, and\n  :class:`impute.IterativeImputer` accepts pandas' nullable integer dtype with\n  missing values. :pr:`16508` by `Thomas Fan`_.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |Feature| :func:`inspection.partial_dependence` and\n  `inspection.plot_partial_dependence` now support the fast 'recursion'\n  method for :class:`ensemble.RandomForestRegressor` and\n  :class:`tree.DecisionTreeRegressor`. 
:pr:`15864` by\n  `Nicolas Hug`_.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |MajorFeature| Added generalized linear models (GLM) with non-normal error\n  distributions, including :class:`linear_model.PoissonRegressor`,\n  :class:`linear_model.GammaRegressor` and :class:`linear_model.TweedieRegressor`\n  which use Poisson, Gamma and Tweedie distributions respectively.\n  :pr:`14300` by :user:`Christian Lorentzen <lorentzenchr>`, `Roman Yurchak`_,\n  and `Olivier Grisel`_.\n\n- |MajorFeature| Support of `sample_weight` in\n  :class:`linear_model.ElasticNet` and :class:`linear_model.Lasso` for dense\n  feature matrix `X`. :pr:`15436` by :user:`Christian Lorentzen\n  <lorentzenchr>`.\n\n- |Efficiency| :class:`linear_model.RidgeCV` and\n  :class:`linear_model.RidgeClassifierCV` now do not allocate a\n  potentially large array to store dual coefficients for all hyperparameters\n  during their `fit`, nor an array to store all error or LOO predictions unless\n  `store_cv_values` is `True`.\n  :pr:`15652` by :user:`J\u00e9r\u00f4me Dock\u00e8s <jeromedockes>`.\n\n- |Enhancement| :class:`linear_model.LassoLars` and\n  :class:`linear_model.Lars` now support a `jitter` parameter that adds\n  random noise to the target. This might help with stability in some edge\n  cases. :pr:`15179` by :user:`angelaambroz`.\n\n- |Fix| Fixed a bug where if a `sample_weight` parameter was passed to the fit\n  method of :class:`linear_model.RANSACRegressor`, it would not be passed to\n  the wrapped `base_estimator` during the fitting of the final model.\n  :pr:`15773` by :user:`Jeremy Alexandre <J-A16>`.\n\n- |Fix| Add `best_score_` attribute to :class:`linear_model.RidgeCV` and\n  :class:`linear_model.RidgeClassifierCV`.\n  :pr:`15655` by :user:`J\u00e9r\u00f4me Dock\u00e8s <jeromedockes>`.\n\n- |Fix| Fixed a bug in :class:`linear_model.RidgeClassifierCV` to pass a\n  specific scoring strategy. 
Previously, the internal estimator output scores\n  instead of predictions.\n  :pr:`14848` by :user:`Venkatachalam N <venkyyuvy>`.\n\n- |Fix| :class:`linear_model.LogisticRegression` will now avoid an unnecessary\n  iteration when `solver='newton-cg'` by checking for less than or equal instead\n  of strictly less than when comparing the maximum of `absgrad` with `tol` in\n  `utils.optimize._newton_cg`.\n  :pr:`16266` by :user:`Rushabh Vasani <rushabh-v>`.\n\n- |API| Deprecated public attributes `standard_coef_`, `standard_intercept_`,\n  `average_coef_`, and `average_intercept_` in\n  :class:`linear_model.SGDClassifier`,\n  :class:`linear_model.SGDRegressor`,\n  :class:`linear_model.PassiveAggressiveClassifier`,\n  :class:`linear_model.PassiveAggressiveRegressor`.\n  :pr:`16261` by :user:`Carlos Brandt <chbrandt>`.\n\n- |Fix| |Efficiency| :class:`linear_model.ARDRegression` is more stable and\n  much faster when `n_samples > n_features`. It can now scale to hundreds of\n  thousands of samples. The stability fix might imply changes in the number\n  of non-zero coefficients and in the predicted output. :pr:`16849` by\n  `Nicolas Hug`_.\n\n- |Fix| Fixed a bug in :class:`linear_model.ElasticNetCV`,\n  :class:`linear_model.MultiTaskElasticNetCV`, :class:`linear_model.LassoCV`\n  and :class:`linear_model.MultiTaskLassoCV` where fitting would fail when\n  using the joblib loky backend. 
:pr:`14264` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Efficiency| Speed up :class:`linear_model.MultiTaskLasso`,\n  :class:`linear_model.MultiTaskLassoCV`, :class:`linear_model.MultiTaskElasticNet`,\n  :class:`linear_model.MultiTaskElasticNetCV` by avoiding slower\n  BLAS Level 2 calls on small arrays\n  :pr:`17021` by :user:`Alex Gramfort <agramfort>` and\n  :user:`Mathurin Massias <mathurinm>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Enhancement| :func:`metrics.pairwise_distances_chunked` now allows\n  its ``reduce_func`` to not have a return value, enabling in-place operations.\n  :pr:`16397` by `Joel Nothman`_.\n\n- |Fix| Fixed a bug in :func:`metrics.mean_squared_error` to not ignore\n  argument `squared` when argument `multioutput='raw_values'`.\n  :pr:`16323` by :user:`Rushabh Vasani <rushabh-v>`\n\n- |Fix| Fixed a bug in :func:`metrics.mutual_info_score` where negative\n  scores could be returned. :pr:`16362` by `Thomas Fan`_.\n\n- |Fix| Fixed a bug in :func:`metrics.confusion_matrix` that would raise\n  an error when `y_true` and `y_pred` were length zero and `labels` was\n  not `None`. In addition, we raise an error when an empty list is given to\n  the `labels` parameter.\n  :pr:`16442` by :user:`Kyle Parsons <parsons-kyle-89>`.\n\n- |API| Changed the formatting of values in\n  :meth:`metrics.ConfusionMatrixDisplay.plot` and\n  `metrics.plot_confusion_matrix` to pick the shorter format (either '2g'\n  or 'd'). :pr:`16159` by :user:`Rick Mackenbach <Rick-Mackenbach>` and\n  `Thomas Fan`_.\n\n- |API| From version 0.25, :func:`metrics.pairwise_distances` will no\n  longer automatically compute the ``VI`` parameter for Mahalanobis distance\n  and the ``V`` parameter for seuclidean distance if ``Y`` is passed. The user\n  will be expected to compute this parameter on the training data of their\n  choice and pass it to `pairwise_distances`. 
:pr:`16993` by `Joel Nothman`_.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Enhancement| :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` yield stack trace information\n  in fit failed warning messages in addition to previously emitted\n  type and details.\n  :pr:`15622` by :user:`Gregory Morse <GregoryMorse>`.\n\n- |Fix| :func:`model_selection.cross_val_predict` supports\n  `method=\"predict_proba\"` when `y=None`. :pr:`15918` by\n  :user:`Luca Kubin <lkubin>`.\n\n- |Fix| `model_selection.fit_grid_point` is deprecated in 0.23 and will\n  be removed in 0.25. :pr:`16401` by\n  :user:`Arie Pratama Sutiono <ariepratama>`.\n\n:mod:`sklearn.multioutput`\n..........................\n\n- |Feature| :func:`multioutput.MultiOutputRegressor.fit` and\n  :func:`multioutput.MultiOutputClassifier.fit` can now accept `fit_params`\n  to pass to the `estimator.fit` method of each step. :issue:`15953`\n  :pr:`15959` by :user:`Ke Huang <huangk10>`.\n\n- |Enhancement| :class:`multioutput.RegressorChain` now supports `fit_params`\n  for `base_estimator` during `fit`.\n  :pr:`16111` by :user:`Venkatachalam N <venkyyuvy>`.\n\n:mod:`sklearn.naive_bayes`\n..........................\n\n- |Fix| A correctly formatted error message is shown in\n  :class:`naive_bayes.CategoricalNB` when the number of features in the input\n  differs between `predict` and `fit`.\n  :pr:`16090` by :user:`Madhura Jayaratne <madhuracj>`.\n\n:mod:`sklearn.neural_network`\n.............................\n\n- |Efficiency| :class:`neural_network.MLPClassifier` and\n  :class:`neural_network.MLPRegressor` have a reduced memory footprint when using\n  stochastic solvers, `'sgd'` or `'adam'`, and `shuffle=True`. 
:pr:`14075` by\n  :user:`meyer89`.\n\n- |Fix| Increases the numerical stability of the logistic loss function in\n  :class:`neural_network.MLPClassifier` by clipping the probabilities.\n  :pr:`16117` by `Thomas Fan`_.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |Enhancement| :class:`inspection.PartialDependenceDisplay` now exposes the\n  decile lines as attributes so they can be hidden or customized. :pr:`15785`\n  by `Nicolas Hug`_.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Feature| The `drop` argument of :class:`preprocessing.OneHotEncoder`\n  now accepts the value 'if_binary' and will drop the first category of\n  each feature with two categories. :pr:`16245`\n  by :user:`Rushabh Vasani <rushabh-v>`.\n\n- |Enhancement| :class:`preprocessing.OneHotEncoder`'s `drop_idx_` ndarray\n  can now contain `None`, where `drop_idx_[i] = None` means that no category\n  is dropped for index `i`. :pr:`16585` by :user:`Chiara Marmo <cmarmo>`.\n\n- |Enhancement| :class:`preprocessing.MaxAbsScaler`,\n  :class:`preprocessing.MinMaxScaler`, :class:`preprocessing.StandardScaler`,\n  :class:`preprocessing.PowerTransformer`,\n  :class:`preprocessing.QuantileTransformer`,\n  :class:`preprocessing.RobustScaler` now support pandas' nullable integer\n  dtype with missing values. :pr:`16508` by `Thomas Fan`_.\n\n- |Efficiency| :class:`preprocessing.OneHotEncoder` is now faster at\n  transforming. :pr:`15762` by `Thomas Fan`_.\n\n- |Fix| Fix a bug in :class:`preprocessing.StandardScaler` which was incorrectly\n  computing statistics when calling `partial_fit` on sparse inputs.\n  :pr:`16466` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Fix a bug in :class:`preprocessing.Normalizer` with norm='max',\n  which was not taking the absolute value of the maximum values before\n  normalizing the vectors. 
:pr:`16632` by\n  :user:`Maura Pintor <Maupin1991>` and :user:`Battista Biggio <bbiggio>`.\n\n:mod:`sklearn.semi_supervised`\n..............................\n\n- |Fix| :class:`semi_supervised.LabelSpreading` and\n  :class:`semi_supervised.LabelPropagation` avoids divide by zero warnings\n  when normalizing `label_distributions_`. :pr:`15946` by :user:`ngshya`.\n\n:mod:`sklearn.svm`\n..................\n\n- |Fix| |Efficiency| Improved ``libsvm`` and ``liblinear`` random number\n  generators used to randomly select coordinates in the coordinate descent\n  algorithms. Platform-dependent C ``rand()`` was used, which is only able to\n  generate numbers up to ``32767`` on windows platform (see this `blog\n  post <https:\/\/codeforces.com\/blog\/entry\/61587>`_) and also has poor\n  randomization power as suggested by `this presentation\n  <https:\/\/channel9.msdn.com\/Events\/GoingNative\/2013\/rand-Considered-Harmful>`_.\n  It was replaced with C++11 ``mt19937``, a Mersenne Twister that correctly\n  generates 31bits\/63bits random numbers on all platforms. In addition, the\n  crude \"modulo\" postprocessor used to get a random number in a bounded\n  interval was replaced by the tweaked Lemire method as suggested by `this blog\n  post <http:\/\/www.pcg-random.org\/posts\/bounded-rands.html>`_.\n  Any model using the `svm.libsvm` or the `svm.liblinear` solver,\n  including :class:`svm.LinearSVC`, :class:`svm.LinearSVR`,\n  :class:`svm.NuSVC`, :class:`svm.NuSVR`, :class:`svm.OneClassSVM`,\n  :class:`svm.SVC`, :class:`svm.SVR`, :class:`linear_model.LogisticRegression`,\n  is affected. In particular users can expect a better convergence when the\n  number of samples (LibSVM) or the number of features (LibLinear) is large.\n  :pr:`13511` by :user:`Sylvain Mari\u00e9 <smarie>`.\n\n- |Fix| Fix use of custom kernel not taking float entries such as string\n  kernels in :class:`svm.SVC` and :class:`svm.SVR`. 
Note that custom kernels\n  are now expected to validate their input where they previously received\n  valid numeric arrays.\n  :pr:`11296` by `Alexandre Gramfort`_ and :user:`Georgi Peev <georgipeev>`.\n\n- |API| :class:`svm.SVR` and :class:`svm.OneClassSVM` attributes, `probA_` and\n  `probB_`, are now deprecated as they were not useful. :pr:`15558` by\n  `Thomas Fan`_.\n\n:mod:`sklearn.tree`\n...................\n\n- |Fix| :func:`tree.plot_tree` `rotate` parameter was unused and has been\n  deprecated.\n  :pr:`15806` by :user:`Chiara Marmo <cmarmo>`.\n\n- |Fix| Fix support of read-only float32 array input in ``predict``,\n  ``decision_path`` and ``predict_proba`` methods of\n  :class:`tree.DecisionTreeClassifier`, :class:`tree.ExtraTreeClassifier` and\n  :class:`ensemble.GradientBoostingClassifier` as well as ``predict`` method of\n  :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeRegressor`, and\n  :class:`ensemble.GradientBoostingRegressor`.\n  :pr:`16331` by :user:`Alexandre Batisse <batalex>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |MajorFeature| Estimators can now be displayed with a rich HTML\n  representation. This can be enabled in Jupyter notebooks by setting\n  `display='diagram'` in :func:`~sklearn.set_config`. 
The raw HTML can be\n  returned by using :func:`utils.estimator_html_repr`.\n  :pr:`14180` by `Thomas Fan`_.\n\n- |Enhancement| improve error message in :func:`utils.validation.column_or_1d`.\n  :pr:`15926` by :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\n- |Enhancement| add warning in :func:`utils.check_array` for\n  pandas sparse DataFrame.\n  :pr:`16021` by :user:`Rushabh Vasani <rushabh-v>`.\n\n- |Enhancement| :func:`utils.check_array` now constructs a sparse\n  matrix from a pandas DataFrame that contains only `SparseArray` columns.\n  :pr:`16728` by `Thomas Fan`_.\n\n- |Enhancement| :func:`utils.check_array` supports pandas'\n  nullable integer dtype with missing values when `force_all_finite` is set to\n  `False` or `'allow-nan'` in which case the data is converted to floating\n  point values where `pd.NA` values are replaced by `np.nan`. As a consequence,\n  all :mod:`sklearn.preprocessing` transformers that accept numeric inputs with\n  missing values represented as `np.nan` now also accept being directly fed\n  pandas dataframes with `pd.Int*` or `pd.Uint*` typed columns that use `pd.NA`\n  as a missing value marker. :pr:`16508` by `Thomas Fan`_.\n\n- |API| Passing classes to :func:`utils.estimator_checks.check_estimator` and\n  :func:`utils.estimator_checks.parametrize_with_checks` is now deprecated,\n  and support for classes will be removed in 0.24. Pass instances instead.\n  :pr:`17032` by `Nicolas Hug`_.\n\n- |API| The private utility `_safe_tags` in `utils.estimator_checks` was\n  removed, hence all tags should be obtained through `estimator._get_tags()`.\n  Note that Mixins like `RegressorMixin` must come *before* base classes\n  in the MRO for `_get_tags()` to work properly.\n  :pr:`16950` by `Nicolas Hug`_.\n\n- |Fix| `utils.all_estimators` now only returns public estimators.\n  :pr:`15380` by `Thomas Fan`_.\n\nMiscellaneous\n.............\n\n- |MajorFeature| Adds an HTML representation of estimators to be shown in\n  a Jupyter notebook or lab. 
This visualization is activated by setting the\n  `display` option in :func:`sklearn.set_config`. :pr:`14180` by\n  `Thomas Fan`_.\n\n- |Enhancement| ``scikit-learn`` now works with ``mypy`` without errors.\n  :pr:`16726` by `Roman Yurchak`_.\n\n- |API| Most estimators now expose a `n_features_in_` attribute. This\n  attribute is equal to the number of features passed to the `fit` method.\n  See `SLEP010\n  <https:\/\/scikit-learn-enhancement-proposals.readthedocs.io\/en\/latest\/slep010\/proposal.html>`_\n  for details. :pr:`16112` by `Nicolas Hug`_.\n\n- |API| Estimators now have a `requires_y` tag which is False by default\n  except for estimators that inherit from `~sklearn.base.RegressorMixin` or\n  `~sklearn.base.ClassifierMixin`. This tag is used to ensure that a proper\n  error message is raised when y was expected but None was passed.\n  :pr:`16622` by `Nicolas Hug`_.\n\n- |API| The default setting `print_changed_only` has been changed from False\n  to True. This means that the `repr` of estimators is now more concise and\n  only shows the parameters whose default value has been changed when\n  printing an estimator. You can restore the previous behaviour by using\n  `sklearn.set_config(print_changed_only=False)`. Also, note that it is\n  always possible to quickly inspect the parameters of any estimator using\n  `est.get_params(deep=False)`. :pr:`17061` by `Nicolas Hug`_.\n\n.. 
.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of the
project since version 0.22, including:

Abbie Popa, Adrin Jalali, Aleksandra Kocot, Alexandre Batisse, Alexandre
Gramfort, Alex Henrie, Alex Itkes, Alex Liang, alexshacked, Alonso Silva
Allende, Ana Casado, Andreas Mueller, Angela Ambroz, Ankit810, Arie Pratama
Sutiono, Arunav Konwar, Baptiste Maingret, Benjamin Beier Liu, bernie gray,
Bharathi Srinivasan, Bharat Raghunathan, Bibhash Chandra Mitra, Brian Wignall,
brigi, Brigitta Sipőcz, Carlos H Brandt, CastaChick, castor, cgsavard, Chiara
Marmo, Chris Gregory, Christian Kastner, Christian Lorentzen, Corrie
Bartelheimer, Daniël van Gelder, Daphne, David Breuer, david-cortes, dbauer9,
Divyaprabha M, Edward Qian, Ekaterina Borovikova, ELNS, Emily Taylor, Erich
Schubert, Eric Leung, Evgeni Chasnovski, Fabiana, Facundo Ferrín, Fan,
Franziska Boenisch, Gael Varoquaux, Gaurav Sharma, Geoffrey Bolmier, Georgi
Peev, gholdman1, Gonthier Nicolas, Gregory Morse, Gregory R. Lee, Guillaume
Lemaitre, Gui Miotto, Hailey Nguyen, Hanmin Qin, Hao Chun Chang, HaoYin, Hélion
du Mas des Bourboux, Himanshu Garg, Hirofumi Suzuki, huangk10, Hugo van
Kemenade, Hye Sung Jung, indecisiveuser, inderjeet, J-A16, Jérémie du
Boisberranger, Jin-Hwan CHO, JJmistry, Joel Nothman, Johann Faouzi, Jon Haitz
Legarreta Gorroño, Juan Carlos Alfaro Jiménez, judithabk6, jumon, Kathryn
Poole, Katrina Ni, Kesshi Jordan, Kevin Loftis, Kevin Markham,
krishnachaitanya9, Lam Gia Thuan, Leland McInnes, Lisa Schwetlick, lkubin, Loic
Esteve, lopusz, lrjball, lucgiffon, lucyleeow, Lucy Liu, Lukas Kemkes, Maciej J
Mikulski, Madhura Jayaratne, Magda Zielinska, maikia, Mandy Gu, Manimaran,
Manish Aradwad, Maren Westermann, Maria, Mariana Meireles, Marie Douriez,
Marielle, Mateusz Górski, mathurinm, Matt Hall, Maura Pintor, mc4229, meyer89,
m.fab, Michael Shoemaker, Michał Słapek, Mina Naghshhnejad, mo, Mohamed
Maskani, Mojca Bertoncelj, narendramukherjee, ngshya, Nicholas Won, Nicolas
Hug, nicolasservel, Niklas, @nkish, Noa Tamir, Oleksandr Pavlyk, olicairns,
Oliver Urs Lenz, Olivier Grisel, parsons-kyle-89, Paula, Pete Green, Pierre
Delanoue, pspachtholz, Pulkit Mehta, Qizhi Jiang, Quang Nguyen, rachelcjordan,
raduspaimoc, Reshama Shaikh, Riccardo Folloni, Rick Mackenbach, Ritchie Ng,
Roman Feldbauer, Roman Yurchak, Rory Hartong-Redden, Rüdiger Busche, Rushabh
Vasani, Sambhav Kothari, Samesh Lakhotia, Samuel Duan, SanthoshBala18, Santiago
M.
Mola, Sarat Addepalli, scibol, Sebastian Kießling, SergioDSR, Sergul Aydore,
Shiki-H, shivamgargsya, SHUBH CHATTERJEE, Siddharth Gupta, simonamaggio,
smarie, Snowhite, stareh, Stephen Blystone, Stephen Marsh, Sunmi Yoon,
SylvainLan, talgatomarov, tamirlan1, th0rwas, theoptips, Thomas J Fan, Thomas
Li, Thomas Schmitt, Tim Nonner, Tim Vink, Tiphaine Viard, Tirth Patel, Titus
Christian, Tom Dupré la Tour, trimeta, Vachan D A, Vandana Iyer, Venkatachalam
N, waelbenamara, wconnell, wderose, wenliwyan, Windber, wornbb, Yu-Hang "Maxin"
Tang

.. include:: _contributors.rst

.. currentmodule:: sklearn

.. _release_notes_0_23:

Version 0.23
============

For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_0_23_0.py`.

.. include:: changelog_legend.inc

.. _changes_0_23_2:

Version 0.23.2
==============

Changed models
--------------

The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.

- |Fix| ``inertia_`` attribute of :class:`cluster.KMeans` and
  :class:`cluster.MiniBatchKMeans`.

Details are listed in the changelog below.

(While we are trying to better inform users by providing this information, we
cannot assure that this list is complete.)

Changelog
---------

:mod:`sklearn.cluster`
......................

- |Fix| Fixed a bug in :class:`cluster.KMeans` where rounding errors could
  prevent convergence to be declared when `tol=0`. :pr:`17959` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| Fixed a bug in :class:`cluster.KMeans` and
  :class:`cluster.MiniBatchKMeans` where the reported inertia was incorrectly
  weighted by the sample weights. :pr:`17848` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| Fixed a bug in :class:`cluster.MeanShift` with `bin_seeding=True`. When
  the estimated bandwidth is 0, the behavior is equivalent to
  `bin_seeding=False`. :pr:`17742` by
  :user:`Jeremie du Boisberranger <jeremiedbb>`.

- |Fix| Fixed a bug in :class:`cluster.AffinityPropagation` that gives
  incorrect clusters when the array dtype is float32.
  :pr:`17995` by :user:`Thomaz Santana <Wikilicious>` and
  :user:`Amanda Dsouza <amy12xx>`.

:mod:`sklearn.decomposition`
............................

- |Fix| Fixed a bug in
  :func:`decomposition.MiniBatchDictionaryLearning.partial_fit` which should
  update the dictionary by iterating only once over a mini-batch.
  :pr:`17433` by :user:`Chiara Marmo <cmarmo>`.

- |Fix| Avoid overflows on Windows in
  :func:`decomposition.IncrementalPCA.partial_fit` for large `batch_size` and
  `n_samples` values. :pr:`17985` by :user:`Alan Butler <aldee153>` and
  :user:`Amanda Dsouza <amy12xx>`.

:mod:`sklearn.ensemble`
.......................

- |Fix| Fixed bug in `ensemble.MultinomialDeviance` where the average of
  logloss was incorrectly calculated as sum of logloss.
  :pr:`17694` by :user:`Markus Rempfler <rempfler>` and
  :user:`Tsutomu Kusanagi <t-kusanagi2>`.

- |Fix| Fixes :class:`ensemble.StackingClassifier` and
  :class:`ensemble.StackingRegressor` compatibility with estimators that
  do not define `n_features_in_`. :pr:`17357` by `Thomas Fan`_.

:mod:`sklearn.feature_extraction`
.................................

- |Fix| Fixes bug in :class:`feature_extraction.text.CountVectorizer` where
  sample order invariance was broken when `max_features` was set and features
  had the same count. :pr:`18016` by `Thomas Fan`_, `Roman Yurchak`_, and
  `Joel Nothman`_.

:mod:`sklearn.linear_model`
...........................

- |Fix| :func:`linear_model.lars_path` does not overwrite `X` when
  `X_copy=True` and `Gram='auto'`. :pr:`17914` by `Thomas Fan`_.

:mod:`sklearn.manifold`
.......................

- |Fix| Fixed a bug where :func:`metrics.pairwise_distances` would raise an
  error if `metric='seuclidean'` and `X` is not type `np.float64`.
  :pr:`15730` by :user:`Forrest Koch <ForrestCKoch>`.

:mod:`sklearn.metrics`
......................

- |Fix| Fixed a bug in :func:`metrics.mean_squared_error` where the average
  of multiple RMSE values was incorrectly calculated as the root of the
  average of multiple MSE values.
  :pr:`17309` by :user:`Swier Heeres <swierh>`.

:mod:`sklearn.pipeline`
.......................

- |Fix| :class:`pipeline.FeatureUnion` raises a deprecation warning when
  `None` is included in `transformer_list`. :pr:`17360` by `Thomas Fan`_.

:mod:`sklearn.utils`
....................

- |Fix| Fix :func:`utils.estimator_checks.check_estimator` so that all test
  cases support the `binary_only` estimator tag.
  :pr:`17812` by :user:`Bruno Charron <brcharron>`.

.. _changes_0_23_1:

Version 0.23.1
==============

**May 18 2020**

Changelog
---------

:mod:`sklearn.cluster`
......................

- |Efficiency| :class:`cluster.KMeans` efficiency has been improved for very
  small datasets. In particular it cannot spawn idle threads any more.
  :pr:`17210` and :pr:`17235` by :user:`Jeremie du Boisberranger <jeremiedbb>`.

- |Fix| Fixed a bug in :class:`cluster.KMeans` where the sample weights
  provided by the user were modified in place. :pr:`17204` by
  :user:`Jeremie du Boisberranger <jeremiedbb>`.

Miscellaneous
.............

- |Fix| Fixed a bug in the `repr` of third-party estimators that use a
  `**kwargs` parameter in their constructor, when `changed_only` is True,
  which is now the default. :pr:`17205` by `Nicolas Hug`_.

.. _changes_0_23:

Version 0.23.0
==============

**May 12 2020**

Enforcing keyword-only arguments
--------------------------------

In an effort to promote clear and non-ambiguous use of the library, most
constructor and function parameters are now expected to be passed as keyword
arguments (i.e. using the `param=value` syntax) instead of positional. To ease
the transition, a `FutureWarning` is raised if a keyword-only parameter is used
as positional. In version 1.0 (renaming of 0.25), these parameters will be
strictly keyword-only, and a `TypeError` will be raised. :issue:`15005` by
`Joel Nothman`_, `Adrin Jalali`_, `Thomas Fan`_, and `Nicolas Hug`_. See
`SLEP009
<https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep009/proposal.html>`_
for more details.

Changed models
--------------

The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.

- |Fix| :class:`ensemble.BaggingClassifier`,
  :class:`ensemble.BaggingRegressor`, and :class:`ensemble.IsolationForest`.
- |Fix| :class:`cluster.KMeans` with `algorithm="elkan"` and
  `algorithm="full"`.
- |Fix| :class:`cluster.Birch`.
- |Fix| `compose.ColumnTransformer.get_feature_names`.
- |Fix| :func:`compose.ColumnTransformer.fit`.
- |Fix| :func:`datasets.make_multilabel_classification`.
- |Fix| :class:`decomposition.PCA` with `n_components='mle'`.
- |Enhancement| :class:`decomposition.NMF` and
  :func:`decomposition.non_negative_factorization` with float32 dtype input.
- |Fix| :func:`decomposition.KernelPCA.inverse_transform`.
- |API| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor`.
- |Fix| ``estimators_samples_`` in :class:`ensemble.BaggingClassifier`,
  :class:`ensemble.BaggingRegressor` and :class:`ensemble.IsolationForest`.
- |Fix| :class:`ensemble.StackingClassifier` and
  :class:`ensemble.StackingRegressor` with `sample_weight`.
- |Fix| :class:`gaussian_process.GaussianProcessRegressor`.
- |Fix| :class:`linear_model.RANSACRegressor` with `sample_weight`.
- |Fix| :class:`linear_model.RidgeClassifierCV`.
- |Fix| :func:`metrics.mean_squared_error` with `squared` and
  `multioutput='raw_values'`.
- |Fix| :func:`metrics.mutual_info_score` with negative scores.
- |Fix| :func:`metrics.confusion_matrix` with zero length `y_true` and
  `y_pred`.
- |Fix| :class:`neural_network.MLPClassifier`.
- |Fix| :class:`preprocessing.StandardScaler` with `partial_fit` and sparse
  input.
- |Fix| :class:`preprocessing.Normalizer` with norm='max'.
- |Fix| Any model using the `svm.libsvm` or the `svm.liblinear` solver,
  including :class:`svm.LinearSVC`, :class:`svm.LinearSVR`,
  :class:`svm.NuSVC`, :class:`svm.NuSVR`, :class:`svm.OneClassSVM`,
  :class:`svm.SVC`, :class:`svm.SVR`,
  :class:`linear_model.LogisticRegression`.
- |Fix| :class:`tree.DecisionTreeClassifier`,
  :class:`tree.ExtraTreeClassifier` and
  :class:`ensemble.GradientBoostingClassifier` as well as ``predict`` method
  of :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeRegressor`,
  and :class:`ensemble.GradientBoostingRegressor` and read-only float32 input
  in ``predict``, ``decision_path`` and ``predict_proba``.

Details are listed in the changelog below.

(While we are trying to better inform users by providing this information, we
cannot assure that this list is complete.)

Changelog
---------

..
    Entries should be grouped by module (in alphabetic order) and prefixed with
    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
    |Fix| or |API| (see whats_new.rst for descriptions).
    Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
    Changes not specific to a module should be listed under *Multiple Modules*
    or *Miscellaneous*.
    Entries should end with:
    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
    where 123456 is the *pull request* number, not the issue number.

:mod:`sklearn.cluster`
......................

- |Efficiency| :class:`cluster.Birch` implementation of the predict method
  avoids high memory footprint by calculating the distances matrix using
  a chunked scheme.
  :pr:`16149` by :user:`Jeremie du Boisberranger <jeremiedbb>` and
  :user:`Alex Shacked <alexshacked>`.

- |Efficiency| |MajorFeature| The critical parts of :class:`cluster.KMeans`
  have a more optimized implementation. Parallelism is now over the data
  instead of over initializations allowing better scalability. :pr:`11950` by
  :user:`Jeremie du Boisberranger <jeremiedbb>`.

- |Enhancement| :class:`cluster.KMeans` now supports sparse data when
  `solver='elkan'`. :pr:`11950` by
  :user:`Jeremie du Boisberranger <jeremiedbb>`.

- |Enhancement| :class:`cluster.AgglomerativeClustering` has a faster and more
  memory efficient implementation of single linkage clustering.
  :pr:`11514` by :user:`Leland McInnes <lmcinnes>`.

- |Fix| :class:`cluster.KMeans` with `algorithm="elkan"` now converges with
  `tol=0` as with the default `algorithm="full"`. :pr:`16075` by
  :user:`Erich Schubert <kno10>`.

- |Fix| Fixed a bug in :class:`cluster.Birch` where the `n_clusters` parameter
  could not have a `np.int64` type. :pr:`16484`
  by :user:`Jeremie du Boisberranger <jeremiedbb>`.

- |Fix| :class:`cluster.AgglomerativeClustering` add specific error when
  distance matrix is not square and `affinity=precomputed`.
  :pr:`16257` by :user:`Simona Maggio <simonamaggio>`.

- |API| The ``n_jobs`` parameter of :class:`cluster.KMeans`,
  :class:`cluster.SpectralCoclustering` and
  :class:`cluster.SpectralBiclustering` is deprecated. They now use OpenMP
  based parallelism. For more details on how to control the number of threads,
  please refer to our :ref:`parallelism` notes. :pr:`11950` by
  :user:`Jeremie du Boisberranger <jeremiedbb>`.

- |API| The ``precompute_distances`` parameter of :class:`cluster.KMeans` is
  deprecated. It has no effect. :pr:`11950` by
  :user:`Jeremie du Boisberranger <jeremiedbb>`.

- |API| The ``random_state`` parameter has been added to
  :class:`cluster.AffinityPropagation`. :pr:`16801` by :user:`rcwoolston`
  and :user:`Chiara Marmo <cmarmo>`.
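The keyword-only transition described under *Enforcing keyword-only arguments* can be sketched as follows; a minimal example assuming scikit-learn >= 0.23 (in 0.23–0.25 the positional call only emits a `FutureWarning`, while releases from 1.0 onward raise a `TypeError`):

```python
from sklearn.cluster import KMeans

# Recommended: pass parameters by keyword.
km = KMeans(n_clusters=3, random_state=0)

# Passing keyword-only parameters positionally is deprecated (FutureWarning
# in 0.23) and rejected outright from 1.0 onward:
try:
    KMeans(3, "k-means++")  # second argument (init) is keyword-only
except TypeError:
    print("init must be passed as a keyword")
```

Only parameters declared before the `*` in an estimator's signature (here `n_clusters`) may still be passed positionally.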
:mod:`sklearn.compose`
......................

- |Efficiency| :class:`compose.ColumnTransformer` is now faster when working
  with dataframes and strings are used to specify subsets of data for
  transformers. :pr:`16431` by `Thomas Fan`_.

- |Enhancement| :class:`compose.ColumnTransformer` method ``get_feature_names``
  now supports `'passthrough'` columns, with the feature name being either
  the column name for a dataframe, or `'xi'` for column index `i`.
  :pr:`14048` by :user:`Lewis Ball <lrjball>`.

- |Fix| :class:`compose.ColumnTransformer` method ``get_feature_names`` now
  returns correct results when one of the transformer steps applies on an
  empty list of columns. :pr:`15963` by `Roman Yurchak`_.

- |Fix| :func:`compose.ColumnTransformer.fit` will error when selecting
  a column name that is not unique in the dataframe. :pr:`16431` by
  `Thomas Fan`_.

:mod:`sklearn.datasets`
.......................

- |Efficiency| :func:`datasets.fetch_openml` has reduced memory usage because
  it no longer stores the full dataset text stream in memory. :pr:`16084` by
  `Joel Nothman`_.

- |Feature| :func:`datasets.fetch_california_housing` now supports
  heterogeneous data using pandas by setting `as_frame=True`. :pr:`15950`
  by :user:`Stephanie Andrews <gitsteph>` and
  :user:`Reshama Shaikh <reshamas>`.

- |Feature| embedded dataset loaders :func:`datasets.load_breast_cancer`,
  :func:`datasets.load_diabetes`, :func:`datasets.load_digits`,
  :func:`datasets.load_iris`, :func:`datasets.load_linnerud` and
  :func:`datasets.load_wine` now support loading as a pandas ``DataFrame`` by
  setting `as_frame=True`. :pr:`15980` by :user:`wconnell` and
  :user:`Reshama Shaikh <reshamas>`.

- |Enhancement| Added ``return_centers`` parameter in
  :func:`datasets.make_blobs`, which can be used to return
  centers for each cluster.
  :pr:`15709` by :user:`shivamgargsya` and
  :user:`Venkatachalam N <venkyyuvy>`.

- |Enhancement| Functions :func:`datasets.make_circles` and
  :func:`datasets.make_moons` now accept two-element tuple.
  :pr:`15707` by :user:`Maciej J Mikulski <mjmikulski>`.

- |Fix| :func:`datasets.make_multilabel_classification` now generates
  `ValueError` for arguments `n_classes < 1` OR `length < 1`.
  :pr:`16006` by :user:`Rushabh Vasani <rushabh-v>`.

- |API| The `StreamHandler` was removed from `sklearn.logger` to avoid
  double logging of messages in common cases where a handler is attached
  to the root logger, and to follow the Python logging documentation
  recommendation for libraries to leave the log message handling to
  users and application code. :pr:`16451` by :user:`Christoph Deil <cdeil>`.

:mod:`sklearn.decomposition`
............................

- |Enhancement| :class:`decomposition.NMF` and
  :func:`decomposition.non_negative_factorization` now preserves float32
  dtype. :pr:`16280` by :user:`Jeremie du Boisberranger <jeremiedbb>`.

- |Enhancement| :func:`decomposition.TruncatedSVD.transform` is now faster on
  given sparse ``csc`` matrices. :pr:`16837` by :user:`wornbb`.

- |Fix| :class:`decomposition.PCA` with a float `n_components` parameter, will
  exclusively choose the components that explain the variance greater than
  `n_components`. :pr:`15669` by :user:`Krishna Chaitanya <krishnachaitanya9>`

- |Fix| :class:`decomposition.PCA` with `n_components='mle'` now correctly
  handles small eigenvalues, and does not infer 0 as the correct number of
  components. :pr:`16224` by :user:`Lisa Schwetlick <lschwetlick>`
  and :user:`Gelavizh Ahmadi <gelavizh1>` and :user:`Marija Vlajic Wheeler
  <marijavlajic>` and :pr:`16841` by `Nicolas Hug`_.

- |Fix| :class:`decomposition.KernelPCA` method ``inverse_transform`` now
  applies the correct inverse transform to the transformed data. :pr:`16655`
  by :user:`Lewis Ball <lrjball>`.

- |Fix| Fixed bug that was causing :class:`decomposition.KernelPCA` to
  sometimes raise `invalid value encountered in multiply` during `fit`.
  :pr:`16718` by :user:`Gui Miotto <gui-miotto>`.

- |Feature| Added `n_components_` attribute to
  :class:`decomposition.SparsePCA` and
  :class:`decomposition.MiniBatchSparsePCA`. :pr:`16981` by
  :user:`Mateusz Górski <Reksbril>`.

:mod:`sklearn.ensemble`
.......................

- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` now support
  :term:`sample_weight`. :pr:`14696` by `Adrin Jalali`_ and `Nicolas Hug`_.

- |Feature| Early stopping in
  :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` is now determined with a
  new `early_stopping` parameter instead of `n_iter_no_change`. Default value
  is 'auto', which enables early stopping if there are at least 10,000
  samples in the training set. :pr:`14516` by :user:`Johann Faouzi
  <johannfaouzi>`.

- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` now support monotonic
  constraints, useful when features are supposed to have a positive/negative
  effect on the target. :pr:`15582` by `Nicolas Hug`_.

- |API| Added boolean `verbose` flag to classes:
  :class:`ensemble.VotingClassifier` and :class:`ensemble.VotingRegressor`.
  :pr:`16069` by :user:`Sam Bail <spbail>`,
  :user:`Hanna Bruce MacDonald <hannahbrucemacdonald>`,
  :user:`Reshama Shaikh <reshamas>`, and
  :user:`Chiara Marmo <cmarmo>`.

- |API| Fixed a bug in :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` that would not respect the
  `max_leaf_nodes` parameter if the criteria was reached at the same time as
  the `max_depth` criteria. :pr:`16183` by `Nicolas Hug`_.

- |Fix| Changed the convention for `max_depth` parameter of
  :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor`. The depth now corresponds
  to the number of edges to go from the root to the deepest leaf.
  Stumps (trees with one split) are now allowed.
  :pr:`16182` by :user:`Santhosh B <santhoshbala18>`

- |Fix| Fixed a bug in :class:`ensemble.BaggingClassifier`,
  :class:`ensemble.BaggingRegressor` and :class:`ensemble.IsolationForest`
  where the attribute `estimators_samples_` did not generate the proper
  indices used during `fit`.
  :pr:`16437` by :user:`Jin-Hwan CHO <chofchof>`.

- |Fix| Fixed a bug in :class:`ensemble.StackingClassifier` and
  :class:`ensemble.StackingRegressor` where the `sample_weight`
  argument was not being passed to `cross_val_predict` when
  evaluating the base estimators on cross-validation folds
  to obtain the input to the meta estimator.
  :pr:`16539` by :user:`Bill DeRose <wderose>`.

- |Feature| Added additional option `loss="poisson"` to
  :class:`ensemble.HistGradientBoostingRegressor`, which adds Poisson deviance
  with log-link useful for modeling count data.
  :pr:`16692` by :user:`Christian Lorentzen <lorentzenchr>`

- |Fix| Fixed a bug where :class:`ensemble.HistGradientBoostingRegressor` and
  :class:`ensemble.HistGradientBoostingClassifier` would fail with multiple
  calls to fit when `warm_start=True`, `early_stopping=True`, and there is no
  validation set. :pr:`16663` by `Thomas Fan`_.

:mod:`sklearn.feature_extraction`
.................................

- |Efficiency| :class:`feature_extraction.text.CountVectorizer` now sorts
  features after pruning them by document frequency. This improves
  performances for datasets with large vocabularies combined with `min_df` or
  `max_df`. :pr:`15834` by :user:`Santiago M. Mola <smola>`.

:mod:`sklearn.feature_selection`
................................

- |Enhancement| Added support for multioutput data in
  :class:`feature_selection.RFE` and :class:`feature_selection.RFECV`.
  :pr:`16103` by :user:`Divyaprabha M <divyaprabha123>`.

- |API| Adds :class:`feature_selection.SelectorMixin` back to public API.
  :pr:`16132` by :user:`trimeta`.
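The multioutput support added to `RFE` above can be sketched as follows; the estimator and synthetic data here are illustrative choices, not from the changelog (assuming scikit-learn >= 0.23):

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeRegressor

# Two-target regression problem: y has shape (50, 2).
X, y = make_regression(n_samples=50, n_features=8, n_targets=2,
                       random_state=0)

# RFE now accepts the 2-D y directly, as long as the wrapped estimator
# (here a decision tree) supports multioutput targets.
sel = RFE(DecisionTreeRegressor(random_state=0),
          n_features_to_select=4).fit(X, y)
print(sel.support_)  # boolean mask with exactly 4 selected features
```

Any estimator exposing `coef_` or `feature_importances_` and accepting multioutput `y` can be substituted for the tree.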
                Enhancement   func  gaussian process kernels Matern  returns the RBF kernel when   nu np inf       pr  15503  by  user  Sam Dixon  sam dixon        Fix  Fixed bug in  class  gaussian process GaussianProcessRegressor  that   caused predicted standard deviations to only be between 0 and 1 when   WhiteKernel is not used   pr  15782    by  user  plgreenLIRU     mod  sklearn impute                            Enhancement   class  impute IterativeImputer  accepts both scalar and array like inputs for     max value   and   min value    Array like inputs allow a different max and min to be specified   for each feature   pr  16403  by  user  Narendra Mukherjee  narendramukherjee        Enhancement   class  impute SimpleImputer    class  impute KNNImputer   and    class  impute IterativeImputer  accepts pandas  nullable integer dtype with   missing values   pr  16508  by  Thomas Fan      mod  sklearn inspection                                Feature   func  inspection partial dependence  and    inspection plot partial dependence  now support the fast  recursion    method for  class  ensemble RandomForestRegressor  and    class  tree DecisionTreeRegressor    pr  15864  by    Nicolas Hug      mod  sklearn linear model                                  MajorFeature  Added generalized linear models  GLM  with non normal error   distributions  including  class  linear model PoissonRegressor      class  linear model GammaRegressor  and  class  linear model TweedieRegressor    which use Poisson  Gamma and Tweedie distributions respectively     pr  14300  by  user  Christian Lorentzen  lorentzenchr     Roman Yurchak      and  Olivier Grisel        MajorFeature  Support of  sample weight  in    class  linear model ElasticNet  and  class  linear model Lasso  for dense   feature matrix  X    pr  15436  by  user  Christian Lorentzen    lorentzenchr        Efficiency   class  linear model RidgeCV  and    class  linear model RidgeClassifierCV  now does not allocate a   
potentially large array to store dual coefficients for all hyperparameters   during its  fit   nor an array to store all error or LOO predictions unless    store cv values  is  True      pr  15652  by  user  J r me Dock s  jeromedockes        Enhancement   class  linear model LassoLars  and    class  linear model Lars  now support a  jitter  parameter that adds   random noise to the target  This might help with stability in some edge   cases   pr  15179  by  user  angelaambroz       Fix  Fixed a bug where if a  sample weight  parameter was passed to the fit   method of  class  linear model RANSACRegressor   it would not be passed to   the wrapped  base estimator  during the fitting of the final model     pr  15773  by  user  Jeremy Alexandre  J A16        Fix  Add  best score   attribute to  class  linear model RidgeCV  and    class  linear model RidgeClassifierCV      pr  15655  by  user  J r me Dock s  jeromedockes        Fix  Fixed a bug in  class  linear model RidgeClassifierCV  to pass a   specific scoring strategy  Before the internal estimator outputs score   instead of predictions     pr  14848  by  user  Venkatachalam N  venkyyuvy        Fix   class  linear model LogisticRegression  will now avoid an unnecessary   iteration when  solver  newton cg   by checking for inferior or equal instead   of strictly inferior for maximum of  absgrad  and  tol  in  utils optimize  newton cg      pr  16266  by  user  Rushabh Vasani  rushabh v        API  Deprecated public attributes  standard coef     standard intercept       average coef    and  average intercept   in    class  linear model SGDClassifier      class  linear model SGDRegressor      class  linear model PassiveAggressiveClassifier      class  linear model PassiveAggressiveRegressor      pr  16261  by  user  Carlos Brandt  chbrandt        Fix   Efficiency   class  linear model ARDRegression  is more stable and   much faster when  n samples   n features   It can now scale to hundreds of   thousands of samples 
 The stability fix might imply changes in the number   of non zero coefficients and in the predicted output   pr  16849  by    Nicolas Hug        Fix  Fixed a bug in  class  linear model ElasticNetCV      class  linear model MultiTaskElasticNetCV    class  linear model LassoCV    and  class  linear model MultiTaskLassoCV  where fitting would fail when   using joblib loky backend   pr  14264  by    user  J r mie du Boisberranger  jeremiedbb        Efficiency  Speed up  class  linear model MultiTaskLasso      class  linear model MultiTaskLassoCV    class  linear model MultiTaskElasticNet      class  linear model MultiTaskElasticNetCV  by avoiding slower   BLAS Level 2 calls on small arrays    pr  17021  by  user  Alex Gramfort  agramfort   and    user  Mathurin Massias  mathurinm      mod  sklearn metrics                             Enhancement   func  metrics pairwise distances chunked  now allows   its   reduce func   to not have a return value  enabling in place operations     pr  16397  by  Joel Nothman        Fix  Fixed a bug in  func  metrics mean squared error  to not ignore   argument  squared  when argument  multioutput  raw values       pr  16323  by  user  Rushabh Vasani  rushabh v       Fix  Fixed a bug in  func  metrics mutual info score  where negative   scores could be returned   pr  16362  by  Thomas Fan        Fix  Fixed a bug in  func  metrics confusion matrix  that would raise   an error when  y true  and  y pred  were length zero and  labels  was   not  None   In addition  we raise an error when an empty list is given to   the  labels  parameter     pr  16442  by  user  Kyle Parsons  parsons kyle 89        API  Changed the formatting of values in    meth  metrics ConfusionMatrixDisplay plot  and    metrics plot confusion matrix  to pick the shorter format  either  2g    or  d     pr  16159  by  user  Rick Mackenbach  Rick Mackenbach   and    Thomas Fan        API  From version 0 25   func  metrics pairwise distances  will no   longer automatically 
compute the   VI   parameter for Mahalanobis distance   and the   V   parameter for seuclidean distance if   Y   is passed  The user   will be expected to compute this parameter on the training data of their   choice and pass it to  pairwise distances    pr  16993  by  Joel Nothman      mod  sklearn model selection                                     Enhancement   class  model selection GridSearchCV  and    class  model selection RandomizedSearchCV  yields stack trace information   in fit failed warning messages in addition to previously emitted   type and details     pr  15622  by  user  Gregory Morse  GregoryMorse        Fix   func  model selection cross val predict  supports    method  predict proba   when  y None    pr  15918  by    user  Luca Kubin  lkubin        Fix   model selection fit grid point  is deprecated in 0 23 and will   be removed in 0 25   pr  16401  by    user  Arie Pratama Sutiono  ariepratama     mod  sklearn multioutput                                 Feature   func  multioutput MultiOutputRegressor fit  and    func  multioutput MultiOutputClassifier fit  now can accept  fit params    to pass to the  estimator fit  method of each step   issue  15953     pr  15959  by  user  Ke Huang  huangk10        Enhancement   class  multioutput RegressorChain  now supports  fit params    for  base estimator  during  fit      pr  16111  by  user  Venkatachalam N  venkyyuvy      mod  sklearn naive bayes                                    Fix  A correctly formatted error message is shown in    class  naive bayes CategoricalNB  when the number of features in the input   differs between  predict  and  fit      pr  16090  by  user  Madhura Jayaratne  madhuracj      mod  sklearn neural network                                    Efficiency   class  neural network MLPClassifier  and    class  neural network MLPRegressor  has reduced memory footprint when using   stochastic solvers    sgd   or   adam    and  shuffle True    pr  14075  by    user  meyer89       Fix  
Increases the numerical stability of the logistic loss function in    class  neural network MLPClassifier  by clipping the probabilities     pr  16117  by  Thomas Fan      mod  sklearn inspection                                Enhancement   class  inspection PartialDependenceDisplay  now exposes the   deciles lines as attributes so they can be hidden or customized   pr  15785    by  Nicolas Hug     mod  sklearn preprocessing                                   Feature  argument  drop  of  class  preprocessing OneHotEncoder    will now accept value  if binary  and will drop the first category of   each feature with two categories   pr  16245    by  user  Rushabh Vasani  rushabh v        Enhancement   class  preprocessing OneHotEncoder  s  drop idx   ndarray   can now contain  None   where  drop idx  i    None  means that no category   is dropped for index  i    pr  16585  by  user  Chiara Marmo  cmarmo        Enhancement   class  preprocessing MaxAbsScaler      class  preprocessing MinMaxScaler    class  preprocessing StandardScaler      class  preprocessing PowerTransformer      class  preprocessing QuantileTransformer      class  preprocessing RobustScaler  now supports pandas  nullable integer   dtype with missing values   pr  16508  by  Thomas Fan        Efficiency   class  preprocessing OneHotEncoder  is now faster at   transforming   pr  15762  by  Thomas Fan        Fix  Fix a bug in  class  preprocessing StandardScaler  which was incorrectly   computing statistics when calling  partial fit  on sparse inputs     pr  16466  by  user  Guillaume Lemaitre  glemaitre        Fix  Fix a bug in  class  preprocessing Normalizer  with norm  max     which was not taking the absolute value of the maximum values before   normalizing the vectors   pr  16632  by    user  Maura Pintor  Maupin1991   and  user  Battista Biggio  bbiggio      mod  sklearn semi supervised                                     Fix   class  semi supervised LabelSpreading  and    class  semi supervised 
LabelPropagation  avoids divide by zero warnings   when normalizing  label distributions     pr  15946  by  user  ngshya     mod  sklearn svm                         Fix   Efficiency  Improved   libsvm   and   liblinear   random number   generators used to randomly select coordinates in the coordinate descent   algorithms  Platform dependent C   rand     was used  which is only able to   generate numbers up to   32767   on windows platform  see this  blog   post  https   codeforces com blog entry 61587     and also has poor   randomization power as suggested by  this presentation    https   channel9 msdn com Events GoingNative 2013 rand Considered Harmful       It was replaced with C  11   mt19937    a Mersenne Twister that correctly   generates 31bits 63bits random numbers on all platforms  In addition  the   crude  modulo  postprocessor used to get a random number in a bounded   interval was replaced by the tweaked Lemire method as suggested by  this blog   post  http   www pcg random org posts bounded rands html       Any model using the  svm libsvm  or the  svm liblinear  solver    including  class  svm LinearSVC    class  svm LinearSVR      class  svm NuSVC    class  svm NuSVR    class  svm OneClassSVM      class  svm SVC    class  svm SVR    class  linear model LogisticRegression     is affected  In particular users can expect a better convergence when the   number of samples  LibSVM  or the number of features  LibLinear  is large     pr  13511  by  user  Sylvain Mari   smarie        Fix  Fix use of custom kernel not taking float entries such as string   kernels in  class  svm SVC  and  class  svm SVR   Note that custom kennels   are now expected to validate their input where they previously received   valid numeric arrays     pr  11296  by  Alexandre Gramfort   and   user  Georgi Peev  georgipeev        API   class  svm SVR  and  class  svm OneClassSVM  attributes   probA   and    probB    are now deprecated as they were not useful   pr  15558  by    Thomas 
Fan`_.

:mod:`sklearn.tree`
...................

- |Fix| :func:`tree.plot_tree` ``rotate`` parameter was unused and has been
  deprecated.
  :pr:`15806` by :user:`Chiara Marmo <cmarmo>`.

- |Fix| Fix support of read-only float32 array input in ``predict``,
  ``decision_path`` and ``predict_proba`` methods of
  :class:`tree.DecisionTreeClassifier`, :class:`tree.ExtraTreeClassifier` and
  :class:`ensemble.GradientBoostingClassifier` as well as ``predict`` method of
  :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeRegressor`, and
  :class:`ensemble.GradientBoostingRegressor`.
  :pr:`16331` by :user:`Alexandre Batisse <batalex>`.

:mod:`sklearn.utils`
....................

- |MajorFeature| Estimators can now be displayed with a rich html
  representation. This can be enabled in Jupyter notebooks by setting
  ``display='diagram'`` in :func:`~sklearn.set_config`. The raw html can be
  returned by using :func:`utils.estimator_html_repr`.
  :pr:`14180` by `Thomas Fan`_.

- |Enhancement| improve error message in :func:`utils.validation.column_or_1d`.
  :pr:`15926` by :user:`Loïc Estève <lesteve>`.

- |Enhancement| add warning in :func:`utils.check_array` for
  pandas sparse DataFrame.
  :pr:`16021` by :user:`Rushabh Vasani <rushabh-v>`.

- |Enhancement| :func:`utils.check_array` now constructs a sparse
  matrix from a pandas DataFrame that contains only `SparseArray` columns.
  :pr:`16728` by `Thomas Fan`_.

- |Enhancement| :func:`utils.check_array` supports pandas
  nullable integer dtype with missing values when ``force_all_finite`` is set
  to ``False`` or ``'allow-nan'`` in which case the data is converted to
  floating point values where ``pd.NA`` values are replaced by ``np.nan``. As a
  consequence, all :mod:`sklearn.preprocessing` transformers that accept
  numeric inputs with missing values represented as ``np.nan`` now also accept
  being directly fed pandas dataframes with ``pd.Int*`` or ``pd.UInt*`` typed
  columns that use ``pd.NA`` as a missing value marker. :pr:`16508`
by `Thomas Fan`_.

- |API| Passing classes to :func:`utils.estimator_checks.check_estimator` and
  :func:`utils.estimator_checks.parametrize_with_checks` is now deprecated,
  and support for classes will be removed in 0.24. Pass instances instead.
  :pr:`17032` by `Nicolas Hug`_.

- |API| The private utility ``_safe_tags`` in ``utils.estimator_checks`` was
  removed, hence all tags should be obtained through ``estimator._get_tags()``.
  Note that Mixins like ``RegressorMixin`` must come *before* base classes
  in the MRO for ``_get_tags()`` to work properly.
  :pr:`16950` by `Nicolas Hug`_.

- |Fix| ``utils.all_estimators`` now only returns public estimators.
  :pr:`15380` by `Thomas Fan`_.

Miscellaneous
.............

- |MajorFeature| Adds a HTML representation of estimators to be shown in
  a jupyter notebook or lab. This visualization is activated by setting the
  ``display`` option in :func:`sklearn.set_config`. :pr:`14180` by
  `Thomas Fan`_.

- |Enhancement| ``scikit-learn`` now works with ``mypy`` without errors.
  :pr:`16726` by `Roman Yurchak`_.

- |API| Most estimators now expose a ``n_features_in_`` attribute. This
  attribute is equal to the number of features passed to the ``fit`` method.
  See `SLEP010
  <https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep010/proposal.html>`_
  for details. :pr:`16112` by `Nicolas Hug`_.

- |API| Estimators now have a ``requires_y`` tag which is False by default
  except for estimators that inherit from ``sklearn.base.RegressorMixin`` or
  ``sklearn.base.ClassifierMixin``. This tag is used to ensure that a proper
  error message is raised when y was expected but None was passed.
  :pr:`16622` by `Nicolas Hug`_.

- |API| The default setting ``print_changed_only`` has been changed from False
  to True. This means that the ``repr`` of estimators is now more concise and
  only shows the parameters whose default value has been changed when
  printing an estimator. You can restore the previous behaviour by using
  ``sklearn.
set_config(print_changed_only=False)``. Also, note that it is
always possible to quickly inspect the parameters of any estimator using
``est.get_params(deep=False)``. :pr:`17061` by `Nicolas Hug`_.

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.22, including:

Abbie Popa, Adrin Jalali, Aleksandra Kocot, Alexandre Batisse, Alexandre
Gramfort, Alex Henrie, Alex Itkes, Alex Liang, alexshacked, Alonso Silva
Allende, Ana Casado, Andreas Mueller, Angela Ambroz, Ankit810, Arie Pratama
Sutiono, Arunav Konwar, Baptiste Maingret, Benjamin Beier Liu, bernie gray,
Bharathi Srinivasan, Bharat Raghunathan, Bibhash Chandra Mitra, Brian Wignall,
brigi, Brigitta Sipőcz, Carlos H Brandt, CastaChick, castor, cgsavard, Chiara
Marmo, Chris Gregory, Christian Kastner, Christian Lorentzen, Corrie
Bartelheimer, Daniël van Gelder, Daphne, David Breuer, david-cortes, dbauer9,
Divyaprabha M, Edward Qian, Ekaterina Borovikova, ELNS, Emily Taylor, Erich
Schubert, Eric Leung, Evgeni Chasnovski, Fabiana, Facundo Ferrín, Fan,
Franziska Boenisch, Gael Varoquaux, Gaurav Sharma, Geoffrey Bolmier, Georgi
Peev, gholdman1, Gonthier Nicolas, Gregory Morse, Gregory R. Lee, Guillaume
Lemaitre, Gui Miotto, Hailey Nguyen, Hanmin Qin, Hao Chun Chang, HaoYin,
Hélion du Mas des Bourboux, Himanshu Garg, Hirofumi Suzuki, huangk10, Hugo van
Kemenade, Hye Sung Jung, indecisiveuser, inderjeet, J-A16, Jérémie du
Boisberranger, Jin Hwan CHO, JJmistry, Joel Nothman, Johann Faouzi, Jon Haitz
Legarreta Gorroño, Juan Carlos Alfaro Jiménez, judithabk6, jumon, Kathryn
Poole, Katrina Ni, Kesshi Jordan, Kevin Loftis, Kevin Markham,
krishnachaitanya9, Lam Gia Thuan, Leland McInnes, Lisa Schwetlick, lkubin,
Loic Esteve, lopusz, lrjball, lucgiffon, lucyleeow, Lucy Liu, Lukas Kemkes,
Maciej J Mikulski, Madhura Jayaratne, Magda Zielinska, maikia, Mandy Gu,
Manimaran, Manish Aradwad, Maren Westermann, Maria, Mariana
Meireles, Marie Douriez, Marielle, Mateusz Górski, mathurinm, Matt Hall, Maura
Pintor, mc4229, meyer89, m-fab, Michael Shoemaker, Michał Słapek, Mina
Naghshhnejad, mo, Mohamed Maskani, Mojca Bertoncelj, narendramukherjee,
ngshya, Nicholas Won, Nicolas Hug, nicolasservel, Niklas, nkish, Noa Tamir,
Oleksandr Pavlyk, olicairns, Oliver Urs Lenz, Olivier Grisel,
parsons-kyle-89, Paula, Pete Green, Pierre Delanoue, pspachtholz, Pulkit
Mehta, Qizhi Jiang, Quang Nguyen, rachelcjordan, raduspaimoc, Reshama Shaikh,
Riccardo Folloni, Rick Mackenbach, Ritchie Ng, Roman Feldbauer, Roman Yurchak,
Rory Hartong-Redden, Rüdiger Busche, Rushabh Vasani, Sambhav Kothari, Samesh
Lakhotia, Samuel Duan, SanthoshBala18, Santiago M. Mola, Sarat Addepalli,
scibol, Sebastian Kießling, SergioDSR, Sergul Aydore, Shiki-H, shivamgargsya,
SHUBH CHATTERJEE, Siddharth Gupta, simonamaggio, smarie, Snowhite, stareh,
Stephen Blystone, Stephen Marsh, Sunmi Yoon, SylvainLan, talgatomarov,
tamirlan1, th0rwas, theoptips, Thomas J Fan, Thomas Li, Thomas Schmitt, Tim
Nonner, Tim Vink, Tiphaine Viard, Tirth Patel, Titus Christian, Tom Dupré la
Tour, trimeta, Vachan D A, Vandana Iyer, Venkatachalam N, waelbenamara,
wconnell, wderose, wenliwyan, Windber, wornbb, Yu-Hang Maximin Tang"}
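The ``sklearn.svm`` entry above describes replacing the biased "modulo" postprocessor with Lemire's bounded-rand method on top of a Mersenne Twister. A minimal stdlib-only Python sketch of that bounded mapping (the ``bounded_rand`` helper is hypothetical, for illustration only; it is not scikit-learn or libsvm code):

```python
import random

def bounded_rand(next_word, n, bits=32):
    """Lemire's nearly-divisionless method: map a `bits`-bit random word to an
    unbiased integer in [0, n) using one widening multiply and a rare
    rejection, instead of the biased `next_word() % n` postprocessor."""
    mask = (1 << bits) - 1
    m = next_word() * n              # widening multiply; high part is the candidate
    low = m & mask                   # low part decides whether bias is possible
    if low < n:                      # only then can some candidates be over-represented
        threshold = (mask + 1 - n) % n   # = 2**bits mod n
        while low < threshold:       # reject the biased region and resample
            m = next_word() * n
            low = m & mask
    return m >> bits                 # high `bits` bits: uniform over [0, n)

# Usage: draw coordinate indices uniformly from range(10). Python's `random`
# is itself Mersenne Twister based, matching the mt19937 generator named above.
rng = random.Random(42)
draws = [bounded_rand(lambda: rng.getrandbits(32), 10) for _ in range(8)]
```

The rejection loop runs with probability at most ``n / 2**bits``, so for small bounds it almost never resamples, which is why the method is both unbiased and fast.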
{"questions":"scikit-learn It also defines other ReST substitutions format html their github page to be their URL target Historically it was used to raw html raw hyperlink all contributors names and should now be preferred This file maps contributor names to their URLs It should mostly be used for core contributors and occasionally for contributors who do not want","answers":"\n..\n    This file maps contributor names to their URLs. It should mostly be used\n    for core contributors, and occasionally for contributors who do not want\n    their github page to be their URL target. Historically it was used to\n    hyperlink all contributors' names, and ``:user:`` should now be preferred.\n    It also defines other ReST substitutions.\n\n.. role:: raw-html(raw)\n   :format: html\n\n.. role:: raw-latex(raw)\n   :format: latex\n\n.. |MajorFeature| replace:: :raw-html:`<span class=\"badge text-bg-success\">Major Feature<\/span>` :raw-latex:`{\\small\\sc [Major Feature]}`\n.. |Feature| replace:: :raw-html:`<span class=\"badge text-bg-success\">Feature<\/span>` :raw-latex:`{\\small\\sc [Feature]}`\n.. |Efficiency| replace:: :raw-html:`<span class=\"badge text-bg-info\">Efficiency<\/span>` :raw-latex:`{\\small\\sc [Efficiency]}`\n.. |Enhancement| replace:: :raw-html:`<span class=\"badge text-bg-info\">Enhancement<\/span>` :raw-latex:`{\\small\\sc [Enhancement]}`\n.. |Fix| replace:: :raw-html:`<span class=\"badge text-bg-danger\">Fix<\/span>` :raw-latex:`{\\small\\sc [Fix]}`\n.. |API| replace:: :raw-html:`<span class=\"badge text-bg-warning\">API Change<\/span>` :raw-latex:`{\\small\\sc [API Change]}`\n\n\n.. _Olivier Grisel: https:\/\/twitter.com\/ogrisel\n\n.. _Gael Varoquaux: http:\/\/gael-varoquaux.info\n\n.. _Alexandre Gramfort: http:\/\/alexandre.gramfort.net\n\n.. _Fabian Pedregosa: http:\/\/fa.bianp.net\n\n.. _Mathieu Blondel: http:\/\/www.mblondel.org\n\n.. _James Bergstra: http:\/\/www-etud.iro.umontreal.ca\/~bergstrj\/\n\n.. 
_liblinear: https:\/\/www.csie.ntu.edu.tw\/~cjlin\/liblinear\/\n\n.. _Yaroslav Halchenko: http:\/\/www.onerussian.com\/\n\n.. _Vlad Niculae: https:\/\/vene.ro\/\n\n.. _Edouard Duchesnay: https:\/\/duchesnay.github.io\/\n\n.. _Peter Prettenhofer: https:\/\/sites.google.com\/site\/peterprettenhofer\/\n\n.. _Alexandre Passos: http:\/\/atpassos.me\n\n.. _Nicolas Pinto: https:\/\/twitter.com\/npinto\n\n.. _Bertrand Thirion: https:\/\/team.inria.fr\/parietal\/bertrand-thirions-page\n\n.. _Andreas M\u00fcller: https:\/\/amueller.github.io\/\n\n.. _Matthieu Perrot: http:\/\/brainvisa.info\/biblio\/lnao\/en\/Author\/PERROT-M.html\n\n.. _Jake Vanderplas: https:\/\/staff.washington.edu\/jakevdp\/\n\n.. _Gilles Louppe: http:\/\/www.montefiore.ulg.ac.be\/~glouppe\/\n\n.. _INRIA: https:\/\/www.inria.fr\/\n\n.. _Parietal Team: http:\/\/parietal.saclay.inria.fr\/\n\n.. _David Warde-Farley: http:\/\/www-etud.iro.umontreal.ca\/~wardefar\/\n\n.. _Brian Holt: http:\/\/personal.ee.surrey.ac.uk\/Personal\/B.Holt\n\n.. _Satrajit Ghosh: https:\/\/www.mit.edu\/~satra\/\n\n.. _Robert Layton: https:\/\/twitter.com\/robertlayton\n\n.. _Scott White: https:\/\/twitter.com\/scottblanc\n\n.. _David Marek: https:\/\/davidmarek.cz\/\n\n.. _Christian Osendorfer: https:\/\/osdf.github.io\n\n.. _Arnaud Joly: http:\/\/www.ajoly.org\n\n.. _Rob Zinkov: https:\/\/www.zinkov.com\/\n\n.. _Joel Nothman: https:\/\/joelnothman.com\/\n\n.. _Nicolas Tr\u00e9segnie: https:\/\/github.com\/NicolasTr\n\n.. _Kemal Eren: http:\/\/www.kemaleren.com\n\n.. _Yann Dauphin: https:\/\/ynd.github.io\/\n\n.. _Yannick Schwartz: https:\/\/team.inria.fr\/parietal\/schwarty\/\n\n.. _Kyle Kastner: https:\/\/kastnerkyle.github.io\/\n\n.. _Daniel Nouri: http:\/\/danielnouri.org\n\n.. _Manoj Kumar: https:\/\/manojbits.wordpress.com\n\n.. _Luis Pedro Coelho: http:\/\/luispedro.org\n\n.. _Fares Hedyati: http:\/\/www.eecs.berkeley.edu\/~fareshed\n\n.. _Antony Lee: https:\/\/www.ocf.berkeley.edu\/~antonyl\/\n\n.. 
_Martin Billinger: https:\/\/tnsre.embs.org\/author\/martinbillinger\/\n\n.. _Matteo Visconti di Oleggio Castello: http:\/\/www.mvdoc.me\n\n.. _Trevor Stephens: http:\/\/trevorstephens.com\/\n\n.. _Jan Hendrik Metzen: https:\/\/jmetzen.github.io\/\n\n.. _Will Dawson: http:\/\/www.dawsonresearch.com\n\n.. _Andrew Tulloch: https:\/\/tullo.ch\/\n\n.. _Hanna Wallach: https:\/\/dirichlet.net\/\n\n.. _Yan Yi: http:\/\/seowyanyi.org\n\n.. _Herv\u00e9 Bredin: https:\/\/herve.niderb.fr\/\n\n.. _Eric Martin: http:\/\/www.ericmart.in\n\n.. _Nicolas Goix: https:\/\/ngoix.github.io\/\n\n.. _Sebastian Raschka: https:\/\/sebastianraschka.com\/\n\n.. _Brian McFee: https:\/\/bmcfee.github.io\n\n.. _Valentin Stolbunov: http:\/\/www.vstolbunov.com\n\n.. _Jaques Grobler: https:\/\/github.com\/jaquesgrobler\n\n.. _Lars Buitinck: https:\/\/github.com\/larsmans\n\n.. _Loic Esteve: https:\/\/github.com\/lesteve\n\n.. _Noel Dawe: https:\/\/github.com\/ndawe\n\n.. _Raghav RV: https:\/\/github.com\/raghavrv\n\n.. _Tom Dupre la Tour: https:\/\/github.com\/TomDLT\n\n.. _Nelle Varoquaux: https:\/\/github.com\/nellev\n\n.. _Bing Tian Dai: https:\/\/github.com\/btdai\n\n.. _Dylan Werner-Meier: https:\/\/github.com\/unautre\n\n.. _Alyssa Batula: https:\/\/github.com\/abatula\n\n.. _Srivatsan Ramesh: https:\/\/github.com\/srivatsan-ramesh\n\n.. _Ron Weiss: https:\/\/www.ee.columbia.edu\/~ronw\/\n\n.. _Kathleen Chen: https:\/\/github.com\/kchen17\n\n.. _Vincent Pham: https:\/\/github.com\/vincentpham1991\n\n.. _Denis Engemann: http:\/\/denis-engemann.de\n\n.. _Anish Shah: https:\/\/github.com\/AnishShah\n\n.. _Neeraj Gangwar: http:\/\/neerajgangwar.in\n\n.. _Arthur Mensch: https:\/\/amensch.fr\n\n.. _Joris Van den Bossche: https:\/\/github.com\/jorisvandenbossche\n\n.. _Roman Yurchak: https:\/\/github.com\/rth\n\n.. _Hanmin Qin: https:\/\/github.com\/qinhanmin2014\n\n.. _Adrin Jalali: https:\/\/github.com\/adrinjalali\n\n.. _Thomas Fan: https:\/\/github.com\/thomasjpfan\n\n.. 
_Nicolas Hug: https:\/\/github.com\/NicolasHug\n\n.. _Guillaume Lemaitre: https:\/\/github.com\/glemaitre\n\n.. _Tim Head: https:\/\/betatim.github.io\/","site":"scikit-learn","answers_cleaned":"        This file maps contributor names to their URLs  It should mostly be used     for core contributors  and occasionally for contributors who do not want     their github page to be their URL target  Historically it was used to     hyperlink all contributors  names  and    user    should now be preferred      It also defines other ReST substitutions      role   raw html raw      format  html     role   raw latex raw      format  latex      MajorFeature  replace    raw html   span class  badge text bg success  Major Feature  span    raw latex    small sc  Major Feature        Feature  replace    raw html   span class  badge text bg success  Feature  span    raw latex    small sc  Feature        Efficiency  replace    raw html   span class  badge text bg info  Efficiency  span    raw latex    small sc  Efficiency        Enhancement  replace    raw html   span class  badge text bg info  Enhancement  span    raw latex    small sc  Enhancement        Fix  replace    raw html   span class  badge text bg danger  Fix  span    raw latex    small sc  Fix        API  replace    raw html   span class  badge text bg warning  API Change  span    raw latex    small sc  API Change          Olivier Grisel  https   twitter com ogrisel      Gael Varoquaux  http   gael varoquaux info      Alexandre Gramfort  http   alexandre gramfort net      Fabian Pedregosa  http   fa bianp net      Mathieu Blondel  http   www mblondel org      James Bergstra  http   www etud iro umontreal ca  bergstrj       liblinear  https   www csie ntu edu tw  cjlin liblinear       Yaroslav Halchenko  http   www onerussian com       Vlad Niculae  https   vene ro       Edouard Duchesnay  https   duchesnay github io       Peter Prettenhofer  https   sites google com site peterprettenhofer       Alexandre Passos  http   
atpassos me      Nicolas Pinto  https   twitter com npinto      Bertrand Thirion  https   team inria fr parietal bertrand thirions page      Andreas M ller  https   amueller github io       Matthieu Perrot  http   brainvisa info biblio lnao en Author PERROT M html      Jake Vanderplas  https   staff washington edu jakevdp       Gilles Louppe  http   www montefiore ulg ac be  glouppe       INRIA  https   www inria fr       Parietal Team  http   parietal saclay inria fr       David Warde Farley  http   www etud iro umontreal ca  wardefar       Brian Holt  http   personal ee surrey ac uk Personal B Holt      Satrajit Ghosh  https   www mit edu  satra       Robert Layton  https   twitter com robertlayton      Scott White  https   twitter com scottblanc      David Marek  https   davidmarek cz       Christian Osendorfer  https   osdf github io      Arnaud Joly  http   www ajoly org      Rob Zinkov  https   www zinkov com       Joel Nothman  https   joelnothman com       Nicolas Tr segnie  https   github com NicolasTr      Kemal Eren  http   www kemaleren com      Yann Dauphin  https   ynd github io       Yannick Schwartz  https   team inria fr parietal schwarty       Kyle Kastner  https   kastnerkyle github io       Daniel Nouri  http   danielnouri org      Manoj Kumar  https   manojbits wordpress com      Luis Pedro Coelho  http   luispedro org      Fares Hedyati  http   www eecs berkeley edu  fareshed      Antony Lee  https   www ocf berkeley edu  antonyl       Martin Billinger  https   tnsre embs org author martinbillinger       Matteo Visconti di Oleggio Castello  http   www mvdoc me      Trevor Stephens  http   trevorstephens com       Jan Hendrik Metzen  https   jmetzen github io       Will Dawson  http   www dawsonresearch com      Andrew Tulloch  https   tullo ch       Hanna Wallach  https   dirichlet net       Yan Yi  http   seowyanyi org      Herv  Bredin  https   herve niderb fr       Eric Martin  http   www ericmart in      Nicolas Goix  https   ngoix github 
io       Sebastian Raschka  https   sebastianraschka com       Brian McFee  https   bmcfee github io      Valentin Stolbunov  http   www vstolbunov com      Jaques Grobler  https   github com jaquesgrobler      Lars Buitinck  https   github com larsmans      Loic Esteve  https   github com lesteve      Noel Dawe  https   github com ndawe      Raghav RV  https   github com raghavrv      Tom Dupre la Tour  https   github com TomDLT      Nelle Varoquaux  https   github com nellev      Bing Tian Dai  https   github com btdai      Dylan Werner Meier  https   github com unautre      Alyssa Batula  https   github com abatula      Srivatsan Ramesh  https   github com srivatsan ramesh      Ron Weiss  https   www ee columbia edu  ronw       Kathleen Chen  https   github com kchen17      Vincent Pham  https   github com vincentpham1991      Denis Engemann  http   denis engemann de      Anish Shah  https   github com AnishShah      Neeraj Gangwar  http   neerajgangwar in      Arthur Mensch  https   amensch fr      Joris Van den Bossche  https   github com jorisvandenbossche      Roman Yurchak  https   github com rth      Hanmin Qin  https   github com qinhanmin2014      Adrin Jalali  https   github com adrinjalali      Thomas Fan  https   github com thomasjpfan      Nicolas Hug  https   github com NicolasHug      Guillaume Lemaitre  https   github com glemaitre      Tim Head  https   betatim github io "}
{"questions":"scikit-learn sklearn contributors rst releasenotes022 Version 0 22","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n.. _release_notes_0_22:\n\n============\nVersion 0.22\n============\n\nFor a short description of the main highlights of the release, please refer to\n:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_0_22_0.py`.\n\n.. include:: changelog_legend.inc\n\n.. _changes_0_22_2:\n\nVersion 0.22.2.post1\n====================\n\n**March 3 2020**\n\nThe 0.22.2.post1 release includes a packaging fix for the source distribution\nbut the content of the packages is otherwise identical to the content of the\nwheels with the 0.22.2 version (without the .post1 suffix). Both contain the\nfollowing changes.\n\nChangelog\n---------\n\n:mod:`sklearn.impute`\n.....................\n\n- |Efficiency| Reduce :func:`impute.KNNImputer` asymptotic memory usage by\n  chunking pairwise distance computation.\n  :pr:`16397` by `Joel Nothman`_.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fixed a bug in `metrics.plot_roc_curve` where\n  the name of the estimator was passed in the :class:`metrics.RocCurveDisplay`\n  instead of the parameter `name`. It results in a different plot when calling\n  :meth:`metrics.RocCurveDisplay.plot` for the subsequent times.\n  :pr:`16500` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Fixed a bug in `metrics.plot_precision_recall_curve` where the\n  name of the estimator was passed in the\n  :class:`metrics.PrecisionRecallDisplay` instead of the parameter `name`. It\n  results in a different plot when calling\n  :meth:`metrics.PrecisionRecallDisplay.plot` for the subsequent times.\n  :pr:`16505` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Fix| Fix a bug which converted a list of arrays into a 2-D object\n  array instead of a 1-D array containing NumPy arrays. 
This bug\n  was affecting :meth:`neighbors.NearestNeighbors.radius_neighbors`.\n  :pr:`16076` by :user:`Guillaume Lemaitre <glemaitre>` and\n  :user:`Alex Shacked <alexshacked>`.\n\n.. _changes_0_22_1:\n\nVersion 0.22.1\n==============\n\n**January 2 2020**\n\nThis is a bug-fix release to primarily resolve some packaging issues in version\n0.22.0. It also includes minor documentation improvements and some bug fixes.\n\nChangelog\n---------\n\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| :class:`cluster.KMeans` with ``algorithm=\"elkan\"`` now uses the same\n  stopping criterion as with the default ``algorithm=\"full\"``. :pr:`15930` by\n  :user:`inder128`.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |Fix| :func:`inspection.permutation_importance` will return the same\n  `importances` when a `random_state` is given for both `n_jobs=1` or\n  `n_jobs>1` both with shared memory backends (thread-safety) and\n  isolated memory, process-based backends.\n  Also avoid casting the data as object dtype and avoid read-only error\n  on large dataframes with `n_jobs>1` as reported in :issue:`15810`.\n  Follow-up of :pr:`15898` by :user:`Shivam Gargsya <shivamgargsya>`.\n  :pr:`15933` by :user:`Guillaume Lemaitre <glemaitre>` and `Olivier Grisel`_.\n\n- |Fix| `inspection.plot_partial_dependence` and\n  :meth:`inspection.PartialDependenceDisplay.plot` now consistently checks\n  the number of axes passed in. :pr:`15760` by `Thomas Fan`_.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| `metrics.plot_confusion_matrix` now raises error when `normalize`\n  is invalid. Previously, it runs fine with no normalization.\n  :pr:`15888` by `Hanmin Qin`_.\n\n- |Fix| `metrics.plot_confusion_matrix` now colors the label color\n  correctly to maximize contrast with its background. 
:pr:`15936` by\n  `Thomas Fan`_ and :user:`DizietAsahi`.\n\n- |Fix| :func:`metrics.classification_report` does no longer ignore the\n  value of the ``zero_division`` keyword argument. :pr:`15879`\n  by :user:`Bibhash Chandra Mitra <Bibyutatsu>`.\n\n- |Fix| Fixed a bug in `metrics.plot_confusion_matrix` to correctly\n  pass the `values_format` parameter to the :class:`metrics.ConfusionMatrixDisplay`\n  plot() call. :pr:`15937` by :user:`Stephen Blystone <blynotes>`.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Fix| :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` accept scalar values provided in\n  `fit_params`. Change in 0.22 was breaking backward compatibility.\n  :pr:`15863` by :user:`Adrin Jalali <adrinjalali>` and\n  :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.naive_bayes`\n..........................\n\n- |Fix| Removed `abstractmethod` decorator for the method `_check_X` in\n  `naive_bayes.BaseNB` that could break downstream projects inheriting\n  from this deprecated public base class. :pr:`15996` by\n  :user:`Brigitta Sip\u0151cz <bsipocz>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Fix| :class:`preprocessing.QuantileTransformer` now guarantees the\n  `quantiles_` attribute to be completely sorted in non-decreasing manner.\n  :pr:`15751` by :user:`Tirth Patel <tirthasheshpatel>`.\n\n:mod:`sklearn.semi_supervised`\n..............................\n\n- |Fix| :class:`semi_supervised.LabelPropagation` and\n  :class:`semi_supervised.LabelSpreading` now allow callable kernel function to\n  return sparse weight matrix.\n  :pr:`15868` by :user:`Niklas Smedemark-Margulies <nik-sm>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |Fix| :func:`utils.check_array` now correctly converts pandas DataFrame with\n  boolean columns to floats. 
:pr:`15797` by `Thomas Fan`_.\n\n- |Fix| :func:`utils.validation.check_is_fitted` accepts back an explicit ``attributes``\n  argument to check for specific attributes as explicit markers of a fitted\n  estimator. When no explicit ``attributes`` are provided, only the attributes\n  that end with a underscore and do not start with double underscore are used\n  as \"fitted\" markers. The ``all_or_any`` argument is also no longer\n  deprecated. This change is made to restore some backward compatibility with\n  the behavior of this utility in version 0.21. :pr:`15947` by `Thomas Fan`_.\n\n.. _changes_0_22:\n\nVersion 0.22.0\n==============\n\n**December 3 2019**\n\nWebsite update\n--------------\n\n`Our website <https:\/\/scikit-learn.org\/>`_ was revamped and given a fresh\nnew look. :pr:`14849` by `Thomas Fan`_.\n\nClear definition of the public API\n----------------------------------\n\nScikit-learn has a public API, and a private API.\n\nWe do our best not to break the public API, and to only introduce\nbackward-compatible changes that do not require any user action. However, in\ncases where that's not possible, any change to the public API is subject to\na deprecation cycle of two minor versions. The private API isn't publicly\ndocumented and isn't subject to any deprecation cycle, so users should not\nrely on its stability.\n\nA function or object is public if it is documented in the `API Reference\n<https:\/\/scikit-learn.org\/dev\/modules\/classes.html>`_ and if it can be\nimported with an import path without leading underscores. For example\n``sklearn.pipeline.make_pipeline`` is public, while\n`sklearn.pipeline._name_estimators` is private.\n``sklearn.ensemble._gb.BaseEnsemble`` is private too because the whole `_gb`\nmodule is private.\n\nUp to 0.22, some tools were de-facto public (no leading underscore), while\nthey should have been private in the first place. 
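The ``check_is_fitted`` entry above describes the restored 0.22.1 convention for detecting a fitted estimator: explicit ``attributes`` are checked when given; otherwise any attribute that ends with an underscore and does not start with a double underscore counts as a "fitted" marker. A minimal stdlib-only sketch of that convention (the ``is_fitted`` helper and ``EstimatorLike`` class are hypothetical illustrations, not scikit-learn's actual implementation):

```python
class EstimatorLike:
    """Toy estimator following the scikit-learn attribute convention."""
    def __init__(self):
        self.alpha = 1.0                     # hyperparameter: no trailing underscore
    def fit(self, X):
        self.coef_ = [0.0] * len(X[0])       # fitted attribute: trailing underscore
        return self

def is_fitted(estimator, attributes=None):
    """Mimic the convention described above: explicit `attributes` win;
    otherwise any instance attribute ending with "_" but not starting
    with "__" marks the estimator as fitted."""
    if attributes is not None:
        return all(hasattr(estimator, attr) for attr in attributes)
    return any(attr.endswith("_") and not attr.startswith("__")
               for attr in vars(estimator))

# Usage: unfitted estimators expose only hyperparameters, so no marker exists.
est = EstimatorLike()
before = is_fitted(est)                       # False: only `alpha` is set
after = is_fitted(est.fit([[1, 2], [3, 4]]))  # True: `coef_` now exists
```

Passing an explicit ``attributes`` list, as the changelog notes, lets a caller check for specific markers such as ``["coef_"]`` rather than relying on the naming heuristic.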
In version 0.22, these\ntools have been made properly private, and the public API space has been\ncleaned. In addition, importing from most sub-modules is now deprecated: you\nshould for example use ``from sklearn.cluster import Birch`` instead of\n``from sklearn.cluster.birch import Birch`` (in practice, ``birch.py`` has\nbeen moved to ``_birch.py``).\n\n.. note::\n\n    All the tools in the public API should be documented in the `API\n    Reference <https:\/\/scikit-learn.org\/dev\/modules\/classes.html>`_. If you\n    find a public tool (without leading underscore) that isn't in the API\n    reference, that means it should either be private or documented. Please\n    let us know by opening an issue!\n\nThis work was tracked in `issue 9250\n<https:\/\/github.com\/scikit-learn\/scikit-learn\/issues\/9250>`_ and `issue\n12927 <https:\/\/github.com\/scikit-learn\/scikit-learn\/issues\/12927>`_.\n\n\nDeprecations: using ``FutureWarning`` from now on\n-------------------------------------------------\n\nWhen deprecating a feature, previous versions of scikit-learn used to raise\na ``DeprecationWarning``. Since the ``DeprecationWarnings`` aren't shown by\ndefault by Python, scikit-learn needed to resort to a custom warning filter\nto always show the warnings. That filter would sometimes interfere\nwith users' custom warning filters.\n\nStarting from version 0.22, scikit-learn will show ``FutureWarnings`` for\ndeprecations, `as recommended by the Python documentation\n<https:\/\/docs.python.org\/3\/library\/exceptions.html#FutureWarning>`_.\n``FutureWarnings`` are always shown by default by Python, so the custom\nfilter has been removed and scikit-learn no longer interferes with user\nfilters. :pr:`15080` by `Nicolas Hug`_.\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. 
This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- :class:`cluster.KMeans` when `n_jobs=1`. |Fix|\n- :class:`decomposition.SparseCoder`,\n  :class:`decomposition.DictionaryLearning`, and\n  :class:`decomposition.MiniBatchDictionaryLearning` |Fix|\n- :class:`decomposition.SparseCoder` with `algorithm='lasso_lars'` |Fix|\n- :class:`decomposition.SparsePCA` where `normalize_components` has no effect\n  due to deprecation.\n- :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor` |Fix|, |Feature|,\n  |Enhancement|.\n- :class:`impute.IterativeImputer` when `X` has features with no missing\n  values. |Feature|\n- :class:`linear_model.Ridge` when `X` is sparse. |Fix|\n- :class:`model_selection.StratifiedKFold` and any use of `cv=int` with a\n  classifier. |Fix|\n- :class:`cross_decomposition.CCA` when using scipy >= 1.3 |Fix|\n\nDetails are listed in the changelog below.\n\n(While we are trying to better inform users by providing this information, we\ncannot assure that this list is complete.)\n\nChangelog\n---------\n\n..\n    Entries should be grouped by module (in alphabetic order) and prefixed with\n    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,\n    |Fix| or |API| (see whats_new.rst for descriptions).\n    Entries should be ordered by those labels (e.g. 
|Fix| after |Efficiency|).\n    Changes not specific to a module should be listed under *Multiple Modules*\n    or *Miscellaneous*.\n    Entries should end with:\n    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.\n    where 123456 is the *pull request* number, not the issue number.\n\n:mod:`sklearn.base`\n...................\n\n- |API| From version 0.24 :meth:`base.BaseEstimator.get_params` will raise an\n  AttributeError rather than return None for parameters that are in the\n  estimator's constructor but not stored as attributes on the instance.\n  :pr:`14464` by `Joel Nothman`_.\n\n:mod:`sklearn.calibration`\n..........................\n\n- |Fix| Fixed a bug that made :class:`calibration.CalibratedClassifierCV` fail when\n  given a `sample_weight` parameter of type `list` (in the case where\n  `sample_weights` are not supported by the wrapped estimator). :pr:`13575`\n  by :user:`William de Vazelhes <wdevazelhes>`.\n\n:mod:`sklearn.cluster`\n......................\n\n- |Feature| :class:`cluster.SpectralClustering` now accepts precomputed sparse\n  neighbors graph as input. :issue:`10482` by `Tom Dupre la Tour`_ and\n  :user:`Kumar Ashutosh <thechargedneutron>`.\n\n- |Enhancement| :class:`cluster.SpectralClustering` now accepts a ``n_components``\n  parameter. 
This parameter extends `SpectralClustering` class functionality to\n  match :meth:`cluster.spectral_clustering`.\n  :pr:`13726` by :user:`Shuzhe Xiao <fdas3213>`.\n\n- |Fix| Fixed a bug where :class:`cluster.KMeans` produced inconsistent results\n  between `n_jobs=1` and `n_jobs>1` due to the handling of the random state.\n  :pr:`9288` by :user:`Bryan Yang <bryanyang0528>`.\n\n- |Fix| Fixed a bug where `elkan` algorithm in :class:`cluster.KMeans` was\n  producing Segmentation Fault on large arrays due to integer index overflow.\n  :pr:`15057` by :user:`Vladimir Korolev <balodja>`.\n\n- |Fix| :class:`~cluster.MeanShift` now accepts a :term:`max_iter` with a\n  default value of 300 instead of always using the default 300. It also now\n  exposes an ``n_iter_`` indicating the maximum number of iterations performed\n  on each seed. :pr:`15120` by `Adrin Jalali`_.\n\n- |Fix| :class:`cluster.AgglomerativeClustering` and\n  :class:`cluster.FeatureAgglomeration` now raise an error if\n  `affinity='cosine'` and `X` has samples that are all-zeros. :pr:`7943` by\n  :user:`mthorrell`.\n\n:mod:`sklearn.compose`\n......................\n\n- |Feature|  Adds :func:`compose.make_column_selector` which is used with\n  :class:`compose.ColumnTransformer` to select DataFrame columns on the basis\n  of name and dtype. :pr:`12303` by `Thomas Fan`_.\n\n- |Fix| Fixed a bug in :class:`compose.ColumnTransformer` which failed to\n  select the proper columns when using a boolean list, with NumPy older than\n  1.12.\n  :pr:`14510` by `Guillaume Lemaitre`_.\n\n- |Fix| Fixed a bug in :class:`compose.TransformedTargetRegressor` which did not\n  pass `**fit_params` to the underlying regressor.\n  :pr:`14890` by :user:`Miguel Cabrera <mfcabrera>`.\n\n- |Fix| The :class:`compose.ColumnTransformer` now requires the number of\n  features to be consistent between `fit` and `transform`. A `FutureWarning`\n  is raised now, and this will raise an error in 0.24. 
If the number of\n  features isn't consistent and negative indexing is used, an error is\n  raised. :pr:`14544` by `Adrin Jalali`_.\n\n:mod:`sklearn.cross_decomposition`\n..................................\n\n- |Feature| :class:`cross_decomposition.PLSCanonical` and\n  :class:`cross_decomposition.PLSRegression` have a new function\n  ``inverse_transform`` to transform data to the original space.\n  :pr:`15304` by :user:`Jaime Ferrando Huertas <jiwidi>`.\n\n- |Enhancement| :class:`decomposition.KernelPCA` now properly checks the\n  eigenvalues found by the solver for numerical or conditioning issues. This\n  ensures consistency of results across solvers (different choices for\n  ``eigen_solver``), including approximate solvers such as ``'randomized'`` and\n  ``'lobpcg'`` (see :issue:`12068`).\n  :pr:`12145` by :user:`Sylvain Mari\u00e9 <smarie>`.\n\n- |Fix| Fixed a bug where :class:`cross_decomposition.PLSCanonical` and\n  :class:`cross_decomposition.PLSRegression` were raising an error when fitted\n  with a target matrix `Y` in which the first column was constant.\n  :issue:`13609` by :user:`Camila Williamson <camilaagw>`.\n\n- |Fix| :class:`cross_decomposition.CCA` now produces the same results with\n  scipy 1.3 and previous scipy versions. :pr:`15661` by `Thomas Fan`_.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Feature| :func:`datasets.fetch_openml` now supports heterogeneous data using\n  pandas by setting `as_frame=True`. :pr:`13902` by `Thomas Fan`_.\n\n- |Feature| :func:`datasets.fetch_openml` now includes the `target_names` in\n  the returned Bunch. :pr:`15160` by `Thomas Fan`_.\n\n- |Enhancement| The parameter `return_X_y` was added to\n  :func:`datasets.fetch_20newsgroups` and\n  :func:`datasets.fetch_olivetti_faces`. :pr:`14259` by\n  :user:`Sourav Singh <souravsingh>`.\n\n- |Enhancement| :func:`datasets.make_classification` now accepts an array-like\n  `weights` parameter, i.e. 
list or numpy.array, instead of list only.\n  :pr:`14764` by :user:`Cat Chenal <CatChenal>`.\n\n- |Enhancement| The parameter `normalize` was added to\n  :func:`datasets.fetch_20newsgroups_vectorized`.\n  :pr:`14740` by :user:`St\u00e9phan Tulkens <stephantul>`.\n\n- |Fix| Fixed a bug in :func:`datasets.fetch_openml`, which failed to load\n  an OpenML dataset that contains an ignored feature.\n  :pr:`14623` by :user:`Sarra Habchi <HabchiSarra>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Efficiency| :class:`decomposition.NMF` with `solver=\"mu\"` fitted on sparse input\n  matrices now uses batching to avoid briefly allocating an array with size\n  (#non-zero elements, n_components). :pr:`15257` by :user:`Mart Willocx <Maocx>`.\n\n- |Enhancement| :func:`decomposition.dict_learning` and\n  :func:`decomposition.dict_learning_online` now accept `method_max_iter` and\n  pass it to :meth:`decomposition.sparse_encode`.\n  :issue:`12650` by `Adrin Jalali`_.\n\n- |Enhancement| :class:`decomposition.SparseCoder`,\n  :class:`decomposition.DictionaryLearning`, and\n  :class:`decomposition.MiniBatchDictionaryLearning` now take a\n  `transform_max_iter` parameter and pass it to either\n  :func:`decomposition.dict_learning()` or\n  :func:`decomposition.sparse_encode()`. 
:issue:`12650` by `Adrin Jalali`_.\n\n- |Enhancement| :class:`decomposition.IncrementalPCA` now accepts sparse\n  matrices as input, converting them to dense in batches thereby avoiding the\n  need to store the entire dense matrix at once.\n  :pr:`13960` by :user:`Scott Gigante <scottgigante>`.\n\n- |Fix| :func:`decomposition.sparse_encode()` now passes the `max_iter` to the\n  underlying :class:`linear_model.LassoLars` when `algorithm='lasso_lars'`.\n  :issue:`12650` by `Adrin Jalali`_.\n\n:mod:`sklearn.dummy`\n....................\n\n- |Fix| :class:`dummy.DummyClassifier` now handles checking the existence\n  of the provided constant in multioutput cases.\n  :pr:`14908` by :user:`Martina G. Vilas <martinagvilas>`.\n\n- |API| The default value of the `strategy` parameter in\n  :class:`dummy.DummyClassifier` will change from `'stratified'` in version\n  0.22 to `'prior'` in 0.24. A FutureWarning is raised when the default value\n  is used. :pr:`15382` by `Thomas Fan`_.\n\n- |API| The ``outputs_2d_`` attribute is deprecated in\n  :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`. It is\n  equivalent to ``n_outputs > 1``. :pr:`14933` by `Nicolas Hug`_.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |MajorFeature| Added :class:`ensemble.StackingClassifier` and\n  :class:`ensemble.StackingRegressor` to stack predictors using a final\n  classifier or regressor. :pr:`11047` by :user:`Guillaume Lemaitre\n  <glemaitre>` and :user:`Caio Oliveira <caioaao>` and :pr:`15138` by\n  :user:`Jon Cusick <jcusick13>`.\n\n- |MajorFeature| Many improvements were made to\n  :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor`:\n\n  - |Feature| Estimators now natively support dense data with missing\n    values both for training and predicting. They also support infinite\n    values. 
:pr:`13911` and :pr:`14406` by `Nicolas Hug`_, `Adrin Jalali`_\n    and `Olivier Grisel`_.\n  - |Feature| Estimators now have an additional `warm_start` parameter that\n    enables warm starting. :pr:`14012` by :user:`Johann Faouzi <johannfaouzi>`.\n  - |Feature| :func:`inspection.partial_dependence` and\n    `inspection.plot_partial_dependence` now support the fast 'recursion'\n    method for both estimators. :pr:`13769` by `Nicolas Hug`_.\n  - |Enhancement| for :class:`ensemble.HistGradientBoostingClassifier` the\n    training loss or score is now monitored on a class-wise stratified\n    subsample to preserve the class balance of the original training set.\n    :pr:`14194` by :user:`Johann Faouzi <johannfaouzi>`.\n  - |Enhancement| :class:`ensemble.HistGradientBoostingRegressor` now supports\n    the 'least_absolute_deviation' loss. :pr:`13896` by `Nicolas Hug`_.\n  - |Fix| Estimators now bin the training and validation data separately to\n    avoid any data leak. :pr:`13933` by `Nicolas Hug`_.\n  - |Fix| Fixed a bug where early stopping would break with string targets.\n    :pr:`14710` by `Guillaume Lemaitre`_.\n  - |Fix| :class:`ensemble.HistGradientBoostingClassifier` now raises an error\n    if ``categorical_crossentropy`` loss is given for a binary classification\n    problem. :pr:`14869` by `Adrin Jalali`_.\n\n  Note that pickles from 0.21 will not work in 0.22.\n\n- |Enhancement| Addition of ``max_samples`` argument allows limiting\n  size of bootstrap samples to be less than size of dataset. Added to\n  :class:`ensemble.RandomForestClassifier`,\n  :class:`ensemble.RandomForestRegressor`,\n  :class:`ensemble.ExtraTreesClassifier`,\n  :class:`ensemble.ExtraTreesRegressor`. :pr:`14682` by\n  :user:`Matt Hancock <notmatthancock>` and\n  :pr:`5963` by :user:`Pablo Duboue <DrDub>`.\n\n- |Fix| :func:`ensemble.VotingClassifier.predict_proba` will no longer be\n  present when `voting='hard'`. 
:pr:`14287` by `Thomas Fan`_.\n\n- |Fix| The `named_estimators_` attribute in :class:`ensemble.VotingClassifier`\n  and :class:`ensemble.VotingRegressor` now correctly maps to dropped estimators.\n  Previously, the `named_estimators_` mapping was incorrect whenever one of the\n  estimators was dropped. :pr:`15375` by `Thomas Fan`_.\n\n- |Fix| :func:`utils.estimator_checks.check_estimator` is now run by default on\n  both :class:`ensemble.VotingClassifier` and :class:`ensemble.VotingRegressor`.\n  This solves issues regarding shape consistency during `predict`, which\n  previously failed when the underlying estimators did not output consistent\n  array dimensions. Note that this should eventually be replaced by\n  refactoring the common tests.\n  :pr:`14305` by `Guillaume Lemaitre`_.\n\n- |Fix| :class:`ensemble.AdaBoostClassifier` computes probabilities based on\n  the decision function as in the literature. Thus, `predict` and\n  `predict_proba` give consistent results.\n  :pr:`14114` by `Guillaume Lemaitre`_.\n\n- |Fix| Stacking and Voting estimators now ensure that their underlying\n  estimators are either all classifiers or all regressors.\n  :class:`ensemble.StackingClassifier`, :class:`ensemble.StackingRegressor`,\n  and :class:`ensemble.VotingClassifier` and :class:`ensemble.VotingRegressor`\n  now raise consistent error messages.\n  :pr:`15084` by `Guillaume Lemaitre`_.\n\n- |Fix| In :class:`ensemble.AdaBoostRegressor`, the loss is now normalized\n  by the max of the samples with non-null weights only.\n  :pr:`14294` by `Guillaume Lemaitre`_.\n\n- |API| ``presort`` is now deprecated in\n  :class:`ensemble.GradientBoostingClassifier` and\n  :class:`ensemble.GradientBoostingRegressor`, and the parameter has no effect.\n  Users are recommended to use :class:`ensemble.HistGradientBoostingClassifier`\n  and :class:`ensemble.HistGradientBoostingRegressor` instead.\n  :pr:`14907` by `Adrin 
Jalali`_.\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Enhancement| A warning will now be raised if a parameter choice means\n  that another parameter will be unused on calling the fit() method for\n  :class:`feature_extraction.text.HashingVectorizer`,\n  :class:`feature_extraction.text.CountVectorizer` and\n  :class:`feature_extraction.text.TfidfVectorizer`.\n  :pr:`14602` by :user:`Gaurav Chawla <getgaurav2>`.\n\n- |Fix| Functions created by ``build_preprocessor`` and ``build_analyzer`` of\n  `feature_extraction.text.VectorizerMixin` can now be pickled.\n  :pr:`14430` by :user:`Dillon Niederhut <deniederhut>`.\n\n- |Fix| `feature_extraction.text.strip_accents_unicode` now correctly\n  removes accents from strings that are in NFKD normalized form. :pr:`15100` by\n  :user:`Daniel Grady <DGrady>`.\n\n- |Fix| Fixed a bug that caused :class:`feature_extraction.DictVectorizer` to raise\n  an `OverflowError` during the `transform` operation when producing a `scipy.sparse`\n  matrix on large input data. :pr:`15463` by :user:`Norvan Sahiner <norvan>`.\n\n- |API| Deprecated the unused `copy` param for\n  :meth:`feature_extraction.text.TfidfVectorizer.transform`; it will be\n  removed in v0.24. :pr:`14520` by\n  :user:`Guillem G. Subies <guillemgsubies>`.\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Enhancement| Updated the following :mod:`sklearn.feature_selection`\n  estimators to allow NaN\/Inf values in ``transform`` and ``fit``:\n  :class:`feature_selection.RFE`, :class:`feature_selection.RFECV`,\n  :class:`feature_selection.SelectFromModel`,\n  and :class:`feature_selection.VarianceThreshold`. Note that if the underlying\n  estimator of the feature selector does not allow NaN\/Inf then it will still\n  error, but the feature selectors themselves no longer enforce this\n  restriction unnecessarily. 
:issue:`11635` by :user:`Alec Peters <adpeters>`.\n\n- |Fix| Fixed a bug where :class:`feature_selection.VarianceThreshold` with\n  `threshold=0` did not remove constant features due to numerical instability,\n  by using range rather than variance in this case.\n  :pr:`13704` by :user:`Roddy MacSween <rlms>`.\n\n:mod:`sklearn.gaussian_process`\n...............................\n\n- |Feature| Gaussian process models on structured data: :class:`gaussian_process.GaussianProcessRegressor`\n  and :class:`gaussian_process.GaussianProcessClassifier` can now accept a list\n  of generic objects (e.g. strings, trees, graphs, etc.) as the ``X`` argument\n  to their training\/prediction methods.\n  A user-defined kernel should be provided for computing the kernel matrix among\n  the generic objects, and should inherit from `gaussian_process.kernels.GenericKernelMixin`\n  to notify the GPR\/GPC model that it handles non-vectorial samples.\n  :pr:`15557` by :user:`Yu-Hang Tang <yhtang>`.\n\n- |Efficiency| :func:`gaussian_process.GaussianProcessClassifier.log_marginal_likelihood`\n  and :func:`gaussian_process.GaussianProcessRegressor.log_marginal_likelihood` now\n  accept a ``clone_kernel=True`` keyword argument. When set to ``False``,\n  the kernel attribute is modified in place, which may result in a performance\n  improvement.\n  :pr:`14378` by :user:`Masashi Shibata <c-bata>`.\n\n- |API| From version 0.24 :meth:`gaussian_process.kernels.Kernel.get_params` will raise an\n  ``AttributeError`` rather than return ``None`` for parameters that are in the\n  estimator's constructor but not stored as attributes on the instance.\n  :pr:`14464` by `Joel Nothman`_.\n\n:mod:`sklearn.impute`\n.....................\n\n- |MajorFeature| Added :class:`impute.KNNImputer`, to impute missing values using\n  k-Nearest Neighbors. 
:issue:`12852` by :user:`Ashim Bhattarai <ashimb9>` and\n  `Thomas Fan`_ and :pr:`15010` by `Guillaume Lemaitre`_.\n\n- |Feature| :class:`impute.IterativeImputer` has a new `skip_compute` flag that\n  is False by default, which, when True, will skip computation on features that\n  have no missing values during the fit phase. :issue:`13773` by\n  :user:`Sergey Feldman <sergeyf>`.\n\n- |Efficiency| :meth:`impute.MissingIndicator.fit_transform` avoids repeated\n  computation of the masked matrix. :pr:`14356` by :user:`Harsh Soni <harsh020>`.\n\n- |Fix| :class:`impute.IterativeImputer` now works when there is only one feature.\n  By :user:`Sergey Feldman <sergeyf>`.\n\n- |Fix| Fixed a bug in :class:`impute.IterativeImputer` where features were\n  imputed in the reverse of the desired order with ``imputation_order`` either\n  ``\"ascending\"`` or ``\"descending\"``. :pr:`15393` by\n  :user:`Venkatachalam N <venkyyuvy>`.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |MajorFeature| :func:`inspection.permutation_importance` has been added to\n  measure the importance of each feature in an arbitrary trained model with\n  respect to a given scoring function. :issue:`13146` by `Thomas Fan`_.\n\n- |Feature| :func:`inspection.partial_dependence` and\n  `inspection.plot_partial_dependence` now support the fast 'recursion'\n  method for :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor`. :pr:`13769` by\n  `Nicolas Hug`_.\n\n- |Enhancement| `inspection.plot_partial_dependence` has been extended to\n  now support the new visualization API described in the :ref:`User Guide\n  <visualizations>`. 
:pr:`14646` by `Thomas Fan`_.\n\n- |Enhancement| :func:`inspection.partial_dependence` accepts pandas DataFrame\n  and :class:`pipeline.Pipeline` containing :class:`compose.ColumnTransformer`.\n  In addition `inspection.plot_partial_dependence` will use the column\n  names by default when a dataframe is passed.\n  :pr:`14028` and :pr:`15429` by `Guillaume Lemaitre`_.\n\n:mod:`sklearn.kernel_approximation`\n...................................\n\n- |Fix| Fixed a bug where :class:`kernel_approximation.Nystroem` raised a\n  `KeyError` when using `kernel=\"precomputed\"`.\n  :pr:`14706` by :user:`Venkatachalam N <venkyyuvy>`.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Efficiency| The 'liblinear' logistic regression solver is now faster and\n  requires less memory.\n  :pr:`14108`, :pr:`14170`, :pr:`14296` by :user:`Alex Henrie <alexhenrie>`.\n\n- |Enhancement| :class:`linear_model.BayesianRidge` now accepts hyperparameters\n  ``alpha_init`` and ``lambda_init`` which can be used to set the initial value\n  of the maximization procedure in :term:`fit`.\n  :pr:`13618` by :user:`Yoshihiro Uchida <c56pony>`.\n\n- |Fix| :class:`linear_model.Ridge` now correctly fits an intercept when `X` is\n  sparse, `solver=\"auto\"` and `fit_intercept=True`, because the default solver\n  in this configuration has changed to `sparse_cg`, which can fit an intercept\n  with sparse data. :pr:`13995` by :user:`J\u00e9r\u00f4me Dock\u00e8s <jeromedockes>`.\n\n- |Fix| :class:`linear_model.Ridge` with `solver='sag'` now accepts F-ordered\n  and non-contiguous arrays and makes a conversion instead of failing.\n  :pr:`14458` by `Guillaume Lemaitre`_.\n\n- |Fix| :class:`linear_model.LassoCV` no longer forces ``precompute=False``\n  when fitting the final model. 
:pr:`14591` by `Andreas M\u00fcller`_.\n\n- |Fix| :class:`linear_model.RidgeCV` and :class:`linear_model.RidgeClassifierCV`\n  now score correctly when `cv=None`.\n  :pr:`14864` by :user:`Venkatachalam N <venkyyuvy>`.\n\n- |Fix| Fixed a bug in :class:`linear_model.LogisticRegressionCV` where the\n  ``scores_``, ``n_iter_`` and ``coefs_paths_`` attributes would have a wrong\n  ordering with ``penalty='elastic-net'``. :pr:`15044` by `Nicolas Hug`_.\n\n- |Fix| Fixed :class:`linear_model.MultiTaskLassoCV` and\n  :class:`linear_model.MultiTaskElasticNetCV` with X of dtype int\n  and `fit_intercept=True`.\n  :pr:`15086` by :user:`Alex Gramfort <agramfort>`.\n\n- |Fix| The liblinear solver now supports ``sample_weight``.\n  :pr:`15038` by `Guillaume Lemaitre`_.\n\n:mod:`sklearn.manifold`\n.......................\n\n- |Feature| :class:`manifold.Isomap`, :class:`manifold.TSNE`, and\n  :class:`manifold.SpectralEmbedding` now accept precomputed sparse\n  neighbors graph as input. :issue:`10482` by `Tom Dupre la Tour`_ and\n  :user:`Kumar Ashutosh <thechargedneutron>`.\n\n- |Feature| Exposed the ``n_jobs`` parameter in :class:`manifold.TSNE` for\n  multi-core calculation of the neighbors graph. This parameter has no\n  impact when ``metric=\"precomputed\"`` or (``metric=\"euclidean\"`` and\n  ``method=\"exact\"``). :issue:`15082` by `Roman Yurchak`_.\n\n- |Efficiency| Improved efficiency of :class:`manifold.TSNE` when\n  ``method=\"barnes-hut\"`` by computing the gradient in parallel.\n  :pr:`13213` by :user:`Thomas Moreau <tommoral>`.\n\n- |Fix| Fixed a bug where :func:`manifold.spectral_embedding` (and therefore\n  :class:`manifold.SpectralEmbedding` and :class:`cluster.SpectralClustering`)\n  computed wrong eigenvalues with ``eigen_solver='amg'`` when\n  ``n_samples < 5 * n_components``. 
:pr:`14647` by `Andreas M\u00fcller`_.\n\n- |Fix| Fixed a bug in :func:`manifold.spectral_embedding` used in\n  :class:`manifold.SpectralEmbedding` and :class:`cluster.SpectralClustering`\n  where ``eigen_solver=\"amg\"`` would sometimes result in a LinAlgError.\n  :issue:`13393` by :user:`Andrew Knyazev <lobpcg>` and\n  :pr:`13707` by :user:`Scott White <whitews>`.\n\n- |API| Deprecated the unused ``training_data_`` attribute in\n  :class:`manifold.Isomap`. :issue:`10482` by `Tom Dupre la Tour`_.\n\n:mod:`sklearn.metrics`\n......................\n\n- |MajorFeature| `metrics.plot_roc_curve` has been added to plot ROC\n  curves. This function introduces the visualization API described in\n  the :ref:`User Guide <visualizations>`. :pr:`14357` by `Thomas Fan`_.\n\n- |Feature| Added a new parameter ``zero_division`` to multiple classification\n  metrics: :func:`metrics.precision_score`, :func:`metrics.recall_score`,\n  :func:`metrics.f1_score`, :func:`metrics.fbeta_score`,\n  :func:`metrics.precision_recall_fscore_support`,\n  :func:`metrics.classification_report`. This allows setting the value\n  returned for ill-defined metrics.\n  :pr:`14900` by :user:`Marc Torrellas Socastro <marctorrellas>`.\n\n- |Feature| Added the :func:`metrics.pairwise.nan_euclidean_distances` metric,\n  which calculates euclidean distances in the presence of missing values.\n  :issue:`12852` by :user:`Ashim Bhattarai <ashimb9>` and `Thomas Fan`_.\n\n- |Feature| New ranking metrics :func:`metrics.ndcg_score` and\n  :func:`metrics.dcg_score` have been added to compute Discounted Cumulative\n  Gain and Normalized Discounted Cumulative Gain. :pr:`9951` by :user:`J\u00e9r\u00f4me\n  Dock\u00e8s <jeromedockes>`.\n\n- |Feature| `metrics.plot_precision_recall_curve` has been added to plot\n  precision recall curves. :pr:`14936` by `Thomas Fan`_.\n\n- |Feature| `metrics.plot_confusion_matrix` has been added to plot\n  confusion matrices. 
:pr:`15083` by `Thomas Fan`_.\n\n- |Feature| Added multiclass support to :func:`metrics.roc_auc_score` with\n  corresponding scorers `'roc_auc_ovr'`, `'roc_auc_ovo'`,\n  `'roc_auc_ovr_weighted'`, and `'roc_auc_ovo_weighted'`.\n  :pr:`12789` and :pr:`15274` by\n  :user:`Kathy Chen <kathyxchen>`, :user:`Mohamed Maskani <maskani-moh>`, and\n  `Thomas Fan`_.\n\n- |Feature| Add :class:`metrics.mean_tweedie_deviance` measuring the\n  Tweedie deviance for a given ``power`` parameter. Also add mean Poisson\n  deviance :class:`metrics.mean_poisson_deviance` and mean Gamma deviance\n  :class:`metrics.mean_gamma_deviance` that are special cases of the Tweedie\n  deviance for ``power=1`` and ``power=2`` respectively.\n  :pr:`13938` by :user:`Christian Lorentzen <lorentzenchr>` and\n  `Roman Yurchak`_.\n\n- |Efficiency| Improved performance of\n  :func:`metrics.pairwise.manhattan_distances` in the case of sparse matrices.\n  :pr:`15049` by :user:`Paolo Toccaceli <ptocca>`.\n\n- |Enhancement| The parameter ``beta`` in :func:`metrics.fbeta_score` is\n  updated to accept zero and ``float('+inf')`` values.\n  :pr:`13231` by :user:`Dong-hee Na <corona10>`.\n\n- |Enhancement| Added the parameter ``squared`` in :func:`metrics.mean_squared_error`\n  to optionally return the root mean squared error.\n  :pr:`13467` by :user:`Urvang Patel <urvang96>`.\n\n- |Enhancement| Allow computing averaged metrics in the case of no true positives.\n  :pr:`14595` by `Andreas M\u00fcller`_.\n\n- |Enhancement| Multilabel metrics now support lists of lists as input.\n  :pr:`14865` by :user:`Srivatsan Ramesh <srivatsan-ramesh>`,\n  :user:`Herilalaina Rakotoarison <herilalaina>`, and\n  :user:`L\u00e9onard Binet <leonardbinet>`.\n\n- |Enhancement| :func:`metrics.median_absolute_error` now supports the\n  ``multioutput`` parameter.\n  :pr:`14732` by :user:`Agamemnon Krasoulis <agamemnonc>`.\n\n- |Enhancement| 'roc_auc_ovr_weighted' and 'roc_auc_ovo_weighted' can now be\n  used as the :term:`scoring` parameter of model-selection tools.\n  
:pr:`14417` by `Thomas Fan`_.\n\n- |Enhancement| :func:`metrics.confusion_matrix` accepts a parameter\n  `normalize` allowing to normalize the confusion matrix by columns, rows, or\n  overall.\n  :pr:`15625` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| Raise a ValueError in :func:`metrics.silhouette_score` when a\n  precomputed distance matrix contains non-zero diagonal entries.\n  :pr:`12258` by :user:`Stephen Tierney <sjtrny>`.\n\n- |API| ``scoring=\"neg_brier_score\"`` should be used instead of\n  ``scoring=\"brier_score_loss\"``, which is now deprecated.\n  :pr:`14898` by :user:`Stefan Matcovici <stefan-matcovici>`.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Efficiency| Improved performance of multimetric scoring in\n  :func:`model_selection.cross_validate`,\n  :class:`model_selection.GridSearchCV`, and\n  :class:`model_selection.RandomizedSearchCV`. :pr:`14593` by `Thomas Fan`_.\n\n- |Enhancement| :func:`model_selection.learning_curve` now accepts the parameter\n  ``return_times`` which can be used to retrieve computation times in order to\n  plot model scalability (see the learning_curve example).\n  :pr:`13938` by :user:`Hadrien Reboul <H4dr1en>`.\n\n- |Enhancement| :class:`model_selection.RandomizedSearchCV` now accepts lists\n  of parameter distributions. :pr:`14549` by `Andreas M\u00fcller`_.\n\n- |Fix| Reimplemented :class:`model_selection.StratifiedKFold` to fix an issue\n  where one test set could be `n_classes` larger than another. Test sets should\n  now be near-equally sized. :pr:`14704` by `Joel Nothman`_.\n\n- |Fix| The `cv_results_` attribute of :class:`model_selection.GridSearchCV`\n  and :class:`model_selection.RandomizedSearchCV` now only contains unfitted\n  estimators. This potentially saves a lot of memory since the state of the\n  estimators isn't stored. 
:pr:`15096` by `Andreas M\u00fcller`_.\n\n- |API| :class:`model_selection.KFold` and\n  :class:`model_selection.StratifiedKFold` now raise a warning if\n  `random_state` is set but `shuffle` is False. This will raise an error in\n  0.24.\n\n:mod:`sklearn.multioutput`\n..........................\n\n- |Fix| :class:`multioutput.MultiOutputClassifier` now has the attribute\n  ``classes_``. :pr:`14629` by :user:`Agamemnon Krasoulis <agamemnonc>`.\n\n- |Fix| :class:`multioutput.MultiOutputClassifier` now has `predict_proba`\n  as a property and can be checked with `hasattr`.\n  :issue:`15488` :pr:`15490` by :user:`Rebekah Kim <rebekahkim>`.\n\n:mod:`sklearn.naive_bayes`\n...............................\n\n- |MajorFeature| Added :class:`naive_bayes.CategoricalNB` that implements the\n  Categorical Naive Bayes classifier.\n  :pr:`12569` by :user:`Tim Bicker <timbicker>` and\n  :user:`Florian Wilhelm <FlorianWilhelm>`.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |MajorFeature| Added :class:`neighbors.KNeighborsTransformer` and\n  :class:`neighbors.RadiusNeighborsTransformer`, which transform an input dataset\n  into a sparse neighbors graph. They give finer control over nearest neighbors\n  computations and enable easy pipeline caching for multiple uses.\n  :issue:`10482` by `Tom Dupre la Tour`_.\n\n- |Feature| :class:`neighbors.KNeighborsClassifier`,\n  :class:`neighbors.KNeighborsRegressor`,\n  :class:`neighbors.RadiusNeighborsClassifier`,\n  :class:`neighbors.RadiusNeighborsRegressor`, and\n  :class:`neighbors.LocalOutlierFactor` now accept precomputed sparse\n  neighbors graph as input. 
:issue:`10482` by `Tom Dupre la Tour`_ and\n  :user:`Kumar Ashutosh <thechargedneutron>`.\n\n- |Feature| :class:`neighbors.RadiusNeighborsClassifier` now supports\n  predicting probabilities by using `predict_proba` and supports more\n  outlier_label options: 'most_frequent', or different outlier_labels\n  for multi-outputs.\n  :pr:`9597` by :user:`Wenbo Zhao <webber26232>`.\n\n- |Efficiency| Efficiency improvements for\n  :func:`neighbors.RadiusNeighborsClassifier.predict`.\n  :pr:`9597` by :user:`Wenbo Zhao <webber26232>`.\n\n- |Fix| :class:`neighbors.KNeighborsRegressor` now throws an error when\n  `metric='precomputed'` and fit on non-square data. :pr:`14336` by\n  :user:`Gregory Dexter <gdex1>`.\n\n:mod:`sklearn.neural_network`\n.............................\n\n- |Feature| Added a `max_fun` parameter in\n  `neural_network.BaseMultilayerPerceptron`,\n  :class:`neural_network.MLPRegressor`, and\n  :class:`neural_network.MLPClassifier` to give control over the\n  maximum number of function evaluations when the ``tol`` improvement\n  criterion is not met.\n  :issue:`9274` by :user:`Daniel Perry <daniel-perry>`.\n\n:mod:`sklearn.pipeline`\n.......................\n\n- |Enhancement| :class:`pipeline.Pipeline` now supports :term:`score_samples` if\n  the final estimator does.\n  :pr:`13806` by :user:`Ana\u00ebl Beaugnon <ab-anssi>`.\n\n- |Fix| The `fit` in :class:`~pipeline.FeatureUnion` now accepts `fit_params`\n  to pass to the underlying transformers. :pr:`15119` by `Adrin Jalali`_.\n\n- |API| `None` as a transformer is now deprecated in\n  :class:`pipeline.FeatureUnion`. Please use `'drop'` instead. :pr:`15053` by\n  `Thomas Fan`_.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Efficiency| :class:`preprocessing.PolynomialFeatures` is now faster when\n  the input data is dense. 
:pr:`13290` by :user:`Xavier Dupr\u00e9 <sdpython>`.\n\n- |Enhancement| Avoid unnecessary data copy when fitting preprocessors\n  :class:`preprocessing.StandardScaler`, :class:`preprocessing.MinMaxScaler`,\n  :class:`preprocessing.MaxAbsScaler`, :class:`preprocessing.RobustScaler`\n  and :class:`preprocessing.QuantileTransformer` which results in a slight\n  performance improvement. :pr:`13987` by `Roman Yurchak`_.\n\n- |Fix| :class:`preprocessing.KernelCenterer` now throws an error when fit on\n  non-square data.\n  :pr:`14336` by :user:`Gregory Dexter <gdex1>`.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Fix| :class:`model_selection.GridSearchCV` and\n  `model_selection.RandomizedSearchCV` now support the\n  `_pairwise` property, which prevents an error during cross-validation\n  for estimators with pairwise inputs (such as\n  :class:`neighbors.KNeighborsClassifier` when :term:`metric` is set to\n  'precomputed').\n  :pr:`13925` by :user:`Isaac S. Robson <isrobson>` and :pr:`15524` by\n  :user:`Xun Tang <xun-tang>`.\n\n:mod:`sklearn.svm`\n..................\n\n- |Enhancement| :class:`svm.SVC` and :class:`svm.NuSVC` now accept a\n  ``break_ties`` parameter. This parameter results in :term:`predict` breaking\n  the ties according to the confidence values of :term:`decision_function`, if\n  ``decision_function_shape='ovr'``, and the number of target classes > 2.\n  :pr:`12557` by `Adrin Jalali`_.\n\n- |Enhancement| SVM estimators now throw a more specific error when\n  `kernel='precomputed'` and fit on non-square data.\n  :pr:`14336` by :user:`Gregory Dexter <gdex1>`.\n\n- |Fix| :class:`svm.SVC`, :class:`svm.SVR`, :class:`svm.NuSVR` and\n  :class:`svm.OneClassSVM` generated an invalid model when they received\n  negative or zero values for the ``sample_weight`` parameter in ``fit()``. 
This behavior occurred only in some border scenarios.\n  Now in these cases, ``fit()`` will fail with an exception.\n  :pr:`14286` by :user:`Alex Shacked <alexshacked>`.\n\n- |Fix| The `n_support_` attribute of :class:`svm.SVR` and\n  :class:`svm.OneClassSVM` was previously uninitialized and had size 2. It\n  now has size 1 with the correct value. :pr:`15099` by `Nicolas Hug`_.\n\n- |Fix| Fixed a bug in `BaseLibSVM._sparse_fit` where n_SV=0 raised a\n  ZeroDivisionError. :pr:`14894` by :user:`Danna Naser <danna-naser>`.\n\n- |Fix| The liblinear solver now supports ``sample_weight``.\n  :pr:`15038` by `Guillaume Lemaitre`_.\n\n\n:mod:`sklearn.tree`\n...................\n\n- |Feature| Adds minimal cost complexity pruning, controlled by ``ccp_alpha``,\n  to :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`,\n  :class:`tree.ExtraTreeClassifier`, :class:`tree.ExtraTreeRegressor`,\n  :class:`ensemble.RandomForestClassifier`,\n  :class:`ensemble.RandomForestRegressor`,\n  :class:`ensemble.ExtraTreesClassifier`,\n  :class:`ensemble.ExtraTreesRegressor`,\n  :class:`ensemble.GradientBoostingClassifier`,\n  and :class:`ensemble.GradientBoostingRegressor`.\n  :pr:`12887` by `Thomas Fan`_.\n\n- |API| ``presort`` is now deprecated in\n  :class:`tree.DecisionTreeClassifier` and\n  :class:`tree.DecisionTreeRegressor`, and the parameter has no effect.\n  :pr:`14907` by `Adrin Jalali`_.\n\n- |API| The ``classes_`` and ``n_classes_`` attributes of\n  :class:`tree.DecisionTreeRegressor` are now deprecated. :pr:`15028` by\n  :user:`Mei Guan <meiguan>`, `Nicolas Hug`_, and `Adrin Jalali`_.\n\n:mod:`sklearn.utils`\n....................\n\n- |Feature| :func:`~utils.estimator_checks.check_estimator` can now generate\n  checks by setting `generate_only=True`. Previously, running\n  :func:`~utils.estimator_checks.check_estimator` would stop when the first\n  check failed. With `generate_only=True`, all checks can run independently and\n  report the ones that are failing. 
Read more in\n  :ref:`rolling_your_own_estimator`. :pr:`14381` by `Thomas Fan`_.\n\n- |Feature| Added a pytest-specific decorator,\n  :func:`~utils.estimator_checks.parametrize_with_checks`, to parametrize\n  estimator checks for a list of estimators. :pr:`14381` by `Thomas Fan`_.\n\n- |Feature| A new random variable, `utils.fixes.loguniform`, implements a\n  log-uniform random variable (e.g., for use in RandomizedSearchCV).\n  For example, the outcomes ``1``, ``10`` and ``100`` are all equally likely\n  for ``loguniform(1, 100)``. See :issue:`11232` by\n  :user:`Scott Sievert <stsievert>` and :user:`Nathaniel Saul <sauln>`,\n  and `SciPy PR 10815 <https:\/\/github.com\/scipy\/scipy\/pull\/10815>`_.\n\n- |Enhancement| `utils.safe_indexing` (now deprecated) accepts an\n  ``axis`` parameter to index an array-like across rows and columns. The column\n  indexing can be done on NumPy arrays, SciPy sparse matrices, and pandas\n  DataFrames. An additional refactoring was done. :pr:`14035` and :pr:`14475`\n  by `Guillaume Lemaitre`_.\n\n- |Enhancement| :func:`utils.extmath.safe_sparse_dot` now works between a 3D+\n  ndarray and a sparse matrix.\n  :pr:`14538` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| :func:`utils.check_array` now raises an error instead of casting\n  NaN to integer.\n  :pr:`14872` by `Roman Yurchak`_.\n\n- |Fix| :func:`utils.check_array` will now correctly detect numeric dtypes in\n  pandas dataframes, fixing a bug where ``float32`` was upcast to ``float64``\n  unnecessarily. 
:pr:`15094` by `Andreas M\u00fcller`_.\n\n- |API| The following utils have been deprecated and are now private:\n\n  - ``choose_check_classifiers_labels``\n  - ``enforce_estimator_tags_y``\n  - ``mocking.MockDataFrame``\n  - ``mocking.CheckingClassifier``\n  - ``optimize.newton_cg``\n  - ``random.random_choice_csc``\n  - ``utils.choose_check_classifiers_labels``\n  - ``utils.enforce_estimator_tags_y``\n  - ``utils.optimize.newton_cg``\n  - ``utils.random.random_choice_csc``\n  - ``utils.safe_indexing``\n  - ``utils.mocking``\n  - ``utils.fast_dict``\n  - ``utils.seq_dataset``\n  - ``utils.weight_vector``\n  - ``utils.fixes.parallel_helper`` (removed)\n  - All of ``utils.testing`` except for ``all_estimators`` which is now in\n    ``utils``.\n\n:mod:`sklearn.isotonic`\n..................................\n\n- |Fix| Fixed a bug where :class:`isotonic.IsotonicRegression.fit` raised error\n  when `X.dtype == 'float32'` and `X.dtype != y.dtype`.\n  :pr:`14902` by :user:`Lucas <lostcoaster>`.\n\nMiscellaneous\n.............\n\n- |Fix| Port `lobpcg` from SciPy which implement some bug fixes but only\n  available in 1.3+.\n  :pr:`13609` and :pr:`14971` by `Guillaume Lemaitre`_.\n\n- |API| Scikit-learn now converts any input data structure implementing a\n  duck array to a numpy array (using ``__array__``) to ensure consistent\n  behavior instead of relying on ``__array_function__`` (see `NEP 18\n  <https:\/\/numpy.org\/neps\/nep-0018-array-function-protocol.html>`_).\n  :pr:`14702` by `Andreas M\u00fcller`_.\n\n- |API| Replace manual checks with ``check_is_fitted``. 
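  A minimal sketch of the pattern the public
  ``sklearn.utils.validation.check_is_fitted`` helper standardizes (the
  estimator here is chosen only for illustration):

  ```python
  from sklearn.exceptions import NotFittedError
  from sklearn.linear_model import LinearRegression
  from sklearn.utils.validation import check_is_fitted

  est = LinearRegression()
  try:
      # With no explicit attributes argument, check_is_fitted looks for any
      # trailing-underscore attribute that fit() would have set.
      check_is_fitted(est)
      fitted_before = True
  except NotFittedError:
      fitted_before = False

  est.fit([[0.0], [1.0]], [0.0, 1.0])
  check_is_fitted(est)  # passes silently once attributes like coef_ exist
  ```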
  Errors thrown when using a non-fitted estimator are now more uniform.
  :pr:`13013` by :user:`Agamemnon Krasoulis <agamemnonc>`.

Changes to estimator checks
---------------------------

These changes mostly affect library developers.

- Estimators are now expected to raise a ``NotFittedError`` if ``predict`` or
  ``transform`` is called before ``fit``; previously an ``AttributeError`` or
  ``ValueError`` was acceptable.
  :pr:`13013` by :user:`Agamemnon Krasoulis <agamemnonc>`.

- Binary-only classifiers are now supported in estimator checks.
  Such classifiers need to have the `binary_only=True` estimator tag.
  :pr:`13875` by `Trevor Stephens`_.

- Estimators are expected to convert input data (``X``, ``y``,
  ``sample_weights``) to :class:`numpy.ndarray` and never call
  ``__array_function__`` on the original datatype that is passed (see `NEP 18
  <https://numpy.org/neps/nep-0018-array-function-protocol.html>`_).
  :pr:`14702` by `Andreas Müller`_.

- The `requires_positive_X` estimator tag (for models that require
  X to be non-negative) is now used by
  :meth:`utils.estimator_checks.check_estimator` to make sure a proper error
  message is raised if X contains some negative entries.
  :pr:`14680` by :user:`Alex Gramfort <agramfort>`.

- Added a check that pairwise estimators raise an error on non-square data.
  :pr:`14336` by :user:`Gregory Dexter <gdex1>`.

- Added two common multioutput estimator tests:
  `utils.estimator_checks.check_classifier_multioutput` and
  `utils.estimator_checks.check_regressor_multioutput`.
  :pr:`13392` by :user:`Rok Mihevc <rok>`.

- |Fix| Added ``check_transformer_data_not_an_array`` to checks where it was
  missing.

- |Fix| The estimator tags resolution now follows the regular MRO. They used
  to be overridable only once. :pr:`14884` by `Andreas Müller`_.


.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.21, including:

Aaron Alphonsus, Abbie Popa, Abdur-Rahmaan Janhangeer, abenbihi, Abhinav Sagar,
Abhishek Jana, Abraham K. Lagat, Adam J. Stewart, Aditya Vyas, Adrin Jalali,
Agamemnon Krasoulis, Alec Peters, Alessandro Surace, Alexandre de Siqueira,
Alexandre Gramfort, alexgoryainov, Alex Henrie, Alex Itkes, alexshacked, Allen
Akinkunle, Anaël Beaugnon, Anders Kaseorg, Andrea Maldonado, Andrea Navarrete,
Andreas Mueller, Andreas Schuderer, Andrew Nystrom, Angela Ambroz, Anisha
Keshavan, Ankit Jha, Antonio Gutierrez, Anuja Kelkar, Archana Alva,
arnaudstiegler, arpanchowdhry, ashimb9, Ayomide Bamidele, Baran Buluttekin,
barrycg, Bharat Raghunathan, Bill Mill, Biswadip Mandal, blackd0t, Brian G.
Barkley, Brian Wignall, Bryan Yang, c56pony, camilaagw, cartman_nabana,
catajara, Cat Chenal, Cathy, cgsavard, Charles Vesteghem, Chiara Marmo, Chris
Gregory, Christian Lorentzen, Christos Aridas, Dakota Grusak, Daniel Grady,
Daniel Perry, Danna Naser, DatenBergwerk, David Dormagen, deeplook, Dillon
Niederhut, Dong-hee Na, Dougal J. Sutherland, DrGFreeman, Dylan Cashman,
edvardlindelof, Eric Larson, Eric Ndirangu, Eunseop Jeong, Fanny,
federicopisanu, Felix Divo, flaviomorelli, FranciDona, Franco M. Luque, Frank
Hoang, Frederic Haase, g0g0gadget, Gabriel Altay, Gabriel do Vale Rios, Gael
Varoquaux, ganevgv, gdex1, getgaurav2, Gideon Sonoiya, Gordon Chen, gpapadok,
Greg Mogavero, Grzegorz Szpak, Guillaume Lemaitre, Guillem García Subies,
H4dr1en, hadshirt, Hailey Nguyen, Hanmin Qin, Hannah Bruce Macdonald, Harsh
Mahajan, Harsh Soni, Honglu Zhang, Hossein Pourbozorg, Ian Sanders, Ingrid
Spielman, J-A16, jaehong park, Jaime Ferrando Huertas, James Hill, James Myatt,
Jay, jeremiedbb, Jérémie du Boisberranger, jeromedockes, Jesper Dramsch, Joan
Massich, Joanna Zhang, Joel Nothman, Johann Faouzi, Jonathan Rahn, Jon Cusick,
Jose Ortiz, Kanika Sabharwal, Katarina Slama, kellycarmody, Kennedy Kang'ethe,
Kensuke Arai, Kesshi Jordan, Kevad, Kevin Loftis, Kevin Winata, Kevin Yu-Sheng
Li, Kirill Dolmatov, Kirthi Shankar Sivamani, krishna katyal, Lakshmi Krishnan,
Lakshya KD, LalliAcqua, lbfin, Leland McInnes, Léonard Binet, Loic Esteve,
loopyme, lostcoaster, Louis Huynh, lrjball, Luca Ionescu, Lutz Roeder,
MaggieChege, Maithreyi Venkatesh, Maltimore, Maocx, Marc Torrellas, Marie
Douriez, Markus, Markus Frey, Martina G. Vilas, Martin Oywa, Martin Thoma,
Masashi SHIBATA, Maxwell Aladago, mbillingr, m-clare, Meghann Agarwal, m.fab,
Micah Smith, miguelbarao, Miguel Cabrera, Mina Naghshhnejad, Ming Li, motmoti,
mschaffenroth, mthorrell, Natasha Borders, nezar-a, Nicolas Hug, Nidhin
Pattaniyil, Nikita Titov, Nishan Singh Mann, Nitya Mandyam, norvan,
notmatthancock, novaya, nxorable, Oleg Stikhin, Oleksandr Pavlyk, Olivier
Grisel, Omar Saleem, Owen Flanagan, panpiort8, Paolo, Paolo Toccaceli, Paresh
Mathur, Paula, Peng Yu, Peter Marko, pierretallotte, poorna-kumar, pspachtholz,
qdeffense, Rajat Garg, Raphaël Bournhonesque, Ray, Ray Bell, Rebekah Kim, Reza
Gharibi, Richard Payne, Richard W, rlms, Robert Juergens, Rok Mihevc, Roman
Feldbauer, Roman Yurchak, R Sanjabi, RuchitaGarde, Ruth Waithera, Sackey, Sam
Dixon, Samesh Lakhotia, Samuel Taylor, Sarra Habchi, Scott Gigante, Scott
Sievert, Scott White, Sebastian Pölsterl, Sergey Feldman, SeWook Oh, she-dares,
Shreya V, Shubham Mehta, Shuzhe Xiao, SimonCW, smarie, smujjiga, Sönke
Behrends, Soumirai, Sourav Singh, stefan-matcovici, steinfurt, Stéphane
Couvreur, Stephan Tulkens, Stephen Cowley, Stephen Tierney, SylvainLan,
th0rwas, theoptips, theotheo, Thierno Ibrahima DIOP, Thomas Edwards, Thomas J
Fan, Thomas Moreau, Thomas Schmitt, Tilen Kusterle, Tim Bicker, Timsaur, Tim
Staley, Tirth Patel, Tola A, Tom Augspurger, Tom Dupré la Tour, topisan, Trevor
Stephens, ttang131, Urvang Patel, Vathsala Achar, veerlosar, Venkatachalam N,
Victor Luzgin, Vincent Jeanselme, Vincent Lostanlen, Vladimir Korolev,
vnherdeiro, Wenbo Zhao, Wendy Hu, willdarnell, William de Vazelhes,
wolframalpha, xavier dupré, xcjason, x-martian, xsat, xun-tang, Yinglr,
yokasre, Yu-Hang "Maxin" Tang, Yulia Zamriy, Zhao Feng
       Fix  Fixed a bug where  class  kernel approximation Nystroem  raised a    KeyError  when using  kernel  precomputed       pr  14706  by  user  Venkatachalam N  venkyyuvy      mod  sklearn linear model                                  Efficiency  The  liblinear  logistic regression solver is now faster and   requires less memory     pr  14108    pr  14170    pr  14296  by  user  Alex Henrie  alexhenrie        Enhancement   class  linear model BayesianRidge  now accepts hyperparameters     alpha init   and   lambda init   which can be used to set the initial value   of the maximization procedure in  term  fit      pr  13618  by  user  Yoshihiro Uchida  c56pony        Fix   class  linear model Ridge  now correctly fits an intercept when  X  is   sparse   solver  auto   and  fit intercept True   because the default solver   in this configuration has changed to  sparse cg   which can fit an intercept   with sparse data   pr  13995  by  user  J r me Dock s  jeromedockes        Fix   class  linear model Ridge  with  solver  sag   now accepts F ordered   and non contiguous arrays and makes a conversion instead of failing     pr  14458  by  Guillaume Lemaitre        Fix   class  linear model LassoCV  no longer forces   precompute False     when fitting the final model   pr  14591  by  Andreas M ller        Fix   class  linear model RidgeCV  and  class  linear model RidgeClassifierCV    now correctly scores when  cv None      pr  14864  by  user  Venkatachalam N  venkyyuvy        Fix  Fixed a bug in  class  linear model LogisticRegressionCV  where the     scores       n iter    and   coefs paths    attribute would have a wrong   ordering with   penalty  elastic net      pr  15044  by  Nicolas Hug       Fix   class  linear model MultiTaskLassoCV  and    class  linear model MultiTaskElasticNetCV  with X of dtype int   and  fit intercept True      pr  15086  by  user  Alex Gramfort  agramfort        Fix  The liblinear solver now supports   sample weight       pr  15038  
by  Guillaume Lemaitre      mod  sklearn manifold                              Feature   class  manifold Isomap    class  manifold TSNE   and    class  manifold SpectralEmbedding  now accept precomputed sparse   neighbors graph as input   issue  10482  by  Tom Dupre la Tour   and    user  Kumar Ashutosh  thechargedneutron        Feature  Exposed the   n jobs   parameter in  class  manifold TSNE  for   multi core calculation of the neighbors graph  This parameter has no   impact when   metric  precomputed    or    metric  euclidean    and     method  exact       issue  15082  by  Roman Yurchak        Efficiency  Improved efficiency of  class  manifold TSNE  when     method  barnes hut    by computing the gradient in parallel     pr  13213  by  user  Thomas Moreau  tommoral       Fix  Fixed a bug where  func  manifold spectral embedding   and therefore    class  manifold SpectralEmbedding  and  class  cluster SpectralClustering     computed wrong eigenvalues with   eigen solver  amg    when     n samples   5   n components     pr  14647  by  Andreas M ller        Fix  Fixed a bug in  func  manifold spectral embedding   used in    class  manifold SpectralEmbedding  and  class  cluster SpectralClustering    where   eigen solver  amg    would sometimes result in a LinAlgError     issue  13393  by  user  Andrew Knyazev  lobpcg      pr  13707  by  user  Scott White  whitews       API  Deprecate   training data    unused attribute in    class  manifold Isomap    issue  10482  by  Tom Dupre la Tour      mod  sklearn metrics                             MajorFeature   metrics plot roc curve  has been added to plot roc   curves  This function introduces the visualization API described in   the  ref  User Guide  visualizations     pr  14357  by  Thomas Fan        Feature  Added a new parameter   zero division   to multiple classification   metrics   func  metrics precision score    func  metrics recall score      func  metrics f1 score    func  metrics fbeta score      func  
metrics precision recall fscore support      func  metrics classification report   This allows to set returned value for   ill defined metrics     pr  14900  by  user  Marc Torrellas Socastro  marctorrellas        Feature  Added the  func  metrics pairwise nan euclidean distances  metric    which calculates euclidean distances in the presence of missing values     issue  12852  by  user  Ashim Bhattarai  ashimb9   and  Thomas Fan        Feature  New ranking metrics  func  metrics ndcg score  and    func  metrics dcg score  have been added to compute Discounted Cumulative   Gain and Normalized Discounted Cumulative Gain   pr  9951  by  user  J r me   Dock s  jeromedockes        Feature   metrics plot precision recall curve  has been added to plot   precision recall curves   pr  14936  by  Thomas Fan        Feature   metrics plot confusion matrix  has been added to plot   confusion matrices   pr  15083  by  Thomas Fan        Feature  Added multiclass support to  func  metrics roc auc score  with   corresponding scorers   roc auc ovr      roc auc ovo        roc auc ovr weighted    and   roc auc ovo weighted       pr  12789  and  pr  15274  by    user  Kathy Chen  kathyxchen     user  Mohamed Maskani  maskani moh    and    Thomas Fan        Feature  Add  class  metrics mean tweedie deviance  measuring the   Tweedie deviance for a given   power   parameter  Also add mean Poisson   deviance  class  metrics mean poisson deviance  and mean Gamma deviance    class  metrics mean gamma deviance  that are special cases of the Tweedie   deviance for   power 1   and   power 2   respectively     pr  13938  by  user  Christian Lorentzen  lorentzenchr   and    Roman Yurchak        Efficiency  Improved performance of    func  metrics pairwise manhattan distances  in the case of sparse matrices     pr  15049  by  Paolo Toccaceli  ptocca        Enhancement  The parameter   beta   in  func  metrics fbeta score  is   updated to accept the zero and  float   inf    value     pr  13231  by 
 user  Dong hee Na  corona10        Enhancement  Added parameter   squared   in  func  metrics mean squared error    to return root mean squared error     pr  13467  by  user  Urvang Patel  urvang96        Enhancement  Allow computing averaged metrics in the case of no true positives     pr  14595  by  Andreas M ller        Enhancement  Multilabel metrics now supports list of lists as input     pr  14865   user  Srivatsan Ramesh  srivatsan ramesh       user  Herilalaina Rakotoarison  herilalaina       user  L onard Binet  leonardbinet        Enhancement   func  metrics median absolute error  now supports     multioutput   parameter     pr  14732  by  user  Agamemnon Krasoulis  agamemnonc        Enhancement   roc auc ovr weighted  and  roc auc ovo weighted  can now be   used as the  term  scoring  parameter of model selection tools     pr  14417  by  Thomas Fan        Enhancement   func  metrics confusion matrix  accepts a parameters    normalize  allowing to normalize the confusion matrix by column  rows  or   overall     pr  15625  by  Guillaume Lemaitre  glemaitre        Fix  Raise a ValueError in  func  metrics silhouette score  when a   precomputed distance matrix contains non zero diagonal entries     pr  12258  by  user  Stephen Tierney  sjtrny        API    scoring  neg brier score    should be used instead of     scoring  brier score loss    which is now deprecated     pr  14898  by  user  Stefan Matcovici  stefan matcovici      mod  sklearn model selection                                     Efficiency  Improved performance of multimetric scoring in    func  model selection cross validate      class  model selection GridSearchCV   and    class  model selection RandomizedSearchCV    pr  14593  by  Thomas Fan        Enhancement   class  model selection learning curve  now accepts parameter     return times   which can be used to retrieve computation times in order to   plot model scalability  see learning curve example      pr  13938  by  user  Hadrien 
Reboul  H4dr1en        Enhancement   class  model selection RandomizedSearchCV  now accepts lists   of parameter distributions   pr  14549  by  Andreas M ller        Fix  Reimplemented  class  model selection StratifiedKFold  to fix an issue   where one test set could be  n classes  larger than another  Test sets should   now be near equally sized   pr  14704  by  Joel Nothman        Fix  The  cv results   attribute of  class  model selection GridSearchCV    and  class  model selection RandomizedSearchCV  now only contains unfitted   estimators  This potentially saves a lot of memory since the state of the   estimators isn t stored   pr   15096  by  Andreas M ller        API   class  model selection KFold  and    class  model selection StratifiedKFold  now raise a warning if    random state  is set but  shuffle  is False  This will raise an error in   0 24    mod  sklearn multioutput                                 Fix   class  multioutput MultiOutputClassifier  now has attribute     classes      pr  14629  by  user  Agamemnon Krasoulis  agamemnonc        Fix   class  multioutput MultiOutputClassifier  now has  predict proba    as property and can be checked with  hasattr      issue  15488   pr  15490  by  user  Rebekah Kim  rebekahkim     mod  sklearn naive bayes                                      MajorFeature  Added  class  naive bayes CategoricalNB  that implements the   Categorical Naive Bayes classifier     pr  12569  by  user  Tim Bicker  timbicker   and    user  Florian Wilhelm  FlorianWilhelm      mod  sklearn neighbors                               MajorFeature  Added  class  neighbors KNeighborsTransformer  and    class  neighbors RadiusNeighborsTransformer   which transform input dataset   into a sparse neighbors graph  They give finer control on nearest neighbors   computations and enable easy pipeline caching for multiple use     issue  10482  by  Tom Dupre la Tour        Feature   class  neighbors KNeighborsClassifier      class  neighbors 
KNeighborsRegressor      class  neighbors RadiusNeighborsClassifier      class  neighbors RadiusNeighborsRegressor   and    class  neighbors LocalOutlierFactor  now accept precomputed sparse   neighbors graph as input   issue  10482  by  Tom Dupre la Tour   and    user  Kumar Ashutosh  thechargedneutron        Feature   class  neighbors RadiusNeighborsClassifier  now supports   predicting probabilities by using  predict proba  and supports more   outlier label options   most frequent   or different outlier labels   for multi outputs     pr  9597  by  user  Wenbo Zhao  webber26232        Efficiency  Efficiency improvements for    func  neighbors RadiusNeighborsClassifier predict      pr  9597  by  user  Wenbo Zhao  webber26232        Fix   class  neighbors KNeighborsRegressor  now throws error when    metric  precomputed   and fit on non square data    pr  14336  by    user  Gregory Dexter  gdex1      mod  sklearn neural network                                    Feature  Add  max fun  parameter in    neural network BaseMultilayerPerceptron      class  neural network MLPRegressor   and    class  neural network MLPClassifier  to give control over   maximum number of function evaluation to not meet   tol   improvement     issue  9274  by  user  Daniel Perry  daniel perry      mod  sklearn pipeline                              Enhancement   class  pipeline Pipeline  now supports  term  score samples  if   the final estimator does     pr  13806  by  user  Ana l Beaugnon  ab anssi        Fix  The  fit  in  class   pipeline FeatureUnion  now accepts  fit params    to pass to the underlying transformers   pr  15119  by  Adrin Jalali        API   None  as a transformer is now deprecated in    class  pipeline FeatureUnion   Please use   drop   instead   pr  15053  by    Thomas Fan      mod  sklearn preprocessing                                   Efficiency   class  preprocessing PolynomialFeatures  is now faster when   the input data is dense   pr  13290  by  user  Xavier 
Dupr   sdpython        Enhancement  Avoid unnecessary data copy when fitting preprocessors    class  preprocessing StandardScaler    class  preprocessing MinMaxScaler      class  preprocessing MaxAbsScaler    class  preprocessing RobustScaler    and  class  preprocessing QuantileTransformer  which results in a slight   performance improvement   pr  13987  by  Roman Yurchak        Fix  KernelCenterer now throws error when fit on non square    class  preprocessing KernelCenterer     pr  14336  by  user  Gregory Dexter  gdex1      mod  sklearn model selection                                     Fix   class  model selection GridSearchCV  and    model selection RandomizedSearchCV  now supports the     pairwise  property  which prevents an error during cross validation   for estimators with pairwise inputs  such as    class  neighbors KNeighborsClassifier  when  term  metric  is set to    precomputed       pr  13925  by  user  Isaac S  Robson  isrobson   and  pr  15524  by    user  Xun Tang  xun tang      mod  sklearn svm                         Enhancement   class  svm SVC  and  class  svm NuSVC  now accept a     break ties   parameter  This parameter results in  term  predict  breaking   the ties according to the confidence values of  term  decision function   if     decision function shape  ovr     and the number of target classes   2     pr  12557  by  Adrin Jalali        Enhancement  SVM estimators now throw a more specific error when    kernel  precomputed   and fit on non square data     pr  14336  by  user  Gregory Dexter  gdex1        Fix   class  svm SVC    class  svm SVR    class  svm NuSVR  and    class  svm OneClassSVM  when received values negative or zero   for parameter   sample weight   in method fit    generated an   invalid model  This behavior occurred only in some border scenarios    Now in these cases  fit   will fail with an Exception     pr  14286  by  user  Alex Shacked  alexshacked        Fix  The  n support   attribute of  class  svm SVR  and   
 class  svm OneClassSVM  was previously non initialized  and had size 2  It   has now size 1 with the correct value   pr  15099  by  Nicolas Hug        Fix  fixed a bug in  BaseLibSVM  sparse fit  where n SV 0 raised a   ZeroDivisionError   pr  14894  by  user  Danna Naser  danna naser        Fix  The liblinear solver now supports   sample weight       pr  15038  by  Guillaume Lemaitre       mod  sklearn tree                          Feature  Adds minimal cost complexity pruning  controlled by   ccp alpha      to  class  tree DecisionTreeClassifier    class  tree DecisionTreeRegressor      class  tree ExtraTreeClassifier    class  tree ExtraTreeRegressor      class  ensemble RandomForestClassifier      class  ensemble RandomForestRegressor      class  ensemble ExtraTreesClassifier      class  ensemble ExtraTreesRegressor      class  ensemble GradientBoostingClassifier     and  class  ensemble GradientBoostingRegressor      pr  12887  by  Thomas Fan        API    presort   is now deprecated in    class  tree DecisionTreeClassifier  and    class  tree DecisionTreeRegressor   and the parameter has no effect     pr  14907  by  Adrin Jalali        API  The   classes    and   n classes    attributes of    class  tree DecisionTreeRegressor  are now deprecated   pr  15028  by    user  Mei Guan  meiguan     Nicolas Hug    and  Adrin Jalali      mod  sklearn utils                           Feature   func   utils estimator checks check estimator  can now generate   checks by setting  generate only True   Previously  running    func   utils estimator checks check estimator  will stop when the first   check fails  With  generate only True   all checks can run independently and   report the ones that are failing  Read more in    ref  rolling your own estimator    pr  14381  by  Thomas Fan        Feature  Added a pytest specific decorator     func   utils estimator checks parametrize with checks   to parametrize   estimator checks for a list of estimators   pr  14381  by  Thomas 
Fan        Feature  A new random variable   utils fixes loguniform  implements a   log uniform random variable  e g   for use in RandomizedSearchCV     For example  the outcomes   1      10   and   100   are all equally likely   for   loguniform 1  100     See  issue  11232  by    user  Scott Sievert  stsievert   and  user  Nathaniel Saul  sauln      and  SciPy PR 10815  https   github com scipy scipy pull 10815        Enhancement   utils safe indexing   now deprecated  accepts an     axis   parameter to index array like across rows and columns  The column   indexing can be done on NumPy array  SciPy sparse matrix  and Pandas   DataFrame  An additional refactoring was done   pr  14035  and  pr  14475    by  Guillaume Lemaitre        Enhancement   func  utils extmath safe sparse dot  works between 3D  ndarray   and sparse matrix     pr  14538  by  user  J r mie du Boisberranger  jeremiedbb        Fix   func  utils check array  is now raising an error instead of casting   NaN to integer     pr  14872  by  Roman Yurchak        Fix   func  utils check array  will now correctly detect numeric dtypes in   pandas dataframes  fixing a bug where   float32   was upcast to   float64     unnecessarily   pr  15094  by  Andreas M ller        API  The following utils have been deprecated and are now private         choose check classifiers labels         enforce estimator tags y         mocking MockDataFrame         mocking CheckingClassifier         optimize newton cg         random random choice csc         utils choose check classifiers labels         utils enforce estimator tags y         utils optimize newton cg         utils random random choice csc         utils safe indexing         utils mocking         utils fast dict         utils seq dataset         utils weight vector         utils fixes parallel helper    removed      All of   utils testing   except for   all estimators   which is now in       utils      mod  sklearn isotonic                                         
Fix  Fixed a bug where  class  isotonic IsotonicRegression fit  raised error   when  X dtype     float32   and  X dtype    y dtype      pr  14902  by  user  Lucas  lostcoaster     Miscellaneous                   Fix  Port  lobpcg  from SciPy which implement some bug fixes but only   available in 1 3      pr  13609  and  pr  14971  by  Guillaume Lemaitre        API  Scikit learn now converts any input data structure implementing a   duck array to a numpy array  using     array      to ensure consistent   behavior instead of relying on     array function      see  NEP 18    https   numpy org neps nep 0018 array function protocol html         pr  14702  by  Andreas M ller        API  Replace manual checks with   check is fitted    Errors thrown when   using a non fitted estimators are now more uniform     pr  13013  by  user  Agamemnon Krasoulis  agamemnonc     Changes to estimator checks                              These changes mostly affect library developers     Estimators are now expected to raise a   NotFittedError   if   predict   or     transform   is called before   fit    previously an   AttributeError   or     ValueError   was acceptable     pr  13013  by by  user  Agamemnon Krasoulis  agamemnonc       Binary only classifiers are now supported in estimator checks    Such classifiers need to have the  binary only True  estimator tag     pr  13875  by  Trevor Stephens       Estimators are expected to convert input data    X      y        sample weights    to  class  numpy ndarray  and never call       array function     on the original datatype that is passed  see  NEP 18    https   numpy org neps nep 0018 array function protocol html         pr  14702  by  Andreas M ller        requires positive X  estimator tag  for models that require   X to be non negative  is now used by  meth  utils estimator checks check estimator    to make sure a proper error message is raised if X contains some negative entries     pr  14680  by  user  Alex Gramfort  agramfort      
 Added check that pairwise estimators raise error on non square data    pr  14336  by  user  Gregory Dexter  gdex1       Added two common multioutput estimator tests    utils estimator checks check classifier multioutput  and    utils estimator checks check regressor multioutput      pr  13392  by  user  Rok Mihevc  rok        Fix  Added   check transformer data not an array   to checks where missing     Fix  The estimators tags resolution now follows the regular MRO  They used   to be overridable only once   pr  14884  by  Andreas M ller         rubric   Code and documentation contributors  Thanks to everyone who has contributed to the maintenance and improvement of the project since version 0 21  including   Aaron Alphonsus  Abbie Popa  Abdur Rahmaan Janhangeer  abenbihi  Abhinav Sagar  Abhishek Jana  Abraham K  Lagat  Adam J  Stewart  Aditya Vyas  Adrin Jalali  Agamemnon Krasoulis  Alec Peters  Alessandro Surace  Alexandre de Siqueira  Alexandre Gramfort  alexgoryainov  Alex Henrie  Alex Itkes  alexshacked  Allen Akinkunle  Ana l Beaugnon  Anders Kaseorg  Andrea Maldonado  Andrea Navarrete  Andreas Mueller  Andreas Schuderer  Andrew Nystrom  Angela Ambroz  Anisha Keshavan  Ankit Jha  Antonio Gutierrez  Anuja Kelkar  Archana Alva  arnaudstiegler  arpanchowdhry  ashimb9  Ayomide Bamidele  Baran Buluttekin  barrycg  Bharat Raghunathan  Bill Mill  Biswadip Mandal  blackd0t  Brian G  Barkley  Brian Wignall  Bryan Yang  c56pony  camilaagw  cartman nabana  catajara  Cat Chenal  Cathy  cgsavard  Charles Vesteghem  Chiara Marmo  Chris Gregory  Christian Lorentzen  Christos Aridas  Dakota Grusak  Daniel Grady  Daniel Perry  Danna Naser  DatenBergwerk  David Dormagen  deeplook  Dillon Niederhut  Dong hee Na  Dougal J  Sutherland  DrGFreeman  Dylan Cashman  edvardlindelof  Eric Larson  Eric Ndirangu  Eunseop Jeong  Fanny  federicopisanu  Felix Divo  flaviomorelli  FranciDona  Franco M  Luque  Frank Hoang  Frederic Haase  g0g0gadget  Gabriel Altay  Gabriel do Vale Rios  Gael 
Varoquaux  ganevgv  gdex1  getgaurav2  Gideon Sonoiya  Gordon Chen  gpapadok  Greg Mogavero  Grzegorz Szpak  Guillaume Lemaitre  Guillem Garc a Subies  H4dr1en  hadshirt  Hailey Nguyen  Hanmin Qin  Hannah Bruce Macdonald  Harsh Mahajan  Harsh Soni  Honglu Zhang  Hossein Pourbozorg  Ian Sanders  Ingrid Spielman  J A16  jaehong park  Jaime Ferrando Huertas  James Hill  James Myatt  Jay  jeremiedbb  J r mie du Boisberranger  jeromedockes  Jesper Dramsch  Joan Massich  Joanna Zhang  Joel Nothman  Johann Faouzi  Jonathan Rahn  Jon Cusick  Jose Ortiz  Kanika Sabharwal  Katarina Slama  kellycarmody  Kennedy Kang ethe  Kensuke Arai  Kesshi Jordan  Kevad  Kevin Loftis  Kevin Winata  Kevin Yu Sheng Li  Kirill Dolmatov  Kirthi Shankar Sivamani  krishna katyal  Lakshmi Krishnan  Lakshya KD  LalliAcqua  lbfin  Leland McInnes  L onard Binet  Loic Esteve  loopyme  lostcoaster  Louis Huynh  lrjball  Luca Ionescu  Lutz Roeder  MaggieChege  Maithreyi Venkatesh  Maltimore  Maocx  Marc Torrellas  Marie Douriez  Markus  Markus Frey  Martina G  Vilas  Martin Oywa  Martin Thoma  Masashi SHIBATA  Maxwell Aladago  mbillingr  m clare  Meghann Agarwal  m fab  Micah Smith  miguelbarao  Miguel Cabrera  Mina Naghshhnejad  Ming Li  motmoti  mschaffenroth  mthorrell  Natasha Borders  nezar a  Nicolas Hug  Nidhin Pattaniyil  Nikita Titov  Nishan Singh Mann  Nitya Mandyam  norvan  notmatthancock  novaya  nxorable  Oleg Stikhin  Oleksandr Pavlyk  Olivier Grisel  Omar Saleem  Owen Flanagan  panpiort8  Paolo  Paolo Toccaceli  Paresh Mathur  Paula  Peng Yu  Peter Marko  pierretallotte  poorna kumar  pspachtholz  qdeffense  Rajat Garg  Rapha l Bournhonesque  Ray  Ray Bell  Rebekah Kim  Reza Gharibi  Richard Payne  Richard W  rlms  Robert Juergens  Rok Mihevc  Roman Feldbauer  Roman Yurchak  R Sanjabi  RuchitaGarde  Ruth Waithera  Sackey  Sam Dixon  Samesh Lakhotia  Samuel Taylor  Sarra Habchi  Scott Gigante  Scott Sievert  Scott White  Sebastian P lsterl  Sergey Feldman  SeWook Oh  she dares  Shreya V  
Shubham Mehta  Shuzhe Xiao  SimonCW  smarie  smujjiga  S nke Behrends  Soumirai  Sourav Singh  stefan matcovici  steinfurt  St phane Couvreur  Stephan Tulkens  Stephen Cowley  Stephen Tierney  SylvainLan  th0rwas  theoptips  theotheo  Thierno Ibrahima DIOP  Thomas Edwards  Thomas J Fan  Thomas Moreau  Thomas Schmitt  Tilen Kusterle  Tim Bicker  Timsaur  Tim Staley  Tirth Patel  Tola A  Tom Augspurger  Tom Dupr  la Tour  topisan  Trevor Stephens  ttang131  Urvang Patel  Vathsala Achar  veerlosar  Venkatachalam N  Victor Luzgin  Vincent Jeanselme  Vincent Lostanlen  Vladimir Korolev  vnherdeiro  Wenbo Zhao  Wendy Hu  willdarnell  William de Vazelhes  wolframalpha  xavier dupr   xcjason  x martian  xsat  xun tang  Yinglr  yokasre  Yu Hang  Maxin  Tang  Yulia Zamriy  Zhao Feng"}
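The log-uniform random variable mentioned in the `utils.fixes.loguniform` entry of the 0.22 changelog above (each decade such as 1–10 and 10–100 is equally likely) can be sketched with the standard library alone. This is a minimal illustration of the idea, not the scikit-learn or SciPy API; `sample_loguniform` is a hypothetical helper.

```python
import math
import random

def sample_loguniform(low, high, rng):
    """Hypothetical helper: draw one value from a log-uniform
    distribution on [low, high] by sampling the exponent uniformly."""
    return math.exp(rng.uniform(math.log(low), math.log(high)))

rng = random.Random(0)
samples = [sample_loguniform(1, 100, rng) for _ in range(100_000)]

# Because the exponent is uniform, each decade carries equal probability
# mass: roughly half the draws land in [1, 10) and half in [10, 100].
share_below_10 = sum(s < 10 for s in samples) / len(samples)
print(f"{share_below_10:.2f}")  # close to 0.50
```

This equal-mass-per-decade property is what makes a log-uniform prior useful for hyperparameter search over scales spanning several orders of magnitude, as in `RandomizedSearchCV`.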
{"questions":"scikit-learn sklearn contributors rst Version 1 1 releasenotes11","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n.. _release_notes_1_1:\n\n===========\nVersion 1.1\n===========\n\nFor a short description of the main highlights of the release, please refer to\n:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_1_0.py`.\n\n.. include:: changelog_legend.inc\n\n.. _changes_1_1_3:\n\nVersion 1.1.3\n=============\n\n**October 2022**\n\nThis bugfix release only includes fixes for compatibility with the latest\nSciPy release >= 1.9.2. Notable changes include:\n\n- |Fix| Include `msvcp140.dll` in the scikit-learn wheels since it has been\n  removed in the latest SciPy wheels.\n  :pr:`24631` by :user:`Chiara Marmo <cmarmo>`.\n\n- |Enhancement| Create wheels for Python 3.11.\n  :pr:`24446` by :user:`Chiara Marmo <cmarmo>`.\n\nOther bug fixes will be available in the next 1.2 release, which will be\nreleased in the coming weeks.\n\nNote that support for 32-bit Python on Windows has been dropped in this release. This\nis due to the fact that SciPy 1.9.2 also dropped the support for that platform.\nWindows users are advised to install the 64-bit version of Python instead.\n\n.. _changes_1_1_2:\n\nVersion 1.1.2\n=============\n\n**August 2022**\n\nChanged models\n--------------\n\nThe following estimators and functions, when fit with the same data and\nparameters, may produce different models from the previous version. This often\noccurs due to changes in the modelling logic (bug fixes or enhancements), or in\nrandom sampling procedures.\n\n- |Fix| :class:`manifold.TSNE` now throws a `ValueError` when fit with\n  `perplexity>=n_samples` to ensure mathematical correctness of the algorithm.\n  :pr:`10805` by :user:`Mathias Andersen <MrMathias>` and\n  :pr:`23471` by :user:`Meekail Zain <micky774>`.\n\nChangelog\n---------\n\n- |Fix| A default HTML representation is shown for meta-estimators with invalid\n  parameters. 
:pr:`24015` by `Thomas Fan`_.\n\n- |Fix| Add support for F-contiguous arrays for estimators and functions whose back-end\n  have been changed in 1.1.\n  :pr:`23990` by :user:`Julien Jerphanion <jjerphan>`.\n\n- |Fix| Wheels are now available for MacOS 10.9 and greater. :pr:`23833` by\n  `Thomas Fan`_.\n\n:mod:`sklearn.base`\n...................\n\n- |Fix| The `get_params` method of the :class:`base.BaseEstimator` class now supports\n  estimators with `type`-type params that have the `get_params` method.\n  :pr:`24017` by :user:`Henry Sorsky <hsorsky>`.\n\n:mod:`sklearn.cluster`\n......................\n\n- |Fix| Fixed a bug in :class:`cluster.Birch` that could trigger an error when splitting\n  a node if there are duplicates in the dataset.\n  :pr:`23395` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Fix| :class:`feature_selection.SelectFromModel` defaults to selection\n  threshold 1e-5 when the estimator is either :class:`linear_model.ElasticNet`\n  or :class:`linear_model.ElasticNetCV` with `l1_ratio` equals 1 or\n  :class:`linear_model.LassoCV`.\n  :pr:`23636` by :user:`Hao Chun Chang <haochunchang>`.\n\n:mod:`sklearn.impute`\n.....................\n\n- |Fix| :class:`impute.SimpleImputer` uses the dtype seen in `fit` for\n  `transform` when the dtype is object. :pr:`22063` by `Thomas Fan`_.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Fix| Use dtype-aware tolerances for the validation of gram matrices (passed by users\n  or precomputed). :pr:`22059` by :user:`Malte S. Kurz <MalteKurz>`.\n\n- |Fix| Fixed an error in :class:`linear_model.LogisticRegression` with\n  `solver=\"newton-cg\"`, `fit_intercept=True`, and a single feature. 
:pr:`23608`\n  by `Tom Dupre la Tour`_.\n\n:mod:`sklearn.manifold`\n.......................\n\n- |Fix| :class:`manifold.TSNE` now throws a `ValueError` when fit with\n  `perplexity>=n_samples` to ensure mathematical correctness of the algorithm.\n  :pr:`10805` by :user:`Mathias Andersen <MrMathias>` and\n  :pr:`23471` by :user:`Meekail Zain <micky774>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fixed error message of :class:`metrics.coverage_error` for 1D array input.\n  :pr:`23548` by :user:`Hao Chun Chang <haochunchang>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Fix| :meth:`preprocessing.OrdinalEncoder.inverse_transform` correctly handles\n  use cases where `unknown_value` or `encoded_missing_value` is `nan`. :pr:`24087`\n  by `Thomas Fan`_.\n\n:mod:`sklearn.tree`\n...................\n\n- |Fix| Fixed invalid memory access bug during fit in\n  :class:`tree.DecisionTreeRegressor` and :class:`tree.DecisionTreeClassifier`.\n  :pr:`23273` by `Thomas Fan`_.\n\n.. 
_changes_1_1_1:\n\nVersion 1.1.1\n=============\n\n**May 2022**\n\nChangelog\n---------\n\n- |Enhancement| The error message is improved when importing\n  :class:`model_selection.HalvingGridSearchCV`,\n  :class:`model_selection.HalvingRandomSearchCV`, or\n  :class:`impute.IterativeImputer` without importing the experimental flag.\n  :pr:`23194` by `Thomas Fan`_.\n\n- |Enhancement| Added an extension in doc\/conf.py to automatically generate\n  the list of estimators that handle NaN values.\n  :pr:`23198` by :user:`Lise Kleiber <lisekleiber>`, :user:`Zhehao Liu <MaxwellLZH>`\n  and :user:`Chiara Marmo <cmarmo>`.\n\n:mod:`sklearn.datasets`\n.......................\n\n- |Fix| Avoid timeouts in :func:`datasets.fetch_openml` by not passing a\n  `timeout` argument, :pr:`23358` by :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |Fix| Avoid spurious warning in :class:`decomposition.IncrementalPCA` when\n  `n_samples == n_components`. :pr:`23264` by :user:`Lucy Liu <lucyleeow>`.\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Fix| The `partial_fit` method of :class:`feature_selection.SelectFromModel`\n  now conducts validation for `max_features` and `feature_names_in` parameters.\n  :pr:`23299` by :user:`Long Bao <lorentzbao>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Fix| Fixes :func:`metrics.precision_recall_curve` to compute precision-recall at 100%\n  recall. 
The Precision-Recall curve now displays the last point corresponding to a
  classifier that always predicts the positive class: recall=100% and
  precision=class balance.
  :pr:`23214` by :user:`Stéphane Collot <stephanecollot>` and :user:`Max Baak <mbaak>`.

:mod:`sklearn.preprocessing`
............................

- |Fix| :class:`preprocessing.PolynomialFeatures` with ``degree`` equal to 0
  now raises an error when ``include_bias`` is set to False, and outputs a
  single constant array when ``include_bias`` is set to True.
  :pr:`23370` by :user:`Zhehao Liu <MaxwellLZH>`.

:mod:`sklearn.tree`
...................

- |Fix| Fixes a performance regression with low cardinality features for
  :class:`tree.DecisionTreeClassifier`,
  :class:`tree.DecisionTreeRegressor`,
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.GradientBoostingClassifier`, and
  :class:`ensemble.GradientBoostingRegressor`.
  :pr:`23410` by :user:`Loïc Estève <lesteve>`.

:mod:`sklearn.utils`
....................

- |Fix| :func:`utils.class_weight.compute_sample_weight` now works with sparse `y`.
  :pr:`23115` by :user:`kernc <kernc>`.

.. _changes_1_1:

Version 1.1.0
=============

**May 2022**

Minimal dependencies
--------------------

Version 1.1.0 of scikit-learn requires Python 3.8+, NumPy 1.17.3+ and
SciPy 1.3.2+. The optional dependency matplotlib requires version 3.1.2+.

Changed models
--------------

The following estimators and functions, when fit with the same data and
parameters, may produce different models from the previous version. This often
occurs due to changes in the modelling logic (bug fixes or enhancements), or in
random sampling procedures.

- |Efficiency| :class:`cluster.KMeans` now defaults to ``algorithm="lloyd"``
  instead of ``algorithm="auto"``, which was equivalent to
  ``algorithm="elkan"``. 
Lloyd's algorithm and Elkan's algorithm converge to the
  same solution, up to numerical rounding errors, but in general Lloyd's
  algorithm uses much less memory, and it is often faster.

- |Efficiency| Fitting :class:`tree.DecisionTreeClassifier`,
  :class:`tree.DecisionTreeRegressor`,
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.GradientBoostingClassifier`, and
  :class:`ensemble.GradientBoostingRegressor` is on average 15% faster than in
  previous versions thanks to a new sort algorithm to find the best split.
  Models might be different because of a different handling of splits
  with tied criterion values: both the old and the new sorting algorithms
  are unstable. :pr:`22868` by `Thomas Fan`_.

- |Fix| The eigenvectors initialization for :class:`cluster.SpectralClustering`
  and :class:`manifold.SpectralEmbedding` now samples from a Gaussian when
  using the `'amg'` or `'lobpcg'` solver. This change improves numerical
  stability of the solver, but may result in a different model.

- |Fix| :func:`feature_selection.f_regression` and
  :func:`feature_selection.r_regression` now return finite scores by
  default instead of `np.nan` and `np.inf` in some corner cases. You can use
  `force_finite=False` if you really want to get non-finite values and keep
  the old behavior.

- |Fix| Pandas DataFrames with all non-string columns such as a MultiIndex no
  longer warn when passed into an estimator. Estimators will continue to
  ignore the column names in DataFrames with non-string columns. For
  `feature_names_in_` to be defined, columns must be all strings. 
:pr:`22410` by
  `Thomas Fan`_.

- |Fix| :class:`preprocessing.KBinsDiscretizer` changed handling of bin edges
  slightly, which might result in a different encoding with the same data.

- |Fix| :func:`calibration.calibration_curve` changed handling of bin
  edges slightly, which might result in a different output curve given the same
  data.

- |Fix| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now uses
  the correct variance-scaling coefficient which may result in different model
  behavior.

- |Fix| :meth:`feature_selection.SelectFromModel.fit` and
  :meth:`feature_selection.SelectFromModel.partial_fit` can now be called with
  `prefit=True`. `estimator_` will be a deep copy of `estimator` when
  `prefit=True`. :pr:`23271` by :user:`Guillaume Lemaitre <glemaitre>`.

Changelog
---------

..
    Entries should be grouped by module (in alphabetic order) and prefixed with
    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
    |Fix| or |API| (see whats_new.rst for descriptions).
    Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
    Changes not specific to a module should be listed under *Multiple Modules*
    or *Miscellaneous*.
    Entries should end with:
    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
    where 123456 is the *pull request* number, not the issue number.


- |Efficiency| Low-level routines for reductions on pairwise distances
  for dense float64 datasets have been refactored. 
The following functions
  and estimators now benefit from improved performance in terms of hardware
  scalability and speed-ups:

  - :func:`sklearn.metrics.pairwise_distances_argmin`
  - :func:`sklearn.metrics.pairwise_distances_argmin_min`
  - :class:`sklearn.cluster.AffinityPropagation`
  - :class:`sklearn.cluster.Birch`
  - :class:`sklearn.cluster.MeanShift`
  - :class:`sklearn.cluster.OPTICS`
  - :class:`sklearn.cluster.SpectralClustering`
  - :func:`sklearn.feature_selection.mutual_info_regression`
  - :class:`sklearn.neighbors.KNeighborsClassifier`
  - :class:`sklearn.neighbors.KNeighborsRegressor`
  - :class:`sklearn.neighbors.RadiusNeighborsClassifier`
  - :class:`sklearn.neighbors.RadiusNeighborsRegressor`
  - :class:`sklearn.neighbors.LocalOutlierFactor`
  - :class:`sklearn.neighbors.NearestNeighbors`
  - :class:`sklearn.manifold.Isomap`
  - :class:`sklearn.manifold.LocallyLinearEmbedding`
  - :class:`sklearn.manifold.TSNE`
  - :func:`sklearn.manifold.trustworthiness`
  - :class:`sklearn.semi_supervised.LabelPropagation`
  - :class:`sklearn.semi_supervised.LabelSpreading`

  For instance :meth:`sklearn.neighbors.NearestNeighbors.kneighbors` and
  :meth:`sklearn.neighbors.NearestNeighbors.radius_neighbors`
  can respectively be up to ×20 and ×5 faster than previously on a laptop.

  Moreover, implementations of those two algorithms are now suitable
  for machines with many cores, making them usable for datasets consisting
  of millions of samples.

  :pr:`21987`, :pr:`22064`, :pr:`22065`, :pr:`22288` and :pr:`22320`
  by :user:`Julien Jerphanion <jjerphan>`.

- |Enhancement| All scikit-learn models now generate a more informative
  error message when some input contains unexpected `NaN` or infinite values.
  In particular, the message contains the input name ("X", "y" or
  "sample_weight") and, if an unexpected `NaN` value is found in `X`, the error
  message suggests potential solutions.
  :pr:`21219` by :user:`Olivier Grisel <ogrisel>`.

- |Enhancement| All scikit-learn models now generate a more informative
  error message when setting invalid hyper-parameters with `set_params`.
  :pr:`21542` by :user:`Olivier Grisel <ogrisel>`.

- |Enhancement| Removes random unique identifiers in the HTML representation.
  With this change, Jupyter notebooks are reproducible as long as the cells are
  run in the same order. :pr:`23098` by `Thomas Fan`_.

- |Fix| Estimators with the `non_deterministic` tag set to `True` will skip both
  `check_methods_sample_order_invariance` and `check_methods_subset_invariance` tests.
  :pr:`22318` by :user:`Zhehao Liu <MaxwellLZH>`.

- |API| The option for using the log loss, aka binomial or multinomial deviance, via
  the `loss` parameters was made more consistent. The preferred way is by
  setting the value to `"log_loss"`. Old option names are still valid and
  produce the same models, but are deprecated and will be removed in version
  1.3.

  - For :class:`ensemble.GradientBoostingClassifier`, the `loss` parameter name
    "deviance" is deprecated in favor of the new name "log_loss", which is now the
    default.
    :pr:`23036` by :user:`Christian Lorentzen <lorentzenchr>`.

  - For :class:`ensemble.HistGradientBoostingClassifier`, the `loss` parameter names
    "auto", "binary_crossentropy" and "categorical_crossentropy" are deprecated in
    favor of the new name "log_loss", which is now the default.
    :pr:`23040` by :user:`Christian Lorentzen <lorentzenchr>`.

  - For :class:`linear_model.SGDClassifier`, the `loss` parameter name
    "log" is deprecated in favor of the new name "log_loss".
    :pr:`23046` by :user:`Christian Lorentzen <lorentzenchr>`.

- |API| Rich HTML representation of estimators is now enabled by default in Jupyter
  notebooks. 
It can be deactivated by setting `display='text'` in
  :func:`sklearn.set_config`.
  :pr:`22856` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.calibration`
..........................

- |Enhancement| :func:`calibration.calibration_curve` accepts a parameter
  `pos_label` to specify the positive class label.
  :pr:`21032` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Enhancement| :meth:`calibration.CalibratedClassifierCV.fit` now supports passing
  `fit_params`, which are routed to the `base_estimator`.
  :pr:`18170` by :user:`Benjamin Bossan <BenjaminBossan>`.

- |Enhancement| :class:`calibration.CalibrationDisplay` accepts a parameter `pos_label`
  to add this information to the plot.
  :pr:`21038` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :func:`calibration.calibration_curve` now handles bin edges more consistently.
  :pr:`14975` by `Andreas Müller`_ and :pr:`22526` by :user:`Meekail Zain <micky774>`.

- |API| :func:`calibration.calibration_curve`'s `normalize` parameter is
  now deprecated and will be removed in version 1.3. It is recommended that
  a proper probability (i.e. a classifier's :term:`predict_proba` positive
  class) is used for `y_prob`.
  :pr:`23095` by :user:`Jordan Silke <jsilke>`.

:mod:`sklearn.cluster`
......................

- |MajorFeature| Added :class:`cluster.BisectingKMeans`, introducing the
  Bisecting K-Means algorithm.
  :pr:`20031` by :user:`Michal Krawczyk <michalkrawczyk>`,
  :user:`Tom Dupre la Tour <TomDLT>`
  and :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Enhancement| :class:`cluster.SpectralClustering` and
  :func:`cluster.spectral_clustering` now include the new `'cluster_qr'` method that
  clusters samples in the embedding space as an alternative to the existing `'kmeans'`
  and `'discrete'` methods. 
See :func:`cluster.spectral_clustering` for more details.\n  :pr:`21148` by :user:`Andrew Knyazev <lobpcg>`.\n\n- |Enhancement| Adds :term:`get_feature_names_out` to :class:`cluster.Birch`,\n  :class:`cluster.FeatureAgglomeration`, :class:`cluster.KMeans`,\n  :class:`cluster.MiniBatchKMeans`. :pr:`22255` by `Thomas Fan`_.\n\n- |Enhancement| :class:`cluster.SpectralClustering` now raises consistent\n  error messages when passed invalid values for `n_clusters`, `n_init`,\n  `gamma`, `n_neighbors`, `eigen_tol` or `degree`.\n  :pr:`21881` by :user:`Hugo Vassard <hvassard>`.\n\n- |Enhancement| :class:`cluster.AffinityPropagation` now returns cluster\n  centers and labels if they exist, even if the model has not fully converged.\n  When returning these potentially-degenerate cluster centers and labels, a new\n  warning message is shown. If no cluster centers were constructed,\n  then the cluster centers remain an empty list with labels set to\n  `-1` and the original warning message is shown.\n  :pr:`22217` by :user:`Meekail Zain <micky774>`.\n\n- |Efficiency| In :class:`cluster.KMeans`, the default ``algorithm`` is now\n  ``\"lloyd\"`` which is the full classical EM-style algorithm. Both ``\"auto\"``\n  and ``\"full\"`` are deprecated and will be removed in version 1.3. They are\n  now aliases for ``\"lloyd\"``. The previous default was ``\"auto\"``, which relied\n  on Elkan's algorithm. Lloyd's algorithm uses less memory than Elkan's, it\n  is faster on many datasets, and its results are identical, hence the change.\n  :pr:`21735` by :user:`Aur\u00e9lien Geron <ageron>`.\n\n- |Fix| :class:`cluster.KMeans`'s `init` parameter now properly supports\n  array-like input and NumPy string scalars. :pr:`22154` by `Thomas Fan`_.\n\n:mod:`sklearn.compose`\n......................\n\n- |Fix| :class:`compose.ColumnTransformer` now removes validation errors from\n  `__init__` and `set_params` methods.\n  :pr:`22537` by :user:`iofall <iofall>` and :user:`Arisa Y. 
<arisayosh>`.

- |Fix| :term:`get_feature_names_out` functionality in
  :class:`compose.ColumnTransformer` was broken when columns were specified
  using `slice`. This is fixed in :pr:`22775` and :pr:`22913` by
  :user:`randomgeek78 <randomgeek78>`.

:mod:`sklearn.covariance`
.........................

- |Fix| :class:`covariance.GraphicalLassoCV` now accepts a NumPy array for the
  parameter `alphas`.
  :pr:`22493` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.cross_decomposition`
..................................

- |Enhancement| The `inverse_transform` method of
  :class:`cross_decomposition.PLSRegression`, :class:`cross_decomposition.PLSCanonical`
  and :class:`cross_decomposition.CCA` now allows reconstruction of an `X` target when
  a `Y` parameter is given. :pr:`19680` by
  :user:`Robin Thibaut <robinthibaut>`.

- |Enhancement| Adds :term:`get_feature_names_out` to all transformers in the
  :mod:`~sklearn.cross_decomposition` module:
  :class:`cross_decomposition.CCA`,
  :class:`cross_decomposition.PLSSVD`,
  :class:`cross_decomposition.PLSRegression`,
  and :class:`cross_decomposition.PLSCanonical`. :pr:`22119` by `Thomas Fan`_.

- |Fix| The shape of the :term:`coef_` attribute of :class:`cross_decomposition.CCA`,
  :class:`cross_decomposition.PLSCanonical` and
  :class:`cross_decomposition.PLSRegression` will change in version 1.3, from
  `(n_features, n_targets)` to `(n_targets, n_features)`, to be consistent
  with other linear models and to make it work with interfaces expecting a
  specific shape for `coef_` (e.g. :class:`feature_selection.RFE`).
  :pr:`22016` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| Add the fitted attribute `intercept_` to
  :class:`cross_decomposition.PLSCanonical`,
  :class:`cross_decomposition.PLSRegression`, and
  :class:`cross_decomposition.CCA`. 
The method `predict` is indeed equivalent to
  `Y = X @ coef_ + intercept_`.
  :pr:`22015` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.datasets`
.......................

- |Feature| :func:`datasets.load_files` now accepts an ignore list and
  an allow list based on file extensions.
  :pr:`19747` by :user:`Tony Attalla <tonyattalla>` and :pr:`22498` by
  :user:`Meekail Zain <micky774>`.

- |Enhancement| :func:`datasets.make_swiss_roll` now supports the optional argument
  `hole`; when set to `True`, it returns the swiss-hole dataset. :pr:`21482` by
  :user:`Sebastian Pujalte <pujaltes>`.

- |Enhancement| :func:`datasets.make_blobs` no longer copies data during the generation
  process and therefore uses less memory.
  :pr:`22412` by :user:`Zhehao Liu <MaxwellLZH>`.

- |Enhancement| :func:`datasets.load_diabetes` now accepts the parameter
  ``scaled``, to allow loading unscaled data. The scaled version of this
  dataset is now computed from the unscaled data, and can produce slightly
  different results than in previous versions (within a 1e-4 absolute
  tolerance).
  :pr:`16605` by :user:`Mandy Gu <happilyeverafter95>`.

- |Enhancement| :func:`datasets.fetch_openml` now has two optional arguments
  `n_retries` and `delay`. By default, :func:`datasets.fetch_openml` will retry
  3 times in case of a network failure with a delay between each try.
  :pr:`21901` by :user:`Rileran <rileran>`.

- |Fix| :func:`datasets.fetch_covtype` is now concurrent-safe: data is downloaded
  to a temporary directory before being moved to the data directory.
  :pr:`23113` by :user:`Ilion Beyst <iasoon>`.

- |API| :func:`datasets.make_sparse_coded_signal` now accepts a parameter
  `data_transposed` to explicitly specify the shape of matrix `X`. The default
  behavior `True` is to return a transposed matrix `X` corresponding to a
  `(n_features, n_samples)` shape. The default value will change to `False` in
  version 1.3. 
:pr:`21425` by :user:`Gabriel Stefanini Vicente <g4brielvs>`.\n\n:mod:`sklearn.decomposition`\n............................\n\n- |MajorFeature| Added a new estimator :class:`decomposition.MiniBatchNMF`. It is a\n  faster but less accurate version of non-negative matrix factorization, better suited\n  for large datasets. :pr:`16948` by :user:`Chiara Marmo <cmarmo>`,\n  :user:`Patricio Cerda <pcerda>` and :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Enhancement| :func:`decomposition.dict_learning`,\n  :func:`decomposition.dict_learning_online`\n  and :func:`decomposition.sparse_encode` preserve dtype for `numpy.float32`.\n  :class:`decomposition.DictionaryLearning`,\n  :class:`decomposition.MiniBatchDictionaryLearning`\n  and :class:`decomposition.SparseCoder` preserve dtype for `numpy.float32`.\n  :pr:`22002` by :user:`Takeshi Oura <takoika>`.\n\n- |Enhancement| :class:`decomposition.PCA` exposes a parameter `n_oversamples` to tune\n  :func:`utils.extmath.randomized_svd` and get accurate results when the number of\n  features is large.\n  :pr:`21109` by :user:`Smile <x-shadow-man>`.\n\n- |Enhancement| The :class:`decomposition.MiniBatchDictionaryLearning` and\n  :func:`decomposition.dict_learning_online` have been refactored and now have a\n  stopping criterion based on a small change of the dictionary or objective function,\n  controlled by the new `max_iter`, `tol` and `max_no_improvement` parameters. In\n  addition, some of their parameters and attributes are deprecated.\n\n  - the `n_iter` parameter of both is deprecated. 
Use `max_iter` instead.\n  - the `iter_offset`, `return_inner_stats`, `inner_stats` and `return_n_iter`\n    parameters of :func:`decomposition.dict_learning_online` serve internal purpose\n    and are deprecated.\n  - the `inner_stats_`, `iter_offset_` and `random_state_` attributes of\n    :class:`decomposition.MiniBatchDictionaryLearning` serve internal purpose and are\n    deprecated.\n  - the default value of the `batch_size` parameter of both will change from 3 to 256\n    in version 1.3.\n\n  :pr:`18975` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Enhancement| :class:`decomposition.SparsePCA` and :class:`decomposition.MiniBatchSparsePCA`\n  preserve dtype for `numpy.float32`.\n  :pr:`22111` by :user:`Takeshi Oura <takoika>`.\n\n- |Enhancement| :class:`decomposition.TruncatedSVD` now allows\n  `n_components == n_features`, if `algorithm='randomized'`.\n  :pr:`22181` by :user:`Zach Deane-Mayer <zachmayer>`.\n\n- |Enhancement| Adds :term:`get_feature_names_out` to all transformers in the\n  :mod:`~sklearn.decomposition` module:\n  :class:`decomposition.DictionaryLearning`,\n  :class:`decomposition.FactorAnalysis`,\n  :class:`decomposition.FastICA`,\n  :class:`decomposition.IncrementalPCA`,\n  :class:`decomposition.KernelPCA`,\n  :class:`decomposition.LatentDirichletAllocation`,\n  :class:`decomposition.MiniBatchDictionaryLearning`,\n  :class:`decomposition.MiniBatchSparsePCA`,\n  :class:`decomposition.NMF`,\n  :class:`decomposition.PCA`,\n  :class:`decomposition.SparsePCA`,\n  and :class:`decomposition.TruncatedSVD`. :pr:`21334` by\n  `Thomas Fan`_.\n\n- |Enhancement| :class:`decomposition.TruncatedSVD` exposes the parameter\n  `n_oversamples` and `power_iteration_normalizer` to tune\n  :func:`utils.extmath.randomized_svd` and get accurate results when the number\n  of features is large, the rank of the matrix is high, or other features of\n  the matrix make low rank approximation difficult.\n  :pr:`21705` by :user:`Jay S. 
Stanley III <stanleyjs>`.\n\n- |Enhancement| :class:`decomposition.PCA` exposes the parameter\n  `power_iteration_normalizer` to tune :func:`utils.extmath.randomized_svd` and\n  get more accurate results when low rank approximation is difficult.\n  :pr:`21705` by :user:`Jay S. Stanley III <stanleyjs>`.\n\n- |Fix| :class:`decomposition.FastICA` now validates input parameters in `fit`\n  instead of `__init__`.\n  :pr:`21432` by :user:`Hannah Bohle <hhnnhh>` and\n  :user:`Maren Westermann <marenwestermann>`.\n\n- |Fix| :class:`decomposition.FastICA` now accepts `np.float32` data without\n  silent upcasting. The dtype is preserved by `fit` and `fit_transform` and the\n  main fitted attributes use a dtype of the same precision as the training\n  data. :pr:`22806` by :user:`Jihane Bennis <JihaneBennis>` and\n  :user:`Olivier Grisel <ogrisel>`.\n\n- |Fix| :class:`decomposition.FactorAnalysis` now validates input parameters\n  in `fit` instead of `__init__`.\n  :pr:`21713` by :user:`Haya <HayaAlmutairi>` and :user:`Krum Arnaudov <krumeto>`.\n\n- |Fix| :class:`decomposition.KernelPCA` now validates input parameters in\n  `fit` instead of `__init__`.\n  :pr:`21567` by :user:`Maggie Chege <MaggieChege>`.\n\n- |Fix| :class:`decomposition.PCA` and :class:`decomposition.IncrementalPCA`\n  more safely calculate precision using the inverse of the covariance matrix\n  if `self.noise_variance_` is zero.\n  :pr:`22300` by :user:`Meekail Zain <micky774>` and :pr:`15948` by :user:`sysuresh`.\n\n- |Fix| Greatly reduced peak memory usage in :class:`decomposition.PCA` when\n  calling `fit` or `fit_transform`.\n  :pr:`22553` by :user:`Meekail Zain <micky774>`.\n\n- |API| :func:`decomposition.FastICA` now supports unit variance for whitening.\n  The default value of its `whiten` argument will change from `True`\n  (which behaves like `'arbitrary-variance'`) to `'unit-variance'` in version 1.3.\n  :pr:`19490` by :user:`Facundo Ferrin <fferrin>` and\n  :user:`Julien Jerphanion 
<jjerphan>`.\n\n:mod:`sklearn.discriminant_analysis`\n....................................\n\n- |Enhancement| Adds :term:`get_feature_names_out` to\n  :class:`discriminant_analysis.LinearDiscriminantAnalysis`. :pr:`22120` by\n  `Thomas Fan`_.\n\n- |Fix| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now uses\n  the correct variance-scaling coefficient which may result in different model\n  behavior. :pr:`15984` by :user:`Okon Samuel <OkonSamuel>` and :pr:`22696` by\n  :user:`Meekail Zain <micky774>`.\n\n:mod:`sklearn.dummy`\n....................\n\n- |Fix| :class:`dummy.DummyRegressor` no longer overrides the `constant`\n  parameter during `fit`. :pr:`22486` by `Thomas Fan`_.\n\n:mod:`sklearn.ensemble`\n.......................\n\n- |MajorFeature| Added additional option `loss=\"quantile\"` to\n  :class:`ensemble.HistGradientBoostingRegressor` for modelling quantiles.\n  The quantile level can be specified with the new parameter `quantile`.\n  :pr:`21800` and :pr:`20567` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Efficiency| `fit` of :class:`ensemble.GradientBoostingClassifier`\n  and :class:`ensemble.GradientBoostingRegressor` now calls :func:`utils.check_array`\n  with parameter `force_all_finite=False` for non initial warm-start runs as it has\n  already been checked before.\n  :pr:`22159` by :user:`Geoffrey Paris <Geoffrey-Paris>`.\n\n- |Enhancement| :class:`ensemble.HistGradientBoostingClassifier` is faster,\n  for binary and in particular for multiclass problems thanks to the new private loss\n  function module.\n  :pr:`20811`, :pr:`20567` and :pr:`21814` by\n  :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Enhancement| Adds support to use pre-fit models with `cv=\"prefit\"`\n  in :class:`ensemble.StackingClassifier` and :class:`ensemble.StackingRegressor`.\n  :pr:`16748` by :user:`Siqi He <siqi-he>` and :pr:`22215` by\n  :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| :class:`ensemble.RandomForestClassifier` and\n  
:class:`ensemble.ExtraTreesClassifier` have the new `criterion=\"log_loss\"`, which is\n  equivalent to `criterion=\"entropy\"`.\n  :pr:`23047` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Enhancement| Adds :term:`get_feature_names_out` to\n  :class:`ensemble.VotingClassifier`, :class:`ensemble.VotingRegressor`,\n  :class:`ensemble.StackingClassifier`, and\n  :class:`ensemble.StackingRegressor`. :pr:`22695` and :pr:`22697`  by `Thomas Fan`_.\n\n- |Enhancement| :class:`ensemble.RandomTreesEmbedding` now has an informative\n  :term:`get_feature_names_out` function that includes both tree index and leaf index in\n  the output feature names.\n  :pr:`21762` by :user:`Zhehao Liu <MaxwellLZH>` and `Thomas Fan`_.\n\n- |Efficiency| Fitting a :class:`ensemble.RandomForestClassifier`,\n  :class:`ensemble.RandomForestRegressor`, :class:`ensemble.ExtraTreesClassifier`,\n  :class:`ensemble.ExtraTreesRegressor`, and :class:`ensemble.RandomTreesEmbedding`\n  is now faster in a multiprocessing setting, especially for subsequent fits with\n  `warm_start` enabled.\n  :pr:`22106` by :user:`Pieter Gijsbers <PGijsbers>`.\n\n- |Fix| Change the parameter `validation_fraction` in\n  :class:`ensemble.GradientBoostingClassifier` and\n  :class:`ensemble.GradientBoostingRegressor` so that an error is raised if anything\n  other than a float is passed in as an argument.\n  :pr:`21632` by :user:`Genesis Valencia <genvalen>`.\n\n- |Fix| Removed a potential source of CPU oversubscription in\n  :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor` when CPU resource usage is limited,\n  for instance using cgroups quota in a docker container. 
:pr:`22566` by\n  :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| :class:`ensemble.HistGradientBoostingClassifier` and\n  :class:`ensemble.HistGradientBoostingRegressor` no longer warns when\n  fitting on a pandas DataFrame with a non-default `scoring` parameter and\n  early_stopping enabled. :pr:`22908` by `Thomas Fan`_.\n\n- |Fix| Fixes HTML repr for :class:`ensemble.StackingClassifier` and\n  :class:`ensemble.StackingRegressor`. :pr:`23097` by `Thomas Fan`_.\n\n- |API| The attribute `loss_` of :class:`ensemble.GradientBoostingClassifier` and\n  :class:`ensemble.GradientBoostingRegressor` has been deprecated and will be removed\n  in version 1.3.\n  :pr:`23079` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |API| Changed the default of `max_features` to 1.0 for\n  :class:`ensemble.RandomForestRegressor` and to `\"sqrt\"` for\n  :class:`ensemble.RandomForestClassifier`. Note that these give the same fit\n  results as before, but are much easier to understand. The old default value\n  `\"auto\"` has been deprecated and will be removed in version 1.3. The same\n  changes are also applied for :class:`ensemble.ExtraTreesRegressor` and\n  :class:`ensemble.ExtraTreesClassifier`.\n  :pr:`20803` by :user:`Brian Sun <bsun94>`.\n\n- |Efficiency| Improve runtime performance of :class:`ensemble.IsolationForest`\n  by skipping repetitive input checks. :pr:`23149` by :user:`Zhehao Liu <MaxwellLZH>`.\n\n:mod:`sklearn.feature_extraction`\n.................................\n\n- |Feature| :class:`feature_extraction.FeatureHasher` now supports PyPy.\n  :pr:`23023` by `Thomas Fan`_.\n\n- |Fix| :class:`feature_extraction.FeatureHasher` now validates input parameters\n  in `transform` instead of `__init__`. 
:pr:`21573` by\n  :user:`Hannah Bohle <hhnnhh>` and :user:`Maren Westermann <marenwestermann>`.\n\n- |Fix| :class:`feature_extraction.text.TfidfVectorizer` now does not create\n  a :class:`feature_extraction.text.TfidfTransformer` at `__init__` as required\n  by our API.\n  :pr:`21832` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.feature_selection`\n................................\n\n- |Feature| Added auto mode to :class:`feature_selection.SequentialFeatureSelector`.\n  If the argument `n_features_to_select` is `'auto'`, select features until the score\n  improvement does not exceed the argument `tol`. The default value of\n  `n_features_to_select` changed from `None` to `'warn'` in 1.1 and will become\n  `'auto'` in 1.3. `None` and `'warn'` will be removed in 1.3. :pr:`20145` by\n  :user:`murata-yu <murata-yu>`.\n\n- |Feature| Added the ability to pass callables to the `max_features` parameter\n  of :class:`feature_selection.SelectFromModel`. Also introduced new attribute\n  `max_features_` which is inferred from `max_features` and the data during\n  `fit`. If `max_features` is an integer, then `max_features_ = max_features`.\n  If `max_features` is a callable, then `max_features_ = max_features(X)`.\n  :pr:`22356` by :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| :class:`feature_selection.GenericUnivariateSelect` preserves\n  float32 dtype. :pr:`18482` by :user:`Thierry Gameiro <titigmr>`\n  and :user:`Daniel Kharsa <aflatoune>` and :pr:`22370` by\n  :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| Add a parameter `force_finite` to\n  :func:`feature_selection.f_regression` and\n  :func:`feature_selection.r_regression`. 
This parameter allows to force the\n  output to be finite in the case where a feature or a the target is constant\n  or that the feature and target are perfectly correlated (only for the\n  F-statistic).\n  :pr:`17819` by :user:`Juan Carlos Alfaro Jim\u00e9nez <alfaro96>`.\n\n- |Efficiency| Improve runtime performance of :func:`feature_selection.chi2`\n  with boolean arrays. :pr:`22235` by `Thomas Fan`_.\n\n- |Efficiency| Reduced memory usage of :func:`feature_selection.chi2`.\n  :pr:`21837` by :user:`Louis Wagner <lrwagner>`.\n\n:mod:`sklearn.gaussian_process`\n...............................\n\n- |Fix| `predict` and `sample_y` methods of\n  :class:`gaussian_process.GaussianProcessRegressor` now return\n  arrays of the correct shape in single-target and multi-target cases, and for\n  both `normalize_y=False` and `normalize_y=True`.\n  :pr:`22199` by :user:`Guillaume Lemaitre <glemaitre>`,\n  :user:`Aidar Shakerimoff <AidarShakerimoff>` and\n  :user:`Tenavi Nakamura-Zimmerer <Tenavi>`.\n\n- |Fix| :class:`gaussian_process.GaussianProcessClassifier` raises\n  a more informative error if `CompoundKernel` is passed via `kernel`.\n  :pr:`22223` by :user:`MarcoM <marcozzxx810>`.\n\n:mod:`sklearn.impute`\n.....................\n\n- |Enhancement| :class:`impute.SimpleImputer` now warns with feature names when features\n  which are skipped due to the lack of any observed values in the training set.\n  :pr:`21617` by :user:`Christian Ritter <chritter>`.\n\n- |Enhancement| Added support for `pd.NA` in :class:`impute.SimpleImputer`.\n  :pr:`21114` by :user:`Ying Xiong <yxiong>`.\n\n- |Enhancement| Adds :term:`get_feature_names_out` to\n  :class:`impute.SimpleImputer`, :class:`impute.KNNImputer`,\n  :class:`impute.IterativeImputer`, and :class:`impute.MissingIndicator`.\n  :pr:`21078` by `Thomas Fan`_.\n\n- |API| The `verbose` parameter was deprecated for :class:`impute.SimpleImputer`.\n  A warning will always be raised upon the removal of empty columns.\n  :pr:`21448` by 
:user:`Oleh Kozynets <OlehKSS>` and\n  :user:`Christian Ritter <chritter>`.\n\n:mod:`sklearn.inspection`\n.........................\n\n- |Feature| Add a display to plot the boundary decision of a classifier by\n  using the method :func:`inspection.DecisionBoundaryDisplay.from_estimator`.\n  :pr:`16061` by `Thomas Fan`_.\n\n- |Enhancement| In\n  :meth:`inspection.PartialDependenceDisplay.from_estimator`, allow\n  `kind` to accept a list of strings to specify  which type of\n  plot to draw for each feature interaction.\n  :pr:`19438` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Enhancement| :meth:`inspection.PartialDependenceDisplay.from_estimator`,\n  :meth:`inspection.PartialDependenceDisplay.plot`, and\n  `inspection.plot_partial_dependence` now support plotting centered\n  Individual Conditional Expectation (cICE) and centered PDP curves controlled\n  by setting the parameter `centered`.\n  :pr:`18310` by :user:`Johannes Elfner <JoElfner>` and\n  :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.isotonic`\n.......................\n\n- |Enhancement| Adds :term:`get_feature_names_out` to\n  :class:`isotonic.IsotonicRegression`.\n  :pr:`22249` by `Thomas Fan`_.\n\n:mod:`sklearn.kernel_approximation`\n...................................\n\n- |Enhancement| Adds :term:`get_feature_names_out` to\n  :class:`kernel_approximation.AdditiveChi2Sampler`.\n  :class:`kernel_approximation.Nystroem`,\n  :class:`kernel_approximation.PolynomialCountSketch`,\n  :class:`kernel_approximation.RBFSampler`, and\n  :class:`kernel_approximation.SkewedChi2Sampler`.\n  :pr:`22137` and :pr:`22694` by `Thomas Fan`_.\n\n:mod:`sklearn.linear_model`\n...........................\n\n- |Feature| :class:`linear_model.ElasticNet`, :class:`linear_model.ElasticNetCV`,\n  :class:`linear_model.Lasso` and :class:`linear_model.LassoCV` support `sample_weight`\n  for sparse input `X`.\n  :pr:`22808` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Feature| :class:`linear_model.Ridge` with 
`solver=\"lsqr\"` now supports fitting sparse\n  input with `fit_intercept=True`.\n  :pr:`22950` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Enhancement| :class:`linear_model.QuantileRegressor` supports sparse input\n  for the `highs`-based solvers.\n  :pr:`21086` by :user:`Venkatachalam Natchiappan <venkyyuvy>`.\n  In addition, those solvers now use the CSC matrix right from the\n  beginning which speeds up fitting.\n  :pr:`22206` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Enhancement| :class:`linear_model.LogisticRegression` is faster for\n  ``solver=\"lbfgs\"`` and ``solver=\"newton-cg\"``, for binary and in particular for\n  multiclass problems thanks to the new private loss function module. In the multiclass\n  case, the memory consumption has also been reduced for these solvers as the target is\n  now label encoded (mapped to integers) instead of label binarized (one-hot encoded).\n  The more classes, the larger the benefit.\n  :pr:`21808`, :pr:`20567` and :pr:`21814` by\n  :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Enhancement| :class:`linear_model.GammaRegressor`,\n  :class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor`\n  are faster for ``solver=\"lbfgs\"``.\n  :pr:`22548`, :pr:`21808` and :pr:`20567` by\n  :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Enhancement| Rename parameter `base_estimator` to `estimator` in\n  :class:`linear_model.RANSACRegressor` to improve readability and consistency.\n  `base_estimator` is deprecated and will be removed in 1.3.\n  :pr:`22062` by :user:`Adrian Trujillo <trujillo9616>`.\n\n- |Enhancement| :class:`linear_model.ElasticNet` and\n  other linear model classes using coordinate descent show error\n  messages when non-finite parameter weights are produced. 
:pr:`22148`\n  by :user:`Christian Ritter <chritter>` and :user:`Norbert Preining <norbusan>`.\n\n- |Enhancement| :class:`linear_model.ElasticNet` and :class:`linear_model.Lasso`\n  now raise consistent error messages when passed invalid values for `l1_ratio`,\n  `alpha`, `max_iter` and `tol`.\n  :pr:`22240` by :user:`Arturo Amor <ArturoAmorQ>`.\n\n- |Enhancement| :class:`linear_model.BayesianRidge` and\n  :class:`linear_model.ARDRegression` now preserve float32 dtype. :pr:`9087` by\n  :user:`Arthur Imbert <Henley13>` and :pr:`22525` by :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| :class:`linear_model.RidgeClassifier` now supports\n  multilabel classification.\n  :pr:`19689` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Enhancement| :class:`linear_model.RidgeCV` and\n  :class:`linear_model.RidgeClassifierCV` now raise consistent error messages\n  when passed invalid values for `alphas`.\n  :pr:`21606` by :user:`Arturo Amor <ArturoAmorQ>`.\n\n- |Enhancement| :class:`linear_model.Ridge` and :class:`linear_model.RidgeClassifier`\n  now raise consistent error messages when passed invalid values for `alpha`,\n  `max_iter` and `tol`.\n  :pr:`21341` by :user:`Arturo Amor <ArturoAmorQ>`.\n\n- |Enhancement| :func:`linear_model.orthogonal_mp_gram` preserves dtype for\n  `numpy.float32`.\n  :pr:`22002` by :user:`Takeshi Oura <takoika>`.\n\n- |Fix| :class:`linear_model.LassoLarsIC` now correctly computes AIC\n  and BIC. 
An error is now raised when `n_features > n_samples` and\n  when the noise variance is not provided.\n  :pr:`21481` by :user:`Guillaume Lemaitre <glemaitre>` and\n  :user:`Andr\u00e9s Babino <ababino>`.\n\n- |Fix| :class:`linear_model.TheilSenRegressor` now validates input parameter\n  ``max_subpopulation`` in `fit` instead of `__init__`.\n  :pr:`21767` by :user:`Maren Westermann <marenwestermann>`.\n\n- |Fix| :class:`linear_model.ElasticNetCV` now produces correct\n  warning when `l1_ratio=0`.\n  :pr:`21724` by :user:`Yar Khine Phyo <yarkhinephyo>`.\n\n- |Fix| :class:`linear_model.LogisticRegression` and\n  :class:`linear_model.LogisticRegressionCV` now set the `n_iter_` attribute\n  with a shape that respects the docstring and that is consistent with the shape\n  obtained when using the other solvers in the one-vs-rest setting. Previously,\n  it would record only the maximum of the number of iterations for each binary\n  sub-problem while now all of them are recorded. :pr:`21998` by\n  :user:`Olivier Grisel <ogrisel>`.\n\n- |Fix| The property `family` of :class:`linear_model.TweedieRegressor` is not\n  validated in `__init__` anymore. 
Instead, this (private) property is deprecated in\n  :class:`linear_model.GammaRegressor`, :class:`linear_model.PoissonRegressor` and\n  :class:`linear_model.TweedieRegressor`, and will be removed in 1.3.\n  :pr:`22548` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Fix| The `coef_` and `intercept_` attributes of\n  :class:`linear_model.LinearRegression` are now correctly computed in the presence of\n  sample weights when the input is sparse.\n  :pr:`22891` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| The `coef_` and `intercept_` attributes of :class:`linear_model.Ridge` with\n  `solver=\"sparse_cg\"` and `solver=\"lbfgs\"` are now correctly computed in the presence\n  of sample weights when the input is sparse.\n  :pr:`22899` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| :class:`linear_model.SGDRegressor` and :class:`linear_model.SGDClassifier` now\n  compute the validation error correctly when early stopping is enabled.\n  :pr:`23256` by :user:`Zhehao Liu <MaxwellLZH>`.\n\n- |API| :class:`linear_model.LassoLarsIC` now exposes `noise_variance` as\n  a parameter in order to provide an estimate of the noise variance.\n  This is particularly relevant when `n_features > n_samples` and the\n  estimator of the noise variance cannot be computed.\n  :pr:`21481` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n:mod:`sklearn.manifold`\n.......................\n\n- |Feature| :class:`manifold.Isomap` now supports radius-based\n  neighbors via the `radius` argument.\n  :pr:`19794` by :user:`Zhehao Liu <MaxwellLZH>`.\n\n- |Enhancement| :func:`manifold.spectral_embedding` and\n  :class:`manifold.SpectralEmbedding` support `np.float32` dtype and will\n  preserve this dtype.\n  :pr:`21534` by :user:`Andrew Knyazev <lobpcg>`.\n\n- |Enhancement| Adds :term:`get_feature_names_out` to :class:`manifold.Isomap`\n  and :class:`manifold.LocallyLinearEmbedding`. 
:pr:`22254` by `Thomas Fan`_.\n\n- |Enhancement| Added `metric_params` to the :class:`manifold.TSNE` constructor for\n  additional parameters of the distance metric to use in optimization.\n  :pr:`21805` by :user:`Jeanne Dionisi <jeannedionisi>` and :pr:`22685` by\n  :user:`Meekail Zain <micky774>`.\n\n- |Enhancement| :func:`manifold.trustworthiness` raises an error if\n  `n_neighbors >= n_samples \/ 2` to ensure correct support for the function.\n  :pr:`18832` by :user:`Hong Shao Yang <hongshaoyang>` and :pr:`23033` by\n  :user:`Meekail Zain <micky774>`.\n\n- |Fix| :func:`manifold.spectral_embedding` now uses Gaussian instead of\n  the previous uniform on [0, 1] random initial approximations to eigenvectors\n  in eigen_solvers `lobpcg` and `amg` to improve their numerical stability.\n  :pr:`21565` by :user:`Andrew Knyazev <lobpcg>`.\n\n:mod:`sklearn.metrics`\n......................\n\n- |Feature| :func:`metrics.r2_score` and :func:`metrics.explained_variance_score` have a\n  new `force_finite` parameter. Setting this parameter to `False` will return the\n  actual non-finite score in case of perfect predictions or constant `y_true`,\n  instead of the finite approximation (`1.0` and `0.0` respectively) currently\n  returned by default. :pr:`17266` by :user:`Sylvain Mari\u00e9 <smarie>`.\n\n- |Feature| :func:`metrics.d2_pinball_score` and :func:`metrics.d2_absolute_error_score`\n  calculate the :math:`D^2` regression score for the pinball loss and the\n  absolute error respectively. :func:`metrics.d2_absolute_error_score` is a special case\n  of :func:`metrics.d2_pinball_score` with a fixed quantile parameter `alpha=0.5`\n  for ease of use and discovery. The :math:`D^2` scores are generalizations\n  of the `r2_score` and can be interpreted as the fraction of deviance explained.\n  :pr:`22118` by :user:`Ohad Michel <ohadmich>`.\n\n- |Enhancement| :func:`metrics.top_k_accuracy_score` raises an improved error\n  message when `y_true` is binary and `y_score` is 2d. 
:pr:`22284` by `Thomas Fan`_.\n\n- |Enhancement| :func:`metrics.roc_auc_score` now supports ``average=None``\n  in the multiclass case when ``multi_class='ovr'``, which will return the score\n  per class. :pr:`19158` by :user:`Nicki Skafte <SkafteNicki>`.\n\n- |Enhancement| Adds `im_kw` parameter to\n  :meth:`metrics.ConfusionMatrixDisplay.from_estimator`,\n  :meth:`metrics.ConfusionMatrixDisplay.from_predictions`, and\n  :meth:`metrics.ConfusionMatrixDisplay.plot`. The `im_kw` parameter is passed\n  to the `matplotlib.pyplot.imshow` call when plotting the confusion matrix.\n  :pr:`20753` by `Thomas Fan`_.\n\n- |Fix| :func:`metrics.silhouette_score` now supports integer input for precomputed\n  distances. :pr:`22108` by `Thomas Fan`_.\n\n- |Fix| Fixed a bug in :func:`metrics.normalized_mutual_info_score` which could return\n  unbounded values. :pr:`22635` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n- |Fix| Fixes :func:`metrics.precision_recall_curve` and\n  :func:`metrics.average_precision_score` when true labels are all negative.\n  :pr:`19085` by :user:`Varun Agrawal <varunagrawal>`.\n\n- |API| `metrics.SCORERS` is now deprecated and will be removed in 1.3. Please\n  use :func:`metrics.get_scorer_names` to retrieve the names of all available\n  scorers. :pr:`22866` by `Adrin Jalali`_.\n\n- |API| Parameters ``sample_weight`` and ``multioutput`` of\n  :func:`metrics.mean_absolute_percentage_error` are now keyword-only, in accordance\n  with `SLEP009 <https:\/\/scikit-learn-enhancement-proposals.readthedocs.io\/en\/latest\/slep009\/proposal.html>`_.\n  A deprecation cycle was introduced.\n  :pr:`21576` by :user:`Paul-Emile Dugnat <pedugnat>`.\n\n- |API| The `\"wminkowski\"` metric of :class:`metrics.DistanceMetric` is deprecated\n  and will be removed in version 1.3. Instead the existing `\"minkowski\"` metric now takes\n  in an optional `w` parameter for weights. This deprecation aims at remaining consistent\n  with SciPy 1.8 convention. 
:pr:`21873` by :user:`Yar Khine Phyo <yarkhinephyo>`.\n\n- |API| :class:`metrics.DistanceMetric` has been moved from\n  :mod:`sklearn.neighbors` to :mod:`sklearn.metrics`.\n  Using `neighbors.DistanceMetric` for imports is still valid for\n  backward compatibility, but this alias will be removed in 1.3.\n  :pr:`21177` by :user:`Julien Jerphanion <jjerphan>`.\n\n:mod:`sklearn.mixture`\n......................\n\n- |Enhancement| :class:`mixture.GaussianMixture` and\n  :class:`mixture.BayesianGaussianMixture` can now be initialized using\n  k-means++ and random data points. :pr:`20408` by\n  :user:`Gordon Walsh <g-walsh>`, :user:`Alberto Ceballos <alceballosa>`\n  and :user:`Andres Rios <ariosramirez>`.\n\n- |Fix| Fixed a bug so that `precisions_cholesky_` is correctly initialized in\n  :class:`mixture.GaussianMixture` when `precisions_init` is provided, by taking\n  its square root.\n  :pr:`22058` by :user:`Guillaume Lemaitre <glemaitre>`.\n\n- |Fix| :class:`mixture.GaussianMixture` now normalizes `weights_` more safely,\n  preventing rounding errors when calling :meth:`mixture.GaussianMixture.sample` with\n  `n_components=1`.\n  :pr:`23034` by :user:`Meekail Zain <micky774>`.\n\n:mod:`sklearn.model_selection`\n..............................\n\n- |Enhancement| It is now possible to pass `scoring=\"matthews_corrcoef\"` to all\n  model selection tools with a `scoring` argument to use the Matthews\n  correlation coefficient (MCC).\n  :pr:`22203` by :user:`Olivier Grisel <ogrisel>`.\n\n- |Enhancement| Raise an error during cross-validation when the fits for all the\n  splits failed. 
Similarly raise an error during grid-search when the fits for\n  all the models and all the splits failed.\n  :pr:`21026` by :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\n- |Fix| :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.HalvingGridSearchCV`\n  now validate input parameters in `fit` instead of `__init__`.\n  :pr:`21880` by :user:`Mrinal Tyagi <MrinalTyagi>`.\n\n- |Fix| :func:`model_selection.learning_curve` now supports `partial_fit`\n  with regressors. :pr:`22982` by `Thomas Fan`_.\n\n:mod:`sklearn.multiclass`\n.........................\n\n- |Enhancement| :class:`multiclass.OneVsRestClassifier` now supports a `verbose`\n  parameter so progress on fitting can be seen.\n  :pr:`22508` by :user:`Chris Combs <combscCode>`.\n\n- |Fix| :meth:`multiclass.OneVsOneClassifier.predict` returns correct predictions when\n  the inner classifier only has a :term:`predict_proba`. :pr:`22604` by `Thomas Fan`_.\n\n:mod:`sklearn.neighbors`\n........................\n\n- |Enhancement| Adds :term:`get_feature_names_out` to\n  :class:`neighbors.RadiusNeighborsTransformer`,\n  :class:`neighbors.KNeighborsTransformer`\n  and :class:`neighbors.NeighborhoodComponentsAnalysis`.\n  :pr:`22212` by :user:`Meekail Zain <micky774>`.\n\n- |Fix| :class:`neighbors.KernelDensity` now validates input parameters in `fit`\n  instead of `__init__`. :pr:`21430` by :user:`Desislava Vasileva <DessyVV>` and\n  :user:`Lucy Jimenez <LucyJimenez>`.\n\n- |Fix| :meth:`neighbors.KNeighborsRegressor.predict` now works properly when\n  given an array-like input if `KNeighborsRegressor` is first constructed with a\n  callable passed to the `weights` parameter. :pr:`22687` by\n  :user:`Meekail Zain <micky774>`.\n\n:mod:`sklearn.neural_network`\n.............................\n\n- |Enhancement| :class:`neural_network.MLPClassifier` and\n  :class:`neural_network.MLPRegressor` show error\n  messages when optimizers produce non-finite parameter weights. 
:pr:`22150`\n  by :user:`Christian Ritter <chritter>` and :user:`Norbert Preining <norbusan>`.\n\n- |Enhancement| Adds :term:`get_feature_names_out` to\n  :class:`neural_network.BernoulliRBM`. :pr:`22248` by `Thomas Fan`_.\n\n:mod:`sklearn.pipeline`\n.......................\n\n- |Enhancement| Added support for \"passthrough\" in :class:`pipeline.FeatureUnion`.\n  Setting a transformer to \"passthrough\" will pass the features unchanged.\n  :pr:`20860` by :user:`Shubhraneel Pal <shubhraneel>`.\n\n- |Fix| :class:`pipeline.Pipeline` now does not validate hyper-parameters in\n  `__init__` but in `.fit()`.\n  :pr:`21888` by :user:`iofall <iofall>` and :user:`Arisa Y. <arisayosh>`.\n\n- |Fix| :class:`pipeline.FeatureUnion` does not validate hyper-parameters in\n  `__init__`. Validation is now handled in `.fit()` and `.fit_transform()`.\n  :pr:`21954` by :user:`iofall <iofall>` and :user:`Arisa Y. <arisayosh>`.\n\n- |Fix| Defines `__sklearn_is_fitted__` in :class:`pipeline.FeatureUnion` to\n  return correct result with :func:`utils.validation.check_is_fitted`.\n  :pr:`22953` by :user:`randomgeek78 <randomgeek78>`.\n\n:mod:`sklearn.preprocessing`\n............................\n\n- |Feature| :class:`preprocessing.OneHotEncoder` now supports grouping\n  infrequent categories into a single feature. Grouping infrequent categories\n  is enabled by specifying how to select infrequent categories with\n  `min_frequency` or `max_categories`. :pr:`16018` by `Thomas Fan`_.\n\n- |Enhancement| Adds a `subsample` parameter to :class:`preprocessing.KBinsDiscretizer`.\n  This allows specifying a maximum number of samples to be used while fitting\n  the model. The option is only available when `strategy` is set to `quantile`.\n  :pr:`21445` by :user:`Felipe Bidu <fbidu>` and :user:`Amanda Dsouza <amy12xx>`.\n\n- |Enhancement| Adds `encoded_missing_value` to :class:`preprocessing.OrdinalEncoder`\n  to configure the encoded value for missing data. 
:pr:`21988` by `Thomas Fan`_.\n\n- |Enhancement| Added the `get_feature_names_out` method and a new parameter\n  `feature_names_out` to :class:`preprocessing.FunctionTransformer`. You can set\n  `feature_names_out` to 'one-to-one' to use the input feature names as the\n  output feature names, or you can set it to a callable that returns the output\n  feature names. This is especially useful when the transformer changes the\n  number of features. If `feature_names_out` is None (which is the default),\n  then `get_feature_names_out` is not defined.\n  :pr:`21569` by :user:`Aur\u00e9lien Geron <ageron>`.\n\n- |Enhancement| Adds :term:`get_feature_names_out` to\n  :class:`preprocessing.Normalizer`,\n  :class:`preprocessing.KernelCenterer`,\n  :class:`preprocessing.OrdinalEncoder`, and\n  :class:`preprocessing.Binarizer`. :pr:`21079` by `Thomas Fan`_.\n\n- |Fix| :class:`preprocessing.PowerTransformer` with `method='yeo-johnson'`\n  better supports significantly non-Gaussian data when searching for an optimal\n  lambda. :pr:`20653` by `Thomas Fan`_.\n\n- |Fix| :class:`preprocessing.LabelBinarizer` now validates input parameters in\n  `fit` instead of `__init__`.\n  :pr:`21434` by :user:`Krum Arnaudov <krumeto>`.\n\n- |Fix| :class:`preprocessing.FunctionTransformer` with `check_inverse=True`\n  now provides an informative error message when input has mixed dtypes. :pr:`19916` by\n  :user:`Zhehao Liu <MaxwellLZH>`.\n\n- |Fix| :class:`preprocessing.KBinsDiscretizer` now handles bin edges more consistently.\n  :pr:`14975` by `Andreas M\u00fcller`_ and :pr:`22526` by :user:`Meekail Zain <micky774>`.\n\n- |Fix| Adds :meth:`preprocessing.KBinsDiscretizer.get_feature_names_out` support when\n  `encode=\"ordinal\"`. 
:pr:`22735` by `Thomas Fan`_.\n\n:mod:`sklearn.random_projection`\n................................\n\n- |Enhancement| Adds an `inverse_transform` method and a `compute_inverse_transform`\n  parameter to :class:`random_projection.GaussianRandomProjection` and\n  :class:`random_projection.SparseRandomProjection`. When the parameter is set\n  to True, the pseudo-inverse of the components is computed during `fit` and stored as\n  `inverse_components_`. :pr:`21701` by :user:`Aur\u00e9lien Geron <ageron>`.\n\n- |Enhancement| :class:`random_projection.SparseRandomProjection` and\n  :class:`random_projection.GaussianRandomProjection` preserve dtype for\n  `numpy.float32`. :pr:`22114` by :user:`Takeshi Oura <takoika>`.\n\n- |Enhancement| Adds :term:`get_feature_names_out` to all transformers in the\n  :mod:`sklearn.random_projection` module:\n  :class:`random_projection.GaussianRandomProjection` and\n  :class:`random_projection.SparseRandomProjection`. :pr:`21330` by\n  :user:`Lo\u00efc Est\u00e8ve <lesteve>`.\n\n:mod:`sklearn.svm`\n..................\n\n- |Enhancement| :class:`svm.OneClassSVM`, :class:`svm.NuSVC`,\n  :class:`svm.NuSVR`, :class:`svm.SVC` and :class:`svm.SVR` now expose\n  `n_iter_`, the number of iterations of the libsvm optimization routine.\n  :pr:`21408` by :user:`Juan Mart\u00edn Loyola <jmloyola>`.\n\n- |Enhancement| :class:`svm.SVR`, :class:`svm.SVC`, :class:`svm.NuSVR`,\n  :class:`svm.OneClassSVM` and :class:`svm.NuSVC` now raise an error\n  when the dual-gap estimation produces non-finite parameter weights.\n  :pr:`22149` by :user:`Christian Ritter <chritter>` and\n  :user:`Norbert Preining <norbusan>`.\n\n- |Fix| :class:`svm.NuSVC`, :class:`svm.NuSVR`, :class:`svm.SVC`,\n  :class:`svm.SVR` and :class:`svm.OneClassSVM` now validate input\n  parameters in `fit` instead of `__init__`.\n  :pr:`21436` by :user:`Haidar Almubarak <Haidar13>`.\n\n:mod:`sklearn.tree`\n...................\n\n- |Enhancement| :class:`tree.DecisionTreeClassifier` and\n  
:class:`tree.ExtraTreeClassifier` have the new `criterion=\"log_loss\"`, which is\n  equivalent to `criterion=\"entropy\"`.\n  :pr:`23047` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |Fix| Fix a bug in the Poisson splitting criterion for\n  :class:`tree.DecisionTreeRegressor`.\n  :pr:`22191` by :user:`Christian Lorentzen <lorentzenchr>`.\n\n- |API| Changed the default value of `max_features` to 1.0 for\n  :class:`tree.ExtraTreeRegressor` and to `\"sqrt\"` for\n  :class:`tree.ExtraTreeClassifier`, which will not change the fit result. The original\n  default value `\"auto\"` has been deprecated and will be removed in version 1.3.\n  Setting `max_features` to `\"auto\"` is also deprecated\n  for :class:`tree.DecisionTreeClassifier` and :class:`tree.DecisionTreeRegressor`.\n  :pr:`22476` by :user:`Zhehao Liu <MaxwellLZH>`.\n\n:mod:`sklearn.utils`\n....................\n\n- |Enhancement| :func:`utils.check_array` and\n  :func:`utils.multiclass.type_of_target` now accept an `input_name` parameter to make\n  the error message more informative when passed invalid input data (e.g. with NaN or\n  infinite values).\n  :pr:`21219` by :user:`Olivier Grisel <ogrisel>`.\n\n- |Enhancement| :func:`utils.check_array` returns a float\n  ndarray with `np.nan` when passed a `Float32` or `Float64` pandas extension\n  array with `pd.NA`. :pr:`21278` by `Thomas Fan`_.\n\n- |Enhancement| :func:`utils.estimator_html_repr` shows a more helpful error\n  message when running in a jupyter notebook that is not trusted. :pr:`21316`\n  by `Thomas Fan`_.\n\n- |Enhancement| :func:`utils.estimator_html_repr` displays an arrow on the top\n  left corner of the HTML representation to show how the elements are\n  clickable. :pr:`21298` by `Thomas Fan`_.\n\n- |Enhancement| :func:`utils.check_array` with `dtype=None` returns numeric\n  arrays when passed in a pandas DataFrame with mixed dtypes. 
`dtype=\"numeric\"`\n  will also better infer the dtype when the DataFrame has mixed dtypes.\n  :pr:`22237` by `Thomas Fan`_.\n\n- |Enhancement| :func:`utils.check_scalar` now has better messages\n  when displaying the type. :pr:`22218` by `Thomas Fan`_.\n\n- |Fix| Changes the error message of the `ValidationError` raised by\n  :func:`utils.check_X_y` when y is None so that it is compatible\n  with the `check_requires_y_none` estimator check. :pr:`22578` by\n  :user:`Claudio Salvatore Arcidiacono <ClaudioSalvatoreArcidiacono>`.\n\n- |Fix| :func:`utils.class_weight.compute_class_weight` now only requires that\n  all classes in `y` have a weight in `class_weight`. An error is still raised\n  when a class is present in `y` but not in `class_weight`. :pr:`22595` by\n  `Thomas Fan`_.\n\n- |Fix| :func:`utils.estimator_html_repr` has an improved visualization for nested\n  meta-estimators. :pr:`21310` by `Thomas Fan`_.\n\n- |Fix| :func:`utils.check_scalar` raises an error when\n  `include_boundaries={\"left\", \"right\"}` and the boundaries are not set.\n  :pr:`22027` by :user:`Marie Lanternier <mlant>`.\n\n- |Fix| :func:`utils.metaestimators.available_if` correctly returns a bound\n  method that can be pickled. :pr:`23077` by `Thomas Fan`_.\n\n- |API| :func:`utils.estimator_checks.check_estimator`'s argument is now called\n  `estimator` (previous name was `Estimator`). :pr:`22188` by\n  :user:`Mathurin Massias <mathurinm>`.\n\n- |API| ``utils.metaestimators.if_delegate_has_method`` is deprecated and will be\n  removed in version 1.3. Use :func:`utils.metaestimators.available_if` instead.\n  :pr:`22830` by :user:`J\u00e9r\u00e9mie du Boisberranger <jeremiedbb>`.\n\n.. 
rubric:: Code and documentation contributors\n\nThanks to everyone who has contributed to the maintenance and improvement of\nthe project since version 1.0, including:\n\n2357juan, Abhishek Gupta, adamgonzo, Adam Li, adijohar, Aditya Kumawat, Aditya\nRaghuwanshi, Aditya Singh, Adrian Trujillo Duron, Adrin Jalali, ahmadjubair33,\nAJ Druck, aj-white, Alan Peixinho, Alberto Mario Ceballos-Arroyo, Alek\nLefebvre, Alex, Alexandr, Alexandre Gramfort, alexanmv, almeidayoel, Amanda\nDsouza, Aman Sharma, Amar pratap singh, Amit, amrcode, Andr\u00e1s Simon, Andreas\nGrivas, Andreas Mueller, Andrew Knyazev, Andriy, Angus L'Herrou, Ankit Sharma,\nAnne Ducout, Arisa, Arth, arthurmello, Arturo Amor, ArturoAmor, Atharva Patil,\naufarkari, Aur\u00e9lien Geron, avm19, Ayan Bag, baam, Bardiya Ak, Behrouz B,\nBen3940, Benjamin Bossan, Bharat Raghunathan, Bijil Subhash, bmreiniger,\nBrandon Truth, Brenden Kadota, Brian Sun, cdrig, Chalmer Lowe, Chiara Marmo,\nChitteti Srinath Reddy, Chloe-Agathe Azencott, Christian Lorentzen, Christian\nRitter, christopherlim98, Christoph T. Weidemann, Christos Aridas, Claudio\nSalvatore Arcidiacono, combscCode, Daniela Fernandes, darioka, Darren Nguyen,\nDave Eargle, David Gilbertson, David Poznik, Dea Mar\u00eda L\u00e9on, Dennis Osei,\nDessyVV, Dev514, Dimitri Papadopoulos Orfanos, Diwakar Gupta, Dr. 
Felix M.\nRiese, drskd, Emiko Sano, Emmanouil Gionanidis, EricEllwanger, Erich Schubert,\nEric Larson, Eric Ndirangu, ErmolaevPA, Estefania Barreto-Ojeda, eyast, Fatima\nGASMI, Federico Luna, Felix Glushchenkov, fkaren27, Fortune Uwha, FPGAwesome,\nfrancoisgoupil, Frans Larsson, ftorres16, Gabor Berei, Gabor Kertesz, Gabriel\nStefanini Vicente, Gabriel S Vicente, Gael Varoquaux, GAURAV CHOUDHARY,\nGauthier I, genvalen, Geoffrey-Paris, Giancarlo Pablo, glennfrutiz, gpapadok,\nGuillaume Lemaitre, Guillermo Tom\u00e1s Fern\u00e1ndez Mart\u00edn, Gustavo Oliveira, Haidar\nAlmubarak, Hannah Bohle, Hansin Ahuja, Haoyin Xu, Haya, Helder Geovane Gomes de\nLima, henrymooresc, Hideaki Imamura, Himanshu Kumar, Hind-M, hmasdev, hvassard,\ni-aki-y, iasoon, Inclusive Coding Bot, Ingela, iofall, Ishan Kumar, Jack Liu,\nJake Cowton, jalexand3r, J Alexander, Jauhar, Jaya Surya Kommireddy, Jay\nStanley, Jeff Hale, je-kr, JElfner, Jenny Vo, J\u00e9r\u00e9mie du Boisberranger, Jihane,\nJirka Borovec, Joel Nothman, Jon Haitz Legarreta Gorro\u00f1o, Jordan Silke, Jorge\nCipri\u00e1n, Jorge Loayza, Joseph Chazalon, Joseph Schwartz-Messing, Jovan\nStojanovic, JSchuerz, Juan Carlos Alfaro Jim\u00e9nez, Juan Martin Loyola, Julien\nJerphanion, katotten, Kaushik Roy Chowdhury, Ken4git, Kenneth Prabakaran,\nkernc, Kevin Doucet, KimAYoung, Koushik Joshi, Kranthi Sedamaki, krishna kumar,\nkrumetoft, lesnee, Lisa Casino, Logan Thomas, Loic Esteve, Louis Wagner,\nLucieClair, Lucy Liu, Luiz Eduardo Amaral, Magali, MaggieChege, Mai,\nmandjevant, Mandy Gu, Manimaran, MarcoM, Marco Wurps, Maren Westermann, Maria\nBoerner, MarieS-WiMLDS, Martel Corentin, martin-kokos, mathurinm, Mat\u00edas,\nmatjansen, Matteo Francia, Maxwell, Meekail Zain, Megabyte, Mehrdad\nMoradizadeh, melemo2, Michael I Chen, michalkrawczyk, Micky774, milana2,\nmillawell, Ming-Yang Ho, Mitzi, miwojc, Mizuki, mlant, Mohamed Haseeb, Mohit\nSharma, Moonkyung94, mpoemsl, MrinalTyagi, Mr. 
Leu, msabatier, murata-yu, N,\nNadirhan \u015eahin, Naipawat Poolsawat, NartayXD, nastegiano, nathansquan,\nnat-salt, Nicki Skafte Detlefsen, Nicolas Hug, Niket Jain, Nikhil Suresh,\nNikita Titov, Nikolay Kondratyev, Ohad Michel, Oleksandr Husak, Olivier Grisel,\npartev, Patrick Ferreira, Paul, pelennor, PierreAttard, Piet Br\u00f6mmel, Pieter\nGijsbers, Pinky, poloso, Pramod Anantharam, puhuk, Purna Chandra Mansingh,\nQuadV, Rahil Parikh, Randall Boyes, randomgeek78, Raz Hoshia, Reshama Shaikh,\nRicardo Ferreira, Richard Taylor, Rileran, Rishabh, Robin Thibaut, Rocco Meli,\nRoman Feldbauer, Roman Yurchak, Ross Barnowski, rsnegrin, Sachin Yadav,\nsakinaOuisrani, Sam Adam Day, Sanjay Marreddi, Sebastian Pujalte, SEELE, SELEE,\nSeyedsaman (Sam) Emami, ShanDeng123, Shao Yang Hong, sharmadharmpal,\nshaymerNaturalint, Shuangchi He, Shubhraneel Pal, siavrez, slishak, Smile,\nspikebh, sply88, Srinath Kailasa, St\u00e9phane Collot, Sultan Orazbayev, Sumit\nSaha, Sven Eschlbeck, Sven Stehle, Swapnil Jha, Sylvain Mari\u00e9, Takeshi Oura,\nTamires Santana, Tenavi, teunpe, Theis Ferr\u00e9 Hjortkj\u00e6r, Thiruvenkadam, Thomas\nJ. 
Fan, t-jakubek, toastedyeast, Tom Dupr\u00e9 la Tour, Tom McTiernan, TONY GEORGE,\nTyler Martin, Tyler Reddy, Udit Gupta, Ugo Marchand, Varun Agrawal,\nVenkatachalam N, Vera Komeyer, victoirelouis, Vikas Vishwakarma, Vikrant\nkhedkar, Vladimir Chernyy, Vladimir Kim, WeijiaDu, Xiao Yuan, Yar Khine Phyo,\nYing Xiong, yiyangq, Yosshi999, Yuki Koyama, Zach Deane-Mayer, Zeel B Patel,\nzempleni, zhenfisher, \u8d75\u4e30 (Zhao Feng)
 when fit with    perplexity  n samples  to ensure mathematical correctness of the algorithm     pr  10805  by  user  Mathias Andersen  MrMathias   and    pr  23471  by  user  Meekail Zain  micky774     Changelog               Fix  A default HTML representation is shown for meta estimators with invalid   parameters   pr  24015  by  Thomas Fan        Fix  Add support for F contiguous arrays for estimators and functions whose back end   have been changed in 1 1     pr  23990  by  user  Julien Jerphanion  jjerphan        Fix  Wheels are now available for MacOS 10 9 and greater   pr  23833  by    Thomas Fan      mod  sklearn base                          Fix  The  get params  method of the  class  base BaseEstimator  class now supports   estimators with  type  type params that have the  get params  method     pr  24017  by  user  Henry Sorsky  hsorsky      mod  sklearn cluster                             Fix  Fixed a bug in  class  cluster Birch  that could trigger an error when splitting   a node if there are duplicates in the dataset     pr  23395  by  user  J r mie du Boisberranger  jeremiedbb      mod  sklearn feature selection                                       Fix   class  feature selection SelectFromModel  defaults to selection   threshold 1e 5 when the estimator is either  class  linear model ElasticNet    or  class  linear model ElasticNetCV  with  l1 ratio  equals 1 or    class  linear model LassoCV      pr  23636  by  user  Hao Chun Chang  haochunchang      mod  sklearn impute                            Fix   class  impute SimpleImputer  uses the dtype seen in  fit  for    transform  when the dtype is object   pr  22063  by  Thomas Fan      mod  sklearn linear model                                  Fix  Use dtype aware tolerances for the validation of gram matrices  passed by users   or precomputed    pr  22059  by  user  Malte S  Kurz  MalteKurz        Fix  Fixed an error in  class  linear model LogisticRegression  with    solver  newton cg     fit 
intercept True   and a single feature   pr  23608    by  Tom Dupre la Tour      mod  sklearn manifold                              Fix   class  manifold TSNE  now throws a  ValueError  when fit with    perplexity  n samples  to ensure mathematical correctness of the algorithm     pr  10805  by  user  Mathias Andersen  MrMathias   and    pr  23471  by  user  Meekail Zain  micky774      mod  sklearn metrics                             Fix  Fixed error message of  class  metrics coverage error  for 1D array input     pr  23548  by  user  Hao Chun Chang  haochunchang      mod  sklearn preprocessing                                   Fix   meth  preprocessing OrdinalEncoder inverse transform  correctly handles   use cases where  unknown value  or  encoded missing value  is  nan    pr  24087    by  Thomas Fan      mod  sklearn tree                          Fix  Fixed invalid memory access bug during fit in    class  tree DecisionTreeRegressor  and  class  tree DecisionTreeClassifier      pr  23273  by  Thomas Fan         changes 1 1 1   Version 1 1 1                  May 2022    Changelog               Enhancement  The error message is improved when importing    class  model selection HalvingGridSearchCV      class  model selection HalvingRandomSearchCV   or    class  impute IterativeImputer  without importing the experimental flag     pr  23194  by  Thomas Fan        Enhancement  Added an extension in doc conf py to automatically generate   the list of estimators that handle NaN values     pr  23198  by  user  Lise Kleiber  lisekleiber     user  Zhehao Liu  MaxwellLZH     and  user  Chiara Marmo  cmarmo      mod  sklearn datasets                              Fix  Avoid timeouts in  func  datasets fetch openml  by not passing a    timeout  argument   pr  23358  by  user  Lo c Est ve  lesteve      mod  sklearn decomposition                                   Fix  Avoid spurious warning in  class  decomposition IncrementalPCA  when    n samples    n components    pr  23264  
  by :user:`Lucy Liu <lucyleeow>`.

:mod:`sklearn.feature_selection`
................................

- |Fix| The `partial_fit` method of :class:`feature_selection.SelectFromModel`
  now conducts validation for `max_features` and `feature_names_in` parameters.
  :pr:`23299` by :user:`Long Bao <lorentzbao>`.

:mod:`sklearn.metrics`
......................

- |Fix| Fixes :func:`metrics.precision_recall_curve` to compute
  precision-recall at 100% recall. The Precision-Recall curve now displays the
  last point corresponding to a classifier that always predicts the positive
  class: recall=100% and precision=class balance. :pr:`23214` by
  :user:`Stéphane Collot <stephanecollot>` and :user:`Max Baak <mbaak>`.

:mod:`sklearn.preprocessing`
............................

- |Fix| :class:`preprocessing.PolynomialFeatures` with `degree` equal to 0
  will raise an error when `include_bias` is set to False, and outputs a
  single constant array when `include_bias` is set to True. :pr:`23370` by
  :user:`Zhehao Liu <MaxwellLZH>`.

:mod:`sklearn.tree`
...................

- |Fix| Fixes performance regression with low cardinality features for
  :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`,
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor`. :pr:`23410` by
  :user:`Loïc Estève <lesteve>`.

:mod:`sklearn.utils`
....................

- |Fix| :func:`utils.class_weight.compute_sample_weight` now works with sparse
  `y`. :pr:`23115` by :user:`kernc <kernc>`.

.. _changes_1_1:

Version 1.1.0
=============

**May 2022**

Minimal dependencies
--------------------

Version 1.1.0 of scikit-learn requires python 3.8+, numpy 1.17.3+ and
scipy 1.3.2+. Optional minimal dependency is matplotlib 3.1.2+.

Changed models
--------------

The following estimators and functions, when fit with the same data and
parameters, may produce
different models from the previous version. This often occurs due to changes
in the modelling logic (bug fixes or enhancements), or in random sampling
procedures.

- |Efficiency| :class:`cluster.KMeans` now defaults to `algorithm="lloyd"`
  instead of `algorithm="auto"`, which was equivalent to `algorithm="elkan"`.
  Lloyd's algorithm and Elkan's algorithm converge to the same solution, up to
  numerical rounding errors, but in general Lloyd's algorithm uses much less
  memory, and it is often faster.

- |Efficiency| Fitting :class:`tree.DecisionTreeClassifier`,
  :class:`tree.DecisionTreeRegressor`,
  :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor` is on average 15% faster than in
  previous versions thanks to a new sort algorithm to find the best split.
  Models might be different because of a different handling of splits with
  tied criterion values: both the old and the new sorting algorithm are
  unstable sorting algorithms. :pr:`22868` by `Thomas Fan`_.

- |Fix| The eigenvectors initialization for :class:`cluster.SpectralClustering`
  and :class:`manifold.SpectralEmbedding` now samples from a Gaussian when
  using the `'amg'` or `'lobpcg'` solver. This change improves numerical
  stability of the solver, but may result in a different model.

- |Fix| :func:`feature_selection.f_regression` and
  :func:`feature_selection.r_regression` will now return finite scores by
  default instead of `np.nan` and `np.inf` for some corner cases. You can use
  `force_finite=False` if you really want to get non-finite values and keep
  the old behavior.

- |Fix| Pandas DataFrames with all non-string columns such as a MultiIndex no
  longer warn when passed into an Estimator. Estimators will continue to
  ignore the column names in DataFrames with non-string columns. For
  `feature_names_in_` to be defined, columns must be all strings.
  :pr:`22410` by `Thomas Fan`_.

- |Fix| :class:`preprocessing.KBinsDiscretizer` changed handling of bin edges
  slightly, which might result in a different encoding with the same data.

- |Fix| :func:`calibration.calibration_curve` changed handling of bin edges
  slightly, which might result in a different output curve given the same data.

- |Fix| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now uses the
  correct variance-scaling coefficient which may result in different model
  behavior.

- |Fix| :meth:`feature_selection.SelectFromModel.fit` and
  :meth:`feature_selection.SelectFromModel.partial_fit` can now be called with
  `prefit=True`. `estimators_` will be a deep copy of `estimator` when
  `prefit=True`. :pr:`23271` by :user:`Guillaume Lemaitre <glemaitre>`.

Changelog
---------

..
    Entries should be grouped by module (in alphabetic order) and prefixed with
    one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
    |Fix| or |API| (see whats_new.rst for descriptions).
    Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|).
    Changes not specific to a module should be listed under *Multiple Modules*
    or *Miscellaneous*.
    Entries should end with:
    :pr:`123456` by :user:`Joe Bloggs <joeongithub>`.
    where 123456 is the *pull request* number, not the issue number.

- |Efficiency| Low-level routines for reductions on pairwise distances for
  dense float64 datasets have been refactored. The following functions and
  estimators now benefit from improved performances in terms of hardware
  scalability and speed-ups:

  - :func:`sklearn.metrics.pairwise_distances_argmin`
  - :func:`sklearn.metrics.pairwise_distances_argmin_min`
  - :class:`sklearn.cluster.AffinityPropagation`
  - :class:`sklearn.cluster.Birch`
  - :class:`sklearn.cluster.MeanShift`
  - :class:`sklearn.cluster.OPTICS`
  - :class:`sklearn.cluster.SpectralClustering`
  - :func:`sklearn.feature_selection.mutual_info_regression`
  - :class:`sklearn.neighbors.KNeighborsClassifier`
  - :class:`sklearn.neighbors.KNeighborsRegressor`
  - :class:`sklearn.neighbors.RadiusNeighborsClassifier`
  - :class:`sklearn.neighbors.RadiusNeighborsRegressor`
  - :class:`sklearn.neighbors.LocalOutlierFactor`
  - :class:`sklearn.neighbors.NearestNeighbors`
  - :class:`sklearn.manifold.Isomap`
  - :class:`sklearn.manifold.LocallyLinearEmbedding`
  - :class:`sklearn.manifold.TSNE`
  - :func:`sklearn.manifold.trustworthiness`
  - :class:`sklearn.semi_supervised.LabelPropagation`
  - :class:`sklearn.semi_supervised.LabelSpreading`

  For instance :meth:`sklearn.neighbors.NearestNeighbors.kneighbors` and
  :meth:`sklearn.neighbors.NearestNeighbors.radius_neighbors` can respectively
  be up to ×20 and ×5 faster than previously on a laptop.

  Moreover, implementations of those two algorithms are now suitable for
  machines with many cores, making them usable for datasets consisting of
  millions of samples.

  :pr:`21987`, :pr:`22064`, :pr:`22065`, :pr:`22288` and :pr:`22320` by
  :user:`Julien Jerphanion <jjerphan>`.

- |Enhancement| All scikit-learn models now generate a more informative error
  message when some input contains unexpected `NaN` or infinite values. In
  particular the message contains the input name ("X", "y" or
  "sample_weight") and if an unexpected `NaN` value is found in `X`, the error
  message suggests potential solutions. :pr:`21219` by
  :user:`Olivier Grisel <ogrisel>`.

- |Enhancement| All scikit-learn models now generate a more informative error
  message when setting invalid hyper-parameters with `set_params`.
  :pr:`21542` by :user:`Olivier Grisel <ogrisel>`.

- |Enhancement| Removes random unique identifiers in the HTML representation.
  With this change, jupyter notebooks are reproducible as long as the cells
  are run in the same order. :pr:`23098` by `Thomas Fan`_.

- |Fix| Estimators with the `non_deterministic` tag set to `True` will skip
  both `check_methods_sample_order_invariance` and
  `check_methods_subset_invariance` tests.
  :pr:`22318` by :user:`Zhehao Liu <MaxwellLZH>`.

- |API| The option for using the log loss (aka binomial or multinomial
  deviance) via the `loss` parameters was made more consistent. The preferred
  way is by setting the value to `"log_loss"`. Old option names are still
  valid and produce the same models, but are deprecated and will be removed in
  version 1.3.

  - For :class:`ensemble.GradientBoostingClassifier`, the `loss` parameter
    name "deviance" is deprecated in favor of the new name "log_loss", which
    is now the default.
    :pr:`23036` by :user:`Christian Lorentzen <lorentzenchr>`.

  - For :class:`ensemble.HistGradientBoostingClassifier`, the `loss` parameter
    names "auto", "binary_crossentropy" and "categorical_crossentropy" are
    deprecated in favor of the new name "log_loss", which is now the default.
    :pr:`23040` by :user:`Christian Lorentzen <lorentzenchr>`.

  - For :class:`linear_model.SGDClassifier`, the `loss` parameter name "log"
    is deprecated in favor of the new name "log_loss".
    :pr:`23046` by :user:`Christian Lorentzen <lorentzenchr>`.

- |API| Rich html representation of estimators is now enabled by default in
  Jupyter notebooks. It can be deactivated by setting `display='text'` in
  :func:`sklearn.set_config`. :pr:`22856` by
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

:mod:`sklearn.calibration`
..........................

- |Enhancement| :func:`calibration.calibration_curve` accepts a parameter
  `pos_label` to specify the positive class label.
  :pr:`21032` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Enhancement| :meth:`calibration.CalibratedClassifierCV.fit` now supports
  passing `fit_params`, which are routed to the `base_estimator`.
  :pr:`18170` by :user:`Benjamin Bossan <BenjaminBossan>`.

- |Enhancement| :class:`calibration.CalibrationDisplay` accepts a parameter
  `pos_label` to add this information to the plot.
  :pr:`21038` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Fix| :func:`calibration.calibration_curve` handles bin edges more
  consistently now. :pr:`14975` by `Andreas Müller`_ and :pr:`22526` by
  :user:`Meekail Zain <micky774>`.

- |API| :func:`calibration.calibration_curve`'s `normalize` parameter is now
  deprecated and will be removed in version 1.3. It is recommended that a
  proper probability (i.e. a classifier's :term:`predict_proba` positive
  class) is used for `y_prob`. :pr:`23095` by :user:`Jordan Silke <jsilke>`.

:mod:`sklearn.cluster`
......................

- |MajorFeature| :class:`cluster.BisectingKMeans` introducing Bisecting
  K-Means algorithm. :pr:`20031` by :user:`Michal Krawczyk <michalkrawczyk>`,
  :user:`Tom Dupre la Tour <TomDLT>` and
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Enhancement| :class:`cluster.SpectralClustering` and
  :func:`cluster.spectral_clustering` now include the new `'cluster_qr'`
  method that clusters samples in the embedding space as an alternative to the
  existing `'kmeans'` and `'discrete'` methods.
  See :func:`cluster.spectral_clustering` for more details.
  :pr:`21148` by :user:`Andrew Knyazev <lobpcg>`.

- |Enhancement| Adds :term:`get_feature_names_out` to :class:`cluster.Birch`,
  :class:`cluster.FeatureAgglomeration`, :class:`cluster.KMeans` and
  :class:`cluster.MiniBatchKMeans`. :pr:`22255` by `Thomas Fan`_.

- |Enhancement| :class:`cluster.SpectralClustering` now raises consistent
  error messages when passed invalid values for `n_clusters`, `n_init`,
  `gamma`, `n_neighbors`, `eigen_tol` or `degree`.
  :pr:`21881` by :user:`Hugo Vassard <hvassard>`.

- |Enhancement| :class:`cluster.AffinityPropagation` now returns cluster
  centers and labels if they exist, even if the model has not fully converged.
  When returning these potentially-degenerate cluster centers and labels, a
  new warning message is shown. If no cluster centers were constructed, then
  the cluster centers remain an empty list with labels set to `-1`
  and the original warning message is shown.
  :pr:`22217` by :user:`Meekail Zain <micky774>`.

- |Efficiency| In :class:`cluster.KMeans`, the default `algorithm` is now
  `"lloyd"` which is the full classical EM-style algorithm. Both `"auto"` and
  `"full"` are deprecated and will be removed in version 1.3. They are now
  aliases for `"lloyd"`. The previous default was `"auto"`, which relied on
  Elkan's algorithm. Lloyd's algorithm uses less memory than Elkan's, it is
  faster on many datasets, and its results are identical, hence the change.
  :pr:`21735` by :user:`Aurélien Geron <ageron>`.

- |Fix| :class:`cluster.KMeans`'s `init` parameter now properly supports
  array-like input and NumPy string scalars. :pr:`22154` by `Thomas Fan`_.

:mod:`sklearn.compose`
......................

- |Fix| :class:`compose.ColumnTransformer` now removes validation errors from
  `__init__` and `set_params` methods. :pr:`22537` by :user:`iofall <iofall>`
  and :user:`Arisa Y. <arisayosh>`.

- |Fix| :term:`get_feature_names_out` functionality in
  :class:`compose.ColumnTransformer` was broken when columns were specified
  using `slice`. This is fixed in :pr:`22775` and :pr:`22913` by
  :user:`randomgeek78 <randomgeek78>`.

:mod:`sklearn.covariance`
.........................

- |Fix| :class:`covariance.GraphicalLassoCV` now accepts NumPy array for the
  parameter `alphas`. :pr:`22493` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.cross_decomposition`
..................................

- |Enhancement| The `inverse_transform` method of
  :class:`cross_decomposition.PLSRegression`,
  :class:`cross_decomposition.PLSCanonical` and
  :class:`cross_decomposition.CCA` now allows reconstruction of a `X` target
  when a `Y` parameter is given. :pr:`19680` by
  :user:`Robin Thibaut <robinthibaut>`.

- |Enhancement| Adds :term:`get_feature_names_out` to all transformers in the
  :mod:`sklearn.cross_decomposition` module: :class:`cross_decomposition.CCA`,
  :class:`cross_decomposition.PLSSVD`,
  :class:`cross_decomposition.PLSRegression` and
  :class:`cross_decomposition.PLSCanonical`. :pr:`22119` by `Thomas Fan`_.

- |Fix| The shape of the :term:`coef_` attribute of
  :class:`cross_decomposition.CCA`, :class:`cross_decomposition.PLSCanonical`
  and :class:`cross_decomposition.PLSRegression` will change in version 1.3,
  from `(n_features, n_targets)` to `(n_targets, n_features)`, to be
  consistent with other linear models and to make it work with interfaces
  expecting a specific shape for `coef_` (e.g. :class:`feature_selection.RFE`).
  :pr:`22016` by :user:`Guillaume Lemaitre <glemaitre>`.

- |API| Add the fitted attribute `intercept_` to
  :class:`cross_decomposition.PLSCanonical`,
  :class:`cross_decomposition.PLSRegression` and
  :class:`cross_decomposition.CCA`. The method `predict` is indeed equivalent
  to `Y = X @ coef_ + intercept_`.
  :pr:`22015` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.datasets`
.......................

- |Feature| :func:`datasets.load_files` now accepts an ignore list and an
  allow list based on file extensions. :pr:`19747` by
  :user:`Tony Attalla <tonyattalla>` and :pr:`22498` by
  :user:`Meekail Zain <micky774>`.

- |Enhancement| :func:`datasets.make_swiss_roll` now supports the optional
  argument `hole`; when set to True, it returns the swiss-hole dataset.
  :pr:`21482` by :user:`Sebastian Pujalte <pujaltes>`.

- |Enhancement| :func:`datasets.make_blobs` no longer copies data during the
  generation process, therefore uses less memory.
  :pr:`22412` by :user:`Zhehao Liu <MaxwellLZH>`.

- |Enhancement| :func:`datasets.load_diabetes` now accepts the parameter
  `scaled`, to allow loading unscaled data. The scaled version of this dataset
  is now computed from the unscaled data, and can produce slightly different
  results than in previous versions (within a 1e-4 absolute tolerance).
  :pr:`16605` by :user:`Mandy Gu <happilyeverafter95>`.

- |Enhancement| :func:`datasets.fetch_openml` now has two optional arguments
  `n_retries` and `delay`. By default, :func:`datasets.fetch_openml` will
  retry 3 times in case of a network failure with a delay between each try.
  :pr:`21901` by :user:`Rileran <rileran>`.

- |Fix| :func:`datasets.fetch_covtype` is now concurrent-safe: data is
  downloaded to a temporary directory before being moved to the data
  directory. :pr:`23113` by :user:`Ilion Beyst <iasoon>`.

- |API| :func:`datasets.make_sparse_coded_signal` now accepts a parameter
  `data_transposed` to explicitly specify the shape of matrix `X`. The default
  behavior `True` is to return a transposed matrix `X` corresponding to a
  `(n_features, n_samples)` shape. The default value will change to `False` in
  version 1.3. :pr:`21425` by :user:`Gabriel Stefanini Vicente <g4brielvs>`.

:mod:`sklearn.decomposition`
............................

- |MajorFeature| Added a new estimator :class:`decomposition.MiniBatchNMF`. It
  is a faster but less accurate version of non-negative matrix factorization,
  better suited for large datasets. :pr:`16948` by
  :user:`Chiara Marmo <cmarmo>`, :user:`Patricio Cerda <pcerda>` and
  :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Enhancement| :func:`decomposition.dict_learning`,
  :func:`decomposition.dict_learning_online` and
  :func:`decomposition.sparse_encode` preserve dtype for `numpy.float32`.
  :class:`decomposition.DictionaryLearning`,
  :class:`decomposition.MiniBatchDictionaryLearning` and
  :class:`decomposition.SparseCoder` preserve dtype for `numpy.float32`.
  :pr:`22002` by :user:`Takeshi Oura <takoika>`.

- |Enhancement| :class:`decomposition.PCA` exposes a parameter `n_oversamples`
  to tune :func:`utils.extmath.randomized_svd` and get accurate results when
  the number of features is large.
  :pr:`21109` by :user:`Smile <x-shadow-man>`.

- |Enhancement| The :class:`decomposition.MiniBatchDictionaryLearning` and
  :func:`decomposition.dict_learning_online` have been refactored and now
  have a stopping criterion based on a small change of the dictionary or
  objective function, controlled by the new `max_iter`, `tol` and
  `max_no_improvement` parameters. In addition, some of their parameters and
  attributes are deprecated.

  - the `n_iter` parameter of both is deprecated. Use `max_iter` instead.
  - the `iter_offset`, `return_inner_stats`, `inner_stats` and `return_n_iter`
    parameters of :func:`decomposition.dict_learning_online` serve internal
    purpose and are deprecated.
  - the `inner_stats_`, `iter_offset_` and `random_state_` attributes of
    :class:`decomposition.MiniBatchDictionaryLearning` serve internal purpose
    and are deprecated.
  - the default value of the `batch_size` parameter of both will change from
    3 to 256 in version 1.3.

  :pr:`18975` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Enhancement| :class:`decomposition.SparsePCA` and
  :class:`decomposition.MiniBatchSparsePCA` preserve dtype for
  `numpy.float32`. :pr:`22111` by :user:`Takeshi Oura <takoika>`.

- |Enhancement| :class:`decomposition.TruncatedSVD` now allows
  `n_components == n_features` if `algorithm="randomized"`.
  :pr:`22181` by :user:`Zach Deane-Mayer <zachmayer>`.

- |Enhancement| Adds :term:`get_feature_names_out` to all transformers in the
  :mod:`sklearn.decomposition` module:
  :class:`decomposition.DictionaryLearning`,
  :class:`decomposition.FactorAnalysis`, :class:`decomposition.FastICA`,
  :class:`decomposition.IncrementalPCA`, :class:`decomposition.KernelPCA`,
  :class:`decomposition.LatentDirichletAllocation`,
  :class:`decomposition.MiniBatchDictionaryLearning`,
  :class:`decomposition.MiniBatchSparsePCA`, :class:`decomposition.NMF`,
  :class:`decomposition.PCA`, :class:`decomposition.SparsePCA` and
  :class:`decomposition.TruncatedSVD`. :pr:`21334` by `Thomas Fan`_.

- |Enhancement| :class:`decomposition.TruncatedSVD` exposes the parameters
  `n_oversamples` and `power_iteration_normalizer` to tune
  :func:`utils.extmath.randomized_svd` and get accurate results when the
  number of features is large, the rank of the matrix is high, or other
  features of the matrix make low rank approximation difficult.
  :pr:`21705` by :user:`Jay S. Stanley III <stanleyjs>`.

- |Enhancement| :class:`decomposition.PCA` exposes the parameter
  `power_iteration_normalizer` to tune :func:`utils.extmath.randomized_svd`
  and get more accurate results when low rank approximation is difficult.
  :pr:`21705` by :user:`Jay S. Stanley III <stanleyjs>`.

- |Fix| :class:`decomposition.FastICA` now validates input parameters in `fit`
  instead of `__init__`. :pr:`21432` by :user:`Hannah Bohle <hhnnhh>` and
  :user:`Maren Westermann <marenwestermann>`.

- |Fix| :class:`decomposition.FastICA` now accepts `np.float32` data without
  silent upcasting. The dtype is preserved by `fit` and `fit_transform` and
  the main fitted attributes use a dtype of the same precision as the training
  data. :pr:`22806` by :user:`Jihane Bennis <JihaneBennis>` and
  :user:`Olivier Grisel <ogrisel>`.

- |Fix| :class:`decomposition.FactorAnalysis` now validates input parameters
  in `fit` instead of `__init__`. :pr:`21713` by :user:`Haya <HayaAlmutairi>`
  and :user:`Krum Arnaudov <krumeto>`.

- |Fix| :class:`decomposition.KernelPCA` now validates input parameters in
  `fit` instead of `__init__`. :pr:`21567` by
  :user:`Maggie Chege <MaggieChege>`.

- |Fix| :class:`decomposition.PCA` and :class:`decomposition.IncrementalPCA`
  more safely calculate precision using the inverse of the covariance matrix
  if `self.noise_variance_` is zero.
  :pr:`22300` by :user:`Meekail Zain <micky774>` and :pr:`15948` by
  :user:`sysuresh`.

- |Fix| Greatly reduced peak memory usage in :class:`decomposition.PCA` when
  calling `fit` or `fit_transform`.
  :pr:`22553` by :user:`Meekail Zain <micky774>`.

- |API| :class:`decomposition.FastICA` now supports unit variance for
  whitening. The default value of its `whiten` argument will change from
  `True` (which behaves like `'arbitrary-variance'`) to `'unit-variance'` in
  version 1.3. :pr:`19490` by :user:`Facundo Ferrin <fferrin>` and
  :user:`Julien Jerphanion <jjerphan>`.

:mod:`sklearn.discriminant_analysis`
....................................

- |Enhancement| Adds :term:`get_feature_names_out` to
  :class:`discriminant_analysis.LinearDiscriminantAnalysis`.
  :pr:`22120` by `Thomas Fan`_.

- |Fix| :class:`discriminant_analysis.LinearDiscriminantAnalysis` now uses the
  correct variance-scaling coefficient which may result in different model
  behavior. :pr:`15984` by :user:`Okon Samuel <OkonSamuel>` and :pr:`22696` by
  :user:`Meekail Zain <micky774>`.

:mod:`sklearn.dummy`
....................

- |Fix| :class:`dummy.DummyRegressor` no longer overrides the `constant`
  parameter during `fit`. :pr:`22486` by `Thomas Fan`_.

:mod:`sklearn.ensemble`
.......................

- |MajorFeature| Added additional option `loss="quantile"` to
  :class:`ensemble.HistGradientBoostingRegressor` for modelling quantiles.
  The quantile level can be specified with the new parameter `quantile`.
  :pr:`21800` and :pr:`20567` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Efficiency| `fit` of :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor` now calls
  :func:`utils.check_array` with parameter `force_all_finite=False` for
  non-initial warm-start runs as it has already been checked before.
  :pr:`22159` by :user:`Geoffrey Paris <Geoffrey-Paris>`.

- |Enhancement| :class:`ensemble.HistGradientBoostingClassifier` is faster,
  for binary and in particular for multiclass problems thanks to the new
  private loss function module. :pr:`20811`, :pr:`20567` and :pr:`21814` by
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Enhancement| Adds support to use pre-fit models with `cv="prefit"` in
  :class:`ensemble.StackingClassifier` and
  :class:`ensemble.StackingRegressor`.
  :pr:`16748` by :user:`Siqi He <siqi-he>` and :pr:`22215` by
  :user:`Meekail Zain <micky774>`.

- |Enhancement| :class:`ensemble.RandomForestClassifier` and
  :class:`ensemble.ExtraTreesClassifier` have the new `criterion="log_loss"`,
  which is equivalent to `criterion="entropy"`.
  :pr:`23047` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Enhancement| Adds :term:`get_feature_names_out` to
  :class:`ensemble.VotingClassifier`, :class:`ensemble.VotingRegressor`,
  :class:`ensemble.StackingClassifier` and
  :class:`ensemble.StackingRegressor`.
  :pr:`22695` and :pr:`22697` by `Thomas Fan`_.

- |Enhancement| :class:`ensemble.RandomTreesEmbedding` now has an informative
  :term:`get_feature_names_out` function that includes both tree index and
  leaf index in the output feature names.
  :pr:`21762` by :user:`Zhehao Liu <MaxwellLZH>` and `Thomas Fan`_.

- |Efficiency| Fitting a :class:`ensemble.RandomForestClassifier`,
  :class:`ensemble.RandomForestRegressor`,
  :class:`ensemble.ExtraTreesClassifier`,
  :class:`ensemble.ExtraTreesRegressor` and
  :class:`ensemble.RandomTreesEmbedding` is now faster in a multiprocessing
  setting, especially for subsequent fits with `warm_start` enabled.
  :pr:`22106` by :user:`Pieter Gijsbers <PGijsbers>`.

- |Fix| Change the parameter `validation_fraction` in
  :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor` so that an error is raised if
  anything other than a float is passed in as an argument.
  :pr:`21632` by :user:`Genesis Valencia <genvalen>`.

- |Fix| Removed a potential source of CPU oversubscription in
  :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` when CPU resource usage is
  limited, for instance using cgroups quota in a docker container.
  :pr:`22566` by :user:`Jérémie du Boisberranger <jeremiedbb>`.

- |Fix| :class:`ensemble.HistGradientBoostingClassifier` and
  :class:`ensemble.HistGradientBoostingRegressor` no longer warn when fitting
  on a pandas DataFrame
  with a non-default `scoring` parameter and early stopping enabled.
  :pr:`22908` by `Thomas Fan`_.

- |Fix| Fixes HTML repr for :class:`ensemble.StackingClassifier` and
  :class:`ensemble.StackingRegressor`. :pr:`23097` by `Thomas Fan`_.

- |API| The attribute `loss_` of :class:`ensemble.GradientBoostingClassifier`
  and :class:`ensemble.GradientBoostingRegressor` has been deprecated and will
  be removed in version 1.3.
  :pr:`23079` by :user:`Christian Lorentzen <lorentzenchr>`.

- |API| Changed the default of `max_features` to 1.0 for
  :class:`ensemble.RandomForestRegressor` and to `"sqrt"` for
  :class:`ensemble.RandomForestClassifier`. Note that these give the same fit
  results as before, but are much easier to understand. The old default value
  `"auto"` has been deprecated and will be removed in version 1.3. The same
  changes are also applied for :class:`ensemble.ExtraTreesRegressor` and
  :class:`ensemble.ExtraTreesClassifier`.
  :pr:`20803` by :user:`Brian Sun <bsun94>`.

- |Efficiency| Improve runtime performance of
  :class:`ensemble.IsolationForest` by skipping repetitive input checks.
  :pr:`23149` by :user:`Zhehao Liu <MaxwellLZH>`.

:mod:`sklearn.feature_extraction`
.................................

- |Feature| :class:`feature_extraction.FeatureHasher` now supports PyPy.
  :pr:`23023` by `Thomas Fan`_.

- |Fix| :class:`feature_extraction.FeatureHasher` now validates input
  parameters in `transform` instead of `__init__`. :pr:`21573` by
  :user:`Hannah Bohle <hhnnhh>` and :user:`Maren Westermann <marenwestermann>`.

- |Fix| :class:`feature_extraction.text.TfidfVectorizer` now does not create
  a :class:`feature_extraction.text.TfidfTransformer` at `__init__` as
  required by our API. :pr:`21832` by :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.feature_selection`
................................

- |Feature| Added auto mode to
  :class:`feature_selection.SequentialFeatureSelector`. If the argument
  `n_features_to_select` is `"auto"`,
  select features until the score improvement does not exceed the argument
  `tol`. The default value of `n_features_to_select` changed from `None` to
  `"warn"` in 1.1 and will become `"auto"` in 1.3. `None` and `"warn"` will be
  removed in 1.3. :pr:`20145` by :user:`murata-yu <murata-yu>`.

- |Feature| Added the ability to pass callables to the `max_features`
  parameter of :class:`feature_selection.SelectFromModel`. Also introduced new
  attribute `max_features_` which is inferred from `max_features` and the data
  during `fit`. If `max_features` is an integer, then
  `max_features_ = max_features`. If `max_features` is a callable, then
  `max_features_ = max_features(X)`.
  :pr:`22356` by :user:`Meekail Zain <micky774>`.

- |Enhancement| :class:`feature_selection.GenericUnivariateSelect` preserves
  float32 dtype. :pr:`18482` by :user:`Thierry Gameiro <titigmr>` and
  :user:`Daniel Kharsa <aflatoune>` and :pr:`22370` by
  :user:`Meekail Zain <micky774>`.

- |Enhancement| Add a parameter `force_finite` to
  :func:`feature_selection.f_regression` and
  :func:`feature_selection.r_regression`. This parameter allows to force the
  output to be finite in the case where a feature or the target is constant,
  or where the feature and target are perfectly correlated (only for the
  F-statistic).
  :pr:`17819` by :user:`Juan Carlos Alfaro Jiménez <alfaro96>`.

- |Efficiency| Improve runtime performance of :func:`feature_selection.chi2`
  with boolean arrays. :pr:`22235` by `Thomas Fan`_.

- |Efficiency| Reduced memory usage of :func:`feature_selection.chi2`.
  :pr:`21837` by :user:`Louis Wagner <lrwagner>`.

:mod:`sklearn.gaussian_process`
...............................

- |Fix| `predict` and `sample_y` methods of
  :class:`gaussian_process.GaussianProcessRegressor` now return arrays of the
  correct shape in single-target and multi-target cases, and for both
  `normalize_y=False` and `normalize_y=True`.
  :pr:`22199` by :user:`Guillaume Lemaitre <glemaitre>`,
  :user:`Aidar Shakerimoff <AidarShakerimoff>` and
  :user:`Tenavi Nakamura-Zimmerer <Tenavi>`.

- |Fix| :class:`gaussian_process.GaussianProcessClassifier` raises a more
  informative error if `CompoundKernel` is passed via `kernel`.
  :pr:`22223` by :user:`MarcoM <marcozzxx810>`.

:mod:`sklearn.impute`
.....................

- |Enhancement| :class:`impute.SimpleImputer` now warns with feature names
  when features are skipped due to the lack of any observed values in the
  training set. :pr:`21617` by :user:`Christian Ritter <chritter>`.

- |Enhancement| Added support for `pd.NA` in :class:`impute.SimpleImputer`.
  :pr:`21114` by :user:`Ying Xiong <yxiong>`.

- |Enhancement| Adds :term:`get_feature_names_out` to
  :class:`impute.SimpleImputer`, :class:`impute.KNNImputer`,
  :class:`impute.IterativeImputer` and :class:`impute.MissingIndicator`.
  :pr:`21078` by `Thomas Fan`_.

- |API| The `verbose` parameter was deprecated for
  :class:`impute.SimpleImputer`. A warning will always be raised upon the
  removal of empty columns.
  :pr:`21448` by :user:`Oleh Kozynets <OlehKSS>` and
  :user:`Christian Ritter <chritter>`.

:mod:`sklearn.inspection`
.........................

- |Feature| Add a display to plot the boundary decision of a classifier by
  using the method :func:`inspection.DecisionBoundaryDisplay.from_estimator`.
  :pr:`16061` by `Thomas Fan`_.

- |Enhancement| In :meth:`inspection.PartialDependenceDisplay.from_estimator`,
  allow `kind` to accept a list of strings to specify which type of plot to
  draw for each feature interaction.
  :pr:`19438` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Enhancement| :meth:`inspection.PartialDependenceDisplay.from_estimator`,
  :meth:`inspection.PartialDependenceDisplay.plot` and
  `inspection.plot_partial_dependence` now support plotting centered
  Individual Conditional Expectation (cICE) and centered PDP curves controlled
  by setting the parameter `centered`. :pr:`18310` by
  :user:`Johannes Elfner <JoElfner>` and
  :user:`Guillaume Lemaitre <glemaitre>`.

:mod:`sklearn.isotonic`
.......................

- |Enhancement| Adds :term:`get_feature_names_out` to
  :class:`isotonic.IsotonicRegression`. :pr:`22249` by `Thomas Fan`_.

:mod:`sklearn.kernel_approximation`
...................................

- |Enhancement| Adds :term:`get_feature_names_out` to
  :class:`kernel_approximation.AdditiveChi2Sampler`,
  :class:`kernel_approximation.Nystroem`,
  :class:`kernel_approximation.PolynomialCountSketch`,
  :class:`kernel_approximation.RBFSampler` and
  :class:`kernel_approximation.SkewedChi2Sampler`.
  :pr:`22137` and :pr:`22694` by `Thomas Fan`_.

:mod:`sklearn.linear_model`
...........................

- |Feature| :class:`linear_model.ElasticNet`,
  :class:`linear_model.ElasticNetCV`, :class:`linear_model.Lasso` and
  :class:`linear_model.LassoCV` support `sample_weight` for sparse input `X`.
  :pr:`22808` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Feature| :class:`linear_model.Ridge` with `solver="lsqr"` now supports
  fitting sparse input with `fit_intercept=True`.
  :pr:`22950` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Enhancement| :class:`linear_model.QuantileRegressor` supports sparse input
  for the highs-based solvers.
  :pr:`21086` by :user:`Venkatachalam Natchiappan <venkyyuvy>`.
  In addition, those solvers now use the CSC matrix right from the beginning
  which speeds up fitting.
  :pr:`22206` by :user:`Christian Lorentzen <lorentzenchr>`.

- |Enhancement| :class:`linear_model.LogisticRegression` is faster for
  `solver="lbfgs"` and `solver="newton-cg"`, for binary and in particular for
  multiclass problems thanks to the new private loss function module. In the
  multiclass case, the memory consumption has also been reduced for these
  solvers as the target is now label encoded (mapped to integers) instead of
  label binarized (one-hot encoded). The more classes, the larger the benefit.
  :pr:`21808`, :pr:`20567` and :pr:`21814` by
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Enhancement| :class:`linear_model.GammaRegressor`,
  :class:`linear_model.PoissonRegressor` and
  :class:`linear_model.TweedieRegressor` are faster for `solver="lbfgs"`.
  :pr:`22548`, :pr:`21808` and :pr:`20567` by
  :user:`Christian Lorentzen <lorentzenchr>`.

- |Enhancement| Rename parameter `base_estimator` to `estimator` in
  :class:`linear_model.RANSACRegressor` to improve readability and
  consistency. `base_estimator` is deprecated and will be removed in 1.3.
  :pr:`22062` by :user:`Adrian Trujillo <trujillo9616>`.

- |Enhancement| :class:`linear_model.ElasticNet` and other linear model
  classes using coordinate descent show error messages when non-finite
  parameter weights are produced. :pr:`22148` by
  :user:`Christian Ritter <chritter>` and :user:`Norbert Preining <norbusan>`.

- |Enhancement| :class:`linear_model.ElasticNet` and
  :class:`linear_model.Lasso` now raise consistent error messages when passed
  invalid values for `l1_ratio`, `alpha`, `max_iter` and `tol`.
  :pr:`22240` by :user:`Arturo Amor <ArturoAmorQ>`.

- |Enhancement| :class:`linear_model.BayesianRidge` and
  :class:`linear_model.ARDRegression` now preserve float32 dtype.
  :pr:`9087` by :user:`Arthur Imbert <Henley13>` and :pr:`22525` by
  :user:`Meekail Zain <micky774>`.

- |Enhancement| :class:`linear_model.RidgeClassifier` is now supporting
  multilabel classification.
  :pr:`19689` by :user:`Guillaume Lemaitre <glemaitre>`.

- |Enhancement| :class:`linear_model.RidgeCV` and
  :class:`linear_model.RidgeClassifierCV` now raise consistent error messages
  when passed invalid values for `alphas`.
  :pr:`21606` by :user:`Arturo Amor <ArturoAmorQ>`.

- |Enhancement| :class:`linear_model.Ridge` and
  :class:`linear_model.RidgeClassifier` now raise consistent error messages
  when passed invalid values for `alpha`, `max_iter` and `tol`.
  :pr:`21341` by :user:`Arturo Amor <ArturoAmorQ>`.

- |Enhancement| :func:`linear_model.orthogonal_mp_gram`
preservse dtype for    numpy float32      pr  22002  by  user  Takeshi Oura  takoika        Fix   class  linear model LassoLarsIC  now correctly computes AIC   and BIC  An error is now raised when  n features   n samples  and   when the noise variance is not provided     pr  21481  by  user  Guillaume Lemaitre  glemaitre   and    user  Andr s Babino  ababino        Fix   class  linear model TheilSenRegressor  now validates input parameter     max subpopulation   in  fit  instead of    init        pr  21767  by  user  Maren Westermann  marenwestermann        Fix   class  linear model ElasticNetCV  now produces correct   warning when  l1 ratio 0      pr  21724  by  user  Yar Khine Phyo  yarkhinephyo        Fix   class  linear model LogisticRegression  and    class  linear model LogisticRegressionCV  now set the  n iter   attribute   with a shape that respects the docstring and that is consistent with the shape   obtained when using the other solvers in the one vs rest setting  Previously    it would record only the maximum of the number of iterations for each binary   sub problem while now all of them are recorded   pr  21998  by    user  Olivier Grisel  ogrisel        Fix  The property  family  of  class  linear model TweedieRegressor  is not   validated in    init    anymore  Instead  this  private  property is deprecated in    class  linear model GammaRegressor    class  linear model PoissonRegressor  and    class  linear model TweedieRegressor   and will be removed in 1 3     pr  22548  by  user  Christian Lorentzen  lorentzenchr        Fix  The  coef   and  intercept   attributes of    class  linear model LinearRegression  are now correctly computed in the presence of   sample weights when the input is sparse     pr  22891  by  user  J r mie du Boisberranger  jeremiedbb        Fix  The  coef   and  intercept   attributes of  class  linear model Ridge  with    solver  sparse cg   and  solver  lbfgs   are now correctly computed in the presence   of sample weights 
when the input is sparse     pr  22899  by  user  J r mie du Boisberranger  jeremiedbb        Fix   class  linear model SGDRegressor  and  class  linear model SGDClassifier  now   computes the validation error correctly when early stopping is enabled     pr  23256  by  user  Zhehao Liu  MaxwellLZH        API   class  linear model LassoLarsIC  now exposes  noise variance  as   a parameter in order to provide an estimate of the noise variance    This is particularly relevant when  n features   n samples  and the   estimator of the noise variance cannot be computed     pr  21481  by  user  Guillaume Lemaitre  glemaitre      mod  sklearn manifold                              Feature   class  manifold Isomap  now supports radius based   neighbors via the  radius  argument     pr  19794  by  user  Zhehao Liu  MaxwellLZH        Enhancement   func  manifold spectral embedding  and    class  manifold SpectralEmbedding  supports  np float32  dtype and will   preserve this dtype     pr  21534  by  user  Andrew Knyazev  lobpcg        Enhancement  Adds  term  get feature names out  to  class  manifold Isomap    and  class  manifold LocallyLinearEmbedding    pr  22254  by  Thomas Fan        Enhancement  added  metric params  to  class  manifold TSNE  constructor for   additional parameters of distance metric to use in optimization     pr  21805  by  user  Jeanne Dionisi  jeannedionisi   and  pr  22685  by    user  Meekail Zain  micky774        Enhancement   func  manifold trustworthiness  raises an error if    n neighbours    n samples   2  to ensure a correct support for the function     pr  18832  by  user  Hong Shao Yang  hongshaoyang   and  pr  23033  by    user  Meekail Zain  micky774        Fix   func  manifold spectral embedding  now uses Gaussian instead of   the previous uniform on  0  1  random initial approximations to eigenvectors   in eigen solvers  lobpcg  and  amg  to improve their numerical stability     pr  21565  by  user  Andrew Knyazev  lobpcg      mod  
sklearn metrics                             Feature   func  metrics r2 score  and  func  metrics explained variance score  have a   new  force finite  parameter  Setting this parameter to  False  will return the   actual non finite score in case of perfect predictions or constant  y true     instead of the finite approximation   1 0  and  0 0  respectively  currently   returned by default   pr  17266  by  user  Sylvain Mari   smarie        Feature   func  metrics d2 pinball score  and  func  metrics d2 absolute error score    calculate the  math  D 2  regression score for the pinball loss and the   absolute error respectively   func  metrics d2 absolute error score  is a special case   of  func  metrics d2 pinball score  with a fixed quantile parameter  alpha 0 5    for ease of use and discovery  The  math  D 2  scores are generalizations   of the  r2 score  and can be interpreted as the fraction of deviance explained     pr  22118  by  user  Ohad Michel  ohadmich        Enhancement   func  metrics top k accuracy score  raises an improved error   message when  y true  is binary and  y score  is 2d   pr  22284  by  Thomas Fan        Enhancement   func  metrics roc auc score  now supports   average None     in the multiclass case when   multiclass  ovr    which will return the score   per class   pr  19158  by  user  Nicki Skafte  SkafteNicki        Enhancement  Adds  im kw  parameter to    meth  metrics ConfusionMatrixDisplay from estimator     meth  metrics ConfusionMatrixDisplay from predictions   and    meth  metrics ConfusionMatrixDisplay plot   The  im kw  parameter is passed   to the  matplotlib pyplot imshow  call when plotting the confusion matrix     pr  20753  by  Thomas Fan        Fix   func  metrics silhouette score  now supports integer input for precomputed   distances   pr  22108  by  Thomas Fan        Fix  Fixed a bug in  func  metrics normalized mutual info score  which could return   unbounded values   pr  22635  by  user  J r mie du Boisberranger  
jeremiedbb        Fix  Fixes  func  metrics precision recall curve  and    func  metrics average precision score  when true labels are all negative     pr  19085  by  user  Varun Agrawal  varunagrawal        API   metrics SCORERS  is now deprecated and will be removed in 1 3  Please   use  func  metrics get scorer names  to retrieve the names of all available   scorers   pr  22866  by  Adrin Jalali        API  Parameters   sample weight   and   multioutput   of    func  metrics mean absolute percentage error  are now keyword only  in accordance   with  SLEP009  https   scikit learn enhancement proposals readthedocs io en latest slep009 proposal html       A deprecation cycle was introduced     pr  21576  by  user  Paul Emile Dugnat  pedugnat        API  The   wminkowski   metric of  class  metrics DistanceMetric  is deprecated   and will be removed in version 1 3  Instead the existing   minkowski   metric now takes   in an optional  w  parameter for weights  This deprecation aims at remaining consistent   with SciPy 1 8 convention   pr  21873  by  user  Yar Khine Phyo  yarkhinephyo        API   class  metrics DistanceMetric  has been moved from    mod  sklearn neighbors  to  mod  sklearn metrics     Using  neighbors DistanceMetric  for imports is still valid for   backward compatibility  but this alias will be removed in 1 3     pr  21177  by  user  Julien Jerphanion  jjerphan      mod  sklearn mixture                             Enhancement   class  mixture GaussianMixture  and    class  mixture BayesianGaussianMixture  can now be initialized using   k means   and random data points   pr  20408  by    user  Gordon Walsh  g walsh     user  Alberto Ceballos alceballosa     and  user  Andres Rios ariosramirez        Fix  Fix a bug that correctly initialize  precisions cholesky   in    class  mixture GaussianMixture  when providing  precisions init  by taking   its square root     pr  22058  by  user  Guillaume Lemaitre  glemaitre        Fix   class  mixture 
GaussianMixture  now normalizes  weights   more safely    preventing rounding errors when calling  meth  mixture GaussianMixture sample  with    n components 1      pr  23034  by  user  Meekail Zain  micky774      mod  sklearn model selection                                     Enhancement  it is now possible to pass  scoring  matthews corrcoef   to all   model selection tools with a  scoring  argument to use the Matthews   correlation coefficient  MCC      pr  22203  by  user  Olivier Grisel  ogrisel        Enhancement  raise an error during cross validation when the fits for all the   splits failed  Similarly raise an error during grid search when the fits for   all the models and all the splits failed     pr  21026  by  user  Lo c Est ve  lesteve        Fix   class  model selection GridSearchCV      class  model selection HalvingGridSearchCV    now validate input parameters in  fit  instead of    init        pr  21880  by  user  Mrinal Tyagi  MrinalTyagi        Fix   func  model selection learning curve  now supports  partial fit    with regressors   pr  22982  by  Thomas Fan      mod  sklearn multiclass                                Enhancement   class  multiclass OneVsRestClassifier  now supports a  verbose    parameter so progress on fitting can be seen     pr  22508  by  user  Chris Combs  combscCode        Fix   meth  multiclass OneVsOneClassifier predict  returns correct predictions when   the inner classifier only has a  term  predict proba    pr  22604  by  Thomas Fan      mod  sklearn neighbors                               Enhancement  Adds  term  get feature names out  to    class  neighbors RadiusNeighborsTransformer      class  neighbors KNeighborsTransformer    and  class  neighbors NeighborhoodComponentsAnalysis      pr  22212  by  user  Meekail Zain  micky774        Fix   class  neighbors KernelDensity  now validates input parameters in  fit    instead of    init      pr  21430  by  user  Desislava Vasileva  DessyVV   and    user  Lucy Jimenez  
LucyJimenez        Fix   func  neighbors KNeighborsRegressor predict  now works properly when   given an array like input if  KNeighborsRegressor  is first constructed with a   callable passed to the  weights  parameter   pr  22687  by    user  Meekail Zain  micky774      mod  sklearn neural network                                    Enhancement   func  neural network MLPClassifier  and    func  neural network MLPRegressor  show error   messages when optimizers produce non finite parameter weights   pr  22150    by  user  Christian Ritter  chritter   and  user  Norbert Preining  norbusan        Enhancement  Adds  term  get feature names out  to    class  neural network BernoulliRBM    pr  22248  by  Thomas Fan      mod  sklearn pipeline                              Enhancement  Added support for  passthrough  in  class  pipeline FeatureUnion     Setting a transformer to  passthrough  will pass the features unchanged     pr  20860  by  user  Shubhraneel Pal  shubhraneel        Fix   class  pipeline Pipeline  now does not validate hyper parameters in      init    but in   fit        pr  21888  by  user  iofall  iofall   and  user  Arisa Y   arisayosh        Fix   class  pipeline FeatureUnion  does not validate hyper parameters in      init     Validation is now handled in   fit    and   fit transform        pr  21954  by  user  iofall  iofall   and  user  Arisa Y   arisayosh        Fix  Defines    sklearn is fitted    in  class  pipeline FeatureUnion  to   return correct result with  func  utils validation check is fitted      pr  22953  by  user  randomgeek78  randomgeek78      mod  sklearn preprocessing                                   Feature   class  preprocessing OneHotEncoder  now supports grouping   infrequent categories into a single feature  Grouping infrequent categories   is enabled by specifying how to select infrequent categories with    min frequency  or  max categories    pr  16018  by  Thomas Fan        Enhancement  Adds a  subsample  parameter to  
class  preprocessing KBinsDiscretizer     This allows specifying a maximum number of samples to be used while fitting   the model  The option is only available when  strategy  is set to  quantile      pr  21445  by  user  Felipe Bidu  fbidu   and  user  Amanda Dsouza  amy12xx        Enhancement  Adds  encoded missing value  to  class  preprocessing OrdinalEncoder    to configure the encoded value for missing data   pr  21988  by  Thomas Fan        Enhancement  Added the  get feature names out  method and a new parameter    feature names out  to  class  preprocessing FunctionTransformer   You can set    feature names out  to  one to one  to use the input features names as the   output feature names  or you can set it to a callable that returns the output   feature names  This is especially useful when the transformer changes the   number of features  If  feature names out  is None  which is the default     then  get output feature names  is not defined     pr  21569  by  user  Aur lien Geron  ageron        Enhancement  Adds  term  get feature names out  to    class  preprocessing Normalizer      class  preprocessing KernelCenterer      class  preprocessing OrdinalEncoder   and    class  preprocessing Binarizer    pr  21079  by  Thomas Fan        Fix   class  preprocessing PowerTransformer  with  method  yeo johnson     better supports significantly non Gaussian data when searching for an optimal   lambda   pr  20653  by  Thomas Fan        Fix   class  preprocessing LabelBinarizer  now validates input parameters in    fit  instead of    init        pr  21434  by  user  Krum Arnaudov  krumeto        Fix   class  preprocessing FunctionTransformer  with  check inverse True    now provides informative error message when input has mixed dtypes   pr  19916  by    user  Zhehao Liu  MaxwellLZH        Fix   class  preprocessing KBinsDiscretizer  handles bin edges more consistently now     pr  14975  by  Andreas M ller   and  pr  22526  by  user  Meekail Zain  micky774        
Fix  Adds  meth  preprocessing KBinsDiscretizer get feature names out  support when    encode  ordinal     pr  22735  by  Thomas Fan      mod  sklearn random projection                                       Enhancement  Adds an  inverse transform  method and a  compute inverse transform    parameter to  class  random projection GaussianRandomProjection  and    class  random projection SparseRandomProjection   When the parameter is set   to True  the pseudo inverse of the components is computed during  fit  and stored as    inverse components     pr  21701  by  user  Aur lien Geron  ageron        Enhancement   class  random projection SparseRandomProjection  and    class  random projection GaussianRandomProjection  preserves dtype for    numpy float32    pr  22114  by  user  Takeshi Oura  takoika        Enhancement  Adds  term  get feature names out  to all transformers in the    mod  sklearn random projection  module     class  random projection GaussianRandomProjection  and    class  random projection SparseRandomProjection    pr  21330  by    user  Lo c Est ve  lesteve      mod  sklearn svm                         Enhancement   class  svm OneClassSVM    class  svm NuSVC      class  svm NuSVR    class  svm SVC  and  class  svm SVR  now expose    n iter    the number of iterations of the libsvm optimization routine     pr  21408  by  user  Juan Mart n Loyola  jmloyola        Enhancement   func  svm SVR    func  svm SVC    func  svm NuSVR      func  svm OneClassSVM    func  svm NuSVC  now raise an error   when the dual gap estimation produce non finite parameter weights     pr  22149  by  user  Christian Ritter  chritter   and    user  Norbert Preining  norbusan        Fix   class  svm NuSVC    class  svm NuSVR    class  svm SVC      class  svm SVR    class  svm OneClassSVM  now validate input   parameters in  fit  instead of    init        pr  21436  by  user  Haidar Almubarak  Haidar13       mod  sklearn tree                          Enhancement   class  tree 
DecisionTreeClassifier  and    class  tree ExtraTreeClassifier  have the new  criterion  log loss    which is   equivalent to  criterion  entropy       pr  23047  by  user  Christian Lorentzen  lorentzenchr        Fix  Fix a bug in the Poisson splitting criterion for    class  tree DecisionTreeRegressor      pr  22191  by  user  Christian Lorentzen  lorentzenchr        API  Changed the default value of  max features  to 1 0 for    class  tree ExtraTreeRegressor  and to   sqrt   for    class  tree ExtraTreeClassifier   which will not change the fit result  The original   default value   auto   has been deprecated and will be removed in version 1 3    Setting  max features  to   auto   is also deprecated   for  class  tree DecisionTreeClassifier  and  class  tree DecisionTreeRegressor      pr  22476  by  user  Zhehao Liu  MaxwellLZH      mod  sklearn utils                           Enhancement   func  utils check array  and    func  utils multiclass type of target  now accept an  input name  parameter to make   the error message more informative when passed invalid input data  e g  with NaN or   infinite values      pr  21219  by  user  Olivier Grisel  ogrisel        Enhancement   func  utils check array  returns a float   ndarray with  np nan  when passed a  Float32  or  Float64  pandas extension   array with  pd NA    pr  21278  by  Thomas Fan        Enhancement   func  utils estimator html repr  shows a more helpful error   message when running in a jupyter notebook that is not trusted   pr  21316    by  Thomas Fan        Enhancement   func  utils estimator html repr  displays an arrow on the top   left corner of the HTML representation to show how the elements are   clickable   pr  21298  by  Thomas Fan        Enhancement   func  utils check array  with  dtype None  returns numeric   arrays when passed in a pandas DataFrame with mixed dtypes   dtype  numeric     will also make better infer the dtype when the DataFrame has mixed dtypes     pr  22237  by  Thomas 
Fan        Enhancement   func  utils check scalar  now has better messages   when displaying the type   pr  22218  by  Thomas Fan        Fix  Changes the error message of the  ValidationError  raised by    func  utils check X y  when y is None so that it is compatible   with the  check requires y none  estimator check   pr  22578  by    user  Claudio Salvatore Arcidiacono  ClaudioSalvatoreArcidiacono        Fix   func  utils class weight compute class weight  now only requires that   all classes in  y  have a weight in  class weight   An error is still raised   when a class is present in  y  but not in  class weight    pr  22595  by    Thomas Fan        Fix   func  utils estimator html repr  has an improved visualization for nested   meta estimators   pr  21310  by  Thomas Fan        Fix   func  utils check scalar  raises an error when    include boundaries   left    right    and the boundaries are not set     pr  22027  by  user  Marie Lanternier  mlant        Fix   func  utils metaestimators available if  correctly returns a bounded   method that can be pickled   pr  23077  by  Thomas Fan        API   func  utils estimator checks check estimator  s argument is now called    estimator   previous name was  Estimator     pr  22188  by    user  Mathurin Massias  mathurinm        API    utils metaestimators if delegate has method   is deprecated and will be   removed in version 1 3  Use  func  utils metaestimators available if  instead     pr  22830  by  user  J r mie du Boisberranger  jeremiedbb        rubric   Code and documentation contributors  Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1 0  including   2357juan  Abhishek Gupta  adamgonzo  Adam Li  adijohar  Aditya Kumawat  Aditya Raghuwanshi  Aditya Singh  Adrian Trujillo Duron  Adrin Jalali  ahmadjubair33  AJ Druck  aj white  Alan Peixinho  Alberto Mario Ceballos Arroyo  Alek Lefebvre  Alex  Alexandr  Alexandre Gramfort  alexanmv  almeidayoel  Amanda 
Dsouza  Aman Sharma  Amar pratap singh  Amit  amrcode  Andr s Simon  Andreas Grivas  Andreas Mueller  Andrew Knyazev  Andriy  Angus L Herrou  Ankit Sharma  Anne Ducout  Arisa  Arth  arthurmello  Arturo Amor  ArturoAmor  Atharva Patil  aufarkari  Aur lien Geron  avm19  Ayan Bag  baam  Bardiya Ak  Behrouz B  Ben3940  Benjamin Bossan  Bharat Raghunathan  Bijil Subhash  bmreiniger  Brandon Truth  Brenden Kadota  Brian Sun  cdrig  Chalmer Lowe  Chiara Marmo  Chitteti Srinath Reddy  Chloe Agathe Azencott  Christian Lorentzen  Christian Ritter  christopherlim98  Christoph T  Weidemann  Christos Aridas  Claudio Salvatore Arcidiacono  combscCode  Daniela Fernandes  darioka  Darren Nguyen  Dave Eargle  David Gilbertson  David Poznik  Dea Mar a L on  Dennis Osei  DessyVV  Dev514  Dimitri Papadopoulos Orfanos  Diwakar Gupta  Dr  Felix M  Riese  drskd  Emiko Sano  Emmanouil Gionanidis  EricEllwanger  Erich Schubert  Eric Larson  Eric Ndirangu  ErmolaevPA  Estefania Barreto Ojeda  eyast  Fatima GASMI  Federico Luna  Felix Glushchenkov  fkaren27  Fortune Uwha  FPGAwesome  francoisgoupil  Frans Larsson  ftorres16  Gabor Berei  Gabor Kertesz  Gabriel Stefanini Vicente  Gabriel S Vicente  Gael Varoquaux  GAURAV CHOUDHARY  Gauthier I  genvalen  Geoffrey Paris  Giancarlo Pablo  glennfrutiz  gpapadok  Guillaume Lemaitre  Guillermo Tom s Fern ndez Mart n  Gustavo Oliveira  Haidar Almubarak  Hannah Bohle  Hansin Ahuja  Haoyin Xu  Haya  Helder Geovane Gomes de Lima  henrymooresc  Hideaki Imamura  Himanshu Kumar  Hind M  hmasdev  hvassard  i aki y  iasoon  Inclusive Coding Bot  Ingela  iofall  Ishan Kumar  Jack Liu  Jake Cowton  jalexand3r  J Alexander  Jauhar  Jaya Surya Kommireddy  Jay Stanley  Jeff Hale  je kr  JElfner  Jenny Vo  J r mie du Boisberranger  Jihane  Jirka Borovec  Joel Nothman  Jon Haitz Legarreta Gorro o  Jordan Silke  Jorge Cipri n  Jorge Loayza  Joseph Chazalon  Joseph Schwartz Messing  Jovan Stojanovic  JSchuerz  Juan Carlos Alfaro Jim nez  Juan Martin Loyola  Julien 
Jerphanion  katotten  Kaushik Roy Chowdhury  Ken4git  Kenneth Prabakaran  kernc  Kevin Doucet  KimAYoung  Koushik Joshi  Kranthi Sedamaki  krishna kumar  krumetoft  lesnee  Lisa Casino  Logan Thomas  Loic Esteve  Louis Wagner  LucieClair  Lucy Liu  Luiz Eduardo Amaral  Magali  MaggieChege  Mai  mandjevant  Mandy Gu  Manimaran  MarcoM  Marco Wurps  Maren Westermann  Maria Boerner  MarieS WiMLDS  Martel Corentin  martin kokos  mathurinm  Mat as  matjansen  Matteo Francia  Maxwell  Meekail Zain  Megabyte  Mehrdad Moradizadeh  melemo2  Michael I Chen  michalkrawczyk  Micky774  milana2  millawell  Ming Yang Ho  Mitzi  miwojc  Mizuki  mlant  Mohamed Haseeb  Mohit Sharma  Moonkyung94  mpoemsl  MrinalTyagi  Mr  Leu  msabatier  murata yu  N  Nadirhan  ahin  Naipawat Poolsawat  NartayXD  nastegiano  nathansquan  nat salt  Nicki Skafte Detlefsen  Nicolas Hug  Niket Jain  Nikhil Suresh  Nikita Titov  Nikolay Kondratyev  Ohad Michel  Oleksandr Husak  Olivier Grisel  partev  Patrick Ferreira  Paul  pelennor  PierreAttard  Piet Br mmel  Pieter Gijsbers  Pinky  poloso  Pramod Anantharam  puhuk  Purna Chandra Mansingh  QuadV  Rahil Parikh  Randall Boyes  randomgeek78  Raz Hoshia  Reshama Shaikh  Ricardo Ferreira  Richard Taylor  Rileran  Rishabh  Robin Thibaut  Rocco Meli  Roman Feldbauer  Roman Yurchak  Ross Barnowski  rsnegrin  Sachin Yadav  sakinaOuisrani  Sam Adam Day  Sanjay Marreddi  Sebastian Pujalte  SEELE  SELEE  Seyedsaman  Sam  Emami  ShanDeng123  Shao Yang Hong  sharmadharmpal  shaymerNaturalint  Shuangchi He  Shubhraneel Pal  siavrez  slishak  Smile  spikebh  sply88  Srinath Kailasa  St phane Collot  Sultan Orazbayev  Sumit Saha  Sven Eschlbeck  Sven Stehle  Swapnil Jha  Sylvain Mari   Takeshi Oura  Tamires Santana  Tenavi  teunpe  Theis Ferr  Hjortkj r  Thiruvenkadam  Thomas J  Fan  t jakubek  toastedyeast  Tom Dupr  la Tour  Tom McTiernan  TONY GEORGE  Tyler Martin  Tyler Reddy  Udit Gupta  Ugo Marchand  Varun Agrawal  Venkatachalam N  Vera Komeyer  victoirelouis  
Vikas Vishwakarma  Vikrant khedkar  Vladimir Chernyy  Vladimir Kim  WeijiaDu  Xiao Yuan  Yar Khine Phyo  Ying Xiong  yiyangq  Yosshi999  Yuki Koyama  Zach Deane Mayer  Zeel B Patel  zempleni  zhenfisher      Zhao Feng "}
{"questions":"scikit-learn sklearn contributors rst Version 0 13 changes0131","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n============\nVersion 0.13\n============\n\n.. _changes_0_13_1:\n\nVersion 0.13.1\n==============\n\n**February 23, 2013**\n\nThe 0.13.1 release only fixes some bugs and does not add any new functionality.\n\nChangelog\n---------\n\n- Fixed a testing error caused by the function `cross_validation.train_test_split` being\n  interpreted as a test by `Yaroslav Halchenko`_.\n\n- Fixed a bug in the reassignment of small clusters in the :class:`cluster.MiniBatchKMeans`\n  by `Gael Varoquaux`_.\n\n- Fixed default value of ``gamma`` in :class:`decomposition.KernelPCA` by `Lars Buitinck`_.\n\n- Updated joblib to ``0.7.0d`` by `Gael Varoquaux`_.\n\n- Fixed scaling of the deviance in :class:`ensemble.GradientBoostingClassifier` by `Peter Prettenhofer`_.\n\n- Better tie-breaking in :class:`multiclass.OneVsOneClassifier` by `Andreas M\u00fcller`_.\n\n- Other small improvements to tests and documentation.\n\nPeople\n------\nList of contributors for release 0.13.1 by number of commits.\n\n* 16  `Lars Buitinck`_\n* 12  `Andreas M\u00fcller`_\n*  8  `Gael Varoquaux`_\n*  5  Robert Marchman\n*  3  `Peter Prettenhofer`_\n*  2  Hrishikesh Huilgolkar\n*  1  Bastiaan van den Berg\n*  1  Diego Molla\n*  1  `Gilles Louppe`_\n*  1  `Mathieu Blondel`_\n*  1  `Nelle Varoquaux`_\n*  1  Rafael Cunha de Almeida\n*  1  Rolando Espinoza La fuente\n*  1  `Vlad Niculae`_\n*  1  `Yaroslav Halchenko`_\n\n\n.. _changes_0_13:\n\nVersion 0.13\n============\n\n**January 21, 2013**\n\nNew Estimator Classes\n---------------------\n\n- :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`, two\n  data-independent predictors by `Mathieu Blondel`_. Useful to sanity-check\n  your estimators. 
See :ref:`dummy_estimators` in the user guide.\n  Multioutput support added by `Arnaud Joly`_.\n\n- :class:`decomposition.FactorAnalysis`, a transformer implementing the\n  classical factor analysis, by `Christian Osendorfer`_ and `Alexandre\n  Gramfort`_. See :ref:`FA` in the user guide.\n\n- :class:`feature_extraction.FeatureHasher`, a transformer implementing the\n  \"hashing trick\" for fast, low-memory feature extraction from string fields\n  by `Lars Buitinck`_ and :class:`feature_extraction.text.HashingVectorizer`\n  for text documents by `Olivier Grisel`_  See :ref:`feature_hashing` and\n  :ref:`hashing_vectorizer` for the documentation and sample usage.\n\n- :class:`pipeline.FeatureUnion`, a transformer that concatenates\n  results of several other transformers by `Andreas M\u00fcller`_. See\n  :ref:`feature_union` in the user guide.\n\n- :class:`random_projection.GaussianRandomProjection`,\n  :class:`random_projection.SparseRandomProjection` and the function\n  :func:`random_projection.johnson_lindenstrauss_min_dim`. The first two are\n  transformers implementing Gaussian and sparse random projection matrix\n  by `Olivier Grisel`_ and `Arnaud Joly`_.\n  See :ref:`random_projection` in the user guide.\n\n- :class:`kernel_approximation.Nystroem`, a transformer for approximating\n  arbitrary kernels by `Andreas M\u00fcller`_. See\n  :ref:`nystroem_kernel_approx` in the user guide.\n\n- :class:`preprocessing.OneHotEncoder`, a transformer that computes binary\n  encodings of categorical features by `Andreas M\u00fcller`_. See\n  :ref:`preprocessing_categorical_features` in the user guide.\n\n- :class:`linear_model.PassiveAggressiveClassifier` and\n  :class:`linear_model.PassiveAggressiveRegressor`, predictors implementing\n  an efficient stochastic optimization for linear models by `Rob Zinkov`_ and\n  `Mathieu Blondel`_. 
See :ref:`passive_aggressive` in the user\n  guide.\n\n- :class:`ensemble.RandomTreesEmbedding`, a transformer for creating high-dimensional\n  sparse representations using ensembles of totally random trees by  `Andreas M\u00fcller`_.\n  See :ref:`random_trees_embedding` in the user guide.\n\n- :class:`manifold.SpectralEmbedding` and function\n  :func:`manifold.spectral_embedding`, implementing the \"laplacian\n  eigenmaps\" transformation for non-linear dimensionality reduction by Wei\n  Li. See :ref:`spectral_embedding` in the user guide.\n\n- :class:`isotonic.IsotonicRegression` by `Fabian Pedregosa`_, `Alexandre Gramfort`_\n  and `Nelle Varoquaux`_,\n\n\nChangelog\n---------\n\n- :func:`metrics.zero_one_loss` (formerly ``metrics.zero_one``) now has\n  option for normalized output that reports the fraction of\n  misclassifications, rather than the raw number of misclassifications. By\n  Kyle Beauchamp.\n\n- :class:`tree.DecisionTreeClassifier` and all derived ensemble models now\n  support sample weighting, by `Noel Dawe`_  and `Gilles Louppe`_.\n\n- Speedup improvement when using bootstrap samples in forests of randomized\n  trees, by `Peter Prettenhofer`_  and `Gilles Louppe`_.\n\n- Partial dependence plots for :ref:`gradient_boosting` in\n  `ensemble.partial_dependence.partial_dependence` by `Peter\n  Prettenhofer`_. See :ref:`sphx_glr_auto_examples_inspection_plot_partial_dependence.py` for an\n  example.\n\n- The table of contents on the website has now been made expandable by\n  `Jaques Grobler`_.\n\n- :class:`feature_selection.SelectPercentile` now breaks ties\n  deterministically instead of returning all equally ranked features.\n\n- :class:`feature_selection.SelectKBest` and\n  :class:`feature_selection.SelectPercentile` are more numerically stable\n  since they use scores, rather than p-values, to rank results. 
This means\n  that they might sometimes select different features than they did\n  previously.\n\n- Ridge regression and ridge classification fitting with ``sparse_cg`` solver\n  no longer has quadratic memory complexity, by `Lars Buitinck`_ and\n  `Fabian Pedregosa`_.\n\n- Ridge regression and ridge classification now support a new fast solver\n  called ``lsqr``, by `Mathieu Blondel`_.\n\n- Speed up of :func:`metrics.precision_recall_curve` by Conrad Lee.\n\n- Added support for reading\/writing svmlight files with pairwise\n  preference attribute (qid in svmlight file format) in\n  :func:`datasets.dump_svmlight_file` and\n  :func:`datasets.load_svmlight_file` by `Fabian Pedregosa`_.\n\n- Faster and more robust :func:`metrics.confusion_matrix` and\n  :ref:`clustering_evaluation` by Wei Li.\n\n- `cross_validation.cross_val_score` now works with precomputed kernels\n  and affinity matrices, by `Andreas M\u00fcller`_.\n\n- LARS algorithm made more numerically stable with heuristics to drop\n  regressors too correlated as well as to stop the path when\n  numerical noise becomes predominant, by `Gael Varoquaux`_.\n\n- Faster implementation of :func:`metrics.precision_recall_curve` by\n  Conrad Lee.\n\n- New kernel `metrics.chi2_kernel` by `Andreas M\u00fcller`_, often used\n  in computer vision applications.\n\n- Fix of longstanding bug in :class:`naive_bayes.BernoulliNB` fixed by\n  Shaun Jackman.\n\n- Implemented ``predict_proba`` in :class:`multiclass.OneVsRestClassifier`,\n  by Andrew Winterman.\n\n- Improve consistency in gradient boosting: estimators\n  :class:`ensemble.GradientBoostingRegressor` and\n  :class:`ensemble.GradientBoostingClassifier` use the estimator\n  :class:`tree.DecisionTreeRegressor` instead of the\n  `tree._tree.Tree` data structure by `Arnaud Joly`_.\n\n- Fixed a floating point exception in the :ref:`decision trees <tree>`\n  module, by Seberg.\n\n- Fix :func:`metrics.roc_curve` fails when y_true has only one class\n  by Wei Li.\n\n- Add the 
  :func:`metrics.mean_absolute_error` function, which computes the
  mean absolute error. The :func:`metrics.mean_squared_error`,
  :func:`metrics.mean_absolute_error` and
  :func:`metrics.r2_score` metrics support multioutput, by `Arnaud Joly`_.

- Fixed ``class_weight`` support in :class:`svm.LinearSVC` and
  :class:`linear_model.LogisticRegression` by `Andreas Müller`_. The meaning
  of ``class_weight`` was reversed: in earlier releases, a higher weight
  erroneously meant fewer positives of a given class.

- Improve narrative documentation and consistency in
  :mod:`sklearn.metrics` for regression and classification metrics,
  by `Arnaud Joly`_.

- Fixed a bug in :class:`sklearn.svm.SVC` when using CSR matrices with
  unsorted indices, by Xinfan Meng and `Andreas Müller`_.

- :class:`cluster.MiniBatchKMeans`: Add random reassignment of cluster centers
  with few observations attached to them, by `Gael Varoquaux`_.


API changes summary
-------------------
- Renamed all occurrences of ``n_atoms`` to ``n_components`` for consistency.
  This applies to :class:`decomposition.DictionaryLearning`,
  :class:`decomposition.MiniBatchDictionaryLearning`,
  :func:`decomposition.dict_learning` and
  :func:`decomposition.dict_learning_online`.

- Renamed all occurrences of ``max_iters`` to ``max_iter`` for consistency.
  This applies to `semi_supervised.LabelPropagation` and
  `semi_supervised.label_propagation.LabelSpreading`.

- Renamed all occurrences of ``learn_rate`` to ``learning_rate`` for
  consistency in `ensemble.BaseGradientBoosting` and
  :class:`ensemble.GradientBoostingRegressor`.

- The module ``sklearn.linear_model.sparse`` is gone. Sparse matrix support
  was already integrated into the "regular" linear models.

- `sklearn.metrics.mean_square_error`, which incorrectly returned the
  accumulated error, was removed.
  Use :func:`metrics.mean_squared_error` instead.

- Passing ``class_weight`` parameters to ``fit`` methods is no longer
  supported. Pass them to estimator constructors instead.

- GMMs no longer have ``decode`` and ``rvs`` methods. Use the ``score``,
  ``predict`` or ``sample`` methods instead.

- The ``solver`` fit option in Ridge regression and classification is now
  deprecated and will be removed in v0.14. Use the constructor option
  instead.

- `feature_extraction.text.DictVectorizer` now returns sparse
  matrices in the CSR format, instead of COO.

- Renamed ``k`` in `cross_validation.KFold` and
  `cross_validation.StratifiedKFold` to ``n_folds``, renamed
  ``n_bootstraps`` to ``n_iter`` in ``cross_validation.Bootstrap``.

- Renamed all occurrences of ``n_iterations`` to ``n_iter`` for consistency.
  This applies to `cross_validation.ShuffleSplit`,
  `cross_validation.StratifiedShuffleSplit`,
  :func:`utils.extmath.randomized_range_finder` and
  :func:`utils.extmath.randomized_svd`.

- Replaced ``rho`` in :class:`linear_model.ElasticNet` and
  :class:`linear_model.SGDClassifier` by ``l1_ratio``.
  The ``rho`` parameter
  had different meanings; ``l1_ratio`` was introduced to avoid confusion.
  It has the same meaning as ``rho`` previously had in
  :class:`linear_model.ElasticNet` and ``(1-rho)`` in
  :class:`linear_model.SGDClassifier`.

- :class:`linear_model.LassoLars` and :class:`linear_model.Lars` now
  store a list of paths in the case of multiple targets, rather than
  an array of paths.

- The attribute ``gmm`` of `hmm.GMMHMM` was renamed to ``gmm_``
  to adhere more strictly to the API.

- `cluster.spectral_embedding` was moved to
  :func:`manifold.spectral_embedding`.

- Renamed ``eig_tol`` in :func:`manifold.spectral_embedding` and
  :class:`cluster.SpectralClustering` to ``eigen_tol``, and renamed
  ``mode`` to ``eigen_solver``.

- ``classes_`` and ``n_classes_`` attributes of
  :class:`tree.DecisionTreeClassifier` and all derived ensemble models are
  now flat in case of single output problems and nested in case of
  multi-output problems.

- The ``estimators_`` attribute of
  :class:`ensemble.GradientBoostingRegressor` and
  :class:`ensemble.GradientBoostingClassifier` is now an
  array of :class:`tree.DecisionTreeRegressor`.

- Renamed ``chunk_size`` to ``batch_size`` in
  :class:`decomposition.MiniBatchDictionaryLearning` and
  :class:`decomposition.MiniBatchSparsePCA` for consistency.

- :class:`svm.SVC` and :class:`svm.NuSVC` now provide a ``classes_``
  attribute and support arbitrary dtypes for labels ``y``.
  Also, the dtype returned by ``predict`` now reflects the dtype of
  ``y`` during ``fit`` (used to be ``np.float``).

- Changed the default ``test_size`` in `cross_validation.train_test_split`
  to None, and added the possibility to infer ``test_size`` from
  ``train_size`` in `cross_validation.ShuffleSplit` and
  `cross_validation.StratifiedShuffleSplit`.

- Renamed function
  `sklearn.metrics.zero_one` to
  `sklearn.metrics.zero_one_loss`. Be aware that the default behavior
  in `sklearn.metrics.zero_one_loss` is different from
  `sklearn.metrics.zero_one`: ``normalize=False`` is changed to
  ``normalize=True``.

- Renamed function `metrics.zero_one_score` to
  :func:`metrics.accuracy_score`.

- :func:`datasets.make_circles` now has the same number of inner and outer points.

- In the Naive Bayes classifiers, the ``class_prior`` parameter was moved
  from ``fit`` to ``__init__``.

People
------
List of contributors for release 0.13 by number of commits.

* 364  `Andreas Müller`_
* 143  `Arnaud Joly`_
* 137  `Peter Prettenhofer`_
* 131  `Gael Varoquaux`_
* 117  `Mathieu Blondel`_
* 108  `Lars Buitinck`_
* 106  Wei Li
* 101  `Olivier Grisel`_
*  65  `Vlad Niculae`_
*  54  `Gilles Louppe`_
*  40  `Jaques Grobler`_
*  38  `Alexandre Gramfort`_
*  30  `Rob Zinkov`_
*  19  Aymeric Masurelle
*  18  Andrew Winterman
*  17  `Fabian Pedregosa`_
*  17  Nelle Varoquaux
*  16  `Christian Osendorfer`_
*  14  `Daniel Nouri`_
*  13  :user:`Virgile Fritsch <VirgileFritsch>`
*  13  syhw
*  12  `Satrajit Ghosh`_
*  10  Corey Lynch
*  10  Kyle Beauchamp
*   9  Brian Cheung
*   9  Immanuel Bayer
*   9  mr.Shu
*   8  Conrad Lee
*   8  `James Bergstra`_
*   7  Tadej Janež
*   6  Brian Cajes
*   6  `Jake Vanderplas`_
*   6  Michael
*   6  Noel Dawe
*   6  Tiago Nunes
*   6  cow
*   5  Anze
*   5  Shiqiao Du
*   4  Christian Jauvin
*   4  Jacques Kvam
*   4  Richard T. Guy
*   4  `Robert Layton`_
*   3  Alexandre Abraham
*   3  Doug Coleman
*   3  Scott Dickerson
*   2  ApproximateIdentity
*   2  John Benediktsson
*   2  Mark Veronda
*   2  Matti Lyra
*   2  Mikhail Korobov
*   2  Xinfan Meng
*   1  Alejandro Weinstein
*   1  `Alexandre Passos`_
*   1  Christoph Deil
*   1  Eugene Nizhibitsky
*   1  Kenneth C. Arnold
*   1  Luis Pedro Coelho
*   1  Miroslav Batchkarov
*   1  Pavel
*   1  Sebastian Berg
*   1  Shaun Jackman
*   1  Subhodeep Moitra
*   1  bob
*   1  dengemann
*   1  emanuele
*   1  x006
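Among the API changes above, the ``zero_one`` → ``zero_one_loss`` rename also flipped the default output from a raw count to a fraction (``normalize=False`` → ``normalize=True``). The following is a minimal pure-Python sketch of that semantics, for illustration only; it is not scikit-learn's implementation:

```python
def zero_one_loss(y_true, y_pred, normalize=True):
    """Sketch of the 0-1 loss semantics after the rename.

    normalize=True (the new default) returns the fraction of
    misclassified samples; normalize=False returns the raw count,
    matching the old ``zero_one`` behaviour.
    """
    errors = sum(t != p for t, p in zip(y_true, y_pred))
    return errors / len(y_true) if normalize else errors

print(zero_one_loss([1, 0, 1, 1], [1, 1, 1, 0]))                   # 0.5
print(zero_one_loss([1, 0, 1, 1], [1, 1, 1, 0], normalize=False))  # 2
```

Code written against the old ``zero_one`` default therefore needs to pass ``normalize=False`` explicitly to keep getting a count.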
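The ``rho`` → ``l1_ratio`` replacement described in the API changes above is easiest to see in the penalty itself. A hedged, illustrative sketch (the exact scaling inside scikit-learn's objective may differ):

```python
def elastic_net_penalty(w, alpha=1.0, l1_ratio=0.5):
    """Illustrative penalty showing what ``l1_ratio`` controls.

    l1_ratio=1.0 gives a pure L1 (lasso-like) penalty and
    l1_ratio=0.0 a pure L2 (ridge-like) penalty; values in
    between blend the two.
    """
    l1 = sum(abs(x) for x in w)   # L1 norm of the coefficients
    sq = sum(x * x for x in w)    # squared L2 norm
    return alpha * (l1_ratio * l1 + 0.5 * (1.0 - l1_ratio) * sq)

print(elastic_net_penalty([1.0, -2.0], l1_ratio=1.0))  # 3.0 (pure L1)
print(elastic_net_penalty([1.0, -2.0], l1_ratio=0.0))  # 2.5 (pure L2)
```

Under the old spelling, ``rho`` played the role of ``l1_ratio`` in ElasticNet but of ``(1-l1_ratio)`` in SGDClassifier, which is the confusion the rename removed.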
.. include:: _contributors.rst

.. currentmodule:: sklearn

============
Version 0.15
============

.. _changes_0_15_2:

Version 0.15.2
==============

**September 4, 2014**

Bug fixes
---------

- Fixed handling of the ``p`` parameter of the Minkowski distance that was
  previously ignored in nearest neighbors models. By :user:`Nikolay
  Mayorov <nmayorov>`.

- Fixed duplicated alphas in :class:`linear_model.LassoLars` with early
  stopping on 32 bit Python. By `Olivier Grisel`_ and `Fabian Pedregosa`_.

- Fixed the build under Windows when scikit-learn is built with MSVC while
  NumPy is built with MinGW. By `Olivier Grisel`_ and :user:`Federico
  Vaggi <FedericoV>`.

- Fixed an array index overflow bug in the coordinate descent solver. By
  `Gael Varoquaux`_.

- Better handling of numpy 1.9 deprecation warnings. By `Gael Varoquaux`_.

- Removed unnecessary data copy in :class:`cluster.KMeans`.
  By `Gael Varoquaux`_.

- Explicitly close open files to avoid ``ResourceWarnings`` under Python 3.
  By Calvin Giles.

- The ``transform`` of :class:`discriminant_analysis.LinearDiscriminantAnalysis`
  now projects the input on the most discriminant directions. By Martin Billinger.

- Fixed potential overflow in ``_tree.safe_realloc`` by `Lars Buitinck`_.

- Performance optimization in :class:`isotonic.IsotonicRegression`.
  By Robert Bradshaw.

- ``nose`` is no longer a runtime dependency to import ``sklearn``, only for
  running the tests. By `Joel Nothman`_.

- Many documentation and website fixes by `Joel Nothman`_, `Lars Buitinck`_,
  :user:`Matt Pico <MattpSoftware>`, and others.

.. _changes_0_15_1:

Version 0.15.1
==============

**August 1, 2014**

Bug fixes
---------

- Made `cross_validation.cross_val_score` use
  `cross_validation.KFold` instead of
  `cross_validation.StratifiedKFold` on multi-output classification
  problems. By :user:`Nikolay Mayorov <nmayorov>`.

- Support unseen labels in :class:`preprocessing.LabelBinarizer` to restore
  the default behavior of 0.14.1 for backward compatibility. By
  :user:`Hamzeh Alsalhi <hamsal>`.

- Fixed the :class:`cluster.KMeans` stopping criterion that prevented early
  convergence detection. By Edward Raff and `Gael Varoquaux`_.

- Fixed the behavior of :class:`multiclass.OneVsOneClassifier`
  in case of ties at the per-class vote level by computing the correct
  per-class sum of prediction scores. By `Andreas Müller`_.

- Made `cross_validation.cross_val_score` and
  `grid_search.GridSearchCV` accept Python lists as input data.
  This is especially useful for cross-validation and model selection of
  text processing pipelines. By `Andreas Müller`_.

- Fixed data input checks of most estimators to accept input data that
  implements the NumPy ``__array__`` protocol. This is the case for
  ``pandas.Series`` and ``pandas.DataFrame`` in recent versions of
  pandas. By `Gael Varoquaux`_.

- Fixed a regression for :class:`linear_model.SGDClassifier` with
  ``class_weight="auto"`` on data with non-contiguous labels. By
  `Olivier Grisel`_.


.. _changes_0_15:

Version 0.15
============

**July 15, 2014**

Highlights
----------

- Many speed and memory improvements all across the code

- Huge speed and memory improvements to random forests (and extra
  trees) that also benefit better from parallel computing.

- Incremental fit to :class:`BernoulliRBM <neural_network.BernoulliRBM>`

- Added :class:`cluster.AgglomerativeClustering` for hierarchical
  agglomerative clustering with average linkage, complete linkage and
  ward strategies.

- Added :class:`linear_model.RANSACRegressor` for robust regression
  models.

- Added dimensionality reduction with :class:`manifold.TSNE` which can be
  used to visualize high-dimensional data.


Changelog
---------

New features
............

- Added :class:`ensemble.BaggingClassifier` and
  :class:`ensemble.BaggingRegressor` meta-estimators for ensembling
  any kind of base estimator. See the :ref:`Bagging <bagging>` section of
  the user guide for details and examples. By `Gilles Louppe`_.

- New unsupervised feature selection algorithm
  :class:`feature_selection.VarianceThreshold`, by `Lars Buitinck`_.

- Added :class:`linear_model.RANSACRegressor` meta-estimator for the robust
  fitting of regression models.
  By :user:`Johannes Schönberger <ahojnnes>`.

- Added :class:`cluster.AgglomerativeClustering` for hierarchical
  agglomerative clustering with average linkage, complete linkage and
  ward strategies, by `Nelle Varoquaux`_ and `Gael Varoquaux`_.

- Shorthand constructors :func:`pipeline.make_pipeline` and
  :func:`pipeline.make_union` were added by `Lars Buitinck`_.

- Shuffle option for `cross_validation.StratifiedKFold`.
  By :user:`Jeffrey Blackburne <jblackburne>`.

- Incremental learning (``partial_fit``) for Gaussian Naive Bayes by
  Imran Haque.

- Added ``partial_fit`` to :class:`BernoulliRBM
  <neural_network.BernoulliRBM>`.
  By :user:`Danny Sullivan <dsullivan7>`.

- Added `learning_curve` utility to
  chart performance with respect to training size. See
  :ref:`sphx_glr_auto_examples_model_selection_plot_learning_curve.py`. By Alexander Fabisch.

- Added a positive option in :class:`LassoCV <linear_model.LassoCV>` and
  :class:`ElasticNetCV <linear_model.ElasticNetCV>`.
  By Brian Wignall and `Alexandre Gramfort`_.

- Added :class:`linear_model.MultiTaskElasticNetCV` and
  :class:`linear_model.MultiTaskLassoCV`. By `Manoj Kumar`_.

- Added :class:`manifold.TSNE`. By Alexander Fabisch.

Enhancements
............

- Add sparse input support to :class:`ensemble.AdaBoostClassifier` and
  :class:`ensemble.AdaBoostRegressor` meta-estimators.
  By :user:`Hamzeh Alsalhi <hamsal>`.

- Memory improvements of decision trees, by `Arnaud Joly`_.

- Decision trees can now be built in best-first manner by using ``max_leaf_nodes``
  as the stopping criterion.
  Refactored the tree code to use either a
  stack or a priority queue for tree building.
  By `Peter Prettenhofer`_ and `Gilles Louppe`_.

- Decision trees can now be fitted on fortran- and c-style arrays, and
  non-contiguous arrays without the need to make a copy.
  If the input array has a different dtype than ``np.float32``, a fortran-
  style copy will be made since fortran-style memory layout has speed
  advantages. By `Peter Prettenhofer`_ and `Gilles Louppe`_.

- Speed improvement of regression trees by optimizing the
  computation of the mean square error criterion. This led
  to a speed improvement of the tree, forest and gradient boosting tree
  modules. By `Arnaud Joly`_.

- The ``img_to_graph`` and ``grid_to_graph`` functions in
  :mod:`sklearn.feature_extraction.image` now return ``np.ndarray``
  instead of ``np.matrix`` when ``return_as=np.ndarray``. See the
  Notes section for more information on compatibility.

- Changed the internal storage of decision trees to use a struct array.
  This fixed some small bugs, while improving code and providing a small
  speed gain. By `Joel Nothman`_.

- Reduce memory usage and overhead when fitting and predicting with forests
  of randomized trees in parallel with ``n_jobs != 1`` by leveraging the new
  threading backend of joblib 0.8 and releasing the GIL in the tree fitting
  Cython code. By `Olivier Grisel`_ and `Gilles Louppe`_.

- Speed improvement of the `sklearn.ensemble.gradient_boosting` module.
  By `Gilles Louppe`_ and `Peter Prettenhofer`_.

- Various enhancements to the `sklearn.ensemble.gradient_boosting`
  module: a ``warm_start`` argument to fit additional trees,
  a ``max_leaf_nodes`` argument to fit GBM style trees,
  a ``monitor`` fit argument to inspect the estimator during training, and
  refactoring of the verbose code.
  By `Peter Prettenhofer`_.

- Faster `sklearn.ensemble.ExtraTrees` by caching feature values.
  By `Arnaud Joly`_.

- Faster depth-based tree building algorithms such as decision tree,
  random forest, extra trees or gradient tree boosting (with depth-based
  growing strategy) by avoiding attempts to split on constant features
  found in the sample subset. By `Arnaud Joly`_.

- Add ``min_weight_fraction_leaf`` pre-pruning parameter to tree-based
  methods: the minimum weighted fraction of the input samples required to be
  at a leaf node. By `Noel Dawe`_.

- Added :func:`metrics.pairwise_distances_argmin_min`, by Philippe Gervais.

- Added a predict method to :class:`cluster.AffinityPropagation` and
  :class:`cluster.MeanShift`, by `Mathieu Blondel`_.

- Vector and matrix multiplications have been optimised throughout the
  library by `Denis Engemann`_ and `Alexandre Gramfort`_.
  In particular, they should take less memory with older NumPy versions
  (prior to 1.7.2).

- Precision-recall and ROC examples now use train_test_split, and have more
  explanation of why these metrics are useful. By `Kyle Kastner`_.

- The training algorithm for :class:`decomposition.NMF` is faster for
  sparse matrices and has much lower memory complexity, meaning it will
  scale up gracefully to large datasets.
  By `Lars Buitinck`_.

- Added an ``svd_method`` option, with default value "randomized", to
  :class:`decomposition.FactorAnalysis` to save memory and
  significantly speed up computation, by `Denis Engemann`_ and
  `Alexandre Gramfort`_.

- Changed `cross_validation.StratifiedKFold` to try and
  preserve as much of the original ordering of samples as possible so as
  not to hide overfitting on datasets with a non-negligible level of
  sample dependency.
  By `Daniel Nouri`_ and `Olivier Grisel`_.

- Add multi-output support to :class:`gaussian_process.GaussianProcessRegressor`
  by John Novak.

- Support for precomputed distance matrices in nearest neighbor estimators
  by `Robert Layton`_ and `Joel Nothman`_.

- Norm computations optimized for NumPy 1.6 and later versions by
  `Lars Buitinck`_. In particular, the k-means algorithm no longer
  needs a temporary data structure the size of its input.

- :class:`dummy.DummyClassifier` can now be used to predict a constant
  output value. By `Manoj Kumar`_.

- :class:`dummy.DummyRegressor` now has a ``strategy`` parameter which allows
  predicting the mean, the median of the training set, or a constant
  output value.
  By :user:`Maheshakya Wijewardena <maheshakya>`.

- Multi-label classification output in multilabel indicator format
  is now supported by :func:`metrics.roc_auc_score` and
  :func:`metrics.average_precision_score`, by `Arnaud Joly`_.

- Significant performance improvements (more than 100x speedup for
  large problems) in :class:`isotonic.IsotonicRegression`, by
  `Andrew Tulloch`_.

- Speed and memory usage improvements to the SGD algorithm for linear
  models: it now uses threads, not separate processes, when ``n_jobs>1``.
  By `Lars Buitinck`_.

- Grid search and cross validation allow NaNs in the input arrays so that
  preprocessors such as `preprocessing.Imputer` can be trained within the cross
  validation loop, avoiding potentially skewed results.

- Ridge regression can now deal with sample weights in feature space
  (previously only in sample space). By :user:`Michael Eickenberg <eickenberg>`.
  Both solutions are provided by the Cholesky solver.

- Several classification and regression metrics now support weighted
  samples with the new ``sample_weight`` argument:
  :func:`metrics.accuracy_score`,
  :func:`metrics.zero_one_loss`,
  :func:`metrics.precision_score`,
  :func:`metrics.average_precision_score`,
  :func:`metrics.f1_score`,
  :func:`metrics.fbeta_score`,
  :func:`metrics.recall_score`,
  :func:`metrics.roc_auc_score`,
  :func:`metrics.explained_variance_score`,
  :func:`metrics.mean_squared_error`,
  :func:`metrics.mean_absolute_error`,
  :func:`metrics.r2_score`.
  By `Noel Dawe`_.

- Speed up of the sample generator
  :func:`datasets.make_multilabel_classification`.
  By `Joel Nothman`_.

Documentation improvements
...........................

- The Working With Text Data tutorial
  has now been worked into the main documentation's tutorial section.
  Includes exercises and skeletons for tutorial presentation.
  Original tutorial created by several authors including
  `Olivier Grisel`_, Lars Buitinck and many others.
  Tutorial integration into the scikit-learn documentation
  by `Jaques Grobler`_.

- Added :ref:`Computational Performance <computational_performance>`
  documentation. Discussion and examples of prediction latency / throughput
  and different factors that have influence over speed. Additional tips for
  building faster models and choosing a relevant compromise between speed
  and predictive power.
  By :user:`Eustache Diemert <oddskool>`.

Bug fixes
.........

- Fixed bug in :class:`decomposition.MiniBatchDictionaryLearning`:
  ``partial_fit`` was not working properly.

- Fixed bug in `linear_model.stochastic_gradient`:
  ``l1_ratio`` was used as ``(1.0 - l1_ratio)``.

- Fixed bug in :class:`multiclass.OneVsOneClassifier` with string
  labels.

- Fixed a bug in :class:`LassoCV <linear_model.LassoCV>` and
  :class:`ElasticNetCV <linear_model.ElasticNetCV>`: they would not
  pre-compute the Gram matrix with ``precompute=True`` or
  ``precompute="auto"`` and ``n_samples > n_features``.
  By `Manoj Kumar`_.

- Fixed incorrect estimation of the degrees of freedom in
  :func:`feature_selection.f_regression` when variates are not centered.
  By :user:`Virgile Fritsch <VirgileFritsch>`.

- Fixed a race condition in parallel processing with
  ``pre_dispatch != "all"`` (for instance, in ``cross_val_score``).
  By `Olivier Grisel`_.

- Raise error in :class:`cluster.FeatureAgglomeration` and
  `cluster.WardAgglomeration` when no samples are given,
  rather than returning meaningless clustering.

- Fixed bug in `gradient_boosting.GradientBoostingRegressor` with
  ``loss='huber'``: ``gamma`` might not have been initialized.

- Fixed feature importances as computed with a forest of randomized trees
  when fit with ``sample_weight != None`` and/or with ``bootstrap=True``.
  By `Gilles Louppe`_.

API changes summary
-------------------

- `sklearn.hmm` is deprecated. Its removal is planned
  for the 0.17 release.

- Use of `covariance.EllipticEnvelop` has now been removed after
  deprecation.
  Please use :class:`covariance.EllipticEnvelope` instead.

- `cluster.Ward` is deprecated. Use
  :class:`cluster.AgglomerativeClustering` instead.

- `cluster.WardClustering` is deprecated. Use
  :class:`cluster.AgglomerativeClustering` instead.

- `cross_validation.Bootstrap` is deprecated.
  `cross_validation.KFold` or
  `cross_validation.ShuffleSplit` are recommended instead.

- Direct support for the sequence of sequences (or list of lists) multilabel
  format is deprecated. To convert to and from the supported binary
  indicator matrix format, use
  :class:`preprocessing.MultiLabelBinarizer`.
  By `Joel Nothman`_.

- Add score method to :class:`decomposition.PCA` following the model of
  probabilistic PCA and deprecate the
  `ProbabilisticPCA` model, whose
  score implementation is not correct. The computation now also exploits the
  matrix inversion lemma for faster computation.
  By `Alexandre Gramfort`_.

- The score method of :class:`decomposition.FactorAnalysis`
  now returns the average log-likelihood of the samples. Use ``score_samples``
  to get the log-likelihood of each sample. By `Alexandre Gramfort`_.

- Generating boolean masks (the setting ``indices=False``)
  from cross-validation generators is deprecated.
  Support for masks will be removed in 0.17.
  The generators have produced arrays of indices by default since 0.10.
  By `Joel Nothman`_.

- 1-d arrays containing strings with ``dtype=object`` (as used in Pandas)
  are now considered valid classification targets. This fixes a regression
  from version 0.13 in some classifiers. By `Joel Nothman`_.

- Fix wrong ``explained_variance_ratio_`` attribute in
  `RandomizedPCA`.
  By `Alexandre Gramfort`_.

- Fit alphas for each ``l1_ratio`` instead of ``mean_l1_ratio`` in
  :class:`linear_model.ElasticNetCV` and :class:`linear_model.LassoCV`.
  This changes the shape of ``alphas_`` from ``(n_alphas,)`` to
  ``(n_l1_ratio, n_alphas)`` if the ``l1_ratio`` provided is a 1-D array-like
  object of length greater than one.
  By `Manoj Kumar`_.

- Fix :class:`linear_model.ElasticNetCV` and :class:`linear_model.LassoCV`
  when fitting an intercept on sparse input data. The automatic grid
  of alphas was not computed correctly and the scaling with ``normalize``
  was wrong. By `Manoj Kumar`_.

- Fix wrong maximal number of features drawn (``max_features``) at each split
  for decision trees, random forests and gradient tree boosting.
  Previously, the count of drawn features started only after
  one non-constant feature was found in the split. This bug fix will affect
  the computational and generalization performance of those algorithms in the
  presence of constant features.
  To get back the previous generalization
  performance, you should modify the value of ``max_features``.
  By `Arnaud Joly`_.

- Fix wrong maximal number of features drawn (``max_features``) at each split
  for :class:`ensemble.ExtraTreesClassifier` and
  :class:`ensemble.ExtraTreesRegressor`. Previously, only non-constant
  features in the split were counted as drawn. Now constant features are
  counted as drawn. Furthermore, at least one feature must be non-constant
  in order to make a valid split. This bug fix will affect the
  computational and generalization performance of extra trees in the
  presence of constant features. To get back the previous generalization
  performance, you should modify the value of ``max_features``.
  By `Arnaud Joly`_.

- Fix :func:`utils.class_weight.compute_class_weight` when ``class_weight="auto"``.
  Previously it was broken for input of non-integer ``dtype`` and the
  weighted array that was returned was wrong. By `Manoj Kumar`_.

- Fix `cross_validation.Bootstrap` to raise ``ValueError``
  when ``n_train + n_test > n``.
  By :user:`Ronald Phlypo <rphlypo>`.


People
------

List of contributors for release 0.15 by number of commits.

* 312  Olivier Grisel
* 275  Lars Buitinck
* 221  Gael Varoquaux
* 148  Arnaud Joly
* 134  Johannes Schönberger
* 119  Gilles Louppe
* 113  Joel Nothman
* 111  Alexandre Gramfort
*  95  Jaques Grobler
*  89  Denis Engemann
*  83  Peter Prettenhofer
*  83  Alexander Fabisch
*  62  Mathieu Blondel
*  60  Eustache Diemert
*  60  Nelle Varoquaux
*  49  Michael Bommarito
*  45  Manoj-Kumar-S
*  28  Kyle Kastner
*  26  Andreas Mueller
*  22  Noel Dawe
*  21  Maheshakya Wijewardena
*  21  Brooke Osborn
*  21  Hamzeh Alsalhi
*  21  Jake VanderPlas
*  21  Philippe Gervais
*  19  Bala Subrahmanyam Varanasi
*  12  Ronald Phlypo
*  10  Mikhail Korobov
*   8  Thomas Unterthiner
*   8  Jeffrey Blackburne
*   8  eltermann
*   8  bwignall
*   7  Ankit Agrawal
*   7  CJ Carey
*   6  Daniel Nouri
*   6  Chen Liu
*   6  Michael Eickenberg
*   6  ugurthemaster
*   5  Aaron Schumacher
*   5  Baptiste Lagarde
*   5  Rajat Khanduja
*   5  Robert McGibbon
*   5  Sergio Pascual
*   4  Alexis Metaireau
*   4  Ignacio Rossi
*   4  Virgile Fritsch
*   4  Sebastian Säger
*   4  Ilambharathi Kanniah
*   4  sdenton4
*   4  Robert Layton
*   4  Alyssa
*   4  Amos Waterland
*   3  Andrew Tulloch
*   3  murad
*   3  Steven Maude
*   3  Karol Pysniak
*   3  Jacques Kvam
*   3  cgohlke
*   3  cjlin
*   3  Michael Becker
*   3  hamzeh
*   3  Eric Jacobsen
*   3  john collins
*   3  kaushik94
*   3  Erwin Marsi
*   2  csytracy
*   2  LK
*   2  Vlad Niculae
*   2  Laurent Direr
*   2  Erik Shilts
*   2  Raul Garreta
*   2  Yoshiki Vázquez Baeza
*   2  Yung Siang Liau
*   2  abhishek thakur
*   2  James Yu
*   2  Rohit Sivaprasad
*   2  Roland Szabo
*   2  amormachine
*   2  Alexis Mignon
*   2  Oscar Carlsson
*   2  Nantas Nardelli
*   2  jess010
*   2  kowalski87
*   2  Andrew Clegg
*   2  Federico Vaggi
*   2  Simon Frid
*   2  Félix-Antoine Fortin
*   1  Ralf Gommers
*   1  t-aft
*   1  Ronan Amicel
*   1  Rupesh Kumar Srivastava
*   1  Ryan Wang
*   1  Samuel Charron
*   1  Samuel St-Jean
*   1  Fabian Pedregosa
*   1  Skipper Seabold
*   1  Stefan Walk
*   1  Stefan van der Walt
*   1  Stephan Hoyer
*   1  Allen Riddell
*   1  Valentin Haenel
*   1  Vijay Ramesh
*   1  Will Myers
*   1  Yaroslav Halchenko
*   1  Yoni Ben-Meshulam
*   1  Yury V. Zaytsev
*   1  adrinjalali
*   1  ai8rahim
*   1  alemagnani
*   1  alex
*   1  benjamin wilson
*   1  chalmerlowe
*   1  dzikie drożdże
*   1  jamestwebber
*   1  matrixorz
*   1  popo
*   1  samuela
*   1  François Boulogne
*   1  Alexander Measure
*   1  Ethan White
*   1  Guilherme Trein
*   1  Hendrik Heuer
*   1  IvicaJovic
*   1  Jan Hendrik Metzen
*   1  Jean Michel Rouly
*   1  Eduardo Ariño de la Rubia
*   1  Jelle Zijlstra
*   1  Eddy L O Jansson
*   1  Denis
*   1  John
*   1  John Schmidt
*   1  Jorge Cañardo Alastuey
*   1  Joseph Perla
*   1  Joshua Vredevoogd
*   1  José Ricardo
*   1  Julien Miotte
*   1  Kemal Eren
*   1  Kenta Sato
*   1  David Cournapeau
*   1  Kyle Kelley
*   1  Daniele Medri
*   1  Laurent Luce
*   1  Laurent Pierron
*   1  Luis Pedro Coelho
*   1  DanielWeitzenfeld
*   1  Craig Thompson
*   1  Chyi-Kwei Yau
*   1  Matthew Brett
*   1  Matthias Feurer
*   1  Max Linke
*   1  Chris Filo Gorgolewski
*   1  Charles Earl
*   1  Michael Hanke
*   1  Michele Orrù
*   1  Bryan Lunt
*   1  Brian Kearns
*   1  Paul Butler
*   1  Paweł Mandera
*   1  Peter
*   1  Andrew Ash
*   1  Pietro Zambelli
*   1  staubda
{"questions":"scikit-learn sklearn contributors rst Version 0 14 changes014","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n============\nVersion 0.14\n============\n\n.. _changes_0_14:\n\nVersion 0.14\n===============\n\n**August 7, 2013**\n\nChangelog\n---------\n\n- Missing values with sparse and dense matrices can be imputed with the\n  transformer `preprocessing.Imputer` by `Nicolas Tr\u00e9segnie`_.\n\n- The core implementation of decisions trees has been rewritten from\n  scratch, allowing for faster tree induction and lower memory\n  consumption in all tree-based estimators. By `Gilles Louppe`_.\n\n- Added :class:`ensemble.AdaBoostClassifier` and\n  :class:`ensemble.AdaBoostRegressor`, by `Noel Dawe`_  and\n  `Gilles Louppe`_. See the :ref:`AdaBoost <adaboost>` section of the user\n  guide for details and examples.\n\n- Added `grid_search.RandomizedSearchCV` and\n  `grid_search.ParameterSampler` for randomized hyperparameter\n  optimization. By `Andreas M\u00fcller`_.\n\n- Added :ref:`biclustering <biclustering>` algorithms\n  (`sklearn.cluster.bicluster.SpectralCoclustering` and\n  `sklearn.cluster.bicluster.SpectralBiclustering`), data\n  generation methods (:func:`sklearn.datasets.make_biclusters` and\n  :func:`sklearn.datasets.make_checkerboard`), and scoring metrics\n  (:func:`sklearn.metrics.consensus_score`). By `Kemal Eren`_.\n\n- Added :ref:`Restricted Boltzmann Machines<rbm>`\n  (:class:`neural_network.BernoulliRBM`). By `Yann Dauphin`_.\n\n- Python 3 support by :user:`Justin Vincent <justinvf>`, `Lars Buitinck`_,\n  :user:`Subhodeep Moitra <smoitra87>` and `Olivier Grisel`_. 
All tests now pass under\n  Python 3.3.\n\n- Ability to pass one penalty (alpha value) per target in\n  :class:`linear_model.Ridge`, by @eickenberg and `Mathieu Blondel`_.\n\n- Fixed `sklearn.linear_model.stochastic_gradient.py` L2 regularization\n  issue (minor practical significance).\n  By :user:`Norbert Crombach <norbert>` and `Mathieu Blondel`_.\n\n- Added an interactive version of `Andreas M\u00fcller`_'s\n  `Machine Learning Cheat Sheet (for scikit-learn)\n  <https:\/\/peekaboo-vision.blogspot.de\/2013\/01\/machine-learning-cheat-sheet-for-scikit.html>`_\n  to the documentation. See :ref:`Choosing the right estimator <ml_map>`.\n  By `Jaques Grobler`_.\n\n- `grid_search.GridSearchCV` and\n  `cross_validation.cross_val_score` now support the use of advanced\n  scoring functions such as area under the ROC curve and f-beta scores.\n  See :ref:`scoring_parameter` for details. By `Andreas M\u00fcller`_\n  and `Lars Buitinck`_.\n  Passing a function from :mod:`sklearn.metrics` as ``score_func`` is\n  deprecated.\n\n- Multi-label classification output is now supported by\n  :func:`metrics.accuracy_score`, :func:`metrics.zero_one_loss`,\n  :func:`metrics.f1_score`, :func:`metrics.fbeta_score`,\n  :func:`metrics.classification_report`,\n  :func:`metrics.precision_score` and :func:`metrics.recall_score`\n  by `Arnaud Joly`_.\n\n- Two new metrics :func:`metrics.hamming_loss` and\n  `metrics.jaccard_similarity_score`\n  are added with multi-label support by `Arnaud Joly`_.\n\n- Speed and memory usage improvements in\n  :class:`feature_extraction.text.CountVectorizer` and\n  :class:`feature_extraction.text.TfidfVectorizer`,\n  by Jochen Wersd\u00f6rfer and Roman Sinayev.\n\n- The ``min_df`` parameter in\n  :class:`feature_extraction.text.CountVectorizer` and\n  :class:`feature_extraction.text.TfidfVectorizer`, which used to be 2,\n  has been reset to 1 to avoid unpleasant surprises (empty vocabularies)\n  for novice users who try it out on tiny document collections.\n  A 
value of at least 2 is still recommended for practical use.\n\n- :class:`svm.LinearSVC`, :class:`linear_model.SGDClassifier` and\n  :class:`linear_model.SGDRegressor` now have a ``sparsify`` method that\n  converts their ``coef_`` into a sparse matrix, meaning stored models\n  trained using these estimators can be made much more compact.\n\n- :class:`linear_model.SGDClassifier` now produces multiclass probability\n  estimates when trained under log loss or modified Huber loss.\n\n- Hyperlinks to documentation in example code on the website by\n  :user:`Martin Luessi <mluessi>`.\n\n- Fixed bug in :class:`preprocessing.MinMaxScaler` causing incorrect scaling\n  of the features for non-default ``feature_range`` settings. By `Andreas\n  M\u00fcller`_.\n\n- ``max_features`` in :class:`tree.DecisionTreeClassifier`,\n  :class:`tree.DecisionTreeRegressor` and all derived ensemble estimators\n  now supports percentage values. By `Gilles Louppe`_.\n\n- Performance improvements in :class:`isotonic.IsotonicRegression` by\n  `Nelle Varoquaux`_.\n\n- :func:`metrics.accuracy_score` has an option ``normalize`` to return\n  the fraction or the number of correctly classified samples\n  by `Arnaud Joly`_.\n\n- Added :func:`metrics.log_loss` that computes log loss, aka cross-entropy\n  loss. By Jochen Wersd\u00f6rfer and `Lars Buitinck`_.\n\n- A bug that caused :class:`ensemble.AdaBoostClassifier` to output\n  incorrect probabilities has been fixed.\n\n- Feature selectors now share a mixin providing consistent ``transform``,\n  ``inverse_transform`` and ``get_support`` methods. By `Joel Nothman`_.\n\n- A fitted `grid_search.GridSearchCV` or\n  `grid_search.RandomizedSearchCV` can now generally be pickled.\n  By `Joel Nothman`_.\n\n- Refactored and vectorized implementation of :func:`metrics.roc_curve`\n  and :func:`metrics.precision_recall_curve`. 
By `Joel Nothman`_.\n\n- The new estimator :class:`sklearn.decomposition.TruncatedSVD`\n  performs dimensionality reduction using SVD on sparse matrices,\n  and can be used for latent semantic analysis (LSA).\n  By `Lars Buitinck`_.\n\n- Added self-contained example of out-of-core learning on text data\n  :ref:`sphx_glr_auto_examples_applications_plot_out_of_core_classification.py`.\n  By :user:`Eustache Diemert <oddskool>`.\n\n- The default number of components for\n  `sklearn.decomposition.RandomizedPCA` is now correctly documented\n  to be ``n_features``. This was the default behavior, so programs using it\n  will continue to work as they did.\n\n- :class:`sklearn.cluster.KMeans` now fits several orders of magnitude\n  faster on sparse data (the speedup depends on the sparsity). By\n  `Lars Buitinck`_.\n\n- Reduced the memory footprint of FastICA by `Denis Engemann`_ and\n  `Alexandre Gramfort`_.\n\n- Verbose output in `sklearn.ensemble.gradient_boosting` now uses\n  a column format and prints progress with decreasing frequency.\n  It also shows the remaining time. By `Peter Prettenhofer`_.\n\n- `sklearn.ensemble.gradient_boosting` provides out-of-bag improvement\n  `oob_improvement_`\n  rather than the OOB score for model selection. An example that shows\n  how to use OOB estimates to select the number of trees was added.\n  By `Peter Prettenhofer`_.\n\n- Most metrics now support string labels for multiclass classification\n  by `Arnaud Joly`_ and `Lars Buitinck`_.\n\n- New OrthogonalMatchingPursuitCV class by `Alexandre Gramfort`_\n  and `Vlad Niculae`_.\n\n- Fixed a bug in `sklearn.covariance.GraphLassoCV`: the\n  'alphas' parameter now works as expected when given a list of\n  values. By Philippe Gervais.\n\n- Fixed an important bug in `sklearn.covariance.GraphLassoCV`\n  that prevented all folds provided by a CV object from being used (only\n  the first 3 were used). 
When providing a CV object, execution\n  time may thus increase significantly compared to the previous\n  version (but results are correct now). By Philippe Gervais.\n\n- `cross_validation.cross_val_score` and the `grid_search`\n  module are now tested with multi-output data by `Arnaud Joly`_.\n\n- :func:`datasets.make_multilabel_classification` can now return\n  the output in label indicator multilabel format by `Arnaud Joly`_.\n\n- K-nearest neighbors, :class:`neighbors.KNeighborsClassifier`\n  and :class:`neighbors.KNeighborsRegressor`,\n  and radius neighbors, :class:`neighbors.RadiusNeighborsClassifier` and\n  :class:`neighbors.RadiusNeighborsRegressor`, support multioutput data\n  by `Arnaud Joly`_.\n\n- Random state in LibSVM-based estimators (:class:`svm.SVC`, :class:`svm.NuSVC`,\n  :class:`svm.OneClassSVM`, :class:`svm.SVR`, :class:`svm.NuSVR`) can now be\n  controlled.  This is useful to ensure consistency in the probability\n  estimates for the classifiers trained with ``probability=True``. 
By\n  `Vlad Niculae`_.\n\n- Out-of-core learning support for discrete naive Bayes classifiers\n  :class:`sklearn.naive_bayes.MultinomialNB` and\n  :class:`sklearn.naive_bayes.BernoulliNB` by adding the ``partial_fit``\n  method by `Olivier Grisel`_.\n\n- New website design and navigation by `Gilles Louppe`_, `Nelle Varoquaux`_,\n  Vincent Michel and `Andreas M\u00fcller`_.\n\n- Improved documentation on :ref:`multi-class, multi-label and multi-output\n  classification <multiclass>` by `Yannick Schwartz`_ and `Arnaud Joly`_.\n\n- Better input and error handling in the :mod:`sklearn.metrics` module by\n  `Arnaud Joly`_ and `Joel Nothman`_.\n\n- Speed optimization of the `hmm` module by :user:`Mikhail Korobov <kmike>`.\n\n- Significant speed improvements for :class:`sklearn.cluster.DBSCAN`\n  by `cleverless <https:\/\/github.com\/cleverless>`_.\n\n\nAPI changes summary\n-------------------\n\n- The `auc_score` was renamed :func:`metrics.roc_auc_score`.\n\n- Testing scikit-learn with ``sklearn.test()`` is deprecated. Use\n  ``nosetests sklearn`` from the command line.\n\n- Feature importances in :class:`tree.DecisionTreeClassifier`,\n  :class:`tree.DecisionTreeRegressor` and all derived ensemble estimators\n  are now computed on the fly when accessing the ``feature_importances_``\n  attribute. Setting ``compute_importances=True`` is no longer required.\n  By `Gilles Louppe`_.\n\n- :func:`linear_model.lasso_path` and\n  :func:`linear_model.enet_path` can return their results in the same\n  format as that of :func:`linear_model.lars_path`. This is done by\n  setting the ``return_models`` parameter to ``False``. By\n  `Jaques Grobler`_ and `Alexandre Gramfort`_.\n\n- `grid_search.IterGrid` was renamed to `grid_search.ParameterGrid`.\n\n- Fixed bug in `KFold` causing imperfect class balance in some\n  cases. 
By `Alexandre Gramfort`_ and Tadej Jane\u017e.\n\n- :class:`sklearn.neighbors.BallTree` has been refactored, and a\n  :class:`sklearn.neighbors.KDTree` has been\n  added, which shares the same interface.  The Ball Tree now works with\n  a wide variety of distance metrics.  Both classes have many new\n  methods, including single-tree and dual-tree queries, breadth-first\n  and depth-first searching, and more advanced queries such as\n  kernel density estimation and 2-point correlation functions.\n  By `Jake Vanderplas`_.\n\n- Support for scipy.spatial.cKDTree within neighbors queries has been\n  removed, and the functionality replaced with the new\n  :class:`sklearn.neighbors.KDTree` class.\n\n- :class:`sklearn.neighbors.KernelDensity` has been added, which performs\n  efficient kernel density estimation with a variety of kernels.\n\n- :class:`sklearn.decomposition.KernelPCA` now always returns output with\n  ``n_components`` components, unless the new parameter ``remove_zero_eig``\n  is set to ``True``. This new behavior is consistent with the way\n  kernel PCA was always documented; previously, the removal of components\n  with zero eigenvalues was tacitly performed on all data.\n\n- ``gcv_mode=\"auto\"`` no longer tries to perform SVD on a densified\n  sparse matrix in :class:`sklearn.linear_model.RidgeCV`.\n\n- Sparse matrix support in `sklearn.decomposition.RandomizedPCA`\n  is now deprecated in favor of the new ``TruncatedSVD``.\n\n- `cross_validation.KFold` and\n  `cross_validation.StratifiedKFold` now enforce `n_folds >= 2`;\n  otherwise a ``ValueError`` is raised. 
By `Olivier Grisel`_.\n\n- :func:`datasets.load_files`'s ``charset`` and ``charset_errors``\n  parameters were renamed ``encoding`` and ``decode_errors``.\n\n- Attribute ``oob_score_`` in :class:`sklearn.ensemble.GradientBoostingRegressor`\n  and :class:`sklearn.ensemble.GradientBoostingClassifier`\n  is deprecated and has been replaced by ``oob_improvement_``.\n\n- Attributes in OrthogonalMatchingPursuit have been deprecated\n  (``copy_X``, ``Gram``, ...) and ``precompute_gram`` was renamed\n  ``precompute`` for consistency. See #2224.\n\n- :class:`sklearn.preprocessing.StandardScaler` now converts integer input\n  to float, and raises a warning. Previously it rounded for dense integer\n  input.\n\n- :class:`sklearn.multiclass.OneVsRestClassifier` now has a\n  ``decision_function`` method. This will return the distance of each\n  sample from the decision boundary for each class, as long as the\n  underlying estimators implement the ``decision_function`` method.\n  By `Kyle Kastner`_.\n\n- Better input validation, warning on unexpected shapes for y.\n\nPeople\n------\nList of contributors for release 0.14 by number of commits.\n\n* 277  Gilles Louppe\n* 245  Lars Buitinck\n* 187  Andreas Mueller\n* 124  Arnaud Joly\n* 112  Jaques Grobler\n* 109  Gael Varoquaux\n* 107  Olivier Grisel\n* 102  Noel Dawe\n*  99  Kemal Eren\n*  79  Joel Nothman\n*  75  Jake VanderPlas\n*  73  Nelle Varoquaux\n*  71  Vlad Niculae\n*  65  Peter Prettenhofer\n*  64  Alexandre Gramfort\n*  54  Mathieu Blondel\n*  38  Nicolas Tr\u00e9segnie\n*  35  eustache\n*  27  Denis Engemann\n*  25  Yann N. 
Dauphin\n*  19  Justin Vincent\n*  17  Robert Layton\n*  15  Doug Coleman\n*  14  Michael Eickenberg\n*  13  Robert Marchman\n*  11  Fabian Pedregosa\n*  11  Philippe Gervais\n*  10  Jim Holmstr\u00f6m\n*  10  Tadej Jane\u017e\n*  10  syhw\n*   9  Mikhail Korobov\n*   9  Steven De Gryze\n*   8  sergeyf\n*   7  Ben Root\n*   7  Hrishikesh Huilgolkar\n*   6  Kyle Kastner\n*   6  Martin Luessi\n*   6  Rob Speer\n*   5  Federico Vaggi\n*   5  Raul Garreta\n*   5  Rob Zinkov\n*   4  Ken Geis\n*   3  A. Flaxman\n*   3  Denton Cockburn\n*   3  Dougal Sutherland\n*   3  Ian Ozsvald\n*   3  Johannes Sch\u00f6nberger\n*   3  Robert McGibbon\n*   3  Roman Sinayev\n*   3  Szabo Roland\n*   2  Diego Molla\n*   2  Imran Haque\n*   2  Jochen Wersd\u00f6rfer\n*   2  Sergey Karayev\n*   2  Yannick Schwartz\n*   2  jamestwebber\n*   1  Abhijeet Kolhe\n*   1  Alexander Fabisch\n*   1  Bastiaan van den Berg\n*   1  Benjamin Peterson\n*   1  Daniel Velkov\n*   1  Fazlul Shahriar\n*   1  Felix Brockherde\n*   1  F\u00e9lix-Antoine Fortin\n*   1  Harikrishnan S\n*   1  Jack Hale\n*   1  JakeMick\n*   1  James McDermott\n*   1  John Benediktsson\n*   1  John Zwinck\n*   1  Joshua Vredevoogd\n*   1  Justin Pati\n*   1  Kevin Hughes\n*   1  Kyle Kelley\n*   1  Matthias Ekman\n*   1  Miroslav Shubernetskiy\n*   1  Naoki Orii\n*   1  Norbert Crombach\n*   1  Rafael Cunha de Almeida\n*   1  Rolando Espinoza La fuente\n*   1  Seamus Abshere\n*   1  Sergey Feldman\n*   1  Sergio Medina\n*   1  Stefano Lattarini\n*   1  Steve Koch\n*   1  Sturla Molden\n*   1  Thomas Jarosch\n*   1  Yaroslav Halchenko","site":"scikit-learn"}
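Two of the 0.14 metric entries above, the ``normalize`` option of ``metrics.accuracy_score`` and the new cross-entropy metric ``metrics.log_loss``, reduce to one-line formulas. A minimal pure-Python sketch of those formulas (the function names mirror the scikit-learn API for readability, but this is not the library implementation):

```python
import math

def accuracy_score(y_true, y_pred, normalize=True):
    """Fraction (normalize=True) or raw count (normalize=False) of exact matches."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true) if normalize else correct

def log_loss(y_true, y_prob, eps=1e-15):
    """Binary cross-entropy: -mean(y*log(p) + (1-y)*log(1-p))."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)  # clip so log() never sees 0 or 1
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(y_true)

print(accuracy_score([0, 1, 1, 0], [0, 1, 0, 0]))                   # 0.75
print(accuracy_score([0, 1, 1, 0], [0, 1, 0, 0], normalize=False))  # 3
print(round(log_loss([1, 0], [0.9, 0.1]), 4))                       # 0.1054
```

With ``normalize=False`` the count of correctly classified samples is returned instead of the fraction, which is exactly the behavior the changelog entry describes.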
{"questions":"scikit-learn sklearn contributors rst Version 0 16 changes0161","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n============\nVersion 0.16\n============\n\n.. _changes_0_16_1:\n\nVersion 0.16.1\n===============\n\n**April 14, 2015**\n\nChangelog\n---------\n\nBug fixes\n.........\n\n- Allow input data larger than ``block_size`` in\n  :class:`covariance.LedoitWolf` by `Andreas M\u00fcller`_.\n\n- Fix a bug in :class:`isotonic.IsotonicRegression` deduplication that\n  caused unstable results in :class:`calibration.CalibratedClassifierCV` by\n  `Jan Hendrik Metzen`_.\n\n- Fix sorting of labels in :func:`preprocessing.label_binarize` by Michael Heilman.\n\n- Fix several stability and convergence issues in\n  :class:`cross_decomposition.CCA` and\n  :class:`cross_decomposition.PLSCanonical` by `Andreas M\u00fcller`_.\n\n- Fix a bug in :class:`cluster.KMeans` when ``precompute_distances=False``\n  on Fortran-ordered data.\n\n- Fix a speed regression in :class:`ensemble.RandomForestClassifier`'s ``predict``\n  and ``predict_proba`` by `Andreas M\u00fcller`_.\n\n- Fix a regression where ``utils.shuffle`` converted lists and dataframes to arrays, by `Olivier Grisel`_.\n\n.. 
_changes_0_16:\n\nVersion 0.16\n============\n\n**March 26, 2015**\n\nHighlights\n-----------\n\n- Speed improvements (notably in :class:`cluster.DBSCAN`), reduced memory\n  requirements, bug fixes and better default settings.\n\n- Multinomial Logistic regression and a path algorithm in\n  :class:`linear_model.LogisticRegressionCV`.\n\n- Out-of-core learning of PCA via :class:`decomposition.IncrementalPCA`.\n\n- Probability calibration of classifiers using\n  :class:`calibration.CalibratedClassifierCV`.\n\n- :class:`cluster.Birch` clustering method for large-scale datasets.\n\n- Scalable approximate nearest neighbors search with Locality-sensitive\n  hashing forests in `neighbors.LSHForest`.\n\n- Improved error messages and better validation when using malformed input data.\n\n- More robust integration with pandas dataframes.\n\nChangelog\n---------\n\nNew features\n............\n\n- The new `neighbors.LSHForest` implements locality-sensitive hashing\n  for approximate nearest neighbors search. By :user:`Maheshakya Wijewardena<maheshakya>`.\n\n- Added :class:`svm.LinearSVR`. This class uses the liblinear implementation\n  of Support Vector Regression which is much faster for large\n  sample sizes than :class:`svm.SVR` with linear kernel. By\n  `Fabian Pedregosa`_ and Qiang Luo.\n\n- Incremental fit for :class:`GaussianNB <naive_bayes.GaussianNB>`.\n\n- Added ``sample_weight`` support to :class:`dummy.DummyClassifier` and\n  :class:`dummy.DummyRegressor`. By `Arnaud Joly`_.\n\n- Added the :func:`metrics.label_ranking_average_precision_score` metric.\n  By `Arnaud Joly`_.\n\n- Add the :func:`metrics.coverage_error` metric. By `Arnaud Joly`_.\n\n- Added :class:`linear_model.LogisticRegressionCV`. By\n  `Manoj Kumar`_, `Fabian Pedregosa`_, `Gael Varoquaux`_\n  and `Alexandre Gramfort`_.\n\n- Added ``warm_start`` constructor parameter to make it possible for any\n  trained forest model to grow additional trees incrementally. 
By\n  :user:`Laurent Direr<ldirer>`.\n\n- Added ``sample_weight`` support to :class:`ensemble.GradientBoostingClassifier` and\n  :class:`ensemble.GradientBoostingRegressor`. By `Peter Prettenhofer`_.\n\n- Added :class:`decomposition.IncrementalPCA`, an implementation of the PCA\n  algorithm that supports out-of-core learning with a ``partial_fit``\n  method. By `Kyle Kastner`_.\n\n- Averaged SGD for :class:`SGDClassifier <linear_model.SGDClassifier>`\n  and :class:`SGDRegressor <linear_model.SGDRegressor>`. By\n  :user:`Danny Sullivan <dsullivan7>`.\n\n- Added `cross_val_predict`\n  function which computes cross-validated estimates. By `Luis Pedro Coelho`_.\n\n- Added :class:`linear_model.TheilSenRegressor`, a robust\n  generalized-median-based estimator. By :user:`Florian Wilhelm <FlorianWilhelm>`.\n\n- Added :func:`metrics.median_absolute_error`, a robust metric.\n  By `Gael Varoquaux`_ and :user:`Florian Wilhelm <FlorianWilhelm>`.\n\n- Add :class:`cluster.Birch`, an online clustering algorithm. By\n  `Manoj Kumar`_, `Alexandre Gramfort`_ and `Joel Nothman`_.\n\n- Added shrinkage support to :class:`discriminant_analysis.LinearDiscriminantAnalysis`\n  using two new solvers. 
By :user:`Clemens Brunner <cle1109>` and `Martin Billinger`_.\n\n- Added :class:`kernel_ridge.KernelRidge`, an implementation of\n  kernelized ridge regression.\n  By `Mathieu Blondel`_ and `Jan Hendrik Metzen`_.\n\n- All solvers in :class:`linear_model.Ridge` now support ``sample_weight``.\n  By `Mathieu Blondel`_.\n\n- Added `cross_validation.PredefinedSplit` cross-validation\n  for fixed user-provided cross-validation folds.\n  By :user:`Thomas Unterthiner <untom>`.\n\n- Added :class:`calibration.CalibratedClassifierCV`, an approach for\n  calibrating the predicted probabilities of a classifier.\n  By `Alexandre Gramfort`_, `Jan Hendrik Metzen`_, `Mathieu Blondel`_\n  and :user:`Balazs Kegl <kegl>`.\n\n\nEnhancements\n............\n\n- Added the option ``return_distance`` in `hierarchical.ward_tree`\n  to return distances between nodes for both structured and unstructured\n  versions of the algorithm. By `Matteo Visconti di Oleggio Castello`_.\n  The same option was added in `hierarchical.linkage_tree`.\n  By `Manoj Kumar`_.\n\n- Added support for sample weights in scorer objects. Metrics with sample\n  weight support will automatically benefit from it. By `Noel Dawe`_ and\n  `Vlad Niculae`_.\n\n- Added ``newton-cg`` and ``lbfgs`` solver support in\n  :class:`linear_model.LogisticRegression`. By `Manoj Kumar`_.\n\n- Added a ``selection="random"`` parameter to implement stochastic coordinate\n  descent for :class:`linear_model.Lasso`, :class:`linear_model.ElasticNet`\n  and related. 
By `Manoj Kumar`_.\n\n- Added a ``sample_weight`` parameter to\n  `metrics.jaccard_similarity_score` and :func:`metrics.log_loss`.\n  By :user:`Jatin Shah <jatinshah>`.\n\n- Support sparse multilabel indicator representation in\n  :class:`preprocessing.LabelBinarizer` and\n  :class:`multiclass.OneVsRestClassifier` (by :user:`Hamzeh Alsalhi <hamsal>` with thanks\n  to Rohit Sivaprasad), as well as evaluation metrics (by\n  `Joel Nothman`_).\n\n- Added support for multiclass in `metrics.hinge_loss`, with ``labels=None``\n  as an optional parameter. By Saurabh Jha.\n\n- Added a ``sample_weight`` parameter to `metrics.hinge_loss`.\n  By Saurabh Jha.\n\n- Added a ``multi_class="multinomial"`` option in\n  :class:`linear_model.LogisticRegression` to implement a Logistic\n  Regression solver that minimizes the cross-entropy or multinomial loss\n  instead of the default One-vs-Rest setting. Supports the ``lbfgs`` and\n  ``newton-cg`` solvers. By `Lars Buitinck`_ and `Manoj Kumar`_. Solver option\n  ``newton-cg`` by Simon Wu.\n\n- ``DictVectorizer`` can now perform ``fit_transform`` on an iterable in a\n  single pass when given the option ``sort=False``. By :user:`Dan\n  Blanchard <dan-blanchard>`.\n\n- :class:`model_selection.GridSearchCV` and\n  :class:`model_selection.RandomizedSearchCV` can now be configured to work\n  with estimators that may fail and raise errors on individual folds. This\n  option is controlled by the `error_score` parameter. This does not affect\n  errors raised on re-fit. By :user:`Michal Romaniuk <romaniukm>`.\n\n- Added a ``digits`` parameter to `metrics.classification_report` to allow\n  the report to show different precision of floating point numbers. 
By\n  :user:`Ian Gilmore <agileminor>`.\n\n- Added a quantile prediction strategy to the :class:`dummy.DummyRegressor`.\n  By :user:`Aaron Staple <staple>`.\n\n- Added a ``handle_unknown`` option to :class:`preprocessing.OneHotEncoder` to\n  handle unknown categorical features more gracefully during transform.\n  By `Manoj Kumar`_.\n\n- Added support for sparse input data to decision trees and their ensembles.\n  By `Fares Hedyati`_ and `Arnaud Joly`_.\n\n- Optimized :class:`cluster.AffinityPropagation` by reducing the number of\n  memory allocations of large temporary data-structures. By `Antony Lee`_.\n\n- Parallelization of the computation of feature importances in random forests.\n  By `Olivier Grisel`_ and `Arnaud Joly`_.\n\n- Added an ``n_iter_`` attribute to estimators that accept a ``max_iter`` attribute\n  in their constructor. By `Manoj Kumar`_.\n\n- Added a decision function for :class:`multiclass.OneVsOneClassifier`.\n  By `Raghav RV`_ and :user:`Kyle Beauchamp <kyleabeauchamp>`.\n\n- `neighbors.kneighbors_graph` and `radius_neighbors_graph`\n  support non-Euclidean metrics. By `Manoj Kumar`_.\n\n- The parameter ``connectivity`` in :class:`cluster.AgglomerativeClustering`\n  and family now accepts callables that return a connectivity matrix.\n  By `Manoj Kumar`_.\n\n- Sparse support for :func:`metrics.pairwise.paired_distances`. By `Joel Nothman`_.\n\n- :class:`cluster.DBSCAN` now supports sparse input and sample weights and\n  has been optimized: the inner loop has been rewritten in Cython and\n  radius neighbors queries are now computed in batch. By `Joel Nothman`_\n  and `Lars Buitinck`_.\n\n- Added a ``class_weight`` parameter to automatically weight samples by class\n  frequency for :class:`ensemble.RandomForestClassifier`,\n  :class:`tree.DecisionTreeClassifier`, :class:`ensemble.ExtraTreesClassifier`\n  and :class:`tree.ExtraTreeClassifier`. 
By `Trevor Stephens`_.\n\n- `grid_search.RandomizedSearchCV` now does sampling without\n  replacement if all parameters are given as lists. By `Andreas M\u00fcller`_.\n\n- Parallelized calculation of :func:`metrics.pairwise_distances` is now supported\n  for scipy metrics and custom callables. By `Joel Nothman`_.\n\n- Allow the fitting and scoring of all clustering algorithms in\n  :class:`pipeline.Pipeline`. By `Andreas M\u00fcller`_.\n\n- More robust seeding and improved error messages in :class:`cluster.MeanShift`\n  by `Andreas M\u00fcller`_.\n\n- Make the stopping criterion for `mixture.GMM`,\n  `mixture.DPGMM` and `mixture.VBGMM` less dependent on the\n  number of samples by thresholding the average log-likelihood change\n  instead of its sum over all samples. By `Herv\u00e9 Bredin`_.\n\n- The outcome of :func:`manifold.spectral_embedding` was made deterministic\n  by flipping the sign of eigenvectors. By :user:`Hasil Sharma <Hasil-Sharma>`.\n\n- Significant performance and memory usage improvements in\n  :class:`preprocessing.PolynomialFeatures`. By `Eric Martin`_.\n\n- Numerical stability improvements for :class:`preprocessing.StandardScaler`\n  and :func:`preprocessing.scale`. By `Nicolas Goix`_\n\n- :class:`svm.SVC` fitted on sparse input now implements ``decision_function``.\n  By `Rob Zinkov`_ and `Andreas M\u00fcller`_.\n\n- `cross_validation.train_test_split` now preserves the input type,\n  instead of converting to numpy arrays.\n\n\nDocumentation improvements\n..........................\n\n- Added example of using :class:`pipeline.FeatureUnion` for heterogeneous input.\n  By :user:`Matt Terry <mrterry>`\n\n- Documentation on scorers was improved, to highlight the handling of loss\n  functions. By :user:`Matt Pico <MattpSoftware>`.\n\n- A discrepancy between liblinear output and scikit-learn's wrappers\n  is now noted. 
By `Manoj Kumar`_.\n\n- Improved documentation generation: examples referring to a class or\n  function are now shown in a gallery on the class/function's API reference\n  page. By `Joel Nothman`_.\n\n- More explicit documentation of sample generators and of data\n  transformation. By `Joel Nothman`_.\n\n- :class:`sklearn.neighbors.BallTree` and :class:`sklearn.neighbors.KDTree`\n  used to point to empty pages stating that they are aliases of BinaryTree.\n  This has been fixed to show the correct class docs. By `Manoj Kumar`_.\n\n- Added silhouette plots for analysis of KMeans clustering using\n  :func:`metrics.silhouette_samples` and :func:`metrics.silhouette_score`.\n  See :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`.\n\nBug fixes\n.........\n\n- Metaestimators now support ducktyping for the presence of ``decision_function``,\n  ``predict_proba`` and other methods. This fixes the behavior of\n  `grid_search.GridSearchCV`,\n  `grid_search.RandomizedSearchCV`, :class:`pipeline.Pipeline`,\n  :class:`feature_selection.RFE` and :class:`feature_selection.RFECV` when nested.\n  By `Joel Nothman`_.\n\n- The ``scoring`` attribute of grid-search and cross-validation methods is no longer\n  ignored when a `grid_search.GridSearchCV` is given as a base estimator or\n  the base estimator doesn't have predict.\n\n- The function `hierarchical.ward_tree` now returns the children in\n  the same order for both the structured and unstructured versions. By\n  `Matteo Visconti di Oleggio Castello`_.\n\n- :class:`feature_selection.RFECV` now correctly handles cases when\n  ``step`` is not equal to 1. By :user:`Nikolay Mayorov <nmayorov>`.\n\n- :class:`decomposition.PCA` now undoes whitening in its\n  ``inverse_transform``. Also, its ``components_`` now always have unit\n  length. By :user:`Michael Eickenberg <eickenberg>`.\n\n- Fixed incomplete download of the dataset when\n  `datasets.download_20newsgroups` is called. 
By `Manoj Kumar`_.\n\n- Various fixes to the Gaussian processes subpackage by Vincent Dubourg\n  and Jan Hendrik Metzen.\n\n- Calling ``partial_fit`` with ``class_weight='auto'`` now raises an\n  appropriate error message and suggests a workaround.\n  By :user:`Danny Sullivan <dsullivan7>`.\n\n- :class:`RBFSampler <kernel_approximation.RBFSampler>` with ``gamma=g``\n  formerly approximated :func:`rbf_kernel <metrics.pairwise.rbf_kernel>`\n  with ``gamma=g/2.``; the definition of ``gamma`` is now consistent,\n  which may substantially change your results if you use a fixed value.\n  (If you cross-validated over ``gamma``, it probably doesn't matter\n  too much.) By :user:`Dougal Sutherland <dougalsutherland>`.\n\n- Pipeline objects now delegate the ``classes_`` attribute to the underlying\n  estimator. This allows, for instance, bagging of a pipeline object.\n  By `Arnaud Joly`_.\n\n- :class:`neighbors.NearestCentroid` now uses the median as the centroid\n  when the metric is set to ``manhattan``. It was using the mean before.\n  By `Manoj Kumar`_.\n\n- Fixed numerical stability issues in :class:`linear_model.SGDClassifier`\n  and :class:`linear_model.SGDRegressor` by clipping large gradients and\n  ensuring that weight decay rescaling is always positive (for large\n  l2 regularization and large learning rate values).\n  By `Olivier Grisel`_.\n\n- When ``compute_full_tree`` was set to "auto" in\n  :class:`cluster.AgglomerativeClustering` (and friends), the full tree was\n  built when ``n_clusters`` was high and building stopped early when\n  ``n_clusters`` was low, whereas the behavior should be the reverse.\n  This has been fixed by `Manoj Kumar`_.\n\n- Fixed lazy centering of data in :func:`linear_model.enet_path` and\n  :func:`linear_model.lasso_path`. It was centered around one; it has\n  been changed to be centered around the origin. By `Manoj Kumar`_.\n\n- Fixed handling of precomputed affinity matrices in\n  :class:`cluster.AgglomerativeClustering` when using connectivity\n  constraints. 
By :user:`Cathy Deng <cathydeng>`.\n\n- Corrected ``partial_fit`` handling of ``class_prior`` for\n  :class:`sklearn.naive_bayes.MultinomialNB` and\n  :class:`sklearn.naive_bayes.BernoulliNB`. By `Trevor Stephens`_.\n\n- Fixed a crash in :func:`metrics.precision_recall_fscore_support`\n  when using unsorted ``labels`` in the multi-label setting.\n  By `Andreas Müller`_.\n\n- Avoid skipping the first nearest neighbor in the methods ``radius_neighbors``,\n  ``kneighbors``, ``kneighbors_graph`` and ``radius_neighbors_graph`` in\n  :class:`sklearn.neighbors.NearestNeighbors` and family, when the query\n  data is not the same as the fit data. By `Manoj Kumar`_.\n\n- Fixed the log-density calculation in `mixture.GMM` with\n  tied covariance. By `Will Dawson`_.\n\n- Fixed a scaling error in :class:`feature_selection.SelectFdr`\n  where a factor ``n_features`` was missing. By `Andrew Tulloch`_.\n\n- Fixed zero division in :class:`neighbors.KNeighborsRegressor` and related\n  classes when using distance weighting and having identical data points.\n  By `Garrett-R <https://github.com/Garrett-R>`_.\n\n- Fixed round-off errors with non-positive-definite covariance matrices\n  in GMM. By :user:`Alexis Mignon <AlexisMignon>`.\n\n- Fixed an error in the computation of conditional probabilities in\n  :class:`naive_bayes.BernoulliNB`. By `Hanna Wallach`_.\n\n- Make the method ``radius_neighbors`` of\n  :class:`neighbors.NearestNeighbors` return the samples lying on the\n  boundary for ``algorithm='brute'``. By `Yan Yi`_.\n\n- Flipped the sign of ``dual_coef_`` of :class:`svm.SVC`\n  to make it consistent with the documentation and\n  ``decision_function``. By Artem Sobolev.\n\n- Fixed handling of ties in :class:`isotonic.IsotonicRegression`.\n  We now use the weighted average of targets (secondary method). 
By\n  `Andreas Müller`_ and `Michael Bommarito <http://bommaritollc.com/>`_.\n\nAPI changes summary\n-------------------\n\n- `GridSearchCV`, `cross_val_score` and other\n  meta-estimators no longer convert pandas DataFrames into arrays,\n  allowing DataFrame-specific operations in custom estimators.\n\n- `multiclass.fit_ovr`, `multiclass.predict_ovr`,\n  `predict_proba_ovr`,\n  `multiclass.fit_ovo`, `multiclass.predict_ovo`,\n  `multiclass.fit_ecoc` and `multiclass.predict_ecoc`\n  are deprecated. Use the underlying estimators instead.\n\n- Nearest neighbors estimators used to take arbitrary keyword arguments\n  and pass these to their distance metric. This will no longer be supported\n  in scikit-learn 0.18; use the ``metric_params`` argument instead.\n\n- The `n_jobs` parameter of the ``fit`` method has been moved to the\n  constructor of the LinearRegression class.\n\n- The ``predict_proba`` method of :class:`multiclass.OneVsRestClassifier`\n  now returns two probabilities per sample in the multiclass case; this\n  is consistent with other estimators and with the method's documentation,\n  but previous versions accidentally returned only the positive\n  probability. Fixed by Will Lamond and `Lars Buitinck`_.\n\n- Changed the default value of ``precompute`` in :class:`linear_model.ElasticNet` and\n  :class:`linear_model.Lasso` to False. Setting precompute to "auto" was found\n  to be slower when n_samples > n_features since the computation of the Gram\n  matrix is computationally expensive and outweighs the benefit of fitting the\n  Gram for just one alpha.\n  ``precompute="auto"`` is now deprecated and will be removed in 0.18.\n  By `Manoj Kumar`_.\n\n- Exposed the ``positive`` option in :func:`linear_model.enet_path` and\n  :func:`linear_model.lasso_path`, which constrains coefficients to be\n  positive. 
By `Manoj Kumar`_.\n\n- Users should now supply an explicit ``average`` parameter to\n  :func:`sklearn.metrics.f1_score`, :func:`sklearn.metrics.fbeta_score`,\n  :func:`sklearn.metrics.recall_score` and\n  :func:`sklearn.metrics.precision_score` when performing multiclass\n  or multilabel (i.e. not binary) classification. By `Joel Nothman`_.\n\n- The `scoring` parameter for cross-validation now accepts `'f1_micro'`,\n  `'f1_macro'` or `'f1_weighted'`. `'f1'` is now for binary classification\n  only. Similar changes apply to `'precision'` and `'recall'`.\n  By `Joel Nothman`_.\n\n- The ``fit_intercept``, ``normalize`` and ``return_models`` parameters in\n  :func:`linear_model.enet_path` and :func:`linear_model.lasso_path` have\n  been removed. They had been deprecated since 0.14.\n\n- From now onwards, all estimators will uniformly raise ``NotFittedError``\n  when any of the ``predict``-like methods are called before the model is fit.\n  By `Raghav RV`_.\n\n- Input data validation was refactored for more consistent input\n  validation. The ``check_arrays`` function was replaced by ``check_array``\n  and ``check_X_y``. By `Andreas Müller`_.\n\n- Allow ``X=None`` in the methods ``radius_neighbors``, ``kneighbors``,\n  ``kneighbors_graph`` and ``radius_neighbors_graph`` in\n  :class:`sklearn.neighbors.NearestNeighbors` and family. If set to None,\n  then for every sample this avoids setting the sample itself as the\n  first nearest neighbor. By `Manoj Kumar`_.\n\n- Added a parameter ``include_self`` in :func:`neighbors.kneighbors_graph`\n  and :func:`neighbors.radius_neighbors_graph` which has to be explicitly\n  set by the user. If set to True, then the sample itself is considered\n  as the first nearest neighbor.\n\n- The `thresh` parameter is deprecated in favor of the new `tol` parameter in\n  `GMM`, `DPGMM` and `VBGMM`. See the `Enhancements`\n  section for details. 
By `Herv\u00e9 Bredin`_.\n\n- Estimators will treat input with dtype object as numeric when possible.\n  By `Andreas M\u00fcller`_\n\n- Estimators now raise `ValueError` consistently when fitted on empty\n  data (less than 1 sample or less than 1 feature for 2D input).\n  By `Olivier Grisel`_.\n\n\n- The ``shuffle`` option of :class:`.linear_model.SGDClassifier`,\n  :class:`linear_model.SGDRegressor`, :class:`linear_model.Perceptron`,\n  :class:`linear_model.PassiveAggressiveClassifier` and\n  :class:`linear_model.PassiveAggressiveRegressor` now defaults to ``True``.\n\n- :class:`cluster.DBSCAN` now uses a deterministic initialization. The\n  `random_state` parameter is deprecated. By :user:`Erich Schubert <kno10>`.\n\nCode Contributors\n-----------------\nA. Flaxman, Aaron Schumacher, Aaron Staple, abhishek thakur, Akshay, akshayah3,\nAldrian Obaja, Alexander Fabisch, Alexandre Gramfort, Alexis Mignon, Anders\nAagaard, Andreas Mueller, Andreas van Cranenburgh, Andrew Tulloch, Andrew\nWalker, Antony Lee, Arnaud Joly, banilo, Barmaley.exe, Ben Davies, Benedikt\nKoehler, bhsu, Boris Feld, Borja Ayerdi, Boyuan Deng, Brent Pedersen, Brian\nWignall, Brooke Osborn, Calvin Giles, Cathy Deng, Celeo, cgohlke, chebee7i,\nChristian Stade-Schuldt, Christof Angermueller, Chyi-Kwei Yau, CJ Carey,\nClemens Brunner, Daiki Aminaka, Dan Blanchard, danfrankj, Danny Sullivan, David\nFletcher, Dmitrijs Milajevs, Dougal J. 
Sutherland, Erich Schubert, Fabian\nPedregosa, Florian Wilhelm, floydsoft, Félix-Antoine Fortin, Gael Varoquaux,\nGarrett-R, Gilles Louppe, gpassino, gwulfs, Hampus Bengtsson, Hamzeh Alsalhi,\nHanna Wallach, Harry Mavroforakis, Hasil Sharma, Helder, Herve Bredin,\nHsiang-Fu Yu, Hugues SALAMIN, Ian Gilmore, Ilambharathi Kanniah, Imran Haque,\nisms, Jake VanderPlas, Jan Dlabal, Jan Hendrik Metzen, Jatin Shah, Javier López\nPeña, jdcaballero, Jean Kossaifi, Jeff Hammerbacher, Joel Nothman, Jonathan\nHelmus, Joseph, Kaicheng Zhang, Kevin Markham, Kyle Beauchamp, Kyle Kastner,\nLagacherie Matthieu, Lars Buitinck, Laurent Direr, leepei, Loic Esteve, Luis\nPedro Coelho, Lukas Michelbacher, maheshakya, Manoj Kumar, Manuel, Mario\nMichael Krell, Martin, Martin Billinger, Martin Ku, Mateusz Susik, Mathieu\nBlondel, Matt Pico, Matt Terry, Matteo Visconti dOC, Matti Lyra, Max Linke,\nMehdi Cherti, Michael Bommarito, Michael Eickenberg, Michal Romaniuk, MLG,\nmr.Shu, Nelle Varoquaux, Nicola Montecchio, Nicolas, Nikolay Mayorov, Noel\nDawe, Okal Billy, Olivier Grisel, Óscar Nájera, Paolo Puggioni, Peter\nPrettenhofer, Pratap Vardhan, pvnguyen, queqichao, Rafael Carrascosa, Raghav R\nV, Rahiel Kasim, Randall Mason, Rob Zinkov, Robert Bradshaw, Saket Choudhary,\nSam Nicholls, Samuel Charron, Saurabh Jha, sethdandridge, sinhrks, snuderl,\nStefan Otte, Stefan van der Walt, Steve Tjoa, swu, Sylvain Zimmer, tejesh95,\nterrycojones, Thomas Delteil, Thomas Unterthiner, Tomas Kazmar, trevorstephens,\ntttthomasssss, Tzu-Ming Kuo, ugurcaliskan, ugurthemaster, Vinayak Mehta,\nVincent Dubourg, Vjacheslav Murashkin, Vlad Niculae, wadawson, Wei Xue, Will\nLamond, Wu Jiang, x0l, Xinfan Meng, Yan Yi, Yu-Chin
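A few of the changes above are easy to exercise directly. As a minimal sketch of the new ``warm_start`` forest parameter (written against the current scikit-learn API; the dataset and tree counts are arbitrary choices for illustration, not from this changelog):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=100, random_state=0)

# Train an initial forest of 10 trees.
clf = RandomForestClassifier(n_estimators=10, warm_start=True, random_state=0)
clf.fit(X, y)

# Grow the same forest by 10 more trees: with warm_start=True the first
# 10 estimators are reused and only the new ones are fit.
clf.n_estimators = 20
clf.fit(X, y)
print(len(clf.estimators_))  # 20
```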
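The ``include_self`` parameter added to :func:`neighbors.kneighbors_graph` (see the API changes summary) controls whether each sample counts as its own first neighbor. A minimal sketch, assuming the current ``sklearn.neighbors`` API and a toy 1-D dataset:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.array([[0.0], [1.0], [2.0], [3.0]])

# include_self=True: each sample is its own first nearest neighbor,
# so with n_neighbors=1 the graph's diagonal is all ones.
A_self = kneighbors_graph(X, n_neighbors=1, include_self=True)

# include_self=False: the first neighbor is the nearest *other* sample,
# so the diagonal is all zeros.
A_other = kneighbors_graph(X, n_neighbors=1, include_self=False)

print(A_self.toarray().diagonal())   # [1. 1. 1. 1.]
print(A_other.toarray().diagonal())  # [0. 0. 0. 0.]
```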
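The uniform ``NotFittedError`` behavior noted in the API changes summary can also be observed directly. Note this sketch uses the modern import path ``sklearn.exceptions`` (the exception has moved since 0.16); the choice of estimator is arbitrary:

```python
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
try:
    # Calling a predict-like method before fit raises NotFittedError.
    clf.predict([[0.0, 1.0]])
    raised = None
except NotFittedError as exc:
    raised = exc

print(type(raised).__name__)  # NotFittedError
```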
the samples lying on the   boundary for   algorithm  brute     By  Yan Yi       Flip sign of   dual coef    of  class  svm SVC    to make it consistent with the documentation and     decision function    By Artem Sobolev     Fixed handling of ties in  class  isotonic IsotonicRegression     We now use the weighted average of targets  secondary method   By    Andreas M ller   and  Michael Bommarito  http   bommaritollc com       API changes summary                         GridSearchCV  and    cross val score  and other   meta estimators don t convert pandas DataFrames into arrays any more    allowing DataFrame specific operations in custom estimators      multiclass fit ovr    multiclass predict ovr      predict proba ovr      multiclass fit ovo    multiclass predict ovo      multiclass fit ecoc  and  multiclass predict ecoc    are deprecated  Use the underlying estimators instead     Nearest neighbors estimators used to take arbitrary keyword arguments   and pass these to their distance metric  This will no longer be supported   in scikit learn 0 18  use the   metric params   argument instead      n jobs  parameter of the fit method shifted to the constructor of the        LinearRegression class     The   predict proba   method of  class  multiclass OneVsRestClassifier    now returns two probabilities per sample in the multiclass case  this   is consistent with other estimators and with the method s documentation    but previous versions accidentally returned only the positive   probability  Fixed by Will Lamond and  Lars Buitinck       Change default value of precompute in  class  linear model ElasticNet  and    class  linear model Lasso  to False  Setting precompute to  auto  was found   to be slower when n samples   n features since the computation of the Gram   matrix is computationally expensive and outweighs the benefit of fitting the   Gram for just one alpha      precompute  auto    is now deprecated and will be removed in 0 18   By  Manoj Kumar       Expose 
  positive   option in  func  linear model enet path  and    func  linear model enet path  which constrains coefficients to be   positive  By  Manoj Kumar       Users should now supply an explicit   average   parameter to    func  sklearn metrics f1 score    func  sklearn metrics fbeta score      func  sklearn metrics recall score  and    func  sklearn metrics precision score  when performing multiclass   or multilabel  i e  not binary  classification  By  Joel Nothman        scoring  parameter for cross validation now accepts   f1 micro        f1 macro   or   f1 weighted      f1   is now for binary classification   only  Similar changes apply to   precision   and   recall      By  Joel Nothman       The   fit intercept      normalize   and   return models   parameters in    func  linear model enet path  and  func  linear model lasso path  have   been removed  They were deprecated since 0 14    From now onwards  all estimators will uniformly raise   NotFittedError     when any of the   predict   like methods are called before the model is fit    By  Raghav RV       Input data validation was refactored for more consistent input   validation  The   check arrays   function was replaced by   check array     and   check X y    By  Andreas M ller       Allow   X None   in the methods   radius neighbors      kneighbors        kneighbors graph   and   radius neighbors graph   in    class  sklearn neighbors NearestNeighbors  and family  If set to None    then for every sample this avoids setting the sample itself as the   first nearest neighbor  By  Manoj Kumar       Add parameter   include self   in  func  neighbors kneighbors graph    and  func  neighbors radius neighbors graph  which has to be explicitly   set by the user  If set to True  then the sample itself is considered   as the first nearest neighbor      thresh  parameter is deprecated in favor of new  tol  parameter in    GMM    DPGMM  and  VBGMM   See  Enhancements    section for details  By  Herv  Bredin       
Estimators will treat input with dtype object as numeric when possible    By  Andreas M ller      Estimators now raise  ValueError  consistently when fitted on empty   data  less than 1 sample or less than 1 feature for 2D input     By  Olivier Grisel        The   shuffle   option of  class   linear model SGDClassifier      class  linear model SGDRegressor    class  linear model Perceptron      class  linear model PassiveAggressiveClassifier  and    class  linear model PassiveAggressiveRegressor  now defaults to   True        class  cluster DBSCAN  now uses a deterministic initialization  The    random state  parameter is deprecated  By  user  Erich Schubert  kno10     Code Contributors                   A  Flaxman  Aaron Schumacher  Aaron Staple  abhishek thakur  Akshay  akshayah3  Aldrian Obaja  Alexander Fabisch  Alexandre Gramfort  Alexis Mignon  Anders Aagaard  Andreas Mueller  Andreas van Cranenburgh  Andrew Tulloch  Andrew Walker  Antony Lee  Arnaud Joly  banilo  Barmaley exe  Ben Davies  Benedikt Koehler  bhsu  Boris Feld  Borja Ayerdi  Boyuan Deng  Brent Pedersen  Brian Wignall  Brooke Osborn  Calvin Giles  Cathy Deng  Celeo  cgohlke  chebee7i  Christian Stade Schuldt  Christof Angermueller  Chyi Kwei Yau  CJ Carey  Clemens Brunner  Daiki Aminaka  Dan Blanchard  danfrankj  Danny Sullivan  David Fletcher  Dmitrijs Milajevs  Dougal J  Sutherland  Erich Schubert  Fabian Pedregosa  Florian Wilhelm  floydsoft  F lix Antoine Fortin  Gael Varoquaux  Garrett R  Gilles Louppe  gpassino  gwulfs  Hampus Bengtsson  Hamzeh Alsalhi  Hanna Wallach  Harry Mavroforakis  Hasil Sharma  Helder  Herve Bredin  Hsiang Fu Yu  Hugues SALAMIN  Ian Gilmore  Ilambharathi Kanniah  Imran Haque  isms  Jake VanderPlas  Jan Dlabal  Jan Hendrik Metzen  Jatin Shah  Javier L pez Pe a  jdcaballero  Jean Kossaifi  Jeff Hammerbacher  Joel Nothman  Jonathan Helmus  Joseph  Kaicheng Zhang  Kevin Markham  Kyle Beauchamp  Kyle Kastner  Lagacherie Matthieu  Lars Buitinck  Laurent Direr  leepei  Loic 
Esteve  Luis Pedro Coelho  Lukas Michelbacher  maheshakya  Manoj Kumar  Manuel  Mario Michael Krell  Martin  Martin Billinger  Martin Ku  Mateusz Susik  Mathieu Blondel  Matt Pico  Matt Terry  Matteo Visconti dOC  Matti Lyra  Max Linke  Mehdi Cherti  Michael Bommarito  Michael Eickenberg  Michal Romaniuk  MLG  mr Shu  Nelle Varoquaux  Nicola Montecchio  Nicolas  Nikolay Mayorov  Noel Dawe  Okal Billy  Olivier Grisel   scar N jera  Paolo Puggioni  Peter Prettenhofer  Pratap Vardhan  pvnguyen  queqichao  Rafael Carrascosa  Raghav R V  Rahiel Kasim  Randall Mason  Rob Zinkov  Robert Bradshaw  Saket Choudhary  Sam Nicholls  Samuel Charron  Saurabh Jha  sethdandridge  sinhrks  snuderl  Stefan Otte  Stefan van der Walt  Steve Tjoa  swu  Sylvain Zimmer  tejesh95  terrycojones  Thomas Delteil  Thomas Unterthiner  Tomas Kazmar  trevorstephens  tttthomasssss  Tzu Ming Kuo  ugurcaliskan  ugurthemaster  Vinayak Mehta  Vincent Dubourg  Vjacheslav Murashkin  Vlad Niculae  wadawson  Wei Xue  Will Lamond  Wu Jiang  x0l  Xinfan Meng  Yan Yi  Yu Chin"}
{"questions":"scikit-learn sklearn contributors rst Older Versions changes012 1","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n==============\nOlder Versions\n==============\n\n.. _changes_0_12.1:\n\nVersion 0.12.1\n===============\n\n**October 8, 2012**\n\nThe 0.12.1 release is a bug-fix release with no additional features, but is\ninstead a set of bug fixes\n\nChangelog\n----------\n\n- Improved numerical stability in spectral embedding by `Gael\n  Varoquaux`_\n\n- Doctest under windows 64bit by `Gael Varoquaux`_\n\n- Documentation fixes for elastic net by `Andreas M\u00fcller`_ and\n  `Alexandre Gramfort`_\n\n- Proper behavior with fortran-ordered NumPy arrays by `Gael Varoquaux`_\n\n- Make GridSearchCV work with non-CSR sparse matrix by `Lars Buitinck`_\n\n- Fix parallel computing in MDS by `Gael Varoquaux`_\n\n- Fix Unicode support in count vectorizer by `Andreas M\u00fcller`_\n\n- Fix MinCovDet breaking with X.shape = (3, 1) by :user:`Virgile Fritsch <VirgileFritsch>`\n\n- Fix clone of SGD objects by `Peter Prettenhofer`_\n\n- Stabilize GMM by :user:`Virgile Fritsch <VirgileFritsch>`\n\nPeople\n------\n\n*  14  `Peter Prettenhofer`_\n*  12  `Gael Varoquaux`_\n*  10  `Andreas M\u00fcller`_\n*   5  `Lars Buitinck`_\n*   3  :user:`Virgile Fritsch <VirgileFritsch>`\n*   1  `Alexandre Gramfort`_\n*   1  `Gilles Louppe`_\n*   1  `Mathieu Blondel`_\n\n.. 
_changes_0_12:\n\nVersion 0.12\n============\n\n**September 4, 2012**\n\nChangelog\n---------\n\n- Various speed improvements of the :ref:`decision trees <tree>` module, by\n  `Gilles Louppe`_.\n\n- :class:`~ensemble.GradientBoostingRegressor` and\n  :class:`~ensemble.GradientBoostingClassifier` now support feature subsampling\n  via the ``max_features`` argument, by `Peter Prettenhofer`_.\n\n- Added Huber and Quantile loss functions to\n  :class:`~ensemble.GradientBoostingRegressor`, by `Peter Prettenhofer`_.\n\n- :ref:`Decision trees <tree>` and :ref:`forests of randomized trees <forest>`\n  now support multi-output classification and regression problems, by\n  `Gilles Louppe`_.\n\n- Added :class:`~preprocessing.LabelEncoder`, a simple utility class to\n  normalize labels or transform non-numerical labels, by `Mathieu Blondel`_.\n\n- Added the epsilon-insensitive loss and the ability to make probabilistic\n  predictions with the modified huber loss in :ref:`sgd`, by\n  `Mathieu Blondel`_.\n\n- Added :ref:`multidimensional_scaling`, by Nelle Varoquaux.\n\n- SVMlight file format loader now detects compressed (gzip\/bzip2) files and\n  decompresses them on the fly, by `Lars Buitinck`_.\n\n- SVMlight file format serializer now preserves double precision floating\n  point values, by `Olivier Grisel`_.\n\n- A common testing framework for all estimators was added, by `Andreas M\u00fcller`_.\n\n- Understandable error messages for estimators that do not accept\n  sparse input by `Gael Varoquaux`_\n\n- Speedups in hierarchical clustering by `Gael Varoquaux`_. In\n  particular building the tree now supports early stopping. 
This is\n  useful when the number of clusters is not small compared to the\n  number of samples.\n\n- Add MultiTaskLasso and MultiTaskElasticNet for joint feature selection,\n  by `Alexandre Gramfort`_.\n\n- Added `metrics.auc_score` and\n  :func:`metrics.average_precision_score` convenience functions by `Andreas\n  M\u00fcller`_.\n\n- Improved sparse matrix support in the :ref:`feature_selection`\n  module by `Andreas M\u00fcller`_.\n\n- New word boundaries-aware character n-gram analyzer for the\n  :ref:`text_feature_extraction` module by :user:`@kernc <kernc>`.\n\n- Fixed bug in spectral clustering that led to single point clusters\n  by `Andreas M\u00fcller`_.\n\n- In :class:`~feature_extraction.text.CountVectorizer`, added an option to\n  ignore infrequent words, ``min_df`` by  `Andreas M\u00fcller`_.\n\n- Add support for multiple targets in some linear models (ElasticNet, Lasso\n  and OrthogonalMatchingPursuit) by `Vlad Niculae`_ and\n  `Alexandre Gramfort`_.\n\n- Fixes in `decomposition.ProbabilisticPCA` score function by Wei Li.\n\n- Fixed feature importance computation in\n  :ref:`gradient_boosting`.\n\nAPI changes summary\n-------------------\n\n- The old ``scikits.learn`` package has disappeared; all code should import\n  from ``sklearn`` instead, which was introduced in 0.9.\n\n- In :func:`metrics.roc_curve`, the ``thresholds`` array is now returned\n  with its order reversed, in order to keep it consistent with the order\n  of the returned ``fpr`` and ``tpr``.\n\n- In `hmm` objects, like `hmm.GaussianHMM`,\n  `hmm.MultinomialHMM`, etc., all parameters must be passed to the\n  object when initialising it and not through ``fit``. Now ``fit`` will\n  only accept the data as an input parameter.\n\n- For all SVM classes, a faulty behavior of ``gamma`` was fixed. Previously,\n  the default gamma value was only computed the first time ``fit`` was called\n  and then stored. It is now recalculated on every call to ``fit``.\n\n- All ``Base`` classes are now abstract meta classes so that they can not be\n  instantiated.\n\n- :func:`cluster.ward_tree` now also returns the parent array. This is\n  necessary for early-stopping in which case the tree is not\n  completely built.\n\n- In :class:`~feature_extraction.text.CountVectorizer` the parameters\n  ``min_n`` and ``max_n`` were joined to the parameter ``n_gram_range`` to\n  enable grid-searching both at once.\n\n- In :class:`~feature_extraction.text.CountVectorizer`, words that appear\n  only in one document are now ignored by default. To reproduce\n  the previous behavior, set ``min_df=1``.\n\n- Fixed API inconsistency: :meth:`linear_model.SGDClassifier.predict_proba` now\n  returns 2d array when fit on two classes.\n\n- Fixed API inconsistency: :meth:`discriminant_analysis.QuadraticDiscriminantAnalysis.decision_function`\n  and :meth:`discriminant_analysis.LinearDiscriminantAnalysis.decision_function` now return 1d arrays\n  when fit on two classes.\n\n- Grid of alphas used for fitting :class:`~linear_model.LassoCV` and\n  :class:`~linear_model.ElasticNetCV` is now stored\n  in the attribute ``alphas_`` rather than overriding the init parameter\n  ``alphas``.\n\n- Linear models when alpha is estimated by cross-validation store\n  the estimated value in the ``alpha_`` attribute rather than just\n  ``alpha`` or ``best_alpha``.\n\n- :class:`~ensemble.GradientBoostingClassifier` now supports\n  :meth:`~ensemble.GradientBoostingClassifier.staged_predict_proba`, and\n  :meth:`~ensemble.GradientBoostingClassifier.staged_predict`.\n\n- `svm.sparse.SVC` and other sparse SVM classes are now deprecated.\n  All classes in the :ref:`svm` module now automatically select the\n  sparse or dense representation based on the input.\n\n- All clustering algorithms now interpret the array ``X`` given to ``fit`` as\n  input data, in particular :class:`~cluster.SpectralClustering` and\n  
:class:`~cluster.AffinityPropagation` which previously expected affinity matrices.\n\n- For clustering algorithms that take the desired number of clusters as a parameter,\n  this parameter is now called ``n_clusters``.\n\n\nPeople\n------\n* 267  `Andreas M\u00fcller`_\n*  94  `Gilles Louppe`_\n*  89  `Gael Varoquaux`_\n*  79  `Peter Prettenhofer`_\n*  60  `Mathieu Blondel`_\n*  57  `Alexandre Gramfort`_\n*  52  `Vlad Niculae`_\n*  45  `Lars Buitinck`_\n*  44  Nelle Varoquaux\n*  37  `Jaques Grobler`_\n*  30  Alexis Mignon\n*  30  Immanuel Bayer\n*  27  `Olivier Grisel`_\n*  16  Subhodeep Moitra\n*  13  Yannick Schwartz\n*  12  :user:`@kernc <kernc>`\n*  11  :user:`Virgile Fritsch <VirgileFritsch>`\n*   9  Daniel Duckworth\n*   9  `Fabian Pedregosa`_\n*   9  `Robert Layton`_\n*   8  John Benediktsson\n*   7  Marko Burjek\n*   5  `Nicolas Pinto`_\n*   4  Alexandre Abraham\n*   4  `Jake Vanderplas`_\n*   3  `Brian Holt`_\n*   3  `Edouard Duchesnay`_\n*   3  Florian Hoenig\n*   3  flyingimmidev\n*   2  Francois Savard\n*   2  Hannes Schulz\n*   2  Peter Welinder\n*   2  `Yaroslav Halchenko`_\n*   2  Wei Li\n*   1  Alex Companioni\n*   1  Brandyn A. White\n*   1  Bussonnier Matthias\n*   1  Charles-Pierre Astolfi\n*   1  Dan O'Huiginn\n*   1  David Cournapeau\n*   1  Keith Goodman\n*   1  Ludwig Schwardt\n*   1  Olivier Hervieu\n*   1  Sergio Medina\n*   1  Shiqiao Du\n*   1  Tim Sheerman-Chase\n*   1  buguen\n\n\n\n.. 
_changes_0_11:\n\nVersion 0.11\n============\n\n**May 7, 2012**\n\nChangelog\n---------\n\nHighlights\n.............\n\n- Gradient boosted regression trees (:ref:`gradient_boosting`)\n  for classification and regression by `Peter Prettenhofer`_\n  and `Scott White`_ .\n\n- Simple dict-based feature loader with support for categorical variables\n  (:class:`~feature_extraction.DictVectorizer`) by `Lars Buitinck`_.\n\n- Added Matthews correlation coefficient (:func:`metrics.matthews_corrcoef`)\n  and added macro and micro average options to\n  :func:`~metrics.precision_score`, :func:`metrics.recall_score` and\n  :func:`~metrics.f1_score` by `Satrajit Ghosh`_.\n\n- :ref:`out_of_bag` of generalization error for :ref:`ensemble`\n  by `Andreas M\u00fcller`_.\n\n- Randomized sparse linear models for feature\n  selection, by `Alexandre Gramfort`_ and `Gael Varoquaux`_\n\n- :ref:`label_propagation` for semi-supervised learning, by Clay\n  Woolam. **Note** the semi-supervised API is still work in progress,\n  and may change.\n\n- Added BIC\/AIC model selection to classical :ref:`gmm` and unified\n  the API with the remainder of scikit-learn, by `Bertrand Thirion`_\n\n- Added `sklearn.cross_validation.StratifiedShuffleSplit`, which is\n  a `sklearn.cross_validation.ShuffleSplit` with balanced splits,\n  by Yannick Schwartz.\n\n- :class:`~sklearn.neighbors.NearestCentroid` classifier added, along with a\n  ``shrink_threshold`` parameter, which implements **shrunken centroid\n  classification**, by `Robert Layton`_.\n\nOther changes\n..............\n\n- Merged dense and sparse implementations of :ref:`sgd` module and\n  exposed utility extension types for sequential\n  datasets ``seq_dataset`` and weight vectors ``weight_vector``\n  by `Peter Prettenhofer`_.\n\n- Added ``partial_fit`` (support for online\/minibatch learning) and\n  warm_start to the :ref:`sgd` module by `Mathieu Blondel`_.\n\n- Dense and sparse implementations of :ref:`svm` classes and\n  
:class:`~linear_model.LogisticRegression` merged by `Lars Buitinck`_.\n\n- Regressors can now be used as base estimator in the :ref:`multiclass`\n  module by `Mathieu Blondel`_.\n\n- Added n_jobs option to :func:`metrics.pairwise_distances`\n  and :func:`metrics.pairwise.pairwise_kernels` for parallel computation,\n  by `Mathieu Blondel`_.\n\n- :ref:`k_means` can now be run in parallel, using the ``n_jobs`` argument\n  to either :ref:`k_means` or :class:`cluster.KMeans`, by `Robert Layton`_.\n\n- Improved :ref:`cross_validation` and :ref:`grid_search` documentation\n  and introduced the new `cross_validation.train_test_split`\n  helper function by `Olivier Grisel`_\n\n- :class:`~svm.SVC` members ``coef_`` and ``intercept_`` changed sign for\n  consistency with ``decision_function``; for ``kernel==linear``,\n  ``coef_`` was fixed in the one-vs-one case, by `Andreas M\u00fcller`_.\n\n- Performance improvements to efficient leave-one-out cross-validated\n  Ridge regression, esp. for the ``n_samples > n_features`` case, in\n  :class:`~linear_model.RidgeCV`, by Reuben Fletcher-Costin.\n\n- Refactoring and simplification of the :ref:`text_feature_extraction`\n  API and fixed a bug that caused possible negative IDF,\n  by `Olivier Grisel`_.\n\n- Beam pruning option in `_BaseHMM` module has been removed since it\n  is difficult to Cythonize. If you are interested in contributing a Cython\n  version, you can use the python version in the git history as a reference.\n\n- Classes in :ref:`neighbors` now support arbitrary Minkowski metric for\n  nearest neighbors searches. The metric can be specified by argument ``p``.\n\nAPI changes summary\n-------------------\n\n- `covariance.EllipticEnvelop` is now deprecated.\n  Please use :class:`~covariance.EllipticEnvelope` instead.\n\n- ``NeighborsClassifier`` and ``NeighborsRegressor`` are gone in the module\n  :ref:`neighbors`. 
Use the classes :class:`~neighbors.KNeighborsClassifier`,\n  :class:`~neighbors.RadiusNeighborsClassifier`, :class:`~neighbors.KNeighborsRegressor`\n  and\/or :class:`~neighbors.RadiusNeighborsRegressor` instead.\n\n- Sparse classes in the :ref:`sgd` module are now deprecated.\n\n- In `mixture.GMM`, `mixture.DPGMM` and `mixture.VBGMM`,\n  parameters must be passed to an object when initialising it and not through\n  ``fit``. Now ``fit`` will only accept the data as an input parameter.\n\n- methods ``rvs`` and ``decode`` in `GMM` module are now deprecated.\n  ``sample`` and ``score`` or ``predict`` should be used instead.\n\n- attribute ``_scores`` and ``_pvalues`` in univariate feature selection\n  objects are now deprecated.\n  ``scores_`` or ``pvalues_`` should be used instead.\n\n- In :class:`~linear_model.LogisticRegression`, :class:`~svm.LinearSVC`,\n  :class:`~svm.SVC` and :class:`~svm.NuSVC`, the ``class_weight`` parameter is\n  now an initialization parameter, not a parameter to fit. This makes grid\n  searches over this parameter possible.\n\n- LFW ``data`` is now always shape ``(n_samples, n_features)`` to be\n  consistent with the Olivetti faces dataset. Use ``images`` and\n  ``pairs`` attribute to access the natural images shapes instead.\n\n- In :class:`~svm.LinearSVC`, the meaning of the ``multi_class`` parameter\n  changed.  Options now are ``'ovr'`` and ``'crammer_singer'``, with\n  ``'ovr'`` being the default.  This does not change the default behavior\n  but hopefully is less confusing.\n\n- Class `feature_selection.text.Vectorizer` is deprecated and\n  replaced by `feature_selection.text.TfidfVectorizer`.\n\n- The preprocessor \/ analyzer nested structure for text feature\n  extraction has been removed. 
All those features are\n  now directly passed as flat constructor arguments\n  to `feature_selection.text.TfidfVectorizer` and\n  `feature_selection.text.CountVectorizer`, in particular the\n  following parameters are now used:\n\n- ``analyzer`` can be ``'word'`` or ``'char'`` to switch the default\n  analysis scheme, or use a specific python callable (as previously).\n\n- ``tokenizer`` and ``preprocessor`` have been introduced to make it\n  still possible to customize those steps with the new API.\n\n- ``input`` explicitly control how to interpret the sequence passed to\n  ``fit`` and ``predict``: filenames, file objects or direct (byte or\n  Unicode) strings.\n\n- charset decoding is explicit and strict by default.\n\n- the ``vocabulary``, fitted or not is now stored in the\n  ``vocabulary_`` attribute to be consistent with the project\n  conventions.\n\n- Class `feature_selection.text.TfidfVectorizer` now derives directly\n  from `feature_selection.text.CountVectorizer` to make grid\n  search trivial.\n\n- methods ``rvs`` in `_BaseHMM` module are now deprecated.\n  ``sample`` should be used instead.\n\n- Beam pruning option in `_BaseHMM` module is removed since it is\n  difficult to be Cythonized. If you are interested, you can look in the\n  history codes by git.\n\n- The SVMlight format loader now supports files with both zero-based and\n  one-based column indices, since both occur \"in the wild\".\n\n- Arguments in class :class:`~model_selection.ShuffleSplit` are now consistent with\n  :class:`~model_selection.StratifiedShuffleSplit`. Arguments ``test_fraction`` and\n  ``train_fraction`` are deprecated and renamed to ``test_size`` and\n  ``train_size`` and can accept both ``float`` and ``int``.\n\n- Arguments in class `Bootstrap` are now consistent with\n  :class:`~model_selection.StratifiedShuffleSplit`. 
Arguments ``n_test`` and\n  ``n_train`` are deprecated and renamed to ``test_size`` and\n  ``train_size`` and can accept both ``float`` and ``int``.\n\n- Argument ``p`` added to classes in :ref:`neighbors` to specify an\n  arbitrary Minkowski metric for nearest neighbors searches.\n\n\nPeople\n------\n\n* 282  `Andreas M\u00fcller`_\n* 239  `Peter Prettenhofer`_\n* 198  `Gael Varoquaux`_\n* 129  `Olivier Grisel`_\n* 114  `Mathieu Blondel`_\n* 103  Clay Woolam\n*  96  `Lars Buitinck`_\n*  88  `Jaques Grobler`_\n*  82  `Alexandre Gramfort`_\n*  50  `Bertrand Thirion`_\n*  42  `Robert Layton`_\n*  28  flyingimmidev\n*  26  `Jake Vanderplas`_\n*  26  Shiqiao Du\n*  21  `Satrajit Ghosh`_\n*  17  `David Marek`_\n*  17  `Gilles Louppe`_\n*  14  `Vlad Niculae`_\n*  11  Yannick Schwartz\n*  10  `Fabian Pedregosa`_\n*   9  fcostin\n*   7  Nick Wilson\n*   5  Adrien Gaidon\n*   5  `Nicolas Pinto`_\n*   4  `David Warde-Farley`_\n*   5  Nelle Varoquaux\n*   5  Emmanuelle Gouillart\n*   3  Joonas Sillanp\u00e4\u00e4\n*   3  Paolo Losi\n*   2  Charles McCarthy\n*   2  Roy Hyunjin Han\n*   2  Scott White\n*   2  ibayer\n*   1  Brandyn White\n*   1  Carlos Scheidegger\n*   1  Claire Revillet\n*   1  Conrad Lee\n*   1  `Edouard Duchesnay`_\n*   1  Jan Hendrik Metzen\n*   1  Meng Xinfan\n*   1  `Rob Zinkov`_\n*   1  Shiqiao\n*   1  Udi Weinsberg\n*   1  Virgile Fritsch\n*   1  Xinfan Meng\n*   1  Yaroslav Halchenko\n*   1  jansoe\n*   1  Leon Palafox\n\n\n.. _changes_0_10:\n\nVersion 0.10\n============\n\n**January 11, 2012**\n\nChangelog\n---------\n\n- Python 2.5 compatibility was dropped; the minimum Python version needed\n  to use scikit-learn is now 2.6.\n\n- :ref:`sparse_inverse_covariance` estimation using the graph Lasso, with\n  associated cross-validated estimator, by `Gael Varoquaux`_\n\n- New :ref:`Tree <tree>` module by `Brian Holt`_, `Peter Prettenhofer`_,\n  `Satrajit Ghosh`_ and `Gilles Louppe`_. 
The module comes with complete\n  documentation and examples.\n\n- Fixed a bug in the RFE module by `Gilles Louppe`_ (issue #378).\n\n- Fixed a memory leak in :ref:`svm` module by `Brian Holt`_ (issue #367).\n\n- Faster tests by `Fabian Pedregosa`_ and others.\n\n- Silhouette Coefficient cluster analysis evaluation metric added as\n  :func:`~sklearn.metrics.silhouette_score` by Robert Layton.\n\n- Fixed a bug in :ref:`k_means` in the handling of the ``n_init`` parameter:\n  the clustering algorithm used to be run ``n_init`` times but the last\n  solution was retained instead of the best solution by `Olivier Grisel`_.\n\n- Minor refactoring in :ref:`sgd` module; consolidated dense and sparse\n  predict methods; Enhanced test time performance by converting model\n  parameters to fortran-style arrays after fitting (only multi-class).\n\n- Adjusted Mutual Information metric added as\n  :func:`~sklearn.metrics.adjusted_mutual_info_score` by Robert Layton.\n\n- Models like SVC\/SVR\/LinearSVC\/LogisticRegression from libsvm\/liblinear\n  now support scaling of C regularization parameter by the number of\n  samples by `Alexandre Gramfort`_.\n\n- New :ref:`Ensemble Methods <ensemble>` module by `Gilles Louppe`_ and\n  `Brian Holt`_. 
The module comes with the random forest algorithm and the\n  extra-trees method, along with documentation and examples.\n\n- :ref:`outlier_detection`: outlier and novelty detection, by\n  :user:`Virgile Fritsch <VirgileFritsch>`.\n\n- :ref:`kernel_approximation`: a transform implementing kernel\n  approximation for fast SGD on non-linear kernels by\n  `Andreas M\u00fcller`_.\n\n- Fixed a bug due to atom swapping in :ref:`OMP` by `Vlad Niculae`_.\n\n- :ref:`SparseCoder` by `Vlad Niculae`_.\n\n- :ref:`mini_batch_kmeans` performance improvements by `Olivier Grisel`_.\n\n- :ref:`k_means` support for sparse matrices by `Mathieu Blondel`_.\n\n- Improved documentation for developers and for the :mod:`sklearn.utils`\n  module, by `Jake Vanderplas`_.\n\n- Vectorized 20newsgroups dataset loader\n  (:func:`~sklearn.datasets.fetch_20newsgroups_vectorized`) by\n  `Mathieu Blondel`_.\n\n- :ref:`multiclass` by `Lars Buitinck`_.\n\n- Utilities for fast computation of mean and variance for sparse matrices\n  by `Mathieu Blondel`_.\n\n- Make :func:`~sklearn.preprocessing.scale` and\n  `sklearn.preprocessing.Scaler` work on sparse matrices by\n  `Olivier Grisel`_\n\n- Feature importances using decision trees and\/or forest of trees,\n  by `Gilles Louppe`_.\n\n- Parallel implementation of forests of randomized trees by\n  `Gilles Louppe`_.\n\n- `sklearn.cross_validation.ShuffleSplit` can subsample the train\n  sets as well as the test sets by `Olivier Grisel`_.\n\n- Errors in the build of the documentation fixed by `Andreas M\u00fcller`_.\n\n\nAPI changes summary\n-------------------\n\nHere are the code migration instructions when upgrading from scikit-learn\nversion 0.9:\n\n- Some estimators that may overwrite their inputs to save memory previously\n  had ``overwrite_`` parameters; these have been replaced with ``copy_``\n  parameters with exactly the opposite meaning.\n\n  This particularly affects some of the estimators in :mod:`~sklearn.linear_model`.\n  The default behavior is 
still to copy everything passed in.

- The SVMlight dataset loader :func:`~sklearn.datasets.load_svmlight_file` no
  longer supports loading two files at once; use ``load_svmlight_files``
  instead. Also, the (unused) ``buffer_mb`` parameter is gone.

- Sparse estimators in the :ref:`sgd` module use dense parameter vector
  ``coef_`` instead of ``sparse_coef_``. This significantly improves
  test time performance.

- The :ref:`covariance` module now has a robust estimator of
  covariance, the Minimum Covariance Determinant estimator.

- Cluster evaluation metrics in :mod:`~sklearn.metrics.cluster` have been refactored
  but the changes are backwards compatible. They have been moved to
  `metrics.cluster.supervised`, along with
  `metrics.cluster.unsupervised`, which contains the Silhouette
  Coefficient.

- The ``permutation_test_score`` function now behaves the same way as
  ``cross_val_score`` (i.e. it uses the mean score across the folds).

- Cross Validation generators now use integer indices (``indices=True``)
  by default instead of boolean masks. This makes it more intuitive to
  use with sparse matrix data.

- The functions used for sparse coding, ``sparse_encode`` and
  ``sparse_encode_parallel`` have been combined into
  :func:`~sklearn.decomposition.sparse_encode`, and the shapes of the arrays
  have been transposed for consistency with the matrix factorization setting,
  as opposed to the regression setting.

- Fixed an off-by-one error in the SVMlight/LibSVM file format handling;
  files generated using :func:`~sklearn.datasets.dump_svmlight_file` should be
  re-generated. (They should continue to work, but accidentally had one
  extra column of zeros prepended.)

- ``BaseDictionaryLearning`` class replaced by ``SparseCodingMixin``.

- `sklearn.utils.extmath.fast_svd` has been renamed
  :func:`~sklearn.utils.extmath.randomized_svd` and the default
  oversampling is now fixed to 10 additional random vectors instead
  of doubling the number of components to extract. The new behavior
  follows the reference paper.


People
------

The following people contributed to scikit-learn since last release:

* 246  `Andreas Müller`_
* 242  `Olivier Grisel`_
* 220  `Gilles Louppe`_
* 183  `Brian Holt`_
* 166  `Gael Varoquaux`_
* 144  `Lars Buitinck`_
*  73  `Vlad Niculae`_
*  65  `Peter Prettenhofer`_
*  64  `Fabian Pedregosa`_
*  60  Robert Layton
*  55  `Mathieu Blondel`_
*  52  `Jake Vanderplas`_
*  44  Noel Dawe
*  38  `Alexandre Gramfort`_
*  24  :user:`Virgile Fritsch <VirgileFritsch>`
*  23  `Satrajit Ghosh`_
*   3  Jan Hendrik Metzen
*   3  Kenneth C. Arnold
*   3  Shiqiao Du
*   3  Tim Sheerman-Chase
*   3  `Yaroslav Halchenko`_
*   2  Bala Subrahmanyam Varanasi
*   2  DraXus
*   2  Michael Eickenberg
*   1  Bogdan Trach
*   1  Félix-Antoine Fortin
*   1  Juan Manuel Caicedo Carvajal
*   1  Nelle Varoquaux
*   1  `Nicolas Pinto`_
*   1  Tiziano Zito
*   1  Xinfan Meng



.. _changes_0_9:

Version 0.9
===========

**September 21, 2011**

scikit-learn 0.9 was released in September 2011, three months after the 0.8
release and includes the new modules :ref:`manifold`, :ref:`dirichlet_process`
as well as several new algorithms and documentation improvements.

This release also includes the dictionary-learning work developed by
`Vlad Niculae`_ as part of the `Google Summer of Code
<https://developers.google.com/open-source/gsoc>`_ program.



.. |banner1| image:: ../auto_examples/manifold/images/thumb/sphx_glr_plot_compare_methods_thumb.png
   :target: ../auto_examples/manifold/plot_compare_methods.html

.. |banner2| image:: ../auto_examples/linear_model/images/thumb/sphx_glr_plot_omp_thumb.png
   :target: ../auto_examples/linear_model/plot_omp.html

.. |banner3| image:: ../auto_examples/decomposition/images/thumb/sphx_glr_plot_kernel_pca_thumb.png
   :target: ../auto_examples/decomposition/plot_kernel_pca.html

.. |center-div| raw:: html

    <div style="text-align: center; margin: 0px 0 -5px 0;">

.. |end-div| raw:: html

    </div>


|center-div| |banner2| |banner1| |banner3| |end-div|

Changelog
---------

- New :ref:`manifold` module by `Jake Vanderplas`_ and
  `Fabian Pedregosa`_.

- New :ref:`Dirichlet Process <dirichlet_process>` Gaussian Mixture
  Model by `Alexandre Passos`_

- :ref:`neighbors` module refactoring by `Jake Vanderplas`_:
  general refactoring, support for sparse matrices in input, speed and
  documentation improvements. See the next section for a full list of API
  changes.

- Improvements on the :ref:`feature_selection` module by
  `Gilles Louppe`_: refactoring of the RFE classes, documentation
  rewrite, increased efficiency and minor API changes.

- :ref:`SparsePCA` by `Vlad Niculae`_, `Gael Varoquaux`_ and
  `Alexandre Gramfort`_

- Printing an estimator now behaves independently of architectures
  and Python version thanks to :user:`Jean Kossaifi <JeanKossaifi>`.

- :ref:`Loader for libsvm/svmlight format <libsvm_loader>` by
  `Mathieu Blondel`_ and `Lars Buitinck`_

- Documentation improvements: thumbnails in
  example gallery by `Fabian Pedregosa`_.

- Important bugfixes in :ref:`svm` module (segfaults, bad
  performance) by `Fabian Pedregosa`_.

- Added :ref:`multinomial_naive_bayes` and :ref:`bernoulli_naive_bayes`
  by `Lars Buitinck`_

- Text feature extraction optimizations by Lars Buitinck

- Chi-Square feature selection
  (:func:`feature_selection.chi2`) by `Lars Buitinck`_.

- :ref:`sample_generators` module refactoring by `Gilles Louppe`_

- :ref:`multiclass` by `Mathieu Blondel`_

- Ball tree rewrite by `Jake Vanderplas`_

- Implementation of :ref:`dbscan` algorithm by Robert Layton

- Kmeans predict and transform by Robert Layton

- Preprocessing module refactoring by `Olivier Grisel`_

- Faster mean shift by Conrad Lee

- New ``Bootstrap``, :ref:`ShuffleSplit` and various other
  improvements in cross validation schemes by `Olivier Grisel`_ and
  `Gael Varoquaux`_

- Adjusted Rand index and V-Measure clustering evaluation metrics by `Olivier Grisel`_

- Added :class:`Orthogonal Matching Pursuit <linear_model.OrthogonalMatchingPursuit>` by `Vlad Niculae`_

- Added 2D-patch extractor utilities in the :ref:`feature_extraction` module by `Vlad Niculae`_

- Implementation of :class:`~linear_model.LassoLarsCV`
  (cross-validated Lasso solver using the Lars algorithm) and
  :class:`~linear_model.LassoLarsIC`
  (BIC/AIC model selection in Lars) by `Gael Varoquaux`_
  and `Alexandre Gramfort`_

- Scalability improvements to :func:`metrics.roc_curve` by Olivier Hervieu

- Distance helper functions :func:`metrics.pairwise_distances`
  and :func:`metrics.pairwise.pairwise_kernels` by Robert Layton

- :class:`Mini-Batch K-Means <cluster.MiniBatchKMeans>` by Nelle Varoquaux and Peter Prettenhofer.

- mldata utilities by Pietro Berkes.

- :ref:`olivetti_faces_dataset` by `David Warde-Farley`_.


API changes summary
-------------------

Here are the code migration instructions when upgrading from scikit-learn
version 0.8:

- The ``scikits.learn`` package was renamed ``sklearn``. There is
  still a ``scikits.learn`` package alias for backward compatibility.

  Third-party projects with a dependency on scikit-learn 0.9+ should
  upgrade their codebase. For instance, under Linux / MacOSX just run
  (make a backup first!)::

      find -name "*.py" | xargs sed -i 's/\bscikits.learn\b/sklearn/g'

- Estimators no longer accept model parameters as ``fit`` arguments:
  instead all parameters must only be passed as constructor
  arguments or using the now public ``set_params`` method inherited
  from :class:`~base.BaseEstimator`.

  Some estimators can still accept keyword arguments on ``fit``,
  but this is restricted to data-dependent values (e.g. a Gram matrix
  or an affinity matrix that are precomputed from the ``X`` data matrix).

- The ``cross_val`` package has been renamed to ``cross_validation``
  although there is also a ``cross_val`` package alias in place for
  backward compatibility.

  Third-party projects with a dependency on scikit-learn 0.9+ should
  upgrade their codebase. For instance, under Linux / MacOSX just run
  (make a backup first!)::

      find -name "*.py" | xargs sed -i 's/\bcross_val\b/cross_validation/g'

- The ``score_func`` argument of the
  ``sklearn.cross_validation.cross_val_score`` function is now expected
  to accept ``y_test`` and ``y_predicted`` as its only arguments for
  classification and regression tasks, or ``X_test`` for unsupervised
  estimators.

- The ``gamma`` parameter for support vector machine algorithms is set
  to ``1 / n_features`` by default, instead of ``1 / n_samples``.

- The ``sklearn.hmm`` module has been marked as orphaned: it will be removed
  from scikit-learn in version 0.11 unless someone steps up to
  contribute documentation, examples and fix lurking numerical
  stability issues.

- ``sklearn.neighbors`` has been made into a submodule.  The two previously
  available estimators, ``NeighborsClassifier`` and ``NeighborsRegressor``
  have been marked as deprecated.  Their functionality has been divided
  among five new classes: ``NearestNeighbors`` for unsupervised neighbors
  searches, ``KNeighborsClassifier`` & ``RadiusNeighborsClassifier``
  for supervised classification problems, and ``KNeighborsRegressor``
  & ``RadiusNeighborsRegressor`` for supervised regression problems.

- ``sklearn.ball_tree.BallTree`` has been moved to
  ``sklearn.neighbors.BallTree``.  Using the former will generate a warning.

- ``sklearn.linear_model.LARS()`` and related classes (LassoLARS,
  LassoLARSCV, etc.) have been renamed to
  ``sklearn.linear_model.Lars()``.

- All distance metrics and kernels in ``sklearn.metrics.pairwise`` now have a Y
  parameter, which by default is None. If not given, the result is the distance
  (or kernel similarity) between each pair of samples in X. If given, the result
  is the pairwise distance (or kernel similarity) between samples in X and Y.

- ``sklearn.metrics.pairwise.l1_distance`` is now called ``manhattan_distance``,
  and by default returns the pairwise distance. For the component-wise distance,
  set the parameter ``sum_over_features`` to ``False``.

Backward compatibility package aliases and other deprecated classes and
functions will be removed in version 0.11.


People
------

38 people contributed to this release.

- 387  `Vlad Niculae`_
- 320  `Olivier Grisel`_
- 192  `Lars Buitinck`_
- 179  `Gael Varoquaux`_
- 168  `Fabian Pedregosa`_ (`INRIA`_, `Parietal Team`_)
- 127  `Jake Vanderplas`_
- 120  `Mathieu Blondel`_
- 85  `Alexandre Passos`_
- 67  `Alexandre Gramfort`_
- 57  `Peter Prettenhofer`_
- 56  `Gilles Louppe`_
- 42  Robert Layton
- 38  Nelle Varoquaux
- 32  :user:`Jean Kossaifi <JeanKossaifi>`
- 30  Conrad Lee
- 22  Pietro Berkes
- 18  andy
- 17  David Warde-Farley
- 12  Brian Holt
- 11  Robert
- 8  Amit Aides
- 8  :user:`Virgile Fritsch <VirgileFritsch>`
- 7  `Yaroslav Halchenko`_
- 6  Salvatore Masecchia
- 5  Paolo Losi
- 4  Vincent Schut
- 3  Alexis Metaireau
- 3  Bryan Silverthorn
- 3  `Andreas Müller`_
- 2  Minwoo Jake Lee
- 1  Emmanuelle Gouillart
- 1  Keith Goodman
- 1  Lucas Wiman
- 1  `Nicolas Pinto`_
- 1  Thouis (Ray) Jones
- 1  Tim Sheerman-Chase


.. _changes_0_8:

Version 0.8
===========

**May 11, 2011**

scikit-learn 0.8 was released in May 2011, one month after the first
"international" `scikit-learn coding sprint
<https://github.com/scikit-learn/scikit-learn/wiki/Upcoming-events>`_ and is
marked by the inclusion of important modules: :ref:`hierarchical_clustering`,
:ref:`cross_decomposition`, :ref:`NMF`, initial support for Python 3 and by important
enhancements and bug fixes.


Changelog
---------

Several new modules were introduced during this release:

- New :ref:`hierarchical_clustering` module by Vincent Michel,
  `Bertrand Thirion`_, `Alexandre Gramfort`_ and `Gael Varoquaux`_.

- :ref:`kernel_pca` implementation by `Mathieu Blondel`_

- :ref:`labeled_faces_in_the_wild_dataset` by `Olivier Grisel`_.

- New :ref:`cross_decomposition` module by `Edouard Duchesnay`_.

- :ref:`NMF` module by `Vlad Niculae`_

- Implementation of the :ref:`oracle_approximating_shrinkage` algorithm by
  :user:`Virgile Fritsch <VirgileFritsch>` in the :ref:`covariance` module.


Some other modules benefited from significant improvements or cleanups.


- Initial support for Python 3: builds and imports cleanly,
  some modules are usable while others have failing tests by `Fabian Pedregosa`_.

- :class:`~decomposition.PCA` is now usable from the Pipeline object by `Olivier Grisel`_.

- Guide :ref:`performance-howto` by `Olivier Grisel`_.

- Fixes for memory leaks in libsvm bindings, 64-bit safer BallTree by Lars Buitinck.

- Bug and style fixes in :ref:`k_means` algorithm by Jan Schlüter.

- Added attribute ``converged`` to Gaussian Mixture Models by Vincent Schut.

- Implemented ``transform``, ``predict_log_proba`` in
  :class:`~discriminant_analysis.LinearDiscriminantAnalysis` by `Mathieu Blondel`_.

- Refactoring in the :ref:`svm` module and bug fixes by `Fabian Pedregosa`_,
  `Gael Varoquaux`_ and Amit Aides.

- Refactored SGD module (removed code duplication,
  better variable naming),
  added interface for sample weight by `Peter Prettenhofer`_.

- Wrapped BallTree with Cython by Thouis (Ray) Jones.

- Added function :func:`svm.l1_min_c` by Paolo Losi.

- Typos, doc style, etc. by `Yaroslav Halchenko`_, `Gael Varoquaux`_,
  `Olivier Grisel`_, Yann Malet, `Nicolas Pinto`_, Lars Buitinck and
  `Fabian Pedregosa`_.


People
------

People that made this release possible, preceded by number of commits:


- 159  `Olivier Grisel`_
- 96  `Gael Varoquaux`_
- 96  `Vlad Niculae`_
- 94  `Fabian Pedregosa`_
- 36  `Alexandre Gramfort`_
- 32  Paolo Losi
- 31  `Edouard Duchesnay`_
- 30  `Mathieu Blondel`_
- 25  `Peter Prettenhofer`_
- 22  `Nicolas Pinto`_
- 11  :user:`Virgile Fritsch <VirgileFritsch>`
-  7  Lars Buitinck
-  6  Vincent Michel
-  5  `Bertrand Thirion`_
-  4  Thouis (Ray) Jones
-  4  Vincent Schut
-  3  Jan Schlüter
-  2  Julien Miotte
-  2  `Matthieu Perrot`_
-  2  Yann Malet
-  2  `Yaroslav Halchenko`_
-  1  Amit Aides
-  1  `Andreas Müller`_
-  1  Feth Arezki
-  1  Meng Xinfan


.. _changes_0_7:

Version 0.7
===========

**March 2, 2011**

scikit-learn 0.7 was released in March 2011, roughly three months
after the 0.6 release. This release is marked by speed
improvements in existing algorithms like k-Nearest Neighbors and
K-Means, and by the inclusion of an efficient algorithm for
computing the Ridge Generalized Cross Validation solution. Unlike the
preceding release, no new modules were added to this release.

Changelog
---------

- Performance improvements for Gaussian Mixture Model sampling [Jan
  Schlüter].

- Implementation of efficient leave-one-out cross-validated Ridge in
  :class:`~linear_model.RidgeCV` [`Mathieu Blondel`_]

- Better handling of collinearity and early stopping in
  :func:`linear_model.lars_path` [`Alexandre Gramfort`_ and `Fabian
  Pedregosa`_].

- Fixes for liblinear ordering of labels and sign of coefficients
  [Dan Yamins, Paolo Losi, `Mathieu Blondel`_ and `Fabian Pedregosa`_].

- Performance improvements for Nearest Neighbors algorithm in
  high-dimensional spaces [`Fabian Pedregosa`_].

- Performance improvements for :class:`~cluster.KMeans` [`Gael
  Varoquaux`_ and `James Bergstra`_].

- Sanity checks for SVM-based classes [`Mathieu Blondel`_].

- Refactoring of `neighbors.NeighborsClassifier` and
  :func:`neighbors.kneighbors_graph`: added different algorithms for
  the k-Nearest Neighbor Search and implemented a more stable
  algorithm for finding barycenter weights. Also added some
  developer documentation for this module, see
  `notes_neighbors
  <https://github.com/scikit-learn/scikit-learn/wiki/Neighbors-working-notes>`_ for more information [`Fabian Pedregosa`_].

- Documentation improvements: added `pca.RandomizedPCA` and
  :class:`~linear_model.LogisticRegression` to the class
  reference. Also added references of matrices used for clustering
  and other fixes [`Gael Varoquaux`_, `Fabian Pedregosa`_, `Mathieu
  Blondel`_, `Olivier Grisel`_, Virgile Fritsch, Emmanuelle
  Gouillart]

- Bound ``decision_function`` in classes that make use of liblinear_,
  dense and sparse variants, like :class:`~svm.LinearSVC` or
  :class:`~linear_model.LogisticRegression` [`Fabian Pedregosa`_].

- Performance and API improvements to
  :func:`metrics.pairwise.euclidean_distances` and to
  `pca.RandomizedPCA` [`James Bergstra`_].

- Fixed compilation issues under NetBSD [Kamel Ibn Hassen Derouiche]

- Allow input sequences of different lengths in `hmm.GaussianHMM`
  [`Ron Weiss`_].

- Fixed a bug in affinity propagation caused by incorrect indexing [Xinfan Meng]


People
------

People that made this release possible, preceded by number of commits:

- 85  `Fabian Pedregosa`_
- 67  `Mathieu Blondel`_
- 20  `Alexandre Gramfort`_
- 19  `James Bergstra`_
- 14  Dan Yamins
- 13  `Olivier Grisel`_
- 12  `Gael Varoquaux`_
- 4  `Edouard Duchesnay`_
- 4  `Ron Weiss`_
- 2  Satrajit Ghosh
- 2  Vincent Dubourg
- 1  Emmanuelle Gouillart
- 1  Kamel Ibn Hassen Derouiche
- 1  Paolo Losi
- 1  VirgileFritsch
- 1  `Yaroslav Halchenko`_
- 1  Xinfan Meng


.. _changes_0_6:

Version 0.6
===========

**December 21, 2010**

scikit-learn 0.6 was released in December 2010. It is marked by the
inclusion of several new modules and a general renaming of old
ones. It is also marked by the inclusion of new examples, including
applications to real-world datasets.


Changelog
---------

- New `stochastic gradient
  <https://scikit-learn.org/stable/modules/sgd.html>`_ descent
  module by Peter Prettenhofer.
  The module comes with complete
  documentation and examples.

- Improved svm module: memory consumption has been reduced by 50%,
  heuristic to automatically set class weights, possibility to
  assign weights to samples (see
  :ref:`sphx_glr_auto_examples_svm_plot_weighted_samples.py` for an example).

- New :ref:`gaussian_process` module by Vincent Dubourg. This module
  also has great documentation and some very neat examples. See
  example_gaussian_process_plot_gp_regression.py or
  example_gaussian_process_plot_gp_probabilistic_classification_after_regression.py
  for a taste of what can be done.

- It is now possible to use liblinear's Multi-class SVC (option
  multi_class in :class:`~svm.LinearSVC`)

- New features and performance improvements of text feature
  extraction.

- Improved sparse matrix support, both in main classes
  (:class:`~model_selection.GridSearchCV`) as well as in modules
  sklearn.svm.sparse and sklearn.linear_model.sparse.

- Lots of cool new examples and a new section that uses real-world
  datasets was created. These include:
  :ref:`sphx_glr_auto_examples_applications_plot_face_recognition.py`,
  :ref:`sphx_glr_auto_examples_applications_plot_species_distribution_modeling.py`,
  :ref:`sphx_glr_auto_examples_applications_wikipedia_principal_eigenvector.py` and
  others.

- Faster :ref:`least_angle_regression` algorithm. It is now 2x
  faster than the R version in the worst case and up to 10x
  faster in some cases.

- Faster coordinate descent algorithm. In particular, the full path
  version of lasso (:func:`linear_model.lasso_path`) is more than
  200x faster than before.

- It is now possible to get probability estimates from a
  :class:`~linear_model.LogisticRegression` model.

- Module renaming: the glm module has been renamed to linear_model,
  the gmm module has been included into the more general mixture
  model and the sgd module has been included in linear_model.

- Lots of bug fixes and documentation improvements.


People
------

People that made this release possible, preceded by number of commits:

* 207  `Olivier Grisel`_

* 167  `Fabian Pedregosa`_

* 97  `Peter Prettenhofer`_

* 68  `Alexandre Gramfort`_

* 59  `Mathieu Blondel`_

* 55  `Gael Varoquaux`_

* 33  Vincent Dubourg

* 21  `Ron Weiss`_

* 9  Bertrand Thirion

* 3  `Alexandre Passos`_

* 3  Anne-Laure Fouque

* 2  Ronan Amicel

* 1  `Christian Osendorfer`_



.. _changes_0_5:


Version 0.5
===========

**October 11, 2010**

Changelog
---------

New classes
-----------

- Support for sparse matrices in some classifiers of modules
  ``svm`` and ``linear_model`` (see `svm.sparse.SVC`,
  `svm.sparse.SVR`, `svm.sparse.LinearSVC`,
  `linear_model.sparse.Lasso`, `linear_model.sparse.ElasticNet`)

- New :class:`~pipeline.Pipeline` object to compose different estimators.

- Recursive Feature Elimination routines in module
  :ref:`feature_selection`.

- Addition of various classes capable of cross validation in the
  linear_model module (:class:`~linear_model.LassoCV`, :class:`~linear_model.ElasticNetCV`,
  etc.).

- New, more efficient LARS algorithm implementation. The Lasso
  variant of the algorithm is also implemented. See
  :func:`~linear_model.lars_path`, :class:`~linear_model.Lars` and
  :class:`~linear_model.LassoLars`.

- New Hidden Markov Models module (see classes
  `hmm.GaussianHMM`, `hmm.MultinomialHMM`, `hmm.GMMHMM`)

- New module feature_extraction (see :ref:`class reference
  <feature_extraction_ref>`)

- New FastICA algorithm in module sklearn.fastica


Documentation
-------------

- Improved documentation for many modules, now separating
  narrative documentation from the class reference. As an example,
  see `documentation for the SVM module
  <https://scikit-learn.org/stable/modules/svm.html>`_ and the
  complete `class reference
  <https://scikit-learn.org/stable/modules/classes.html>`_.

Fixes
-----

- API changes: variable names now adhere to PEP-8 and are more
  meaningful.

- Fixes for the svm module to run in a shared memory context
  (multiprocessing).

- It is again possible to generate latex (and thus PDF) from the
  sphinx docs.

Examples
--------

- New examples using some of the mlcomp datasets:
  ``sphx_glr_auto_examples_mlcomp_sparse_document_classification.py`` (since removed) and
  :ref:`sphx_glr_auto_examples_text_plot_document_classification_20newsgroups.py`

- Many more examples. `See here
  <https://scikit-learn.org/stable/auto_examples/index.html>`_
  the full list of examples.


External dependencies
---------------------

- Joblib is now a dependency of this package, although it is
  shipped with it (``sklearn.externals.joblib``).

Removed modules
---------------

- Module ann (Artificial Neural Networks) has been removed from
  the distribution. Users wanting this sort of algorithm should
  take a look into pybrain.

Misc
----

- New sphinx theme for the web page.


Authors
-------

The following is a list of authors for this release, preceded by
number of commits:

* 262  Fabian Pedregosa
* 240  Gael Varoquaux
* 149  Alexandre Gramfort
* 116  Olivier Grisel
*  40  Vincent Michel
*  38  Ron Weiss
*  23  Matthieu Perrot
*  10  Bertrand Thirion
*   9  VirgileFritsch
*   7  Yaroslav Halchenko
*   6  Edouard Duchesnay
*   4  Mathieu Blondel
*   1  Ariel Rokem
*   1  Matthieu Brucher

Version 0.4
===========

**August 26, 2010**

Changelog
---------

Major changes in this release include:

- Coordinate Descent algorithm (Lasso, ElasticNet) refactoring &
  speed improvements (roughly 100x faster).

- Coordinate Descent refactoring (and bug fixing) for consistency
  with R's package GLMNET.

- New metrics module.

- New GMM module contributed by Ron Weiss.

- Implementation of the LARS algorithm (without the Lasso variant for now).

- feature_selection module redesign.

- Migration to GIT as version control system.

- Removal of obsolete attrselect module.

- Rename of private compiled extensions (added underscore).

- Removal of legacy unmaintained code.

- Documentation improvements (both docstring and rst).

- Improvement of the build system to (optionally) link with MKL.
  Also, provide a lite BLAS implementation in case no system-wide BLAS is
  found.

- Lots of new examples.

- Many, many bug fixes ...


Authors
-------

The committer list for this release is the following (preceded by number
of commits):

* 143  Fabian Pedregosa
* 35  Alexandre Gramfort
* 34  Olivier Grisel
* 11  Gael Varoquaux
*  5  Yaroslav Halchenko
*  2  Vincent Michel
*  1  Chris Filo Gorgolewski


Earlier versions
================

Earlier versions included contributions by Fred Mailhot, David Cooke,
David Huard, Dave Morrill, Ed Schofield,
Travis Oliphant, Pearu Peterson.
  Satrajit Ghosh      17   David Marek      17   Gilles Louppe      14   Vlad Niculae      11  Yannick Schwartz    10   Fabian Pedregosa       9  fcostin     7  Nick Wilson     5  Adrien Gaidon     5   Nicolas Pinto       4   David Warde Farley       5  Nelle Varoquaux     5  Emmanuelle Gouillart     3  Joonas Sillanp       3  Paolo Losi     2  Charles McCarthy     2  Roy Hyunjin Han     2  Scott White     2  ibayer     1  Brandyn White     1  Carlos Scheidegger     1  Claire Revillet     1  Conrad Lee     1   Edouard Duchesnay       1  Jan Hendrik Metzen     1  Meng Xinfan     1   Rob Zinkov       1  Shiqiao     1  Udi Weinsberg     1  Virgile Fritsch     1  Xinfan Meng     1  Yaroslav Halchenko     1  jansoe     1  Leon Palafox       changes 0 10   Version 0 10                 January 11  2012    Changelog              Python 2 5 compatibility was dropped  the minimum Python version needed   to use scikit learn is now 2 6      ref  sparse inverse covariance  estimation using the graph Lasso  with   associated cross validated estimator  by  Gael Varoquaux      New  ref  Tree  tree   module by  Brian Holt     Peter Prettenhofer       Satrajit Ghosh   and  Gilles Louppe    The module comes with complete   documentation and examples     Fixed a bug in the RFE module by  Gilles Louppe    issue  378      Fixed a memory leak in  ref  svm  module by  Brian Holt    issue  367      Faster tests by  Fabian Pedregosa   and others     Silhouette Coefficient cluster analysis evaluation metric added as    func   sklearn metrics silhouette score  by Robert Layton     Fixed a bug in  ref  k means  in the handling of the   n init   parameter    the clustering algorithm used to be run   n init   times but the last   solution was retained instead of the best solution by  Olivier Grisel       Minor refactoring in  ref  sgd  module  consolidated dense and sparse   predict methods  Enhanced test time performance by converting model   parameters to fortran style arrays after fitting  
only multi class      Adjusted Mutual Information metric added as    func   sklearn metrics adjusted mutual info score  by Robert Layton     Models like SVC SVR LinearSVC LogisticRegression from libsvm liblinear   now support scaling of C regularization parameter by the number of   samples by  Alexandre Gramfort       New  ref  Ensemble Methods  ensemble   module by  Gilles Louppe   and    Brian Holt    The module comes with the random forest algorithm and the   extra trees method  along with documentation and examples      ref  outlier detection   outlier and novelty detection  by    user  Virgile Fritsch  VirgileFritsch        ref  kernel approximation   a transform implementing kernel   approximation for fast SGD on non linear kernels by    Andreas M ller       Fixed a bug due to atom swapping in  ref  OMP  by  Vlad Niculae        ref  SparseCoder  by  Vlad Niculae        ref  mini batch kmeans  performance improvements by  Olivier Grisel        ref  k means  support for sparse matrices by  Mathieu Blondel       Improved documentation for developers and for the  mod  sklearn utils    module  by  Jake Vanderplas       Vectorized 20newsgroups dataset loader     func   sklearn datasets fetch 20newsgroups vectorized   by    Mathieu Blondel        ref  multiclass  by  Lars Buitinck       Utilities for fast computation of mean and variance for sparse matrices   by  Mathieu Blondel       Make  func   sklearn preprocessing scale  and    sklearn preprocessing Scaler  work on sparse matrices by    Olivier Grisel      Feature importances using decision trees and or forest of trees    by  Gilles Louppe       Parallel implementation of forests of randomized trees by    Gilles Louppe        sklearn cross validation ShuffleSplit  can subsample the train   sets as well as the test sets by  Olivier Grisel       Errors in the build of the documentation fixed by  Andreas M ller      API changes summary                      Here are the code migration instructions when upgrading 
from scikit learn version 0 9     Some estimators that may overwrite their inputs to save memory previously   had   overwrite    parameters  these have been replaced with   copy      parameters with exactly the opposite meaning     This particularly affects some of the estimators in  mod   sklearn linear model     The default behavior is still to copy everything passed in     The SVMlight dataset loader  func   sklearn datasets load svmlight file  no   longer supports loading two files at once  use   load svmlight files     instead  Also  the  unused    buffer mb   parameter is gone     Sparse estimators in the  ref  sgd  module use dense parameter vector     coef    instead of   sparse coef     This significantly improves   test time performance     The  ref  covariance  module now has a robust estimator of   covariance  the Minimum Covariance Determinant estimator     Cluster evaluation metrics in  mod   sklearn metrics cluster  have been refactored   but the changes are backwards compatible  They have been moved to the    metrics cluster supervised   along with    metrics cluster unsupervised  which contains the Silhouette   Coefficient     The   permutation test score   function now behaves the same way as     cross val score    i e  uses the mean score across the folds      Cross Validation generators now use integer indices    indices True      by default instead of boolean masks  This make it more intuitive to   use with sparse matrix data     The functions used for sparse coding    sparse encode   and     sparse encode parallel   have been combined into    func   sklearn decomposition sparse encode   and the shapes of the arrays   have been transposed for consistency with the matrix factorization setting    as opposed to the regression setting     Fixed an off by one error in the SVMlight LibSVM file format handling    files generated using  func   sklearn datasets dump svmlight file  should be   re generated   They should continue to work  but accidentally 
had one   extra column of zeros prepended        BaseDictionaryLearning   class replaced by   SparseCodingMixin        sklearn utils extmath fast svd  has been renamed    func   sklearn utils extmath randomized svd  and the default   oversampling is now fixed to 10 additional random vectors instead   of doubling the number of components to extract  The new behavior   follows the reference paper    People         The following people contributed to scikit learn since last release     246   Andreas M ller     242   Olivier Grisel     220   Gilles Louppe     183   Brian Holt     166   Gael Varoquaux     144   Lars Buitinck      73   Vlad Niculae      65   Peter Prettenhofer      64   Fabian Pedregosa      60  Robert Layton    55   Mathieu Blondel      52   Jake Vanderplas      44  Noel Dawe    38   Alexandre Gramfort      24   user  Virgile Fritsch  VirgileFritsch      23   Satrajit Ghosh       3  Jan Hendrik Metzen     3  Kenneth C  Arnold     3  Shiqiao Du     3  Tim Sheerman Chase     3   Yaroslav Halchenko       2  Bala Subrahmanyam Varanasi     2  DraXus     2  Michael Eickenberg     1  Bogdan Trach     1  F lix Antoine Fortin     1  Juan Manuel Caicedo Carvajal     1  Nelle Varoquaux     1   Nicolas Pinto       1  Tiziano Zito     1  Xinfan Meng        changes 0 9   Version 0 9                September 21  2011    scikit learn 0 9 was released on September 2011  three months after the 0 8 release and includes the new modules  ref  manifold    ref  dirichlet process  as well as several new algorithms and documentation improvements   This release also includes the dictionary learning work developed by  Vlad Niculae   as part of the  Google Summer of Code  https   developers google com open source gsoc    program         banner1  image      auto examples manifold images thumb sphx glr plot compare methods thumb png     target     auto examples manifold plot compare methods html      banner2  image      auto examples linear model images thumb sphx glr plot omp thumb 
png     target     auto examples linear model plot omp html      banner3  image      auto examples decomposition images thumb sphx glr plot kernel pca thumb png     target     auto examples decomposition plot kernel pca html      center div  raw   html       div style  text align  center  margin  0px 0  5px 0         end div  raw   html        div     center div   banner2   banner1   banner3   end div   Changelog              New  ref  manifold  module by  Jake Vanderplas   and    Fabian Pedregosa       New  ref  Dirichlet Process  dirichlet process   Gaussian Mixture   Model by  Alexandre Passos       ref  neighbors  module refactoring by  Jake Vanderplas       general refactoring  support for sparse matrices in input  speed and   documentation improvements  See the next section for a full list of API   changes     Improvements on the  ref  feature selection  module by    Gilles Louppe     refactoring of the RFE classes  documentation   rewrite  increased efficiency and minor API changes      ref  SparsePCA  by  Vlad Niculae     Gael Varoquaux   and    Alexandre Gramfort      Printing an estimator now behaves independently of architectures   and Python version thanks to  user  Jean Kossaifi  JeanKossaifi        ref  Loader for libsvm svmlight format  libsvm loader   by    Mathieu Blondel   and  Lars Buitinck      Documentation improvements  thumbnails in   example gallery by  Fabian Pedregosa       Important bugfixes in  ref  svm  module  segfaults  bad   performance  by  Fabian Pedregosa       Added  ref  multinomial naive bayes  and  ref  bernoulli naive bayes    by  Lars Buitinck      Text feature extraction optimizations by Lars Buitinck    Chi Square feature selection     func  feature selection chi2   by  Lars Buitinck        ref  sample generators  module refactoring by  Gilles Louppe       ref  multiclass  by  Mathieu Blondel      Ball tree rewrite by  Jake Vanderplas      Implementation of  ref  dbscan  algorithm by Robert Layton    Kmeans predict and 
transform by Robert Layton    Preprocessing module refactoring by  Olivier Grisel      Faster mean shift by Conrad Lee    New   Bootstrap     ref  ShuffleSplit  and various other   improvements in cross validation schemes by  Olivier Grisel   and    Gael Varoquaux      Adjusted Rand index and V Measure clustering evaluation metrics by  Olivier Grisel      Added  class  Orthogonal Matching Pursuit  linear model OrthogonalMatchingPursuit   by  Vlad Niculae      Added 2D patch extractor utilities in the  ref  feature extraction  module by  Vlad Niculae      Implementation of  class   linear model LassoLarsCV     cross validated Lasso solver using the Lars algorithm  and    class   linear model LassoLarsIC   BIC AIC model   selection in Lars  by  Gael Varoquaux     and  Alexandre Gramfort      Scalability improvements to  func  metrics roc curve  by Olivier Hervieu    Distance helper functions  func  metrics pairwise distances    and  func  metrics pairwise pairwise kernels  by Robert Layton     class  Mini Batch K Means  cluster MiniBatchKMeans   by Nelle Varoquaux and Peter Prettenhofer     mldata utilities by Pietro Berkes      ref  olivetti faces dataset  by  David Warde Farley      API changes summary                      Here are the code migration instructions when upgrading from scikit learn version 0 8     The   scikits learn   package was renamed   sklearn    There is   still a   scikits learn   package alias for backward compatibility     Third party projects with a dependency on scikit learn 0 9  should   upgrade their codebase  For instance  under Linux   MacOSX just run    make a backup first            find  name    py    xargs sed  i  s  bscikits learn b sklearn g     Estimators no longer accept model parameters as   fit   arguments    instead all parameters must be only be passed as constructor   arguments or using the now public   set params   method inherited   from  class   base BaseEstimator      Some estimators can still accept keyword arguments 
on the   fit     but this is restricted to data dependent values  e g  a Gram matrix   or an affinity matrix that are precomputed from the   X   data matrix     The   cross val   package has been renamed to   cross validation     although there is also a   cross val   package alias in place for   backward compatibility     Third party projects with a dependency on scikit learn 0 9  should   upgrade their codebase  For instance  under Linux   MacOSX just run    make a backup first            find  name    py    xargs sed  i  s  bcross val b cross validation g     The   score func   argument of the     sklearn cross validation cross val score   function is now expected   to accept   y test   and   y predicted   as only arguments for   classification and regression tasks or   X test   for unsupervised   estimators       gamma   parameter for support vector machine algorithms is set   to   1   n features   by default  instead of   1   n samples       The   sklearn hmm   has been marked as orphaned  it will be removed   from scikit learn in version 0 11 unless someone steps up to   contribute documentation  examples and fix lurking numerical   stability issues       sklearn neighbors   has been made into a submodule   The two previously   available estimators    NeighborsClassifier   and   NeighborsRegressor     have been marked as deprecated   Their functionality has been divided   among five new classes    NearestNeighbors   for unsupervised neighbors   searches    KNeighborsClassifier       RadiusNeighborsClassifier     for supervised classification problems  and   KNeighborsRegressor         RadiusNeighborsRegressor   for supervised regression problems       sklearn ball tree BallTree   has been moved to     sklearn neighbors BallTree     Using the former will generate a warning       sklearn linear model LARS     and related classes  LassoLARS    LassoLARSCV  etc   have been renamed to     sklearn linear model Lars         All distance metrics and kernels in   
sklearn metrics pairwise   now have a Y   parameter  which by default is None  If not given  the result is the distance    or kernel similarity  between each sample in Y  If given  the result is the   pairwise distance  or kernel similarity  between samples in X to Y       sklearn metrics pairwise l1 distance   is now called   manhattan distance      and by default returns the pairwise distance  For the component wise distance    set the parameter   sum over features   to   False     Backward compatibility package aliases and other deprecated classes and functions will be removed in version 0 11    People         38 people contributed to this release     387   Vlad Niculae     320   Olivier Grisel     192   Lars Buitinck     179   Gael Varoquaux     168   Fabian Pedregosa     INRIA     Parietal Team      127   Jake Vanderplas     120   Mathieu Blondel     85   Alexandre Passos     67   Alexandre Gramfort     57   Peter Prettenhofer     56   Gilles Louppe     42  Robert Layton   38  Nelle Varoquaux   32   user  Jean Kossaifi  JeanKossaifi     30  Conrad Lee   22  Pietro Berkes   18  andy   17  David Warde Farley   12  Brian Holt   11  Robert   8  Amit Aides   8   user  Virgile Fritsch  VirgileFritsch     7   Yaroslav Halchenko     6  Salvatore Masecchia   5  Paolo Losi   4  Vincent Schut   3  Alexis Metaireau   3  Bryan Silverthorn   3   Andreas M ller     2  Minwoo Jake Lee   1  Emmanuelle Gouillart   1  Keith Goodman   1  Lucas Wiman   1   Nicolas Pinto     1  Thouis  Ray  Jones   1  Tim Sheerman Chase       changes 0 8   Version 0 8                May 11  2011    scikit learn 0 8 was released on May 2011  one month after the first  international   scikit learn coding sprint  https   github com scikit learn scikit learn wiki Upcoming events    and is marked by the inclusion of important modules   ref  hierarchical clustering    ref  cross decomposition    ref  NMF   initial support for Python 3 and by important enhancements and bug fixes    Changelog            
Several new modules where introduced during this release     New  ref  hierarchical clustering  module by Vincent Michel     Bertrand Thirion     Alexandre Gramfort   and  Gael Varoquaux        ref  kernel pca  implementation by  Mathieu Blondel       ref  labeled faces in the wild dataset  by  Olivier Grisel       New  ref  cross decomposition  module by  Edouard Duchesnay        ref  NMF  module  Vlad Niculae      Implementation of the  ref  oracle approximating shrinkage  algorithm by    user  Virgile Fritsch  VirgileFritsch   in the  ref  covariance  module    Some other modules benefited from significant improvements or cleanups      Initial support for Python 3  builds and imports cleanly    some modules are usable while others have failing tests by  Fabian Pedregosa        class   decomposition PCA  is now usable from the Pipeline object by  Olivier Grisel       Guide  ref  performance howto  by  Olivier Grisel       Fixes for memory leaks in libsvm bindings  64 bit safer BallTree by Lars Buitinck     bug and style fixing in  ref  k means  algorithm by Jan Schl ter     Add attribute converged to Gaussian Mixture Models by Vincent Schut     Implemented   transform      predict log proba   in    class   discriminant analysis LinearDiscriminantAnalysis  By  Mathieu Blondel       Refactoring in the  ref  svm  module and bug fixes by  Fabian Pedregosa       Gael Varoquaux   and Amit Aides     Refactored SGD module  removed code duplication  better variable naming     added interface for sample weight by  Peter Prettenhofer       Wrapped BallTree with Cython by Thouis  Ray  Jones     Added function  func  svm l1 min c  by Paolo Losi     Typos  doc style  etc  by  Yaroslav Halchenko     Gael Varoquaux       Olivier Grisel    Yann Malet   Nicolas Pinto    Lars Buitinck and    Fabian Pedregosa      People          People that made this release possible preceded by number of commits      159   Olivier Grisel     96   Gael Varoquaux     96   Vlad Niculae     94   
Fabian Pedregosa     36   Alexandre Gramfort     32  Paolo Losi   31   Edouard Duchesnay     30   Mathieu Blondel     25   Peter Prettenhofer     22   Nicolas Pinto     11   user  Virgile Fritsch  VirgileFritsch      7  Lars Buitinck    6  Vincent Michel    5   Bertrand Thirion      4  Thouis  Ray  Jones    4  Vincent Schut    3  Jan Schl ter    2  Julien Miotte    2   Matthieu Perrot      2  Yann Malet    2   Yaroslav Halchenko      1  Amit Aides    1   Andreas M ller      1  Feth Arezki    1  Meng Xinfan       changes 0 7   Version 0 7                March 2  2011    scikit learn 0 7 was released in March 2011  roughly three months after the 0 6 release  This release is marked by the speed improvements in existing algorithms like k Nearest Neighbors and K Means algorithm and by the inclusion of an efficient algorithm for computing the Ridge Generalized Cross Validation solution  Unlike the preceding release  no new modules where added to this release   Changelog              Performance improvements for Gaussian Mixture Model sampling  Jan   Schl ter      Implementation of efficient leave one out cross validated Ridge in    class   linear model RidgeCV    Mathieu Blondel       Better handling of collinearity and early stopping in    func  linear model lars path    Alexandre Gramfort   and  Fabian   Pedregosa        Fixes for liblinear ordering of labels and sign of coefficients    Dan Yamins  Paolo Losi   Mathieu Blondel   and  Fabian Pedregosa        Performance improvements for Nearest Neighbors algorithm in   high dimensional spaces   Fabian Pedregosa        Performance improvements for  class   cluster KMeans    Gael   Varoquaux   and  James Bergstra        Sanity checks for SVM based classes   Mathieu Blondel        Refactoring of  neighbors NeighborsClassifier  and    func  neighbors kneighbors graph   added different algorithms for   the k Nearest Neighbor Search and implemented a more stable   algorithm for finding barycenter weights  Also added some   
developer documentation for this module  see    notes neighbors    https   github com scikit learn scikit learn wiki Neighbors working notes    for more information   Fabian Pedregosa        Documentation improvements  Added  pca RandomizedPCA  and    class   linear model LogisticRegression  to the class   reference  Also added references of matrices used for clustering   and other fixes   Gael Varoquaux     Fabian Pedregosa     Mathieu   Blondel     Olivier Grisel    Virgile Fritsch   Emmanuelle   Gouillart     Binded decision function in classes that make use of liblinear     dense and sparse variants  like  class   svm LinearSVC  or    class   linear model LogisticRegression    Fabian Pedregosa        Performance and API improvements to    func  metrics pairwise euclidean distances  and to    pca RandomizedPCA    James Bergstra        Fix compilation issues under NetBSD  Kamel Ibn Hassen Derouiche     Allow input sequences of different lengths in  hmm GaussianHMM      Ron Weiss        Fix bug in affinity propagation caused by incorrect indexing  Xinfan Meng    People         People that made this release possible preceded by number of commits     85   Fabian Pedregosa     67   Mathieu Blondel     20   Alexandre Gramfort     19   James Bergstra     14  Dan Yamins   13   Olivier Grisel     12   Gael Varoquaux     4   Edouard Duchesnay     4   Ron Weiss     2  Satrajit Ghosh   2  Vincent Dubourg   1  Emmanuelle Gouillart   1  Kamel Ibn Hassen Derouiche   1  Paolo Losi   1  VirgileFritsch   1   Yaroslav Halchenko     1  Xinfan Meng       changes 0 6   Version 0 6                December 21  2010    scikit learn 0 6 was released on December 2010  It is marked by the inclusion of several new modules and a general renaming of old ones  It is also marked by the inclusion of new example  including applications to real world datasets    Changelog              New  stochastic gradient    https   scikit learn org stable modules sgd html    descent   module by Peter 
Prettenhofer  The module comes with complete   documentation and examples     Improved svm module  memory consumption has been reduced by 50     heuristic to automatically set class weights  possibility to   assign weights to samples  see    ref  sphx glr auto examples svm plot weighted samples py  for an example      New  ref  gaussian process  module by Vincent Dubourg  This module   also has great documentation and some very neat examples  See   example gaussian process plot gp regression py or   example gaussian process plot gp probabilistic classification after regression py   for a taste of what can be done     It is now possible to use liblinear s Multi class SVC  option   multi class in  class   svm LinearSVC      New features and performance improvements of text feature   extraction     Improved sparse matrix support  both in main classes     class   model selection GridSearchCV   as in modules   sklearn svm sparse and sklearn linear model sparse     Lots of cool new examples and a new section that uses real world   datasets was created  These include     ref  sphx glr auto examples applications plot face recognition py      ref  sphx glr auto examples applications plot species distribution modeling py      ref  sphx glr auto examples applications wikipedia principal eigenvector py  and   others     Faster  ref  least angle regression  algorithm  It is now 2x   faster than the R version on worst case and up to 10x times faster   on some cases     Faster coordinate descent algorithm  In particular  the full path   version of lasso   func  linear model lasso path   is more than   200x times faster than before     It is now possible to get probability estimates from a    class   linear model LogisticRegression  model     module renaming  the glm module has been renamed to linear model    the gmm module has been included into the more general mixture   model and the sgd module has been included in linear model     Lots of bug fixes and documentation 
improvements    People         People that made this release possible preceded by number of commits     207   Olivier Grisel      167  Fabian Pedregosa      97  Peter Prettenhofer      68  Alexandre Gramfort      59   Mathieu Blondel      55   Gael Varoquaux      33  Vincent Dubourg    21   Ron Weiss      9  Bertrand Thirion    3   Alexandre Passos      3  Anne Laure Fouque    2  Ronan Amicel    1  Christian Osendorfer          changes 0 5    Version 0 5                October 11  2010    Changelog            New classes                Support for sparse matrices in some classifiers of modules     svm   and   linear model    see  svm sparse SVC      svm sparse SVR    svm sparse LinearSVC      linear model sparse Lasso    linear model sparse ElasticNet      New  class   pipeline Pipeline  object to compose different estimators     Recursive Feature Elimination routines in module    ref  feature selection      Addition of various classes capable of cross validation in the   linear model module   class   linear model LassoCV    class   linear model ElasticNetCV     etc       New  more efficient LARS algorithm implementation  The Lasso   variant of the algorithm is also implemented  See    class   linear model lars path    class   linear model Lars  and    class   linear model LassoLars      New Hidden Markov Models module  see classes    hmm GaussianHMM    hmm MultinomialHMM    hmm GMMHMM      New module feature extraction  see  ref  class reference    feature extraction ref       New FastICA algorithm in module sklearn fastica   Documentation                  Improved documentation for many modules  now separating   narrative documentation from the class reference  As an example    see  documentation for the SVM module    https   scikit learn org stable modules svm html    and the   complete  class reference    https   scikit learn org stable modules classes html      Fixes          API changes  adhere variable names to PEP 8  give more   meaningful names     Fixes 
for svm module to run on a shared memory context    multiprocessing      It is again possible to generate latex  and thus PDF  from the   sphinx docs   Examples             new examples using some of the mlcomp datasets      sphx glr auto examples mlcomp sparse document classification py    since removed  and    ref  sphx glr auto examples text plot document classification 20newsgroups py     Many more examples   See here    https   scikit learn org stable auto examples index html      the full list of examples    External dependencies                          Joblib is now a dependency of this package  although it is   shipped with  sklearn externals joblib    Removed modules                    Module ann  Artificial Neural Networks  has been removed from   the distribution  Users wanting this sort of algorithms should   take a look into pybrain   Misc         New sphinx theme for the web page    Authors          The following is a list of authors for this release  preceded by number of commits     262  Fabian Pedregosa   240  Gael Varoquaux   149  Alexandre Gramfort   116  Olivier Grisel    40  Vincent Michel    38  Ron Weiss    23  Matthieu Perrot    10  Bertrand Thirion     7  Yaroslav Halchenko     9  VirgileFritsch     6  Edouard Duchesnay     4  Mathieu Blondel     1  Ariel Rokem     1  Matthieu Brucher  Version 0 4                August 26  2010    Changelog            Major changes in this release include     Coordinate Descent algorithm  Lasso  ElasticNet  refactoring     speed improvements  roughly 100x times faster      Coordinate Descent Refactoring  and bug fixing  for consistency   with R s package GLMNET     New metrics module     New GMM module contributed by Ron Weiss     Implementation of the LARS algorithm  without Lasso variant for now      feature selection module redesign     Migration to GIT as version control system     Removal of obsolete attrselect module     Rename of private compiled extensions  added underscore      Removal of legacy 
unmaintained code     Documentation improvements  both docstring and rst      Improvement of the build system to  optionally  link with MKL    Also  provide a lite BLAS implementation in case no system wide BLAS is   found     Lots of new examples     Many  many bug fixes       Authors          The committer list for this release is the following  preceded by number of commits      143  Fabian Pedregosa   35  Alexandre Gramfort   34  Olivier Grisel   11  Gael Varoquaux    5  Yaroslav Halchenko    2  Vincent Michel    1  Chris Filo Gorgolewski   Earlier versions                   Earlier versions included contributions by Fred Mailhot  David Cooke  David Huard  Dave Morrill  Ed Schofield  Travis Oliphant  Pearu Peterson "}
{"questions":"scikit-learn sklearn contributors rst Version 0 17 changes0171","answers":".. include:: _contributors.rst\n\n.. currentmodule:: sklearn\n\n============\nVersion 0.17\n============\n\n.. _changes_0_17_1:\n\nVersion 0.17.1\n==============\n\n**February 18, 2016**\n\nChangelog\n---------\n\nBug fixes\n.........\n\n- Upgrade vendored joblib to version 0.9.4, which fixes an important bug in\n  ``joblib.Parallel`` that can silently yield wrong results when working\n  on datasets larger than 1MB:\n  https:\/\/github.com\/joblib\/joblib\/blob\/0.9.4\/CHANGES.rst\n\n- Fixed reading of Bunch pickles generated with scikit-learn\n  version <= 0.16. This can affect users who have already\n  downloaded a dataset with scikit-learn 0.16 and are loading it\n  with scikit-learn 0.17. See :issue:`6196` for\n  how this affected :func:`datasets.fetch_20newsgroups`. By `Loic\n  Esteve`_.\n\n- Fixed a bug that prevented using ROC AUC score to perform grid search on\n  several CPUs \/ cores on large arrays. See :issue:`6147`.\n  By `Olivier Grisel`_.\n\n- Fixed a bug that prevented properly setting the ``presort`` parameter\n  in :class:`ensemble.GradientBoostingRegressor`. See :issue:`5857`.\n  By Andrew McCulloh.\n\n- Fixed a joblib error when evaluating the perplexity of a\n  :class:`decomposition.LatentDirichletAllocation` model. See :issue:`6258`.\n  By Chyi-Kwei Yau.\n\n\n.. _changes_0_17:\n\nVersion 0.17\n============\n\n**November 5, 2015**\n\nChangelog\n---------\n\nNew features\n............\n\n- All the Scaler classes but :class:`preprocessing.RobustScaler` can be fitted online by\n  calling `partial_fit`. By :user:`Giorgio Patrini <giorgiop>`.\n\n- The new class :class:`ensemble.VotingClassifier` implements a\n  \"majority rule\" \/ \"soft voting\" ensemble classifier to combine\n  estimators for classification. 
By `Sebastian Raschka`_.\n\n- The new class :class:`preprocessing.RobustScaler` provides an\n  alternative to :class:`preprocessing.StandardScaler` for feature-wise\n  centering and range normalization that is robust to outliers.\n  By :user:`Thomas Unterthiner <untom>`.\n\n- The new class :class:`preprocessing.MaxAbsScaler` provides an\n  alternative to :class:`preprocessing.MinMaxScaler` for feature-wise\n  range normalization when the data is already centered or sparse.\n  By :user:`Thomas Unterthiner <untom>`.\n\n- The new class :class:`preprocessing.FunctionTransformer` turns a Python\n  function into a ``Pipeline``-compatible transformer object.\n  By Joe Jevnik.\n\n- The new classes `cross_validation.LabelKFold` and\n  `cross_validation.LabelShuffleSplit` generate train-test folds,\n  respectively similar to `cross_validation.KFold` and\n  `cross_validation.ShuffleSplit`, except that the folds are\n  conditioned on a label array. By `Brian McFee`_, :user:`Jean\n  Kossaifi <JeanKossaifi>` and `Gilles Louppe`_.\n\n- :class:`decomposition.LatentDirichletAllocation` implements the Latent\n  Dirichlet Allocation topic model with online variational\n  inference. By :user:`Chyi-Kwei Yau <chyikwei>`, with code based on an implementation\n  by Matt Hoffman. (:issue:`3659`)\n\n- The new solver ``sag`` implements a Stochastic Average Gradient descent\n  and is available in both :class:`linear_model.LogisticRegression` and\n  :class:`linear_model.Ridge`. This solver is very efficient for large\n  datasets. By :user:`Danny Sullivan <dsullivan7>` and `Tom Dupre la Tour`_.\n  (:issue:`4738`)\n\n- The new solver ``cd`` implements a Coordinate Descent in\n  :class:`decomposition.NMF`. The previous solver, based on Projected Gradient, is\n  still available by setting the new parameter ``solver`` to ``pg``, but it is\n  deprecated and will be removed in 0.19, along with\n  `decomposition.ProjectedGradientNMF` and the parameters ``sparseness``,\n  ``eta``, ``beta`` and ``nls_max_iter``. 
New parameters ``alpha`` and\n  ``l1_ratio`` control L1 and L2 regularization, and ``shuffle`` adds a\n  shuffling step in the ``cd`` solver.\n  By `Tom Dupre la Tour`_ and `Mathieu Blondel`_.\n\nEnhancements\n............\n- :class:`manifold.TSNE` now supports approximate optimization via the\n  Barnes-Hut method, leading to much faster fitting. By Christopher Erick Moody.\n  (:issue:`4025`)\n\n- :class:`cluster.MeanShift` now supports parallel execution,\n  as implemented in the ``mean_shift`` function. By :user:`Martino\n  Sorbaro <martinosorb>`.\n\n- :class:`naive_bayes.GaussianNB` now supports fitting with ``sample_weight``.\n  By `Jan Hendrik Metzen`_.\n\n- :class:`dummy.DummyClassifier` now supports a prior fitting strategy.\n  By `Arnaud Joly`_.\n\n- Added a ``fit_predict`` method for `mixture.GMM` and subclasses.\n  By :user:`Cory Lorenz <clorenz7>`.\n\n- Added the :func:`metrics.label_ranking_loss` metric.\n  By `Arnaud Joly`_.\n\n- Added the :func:`metrics.cohen_kappa_score` metric.\n\n- Added a ``warm_start`` constructor parameter to the bagging ensemble\n  models to increase the size of the ensemble. By :user:`Tim Head <betatim>`.\n\n- Added option to use multi-output regression metrics without averaging.\n  By Konstantin Shmelkov and :user:`Michael Eickenberg<eickenberg>`.\n\n- Added ``stratify`` option to `cross_validation.train_test_split`\n  for stratified splitting. By Miroslav Batchkarov.\n\n- The :func:`tree.export_graphviz` function now supports aesthetic\n  improvements for :class:`tree.DecisionTreeClassifier` and\n  :class:`tree.DecisionTreeRegressor`, including options for coloring nodes\n  by their majority class or impurity, showing variable names, and using\n  node proportions instead of raw sample counts. 
By `Trevor Stephens`_.\n\n- Improved speed of ``newton-cg`` solver in\n  :class:`linear_model.LogisticRegression`, by avoiding loss computation.\n  By `Mathieu Blondel`_ and `Tom Dupre la Tour`_.\n\n- The ``class_weight=\"auto\"`` heuristic in classifiers supporting\n  ``class_weight`` was deprecated and replaced by the ``class_weight=\"balanced\"``\n  option, which has a simpler formula and interpretation.\n  By `Hanna Wallach`_ and `Andreas M\u00fcller`_.\n\n- Add ``class_weight`` parameter to automatically weight samples by class\n  frequency for :class:`linear_model.PassiveAggressiveClassifier`. By\n  `Trevor Stephens`_.\n\n- Added backlinks from the API reference pages to the user guide. By\n  `Andreas M\u00fcller`_.\n\n- The ``labels`` parameter to :func:`sklearn.metrics.f1_score`,\n  :func:`sklearn.metrics.fbeta_score`,\n  :func:`sklearn.metrics.recall_score` and\n  :func:`sklearn.metrics.precision_score` has been extended.\n  It is now possible to ignore one or more labels, such as where\n  a multiclass problem has a majority class to ignore. By `Joel Nothman`_.\n\n- Add ``sample_weight`` support to :class:`linear_model.RidgeClassifier`.\n  By `Trevor Stephens`_.\n\n- Provide an option for sparse output from\n  :func:`sklearn.metrics.pairwise.cosine_similarity`. By\n  :user:`Jaidev Deshpande <jaidevd>`.\n\n- Add :func:`preprocessing.minmax_scale` to provide a function interface for\n  :class:`preprocessing.MinMaxScaler`. By :user:`Thomas Unterthiner <untom>`.\n\n- ``dump_svmlight_file`` now handles multi-label datasets.\n  By Chih-Wei Chang.\n\n- RCV1 dataset loader (:func:`sklearn.datasets.fetch_rcv1`).\n  By `Tom Dupre la Tour`_.\n\n- The \"Wisconsin Breast Cancer\" classical two-class classification dataset\n  is now included in scikit-learn, available with\n  :func:`datasets.load_breast_cancer`.\n\n- Upgraded to joblib 0.9.3 to benefit from the new automatic batching of\n  short tasks. 
This makes it possible for scikit-learn to benefit from\n  parallelism when many very short tasks are executed in parallel, for\n  instance by the `grid_search.GridSearchCV` meta-estimator\n  with ``n_jobs > 1`` used with a large grid of parameters on a small\n  dataset. By `Vlad Niculae`_, `Olivier Grisel`_ and `Loic Esteve`_.\n\n- For more details about changes in joblib 0.9.3 see the release notes:\n  https:\/\/github.com\/joblib\/joblib\/blob\/master\/CHANGES.rst#release-093\n\n- Improved speed (3 times per iteration) of\n  `decomposition.DictLearning` with coordinate descent method\n  from :class:`linear_model.Lasso`. By :user:`Arthur Mensch <arthurmensch>`.\n\n- Parallel processing (threaded) for queries of nearest neighbors\n  (using the ball-tree) by Nikolay Mayorov.\n\n- Allow :func:`datasets.make_multilabel_classification` to output\n  a sparse ``y``. By Kashif Rasul.\n\n- :class:`cluster.DBSCAN` now accepts a sparse matrix of precomputed\n  distances, allowing memory-efficient distance precomputation. By\n  `Joel Nothman`_.\n\n- :class:`tree.DecisionTreeClassifier` now exposes an ``apply`` method\n  for retrieving the leaf indices samples are predicted as. By\n  :user:`Daniel Galvez <galv>` and `Gilles Louppe`_.\n\n- Speed up decision tree regressors, random forest regressors, extra trees\n  regressors and gradient boosting estimators by computing a proxy\n  of the impurity improvement during the tree growth. The proxy quantity is\n  such that the split that maximizes this value also maximizes the impurity\n  improvement. By `Arnaud Joly`_, :user:`Jacob Schreiber <jmschrei>`\n  and `Gilles Louppe`_.\n\n- Speed up tree based methods by reducing the number of computations needed\n  when computing the impurity measure taking into account linear\n  relationship of the computed statistics. The effect is particularly\n  visible with extra trees and on datasets with categorical or sparse\n  features. 
By `Arnaud Joly`_.\n\n- :class:`ensemble.GradientBoostingRegressor` and\n  :class:`ensemble.GradientBoostingClassifier` now expose an ``apply``\n  method for retrieving the leaf indices each sample ends up in, for\n  each tree in the ensemble. By :user:`Jacob Schreiber <jmschrei>`.\n\n- Add ``sample_weight`` support to :class:`linear_model.LinearRegression`.\n  By Sonny Hu. (:issue:`4881`)\n\n- Add ``n_iter_without_progress`` to :class:`manifold.TSNE` to control\n  the stopping criterion. By Santi Villalba. (:issue:`5186`)\n\n- Added optional parameter ``random_state`` in :class:`linear_model.Ridge`,\n  to set the seed of the pseudo-random generator used in the ``sag`` solver. By `Tom Dupre la Tour`_.\n\n- Added optional parameter ``warm_start`` in\n  :class:`linear_model.LogisticRegression`. If set to True, the solvers\n  ``lbfgs``, ``newton-cg`` and ``sag`` will be initialized with the\n  coefficients computed in the previous fit. By `Tom Dupre la Tour`_.\n\n- Added ``sample_weight`` support to :class:`linear_model.LogisticRegression` for\n  the ``lbfgs``, ``newton-cg``, and ``sag`` solvers. By `Valentin Stolbunov`_.\n  Support added to the ``liblinear`` solver. By `Manoj Kumar`_.\n\n- Added optional parameter ``presort`` to :class:`ensemble.GradientBoostingRegressor`\n  and :class:`ensemble.GradientBoostingClassifier`, keeping default behavior\n  the same. This allows gradient boosters to turn off presorting when building\n  deep trees or using sparse data. By :user:`Jacob Schreiber <jmschrei>`.\n\n- Altered :func:`metrics.roc_curve` to drop unnecessary thresholds by\n  default. By :user:`Graham Clenaghan <gclenaghan>`.\n\n- Added :class:`feature_selection.SelectFromModel` meta-transformer which can\n  be used along with estimators that have a `coef_` or `feature_importances_`\n  attribute to select important features of the input data. By\n  :user:`Maheshakya Wijewardena <maheshakya>`, `Joel Nothman`_ and `Manoj Kumar`_.\n\n- Added :func:`metrics.pairwise.laplacian_kernel`. 
By `Clyde Fare <https:\/\/github.com\/Clyde-fare>`_.\n\n- `covariance.GraphLasso` allows separate control of the convergence criterion\n  for the Elastic-Net subproblem via  the ``enet_tol`` parameter.\n\n- Improved verbosity in :class:`decomposition.DictionaryLearning`.\n\n- :class:`ensemble.RandomForestClassifier` and\n  :class:`ensemble.RandomForestRegressor` no longer explicitly store the\n  samples used in bagging, resulting in a much reduced memory footprint for\n  storing random forest models.\n\n- Added ``positive`` option to :class:`linear_model.Lars` and\n  :func:`linear_model.lars_path` to force coefficients to be positive.\n  (:issue:`5131`)\n\n- Added the ``X_norm_squared`` parameter to :func:`metrics.pairwise.euclidean_distances`\n  to provide precomputed squared norms for ``X``.\n\n- Added the ``fit_predict`` method to :class:`pipeline.Pipeline`.\n\n- Added the :func:`preprocessing.minmax_scale` function.\n\nBug fixes\n.........\n\n- Fixed non-determinism in :class:`dummy.DummyClassifier` with sparse\n  multi-label output. By `Andreas M\u00fcller`_.\n\n- Fixed the output shape of :class:`linear_model.RANSACRegressor` to\n  ``(n_samples, )``. By `Andreas M\u00fcller`_.\n\n- Fixed bug in `decomposition.DictLearning` when ``n_jobs < 0``. By\n  `Andreas M\u00fcller`_.\n\n- Fixed bug where `grid_search.RandomizedSearchCV` could consume a\n  lot of memory for large discrete grids. By `Joel Nothman`_.\n\n- Fixed bug in :class:`linear_model.LogisticRegressionCV` where `penalty` was ignored\n  in the final fit. By `Manoj Kumar`_.\n\n- Fixed bug in `ensemble.forest.ForestClassifier` while computing\n  oob_score and X is a sparse.csc_matrix. By :user:`Ankur Ankan <ankurankan>`.\n\n- All regressors now consistently handle and warn when given ``y`` that is of\n  shape ``(n_samples, 1)``. 
By `Andreas M\u00fcller`_ and Henry Lin.\n  (:issue:`5431`)\n\n- Fix in :class:`cluster.KMeans` cluster reassignment for sparse input by\n  `Lars Buitinck`_.\n\n- Fixed a bug in :class:`discriminant_analysis.LinearDiscriminantAnalysis` that\n  could cause asymmetric covariance matrices when using shrinkage. By `Martin\n  Billinger`_.\n\n- Fixed `cross_validation.cross_val_predict` for estimators with\n  sparse predictions. By Buddha Prakash.\n\n- Fixed the ``predict_proba`` method of :class:`linear_model.LogisticRegression`\n  to use soft-max instead of one-vs-rest normalization. By `Manoj Kumar`_.\n  (:issue:`5182`)\n\n- Fixed the `partial_fit` method of :class:`linear_model.SGDClassifier`\n  when called with ``average=True``. By :user:`Andrew Lamb <andylamb>`.\n  (:issue:`5282`)\n\n- Dataset fetchers use different filenames under Python 2 and Python 3 to\n  avoid pickling compatibility issues. By `Olivier Grisel`_.\n  (:issue:`5355`)\n\n- Fixed a bug in :class:`naive_bayes.GaussianNB` which caused classification\n  results to depend on scale. By `Jake Vanderplas`_.\n\n- Temporarily fixed :class:`linear_model.Ridge`, which was incorrect\n  when fitting the intercept in the case of sparse data. The fix\n  automatically changes the solver to 'sag' in this case.\n  :issue:`5360` by `Tom Dupre la Tour`_.\n\n- Fixed a performance bug in `decomposition.RandomizedPCA` on data\n  with a large number of features and fewer samples. 
(:issue:`4478`)\n  By `Andreas M\u00fcller`_, `Loic Esteve`_ and :user:`Giorgio Patrini <giorgiop>`.\n\n- Fixed bug in `cross_decomposition.PLS` that yielded unstable and\n  platform-dependent output, and failed on `fit_transform`.\n  By :user:`Arthur Mensch <arthurmensch>`.\n\n- Fixes to the ``Bunch`` class used to store datasets.\n\n- Fixed `ensemble.plot_partial_dependence` ignoring the\n  ``percentiles`` parameter.\n\n- Providing a ``set`` as vocabulary in ``CountVectorizer`` no longer\n  leads to inconsistent results when pickling.\n\n- Fixed the conditions on when a precomputed Gram matrix needs to\n  be recomputed in :class:`linear_model.LinearRegression`,\n  :class:`linear_model.OrthogonalMatchingPursuit`,\n  :class:`linear_model.Lasso` and :class:`linear_model.ElasticNet`.\n\n- Fixed inconsistent memory layout in the coordinate descent solver\n  that affected `linear_model.DictionaryLearning` and\n  `covariance.GraphLasso`. (:issue:`5337`)\n  By `Olivier Grisel`_.\n\n- :class:`manifold.LocallyLinearEmbedding` no longer ignores the ``reg``\n  parameter.\n\n- Nearest Neighbor estimators with custom distance metrics can now be pickled.\n  (:issue:`4362`)\n\n- Fixed a bug in :class:`pipeline.FeatureUnion` where ``transformer_weights``\n  were not properly handled when performing grid-searches.\n\n- Fixed a bug in :class:`linear_model.LogisticRegression` and\n  :class:`linear_model.LogisticRegressionCV` when using\n  ``class_weight='balanced'`` or ``class_weight='auto'``.\n  By `Tom Dupre la Tour`_.\n\n- Fixed bug :issue:`5495` when\n  doing OVR(SVC(decision_function_shape=\"ovr\")). Fixed by\n  :user:`Elvis Dohmatob <dohmatob>`.\n\n\nAPI changes summary\n-------------------\n\n- Attributes `data_min`, `data_max` and `data_range` in\n  :class:`preprocessing.MinMaxScaler` are deprecated and won't be available\n  from 0.19. Instead, the class now exposes `data_min_`, `data_max_`\n  and `data_range_`. 
By :user:`Giorgio Patrini <giorgiop>`.\n\n- All Scaler classes now have a `scale_` attribute, the feature-wise\n  rescaling applied by their `transform` methods. The old attribute `std_`\n  in :class:`preprocessing.StandardScaler` is deprecated and superseded\n  by `scale_`; it won't be available in 0.19. By :user:`Giorgio Patrini <giorgiop>`.\n\n- :class:`svm.SVC` and :class:`svm.NuSVC` now have a ``decision_function_shape``\n  parameter to make their decision function have shape ``(n_samples, n_classes)``\n  by setting ``decision_function_shape='ovr'``. This will be the default behavior\n  starting in 0.19. By `Andreas M\u00fcller`_.\n\n- Passing 1D data arrays as input to estimators is now deprecated as it\n  caused confusion in how the array elements should be interpreted\n  as features or as samples. All data arrays are now expected\n  to be explicitly shaped ``(n_samples, n_features)``.\n  By :user:`Vighnesh Birodkar <vighneshbirodkar>`.\n\n- `lda.LDA` and `qda.QDA` have been moved to\n  :class:`discriminant_analysis.LinearDiscriminantAnalysis` and\n  :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`.\n\n- The ``store_covariance`` and ``tol`` parameters have been moved from\n  the fit method to the constructor in\n  :class:`discriminant_analysis.LinearDiscriminantAnalysis` and the\n  ``store_covariances`` and ``tol`` parameters have been moved from the\n  fit method to the constructor in\n  :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`.\n\n- Models inheriting from ``_LearntSelectorMixin`` will no longer support the\n  transform methods (i.e., RandomForests, GradientBoosting, LogisticRegression,\n  DecisionTrees, SVMs and SGD-related models). 
Wrap these models in the\n  meta-transformer :class:`feature_selection.SelectFromModel` to remove\n  features (according to `coef_` or `feature_importances_`)\n  which are below a certain threshold value instead.\n\n- :class:`cluster.KMeans` re-runs cluster assignments in case of non-convergence,\n  to ensure consistency of ``predict(X)`` and ``labels_``. By\n  :user:`Vighnesh Birodkar <vighneshbirodkar>`.\n\n- Classifier and Regressor models are now tagged as such using the\n  ``_estimator_type`` attribute.\n\n- Cross-validation iterators always provide indices into training and test set,\n  not boolean masks.\n\n- The ``decision_function`` on all regressors was deprecated and will be\n  removed in 0.19. Use ``predict`` instead.\n\n- `datasets.load_lfw_pairs` is deprecated and will be removed in 0.19.\n  Use :func:`datasets.fetch_lfw_pairs` instead.\n\n- The deprecated ``hmm`` module was removed.\n\n- The deprecated ``Bootstrap`` cross-validation iterator was removed.\n\n- The deprecated ``Ward`` and ``WardAgglomerative`` classes have been removed.\n  Use :class:`cluster.AgglomerativeClustering` instead.\n\n- `cross_validation.check_cv` is now a public function.\n\n- The property ``residues_`` of :class:`linear_model.LinearRegression` is deprecated\n  and will be removed in 0.19.\n\n- The deprecated ``n_jobs`` parameter of :class:`linear_model.LinearRegression` has been moved\n  to the constructor.\n\n- Removed the deprecated ``class_weight`` parameter from :class:`linear_model.SGDClassifier`'s ``fit``\n  method. Use the constructor parameter instead.\n\n- The deprecated support for the sequence of sequences (or list of lists) multilabel\n  format was removed. To convert to and from the supported binary\n  indicator matrix format, use\n  :class:`MultiLabelBinarizer <preprocessing.MultiLabelBinarizer>`.\n\n- The behavior of calling the ``inverse_transform`` method of ``pipeline.Pipeline`` will\n  change in 0.19. 
It will no longer reshape one-dimensional input to two-dimensional input.\n\n- The deprecated attributes ``indicator_matrix_``, ``multilabel_`` and ``classes_`` of\n  :class:`preprocessing.LabelBinarizer` were removed.\n\n- Using ``gamma=0`` in :class:`svm.SVC` and :class:`svm.SVR` to automatically set the\n  gamma to ``1. \/ n_features`` is deprecated and will be removed in 0.19.\n  Use ``gamma=\"auto\"`` instead.\n\nCode Contributors\n-----------------\nAaron Schumacher, Adithya Ganesh, akitty, Alexandre Gramfort, Alexey Grigorev,\nAli Baharev, Allen Riddell, Ando Saabas, Andreas Mueller, Andrew Lamb, Anish\nShah, Ankur Ankan, Anthony Erlinger, Ari Rouvinen, Arnaud Joly, Arnaud Rachez,\nArthur Mensch, banilo, Barmaley.exe, benjaminirving, Boyuan Deng, Brett Naul,\nBrian McFee, Buddha Prakash, Chi Zhang, Chih-Wei Chang, Christof Angermueller,\nChristoph Gohlke, Christophe Bourguignat, Christopher Erick Moody, Chyi-Kwei\nYau, Cindy Sridharan, CJ Carey, Clyde-fare, Cory Lorenz, Dan Blanchard, Daniel\nGalvez, Daniel Kronovet, Danny Sullivan, Data1010, David, David D Lowe, David\nDotson, djipey, Dmitry Spikhalskiy, Donne Martin, Dougal J. Sutherland, Dougal\nSutherland, edson duarte, Eduardo Caro, Eric Larson, Eric Martin, Erich\nSchubert, Fernando Carrillo, Frank C. 
Eckert, Frank Zalkow, Gael Varoquaux,\nGaniev Ibraim, Gilles Louppe, Giorgio Patrini, giorgiop, Graham Clenaghan,\nGryllos Prokopis, gwulfs, Henry Lin, Hsuan-Tien Lin, Immanuel Bayer, Ishank\nGulati, Jack Martin, Jacob Schreiber, Jaidev Deshpande, Jake Vanderplas, Jan\nHendrik Metzen, Jean Kossaifi, Jeffrey04, Jeremy, jfraj, Jiali Mei,\nJoe Jevnik, Joel Nothman, John Kirkham, John Wittenauer, Joseph, Joshua Loyal,\nJungkook Park, KamalakerDadi, Kashif Rasul, Keith Goodman, Kian Ho, Konstantin\nShmelkov, Kyler Brown, Lars Buitinck, Lilian Besson, Loic Esteve, Louis Tiao,\nmaheshakya, Maheshakya Wijewardena, Manoj Kumar, MarkTab marktab.net, Martin\nKu, Martin Spacek, MartinBpr, martinosorb, MaryanMorel, Masafumi Oyamada,\nMathieu Blondel, Matt Krump, Matti Lyra, Maxim Kolganov, mbillinger, mhg,\nMichael Heilman, Michael Patterson, Miroslav Batchkarov, Nelle Varoquaux,\nNicolas, Nikolay Mayorov, Olivier Grisel, Omer Katz, \u00d3scar N\u00e1jera, Pauli\nVirtanen, Peter Fischer, Peter Prettenhofer, Phil Roth, pianomania, Preston\nParry, Raghav RV, Rob Zinkov, Robert Layton, Rohan Ramanath, Saket Choudhary,\nSam Zhang, santi, saurabh.bansod, scls19fr, Sebastian Raschka, Sebastian\nSaeger, Shivan Sornarajah, SimonPL, sinhrks, Skipper Seabold, Sonny Hu, sseg,\nStephen Hoover, Steven De Gryze, Steven Seguin, Theodore Vasiloudis, Thomas\nUnterthiner, Tiago Freitas Pereira, Tian Wang, Tim Head, Timothy Hopper,\ntokoroten, Tom Dupr\u00e9 la Tour, Trevor Stephens, Valentin Stolbunov, Vighnesh\nBirodkar, Vinayak Mehta, Vincent, Vincent Michel, vstolbunov, wangz10, Wei Xue,\nYucheng Low, Yury Zhauniarovich, Zac Stewart, zhai_pro, Zichen Wang","site":"scikit-learn","answers_cleaned":"
jobs   0    By    Andreas M ller       Fixed bug where  grid search RandomizedSearchCV  could consume a   lot of memory for large discrete grids  By  Joel Nothman       Fixed bug in  class  linear model LogisticRegressionCV  where  penalty  was ignored   in the final fit  By  Manoj Kumar       Fixed bug in  ensemble forest ForestClassifier  while computing   oob score and X is a sparse csc matrix  By  user  Ankur Ankan  ankurankan       All regressors now consistently handle and warn when given   y   that is of   shape    n samples  1     By  Andreas M ller   and Henry Lin      issue  5431      Fix in  class  cluster KMeans  cluster reassignment for sparse input by    Lars Buitinck       Fixed a bug in  class  discriminant analysis LinearDiscriminantAnalysis  that   could cause asymmetric covariance matrices when using shrinkage  By  Martin   Billinger       Fixed  cross validation cross val predict  for estimators with   sparse predictions  By Buddha Prakash     Fixed the   predict proba   method of  class  linear model LogisticRegression    to use soft max instead of one vs rest normalization  By  Manoj Kumar        issue  5182      Fixed the  partial fit  method of  class  linear model SGDClassifier    when called with   average True    By  user  Andrew Lamb  andylamb        issue  5282      Dataset fetchers use different filenames under Python 2 and Python 3 to   avoid pickling compatibility issues  By  Olivier Grisel        issue  5355      Fixed a bug in  class  naive bayes GaussianNB  which caused classification   results to depend on scale  By  Jake Vanderplas       Fixed temporarily  class  linear model Ridge   which was incorrect   when fitting the intercept in the case of sparse data  The fix   automatically changes the solver to  sag  in this case     issue  5360  by  Tom Dupre la Tour       Fixed a performance bug in  decomposition RandomizedPCA  on data   with a large number of features and fewer samples    issue  4478     By  Andreas M ller     Loic 
Esteve   and  user  Giorgio Patrini  giorgiop       Fixed bug in  cross decomposition PLS  that yielded unstable and   platform dependent output  and failed on  fit transform     By  user  Arthur Mensch  arthurmensch       Fixes to the   Bunch   class used to store datasets     Fixed  ensemble plot partial dependence  ignoring the     percentiles   parameter     Providing a   set   as vocabulary in   CountVectorizer   no longer   leads to inconsistent results when pickling     Fixed the conditions on when a precomputed Gram matrix needs to   be recomputed in  class  linear model LinearRegression      class  linear model OrthogonalMatchingPursuit      class  linear model Lasso  and  class  linear model ElasticNet      Fixed inconsistent memory layout in the coordinate descent solver   that affected  linear model DictionaryLearning  and    covariance GraphLasso     issue  5337     By  Olivier Grisel        class  manifold LocallyLinearEmbedding  no longer ignores the   reg     parameter     Nearest Neighbor estimators with custom distance metrics can now be pickled      issue  4362      Fixed a bug in  class  pipeline FeatureUnion  where   transformer weights     were not properly handled when performing grid searches     Fixed a bug in  class  linear model LogisticRegression  and    class  linear model LogisticRegressionCV  when using     class weight  balanced    or   class weight  auto       By  Tom Dupre la Tour       Fixed bug  issue  5495  when   doing OVR SVC decision function shape  ovr     Fixed by    user  Elvis Dohmatob  dohmatob      API changes summary                       Attribute  data min    data max  and  data range  in    class  preprocessing MinMaxScaler  are deprecated and won t be available   from 0 19  Instead  the class now exposes  data min     data max     and  data range    By  user  Giorgio Patrini  giorgiop       All Scaler classes now have an  scale   attribute  the feature wise   rescaling applied by their  transform  methods  The old 
attribute  std     in  class  preprocessing StandardScaler  is deprecated and superseded   by  scale    it won t be available in 0 19  By  user  Giorgio Patrini  giorgiop        class  svm SVC  and  class  svm NuSVC  now have an   decision function shape     parameter to make their decision function of shape    n samples  n classes      by setting   decision function shape  ovr     This will be the default behavior   starting in 0 19  By  Andreas M ller       Passing 1D data arrays as input to estimators is now deprecated as it   caused confusion in how the array elements should be interpreted   as features or as samples  All data arrays are now expected   to be explicitly shaped    n samples  n features       By  user  Vighnesh Birodkar  vighneshbirodkar        lda LDA  and  qda QDA  have been moved to    class  discriminant analysis LinearDiscriminantAnalysis  and    class  discriminant analysis QuadraticDiscriminantAnalysis      The   store covariance   and   tol   parameters have been moved from   the fit method to the constructor in    class  discriminant analysis LinearDiscriminantAnalysis  and the     store covariances   and   tol   parameters have been moved from the   fit method to the constructor in    class  discriminant analysis QuadraticDiscriminantAnalysis      Models inheriting from    LearntSelectorMixin   will no longer support the   transform methods   i e   RandomForests  GradientBoosting  LogisticRegression    DecisionTrees  SVMs and SGD related models   Wrap these models around the   metatransfomer  class  feature selection SelectFromModel  to remove   features  according to  coefs   or  feature importances      which are below a certain threshold value instead      class  cluster KMeans  re runs cluster assignments in case of non convergence    to ensure consistency of   predict X    and   labels     By    user  Vighnesh Birodkar  vighneshbirodkar       Classifier and Regressor models are now tagged as such using the      estimator type   
attribute     Cross validation iterators always provide indices into training and test set    not boolean masks     The   decision function   on all regressors was deprecated and will be   removed in 0 19   Use   predict   instead      datasets load lfw pairs  is deprecated and will be removed in 0 19    Use  func  datasets fetch lfw pairs  instead     The deprecated   hmm   module was removed     The deprecated   Bootstrap   cross validation iterator was removed     The deprecated   Ward   and   WardAgglomerative   classes have been removed    Use  class  cluster AgglomerativeClustering  instead      cross validation check cv  is now a public function     The property   residues    of  class  linear model LinearRegression  is deprecated   and will be removed in 0 19     The deprecated   n jobs   parameter of  class  linear model LinearRegression  has been moved   to the constructor     Removed deprecated   class weight   parameter from  class  linear model SGDClassifier  s   fit     method  Use the construction parameter instead     The deprecated support for the sequence of sequences  or list of lists  multilabel   format was removed  To convert to and from the supported binary   indicator matrix format  use    class  MultiLabelBinarizer  preprocessing MultiLabelBinarizer       The behavior of calling the   inverse transform   method of   Pipeline pipeline   will   change in 0 19  It will no longer reshape one dimensional input to two dimensional input     The deprecated attributes   indicator matrix       multilabel    and   classes    of    class  preprocessing LabelBinarizer  were removed     Using   gamma 0   in  class  svm SVC  and  class  svm SVR  to automatically set the   gamma to   1    n features   is deprecated and will be removed in 0 19    Use   gamma  auto    instead   Code Contributors                   Aaron Schumacher  Adithya Ganesh  akitty  Alexandre Gramfort  Alexey Grigorev  Ali Baharev  Allen Riddell  Ando Saabas  Andreas Mueller  Andrew 
Lamb  Anish Shah  Ankur Ankan  Anthony Erlinger  Ari Rouvinen  Arnaud Joly  Arnaud Rachez  Arthur Mensch  banilo  Barmaley exe  benjaminirving  Boyuan Deng  Brett Naul  Brian McFee  Buddha Prakash  Chi Zhang  Chih Wei Chang  Christof Angermueller  Christoph Gohlke  Christophe Bourguignat  Christopher Erick Moody  Chyi Kwei Yau  Cindy Sridharan  CJ Carey  Clyde fare  Cory Lorenz  Dan Blanchard  Daniel Galvez  Daniel Kronovet  Danny Sullivan  Data1010  David  David D Lowe  David Dotson  djipey  Dmitry Spikhalskiy  Donne Martin  Dougal J  Sutherland  Dougal Sutherland  edson duarte  Eduardo Caro  Eric Larson  Eric Martin  Erich Schubert  Fernando Carrillo  Frank C  Eckert  Frank Zalkow  Gael Varoquaux  Ganiev Ibraim  Gilles Louppe  Giorgio Patrini  giorgiop  Graham Clenaghan  Gryllos Prokopis  gwulfs  Henry Lin  Hsuan Tien Lin  Immanuel Bayer  Ishank Gulati  Jack Martin  Jacob Schreiber  Jaidev Deshpande  Jake Vanderplas  Jan Hendrik Metzen  Jean Kossaifi  Jeffrey04  Jeremy  jfraj  Jiali Mei  Joe Jevnik  Joel Nothman  John Kirkham  John Wittenauer  Joseph  Joshua Loyal  Jungkook Park  KamalakerDadi  Kashif Rasul  Keith Goodman  Kian Ho  Konstantin Shmelkov  Kyler Brown  Lars Buitinck  Lilian Besson  Loic Esteve  Louis Tiao  maheshakya  Maheshakya Wijewardena  Manoj Kumar  MarkTab marktab net  Martin Ku  Martin Spacek  MartinBpr  martinosorb  MaryanMorel  Masafumi Oyamada  Mathieu Blondel  Matt Krump  Matti Lyra  Maxim Kolganov  mbillinger  mhg  Michael Heilman  Michael Patterson  Miroslav Batchkarov  Nelle Varoquaux  Nicolas  Nikolay Mayorov  Olivier Grisel  Omer Katz   scar N jera  Pauli Virtanen  Peter Fischer  Peter Prettenhofer  Phil Roth  pianomania  Preston Parry  Raghav RV  Rob Zinkov  Robert Layton  Rohan Ramanath  Saket Choudhary  Sam Zhang  santi  saurabh bansod  scls19fr  Sebastian Raschka  Sebastian Saeger  Shivan Sornarajah  SimonPL  sinhrks  Skipper Seabold  Sonny Hu  sseg  Stephen Hoover  Steven De Gryze  Steven Seguin  Theodore Vasiloudis  Thomas 
Unterthiner  Tiago Freitas Pereira  Tian Wang  Tim Head  Timothy Hopper  tokoroten  Tom Dupr  la Tour  Trevor Stephens  Valentin Stolbunov  Vighnesh Birodkar  Vinayak Mehta  Vincent  Vincent Michel  vstolbunov  wangz10  Wei Xue  Yucheng Low  Yury Zhauniarovich  Zac Stewart  zhai pro  Zichen Wang"}
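Two additions noted in the changelog above, :func:`preprocessing.minmax_scale` and :class:`feature_selection.SelectFromModel`, can be illustrated together. This is an editorial sketch, not part of the release notes; it assumes scikit-learn >= 0.17 is installed, and the dataset and threshold choices are arbitrary examples.

```python
# Illustrative sketch (not from the changelog): the minmax_scale function
# interface and the SelectFromModel meta-transformer, both added in 0.17.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.preprocessing import minmax_scale

# preprocessing.minmax_scale: function interface for MinMaxScaler;
# rescales each column to the [0, 1] range.
X = np.array([[1.0, -2.0], [3.0, 0.0], [5.0, 2.0]])
X_scaled = minmax_scale(X)
assert X_scaled.min() == 0.0 and X_scaled.max() == 1.0

# feature_selection.SelectFromModel: wrap any estimator exposing coef_
# or feature_importances_ and keep only features above a threshold
# (here the mean feature importance of a random forest).
Xc, yc = make_classification(n_samples=200, n_features=20,
                             n_informative=5, random_state=0)
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=50, random_state=0),
    threshold="mean")
Xr = selector.fit_transform(Xc, yc)
print(Xr.shape)  # (200, k) with k < 20 features retained
```

The same `SelectFromModel` wrapper is the replacement the API-changes section recommends for the removed `transform` support of models inheriting from `_LearntSelectorMixin`.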
{"questions":"scikit-learn orphan Who is using scikit learn Testimonials testimonials","answers":":orphan:\n\n.. title:: Testimonials\n\n.. _testimonials:\n\n==========================\nWho is using scikit-learn?\n==========================\n\n`J.P.Morgan <https:\/\/www.jpmorgan.com>`_\n----------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    Scikit-learn is an indispensable part of the Python machine learning\n    toolkit at JPMorgan. It is very widely used across all parts of the bank\n    for classification, predictive analytics, and very many other machine\n    learning tasks. Its straightforward API, its breadth of algorithms, and\n    the quality of its documentation combine to make scikit-learn\n    simultaneously very approachable and very powerful.\n\n    .. rst-class:: annotation\n\n      Stephen Simmons, VP, Athena Research, JPMorgan\n\n  .. div:: image-box\n\n    .. image:: images\/jpmorgan.png\n      :target: https:\/\/www.jpmorgan.com\n\n\n`Spotify <https:\/\/www.spotify.com>`_\n------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    Scikit-learn provides a toolbox with solid implementations of a bunch of\n    state-of-the-art models and makes it easy to plug them into existing\n    applications. We've been using it quite a lot for music recommendations at\n    Spotify and I think it's the most well-designed ML package I've seen so far.\n\n    .. rst-class:: annotation\n\n      Erik Bernhardsson, Engineering Manager Music Discovery & Machine Learning, Spotify\n\n  .. div:: image-box\n\n    .. image:: images\/spotify.png\n      :target: https:\/\/www.spotify.com\n\n\n`Inria <https:\/\/www.inria.fr\/>`_\n--------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. 
div:: text-box\n\n    At INRIA, we use scikit-learn to support leading-edge basic research in many\n    teams: `Parietal <https:\/\/team.inria.fr\/parietal\/>`_ for neuroimaging, `Lear\n    <https:\/\/lear.inrialpes.fr\/>`_ for computer vision, `Visages\n    <https:\/\/team.inria.fr\/visages\/>`_ for medical image analysis, `Privatics\n    <https:\/\/team.inria.fr\/privatics>`_ for security. The project is a fantastic\n    tool to address difficult applications of machine learning in an academic\n    environment as it is performant and versatile, but all easy-to-use and well\n    documented, which makes it well suited to grad students.\n\n    .. rst-class:: annotation\n\n      Ga\u00ebl Varoquaux, research at Parietal\n\n  .. div:: image-box\n\n    .. image:: images\/inria.png\n      :target: https:\/\/www.inria.fr\/\n\n\n`betaworks <https:\/\/betaworks.com>`_\n------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    Betaworks is a NYC-based startup studio that builds new products, grows\n    companies, and invests in others. Over the past 8 years we've launched a\n    handful of social data analytics-driven services, such as Bitly, Chartbeat,\n    digg and Scale Model. Consistently the betaworks data science team uses\n    Scikit-learn for a variety of tasks. From exploratory analysis, to product\n    development, it is an essential part of our toolkit. Recent uses are included\n    in `digg's new video recommender system\n    <https:\/\/medium.com\/i-data\/the-digg-video-recommender-2f9ade7c4ba3>`_,\n    and Poncho's `dynamic heuristic subspace clustering\n    <https:\/\/medium.com\/@DiggData\/scaling-poncho-using-data-ca24569d56fd>`_.\n\n    .. rst-class:: annotation\n\n      Gilad Lotan, Chief Data Scientist\n\n  .. div:: image-box\n\n    .. image:: images\/betaworks.png\n      :target: https:\/\/betaworks.com\n\n\n`Hugging Face <https:\/\/huggingface.co>`_\n----------------------------------------\n\n.. 
div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    At Hugging Face we're using NLP and probabilistic models to generate\n    conversational Artificial intelligences that are fun to chat with. Despite using\n    deep neural nets for `a few <https:\/\/medium.com\/huggingface\/understanding-emotions-from-keras-to-pytorch-3ccb61d5a983>`_\n    of our `NLP tasks <https:\/\/huggingface.co\/coref\/>`_, scikit-learn is still the\n    bread-and-butter of our daily machine learning routine. The ease of use and\n    predictability of the interface, as well as the straightforward mathematical\n    explanations that are here when you need them, is the killer feature. We use a\n    variety of scikit-learn models in production and they are also operationally very\n    pleasant to work with.\n\n    .. rst-class:: annotation\n\n      Julien Chaumond, Chief Technology Officer\n\n  .. div:: image-box\n\n    .. image:: images\/huggingface.png\n      :target: https:\/\/huggingface.co\n\n\n`Evernote <https:\/\/evernote.com>`_\n----------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    Building a classifier is typically an iterative process of exploring\n    the data, selecting the features (the attributes of the data believed\n    to be predictive in some way), training the models, and finally\n    evaluating them. For many of these tasks, we relied on the excellent\n    scikit-learn package for Python.\n\n    `Read more <http:\/\/blog.evernote.com\/tech\/2013\/01\/22\/stay-classified\/>`_\n\n    .. rst-class:: annotation\n\n      Mark Ayzenshtat, VP, Augmented Intelligence\n\n  .. div:: image-box\n\n    .. image:: images\/evernote.png\n      :target: https:\/\/evernote.com\n\n\n`T\u00e9l\u00e9com ParisTech <https:\/\/www.telecom-paristech.fr\/>`_\n--------------------------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. 
div:: text-box\n\n    At Telecom ParisTech, scikit-learn is used for hands-on sessions and home\n    assignments in introductory and advanced machine learning courses. The classes\n    are for undergrads and masters students. The great benefit of scikit-learn is\n    its fast learning curve that allows students to quickly start working on\n    interesting and motivating problems.\n\n    .. rst-class:: annotation\n\n      Alexandre Gramfort, Assistant Professor\n\n  .. div:: image-box\n\n    .. image:: images\/telecomparistech.jpg\n      :target: https:\/\/www.telecom-paristech.fr\/\n\n\n`Booking.com <https:\/\/www.booking.com>`_\n----------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    At Booking.com, we use machine learning algorithms for many different\n    applications, such as recommending hotels and destinations to our customers,\n    detecting fraudulent reservations, or scheduling our customer service agents.\n    Scikit-learn is one of the tools we use when implementing standard algorithms\n    for prediction tasks. Its API and documentations are excellent and make it easy\n    to use. The scikit-learn developers do a great job of incorporating state of\n    the art implementations and new algorithms into the package. Thus, scikit-learn\n    provides convenient access to a wide spectrum of algorithms, and allows us to\n    readily find the right tool for the right job.\n\n    .. rst-class:: annotation\n\n      Melanie Mueller, Data Scientist\n\n  .. div:: image-box\n\n    .. image:: images\/booking.png\n      :target: https:\/\/www.booking.com\n\n\n`AWeber <https:\/\/www.aweber.com\/>`_\n-----------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    The scikit-learn toolkit is indispensable for the Data Analysis and Management\n    team at AWeber.  It allows us to do AWesome stuff we would not otherwise have\n    the time or resources to accomplish. 
The documentation is excellent, allowing\n    new engineers to quickly evaluate and apply many different algorithms to our\n    data. The text feature extraction utilities are useful when working with the\n    large volume of email content we have at AWeber. The RandomizedPCA\n    implementation, along with Pipelining and FeatureUnions, allows us to develop\n    complex machine learning algorithms efficiently and reliably.\n\n    Anyone interested in learning more about how AWeber deploys scikit-learn in a\n    production environment should check out talks from PyData Boston by AWeber's\n    Michael Becker available at https:\/\/github.com\/mdbecker\/pydata_2013.\n\n    .. rst-class:: annotation\n\n      Michael Becker, Software Engineer, Data Analysis and Management Ninjas\n\n  .. div:: image-box\n\n    .. image:: images\/aweber.png\n      :target: https:\/\/www.aweber.com\n\n\n`Yhat <https:\/\/www.yhat.com>`_\n------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    The combination of consistent APIs, thorough documentation, and top notch\n    implementation make scikit-learn our favorite machine learning package in\n    Python. scikit-learn makes doing advanced analysis in Python accessible to\n    anyone. At Yhat, we make it easy to integrate these models into your production\n    applications. Thus eliminating the unnecessary dev time encountered\n    productionizing analytical work.\n\n    .. rst-class:: annotation\n\n      Greg Lamp, Co-founder\n\n  .. div:: image-box\n\n    .. image:: images\/yhat.png\n      :target: https:\/\/www.yhat.com\n\n\n`Rangespan <http:\/\/www.rangespan.com>`_\n---------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    The Python scikit-learn toolkit is a core tool in the data science\n    group at Rangespan. 
Its large collection of well documented models and\n    algorithms allow our team of data scientists to prototype fast and\n    quickly iterate to find the right solution to our learning problems.\n    We find that scikit-learn is not only the right tool for prototyping,\n    but its careful and well tested implementation give us the confidence\n    to run scikit-learn models in production.\n\n    .. rst-class:: annotation\n\n      Jurgen Van Gael, Data Science Director\n\n  .. div:: image-box\n\n    .. image:: images\/rangespan.png\n      :target: http:\/\/www.rangespan.com\n\n\n`Birchbox <https:\/\/www.birchbox.com>`_\n--------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    At Birchbox, we face a range of machine learning problems typical to\n    E-commerce: product recommendation, user clustering, inventory prediction,\n    trends detection, etc. Scikit-learn lets us experiment with many models,\n    especially in the exploration phase of a new project: the data can be passed\n    around in a consistent way; models are easy to save and reuse; updates keep us\n    informed of new developments from the pattern discovery research community.\n    Scikit-learn is an important tool for our team, built the right way in the\n    right language.\n\n    .. rst-class:: annotation\n\n      Thierry Bertin-Mahieux, Data Scientist\n\n  .. div:: image-box\n\n    .. image:: images\/birchbox.jpg\n      :target: https:\/\/www.birchbox.com\n\n\n`Bestofmedia Group <http:\/\/www.bestofmedia.com>`_\n-------------------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    Scikit-learn is our #1 toolkit for all things machine learning\n    at Bestofmedia. We use it for a variety of tasks (e.g. 
spam fighting,\n    ad click prediction, various ranking models) thanks to the varied,\n    state-of-the-art algorithm implementations packaged into it.\n    In the lab it accelerates prototyping of complex pipelines. In\n    production I can say it has proven to be robust and efficient enough\n    to be deployed for business critical components.\n\n    .. rst-class:: annotation\n\n      Eustache Diemert, Lead Scientist\n\n  .. div:: image-box\n\n    .. image:: images\/bestofmedia-logo.png\n      :target: http:\/\/www.bestofmedia.com\n\n\n`Change.org <https:\/\/www.change.org>`_\n--------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    At change.org we automate the use of scikit-learn's RandomForestClassifier\n    in our production systems to drive email targeting that reaches millions\n    of users across the world each week. In the lab, scikit-learn's ease-of-use,\n    performance, and overall variety of algorithms implemented has proved invaluable\n    in giving us a single reliable source to turn to for our machine-learning needs.\n\n    .. rst-class:: annotation\n\n      Vijay Ramesh, Software Engineer in Data\/science at Change.org\n\n  .. div:: image-box\n\n    .. image:: images\/change-logo.png\n      :target: https:\/\/www.change.org\n\n\n`PHIMECA Engineering <https:\/\/www.phimeca.com\/?lang=en>`_\n---------------------------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    At PHIMECA Engineering, we use scikit-learn estimators as surrogates for\n    expensive-to-evaluate numerical models (mostly but not exclusively\n    finite-element mechanical models) for speeding up the intensive post-processing\n    operations involved in our simulation-based decision making framework.\n    Scikit-learn's fit\/predict API together with its efficient cross-validation\n    tools considerably eases the task of selecting the best-fit estimator. 
We are\n    also using scikit-learn for illustrating concepts in our training sessions.\n    Trainees are always impressed by the ease-of-use of scikit-learn despite the\n    apparent theoretical complexity of machine learning.\n\n    .. rst-class:: annotation\n\n      Vincent Dubourg, PHIMECA Engineering, PhD Engineer\n\n  .. div:: image-box\n\n    .. image:: images\/phimeca.png\n      :target: https:\/\/www.phimeca.com\/?lang=en\n\n\n`HowAboutWe <http:\/\/www.howaboutwe.com\/>`_\n------------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    At HowAboutWe, scikit-learn lets us implement a wide array of machine learning\n    techniques in analysis and in production, despite having a small team.  We use\n    scikit-learn's classification algorithms to predict user behavior, enabling us\n    to (for example) estimate the value of leads from a given traffic source early\n    in the lead's tenure on our site. Also, our users' profiles consist of\n    primarily unstructured data (answers to open-ended questions), so we use\n    scikit-learn's feature extraction and dimensionality reduction tools to\n    translate these unstructured data into inputs for our matchmaking system.\n\n    .. rst-class:: annotation\n\n      Daniel Weitzenfeld, Senior Data Scientist at HowAboutWe\n\n  .. div:: image-box\n\n    .. image:: images\/howaboutwe.png\n      :target: http:\/\/www.howaboutwe.com\/\n\n\n`PeerIndex <https:\/\/www.brandwatch.com\/peerindex-and-brandwatch>`_\n------------------------------------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    At PeerIndex we use scientific methodology to build the Influence Graph - a\n    unique dataset that allows us to identify who's really influential and in which\n    context. To do this, we have to tackle a range of machine learning and\n    predictive modeling problems. 
Scikit-learn has emerged as our primary tool for\n    developing prototypes and making quick progress. From predicting missing data\n    and classifying tweets to clustering communities of social media users, scikit-\n    learn proved useful in a variety of applications. Its very intuitive interface\n    and excellent compatibility with other python tools makes it an indispensable\n    tool in our daily research efforts.\n\n    .. rst-class:: annotation\n\n      Ferenc Huszar, Senior Data Scientist at Peerindex\n\n  .. div:: image-box\n\n    .. image:: images\/peerindex.png\n      :target: https:\/\/www.brandwatch.com\/peerindex-and-brandwatch\n\n\n`DataRobot <https:\/\/www.datarobot.com>`_\n----------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    DataRobot is building next generation predictive analytics software to make data\n    scientists more productive, and scikit-learn is an integral part of our system. The\n    variety of machine learning techniques in combination with the solid implementations\n    that scikit-learn offers makes it a one-stop-shopping library for machine learning\n    in Python. Moreover, its consistent API, well-tested code and permissive licensing\n    allow us to use it in a production environment. Scikit-learn has literally saved us\n    years of work we would have had to do ourselves to bring our product to market.\n\n    .. rst-class:: annotation\n\n      Jeremy Achin, CEO & Co-founder DataRobot Inc.\n\n  .. div:: image-box\n\n    .. image:: images\/datarobot.png\n      :target: https:\/\/www.datarobot.com\n\n\n`OkCupid <https:\/\/www.okcupid.com\/>`_\n-------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    We're using scikit-learn at OkCupid to evaluate and improve our matchmaking\n    system. 
The range of features it has, especially preprocessing utilities, means\n    we can use it for a wide variety of projects, and it's performant enough to\n    handle the volume of data that we need to sort through. The documentation is\n    really thorough, as well, which makes the library quite easy to use.\n\n    .. rst-class:: annotation\n\n      David Koh - Senior Data Scientist at OkCupid\n\n  .. div:: image-box\n\n    .. image:: images\/okcupid.png\n      :target: https:\/\/www.okcupid.com\n\n\n`Lovely <https:\/\/livelovely.com\/>`_\n-----------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    At Lovely, we strive to deliver the best apartment marketplace, with respect to\n    our users and our listings. From understanding user behavior, improving data\n    quality, and detecting fraud, scikit-learn is a regular tool for gathering\n    insights, predictive modeling and improving our product. The easy-to-read\n    documentation and intuitive architecture of the API makes machine learning both\n    explorable and accessible to a wide range of python developers. I'm constantly\n    recommending that more developers and scientists try scikit-learn.\n\n    .. rst-class:: annotation\n\n      Simon Frid - Data Scientist, Lead at Lovely\n\n  .. div:: image-box\n\n    .. image:: images\/lovely.png\n      :target: https:\/\/livelovely.com\n\n\n`Data Publica <http:\/\/www.data-publica.com\/>`_\n----------------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    Data Publica builds a new predictive sales tool for commercial and marketing teams\n    called C-Radar. We extensively use scikit-learn to build segmentations of customers\n    through clustering, and to predict future customers based on past partnerships\n    success or failure. 
We also categorize companies using their website communication\n    thanks to scikit-learn and its machine learning algorithm implementations.\n    Eventually, machine learning makes it possible to detect weak signals that\n    traditional tools cannot see. All these complex tasks are performed in an easy and\n    straightforward way thanks to the great quality of the scikit-learn framework.\n\n    .. rst-class:: annotation\n\n      Guillaume Lebourgeois & Samuel Charron - Data Scientists at Data Publica\n\n  .. div:: image-box\n\n    .. image:: images\/datapublica.png\n      :target: http:\/\/www.data-publica.com\/\n\n\n`Machinalis <https:\/\/www.machinalis.com\/>`_\n-------------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    Scikit-learn is the cornerstone of all the machine learning projects carried at\n    Machinalis. It has a consistent API, a wide selection of algorithms and lots of\n    auxiliary tools to deal with the boilerplate. We have used it in production\n    environments on a variety of projects including click-through rate prediction,\n    `information extraction <https:\/\/github.com\/machinalis\/iepy>`_, and even counting\n    sheep!\n\n    In fact, we use it so much that we've started to freeze our common use cases\n    into Python packages, some of them open-sourced, like `FeatureForge\n    <https:\/\/github.com\/machinalis\/featureforge>`_. Scikit-learn in one word: Awesome.\n\n    .. rst-class:: annotation\n\n      Rafael Carrascosa, Lead developer\n\n  .. div:: image-box\n\n    .. image:: images\/machinalis.png\n      :target: https:\/\/www.machinalis.com\/\n\n\n`solido <https:\/\/www.solidodesign.com\/>`_\n-----------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    Scikit-learn is helping to drive Moore's Law, via Solido. 
Solido creates\n    computer-aided design tools used by the majority of top-20 semiconductor\n    companies and fabs, to design the bleeding-edge chips inside smartphones,\n    automobiles, and more. Scikit-learn helps to power Solido's algorithms for\n    rare-event estimation, worst-case verification, optimization, and more. At\n    Solido, we are particularly fond of scikit-learn's libraries for Gaussian\n    Process models, large-scale regularized linear regression, and classification.\n    Scikit-learn has increased our productivity, because for many ML problems we no\n    longer need to \u201croll our own\u201d code. `This PyData 2014 talk\n    <https:\/\/www.youtube.com\/watch?v=Jm-eBD9xR3w>`_ has details.\n\n    .. rst-class:: annotation\n\n      Trent McConaghy, founder, Solido Design Automation Inc.\n\n  .. div:: image-box\n\n    .. image:: images\/solido_logo.png\n      :target: https:\/\/www.solidodesign.com\/\n\n\n`INFONEA <http:\/\/www.infonea.com\/en\/>`_\n---------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    We employ scikit-learn for rapid prototyping and custom-made Data Science\n    solutions within our in-memory based Business Intelligence Software\n    INFONEA\u00ae. As a well-documented and comprehensive collection of\n    state-of-the-art algorithms and pipelining methods, scikit-learn enables\n    us to provide flexible and scalable scientific analysis solutions. Thus,\n    scikit-learn is immensely valuable in realizing a powerful integration of\n    Data Science technology within self-service business analytics.\n\n    .. rst-class:: annotation\n\n      Thorsten Kranz, Data Scientist, Comma Soft AG.\n\n  .. div:: image-box\n\n    .. image:: images\/infonea.jpg\n      :target: http:\/\/www.infonea.com\/en\/\n\n\n`Dataiku <https:\/\/www.dataiku.com\/>`_\n-------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. 
div:: text-box\n\n    Our software, Data Science Studio (DSS), enables users to create data services\n    that combine `ETL <https:\/\/en.wikipedia.org\/wiki\/Extract,_transform,_load>`_ with\n    Machine Learning. Our Machine Learning module integrates\n    many scikit-learn algorithms. The scikit-learn library is a perfect integration\n    with DSS because it offers algorithms for virtually all business cases. Our goal\n    is to offer a transparent and flexible tool that makes it easier to optimize\n    time-consuming aspects of building a data service, preparing data, and training\n    machine learning algorithms on all types of data.\n\n    .. rst-class:: annotation\n\n      Florian Douetteau, CEO, Dataiku\n\n  .. div:: image-box\n\n    .. image:: images\/dataiku_logo.png\n      :target: https:\/\/www.dataiku.com\/\n\n\n`Otto Group <https:\/\/ottogroup.com\/>`_\n--------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    Here at Otto Group, one of the global Big Five B2C online retailers, we are using\n    scikit-learn in all aspects of our daily work, from data exploration to development\n    of machine learning applications to the productive deployment of those services.\n    It helps us to tackle machine learning problems ranging from e-commerce to logistics.\n    Its consistent APIs enabled us to build the `Palladium REST-API framework\n    <https:\/\/github.com\/ottogroup\/palladium\/>`_ around it and continuously deliver\n    scikit-learn based services.\n\n    .. rst-class:: annotation\n\n      Christian Rammig, Head of Data Science, Otto Group\n\n  .. div:: image-box\n\n    .. image:: images\/ottogroup_logo.png\n      :target: https:\/\/ottogroup.com\n\n\n`Zopa <https:\/\/zopa.com\/>`_\n---------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. 
div:: text-box\n\n    At Zopa, the first ever Peer-to-Peer lending platform, we extensively use\n    scikit-learn to run the business and optimize our users' experience. It powers our\n    Machine Learning models involved in credit risk, fraud risk, marketing, and pricing,\n    and has been used for originating at least 1 billion GBP worth of Zopa loans. It is\n    very well documented, powerful, and simple to use. We are grateful for the\n    capabilities it has provided, and for allowing us to deliver on our mission of\n    making money simple and fair.\n\n    .. rst-class:: annotation\n\n      Vlasios Vasileiou, Head of Data Science, Zopa\n\n  .. div:: image-box\n\n    .. image:: images\/zopa.png\n      :target: https:\/\/zopa.com\n\n\n`MARS <https:\/\/www.mars.com\/global>`_\n-------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    Scikit-Learn is integral to the Machine Learning Ecosystem at Mars. Whether\n    we're designing better recipes for petfood or closely analysing our cocoa\n    supply chain, Scikit-Learn is used as a tool for rapidly prototyping ideas\n    and taking them to production. This allows us to better understand and meet\n    the needs of our consumers worldwide. Scikit-Learn's feature-rich toolset is\n    easy to use and equips our associates with the capabilities they need to\n    solve the business challenges they face every day.\n\n    .. rst-class:: annotation\n\n      Michael Fitzke, Next Generation Technologies Sr Leader, Mars Inc.\n\n  .. div:: image-box\n\n    .. image:: images\/mars.png\n      :target: https:\/\/www.mars.com\/global\n\n\n`BNP Paribas Cardif <https:\/\/www.bnpparibascardif.com\/>`_\n---------------------------------------------------------\n\n.. div:: sk-text-image-grid-large\n\n  .. div:: text-box\n\n    BNP Paribas Cardif uses scikit-learn for several of its machine learning models\n    in production. 
Our internal community of developers and data scientists has\n    been using scikit-learn since 2015, for several reasons: the quality of the\n    developments, documentation and contribution governance, and the sheer size of\n    the contributing community. We even explicitly mention the use of\n    scikit-learn's pipelines in our internal model risk governance as one of our\n    good practices to decrease operational risks and overfitting risk. As a way to\n    support open source software development and in particular the scikit-learn\n    project, we decided to participate in scikit-learn's consortium at La Fondation\n    Inria since its creation in 2018.\n\n    .. rst-class:: annotation\n\n      S\u00e9bastien Conort, Chief Data Scientist, BNP Paribas Cardif\n\n  .. div:: image-box\n\n    .. image:: images\/bnp_paribas_cardif.png\n      :target: https:\/\/www.bnpparibascardif.com\/","site":"scikit-learn"}
{"questions":"scikit-learn In addition scikit learn includes various random sample generators that samplegenerators sklearn datasets Generated datasets can be used to build artificial datasets of controlled size and complexity","answers":".. _sample_generators:\n\nGenerated datasets\n==================\n\n.. currentmodule:: sklearn.datasets\n\nIn addition, scikit-learn includes various random sample generators that\ncan be used to build artificial datasets of controlled size and complexity.\n\nGenerators for classification and clustering\n--------------------------------------------\n\nThese generators produce a matrix of features and corresponding discrete\ntargets.\n\nSingle label\n~~~~~~~~~~~~\n\n:func:`make_blobs` creates a multiclass dataset by allocating each class to one\nnormally-distributed cluster of points. It provides control over the centers and\nstandard deviations of each cluster. This dataset is used to demonstrate clustering.\n\n.. plot::\n   :context: close-figs\n   :scale: 70\n   :align: center\n\n   import matplotlib.pyplot as plt\n   from sklearn.datasets import make_blobs\n\n   X, y = make_blobs(centers=3, cluster_std=0.5, random_state=0)\n\n   plt.scatter(X[:, 0], X[:, 1], c=y)\n   plt.title(\"Three normally-distributed clusters\")\n   plt.show()\n\n:func:`make_classification` also creates multiclass datasets but specializes in\nintroducing noise by way of: correlated, redundant and uninformative features; multiple\nGaussian clusters per class; and linear transformations of the feature space.\n\n.. 
plot::\n   :context: close-figs\n   :scale: 70\n   :align: center\n\n   import matplotlib.pyplot as plt\n   from sklearn.datasets import make_classification\n\n   fig, axs = plt.subplots(1, 3, figsize=(12, 4), sharey=True, sharex=True)\n   titles = [\"Two classes,\\none informative feature,\\none cluster per class\",\n             \"Two classes,\\ntwo informative features,\\ntwo clusters per class\",\n             \"Three classes,\\ntwo informative features,\\none cluster per class\"]\n   params = [\n       {\"n_informative\": 1, \"n_clusters_per_class\": 1, \"n_classes\": 2},\n       {\"n_informative\": 2, \"n_clusters_per_class\": 2, \"n_classes\": 2},\n       {\"n_informative\": 2, \"n_clusters_per_class\": 1, \"n_classes\": 3}\n   ]\n\n   for i, param in enumerate(params):\n       X, Y = make_classification(n_features=2, n_redundant=0, random_state=1, **param)\n       axs[i].scatter(X[:, 0], X[:, 1], c=Y)\n       axs[i].set_title(titles[i])\n\n   plt.tight_layout()\n   plt.show()\n\n:func:`make_gaussian_quantiles` divides a single Gaussian cluster into\nnear-equal-size classes separated by concentric hyperspheres.\n\n.. plot::\n   :context: close-figs\n   :scale: 70\n   :align: center\n\n   import matplotlib.pyplot as plt\n   from sklearn.datasets import make_gaussian_quantiles\n\n   X, Y = make_gaussian_quantiles(n_features=2, n_classes=3, random_state=0)\n   plt.scatter(X[:, 0], X[:, 1], c=Y)\n   plt.title(\"Gaussian divided into three quantiles\")\n   plt.show()\n\n:func:`make_hastie_10_2` generates a similar binary, 10-dimensional problem.\n\n:func:`make_circles` and :func:`make_moons` generate 2D binary classification\ndatasets that are challenging to certain algorithms (e.g., centroid-based\nclustering or linear classification), including optional Gaussian noise.\nThey are useful for visualization. 
:func:`make_circles` produces Gaussian data\nwith a spherical decision boundary for binary classification, while\n:func:`make_moons` produces two interleaving half-circles.\n\n\n.. plot::\n   :context: close-figs\n   :scale: 70\n   :align: center\n\n   import matplotlib.pyplot as plt\n   from sklearn.datasets import make_circles, make_moons\n\n   fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))\n\n   X, Y = make_circles(noise=0.1, factor=0.3, random_state=0)\n   ax1.scatter(X[:, 0], X[:, 1], c=Y)\n   ax1.set_title(\"make_circles\")\n\n   X, Y = make_moons(noise=0.1, random_state=0)\n   ax2.scatter(X[:, 0], X[:, 1], c=Y)\n   ax2.set_title(\"make_moons\")\n\n   plt.tight_layout()\n   plt.show()\n\n\n\nMultilabel\n~~~~~~~~~~\n\n:func:`make_multilabel_classification` generates random samples with multiple\nlabels, reflecting a bag of words drawn from a mixture of topics. The number of\ntopics for each document is drawn from a Poisson distribution, and the topics\nthemselves are drawn from a fixed random distribution. Similarly, the number of\nwords is drawn from Poisson, with words drawn from a multinomial, where each\ntopic defines a probability distribution over words. Simplifications with\nrespect to true bag-of-words mixtures include:\n\n* Per-topic word distributions are independently drawn, where in reality all\n  would be affected by a sparse base distribution, and would be correlated.\n* For a document generated from multiple topics, all topics are weighted\n  equally in generating its bag of words.\n* Documents without labels words at random, rather than from a base\n  distribution.\n\n.. image:: ..\/auto_examples\/datasets\/images\/sphx_glr_plot_random_multilabel_dataset_001.png\n   :target: ..\/auto_examples\/datasets\/plot_random_multilabel_dataset.html\n   :scale: 50\n   :align: center\n\nBiclustering\n~~~~~~~~~~~~\n\n.. 
autosummary::\n\n   make_biclusters\n   make_checkerboard\n\n\nGenerators for regression\n-------------------------\n\n:func:`make_regression` produces regression targets as an optionally-sparse\nrandom linear combination of random features, with noise. Its informative\nfeatures may be uncorrelated, or low rank (few features account for most of the\nvariance).\n\nOther regression generators generate functions deterministically from\nrandomized features.  :func:`make_sparse_uncorrelated` produces a target as a\nlinear combination of four features with fixed coefficients.\nOthers encode explicitly non-linear relations:\n:func:`make_friedman1` is related by polynomial and sine transforms;\n:func:`make_friedman2` includes feature multiplication and reciprocation; and\n:func:`make_friedman3` is similar with an arctan transformation on the target.\n\nGenerators for manifold learning\n--------------------------------\n\n.. autosummary::\n\n   make_s_curve\n   make_swiss_roll\n\nGenerators for decomposition\n----------------------------\n\n.. 
autosummary::\n\n   make_low_rank_matrix\n   make_sparse_coded_signal\n   make_spd_matrix\n   make_sparse_spd_matrix","site":"scikit-learn"}
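The generator functions documented in the record above can be exercised directly; a minimal sketch (assumes scikit-learn is installed, parameter values chosen only for illustration):

```python
from sklearn.datasets import make_blobs, make_classification, make_regression

# Three normally-distributed clusters, matching the make_blobs plot above.
X, y = make_blobs(n_samples=100, centers=3, cluster_std=0.5, random_state=0)
print(X.shape, len(set(y)))  # (100, 2) 3

# A three-class problem with two informative features and one cluster per
# class, as in the third make_classification panel.
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                           n_clusters_per_class=1, n_classes=3, random_state=1)
print(X.shape)  # (100, 2)

# A regression target built from a sparse random linear combination of
# features: only n_informative ground-truth coefficients are nonzero.
X, y, coef = make_regression(n_samples=50, n_features=10, n_informative=3,
                             noise=0.5, coef=True, random_state=0)
print(int((coef != 0).sum()))  # 3
```

Note that `make_classification` enforces `n_classes * n_clusters_per_class <= 2 ** n_informative`, which is why the informative-feature counts in the panels above scale with the number of classes.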
{"questions":"scikit-learn scalingstrategies For some applications the amount of examples features or both and or the speed at which they need to be processed are challenging for traditional approaches In these cases scikit learn has a number of options you can consider to make your system scale Strategies to scale computationally bigger data","answers":".. _scaling_strategies:\n\nStrategies to scale computationally: bigger data\n=================================================\n\nFor some applications the amount of examples, features (or both) and\/or the\nspeed at which they need to be processed are challenging for traditional\napproaches. In these cases scikit-learn has a number of options you can\nconsider to make your system scale.\n\nScaling with instances using out-of-core learning\n--------------------------------------------------\n\nOut-of-core (or \"external memory\") learning is a technique used to learn from\ndata that cannot fit in a computer's main memory (RAM).\n\nHere is a sketch of a system designed to achieve this goal:\n\n1. a way to stream instances\n2. a way to extract features from instances\n3. an incremental algorithm\n\nStreaming instances\n....................\n\nBasically, 1. may be a reader that yields instances from files on a\nhard drive, a database, from a network stream etc. However,\ndetails on how to achieve this are beyond the scope of this documentation.\n\nExtracting features\n...................\n\n\\2. could be any relevant way to extract features among the\ndifferent :ref:`feature extraction <feature_extraction>` methods supported by\nscikit-learn. However, when working with data that needs vectorization and\nwhere the set of features or values is not known in advance one should take\nexplicit care. A good example is text classification where unknown terms are\nlikely to be found during training. 
It is possible to use a stateful\nvectorizer if making multiple passes over the data is reasonable from an\napplication point of view. Otherwise, one can turn up the difficulty by using\na stateless feature extractor. Currently the preferred way to do this is to\nuse the so-called :ref:`hashing trick<feature_hashing>` as implemented by\n:class:`sklearn.feature_extraction.FeatureHasher` for datasets with categorical\nvariables represented as list of Python dicts or\n:class:`sklearn.feature_extraction.text.HashingVectorizer` for text documents.\n\nIncremental learning\n.....................\n\nFinally, for 3. we have a number of options inside scikit-learn. Although not\nall algorithms can learn incrementally (i.e. without seeing all the instances\nat once), all estimators implementing the ``partial_fit`` API are candidates.\nActually, the ability to learn incrementally from a mini-batch of instances\n(sometimes called \"online learning\") is key to out-of-core learning as it\nguarantees that at any given time there will be only a small amount of\ninstances in the main memory. 
Choosing a good size for the mini-batch that\nbalances relevancy and memory footprint could involve some tuning [1]_.\n\nHere is a list of incremental estimators for different tasks:\n\n- Classification\n    + :class:`sklearn.naive_bayes.MultinomialNB`\n    + :class:`sklearn.naive_bayes.BernoulliNB`\n    + :class:`sklearn.linear_model.Perceptron`\n    + :class:`sklearn.linear_model.SGDClassifier`\n    + :class:`sklearn.linear_model.PassiveAggressiveClassifier`\n    + :class:`sklearn.neural_network.MLPClassifier`\n- Regression\n    + :class:`sklearn.linear_model.SGDRegressor`\n    + :class:`sklearn.linear_model.PassiveAggressiveRegressor`\n    + :class:`sklearn.neural_network.MLPRegressor`\n- Clustering\n    + :class:`sklearn.cluster.MiniBatchKMeans`\n    + :class:`sklearn.cluster.Birch`\n- Decomposition \/ feature Extraction\n    + :class:`sklearn.decomposition.MiniBatchDictionaryLearning`\n    + :class:`sklearn.decomposition.IncrementalPCA`\n    + :class:`sklearn.decomposition.LatentDirichletAllocation`\n    + :class:`sklearn.decomposition.MiniBatchNMF`\n- Preprocessing\n    + :class:`sklearn.preprocessing.StandardScaler`\n    + :class:`sklearn.preprocessing.MinMaxScaler`\n    + :class:`sklearn.preprocessing.MaxAbsScaler`\n\nFor classification, a somewhat important thing to note is that although a\nstateless feature extraction routine may be able to cope with new\/unseen\nattributes, the incremental learner itself may be unable to cope with\nnew\/unseen targets classes. In this case you have to pass all the possible\nclasses to the first ``partial_fit`` call using the ``classes=`` parameter.\n\nAnother aspect to consider when choosing a proper algorithm is that not all of\nthem put the same importance on each example over time. Namely, the\n``Perceptron`` is still sensitive to badly labeled examples even after many\nexamples whereas the ``SGD*`` and ``PassiveAggressive*`` families are more\nrobust to this kind of artifacts. 
Conversely, the latter also tend to give less\nimportance to remarkably different, yet properly labeled examples when they\ncome late in the stream as their learning rate decreases over time.\n\nExamples\n..........\n\nFinally, we have a full-fledged example of\n:ref:`sphx_glr_auto_examples_applications_plot_out_of_core_classification.py`. It is aimed at\nproviding a starting point for people wanting to build out-of-core learning\nsystems and demonstrates most of the notions discussed above.\n\nFurthermore, it also shows the evolution of the performance of different\nalgorithms with the number of processed examples.\n\n.. |accuracy_over_time| image::  ..\/auto_examples\/applications\/images\/sphx_glr_plot_out_of_core_classification_001.png\n    :target: ..\/auto_examples\/applications\/plot_out_of_core_classification.html\n    :scale: 80\n\n.. centered:: |accuracy_over_time|\n\nNow looking at the computation time of the different parts, we see that the\nvectorization is much more expensive than learning itself. From the different\nalgorithms, ``MultinomialNB`` is the most expensive, but its overhead can be\nmitigated by increasing the size of the mini-batches (exercise: change\n``minibatch_size`` to 100 and 10000 in the program and compare).\n\n.. |computation_time| image::  ..\/auto_examples\/applications\/images\/sphx_glr_plot_out_of_core_classification_003.png\n    :target: ..\/auto_examples\/applications\/plot_out_of_core_classification.html\n    :scale: 80\n\n.. centered:: |computation_time|\n\n\nNotes\n......\n\n.. [1] Depending on the algorithm the mini-batch size can influence results or\n       not. SGD*, PassiveAggressive*, and discrete NaiveBayes are truly online\n       and are not affected by batch size. Conversely, MiniBatchKMeans\n       convergence rate is affected by the batch size. 
Also, its memory\n       footprint can vary dramatically with batch size.","site":"scikit-learn"}
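The three ingredients sketched in the record above (1. streaming instances, 2. stateless feature extraction, 3. incremental learning) fit together in a few lines; a toy sketch, with an in-memory list standing in for a real stream reader:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# 2. Stateless feature extraction: the hashing trick needs no fitted
#    vocabulary, so unseen terms in later batches need no second pass.
vectorizer = HashingVectorizer(n_features=2**10)
clf = SGDClassifier(random_state=0)

# 1. A toy stream of mini-batches; a real system would yield these from
#    files on disk, a database, or a network socket.
stream = [
    (["good movie", "great plot"], [1, 1]),
    (["bad film", "terrible acting"], [0, 0]),
]

# 3. Incremental learning: all possible classes must be declared on the
#    first partial_fit call, since later batches may lack some labels.
for texts, labels in stream:
    X_batch = vectorizer.transform(texts)
    clf.partial_fit(X_batch, labels, classes=[0, 1])

print(clf.predict(vectorizer.transform(["great movie"])))
```

At any point in the loop only one mini-batch is materialized in memory, which is the property that makes this pattern out-of-core.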
{"questions":"scikit-learn correspond to certain kernels as they are used for example in support vector machines see kernelapproximation This submodule contains functions that approximate the feature mappings that The following feature functions perform non linear transformations of the Kernel Approximation input which can serve as a basis for linear classification or other","answers":".. _kernel_approximation:\n\nKernel Approximation\n====================\n\nThis submodule contains functions that approximate the feature mappings that\ncorrespond to certain kernels, as they are used for example in support vector\nmachines (see :ref:`svm`).\nThe following feature functions perform non-linear transformations of the\ninput, which can serve as a basis for linear classification or other\nalgorithms.\n\n.. currentmodule:: sklearn.linear_model\n\nThe advantage of using approximate explicit feature maps compared to the\n`kernel trick <https:\/\/en.wikipedia.org\/wiki\/Kernel_trick>`_,\nwhich makes use of feature maps implicitly, is that explicit mappings\ncan be better suited for online learning and can significantly reduce the cost\nof learning with very large datasets.\nStandard kernelized SVMs do not scale well to large datasets, but using an\napproximate kernel map it is possible to use much more efficient linear SVMs.\nIn particular, the combination of kernel map approximations with\n:class:`SGDClassifier` can make non-linear learning on large datasets possible.\n\nSince there has not been much empirical work using approximate embeddings, it\nis advisable to compare results against exact kernel methods when possible.\n\n.. seealso::\n\n   :ref:`polynomial_regression` for an exact polynomial transformation.\n\n.. currentmodule:: sklearn.kernel_approximation\n\n.. 
_nystroem_kernel_approx:\n\nNystroem Method for Kernel Approximation\n----------------------------------------\nThe Nystroem method, as implemented in :class:`Nystroem` is a general method for\nreduced rank approximations of kernels. It achieves this by subsampling without\nreplacement rows\/columns of the data on which the kernel is evaluated. While the\ncomputational complexity of the exact method is\n:math:`\\mathcal{O}(n^3_{\\text{samples}})`, the complexity of the approximation\nis :math:`\\mathcal{O}(n^2_{\\text{components}} \\cdot n_{\\text{samples}})`, where\none can set :math:`n_{\\text{components}} \\ll n_{\\text{samples}}` without a\nsignificative decrease in performance [WS2001]_.\n\nWe can construct the eigendecomposition of the kernel matrix :math:`K`, based\non the features of the data, and then split it into sampled and unsampled data\npoints.\n\n.. math::\n\n        K = U \\Lambda U^T\n        = \\begin{bmatrix} U_1 \\\\ U_2\\end{bmatrix} \\Lambda \\begin{bmatrix} U_1 \\\\ U_2 \\end{bmatrix}^T\n        = \\begin{bmatrix} U_1 \\Lambda U_1^T & U_1 \\Lambda U_2^T \\\\ U_2 \\Lambda U_1^T & U_2 \\Lambda U_2^T \\end{bmatrix}\n        \\equiv \\begin{bmatrix} K_{11} & K_{12} \\\\ K_{21} & K_{22} \\end{bmatrix}\n\nwhere:\n\n* :math:`U` is orthonormal\n* :math:`\\Lambda` is diagonal matrix of eigenvalues\n* :math:`U_1` is orthonormal matrix of samples that were chosen\n* :math:`U_2` is orthonormal matrix of samples that were not chosen\n\nGiven that :math:`U_1 \\Lambda U_1^T` can be obtained by orthonormalization of\nthe matrix :math:`K_{11}`, and :math:`U_2 \\Lambda U_1^T` can be evaluated (as\nwell as its transpose), the only remaining term to elucidate is\n:math:`U_2 \\Lambda U_2^T`. To do this we can express it in terms of the already\nevaluated matrices:\n\n.. 
math::\n\n         \\begin{align} U_2 \\Lambda U_2^T &= \\left(K_{21} U_1 \\Lambda^{-1}\\right) \\Lambda \\left(K_{21} U_1 \\Lambda^{-1}\\right)^T\n         \\\\&= K_{21} U_1 (\\Lambda^{-1} \\Lambda) \\Lambda^{-1} U_1^T K_{21}^T\n         \\\\&= K_{21} U_1 \\Lambda^{-1} U_1^T K_{21}^T\n         \\\\&= K_{21} K_{11}^{-1} K_{21}^T\n         \\\\&= \\left( K_{21} K_{11}^{-\\frac12} \\right) \\left( K_{21} K_{11}^{-\\frac12} \\right)^T\n         .\\end{align}\n\nDuring ``fit``, the class :class:`Nystroem` evaluates the basis :math:`U_1`, and\ncomputes the normalization constant, :math:`K_{11}^{-\\frac12}`. Later, during\n``transform``, the kernel matrix is determined between the basis (given by the\n`components_` attribute) and the new data points, ``X``. This matrix is then\nmultiplied by the ``normalization_`` matrix for the final result.\n\nBy default :class:`Nystroem` uses the ``rbf`` kernel, but it can use any kernel\nfunction or a precomputed kernel matrix. The number of samples used - which is\nalso the dimensionality of the features computed - is given by the parameter\n``n_components``.\n\n.. rubric:: Examples\n\n* See the example entitled\n  :ref:`sphx_glr_auto_examples_applications_plot_cyclical_feature_engineering.py`,\n  that shows an efficient machine learning pipeline that uses a\n  :class:`Nystroem` kernel.\n\n.. _rbf_kernel_approx:\n\nRadial Basis Function Kernel\n----------------------------\n\nThe :class:`RBFSampler` constructs an approximate mapping for the radial basis\nfunction kernel, also known as *Random Kitchen Sinks* [RR2007]_. 
This\ntransformation can be used to explicitly model a kernel map, prior to applying\na linear algorithm, for example a linear SVM::\n\n    >>> from sklearn.kernel_approximation import RBFSampler\n    >>> from sklearn.linear_model import SGDClassifier\n    >>> X = [[0, 0], [1, 1], [1, 0], [0, 1]]\n    >>> y = [0, 0, 1, 1]\n    >>> rbf_feature = RBFSampler(gamma=1, random_state=1)\n    >>> X_features = rbf_feature.fit_transform(X)\n    >>> clf = SGDClassifier(max_iter=5)\n    >>> clf.fit(X_features, y)\n    SGDClassifier(max_iter=5)\n    >>> clf.score(X_features, y)\n    1.0\n\nThe mapping relies on a Monte Carlo approximation to the\nkernel values. The ``fit`` function performs the Monte Carlo sampling, whereas\nthe ``transform`` method performs the mapping of the data.  Because of the\ninherent randomness of the process, results may vary between different calls to\nthe ``fit`` function.\n\nThe ``fit`` function takes two arguments:\n``n_components``, which is the target dimensionality of the feature transform,\nand ``gamma``, the parameter of the RBF-kernel.  A higher ``n_components`` will\nresult in a better approximation of the kernel and will yield results more\nsimilar to those produced by a kernel SVM. Note that \"fitting\" the feature\nfunction does not actually depend on the data given to the ``fit`` function.\nOnly the dimensionality of the data is used.\nDetails on the method can be found in [RR2007]_.\n\nFor a given value of ``n_components`` :class:`RBFSampler` is often less accurate\nthan :class:`Nystroem`. :class:`RBFSampler` is cheaper to compute, though, making\nuse of larger feature spaces more efficient.\n\n.. figure:: ..\/auto_examples\/miscellaneous\/images\/sphx_glr_plot_kernel_approximation_002.png\n    :target: ..\/auto_examples\/miscellaneous\/plot_kernel_approximation.html\n    :scale: 50%\n    :align: center\n\n    Comparing an exact RBF kernel (left) with the approximation (right)\n\n.. 
rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_miscellaneous_plot_kernel_approximation.py`\n\n.. _additive_chi_kernel_approx:\n\nAdditive Chi Squared Kernel\n---------------------------\n\nThe additive chi squared kernel is a kernel on histograms, often used in computer vision.\n\nThe additive chi squared kernel as used here is given by\n\n.. math::\n\n        k(x, y) = \\sum_i \\frac{2x_iy_i}{x_i+y_i}\n\nThis is not exactly the same as :func:`sklearn.metrics.pairwise.additive_chi2_kernel`.\nThe authors of [VZ2010]_ prefer the version above as it is always positive\ndefinite.\nSince the kernel is additive, it is possible to treat all components\n:math:`x_i` separately for embedding. This makes it possible to sample\nthe Fourier transform at regular intervals, instead of approximating\nusing Monte Carlo sampling.\n\nThe class :class:`AdditiveChi2Sampler` implements this component-wise\ndeterministic sampling. Each component is sampled :math:`n` times, yielding\n:math:`2n+1` dimensions per input dimension (the multiple of two stems\nfrom the real and imaginary parts of the Fourier transform).\nIn the literature, :math:`n` is usually chosen to be 1 or 2, transforming\nthe dataset to size ``n_samples * 5 * n_features`` (in the case of :math:`n=2`).\n\nThe approximate feature map provided by :class:`AdditiveChi2Sampler` can be combined\nwith the approximate feature map provided by :class:`RBFSampler` to yield an approximate\nfeature map for the exponentiated chi squared kernel.\nSee [VZ2010]_ for details and [VVZ2010]_ for the combination with the :class:`RBFSampler`.\n\n.. _skewed_chi_kernel_approx:\n\nSkewed Chi Squared Kernel\n-------------------------\n\nThe skewed chi squared kernel is given by:\n\n.. 
math::\n\n        k(x,y) = \\prod_i \\frac{2\\sqrt{x_i+c}\\sqrt{y_i+c}}{x_i + y_i + 2c}\n\n\nIt has properties that are similar to the exponentiated chi squared kernel\noften used in computer vision, but allows for a simple Monte Carlo\napproximation of the feature map.\n\nThe usage of the :class:`SkewedChi2Sampler` is the same as the usage described\nabove for the :class:`RBFSampler`. The only difference is in the free\nparameter, which is called :math:`c`.\nFor a motivation for this mapping and the mathematical details see [LS2010]_.\n\n.. _polynomial_kernel_approx:\n\nPolynomial Kernel Approximation via Tensor Sketch\n-------------------------------------------------\n\nThe :ref:`polynomial kernel <polynomial_kernel>` is a popular type of kernel\nfunction given by:\n\n.. math::\n\n        k(x, y) = (\\gamma x^\\top y + c_0)^d\n\nwhere:\n\n* ``x``, ``y`` are the input vectors\n* ``d`` is the kernel degree\n* ``gamma`` is a scaling coefficient\n* ``c_0`` is a constant (intercept) term\n\nIntuitively, the feature space of the polynomial kernel of degree `d`\nconsists of all possible degree-`d` products among input features, which enables\nlearning algorithms using this kernel to account for interactions between features.\n\nThe TensorSketch [PP2013]_ method, as implemented in :class:`PolynomialCountSketch`, is a\nscalable, input-data-independent method for polynomial kernel approximation.\nIt is based on the concept of Count sketch [WIKICS]_ [CCF2002]_, a dimensionality\nreduction technique similar to feature hashing, which instead uses several\nindependent hash functions. TensorSketch obtains a Count Sketch of the outer product\nof two vectors (or a vector with itself), which can be used as an approximation of the\npolynomial kernel feature space. 
In particular, instead of explicitly computing\nthe outer product, TensorSketch computes the Count Sketch of the vectors and then\nuses polynomial multiplication via the Fast Fourier Transform to compute the\nCount Sketch of their outer product.\n\nConveniently, the training phase of TensorSketch simply consists of initializing\nsome random variables. It is thus independent of the input data, i.e. it only\ndepends on the number of input features, but not the data values.\nIn addition, this method can transform samples in\n:math:`\\mathcal{O}(n_{\\text{samples}}(n_{\\text{features}} + n_{\\text{components}} \\log(n_{\\text{components}})))`\ntime, where :math:`n_{\\text{components}}` is the desired output dimension,\ndetermined by ``n_components``.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_kernel_approximation_plot_scalable_poly_kernels.py`\n\n.. _tensor_sketch_kernel_approx:\n\nMathematical Details\n--------------------\n\nKernel methods like support vector machines or kernelized\nPCA rely on a property of reproducing kernel Hilbert spaces.\nFor any positive definite kernel function :math:`k` (a so called Mercer kernel),\nit is guaranteed that there exists a mapping :math:`\\phi`\ninto a Hilbert space :math:`\\mathcal{H}`, such that\n\n.. 
math::\n\n        k(x,y) = \\langle \\phi(x), \\phi(y) \\rangle\n\nwhere :math:`\\langle \\cdot, \\cdot \\rangle` denotes the inner product in the\nHilbert space.\n\nIf an algorithm, such as a linear support vector machine or PCA,\nrelies only on the scalar product of data points :math:`x_i`, one may use\nthe value of :math:`k(x_i, x_j)`, which corresponds to applying the algorithm\nto the mapped data points :math:`\\phi(x_i)`.\nThe advantage of using :math:`k` is that the mapping :math:`\\phi` never has\nto be calculated explicitly, allowing for arbitrarily large\nfeature spaces (even infinite-dimensional ones).\n\nOne drawback of kernel methods is that it might be necessary\nto store many kernel values :math:`k(x_i, x_j)` during optimization.\nIf a kernelized classifier is applied to new data :math:`y_j`,\n:math:`k(x_i, y_j)` needs to be computed to make predictions,\npossibly for many different :math:`x_i` in the training set.\n\nThe classes in this submodule make it possible to approximate the embedding\n:math:`\\phi`, thereby working explicitly with the representations\n:math:`\\phi(x_i)`, which obviates the need to apply the kernel\nor store training examples.\n\n\n.. rubric:: References\n\n.. [WS2001] `\"Using the Nystr\u00f6m method to speed up kernel machines\"\n  <https:\/\/papers.nips.cc\/paper_files\/paper\/2000\/hash\/19de10adbaa1b2ee13f77f679fa1483a-Abstract.html>`_\n  Williams, C.K.I.; Seeger, M. - 2001.\n.. [RR2007] `\"Random features for large-scale kernel machines\"\n  <https:\/\/papers.nips.cc\/paper\/2007\/hash\/013a006f03dbc5392effeb8f18fda755-Abstract.html>`_\n  Rahimi, A. and Recht, B. - Advances in neural information processing 2007.\n.. 
[LS2010] `\"Random Fourier approximations for skewed multiplicative histogram kernels\"\n  <https:\/\/www.researchgate.net\/publication\/221114584_Random_Fourier_Approximations_for_Skewed_Multiplicative_Histogram_Kernels>`_\n  Li, F., Ionescu, C., and Sminchisescu, C.\n  - Pattern Recognition,  DAGM 2010, Lecture Notes in Computer Science.\n.. [VZ2010] `\"Efficient additive kernels via explicit feature maps\"\n  <https:\/\/www.robots.ox.ac.uk\/~vgg\/publications\/2011\/Vedaldi11\/vedaldi11.pdf>`_\n  Vedaldi, A. and Zisserman, A. - Computer Vision and Pattern Recognition 2010\n.. [VVZ2010] `\"Generalized RBF feature maps for Efficient Detection\"\n  <https:\/\/www.robots.ox.ac.uk\/~vgg\/publications\/2010\/Sreekanth10\/sreekanth10.pdf>`_\n  Vempati, S. and Vedaldi, A. and Zisserman, A. and Jawahar, CV - 2010\n.. [PP2013] :doi:`\"Fast and scalable polynomial kernels via explicit feature maps\"\n  <10.1145\/2487575.2487591>`\n  Pham, N., & Pagh, R. - 2013\n.. [CCF2002] `\"Finding frequent items in data streams\"\n  <https:\/\/www.cs.princeton.edu\/courses\/archive\/spring04\/cos598B\/bib\/CharikarCF.pdf>`_\n  Charikar, M., Chen, K., & Farach-Colton - 2002\n.. 
[WIKICS] `\"Wikipedia: Count sketch\"\n  <https:\/\/en.wikipedia.org\/wiki\/Count_sketch>`_","site":"scikit-learn"}
{"questions":"scikit-learn neuralnetworksunsupervised Neural network models unsupervised sklearn neuralnetwork rbm","answers":".. _neural_networks_unsupervised:\n\n====================================\nNeural network models (unsupervised)\n====================================\n\n.. currentmodule:: sklearn.neural_network\n\n\n.. _rbm:\n\nRestricted Boltzmann machines\n=============================\n\nRestricted Boltzmann machines (RBM) are unsupervised nonlinear feature learners\nbased on a probabilistic model. The features extracted by an RBM or a hierarchy\nof RBMs often give good results when fed into a linear classifier such as a\nlinear SVM or a perceptron.\n\nThe model makes assumptions regarding the distribution of inputs. At the moment,\nscikit-learn only provides :class:`BernoulliRBM`, which assumes the inputs are\neither binary values or values between 0 and 1, each encoding the probability\nthat the specific feature would be turned on.\n\nThe RBM tries to maximize the likelihood of the data using a particular\ngraphical model. The parameter learning algorithm used (:ref:`Stochastic\nMaximum Likelihood <sml>`) prevents the representations from straying far\nfrom the input data, which makes them capture interesting regularities, but\nmakes the model less useful for small datasets, and usually not useful for\ndensity estimation.\n\nThe method gained popularity for initializing deep neural networks with the\nweights of independent RBMs. This method is known as unsupervised pre-training.\n\n.. figure:: ..\/auto_examples\/neural_networks\/images\/sphx_glr_plot_rbm_logistic_classification_001.png\n   :target: ..\/auto_examples\/neural_networks\/plot_rbm_logistic_classification.html\n   :align: center\n   :scale: 100%\n\n.. 
rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_neural_networks_plot_rbm_logistic_classification.py`\n\n\nGraphical model and parametrization\n-----------------------------------\n\nThe graphical model of an RBM is a fully-connected bipartite graph.\n\n.. image:: ..\/images\/rbm_graph.png\n   :align: center\n\nThe nodes are random variables whose states depend on the state of the other\nnodes they are connected to. The model is therefore parameterized by the\nweights of the connections, as well as one intercept (bias) term for each\nvisible and hidden unit, omitted from the image for simplicity.\n\nThe energy function measures the quality of a joint assignment:\n\n.. math::\n\n   E(\\mathbf{v}, \\mathbf{h}) = -\\sum_i \\sum_j w_{ij}v_ih_j - \\sum_i b_iv_i\n     - \\sum_j c_jh_j\n\nIn the formula above, :math:`\\mathbf{b}` and :math:`\\mathbf{c}` are the\nintercept vectors for the visible and hidden layers, respectively. The\njoint probability of the model is defined in terms of the energy:\n\n.. math::\n\n   P(\\mathbf{v}, \\mathbf{h}) = \\frac{e^{-E(\\mathbf{v}, \\mathbf{h})}}{Z}\n\n\nThe word *restricted* refers to the bipartite structure of the model, which\nprohibits direct interaction between hidden units, or between visible units.\nThis means that the following conditional independencies are assumed:\n\n.. math::\n\n   h_i \\bot h_j | \\mathbf{v} \\\\\n   v_i \\bot v_j | \\mathbf{h}\n\nThe bipartite structure allows for the use of efficient block Gibbs sampling for\ninference.\n\nBernoulli Restricted Boltzmann machines\n---------------------------------------\n\nIn the :class:`BernoulliRBM`, all units are binary stochastic units. This\nmeans that the input data should either be binary, or real-valued between 0 and\n1 signifying the probability that the visible unit would turn on or off. This\nis a good model for character recognition, where the interest is on which\npixels are active and which aren't. 
For images of natural scenes it no longer\nfits because of background, depth and the tendency of neighbouring pixels to\ntake the same values.\n\nThe conditional probability distribution of each unit is given by the\nlogistic sigmoid activation function of the input it receives:\n\n.. math::\n\n   P(v_i=1|\\mathbf{h}) = \\sigma(\\sum_j w_{ij}h_j + b_i) \\\\\n   P(h_j=1|\\mathbf{v}) = \\sigma(\\sum_i w_{ij}v_i + c_j)\n\nwhere :math:`\\sigma` is the logistic sigmoid function:\n\n.. math::\n\n   \\sigma(x) = \\frac{1}{1 + e^{-x}}\n\n.. _sml:\n\nStochastic Maximum Likelihood learning\n--------------------------------------\n\nThe training algorithm implemented in :class:`BernoulliRBM` is known as\nStochastic Maximum Likelihood (SML) or Persistent Contrastive Divergence\n(PCD). Optimizing maximum likelihood directly is infeasible because of\nthe form of the data likelihood:\n\n.. math::\n\n   \\log P(v) = \\log \\sum_h e^{-E(v, h)} - \\log \\sum_{x, y} e^{-E(x, y)}\n\nFor simplicity the equation above is written for a single training example.\nThe gradient with respect to the weights is formed of two terms corresponding to\nthe ones above. They are usually known as the positive gradient and the negative\ngradient, because of their respective signs.  In this implementation, the\ngradients are estimated over mini-batches of samples.\n\nIn maximizing the log-likelihood, the positive gradient makes the model prefer\nhidden states that are compatible with the observed training data. Because of\nthe bipartite structure of RBMs, it can be computed efficiently. The\nnegative gradient, however, is intractable. Its goal is to lower the energy of\njoint states that the model prefers, therefore making it stay true to the data.\nIt can be approximated by Markov chain Monte Carlo using block Gibbs sampling by\niteratively sampling each of :math:`v` and :math:`h` given the other, until the\nchain mixes. Samples generated in this way are sometimes referred to as fantasy\nparticles. 
This is inefficient and it is difficult to determine whether the\nMarkov chain mixes.\n\nThe Contrastive Divergence method suggests stopping the chain after a small\nnumber of iterations, :math:`k`, often as small as 1. This method is fast and has\nlow variance, but the samples are far from the model distribution.\n\nPersistent Contrastive Divergence addresses this. Instead of starting a new\nchain each time the gradient is needed, and performing only one Gibbs sampling\nstep, in PCD we keep a number of chains (fantasy particles) that are updated\n:math:`k` Gibbs steps after each weight update. This allows the particles to\nexplore the space more thoroughly.\n\n.. rubric:: References\n\n* `\"A fast learning algorithm for deep belief nets\"\n  <https:\/\/www.cs.toronto.edu\/~hinton\/absps\/fastnc.pdf>`_,\n  G. Hinton, S. Osindero, Y.-W. Teh, 2006\n\n* `\"Training Restricted Boltzmann Machines using Approximations to\n  the Likelihood Gradient\"\n  <https:\/\/www.cs.toronto.edu\/~tijmen\/pcd\/pcd.pdf>`_,\n  T. 
Tieleman, 2008","site":"scikit-learn"}
steps after each weight update  This allows the particles to explore the space more thoroughly      rubric   References      A fast learning algorithm for deep belief nets     https   www cs toronto edu  hinton absps fastnc pdf       G  Hinton  S  Osindero  Y  W  Teh  2006      Training Restricted Boltzmann Machines using Approximations to   the Likelihood Gradient     https   www cs toronto edu  tijmen pcd pcd pdf       T  Tieleman  2008"}
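The record above describes `BernoulliRBM` as an unsupervised feature learner trained with Stochastic Maximum Likelihood. A minimal sketch of that workflow, using synthetic binary data (the sizes and hyperparameter values here are illustrative assumptions, not taken from the scikit-learn example gallery):

```python
# Minimal sketch: unsupervised feature learning with BernoulliRBM (trained
# internally with Persistent Contrastive Divergence / SML, as described above).
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.RandomState(0)
# Inputs must be binary, or in [0, 1] interpreted as activation probabilities.
X = rng.randint(2, size=(200, 64)).astype(np.float64)

rbm = BernoulliRBM(n_components=16, learning_rate=0.05,
                   batch_size=10, n_iter=5, random_state=0)
rbm.fit(X)

# transform() returns the hidden units' activation probabilities P(h=1|v),
# which can be fed to a linear classifier such as LogisticRegression.
H = rbm.transform(X)
print(H.shape)                # (200, 16)
print(rbm.components_.shape)  # (16, 64): one weight row per hidden unit
```

The extracted features `H` are what the text suggests feeding into a linear SVM or perceptron; stacking several RBMs this way corresponds to the unsupervised pre-training scheme mentioned above.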
{"questions":"scikit-learn Density estimation walks the line between unsupervised learning feature Density Estimation density estimation techniques are mixture models such as Jake Vanderplas vanderplas astro washington edu densityestimation engineering and data modeling Some of the most popular and useful","answers":".. _density_estimation:\n\n==================\nDensity Estimation\n==================\n.. sectionauthor:: Jake Vanderplas <vanderplas@astro.washington.edu>\n\nDensity estimation walks the line between unsupervised learning, feature\nengineering, and data modeling.  Some of the most popular and useful\ndensity estimation techniques are mixture models such as\nGaussian Mixtures (:class:`~sklearn.mixture.GaussianMixture`), and\nneighbor-based approaches such as the kernel density estimate\n(:class:`~sklearn.neighbors.KernelDensity`).\nGaussian Mixtures are discussed more fully in the context of\n:ref:`clustering <clustering>`, because the technique is also useful as\nan unsupervised clustering scheme.\n\nDensity estimation is a very simple concept, and most people are already\nfamiliar with one common density estimation technique: the histogram.\n\nDensity Estimation: Histograms\n==============================\nA histogram is a simple visualization of data where bins are defined, and the\nnumber of data points within each bin is tallied.  An example of a histogram\ncan be seen in the upper-left panel of the following figure:\n\n.. |hist_to_kde| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_kde_1d_001.png\n   :target: ..\/auto_examples\/neighbors\/plot_kde_1d.html\n   :scale: 80\n\n.. centered:: |hist_to_kde|\n\nA major problem with histograms, however, is that the choice of binning can\nhave a disproportionate effect on the resulting visualization.  Consider the\nupper-right panel of the above figure.  It shows a histogram over the same\ndata, with the bins shifted right.  
The results of the two visualizations look\nentirely different, and might lead to different interpretations of the data.\n\nIntuitively, one can also think of a histogram as a stack of blocks, one block\nper point.  By stacking the blocks in the appropriate grid space, we recover\nthe histogram.  But what if, instead of stacking the blocks on a regular grid,\nwe center each block on the point it represents, and sum the total height at\neach location?  This idea leads to the lower-left visualization.  It is perhaps\nnot as clean as a histogram, but the fact that the data drive the block\nlocations means that it is a much better representation of the underlying\ndata.\n\nThis visualization is an example of a *kernel density estimation*, in this case\nwith a top-hat kernel (i.e. a square block at each point).  We can recover a\nsmoother distribution by using a smoother kernel.  The bottom-right plot shows\na Gaussian kernel density estimate, in which each point contributes a Gaussian\ncurve to the total.  The result is a smooth density estimate which is derived\nfrom the data, and functions as a powerful non-parametric model of the\ndistribution of points.\n\n.. _kernel_density:\n\nKernel Density Estimation\n=========================\nKernel density estimation in scikit-learn is implemented in the\n:class:`~sklearn.neighbors.KernelDensity` estimator, which uses the\nBall Tree or KD Tree for efficient queries (see :ref:`neighbors` for\na discussion of these).  Though the above example\nuses a 1D data set for simplicity, kernel density estimation can be\nperformed in any number of dimensions, though in practice the curse of\ndimensionality causes its performance to degrade in high dimensions.\n\nIn the following figure, 100 points are drawn from a bimodal distribution,\nand the kernel density estimates are shown for three choices of kernels:\n\n.. 
|kde_1d_distribution| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_kde_1d_003.png\n   :target: ..\/auto_examples\/neighbors\/plot_kde_1d.html\n   :scale: 80\n\n.. centered:: |kde_1d_distribution|\n\nIt's clear how the kernel shape affects the smoothness of the resulting\ndistribution.  The scikit-learn kernel density estimator can be used as\nfollows:\n\n   >>> from sklearn.neighbors import KernelDensity\n   >>> import numpy as np\n   >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])\n   >>> kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(X)\n   >>> kde.score_samples(X)\n   array([-0.41075698, -0.41075698, -0.41076071, -0.41075698, -0.41075698,\n          -0.41076071])\n\nHere we have used ``kernel='gaussian'``, as seen above.\nMathematically, a kernel is a positive function :math:`K(x;h)`\nwhich is controlled by the bandwidth parameter :math:`h`.\nGiven this kernel form, the density estimate at a point :math:`y` within\na group of points :math:`x_i; i=1\\cdots N` is given by:\n\n.. math::\n    \\rho_K(y) = \\sum_{i=1}^{N} K(y - x_i; h)\n\nThe bandwidth here acts as a smoothing parameter, controlling the tradeoff\nbetween bias and variance in the result.  A large bandwidth leads to a very\nsmooth (i.e. high-bias) density distribution.  A small bandwidth leads\nto an unsmooth (i.e. high-variance) density distribution.\n\nThe parameter `bandwidth` controls this smoothing. One can either set this\nparameter manually or use Scott's and Silverman's estimation\nmethods.\n\n:class:`~sklearn.neighbors.KernelDensity` implements several common kernel\nforms, which are shown in the following figure:\n\n.. |kde_kernels| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_kde_1d_002.png\n   :target: ..\/auto_examples\/neighbors\/plot_kde_1d.html\n   :scale: 80\n\n.. centered:: |kde_kernels|\n\n.. 
dropdown:: Kernels' mathematical expressions\n\n  The form of these kernels is as follows:\n\n  * Gaussian kernel (``kernel = 'gaussian'``)\n\n    :math:`K(x; h) \\propto \\exp(- \\frac{x^2}{2h^2} )`\n\n  * Tophat kernel (``kernel = 'tophat'``)\n\n    :math:`K(x; h) \\propto 1` if :math:`x < h`\n\n  * Epanechnikov kernel (``kernel = 'epanechnikov'``)\n\n    :math:`K(x; h) \\propto 1 - \\frac{x^2}{h^2}`\n\n  * Exponential kernel (``kernel = 'exponential'``)\n\n    :math:`K(x; h) \\propto \\exp(-x\/h)`\n\n  * Linear kernel (``kernel = 'linear'``)\n\n    :math:`K(x; h) \\propto 1 - x\/h` if :math:`x < h`\n\n  * Cosine kernel (``kernel = 'cosine'``)\n\n    :math:`K(x; h) \\propto \\cos(\\frac{\\pi x}{2h})` if :math:`x < h`\n\n\nThe kernel density estimator can be used with any of the valid distance\nmetrics (see :class:`~sklearn.metrics.DistanceMetric` for a list of\navailable metrics), though the results are properly normalized only\nfor the Euclidean metric.  One particularly useful metric is the\n`Haversine distance <https:\/\/en.wikipedia.org\/wiki\/Haversine_formula>`_\nwhich measures the angular distance between points on a sphere.  Here\nis an example of using a kernel density estimate for a visualization\nof geospatial data, in this case the distribution of observations of two\ndifferent species on the South American continent:\n\n.. |species_kde| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_species_kde_001.png\n   :target: ..\/auto_examples\/neighbors\/plot_species_kde.html\n   :scale: 80\n\n.. centered:: |species_kde|\n\nOne other useful application of kernel density estimation is to learn a\nnon-parametric generative model of a dataset in order to efficiently\ndraw new samples from this generative model.\nHere is an example of using this process to\ncreate a new set of hand-written digits, using a Gaussian kernel learned\non a PCA projection of the data:\n\n.. 
|digits_kde| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_digits_kde_sampling_001.png\n   :target: ..\/auto_examples\/neighbors\/plot_digits_kde_sampling.html\n   :scale: 80\n\n.. centered:: |digits_kde|\n\nThe \"new\" data consists of linear combinations of the input data, with weights\nprobabilistically drawn given the KDE model.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_neighbors_plot_kde_1d.py`: computation of simple kernel\n  density estimates in one dimension.\n\n* :ref:`sphx_glr_auto_examples_neighbors_plot_digits_kde_sampling.py`: an example of using\n  Kernel Density estimation to learn a generative model of the hand-written\n  digits data, and drawing new samples from this model.\n\n* :ref:`sphx_glr_auto_examples_neighbors_plot_species_kde.py`: an example of Kernel Density\n  estimation using the Haversine distance metric to visualize geospatial data","site":"scikit-learn","answers_cleaned":"    density estimation                      Density Estimation                       sectionauthor   Jake Vanderplas  vanderplas astro washington edu   Density estimation walks the line between unsupervised learning  feature engineering  and data modeling   Some of the most popular and useful density estimation techniques are mixture models such as Gaussian Mixtures   class   sklearn mixture GaussianMixture    and neighbor based approaches such as the kernel density estimate   class   sklearn neighbors KernelDensity    Gaussian Mixtures are discussed more fully in the context of  ref  clustering  clustering    because the technique is also useful as an unsupervised clustering scheme   Density estimation is a very simple concept  and most people are already familiar with one common density estimation technique  the histogram   Density Estimation  Histograms                                A histogram is a simple visualization of data where bins are defined  and the number of data points within each bin is tallied   An example of a 
histogram can be seen in the upper left panel of the following figure       hist to kde  image      auto examples neighbors images sphx glr plot kde 1d 001 png     target     auto examples neighbors plot kde 1d html     scale  80     centered    hist to kde   A major problem with histograms  however  is that the choice of binning can have a disproportionate effect on the resulting visualization   Consider the upper right panel of the above figure   It shows a histogram over the same data  with the bins shifted right   The results of the two visualizations look entirely different  and might lead to different interpretations of the data   Intuitively  one can also think of a histogram as a stack of blocks  one block per point   By stacking the blocks in the appropriate grid space  we recover the histogram   But what if  instead of stacking the blocks on a regular grid  we center each block on the point it represents  and sum the total height at each location   This idea leads to the lower left visualization   It is perhaps not as clean as a histogram  but the fact that the data drive the block locations mean that it is a much better representation of the underlying data   This visualization is an example of a  kernel density estimation   in this case with a top hat kernel  i e  a square block at each point    We can recover a smoother distribution by using a smoother kernel   The bottom right plot shows a Gaussian kernel density estimate  in which each point contributes a Gaussian curve to the total   The result is a smooth density estimate which is derived from the data  and functions as a powerful non parametric model of the distribution of points       kernel density   Kernel Density Estimation                           Kernel density estimation in scikit learn is implemented in the  class   sklearn neighbors KernelDensity  estimator  which uses the Ball Tree or KD Tree for efficient queries  see  ref  neighbors  for a discussion of these    Though the above 
example uses a 1D data set for simplicity  kernel density estimation can be performed in any number of dimensions  though in practice the curse of dimensionality causes its performance to degrade in high dimensions   In the following figure  100 points are drawn from a bimodal distribution  and the kernel density estimates are shown for three choices of kernels       kde 1d distribution  image      auto examples neighbors images sphx glr plot kde 1d 003 png     target     auto examples neighbors plot kde 1d html     scale  80     centered    kde 1d distribution   It s clear how the kernel shape affects the smoothness of the resulting distribution   The scikit learn kernel density estimator can be used as follows          from sklearn neighbors import KernelDensity        import numpy as np        X   np array    1   1     2   1     3   2    1  1    2  1    3  2           kde   KernelDensity kernel  gaussian   bandwidth 0 2  fit X         kde score samples X     array   0 41075698   0 41075698   0 41076071   0 41075698   0 41075698             0 41076071    Here we have used   kernel  gaussian     as seen above  Mathematically  a kernel is a positive function  math  K x h   which is controlled by the bandwidth parameter  math  h   Given this kernel form  the density estimate at a point  math  y  within a group of points  math  x i  i 1 cdots N  is given by      math        rho K y     sum  i 1   N  K y   x i  h   The bandwidth here acts as a smoothing parameter  controlling the tradeoff between bias and variance in the result   A large bandwidth leads to a very smooth  i e  high bias  density distribution   A small bandwidth leads to an unsmooth  i e  high variance  density distribution   The parameter  bandwidth  controls this smoothing  One can either set manually this parameter or use Scott s and Silverman s estimation methods    class   sklearn neighbors KernelDensity  implements several common kernel forms  which are shown in the following figure       kde 
kernels  image      auto examples neighbors images sphx glr plot kde 1d 002 png     target     auto examples neighbors plot kde 1d html     scale  80     centered    kde kernels      dropdown   Kernels  mathematical expressions    The form of these kernels is as follows       Gaussian kernel    kernel    gaussian           math  K x  h   propto  exp    frac x 2  2h 2          Tophat kernel    kernel    tophat           math  K x  h   propto 1  if  math  x   h       Epanechnikov kernel    kernel    epanechnikov           math  K x  h   propto 1    frac x 2  h 2        Exponential kernel    kernel    exponential           math  K x  h   propto  exp  x h        Linear kernel    kernel    linear           math  K x  h   propto 1   x h  if  math  x   h       Cosine kernel    kernel    cosine           math  K x  h   propto  cos  frac  pi x  2h    if  math  x   h    The kernel density estimator can be used with any of the valid distance metrics  see  class   sklearn metrics DistanceMetric  for a list of available metrics   though the results are properly normalized only for the Euclidean metric   One particularly useful metric is the  Haversine distance  https   en wikipedia org wiki Haversine formula    which measures the angular distance between points on a sphere   Here is an example of using a kernel density estimate for a visualization of geospatial data  in this case the distribution of observations of two different species on the South American continent       species kde  image      auto examples neighbors images sphx glr plot species kde 001 png     target     auto examples neighbors plot species kde html     scale  80     centered    species kde   One other useful application of kernel density estimation is to learn a non parametric generative model of a dataset in order to efficiently draw new samples from this generative model  Here is an example of using this process to create a new set of hand written digits  using a Gaussian kernel learned on a PCA 
projection of the data       digits kde  image      auto examples neighbors images sphx glr plot digits kde sampling 001 png     target     auto examples neighbors plot digits kde sampling html     scale  80     centered    digits kde   The  new  data consists of linear combinations of the input data  with weights probabilistically drawn given the KDE model      rubric   Examples     ref  sphx glr auto examples neighbors plot kde 1d py   computation of simple kernel   density estimates in one dimension      ref  sphx glr auto examples neighbors plot digits kde sampling py   an example of using   Kernel Density estimation to learn a generative model of the hand written   digits data  and drawing new samples from this model      ref  sphx glr auto examples neighbors plot species kde py   an example of Kernel Density   estimation using the Haversine distance metric to visualize geospatial data"}
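The density-estimation record above ends with the generative use of `KernelDensity`: fitting a KDE and drawing new samples from it. A small self-contained sketch of that idea on synthetic 1-D bimodal data (the data and bandwidth are illustrative assumptions):

```python
# Sketch: fit a Gaussian KDE to bimodal 1-D data, evaluate the log-density,
# and draw new samples from the fitted model (the generative use above).
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(42)
# Two modes at 0 and 5; KernelDensity expects shape (n_samples, n_features).
X = np.concatenate([rng.normal(0, 1, 100),
                    rng.normal(5, 1, 100)])[:, None]

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X)

# Log-density is high near the modes and low in the trough between them.
log_density = kde.score_samples(np.array([[0.0], [2.5], [5.0]]))
# sample() draws new points from the fitted density (gaussian/tophat kernels).
new_points = kde.sample(10, random_state=0)
print(log_density.shape)  # (3,)
print(new_points.shape)   # (10, 1)
```

Applied to a PCA projection of the digits data instead of this toy input, the same `fit`/`sample` pair is what generates the "new" hand-written digits described above.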
{"questions":"scikit-learn sklearn gaussianprocess to solve regression and probabilistic classification problems Gaussian Processes Gaussian Processes GP are a nonparametric supervised learning method used gaussianprocess","answers":".. _gaussian_process:\n\n==================\nGaussian Processes\n==================\n\n.. currentmodule:: sklearn.gaussian_process\n\n**Gaussian Processes (GP)** are a nonparametric supervised learning method used\nto solve *regression* and *probabilistic classification* problems.\n\nThe advantages of Gaussian processes are:\n\n- The prediction interpolates the observations (at least for regular\n  kernels).\n\n- The prediction is probabilistic (Gaussian) so that one can compute\n  empirical confidence intervals and decide based on those if one should\n  refit (online fitting, adaptive fitting) the prediction in some\n  region of interest.\n\n- Versatile: different :ref:`kernels\n  <gp_kernels>` can be specified. Common kernels are provided, but\n  it is also possible to specify custom kernels.\n\nThe disadvantages of Gaussian processes include:\n\n- Our implementation is not sparse, i.e., it uses the whole samples\/features\n  information to perform the prediction.\n\n- They lose efficiency in high dimensional spaces -- namely when the number\n  of features exceeds a few dozen.\n\n\n.. _gpr:\n\nGaussian Process Regression (GPR)\n=================================\n\n.. currentmodule:: sklearn.gaussian_process\n\nThe :class:`GaussianProcessRegressor` implements Gaussian processes (GP) for\nregression purposes. For this, the prior of the GP needs to be specified. The GP\nwill combine this prior and the likelihood function based on training samples.\nThis allows a probabilistic approach to prediction, giving the mean and\nstandard deviation as output when predicting.\n\n.. 
figure:: ..\/auto_examples\/gaussian_process\/images\/sphx_glr_plot_gpr_noisy_targets_002.png\n   :target: ..\/auto_examples\/gaussian_process\/plot_gpr_noisy_targets.html\n   :align: center\n\nThe prior mean is assumed to be constant and zero (for `normalize_y=False`) or\nthe training data's mean (for `normalize_y=True`). The prior's covariance is\nspecified by passing a :ref:`kernel <gp_kernels>` object. The hyperparameters\nof the kernel are optimized when fitting the :class:`GaussianProcessRegressor`\nby maximizing the log-marginal-likelihood (LML) based on the passed\n`optimizer`. As the LML may have multiple local optima, the optimizer can be\nstarted repeatedly by specifying `n_restarts_optimizer`. The first run is\nalways conducted starting from the initial hyperparameter values of the kernel;\nsubsequent runs are conducted from hyperparameter values that have been chosen\nrandomly from the range of allowed values. If the initial hyperparameters\nshould be kept fixed, `None` can be passed as optimizer.\n\nThe noise level in the targets can be specified by passing it via the parameter\n`alpha`, either globally as a scalar or per datapoint. Note that a moderate\nnoise level can also be helpful for dealing with numeric instabilities during\nfitting as it is effectively implemented as Tikhonov regularization, i.e., by\nadding it to the diagonal of the kernel matrix. An alternative to specifying\nthe noise level explicitly is to include a\n:class:`~sklearn.gaussian_process.kernels.WhiteKernel` component into the\nkernel, which can estimate the global noise level from the data (see example\nbelow). The figure below shows the effect of noisy target handled by setting\nthe parameter `alpha`.\n\n.. figure:: ..\/auto_examples\/gaussian_process\/images\/sphx_glr_plot_gpr_noisy_targets_003.png\n   :target: ..\/auto_examples\/gaussian_process\/plot_gpr_noisy_targets.html\n   :align: center\n\nThe implementation is based on Algorithm 2.1 of [RW2006]_. 
In addition to\nthe API of standard scikit-learn estimators, :class:`GaussianProcessRegressor`:\n\n* allows prediction without prior fitting (based on the GP prior)\n\n* provides an additional method ``sample_y(X)``, which evaluates samples\n  drawn from the GPR (prior or posterior) at given inputs\n\n* exposes a method ``log_marginal_likelihood(theta)``, which can be used\n  externally for other ways of selecting hyperparameters, e.g., via\n  Markov chain Monte Carlo.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_gaussian_process_plot_gpr_noisy_targets.py`\n* :ref:`sphx_glr_auto_examples_gaussian_process_plot_gpr_noisy.py`\n* :ref:`sphx_glr_auto_examples_gaussian_process_plot_compare_gpr_krr.py`\n* :ref:`sphx_glr_auto_examples_gaussian_process_plot_gpr_co2.py`\n\n.. _gpc:\n\nGaussian Process Classification (GPC)\n=====================================\n\n.. currentmodule:: sklearn.gaussian_process\n\nThe :class:`GaussianProcessClassifier` implements Gaussian processes (GP) for\nclassification purposes, more specifically for probabilistic classification,\nwhere test predictions take the form of class probabilities.\nGaussianProcessClassifier places a GP prior on a latent function :math:`f`,\nwhich is then squashed through a link function to obtain the probabilistic\nclassification. The latent function :math:`f` is a so-called nuisance function,\nwhose values are not observed and are not relevant by themselves.\nIts purpose is to allow a convenient formulation of the model, and :math:`f`\nis removed (integrated out) during prediction. GaussianProcessClassifier\nimplements the logistic link function, for which the integral cannot be\ncomputed analytically but is easily approximated in the binary case.\n\nIn contrast to the regression setting, the posterior of the latent function\n:math:`f` is not Gaussian even for a GP prior since a Gaussian likelihood is\ninappropriate for discrete class labels. 
Rather, a non-Gaussian likelihood\ncorresponding to the logistic link function (logit) is used.\nGaussianProcessClassifier approximates the non-Gaussian posterior with a\nGaussian based on the Laplace approximation. More details can be found in\nChapter 3 of [RW2006]_.\n\nThe GP prior mean is assumed to be zero. The prior's\ncovariance is specified by passing a :ref:`kernel <gp_kernels>` object. The\nhyperparameters of the kernel are optimized during fitting of\nGaussianProcessClassifier by maximizing the log-marginal-likelihood (LML) based\non the passed ``optimizer``. As the LML may have multiple local optima, the\noptimizer can be started repeatedly by specifying ``n_restarts_optimizer``. The\nfirst run is always conducted starting from the initial hyperparameter values\nof the kernel; subsequent runs are conducted from hyperparameter values\nthat have been chosen randomly from the range of allowed values.\nIf the initial hyperparameters should be kept fixed, `None` can be passed as\noptimizer.\n\n:class:`GaussianProcessClassifier` supports multi-class classification\nby performing either one-versus-rest or one-versus-one based training and\nprediction.  In one-versus-rest, one binary Gaussian process classifier is\nfitted for each class, which is trained to separate this class from the rest.\nIn \"one_vs_one\", one binary Gaussian process classifier is fitted for each pair\nof classes, which is trained to separate these two classes. The predictions of\nthese binary predictors are combined into multi-class predictions. See the\nsection on :ref:`multi-class classification <multiclass>` for more details.\n\nIn the case of Gaussian process classification, \"one_vs_one\" might be\ncomputationally cheaper since it has to solve many problems involving only a\nsubset of the whole training set rather than fewer problems on the whole\ndataset. Since Gaussian process classification scales cubically with the size\nof the dataset, this might be considerably faster. 
However, note that\n\"one_vs_one\" does not support predicting probability estimates but only plain\npredictions. Moreover, note that :class:`GaussianProcessClassifier` does not\n(yet) implement a true multi-class Laplace approximation internally, but\nas discussed above is based on solving several binary classification tasks\ninternally, which are combined using one-versus-rest or one-versus-one.\n\nGPC examples\n============\n\nProbabilistic predictions with GPC\n----------------------------------\n\nThis example illustrates the predicted probability of GPC for an RBF kernel\nwith different choices of the hyperparameters. The first figure shows the\npredicted probability of GPC with arbitrarily chosen hyperparameters and with\nthe hyperparameters corresponding to the maximum log-marginal-likelihood (LML).\n\nWhile the hyperparameters chosen by optimizing LML have a considerably larger\nLML, they perform slightly worse according to the log-loss on test data. The\nfigure shows that this is because they exhibit a steep change of the class\nprobabilities at the class boundaries (which is good) but have predicted\nprobabilities close to 0.5 far away from the class boundaries (which is bad).\nThis undesirable effect is caused by the Laplace approximation used\ninternally by GPC.\n\nThe second figure shows the log-marginal-likelihood for different choices of\nthe kernel's hyperparameters, highlighting the two choices of the\nhyperparameters used in the first figure by black dots.\n\n.. figure:: ..\/auto_examples\/gaussian_process\/images\/sphx_glr_plot_gpc_001.png\n   :target: ..\/auto_examples\/gaussian_process\/plot_gpc.html\n   :align: center\n\n.. figure:: ..\/auto_examples\/gaussian_process\/images\/sphx_glr_plot_gpc_002.png\n   :target: ..\/auto_examples\/gaussian_process\/plot_gpc.html\n   :align: center\n\n\nIllustration of GPC on the XOR dataset\n--------------------------------------\n\n.. 
currentmodule:: sklearn.gaussian_process.kernels\n\nThis example illustrates GPC on XOR data. Compared are a stationary, isotropic\nkernel (:class:`RBF`) and a non-stationary kernel (:class:`DotProduct`). On\nthis particular dataset, the :class:`DotProduct` kernel obtains considerably\nbetter results because the class-boundaries are linear and coincide with the\ncoordinate axes. In practice, however, stationary kernels such as :class:`RBF`\noften obtain better results.\n\n.. figure:: ..\/auto_examples\/gaussian_process\/images\/sphx_glr_plot_gpc_xor_001.png\n   :target: ..\/auto_examples\/gaussian_process\/plot_gpc_xor.html\n   :align: center\n\n.. currentmodule:: sklearn.gaussian_process\n\n\nGaussian process classification (GPC) on iris dataset\n-----------------------------------------------------\n\nThis example illustrates the predicted probability of GPC for an isotropic\nand anisotropic RBF kernel on a two-dimensional version for the iris-dataset.\nThis illustrates the applicability of GPC to non-binary classification.\nThe anisotropic RBF kernel obtains slightly higher log-marginal-likelihood by\nassigning different length-scales to the two feature dimensions.\n\n.. figure:: ..\/auto_examples\/gaussian_process\/images\/sphx_glr_plot_gpc_iris_001.png\n   :target: ..\/auto_examples\/gaussian_process\/plot_gpc_iris.html\n   :align: center\n\n\n.. _gp_kernels:\n\nKernels for Gaussian Processes\n==============================\n.. currentmodule:: sklearn.gaussian_process.kernels\n\nKernels (also called \"covariance functions\" in the context of GPs) are a crucial\ningredient of GPs which determine the shape of prior and posterior of the GP.\nThey encode the assumptions on the function being learned by defining the \"similarity\"\nof two datapoints combined with the assumption that similar datapoints should\nhave similar target values. 
Two categories of kernels can be distinguished:\nstationary kernels depend only on the distance of two datapoints and not on their\nabsolute values :math:`k(x_i, x_j)= k(d(x_i, x_j))` and are thus invariant to\ntranslations in the input space, while non-stationary kernels\ndepend also on the specific values of the datapoints. Stationary kernels can further\nbe subdivided into isotropic and anisotropic kernels, where isotropic kernels are\nalso invariant to rotations in the input space. For more details, we refer to\nChapter 4 of [RW2006]_. For guidance on how to best combine different kernels,\nwe refer to [Duv2014]_.\n\n.. dropdown:: Gaussian Process Kernel API\n\n   The main usage of a :class:`Kernel` is to compute the GP's covariance between\n   datapoints. For this, the method ``__call__`` of the kernel can be called. This\n   method can either be used to compute the \"auto-covariance\" of all pairs of\n   datapoints in a 2d array X, or the \"cross-covariance\" of all combinations\n   of datapoints of a 2d array X with datapoints in a 2d array Y. The following\n   identity holds true for all kernels k (except for the :class:`WhiteKernel`):\n   ``k(X) == K(X, Y=X)``\n\n   If only the diagonal of the auto-covariance is being used, the method ``diag()``\n   of a kernel can be called, which is more computationally efficient than the\n   equivalent call to ``__call__``: ``np.diag(k(X, X)) == k.diag(X)``\n\n   Kernels are parameterized by a vector :math:`\\theta` of hyperparameters. These\n   hyperparameters can for instance control length-scales or periodicity of a\n   kernel (see below). 
   All kernels support computing analytic gradients
   of the kernel's auto-covariance with respect to :math:`\log(\theta)` via setting
   ``eval_gradient=True`` in the ``__call__`` method.
   That is, a ``(len(X), len(X), len(theta))`` array is returned where the entry
   ``[i, j, l]`` contains :math:`\frac{\partial k_\theta(x_i, x_j)}{\partial \log(\theta_l)}`.
   This gradient is used by the Gaussian process (both regressor and classifier)
   in computing the gradient of the log-marginal-likelihood, which in turn is used
   to determine the value of :math:`\theta` that maximizes the log-marginal-likelihood,
   via gradient ascent. For each hyperparameter, the initial value and the
   bounds need to be specified when creating an instance of the kernel. The
   current value of :math:`\theta` can be retrieved and set via the property
   ``theta`` of the kernel object. Moreover, the bounds of the hyperparameters can be
   accessed via the property ``bounds`` of the kernel. Note that both properties
   (``theta`` and ``bounds``) return log-transformed versions of the internally used values,
   since those are typically more amenable to gradient-based optimization.
   The specification of each hyperparameter is stored in the form of an instance of
   :class:`Hyperparameter` in the respective kernel. Note that a kernel using a
   hyperparameter with name "x" must have the attributes ``self.x`` and ``self.x_bounds``.

   The abstract base class for all kernels is :class:`Kernel`. Kernel implements an
   interface similar to that of :class:`~sklearn.base.BaseEstimator`, providing the
   methods ``get_params()``, ``set_params()``, and ``clone()``. This also allows
   setting kernel values via meta-estimators such as
   :class:`~sklearn.pipeline.Pipeline` or
   :class:`~sklearn.model_selection.GridSearchCV`. Note that due to the nested
   structure of kernels (obtained by applying kernel operators, see below), the names of
   kernel parameters might become relatively complicated.
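The gradient and log-scale hyperparameter machinery described above can be exercised directly; a minimal sketch (the values shown hold for an ``RBF`` with ``length_scale=1.0``, since ``log(1.0) == 0``):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

k = RBF(length_scale=1.0)
X = np.random.RandomState(0).rand(4, 2)

# theta is the log-transformed hyperparameter vector, here log(length_scale)
print(k.theta)   # [0.]
# bounds are the log-transformed (lower, upper) limits, one row per hyperparameter
print(k.bounds)

# gradient of the kernel w.r.t. log(theta): shape (len(X), len(X), len(theta))
K, K_gradient = k(X, eval_gradient=True)
print(K.shape, K_gradient.shape)  # (4, 4) (4, 4, 1)
```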
   In general, for a binary
   kernel operator, parameters of the left operand are prefixed with ``k1__`` and
   parameters of the right operand with ``k2__``. An additional convenience method
   is ``clone_with_theta(theta)``, which returns a cloned version of the kernel
   but with the hyperparameters set to ``theta``. An illustrative example:

      >>> from sklearn.gaussian_process.kernels import ConstantKernel, RBF
      >>> kernel = ConstantKernel(constant_value=1.0, constant_value_bounds=(0.0, 10.0)) * RBF(length_scale=0.5, length_scale_bounds=(0.0, 10.0)) + RBF(length_scale=2.0, length_scale_bounds=(0.0, 10.0))
      >>> for hyperparameter in kernel.hyperparameters: print(hyperparameter)
      Hyperparameter(name='k1__k1__constant_value', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
      Hyperparameter(name='k1__k2__length_scale', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
      Hyperparameter(name='k2__length_scale', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
      >>> params = kernel.get_params()
      >>> for key in sorted(params): print("%s : %s" % (key, params[key]))
      k1 : 1**2 * RBF(length_scale=0.5)
      k1__k1 : 1**2
      k1__k1__constant_value : 1.0
      k1__k1__constant_value_bounds : (0.0, 10.0)
      k1__k2 : RBF(length_scale=0.5)
      k1__k2__length_scale : 0.5
      k1__k2__length_scale_bounds : (0.0, 10.0)
      k2 : RBF(length_scale=2)
      k2__length_scale : 2.0
      k2__length_scale_bounds : (0.0, 10.0)
      >>> print(kernel.theta)  # Note: log-transformed
      [ 0.         -0.69314718  0.69314718]
      >>> print(kernel.bounds)  # Note: log-transformed
      [[      -inf 2.30258509]
       [      -inf 2.30258509]
       [      -inf 2.30258509]]

   All Gaussian process kernels are interoperable with :mod:`sklearn.metrics.pairwise`
   and vice versa: instances of subclasses of :class:`Kernel` can be passed as
   ``metric`` to ``pairwise_kernels`` from :mod:`sklearn.metrics.pairwise`. Moreover,
   kernel functions from pairwise can be used as GP kernels by using the wrapper
   class :class:`PairwiseKernel`. The only caveat is that the gradient of
   the hyperparameters is not analytic but numeric, and all those kernels support
   only isotropic distances. The parameter ``gamma`` is considered to be a
   hyperparameter and may be optimized. The other kernel parameters are set
   directly at initialization and are kept fixed.

Basic kernels
-------------

The :class:`ConstantKernel` kernel can be used as part of a :class:`Product`
kernel, where it scales the magnitude of the other factor (kernel), or as part
of a :class:`Sum` kernel, where it modifies the mean of the Gaussian process.
It depends on a parameter :math:`constant\_value`. It is defined as:

.. math::
   k(x_i, x_j) = constant\_value \;\forall\; x_i, x_j

The main use-case of the :class:`WhiteKernel` kernel is as part of a
sum-kernel, where it explains the noise component of the signal. Tuning its
parameter :math:`noise\_level` corresponds to estimating the noise level.
It is defined as:

.. math::
    k(x_i, x_j) = noise\_level \text{ if } x_i == x_j \text{ else } 0


Kernel operators
----------------

Kernel operators take one or two base kernels and combine them into a new
kernel.
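Before looking at the individual operators, here is a minimal sketch of a common composition built from the basic kernels above, the signal-plus-noise pattern; the data is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, (30, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 30)  # noisy sine signal

# ConstantKernel scales the RBF (a Product); WhiteKernel adds a noise term (a Sum)
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(X, y)

# after fitting, the optimized hyperparameters live in gpr.kernel_
print(gpr.kernel_)
```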
The :class:`Sum` kernel takes two kernels :math:`k_1` and :math:`k_2`
and combines them via :math:`k_{sum}(X, Y) = k_1(X, Y) + k_2(X, Y)`.
The :class:`Product` kernel takes two kernels :math:`k_1` and :math:`k_2`
and combines them via :math:`k_{product}(X, Y) = k_1(X, Y) * k_2(X, Y)`.
The :class:`Exponentiation` kernel takes one base kernel and a scalar parameter
:math:`p` and combines them via
:math:`k_{exp}(X, Y) = k(X, Y)^p`.
Note that the magic methods ``__add__``, ``__mul__`` and ``__pow__`` are
overridden on the Kernel objects, so one can use e.g. ``RBF() + RBF()`` as
a shortcut for ``Sum(RBF(), RBF())``.

Radial basis function (RBF) kernel
----------------------------------

The :class:`RBF` kernel is a stationary kernel. It is also known as the "squared
exponential" kernel. It is parameterized by a length-scale parameter :math:`l>0`, which
can either be a scalar (isotropic variant of the kernel) or a vector with the same
number of dimensions as the inputs :math:`x` (anisotropic variant of the kernel).
The kernel is given by:

.. math::
   k(x_i, x_j) = \exp\left(- \frac{d(x_i, x_j)^2}{2l^2} \right)

where :math:`d(\cdot, \cdot)` is the Euclidean distance.
This kernel is infinitely differentiable, which implies that GPs with this
kernel as covariance function have mean square derivatives of all orders, and are thus
very smooth. The prior and posterior of a GP resulting from an RBF kernel are shown in
the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_001.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center


Matérn kernel
-------------

The :class:`Matern` kernel is a stationary kernel and a generalization of the
:class:`RBF` kernel. It has an additional parameter :math:`\nu` which controls
the smoothness of the resulting function.
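Because the Matérn family generalizes :class:`RBF` (recovering it in the limit :math:`\nu\rightarrow\infty`), the relationship can be checked numerically; a minimal sketch:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Matern

X = np.random.RandomState(0).rand(6, 2)

# nu=inf recovers the squared-exponential (RBF) kernel exactly
K_rbf = RBF(length_scale=1.0)(X)
K_matern = Matern(length_scale=1.0, nu=np.inf)(X)
assert np.allclose(K_rbf, K_matern)

# smaller nu yields rougher functions; nu=1.5 gives a different covariance matrix
K_15 = Matern(length_scale=1.0, nu=1.5)(X)
print(np.abs(K_rbf - K_15).max() > 0)  # True
```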
It is parameterized by a length-scale parameter :math:`l>0`, which can either be a
scalar (isotropic variant of the kernel) or a vector with the same number of
dimensions as the inputs :math:`x` (anisotropic variant of the kernel).

.. dropdown:: Mathematical implementation of Matérn kernel

   The kernel is given by:

   .. math::

      k(x_i, x_j) = \frac{1}{\Gamma(\nu)2^{\nu-1}}\Bigg(\frac{\sqrt{2\nu}}{l} d(x_i, x_j)\Bigg)^\nu K_\nu\Bigg(\frac{\sqrt{2\nu}}{l} d(x_i, x_j)\Bigg),

   where :math:`d(\cdot,\cdot)` is the Euclidean distance, :math:`K_\nu(\cdot)` is a
   modified Bessel function and :math:`\Gamma(\cdot)` is the gamma function.
   As :math:`\nu\rightarrow\infty`, the Matérn kernel converges to the RBF kernel.
   When :math:`\nu = 1/2`, the Matérn kernel becomes identical to the absolute
   exponential kernel, i.e.,

   .. math::
      k(x_i, x_j) = \exp \Bigg(- \frac{1}{l} d(x_i, x_j) \Bigg) \quad \quad \nu = \tfrac{1}{2}

   In particular, :math:`\nu = 3/2`:

   .. math::
      k(x_i, x_j) = \Bigg(1 + \frac{\sqrt{3}}{l} d(x_i, x_j)\Bigg) \exp \Bigg(-\frac{\sqrt{3}}{l} d(x_i, x_j) \Bigg) \quad \quad \nu = \tfrac{3}{2}

   and :math:`\nu = 5/2`:

   .. math::
      k(x_i, x_j) = \Bigg(1 + \frac{\sqrt{5}}{l} d(x_i, x_j) + \frac{5}{3l^2} d(x_i, x_j)^2 \Bigg) \exp \Bigg(-\frac{\sqrt{5}}{l} d(x_i, x_j) \Bigg) \quad \quad \nu = \tfrac{5}{2}

   are popular choices for learning functions that are not infinitely
   differentiable (as assumed by the RBF kernel) but at least once (:math:`\nu =
   3/2`) or twice differentiable (:math:`\nu = 5/2`).

   The flexibility of controlling the smoothness of the learned function via :math:`\nu`
   allows adapting to the properties of the true underlying functional relation.

The prior and posterior of a GP resulting from a Matérn kernel are shown in
the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_005.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

See [RW2006]_, p. 84 for further details regarding the
different variants of the Matérn kernel.

Rational quadratic kernel
-------------------------

The :class:`RationalQuadratic` kernel can be seen as a scale mixture (an infinite sum)
of :class:`RBF` kernels with different characteristic length-scales. It is parameterized
by a length-scale parameter :math:`l>0` and a scale mixture parameter :math:`\alpha>0`.
Only the isotropic variant where :math:`l` is a scalar is supported at the moment.
The kernel is given by:

.. math::
   k(x_i, x_j) = \left(1 + \frac{d(x_i, x_j)^2}{2\alpha l^2}\right)^{-\alpha}

The prior and posterior of a GP resulting from a :class:`RationalQuadratic` kernel are shown in
the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_002.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

Exp-Sine-Squared kernel
-----------------------

The :class:`ExpSineSquared` kernel allows modeling periodic functions.
It is parameterized by a length-scale parameter :math:`l>0` and a periodicity parameter
:math:`p>0`. Only the isotropic variant where :math:`l` is a scalar is supported at the moment.
The kernel is given by:

.. math::
   k(x_i, x_j) = \exp\left(- \frac{2\sin^2(\pi d(x_i, x_j) / p)}{l^2} \right)

The prior and posterior of a GP resulting from an ExpSineSquared kernel are shown in
the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_003.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

Dot-Product kernel
------------------

The :class:`DotProduct` kernel is non-stationary and can be obtained from linear regression
by putting :math:`N(0, 1)` priors on the coefficients of :math:`x_d (d = 1, \ldots, D)` and
a prior of :math:`N(0, \sigma_0^2)` on the bias. The :class:`DotProduct` kernel is invariant to a rotation
of the coordinates about the origin, but not to translations.
It is parameterized by a parameter :math:`\sigma_0^2`. For :math:`\sigma_0^2 = 0`, the kernel
is called the homogeneous linear kernel; otherwise it is inhomogeneous. The kernel is given by

.. math::
   k(x_i, x_j) = \sigma_0^2 + x_i \cdot x_j

The :class:`DotProduct` kernel is commonly combined with exponentiation. An example with exponent 2 is
shown in the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_004.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

References
----------

.. [RW2006] `Carl E. Rasmussen and Christopher K.I. Williams,
   "Gaussian Processes for Machine Learning",
   MIT Press 2006 <https://www.gaussianprocess.org/gpml/chapters/RW.pdf>`_

.. [Duv2014] `David Duvenaud, "The Kernel Cookbook: Advice on Covariance functions", 2014
   <https://www.cs.toronto.edu/~duvenaud/cookbook/>`_

.. currentmodule:: sklearn.gaussian_process
 sigma 0 2   For  math   sigma 0 2   0   the kernel is called the homogeneous linear kernel  otherwise it is inhomogeneous  The kernel is given by     math      k x i  x j     sigma 0   2   x i  cdot x j  The  class  DotProduct  kernel is commonly combined with exponentiation  An example with exponent 2 is shown in the following figure      figure      auto examples gaussian process images sphx glr plot gpr prior posterior 004 png     target     auto examples gaussian process plot gpr prior posterior html     align  center  References                 RW2006   Carl E  Rasmussen and Christopher K I  Williams      Gaussian Processes for Machine Learning      MIT Press 2006  https   www gaussianprocess org gpml chapters RW pdf         Duv2014   David Duvenaud   The Kernel Cookbook  Advice on Covariance functions   2014     https   www cs toronto edu  duvenaud cookbook         currentmodule   sklearn gaussian process"}
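The kernel-composition rules above (operator overloading of ``__add__``, ``__mul__`` and ``__pow__``) can be exercised directly; the following is a minimal sketch, where the length scales and the sine training data are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, Sum, Exponentiation

# __add__ is overridden: RBF() + RBF() is shorthand for Sum(RBF(), RBF())
k_sum = RBF(length_scale=1.0) + RBF(length_scale=10.0)
assert isinstance(k_sum, Sum)

# __pow__ builds an Exponentiation kernel: k_exp(X, Y) = k(X, Y)^p
k_exp = RBF(length_scale=1.0) ** 2
assert isinstance(k_exp, Exponentiation)

# Matern with nu=1.5 targets functions that are once differentiable
kernel = Matern(length_scale=1.0, nu=1.5)

rng = np.random.RandomState(0)
X = rng.uniform(0.0, 5.0, size=(20, 1))
y = np.sin(X).ravel()

gpr = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(X, y)
mean, std = gpr.predict(np.array([[2.5]]), return_std=True)
```

A composite kernel built this way exposes the combined hyperparameters, so it can be passed to ``GaussianProcessRegressor`` and optimized during ``fit`` like any base kernel.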
{"questions":"scikit-learn sklearn and Quadratic Linear and Quadratic Discriminant Analysis ldaqda Linear Discriminant Analysis","answers":".. _lda_qda:\n\n==========================================\nLinear and Quadratic Discriminant Analysis\n==========================================\n\n.. currentmodule:: sklearn\n\nLinear Discriminant Analysis\n(:class:`~discriminant_analysis.LinearDiscriminantAnalysis`) and Quadratic\nDiscriminant Analysis\n(:class:`~discriminant_analysis.QuadraticDiscriminantAnalysis`) are two classic\nclassifiers, with, as their names suggest, a linear and a quadratic decision\nsurface, respectively.\n\nThese classifiers are attractive because they have closed-form solutions that\ncan be easily computed, are inherently multiclass, have proven to work well in\npractice, and have no hyperparameters to tune.\n\n.. |ldaqda| image:: ..\/auto_examples\/classification\/images\/sphx_glr_plot_lda_qda_001.png\n        :target: ..\/auto_examples\/classification\/plot_lda_qda.html\n        :scale: 80\n\n.. centered:: |ldaqda|\n\nThe plot shows decision boundaries for Linear Discriminant Analysis and\nQuadratic Discriminant Analysis. The bottom row demonstrates that Linear\nDiscriminant Analysis can only learn linear boundaries, while Quadratic\nDiscriminant Analysis can learn quadratic boundaries and is therefore more\nflexible.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_classification_plot_lda_qda.py`: Comparison of LDA and\n  QDA on synthetic data.\n\nDimensionality reduction using Linear Discriminant Analysis\n===========================================================\n\n:class:`~discriminant_analysis.LinearDiscriminantAnalysis` can be used to\nperform supervised dimensionality reduction, by projecting the input data to a\nlinear subspace consisting of the directions which maximize the separation\nbetween classes (in a precise sense discussed in the mathematics section\nbelow). 
The dimension of the output is necessarily less than the number of\nclasses, so this is in general a rather strong dimensionality reduction, and\nonly makes sense in a multiclass setting.\n\nThis is implemented in the `transform` method. The desired dimensionality can\nbe set using the ``n_components`` parameter. This parameter has no influence\non the `fit` and `predict` methods.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_lda.py`: Comparison of LDA and\n  PCA for dimensionality reduction of the Iris dataset\n\n.. _lda_qda_math:\n\nMathematical formulation of the LDA and QDA classifiers\n=======================================================\n\nBoth LDA and QDA can be derived from simple probabilistic models which model\nthe class conditional distribution of the data :math:`P(X|y=k)` for each class\n:math:`k`. Predictions can then be obtained by using Bayes' rule, for each\ntraining sample :math:`x \\in \\mathcal{R}^d`:\n\n.. math::\n    P(y=k | x) = \\frac{P(x | y=k) P(y=k)}{P(x)} = \\frac{P(x | y=k) P(y = k)}{ \\sum_{l} P(x | y=l) \\cdot P(y=l)}\n\nand we select the class :math:`k` which maximizes this posterior probability.\n\nMore specifically, for linear and quadratic discriminant analysis,\n:math:`P(x|y)` is modeled as a multivariate Gaussian distribution with\ndensity:\n\n.. math:: P(x | y=k) = \\frac{1}{(2\\pi)^{d\/2} |\\Sigma_k|^{1\/2}}\\exp\\left(-\\frac{1}{2} (x-\\mu_k)^t \\Sigma_k^{-1} (x-\\mu_k)\\right)\n\nwhere :math:`d` is the number of features.\n\nQDA\n---\n\nAccording to the model above, the log of the posterior is:\n\n.. math::\n\n    \\log P(y=k | x) &= \\log P(x | y=k) + \\log P(y = k) + Cst \\\\\n    &= -\\frac{1}{2} \\log |\\Sigma_k| -\\frac{1}{2} (x-\\mu_k)^t \\Sigma_k^{-1} (x-\\mu_k) + \\log P(y = k) + Cst,\n\nwhere the constant term :math:`Cst` corresponds to the denominator\n:math:`P(x)`, in addition to other constant terms from the Gaussian. 
The\npredicted class is the one that maximises this log-posterior.\n\n.. note:: **Relation with Gaussian Naive Bayes**\n\n\t  If in the QDA model one assumes that the covariance matrices are diagonal,\n\t  then the inputs are assumed to be conditionally independent in each class,\n\t  and the resulting classifier is equivalent to the Gaussian Naive Bayes\n\t  classifier :class:`naive_bayes.GaussianNB`.\n\nLDA\n---\n\nLDA is a special case of QDA, where the Gaussians for each class are assumed\nto share the same covariance matrix: :math:`\\Sigma_k = \\Sigma` for all\n:math:`k`. This reduces the log posterior to:\n\n.. math:: \\log P(y=k | x) = -\\frac{1}{2} (x-\\mu_k)^t \\Sigma^{-1} (x-\\mu_k) + \\log P(y = k) + Cst.\n\nThe term :math:`(x-\\mu_k)^t \\Sigma^{-1} (x-\\mu_k)` corresponds to the\n`Mahalanobis Distance <https:\/\/en.wikipedia.org\/wiki\/Mahalanobis_distance>`_\nbetween the sample :math:`x` and the mean :math:`\\mu_k`. The Mahalanobis\ndistance tells how close :math:`x` is from :math:`\\mu_k`, while also\naccounting for the variance of each feature. We can thus interpret LDA as\nassigning :math:`x` to the class whose mean is the closest in terms of\nMahalanobis distance, while also accounting for the class prior\nprobabilities.\n\nThe log-posterior of LDA can also be written [3]_ as:\n\n.. math::\n\n    \\log P(y=k | x) = \\omega_k^t x + \\omega_{k0} + Cst.\n\nwhere :math:`\\omega_k = \\Sigma^{-1} \\mu_k` and :math:`\\omega_{k0} =\n-\\frac{1}{2} \\mu_k^t\\Sigma^{-1}\\mu_k + \\log P (y = k)`. 
These quantities\ncorrespond to the `coef_` and `intercept_` attributes, respectively.\n\nFrom the above formula, it is clear that LDA has a linear decision surface.\nIn the case of QDA, there are no assumptions on the covariance matrices\n:math:`\\Sigma_k` of the Gaussians, leading to quadratic decision surfaces.\nSee [1]_ for more details.\n\nMathematical formulation of LDA dimensionality reduction\n========================================================\n\nFirst note that the K means :math:`\\mu_k` are vectors in\n:math:`\\mathcal{R}^d`, and they lie in an affine subspace :math:`H` of\ndimension at most :math:`K - 1` (2 points lie on a line, 3 points lie on a\nplane, etc.).\n\nAs mentioned above, we can interpret LDA as assigning :math:`x` to the class\nwhose mean :math:`\\mu_k` is the closest in terms of Mahalanobis distance,\nwhile also accounting for the class prior probabilities. Alternatively, LDA\nis equivalent to first *sphering* the data so that the covariance matrix is\nthe identity, and then assigning :math:`x` to the closest mean in terms of\nEuclidean distance (still accounting for the class priors).\n\nComputing Euclidean distances in this d-dimensional space is equivalent to\nfirst projecting the data points into :math:`H`, and computing the distances\nthere (since the other dimensions will contribute equally to each class in\nterms of distance). In other words, if :math:`x` is closest to :math:`\\mu_k`\nin the original space, it will also be the case in :math:`H`.\nThis shows that, implicit in the LDA\nclassifier, there is a dimensionality reduction by linear projection onto a\n:math:`K-1` dimensional space.\n\nWe can reduce the dimension even more, to a chosen :math:`L`, by projecting\nonto the linear subspace :math:`H_L` which maximizes the variance of the\n:math:`\\mu^*_k` after projection (in effect, we are doing a form of PCA for the\ntransformed class means :math:`\\mu^*_k`). 
This :math:`L` corresponds to the\n``n_components`` parameter used in the\n:func:`~discriminant_analysis.LinearDiscriminantAnalysis.transform` method. See\n[1]_ for more details.\n\nShrinkage and Covariance Estimator\n==================================\n\nShrinkage is a form of regularization used to improve the estimation of\ncovariance matrices in situations where the number of training samples is\nsmall compared to the number of features.\nIn this scenario, the empirical sample covariance is a poor\nestimator, and shrinkage helps improve the generalization performance of\nthe classifier.\nShrinkage LDA can be used by setting the ``shrinkage`` parameter of\nthe :class:`~discriminant_analysis.LinearDiscriminantAnalysis` class to 'auto'.\nThis automatically determines the optimal shrinkage parameter in an analytic\nway following the lemma introduced by Ledoit and Wolf [2]_. Note that\ncurrently shrinkage only works when setting the ``solver`` parameter to 'lsqr'\nor 'eigen'.\n\nThe ``shrinkage`` parameter can also be manually set between 0 and 1. In\nparticular, a value of 0 corresponds to no shrinkage (which means the empirical\ncovariance matrix will be used) and a value of 1 corresponds to complete\nshrinkage (which means that the diagonal matrix of variances will be used as\nan estimate for the covariance matrix). Setting this parameter to a value\nbetween these two extrema will estimate a shrunk version of the covariance\nmatrix.\n\nThe shrunk Ledoit and Wolf estimator of covariance may not always be the\nbest choice. For example, if the data\nare normally distributed, the\nOracle Approximating Shrinkage estimator :class:`sklearn.covariance.OAS`\nyields a smaller Mean Squared Error than the one given by Ledoit and Wolf's\nformula used with shrinkage=\"auto\". In LDA, the data are assumed to be Gaussian\nconditionally on the class. 
If these assumptions hold, using LDA with\nthe OAS estimator of covariance will yield a better classification\naccuracy than if Ledoit and Wolf or the empirical covariance estimator is used.\n\nThe covariance estimator can be chosen using the ``covariance_estimator``\nparameter of the :class:`discriminant_analysis.LinearDiscriminantAnalysis`\nclass. A covariance estimator should have a :term:`fit` method and a\n``covariance_`` attribute, like all covariance estimators in the\n:mod:`sklearn.covariance` module.\n\n\n.. |shrinkage| image:: ..\/auto_examples\/classification\/images\/sphx_glr_plot_lda_001.png\n        :target: ..\/auto_examples\/classification\/plot_lda.html\n        :scale: 75\n\n.. centered:: |shrinkage|\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_classification_plot_lda.py`: Comparison of LDA classifiers\n  with Empirical, Ledoit Wolf and OAS covariance estimator.\n\nEstimation algorithms\n=====================\n\nUsing LDA and QDA requires computing the log-posterior which depends on the\nclass priors :math:`P(y=k)`, the class means :math:`\\mu_k`, and the\ncovariance matrices.\n\nThe 'svd' solver is the default solver used for\n:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis`, and it is\nthe only available solver for\n:class:`~sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`.\nIt can perform both classification and transform (for LDA).\nAs it does not rely on the calculation of the covariance matrix, the 'svd'\nsolver may be preferable in situations where the number of features is large.\nThe 'svd' solver cannot be used with shrinkage.\nFor QDA, the use of the SVD solver relies on the fact that the covariance\nmatrix :math:`\\Sigma_k` is, by definition, equal to :math:`\\frac{1}{n - 1}\nX_k^tX_k = \\frac{1}{n - 1} V S^2 V^t` where :math:`V` comes from the SVD of the (centered)\nmatrix: :math:`X_k = U S V^t`. 
It turns out that we can compute the\nlog-posterior above without having to explicitly compute :math:`\\Sigma`:\ncomputing :math:`S` and :math:`V` via the SVD of :math:`X` is enough. For\nLDA, two SVDs are computed: the SVD of the centered input matrix :math:`X`\nand the SVD of the class-wise mean vectors.\n\nThe 'lsqr' solver is an efficient algorithm that only works for\nclassification. It needs to explicitly compute the covariance matrix\n:math:`\\Sigma`, and supports shrinkage and custom covariance estimators.\nThis solver computes the coefficients\n:math:`\\omega_k = \\Sigma^{-1}\\mu_k` by solving for :math:`\\Sigma \\omega =\n\\mu_k`, thus avoiding the explicit computation of the inverse\n:math:`\\Sigma^{-1}`.\n\nThe 'eigen' solver is based on the optimization of the between class scatter to\nwithin class scatter ratio. It can be used for both classification and\ntransform, and it supports shrinkage. However, the 'eigen' solver needs to\ncompute the covariance matrix, so it might not be suitable for situations with\na high number of features.\n\n.. rubric:: References\n\n.. [1] \"The Elements of Statistical Learning\", Hastie T., Tibshirani R.,\n    Friedman J., Section 4.3, p.106-119, 2008.\n\n.. [2] Ledoit O, Wolf M. Honey, I Shrunk the Sample Covariance Matrix.\n    The Journal of Portfolio Management 30(4), 110-119, 2004.\n\n.. [3] R. O. Duda, P. E. Hart, D. G. Stork. 
Pattern Classification\n    (Second Edition), section 2.6.2.","site":"scikit-learn"}
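To make the shrinkage and ``n_components`` behaviour described above concrete, here is a minimal sketch; the Iris dataset is an illustrative assumption, not taken from the text:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# shrinkage='auto' applies the Ledoit-Wolf lemma; it requires the
# 'lsqr' or 'eigen' solver (the default 'svd' solver has no shrinkage)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
train_acc = clf.score(X, y)

# Supervised dimensionality reduction: at most n_classes - 1 = 2 components
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
X_projected = lda.transform(X)
```

Note that ``n_components`` only affects ``transform``; ``fit`` and ``predict`` behave identically whatever its value.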
{"questions":"scikit-learn semisupervised Semi supervised learning sklearn semisupervised https en wikipedia org wiki Semi supervisedlearning is a situation","answers":".. _semi_supervised:\n\n===================================================\nSemi-supervised learning\n===================================================\n\n.. currentmodule:: sklearn.semi_supervised\n\n`Semi-supervised learning\n<https:\/\/en.wikipedia.org\/wiki\/Semi-supervised_learning>`_ is a situation\nin which in your training data some of the samples are not labeled. The\nsemi-supervised estimators in :mod:`sklearn.semi_supervised` are able to\nmake use of this additional unlabeled data to better capture the shape of\nthe underlying data distribution and generalize better to new samples.\nThese algorithms can perform well when we have a very small amount of\nlabeled points and a large amount of unlabeled points.\n\n.. topic:: Unlabeled entries in `y`\n\n   It is important to assign an identifier to unlabeled points along with the\n   labeled data when training the model with the ``fit`` method. The\n   identifier that this implementation uses is the integer value :math:`-1`.\n   Note that for string labels, the dtype of `y` should be object so that it\n   can contain both strings and integers.\n\n.. note::\n\n   Semi-supervised algorithms need to make assumptions about the distribution\n   of the dataset in order to achieve performance gains. See `here\n   <https:\/\/en.wikipedia.org\/wiki\/Semi-supervised_learning#Assumptions>`_\n   for more details.\n\n.. _self_training:\n\nSelf Training\n=============\n\nThis self-training implementation is based on Yarowsky's [1]_ algorithm. Using\nthis algorithm, a given supervised classifier can function as a semi-supervised\nclassifier, allowing it to learn from unlabeled data.\n\n:class:`SelfTrainingClassifier` can be called with any classifier that\nimplements `predict_proba`, passed as the parameter `base_classifier`. 
In\neach iteration, the `base_classifier` predicts labels for the unlabeled\nsamples and adds a subset of these labels to the labeled dataset.\n\nThe choice of this subset is determined by the selection criterion. This\nselection can be done using a `threshold` on the prediction probabilities, or\nby choosing the `k_best` samples according to the prediction probabilities.\n\nThe labels used for the final fit as well as the iteration in which each sample\nwas labeled are available as attributes. The optional `max_iter` parameter\nspecifies how many times the loop is executed at most.\n\nThe `max_iter` parameter may be set to `None`, causing the algorithm to iterate\nuntil all samples have labels or no new samples are selected in that iteration.\n\n.. note::\n\n   When using the self-training classifier, the\n   :ref:`calibration <calibration>` of the classifier is important.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_semi_supervised_plot_self_training_varying_threshold.py`\n* :ref:`sphx_glr_auto_examples_semi_supervised_plot_semi_supervised_versus_svm_iris.py`\n\n.. rubric:: References\n\n.. [1] :doi:`\"Unsupervised word sense disambiguation rivaling supervised methods\"\n    <10.3115\/981658.981684>`\n    David Yarowsky, Proceedings of the 33rd annual meeting on Association for\n    Computational Linguistics (ACL '95). Association for Computational Linguistics,\n    Stroudsburg, PA, USA, 189-196.\n\n.. _label_propagation:\n\nLabel Propagation\n=================\n\nLabel propagation denotes a few variations of semi-supervised graph\ninference algorithms.\n\nA few features available in this model:\n  * Used for classification tasks\n  * Kernel methods to project data into alternate dimensional spaces\n\n`scikit-learn` provides two label propagation models:\n:class:`LabelPropagation` and :class:`LabelSpreading`. Both work by\nconstructing a similarity graph over all items in the input dataset.\n\n.. 
figure:: ..\/auto_examples\/semi_supervised\/images\/sphx_glr_plot_label_propagation_structure_001.png\n    :target: ..\/auto_examples\/semi_supervised\/plot_label_propagation_structure.html\n    :align: center\n    :scale: 60%\n\n    **An illustration of label-propagation:** *the structure of unlabeled\n    observations is consistent with the class structure, and thus the\n    class label can be propagated to the unlabeled observations of the\n    training set.*\n\n:class:`LabelPropagation` and :class:`LabelSpreading`\ndiffer in the modifications made to the similarity matrix of the graph and in\nthe clamping effect on the label distributions.\nClamping allows the algorithm to change the weight of the ground-truth labeled\ndata to some degree. The :class:`LabelPropagation` algorithm performs hard\nclamping of input labels, which means :math:`\\alpha=0`. This clamping factor\ncan be relaxed, to say :math:`\\alpha=0.2`, which means that we will always\nretain 80 percent of our original label distribution, but the algorithm gets to\nchange its confidence in the distribution within 20 percent.\n\n:class:`LabelPropagation` uses the raw similarity matrix constructed from\nthe data with no modifications. In contrast, :class:`LabelSpreading`\nminimizes a loss function that has regularization properties; as such, it\nis often more robust to noise. The algorithm iterates on a modified\nversion of the original graph and normalizes the edge weights by\ncomputing the normalized graph Laplacian matrix. This procedure is also\nused in :ref:`spectral_clustering`.\n\nLabel propagation models have two built-in kernel methods. The choice of kernel\naffects both the scalability and performance of the algorithms. The following\nare available:\n\n* rbf (:math:`\\exp(-\\gamma |x-y|^2), \\gamma > 0`). :math:`\\gamma` is\n  specified by keyword gamma.\n\n* knn (:math:`1[x' \\in kNN(x)]`). 
:math:`k` is specified by keyword\n  n_neighbors.\n\nThe RBF kernel will produce a fully connected graph which is represented in memory\nby a dense matrix. This matrix may be very large and, combined with the cost of\nperforming a full matrix multiplication for each iteration of the\nalgorithm, can lead to prohibitively long running times. On the other hand,\nthe KNN kernel will produce a much more memory-friendly sparse matrix\nwhich can drastically reduce running times.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_semi_supervised_plot_semi_supervised_versus_svm_iris.py`\n* :ref:`sphx_glr_auto_examples_semi_supervised_plot_label_propagation_structure.py`\n* :ref:`sphx_glr_auto_examples_semi_supervised_plot_label_propagation_digits.py`\n* :ref:`sphx_glr_auto_examples_semi_supervised_plot_label_propagation_digits_active_learning.py`\n\n.. rubric:: References\n\n[2] Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux. In Semi-Supervised\nLearning (2006), pp. 193-216\n\n[3] Olivier Delalleau, Yoshua Bengio, Nicolas Le Roux. Efficient\nNon-Parametric Function Induction in Semi-Supervised Learning. 
AISTAT 2005\nhttps:\/\/www.gatsby.ucl.ac.uk\/aistats\/fullpapers\/204.pdf","site":"scikit-learn"}
{"questions":"scikit-learn Decomposing signals in components matrix factorization problems sklearn decomposition decompositions","answers":".. _decompositions:\n\n\n=================================================================\nDecomposing signals in components (matrix factorization problems)\n=================================================================\n\n.. currentmodule:: sklearn.decomposition\n\n\n.. _PCA:\n\n\nPrincipal component analysis (PCA)\n==================================\n\nExact PCA and probabilistic interpretation\n------------------------------------------\n\nPCA is used to decompose a multivariate dataset into a set of successive\northogonal components that explain a maximum amount of the variance. In\nscikit-learn, :class:`PCA` is implemented as a *transformer* object\nthat learns :math:`n` components in its ``fit`` method, and can be used on new\ndata to project it onto these components.\n\nPCA centers but does not scale the input data for each feature before\napplying the SVD. The optional parameter ``whiten=True`` makes it\npossible to project the data onto the singular space while scaling each\ncomponent to unit variance. This is often useful if the models downstream make\nstrong assumptions on the isotropy of the signal: this is for example the case\nfor Support Vector Machines with the RBF kernel and the K-Means clustering\nalgorithm.\n\nBelow is an example of the iris dataset, which is composed of 4\nfeatures, projected on the 2 dimensions that explain most variance:\n\n.. figure:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_pca_vs_lda_001.png\n    :target: ..\/auto_examples\/decomposition\/plot_pca_vs_lda.html\n    :align: center\n    :scale: 75%\n\n\nThe :class:`PCA` object also provides a\nprobabilistic interpretation of the PCA that can give a likelihood of\ndata based on the amount of variance it explains. As such it implements a\n:term:`score` method that can be used in cross-validation:\n\n.. 
figure:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_pca_vs_fa_model_selection_001.png\n    :target: ..\/auto_examples\/decomposition\/plot_pca_vs_fa_model_selection.html\n    :align: center\n    :scale: 75%\n\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_iris.py`\n* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_lda.py`\n* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_fa_model_selection.py`\n\n\n.. _IncrementalPCA:\n\nIncremental PCA\n---------------\n\nThe :class:`PCA` object is very useful, but has certain limitations for\nlarge datasets. The biggest limitation is that :class:`PCA` only supports\nbatch processing, which means all of the data to be processed must fit in main\nmemory. The :class:`IncrementalPCA` object uses a different form of\nprocessing and allows for partial computations which almost\nexactly match the results of :class:`PCA` while processing the data in a\nminibatch fashion. :class:`IncrementalPCA` makes it possible to implement\nout-of-core Principal Component Analysis either by:\n\n* Using its ``partial_fit`` method on chunks of data fetched sequentially\n  from the local hard drive or a network database.\n\n* Calling its ``fit`` method on a memory mapped file using\n  ``numpy.memmap``.\n\n:class:`IncrementalPCA` only stores estimates of component and noise variances,\nin order to update ``explained_variance_ratio_`` incrementally. This is why\nmemory usage depends on the number of samples per batch, rather than the\nnumber of samples to be processed in the dataset.\n\nAs in :class:`PCA`, :class:`IncrementalPCA` centers but does not scale the\ninput data for each feature before applying the SVD.\n\n.. figure:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_incremental_pca_001.png\n    :target: ..\/auto_examples\/decomposition\/plot_incremental_pca.html\n    :align: center\n    :scale: 75%\n\n.. 
figure:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_incremental_pca_002.png\n    :target: ..\/auto_examples\/decomposition\/plot_incremental_pca.html\n    :align: center\n    :scale: 75%\n\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_decomposition_plot_incremental_pca.py`\n\n\n.. _RandomizedPCA:\n\nPCA using randomized SVD\n------------------------\n\nIt is often interesting to project data to a lower-dimensional\nspace that preserves most of the variance, by dropping the singular vectors\nof components associated with lower singular values.\n\nFor instance, if we work with 64x64 pixel gray-level pictures\nfor face recognition,\nthe dimensionality of the data is 4096 and it is slow to train an\nRBF support vector machine on such wide data. Furthermore we know that\nthe intrinsic dimensionality of the data is much lower than 4096 since all\npictures of human faces look somewhat alike.\nThe samples lie on a manifold of much lower\ndimension (say around 200 for instance). The PCA algorithm can be used\nto linearly transform the data while both reducing the dimensionality\nand preserving most of the explained variance.\n\nThe class :class:`PCA` used with the optional parameter\n``svd_solver='randomized'`` is very useful in that case: since we are going\nto drop most of the singular vectors it is much more efficient to limit the\ncomputation to an approximated estimate of the singular vectors we will keep\nto actually perform the transform.\n\nFor instance, the following shows 16 sample portraits (centered around\n0.0) from the Olivetti dataset. On the right hand side are the first 16\nsingular vectors reshaped as portraits. Since we only require the top\n16 singular vectors of a dataset with size :math:`n_{samples} = 400`\nand :math:`n_{features} = 64 \\times 64 = 4096`, the computation time is\nless than 1s:\n\n.. 
|orig_img| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_001.png\n   :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n   :scale: 60%\n\n.. |pca_img| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_002.png\n   :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n   :scale: 60%\n\n.. centered:: |orig_img| |pca_img|\n\nIf we note :math:`n_{\\max} = \\max(n_{\\mathrm{samples}}, n_{\\mathrm{features}})` and\n:math:`n_{\\min} = \\min(n_{\\mathrm{samples}}, n_{\\mathrm{features}})`, the time complexity\nof the randomized :class:`PCA` is :math:`O(n_{\\max}^2 \\cdot n_{\\mathrm{components}})`\ninstead of :math:`O(n_{\\max}^2 \\cdot n_{\\min})` for the exact method\nimplemented in :class:`PCA`.\n\nThe memory footprint of randomized :class:`PCA` is also proportional to\n:math:`2 \\cdot n_{\\max} \\cdot n_{\\mathrm{components}}` instead of :math:`n_{\\max}\n\\cdot n_{\\min}` for the exact method.\n\nNote: the implementation of ``inverse_transform`` in :class:`PCA` with\n``svd_solver='randomized'`` is not the exact inverse transform of\n``transform`` even when ``whiten=False`` (default).\n\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_applications_plot_face_recognition.py`\n* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`\n\n.. rubric:: References\n\n* Algorithm 4.3 in\n  :arxiv:`\"Finding structure with randomness: Stochastic algorithms for\n  constructing approximate matrix decompositions\" <0909.4061>`\n  Halko, et al., 2009\n\n* :arxiv:`\"An implementation of a randomized algorithm for principal component\n  analysis\" <1412.3510>` A. Szlam et al. 2014\n\n.. 
_SparsePCA:\n\nSparse principal components analysis (SparsePCA and MiniBatchSparsePCA)\n-----------------------------------------------------------------------\n\n:class:`SparsePCA` is a variant of PCA, with the goal of extracting the\nset of sparse components that best reconstruct the data.\n\nMini-batch sparse PCA (:class:`MiniBatchSparsePCA`) is a variant of\n:class:`SparsePCA` that is faster but less accurate. The increased speed is\nreached by iterating over small chunks of the set of features, for a given\nnumber of iterations.\n\n\nPrincipal component analysis (:class:`PCA`) has the disadvantage that the\ncomponents extracted by this method have exclusively dense expressions, i.e.\nthey have non-zero coefficients when expressed as linear combinations of the\noriginal variables. This can make interpretation difficult. In many cases,\nthe real underlying components can be more naturally imagined as sparse\nvectors; for example in face recognition, components might naturally map to\nparts of faces.\n\nSparse principal components yield a more parsimonious, interpretable\nrepresentation, clearly emphasizing which of the original features contribute\nto the differences between samples.\n\nThe following example illustrates 16 components extracted using sparse PCA from\nthe Olivetti faces dataset. It can be seen how the regularization term induces\nmany zeros. Furthermore, the natural structure of the data causes the non-zero\ncoefficients to be vertically adjacent. The model does not enforce this\nmathematically: each component is a vector :math:`h \\in \\mathbf{R}^{4096}`, and\nthere is no notion of vertical adjacency except during the human-friendly\nvisualization as 64x64 pixel images. The fact that the components shown below\nappear local is the effect of the inherent structure of the data, which makes\nsuch local patterns minimize reconstruction error. 
There exist sparsity-inducing\nnorms that take into account adjacency and different kinds of structure; see\n[Jen09]_ for a review of such methods.\nFor more details on how to use Sparse PCA, see the Examples section, below.\n\n\n.. |spca_img| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_005.png\n   :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n   :scale: 60%\n\n.. centered:: |pca_img| |spca_img|\n\nNote that there are many different formulations for the Sparse PCA\nproblem. The one implemented here is based on [Mrl09]_ . The optimization\nproblem solved is a PCA problem (dictionary learning) with an\n:math:`\\ell_1` penalty on the components:\n\n.. math::\n   (U^*, V^*) = \\underset{U, V}{\\operatorname{arg\\,min\\,}} & \\frac{1}{2}\n                ||X-UV||_{\\text{Fro}}^2+\\alpha||V||_{1,1} \\\\\n                \\text{subject to } & ||U_k||_2 <= 1 \\text{ for all }\n                0 \\leq k < n_{components}\n\n:math:`||.||_{\\text{Fro}}` stands for the Frobenius norm and :math:`||.||_{1,1}`\nstands for the entry-wise matrix norm which is the sum of the absolute values\nof all the entries in the matrix.\nThe sparsity-inducing :math:`||.||_{1,1}` matrix norm also prevents learning\ncomponents from noise when few training samples are available. The degree\nof penalization (and thus sparsity) can be adjusted through the\nhyperparameter ``alpha``. Small values lead to a gently regularized\nfactorization, while larger values shrink many coefficients to zero.\n\n.. note::\n\n  While in the spirit of an online algorithm, the class\n  :class:`MiniBatchSparsePCA` does not implement ``partial_fit`` because\n  the algorithm is online along the features direction, not the samples\n  direction.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`\n\n.. rubric:: References\n\n.. 
[Mrl09] `\"Online Dictionary Learning for Sparse Coding\"\n   <https:\/\/www.di.ens.fr\/~fbach\/mairal_icml09.pdf>`_\n   J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009\n.. [Jen09] `\"Structured Sparse Principal Component Analysis\"\n   <https:\/\/www.di.ens.fr\/~fbach\/sspca_AISTATS2010.pdf>`_\n   R. Jenatton, G. Obozinski, F. Bach, 2009\n\n\n.. _kernel_PCA:\n\nKernel Principal Component Analysis (kPCA)\n==========================================\n\nExact Kernel PCA\n----------------\n\n:class:`KernelPCA` is an extension of PCA which achieves non-linear\ndimensionality reduction through the use of kernels (see :ref:`metrics`) [Scholkopf1997]_. It\nhas many applications including denoising, compression and structured\nprediction (kernel dependency estimation). :class:`KernelPCA` supports both\n``transform`` and ``inverse_transform``.\n\n.. figure:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_kernel_pca_002.png\n    :target: ..\/auto_examples\/decomposition\/plot_kernel_pca.html\n    :align: center\n    :scale: 75%\n\n.. note::\n    :meth:`KernelPCA.inverse_transform` relies on a kernel ridge to learn the\n    function mapping samples from the PCA basis into the original feature\n    space [Bakir2003]_. Thus, the reconstruction obtained with\n    :meth:`KernelPCA.inverse_transform` is an approximation. See the example\n    linked below for more details.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_decomposition_plot_kernel_pca.py`\n* :ref:`sphx_glr_auto_examples_applications_plot_digits_denoising.py`\n\n.. rubric:: References\n\n.. [Scholkopf1997] Sch\u00f6lkopf, Bernhard, Alexander Smola, and Klaus-Robert M\u00fcller.\n   `\"Kernel principal component analysis.\"\n   <https:\/\/people.eecs.berkeley.edu\/~wainwrig\/stat241b\/scholkopf_kernel.pdf>`_\n   International conference on artificial neural networks.\n   Springer, Berlin, Heidelberg, 1997.\n\n.. 
[Bakir2003] Bak\u0131r, G\u00f6khan H., Jason Weston, and Bernhard Sch\u00f6lkopf.\n   `\"Learning to find pre-images.\"\n   <https:\/\/papers.nips.cc\/paper\/2003\/file\/ac1ad983e08ad3304a97e147f522747e-Paper.pdf>`_\n   Advances in neural information processing systems 16 (2003): 449-456.\n\n.. _kPCA_Solvers:\n\nChoice of solver for Kernel PCA\n-------------------------------\n\nWhile in :class:`PCA` the number of components is bounded by the number of\nfeatures, in :class:`KernelPCA` the number of components is bounded by the\nnumber of samples. Many real-world datasets have a large number of samples! In\nthese cases finding *all* the components with a full kPCA is a waste of\ncomputation time, as data is mostly described by the first few components\n(e.g. ``n_components<=100``). In other words, the centered Gram matrix that\nis eigendecomposed in the Kernel PCA fitting process has an effective rank that\nis much smaller than its size. This is a situation where approximate\neigensolvers can provide speedup with very low precision loss.\n\n\n.. dropdown:: Eigensolvers\n\n    The optional parameter ``eigen_solver='randomized'`` can be used to\n    *significantly* reduce the computation time when the number of requested\n    ``n_components`` is small compared with the number of samples. 
It relies on\n    randomized decomposition methods to find an approximate solution in a shorter\n    time.\n\n    The time complexity of the randomized :class:`KernelPCA` is\n    :math:`O(n_{\\mathrm{samples}}^2 \\cdot n_{\\mathrm{components}})`\n    instead of :math:`O(n_{\\mathrm{samples}}^3)` for the exact method\n    implemented with ``eigen_solver='dense'``.\n\n    The memory footprint of randomized :class:`KernelPCA` is also proportional to\n    :math:`2 \\cdot n_{\\mathrm{samples}} \\cdot n_{\\mathrm{components}}` instead of\n    :math:`n_{\\mathrm{samples}}^2` for the exact method.\n\n    Note: this technique is the same as in :ref:`RandomizedPCA`.\n\n    In addition to the above two solvers, ``eigen_solver='arpack'`` can be used as\n    an alternate way to get an approximate decomposition. In practice, this method\n    only provides reasonable execution times when the number of components to find\n    is extremely small. It is enabled by default when the desired number of\n    components is less than 10 (strict) and the number of samples is more than 200\n    (strict). See :class:`KernelPCA` for details.\n\n    .. rubric:: References\n\n    * *dense* solver:\n      `scipy.linalg.eigh documentation\n      <https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.linalg.eigh.html>`_\n\n    * *randomized* solver:\n\n      * Algorithm 4.3 in\n        :arxiv:`\"Finding structure with randomness: Stochastic\n        algorithms for constructing approximate matrix decompositions\" <0909.4061>`\n        Halko, et al. (2009)\n\n      * :arxiv:`\"An implementation of a randomized algorithm\n        for principal component analysis\" <1412.3510>`\n        A. Szlam et al. (2014)\n\n    * *arpack* solver:\n      `scipy.sparse.linalg.eigsh documentation\n      <https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.sparse.linalg.eigsh.html>`_\n      R. B. Lehoucq, D. C. Sorensen, and C. Yang, (1998)\n\n\n.. 
_LSA:\n\nTruncated singular value decomposition and latent semantic analysis\n===================================================================\n\n:class:`TruncatedSVD` implements a variant of singular value decomposition\n(SVD) that only computes the :math:`k` largest singular values,\nwhere :math:`k` is a user-specified parameter.\n\n:class:`TruncatedSVD` is very similar to :class:`PCA`, but differs\nin that the matrix :math:`X` does not need to be centered.\nWhen the columnwise (per-feature) means of :math:`X`\nare subtracted from the feature values,\ntruncated SVD on the resulting matrix is equivalent to PCA.\n\n.. dropdown:: About truncated SVD and latent semantic analysis (LSA)\n\n    When truncated SVD is applied to term-document matrices\n    (as returned by :class:`~sklearn.feature_extraction.text.CountVectorizer` or\n    :class:`~sklearn.feature_extraction.text.TfidfVectorizer`),\n    this transformation is known as\n    `latent semantic analysis <https:\/\/nlp.stanford.edu\/IR-book\/pdf\/18lsi.pdf>`_\n    (LSA), because it transforms such matrices\n    to a \"semantic\" space of low dimensionality.\n    In particular, LSA is known to combat the effects of synonymy and polysemy\n    (both of which roughly mean there are multiple meanings per word),\n    which cause term-document matrices to be overly sparse\n    and exhibit poor similarity under measures such as cosine similarity.\n\n    .. note::\n        LSA is also known as latent semantic indexing, LSI,\n        though strictly that refers to its use in persistent indexes\n        for information retrieval purposes.\n\n    Mathematically, truncated SVD applied to training samples :math:`X`\n    produces a low-rank approximation of :math:`X`:\n\n    .. 
math::\n        X \\approx X_k = U_k \\Sigma_k V_k^\\top\n\n    After this operation, :math:`U_k \\Sigma_k`\n    is the transformed training set with :math:`k` features\n    (called ``n_components`` in the API).\n\n    To also transform a test set :math:`X`, we multiply it with :math:`V_k`:\n\n    .. math::\n        X' = X V_k\n\n    .. note::\n        Most treatments of LSA in the natural language processing (NLP)\n        and information retrieval (IR) literature\n        swap the axes of the matrix :math:`X` so that it has shape\n        ``(n_features, n_samples)``.\n        We present LSA in a different way that matches the scikit-learn API better,\n        but the singular values found are the same.\n\n    While the :class:`TruncatedSVD` transformer\n    works with any feature matrix,\n    using it on tf-idf matrices is recommended over raw frequency counts\n    in an LSA\/document processing setting.\n    In particular, sublinear scaling and inverse document frequency\n    should be turned on (``sublinear_tf=True, use_idf=True``)\n    to bring the feature values closer to a Gaussian distribution,\n    compensating for LSA's erroneous assumptions about textual data.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`\n\n.. rubric:: References\n\n* Christopher D. Manning, Prabhakar Raghavan and Hinrich Sch\u00fctze (2008),\n  *Introduction to Information Retrieval*, Cambridge University Press,\n  chapter 18: `Matrix decompositions & latent semantic indexing\n  <https:\/\/nlp.stanford.edu\/IR-book\/pdf\/18lsi.pdf>`_\n\n\n\n.. _DictionaryLearning:\n\nDictionary Learning\n===================\n\n.. _SparseCoder:\n\nSparse coding with a precomputed dictionary\n-------------------------------------------\n\nThe :class:`SparseCoder` object is an estimator that can be used to transform signals\ninto sparse linear combination of atoms from a fixed, precomputed dictionary\nsuch as a discrete wavelet basis. 
This object therefore does not\nimplement a ``fit`` method. The transformation amounts\nto a sparse coding problem: finding a representation of the data as a linear\ncombination of as few dictionary atoms as possible. All variations of\ndictionary learning implement the following transform methods, controllable via\nthe ``transform_method`` initialization parameter:\n\n* Orthogonal matching pursuit (:ref:`omp`)\n\n* Least-angle regression (:ref:`least_angle_regression`)\n\n* Lasso computed by least-angle regression\n\n* Lasso using coordinate descent (:ref:`lasso`)\n\n* Thresholding\n\nThresholding is very fast but does not yield accurate reconstructions.\nIt has nevertheless been shown useful in the literature for classification\ntasks. For image reconstruction tasks, orthogonal matching pursuit yields the\nmost accurate, unbiased reconstruction.\n\nThe dictionary learning objects offer, via the ``split_code`` parameter, the\npossibility to separate the positive and negative values in the results of\nsparse coding. This is useful when dictionary learning is used for extracting\nfeatures that will be used for supervised learning, because it allows the\nlearning algorithm to assign different weights to the negative loadings of a\nparticular atom than to the corresponding positive loading.\n\nThe split code for a single sample has length ``2 * n_components``\nand is constructed using the following rule: First, the regular code of length\n``n_components`` is computed. Then, the first ``n_components`` entries of the\n``split_code`` are\nfilled with the positive part of the regular code vector. The second half of\nthe split code is filled with the negative part of the code vector, only with\na positive sign. Therefore, the ``split_code`` is non-negative.\n\n\n.. 
rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_decomposition_plot_sparse_coding.py`\n\n\nGeneric dictionary learning\n---------------------------\n\nDictionary learning (:class:`DictionaryLearning`) is a matrix factorization\nproblem that amounts to finding a (usually overcomplete) dictionary that will\nperform well at sparsely encoding the fitted data.\n\nRepresenting data as sparse combinations of atoms from an overcomplete\ndictionary is suggested to be the way the mammalian primary visual cortex works.\nConsequently, dictionary learning applied on image patches has been shown to\ngive good results in image processing tasks such as image completion,\ninpainting and denoising, as well as for supervised recognition tasks.\n\nDictionary learning is an optimization problem solved by alternatively updating\nthe sparse code, as a solution to multiple Lasso problems, considering the\ndictionary fixed, and then updating the dictionary to best fit the sparse code.\n\n.. math::\n   (U^*, V^*) = \\underset{U, V}{\\operatorname{arg\\,min\\,}} & \\frac{1}{2}\n                ||X-UV||_{\\text{Fro}}^2+\\alpha||U||_{1,1} \\\\\n                \\text{subject to } & ||V_k||_2 <= 1 \\text{ for all }\n                0 \\leq k < n_{\\mathrm{atoms}}\n\n\n.. |pca_img2| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_002.png\n   :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n   :scale: 60%\n\n.. |dict_img2| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_007.png\n   :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n   :scale: 60%\n\n.. 
centered:: |pca_img2| |dict_img2|\n\n:math:`||.||_{\\text{Fro}}` stands for the Frobenius norm and :math:`||.||_{1,1}`\nstands for the entry-wise matrix norm, which is the sum of the absolute values\nof all the entries in the matrix.\nAfter using such a procedure to fit the dictionary, the transform is simply a\nsparse coding step that shares the same implementation with all dictionary\nlearning objects (see :ref:`SparseCoder`).\n\nIt is also possible to constrain the dictionary and\/or code to be positive to\nmatch constraints that may be present in the data. Below are the faces with\ndifferent positivity constraints applied. Red indicates negative values, blue\nindicates positive values, and white represents zeros.\n\n\n.. |dict_img_pos1| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_010.png\n    :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n    :scale: 60%\n\n.. |dict_img_pos2| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_011.png\n    :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n    :scale: 60%\n\n.. |dict_img_pos3| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_012.png\n    :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n    :scale: 60%\n\n.. |dict_img_pos4| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_013.png\n    :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n    :scale: 60%\n\n.. centered:: |dict_img_pos1| |dict_img_pos2|\n.. centered:: |dict_img_pos3| |dict_img_pos4|\n\n\nThe following image shows what a dictionary learned from 4x4 pixel image patches\nextracted from part of the image of a raccoon face looks like.\n\n\n.. 
figure:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_image_denoising_001.png\n    :target: ..\/auto_examples\/decomposition\/plot_image_denoising.html\n    :align: center\n    :scale: 50%\n\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_decomposition_plot_image_denoising.py`\n\n\n.. rubric:: References\n\n* `\"Online dictionary learning for sparse coding\"\n  <https:\/\/www.di.ens.fr\/~fbach\/mairal_icml09.pdf>`_\n  J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009\n\n.. _MiniBatchDictionaryLearning:\n\nMini-batch dictionary learning\n------------------------------\n\n:class:`MiniBatchDictionaryLearning` implements a faster, but less accurate\nversion of the dictionary learning algorithm that is better suited for large\ndatasets.\n\nBy default, :class:`MiniBatchDictionaryLearning` divides the data into\nmini-batches and optimizes in an online manner by cycling over the mini-batches\nfor the specified number of iterations. However, at the moment it does not\nimplement a stopping condition.\n\nThe estimator also implements ``partial_fit``, which updates the dictionary by\niterating only once over a mini-batch. This can be used for online learning\nwhen the data is not readily available from the start, or when the data\ndoes not fit into memory.\n\n.. currentmodule:: sklearn.cluster\n\n.. image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_dict_face_patches_001.png\n    :target: ..\/auto_examples\/cluster\/plot_dict_face_patches.html\n    :scale: 50%\n    :align: right\n\n.. topic:: **Clustering for dictionary learning**\n\n   Note that when using dictionary learning to extract a representation\n   (e.g. for sparse coding) clustering can be a good proxy to learn the\n   dictionary. For instance the :class:`MiniBatchKMeans` estimator is\n   computationally efficient and implements on-line learning with a\n   ``partial_fit`` method.\n\n   Example: :ref:`sphx_glr_auto_examples_cluster_plot_dict_face_patches.py`\n\n.. 
currentmodule:: sklearn.decomposition\n\n.. _FA:\n\nFactor Analysis\n===============\n\nIn unsupervised learning we only have a dataset :math:`X = \\{x_1, x_2, \\dots, x_n\n\\}`. How can this dataset be described mathematically? A very simple\n`continuous latent variable` model for :math:`X` is\n\n.. math:: x_i = W h_i + \\mu + \\epsilon\n\nThe vector :math:`h_i` is called \"latent\" because it is unobserved. :math:`\\epsilon` is\nconsidered a noise term distributed according to a Gaussian with mean 0 and\ncovariance :math:`\\Psi` (i.e. :math:`\\epsilon \\sim \\mathcal{N}(0, \\Psi)`), :math:`\\mu` is some\narbitrary offset vector. Such a model is called \"generative\" as it describes\nhow :math:`x_i` is generated from :math:`h_i`. If we use all the :math:`x_i`'s as columns to form\na matrix :math:`\\mathbf{X}` and all the :math:`h_i`'s as columns of a matrix :math:`\\mathbf{H}`\nthen we can write (with suitably defined :math:`\\mathbf{M}` and :math:`\\mathbf{E}`):\n\n.. math::\n    \\mathbf{X} = W \\mathbf{H} + \\mathbf{M} + \\mathbf{E}\n\nIn other words, we *decomposed* matrix :math:`\\mathbf{X}`.\n\nIf :math:`h_i` is given, the above equation automatically implies the following\nprobabilistic interpretation:\n\n.. math:: p(x_i|h_i) = \\mathcal{N}(Wh_i + \\mu, \\Psi)\n\nFor a complete probabilistic model we also need a prior distribution for the\nlatent variable :math:`h`. The most straightforward assumption (based on the nice\nproperties of the Gaussian distribution) is :math:`h \\sim \\mathcal{N}(0,\n\\mathbf{I})`.  This yields a Gaussian as the marginal distribution of :math:`x`:\n\n.. math:: p(x) = \\mathcal{N}(\\mu, WW^T + \\Psi)\n\nNow, without any further assumptions the idea of having a latent variable :math:`h`\nwould be superfluous -- :math:`x` can be completely modelled with a mean\nand a covariance. We need to impose some more specific structure on one\nof these two parameters. 
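The marginal covariance :math:`WW^T + \Psi` derived above can be checked numerically with a quick simulation (a sketch with arbitrary small dimensions, using NumPy only; none of the names below are scikit-learn API):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_latent, n_samples = 5, 2, 200_000

W = rng.normal(size=(n_features, n_latent))       # loading matrix
mu = rng.normal(size=n_features)                  # offset vector
psi = np.diag(rng.uniform(0.1, 0.5, n_features))  # noise covariance Psi

# Sample the generative model: h ~ N(0, I), eps ~ N(0, Psi), x = W h + mu + eps
h = rng.normal(size=(n_samples, n_latent))
eps = rng.multivariate_normal(np.zeros(n_features), psi, size=n_samples)
x = h @ W.T + mu + eps

# The empirical covariance of x approaches W W^T + Psi as n_samples grows.
emp_cov = np.cov(x, rowvar=False)
max_err = np.abs(emp_cov - (W @ W.T + psi)).max()
```

With 200,000 samples, the largest entry-wise deviation from :math:`WW^T + \Psi` is small, illustrating that a latent-variable model with an isotropic or diagonal noise term really is just a structured Gaussian.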
A simple additional assumption regards the\nstructure of the error covariance :math:`\\Psi`:\n\n* :math:`\\Psi = \\sigma^2 \\mathbf{I}`: This assumption leads to\n  the probabilistic model of :class:`PCA`.\n\n* :math:`\\Psi = \\mathrm{diag}(\\psi_1, \\psi_2, \\dots, \\psi_n)`: This model is called\n  :class:`FactorAnalysis`, a classical statistical model. The matrix W is\n  sometimes called the \"factor loading matrix\".\n\nBoth models essentially estimate a Gaussian with a low-rank covariance matrix.\nBecause both models are probabilistic, they can be integrated in more complex\nmodels, e.g. Mixture of Factor Analysers. One gets very different models (e.g.\n:class:`FastICA`) if non-Gaussian priors on the latent variables are assumed.\n\nFactor analysis *can* produce similar components (the columns of its loading\nmatrix) to :class:`PCA`. However, one cannot make any general statements\nabout these components (e.g. whether they are orthogonal):\n\n.. |pca_img3| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_002.png\n    :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n    :scale: 60%\n\n.. |fa_img3| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_008.png\n    :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n    :scale: 60%\n\n.. centered:: |pca_img3| |fa_img3|\n\nThe main advantage of Factor Analysis over :class:`PCA` is that\nit can model the variance in every direction of the input space independently\n(heteroscedastic noise):\n\n.. figure:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_009.png\n    :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n    :align: center\n    :scale: 75%\n\nThis allows better model selection than probabilistic PCA in the presence\nof heteroscedastic noise:\n\n.. 
figure:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_pca_vs_fa_model_selection_002.png\n    :target: ..\/auto_examples\/decomposition\/plot_pca_vs_fa_model_selection.html\n    :align: center\n    :scale: 75%\n\nFactor Analysis is often followed by a rotation of the factors (with the\nparameter `rotation`), usually to improve interpretability. For example,\nVarimax rotation maximizes the sum of the variances of the squared loadings,\ni.e., it tends to produce sparser factors, which are influenced by only a few\nfeatures each (the \"simple structure\"). See e.g., the first example below.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_decomposition_plot_varimax_fa.py`\n* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_fa_model_selection.py`\n\n\n.. _ICA:\n\nIndependent component analysis (ICA)\n====================================\n\nIndependent component analysis separates a multivariate signal into\nadditive subcomponents that are maximally independent. It is\nimplemented in scikit-learn using the :class:`Fast ICA <FastICA>`\nalgorithm. Typically, ICA is not used for reducing dimensionality but\nfor separating superimposed signals. Since the ICA model does not include\na noise term, for the model to be correct, whitening must be applied.\nThis can be done internally using the ``whiten`` argument or manually using one\nof the PCA variants.\n\nIt is classically used to separate mixed signals (a problem known as\n*blind source separation*), as in the example below:\n\n.. figure:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_ica_blind_source_separation_001.png\n    :target: ..\/auto_examples\/decomposition\/plot_ica_blind_source_separation.html\n    :align: center\n    :scale: 60%\n\n\nICA can also be used as yet another non-linear decomposition that finds\ncomponents with some sparsity:\n\n.. 
|pca_img4| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_002.png\n    :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n    :scale: 60%\n\n.. |ica_img4| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_004.png\n    :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n    :scale: 60%\n\n.. centered:: |pca_img4| |ica_img4|\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_decomposition_plot_ica_blind_source_separation.py`\n* :ref:`sphx_glr_auto_examples_decomposition_plot_ica_vs_pca.py`\n* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`\n\n\n.. _NMF:\n\nNon-negative matrix factorization (NMF or NNMF)\n===============================================\n\nNMF with the Frobenius norm\n---------------------------\n\n:class:`NMF` [1]_ is an alternative approach to decomposition that assumes that the\ndata and the components are non-negative. :class:`NMF` can be plugged in\ninstead of :class:`PCA` or its variants, in the cases where the data matrix\ndoes not contain negative values. It finds a decomposition of samples\n:math:`X` into two matrices :math:`W` and :math:`H` of non-negative elements,\nby optimizing the distance :math:`d` between :math:`X` and the matrix product\n:math:`WH`. The most widely used distance function is the squared Frobenius\nnorm, which is an obvious extension of the Euclidean norm to matrices:\n\n.. math::\n    d_{\\mathrm{Fro}}(X, Y) = \\frac{1}{2} ||X - Y||_{\\mathrm{Fro}}^2 = \\frac{1}{2} \\sum_{i,j} (X_{ij} - {Y}_{ij})^2\n\nUnlike :class:`PCA`, the representation of a vector is obtained in an additive\nfashion, by superimposing the components, without subtracting. 
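The squared Frobenius distance above is straightforward to compute directly with NumPy (a minimal sketch, not scikit-learn API; the matrices are illustrative):

```python
import numpy as np

def frobenius_distance(X, Y):
    """d_Fro(X, Y) = 1/2 * ||X - Y||_Fro^2, i.e. half the sum of squared entry-wise differences."""
    return 0.5 * np.sum((X - Y) ** 2)

X = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = np.array([[1.0, 1.0], [3.0, 5.0]])

# Equivalent formulation via the Frobenius matrix norm.
d = frobenius_distance(X, Y)
assert np.isclose(d, 0.5 * np.linalg.norm(X - Y, "fro") ** 2)
print(d)  # 1.0
```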
Such additive\nmodels are efficient for representing images and text.\n\nIt has been observed in [Hoyer, 2004] [2]_ that, when carefully constrained,\n:class:`NMF` can produce a parts-based representation of the dataset,\nresulting in interpretable models. The following example displays 16\nsparse components found by :class:`NMF` from the images in the Olivetti\nfaces dataset, in comparison with the PCA eigenfaces.\n\n.. |pca_img5| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_002.png\n    :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n    :scale: 60%\n\n.. |nmf_img5| image:: ..\/auto_examples\/decomposition\/images\/sphx_glr_plot_faces_decomposition_003.png\n    :target: ..\/auto_examples\/decomposition\/plot_faces_decomposition.html\n    :scale: 60%\n\n.. centered:: |pca_img5| |nmf_img5|\n\n\nThe `init` parameter determines the initialization method applied, which\nhas a great impact on the performance of the method. :class:`NMF` implements the\nmethod Nonnegative Double Singular Value Decomposition. NNDSVD [4]_ is based on\ntwo SVD processes, one approximating the data matrix, the other approximating\npositive sections of the resulting partial SVD factors utilizing an algebraic\nproperty of unit rank matrices. The basic NNDSVD algorithm is better suited for\nsparse factorization. 
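For instance, the NNDSVD initialization can be requested explicitly via `init` (a minimal sketch; the data is an arbitrary non-negative matrix, not from the original docs):

```python
import numpy as np
from sklearn.decomposition import NMF

# An arbitrary non-negative data matrix (6 samples, 5 features).
X = np.abs(np.random.RandomState(0).normal(size=(6, 5)))

# NNDSVD initialization, well suited for sparse factorizations.
model = NMF(n_components=2, init="nndsvd", random_state=0)
W = model.fit_transform(X)
H = model.components_

# Both factors are non-negative by construction.
assert W.min() >= 0 and H.min() >= 0
```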
Its variants NNDSVDa (in which all zeros are set equal to\nthe mean of all elements of the data), and NNDSVDar (in which the zeros are set\nto random perturbations less than the mean of the data divided by 100) are\nrecommended in the dense case.\n\nNote that the Multiplicative Update ('mu') solver cannot update zeros present in\nthe initialization, so it leads to poorer results when used jointly with the\nbasic NNDSVD algorithm, which introduces a lot of zeros; in this case, NNDSVDa or\nNNDSVDar should be preferred.\n\n:class:`NMF` can also be initialized with correctly scaled random non-negative\nmatrices by setting `init=\"random\"`. An integer seed or a\n``RandomState`` can also be passed to `random_state` to control\nreproducibility.\n\nIn :class:`NMF`, L1 and L2 priors can be added to the loss function in order to\nregularize the model. The L2 prior uses the Frobenius norm, while the L1 prior\nuses an elementwise L1 norm. As in :class:`~sklearn.linear_model.ElasticNet`,\nwe control the combination of L1 and L2 with the `l1_ratio` (:math:`\\rho`)\nparameter, and the intensity of the regularization with the `alpha_W` and\n`alpha_H` (:math:`\\alpha_W` and :math:`\\alpha_H`) parameters. The priors are\nscaled by the number of samples (:math:`n\\_samples`) for `H` and the number of\nfeatures (:math:`n\\_features`) for `W` to keep their impact balanced with\nrespect to one another and to the data fit term as independent as possible of\nthe size of the training set. The prior terms are then:\n\n.. math::\n    (\\alpha_W \\rho ||W||_1 + \\frac{\\alpha_W(1-\\rho)}{2} ||W||_{\\mathrm{Fro}} ^ 2) * n\\_features\n    + (\\alpha_H \\rho ||H||_1 + \\frac{\\alpha_H(1-\\rho)}{2} ||H||_{\\mathrm{Fro}} ^ 2) * n\\_samples\n\nand the regularized objective function is:\n\n.. 
math::\n    d_{\\mathrm{Fro}}(X, WH)\n    + (\\alpha_W \\rho ||W||_1 + \\frac{\\alpha_W(1-\\rho)}{2} ||W||_{\\mathrm{Fro}} ^ 2) * n\\_features\n    + (\\alpha_H \\rho ||H||_1 + \\frac{\\alpha_H(1-\\rho)}{2} ||H||_{\\mathrm{Fro}} ^ 2) * n\\_samples\n\nNMF with a beta-divergence\n--------------------------\n\nAs described previously, the most widely used distance function is the squared\nFrobenius norm, which is an obvious extension of the Euclidean norm to\nmatrices:\n\n.. math::\n    d_{\\mathrm{Fro}}(X, Y) = \\frac{1}{2} ||X - Y||_{\\mathrm{Fro}}^2 = \\frac{1}{2} \\sum_{i,j} (X_{ij} - {Y}_{ij})^2\n\nOther distance functions can be used in NMF as, for example, the (generalized)\nKullback-Leibler (KL) divergence, also referred to as the I-divergence:\n\n.. math::\n    d_{KL}(X, Y) = \\sum_{i,j} (X_{ij} \\log(\\frac{X_{ij}}{Y_{ij}}) - X_{ij} + Y_{ij})\n\nOr, the Itakura-Saito (IS) divergence:\n\n.. math::\n    d_{IS}(X, Y) = \\sum_{i,j} (\\frac{X_{ij}}{Y_{ij}} - \\log(\\frac{X_{ij}}{Y_{ij}}) - 1)\n\nThese three distances are special cases of the beta-divergence family, with\n:math:`\\beta = 2, 1, 0` respectively [6]_. The beta-divergence is\ndefined by:\n\n.. math::\n    d_{\\beta}(X, Y) = \\sum_{i,j} \\frac{1}{\\beta(\\beta - 1)}(X_{ij}^\\beta + (\\beta-1)Y_{ij}^\\beta - \\beta X_{ij} Y_{ij}^{\\beta - 1})\n\n.. image:: ..\/images\/beta_divergence.png\n    :align: center\n    :scale: 75%\n\nNote that this definition is not valid if :math:`\\beta \\in (0; 1)`, yet it can\nbe continuously extended to the definitions of :math:`d_{KL}` and :math:`d_{IS}`\nrespectively.\n\n.. dropdown:: NMF implemented solvers\n\n    :class:`NMF` implements two solvers, using Coordinate Descent ('cd') [5]_, and\n    Multiplicative Update ('mu') [6]_. The 'mu' solver can optimize every\n    beta-divergence, including of course the Frobenius norm (:math:`\\beta=2`), the\n    (generalized) Kullback-Leibler divergence (:math:`\\beta=1`) and the\n    Itakura-Saito divergence (:math:`\\beta=0`). 
Note that for\n    :math:`\\beta \\in (1; 2)`, the 'mu' solver is significantly faster than for other\n    values of :math:`\\beta`. Note also that with a negative (or 0, i.e.\n    'itakura-saito') :math:`\\beta`, the input matrix cannot contain zero values.\n\n    The 'cd' solver can only optimize the Frobenius norm. Due to the\n    underlying non-convexity of NMF, the different solvers may converge to\n    different minima, even when optimizing the same distance function.\n\nNMF is best used with the ``fit_transform`` method, which returns the matrix W.\nThe matrix H is stored in the fitted model's ``components_`` attribute;\nthe method ``transform`` will decompose a new matrix X_new based on these\nstored components::\n\n    >>> import numpy as np\n    >>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])\n    >>> from sklearn.decomposition import NMF\n    >>> model = NMF(n_components=2, init='random', random_state=0)\n    >>> W = model.fit_transform(X)\n    >>> H = model.components_\n    >>> X_new = np.array([[1, 0], [1, 6.1], [1, 0], [1, 4], [3.2, 1], [0, 4]])\n    >>> W_new = model.transform(X_new)\n\n\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`\n* :ref:`sphx_glr_auto_examples_applications_plot_topics_extraction_with_nmf_lda.py`\n\n.. _MiniBatchNMF:\n\nMini-batch Non-negative Matrix Factorization\n--------------------------------------------\n\n:class:`MiniBatchNMF` [7]_ implements a faster, but less accurate version of the\nnon-negative matrix factorization (i.e. :class:`~sklearn.decomposition.NMF`),\nbetter suited for large datasets.\n\nBy default, :class:`MiniBatchNMF` divides the data into mini-batches and\noptimizes the NMF model in an online manner by cycling over the mini-batches\nfor the specified number of iterations. 
The ``batch_size`` parameter controls\nthe size of the batches.\n\nIn order to speed up the mini-batch algorithm, it is also possible to scale\npast batches, giving them less importance than newer batches. This is done by\nintroducing a so-called forgetting factor controlled by the ``forget_factor``\nparameter.\n\nThe estimator also implements ``partial_fit``, which updates ``H`` by iterating\nonly once over a mini-batch. This can be used for online learning when the data\nis not readily available from the start, or when the data does not fit into memory.\n\n.. rubric:: References\n\n.. [1] `\"Learning the parts of objects by non-negative matrix factorization\"\n  <http:\/\/www.cs.columbia.edu\/~blei\/fogm\/2020F\/readings\/LeeSeung1999.pdf>`_\n  D. Lee, S. Seung, 1999\n\n.. [2] `\"Non-negative Matrix Factorization with Sparseness Constraints\"\n  <https:\/\/www.jmlr.org\/papers\/volume5\/hoyer04a\/hoyer04a.pdf>`_\n  P. Hoyer, 2004\n\n.. [4] `\"SVD based initialization: A head start for nonnegative\n  matrix factorization\"\n  <https:\/\/www.boutsidis.org\/Boutsidis_PRE_08.pdf>`_\n  C. Boutsidis, E. Gallopoulos, 2008\n\n.. [5] `\"Fast local algorithms for large scale nonnegative matrix and tensor\n  factorizations.\"\n  <https:\/\/www.researchgate.net\/profile\/Anh-Huy-Phan\/publication\/220241471_Fast_Local_Algorithms_for_Large_Scale_Nonnegative_Matrix_and_Tensor_Factorizations>`_\n  A. Cichocki, A. Phan, 2009\n\n.. [6] :arxiv:`\"Algorithms for nonnegative matrix factorization with\n  the beta-divergence\" <1010.1763>`\n  C. Fevotte, J. Idier, 2011\n\n.. [7] :arxiv:`\"Online algorithms for nonnegative matrix factorization with the\n  Itakura-Saito divergence\" <1106.4198>`\n  A. Lefevre, F. Bach, C. Fevotte, 2011\n\n.. _LatentDirichletAllocation:\n\nLatent Dirichlet Allocation (LDA)\n=================================\n\nLatent Dirichlet Allocation is a generative probabilistic model for collections of\ndiscrete datasets such as text corpora. 
It is also a topic model that is used for\ndiscovering abstract topics from a collection of documents.\n\nThe graphical model of LDA is a three-level generative model:\n\n.. image:: ..\/images\/lda_model_graph.png\n   :align: center\n\nThe notation used in the graphical model above follows\nHoffman et al. (2013):\n\n* The corpus is a collection of :math:`D` documents.\n* A document is a sequence of :math:`N` words.\n* There are :math:`K` topics in the corpus.\n* The boxes represent repeated sampling.\n\nIn the graphical model, each node is a random variable and has a role in the\ngenerative process. A shaded node indicates an observed variable and an unshaded\nnode indicates a hidden (latent) variable. In this case, words in the corpus are\nthe only data that we observe. The latent variables determine the random mixture\nof topics in the corpus and the distribution of words in the documents.\nThe goal of LDA is to use the observed words to infer the hidden topic\nstructure.\n\n.. dropdown:: Details on modeling text corpora\n\n    When modeling text corpora, the model assumes the following generative process\n    for a corpus with :math:`D` documents and :math:`K` topics, with :math:`K`\n    corresponding to `n_components` in the API:\n\n    1. For each topic :math:`k \\in K`, draw :math:`\\beta_k \\sim\n       \\mathrm{Dirichlet}(\\eta)`. This provides a distribution over the words,\n       i.e. the probability of a word appearing in topic :math:`k`.\n       :math:`\\eta` corresponds to `topic_word_prior`.\n\n    2. For each document :math:`d \\in D`, draw the topic proportions\n       :math:`\\theta_d \\sim \\mathrm{Dirichlet}(\\alpha)`. :math:`\\alpha`\n       corresponds to `doc_topic_prior`.\n\n    3. For each word :math:`i` in document :math:`d`:\n\n       a. Draw the topic assignment :math:`z_{di} \\sim \\mathrm{Multinomial}\n          (\\theta_d)`\n       b. 
Draw the observed word :math:`w_{di} \\sim \\mathrm{Multinomial}\n          (\\beta_{z_{di}})`\n\n    For parameter estimation, the posterior distribution is:\n\n    .. math::\n        p(z, \\theta, \\beta |w, \\alpha, \\eta) =\n        \\frac{p(z, \\theta, \\beta|\\alpha, \\eta)}{p(w|\\alpha, \\eta)}\n\n    Since the posterior is intractable, the variational Bayesian method\n    uses a simpler distribution :math:`q(z,\\theta,\\beta | \\lambda, \\phi, \\gamma)`\n    to approximate it, and those variational parameters :math:`\\lambda`,\n    :math:`\\phi`, :math:`\\gamma` are optimized to maximize the Evidence\n    Lower Bound (ELBO):\n\n    .. math::\n        \\log\\: P(w | \\alpha, \\eta) \\geq L(w,\\phi,\\gamma,\\lambda) \\overset{\\triangle}{=}\n        E_{q}[\\log\\:p(w,z,\\theta,\\beta|\\alpha,\\eta)] - E_{q}[\\log\\:q(z, \\theta, \\beta)]\n\n    Maximizing ELBO is equivalent to minimizing the Kullback-Leibler (KL) divergence\n    between :math:`q(z,\\theta,\\beta)` and the true posterior\n    :math:`p(z, \\theta, \\beta |w, \\alpha, \\eta)`.\n\n\n:class:`LatentDirichletAllocation` implements the online variational Bayes\nalgorithm and supports both online and batch update methods.\nWhile the batch method updates variational variables after each full pass through\nthe data, the online method updates variational variables from mini-batch data\npoints.\n\n.. note::\n\n  Although the online method is guaranteed to converge to a local optimum point, the quality of\n  the optimum point and the speed of convergence may depend on mini-batch size and\n  attributes related to learning rate setting.\n\nWhen :class:`LatentDirichletAllocation` is applied to a \"document-term\" matrix, the matrix\nwill be decomposed into a \"topic-term\" matrix and a \"document-topic\" matrix. 
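For example, on a toy count matrix (a minimal sketch; the word counts below are illustrative, not from the original docs):

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# A tiny "document-term" matrix of word counts (5 documents, 4 terms).
X = np.array([[4, 0, 1, 0],
              [3, 1, 0, 0],
              [0, 0, 4, 3],
              [1, 0, 3, 4],
              [5, 1, 0, 1]])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)   # "document-topic" matrix, shape (5, 2)
topic_term = lda.components_       # "topic-term" matrix, shape (2, 4)

# Each row of the document-topic matrix is a distribution over topics.
assert np.allclose(doc_topic.sum(axis=1), 1.0)
```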
While\n\"topic-term\" matrix is stored as `components_` in the model, \"document-topic\" matrix\ncan be calculated from ``transform`` method.\n\n:class:`LatentDirichletAllocation` also implements ``partial_fit`` method. This is used\nwhen data can be fetched sequentially.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_applications_plot_topics_extraction_with_nmf_lda.py`\n\n.. rubric:: References\n\n* `\"Latent Dirichlet Allocation\"\n  <https:\/\/www.jmlr.org\/papers\/volume3\/blei03a\/blei03a.pdf>`_\n  D. Blei, A. Ng, M. Jordan, 2003\n\n* `\"Online Learning for Latent Dirichlet Allocation\u201d\n  <https:\/\/papers.nips.cc\/paper\/3902-online-learning-for-latent-dirichlet-allocation.pdf>`_\n  M. Hoffman, D. Blei, F. Bach, 2010\n\n* `\"Stochastic Variational Inference\"\n  <https:\/\/www.cs.columbia.edu\/~blei\/papers\/HoffmanBleiWangPaisley2013.pdf>`_\n  M. Hoffman, D. Blei, C. Wang, J. Paisley, 2013\n\n* `\"The varimax criterion for analytic rotation in factor analysis\"\n  <https:\/\/link.springer.com\/article\/10.1007%2FBF02289233>`_\n  H. F. 
Kaiser, 1958\n\nSee also :ref:`nca_dim_reduction` for dimensionality reduction with\nNeighborhood Components Analysis.","site":"scikit-learn","answers_cleaned":"    decompositions                                                                      Decomposing signals in components  matrix factorization problems                                                                        currentmodule   sklearn decomposition       PCA    Principal component analysis  PCA                                      Exact PCA and probabilistic interpretation                                             PCA is used to decompose a multivariate dataset in a set of successive orthogonal components that explain a maximum amount of the variance  In scikit learn   class  PCA  is implemented as a  transformer  object that learns  math  n  components in its   fit   method  and can be used on new data to project it on these components   PCA centers but does not scale the input data for each feature before applying the SVD  The optional parameter   whiten True   makes it possible to project the data onto the singular space while scaling each component to unit variance  This is often useful if the models down stream make strong assumptions on the isotropy of the signal  this is for example the case for Support Vector Machines with the RBF kernel and the K Means clustering algorithm   Below is an example of the iris dataset  which is comprised of 4 features  projected on the 2 dimensions that explain most variance      figure      auto examples decomposition images sphx glr plot pca vs lda 001 png      target     auto examples decomposition plot pca vs lda html      align  center      scale  75    The  class  PCA  object also provides a probabilistic interpretation of the PCA that can give a likelihood of data based on the amount of variance it explains  As such it implements a  term  score  method that can be used in cross validation      figure      auto examples decomposition images sphx 
glr plot pca vs fa model selection 001 png      target     auto examples decomposition plot pca vs fa model selection html      align  center      scale  75       rubric   Examples     ref  sphx glr auto examples decomposition plot pca iris py     ref  sphx glr auto examples decomposition plot pca vs lda py     ref  sphx glr auto examples decomposition plot pca vs fa model selection py        IncrementalPCA   Incremental PCA                  The  class  PCA  object is very useful  but has certain limitations for large datasets  The biggest limitation is that  class  PCA  only supports batch processing  which means all of the data to be processed must fit in main memory  The  class  IncrementalPCA  object uses a different form of processing and allows for partial computations which almost exactly match the results of  class  PCA  while processing the data in a minibatch fashion   class  IncrementalPCA  makes it possible to implement out of core Principal Component Analysis either by     Using its   partial fit   method on chunks of data fetched sequentially   from the local hard drive or a network database     Calling its fit method on a memory mapped file using     numpy memmap      class  IncrementalPCA  only stores estimates of component and noise variances  in order update   explained variance ratio    incrementally  This is why memory usage depends on the number of samples per batch  rather than the number of samples to be processed in the dataset   As in  class  PCA    class  IncrementalPCA  centers but does not scale the input data for each feature before applying the SVD      figure      auto examples decomposition images sphx glr plot incremental pca 001 png      target     auto examples decomposition plot incremental pca html      align  center      scale  75      figure      auto examples decomposition images sphx glr plot incremental pca 002 png      target     auto examples decomposition plot incremental pca html      align  center      scale  75       
rubric   Examples     ref  sphx glr auto examples decomposition plot incremental pca py        RandomizedPCA   PCA using randomized SVD                           It is often interesting to project data to a lower dimensional space that preserves most of the variance  by dropping the singular vector of components associated with lower singular values   For instance  if we work with 64x64 pixel gray level pictures for face recognition  the dimensionality of the data is 4096 and it is slow to train an RBF support vector machine on such wide data  Furthermore we know that the intrinsic dimensionality of the data is much lower than 4096 since all pictures of human faces look somewhat alike  The samples lie on a manifold of much lower dimension  say around 200 for instance   The PCA algorithm can be used to linearly transform the data while both reducing the dimensionality and preserve most of the explained variance at the same time   The class  class  PCA  used with the optional parameter   svd solver  randomized    is very useful in that case  since we are going to drop most of the singular vectors it is much more efficient to limit the computation to an approximated estimate of the singular vectors we will keep to actually perform the transform   For instance  the following shows 16 sample portraits  centered around 0 0  from the Olivetti dataset  On the right hand side are the first 16 singular vectors reshaped as portraits  Since we only require the top 16 singular vectors of a dataset with size  math  n  samples    400  and  math  n  features    64  times 64   4096   the computation time is less than 1s       orig img  image      auto examples decomposition images sphx glr plot faces decomposition 001 png     target     auto examples decomposition plot faces decomposition html     scale  60       pca img  image      auto examples decomposition images sphx glr plot faces decomposition 002 png     target     auto examples decomposition plot faces decomposition html   
  scale  60      centered    orig img   pca img   If we note  math  n   max     max n   mathrm samples    n   mathrm features     and  math  n   min     min n   mathrm samples    n   mathrm features      the time complexity of the randomized  class  PCA  is  math  O n   max  2  cdot n   mathrm components     instead of  math  O n   max  2  cdot n   min    for the exact method implemented in  class  PCA    The memory footprint of randomized  class  PCA  is also proportional to  math  2  cdot n   max   cdot n   mathrm components    instead of  math  n   max   cdot n   min   for the exact method   Note  the implementation of   inverse transform   in  class  PCA  with   svd solver  randomized    is not the exact inverse transform of   transform   even when   whiten False    default        rubric   Examples     ref  sphx glr auto examples applications plot face recognition py     ref  sphx glr auto examples decomposition plot faces decomposition py      rubric   References    Algorithm 4 3 in    arxiv   Finding structure with randomness  Stochastic algorithms for   constructing approximate matrix decompositions   0909 4061     Halko  et al   2009     arxiv   An implementation of a randomized algorithm for principal component   analysis   1412 3510   A  Szlam et al  2014      SparsePCA   Sparse principal components analysis  SparsePCA and MiniBatchSparsePCA                                                                            class  SparsePCA  is a variant of PCA  with the goal of extracting the set of sparse components that best reconstruct the data   Mini batch sparse PCA   class  MiniBatchSparsePCA   is a variant of  class  SparsePCA  that is faster but less accurate  The increased speed is reached by iterating over small chunks of the set of features  for a given number of iterations    Principal component analysis   class  PCA   has the disadvantage that the components extracted by this method have exclusively dense expressions  i e  they have non zero 
coefficients when expressed as linear combinations of the original variables. This can make interpretation difficult. In many cases, the real underlying components can be more naturally imagined as sparse vectors; for example in face recognition, components might naturally map to parts of faces.

Sparse principal components yield a more parsimonious, interpretable representation, clearly emphasizing which of the original features contribute to the differences between samples.

The following example illustrates 16 components extracted using sparse PCA from the Olivetti faces dataset. It can be seen how the regularization term induces many zeros. Furthermore, the natural structure of the data causes the non-zero coefficients to be vertically adjacent. The model does not enforce this mathematically: each component is a vector :math:`h \in \mathbf{R}^{4096}`, and there is no notion of vertical adjacency except during the human-friendly visualization as 64x64 pixel images. The fact that the components shown below appear local is the effect of the inherent structure of the data, which makes such local patterns minimize reconstruction error. There exist sparsity-inducing norms that take into account adjacency and different kinds of structure; see [Jen09]_ for a review of such methods. For more details on how to use Sparse PCA, see the Examples section, below.

.. |spca_img| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_005.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60

.. centered:: |pca_img| |spca_img|

Note that there are many different formulations for the Sparse PCA problem. The one implemented here is based on [Mrl09]_. The optimization problem solved is a PCA problem (dictionary learning) with an :math:`\ell_1` penalty on the components:

.. math::
   (U^*, V^*) = \underset{U, V}{\operatorname{arg\,min}} \frac{1}{2} \|X - UV\|_{\text{Fro}}^2 + \alpha \|V\|_{1,1} \\
   \text{subject to } \|U_k\|_2 \leq 1 \text{ for all } 0 \leq k < n_{components}

:math:`\|\cdot\|_{\text{Fro}}` stands for the Frobenius norm and :math:`\|\cdot\|_{1,1}` stands for the entry-wise matrix norm which is the sum of the absolute values of all the entries in the matrix. The sparsity-inducing :math:`\|\cdot\|_{1,1}` matrix norm also prevents learning components from noise when few training samples are available. The degree of penalization (and thus sparsity) can be adjusted through the hyperparameter ``alpha``. Small values lead to a gently regularized factorization, while larger values shrink many coefficients to zero.

.. note::

   While in the spirit of an online algorithm, the class :class:`MiniBatchSparsePCA` does not implement ``partial_fit`` because the algorithm is online along the features direction, not the samples direction.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`

.. rubric:: References

.. [Mrl09] `"Online Dictionary Learning for Sparse Coding" <https://www.di.ens.fr/~fbach/mairal_icml09.pdf>`_ J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009
.. [Jen09] `"Structured Sparse Principal Component Analysis" <https://www.di.ens.fr/~fbach/sspca_AISTATS2010.pdf>`_ R. Jenatton, G. Obozinski, F. Bach, 2009

.. _kernel_PCA:

Kernel Principal Component Analysis (kPCA)
------------------------------------------

Exact Kernel PCA
^^^^^^^^^^^^^^^^

:class:`KernelPCA` is an extension of PCA which achieves non-linear dimensionality reduction through the use of kernels (see :ref:`metrics`) [Scholkopf1997]_. It has many applications including denoising, compression and structured prediction (kernel dependency estimation). :class:`KernelPCA` supports both ``transform`` and ``inverse_transform``.

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_kernel_pca_002.png
   :target: ../auto_examples/decomposition/plot_kernel_pca.html
   :align: center
   :scale: 75
.. note::
   :meth:`KernelPCA.inverse_transform` relies on a kernel ridge to learn the function mapping samples from the PCA basis into the original feature space [Bakir2003]_. Thus, the reconstruction obtained with :meth:`KernelPCA.inverse_transform` is an approximation. See the example linked below for more details.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_kernel_pca.py`
* :ref:`sphx_glr_auto_examples_applications_plot_digits_denoising.py`

.. rubric:: References

.. [Scholkopf1997] Schölkopf, Bernhard, Alexander Smola, and Klaus-Robert Müller. `"Kernel principal component analysis." <https://people.eecs.berkeley.edu/~wainwrig/stat241b/scholkopf_kernel.pdf>`_ International conference on artificial neural networks. Springer, Berlin, Heidelberg, 1997.
.. [Bakir2003] Bakır, Gökhan H., Jason Weston, and Bernhard Schölkopf. `"Learning to find pre-images." <https://papers.nips.cc/paper/2003/file/ac1ad983e08ad3304a97e147f522747e-Paper.pdf>`_ Advances in neural information processing systems 16 (2003): 449-456.

.. _kPCA_Solvers:

Choice of solver for Kernel PCA
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While in :class:`PCA` the number of components is bounded by the number of features, in :class:`KernelPCA` the number of components is bounded by the number of samples. Many real-world datasets have a large number of samples. In these cases finding *all* the components with a full kPCA is a waste of computation time, as data is mostly described by the first few components (e.g. ``n_components<=100``). In other words, the centered Gram matrix that is eigendecomposed in the Kernel PCA fitting process has an effective rank that is much smaller than its size. This is a situation where approximate eigensolvers can provide speedup with very low precision loss.

.. dropdown:: Eigensolvers

   The optional parameter ``eigen_solver='randomized'`` can be used to *significantly* reduce the computation time when the
   number of requested ``n_components`` is small compared with the number of samples. It relies on randomized decomposition methods to find an approximate solution in a shorter time.

   The time complexity of the randomized :class:`KernelPCA` is :math:`O(n_{\mathrm{samples}}^2 \cdot n_{\mathrm{components}})` instead of :math:`O(n_{\mathrm{samples}}^3)` for the exact method implemented with ``eigen_solver='dense'``.

   The memory footprint of randomized :class:`KernelPCA` is also proportional to :math:`2 \cdot n_{\mathrm{samples}} \cdot n_{\mathrm{components}}` instead of :math:`n_{\mathrm{samples}}^2` for the exact method.

   Note: this technique is the same as in :ref:`RandomizedPCA`.

   In addition to the above two solvers, ``eigen_solver='arpack'`` can be used as an alternate way to get an approximate decomposition. In practice, this method only provides reasonable execution times when the number of components to find is extremely small. It is enabled by default when the desired number of components is less than 10 (strict) and the number of samples is more than 200 (strict). See :class:`KernelPCA` for details.

   .. rubric:: References

   * *dense* solver: `scipy.linalg.eigh documentation <https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigh.html>`_
   * *randomized* solver:

     * Algorithm 4.3 in :arxiv:`"Finding structure with randomness: Stochastic algorithms for constructing approximate matrix decompositions" <0909.4061>` Halko, et al. (2009)
     * :arxiv:`"An implementation of a randomized algorithm for principal component analysis" <1412.3510>` A. Szlam et al. (2014)

   * *arpack* solver: `scipy.sparse.linalg.eigsh documentation <https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigsh.html>`_ R. B. Lehoucq, D. C. Sorensen, and C. Yang, (1998)

.. _LSA:

Truncated singular value decomposition and latent semantic analysis
-------------------------------------------------------------------

:class:`TruncatedSVD` implements a variant of singular value decomposition (SVD) that only computes the :math:`k` largest singular values, where :math:`k` is a user-specified parameter.

:class:`TruncatedSVD` is very similar to :class:`PCA`, but differs in that the matrix :math:`X` does not need to be centered. When the columnwise (per-feature) means of :math:`X` are subtracted from the feature values, truncated SVD on the resulting matrix is equivalent to PCA.

.. dropdown:: About truncated SVD and latent semantic analysis (LSA)

   When truncated SVD is applied to term-document matrices (as returned by :class:`~sklearn.feature_extraction.text.CountVectorizer` or :class:`~sklearn.feature_extraction.text.TfidfVectorizer`), this transformation is known as `latent semantic analysis <https://nlp.stanford.edu/IR-book/pdf/18lsi.pdf>`_ (LSA), because it transforms such matrices to a "semantic" space of low dimensionality. In particular, LSA is known to combat the effects of synonymy and polysemy (both of which roughly mean there are multiple meanings per word), which cause term-document matrices to be overly sparse and exhibit poor similarity under measures such as cosine similarity.

   .. note::
      LSA is also known as latent semantic indexing (LSI), though strictly that refers to its use in persistent indexes for information retrieval purposes.

   Mathematically, truncated SVD applied to training samples :math:`X` produces a low-rank approximation :math:`X`:

   .. math::
      X \approx X_k = U_k \Sigma_k V_k^\top

   After this operation, :math:`U_k \Sigma_k` is the transformed training set with :math:`k` features (called ``n_components`` in the API).

   To also transform a test set :math:`X`, we multiply it with :math:`V_k`:

   .. math::
      X' = X V_k
   .. note::
      Most treatments of LSA in the natural language processing (NLP) and information retrieval (IR) literature swap the axes of the matrix :math:`X` so that it has shape ``(n_features, n_samples)``. We present LSA in a different way that matches the scikit-learn API better, but the singular values found are the same.

While the :class:`TruncatedSVD` transformer works with any feature matrix, using it on tf-idf matrices is recommended over raw frequency counts in an LSA / document processing setting. In particular, sublinear scaling and inverse document frequency should be turned on (``sublinear_tf=True, use_idf=True``) to bring the feature values closer to a Gaussian distribution, compensating for LSA's erroneous assumptions about textual data.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`

.. rubric:: References

* Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze (2008), *Introduction to Information Retrieval*, Cambridge University Press, chapter 18: `Matrix decompositions & latent semantic indexing <https://nlp.stanford.edu/IR-book/pdf/18lsi.pdf>`_

.. _DictionaryLearning:

Dictionary Learning
===================

.. _SparseCoder:

Sparse coding with a precomputed dictionary
-------------------------------------------

The :class:`SparseCoder` object is an estimator that can be used to transform signals into sparse linear combinations of atoms from a fixed, precomputed dictionary such as a discrete wavelet basis. This object therefore does not implement a ``fit`` method. The transformation amounts to a sparse coding problem: finding a representation of the data as a linear combination of as few dictionary atoms as possible. All variations of dictionary learning implement the following transform methods, controllable via the ``transform_method`` initialization parameter:

* Orthogonal matching pursuit (:ref:`omp`)
* Least
  angle regression (:ref:`least_angle_regression`)
* Lasso computed by least angle regression
* Lasso using coordinate descent (:ref:`lasso`)
* Thresholding

Thresholding is very fast but it does not yield accurate reconstructions. It has nevertheless been shown useful in the literature for classification tasks. For image reconstruction tasks, orthogonal matching pursuit yields the most accurate, unbiased reconstruction.

The dictionary learning objects offer, via the ``split_code`` parameter, the possibility to separate the positive and negative values in the results of sparse coding. This is useful when dictionary learning is used for extracting features that will be used for supervised learning, because it allows the learning algorithm to assign different weights to the negative loadings of a particular atom than to the corresponding positive loadings.

The split code for a single sample has length ``2 * n_components`` and is constructed using the following rule: first, the regular code of length ``n_components`` is computed. Then, the first ``n_components`` entries of the split code are filled with the positive part of the regular code vector. The second half of the split code is filled with the negative part of the code vector, only with a positive sign. Therefore, the split code is non-negative.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_sparse_coding.py`

Generic dictionary learning
---------------------------

Dictionary learning (:class:`DictionaryLearning`) is a matrix factorization problem that amounts to finding a (usually overcomplete) dictionary that will perform well at sparsely encoding the fitted data.

Representing data as sparse combinations of atoms from an overcomplete dictionary is suggested to be the way the mammalian primary visual cortex works. Consequently, dictionary learning applied on image patches has been shown to give good results in image processing tasks such as image completion, inpainting and denoising, as well as for supervised recognition tasks.

Dictionary learning is an optimization problem solved by alternatively updating the sparse code, as a solution to multiple Lasso problems, considering the dictionary fixed, and then updating the dictionary to best fit the sparse code:

.. math::
   (U^*, V^*) = \underset{U, V}{\operatorname{arg\,min}} \frac{1}{2} \|X - UV\|_{\text{Fro}}^2 + \alpha \|U\|_{1,1} \\
   \text{subject to } \|V_k\|_2 \leq 1 \text{ for all } 0 \leq k < n_{\mathrm{atoms}}

.. |pca_img2| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60

.. |dict_img2| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_007.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60

.. centered:: |pca_img2| |dict_img2|

:math:`\|\cdot\|_{\text{Fro}}` stands for the Frobenius norm and :math:`\|\cdot\|_{1,1}` stands for the entry-wise matrix norm which is the sum of the absolute values of all the entries in the matrix. After using such a procedure to fit the dictionary, the transform is simply a sparse coding step that shares the same implementation with all dictionary learning objects (see :ref:`SparseCoder`).

It is also possible to constrain the dictionary and/or code to be positive to match constraints that may be present in the data. Below are the faces with different positivity constraints applied. Red indicates negative values, blue indicates positive values, and white represents zeros.

.. |dict_img_pos1| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_010.png
   :target: ../auto_examples/decomposition/plot_image_denoising.html
   :scale: 60

.. |dict_img_pos2| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_011.png
   :target: ../auto_examples/decomposition/plot_image
denoising.html
   :scale: 60

.. |dict_img_pos3| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_012.png
   :target: ../auto_examples/decomposition/plot_image_denoising.html
   :scale: 60

.. |dict_img_pos4| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_013.png
   :target: ../auto_examples/decomposition/plot_image_denoising.html
   :scale: 60

.. centered:: |dict_img_pos1| |dict_img_pos2|

.. centered:: |dict_img_pos3| |dict_img_pos4|

The following image shows what a dictionary learned from 4x4 pixel image patches, extracted from part of the image of a raccoon face, looks like.

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_image_denoising_001.png
   :target: ../auto_examples/decomposition/plot_image_denoising.html
   :align: center
   :scale: 50

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_image_denoising.py`

.. rubric:: References

* `"Online dictionary learning for sparse coding" <https://www.di.ens.fr/~fbach/mairal_icml09.pdf>`_ J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009

.. _MiniBatchDictionaryLearning:

Mini-batch dictionary learning
------------------------------

:class:`MiniBatchDictionaryLearning` implements a faster, but less accurate version of the dictionary learning algorithm that is better suited for large datasets.

By default, :class:`MiniBatchDictionaryLearning` divides the data into mini-batches and optimizes in an online manner by cycling over the mini-batches for the specified number of iterations. However, at the moment it does not implement a stopping condition.

The estimator also implements ``partial_fit``, which updates the dictionary by iterating only once over a mini-batch. This can be used for online learning when the data is not readily available from the start, or when the data does not fit into memory.

.. currentmodule:: sklearn.cluster

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_
dict_face_patches_001.png
   :target: ../auto_examples/cluster/plot_dict_face_patches.html
   :scale: 50
   :align: right

.. topic:: **Clustering for dictionary learning**

   Note that when using dictionary learning to extract a representation (e.g. for sparse coding) clustering can be a good proxy to learn the dictionary. For instance the :class:`MiniBatchKMeans` estimator is computationally efficient and implements on-line learning with a ``partial_fit`` method.

   Example: :ref:`sphx_glr_auto_examples_cluster_plot_dict_face_patches.py`

.. currentmodule:: sklearn.decomposition

.. _FA:

Factor Analysis
===============

In unsupervised learning we only have a dataset :math:`X = \{x_1, x_2, \dots, x_n\}`. How can this dataset be described mathematically? A very simple *continuous latent variable* model for :math:`X` is

.. math:: x_i = W h_i + \mu + \epsilon

The vector :math:`h_i` is called "latent" because it is unobserved. :math:`\epsilon` is considered a noise term distributed according to a Gaussian with mean 0 and covariance :math:`\Psi` (i.e. :math:`\epsilon \sim \mathcal{N}(0, \Psi)`), and :math:`\mu` is some arbitrary offset vector. Such a model is called "generative" as it describes how :math:`x_i` is generated from :math:`h_i`. If we use all the :math:`x_i`'s as columns to form a matrix :math:`\mathbf{X}` and all the :math:`h_i`'s as columns of a matrix :math:`\mathbf{H}`, then we can write (with suitably defined :math:`\mathbf{M}` and :math:`\mathbf{E}`):

.. math::
   \mathbf{X} = W \mathbf{H} + \mathbf{M} + \mathbf{E}

In other words, we *decomposed* matrix :math:`\mathbf{X}`.

If :math:`h_i` is given, the above equation automatically implies the following probabilistic interpretation:

.. math:: p(x_i|h_i) = \mathcal{N}(Wh_i + \mu, \Psi)

For a complete probabilistic model we also need a prior distribution for the latent variable :math:`h`. The most straightforward assumption (based on the nice properties of the Gaussian distribution) is :math:`h \sim \mathcal{N}(0, \mathbf{I})`. This yields a Gaussian as the marginal distribution of :math:`x`:

.. math:: p(x) = \mathcal{N}(\mu, WW^T + \Psi)

Now, without any further assumptions the idea of having a latent variable :math:`h` would be superfluous: :math:`x` can be completely modelled with a mean and a covariance. We need to impose some more specific structure on one of these two parameters. A simple additional assumption regards the structure of the error covariance :math:`\Psi`:

* :math:`\Psi = \sigma^2 \mathbf{I}`: This assumption leads to the probabilistic model of :class:`PCA`.
* :math:`\Psi = \mathrm{diag}(\psi_1, \psi_2, \dots, \psi_n)`: This model is called :class:`FactorAnalysis`, a classical statistical model. The matrix W is sometimes called the "factor loading matrix".

Both models essentially estimate a Gaussian with a low-rank covariance matrix. Because both models are probabilistic they can be integrated in more complex models, e.g. Mixture of Factor Analysers. One gets very different models (e.g. :class:`FastICA`) if non-Gaussian priors on the latent variables are assumed.

Factor analysis *can* produce similar components (the columns of its loading matrix) to :class:`PCA`. However, one can not make any general statements about these components (e.g. whether they are orthogonal):

.. |pca_img3| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60

.. |fa_img3| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_008.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60

.. centered:: |pca_img3| |fa_img3|

The main advantage for Factor Analysis over :class:`PCA` is that it can model the variance in every direction of the input space independently (heteroscedastic noise):

.. figure:: ../auto_examples/decomposition/images/sphx_glr_
plot_faces_decomposition_009.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :align: center
   :scale: 75

This allows better model selection than probabilistic PCA in the presence of heteroscedastic noise:

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_pca_vs_fa_model_selection_002.png
   :target: ../auto_examples/decomposition/plot_pca_vs_fa_model_selection.html
   :align: center
   :scale: 75

Factor Analysis is often followed by a rotation of the factors (with the parameter ``rotation``), usually to improve interpretability. For example, Varimax rotation maximizes the sum of the variances of the squared loadings, i.e., it tends to produce sparser factors, which are influenced by only a few features each (the "simple structure"). See e.g. the first example below.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_varimax_fa.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_fa_model_selection.py`

.. _ICA:

Independent component analysis (ICA)
====================================

Independent component analysis separates a multivariate signal into additive subcomponents that are maximally independent. It is implemented in scikit-learn using the :class:`FastICA` algorithm. Typically, ICA is not used for reducing dimensionality but for separating superimposed signals. Since the ICA model does not include a noise term, for the model to be correct, whitening must be applied. This can be done internally using the ``whiten`` argument or manually using one of the PCA variants.

It is classically used to separate mixed signals (a problem known as *blind source separation*), as in the example below:

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_ica_blind_source_separation_001.png
   :target: ../auto_examples/decomposition/plot_ica_blind_source_separation.html
   :align: center
   :scale: 60

ICA can also be used as yet another non-linear decomposition
that finds components with some sparsity:

.. |pca_img4| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60

.. |ica_img4| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_004.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60

.. centered:: |pca_img4| |ica_img4|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_ica_blind_source_separation.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_ica_vs_pca.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`

.. _NMF:

Non-negative matrix factorization (NMF or NNMF)
===============================================

NMF with the Frobenius norm
---------------------------

:class:`NMF` [1]_ is an alternative approach to decomposition that assumes that the data and the components are non-negative. :class:`NMF` can be plugged in instead of :class:`PCA` or its variants, in the cases where the data matrix does not contain negative values. It finds a decomposition of samples :math:`X` into two matrices :math:`W` and :math:`H` of non-negative elements, by optimizing the distance :math:`d` between :math:`X` and the matrix product :math:`WH`. The most widely used distance function is the squared Frobenius norm, which is an obvious extension of the Euclidean norm to matrices:

.. math::
   d_{\mathrm{Fro}}(X, Y) = \frac{1}{2} \|X - Y\|_{\mathrm{Fro}}^2 = \frac{1}{2} \sum_{i,j} (X_{ij} - Y_{ij})^2

Unlike :class:`PCA`, the representation of a vector is obtained in an additive fashion, by superimposing the components, without subtracting. Such additive models are efficient for representing images and text.

It has been observed in [Hoyer, 2004] [2]_ that, when carefully constrained, :class:`NMF` can produce a parts-based representation of the dataset, resulting in interpretable models. The following example displays 16 sparse components found by :class:`NMF` from the images in the Olivetti faces dataset, in comparison with the PCA eigenfaces.

.. |pca_img5| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60

.. |nmf_img5| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_003.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60

.. centered:: |pca_img5| |nmf_img5|

The ``init`` attribute determines the initialization method applied, which has a great impact on the performance of the method. :class:`NMF` implements the method Nonnegative Double Singular Value Decomposition (NNDSVD) [4]_, which is based on two SVD processes: one approximating the data matrix, the other approximating positive sections of the resulting partial SVD factors utilizing an algebraic property of unit rank matrices. The basic NNDSVD algorithm is better fit for sparse factorization. Its variants NNDSVDa (in which all zeros are set equal to the mean of all elements of the data) and NNDSVDar (in which the zeros are set to random perturbations less than the mean of the data divided by 100) are recommended in the dense case.

Note that the Multiplicative Update ('mu') solver cannot update zeros present in the initialization, so it leads to poorer results when used jointly with the basic NNDSVD algorithm which introduces a lot of zeros; in this case, NNDSVDa or NNDSVDar should be preferred.

:class:`NMF` can also be initialized with correctly scaled random non-negative matrices by setting ``init="random"``. An integer seed or a ``RandomState`` can also be passed to ``random_state`` to control reproducibility.

In :class:`NMF`, L1 and L2 priors can be added to the loss function in order to regularize the model. The L2 prior uses the Frobenius norm, while the L1 prior uses an elementwise L1
norm. As in :class:`~sklearn.linear_model.ElasticNet`, we control the combination of L1 and L2 with the ``l1_ratio`` (:math:`\rho`) parameter, and the intensity of the regularization with the ``alpha_W`` and ``alpha_H`` (:math:`\alpha_W` and :math:`\alpha_H`) parameters. The priors are scaled by the number of samples (:math:`n\_samples`) for `H` and the number of features (:math:`n\_features`) for `W` to keep their impact balanced with respect to one another and to the data fit term as independent as possible of the size of the training set. Then the priors terms are:

.. math::
   (\alpha_W \rho \|W\|_1 + \frac{\alpha_W (1 - \rho)}{2} \|W\|_{\mathrm{Fro}}^2) \cdot n\_features
   + (\alpha_H \rho \|H\|_1 + \frac{\alpha_H (1 - \rho)}{2} \|H\|_{\mathrm{Fro}}^2) \cdot n\_samples

and the regularized objective function is:

.. math::
   d_{\mathrm{Fro}}(X, WH)
   + (\alpha_W \rho \|W\|_1 + \frac{\alpha_W (1 - \rho)}{2} \|W\|_{\mathrm{Fro}}^2) \cdot n\_features
   + (\alpha_H \rho \|H\|_1 + \frac{\alpha_H (1 - \rho)}{2} \|H\|_{\mathrm{Fro}}^2) \cdot n\_samples

NMF with a beta-divergence
--------------------------

As described previously, the most widely used distance function is the squared Frobenius norm, which is an obvious extension of the Euclidean norm to matrices:

.. math::
   d_{\mathrm{Fro}}(X, Y) = \frac{1}{2} \|X - Y\|_{\mathrm{Fro}}^2 = \frac{1}{2} \sum_{i,j} (X_{ij} - Y_{ij})^2

Other distance functions can be used in NMF as, for example, the (generalized) Kullback-Leibler (KL) divergence, also referred as I-divergence:

.. math::
   d_{KL}(X, Y) = \sum_{i,j} \left(X_{ij} \log\left(\frac{X_{ij}}{Y_{ij}}\right) - X_{ij} + Y_{ij}\right)

Or, the Itakura-Saito (IS) divergence:

.. math::
   d_{IS}(X, Y) = \sum_{i,j} \left(\frac{X_{ij}}{Y_{ij}} - \log\left(\frac{X_{ij}}{Y_{ij}}\right) - 1\right)

These three distances are special cases of the beta-divergence family, with :math:`\beta = 2, 1, 0` respectively [6]_. The beta-divergence is defined by:

.. math::
   d_{\beta}(X, Y) = \sum_{i,j} \frac{1}{\beta(\beta - 1)} \left(X_{ij}^\beta + (\beta - 1) Y_{ij}^\beta - \beta X_{ij} Y_{ij}^{\beta - 1}\right)

.. image:: ../images/beta_divergence.png
   :align: center
   :scale: 75

Note that this definition is not valid if :math:`\beta \in (0; 1)`, yet it can be continuously extended to the definitions of :math:`d_{KL}` and :math:`d_{IS}` respectively.

.. dropdown:: NMF implemented solvers

   :class:`NMF` implements two solvers, using Coordinate Descent ('cd') [5]_, and Multiplicative Update ('mu') [6]_. The 'mu' solver can optimize every beta-divergence, including of course the Frobenius norm (:math:`\beta=2`), the (generalized) Kullback-Leibler divergence (:math:`\beta=1`) and the Itakura-Saito divergence (:math:`\beta=0`). Note that for :math:`\beta \in (1; 2)`, the 'mu' solver is significantly faster than for other values of :math:`\beta`. Note also that with a negative (or 0, i.e. 'itakura-saito') :math:`\beta`, the input matrix cannot contain zero values.

   The 'cd' solver can only optimize the Frobenius norm. Due to the underlying non-convexity of NMF, the different solvers may converge to different minima, even when optimizing the same distance function.

NMF is best used with the ``fit_transform`` method, which returns the matrix W. The matrix H is stored into the fitted model in the ``components_`` attribute; the method ``transform`` will decompose a new matrix X_new based on these stored components::

    >>> import numpy as np
    >>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
    >>> from sklearn.decomposition import NMF
    >>> model = NMF(n_components=2, init='random', random_state=0)
    >>> W = model.fit_transform(X)
    >>> H = model.components_
    >>> X_new = np.array([[1, 0], [1, 6.1], [1, 0], [1, 4], [3.2, 1], [0, 4]])
    >>> W_new = model.transform(X_new)

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`
* :ref:`sphx_glr_auto_examples_applications_plot_topics
_extraction_with_nmf_lda.py`

.. _MiniBatchNMF:

Mini-batch Non Negative Matrix Factorization
--------------------------------------------

:class:`MiniBatchNMF` [7]_ implements a faster, but less accurate version of the non-negative matrix factorization (i.e. :class:`~sklearn.decomposition.NMF`), better suited for large datasets.

By default, :class:`MiniBatchNMF` divides the data into mini-batches and optimizes the NMF model in an online manner by cycling over the mini-batches for the specified number of iterations. The ``batch_size`` parameter controls the size of the batches.

In order to speed up the mini-batch algorithm it is also possible to scale past batches, giving them less importance than newer batches. This is done by introducing a so-called forgetting factor controlled by the ``forget_factor`` parameter.

The estimator also implements ``partial_fit``, which updates ``H`` by iterating only once over a mini-batch. This can be used for online learning when the data is not readily available from the start, or when the data does not fit into memory.

.. rubric:: References

.. [1] `"Learning the parts of objects by non-negative matrix factorization" <http://www.cs.columbia.edu/~blei/fogm/2020F/readings/LeeSeung1999.pdf>`_ D. Lee, S. Seung, 1999
.. [2] `"Non-negative Matrix Factorization with Sparseness Constraints" <https://www.jmlr.org/papers/volume5/hoyer04a/hoyer04a.pdf>`_ P. Hoyer, 2004
.. [4] `"SVD based initialization: A head start for nonnegative matrix factorization" <https://www.boutsidis.org/Boutsidis_PRE_08.pdf>`_ C. Boutsidis, E. Gallopoulos, 2008
.. [5] `"Fast local algorithms for large scale nonnegative matrix and tensor factorizations" <https://www.researchgate.net/profile/Anh-Huy-Phan/publication/220241471>`_ A. Cichocki, A. Phan, 2009
.. [6] :arxiv:`"Algorithms for nonnegative matrix factorization with the beta-divergence" <1010.1763>` C. Fevotte, J. Idier, 2011
.. [7] :arxiv:`"Online algorithms for nonnegative matrix factorization with the Itakura-Saito divergence" <1106.4198>` A. Lefevre, F. Bach, C. Fevotte, 2011

.. _LatentDirichletAllocation:

Latent Dirichlet Allocation (LDA)
=================================

Latent Dirichlet Allocation is a generative probabilistic model for collections of discrete datasets such as text corpora. It is also a topic model that is used for discovering abstract topics from a collection of documents.

The graphical model of LDA is a three-level generative model:

.. image:: ../images/lda_model_graph.png
   :align: center

Note on notations presented in the graphical model above, which can be found in Hoffman et al. (2013):

* The corpus is a collection of :math:`D` documents.
* A document is a sequence of :math:`N` words.
* There are :math:`K` topics in the corpus.
* The boxes represent repeated sampling.

In the graphical model, each node is a random variable and has a role in the generative process. A shaded node indicates an observed variable and an unshaded node indicates a hidden (latent) variable. In this case, words in the corpus are the only data that we observe. The latent variables determine the random mixture of topics in the corpus and the distribution of words in the documents. The goal of LDA is to use the observed words to infer the hidden topic structure.

.. dropdown:: Details on modeling text corpora

   When modeling text corpora, the model assumes the following generative process for a corpus with :math:`D` documents and :math:`K` topics, with :math:`K` corresponding to ``n_components`` in the API:

   1. For each topic :math:`k \in K`, draw :math:`\beta_k \sim \mathrm{Dirichlet}(\eta)`. This provides a distribution over the words, i.e. the probability of a word appearing in topic :math:`k`. :math:`\eta` corresponds to ``topic_word_prior``.
   2. For each document :math:`d \in D`, draw the topic proportions :math:`\theta_d \sim \mathrm{Dirichlet}(\alpha)`. :math:`\alpha` corresponds to ``doc_topic_prior``.
   3. For each word :math:`i` in document :math:`d`:

      a. Draw the topic assignment :math:`z_{di} \sim \mathrm{Multinomial}(\theta_d)`
      b. Draw the observed word :math:`w_{ij} \sim \mathrm{Multinomial}(\beta_{z_{di}})`

   For parameter estimation, the posterior distribution is:

   .. math::
      p(z, \theta, \beta | w, \alpha, \eta) = \frac{p(z, \theta, \beta, w | \alpha, \eta)}{p(w | \alpha, \eta)}

   Since the posterior is intractable, the variational Bayesian method uses a simpler distribution :math:`q(z, \theta, \beta | \lambda, \phi, \gamma)` to approximate it, and those variational parameters :math:`\lambda`, :math:`\phi`, :math:`\gamma` are optimized to maximize the Evidence Lower Bound (ELBO):

   .. math::
      \log P(w | \alpha, \eta) \geq L(w, \phi, \gamma, \lambda) \overset{\triangle}{=} E_q[\log p(w, z, \theta, \beta | \alpha, \eta)] - E_q[\log q(z, \theta, \beta)]

   Maximizing ELBO is equivalent to minimizing the Kullback-Leibler (KL) divergence between :math:`q(z, \theta, \beta)` and the true posterior :math:`p(z, \theta, \beta | w, \alpha, \eta)`.

:class:`LatentDirichletAllocation` implements the online variational Bayes algorithm and supports both online and batch update methods. While the batch method updates variational variables after each full pass through the data, the online method updates variational variables from mini-batch data points.

.. note::

   Although the online method is guaranteed to converge to a local optimum point, the quality of the optimum point and the speed of convergence may depend on mini-batch size and attributes related to learning rate setting.

When :class:`LatentDirichletAllocation` is applied on a "document-term" matrix, the matrix will be decomposed into a "topic-term" matrix and a "document-topic" matrix. While
topic term  matrix is stored as  components   in the model   document topic  matrix can be calculated from   transform   method    class  LatentDirichletAllocation  also implements   partial fit   method  This is used when data can be fetched sequentially      rubric   Examples     ref  sphx glr auto examples applications plot topics extraction with nmf lda py      rubric   References      Latent Dirichlet Allocation     https   www jmlr org papers volume3 blei03a blei03a pdf      D  Blei  A  Ng  M  Jordan  2003      Online Learning for Latent Dirichlet Allocation     https   papers nips cc paper 3902 online learning for latent dirichlet allocation pdf      M  Hoffman  D  Blei  F  Bach  2010      Stochastic Variational Inference     https   www cs columbia edu  blei papers HoffmanBleiWangPaisley2013 pdf      M  Hoffman  D  Blei  C  Wang  J  Paisley  2013      The varimax criterion for analytic rotation in factor analysis     https   link springer com article 10 1007 2FBF02289233      H  F  Kaiser  1958  See also  ref  nca dim reduction  for dimensionality reduction with Neighborhood Components Analysis "}
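A minimal sketch of the decomposition described above. The toy corpus, `n_components=2`, and all parameter values here are invented for illustration and are not part of the documentation text:

```python
# Illustrative sketch: decomposing a small "document-term" matrix with
# LatentDirichletAllocation. The corpus and parameters are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "apples and oranges are fruit",
    "fruit salad with apples",
    "the stock market fell today",
    "investors watch the market",
]
X = CountVectorizer().fit_transform(docs)      # "document-term" matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)               # "document-topic" matrix
topic_term = lda.components_                   # "topic-term" matrix

print(doc_topic.shape)    # one row per document, one column per topic
print(topic_term.shape)   # one row per topic, one column per term
```

For sequentially arriving batches, `partial_fit` can be called repeatedly on the same estimator instead of `fit`.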
{"questions":"scikit-learn Each clustering algorithm comes in two variants a class that implements clustering of unlabeled data can be performed with the module Clustering","answers":".. _clustering:\n\n==========\nClustering\n==========\n\n`Clustering <https:\/\/en.wikipedia.org\/wiki\/Cluster_analysis>`__ of\nunlabeled data can be performed with the module :mod:`sklearn.cluster`.\n\nEach clustering algorithm comes in two variants: a class, that implements\nthe ``fit`` method to learn the clusters on train data, and a function,\nthat, given train data, returns an array of integer labels corresponding\nto the different clusters. For the class, the labels over the training\ndata can be found in the ``labels_`` attribute.\n\n.. currentmodule:: sklearn.cluster\n\n.. topic:: Input data\n\n    One important thing to note is that the algorithms implemented in\n    this module can take different kinds of matrix as input. All the\n    methods accept standard data matrices of shape ``(n_samples, n_features)``.\n    These can be obtained from the classes in the :mod:`sklearn.feature_extraction`\n    module. For :class:`AffinityPropagation`, :class:`SpectralClustering`\n    and :class:`DBSCAN` one can also input similarity matrices of shape\n    ``(n_samples, n_samples)``. These can be obtained from the functions\n    in the :mod:`sklearn.metrics.pairwise` module.\n\nOverview of clustering methods\n===============================\n\n.. figure:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_cluster_comparison_001.png\n   :target: ..\/auto_examples\/cluster\/plot_cluster_comparison.html\n   :align: center\n   :scale: 50\n\n   A comparison of the clustering algorithms in scikit-learn\n\n\n.. 
list-table::\n   :header-rows: 1\n   :widths: 14 15 19 25 20\n\n   * - Method name\n     - Parameters\n     - Scalability\n     - Usecase\n     - Geometry (metric used)\n\n   * - :ref:`K-Means <k_means>`\n     - number of clusters\n     - Very large ``n_samples``, medium ``n_clusters`` with\n       :ref:`MiniBatch code <mini_batch_kmeans>`\n     - General-purpose, even cluster size, flat geometry,\n       not too many clusters, inductive\n     - Distances between points\n\n   * - :ref:`Affinity propagation <affinity_propagation>`\n     - damping, sample preference\n     - Not scalable with n_samples\n     - Many clusters, uneven cluster size, non-flat geometry, inductive\n     - Graph distance (e.g. nearest-neighbor graph)\n\n   * - :ref:`Mean-shift <mean_shift>`\n     - bandwidth\n     - Not scalable with ``n_samples``\n     - Many clusters, uneven cluster size, non-flat geometry, inductive\n     - Distances between points\n\n   * - :ref:`Spectral clustering <spectral_clustering>`\n     - number of clusters\n     - Medium ``n_samples``, small ``n_clusters``\n     - Few clusters, even cluster size, non-flat geometry, transductive\n     - Graph distance (e.g. 
nearest-neighbor graph)\n\n   * - :ref:`Ward hierarchical clustering <hierarchical_clustering>`\n     - number of clusters or distance threshold\n     - Large ``n_samples`` and ``n_clusters``\n     - Many clusters, possibly connectivity constraints, transductive\n     - Distances between points\n\n   * - :ref:`Agglomerative clustering <hierarchical_clustering>`\n     - number of clusters or distance threshold, linkage type, distance\n     - Large ``n_samples`` and ``n_clusters``\n     - Many clusters, possibly connectivity constraints, non Euclidean\n       distances, transductive\n     - Any pairwise distance\n\n   * - :ref:`DBSCAN <dbscan>`\n     - neighborhood size\n     - Very large ``n_samples``, medium ``n_clusters``\n     - Non-flat geometry, uneven cluster sizes, outlier removal,\n       transductive\n     - Distances between nearest points\n\n   * - :ref:`HDBSCAN <hdbscan>`\n     - minimum cluster membership, minimum point neighbors\n     - large ``n_samples``, medium ``n_clusters``\n     - Non-flat geometry, uneven cluster sizes, outlier removal,\n       transductive, hierarchical, variable cluster density\n     - Distances between nearest points\n\n   * - :ref:`OPTICS <optics>`\n     - minimum cluster membership\n     - Very large ``n_samples``, large ``n_clusters``\n     - Non-flat geometry, uneven cluster sizes, variable cluster density,\n       outlier removal, transductive\n     - Distances between points\n\n   * - :ref:`Gaussian mixtures <mixture>`\n     - many\n     - Not scalable\n     - Flat geometry, good for density estimation, inductive\n     - Mahalanobis distances to  centers\n\n   * - :ref:`BIRCH <birch>`\n     - branching factor, threshold, optional global clusterer.\n     - Large ``n_clusters`` and ``n_samples``\n     - Large dataset, outlier removal, data reduction, inductive\n     - Euclidean distance between points\n\n   * - :ref:`Bisecting K-Means <bisect_k_means>`\n     - number of clusters\n     - Very large ``n_samples``, medium 
``n_clusters``\n     - General-purpose, even cluster size, flat geometry,\n       no empty clusters, inductive, hierarchical\n     - Distances between points\n\nNon-flat geometry clustering is useful when the clusters have a specific\nshape, i.e. a non-flat manifold, and the standard euclidean distance is\nnot the right metric. This case arises in the two top rows of the figure\nabove.\n\nGaussian mixture models, useful for clustering, are described in\n:ref:`another chapter of the documentation <mixture>` dedicated to\nmixture models. KMeans can be seen as a special case of Gaussian mixture\nmodel with equal covariance per component.\n\n:term:`Transductive <transductive>` clustering methods (in contrast to\n:term:`inductive` clustering methods) are not designed to be applied to new,\nunseen data.\n\n.. _k_means:\n\nK-means\n=======\n\nThe :class:`KMeans` algorithm clusters data by trying to separate samples in n\ngroups of equal variance, minimizing a criterion known as the *inertia* or\nwithin-cluster sum-of-squares (see below). This algorithm requires the number\nof clusters to be specified. It scales well to large numbers of samples and has\nbeen used across a large range of application areas in many different fields.\n\nThe k-means algorithm divides a set of :math:`N` samples :math:`X` into\n:math:`K` disjoint clusters :math:`C`, each described by the mean :math:`\\mu_j`\nof the samples in the cluster. The means are commonly called the cluster\n\"centroids\"; note that they are not, in general, points from :math:`X`,\nalthough they live in the same space.\n\nThe K-means algorithm aims to choose centroids that minimise the **inertia**,\nor **within-cluster sum-of-squares criterion**:\n\n.. 
math:: \\sum_{i=0}^{n}\\min_{\\mu_j \\in C}(||x_i - \\mu_j||^2)\n\nInertia can be recognized as a measure of how internally coherent clusters are.\nIt suffers from various drawbacks:\n\n- Inertia makes the assumption that clusters are convex and isotropic,\n  which is not always the case. It responds poorly to elongated clusters,\n  or manifolds with irregular shapes.\n\n- Inertia is not a normalized metric: we just know that lower values are\n  better and zero is optimal. But in very high-dimensional spaces, Euclidean\n  distances tend to become inflated\n  (this is an instance of the so-called \"curse of dimensionality\").\n  Running a dimensionality reduction algorithm such as :ref:`PCA` prior to\n  k-means clustering can alleviate this problem and speed up the\n  computations.\n\n.. image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_kmeans_assumptions_002.png\n   :target: ..\/auto_examples\/cluster\/plot_kmeans_assumptions.html\n   :align: center\n   :scale: 50\n\nFor more detailed descriptions of the issues shown above and how to address them,\nrefer to the examples :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_assumptions.py`\nand :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`.\n\nK-means is often referred to as Lloyd's algorithm. In basic terms, the\nalgorithm has three steps. The first step chooses the initial centroids, with\nthe most basic method being to choose :math:`k` samples from the dataset\n:math:`X`. After initialization, K-means consists of looping between the\ntwo other steps. The first step assigns each sample to its nearest centroid.\nThe second step creates new centroids by taking the mean value of all of the\nsamples assigned to each previous centroid. The difference between the old\nand the new centroids is computed and the algorithm repeats these last two\nsteps until this value is less than a threshold. In other words, it repeats\nuntil the centroids do not move significantly.\n\n.. 
image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_kmeans_digits_001.png\n   :target: ..\/auto_examples\/cluster\/plot_kmeans_digits.html\n   :align: right\n   :scale: 35\n\nK-means is equivalent to the expectation-maximization algorithm\nwith a small, all-equal, diagonal covariance matrix.\n\nThe algorithm can also be understood through the concept of `Voronoi diagrams\n<https:\/\/en.wikipedia.org\/wiki\/Voronoi_diagram>`_. First the Voronoi diagram of\nthe points is calculated using the current centroids. Each segment in the\nVoronoi diagram becomes a separate cluster. Secondly, the centroids are updated\nto the mean of each segment. The algorithm then repeats this until a stopping\ncriterion is fulfilled. Usually, the algorithm stops when the relative decrease\nin the objective function between iterations is less than the given tolerance\nvalue. This is not the case in this implementation: iteration stops when\ncentroids move less than the tolerance.\n\nGiven enough time, K-means will always converge, however this may be to a local\nminimum. This is highly dependent on the initialization of the centroids.\nAs a result, the computation is often done several times, with different\ninitializations of the centroids. One method to help address this issue is the\nk-means++ initialization scheme, which has been implemented in scikit-learn\n(use the ``init='k-means++'`` parameter). This initializes the centroids to be\n(generally) distant from each other, leading to provably better results than\nrandom initialization, as shown in the reference. 
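The two initialization schemes just mentioned can be compared in a short sketch. The blob dataset and parameter values below are invented for illustration:

```python
# Illustrative sketch: k-means++ vs. random centroid initialization.
# Dataset and parameter values are made up for this example.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

for init in ("k-means++", "random"):
    # n_init restarts are run; the best inertia is kept.
    km = KMeans(n_clusters=4, init=init, n_init=10, random_state=0).fit(X)
    print(f"{init}: inertia={km.inertia_:.1f}, iterations={km.n_iter_}")
```

On easy data like this both schemes usually reach a similar inertia; the advantage of k-means++ shows up in the number of iterations and in harder configurations.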
For detailed examples of\ncomparing different initialization schemes, refer to\n:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_digits.py` and\n:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_stability_low_dim_dense.py`.\n\nK-means++ can also be called independently to select seeds for other\nclustering algorithms, see :func:`sklearn.cluster.kmeans_plusplus` for details\nand example usage.\n\nThe algorithm supports sample weights, which can be given by a parameter\n``sample_weight``. This allows assigning more weight to some samples when\ncomputing cluster centers and values of inertia. For example, assigning a\nweight of 2 to a sample is equivalent to adding a duplicate of that sample\nto the dataset :math:`X`.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`: Document clustering\n  using :class:`KMeans` and :class:`MiniBatchKMeans` based on sparse data\n\n* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_plusplus.py`: Using K-means++\n  to select seeds for other clustering algorithms.\n\nLow-level parallelism\n---------------------\n\n:class:`KMeans` benefits from OpenMP based parallelism through Cython. Small\nchunks of data (256 samples) are processed in parallel, which in addition\nyields a low memory footprint. For more details on how to control the number of\nthreads, please refer to our :ref:`parallelism` notes.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_assumptions.py`: Demonstrating when\n  k-means performs intuitively and when it does not\n* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_digits.py`: Clustering handwritten digits\n\n.. dropdown:: References\n\n  * `\"k-means++: The advantages of careful seeding\"\n    <http:\/\/ilpubs.stanford.edu:8090\/778\/1\/2006-13.pdf>`_\n    Arthur, David, and Sergei Vassilvitskii,\n    *Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete\n    algorithms*, Society for Industrial and Applied Mathematics (2007)\n\n\n.. 
_mini_batch_kmeans:\n\nMini Batch K-Means\n------------------\n\nThe :class:`MiniBatchKMeans` is a variant of the :class:`KMeans` algorithm\nwhich uses mini-batches to reduce the computation time, while still attempting\nto optimise the same objective function. Mini-batches are subsets of the input\ndata, randomly sampled in each training iteration. These mini-batches\ndrastically reduce the amount of computation required to converge to a local\nsolution. In contrast to other algorithms that reduce the convergence time of\nk-means, mini-batch k-means produces results that are generally only slightly\nworse than the standard algorithm.\n\nThe algorithm iterates between two major steps, similar to vanilla k-means.\nIn the first step, :math:`b` samples are drawn randomly from the dataset, to form\na mini-batch. These are then assigned to the nearest centroid. In the second\nstep, the centroids are updated. In contrast to k-means, this is done on a\nper-sample basis. For each sample in the mini-batch, the assigned centroid\nis updated by taking the streaming average of the sample and all previous\nsamples assigned to that centroid. This has the effect of decreasing the\nrate of change for a centroid over time. These steps are performed until\nconvergence or a predetermined number of iterations is reached.\n\n:class:`MiniBatchKMeans` converges faster than :class:`KMeans`, but the quality\nof the results is reduced. In practice this difference in quality can be quite\nsmall, as shown in the example and cited reference.\n\n.. figure:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_mini_batch_kmeans_001.png\n   :target: ..\/auto_examples\/cluster\/plot_mini_batch_kmeans.html\n   :align: center\n   :scale: 100\n\n\n.. 
rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_cluster_plot_mini_batch_kmeans.py`: Comparison of\n  :class:`KMeans` and :class:`MiniBatchKMeans`\n\n* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`: Document clustering\n  using :class:`KMeans` and :class:`MiniBatchKMeans` based on sparse data\n\n* :ref:`sphx_glr_auto_examples_cluster_plot_dict_face_patches.py`\n\n.. dropdown:: References\n\n  * `\"Web Scale K-Means clustering\"\n    <https:\/\/www.eecs.tufts.edu\/~dsculley\/papers\/fastkmeans.pdf>`_\n    D. Sculley, *Proceedings of the 19th international conference on World\n    wide web* (2010)\n\n.. _affinity_propagation:\n\nAffinity Propagation\n====================\n\n:class:`AffinityPropagation` creates clusters by sending messages between\npairs of samples until convergence. A dataset is then described using a small\nnumber of exemplars, which are identified as those most representative of other\nsamples. The messages sent between pairs represent the suitability for one\nsample to be the exemplar of the other, which is updated in response to the\nvalues from other pairs. This updating happens iteratively until convergence,\nat which point the final exemplars are chosen, and hence the final clustering\nis given.\n\n.. figure:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_affinity_propagation_001.png\n   :target: ..\/auto_examples\/cluster\/plot_affinity_propagation.html\n   :align: center\n   :scale: 50\n\n\nAffinity Propagation can be interesting as it chooses the number of\nclusters based on the data provided. For this purpose, the two important\nparameters are the *preference*, which controls how many exemplars are\nused, and the *damping factor* which damps the responsibility and\navailability messages to avoid numerical oscillations when updating these\nmessages.\n\nThe main drawback of Affinity Propagation is its complexity. 
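Before the complexity caveats, the two parameters described above can be sketched in code. The dataset, `preference` values, and `damping` setting below are invented for illustration:

```python
# Illustrative sketch: how `preference` and `damping` shape the number of
# exemplars AffinityPropagation selects. All values are made up.
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)

for preference in (None, -50):   # None -> median similarity (the default)
    ap = AffinityPropagation(damping=0.9, preference=preference,
                             random_state=0).fit(X)
    print(preference, len(ap.cluster_centers_indices_))
```

Lower (more negative) preference values yield fewer exemplars; higher damping slows the message updates to avoid oscillation.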
The\nalgorithm has a time complexity of the order :math:`O(N^2 T)`, where :math:`N`\nis the number of samples and :math:`T` is the number of iterations until\nconvergence. Further, the memory complexity is of the order\n:math:`O(N^2)` if a dense similarity matrix is used, but reducible if a\nsparse similarity matrix is used. This makes Affinity Propagation most\nappropriate for small to medium sized datasets.\n\n.. dropdown:: Algorithm description\n\n  The messages sent between points belong to one of two categories. The first is\n  the responsibility :math:`r(i, k)`, which is the accumulated evidence that\n  sample :math:`k` should be the exemplar for sample :math:`i`. The second is the\n  availability :math:`a(i, k)` which is the accumulated evidence that sample\n  :math:`i` should choose sample :math:`k` to be its exemplar, and considers the\n  values for all other samples that :math:`k` should be an exemplar. In this way,\n  exemplars are chosen by samples if they are (1) similar enough to many samples\n  and (2) chosen by many samples to be representative of themselves.\n\n  More formally, the responsibility of a sample :math:`k` to be the exemplar of\n  sample :math:`i` is given by:\n\n  .. math::\n\n      r(i, k) \\leftarrow s(i, k) - max [ a(i, k') + s(i, k') \\forall k' \\neq k ]\n\n  Where :math:`s(i, k)` is the similarity between samples :math:`i` and :math:`k`.\n  The availability of sample :math:`k` to be the exemplar of sample :math:`i` is\n  given by:\n\n  .. math::\n\n      a(i, k) \\leftarrow min [0, r(k, k) + \\sum_{i'~s.t.~i' \\notin \\{i, k\\}}{r(i',\n      k)}]\n\n  To begin with, all values for :math:`r` and :math:`a` are set to zero, and the\n  calculation of each iterates until convergence. As discussed above, in order to\n  avoid numerical oscillations when updating the messages, the damping factor\n  :math:`\\lambda` is introduced to iteration process:\n\n  .. 
math:: r_{t+1}(i, k) = \\lambda\\cdot r_{t}(i, k) + (1-\\lambda)\\cdot r_{t+1}(i, k)\n  .. math:: a_{t+1}(i, k) = \\lambda\\cdot a_{t}(i, k) + (1-\\lambda)\\cdot a_{t+1}(i, k)\n\n  where :math:`t` indicates the iteration times.\n\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_cluster_plot_affinity_propagation.py`: Affinity\n  Propagation on a synthetic 2D datasets with 3 classes\n* :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py` Affinity Propagation\n  on financial time series to find groups of companies\n\n\n.. _mean_shift:\n\nMean Shift\n==========\n:class:`MeanShift` clustering aims to discover *blobs* in a smooth density of\nsamples. It is a centroid based algorithm, which works by updating candidates\nfor centroids to be the mean of the points within a given region. These\ncandidates are then filtered in a post-processing stage to eliminate\nnear-duplicates to form the final set of centroids.\n\n.. dropdown:: Mathematical details\n\n  The position of centroid candidates is iteratively adjusted using a technique\n  called hill climbing, which finds local maxima of the estimated probability\n  density. Given a candidate centroid :math:`x` for iteration :math:`t`, the\n  candidate is updated according to the following equation:\n\n  .. math::\n\n      x^{t+1} = x^t + m(x^t)\n\n  Where :math:`m` is the *mean shift* vector that is computed for each centroid\n  that points towards a region of the maximum increase in the density of points.\n  To compute :math:`m` we define :math:`N(x)` as the neighborhood of samples\n  within a given distance around :math:`x`. Then :math:`m` is computed using the\n  following equation, effectively updating a centroid to be the mean of the\n  samples within its neighborhood:\n\n  .. math::\n\n      m(x) = \\frac{1}{|N(x)|} \\sum_{x_j \\in N(x)}x_j - x\n\n  In general, the equation for :math:`m` depends on a kernel used for density\n  estimation. The generic formula is:\n\n  .. 
math::\n\n      m(x) = \\frac{\\sum_{x_j \\in N(x)}K(x_j - x)x_j}{\\sum_{x_j \\in N(x)}K(x_j -\n      x)} - x\n\n  In our implementation, :math:`K(x)` is equal to 1 if :math:`x` is small enough\n  and is equal to 0 otherwise. Effectively :math:`K(y - x)` indicates whether\n  :math:`y` is in the neighborhood of :math:`x`.\n\n\nThe algorithm automatically sets the number of clusters, instead of relying on a\nparameter ``bandwidth``, which dictates the size of the region to search through.\nThis parameter can be set manually, but can be estimated using the provided\n``estimate_bandwidth`` function, which is called if the bandwidth is not set.\n\nThe algorithm is not highly scalable, as it requires multiple nearest neighbor\nsearches during the execution of the algorithm. The algorithm is guaranteed to\nconverge, however the algorithm will stop iterating when the change in centroids\nis small.\n\nLabelling a new sample is performed by finding the nearest centroid for a\ngiven sample.\n\n\n.. figure:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_mean_shift_001.png\n   :target: ..\/auto_examples\/cluster\/plot_mean_shift.html\n   :align: center\n   :scale: 50\n\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_cluster_plot_mean_shift.py`: Mean Shift clustering\n  on a synthetic 2D datasets with 3 classes.\n\n.. dropdown:: References\n\n  * :doi:`\"Mean shift: A robust approach toward feature space analysis\"\n    <10.1109\/34.1000236>` D. Comaniciu and P. Meer, *IEEE Transactions on Pattern\n    Analysis and Machine Intelligence* (2002)\n\n\n.. 
_spectral_clustering:\n\nSpectral clustering\n===================\n\n:class:`SpectralClustering` performs a low-dimension embedding of the\naffinity matrix between samples, followed by clustering, e.g., by KMeans,\nof the components of the eigenvectors in the low dimensional space.\nIt is especially computationally efficient if the affinity matrix is sparse\nand the `amg` solver is used for the eigenvalue problem (Note, the `amg` solver\nrequires that the `pyamg <https:\/\/github.com\/pyamg\/pyamg>`_ module is installed.)\n\nThe present version of SpectralClustering requires the number of clusters\nto be specified in advance. It works well for a small number of clusters,\nbut is not advised for many clusters.\n\nFor two clusters, SpectralClustering solves a convex relaxation of the\n`normalized cuts <https:\/\/people.eecs.berkeley.edu\/~malik\/papers\/SM-ncut.pdf>`_\nproblem on the similarity graph: cutting the graph in two so that the weight of\nthe edges cut is small compared to the weights of the edges inside each\ncluster. This criterion is especially interesting when working on images, where\ngraph vertices are pixels, and weights of the edges of the similarity graph are\ncomputed using a function of a gradient of the image.\n\n\n.. |noisy_img| image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_segmentation_toy_001.png\n    :target: ..\/auto_examples\/cluster\/plot_segmentation_toy.html\n    :scale: 50\n\n.. |segmented_img| image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_segmentation_toy_002.png\n    :target: ..\/auto_examples\/cluster\/plot_segmentation_toy.html\n    :scale: 50\n\n.. centered:: |noisy_img| |segmented_img|\n\n.. warning:: Transforming distance to well-behaved similarities\n\n    Note that if the values of your similarity matrix are not well\n    distributed, e.g. with negative values or with a distance matrix\n    rather than a similarity, the spectral problem will be singular and\n    the problem not solvable. 
In which case it is advised to apply a\n    transformation to the entries of the matrix. For instance, in the\n    case of a signed distance matrix, it is common to apply a heat kernel::\n\n        similarity = np.exp(-beta * distance \/ distance.std())\n\n    See the examples for such an application.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_cluster_plot_segmentation_toy.py`: Segmenting objects\n  from a noisy background using spectral clustering.\n* :ref:`sphx_glr_auto_examples_cluster_plot_coin_segmentation.py`: Spectral clustering\n  to split the image of coins in regions.\n\n\n.. |coin_kmeans| image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_coin_segmentation_001.png\n  :target: ..\/auto_examples\/cluster\/plot_coin_segmentation.html\n  :scale: 35\n\n.. |coin_discretize| image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_coin_segmentation_002.png\n  :target: ..\/auto_examples\/cluster\/plot_coin_segmentation.html\n  :scale: 35\n\n.. |coin_cluster_qr| image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_coin_segmentation_003.png\n  :target: ..\/auto_examples\/cluster\/plot_coin_segmentation.html\n  :scale: 35\n\n\nDifferent label assignment strategies\n-------------------------------------\n\nDifferent label assignment strategies can be used, corresponding to the\n``assign_labels`` parameter of :class:`SpectralClustering`.\nThe ``\"kmeans\"`` strategy can match finer details, but can be unstable.\nIn particular, unless you control the ``random_state``, it may not be\nreproducible from run-to-run, as it depends on random initialization.\nThe alternative ``\"discretize\"`` strategy is 100% reproducible, but tends\nto create parcels of fairly even and geometrical shape.\nThe recently added ``\"cluster_qr\"`` option is a deterministic alternative that\ntends to create the visually best partitioning on the example application\nbelow.\n\n================================  ================================  
================================\n ``assign_labels=\"kmeans\"``        ``assign_labels=\"discretize\"``    ``assign_labels=\"cluster_qr\"``\n================================  ================================  ================================\n|coin_kmeans|                          |coin_discretize|                  |coin_cluster_qr|\n================================  ================================  ================================\n\n.. dropdown:: References\n\n  * `\"Multiclass spectral clustering\"\n    <https:\/\/people.eecs.berkeley.edu\/~jordan\/courses\/281B-spring04\/readings\/yu-shi.pdf>`_\n    Stella X. Yu, Jianbo Shi, 2003\n\n  * :doi:`\"Simple, direct, and efficient multi-way spectral clustering\"<10.1093\/imaiai\/iay008>`\n    Anil Damle, Victor Minden, Lexing Ying, 2019\n\n\n.. _spectral_clustering_graph:\n\nSpectral Clustering Graphs\n--------------------------\n\nSpectral Clustering can also be used to partition graphs via their spectral\nembeddings.  In this case, the affinity matrix is the adjacency matrix of the\ngraph, and SpectralClustering is initialized with `affinity='precomputed'`::\n\n    >>> from sklearn.cluster import SpectralClustering\n    >>> sc = SpectralClustering(3, affinity='precomputed', n_init=100,\n    ...                         assign_labels='discretize')\n    >>> sc.fit_predict(adjacency_matrix)  # doctest: +SKIP\n\n.. dropdown:: References\n\n  * :doi:`\"A Tutorial on Spectral Clustering\" <10.1007\/s11222-007-9033-z>` Ulrike\n    von Luxburg, 2007\n\n  * :doi:`\"Normalized cuts and image segmentation\" <10.1109\/34.868688>` Jianbo\n    Shi, Jitendra Malik, 2000\n\n  * `\"A Random Walks View of Spectral Segmentation\"\n    <https:\/\/citeseerx.ist.psu.edu\/doc_view\/pid\/84a86a69315e994cfd1e0c7debb86d62d7bd1f44>`_\n    Marina Meila, Jianbo Shi, 2001\n\n  * `\"On Spectral Clustering: Analysis and an algorithm\"\n    <https:\/\/citeseerx.ist.psu.edu\/doc_view\/pid\/796c5d6336fc52aa84db575fb821c78918b65f58>`_\n    Andrew Y. 
Ng, Michael I. Jordan, Yair Weiss, 2001

  * :arxiv:`"Preconditioned Spectral Clustering for Stochastic Block Partition
    Streaming Graph Challenge" <1708.07481>` David Zhuzhunashvili, Andrew Knyazev


.. _hierarchical_clustering:

Hierarchical clustering
=======================

Hierarchical clustering is a general family of clustering algorithms that
build nested clusters by merging or splitting them successively. This
hierarchy of clusters is represented as a tree (or dendrogram). The root of the
tree is the unique cluster that gathers all the samples, the leaves being the
clusters with only one sample. See the `Wikipedia page
<https://en.wikipedia.org/wiki/Hierarchical_clustering>`_ for more details.

The :class:`AgglomerativeClustering` object performs a hierarchical clustering
using a bottom-up approach: each observation starts in its own cluster, and
clusters are successively merged together. The linkage criterion determines the
metric used for the merge strategy:

- **Ward** minimizes the sum of squared differences within all clusters. It is a
  variance-minimizing approach and in this sense is similar to the k-means
  objective function but tackled with an agglomerative hierarchical
  approach.
- **Maximum** or **complete linkage** minimizes the maximum distance between
  observations of pairs of clusters.
- **Average linkage** minimizes the average of the distances between all
  observations of pairs of clusters.
- **Single linkage** minimizes the distance between the closest
  observations of pairs of clusters.

:class:`AgglomerativeClustering` can also scale to a large number of samples
when it is used jointly with a connectivity matrix, but is computationally
expensive when no connectivity constraints are added between samples: it
considers at each step all the possible merges.

.. 
topic:: :class:`FeatureAgglomeration`

   The :class:`FeatureAgglomeration` uses agglomerative clustering to
   group together features that look very similar, thus decreasing the
   number of features. It is a dimensionality reduction tool, see
   :ref:`data_reduction`.

Different linkage types: Ward, complete, average, and single linkage
--------------------------------------------------------------------

:class:`AgglomerativeClustering` supports Ward, single, average, and complete
linkage strategies.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_linkage_comparison_001.png
    :target: ../auto_examples/cluster/plot_linkage_comparison.html
    :scale: 43

Agglomerative clustering has a "rich get richer" behavior that leads to
uneven cluster sizes. In this regard, single linkage is the worst
strategy, and Ward gives the most regular sizes. However, the affinity
(or distance used in clustering) cannot be varied with Ward, thus for
non-Euclidean metrics, average linkage is a good alternative. Single linkage,
while not robust to noisy data, can be computed very efficiently and can
therefore be useful to provide hierarchical clustering of larger datasets.
Single linkage can also perform well on non-globular data.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_digits_linkage.py`: exploration of the
  different linkage strategies in a real dataset.

* :ref:`sphx_glr_auto_examples_cluster_plot_linkage_comparison.py`: exploration of
  the different linkage strategies in toy datasets.


Visualization of cluster hierarchy
----------------------------------

It's possible to visualize the tree representing the hierarchical merging of clusters
as a dendrogram. Visual inspection can often be useful for understanding the structure
of the data, though more so in the case of small sample sizes.

.. 
image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_dendrogram_001.png
    :target: ../auto_examples/cluster/plot_agglomerative_dendrogram.html
    :scale: 42

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_dendrogram.py`


Adding connectivity constraints
-------------------------------

An interesting aspect of :class:`AgglomerativeClustering` is that
connectivity constraints can be added to this algorithm (only adjacent
clusters can be merged together), through a connectivity matrix that defines
for each sample the neighboring samples following a given structure of the
data. For instance, in the swiss-roll example below, the connectivity
constraints forbid the merging of points that are not adjacent on the swiss
roll, and thus avoid forming clusters that extend across overlapping folds of
the roll.

.. |unstructured| image:: ../auto_examples/cluster/images/sphx_glr_plot_ward_structured_vs_unstructured_001.png
        :target: ../auto_examples/cluster/plot_ward_structured_vs_unstructured.html
        :scale: 49

.. |structured| image:: ../auto_examples/cluster/images/sphx_glr_plot_ward_structured_vs_unstructured_002.png
        :target: ../auto_examples/cluster/plot_ward_structured_vs_unstructured.html
        :scale: 49

.. centered:: |unstructured| |structured|

These constraints are useful to impose a certain local structure, but they
also make the algorithm faster, especially when the number of samples
is large.

The connectivity constraints are imposed via a connectivity matrix: a
scipy sparse matrix that has elements only at the intersection of a row
and a column with indices of the dataset that should be connected. This
matrix can be constructed from a priori information: for instance, you
may wish to cluster web pages by only merging pages with a link pointing
from one to another. 
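As a minimal sketch of such a constrained clustering (the toy data and the choice of 5 neighbors here are arbitrary, for illustration only):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

# Toy data: two concentric noisy circles.
rng = np.random.RandomState(0)
t = np.linspace(0, 2 * np.pi, 100)
outer = np.c_[np.cos(t), np.sin(t)] + 0.02 * rng.randn(100, 2)
inner = np.c_[0.4 * np.cos(t), 0.4 * np.sin(t)] + 0.02 * rng.randn(100, 2)
X = np.concatenate([outer, inner])

# Sparse connectivity matrix: each sample is linked to its 5 nearest
# neighbors, and merges are only allowed along these links.  Symmetrizing
# avoids a warning about a non-symmetric connectivity matrix.
connectivity = kneighbors_graph(X, n_neighbors=5, include_self=False)
connectivity = 0.5 * (connectivity + connectivity.T)

model = AgglomerativeClustering(n_clusters=2, connectivity=connectivity,
                                linkage="ward")
labels = model.fit_predict(X)
print(labels.shape)  # one label per sample
```

Without the connectivity constraint, Ward linkage on this data would happily merge points from the two circles that happen to be close in Euclidean distance; with it, merges can only propagate along the neighborhood graph.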
It can also be learned from the data, for instance\nusing :func:`sklearn.neighbors.kneighbors_graph` to restrict\nmerging to nearest neighbors as in :ref:`this example\n<sphx_glr_auto_examples_cluster_plot_agglomerative_clustering.py>`, or\nusing :func:`sklearn.feature_extraction.image.grid_to_graph` to\nenable only merging of neighboring pixels on an image, as in the\n:ref:`coin <sphx_glr_auto_examples_cluster_plot_coin_ward_segmentation.py>` example.\n\n.. warning:: **Connectivity constraints with single, average and complete linkage**\n\n    Connectivity constraints and single, complete or average linkage can enhance\n    the 'rich getting richer' aspect of agglomerative clustering,\n    particularly so if they are built with\n    :func:`sklearn.neighbors.kneighbors_graph`. In the limit of a small\n    number of clusters, they tend to give a few macroscopically occupied\n    clusters and almost empty ones. (see the discussion in\n    :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_clustering.py`).\n    Single linkage is the most brittle linkage option with regard to this issue.\n\n.. image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_agglomerative_clustering_001.png\n    :target: ..\/auto_examples\/cluster\/plot_agglomerative_clustering.html\n    :scale: 38\n\n.. image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_agglomerative_clustering_002.png\n    :target: ..\/auto_examples\/cluster\/plot_agglomerative_clustering.html\n    :scale: 38\n\n.. image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_agglomerative_clustering_003.png\n    :target: ..\/auto_examples\/cluster\/plot_agglomerative_clustering.html\n    :scale: 38\n\n.. image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_agglomerative_clustering_004.png\n    :target: ..\/auto_examples\/cluster\/plot_agglomerative_clustering.html\n    :scale: 38\n\n.. 
rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_coin_ward_segmentation.py`: Ward
  clustering to split the image of coins in regions.

* :ref:`sphx_glr_auto_examples_cluster_plot_ward_structured_vs_unstructured.py`: Example
  of the Ward algorithm on a swiss-roll; comparison of structured approaches
  versus unstructured approaches.

* :ref:`sphx_glr_auto_examples_cluster_plot_feature_agglomeration_vs_univariate_selection.py`: Example
  of dimensionality reduction with feature agglomeration based on Ward
  hierarchical clustering.

* :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_clustering.py`


Varying the metric
-------------------

Single, average and complete linkage can be used with a variety of distances (or
affinities), in particular Euclidean distance (*l2*), Manhattan distance
(or Cityblock, or *l1*), cosine distance, or any precomputed affinity
matrix.

* *l1* distance is often good for sparse features, or sparse noise: i.e.
  many of the features are zero, as in text mining using occurrences of
  rare words.

* *cosine* distance is interesting because it is invariant to global
  scalings of the signal.

The guideline for choosing a metric is to use one that maximizes the
distance between samples in different classes, and minimizes that within
each class.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_005.png
    :target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
    :scale: 32

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_006.png
    :target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
    :scale: 32

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_007.png
    :target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
    :scale: 32

.. 
rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_clustering_metrics.py`


.. _bisect_k_means:

Bisecting K-Means
-----------------

The :class:`BisectingKMeans` is an iterative variant of :class:`KMeans`, using
divisive hierarchical clustering. Instead of creating all centroids at once, centroids
are picked progressively based on a previous clustering: a cluster is split into two
new clusters repeatedly until the target number of clusters is reached.

:class:`BisectingKMeans` is more efficient than :class:`KMeans` when the number of
clusters is large since it only works on a subset of the data at each bisection
while :class:`KMeans` always works on the entire dataset.

Although :class:`BisectingKMeans` can't benefit from the advantages of the `"k-means++"`
initialization by design, it will still produce results comparable to
`KMeans(init="k-means++")` in terms of inertia at a cheaper computational cost, and will
likely produce better results than `KMeans` with a random initialization.

This variant is more efficient than agglomerative clustering if the number of clusters is
small compared to the number of data points.

This variant also does not produce empty clusters.

There exist two strategies for selecting the cluster to split:

- ``bisecting_strategy="largest_cluster"`` selects the cluster having the most points
- ``bisecting_strategy="biggest_inertia"`` selects the cluster with the biggest inertia
  (the cluster with the biggest Sum of Squared Errors within)

Picking the cluster with the most data points in most cases produces results as
accurate as picking by inertia, and is faster (especially for larger numbers of data
points, where calculating the error may be costly).

Picking the cluster with the most data points will also likely produce clusters of
similar sizes, while `KMeans` is known to produce clusters of different sizes.

The difference between Bisecting K-Means and regular K-Means can be seen in the 
example
:ref:`sphx_glr_auto_examples_cluster_plot_bisect_kmeans.py`.
While the regular K-Means algorithm tends to create unrelated clusters,
clusters from Bisecting K-Means are well ordered and create quite a visible hierarchy.

.. dropdown:: References

  * `"A Comparison of Document Clustering Techniques"
    <http://www.philippe-fournier-viger.com/spmf/bisectingkmeans.pdf>`_ Michael
    Steinbach, George Karypis and Vipin Kumar, Department of Computer Science and
    Engineering, University of Minnesota (June 2000)
  * `"Performance Analysis of K-Means and Bisecting K-Means Algorithms in Weblog
    Data"
    <https://ijeter.everscience.org/Manuscripts/Volume-4/Issue-8/Vol-4-issue-8-M-23.pdf>`_
    K. Abirami and Dr. P. Mayilvahanan, International Journal of Emerging
    Technologies in Engineering Research (IJETER) Volume 4, Issue 8 (August 2016)
  * `"Bisecting K-means Algorithm Based on K-valued Self-determining and
    Clustering Center Optimization"
    <http://www.jcomputers.us/vol13/jcp1306-01.pdf>`_ Jian Di, Xinyue Gou, School
    of Control and Computer Engineering, North China Electric Power University,
    Baoding, Hebei, China (August 2017)


.. _dbscan:

DBSCAN
======

The :class:`DBSCAN` algorithm views clusters as areas of high density
separated by areas of low density. Due to this rather generic view, clusters
found by DBSCAN can be any shape, as opposed to k-means which assumes that
clusters are convex-shaped. The central component of DBSCAN is the concept
of *core samples*, which are samples that are in areas of high density. A
cluster is therefore a set of core samples, each close to each other
(as measured by some distance measure),
and a set of non-core samples that are close to a core sample (but are not
themselves core samples). 
There are two parameters to the algorithm,\n``min_samples`` and ``eps``,\nwhich define formally what we mean when we say *dense*.\nHigher ``min_samples`` or lower ``eps``\nindicate higher density necessary to form a cluster.\n\nMore formally, we define a core sample as being a sample in the dataset such\nthat there exist ``min_samples`` other samples within a distance of\n``eps``, which are defined as *neighbors* of the core sample. This tells\nus that the core sample is in a dense area of the vector space. A cluster\nis a set of core samples that can be built by recursively taking a core\nsample, finding all of its neighbors that are core samples, finding all of\n*their* neighbors that are core samples, and so on. A cluster also has a\nset of non-core samples, which are samples that are neighbors of a core sample\nin the cluster but are not themselves core samples. Intuitively, these samples\nare on the fringes of a cluster.\n\nAny core sample is part of a cluster, by definition. Any sample that is not a\ncore sample, and is at least ``eps`` in distance from any core sample, is\nconsidered an outlier by the algorithm.\n\nWhile the parameter ``min_samples`` primarily controls how tolerant the\nalgorithm is towards noise (on noisy and large data sets it may be desirable\nto increase this parameter), the parameter ``eps`` is *crucial to choose\nappropriately* for the data set and distance function and usually cannot be\nleft at the default value. It controls the local neighborhood of the points.\nWhen chosen too small, most data will not be clustered at all (and labeled\nas ``-1`` for \"noise\"). When chosen too large, it causes close clusters to\nbe merged into one cluster, and eventually the entire data set to be returned\nas a single cluster. 
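The effect of ``eps`` can be sketched on toy data (the dataset and the three ``eps`` values below are arbitrary, chosen only to illustrate the too-small, plausible, and too-large regimes):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Three well-separated blobs; DBSCAN labels noise samples as -1.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)

for eps in (0.05, 0.5, 50.0):
    labels = DBSCAN(eps=eps, min_samples=5).fit(X).labels_
    # Number of clusters found, ignoring the noise label -1.
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    n_noise = int(np.sum(labels == -1))
    print(f"eps={eps}: {n_clusters} clusters, {n_noise} noise points")
```

A very small ``eps`` leaves almost every point labeled ``-1``, while a very large one merges everything into a single cluster, matching the failure modes described above.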
Some heuristics for choosing this parameter have been\ndiscussed in the literature, for example based on a knee in the nearest neighbor\ndistances plot (as discussed in the references below).\n\nIn the figure below, the color indicates cluster membership, with large circles\nindicating core samples found by the algorithm. Smaller circles are non-core\nsamples that are still part of a cluster. Moreover, the outliers are indicated\nby black points below.\n\n.. |dbscan_results| image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_dbscan_002.png\n    :target: ..\/auto_examples\/cluster\/plot_dbscan.html\n    :scale: 50\n\n.. centered:: |dbscan_results|\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_cluster_plot_dbscan.py`\n\n.. dropdown:: Implementation\n\n  The DBSCAN algorithm is deterministic, always generating the same clusters when\n  given the same data in the same order.  However, the results can differ when\n  data is provided in a different order. First, even though the core samples will\n  always be assigned to the same clusters, the labels of those clusters will\n  depend on the order in which those samples are encountered in the data. Second\n  and more importantly, the clusters to which non-core samples are assigned can\n  differ depending on the data order.  This would happen when a non-core sample\n  has a distance lower than ``eps`` to two core samples in different clusters. By\n  the triangular inequality, those two core samples must be more distant than\n  ``eps`` from each other, or they would be in the same cluster. The non-core\n  sample is assigned to whichever cluster is generated first in a pass through the\n  data, and so the results will depend on the data ordering.\n\n  The current implementation uses ball trees and kd-trees to determine the\n  neighborhood of points, which avoids calculating the full distance matrix (as\n  was done in scikit-learn versions before 0.14). 
The possibility to use custom
  metrics is retained; for details, see :class:`NearestNeighbors`.

.. dropdown:: Memory consumption for large sample sizes

  This implementation is by default not memory efficient because it constructs a
  full pairwise similarity matrix in the case where kd-trees or ball-trees cannot
  be used (e.g., with sparse matrices). This matrix will consume :math:`n^2`
  floats. A couple of mechanisms for getting around this are:

  - Use :ref:`OPTICS <optics>` clustering in conjunction with the `extract_dbscan`
    method. OPTICS clustering also calculates the full pairwise matrix, but only
    keeps one row in memory at a time (memory complexity :math:`n`).

  - A sparse radius neighborhood graph (where missing entries are presumed to be
    out of ``eps``) can be precomputed in a memory-efficient way and DBSCAN can be run
    over this with ``metric='precomputed'``. See
    :meth:`sklearn.neighbors.NearestNeighbors.radius_neighbors_graph`.

  - The dataset can be compressed, either by removing exact duplicates if these
    occur in your data, or by using BIRCH. Then you only have a relatively small
    number of representatives for a large number of points. You can then provide a
    ``sample_weight`` when fitting DBSCAN.

.. dropdown:: References

  * `A Density-Based Algorithm for Discovering Clusters in Large Spatial
    Databases with Noise <https://www.aaai.org/Papers/KDD/1996/KDD96-037.pdf>`_
    Ester, M., H. P. Kriegel, J. Sander, and X. Xu, In Proceedings of the 2nd
    International Conference on Knowledge Discovery and Data Mining, Portland, OR,
    AAAI Press, pp. 226-231. 1996

  * :doi:`DBSCAN revisited, revisited: why and how you should (still) use DBSCAN.
    <10.1145/3068335>` Schubert, E., Sander, J., Ester, M., Kriegel, H. P., & Xu,
    X. (2017). In ACM Transactions on Database Systems (TODS), 42(3), 19.


.. 
_hdbscan:

HDBSCAN
=======

The :class:`HDBSCAN` algorithm can be seen as an extension of :class:`DBSCAN`
and :class:`OPTICS`. Specifically, :class:`DBSCAN` assumes that the clustering
criterion (i.e. density requirement) is *globally homogeneous*.
In other words, :class:`DBSCAN` may struggle to successfully capture clusters
with different densities.
:class:`HDBSCAN` alleviates this assumption and explores all possible density
scales by building an alternative representation of the clustering problem.

.. note::

  This implementation is adapted from the original implementation of HDBSCAN,
  `scikit-learn-contrib/hdbscan <https://github.com/scikit-learn-contrib/hdbscan>`_ based on [LJ2017]_.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_hdbscan.py`

Mutual Reachability Graph
-------------------------

HDBSCAN first defines :math:`d_c(x_p)`, the *core distance* of a sample :math:`x_p`, as the
distance to its `min_samples`-th nearest neighbor, counting itself. For example,
if `min_samples=5` and :math:`x_*` is the 5th-nearest neighbor of :math:`x_p`
then the core distance is:

.. math:: d_c(x_p)=d(x_p, x_*).

Next it defines :math:`d_m(x_p, x_q)`, the *mutual reachability distance* of two points
:math:`x_p, x_q`, as:

.. math:: d_m(x_p, x_q) = \max\{d_c(x_p), d_c(x_q), d(x_p, x_q)\}

These two notions allow us to construct the *mutual reachability graph*
:math:`G_{ms}` defined for a fixed choice of `min_samples` by associating each
sample :math:`x_p` with a vertex of the graph, with edges between points
:math:`x_p, x_q` weighted by the mutual reachability distance :math:`d_m(x_p, x_q)`
between them. We may build subsets of this graph, denoted as
:math:`G_{ms,\varepsilon}`, by removing any edges with value greater than :math:`\varepsilon`
from the original graph. Any points whose core distance is greater than :math:`\varepsilon`
are at this stage marked as noise. 
The remaining points are then clustered by
finding the connected components of this trimmed graph.

.. note::

  Taking the connected components of a trimmed graph :math:`G_{ms,\varepsilon}` is
  equivalent to running DBSCAN* with `min_samples` and :math:`\varepsilon`. DBSCAN* is a
  slightly modified version of DBSCAN mentioned in [CM2013]_.

Hierarchical Clustering
-----------------------

HDBSCAN can be seen as an algorithm which performs DBSCAN* clustering across all
values of :math:`\varepsilon`. As mentioned previously, this is equivalent to finding the connected
components of the mutual reachability graphs for all values of :math:`\varepsilon`. To do this
efficiently, HDBSCAN first extracts a minimum spanning tree (MST) from the
fully-connected mutual reachability graph, then greedily cuts the edges with the
highest weight. An outline of the HDBSCAN algorithm is as follows:

1. Extract the MST of :math:`G_{ms}`.
2. Extend the MST by adding a "self edge" for each vertex, with weight equal
   to the core distance of the underlying sample.
3. Initialize a single cluster and label for the MST.
4. Remove the edge with the greatest weight from the MST (ties are
   removed simultaneously).
5. Assign cluster labels to the connected components which contain the
   end points of the now-removed edge. If the component does not have at least
   one edge it is instead assigned a "null" label marking it as noise.
6. Repeat 4-5 until there are no more connected components.

HDBSCAN is therefore able to obtain all possible partitions achievable by
DBSCAN* for a fixed choice of `min_samples` in a hierarchical fashion.
Indeed, this allows HDBSCAN to perform clustering across multiple densities
and as such it no longer needs :math:`\varepsilon` to be given as a hyperparameter. Instead
it relies solely on the choice of `min_samples`, which tends to be a more robust
hyperparameter.

.. 
|hdbscan_ground_truth| image:: ../auto_examples/cluster/images/sphx_glr_plot_hdbscan_005.png
    :target: ../auto_examples/cluster/plot_hdbscan.html
    :scale: 75

.. |hdbscan_results| image:: ../auto_examples/cluster/images/sphx_glr_plot_hdbscan_007.png
    :target: ../auto_examples/cluster/plot_hdbscan.html
    :scale: 75

.. centered:: |hdbscan_ground_truth|
.. centered:: |hdbscan_results|

HDBSCAN can be smoothed with an additional hyperparameter `min_cluster_size`,
which specifies that during the hierarchical clustering, components with fewer
than `min_cluster_size` samples are considered noise. In practice, one
can set `min_cluster_size = min_samples` to couple the parameters and
simplify the hyperparameter space.

.. rubric:: References

.. [CM2013] Campello, R.J.G.B., Moulavi, D., Sander, J. (2013). Density-Based
  Clustering Based on Hierarchical Density Estimates. In: Pei, J., Tseng, V.S.,
  Cao, L., Motoda, H., Xu, G. (eds) Advances in Knowledge Discovery and Data
  Mining. PAKDD 2013. Lecture Notes in Computer Science, vol 7819. Springer,
  Berlin, Heidelberg. :doi:`Density-Based Clustering Based on Hierarchical
  Density Estimates <10.1007/978-3-642-37456-2_14>`

.. [LJ2017] L. McInnes and J. Healy, (2017). Accelerated Hierarchical Density
  Based Clustering. In: IEEE International Conference on Data Mining Workshops
  (ICDMW), 2017, pp. 33-42. :doi:`Accelerated Hierarchical Density Based
  Clustering <10.1109/ICDMW.2017.12>`

.. _optics:

OPTICS
======

The :class:`OPTICS` algorithm shares many similarities with the :class:`DBSCAN`
algorithm, and can be considered a generalization of DBSCAN that relaxes the
``eps`` requirement from a single value to a value range. 
The key difference\nbetween DBSCAN and OPTICS is that the OPTICS algorithm builds a *reachability*\ngraph, which assigns each sample both a ``reachability_`` distance, and a spot\nwithin the cluster ``ordering_`` attribute; these two attributes are assigned\nwhen the model is fitted, and are used to determine cluster membership. If\nOPTICS is run with the default value of *inf* set for ``max_eps``, then DBSCAN\nstyle cluster extraction can be performed repeatedly in linear time for any\ngiven ``eps`` value using the ``cluster_optics_dbscan`` method. Setting\n``max_eps`` to a lower value will result in shorter run times, and can be\nthought of as the maximum neighborhood radius from each point to find other\npotential reachable points.\n\n.. |optics_results| image:: ..\/auto_examples\/cluster\/images\/sphx_glr_plot_optics_001.png\n        :target: ..\/auto_examples\/cluster\/plot_optics.html\n        :scale: 50\n\n.. centered:: |optics_results|\n\nThe *reachability* distances generated by OPTICS allow for variable density\nextraction of clusters within a single data set. As shown in the above plot,\ncombining *reachability* distances and data set ``ordering_`` produces a\n*reachability plot*, where point density is represented on the Y-axis, and\npoints are ordered such that nearby points are adjacent. 'Cutting' the\nreachability plot at a single value produces DBSCAN like results; all points\nabove the 'cut' are classified as noise, and each time that there is a break\nwhen reading from left to right signifies a new cluster. The default cluster\nextraction with OPTICS looks at the steep slopes within the graph to find\nclusters, and the user can define what counts as a steep slope using the\nparameter ``xi``. 
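The repeated, refit-free extraction described above can be sketched with the :func:`cluster_optics_dbscan` helper (the dataset and parameter values below are arbitrary, for illustration only):

```python
import numpy as np
from sklearn.cluster import OPTICS, cluster_optics_dbscan
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=250, centers=3, cluster_std=0.6, random_state=42)

# Fit once; reachability_, core_distances_ and ordering_ are then reusable.
optics = OPTICS(min_samples=10).fit(X)

# Extract DBSCAN-style clusterings at several eps values without refitting.
for eps in (0.5, 2.0):
    labels = cluster_optics_dbscan(
        reachability=optics.reachability_,
        core_distances=optics.core_distances_,
        ordering=optics.ordering_,
        eps=eps,
    )
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"eps={eps}: {n_clusters} clusters")
```

Each extraction is a linear pass over the precomputed reachability plot, which is what makes scanning many ``eps`` values cheap compared to rerunning DBSCAN.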
There are also other possibilities for analysis on the graph\nitself, such as generating hierarchical representations of the data through\nreachability-plot dendrograms, and the hierarchy of clusters detected by the\nalgorithm can be accessed through the ``cluster_hierarchy_`` parameter. The\nplot above has been color-coded so that cluster colors in planar space match\nthe linear segment clusters of the reachability plot. Note that the blue and\nred clusters are adjacent in the reachability plot, and can be hierarchically\nrepresented as children of a larger parent cluster.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_cluster_plot_optics.py`\n\n\n.. dropdown:: Comparison with DBSCAN\n\n  The results from OPTICS ``cluster_optics_dbscan`` method and DBSCAN are very\n  similar, but not always identical; specifically, labeling of periphery and noise\n  points. This is in part because the first samples of each dense area processed\n  by OPTICS have a large reachability value while being close to other points in\n  their area, and will thus sometimes be marked as noise rather than periphery.\n  This affects adjacent points when they are considered as candidates for being\n  marked as either periphery or noise.\n\n  Note that for any single value of ``eps``, DBSCAN will tend to have a shorter\n  run time than OPTICS; however, for repeated runs at varying ``eps`` values, a\n  single run of OPTICS may require less cumulative runtime than DBSCAN. It is also\n  important to note that OPTICS' output is close to DBSCAN's only if ``eps`` and\n  ``max_eps`` are close.\n\n.. dropdown:: Computational Complexity\n\n  Spatial indexing trees are used to avoid calculating the full distance matrix,\n  and allow for efficient memory usage on large sets of samples. Different\n  distance metrics can be supplied via the ``metric`` keyword.\n\n  For large datasets, similar (but not identical) results can be obtained via\n  :class:`HDBSCAN`. 
The HDBSCAN implementation is multithreaded, and has better\n  algorithmic runtime complexity than OPTICS, at the cost of worse memory scaling.\n  For extremely large datasets that exhaust system memory using HDBSCAN, OPTICS\n  will maintain :math:`n` (as opposed to :math:`n^2`) memory scaling; however,\n  tuning of the ``max_eps`` parameter will likely need to be used to give a\n  solution in a reasonable amount of wall time.\n\n\n.. dropdown:: References\n\n  * \"OPTICS: ordering points to identify the clustering structure.\" Ankerst,\n    Mihael, Markus M. Breunig, Hans-Peter Kriegel, and J\u00f6rg Sander. In ACM Sigmod\n    Record, vol. 28, no. 2, pp. 49-60. ACM, 1999.\n\n\n.. _birch:\n\nBIRCH\n=====\n\nThe :class:`Birch` builds a tree called the Clustering Feature Tree (CFT)\nfor the given data. The data is essentially lossy compressed to a set of\nClustering Feature nodes (CF Nodes). The CF Nodes have a number of\nsubclusters called Clustering Feature subclusters (CF Subclusters)\nand these CF Subclusters located in the non-terminal CF Nodes\ncan have CF Nodes as children.\n\nThe CF Subclusters hold the necessary information for clustering which prevents\nthe need to hold the entire input data in memory. This information includes:\n\n- Number of samples in a subcluster.\n- Linear Sum - An n-dimensional vector holding the sum of all samples\n- Squared Sum - Sum of the squared L2 norm of all samples.\n- Centroids - To avoid recalculation linear sum \/ n_samples.\n- Squared norm of the centroids.\n\nThe BIRCH algorithm has two parameters, the threshold and the branching factor.\nThe branching factor limits the number of subclusters in a node and the\nthreshold limits the distance between the entering sample and the existing\nsubclusters.\n\nThis algorithm can be viewed as an instance or data reduction method,\nsince it reduces the input data to a set of subclusters which are obtained directly\nfrom the leaves of the CFT. 
This reduced data can be further processed by feeding\nit into a global clusterer. This global clusterer can be set by ``n_clusters``.\nIf ``n_clusters`` is set to None, the subclusters from the leaves are directly\nread off, otherwise a global clustering step labels these subclusters into global\nclusters (labels) and the samples are mapped to the global label of the nearest subcluster.\n\n.. dropdown:: Algorithm description\n\n  - A new sample is inserted into the root of the CF Tree which is a CF Node. It\n    is then merged with the subcluster of the root, that has the smallest radius\n    after merging, constrained by the threshold and branching factor conditions.\n    If the subcluster has any child node, then this is done repeatedly till it\n    reaches a leaf. After finding the nearest subcluster in the leaf, the\n    properties of this subcluster and the parent subclusters are recursively\n    updated.\n\n  - If the radius of the subcluster obtained by merging the new sample and the\n    nearest subcluster is greater than the square of the threshold and if the\n    number of subclusters is greater than the branching factor, then a space is\n    temporarily allocated to this new sample. The two farthest subclusters are\n    taken and the subclusters are divided into two groups on the basis of the\n    distance between these subclusters.\n\n  - If this split node has a parent subcluster and there is room for a new\n    subcluster, then the parent is split into two. If there is no room, then this\n    node is again split into two and the process is continued recursively, till it\n    reaches the root.\n\n.. dropdown:: BIRCH or MiniBatchKMeans?\n\n  - BIRCH does not scale very well to high dimensional data. 
As a rule of thumb, if
    ``n_features`` is greater than twenty, it is generally better to use MiniBatchKMeans.
  - If the number of data instances needs to be reduced, or if one wants a
    large number of subclusters either as a preprocessing step or otherwise,
    BIRCH is more useful than MiniBatchKMeans.

  .. image:: ../auto_examples/cluster/images/sphx_glr_plot_birch_vs_minibatchkmeans_001.png
    :target: ../auto_examples/cluster/plot_birch_vs_minibatchkmeans.html

.. dropdown:: How to use partial_fit?

  To avoid the computation of global clustering, for every call of ``partial_fit``
  the user is advised:

  1. To set ``n_clusters=None`` initially.
  2. Train all data by multiple calls to ``partial_fit``.
  3. Set ``n_clusters`` to a required value using
     ``brc.set_params(n_clusters=n_clusters)``.
  4. Call ``partial_fit`` finally with no arguments, i.e. ``brc.partial_fit()``,
     which performs the global clustering.

.. dropdown:: References

  * Tian Zhang, Raghu Ramakrishnan, Miron Livny. "BIRCH: An efficient data
    clustering method for very large databases."
    https://www.cs.sfu.ca/CourseCentral/459/han/papers/zhang96.pdf

  * Roberto Perdisci. JBirch - Java implementation of the BIRCH clustering algorithm.
    https://code.google.com/archive/p/jbirch



.. _clustering_evaluation:

Clustering performance evaluation
=================================

Evaluating the performance of a clustering algorithm is not as trivial as
counting the number of errors or the precision and recall of a supervised
classification algorithm. In particular, any evaluation metric should not
take the absolute values of the cluster labels into account, but rather
whether this clustering defines separations of the data similar to some ground
truth set of classes, or satisfies some assumption such that members
belonging to the same class are more similar than members of different
classes according to some similarity metric.

.. 
currentmodule:: sklearn.metrics

.. _rand_score:
.. _adjusted_rand_score:

Rand index
----------

Given the knowledge of the ground truth class assignments
``labels_true`` and our clustering algorithm assignments of the same
samples ``labels_pred``, the **(adjusted or unadjusted) Rand index**
is a function that measures the **similarity** of the two assignments,
ignoring permutations::

  >>> from sklearn import metrics
  >>> labels_true = [0, 0, 0, 1, 1, 1]
  >>> labels_pred = [0, 0, 1, 1, 2, 2]
  >>> metrics.rand_score(labels_true, labels_pred)
  0.66...

The Rand index is not guaranteed to be close to 0.0 for a
random labelling. The adjusted Rand index **corrects for chance** and
will give such a baseline::

  >>> metrics.adjusted_rand_score(labels_true, labels_pred)
  0.24...

As with all clustering metrics, one can permute 0 and 1 in the predicted
labels, rename 2 to 3, and get the same score::

  >>> labels_pred = [1, 1, 0, 0, 3, 3]
  >>> metrics.rand_score(labels_true, labels_pred)
  0.66...
  >>> metrics.adjusted_rand_score(labels_true, labels_pred)
  0.24...

Furthermore, both :func:`rand_score` and :func:`adjusted_rand_score` are
**symmetric**: swapping the arguments does not change the scores. They can
thus be used as **consensus measures**::

  >>> metrics.rand_score(labels_pred, labels_true)
  0.66...
  >>> metrics.adjusted_rand_score(labels_pred, labels_true)
  0.24...

Perfect labeling is scored 1.0::

  >>> labels_pred = labels_true[:]
  >>> metrics.rand_score(labels_true, labels_pred)
  1.0
  >>> metrics.adjusted_rand_score(labels_true, labels_pred)
  1.0

Poorly agreeing labels (e.g. independent labelings) have lower scores,
and for the adjusted Rand index the score will be negative or close to
zero. 
However, for the unadjusted Rand index the score, while lower,
will not necessarily be close to zero::

  >>> labels_true = [0, 0, 0, 0, 0, 0, 1, 1]
  >>> labels_pred = [0, 1, 2, 3, 4, 5, 5, 6]
  >>> metrics.rand_score(labels_true, labels_pred)
  0.39...
  >>> metrics.adjusted_rand_score(labels_true, labels_pred)
  -0.07...


.. topic:: Advantages:

  - **Interpretability**: The unadjusted Rand index is proportional to the
    number of sample pairs whose labels are the same in both `labels_pred` and
    `labels_true`, or are different in both.

  - **Random (uniform) label assignments have an adjusted Rand index score close
    to 0.0** for any value of ``n_clusters`` and ``n_samples`` (which is not the
    case for the unadjusted Rand index or the V-measure, for instance).

  - **Bounded range**: Lower values indicate different labelings, similar
    clusterings have a high (adjusted or unadjusted) Rand index, and 1.0 is the
    perfect match score. The score range is [0, 1] for the unadjusted Rand index
    and [-0.5, 1] for the adjusted Rand index.

  - **No assumption is made on the cluster structure**: The (adjusted or
    unadjusted) Rand index can be used to compare all kinds of clustering
    algorithms, for example algorithms such as k-means, which assumes isotropic
    blob shapes, with results of spectral clustering algorithms, which can find
    clusters with "folded" shapes.

.. 
topic:: Drawbacks:

  - Contrary to inertia, the **(adjusted or unadjusted) Rand index requires
    knowledge of the ground truth classes**, which is almost never available in
    practice or requires manual assignment by human annotators (as in the
    supervised learning setting).

    However, the (adjusted or unadjusted) Rand index can also be useful in a
    purely unsupervised setting as a building block for a Consensus Index that
    can be used for clustering model selection (TODO).

  - The **unadjusted Rand index is often close to 1.0** even if the clusterings
    themselves differ significantly. This can be understood when interpreting
    the Rand index as the accuracy of element pair labeling resulting from the
    clusterings: in practice there often is a majority of element pairs that are
    assigned the ``different`` pair label under both the predicted and the
    ground truth clustering, resulting in a high proportion of pair labels that
    agree, which subsequently leads to a high score.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`:
  Analysis of the impact of the dataset size on the value of
  clustering measures for random assignments.

.. dropdown:: Mathematical formulation

  If :math:`C` is a ground truth class assignment and :math:`K` the clustering,
  let us define :math:`a` and :math:`b` as:

  - :math:`a`, the number of pairs of elements that are in the same set in
    :math:`C` and in the same set in :math:`K`

  - :math:`b`, the number of pairs of elements that are in different sets in
    :math:`C` and in different sets in :math:`K`

  The unadjusted Rand index is then given by:

  .. math:: \text{RI} = \frac{a + b}{C_2^{n_{samples}}}

  where :math:`C_2^{n_{samples}}` is the total number of possible pairs in the
  dataset. 
It does not matter if the calculation is performed on ordered pairs or\n  unordered pairs as long as the calculation is performed consistently.\n\n  However, the Rand index does not guarantee that random label assignments will\n  get a value close to zero (esp. if the number of clusters is in the same order\n  of magnitude as the number of samples).\n\n  To counter this effect we can discount the expected RI :math:`E[\\text{RI}]` of\n  random labelings by defining the adjusted Rand index as follows:\n\n  .. math:: \\text{ARI} = \\frac{\\text{RI} - E[\\text{RI}]}{\\max(\\text{RI}) - E[\\text{RI}]}\n\n.. dropdown:: References\n\n  * `Comparing Partitions\n    <https:\/\/link.springer.com\/article\/10.1007%2FBF01908075>`_ L. Hubert and P.\n    Arabie, Journal of Classification 1985\n\n  * `Properties of the Hubert-Arabie adjusted Rand index\n    <https:\/\/psycnet.apa.org\/record\/2004-17801-007>`_ D. Steinley, Psychological\n    Methods 2004\n\n  * `Wikipedia entry for the Rand index\n    <https:\/\/en.wikipedia.org\/wiki\/Rand_index#Adjusted_Rand_index>`_\n\n  * :doi:`Minimum adjusted Rand index for two clusterings of a given size, 2022, J. E. Chac\u00f3n and A. I. Rastrojo <10.1007\/s11634-022-00491-w>`\n\n\n.. _mutual_info_score:\n\nMutual Information based scores\n-------------------------------\n\nGiven the knowledge of the ground truth class assignments ``labels_true`` and\nour clustering algorithm assignments of the same samples ``labels_pred``, the\n**Mutual Information** is a function that measures the **agreement** of the two\nassignments, ignoring permutations.  Two different normalized versions of this\nmeasure are available, **Normalized Mutual Information (NMI)** and **Adjusted\nMutual Information (AMI)**. 
NMI is often used in the literature, while AMI was
proposed more recently and is **normalized against chance**::

  >>> from sklearn import metrics
  >>> labels_true = [0, 0, 0, 1, 1, 1]
  >>> labels_pred = [0, 0, 1, 1, 2, 2]

  >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  0.22504...

One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get
the same score::

  >>> labels_pred = [1, 1, 0, 0, 3, 3]
  >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  0.22504...

All of :func:`mutual_info_score`, :func:`adjusted_mutual_info_score` and
:func:`normalized_mutual_info_score` are symmetric: swapping the arguments does
not change the score. Thus they can be used as a **consensus measure**::

  >>> metrics.adjusted_mutual_info_score(labels_pred, labels_true)  # doctest: +SKIP
  0.22504...

Perfect labeling is scored 1.0::

  >>> labels_pred = labels_true[:]
  >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  1.0

  >>> metrics.normalized_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  1.0

This is not true for ``mutual_info_score``, which is therefore harder to judge::

  >>> metrics.mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  0.69...

Badly agreeing labels (e.g. independent labelings) have non-positive scores::

  >>> labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
  >>> labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
  >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  -0.10526...


.. 
topic:: Advantages:

  - **Random (uniform) label assignments have an AMI score close to 0.0** for any
    value of ``n_clusters`` and ``n_samples`` (which is not the case for raw
    Mutual Information or the V-measure, for instance).

  - **Upper bound of 1**: Values close to zero indicate two label assignments
    that are largely independent, while values close to one indicate significant
    agreement. Further, an AMI of exactly 1 indicates that the two label
    assignments are equal (with or without permutation).

.. topic:: Drawbacks:

  - Contrary to inertia, **MI-based measures require the knowledge of the ground
    truth classes**, which are almost never available in practice or require
    manual assignment by human annotators (as in the supervised learning
    setting).

    However, MI-based measures can also be useful in a purely unsupervised
    setting as a building block for a Consensus Index that can be used for
    clustering model selection.

  - NMI and MI are not adjusted against chance.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`: Analysis
  of the impact of the dataset size on the value of clustering measures for random
  assignments. This example also includes the Adjusted Rand Index.

.. dropdown:: Mathematical formulation

  Assume two label assignments (of the same :math:`N` objects), :math:`U` and
  :math:`V`. Their entropy is the amount of uncertainty for a partition set,
  defined by:

  .. math:: H(U) = - \sum_{i=1}^{|U|}P(i)\log(P(i))

  where :math:`P(i) = |U_i| / N` is the probability that an object picked at
  random from :math:`U` falls into class :math:`U_i`. Likewise for :math:`V`:

  .. math:: H(V) = - \sum_{j=1}^{|V|}P'(j)\log(P'(j))

  With :math:`P'(j) = |V_j| / N`. The mutual information (MI) between :math:`U`
  and :math:`V` is calculated by:

  .. 
math:: \\text{MI}(U, V) = \\sum_{i=1}^{|U|}\\sum_{j=1}^{|V|}P(i, j)\\log\\left(\\frac{P(i,j)}{P(i)P'(j)}\\right)\n\n  where :math:`P(i, j) = |U_i \\cap V_j| \/ N` is the probability that an object\n  picked at random falls into both classes :math:`U_i` and :math:`V_j`.\n\n  It also can be expressed in set cardinality formulation:\n\n  .. math:: \\text{MI}(U, V) = \\sum_{i=1}^{|U|} \\sum_{j=1}^{|V|} \\frac{|U_i \\cap V_j|}{N}\\log\\left(\\frac{N|U_i \\cap V_j|}{|U_i||V_j|}\\right)\n\n  The normalized mutual information is defined as\n\n  .. math:: \\text{NMI}(U, V) = \\frac{\\text{MI}(U, V)}{\\text{mean}(H(U), H(V))}\n\n  This value of the mutual information and also the normalized variant is not\n  adjusted for chance and will tend to increase as the number of different labels\n  (clusters) increases, regardless of the actual amount of \"mutual information\"\n  between the label assignments.\n\n  The expected value for the mutual information can be calculated using the\n  following equation [VEB2009]_. In this equation, :math:`a_i = |U_i|` (the number\n  of elements in :math:`U_i`) and :math:`b_j = |V_j|` (the number of elements in\n  :math:`V_j`).\n\n  .. math:: E[\\text{MI}(U,V)]=\\sum_{i=1}^{|U|} \\sum_{j=1}^{|V|} \\sum_{n_{ij}=(a_i+b_j-N)^+\n    }^{\\min(a_i, b_j)} \\frac{n_{ij}}{N}\\log \\left( \\frac{ N.n_{ij}}{a_i b_j}\\right)\n    \\frac{a_i!b_j!(N-a_i)!(N-b_j)!}{N!n_{ij}!(a_i-n_{ij})!(b_j-n_{ij})!\n    (N-a_i-b_j+n_{ij})!}\n\n  Using the expected value, the adjusted mutual information can then be calculated\n  using a similar form to that of the adjusted Rand index:\n\n  .. math:: \\text{AMI} = \\frac{\\text{MI} - E[\\text{MI}]}{\\text{mean}(H(U), H(V)) - E[\\text{MI}]}\n\n  For normalized mutual information and adjusted mutual information, the\n  normalizing value is typically some *generalized* mean of the entropies of each\n  clustering. Various generalized means exist, and no firm rules exist for\n  preferring one over the others.  
The decision is largely made on a
  field-by-field basis; for instance, in community detection, the arithmetic
  mean is most common. Each normalizing method provides "qualitatively similar
  behaviours" [YAT2016]_. In our implementation, this is controlled by the
  ``average_method`` parameter.

  Vinh et al. (2010) named variants of NMI and AMI by their averaging method
  [VEB2010]_. Their 'sqrt' and 'sum' averages are the geometric and arithmetic
  means; we use these more common names.

  .. rubric:: References

  * Strehl, Alexander, and Joydeep Ghosh (2002). "Cluster ensembles - a
    knowledge reuse framework for combining multiple partitions". Journal of
    Machine Learning Research 3: 583-617. `doi:10.1162/153244303321897735
    <http://strehl.com/download/strehl-jmlr02.pdf>`_.

  * `Wikipedia entry for the (normalized) Mutual Information
    <https://en.wikipedia.org/wiki/Mutual_Information>`_

  * `Wikipedia entry for the Adjusted Mutual Information
    <https://en.wikipedia.org/wiki/Adjusted_Mutual_Information>`_

  .. [VEB2009] Vinh, Epps, and Bailey, (2009). "Information theoretic measures
    for clusterings comparison". Proceedings of the 26th Annual International
    Conference on Machine Learning - ICML '09. `doi:10.1145/1553374.1553511
    <https://dl.acm.org/citation.cfm?doid=1553374.1553511>`_. ISBN
    9781605585161.

  .. [VEB2010] Vinh, Epps, and Bailey, (2010). "Information Theoretic Measures
    for Clusterings Comparison: Variants, Properties, Normalization and
    Correction for Chance". `JMLR
    <https://jmlr.csail.mit.edu/papers/volume11/vinh10a/vinh10a.pdf>`_.

  .. [YAT2016] Yang, Algesheimer, and Tessone, (2016). "A comparative analysis
    of community detection algorithms on artificial networks". Scientific
    Reports 6: 30750. `doi:10.1038/srep30750
    <https://www.nature.com/articles/srep30750>`_.


.. 
_homogeneity_completeness:

Homogeneity, completeness and V-measure
---------------------------------------

Given the knowledge of the ground truth class assignments of the samples,
it is possible to define some intuitive metrics using conditional entropy
analysis.

In particular, Rosenberg and Hirschberg (2007) define the following two
desirable objectives for any cluster assignment:

- **homogeneity**: each cluster contains only members of a single class.

- **completeness**: all members of a given class are assigned to the same
  cluster.

We can turn those concepts into scores :func:`homogeneity_score` and
:func:`completeness_score`. Both are bounded below by 0.0 and above by
1.0 (higher is better)::

  >>> from sklearn import metrics
  >>> labels_true = [0, 0, 0, 1, 1, 1]
  >>> labels_pred = [0, 0, 1, 1, 2, 2]

  >>> metrics.homogeneity_score(labels_true, labels_pred)
  0.66...

  >>> metrics.completeness_score(labels_true, labels_pred)
  0.42...

Their harmonic mean, called **V-measure**, is computed by
:func:`v_measure_score`::

  >>> metrics.v_measure_score(labels_true, labels_pred)
  0.51...

This function's formula is as follows:

.. 
math:: v = \frac{(1 + \beta) \times \text{homogeneity} \times \text{completeness}}{(\beta \times \text{homogeneity} + \text{completeness})}

``beta`` defaults to 1.0. Using a value less than 1 for ``beta``::

  >>> metrics.v_measure_score(labels_true, labels_pred, beta=0.6)
  0.54...

attributes more weight to homogeneity, while using a value greater than 1::

  >>> metrics.v_measure_score(labels_true, labels_pred, beta=1.8)
  0.48...

attributes more weight to completeness.

The V-measure is actually equivalent to the normalized mutual information (NMI)
discussed above, with the aggregation function being the arithmetic mean [B2011]_.

Homogeneity, completeness and V-measure can be computed at once using
:func:`homogeneity_completeness_v_measure` as follows::

  >>> metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
  (0.66..., 0.42..., 0.51...)

The following clustering assignment is slightly better, since it is
homogeneous but not complete::

  >>> labels_pred = [0, 0, 0, 1, 2, 2]
  >>> metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
  (1.0, 0.68..., 0.81...)

.. note::

  :func:`v_measure_score` is **symmetric**: it can be used to evaluate
  the **agreement** of two independent assignments on the same dataset.

  This is not the case for :func:`completeness_score` and
  :func:`homogeneity_score`: both are bound by the relationship::

    homogeneity_score(a, b) == completeness_score(b, a)


.. 
topic:: Advantages:

  - **Bounded scores**: 0.0 is as bad as it can be, 1.0 is a perfect score.

  - Intuitive interpretation: a clustering with a bad V-measure can be
    **qualitatively analyzed in terms of homogeneity and completeness** to get a
    better feel for what 'kind' of mistakes the assignment makes.

  - **No assumption is made on the cluster structure**: can be used to compare
    clustering algorithms such as k-means, which assumes isotropic blob shapes,
    with results of spectral clustering algorithms, which can find clusters with
    "folded" shapes.

.. topic:: Drawbacks:

  - The previously introduced metrics are **not normalized with regard to
    random labeling**: this means that depending on the number of samples,
    clusters and ground truth classes, a completely random labeling will not
    always yield the same values for homogeneity, completeness and hence
    V-measure. In particular **random labeling won't yield zero scores,
    especially when the number of clusters is large**.

    This problem can safely be ignored when the number of samples is more than a
    thousand and the number of clusters is less than 10. **For smaller sample
    sizes or a larger number of clusters it is safer to use an adjusted index
    such as the Adjusted Rand Index (ARI)**.

  .. figure:: ../auto_examples/cluster/images/sphx_glr_plot_adjusted_for_chance_measures_001.png
    :target: ../auto_examples/cluster/plot_adjusted_for_chance_measures.html
    :align: center
    :scale: 100

  - These metrics **require the knowledge of the ground truth classes**, which
    are almost never available in practice or require manual assignment by
    human annotators (as in the supervised learning setting).

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`: Analysis
  of the impact of the dataset size on the value of clustering measures for
  random assignments.

.. 
dropdown:: Mathematical formulation\n\n  Homogeneity and completeness scores are formally given by:\n\n  .. math:: h = 1 - \\frac{H(C|K)}{H(C)}\n\n  .. math:: c = 1 - \\frac{H(K|C)}{H(K)}\n\n  where :math:`H(C|K)` is the **conditional entropy of the classes given the\n  cluster assignments** and is given by:\n\n  .. math:: H(C|K) = - \\sum_{c=1}^{|C|} \\sum_{k=1}^{|K|} \\frac{n_{c,k}}{n}\n            \\cdot \\log\\left(\\frac{n_{c,k}}{n_k}\\right)\n\n  and :math:`H(C)` is the **entropy of the classes** and is given by:\n\n  .. math:: H(C) = - \\sum_{c=1}^{|C|} \\frac{n_c}{n} \\cdot \\log\\left(\\frac{n_c}{n}\\right)\n\n  with :math:`n` the total number of samples, :math:`n_c` and :math:`n_k` the\n  number of samples respectively belonging to class :math:`c` and cluster\n  :math:`k`, and finally :math:`n_{c,k}` the number of samples from class\n  :math:`c` assigned to cluster :math:`k`.\n\n  The **conditional entropy of clusters given class** :math:`H(K|C)` and the\n  **entropy of clusters** :math:`H(K)` are defined in a symmetric manner.\n\n  Rosenberg and Hirschberg further define **V-measure** as the **harmonic mean of\n  homogeneity and completeness**:\n\n  .. math:: v = 2 \\cdot \\frac{h \\cdot c}{h + c}\n\n.. rubric:: References\n\n* `V-Measure: A conditional entropy-based external cluster evaluation measure\n  <https:\/\/aclweb.org\/anthology\/D\/D07\/D07-1043.pdf>`_ Andrew Rosenberg and Julia\n  Hirschberg, 2007\n\n.. [B2011] `Identification and Characterization of Events in Social Media\n  <http:\/\/www.cs.columbia.edu\/~hila\/hila-thesis-distributed.pdf>`_, Hila\n  Becker, PhD Thesis.\n\n\n.. 
_fowlkes_mallows_scores:

Fowlkes-Mallows scores
----------------------

The original Fowlkes-Mallows index (FMI) was intended to measure the similarity
between two clustering results, which is inherently an unsupervised comparison.
The supervised adaptation of the Fowlkes-Mallows index
(as implemented in :func:`sklearn.metrics.fowlkes_mallows_score`) can be used
when the ground truth class assignments of the samples are known.
The FMI is defined as the geometric mean of the pairwise precision and recall:

.. math:: \text{FMI} = \frac{\text{TP}}{\sqrt{(\text{TP} + \text{FP}) (\text{TP} + \text{FN})}}

In the above formula:

* ``TP`` (**True Positive**): The number of pairs of points that are clustered together
  both in the true labels and in the predicted labels.

* ``FP`` (**False Positive**): The number of pairs of points that are clustered together
  in the predicted labels but not in the true labels.

* ``FN`` (**False Negative**): The number of pairs of points that are clustered together
  in the true labels but not in the predicted labels.

The score ranges from 0 to 1. A high value indicates a good similarity
between the two clusterings::

  >>> from sklearn import metrics
  >>> labels_true = [0, 0, 0, 1, 1, 1]
  >>> labels_pred = [0, 0, 1, 1, 2, 2]

  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  0.47140...

One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get
the same score::

  >>> labels_pred = [1, 1, 0, 0, 3, 3]

  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  0.47140...

Perfect labeling is scored 1.0::

  >>> labels_pred = labels_true[:]
  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  1.0

Badly agreeing labels (e.g. independent labelings) have zero scores::

  >>> labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
  >>> labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  0.0

.. 
topic:: Advantages:

  - **Random (uniform) label assignments have an FMI score close to 0.0** for any
    value of ``n_clusters`` and ``n_samples`` (which is not the case for raw
    Mutual Information or the V-measure, for instance).

  - **Upper-bounded at 1**: Values close to zero indicate two label assignments
    that are largely independent, while values close to one indicate significant
    agreement. Further, values of exactly 0 indicate **purely** independent
    label assignments and an FMI of exactly 1 indicates that the two label
    assignments are equal (with or without permutation).

  - **No assumption is made on the cluster structure**: can be used to compare
    clustering algorithms such as k-means, which assumes isotropic blob shapes,
    with results of spectral clustering algorithms, which can find clusters with
    "folded" shapes.

.. topic:: Drawbacks:

  - Contrary to inertia, **FMI-based measures require the knowledge of the
    ground truth classes**, which are almost never available in practice or
    require manual assignment by human annotators (as in the supervised
    learning setting).

.. dropdown:: References

  * E. B. Fowlkes and C. L. Mallows, 1983. "A method for comparing two
    hierarchical clusterings". Journal of the American Statistical Association.
    https://www.tandfonline.com/doi/abs/10.1080/01621459.1983.10478008

  * `Wikipedia entry for the Fowlkes-Mallows Index
    <https://en.wikipedia.org/wiki/Fowlkes-Mallows_index>`_


.. _silhouette_coefficient:

Silhouette Coefficient
----------------------

If the ground truth labels are not known, evaluation must be performed using
the model itself. The Silhouette Coefficient
(:func:`sklearn.metrics.silhouette_score`)
is an example of such an evaluation, where a
higher Silhouette Coefficient score relates to a model with better defined
clusters. 
The Silhouette Coefficient is defined for each sample and is composed
of two scores:

- **a**: The mean distance between a sample and all other points in the same
  class.

- **b**: The mean distance between a sample and all other points in the *next
  nearest cluster*.

The Silhouette Coefficient *s* for a single sample is then given as:

.. math:: s = \frac{b - a}{\max(a, b)}

The Silhouette Coefficient for a set of samples is given as the mean of the
Silhouette Coefficient for each sample.


  >>> from sklearn import metrics
  >>> from sklearn.metrics import pairwise_distances
  >>> from sklearn import datasets
  >>> X, y = datasets.load_iris(return_X_y=True)

In normal usage, the Silhouette Coefficient is applied to the results of a
cluster analysis::

  >>> import numpy as np
  >>> from sklearn.cluster import KMeans
  >>> kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)
  >>> labels = kmeans_model.labels_
  >>> metrics.silhouette_score(X, labels, metric='euclidean')
  0.55...

.. topic:: Advantages:

  - The score is bounded between -1 for incorrect clustering and +1 for highly
    dense clustering. Scores around zero indicate overlapping clusters.

  - The score is higher when clusters are dense and well separated, which
    relates to a standard concept of a cluster.

.. topic:: Drawbacks:

  - The Silhouette Coefficient is generally higher for convex clusters than
    other concepts of clusters, such as density-based clusters like those
    obtained through DBSCAN.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`: In
  this example the silhouette analysis is used to choose an optimal value for
  ``n_clusters``.

.. dropdown:: References

  * Peter J. Rousseeuw (1987). 
:doi:`\"Silhouettes: a Graphical Aid to the\n    Interpretation and Validation of Cluster Analysis\"<10.1016\/0377-0427(87)90125-7>`.\n    Computational and Applied Mathematics 20: 53-65.\n\n\n.. _calinski_harabasz_index:\n\nCalinski-Harabasz Index\n-----------------------\n\n\nIf the ground truth labels are not known, the Calinski-Harabasz index\n(:func:`sklearn.metrics.calinski_harabasz_score`) - also known as the Variance\nRatio Criterion - can be used to evaluate the model, where a higher\nCalinski-Harabasz score relates to a model with better defined clusters.\n\nThe index is the ratio of the sum of between-clusters dispersion and of\nwithin-cluster dispersion for all clusters (where dispersion is defined as the\nsum of distances squared):\n\n  >>> from sklearn import metrics\n  >>> from sklearn.metrics import pairwise_distances\n  >>> from sklearn import datasets\n  >>> X, y = datasets.load_iris(return_X_y=True)\n\nIn normal usage, the Calinski-Harabasz index is applied to the results of a\ncluster analysis:\n\n  >>> import numpy as np\n  >>> from sklearn.cluster import KMeans\n  >>> kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)\n  >>> labels = kmeans_model.labels_\n  >>> metrics.calinski_harabasz_score(X, labels)\n  561.59...\n\n\n.. topic:: Advantages:\n\n  - The score is higher when clusters are dense and well separated, which\n    relates to a standard concept of a cluster.\n\n  - The score is fast to compute.\n\n.. topic:: Drawbacks:\n\n  - The Calinski-Harabasz index is generally higher for convex clusters than\n    other concepts of clusters, such as density based clusters like those\n    obtained through DBSCAN.\n\n.. dropdown:: Mathematical formulation\n\n  For a set of data :math:`E` of size :math:`n_E` which has been clustered into\n  :math:`k` clusters, the Calinski-Harabasz score :math:`s` is defined as the\n  ratio of the between-clusters dispersion mean and the within-cluster\n  dispersion:\n\n  .. 
math::\n    s = \\frac{\\mathrm{tr}(B_k)}{\\mathrm{tr}(W_k)} \\times \\frac{n_E - k}{k - 1}\n\n  where :math:`\\mathrm{tr}(B_k)` is trace of the between group dispersion matrix\n  and :math:`\\mathrm{tr}(W_k)` is the trace of the within-cluster dispersion\n  matrix defined by:\n\n  .. math:: W_k = \\sum_{q=1}^k \\sum_{x \\in C_q} (x - c_q) (x - c_q)^T\n\n  .. math:: B_k = \\sum_{q=1}^k n_q (c_q - c_E) (c_q - c_E)^T\n\n  with :math:`C_q` the set of points in cluster :math:`q`, :math:`c_q` the\n  center of cluster :math:`q`, :math:`c_E` the center of :math:`E`, and\n  :math:`n_q` the number of points in cluster :math:`q`.\n\n.. dropdown:: References\n\n  * Cali\u0144ski, T., & Harabasz, J. (1974). `\"A Dendrite Method for Cluster Analysis\"\n    <https:\/\/www.researchgate.net\/publication\/233096619_A_Dendrite_Method_for_Cluster_Analysis>`_.\n    :doi:`Communications in Statistics-theory and Methods 3: 1-27\n    <10.1080\/03610927408827101>`.\n\n\n.. _davies-bouldin_index:\n\nDavies-Bouldin Index\n--------------------\n\nIf the ground truth labels are not known, the Davies-Bouldin index\n(:func:`sklearn.metrics.davies_bouldin_score`) can be used to evaluate the\nmodel, where a lower Davies-Bouldin index relates to a model with better\nseparation between the clusters.\n\nThis index signifies the average 'similarity' between clusters, where the\nsimilarity is a measure that compares the distance between clusters with the\nsize of the clusters themselves.\n\nZero is the lowest possible score. 
Values closer to zero indicate a better
partition.

In normal usage, the Davies-Bouldin index is applied to the results of a
cluster analysis as follows::

  >>> from sklearn import datasets
  >>> iris = datasets.load_iris()
  >>> X = iris.data
  >>> from sklearn.cluster import KMeans
  >>> from sklearn.metrics import davies_bouldin_score
  >>> kmeans = KMeans(n_clusters=3, random_state=1).fit(X)
  >>> labels = kmeans.labels_
  >>> davies_bouldin_score(X, labels)
  0.666...


.. topic:: Advantages:

  - The computation of the Davies-Bouldin index is simpler than that of
    Silhouette scores.
  - The index is solely based on quantities and features inherent to the
    dataset, as its computation only uses point-wise distances.

.. topic:: Drawbacks:

  - The Davies-Bouldin index is generally higher for convex clusters than other
    concepts of clusters, such as density based clusters like those obtained
    from DBSCAN.
  - The usage of centroid distance limits the distance metric to Euclidean
    space.

.. dropdown:: Mathematical formulation

  The index is defined as the average similarity between each cluster :math:`C_i`
  for :math:`i=1, ..., k` and its most similar one :math:`C_j`. In the context of
  this index, similarity is defined as a measure :math:`R_{ij}` that trades off:

  - :math:`s_i`, the average distance between each point of cluster :math:`i` and
    the centroid of that cluster -- also known as the cluster diameter.
  - :math:`d_{ij}`, the distance between cluster centroids :math:`i` and
    :math:`j`.

  A simple choice to construct :math:`R_{ij}` so that it is nonnegative and
  symmetric is:

  .. math::
    R_{ij} = \frac{s_i + s_j}{d_{ij}}

  Then the Davies-Bouldin index is defined as:

  .. math::
    DB = \frac{1}{k} \sum_{i=1}^k \max_{i \neq j} R_{ij}

.. dropdown:: References

  * Davies, David L.; Bouldin, Donald W. (1979). 
    :doi:`"A Cluster Separation Measure" <10.1109/TPAMI.1979.4766909>`
    IEEE Transactions on Pattern Analysis and Machine Intelligence.
    PAMI-1 (2): 224-227.

  * Halkidi, Maria; Batistakis, Yannis; Vazirgiannis, Michalis (2001). :doi:`"On
    Clustering Validation Techniques" <10.1023/A:1012801612483>` Journal of
    Intelligent Information Systems, 17(2-3), 107-145.

  * `Wikipedia entry for Davies-Bouldin index
    <https://en.wikipedia.org/wiki/Davies-Bouldin_index>`_.


.. _contingency_matrix:

Contingency Matrix
------------------

Contingency matrix (:func:`sklearn.metrics.cluster.contingency_matrix`)
reports the intersection cardinality for every true/predicted cluster pair.
The contingency matrix provides sufficient statistics for all clustering
metrics where the samples are independent and identically distributed and
one doesn't need to account for some instances not being clustered.

Here is an example::

   >>> from sklearn.metrics.cluster import contingency_matrix
   >>> x = ["a", "a", "a", "b", "b", "b"]
   >>> y = [0, 0, 1, 1, 2, 2]
   >>> contingency_matrix(x, y)
   array([[2, 1, 0],
          [0, 1, 2]])

The first row of the output array indicates that there are three samples whose
true cluster is "a". Of them, two are in predicted cluster 0, one is in 1,
and none is in 2. And the second row indicates that there are three samples
whose true cluster is "b". Of them, none is in predicted cluster 0, one is in
1 and two are in 2.

A :ref:`confusion matrix <confusion_matrix>` for classification is a square
contingency matrix where the order of rows and columns corresponds to a list
of classes.

.. topic:: Advantages:

  - Allows examining the spread of each true cluster across predicted clusters
    and vice versa.

  - The contingency table calculated is typically utilized in the calculation of
    a similarity statistic (like the others listed in this document) between the
    two clusterings.

.. topic:: Drawbacks:

  - The contingency matrix is easy to interpret for a small number of clusters, but
    becomes very hard to interpret for a large number of clusters.

  - It doesn't give a single metric to use as an objective for clustering
    optimisation.

.. dropdown:: References

  * `Wikipedia entry for contingency matrix
    <https://en.wikipedia.org/wiki/Contingency_table>`_


.. _pair_confusion_matrix:

Pair Confusion Matrix
---------------------

The pair confusion matrix
(:func:`sklearn.metrics.cluster.pair_confusion_matrix`) is a 2x2
similarity matrix

.. math::
   C = \left[\begin{matrix}
   C_{00} & C_{01} \\
   C_{10} & C_{11}
   \end{matrix}\right]

between two clusterings computed by considering all pairs of samples and
counting pairs that are assigned into the same or into different clusters
under the true and predicted clusterings.

It has the following entries:

:math:`C_{00}`: number of pairs with both clusterings having the samples
not clustered together

:math:`C_{10}`: number of pairs with the true label clustering having the
samples clustered together but the other clustering not having the samples
clustered together

:math:`C_{01}`: number of pairs with the true label clustering not having
the samples clustered together but the other clustering having the samples
clustered together

:math:`C_{11}`: number of pairs with both clusterings having the samples
clustered together

Considering a pair of samples that is clustered together a positive pair,
then as in binary classification the count of true negatives is
:math:`C_{00}`, false negatives is :math:`C_{10}`,
true positives is
:math:`C_{11}` and false positives is :math:`C_{01}`.

Perfectly matching labelings have all non-zero entries on the
diagonal regardless of actual label values::

   >>> from sklearn.metrics.cluster import pair_confusion_matrix
   >>> pair_confusion_matrix([0, 0, 1, 1], [0, 0, 1, 1])
   array([[8, 0],
          [0, 4]])

::

   >>> pair_confusion_matrix([0, 0, 1, 1], [1, 1, 0, 0])
   array([[8, 0],
          [0, 4]])

Labelings that assign all class members to the same clusters
are complete but may not always be pure, hence penalized, and
have some off-diagonal non-zero entries::

   >>> pair_confusion_matrix([0, 0, 1, 2], [0, 0, 1, 1])
   array([[8, 2],
          [0, 2]])

The matrix is not symmetric::

   >>> pair_confusion_matrix([0, 0, 1, 1], [0, 0, 1, 2])
   array([[8, 0],
          [2, 2]])

If class members are completely split across different clusters, the
assignment is totally incomplete, hence the matrix has all zero
diagonal entries::

   >>> pair_confusion_matrix([0, 0, 0, 0], [0, 1, 2, 3])
   array([[ 0,  0],
          [12,  0]])

.. dropdown:: References

  * :doi:`"Comparing Partitions" <10.1007/BF01908075>` L. Hubert and P.
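All four entries can be derived from the contingency matrix; ``pair_confusion`` below is a hypothetical sketch (not scikit-learn's implementation) that counts ordered pairs, which is why its values match the doubled counts in the examples:

```python
import numpy as np

def pair_confusion(labels_true, labels_pred):
    """Pair confusion matrix derived from the contingency matrix (sketch)."""
    t = np.unique(labels_true, return_inverse=True)[1]
    p = np.unique(labels_pred, return_inverse=True)[1]
    n = t.size
    cont = np.zeros((t.max() + 1, p.max() + 1), dtype=np.int64)
    np.add.at(cont, (t, p), 1)            # n_ij: intersection cardinalities
    sum_sq = (cont ** 2).sum()
    a2 = (cont.sum(axis=1) ** 2).sum()    # squared true cluster sizes
    b2 = (cont.sum(axis=0) ** 2).sum()    # squared predicted cluster sizes
    c11 = sum_sq - n                      # pairs together in both clusterings
    c10 = a2 - sum_sq                     # together in the true labels only
    c01 = b2 - sum_sq                     # together in the prediction only
    c00 = n * n - n - c11 - c10 - c01     # together in neither
    return np.array([[c00, c01], [c10, c11]])

print(pair_confusion([0, 0, 1, 2], [0, 0, 1, 1]))
# [[8 2]
#  [0 2]]
```

The rows of `c00 + c01 + c10 + c11` always sum to :math:`n(n-1)`, the number of ordered sample pairs.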
    Arabie, Journal of Classification 1985


.. _clustering:

==========
Clustering
==========

`Clustering <https://en.wikipedia.org/wiki/Cluster_analysis>`__ of
unlabeled data can be performed with the module :mod:`sklearn.cluster`.

Each clustering algorithm comes in two variants: a class, that implements
the ``fit`` method to learn the clusters on train data, and a function,
that, given train data, returns an array of integer labels corresponding
to the different clusters. For the class, the labels over the training
data can be found in the ``labels_`` attribute.

.. currentmodule:: sklearn.cluster

.. topic:: Input data

    One important thing to note is that the algorithms implemented in
    this module can take different kinds of matrix as input. All the
    methods accept standard data matrices of shape
    ``(n_samples, n_features)``. These can be obtained from the classes
    in the :mod:`sklearn.feature_extraction` module. For
    :class:`AffinityPropagation`, :class:`SpectralClustering` and
    :class:`DBSCAN` one can also input similarity matrices of shape
    ``(n_samples, n_samples)``. These can be obtained from the functions
    in the :mod:`sklearn.metrics.pairwise` module.

Overview of clustering methods
==============================

.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_cluster_comparison_001.png
   :target: ../auto_examples/cluster/plot_cluster_comparison.html
   :align: center
   :scale: 50

   A comparison of the clustering algorithms in scikit-learn

.. list-table::
   :header-rows: 1
   :widths: 14 15 19 25 20

   * - Method name
     - Parameters
     - Scalability
     - Usecase
     - Geometry (metric used)

   * - :ref:`K-Means <k_means>`
     - number of clusters
     - Very large ``n_samples``, medium ``n_clusters`` with
       :ref:`MiniBatch code <mini_batch_kmeans>`
     - General-purpose, even cluster size, flat geometry,
       not too many clusters, inductive
     - Distances between points

   * - :ref:`Affinity propagation <affinity_propagation>`
     - damping, sample preference
     - Not scalable with n_samples
     - Many clusters, uneven cluster size, non-flat geometry, inductive
     - Graph distance (e.g. nearest-neighbor graph)

   * - :ref:`Mean-shift <mean_shift>`
     - bandwidth
     - Not scalable with ``n_samples``
     - Many clusters, uneven cluster size, non-flat geometry, inductive
     - Distances between points

   * - :ref:`Spectral clustering <spectral_clustering>`
     - number of clusters
     - Medium ``n_samples``, small ``n_clusters``
     - Few clusters, even cluster size, non-flat geometry, transductive
     - Graph distance (e.g. nearest-neighbor graph)

   * - :ref:`Ward hierarchical clustering <hierarchical_clustering>`
     - number of clusters or distance threshold
     - Large ``n_samples`` and ``n_clusters``
     - Many clusters, possibly connectivity constraints, transductive
     - Distances between points

   * - :ref:`Agglomerative clustering <hierarchical_clustering>`
     - number of clusters or distance threshold, linkage type, distance
     - Large ``n_samples`` and ``n_clusters``
     - Many clusters, possibly connectivity constraints, non Euclidean
       distances, transductive
     - Any pairwise distance

   * - :ref:`DBSCAN <dbscan>`
     - neighborhood size
     - Very large ``n_samples``, medium ``n_clusters``
     - Non-flat geometry, uneven cluster sizes, outlier removal,
       transductive
     - Distances between nearest points

   * - :ref:`HDBSCAN <hdbscan>`
     - minimum cluster membership, minimum point neighbors
     - large ``n_samples``, medium ``n_clusters``
     - Non-flat geometry, uneven cluster sizes, outlier removal,
       transductive, hierarchical, variable cluster density
     - Distances between nearest points

   * - :ref:`OPTICS <optics>`
     - minimum cluster membership
     - Very large ``n_samples``, large ``n_clusters``
     - Non-flat
       geometry, uneven cluster sizes, variable cluster density,
       outlier removal, transductive
     - Distances between points

   * - :ref:`Gaussian mixtures <mixture>`
     - many
     - Not scalable
     - Flat geometry, good for density estimation, inductive
     - Mahalanobis distances to centers

   * - :ref:`BIRCH <birch>`
     - branching factor, threshold, optional global clusterer
     - Large ``n_clusters`` and ``n_samples``
     - Large dataset, outlier removal, data reduction, inductive
     - Euclidean distance between points

   * - :ref:`Bisecting K-Means <bisect_k_means>`
     - number of clusters
     - Very large ``n_samples``, medium ``n_clusters``
     - General-purpose, even cluster size, flat geometry,
       no empty clusters, inductive, hierarchical
     - Distances between points

Non-flat geometry clustering is useful when the clusters have a specific
shape, i.e. a non-flat manifold, and the standard euclidean distance is
not the right metric. This case arises in the two top rows of the figure
above.

Gaussian mixture models, useful for clustering, are described in
:ref:`another chapter of the documentation <mixture>` dedicated to
mixture models. KMeans can be seen as a special case of Gaussian mixture
model with equal covariance per component.

:term:`Transductive <transductive>` clustering methods (in contrast to
:term:`inductive` clustering methods) are not designed to be applied to
new, unseen data.

.. _k_means:

K-means
=======

The :class:`KMeans` algorithm clusters data by trying to separate samples
in n groups of equal variance, minimizing a criterion known as the
*inertia* or within-cluster sum-of-squares (see below). This algorithm
requires the number of clusters to be specified. It scales well to large
numbers of samples and has been used across a large range of application
areas in many different fields.

The k-means algorithm divides a set of :math:`N` samples :math:`X` into
:math:`K` disjoint clusters :math:`C`, each
described by the mean :math:`\mu_j` of the samples in the cluster. The
means are commonly called the cluster "centroids"; note that they are
not, in general, points from :math:`X`, although they live in the same
space.

The K-means algorithm aims to choose centroids that minimise the
**inertia**, or **within-cluster sum-of-squares criterion**:

.. math:: \sum_{i=0}^{n}\min_{\mu_j \in C}(||x_i - \mu_j||^2)

Inertia can be recognized as a measure of how internally coherent
clusters are. It suffers from various drawbacks:

- Inertia makes the assumption that clusters are convex and isotropic,
  which is not always the case. It responds poorly to elongated clusters,
  or manifolds with irregular shapes.

- Inertia is not a normalized metric: we just know that lower values are
  better and zero is optimal. But in very high-dimensional spaces,
  Euclidean distances tend to become inflated (this is an instance of the
  so-called "curse of dimensionality"). Running a dimensionality
  reduction algorithm such as :ref:`PCA` prior to k-means clustering can
  alleviate this problem and speed up the computations.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_kmeans_assumptions_002.png
   :target: ../auto_examples/cluster/plot_kmeans_assumptions.html
   :align: center
   :scale: 50

For more detailed descriptions of the issues shown above and how to
address them, refer to the examples
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_assumptions.py` and
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`.

K-means is often referred to as Lloyd's algorithm. In basic terms, the
algorithm has three steps. The first step chooses the initial centroids,
with the most basic method being to choose :math:`k` samples from the
dataset :math:`X`. After initialization, K-means consists of looping
between the two other steps. The first step assigns each sample to its
nearest centroid. The second step creates new centroids by taking the
mean value of all of
the samples assigned to each previous centroid. The difference between
the old and the new centroids are computed and the algorithm repeats
these last two steps until this value is less than a threshold. In other
words, it repeats until the centroids do not move significantly.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_kmeans_digits_001.png
   :target: ../auto_examples/cluster/plot_kmeans_digits.html
   :align: right
   :scale: 35

K-means is equivalent to the expectation-maximization algorithm with a
small, all-equal, diagonal covariance matrix.

The algorithm can also be understood through the concept of `Voronoi
diagrams <https://en.wikipedia.org/wiki/Voronoi_diagram>`_. First the
Voronoi diagram of the points is calculated using the current centroids.
Each segment in the Voronoi diagram becomes a separate cluster. Secondly,
the centroids are updated to the mean of each segment. The algorithm then
repeats this until a stopping criterion is fulfilled. Usually, the
algorithm stops when the relative decrease in the objective function
between iterations is less than the given tolerance value. This is not
the case in this implementation: iteration stops when centroids move less
than the tolerance.

Given enough time, K-means will always converge, however this may be to a
local minimum. This is highly dependent on the initialization of the
centroids. As a result, the computation is often done several times, with
different initializations of the centroids. One method to help address
this issue is the k-means++ initialization scheme, which has been
implemented in scikit-learn (use the ``init='k-means++'`` parameter).
This initializes the centroids to be (generally) distant from each other,
leading to probably better results than random initialization, as shown
in the reference. For detailed examples of comparing different
initialization schemes, refer to
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_digits.py` and
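The alternating assignment and update steps of Lloyd's algorithm can be sketched in plain NumPy; ``lloyd_kmeans`` and its naive random-sample initialization are illustrative only, not scikit-learn's optimized implementation:

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, tol=1e-4, seed=0):
    """Toy Lloyd's algorithm: assign to nearest centroid, then re-average."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # naive init
    for _ in range(n_iter):
        # assignment step: label each sample with its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: move each centroid to the mean of its samples
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        shift = np.linalg.norm(new_centroids - centroids)
        centroids = new_centroids
        if shift < tol:  # stop once the centroids barely move
            break
    return centroids, labels

# Two well-separated blobs of three points each:
X = np.array([[0., 0.], [0., 1.], [1., 0.],
              [10., 10.], [10., 11.], [11., 10.]])
centroids, labels = lloyd_kmeans(X, k=2)
```

On this toy data the loop recovers the two blob means regardless of which two samples are drawn as initial centroids, since each initial centroid always keeps at least its own point.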
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_stability_low_dim_dense.py`.

K-means++ can also be called independently to select seeds for other
clustering algorithms, see :func:`sklearn.cluster.kmeans_plusplus` for
details and example usage.

The algorithm supports sample weights, which can be given by a parameter
``sample_weight``. This allows to assign more weight to some samples when
computing cluster centers and values of inertia. For example, assigning a
weight of 2 to a sample is equivalent to adding a duplicate of that
sample to the dataset :math:`X`.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`: Document
  clustering using :class:`KMeans` and :class:`MiniBatchKMeans` based on
  sparse data
* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_plusplus.py`: Using
  K-means++ to select seeds for other clustering algorithms

Low-level parallelism
---------------------

:class:`KMeans` benefits from OpenMP based parallelism through Cython.
Small chunks of data (256 samples) are processed in parallel, which in
addition yields a low memory footprint. For more details on how to
control the number of threads, please refer to our :ref:`parallelism`
notes.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_assumptions.py`:
  Demonstrating when k-means performs intuitively and when it does not
* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_digits.py`: Clustering
  handwritten digits

.. dropdown:: References

  * `"k-means++: The advantages of careful seeding"
    <http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf>`_
    Arthur, David, and Sergei Vassilvitskii,
    *Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete
    algorithms*, Society for Industrial and Applied Mathematics (2007)

.. _mini_batch_kmeans:

Mini Batch K-Means
==================

The :class:`MiniBatchKMeans` is a variant of the :class:`KMeans`
algorithm which uses mini-batches to reduce the computation time, while
still attempting to optimise the
same objective function. Mini-batches are subsets of the input data,
randomly sampled in each training iteration. These mini-batches
drastically reduce the amount of computation required to converge to a
local solution. In contrast to other algorithms that reduce the
convergence time of k-means, mini-batch k-means produces results that are
generally only slightly worse than the standard algorithm.

The algorithm iterates between two major steps, similar to vanilla
k-means. In the first step, :math:`b` samples are drawn randomly from the
dataset, to form a mini-batch. These are then assigned to the nearest
centroid. In the second step, the centroids are updated. In contrast to
k-means, this is done on a per-sample basis. For each sample in the
mini-batch, the assigned centroid is updated by taking the streaming
average of the sample and all previous samples assigned to that centroid.
This has the effect of decreasing the rate of change for a centroid over
time. These steps are performed until convergence or a predetermined
number of iterations is reached.

:class:`MiniBatchKMeans` converges faster than :class:`KMeans`, but the
quality of the results is reduced. In practice this difference in quality
can be quite small, as shown in the example and cited reference.

.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_mini_batch_kmeans_001.png
   :target: ../auto_examples/cluster/plot_mini_batch_kmeans.html
   :align: center
   :scale: 100

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_mini_batch_kmeans.py`:
  Comparison of :class:`KMeans` and :class:`MiniBatchKMeans`
* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`: Document
  clustering using :class:`KMeans` and :class:`MiniBatchKMeans` based on
  sparse data
* :ref:`sphx_glr_auto_examples_cluster_plot_dict_face_patches.py`

.. dropdown:: References

  * `"Web Scale K-Means clustering"
    <https://www.eecs.tufts.edu/~dsculley/papers/fastkmeans.pdf>`_
    D. Sculley,
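The per-sample streaming-average update described above can be sketched as follows; ``minibatch_step`` is a hypothetical helper for illustration, not the :class:`MiniBatchKMeans` implementation:

```python
import numpy as np

def minibatch_step(centroids, counts, batch):
    """One mini-batch step: per-sample streaming average of each centroid."""
    for x in batch:
        j = np.linalg.norm(centroids - x, axis=1).argmin()  # nearest centroid
        counts[j] += 1
        eta = 1.0 / counts[j]  # per-centroid learning rate decays over time
        centroids[j] = (1.0 - eta) * centroids[j] + eta * x
    return centroids, counts

centroids = np.array([[0.0], [10.0]])
counts = np.array([1, 1])  # samples already assigned to each centroid
minibatch_step(centroids, counts, np.array([[2.0], [8.0]]))
print(centroids.ravel())  # [1. 9.]
```

Because ``eta`` shrinks as a centroid accumulates samples, later samples move it less and less, which is exactly the decreasing rate of change mentioned above.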
    *Proceedings of the 19th international conference on World wide web*
    (2010)

.. _affinity_propagation:

Affinity Propagation
====================

:class:`AffinityPropagation` creates clusters by sending messages between
pairs of samples until convergence. A dataset is then described using a
small number of exemplars, which are identified as those most
representative of other samples. The messages sent between pairs
represent the suitability for one sample to be the exemplar of the other,
which is updated in response to the values from other pairs. This
updating happens iteratively until convergence, at which point the final
exemplars are chosen, and hence the final clustering is given.

.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_affinity_propagation_001.png
   :target: ../auto_examples/cluster/plot_affinity_propagation.html
   :align: center
   :scale: 50

Affinity Propagation can be interesting as it chooses the number of
clusters based on the data provided. For this purpose, the two important
parameters are the *preference*, which controls how many exemplars are
used, and the *damping factor* which damps the responsibility and
availability messages to avoid numerical oscillations when updating these
messages.

The main drawback of Affinity Propagation is its complexity. The
algorithm has a time complexity of the order :math:`O(N^2 T)`, where
:math:`N` is the number of samples and :math:`T` is the number of
iterations until convergence. Further, the memory complexity is of the
order :math:`O(N^2)` if a dense similarity matrix is used, but reducible
if a sparse similarity matrix is used. This makes Affinity Propagation
most appropriate for small to medium sized datasets.

.. dropdown:: Algorithm description

  The messages sent between points belong to one of two categories. The
  first is the responsibility :math:`r(i, k)`, which is the accumulated
  evidence that sample :math:`k` should be the exemplar for sample
  :math:`i`. The second is the
  availability :math:`a(i, k)`, which is the accumulated evidence that
  sample :math:`i` should choose sample :math:`k` to be its exemplar, and
  considers the values for all other samples that :math:`k` should be an
  exemplar. In this way, exemplars are chosen by samples if they are (1)
  similar enough to many samples and (2) chosen by many samples to be
  representative of themselves.

  More formally, the responsibility of a sample :math:`k` to be the
  exemplar of sample :math:`i` is given by:

  .. math::

    r(i, k) \leftarrow s(i, k) - max [ a(i, k') + s(i, k') \forall k' \neq k ]

  Where :math:`s(i, k)` is the similarity between samples :math:`i` and
  :math:`k`. The availability of sample :math:`k` to be the exemplar of
  sample :math:`i` is given by:

  .. math::

    a(i, k) \leftarrow min [0, r(k, k) + \sum_{i'~s.t.~i' \notin \{i, k\}}{r(i', k)}]

  To begin with, all values for :math:`r` and :math:`a` are set to zero,
  and the calculation of each iterates until convergence. As discussed
  above, in order to avoid numerical oscillations when updating the
  messages, the damping factor :math:`\lambda` is introduced to the
  iteration process:

  .. math:: r_{t+1}(i, k) = \lambda\cdot r_{t}(i, k) + (1-\lambda)\cdot r_{t+1}(i, k)

  .. math:: a_{t+1}(i, k) = \lambda\cdot a_{t}(i, k) + (1-\lambda)\cdot a_{t+1}(i, k)

  where :math:`t` indicates the iteration times.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_affinity_propagation.py`:
  Affinity Propagation on a synthetic 2D datasets with 3 classes
* :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py`:
  Affinity Propagation on financial time series to find groups of
  companies

.. _mean_shift:

Mean Shift
==========

:class:`MeanShift` clustering aims to discover *blobs* in a smooth
density of samples. It is a centroid based algorithm, which works by
updating candidates for centroids to be the mean of the points within a
given region. These candidates are then filtered
in a post-processing stage to eliminate near-duplicates to form the final
set of centroids.

.. dropdown:: Mathematical details

  The position of centroid candidates is iteratively adjusted using a
  technique called hill climbing, which finds local maxima of the
  estimated probability density. Given a candidate centroid :math:`x` for
  iteration :math:`t`, the candidate is updated according to the
  following equation:

  .. math::

    x^{t+1} = x^t + m(x^t)

  Where :math:`m` is the *mean shift* vector that is computed for each
  centroid that points towards a region of the maximum increase in the
  density of points. To compute :math:`m` we define :math:`N(x)` as the
  neighborhood of samples within a given distance around :math:`x`. Then
  :math:`m` is computed using the following equation, effectively
  updating a centroid to be the mean of the samples within its
  neighborhood:

  .. math::

    m(x) = \frac{1}{|N(x)|} \sum_{x_j \in N(x)}x_j - x

  In general, the equation for :math:`m` depends on a kernel used for
  density estimation. The generic formula is:

  .. math::

    m(x) = \frac{\sum_{x_j \in N(x)}K(x_j - x)x_j}{\sum_{x_j \in N(x)}K(x_j - x)} - x

  In our implementation, :math:`K(x)` is equal to 1 if :math:`x` is small
  enough and is equal to 0 otherwise. Effectively :math:`K(y - x)`
  indicates whether :math:`y` is in the neighborhood of :math:`x`.

The algorithm automatically sets the number of clusters, instead of
relying on a parameter ``bandwidth``, which dictates the size of the
region to search through. This parameter can be set manually, but can be
estimated using the provided ``estimate_bandwidth`` function, which is
called if the bandwidth is not set.

The algorithm is not highly scalable, as it requires multiple nearest
neighbor searches during the execution of the algorithm. The algorithm is
guaranteed to converge, however the algorithm will stop iterating when
the change in centroids is small.

Labelling a new sample is
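The flat-kernel update can be sketched as a small hill-climbing loop; ``mean_shift_step`` and ``climb`` are illustrative names, not part of scikit-learn:

```python
import numpy as np

def mean_shift_step(X, x, bandwidth):
    """Flat-kernel update: replace x by the mean of its neighborhood N(x)."""
    neighbors = X[np.linalg.norm(X - x, axis=1) <= bandwidth]
    return neighbors.mean(axis=0)

def climb(X, x0, bandwidth, n_iter=50, tol=1e-6):
    """Hill-climb a candidate centroid until it stops moving."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        shifted = mean_shift_step(X, x, bandwidth)
        if np.linalg.norm(shifted - x) < tol:
            break
        x = shifted
    return x

# The candidate climbs from 0 to the local density peak at 1,
# ignoring the far-away point at 10:
X = np.array([[0.0], [1.0], [2.0], [10.0]])
print(climb(X, x0=[0.0], bandwidth=3.0))  # [1.]
```

A full mean-shift clustering would run this climb from many candidates and then merge near-duplicate end points, as described above.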
performed by finding the nearest centroid for a given sample.

.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_mean_shift_001.png
   :target: ../auto_examples/cluster/plot_mean_shift.html
   :align: center
   :scale: 50

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_mean_shift.py`: Mean Shift
  clustering on a synthetic 2D datasets with 3 classes

.. dropdown:: References

  * :doi:`"Mean shift: A robust approach toward feature space analysis"
    <10.1109/34.1000236>` D. Comaniciu and P. Meer, *IEEE Transactions on
    Pattern Analysis and Machine Intelligence* (2002)

.. _spectral_clustering:

Spectral clustering
===================

:class:`SpectralClustering` performs a low-dimension embedding of the
affinity matrix between samples, followed by clustering, e.g., by KMeans,
of the components of the eigenvectors in the low dimensional space. It is
especially computationally efficient if the affinity matrix is sparse and
the ``amg`` solver is used for the eigenvalue problem (Note, the ``amg``
solver requires that the `pyamg <https://github.com/pyamg/pyamg>`_ module
is installed.)

The present version of SpectralClustering requires the number of clusters
to be specified in advance. It works well for a small number of clusters,
but is not advised for many clusters.

For two clusters, SpectralClustering solves a convex relaxation of the
`normalized cuts <https://people.eecs.berkeley.edu/~malik/papers/SM-ncut.pdf>`_
problem on the similarity graph: cutting the graph in two so that the
weight of the edges cut is small compared to the weights of the edges
inside each cluster. This criteria is especially interesting when working
on images, where graph vertices are pixels, and weights of the edges of
the similarity graph are computed using a function of a gradient of the
image.

.. |noisy_img| image:: ../auto_examples/cluster/images/sphx_glr_plot_segmentation_toy_001.png
   :target: ../auto_examples/cluster/plot_segmentation_toy.html
   :scale: 50

.. |segmented_img| image:: ../auto_examples/cluster/images/sphx_glr_plot_segmentation_toy_002.png
   :target: ../auto_examples/cluster/plot_segmentation_toy.html
   :scale: 50

.. centered:: |noisy_img| |segmented_img|

.. warning:: Transforming distance to well-behaved similarities

    Note that if the values of your similarity matrix are not well
    distributed, e.g. with negative values or with a distance matrix
    rather than a similarity, the spectral problem will be singular and
    the problem not solvable. In which case it is advised to apply a
    transformation to the entries of the matrix. For instance, in the
    case of a signed distance matrix, it is common to apply a heat
    kernel::

        similarity = np.exp(-beta * distance / distance.std())

    See the examples for such an application.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_segmentation_toy.py`:
  Segmenting objects from a noisy background using spectral clustering.
* :ref:`sphx_glr_auto_examples_cluster_plot_coin_segmentation.py`:
  Spectral clustering to split the image of coins in regions.

.. |coin_kmeans| image:: ../auto_examples/cluster/images/sphx_glr_plot_coin_segmentation_001.png
   :target: ../auto_examples/cluster/plot_coin_segmentation.html
   :scale: 35

.. |coin_discretize| image:: ../auto_examples/cluster/images/sphx_glr_plot_coin_segmentation_002.png
   :target: ../auto_examples/cluster/plot_coin_segmentation.html
   :scale: 35

.. |coin_cluster_qr| image:: ../auto_examples/cluster/images/sphx_glr_plot_coin_segmentation_003.png
   :target: ../auto_examples/cluster/plot_coin_segmentation.html
   :scale: 35

Different label assignment strategies
-------------------------------------

Different label assignment strategies can be used, corresponding to the
``assign_labels`` parameter of :class:`SpectralClustering`. ``"kmeans"``
strategy can match finer details, but can be unstable. In particular,
unless you control the ``random_state``, it may not
be reproducible from run-to-run, as it depends on random initialization.
The alternative ``"discretize"`` strategy is 100% reproducible, but tends
to create parcels of fairly even and geometrical shape. The recently
added ``"cluster_qr"`` option is a deterministic alternative that tends
to create the visually best partitioning on the example application
below.

================================ ================================ ================================
 ``assign_labels="kmeans"``       ``assign_labels="discretize"``   ``assign_labels="cluster_qr"``
================================ ================================ ================================
|coin_kmeans|                    |coin_discretize|                |coin_cluster_qr|
================================ ================================ ================================

.. dropdown:: References

  * `"Multiclass spectral clustering"
    <https://people.eecs.berkeley.edu/~jordan/courses/281B-spring04/readings/yu-shi.pdf>`_
    Stella X. Yu, Jianbo Shi, 2003

  * :doi:`"Simple, direct, and efficient multi-way spectral clustering"
    <10.1093/imaiai/iay008>`
    Anil Damle, Victor Minden, Lexing Ying, 2019

.. _spectral_clustering_graph:

Spectral Clustering Graphs
--------------------------

Spectral Clustering can also be used to partition graphs via their
spectral embeddings. In this case, the affinity matrix is the adjacency
matrix of the graph, and SpectralClustering is initialized with
``affinity='precomputed'``::

    >>> from sklearn.cluster import SpectralClustering
    >>> sc = SpectralClustering(3, affinity='precomputed', n_init=100,
    ...                         assign_labels='discretize')
    >>> sc.fit_predict(adjacency_matrix)  # doctest: +SKIP

.. dropdown:: References

  * :doi:`"A Tutorial on Spectral Clustering" <10.1007/s11222-007-9033-z>`
    Ulrike von Luxburg, 2007

  * :doi:`"Normalized cuts and image segmentation" <10.1109/34.868688>`
    Jianbo Shi, Jitendra Malik,
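As a sketch of the idea behind partitioning a graph through its spectral embedding, the sign of the Fiedler vector of the graph Laplacian already yields a two-way partition; ``spectral_bipartition`` is a toy illustration, not :class:`SpectralClustering`:

```python
import numpy as np

def spectral_bipartition(adjacency):
    """Two-way spectral partition: sign of the Fiedler vector of the
    unnormalized graph Laplacian L = D - A (illustrative sketch)."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = vecs[:, 1]         # eigenvector of the 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)

# Two triangles joined by a single edge (nodes 0-2 and 3-5);
# cutting that edge is the cheapest two-way cut:
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
labels = spectral_bipartition(A)  # each triangle ends up in its own group
```

For more than two clusters one would embed the samples using several eigenvectors and run, e.g., k-means on the embedding, which is what the class does.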
    2000

  * `"A Random Walks View of Spectral Segmentation"
    <https://citeseerx.ist.psu.edu/doc_view/pid/84a86a69315e994cfd1e0c7debb86d62d7bd1f44>`_
    Marina Meila, Jianbo Shi, 2001

  * `"On Spectral Clustering: Analysis and an algorithm"
    <https://citeseerx.ist.psu.edu/doc_view/pid/796c5d6336fc52aa84db575fb821c78918b65f58>`_
    Andrew Y. Ng, Michael I. Jordan, Yair Weiss, 2001

  * :arxiv:`"Preconditioned Spectral Clustering for Stochastic Block
    Partition Streaming Graph Challenge" <1708.07481>`
    David Zhuzhunashvili, Andrew Knyazev

.. _hierarchical_clustering:

Hierarchical clustering
=======================

Hierarchical clustering is a general family of clustering algorithms that
build nested clusters by merging or splitting them successively. This
hierarchy of clusters is represented as a tree (or dendrogram). The root
of the tree is the unique cluster that gathers all the samples, the
leaves being the clusters with only one sample. See the `Wikipedia page
<https://en.wikipedia.org/wiki/Hierarchical_clustering>`_ for more
details.

The :class:`AgglomerativeClustering` object performs a hierarchical
clustering using a bottom up approach: each observation starts in its own
cluster, and clusters are successively merged together. The linkage
criteria determines the metric used for the merge strategy:

- **Ward** minimizes the sum of squared differences within all clusters.
  It is a variance-minimizing approach and in this sense is similar to
  the k-means objective function but tackled with an agglomerative
  hierarchical approach.
- **Maximum** or **complete linkage** minimizes the maximum distance
  between observations of pairs of clusters.
- **Average linkage** minimizes the average of the distances between all
  observations of pairs of clusters.
- **Single linkage** minimizes the distance between the closest
  observations of pairs of clusters.

:class:`AgglomerativeClustering` can also scale to a large number of
samples when it is used jointly
.. topic:: :class:`FeatureAgglomeration`

   The :class:`FeatureAgglomeration` uses agglomerative clustering to group
   together features that look very similar, thus decreasing the number of
   features. It is a dimensionality reduction tool, see :ref:`data_reduction`.

Different linkage type: Ward, complete, average, and single linkage
-------------------------------------------------------------------

:class:`AgglomerativeClustering` supports Ward, single, average, and complete
linkage strategies.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_linkage_comparison_001.png
    :target: ../auto_examples/cluster/plot_linkage_comparison.html
    :scale: 43

Agglomerative clustering has a "rich get richer" behavior that leads to uneven
cluster sizes. In this regard, single linkage is the worst strategy, and Ward
gives the most regular sizes. However, the affinity (or distance used in
clustering) cannot be varied with Ward, thus for non-Euclidean metrics,
average linkage is a good alternative. Single linkage, while not robust to
noisy data, can be computed very efficiently and can therefore be useful to
provide hierarchical clustering of larger datasets. Single linkage can also
perform well on non-globular data.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_digits_linkage.py`: exploration of
  the different linkage strategies in a real dataset.

* :ref:`sphx_glr_auto_examples_cluster_plot_linkage_comparison.py`:
  exploration of the different linkage strategies in toy datasets.

Visualization of cluster hierarchy
----------------------------------

It's possible to visualize the tree representing the hierarchical merging of
clusters as a dendrogram. Visual inspection can often be useful for
understanding the structure of the data, though more so in the case of small
sample sizes.
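As a minimal sketch of such a visualization (using SciPy's hierarchy utilities
directly on an invented toy dataset, rather than the estimator-based helper
from the example below):

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage

# Invented toy data: two loose groups of points.
X = np.array([[0.0, 0.0], [0.3, 0.1], [4.0, 4.0], [4.2, 3.9], [4.1, 4.3]])

# Ward linkage produces the (n_samples - 1) x 4 merge matrix that
# `dendrogram` knows how to draw.
Z = linkage(X, method="ward")
tree = dendrogram(Z, no_plot=True)  # pass no_plot=False to render with matplotlib
print(tree["ivl"])  # leaf labels in left-to-right dendrogram order
```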
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_dendrogram_001.png
    :target: ../auto_examples/cluster/plot_agglomerative_dendrogram.html
    :scale: 42

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_dendrogram.py`

Adding connectivity constraints
-------------------------------

An interesting aspect of :class:`AgglomerativeClustering` is that connectivity
constraints can be added to this algorithm (only adjacent clusters can be
merged together), through a connectivity matrix that defines for each sample
the neighboring samples following a given structure of the data. For instance,
in the swiss-roll example below, the connectivity constraints forbid the
merging of points that are not adjacent on the swiss roll, and thus avoid
forming clusters that extend across overlapping folds of the roll.

.. |unstructured| image:: ../auto_examples/cluster/images/sphx_glr_plot_ward_structured_vs_unstructured_001.png
        :target: ../auto_examples/cluster/plot_ward_structured_vs_unstructured.html
        :scale: 49

.. |structured| image:: ../auto_examples/cluster/images/sphx_glr_plot_ward_structured_vs_unstructured_002.png
        :target: ../auto_examples/cluster/plot_ward_structured_vs_unstructured.html
        :scale: 49

.. centered:: |unstructured| |structured|

These constraints are useful to impose a certain local structure, but they
also make the algorithm faster, especially when the number of the samples is
high.

The connectivity constraints are imposed via a connectivity matrix: a scipy
sparse matrix that has elements only at the intersection of a row and a column
with indices of the dataset that should be connected. This matrix can be
constructed from a-priori information: for instance, you may wish to cluster
web pages by only merging pages with a link pointing from one to another. It
can also be learned from the data, for instance using
:func:`sklearn.neighbors.kneighbors_graph` to
restrict merging to nearest neighbors as in :ref:`this example
<sphx_glr_auto_examples_cluster_plot_agglomerative_clustering.py>`, or using
:func:`sklearn.feature_extraction.image.grid_to_graph` to enable only merging
of neighboring pixels on an image, as in the :ref:`coin
<sphx_glr_auto_examples_cluster_plot_coin_ward_segmentation.py>` example.

.. warning:: **Connectivity constraints with single, average and complete linkage**

    Connectivity constraints and single, complete or average linkage can
    enhance the 'rich getting richer' aspect of agglomerative clustering,
    particularly so if they are built with
    :func:`sklearn.neighbors.kneighbors_graph`. In the limit of a small
    number of clusters, they tend to give a few macroscopically occupied
    clusters and almost empty ones (see the discussion in
    :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_clustering.py`).
    Single linkage is the most brittle linkage option with regard to this
    issue.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_001.png
    :target: ../auto_examples/cluster/plot_agglomerative_clustering.html
    :scale: 38

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_002.png
    :target: ../auto_examples/cluster/plot_agglomerative_clustering.html
    :scale: 38

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_003.png
    :target: ../auto_examples/cluster/plot_agglomerative_clustering.html
    :scale: 38

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_004.png
    :target: ../auto_examples/cluster/plot_agglomerative_clustering.html
    :scale: 38

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_coin_ward_segmentation.py`: Ward
  clustering to split the image of coins in regions.

* :ref:`sphx_glr_auto_examples_cluster_plot_ward_structured_vs_unstructured.py`:
  Example of Ward algorithm on a swiss-roll, comparison of structured
  approaches versus unstructured approaches.
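A minimal sketch of the pattern described above, passing a
:func:`sklearn.neighbors.kneighbors_graph` connectivity matrix to
:class:`AgglomerativeClustering` (data invented for illustration):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

rng = np.random.RandomState(0)
X = rng.rand(50, 2)  # invented toy data

# Only allow merges between each point and its 5 nearest neighbors.
connectivity = kneighbors_graph(X, n_neighbors=5, include_self=False)

ward = AgglomerativeClustering(n_clusters=3, connectivity=connectivity,
                               linkage="ward").fit(X)
print(ward.labels_[:10])
```

Note that if the resulting k-nearest-neighbors graph is not fully connected,
scikit-learn warns and completes it before clustering.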
* :ref:`sphx_glr_auto_examples_cluster_plot_feature_agglomeration_vs_univariate_selection.py`:
  Example of dimensionality reduction with feature agglomeration based on Ward
  hierarchical clustering.

* :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_clustering.py`

Varying the metric
------------------

Single, average and complete linkage can be used with a variety of distances
(or affinities), in particular Euclidean distance (*l2*), Manhattan distance
(or Cityblock, or *l1*), cosine distance, or any precomputed affinity matrix.

* *l1* distance is often good for sparse features, or sparse noise: i.e. many
  of the features are zero, as in text mining using occurrences of rare words.

* *cosine* distance is interesting because it is invariant to global scalings
  of the signal.

The guideline for choosing a metric is to use one that maximizes the distance
between samples in different classes, and minimizes that within each class.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_005.png
    :target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
    :scale: 32

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_006.png
    :target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
    :scale: 32

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_007.png
    :target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
    :scale: 32

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_clustering_metrics.py`

.. _bisect_k_means:

Bisecting K-Means
-----------------

The :class:`BisectingKMeans` is an iterative variant of :class:`KMeans`, using
divisive hierarchical clustering. Instead of creating all centroids at once,
centroids are picked progressively based on a previous clustering: a cluster
is split into two new clusters repeatedly until the target number of clusters
is reached.
:class:`BisectingKMeans` is more efficient than :class:`KMeans` when the
number of clusters is large since it only works on a subset of the data at
each bisection while :class:`KMeans` always works on the entire dataset.

Although :class:`BisectingKMeans` can't benefit from the advantages of the
``"k-means++"`` initialization by design, it will still produce comparable
results to ``KMeans(init="k-means++")`` in terms of inertia at cheaper
computational costs, and will likely produce better results than ``KMeans``
with a random initialization.

This variant is more efficient than agglomerative clustering if the number of
clusters is small compared to the number of data points.

This variant also does not produce empty clusters.

There exist two strategies for selecting the cluster to split:

- ``bisecting_strategy="largest_cluster"`` selects the cluster having the most
  points
- ``bisecting_strategy="biggest_inertia"`` selects the cluster with biggest
  inertia (cluster with biggest Sum of Squared Errors within)

Picking by largest amount of data points in most cases produces a result as
accurate as picking by inertia and is faster (especially for larger amounts of
data points, where calculating the error may be costly).

Picking by largest amount of data points will also likely produce clusters of
similar sizes while ``KMeans`` is known to produce clusters of different
sizes.

The difference between Bisecting K-Means and regular K-Means can be seen in
the example :ref:`sphx_glr_auto_examples_cluster_plot_bisect_kmeans.py`.
While the regular K-Means algorithm tends to create non-related clusters,
clusters from Bisecting K-Means are well ordered and create quite a visible
hierarchy.

.. dropdown:: References

  * `"A Comparison of Document Clustering Techniques"
    <http://www.philippe-fournier-viger.com/spmf/bisectingkmeans.pdf>`_
    Michael Steinbach, George Karypis and Vipin Kumar, Department
    of Computer Science and Engineering, University of Minnesota (June 2000)

  * `"Performance Analysis of K-Means and Bisecting K-Means Algorithms in
    Weblog Data"
    <https://ijeter.everscience.org/Manuscripts/Volume-4/Issue-8/Vol-4-issue-8-M-23.pdf>`_
    K. Abirami and Dr. P. Mayilvahanan, International Journal of Emerging
    Technologies in Engineering Research (IJETER), Volume 4, Issue 8
    (August 2016)

  * `"Bisecting K-means Algorithm Based on K-valued Self-determining and
    Clustering Center Optimization"
    <http://www.jcomputers.us/vol13/jcp1306-01.pdf>`_ Jian Di, Xinyue Gou,
    School of Control and Computer Engineering, North China Electric Power
    University, Baoding, Hebei, China (August 2017)

.. _dbscan:

DBSCAN
======

The :class:`DBSCAN` algorithm views clusters as areas of high density
separated by areas of low density. Due to this rather generic view, clusters
found by DBSCAN can be any shape, as opposed to k-means which assumes that
clusters are convex shaped. The central component to the DBSCAN is the concept
of *core samples*, which are samples that are in areas of high density. A
cluster is therefore a set of core samples, each close to each other (measured
by some distance measure) and a set of non-core samples that are close to a
core sample (but are not themselves core samples). There are two parameters to
the algorithm, ``min_samples`` and ``eps``, which define formally what we mean
when we say *dense*. Higher ``min_samples`` or lower ``eps`` indicate higher
density necessary to form a cluster.

More formally, we define a core sample as being a sample in the dataset such
that there exist ``min_samples`` other samples within a distance of ``eps``,
which are defined as *neighbors* of the core sample. This tells us that the
core sample is in a dense area of the vector space. A cluster is a set of core
samples that can be built by recursively taking a core sample, finding all of
its neighbors that are core samples, finding all of *their*
neighbors that are core samples, and so on. A cluster also has a set of
non-core samples, which are samples that are neighbors of a core sample in the
cluster but are not themselves core samples. Intuitively, these samples are on
the fringes of a cluster.

Any core sample is part of a cluster, by definition. Any sample that is not a
core sample, and is at least ``eps`` in distance from any core sample, is
considered an outlier by the algorithm.

While the parameter ``min_samples`` primarily controls how tolerant the
algorithm is towards noise (on noisy and large data sets it may be desirable
to increase this parameter), the parameter ``eps`` is *crucial to choose
appropriately* for the data set and distance function and usually cannot be
left at the default value. It controls the local neighborhood of the points.
When chosen too small, most data will not be clustered at all (and labeled as
``-1`` for "noise"). When chosen too large, it causes close clusters to be
merged into one cluster, and eventually the entire data set to be returned as
a single cluster. Some heuristics for choosing this parameter have been
discussed in the literature, for example based on a knee in the nearest
neighbor distances plot (as discussed in the references below).

In the figure below, the color indicates cluster membership, with large
circles indicating core samples found by the algorithm. Smaller circles are
non-core samples that are still part of a cluster. Moreover, the outliers are
indicated by black points below.

.. |dbscan_results| image:: ../auto_examples/cluster/images/sphx_glr_plot_dbscan_002.png
    :target: ../auto_examples/cluster/plot_dbscan.html
    :scale: 50

.. centered:: |dbscan_results|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_dbscan.py`
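A minimal usage sketch on invented toy data; the isolated point is labeled
``-1`` (noise), and ``core_sample_indices_`` identifies the core samples:

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.array([[1.0, 2.0], [2.0, 2.0], [2.0, 3.0],
              [8.0, 7.0], [8.0, 8.0], [25.0, 80.0]])

# eps and min_samples define what "dense" means for this dataset.
db = DBSCAN(eps=3, min_samples=2).fit(X)
print(db.labels_)                # [ 0  0  0  1  1 -1]
print(db.core_sample_indices_)   # indices of core samples
```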
.. dropdown:: Implementation

  The DBSCAN algorithm is deterministic, always generating the same clusters
  when given the same data in the same order. However, the results can differ
  when data is provided in a different order. First, even though the core
  samples will always be assigned to the same clusters, the labels of those
  clusters will depend on the order in which those samples are encountered in
  the data. Second and more importantly, the clusters to which non-core
  samples are assigned can differ depending on the data order. This would
  happen when a non-core sample has a distance lower than ``eps`` to two core
  samples in different clusters. By the triangular inequality, those two core
  samples must be more distant than ``eps`` from each other, or they would be
  in the same cluster. The non-core sample is assigned to whichever cluster is
  generated first in a pass through the data, and so the results will depend
  on the data ordering.

  The current implementation uses ball trees and kd-trees to determine the
  neighborhood of points, which avoids calculating the full distance matrix
  (as was done in scikit-learn versions before 0.14). The possibility to use
  custom metrics is retained; for details, see :class:`NearestNeighbors`.

.. dropdown:: Memory consumption for large sample sizes

  This implementation is by default not memory efficient because it constructs
  a full pairwise similarity matrix in the case where kd-trees or ball-trees
  cannot be used (e.g., with sparse matrices). This matrix will consume
  :math:`n^2` floats. A couple of mechanisms for getting around this are:

  - Use :ref:`OPTICS <optics>` clustering in conjunction with the
    ``extract_dbscan`` method. OPTICS clustering also calculates the full
    pairwise matrix, but only keeps one row in memory at a time (memory
    complexity n).

  - A sparse radius neighborhood graph (where missing entries are presumed to
    be out of eps) can be precomputed in a memory-efficient way and dbscan can
    be run over this with ``metric='precomputed'``. See
    :meth:`sklearn.neighbors.NearestNeighbors.radius_neighbors_graph`.

  - The dataset can be compressed, either by
    removing exact duplicates if these occur in your data, or by using BIRCH.
    Then you only have a relatively small number of representatives for a
    large number of points. You can then provide a ``sample_weight`` when
    fitting DBSCAN.

.. dropdown:: References

  * `A Density-Based Algorithm for Discovering Clusters in Large Spatial
    Databases with Noise
    <https://www.aaai.org/Papers/KDD/1996/KDD96-037.pdf>`_ Ester, M., H. P.
    Kriegel, J. Sander, and X. Xu, In Proceedings of the 2nd International
    Conference on Knowledge Discovery and Data Mining, Portland, OR, AAAI
    Press, pp. 226-231. 1996

  * :doi:`DBSCAN revisited, revisited: why and how you should (still) use
    DBSCAN. <10.1145/3068335>` Schubert, E., Sander, J., Ester, M., Kriegel,
    H. P., & Xu, X. (2017). In ACM Transactions on Database Systems (TODS),
    42(3), 19.

.. _hdbscan:

HDBSCAN
=======

The :class:`HDBSCAN` algorithm can be seen as an extension of :class:`DBSCAN`
and :class:`OPTICS`. Specifically, :class:`DBSCAN` assumes that the clustering
criterion (i.e. density requirement) is *globally homogeneous*. In other
words, :class:`DBSCAN` may struggle to successfully capture clusters with
different densities. :class:`HDBSCAN` alleviates this assumption and explores
all possible density scales by building an alternative representation of the
clustering problem.

.. note::

   This implementation is adapted from the original implementation of HDBSCAN,
   `scikit-learn-contrib/hdbscan <https://github.com/scikit-learn-contrib/hdbscan>`_
   based on [LJ2017]_.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_hdbscan.py`

Mutual Reachability Graph
-------------------------

HDBSCAN first defines :math:`d_c(x_p)`, the *core distance* of a sample
:math:`x_p`, as the distance to its `min_samples` th-nearest neighbor,
counting itself. For example, if `min_samples=5` and :math:`x_*` is the 5th
nearest neighbor of :math:`x_p` then the core distance is:

.. math:: d_c(x_p) = d(x_p, x_*).
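For illustration only, core distances can be computed with
:class:`sklearn.neighbors.NearestNeighbors`; the toy 1-D data and the
`mutual_reachability` helper below are invented for this sketch and are not
part of scikit-learn:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.array([[0.0], [1.0], [2.0], [10.0]])  # invented toy 1-D data
min_samples = 2

# The core distance of x_p is the distance to its min_samples-th nearest
# neighbor counting x_p itself, so we query min_samples neighbors in total.
nn = NearestNeighbors(n_neighbors=min_samples).fit(X)
dist, _ = nn.kneighbors(X)
core_dist = dist[:, -1]
print(core_dist)  # [1. 1. 1. 8.]

# Hypothetical helper: the mutual reachability distance between x_p and x_q.
def mutual_reachability(p, q):
    return max(core_dist[p], core_dist[q], abs(X[p, 0] - X[q, 0]))

print(mutual_reachability(0, 3))  # max(1, 8, 10) = 10.0
```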
Next it defines :math:`d_m(x_p, x_q)`, the *mutual reachability distance* of
two points :math:`x_p, x_q`, as:

.. math:: d_m(x_p, x_q) = \max\{d_c(x_p), d_c(x_q), d(x_p, x_q)\}

These two notions allow us to construct the *mutual reachability graph*
:math:`G_{ms}` defined for a fixed choice of `min_samples` by associating each
sample :math:`x_p` with a vertex of the graph, and thus edges between points
:math:`x_p, x_q` are the mutual reachability distance :math:`d_m(x_p, x_q)`
between them. We may build subsets of this graph, denoted as
:math:`G_{ms,\varepsilon}`, by removing any edges with value greater than
:math:`\varepsilon` from the original graph. Any points whose core distance is
less than :math:`\varepsilon` are at this stage marked as noise. The remaining
points are then clustered by finding the connected components of this trimmed
graph.

.. note::

   Taking the connected components of a trimmed graph
   :math:`G_{ms,\varepsilon}` is equivalent to running DBSCAN* with
   `min_samples` and :math:`\varepsilon`. DBSCAN* is a slightly modified
   version of DBSCAN mentioned in [CM2013]_.

Hierarchical Clustering
-----------------------

HDBSCAN can be seen as an algorithm which performs DBSCAN* clustering across
all values of :math:`\varepsilon`. As mentioned prior, this is equivalent to
finding the connected components of the mutual reachability graphs for all
values of :math:`\varepsilon`. To do this efficiently, HDBSCAN first extracts
a minimum spanning tree (MST) from the fully-connected mutual reachability
graph, then greedily cuts the edges with highest weight. An outline of the
HDBSCAN algorithm is as follows:

1. Extract the MST of :math:`G_{ms}`.
2. Extend the MST by adding a "self edge" for each vertex, with weight equal
   to the core distance of the underlying sample.
3. Initialize a single cluster and label for the MST.
4. Remove the edge with the greatest weight from the MST (ties are removed
   simultaneously).
5. Assign cluster labels to the connected
   components which contain the end points of the now-removed edge. If the
   component does not have at least one edge it is instead assigned a "null"
   label marking it as noise.
6. Repeat 4-5 until there are no more connected components.

HDBSCAN is therefore able to obtain all possible partitions achievable by
DBSCAN* for a fixed choice of `min_samples` in a hierarchical fashion.
Indeed, this allows HDBSCAN to perform clustering across multiple densities
and as such it no longer needs :math:`\varepsilon` to be given as a
hyperparameter. Instead it relies solely on the choice of `min_samples`,
which tends to be a more robust hyperparameter.

.. |hdbscan_ground_truth| image:: ../auto_examples/cluster/images/sphx_glr_plot_hdbscan_005.png
    :target: ../auto_examples/cluster/plot_hdbscan.html
    :scale: 75

.. |hdbscan_results| image:: ../auto_examples/cluster/images/sphx_glr_plot_hdbscan_007.png
    :target: ../auto_examples/cluster/plot_hdbscan.html
    :scale: 75

.. centered:: |hdbscan_ground_truth|
.. centered:: |hdbscan_results|

HDBSCAN can be smoothed with an additional hyperparameter `min_cluster_size`
which specifies that during the hierarchical clustering, components with fewer
than `minimum_cluster_size` many samples are considered noise. In practice,
one can set `minimum_cluster_size = min_samples` to couple the parameters and
simplify the hyperparameter space.

.. rubric:: References

.. [CM2013] Campello, R.J.G.B., Moulavi, D., Sander, J. (2013). Density-Based
   Clustering Based on Hierarchical Density Estimates. In: Pei, J., Tseng,
   V.S., Cao, L., Motoda, H., Xu, G. (eds) Advances in Knowledge Discovery and
   Data Mining. PAKDD 2013. Lecture Notes in Computer Science, vol 7819.
   Springer, Berlin, Heidelberg. :doi:`Density-Based Clustering Based on
   Hierarchical Density Estimates <10.1007/978-3-642-37456-2_14>`

.. [LJ2017] L. McInnes and J. Healy, (2017). Accelerated Hierarchical Density
   Based Clustering. In: IEEE International Conference on
   Data Mining Workshops (ICDMW), 2017, pp. 33-42. :doi:`Accelerated
   Hierarchical Density Based Clustering <10.1109/ICDMW.2017.12>`

.. _optics:

OPTICS
======

The :class:`OPTICS` algorithm shares many similarities with the
:class:`DBSCAN` algorithm, and can be considered a generalization of DBSCAN
that relaxes the ``eps`` requirement from a single value to a value range. The
key difference between DBSCAN and OPTICS is that the OPTICS algorithm builds a
*reachability* graph, which assigns each sample both a ``reachability_``
distance, and a spot within the cluster ``ordering_`` attribute; these two
attributes are assigned when the model is fitted, and are used to determine
cluster membership. If OPTICS is run with the default value of *inf* set for
``max_eps``, then DBSCAN style cluster extraction can be performed repeatedly
in linear time for any given ``eps`` value using the ``cluster_optics_dbscan``
method. Setting ``max_eps`` to a lower value will result in shorter run times,
and can be thought of as the maximum neighborhood radius from each point to
find other potential reachable points.

.. |optics_results| image:: ../auto_examples/cluster/images/sphx_glr_plot_optics_001.png
        :target: ../auto_examples/cluster/plot_optics.html
        :scale: 50

.. centered:: |optics_results|

The *reachability* distances generated by OPTICS allow for variable density
extraction of clusters within a single data set. As shown in the above plot,
combining *reachability* distances and data set ``ordering_`` produces a
*reachability plot*, where point density is represented on the Y-axis, and
points are ordered such that nearby points are adjacent. 'Cutting' the
reachability plot at a single value produces DBSCAN like results; all points
above the 'cut' are classified as noise, and each time that there is a break
when reading from left to right signifies a new cluster. The default cluster
extraction with OPTICS looks at the steep slopes within the graph to find
clusters, and the user can define what counts as a steep slope using the
parameter ``xi``.
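A minimal sketch on invented toy data, fitting :class:`OPTICS` once and then
extracting DBSCAN-style labels at a chosen ``eps`` with
:func:`cluster_optics_dbscan`:

```python
import numpy as np
from sklearn.cluster import OPTICS, cluster_optics_dbscan

X = np.array([[1.0, 2.0], [2.0, 5.0], [3.0, 6.0],
              [8.0, 7.0], [8.0, 8.0], [7.0, 3.0]])

opt = OPTICS(min_samples=2).fit(X)
print(opt.reachability_[opt.ordering_])  # the reachability-plot values
print(opt.labels_)

# Re-extract clusters at a specific eps without refitting the model.
labels = cluster_optics_dbscan(
    reachability=opt.reachability_,
    core_distances=opt.core_distances_,
    ordering=opt.ordering_,
    eps=2.0,
)
print(labels)
```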
There are also other possibilities for analysis on the graph itself, such as
generating hierarchical representations of the data through reachability-plot
dendrograms, and the hierarchy of clusters detected by the algorithm can be
accessed through the ``cluster_hierarchy_`` parameter. The plot above has been
color-coded so that cluster colors in planar space match the linear segment
clusters of the reachability plot. Note that the blue and red clusters are
adjacent in the reachability plot, and can be hierarchically represented as
children of a larger parent cluster.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_optics.py`

.. dropdown:: Comparison with DBSCAN

  The results from OPTICS ``cluster_optics_dbscan`` method and DBSCAN are very
  similar, but not always identical; specifically, labeling of periphery and
  noise points. This is in part because the first samples of each dense area
  processed by OPTICS have a large reachability value while being close to
  other points in their area, and will thus sometimes be marked as noise
  rather than periphery. This affects adjacent points when they are considered
  as candidates for being marked as either periphery or noise.

  Note that for any single value of ``eps``, DBSCAN will tend to have a
  shorter run time than OPTICS; however, for repeated runs at varying ``eps``
  values, a single run of OPTICS may require less cumulative runtime than
  DBSCAN. It is also important to note that OPTICS' output is close to
  DBSCAN's only if ``eps`` and ``max_eps`` are close.

.. dropdown:: Computational Complexity

  Spatial indexing trees are used to avoid calculating the full distance
  matrix, and allow for efficient memory usage on large sets of samples.
  Different distance metrics can be supplied via the ``metric`` keyword.

  For large datasets, similar (but not identical) results can be obtained via
  :class:`HDBSCAN`. The HDBSCAN implementation is multithreaded, and has
  better algorithmic runtime complexity than OPTICS, at the cost of worse
  memory scaling. For extremely large datasets that exhaust system memory
  using HDBSCAN, OPTICS will maintain :math:`n` (as opposed to :math:`n^2`)
  memory scaling; however, tuning of the ``max_eps`` parameter will likely
  need to be used to give a solution in a reasonable amount of wall time.

.. dropdown:: References

  * "OPTICS: ordering points to identify the clustering structure." Ankerst,
    Mihael, Markus M. Breunig, Hans-Peter Kriegel, and Jörg Sander. In ACM
    Sigmod Record, vol. 28, no. 2, pp. 49-60. ACM, 1999.

.. _birch:

BIRCH
=====

The :class:`Birch` builds a tree called the Clustering Feature Tree (CFT) for
the given data. The data is essentially lossy compressed to a set of
Clustering Feature nodes (CF Nodes). The CF Nodes have a number of subclusters
called Clustering Feature subclusters (CF Subclusters) and these CF
Subclusters located in the non-terminal CF Nodes can have CF Nodes as
children.

The CF Subclusters hold the necessary information for clustering which
prevents the need to hold the entire input data in memory. This information
includes:

- Number of samples in a subcluster.
- Linear Sum - An n-dimensional vector holding the sum of all samples.
- Squared Sum - Sum of the squared L2 norm of all samples.
- Centroids - To avoid recalculation linear sum / n_samples.
- Squared norm of the centroids.

The BIRCH algorithm has two parameters, the threshold and the branching
factor. The branching factor limits the number of subclusters in a node and
the threshold limits the distance between the entering sample and the existing
subclusters.

This algorithm can be viewed as an instance or data reduction method, since it
reduces the input data to a set of subclusters which are obtained directly
from the leaves of the CFT. This reduced data can be further processed by
feeding it into a global clusterer.
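A minimal usage sketch on invented toy data, showing the two parameters and
the reduced representation held in ``subcluster_centers_``:

```python
import numpy as np
from sklearn.cluster import Birch

# Two groups of three nearby points each (invented toy data).
X = np.array([[0.0, 1.0], [0.3, 1.0], [-0.3, 1.0],
              [0.0, -1.0], [0.3, -1.0], [-0.3, -1.0]])

# threshold bounds the subcluster radius; branching_factor bounds the
# number of CF Subclusters per CF Node.
brc = Birch(n_clusters=2, threshold=0.5, branching_factor=50).fit(X)
print(brc.predict(X))
print(brc.subcluster_centers_)  # the compressed representation of X
```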
This global clusterer can be set by ``n_clusters``. If ``n_clusters`` is set
to None, the subclusters from the leaves are directly read off, otherwise a
global clustering step labels these subclusters into global clusters (labels)
and the samples are mapped to the global label of the nearest subcluster.

.. dropdown:: Algorithm description

  - A new sample is inserted into the root of the CF Tree which is a CF Node.
    It is then merged with the subcluster of the root, that has the smallest
    radius after merging, constrained by the threshold and branching factor
    conditions. If the subcluster has any child node, then this is done
    repeatedly till it reaches a leaf. After finding the nearest subcluster in
    the leaf, the properties of this subcluster and the parent subclusters are
    recursively updated.

  - If the radius of the subcluster obtained by merging the new sample and the
    nearest subcluster is greater than the square of the threshold and if the
    number of subclusters is greater than the branching factor, then a space
    is temporarily allocated to this new sample. The two farthest subclusters
    are taken and the subclusters are divided into two groups on the basis of
    the distance between these subclusters.

  - If this split node has a parent subcluster and there is room for a new
    subcluster, then the parent is split into two. If there is no room, then
    this node is again split into two and the process is continued
    recursively, till it reaches the root.

.. dropdown:: BIRCH or MiniBatchKMeans?

  - BIRCH does not scale very well to high dimensional data. As a rule of
    thumb, if ``n_features`` is greater than twenty, it is generally better to
    use MiniBatchKMeans.

  - If the number of instances of data needs to be reduced, or if one wants a
    large number of subclusters either as a preprocessing step or otherwise,
    BIRCH is more useful than MiniBatchKMeans.
  .. image:: ../auto_examples/cluster/images/sphx_glr_plot_birch_vs_minibatchkmeans_001.png
      :target: ../auto_examples/cluster/plot_birch_vs_minibatchkmeans.html

.. dropdown:: How to use partial_fit?

  To avoid the computation of global clustering, for every call of
  ``partial_fit`` the user is advised:

  1. To set ``n_clusters=None`` initially.
  2. Train all data by multiple calls to partial_fit.
  3. Set ``n_clusters`` to a required value using
     ``brc.set_params(n_clusters=n_clusters)``.
  4. Call ``partial_fit`` finally with no arguments, i.e. ``brc.partial_fit()``,
     which performs the global clustering.

.. dropdown:: References

  * Tian Zhang, Raghu Ramakrishnan, Maron Livny `BIRCH: An efficient data
    clustering method for large databases.
    <https://www.cs.sfu.ca/CourseCentral/459/han/papers/zhang96.pdf>`_

  * Roberto Perdisci `JBirch - Java implementation of BIRCH clustering
    algorithm <https://code.google.com/archive/p/jbirch>`_

.. _clustering_evaluation:

Clustering performance evaluation
=================================

Evaluating the performance of a clustering algorithm is not as trivial as
counting the number of errors or the precision and recall of a supervised
classification algorithm. In particular any evaluation metric should not take
the absolute values of the cluster labels into account, but rather whether
this clustering defines separations of the data similar to some ground truth
set of classes, or satisfies the assumption that members of the same class are
more similar than members of different classes according to some similarity
metric.

.. currentmodule:: sklearn.metrics

.. _rand_score:
.. _adjusted_rand_score:

Rand index
----------

Given the knowledge of the ground truth class assignments ``labels_true`` and
our clustering algorithm assignments of the same samples ``labels_pred``, the
**(adjusted or unadjusted) Rand index** is a function that measures the
**similarity** of the two assignments, ignoring permutations::

    >>> from sklearn import metrics
    >>> labels_true = [0, 0, 0, 1, 1, 1]
    >>> labels_pred = [0, 0, 1, 1, 2, 2]
    >>> metrics.rand_score(labels_true, labels_pred)
    0.66...

The Rand index does not ensure to obtain a value close to 0.0 for a random
labelling. The adjusted Rand index **corrects for chance** and will give such
a baseline::

    >>> metrics.adjusted_rand_score(labels_true, labels_pred)
    0.24...

As with all clustering metrics, one can permute 0 and 1 in the predicted
labels, rename 2 to 3, and get the same score::

    >>> labels_pred = [1, 1, 0, 0, 3, 3]
    >>> metrics.rand_score(labels_true, labels_pred)
    0.66...
    >>> metrics.adjusted_rand_score(labels_true, labels_pred)
    0.24...

Furthermore, both :func:`rand_score` and :func:`adjusted_rand_score` are
**symmetric**: swapping the argument does not change the scores. They can thus
be used as **consensus measures**::

    >>> metrics.rand_score(labels_pred, labels_true)
    0.66...
    >>> metrics.adjusted_rand_score(labels_pred, labels_true)
    0.24...

Perfect labeling is scored 1.0::

    >>> labels_pred = labels_true[:]
    >>> metrics.rand_score(labels_true, labels_pred)
    1.0
    >>> metrics.adjusted_rand_score(labels_true, labels_pred)
    1.0

Poorly agreeing labels (e.g. independent labelings) have lower scores, and for
the adjusted Rand index the score will be negative or close to zero. However,
for the unadjusted Rand index the score, while lower, will not necessarily be
close to zero::

    >>> labels_true = [0, 0, 0, 0, 0, 0, 1, 1]
    >>> labels_pred = [0, 1, 2, 3, 4, 5, 5, 6]
    >>> metrics.rand_score(labels_true, labels_pred)
    0.39...
    >>> metrics.adjusted_rand_score(labels_true, labels_pred)
    -0.07...

.. topic:: Advantages

  - **Interpretability**: The unadjusted Rand index is proportional to the
    number of sample pairs whose labels are the same in both ``labels_pred``
    and ``labels_true``, or are different in both.

  - **Random (uniform) label assignments have an adjusted Rand index score
    close to 0.0** for any value of ``n_clusters`` and
n samples    which is not the     case for the unadjusted Rand index or the V measure for instance          Bounded range    Lower values indicate different labelings  similar     clusterings have a high  adjusted or unadjusted  Rand index  1 0 is the     perfect match score  The score range is  0  1  for the unadjusted Rand index     and   0 5  1  for the adjusted Rand index         No assumption is made on the cluster structure    The  adjusted or     unadjusted  Rand index can be used to compare all kinds of clustering     algorithms  and can be used to compare clustering algorithms such as k means     which assumes isotropic blob shapes with results of spectral clustering     algorithms which can find cluster with  folded  shapes      topic   Drawbacks       Contrary to inertia  the    adjusted or unadjusted  Rand index requires     knowledge of the ground truth classes   which is almost never available in     practice or requires manual assignment by human annotators  as in the     supervised learning setting        However  adjusted or unadjusted  Rand index can also be useful in a purely     unsupervised setting as a building block for a Consensus Index that can be     used for clustering model selection  TODO        The   unadjusted Rand index is often close to 1 0   even if the clusterings     themselves differ significantly  This can be understood when interpreting     the Rand index as the accuracy of element pair labeling resulting from the     clusterings  In practice there often is a majority of element pairs that are     assigned the   different   pair label under both the predicted and the     ground truth clustering resulting in a high proportion of pair labels that     agree  which leads subsequently to a high score      rubric   Examples     ref  sphx glr auto examples cluster plot adjusted for chance measures py     Analysis of the impact of the dataset size on the value of   clustering measures for random assignments      dropdown   
Mathematical formulation    If C is a ground truth class assignment and K the clustering  let us define    math  a  and  math  b  as        math  a   the number of pairs of elements that are in the same set in C and     in the same set in K       math  b   the number of pairs of elements that are in different sets in C and     in different sets in K    The unadjusted Rand index is then given by        math    text RI     frac a   b  C 2  n  samples       where  math  C 2  n  samples    is the total number of possible pairs in the   dataset  It does not matter if the calculation is performed on ordered pairs or   unordered pairs as long as the calculation is performed consistently     However  the Rand index does not guarantee that random label assignments will   get a value close to zero  esp  if the number of clusters is in the same order   of magnitude as the number of samples      To counter this effect we can discount the expected RI  math  E  text RI    of   random labelings by defining the adjusted Rand index as follows        math    text ARI     frac  text RI    E  text RI     max  text RI     E  text RI        dropdown   References       Comparing Partitions      https   link springer com article 10 1007 2FBF01908075    L  Hubert and P      Arabie  Journal of Classification 1985       Properties of the Hubert Arabie adjusted Rand index      https   psycnet apa org record 2004 17801 007    D  Steinley  Psychological     Methods 2004       Wikipedia entry for the Rand index      https   en wikipedia org wiki Rand index Adjusted Rand index          doi  Minimum adjusted Rand index for two clusterings of a given size  2022  J  E  Chac n and A  I  Rastrojo  10 1007 s11634 022 00491 w         mutual info score   Mutual Information based scores                                  Given the knowledge of the ground truth class assignments   labels true   and our clustering algorithm assignments of the same samples   labels pred    the   Mutual Information   is a 
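As a cross-check of the pair-counting definition of the Rand index described above, here is a minimal sketch; the helper function ``pair_counting_rand_index`` is ours, not part of scikit-learn:

```python
from itertools import combinations

from sklearn import metrics

def pair_counting_rand_index(labels_true, labels_pred):
    # a: pairs of elements in the same set in C and in the same set in K
    # b: pairs of elements in different sets in C and in different sets in K
    n_pairs = a = b = 0
    for (t1, p1), (t2, p2) in combinations(zip(labels_true, labels_pred), 2):
        n_pairs += 1
        if t1 == t2 and p1 == p2:
            a += 1
        elif t1 != t2 and p1 != p2:
            b += 1
    return (a + b) / n_pairs  # RI = (a + b) / C(n_samples, 2)

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]
ri = pair_counting_rand_index(labels_true, labels_pred)
```

On this example both the sketch and ``metrics.rand_score`` count a = 2 and b = 8 out of 15 pairs, giving 2/3.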
.. _mutual_info_score:

Mutual Information based scores
-------------------------------

Given the knowledge of the ground truth class assignments ``labels_true`` and
our clustering algorithm assignments of the same samples ``labels_pred``, the
**Mutual Information** is a function that measures the **agreement** of the two
assignments, ignoring permutations. Two different normalized versions of this
measure are available, **Normalized Mutual Information (NMI)** and **Adjusted
Mutual Information (AMI)**. NMI is often used in the literature, while AMI was
proposed more recently and is **normalized against chance**::

  >>> from sklearn import metrics
  >>> labels_true = [0, 0, 0, 1, 1, 1]
  >>> labels_pred = [0, 0, 1, 1, 2, 2]

  >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  0.22504...

One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get the same
score::

  >>> labels_pred = [1, 1, 0, 0, 3, 3]
  >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  0.22504...

All of :func:`mutual_info_score`, :func:`adjusted_mutual_info_score` and
:func:`normalized_mutual_info_score` are symmetric: swapping the arguments does
not change the score. Thus they can be used as a **consensus measure**::

  >>> metrics.adjusted_mutual_info_score(labels_pred, labels_true)  # doctest: +SKIP
  0.22504...

Perfect labeling is scored 1.0::

  >>> labels_pred = labels_true[:]
  >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  1.0

  >>> metrics.normalized_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  1.0

This is not true for :func:`mutual_info_score`, which is therefore harder to
judge::

  >>> metrics.mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  0.69...

Independent labelings (e.g. random ones) have non-positive scores::

  >>> labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
  >>> labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
  >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
  -0.10526...

.. topic:: Advantages

   - **Random (uniform) label assignments have an AMI score close to 0.0** for
     any value of ``n_clusters`` and ``n_samples``, which is not the case for
     raw Mutual Information or the V-measure for instance.

   - **Upper bound of 1**: Values close to zero indicate two label assignments
     that are largely independent, while values close to one indicate
     significant agreement. Further, an AMI of exactly 1 indicates that the two
     label assignments are equal (with or without permutation).

.. topic:: Drawbacks

   - Contrary to inertia, **MI-based measures require the knowledge of the
     ground truth classes**, which is almost never available in practice or
     requires manual assignment by human annotators (as in the supervised
     learning setting).

   - However MI-based measures can also be useful in a purely unsupervised
     setting as a building block for a Consensus Index that can be used for
     clustering model selection.

   - NMI and MI are not adjusted against chance.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`:
  Analysis of the impact of the dataset size on the value of clustering
  measures for random assignments. This example also includes the Adjusted Rand
  Index.

.. dropdown:: Mathematical formulation

   Assume two label assignments (of the same N objects), :math:`U` and
   :math:`V`. Their entropy is the amount of uncertainty for a partition set,
   defined by:

   .. math:: H(U) = - \sum_{i=1}^{|U|}P(i)\log(P(i))

   where :math:`P(i) = |U_i| / N` is the probability that an object picked at
   random from :math:`U` falls into class :math:`U_i`. Likewise for :math:`V`:

   .. math:: H(V) = - \sum_{j=1}^{|V|}P'(j)\log(P'(j))

   with :math:`P'(j) = |V_j| / N`. The mutual information (MI) between
   :math:`U` and :math:`V` is calculated by:

   .. math:: \text{MI}(U, V) = \sum_{i=1}^{|U|}\sum_{j=1}^{|V|}P(i, j)\log\left(\frac{P(i,j)}{P(i)P'(j)}\right)

   where :math:`P(i, j) = |U_i \cap V_j| / N` is the probability that an object
   picked at random falls into both classes :math:`U_i` and :math:`V_j`.

   It also can be expressed in set cardinality formulation:

   .. math:: \text{MI}(U, V) = \sum_{i=1}^{|U|} \sum_{j=1}^{|V|} \frac{|U_i \cap V_j|}{N}\log\left(\frac{N|U_i \cap V_j|}{|U_i||V_j|}\right)

   The normalized mutual information is defined as

   .. math:: \text{NMI}(U, V) = \frac{\text{MI}(U, V)}{\text{mean}(H(U), H(V))}

   This value of the mutual information, and also the normalized variant, is
   not adjusted for chance and will tend to increase as the number of different
   labels (clusters) increases, regardless of the actual amount of "mutual
   information" between the label assignments.

   The expected value for the mutual information can be calculated using the
   following equation [VEB2009]_. In this equation, :math:`a_i = |U_i|` (the
   number of elements in :math:`U_i`) and :math:`b_j = |V_j|` (the number of
   elements in :math:`V_j`).

   .. math:: E[\text{MI}(U,V)] = \sum_{i=1}^{|U|} \sum_{j=1}^{|V|}
      \sum_{n_{ij}=(a_i+b_j-N)^+}^{\min(a_i, b_j)} \frac{n_{ij}}{N}
      \log \left(\frac{N n_{ij}}{a_i b_j}\right)
      \frac{a_i! b_j! (N-a_i)! (N-b_j)!}{N! n_{ij}! (a_i-n_{ij})! (b_j-n_{ij})!
      (N-a_i-b_j+n_{ij})!}

   Using the expected value, the adjusted mutual information can then be
   calculated using a similar form to that of the adjusted Rand index:

   .. math:: \text{AMI} = \frac{\text{MI} - E[\text{MI}]}{\text{mean}(H(U), H(V)) - E[\text{MI}]}

   For normalized mutual information and adjusted mutual information, the
   normalizing value is typically some *generalized* mean of the entropies of
   each clustering. Various generalized means exist, and no firm rules exist
   for preferring one over the others. The decision is largely made on a
   field-by-field basis; for instance, in community detection, the arithmetic
   mean is most common. Each normalizing method provides "qualitatively similar
   behaviours" [YAT2016]_. In our implementation, this is controlled by the
   ``average_method`` parameter.

   Vinh et al. (2010) named variants of NMI and AMI by their averaging method
   [VEB2010]_. Their 'sqrt' and 'sum' averages are the geometric and arithmetic
   means; we use these more broadly common names.

   .. rubric:: References

   * Strehl, Alexander, and Joydeep Ghosh (2002). `"Cluster ensembles - a
     knowledge reuse framework for combining multiple partitions"
     <http://strehl.com/download/strehl-jmlr02.pdf>`_. Journal of Machine
     Learning Research 3: 583-617. doi:10.1162/153244303321897735.

   * `Wikipedia entry for the (normalized) Mutual Information
     <https://en.wikipedia.org/wiki/Mutual_Information>`_

   * `Wikipedia entry for the Adjusted Mutual Information
     <https://en.wikipedia.org/wiki/Adjusted_Mutual_Information>`_

   .. [VEB2009] Vinh, Epps, and Bailey (2009). `"Information theoretic measures
      for clusterings comparison"
      <https://dl.acm.org/citation.cfm?doid=1553374.1553511>`_. Proceedings of
      the 26th Annual International Conference on Machine Learning - ICML '09.
      doi:10.1145/1553374.1553511. ISBN 9781605585161.

   .. [VEB2010] Vinh, Epps, and Bailey (2010). `"Information Theoretic Measures
      for Clusterings Comparison: Variants, Properties, Normalization and
      Correction for Chance"
      <https://jmlr.csail.mit.edu/papers/volume11/vinh10a/vinh10a.pdf>`_. JMLR.

   .. [YAT2016] Yang, Algesheimer, and Tessone (2016). `"A comparative analysis
      of community detection algorithms on artificial networks"
      <https://www.nature.com/articles/srep30750>`_. Scientific Reports 6:
      30750. doi:10.1038/srep30750.
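The contingency-table definition of MI above can be checked directly; a minimal sketch (the function name ``mutual_information`` is ours), using natural logarithms as :func:`metrics.mutual_info_score` does:

```python
import numpy as np

from sklearn import metrics

def mutual_information(labels_true, labels_pred):
    # MI(U, V) = sum_ij P(i, j) * log(P(i, j) / (P(i) * P'(j)))
    _, class_idx = np.unique(labels_true, return_inverse=True)
    _, cluster_idx = np.unique(labels_pred, return_inverse=True)
    contingency = np.zeros((class_idx.max() + 1, cluster_idx.max() + 1))
    np.add.at(contingency, (class_idx, cluster_idx), 1)
    p_ij = contingency / contingency.sum()               # joint P(i, j)
    outer = p_ij.sum(axis=1, keepdims=True) * p_ij.sum(axis=0, keepdims=True)
    nonzero = p_ij > 0                                   # 0 * log(0) := 0
    return float((p_ij[nonzero] * np.log(p_ij[nonzero] / outer[nonzero])).sum())

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]
mi = mutual_information(labels_true, labels_pred)
```

On this example the only non-trivial terms are the two cells with count 2, each contributing (1/3)·log 2, so MI = (2/3)·log 2.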
.. _homogeneity_completeness:

Homogeneity, completeness and V-measure
---------------------------------------

Given the knowledge of the ground truth class assignments of the samples, it is
possible to define some intuitive metric using conditional entropy analysis.

In particular Rosenberg and Hirschberg (2007) define the following two
desirable objectives for any cluster assignment:

- **homogeneity**: each cluster contains only members of a single class.

- **completeness**: all members of a given class are assigned to the same
  cluster.

We can turn those concepts into scores :func:`homogeneity_score` and
:func:`completeness_score`. Both are bounded below by 0.0 and above by 1.0
(higher is better)::

  >>> from sklearn import metrics
  >>> labels_true = [0, 0, 0, 1, 1, 1]
  >>> labels_pred = [0, 0, 1, 1, 2, 2]

  >>> metrics.homogeneity_score(labels_true, labels_pred)
  0.66...

  >>> metrics.completeness_score(labels_true, labels_pred)
  0.42...

Their harmonic mean, called **V-measure**, is computed by
:func:`v_measure_score`::

  >>> metrics.v_measure_score(labels_true, labels_pred)
  0.51...

This function's formula is as follows:

.. math:: v = \frac{(1 + \beta) \times \text{homogeneity} \times \text{completeness}}{(\beta \times \text{homogeneity} + \text{completeness})}

``beta`` defaults to 1.0. When using a value less than 1::

  >>> metrics.v_measure_score(labels_true, labels_pred, beta=0.6)
  0.54...

more weight is attributed to homogeneity, and when using a value greater than
1::

  >>> metrics.v_measure_score(labels_true, labels_pred, beta=1.8)
  0.48...

more weight is attributed to completeness.

The V-measure is actually equivalent to the mutual information (NMI) discussed
above, with the aggregation function being the arithmetic mean [B2011]_.

Homogeneity, completeness and V-measure can be computed at once using
:func:`homogeneity_completeness_v_measure` as follows::

  >>> metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
  (0.66..., 0.42..., 0.51...)

The following clustering assignment is slightly better, since it is homogeneous
but not complete::

  >>> labels_pred = [0, 0, 0, 1, 2, 2]
  >>> metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
  (1.0, 0.68..., 0.81...)

.. note::

   :func:`v_measure_score` is **symmetric**: it can be used to evaluate the
   **agreement** of two independent assignments on the same dataset.

   This is not the case for :func:`completeness_score` and
   :func:`homogeneity_score`: both are bound by the relationship::

     homogeneity_score(a, b) == completeness_score(b, a)

.. topic:: Advantages

   - **Bounded scores**: 0.0 is as bad as it can be, 1.0 is a perfect score.

   - Intuitive interpretation: a clustering with bad V-measure can be
     **qualitatively analyzed in terms of homogeneity and completeness** to get
     a better feel for what 'kind' of mistakes the assignment makes.

   - **No assumption is made on the cluster structure**: can be used to compare
     clustering algorithms such as k-means, which assumes isotropic blob
     shapes, with results of spectral clustering algorithms, which can find
     clusters with "folded" shapes.

.. topic:: Drawbacks

   - The previously introduced metrics are **not normalized with regards to
     random labeling**: this means that depending on the number of samples,
     clusters and ground truth classes, a completely random labeling will not
     always yield the same values for homogeneity, completeness and hence
     v-measure. In particular **random labeling won't yield zero scores,
     especially when the number of clusters is large**.

     This problem can safely be ignored when the number of samples is more than
     a thousand and the number of clusters is less than 10. **For smaller
     sample sizes or larger numbers of clusters it is safer to use an adjusted
     index such as the Adjusted Rand Index (ARI)**.

   .. figure:: ../auto_examples/cluster/images/sphx_glr_plot_adjusted_for_chance_measures_001.png
      :target: ../auto_examples/cluster/plot_adjusted_for_chance_measures.html
      :align: center
      :scale: 100

   - These metrics **require the knowledge of the ground truth classes**, which
     is almost never available in practice or requires manual assignment by
     human annotators (as in the supervised learning setting).

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`:
  Analysis of the impact of the dataset size on the value of clustering
  measures for random assignments.

.. dropdown:: Mathematical formulation

   Homogeneity and completeness scores are formally given by:

   .. math:: h = 1 - \frac{H(C|K)}{H(C)}

   .. math:: c = 1 - \frac{H(K|C)}{H(K)}

   where :math:`H(C|K)` is the **conditional entropy of the classes given the
   cluster assignments** and is given by:

   .. math:: H(C|K) = - \sum_{c=1}^{|C|} \sum_{k=1}^{|K|} \frac{n_{c,k}}{n}
             \cdot \log\left(\frac{n_{c,k}}{n_k}\right)

   and :math:`H(C)` is the **entropy of the classes** and is given by:

   .. math:: H(C) = - \sum_{c=1}^{|C|} \frac{n_c}{n} \cdot \log\left(\frac{n_c}{n}\right)

   with :math:`n` the total number of samples, :math:`n_c` and :math:`n_k` the
   number of samples respectively belonging to class :math:`c` and cluster
   :math:`k`, and finally :math:`n_{c,k}` the number of samples from class
   :math:`c` assigned to cluster :math:`k`.

   The **conditional entropy of clusters given class** :math:`H(K|C)` and the
   **entropy of clusters** :math:`H(K)` are defined in a symmetric manner.

   Rosenberg and Hirschberg further define **V-measure** as the **harmonic mean
   of homogeneity and completeness**:

   .. math:: v = 2 \cdot \frac{h \cdot c}{h + c}

.. rubric:: References

* `V-Measure: A conditional entropy-based external cluster evaluation measure
  <https://aclweb.org/anthology/D/D07/D07-1043.pdf>`_
  Andrew Rosenberg and Julia Hirschberg, 2007

.. [B2011] `Identification and Characterization of Events in Social Media
   <http://www.cs.columbia.edu/~hila/hila-thesis-distributed.pdf>`_, Hila
   Becker, PhD Thesis.
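Since H(C|K) = H(C) - MI, homogeneity reduces to MI/H(C) (and symmetrically completeness to MI/H(K)), so all three scores can be sketched from entropies and mutual information. The helper names below are ours:

```python
import numpy as np

from sklearn import metrics

def entropy(labels):
    # Shannon entropy of the label distribution (natural log)
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def homogeneity_completeness_v(labels_true, labels_pred):
    # h = 1 - H(C|K)/H(C) = MI/H(C);  c = 1 - H(K|C)/H(K) = MI/H(K)
    mi = metrics.mutual_info_score(labels_true, labels_pred)
    h_c, h_k = entropy(labels_true), entropy(labels_pred)
    h = 1.0 if h_c == 0 else mi / h_c
    c = 1.0 if h_k == 0 else mi / h_k
    v = 0.0 if h + c == 0 else 2 * h * c / (h + c)  # harmonic mean (beta = 1)
    return h, c, v

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]
h, c, v = homogeneity_completeness_v(labels_true, labels_pred)
```

On this example the sketch reproduces the (0.66..., 0.42..., 0.51...) triple shown in the doctests above.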
.. _fowlkes_mallows_scores:

Fowlkes-Mallows scores
----------------------

The original Fowlkes-Mallows index (FMI) was intended to measure the similarity
between two clustering results, which is inherently an unsupervised comparison.
The supervised adaptation of the Fowlkes-Mallows index (as implemented in
:func:`sklearn.metrics.fowlkes_mallows_score`) can be used when the ground
truth class assignments of the samples are known. The FMI is defined as the
geometric mean of the pairwise precision and recall:

.. math:: \text{FMI} = \frac{\text{TP}}{\sqrt{(\text{TP} + \text{FP}) (\text{TP} + \text{FN})}}

In the above formula:

* **TP** (**True Positive**): the number of pairs of points that are clustered
  together both in the true labels and in the predicted labels.

* **FP** (**False Positive**): the number of pairs of points that are clustered
  together in the predicted labels but not in the true labels.

* **FN** (**False Negative**): the number of pairs of points that are clustered
  together in the true labels but not in the predicted labels.

The score ranges from 0 to 1. A high value indicates a good similarity between
two clusters::

  >>> from sklearn import metrics
  >>> labels_true = [0, 0, 0, 1, 1, 1]
  >>> labels_pred = [0, 0, 1, 1, 2, 2]

  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  0.47140...

One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get the same
score::

  >>> labels_pred = [1, 1, 0, 0, 3, 3]

  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  0.47140...

Perfect labeling is scored 1.0::

  >>> labels_pred = labels_true[:]
  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  1.0

Independent labelings (e.g. random ones) have zero scores::

  >>> labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
  >>> labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  0.0

.. topic:: Advantages

   - **Random (uniform) label assignments have an FMI score close to 0.0** for
     any value of ``n_clusters`` and ``n_samples``, which is not the case for
     raw Mutual Information or the V-measure for instance.

   - **Upper-bounded at 1**: Values close to zero indicate two label
     assignments that are largely independent, while values close to one
     indicate significant agreement. Further, a value of exactly 0 indicates
     **purely** independent label assignments and an FMI of exactly 1 indicates
     that the two label assignments are equal (with or without permutation).

   - **No assumption is made on the cluster structure**: can be used to compare
     clustering algorithms such as k-means, which assumes isotropic blob
     shapes, with results of spectral clustering algorithms, which can find
     clusters with "folded" shapes.

.. topic:: Drawbacks

   - Contrary to inertia, **FMI-based measures require the knowledge of the
     ground truth classes**, which is almost never available in practice or
     requires manual assignment by human annotators (as in the supervised
     learning setting).

.. dropdown:: References

   * E. B. Fowlkes and C. L. Mallows (1983). `"A method for comparing two
     hierarchical clusterings"
     <https://www.tandfonline.com/doi/abs/10.1080/01621459.1983.10478008>`_.
     Journal of the American Statistical Association.

   * `Wikipedia entry for the Fowlkes-Mallows Index
     <https://en.wikipedia.org/wiki/Fowlkes-Mallows_index>`_
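The pair counts TP, FP and FN defined above can be enumerated directly over all pairs of samples; a small sketch (the helper name is ours):

```python
from itertools import combinations
from math import sqrt

from sklearn import metrics

def pairwise_fowlkes_mallows(labels_true, labels_pred):
    # FMI = TP / sqrt((TP + FP) * (TP + FN)), counted over unordered pairs
    tp = fp = fn = 0
    for (t1, p1), (t2, p2) in combinations(zip(labels_true, labels_pred), 2):
        same_true, same_pred = t1 == t2, p1 == p2
        if same_true and same_pred:
            tp += 1      # together in both clusterings
        elif same_pred:
            fp += 1      # together only in the prediction
        elif same_true:
            fn += 1      # together only in the ground truth
    if tp == 0:
        return 0.0
    return tp / sqrt((tp + fp) * (tp + fn))

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]
fmi = pairwise_fowlkes_mallows(labels_true, labels_pred)
```

Here TP = 2, FP = 1 and FN = 4, so the sketch gives 2/sqrt(18) = 0.47140..., matching the doctest above.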
.. _silhouette_coefficient:

Silhouette Coefficient
----------------------

If the ground truth labels are not known, evaluation must be performed using
the model itself. The Silhouette Coefficient
(:func:`sklearn.metrics.silhouette_score`) is an example of such an evaluation,
where a higher Silhouette Coefficient score relates to a model with better
defined clusters. The Silhouette Coefficient is defined for each sample and is
composed of two scores:

- **a**: The mean distance between a sample and all other points in the same
  class.

- **b**: The mean distance between a sample and all other points in the *next
  nearest cluster*.

The Silhouette Coefficient *s* for a single sample is then given as:

.. math:: s = \frac{b - a}{\max(a, b)}

The Silhouette Coefficient for a set of samples is given as the mean of the
Silhouette Coefficient for each sample.

  >>> from sklearn import metrics
  >>> from sklearn.metrics import pairwise_distances
  >>> from sklearn import datasets
  >>> X, y = datasets.load_iris(return_X_y=True)

In normal usage, the Silhouette Coefficient is applied to the results of a
cluster analysis::

  >>> import numpy as np
  >>> from sklearn.cluster import KMeans
  >>> kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)
  >>> labels = kmeans_model.labels_
  >>> metrics.silhouette_score(X, labels, metric='euclidean')
  0.55...

.. topic:: Advantages

   - The score is bounded between -1 for incorrect clustering and +1 for highly
     dense clustering. Scores around zero indicate overlapping clusters.

   - The score is higher when clusters are dense and well separated, which
     relates to a standard concept of a cluster.

.. topic:: Drawbacks

   - The Silhouette Coefficient is generally higher for convex clusters than
     for other concepts of clusters, such as density based clusters like those
     obtained through DBSCAN.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`: In
  this example the silhouette analysis is used to choose an optimal value for
  ``n_clusters``.

.. dropdown:: References

   * Peter J. Rousseeuw (1987). :doi:`"Silhouettes: a Graphical Aid to the
     Interpretation and Validation of Cluster Analysis"
     <10.1016/0377-0427(87)90125-7>`. Computational and Applied Mathematics 20:
     53-65.

.. _calinski_harabasz_index:

Calinski-Harabasz Index
-----------------------

If the ground truth labels are not known, the Calinski-Harabasz index
(:func:`sklearn.metrics.calinski_harabasz_score`) - also known as the Variance
Ratio Criterion - can be used to evaluate the model, where a higher
Calinski-Harabasz score relates to a model with better defined clusters.

The index is the ratio of the sum of between-clusters dispersion and of
within-cluster dispersion for all clusters (where dispersion is defined as the
sum of squared distances)::

  >>> from sklearn import metrics
  >>> from sklearn.metrics import pairwise_distances
  >>> from sklearn import datasets
  >>> X, y = datasets.load_iris(return_X_y=True)

In normal usage, the Calinski-Harabasz index is applied to the results of a
cluster analysis::

  >>> import numpy as np
  >>> from sklearn.cluster import KMeans
  >>> kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)
  >>> labels = kmeans_model.labels_
  >>> metrics.calinski_harabasz_score(X, labels)
  561.59...

.. topic:: Advantages

   - The score is higher when clusters are dense and well separated, which
     relates to a standard concept of a cluster.

   - The score is fast to compute.

.. topic:: Drawbacks

   - The Calinski-Harabasz index is generally higher for convex clusters than
     for other concepts of clusters, such as density based clusters like those
     obtained through DBSCAN.

.. dropdown:: Mathematical formulation

   For a set of data :math:`E` of size :math:`n_E` which has been clustered
   into :math:`k` clusters, the Calinski-Harabasz score :math:`s` is defined as
   the ratio of the between-clusters dispersion mean and the within-cluster
   dispersion:

   .. math::
      s = \frac{\mathrm{tr}(B_k)}{\mathrm{tr}(W_k)} \times \frac{n_E - k}{k - 1}

   where :math:`\mathrm{tr}(B_k)` is the trace of the between group dispersion
   matrix and :math:`\mathrm{tr}(W_k)` is the trace of the within-cluster
   dispersion matrix defined by:

   .. math:: W_k = \sum_{q=1}^k \sum_{x \in C_q} (x - c_q) (x - c_q)^T

   .. math:: B_k = \sum_{q=1}^k n_q (c_q - c_E) (c_q - c_E)^T

   with :math:`C_q` the set of points in cluster :math:`q`, :math:`c_q` the
   center of cluster :math:`q`, :math:`c_E` the center of :math:`E`, and
   :math:`n_q` the number of points in cluster :math:`q`.

.. dropdown:: References

   * Caliński, T., & Harabasz, J. (1974). `"A Dendrite Method for Cluster
     Analysis"
     <https://www.researchgate.net/publication/233096619_A_Dendrite_Method_for_Cluster_Analysis>`_.
     :doi:`Communications in Statistics - Theory and Methods 3: 1-27
     <10.1080/03610927408827101>`.
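The trace-ratio definition of the Calinski-Harabasz score can be sketched in a few lines of NumPy (the function name is ours); it should agree with :func:`metrics.calinski_harabasz_score` on the iris example:

```python
import numpy as np
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)
labels = KMeans(n_clusters=3, random_state=1, n_init=10).fit_predict(X)

def calinski_harabasz(X, labels):
    # s = tr(B_k) / tr(W_k) * (n_E - k) / (k - 1)
    X = np.asarray(X, dtype=float)
    clusters = np.unique(labels)
    n_samples, k = X.shape[0], len(clusters)
    overall_mean = X.mean(axis=0)
    within = between = 0.0
    for q in clusters:
        X_q = X[labels == q]
        center = X_q.mean(axis=0)
        within += ((X_q - center) ** 2).sum()                        # tr(W_k)
        between += len(X_q) * ((center - overall_mean) ** 2).sum()   # tr(B_k)
    return between / within * (n_samples - k) / (k - 1)

score = calinski_harabasz(X, labels)
```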
.. _davies-bouldin_index:

Davies-Bouldin Index
--------------------

If the ground truth labels are not known, the Davies-Bouldin index
(:func:`sklearn.metrics.davies_bouldin_score`) can be used to evaluate the
model, where a lower Davies-Bouldin index relates to a model with better
separation between the clusters.

This index signifies the average 'similarity' between clusters, where the
similarity is a measure that compares the distance between clusters with the
size of the clusters themselves.

Zero is the lowest possible score. Values closer to zero indicate a better
partition.

In normal usage, the Davies-Bouldin index is applied to the results of a
cluster analysis as follows::

  >>> from sklearn import datasets
  >>> iris = datasets.load_iris()
  >>> X = iris.data
  >>> from sklearn.cluster import KMeans
  >>> from sklearn.metrics import davies_bouldin_score
  >>> kmeans = KMeans(n_clusters=3, random_state=1).fit(X)
  >>> labels = kmeans.labels_
  >>> davies_bouldin_score(X, labels)
  0.666...

.. topic:: Advantages

   - The computation of Davies-Bouldin is simpler than that of Silhouette
     scores.

   - The index is solely based on quantities and features inherent to the
     dataset, as its computation only uses point-wise distances.

.. topic:: Drawbacks

   - The Davies-Bouldin index is generally higher for convex clusters than for
     other concepts of clusters, such as density based clusters like those
     obtained from DBSCAN.

   - The usage of centroid distance limits the distance metric to Euclidean
     space.

.. dropdown:: Mathematical formulation

   The index is defined as the average similarity between each cluster
   :math:`C_i` for :math:`i=1, ..., k` and its most similar one :math:`C_j`. In
   the context of this index, similarity is defined as a measure
   :math:`R_{ij}` that trades off:

   - :math:`s_i`, the average distance between each point of cluster :math:`i`
     and the centroid of that cluster - also known as the cluster diameter.

   - :math:`d_{ij}`, the distance between the centroids of clusters :math:`i`
     and :math:`j`.

   A simple choice to construct :math:`R_{ij}` so that it is nonnegative and
   symmetric is:

   .. math::
      R_{ij} = \frac{s_i + s_j}{d_{ij}}

   Then the Davies-Bouldin index is defined as:

   .. math::
      DB = \frac{1}{k} \sum_{i=1}^k \max_{i \neq j} R_{ij}

.. dropdown:: References

   * Davies, David L.; Bouldin, Donald W. (1979). :doi:`"A Cluster Separation
     Measure" <10.1109/TPAMI.1979.4766909>`. IEEE Transactions on Pattern
     Analysis and Machine Intelligence. PAMI-1 (2): 224-227.

   * Halkidi, Maria; Batistakis, Yannis; Vazirgiannis, Michalis (2001).
     :doi:`"On Clustering Validation Techniques" <10.1023/A:1012801612483>`.
     Journal of Intelligent Information Systems, 17(2-3), 107-145.

   * `Wikipedia entry for Davies-Bouldin index
     <https://en.wikipedia.org/wiki/Davies-Bouldin_index>`_
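The definition of the Davies-Bouldin index in terms of :math:`s_i`, :math:`d_{ij}` and :math:`R_{ij}` can be sketched directly (the function name is ours); with Euclidean distances it should agree with :func:`metrics.davies_bouldin_score`:

```python
import numpy as np
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)
labels = KMeans(n_clusters=3, random_state=1, n_init=10).fit_predict(X)

def davies_bouldin(X, labels):
    # DB = (1/k) * sum_i max_{j != i} (s_i + s_j) / d_ij
    X = np.asarray(X, dtype=float)
    clusters = np.unique(labels)
    centroids = np.array([X[labels == q].mean(axis=0) for q in clusters])
    # s_i: mean distance of the points in cluster i to its centroid
    s = np.array([
        np.linalg.norm(X[labels == q] - centroids[i], axis=1).mean()
        for i, q in enumerate(clusters)
    ])
    k = len(clusters)
    total = 0.0
    for i in range(k):
        r_max = 0.0
        for j in range(k):
            if i != j:
                d_ij = np.linalg.norm(centroids[i] - centroids[j])
                r_max = max(r_max, (s[i] + s[j]) / d_ij)
        total += r_max
    return total / k

db = davies_bouldin(X, labels)
```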
.. _contingency_matrix:

Contingency Matrix
------------------

The contingency matrix (:func:`sklearn.metrics.cluster.contingency_matrix`)
reports the intersection cardinality for every true/predicted cluster pair. The
contingency matrix provides sufficient statistics for all clustering metrics
where the samples are independent and identically distributed and one doesn't
need to account for some instances not being clustered.

Here is an example::

   >>> from sklearn.metrics.cluster import contingency_matrix
   >>> x = ["a", "a", "a", "b", "b", "b"]
   >>> y = [0, 0, 1, 1, 2, 2]
   >>> contingency_matrix(x, y)
   array([[2, 1, 0],
          [0, 1, 2]])

The first row of the output array indicates that there are three samples whose
true cluster is "a". Of them, two are in predicted cluster 0, one is in 1, and
none is in 2. And the second row indicates that there are three samples whose
true cluster is "b". Of them, none is in predicted cluster 0, one is in 1 and
two are in 2.

A :ref:`confusion matrix <confusion_matrix>` for classification is a square
contingency matrix where the order of rows and columns corresponds to a list
of classes.

.. topic:: Advantages

   - Allows examining the spread of each true cluster across predicted clusters
     and vice versa.

   - The contingency table calculated is typically utilized in the calculation
     of a similarity statistic (like the others listed in this document)
     between the two clusterings.

.. topic:: Drawbacks

   - The contingency matrix is easy to interpret for a small number of
     clusters, but becomes very hard to interpret for a large number of
     clusters.

   - It doesn't give a single metric to use as an objective for clustering
     optimisation.

.. dropdown:: References

   * `Wikipedia entry for contingency matrix
     <https://en.wikipedia.org/wiki/Contingency_table>`_

.. _pair_confusion_matrix:

Pair Confusion Matrix
---------------------

The pair confusion matrix
(:func:`sklearn.metrics.cluster.pair_confusion_matrix`) is a 2x2 similarity
matrix

.. math::
   C = \left[\begin{matrix}
   C_{00} & C_{01} \\
   C_{10} & C_{11}
   \end{matrix}\right]

between two clusterings, computed by considering all pairs of samples and
counting pairs that are assigned into the same or into different clusters under
the true and predicted clusterings.

It has the following entries:

:math:`C_{00}`: number of pairs with both clusterings having the samples not
clustered together

:math:`C_{10}`: number of pairs with the true label clustering having the
samples clustered together but the other clustering not having the samples
clustered together

:math:`C_{01}`: number of pairs with the true label clustering not having the
samples clustered together but the other clustering having the samples
clustered together

:math:`C_{11}`: number of pairs with both clusterings having the samples
clustered together

Considering a pair of samples that is clustered together a positive pair, then
as in binary classification the count of true negatives is :math:`C_{00}`,
false negatives is :math:`C_{10}`, true positives is :math:`C_{11}` and false
positives is :math:`C_{01}`.

Perfectly matching labelings have all non-zero entries on the diagonal
regardless of actual label values::

   >>> from sklearn.metrics.cluster import pair_confusion_matrix
   >>> pair_confusion_matrix([0, 0, 1, 1], [0, 0, 1, 1])
   array([[8, 0],
          [0, 4]])

::

   >>> pair_confusion_matrix([0, 0, 1, 1], [1, 1, 0, 0])
   array([[8, 0],
          [0, 4]])

Labelings that assign all classes members to the same clusters are complete but
may not always be pure, hence penalized, and have some off-diagonal non-zero
entries::

   >>> pair_confusion_matrix([0, 0, 1, 2], [0, 0, 1, 1])
   array([[8, 2],
          [0, 2]])

The matrix is not symmetric::

   >>> pair_confusion_matrix([0, 0, 1, 1], [0, 0, 1, 2])
   array([[8, 0],
          [2, 2]])

If classes members are completely split across different clusters, the
assignment is totally incomplete, hence the matrix has all zero diagonal
entries::

   >>> pair_confusion_matrix([0, 0, 0, 0], [0, 1, 2, 3])
   array([[ 0,  0],
          [12,  0]])

.. dropdown:: References

   * :doi:`"Comparing Partitions" <10.1007/BF01908075>` L. Hubert and P.
     Arabie, Journal of Classification 1985
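As the "positive pair" reading suggests, the entries of the pair confusion matrix suffice to recover the pair-counting metrics discussed earlier in this section. A short sketch (note that each unordered pair is counted twice, which cancels out in both ratios):

```python
from sklearn import metrics
from sklearn.metrics.cluster import pair_confusion_matrix

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 1, 2, 2]

# Layout: [[C00, C01], [C10, C11]] = [[TN, FP], [FN, TP]]
(tn, fp), (fn, tp) = pair_confusion_matrix(labels_true, labels_pred)

# The unadjusted Rand index is the pair-labelling accuracy
ri = (tp + tn) / (tp + tn + fp + fn)

# The Fowlkes-Mallows index: TP / sqrt((TP + FP) * (TP + FN))
fmi = tp / ((tp + fp) * (tp + fn)) ** 0.5
```

Both values agree with :func:`metrics.rand_score` and :func:`metrics.fowlkes_mallows_score` on this example.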
{"questions":"scikit-learn sklearn naivebayes naivebayes Naive Bayes Naive Bayes methods are a set of supervised learning algorithms","answers":".. _naive_bayes:\n\n===========\nNaive Bayes\n===========\n\n.. currentmodule:: sklearn.naive_bayes\n\n\nNaive Bayes methods are a set of supervised learning algorithms\nbased on applying Bayes' theorem with the \"naive\" assumption of\nconditional independence between every pair of features given the\nvalue of the class variable. Bayes' theorem states the following\nrelationship, given class variable :math:`y` and dependent feature\nvector :math:`x_1` through :math:`x_n`, :\n\n.. math::\n\n   P(y \\mid x_1, \\dots, x_n) = \\frac{P(y) P(x_1, \\dots, x_n \\mid y)}\n                                    {P(x_1, \\dots, x_n)}\n\nUsing the naive conditional independence assumption that\n\n.. math::\n\n   P(x_i | y, x_1, \\dots, x_{i-1}, x_{i+1}, \\dots, x_n) = P(x_i | y),\n\nfor all :math:`i`, this relationship is simplified to\n\n.. math::\n\n   P(y \\mid x_1, \\dots, x_n) = \\frac{P(y) \\prod_{i=1}^{n} P(x_i \\mid y)}\n                                    {P(x_1, \\dots, x_n)}\n\nSince :math:`P(x_1, \\dots, x_n)` is constant given the input,\nwe can use the following classification rule:\n\n.. math::\n\n   P(y \\mid x_1, \\dots, x_n) \\propto P(y) \\prod_{i=1}^{n} P(x_i \\mid y)\n\n   \\Downarrow\n\n   \\hat{y} = \\arg\\max_y P(y) \\prod_{i=1}^{n} P(x_i \\mid y),\n\nand we can use Maximum A Posteriori (MAP) estimation to estimate\n:math:`P(y)` and :math:`P(x_i \\mid y)`;\nthe former is then the relative frequency of class :math:`y`\nin the training set.\n\nThe different naive Bayes classifiers differ mainly by the assumptions they\nmake regarding the distribution of :math:`P(x_i \\mid y)`.\n\nIn spite of their apparently over-simplified assumptions, naive Bayes\nclassifiers have worked quite well in many real-world situations, famously\ndocument classification and spam filtering. 
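The MAP rule above can be evaluated by hand. A toy sketch with invented probability tables (two classes, three binary features; these numbers are made up for illustration, not estimated from any data):

```python
import numpy as np

# Invented tables: P(y) and P(x_i = 1 | y) for 2 classes, 3 binary features.
prior = np.array([0.6, 0.4])          # P(y)
p1 = np.array([[0.8, 0.1, 0.3],       # P(x_i = 1 | y = 0)
               [0.2, 0.7, 0.9]])      # P(x_i = 1 | y = 1)

x = np.array([1, 0, 1])               # observed feature vector

# P(y) * prod_i P(x_i | y); P(x_i | y) is p1 when x_i = 1, else 1 - p1.
likelihood = np.where(x == 1, p1, 1 - p1).prod(axis=1)
posterior_unnorm = prior * likelihood
y_hat = int(np.argmax(posterior_unnorm))
print(y_hat)
```

Here class 0 wins (0.6 * 0.8 * 0.9 * 0.3 = 0.1296 against 0.0216 for class 1); dividing by the constant :math:`P(x_1, \dots, x_n)` would not change the argmax.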
They require a small amount\nof training data to estimate the necessary parameters. (For theoretical\nreasons why naive Bayes works well, and on which types of data it does, see\nthe references below.)\n\nNaive Bayes learners and classifiers can be extremely fast compared to more\nsophisticated methods.\nThe decoupling of the class conditional feature distributions means that each\ndistribution can be independently estimated as a one dimensional distribution.\nThis in turn helps to alleviate problems stemming from the curse of\ndimensionality.\n\nOn the flip side, although naive Bayes is known as a decent classifier,\nit is known to be a bad estimator, so the probability outputs from\n``predict_proba`` are not to be taken too seriously.\n\n.. dropdown:: References\n\n   * H. Zhang (2004). `The optimality of Naive Bayes.\n     <https:\/\/www.cs.unb.ca\/~hzhang\/publications\/FLAIRS04ZhangH.pdf>`_\n     Proc. FLAIRS.\n\n.. _gaussian_naive_bayes:\n\nGaussian Naive Bayes\n--------------------\n\n:class:`GaussianNB` implements the Gaussian Naive Bayes algorithm for\nclassification. The likelihood of the features is assumed to be Gaussian:\n\n.. math::\n\n   P(x_i \\mid y) = \\frac{1}{\\sqrt{2\\pi\\sigma^2_y}} \\exp\\left(-\\frac{(x_i - \\mu_y)^2}{2\\sigma^2_y}\\right)\n\nThe parameters :math:`\\sigma_y` and :math:`\\mu_y`\nare estimated using maximum likelihood.\n\n   >>> from sklearn.datasets import load_iris\n   >>> from sklearn.model_selection import train_test_split\n   >>> from sklearn.naive_bayes import GaussianNB\n   >>> X, y = load_iris(return_X_y=True)\n   >>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)\n   >>> gnb = GaussianNB()\n   >>> y_pred = gnb.fit(X_train, y_train).predict(X_test)\n   >>> print(\"Number of mislabeled points out of a total %d points : %d\"\n   ...       % (X_test.shape[0], (y_test != y_pred).sum()))\n   Number of mislabeled points out of a total 75 points : 4\n\n.. 
_multinomial_naive_bayes:\n\nMultinomial Naive Bayes\n-----------------------\n\n:class:`MultinomialNB` implements the naive Bayes algorithm for multinomially\ndistributed data, and is one of the two classic naive Bayes variants used in\ntext classification (where the data are typically represented as word vector\ncounts, although tf-idf vectors are also known to work well in practice).\nThe distribution is parametrized by vectors\n:math:`\\theta_y = (\\theta_{y1},\\ldots,\\theta_{yn})`\nfor each class :math:`y`, where :math:`n` is the number of features\n(in text classification, the size of the vocabulary)\nand :math:`\\theta_{yi}` is the probability :math:`P(x_i \\mid y)`\nof feature :math:`i` appearing in a sample belonging to class :math:`y`.\n\nThe parameters :math:`\\theta_y` are estimated by a smoothed\nversion of maximum likelihood, i.e. relative frequency counting:\n\n.. math::\n\n    \\hat{\\theta}_{yi} = \\frac{ N_{yi} + \\alpha}{N_y + \\alpha n}\n\nwhere :math:`N_{yi} = \\sum_{x \\in T} x_i` is\nthe number of times feature :math:`i` appears in all samples of class :math:`y`\nin the training set :math:`T`,\nand :math:`N_{y} = \\sum_{i=1}^{n} N_{yi}` is the total count of\nall features for class :math:`y`.\n\nThe smoothing priors :math:`\\alpha \\ge 0` account for\nfeatures not present in the learning samples and prevent zero probabilities\nin further computations.\nSetting :math:`\\alpha = 1` is called Laplace smoothing,\nwhile :math:`\\alpha < 1` is called Lidstone smoothing.\n\n.. _complement_naive_bayes:\n\nComplement Naive Bayes\n----------------------\n\n:class:`ComplementNB` implements the complement naive Bayes (CNB) algorithm.\nCNB is an adaptation of the standard multinomial naive Bayes (MNB) algorithm\nthat is particularly suited for imbalanced data sets. 
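Both multinomial variants share the usual estimator API. A minimal usage sketch on invented count data, assuming scikit-learn is installed (the random counts carry no real signal; this only shows the calls):

```python
import numpy as np
from sklearn.naive_bayes import ComplementNB, MultinomialNB

rng = np.random.RandomState(0)
X = rng.randint(5, size=(6, 10))      # toy word-count matrix, 6 docs x 10 terms
y = np.array([0, 0, 0, 0, 1, 2])      # deliberately imbalanced labels

mnb = MultinomialNB().fit(X, y)
cnb = ComplementNB().fit(X, y)
print(mnb.predict(X).shape)
```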
Specifically, CNB uses\nstatistics from the *complement* of each class to compute the model's weights.\nThe inventors of CNB show empirically that the parameter estimates for CNB are\nmore stable than those for MNB. Further, CNB regularly outperforms MNB (often\nby a considerable margin) on text classification tasks.\n\n.. dropdown:: Weights calculation\n\n   The procedure for calculating the weights is as follows:\n\n   .. math::\n\n      \\hat{\\theta}_{ci} = \\frac{\\alpha_i + \\sum_{j:y_j \\neq c} d_{ij}}\n                              {\\alpha + \\sum_{j:y_j \\neq c} \\sum_{k} d_{kj}}\n\n      w_{ci} = \\log \\hat{\\theta}_{ci}\n\n      w_{ci} = \\frac{w_{ci}}{\\sum_{j} |w_{cj}|}\n\n   where the summations are over all documents :math:`j` not in class :math:`c`,\n   :math:`d_{ij}` is either the count or tf-idf value of term :math:`i` in document\n   :math:`j`, :math:`\\alpha_i` is a smoothing hyperparameter like that found in\n   MNB, and :math:`\\alpha = \\sum_{i} \\alpha_i`. The second normalization addresses\n   the tendency for longer documents to dominate parameter estimates in MNB. The\n   classification rule is:\n\n   .. math::\n\n      \\hat{c} = \\arg\\min_c \\sum_{i} t_i w_{ci}\n\n   i.e., a document is assigned to the class that is the *poorest* complement\n   match.\n\n.. dropdown:: References\n\n   * Rennie, J. D., Shih, L., Teevan, J., & Karger, D. R. (2003).\n     `Tackling the poor assumptions of naive bayes text classifiers.\n     <https:\/\/people.csail.mit.edu\/jrennie\/papers\/icml03-nb.pdf>`_\n     In ICML (Vol. 3, pp. 616-623).\n\n\n.. 
_bernoulli_naive_bayes:\n\nBernoulli Naive Bayes\n---------------------\n\n:class:`BernoulliNB` implements the naive Bayes training and classification\nalgorithms for data that is distributed according to multivariate Bernoulli\ndistributions; i.e., there may be multiple features but each one is assumed\nto be a binary-valued (Bernoulli, boolean) variable.\nTherefore, this class requires samples to be represented as binary-valued\nfeature vectors; if handed any other kind of data, a :class:`BernoulliNB` instance\nmay binarize its input (depending on the ``binarize`` parameter).\n\nThe decision rule for Bernoulli naive Bayes is based on\n\n.. math::\n\n    P(x_i \\mid y) = P(x_i = 1 \\mid y) x_i + (1 - P(x_i = 1 \\mid y)) (1 - x_i)\n\nwhich differs from multinomial NB's rule\nin that it explicitly penalizes the non-occurrence of a feature :math:`i`\nthat is an indicator for class :math:`y`,\nwhere the multinomial variant would simply ignore a non-occurring feature.\n\nIn the case of text classification, word occurrence vectors (rather than word\ncount vectors) may be used to train and use this classifier. :class:`BernoulliNB`\nmight perform better on some datasets, especially those with shorter documents.\nIt is advisable to evaluate both models, if time permits.\n\n.. dropdown:: References\n\n   * C.D. Manning, P. Raghavan and H. Sch\u00fctze (2008). Introduction to\n     Information Retrieval. Cambridge University Press, pp. 234-265.\n\n   * A. McCallum and K. Nigam (1998).\n     `A comparison of event models for Naive Bayes text classification.\n     <https:\/\/citeseerx.ist.psu.edu\/doc_view\/pid\/04ce064505b1635583fa0d9cc07cac7e9ea993cc>`_\n     Proc. AAAI\/ICML-98 Workshop on Learning for Text Categorization, pp. 41-48.\n\n   * V. Metsis, I. Androutsopoulos and G. Paliouras (2006).\n     `Spam filtering with Naive Bayes -- Which Naive Bayes?\n     <https:\/\/citeseerx.ist.psu.edu\/doc_view\/pid\/8bd0934b366b539ec95e683ae39f8abb29ccc757>`_\n     3rd Conf. 
on Email and Anti-Spam (CEAS).\n\n\n.. _categorical_naive_bayes:\n\nCategorical Naive Bayes\n-----------------------\n\n:class:`CategoricalNB` implements the categorical naive Bayes\nalgorithm for categorically distributed data. It assumes that each feature,\nwhich is described by the index :math:`i`, has its own categorical\ndistribution.\n\nFor each feature :math:`i` in the training set :math:`X`,\n:class:`CategoricalNB` estimates a categorical distribution\nconditioned on the class :math:`y`. The index set of the samples is defined as\n:math:`J = \\{ 1, \\dots, m \\}`, with :math:`m` as the number of samples.\n\n.. dropdown:: Probability calculation\n\n   The probability of category :math:`t` in feature :math:`i` given class\n   :math:`c` is estimated as:\n\n   .. math::\n\n      P(x_i = t \\mid y = c \\: ;\\, \\alpha) = \\frac{ N_{tic} + \\alpha}{N_{c} +\n                                             \\alpha n_i},\n\n   where :math:`N_{tic} = |\\{j \\in J \\mid x_{ij} = t, y_j = c\\}|` is the number\n   of times category :math:`t` appears in the samples :math:`x_{i}`, which belong\n   to class :math:`c`, :math:`N_{c} = |\\{ j \\in J\\mid y_j = c\\}|` is the number\n   of samples with class :math:`c`, :math:`\\alpha` is a smoothing parameter and\n   :math:`n_i` is the number of available categories of feature :math:`i`.\n\n:class:`CategoricalNB` assumes that the sample matrix :math:`X` is encoded (for\ninstance with the help of :class:`~sklearn.preprocessing.OrdinalEncoder`) such\nthat all categories for each feature :math:`i` are represented with numbers\n:math:`0, ..., n_i - 1` where :math:`n_i` is the number of available categories\nof feature :math:`i`.\n\nOut-of-core naive Bayes model fitting\n-------------------------------------\n\nNaive Bayes models can be used to tackle large scale classification problems\nfor which the full training set might not fit in memory. 
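Chunk-wise fitting with ``partial_fit`` looks like the following minimal sketch (invented chunks standing in for a data stream; note the full class list on the first call):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

clf = MultinomialNB()
classes = np.array([0, 1, 2])         # every label that will ever appear

rng = np.random.RandomState(0)
for i in range(3):                    # each chunk stands in for streamed data
    X_chunk = rng.randint(5, size=(20, 8))
    y_chunk = rng.randint(3, size=20)
    # the classes argument is mandatory on the first call only
    clf.partial_fit(X_chunk, y_chunk, classes=classes if i == 0 else None)

print(int(clf.class_count_.sum()))    # total samples seen across all chunks
```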
To handle this case,\n:class:`MultinomialNB`, :class:`BernoulliNB`, and :class:`GaussianNB`\nexpose a ``partial_fit`` method that can be used\nincrementally as done with other classifiers as demonstrated in\n:ref:`sphx_glr_auto_examples_applications_plot_out_of_core_classification.py`. All naive Bayes\nclassifiers support sample weighting.\n\nContrary to the ``fit`` method, the first call to ``partial_fit`` needs to be\npassed the list of all the expected class labels.\n\nFor an overview of available strategies in scikit-learn, see also the\n:ref:`out-of-core learning <scaling_strategies>` documentation.\n\n.. note::\n\n   The ``partial_fit`` method call of naive Bayes models introduces some\n   computational overhead. It is recommended to use data chunk sizes that are as\n   large as possible, that is as the available RAM allows.","site":"scikit-learn"}
{"questions":"scikit-learn The cross decomposition module contains supervised estimators for crossdecomposition Cross decomposition sklearn crossdecomposition dimensionality reduction and regression belonging to the Partial Least","answers":".. _cross_decomposition:\n\n===================\nCross decomposition\n===================\n\n.. currentmodule:: sklearn.cross_decomposition\n\nThe cross decomposition module contains **supervised** estimators for\ndimensionality reduction and regression, belonging to the \"Partial Least\nSquares\" family.\n\n.. figure:: ..\/auto_examples\/cross_decomposition\/images\/sphx_glr_plot_compare_cross_decomposition_001.png\n   :target: ..\/auto_examples\/cross_decomposition\/plot_compare_cross_decomposition.html\n   :scale: 75%\n   :align: center\n\n\nCross decomposition algorithms find the fundamental relations between two\nmatrices (X and Y). They are latent variable approaches to modeling the\ncovariance structures in these two spaces. They will try to find the\nmultidimensional direction in the X space that explains the maximum\nmultidimensional variance direction in the Y space. In other words, PLS\nprojects both `X` and `Y` into a lower-dimensional subspace such that the\ncovariance between `transformed(X)` and `transformed(Y)` is maximal.\n\nPLS draws similarities with `Principal Component Regression\n<https:\/\/en.wikipedia.org\/wiki\/Principal_component_regression>`_ (PCR), where\nthe samples are first projected into a lower-dimensional subspace, and the\ntargets `y` are predicted using `transformed(X)`. One issue with PCR is that\nthe dimensionality reduction is unsupervised, and may lose some important\nvariables: PCR would keep the features with the most variance, but it is\npossible that features with small variance are relevant for predicting\nthe target. In a way, PLS allows for the same kind of dimensionality\nreduction, but by taking into account the targets `y`. 
An illustration of\nthis fact is given in the following example:\n* :ref:`sphx_glr_auto_examples_cross_decomposition_plot_pcr_vs_pls.py`.\n\nApart from CCA, the PLS estimators are particularly suited when the matrix of\npredictors has more variables than observations, and when there is\nmulticollinearity among the features. By contrast, standard linear regression\nwould fail in these cases unless it is regularized.\n\nClasses included in this module are :class:`PLSRegression`,\n:class:`PLSCanonical`, :class:`CCA` and :class:`PLSSVD`\n\nPLSCanonical\n------------\n\nWe here describe the algorithm used in :class:`PLSCanonical`. The other\nestimators use variants of this algorithm, and are detailed below.\nWe recommend section [1]_ for more details and comparisons between these\nalgorithms. In [1]_, :class:`PLSCanonical` corresponds to \"PLSW2A\".\n\nGiven two centered matrices :math:`X \\in \\mathbb{R}^{n \\times d}` and\n:math:`Y \\in \\mathbb{R}^{n \\times t}`, and a number of components :math:`K`,\n:class:`PLSCanonical` proceeds as follows:\n\nSet :math:`X_1` to :math:`X` and :math:`Y_1` to :math:`Y`. Then, for each\n:math:`k \\in [1, K]`:\n\n- a) compute :math:`u_k \\in \\mathbb{R}^d` and :math:`v_k \\in \\mathbb{R}^t`,\n  the first left and right singular vectors of the cross-covariance matrix\n  :math:`C = X_k^T Y_k`.\n  :math:`u_k` and :math:`v_k` are called the *weights*.\n  By definition, :math:`u_k` and :math:`v_k` are\n  chosen so that they maximize the covariance between the projected\n  :math:`X_k` and the projected target, that is :math:`\\text{Cov}(X_k u_k,\n  Y_k v_k)`.\n- b) Project :math:`X_k` and :math:`Y_k` on the singular vectors to obtain\n  *scores*: :math:`\\xi_k = X_k u_k` and :math:`\\omega_k = Y_k v_k`\n- c) Regress :math:`X_k` on :math:`\\xi_k`, i.e. find a vector :math:`\\gamma_k\n  \\in \\mathbb{R}^d` such that the rank-1 matrix :math:`\\xi_k \\gamma_k^T`\n  is as close as possible to :math:`X_k`. 
Do the same on :math:`Y_k` with\n  :math:`\\omega_k` to obtain :math:`\\delta_k`. The vectors\n  :math:`\\gamma_k` and :math:`\\delta_k` are called the *loadings*.\n- d) *deflate* :math:`X_k` and :math:`Y_k`, i.e. subtract the rank-1\n  approximations: :math:`X_{k+1} = X_k - \\xi_k \\gamma_k^T`, and\n  :math:`Y_{k + 1} = Y_k - \\omega_k \\delta_k^T`.\n\nAt the end, we have approximated :math:`X` as a sum of rank-1 matrices:\n:math:`X = \\Xi \\Gamma^T` where :math:`\\Xi \\in \\mathbb{R}^{n \\times K}`\ncontains the scores in its columns, and :math:`\\Gamma^T \\in \\mathbb{R}^{K\n\\times d}` contains the loadings in its rows. Similarly for :math:`Y`, we\nhave :math:`Y = \\Omega \\Delta^T`.\n\nNote that the scores matrices :math:`\\Xi` and :math:`\\Omega` correspond to\nthe projections of the training data :math:`X` and :math:`Y`, respectively.\n\nStep *a)* may be performed in two ways: either by computing the whole SVD of\n:math:`C` and retaining only the singular vectors with the largest singular\nvalues, or by directly computing the singular vectors using the power method (cf section 11.3 in [1]_),\nwhich corresponds to the `'nipals'` option of the `algorithm` parameter.\n\n.. dropdown:: Transforming data\n\n  To transform :math:`X` into :math:`\\bar{X}`, we need to find a projection\n  matrix :math:`P` such that :math:`\\bar{X} = XP`. We know that for the\n  training data, :math:`\\Xi = XP`, and :math:`X = \\Xi \\Gamma^T`. Setting\n  :math:`P = U(\\Gamma^T U)^{-1}` where :math:`U` is the matrix with the\n  :math:`u_k` in the columns, we have :math:`XP = X U(\\Gamma^T U)^{-1} = \\Xi\n  (\\Gamma^T U) (\\Gamma^T U)^{-1} = \\Xi` as desired. The rotation matrix\n  :math:`P` can be accessed from the `x_rotations_` attribute.\n\n  Similarly, :math:`Y` can be transformed using the rotation matrix\n  :math:`V(\\Delta^T V)^{-1}`, accessed via the `y_rotations_` attribute.\n\n.. 
dropdown:: Predicting the targets `Y`\n\n  To predict the targets of some data :math:`X`, we are looking for a\n  coefficient matrix :math:`\\beta \\in \\mathbb{R}^{d \\times t}` such that :math:`Y =\n  X\\beta`.\n\n  The idea is to try to predict the transformed targets :math:`\\Omega` as a\n  function of the transformed samples :math:`\\Xi`, by computing :math:`\\alpha\n  \\in \\mathbb{R}` such that :math:`\\Omega = \\alpha \\Xi`.\n\n  Then, we have :math:`Y = \\Omega \\Delta^T = \\alpha \\Xi \\Delta^T`, and since\n  :math:`\\Xi` is the transformed training data we have that :math:`Y = X \\alpha\n  P \\Delta^T`, and as a result the coefficient matrix :math:`\\beta = \\alpha P\n  \\Delta^T`.\n\n  :math:`\\beta` can be accessed through the `coef_` attribute.\n\nPLSSVD\n------\n\n:class:`PLSSVD` is a simplified version of :class:`PLSCanonical`\ndescribed earlier: instead of iteratively deflating the matrices :math:`X_k`\nand :math:`Y_k`, :class:`PLSSVD` computes the SVD of :math:`C = X^TY`\nonly *once*, and stores the `n_components` singular vectors corresponding to\nthe largest singular values in the matrices `U` and `V`, corresponding to the\n`x_weights_` and `y_weights_` attributes. Here, the transformed data is\nsimply `transformed(X) = XU` and `transformed(Y) = YV`.\n\nIf `n_components == 1`, :class:`PLSSVD` and :class:`PLSCanonical` are\nstrictly equivalent.\n\nPLSRegression\n-------------\n\nThe :class:`PLSRegression` estimator is similar to\n:class:`PLSCanonical` with `algorithm='nipals'`, with two significant\ndifferences:\n\n- at step a) in the power method to compute :math:`u_k` and :math:`v_k`,\n  :math:`v_k` is never normalized.\n- at step c), the targets :math:`Y_k` are approximated using the projection\n  of :math:`X_k` (i.e. :math:`\\xi_k`) instead of the projection of\n  :math:`Y_k` (i.e. :math:`\\omega_k`). In other words, the loadings\n  computation is different. 
As a result, the deflation in step d) will also\n  be affected.\n\nThese two modifications affect the output of `predict` and `transform`,\nwhich are not the same as for :class:`PLSCanonical`. Also, while the number\nof components is limited by `min(n_samples, n_features, n_targets)` in\n:class:`PLSCanonical`, here the limit is the rank of :math:`X^TX`, i.e.\n`min(n_samples, n_features)`.\n\n:class:`PLSRegression` is also known as PLS1 (single targets) and PLS2\n(multiple targets). Much like :class:`~sklearn.linear_model.Lasso`,\n:class:`PLSRegression` is a form of regularized linear regression where the\nnumber of components controls the strength of the regularization.\n\nCanonical Correlation Analysis\n------------------------------\n\nCanonical Correlation Analysis was developed prior to, and independently of, PLS.\nIt turns out, however, that :class:`CCA` is a special case of PLS, and corresponds\nto PLS in \"Mode B\" in the literature.\n\n:class:`CCA` differs from :class:`PLSCanonical` in the way the weights\n:math:`u_k` and :math:`v_k` are computed in the power method of step a).\nDetails can be found in section 10 of [1]_.\n\nSince :class:`CCA` involves the inversion of :math:`X_k^TX_k` and\n:math:`Y_k^TY_k`, this estimator can be unstable if the number of features or\ntargets is greater than the number of samples.\n\n.. rubric:: References\n\n.. [1] `A survey of Partial Least Squares (PLS) methods, with emphasis on the two-block\n  case <https:\/\/stat.uw.edu\/sites\/default\/files\/files\/reports\/2000\/tr371.pdf>`_,\n  JA Wegelin\n\n.. 
rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_cross_decomposition_plot_compare_cross_decomposition.py`\n* :ref:`sphx_glr_auto_examples_cross_decomposition_plot_pcr_vs_pls.py`","site":"scikit-learn"}
{"questions":"scikit-learn for and ref regression tree Decision Trees Decision Trees DTs are a non parametric supervised learning method used sklearn tree","answers":".. _tree:\n\n==============\nDecision Trees\n==============\n\n.. currentmodule:: sklearn.tree\n\n**Decision Trees (DTs)** are a non-parametric supervised learning method used\nfor :ref:`classification <tree_classification>` and :ref:`regression\n<tree_regression>`. The goal is to create a model that predicts the value of a\ntarget variable by learning simple decision rules inferred from the data\nfeatures. A tree can be seen as a piecewise constant approximation.\n\nFor instance, in the example below, decision trees learn from data to\napproximate a sine curve with a set of if-then-else decision rules. The deeper\nthe tree, the more complex the decision rules and the fitter the model.\n\n.. figure:: ..\/auto_examples\/tree\/images\/sphx_glr_plot_tree_regression_001.png\n   :target: ..\/auto_examples\/tree\/plot_tree_regression.html\n   :scale: 75\n   :align: center\n\nSome advantages of decision trees are:\n\n- Simple to understand and to interpret. Trees can be visualized.\n\n- Requires little data preparation. Other techniques often require data\n  normalization, dummy variables to be created, and blank values to\n  be removed. Some tree and algorithm combinations support\n  :ref:`missing values <tree_missing_value_support>`.\n\n- The cost of using the tree (i.e., predicting data) is logarithmic in the\n  number of data points used to train the tree.\n\n- Able to handle both numerical and categorical data. However, the scikit-learn\n  implementation does not support categorical variables for now. Other\n  techniques are usually specialized in analyzing datasets that have only one type\n  of variable. See :ref:`algorithms <tree_algorithms>` for more\n  information.\n\n- Able to handle multi-output problems.\n\n- Uses a white box model. 
If a given situation is observable in a model,\n  the explanation for the condition is easily explained by boolean logic.\n  By contrast, in a black box model (e.g., in an artificial neural\n  network), results may be more difficult to interpret.\n\n- Possible to validate a model using statistical tests. That makes it\n  possible to account for the reliability of the model.\n\n- Performs well even if its assumptions are somewhat violated by\n  the true model from which the data were generated.\n\n\nThe disadvantages of decision trees include:\n\n- Decision-tree learners can create over-complex trees that do not\n  generalize the data well. This is called overfitting. Mechanisms\n  such as pruning, setting the minimum number of samples required\n  at a leaf node or setting the maximum depth of the tree are\n  necessary to avoid this problem.\n\n- Decision trees can be unstable because small variations in the\n  data might result in a completely different tree being generated.\n  This problem is mitigated by using decision trees within an\n  ensemble.\n\n- Predictions of decision trees are neither smooth nor continuous, but\n  piecewise constant approximations as seen in the above figure. Therefore,\n  they are not good at extrapolation.\n\n- The problem of learning an optimal decision tree is known to be\n  NP-complete under several aspects of optimality and even for simple\n  concepts. Consequently, practical decision-tree learning algorithms\n  are based on heuristic algorithms such as the greedy algorithm where\n  locally optimal decisions are made at each node. Such algorithms\n  cannot guarantee to return the globally optimal decision tree.  
This\n  can be mitigated by training multiple trees in an ensemble learner,\n  where the features and samples are randomly sampled with replacement.\n\n- There are concepts that are hard to learn because decision trees\n  do not express them easily, such as XOR, parity or multiplexer problems.\n\n- Decision tree learners create biased trees if some classes dominate.\n  It is therefore recommended to balance the dataset prior to fitting\n  with the decision tree.\n\n\n.. _tree_classification:\n\nClassification\n==============\n\n:class:`DecisionTreeClassifier` is a class capable of performing multi-class\nclassification on a dataset.\n\nAs with other classifiers, :class:`DecisionTreeClassifier` takes as input two arrays:\nan array X, sparse or dense, of shape ``(n_samples, n_features)`` holding the\ntraining samples, and an array Y of integer values, shape ``(n_samples,)``,\nholding the class labels for the training samples::\n\n    >>> from sklearn import tree\n    >>> X = [[0, 0], [1, 1]]\n    >>> Y = [0, 1]\n    >>> clf = tree.DecisionTreeClassifier()\n    >>> clf = clf.fit(X, Y)\n\nAfter being fitted, the model can then be used to predict the class of samples::\n\n    >>> clf.predict([[2., 2.]])\n    array([1])\n\nIn case that there are multiple classes with the same and highest\nprobability, the classifier will predict the class with the lowest index\namongst those classes.\n\nAs an alternative to outputting a specific class, the probability of each class\ncan be predicted, which is the fraction of training samples of the class in a\nleaf::\n\n    >>> clf.predict_proba([[2., 2.]])\n    array([[0., 1.]])\n\n:class:`DecisionTreeClassifier` is capable of both binary (where the\nlabels are [-1, 1]) classification and multiclass (where the labels are\n[0, ..., K-1]) classification.\n\nUsing the Iris dataset, we can construct a tree as follows::\n\n    >>> from sklearn.datasets import load_iris\n    >>> from sklearn import tree\n    >>> iris = load_iris()\n    >>> X, 
y = iris.data, iris.target\n    >>> clf = tree.DecisionTreeClassifier()\n    >>> clf = clf.fit(X, y)\n\nOnce trained, you can plot the tree with the :func:`plot_tree` function::\n\n\n    >>> tree.plot_tree(clf)\n    [...]\n\n.. figure:: ..\/auto_examples\/tree\/images\/sphx_glr_plot_iris_dtc_002.png\n   :target: ..\/auto_examples\/tree\/plot_iris_dtc.html\n   :scale: 75\n   :align: center\n\n.. dropdown:: Alternative ways to export trees\n\n  We can also export the tree in `Graphviz\n  <https:\/\/www.graphviz.org\/>`_ format using the :func:`export_graphviz`\n  exporter. If you use the `conda <https:\/\/conda.io>`_ package manager, the graphviz binaries\n  and the python package can be installed with `conda install python-graphviz`.\n\n  Alternatively binaries for graphviz can be downloaded from the graphviz project homepage,\n  and the Python wrapper installed from pypi with `pip install graphviz`.\n\n  Below is an example graphviz export of the above tree trained on the entire\n  iris dataset; the results are saved in an output file `iris.pdf`::\n\n\n      >>> import graphviz # doctest: +SKIP\n      >>> dot_data = tree.export_graphviz(clf, out_file=None) # doctest: +SKIP\n      >>> graph = graphviz.Source(dot_data) # doctest: +SKIP\n      >>> graph.render(\"iris\") # doctest: +SKIP\n\n  The :func:`export_graphviz` exporter also supports a variety of aesthetic\n  options, including coloring nodes by their class (or value for regression) and\n  using explicit variable and class names if desired. Jupyter notebooks also\n  render these plots inline automatically::\n\n      >>> dot_data = tree.export_graphviz(clf, out_file=None, # doctest: +SKIP\n      ...                      feature_names=iris.feature_names,  # doctest: +SKIP\n      ...                      class_names=iris.target_names,  # doctest: +SKIP\n      ...                      filled=True, rounded=True,  # doctest: +SKIP\n      ...                      
special_characters=True)  # doctest: +SKIP\n      >>> graph = graphviz.Source(dot_data)  # doctest: +SKIP\n      >>> graph # doctest: +SKIP\n\n  .. only:: html\n\n      .. figure:: ..\/images\/iris.svg\n        :align: center\n\n  .. only:: latex\n\n      .. figure:: ..\/images\/iris.pdf\n        :align: center\n\n  .. figure:: ..\/auto_examples\/tree\/images\/sphx_glr_plot_iris_dtc_001.png\n    :target: ..\/auto_examples\/tree\/plot_iris_dtc.html\n    :align: center\n    :scale: 75\n\n  Alternatively, the tree can also be exported in textual format with the\n  function :func:`export_text`. This method doesn't require the installation\n  of external libraries and is more compact:\n\n      >>> from sklearn.datasets import load_iris\n      >>> from sklearn.tree import DecisionTreeClassifier\n      >>> from sklearn.tree import export_text\n      >>> iris = load_iris()\n      >>> decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)\n      >>> decision_tree = decision_tree.fit(iris.data, iris.target)\n      >>> r = export_text(decision_tree, feature_names=iris['feature_names'])\n      >>> print(r)\n      |--- petal width (cm) <= 0.80\n      |   |--- class: 0\n      |--- petal width (cm) >  0.80\n      |   |--- petal width (cm) <= 1.75\n      |   |   |--- class: 1\n      |   |--- petal width (cm) >  1.75\n      |   |   |--- class: 2\n      <BLANKLINE>\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_tree_plot_iris_dtc.py`\n* :ref:`sphx_glr_auto_examples_tree_plot_unveil_tree_structure.py`\n\n.. _tree_regression:\n\nRegression\n==========\n\n.. 
figure:: ..\/auto_examples\/tree\/images\/sphx_glr_plot_tree_regression_001.png\n   :target: ..\/auto_examples\/tree\/plot_tree_regression.html\n   :scale: 75\n   :align: center\n\nDecision trees can also be applied to regression problems, using the\n:class:`DecisionTreeRegressor` class.\n\nAs in the classification setting, the fit method will take as argument arrays X\nand y, only that in this case y is expected to have floating point values\ninstead of integer values::\n\n    >>> from sklearn import tree\n    >>> X = [[0, 0], [2, 2]]\n    >>> y = [0.5, 2.5]\n    >>> clf = tree.DecisionTreeRegressor()\n    >>> clf = clf.fit(X, y)\n    >>> clf.predict([[1, 1]])\n    array([0.5])\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_tree_plot_tree_regression.py`\n\n\n.. _tree_multioutput:\n\nMulti-output problems\n=====================\n\nA multi-output problem is a supervised learning problem with several outputs\nto predict, that is when Y is a 2d array of shape ``(n_samples, n_outputs)``.\n\nWhen there is no correlation between the outputs, a very simple way to solve\nthis kind of problem is to build n independent models, i.e. one for each\noutput, and then to use those models to independently predict each one of the n\noutputs. However, because it is likely that the output values related to the\nsame input are themselves correlated, an often better way is to build a single\nmodel capable of predicting simultaneously all n outputs. First, it requires\nlower training time since only a single estimator is built. Second, the\ngeneralization accuracy of the resulting estimator may often be increased.\n\nWith regard to decision trees, this strategy can readily be used to support\nmulti-output problems. 
This requires the following changes:\n\n- Store n output values in leaves, instead of 1;\n- Use splitting criteria that compute the average reduction across all\n  n outputs.\n\nThis module offers support for multi-output problems by implementing this\nstrategy in both :class:`DecisionTreeClassifier` and\n:class:`DecisionTreeRegressor`. If a decision tree is fit on an output array Y\nof shape ``(n_samples, n_outputs)`` then the resulting estimator will:\n\n* Output n_output values upon ``predict``;\n\n* Output a list of n_output arrays of class probabilities upon\n  ``predict_proba``.\n\nThe use of multi-output trees for regression is demonstrated in\n:ref:`sphx_glr_auto_examples_tree_plot_tree_regression.py`. In this example, the input\nX is a single real value and the outputs Y are the sine and cosine of X.\n\n.. figure:: ..\/auto_examples\/tree\/images\/sphx_glr_plot_tree_regression_002.png\n   :target: ..\/auto_examples\/tree\/plot_tree_regression.html\n   :scale: 75\n   :align: center\n\nThe use of multi-output trees for classification is demonstrated in\n:ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`. In this example, the inputs\nX are the pixels of the upper half of faces and the outputs Y are the pixels of\nthe lower half of those faces.\n\n.. figure:: ..\/auto_examples\/miscellaneous\/images\/sphx_glr_plot_multioutput_face_completion_001.png\n   :target: ..\/auto_examples\/miscellaneous\/plot_multioutput_face_completion.html\n   :scale: 75\n   :align: center\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`\n\n.. rubric:: References\n\n* M. Dumont et al,  `Fast multi-class image annotation with random subwindows\n  and multiple output randomized trees\n  <http:\/\/www.montefiore.ulg.ac.be\/services\/stochastic\/pubs\/2009\/DMWG09\/dumont-visapp09-shortpaper.pdf>`_,\n  International Conference on Computer Vision Theory and Applications 2009\n\n.. 
_tree_complexity:\n\nComplexity\n==========\n\nIn general, the run time cost to construct a balanced binary tree is\n:math:`O(n_{samples}n_{features}\\log(n_{samples}))` and query time\n:math:`O(\\log(n_{samples}))`.  Although the tree construction algorithm attempts\nto generate balanced trees, they will not always be balanced.  Assuming that the\nsubtrees remain approximately balanced, the cost at each node consists of\nsearching through :math:`O(n_{features})` to find the feature that offers the\nlargest reduction in the impurity criterion, e.g. log loss (which is equivalent to an\ninformation gain). This has a cost of\n:math:`O(n_{features}n_{samples}\\log(n_{samples}))` at each node, leading to a\ntotal cost over the entire tree (by summing the cost at each node) of\n:math:`O(n_{features}n_{samples}^{2}\\log(n_{samples}))`.\n\n\nTips on practical use\n=====================\n\n* Decision trees tend to overfit on data with a large number of features.\n  Getting the right ratio of samples to number of features is important, since\n  a tree with few samples in high dimensional space is very likely to overfit.\n\n* Consider performing dimensionality reduction (:ref:`PCA <PCA>`,\n  :ref:`ICA <ICA>`, or :ref:`feature_selection`) beforehand to\n  give your tree a better chance of finding features that are discriminative.\n\n* :ref:`sphx_glr_auto_examples_tree_plot_unveil_tree_structure.py` will help\n  in gaining more insights about how the decision tree makes predictions, which is\n  important for understanding the important features in the data.\n\n* Visualize your tree as you are training by using the ``export``\n  function.  Use ``max_depth=3`` as an initial tree depth to get a feel for\n  how the tree is fitting to your data, and then increase the depth.\n\n* Remember that the number of samples required to populate the tree doubles\n  for each additional level the tree grows to.  
Use ``max_depth`` to control\n  the size of the tree to prevent overfitting.\n\n* Use ``min_samples_split`` or ``min_samples_leaf`` to ensure that multiple\n  samples inform every decision in the tree, by controlling which splits will\n  be considered. A very small number will usually mean the tree will overfit,\n  whereas a large number will prevent the tree from learning the data. Try\n  ``min_samples_leaf=5`` as an initial value. If the sample size varies\n  greatly, a float can be used as a percentage in these two parameters.\n  While ``min_samples_split`` can create arbitrarily small leaves,\n  ``min_samples_leaf`` guarantees that each leaf has a minimum size, avoiding\n  low-variance, over-fit leaf nodes in regression problems.  For\n  classification with few classes, ``min_samples_leaf=1`` is often the best\n  choice.\n\n  Note that ``min_samples_split`` considers samples directly and independently of\n  ``sample_weight``, if provided (e.g. a node with m weighted samples is still\n  treated as having exactly m samples). Consider ``min_weight_fraction_leaf`` or\n  ``min_impurity_decrease`` if accounting for sample weights is required at splits.\n\n* Balance your dataset before training to prevent the tree from being biased\n  toward the classes that are dominant. Class balancing can be done by\n  sampling an equal number of samples from each class, or preferably by\n  normalizing the sum of the sample weights (``sample_weight``) for each\n  class to the same value. 
Also note that weight-based pre-pruning criteria,\n  such as ``min_weight_fraction_leaf``, will then be less biased toward\n  dominant classes than criteria that are not aware of the sample weights,\n  like ``min_samples_leaf``.\n\n* If the samples are weighted, it will be easier to optimize the tree\n  structure using a weight-based pre-pruning criterion such as\n  ``min_weight_fraction_leaf``, which ensures that leaf nodes contain at least\n  a fraction of the overall sum of the sample weights.\n\n* All decision trees use ``np.float32`` arrays internally.\n  If training data is not in this format, a copy of the dataset will be made.\n\n* If the input matrix X is very sparse, it is recommended to convert to sparse\n  ``csc_matrix`` before calling fit and sparse ``csr_matrix`` before calling\n  predict. Training time can be orders of magnitude faster for a sparse\n  matrix input compared to a dense matrix when features have zero values in\n  most of the samples.\n\n\n.. _tree_algorithms:\n\nTree algorithms: ID3, C4.5, C5.0 and CART\n==========================================\n\nWhat are all the various decision tree algorithms and how do they differ\nfrom each other? Which one is implemented in scikit-learn?\n\n.. dropdown:: Various decision tree algorithms\n\n  ID3_ (Iterative Dichotomiser 3) was developed in 1986 by Ross Quinlan.\n  The algorithm creates a multiway tree, finding for each node (i.e. in\n  a greedy manner) the categorical feature that will yield the largest\n  information gain for categorical targets. Trees are grown to their\n  maximum size and then a pruning step is usually applied to improve the\n  ability of the tree to generalize to unseen data.\n\n  C4.5 is the successor to ID3 and removed the restriction that features\n  must be categorical by dynamically defining a discrete attribute (based\n  on numerical variables) that partitions the continuous attribute value\n  into a discrete set of intervals. C4.5 converts the trained trees\n  (i.e. 
the output of the ID3 algorithm) into sets of if-then rules.\n  The accuracy of each rule is then evaluated to determine the order\n  in which they should be applied. Pruning is done by removing a rule's\n  precondition if the accuracy of the rule improves without it.\n\n  C5.0 is Quinlan's latest version, released under a proprietary license.\n  It uses less memory and builds smaller rulesets than C4.5 while being\n  more accurate.\n\n  CART (Classification and Regression Trees) is very similar to C4.5, but\n  it differs in that it supports numerical target variables (regression) and\n  does not compute rule sets. CART constructs binary trees using the feature\n  and threshold that yield the largest information gain at each node.\n\nscikit-learn uses an optimized version of the CART algorithm; however, the\nscikit-learn implementation does not support categorical variables for now.\n\n.. _ID3: https:\/\/en.wikipedia.org\/wiki\/ID3_algorithm\n\n\n.. _tree_mathematical_formulation:\n\nMathematical formulation\n========================\n\nGiven training vectors :math:`x_i \\in R^n`, i=1,..., l and a label vector\n:math:`y \\in R^l`, a decision tree recursively partitions the feature space\nsuch that the samples with the same labels or similar target values are grouped\ntogether.\n\nLet the data at node :math:`m` be represented by :math:`Q_m` with :math:`n_m`\nsamples. For each candidate split :math:`\\theta = (j, t_m)` consisting of a\nfeature :math:`j` and threshold :math:`t_m`, partition the data into\n:math:`Q_m^{left}(\\theta)` and :math:`Q_m^{right}(\\theta)` subsets\n\n.. math::\n\n    Q_m^{left}(\\theta) = \\{(x, y) | x_j \\leq t_m\\}\n\n    Q_m^{right}(\\theta) = Q_m \\setminus Q_m^{left}(\\theta)\n\nThe quality of a candidate split of node :math:`m` is then computed using an\nimpurity function or loss function :math:`H()`, the choice of which depends on\nthe task being solved (classification or regression)\n\n.. 
math::\n\n   G(Q_m, \\theta) = \\frac{n_m^{left}}{n_m} H(Q_m^{left}(\\theta))\n   + \\frac{n_m^{right}}{n_m} H(Q_m^{right}(\\theta))\n\nSelect the parameters that minimise the impurity\n\n.. math::\n\n    \\theta^* = \\operatorname{argmin}_\\theta  G(Q_m, \\theta)\n\nRecurse for subsets :math:`Q_m^{left}(\\theta^*)` and\n:math:`Q_m^{right}(\\theta^*)` until the maximum allowable depth is reached,\n:math:`n_m < \\min_{samples}` or :math:`n_m = 1`.\n\nClassification criteria\n-----------------------\n\nIf a target is a classification outcome taking on values 0,1,...,K-1,\nfor node :math:`m`, let\n\n.. math::\n\n    p_{mk} = \\frac{1}{n_m} \\sum_{y \\in Q_m} I(y = k)\n\nbe the proportion of class k observations in node :math:`m`. If :math:`m` is a\nterminal node, `predict_proba` for this region is set to :math:`p_{mk}`.\nCommon measures of impurity are the following.\n\nGini:\n\n.. math::\n\n    H(Q_m) = \\sum_k p_{mk} (1 - p_{mk})\n\nLog Loss or Entropy:\n\n.. math::\n\n    H(Q_m) = - \\sum_k p_{mk} \\log(p_{mk})\n\n.. dropdown:: Shannon entropy\n\n  The entropy criterion computes the Shannon entropy of the possible classes. It\n  takes the class frequencies of the training data points that reached a given\n  leaf :math:`m` as their probability. Using the **Shannon entropy as tree node\n  splitting criterion is equivalent to minimizing the log loss** (also known as\n  cross-entropy and multinomial deviance) between the true labels :math:`y_i`\n  and the probabilistic predictions :math:`T_k(x_i)` of the tree model :math:`T` for class :math:`k`.\n\n  To see this, first recall that the log loss of a tree model :math:`T`\n  computed on a dataset :math:`D` is defined as follows:\n\n  .. 
math::\n\n      \\mathrm{LL}(D, T) = -\\frac{1}{n} \\sum_{(x_i, y_i) \\in D} \\sum_k I(y_i = k) \\log(T_k(x_i))\n\n  where :math:`D` is a training dataset of :math:`n` pairs :math:`(x_i, y_i)`.\n\n  In a classification tree, the predicted class probabilities within leaf nodes\n  are constant, that is: for all :math:`(x_i, y_i) \\in Q_m`, one has:\n  :math:`T_k(x_i) = p_{mk}` for each class :math:`k`.\n\n  This property makes it possible to rewrite :math:`\\mathrm{LL}(D, T)` as the\n  sum of the Shannon entropies computed for each leaf of :math:`T` weighted by\n  the number of training data points that reached each leaf:\n\n  .. math::\n\n      \\mathrm{LL}(D, T) = \\sum_{m \\in T} \\frac{n_m}{n} H(Q_m)\n\nRegression criteria\n-------------------\n\nIf the target is a continuous value, then for node :math:`m`, common\ncriteria to minimize when determining locations for future splits are Mean\nSquared Error (MSE or L2 error), Poisson deviance as well as Mean Absolute\nError (MAE or L1 error). MSE and Poisson deviance both set the predicted value\nof terminal nodes to the learned mean value :math:`\\bar{y}_m` of the node\nwhereas the MAE sets the predicted value of terminal nodes to the median\n:math:`median(y)_m`.\n\nMean Squared Error:\n\n.. math::\n\n    \\bar{y}_m = \\frac{1}{n_m} \\sum_{y \\in Q_m} y\n\n    H(Q_m) = \\frac{1}{n_m} \\sum_{y \\in Q_m} (y - \\bar{y}_m)^2\n\nMean Poisson deviance:\n\n.. math::\n\n    H(Q_m) = \\frac{2}{n_m} \\sum_{y \\in Q_m} (y \\log\\frac{y}{\\bar{y}_m}\n    - y + \\bar{y}_m)\n\nSetting `criterion=\"poisson\"` might be a good choice if your target is a count\nor a frequency (count per some unit). In any case, :math:`y \\geq 0` is a\nnecessary condition to use this criterion. Note that it fits much slower than\nthe MSE criterion. For performance reasons the actual implementation minimizes\nthe half mean poisson deviance, i.e. the mean poisson deviance divided by 2.\n\nMean Absolute Error:\n\n.. 
math::\n\n    median(y)_m = \\underset{y \\in Q_m}{\\mathrm{median}}(y)\n\n    H(Q_m) = \\frac{1}{n_m} \\sum_{y \\in Q_m} |y - median(y)_m|\n\nNote that it fits much slower than the MSE criterion.\n\n.. _tree_missing_value_support:\n\nMissing Values Support\n======================\n\n:class:`DecisionTreeClassifier` and :class:`DecisionTreeRegressor`\nhave built-in support for missing values using `splitter='best'`, where\nthe splits are determined in a greedy fashion.\n:class:`ExtraTreeClassifier` and :class:`ExtraTreeRegressor` have built-in\nsupport for missing values for `splitter='random'`, where the splits\nare determined randomly. For more details on how the splitter differs on\nnon-missing values, see the :ref:`Forest section <forest>`.\n\nThe criteria supported when there are missing values are\n`'gini'`, `'entropy'`, or `'log_loss'` for classification, or\n`'squared_error'`, `'friedman_mse'`, or `'poisson'` for regression.\n\nFirst, we will describe how :class:`DecisionTreeClassifier` and :class:`DecisionTreeRegressor`\nhandle missing values in the data.\n\nFor each potential threshold on the non-missing data, the splitter will evaluate\nthe split with all the missing values going to the left node or the right node.\n\nDecisions are made as follows:\n\n- By default when predicting, the samples with missing values are classified\n  with the class used in the split found during training::\n\n    >>> from sklearn.tree import DecisionTreeClassifier\n    >>> import numpy as np\n\n    >>> X = np.array([0, 1, 6, np.nan]).reshape(-1, 1)\n    >>> y = [0, 0, 1, 1]\n\n    >>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)\n    >>> tree.predict(X)\n    array([0, 0, 1, 1])\n\n- If the criterion evaluation is the same for both nodes,\n  then the tie for missing values at predict time is broken by going to the\n  right node. 
The splitter also checks the split where all the missing\n  values go to one child and non-missing values go to the other::\n\n    >>> from sklearn.tree import DecisionTreeClassifier\n    >>> import numpy as np\n\n    >>> X = np.array([np.nan, -1, np.nan, 1]).reshape(-1, 1)\n    >>> y = [0, 0, 1, 1]\n\n    >>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)\n\n    >>> X_test = np.array([np.nan]).reshape(-1, 1)\n    >>> tree.predict(X_test)\n    array([1])\n\n- If no missing values are seen during training for a given feature, then during\n  prediction missing values are mapped to the child with the most samples::\n\n    >>> from sklearn.tree import DecisionTreeClassifier\n    >>> import numpy as np\n\n    >>> X = np.array([0, 1, 2, 3]).reshape(-1, 1)\n    >>> y = [0, 1, 1, 1]\n\n    >>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)\n\n    >>> X_test = np.array([np.nan]).reshape(-1, 1)\n    >>> tree.predict(X_test)\n    array([1])\n\n:class:`ExtraTreeClassifier` and :class:`ExtraTreeRegressor` handle missing values\nin a slightly different way. When splitting a node, a random threshold will be chosen\nto split the non-missing values on. Then the non-missing values will be sent to the\nleft and right child based on the randomly selected threshold, while the missing\nvalues will also be randomly sent to the left or right child. This is repeated for\nevery feature considered at each split. The best split among these is chosen.\n\nDuring prediction, the treatment of missing values is the same as that of the\ndecision tree:\n\n- By default when predicting, the samples with missing values are classified\n  with the class used in the split found during training.\n\n- If no missing values are seen during training for a given feature, then during\n  prediction missing values are mapped to the child with the most samples.\n\n.. 
_minimal_cost_complexity_pruning:\n\nMinimal Cost-Complexity Pruning\n===============================\n\nMinimal cost-complexity pruning is an algorithm used to prune a tree to avoid\nover-fitting, described in Chapter 3 of [BRE]_. This algorithm is parameterized\nby :math:`\\alpha\\ge0`, known as the complexity parameter. The complexity\nparameter is used to define the cost-complexity measure :math:`R_\\alpha(T)` of\na given tree :math:`T`:\n\n.. math::\n\n  R_\\alpha(T) = R(T) + \\alpha|\\widetilde{T}|\n\nwhere :math:`|\\widetilde{T}|` is the number of terminal nodes in :math:`T` and :math:`R(T)`\nis traditionally defined as the total misclassification rate of the terminal\nnodes. Alternatively, scikit-learn uses the total sample weighted impurity of\nthe terminal nodes for :math:`R(T)`. As shown above, the impurity of a node\ndepends on the criterion. Minimal cost-complexity pruning finds the subtree of\n:math:`T` that minimizes :math:`R_\\alpha(T)`.\n\nThe cost-complexity measure of a single node is\n:math:`R_\\alpha(t)=R(t)+\\alpha`. The branch, :math:`T_t`, is defined to be a\ntree where node :math:`t` is its root. In general, the impurity of a node\nis greater than the sum of impurities of its terminal nodes,\n:math:`R(T_t)<R(t)`. However, the cost-complexity measure of a node,\n:math:`t`, and its branch, :math:`T_t`, can be equal depending on\n:math:`\\alpha`. We define the effective :math:`\\alpha` of a node to be the\nvalue where they are equal, :math:`R_\\alpha(T_t)=R_\\alpha(t)` or\n:math:`\\alpha_{eff}(t)=\\frac{R(t)-R(T_t)}{|\\widetilde{T}_t|-1}`. A non-terminal node\nwith the smallest value of :math:`\\alpha_{eff}` is the weakest link and will\nbe pruned. This process stops when the pruned tree's minimal\n:math:`\\alpha_{eff}` is greater than the ``ccp_alpha`` parameter.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_tree_plot_cost_complexity_pruning.py`\n\n.. rubric:: References\n\n.. [BRE] L. Breiman, J. Friedman, R. Olshen, and C. Stone. 
Classification\n  and Regression Trees. Wadsworth, Belmont, CA, 1984.\n\n* https:\/\/en.wikipedia.org\/wiki\/Decision_tree_learning\n\n* https:\/\/en.wikipedia.org\/wiki\/Predictive_analytics\n\n* J.R. Quinlan. C4. 5: programs for machine learning. Morgan\n  Kaufmann, 1993.\n\n* T. Hastie, R. Tibshirani and J. Friedman. Elements of Statistical\n  Learning, Springer, 2009.","site":"scikit-learn","answers_cleaned":"    tree                  Decision Trees                    currentmodule   sklearn tree    Decision Trees  DTs    are a non parametric supervised learning method used for  ref  classification  tree classification   and  ref  regression  tree regression    The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features  A tree can be seen as a piecewise constant approximation   For instance  in the example below  decision trees learn from data to approximate a sine curve with a set of if then else decision rules  The deeper the tree  the more complex the decision rules and the fitter the model      figure      auto examples tree images sphx glr plot tree regression 001 png     target     auto examples tree plot tree regression html     scale  75     align  center  Some advantages of decision trees are     Simple to understand and to interpret  Trees can be visualized     Requires little data preparation  Other techniques often require data   normalization  dummy variables need to be created and blank values to   be removed  Some tree and algorithm combinations support    ref  missing values  tree missing value support       The cost of using the tree  i e   predicting data  is logarithmic in the   number of data points used to train the tree     Able to handle both numerical and categorical data  However  the scikit learn   implementation does not support categorical variables for now  Other   techniques are usually specialized in analyzing datasets that have only one type   of 
variable  See  ref  algorithms  tree algorithms   for more   information     Able to handle multi output problems     Uses a white box model  If a given situation is observable in a model    the explanation for the condition is easily explained by boolean logic    By contrast  in a black box model  e g   in an artificial neural   network   results may be more difficult to interpret     Possible to validate a model using statistical tests  That makes it   possible to account for the reliability of the model     Performs well even if its assumptions are somewhat violated by   the true model from which the data were generated    The disadvantages of decision trees include     Decision tree learners can create over complex trees that do not   generalize the data well  This is called overfitting  Mechanisms   such as pruning  setting the minimum number of samples required   at a leaf node or setting the maximum depth of the tree are   necessary to avoid this problem     Decision trees can be unstable because small variations in the   data might result in a completely different tree being generated    This problem is mitigated by using decision trees within an   ensemble     Predictions of decision trees are neither smooth nor continuous  but   piecewise constant approximations as seen in the above figure  Therefore    they are not good at extrapolation     The problem of learning an optimal decision tree is known to be   NP complete under several aspects of optimality and even for simple   concepts  Consequently  practical decision tree learning algorithms   are based on heuristic algorithms such as the greedy algorithm where   locally optimal decisions are made at each node  Such algorithms   cannot guarantee to return the globally optimal decision tree   This   can be mitigated by training multiple trees in an ensemble learner    where the features and samples are randomly sampled with replacement     There are concepts that are hard to learn because decision trees   
do not express them easily  such as XOR  parity or multiplexer problems     Decision tree learners create biased trees if some classes dominate    It is therefore recommended to balance the dataset prior to fitting   with the decision tree        tree classification   Classification                  class  DecisionTreeClassifier  is a class capable of performing multi class classification on a dataset   As with other classifiers   class  DecisionTreeClassifier  takes as input two arrays  an array X  sparse or dense  of shape    n samples  n features    holding the training samples  and an array Y of integer values  shape    n samples      holding the class labels for the training samples            from sklearn import tree         X     0  0    1  1           Y    0  1          clf   tree DecisionTreeClassifier           clf   clf fit X  Y   After being fitted  the model can then be used to predict the class of samples            clf predict   2   2         array  1    In case that there are multiple classes with the same and highest probability  the classifier will predict the class with the lowest index amongst those classes   As an alternative to outputting a specific class  the probability of each class can be predicted  which is the fraction of training samples of the class in a leaf            clf predict proba   2   2         array   0   1       class  DecisionTreeClassifier  is capable of both binary  where the labels are   1  1   classification and multiclass  where the labels are  0       K 1   classification   Using the Iris dataset  we can construct a tree as follows            from sklearn datasets import load iris         from sklearn import tree         iris   load iris           X  y   iris data  iris target         clf   tree DecisionTreeClassifier           clf   clf fit X  y   Once trained  you can plot the tree with the  func  plot tree  function             tree plot tree clf                figure      auto examples tree images sphx glr plot 
iris dtc 002 png     target     auto examples tree plot iris dtc html     scale  75     align  center     dropdown   Alternative ways to export trees    We can also export the tree in  Graphviz    https   www graphviz org     format using the  func  export graphviz    exporter  If you use the  conda  https   conda io    package manager  the graphviz binaries   and the python package can be installed with  conda install python graphviz      Alternatively binaries for graphviz can be downloaded from the graphviz project homepage    and the Python wrapper installed from pypi with  pip install graphviz      Below is an example graphviz export of the above tree trained on the entire   iris dataset  the results are saved in an output file  iris pdf                import graphviz   doctest   SKIP           dot data   tree export graphviz clf  out file None    doctest   SKIP           graph   graphviz Source dot data    doctest   SKIP           graph render  iris     doctest   SKIP    The  func  export graphviz  exporter also supports a variety of aesthetic   options  including coloring nodes by their class  or value for regression  and   using explicit variable and class names if desired  Jupyter notebooks also   render these plots inline automatically              dot data   tree export graphviz clf  out file None    doctest   SKIP                                feature names iris feature names     doctest   SKIP                                class names iris target names     doctest   SKIP                                filled True  rounded True     doctest   SKIP                                special characters True     doctest   SKIP           graph   graphviz Source dot data     doctest   SKIP           graph   doctest   SKIP       only   html           figure      images iris svg          align  center       only   latex           figure      images iris pdf          align  center       figure      auto examples tree images sphx glr plot iris dtc 001 png      
target     auto examples tree plot iris dtc html      align  center      scale  75    Alternatively  the tree can also be exported in textual format with the   function  func  export text   This method doesn t require the installation   of external libraries and is more compact             from sklearn datasets import load iris           from sklearn tree import DecisionTreeClassifier           from sklearn tree import export text           iris   load iris             decision tree   DecisionTreeClassifier random state 0  max depth 2            decision tree   decision tree fit iris data  iris target            r   export text decision tree  feature names iris  feature names              print r             petal width  cm     0 80                class  0            petal width  cm     0 80                petal width  cm     1 75                    class  1                petal width  cm     1 75                    class  2        BLANKLINE      rubric   Examples     ref  sphx glr auto examples tree plot iris dtc py     ref  sphx glr auto examples tree plot unveil tree structure py       tree regression   Regression                figure      auto examples tree images sphx glr plot tree regression 001 png     target     auto examples tree plot tree regression html     scale  75     align  center  Decision trees can also be applied to regression problems  using the  class  DecisionTreeRegressor  class   As in the classification setting  the fit method will take as argument arrays X and y  only that in this case y is expected to have floating point values instead of integer values            from sklearn import tree         X     0  0    2  2           y    0 5  2 5          clf   tree DecisionTreeRegressor           clf   clf fit X  y          clf predict   1  1        array  0 5       rubric   Examples     ref  sphx glr auto examples tree plot tree regression py        tree multioutput   Multi output problems                        A multi output problem is a 
supervised learning problem with several outputs to predict  that is when Y is a 2d array of shape    n samples  n outputs      When there is no correlation between the outputs  a very simple way to solve this kind of problem is to build n independent models  i e  one for each output  and then to use those models to independently predict each one of the n outputs  However  because it is likely that the output values related to the same input are themselves correlated  an often better way is to build a single model capable of predicting simultaneously all n outputs  First  it requires lower training time since only a single estimator is built  Second  the generalization accuracy of the resulting estimator may often be increased   With regard to decision trees  this strategy can readily be used to support multi output problems  This requires the following changes     Store n output values in leaves  instead of 1    Use splitting criteria that compute the average reduction across all   n outputs   This module offers support for multi output problems by implementing this strategy in both  class  DecisionTreeClassifier  and  class  DecisionTreeRegressor   If a decision tree is fit on an output array Y of shape    n samples  n outputs    then the resulting estimator will     Output n output values upon   predict       Output a list of n output arrays of class probabilities upon     predict proba     The use of multi output trees for regression is demonstrated in  ref  sphx glr auto examples tree plot tree regression py   In this example  the input X is a single real value and the outputs Y are the sine and cosine of X      figure      auto examples tree images sphx glr plot tree regression 002 png     target     auto examples tree plot tree regression html     scale  75     align  center  The use of multi output trees for classification is demonstrated in  ref  sphx glr auto examples miscellaneous plot multioutput face completion py   In this example  the inputs X are 
the pixels of the upper half of faces and the outputs Y are the pixels of the lower half of those faces      figure      auto examples miscellaneous images sphx glr plot multioutput face completion 001 png     target     auto examples miscellaneous plot multioutput face completion html     scale  75     align  center     rubric   Examples     ref  sphx glr auto examples miscellaneous plot multioutput face completion py      rubric   References    M  Dumont et al    Fast multi class image annotation with random subwindows   and multiple output randomized trees    http   www montefiore ulg ac be services stochastic pubs 2009 DMWG09 dumont visapp09 shortpaper pdf       International Conference on Computer Vision Theory and Applications 2009      tree complexity   Complexity             In general  the run time cost to construct a balanced binary tree is  math  O n  samples n  features  log n  samples     and query time  math  O  log n  samples       Although the tree construction algorithm attempts to generate balanced trees  they will not always be balanced   Assuming that the subtrees remain approximately balanced  the cost at each node consists of searching through  math  O n  features    to find the feature that offers the largest reduction in the impurity criterion  e g  log loss  which is equivalent to an information gain   This has a cost of  math  O n  features n  samples  log n  samples     at each node  leading to a total cost over the entire trees  by summing the cost at each node  of  math  O n  features n  samples   2  log n  samples        Tips on practical use                          Decision trees tend to overfit on data with a large number of features    Getting the right ratio of samples to number of features is important  since   a tree with few samples in high dimensional space is very likely to overfit     Consider performing  dimensionality reduction   ref  PCA  PCA       ref  ICA  ICA    or  ref  feature selection   beforehand to   give your 
tree a better chance of finding features that are discriminative      ref  sphx glr auto examples tree plot unveil tree structure py  will help   in gaining more insights about how the decision tree makes predictions  which is   important for understanding the important features in the data     Visualize your tree as you are training by using the   export     function   Use   max depth 3   as an initial tree depth to get a feel for   how the tree is fitting to your data  and then increase the depth     Remember that the number of samples required to populate the tree doubles   for each additional level the tree grows to   Use   max depth   to control   the size of the tree to prevent overfitting     Use   min samples split   or   min samples leaf   to ensure that multiple   samples inform every decision in the tree  by controlling which splits will   be considered  A very small number will usually mean the tree will overfit    whereas a large number will prevent the tree from learning the data  Try     min samples leaf 5   as an initial value  If the sample size varies   greatly  a float number can be used as percentage in these two parameters    While   min samples split   can create arbitrarily small leaves      min samples leaf   guarantees that each leaf has a minimum size  avoiding   low variance  over fit leaf nodes in regression problems   For   classification with few classes    min samples leaf 1   is often the best   choice     Note that   min samples split   considers samples directly and independent of     sample weight    if provided  e g  a node with m weighted samples is still   treated as having exactly m samples   Consider   min weight fraction leaf   or     min impurity decrease   if accounting for sample weights is required at splits     Balance your dataset before training to prevent the tree from being biased   toward the classes that are dominant  Class balancing can be done by   sampling an equal number of samples from each class  or 
preferably by   normalizing the sum of the sample weights    sample weight    for each   class to the same value  Also note that weight based pre pruning criteria    such as   min weight fraction leaf    will then be less biased toward   dominant classes than criteria that are not aware of the sample weights    like   min samples leaf       If the samples are weighted  it will be easier to optimize the tree   structure using weight based pre pruning criterion such as     min weight fraction leaf    which ensure that leaf nodes contain at least   a fraction of the overall sum of the sample weights     All decision trees use   np float32   arrays internally    If training data is not in this format  a copy of the dataset will be made     If the input matrix X is very sparse  it is recommended to convert to sparse     csc matrix   before calling fit and sparse   csr matrix   before calling   predict  Training time can be orders of magnitude faster for a sparse   matrix input compared to a dense matrix when features have zero values in   most of the samples        tree algorithms   Tree algorithms  ID3  C4 5  C5 0 and CART                                             What are all the various decision tree algorithms and how do they differ from each other  Which one is implemented in scikit learn      dropdown   Various decision tree algorithms    ID3   Iterative Dichotomiser 3  was developed in 1986 by Ross Quinlan    The algorithm creates a multiway tree  finding for each node  i e  in   a greedy manner  the categorical feature that will yield the largest   information gain for categorical targets  Trees are grown to their   maximum size and then a pruning step is usually applied to improve the   ability of the tree to generalize to unseen data     C4 5 is the successor to ID3 and removed the restriction that features   must be categorical by dynamically defining a discrete attribute  based   on numerical variables  that partitions the continuous attribute value   into 
a discrete set of intervals  C4 5 converts the trained trees    i e  the output of the ID3 algorithm  into sets of if then rules    The accuracy of each rule is then evaluated to determine the order   in which they should be applied  Pruning is done by removing a rule s   precondition if the accuracy of the rule improves without it     C5 0 is Quinlan s latest version release under a proprietary license    It uses less memory and builds smaller rulesets than C4 5 while being   more accurate     CART  Classification and Regression Trees  is very similar to C4 5  but   it differs in that it supports numerical target variables  regression  and   does not compute rule sets  CART constructs binary trees using the feature   and threshold that yield the largest information gain at each node   scikit learn uses an optimized version of the CART algorithm  however  the scikit learn implementation does not support categorical variables for now       ID3  https   en wikipedia org wiki ID3 algorithm       tree mathematical formulation   Mathematical formulation                           Given training vectors  math  x i  in R n   i 1      l and a label vector  math  y  in R l   a decision tree recursively partitions the feature space such that the samples with the same labels or similar target values are grouped together   Let the data at node  math  m  be represented by  math  Q m  with  math  n m  samples  For each candidate split  math   theta    j  t m   consisting of a feature  math  j  and threshold  math  t m   partition the data into  math  Q m  left   theta   and  math  Q m  right   theta   subsets     math        Q m  left   theta       x  y    x j  leq t m        Q m  right   theta    Q m  setminus Q m  left   theta   The quality of a candidate split of node  math  m  is then computed using an impurity function or loss function  math  H     the choice of which depends on the task being solved  classification or regression      math       G Q m   theta     frac n m  
left   n m  H Q m  left   theta         frac n m  right   n m  H Q m  right   theta    Select the parameters that minimises the impurity     math         theta      operatorname argmin   theta  G Q m   theta   Recurse for subsets  math  Q m  left   theta     and  math  Q m  right   theta     until the maximum allowable depth is reached   math  n m    min  samples   or  math  n m   1    Classification criteria                          If a target is a classification outcome taking on values 0 1     K 1  for node  math  m   let     math        p  mk     frac 1  n m   sum  y  in Q m  I y   k   be the proportion of class k observations in node  math  m   If  math  m  is a terminal node   predict proba  for this region is set to  math  p  mk    Common measures of impurity are the following   Gini      math        H Q m     sum k p  mk   1   p  mk    Log Loss or Entropy      math        H Q m       sum k p  mk   log p  mk       dropdown   Shannon entropy    The entropy criterion computes the Shannon entropy of the possible classes  It   takes the class frequencies of the training data points that reached a given   leaf  math  m  as their probability  Using the   Shannon entropy as tree node   splitting criterion is equivalent to minimizing the log loss    also known as   cross entropy and multinomial deviance  between the true labels  math  y i    and the probabilistic predictions  math  T k x i   of the tree model  math  T  for class  math  k      To see this  first recall that the log loss of a tree model  math  T    computed on a dataset  math  D  is defined as follows        math           mathrm LL  D  T      frac 1  n   sum   x i  y i   in D   sum k I y i   k   log T k x i      where  math  D  is a training dataset of  math  n  pairs  math   x i  y i       In a classification tree  the predicted class probabilities within leaf nodes   are constant  that is  for all  math   x i  y i   in Q m   one has     math  T k x i    p  mk   for each class  math  k      This 
property makes it possible to rewrite  math   mathrm LL  D  T   as the   sum of the Shannon entropies computed for each leaf of  math  T  weighted by   the number of training data points that reached each leaf        math           mathrm LL  D  T     sum  m  in T   frac n m  n  H Q m   Regression criteria                      If the target is a continuous value  then for node  math  m   common criteria to minimize as for determining locations for future splits are Mean Squared Error  MSE or L2 error   Poisson deviance as well as Mean Absolute Error  MAE or L1 error   MSE and Poisson deviance both set the predicted value of terminal nodes to the learned mean value  math   bar y  m  of the node whereas the MAE sets the predicted value of terminal nodes to the median  math  median y  m    Mean Squared Error      math         bar y  m    frac 1  n m   sum  y  in Q m  y      H Q m     frac 1  n m   sum  y  in Q m   y    bar y  m  2  Mean Poisson deviance      math        H Q m     frac 2  n m   sum  y  in Q m   y  log frac y   bar y  m        y    bar y  m   Setting  criterion  poisson   might be a good choice if your target is a count or a frequency  count per some unit   In any case   math  y    0  is a necessary condition to use this criterion  Note that it fits much slower than the MSE criterion  For performance reasons the actual implementation minimizes the half mean poisson deviance  i e  the mean poisson deviance divided by 2   Mean Absolute Error      math        median y  m    underset y  in Q m   mathrm median   y       H Q m     frac 1  n m   sum  y  in Q m   y   median y  m   Note that it fits much slower than the MSE criterion       tree missing value support   Missing Values Support                          class  DecisionTreeClassifier    class  DecisionTreeRegressor  have built in support for missing values using  splitter  best    where the splits are determined in a greedy fashion   class  ExtraTreeClassifier   and  class  ExtraTreeRegressor  have 
built in support for missing values for  splitter  random    where the splits are determined randomly  For more details on how the splitter differs on non missing values  see the  ref  Forest section  forest     The criterion supported when there are missing values are   gini      entropy    or   log loss    for classification or   squared error      friedman mse    or   poisson   for regression   First we will describe how  class  DecisionTreeClassifier    class  DecisionTreeRegressor  handle missing values in the data   For each potential threshold on the non missing data  the splitter will evaluate the split with all the missing values going to the left node or the right node   Decisions are made as follows     By default when predicting  the samples with missing values are classified   with the class used in the split found during training            from sklearn tree import DecisionTreeClassifier         import numpy as np          X   np array  0  1  6  np nan   reshape  1  1          y    0  0  1  1           tree   DecisionTreeClassifier random state 0  fit X  y          tree predict X      array  0  0  1  1      If the criterion evaluation is the same for both nodes    then the tie for missing value at predict time is broken by going to the   right node  The splitter also checks the split where all the missing   values go to one child and non missing values go to the other            from sklearn tree import DecisionTreeClassifier         import numpy as np          X   np array  np nan   1  np nan  1   reshape  1  1          y    0  0  1  1           tree   DecisionTreeClassifier random state 0  fit X  y           X test   np array  np nan   reshape  1  1          tree predict X test      array  1      If no missing values are seen during training for a given feature  then during   prediction missing values are mapped to the child with the most samples            from sklearn tree import DecisionTreeClassifier         import numpy as np          X   np 
>>> X = np.array([0, 1, 2, 3]).reshape(-1, 1)\n      >>> y = [0, 1, 1, 1]\n      >>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)\n\n      >>> X_test = np.array([np.nan]).reshape(-1, 1)\n      >>> tree.predict(X_test)\n      array([1])\n\n:class:`ExtraTreeClassifier` and :class:`ExtraTreeRegressor` handle missing values\nin a slightly different way. When splitting a node, a random threshold will be chosen\nto split the non-missing values on. Then the non-missing values will be sent to the\nleft and right child based on the randomly selected threshold, while the missing\nvalues will also be randomly sent to the left or right child. This is repeated for\nevery feature considered at each split. The best split among these is chosen.\n\nDuring prediction, the treatment of missing values is the same as that of the\ndecision tree:\n\n- By default when predicting, the samples with missing values are classified\n  with the class used in the split found during training.\n\n- If no missing values are seen during training for a given feature, then during\n  prediction missing values are mapped to the child with the most samples.\n\n.. _minimal_cost_complexity_pruning:\n\nMinimal Cost-Complexity Pruning\n-------------------------------\n\nMinimal cost-complexity pruning is an algorithm used to prune a tree to avoid\nover-fitting, described in Chapter 3 of [BRE]_. This algorithm is parameterized by\n:math:`\\alpha\\ge0` known as the complexity parameter. The complexity parameter is\nused to define the cost-complexity measure, :math:`R_\\alpha(T)` of a given tree\n:math:`T`:\n\n.. math::\n\n    R_\\alpha(T) = R(T) + \\alpha|\\widetilde{T}|\n\nwhere :math:`|\\widetilde{T}|` is the number of terminal nodes in :math:`T` and\n:math:`R(T)` is traditionally defined as the total misclassification rate of the\nterminal nodes. Alternatively, scikit-learn uses the total sample weighted impurity\nof the terminal nodes for :math:`R(T)`. As shown above, the impurity of a node\ndepends on the criterion. Minimal cost-complexity pruning finds the subtree of\n:math:`T` that minimizes :math:`R_\\alpha(T)`.\n\nThe cost complexity measure of a single node is :math:`R_\\alpha(t)=R(t)+\\alpha`.\nThe branch, :math:`T_t`, is defined to be a tree where node :math:`t` is its root.\nIn general, the impurity of a node is greater than the sum of impurities of its\nterminal nodes, :math:`R(T_t)<R(t)`. However, the cost complexity measure of a node,\n:math:`t`, and its branch, :math:`T_t`, can be equal depending on :math:`\\alpha`. We\ndefine the effective :math:`\\alpha` of a node to be the value where they are equal,\n:math:`R_\\alpha(T_t)=R_\\alpha(t)` or\n:math:`\\alpha_{eff}(t)=\\frac{R(t)-R(T_t)}{|T|-1}`. A non-terminal node with the\nsmallest value of :math:`\\alpha_{eff}` is the weakest link and will be pruned. This\nprocess stops when the pruned tree's minimal :math:`\\alpha_{eff}` is greater than\nthe ``ccp_alpha`` parameter.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_tree_plot_cost_complexity_pruning.py`\n\n.. rubric:: References\n\n.. [BRE] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification\n   and Regression Trees. Wadsworth, Belmont, CA, 1984.\n\n* https:\/\/en.wikipedia.org\/wiki\/Decision_tree_learning\n\n* https:\/\/en.wikipedia.org\/wiki\/Predictive_analytics\n\n* J.R. Quinlan. C4.5: programs for machine learning. Morgan Kaufmann, 1993.\n\n* T. Hastie, R. Tibshirani and J. Friedman. Elements of Statistical\n  Learning, Springer, 2009."}
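The pruning procedure described above can be sketched in a few lines. This is an illustrative sketch, not part of the original documentation; the iris dataset is an arbitrary choice for demonstration. It uses `DecisionTreeClassifier.cost_complexity_pruning_path` to obtain the effective alphas and then refits with a non-zero `ccp_alpha` to confirm the tree shrinks:

```python
# Illustrative sketch: minimal cost-complexity pruning via ccp_alpha.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Effective alphas of the pruned subtrees, plus the total leaf impurity of
# each subtree; the alphas are returned in increasing order.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
ccp_alphas, impurities = path.ccp_alphas, path.impurities

# Refit with a non-trivial alpha: nodes whose effective alpha is below
# ccp_alpha are pruned, so the tree shrinks relative to the unpruned one.
unpruned = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(
    random_state=0, ccp_alpha=ccp_alphas[-2]
).fit(X, y)
assert pruned.tree_.node_count < unpruned.tree_.node_count
```

With `ccp_alpha=0` (the default) no pruning happens; increasing it trades training accuracy for a smaller, more regularized tree.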
{"questions":"scikit-learn sklearn Metrics and scoring quantifying the quality of predictions whichscoringfunction modelevaluation","answers":".. currentmodule:: sklearn\n\n.. _model_evaluation:\n\n===========================================================\nMetrics and scoring: quantifying the quality of predictions\n===========================================================\n\n.. _which_scoring_function:\n\nWhich scoring function should I use?\n====================================\n\nBefore we take a closer look into the details of the many scores and\n:term:`evaluation metrics`, we want to give some guidance, inspired by statistical\ndecision theory, on the choice of **scoring functions** for **supervised learning**,\nsee [Gneiting2009]_:\n\n- *Which scoring function should I use?*\n- *Which scoring function is a good one for my task?*\n\nIn a nutshell, if the scoring function is given, e.g. in a kaggle competition\nor in a business context, use that one.\nIf you are free to choose, it starts by considering the ultimate goal and application\nof the prediction. 
It is useful to distinguish two steps:\n\n* Predicting\n* Decision making\n\n**Predicting:**\nUsually, the response variable :math:`Y` is a random variable, in the sense that there\nis *no deterministic* function :math:`Y = g(X)` of the features :math:`X`.\nInstead, there is a probability distribution :math:`F` of :math:`Y`.\nOne can aim to predict the whole distribution, known as *probabilistic prediction*,\nor---more the focus of scikit-learn---issue a *point prediction* (or point forecast)\nby choosing a property or functional of that distribution :math:`F`.\nTypical examples are the mean (expected value), the median or a quantile of the\nresponse variable :math:`Y` (conditionally on :math:`X`).\n\nOnce that is settled, use a **strictly consistent** scoring function for that\n(target) functional, see [Gneiting2009]_.\nThis means using a scoring function that is aligned with *measuring the distance\nbetween predictions* `y_pred` *and the true target functional using observations of*\n:math:`Y`, i.e. `y_true`.\nFor classification, **strictly proper scoring rules** (see the\n`Wikipedia entry for Scoring rule <https:\/\/en.wikipedia.org\/wiki\/Scoring_rule>`_\nand [Gneiting2007]_) coincide with strictly consistent scoring functions.\nThe table further below provides examples.\nOne could say that consistent scoring functions act as *truth serum* in that\nthey guarantee *\"that truth telling [. . .] 
is an optimal strategy in\nexpectation\"* [Gneiting2014]_.\n\nOnce a strictly consistent scoring function is chosen, it is best used for both: as\nloss function for model training and as metric\/score in model evaluation and model\ncomparison.\n\nNote that for regressors, the prediction is done with :term:`predict` while for\nclassifiers it is usually :term:`predict_proba`.\n\n**Decision Making:**\nThe most common decisions are done on binary classification tasks, where the result of\n:term:`predict_proba` is turned into a single outcome, e.g., from the predicted\nprobability of rain a decision is made on how to act (whether to take mitigating\nmeasures like an umbrella or not).\nFor classifiers, this is what :term:`predict` returns.\nSee also :ref:`TunedThresholdClassifierCV`.\nThere are many scoring functions which measure different aspects of such a\ndecision, most of them are covered with or derived from the\n:func:`metrics.confusion_matrix`.\n\n**List of strictly consistent scoring functions:**\nHere, we list some of the most relevant statistical functionals and corresponding\nstrictly consistent scoring functions for tasks in practice. 
Note that the list is not\ncomplete and that there are more of them.\nFor further criteria on how to select a specific one, see [Fissler2022]_.\n\n==================  ===================================================  ====================  =================================\nfunctional          scoring or loss function                             response `y`          prediction\n==================  ===================================================  ====================  =================================\n**Classification**\nmean                :ref:`Brier score <brier_score_loss>` :sup:`1`       multi-class           ``predict_proba``\nmean                :ref:`log loss <log_loss>`                           multi-class           ``predict_proba``\nmode                :ref:`zero-one loss <zero_one_loss>` :sup:`2`        multi-class           ``predict``, categorical\n**Regression**\nmean                :ref:`squared error <mean_squared_error>` :sup:`3`   all reals             ``predict``, all reals\nmean                :ref:`Poisson deviance <mean_tweedie_deviance>`      non-negative          ``predict``, strictly positive\nmean                :ref:`Gamma deviance <mean_tweedie_deviance>`        strictly positive     ``predict``, strictly positive\nmean                :ref:`Tweedie deviance <mean_tweedie_deviance>`      depends on ``power``  ``predict``, depends on ``power``\nmedian              :ref:`absolute error <mean_absolute_error>`          all reals             ``predict``, all reals\nquantile            :ref:`pinball loss <pinball_loss>`                   all reals             ``predict``, all reals\nmode                no consistent one exists                             reals\n==================  ===================================================  ====================  =================================\n\n:sup:`1` The Brier score is just a different name for the squared error in case of\nclassification.\n\n:sup:`2` The zero-one loss is only 
consistent but not strictly consistent for the mode.\nThe zero-one loss is equivalent to one minus the accuracy score, meaning it gives\ndifferent score values but the same ranking.\n\n:sup:`3` R\u00b2 gives the same ranking as squared error.\n\n**Fictitious Example:**\nLet's make the above arguments more tangible. Consider a setting in network reliability\nengineering, such as maintaining stable internet or Wi-Fi connections.\nAs the provider of the network, you have access to the dataset of log entries of network\nconnections containing network load over time and many interesting features.\nYour goal is to improve the reliability of the connections.\nIn fact, you promise your customers that on at least 99% of all days there are no\nconnection discontinuities larger than 1 minute.\nTherefore, you are interested in a prediction of the 99% quantile (of longest\nconnection interruption duration per day) in order to know in advance when to add\nmore bandwidth and thereby satisfy your customers. So the *target functional* is the\n99% quantile. From the table above, you choose the pinball loss as scoring function\n(fair enough, not much choice given), for model training (e.g.\n`HistGradientBoostingRegressor(loss=\"quantile\", quantile=0.99)`) as well as model\nevaluation (`mean_pinball_loss(..., alpha=0.99)` - we apologize for the different\nargument names, `quantile` and `alpha`) be it in grid search for finding\nhyperparameters or in comparing to other models like\n`QuantileRegressor(quantile=0.99)`.\n\n.. rubric:: References\n\n.. [Gneiting2007] T. Gneiting and A. E. Raftery. :doi:`Strictly Proper\n    Scoring Rules, Prediction, and Estimation <10.1198\/016214506000001437>`\n    In: Journal of the American Statistical Association 102 (2007),\n    pp. 359\u2013378.\n    `link to pdf <https:\/\/www.stat.washington.edu\/people\/raftery\/Research\/PDF\/Gneiting2007jasa.pdf>`_\n\n.. [Gneiting2009] T. Gneiting. 
:arxiv:`Making and Evaluating Point Forecasts\n    <0912.0902>`\n    Journal of the American Statistical Association 106 (2009): 746-762.\n\n.. [Gneiting2014] T. Gneiting and M. Katzfuss. :doi:`Probabilistic Forecasting\n    <10.1146\/annurev-statistics-062713-085831>`. In: Annual Review of Statistics and Its Application 1.1 (2014), pp. 125\u2013151.\n\n.. [Fissler2022] T. Fissler, C. Lorentzen and M. Mayer. :arxiv:`Model\n    Comparison and Calibration Assessment: User Guide for Consistent Scoring\n    Functions in Machine Learning and Actuarial Practice. <2202.12780>`\n\n.. _scoring_api_overview:\n\nScoring API overview\n====================\n\nThere are 3 different APIs for evaluating the quality of a model's\npredictions:\n\n* **Estimator score method**: Estimators have a ``score`` method providing a\n  default evaluation criterion for the problem they are designed to solve.\n  Most commonly this is :ref:`accuracy <accuracy_score>` for classifiers and the\n  :ref:`coefficient of determination <r2_score>` (:math:`R^2`) for regressors.\n  Details for each estimator can be found in its documentation.\n\n* **Scoring parameter**: Model-evaluation tools that use\n  :ref:`cross-validation <cross_validation>` (such as\n  :class:`model_selection.GridSearchCV`, :func:`model_selection.validation_curve` and\n  :class:`linear_model.LogisticRegressionCV`) rely on an internal *scoring* strategy.\n  This can be specified using the `scoring` parameter of that tool and is discussed\n  in the section :ref:`scoring_parameter`.\n\n* **Metric functions**: The :mod:`sklearn.metrics` module implements functions\n  assessing prediction error for specific purposes. These metrics are detailed\n  in sections on :ref:`classification_metrics`,\n  :ref:`multilabel_ranking_metrics`, :ref:`regression_metrics` and\n  :ref:`clustering_metrics`.\n\nFinally, :ref:`dummy_estimators` are useful to get a baseline\nvalue of those metrics for random predictions.\n\n.. 
seealso::\n\n   For \"pairwise\" metrics, between *samples* and not estimators or\n   predictions, see the :ref:`metrics` section.\n\n.. _scoring_parameter:\n\nThe ``scoring`` parameter: defining model evaluation rules\n==========================================================\n\nModel selection and evaluation tools that internally use\n:ref:`cross-validation <cross_validation>` (such as\n:class:`model_selection.GridSearchCV`, :func:`model_selection.validation_curve` and\n:class:`linear_model.LogisticRegressionCV`) take a ``scoring`` parameter that\ncontrols what metric they apply to the estimators evaluated.\n\nThey can be specified in several ways:\n\n* `None`: the estimator's default evaluation criterion (i.e., the metric used in the\n  estimator's `score` method) is used.\n* :ref:`String name <scoring_string_names>`: common metrics can be passed via a string\n  name.\n* :ref:`Callable <scoring_callable>`: more complex metrics can be passed via a custom\n  metric callable (e.g., function).\n\nSome tools do also accept multiple metric evaluation. See :ref:`multimetric_scoring`\nfor details.\n\n.. _scoring_string_names:\n\nString name scorers\n-------------------\n\nFor the most common use cases, you can designate a scorer object with the\n``scoring`` parameter via a string name; the table below shows all possible values.\nAll scorer objects follow the convention that **higher return values are better\nthan lower return values**. 
Thus metrics which measure the distance between\nthe model and the data, like :func:`metrics.mean_squared_error`, are\navailable as 'neg_mean_squared_error' which return the negated value\nof the metric.\n\n====================================   ==============================================     ==================================\nScoring string name                    Function                                           Comment\n====================================   ==============================================     ==================================\n**Classification**\n'accuracy'                             :func:`metrics.accuracy_score`\n'balanced_accuracy'                    :func:`metrics.balanced_accuracy_score`\n'top_k_accuracy'                       :func:`metrics.top_k_accuracy_score`\n'average_precision'                    :func:`metrics.average_precision_score`\n'neg_brier_score'                      :func:`metrics.brier_score_loss`\n'f1'                                   :func:`metrics.f1_score`                           for binary targets\n'f1_micro'                             :func:`metrics.f1_score`                           micro-averaged\n'f1_macro'                             :func:`metrics.f1_score`                           macro-averaged\n'f1_weighted'                          :func:`metrics.f1_score`                           weighted average\n'f1_samples'                           :func:`metrics.f1_score`                           by multilabel sample\n'neg_log_loss'                         :func:`metrics.log_loss`                           requires ``predict_proba`` support\n'precision' etc.                       :func:`metrics.precision_score`                    suffixes apply as with 'f1'\n'recall' etc.                          :func:`metrics.recall_score`                       suffixes apply as with 'f1'\n'jaccard' etc.                         
:func:`metrics.jaccard_score`                      suffixes apply as with 'f1'\n'roc_auc'                              :func:`metrics.roc_auc_score`\n'roc_auc_ovr'                          :func:`metrics.roc_auc_score`\n'roc_auc_ovo'                          :func:`metrics.roc_auc_score`\n'roc_auc_ovr_weighted'                 :func:`metrics.roc_auc_score`\n'roc_auc_ovo_weighted'                 :func:`metrics.roc_auc_score`\n'd2_log_loss_score'                    :func:`metrics.d2_log_loss_score`\n\n**Clustering**\n'adjusted_mutual_info_score'           :func:`metrics.adjusted_mutual_info_score`\n'adjusted_rand_score'                  :func:`metrics.adjusted_rand_score`\n'completeness_score'                   :func:`metrics.completeness_score`\n'fowlkes_mallows_score'                :func:`metrics.fowlkes_mallows_score`\n'homogeneity_score'                    :func:`metrics.homogeneity_score`\n'mutual_info_score'                    :func:`metrics.mutual_info_score`\n'normalized_mutual_info_score'         :func:`metrics.normalized_mutual_info_score`\n'rand_score'                           :func:`metrics.rand_score`\n'v_measure_score'                      :func:`metrics.v_measure_score`\n\n**Regression**\n'explained_variance'                   :func:`metrics.explained_variance_score`\n'neg_max_error'                        :func:`metrics.max_error`\n'neg_mean_absolute_error'              :func:`metrics.mean_absolute_error`\n'neg_mean_squared_error'               :func:`metrics.mean_squared_error`\n'neg_root_mean_squared_error'          :func:`metrics.root_mean_squared_error`\n'neg_mean_squared_log_error'           :func:`metrics.mean_squared_log_error`\n'neg_root_mean_squared_log_error'      :func:`metrics.root_mean_squared_log_error`\n'neg_median_absolute_error'            :func:`metrics.median_absolute_error`\n'r2'                                   :func:`metrics.r2_score`\n'neg_mean_poisson_deviance'            
:func:`metrics.mean_poisson_deviance`\n'neg_mean_gamma_deviance'              :func:`metrics.mean_gamma_deviance`\n'neg_mean_absolute_percentage_error'   :func:`metrics.mean_absolute_percentage_error`\n'd2_absolute_error_score'              :func:`metrics.d2_absolute_error_score`\n====================================   ==============================================     ==================================\n\nUsage examples:\n\n    >>> from sklearn import svm, datasets\n    >>> from sklearn.model_selection import cross_val_score\n    >>> X, y = datasets.load_iris(return_X_y=True)\n    >>> clf = svm.SVC(random_state=0)\n    >>> cross_val_score(clf, X, y, cv=5, scoring='recall_macro')\n    array([0.96..., 0.96..., 0.96..., 0.93..., 1.        ])\n\n.. note::\n\n    If a wrong scoring name is passed, an ``InvalidParameterError`` is raised.\n    You can retrieve the names of all available scorers by calling\n    :func:`~sklearn.metrics.get_scorer_names`.\n\n.. currentmodule:: sklearn.metrics\n\n.. _scoring_callable:\n\nCallable scorers\n----------------\n\nFor more complex use cases and more flexibility, you can pass a callable to\nthe `scoring` parameter. This can be done by:\n\n* :ref:`scoring_adapt_metric`\n* :ref:`scoring_custom` (most flexible)\n\n.. _scoring_adapt_metric:\n\nAdapting predefined metrics via `make_scorer`\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThe following metric functions are not implemented as named scorers,\nsometimes because they require additional parameters, such as\n:func:`fbeta_score`. 
They cannot be passed to the ``scoring``\nparameters; instead their callable needs to be passed to\n:func:`make_scorer` together with the value of the user-settable\nparameters.\n\n=====================================  =========  ==============================================\nFunction                               Parameter  Example usage\n=====================================  =========  ==============================================\n**Classification**\n:func:`metrics.fbeta_score`            ``beta``   ``make_scorer(fbeta_score, beta=2)``\n\n**Regression**\n:func:`metrics.mean_tweedie_deviance`  ``power``  ``make_scorer(mean_tweedie_deviance, power=1.5)``\n:func:`metrics.mean_pinball_loss`      ``alpha``  ``make_scorer(mean_pinball_loss, alpha=0.95)``\n:func:`metrics.d2_tweedie_score`       ``power``  ``make_scorer(d2_tweedie_score, power=1.5)``\n:func:`metrics.d2_pinball_score`       ``alpha``  ``make_scorer(d2_pinball_score, alpha=0.95)``\n=====================================  =========  ==============================================\n\nOne typical use case is to wrap an existing metric function from the library\nwith non-default values for its parameters, such as the ``beta`` parameter for\nthe :func:`fbeta_score` function::\n\n    >>> from sklearn.metrics import fbeta_score, make_scorer\n    >>> ftwo_scorer = make_scorer(fbeta_score, beta=2)\n    >>> from sklearn.model_selection import GridSearchCV\n    >>> from sklearn.svm import LinearSVC\n    >>> grid = GridSearchCV(LinearSVC(), param_grid={'C': [1, 10]},\n    ...                     scoring=ftwo_scorer, cv=5)\n\nThe module :mod:`sklearn.metrics` also exposes a set of simple functions\nmeasuring a prediction error given ground truth and prediction:\n\n- functions ending with ``_score`` return a value to\n  maximize, the higher the better.\n\n- functions ending with ``_error``, ``_loss``, or ``_deviance`` return a\n  value to minimize, the lower the better. 
When converting\n  into a scorer object using :func:`make_scorer`, set\n  the ``greater_is_better`` parameter to ``False`` (``True`` by default; see the\n  parameter description below).\n\n.. _scoring_custom:\n\nCreating a custom scorer object\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nYou can create your own custom scorer object using\n:func:`make_scorer` or for the most flexibility, from scratch. See below for details.\n\n.. dropdown:: Custom scorer objects using `make_scorer`\n\n  You can build a completely custom scorer object\n  from a simple python function using :func:`make_scorer`, which can\n  take several parameters:\n\n  * the python function you want to use (``my_custom_loss_func``\n    in the example below)\n\n  * whether the python function returns a score (``greater_is_better=True``,\n    the default) or a loss (``greater_is_better=False``). If a loss, the output\n    of the python function is negated by the scorer object, conforming to\n    the cross validation convention that scorers return higher values for better models.\n\n  * for classification metrics only: whether the python function you provided requires\n    continuous decision certainties. If the scoring function only accepts probability\n    estimates (e.g. :func:`metrics.log_loss`), then one needs to set the parameter\n    `response_method=\"predict_proba\"`. Some scoring\n    functions do not necessarily require probability estimates but rather non-thresholded\n    decision values (e.g. :func:`metrics.roc_auc_score`). 
In this case, one can provide a\n    list (e.g., `response_method=[\"decision_function\", \"predict_proba\"]`),\n    and scorer will use the first available method, in the order given in the list,\n    to compute the scores.\n\n  * any additional parameters of the scoring function, such as ``beta`` or ``labels``.\n\n  Here is an example of building custom scorers, and of using the\n  ``greater_is_better`` parameter::\n\n      >>> import numpy as np\n      >>> def my_custom_loss_func(y_true, y_pred):\n      ...     diff = np.abs(y_true - y_pred).max()\n      ...     return np.log1p(diff)\n      ...\n      >>> # score will negate the return value of my_custom_loss_func,\n      >>> # which will be np.log(2), 0.693, given the values for X\n      >>> # and y defined below.\n      >>> score = make_scorer(my_custom_loss_func, greater_is_better=False)\n      >>> X = [[1], [1]]\n      >>> y = [0, 1]\n      >>> from sklearn.dummy import DummyClassifier\n      >>> clf = DummyClassifier(strategy='most_frequent', random_state=0)\n      >>> clf = clf.fit(X, y)\n      >>> my_custom_loss_func(y, clf.predict(X))\n      0.69...\n      >>> score(clf, X, y)\n      -0.69...\n\n.. 
dropdown:: Custom scorer objects from scratch\n\n  You can generate even more flexible model scorers by constructing your own\n  scoring object from scratch, without using the :func:`make_scorer` factory.\n\n  For a callable to be a scorer, it needs to meet the protocol specified by\n  the following two rules:\n\n  - It can be called with parameters ``(estimator, X, y)``, where ``estimator``\n    is the model that should be evaluated, ``X`` is validation data, and ``y`` is\n    the ground truth target for ``X`` (in the supervised case) or ``None`` (in the\n    unsupervised case).\n\n  - It returns a floating point number that quantifies the\n    ``estimator`` prediction quality on ``X``, with reference to ``y``.\n    Again, by convention higher numbers are better, so if your scorer\n    returns loss, that value should be negated.\n\n  - Advanced: If it requires extra metadata to be passed to it, it should expose\n    a ``get_metadata_routing`` method returning the requested metadata. The user\n    should be able to set the requested metadata via a ``set_score_request``\n    method. Please see :ref:`User Guide <metadata_routing>` and :ref:`Developer\n    Guide <sphx_glr_auto_examples_miscellaneous_plot_metadata_routing.py>` for\n    more details.\n\n\n.. dropdown:: Using custom scorers in functions where n_jobs > 1\n\n    While defining the custom scoring function alongside the calling function\n    should work out of the box with the default joblib backend (loky),\n    importing it from another module will be a more robust approach and work\n    independently of the joblib backend.\n\n    For example, to use ``n_jobs`` greater than 1 in the example below,\n    ``custom_scoring_function`` function is saved in a user-created module\n    (``custom_scorer_module.py``) and imported::\n\n        >>> from custom_scorer_module import custom_scoring_function # doctest: +SKIP\n        >>> cross_val_score(model,\n        ...  X_train,\n        ...  y_train,\n        ...  
scoring=make_scorer(custom_scoring_function, greater_is_better=False),\n        ...  cv=5,\n        ...  n_jobs=-1) # doctest: +SKIP\n\n.. _multimetric_scoring:\n\nUsing multiple metric evaluation\n--------------------------------\n\nScikit-learn also permits evaluation of multiple metrics in ``GridSearchCV``,\n``RandomizedSearchCV`` and ``cross_validate``.\n\nThere are three ways to specify multiple scoring metrics for the ``scoring``\nparameter:\n\n- As an iterable of string metrics::\n\n    >>> scoring = ['accuracy', 'precision']\n\n- As a ``dict`` mapping the scorer name to the scoring function::\n\n    >>> from sklearn.metrics import accuracy_score\n    >>> from sklearn.metrics import make_scorer\n    >>> scoring = {'accuracy': make_scorer(accuracy_score),\n    ...            'prec': 'precision'}\n\n  Note that the dict values can either be scorer functions or one of the\n  predefined metric strings.\n\n- As a callable that returns a dictionary of scores::\n\n    >>> from sklearn.model_selection import cross_validate\n    >>> from sklearn.metrics import confusion_matrix\n    >>> # A sample toy binary classification dataset\n    >>> X, y = datasets.make_classification(n_classes=2, random_state=0)\n    >>> svm = LinearSVC(random_state=0)\n    >>> def confusion_matrix_scorer(clf, X, y):\n    ...      y_pred = clf.predict(X)\n    ...      cm = confusion_matrix(y, y_pred)\n    ...      return {'tn': cm[0, 0], 'fp': cm[0, 1],\n    ...              'fn': cm[1, 0], 'tp': cm[1, 1]}\n    >>> cv_results = cross_validate(svm, X, y, cv=5,\n    ...                             scoring=confusion_matrix_scorer)\n    >>> # Getting the test set true positive scores\n    >>> print(cv_results['test_tp'])\n    [10  9  8  7  8]\n    >>> # Getting the test set false negative scores\n    >>> print(cv_results['test_fn'])\n    [0 1 2 3 2]\n\n.. _classification_metrics:\n\nClassification metrics\n=======================\n\n.. 
currentmodule:: sklearn.metrics\n\nThe :mod:`sklearn.metrics` module implements several loss, score, and utility\nfunctions to measure classification performance.\nSome metrics might require probability estimates of the positive class,\nconfidence values, or binary decision values.\nMost implementations allow each sample to provide a weighted contribution\nto the overall score, through the ``sample_weight`` parameter.\n\nSome of these are restricted to the binary classification case:\n\n.. autosummary::\n\n   precision_recall_curve\n   roc_curve\n   class_likelihood_ratios\n   det_curve\n\n\nOthers also work in the multiclass case:\n\n.. autosummary::\n\n   balanced_accuracy_score\n   cohen_kappa_score\n   confusion_matrix\n   hinge_loss\n   matthews_corrcoef\n   roc_auc_score\n   top_k_accuracy_score\n\n\nSome also work in the multilabel case:\n\n.. autosummary::\n\n   accuracy_score\n   classification_report\n   f1_score\n   fbeta_score\n   hamming_loss\n   jaccard_score\n   log_loss\n   multilabel_confusion_matrix\n   precision_recall_fscore_support\n   precision_score\n   recall_score\n   roc_auc_score\n   zero_one_loss\n   d2_log_loss_score\n\nAnd some work with binary and multilabel (but not multiclass) problems:\n\n.. autosummary::\n\n   average_precision_score\n\n\nIn the following sub-sections, we will describe each of those functions,\npreceded by some notes on common API and metric definition.\n\n.. _average:\n\nFrom binary to multiclass and multilabel\n----------------------------------------\n\nSome metrics are essentially defined for binary classification tasks (e.g.\n:func:`f1_score`, :func:`roc_auc_score`). 
In these cases, by default\nonly the positive label is evaluated, assuming by default that the positive\nclass is labelled ``1`` (though this may be configurable through the\n``pos_label`` parameter).\n\nIn extending a binary metric to multiclass or multilabel problems, the data\nis treated as a collection of binary problems, one for each class.\nThere are then a number of ways to average binary metric calculations across\nthe set of classes, each of which may be useful in some scenario.\nWhere available, you should select among these using the ``average`` parameter.\n\n* ``\"macro\"`` simply calculates the mean of the binary metrics,\n  giving equal weight to each class.  In problems where infrequent classes\n  are nonetheless important, macro-averaging may be a means of highlighting\n  their performance. On the other hand, the assumption that all classes are\n  equally important is often untrue, such that macro-averaging will\n  over-emphasize the typically low performance on an infrequent class.\n* ``\"weighted\"`` accounts for class imbalance by computing the average of\n  binary metrics in which each class's score is weighted by its presence in the\n  true data sample.\n* ``\"micro\"`` gives each sample-class pair an equal contribution to the overall\n  metric (except as a result of sample-weight). Rather than summing the\n  metric per class, this sums the dividends and divisors that make up the\n  per-class metrics to calculate an overall quotient.\n  Micro-averaging may be preferred in multilabel settings, including\n  multiclass classification where a majority class is to be ignored.\n* ``\"samples\"`` applies only to multilabel problems. 
It does not calculate a\n  per-class measure, instead calculating the metric over the true and predicted\n  classes for each sample in the evaluation data, and returning their\n  (``sample_weight``-weighted) average.\n* Selecting ``average=None`` will return an array with the score for each\n  class.\n\nWhile multiclass data is provided to the metric, like binary targets, as an\narray of class labels, multilabel data is specified as an indicator matrix,\nin which cell ``[i, j]`` has value 1 if sample ``i`` has label ``j`` and value\n0 otherwise.\n\n.. _accuracy_score:\n\nAccuracy score\n--------------\n\nThe :func:`accuracy_score` function computes the\n`accuracy <https:\/\/en.wikipedia.org\/wiki\/Accuracy_and_precision>`_, either the fraction\n(default) or the count (normalize=False) of correct predictions.\n\n\nIn multilabel classification, the function returns the subset accuracy. If\nthe entire set of predicted labels for a sample strictly match with the true\nset of labels, then the subset accuracy is 1.0; otherwise it is 0.0.\n\nIf :math:`\\hat{y}_i` is the predicted value of\nthe :math:`i`-th sample and :math:`y_i` is the corresponding true value,\nthen the fraction of correct predictions over :math:`n_\\text{samples}` is\ndefined as\n\n.. math::\n\n  \\texttt{accuracy}(y, \\hat{y}) = \\frac{1}{n_\\text{samples}} \\sum_{i=0}^{n_\\text{samples}-1} 1(\\hat{y}_i = y_i)\n\nwhere :math:`1(x)` is the `indicator function\n<https:\/\/en.wikipedia.org\/wiki\/Indicator_function>`_.\n\n  >>> import numpy as np\n  >>> from sklearn.metrics import accuracy_score\n  >>> y_pred = [0, 2, 1, 3]\n  >>> y_true = [0, 1, 2, 3]\n  >>> accuracy_score(y_true, y_pred)\n  0.5\n  >>> accuracy_score(y_true, y_pred, normalize=False)\n  2.0\n\nIn the multilabel case with binary label indicators::\n\n  >>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))\n  0.5\n\n.. 
rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_model_selection_plot_permutation_tests_for_classification.py`\n  for an example of accuracy score usage using permutations of\n  the dataset.\n\n.. _top_k_accuracy_score:\n\nTop-k accuracy score\n--------------------\n\nThe :func:`top_k_accuracy_score` function is a generalization of\n:func:`accuracy_score`. The difference is that a prediction is considered\ncorrect as long as the true label is associated with one of the ``k`` highest\npredicted scores. :func:`accuracy_score` is the special case of `k = 1`.\n\nThe function covers the binary and multiclass classification cases but not the\nmultilabel case.\n\nIf :math:`\\hat{f}_{i,j}` is the predicted class for the :math:`i`-th sample\ncorresponding to the :math:`j`-th largest predicted score and :math:`y_i` is the\ncorresponding true value, then the fraction of correct predictions over\n:math:`n_\\text{samples}` is defined as\n\n.. math::\n\n   \\texttt{top-k accuracy}(y, \\hat{f}) = \\frac{1}{n_\\text{samples}} \\sum_{i=0}^{n_\\text{samples}-1} \\sum_{j=1}^{k} 1(\\hat{f}_{i,j} = y_i)\n\nwhere :math:`k` is the number of guesses allowed and :math:`1(x)` is the\n`indicator function <https:\/\/en.wikipedia.org\/wiki\/Indicator_function>`_.\n\n  >>> import numpy as np\n  >>> from sklearn.metrics import top_k_accuracy_score\n  >>> y_true = np.array([0, 1, 2, 2])\n  >>> y_score = np.array([[0.5, 0.2, 0.2],\n  ...                     [0.3, 0.4, 0.2],\n  ...                     [0.2, 0.4, 0.3],\n  ...                     [0.7, 0.2, 0.1]])\n  >>> top_k_accuracy_score(y_true, y_score, k=2)\n  0.75\n  >>> # Not normalizing gives the number of \"correctly\" classified samples\n  >>> top_k_accuracy_score(y_true, y_score, k=2, normalize=False)\n  3\n\n.. 
_balanced_accuracy_score:\n\nBalanced accuracy score\n-----------------------\n\nThe :func:`balanced_accuracy_score` function computes the `balanced accuracy\n<https:\/\/en.wikipedia.org\/wiki\/Accuracy_and_precision>`_, which avoids inflated\nperformance estimates on imbalanced datasets. It is the macro-average of recall\nscores per class or, equivalently, raw accuracy where each sample is weighted\naccording to the inverse prevalence of its true class.\nThus for balanced datasets, the score is equal to accuracy.\n\nIn the binary case, balanced accuracy is equal to the arithmetic mean of\n`sensitivity <https:\/\/en.wikipedia.org\/wiki\/Sensitivity_and_specificity>`_\n(true positive rate) and `specificity\n<https:\/\/en.wikipedia.org\/wiki\/Sensitivity_and_specificity>`_ (true negative\nrate), or the area under the ROC curve with binary predictions rather than\nscores:\n\n.. math::\n\n   \texttt{balanced-accuracy} = \frac{1}{2}\left( \frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right )\n\nIf the classifier performs equally well on either class, this term reduces to\nthe conventional accuracy (i.e., the number of correct predictions divided by\nthe total number of predictions).\n\nIn contrast, if the conventional accuracy is above chance only because the\nclassifier takes advantage of an imbalanced test set, then the balanced\naccuracy, as appropriate, will drop to :math:`\frac{1}{n\_classes}`.\n\nThe score ranges from 0 to 1, or when ``adjusted=True`` is used, it is rescaled\nto the range :math:`\frac{1}{1 - n\_classes}` to 1, inclusive, with\nperformance at random scoring 0.\n\nIf :math:`y_i` is the true value of the :math:`i`-th sample, and :math:`w_i`\nis the corresponding sample weight, then we adjust the sample weight to:\n\n.. 
math::\n\n   \hat{w}_i = \frac{w_i}{\sum_j{1(y_j = y_i) w_j}}\n\nwhere :math:`1(x)` is the `indicator function <https:\/\/en.wikipedia.org\/wiki\/Indicator_function>`_.\nGiven predicted :math:`\hat{y}_i` for sample :math:`i`, balanced accuracy is\ndefined as:\n\n.. math::\n\n   \texttt{balanced-accuracy}(y, \hat{y}, w) = \frac{1}{\sum{\hat{w}_i}} \sum_i 1(\hat{y}_i = y_i) \hat{w}_i\n\nWith ``adjusted=True``, balanced accuracy reports the relative increase from\n:math:`\texttt{balanced-accuracy}(y, \mathbf{0}, w) =\n\frac{1}{n\_classes}`.  In the binary case, this is also known as\n`Youden's J statistic <https:\/\/en.wikipedia.org\/wiki\/Youden%27s_J_statistic>`_,\nor *informedness*.\n\n.. note::\n\n    The multiclass definition here seems the most reasonable extension of the\n    metric used in binary classification, though there is no certain consensus\n    in the literature:\n\n    * Our definition: [Mosley2013]_, [Kelleher2015]_ and [Guyon2015]_, where\n      [Guyon2015]_ adopts the adjusted version to ensure that random predictions\n      have a score of :math:`0` and perfect predictions have a score of :math:`1`.\n    * Class balanced accuracy as described in [Mosley2013]_: the minimum between the precision\n      and the recall for each class is computed. Those values are then averaged over the total\n      number of classes to get the balanced accuracy.\n    * Balanced Accuracy as described in [Urbanowicz2015]_: the average of sensitivity and specificity\n      is computed for each class and then averaged over the total number of classes.\n\n.. rubric:: References\n\n.. [Guyon2015] I. Guyon, K. Bennett, G. Cawley, H.J. Escalante, S. Escalera, T.K. Ho, N. Maci\u00e0,\n    B. Ray, M. Saeed, A.R. Statnikov, E. Viegas, `Design of the 2015 ChaLearn AutoML Challenge\n    <https:\/\/ieeexplore.ieee.org\/document\/7280767>`_, IJCNN 2015.\n.. [Mosley2013] L. 
Mosley, `A balanced approach to the multi-class imbalance problem\n    <https:\/\/lib.dr.iastate.edu\/etd\/13537\/>`_, PhD thesis, Iowa State University, 2013.\n.. [Kelleher2015] John D. Kelleher, Brian Mac Namee, Aoife D'Arcy, `Fundamentals of\n    Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples,\n    and Case Studies <https:\/\/mitpress.mit.edu\/books\/fundamentals-machine-learning-predictive-data-analytics>`_,\n    2015.\n.. [Urbanowicz2015] Urbanowicz R.J., Moore, J.H. :doi:`ExSTraCS 2.0: description\n    and evaluation of a scalable learning classifier\n    system <10.1007\/s12065-015-0128-8>`, Evol. Intel. (2015) 8: 89.\n\n.. _cohen_kappa:\n\nCohen's kappa\n-------------\n\nThe function :func:`cohen_kappa_score` computes `Cohen's kappa\n<https:\/\/en.wikipedia.org\/wiki\/Cohen%27s_kappa>`_ statistic.\nThis measure is intended to compare labelings by different human annotators,\nnot a classifier versus a ground truth.\n\nThe kappa score is a number between -1 and 1.\nScores above .8 are generally considered good agreement;\nzero or lower means no agreement (practically random labels).\n\nKappa scores can be computed for binary or multiclass problems,\nbut not for multilabel problems (except by manually computing a per-label score)\nand not for more than two annotators.\n\n  >>> from sklearn.metrics import cohen_kappa_score\n  >>> labeling1 = [2, 0, 2, 2, 0, 1]\n  >>> labeling2 = [0, 0, 2, 2, 0, 2]\n  >>> cohen_kappa_score(labeling1, labeling2)\n  0.4285714285714286\n\n.. 
_confusion_matrix:\n\nConfusion matrix\n----------------\n\nThe :func:`confusion_matrix` function evaluates\nclassification accuracy by computing the `confusion matrix\n<https:\/\/en.wikipedia.org\/wiki\/Confusion_matrix>`_ with each row corresponding\nto the true class (Wikipedia and other references may use a different\nconvention for axes).\n\nBy definition, entry :math:`i, j` in a confusion matrix is\nthe number of observations actually in group :math:`i`, but\npredicted to be in group :math:`j`. Here is an example::\n\n  >>> from sklearn.metrics import confusion_matrix\n  >>> y_true = [2, 0, 2, 2, 0, 1]\n  >>> y_pred = [0, 0, 2, 2, 0, 2]\n  >>> confusion_matrix(y_true, y_pred)\n  array([[2, 0, 0],\n         [0, 0, 1],\n         [1, 0, 2]])\n\n:class:`ConfusionMatrixDisplay` can be used to visually represent a confusion\nmatrix as shown in the\n:ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`\nexample, which creates the following figure:\n\n.. image:: ..\/auto_examples\/model_selection\/images\/sphx_glr_plot_confusion_matrix_001.png\n   :target: ..\/auto_examples\/model_selection\/plot_confusion_matrix.html\n   :scale: 75\n   :align: center\n\nThe ``normalize`` parameter allows reporting ratios instead of counts. The\nconfusion matrix can be normalized in 3 different ways: ``'pred'``, ``'true'``,\nand ``'all'``, which will divide the counts by the column sums, the row sums, or\nthe entire matrix total, respectively.\n\n  >>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]\n  >>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]\n  >>> confusion_matrix(y_true, y_pred, normalize='all')\n  array([[0.25 , 0.125],\n         [0.25 , 0.375]])\n\nFor binary problems, we can get counts of true negatives, false positives,\nfalse negatives and true positives as follows::\n\n  >>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]\n  >>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]\n  >>> tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()\n  >>> tn, fp, fn, tp\n  (2, 1, 2, 3)\n\n.. 
rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`\n  for an example of using a confusion matrix to evaluate classifier output\n  quality.\n\n* See :ref:`sphx_glr_auto_examples_classification_plot_digits_classification.py`\n  for an example of using a confusion matrix to classify\n  hand-written digits.\n\n* See :ref:`sphx_glr_auto_examples_text_plot_document_classification_20newsgroups.py`\n  for an example of using a confusion matrix to classify text\n  documents.\n\n.. _classification_report:\n\nClassification report\n----------------------\n\nThe :func:`classification_report` function builds a text report showing the\nmain classification metrics. Here is a small example with custom ``target_names``\nand inferred labels::\n\n   >>> from sklearn.metrics import classification_report\n   >>> y_true = [0, 1, 2, 2, 0]\n   >>> y_pred = [0, 0, 2, 1, 0]\n   >>> target_names = ['class 0', 'class 1', 'class 2']\n   >>> print(classification_report(y_true, y_pred, target_names=target_names))\n                 precision    recall  f1-score   support\n   <BLANKLINE>\n        class 0       0.67      1.00      0.80         2\n        class 1       0.00      0.00      0.00         1\n        class 2       1.00      0.50      0.67         2\n   <BLANKLINE>\n       accuracy                           0.60         5\n      macro avg       0.56      0.50      0.49         5\n   weighted avg       0.67      0.60      0.59         5\n   <BLANKLINE>\n\n.. rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_classification_plot_digits_classification.py`\n  for an example of classification report usage for\n  hand-written digits.\n\n* See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_digits.py`\n  for an example of classification report usage for\n  grid search with nested cross-validation.\n\n.. 
_hamming_loss:\n\nHamming loss\n-------------\n\nThe :func:`hamming_loss` function computes the average Hamming loss or `Hamming\ndistance <https:\/\/en.wikipedia.org\/wiki\/Hamming_distance>`_ between two sets\nof samples.\n\nIf :math:`\hat{y}_{i,j}` is the predicted value for the :math:`j`-th label of a\ngiven sample :math:`i`, :math:`y_{i,j}` is the corresponding true value,\n:math:`n_\text{samples}` is the number of samples and :math:`n_\text{labels}`\nis the number of labels, then the Hamming loss :math:`L_{Hamming}` is defined\nas:\n\n.. math::\n\n   L_{Hamming}(y, \hat{y}) = \frac{1}{n_\text{samples} \times n_\text{labels}} \sum_{i=0}^{n_\text{samples}-1} \sum_{j=0}^{n_\text{labels} - 1} 1(\hat{y}_{i,j} \not= y_{i,j})\n\nwhere :math:`1(x)` is the `indicator function\n<https:\/\/en.wikipedia.org\/wiki\/Indicator_function>`_.\n\nThe equation above does not hold in the case of multiclass classification;\nplease refer to the note below for more information. ::\n\n  >>> from sklearn.metrics import hamming_loss\n  >>> y_pred = [1, 2, 3, 4]\n  >>> y_true = [2, 2, 3, 4]\n  >>> hamming_loss(y_true, y_pred)\n  0.25\n\nIn the multilabel case with binary label indicators::\n\n  >>> hamming_loss(np.array([[0, 1], [1, 1]]), np.zeros((2, 2)))\n  0.75\n\n.. note::\n\n    In multiclass classification, the Hamming loss corresponds to the Hamming\n    distance between ``y_true`` and ``y_pred``, which is similar to the\n    :ref:`zero_one_loss` function.  However, while zero-one loss penalizes\n    prediction sets that do not strictly match true sets, the Hamming loss\n    penalizes individual labels.  Thus the Hamming loss, upper bounded by the zero-one\n    loss, is always between zero and one, inclusive; and predicting a proper subset\n    or superset of the true labels will give a Hamming loss between\n    zero and one, exclusive.\n\n.. 
_precision_recall_f_measure_metrics:\n\nPrecision, recall and F-measures\n---------------------------------\n\nIntuitively, `precision\n<https:\/\/en.wikipedia.org\/wiki\/Precision_and_recall#Precision>`_ is the ability\nof the classifier not to label as positive a sample that is negative, and\n`recall <https:\/\/en.wikipedia.org\/wiki\/Precision_and_recall#Recall>`_ is the\nability of the classifier to find all the positive samples.\n\nThe  `F-measure <https:\/\/en.wikipedia.org\/wiki\/F1_score>`_\n(:math:`F_\\beta` and :math:`F_1` measures) can be interpreted as a weighted\nharmonic mean of the precision and recall. A\n:math:`F_\\beta` measure reaches its best value at 1 and its worst score at 0.\nWith :math:`\\beta = 1`,  :math:`F_\\beta` and\n:math:`F_1`  are equivalent, and the recall and the precision are equally important.\n\nThe :func:`precision_recall_curve` computes a precision-recall curve\nfrom the ground truth label and a score given by the classifier\nby varying a decision threshold.\n\nThe :func:`average_precision_score` function computes the\n`average precision <https:\/\/en.wikipedia.org\/w\/index.php?title=Information_retrieval&oldid=793358396#Average_precision>`_\n(AP) from prediction scores. The value is between 0 and 1 and higher is better.\nAP is defined as\n\n.. math::\n    \\text{AP} = \\sum_n (R_n - R_{n-1}) P_n\n\nwhere :math:`P_n` and :math:`R_n` are the precision and recall at the\nnth threshold. With random predictions, the AP is the fraction of positive\nsamples.\n\nReferences [Manning2008]_ and [Everingham2010]_ present alternative variants of\nAP that interpolate the precision-recall curve. Currently,\n:func:`average_precision_score` does not implement any interpolated variant.\nReferences [Davis2006]_ and [Flach2015]_ describe why a linear interpolation of\npoints on the precision-recall curve provides an overly-optimistic measure of\nclassifier performance. 
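The step-wise AP sum above can be reproduced directly from the output of :func:`precision_recall_curve`. The following is a minimal sketch (the toy labels and scores are illustrative); note that the returned recall array is decreasing, hence the sign flip, and the final precision entry is unused by the sum:

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

# Illustrative toy data: two negatives, two positives.
y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

precision, recall, _ = precision_recall_curve(y_true, y_scores)
# AP = sum_n (R_n - R_{n-1}) * P_n over the thresholds.
ap = -np.sum(np.diff(recall) * precision[:-1])
assert np.isclose(ap, average_precision_score(y_true, y_scores))
```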
This linear interpolation is used when computing area\nunder the curve with the trapezoidal rule in :func:`auc`.\n\nSeveral functions allow you to analyze the precision, recall and F-measure\nscores:\n\n.. autosummary::\n\n   average_precision_score\n   f1_score\n   fbeta_score\n   precision_recall_curve\n   precision_recall_fscore_support\n   precision_score\n   recall_score\n\nNote that the :func:`precision_recall_curve` function is restricted to the\nbinary case. The :func:`average_precision_score` function supports multiclass\nand multilabel formats by computing each class score in a One-vs-the-rest (OvR)\nfashion and averaging them or not, depending on the value of its ``average``\nargument.\n\nThe :func:`PrecisionRecallDisplay.from_estimator` and\n:func:`PrecisionRecallDisplay.from_predictions` functions will plot the\nprecision-recall curve as follows.\n\n.. image:: ..\/auto_examples\/model_selection\/images\/sphx_glr_plot_precision_recall_001.png\n        :target: ..\/auto_examples\/model_selection\/plot_precision_recall.html#plot-the-precision-recall-curve\n        :scale: 75\n        :align: center\n\n.. rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_digits.py`\n  for an example of :func:`precision_score` and :func:`recall_score` usage\n  to estimate parameters using grid search with nested cross-validation.\n\n* See :ref:`sphx_glr_auto_examples_model_selection_plot_precision_recall.py`\n  for an example of :func:`precision_recall_curve` usage to evaluate\n  classifier output quality.\n\n.. rubric:: References\n\n.. [Manning2008] C.D. Manning, P. Raghavan, H. Sch\u00fctze, `Introduction to Information Retrieval\n    <https:\/\/nlp.stanford.edu\/IR-book\/html\/htmledition\/evaluation-of-ranked-retrieval-results-1.html>`_,\n    2008.\n.. [Everingham2010] M. Everingham, L. Van Gool, C.K.I. Williams, J. Winn, A. 
Zisserman,\n    `The Pascal Visual Object Classes (VOC) Challenge\n    <https:\/\/citeseerx.ist.psu.edu\/doc_view\/pid\/b6bebfd529b233f00cb854b7d8070319600cf59d>`_,\n    IJCV 2010.\n.. [Davis2006] J. Davis, M. Goadrich, `The Relationship Between Precision-Recall and ROC Curves\n    <https:\/\/www.biostat.wisc.edu\/~page\/rocpr.pdf>`_,\n    ICML 2006.\n.. [Flach2015] P.A. Flach, M. Kull, `Precision-Recall-Gain Curves: PR Analysis Done Right\n    <https:\/\/papers.nips.cc\/paper\/5867-precision-recall-gain-curves-pr-analysis-done-right.pdf>`_,\n    NIPS 2015.\n\nBinary classification\n^^^^^^^^^^^^^^^^^^^^^\n\nIn a binary classification task, the terms ''positive'' and ''negative'' refer\nto the classifier's prediction, and the terms ''true'' and ''false'' refer to\nwhether that prediction corresponds to the external judgment (sometimes known\nas the ''observation''). Given these definitions, we can formulate the\nfollowing table:\n\n+-------------------+------------------------------------------------+\n|                   |    Actual class (observation)                  |\n+-------------------+---------------------+--------------------------+\n|   Predicted class | tp (true positive)  | fp (false positive)      |\n|   (expectation)   | Correct result      | Unexpected result        |\n|                   +---------------------+--------------------------+\n|                   | fn (false negative) | tn (true negative)       |\n|                   | Missing result      | Correct absence of result|\n+-------------------+---------------------+--------------------------+\n\nIn this context, we can define the notions of precision and recall:\n\n.. math::\n\n   \\text{precision} = \\frac{\\text{tp}}{\\text{tp} + \\text{fp}},\n\n.. 
math::\n\n   \text{recall} = \frac{\text{tp}}{\text{tp} + \text{fn}},\n\n(Sometimes recall is also called ''sensitivity'')\n\nF-measure is the weighted harmonic mean of precision and recall, with precision's\ncontribution to the mean weighted by some parameter :math:`\beta`:\n\n.. math::\n\n   F_\beta = (1 + \beta^2) \frac{\text{precision} \times \text{recall}}{\beta^2 \text{precision} + \text{recall}}\n\nTo avoid division by zero when precision and recall are zero, scikit-learn calculates F-measure with this\notherwise-equivalent formula:\n\n.. math::\n\n   F_\beta = \frac{(1 + \beta^2) \text{tp}}{(1 + \beta^2) \text{tp} + \text{fp} + \beta^2 \text{fn}}\n\nNote that this formula is still undefined when there are no true positives, false\npositives, or false negatives. By default, F-1 for a set of exclusively true negatives\nis calculated as 0; however, this behavior can be changed using the ``zero_division``\nparameter.\n\nHere are some small examples in binary classification::\n\n  >>> from sklearn import metrics\n  >>> y_pred = [0, 1, 0, 0]\n  >>> y_true = [0, 1, 0, 1]\n  >>> metrics.precision_score(y_true, y_pred)\n  1.0\n  >>> metrics.recall_score(y_true, y_pred)\n  0.5\n  >>> metrics.f1_score(y_true, y_pred)\n  0.66...\n  >>> metrics.fbeta_score(y_true, y_pred, beta=0.5)\n  0.83...\n  >>> metrics.fbeta_score(y_true, y_pred, beta=1)\n  0.66...\n  >>> metrics.fbeta_score(y_true, y_pred, beta=2)\n  0.55...\n  >>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5)\n  (array([0.66..., 1.        ]), array([1. , 0.5]), array([0.71..., 0.83...]), array([2, 2]))\n\n\n  >>> import numpy as np\n  >>> from sklearn.metrics import precision_recall_curve\n  >>> from sklearn.metrics import average_precision_score\n  >>> y_true = np.array([0, 0, 1, 1])\n  >>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])\n  >>> precision, recall, threshold = precision_recall_curve(y_true, y_scores)\n  >>> precision\n  array([0.5       , 0.66..., 0.5       , 1.   
     , 1.        ])\n  >>> recall\n  array([1. , 1. , 0.5, 0.5, 0. ])\n  >>> threshold\n  array([0.1 , 0.35, 0.4 , 0.8 ])\n  >>> average_precision_score(y_true, y_scores)\n  0.83...\n\nMulticlass and multilabel classification\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nIn a multiclass and multilabel classification task, the notions of precision,\nrecall, and F-measures can be applied to each label independently.\nThere are a few ways to combine results across labels,\nspecified by the ``average`` argument to the\n:func:`average_precision_score`, :func:`f1_score`,\n:func:`fbeta_score`, :func:`precision_recall_fscore_support`,\n:func:`precision_score` and :func:`recall_score` functions, as described\n:ref:`above <average>`.\n\nNote the following behaviors when averaging:\n\n* If all labels are included, \"micro\"-averaging in a multiclass setting will produce\n  precision, recall and :math:`F` that are all identical to accuracy.\n* \"weighted\" averaging may produce an F-score that is not between precision and recall.\n* \"macro\" averaging for F-measures is calculated as the arithmetic mean over\n  per-label\/class F-measures, not the harmonic mean over the arithmetic precision and\n  recall means. Both calculations can be seen in the literature but are not equivalent;\n  see [OB2019]_ for details.\n\nTo make this more explicit, consider the following notation:\n\n* :math:`y` the set of *true* :math:`(sample, label)` pairs\n* :math:`\hat{y}` the set of *predicted* :math:`(sample, label)` pairs\n* :math:`L` the set of labels\n* :math:`S` the set of samples\n* :math:`y_s` the subset of :math:`y` with sample :math:`s`,\n  i.e. 
:math:`y_s := \\left\\{(s', l) \\in y | s' = s\\right\\}`\n* :math:`y_l` the subset of :math:`y` with label :math:`l`\n* similarly, :math:`\\hat{y}_s` and :math:`\\hat{y}_l` are subsets of\n  :math:`\\hat{y}`\n* :math:`P(A, B) := \\frac{\\left| A \\cap B \\right|}{\\left|B\\right|}` for some\n  sets :math:`A` and :math:`B`\n* :math:`R(A, B) := \\frac{\\left| A \\cap B \\right|}{\\left|A\\right|}`\n  (Conventions vary on handling :math:`A = \\emptyset`; this implementation uses\n  :math:`R(A, B):=0`, and similar for :math:`P`.)\n* :math:`F_\\beta(A, B) := \\left(1 + \\beta^2\\right) \\frac{P(A, B) \\times R(A, B)}{\\beta^2 P(A, B) + R(A, B)}`\n\nThen the metrics are defined as:\n\n+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+\n|``average``    | Precision                                                                                                        | Recall                                                                                                           | F\\_beta                                                                                                              |\n+===============+==================================================================================================================+==================================================================================================================+======================================================================================================================+\n|``\"micro\"``    | :math:`P(y, \\hat{y})`                                                                                            | :math:`R(y, \\hat{y})`                                                
                                            | :math:`F_\\beta(y, \\hat{y})`                                                                                          |\n+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+\n|``\"samples\"``  | :math:`\\frac{1}{\\left|S\\right|} \\sum_{s \\in S} P(y_s, \\hat{y}_s)`                                                | :math:`\\frac{1}{\\left|S\\right|} \\sum_{s \\in S} R(y_s, \\hat{y}_s)`                                                | :math:`\\frac{1}{\\left|S\\right|} \\sum_{s \\in S} F_\\beta(y_s, \\hat{y}_s)`                                              |\n+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+\n|``\"macro\"``    | :math:`\\frac{1}{\\left|L\\right|} \\sum_{l \\in L} P(y_l, \\hat{y}_l)`                                                | :math:`\\frac{1}{\\left|L\\right|} \\sum_{l \\in L} R(y_l, \\hat{y}_l)`                                                | :math:`\\frac{1}{\\left|L\\right|} \\sum_{l \\in L} F_\\beta(y_l, \\hat{y}_l)`                                              
|\n+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+\n|``\"weighted\"`` | :math:`\\frac{1}{\\sum_{l \\in L} \\left|y_l\\right|} \\sum_{l \\in L} \\left|y_l\\right| P(y_l, \\hat{y}_l)`              | :math:`\\frac{1}{\\sum_{l \\in L} \\left|y_l\\right|} \\sum_{l \\in L} \\left|y_l\\right| R(y_l, \\hat{y}_l)`              | :math:`\\frac{1}{\\sum_{l \\in L} \\left|y_l\\right|} \\sum_{l \\in L} \\left|y_l\\right| F_\\beta(y_l, \\hat{y}_l)`            |\n+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+\n|``None``       | :math:`\\langle P(y_l, \\hat{y}_l) | l \\in L \\rangle`                                                              | :math:`\\langle R(y_l, \\hat{y}_l) | l \\in L \\rangle`                                                              | :math:`\\langle F_\\beta(y_l, \\hat{y}_l) | l \\in L \\rangle`                                                            |\n+---------------+------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+\n\n  >>> from sklearn import metrics\n  >>> y_true = [0, 1, 2, 0, 1, 2]\n  >>> y_pred = [0, 2, 1, 0, 0, 1]\n  
>>> metrics.precision_score(y_true, y_pred, average='macro')\n  0.22...\n  >>> metrics.recall_score(y_true, y_pred, average='micro')\n  0.33...\n  >>> metrics.f1_score(y_true, y_pred, average='weighted')\n  0.26...\n  >>> metrics.fbeta_score(y_true, y_pred, average='macro', beta=0.5)\n  0.23...\n  >>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5, average=None)\n  (array([0.66..., 0.        , 0.        ]), array([1., 0., 0.]), array([0.71..., 0.        , 0.        ]), array([2, 2, 2]...))\n\nFor multiclass classification with a \"negative class\", it is possible to exclude some labels:\n\n  >>> metrics.recall_score(y_true, y_pred, labels=[1, 2], average='micro')\n  ... # excluding 0, no labels were correctly recalled\n  0.0\n\nSimilarly, labels not present in the data sample may be accounted for in macro-averaging.\n\n  >>> metrics.precision_score(y_true, y_pred, labels=[0, 1, 2, 3], average='macro')\n  0.166...\n\n.. rubric:: References\n\n.. [OB2019] :arxiv:`Opitz, J., & Burst, S. (2019). \"Macro f1 and macro f1.\"\n    <1911.03347>`\n\n.. _jaccard_similarity_score:\n\nJaccard similarity coefficient score\n-------------------------------------\n\nThe :func:`jaccard_score` function computes the average of `Jaccard similarity\ncoefficients <https:\/\/en.wikipedia.org\/wiki\/Jaccard_index>`_, also called the\nJaccard index, between pairs of label sets.\n\nThe Jaccard similarity coefficient with a ground truth label set :math:`y` and\npredicted label set :math:`\\hat{y}`, is defined as\n\n.. math::\n\n    J(y, \\hat{y}) = \\frac{|y \\cap \\hat{y}|}{|y \\cup \\hat{y}|}.\n\nThe :func:`jaccard_score` (like :func:`precision_recall_fscore_support`) applies\nnatively to binary targets. 
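For binary targets, the set-based definition above can be checked by hand. In this minimal sketch (the labels are illustrative), the positive entries of each label vector are treated as a set of sample indices and :math:`|y \cap \hat{y}| / |y \cup \hat{y}|` is applied directly:

```python
import numpy as np
from sklearn.metrics import jaccard_score

# Illustrative binary labels.
y_true = np.array([0, 1, 1, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0])

# Positive entries as sets of sample indices.
true_set = set(np.flatnonzero(y_true))
pred_set = set(np.flatnonzero(y_pred))
manual = len(true_set & pred_set) / len(true_set | pred_set)

assert np.isclose(manual, jaccard_score(y_true, y_pred))  # both 0.5
```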
By computing it set-wise it can be extended to apply\nto multilabel and multiclass through the use of `average` (see\n:ref:`above <average>`).\n\nIn the binary case::\n\n  >>> import numpy as np\n  >>> from sklearn.metrics import jaccard_score\n  >>> y_true = np.array([[0, 1, 1],\n  ...                    [1, 1, 0]])\n  >>> y_pred = np.array([[1, 1, 1],\n  ...                    [1, 0, 0]])\n  >>> jaccard_score(y_true[0], y_pred[0])\n  0.6666...\n\nIn the 2D comparison case (e.g. image similarity):\n\n  >>> jaccard_score(y_true, y_pred, average=\"micro\")\n  0.6\n\nIn the multilabel case with binary label indicators::\n\n  >>> jaccard_score(y_true, y_pred, average='samples')\n  0.5833...\n  >>> jaccard_score(y_true, y_pred, average='macro')\n  0.6666...\n  >>> jaccard_score(y_true, y_pred, average=None)\n  array([0.5, 0.5, 1. ])\n\nMulticlass problems are binarized and treated like the corresponding\nmultilabel problem::\n\n  >>> y_pred = [0, 2, 1, 2]\n  >>> y_true = [0, 1, 2, 2]\n  >>> jaccard_score(y_true, y_pred, average=None)\n  array([1. , 0. , 0.33...])\n  >>> jaccard_score(y_true, y_pred, average='macro')\n  0.44...\n  >>> jaccard_score(y_true, y_pred, average='micro')\n  0.33...\n\n.. _hinge_loss:\n\nHinge loss\n----------\n\nThe :func:`hinge_loss` function computes the average distance between\nthe model and the data using\n`hinge loss <https:\/\/en.wikipedia.org\/wiki\/Hinge_loss>`_, a one-sided metric\nthat considers only prediction errors. (Hinge\nloss is used in maximal margin classifiers such as support vector machines.)\n\nIf the true label :math:`y_i` of a binary classification task is encoded as\n:math:`y_i=\\left\\{-1, +1\\right\\}` for every sample :math:`i`; and :math:`w_i`\nis the corresponding predicted decision (an array of shape (`n_samples`,) as\noutput by the `decision_function` method), then the hinge loss is defined as:\n\n.. 
math::\n\n  L_\\text{Hinge}(y, w) = \\frac{1}{n_\\text{samples}} \\sum_{i=0}^{n_\\text{samples}-1} \\max\\left\\{1 - w_i y_i, 0\\right\\}\n\nIf there are more than two labels, :func:`hinge_loss` uses a multiclass variant\ndue to Crammer & Singer.\n`Here <https:\/\/jmlr.csail.mit.edu\/papers\/volume2\/crammer01a\/crammer01a.pdf>`_ is\nthe paper describing it.\n\nIn this case the predicted decision is an array of shape (`n_samples`,\n`n_labels`). If :math:`w_{i, y_i}` is the predicted decision for the true label\n:math:`y_i` of the :math:`i`-th sample; and\n:math:`\\hat{w}_{i, y_i} = \\max\\left\\{w_{i, y_j}~|~y_j \\ne y_i \\right\\}`\nis the maximum of the\npredicted decisions for all the other labels, then the multi-class hinge loss\nis defined by:\n\n.. math::\n\n  L_\\text{Hinge}(y, w) = \\frac{1}{n_\\text{samples}}\n  \\sum_{i=0}^{n_\\text{samples}-1} \\max\\left\\{1 + \\hat{w}_{i, y_i}\n  - w_{i, y_i}, 0\\right\\}\n\nHere is a small example demonstrating the use of the :func:`hinge_loss` function\nwith a svm classifier in a binary class problem::\n\n  >>> from sklearn import svm\n  >>> from sklearn.metrics import hinge_loss\n  >>> X = [[0], [1]]\n  >>> y = [-1, 1]\n  >>> est = svm.LinearSVC(random_state=0)\n  >>> est.fit(X, y)\n  LinearSVC(random_state=0)\n  >>> pred_decision = est.decision_function([[-2], [3], [0.5]])\n  >>> pred_decision\n  array([-2.18...,  2.36...,  0.09...])\n  >>> hinge_loss([-1, 1, 1], pred_decision)\n  0.3...\n\nHere is an example demonstrating the use of the :func:`hinge_loss` function\nwith a svm classifier in a multiclass problem::\n\n  >>> X = np.array([[0], [1], [2], [3]])\n  >>> Y = np.array([0, 1, 2, 3])\n  >>> labels = np.array([0, 1, 2, 3])\n  >>> est = svm.LinearSVC()\n  >>> est.fit(X, Y)\n  LinearSVC()\n  >>> pred_decision = est.decision_function([[-1], [2], [3]])\n  >>> y_true = [0, 2, 3]\n  >>> hinge_loss(y_true, pred_decision, labels=labels)\n  0.56...\n\n.. 
_log_loss:\n\nLog loss\n--------\n\nLog loss, also called logistic regression loss or\ncross-entropy loss, is defined on probability estimates.  It is\ncommonly used in (multinomial) logistic regression and neural networks, as well\nas in some variants of expectation-maximization, and can be used to evaluate the\nprobability outputs (``predict_proba``) of a classifier instead of its\ndiscrete predictions.\n\nFor binary classification with a true label :math:`y \\in \\{0,1\\}`\nand a probability estimate :math:`p = \\operatorname{Pr}(y = 1)`,\nthe log loss per sample is the negative log-likelihood\nof the classifier given the true label:\n\n.. math::\n\n    L_{\\log}(y, p) = -\\log \\operatorname{Pr}(y|p) = -(y \\log (p) + (1 - y) \\log (1 - p))\n\nThis extends to the multiclass case as follows.\nLet the true labels for a set of samples\nbe encoded as a 1-of-K binary indicator matrix :math:`Y`,\ni.e., :math:`y_{i,k} = 1` if sample :math:`i` has label :math:`k`\ntaken from a set of :math:`K` labels.\nLet :math:`P` be a matrix of probability estimates,\nwith :math:`p_{i,k} = \\operatorname{Pr}(y_{i,k} = 1)`.\nThen the log loss of the whole set is\n\n.. 
math::

    L_{\log}(Y, P) = -\log \operatorname{Pr}(Y|P) = - \frac{1}{N} \sum_{i=0}^{N-1} \sum_{k=0}^{K-1} y_{i,k} \log p_{i,k}

To see how this generalizes the binary log loss given above,
note that in the binary case,
:math:`p_{i,0} = 1 - p_{i,1}` and :math:`y_{i,0} = 1 - y_{i,1}`,
so expanding the inner sum over :math:`y_{i,k} \in \{0,1\}`
gives the binary log loss.

The :func:`log_loss` function computes log loss given a list of ground-truth
labels and a probability matrix, as returned by an estimator's ``predict_proba``
method.

    >>> from sklearn.metrics import log_loss
    >>> y_true = [0, 0, 1, 1]
    >>> y_pred = [[.9, .1], [.8, .2], [.3, .7], [.01, .99]]
    >>> log_loss(y_true, y_pred)
    0.1738...

The first ``[.9, .1]`` in ``y_pred`` denotes 90% probability that the first
sample has label 0.  The log loss is non-negative.

.. _matthews_corrcoef:

Matthews correlation coefficient
---------------------------------

The :func:`matthews_corrcoef` function computes the
`Matthews correlation coefficient (MCC) <https://en.wikipedia.org/wiki/Matthews_correlation_coefficient>`_
for binary classes.  Quoting Wikipedia:


    "The Matthews correlation coefficient is used in machine learning as a
    measure of the quality of binary (two-class) classifications. It takes
    into account true and false positives and negatives and is generally
    regarded as a balanced measure which can be used even if the classes are
    of very different sizes. The MCC is in essence a correlation coefficient
    value between -1 and +1. A coefficient of +1 represents a perfect
    prediction, 0 an average random prediction and -1 an inverse prediction.
    The statistic is also known as the phi coefficient."


In the binary (two-class) case, where :math:`tp`, :math:`tn`, :math:`fp` and
:math:`fn` are respectively the number of true positives, true negatives, false
positives and false negatives, the MCC is defined as

.. 
math::

  MCC = \frac{tp \times tn - fp \times fn}{\sqrt{(tp + fp)(tp + fn)(tn + fp)(tn + fn)}}.

In the multiclass case, the Matthews correlation coefficient can be `defined
<http://rk.kvl.dk/introduction/index.html>`_ in terms of a
:func:`confusion_matrix` :math:`C` for :math:`K` classes.  To simplify the
definition consider the following intermediate variables:

* :math:`t_k=\sum_{i}^{K} C_{ik}` the number of times class :math:`k` truly occurred,
* :math:`p_k=\sum_{i}^{K} C_{ki}` the number of times class :math:`k` was predicted,
* :math:`c=\sum_{k}^{K} C_{kk}` the total number of samples correctly predicted,
* :math:`s=\sum_{i}^{K} \sum_{j}^{K} C_{ij}` the total number of samples.

Then the multiclass MCC is defined as:

.. math::
    MCC = \frac{
        c \times s - \sum_{k}^{K} p_k \times t_k
    }{\sqrt{
        (s^2 - \sum_{k}^{K} p_k^2) \times
        (s^2 - \sum_{k}^{K} t_k^2)
    }}

When there are more than two labels, the value of the MCC will no longer range
between -1 and +1. Instead, the minimum value will be somewhere between -1 and 0
depending on the number and distribution of ground truth labels. The maximum
value is always +1.
For additional information, see [WikipediaMCC2021]_.

Here is a small example illustrating the usage of the :func:`matthews_corrcoef`
function:

    >>> from sklearn.metrics import matthews_corrcoef
    >>> y_true = [+1, +1, +1, -1]
    >>> y_pred = [+1, -1, +1, +1]
    >>> matthews_corrcoef(y_true, y_pred)
    -0.33...

.. rubric:: References

.. [WikipediaMCC2021] Wikipedia contributors. Phi coefficient.
   Wikipedia, The Free Encyclopedia. April 21, 2021, 12:21 CEST.
   Available at: https://en.wikipedia.org/wiki/Phi_coefficient
   Accessed April 21, 2021.

.. 
_multilabel_confusion_matrix:

Multi-label confusion matrix
----------------------------

The :func:`multilabel_confusion_matrix` function computes the class-wise
(default) or sample-wise (``samplewise=True``) multilabel confusion matrix to
evaluate the accuracy of a classification. :func:`multilabel_confusion_matrix`
also treats multiclass data as if it were multilabel, as this is a
transformation commonly applied to evaluate multiclass problems with binary
classification metrics (such as precision, recall, etc.).

When calculating the class-wise multilabel confusion matrix :math:`C`, the
count of true negatives for class :math:`i` is :math:`C_{i,0,0}`, false
negatives is :math:`C_{i,1,0}`, true positives is :math:`C_{i,1,1}`
and false positives is :math:`C_{i,0,1}`.

Here is an example demonstrating the use of the
:func:`multilabel_confusion_matrix` function with
:term:`multilabel indicator matrix` input::

    >>> import numpy as np
    >>> from sklearn.metrics import multilabel_confusion_matrix
    >>> y_true = np.array([[1, 0, 1],
    ...                    [0, 1, 0]])
    >>> y_pred = np.array([[1, 0, 0],
    ...                    [0, 1, 1]])
    >>> multilabel_confusion_matrix(y_true, y_pred)
    array([[[1, 0],
            [0, 1]],
    <BLANKLINE>
           [[1, 0],
            [0, 1]],
    <BLANKLINE>
           [[0, 1],
            [1, 0]]])

Or a confusion matrix can be constructed for each sample's labels:

    >>> multilabel_confusion_matrix(y_true, y_pred, samplewise=True)
    array([[[1, 0],
            [1, 1]],
    <BLANKLINE>
           [[1, 1],
            [0, 1]]])

Here is an example demonstrating the use of the
:func:`multilabel_confusion_matrix` function with
:term:`multiclass` input::

    >>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
    >>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
    >>> multilabel_confusion_matrix(y_true, y_pred,
    ...                       
      labels=[\"ant\", \"bird\", \"cat\"])\n    array([[[3, 1],\n            [0, 2]],\n    <BLANKLINE>\n           [[5, 0],\n            [1, 0]],\n    <BLANKLINE>\n           [[2, 1],\n            [1, 2]]])\n\nHere are some examples demonstrating the use of the\n:func:`multilabel_confusion_matrix` function to calculate recall\n(or sensitivity), specificity, fall out and miss rate for each class in a\nproblem with multilabel indicator matrix input.\n\nCalculating\n`recall <https:\/\/en.wikipedia.org\/wiki\/Sensitivity_and_specificity>`__\n(also called the true positive rate or the sensitivity) for each class::\n\n    >>> y_true = np.array([[0, 0, 1],\n    ...                    [0, 1, 0],\n    ...                    [1, 1, 0]])\n    >>> y_pred = np.array([[0, 1, 0],\n    ...                    [0, 0, 1],\n    ...                    [1, 1, 0]])\n    >>> mcm = multilabel_confusion_matrix(y_true, y_pred)\n    >>> tn = mcm[:, 0, 0]\n    >>> tp = mcm[:, 1, 1]\n    >>> fn = mcm[:, 1, 0]\n    >>> fp = mcm[:, 0, 1]\n    >>> tp \/ (tp + fn)\n    array([1. , 0.5, 0. ])\n\nCalculating\n`specificity <https:\/\/en.wikipedia.org\/wiki\/Sensitivity_and_specificity>`__\n(also called the true negative rate) for each class::\n\n    >>> tn \/ (tn + fp)\n    array([1. , 0. , 0.5])\n\nCalculating `fall out <https:\/\/en.wikipedia.org\/wiki\/False_positive_rate>`__\n(also called the false positive rate) for each class::\n\n    >>> fp \/ (fp + tn)\n    array([0. , 1. , 0.5])\n\nCalculating `miss rate\n<https:\/\/en.wikipedia.org\/wiki\/False_positives_and_false_negatives>`__\n(also called the false negative rate) for each class::\n\n    >>> fn \/ (fn + tp)\n    array([0. , 0.5, 1. ])\n\n.. 
_roc_metrics:

Receiver operating characteristic (ROC)
---------------------------------------

The function :func:`roc_curve` computes the
`receiver operating characteristic curve, or ROC curve <https://en.wikipedia.org/wiki/Receiver_operating_characteristic>`_.
Quoting Wikipedia:

  "A receiver operating characteristic (ROC), or simply ROC curve, is a
  graphical plot which illustrates the performance of a binary classifier
  system as its discrimination threshold is varied. It is created by plotting
  the fraction of true positives out of the positives (TPR = true positive
  rate) vs. the fraction of false positives out of the negatives (FPR = false
  positive rate), at various threshold settings. TPR is also known as
  sensitivity, and FPR is one minus the specificity or true negative rate."

This function requires the true binary value and the target scores, which can
either be probability estimates of the positive class, confidence values, or
binary decisions. Here is a small example of how to use the :func:`roc_curve`
function::

    >>> import numpy as np
    >>> from sklearn.metrics import roc_curve
    >>> y = np.array([1, 1, 2, 2])
    >>> scores = np.array([0.1, 0.4, 0.35, 0.8])
    >>> fpr, tpr, thresholds = roc_curve(y, scores, pos_label=2)
    >>> fpr
    array([0. , 0. , 0.5, 0.5, 1. ])
    >>> tpr
    array([0. , 0.5, 0.5, 1. , 1. ])
    >>> thresholds
    array([ inf, 0.8 , 0.4 , 0.35, 0.1 ])

Compared to metrics such as the subset accuracy, the Hamming loss, or the
F1 score, ROC doesn't require optimizing a threshold for each label.

The :func:`roc_auc_score` function, denoted by ROC-AUC or AUROC, computes the
area under the ROC curve. By doing so, the curve information is summarized in
one number.

The following figure shows the ROC curve and ROC-AUC score for a classifier
aimed to distinguish the virginica flower from the rest of the species in the
:ref:`iris_dataset`:

.. 
image:: ..\/auto_examples\/model_selection\/images\/sphx_glr_plot_roc_001.png\n   :target: ..\/auto_examples\/model_selection\/plot_roc.html\n   :scale: 75\n   :align: center\n\n\n\nFor more information see the `Wikipedia article on AUC\n<https:\/\/en.wikipedia.org\/wiki\/Receiver_operating_characteristic#Area_under_the_curve>`_.\n\n.. _roc_auc_binary:\n\nBinary case\n^^^^^^^^^^^\n\nIn the **binary case**, you can either provide the probability estimates, using\nthe `classifier.predict_proba()` method, or the non-thresholded decision values\ngiven by the `classifier.decision_function()` method. In the case of providing\nthe probability estimates, the probability of the class with the\n\"greater label\" should be provided. The \"greater label\" corresponds to\n`classifier.classes_[1]` and thus `classifier.predict_proba(X)[:, 1]`.\nTherefore, the `y_score` parameter is of size (n_samples,).\n\n  >>> from sklearn.datasets import load_breast_cancer\n  >>> from sklearn.linear_model import LogisticRegression\n  >>> from sklearn.metrics import roc_auc_score\n  >>> X, y = load_breast_cancer(return_X_y=True)\n  >>> clf = LogisticRegression(solver=\"liblinear\").fit(X, y)\n  >>> clf.classes_\n  array([0, 1])\n\nWe can use the probability estimates corresponding to `clf.classes_[1]`.\n\n  >>> y_score = clf.predict_proba(X)[:, 1]\n  >>> roc_auc_score(y, y_score)\n  0.99...\n\nOtherwise, we can use the non-thresholded decision values\n\n  >>> roc_auc_score(y, clf.decision_function(X))\n  0.99...\n\n.. _roc_auc_multiclass:\n\nMulti-class case\n^^^^^^^^^^^^^^^^\n\nThe :func:`roc_auc_score` function can also be used in **multi-class\nclassification**. Two averaging strategies are currently supported: the\none-vs-one algorithm computes the average of the pairwise ROC AUC scores, and\nthe one-vs-rest algorithm computes the average of the ROC AUC scores for each\nclass against all other classes. 
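For illustration, both averaging strategies can be sketched with the
``multi_class`` parameter of :func:`roc_auc_score`; the dataset and classifier
below are arbitrary choices for the sketch, not part of the metric::

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Fit a probabilistic classifier on a 3-class problem.
    X, y = load_iris(return_X_y=True)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    y_score = clf.predict_proba(X)  # shape (n_samples, n_classes)

    # One-vs-rest: average the AUC of each class against all the others.
    ovr_auc = roc_auc_score(y, y_score, multi_class="ovr", average="macro")
    # One-vs-one: average the AUC over all pairs of classes.
    ovo_auc = roc_auc_score(y, y_score, multi_class="ovo", average="macro")
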
In both cases, the predicted labels are
provided in an array with values from 0 to ``n_classes``, and the scores
correspond to the probability estimates that a sample belongs to a particular
class. The OvO and OvR algorithms support weighting uniformly
(``average='macro'``) and by prevalence (``average='weighted'``).

.. dropdown:: One-vs-one Algorithm

  Computes the average AUC of all possible pairwise
  combinations of classes. [HT2001]_ defines a multiclass AUC metric weighted
  uniformly:

  .. math::

    \frac{1}{c(c-1)}\sum_{j=1}^{c}\sum_{k > j}^c (\text{AUC}(j | k) +
    \text{AUC}(k | j))

  where :math:`c` is the number of classes and :math:`\text{AUC}(j | k)` is the
  AUC with class :math:`j` as the positive class and class :math:`k` as the
  negative class. In general,
  :math:`\text{AUC}(j | k) \neq \text{AUC}(k | j)` in the multiclass
  case. This algorithm is used by setting the keyword argument ``multi_class``
  to ``'ovo'`` and ``average`` to ``'macro'``.

  The [HT2001]_ multiclass AUC metric can be extended to be weighted by the
  prevalence:

  .. math::

    \frac{1}{c(c-1)}\sum_{j=1}^{c}\sum_{k > j}^c p(j \cup k)(
    \text{AUC}(j | k) + \text{AUC}(k | j))

  where :math:`c` is the number of classes. This algorithm is used by setting
  the keyword argument ``multi_class`` to ``'ovo'`` and ``average`` to
  ``'weighted'``. The ``'weighted'`` option returns a prevalence-weighted average
  as described in [FC2009]_.

.. dropdown:: One-vs-rest Algorithm

  Computes the AUC of each class against the rest
  [PD2000]_. The algorithm is functionally the same as the multilabel case. 
To
  enable this algorithm set the keyword argument ``multi_class`` to ``'ovr'``.
  In addition to ``'macro'`` [F2006]_ and ``'weighted'`` [F2001]_ averaging, OvR
  supports ``'micro'`` averaging.

  In applications where a high false positive rate is not tolerable, the parameter
  ``max_fpr`` of :func:`roc_auc_score` can be used to summarize the ROC curve up
  to the given limit.

  The following figure shows the micro-averaged ROC curve and its corresponding
  ROC-AUC score for a classifier aimed to distinguish the different species in
  the :ref:`iris_dataset`:

  .. image:: ../auto_examples/model_selection/images/sphx_glr_plot_roc_002.png
    :target: ../auto_examples/model_selection/plot_roc.html
    :scale: 75
    :align: center

.. _roc_auc_multilabel:

Multi-label case
^^^^^^^^^^^^^^^^

In **multi-label classification**, the :func:`roc_auc_score` function is
extended by averaging over the labels as :ref:`above <average>`. In this case,
you should provide a `y_score` of shape `(n_samples, n_classes)`. Thus, when
using the probability estimates, one needs to select the probability of the
class with the greater label for each output.

  >>> from sklearn.datasets import make_multilabel_classification
  >>> from sklearn.multioutput import MultiOutputClassifier
  >>> X, y = make_multilabel_classification(random_state=0)
  >>> inner_clf = LogisticRegression(solver="liblinear", random_state=0)
  >>> clf = MultiOutputClassifier(inner_clf).fit(X, y)
  >>> y_score = np.transpose([y_pred[:, 1] for y_pred in clf.predict_proba(X)])
  >>> roc_auc_score(y, y_score, average=None)
  array([0.82..., 0.86..., 0.94..., 0.85... , 0.94...])

And the decision values do not require such processing.

  >>> from sklearn.linear_model import RidgeClassifierCV
  >>> clf = RidgeClassifierCV().fit(X, y)
  >>> y_score = clf.decision_function(X)
  >>> roc_auc_score(y, y_score, average=None)
  array([0.81..., 0.84... 
, 0.93..., 0.87..., 0.94...])\n\n.. rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_model_selection_plot_roc.py` for an example of\n  using ROC to evaluate the quality of the output of a classifier.\n\n* See :ref:`sphx_glr_auto_examples_model_selection_plot_roc_crossval.py`  for an\n  example of using ROC to evaluate classifier output quality, using cross-validation.\n\n* See :ref:`sphx_glr_auto_examples_applications_plot_species_distribution_modeling.py`\n  for an example of using ROC to model species distribution.\n\n.. rubric:: References\n\n.. [HT2001] Hand, D.J. and Till, R.J., (2001). `A simple generalisation\n   of the area under the ROC curve for multiple class classification problems.\n   <http:\/\/link.springer.com\/article\/10.1023\/A:1010920819831>`_\n   Machine learning, 45(2), pp. 171-186.\n\n.. [FC2009] Ferri, C\u00e8sar & Hernandez-Orallo, Jose & Modroiu, R. (2009).\n   `An Experimental Comparison of Performance Measures for Classification.\n   <https:\/\/www.math.ucdavis.edu\/~saito\/data\/roc\/ferri-class-perf-metrics.pdf>`_\n   Pattern Recognition Letters. 30. 27-38.\n\n.. [PD2000] Provost, F., Domingos, P. (2000). `Well-trained PETs: Improving\n   probability estimation trees\n   <https:\/\/fosterprovost.com\/publication\/well-trained-pets-improving-probability-estimation-trees\/>`_\n   (Section 6.2), CeDER Working Paper #IS-00-04, Stern School of Business,\n   New York University.\n\n.. [F2006] Fawcett, T., 2006. `An introduction to ROC analysis.\n   <http:\/\/www.sciencedirect.com\/science\/article\/pii\/S016786550500303X>`_\n   Pattern Recognition Letters, 27(8), pp. 861-874.\n\n.. [F2001] Fawcett, T., 2001. `Using rule sets to maximize\n   ROC performance <https:\/\/ieeexplore.ieee.org\/document\/989510\/>`_\n   In Data Mining, 2001.\n   Proceedings IEEE International Conference, pp. 131-138.\n\n.. 
_det_curve:

Detection error tradeoff (DET)
------------------------------

The function :func:`det_curve` computes the
detection error tradeoff (DET) curve [WikipediaDET2017]_.
Quoting Wikipedia:

  "A detection error tradeoff (DET) graph is a graphical plot of error rates
  for binary classification systems, plotting false reject rate vs. false
  accept rate. The x- and y-axes are scaled non-linearly by their standard
  normal deviates (or just by logarithmic transformation), yielding tradeoff
  curves that are more linear than ROC curves, and use most of the image area
  to highlight the differences of importance in the critical operating region."

DET curves are a variation of receiver operating characteristic (ROC) curves
where False Negative Rate is plotted on the y-axis instead of True Positive
Rate.
DET curves are commonly plotted in normal deviate scale by transformation with
:math:`\phi^{-1}` (with :math:`\phi` being the cumulative distribution
function).
The resulting performance curves explicitly visualize the tradeoff of error
types for given classification algorithms.
See [Martin1997]_ for examples and further motivation.

This figure compares the ROC and DET curves of two example classifiers on the
same classification task:

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_det_001.png
   :target: ../auto_examples/model_selection/plot_det.html
   :scale: 75
   :align: center

.. 
dropdown:: Properties

  * DET curves form a linear curve in normal deviate scale if the detection
    scores are normally (or close-to normally) distributed.
    It was shown by [Navratil2007]_ that the reverse is not necessarily true and
    even more general distributions are able to produce linear DET curves.

  * The normal deviate scale transformation spreads out the points such that a
    comparatively larger space of plot is occupied.
    Therefore curves with similar classification performance might be easier to
    distinguish on a DET plot.

  * With False Negative Rate being "inverse" to True Positive Rate, the point
    of perfection for DET curves is the origin (in contrast to the top left
    corner for ROC curves).

.. dropdown:: Applications and limitations

  DET curves are intuitive to read and hence allow quick visual assessment of a
  classifier's performance.
  Additionally, DET curves can be consulted for threshold analysis and operating
  point selection.
  This is particularly helpful if a comparison of error types is required.

  On the other hand, DET curves do not provide their metric as a single number.
  Therefore, for either automated evaluation or comparison to other
  classification tasks, metrics like the derived area under the ROC curve might
  be better suited.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_det.py`
  for an example comparison between receiver operating characteristic (ROC)
  curves and Detection error tradeoff (DET) curves.

.. rubric:: References

.. [WikipediaDET2017] Wikipedia contributors. Detection error tradeoff.
    Wikipedia, The Free Encyclopedia. September 4, 2017, 23:33 UTC.
    Available at: https://en.wikipedia.org/w/index.php?title=Detection_error_tradeoff&oldid=798982054.
    Accessed February 19, 2018.

.. [Martin1997] A. Martin, G. Doddington, T. Kamm, M. Ordowski, and M. 
Przybocki,
    `The DET Curve in Assessment of Detection Task Performance
    <https://ccc.inaoep.mx/~villasen/bib/martin97det.pdf>`_, NIST 1997.

.. [Navratil2007] J. Navratil and D. Klusacek,
    `"On Linear DETs" <https://ieeexplore.ieee.org/document/4218079>`_,
    2007 IEEE International Conference on Acoustics,
    Speech and Signal Processing - ICASSP '07, Honolulu,
    HI, 2007, pp. IV-229-IV-232.

.. _zero_one_loss:

Zero one loss
--------------

The :func:`zero_one_loss` function computes the sum or the average of the 0-1
classification loss (:math:`L_{0-1}`) over :math:`n_{\text{samples}}`. By
default, the function normalizes over the samples. To get the sum of the
:math:`L_{0-1}`, set ``normalize`` to ``False``.

In multilabel classification, the :func:`zero_one_loss` scores a subset as
one if its labels strictly match the predictions, and as a zero if there
are any errors.  By default, the function returns the fraction of imperfectly
predicted subsets.  To get the count of such subsets instead, set
``normalize`` to ``False``.

If :math:`\hat{y}_i` is the predicted value of
the :math:`i`-th sample and :math:`y_i` is the corresponding true value,
then the 0-1 loss :math:`L_{0-1}` is defined as:

.. math::

   L_{0-1}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} 1(\hat{y}_i \not= y_i)

where :math:`1(x)` is the `indicator function
<https://en.wikipedia.org/wiki/Indicator_function>`_. 
The zero one
loss can also be computed as :math:`\text{zero-one loss} = 1 - \text{accuracy}`.


  >>> from sklearn.metrics import zero_one_loss
  >>> y_pred = [1, 2, 3, 4]
  >>> y_true = [2, 2, 3, 4]
  >>> zero_one_loss(y_true, y_pred)
  0.25
  >>> zero_one_loss(y_true, y_pred, normalize=False)
  1.0

In the multilabel case with binary label indicators, where the first label
set [0,1] has an error::

  >>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
  0.5

  >>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)), normalize=False)
  1.0

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_feature_selection_plot_rfe_with_cross_validation.py`
  for an example of zero one loss usage to perform recursive feature
  elimination with cross-validation.

.. _brier_score_loss:

Brier score loss
----------------

The :func:`brier_score_loss` function computes the
`Brier score <https://en.wikipedia.org/wiki/Brier_score>`_
for binary classes [Brier1950]_. Quoting Wikipedia:

    "The Brier score is a proper score function that measures the accuracy of
    probabilistic predictions. It is applicable to tasks in which predictions
    must assign probabilities to a set of mutually exclusive discrete outcomes."

This function returns the mean squared error of the actual outcome
:math:`y \in \{0,1\}` and the predicted probability estimate
:math:`p = \operatorname{Pr}(y = 1)` (:term:`predict_proba`):

.. 
math::

   BS = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}} - 1}(y_i - p_i)^2

The Brier score loss is also bounded between 0 and 1, and the lower the value
(i.e. the smaller the mean squared difference), the more accurate the
prediction.

Here is a small example of usage of this function::

    >>> import numpy as np
    >>> from sklearn.metrics import brier_score_loss
    >>> y_true = np.array([0, 1, 1, 0])
    >>> y_true_categorical = np.array(["spam", "ham", "ham", "spam"])
    >>> y_prob = np.array([0.1, 0.9, 0.8, 0.4])
    >>> y_pred = np.array([0, 1, 1, 0])
    >>> brier_score_loss(y_true, y_prob)
    0.055
    >>> brier_score_loss(y_true, 1 - y_prob, pos_label=0)
    0.055
    >>> brier_score_loss(y_true_categorical, y_prob, pos_label="ham")
    0.055
    >>> brier_score_loss(y_true, y_prob > 0.5)
    0.0

The Brier score can be used to assess how well a classifier is calibrated.
However, a lower Brier score loss does not always mean a better calibration.
This is because, by analogy with the bias-variance decomposition of the mean
squared error, the Brier score loss can be decomposed as the sum of calibration
loss and refinement loss [Bella2012]_. Calibration loss is defined as the mean
squared deviation from empirical probabilities derived from the slope of ROC
segments. Refinement loss can be defined as the expected optimal loss as
measured by the area under the optimal cost curve. Refinement loss can change
independently from calibration loss, thus a lower Brier score loss does not
necessarily mean a better calibrated model. "Only when refinement loss remains
the same does a lower Brier score loss always mean better calibration"
[Bella2012]_, [Flach2008]_.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_calibration_plot_calibration.py`
  for an example of Brier score loss usage to perform probability
  calibration of classifiers.

.. rubric:: References

.. [Brier1950] G. 
Brier, `Verification of forecasts expressed in terms of probability
  <ftp://ftp.library.noaa.gov/docs.lib/htdocs/rescue/mwr/078/mwr-078-01-0001.pdf>`_,
  Monthly weather review 78.1 (1950)

.. [Bella2012] Bella, Ferri, Hernández-Orallo, and Ramírez-Quintana
  `"Calibration of Machine Learning Models"
  <http://dmip.webs.upv.es/papers/BFHRHandbook2010.pdf>`_
  in Khosrow-Pour, M. "Machine learning: concepts, methodologies, tools
  and applications." Hershey, PA: Information Science Reference (2012).

.. [Flach2008] Flach, Peter, and Edson Matsubara. `"On classification, ranking,
  and probability estimation." <https://drops.dagstuhl.de/opus/volltexte/2008/1382/>`_
  Dagstuhl Seminar Proceedings. Schloss Dagstuhl-Leibniz-Zentrum für Informatik (2008).

.. _class_likelihood_ratios:

Class likelihood ratios
-----------------------

The :func:`class_likelihood_ratios` function computes the `positive and negative
likelihood ratios
<https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing>`_
:math:`LR_\pm` for binary classes, which can be interpreted as the ratio of
post-test to pre-test odds as explained below. As a consequence, this metric is
invariant w.r.t. the class prevalence (the number of samples in the positive
class divided by the total number of samples) and **can be extrapolated between
populations regardless of any possible class imbalance.**

The :math:`LR_\pm` metrics are therefore very useful in settings where the data
available to learn and evaluate a classifier is a study population with nearly
balanced classes, such as a case-control study, while the target application,
i.e. 
the general population, has very low prevalence.

The positive likelihood ratio :math:`LR_+` is the probability of a classifier to
correctly predict that a sample belongs to the positive class divided by the
probability of predicting the positive class for a sample belonging to the
negative class:

.. math::

   LR_+ = \frac{\text{PR}(P+|T+)}{\text{PR}(P+|T-)}.

The notation here refers to the predicted (:math:`P`) or true (:math:`T`) label,
and the signs :math:`+` and :math:`-` refer to the positive and negative class,
respectively, e.g. :math:`P+` stands for "predicted positive".

Analogously, the negative likelihood ratio :math:`LR_-` is the probability of a
sample of the positive class being classified as belonging to the negative class
divided by the probability of a sample of the negative class being correctly
classified:

.. math::

   LR_- = \frac{\text{PR}(P-|T+)}{\text{PR}(P-|T-)}.

For classifiers above chance, :math:`LR_+` is greater than 1, and **higher is
better**, while :math:`LR_-` ranges from 0 to 1, and **lower is better**.
Values of :math:`LR_\pm\approx 1` correspond to chance level.

Notice that probabilities differ from counts, for instance
:math:`\text{PR}(P+|T+)` is not equal to the number of true positive
counts ``tp`` (see `the wikipedia page
<https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing>`_ for
the actual formulas).

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_model_selection_plot_likelihood_ratios.py`

.. dropdown:: Interpretation across varying prevalence

  Both class likelihood ratios are interpretable in terms of an odds ratio
  (pre-test and post-tests):

  .. math::

    \text{post-test odds} = \text{Likelihood ratio} \times \text{pre-test odds}.

  Odds are in general related to probabilities via

  .. math::

    \text{odds} = \frac{\text{probability}}{1 - \text{probability}},

  or equivalently

  .. 
math::\n\n    \\text{probability} = \\frac{\\text{odds}}{1 + \\text{odds}}.\n\n  On a given population, the pre-test probability is given by the prevalence. By\n  converting odds to probabilities, the likelihood ratios can be translated into a\n  probability of truly belonging to either class before and after a classifier\n  prediction:\n\n  .. math::\n\n    \\text{post-test odds} = \\text{Likelihood ratio} \\times\n    \\frac{\\text{pre-test probability}}{1 - \\text{pre-test probability}},\n\n  .. math::\n\n    \\text{post-test probability} = \\frac{\\text{post-test odds}}{1 + \\text{post-test odds}}.\n\n.. dropdown:: Mathematical divergences\n\n  The positive likelihood ratio is undefined when :math:`fp = 0`, which can be\n  interpreted as the classifier perfectly identifying positive cases. If :math:`fp\n  = 0` and additionally :math:`tp = 0`, this leads to a zero\/zero division. This\n  happens, for instance, when using a `DummyClassifier` that always predicts the\n  negative class and therefore the interpretation as a perfect classifier is lost.\n\n  The negative likelihood ratio is undefined when :math:`tn = 0`. Such divergence\n  is invalid, as :math:`LR_- > 1` would indicate an increase in the odds of a\n  sample belonging to the positive class after being classified as negative, as if\n  the act of classifying caused the positive condition. This includes the case of\n  a `DummyClassifier` that always predicts the positive class (i.e. when\n  :math:`tn=fn=0`).\n\n  Both class likelihood ratios are undefined when :math:`tp=fn=0`, which means\n  that no samples of the positive class were present in the testing set. 
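  As a minimal sketch of this last degenerate case (the toy labels below are
  chosen purely for illustration)::

    import warnings

    import numpy as np
    from sklearn.metrics import class_likelihood_ratios

    # No samples of the positive class in y_true: tp = fn = 0.
    y_true = np.array([0, 0, 0, 0])
    y_pred = np.array([1, 0, 1, 0])
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")  # silence the warning described above
        pos_lr, neg_lr = class_likelihood_ratios(y_true, y_pred)
    # Both returned ratios are nan.
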
This can\n  also happen when cross-validating highly imbalanced data.\n\n  In all the previous cases the :func:`class_likelihood_ratios` function raises by\n  default an appropriate warning message and returns `nan` to avoid pollution when\n  averaging over cross-validation folds.\n\n  For a worked-out demonstration of the :func:`class_likelihood_ratios` function,\n  see the example below.\n\n.. dropdown:: References\n\n  * `Wikipedia entry for Likelihood ratios in diagnostic testing\n    <https:\/\/en.wikipedia.org\/wiki\/Likelihood_ratios_in_diagnostic_testing>`_\n\n  * Brenner, H., & Gefeller, O. (1997).\n    Variation of sensitivity, specificity, likelihood ratios and predictive\n    values with disease prevalence.\n    Statistics in medicine, 16(9), 981-991.\n\n\n.. _d2_score_classification:\n\nD\u00b2 score for classification\n---------------------------\n\nThe D\u00b2 score computes the fraction of deviance explained.\nIt is a generalization of R\u00b2, where the squared error is generalized and replaced\nby a classification deviance of choice :math:`\\text{dev}(y, \\hat{y})`\n(e.g., Log loss). D\u00b2 is a form of a *skill score*.\nIt is calculated as\n\n.. math::\n\n  D^2(y, \\hat{y}) = 1 - \\frac{\\text{dev}(y, \\hat{y})}{\\text{dev}(y, y_{\\text{null}})} \\,.\n\nWhere :math:`y_{\\text{null}}` is the optimal prediction of an intercept-only model\n(e.g., the per-class proportion of `y_true` in the case of the Log loss).\n\nLike R\u00b2, the best possible score is 1.0 and it can be negative (because the\nmodel can be arbitrarily worse). A constant model that always predicts\n:math:`y_{\\text{null}}`, disregarding the input features, would get a D\u00b2 score\nof 0.0.\n\n.. dropdown:: D2 log loss score\n\n  The :func:`d2_log_loss_score` function implements the special case\n  of D\u00b2 with the log loss, see :ref:`log_loss`, i.e.:\n\n  .. 
math::

    \text{dev}(y, \hat{y}) = \text{log_loss}(y, \hat{y}).

  Here are some usage examples of the :func:`d2_log_loss_score` function::

    >>> from sklearn.metrics import d2_log_loss_score
    >>> y_true = [1, 1, 2, 3]
    >>> y_pred = [
    ...    [0.5, 0.25, 0.25],
    ...    [0.5, 0.25, 0.25],
    ...    [0.5, 0.25, 0.25],
    ...    [0.5, 0.25, 0.25],
    ... ]
    >>> d2_log_loss_score(y_true, y_pred)
    0.0
    >>> y_true = [1, 2, 3]
    >>> y_pred = [
    ...     [0.98, 0.01, 0.01],
    ...     [0.01, 0.98, 0.01],
    ...     [0.01, 0.01, 0.98],
    ... ]
    >>> d2_log_loss_score(y_true, y_pred)
    0.981...
    >>> y_true = [1, 2, 3]
    >>> y_pred = [
    ...     [0.1, 0.6, 0.3],
    ...     [0.1, 0.6, 0.3],
    ...     [0.4, 0.5, 0.1],
    ... ]
    >>> d2_log_loss_score(y_true, y_pred)
    -0.552...


.. _multilabel_ranking_metrics:

Multilabel ranking metrics
==========================

.. currentmodule:: sklearn.metrics

In multilabel learning, each sample can have any number of ground truth labels
associated with it. The goal is to give high scores and better rank to
the ground truth labels.

.. _coverage_error:

Coverage error
--------------

The :func:`coverage_error` function computes the average number of labels that
have to be included in the final prediction such that all true labels
are predicted. This is useful if you want to know how many top-scored labels
you have to predict on average without missing any true one. The best value
of this metric is thus the average number of true labels.

.. note::

    Our implementation's score is 1 greater than the one given in Tsoumakas
    et al., 2010. 
This extends it to handle the degenerate case in which an\n    instance has 0 true labels.\n\nFormally, given a binary indicator matrix of the ground truth labels\n:math:`y \\in \\left\\{0, 1\\right\\}^{n_\\text{samples} \\times n_\\text{labels}}` and the\nscore associated with each label\n:math:`\\hat{f} \\in \\mathbb{R}^{n_\\text{samples} \\times n_\\text{labels}}`,\nthe coverage is defined as\n\n.. math::\n  coverage(y, \\hat{f}) = \\frac{1}{n_{\\text{samples}}}\n    \\sum_{i=0}^{n_{\\text{samples}} - 1} \\max_{j:y_{ij} = 1} \\text{rank}_{ij}\n\nwith :math:`\\text{rank}_{ij} = \\left|\\left\\{k: \\hat{f}_{ik} \\geq \\hat{f}_{ij} \\right\\}\\right|`.\nGiven the rank definition, ties in ``y_scores`` are broken by giving the\nmaximal rank that would have been assigned to all tied values.\n\nHere is a small example of usage of this function::\n\n    >>> import numpy as np\n    >>> from sklearn.metrics import coverage_error\n    >>> y_true = np.array([[1, 0, 0], [0, 0, 1]])\n    >>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])\n    >>> coverage_error(y_true, y_score)\n    2.5\n\n.. _label_ranking_average_precision:\n\nLabel ranking average precision\n-------------------------------\n\nThe :func:`label_ranking_average_precision_score` function\nimplements label ranking average precision (LRAP). This metric is linked to\nthe :func:`average_precision_score` function, but is based on the notion of\nlabel ranking instead of precision and recall.\n\nLabel ranking average precision (LRAP) averages over the samples the answer to\nthe following question: for each ground truth label, what fraction of\nhigher-ranked labels were true labels? 
This performance measure will be higher\nif you are able to give better rank to the labels associated with each sample.\nThe obtained score is always strictly greater than 0, and the best value is 1.\nIf there is exactly one relevant label per sample, label ranking average\nprecision is equivalent to the `mean\nreciprocal rank <https:\/\/en.wikipedia.org\/wiki\/Mean_reciprocal_rank>`_.\n\nFormally, given a binary indicator matrix of the ground truth labels\n:math:`y \\in \\left\\{0, 1\\right\\}^{n_\\text{samples} \\times n_\\text{labels}}`\nand the score associated with each label\n:math:`\\hat{f} \\in \\mathbb{R}^{n_\\text{samples} \\times n_\\text{labels}}`,\nthe average precision is defined as\n\n.. math::\n  LRAP(y, \\hat{f}) = \\frac{1}{n_{\\text{samples}}}\n    \\sum_{i=0}^{n_{\\text{samples}} - 1} \\frac{1}{||y_i||_0}\n    \\sum_{j:y_{ij} = 1} \\frac{|\\mathcal{L}_{ij}|}{\\text{rank}_{ij}}\n\n\nwhere\n:math:`\\mathcal{L}_{ij} = \\left\\{k: y_{ik} = 1, \\hat{f}_{ik} \\geq \\hat{f}_{ij} \\right\\}`,\n:math:`\\text{rank}_{ij} = \\left|\\left\\{k: \\hat{f}_{ik} \\geq \\hat{f}_{ij} \\right\\}\\right|`,\n:math:`|\\cdot|` computes the cardinality of the set (i.e., the number of\nelements in the set), and :math:`||\\cdot||_0` is the :math:`\\ell_0` \"norm\"\n(which computes the number of nonzero elements in a vector).\n\nHere is a small example of usage of this function::\n\n    >>> import numpy as np\n    >>> from sklearn.metrics import label_ranking_average_precision_score\n    >>> y_true = np.array([[1, 0, 0], [0, 0, 1]])\n    >>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])\n    >>> label_ranking_average_precision_score(y_true, y_score)\n    0.416...\n\n.. _label_ranking_loss:\n\nRanking loss\n------------\n\nThe :func:`label_ranking_loss` function computes the ranking loss which\naverages over the samples the number of label pairs that are incorrectly\nordered, i.e. 
true labels have a lower score than false labels, weighted by\nthe inverse of the number of ordered pairs of false and true labels.\nThe lowest achievable ranking loss is zero.\n\nFormally, given a binary indicator matrix of the ground truth labels\n:math:`y \\in \\left\\{0, 1\\right\\}^{n_\\text{samples} \\times n_\\text{labels}}` and the\nscore associated with each label\n:math:`\\hat{f} \\in \\mathbb{R}^{n_\\text{samples} \\times n_\\text{labels}}`,\nthe ranking loss is defined as\n\n.. math::\n  ranking\\_loss(y, \\hat{f}) =  \\frac{1}{n_{\\text{samples}}}\n    \\sum_{i=0}^{n_{\\text{samples}} - 1} \\frac{1}{||y_i||_0(n_\\text{labels} - ||y_i||_0)}\n    \\left|\\left\\{(k, l): \\hat{f}_{ik} \\leq \\hat{f}_{il}, y_{ik} = 1, y_{il} = 0\u00a0\\right\\}\\right|\n\nwhere :math:`|\\cdot|` computes the cardinality of the set (i.e., the number of\nelements in the set) and :math:`||\\cdot||_0` is the :math:`\\ell_0` \"norm\"\n(which computes the number of nonzero elements in a vector).\n\nHere is a small example of usage of this function::\n\n    >>> import numpy as np\n    >>> from sklearn.metrics import label_ranking_loss\n    >>> y_true = np.array([[1, 0, 0], [0, 0, 1]])\n    >>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])\n    >>> label_ranking_loss(y_true, y_score)\n    0.75...\n    >>> # With the following prediction, we have perfect and minimal loss\n    >>> y_score = np.array([[1.0, 0.1, 0.2], [0.1, 0.2, 0.9]])\n    >>> label_ranking_loss(y_true, y_score)\n    0.0\n\n\n.. dropdown:: References\n\n  * Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In\n    Data mining and knowledge discovery handbook (pp. 667-685). Springer US.\n\n\n.. 
_ndcg:

Normalized Discounted Cumulative Gain
-------------------------------------

Discounted Cumulative Gain (DCG) and Normalized Discounted Cumulative Gain
(NDCG) are ranking metrics implemented in :func:`~sklearn.metrics.dcg_score`
and :func:`~sklearn.metrics.ndcg_score`; they compare a predicted order to
ground-truth scores, such as the relevance of answers to a query.

From the Wikipedia page for Discounted Cumulative Gain:

"Discounted cumulative gain (DCG) is a measure of ranking quality. In
information retrieval, it is often used to measure effectiveness of web search
engine algorithms or related applications. Using a graded relevance scale of
documents in a search-engine result set, DCG measures the usefulness, or gain,
of a document based on its position in the result list. The gain is accumulated
from the top of the result list to the bottom, with the gain of each result
discounted at lower ranks"

DCG orders the true targets (e.g. relevance of query answers) in the predicted
order, then multiplies them by a logarithmic decay and sums the result. The sum
can be truncated after the first :math:`K` results, in which case we call it
DCG@K.
NDCG, or NDCG@K, is DCG divided by the DCG obtained by a perfect prediction, so
that it is always between 0 and 1. Usually, NDCG is preferred to DCG.

Compared with the ranking loss, NDCG can take into account relevance scores,
rather than a ground-truth ranking. So if the ground-truth consists only of an
ordering, the ranking loss should be preferred; if the ground-truth consists of
actual usefulness scores (e.g. 0 for irrelevant, 1 for relevant, 2 for very
relevant), NDCG can be used.

For one sample, given the vector of continuous ground-truth values for each
target :math:`y \in \mathbb{R}^{M}`, where :math:`M` is the number of outputs, and
the prediction :math:`\hat{y}`, which induces the ranking function :math:`f`, the
DCG score is

.. 
math::\n   \\sum_{r=1}^{\\min(K, M)}\\frac{y_{f(r)}}{\\log(1 + r)}\n\nand the NDCG score is the DCG score divided by the DCG score obtained for\n:math:`y`.\n\n.. dropdown:: References\n\n  * `Wikipedia entry for Discounted Cumulative Gain\n    <https:\/\/en.wikipedia.org\/wiki\/Discounted_cumulative_gain>`_\n\n  * Jarvelin, K., & Kekalainen, J. (2002).\n    Cumulated gain-based evaluation of IR techniques. ACM Transactions on\n    Information Systems (TOIS), 20(4), 422-446.\n\n  * Wang, Y., Wang, L., Li, Y., He, D., Chen, W., & Liu, T. Y. (2013, May).\n    A theoretical analysis of NDCG ranking measures. In Proceedings of the 26th\n    Annual Conference on Learning Theory (COLT 2013)\n\n  * McSherry, F., & Najork, M. (2008, March). Computing information retrieval\n    performance measures efficiently in the presence of tied scores. In\n    European conference on information retrieval (pp. 414-421). Springer,\n    Berlin, Heidelberg.\n\n\n.. _regression_metrics:\n\nRegression metrics\n===================\n\n.. currentmodule:: sklearn.metrics\n\nThe :mod:`sklearn.metrics` module implements several loss, score, and utility\nfunctions to measure regression performance. Some of those have been enhanced\nto handle the multioutput case: :func:`mean_squared_error`,\n:func:`mean_absolute_error`, :func:`r2_score`,\n:func:`explained_variance_score`, :func:`mean_pinball_loss`, :func:`d2_pinball_score`\nand :func:`d2_absolute_error_score`.\n\n\nThese functions have a ``multioutput`` keyword argument which specifies the\nway the scores or losses for each individual target should be averaged. The\ndefault is ``'uniform_average'``, which specifies a uniformly weighted mean\nover outputs. If an ``ndarray`` of shape ``(n_outputs,)`` is passed, then its\nentries are interpreted as weights and an according weighted average is\nreturned. 
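For instance, the following minimal sketch (using :func:`mean_absolute_error`, one of the functions listed above) shows that passing a weights array is equivalent to taking a weighted mean of the per-output scores:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

y_true = [[0.5, 1], [-1, 1], [7, -6]]
y_pred = [[0, 2], [-1, 2], [8, -5]]

# One score per output, then the same weighted average computed by hand
raw = mean_absolute_error(y_true, y_pred, multioutput="raw_values")
weighted = mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])

assert np.isclose(weighted, np.average(raw, weights=[0.3, 0.7]))  # 0.85
```
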
If ``multioutput`` is ``'raw_values'``, then all unaltered
individual scores or losses will be returned in an array of shape
``(n_outputs,)``.


The :func:`r2_score` and :func:`explained_variance_score` functions accept an
additional value ``'variance_weighted'`` for the ``multioutput`` parameter. This
option leads to a weighting of each individual score by the variance of the
corresponding target variable. This setting quantifies the globally captured
unscaled variance. If the target variables are of different scales, then this
score puts more importance on explaining the higher-variance variables.

.. _r2_score:

R² score, the coefficient of determination
-------------------------------------------

The :func:`r2_score` function computes the `coefficient of
determination <https://en.wikipedia.org/wiki/Coefficient_of_determination>`_,
usually denoted as :math:`R^2`.

It represents the proportion of variance (of y) that has been explained by the
independent variables in the model. It provides an indication of goodness of
fit and therefore a measure of how well unseen samples are likely to be
predicted by the model, through the proportion of explained variance.

As such variance is dataset dependent, :math:`R^2` may not be meaningfully comparable
across different datasets. The best possible score is 1.0 and it can be negative
(because the model can be arbitrarily worse). A constant model that always
predicts the expected (average) value of y, disregarding the input features,
would get an :math:`R^2` score of 0.0.

Note: when the prediction residuals have zero mean, the :math:`R^2` score and
the :ref:`explained_variance_score` are identical.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
and :math:`y_i` is the corresponding true value for a total of :math:`n` samples,
the estimated :math:`R^2` is defined as:

.. 
math::

  R^2(y, \hat{y}) = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}

where :math:`\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i` and :math:`\sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} \epsilon_i^2`.

Note that :func:`r2_score` calculates unadjusted :math:`R^2` without correcting for
bias in sample variance of y.

In the particular case where the true target is constant, the :math:`R^2` score is
not finite: it is either ``NaN`` (perfect predictions) or ``-Inf`` (imperfect
predictions). Such non-finite scores may prevent model optimization methods such
as grid-search cross-validation from being performed correctly. For this reason
the default behaviour of :func:`r2_score` is to replace them with 1.0 (perfect
predictions) or 0.0 (imperfect predictions). If ``force_finite``
is set to ``False``, this score falls back on the original :math:`R^2` definition.

Here is a small example of usage of the :func:`r2_score` function::

  >>> from sklearn.metrics import r2_score
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> r2_score(y_true, y_pred)
  0.948...
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> r2_score(y_true, y_pred, multioutput='variance_weighted')
  0.938...
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> r2_score(y_true, y_pred, multioutput='uniform_average')
  0.936...
  >>> r2_score(y_true, y_pred, multioutput='raw_values')
  array([0.965..., 0.908...])
  >>> r2_score(y_true, y_pred, multioutput=[0.3, 0.7])
  0.925...
  >>> y_true = [-2, -2, -2]
  >>> y_pred = [-2, -2, -2]
  >>> r2_score(y_true, y_pred)
  1.0
  >>> r2_score(y_true, y_pred, force_finite=False)
  nan
  >>> y_true = [-2, -2, -2]
  >>> y_pred = [-2, -2, -2 + 1e-8]
  >>> r2_score(y_true, y_pred)
  0.0
  >>> r2_score(y_true, y_pred, force_finite=False)
  -inf

.. 
rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_linear_model_plot_lasso_and_elasticnet.py`\n  for an example of R\u00b2 score usage to\n  evaluate Lasso and Elastic Net on sparse signals.\n\n.. _mean_absolute_error:\n\nMean absolute error\n-------------------\n\nThe :func:`mean_absolute_error` function computes `mean absolute\nerror <https:\/\/en.wikipedia.org\/wiki\/Mean_absolute_error>`_, a risk\nmetric corresponding to the expected value of the absolute error loss or\n:math:`l1`-norm loss.\n\nIf :math:`\\hat{y}_i` is the predicted value of the :math:`i`-th sample,\nand :math:`y_i` is the corresponding true value, then the mean absolute error\n(MAE) estimated over :math:`n_{\\text{samples}}` is defined as\n\n.. math::\n\n  \\text{MAE}(y, \\hat{y}) = \\frac{1}{n_{\\text{samples}}} \\sum_{i=0}^{n_{\\text{samples}}-1} \\left| y_i - \\hat{y}_i \\right|.\n\nHere is a small example of usage of the :func:`mean_absolute_error` function::\n\n  >>> from sklearn.metrics import mean_absolute_error\n  >>> y_true = [3, -0.5, 2, 7]\n  >>> y_pred = [2.5, 0.0, 2, 8]\n  >>> mean_absolute_error(y_true, y_pred)\n  0.5\n  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]\n  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]\n  >>> mean_absolute_error(y_true, y_pred)\n  0.75\n  >>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')\n  array([0.5, 1. ])\n  >>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])\n  0.85...\n\n.. _mean_squared_error:\n\nMean squared error\n-------------------\n\nThe :func:`mean_squared_error` function computes `mean squared\nerror <https:\/\/en.wikipedia.org\/wiki\/Mean_squared_error>`_, a risk\nmetric corresponding to the expected value of the squared (quadratic) error or\nloss.\n\nIf :math:`\\hat{y}_i` is the predicted value of the :math:`i`-th sample,\nand :math:`y_i` is the corresponding true value, then the mean squared error\n(MSE) estimated over :math:`n_{\\text{samples}}` is defined as\n\n.. 
math::

  \text{MSE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (y_i - \hat{y}_i)^2.

Here is a small example of usage of the :func:`mean_squared_error`
function::

  >>> from sklearn.metrics import mean_squared_error
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> mean_squared_error(y_true, y_pred)
  0.375
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> mean_squared_error(y_true, y_pred)
  0.7083...

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_regression.py`
  for an example of mean squared error usage to evaluate gradient boosting regression.

Taking the square root of the MSE, called the root mean squared error (RMSE), is another
common metric that provides a measure in the same units as the target variable. RMSE is
available through the :func:`root_mean_squared_error` function.

.. _mean_squared_log_error:

Mean squared logarithmic error
------------------------------

The :func:`mean_squared_log_error` function computes a risk metric
corresponding to the expected value of the squared logarithmic (quadratic)
error or loss.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean squared
logarithmic error (MSLE) estimated over :math:`n_{\text{samples}}` is
defined as

.. math::

  \text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log_e (1 + y_i) - \log_e (1 + \hat{y}_i) )^2.

Where :math:`\log_e (x)` means the natural logarithm of :math:`x`. This metric
is best to use when targets have exponential growth, such as population
counts, average sales of a commodity over a span of years, etc. 
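A quick illustrative check (with arbitrarily chosen values) comparing an under-prediction and an over-prediction of the same absolute size:

```python
from sklearn.metrics import mean_squared_log_error

# Same absolute error of 30 on a true value of 100, in both directions
under = mean_squared_log_error([100], [70])    # predict too low
over = mean_squared_log_error([100], [130])    # predict too high

assert under > over  # the under-prediction costs more
```
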
Note that this
metric penalizes an under-predicted estimate more heavily than an over-predicted
estimate.

Here is a small example of usage of the :func:`mean_squared_log_error`
function::

  >>> from sklearn.metrics import mean_squared_log_error
  >>> y_true = [3, 5, 2.5, 7]
  >>> y_pred = [2.5, 5, 4, 8]
  >>> mean_squared_log_error(y_true, y_pred)
  0.039...
  >>> y_true = [[0.5, 1], [1, 2], [7, 6]]
  >>> y_pred = [[0.5, 2], [1, 2.5], [8, 8]]
  >>> mean_squared_log_error(y_true, y_pred)
  0.044...

The root mean squared logarithmic error (RMSLE) is available through the
:func:`root_mean_squared_log_error` function.

.. _mean_absolute_percentage_error:

Mean absolute percentage error
------------------------------
The :func:`mean_absolute_percentage_error` (MAPE), also known as mean absolute
percentage deviation (MAPD), is an evaluation metric for regression problems.
The idea of this metric is to be sensitive to relative errors. It is, for
example, not changed by a global scaling of the target variable.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
and :math:`y_i` is the corresponding true value, then the mean absolute percentage
error (MAPE) estimated over :math:`n_{\text{samples}}` is defined as

.. 
math::

  \text{MAPE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \frac{{}\left| y_i - \hat{y}_i \right|}{\max(\epsilon, \left| y_i \right|)}

where :math:`\epsilon` is an arbitrarily small yet strictly positive number to
avoid undefined results when y is zero.

The :func:`mean_absolute_percentage_error` function supports multioutput.

Here is a small example of usage of the :func:`mean_absolute_percentage_error`
function::

  >>> from sklearn.metrics import mean_absolute_percentage_error
  >>> y_true = [1, 10, 1e6]
  >>> y_pred = [0.9, 15, 1.2e6]
  >>> mean_absolute_percentage_error(y_true, y_pred)
  0.2666...

In the above example, if we had used `mean_absolute_error`, it would have ignored
the small magnitude values and only reflected the error in predicting the
highest magnitude value. MAPE avoids that problem because it computes the error
relative to the actual value.

.. note::

    The MAPE formula here does not represent the common "percentage" definition: the
    percentage in the range [0, 100] is converted to a relative value in the range [0,
    1] by dividing by 100. Thus, an error of 200% corresponds to a relative error of 2.
    The motivation here is to have a range of values that is more consistent with other
    error metrics in scikit-learn, such as `accuracy_score`.

    To obtain the mean absolute percentage error as per the Wikipedia formula,
    multiply the `mean_absolute_percentage_error` computed here by 100.

.. dropdown:: References

  * `Wikipedia entry for Mean Absolute Percentage Error
    <https://en.wikipedia.org/wiki/Mean_absolute_percentage_error>`_

.. _median_absolute_error:

Median absolute error
---------------------

The :func:`median_absolute_error` is particularly interesting because it is
robust to outliers. 
The loss is calculated by taking the median of all absolute
differences between the target and the prediction.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
and :math:`y_i` is the corresponding true value, then the median absolute error
(MedAE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

  \text{MedAE}(y, \hat{y}) = \text{median}(\mid y_1 - \hat{y}_1 \mid, \ldots, \mid y_n - \hat{y}_n \mid).

The :func:`median_absolute_error` does not support multioutput.

Here is a small example of usage of the :func:`median_absolute_error`
function::

  >>> from sklearn.metrics import median_absolute_error
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> median_absolute_error(y_true, y_pred)
  0.5


.. _max_error:

Max error
-------------------

The :func:`max_error` function computes the maximum `residual error
<https://en.wikipedia.org/wiki/Errors_and_residuals>`_, a metric
that captures the worst-case error between the predicted value and
the true value. In a perfectly fitted single-output regression
model, ``max_error`` would be ``0`` on the training set; although this
would be highly unlikely in the real world, this metric shows the
extent of error that the model had when it was fitted.


If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the max error is
defined as

.. math::

  \text{Max Error}(y, \hat{y}) = \max(| y_i - \hat{y}_i |)

Here is a small example of usage of the :func:`max_error` function::

  >>> from sklearn.metrics import max_error
  >>> y_true = [3, 2, 7, 1]
  >>> y_pred = [9, 2, 7, 1]
  >>> max_error(y_true, y_pred)
  6

The :func:`max_error` does not support multioutput.

.. 
_explained_variance_score:

Explained variance score
-------------------------

The :func:`explained_variance_score` computes the `explained variance
regression score <https://en.wikipedia.org/wiki/Explained_variation>`_.

If :math:`\hat{y}` is the estimated target output, :math:`y` the corresponding
(correct) target output, and :math:`Var` is `Variance
<https://en.wikipedia.org/wiki/Variance>`_, the square of the standard deviation,
then the explained variance is estimated as follows:

.. math::

  explained\_{}variance(y, \hat{y}) = 1 - \frac{Var\{ y - \hat{y}\}}{Var\{y\}}

The best possible score is 1.0; lower values are worse.

.. topic:: Link to :ref:`r2_score`

    The difference between the explained variance score and the :ref:`r2_score`
    is that the explained variance score does not account for
    systematic offset in the prediction. For this reason, the
    :ref:`r2_score` should be preferred in general.

In the particular case where the true target is constant, the Explained
Variance score is not finite: it is either ``NaN`` (perfect predictions) or
``-Inf`` (imperfect predictions). Such non-finite scores may prevent model
optimization methods such as grid-search cross-validation from being performed
correctly. For this reason the default behaviour of
:func:`explained_variance_score` is to replace them with 1.0 (perfect
predictions) or 0.0 (imperfect predictions). 
You can set the ``force_finite``\nparameter to ``False`` to prevent this fix from happening and fallback on the\noriginal Explained Variance score.\n\nHere is a small example of usage of the :func:`explained_variance_score`\nfunction::\n\n    >>> from sklearn.metrics import explained_variance_score\n    >>> y_true = [3, -0.5, 2, 7]\n    >>> y_pred = [2.5, 0.0, 2, 8]\n    >>> explained_variance_score(y_true, y_pred)\n    0.957...\n    >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]\n    >>> y_pred = [[0, 2], [-1, 2], [8, -5]]\n    >>> explained_variance_score(y_true, y_pred, multioutput='raw_values')\n    array([0.967..., 1.        ])\n    >>> explained_variance_score(y_true, y_pred, multioutput=[0.3, 0.7])\n    0.990...\n    >>> y_true = [-2, -2, -2]\n    >>> y_pred = [-2, -2, -2]\n    >>> explained_variance_score(y_true, y_pred)\n    1.0\n    >>> explained_variance_score(y_true, y_pred, force_finite=False)\n    nan\n    >>> y_true = [-2, -2, -2]\n    >>> y_pred = [-2, -2, -2 + 1e-8]\n    >>> explained_variance_score(y_true, y_pred)\n    0.0\n    >>> explained_variance_score(y_true, y_pred, force_finite=False)\n    -inf\n\n\n.. _mean_tweedie_deviance:\n\nMean Poisson, Gamma, and Tweedie deviances\n------------------------------------------\nThe :func:`mean_tweedie_deviance` function computes the `mean Tweedie\ndeviance error\n<https:\/\/en.wikipedia.org\/wiki\/Tweedie_distribution#The_Tweedie_deviance>`_\nwith a ``power`` parameter (:math:`p`). 
This is a metric that elicits
predicted expectation values of regression targets.

The following special cases exist:

- when ``power=0`` it is equivalent to :func:`mean_squared_error`.
- when ``power=1`` it is equivalent to :func:`mean_poisson_deviance`.
- when ``power=2`` it is equivalent to :func:`mean_gamma_deviance`.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean Tweedie
deviance error (D) for power :math:`p`, estimated over :math:`n_{\text{samples}}`
is defined as

.. math::

  \text{D}(y, \hat{y}) = \frac{1}{n_\text{samples}}
  \sum_{i=0}^{n_\text{samples} - 1}
  \begin{cases}
  (y_i-\hat{y}_i)^2, & \text{for }p=0\text{ (Normal)}\\
  2(y_i \log(y_i/\hat{y}_i) + \hat{y}_i - y_i),  & \text{for }p=1\text{ (Poisson)}\\
  2(\log(\hat{y}_i/y_i) + y_i/\hat{y}_i - 1),  & \text{for }p=2\text{ (Gamma)}\\
  2\left(\frac{\max(y_i,0)^{2-p}}{(1-p)(2-p)}-
  \frac{y_i\,\hat{y}_i^{1-p}}{1-p}+\frac{\hat{y}_i^{2-p}}{2-p}\right),
  & \text{otherwise}
  \end{cases}

Tweedie deviance is a homogeneous function of degree ``2-power``.
Thus, a Gamma distribution with ``power=2`` means that simultaneously scaling
``y_true`` and ``y_pred`` has no effect on the deviance. For the Poisson
distribution (``power=1``) the deviance scales linearly, and for the Normal
distribution (``power=0``), quadratically. 
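This scale behavior can be checked numerically; the minimal sketch below (with arbitrarily chosen positive values) verifies the invariance at ``power=2`` and the quadratic scaling at ``power=0``:

```python
import numpy as np
from sklearn.metrics import mean_tweedie_deviance

y_true = np.array([1.0, 2.0])
y_pred = np.array([1.5, 1.0])

# power=2 (Gamma): rescaling both arrays leaves the deviance unchanged
d2 = mean_tweedie_deviance(y_true, y_pred, power=2)
assert np.isclose(d2, mean_tweedie_deviance(10 * y_true, 10 * y_pred, power=2))

# power=0 (Normal): the deviance scales quadratically with the data
d0 = mean_tweedie_deviance(y_true, y_pred, power=0)
assert np.isclose(100 * d0, mean_tweedie_deviance(10 * y_true, 10 * y_pred, power=0))
```
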
In general, the higher
the ``power``, the less weight is given to extreme deviations between true
and predicted targets.

For instance, let's compare the two predictions 1.5 and 150 that are both
50% larger than their corresponding true value.

The mean squared error (``power=0``) is very sensitive to the
prediction difference of the second point::

    >>> from sklearn.metrics import mean_tweedie_deviance
    >>> mean_tweedie_deviance([1.0], [1.5], power=0)
    0.25
    >>> mean_tweedie_deviance([100.], [150.], power=0)
    2500.0

If we increase ``power`` to 1::

    >>> mean_tweedie_deviance([1.0], [1.5], power=1)
    0.18...
    >>> mean_tweedie_deviance([100.], [150.], power=1)
    18.9...

the difference in errors decreases. Finally, by setting ``power=2``::

    >>> mean_tweedie_deviance([1.0], [1.5], power=2)
    0.14...
    >>> mean_tweedie_deviance([100.], [150.], power=2)
    0.14...

we would get identical errors. The deviance when ``power=2`` is thus only
sensitive to relative errors.

.. _pinball_loss:

Pinball loss
------------

The :func:`mean_pinball_loss` function is used to evaluate the predictive
performance of `quantile regression
<https://en.wikipedia.org/wiki/Quantile_regression>`_ models.

.. 
math::\n\n  \\text{pinball}(y, \\hat{y}) = \\frac{1}{n_{\\text{samples}}} \\sum_{i=0}^{n_{\\text{samples}}-1}  \\alpha \\max(y_i - \\hat{y}_i, 0) + (1 - \\alpha) \\max(\\hat{y}_i - y_i, 0)\n\nThe value of pinball loss is equivalent to half of :func:`mean_absolute_error` when the quantile\nparameter ``alpha`` is set to 0.5.\n\n\nHere is a small example of usage of the :func:`mean_pinball_loss` function::\n\n  >>> from sklearn.metrics import mean_pinball_loss\n  >>> y_true = [1, 2, 3]\n  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.1)\n  0.03...\n  >>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.1)\n  0.3...\n  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.9)\n  0.3...\n  >>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.9)\n  0.03...\n  >>> mean_pinball_loss(y_true, y_true, alpha=0.1)\n  0.0\n  >>> mean_pinball_loss(y_true, y_true, alpha=0.9)\n  0.0\n\nIt is possible to build a scorer object with a specific choice of ``alpha``::\n\n  >>> from sklearn.metrics import make_scorer\n  >>> mean_pinball_loss_95p = make_scorer(mean_pinball_loss, alpha=0.95)\n\nSuch a scorer can be used to evaluate the generalization performance of a\nquantile regressor via cross-validation:\n\n  >>> from sklearn.datasets import make_regression\n  >>> from sklearn.model_selection import cross_val_score\n  >>> from sklearn.ensemble import GradientBoostingRegressor\n  >>>\n  >>> X, y = make_regression(n_samples=100, random_state=0)\n  >>> estimator = GradientBoostingRegressor(\n  ...     loss=\"quantile\",\n  ...     alpha=0.95,\n  ...     random_state=0,\n  ... )\n  >>> cross_val_score(estimator, X, y, cv=5, scoring=mean_pinball_loss_95p)\n  array([13.6..., 9.7..., 23.3..., 9.5..., 10.4...])\n\nIt is also possible to build scorer objects for hyper-parameter tuning. The\nsign of the loss must be switched to ensure that greater means better as\nexplained in the example linked below.\n\n.. 
rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_quantile.py`\n  for an example of using the pinball loss to evaluate and tune the\n  hyper-parameters of quantile regression models on data with non-symmetric\n  noise and outliers.\n\n.. _d2_score:\n\nD\u00b2 score\n--------\n\nThe D\u00b2 score computes the fraction of deviance explained.\nIt is a generalization of R\u00b2, where the squared error is generalized and replaced\nby a deviance of choice :math:`\\text{dev}(y, \\hat{y})`\n(e.g., Tweedie, pinball or mean absolute error). D\u00b2 is a form of a *skill score*.\nIt is calculated as\n\n.. math::\n\n  D^2(y, \\hat{y}) = 1 - \\frac{\\text{dev}(y, \\hat{y})}{\\text{dev}(y, y_{\\text{null}})} \\,.\n\nWhere :math:`y_{\\text{null}}` is the optimal prediction of an intercept-only model\n(e.g., the mean of `y_true` for the Tweedie case, the median for absolute\nerror and the alpha-quantile for pinball loss).\n\nLike R\u00b2, the best possible score is 1.0 and it can be negative (because the\nmodel can be arbitrarily worse). A constant model that always predicts\n:math:`y_{\\text{null}}`, disregarding the input features, would get a D\u00b2 score\nof 0.0.\n\n.. dropdown:: D\u00b2 Tweedie score\n\n  The :func:`d2_tweedie_score` function implements the special case of D\u00b2\n  where :math:`\\text{dev}(y, \\hat{y})` is the Tweedie deviance, see :ref:`mean_tweedie_deviance`.\n  It is also known as D\u00b2 Tweedie and is related to McFadden's likelihood ratio index.\n\n  The argument ``power`` defines the Tweedie power as for\n  :func:`mean_tweedie_deviance`. Note that for `power=0`,\n  :func:`d2_tweedie_score` equals :func:`r2_score` (for single targets).\n\n  A scorer object with a specific choice of ``power`` can be built by::\n\n    >>> from sklearn.metrics import d2_tweedie_score, make_scorer\n    >>> d2_tweedie_score_15 = make_scorer(d2_tweedie_score, power=1.5)\n\n.. 
.. dropdown:: D² pinball score

  The :func:`d2_pinball_score` function implements the special case
  of D² with the pinball loss, see :ref:`pinball_loss`, i.e.:

  .. math::

    \text{dev}(y, \hat{y}) = \text{pinball}(y, \hat{y}).

  The argument ``alpha`` defines the slope of the pinball loss as for
  :func:`mean_pinball_loss` (:ref:`pinball_loss`). It determines the
  quantile level ``alpha`` for which the pinball loss and also D²
  are optimal. Note that for `alpha=0.5` (the default) :func:`d2_pinball_score`
  equals :func:`d2_absolute_error_score`.

  A scorer object with a specific choice of ``alpha`` can be built by::

    >>> from sklearn.metrics import d2_pinball_score, make_scorer
    >>> d2_pinball_score_08 = make_scorer(d2_pinball_score, alpha=0.8)

.. dropdown:: D² absolute error score

  The :func:`d2_absolute_error_score` function implements the special case of
  D² with the :ref:`mean_absolute_error`:

  .. math::

    \text{dev}(y, \hat{y}) = \text{MAE}(y, \hat{y}).

  Here are some usage examples of the :func:`d2_absolute_error_score` function::

    >>> from sklearn.metrics import d2_absolute_error_score
    >>> y_true = [3, -0.5, 2, 7]
    >>> y_pred = [2.5, 0.0, 2, 8]
    >>> d2_absolute_error_score(y_true, y_pred)
    0.764...
    >>> y_true = [1, 2, 3]
    >>> y_pred = [1, 2, 3]
    >>> d2_absolute_error_score(y_true, y_pred)
    1.0
    >>> y_true = [1, 2, 3]
    >>> y_pred = [2, 2, 2]
    >>> d2_absolute_error_score(y_true, y_pred)
    0.0

.. _visualization_regression_evaluation:

Visual evaluation of regression models
--------------------------------------

Among the methods to assess the quality of regression models, scikit-learn
provides the :class:`~sklearn.metrics.PredictionErrorDisplay` class. It allows
visually inspecting the prediction errors of a model in two different manners.
.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_predict_001.png
   :target: ../auto_examples/model_selection/plot_cv_predict.html
   :scale: 75
   :align: center

The plot on the left shows the actual values vs. predicted values. For a
noise-free regression task aiming to predict the (conditional) expectation of
`y`, a perfect regression model would display data points on the diagonal
defined by predicted equal to actual values. The further away from this optimal
line, the larger the error of the model. In a more realistic setting with
irreducible noise, that is, when not all the variations of `y` can be explained
by features in `X`, the best model would lead to a cloud of points densely
arranged around the diagonal.

Note that the above only holds when the predicted values are the expected value
of `y` given `X`. This is typically the case for regression models that
minimize the mean squared error objective function, or more generally the
:ref:`mean Tweedie deviance <mean_tweedie_deviance>` for any value of its
``power`` parameter.

When plotting the predictions of an estimator that predicts a quantile
of `y` given `X`, e.g. :class:`~sklearn.linear_model.QuantileRegressor`
or any other model minimizing the :ref:`pinball loss <pinball_loss>`, a
fraction of the points are expected to lie above or below the diagonal,
depending on the estimated quantile level.

All in all, while intuitive to read, this plot does not really inform us on
what to do to obtain a better model.

The right-hand side plot shows the residuals (i.e. the difference between the
actual and the predicted values) vs.
the predicted values.

This plot makes it easier to visualize whether the residuals follow a
`homoscedastic or heteroscedastic
<https://en.wikipedia.org/wiki/Homoscedasticity_and_heteroscedasticity>`_
distribution.

In particular, if the true distribution of `y|X` is Poisson or Gamma
distributed, it is expected that the variance of the residuals of the optimal
model would grow with the predicted value of `E[y|X]` (either linearly for
Poisson or quadratically for Gamma).

When fitting a linear least squares regression model (see
:class:`~sklearn.linear_model.LinearRegression` and
:class:`~sklearn.linear_model.Ridge`), we can use this plot to check
whether some of the `model assumptions
<https://en.wikipedia.org/wiki/Ordinary_least_squares#Assumptions>`_
are met, in particular that the residuals should be uncorrelated, their
expected value should be null and their variance should be constant
(homoscedasticity).

If this is not the case, and in particular if the residuals plot shows some
banana-shaped structure, this is a hint that the model is likely mis-specified
and that non-linear feature engineering or switching to a non-linear regression
model might be useful.

Refer to the example below to see a model evaluation that makes use of this
display.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_compose_plot_transformed_target.py` for
  an example on how to use :class:`~sklearn.metrics.PredictionErrorDisplay`
  to visualize the prediction quality improvement of a regression model
  obtained by transforming the target before learning.

.. _clustering_metrics:

Clustering metrics
==================

.. currentmodule:: sklearn.metrics

The :mod:`sklearn.metrics` module implements several loss, score, and utility
functions to measure clustering performance. For more information see the
:ref:`clustering_evaluation` section for instance clustering, and
:ref:`biclustering_evaluation` for biclustering.
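As a small taste of those metrics, many clustering scores compare the induced partitions rather than the raw label values; under the illustrative labelings below, a relabeled perfect clustering still gets a perfect score:

```python
from sklearn.metrics import adjusted_rand_score, v_measure_score

labels_true = [0, 0, 1, 1, 2, 2]
# Same grouping as labels_true, but with permuted label names.
labels_pred = [1, 1, 0, 0, 2, 2]

# Both metrics are invariant to permutations of the cluster label values.
print(adjusted_rand_score(labels_true, labels_pred))  # 1.0
print(v_measure_score(labels_true, labels_pred))      # 1.0
```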
.. _dummy_estimators:

Dummy estimators
================

.. currentmodule:: sklearn.dummy

When doing supervised learning, a simple sanity check consists of comparing
one's estimator against simple rules of thumb. :class:`DummyClassifier`
implements several such simple strategies for classification:

- ``stratified`` generates random predictions by respecting the training
  set class distribution.
- ``most_frequent`` always predicts the most frequent label in the training set.
- ``prior`` always predicts the class that maximizes the class prior
  (like ``most_frequent``) and ``predict_proba`` returns the class prior.
- ``uniform`` generates predictions uniformly at random.
- ``constant`` always predicts a constant label that is provided by the user.
  A major motivation of this method is F1-scoring, when the positive class
  is in the minority.

Note that with all these strategies, the ``predict`` method completely ignores
the input data!

To illustrate :class:`DummyClassifier`, first let's create an imbalanced
dataset::

  >>> from sklearn.datasets import load_iris
  >>> from sklearn.model_selection import train_test_split
  >>> X, y = load_iris(return_X_y=True)
  >>> y[y != 1] = -1
  >>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

Next, let's compare the accuracy of ``SVC`` and ``most_frequent``::

  >>> from sklearn.dummy import DummyClassifier
  >>> from sklearn.svm import SVC
  >>> clf = SVC(kernel='linear', C=1).fit(X_train, y_train)
  >>> clf.score(X_test, y_test)
  0.63...
  >>> clf = DummyClassifier(strategy='most_frequent', random_state=0)
  >>> clf.fit(X_train, y_train)
  DummyClassifier(random_state=0, strategy='most_frequent')
  >>> clf.score(X_test, y_test)
  0.57...

We see that ``SVC`` doesn't do much better than a dummy classifier.
Now, let's
change the kernel::

  >>> clf = SVC(kernel='rbf', C=1).fit(X_train, y_train)
  >>> clf.score(X_test, y_test)
  0.94...

We see that the accuracy was boosted to almost 100%. A cross-validation
strategy is recommended for a better estimate of the accuracy, if it
is not too CPU costly. For more information see the :ref:`cross_validation`
section. Moreover, if you want to optimize over the parameter space, it is
highly recommended to use an appropriate methodology; see the
:ref:`grid_search` section for details.

More generally, when the accuracy of a classifier is too close to random, it
probably means that something went wrong: features are not helpful, a
hyperparameter is not correctly tuned, the classifier is suffering from class
imbalance, etc.

:class:`DummyRegressor` also implements four simple rules of thumb for
regression:

- ``mean`` always predicts the mean of the training targets.
- ``median`` always predicts the median of the training targets.
- ``quantile`` always predicts a user provided quantile of the training targets.
- ``constant`` always predicts a constant value that is provided by the user.

In all these strategies, the ``predict`` method completely ignores
the input data.

.. currentmodule:: sklearn

.. _model_evaluation:

===========================================================
Metrics and scoring: quantifying the quality of predictions
===========================================================

.. _which_scoring_function:

Which scoring function should I use?
====================================

Before we take a closer look into the details of the many scores and
:term:`evaluation metrics`, we want to give some guidance, inspired by
statistical decision theory, on the choice of **scoring functions** for
**supervised learning**, see [Gneiting2009]_:

- *Which scoring function should I use?*
- *Which scoring function is a good one for my task?*

In a nutshell,
if the scoring function is given, e.g. in a kaggle competition
or in a business context, use that one.
If you are free to choose, it starts by considering the ultimate goal and
application of the prediction. It is useful to distinguish two steps:

* Predicting
* Decision making

**Predicting:**
Usually, the response variable :math:`Y` is a random variable, in the sense
that there is *no deterministic* function :math:`Y = g(X)` of the features
:math:`X`. Instead, there is a probability distribution :math:`F` of :math:`Y`.
One can aim to predict the whole distribution, known as *probabilistic
prediction*, or, more the focus of scikit-learn, issue a *point prediction*
(or point forecast) by choosing a property or functional of that distribution
:math:`F`. Typical examples are the mean (expected value), the median or a
quantile of the response variable :math:`Y` (conditionally on :math:`X`).

Once that is settled, use a **strictly consistent** scoring function for that
(target) functional, see [Gneiting2009]_. This means using a scoring function
that is aligned with *measuring the distance between predictions* `y_pred`
*and the true target functional using observations of* :math:`Y`, i.e.
`y_true`. For classification, **strictly proper scoring rules**, see
`Wikipedia entry for Scoring rule
<https://en.wikipedia.org/wiki/Scoring_rule>`_ and [Gneiting2007]_, coincide
with strictly consistent scoring functions. The table further below provides
examples. One could say that consistent scoring functions act as *truth serum*
in that they guarantee *"that truth telling [...] is an optimal strategy in
expectation"* [Gneiting2014]_.

Once a strictly consistent scoring function is chosen, it is best used for
both: as loss function for model training and as metric/score in model
evaluation and model comparison.

Note that for regressors, the prediction is done with :term:`predict` while
for classifiers it is usually :term:`predict_proba`.

**Decision Making:**
The most common
decisions are done on binary classification tasks, where the result of
:term:`predict_proba` is turned into a single outcome, e.g., from the predicted
probability of rain a decision is made on how to act (whether to take
mitigating measures like an umbrella or not). For classifiers, this is what
:term:`predict` returns. See also :ref:`TunedThresholdClassifierCV`. There are
many scoring functions which measure different aspects of such a decision, most
of them are covered with or derived from the :func:`metrics.confusion_matrix`.

**List of strictly consistent scoring functions:**
Here, we list some of the most relevant statistical functionals and
corresponding strictly consistent scoring functions for tasks in practice. Note
that the list is not complete and that there are more of them. For further
criteria on how to select a specific one, see [Fissler2022]_.

==================  ===================================================  ====================  =================================
functional          scoring or loss function                             response ``y``        prediction
==================  ===================================================  ====================  =================================
**Classification**
mean                :ref:`Brier score <brier_score_loss>` :sup:`1`       multi-class           ``predict_proba``
mean                :ref:`log loss <log_loss>`                           multi-class           ``predict_proba``
mode                :ref:`zero-one loss <zero_one_loss>` :sup:`2`        multi-class           ``predict``, categorical
**Regression**
mean                :ref:`squared error <mean_squared_error>` :sup:`3`   all reals             ``predict``, all reals
mean                :ref:`Poisson deviance <mean_tweedie_deviance>`      non-negative          ``predict``, strictly positive
mean                :ref:`Gamma deviance <mean_tweedie_deviance>`        strictly positive     ``predict``, strictly positive
mean                :ref:`Tweedie deviance <mean_tweedie_deviance>`      depends on ``power``  ``predict``, depends on ``power``
median              :ref:`absolute error <mean_absolute_error>`          all reals             ``predict``, all reals
quantile            :ref:`pinball loss <pinball_loss>`                   all reals             ``predict``, all reals
mode                no consistent one exists                             reals
==================  ===================================================  ====================  =================================

:sup:`1` The Brier score is just a different name for the squared error in
case of classification.

:sup:`2` The zero-one loss is only consistent but not strictly consistent for
the mode. The zero-one loss is equivalent to one minus the accuracy score,
meaning it gives different score values but the same ranking.

:sup:`3` R² gives the same ranking as squared error.

**Fictitious Example:**
Let's make the above arguments more tangible. Consider a setting in network
reliability engineering, such as maintaining stable internet or Wi-Fi
connections. As provider of the network, you have access to the dataset of log
entries of network connections containing network load over time and many
interesting features. Your goal is to improve the reliability of the
connections. In fact, you promise your customers that on at least 99% of all
days there are no connection discontinuities larger than 1 minute. Therefore,
you are interested in a prediction of the 99% quantile (of longest connection
interruption duration per day) in order to know in advance when to add more
bandwidth and thereby satisfy your customers. So the *target functional* is
the 99% quantile. From the table above, you choose the pinball loss as scoring
function (fair enough, not much choice given), for model training (e.g.
``HistGradientBoostingRegressor(loss="quantile", quantile=0.99)``) as well as
model evaluation (``mean_pinball_loss(..., alpha=0.99)``; we
apologize for the different argument names, ``quantile`` and ``alpha``), be it
in grid search for finding hyperparameters or in comparing to other models
like ``QuantileRegressor(quantile=0.99)``.

.. rubric:: References

.. [Gneiting2007] T. Gneiting and A. E. Raftery. :doi:`Strictly Proper
    Scoring Rules, Prediction, and Estimation
    <10.1198/016214506000001437>`.
    In: Journal of the American Statistical Association 102 (2007),
    pp. 359-378. `link to pdf
    <https://www.stat.washington.edu/people/raftery/Research/PDF/Gneiting2007jasa.pdf>`_

.. [Gneiting2009] T. Gneiting. :arxiv:`Making and Evaluating Point Forecasts
    <0912.0902>`.
    Journal of the American Statistical Association 106 (2009): 746-762.

.. [Gneiting2014] T. Gneiting and M. Katzfuss. :doi:`Probabilistic Forecasting
    <10.1146/annurev-statistics-062713-085831>`.
    In: Annual Review of Statistics and Its Application 1.1 (2014),
    pp. 125-151.

.. [Fissler2022] T. Fissler, C. Lorentzen and M. Mayer. :arxiv:`Model
    Comparison and Calibration Assessment: User Guide for Consistent Scoring
    Functions in Machine Learning and Actuarial Practice <2202.12780>`

.. _scoring_api_overview:

Scoring API overview
====================

There are 3 different APIs for evaluating the quality of a model's
predictions:

* **Estimator score method**: Estimators have a ``score`` method providing a
  default evaluation criterion for the problem they are designed to solve.
  Most commonly this is :ref:`accuracy <accuracy_score>` for classifiers and
  the :ref:`coefficient of determination <r2_score>` (:math:`R^2`) for
  regressors. Details for each estimator can be found in its documentation.

* **Scoring parameter**: Model evaluation tools that use
  :ref:`cross-validation <cross_validation>` (such as
  :class:`model_selection.GridSearchCV`,
  :func:`model_selection.validation_curve` and
  :class:`linear_model.LogisticRegressionCV`) rely on an internal *scoring*
  strategy. This can be specified using the ``scoring``
  parameter of that tool and is discussed in the section
  :ref:`scoring_parameter`.

* **Metric functions**: The :mod:`sklearn.metrics` module implements functions
  assessing prediction error for specific purposes. These metrics are detailed
  in sections on :ref:`classification_metrics`,
  :ref:`multilabel_ranking_metrics`, :ref:`regression_metrics` and
  :ref:`clustering_metrics`.

Finally, :ref:`dummy_estimators` are useful to get a baseline value of those
metrics for random predictions.

.. seealso::

   For "pairwise" metrics, between *samples* and not estimators or
   predictions, see the :ref:`metrics` section.

.. _scoring_parameter:

The ``scoring`` parameter: defining model evaluation rules
==========================================================

Model selection and evaluation tools that internally use
:ref:`cross-validation <cross_validation>` (such as
:class:`model_selection.GridSearchCV`,
:func:`model_selection.validation_curve` and
:class:`linear_model.LogisticRegressionCV`) take a ``scoring`` parameter that
controls what metric they apply to the estimators evaluated.

They can be specified in several ways:

* `None`: the estimator's default evaluation criterion (i.e., the metric used
  in the estimator's `score` method) is used.
* :ref:`String name <scoring_string_names>`: common metrics can be passed via
  a string name.
* :ref:`Callable <scoring_callable>`: more complex metrics can be passed via
  a custom metric callable (e.g., function).

Some tools do also accept multiple metric evaluation. See
:ref:`multimetric_scoring` for details.

.. _scoring_string_names:

String name scorers
-------------------

For the most common use cases, you can designate a scorer object with the
``scoring`` parameter via a string name; the table below shows all possible
values. All scorer objects follow the convention that **higher return values
are better than lower return values**. Thus metrics which measure the distance
between the model and the data, like
:func:`metrics.mean_squared_error`, are available as 'neg_mean_squared_error',
which returns the negated value of the metric.

====================================  ===============================================  ==================================
Scoring string name                   Function                                         Comment
====================================  ===============================================  ==================================
**Classification**
'accuracy'                            :func:`metrics.accuracy_score`
'balanced_accuracy'                   :func:`metrics.balanced_accuracy_score`
'top_k_accuracy'                      :func:`metrics.top_k_accuracy_score`
'average_precision'                   :func:`metrics.average_precision_score`
'neg_brier_score'                     :func:`metrics.brier_score_loss`
'f1'                                  :func:`metrics.f1_score`                         for binary targets
'f1_micro'                            :func:`metrics.f1_score`                         micro-averaged
'f1_macro'                            :func:`metrics.f1_score`                         macro-averaged
'f1_weighted'                         :func:`metrics.f1_score`                         weighted average
'f1_samples'                          :func:`metrics.f1_score`                         by multilabel sample
'neg_log_loss'                        :func:`metrics.log_loss`                         requires ``predict_proba`` support
'precision' etc.                      :func:`metrics.precision_score`                  suffixes apply as with 'f1'
'recall' etc.                         :func:`metrics.recall_score`                     suffixes apply as with 'f1'
'jaccard' etc.                        :func:`metrics.jaccard_score`                    suffixes apply as with 'f1'
'roc_auc'                             :func:`metrics.roc_auc_score`
'roc_auc_ovr'                         :func:`metrics.roc_auc_score`
'roc_auc_ovo'                         :func:`metrics.roc_auc_score`
'roc_auc_ovr_weighted'                :func:`metrics.roc_auc_score`
'roc_auc_ovo_weighted'                :func:`metrics.roc_auc_score`
'd2_log_loss_score'                   :func:`metrics.d2_log_loss_score`
**Clustering**
'adjusted_mutual_info_score'          :func:`metrics.adjusted_mutual_info_score`
'adjusted_rand_score'                 :func:`metrics.adjusted_rand_score`
'completeness_score'                  :func:`metrics.completeness_score`
'fowlkes_mallows_score'               :func:`metrics.fowlkes_mallows_score`
'homogeneity_score'                   :func:`metrics.homogeneity_score`
'mutual_info_score'                   :func:`metrics.mutual_info_score`
'normalized_mutual_info_score'        :func:`metrics.normalized_mutual_info_score`
'rand_score'                          :func:`metrics.rand_score`
'v_measure_score'                     :func:`metrics.v_measure_score`
**Regression**
'explained_variance'                  :func:`metrics.explained_variance_score`
'neg_max_error'                       :func:`metrics.max_error`
'neg_mean_absolute_error'             :func:`metrics.mean_absolute_error`
'neg_mean_squared_error'              :func:`metrics.mean_squared_error`
'neg_root_mean_squared_error'         :func:`metrics.root_mean_squared_error`
'neg_mean_squared_log_error'          :func:`metrics.mean_squared_log_error`
'neg_root_mean_squared_log_error'     :func:`metrics.root_mean_squared_log_error`
'neg_median_absolute_error'           :func:`metrics.median_absolute_error`
'r2'                                  :func:`metrics.r2_score`
'neg_mean_poisson_deviance'           :func:`metrics.mean_poisson_deviance`
'neg_mean_gamma_deviance'             :func:`metrics.mean_gamma_deviance`
'neg_mean_absolute_percentage_error'  :func:`metrics.mean_absolute_percentage_error`
'd2_absolute_error_score'             :func:`metrics.d2_absolute_error_score`
====================================  ===============================================  ==================================
Usage examples::

  >>> from sklearn import svm, datasets
  >>> from sklearn.model_selection import cross_val_score
  >>> X, y = datasets.load_iris(return_X_y=True)
  >>> clf = svm.SVC(random_state=0)
  >>> cross_val_score(clf, X, y, cv=5, scoring='recall_macro')
  array([0.96..., 0.96..., 0.96..., 0.93..., 1.        ])

.. note::

    If a wrong scoring name is passed, an ``InvalidParameterError`` is raised.
    You can retrieve the names of all available scorers by calling
    :func:`~sklearn.metrics.get_scorer_names`.

.. currentmodule:: sklearn.metrics

.. _scoring_callable:

Callable scorers
----------------

For more complex use cases and more flexibility, you can pass a callable to
the ``scoring`` parameter. This can be done by:

* :ref:`scoring_adapt_metric`
* :ref:`scoring_custom` (most flexible)

.. _scoring_adapt_metric:

Adapting predefined metrics via `make_scorer`
---------------------------------------------

The following metric functions are not implemented as named scorers, sometimes
because they require additional parameters, such as :func:`fbeta_score`. They
cannot be passed to the ``scoring`` parameters; instead their callable needs
to be passed to :func:`make_scorer` together with the value of the
user-settable parameters.

=====================================  =========  =================================================
Function                               Parameter  Example usage
=====================================  =========  =================================================
**Classification**
:func:`metrics.fbeta_score`            ``beta``   ``make_scorer(fbeta_score, beta=2)``
**Regression**
:func:`metrics.mean_tweedie_deviance`  ``power``  ``make_scorer(mean_tweedie_deviance, power=1.5)``
:func:`metrics.mean_pinball_loss`      ``alpha``  ``make_scorer(mean_pinball_loss, alpha=0.95)``
:func:`metrics.d2_tweedie_score`       ``power``  ``make_scorer(d2_tweedie_score, power=1.5)``
:func:`metrics.d2_pinball_score`       ``alpha``  ``make_scorer(d2_pinball_score, alpha=0.95)``
=====================================  =========  =================================================

One typical use case is to wrap an existing metric function from the library
with non-default values for its parameters, such as the ``beta`` parameter for
the :func:`fbeta_score` function::

    >>> from sklearn.metrics import fbeta_score, make_scorer
    >>> ftwo_scorer = make_scorer(fbeta_score, beta=2)
    >>> from sklearn.model_selection import GridSearchCV
    >>> from sklearn.svm import LinearSVC
    >>> grid = GridSearchCV(LinearSVC(), param_grid={'C': [1, 10]},
    ...                     scoring=ftwo_scorer, cv=5)

The module :mod:`sklearn.metrics` also exposes a set of simple functions
measuring a prediction error given ground truth and prediction:

- functions ending with ``_score`` return a value to
  maximize, the higher the better.

- functions ending with ``_error``, ``_loss``, or ``_deviance`` return a
  value to minimize, the lower the better. When converting
  into a scorer object using :func:`make_scorer`, set
  the ``greater_is_better`` parameter to ``False`` (``True`` by default; see
  the parameter description below).

.. _scoring_custom:

Creating a custom scorer object
-------------------------------

You can create your own custom scorer object using :func:`make_scorer` or, for
the most flexibility, from scratch. See below for details.

.. dropdown:: Custom scorer objects using `make_scorer`

  You can build a completely custom scorer object
  from a simple python function using :func:`make_scorer`, which can
  take several parameters:

  * the python function you want to use (``my_custom_loss_func``
    in the example below)

  * whether the python function returns a score (``greater_is_better=True``,
    the default) or a loss (``greater_is_better=False``). If a loss, the
output
    of the python function is negated by the scorer object, conforming to
    the cross validation convention that scorers return higher values for
    better models

  * for classification metrics only: whether the python function you provided
    requires continuous decision certainties. If the scoring function only
    accepts probability estimates (e.g. :func:`metrics.log_loss`), then one
    needs to set the parameter `response_method="predict_proba"`. Some scoring
    functions do not necessarily require probability estimates but rather
    non-thresholded decision values (e.g. :func:`metrics.roc_auc_score`). In
    this case, one can provide a list (e.g.,
    `response_method=["decision_function", "predict_proba"]`), and the scorer
    will use the first available method, in the order given in the list,
    to compute the scores

  * any additional parameters of the scoring function, such as ``beta``
    or ``labels``.

  Here is an example of building custom scorers, and of using the
  ``greater_is_better`` parameter::

    >>> import numpy as np
    >>> def my_custom_loss_func(y_true, y_pred):
    ...     diff = np.abs(y_true - y_pred).max()
    ...     return np.log1p(diff)
    ...
    >>> # score will negate the return value of my_custom_loss_func,
    >>> # which will be np.log(2), 0.693, given the values for X
    >>> # and y defined below.
    >>> score = make_scorer(my_custom_loss_func, greater_is_better=False)
    >>> X = [[1], [1]]
    >>> y = [0, 1]
    >>> from sklearn.dummy import DummyClassifier
    >>> clf = DummyClassifier(strategy='most_frequent', random_state=0)
    >>> clf = clf.fit(X, y)
    >>> my_custom_loss_func(y, clf.predict(X))
    0.69...
    >>> score(clf, X, y)
    -0.69...

.. dropdown:: Custom scorer objects from scratch

  You can generate even more flexible model scorers by constructing your own
  scoring object from scratch, without using the :func:`make_scorer` factory.

  For a callable to be a
  scorer, it needs to meet the protocol specified by
  the following two rules:

  - It can be called with parameters ``(estimator, X, y)``, where ``estimator``
    is the model that should be evaluated, ``X`` is validation data, and ``y``
    is the ground truth target for ``X`` (in the supervised case) or ``None``
    (in the unsupervised case).

  - It returns a floating point number that quantifies the
    ``estimator`` prediction quality on ``X``, with reference to ``y``.
    Again, by convention higher numbers are better, so if your scorer
    returns loss, that value should be negated.

  - Advanced: If it requires extra metadata to be passed to it, it should
    expose a ``get_metadata_routing`` method returning the requested metadata.
    The user should be able to set the requested metadata via a
    ``set_score_request`` method. Please see :ref:`User Guide
    <metadata_routing>` and :ref:`Developer Guide
    <sphx_glr_auto_examples_miscellaneous_plot_metadata_routing.py>` for
    more details.

.. dropdown:: Using custom scorers in functions where n_jobs > 1

  While defining the custom scoring function alongside the calling function
  should work out of the box with the default joblib backend (loky),
  importing it from another module will be a more robust approach and work
  independently of the joblib backend.

  For example, to use ``n_jobs`` greater than 1 in the example below,
  ``custom_scoring_function`` is saved in a user-created module
  (``custom_scorer_module.py``) and imported::

    >>> from custom_scorer_module import custom_scoring_function # doctest: +SKIP
    >>> cross_val_score(model,
    ...                 X_train,
    ...                 y_train,
    ...                 scoring=make_scorer(custom_scoring_function, greater_is_better=False),
    ...                 cv=5,
    ...                 n_jobs=-1) # doctest: +SKIP

.. _multimetric_scoring:

Using multiple metric evaluation
--------------------------------

Scikit-learn also permits evaluation of multiple
metrics in ``GridSearchCV``,
``RandomizedSearchCV`` and ``cross_validate``.

There are three ways to specify multiple scoring metrics for the ``scoring``
parameter:

- As an iterable of string metrics::

    >>> scoring = ['accuracy', 'precision']

- As a ``dict`` mapping the scorer name to the scoring function::

    >>> from sklearn.metrics import accuracy_score
    >>> from sklearn.metrics import make_scorer
    >>> scoring = {'accuracy': make_scorer(accuracy_score),
    ...            'prec': 'precision'}

  Note that the dict values can either be scorer functions or one of the
  predefined metric strings.

- As a callable that returns a dictionary of scores::

    >>> from sklearn.model_selection import cross_validate
    >>> from sklearn.metrics import confusion_matrix
    >>> # A sample toy binary classification dataset
    >>> X, y = datasets.make_classification(n_classes=2, random_state=0)
    >>> svm = LinearSVC(random_state=0)
    >>> def confusion_matrix_scorer(clf, X, y):
    ...     y_pred = clf.predict(X)
    ...     cm = confusion_matrix(y, y_pred)
    ...     return {'tn': cm[0, 0], 'fp': cm[0, 1],
    ...             'fn': cm[1, 0], 'tp': cm[1, 1]}
    >>> cv_results = cross_validate(svm, X, y, cv=5,
    ...                             scoring=confusion_matrix_scorer)
    >>> # Getting the test set true positive scores
    >>> print(cv_results['test_tp'])
    [10  9  8  7  8]
    >>> # Getting the test set false negative scores
    >>> print(cv_results['test_fn'])
    [0 1 2 3 2]

.. _classification_metrics:

Classification metrics
======================

.. currentmodule:: sklearn.metrics

The :mod:`sklearn.metrics` module implements several loss, score, and utility
functions to measure classification performance.
Some metrics might require probability estimates of the positive class,
confidence values, or binary decisions values.
Most implementations allow each sample to provide a weighted contribution
to the overall score, through the
Some of these are restricted to the binary classification case:

.. autosummary::

   precision_recall_curve
   roc_curve
   class_likelihood_ratios
   det_curve


Others also work in the multiclass case:

.. autosummary::

   balanced_accuracy_score
   cohen_kappa_score
   confusion_matrix
   hinge_loss
   matthews_corrcoef
   roc_auc_score
   top_k_accuracy_score


Some also work in the multilabel case:

.. autosummary::

   accuracy_score
   classification_report
   f1_score
   fbeta_score
   hamming_loss
   jaccard_score
   log_loss
   multilabel_confusion_matrix
   precision_recall_fscore_support
   precision_score
   recall_score
   roc_auc_score
   zero_one_loss
   d2_log_loss_score


And some work with binary and multilabel (but not multiclass) problems:

.. autosummary::

   average_precision_score


In the following sub-sections, we will describe each of those functions,
preceded by some notes on common API and metric definition.

.. _average:

From binary to multiclass and multilabel
----------------------------------------

Some metrics are essentially defined for binary classification tasks (e.g.
:func:`f1_score`, :func:`roc_auc_score`). In these cases, by default
only the positive label is evaluated, assuming by default that the positive
class is labelled ``1`` (though this may be configurable through the
``pos_label`` parameter).

In extending a binary metric to multiclass or multilabel problems, the data
is treated as a collection of binary problems, one for each class.
There are then a number of ways to average binary metric calculations across
the set of classes, each of which may be useful in some scenario.
Where available, you should select among these using the ``average`` parameter.

- ``"macro"`` simply calculates the mean of the binary metrics,
  giving equal weight to each class.  In problems where infrequent classes
  are nonetheless important, macro-averaging may be a means of highlighting
  their performance. On the other hand, the assumption that all classes are
  equally important is often untrue, such that macro-averaging will
  over-emphasize the typically low performance on an infrequent class.
- ``"weighted"`` accounts for class imbalance by computing the average of
  binary metrics in which each class's score is weighted by its presence in the
  true data sample.

- ``"micro"`` gives each sample-class pair an equal contribution to the overall
  metric (except as a result of sample-weight). Rather than summing the
  metric per class, this sums the dividends and divisors that make up the
  per-class metrics to calculate an overall quotient.
  Micro-averaging may be preferred in multilabel settings, including
  multiclass classification where a majority class is to be ignored.

- ``"samples"`` applies only to multilabel problems. It does not calculate a
  per-class measure, instead calculating the metric over the true and predicted
  classes for each sample in the evaluation data, and returning their
  (``sample_weight``-weighted) average.

- Selecting ``average=None`` will return an array with the score for each
  class.

While multiclass data is provided to the metric, like binary targets, as an
array of class labels, multilabel data is specified as an indicator matrix,
in which cell ``[i, j]`` has value 1 if sample ``i`` has label ``j`` and value
0 otherwise.
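To see how the averaging strategies above differ in practice, here is a small comparison on a deliberately imbalanced three-class problem (the labels are illustrative):

```python
from sklearn.metrics import recall_score

# Class 0 is frequent and always predicted correctly; class 2 is rare and missed.
y_true = [0, 0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 0, 0, 1, 0, 0]
# Per-class recalls: class 0 -> 4/4, class 1 -> 1/2, class 2 -> 0/1
r_none = recall_score(y_true, y_pred, average=None)
r_macro = recall_score(y_true, y_pred, average='macro')      # (1 + 0.5 + 0) / 3
r_micro = recall_score(y_true, y_pred, average='micro')      # 5 correct of 7
r_weighted = recall_score(y_true, y_pred, average='weighted')  # (4*1 + 2*0.5 + 1*0) / 7
```

Macro-averaging punishes the missed rare class (0.5), while micro- and weighted averaging, dominated here by the frequent class, both give 5/7.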
.. _accuracy_score:

Accuracy score
--------------

The :func:`accuracy_score` function computes the
`accuracy <https://en.wikipedia.org/wiki/Accuracy_and_precision>`_, either the
fraction (default) or the count (``normalize=False``) of correct predictions.

In multilabel classification, the function returns the subset accuracy. If
the entire set of predicted labels for a sample strictly match with the true
set of labels, then the subset accuracy is 1.0; otherwise it is 0.0.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and
:math:`y_i` is the corresponding true value, then the fraction of correct
predictions over :math:`n_\text{samples}` is defined as

.. math::

   \texttt{accuracy}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} 1(\hat{y}_i = y_i)

where :math:`1(x)` is the `indicator function
<https://en.wikipedia.org/wiki/Indicator_function>`_.

  >>> import numpy as np
  >>> from sklearn.metrics import accuracy_score
  >>> y_pred = [0, 2, 1, 3]
  >>> y_true = [0, 1, 2, 3]
  >>> accuracy_score(y_true, y_pred)
  0.5
  >>> accuracy_score(y_true, y_pred, normalize=False)
  2.0

In the multilabel case with binary label indicators::

  >>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
  0.5

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_permutation_tests_for_classification.py`
  for an example of accuracy score usage using permutations of
  the dataset.

.. _top_k_accuracy_score:

Top-k accuracy score
--------------------

The :func:`top_k_accuracy_score` function is a generalization of
:func:`accuracy_score`. The difference is that a prediction is considered
correct as long as the true label is associated with one of the ``k`` highest
predicted scores. :func:`accuracy_score` is the special case of `k = 1`.

The function covers the binary and multiclass classification cases but not
the multilabel case.

If :math:`\hat{f}_{i,j}` is the predicted class for the :math:`i`-th sample
corresponding to the :math:`j`-th largest predicted score and :math:`y_i` is the
corresponding true value, then the fraction of correct predictions over
:math:`n_\text{samples}` is defined as

.. math::

   \texttt{top-k accuracy}(y, \hat{f}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} \sum_{j=1}^{k} 1(\hat{f}_{i,j} = y_i)

where :math:`k` is the number of guesses allowed and :math:`1(x)` is the
`indicator function <https://en.wikipedia.org/wiki/Indicator_function>`_.

  >>> import numpy as np
  >>> from sklearn.metrics import top_k_accuracy_score
  >>> y_true = np.array([0, 1, 2, 2])
  >>> y_score = np.array([[0.5, 0.2, 0.2],
  ...                     [0.3, 0.4, 0.2],
  ...                     [0.2, 0.4, 0.3],
  ...                     [0.7, 0.2, 0.1]])
  >>> top_k_accuracy_score(y_true, y_score, k=2)
  0.75
  >>> # Not normalizing gives the number of "correctly" classified samples
  >>> top_k_accuracy_score(y_true, y_score, k=2, normalize=False)
  3
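The statement that :func:`accuracy_score` is the special case ``k = 1`` can be checked directly, assuming no ties among the scores (the arrays restate the small example above for self-containment):

```python
import numpy as np
from sklearn.metrics import accuracy_score, top_k_accuracy_score

y_true = np.array([0, 1, 2, 2])
y_score = np.array([[0.5, 0.2, 0.2],
                    [0.3, 0.4, 0.2],
                    [0.2, 0.4, 0.3],
                    [0.7, 0.2, 0.1]])
# With k=1 only the top-scoring class counts, which is ordinary accuracy
# on the argmax predictions.
top1 = top_k_accuracy_score(y_true, y_score, k=1)
acc = accuracy_score(y_true, y_score.argmax(axis=1))
# Both equal 0.5 on this data.
```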
.. _balanced_accuracy_score:

Balanced accuracy score
-----------------------

The :func:`balanced_accuracy_score` function computes the `balanced accuracy
<https://en.wikipedia.org/wiki/Accuracy_and_precision>`_, which avoids inflated
performance estimates on imbalanced datasets. It is the macro-average of recall
scores per class or, equivalently, raw accuracy where each sample is weighted
according to the inverse prevalence of its true class.
Thus for balanced datasets, the score is equal to accuracy.

In the binary case, balanced accuracy is equal to the arithmetic mean of
`sensitivity <https://en.wikipedia.org/wiki/Sensitivity_and_specificity>`_
(true positive rate) and `specificity
<https://en.wikipedia.org/wiki/Sensitivity_and_specificity>`_ (true negative
rate), or the area under the ROC curve with binary predictions rather than
scores:

.. math::

   \texttt{balanced-accuracy} = \frac{1}{2}\left( \frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right)

If the classifier performs equally well on either class, this term reduces to
the conventional accuracy (i.e., the number of correct predictions divided by
the total number of predictions).

In contrast, if the conventional accuracy is above chance only because the
classifier takes advantage of an imbalanced test set, then the balanced
accuracy, as appropriate, will drop to :math:`\frac{1}{n\_classes}`.

The score ranges from 0 to 1, or when ``adjusted=True`` is used, it is
rescaled to the range :math:`\frac{1}{1 - n\_classes}` to 1, inclusive, with
performance at random scoring 0.
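The drop to :math:`1/n\_classes` for a trivial majority-class predictor can be seen on a small, illustrative imbalanced dataset:

```python
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Imbalanced test set: a classifier that always predicts the majority class.
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0]
acc = accuracy_score(y_true, y_pred)            # 0.75: looks good, but misleading
bal = balanced_accuracy_score(y_true, y_pred)   # (1 + 0) / 2 = 0.5 = 1/n_classes
```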
If :math:`y_i` is the true value of the :math:`i`-th sample, and :math:`w_i`
is the corresponding sample weight, then we adjust the sample weight to:

.. math::

   \hat{w}_i = \frac{w_i}{\sum_j{1(y_j = y_i) w_j}}

where :math:`1(x)` is the `indicator function
<https://en.wikipedia.org/wiki/Indicator_function>`_.
Given predicted :math:`\hat{y}_i` for sample :math:`i`, balanced accuracy is
defined as:

.. math::

   \texttt{balanced-accuracy}(y, \hat{y}, w) = \frac{1}{\sum{\hat{w}_i}} \sum_i 1(\hat{y}_i = y_i) \hat{w}_i

With ``adjusted=True``, balanced accuracy reports the relative increase from
:math:`\texttt{balanced-accuracy}(y, \mathbf{0}, w) =
\frac{1}{n\_classes}`.  In the binary case, this is also known as
`Youden's J statistic <https://en.wikipedia.org/wiki/Youden%27s_J_statistic>`_,
or *informedness*.

.. note::

   The multiclass definition here seems the most reasonable extension of the
   metric used in binary classification, though there is no certain consensus
   in the literature:

   * Our definition: [Mosley2013]_, [Kelleher2015]_ and [Guyon2015]_, where
     [Guyon2015]_ adopt the adjusted version to ensure that random predictions
     have a score of :math:`0` and perfect predictions have a score of :math:`1`.
   * Class balanced accuracy as described in [Mosley2013]_: the minimum between
     the precision and the recall for each class is computed. Those values are
     then averaged over the total number of classes to get the balanced accuracy.
   * Balanced Accuracy as described in [Urbanowicz2015]_: the average of
     sensitivity and specificity is computed for each class and then averaged
     over total number of classes.

.. rubric:: References

.. [Guyon2015] I. Guyon, K. Bennett, G. Cawley, H.J. Escalante, S. Escalera,
   T.K. Ho, N. Macià, B. Ray, M. Saeed, A.R. Statnikov, E. Viegas, `Design of
   the 2015 ChaLearn AutoML Challenge
   <https://ieeexplore.ieee.org/document/7280767>`_, IJCNN 2015.
.. [Mosley2013] L. Mosley, `A balanced approach to the multi-class imbalance
   problem <https://lib.dr.iastate.edu/etd/13537>`_, IJCV 2010.
.. [Kelleher2015] John. D. Kelleher, Brian Mac Namee, Aoife D'Arcy,
   `Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms,
   Worked Examples, and Case Studies
   <https://mitpress.mit.edu/books/fundamentals-machine-learning-predictive-data-analytics>`_,
   2015.
.. [Urbanowicz2015] Urbanowicz R.J., Moore, J.H. :doi:`ExSTraCS 2.0: description
   and evaluation of a scalable learning classifier
   system <10.1007/s12065-015-0128-8>`, Evol. Intel. (2015) 8: 89.

.. _cohen_kappa:

Cohen's kappa
-------------

The function :func:`cohen_kappa_score` computes `Cohen's kappa
<https://en.wikipedia.org/wiki/Cohen%27s_kappa>`_ statistic.
This measure is intended to compare labelings by different human annotators,
not a classifier versus a ground truth.

The kappa score is a number between -1 and 1.
Scores above .8 are generally considered good agreement;
zero or lower means no agreement (practically random labels).

Kappa scores can be computed for binary or multiclass problems,
but not for multilabel problems (except by manually computing a per-label
score) and not for more than two annotators.

  >>> from sklearn.metrics import cohen_kappa_score
  >>> labeling1 = [2, 0, 2, 2, 0, 1]
  >>> labeling2 = [0, 0, 2, 2, 0, 2]
  >>> cohen_kappa_score(labeling1, labeling2)
  0.4285714285714286

.. _confusion_matrix:

Confusion matrix
----------------

The :func:`confusion_matrix` function evaluates
classification accuracy by computing the `confusion matrix
<https://en.wikipedia.org/wiki/Confusion_matrix>`_ with each row corresponding
to the true class (Wikipedia and other references may use different convention
for axes).

By definition, entry :math:`i, j` in a confusion matrix is
the number of observations actually in group :math:`i`, but
predicted to be in group :math:`j`. Here is an example::

  >>> from sklearn.metrics import confusion_matrix
  >>> y_true = [2, 0, 2, 2, 0, 1]
  >>> y_pred = [0, 0, 2, 2, 0, 2]
  >>> confusion_matrix(y_true, y_pred)
  array([[2, 0, 0],
         [0, 0, 1],
         [1, 0, 2]])
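Because rows correspond to the true classes, per-class recall can be read off the matrix by normalizing each row by its sum; the sketch below restates the example data and checks the result against :func:`recall_score`:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, recall_score

y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
cm = confusion_matrix(y_true, y_pred)
# Rows are true classes, so diagonal / row-sum gives per-class recall.
per_class_recall = cm.diagonal() / cm.sum(axis=1)
sk_recall = recall_score(y_true, y_pred, average=None)
# per_class_recall == sk_recall == [1., 0., 2/3]
```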
:class:`ConfusionMatrixDisplay` can be used to visually represent a confusion
matrix as shown in the
:ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`
example, which creates the following figure:

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_confusion_matrix_001.png
   :target: ../auto_examples/model_selection/plot_confusion_matrix.html
   :scale: 75
   :align: center

The parameter ``normalize`` allows to report ratios instead of counts. The
confusion matrix can be normalized in 3 different ways: ``'pred'``, ``'true'``,
and ``'all'``, which will divide the counts by the sum of each columns, rows, or
the entire matrix, respectively.

  >>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
  >>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
  >>> confusion_matrix(y_true, y_pred, normalize='all')
  array([[0.25 , 0.125],
         [0.25 , 0.375]])

For binary problems, we can get counts of true negatives, false positives,
false negatives and true positives as follows::

  >>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
  >>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
  >>> tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
  >>> tn, fp, fn, tp
  (2, 1, 2, 3)

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`
  for an example of using a confusion matrix to evaluate classifier output
  quality.

* See :ref:`sphx_glr_auto_examples_classification_plot_digits_classification.py`
  for an example of using a confusion matrix to classify
  hand-written digits.

* See :ref:`sphx_glr_auto_examples_text_plot_document_classification_20newsgroups.py`
  for an example of using a confusion matrix to classify text
  documents.
.. _classification_report:

Classification report
---------------------

The :func:`classification_report` function builds a text report showing the
main classification metrics. Here is a small example with custom
``target_names`` and inferred labels::

   >>> from sklearn.metrics import classification_report
   >>> y_true = [0, 1, 2, 2, 0]
   >>> y_pred = [0, 0, 2, 1, 0]
   >>> target_names = ['class 0', 'class 1', 'class 2']
   >>> print(classification_report(y_true, y_pred, target_names=target_names))
                 precision    recall  f1-score   support
   <BLANKLINE>
        class 0       0.67      1.00      0.80         2
        class 1       0.00      0.00      0.00         1
        class 2       1.00      0.50      0.67         2
   <BLANKLINE>
       accuracy                           0.60         5
      macro avg       0.56      0.50      0.49         5
   weighted avg       0.67      0.60      0.59         5
   <BLANKLINE>

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_classification_plot_digits_classification.py`
  for an example of classification report usage for
  hand-written digits.

* See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_digits.py`
  for an example of classification report usage for
  grid search with nested cross-validation.

.. _hamming_loss:

Hamming loss
------------

The :func:`hamming_loss` computes the average Hamming loss or `Hamming
distance <https://en.wikipedia.org/wiki/Hamming_distance>`_ between two sets
of samples.

If :math:`\hat{y}_{i,j}` is the predicted value for the :math:`j`-th label of a
given sample :math:`i`, :math:`y_{i,j}` is the corresponding true value,
:math:`n_\text{samples}` is the number of samples and :math:`n_\text{labels}`
is the number of labels, then the Hamming loss :math:`L_{Hamming}` is defined
as:

.. math::

   L_{Hamming}(y, \hat{y}) = \frac{1}{n_\text{samples} * n_\text{labels}} \sum_{i=0}^{n_\text{samples}-1} \sum_{j=0}^{n_\text{labels}-1} 1(\hat{y}_{i,j} \not= y_{i,j})

where :math:`1(x)` is the `indicator function
<https://en.wikipedia.org/wiki/Indicator_function>`_.

The equation above does not hold true in the case of multiclass
classification. Please refer to the note below for more information.
  >>> from sklearn.metrics import hamming_loss
  >>> y_pred = [1, 2, 3, 4]
  >>> y_true = [2, 2, 3, 4]
  >>> hamming_loss(y_true, y_pred)
  0.25

In the multilabel case with binary label indicators::

  >>> hamming_loss(np.array([[0, 1], [1, 1]]), np.zeros((2, 2)))
  0.75

.. note::

   In multiclass classification, the Hamming loss corresponds to the Hamming
   distance between ``y_true`` and ``y_pred`` which is similar to the
   :ref:`zero_one_loss` function.  However, while zero-one loss penalizes
   prediction sets that do not strictly match true sets, the Hamming loss
   penalizes individual labels.  Thus the Hamming loss, upper bounded by the
   zero-one loss, is always between zero and one, inclusive; and predicting a
   proper subset or superset of the true labels will give a Hamming loss
   between zero and one, exclusive.

.. _precision_recall_f_measure_metrics:

Precision, recall and F-measures
--------------------------------

Intuitively, `precision
<https://en.wikipedia.org/wiki/Precision_and_recall#Precision>`_ is the ability
of the classifier not to label as positive a sample that is negative, and
`recall <https://en.wikipedia.org/wiki/Precision_and_recall#Recall>`_ is the
ability of the classifier to find all the positive samples.

The `F-measure <https://en.wikipedia.org/wiki/F1_score>`_
(:math:`F_\beta` and :math:`F_1` measures) can be interpreted as a weighted
harmonic mean of the precision and recall. A
:math:`F_\beta` measure reaches its best value at 1 and its worst score at 0.
With :math:`\beta = 1`, :math:`F_\beta` and
:math:`F_1` are equivalent, and the recall and the precision are equally
important.

The :func:`precision_recall_curve` computes a precision-recall curve
from the ground truth label and a score given by the classifier
by varying a decision threshold.

The :func:`average_precision_score` function computes the
`average precision
<https://en.wikipedia.org/w/index.php?title=Information_retrieval&oldid=793358396#Average_precision>`_
(AP) from prediction scores.
The value is between 0 and 1 and higher is better. AP is defined as

.. math::

   \text{AP} = \sum_n (R_n - R_{n-1}) P_n

where :math:`P_n` and :math:`R_n` are the precision and recall at the
nth threshold. With random predictions, the AP is the fraction of positive
samples.

References [Manning2008]_ and [Everingham2010]_ present alternative variants of
AP that interpolate the precision-recall curve. Currently,
:func:`average_precision_score` does not implement any interpolated variant.
References [Davis2006]_ and [Flach2015]_ describe why a linear interpolation of
points on the precision-recall curve provides an overly-optimistic measure of
classifier performance. This linear interpolation is used when computing area
under the curve with the trapezoidal rule in :func:`auc`.

Several functions allow you to analyze the precision, recall and F-measures
score:

.. autosummary::

   average_precision_score
   f1_score
   fbeta_score
   precision_recall_curve
   precision_recall_fscore_support
   precision_score
   recall_score

Note that the :func:`precision_recall_curve` function is restricted to the
binary case. The :func:`average_precision_score` function supports multiclass
and multilabel formats by computing each class score in a One-vs-the-rest (OvR)
fashion and averaging them or not depending of its ``average`` argument value.

The :func:`PrecisionRecallDisplay.from_estimator` and
:func:`PrecisionRecallDisplay.from_predictions` functions will plot the
precision-recall curve as follows.

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_precision_recall_001.png
   :target: ../auto_examples/model_selection/plot_precision_recall.html#plot-the-precision-recall-curve
   :scale: 75
   :align: center

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_digits.py`
  for an example of :func:`precision_score` and :func:`recall_score` usage
  to estimate parameters using grid search with nested cross-validation.
* See :ref:`sphx_glr_auto_examples_model_selection_plot_precision_recall.py`
  for an example of :func:`precision_recall_curve` usage to evaluate
  classifier output quality.

.. rubric:: References

.. [Manning2008] C.D. Manning, P. Raghavan, H. Schütze, `Introduction to
   Information Retrieval
   <https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-ranked-retrieval-results-1.html>`_,
   2008.
.. [Everingham2010] M. Everingham, L. Van Gool, C.K.I. Williams, J. Winn,
   A. Zisserman, `The Pascal Visual Object Classes (VOC) Challenge
   <https://citeseerx.ist.psu.edu/doc_view/pid/b6bebfd529b233f00cb854b7d8070319600cf59d>`_,
   IJCV 2010.
.. [Davis2006] J. Davis, M. Goadrich, `The Relationship Between Precision-Recall
   and ROC Curves <https://www.biostat.wisc.edu/~page/rocpr.pdf>`_,
   ICML 2006.
.. [Flach2015] P.A. Flach, M. Kull, `Precision-Recall-Gain Curves: PR Analysis
   Done Right
   <https://papers.nips.cc/paper/5867-precision-recall-gain-curves-pr-analysis-done-right.pdf>`_,
   NIPS 2015.

Binary classification
^^^^^^^^^^^^^^^^^^^^^

In a binary classification task, the terms "positive" and "negative" refer
to the classifier's prediction, and the terms "true" and "false" refer to
whether that prediction corresponds to the external judgment (sometimes known
as the "observation"). Given these definitions, we can formulate the
following table:

+-------------------+----------------------+----------------------------+
|                   | Actual class (observation)                        |
+-------------------+----------------------+----------------------------+
| Predicted class   | tp (true positive)   | fp (false positive)        |
| (expectation)     | Correct result       | Unexpected result          |
|                   +----------------------+----------------------------+
|                   | fn (false negative)  | tn (true negative)         |
|                   | Missing result       | Correct absence of result  |
+-------------------+----------------------+----------------------------+
In this context, we can define the notions of precision and recall:

.. math::

   \text{precision} = \frac{\text{tp}}{\text{tp} + \text{fp}},

.. math::

   \text{recall} = \frac{\text{tp}}{\text{tp} + \text{fn}}.

(Sometimes recall is also called "sensitivity".)

F-measure is the weighted harmonic mean of precision and recall, with
precision's contribution to the mean weighted by some parameter :math:`\beta`:

.. math::

   F_\beta = (1 + \beta^2) \frac{\text{precision} \times \text{recall}}{\beta^2 \text{precision} + \text{recall}}

To avoid division by zero when precision and recall are zero, Scikit-Learn
calculates F-measure with this otherwise-equivalent formula:

.. math::

   F_\beta = \frac{(1 + \beta^2) \text{tp}}{(1 + \beta^2) \text{tp} + \text{fp} + \beta^2 \text{fn}}

Note that this formula is still undefined when there are no true positives,
false positives, or false negatives. By default, F-1 for a set of exclusively
true negatives is calculated as 0, however this behavior can be changed using
the ``zero_division`` parameter.

Here are some small examples in binary classification::

  >>> from sklearn import metrics
  >>> y_pred = [0, 1, 0, 0]
  >>> y_true = [0, 1, 0, 1]
  >>> metrics.precision_score(y_true, y_pred)
  1.0
  >>> metrics.recall_score(y_true, y_pred)
  0.5
  >>> metrics.f1_score(y_true, y_pred)
  0.66...
  >>> metrics.fbeta_score(y_true, y_pred, beta=0.5)
  0.83...
  >>> metrics.fbeta_score(y_true, y_pred, beta=1)
  0.66...
  >>> metrics.fbeta_score(y_true, y_pred, beta=2)
  0.55...
  >>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5)
  (array([0.66..., 1.        ]), array([1. , 0.5]), array([0.71..., 0.83...]), array([2, 2]))

  >>> import numpy as np
  >>> from sklearn.metrics import precision_recall_curve
  >>> from sklearn.metrics import average_precision_score
  >>> y_true = np.array([0, 0, 1, 1])
  >>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
  >>> precision, recall, threshold = precision_recall_curve(y_true, y_scores)
  >>> precision
  array([0.5       , 0.66..., 0.5       , 1.        , 1.        ])
  >>> recall
  array([1. , 1. , 0.5, 0.5, 0. ])
  >>> threshold
  array([0.1 , 0.35, 0.4 , 0.8 ])
  >>> average_precision_score(y_true, y_scores)
  0.83...
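The tp/fp/fn form of :math:`F_\beta` given above can be checked numerically against :func:`fbeta_score`; the sketch below reuses the small binary example data:

```python
from sklearn.metrics import confusion_matrix, fbeta_score

y_true = [0, 1, 0, 1]
y_pred = [0, 1, 0, 0]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()  # tp=1, fp=0, fn=1
beta = 0.5
# F_beta = (1 + beta^2) tp / ((1 + beta^2) tp + fp + beta^2 fn)
manual = (1 + beta**2) * tp / ((1 + beta**2) * tp + fp + beta**2 * fn)
sk_value = fbeta_score(y_true, y_pred, beta=beta)
# manual == sk_value == 5/6 ~= 0.8333
```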
Multiclass and multilabel classification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a multiclass and multilabel classification task, the notions of precision,
recall, and F-measures can be applied to each label independently.
There are a few ways to combine results across labels,
specified by the ``average`` argument to the
:func:`average_precision_score`, :func:`f1_score`,
:func:`fbeta_score`, :func:`precision_recall_fscore_support`,
:func:`precision_score` and :func:`recall_score` functions, as described
:ref:`above <average>`.

Note the following behaviors when averaging:

* If all labels are included, "micro"-averaging in a multiclass setting will
  produce precision, recall and :math:`F` that are all identical to accuracy.
* "weighted" averaging may produce a F-score that is not between precision and
  recall.
* "macro" averaging for F-measures is calculated as the arithmetic mean over
  per-label/class F-measures, not the harmonic mean over the arithmetic
  precision and recall means. Both calculations can be seen in the literature
  but are not equivalent; see [OB2019]_ for details.

To make this more explicit, consider the following notation:

* :math:`y` the set of *true* :math:`(sample, label)` pairs
* :math:`\hat{y}` the set of *predicted* :math:`(sample, label)` pairs
* :math:`L` the set of labels
* :math:`S` the set of samples
* :math:`y_s` the subset of :math:`y` with sample :math:`s`,
  i.e. :math:`y_s := \left\{(s', l) \in y | s' = s\right\}`
* :math:`y_l` the subset of :math:`y` with label :math:`l`
* similarly, :math:`\hat{y}_s` and :math:`\hat{y}_l` are subsets of
  :math:`\hat{y}`
* :math:`P(A, B) := \frac{\left| A \cap B \right|}{\left|B\right|}` for some
  sets :math:`A` and :math:`B`
* :math:`R(A, B) := \frac{\left| A \cap B \right|}{\left|A\right|}`
  (Conventions vary on handling :math:`A = \emptyset`; this implementation uses
  :math:`R(A, B):=0`, and similar for :math:`P`.)
* :math:`F_\beta(A, B) := \left(1 + \beta^2\right) \frac{P(A, B) \times R(A, B)}{\beta^2 P(A, B) + R(A, B)}`

Then the metrics are defined as:

.. list-table::
   :header-rows: 1

   * - ``average``
     - Precision
     - Recall
     - :math:`F_\beta`
   * - ``"micro"``
     - :math:`P(y, \hat{y})`
     - :math:`R(y, \hat{y})`
     - :math:`F_\beta(y, \hat{y})`
   * - ``"samples"``
     - :math:`\frac{1}{\left|S\right|} \sum_{s \in S} P(y_s, \hat{y}_s)`
     - :math:`\frac{1}{\left|S\right|} \sum_{s \in S} R(y_s, \hat{y}_s)`
     - :math:`\frac{1}{\left|S\right|} \sum_{s \in S} F_\beta(y_s, \hat{y}_s)`
   * - ``"macro"``
     - :math:`\frac{1}{\left|L\right|} \sum_{l \in L} P(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\left|L\right|} \sum_{l \in L} R(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\left|L\right|} \sum_{l \in L} F_\beta(y_l, \hat{y}_l)`
   * - ``"weighted"``
     - :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| P(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| R(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| F_\beta(y_l, \hat{y}_l)`
   * - ``None``
     - :math:`\langle P(y_l, \hat{y}_l) | l \in L \rangle`
     - :math:`\langle R(y_l, \hat{y}_l) | l \in L \rangle`
     - :math:`\langle F_\beta(y_l, \hat{y}_l) | l \in L \rangle`

  >>> from sklearn import metrics
  >>> y_true = [0, 1, 2, 0, 1, 2]
  >>> y_pred = [0, 2, 1, 0, 0, 1]
  >>> metrics.precision_score(y_true, y_pred, average='macro')
  0.22...
  >>> metrics.recall_score(y_true, y_pred, average='micro')
  0.33...
  >>> metrics.f1_score(y_true, y_pred, average='weighted')
  0.26...
  >>> metrics.fbeta_score(y_true, y_pred, average='macro', beta=0.5)
  0.23...
  >>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5, average=None)
  (array([0.66..., 0.        , 0.        ]), array([1., 0., 0.]), array([0.71..., 0.        , 0.        ]), array([2, 2, 2]))

For multiclass classification with a "negative class", it is possible to
exclude some labels:

  >>> metrics.recall_score(y_true, y_pred, labels=[1, 2], average='micro')
  ... # excluding 0, no labels were correctly recalled
  0.0
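As noted above, "macro" averaging for F-measures is the plain arithmetic mean of the per-class scores, which can be checked directly (the arrays restate the multiclass example):

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]
# "macro" is the arithmetic mean of the per-class F-scores...
per_class = f1_score(y_true, y_pred, average=None)
mean_f = np.mean(per_class)
macro_f = f1_score(y_true, y_pred, average='macro')
# ...not the harmonic mean of the macro-averaged precision and recall.
```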
Similarly, labels not present in the data sample may be accounted for in
macro-averaging.

  >>> metrics.precision_score(y_true, y_pred, labels=[0, 1, 2, 3], average='macro')
  0.166...

.. rubric:: References

.. [OB2019] :arxiv:`Opitz, J., & Burst, S. (2019). "Macro f1 and macro f1."
   <1911.03347>`

.. _jaccard_similarity_score:

Jaccard similarity coefficient score
------------------------------------

The :func:`jaccard_score` function computes the average of `Jaccard similarity
coefficients <https://en.wikipedia.org/wiki/Jaccard_index>`_, also called the
Jaccard index, between pairs of label sets.

The Jaccard similarity coefficient with a ground truth label set :math:`y` and
predicted label set :math:`\hat{y}`, is defined as

.. math::

   J(y, \hat{y}) = \frac{|y \cap \hat{y}|}{|y \cup \hat{y}|}.

The :func:`jaccard_score` (like :func:`precision_recall_fscore_support`)
applies natively to binary targets. By computing it set-wise it can be extended
to apply to multilabel and multiclass through the use of ``average`` (see
:ref:`above <average>`).

In the binary case::

  >>> import numpy as np
  >>> from sklearn.metrics import jaccard_score
  >>> y_true = np.array([[0, 1, 1],
  ...                    [1, 1, 0]])
  >>> y_pred = np.array([[1, 1, 1],
  ...                    [1, 0, 0]])
  >>> jaccard_score(y_true[0], y_pred[0])
  0.6666...

In the 2D comparison case (e.g. image similarity)::

  >>> jaccard_score(y_true, y_pred, average="micro")
  0.6

In the multilabel case with binary label indicators::

  >>> jaccard_score(y_true, y_pred, average='samples')
  0.5833...
  >>> jaccard_score(y_true, y_pred, average='macro')
  0.6666...
  >>> jaccard_score(y_true, y_pred, average=None)
  array([0.5, 0.5, 1. ])

Multiclass problems are binarized and treated like the corresponding
multilabel problem::

  >>> y_pred = [0, 2, 1, 2]
  >>> y_true = [0, 1, 2, 2]
  >>> jaccard_score(y_true, y_pred, average=None)
  array([1.  , 0.  , 0.33...])
  >>> jaccard_score(y_true, y_pred, average='macro')
  0.44...
  >>> jaccard_score(y_true, y_pred, average='micro')
  0.33...
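The set definition of :math:`J` above can be reproduced with boolean operations on a single binary sample; the sketch restates the first row of the binary example:

```python
import numpy as np
from sklearn.metrics import jaccard_score

y_true = np.array([0, 1, 1])
y_pred = np.array([1, 1, 1])
# |y ∩ ŷ| / |y ∪ ŷ| over the positive labels of one sample
intersection = np.logical_and(y_true, y_pred).sum()  # 2
union = np.logical_or(y_true, y_pred).sum()          # 3
manual = intersection / union                        # 2/3
sk_value = jaccard_score(y_true, y_pred)             # same value
```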
pred  average None    array  1    0    0 33            jaccard score y true  y pred  average  macro     0 44          jaccard score y true  y pred  average  micro     0 33         hinge loss   Hinge loss             The  func  hinge loss  function computes the average distance between the model and the data using  hinge loss  https   en wikipedia org wiki Hinge loss     a one sided metric that considers only prediction errors   Hinge loss is used in maximal margin classifiers such as support vector machines    If the true label  math  y i  of a binary classification task is encoded as  math  y i  left   1   1 right    for every sample  math  i   and  math  w i  is the corresponding predicted decision  an array of shape   n samples    as output by the  decision function  method   then the hinge loss is defined as      math      L  text Hinge  y  w     frac 1  n  text samples    sum  i 0   n  text samples  1   max left  1   w i y i  0 right    If there are more than two labels   func  hinge loss  uses a multiclass variant due to Crammer   Singer   Here  https   jmlr csail mit edu papers volume2 crammer01a crammer01a pdf    is the paper describing it   In this case the predicted decision is an array of shape   n samples    n labels    If  math  w  i  y i   is the predicted decision for the true label  math  y i  of the  math  i  th sample  and  math   hat w   i  y i     max left  w  i  y j    y j  ne y i  right    is the maximum of the predicted decisions for all the other labels  then the multi class hinge loss is defined by      math      L  text Hinge  y  w     frac 1  n  text samples      sum  i 0   n  text samples  1   max left  1    hat w   i  y i      w  i  y i   0 right    Here is a small example demonstrating the use of the  func  hinge loss  function with a svm classifier in a binary class problem          from sklearn import svm       from sklearn metrics import hinge loss       X     0    1         y     1  1        est   svm LinearSVC random state 0       
  >>> est.fit(X, y)
  LinearSVC(random_state=0)
  >>> pred_decision = est.decision_function([[-2], [3], [0.5]])
  >>> pred_decision
  array([-2.18...,  2.36...,  0.09...])
  >>> hinge_loss([-1, 1, 1], pred_decision)
  0.3...

Here is an example demonstrating the use of the :func:`hinge_loss` function
with a svm classifier in a multiclass problem::

  >>> X = np.array([[0], [1], [2], [3]])
  >>> Y = np.array([0, 1, 2, 3])
  >>> labels = np.array([0, 1, 2, 3])
  >>> est = svm.LinearSVC()
  >>> est.fit(X, Y)
  LinearSVC()
  >>> pred_decision = est.decision_function([[-1], [2], [3]])
  >>> y_true = [0, 2, 3]
  >>> hinge_loss(y_true, pred_decision, labels=labels)
  0.56...

.. _log_loss:

Log loss
--------

Log loss, also called logistic regression loss or cross-entropy loss, is
defined on probability estimates. It is commonly used in (multinomial) logistic
regression and neural networks, as well as in some variants of
expectation-maximization, and can be used to evaluate the probability outputs
(``predict_proba``) of a classifier instead of its discrete predictions.

For binary classification with a true label :math:`y \in \{0,1\}` and a
probability estimate :math:`p = \operatorname{Pr}(y = 1)`, the log loss per
sample is the negative log-likelihood of the classifier given the true label:

.. math::

    L_{\log}(y, p) = -\log \operatorname{Pr}(y|p) = -(y \log (p) + (1 - y) \log (1 - p))

This extends to the multiclass case as follows. Let the true labels for a set
of samples be encoded as a 1-of-K binary indicator matrix :math:`Y`, i.e.,
:math:`y_{i,k} = 1` if sample :math:`i` has label :math:`k` taken from a set of
:math:`K` labels. Let :math:`P` be a matrix of probability estimates, with
:math:`p_{i,k} = \operatorname{Pr}(y_{i,k} = 1)`. Then the log loss of the
whole set is

.. math::

    L_{\log}(Y, P) = -\log \operatorname{Pr}(Y|P) = - \frac{1}{N} \sum_{i=0}^{N-1} \sum_{k=0}^{K-1} y_{i,k} \log p_{i,k}

To see how this generalizes the binary log loss given above, note that in the
binary case, :math:`p_{i,0} = 1 - p_{i,1}` and :math:`y_{i,0} = 1 - y_{i,1}`,
so expanding the inner sum over :math:`y_{i,k} \in \{0,1\}` gives the binary
log loss.

The :func:`log_loss` function computes log loss given a list of ground-truth
labels and a probability matrix, as returned by an estimator's
``predict_proba`` method.

    >>> from sklearn.metrics import log_loss
    >>> y_true = [0, 0, 1, 1]
    >>> y_pred = [[.9, .1], [.8, .2], [.3, .7], [.01, .99]]
    >>> log_loss(y_true, y_pred)
    0.1738...

The first ``[.9, .1]`` in ``y_pred`` denotes 90% probability that the first
sample has label 0. The log loss is non-negative.

.. _matthews_corrcoef:

Matthews correlation coefficient
--------------------------------

The :func:`matthews_corrcoef` function computes the
`Matthews correlation coefficient (MCC)
<https://en.wikipedia.org/wiki/Matthews_correlation_coefficient>`_
for binary classes. Quoting Wikipedia:

    "The Matthews correlation coefficient is used in machine learning as a
    measure of the quality of binary (two-class) classifications. It takes
    into account true and false positives and negatives and is generally
    regarded as a balanced measure which can be used even if the classes are
    of very different sizes. The MCC is in essence a correlation coefficient
    value between -1 and +1. A coefficient of +1 represents a perfect
    prediction, 0 an average random prediction and -1 an inverse prediction.
    The statistic is also known as the phi coefficient."

In the binary (two-class) case, :math:`tp`, :math:`tn`, :math:`fp` and
:math:`fn` are respectively the number of true positives, true negatives,
false positives and false negatives, and the MCC is defined as

.. math::

  MCC = \frac{tp \times tn - fp \times fn}{\sqrt{(tp + fp)(tp + fn)(tn + fp)(tn + fn)}}.

In the multiclass case, the Matthews correlation coefficient can be `defined
<http://rk.kvl.dk/introduction/index.html>`_ in terms of a
:func:`confusion_matrix` :math:`C` for :math:`K` classes. To simplify the
definition, consider the following intermediate variables:

* :math:`t_k=\sum_{i}^{K} C_{ik}` the number of times class :math:`k` truly occurred,
* :math:`p_k=\sum_{i}^{K} C_{ki}` the number of times class :math:`k` was predicted,
* :math:`c=\sum_{k}^{K} C_{kk}` the total number of samples correctly predicted,
* :math:`s=\sum_{i}^{K} \sum_{j}^{K} C_{ij}` the total number of samples.

Then the multiclass MCC is defined as:

.. math::
    MCC = \frac{
        c \times s - \sum_{k}^{K} p_k \times t_k
    }{\sqrt{
        (s^2 - \sum_{k}^{K} p_k^2) \times
        (s^2 - \sum_{k}^{K} t_k^2)
    }}

When there are more than two labels, the value of the MCC will no longer range
between -1 and +1. Instead the minimum value will be somewhere between -1 and 0
depending on the number and distribution of ground true labels. The maximum
value is always +1. For additional information, see [WikipediaMCC2021]_.

Here is a small example illustrating the usage of the :func:`matthews_corrcoef`
function:

    >>> from sklearn.metrics import matthews_corrcoef
    >>> y_true = [+1, +1, +1, -1]
    >>> y_pred = [+1, -1, +1, +1]
    >>> matthews_corrcoef(y_true, y_pred)
    -0.33...

.. rubric:: References

.. [WikipediaMCC2021] Wikipedia contributors. Phi coefficient.
   Wikipedia, The Free Encyclopedia. April 21, 2021, 12:21 CEST.
   Available at: https://en.wikipedia.org/wiki/Phi_coefficient
   Accessed April 21, 2021.

.. _multilabel_confusion_matrix:

Multi-label confusion matrix
----------------------------

The :func:`multilabel_confusion_matrix` function computes class-wise (default)
or sample-wise (``samplewise=True``) multilabel confusion matrices to evaluate
the accuracy of a classification. :func:`multilabel_confusion_matrix` also
treats multiclass data as if it were multilabel, as this is a transformation
commonly applied to evaluate multiclass problems with binary classification
metrics (such as precision, recall, etc.).

When calculating class-wise multilabel confusion matrix :math:`C`, the
count of true negatives for class :math:`i` is :math:`C_{i,0,0}`, false
negatives is :math:`C_{i,1,0}`, true positives is :math:`C_{i,1,1}`
and false positives is :math:`C_{i,0,1}`.

Here is an example demonstrating the use of the
:func:`multilabel_confusion_matrix` function with
:term:`multilabel indicator matrix` input::

    >>> import numpy as np
    >>> from sklearn.metrics import multilabel_confusion_matrix
    >>> y_true = np.array([[1, 0, 1],
    ...                    [0, 1, 0]])
    >>> y_pred = np.array([[1, 0, 0],
    ...                    [0, 1, 1]])
    >>> multilabel_confusion_matrix(y_true, y_pred)
    array([[[1, 0],
            [0, 1]],
    <BLANKLINE>
           [[1, 0],
            [0, 1]],
    <BLANKLINE>
           [[0, 1],
            [1, 0]]])

Or a confusion matrix can be constructed for each sample's labels::

    >>> multilabel_confusion_matrix(y_true, y_pred, samplewise=True)
    array([[[1, 0],
            [1, 1]],
    <BLANKLINE>
           [[1, 1],
            [0, 1]]])

Here is an example demonstrating the use of the
:func:`multilabel_confusion_matrix` function with
:term:`multiclass` input::

    >>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
    >>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
    >>> multilabel_confusion_matrix(y_true, y_pred,
    ...                             labels=["ant", "bird", "cat"])
    array([[[3, 1],
            [0, 2]],
    <BLANKLINE>
           [[5, 0],
            [1, 0]],
    <BLANKLINE>
           [[2, 1],
            [1, 2]]])

Here are some examples demonstrating the use of the
:func:`multilabel_confusion_matrix` function to calculate recall
(or sensitivity), specificity, fall out and miss rate for each class in a
problem with multilabel indicator matrix input.

Calculating
`recall <https://en.wikipedia.org/wiki/Sensitivity_and_specificity>`__
(also called the true positive rate or the sensitivity) for each class::
    >>> y_true = np.array([[0, 0, 1],
    ...                    [0, 1, 0],
    ...                    [1, 1, 0]])
    >>> y_pred = np.array([[0, 1, 0],
    ...                    [0, 0, 1],
    ...                    [1, 1, 0]])
    >>> mcm = multilabel_confusion_matrix(y_true, y_pred)
    >>> tn = mcm[:, 0, 0]
    >>> tp = mcm[:, 1, 1]
    >>> fn = mcm[:, 1, 0]
    >>> fp = mcm[:, 0, 1]
    >>> tp / (tp + fn)
    array([1. , 0.5, 0. ])

Calculating
`specificity <https://en.wikipedia.org/wiki/Sensitivity_and_specificity>`__
(also called the true negative rate) for each class::

    >>> tn / (tn + fp)
    array([1. , 0. , 0.5])

Calculating `fall out <https://en.wikipedia.org/wiki/False_positive_rate>`__
(also called the false positive rate) for each class::

    >>> fp / (fp + tn)
    array([0. , 1. , 0.5])

Calculating `miss rate
<https://en.wikipedia.org/wiki/False_positives_and_false_negatives>`__
(also called the false negative rate) for each class::

    >>> fn / (fn + tp)
    array([0. , 0.5, 1. ])

.. _roc_metrics:

Receiver operating characteristic (ROC)
---------------------------------------

The function :func:`roc_curve` computes the
`receiver operating characteristic curve, or ROC curve
<https://en.wikipedia.org/wiki/Receiver_operating_characteristic>`_.
Quoting Wikipedia:

  "A receiver operating characteristic (ROC), or simply ROC curve, is a
  graphical plot which illustrates the performance of a binary classifier
  system as its discrimination threshold is varied. It is created by plotting
  the fraction of true positives out of the positives (TPR = true positive
  rate) vs. the fraction of false positives out of the negatives (FPR = false
  positive rate), at various threshold settings. TPR is also known as
  sensitivity, and FPR is one minus the specificity or true negative rate."

This function requires the true binary value and the target scores, which can
either be probability estimates of the positive class, confidence values, or
binary decisions.
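The threshold-sweep idea behind :func:`roc_curve` can be cross-checked with a
short NumPy sketch. The helper ``roc_points`` below is illustrative only (it is
not part of scikit-learn; the real function additionally handles ties and drops
suboptimal thresholds):

```python
import numpy as np


def roc_points(y_true, scores, pos_label):
    """Compute (FPR, TPR) pairs by sweeping a threshold over the scores.

    A didactic re-implementation of the idea behind ``roc_curve``: each
    distinct score (plus +inf) serves as a candidate decision threshold.
    """
    y_true = np.asarray(y_true) == pos_label
    scores = np.asarray(scores, dtype=float)
    n_pos = y_true.sum()          # number of positive samples
    n_neg = (~y_true).sum()       # number of negative samples
    points = []
    for thr in [np.inf, *sorted(set(scores), reverse=True)]:
        pred = scores >= thr                       # predict positive above threshold
        tpr = (pred & y_true).sum() / n_pos        # true positive rate
        fpr = (pred & ~y_true).sum() / n_neg       # false positive rate
        points.append((float(fpr), float(tpr)))
    return points


# Same data as the roc_curve example below.
y = [1, 1, 2, 2]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_points(y, scores, pos_label=2))
# → [(0.0, 0.0), (0.0, 0.5), (0.5, 0.5), (0.5, 1.0), (1.0, 1.0)]
```

The FPR and TPR columns of this output match the ``fpr`` and ``tpr`` arrays
returned by :func:`roc_curve` on the same data.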
Here is a small example of how to use the :func:`roc_curve` function::

    >>> import numpy as np
    >>> from sklearn.metrics import roc_curve
    >>> y = np.array([1, 1, 2, 2])
    >>> scores = np.array([0.1, 0.4, 0.35, 0.8])
    >>> fpr, tpr, thresholds = roc_curve(y, scores, pos_label=2)
    >>> fpr
    array([0. , 0. , 0.5, 0.5, 1. ])
    >>> tpr
    array([0. , 0.5, 0.5, 1. , 1. ])
    >>> thresholds
    array([ inf, 0.8 , 0.4 , 0.35, 0.1 ])

Compared to metrics such as the subset accuracy, the Hamming loss, or the
F1 score, ROC doesn't require optimizing a threshold for each label.

The :func:`roc_auc_score` function, denoted by ROC-AUC or AUROC, computes the
area under the ROC curve. By doing so, the curve information is summarized in
one number.

The following figure shows the ROC curve and ROC-AUC score for a classifier
aimed to distinguish the virginica flower from the rest of the species in the
:ref:`iris_dataset`:

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_roc_001.png
   :target: ../auto_examples/model_selection/plot_roc.html
   :scale: 75
   :align: center

For more information see the `Wikipedia article on AUC
<https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve>`_.

.. _roc_auc_binary:

Binary case
^^^^^^^^^^^

In the **binary case**, you can either provide the probability estimates, using
the ``classifier.predict_proba()`` method, or the non-thresholded decision
values given by the ``classifier.decision_function()`` method. In the case of
providing the probability estimates, the probability of the class with the
"greater label" should be provided. The "greater label" corresponds to
``classifier.classes_[1]`` and thus ``classifier.predict_proba(X)[:, 1]``.
Therefore, the ``y_score`` parameter is of size (``n_samples``,).

  >>> from sklearn.datasets import load_breast_cancer
  >>> from sklearn.linear_model import LogisticRegression
  >>> from sklearn.metrics import roc_auc_score
  >>> X, y = load_breast_cancer(return_X_y=True)
  >>> clf = LogisticRegression(solver="liblinear").fit(X, y)
  >>> clf.classes_
  array([0, 1])

We can use the probability estimates corresponding to ``clf.classes_[1]``.

  >>> y_score = clf.predict_proba(X)[:, 1]
  >>> roc_auc_score(y, y_score)
  0.99...

Otherwise, we can use the non-thresholded decision values.

  >>> roc_auc_score(y, clf.decision_function(X))
  0.99...

.. _roc_auc_multiclass:

Multi-class case
^^^^^^^^^^^^^^^^

The :func:`roc_auc_score` function can also be used in **multi-class
classification**. Two averaging strategies are currently supported: the
one-vs-one algorithm computes the average of the pairwise ROC AUC scores, and
the one-vs-rest algorithm computes the average of the ROC AUC scores for each
class against all other classes. In both cases, the predicted labels are
provided in an array with values from 0 to ``n_classes``, and the scores
correspond to the probability estimates that a sample belongs to a particular
class. The OvO and OvR algorithms support weighting uniformly
(``average='macro'``) and by prevalence (``average='weighted'``).

.. dropdown:: One-vs-one Algorithm

  Computes the average AUC of all possible pairwise combinations of classes.
  [HT2001]_ defines a multiclass AUC metric weighted uniformly:

  .. math::

    \frac{1}{c(c-1)}\sum_{j=1}^{c}\sum_{k > j}^c (\text{AUC}(j | k) +
    \text{AUC}(k | j))

  where :math:`c` is the number of classes and :math:`\text{AUC}(j | k)` is
  the AUC with class :math:`j` as the positive class and class :math:`k` as
  the negative class. In general,
  :math:`\text{AUC}(j | k) \neq \text{AUC}(k | j)` in the multiclass case.
  This algorithm is used by setting the keyword argument ``multiclass`` to
  ``'ovo'`` and ``average`` to ``'macro'``.

  The [HT2001]_ multiclass AUC metric can be extended to be weighted by the
  prevalence:

  .. math::

    \frac{1}{c(c-1)}\sum_{j=1}^{c}\sum_{k > j}^c p(j \cup k)(
    \text{AUC}(j | k) + \text{AUC}(k | j))

  where
  :math:`c` is the number of classes. This algorithm is used by setting the
  keyword argument ``multiclass`` to ``'ovo'`` and ``average`` to
  ``'weighted'``. The ``'weighted'`` option returns a prevalence-weighted
  average as described in [FC2009]_.

.. dropdown:: One-vs-rest Algorithm

  Computes the AUC of each class against the rest [PD2000]_. The algorithm is
  functionally the same as the multilabel case. To enable this algorithm set
  the keyword argument ``multiclass`` to ``'ovr'``. Additionally to
  ``'macro'`` [F2006]_ and ``'weighted'`` [F2001]_ averaging, OvR supports
  ``'micro'`` averaging.

  In applications where a high false positive rate is not tolerable the
  parameter ``max_fpr`` of :func:`roc_auc_score` can be used to summarize the
  ROC curve up to the given limit.

  The following figure shows the micro-averaged ROC curve and its
  corresponding ROC-AUC score for a classifier aimed to distinguish the
  different species in the :ref:`iris_dataset`:

  .. image:: ../auto_examples/model_selection/images/sphx_glr_plot_roc_002.png
     :target: ../auto_examples/model_selection/plot_roc.html
     :scale: 75
     :align: center

.. _roc_auc_multilabel:

Multi-label case
^^^^^^^^^^^^^^^^

In **multi-label classification**, the :func:`roc_auc_score` function is
extended by averaging over the labels as :ref:`above <average>`. In this case,
you should provide a ``y_score`` of shape (``n_samples``, ``n_classes``).
Thus, when using the probability estimates, one needs to select the
probability of the class with the greater label for each output.

  >>> from sklearn.datasets import make_multilabel_classification
  >>> from sklearn.multioutput import MultiOutputClassifier
  >>> X, y = make_multilabel_classification(random_state=0)
  >>> inner_clf = LogisticRegression(solver="liblinear", random_state=0)
  >>> clf = MultiOutputClassifier(inner_clf).fit(X, y)
  >>> y_score = np.transpose([y_pred[:, 1] for y_pred in clf.predict_proba(X)])
  >>> roc_auc_score(y, y_score, average=None)
  array([0.82..., 0.86..., 0.94..., 0.85..., 0.94...])

And the decision values do not require such processing.

  >>> from sklearn.linear_model import RidgeClassifierCV
  >>> clf = RidgeClassifierCV().fit(X, y)
  >>> y_score = clf.decision_function(X)
  >>> roc_auc_score(y, y_score, average=None)
  array([0.81..., 0.84..., 0.93..., 0.87..., 0.94...])

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_roc.py` for an example
  of using ROC to evaluate the quality of the output of a classifier.

* See :ref:`sphx_glr_auto_examples_model_selection_plot_roc_crossval.py` for an
  example of using ROC to evaluate classifier output quality, using
  cross-validation.

* See :ref:`sphx_glr_auto_examples_applications_plot_species_distribution_modeling.py`
  for an example of using ROC to model species distribution.

.. rubric:: References

.. [HT2001] Hand, D.J. and Till, R.J., (2001). `A simple generalisation
   of the area under the ROC curve for multiple class classification problems.
   <http://link.springer.com/article/10.1023/A:1010920819831>`_
   Machine learning, 45(2), pp. 171-186.

.. [FC2009] Ferri, Cèsar & Hernandez-Orallo, Jose & Modroiu, R. (2009).
   `An Experimental Comparison of Performance Measures for Classification.
   <https://www.math.ucdavis.edu/~saito/data/roc/ferri-class-perf-metrics.pdf>`_
   Pattern Recognition Letters. 30. 27-38.

.. [PD2000] Provost, F., Domingos, P. (2000). `Well-trained PETs: Improving
   probability estimation trees
   <https://fosterprovost.com/publication/well-trained-pets-improving-probability-estimation-trees/>`_
   (Section 6.2), CeDER Working Paper #IS-00-04, Stern School of Business,
   New York University.

.. [F2006] Fawcett, T., 2006. `An introduction to ROC analysis.
   <http://www.sciencedirect.com/science/article/pii/S016786550500303X>`_
   Pattern Recognition Letters, 27(8), pp. 861-874.

.. [F2001] Fawcett, T., 2001. `Using rule sets to maximize
   ROC performance <https://ieeexplore.ieee.org/document/989510>`_
   In Data Mining, 2001. Proceedings IEEE International Conference,
   pp. 131-138.

.. _det_curve:

Detection error tradeoff (DET)
------------------------------

The function :func:`det_curve` computes the detection error tradeoff (DET)
curve [WikipediaDET2017]_. Quoting Wikipedia:

  "A detection error tradeoff (DET) graph is a graphical plot of error rates
  for binary classification systems, plotting false reject rate vs. false
  accept rate. The x- and y-axes are scaled non-linearly by their standard
  normal deviates (or just by logarithmic transformation), yielding tradeoff
  curves that are more linear than ROC curves, and use most of the image area
  to highlight the differences of importance in the critical operating
  region."

DET curves are a variation of receiver operating characteristic (ROC) curves
where False Negative Rate is plotted on the y-axis instead of True Positive
Rate. DET curves are commonly plotted in normal deviate scale by
transformation with :math:`\phi^{-1}` (with :math:`\phi` being the cumulative
distribution function). The resulting performance curves explicitly visualize
the tradeoff of error types for given classification algorithms. See
[Martin1997]_ for examples and further motivation.

This figure compares the ROC and DET curves of two example classifiers on the
same classification task:

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_det_001.png
   :target: ../auto_examples/model_selection/plot_det.html
   :scale: 75
   :align: center

.. dropdown:: Properties

  * DET curves form a linear curve in normal deviate scale if the detection
    scores are normally (or close-to normally) distributed.
    It was shown by [Navratil2007]_ that the reverse is not necessarily true
    and even more general distributions are able to produce linear DET curves.

  * The normal deviate scale transformation spreads out the points such that a
    comparatively larger space of
    plot is occupied.
    Therefore curves with similar classification performance might be easier
    to distinguish on a DET plot.

  * With False Negative Rate being "inverse" to True Positive Rate the point
    of perfection for DET curves is the origin (in contrast to the top left
    corner for ROC curves).

.. dropdown:: Applications and limitations

  DET curves are intuitive to read and hence allow quick visual assessment of
  a classifier's performance.
  Additionally DET curves can be consulted for threshold analysis and
  operating point selection.
  This is particularly helpful if a comparison of error types is required.

  On the other hand DET curves do not provide their metric as a single number.
  Therefore for either automated evaluation or comparison to other
  classification tasks metrics like the derived area under ROC curve might be
  better suited.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_det.py`
  for an example comparison between receiver operating characteristic (ROC)
  curves and Detection error tradeoff (DET) curves.

.. rubric:: References

.. [WikipediaDET2017] Wikipedia contributors. Detection error tradeoff.
   Wikipedia, The Free Encyclopedia. September 4, 2017, 23:33 UTC.
   Available at: https://en.wikipedia.org/w/index.php?title=Detection_error_tradeoff&oldid=798982054.
   Accessed February 19, 2018.

.. [Martin1997] A. Martin, G. Doddington, T. Kamm, M. Ordowski, and
   M. Przybocki, `The DET Curve in Assessment of Detection Task Performance
   <https://ccc.inaoep.mx/~villasen/bib/martin97det.pdf>`_, NIST 1997.

.. [Navratil2007] J. Navratil and D. Klusacek,
   `"On Linear DETs" <https://ieeexplore.ieee.org/document/4218079>`_,
   2007 IEEE International Conference on Acoustics,
   Speech and Signal Processing - ICASSP '07, Honolulu,
   HI, 2007, pp. IV-229-IV-232.

.. _zero_one_loss:

Zero one loss
-------------

The :func:`zero_one_loss` function computes the sum or the average of the 0-1
classification loss (:math:`L_{0-1}`) over :math:`n_{\text{samples}}`. By
default, the function normalizes over the sample. To get the sum of the
:math:`L_{0-1}`, set ``normalize`` to ``False``.

In multilabel classification, the :func:`zero_one_loss` scores a subset as one
if its labels strictly match the predictions, and as a zero if there are any
errors. By default, the function returns the percentage of imperfectly
predicted subsets. To get the count of such subsets instead, set ``normalize``
to ``False``.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and
:math:`y_i` is the corresponding true value, then the 0-1 loss
:math:`L_{0-1}` is defined as:

.. math::

   L_{0-1}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} 1(\hat{y}_i \not= y_i)

where :math:`1(x)` is the `indicator function
<https://en.wikipedia.org/wiki/Indicator_function>`_. The zero-one loss can
also be computed as :math:`\text{zero-one loss} = 1 - \text{accuracy}`.

  >>> from sklearn.metrics import zero_one_loss
  >>> y_pred = [1, 2, 3, 4]
  >>> y_true = [2, 2, 3, 4]
  >>> zero_one_loss(y_true, y_pred)
  0.25
  >>> zero_one_loss(y_true, y_pred, normalize=False)
  1.0

In the multilabel case with binary label indicators, where the first label
set [0,1] has an error::

  >>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
  0.5
  >>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)), normalize=False)
  1.0

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_feature_selection_plot_rfe_with_cross_validation.py`
  for an example of zero one loss usage to perform recursive feature
  elimination with cross-validation.

.. _brier_score_loss:

Brier score loss
----------------

The :func:`brier_score_loss` function computes the `Brier score
<https://en.wikipedia.org/wiki/Brier_score>`_ for binary classes [Brier1950]_.
Quoting Wikipedia:

    "The Brier score is a proper score function that measures the accuracy of
    probabilistic predictions. It is applicable to tasks in which predictions
    must assign probabilities to a set of mutually exclusive discrete
    outcomes."

This function returns the mean squared error of the actual outcome
:math:`y \in \{0,1\}` and the predicted probability estimate
:math:`p = \operatorname{Pr}(y = 1)` (:term:`predict_proba`) as outputted by:

.. math::

   BS = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}} - 1}(y_i - p_i)^2

The Brier score loss is also between 0 to 1 and the lower the value (the mean
square difference is smaller), the more accurate the prediction is.

Here is a small example of usage of this function::

    >>> import numpy as np
    >>> from sklearn.metrics import brier_score_loss
    >>> y_true = np.array([0, 1, 1, 0])
    >>> y_true_categorical = np.array(["spam", "ham", "ham", "spam"])
    >>> y_prob = np.array([0.1, 0.9, 0.8, 0.4])
    >>> y_pred = np.array([0, 1, 1, 0])
    >>> brier_score_loss(y_true, y_prob)
    0.055
    >>> brier_score_loss(y_true, 1 - y_prob, pos_label=0)
    0.055
    >>> brier_score_loss(y_true_categorical, y_prob, pos_label="ham")
    0.055
    >>> brier_score_loss(y_true, y_prob > 0.5)
    0.0

The Brier score can be used to assess how well a classifier is calibrated.
However, a lower Brier score loss does not always mean a better calibration.
This is because, by analogy with the bias-variance decomposition of the mean
squared error, the Brier score loss can be decomposed as the sum of calibration
loss and refinement loss [Bella2012]_. Calibration loss is defined as the mean
squared deviation from empirical probabilities derived from the slope of ROC
segments. Refinement loss can be defined as the expected optimal loss as
measured by the area under the optimal cost curve. Refinement loss can change
independently from calibration loss, thus a lower Brier score loss does not
necessarily mean a better calibrated model. Only when refinement loss remains
the same does a lower Brier score
loss always mean better calibration [Bella2012]_, [Flach2008]_.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_calibration_plot_calibration.py`
  for an example of Brier score loss usage to perform probability
  calibration of classifiers.

.. rubric:: References

.. [Brier1950] G. Brier, `Verification of forecasts expressed in terms of
   probability
   <ftp://ftp.library.noaa.gov/docs.lib/htdocs/rescue/mwr/078/mwr-078-01-0001.pdf>`_,
   Monthly weather review 78.1 (1950)

.. [Bella2012] Bella, Ferri, Hernández-Orallo, and Ramírez-Quintana
   `"Calibration of Machine Learning Models"
   <http://dmip.webs.upv.es/papers/BFHRHandbook2010.pdf>`_
   in Khosrow-Pour, M. "Machine learning: concepts, methodologies, tools
   and applications." Hershey, PA: Information Science Reference (2012).

.. [Flach2008] Flach, Peter, and Edson Matsubara. `"On classification, ranking,
   and probability estimation."
   <https://drops.dagstuhl.de/opus/volltexte/2008/1382/>`_
   Dagstuhl Seminar Proceedings. Schloss Dagstuhl-Leibniz-Zentrum für
   Informatik (2008).

.. _class_likelihood_ratios:

Class likelihood ratios
-----------------------

The :func:`class_likelihood_ratios` function computes the `positive and
negative likelihood ratios
<https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing>`_
:math:`LR_\pm` for binary classes, which can be interpreted as the ratio of
post-test to pre-test odds as explained below. As a consequence, this metric
is invariant w.r.t. the class prevalence (the number of samples in the
positive class divided by the total number of samples) and **can be
extrapolated between populations regardless of any possible class imbalance.**

The :math:`LR_\pm` metrics are therefore very useful in settings where the
data available to learn and evaluate a classifier is a study population with
nearly balanced classes, such as a case-control study, while the target
application, i.e. the general population, has very low prevalence.

The positive likelihood ratio :math:`LR_+` is the probability of a classifier
to correctly predict that a sample belongs to the positive class divided by
the probability of predicting the positive class for a sample belonging to
the negative class:

.. math::

   LR_+ = \frac{\text{PR}(P+|T+)}{\text{PR}(P+|T-)}.

The notation here refers to predicted (:math:`P`) or true (:math:`T`) label
and the sign :math:`+` and :math:`-` refer to the positive and negative class,
respectively, e.g. :math:`P+` stands for "predicted positive".

Analogously, the negative likelihood ratio :math:`LR_-` is the probability of
a sample of the positive class being classified as belonging to the negative
class divided by the probability of a sample of the negative class being
correctly classified:

.. math::

   LR_- = \frac{\text{PR}(P-|T+)}{\text{PR}(P-|T-)}.

For classifiers above chance :math:`LR_+` above 1 **higher is better**, while
:math:`LR_-` ranges from 0 to 1 and **lower is better**.
Values of :math:`LR_\pm\approx 1` correspond to chance level.

Notice that probabilities differ from counts, for instance
:math:`\operatorname{PR}(P+|T+)` is not equal to the number of true positive
counts ``tp`` (see `the wikipedia page
<https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing>`_ for
the actual formulas).

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_model_selection_plot_likelihood_ratios.py`

.. dropdown:: Interpretation across varying prevalence

  Both class likelihood ratios are interpretable in terms of an odds ratio
  (pre-test and post-tests):

  .. math::

    \text{post-test odds} = \text{Likelihood ratio} \times \text{pre-test odds}.

  Odds are in general related to probabilities via

  .. math::

    \text{odds} = \frac{\text{probability}}{1 - \text{probability}},

  or equivalently

  .. math::

    \text{probability} = \frac{\text{odds}}{1 + \text{odds}}.

  On a given population, the pre-test probability is given by the prevalence.
  By converting odds to probabilities, the likelihood ratios can be translated
  into a probability of truly belonging to either class before and after a
  classifier prediction:

  .. math::

    \text{post-test odds} = \text{Likelihood ratio} \times
    \frac{\text{pre-test probability}}{1 - \text{pre-test probability}},

  .. math::

    \text{post-test probability} = \frac{\text{post-test odds}}{1 + \text{post-test odds}}.

.. dropdown:: Mathematical divergences

  The positive likelihood ratio is undefined when :math:`fp = 0`, which can be
  interpreted as the classifier perfectly identifying positive cases. If
  :math:`fp = 0` and additionally :math:`tp = 0`, this leads to a zero/zero
  division. This happens, for instance, when using a ``DummyClassifier`` that
  always predicts the negative class and therefore the interpretation as a
  perfect classifier is lost.

  The negative likelihood ratio is undefined when :math:`tn = 0`. Such
  divergence is invalid, as :math:`LR_- > 1` would indicate an increase in the
  odds of a sample belonging to the positive class after being classified as
  negative, as if the act of classifying caused the positive condition. This
  includes the case of a ``DummyClassifier`` that always predicts the positive
  class (i.e. when :math:`tn=fn=0`).

  Both class likelihood ratios are undefined when :math:`tp=fn=0`, which means
  that no samples of the positive class were present in the testing set. This
  can also happen when cross-validating highly imbalanced data.

  In all the previous cases the :func:`class_likelihood_ratios` function
  raises by default an appropriate warning message and returns ``nan`` to
  avoid pollution when averaging over cross-validation folds.

  For a worked-out demonstration of the :func:`class_likelihood_ratios`
  function, see the example below.

.. dropdown:: References

  * `Wikipedia entry for Likelihood ratios in diagnostic testing
    <https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing>`_

  * Brenner, H., & Gefeller, O.
 1997       Variation of sensitivity  specificity  likelihood ratios and predictive     values with disease prevalence      Statistics in medicine  16 9   981 991        d2 score classification   D  score for classification                              The D  score computes the fraction of deviance explained  It is a generalization of R   where the squared error is generalized and replaced by a classification deviance of choice  math   text dev  y   hat y     e g   Log loss   D  is a form of a  skill score   It is calculated as     math      D 2 y   hat y     1    frac  text dev  y   hat y     text dev  y  y   text null          Where  math  y   text null    is the optimal prediction of an intercept only model  e g   the per class proportion of  y true  in the case of the Log loss    Like R   the best possible score is 1 0 and it can be negative  because the model can be arbitrarily worse   A constant model that always predicts  math  y   text null     disregarding the input features  would get a D  score of 0 0      dropdown   D2 log loss score    The  func  d2 log loss score  function implements the special case   of D  with the log loss  see  ref  log loss   i e         math         text dev  y   hat y      text log loss  y   hat y       Here are some usage examples of the  func  d2 log loss score  function            from sklearn metrics import d2 log loss score         y true    1  1  2  3          y pred                 0 5  0 25  0 25               0 5  0 25  0 25               0 5  0 25  0 25               0 5  0 25  0 25                     d2 log loss score y true  y pred      0 0         y true    1  2  3          y pred                  0 98  0 01  0 01                0 01  0 98  0 01                0 01  0 01  0 98                     d2 log loss score y true  y pred      0 981            y true    1  2  3          y pred                  0 1  0 6  0 3                0 1  0 6  0 3                0 4  0 5  0 1                     d2 log loss score y 
true  y pred       0 552          multilabel ranking metrics   Multilabel ranking metrics                                currentmodule   sklearn metrics  In multilabel learning  each sample can have any number of ground truth labels associated with it  The goal is to give high scores and better rank to the ground truth labels       coverage error   Coverage error                 The  func  coverage error  function computes the average number of labels that have to be included in the final prediction such that all true labels are predicted  This is useful if you want to know how many top scored labels you have to predict in average without missing any true one  The best value of this metrics is thus the average number of true labels      note        Our implementation s score is 1 greater than the one given in Tsoumakas     et al   2010  This extends it to handle the degenerate case in which an     instance has 0 true labels   Formally  given a binary indicator matrix of the ground truth labels  math  y  in  left  0  1 right    n  text samples   times n  text labels    and the score associated with each label  math   hat f   in  mathbb R   n  text samples   times n  text labels     the coverage is defined as     math     coverage y   hat f      frac 1  n   text samples         sum  i 0   n   text samples     1   max  j y  ij    1   text rank   ij   with  math   text rank   ij     left  left  k   hat f   ik   geq  hat f   ij   right   right    Given the rank definition  ties in   y scores   are broken by giving the maximal rank that would have been assigned to all tied values   Here is a small example of usage of this function            import numpy as np         from sklearn metrics import coverage error         y true   np array   1  0  0    0  0  1            y score   np array   0 75  0 5  1    1  0 2  0 1            coverage error y true  y score      2 5      label ranking average precision   Label ranking average precision                                  The 
 func  label ranking average precision score  function implements label ranking average precision  LRAP   This metric is linked to the  func  average precision score  function  but is based on the notion of label ranking instead of precision and recall   Label ranking average precision  LRAP  averages over the samples the answer to the following question  for each ground truth label  what fraction of higher ranked labels were true labels  This performance measure will be higher if you are able to give better rank to the labels associated with each sample  The obtained score is always strictly greater than 0  and the best value is 1  If there is exactly one relevant label per sample  label ranking average precision is equivalent to the  mean reciprocal rank  https   en wikipedia org wiki Mean reciprocal rank      Formally  given a binary indicator matrix of the ground truth labels  math  y  in  left  0  1 right    n  text samples   times n  text labels    and the score associated with each label  math   hat f   in  mathbb R   n  text samples   times n  text labels     the average precision is defined as     math     LRAP y   hat f      frac 1  n   text samples         sum  i 0   n   text samples     1   frac 1    y i   0       sum  j y  ij    1   frac   mathcal L   ij     text rank   ij     where  math   mathcal L   ij     left  k  y  ik    1   hat f   ik   geq  hat f   ij   right      math   text rank   ij     left  left  k   hat f   ik   geq  hat f   ij   right   right     math    cdot   computes the cardinality of the set  i e   the number of elements in the set   and  math     cdot   0  is the  math   ell 0   norm   which computes the number of nonzero elements in a vector    Here is a small example of usage of this function            import numpy as np         from sklearn metrics import label ranking average precision score         y true   np array   1  0  0    0  0  1            y score   np array   0 75  0 5  1    1  0 2  0 1            label ranking 
average precision score y true  y score      0 416         label ranking loss   Ranking loss               The  func  label ranking loss  function computes the ranking loss which averages over the samples the number of label pairs that are incorrectly ordered  i e  true labels have a lower score than false labels  weighted by the inverse of the number of ordered pairs of false and true labels  The lowest achievable ranking loss is zero   Formally  given a binary indicator matrix of the ground truth labels  math  y  in  left  0  1 right    n  text samples   times n  text labels    and the score associated with each label  math   hat f   in  mathbb R   n  text samples   times n  text labels     the ranking loss is defined as     math     ranking  loss y   hat f       frac 1  n   text samples         sum  i 0   n   text samples     1   frac 1    y i   0 n  text labels      y i   0        left  left   k  l    hat f   ik   leq  hat f   il   y  ik    1  y  il    0  right   right   where  math    cdot   computes the cardinality of the set  i e   the number of elements in the set  and  math     cdot   0  is the  math   ell 0   norm   which computes the number of nonzero elements in a vector    Here is a small example of usage of this function            import numpy as np         from sklearn metrics import label ranking loss         y true   np array   1  0  0    0  0  1            y score   np array   0 75  0 5  1    1  0 2  0 1            label ranking loss y true  y score      0 75              With the following prediction  we have perfect and minimal loss         y score   np array   1 0  0 1  0 2    0 1  0 2  0 9            label ranking loss y true  y score      0 0      dropdown   References      Tsoumakas  G   Katakis  I     Vlahavas  I   2010   Mining multi label data  In     Data mining and knowledge discovery handbook  pp  667 685   Springer US        ndcg   Normalized Discounted Cumulative Gain                                        Discounted Cumulative Gain 
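The ranking-loss formula above can be checked by hand. The sketch below is a
minimal pure-Python implementation of that formula (the helper name
``ranking_loss`` is illustrative, not a scikit-learn API; in practice use
:func:`label_ranking_loss`), reproducing the value from the example above:

```python
# Illustrative pure-Python sketch of the ranking-loss formula; not the
# scikit-learn implementation (use sklearn.metrics.label_ranking_loss instead).
def ranking_loss(y_true, y_score):
    n_samples = len(y_true)
    n_labels = len(y_true[0])
    total = 0.0
    for yi, fi in zip(y_true, y_score):
        n_pos = sum(yi)
        # Count pairs (k, l) where a true label k is scored no higher
        # than a false label l.
        miss = sum(
            1
            for k in range(n_labels) if yi[k] == 1
            for l in range(n_labels) if yi[l] == 0 and fi[k] <= fi[l]
        )
        # Normalize by the number of (true, false) label pairs.
        total += miss / (n_pos * (n_labels - n_pos))
    return total / n_samples

y_true = [[1, 0, 0], [0, 0, 1]]
y_score = [[0.75, 0.5, 1], [1, 0.2, 0.1]]
print(ranking_loss(y_true, y_score))  # 0.75, matching label_ranking_loss
```

For the perfectly ordered scores of the second example above, the same helper
returns 0.0.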
.. _ndcg:

Normalized Discounted Cumulative Gain
-------------------------------------

Discounted Cumulative Gain (DCG) and Normalized Discounted Cumulative Gain
(NDCG) are ranking metrics implemented in :func:`~sklearn.metrics.dcg_score`
and :func:`~sklearn.metrics.ndcg_score`; they compare a predicted order to
ground-truth scores, such as the relevance of answers to a query.

From the Wikipedia page for Discounted Cumulative Gain:

"Discounted cumulative gain (DCG) is a measure of ranking quality. In
information retrieval, it is often used to measure effectiveness of web search
engine algorithms or related applications. Using a graded relevance scale of
documents in a search engine result set, DCG measures the usefulness, or gain,
of a document based on its position in the result list. The gain is accumulated
from the top of the result list to the bottom, with the gain of each result
discounted at lower ranks."

DCG orders the true targets (e.g. relevance of query answers) in the predicted
order, then multiplies them by a logarithmic decay and sums the result. The sum
can be truncated after the first :math:`K` results, in which case we call it
DCG@K. NDCG, or NDCG@K, is DCG divided by the DCG obtained by a perfect
prediction, so that it is always between 0 and 1. Usually, NDCG is preferred
to DCG.

Compared with the ranking loss, NDCG can take into account relevance scores,
rather than a ground-truth ranking. So if the ground truth consists only of an
ordering, the ranking loss should be preferred; if the ground truth consists of
actual usefulness scores (e.g. 0 for irrelevant, 1 for relevant, 2 for very
relevant), NDCG can be used.

For one sample, given the vector of continuous ground-truth values for each
target :math:`y \in \mathbb{R}^{M}`, where :math:`M` is the number of outputs,
and the prediction :math:`\hat{y}`, which induces the ranking function
:math:`f`, the DCG score is

.. math::
   \sum_{r=1}^{\min(K, M)} \frac{y_{f(r)}}{\log(1 + r)}

and the NDCG score is the DCG score divided by the DCG score obtained for
:math:`y`.

.. dropdown:: References

  * `Wikipedia entry for Discounted Cumulative Gain
    <https://en.wikipedia.org/wiki/Discounted_cumulative_gain>`_

  * Jarvelin, K., & Kekalainen, J. (2002).
    Cumulated gain-based evaluation of IR techniques. ACM Transactions on
    Information Systems (TOIS), 20(4), 422-446.

  * Wang, Y., Wang, L., Li, Y., He, D., Chen, W., & Liu, T. Y. (2013, May).
    A theoretical analysis of NDCG ranking measures. In Proceedings of the 26th
    Annual Conference on Learning Theory (COLT 2013).

  * McSherry, F., & Najork, M. (2008, March). Computing information retrieval
    performance measures efficiently in the presence of tied scores. In
    European conference on information retrieval (pp. 414-421). Springer,
    Berlin, Heidelberg.

.. _regression_metrics:

Regression metrics
==================

.. currentmodule:: sklearn.metrics

The :mod:`sklearn.metrics` module implements several loss, score, and utility
functions to measure regression performance. Some of those have been enhanced
to handle the multioutput case: :func:`mean_squared_error`,
:func:`mean_absolute_error`, :func:`r2_score`,
:func:`explained_variance_score`, :func:`mean_pinball_loss`,
:func:`d2_pinball_score` and :func:`d2_absolute_error_score`.

These functions have a ``multioutput`` keyword argument which specifies the
way the scores or losses for each individual target should be averaged. The
default is ``'uniform_average'``, which specifies a uniformly weighted mean
over outputs. If an ``ndarray`` of shape ``(n_outputs,)`` is passed, then its
entries are interpreted as weights and an according weighted average is
returned. If ``multioutput`` is ``'raw_values'``, then all unaltered
individual scores or losses will be returned in an array of shape
``(n_outputs,)``.

The :func:`r2_score` and :func:`explained_variance_score` accept an additional
value ``'variance_weighted'`` for the ``multioutput`` parameter. This option
leads to a weighting of each individual score by the variance of the
corresponding target variable. This setting quantifies the globally captured
unscaled variance. If the target variables are of different scale, then this
score puts more importance on explaining the higher variance variables.

.. _r2_score:

R² score, the coefficient of determination
------------------------------------------

The :func:`r2_score` function computes the `coefficient of determination
<https://en.wikipedia.org/wiki/Coefficient_of_determination>`_,
usually denoted as :math:`R^2`.

It represents the proportion of variance (of y) that has been explained by the
independent variables in the model. It provides an indication of goodness of
fit and therefore a measure of how well unseen samples are likely to be
predicted by the model, through the proportion of explained variance.

As such variance is dataset dependent, :math:`R^2` may not be meaningfully
comparable across different datasets. Best possible score is 1.0 and it can be
negative (because the model can be arbitrarily worse). A constant model that
always predicts the expected (average) value of y, disregarding the input
features, would get an :math:`R^2` score of 0.0.

Note: when the prediction residuals have zero mean, the :math:`R^2` score and
the :ref:`explained_variance_score` are identical.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
and :math:`y_i` is the corresponding true value for total :math:`n` samples,
the estimated :math:`R^2` is defined as:

.. math::

  R^2(y, \hat{y}) = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}

where :math:`\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i` and
:math:`\sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} \epsilon_i^2`.

Note that :func:`r2_score` calculates unadjusted :math:`R^2` without correcting
for bias in sample variance of y.

In the particular case where the true target is constant, the :math:`R^2` score
is not finite: it is either ``NaN`` (perfect predictions) or ``-Inf``
(imperfect predictions). Such non-finite scores may prevent correct model
optimization such as grid-search cross-validation to be performed correctly.
For this reason the default behaviour of :func:`r2_score` is to replace them
with 1.0 (perfect predictions) or 0.0 (imperfect predictions). If
``force_finite`` is set to ``False``, this score falls back on the original
:math:`R^2` definition.

Here is a small example of usage of the :func:`r2_score` function::

  >>> from sklearn.metrics import r2_score
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> r2_score(y_true, y_pred)
  0.948...
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> r2_score(y_true, y_pred, multioutput='variance_weighted')
  0.938...
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> r2_score(y_true, y_pred, multioutput='uniform_average')
  0.936...
  >>> r2_score(y_true, y_pred, multioutput='raw_values')
  array([0.965..., 0.908...])
  >>> r2_score(y_true, y_pred, multioutput=[0.3, 0.7])
  0.925...
  >>> y_true = [-2, -2, -2]
  >>> y_pred = [-2, -2, -2]
  >>> r2_score(y_true, y_pred)
  1.0
  >>> r2_score(y_true, y_pred, force_finite=False)
  nan
  >>> y_true = [-2, -2, -2]
  >>> y_pred = [-2, -2, -2 + 1e-8]
  >>> r2_score(y_true, y_pred)
  0.0
  >>> r2_score(y_true, y_pred, force_finite=False)
  -inf

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_linear_model_plot_lasso_and_elasticnet.py`
  for an example of R² score usage to
  evaluate Lasso and Elastic Net on sparse signals.

.. _mean_absolute_error:

Mean absolute error
-------------------

The :func:`mean_absolute_error` function computes `mean absolute error
<https://en.wikipedia.org/wiki/Mean_absolute_error>`_, a risk
metric corresponding to the expected value of the absolute error loss or
:math:`l1`-norm loss.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean absolute error
(MAE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

  \text{MAE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \left| y_i - \hat{y}_i \right|.

Here is a small example of usage of the :func:`mean_absolute_error` function::

  >>> from sklearn.metrics import mean_absolute_error
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> mean_absolute_error(y_true, y_pred)
  0.5
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> mean_absolute_error(y_true, y_pred)
  0.75
  >>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
  array([0.5, 1. ])
  >>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
  0.85...

.. _mean_squared_error:

Mean squared error
------------------

The :func:`mean_squared_error` function computes `mean squared error
<https://en.wikipedia.org/wiki/Mean_squared_error>`_, a risk
metric corresponding to the expected value of the squared (quadratic) error or
loss.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean squared error
(MSE) estimated over :math:`n_\text{samples}` is defined as

.. math::

  \text{MSE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (y_i - \hat{y}_i)^2.

Here is a small example of usage of the :func:`mean_squared_error`
function::

  >>> from sklearn.metrics import mean_squared_error
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> mean_squared_error(y_true, y_pred)
  0.375
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> mean_squared_error(y_true, y_pred)
  0.7083...

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_regression.py`
  for an example of mean squared error usage to evaluate gradient boosting
  regression.
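The MSE and R² definitions above can be verified with a few lines of plain
Python (a sketch for illustration; in practice use
:func:`mean_squared_error` and :func:`r2_score`), reproducing the values from
the examples above:

```python
# Hand-computed MSE and unadjusted R^2, following the formulas in this section.
y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]

n = len(y_true)
ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))  # residual sum of squares
mse = ss_res / n

y_bar = sum(y_true) / n                                          # mean of y_true
ss_tot = sum((yt - y_bar) ** 2 for yt in y_true)                 # total sum of squares
r2 = 1 - ss_res / ss_tot

print(mse)           # 0.375
print(round(r2, 4))  # 0.9486
```

Both numbers agree with the `mean_squared_error` and `r2_score` outputs shown
in the examples above.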
Taking the square root of the MSE, called the root mean squared error (RMSE),
is another common metric that provides a measure in the same units as the
target variable. RMSE is available through the
:func:`root_mean_squared_error` function.

.. _mean_squared_log_error:

Mean squared logarithmic error
------------------------------

The :func:`mean_squared_log_error` function computes a risk metric
corresponding to the expected value of the squared logarithmic (quadratic)
error or loss.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean squared
logarithmic error (MSLE) estimated over :math:`n_\text{samples}` is
defined as

.. math::

  \text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log_e (1 + y_i) - \log_e (1 + \hat{y}_i) )^2.

Where :math:`\log_e (x)` means the natural logarithm of :math:`x`. This metric
is best to use when targets have exponential growth, such as population
counts, average sales of a commodity over a span of years etc. Note that this
metric penalizes an under-predicted estimate greater than an over-predicted
estimate.

Here is a small example of usage of the :func:`mean_squared_log_error`
function::

  >>> from sklearn.metrics import mean_squared_log_error
  >>> y_true = [3, 5, 2.5, 7]
  >>> y_pred = [2.5, 5, 4, 8]
  >>> mean_squared_log_error(y_true, y_pred)
  0.039...
  >>> y_true = [[0.5, 1], [1, 2], [7, 6]]
  >>> y_pred = [[0.5, 2], [1, 2.5], [8, 8]]
  >>> mean_squared_log_error(y_true, y_pred)
  0.044...

The root mean squared logarithmic error (RMSLE) is available through the
:func:`root_mean_squared_log_error` function.

.. _mean_absolute_percentage_error:

Mean absolute percentage error
------------------------------

The :func:`mean_absolute_percentage_error` (MAPE), also known as mean absolute
percentage deviation (MAPD), is an evaluation metric for regression problems.
The idea of this metric is to be sensitive to relative errors. It is for
example not changed by a global scaling of the target variable.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
and :math:`y_i` is the corresponding true value, then the mean absolute
percentage error (MAPE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

  \text{MAPE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \frac{\left| y_i - \hat{y}_i \right|}{\max(\epsilon, \left| y_i \right|)}

where :math:`\epsilon` is an arbitrary small yet strictly positive number to
avoid undefined results when y is zero.

The :func:`mean_absolute_percentage_error` function supports multioutput.

Here is a small example of usage of the
:func:`mean_absolute_percentage_error` function::

  >>> from sklearn.metrics import mean_absolute_percentage_error
  >>> y_true = [1, 10, 1e6]
  >>> y_pred = [0.9, 15, 1.2e6]
  >>> mean_absolute_percentage_error(y_true, y_pred)
  0.2666...

In the above example, if we had used `mean_absolute_error`, it would have
ignored the small magnitude values and only reflected the error in prediction
of the highest magnitude value. But that problem is resolved in case of MAPE
because it calculates relative percentage error with respect to actual output.

.. note::

    The MAPE formula here does not represent the common "percentage" definition: the
    percentage in the range [0, 100] is converted to a relative value in the range [0,
    1] by dividing by 100. Thus, an error of 200% corresponds to a relative error of 2.
    The motivation here is to have a range of values that is more consistent with other
    error metrics in scikit-learn, such as `accuracy_score`.

    To obtain the mean absolute percentage error as per the Wikipedia formula,
    multiply the `mean_absolute_percentage_error` computed here by 100.

.. dropdown:: References

  * `Wikipedia entry for Mean Absolute Percentage Error
    <https://en.wikipedia.org/wiki/Mean_absolute_percentage_error>`_

.. _median_absolute_error:

Median absolute error
---------------------

The :func:`median_absolute_error` is particularly interesting because it is
robust to outliers. The loss is calculated by taking the median of all absolute
differences between the target and the prediction.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample
and :math:`y_i` is the corresponding true value, then the median absolute error
(MedAE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

  \text{MedAE}(y, \hat{y}) = \text{median}(\mid y_1 - \hat{y}_1 \mid, \ldots, \mid y_n - \hat{y}_n \mid).

The :func:`median_absolute_error` does not support multioutput.

Here is a small example of usage of the :func:`median_absolute_error`
function::

  >>> from sklearn.metrics import median_absolute_error
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> median_absolute_error(y_true, y_pred)
  0.5

.. _max_error:

Max error
---------

The :func:`max_error` function computes the maximum `residual error
<https://en.wikipedia.org/wiki/Errors_and_residuals>`_, a metric
that captures the worst case error between the predicted value and
the true value. In a perfectly fitted single output regression
model, ``max_error`` would be ``0`` on the training set and though this
would be highly unlikely in the real world, this metric shows the
extent of error that the model had when it was fitted.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the max error is
defined as

.. math::

  \text{Max Error}(y, \hat{y}) = \max(| y_i - \hat{y}_i |)

Here is a small example of usage of the :func:`max_error` function::

  >>> from sklearn.metrics import max_error
  >>> y_true = [3, 2, 7, 1]
  >>> y_pred = [9, 2, 7, 1]
  >>> max_error(y_true, y_pred)
  6

The :func:`max_error` does not support multioutput.

.. _explained_variance_score:

Explained variance score
------------------------

The :func:`explained_variance_score` computes the `explained variance
regression score <https://en.wikipedia.org/wiki/Explained_variation>`_.

If :math:`\hat{y}` is the estimated target output, :math:`y` the corresponding
(correct) target output, and :math:`Var` is `Variance
<https://en.wikipedia.org/wiki/Variance>`_, the square of the standard
deviation, then the explained variance is estimated as follow:

.. math::

  explained\_{}variance(y, \hat{y}) = 1 - \frac{Var\{ y - \hat{y}\}}{Var\{y\}}

The best possible score is 1.0, lower values are worse.

.. topic:: Link to :ref:`r2_score`

    The difference between the explained variance score and the
    :ref:`r2_score` is that the explained variance score does not account for
    systematic offset in the prediction. For this reason, the
    :ref:`r2_score` should be preferred in general.

In the particular case where the true target is constant, the Explained
Variance score is not finite: it is either ``NaN`` (perfect predictions) or
``-Inf`` (imperfect predictions). Such non-finite scores may prevent correct
model optimization such as grid-search cross-validation to be performed
correctly. For this reason the default behaviour of
:func:`explained_variance_score` is to replace them with 1.0 (perfect
predictions) or 0.0 (imperfect predictions). You can set the ``force_finite``
parameter to ``False`` to prevent this fix from happening and fallback on the
original Explained Variance score.

Here is a small example of usage of the :func:`explained_variance_score`
function::

    >>> from sklearn.metrics import explained_variance_score
    >>> y_true = [3, -0.5, 2, 7]
    >>> y_pred = [2.5, 0.0, 2, 8]
    >>> explained_variance_score(y_true, y_pred)
    0.957...
    >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
    >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
    >>> explained_variance_score(y_true, y_pred, multioutput='raw_values')
    array([0.967..., 1.        ])
    >>> explained_variance_score(y_true, y_pred, multioutput=[0.3, 0.7])
    0.990...
    >>> y_true = [-2, -2, -2]
    >>> y_pred = [-2, -2, -2]
    >>> explained_variance_score(y_true, y_pred)
    1.0
    >>> explained_variance_score(y_true, y_pred, force_finite=False)
    nan
    >>> y_true = [-2, -2, -2]
    >>> y_pred = [-2, -2, -2 + 1e-8]
    >>> explained_variance_score(y_true, y_pred)
    0.0
    >>> explained_variance_score(y_true, y_pred, force_finite=False)
    -inf

.. _mean_tweedie_deviance:

Mean Poisson, Gamma, and Tweedie deviances
------------------------------------------

The :func:`mean_tweedie_deviance` function computes the `mean Tweedie
deviance error
<https://en.wikipedia.org/wiki/Tweedie_distribution#The_Tweedie_deviance>`_
with a ``power`` parameter (:math:`p`). This is a metric that elicits
predicted expectation values of regression targets.

Following special cases exist,

- when ``power=0`` it is equivalent to :func:`mean_squared_error`.
- when ``power=1`` it is equivalent to :func:`mean_poisson_deviance`.
- when ``power=2`` it is equivalent to :func:`mean_gamma_deviance`.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample,
and :math:`y_i` is the corresponding true value, then the mean Tweedie
deviance error (D) for power :math:`p`, estimated over
:math:`n_\text{samples}` is defined as

.. math::

  \text{D}(y, \hat{y}) = \frac{1}{n_\text{samples}}
  \sum_{i=0}^{n_\text{samples} - 1}
  \begin{cases}
  (y_i-\hat{y}_i)^2, & \text{for }p=0\text{ (Normal)}\\
  2(y_i \log(y_i/\hat{y}_i) + \hat{y}_i - y_i),  & \text{for }p=1\text{ (Poisson)}\\
  2(\log(\hat{y}_i/y_i) + y_i/\hat{y}_i - 1),  & \text{for }p=2\text{ (Gamma)}\\
  2\left(\frac{\max(y_i,0)^{2-p}}{(1-p)(2-p)}-
  \frac{y_i\,\hat{y}_i^{1-p}}{1-p}+\frac{\hat{y}_i^{2-p}}{2-p}\right),
  & \text{otherwise}
  \end{cases}
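The piecewise definition above can be exercised in plain Python. The sketch
below (the helper ``tweedie_deviance`` is illustrative, not the scikit-learn
implementation; use :func:`mean_tweedie_deviance` in practice) also checks the
homogeneity property discussed next: scaling `y_true` and `y_pred` by a factor
`c` scales the deviance by `c ** (2 - power)`:

```python
import math

# Illustrative per-sample Tweedie deviance, following the piecewise formula;
# not the scikit-learn implementation (use sklearn.metrics.mean_tweedie_deviance).
def tweedie_deviance(y, y_hat, power):
    if power == 0:   # Normal: squared error
        return (y - y_hat) ** 2
    if power == 1:   # Poisson
        return 2 * (y * math.log(y / y_hat) + y_hat - y)
    if power == 2:   # Gamma
        return 2 * (math.log(y_hat / y) + y / y_hat - 1)
    # General Tweedie case
    return 2 * (max(y, 0) ** (2 - power) / ((1 - power) * (2 - power))
                - y * y_hat ** (1 - power) / (1 - power)
                + y_hat ** (2 - power) / (2 - power))

# Homogeneity of degree 2 - power: scaling both arguments by c = 100
# multiplies the deviance by 100 ** (2 - power).
for p in (0, 1, 1.5, 2):
    d_small = tweedie_deviance(1.0, 1.5, p)
    d_large = tweedie_deviance(100.0, 150.0, p)
    assert abs(d_large - 100 ** (2 - p) * d_small) < 1e-9 * max(1.0, d_large)
```

For ``power=2`` the scale factor is 1, which is why the Gamma deviance is
unchanged when both arguments are scaled together.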
Tweedie deviance is a homogeneous function of degree ``2-power``.
Thus, Gamma distribution with ``power=2`` means that simultaneously scaling
``y_true`` and ``y_pred`` has no effect on the deviance. For Poisson
distribution ``power=1`` the deviance scales linearly, and for Normal
distribution (``power=0``), quadratically. In general, the higher ``power``
the less weight is given to extreme deviations between true and predicted
targets.

For instance, let's compare the two predictions 1.5 and 150 that are both
50% larger than their corresponding true value.

The mean squared error (``power=0``) is very sensitive to the
prediction difference of the second point::

    >>> from sklearn.metrics import mean_tweedie_deviance
    >>> mean_tweedie_deviance([1.0], [1.5], power=0)
    0.25
    >>> mean_tweedie_deviance([100.], [150.], power=0)
    2500.0

If we increase ``power`` to 1::

    >>> mean_tweedie_deviance([1.0], [1.5], power=1)
    0.18...
    >>> mean_tweedie_deviance([100.], [150.], power=1)
    18.9...

the difference in errors decreases. Finally, by setting ``power=2``::

    >>> mean_tweedie_deviance([1.0], [1.5], power=2)
    0.14...
    >>> mean_tweedie_deviance([100.], [150.], power=2)
    0.14...

we would get identical errors. The deviance when ``power=2`` is thus only
sensitive to relative errors.

.. _pinball_loss:

Pinball loss
------------

The :func:`mean_pinball_loss` function is used to evaluate the predictive
performance of `quantile regression
<https://en.wikipedia.org/wiki/Quantile_regression>`_ models.

.. math::

  \text{pinball}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1}  \alpha \max(y_i - \hat{y}_i, 0) + (1 - \alpha) \max(\hat{y}_i - y_i, 0)

The value of pinball loss is equivalent to half of
:func:`mean_absolute_error` when the quantile parameter ``alpha`` is set to
0.5.

Here is a small example of usage of the :func:`mean_pinball_loss` function::

  >>> from sklearn.metrics import mean_pinball_loss
  >>> y_true = [1, 2, 3]
  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.1)
  0.03...
  >>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.1)
  0.3...
  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.9)
  0.3...
  >>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.9)
  0.03...
  >>> mean_pinball_loss(y_true, y_true, alpha=0.1)
  0.0
  >>> mean_pinball_loss(y_true, y_true, alpha=0.9)
  0.0

It is possible to build a scorer object with a specific choice of ``alpha``::

  >>> from sklearn.metrics import make_scorer
  >>> mean_pinball_loss_95p = make_scorer(mean_pinball_loss, alpha=0.95)

Such a scorer can be used to evaluate the generalization performance of a
quantile regressor via cross-validation::

  >>> from sklearn.datasets import make_regression
  >>> from sklearn.model_selection import cross_val_score
  >>> from sklearn.ensemble import GradientBoostingRegressor
  >>>
  >>> X, y = make_regression(n_samples=100, random_state=0)
  >>> estimator = GradientBoostingRegressor(
  ...     loss="quantile",
  ...     alpha=0.95,
  ...     random_state=0,
  ... )
  >>> cross_val_score(estimator, X, y, cv=5, scoring=mean_pinball_loss_95p)
  array([13.6..., 9.7..., 23.3..., 9.5..., 10.4...])

It is also possible to build scorer objects for hyper-parameter tuning. The
sign of the loss must be switched to ensure that greater means better as
explained in the example linked below.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_quantile.py`
  for an example of using the pinball loss to evaluate and tune the
  hyper-parameters of quantile regression models on data with non-symmetric
  noise and outliers.

.. _d2_score:

D² score
--------

The D² score computes the fraction of deviance explained.
It is a generalization of R², where the squared error is generalized and
replaced by a deviance of choice :math:`\text{dev}(y, \hat{y})`
(e.g., Tweedie, pinball or mean absolute error). D² is a form of a
*skill score*. It is calculated as

.. math::

  D^2(y, \hat{y}) = 1 - \frac{\text{dev}(y, \hat{y})}{\text{dev}(y, y_{\text{null}})} \,.

Where :math:`y_{\text{null}}` is the optimal prediction of an intercept-only
model (e.g., the mean of `y_true` for the Tweedie case, the median for absolute
error and the alpha-quantile for pinball loss).

Like R², the best possible score is 1.0 and it can be negative (because the
model can be arbitrarily worse). A constant model that always predicts
:math:`y_{\text{null}}`, disregarding the input features, would get a D² score
of 0.0.

.. dropdown:: D² Tweedie score

  The :func:`d2_tweedie_score` function implements the special case of D²
  where :math:`\text{dev}(y, \hat{y})` is the Tweedie deviance, see
  :ref:`mean_tweedie_deviance`. It is also known as D² Tweedie and is related
  to McFadden's likelihood ratio index.

  The argument ``power`` defines the Tweedie power as for
  :func:`mean_tweedie_deviance`. Note that for `power=0`,
  :func:`d2_tweedie_score` equals :func:`r2_score` (for single targets).

  A scorer object with a specific choice of ``power`` can be built by::

    >>> from sklearn.metrics import d2_tweedie_score, make_scorer
    >>> d2_tweedie_score_15 = make_scorer(d2_tweedie_score, power=1.5)

.. dropdown:: D² pinball score

  The :func:`d2_pinball_score` function implements the special case
  of D² with the pinball loss, see :ref:`pinball_loss`, i.e.:

  .. math::

    \text{dev}(y, \hat{y}) = \text{pinball}(y, \hat{y}).

  The argument ``alpha`` defines the slope of the pinball loss as for
  :func:`mean_pinball_loss` (:ref:`pinball_loss`). It determines the
  quantile level ``alpha`` for which the pinball loss and also D²
  are optimal. Note that for `alpha=0.5` (the default)
  :func:`d2_pinball_score` equals :func:`d2_absolute_error_score`.

  A scorer object with a specific choice of ``alpha`` can be built by::

    >>> from sklearn.metrics import d2_pinball_score, make_scorer
    >>> d2_pinball_score_08 = make_scorer(d2_pinball_score, alpha=0.8)

.. dropdown:: D² absolute error score

  The :func:`d2_absolute_error_score` function implements the special case of
  the :ref:`mean_absolute_error`:

  .. math::

    \text{dev}(y, \hat{y}) = \text{MAE}(y, \hat{y}).

  Here are some usage examples of the :func:`d2_absolute_error_score`
  function::

    >>> from sklearn.metrics import d2_absolute_error_score
    >>> y_true = [3, -0.5, 2, 7]
    >>> y_pred = [2.5, 0.0, 2, 8]
    >>> d2_absolute_error_score(y_true, y_pred)
    0.764...
    >>> y_true = [1, 2, 3]
    >>> y_pred = [1, 2, 3]
    >>> d2_absolute_error_score(y_true, y_pred)
    1.0
    >>> y_true = [1, 2, 3]
    >>> y_pred = [2, 2, 2]
    >>> d2_absolute_error_score(y_true, y_pred)
    0.0

.. _visualization_regression_evaluation:

Visual evaluation of regression models
--------------------------------------

Among methods to assess the quality of regression models, scikit-learn
provides the :class:`~sklearn.metrics.PredictionErrorDisplay` class. It allows
to visually inspect the prediction errors of a model in two different manners.

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_predict_001.png
   :target: ../auto_examples/model_selection/plot_cv_predict.html
   :scale: 75
   :align: center

The plot on the left shows the actual values vs predicted values. For a
noise-free regression task aiming to predict the (conditional) expectation of
`y`, a perfect regression model would display data points on the diagonal
defined by predicted equal to actual values. The further away from this
optimal line, the larger the error of the model. In a more realistic setting
with irreducible noise, that is, when not all the variations of `y` can be
explained by features in `X`, then the best model would lead to a cloud of
points densely arranged around the diagonal.

Note that the above only holds when the predicted values is the expected value
of `y` given `X`. This is typically the case for regression models that
minimize the mean squared error objective function or more generally the
:ref:`mean Tweedie deviance <mean_tweedie_deviance>` for any value of its
"power" parameter.

When plotting the predictions of an estimator that predicts a quantile of `y`
given `X`, e.g. :class:`~sklearn.linear
model QuantileRegressor  or any other model minimizing the  ref  pinball loss  pinball loss    a fraction of the points are either expected to lie above or below the diagonal depending on the estimated quantile level   All in all  while intuitive to read  this plot does not really inform us on what to do to obtain a better model   The right hand side plot shows the residuals  i e  the difference between the actual and the predicted values  vs  the predicted values   This plot makes it easier to visualize if the residuals follow and  homoscedastic or heteroschedastic  https   en wikipedia org wiki Homoscedasticity and heteroscedasticity    distribution   In particular  if the true distribution of  y X  is Poisson or Gamma distributed  it is expected that the variance of the residuals of the optimal model would grow with the predicted value of  E y X    either linearly for Poisson or quadratically for Gamma    When fitting a linear least squares regression model  see  class   sklearn linear model LinearRegression  and  class   sklearn linear model Ridge    we can use this plot to check if some of the  model assumptions  https   en wikipedia org wiki Ordinary least squares Assumptions    are met  in particular that the residuals should be uncorrelated  their expected value should be null and that their variance should be constant  homoschedasticity    If this is not the case  and in particular if the residuals plot show some banana shaped structure  this is a hint that the model is likely mis specified and that non linear feature engineering or switching to a non linear regression model might be useful   Refer to the example below to see a model evaluation that makes use of this display      rubric   Examples    See  ref  sphx glr auto examples compose plot transformed target py  for   an example on how to use  class   sklearn metrics PredictionErrorDisplay    to visualize the prediction quality improvement of a regression model   obtained by transforming the target 
before learning       clustering metrics   Clustering metrics                        currentmodule   sklearn metrics  The  mod  sklearn metrics  module implements several loss  score  and utility functions to measure clustering performance  For more information see the  ref  clustering evaluation  section for instance clustering  and  ref  biclustering evaluation  for biclustering       dummy estimators    Dummy estimators                       currentmodule   sklearn dummy  When doing supervised learning  a simple sanity check consists of comparing one s estimator against simple rules of thumb   class  DummyClassifier  implements several such simple strategies for classification       stratified   generates random predictions by respecting the training   set class distribution      most frequent   always predicts the most frequent label in the training set      prior   always predicts the class that maximizes the class prior    like   most frequent    and   predict proba   returns the class prior      uniform   generates predictions uniformly at random      constant   always predicts a constant label that is provided by the user     A major motivation of this method is F1 scoring  when the positive class    is in the minority   Note that with all these strategies  the   predict   method completely ignores the input data   To illustrate  class  DummyClassifier   first let s create an imbalanced dataset          from sklearn datasets import load iris       from sklearn model selection import train test split       X  y   load iris return X y True        y y    1     1       X train  X test  y train  y test   train test split X  y  random state 0   Next  let s compare the accuracy of   SVC   and   most frequent            from sklearn dummy import DummyClassifier       from sklearn svm import SVC       clf   SVC kernel  linear   C 1  fit X train  y train        clf score X test  y test    0 63          clf   DummyClassifier strategy  most frequent   random state 0    
    clf fit X train  y train    DummyClassifier random state 0  strategy  most frequent         clf score X test  y test    0 57     We see that   SVC   doesn t do much better than a dummy classifier  Now  let s change the kernel          clf   SVC kernel  rbf   C 1  fit X train  y train        clf score X test  y test    0 94     We see that the accuracy was boosted to almost 100    A cross validation strategy is recommended for a better estimate of the accuracy  if it is not too CPU costly  For more information see the  ref  cross validation  section  Moreover if you want to optimize over the parameter space  it is highly recommended to use an appropriate methodology  see the  ref  grid search  section for details   More generally  when the accuracy of a classifier is too close to random  it probably means that something went wrong  features are not helpful  a hyperparameter is not correctly tuned  the classifier is suffering from class imbalance  etc      class  DummyRegressor  also implements four simple rules of thumb for regression       mean   always predicts the mean of the training targets      median   always predicts the median of the training targets      quantile   always predicts a user provided quantile of the training targets      constant   always predicts a constant value that is provided by the user   In all these strategies  the   predict   method completely ignores the input data "}
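The four `DummyRegressor` rules of thumb above can be demonstrated directly; this is a minimal sketch (the toy data and variable names are ours, not from the docs), showing that each strategy returns one constant prediction regardless of the input row:

```python
import numpy as np
from sklearn.dummy import DummyRegressor

# Toy targets; every Dummy strategy ignores the features entirely.
X = np.zeros((5, 1))  # placeholder features, never inspected
y = np.array([1.0, 2.0, 2.0, 3.0, 10.0])

preds = {}
for strategy, kwargs in [
    ("mean", {}),
    ("median", {}),
    ("quantile", {"quantile": 0.9}),
    ("constant", {"constant": 7.0}),
]:
    reg = DummyRegressor(strategy=strategy, **kwargs).fit(X, y)
    # predict() returns the same value for any input row
    preds[strategy] = float(reg.predict(np.zeros((1, 1)))[0])

print(preds)
```

Here `mean` gives 3.6 and `median` gives 2.0; the `quantile` strategy uses linear interpolation, so the 0.9 quantile of these five targets is 7.2.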
{"questions":"scikit-learn sklearn covariance Many statistical problems require the estimation of a Covariance estimation covariance","answers":".. _covariance:\n\n===================================================\nCovariance estimation\n===================================================\n\n.. currentmodule:: sklearn.covariance\n\n\nMany statistical problems require the estimation of a\npopulation's covariance matrix, which can be seen as an estimation of\ndata set scatter plot shape. Most of the time, such an estimation has\nto be done on a sample whose properties (size, structure, homogeneity)\nhave a large influence on the estimation's quality. The\n:mod:`sklearn.covariance` package provides tools for accurately estimating\na population's covariance matrix under various settings.\n\nWe assume that the observations are independent and identically\ndistributed (i.i.d.).\n\n\nEmpirical covariance\n====================\n\nThe covariance matrix of a data set is known to be well approximated\nby the classical *maximum likelihood estimator* (or \"empirical\ncovariance\"), provided the number of observations is large enough\ncompared to the number of features (the variables describing the\nobservations). More precisely, the Maximum Likelihood Estimator of a\nsample is an asymptotically unbiased estimator of the corresponding\npopulation's covariance matrix.\n\nThe empirical covariance matrix of a sample can be computed using the\n:func:`empirical_covariance` function of the package, or by fitting an\n:class:`EmpiricalCovariance` object to the data sample with the\n:meth:`EmpiricalCovariance.fit` method. Be careful that results depend\non whether the data are centered, so one may want to use the\n``assume_centered`` parameter accurately. More precisely, if\n``assume_centered=False``, then the test set is supposed to have the\nsame mean vector as the training set. If not, both should be centered\nby the user, and ``assume_centered=True`` should be used.\n\n.. 
rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py` for\n  an example on how to fit an :class:`EmpiricalCovariance` object to data.\n\n\n.. _shrunk_covariance:\n\nShrunk Covariance\n=================\n\nBasic shrinkage\n---------------\n\nDespite being an asymptotically unbiased estimator of the covariance matrix,\nthe Maximum Likelihood Estimator is not a good estimator of the\neigenvalues of the covariance matrix, so the precision matrix obtained\nfrom its inversion is not accurate. Sometimes, it even occurs that the\nempirical covariance matrix cannot be inverted for numerical\nreasons. To avoid such an inversion problem, a transformation of the\nempirical covariance matrix has been introduced: the ``shrinkage``.\n\nIn scikit-learn, this transformation (with a user-defined shrinkage\ncoefficient) can be directly applied to a pre-computed covariance with\nthe :func:`shrunk_covariance` function. Also, a shrunk estimator of the\ncovariance can be fitted to data with a :class:`ShrunkCovariance` object\nand its :meth:`ShrunkCovariance.fit` method. Again, results depend on\nwhether the data are centered, so one may want to use the\n``assume_centered`` parameter accurately.\n\n\nMathematically, this shrinkage consists in reducing the ratio between the\nsmallest and the largest eigenvalues of the empirical covariance matrix.\nIt can be done by simply shifting every eigenvalue according to a given\noffset, which is equivalent to finding the l2-penalized Maximum\nLikelihood Estimator of the covariance matrix. In practice, shrinkage\nboils down to a simple convex transformation: :math:`\Sigma_{\rm\nshrunk} = (1-\alpha)\hat{\Sigma} + \alpha\frac{{\rm\nTr}\hat{\Sigma}}{p}\rm Id`.\n\nChoosing the amount of shrinkage, :math:`\alpha`, amounts to setting a\nbias\/variance trade-off, and is discussed below.\n\n.. 
rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py` for\n  an example on how to fit a :class:`ShrunkCovariance` object to data.\n\n\nLedoit-Wolf shrinkage\n---------------------\n\nIn their 2004 paper [1]_, O. Ledoit and M. Wolf propose a formula\nto compute the optimal shrinkage coefficient :math:`\alpha` that\nminimizes the Mean Squared Error between the estimated and the real\ncovariance matrix.\n\nThe Ledoit-Wolf estimator of the covariance matrix can be computed on\na sample with the :func:`ledoit_wolf` function of the\n:mod:`sklearn.covariance` package, or it can be otherwise obtained by\nfitting a :class:`LedoitWolf` object to the same sample.\n\n.. note:: **Case when population covariance matrix is isotropic**\n\n    It is important to note that when the number of samples is much larger than\n    the number of features, one would expect that no shrinkage would be\n    necessary. The intuition behind this is that if the population covariance\n    is full rank, when the number of samples grows, the sample covariance will\n    also become positive definite. As a result, no shrinkage would be necessary\n    and the method should automatically do this.\n\n    This, however, is not the case in the Ledoit-Wolf procedure when the\n    population covariance happens to be a multiple of the identity matrix. In\n    this case, the Ledoit-Wolf shrinkage estimate approaches 1 as the number of\n    samples increases. This indicates that the optimal estimate of the\n    covariance matrix in the Ledoit-Wolf sense is a multiple of the identity.\n    Since the population covariance is already a multiple of the identity\n    matrix, the Ledoit-Wolf solution is indeed a reasonable estimate.\n\n.. 
rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py` for\n  an example on how to fit a :class:`LedoitWolf` object to data and\n  for visualizing the performance of the Ledoit-Wolf estimator in\n  terms of likelihood.\n\n.. rubric:: References\n\n.. [1] O. Ledoit and M. Wolf, \"A Well-Conditioned Estimator for Large-Dimensional\n       Covariance Matrices\", Journal of Multivariate Analysis, Volume 88, Issue 2,\n       February 2004, pages 365-411.\n\n.. _oracle_approximating_shrinkage:\n\nOracle Approximating Shrinkage\n------------------------------\n\nUnder the assumption that the data are Gaussian distributed, Chen et\nal. [2]_ derived a formula aimed at choosing a shrinkage coefficient that\nyields a smaller Mean Squared Error than the one given by Ledoit and\nWolf's formula. The resulting estimator is known as the Oracle\nApproximating Shrinkage (OAS) estimator of the covariance.\n\nThe OAS estimator of the covariance matrix can be computed on a sample\nwith the :func:`oas` function of the :mod:`sklearn.covariance`\npackage, or it can be otherwise obtained by fitting an :class:`OAS`\nobject to the same sample.\n\n.. figure:: ..\/auto_examples\/covariance\/images\/sphx_glr_plot_covariance_estimation_001.png\n   :target: ..\/auto_examples\/covariance\/plot_covariance_estimation.html\n   :align: center\n   :scale: 65%\n\n   Bias-variance trade-off when setting the shrinkage: comparing the\n   choices of Ledoit-Wolf and OAS estimators\n\n.. rubric:: References\n\n.. [2] :arxiv:`\"Shrinkage algorithms for MMSE covariance estimation.\",\n       Chen, Y., Wiesel, A., Eldar, Y. C., & Hero, A. O.\n       IEEE Transactions on Signal Processing, 58(10), 5016-5029, 2010.\n       <0907.4698>`\n\n.. 
rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py` for\n  an example on how to fit an :class:`OAS` object to data.\n\n* See :ref:`sphx_glr_auto_examples_covariance_plot_lw_vs_oas.py` to visualize the\n  Mean Squared Error difference between a :class:`LedoitWolf` and\n  an :class:`OAS` estimator of the covariance.\n\n\n.. figure:: ..\/auto_examples\/covariance\/images\/sphx_glr_plot_lw_vs_oas_001.png\n   :target: ..\/auto_examples\/covariance\/plot_lw_vs_oas.html\n   :align: center\n   :scale: 75%\n\n\n.. _sparse_inverse_covariance:\n\nSparse inverse covariance\n==========================\n\nThe matrix inverse of the covariance matrix, often called the precision\nmatrix, is proportional to the partial correlation matrix. It gives the\npartial independence relationship. In other words, if two features are\nindependent conditionally on the others, the corresponding coefficient in\nthe precision matrix will be zero. This is why it makes sense to\nestimate a sparse precision matrix: the estimation of the covariance\nmatrix is better conditioned by learning independence relations from\nthe data. This is known as *covariance selection*.\n\nIn the small-samples situation, in which ``n_samples`` is on the order\nof ``n_features`` or smaller, sparse inverse covariance estimators tend to work\nbetter than shrunk covariance estimators. However, in the opposite\nsituation, or for very correlated data, they can be numerically unstable.\nIn addition, unlike shrinkage estimators, sparse estimators are able to\nrecover off-diagonal structure.\n\nThe :class:`GraphicalLasso` estimator uses an l1 penalty to enforce sparsity on\nthe precision matrix: the higher its ``alpha`` parameter, the more sparse\nthe precision matrix. The corresponding :class:`GraphicalLassoCV` object uses\ncross-validation to automatically set the ``alpha`` parameter.\n\n.. 
figure:: ..\/auto_examples\/covariance\/images\/sphx_glr_plot_sparse_cov_001.png\n   :target: ..\/auto_examples\/covariance\/plot_sparse_cov.html\n   :align: center\n   :scale: 60%\n\n   *A comparison of maximum likelihood, shrinkage and sparse estimates of\n   the covariance and precision matrix in the very small samples\n   settings.*\n\n.. note:: **Structure recovery**\n\n   Recovering a graphical structure from correlations in the data is a\n   challenging task. If you are interested in such recovery, keep in mind\n   that:\n\n   * Recovery is easier from a correlation matrix than a covariance\n     matrix: standardize your observations before running :class:`GraphicalLasso`\n\n   * If the underlying graph has nodes with many more connections than\n     the average node, the algorithm will miss some of these connections.\n\n   * If your number of observations is not large compared to the number\n     of edges in your underlying graph, you will not recover it.\n\n   * Even if you are in favorable recovery conditions, the alpha\n     parameter chosen by cross-validation (e.g. using the\n     :class:`GraphicalLassoCV` object) will lead to selecting too many edges.\n     However, the relevant edges will have heavier weights than the\n     irrelevant ones.\n\nThe mathematical formulation is the following:\n\n.. math::\n\n    \hat{K} = \mathrm{argmin}_K \big(\n                \mathrm{tr} S K - \mathrm{log} \mathrm{det} K\n                + \alpha \|K\|_1\n                \big)\n\nWhere :math:`K` is the precision matrix to be estimated, and :math:`S` is the\nsample covariance matrix. :math:`\|K\|_1` is the sum of the absolute values of\noff-diagonal coefficients of :math:`K`. The algorithm employed to solve this\nproblem is the GLasso algorithm, from the Friedman 2008 Biostatistics\npaper. It is the same algorithm as in the R ``glasso`` package.\n\n\n.. 
rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_covariance_plot_sparse_cov.py`: example on synthetic\n  data showing some recovery of a structure, and comparing to other\n  covariance estimators.\n\n* :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py`: example on real\n  stock market data, finding which symbols are most linked.\n\n.. rubric:: References\n\n* Friedman et al, `\"Sparse inverse covariance estimation with the\n  graphical lasso\" <https:\/\/biostatistics.oxfordjournals.org\/content\/9\/3\/432.short>`_,\n  Biostatistics 9, pp 432, 2008\n\n.. _robust_covariance:\n\nRobust Covariance Estimation\n============================\n\nReal data sets are often subject to measurement or recording\nerrors. Regular but uncommon observations may also appear for a variety\nof reasons. Observations which are very uncommon are called\noutliers.\nThe empirical covariance estimator and the shrunk covariance\nestimators presented above are very sensitive to the presence of\noutliers in the data. Therefore, one should use robust\ncovariance estimators to estimate the covariance of one's real data\nsets. Alternatively, robust covariance estimators can be used to\nperform outlier detection and discard\/downweight some observations\naccording to further processing of the data.\n\nThe ``sklearn.covariance`` package implements a robust estimator of covariance,\nthe Minimum Covariance Determinant [3]_.\n\n\nMinimum Covariance Determinant\n------------------------------\n\nThe Minimum Covariance Determinant estimator is a robust estimator of\na data set's covariance introduced by P.J. Rousseeuw in [3]_.  The idea\nis to find a given proportion (h) of \"good\" observations which are not\noutliers and compute their empirical covariance matrix.  This\nempirical covariance matrix is then rescaled to compensate for the\nperformed selection of observations (\"consistency step\").  
Having\ncomputed the Minimum Covariance Determinant estimator, one can give\nweights to observations according to their Mahalanobis distance,\nleading to a reweighted estimate of the covariance matrix of the data\nset (\"reweighting step\").\n\nRousseeuw and Van Driessen [4]_ developed the FastMCD algorithm in order\nto compute the Minimum Covariance Determinant. This algorithm is used\nin scikit-learn when fitting an MCD object to data. The FastMCD\nalgorithm also computes a robust estimate of the data set location at\nthe same time.\n\nRaw estimates can be accessed as ``raw_location_`` and ``raw_covariance_``\nattributes of a :class:`MinCovDet` robust covariance estimator object.\n\n.. rubric:: References\n\n.. [3] P. J. Rousseeuw. Least median of squares regression.\n       J. Am Stat Ass, 79:871, 1984.\n.. [4] A Fast Algorithm for the Minimum Covariance Determinant Estimator,\n       1999, American Statistical Association and the American Society\n       for Quality, TECHNOMETRICS.\n\n.. rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_covariance_plot_robust_vs_empirical_covariance.py` for\n  an example on how to fit a :class:`MinCovDet` object to data and see how\n  the estimate remains accurate despite the presence of outliers.\n\n* See :ref:`sphx_glr_auto_examples_covariance_plot_mahalanobis_distances.py` to\n  visualize the difference between :class:`EmpiricalCovariance` and\n  :class:`MinCovDet` covariance estimators in terms of Mahalanobis distance\n  (so we get a better estimate of the precision matrix too).\n\n.. |robust_vs_emp| image:: ..\/auto_examples\/covariance\/images\/sphx_glr_plot_robust_vs_empirical_covariance_001.png\n   :target: ..\/auto_examples\/covariance\/plot_robust_vs_empirical_covariance.html\n   :scale: 49%\n\n.. |mahalanobis| image:: ..\/auto_examples\/covariance\/images\/sphx_glr_plot_mahalanobis_distances_001.png\n   :target: ..\/auto_examples\/covariance\/plot_mahalanobis_distances.html\n   :scale: 49%\n\n\n\n____\n\n.. 
list-table::\n    :header-rows: 1\n\n    * - Influence of outliers on location and covariance estimates\n      - Separating inliers from outliers using a Mahalanobis distance\n\n    * - |robust_vs_emp|\n      - |mahalanobis|","site":"scikit-learn","answers_cleaned":"
a  class  MinCovDet  object to data and see how   the estimate remains accurate despite the presence of outliers     See  ref  sphx glr auto examples covariance plot mahalanobis distances py  to   visualize the difference between  class  EmpiricalCovariance  and    class  MinCovDet  covariance estimators in terms of Mahalanobis distance    so we get a better estimate of the precision matrix too        robust vs emp  image      auto examples covariance images sphx glr plot robust vs empirical covariance 001 png     target     auto examples covariance plot robust vs empirical covariance html     scale  49       mahalanobis  image      auto examples covariance images sphx glr plot mahalanobis distances 001 png     target     auto examples covariance plot mahalanobis distances html     scale  49              list table        header rows  1          Influence of outliers on location and covariance estimates         Separating inliers from outliers using a Mahalanobis distance           robust vs emp           mahalanobis "}
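As a minimal sketch tying the three estimator families above together, the snippet below fits an `OAS` (shrinkage), a `GraphicalLassoCV` (sparse precision), and a `MinCovDet` (robust/FastMCD) estimator on a small toy Gaussian sample; the toy data and default parameters are illustrative assumptions, not part of the guide's examples:

```python
import numpy as np
from sklearn.covariance import OAS, GraphicalLassoCV, MinCovDet

rng = np.random.RandomState(0)
# Toy sample: 30 observations of 5 correlated Gaussian features
base_cov = np.eye(5) + 0.3  # constant 0.3 off-diagonal correlation
X = rng.multivariate_normal(mean=np.zeros(5), cov=base_cov, size=30)

# Oracle Approximating Shrinkage estimate of the covariance
oas = OAS().fit(X)

# Sparse precision matrix, with alpha chosen by cross-validation
gl = GraphicalLassoCV().fit(X)

# Robust (FastMCD) estimate; raw_location_ / raw_covariance_ hold the
# estimates before the reweighting step
mcd = MinCovDet(random_state=0).fit(X)

print(oas.shrinkage_)           # shrinkage coefficient from the OAS formula
print(gl.precision_.shape)      # (5, 5) estimated precision matrix
print(mcd.raw_location_.shape)  # (5,) raw robust location estimate
```

All three expose the fitted covariance as a `covariance_` attribute, so they can be swapped in and compared on the same sample.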
{"questions":"scikit-learn classification and regression This section of the user guide covers functionality related to multi learning problems including and Multiclass and multioutput algorithms multiclass","answers":"\n.. _multiclass:\n\n=====================================\nMulticlass and multioutput algorithms\n=====================================\n\nThis section of the user guide covers functionality related to multi-learning\nproblems, including :term:`multiclass`, :term:`multilabel`, and\n:term:`multioutput` classification and regression.\n\nThe modules in this section implement :term:`meta-estimators`, which require a\nbase estimator to be provided in their constructor. Meta-estimators extend the\nfunctionality of the base estimator to support multi-learning problems, which\nis accomplished by transforming the multi-learning problem into a set of\nsimpler problems, then fitting one estimator per problem.\n\nThis section covers two modules: :mod:`sklearn.multiclass` and\n:mod:`sklearn.multioutput`. The chart below demonstrates the problem types\nthat each module is responsible for, and the corresponding meta-estimators\nthat each module provides.\n\n.. image:: ..\/images\/multi_org_chart.png\n   :align: center\n\nThe table below provides a quick reference on the differences between problem\ntypes. 
More detailed explanations can be found in subsequent sections of this\nguide.\n\n+------------------------------+-----------------------+-------------------------+--------------------------------------------------+\n|                              | Number of targets     | Target cardinality      | Valid                                            |\n|                              |                       |                         | :func:`~sklearn.utils.multiclass.type_of_target` |\n+==============================+=======================+=========================+==================================================+\n| Multiclass                   |  1                    | >2                      | 'multiclass'                                     |\n| classification               |                       |                         |                                                  |\n+------------------------------+-----------------------+-------------------------+--------------------------------------------------+\n| Multilabel                   | >1                    |  2 (0 or 1)             | 'multilabel-indicator'                           |\n| classification               |                       |                         |                                                  |\n+------------------------------+-----------------------+-------------------------+--------------------------------------------------+\n| Multiclass-multioutput       | >1                    | >2                      | 'multiclass-multioutput'                         |\n| classification               |                       |                         |                                                  |\n+------------------------------+-----------------------+-------------------------+--------------------------------------------------+\n| Multioutput                  | >1                    | Continuous              | 'continuous-multioutput'                         |\n| regression                
   |                       |                         |                                                  |\n+------------------------------+-----------------------+-------------------------+--------------------------------------------------+\n\nBelow is a summary of scikit-learn estimators that have multi-learning support\nbuilt-in, grouped by strategy. You don't need the meta-estimators provided by\nthis section if you're using one of these estimators. However, meta-estimators\ncan provide additional strategies beyond what is built-in:\n\n.. currentmodule:: sklearn\n\n- **Inherently multiclass:**\n\n  - :class:`naive_bayes.BernoulliNB`\n  - :class:`tree.DecisionTreeClassifier`\n  - :class:`tree.ExtraTreeClassifier`\n  - :class:`ensemble.ExtraTreesClassifier`\n  - :class:`naive_bayes.GaussianNB`\n  - :class:`neighbors.KNeighborsClassifier`\n  - :class:`semi_supervised.LabelPropagation`\n  - :class:`semi_supervised.LabelSpreading`\n  - :class:`discriminant_analysis.LinearDiscriminantAnalysis`\n  - :class:`svm.LinearSVC` (setting multi_class=\"crammer_singer\")\n  - :class:`linear_model.LogisticRegression` (with most solvers)\n  - :class:`linear_model.LogisticRegressionCV` (with most solvers)\n  - :class:`neural_network.MLPClassifier`\n  - :class:`neighbors.NearestCentroid`\n  - :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`\n  - :class:`neighbors.RadiusNeighborsClassifier`\n  - :class:`ensemble.RandomForestClassifier`\n  - :class:`linear_model.RidgeClassifier`\n  - :class:`linear_model.RidgeClassifierCV`\n\n\n- **Multiclass as One-Vs-One:**\n\n  - :class:`svm.NuSVC`\n  - :class:`svm.SVC`.\n  - :class:`gaussian_process.GaussianProcessClassifier` (setting multi_class = \"one_vs_one\")\n\n\n- **Multiclass as One-Vs-The-Rest:**\n\n  - :class:`ensemble.GradientBoostingClassifier`\n  - :class:`gaussian_process.GaussianProcessClassifier` (setting multi_class = \"one_vs_rest\")\n  - :class:`svm.LinearSVC` (setting multi_class=\"ovr\")\n  - 
:class:`linear_model.LogisticRegression` (most solvers)\n  - :class:`linear_model.LogisticRegressionCV` (most solvers)\n  - :class:`linear_model.SGDClassifier`\n  - :class:`linear_model.Perceptron`\n  - :class:`linear_model.PassiveAggressiveClassifier`\n\n\n- **Support multilabel:**\n\n  - :class:`tree.DecisionTreeClassifier`\n  - :class:`tree.ExtraTreeClassifier`\n  - :class:`ensemble.ExtraTreesClassifier`\n  - :class:`neighbors.KNeighborsClassifier`\n  - :class:`neural_network.MLPClassifier`\n  - :class:`neighbors.RadiusNeighborsClassifier`\n  - :class:`ensemble.RandomForestClassifier`\n  - :class:`linear_model.RidgeClassifier`\n  - :class:`linear_model.RidgeClassifierCV`\n\n\n- **Support multiclass-multioutput:**\n\n  - :class:`tree.DecisionTreeClassifier`\n  - :class:`tree.ExtraTreeClassifier`\n  - :class:`ensemble.ExtraTreesClassifier`\n  - :class:`neighbors.KNeighborsClassifier`\n  - :class:`neighbors.RadiusNeighborsClassifier`\n  - :class:`ensemble.RandomForestClassifier`\n\n.. _multiclass_classification:\n\nMulticlass classification\n=========================\n\n.. warning::\n    All classifiers in scikit-learn do multiclass classification\n    out-of-the-box. You don't need to use the :mod:`sklearn.multiclass` module\n    unless you want to experiment with different multiclass strategies.\n\n**Multiclass classification** is a classification task with more than two\nclasses. 
Each sample can only be labeled as one class.\n\nFor example, classification using features extracted from a set of images of\nfruit, where each image may either be of an orange, an apple, or a pear.\nEach image is one sample and is labeled as one of the 3 possible classes.\nMulticlass classification makes the assumption that each sample is assigned\nto one and only one label - one sample cannot, for example, be both a pear\nand an apple.\n\nWhile all scikit-learn classifiers are capable of multiclass classification,\nthe meta-estimators offered by :mod:`sklearn.multiclass`\npermit changing the way they handle more than two classes\nbecause this may have an effect on classifier performance\n(either in terms of generalization error or required computational resources).\n\nTarget format\n-------------\n\nValid :term:`multiclass` representations for\n:func:`~sklearn.utils.multiclass.type_of_target` (`y`) are:\n\n- 1d or column vector containing more than two discrete values. An\n  example of a vector ``y`` for 4 samples:\n\n    >>> import numpy as np\n    >>> y = np.array(['apple', 'pear', 'apple', 'orange'])\n    >>> print(y)\n    ['apple' 'pear' 'apple' 'orange']\n\n- Dense or sparse :term:`binary` matrix of shape ``(n_samples, n_classes)``\n  with a single sample per row, where each column represents one class. 
An\n  example of both a dense and sparse :term:`binary` matrix ``y`` for 4\n  samples, where the columns, in order, are apple, orange, and pear:\n\n    >>> import numpy as np\n    >>> from sklearn.preprocessing import LabelBinarizer\n    >>> y = np.array(['apple', 'pear', 'apple', 'orange'])\n    >>> y_dense = LabelBinarizer().fit_transform(y)\n    >>> print(y_dense)\n    [[1 0 0]\n     [0 0 1]\n     [1 0 0]\n     [0 1 0]]\n    >>> from scipy import sparse\n    >>> y_sparse = sparse.csr_matrix(y_dense)\n    >>> print(y_sparse)\n    <Compressed Sparse Row sparse matrix of dtype 'int64'\n    \twith 4 stored elements and shape (4, 3)>\n      Coords\tValues\n      (0, 0)\t1\n      (1, 2)\t1\n      (2, 0)\t1\n      (3, 1)\t1\n\nFor more information about :class:`~sklearn.preprocessing.LabelBinarizer`,\nrefer to :ref:`preprocessing_targets`.\n\n.. _ovr_classification:\n\nOneVsRestClassifier\n-------------------\n\nThe **one-vs-rest** strategy, also known as **one-vs-all**, is implemented in\n:class:`~sklearn.multiclass.OneVsRestClassifier`.  The strategy consists in\nfitting one classifier per class. For each classifier, the class is fitted\nagainst all the other classes. In addition to its computational efficiency\n(only `n_classes` classifiers are needed), one advantage of this approach is\nits interpretability. Since each class is represented by one and only one\nclassifier, it is possible to gain knowledge about the class by inspecting its\ncorresponding classifier. 
This is the most commonly used strategy and is a fair\ndefault choice.\n\nBelow is an example of multiclass learning using OvR::\n\n  >>> from sklearn import datasets\n  >>> from sklearn.multiclass import OneVsRestClassifier\n  >>> from sklearn.svm import LinearSVC\n  >>> X, y = datasets.load_iris(return_X_y=True)\n  >>> OneVsRestClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X)\n  array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n         0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n         1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1,\n         1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2,\n         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])\n\n\n:class:`~sklearn.multiclass.OneVsRestClassifier` also supports multilabel\nclassification. To use this feature, feed the classifier an indicator matrix,\nin which cell [i, j] indicates the presence of label j in sample i.\n\n\n.. figure:: ..\/auto_examples\/miscellaneous\/images\/sphx_glr_plot_multilabel_001.png\n    :target: ..\/auto_examples\/miscellaneous\/plot_multilabel.html\n    :align: center\n    :scale: 75%\n\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multilabel.py`\n* :ref:`sphx_glr_auto_examples_classification_plot_classification_probability.py`\n\n.. _ovo_classification:\n\nOneVsOneClassifier\n------------------\n\n:class:`~sklearn.multiclass.OneVsOneClassifier` constructs one classifier per\npair of classes. At prediction time, the class which received the most votes\nis selected. 
In the event of a tie (among two classes with an equal number of\nvotes), it selects the class with the highest aggregate classification\nconfidence by summing over the pair-wise classification confidence levels\ncomputed by the underlying binary classifiers.\n\nSince it requires to fit ``n_classes * (n_classes - 1) \/ 2`` classifiers,\nthis method is usually slower than one-vs-the-rest, due to its\nO(n_classes^2) complexity. However, this method may be advantageous for\nalgorithms such as kernel algorithms which don't scale well with\n``n_samples``. This is because each individual learning problem only involves\na small subset of the data whereas, with one-vs-the-rest, the complete\ndataset is used ``n_classes`` times. The decision function is the result\nof a monotonic transformation of the one-versus-one classification.\n\nBelow is an example of multiclass learning using OvO::\n\n  >>> from sklearn import datasets\n  >>> from sklearn.multiclass import OneVsOneClassifier\n  >>> from sklearn.svm import LinearSVC\n  >>> X, y = datasets.load_iris(return_X_y=True)\n  >>> OneVsOneClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X)\n  array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n         0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n         1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1,\n         1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])\n\n\n.. rubric:: References\n\n* \"Pattern Recognition and Machine Learning. Springer\",\n  Christopher M. Bishop, page 183, (First Edition)\n\n.. _ecoc:\n\nOutputCodeClassifier\n--------------------\n\nError-Correcting Output Code-based strategies are fairly different from\none-vs-the-rest and one-vs-one. 
With these strategies, each class is\nrepresented in a Euclidean space, where each dimension can only be 0 or 1.\nAnother way to put it is that each class is represented by a binary code (an\narray of 0 and 1). The matrix which keeps track of the location\/code of each\nclass is called the code book. The code size is the dimensionality of the\naforementioned space. Intuitively, each class should be represented by a code\nas unique as possible and a good code book should be designed to optimize\nclassification accuracy. In this implementation, we simply use a\nrandomly-generated code book as advocated in [3]_ although more elaborate\nmethods may be added in the future.\n\nAt fitting time, one binary classifier per bit in the code book is fitted.\nAt prediction time, the classifiers are used to project new points in the\nclass space and the class closest to the points is chosen.\n\nIn :class:`~sklearn.multiclass.OutputCodeClassifier`, the ``code_size``\nattribute allows the user to control the number of classifiers which will be\nused. It is a percentage of the total number of classes.\n\nA number between 0 and 1 will require fewer classifiers than\none-vs-the-rest. In theory, ``log2(n_classes) \/ n_classes`` is sufficient to\nrepresent each class unambiguously. However, in practice, it may not lead to\ngood accuracy since ``log2(n_classes)`` is much smaller than `n_classes`.\n\nA number greater than 1 will require more classifiers than\none-vs-the-rest. In this case, some classifiers will in theory correct for\nthe mistakes made by other classifiers, hence the name \"error-correcting\".\nIn practice, however, this may not happen as classifier mistakes will\ntypically be correlated. 
The error-correcting output codes have a similar\neffect to bagging.\n\nBelow is an example of multiclass learning using Output-Codes::\n\n  >>> from sklearn import datasets\n  >>> from sklearn.multiclass import OutputCodeClassifier\n  >>> from sklearn.svm import LinearSVC\n  >>> X, y = datasets.load_iris(return_X_y=True)\n  >>> clf = OutputCodeClassifier(LinearSVC(random_state=0), code_size=2, random_state=0)\n  >>> clf.fit(X, y).predict(X)\n  array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n         0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1,\n         1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 1, 1,\n         1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n         2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2,\n         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])\n\n.. rubric:: References\n\n* \"Solving multiclass learning problems via error-correcting output codes\",\n  Dietterich T., Bakiri G., Journal of Artificial Intelligence Research 2, 1995.\n\n.. [3] \"The error coding method and PICTs\", James G., Hastie T.,\n  Journal of Computational and Graphical statistics 7, 1998.\n\n* \"The Elements of Statistical Learning\",\n  Hastie T., Tibshirani R., Friedman J., page 606 (second-edition), 2008.\n\n.. _multilabel_classification:\n\nMultilabel classification\n=========================\n\n**Multilabel classification** (closely related to **multioutput**\n**classification**) is a classification task labeling each sample with ``m``\nlabels from ``n_classes`` possible classes, where ``m`` can be 0 to\n``n_classes`` inclusive. This can be thought of as predicting properties of a\nsample that are not mutually exclusive. Formally, a binary output is assigned\nto each class, for every sample. Positive classes are indicated with 1 and\nnegative classes with 0 or -1. 
It is thus comparable to running ``n_classes``\nbinary classification tasks, for example with\n:class:`~sklearn.multioutput.MultiOutputClassifier`. This approach treats\neach label independently whereas multilabel classifiers *may* treat the\nmultiple classes simultaneously, accounting for correlated behavior among\nthem.\n\nFor example, prediction of the topics relevant to a text document or video.\nThe document or video may be about one of 'religion', 'politics', 'finance'\nor 'education', several of the topic classes or all of the topic classes.\n\nTarget format\n-------------\n\nA valid representation of :term:`multilabel` `y` is an either dense or sparse\n:term:`binary` matrix of shape ``(n_samples, n_classes)``. Each column\nrepresents a class. The ``1``'s in each row denote the positive classes a\nsample has been labeled with. An example of a dense matrix ``y`` for 3\nsamples:\n\n  >>> y = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [0, 0, 0, 0]])\n  >>> print(y)\n  [[1 0 0 1]\n   [0 0 1 1]\n   [0 0 0 0]]\n\nDense binary matrices can also be created using\n:class:`~sklearn.preprocessing.MultiLabelBinarizer`. For more information,\nrefer to :ref:`preprocessing_targets`.\n\nAn example of the same ``y`` in sparse matrix form:\n\n  >>> y_sparse = sparse.csr_matrix(y)\n  >>> print(y_sparse)\n  <Compressed Sparse Row sparse matrix of dtype 'int64'\n    with 4 stored elements and shape (3, 4)>\n    Coords\tValues\n    (0, 0)\t1\n    (0, 3)\t1\n    (1, 2)\t1\n    (1, 3)\t1\n\n.. _multioutputclassfier:\n\nMultiOutputClassifier\n---------------------\n\nMultilabel classification support can be added to any classifier with\n:class:`~sklearn.multioutput.MultiOutputClassifier`. This strategy consists of\nfitting one classifier per target.  This allows multiple target variable\nclassifications. 
The purpose of this class is to extend estimators\nto be able to estimate a series of target functions (f1,f2,f3...,fn)\nthat are trained on a single X predictor matrix to predict a series\nof responses (y1,y2,y3...,yn).\n\nYou can find a usage example for\n:class:`~sklearn.multioutput.MultiOutputClassifier`\nas part of the section on :ref:`multiclass_multioutput_classification`\nsince it is a generalization of multilabel classification to\nmulticlass outputs instead of binary outputs.\n\n.. _classifierchain:\n\nClassifierChain\n---------------\n\nClassifier chains (see :class:`~sklearn.multioutput.ClassifierChain`) are a way\nof combining a number of binary classifiers into a single multi-label model\nthat is capable of exploiting correlations among targets.\n\nFor a multi-label classification problem with N classes, N binary\nclassifiers are assigned an integer between 0 and N-1. These integers\ndefine the order of models in the chain. Each classifier is then fit on the\navailable training data plus the true labels of the classes whose\nmodels were assigned a lower number.\n\nWhen predicting, the true labels will not be available. Instead the\npredictions of each model are passed on to the subsequent models in the\nchain to be used as features.\n\nClearly the order of the chain is important. The first model in the chain\nhas no information about the other labels while the last model in the chain\nhas features indicating the presence of all of the other labels. In general\none does not know the optimal ordering of the models in the chain so\ntypically many randomly ordered chains are fit and their predictions are\naveraged together.\n\n.. rubric:: References\n\n* Jesse Read, Bernhard Pfahringer, Geoff Holmes, Eibe Frank,\n  \"Classifier Chains for Multi-label Classification\", 2009.\n\n.. 
_multiclass_multioutput_classification:\n\nMulticlass-multioutput classification\n=====================================\n\n**Multiclass-multioutput classification**\n(also known as **multitask classification**) is a\nclassification task which labels each sample with a set of **non-binary**\nproperties. Both the number of properties and the number of\nclasses per property is greater than 2. A single estimator thus\nhandles several joint classification tasks. This is both a generalization of\nthe multi\\ *label* classification task, which only considers binary\nattributes, as well as a generalization of the multi\\ *class* classification\ntask, where only one property is considered.\n\nFor example, classification of the properties \"type of fruit\" and \"colour\"\nfor a set of images of fruit. The property \"type of fruit\" has the possible\nclasses: \"apple\", \"pear\" and \"orange\". The property \"colour\" has the\npossible classes: \"green\", \"red\", \"yellow\" and \"orange\". Each sample is an\nimage of a fruit, a label is output for both properties and each label is\none of the possible classes of the corresponding property.\n\nNote that all classifiers handling multiclass-multioutput (also known as\nmultitask classification) tasks, support the multilabel classification task\nas a special case. Multitask classification is similar to the multioutput\nclassification task with different model formulations. For more information,\nsee the relevant estimator documentation.\n\nBelow is an example of multiclass-multioutput classification:\n\n    >>> from sklearn.datasets import make_classification\n    >>> from sklearn.multioutput import MultiOutputClassifier\n    >>> from sklearn.ensemble import RandomForestClassifier\n    >>> from sklearn.utils import shuffle\n    >>> import numpy as np\n    >>> X, y1 = make_classification(n_samples=10, n_features=100,\n    ...                             n_informative=30, n_classes=3,\n    ...                             
random_state=1)\n    >>> y2 = shuffle(y1, random_state=1)\n    >>> y3 = shuffle(y1, random_state=2)\n    >>> Y = np.vstack((y1, y2, y3)).T\n    >>> n_samples, n_features = X.shape # 10,100\n    >>> n_outputs = Y.shape[1] # 3\n    >>> n_classes = 3\n    >>> forest = RandomForestClassifier(random_state=1)\n    >>> multi_target_forest = MultiOutputClassifier(forest, n_jobs=2)\n    >>> multi_target_forest.fit(X, Y).predict(X)\n    array([[2, 2, 0],\n           [1, 2, 1],\n           [2, 1, 0],\n           [0, 0, 2],\n           [0, 2, 1],\n           [0, 0, 2],\n           [1, 1, 0],\n           [1, 1, 1],\n           [0, 0, 2],\n           [2, 0, 0]])\n\n.. warning::\n    At present, no metric in :mod:`sklearn.metrics`\n    supports the multiclass-multioutput classification task.\n\nTarget format\n-------------\n\nA valid representation of :term:`multioutput` `y` is a dense matrix of shape\n``(n_samples, n_classes)`` of class labels. A column wise concatenation of 1d\n:term:`multiclass` variables. An example of ``y`` for 3 samples:\n\n  >>> y = np.array([['apple', 'green'], ['orange', 'orange'], ['pear', 'green']])\n  >>> print(y)\n  [['apple' 'green']\n   ['orange' 'orange']\n   ['pear' 'green']]\n\n.. _multioutput_regression:\n\nMultioutput regression\n======================\n\n**Multioutput regression** predicts multiple numerical properties for each\nsample. Each property is a numerical variable and the number of properties\nto be predicted for each sample is greater than or equal to 2. Some estimators\nthat support multioutput regression are faster than just running ``n_output``\nestimators.\n\nFor example, prediction of both wind speed and wind direction, in degrees,\nusing data obtained at a certain location. 
.. _multiclass:

=====================================
Multiclass and multioutput algorithms
=====================================

This section of the user guide covers functionality related to multi-learning
problems, including :term:`multiclass`, :term:`multilabel`, and
:term:`multioutput` classification and regression.

The modules in this section implement :term:`meta-estimators`, which require
a base estimator to be provided in their constructor. Meta-estimators extend
the functionality of the base estimator to support multi-learning problems,
which is accomplished by transforming the multi-learning problem into a set
of simpler problems, then fitting one estimator per problem.

This section covers two modules: :mod:`sklearn.multiclass` and
:mod:`sklearn.multioutput`. The chart below demonstrates the problem types
that each module is responsible for, and the corresponding meta-estimators
that each module provides.

.. image:: ../images/multi_org_chart.png
   :align: center

The table below provides a quick reference on the differences between problem
types. More detailed explanations can be found in subsequent sections of this
guide.

.. list-table::
   :header-rows: 1

   * -
     - Number of targets
     - Target cardinality
     - Valid :func:`~sklearn.utils.multiclass.type_of_target`
   * - Multiclass classification
     - 1
     - >2
     - 'multiclass'
   * - Multilabel classification
     - >1
     - 2 (0 or 1)
     - 'multilabel-indicator'
   * - Multiclass-multioutput classification
     - >1
     - >2
     - 'multiclass-multioutput'
   * - Multioutput regression
     - >1
     - Continuous
     - 'continuous-multioutput'
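The :func:`~sklearn.utils.multiclass.type_of_target` helper referenced in the table can be called directly to check which problem type a given target array represents. A minimal sketch (the example arrays are made up for illustration):

```python
import numpy as np
from sklearn.utils.multiclass import type_of_target

# A 1d vector of more than two discrete values -> multiclass
print(type_of_target(np.array([1, 0, 2, 1])))                    # 'multiclass'

# A 2d binary indicator matrix -> multilabel-indicator
print(type_of_target(np.array([[1, 0], [0, 1], [1, 1]])))        # 'multilabel-indicator'

# A 2d matrix of non-binary class labels -> multiclass-multioutput
print(type_of_target(np.array([[1, 0], [2, 1], [0, 2]])))        # 'multiclass-multioutput'

# A 2d matrix of floats -> continuous-multioutput
print(type_of_target(np.array([[31.4, 94.0], [40.5, 109.0]])))   # 'continuous-multioutput'
```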
Below is a summary of scikit-learn estimators that have multi-learning
support built-in, grouped by strategy. You don't need the meta-estimators
provided by this section if you're using one of these estimators. However,
meta-estimators can provide additional strategies beyond what is built-in:

.. currentmodule:: sklearn

**Inherently multiclass:**

- :class:`naive_bayes.BernoulliNB`
- :class:`tree.DecisionTreeClassifier`
- :class:`tree.ExtraTreeClassifier`
- :class:`ensemble.ExtraTreesClassifier`
- :class:`naive_bayes.GaussianNB`
- :class:`neighbors.KNeighborsClassifier`
- :class:`semi_supervised.LabelPropagation`
- :class:`semi_supervised.LabelSpreading`
- :class:`discriminant_analysis.LinearDiscriminantAnalysis`
- :class:`svm.LinearSVC` (setting multi_class="crammer_singer")
- :class:`linear_model.LogisticRegression` (with most solvers)
- :class:`linear_model.LogisticRegressionCV` (with most solvers)
- :class:`neural_network.MLPClassifier`
- :class:`neighbors.NearestCentroid`
- :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`
- :class:`neighbors.RadiusNeighborsClassifier`
- :class:`ensemble.RandomForestClassifier`
- :class:`linear_model.RidgeClassifier`
- :class:`linear_model.RidgeClassifierCV`

**Multiclass as One-Vs-One:**

- :class:`svm.NuSVC`
- :class:`svm.SVC`
- :class:`gaussian_process.GaussianProcessClassifier` (setting multi_class = "one_vs_one")

**Multiclass as One-Vs-The-Rest:**

- :class:`ensemble.GradientBoostingClassifier`
- :class:`gaussian_process.GaussianProcessClassifier` (setting multi_class = "one_vs_rest")
- :class:`svm.LinearSVC` (setting multi_class="ovr")
- :class:`linear_model.LogisticRegression` (most solvers)
- :class:`linear_model.LogisticRegressionCV` (most solvers)
- :class:`linear_model.SGDClassifier`
- :class:`linear_model.Perceptron`
- :class:`linear_model.PassiveAggressiveClassifier`

**Support multilabel:**

- :class:`tree.DecisionTreeClassifier`
- :class:`tree.ExtraTreeClassifier`
- :class:`ensemble.ExtraTreesClassifier`
- :class:`neighbors.KNeighborsClassifier`
- :class:`neural_network.MLPClassifier`
- :class:`neighbors.RadiusNeighborsClassifier`
- :class:`ensemble.RandomForestClassifier`
- :class:`linear_model.RidgeClassifier`
- :class:`linear_model.RidgeClassifierCV`

**Support multiclass-multioutput:**

- :class:`tree.DecisionTreeClassifier`
- :class:`tree.ExtraTreeClassifier`
- :class:`ensemble.ExtraTreesClassifier`
- :class:`neighbors.KNeighborsClassifier`
- :class:`neighbors.RadiusNeighborsClassifier`
- :class:`ensemble.RandomForestClassifier`

.. _multiclass_classification:

Multiclass classification
=========================

.. warning::
    All classifiers in scikit-learn do multiclass classification
    out-of-the-box. You don't need to use the :mod:`sklearn.multiclass` module
    unless you want to experiment with different multiclass strategies.

**Multiclass classification** is a classification task with more than two
classes. Each sample can only be labeled as one class.

For example, classification using features extracted from a set of images of
fruit, where each image may either be of an orange, an apple, or a pear. Each
image is one sample and is labeled as one of the 3 possible classes.
Multiclass classification makes the assumption that each sample is assigned
to one and only one label - one sample cannot, for example, be both a pear
and an apple.

While all scikit-learn classifiers are capable of multiclass classification,
the meta-estimators offered by :mod:`sklearn.multiclass` permit changing the
way they handle more than two classes because this may have an effect on
classifier performance (either in terms of generalization error or required
computational resources).

Target format
-------------

Valid :term:`multiclass` representations for
:func:`~sklearn.utils.multiclass.type_of_target` (`y`) are:

- 1d or column vector containing more than two discrete values. An
  example of a vector ``y`` for 4 samples:

    >>> import numpy as np
    >>> y = np.array(['apple', 'pear', 'apple', 'orange'])
    >>> print(y)
    ['apple' 'pear' 'apple' 'orange']

- Dense or sparse :term:`binary` matrix of shape ``(n_samples, n_classes)``
  with a single sample per row, where each column represents one class.
  An example of both a dense and sparse :term:`binary` matrix ``y`` for 4
  samples, where the columns, in order, are apple, orange, and pear:

    >>> import numpy as np
    >>> from sklearn.preprocessing import LabelBinarizer
    >>> y = np.array(['apple', 'pear', 'apple', 'orange'])
    >>> y_dense = LabelBinarizer().fit_transform(y)
    >>> print(y_dense)
    [[1 0 0]
     [0 0 1]
     [1 0 0]
     [0 1 0]]
    >>> from scipy import sparse
    >>> y_sparse = sparse.csr_matrix(y_dense)
    >>> print(y_sparse)
    <Compressed Sparse Row sparse matrix of dtype 'int64'
        with 4 stored elements and shape (4, 3)>
      Coords    Values
      (0, 0)    1
      (1, 2)    1
      (2, 0)    1
      (3, 1)    1

  For more information about :class:`~sklearn.preprocessing.LabelBinarizer`,
  refer to :ref:`preprocessing_targets`.

.. _ovr_classification:

OneVsRestClassifier
-------------------

The **one-vs-rest** strategy, also known as **one-vs-all**, is implemented in
:class:`~sklearn.multiclass.OneVsRestClassifier`. The strategy consists in
fitting one classifier per class. For each classifier, the class is fitted
against all the other classes. In addition to its computational efficiency
(only `n_classes` classifiers are needed), one advantage of this approach is
its interpretability. Since each class is represented by one and only one
classifier, it is possible to gain knowledge about the class by inspecting
its corresponding classifier. This is the most commonly used strategy and is
a fair default choice.

Below is an example of multiclass learning using OvR:

  >>> from sklearn import datasets
  >>> from sklearn.multiclass import OneVsRestClassifier
  >>> from sklearn.svm import LinearSVC
  >>> X, y = datasets.load_iris(return_X_y=True)
  >>> OneVsRestClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X)
  array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2,
         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])

:class:`~sklearn.multiclass.OneVsRestClassifier` also supports multilabel
classification. To use this feature, feed the classifier an indicator matrix,
in which cell [i, j] indicates the presence of label j in sample i.

.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_multilabel_001.png
    :target: ../auto_examples/miscellaneous/plot_multilabel.html
    :align: center
    :scale: 75%

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multilabel.py`
* :ref:`sphx_glr_auto_examples_classification_plot_classification_probability.py`

.. _ovo_classification:

OneVsOneClassifier
------------------

:class:`~sklearn.multiclass.OneVsOneClassifier` constructs one classifier per
pair of classes. At prediction time, the class which received the most votes
is selected. In the event of a tie (among two classes with an equal number of
votes), it selects the class with the highest aggregate classification
confidence by summing over the pair-wise classification confidence levels
computed by the underlying binary classifiers.

Since it requires to fit :math:`n_{classes} \times (n_{classes} - 1) / 2`
classifiers, this method is usually slower than one-vs-the-rest, due to its
:math:`O(n_{classes}^2)` complexity. However, this method may be advantageous
for algorithms such as kernel algorithms which don't scale well with
``n_samples``. This is because each individual learning problem only involves
a small subset of the data whereas, with one-vs-the-rest, the complete
dataset is used ``n_classes`` times. The decision function is the result of a
monotonic transformation of the one-versus-one classification.
Below is an example of multiclass learning using OvO:

  >>> from sklearn import datasets
  >>> from sklearn.multiclass import OneVsOneClassifier
  >>> from sklearn.svm import LinearSVC
  >>> X, y = datasets.load_iris(return_X_y=True)
  >>> OneVsOneClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X)
  array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])

.. rubric:: References

* "Pattern Recognition and Machine Learning. Springer",
  Christopher M. Bishop, page 183, (First Edition)

.. _ecoc:

OutputCodeClassifier
--------------------

Error-Correcting Output Code-based strategies are fairly different from
one-vs-the-rest and one-vs-one. With these strategies, each class is
represented in a Euclidean space, where each dimension can only be 0 or 1.
Another way to put it is that each class is represented by a binary code (an
array of 0 and 1). The matrix which keeps track of the location/code of each
class is called the code book. The code size is the dimensionality of the
aforementioned space. Intuitively, each class should be represented by a code
as unique as possible and a good code book should be designed to optimize
classification accuracy. In this implementation, we simply use a
randomly-generated code book as advocated in [3]_ although more elaborate
methods may be added in the future.

At fitting time, one binary classifier per bit in the code book is fitted.
At prediction time, the classifiers are used to project new points in the
class space and the class closest to the points is chosen.

In :class:`~sklearn.multiclass.OutputCodeClassifier`, the ``code_size``
attribute allows the user to control the number of classifiers which will be
used. It is a percentage of the total number of classes.

A number between 0 and 1 will require fewer classifiers than
one-vs-the-rest. In theory, ``log2(n_classes) / n_classes`` is sufficient to
represent each class unambiguously. However, in practice, it may not lead to
good accuracy since ``log2(n_classes)`` is much smaller than ``n_classes``.

A number greater than 1 will require more classifiers than
one-vs-the-rest. In this case, some classifiers will in theory correct for
the mistakes made by other classifiers, hence the name "error-correcting".
In practice, however, this may not happen as classifier mistakes will
typically be correlated. The error-correcting output codes have a similar
effect to bagging.

Below is an example of multiclass learning using Output-Codes:

  >>> from sklearn import datasets
  >>> from sklearn.multiclass import OutputCodeClassifier
  >>> from sklearn.svm import LinearSVC
  >>> X, y = datasets.load_iris(return_X_y=True)
  >>> clf = OutputCodeClassifier(LinearSVC(random_state=0),
  ...                            code_size=2, random_state=0)
  >>> clf.fit(X, y).predict(X)
  array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1,
         1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
         2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2,
         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])

.. rubric:: References

* "Solving multiclass learning problems via error-correcting output codes",
  Dietterich T., Bakiri G., Journal of Artificial Intelligence Research 2,
  1995.

.. [3] "The error coding method and PICTs", James G., Hastie T.,
   Journal of Computational and Graphical Statistics 7, 1998.
* "The Elements of Statistical Learning", Hastie T., Tibshirani R.,
  Friedman J., page 606 (second-edition), 2008.

.. _multilabel_classification:

Multilabel classification
=========================

**Multilabel classification** (closely related to **multioutput
classification**) is a classification task labeling each sample with ``m``
labels from ``n_classes`` possible classes, where ``m`` can be 0 to
``n_classes`` inclusive. This can be thought of as predicting properties of a
sample that are not mutually exclusive. Formally, a binary output is assigned
to each class, for every sample. Positive classes are indicated with 1 and
negative classes with 0 or -1. It is thus comparable to running ``n_classes``
binary classification tasks, for example with
:class:`~sklearn.multioutput.MultiOutputClassifier`. This approach treats
each label independently whereas multilabel classifiers *may* treat the
multiple classes simultaneously, accounting for correlated behavior among
them.

For example, prediction of the topics relevant to a text document or video.
The document or video may be about one of 'religion', 'politics', 'finance'
or 'education', several of the topic classes or all of the topic classes.

Target format
-------------

A valid representation of :term:`multilabel` `y` is an either dense or sparse
:term:`binary` matrix of shape ``(n_samples, n_classes)``. Each column
represents a class. The ``1``'s in each row denote the positive classes a
sample has been labeled with. An example of a dense matrix ``y`` for 3
samples:

  >>> y = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [0, 0, 0, 0]])
  >>> print(y)
  [[1 0 0 1]
   [0 0 1 1]
   [0 0 0 0]]

Dense binary matrices can also be created using
:class:`~sklearn.preprocessing.MultiLabelBinarizer`. For more information,
refer to :ref:`preprocessing_targets`.

An example of the same ``y`` in sparse matrix form:

  >>> y_sparse = sparse.csr_matrix(y)
  >>> print(y_sparse)
  <Compressed Sparse Row sparse matrix of dtype 'int64'
      with 4 stored elements and shape (3, 4)>
    Coords    Values
    (0, 0)    1
    (0, 3)    1
    (1, 2)    1
    (1, 3)    1

.. _multioutputclassfier:

MultiOutputClassifier
---------------------

Multilabel classification support can be added to any classifier with
:class:`~sklearn.multioutput.MultiOutputClassifier`. This strategy consists
of fitting one classifier per target. This allows multiple target variable
classifications. The purpose of this class is to extend estimators to be
able to estimate a series of target functions (f1, f2, f3..., fn) that are
trained on a single X predictor matrix to predict a series of responses
(y1, y2, y3..., yn).

You can find a usage example for
:class:`~sklearn.multioutput.MultiOutputClassifier` as part of the section on
:ref:`multiclass_multioutput_classification` since it is a generalization of
multilabel classification to multiclass outputs instead of binary outputs.

.. _classifierchain:

ClassifierChain
---------------

Classifier chains (see :class:`~sklearn.multioutput.ClassifierChain`) are a
way of combining a number of binary classifiers into a single multi-label
model that is capable of exploiting correlations among targets.

For a multi-label classification problem with N classes, N binary classifiers
are assigned an integer between 0 and N-1. These integers define the order of
models in the chain. Each classifier is then fit on the available training
data plus the true labels of the classes whose models were assigned a lower
number.

When predicting, the true labels will not be available. Instead the
predictions of each model are passed on to the subsequent models in the chain
to be used as features.

Clearly the order of the chain is important. The first model in the chain has
no information about the other labels while the last model in the chain has
features indicating the presence of all of the other labels. In general one
does not know the optimal ordering of the models in the chain so typically
many randomly ordered chains are fit and their predictions are averaged
together.
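The chain mechanics described above can be sketched with :class:`~sklearn.multioutput.ClassifierChain` itself. This is a minimal illustration; the synthetic dataset, base estimator, and explicit ``order`` are arbitrary choices, not from the original text:

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

# Synthetic multilabel problem: 3 binary targets per sample.
X, Y = make_multilabel_classification(n_samples=100, n_classes=3, random_state=0)

# Each classifier in the chain is fit on X plus the labels of the
# classifiers earlier in the chain; `order` fixes that ordering explicitly.
chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order=[0, 1, 2], random_state=0)
chain.fit(X, Y)

# At predict time, each model's prediction is fed to the next in the chain.
predictions = chain.predict(X)
print(predictions.shape)  # (100, 3)
```

In practice, since the optimal ordering is unknown, several randomly ordered chains are typically fit and their predictions averaged.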
.. rubric:: References

* Jesse Read, Bernhard Pfahringer, Geoff Holmes, Eibe Frank,
  "Classifier Chains for Multi-label Classification", 2009.

.. _multiclass_multioutput_classification:

Multiclass-multioutput classification
=====================================

**Multiclass-multioutput classification** (also known as **multitask
classification**) is a classification task which labels each sample with a
set of **non-binary** properties. Both the number of properties and the
number of classes per property is greater than 2. A single estimator thus
handles several joint classification tasks. This is both a generalization of
the multi\ *label* classification task, which only considers binary
attributes, as well as a generalization of the multi\ *class* classification
task, where only one property is considered.

For example, classification of the properties "type of fruit" and "colour"
for a set of images of fruit. The property "type of fruit" has the possible
classes: "apple", "pear" and "orange". The property "colour" has the possible
classes: "green", "red", "yellow" and "orange". Each sample is an image of a
fruit, a label is output for both properties and each label is one of the
possible classes of the corresponding property.

Note that all classifiers handling multiclass-multioutput (also known as
multitask classification) tasks, support the multilabel classification task
as a special case. Multitask classification is similar to the multioutput
classification task with different model formulations. For more information,
see the relevant estimator documentation.

Below is an example of multiclass-multioutput classification:

  >>> from sklearn.datasets import make_classification
  >>> from sklearn.multioutput import MultiOutputClassifier
  >>> from sklearn.ensemble import RandomForestClassifier
  >>> from sklearn.utils import shuffle
  >>> import numpy as np
  >>> X, y1 = make_classification(n_samples=10, n_features=100,
  ...                             n_informative=30, n_classes=3,
  ...                             random_state=1)
  >>> y2 = shuffle(y1, random_state=1)
  >>> y3 = shuffle(y1, random_state=2)
  >>> Y = np.vstack((y1, y2, y3)).T
  >>> n_samples, n_features = X.shape # 10,100
  >>> n_outputs = Y.shape[1] # 3
  >>> n_classes = 3
  >>> forest = RandomForestClassifier(random_state=1)
  >>> multi_target_forest = MultiOutputClassifier(forest, n_jobs=2)
  >>> multi_target_forest.fit(X, Y).predict(X)
  array([[2, 2, 0],
         [1, 2, 1],
         [2, 1, 0],
         [0, 0, 2],
         [0, 2, 1],
         [0, 0, 2],
         [1, 1, 0],
         [1, 1, 1],
         [0, 0, 2],
         [2, 0, 0]])

.. warning::
    At present, no metric in :mod:`sklearn.metrics`
    supports the multiclass-multioutput classification task.

Target format
-------------

A valid representation of :term:`multioutput` `y` is a dense matrix of shape
``(n_samples, n_classes)`` of class labels. A column-wise concatenation of 1d
:term:`multiclass` variables. An example of ``y`` for 3 samples:

  >>> y = np.array([['apple', 'green'], ['orange', 'orange'], ['pear', 'green']])
  >>> print(y)
  [['apple' 'green']
   ['orange' 'orange']
   ['pear' 'green']]

.. _multioutput_regression:

Multioutput regression
======================

**Multioutput regression** predicts multiple numerical properties for each
sample. Each property is a numerical variable and the number of properties
to be predicted for each sample is greater than or equal to 2. Some
estimators that support multioutput regression are faster than just running
``n_output`` estimators.

For example, prediction of both wind speed and wind direction, in degrees,
using data obtained at a certain location. Each sample would be data
obtained at one location and both wind speed and direction would be output
for each sample.
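A natively multioutput regressor simply accepts a two-column target, with no meta-estimator involved. A minimal sketch with synthetic data standing in for the wind example (the toy arrays are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: two continuous targets per sample (e.g. speed and direction).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
Y = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])

# LinearRegression handles the 2-column target directly; predict() then
# returns one row of 2 values per query sample.
model = LinearRegression().fit(X, Y)
print(model.predict([[4.0]]))  # approximately [[5., 50.]]
```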
The following regressors natively support multioutput regression:

- :class:`cross_decomposition.CCA`
- :class:`tree.DecisionTreeRegressor`
- :class:`dummy.DummyRegressor`
- :class:`linear_model.ElasticNet`
- :class:`tree.ExtraTreeRegressor`
- :class:`ensemble.ExtraTreesRegressor`
- :class:`gaussian_process.GaussianProcessRegressor`
- :class:`neighbors.KNeighborsRegressor`
- :class:`kernel_ridge.KernelRidge`
- :class:`linear_model.Lars`
- :class:`linear_model.Lasso`
- :class:`linear_model.LassoLars`
- :class:`linear_model.LinearRegression`
- :class:`multioutput.MultiOutputRegressor`
- :class:`linear_model.MultiTaskElasticNet`
- :class:`linear_model.MultiTaskElasticNetCV`
- :class:`linear_model.MultiTaskLasso`
- :class:`linear_model.MultiTaskLassoCV`
- :class:`linear_model.OrthogonalMatchingPursuit`
- :class:`cross_decomposition.PLSCanonical`
- :class:`cross_decomposition.PLSRegression`
- :class:`linear_model.RANSACRegressor`
- :class:`neighbors.RadiusNeighborsRegressor`
- :class:`ensemble.RandomForestRegressor`
- :class:`multioutput.RegressorChain`
- :class:`linear_model.Ridge`
- :class:`linear_model.RidgeCV`
- :class:`compose.TransformedTargetRegressor`

Target format
-------------

A valid representation of :term:`multioutput` `y` is a dense matrix of shape
``(n_samples, n_output)`` of floats. A column-wise concatenation of
:term:`continuous` variables. An example of ``y`` for 3 samples:

  >>> y = np.array([[31.4, 94], [40.5, 109], [25.0, 30]])
  >>> print(y)
  [[ 31.4  94. ]
   [ 40.5 109. ]
   [ 25.   30. ]]

.. _multioutputregressor:

MultiOutputRegressor
--------------------

Multioutput regression support can be added to any regressor with
:class:`~sklearn.multioutput.MultiOutputRegressor`. This strategy consists of
fitting one regressor per target. Since each target is represented by exactly
one regressor, it is possible to gain knowledge about the target by
inspecting its corresponding regressor. As
:class:`~sklearn.multioutput.MultiOutputRegressor` fits one regressor per
target, it cannot take advantage of correlations between targets.

Below is an example of multioutput regression:

  >>> from sklearn.datasets import make_regression
  >>> from sklearn.multioutput import MultiOutputRegressor
  >>> from sklearn.ensemble import GradientBoostingRegressor
  >>> X, y = make_regression(n_samples=10, n_targets=3, random_state=1)
  >>> MultiOutputRegressor(GradientBoostingRegressor(random_state=0)).fit(X, y).predict(X)
  array([[-154.75474165, -147.03498585,  -50.03812219],
         [   7.12165031,    5.12914884,  -81.46081961],
         [-187.8948621 , -100.44373091,   13.88978285],
         [-141.62745778,   95.02891072, -191.48204257],
         [  97.03260883,  165.34867495,  139.52003279],
         [ 123.92529176,   21.25719016,   -7.84253   ],
         [-122.25193977,  -85.16443186, -107.12274212],
         [ -30.170388  ,  -94.80956739,   12.16979946],
         [ 140.72667194,  176.50941682,  -17.50447799],
         [ 149.37967282,  -81.15699552,   -5.72850319]])

.. _regressorchain:

RegressorChain
--------------

Regressor chains (see :class:`~sklearn.multioutput.RegressorChain`) are
analogous to :class:`~sklearn.multioutput.ClassifierChain` as a way of
combining a number of regressions into a single multi-target model that is
capable of exploiting correlations among targets.
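The RegressorChain description above comes with no usage snippet; here is a minimal sketch on synthetic data (the dataset, base estimator, and explicit ``order`` are illustrative choices, not from the original text):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.multioutput import RegressorChain

# Synthetic problem with 3 continuous targets per sample.
X, Y = make_regression(n_samples=50, n_features=10, n_targets=3, random_state=0)

# Each regressor in the chain is fit on X augmented with the predictions
# of the regressors earlier in the chain, so later targets can exploit
# correlations with earlier ones.
chain = RegressorChain(Ridge(), order=[0, 1, 2])
chain.fit(X, Y)

print(chain.predict(X).shape)  # (50, 3)
```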
.. _neighbors:

=================
Nearest Neighbors
=================

.. sectionauthor:: Jake Vanderplas <vanderplas@astro.washington.edu>

.. currentmodule:: sklearn.neighbors

:mod:`sklearn.neighbors` provides functionality for unsupervised and
supervised neighbors-based learning methods.  Unsupervised nearest neighbors
is the foundation of many other learning methods,
notably manifold learning and spectral clustering.  Supervised neighbors-based
learning comes in two flavors: `classification`_ for data with
discrete labels, and `regression`_ for data with continuous labels.

The principle behind nearest neighbor methods is to find a predefined number
of training samples closest in distance to the new point, and
predict the label from these.  The number of samples can be a user-defined
constant (k-nearest neighbor learning), or vary based
on the local density of points (radius-based neighbor learning).
The distance can, in general, be any metric measure: standard Euclidean
distance is the most common choice.
Neighbors-based methods are known as *non-generalizing* machine
learning methods, since they simply "remember" all of their training data
(possibly transformed into a fast indexing structure such as a
:ref:`Ball Tree <ball_tree>` or :ref:`KD Tree <kd_tree>`).

Despite its simplicity, nearest neighbors has been successful in a
large number of classification and regression problems, including
handwritten digits and satellite image scenes. Being a non-parametric method,
it is often successful in classification situations where the decision
boundary is very irregular.

The classes in :mod:`sklearn.neighbors` can handle either NumPy arrays or
`scipy.sparse` matrices as input.  For dense matrices, a large number of
possible distance metrics are supported.
For sparse matrices, arbitrary
Minkowski metrics are supported for searches.

There are many learning routines which rely on nearest neighbors at their
core.  One example is :ref:`kernel density estimation <kernel_density>`,
discussed in the :ref:`density estimation <density_estimation>` section.


.. _unsupervised_neighbors:

Unsupervised Nearest Neighbors
==============================

:class:`NearestNeighbors` implements unsupervised nearest neighbors learning.
It acts as a uniform interface to three different nearest neighbors
algorithms: :class:`BallTree`, :class:`KDTree`, and a
brute-force algorithm based on routines in :mod:`sklearn.metrics.pairwise`.
The choice of neighbors search algorithm is controlled through the keyword
``'algorithm'``, which must be one of
``['auto', 'ball_tree', 'kd_tree', 'brute']``.  When the default value
``'auto'`` is passed, the algorithm attempts to determine the best approach
from the training data.  For a discussion of the strengths and weaknesses
of each option, see `Nearest Neighbor Algorithms`_.

.. warning::

    Regarding the Nearest Neighbors algorithms, if two
    neighbors :math:`k+1` and :math:`k` have identical distances
    but different labels, the result will depend on the ordering of the
    training data.

Finding the Nearest Neighbors
-----------------------------
For the simple task of finding the nearest neighbors between two sets of
data, the unsupervised algorithms within :mod:`sklearn.neighbors` can be
used:

    >>> from sklearn.neighbors import NearestNeighbors
    >>> import numpy as np
    >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
    >>> nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)
    >>> distances, indices = nbrs.kneighbors(X)
    >>> indices
    array([[0, 1],
           [1, 0],
           [2, 1],
           [3, 4],
           [4, 3],
           [5, 4]]...)
    >>> distances
    array([[0.        , 1.        ],
           [0.        , 1.        ],
           [0.        , 1.41421356],
           [0.        , 1.        ],
           [0.        , 1.        ],
           [0.        , 1.41421356]])

Because the query set matches the training set, the nearest neighbor of each
point is the point itself, at a distance of zero.

It is also possible to efficiently produce a sparse graph showing the
connections between neighboring points:

    >>> nbrs.kneighbors_graph(X).toarray()
    array([[1., 1., 0., 0., 0., 0.],
           [1., 1., 0., 0., 0., 0.],
           [0., 1., 1., 0., 0., 0.],
           [0., 0., 0., 1., 1., 0.],
           [0., 0., 0., 1., 1., 0.],
           [0., 0., 0., 0., 1., 1.]])

The dataset is structured such that points nearby in index order are nearby
in parameter space, leading to an approximately block-diagonal matrix of
K-nearest neighbors.  Such a sparse graph is useful in a variety of
circumstances which make use of spatial relationships between points for
unsupervised learning: in particular, see :class:`~sklearn.manifold.Isomap`,
:class:`~sklearn.manifold.LocallyLinearEmbedding`, and
:class:`~sklearn.cluster.SpectralClustering`.

KDTree and BallTree Classes
---------------------------
Alternatively, one can use the :class:`KDTree` or :class:`BallTree` classes
directly to find nearest neighbors.  This is the functionality wrapped by
the :class:`NearestNeighbors` class used above.
The Ball Tree and KD Tree\nhave the same interface; we'll show an example of using the KD Tree here:\n\n    >>> from sklearn.neighbors import KDTree\n    >>> import numpy as np\n    >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])\n    >>> kdt = KDTree(X, leaf_size=30, metric='euclidean')\n    >>> kdt.query(X, k=2, return_distance=False)\n    array([[0, 1],\n           [1, 0],\n           [2, 1],\n           [3, 4],\n           [4, 3],\n           [5, 4]]...)\n\nRefer to the :class:`KDTree` and :class:`BallTree` class documentation\nfor more information on the options available for nearest neighbors searches,\nincluding specification of query strategies, distance metrics, etc. For a list\nof valid metrics use `KDTree.valid_metrics` and `BallTree.valid_metrics`:\n\n    >>> from sklearn.neighbors import KDTree, BallTree\n    >>> KDTree.valid_metrics\n    ['euclidean', 'l2', 'minkowski', 'p', 'manhattan', 'cityblock', 'l1', 'chebyshev', 'infinity']\n    >>> BallTree.valid_metrics\n    ['euclidean', 'l2', 'minkowski', 'p', 'manhattan', 'cityblock', 'l1', 'chebyshev', 'infinity', 'seuclidean', 'mahalanobis', 'hamming', 'canberra', 'braycurtis', 'jaccard', 'dice', 'rogerstanimoto', 'russellrao', 'sokalmichener', 'sokalsneath', 'haversine', 'pyfunc']\n\n.. 
_classification:

Nearest Neighbors Classification
================================

Neighbors-based classification is a type of *instance-based learning* or
*non-generalizing learning*: it does not attempt to construct a general
internal model, but simply stores instances of the training data.
Classification is computed from a simple majority vote of the nearest
neighbors of each point: a query point is assigned the data class which
has the most representatives within the nearest neighbors of the point.

scikit-learn implements two different nearest neighbors classifiers:
:class:`KNeighborsClassifier` implements learning based on the :math:`k`
nearest neighbors of each query point, where :math:`k` is an integer value
specified by the user.  :class:`RadiusNeighborsClassifier` implements learning
based on the number of neighbors within a fixed radius :math:`r` of each
query point, where :math:`r` is a floating-point value specified by
the user.

The :math:`k`-neighbors classification in :class:`KNeighborsClassifier`
is the most commonly used technique. The optimal choice of the value :math:`k`
is highly data-dependent: in general a larger :math:`k` suppresses the effects
of noise, but makes the classification boundaries less distinct.

In cases where the data is not uniformly sampled, radius-based neighbors
classification in :class:`RadiusNeighborsClassifier` can be a better choice.
The user specifies a fixed radius :math:`r`, such that points in sparser
neighborhoods use fewer nearest neighbors for the classification.  For
high-dimensional parameter spaces, this method becomes less effective due
to the so-called "curse of dimensionality".

The basic nearest neighbors classification uses uniform weights: that is, the
value assigned to a query point is computed from a simple majority vote of
the nearest neighbors.
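
The majority-vote rule can be sketched on a toy dataset (the data below is
illustrative only, not taken from this guide):

```python
# Minimal sketch of majority-vote classification with KNeighborsClassifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# The three samples nearest to [-0.8, -1] all carry label 1,
# so the majority vote assigns class 1.
pred = clf.predict([[-0.8, -1]])
```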
Under some circumstances, it is better to weight the
neighbors such that nearer neighbors contribute more to the fit.  This can
be accomplished through the ``weights`` keyword.  The default value,
``weights = 'uniform'``, assigns uniform weights to each neighbor.
``weights = 'distance'`` assigns weights proportional to the inverse of the
distance from the query point.  Alternatively, a user-defined function of the
distance can be supplied to compute the weights.

.. |classification_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_classification_001.png
   :target: ../auto_examples/neighbors/plot_classification.html
   :scale: 75

.. centered:: |classification_1|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_classification.py`: an example of
  classification using nearest neighbors.

.. _regression:

Nearest Neighbors Regression
============================

Neighbors-based regression can be used in cases where the data labels are
continuous rather than discrete variables.  The label assigned to a query
point is computed based on the mean of the labels of its nearest neighbors.

scikit-learn implements two different neighbors regressors:
:class:`KNeighborsRegressor` implements learning based on the :math:`k`
nearest neighbors of each query point, where :math:`k` is an integer
value specified by the user.  :class:`RadiusNeighborsRegressor` implements
learning based on the neighbors within a fixed radius :math:`r` of the
query point, where :math:`r` is a floating-point value specified by the
user.

The basic nearest neighbors regression uses uniform weights: that is,
each point in the local neighborhood contributes uniformly to the
prediction for a query point.  Under some circumstances, it can be
advantageous to weight points such that nearby points contribute more
to the regression than faraway points.  This can be accomplished through
the ``weights`` keyword.
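
A quick sketch of the effect of this keyword, on made-up one-dimensional data:

```python
# Minimal sketch (toy 1-D data): KNeighborsRegressor with uniform vs
# inverse-distance weighting of the neighbors.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])

uniform = KNeighborsRegressor(n_neighbors=2, weights='uniform').fit(X, y)
weighted = KNeighborsRegressor(n_neighbors=2, weights='distance').fit(X, y)

# The two neighbors of x=0.25 have targets 0 and 1; a plain mean gives
# 0.5, while inverse-distance weighting pulls the prediction toward the
# closer neighbor at x=0.
pred_uniform = uniform.predict([[0.25]])
pred_weighted = weighted.predict([[0.25]])
```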
The default value, ``weights = 'uniform'``,\nassigns equal weights to all points.  ``weights = 'distance'`` assigns\nweights proportional to the inverse of the distance from the query point.\nAlternatively, a user-defined function of the distance can be supplied,\nwhich will be used to compute the weights.\n\n.. figure:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_regression_001.png\n   :target: ..\/auto_examples\/neighbors\/plot_regression.html\n   :align: center\n   :scale: 75\n\nThe use of multi-output nearest neighbors for regression is demonstrated in\n:ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`. In this example, the inputs\nX are the pixels of the upper half of faces and the outputs Y are the pixels of\nthe lower half of those faces.\n\n.. figure:: ..\/auto_examples\/miscellaneous\/images\/sphx_glr_plot_multioutput_face_completion_001.png\n   :target: ..\/auto_examples\/miscellaneous\/plot_multioutput_face_completion.html\n   :scale: 75\n   :align: center\n\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_neighbors_plot_regression.py`: an example of regression\n  using nearest neighbors.\n\n* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`:\n  an example of multi-output regression using nearest neighbors.\n\n\nNearest Neighbor Algorithms\n===========================\n\n.. _brute_force:\n\nBrute Force\n-----------\n\nFast computation of nearest neighbors is an active area of research in\nmachine learning. The most naive neighbor search implementation involves\nthe brute-force computation of distances between all pairs of points in the\ndataset: for :math:`N` samples in :math:`D` dimensions, this approach scales\nas :math:`O[D N^2]`.  Efficient brute-force neighbors searches can be very\ncompetitive for small data samples.\nHowever, as the number of samples :math:`N` grows, the brute-force\napproach quickly becomes infeasible.  
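
The quadratic cost is easy to see directly: a brute-force query is nothing
more than a full pairwise distance matrix followed by a per-row argmin.  A
sketch on random data (the array sizes here are arbitrary):

```python
# Sketch of a brute-force neighbor search: compute every query/sample
# distance (an n_queries x N matrix, hence O[D N^2] work when querying
# N points against N samples in D dimensions), then take the argmin.
import numpy as np
from sklearn.metrics import pairwise_distances

rng = np.random.RandomState(0)
X = rng.rand(100, 3)        # N = 100 training samples in D = 3 dimensions
queries = rng.rand(5, 3)

D = pairwise_distances(queries, X)   # shape (5, 100)
nearest = D.argmin(axis=1)           # index of the nearest sample per query
```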
In the classes within\n:mod:`sklearn.neighbors`, brute-force neighbors searches are specified\nusing the keyword ``algorithm = 'brute'``, and are computed using the\nroutines available in :mod:`sklearn.metrics.pairwise`.\n\n.. _kd_tree:\n\nK-D Tree\n--------\n\nTo address the computational inefficiencies of the brute-force approach, a\nvariety of tree-based data structures have been invented.  In general, these\nstructures attempt to reduce the required number of distance calculations\nby efficiently encoding aggregate distance information for the sample.\nThe basic idea is that if point :math:`A` is very distant from point\n:math:`B`, and point :math:`B` is very close to point :math:`C`,\nthen we know that points :math:`A` and :math:`C`\nare very distant, *without having to explicitly calculate their distance*.\nIn this way, the computational cost of a nearest neighbors search can be\nreduced to :math:`O[D N \\log(N)]` or better. This is a significant\nimprovement over brute-force for large :math:`N`.\n\nAn early approach to taking advantage of this aggregate information was\nthe *KD tree* data structure (short for *K-dimensional tree*), which\ngeneralizes two-dimensional *Quad-trees* and 3-dimensional *Oct-trees*\nto an arbitrary number of dimensions.  The KD tree is a binary tree\nstructure which recursively partitions the parameter space along the data\naxes, dividing it into nested orthotropic regions into which data points\nare filed.  The construction of a KD tree is very fast: because partitioning\nis performed only along the data axes, no :math:`D`-dimensional distances\nneed to be computed. 
Once constructed, the nearest neighbor of a query\npoint can be determined with only :math:`O[\\log(N)]` distance computations.\nThough the KD tree approach is very fast for low-dimensional (:math:`D < 20`)\nneighbors searches, it becomes inefficient as :math:`D` grows very large:\nthis is one manifestation of the so-called \"curse of dimensionality\".\nIn scikit-learn, KD tree neighbors searches are specified using the\nkeyword ``algorithm = 'kd_tree'``, and are computed using the class\n:class:`KDTree`.\n\n\n.. dropdown:: References\n\n  * `\"Multidimensional binary search trees used for associative searching\"\n    <https:\/\/dl.acm.org\/citation.cfm?doid=361002.361007>`_,\n    Bentley, J.L., Communications of the ACM (1975)\n\n\n.. _ball_tree:\n\nBall Tree\n---------\n\nTo address the inefficiencies of KD Trees in higher dimensions, the *ball tree*\ndata structure was developed.  Where KD trees partition data along\nCartesian axes, ball trees partition data in a series of nesting\nhyper-spheres.  This makes tree construction more costly than that of the\nKD tree, but results in a data structure which can be very efficient on\nhighly structured data, even in very high dimensions.\n\nA ball tree recursively divides the data into\nnodes defined by a centroid :math:`C` and radius :math:`r`, such that each\npoint in the node lies within the hyper-sphere defined by :math:`r` and\n:math:`C`. The number of candidate points for a neighbor search\nis reduced through use of the *triangle inequality*:\n\n.. 
math::   |x+y| \\leq |x| + |y|\n\nWith this setup, a single distance calculation between a test point and\nthe centroid is sufficient to determine a lower and upper bound on the\ndistance to all points within the node.\nBecause of the spherical geometry of the ball tree nodes, it can out-perform\na *KD-tree* in high dimensions, though the actual performance is highly\ndependent on the structure of the training data.\nIn scikit-learn, ball-tree-based\nneighbors searches are specified using the keyword ``algorithm = 'ball_tree'``,\nand are computed using the class :class:`BallTree`.\nAlternatively, the user can work with the :class:`BallTree` class directly.\n\n\n.. dropdown:: References\n\n  * `\"Five Balltree Construction Algorithms\"\n    <https:\/\/citeseerx.ist.psu.edu\/doc_view\/pid\/17ac002939f8e950ffb32ec4dc8e86bdd8cb5ff1>`_,\n    Omohundro, S.M., International Computer Science Institute\n    Technical Report (1989)\n\n.. dropdown:: Choice of Nearest Neighbors Algorithm\n\n  The optimal algorithm for a given dataset is a complicated choice, and\n  depends on a number of factors:\n\n  * number of samples :math:`N` (i.e. ``n_samples``) and dimensionality\n    :math:`D` (i.e. ``n_features``).\n\n    * *Brute force* query time grows as :math:`O[D N]`\n    * *Ball tree* query time grows as approximately :math:`O[D \\log(N)]`\n    * *KD tree* query time changes with :math:`D` in a way that is difficult\n      to precisely characterise.  For small :math:`D` (less than 20 or so)\n      the cost is approximately :math:`O[D\\log(N)]`, and the KD tree\n      query can be very efficient.\n      For larger :math:`D`, the cost increases to nearly :math:`O[DN]`, and\n      the overhead due to the tree\n      structure can lead to queries which are slower than brute force.\n\n    For small data sets (:math:`N` less than 30 or so), :math:`\\log(N)` is\n    comparable to :math:`N`, and brute force algorithms can be more efficient\n    than a tree-based approach.  
Both :class:`KDTree` and :class:`BallTree`\n    address this through providing a *leaf size* parameter: this controls the\n    number of samples at which a query switches to brute-force.  This allows both\n    algorithms to approach the efficiency of a brute-force computation for small\n    :math:`N`.\n\n  * data structure: *intrinsic dimensionality* of the data and\/or *sparsity*\n    of the data. Intrinsic dimensionality refers to the dimension\n    :math:`d \\le D` of a manifold on which the data lies, which can be linearly\n    or non-linearly embedded in the parameter space. Sparsity refers to the\n    degree to which the data fills the parameter space (this is to be\n    distinguished from the concept as used in \"sparse\" matrices.  The data\n    matrix may have no zero entries, but the **structure** can still be\n    \"sparse\" in this sense).\n\n    * *Brute force* query time is unchanged by data structure.\n    * *Ball tree* and *KD tree* query times can be greatly influenced\n      by data structure.  In general, sparser data with a smaller intrinsic\n      dimensionality leads to faster query times.  Because the KD tree\n      internal representation is aligned with the parameter axes, it will not\n      generally show as much improvement as ball tree for arbitrarily\n      structured data.\n\n    Datasets used in machine learning tend to be very structured, and are\n    very well-suited for tree-based queries.\n\n  * number of neighbors :math:`k` requested for a query point.\n\n    * *Brute force* query time is largely unaffected by the value of :math:`k`\n    * *Ball tree* and *KD tree* query time will become slower as :math:`k`\n      increases.  
This is due to two effects: first, a larger :math:`k` leads\n      to the necessity to search a larger portion of the parameter space.\n      Second, using :math:`k > 1` requires internal queueing of results\n      as the tree is traversed.\n\n    As :math:`k` becomes large compared to :math:`N`, the ability to prune\n    branches in a tree-based query is reduced.  In this situation, Brute force\n    queries can be more efficient.\n\n  * number of query points.  Both the ball tree and the KD Tree\n    require a construction phase.  The cost of this construction becomes\n    negligible when amortized over many queries.  If only a small number of\n    queries will be performed, however, the construction can make up\n    a significant fraction of the total cost.  If very few query points\n    will be required, brute force is better than a tree-based method.\n\n  Currently, ``algorithm = 'auto'`` selects ``'brute'`` if any of the following\n  conditions are verified:\n\n  * input data is sparse\n  * ``metric = 'precomputed'``\n  * :math:`D > 15`\n  * :math:`k >= N\/2`\n  * ``effective_metric_`` isn't in the ``VALID_METRICS`` list for either\n    ``'kd_tree'`` or ``'ball_tree'``\n\n  Otherwise, it selects the first out of ``'kd_tree'`` and ``'ball_tree'`` that\n  has ``effective_metric_`` in its ``VALID_METRICS`` list. This heuristic is\n  based on the following assumptions:\n\n  * the number of query points is at least the same order as the number of\n    training points\n  * ``leaf_size`` is close to its default value of ``30``\n  * when :math:`D > 15`, the intrinsic dimensionality of the data is generally\n    too high for tree-based methods\n\n.. dropdown:: Effect of ``leaf_size``\n\n  As noted above, for small sample sizes a brute force search can be more\n  efficient than a tree-based query.  This fact is accounted for in the ball\n  tree and KD tree by internally switching to brute force searches within\n  leaf nodes.  
The level of this switch can be specified with the parameter\n  ``leaf_size``.  This parameter choice has many effects:\n\n  **construction time**\n    A larger ``leaf_size`` leads to a faster tree construction time, because\n    fewer nodes need to be created\n\n  **query time**\n    Both a large or small ``leaf_size`` can lead to suboptimal query cost.\n    For ``leaf_size`` approaching 1, the overhead involved in traversing\n    nodes can significantly slow query times.  For ``leaf_size`` approaching\n    the size of the training set, queries become essentially brute force.\n    A good compromise between these is ``leaf_size = 30``, the default value\n    of the parameter.\n\n  **memory**\n    As ``leaf_size`` increases, the memory required to store a tree structure\n    decreases.  This is especially important in the case of ball tree, which\n    stores a :math:`D`-dimensional centroid for each node.  The required\n    storage space for :class:`BallTree` is approximately ``1 \/ leaf_size`` times\n    the size of the training set.\n\n  ``leaf_size`` is not referenced for brute force queries.\n\n.. dropdown:: Valid Metrics for Nearest Neighbor Algorithms\n\n  For a list of available metrics, see the documentation of the\n  :class:`~sklearn.metrics.DistanceMetric` class and the metrics listed in\n  `sklearn.metrics.pairwise.PAIRWISE_DISTANCE_FUNCTIONS`. Note that the \"cosine\"\n  metric uses :func:`~sklearn.metrics.pairwise.cosine_distances`.\n\n  A list of valid metrics for any of the above algorithms can be obtained by using their\n  ``valid_metric`` attribute. For example, valid metrics for ``KDTree`` can be generated by:\n\n      >>> from sklearn.neighbors import KDTree\n      >>> print(sorted(KDTree.valid_metrics))\n      ['chebyshev', 'cityblock', 'euclidean', 'infinity', 'l1', 'l2', 'manhattan', 'minkowski', 'p']\n\n.. 
_nearest_centroid_classifier:\n\nNearest Centroid Classifier\n===========================\n\nThe :class:`NearestCentroid` classifier is a simple algorithm that represents\neach class by the centroid of its members. In effect, this makes it\nsimilar to the label updating phase of the :class:`~sklearn.cluster.KMeans` algorithm.\nIt also has no parameters to choose, making it a good baseline classifier. It\ndoes, however, suffer on non-convex classes, as well as when classes have\ndrastically different variances, as equal variance in all dimensions is\nassumed. See Linear Discriminant Analysis (:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis`)\nand Quadratic Discriminant Analysis (:class:`~sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`)\nfor more complex methods that do not make this assumption. Usage of the default\n:class:`NearestCentroid` is simple:\n\n    >>> from sklearn.neighbors import NearestCentroid\n    >>> import numpy as np\n    >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])\n    >>> y = np.array([1, 1, 1, 2, 2, 2])\n    >>> clf = NearestCentroid()\n    >>> clf.fit(X, y)\n    NearestCentroid()\n    >>> print(clf.predict([[-0.8, -1]]))\n    [1]\n\n\nNearest Shrunken Centroid\n-------------------------\n\nThe :class:`NearestCentroid` classifier has a ``shrink_threshold`` parameter,\nwhich implements the nearest shrunken centroid classifier. In effect, the value\nof each feature for each centroid is divided by the within-class variance of\nthat feature. The feature values are then reduced by ``shrink_threshold``. Most\nnotably, if a particular feature value crosses zero, it is set\nto zero. In effect, this removes the feature from affecting the classification.\nThis is useful, for example, for removing noisy features.\n\nIn the example below, using a small shrink threshold increases the accuracy of\nthe model from 0.81 to 0.82.\n\n.. 
|nearest_centroid_1| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_nearest_centroid_001.png\n   :target: ..\/auto_examples\/neighbors\/plot_nearest_centroid.html\n   :scale: 50\n\n.. |nearest_centroid_2| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_nearest_centroid_002.png\n   :target: ..\/auto_examples\/neighbors\/plot_nearest_centroid.html\n   :scale: 50\n\n.. centered:: |nearest_centroid_1| |nearest_centroid_2|\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_neighbors_plot_nearest_centroid.py`: an example of\n  classification using nearest centroid with different shrink thresholds.\n\n.. _neighbors_transformer:\n\nNearest Neighbors Transformer\n=============================\n\nMany scikit-learn estimators rely on nearest neighbors: Several classifiers and\nregressors such as :class:`KNeighborsClassifier` and\n:class:`KNeighborsRegressor`, but also some clustering methods such as\n:class:`~sklearn.cluster.DBSCAN` and\n:class:`~sklearn.cluster.SpectralClustering`, and some manifold embeddings such\nas :class:`~sklearn.manifold.TSNE` and :class:`~sklearn.manifold.Isomap`.\n\nAll these estimators can compute internally the nearest neighbors, but most of\nthem also accept precomputed nearest neighbors :term:`sparse graph`,\nas given by :func:`~sklearn.neighbors.kneighbors_graph` and\n:func:`~sklearn.neighbors.radius_neighbors_graph`. With mode\n`mode='connectivity'`, these functions return a binary adjacency sparse graph\nas required, for instance, in :class:`~sklearn.cluster.SpectralClustering`.\nWhereas with `mode='distance'`, they return a distance sparse graph as required,\nfor instance, in :class:`~sklearn.cluster.DBSCAN`. 
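
The two modes can be contrasted on a tiny made-up dataset:

```python
# Sketch (toy data): the two sparse-graph modes of kneighbors_graph.
# 'connectivity' stores a binary adjacency graph, 'distance' stores
# the actual distances to the neighbors.
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.array([[0.0], [1.0], [3.0]])

conn = kneighbors_graph(X, n_neighbors=1, mode='connectivity')
dist = kneighbors_graph(X, n_neighbors=1, mode='distance')

# Each row marks the single nearest other point; in 'distance' mode the
# stored value is the distance itself (e.g. 2.0 from x=3 to x=1).
```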
To include these functions in
a scikit-learn pipeline, one can also use the corresponding classes
:class:`KNeighborsTransformer` and :class:`RadiusNeighborsTransformer`.
This sparse graph API has several benefits.

First, the precomputed graph can be re-used multiple times, for instance while
varying a parameter of the estimator. This can be done manually by the user, or
using the caching properties of the scikit-learn pipeline:

    >>> import tempfile
    >>> from sklearn.manifold import Isomap
    >>> from sklearn.neighbors import KNeighborsTransformer
    >>> from sklearn.pipeline import make_pipeline
    >>> from sklearn.datasets import make_regression
    >>> cache_path = tempfile.gettempdir()  # we use a temporary folder here
    >>> X, _ = make_regression(n_samples=50, n_features=25, random_state=0)
    >>> estimator = make_pipeline(
    ...     KNeighborsTransformer(mode='distance'),
    ...     Isomap(n_components=3, metric='precomputed'),
    ...     memory=cache_path)
    >>> X_embedded = estimator.fit_transform(X)
    >>> X_embedded.shape
    (50, 3)

Second, precomputing the graph can give finer control over the nearest
neighbors estimation, for instance enabling multiprocessing through the
parameter `n_jobs`, which might not be available in all estimators.

Finally, the precomputation can be performed by custom estimators to use
different implementations, such as approximate nearest neighbors methods, or
implementations with special data types. The precomputed neighbors
:term:`sparse graph` needs to be formatted as in
:func:`~sklearn.neighbors.radius_neighbors_graph` output:

* a CSR matrix (although COO, CSC or LIL will be accepted).
* only explicitly store nearest neighborhoods of each sample with respect to the
  training data.
  This should include those at 0 distance from a query point,
  including the matrix diagonal when computing the nearest neighborhoods
  between the training data and itself.
* each row's `data` should store the distances in increasing order (optional;
  unsorted data will be stable-sorted, adding a computational overhead).
* all values in `data` should be non-negative.
* there should be no duplicate `indices` in any row
  (see https://github.com/scipy/scipy/issues/5807).
* if the algorithm being passed the precomputed matrix uses k nearest neighbors
  (as opposed to radius neighborhood), at least k neighbors must be stored in
  each row (or k+1, as explained in the following note).

.. note::
  When a specific number of neighbors is queried (using
  :class:`KNeighborsTransformer`), the definition of `n_neighbors` is ambiguous
  since it can either include each training point as its own neighbor, or
  exclude them. Neither choice is perfect, since including them leads to a
  different number of non-self neighbors during training and testing, while
  excluding them leads to a difference between `fit(X).transform(X)` and
  `fit_transform(X)`, which is against the scikit-learn API.
  In :class:`KNeighborsTransformer` we use the definition which includes each
  training point as its own neighbor in the count of `n_neighbors`. However,
  for compatibility reasons with other estimators which use the other
  definition, one extra neighbor will be computed when `mode == 'distance'`.
  To maximise compatibility with all estimators, a safe choice is to always
  include one extra neighbor in a custom nearest neighbors estimator, since
  unnecessary neighbors will be filtered by following estimators.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_approximate_nearest_neighbors.py`:
  an example of pipelining :class:`KNeighborsTransformer` and
  :class:`~sklearn.manifold.TSNE`.
Also proposes two custom nearest neighbors\n  estimators based on external packages.\n\n* :ref:`sphx_glr_auto_examples_neighbors_plot_caching_nearest_neighbors.py`:\n  an example of pipelining :class:`KNeighborsTransformer` and\n  :class:`KNeighborsClassifier` to enable caching of the neighbors graph\n  during a hyper-parameter grid-search.\n\n.. _nca:\n\nNeighborhood Components Analysis\n================================\n\n.. sectionauthor:: William de Vazelhes <william.de-vazelhes@inria.fr>\n\nNeighborhood Components Analysis (NCA, :class:`NeighborhoodComponentsAnalysis`)\nis a distance metric learning algorithm which aims to improve the accuracy of\nnearest neighbors classification compared to the standard Euclidean distance.\nThe algorithm directly maximizes a stochastic variant of the leave-one-out\nk-nearest neighbors (KNN) score on the training set. It can also learn a\nlow-dimensional linear projection of data that can be used for data\nvisualization and fast classification.\n\n.. |nca_illustration_1| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_nca_illustration_001.png\n   :target: ..\/auto_examples\/neighbors\/plot_nca_illustration.html\n   :scale: 50\n\n.. |nca_illustration_2| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_nca_illustration_002.png\n   :target: ..\/auto_examples\/neighbors\/plot_nca_illustration.html\n   :scale: 50\n\n.. centered:: |nca_illustration_1| |nca_illustration_2|\n\nIn the above illustrating figure, we consider some points from a randomly\ngenerated dataset. We focus on the stochastic KNN classification of point no.\n3. The thickness of a link between sample 3 and another point is proportional\nto their distance, and can be seen as the relative weight (or probability) that\na stochastic nearest neighbor prediction rule would assign to this point. In\nthe original space, sample 3 has many stochastic neighbors from various\nclasses, so the right class is not very likely. 
However, in the projected space\nlearned by NCA, the only stochastic neighbors with non-negligible weight are\nfrom the same class as sample 3, guaranteeing that the latter will be well\nclassified. See the :ref:`mathematical formulation <nca_mathematical_formulation>`\nfor more details.\n\n\nClassification\n--------------\n\nCombined with a nearest neighbors classifier (:class:`KNeighborsClassifier`),\nNCA is attractive for classification because it can naturally handle\nmulti-class problems without any increase in the model size, and does not\nintroduce additional parameters that require fine-tuning by the user.\n\nNCA classification has been shown to work well in practice for data sets of\nvarying size and difficulty. In contrast to related methods such as Linear\nDiscriminant Analysis, NCA does not make any assumptions about the class\ndistributions. The nearest neighbor classification can naturally produce highly\nirregular decision boundaries.\n\nTo use this model for classification, one needs to combine a\n:class:`NeighborhoodComponentsAnalysis` instance that learns the optimal\ntransformation with a :class:`KNeighborsClassifier` instance that performs the\nclassification in the projected space. Here is an example using the two\nclasses:\n\n    >>> from sklearn.neighbors import (NeighborhoodComponentsAnalysis,\n    ... KNeighborsClassifier)\n    >>> from sklearn.datasets import load_iris\n    >>> from sklearn.model_selection import train_test_split\n    >>> from sklearn.pipeline import Pipeline\n    >>> X, y = load_iris(return_X_y=True)\n    >>> X_train, X_test, y_train, y_test = train_test_split(X, y,\n    ... stratify=y, test_size=0.7, random_state=42)\n    >>> nca = NeighborhoodComponentsAnalysis(random_state=42)\n    >>> knn = KNeighborsClassifier(n_neighbors=3)\n    >>> nca_pipe = Pipeline([('nca', nca), ('knn', knn)])\n    >>> nca_pipe.fit(X_train, y_train)\n    Pipeline(...)\n    >>> print(nca_pipe.score(X_test, y_test))\n    0.96190476...\n\n.. 
|nca_classification_1| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_nca_classification_001.png\n   :target: ..\/auto_examples\/neighbors\/plot_nca_classification.html\n   :scale: 50\n\n.. |nca_classification_2| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_nca_classification_002.png\n   :target: ..\/auto_examples\/neighbors\/plot_nca_classification.html\n   :scale: 50\n\n.. centered:: |nca_classification_1| |nca_classification_2|\n\nThe plot shows decision boundaries for Nearest Neighbor Classification and\nNeighborhood Components Analysis classification on the iris dataset, when\ntraining and scoring on only two features, for visualisation purposes.\n\n.. _nca_dim_reduction:\n\nDimensionality reduction\n------------------------\n\nNCA can be used to perform supervised dimensionality reduction. The input data\nare projected onto a linear subspace consisting of the directions which\nminimize the NCA objective. The desired dimensionality can be set using the\nparameter ``n_components``. For instance, the following figure shows a\ncomparison of dimensionality reduction with Principal Component Analysis\n(:class:`~sklearn.decomposition.PCA`), Linear Discriminant Analysis\n(:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis`) and\nNeighborhood Component Analysis (:class:`NeighborhoodComponentsAnalysis`) on\nthe Digits dataset, a dataset with size :math:`n_{samples} = 1797` and\n:math:`n_{features} = 64`. The data set is split into a training and a test set\nof equal size, then standardized. For evaluation the 3-nearest neighbor\nclassification accuracy is computed on the 2-dimensional projected points found\nby each method. Each data sample belongs to one of 10 classes.\n\n.. |nca_dim_reduction_1| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_nca_dim_reduction_001.png\n   :target: ..\/auto_examples\/neighbors\/plot_nca_dim_reduction.html\n   :width: 32%\n\n.. 
|nca_dim_reduction_2| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_nca_dim_reduction_002.png\n   :target: ..\/auto_examples\/neighbors\/plot_nca_dim_reduction.html\n   :width: 32%\n\n.. |nca_dim_reduction_3| image:: ..\/auto_examples\/neighbors\/images\/sphx_glr_plot_nca_dim_reduction_003.png\n   :target: ..\/auto_examples\/neighbors\/plot_nca_dim_reduction.html\n   :width: 32%\n\n.. centered:: |nca_dim_reduction_1| |nca_dim_reduction_2| |nca_dim_reduction_3|\n\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_neighbors_plot_nca_classification.py`\n* :ref:`sphx_glr_auto_examples_neighbors_plot_nca_dim_reduction.py`\n* :ref:`sphx_glr_auto_examples_manifold_plot_lle_digits.py`\n\n.. _nca_mathematical_formulation:\n\nMathematical formulation\n------------------------\n\nThe goal of NCA is to learn an optimal linear transformation matrix of size\n``(n_components, n_features)``, which maximises the sum over all samples\n:math:`i` of the probability :math:`p_i` that :math:`i` is correctly\nclassified, i.e.:\n\n.. math::\n\n  \\underset{L}{\\arg\\max} \\sum\\limits_{i=0}^{N - 1} p_{i}\n\nwith :math:`N` = ``n_samples`` and :math:`p_i` the probability of sample\n:math:`i` being correctly classified according to a stochastic nearest\nneighbors rule in the learned embedded space:\n\n.. math::\n\n  p_{i}=\\sum\\limits_{j \\in C_i}{p_{i j}}\n\nwhere :math:`C_i` is the set of points in the same class as sample :math:`i`,\nand :math:`p_{i j}` is the softmax over Euclidean distances in the embedded\nspace:\n\n.. math::\n\n  p_{i j} = \\frac{\\exp(-||L x_i - L x_j||^2)}{\\sum\\limits_{k \\ne\n            i} {\\exp{-(||L x_i - L x_k||^2)}}} , \\quad p_{i i} = 0\n\n.. dropdown:: Mahalanobis distance\n\n  NCA can be seen as learning a (squared) Mahalanobis distance metric:\n\n  .. 
.. dropdown:: Mahalanobis distance

  NCA can be seen as learning a (squared) Mahalanobis distance metric:

  .. math::

      ||L(x_i - x_j)||^2 = (x_i - x_j)^T M (x_i - x_j),

  where :math:`M = L^T L` is a symmetric positive semi-definite matrix of size
  ``(n_features, n_features)``.


Implementation
--------------

This implementation follows what is explained in the original paper [1]_. For
the optimisation method, it currently uses scipy's L-BFGS-B with a full
gradient computation at each iteration, which avoids having to tune a
learning rate and provides stable learning.

See the examples below and the docstring of
:meth:`NeighborhoodComponentsAnalysis.fit` for further information.

Complexity
----------

Training
^^^^^^^^
NCA stores a matrix of pairwise distances, taking ``n_samples ** 2`` memory.
Time complexity depends on the number of iterations done by the optimisation
algorithm. However, one can set the maximum number of iterations with the
argument ``max_iter``. For each iteration, time complexity is
``O(n_components x n_samples x min(n_samples, n_features))``.


Transform
^^^^^^^^^
Here the ``transform`` operation returns :math:`LX^T`, therefore its time
complexity equals ``n_components * n_features * n_samples_test``. There is no
added space complexity in the operation.
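The shapes involved in ``transform`` follow directly from the size of
:math:`L`. A short sketch on the iris dataset (parameter values here are just
an example):

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import NeighborhoodComponentsAnalysis

X, y = load_iris(return_X_y=True)  # X has shape (150, 4)
nca = NeighborhoodComponentsAnalysis(n_components=2, random_state=42)
X_embedded = nca.fit(X, y).transform(X)  # applies the learned L
print(nca.components_.shape)  # L: (n_components, n_features) -> (2, 4)
print(X_embedded.shape)       # (n_samples, n_components) -> (150, 2)
```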
.. rubric:: References

.. [1] `"Neighbourhood Components Analysis"
  <http://www.cs.nyu.edu/~roweis/papers/ncanips.pdf>`_,
  J. Goldberger, S. Roweis, G. Hinton, R. Salakhutdinov, Advances in
  Neural Information Processing Systems, Vol. 17, May 2005, pp. 513-520.

* `Wikipedia entry on Neighborhood Components Analysis
  <https://en.wikipedia.org/wiki/Neighbourhood_components_analysis>`_

.. _neighbors:

Nearest Neighbors
=================

.. sectionauthor:: Jake Vanderplas <vanderplas@astro.washington.edu>

.. currentmodule:: sklearn.neighbors

:mod:`sklearn.neighbors` provides functionality for unsupervised and
supervised neighbors-based learning methods. Unsupervised nearest neighbors
is the foundation of many other learning methods, notably manifold learning
and spectral clustering. Supervised neighbors-based learning comes in two
flavors: *classification* for data with discrete labels, and *regression*
for data with continuous labels.

The principle behind nearest neighbor methods is to find a predefined number
of training samples closest in distance to the new point, and predict the
label from these. The number of samples can be a user-defined constant
(k-nearest neighbor learning), or vary based on the local density of points
(radius-based neighbor learning). The distance can, in general, be any metric
measure: standard Euclidean distance is the most common choice.
Neighbors-based methods are known as *non-generalizing* machine learning
methods, since they simply "remember" all of their training data (possibly
transformed into a fast indexing structure such as a :ref:`Ball Tree
<ball_tree>` or :ref:`KD Tree <kd_tree>`).

Despite its simplicity, nearest neighbors has been successful in a large
number of classification and regression problems, including handwritten
digits and satellite image scenes. Being a non-parametric method, it is often
successful in classification situations where the decision boundary is very
irregular.

The classes in :mod:`sklearn.neighbors` can handle either NumPy arrays or
`scipy.sparse` matrices as input. For dense matrices, a large number of
possible distance metrics are supported. For sparse matrices, arbitrary
Minkowski metrics are supported for searches.

There are many learning routines which rely on nearest neighbors at their
core. One example is :ref:`kernel density estimation <kernel_density>`,
discussed in the :ref:`density estimation <density_estimation>` section.
.. _unsupervised_neighbors:

Unsupervised Nearest Neighbors
==============================

:class:`NearestNeighbors` implements unsupervised nearest neighbors learning.
It acts as a uniform interface to three different nearest neighbors
algorithms: :class:`BallTree`, :class:`KDTree`, and a brute-force algorithm
based on routines in :mod:`sklearn.metrics.pairwise`. The choice of neighbors
search algorithm is controlled through the keyword ``'algorithm'``, which
must be one of ``['auto', 'ball_tree', 'kd_tree', 'brute']``. When the
default value ``'auto'`` is passed, the algorithm attempts to determine the
best approach from the training data. For a discussion of the strengths and
weaknesses of each option, see `Nearest Neighbor Algorithms`_.

.. warning::

    Regarding the Nearest Neighbors algorithms, if two neighbors
    :math:`k+1` and :math:`k` have identical distances
    but different labels, the result will depend on the ordering of the
    training data.

Finding the Nearest Neighbors
-----------------------------

For the simple task of finding the nearest neighbors between two sets of
data, the unsupervised algorithms within :mod:`sklearn.neighbors` can be
used::

    >>> from sklearn.neighbors import NearestNeighbors
    >>> import numpy as np
    >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
    >>> nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)
    >>> distances, indices = nbrs.kneighbors(X)
    >>> indices
    array([[0, 1],
           [1, 0],
           [2, 1],
           [3, 4],
           [4, 3],
           [5, 4]])
    >>> distances
    array([[0.        , 1.        ],
           [0.        , 1.        ],
           [0.        , 1.41421356],
           [0.        , 1.        ],
           [0.        , 1.        ],
           [0.        , 1.41421356]])

Because the query set matches the training set, the nearest neighbor of each
point is the point itself, at a distance of zero.
It is also possible to efficiently produce a sparse graph showing the
connections between neighboring points::

    >>> nbrs.kneighbors_graph(X).toarray()
    array([[1., 1., 0., 0., 0., 0.],
           [1., 1., 0., 0., 0., 0.],
           [0., 1., 1., 0., 0., 0.],
           [0., 0., 0., 1., 1., 0.],
           [0., 0., 0., 1., 1., 0.],
           [0., 0., 0., 0., 1., 1.]])

The dataset is structured such that points nearby in index order are nearby
in parameter space, leading to an approximately block-diagonal matrix of
K-nearest neighbors. Such a sparse graph is useful in a variety of
circumstances which make use of spatial relationships between points for
unsupervised learning: in particular, see :class:`~sklearn.manifold.Isomap`,
:class:`~sklearn.manifold.LocallyLinearEmbedding`, and
:class:`~sklearn.cluster.SpectralClustering`.

KDTree and BallTree Classes
---------------------------

Alternatively, one can use the :class:`KDTree` or :class:`BallTree` classes
directly to find nearest neighbors. This is the functionality wrapped by the
:class:`NearestNeighbors` class used above. The Ball Tree and KD Tree have
the same interface; we'll show an example of using the KD Tree here::

    >>> from sklearn.neighbors import KDTree
    >>> import numpy as np
    >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
    >>> kdt = KDTree(X, leaf_size=30, metric='euclidean')
    >>> kdt.query(X, k=2, return_distance=False)
    array([[0, 1],
           [1, 0],
           [2, 1],
           [3, 4],
           [4, 3],
           [5, 4]])

Refer to the :class:`KDTree` and :class:`BallTree` class documentation for
more information on the options available for nearest neighbors searches,
including the specification of query strategies, distance metrics, etc.
For a list of valid metrics use ``KDTree.valid_metrics`` and
``BallTree.valid_metrics``::

    >>> from sklearn.neighbors import KDTree, BallTree
    >>> KDTree.valid_metrics
    ['euclidean', 'l2', 'minkowski', 'p', 'manhattan', 'cityblock', 'l1', 'chebyshev', 'infinity']
    >>> BallTree.valid_metrics
    ['euclidean', 'l2', 'minkowski', 'p', 'manhattan', 'cityblock', 'l1', 'chebyshev', 'infinity', 'seuclidean', 'mahalanobis', 'hamming', 'canberra', 'braycurtis', 'jaccard', 'dice', 'rogerstanimoto', 'russellrao', 'sokalmichener', 'sokalsneath', 'haversine', 'pyfunc']

.. _classification:

Nearest Neighbors Classification
================================

Neighbors-based classification is a type of *instance-based learning* or
*non-generalizing learning*: it does not attempt to construct a general
internal model, but simply stores instances of the training data.
Classification is computed from a simple majority vote of the nearest
neighbors of each point: a query point is assigned the data class which has
the most representatives within the nearest neighbors of the point.

scikit-learn implements two different nearest neighbors classifiers:
:class:`KNeighborsClassifier` implements learning based on the :math:`k`
nearest neighbors of each query point, where :math:`k` is an integer value
specified by the user. :class:`RadiusNeighborsClassifier` implements learning
based on the number of neighbors within a fixed radius :math:`r` of each
training point, where :math:`r` is a floating-point value specified by the
user.

The :math:`k`-neighbors classification in :class:`KNeighborsClassifier` is
the most commonly used technique. The optimal choice of the value :math:`k`
is highly data-dependent: in general a larger :math:`k` suppresses the
effects of noise, but makes the classification boundaries less distinct.

In cases where the data is not uniformly sampled, radius-based neighbors
classification in :class:`RadiusNeighborsClassifier` can be a better choice.
The user specifies a fixed radius :math:`r`, such that points in sparser
neighborhoods use fewer nearest neighbors for the classification. For
high-dimensional parameter spaces, this method becomes less effective due to
the so-called "curse of dimensionality".
The basic nearest neighbors classification uses uniform weights: that is, the
value assigned to a query point is computed from a simple majority vote of
the nearest neighbors. Under some circumstances, it is better to weight the
neighbors such that nearer neighbors contribute more to the fit. This can be
accomplished through the ``weights`` keyword. The default value,
``weights = 'uniform'``, assigns uniform weights to each neighbor.
``weights = 'distance'`` assigns weights proportional to the inverse of the
distance from the query point. Alternatively, a user-defined function of the
distance can be supplied to compute the weights.

.. |classification_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_classification_001.png
   :target: ../auto_examples/neighbors/plot_classification.html
   :scale: 75

.. centered:: |classification_1|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_classification.py`: an example of
  classification using nearest neighbors.
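The ``weights`` options above can be compared with a few lines. A minimal
sketch on a synthetic dataset (the dataset and parameter values are just an
example, not taken from the gallery):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for weights in ("uniform", "distance"):
    # 'distance' weighting lets closer neighbors dominate the vote.
    clf = KNeighborsClassifier(n_neighbors=5, weights=weights)
    clf.fit(X_train, y_train)
    print(weights, clf.score(X_test, y_test))
```

Which option performs better is data-dependent; on noisy data, distance
weighting tends to help when the local density of points varies.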
.. _regression:

Nearest Neighbors Regression
============================

Neighbors-based regression can be used in cases where the data labels are
continuous rather than discrete variables. The label assigned to a query
point is computed based on the mean of the labels of its nearest neighbors.

scikit-learn implements two different neighbors regressors:
:class:`KNeighborsRegressor` implements learning based on the :math:`k`
nearest neighbors of each query point, where :math:`k` is an integer value
specified by the user. :class:`RadiusNeighborsRegressor` implements learning
based on the neighbors within a fixed radius :math:`r` of the query point,
where :math:`r` is a floating-point value specified by the user.

The basic nearest neighbors regression uses uniform weights: that is, each
point in the local neighborhood contributes uniformly to the prediction for a
query point. Under some circumstances, it can be advantageous to weight
points such that nearby points contribute more to the regression than faraway
points. This can be accomplished through the ``weights`` keyword. The default
value, ``weights = 'uniform'``, assigns equal weights to all points.
``weights = 'distance'`` assigns weights proportional to the inverse of the
distance from the query point. Alternatively, a user-defined function of the
distance can be supplied, which will be used to compute the weights.

.. figure:: ../auto_examples/neighbors/images/sphx_glr_plot_regression_001.png
   :target: ../auto_examples/neighbors/plot_regression.html
   :align: center
   :scale: 75

The use of multi-output nearest neighbors for regression is demonstrated in
:ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`.
In this example, the inputs X are the pixels of the upper half of faces and
the outputs Y are the pixels of the lower half of those faces.

.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_multioutput_face_completion_001.png
   :target: ../auto_examples/miscellaneous/plot_multioutput_face_completion.html
   :scale: 75
   :align: center

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_regression.py`: an example of
  regression using nearest neighbors.
* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`:
  an example of multi-output regression using nearest neighbors.
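A minimal regression sketch, fitting a noiseless sine curve (the toy data
below is an assumption for illustration, not the gallery example):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(40, 1), axis=0)  # 40 points on [0, 5)
y = np.sin(X).ravel()

# Inverse-distance weighting lets nearby samples dominate the local mean.
reg = KNeighborsRegressor(n_neighbors=5, weights="distance")
reg.fit(X, y)
print(reg.predict([[2.5]]))
```

Note that with ``weights='distance'`` a query that coincides exactly with a
training point reproduces that point's label, since its weight is infinite
relative to the others.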
Nearest Neighbor Algorithms
===========================

.. _brute_force:

Brute Force
-----------

Fast computation of nearest neighbors is an active area of research in
machine learning. The most naive neighbor search implementation involves the
brute-force computation of distances between all pairs of points in the
dataset: for :math:`N` samples in :math:`D` dimensions, this approach scales
as :math:`O[D N^2]`. Efficient brute-force neighbors searches can be very
competitive for small data samples. However, as the number of samples
:math:`N` grows, the brute-force approach quickly becomes infeasible. In the
classes within :mod:`sklearn.neighbors`, brute-force neighbors searches are
specified using the keyword ``algorithm = 'brute'``, and are computed using
the routines available in :mod:`sklearn.metrics.pairwise`.

.. _kd_tree:

K-D Tree
--------

To address the computational inefficiencies of the brute-force approach, a
variety of tree-based data structures have been invented. In general, these
structures attempt to reduce the required number of distance calculations by
efficiently encoding aggregate distance information for the sample. The basic
idea is that if point :math:`A` is very distant from point :math:`B`, and
point :math:`B` is very close to point :math:`C`, then we know that points
:math:`A` and :math:`C` are very distant, *without having to explicitly
calculate their distance*. In this way, the computational cost of a nearest
neighbors search can be reduced to :math:`O[D N \log(N)]` or better. This is
a significant improvement over brute force for large :math:`N`.

An early approach to taking advantage of this aggregate information was the
*KD tree* data structure (short for *K-dimensional tree*), which generalizes
two-dimensional *Quad-trees* and 3-dimensional *Oct-trees* to an arbitrary
number of dimensions. The KD tree is a binary tree structure which
recursively partitions the parameter space along the data axes, dividing it
into nested orthotropic regions into which data points are filed. The
construction of a KD tree is very fast: because partitioning is performed
only along the data axes, no :math:`D`-dimensional distances need to be
computed. Once constructed, the nearest neighbor of a query point can be
determined with only :math:`O[\log(N)]` distance computations.
Though the KD tree approach is very fast for low-dimensional
(:math:`D < 20`) neighbors searches, it becomes inefficient as :math:`D`
grows very large: this is one manifestation of the so-called "curse of
dimensionality". In scikit-learn, KD tree neighbors searches are specified
using the keyword ``algorithm = 'kd_tree'``, and are computed using the class
:class:`KDTree`.

.. dropdown:: References

  * `"Multidimensional binary search trees used for associative searching"
    <https://dl.acm.org/citation.cfm?doid=361002.361007>`_,
    Bentley, J.L., Communications of the ACM (1975)

.. _ball_tree:

Ball Tree
---------

To address the inefficiencies of KD Trees in higher dimensions, the *ball
tree* data structure was developed. Where KD trees partition data along
Cartesian axes, ball trees partition data in a series of nesting
hyper-spheres. This makes tree construction more costly than that of the KD
tree, but results in a data structure which can be very efficient on highly
structured data, even in very high dimensions.

A ball tree recursively divides the data into nodes defined by a centroid
:math:`C` and radius :math:`r`, such that each point in the node lies within
the hyper-sphere defined by :math:`r` and :math:`C`. The number of candidate
points for a neighbor search is reduced through use of the *triangle
inequality*:

.. math::   |x+y| \leq |x| + |y|

With this setup, a single distance calculation between a test point and the
centroid is sufficient to determine a lower and upper bound on the distance
to all points within the node. Because of the spherical geometry of the ball
tree nodes, it can out-perform a *KD-tree* in high dimensions, though the
actual performance is highly dependent on the structure of the training data.
In scikit-learn, ball-tree-based neighbors searches are specified using the
keyword ``algorithm = 'ball_tree'``, and are computed using the class
:class:`BallTree`. Alternatively, the user can work with the
:class:`BallTree` class directly.
.. dropdown:: References

  * `"Five Balltree Construction Algorithms"
    <https://citeseerx.ist.psu.edu/doc_view/pid/17ac002939f8e950ffb32ec4dc8e86bdd8cb5ff1>`_,
    Omohundro, S.M., International Computer Science Institute
    Technical Report (1989)

.. dropdown:: Choice of Nearest Neighbors Algorithm

  The optimal algorithm for a given dataset is a complicated choice, and
  depends on a number of factors:

  * number of samples :math:`N` (i.e. ``n_samples``) and dimensionality
    :math:`D` (i.e. ``n_features``).

    * *Brute force* query time grows as :math:`O[D N]`
    * *Ball tree* query time grows as approximately :math:`O[D \log(N)]`
    * *KD tree* query time changes with :math:`D` in a way that is difficult
      to precisely characterise. For small :math:`D` (less than 20 or so)
      the cost is approximately :math:`O[D\log(N)]`, and the KD tree
      query can be very efficient. For larger :math:`D`, the cost increases
      to nearly :math:`O[DN]`, and the overhead due to the tree
      structure can lead to queries which are slower than brute force.

    For small data sets (:math:`N` less than 30 or so), :math:`\log(N)` is
    comparable to :math:`N`, and brute-force algorithms can be more efficient
    than a tree-based approach. Both :class:`KDTree` and :class:`BallTree`
    address this through providing a ``leaf_size`` parameter: this controls
    the number of samples at which a query switches to brute force. This
    allows both algorithms to approach the efficiency of a brute-force
    computation for small :math:`N`.
  * data structure: *intrinsic dimensionality* of the data and/or *sparsity*
    of the data. Intrinsic dimensionality refers to the dimension
    :math:`d \le D` of a manifold on which the data lies, which can be
    linearly or non-linearly embedded in the parameter space. Sparsity refers
    to the degree to which the data fills the parameter space (this is to be
    distinguished from the concept as used in "sparse" matrices. The data
    matrix may have no zero entries, but the **structure** can still be
    "sparse" in this sense).

    * *Brute force* query time is unchanged by data structure.
    * *Ball tree* and *KD tree* query times can be greatly influenced by
      data structure. In general, sparser data with a smaller intrinsic
      dimensionality leads to faster query times. Because the KD tree
      internal representation is aligned with the parameter axes, it will
      not generally show as much improvement as ball tree for arbitrarily
      structured data.

    Datasets used in machine learning tend to be very structured, and are
    very well-suited for tree-based queries.

  * number of neighbors :math:`k` requested for a query point.

    * *Brute force* query time is largely unaffected by the value of
      :math:`k`
    * *Ball tree* and *KD tree* query time will become slower as :math:`k`
      increases. This is due to two effects: first, a larger :math:`k` leads
      to the necessity to search a larger portion of the parameter space.
      Second, using :math:`k > 1` requires internal queueing of results as
      the tree is traversed.

    As :math:`k` becomes large compared to :math:`N`, the ability to prune
    branches in a tree-based query is reduced. In this situation, brute-force
    queries can be more efficient.

  * number of query points. Both the ball tree and the KD tree require a
    construction phase. The cost of this construction becomes negligible
    when amortized over many queries. If only a small number of queries will
    be performed, however, the construction can make up a significant
    fraction of the total cost. If very few query points will be required,
    brute force is better than a tree-based method.

  Currently, ``algorithm = 'auto'`` selects ``'brute'`` if any of the
  following conditions are verified:

  * input data is sparse
  * ``metric = 'precomputed'``
  * :math:`D > 15`
  * :math:`k >= N/2`
  * ``effective_metric_`` isn't in the ``VALID_METRICS`` list for either
    ``'kd_tree'`` or ``'ball_tree'``

  Otherwise, it selects the first out of ``'kd_tree'`` and ``'ball_tree'``
  that has ``effective_metric_`` in its ``VALID_METRICS`` list. This
  heuristic is based on the following assumptions:

  * the number of query points is at least the same order as the number of
    training points
  * ``leaf_size`` is close to its default value of ``30``
  * when :math:`D > 15`, the intrinsic dimensionality of the data is
    generally too high for tree-based methods
.. dropdown:: Effect of ``leaf_size``

  As noted above, for small sample sizes a brute force search can be more
  efficient than a tree-based query. This fact is accounted for in the ball
  tree and KD tree by internally switching to brute-force searches within
  leaf nodes. The level of this switch can be specified with the parameter
  ``leaf_size``. This parameter choice has many effects:

  **construction time**
    A larger ``leaf_size`` leads to a faster tree construction time, because
    fewer nodes need to be created.

  **query time**
    Both a large or small ``leaf_size`` can lead to suboptimal query cost.
    For ``leaf_size`` approaching 1, the overhead involved in traversing
    nodes can significantly slow query times. For ``leaf_size`` approaching
    the size of the training set, queries become essentially brute force. A
    good compromise between these is ``leaf_size = 30``, the default value of
    the parameter.

  **memory**
    As ``leaf_size`` increases, the memory required to store a tree structure
    decreases. This is especially important in the case of ball tree, which
    stores a :math:`D`-dimensional centroid for each node. The required
    storage space for :class:`BallTree` is approximately ``1 / leaf_size``
    times the size of the training set.

  ``leaf_size`` is not referenced for brute force queries.
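Whichever of ``'brute'``, ``'kd_tree'`` or ``'ball_tree'`` is selected (and
whatever ``leaf_size`` is used), the returned neighbors are identical; only
the construction and query costs differ. A minimal sketch, with an assumed
random low-dimensional dataset:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
X = rng.rand(1000, 3)  # low-dimensional data: a good fit for a KD tree

results = {}
for algorithm in ("brute", "kd_tree", "ball_tree"):
    nn = NearestNeighbors(n_neighbors=5, algorithm=algorithm, leaf_size=30)
    nn.fit(X)
    _, ind = nn.kneighbors(X[:10])  # query the first ten training points
    results[algorithm] = ind

# All three algorithms agree on the neighbor indices (no tied distances here).
```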
.. dropdown:: Valid Metrics for Nearest Neighbor Algorithms

  For a list of available metrics, see the documentation of the
  :class:`~sklearn.metrics.DistanceMetric` class and the metrics listed in
  ``sklearn.metrics.pairwise.PAIRWISE_DISTANCE_FUNCTIONS``. Note that the
  "cosine" metric uses :func:`~sklearn.metrics.pairwise.cosine_distances`.

  A list of valid metrics for any of the above algorithms can be obtained by
  using their ``valid_metric`` attribute. For example, valid metrics for
  ``KDTree`` can be generated by::

      >>> from sklearn.neighbors import KDTree
      >>> print(sorted(KDTree.valid_metrics))
      ['chebyshev', 'cityblock', 'euclidean', 'infinity', 'l1', 'l2', 'manhattan', 'minkowski', 'p']

.. _nearest_centroid_classifier:

Nearest Centroid Classifier
===========================

The :class:`NearestCentroid` classifier is a simple algorithm that represents
each class by the centroid of its members. In effect, this makes it similar
to the label-updating phase of the :class:`~sklearn.cluster.KMeans`
algorithm. It also has no parameters to choose, making it a good baseline
classifier. It does, however, suffer on non-convex classes, as well as when
classes have drastically different variances, as equal variance in all
dimensions is assumed. See Linear Discriminant Analysis
(:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis`) and
Quadratic Discriminant Analysis
(:class:`~sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`) for
more complex methods that do not make this assumption. Usage of the default
:class:`NearestCentroid` is simple::

    >>> from sklearn.neighbors import NearestCentroid
    >>> import numpy as np
    >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
    >>> y = np.array([1, 1, 1, 2, 2, 2])
    >>> clf = NearestCentroid()
    >>> clf.fit(X, y)
    NearestCentroid()
    >>> print(clf.predict([[-0.8, -1]]))
    [1]
Nearest Shrunken Centroid
-------------------------

The :class:`NearestCentroid` classifier has a ``shrink_threshold`` parameter,
which implements the nearest shrunken centroid classifier. In effect, the
value of each feature for each centroid is divided by the within-class
variance of that feature. The feature values are then reduced by
``shrink_threshold``. Most notably, if a particular feature value crosses
zero, it is set to zero. In effect, this removes the feature from affecting
the classification. This is useful, for example, for removing noisy features.

In the example below, using a small shrink threshold increases the accuracy
of the model from 0.81 to 0.82.

.. |nearest_centroid_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nearest_centroid_001.png
   :target: ../auto_examples/neighbors/plot_nearest_centroid.html
   :scale: 50

.. |nearest_centroid_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nearest_centroid_002.png
   :target: ../auto_examples/neighbors/plot_nearest_centroid.html
   :scale: 50

.. centered:: |nearest_centroid_1| |nearest_centroid_2|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_nearest_centroid.py`: an example
  of classification using nearest centroid with different shrink thresholds.
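A short sketch of ``shrink_threshold`` on synthetic data with many noisy
features (the dataset and the threshold value below are assumptions chosen
for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestCentroid

# Two informative features among twenty: shrinking can zero out the noisy
# features' contribution to the per-class centroids.
X, y = make_classification(n_samples=100, n_features=20, n_informative=2,
                           random_state=0)
plain = NearestCentroid().fit(X, y)
shrunken = NearestCentroid(shrink_threshold=0.2).fit(X, y)
print(plain.score(X, y), shrunken.score(X, y))
```

Whether shrinking helps depends on how noisy the features actually are; the
threshold is typically tuned by cross-validation.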
.. _neighbors_transformer:

Nearest Neighbors Transformer
=============================

Many scikit-learn estimators rely on nearest neighbors: several classifiers
and regressors such as :class:`KNeighborsClassifier` and
:class:`KNeighborsRegressor`, but also some clustering methods such as
:class:`~sklearn.cluster.DBSCAN` and
:class:`~sklearn.cluster.SpectralClustering`, and some manifold embeddings
such as :class:`~sklearn.manifold.TSNE` and :class:`~sklearn.manifold.Isomap`.

All these estimators can compute internally the nearest neighbors, but most
of them also accept a precomputed nearest neighbors :term:`sparse graph`, as
given by :func:`~sklearn.neighbors.kneighbors_graph` and
:func:`~sklearn.neighbors.radius_neighbors_graph`. With
``mode='connectivity'``, these functions return a binary adjacency sparse
graph as required, for instance, in
:class:`~sklearn.cluster.SpectralClustering`. Whereas with
``mode='distance'``, they return a distance sparse graph as required, for
instance, in :class:`~sklearn.cluster.DBSCAN`.

To include these functions in a scikit-learn pipeline, one can also use the
corresponding classes :class:`KNeighborsTransformer` and
:class:`RadiusNeighborsTransformer`. The benefits of this sparse graph API
are multiple.

First, the precomputed graph can be re-used multiple times, for instance
while varying a parameter of the estimator. This can be done manually by the
user, or using the caching properties of the scikit-learn pipeline::

    >>> import tempfile
    >>> from sklearn.manifold import Isomap
    >>> from sklearn.neighbors import KNeighborsTransformer
    >>> from sklearn.pipeline import make_pipeline
    >>> from sklearn.datasets import make_regression
    >>> cache_path = tempfile.gettempdir()  # we use a temporary folder here
    >>> X, _ = make_regression(n_samples=50, n_features=25, random_state=0)
    >>> estimator = make_pipeline(
    ...     KNeighborsTransformer(mode='distance'),
    ...     Isomap(n_components=3, metric='precomputed'),
    ...     memory=cache_path)
    >>> X_embedded = estimator.fit_transform(X)
    >>> X_embedded.shape
    (50, 3)

Second, precomputing the graph can give finer control on the nearest
neighbors estimation, for instance enabling multiprocessing through the
parameter ``n_jobs``, which might not be available in all estimators.

Finally, the precomputation can be performed by custom estimators to use
different implementations, such as approximate nearest neighbors methods, or
implementations with special data types.
The precomputed neighbors :term:`sparse graph` needs to be formatted as in
:func:`~sklearn.neighbors.radius_neighbors_graph` output:

* a CSR matrix (although COO, CSC or LIL will be accepted).
* only explicitly store nearest neighborhoods of each sample with respect to
  the training data. This should include those at 0 distance from a query
  point, including the matrix diagonal when computing the nearest
  neighborhoods between the training data and itself.
* each row's ``data`` should store the distances in increasing order
  (optional. Unsorted data will be stable-sorted, adding a computational
  overhead).
* all values in ``data`` should be non-negative.
* there should be no duplicate ``indices`` in any row
  (see https://github.com/scipy/scipy/issues/5807).
* if the algorithm being passed the precomputed matrix uses k nearest
  neighbors (as opposed to radius neighborhood), at least k neighbors must be
  stored in each row (or k+1, as explained in the following note).

.. note::

  When a specific number of neighbors is queried (using
  :class:`KNeighborsTransformer`), the definition of ``n_neighbors`` is
  ambiguous since it can either include each training point as its own
  neighbor, or exclude them. Neither choice is perfect, since including them
  leads to a different number of non-self neighbors during training and
  testing, while excluding them leads to a difference between
  ``fit(X).transform(X)`` and ``fit_transform(X)``, which is against the
  scikit-learn API. In :class:`KNeighborsTransformer` we use the definition
  which includes each training point as its own neighbor in the count of
  ``n_neighbors``. However, for compatibility reasons with other estimators
  which use the other definition, one extra neighbor will be computed when
  ``mode == 'distance'``. To maximise compatibility with all estimators, a
  safe choice is to always include one extra neighbor in a custom nearest
  neighbors estimator, since unnecessary neighbors will be filtered by
  following estimators.
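A minimal sketch of the distance-graph workflow described above, feeding a
precomputed radius graph to :class:`~sklearn.cluster.DBSCAN` (the dataset and
the ``radius``/``eps`` values are assumptions for illustration; ``eps`` must
not exceed the radius used to build the graph):

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.neighbors import radius_neighbors_graph

X, _ = make_blobs(n_samples=100, centers=3, random_state=0)

# Distance-mode sparse graph; include_self=True stores the zero-distance
# diagonal, as required by the formatting rules above.
graph = radius_neighbors_graph(X, radius=2.0, mode="distance",
                               include_self=True)

# The same graph can be reused while varying eps (as long as eps <= radius).
labels = DBSCAN(eps=1.5, min_samples=5, metric="precomputed").fit_predict(graph)
```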
.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_approximate_nearest_neighbors.py`:
  an example of pipelining :class:`KNeighborsTransformer` and
  :class:`~sklearn.manifold.TSNE`. Also proposes two custom nearest neighbors
  estimators based on external packages.
* :ref:`sphx_glr_auto_examples_neighbors_plot_caching_nearest_neighbors.py`:
  an example of pipelining :class:`KNeighborsTransformer` and
  :class:`KNeighborsClassifier` to enable caching of the neighbors graph
  during a hyper-parameter grid search.

.. _nca:

Neighborhood Components Analysis
================================

.. sectionauthor:: William de Vazelhes <william.de-vazelhes@inria.fr>

Neighborhood Components Analysis (NCA,
:class:`NeighborhoodComponentsAnalysis`) is a distance metric learning
algorithm which aims to improve the accuracy of nearest neighbors
classification compared to the standard Euclidean distance. The algorithm
directly maximizes a stochastic variant of the leave-one-out k-nearest
neighbors (KNN) score on the training set. It can also learn a
low-dimensional linear projection of data that can be used for data
visualization and fast classification.

.. |nca_illustration_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_illustration_001.png
   :target: ../auto_examples/neighbors/plot_nca_illustration.html
   :scale: 50

.. |nca_illustration_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_illustration_002.png
   :target: ../auto_examples/neighbors/plot_nca_illustration.html
   :scale: 50

.. centered:: |nca_illustration_1| |nca_illustration_2|

In the illustration above, we consider some points from a randomly generated
dataset. We focus on the stochastic KNN classification of point no. 3. The
thickness of a link between sample 3 and another point is proportional to
their distance, and can be seen as the relative weight (or probability) that
a stochastic nearest neighbor prediction rule would assign to this point. In
the original space, sample 3 has many stochastic neighbors from various
classes, so the right class is not very likely. However, in the projected
space learned by NCA, the only stochastic neighbors with non-negligible
weight are from the same class as sample 3, guaranteeing that the latter will
be well classified. See the :ref:`mathematical formulation
<nca_mathematical_formulation>` for more details.
as sample 3  guaranteeing that the latter will be well classified  See the  ref  mathematical formulation  nca mathematical formulation   for more details    Classification                 Combined with a nearest neighbors classifier   class  KNeighborsClassifier    NCA is attractive for classification because it can naturally handle multi class problems without any increase in the model size  and does not introduce additional parameters that require fine tuning by the user   NCA classification has been shown to work well in practice for data sets of varying size and difficulty  In contrast to related methods such as Linear Discriminant Analysis  NCA does not make any assumptions about the class distributions  The nearest neighbor classification can naturally produce highly irregular decision boundaries   To use this model for classification  one needs to combine a  class  NeighborhoodComponentsAnalysis  instance that learns the optimal transformation with a  class  KNeighborsClassifier  instance that performs the classification in the projected space  Here is an example using the two classes           from sklearn neighbors import  NeighborhoodComponentsAnalysis          KNeighborsClassifier          from sklearn datasets import load iris         from sklearn model selection import train test split         from sklearn pipeline import Pipeline         X  y   load iris return X y True          X train  X test  y train  y test   train test split X  y          stratify y  test size 0 7  random state 42          nca   NeighborhoodComponentsAnalysis random state 42          knn   KNeighborsClassifier n neighbors 3          nca pipe   Pipeline    nca   nca     knn   knn            nca pipe fit X train  y train      Pipeline              print nca pipe score X test  y test       0 96190476         nca classification 1  image      auto examples neighbors images sphx glr plot nca classification 001 png     target     auto examples neighbors plot nca classification html     
scale  50      nca classification 2  image      auto examples neighbors images sphx glr plot nca classification 002 png     target     auto examples neighbors plot nca classification html     scale  50     centered    nca classification 1   nca classification 2   The plot shows decision boundaries for Nearest Neighbor Classification and Neighborhood Components Analysis classification on the iris dataset  when training and scoring on only two features  for visualisation purposes       nca dim reduction   Dimensionality reduction                           NCA can be used to perform supervised dimensionality reduction  The input data are projected onto a linear subspace consisting of the directions which minimize the NCA objective  The desired dimensionality can be set using the parameter   n components    For instance  the following figure shows a comparison of dimensionality reduction with Principal Component Analysis   class   sklearn decomposition PCA    Linear Discriminant Analysis   class   sklearn discriminant analysis LinearDiscriminantAnalysis   and Neighborhood Component Analysis   class  NeighborhoodComponentsAnalysis   on the Digits dataset  a dataset with size  math  n  samples    1797  and  math  n  features    64   The data set is split into a training and a test set of equal size  then standardized  For evaluation the 3 nearest neighbor classification accuracy is computed on the 2 dimensional projected points found by each method  Each data sample belongs to one of 10 classes       nca dim reduction 1  image      auto examples neighbors images sphx glr plot nca dim reduction 001 png     target     auto examples neighbors plot nca dim reduction html     width  32       nca dim reduction 2  image      auto examples neighbors images sphx glr plot nca dim reduction 002 png     target     auto examples neighbors plot nca dim reduction html     width  32       nca dim reduction 3  image      auto examples neighbors images sphx glr plot nca dim reduction 003 
png     target     auto examples neighbors plot nca dim reduction html     width  32      centered    nca dim reduction 1   nca dim reduction 2   nca dim reduction 3       rubric   Examples     ref  sphx glr auto examples neighbors plot nca classification py     ref  sphx glr auto examples neighbors plot nca dim reduction py     ref  sphx glr auto examples manifold plot lle digits py       nca mathematical formulation   Mathematical formulation                           The goal of NCA is to learn an optimal linear transformation matrix of size    n components  n features     which maximises the sum over all samples  math  i  of the probability  math  p i  that  math  i  is correctly classified  i e       math       underset L   arg max   sum limits  i 0   N   1  p  i   with  math  N      n samples   and  math  p i  the probability of sample  math  i  being correctly classified according to a stochastic nearest neighbors rule in the learned embedded space      math      p  i   sum limits  j  in C i  p  i j    where  math  C i  is the set of points in the same class as sample  math  i   and  math  p  i j   is the softmax over Euclidean distances in the embedded space      math      p  i j     frac  exp    L x i   L x j   2    sum limits  k  ne             i    exp     L x i   L x k   2        quad p  i i    0     dropdown   Mahalanobis distance    NCA can be seen as learning a  squared  Mahalanobis distance metric        math             L x i   x j    2    x i   x j  TM x i   x j      where  math  M   L T L  is a symmetric positive semi definite matrix of size      n features  n features       Implementation                 This implementation follows what is explained in the original paper  1    For the optimisation method  it currently uses scipy s L BFGS B with a full gradient computation at each iteration  to avoid to tune the learning rate and provide stable learning   See the examples below and the docstring of  meth  NeighborhoodComponentsAnalysis fit  for 
further information   Complexity             Training          NCA stores a matrix of pairwise distances  taking   n samples    2   memory  Time complexity depends on the number of iterations done by the optimisation algorithm  However  one can set the maximum number of iterations with the argument   max iter    For each iteration  time complexity is   O n components x n samples x min n samples  n features        Transform           Here the   transform   operation returns  math  LX T   therefore its time complexity equals   n components   n features   n samples test    There is no added space complexity in the operation       rubric   References      1    Neighbourhood Components Analysis     http   www cs nyu edu  roweis papers ncanips pdf       J  Goldberger  S  Roweis  G  Hinton  R  Salakhutdinov  Advances in   Neural Information Processing Systems  Vol  17  May 2005  pp  513 520      Wikipedia entry on Neighborhood Components Analysis    https   en wikipedia org wiki Neighbourhood components analysis   "}
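The supervised dimensionality-reduction workflow described for NCA can be sketched in a few lines. This is a minimal illustration, not taken from the scikit-learn narrative docs: the 50/50 split, the scaling step, ``n_components=2`` and the reduced ``max_iter=50`` are choices made here for the sketch (the docs' Digits comparison uses its own settings).

```python
# Sketch: NCA as a supervised dimensionality reducer feeding a
# 3-nearest-neighbor classifier, mirroring the Digits comparison
# described in the text. Settings here are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
# Split into training and test sets of equal size, then standardize,
# as in the comparison described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.5, random_state=0
)

# Project the 64 input features down to 2 NCA components before
# classifying; n_components sets the target dimensionality and
# max_iter caps the L-BFGS-B iterations (kept small here for speed).
model = Pipeline([
    ("scale", StandardScaler()),
    ("nca", NeighborhoodComponentsAnalysis(n_components=2, max_iter=50,
                                           random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=3)),
])
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)
print(f"3-NN accuracy on 2D NCA projection: {acc:.3f}")
```

The learned transformation is exposed as ``components_`` on the fitted NCA step, an array of shape ``(n_components, n_features)``, so the same projection can be reused for plotting the 2-D embedding.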
{"questions":"scikit-learn mixture sklearn mixture gmm Gaussian mixture models","answers":".. _mixture:\n\n.. _gmm:\n\n=======================\nGaussian mixture models\n=======================\n\n.. currentmodule:: sklearn.mixture\n\n``sklearn.mixture`` is a package which enables one to learn\nGaussian Mixture Models (diagonal, spherical, tied and full covariance\nmatrices supported), sample them, and estimate them from\ndata. Facilities to help determine the appropriate number of\ncomponents are also provided.\n\n.. figure:: ..\/auto_examples\/mixture\/images\/sphx_glr_plot_gmm_pdf_001.png\n  :target: ..\/auto_examples\/mixture\/plot_gmm_pdf.html\n  :align: center\n  :scale: 50%\n\n  **Two-component Gaussian mixture model:** *data points, and equi-probability\n  surfaces of the model.*\n\nA Gaussian mixture model is a probabilistic model that assumes all the\ndata points are generated from a mixture of a finite number of\nGaussian distributions with unknown parameters. One can think of\nmixture models as generalizing k-means clustering to incorporate\ninformation about the covariance structure of the data as well as the\ncenters of the latent Gaussians.\n\nScikit-learn implements different classes to estimate Gaussian\nmixture models, that correspond to different estimation strategies,\ndetailed below.\n\nGaussian Mixture\n================\n\nThe :class:`GaussianMixture` object implements the\n:ref:`expectation-maximization <expectation_maximization>` (EM)\nalgorithm for fitting mixture-of-Gaussian models. It can also draw\nconfidence ellipsoids for multivariate models, and compute the\nBayesian Information Criterion to assess the number of clusters in the\ndata. A :meth:`GaussianMixture.fit` method is provided that learns a Gaussian\nMixture Model from train data. 
Given test data, it can assign to each\nsample the Gaussian it most probably belongs to using\nthe :meth:`GaussianMixture.predict` method.\n\n..\n    Alternatively, the probability of each\n    sample belonging to the various Gaussians may be retrieved using the\n    :meth:`GaussianMixture.predict_proba` method.\n\nThe :class:`GaussianMixture` comes with different options to constrain the\ncovariance of the different classes estimated: spherical, diagonal, tied or\nfull covariance.\n\n.. figure:: ..\/auto_examples\/mixture\/images\/sphx_glr_plot_gmm_covariances_001.png\n   :target: ..\/auto_examples\/mixture\/plot_gmm_covariances.html\n   :align: center\n   :scale: 75%\n\n.. rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_covariances.py` for an example of\n  using the Gaussian mixture as clustering on the iris dataset.\n\n* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_pdf.py` for an example on plotting the\n  density estimation.\n\n.. dropdown:: Pros and cons of class GaussianMixture\n\n  .. rubric:: Pros\n\n  :Speed: It is the fastest algorithm for learning mixture models.\n\n  :Agnostic: As this algorithm maximizes only the likelihood, it\n    will not bias the means towards zero, or bias the cluster sizes to\n    have specific structures that might or might not apply.\n\n  .. rubric:: Cons\n\n  :Singularities: When one has insufficiently many points per\n    mixture, estimating the covariance matrices becomes difficult,\n    and the algorithm is known to diverge and find solutions with\n    infinite likelihood unless one regularizes the covariances artificially.\n\n  :Number of components: This algorithm will always use all the\n    components it has access to, needing held-out data\n    or information theoretical criteria to decide how many components to use\n    in the absence of external cues.\n\n.. 
dropdown:: Selecting the number of components in a classical Gaussian Mixture model\n\n  The BIC criterion can be used to select the number of components in a Gaussian\n  Mixture in an efficient way. In theory, it recovers the true number of\n  components only in the asymptotic regime (i.e. if much data is available and\n  assuming that the data was actually generated i.i.d. from a mixture of Gaussian\n  distribution). Note that using a :ref:`Variational Bayesian Gaussian mixture <bgmm>`\n  avoids the specification of the number of components for a Gaussian mixture\n  model.\n\n  .. figure:: ..\/auto_examples\/mixture\/images\/sphx_glr_plot_gmm_selection_002.png\n    :target: ..\/auto_examples\/mixture\/plot_gmm_selection.html\n    :align: center\n    :scale: 50%\n\n  .. rubric:: Examples\n\n  * See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_selection.py` for an example\n    of model selection performed with classical Gaussian mixture.\n\n.. _expectation_maximization:\n\n.. dropdown:: Estimation algorithm expectation-maximization\n\n  The main difficulty in learning Gaussian mixture models from unlabeled\n  data is that one usually doesn't know which points came from\n  which latent component (if one has access to this information it gets\n  very easy to fit a separate Gaussian distribution to each set of\n  points). `Expectation-maximization\n  <https:\/\/en.wikipedia.org\/wiki\/Expectation%E2%80%93maximization_algorithm>`_\n  is a well-founded statistical\n  algorithm to get around this problem by an iterative process. First\n  one assumes random components (randomly centered on data points,\n  learned from k-means, or even just normally distributed around the\n  origin) and computes for each point a probability of being generated by\n  each component of the model. Then, one tweaks the\n  parameters to maximize the likelihood of the data given those\n  assignments. Repeating this process is guaranteed to always converge\n  to a local optimum.\n\n.. 
dropdown:: Choice of the Initialization method\n\n  There is a choice of four initialization methods (as well as inputting user defined\n  initial means) to generate the initial centers for the model components:\n\n  k-means (default)\n    This applies a traditional k-means clustering algorithm.\n    This can be computationally expensive compared to other initialization methods.\n\n  k-means++\n    This uses the initialization method of k-means clustering: k-means++.\n    This will pick the first center at random from the data. Subsequent centers will be\n    chosen from a weighted distribution of the data favouring points further away from\n    existing centers. k-means++ is the default initialization for k-means so will be\n    quicker than running a full k-means but can still take a significant amount of\n    time for large data sets with many components.\n\n  random_from_data\n    This will pick random data points from the input data as the initial\n    centers. This is a very fast method of initialization but can produce non-convergent\n    results if the chosen points are too close to each other.\n\n  random\n    Centers are chosen as a small perturbation away from the mean of all data.\n    This method is simple but can lead to the model taking longer to converge.\n\n  .. figure:: ..\/auto_examples\/mixture\/images\/sphx_glr_plot_gmm_init_001.png\n    :target: ..\/auto_examples\/mixture\/plot_gmm_init.html\n    :align: center\n    :scale: 50%\n\n  .. rubric:: Examples\n\n  * See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_init.py` for an example of\n    using different initializations in Gaussian Mixture.\n\n.. _bgmm:\n\nVariational Bayesian Gaussian Mixture\n=====================================\n\nThe :class:`BayesianGaussianMixture` object implements a variant of the\nGaussian mixture model with variational inference algorithms. The API is\nsimilar to the one defined by :class:`GaussianMixture`.\n\n.. 
_variational_inference:\n\n**Estimation algorithm: variational inference**\n\nVariational inference is an extension of expectation-maximization that\nmaximizes a lower bound on model evidence (including\npriors) instead of data likelihood. The principle behind\nvariational methods is the same as expectation-maximization (that is,\nboth are iterative algorithms that alternate between finding the\nprobabilities for each point to be generated by each mixture and\nfitting the mixture to these assigned points), but variational\nmethods add regularization by integrating information from prior\ndistributions. This avoids the singularities often found in\nexpectation-maximization solutions but introduces some subtle biases\ninto the model. Inference is often notably slower, but not usually so\nmuch as to render usage impractical.\n\nDue to its Bayesian nature, the variational algorithm needs more hyperparameters\nthan expectation-maximization, the most important of these being the\nconcentration parameter ``weight_concentration_prior``. Specifying a low value\nfor the concentration prior will make the model put most of the weight on a few\ncomponents and set the remaining components' weights very close to zero. High\nvalues of the concentration prior will allow a larger number of components to\nbe active in the mixture.\n\nThe implementation of the :class:`BayesianGaussianMixture` class\nproposes two types of prior for the weights distribution: a finite mixture model\nwith Dirichlet distribution and an infinite mixture model with the Dirichlet\nProcess. In practice the Dirichlet Process inference algorithm is approximated and\nuses a truncated distribution with a fixed maximum number of components (called\nthe Stick-breaking representation). 
The number of components actually used\nalmost always depends on the data.\n\nThe next figure compares the results obtained for the different types of\nweight concentration prior (parameter ``weight_concentration_prior_type``)\nfor different values of ``weight_concentration_prior``.\nHere, we can see the value of the ``weight_concentration_prior`` parameter\nhas a strong impact on the effective number of active components obtained. We\ncan also notice that large values for the weight concentration prior lead to\nmore uniform weights when the type of prior is 'dirichlet_distribution' while\nthis is not necessarily the case for the 'dirichlet_process' type (used by\ndefault).\n\n.. |plot_bgmm| image:: ..\/auto_examples\/mixture\/images\/sphx_glr_plot_concentration_prior_001.png\n   :target: ..\/auto_examples\/mixture\/plot_concentration_prior.html\n   :scale: 48%\n\n.. |plot_dpgmm| image:: ..\/auto_examples\/mixture\/images\/sphx_glr_plot_concentration_prior_002.png\n   :target: ..\/auto_examples\/mixture\/plot_concentration_prior.html\n   :scale: 48%\n\n.. centered:: |plot_bgmm| |plot_dpgmm|\n\nThe examples below compare Gaussian mixture models with a fixed number of\ncomponents, to the variational Gaussian mixture models with a Dirichlet process\nprior. Here, a classical Gaussian mixture is fitted with 5 components on a\ndataset composed of 2 clusters. We can see that the variational Gaussian mixture\nwith a Dirichlet process prior is able to limit itself to only 2 components\nwhereas the Gaussian mixture fits the data with a fixed number of components\nthat has to be set a priori by the user. In this case the user has selected\n``n_components=5`` which does not match the true generative distribution of this\ntoy dataset. Note that with very few observations, the variational Gaussian\nmixture models with a Dirichlet process prior can take a conservative stand, and\nfit only one component.\n\n.. 
figure:: ..\/auto_examples\/mixture\/images\/sphx_glr_plot_gmm_001.png\n   :target: ..\/auto_examples\/mixture\/plot_gmm.html\n   :align: center\n   :scale: 70%\n\n\nOn the following figure we are fitting a dataset not well-depicted by a\nGaussian mixture. Adjusting the ``weight_concentration_prior``, parameter of the\n:class:`BayesianGaussianMixture` controls the number of components used to fit\nthis data. We also present on the last two plots a random sampling generated\nfrom the two resulting mixtures.\n\n.. figure:: ..\/auto_examples\/mixture\/images\/sphx_glr_plot_gmm_sin_001.png\n   :target: ..\/auto_examples\/mixture\/plot_gmm_sin.html\n   :align: center\n   :scale: 65%\n\n\n\n.. rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm.py` for an example on\n  plotting the confidence ellipsoids for both :class:`GaussianMixture`\n  and :class:`BayesianGaussianMixture`.\n\n* :ref:`sphx_glr_auto_examples_mixture_plot_gmm_sin.py` shows using\n  :class:`GaussianMixture` and :class:`BayesianGaussianMixture` to fit a\n  sine wave.\n\n* See :ref:`sphx_glr_auto_examples_mixture_plot_concentration_prior.py`\n  for an example plotting the confidence ellipsoids for the\n  :class:`BayesianGaussianMixture` with different\n  ``weight_concentration_prior_type`` for different values of the parameter\n  ``weight_concentration_prior``.\n\n.. dropdown:: Pros and cons of variational inference with BayesianGaussianMixture\n\n  .. rubric:: Pros\n\n  :Automatic selection: When ``weight_concentration_prior`` is small enough and\n    ``n_components`` is larger than what is found necessary by the model, the\n    Variational Bayesian mixture model has a natural tendency to set some mixture\n    weights values close to zero. This makes it possible to let the model choose\n    a suitable number of effective components automatically. Only an upper bound\n    of this number needs to be provided. 
Note however that the \"ideal\" number of\n    active components is very application specific and is typically ill-defined\n    in a data exploration setting.\n\n  :Less sensitivity to the number of parameters: Unlike finite models, which will\n    almost always use all components as much as they can, and hence will produce\n    wildly different solutions for different numbers of components, the\n    variational inference with a Dirichlet process prior\n    (``weight_concentration_prior_type='dirichlet_process'``) won't change much\n    with changes to the parameters, leading to more stability and less tuning.\n\n  :Regularization: Due to the incorporation of prior information,\n    variational solutions have less pathological special cases than\n    expectation-maximization solutions.\n\n  .. rubric:: Cons\n\n  :Speed: The extra parametrization necessary for variational inference makes\n    inference slower, although not by much.\n\n  :Hyperparameters: This algorithm needs an extra hyperparameter\n    that might need experimental tuning via cross-validation.\n\n  :Bias: There are many implicit biases in the inference algorithms (and also in\n    the Dirichlet process if used), and whenever there is a mismatch between\n    these biases and the data it might be possible to fit better models using a\n    finite mixture.\n\n.. _dirichlet_process:\n\nThe Dirichlet Process\n---------------------\n\nHere we describe variational inference algorithms on Dirichlet process\nmixture. The Dirichlet process is a prior probability distribution on\n*clusterings with an infinite, unbounded, number of partitions*.\nVariational techniques let us incorporate this prior structure on\nGaussian mixture models at almost no penalty in inference time, comparing\nwith a finite Gaussian mixture model.\n\nAn important question is how can the Dirichlet process use an infinite,\nunbounded number of clusters and still be consistent. 
While a full explanation\ndoesn't fit this manual, one can think of its `stick breaking process\n<https:\/\/en.wikipedia.org\/wiki\/Dirichlet_process#The_stick-breaking_process>`_\nanalogy to help understanding it. The stick breaking process is a generative\nstory for the Dirichlet process. We start with a unit-length stick and in each\nstep we break off a portion of the remaining stick. Each time, we associate the\nlength of the piece of the stick to the proportion of points that falls into a\ngroup of the mixture. At the end, to represent the infinite mixture, we\nassociate the last remaining piece of the stick to the proportion of points\nthat don't fall into all the other groups. The length of each piece is a random\nvariable with probability proportional to the concentration parameter. Smaller\nvalues of the concentration will divide the unit-length into larger pieces of\nthe stick (defining more concentrated distribution). Larger concentration\nvalues will create smaller pieces of the stick (increasing the number of\ncomponents with non zero weights).\n\nVariational inference techniques for the Dirichlet process still work\nwith a finite approximation to this infinite mixture model, but\ninstead of having to specify a priori how many components one wants to\nuse, one just specifies the concentration parameter and an upper bound\non the number of mixture components (this upper bound, assuming it is\nhigher than the \"true\" number of components, affects only algorithmic\ncomplexity, not the actual number of components used).","site":"scikit-learn","answers_cleaned":"    mixture       gmm                           Gaussian mixture models                             currentmodule   sklearn mixture    sklearn mixture   is a package which enables one to learn Gaussian Mixture Models  diagonal  spherical  tied and full covariance matrices supported   sample them  and estimate them from data  Facilities to help determine the appropriate number of components are also 
provided      figure      auto examples mixture images sphx glr plot gmm pdf 001 png    target     auto examples mixture plot gmm pdf html    align  center    scale  50       Two component Gaussian mixture model     data points  and equi probability   surfaces of the model    A Gaussian mixture model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters  One can think of mixture models as generalizing k means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians   Scikit learn implements different classes to estimate Gaussian mixture models  that correspond to different estimation strategies  detailed below   Gaussian Mixture                   The  class  GaussianMixture  object implements the  ref  expectation maximization  expectation maximization    EM  algorithm for fitting mixture of Gaussian models  It can also draw confidence ellipsoids for multivariate models  and compute the Bayesian Information Criterion to assess the number of clusters in the data  A  meth  GaussianMixture fit  method is provided that learns a Gaussian Mixture Model from train data  Given test data  it can assign to each sample the Gaussian it most probably belongs to using the  meth  GaussianMixture predict  method          Alternatively  the probability of each     sample belonging to the various Gaussians may be retrieved using the      meth  GaussianMixture predict proba  method   The  class  GaussianMixture  comes with different options to constrain the covariance of the difference classes estimated  spherical  diagonal  tied or full covariance      figure      auto examples mixture images sphx glr plot gmm covariances 001 png     target     auto examples mixture plot gmm covariances html     align  center     scale  75      rubric   Examples    See  ref  sphx glr auto examples mixture plot gmm covariances py  
for an example of   using the Gaussian mixture as clustering on the iris dataset     See  ref  sphx glr auto examples mixture plot gmm pdf py  for an example on plotting the   density estimation      dropdown   Pros and cons of class GaussianMixture       rubric   Pros     Speed  It is the fastest algorithm for learning mixture models     Agnostic  As this algorithm maximizes only the likelihood  it     will not bias the means towards zero  or bias the cluster sizes to     have specific structures that might or might not apply        rubric   Cons     Singularities  When one has insufficiently many points per     mixture  estimating the covariance matrices becomes difficult      and the algorithm is known to diverge and find solutions with     infinite likelihood unless one regularizes the covariances artificially      Number of components  This algorithm will always use all the     components it has access to  needing held out data     or information theoretical criteria to decide how many components to use     in the absence of external cues      dropdown   Selecting the number of components in a classical Gaussian Mixture model    The BIC criterion can be used to select the number of components in a Gaussian   Mixture in an efficient way  In theory  it recovers the true number of   components only in the asymptotic regime  i e  if much data is available and   assuming that the data was actually generated i i d  from a mixture of Gaussian   distribution   Note that using a  ref  Variational Bayesian Gaussian mixture  bgmm     avoids the specification of the number of components for a Gaussian mixture   model        figure      auto examples mixture images sphx glr plot gmm selection 002 png      target     auto examples mixture plot gmm selection html      align  center      scale  50        rubric   Examples      See  ref  sphx glr auto examples mixture plot gmm selection py  for an example     of model selection performed with classical Gaussian mixture       
expectation maximization      dropdown   Estimation algorithm expectation maximization    The main difficulty in learning Gaussian mixture models from unlabeled   data is that one usually doesn t know which points came from   which latent component  if one has access to this information it gets   very easy to fit a separate Gaussian distribution to each set of   points    Expectation maximization    https   en wikipedia org wiki Expectation E2 80 93maximization algorithm      is a well founded statistical   algorithm to get around this problem by an iterative process  First   one assumes random components  randomly centered on data points    learned from k means  or even just normally distributed around the   origin  and computes for each point a probability of being generated by   each component of the model  Then  one tweaks the   parameters to maximize the likelihood of the data given those   assignments  Repeating this process is guaranteed to always converge   to a local optimum      dropdown   Choice of the Initialization method    There is a choice of four initialization methods  as well as inputting user defined   initial means  to generate the initial centers for the model components     k means  default      This applies a traditional k means clustering algorithm      This can be computationally expensive compared to other initialization methods     k means       This uses the initialization method of k means clustering  k means        This will pick the first center at random from the data  Subsequent centers will be     chosen from a weighted distribution of the data favouring points further away from     existing centers  k means   is the default initialization for k means so will be     quicker than running a full k means but can still take a significant amount of     time for large data sets with many components     random from data     This will pick random data points from the input data as the initial     centers  This is a very fast method of 
initialization but can produce non-convergent results if the chosen points
are too close to each other.

* *random*: Centers are chosen as a small perturbation away from the mean of
  all data. This method is simple but can lead to the model taking longer to
  converge.

.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_init_001.png
   :target: ../auto_examples/mixture/plot_gmm_init.html
   :align: center
   :scale: 50

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_init.py` for an example of
  using different initializations in Gaussian Mixture.

.. _bgmm:

Variational Bayesian Gaussian Mixture
=====================================

The :class:`BayesianGaussianMixture` object implements a variant of the
Gaussian mixture model with variational inference algorithms. The API is
similar to the one defined by :class:`GaussianMixture`.

.. _variational_inference:

Estimation algorithm: variational inference
-------------------------------------------

Variational inference is an extension of expectation-maximization that
maximizes a lower bound on model evidence (including priors) instead of data
likelihood. The principle behind variational methods is the same as
expectation-maximization (that is, both are iterative algorithms that
alternate between finding the probabilities for each point to be generated by
each mixture component and fitting the mixture to these assigned points), but
variational methods add regularization by integrating information from prior
distributions. This avoids the singularities often found in
expectation-maximization solutions but introduces some subtle biases to the
model. Inference is often notably slower, but not usually so much as to
render usage impractical.

Due to its Bayesian nature, the variational algorithm needs more
hyperparameters than expectation-maximization, the most important of these
being the concentration parameter ``weight_concentration_prior``. Specifying
a low value for the concentration prior will make the model put most of the
weight on a few components and set the remaining components' weights very
close to zero. High values of the concentration prior will allow a larger
number of components to be active in the mixture.

The :class:`BayesianGaussianMixture` class proposes two types of prior for
the weights distribution: a finite mixture model with a Dirichlet
distribution and an infinite mixture model with the Dirichlet Process. In
practice the Dirichlet Process inference algorithm is approximated and uses a
truncated distribution with a fixed maximum number of components (called the
stick-breaking representation). The number of components actually used almost
always depends on the data.

The next figure compares the results obtained for the different types of
weight concentration prior (parameter ``weight_concentration_prior_type``)
for different values of ``weight_concentration_prior``. Here, we can see that
the value of the ``weight_concentration_prior`` parameter has a strong impact
on the effective number of active components obtained. We can also notice
that large values for the concentration prior lead to more uniform weights
when the type of prior is 'dirichlet_distribution', while this is not
necessarily the case for the 'dirichlet_process' type (used by default).

.. |plot_bgmm| image:: ../auto_examples/mixture/images/sphx_glr_plot_concentration_prior_001.png
   :target: ../auto_examples/mixture/plot_concentration_prior.html
   :scale: 48

.. |plot_dpgmm| image:: ../auto_examples/mixture/images/sphx_glr_plot_concentration_prior_002.png
   :target: ../auto_examples/mixture/plot_concentration_prior.html
   :scale: 48

.. centered:: |plot_bgmm| |plot_dpgmm|

The examples below compare Gaussian mixture models with a fixed number of
components to variational Gaussian mixture models with a Dirichlet process
prior. Here, a classical Gaussian mixture is fitted with 5 components on a
dataset composed of 2 clusters. We can see that
the variational Gaussian mixture with a Dirichlet process prior is able to
limit itself to only 2 components, whereas the Gaussian mixture fits the data
with a fixed number of components that has to be set a priori by the user. In
this case the user has selected ``n_components=5``, which does not match the
true generative distribution of this toy dataset. Note that with very few
observations, the variational Gaussian mixture models with a Dirichlet
process prior can take a conservative stand and fit only one component.

.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_001.png
   :target: ../auto_examples/mixture/plot_gmm.html
   :align: center
   :scale: 70

On the following figure we are fitting a dataset not well depicted by a
Gaussian mixture. Adjusting the ``weight_concentration_prior`` parameter of
the :class:`BayesianGaussianMixture` controls the number of components used
to fit this data. We also present on the last two plots a random sampling
generated from the two resulting mixtures.

.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_sin_001.png
   :target: ../auto_examples/mixture/plot_gmm_sin.html
   :align: center
   :scale: 65

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm.py` for an example on
  plotting the confidence ellipsoids for both :class:`GaussianMixture`
  and :class:`BayesianGaussianMixture`.

* :ref:`sphx_glr_auto_examples_mixture_plot_gmm_sin.py` shows using
  :class:`GaussianMixture` and :class:`BayesianGaussianMixture` to fit a
  sine wave.

* See :ref:`sphx_glr_auto_examples_mixture_plot_concentration_prior.py`
  for an example plotting the confidence ellipsoids for the
  :class:`BayesianGaussianMixture` with different
  ``weight_concentration_prior_type`` for different values of the parameter
  ``weight_concentration_prior``.

.. dropdown:: Pros and cons of variational inference with BayesianGaussianMixture

  .. rubric:: Pros

  - Automatic selection: When ``weight_concentration_prior`` is small enough
    and ``n_components`` is larger than what is found necessary by the model,
    the variational Bayesian mixture model has a natural tendency to set some
    mixture weight values close to zero. This makes it possible to let the
    model choose a suitable number of effective components automatically.
    Only an upper bound of this number needs to be provided. Note however
    that the "ideal" number of active components is very application specific
    and is typically ill-defined in a data exploration setting.

  - Less sensitivity to the number of parameters: Unlike finite models, which
    will almost always use all components as much as they can (and hence will
    produce wildly different solutions for different numbers of components),
    the variational inference with a Dirichlet process prior
    (``weight_concentration_prior_type='dirichlet_process'``) won't change
    much with changes to the parameters, leading to more stability and less
    tuning.

  - Regularization: Due to the incorporation of prior information,
    variational solutions have fewer pathological special cases than
    expectation-maximization solutions.

  .. rubric:: Cons

  - Speed: The extra parametrization necessary for variational inference
    makes inference slower, although not by much.

  - Hyperparameters: This algorithm needs an extra hyperparameter that might
    need experimental tuning via cross-validation.

  - Bias: There are many implicit biases in the inference algorithms (and
    also in the Dirichlet process, if used), and whenever there is a mismatch
    between these biases and the data it might be possible to fit better
    models using a finite mixture.

.. _dirichlet_process:

The Dirichlet Process
---------------------

Here we describe variational inference algorithms on the Dirichlet process
mixture. The Dirichlet process is a prior probability distribution on
*clusterings with an infinite, unbounded, number of partitions*.
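As a rough intuition for how such a prior allocates mass across partitions, here is a minimal NumPy sketch (illustrative only, not scikit-learn code) that samples mixture weights from a truncated stick-breaking construction of a Dirichlet process:

```python
import numpy as np

def stick_breaking_weights(concentration, n_components, rng):
    """Sample mixture weights from a truncated Dirichlet process prior.

    Each Beta(1, concentration) draw is the fraction broken off the
    remaining stick; the resulting piece lengths are the component weights.
    """
    fractions = rng.beta(1.0, concentration, size=n_components)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - fractions[:-1])))
    return fractions * remaining

rng = np.random.default_rng(0)
for concentration in (0.1, 1.0, 10.0):
    w = stick_breaking_weights(concentration, n_components=25, rng=rng)
    # Five largest pieces: a small concentration puts most mass on a few
    # large pieces; a large one spreads mass over many small pieces.
    print(concentration, np.round(np.sort(w)[::-1][:5], 3))
```

The weights always sum to at most one (the last sliver of the stick stands in for the infinitely many remaining components that the truncation drops).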
Variational techniques let us incorporate this prior structure on Gaussian
mixture models at almost no penalty in inference time, compared with a finite
Gaussian mixture model.

An important question is how the Dirichlet process can use an infinite,
unbounded number of clusters and still be consistent. While a full
explanation doesn't fit this manual, one can think of its `stick breaking
process
<https://en.wikipedia.org/wiki/Dirichlet_process#The_stick-breaking_process>`_
analogy to help understand it. The stick-breaking process is a generative
story for the Dirichlet process. We start with a unit-length stick and in
each step we break off a portion of the remaining stick. Each time, we
associate the length of the piece of the stick to the proportion of points
that falls into a group of the mixture. At the end, to represent the infinite
mixture, we associate the last remaining piece of the stick to the proportion
of points that don't fall into all the other groups. The length of each piece
is a random variable whose distribution is controlled by the concentration
parameter. Smaller values of the concentration will divide the unit-length
stick into larger pieces, defining a more concentrated distribution. Larger
concentration values will create smaller pieces of the stick, increasing the
number of components with non-zero weights.

Variational inference techniques for the Dirichlet process still work with a
finite approximation to this infinite mixture model, but instead of having to
specify a priori how many components one wants to use, one just specifies the
concentration parameter and an upper bound on the number of mixture
components (this upper bound, assuming it is higher than the "true" number of
components, affects only algorithmic complexity, not the actual number of
components used).
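Concretely, this can be sketched with :class:`BayesianGaussianMixture` as follows (the two-cluster synthetic dataset and the hyperparameter values are illustrative choices, not taken from the guide):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(42)
# Toy data: two well-separated Gaussian clusters in 2D.
X = np.vstack([rng.randn(200, 2) - 4.0, rng.randn(200, 2) + 4.0])

# n_components is only an upper bound; the Dirichlet process prior with a
# low concentration lets the model switch off unneeded components.
bgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.01,
    max_iter=500,
    random_state=42,
).fit(X)

# Count components carrying non-negligible weight; for data like this it is
# typically far below the n_components upper bound.
active = int(np.sum(bgmm.weights_ > 0.01))
print("effective components:", active)
```

Only the number of *effective* components adapts to the data; all 10 components still exist in the fitted model, most with weights near zero.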
{"questions":"scikit-learn When performing classification you often want not only to predict the class sklearn calibration calibration Probability calibration","answers":".. _calibration:\n\n=======================\nProbability calibration\n=======================\n\n.. currentmodule:: sklearn.calibration\n\n\nWhen performing classification you often want not only to predict the class\nlabel, but also obtain a probability of the respective label. This probability\ngives you some kind of confidence on the prediction. Some models can give you\npoor estimates of the class probabilities and some even do not support\nprobability prediction (e.g., some instances of\n:class:`~sklearn.linear_model.SGDClassifier`).\nThe calibration module allows you to better calibrate\nthe probabilities of a given model, or to add support for probability\nprediction.\n\nWell calibrated classifiers are probabilistic classifiers for which the output\nof the :term:`predict_proba` method can be directly interpreted as a confidence\nlevel.\nFor instance, a well calibrated (binary) classifier should classify the samples such\nthat among the samples to which it gave a :term:`predict_proba` value close to, say,\n0.8, approximately 80% actually belong to the positive class.\n\nBefore we show how to re-calibrate a classifier, we first need a way to detect how\ngood a classifier is calibrated.\n\n.. note::\n    Strictly proper scoring rules for probabilistic predictions like\n    :func:`sklearn.metrics.brier_score_loss` and\n    :func:`sklearn.metrics.log_loss` assess calibration (reliability) and\n    discriminative power (resolution) of a model, as well as the randomness of the data\n    (uncertainty) at the same time. This follows from the well-known Brier score\n    decomposition of Murphy [1]_. As it is not clear which term dominates, the score is\n    of limited use for assessing calibration alone (unless one computes each term of\n    the decomposition). 
A lower Brier loss, for instance, does not necessarily\n    mean a better calibrated model, it could also mean a worse calibrated model with much\n    more discriminatory power, e.g. using many more features.\n\n.. _calibration_curve:\n\nCalibration curves\n------------------\n\nCalibration curves, also referred to as *reliability diagrams* (Wilks 1995 [2]_),\ncompare how well the probabilistic predictions of a binary classifier are calibrated.\nIt plots the frequency of the positive label (to be more precise, an estimation of the\n*conditional event probability* :math:`P(Y=1|\\text{predict_proba})`) on the y-axis\nagainst the predicted probability :term:`predict_proba` of a model on the x-axis.\nThe tricky part is to get values for the y-axis.\nIn scikit-learn, this is accomplished by binning the predictions such that the x-axis\nrepresents the average predicted probability in each bin.\nThe y-axis is then the *fraction of positives* given the predictions of that bin, i.e.\nthe proportion of samples whose class is the positive class (in each bin).\n\nThe top calibration curve plot is created with\n:func:`CalibrationDisplay.from_estimator`, which uses :func:`calibration_curve` to\ncalculate the per bin average predicted probabilities and fraction of positives.\n:func:`CalibrationDisplay.from_estimator`\ntakes as input a fitted classifier, which is used to calculate the predicted\nprobabilities. The classifier thus must have :term:`predict_proba` method. For\nthe few classifiers that do not have a :term:`predict_proba` method, it is\npossible to use :class:`CalibratedClassifierCV` to calibrate the classifier\noutputs to probabilities.\n\nThe bottom histogram gives some insight into the behavior of each classifier\nby showing the number of samples in each predicted probability bin.\n\n.. 
figure:: ..\/auto_examples\/calibration\/images\/sphx_glr_plot_compare_calibration_001.png\n   :target: ..\/auto_examples\/calibration\/plot_compare_calibration.html\n   :align: center\n\n.. currentmodule:: sklearn.linear_model\n\n:class:`LogisticRegression` is more likely to return well calibrated predictions by itself as it has a\ncanonical link function for its loss, i.e. the logit-link for the :ref:`log_loss`.\nIn the unpenalized case, this leads to the so-called **balance property**, see [8]_ and :ref:`Logistic_regression`.\nIn the plot above, data is generated according to a linear mechanism, which is\nconsistent with the :class:`LogisticRegression` model (the model is 'well specified'),\nand the value of the regularization parameter `C` is tuned to be\nappropriate (neither too strong nor too low). As a consequence, this model returns\naccurate predictions from its `predict_proba` method.\nIn contrast to that, the other shown models return biased probabilities; with\ndifferent biases per model.\n\n.. currentmodule:: sklearn.naive_bayes\n\n:class:`GaussianNB` (Naive Bayes) tends to push probabilities to 0 or 1 (note the counts\nin the histograms). This is mainly because it makes the assumption that\nfeatures are conditionally independent given the class, which is not the\ncase in this dataset which contains 2 redundant features.\n\n.. currentmodule:: sklearn.ensemble\n\n:class:`RandomForestClassifier` shows the opposite behavior: the histograms\nshow peaks at probabilities approximately 0.2 and 0.9, while probabilities\nclose to 0 or 1 are very rare. An explanation for this is given by\nNiculescu-Mizil and Caruana [3]_: \"Methods such as bagging and random\nforests that average predictions from a base set of models can have\ndifficulty making predictions near 0 and 1 because variance in the\nunderlying base models will bias predictions that should be near zero or one\naway from these values. 
Because predictions are restricted to the interval\n[0,1], errors caused by variance tend to be one-sided near zero and one. For\nexample, if a model should predict p = 0 for a case, the only way bagging\ncan achieve this is if all bagged trees predict zero. If we add noise to the\ntrees that bagging is averaging over, this noise will cause some trees to\npredict values larger than 0 for this case, thus moving the average\nprediction of the bagged ensemble away from 0. We observe this effect most\nstrongly with random forests because the base-level trees trained with\nrandom forests have relatively high variance due to feature subsetting.\" As\na result, the calibration curve shows a characteristic sigmoid shape, indicating that\nthe classifier could trust its \"intuition\" more and return probabilities closer\nto 0 or 1 typically.\n\n.. currentmodule:: sklearn.svm\n\n:class:`LinearSVC` (SVC) shows an even more sigmoid curve than the random forest, which\nis typical for maximum-margin methods (compare Niculescu-Mizil and Caruana [3]_), which\nfocus on difficult to classify samples that are close to the decision boundary (the\nsupport vectors).\n\nCalibrating a classifier\n------------------------\n\n.. currentmodule:: sklearn.calibration\n\nCalibrating a classifier consists of fitting a regressor (called a\n*calibrator*) that maps the output of the classifier (as given by\n:term:`decision_function` or :term:`predict_proba`) to a calibrated probability\nin [0, 1]. Denoting the output of the classifier for a given sample by :math:`f_i`,\nthe calibrator tries to predict the conditional event probability\n:math:`P(y_i = 1 | f_i)`.\n\nIdeally, the calibrator is fit on a dataset independent of the training data used to\nfit the classifier in the first place.\nThis is because performance of the classifier on its training data would be\nbetter than for novel data. 
Using the classifier output of training data\nto fit the calibrator would thus result in a biased calibrator that maps to\nprobabilities closer to 0 and 1 than it should.\n\nUsage\n-----\n\nThe :class:`CalibratedClassifierCV` class is used to calibrate a classifier.\n\n:class:`CalibratedClassifierCV` uses a cross-validation approach to ensure\nunbiased data is always used to fit the calibrator. The data is split into k\n`(train_set, test_set)` couples (as determined by `cv`). When `ensemble=True`\n(default), the following procedure is repeated independently for each\ncross-validation split:\n\n1. a clone of `base_estimator` is trained on the train subset\n2. the trained `base_estimator` makes predictions on the test subset\n3. the predictions are used to fit a calibrator (either a sigmoid or isotonic\n   regressor) (when the data is multiclass, a calibrator is fit for every class)\n\nThis results in an\nensemble of k `(classifier, calibrator)` couples where each calibrator maps\nthe output of its corresponding classifier into [0, 1]. Each couple is exposed\nin the `calibrated_classifiers_` attribute, where each entry is a calibrated\nclassifier with a :term:`predict_proba` method that outputs calibrated\nprobabilities. The output of :term:`predict_proba` for the main\n:class:`CalibratedClassifierCV` instance corresponds to the average of the\npredicted probabilities of the `k` estimators in the `calibrated_classifiers_`\nlist. 
The output of :term:`predict` is the class that has the highest\nprobability.\n\nIt is important to choose `cv` carefully when using `ensemble=True`.\nAll classes should be present in both train and test subsets for every split.\nWhen a class is absent in the train subset, the predicted probability for that\nclass will default to 0 for the `(classifier, calibrator)` couple of that split.\nThis skews the :term:`predict_proba` as it averages across all couples.\nWhen a class is absent in the test subset, the calibrator for that class\n(within the `(classifier, calibrator)` couple of that split) is\nfit on data with no positive class. This results in ineffective calibration.\n\nWhen `ensemble=False`, cross-validation is used to obtain 'unbiased'\npredictions for all the data, via\n:func:`~sklearn.model_selection.cross_val_predict`.\nThese unbiased predictions are then used to train the calibrator. The attribute\n`calibrated_classifiers_` consists of only one `(classifier, calibrator)`\ncouple where the classifier is the `base_estimator` trained on all the data.\nIn this case the output of :term:`predict_proba` for\n:class:`CalibratedClassifierCV` is the predicted probabilities obtained\nfrom the single `(classifier, calibrator)` couple.\n\nThe main advantage of `ensemble=True` is to benefit from the traditional\nensembling effect (similar to :ref:`bagging`). 
The resulting ensemble should\nboth be well calibrated and slightly more accurate than with `ensemble=False`.\nThe main advantage of using `ensemble=False` is computational: it reduces the\noverall fit time by training only a single base classifier and calibrator\npair, decreases the final model size and increases prediction speed.\n\nAlternatively, an already fitted classifier can be calibrated by using a\n:class:`~sklearn.frozen.FrozenEstimator` as\n``CalibratedClassifierCV(estimator=FrozenEstimator(estimator))``.\nIt is up to the user to make sure that the data used for fitting the classifier\nis disjoint from the data used for fitting the regressor.\n\n:class:`CalibratedClassifierCV` supports the use of two regression techniques\nfor calibration via the `method` parameter: `\"sigmoid\"` and `\"isotonic\"`.\n\n.. _sigmoid_regressor:\n\nSigmoid\n^^^^^^^\n\nThe sigmoid regressor, `method=\"sigmoid\"`, is based on Platt's logistic model [4]_:\n\n.. math::\n       p(y_i = 1 | f_i) = \\frac{1}{1 + \\exp(A f_i + B)} \\,,\n\nwhere :math:`y_i` is the true label of sample :math:`i` and :math:`f_i`\nis the output of the un-calibrated classifier for sample :math:`i`. :math:`A`\nand :math:`B` are real numbers to be determined when fitting the regressor via\nmaximum likelihood.\n\nThe sigmoid method assumes the :ref:`calibration curve <calibration_curve>`\ncan be corrected by applying a sigmoid function to the raw predictions. This\nassumption has been empirically justified in the case of :ref:`svm` with\ncommon kernel functions on various benchmark datasets in section 2.1 of Platt\n1999 [4]_ but does not necessarily hold in general. Additionally, the\nlogistic model works best if the calibration error is symmetrical, meaning\nthe classifier output for each binary class is normally distributed with\nthe same variance [7]_. 
This can be a problem for highly imbalanced\nclassification problems, where outputs do not have equal variance.\n\nIn general this method is most effective for small sample sizes or when the\nun-calibrated model is under-confident and has similar calibration errors for both\nhigh and low outputs.\n\nIsotonic\n^^^^^^^^\n\nThe `method=\"isotonic\"` fits a non-parametric isotonic regressor, which outputs\na step-wise non-decreasing function, see :mod:`sklearn.isotonic`. It minimizes:\n\n.. math::\n       \\sum_{i=1}^{n} (y_i - \\hat{f}_i)^2\n\nsubject to :math:`\\hat{f}_i \\geq \\hat{f}_j` whenever\n:math:`f_i \\geq f_j`. :math:`y_i` is the true\nlabel of sample :math:`i` and :math:`\\hat{f}_i` is the output of the\ncalibrated classifier for sample :math:`i` (i.e., the calibrated probability).\nThis method is more general when compared to 'sigmoid' as the only restriction\nis that the mapping function is monotonically increasing. It is thus more\npowerful as it can correct any monotonic distortion of the un-calibrated model.\nHowever, it is more prone to overfitting, especially on small datasets [6]_.\n\nOverall, 'isotonic' will perform as well as or better than 'sigmoid' when\nthere is enough data (greater than ~ 1000 samples) to avoid overfitting [3]_.\n\n.. note:: Impact on ranking metrics like AUC\n\n    It is generally expected that calibration does not affect ranking metrics such as\n    ROC-AUC. However, these metrics might differ after calibration when using\n    `method=\"isotonic\"` since isotonic regression introduces ties in the predicted\n    probabilities. 
This can be seen as within the uncertainty of the model predictions.\n    In case, you strictly want to keep the ranking and thus AUC scores, use\n    `method=\"sigmoid\"` which is a strictly monotonic transformation and thus keeps\n    the ranking.\n\nMulticlass support\n^^^^^^^^^^^^^^^^^^\n\nBoth isotonic and sigmoid regressors only\nsupport 1-dimensional data (e.g., binary classification output) but are\nextended for multiclass classification if the `base_estimator` supports\nmulticlass predictions. For multiclass predictions,\n:class:`CalibratedClassifierCV` calibrates for\neach class separately in a :ref:`ovr_classification` fashion [5]_. When\npredicting\nprobabilities, the calibrated probabilities for each class\nare predicted separately. As those probabilities do not necessarily sum to\none, a postprocessing is performed to normalize them.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_calibration_plot_calibration_curve.py`\n* :ref:`sphx_glr_auto_examples_calibration_plot_calibration_multiclass.py`\n* :ref:`sphx_glr_auto_examples_calibration_plot_calibration.py`\n* :ref:`sphx_glr_auto_examples_calibration_plot_compare_calibration.py`\n\n.. rubric:: References\n\n.. [1] Allan H. Murphy (1973).\n       :doi:`\"A New Vector Partition of the Probability Score\"\n       <10.1175\/1520-0450(1973)012%3C0595:ANVPOT%3E2.0.CO;2>`\n       Journal of Applied Meteorology and Climatology\n\n.. [2] `On the combination of forecast probabilities for\n       consecutive precipitation periods.\n       <https:\/\/journals.ametsoc.org\/waf\/article\/5\/4\/640\/40179>`_\n       Wea. Forecasting, 5, 640\u2013650., Wilks, D. S., 1990a\n\n.. [3] `Predicting Good Probabilities with Supervised Learning\n       <https:\/\/www.cs.cornell.edu\/~alexn\/papers\/calibration.icml05.crc.rev3.pdf>`_,\n       A. Niculescu-Mizil & R. Caruana, ICML 2005\n\n\n.. 
[4] `Probabilistic Outputs for Support Vector Machines and Comparisons\n       to Regularized Likelihood Methods.\n       <https:\/\/www.cs.colorado.edu\/~mozer\/Teaching\/syllabi\/6622\/papers\/Platt1999.pdf>`_\n       J. Platt, (1999)\n\n.. [5] `Transforming Classifier Scores into Accurate Multiclass\n       Probability Estimates.\n       <https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/775047.775151>`_\n       B. Zadrozny & C. Elkan, (KDD 2002)\n\n.. [6] `Predicting accurate probabilities with a ranking loss.\n       <https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC4180410\/>`_\n       Menon AK, Jiang XJ, Vembu S, Elkan C, Ohno-Machado L.\n       Proc Int Conf Mach Learn. 2012;2012:703-710\n\n.. [7] `Beyond sigmoids: How to obtain well-calibrated probabilities from\n       binary classifiers with beta calibration\n       <https:\/\/projecteuclid.org\/euclid.ejs\/1513306867>`_\n       Kull, M., Silva Filho, T. M., & Flach, P. (2017).\n\n.. [8] Mario V. W\u00fcthrich, Michael Merz (2023).\n       :doi:`\"Statistical Foundations of Actuarial Learning and its Applications\"\n       <10.1007\/978-3-031-12409-9>`\n       Springer Actuarial","site":"scikit-learn","answers_cleaned":"    calibration                           Probability calibration                             currentmodule   sklearn calibration   When performing classification you often want not only to predict the class label  but also obtain a probability of the respective label  This probability gives you some kind of confidence on the prediction  Some models can give you poor estimates of the class probabilities and some even do not support probability prediction  e g   some instances of  class   sklearn linear model SGDClassifier    The calibration module allows you to better calibrate the probabilities of a given model  or to add support for probability prediction   Well calibrated classifiers are probabilistic classifiers for which the output of the  term  predict proba  method can be directly 
interpreted as a confidence level  For instance  a well calibrated  binary  classifier should classify the samples such that among the samples to which it gave a  term  predict proba  value close to  say  0 8  approximately 80  actually belong to the positive class   Before we show how to re calibrate a classifier  we first need a way to detect how good a classifier is calibrated      note       Strictly proper scoring rules for probabilistic predictions like      func  sklearn metrics brier score loss  and      func  sklearn metrics log loss  assess calibration  reliability  and     discriminative power  resolution  of a model  as well as the randomness of the data      uncertainty  at the same time  This follows from the well known Brier score     decomposition of Murphy  1    As it is not clear which term dominates  the score is     of limited use for assessing calibration alone  unless one computes each term of     the decomposition   A lower Brier loss  for instance  does not necessarily     mean a better calibrated model  it could also mean a worse calibrated model with much     more discriminatory power  e g  using many more features       calibration curve   Calibration curves                     Calibration curves  also referred to as  reliability diagrams   Wilks 1995  2     compare how well the probabilistic predictions of a binary classifier are calibrated  It plots the frequency of the positive label  to be more precise  an estimation of the  conditional event probability   math  P Y 1  text predict proba     on the y axis against the predicted probability  term  predict proba  of a model on the x axis  The tricky part is to get values for the y axis  In scikit learn  this is accomplished by binning the predictions such that the x axis represents the average predicted probability in each bin  The y axis is then the  fraction of positives  given the predictions of that bin  i e  the proportion of samples whose class is the positive class  in each bin    
The top calibration curve plot is created with  func  CalibrationDisplay from estimator   which uses  func  calibration curve  to calculate the per bin average predicted probabilities and fraction of positives   func  CalibrationDisplay from estimator  takes as input a fitted classifier  which is used to calculate the predicted probabilities  The classifier thus must have  term  predict proba  method  For the few classifiers that do not have a  term  predict proba  method  it is possible to use  class  CalibratedClassifierCV  to calibrate the classifier outputs to probabilities   The bottom histogram gives some insight into the behavior of each classifier by showing the number of samples in each predicted probability bin      figure      auto examples calibration images sphx glr plot compare calibration 001 png     target     auto examples calibration plot compare calibration html     align  center     currentmodule   sklearn linear model   class  LogisticRegression  is more likely to return well calibrated predictions by itself as it has a canonical link function for its loss  i e  the logit link for the  ref  log loss   In the unpenalized case  this leads to the so called   balance property    see  8   and  ref  Logistic regression   In the plot above  data is generated according to a linear mechanism  which is consistent with the  class  LogisticRegression  model  the model is  well specified    and the value of the regularization parameter  C  is tuned to be appropriate  neither too strong nor too low   As a consequence  this model returns accurate predictions from its  predict proba  method  In contrast to that  the other shown models return biased probabilities  with different biases per model      currentmodule   sklearn naive bayes   class  GaussianNB   Naive Bayes  tends to push probabilities to 0 or 1  note the counts in the histograms   This is mainly because it makes the assumption that features are conditionally independent given the class  which is 
not the case in this dataset which contains 2 redundant features      currentmodule   sklearn ensemble   class  RandomForestClassifier  shows the opposite behavior  the histograms show peaks at probabilities approximately 0 2 and 0 9  while probabilities close to 0 or 1 are very rare  An explanation for this is given by Niculescu Mizil and Caruana  3     Methods such as bagging and random forests that average predictions from a base set of models can have difficulty making predictions near 0 and 1 because variance in the underlying base models will bias predictions that should be near zero or one away from these values  Because predictions are restricted to the interval  0 1   errors caused by variance tend to be one sided near zero and one  For example  if a model should predict p   0 for a case  the only way bagging can achieve this is if all bagged trees predict zero  If we add noise to the trees that bagging is averaging over  this noise will cause some trees to predict values larger than 0 for this case  thus moving the average prediction of the bagged ensemble away from 0  We observe this effect most strongly with random forests because the base level trees trained with random forests have relatively high variance due to feature subsetting   As a result  the calibration curve shows a characteristic sigmoid shape  indicating that the classifier could trust its  intuition  more and return probabilities closer to 0 or 1 typically      currentmodule   sklearn svm   class  LinearSVC   SVC  shows an even more sigmoid curve than the random forest  which is typical for maximum margin methods  compare Niculescu Mizil and Caruana  3     which focus on difficult to classify samples that are close to the decision boundary  the support vectors    Calibrating a classifier                              currentmodule   sklearn calibration  Calibrating a classifier consists of fitting a regressor  called a  calibrator   that maps the output of the classifier  as given by  term 
 decision function  or  term  predict proba   to a calibrated probability in  0  1   Denoting the output of the classifier for a given sample by  math  f i   the calibrator tries to predict the conditional event probability  math  P y i   1   f i     Ideally  the calibrator is fit on a dataset independent of the training data used to fit the classifier in the first place  This is because performance of the classifier on its training data would be better than for novel data  Using the classifier output of training data to fit the calibrator would thus result in a biased calibrator that maps to probabilities closer to 0 and 1 than it should   Usage        The  class  CalibratedClassifierCV  class is used to calibrate a classifier    class  CalibratedClassifierCV  uses a cross validation approach to ensure unbiased data is always used to fit the calibrator  The data is split into k   train set  test set   couples  as determined by  cv    When  ensemble True   default   the following procedure is repeated independently for each cross validation split   1  a clone of  base estimator  is trained on the train subset 2  the trained  base estimator  makes predictions on the test subset 3  the predictions are used to fit a calibrator  either a sigmoid or isotonic    regressor   when the data is multiclass  a calibrator is fit for every class   This results in an ensemble of k   classifier  calibrator   couples where each calibrator maps the output of its corresponding classifier into  0  1   Each couple is exposed in the  calibrated classifiers   attribute  where each entry is a calibrated classifier with a  term  predict proba  method that outputs calibrated probabilities  The output of  term  predict proba  for the main  class  CalibratedClassifierCV  instance corresponds to the average of the predicted probabilities of the  k  estimators in the  calibrated classifiers   list  The output of  term  predict  is the class that has the highest probability   It is important to 
choose ``cv`` carefully when using ``ensemble=True``. All classes should be present in both train and test subsets for every split. When a class is absent in the train subset, the predicted probability for that class will default to 0 for the ``(classifier, calibrator)`` couple of that split. This skews the :term:`predict_proba` as it averages across all couples. When a class is absent in the test subset, the calibrator for that class (within the ``(classifier, calibrator)`` couple of that split) is fit on data with no positive class. This results in ineffective calibration.\n\nWhen ``ensemble=False``, cross-validation is used to obtain "unbiased" predictions for all the data, via :func:`~sklearn.model_selection.cross_val_predict`. These unbiased predictions are then used to train the calibrator. The attribute ``calibrated_classifiers_`` consists of only one ``(classifier, calibrator)`` couple, where the classifier is the ``base_estimator`` trained on all the data. In this case the output of :term:`predict_proba` for :class:`CalibratedClassifierCV` is the predicted probabilities obtained from the single ``(classifier, calibrator)`` couple.\n\nThe main advantage of ``ensemble=True`` is to benefit from the traditional ensembling effect (similar to :ref:`bagging`). The resulting ensemble should both be well calibrated and slightly more accurate than with ``ensemble=False``. The main advantage of using ``ensemble=False`` is computational: it reduces the overall fit time by training only a single base classifier and calibrator pair, decreases the final model size and increases prediction speed.\n\nAlternatively, an already fitted classifier can be calibrated by using a :class:`~sklearn.frozen.FrozenEstimator`, as ``CalibratedClassifierCV(estimator=FrozenEstimator(estimator))``. It is up to the user to make sure that the data used for fitting the classifier is disjoint from the data used for fitting the regressor.\n\n:class:`CalibratedClassifierCV` supports the use of
two regression techniques for calibration via the ``method`` parameter: ``'sigmoid'`` and ``'isotonic'``.\n\n.. _sigmoid_regressor:\n\nSigmoid\n^^^^^^^\n\nThe sigmoid regressor, ``method='sigmoid'``, is based on Platt's logistic model [4]_:\n\n.. math::\n    p(y_i = 1 | f_i) = \\frac{1}{1 + \\exp(A f_i + B)}\n\nwhere :math:`y_i` is the true label of sample :math:`i` and :math:`f_i` is the output of the un-calibrated classifier for sample :math:`i`. :math:`A` and :math:`B` are real numbers to be determined when fitting the regressor via maximum likelihood.\n\nThe sigmoid method assumes the :ref:`calibration curve <calibration_curve>` can be corrected by applying a sigmoid function to the raw predictions. This assumption has been empirically justified in the case of :ref:`svm` with common kernel functions on various benchmark datasets in section 2.1 of Platt 1999 [4]_, but does not necessarily hold in general. Additionally, the logistic model works best if the calibration error is symmetrical, meaning the classifier output for each binary class is normally distributed with the same variance [7]_. This can be a problem for highly imbalanced classification problems, where outputs do not have equal variance.\n\nIn general this method is most effective for small sample sizes or when the un-calibrated model is under-confident and has similar calibration errors for both high and low outputs.\n\nIsotonic\n^^^^^^^^\n\nThe ``method='isotonic'`` fits a non-parametric isotonic regressor, which outputs a step-wise non-decreasing function (see :mod:`sklearn.isotonic`). It minimizes:\n\n.. math::\n    \\sum_{i=1}^{n} (y_i - \\hat{f}_i)^2\n\nsubject to :math:`\\hat{f}_i \\geq \\hat{f}_j` whenever :math:`f_i \\geq f_j`. :math:`y_i` is the true label of sample :math:`i` and :math:`\\hat{f}_i` is the output of the calibrated classifier for sample :math:`i` (i.e., the calibrated probability). This method is more general when compared to 'sigmoid' as the only restriction is that the mapping function is monotonically
increasing. It is thus more powerful as it can correct any monotonic distortion of the un-calibrated model. However, it is more prone to overfitting, especially on small datasets [6]_.\n\nOverall, 'isotonic' will perform as well as or better than 'sigmoid' when there is enough data (greater than ~ 1000 samples) to avoid overfitting [3]_.\n\n.. note:: Impact on ranking metrics like AUC\n\n   It is generally expected that calibration does not affect ranking metrics such as ROC-AUC. However, these metrics might differ after calibration when using ``method='isotonic'``, since isotonic regression introduces ties in the predicted probabilities. This can be seen as within the uncertainty of the model predictions. In case you strictly want to keep the ranking and thus AUC scores, use ``method='sigmoid'``, which is a strictly monotonic transformation and thus keeps the ranking.\n\nMulticlass support\n^^^^^^^^^^^^^^^^^^\n\nBoth isotonic and sigmoid regressors only support 1-dimensional data (e.g., binary classification output) but are extended for multiclass classification if the ``base_estimator`` supports multiclass predictions. For multiclass predictions, :class:`CalibratedClassifierCV` calibrates for each class separately in a :ref:`ovr_classification` fashion [5]_. When predicting probabilities, the calibrated probabilities for each class are predicted separately. As those probabilities do not necessarily sum to one, a postprocessing is performed to normalize them.\n\n.. rubric:: Examples\n\n* :ref:`sphx_glr_auto_examples_calibration_plot_calibration_curve.py`\n* :ref:`sphx_glr_auto_examples_calibration_plot_calibration_multiclass.py`\n* :ref:`sphx_glr_auto_examples_calibration_plot_calibration.py`\n* :ref:`sphx_glr_auto_examples_calibration_plot_compare_calibration.py`\n\n.. rubric:: References\n\n.. [1] Allan H. Murphy, 1973. :doi:`\"A New Vector Partition of the Probability Score\" <10.1175\/1520-0450(1973)012%3C0595:ANVPOT%3E2.0.CO;2>` Journal of
Applied Meteorology and Climatology\n\n.. [2] `\"On the combination of forecast probabilities for consecutive precipitation periods\" <https:\/\/journals.ametsoc.org\/waf\/article\/5\/4\/640\/40179>`_ Wilks, D. S., 1990a. Wea. Forecasting, 5, 640-650\n\n.. [3] `\"Predicting Good Probabilities with Supervised Learning\" <https:\/\/www.cs.cornell.edu\/~alexn\/papers\/calibration.icml05.crc.rev3.pdf>`_ A. Niculescu-Mizil & R. Caruana, ICML 2005\n\n.. [4] `\"Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods\" <https:\/\/www.cs.colorado.edu\/~mozer\/Teaching\/syllabi\/6622\/papers\/Platt1999.pdf>`_ J. Platt, (1999)\n\n.. [5] `\"Transforming Classifier Scores into Accurate Multiclass Probability Estimates\" <https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/775047.775151>`_ B. Zadrozny & C. Elkan, (KDD 2002)\n\n.. [6] `\"Predicting accurate probabilities with a ranking loss\" <https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC4180410\/>`_ Menon AK, Jiang XJ, Vembu S, Elkan C, Ohno-Machado L. Proc Int Conf Mach Learn. 2012;2012:703-710\n\n.. [7] `\"Beyond sigmoids: How to obtain well-calibrated probabilities from binary classifiers with beta calibration\" <https:\/\/projecteuclid.org\/euclid.ejs\/1513306867>`_ Kull, M., Silva Filho, T. M. & Flach, P. (2017)\n\n.. [8] Mario V. Wüthrich, Michael Merz, 2023. :doi:`\"Statistical Foundations of Actuarial Learning and its Applications\" <10.1007\/978-3-031-12409-9>` Springer Actuarial"}
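As a minimal end-to-end sketch of the :class:`CalibratedClassifierCV` workflow described above (the synthetic dataset and the Gaussian naive Bayes base estimator are illustrative choices, not prescribed by the text):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Illustrative synthetic binary classification problem.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# With ensemble=True (the default) and cv=5, five (classifier, calibrator)
# couples are fit; predict_proba averages their calibrated outputs.
calibrated = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=5)
calibrated.fit(X_train, y_train)

proba = calibrated.predict_proba(X_test)
print(len(calibrated.calibrated_classifiers_))  # 5, one couple per split
print(proba.shape)  # (500, 2); each row is normalized to sum to 1
```

Swapping `method="isotonic"` for `method="sigmoid"` selects Platt scaling instead; the rest of the workflow is unchanged.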
{"questions":"scikit-learn sklearn manifold Manifold learning manifold Look for the bare necessities","answers":"\n.. currentmodule:: sklearn.manifold\n\n.. _manifold:\n\n=================\nManifold learning\n=================\n\n| Look for the bare necessities\n| The simple bare necessities\n| Forget about your worries and your strife\n| I mean the bare necessities\n| Old Mother Nature's recipes\n| That bring the bare necessities of life\n|\n|             -- Baloo's song [The Jungle Book]\n\n\n\n.. figure:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_compare_methods_001.png\n   :target: ..\/auto_examples\/manifold\/plot_compare_methods.html\n   :align: center\n   :scale: 70%\n\n.. |manifold_img3| image:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_compare_methods_003.png\n  :target: ..\/auto_examples\/manifold\/plot_compare_methods.html\n  :scale: 60%\n\n.. |manifold_img4| image:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_compare_methods_004.png\n    :target: ..\/auto_examples\/manifold\/plot_compare_methods.html\n    :scale: 60%\n\n.. |manifold_img5| image:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_compare_methods_005.png\n    :target: ..\/auto_examples\/manifold\/plot_compare_methods.html\n    :scale: 60%\n\n.. |manifold_img6| image:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_compare_methods_006.png\n    :target: ..\/auto_examples\/manifold\/plot_compare_methods.html\n    :scale: 60%\n\n.. centered:: |manifold_img3| |manifold_img4| |manifold_img5| |manifold_img6|\n\n\nManifold learning is an approach to non-linear dimensionality reduction.\nAlgorithms for this task are based on the idea that the dimensionality of\nmany data sets is only artificially high.\n\n\nIntroduction\n============\n\nHigh-dimensional datasets can be very difficult to visualize.  While data\nin two or three dimensions can be plotted to show the inherent\nstructure of the data, equivalent high-dimensional plots are much less\nintuitive.  
To aid visualization of the structure of a dataset, the\ndimension must be reduced in some way.\n\nThe simplest way to accomplish this dimensionality reduction is by taking\na random projection of the data.  Though this allows some degree of\nvisualization of the data structure, the randomness of the choice leaves much\nto be desired.  In a random projection, it is likely that the more\ninteresting structure within the data will be lost.\n\n\n.. |digits_img| image:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_lle_digits_001.png\n    :target: ..\/auto_examples\/manifold\/plot_lle_digits.html\n    :scale: 50\n\n.. |projected_img| image::  ..\/auto_examples\/manifold\/images\/sphx_glr_plot_lle_digits_002.png\n    :target: ..\/auto_examples\/manifold\/plot_lle_digits.html\n    :scale: 50\n\n.. centered:: |digits_img| |projected_img|\n\n\nTo address this concern, a number of supervised and unsupervised linear\ndimensionality reduction frameworks have been designed, such as Principal\nComponent Analysis (PCA), Independent Component Analysis, Linear\nDiscriminant Analysis, and others.  These algorithms define specific\nrubrics to choose an \"interesting\" linear projection of the data.\nThese methods can be powerful, but often miss important non-linear\nstructure in the data.\n\n\n.. |PCA_img| image:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_lle_digits_003.png\n    :target: ..\/auto_examples\/manifold\/plot_lle_digits.html\n    :scale: 50\n\n.. |LDA_img| image::  ..\/auto_examples\/manifold\/images\/sphx_glr_plot_lle_digits_004.png\n    :target: ..\/auto_examples\/manifold\/plot_lle_digits.html\n    :scale: 50\n\n.. centered:: |PCA_img| |LDA_img|\n\nManifold Learning can be thought of as an attempt to generalize linear\nframeworks like PCA to be sensitive to non-linear structure in data. 
Though\nsupervised variants exist, the typical manifold learning problem is\nunsupervised: it learns the high-dimensional structure of the data\nfrom the data itself, without the use of predetermined classifications.\n\n\n.. rubric:: Examples\n\n* See :ref:`sphx_glr_auto_examples_manifold_plot_lle_digits.py` for an example of\n  dimensionality reduction on handwritten digits.\n\n* See :ref:`sphx_glr_auto_examples_manifold_plot_compare_methods.py` for an example of\n  dimensionality reduction on a toy \"S-curve\" dataset.\n\n* See :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py` for an example of\n  using manifold learning to map the stock market structure based on historical stock\n  prices.\n\nThe manifold learning implementations available in scikit-learn are\nsummarized below\n\n.. _isomap:\n\nIsomap\n======\n\nOne of the earliest approaches to manifold learning is the Isomap\nalgorithm, short for Isometric Mapping.  Isomap can be viewed as an\nextension of Multi-dimensional Scaling (MDS) or Kernel PCA.\nIsomap seeks a lower-dimensional embedding which maintains geodesic\ndistances between all points.  Isomap can be performed with the object\n:class:`Isomap`.\n\n.. figure:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_lle_digits_005.png\n   :target: ..\/auto_examples\/manifold\/plot_lle_digits.html\n   :align: center\n   :scale: 50\n\n.. dropdown:: Complexity\n\n  The Isomap algorithm comprises three stages:\n\n  1. **Nearest neighbor search.**  Isomap uses\n     :class:`~sklearn.neighbors.BallTree` for efficient neighbor search.\n     The cost is approximately :math:`O[D \\log(k) N \\log(N)]`, for :math:`k`\n     nearest neighbors of :math:`N` points in :math:`D` dimensions.\n\n  2. **Shortest-path graph search.**  The most efficient known algorithms\n     for this are *Dijkstra's Algorithm*, which is approximately\n     :math:`O[N^2(k + \\log(N))]`, or the *Floyd-Warshall algorithm*, which\n     is :math:`O[N^3]`.  
The algorithm can be selected by the user with\n     the ``path_method`` keyword of ``Isomap``.  If unspecified, the code\n     attempts to choose the best algorithm for the input data.\n\n  3. **Partial eigenvalue decomposition.**  The embedding is encoded in the\n     eigenvectors corresponding to the :math:`d` largest eigenvalues of the\n     :math:`N \\times N` isomap kernel.  For a dense solver, the cost is\n     approximately :math:`O[d N^2]`.  This cost can often be improved using\n     the ``ARPACK`` solver.  The eigensolver can be specified by the user\n     with the ``eigen_solver`` keyword of ``Isomap``.  If unspecified, the\n     code attempts to choose the best algorithm for the input data.\n\n  The overall complexity of Isomap is\n  :math:`O[D \\log(k) N \\log(N)] + O[N^2(k + \\log(N))] + O[d N^2]`.\n\n  * :math:`N` : number of training data points\n  * :math:`D` : input dimension\n  * :math:`k` : number of nearest neighbors\n  * :math:`d` : output dimension\n\n.. rubric:: References\n\n* `\"A global geometric framework for nonlinear dimensionality reduction\"\n  <http:\/\/science.sciencemag.org\/content\/290\/5500\/2319.full>`_\n  Tenenbaum, J.B.; De Silva, V.; & Langford, J.C.  Science 290 (5500)\n\n.. _locally_linear_embedding:\n\nLocally Linear Embedding\n========================\n\nLocally linear embedding (LLE) seeks a lower-dimensional projection of the data\nwhich preserves distances within local neighborhoods.  It can be thought\nof as a series of local Principal Component Analyses which are globally\ncompared to find the best non-linear embedding.\n\nLocally linear embedding can be performed with function\n:func:`locally_linear_embedding` or its object-oriented counterpart\n:class:`LocallyLinearEmbedding`.\n\n.. figure:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_lle_digits_006.png\n   :target: ..\/auto_examples\/manifold\/plot_lle_digits.html\n   :align: center\n   :scale: 50\n\n.. 
dropdown:: Complexity\n\n  The standard LLE algorithm comprises three stages:\n\n  1. **Nearest Neighbors Search**.  See discussion under Isomap above.\n\n  2. **Weight Matrix Construction**. :math:`O[D N k^3]`.\n     The construction of the LLE weight matrix involves the solution of a\n     :math:`k \\times k` linear equation for each of the :math:`N` local\n     neighborhoods.\n\n  3. **Partial Eigenvalue Decomposition**. See discussion under Isomap above.\n\n  The overall complexity of standard LLE is\n  :math:`O[D \\log(k) N \\log(N)] + O[D N k^3] + O[d N^2]`.\n\n  * :math:`N` : number of training data points\n  * :math:`D` : input dimension\n  * :math:`k` : number of nearest neighbors\n  * :math:`d` : output dimension\n\n.. rubric:: References\n\n* `\"Nonlinear dimensionality reduction by locally linear embedding\"\n  <http:\/\/www.sciencemag.org\/content\/290\/5500\/2323.full>`_\n  Roweis, S. & Saul, L.  Science 290:2323 (2000)\n\n\nModified Locally Linear Embedding\n=================================\n\nOne well-known issue with LLE is the regularization problem.  When the number\nof neighbors is greater than the number of input dimensions, the matrix\ndefining each local neighborhood is rank-deficient.  To address this, standard\nLLE applies an arbitrary regularization parameter :math:`r`, which is chosen\nrelative to the trace of the local weight matrix.  Though it can be shown\nformally that as :math:`r \\to 0`, the solution converges to the desired\nembedding, there is no guarantee that the optimal solution will be found\nfor :math:`r > 0`.  This problem manifests itself in embeddings which distort\nthe underlying geometry of the manifold.\n\nOne method to address the regularization problem is to use multiple weight\nvectors in each neighborhood.  This is the essence of *modified locally\nlinear embedding* (MLLE).  
MLLE can be  performed with function\n:func:`locally_linear_embedding` or its object-oriented counterpart\n:class:`LocallyLinearEmbedding`, with the keyword ``method = 'modified'``.\nIt requires ``n_neighbors > n_components``.\n\n.. figure:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_lle_digits_007.png\n   :target: ..\/auto_examples\/manifold\/plot_lle_digits.html\n   :align: center\n   :scale: 50\n\n.. dropdown:: Complexity\n\n  The MLLE algorithm comprises three stages:\n\n  1. **Nearest Neighbors Search**.  Same as standard LLE\n\n  2. **Weight Matrix Construction**. Approximately\n     :math:`O[D N k^3] + O[N (k-D) k^2]`.  The first term is exactly equivalent\n     to that of standard LLE.  The second term has to do with constructing the\n     weight matrix from multiple weights.  In practice, the added cost of\n     constructing the MLLE weight matrix is relatively small compared to the\n     cost of stages 1 and 3.\n\n  3. **Partial Eigenvalue Decomposition**. Same as standard LLE\n\n  The overall complexity of MLLE is\n  :math:`O[D \\log(k) N \\log(N)] + O[D N k^3] + O[N (k-D) k^2] + O[d N^2]`.\n\n  * :math:`N` : number of training data points\n  * :math:`D` : input dimension\n  * :math:`k` : number of nearest neighbors\n  * :math:`d` : output dimension\n\n.. rubric:: References\n\n* `\"MLLE: Modified Locally Linear Embedding Using Multiple Weights\"\n  <https:\/\/citeseerx.ist.psu.edu\/doc_view\/pid\/0b060fdbd92cbcc66b383bcaa9ba5e5e624d7ee3>`_\n  Zhang, Z. & Wang, J.\n\n\nHessian Eigenmapping\n====================\n\nHessian Eigenmapping (also known as Hessian-based LLE: HLLE) is another method\nof solving the regularization problem of LLE.  It revolves around a\nhessian-based quadratic form at each neighborhood which is used to recover\nthe locally linear structure.  
Though other implementations note its poor\nscaling with data size, ``sklearn`` implements some algorithmic\nimprovements which make its cost comparable to that of other LLE variants\nfor small output dimension.  HLLE can be  performed with function\n:func:`locally_linear_embedding` or its object-oriented counterpart\n:class:`LocallyLinearEmbedding`, with the keyword ``method = 'hessian'``.\nIt requires ``n_neighbors > n_components * (n_components + 3) \/ 2``.\n\n.. figure:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_lle_digits_008.png\n   :target: ..\/auto_examples\/manifold\/plot_lle_digits.html\n   :align: center\n   :scale: 50\n\n.. dropdown:: Complexity\n\n  The HLLE algorithm comprises three stages:\n\n  1. **Nearest Neighbors Search**.  Same as standard LLE\n\n  2. **Weight Matrix Construction**. Approximately\n     :math:`O[D N k^3] + O[N d^6]`.  The first term reflects a similar\n     cost to that of standard LLE.  The second term comes from a QR\n     decomposition of the local hessian estimator.\n\n  3. **Partial Eigenvalue Decomposition**. Same as standard LLE.\n\n  The overall complexity of standard HLLE is\n  :math:`O[D \\log(k) N \\log(N)] + O[D N k^3] + O[N d^6] + O[d N^2]`.\n\n  * :math:`N` : number of training data points\n  * :math:`D` : input dimension\n  * :math:`k` : number of nearest neighbors\n  * :math:`d` : output dimension\n\n.. rubric:: References\n\n* `\"Hessian Eigenmaps: Locally linear embedding techniques for\n  high-dimensional data\" <http:\/\/www.pnas.org\/content\/100\/10\/5591>`_\n  Donoho, D. & Grimes, C. Proc Natl Acad Sci USA. 100:5591 (2003)\n\n.. _spectral_embedding:\n\nSpectral Embedding\n====================\n\nSpectral Embedding is an approach to calculating a non-linear embedding.\nScikit-learn implements Laplacian Eigenmaps, which finds a low dimensional\nrepresentation of the data using a spectral decomposition of the graph\nLaplacian. 
The graph generated can be considered as a discrete approximation of\nthe low dimensional manifold in the high dimensional space. Minimization of a\ncost function based on the graph ensures that points close to each other on\nthe manifold are mapped close to each other in the low dimensional space,\npreserving local distances. Spectral embedding can be performed with the\nfunction :func:`spectral_embedding` or its object-oriented counterpart\n:class:`SpectralEmbedding`.\n\n.. dropdown:: Complexity\n\n  The Spectral Embedding (Laplacian Eigenmaps) algorithm comprises three stages:\n\n  1. **Weighted Graph Construction**. Transform the raw input data into\n     a graph representation using an affinity (adjacency) matrix.\n\n  2. **Graph Laplacian Construction**. The unnormalized graph Laplacian\n     is constructed as :math:`L = D - A` and the normalized one as\n     :math:`L = D^{-\\frac{1}{2}} (D - A) D^{-\\frac{1}{2}}`.\n\n  3. **Partial Eigenvalue Decomposition**. Eigenvalue decomposition is\n     done on the graph Laplacian.\n\n  The overall complexity of spectral embedding is\n  :math:`O[D \\log(k) N \\log(N)] + O[D N k^3] + O[d N^2]`.\n\n  * :math:`N` : number of training data points\n  * :math:`D` : input dimension\n  * :math:`k` : number of nearest neighbors\n  * :math:`d` : output dimension\n\n.. rubric:: References\n\n* `\"Laplacian Eigenmaps for Dimensionality Reduction\n  and Data Representation\"\n  <https:\/\/web.cse.ohio-state.edu\/~mbelkin\/papers\/LEM_NC_03.pdf>`_\n  M. Belkin, P. 
Niyogi, Neural Computation, June 2003; 15 (6):1373-1396\n\n\nLocal Tangent Space Alignment\n=============================\n\nThough not technically a variant of LLE, Local tangent space alignment (LTSA)\nis algorithmically similar enough to LLE that it can be put in this category.\nRather than focusing on preserving neighborhood distances as in LLE, LTSA\nseeks to characterize the local geometry at each neighborhood via its\ntangent space, and performs a global optimization to align these local\ntangent spaces to learn the embedding.  LTSA can be performed with function\n:func:`locally_linear_embedding` or its object-oriented counterpart\n:class:`LocallyLinearEmbedding`, with the keyword ``method = 'ltsa'``.\n\n.. figure:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_lle_digits_009.png\n   :target: ..\/auto_examples\/manifold\/plot_lle_digits.html\n   :align: center\n   :scale: 50\n\n.. dropdown:: Complexity\n\n  The LTSA algorithm comprises three stages:\n\n  1. **Nearest Neighbors Search**.  Same as standard LLE\n\n  2. **Weight Matrix Construction**. Approximately\n     :math:`O[D N k^3] + O[k^2 d]`.  The first term reflects a similar\n     cost to that of standard LLE.\n\n  3. **Partial Eigenvalue Decomposition**. Same as standard LLE\n\n  The overall complexity of standard LTSA is\n  :math:`O[D \\log(k) N \\log(N)] + O[D N k^3] + O[k^2 d] + O[d N^2]`.\n\n  * :math:`N` : number of training data points\n  * :math:`D` : input dimension\n  * :math:`k` : number of nearest neighbors\n  * :math:`d` : output dimension\n\n.. rubric:: References\n\n* :arxiv:`\"Principal manifolds and nonlinear dimensionality reduction via\n  tangent space alignment\"\n  <cs\/0212008>`\n  Zhang, Z. & Zha, H. Journal of Shanghai Univ. 8:406 (2004)\n\n.. 
_multidimensional_scaling:\n\nMulti-dimensional Scaling (MDS)\n===============================\n\n`Multidimensional scaling <https:\/\/en.wikipedia.org\/wiki\/Multidimensional_scaling>`_\n(:class:`MDS`) seeks a low-dimensional\nrepresentation of the data in which the distances respect well the\ndistances in the original high-dimensional space.\n\nIn general, :class:`MDS` is a technique used for analyzing similarity or\ndissimilarity data. It attempts to model similarity or dissimilarity data as\ndistances in a geometric space. The data can be ratings of similarity between\nobjects, interaction frequencies of molecules, or trade indices between\ncountries.\n\nThere exist two types of MDS algorithms: metric and non-metric. In\nscikit-learn, the class :class:`MDS` implements both. In metric MDS, the input\nsimilarity matrix arises from a metric (and thus respects the triangle\ninequality); the distances between two output points are then set to be as\nclose as possible to the similarity or dissimilarity data. In the non-metric\nversion, the algorithms will try to preserve the order of the distances, and\nhence seek a monotonic relationship between the distances in the embedded\nspace and the similarities\/dissimilarities.\n\n.. figure:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_lle_digits_010.png\n   :target: ..\/auto_examples\/manifold\/plot_lle_digits.html\n   :align: center\n   :scale: 50\n\n\nLet :math:`S` be the similarity matrix, and :math:`X` the coordinates of the\n:math:`n` input points. Disparities :math:`\\hat{d}_{ij}` are a transformation of\nthe similarities, chosen in some optimal way. The objective, called the\nstress, is then defined by :math:`\\sum_{i < j} (d_{ij}(X) - \\hat{d}_{ij})^2`.\n\n\n.. dropdown:: Metric MDS\n\n  In the simplest metric :class:`MDS` model, called *absolute MDS*, disparities are defined by\n  :math:`\\hat{d}_{ij} = S_{ij}`. 
With absolute MDS, the value :math:`S_{ij}`\n  should then correspond exactly to the distance between point :math:`i` and\n  :math:`j` in the embedding space.\n\n  Most commonly, disparities are set to :math:`\\hat{d}_{ij} = b S_{ij}`.\n\n.. dropdown:: Nonmetric MDS\n\n  Non-metric :class:`MDS` focuses on the ordination of the data. If\n  :math:`S_{ij} > S_{jk}`, then the embedding should enforce :math:`d_{ij} <\n  d_{jk}`. For this reason, we discuss it in terms of dissimilarities\n  (:math:`\\delta_{ij}`) instead of similarities (:math:`S_{ij}`). Note that\n  dissimilarities can easily be obtained from similarities through a simple\n  transform, e.g. :math:`\\delta_{ij}=c_1-c_2 S_{ij}` for some real constants\n  :math:`c_1, c_2`. A simple algorithm to enforce proper ordination is to use a\n  monotonic regression of :math:`d_{ij}` on :math:`\\delta_{ij}`, yielding\n  disparities :math:`\\hat{d}_{ij}` in the same order as :math:`\\delta_{ij}`.\n\n  A trivial solution to this problem is to set all the points on the origin. In\n  order to avoid that, the disparities :math:`\\hat{d}_{ij}` are normalized. Note\n  that since we only care about relative ordering, our objective should be\n  invariant to simple translation and scaling; however, the stress used in metric\n  MDS is sensitive to scaling. To address this, non-metric MDS may use a\n  normalized stress, known as Stress-1, defined as\n\n  .. math::\n      \\sqrt{\\frac{\\sum_{i < j} (d_{ij} - \\hat{d}_{ij})^2}{\\sum_{i < j} d_{ij}^2}}.\n\n  The use of normalized Stress-1 can be enabled by setting `normalized_stress=True`;\n  however, it is only compatible with the non-metric MDS problem and will be ignored\n  in the metric case.\n\n  .. figure:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_mds_001.png\n    :target: ..\/auto_examples\/manifold\/plot_mds.html\n    :align: center\n    :scale: 60\n\n.. 
rubric:: References\n\n* `\"Modern Multidimensional Scaling - Theory and Applications\"\n  <https:\/\/www.springer.com\/fr\/book\/9780387251509>`_\n  Borg, I.; Groenen P. Springer Series in Statistics (1997)\n\n* `\"Nonmetric multidimensional scaling: a numerical method\"\n  <http:\/\/cda.psych.uiuc.edu\/psychometrika_highly_cited_articles\/kruskal_1964b.pdf>`_\n  Kruskal, J. Psychometrika, 29 (1964)\n\n* `\"Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis\"\n  <http:\/\/cda.psych.uiuc.edu\/psychometrika_highly_cited_articles\/kruskal_1964a.pdf>`_\n  Kruskal, J. Psychometrika, 29, (1964)\n\n.. _t_sne:\n\nt-distributed Stochastic Neighbor Embedding (t-SNE)\n===================================================\n\nt-SNE (:class:`TSNE`) converts affinities of data points to probabilities.\nThe affinities in the original space are represented by Gaussian joint\nprobabilities and the affinities in the embedded space are represented by\nStudent's t-distributions. This allows t-SNE to be particularly sensitive\nto local structure and has a few other advantages over existing techniques:\n\n* Revealing the structure at many scales on a single map\n* Revealing data that lie in multiple, different, manifolds or clusters\n* Reducing the tendency to crowd points together at the center\n\nWhile Isomap, LLE and variants are best suited to unfold a single continuous\nlow dimensional manifold, t-SNE will focus on the local structure of the data\nand will tend to extract clustered local groups of samples as highlighted on\nthe S-curve example. This ability to group samples based on the local structure\nmight be beneficial to visually disentangle a dataset that comprises several\nmanifolds at once as is the case in the digits dataset.\n\nThe Kullback-Leibler (KL) divergence of the joint\nprobabilities in the original space and the embedded space will be minimized\nby gradient descent. 
Note that the KL divergence is not convex, i.e.\nmultiple restarts with different initializations will end up in local minima\nof the KL divergence. Hence, it is sometimes useful to try different seeds\nand select the embedding with the lowest KL divergence.\n\nThe disadvantages to using t-SNE are roughly:\n\n* t-SNE is computationally expensive, and can take several hours on million-sample\n  datasets where PCA will finish in seconds or minutes\n* The Barnes-Hut t-SNE method is limited to two or three dimensional embeddings.\n* The algorithm is stochastic and multiple restarts with different seeds can\n  yield different embeddings. However, it is perfectly legitimate to pick the\n  embedding with the least error.\n* Global structure is not explicitly preserved. This problem is mitigated by\n  initializing points with PCA (using `init='pca'`).\n\n\n.. figure:: ..\/auto_examples\/manifold\/images\/sphx_glr_plot_lle_digits_013.png\n   :target: ..\/auto_examples\/manifold\/plot_lle_digits.html\n   :align: center\n   :scale: 50\n\n.. dropdown:: Optimizing t-SNE\n\n  The main purpose of t-SNE is visualization of high-dimensional data. Hence,\n  it works best when the data will be embedded on two or three dimensions.\n\n  Optimizing the KL divergence can be a little bit tricky sometimes. There are\n  five parameters that control the optimization of t-SNE and therefore possibly\n  the quality of the resulting embedding:\n\n  * perplexity\n  * early exaggeration factor\n  * learning rate\n  * maximum number of iterations\n  * angle (not used in the exact method)\n\n  The perplexity is defined as :math:`k=2^{(S)}` where :math:`S` is the Shannon\n  entropy of the conditional probability distribution. 
The perplexity of a\n  :math:`k`-sided die is :math:`k`, so that :math:`k` is effectively the number of\n  nearest neighbors t-SNE considers when generating the conditional probabilities.\n  Larger perplexities lead to more nearest neighbors and are less sensitive to small\n  structure. Conversely, a lower perplexity considers a smaller number of\n  neighbors, and thus ignores more global information in favour of the\n  local neighborhood. As dataset sizes get larger, more points will be\n  required to get a reasonable sample of the local neighborhood, and hence\n  larger perplexities may be required. Similarly, noisier datasets will require\n  larger perplexity values to encompass enough local neighbors to see beyond\n  the background noise.\n\n  The maximum number of iterations is usually high enough and does not need\n  any tuning. The optimization consists of two phases: the early exaggeration\n  phase and the final optimization. During early exaggeration the joint\n  probabilities in the original space will be artificially increased by\n  multiplication with a given factor. Larger factors result in larger gaps\n  between natural clusters in the data. If the factor is too high, the KL\n  divergence could increase during this phase. Usually it does not have to be\n  tuned. A critical parameter is the learning rate. If it is too low, gradient\n  descent will get stuck in a bad local minimum. If it is too high, the KL\n  divergence will increase during optimization. A heuristic suggested in\n  Belkina et al. (2019) is to set the learning rate to the sample size\n  divided by the early exaggeration factor. We implement this heuristic\n  as the `learning_rate='auto'` argument. More tips can be found in\n  Laurens van der Maaten's FAQ (see references). The last parameter, angle,\n  is a tradeoff between performance and accuracy. 
 Larger angles imply that we
  can approximate larger regions by a single point, leading to better speed
  but less accurate results.

  `"How to Use t-SNE Effectively" <https://distill.pub/2016/misread-tsne/>`_
  provides a good discussion of the effects of the various parameters, as well
  as interactive plots to explore the effects of different parameters.

.. dropdown:: Barnes-Hut t-SNE

  The Barnes-Hut t-SNE that has been implemented here is usually much slower than
  other manifold learning algorithms. The optimization is quite difficult
  and the computation of the gradient is :math:`O[d N \log(N)]`, where :math:`d`
  is the number of output dimensions and :math:`N` is the number of samples. The
  Barnes-Hut method improves on the exact method, whose t-SNE complexity is
  :math:`O[d N^2]`, but has several other notable differences:

  * The Barnes-Hut implementation only works when the target dimensionality is 3
    or less. The 2D case is typical when building visualizations.
  * Barnes-Hut only works with dense input data. Sparse data matrices can only be
    embedded with the exact method, or can be approximated by a dense low rank
    projection, for instance using :class:`~sklearn.decomposition.PCA`.
  * Barnes-Hut is an approximation of the exact method. The approximation is
    parameterized with the angle parameter, therefore the angle parameter is
    unused when method="exact".
  * Barnes-Hut is significantly more scalable. Barnes-Hut can be used to embed
    hundreds of thousands of data points, while the exact method can handle
    thousands of samples before becoming computationally intractable.

  For visualization purposes (which is the main use case of t-SNE), using the
  Barnes-Hut method is strongly recommended.
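  Both variants are selected through the ``method`` parameter of
  :class:`TSNE`. A minimal comparison sketch on a small digits subset (the
  subset size and perplexity value here are illustrative choices, not
  recommendations):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)
X = X[:150]  # keep the sample small so the exact method stays tractable

# Barnes-Hut (the default): fast approximation controlled by `angle`,
# restricted to fewer than four output dimensions.
emb_bh = TSNE(method="barnes_hut", angle=0.5, perplexity=20,
              random_state=0).fit_transform(X)

# Exact: O(N^2) gradient computation; `angle` is ignored here.
emb_exact = TSNE(method="exact", perplexity=20,
                 random_state=0).fit_transform(X)

print(emb_bh.shape, emb_exact.shape)  # (150, 2) (150, 2)
```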
 The exact t-SNE method is useful
  for checking the theoretical properties of the embedding, possibly in higher
  dimensional space, but is limited to small datasets due to computational
  constraints.

  Also note that the digits labels roughly match the natural grouping found by
  t-SNE, while the linear 2D projection of the PCA model yields a representation
  where label regions largely overlap. This is a strong clue that this data can
  be well separated by non-linear methods that focus on the local structure (e.g.
  an SVM with a Gaussian RBF kernel). However, failing to visualize well
  separated homogeneously labeled groups with t-SNE in 2D does not necessarily
  imply that the data cannot be correctly classified by a supervised model. It
  might be the case that 2 dimensions are not high enough to accurately represent
  the internal structure of the data.

.. rubric:: References

* `"Visualizing High-Dimensional Data Using t-SNE"
  <https://jmlr.org/papers/v9/vandermaaten08a.html>`_
  van der Maaten, L.J.P.; Hinton, G. Journal of Machine Learning Research (2008)

* `"t-Distributed Stochastic Neighbor Embedding"
  <https://lvdmaaten.github.io/tsne/>`_ van der Maaten, L.J.P.

* `"Accelerating t-SNE using Tree-Based Algorithms"
  <https://lvdmaaten.github.io/publications/papers/JMLR_2014.pdf>`_
  van der Maaten, L.J.P. Journal of Machine Learning Research 15(Oct):3221-3245, 2014.

* `"Automated optimized parameters for T-distributed stochastic neighbor
  embedding improve visualization and analysis of large datasets"
  <https://www.nature.com/articles/s41467-019-13055-y>`_
  Belkina, A.C., Ciccolella, C.O., Anno, R., Halpert, R., Spidlen, J.,
  Snyder-Cappione, J.E. Nature Communications 10, 5415 (2019).

Tips on practical use
=====================

* Make sure the same scale is used over all features.
 Because manifold
  learning methods are based on a nearest-neighbor search, the algorithm
  may perform poorly otherwise. See :ref:`StandardScaler <preprocessing_scaler>`
  for convenient ways of scaling heterogeneous data.

* The reconstruction error computed by each routine can be used to choose
  the optimal output dimension. For a :math:`d`-dimensional manifold embedded
  in a :math:`D`-dimensional parameter space, the reconstruction error will
  decrease as ``n_components`` is increased until ``n_components == d``.

* Note that noisy data can "short-circuit" the manifold, in essence acting
  as a bridge between parts of the manifold that would otherwise be
  well-separated. Manifold learning on noisy and/or incomplete data is
  an active area of research.

* Certain input configurations can lead to singular weight matrices, for
  example when more than two points in the dataset are identical, or when
  the data is split into disjointed groups. In this case, ``solver='arpack'``
  will fail to find the null space. The easiest way to address this is to
  use ``solver='dense'``, which will work on a singular matrix, though it may
  be very slow depending on the number of input points. Alternatively, one
  can attempt to understand the source of the singularity: if it is due to
  disjoint sets, increasing ``n_neighbors`` may help. If it is due to
  identical points in the dataset, removing these points may help.

.. seealso::

   :ref:`random_trees_embedding` can also be useful to derive non-linear
   representations of feature space, although it does not perform
   dimensionality reduction.
{"questions":"scikit-learn Installing the development version of scikit learn advanced installation mindependencysubstitutions rst This section introduces how to install the main branch of scikit learn","answers":"\n.. _advanced-installation:\n\n.. include:: ..\/min_dependency_substitutions.rst\n\n==================================================\nInstalling the development version of scikit-learn\n==================================================\n\nThis section introduces how to install the **main branch** of scikit-learn.\nThis can be done by either installing a nightly build or building from source.\n\n.. _install_nightly_builds:\n\nInstalling nightly builds\n=========================\n\nThe continuous integration servers of the scikit-learn project build, test\nand upload wheel packages for the most recent Python version on a nightly\nbasis.\n\nInstalling a nightly build is the quickest way to:\n\n- try a new feature that will be shipped in the next release (that is, a\n  feature from a pull-request that was recently merged to the main branch);\n\n- check whether a bug you encountered has been fixed since the last release.\n\nYou can install the nightly build of scikit-learn using the `scientific-python-nightly-wheels`\nindex from the PyPI registry of `anaconda.org`:\n\n.. prompt:: bash $\n\n  pip install --pre --extra-index-url https:\/\/pypi.anaconda.org\/scientific-python-nightly-wheels\/simple scikit-learn\n\nNote that first uninstalling scikit-learn might be required to be able to\ninstall nightly builds of scikit-learn.\n\n.. _install_bleeding_edge:\n\nBuilding from source\n====================\n\nBuilding from source is required to work on a contribution (bug fix, new\nfeature, code or documentation improvement).\n\n.. _git_repo:\n\n#. Use `Git <https:\/\/git-scm.com\/>`_ to check out the latest source from the\n   `scikit-learn repository <https:\/\/github.com\/scikit-learn\/scikit-learn>`_ on\n   GitHub:\n\n   .. 
prompt:: bash $\n\n     git clone git@github.com:scikit-learn\/scikit-learn.git  # add --depth 1 if your connection is slow\n     cd scikit-learn\n\n   If you plan on submitting a pull-request, you should clone from your fork\n   instead.\n\n#. Install a recent version of Python (3.9 or later at the time of writing) for\n   instance using Miniforge3_. Miniforge provides a conda-based distribution of\n   Python and the most popular scientific libraries.\n\n   If you installed Python with conda, we recommend creating a dedicated\n   `conda environment`_ with all the build dependencies of scikit-learn\n   (namely NumPy_, SciPy_, Cython_, meson-python_ and Ninja_):\n\n   .. prompt:: bash $\n\n     conda create -n sklearn-env -c conda-forge python numpy scipy cython meson-python ninja\n\n   It is not always necessary but it is safer to open a new prompt before\n   activating the newly created conda environment.\n\n   .. prompt:: bash $\n\n     conda activate sklearn-env\n\n#. **Alternative to conda:** You can use alternative installations of Python\n   provided they are recent enough (3.9 or higher at the time of writing).\n   Here is an example of how to create a build environment for a Linux system's\n   Python. Build dependencies are installed with `pip` in a dedicated virtualenv_\n   to avoid disrupting other Python programs installed on the system:\n\n   .. prompt:: bash $\n\n     python3 -m venv sklearn-env\n     source sklearn-env\/bin\/activate\n     pip install wheel numpy scipy cython meson-python ninja\n\n#. Install a compiler with OpenMP_ support for your platform. See instructions\n   for :ref:`compiler_windows`, :ref:`compiler_macos`, :ref:`compiler_linux`\n   and :ref:`compiler_freebsd`.\n\n#. Build the project with pip:\n\n   .. prompt:: bash $\n\n     pip install --editable . \\\n        --verbose --no-build-isolation \\\n        --config-settings editable-verbose=true\n\n#. 
Check that the installed scikit-learn has a version number ending with\n   `.dev0`:\n\n   .. prompt:: bash $\n\n     python -c \"import sklearn; sklearn.show_versions()\"\n\n#. Please refer to the :ref:`developers_guide` and :ref:`pytest_tips` to run\n   the tests on the module of your choice.\n\n.. note::\n\n    `--config-settings editable-verbose=true` is optional but recommended\n    to avoid surprises when you import `sklearn`. `meson-python` implements\n    editable installs by rebuilding `sklearn` when executing `import sklearn`.\n    With the recommended setting you will see a message when this happens,\n    rather than potentially waiting without feedback and wondering\n    what is taking so long. Bonus: this means you only have to run the `pip\n    install` command once; `sklearn` will automatically be rebuilt when\n    importing `sklearn`.\n\n    Note that `--config-settings` is only supported in `pip` version 23.1 or\n    later. To upgrade `pip` to a compatible version, run `pip install -U pip`.\n\nDependencies\n------------\n\nRuntime dependencies\n~~~~~~~~~~~~~~~~~~~~\n\nScikit-learn requires the following dependencies both at build time and at\nruntime:\n\n- Python (>= 3.8),\n- NumPy (>= |NumpyMinVersion|),\n- SciPy (>= |ScipyMinVersion|),\n- Joblib (>= |JoblibMinVersion|),\n- threadpoolctl (>= |ThreadpoolctlMinVersion|).\n\nBuild dependencies\n~~~~~~~~~~~~~~~~~~\n\nBuilding Scikit-learn also requires:\n\n..\n    # The following places need to be in sync with regard to Cython version:\n    # - .circleci config file\n    # - sklearn\/_build_utils\/__init__.py\n    # - advanced installation guide\n\n- Cython >= |CythonMinVersion|\n- A C\/C++ compiler and a matching OpenMP_ runtime library. See the\n  :ref:`platform system specific instructions\n  <platform_specific_instructions>` for more details.\n\n.. note::\n\n   If OpenMP is not supported by the compiler, the build will be done with\n   OpenMP functionalities disabled. 
This is not recommended since it will force\n   some estimators to run in sequential mode instead of leveraging thread-based\n   parallelism. Setting the ``SKLEARN_FAIL_NO_OPENMP`` environment variable\n   (before cythonization) will force the build to fail if OpenMP is not\n   supported.\n\nSince version 0.21, scikit-learn automatically detects and uses the linear\nalgebra library used by SciPy **at runtime**. Scikit-learn has therefore no\nbuild dependency on BLAS\/LAPACK implementations such as OpenBlas, Atlas, Blis\nor MKL.\n\nTest dependencies\n~~~~~~~~~~~~~~~~~\n\nRunning tests requires:\n\n- pytest >= |PytestMinVersion|\n\nSome tests also require `pandas <https:\/\/pandas.pydata.org>`_.\n\n\nBuilding a specific version from a tag\n--------------------------------------\n\nIf you want to build a stable version, you can ``git checkout <VERSION>``\nto get the code for that particular version, or download a zip archive of\nthe version from GitHub.\n\n.. _platform_specific_instructions:\n\nPlatform-specific instructions\n==============================\n\nHere are instructions to install a working C\/C++ compiler with OpenMP support\nto build scikit-learn Cython extensions for each supported platform.\n\n.. _compiler_windows:\n\nWindows\n-------\n\nFirst, download the `Build Tools for Visual Studio 2019 installer\n<https:\/\/aka.ms\/vs\/17\/release\/vs_buildtools.exe>`_.\n\nRun the downloaded `vs_buildtools.exe` file; during the installation you will\nneed to make sure you select \"Desktop development with C++\", similarly to this\nscreenshot:\n\n.. image:: ..\/images\/visual-studio-build-tools-selection.png\n\nSecondly, find out if you are running 64-bit or 32-bit Python. The building\ncommand depends on the architecture of the Python interpreter. You can check\nthe architecture by running the following in ``cmd`` or ``powershell``\nconsole:\n\n.. 
prompt:: bash $\n\n    python -c \"import struct; print(struct.calcsize('P') * 8)\"\n\nFor 64-bit Python, configure the build environment by running the following\ncommands in ``cmd`` or an Anaconda Prompt (if you use Anaconda):\n\n.. sphinx-prompt 1.3.0 (used in doc-min-dependencies CI task) does not support `batch` prompt type,\n.. so we work around by using a known prompt type and an explicit prompt text.\n..\n.. prompt:: bash C:\\>\n\n    SET DISTUTILS_USE_SDK=1\n    \"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Auxiliary\\Build\\vcvarsall.bat\" x64\n\nReplace ``x64`` by ``x86`` to build for 32-bit Python.\n\nPlease be aware that the path above might be different from user to user. The\naim is to point to the \"vcvarsall.bat\" file that will set the necessary\nenvironment variables in the current command prompt.\n\nFinally, build scikit-learn with this command prompt:\n\n.. prompt:: bash $\n\n    pip install --editable . \\\n        --verbose --no-build-isolation \\\n        --config-settings editable-verbose=true\n\n.. _compiler_macos:\n\nmacOS\n-----\n\nThe default C compiler on macOS, Apple clang (confusingly aliased as\n`\/usr\/bin\/gcc`), does not directly support OpenMP. We present two alternatives\nto enable OpenMP support:\n\n- either install `conda-forge::compilers` with conda;\n\n- or install `libomp` with Homebrew to extend the default Apple clang compiler.\n\nFor Apple Silicon M1 hardware, only the conda-forge method below is known to\nwork at the time of writing (January 2021). 
You can install the `macos\/arm64`\ndistribution of conda using the `miniforge installer\n<https:\/\/github.com\/conda-forge\/miniforge#miniforge>`_\n\nmacOS compilers from conda-forge\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nIf you use the conda package manager (version >= 4.7), you can install the\n``compilers`` meta-package from the conda-forge channel, which provides\nOpenMP-enabled C\/C++ compilers based on the llvm toolchain.\n\nFirst install the macOS command line tools:\n\n.. prompt:: bash $\n\n    xcode-select --install\n\nIt is recommended to use a dedicated `conda environment`_ to build\nscikit-learn from source:\n\n.. prompt:: bash $\n\n    conda create -n sklearn-dev -c conda-forge python numpy scipy cython \\\n        joblib threadpoolctl pytest compilers llvm-openmp meson-python ninja\n\nIt is not always necessary but it is safer to open a new prompt before\nactivating the newly created conda environment.\n\n.. prompt:: bash $\n\n    conda activate sklearn-dev\n    make clean\n    pip install --editable . \\\n        --verbose --no-build-isolation \\\n        --config-settings editable-verbose=true\n\n.. note::\n\n    If you get any conflicting dependency error message, try commenting out\n    any custom conda configuration in the ``$HOME\/.condarc`` file. In\n    particular the ``channel_priority: strict`` directive is known to cause\n    problems for this setup.\n\nYou can check that the custom compilers are properly installed from conda\nforge using the following command:\n\n.. prompt:: bash $\n\n    conda list\n\nwhich should include ``compilers`` and ``llvm-openmp``.\n\nThe compilers meta-package will automatically set custom environment\nvariables:\n\n.. prompt:: bash $\n\n    echo $CC\n    echo $CXX\n    echo $CFLAGS\n    echo $CXXFLAGS\n    echo $LDFLAGS\n\nThey point to files and folders from your ``sklearn-dev`` conda environment\n(in particular in the bin\/, include\/ and lib\/ subfolders). 
For instance\n``-L\/path\/to\/conda\/envs\/sklearn-dev\/lib`` should appear in ``LDFLAGS``.\n\nIn the log, you should see the compiled extension being built with the clang\nand clang++ compilers installed by conda with the ``-fopenmp`` command line\nflag.\n\nmacOS compilers from Homebrew\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAnother solution is to enable OpenMP support for the clang compiler shipped\nby default on macOS.\n\nFirst install the macOS command line tools:\n\n.. prompt:: bash $\n\n    xcode-select --install\n\nInstall the Homebrew_ package manager for macOS.\n\nInstall the LLVM OpenMP library:\n\n.. prompt:: bash $\n\n    brew install libomp\n\nSet the following environment variables:\n\n.. prompt:: bash $\n\n    export CC=\/usr\/bin\/clang\n    export CXX=\/usr\/bin\/clang++\n    export CPPFLAGS=\"$CPPFLAGS -Xpreprocessor -fopenmp\"\n    export CFLAGS=\"$CFLAGS -I\/usr\/local\/opt\/libomp\/include\"\n    export CXXFLAGS=\"$CXXFLAGS -I\/usr\/local\/opt\/libomp\/include\"\n    export LDFLAGS=\"$LDFLAGS -Wl,-rpath,\/usr\/local\/opt\/libomp\/lib -L\/usr\/local\/opt\/libomp\/lib -lomp\"\n\nFinally, build scikit-learn in verbose mode (to check for the presence of the\n``-fopenmp`` flag in the compiler commands):\n\n.. prompt:: bash $\n\n    make clean\n    pip install --editable . \\\n        --verbose --no-build-isolation \\\n        --config-settings editable-verbose=true\n\n.. _compiler_linux:\n\nLinux\n-----\n\nLinux compilers from the system\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nInstalling scikit-learn from source without using conda requires you to have\ninstalled the scikit-learn Python development headers and a working C\/C++\ncompiler with OpenMP support (typically the GCC toolchain).\n\nInstall build dependencies for Debian-based operating systems, e.g.\nUbuntu:\n\n.. prompt:: bash $\n\n    sudo apt-get install build-essential python3-dev python3-pip\n\nthen proceed as usual:\n\n.. prompt:: bash $\n\n    pip3 install cython\n    pip3 install --editable . 
\\\n        --verbose --no-build-isolation \\\n        --config-settings editable-verbose=true\n\nCython and the pre-compiled wheels for the runtime dependencies (numpy, scipy\nand joblib) should automatically be installed in\n``$HOME\/.local\/lib\/pythonX.Y\/site-packages``. Alternatively you can run the\nabove commands from a virtualenv_ or a `conda environment`_ to get full\nisolation from the Python packages installed via the system packager. When\nusing an isolated environment, ``pip3`` should be replaced by ``pip`` in the\nabove commands.\n\nWhen precompiled wheels of the runtime dependencies are not available for your\narchitecture (e.g. ARM), you can install the system versions:\n\n.. prompt:: bash $\n\n    sudo apt-get install cython3 python3-numpy python3-scipy\n\nOn Red Hat and clones (e.g. CentOS), install the dependencies using:\n\n.. prompt:: bash $\n\n    sudo yum -y install gcc gcc-c++ python3-devel numpy scipy\n\nLinux compilers from conda-forge\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAlternatively, install a recent version of the GNU C Compiler toolchain (GCC)\nin the user folder using conda:\n\n.. prompt:: bash $\n\n    conda create -n sklearn-dev -c conda-forge python numpy scipy cython \\\n        joblib threadpoolctl pytest compilers meson-python ninja\n\nIt is not always necessary but it is safer to open a new prompt before\nactivating the newly created conda environment.\n\n.. prompt:: bash $\n\n    conda activate sklearn-dev\n    pip install --editable . \\\n        --verbose --no-build-isolation \\\n        --config-settings editable-verbose=true\n\n.. _compiler_freebsd:\n\nFreeBSD\n-------\n\nThe clang compiler included in FreeBSD 12.0 and 11.2 base systems does not\ninclude OpenMP support. You need to install the `openmp` library from packages\n(or ports):\n\n.. prompt:: bash $\n\n    sudo pkg install openmp\n\nThis will install header files in ``\/usr\/local\/include`` and libs in\n``\/usr\/local\/lib``. 
Since these directories are not searched by default, you\ncan set the environment variables to these locations:\n\n.. prompt:: bash $\n\n    export CFLAGS=\"$CFLAGS -I\/usr\/local\/include\"\n    export CXXFLAGS=\"$CXXFLAGS -I\/usr\/local\/include\"\n    export LDFLAGS=\"$LDFLAGS -Wl,-rpath,\/usr\/local\/lib -L\/usr\/local\/lib -lomp\"\n\nFinally, build the package using the standard command:\n\n.. prompt:: bash $\n\n    pip install --editable . \\\n        --verbose --no-build-isolation \\\n        --config-settings editable-verbose=true\n\nFor the upcoming FreeBSD 12.1 and 11.3 versions, OpenMP will be included in\nthe base system and these steps will not be necessary.\n\n.. _OpenMP: https:\/\/en.wikipedia.org\/wiki\/OpenMP\n.. _Cython: https:\/\/cython.org\n.. _meson-python: https:\/\/mesonbuild.com\/meson-python\n.. _Ninja: https:\/\/ninja-build.org\/\n.. _NumPy: https:\/\/numpy.org\n.. _SciPy: https:\/\/www.scipy.org\n.. _Homebrew: https:\/\/brew.sh\n.. _virtualenv: https:\/\/docs.python.org\/3\/tutorial\/venv.html\n.. _conda environment: https:\/\/docs.conda.io\/projects\/conda\/en\/latest\/user-guide\/tasks\/manage-environments.html\n.. 
_Miniforge3: https:\/\/github.com\/conda-forge\/miniforge#miniforge3","site":"scikit-learn","answers_cleaned":"     advanced installation      include      min dependency substitutions rst                                                     Installing the development version of scikit learn                                                     This section introduces how to install the   main branch   of scikit learn  This can be done by either installing a nightly build or building from source       install nightly builds   Installing nightly builds                            The continuous integration servers of the scikit learn project build  test and upload wheel packages for the most recent Python version on a nightly basis   Installing a nightly build is the quickest way to     try a new feature that will be shipped in the next release  that is  a   feature from a pull request that was recently merged to the main branch      check whether a bug you encountered has been fixed since the last release   You can install the nightly build of scikit learn using the  scientific python nightly wheels  index from the PyPI registry of  anaconda org       prompt   bash      pip install   pre   extra index https   pypi anaconda org scientific python nightly wheels simple scikit learn  Note that first uninstalling scikit learn might be required to be able to install nightly builds of scikit learn       install bleeding edge   Building from source                       Building from source is required to work on a contribution  bug fix  new feature  code or documentation improvement        git repo      Use  Git  https   git scm com     to check out the latest source from the     scikit learn repository  https   github com scikit learn scikit learn    on    Github          prompt   bash         git clone git github com scikit learn scikit learn git    add   depth 1 if your connection is slow      cd scikit learn     If you plan on submitting a pull request  you should clone 
from your fork    instead      Install a recent version of Python  3 9 or later at the time of writing  for    instance using Miniforge3   Miniforge provides a conda based distribution of    Python and the most popular scientific libraries      If you installed Python with conda  we recommend to create a dedicated     conda environment   with all the build dependencies of scikit learn     namely NumPy   SciPy   Cython   meson python  and Ninja           prompt   bash         conda create  n sklearn env  c conda forge python numpy scipy cython meson python ninja     It is not always necessary but it is safer to open a new prompt before    activating the newly created conda environment         prompt   bash         conda activate sklearn env       Alternative to conda    You can use alternative installations of Python    provided they are recent enough  3 9 or higher at the time of writing      Here is an example on how to create a build environment for a Linux system s    Python  Build dependencies are installed with  pip  in a dedicated virtualenv     to avoid disrupting other Python programs installed on the system         prompt   bash         python3  m venv sklearn env      source sklearn env bin activate      pip install wheel numpy scipy cython meson python ninja     Install a compiler with OpenMP  support for your platform  See instructions    for  ref  compiler windows    ref  compiler macos    ref  compiler linux     and  ref  compiler freebsd       Build the project with pip         prompt   bash         pip install   editable               verbose   no build isolation             config settings editable verbose true     Check that the installed scikit learn has a version number ending with      dev0          prompt   bash         python  c  import sklearn  sklearn show versions        Please refer to the  ref  developers guide  and  ref  pytest tips  to run    the tests on the module of your choice      note           config settings editable verbose 
true  is optional but recommended     to avoid surprises when you import  sklearn    meson python  implements     editable installs by rebuilding  sklearn  when executing  import sklearn       With the recommended setting you will see a message when this happens      rather than potentially waiting without feed back and wondering     what is taking so long  Bonus  this means you only have to run the  pip     install  command once   sklearn  will automatically be rebuilt when     importing  sklearn        Note that    config settings  is only supported in  pip  version 23 1 or     later  To upgrade  pip  to a compatible version  run  pip install  U pip    Dependencies               Runtime dependencies                       Scikit learn requires the following dependencies both at build time and at runtime     Python     3 8     NumPy      NumpyMinVersion      SciPy      ScipyMinVersion      Joblib      JoblibMinVersion      threadpoolctl      ThreadpoolctlMinVersion     Build dependencies                     Building Scikit learn also requires            The following places need to be in sync with regard to Cython version           circleci config file         sklearn  build utils   init   py         advanced installation guide    Cython     CythonMinVersion    A C C   compiler and a matching OpenMP  runtime library  See the    ref  platform system specific instructions    platform specific instructions   for more details      note       If OpenMP is not supported by the compiler  the build will be done with    OpenMP functionalities disabled  This is not recommended since it will force    some estimators to run in sequential mode instead of leveraging thread based    parallelism  Setting the   SKLEARN FAIL NO OPENMP   environment variable     before cythonization  will force the build to fail if OpenMP is not    supported   Since version 0 21  scikit learn automatically detects and uses the linear algebra library used by SciPy   at runtime    Scikit learn has 
therefore no build dependency on BLAS LAPACK implementations such as OpenBlas  Atlas  Blis or MKL   Test dependencies                    Running tests requires     pytest     PytestMinVersion   Some tests also require  pandas  https   pandas pydata org       Building a specific version from a tag                                         If you want to build a stable version  you can   git checkout  VERSION    to get the code for that particular version  or download an zip archive of the version from github       platform specific instructions   Platform specific instructions                                 Here are instructions to install a working C C   compiler with OpenMP support to build scikit learn Cython extensions for each supported platform       compiler windows   Windows          First  download the  Build Tools for Visual Studio 2019 installer  https   aka ms vs 17 release vs buildtools exe      Run the downloaded  vs buildtools exe  file  during the installation you will need to make sure you select  Desktop development with C     similarly to this screenshot      image      images visual studio build tools selection png  Secondly  find out if you are running 64 bit or 32 bit Python  The building command depends on the architecture of the Python interpreter  You can check the architecture by running the following in   cmd   or   powershell   console      prompt   bash        python  c  import struct  print struct calcsize  P     8    For 64 bit Python  configure the build environment by running the following commands in   cmd   or an Anaconda Prompt  if you use Anaconda       sphinx prompt 1 3 0  used in doc min dependencies CI task  does not support  batch  prompt type     so we work around by using a known prompt type and an explicit prompt text        prompt   bash C         SET DISTUTILS USE SDK 1      C  Program Files  x86  Microsoft Visual Studio 2019 BuildTools VC Auxiliary Build vcvarsall bat  x64  Replace   x64   by   x86   to build for 32 bit 
Python   Please be aware that the path above might be different from user to user  The aim is to point to the  vcvarsall bat  file that will set the necessary environment variables in the current command prompt   Finally  build scikit learn with this command prompt      prompt   bash        pip install   editable               verbose   no build isolation             config settings editable verbose true      compiler macos   macOS        The default C compiler on macOS  Apple clang  confusingly aliased as   usr bin gcc    does not directly support OpenMP  We present two alternatives to enable OpenMP support     either install  conda forge  compilers  with conda     or install  libomp  with Homebrew to extend the default Apple clang compiler   For Apple Silicon M1 hardware  only the conda forge method below is known to work at the time of writing  January 2021   You can install the  macos arm64  distribution of conda using the  miniforge installer  https   github com conda forge miniforge miniforge     macOS compilers from conda forge                                   If you use the conda package manager  version    4 7   you can install the   compilers   meta package from the conda forge channel  which provides OpenMP enabled C C   compilers based on the llvm toolchain   First install the macOS command line tools      prompt   bash        xcode select   install  It is recommended to use a dedicated  conda environment   to build scikit learn from source      prompt   bash        conda create  n sklearn dev  c conda forge python numpy scipy cython           joblib threadpoolctl pytest compilers llvm openmp meson python ninja  It is not always necessary but it is safer to open a new prompt before activating the newly created conda environment      prompt   bash        conda activate sklearn dev     make clean     pip install   editable               verbose   no build isolation             config settings editable verbose true     note        If you get any 
conflicting dependency error message, try commenting out any custom conda configuration in the ``$HOME\/.condarc`` file. In particular the ``channel_priority: strict`` directive is known to cause problems for this setup.\n\nYou can check that the custom compilers are properly installed from conda-forge using the following command:\n\n.. prompt:: bash\n\n    conda list\n\nwhich should include ``compilers`` and ``llvm-openmp``.\n\nThe compilers meta-package will automatically set custom environment variables:\n\n.. prompt:: bash\n\n    echo $CC\n    echo $CXX\n    echo $CFLAGS\n    echo $CXXFLAGS\n    echo $LDFLAGS\n\nThey point to files and folders from your ``sklearn-dev`` conda environment (in particular in the ``bin\/``, ``include\/``, and ``lib\/`` subfolders). For instance ``-L\/path\/to\/conda\/envs\/sklearn-dev\/lib`` should appear in ``LDFLAGS``.\n\nIn the log, you should see the compiled extension being built with the clang and clang++ compilers installed by conda with the ``-fopenmp`` command line flag.\n\nmacOS compilers from Homebrew\n-----------------------------\n\nAnother solution is to enable OpenMP support for the clang compiler shipped by default on macOS.\n\nFirst install the macOS command line tools:\n\n.. prompt:: bash\n\n    xcode-select --install\n\nInstall the Homebrew_ package manager for macOS.\n\nInstall the LLVM OpenMP library:\n\n.. prompt:: bash\n\n    brew install libomp\n\nSet the following environment variables:\n\n.. prompt:: bash\n\n    export CC=\/usr\/bin\/clang\n    export CXX=\/usr\/bin\/clang++\n    export CPPFLAGS=\"$CPPFLAGS -Xpreprocessor -fopenmp\"\n    export CFLAGS=\"$CFLAGS -I\/usr\/local\/opt\/libomp\/include\"\n    export CXXFLAGS=\"$CXXFLAGS -I\/usr\/local\/opt\/libomp\/include\"\n    export LDFLAGS=\"$LDFLAGS -Wl,-rpath,\/usr\/local\/opt\/libomp\/lib -L\/usr\/local\/opt\/libomp\/lib -lomp\"\n\nFinally, build scikit-learn in verbose mode (to check for the presence of the ``-fopenmp`` flag in the compiler commands):\n\n.. prompt:: bash\n\n    make clean\n    pip install --editable . --verbose --no-build-isolation --config-settings editable-verbose=true\n\n.. _compiler_linux:\n\nLinux\n=====\n\nLinux compilers from the system\n-------------------------------\n\nInstalling scikit-learn from source without using conda requires you to have installed the scikit-learn Python development headers and a working C\/C++ compiler with OpenMP support (typically the GCC toolchain).\n\nInstall build dependencies for Debian-based operating systems, e.g. Ubuntu:\n\n.. prompt:: bash\n\n    sudo apt-get install build-essential python3-dev python3-pip\n\nthen proceed as usual:\n\n.. prompt:: bash\n\n    pip3 install cython\n    pip3 install --editable . --verbose --no-build-isolation --config-settings editable-verbose=true\n\nCython and the pre-compiled wheels for the runtime dependencies (numpy, scipy and joblib) should automatically be installed in ``$HOME\/.local\/lib\/pythonX.Y\/site-packages``. Alternatively you can run the above commands from a virtualenv_ or a `conda environment`_ to get full isolation from the Python packages installed via the system packager. When using an isolated environment, ``pip3`` should be replaced by ``pip`` in the above commands.\n\nWhen precompiled wheels of the runtime dependencies are not available for your architecture (e.g. ARM), you can install the system versions:\n\n.. prompt:: bash\n\n    sudo apt-get install cython3 python3-numpy python3-scipy\n\nOn Red Hat and clones (e.g. CentOS), install the dependencies using:\n\n.. prompt:: bash\n\n    sudo yum -y install gcc gcc-c++ python3-devel numpy scipy\n\nLinux compilers from conda-forge\n--------------------------------\n\nAlternatively, install a recent version of the GNU C Compiler toolchain (GCC) in the user folder using conda:\n\n.. prompt:: bash\n\n    conda create -n sklearn-dev -c conda-forge python numpy scipy cython \\\n        joblib threadpoolctl pytest compilers meson-python ninja\n\nIt is not always necessary but it is safer to open a new prompt before activating the newly created conda environment:\n\n.. prompt:: bash\n\n    conda activate sklearn-dev\n    pip install --editable . --verbose --no-build-isolation --config-settings editable-verbose=true\n\n.. _compiler_freebsd:\n\nFreeBSD\n=======\n\nThe clang compiler included in FreeBSD 12.0 and 11.2 base systems does not include OpenMP support. You need to install the ``openmp`` library from packages (or ports):\n\n.. prompt:: bash\n\n    sudo pkg install openmp\n\nThis will install header files in ``\/usr\/local\/include`` and libs in ``\/usr\/local\/lib``. Since these directories are not searched by default, you can set the environment variables to these locations:\n\n.. prompt:: bash\n\n    export CFLAGS=\"$CFLAGS -I\/usr\/local\/include\"\n    export CXXFLAGS=\"$CXXFLAGS -I\/usr\/local\/include\"\n    export LDFLAGS=\"$LDFLAGS -Wl,-rpath,\/usr\/local\/lib -L\/usr\/local\/lib -lomp\"\n\nFinally, build the package using the standard command:\n\n.. prompt:: bash\n\n    pip install --editable . --verbose --no-build-isolation --config-settings editable-verbose=true\n\nFor the upcoming FreeBSD 12.1 and 11.3 versions, OpenMP will be included in the base system and these steps will not be necessary.\n\n.. _OpenMP: https:\/\/en.wikipedia.org\/wiki\/OpenMP\n.. _Cython: https:\/\/cython.org\n.. _meson-python: https:\/\/mesonbuild.com\/meson-python\n.. _Ninja: https:\/\/ninja-build.org\n.. _NumPy: https:\/\/numpy.org\n.. _SciPy: https:\/\/www.scipy.org\n.. _Homebrew: https:\/\/brew.sh\n.. _virtualenv: https:\/\/docs.python.org\/3\/tutorial\/venv.html\n.. _conda environment: https:\/\/docs.conda.io\/projects\/conda\/en\/latest\/user-guide\/tasks\/manage-environments.html\n.. _Miniforge3: https:\/\/github.com\/conda-forge\/miniforge#miniforge3"}
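The Homebrew and FreeBSD recipes in the record above differ only in where the OpenMP runtime is installed. As a rough sketch of that pattern (the `LIBOMP_PREFIX` variable is illustrative, not part of the official instructions), the flag setup can be parameterised over the install prefix:

```shell
# Hedged sketch: compose OpenMP build flags for a libomp installed under a
# non-default prefix, mirroring the Homebrew/FreeBSD steps above.
# LIBOMP_PREFIX is an assumed variable -- point it at your actual location,
# e.g. /usr/local/opt/libomp (Intel macOS), /opt/homebrew/opt/libomp
# (Apple Silicon), or /usr/local (FreeBSD).
LIBOMP_PREFIX="${LIBOMP_PREFIX:-/usr/local/opt/libomp}"

# Preprocessor flag enabling OpenMP pragmas for Apple clang:
export CPPFLAGS="$CPPFLAGS -Xpreprocessor -fopenmp"
# Header search path for omp.h:
export CFLAGS="$CFLAGS -I$LIBOMP_PREFIX/include"
export CXXFLAGS="$CXXFLAGS -I$LIBOMP_PREFIX/include"
# Link against libomp and bake its directory into the runtime search path:
export LDFLAGS="$LDFLAGS -Wl,-rpath,$LIBOMP_PREFIX/lib -L$LIBOMP_PREFIX/lib -lomp"

# Echo the result so the values can be inspected before building:
echo "CFLAGS=$CFLAGS"
echo "LDFLAGS=$LDFLAGS"
```

The `-Wl,-rpath` entry records the library directory inside the built extension, so `libomp` is found at runtime without having to set `LD_LIBRARY_PATH`.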
{"questions":"scikit-learn sklearn Contributing contribute It is hosted on https github com scikit learn scikit learn This project is a community effort and everyone is welcome to contributing","answers":".. _contributing:\n\n============\nContributing\n============\n\n.. currentmodule:: sklearn\n\nThis project is a community effort, and everyone is welcome to\ncontribute. It is hosted on https:\/\/github.com\/scikit-learn\/scikit-learn.\nThe decision making process and governance structure of scikit-learn is laid\nout in :ref:`governance`.\n\nScikit-learn is somewhat :ref:`selective <selectiveness>` when it comes to\nadding new algorithms, and the best way to contribute and to help the project\nis to start working on known issues.\nSee :ref:`new_contributors` to get started.\n\n.. topic:: **Our community, our values**\n\n    We are a community based on openness and friendly, didactic,\n    discussions.\n\n    We aspire to treat everybody equally, and value their contributions.  We\n    are particularly seeking people from underrepresented backgrounds in Open\n    Source Software and scikit-learn in particular to participate and\n    contribute their expertise and experience.\n\n    Decisions are made based on technical merit and consensus.\n\n    Code is not the only way to help the project. Reviewing pull\n    requests, answering questions to help others on mailing lists or\n    issues, organizing and teaching tutorials, working on the website,\n    improving the documentation, are all priceless contributions.\n\n    We abide by the principles of openness, respect, and consideration of\n    others of the Python Software Foundation:\n    https:\/\/www.python.org\/psf\/codeofconduct\/\n\n\nIn case you experience issues using this package, do not hesitate to submit a\nticket to the\n`GitHub issue tracker\n<https:\/\/github.com\/scikit-learn\/scikit-learn\/issues>`_. 
You are also\nwelcome to post feature requests or pull requests.\n\nWays to contribute\n==================\n\nThere are many ways to contribute to scikit-learn, with the most common ones\nbeing contribution of code or documentation to the project. Improving the\ndocumentation is no less important than improving the library itself.  If you\nfind a typo in the documentation, or have made improvements, do not hesitate to\ncreate a GitHub issue or preferably submit a GitHub pull request.\nFull documentation can be found under the doc\/ directory.\n\nBut there are many other ways to help. In particular helping to\n:ref:`improve, triage, and investigate issues <bug_triaging>` and\n:ref:`reviewing other developers' pull requests <code_review>` are very\nvaluable contributions that decrease the burden on the project\nmaintainers.\n\nAnother way to contribute is to report issues you're facing, and give a \"thumbs\nup\" on issues that others reported and that are relevant to you.  It also helps\nus if you spread the word: reference the project from your blog and articles,\nlink to it from your website, or simply star to say \"I use it\":\n\n.. raw:: html\n\n  <p>\n    <object\n      data=\"https:\/\/img.shields.io\/github\/stars\/scikit-learn\/scikit-learn?style=for-the-badge&logo=github\"\n      type=\"image\/svg+xml\">\n    <\/object>\n  <\/p>\n\nIn case a contribution\/issue involves changes to the API principles\nor changes to dependencies or supported versions, it must be backed by a\n:ref:`slep`, where a SLEP must be submitted as a pull-request to\n`enhancement proposals <https:\/\/scikit-learn-enhancement-proposals.readthedocs.io>`_\nusing the `SLEP template <https:\/\/scikit-learn-enhancement-proposals.readthedocs.io\/en\/latest\/slep_template.html>`_\nand follows the decision-making process outlined in :ref:`governance`.\n\n.. 
dropdown:: Contributing to related projects\n\n  Scikit-learn thrives in an ecosystem of several related projects, which also\n  may have relevant issues to work on, including smaller projects such as:\n\n  * `scikit-learn-contrib <https:\/\/github.com\/search?q=org%3Ascikit-learn-contrib+is%3Aissue+is%3Aopen+sort%3Aupdated-desc&type=Issues>`__\n  * `joblib <https:\/\/github.com\/joblib\/joblib\/issues>`__\n  * `sphinx-gallery <https:\/\/github.com\/sphinx-gallery\/sphinx-gallery\/issues>`__\n  * `numpydoc <https:\/\/github.com\/numpy\/numpydoc\/issues>`__\n  * `liac-arff <https:\/\/github.com\/renatopp\/liac-arff\/issues>`__\n\n  and larger projects:\n\n  * `numpy <https:\/\/github.com\/numpy\/numpy\/issues>`__\n  * `scipy <https:\/\/github.com\/scipy\/scipy\/issues>`__\n  * `matplotlib <https:\/\/github.com\/matplotlib\/matplotlib\/issues>`__\n  * and so on.\n\n  Look for issues marked \"help wanted\" or similar. Helping these projects may help\n  scikit-learn too. See also :ref:`related_projects`.\n\nAutomated Contributions Policy\n==============================\n\nPlease refrain from submitting issues or pull requests generated by\nfully-automated tools. Maintainers reserve the right, at their sole discretion,\nto close such submissions and to block any account responsible for them.\n\nIdeally, contributions should follow from a human-to-human discussion in the\nform of an issue.\n\nSubmitting a bug report or a feature request\n============================================\n\nWe use GitHub issues to track all bugs and feature requests; feel free to open\nan issue if you have found a bug or wish to see a feature implemented.\n\nIn case you experience issues using this package, do not hesitate to submit a\nticket to the\n`Bug Tracker <https:\/\/github.com\/scikit-learn\/scikit-learn\/issues>`_. 
You are\nalso welcome to post feature requests or pull requests.\n\nIt is recommended to check that your issue complies with the\nfollowing rules before submitting:\n\n-  Verify that your issue is not being currently addressed by other\n   `issues <https:\/\/github.com\/scikit-learn\/scikit-learn\/issues?q=>`_\n   or `pull requests <https:\/\/github.com\/scikit-learn\/scikit-learn\/pulls?q=>`_.\n\n-  If you are submitting an algorithm or feature request, please verify that\n   the algorithm fulfills our\n   `new algorithm requirements\n   <https:\/\/scikit-learn.org\/stable\/faq.html#what-are-the-inclusion-criteria-for-new-algorithms>`_.\n\n-  If you are submitting a bug report, we strongly encourage you to follow the guidelines in\n   :ref:`filing_bugs`.\n\n.. _filing_bugs:\n\nHow to make a good bug report\n-----------------------------\n\nWhen you submit an issue to `GitHub\n<https:\/\/github.com\/scikit-learn\/scikit-learn\/issues>`__, please do your best to\nfollow these guidelines! This will make it a lot easier to provide you with good\nfeedback:\n\n- The ideal bug report contains a :ref:`short reproducible code snippet\n  <minimal_reproducer>`, this way anyone can try to reproduce the bug easily. If your\n  snippet is longer than around 50 lines, please link to a `Gist\n  <https:\/\/gist.github.com>`_ or a GitHub repo.\n\n- If not feasible to include a reproducible snippet, please be specific about\n  what **estimators and\/or functions are involved and the shape of the data**.\n\n- If an exception is raised, please **provide the full traceback**.\n\n- Please include your **operating system type and version number**, as well as\n  your **Python, scikit-learn, numpy, and scipy versions**. This information\n  can be found by running:\n\n  .. prompt:: bash\n\n    python -c \"import sklearn; sklearn.show_versions()\"\n\n- Please ensure all **code snippets and error messages are formatted in\n  appropriate code blocks**.  
See `Creating and highlighting code blocks\n  <https:\/\/help.github.com\/articles\/creating-and-highlighting-code-blocks>`_\n  for more details.\n\nIf you want to help curate issues, read about :ref:`bug_triaging`.\n\nContributing code\n=================\n\n.. note::\n\n  To avoid duplicating work, it is highly advised that you search through the\n  `issue tracker <https:\/\/github.com\/scikit-learn\/scikit-learn\/issues>`_ and\n  the `PR list <https:\/\/github.com\/scikit-learn\/scikit-learn\/pulls>`_.\n  If in doubt about duplicated work, or if you want to work on a non-trivial\n  feature, it's recommended to first open an issue in\n  the `issue tracker <https:\/\/github.com\/scikit-learn\/scikit-learn\/issues>`_\n  to get some feedback from core developers.\n\n  One easy way to find an issue to work on is by applying the "help wanted"\n  label in your search. This lists all the issues that have been unclaimed\n  so far. In order to claim an issue for yourself, please comment exactly\n  ``\/take`` on it for the CI to automatically assign the issue to you.\n\nTo maintain the quality of the codebase and ease the review process, any\ncontribution must conform to the project's :ref:`coding guidelines\n<coding-guidelines>`, in particular:\n\n- Don't modify unrelated lines to keep the PR focused on the scope stated in its\n  description or issue.\n- Only write inline comments that add value and avoid stating the obvious: explain\n  the "why" rather than the "what".\n- **Most importantly**: Do not contribute code that you don't understand.\n\nVideo resources\n---------------\nThese videos are step-by-step introductions on how to contribute to\nscikit-learn, and are a great companion to the following text guidelines.\nPlease make sure to still check our guidelines below, since they describe our\nlatest up-to-date workflow.\n\n- Crash Course in Contributing to Scikit-Learn & Open Source Projects:\n  `Video <https:\/\/youtu.be\/5OL8XoMMOfA>`__,\n  `Transcript\n  
<https:\/\/github.com\/data-umbrella\/event-transcripts\/blob\/main\/2020\/05-andreas-mueller-contributing.md>`__\n\n- Example of Submitting a Pull Request to scikit-learn:\n  `Video <https:\/\/youtu.be\/PU1WyDPGePI>`__,\n  `Transcript\n  <https:\/\/github.com\/data-umbrella\/event-transcripts\/blob\/main\/2020\/06-reshama-shaikh-sklearn-pr.md>`__\n\n- Sprint-specific instructions and practical tips:\n  `Video <https:\/\/youtu.be\/p_2Uw2BxdhA>`__,\n  `Transcript\n  <https:\/\/github.com\/data-umbrella\/data-umbrella-scikit-learn-sprint\/blob\/master\/3_transcript_ACM_video_vol2.md>`__\n\n- 3 Components of Reviewing a Pull Request:\n  `Video <https:\/\/youtu.be\/dyxS9KKCNzA>`__,\n  `Transcript\n  <https:\/\/github.com\/data-umbrella\/event-transcripts\/blob\/main\/2021\/27-thomas-pr.md>`__\n\n.. note::\n  In January 2021, the default branch name changed from ``master`` to ``main``\n  for the scikit-learn GitHub repository to use more inclusive terms.\n  These videos were created prior to the renaming of the branch.\n  For contributors who are viewing these videos to set up their\n  working environment and submit a PR, ``master`` should be replaced with ``main``.\n\nHow to contribute\n-----------------\n\nThe preferred way to contribute to scikit-learn is to fork the `main\nrepository <https:\/\/github.com\/scikit-learn\/scikit-learn\/>`__ on GitHub,\nthen submit a "pull request" (PR).\n\nIn the first few steps, we explain how to locally install scikit-learn, and\nhow to set up your git repository:\n\n1. `Create an account <https:\/\/github.com\/join>`_ on\n   GitHub if you do not already have one.\n\n2. Fork the `project repository\n   <https:\/\/github.com\/scikit-learn\/scikit-learn>`__: click on the 'Fork'\n   button near the top of the page. This creates a copy of the code under your\n   GitHub user account. For more details on how to fork a\n   repository, see `this guide <https:\/\/help.github.com\/articles\/fork-a-repo\/>`_.\n\n3. 
Clone your fork of the scikit-learn repo from your GitHub account to your\n   local disk:\n\n   .. prompt:: bash\n\n      git clone git@github.com:YourLogin\/scikit-learn.git  # add --depth 1 if your connection is slow\n      cd scikit-learn\n\n4. Follow steps 2-6 in :ref:`install_bleeding_edge` to build scikit-learn in\n   development mode and return to this document.\n\n5. Install the development dependencies:\n\n   .. prompt:: bash\n\n        pip install pytest pytest-cov ruff mypy numpydoc black==24.3.0\n\n.. _upstream:\n\n6. Add the ``upstream`` remote. This saves a reference to the main\n   scikit-learn repository, which you can use to keep your repository\n   synchronized with the latest changes:\n\n   .. prompt:: bash\n\n        git remote add upstream git@github.com:scikit-learn\/scikit-learn.git\n\n7. Check that the `upstream` and `origin` remote aliases are configured correctly\n   by running `git remote -v` which should display:\n\n   .. code-block:: text\n\n        origin\tgit@github.com:YourLogin\/scikit-learn.git (fetch)\n        origin\tgit@github.com:YourLogin\/scikit-learn.git (push)\n        upstream\tgit@github.com:scikit-learn\/scikit-learn.git (fetch)\n        upstream\tgit@github.com:scikit-learn\/scikit-learn.git (push)\n\nYou should now have a working installation of scikit-learn, and your git repository\nproperly configured. It could be useful to run some tests to verify your installation.\nPlease refer to :ref:`pytest_tips` for examples.\n\nThe next steps now describe the process of modifying code and submitting a PR:\n\n8. Synchronize your ``main`` branch with the ``upstream\/main`` branch,\n   more details on `GitHub Docs <https:\/\/docs.github.com\/en\/github\/collaborating-with-issues-and-pull-requests\/syncing-a-fork>`_:\n\n   .. prompt:: bash\n\n        git checkout main\n        git fetch upstream\n        git merge upstream\/main\n\n9. Create a feature branch to hold your development changes:\n\n   .. 
prompt:: bash\n\n        git checkout -b my_feature\n\n   and start making changes. Always use a feature branch. It's good\n   practice to never work on the ``main`` branch!\n\n10. (**Optional**) Install `pre-commit <https:\/\/pre-commit.com\/#install>`_ to\n    run code style checks before each commit:\n\n    .. prompt:: bash\n\n          pip install pre-commit\n          pre-commit install\n\n    pre-commit checks can be disabled for a particular commit with\n    `git commit -n`.\n\n11. Develop the feature on your feature branch on your computer, using Git to\n    do the version control. When you're done editing, add changed files using\n    ``git add`` and then ``git commit``:\n\n    .. prompt:: bash\n\n        git add modified_files\n        git commit\n\n    to record your changes in Git, then push the changes to your GitHub\n    account with:\n\n    .. prompt:: bash\n\n       git push -u origin my_feature\n\n12. Follow `these\n    <https:\/\/help.github.com\/articles\/creating-a-pull-request-from-a-fork>`_\n    instructions to create a pull request from your fork. This will send a\n    notification to potential reviewers. You may want to consider sending a message to\n    the `discord <https:\/\/discord.com\/invite\/h9qyrK8Jc8>`_ in the development\n    channel for more visibility if your pull request does not receive attention after\n    a couple of days (instant replies are not guaranteed though).\n\nIt is often helpful to keep your local feature branch synchronized with the\nlatest changes of the main scikit-learn repository:\n\n.. prompt:: bash\n\n    git fetch upstream\n    git merge upstream\/main\n\nSubsequently, you might need to resolve the conflicts. You can refer to the\n`Git documentation related to resolving merge conflict using the command\nline\n<https:\/\/help.github.com\/articles\/resolving-a-merge-conflict-using-the-command-line\/>`_.\n\n.. 
topic:: Learning Git\n\n    The `Git documentation <https:\/\/git-scm.com\/doc>`_ and\n    http:\/\/try.github.io are excellent resources to get started with git,\n    and understanding all of the commands shown here.\n\n.. _pr_checklist:\n\nPull request checklist\n----------------------\n\nBefore a PR can be merged, it needs to be approved by two core developers.\nAn incomplete contribution -- where you expect to do more work before receiving\na full review -- should be marked as a `draft pull request\n<https:\/\/docs.github.com\/en\/pull-requests\/collaborating-with-pull-requests\/proposing-changes-to-your-work-with-pull-requests\/changing-the-stage-of-a-pull-request>`__\nand changed to \"ready for review\" when it matures. Draft PRs may be useful to:\nindicate you are working on something to avoid duplicated work, request\nbroad review of functionality or API, or seek collaborators. Draft PRs often\nbenefit from the inclusion of a `task list\n<https:\/\/github.com\/blog\/1375-task-lists-in-gfm-issues-pulls-comments>`_ in\nthe PR description.\n\nIn order to ease the reviewing process, we recommend that your contribution\ncomplies with the following rules before marking a PR as \"ready for review\". The\n**bolded** ones are especially important:\n\n1. **Give your pull request a helpful title** that summarizes what your\n   contribution does. This title will often become the commit message once\n   merged so it should summarize your contribution for posterity. In some\n   cases \"Fix <ISSUE TITLE>\" is enough. \"Fix #<ISSUE NUMBER>\" is never a\n   good title.\n\n2. **Make sure your code passes the tests**. The whole test suite can be run\n   with `pytest`, but it is usually not recommended since it takes a long\n   time. 
It is often enough to only run the tests related to your changes:\n   for example, if you changed something in\n   `sklearn\/linear_model\/_logistic.py`, running the following commands will\n   usually be enough:\n\n   - `pytest sklearn\/linear_model\/_logistic.py` to make sure the doctest\n     examples are correct\n   - `pytest sklearn\/linear_model\/tests\/test_logistic.py` to run the tests\n     specific to the file\n   - `pytest sklearn\/linear_model` to test the whole\n     :mod:`~sklearn.linear_model` module\n   - `pytest doc\/modules\/linear_model.rst` to make sure the user guide\n     examples are correct.\n   - `pytest sklearn\/tests\/test_common.py -k LogisticRegression` to run all our\n     estimator checks (specifically for `LogisticRegression`, if that's the\n     estimator you changed).\n\n   There may be other failing tests, but they will be caught by the CI so\n   you don't need to run the whole test suite locally. For guidelines on how\n   to use ``pytest`` efficiently, see :ref:`pytest_tips`.\n\n3. **Make sure your code is properly commented and documented**, and **make\n   sure the documentation renders properly**. To build the documentation, please\n   refer to our :ref:`contribute_documentation` guidelines. The CI will also\n   build the docs: please refer to :ref:`generated_doc_CI`.\n\n4. **Tests are necessary for enhancements to be\n   accepted**. Bug-fixes or new features should be provided with\n   `non-regression tests\n   <https:\/\/en.wikipedia.org\/wiki\/Non-regression_testing>`_. These tests\n   verify the correct behavior of the fix or feature. In this manner, further\n   modifications on the code base are guaranteed to be consistent with the\n   desired behavior. In the case of bug fixes, at the time of the PR, the\n   non-regression tests should fail for the code base in the ``main`` branch\n   and pass for the PR code.\n\n5. 
If your PR is likely to affect users, you need to add a changelog entry describing\n   your changes; see the `following README <https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/main\/doc\/whats_new\/upcoming_changes\/README.md>`_\n   for more details.\n\n6. Follow the :ref:`coding-guidelines`.\n\n7. When applicable, use the validation tools and scripts in the :mod:`sklearn.utils`\n   module. A list of utility routines available for developers can be found in the\n   :ref:`developers-utils` page.\n\n8. Often pull requests resolve one or more other issues (or pull requests).\n   If merging your pull request means that some other issues\/PRs should\n   be closed, you should `use keywords to create a link to them\n   <https:\/\/github.com\/blog\/1506-closing-issues-via-pull-requests\/>`_\n   (e.g., ``Fixes #1234``; multiple issues\/PRs are allowed as long as each\n   one is preceded by a keyword). Upon merging, those issues\/PRs will\n   automatically be closed by GitHub. If your pull request is simply\n   related to some other issues\/PRs, or it only partially resolves the target\n   issue, create a link to them without using the keywords (e.g., ``Towards #1234``).\n\n9. PRs should often substantiate the change, through benchmarks of\n   performance and efficiency (see :ref:`monitoring_performances`) or through\n   examples of usage. Examples also illustrate the features and intricacies of\n   the library to users. Have a look at other examples in the `examples\/\n   <https:\/\/github.com\/scikit-learn\/scikit-learn\/tree\/main\/examples>`_\n   directory for reference. Examples should demonstrate why the new\n   functionality is useful in practice and, if possible, compare it to other\n   methods available in scikit-learn.\n\n10. New features have some maintenance overhead. We expect PR authors\n    to take part in the maintenance for the code they submit, at least\n    initially. 
New features need to be illustrated with narrative\n    documentation in the user guide, with small code snippets.\n    If relevant, please also add references in the literature, with PDF links\n    when possible.\n\n11. The user guide should also include expected time and space complexity\n    of the algorithm and scalability, e.g. \"this algorithm can scale to a\n    large number of samples > 100000, but does not scale in dimensionality:\n    `n_features` is expected to be lower than 100\".\n\nYou can also check our :ref:`code_review` to get an idea of what reviewers\nwill expect.\n\nYou can check for common programming errors with the following tools:\n\n* Code with a good unit test coverage (at least 80%, better 100%), check with:\n\n  .. prompt:: bash\n\n    pip install pytest pytest-cov\n    pytest --cov sklearn path\/to\/tests\n\n  See also :ref:`testing_coverage`.\n\n* Run static analysis with `mypy`:\n\n  .. prompt:: bash\n\n      mypy sklearn\n\n  This must not produce new errors in your pull request. Using `# type: ignore`\n  annotation can be a workaround for a few cases that are not supported by\n  mypy, in particular,\n\n  - when importing C or Cython modules,\n  - on properties with decorators.\n\nBonus points for contributions that include a performance analysis with\na benchmark script and profiling output (see :ref:`monitoring_performances`).\nAlso check out the :ref:`performance-howto` guide for more details on\nprofiling and Cython optimizations.\n\n.. note::\n\n  The current state of the scikit-learn code base is not compliant with\n  all of those guidelines, but we expect that enforcing those constraints\n  on all new contributions will get the overall code base quality in the\n  right direction.\n\n.. 
seealso::\n\n   For two very well documented and more detailed guides on development\n   workflow, please pay a visit to the `Scipy Development Workflow\n   <http:\/\/scipy.github.io\/devdocs\/dev\/dev_quickstart.html>`_\n   and the `Astropy Workflow for Developers\n   <https:\/\/astropy.readthedocs.io\/en\/latest\/development\/workflow\/development_workflow.html>`_\n   sections.\n\nContinuous Integration (CI)\n---------------------------\n\n* Azure pipelines are used for testing scikit-learn on Linux, Mac and Windows,\n  with different dependencies and settings.\n* CircleCI is used to build the docs for viewing.\n* GitHub Actions are used for various tasks, including building wheels and\n  source distributions.\n* Cirrus CI is used to build on ARM.\n\n.. _commit_markers:\n\nCommit message markers\n^^^^^^^^^^^^^^^^^^^^^^\n\nPlease note that if one of the following markers appears in the latest commit\nmessage, the following actions are taken.\n\n====================== ===================\nCommit Message Marker  Action Taken by CI\n====================== ===================\n[ci skip]              CI is skipped completely\n[cd build]             CD is run (wheels and source distribution are built)\n[cd build gh]          CD is run only for GitHub Actions\n[cd build cirrus]      CD is run only for Cirrus CI\n[lint skip]            Azure pipeline skips linting\n[scipy-dev]            Build & test with our dependencies (numpy, scipy, etc.) development builds\n[free-threaded]        Build & test with CPython 3.13 free-threaded\n[pyodide]              Build & test with Pyodide\n[azure parallel]       Run Azure CI jobs in parallel\n[cirrus arm]           Run Cirrus CI ARM test\n[float32]              Run float32 tests by setting `SKLEARN_RUN_FLOAT32_TESTS=1`. 
See :ref:`environment_variable` for more details\n[doc skip]             Docs are not built\n[doc quick]            Docs built, but excludes example gallery plots\n[doc build]            Docs built including example gallery plots (very long)\n====================== ===================\n\nNote that, by default, the documentation is built but only the examples\nthat are directly modified by the pull request are executed.\n\n.. _build_lock_files:\n\nBuild lock files\n^^^^^^^^^^^^^^^^\n\nCIs use lock files to build environments with specific versions of dependencies. When a\nPR needs to modify the dependencies or their versions, the lock files should be updated\naccordingly. This can be done by adding the following comment directly in the GitHub\nPull Request (PR) discussion:\n\n.. code-block:: text\n\n  @scikit-learn-bot update lock-files\n\nA bot will push a commit to your PR branch with the updated lock files in a few minutes.\nMake sure to tick the *Allow edits from maintainers* checkbox located at the bottom of\nthe right sidebar of the PR. You can also specify the options `--select-build`,\n`--skip-build`, and `--select-tag` as in a command line. Use `--help` on the script\n`build_tools\/update_environments_and_lock_files.py` for more information. For example,\n\n.. code-block:: text\n\n  @scikit-learn-bot update lock-files --select-tag main-ci --skip-build doc\n\nThe bot will automatically add :ref:`commit message markers <commit_markers>` to the\ncommit for certain tags. If you want to add more markers manually, you can do so using\nthe `--commit-marker` option. For example, the following comment will trigger the bot to\nupdate documentation-related lock files and add the `[doc build]` marker to the commit:\n\n.. 
code-block:: text\n\n  @scikit-learn-bot update lock-files --select-build doc --commit-marker "[doc build]"\n\nResolve conflicts in lock files\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nHere is a bash snippet that helps resolve conflicts in environment and lock files:\n\n.. prompt:: bash\n\n  # pull latest upstream\/main\n  git pull upstream main --no-rebase\n  # resolve conflicts - keeping the upstream\/main version for specific files\n  git checkout --theirs  build_tools\/*\/*.lock build_tools\/*\/*environment.yml \\\n      build_tools\/*\/*lock.txt build_tools\/*\/*requirements.txt\n  git add build_tools\/*\/*.lock build_tools\/*\/*environment.yml \\\n      build_tools\/*\/*lock.txt build_tools\/*\/*requirements.txt\n  git merge --continue\n\nThis will merge `upstream\/main` into your branch, automatically prioritising the\n`upstream\/main` version for conflicting environment and lock files (this is good enough, because\nwe will re-generate the lock files afterwards).\n\nNote that this only fixes conflicts in environment and lock files and you might have\nother conflicts to resolve.\n\nFinally, we have to re-generate the environment and lock files for the CIs, as described\nin :ref:`Build lock files <build_lock_files>`, or by running:\n\n.. prompt:: bash\n\n  python build_tools\/update_environments_and_lock_files.py\n\n.. _stalled_pull_request:\n\nStalled pull requests\n---------------------\n\nAs contributing a feature can be a lengthy process, some\npull requests appear inactive but unfinished. In such a case, taking\nthem over is a great service for the project. 
A good etiquette to take over is:\n\n* **Determine if a PR is stalled**\n\n  * A pull request may have the label "stalled" or "help wanted" if we\n    have already identified it as a candidate for other contributors.\n\n  * To decide whether an inactive PR is stalled, ask the contributor if\n    they plan to continue working on the PR in the near future.\n    Failure to respond within 2 weeks with an activity that moves the PR\n    forward suggests that the PR is stalled and will result in tagging\n    that PR with "help wanted".\n\n    Note that if a PR has received earlier comments on the contribution\n    that have had no reply in a month, it is safe to assume that the PR\n    is stalled and to shorten the wait time to one day.\n\n    After a sprint, follow-up for un-merged PRs opened during the sprint will\n    be communicated to participants at the sprint, and those PRs will be\n    tagged "sprint". PRs tagged with "sprint" can be reassigned or\n    declared stalled by sprint leaders.\n\n* **Taking over a stalled PR**: To take over a PR, it is important to\n  comment on the stalled PR that you are taking over and to link from the\n  new PR to the old one. The new PR should be created by pulling from the\n  old one.\n\nStalled and Unclaimed Issues\n----------------------------\n\nGenerally speaking, issues which are up for grabs will have a\n`"help wanted" <https:\/\/github.com\/scikit-learn\/scikit-learn\/labels\/help%20wanted>`_\ntag. However, not all issues which need contributors will have this tag,\nas the "help wanted" tag is not always up-to-date with the state\nof the issue. 
Contributors can find issues which are still up for grabs\nusing the following guidelines:\n\n* First, to **determine if an issue is claimed**:\n\n  * Check for linked pull requests\n  * Check the conversation to see if anyone has said that they're working on\n    creating a pull request\n\n* If a contributor comments on an issue to say they are working on it,\n  a pull request is expected within 2 weeks (new contributor) or 4 weeks\n  (contributor or core dev), unless a larger time frame is explicitly given.\n  Beyond that time, another contributor can take the issue and make a\n  pull request for it. We encourage contributors to comment directly on the\n  stalled or unclaimed issue to let community members know that they will be\n  working on it.\n\n* If the issue is linked to a :ref:`stalled pull request <stalled_pull_request>`,\n  we recommend that contributors follow the procedure\n  described in the :ref:`stalled_pull_request`\n  section rather than working directly on the issue.\n\n.. _new_contributors:\n\nIssues for New Contributors\n---------------------------\n\nNew contributors should look for the following tags when looking for issues.  We\nstrongly recommend that new contributors tackle "easy" issues first: this helps\nthe contributor become familiar with the contribution workflow, and for the core\ndevs to become acquainted with the contributor; besides which, we frequently\nunderestimate how easy an issue is to solve!\n\n- **Good first issue tag**\n\n  A great way to start contributing to scikit-learn is to pick an item from\n  the list of `good first issues\n  <https:\/\/github.com\/scikit-learn\/scikit-learn\/labels\/good%20first%20issue>`_\n  in the issue tracker. Resolving these issues allows you to start contributing\n  to the project without much prior knowledge. 
If you have already contributed
  to scikit-learn, you should look at Easy issues instead.

- **Easy tag**

  If you have already contributed to scikit-learn, another great way to
  contribute is to pick an item from the list of `Easy issues
  <https://github.com/scikit-learn/scikit-learn/labels/Easy>`_ in the issue
  tracker. Your assistance in this area will be greatly appreciated by the
  more experienced developers as it helps free up their time to concentrate on
  other issues.

- **Help wanted tag**

  We often use the help wanted tag to mark issues regardless of difficulty.
  Additionally, we use the help wanted tag to mark Pull Requests which have been
  abandoned by their original contributor and are available for someone to pick up where
  the original contributor left off. The list of issues with the help wanted tag can be
  found `here <https://github.com/scikit-learn/scikit-learn/labels/help%20wanted>`_.
  Note that not all issues which need contributors will have this tag.

.. _contribute_documentation:

Documentation
=============

We are glad to accept any sort of documentation:

* **Function/method/class docstrings:** Also known as "API documentation", these
  describe what the object does and detail any parameters, attributes and
  methods. Docstrings live alongside the code in `sklearn/
  <https://github.com/scikit-learn/scikit-learn/tree/main/sklearn>`_, and are
  generated according to `doc/api_reference.py
  <https://github.com/scikit-learn/scikit-learn/blob/main/doc/api_reference.py>`_. 
To
  add, update, remove, or deprecate a public API that is listed in :ref:`api_ref`, this
  is the place to look.
* **User guide:** These provide more detailed information about the algorithms
  implemented in scikit-learn and generally live in the root
  `doc/ <https://github.com/scikit-learn/scikit-learn/tree/main/doc>`_ directory
  and in
  `doc/modules/ <https://github.com/scikit-learn/scikit-learn/tree/main/doc/modules>`_.
* **Examples:** These provide full code examples that may demonstrate the use
  of scikit-learn modules, compare different algorithms or discuss their
  interpretation, etc. Examples live in
  `examples/ <https://github.com/scikit-learn/scikit-learn/tree/main/examples>`_.
* **Other reStructuredText documents:** These provide various other useful information
  (e.g., the :ref:`contributing` guide) and live in
  `doc/ <https://github.com/scikit-learn/scikit-learn/tree/main/doc>`_.


.. dropdown:: Guidelines for writing docstrings

  * When documenting the parameters and attributes, here is a list of some
    well-formatted examples:

    .. code-block:: text

      n_clusters : int, default=3
          The number of clusters detected by the algorithm.

      some_param : {"hello", "goodbye"}, bool or int, default=True
          The parameter description goes here, which can be either a string
          literal (either `hello` or `goodbye`), a bool, or an int. The default
          value is True.

      array_parameter : {array-like, sparse matrix} of shape (n_samples, n_features) \
          or (n_samples,)
          This parameter accepts data in either of the mentioned forms, with one
          of the mentioned shapes. 
The default value is `np.ones(shape=(n_samples,))`.

      list_param : list of int

      typed_ndarray : ndarray of shape (n_samples,), dtype=np.int32

      sample_weight : array-like of shape (n_samples,), default=None

      multioutput_array : ndarray of shape (n_samples, n_classes) or list of such arrays

    In general, keep the following in mind:

    * Use basic Python types (``bool`` instead of ``boolean``).
    * Use parentheses for defining shapes: ``array-like of shape (n_samples,)``
      or ``array-like of shape (n_samples, n_features)``.
    * For strings with multiple options, use braces: ``input: {'log',
      'squared', 'multinomial'}``.
    * 1D or 2D data can be a subset of ``{array-like, ndarray, sparse matrix,
      dataframe}``. Note that ``array-like`` can also be a ``list``, while
      ``ndarray`` is explicitly only a ``numpy.ndarray``.
    * Specify ``dataframe`` when "frame-like" features are being used, such as
      the column names.
    * When specifying the data type of a list, use ``of`` as a delimiter: ``list
      of int``. When the parameter supports arrays giving details about the
      shape and/or data type and a list of such arrays, you can use one of
      ``array-like of shape (n_samples,) or list of such arrays``.
    * When specifying the dtype of an ndarray, use e.g. ``dtype=np.int32`` after
      defining the shape: ``ndarray of shape (n_samples,), dtype=np.int32``. You
      can specify multiple dtypes as a set: ``array-like of shape (n_samples,),
      dtype={np.float64, np.float32}``. If one wants to mention arbitrary
      precision, use `integral` and `floating` rather than the Python types
      `int` and `float`. When both `int` and `floating` are supported, there is
      no need to specify the dtype.
    * When the default is ``None``, ``None`` only needs to be specified at the
      end with ``default=None``. 
Be sure to include in the docstring what it
      means for the parameter or attribute to be ``None``.

  * Add "See Also" in docstrings for related classes/functions.

  * "See Also" in docstrings should be one line per reference, with a colon and an
    explanation, for example:

    .. code-block:: text

      See Also
      --------
      SelectKBest : Select features based on the k highest scores.
      SelectFpr : Select features based on a false positive rate test.

  * Add one or two snippets of code in the "Examples" section to show how the
    object can be used.


.. dropdown:: Guidelines for writing the user guide and other reStructuredText documents

  It is important to keep a good compromise between mathematical and algorithmic
  details, and give intuition to the reader on what the algorithm does.

  * Begin with a concise, hand-waving explanation of what the algorithm/code does on
    the data.

  * Highlight the usefulness of the feature and its recommended application.
    Consider including the algorithm's complexity
    (:math:`O\left(g\left(n\right)\right)`) if available, since "rules of thumb"
    can be very machine-dependent. Provide such rules of thumb only when the
    complexity is not available.

  * Incorporate a relevant figure (generated from an example) to provide intuitions.

  * Include one or two short code examples to demonstrate the feature's usage.

  * Introduce any necessary mathematical equations, followed by references. 
By
    deferring the mathematical aspects, the documentation becomes more accessible
    to users primarily interested in understanding the feature's practical
    implications rather than its underlying mechanics.

  * When editing reStructuredText (``.rst``) files, try to keep line length under
    88 characters when possible (exceptions include links and tables).

  * In scikit-learn reStructuredText files, both single and double backticks
    surrounding text will render as inline literals (often used for code, e.g.,
    `list`), due to specific configurations we have set. Nowadays, single
    backticks should be used.

  * Too much information makes it difficult for users to access the content they
    are interested in. Use dropdowns to factorize it by using the following
    syntax:

    .. code-block:: rst

      .. dropdown:: Dropdown title

        Dropdown content.

    The snippet above will result in the following dropdown:

    .. dropdown:: Dropdown title

      Dropdown content.

  * Information that can be hidden by default using dropdowns includes:

    * low hierarchy sections such as `References`, `Properties`, etc. (see for
      instance the subsections in :ref:`det_curve`);

    * in-depth mathematical details;

    * narrative that is use-case specific;

    * in general, narrative that may only interest users that want to go beyond
      the pragmatics of a given tool.

  * Do not use dropdowns for the low-level section `Examples`, as it should stay
    visible to all users. Make sure that the `Examples` section comes right after
    the main discussion with the fewest possible folded sections in-between.

  * Be aware that dropdowns break cross-references. If it makes sense, hide the
    reference along with the text mentioning it; otherwise, do not use a dropdown.


.. 
dropdown:: Guidelines for writing references

  * When bibliographic references are available with `arxiv <https://arxiv.org/>`_
    or `Digital Object Identifier <https://www.doi.org/>`_ identification numbers,
    use the sphinx directives `:arxiv:` or `:doi:`. For example, see references in
    :ref:`Spectral Clustering Graphs <spectral_clustering_graph>`.

  * For the "References" section in docstrings, see
    :func:`sklearn.metrics.silhouette_score` as an example.

  * To cross-reference other pages in the scikit-learn documentation, use the
    reStructuredText cross-referencing syntax:

    * **Section:** to link to an arbitrary section in the documentation, use
      reference labels (see `Sphinx docs
      <https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#ref-role>`_).
      For example:

      .. code-block:: rst

          .. _my-section:

          My section
          ----------

          This is the text of the section.

          To refer to itself use :ref:`my-section`.

      You should not modify existing sphinx reference labels as this would break
      existing cross references and external links pointing to specific sections
      in the scikit-learn documentation.

    * **Glossary:** linking to a term in the :ref:`glossary`:

      .. code-block:: rst

          :term:`cross_validation`

    * **Function:** to link to the documentation of a function, use the full import
      path to the function:

      .. code-block:: rst

          :func:`~sklearn.model_selection.cross_val_score`

      However, if there is a `.. currentmodule::` directive above in the document,
      you only need the part of the path that follows the specified module. For
      example:

      .. code-block:: rst

          .. 
currentmodule:: sklearn.model_selection

          :func:`cross_val_score`

    * **Class:** to link to documentation of a class, use the full import path to the
      class, unless there is a `.. currentmodule::` directive in the document above
      (see above):

      .. code-block:: rst

          :class:`~sklearn.preprocessing.StandardScaler`

You can edit the documentation using any text editor, and then generate the
HTML output by following :ref:`building_documentation`. The resulting HTML files
will be placed in ``_build/html/`` and are viewable in a web browser, for instance by
opening the local ``_build/html/index.html`` file or by running a local server:

.. prompt:: bash

  python -m http.server -d _build/html


.. _building_documentation:

Building the documentation
--------------------------

**Before submitting a pull request, check whether your modifications have introduced
new sphinx warnings by building the documentation locally, and try to fix them.**

First, make sure you have :ref:`properly installed <install_bleeding_edge>` the
development version. On top of that, building the documentation requires installing some
additional packages:

..
    packaging is not needed once setuptools starts shipping packaging>=17.0

.. prompt:: bash

    pip install sphinx sphinx-gallery numpydoc matplotlib Pillow pandas \
                polars scikit-image packaging seaborn sphinx-prompt \
                sphinxext-opengraph sphinx-copybutton plotly pooch \
                pydata-sphinx-theme sphinxcontrib-sass sphinx-design \
                sphinx-remove-toctrees

To build the documentation, you need to be in the ``doc`` folder:

.. prompt:: bash

    cd doc

In the vast majority of cases, you only need to generate the web site without
the example gallery:

.. 
prompt:: bash

    make

The documentation will be generated in the ``_build/html/stable`` directory
and is viewable in a web browser, for instance by opening the local
``_build/html/stable/index.html`` file.
To also generate the example gallery you can use:

.. prompt:: bash

    make html

This will run all the examples, which takes a while. You can also run only a few
examples based on their file names. Here is a way to run all examples with
filenames containing `plot_calibration`:

.. prompt:: bash

    EXAMPLES_PATTERN="plot_calibration" make html

You can use regular expressions for more advanced use cases.

Set the environment variable `NO_MATHJAX=1` if you intend to view the documentation in
an offline setting. To build the PDF manual, run:

.. prompt:: bash

    make latexpdf

.. admonition:: Sphinx version
   :class: warning

   While we do our best to have the documentation build under as many
   versions of Sphinx as possible, the different versions tend to
   behave slightly differently. To get the best results, you should
   use the same version as the one we used on CircleCI. Look at this
   `GitHub search <https://github.com/search?q=repo%3Ascikit-learn%2Fscikit-learn+%2F%5C%2Fsphinx-%5B0-9.%5D%2B%2F+path%3Abuild_tools%2Fcircle%2Fdoc_linux-64_conda.lock&type=code>`_
   to know the exact version.


.. _generated_doc_CI:

Generated documentation on GitHub Actions
-----------------------------------------

When you change the documentation in a pull request, GitHub Actions automatically
builds it. To view the documentation generated by GitHub Actions, simply go to the
bottom of your PR page, look for the item "Check the rendered docs here!" and
click on 'details' next to it:

.. image:: ../images/generated-doc-ci.png
   :align: center

.. 
_testing_coverage:

Testing and improving test coverage
===================================

High-quality `unit testing <https://en.wikipedia.org/wiki/Unit_testing>`_
is a cornerstone of the scikit-learn development process. For this
purpose, we use the `pytest <https://docs.pytest.org>`_
package. The tests are appropriately named functions, located in ``tests``
subdirectories, that check the validity of the algorithms and the
different options of the code.

Running `pytest` in a folder will run all the tests of the corresponding
subpackages. For a more detailed `pytest` workflow, please refer to the
:ref:`pr_checklist`.

We expect code coverage of new features to be at least around 90%.

.. dropdown:: Writing matplotlib-related tests

  Test fixtures ensure that a set of tests will be executing with the appropriate
  initialization and cleanup. The scikit-learn test suite implements a ``pyplot``
  fixture which can be used with ``matplotlib``.

  The ``pyplot`` fixture should be used when a test function is dealing with
  ``matplotlib``. ``matplotlib`` is a soft dependency and is not required.
  This fixture is in charge of skipping the tests if ``matplotlib`` is not
  installed. In addition, figures created during the tests will be
  automatically closed once the test function has been executed.

  To use this fixture in a test function, one needs to pass it as an
  argument::

      def test_requiring_mpl_fixture(pyplot):
          # you can now safely use matplotlib

.. dropdown:: Workflow to improve test coverage

  To test code coverage, you need to install the `coverage
  <https://pypi.org/project/coverage/>`_ package in addition to `pytest`.

  1. Run `pytest --cov sklearn /path/to/tests`. The output lists, for each file,
     the line numbers that are not tested.

  2. Find a low-hanging fruit by looking at which lines are not tested, and
     write or adapt a test specifically for these lines.

  3. 
Loop.

.. _monitoring_performances:

Monitoring performance
======================

*This section is heavily inspired by the* `pandas documentation
<https://pandas.pydata.org/docs/development/contributing_codebase.html#running-the-performance-test-suite>`_.

When proposing changes to the existing code base, it's important to make sure
that they don't introduce performance regressions. Scikit-learn uses
`asv benchmarks <https://github.com/airspeed-velocity/asv>`_ to monitor the
performance of a selection of common estimators and functions. You can view
these benchmarks on the `scikit-learn benchmark page
<https://scikit-learn.org/scikit-learn-benchmarks>`_.
The corresponding benchmark suite can be found in the `asv_benchmarks/` directory.

To use all features of asv, you will need either `conda` or `virtualenv`. For
more details please check the `asv installation webpage
<https://asv.readthedocs.io/en/latest/installing.html>`_.

First, install the development version of asv:

.. prompt:: bash

    pip install git+https://github.com/airspeed-velocity/asv

and change your directory to `asv_benchmarks/`:

.. prompt:: bash

  cd asv_benchmarks

The benchmark suite is configured to run against your local clone of
scikit-learn. Make sure it is up to date:

.. prompt:: bash

  git fetch upstream

In the benchmark suite, the benchmarks are organized following the same
structure as scikit-learn. For example, you can compare the performance of a
specific estimator between ``upstream/main`` and the branch you are working on:

.. prompt:: bash

  asv continuous -b LogisticRegression upstream/main HEAD

The command uses conda by default for creating the benchmark environments. If
you want to use virtualenv instead, use the `-E` flag:

.. prompt:: bash

  asv continuous -E virtualenv -b LogisticRegression upstream/main HEAD

You can also specify a whole module to benchmark:

.. 
prompt:: bash

  asv continuous -b linear_model upstream/main HEAD

You can replace `HEAD` by any local branch. By default it will only report the
benchmarks that have changed by at least 10%. You can control this ratio with
the `-f` flag.

To run the full benchmark suite, simply remove the `-b` flag:

.. prompt:: bash

  asv continuous upstream/main HEAD

However, this can take up to two hours. The `-b` flag also accepts a regular
expression for a more complex subset of benchmarks to run.

To run the benchmarks without comparing to another branch, use the `run`
command:

.. prompt:: bash

  asv run -b linear_model HEAD^!

You can also run the benchmark suite using the version of scikit-learn already
installed in your current Python environment:

.. prompt:: bash

  asv run --python=same

This is particularly useful when you have installed scikit-learn in editable
mode to avoid creating a new environment each time you run the benchmarks. By
default the results are not saved when using an existing installation. To save
the results you must specify a commit hash:

.. prompt:: bash

  asv run --python=same --set-commit-hash=<commit hash>

Benchmarks are saved and organized by machine, environment and commit. To see
the list of all saved benchmarks:

.. prompt:: bash

  asv show

and to see the report of a specific run:

.. prompt:: bash

  asv show <commit hash>

When running benchmarks for a pull request you're working on, please report the
results on GitHub.

The benchmark suite supports additional configurable options which can be set
in the `benchmarks/config.json` configuration file. For example, the benchmarks
can run for a provided list of values for the `n_jobs` parameter.

More information on how to write a benchmark and how to use asv can be found in
the `asv documentation <https://asv.readthedocs.io/en/latest/index.html>`_.

.. 
_issue_tracker_tags:

Issue Tracker Tags
==================

All issues and pull requests on the
`GitHub issue tracker <https://github.com/scikit-learn/scikit-learn/issues>`_
should have (at least) one of the following tags:

:Bug:
    Something is happening that clearly shouldn't happen.
    Wrong results as well as unexpected errors from estimators go here.

:Enhancement:
    Improving performance, usability, consistency.

:Documentation:
    Missing, incorrect or substandard documentation and examples.

:New Feature:
    Feature requests and pull requests implementing a new feature.

There are four other tags to help new contributors:

:Good first issue:
    This issue is ideal for a first contribution to scikit-learn. Ask for help
    if the formulation is unclear. If you have already contributed to
    scikit-learn, look at Easy issues instead.

:Easy:
    This issue can be tackled without much prior experience.

:Moderate:
    Might need some knowledge of machine learning or the package,
    but is still approachable for someone new to the project.

:Help wanted:
    This tag marks an issue which currently lacks a contributor or a
    PR that needs another contributor to take over the work. These
    issues can range in difficulty, and may not be approachable
    for new contributors. Note that not all issues which need
    contributors will have this tag.

.. _backwards-compatibility:

Maintaining backwards compatibility
===================================

.. _contributing_deprecation:

Deprecation
-----------

If any publicly accessible class, function, method, attribute or parameter is renamed,
we still support the old one for two releases and issue a deprecation warning when it is
called, passed, or accessed.

.. 
rubric:: Deprecating a class or a function

Suppose the function ``zero_one`` is renamed to ``zero_one_loss``; we add the
decorator :class:`utils.deprecated` to ``zero_one`` and call ``zero_one_loss``
from that function::

    from ..utils import deprecated

    def zero_one_loss(y_true, y_pred, normalize=True):
        # actual implementation
        pass

    @deprecated(
        "Function `zero_one` was renamed to `zero_one_loss` in 0.13 and will be "
        "removed in 0.15. Default behavior is changed from `normalize=False` to "
        "`normalize=True`"
    )
    def zero_one(y_true, y_pred, normalize=False):
        return zero_one_loss(y_true, y_pred, normalize)

One also needs to move ``zero_one`` from ``API_REFERENCE`` to
``DEPRECATED_API_REFERENCE`` and add ``zero_one_loss`` to ``API_REFERENCE`` in the
``doc/api_reference.py`` file to reflect the changes in :ref:`api_ref`.

.. rubric:: Deprecating an attribute or a method

If an attribute or a method is to be deprecated, use the decorator
:class:`~utils.deprecated` on the property. Please note that the
:class:`~utils.deprecated` decorator should be placed before the ``property``
decorator if there is one, so that the docstrings can be rendered properly. For
instance, renaming an attribute ``labels_`` to ``classes_`` can be done as::

    @deprecated(
        "Attribute `labels_` was deprecated in 0.13 and will be removed in 0.15. Use "
        "`classes_` instead"
    )
    @property
    def labels_(self):
        return self.classes_

.. rubric:: Deprecating a parameter

If a parameter has to be deprecated, a ``FutureWarning`` must be raised
manually. 
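

As a minimal, self-contained sketch of this pattern (the names ``old_function`` and ``new_param`` are hypothetical, and the version numbers are placeholders), the warning is raised only when a caller explicitly passes the deprecated argument, together with the kind of check that a deprecation test should perform:

```python
import warnings

def old_function(x, new_param="deprecated"):
    # Hypothetical function: `new_param` is deprecated, so a FutureWarning
    # is raised manually whenever a caller passes it explicitly.
    if new_param != "deprecated":
        warnings.warn(
            "`new_param` was deprecated in 1.1 and will be removed in 1.3.",
            FutureWarning,
        )
    return x

# The warning must be raised when the deprecated parameter is used...
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old_function(1, new_param=5)
assert any(issubclass(w.category, FutureWarning) for w in caught)

# ...and must not be raised otherwise.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert old_function(1) == 1
assert not caught
```

In a real test suite, the same check would typically be written with ``pytest.warns(FutureWarning)`` rather than ``warnings.catch_warnings``.
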
In the following example, ``k`` is deprecated and renamed to ``n_clusters``::

    import warnings

    def example_function(n_clusters=8, k="deprecated"):
        if k != "deprecated":
            warnings.warn(
                "`k` was renamed to `n_clusters` in 0.13 and will be removed in 0.15",
                FutureWarning,
            )
            n_clusters = k

When the change is in a class, we validate and raise the warning in ``fit``::

  import warnings

  class ExampleEstimator(BaseEstimator):
      def __init__(self, n_clusters=8, k='deprecated'):
          self.n_clusters = n_clusters
          self.k = k

      def fit(self, X, y):
          if self.k != "deprecated":
              warnings.warn(
                  "`k` was renamed to `n_clusters` in 0.13 and will be removed in 0.15.",
                  FutureWarning,
              )
              self._n_clusters = self.k
          else:
              self._n_clusters = self.n_clusters

As in these examples, the warning message should always give both the
version in which the deprecation happened and the version in which the
old behavior will be removed. If the deprecation happened in version
0.x-dev, the message should say deprecation occurred in version 0.x and
the removal will be in 0.(x+2), so that users will have enough time to
adapt their code to the new behavior. For example, if the deprecation happened
in version 0.18-dev, the message should say it happened in version 0.18
and the old behavior will be removed in version 0.20.

The warning message should also include a brief explanation of the change and point
users to an alternative.

In addition, a deprecation note should be added in the docstring, recalling the
same information as the deprecation warning as explained above. Use the
``.. deprecated::`` directive:

.. code-block:: rst

  .. 
deprecated:: 0.13\n     ``k`` was renamed to ``n_clusters`` in version 0.13 and will be removed\n     in 0.15.\n\nWhat's more, a deprecation requires a test which ensures that the warning is\nraised in relevant cases but not in other cases. The warning should be caught\nin all other tests (using e.g., ``@pytest.mark.filterwarnings``),\nand there should be no warning in the examples.\n\n\nChange the default value of a parameter\n---------------------------------------\n\nIf the default value of a parameter needs to be changed, please replace the\ndefault value with a specific value (e.g., ``\"warn\"``) and raise\n``FutureWarning`` when users are using the default value. The following\nexample assumes that the current version is 0.20 and that we change the\ndefault value of ``n_clusters`` from 5 (old default for 0.20) to 10\n(new default for 0.22)::\n\n    import warnings\n\n    def example_function(n_clusters=\"warn\"):\n        if n_clusters == \"warn\":\n            warnings.warn(\n                \"The default value of `n_clusters` will change from 5 to 10 in 0.22.\",\n                FutureWarning,\n            )\n            n_clusters = 5\n\nWhen the change is in a class, we validate and raise warning in ``fit``::\n\n  import warnings\n\n  class ExampleEstimator:\n      def __init__(self, n_clusters=\"warn\"):\n          self.n_clusters = n_clusters\n\n      def fit(self, X, y):\n          if self.n_clusters == \"warn\":\n              warnings.warn(\n                  \"The default value of `n_clusters` will change from 5 to 10 in 0.22.\",\n                  FutureWarning,\n              )\n              self._n_clusters = 5\n\nSimilar to deprecations, the warning message should always give both the\nversion in which the change happened and the version in which the old behavior\nwill be removed.\n\nThe parameter description in the docstring needs to be updated accordingly by adding\na ``versionchanged`` directive with the old and new default value, pointing 
to the\nversion when the change will be effective:\n\n.. code-block:: rst\n\n    .. versionchanged:: 0.22\n       The default value for `n_clusters` will change from 5 to 10 in version 0.22.\n\nFinally, we need a test which ensures that the warning is raised in relevant cases but\nnot in other cases. The warning should be caught in all other tests\n(using e.g., ``@pytest.mark.filterwarnings``), and there should be no warning\nin the examples.\n\n.. _code_review:\n\nCode Review Guidelines\n======================\n\nReviewing code contributed to the project as PRs is a crucial component of\nscikit-learn development. We encourage anyone to start reviewing code of other\ndevelopers. The code review process is often highly educational for everybody\ninvolved. This is particularly appropriate if it is a feature you would like to\nuse, and so can respond critically about whether the PR meets your needs. While\neach pull request needs to be signed off by two core developers, you can speed\nup this process by providing your feedback.\n\n.. note::\n\n  The difference between an objective improvement and a subjective nit isn't\n  always clear. Reviewers should recall that code review is primarily about\n  reducing risk in the project. When reviewing code, one should aim at\n  preventing situations which may require a bug fix, a deprecation, or a\n  retraction. Regarding docs: typos, grammar issues and disambiguations are\n  better addressed immediately.\n\n.. dropdown:: Important aspects to be covered in any code review\n\n  Here are a few important aspects that need to be covered in any code review,\n  from high-level questions to a more detailed check-list.\n\n  - Do we want this in the library? Is it likely to be used? Do you, as\n    a scikit-learn user, like the change and intend to use it? Is it in\n    the scope of scikit-learn? Will the cost of maintaining a new\n    feature be worth its benefits?\n\n  - Is the code consistent with the API of scikit-learn? 
Are public\n    functions\/classes\/parameters well named and intuitively designed?\n\n  - Are all public functions\/classes and their parameters, return types, and\n    stored attributes named according to scikit-learn conventions and documented clearly?\n\n  - Is any new functionality described in the user-guide and illustrated with examples?\n\n  - Is every public function\/class tested? Are a reasonable set of\n    parameters, their values, value types, and combinations tested? Do\n    the tests validate that the code is correct, i.e. doing what the\n    documentation says it does? If the change is a bug-fix, is a\n    non-regression test included? Look at `this\n    <https:\/\/jeffknupp.com\/blog\/2013\/12\/09\/improve-your-python-understanding-unit-testing>`__\n    to get started with testing in Python.\n\n  - Do the tests pass in the continuous integration build? If\n    appropriate, help the contributor understand why tests failed.\n\n  - Do the tests cover every line of code (see the coverage report in the build\n    log)? If not, are the lines missing coverage good exceptions?\n\n  - Is the code easy to read and low on redundancy? Should variable names be\n    improved for clarity or consistency? Should comments be added? Should comments\n    be removed as unhelpful or extraneous?\n\n  - Could the code easily be rewritten to run much more efficiently for\n    relevant settings?\n\n  - Is the code backwards compatible with previous versions? (or is a\n    deprecation cycle necessary?)\n\n  - Will the new code add any dependencies on other libraries? (this is\n    unlikely to be accepted)\n\n  - Does the documentation render properly (see the\n    :ref:`contribute_documentation` section for more details), and are the plots\n    instructive?\n\n  :ref:`saved_replies` includes some frequent comments that reviewers may make.\n\n.. _communication:\n\n.. dropdown:: Communication Guidelines\n\n  Reviewing open pull requests (PRs) helps move the project forward. 
It is a
  great way to get familiar with the codebase and should motivate the
  contributor to stay involved in the project. [1]_

  - Every PR, good or bad, is an act of generosity. Opening with a positive
    comment will help the author feel rewarded, and your subsequent remarks may
    be heard more clearly. You may feel good too.
  - Begin if possible with the large issues, so the author knows they've been
    understood. Resist the temptation to immediately go line by line, or to open
    with small pervasive issues.
  - Do not let perfect be the enemy of the good. If you find yourself making
    many small suggestions that don't fall under the :ref:`code_review`, consider
    the following approaches:

    - refrain from submitting these;
    - prefix them as "Nit" so that the contributor knows it's OK not to address;
    - follow up in a subsequent PR; out of courtesy, you may want to let the
      original contributor know.

  - Do not rush; take the time to make your comments clear and justify your
    suggestions.
  - You are the face of the project. Everyone has bad days; on those occasions
    you deserve a break: try to take your time and stay offline.

  .. [1] Adapted from the numpy `communication guidelines
        <https://numpy.org/devdocs/dev/reviewer_guidelines.html#communication-guidelines>`_.

Reading the existing code base
==============================

Reading and digesting an existing code base is always a difficult exercise
that takes time and experience to master. Even though we try to write simple
code in general, understanding the code can seem overwhelming at first,
given the sheer size of the project. Here is a list of tips that may help
make this task easier and faster (in no particular order).

- Get acquainted with the :ref:`api_overview`: understand what :term:`fit`,
  :term:`predict`, :term:`transform`, etc. 
are used for.
- Before diving into reading the code of a function / class, go through the
  docstrings first and try to get an idea of what each parameter / attribute
  is doing. It may also help to stop a minute and think *how would I do this
  myself if I had to?*
- The trickiest thing is often to identify which portions of the code are
  relevant, and which are not. In scikit-learn **a lot** of input checking
  is performed, especially at the beginning of the :term:`fit` methods.
  Sometimes, only a very small portion of the code is doing the actual job.
  For example, looking at the :meth:`~linear_model.LinearRegression.fit` method of
  :class:`~linear_model.LinearRegression`, what you're looking for
  might just be the call to :func:`scipy.linalg.lstsq`, but it is buried in
  multiple lines of input checking and the handling of different kinds of
  parameters.
- Due to the use of `Inheritance
  <https://en.wikipedia.org/wiki/Inheritance_(object-oriented_programming)>`_,
  some methods may be implemented in parent classes. All estimators inherit
  at least from :class:`~base.BaseEstimator`, and
  from a ``Mixin`` class (e.g. :class:`~base.ClassifierMixin`) that enables default
  behaviour depending on the nature of the estimator (classifier, regressor,
  transformer, etc.).
- Sometimes, reading the tests for a given function will give you an idea of
  what its intended purpose is. You can use ``git grep`` (see below) to find
  all the tests written for a function. Most tests for a specific
  function/class are placed under the ``tests/`` folder of the module.
- You'll often see code looking like this:
  ``out = Parallel(...)(delayed(some_function)(param) for param in
  some_iterable)``. This runs ``some_function`` in parallel using `Joblib
  <https://joblib.readthedocs.io/>`_.
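For example, a minimal self-contained sketch of this pattern (the squaring
function and the inputs here are made up for illustration):

```python
from joblib import Parallel, delayed

def some_function(param):
    # Stand-in for an expensive computation that is independent per input.
    return param * param

# Run some_function on each element of the iterable, using 2 workers.
out = Parallel(n_jobs=2)(delayed(some_function)(p) for p in range(5))
print(out)  # [0, 1, 4, 9, 16]
```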
``out`` is then an iterable containing\n  the values returned by ``some_function`` for each call.\n- We use `Cython <https:\/\/cython.org\/>`_ to write fast code. Cython code is\n  located in ``.pyx`` and ``.pxd`` files. Cython code has a more C-like flavor:\n  we use pointers, perform manual memory allocation, etc. Having some minimal\n  experience in C \/ C++ is pretty much mandatory here. For more information see\n  :ref:`cython`.\n- Master your tools.\n\n  - With such a big project, being efficient with your favorite editor or\n    IDE goes a long way towards digesting the code base. Being able to quickly\n    jump (or *peek*) to a function\/class\/attribute definition helps a lot.\n    So does being able to quickly see where a given name is used in a file.\n  - `Git <https:\/\/git-scm.com\/book\/en>`_ also has some built-in killer\n    features. It is often useful to understand how a file changed over time,\n    using e.g. ``git blame`` (`manual\n    <https:\/\/git-scm.com\/docs\/git-blame>`_). This can also be done directly\n    on GitHub. ``git grep`` (`examples\n    <https:\/\/git-scm.com\/docs\/git-grep#_examples>`_) is also extremely\n    useful to see every occurrence of a pattern (e.g. a function call or a\n    variable) in the code base.\n\n- Configure `git blame` to ignore the commit that migrated the code style to\n  `black`.\n\n  .. 
prompt:: bash\n\n      git config blame.ignoreRevsFile .git-blame-ignore-revs\n\n  Find out more information in black's\n  `documentation for avoiding ruining git blame <https:\/\/black.readthedocs.io\/en\/stable\/guides\/introducing_black_to_your_project.html#avoiding-ruining-git-blame>`_.","site":"scikit-learn","answers_cleaned":"    contributing                Contributing                  currentmodule   sklearn  This project is a community effort  and everyone is welcome to contribute  It is hosted on https   github com scikit learn scikit learn  The decision making process and governance structure of scikit learn is laid out in  ref  governance    Scikit learn is somewhat  ref  selective  selectiveness   when it comes to adding new algorithms  and the best way to contribute and to help the project is to start working on known issues  See  ref  new contributors  to get started      topic     Our community  our values        We are a community based on openness and friendly  didactic      discussions       We aspire to treat everybody equally  and value their contributions   We     are particularly seeking people from underrepresented backgrounds in Open     Source Software and scikit learn in particular to participate and     contribute their expertise and experience       Decisions are made based on technical merit and consensus       Code is not the only way to help the project  Reviewing pull     requests  answering questions to help others on mailing lists or     issues  organizing and teaching tutorials  working on the website      improving the documentation  are all priceless contributions       We abide by the principles of openness  respect  and consideration of     others of the Python Software Foundation      https   www python org psf codeofconduct    In case you experience issues using this package  do not hesitate to submit a ticket to the  GitHub issue tracker  https   github com scikit learn scikit learn issues     You are also welcome to post 
feature requests or pull requests   Ways to contribute                     There are many ways to contribute to scikit learn  with the most common ones being contribution of code or documentation to the project  Improving the documentation is no less important than improving the library itself   If you find a typo in the documentation  or have made improvements  do not hesitate to create a GitHub issue or preferably submit a GitHub pull request  Full documentation can be found under the doc  directory   But there are many other ways to help  In particular helping to  ref  improve  triage  and investigate issues  bug triaging   and  ref  reviewing other developers  pull requests  code review   are very valuable contributions that decrease the burden on the project maintainers   Another way to contribute is to report issues you re facing  and give a  thumbs up  on issues that others reported and that are relevant to you   It also helps us if you spread the word  reference the project from your blog and articles  link to it from your website  or simply star to say  I use it       raw   html     p       object       data  https   img shields io github stars scikit learn scikit learn style for the badge logo github        type  image svg xml         object      p   In case a contribution issue involves changes to the API principles or changes to dependencies or supported versions  it must be backed by a  ref  slep   where a SLEP must be submitted as a pull request to  enhancement proposals  https   scikit learn enhancement proposals readthedocs io    using the  SLEP template  https   scikit learn enhancement proposals readthedocs io en latest slep template html    and follows the decision making process outlined in  ref  governance       dropdown   Contributing to related projects    Scikit learn thrives in an ecosystem of several related projects  which also   may have relevant issues to work on  including smaller projects such as        scikit learn contrib  https   
github com search q org 3Ascikit learn contrib is 3Aissue is 3Aopen sort 3Aupdated desc type Issues          joblib  https   github com joblib joblib issues          sphinx gallery  https   github com sphinx gallery sphinx gallery issues          numpydoc  https   github com numpy numpydoc issues          liac arff  https   github com renatopp liac arff issues        and larger projects        numpy  https   github com numpy numpy issues          scipy  https   github com scipy scipy issues          matplotlib  https   github com matplotlib matplotlib issues         and so on     Look for issues marked  help wanted  or similar  Helping these projects may help   scikit learn too  See also  ref  related projects    Automated Contributions Policy                                 Please refrain from submitting issues or pull requests generated by fully automated tools  Maintainers reserve the right  at their sole discretion  to close such submissions and to block any account responsible for them   Ideally  contributions should follow from a human to human discussion in the form of an issue   Submitting a bug report or a feature request                                               We use GitHub issues to track all bugs and feature requests  feel free to open an issue if you have found a bug or wish to see a feature implemented   In case you experience issues using this package  do not hesitate to submit a ticket to the  Bug Tracker  https   github com scikit learn scikit learn issues     You are also welcome to post feature requests or pull requests   It is recommended to check that your issue complies with the following rules before submitting      Verify that your issue is not being currently addressed by other     issues  https   github com scikit learn scikit learn issues q        or  pull requests  https   github com scikit learn scikit learn pulls q          If you are submitting an algorithm or feature request  please verify that    the algorithm fulfills our     
new algorithm requirements     https   scikit learn org stable faq html what are the inclusion criteria for new algorithms         If you are submitting a bug report  we strongly encourage you to follow the guidelines in     ref  filing bugs        filing bugs   How to make a good bug report                                When you submit an issue to  GitHub  https   github com scikit learn scikit learn issues      please do your best to follow these guidelines  This will make it a lot easier to provide you with good feedback     The ideal bug report contains a  ref  short reproducible code snippet    minimal reproducer    this way anyone can try to reproduce the bug easily  If your   snippet is longer than around 50 lines  please link to a  Gist    https   gist github com    or a GitHub repo     If not feasible to include a reproducible snippet  please be specific about   what   estimators and or functions are involved and the shape of the data       If an exception is raised  please   provide the full traceback       Please include your   operating system type and version number    as well as   your   Python  scikit learn  numpy  and scipy versions    This information   can be found by running        prompt   bash      python  c  import sklearn  sklearn show versions       Please ensure all   code snippets and error messages are formatted in   appropriate code blocks     See  Creating and highlighting code blocks    https   help github com articles creating and highlighting code blocks      for more details   If you want to help curate issues  read about  ref  bug triaging    Contributing code                       note      To avoid duplicating work  it is highly advised that you search through the    issue tracker  https   github com scikit learn scikit learn issues    and   the  PR list  https   github com scikit learn scikit learn pulls       If in doubt about duplicated work  or if you want to work on a non trivial   feature  it s recommended to first open an 
issue in   the  issue tracker  https   github com scikit learn scikit learn issues      to get some feedbacks from core developers     One easy way to find an issue to work on is by applying the  help wanted    label in your search  This lists all the issues that have been unclaimed   so far  In order to claim an issue for yourself  please comment exactly      take   on it for the CI to automatically assign the issue to you   To maintain the quality of the codebase and ease the review process  any contribution must conform to the project s  ref  coding guidelines  coding guidelines    in particular     Don t modify unrelated lines to keep the PR focused on the scope stated in its   description or issue    Only write inline comments that add value and avoid stating the obvious  explain   the  why  rather than the  what       Most importantly    Do not contribute code that you don t understand   Video resources                 These videos are step by step introductions on how to contribute to scikit learn  and are a great companion to the following text guidelines  Please make sure to still check our guidelines below  since they describe our latest up to date workflow     Crash Course in Contributing to Scikit Learn   Open Source Projects     Video  https   youtu be 5OL8XoMMOfA         Transcript    https   github com data umbrella event transcripts blob main 2020 05 andreas mueller contributing md        Example of Submitting a Pull Request to scikit learn     Video  https   youtu be PU1WyDPGePI         Transcript    https   github com data umbrella event transcripts blob main 2020 06 reshama shaikh sklearn pr md        Sprint specific instructions and practical tips     Video  https   youtu be p 2Uw2BxdhA         Transcript    https   github com data umbrella data umbrella scikit learn sprint blob master 3 transcript ACM video vol2 md        3 Components of Reviewing a Pull Request     Video  https   youtu be dyxS9KKCNzA         Transcript    https   github com 
data umbrella event transcripts blob main 2021 27 thomas pr md         note     In January 2021  the default branch name changed from   master   to   main     for the scikit learn GitHub repository to use more inclusive terms    These videos were created prior to the renaming of the branch    For contributors who are viewing these videos to set up their   working environment and submitting a PR    master   should be replaced to   main     How to contribute                    The preferred way to contribute to scikit learn is to fork the  main repository  https   github com scikit learn scikit learn      on GitHub  then submit a  pull request   PR    In the first few steps  we explain how to locally install scikit learn  and how to set up your git repository   1   Create an account  https   github com join    on    GitHub if you do not already have one   2  Fork the  project repository     https   github com scikit learn scikit learn      click on the  Fork     button near the top of the page  This creates a copy of the code under your    account on the GitHub user account  For more details on how to fork a    repository see  this guide  https   help github com articles fork a repo       3  Clone your fork of the scikit learn repo from your GitHub account to your    local disk         prompt   bash        git clone git github com YourLogin scikit learn git    add   depth 1 if your connection is slow       cd scikit learn  4  Follow steps 2 6 in  ref  install bleeding edge  to build scikit learn in    development mode and return to this document   5  Install the development dependencies         prompt   bash          pip install pytest pytest cov ruff mypy numpydoc black  24 3 0      upstream   6  Add the   upstream   remote  This saves a reference to the main    scikit learn repository  which you can use to keep your repository    synchronized with the latest changes         prompt   bash          git remote add upstream git github com scikit learn scikit learn git  
7  Check that the  upstream  and  origin  remote aliases are configured correctly    by running  git remote  v  which should display         code block   text          origin git github com YourLogin scikit learn git  fetch          origin git github com YourLogin scikit learn git  push          upstream git github com scikit learn scikit learn git  fetch          upstream git github com scikit learn scikit learn git  push   You should now have a working installation of scikit learn  and your git repository properly configured  It could be useful to run some test to verify your installation  Please refer to  ref  pytest tips  for examples   The next steps now describe the process of modifying code and submitting a PR   8  Synchronize your   main   branch with the   upstream main   branch     more details on  GitHub Docs  https   docs github com en github collaborating with issues and pull requests syncing a fork            prompt   bash          git checkout main         git fetch upstream         git merge upstream main  9  Create a feature branch to hold your development changes         prompt   bash          git checkout  b my feature     and start making changes  Always use a feature branch  It s good    practice to never work on the   main   branch   10     Optional    Install  pre commit  https   pre commit com  install    to     run code style checks before each commit          prompt   bash            pip install pre commit           pre commit install      pre commit checks can be disabled for a particular commit with      git commit  n    11  Develop the feature on your feature branch on your computer  using Git to     do the version control  When you re done editing  add changed files using       git add   and then   git commit            prompt   bash          git add modified files         git commit      to record your changes in Git  then push the changes to your GitHub     account with          prompt   bash         git push  u origin my feature  12 
 Follow  these      https   help github com articles creating a pull request from a fork        instructions to create a pull request from your fork  This will send an     notification to potential reviewers  You may want to consider sending an message to     the  discord  https   discord com invite h9qyrK8Jc8    in the development     channel for more visibility if your pull request does not receive attention after     a couple of days  instant replies are not guaranteed though    It is often helpful to keep your local feature branch synchronized with the latest changes of the main scikit learn repository      prompt   bash      git fetch upstream     git merge upstream main  Subsequently  you might need to solve the conflicts  You can refer to the  Git documentation related to resolving merge conflict using the command line  https   help github com articles resolving a merge conflict using the command line          topic   Learning Git      The  Git documentation  https   git scm com doc    and     http   try github io are excellent resources to get started with git      and understanding all of the commands shown here       pr checklist   Pull request checklist                         Before a PR can be merged  it needs to be approved by two core developers  An incomplete contribution    where you expect to do more work before receiving a full review    should be marked as a  draft pull request  https   docs github com en pull requests collaborating with pull requests proposing changes to your work with pull requests changing the stage of a pull request     and changed to  ready for review  when it matures  Draft PRs may be useful to  indicate you are working on something to avoid duplicated work  request broad review of functionality or API  or seek collaborators  Draft PRs often benefit from the inclusion of a  task list  https   github com blog 1375 task lists in gfm issues pulls comments    in the PR description   In order to ease the reviewing process  we 
recommend that your contribution complies with the following rules before marking a PR as  ready for review   The   bolded   ones are especially important   1    Give your pull request a helpful title   that summarizes what your    contribution does  This title will often become the commit message once    merged so it should summarize your contribution for posterity  In some    cases  Fix  ISSUE TITLE   is enough   Fix   ISSUE NUMBER   is never a    good title   2    Make sure your code passes the tests    The whole test suite can be run    with  pytest   but it is usually not recommended since it takes a long    time  It is often enough to only run the test related to your changes     for example  if you changed something in     sklearn linear model  logistic py   running the following commands will    usually be enough         pytest sklearn linear model  logistic py  to make sure the doctest      examples are correct       pytest sklearn linear model tests test logistic py  to run the tests      specific to the file       pytest sklearn linear model  to test the whole       mod   sklearn linear model  module       pytest doc modules linear model rst  to make sure the user guide      examples are correct        pytest sklearn tests test common py  k LogisticRegression  to run all our      estimator checks  specifically for  LogisticRegression   if that s the      estimator you changed       There may be other failing tests  but they will be caught by the CI so    you don t need to run the whole test suite locally  For guidelines on how    to use   pytest   efficiently  see the  ref  pytest tips    3    Make sure your code is properly commented and documented    and   make    sure the documentation renders properly    To build the documentation  please    refer to our  ref  contribute documentation  guidelines  The CI will also    build the docs  please refer to  ref  generated doc CI    4    Tests are necessary for enhancements to be    accepted    Bug fixes or 
new features should be provided with     non regression tests     https   en wikipedia org wiki Non regression testing     These tests    verify the correct behavior of the fix or feature  In this manner  further    modifications on the code base are granted to be consistent with the    desired behavior  In the case of bug fixes  at the time of the PR  the    non regression tests should fail for the code base in the   main   branch    and pass for the PR code   5  If your PR is likely to affect users  you need to add a changelog entry describing    your PR changes  see the  following README  https   github com scikit learn scikit learn blob main doc whats new upcoming changes README md      for more details   6  Follow the  ref  coding guidelines    7  When applicable  use the validation tools and scripts in the  mod  sklearn utils     module  A list of utility routines available for developers can be found in the     ref  developers utils  page   8  Often pull requests resolve one or more other issues  or pull requests      If merging your pull request means that some other issues PRs should    be closed  you should  use keywords to create link to them     https   github com blog 1506 closing issues via pull requests         e g     Fixes  1234    multiple issues PRs are allowed as long as each    one is preceded by a keyword   Upon merging  those issues PRs will    automatically be closed by GitHub  If your pull request is simply    related to some other issues PRs  or it only partially resolves the target    issue  create a link to them without using the keywords  e g     Towards  1234      9  PRs should often substantiate the change  through benchmarks of    performance and efficiency  see  ref  monitoring performances   or through    examples of usage  Examples also illustrate the features and intricacies of    the library to users  Have a look at other examples in the  examples      https   github com scikit learn scikit learn tree main examples       
directory for reference  Examples should demonstrate why the new    functionality is useful in practice and  if possible  compare it to other    methods available in scikit learn   10  New features have some maintenance overhead  We expect PR authors     to take part in the maintenance for the code they submit  at least     initially  New features need to be illustrated with narrative     documentation in the user guide  with small code snippets      If relevant  please also add references in the literature  with PDF links     when possible   11  The user guide should also include expected time and space complexity     of the algorithm and scalability  e g   this algorithm can scale to a     large number of samples   100000  but does not scale in dimensionality       n features  is expected to be lower than 100    You can also check our  ref  code review  to get an idea of what reviewers will expect   You can check for common programming errors with the following tools     Code with a good unit test coverage  at least 80   better 100    check with        prompt   bash      pip install pytest pytest cov     pytest   cov sklearn path to tests    See also  ref  testing coverage      Run static analysis with  mypy         prompt   bash        mypy sklearn    This must not produce new errors in your pull request  Using    type  ignore    annotation can be a workaround for a few cases that are not supported by   mypy  in particular       when importing C or Cython modules      on properties with decorators   Bonus points for contributions that include a performance analysis with a benchmark script and profiling output  see  ref  monitoring performances    Also check out the  ref  performance howto  guide for more details on profiling and Cython optimizations      note      The current state of the scikit learn code base is not compliant with   all of those guidelines  but we expect that enforcing those constraints   on all new contributions will get the overall code base 
quality in the   right direction      seealso       For two very well documented and more detailed guides on development    workflow  please pay a visit to the  Scipy Development Workflow     http   scipy github io devdocs dev dev quickstart html         and the  Astropy Workflow for Developers     https   astropy readthedocs io en latest development workflow development workflow html       sections   Continuous Integration  CI                                 Azure pipelines are used for testing scikit learn on Linux  Mac and Windows    with different dependencies and settings    CircleCI is used to build the docs for viewing    Github Actions are used for various tasks  including building wheels and   source distributions    Cirrus CI is used to build on ARM       commit markers   Commit message markers                         Please note that if one of the following markers appear in the latest commit message  the following actions are taken                                              Commit Message Marker  Action Taken by CI                                             ci skip               CI is skipped completely  cd build              CD is run  wheels and source distribution are built   cd build gh           CD is run only for GitHub Actions  cd build cirrus       CD is run only for Cirrus CI  lint skip             Azure pipeline skips linting  scipy dev             Build   test with our dependencies  numpy  scipy  etc   development builds  free threaded         Build   test with CPython 3 13 free threaded  pyodide               Build   test with Pyodide  azure parallel        Run Azure CI jobs in parallel  cirrus arm            Run Cirrus CI ARM test  float32               Run float32 tests by setting  SKLEARN RUN FLOAT32 TESTS 1   See  ref  environment variable  for more details  doc skip              Docs are not built  doc quick             Docs built  but excludes example gallery plots  doc build             Docs built including example gallery plots  
very long                                              Note that  by default  the documentation is built but only the examples that are directly modified by the pull request are executed       build lock files   Build lock files                   CIs use lock files to build environments with specific versions of dependencies  When a PR needs to modify the dependencies or their versions  the lock files should be updated accordingly  This can be done by adding the following comment directly in the GitHub Pull Request  PR  discussion      code block   text     scikit learn bot update lock files  A bot will push a commit to your PR branch with the updated lock files in a few minutes  Make sure to tick the  Allow edits from maintainers  checkbox located at the bottom of the right sidebar of the PR  You can also specify the options    select build      skip build   and    select tag  as in a command line  Use    help  on the script  build tools update environments and lock files py  for more information  For example      code block   text     scikit learn bot update lock files   select tag main ci   skip build doc  The bot will automatically add  ref  commit message markers  commit markers   to the commit for certain tags  If you want to add more markers manually  you can do so using the    commit marker  option  For example  the following comment will trigger the bot to update documentation related lock files and add the   doc build   marker to the commit      code block   text     scikit learn bot update lock files   select build doc   commit marker   doc build    Resolve conflicts in lock files                                  Here is a bash snippet that helps resolving conflicts in environment and lock files      prompt   bash      pull latest upstream main   git pull upstream main   no rebase     resolve conflicts   keeping the upstream main version for specific files   git checkout   theirs  build tools     lock build tools    environment yml         build tools    
lock txt build tools    requirements txt   git add build tools     lock build tools    environment yml         build tools    lock txt build tools    requirements txt   git merge   continue  This will merge  upstream main  into our branch  automatically prioritising the  upstream main  for conflicting environment and lock files  this is good enough  because we will re generate the lock files afterwards    Note that this only fixes conflicts in environment and lock files and you might have other conflicts to resolve   Finally  we have to re generate the environment and lock files for the CIs  as described in  ref  Build lock files  build lock files    or by running      prompt   bash    python build tools update environments and lock files py      stalled pull request   Stalled pull requests                        As contributing a feature can be a lengthy process  some pull requests appear inactive but unfinished  In such a case  taking them over is a great service for the project  A good etiquette to take over is       Determine if a PR is stalled        A pull request may have the label  stalled  or  help wanted  if we     have already identified it as a candidate for other contributors       To decide whether an inactive PR is stalled  ask the contributor if     she he plans to continue working on the PR in the near future      Failure to respond within 2 weeks with an activity that moves the PR     forward suggests that the PR is stalled and will result in tagging     that PR with  help wanted        Note that if a PR has received earlier comments on the contribution     that have had no reply in a month  it is safe to assume that the PR     is stalled and to shorten the wait time to one day       After a sprint  follow up for un merged PRs opened during sprint will     be communicated to participants at the sprint  and those PRs will be     tagged  sprint   PRs tagged with  sprint  can be reassigned or     declared stalled by sprint leaders       Taking over a 
stalled PR    To take over a PR  it is important to   comment on the stalled PR that you are taking over and to link from the   new PR to the old one  The new PR should be created by pulling from the   old one   Stalled and Unclaimed Issues                               Generally speaking  issues which are up for grabs will have a   help wanted   https   github com scikit learn scikit learn labels help 20wanted     tag  However  not all issues which need contributors will have this tag  as the  help wanted  tag is not always up to date with the state of the issue  Contributors can find issues which are still up for grabs using the following guidelines     First  to   determine if an issue is claimed         Check for linked pull requests     Check the conversation to see if anyone has said that they re working on     creating a pull request    If a contributor comments on an issue to say they are working on it    a pull request is expected within 2 weeks  new contributor  or 4 weeks    contributor or core dev   unless an larger time frame is explicitly given    Beyond that time  another contributor can take the issue and make a   pull request for it  We encourage contributors to comment directly on the   stalled or unclaimed issue to let community members know that they will be   working on it     If the issue is linked to a  ref  stalled pull request  stalled pull request      we recommend that contributors follow the procedure   described in the  ref  stalled pull request    section rather than working directly on the issue       new contributors   Issues for New Contributors                              New contributors should look for the following tags when looking for issues   We strongly recommend that new contributors tackle  easy  issues first  this helps the contributor become familiar with the contribution workflow  and for the core devs to become acquainted with the contributor  besides which  we frequently underestimate how easy an issue is to solve    
**Good first issue tag**

A great way to start contributing to scikit-learn is to pick an item from the
list of `good first issues
<https://github.com/scikit-learn/scikit-learn/labels/good%20first%20issue>`_
in the issue tracker. Resolving these issues allows you to start contributing
to the project without much prior knowledge. If you have already contributed
to scikit-learn, you should look at Easy issues instead.

**Easy tag**

If you have already contributed to scikit-learn, another great way to
contribute is to pick an item from the list of `Easy issues
<https://github.com/scikit-learn/scikit-learn/labels/Easy>`_ in the issue
tracker. Your assistance in this area will be greatly appreciated by the more
experienced developers, as it helps free up their time to concentrate on
other issues.

**Help wanted tag**

We often use the help wanted tag to mark issues regardless of difficulty.
Additionally, we use the help wanted tag to mark Pull Requests which have
been abandoned by their original contributor and are available for someone to
pick up where the original contributor left off. The list of issues with the
help wanted tag can be found `here
<https://github.com/scikit-learn/scikit-learn/labels/help%20wanted>`_. Note
that not all issues which need contributors will have this tag.

.. _contribute_documentation:

Documentation
-------------

We are glad to accept any sort of documentation:

* **Function/method/class docstrings:** Also known as "API documentation",
  these describe what the object does and detail any parameters, attributes
  and methods. Docstrings live alongside the code in `sklearn/
  <https://github.com/scikit-learn/scikit-learn/tree/main/sklearn>`_, and are
  generated according to `doc/api_reference.py
  <https://github.com/scikit-learn/scikit-learn/blob/main/doc/api_reference.py>`_.
  To add, update, remove, or deprecate a public API that is listed in
  :ref:`api_ref`, this is
  the place to look at.
* **User guide:** These provide more detailed information about the
  algorithms implemented in scikit-learn and generally live in the root
  `doc/ <https://github.com/scikit-learn/scikit-learn/tree/main/doc>`_
  directory and in `doc/modules/
  <https://github.com/scikit-learn/scikit-learn/tree/main/doc/modules>`_.
* **Examples:** These provide full code examples that may demonstrate the use
  of scikit-learn modules, compare different algorithms or discuss their
  interpretation, etc. Examples live in `examples/
  <https://github.com/scikit-learn/scikit-learn/tree/main/examples>`_.
* **Other reStructuredText documents:** These provide various other useful
  information (e.g., the :ref:`contributing` guide) and live in `doc/
  <https://github.com/scikit-learn/scikit-learn/tree/main/doc>`_.

.. dropdown:: Guidelines for writing docstrings

  When documenting the parameters and attributes, here is a list of some
  well-formatted examples:

  .. code-block:: text

    n_clusters : int, default=3
        The number of clusters detected by the algorithm.

    some_param : {"hello", "goodbye"}, bool or int, default=True
        The parameter description goes here, which can be either a string
        literal (either "hello" or "goodbye"), a bool, or an int. The default
        value is True.

    array_parameter : {array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples,)
        This parameter accepts data in either of the mentioned forms, with one
        of the mentioned shapes. The default value is np.ones(shape=(n_samples,)).

    list_param : list of int

    typed_ndarray : ndarray of shape (n_samples,), dtype=np.int32

    sample_weight : array-like of shape (n_samples,), default=None

    multioutput_array : ndarray of shape (n_samples, n_classes) or list of such arrays

  In general have the following in mind:

  * Use Python basic types: ``bool`` instead of ``boolean``.
  * Use parenthesis for defining shapes: ``array-like of shape (n_samples,)``
    or ``array-like of shape (n_samples, n_features)``.
  * For strings with multiple options, use brackets: ``input: {'log',
    'squared', 'multinomial'}``.
  * 1D or 2D data can be a subset of ``{array-like, ndarray, sparse matrix,
    dataframe}``. Note that ``array-like`` can also be a ``list``, while
    ``ndarray`` is explicitly only a ``numpy.ndarray``.
  * Specify ``dataframe`` when "frame-like" features are being used, such as
    the column names.
  * When specifying the data type of a list, use ``of`` as a delimiter:
    ``list of int``. When the parameter supports arrays giving details about
    the shape and/or data type and a list of such arrays, you can use one of
    ``array-like of shape (n_samples,) or list of such arrays``.
  * When specifying the dtype of an ndarray, use e.g. ``dtype=np.int32``
    after defining the shape: ``ndarray of shape (n_samples,),
    dtype=np.int32``. You can specify multiple dtypes as a set: ``array-like
    of shape (n_samples,), dtype={np.float64, np.float32}``. If one wants to
    mention arbitrary precision, use `integral` and `floating` rather than
    the Python dtypes `int` and `float`. When both `int` and `floating` are
    supported, there is no need to specify the dtype.
  * When the default is ``None``, ``None`` only needs to be specified at the
    end with ``default=None``. Be sure to include in the docstring what it
    means for the parameter or attribute to be ``None``.
  * Add "See Also" in docstrings for related classes/functions.
  * "See Also" in docstrings should be one line per reference, with a colon
    and an explanation, for example:

    .. code-block:: text

      See Also
      --------
      SelectKBest : Select features based on the k highest scores.
      SelectFpr : Select features based on a false positive rate test.

  * Add one or two snippets of code in the "Examples" section to show how it
    can be used.
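Putting several of these conventions together, here is a minimal sketch of a
numpydoc-style docstring. The function ``count_clusters`` is hypothetical and
only illustrates the formatting rules above (parameter types, ``default=``,
"See Also", and a short "Examples" section); it is not part of scikit-learn:

```python
import numpy as np


def count_clusters(X, n_clusters=3, sample_weight=None):
    """Count points assigned to each of ``n_clusters`` clusters.

    Parameters
    ----------
    X : array-like of shape (n_samples, n_features)
        The input data.

    n_clusters : int, default=3
        The number of clusters.

    sample_weight : array-like of shape (n_samples,), default=None
        Individual weights for each sample. If None, all samples get unit
        weight.

    Returns
    -------
    counts : ndarray of shape (n_clusters,), dtype=np.int64
        Number of samples assigned to each cluster.

    See Also
    --------
    KMeans : Cluster data by trying to separate samples in n groups.

    Examples
    --------
    >>> import numpy as np
    >>> count_clusters(np.zeros((4, 2)), n_clusters=2)
    array([4, 0])
    """
    X = np.asarray(X)
    # Toy assignment: everything goes to cluster 0; a real implementation
    # would actually cluster the data and use sample_weight.
    labels = np.zeros(X.shape[0], dtype=np.int64)
    return np.bincount(labels, minlength=n_clusters)
```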
.. dropdown:: Guidelines for writing the user guide and other reStructuredText documents

  It is important to keep a good compromise between mathematical and
  algorithmic details, and give intuition to the reader on what the algorithm
  does.

  * Begin with a concise, hand-waving explanation of what the algorithm/code
    does on the data.
  * Highlight the usefulness of the feature and its recommended application.
  * Consider including the algorithm's complexity
    (:math:`O\left(g\left(n\right)\right)`) if available, as "rules of thumb"
    can be very machine-dependent. Only if those complexities are not
    available, then rules of thumb may be provided instead.
  * Incorporate a relevant figure (generated from an example) to provide
    intuitions.
  * Include one or two short code examples to demonstrate the feature's
    usage.
  * Introduce any necessary mathematical equations, followed by references.
    By deferring the mathematical aspects, the documentation becomes more
    accessible to users primarily interested in understanding the feature's
    practical implications rather than its underlying mechanics.
  * When editing reStructuredText (``.rst``) files, try to keep line length
    under 88 characters when possible (exceptions include links and tables).
  * In scikit-learn reStructuredText files both single and double backticks
    surrounding text will render as inline literal (often used for code, e.g.
    ``list``). This is due to specific configurations we have set. Single
    backticks should be used nowadays.
  * Too much information makes it difficult for users to access the content
    they are interested in. Use dropdowns to factorize it by using the
    following syntax:

    .. code-block:: rst

      .. dropdown:: Dropdown title

        Dropdown content.

    The snippet above will result in the following dropdown:

    .. dropdown:: Dropdown title

      Dropdown content.

  * Information that can be
    hidden by default using dropdowns is:

    * low hierarchy sections such as "References", "Properties", etc. (see
      for instance the subsections in :ref:`det_curve`);
    * in-depth mathematical details;
    * narrative that is use-case specific;
    * in general, narrative that may only interest users that want to go
      beyond the pragmatics of a given tool.

  * Do not use dropdowns for the low-level section "Examples", as it should
    stay visible to all users. Make sure that the "Examples" section comes
    right after the main discussion with the least possible folded section in
    between.
  * Be aware that dropdowns break cross-references. If that makes sense, hide
    the reference along with the text mentioning it. Else, do not use
    dropdown.

.. dropdown:: Guidelines for writing references

  * When bibliographic references are available with `arXiv
    <https://arxiv.org/>`_ or `Digital Object Identifier
    <https://www.doi.org/>`_ identification numbers, use the Sphinx
    directives ``:arxiv:`` or ``:doi:``. For example, see references in
    :ref:`Spectral Clustering Graphs <spectral_clustering_graph>`.
  * For the "References" section in docstrings, see
    :func:`sklearn.metrics.silhouette_score` as an example.
  * To cross-reference to other pages in the scikit-learn documentation use
    the reStructuredText cross-referencing syntax:

    * **Section:** to link to an arbitrary section in the documentation, use
      reference labels (see the `Sphinx docs
      <https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#ref-role>`_).
      For example:

      .. code-block:: rst

        .. _my-section:

        My section
        ----------

        This is the text of the section.

        To refer to itself use :ref:`my-section`.

      You should not modify existing Sphinx reference labels as this would
      break existing cross-references and external links pointing to specific
      sections in the scikit-learn
      documentation.

    * **Glossary:** linking to a term in the :ref:`glossary`:

      .. code-block:: rst

        :term:`cross-validation`

    * **Function:** to link to the documentation of a function, use the full
      import path to the function:

      .. code-block:: rst

        :func:`sklearn.model_selection.cross_val_score`

      However, if there is a ``.. currentmodule::`` directive above you in
      the document, you will only need to use the path to the function
      succeeding the current module specified. For example:

      .. code-block:: rst

        .. currentmodule:: sklearn.model_selection

        :func:`cross_val_score`

    * **Class:** to link to documentation of a class, use the full import
      path to the class, unless there is a ``.. currentmodule::`` directive
      in the document above (see above):

      .. code-block:: rst

        :class:`sklearn.preprocessing.StandardScaler`

You can edit the documentation using any text editor, and then generate the
HTML output by following :ref:`building_documentation`. The resulting HTML
files will be placed in ``_build/html/`` and are viewable in a web browser,
for instance by opening the local ``_build/html/index.html`` file or by
running a local server:

.. prompt:: bash

  python -m http.server -d _build/html

.. _building_documentation:

Building the documentation
--------------------------

Before submitting a pull request check if your modifications have introduced
new sphinx warnings by building the documentation locally and try to fix
them.

First, make sure you have :ref:`properly installed <install_bleeding_edge>`
the development version. On top of that, building the documentation requires
installing some additional packages:

..
    packaging is not needed once setuptools starts shipping packaging>=17.0

.. prompt:: bash

  pip install sphinx sphinx-gallery numpydoc matplotlib Pillow pandas \
              polars scikit-image packaging seaborn sphinx-prompt \
              sphinxext-opengraph sphinx-copybutton plotly pooch \
              pydata-sphinx-theme sphinxcontrib-sass sphinx-design \
              sphinx-remove-toctrees

To build the documentation, you need to be in the ``doc`` folder:

.. prompt:: bash

  cd doc

In the vast majority of cases, you only need to generate the web site without
the example gallery:

.. prompt:: bash

  make

The documentation will be generated in the ``_build/html/stable`` directory
and are viewable in a web browser, for instance by opening the local
``_build/html/stable/index.html`` file. To also generate the example gallery
you can use:

.. prompt:: bash

  make html

This will run all the examples, which takes a while. You can also run only a
few examples based on their file names. Here is a way to run all examples
with filenames containing ``plot_calibration``:

.. prompt:: bash

  EXAMPLES_PATTERN="plot_calibration" make html

You can use regular expressions for more advanced use cases.

Set the environment variable ``NO_MATHJAX=1`` if you intend to view the
documentation in an offline setting. To build the PDF manual, run:

.. prompt:: bash

  make latexpdf

.. admonition:: Sphinx version
   :class: warning

   While we do our best to have the documentation build under as many
   versions of Sphinx as possible, the different versions tend to behave
   slightly differently. To get the best results, you should use the same
   version as the one we used on CircleCI. Look at this `GitHub search
   <https://github.com/search?q=repo%3Ascikit-learn%2Fscikit-learn+%2F%5C%2Fsphinx-%5B0-9%5D%2B%2F+path%3Abuild_tools%2Fcircle%2Fdoc_linux-64_conda.lock&type=code>`_
   to know the exact version.

.. _generated_doc_CI:

Generated documentation on GitHub Actions
-----------------------------------------

When you change the documentation in a pull request, GitHub Actions
automatically builds it. To view the documentation generated by GitHub
Actions, simply go to the bottom of
your PR page, look for the item "Check the rendered docs here!" and click on
"details" next to it:

.. image:: ../images/generated_doc_ci.png
   :align: center

.. _testing_coverage:

Testing and improving test coverage
-----------------------------------

High quality `unit testing <https://en.wikipedia.org/wiki/Unit_testing>`_ is
a corner-stone of the scikit-learn development process. For this purpose, we
use the `pytest <https://docs.pytest.org>`_ package. The tests are functions
appropriately named, located in ``tests`` subdirectories, that check the
validity of the algorithms and the different options of the code.

Running ``pytest`` in a folder will run all the tests of the corresponding
subpackages. For a more detailed ``pytest`` workflow, please refer to the
:ref:`pr_checklist`.

We expect code coverage of new features to be at least around 90%.

.. dropdown:: Writing matplotlib-related tests

  Test fixtures ensure that a set of tests will be executing with the
  appropriate initialization and cleanup. The scikit-learn test suite
  implements a ``pyplot`` fixture which can be used with ``matplotlib``.

  The ``pyplot`` fixture should be used when a test function is dealing with
  ``matplotlib``. ``matplotlib`` is a soft dependency and is not required.
  This fixture is in charge of skipping the tests if ``matplotlib`` is not
  installed. In addition, figures created during the tests will be
  automatically closed once the test function has been executed.

  To use this fixture in a test function, one needs to pass it as an
  argument::

    def test_requiring_mpl_fixture(pyplot):
        # you can now safely use matplotlib

.. dropdown:: Workflow to improve test coverage

  To test code coverage, you need to install the `coverage
  <https://pypi.org/project/coverage/>`_ package in addition to ``pytest``.

  1. Run ``pytest --cov sklearn /path/to/tests``. The output lists for each
     file the line numbers that are not tested.
  2. Find a low hanging fruit,
     looking at which lines are not tested, and write or adapt a test
     specifically for these lines.
  3. Loop.

.. _monitoring_performances:

Monitoring performance
----------------------

*This section is heavily inspired from the* `pandas documentation
<https://pandas.pydata.org/docs/development/contributing_codebase.html#running-the-performance-test-suite>`_.

When proposing changes to the existing code base, it's important to make sure
that they don't introduce performance regressions. Scikit-learn uses `asv
benchmarks <https://github.com/airspeed-velocity/asv>`_ to monitor the
performance of a selection of common estimators and functions. You can view
these benchmarks on the `scikit-learn benchmark page
<https://scikit-learn.org/scikit-learn-benchmarks>`_. The corresponding
benchmark suite can be found in the ``asv_benchmarks`` directory.

To use all features of asv, you will need either ``conda`` or ``virtualenv``.
For more details please check the `asv installation webpage
<https://asv.readthedocs.io/en/latest/installing.html>`_.

First of all you need to install the development version of asv:

.. prompt:: bash

  pip install git+https://github.com/airspeed-velocity/asv

and change your directory to ``asv_benchmarks/``:

.. prompt:: bash

  cd asv_benchmarks

The benchmark suite is configured to run against your local clone of
scikit-learn. Make sure it is up to date:

.. prompt:: bash

  git fetch upstream

In the benchmark suite, the benchmarks are organized following the same
structure as scikit-learn. For example, you can compare the performance of a
specific estimator between ``upstream/main`` and the branch you are working
on:

.. prompt:: bash

  asv continuous -b LogisticRegression upstream/main HEAD

The command uses conda by default for creating the benchmark environments. If
you want to use virtualenv instead, use the ``-E`` flag:

.. prompt:: bash

  asv continuous -E virtualenv -b LogisticRegression upstream/main HEAD

You can also specify a whole module
to benchmark:

.. prompt:: bash

  asv continuous -b linear_model upstream/main HEAD

You can replace ``HEAD`` by any local branch. By default it will only report
the benchmarks that have changed by at least 10%. You can control this ratio
with the ``-f`` flag.

To run the full benchmark suite, simply remove the ``-b`` flag:

.. prompt:: bash

  asv continuous upstream/main HEAD

However this can take up to two hours. The ``-b`` flag also accepts a regular
expression for a more complex subset of benchmarks to run.

To run the benchmarks without comparing to another branch, use the ``run``
command:

.. prompt:: bash

  asv run -b linear_model HEAD^!

You can also run the benchmark suite using the version of scikit-learn
already installed in your current Python environment:

.. prompt:: bash

  asv run --python=same

It's particularly useful when you installed scikit-learn in editable mode to
avoid creating a new environment each time you run the benchmarks. By default
the results are not saved when using an existing installation. To save the
results you must specify a commit hash:

.. prompt:: bash

  asv run --python=same --set-commit-hash=<commit hash>

Benchmarks are saved and organized by machine, environment and commit. To see
the list of all saved benchmarks:

.. prompt:: bash

  asv show

and to see the report of a specific run:

.. prompt:: bash

  asv show <commit hash>

When running benchmarks for a pull request you're working on please report
the results on github.

The benchmark suite supports additional configurable options which can be set
in the ``benchmarks/config.json`` configuration file. For example, the
benchmarks can run for a provided list of values for the ``n_jobs``
parameter.

More information on how to write a benchmark and how to use asv can be found
in the `asv documentation
<https://asv.readthedocs.io/en/latest/index.html>`_.
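As a rough illustration of the shape of such a benchmark, here is a minimal
sketch in the asv style: ``setup`` runs before timing, and each value in
``params`` is passed to ``setup`` and to every ``time_*`` method. The class
name and data sizes are illustrative, not taken from the actual suite:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


class LogisticRegressionBenchmarkSketch:
    # asv substitutes each value of `params` into setup() and time_fit();
    # here the benchmark is parametrized over n_jobs.
    param_names = ["n_jobs"]
    params = [1, 2]

    def setup(self, n_jobs):
        # Generate a small, fixed dataset so timings are reproducible.
        rng = np.random.RandomState(0)
        self.X = rng.rand(500, 20)
        self.y = rng.randint(0, 2, 500)

    def time_fit(self, n_jobs):
        # asv measures the wall-clock time of this method.
        LogisticRegression(n_jobs=n_jobs).fit(self.X, self.y)
```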
.. _issue_tracker_tags:

Issue Tracker Tags
------------------

All issues and pull requests on the `GitHub issue tracker
<https://github.com/scikit-learn/scikit-learn/issues>`_ should have (at
least) one of the following tags:

:Bug:
    Something is happening that clearly shouldn't happen. Wrong results as
    well as unexpected errors from estimators go here.

:Enhancement:
    Improving performance, usability, consistency.

:Documentation:
    Missing, incorrect or sub-standard documentations and examples.

:New Feature:
    Feature requests and pull requests implementing a new feature.

There are four other tags to help new contributors:

:Good first issue:
    This issue is ideal for a first contribution to scikit-learn. Ask for
    help if the formulation is unclear. If you have already contributed to
    scikit-learn, look at Easy issues instead.

:Easy:
    This issue can be tackled without much prior experience.

:Moderate:
    Might need some knowledge of machine learning or the package, but is
    still approachable for someone new to the project.

:Help wanted:
    This tag marks an issue which currently lacks a contributor or a PR that
    needs another contributor to take over the work. These issues can range
    in difficulty, and may not be approachable for new contributors. Note
    that not all issues which need contributors will have this tag.

.. _backwards_compatibility:

Maintaining backwards compatibility
-----------------------------------

.. _contributing_deprecation:

Deprecation
^^^^^^^^^^^

If any publicly accessible class, function, method, attribute or parameter is
renamed, we still support the old one for two releases and issue a
deprecation warning when it is called, passed, or accessed.

.. rubric:: Deprecating a class or a function

Suppose the function ``zero_one`` is renamed to ``zero_one_loss``: we add the
decorator :class:`utils.deprecated` to ``zero_one`` and call
``zero_one_loss`` from that function::

    from ..utils import deprecated

    def zero_one_loss(y_true, y_pred, normalize=True):
        # actual implementation
        pass
    @deprecated(
        "Function 'zero_one' was renamed to 'zero_one_loss' in 0.13 and will be "
        "removed in 0.15. Default behavior is changed from 'normalize=False' to "
        "'normalize=True'"
    )
    def zero_one(y_true, y_pred, normalize=False):
        return zero_one_loss(y_true, y_pred, normalize)

One also needs to move ``zero_one`` from ``API_REFERENCE`` to
``DEPRECATED_API_REFERENCE`` and add ``zero_one_loss`` to ``API_REFERENCE``
in the ``doc/api_reference.py`` file to reflect the changes in
:ref:`api_ref`.

.. rubric:: Deprecating an attribute or a method

If an attribute or a method is to be deprecated, use the decorator
:class:`~utils.deprecated` on the property. Please note that the
:class:`~utils.deprecated` decorator should be placed before the ``property``
decorator if there is one, so that the docstrings can be rendered properly.
For instance, renaming an attribute ``labels_`` to ``classes_`` can be done
as::

    @deprecated(
        "Attribute `labels_` was deprecated in 0.13 and will be removed in 0.15. "
        "Use `classes_` instead"
    )
    @property
    def labels_(self):
        return self.classes_

.. rubric:: Deprecating a parameter

If a parameter has to be deprecated, a ``FutureWarning`` warning must be
raised manually. In the following example, ``k`` is deprecated and renamed to
``n_clusters``::

    import warnings

    def example_function(n_clusters=8, k="deprecated"):
        if k != "deprecated":
            warnings.warn(
                "'k' was renamed to 'n_clusters' in 0.13 and will be removed in 0.15",
                FutureWarning,
            )
            n_clusters = k

When the change is in a class, we validate and raise warning in ``fit``::

    import warnings

    class ExampleEstimator(BaseEstimator):
        def __init__(self, n_clusters=8, k="deprecated"):
            self.n_clusters = n_clusters
            self.k = k

        def fit(self, X, y):
            if self.k != "deprecated":
                warnings.warn(
                    "'k' was renamed to 'n_clusters' in 0.13 and will be removed in 0.15",
                    FutureWarning,
                )
                self._n_clusters = self.k
            else:
                self._n_clusters = self.n_clusters

As in these examples, the warning message should always give both the version
in which the deprecation happened and the version in which the old behavior
will be removed. If the deprecation happened in version 0.x-dev, the message
should say deprecation occurred in version 0.x and the removal will be in
0.(x+2), so that users will have enough time to adapt their code to the new
behaviour. For example, if the deprecation happened in version 0.18-dev, the
message should say it happened in version 0.18 and the old behavior will be
removed in version 0.20.

The warning message should also include a brief explanation of the change and
point users to an alternative.

In addition, a deprecation note should be added in the docstring, recalling
the same information as the deprecation warning as explained above. Use the
``.. deprecated::`` directive:

.. code-block:: rst

  .. deprecated:: 0.13
     ``k`` was renamed to ``n_clusters`` in version 0.13 and will be removed
     in 0.15.

What's more, a deprecation requires a test which ensures that the warning is
raised in relevant cases but not in other cases. The warning should be caught
in all other tests (using e.g., ``@pytest.mark.filterwarnings``), and there
should be no warning in the examples.

Change the default value of a parameter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If the default value of a parameter needs to be changed, please replace the
default value with a specific value (e.g., ``"warn"``) and raise
``FutureWarning`` when users are using the default value. The following
example assumes that the current version is 0.20 and that we change the
default value of ``n_clusters`` from 5 (old default for 0.20) to 10 (new
default for 0.22)::

    import warnings
    def example_function(n_clusters="warn"):
        if n_clusters == "warn":
            warnings.warn(
                "The default value of 'n_clusters' will change from 5 to 10 in 0.22",
                FutureWarning,
            )
            n_clusters = 5

When the change is in a class, we validate and raise warning in ``fit``::

    import warnings

    class ExampleEstimator:
        def __init__(self, n_clusters="warn"):
            self.n_clusters = n_clusters

        def fit(self, X, y):
            if self.n_clusters == "warn":
                warnings.warn(
                    "The default value of 'n_clusters' will change from 5 to 10 in 0.22",
                    FutureWarning,
                )
                self._n_clusters = 5

Similar to deprecations, the warning message should always give both the
version in which the change happened and the version in which the old
behavior will be removed.

The parameter description in the docstring needs to be updated accordingly by
adding a ``versionchanged`` directive with the old and new default value,
pointing to the version when the change will be effective:

.. code-block:: rst

  .. versionchanged:: 0.22
     The default value for ``n_clusters`` will change from 5 to 10 in
     version 0.22.

Finally, we need a test which ensures that the warning is raised in relevant
cases but not in other cases. The warning should be caught in all other tests
(using e.g., ``@pytest.mark.filterwarnings``), and there should be no warning
in the examples.
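A minimal sketch of what such a test could look like, using
``pytest.warns``. The ``example_function`` here mirrors the illustrative
snippet above (it is not a real scikit-learn function); the key point is
asserting that the warning fires on the default but not on an explicit value:

```python
import warnings

import pytest


def example_function(n_clusters="warn"):
    # Hypothetical function from the example above: the default of
    # n_clusters is changing from 5 to 10 in a future version.
    if n_clusters == "warn":
        warnings.warn(
            "The default value of 'n_clusters' will change from 5 to 10 in 0.22",
            FutureWarning,
        )
        n_clusters = 5
    return n_clusters


def test_example_function_future_warning():
    # The warning must be raised when the user relies on the default...
    with pytest.warns(FutureWarning, match="default value of 'n_clusters'"):
        assert example_function() == 5

    # ...but not when a value is passed explicitly.
    with warnings.catch_warnings():
        warnings.simplefilter("error")  # turn any warning into an error
        assert example_function(n_clusters=10) == 10
```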
.. _code_review:

Code Review Guidelines
----------------------

Reviewing code contributed to the project as PRs is a crucial component of
scikit-learn development. We encourage anyone to start reviewing code of
other developers. The code review process is often highly educational for
everybody involved. This is particularly appropriate if it is a feature you
would like to use, and so can respond critically about whether the PR meets
your needs. While each pull request needs to be signed off by two core
developers, you can speed up this process by providing your feedback.

.. note::

  The difference between an objective improvement and a subjective nit isn't
  always clear. Reviewers should recall that code review is primarily about
  reducing risk in the project. When reviewing code, one should aim at
  preventing situations which may require a bug fix, a deprecation, or a
  retraction. Regarding docs: typos, grammar issues and disambiguations are
  better addressed immediately.

.. dropdown:: Important aspects to be covered in any code review

  Here are a few important aspects that need to be covered in any code
  review, from high-level questions to a more detailed check list.

  - Do we want this in the library? Is it likely to be used? Do you, as a
    scikit-learn user, like the change and intend to use it? Is it in the
    scope of scikit-learn? Will the cost of maintaining a new feature be
    worth its benefits?
  - Is the code consistent with the API of scikit-learn? Are public
    functions/classes/parameters well named and intuitively designed?
  - Are all public functions/classes and their parameters, return types, and
    stored attributes named according to scikit-learn conventions and
    documented clearly?
  - Is any new functionality described in the user guide and illustrated
    with examples?
  - Is every public function/class tested? Are a reasonable set of
    parameters, their values, value types, and combinations tested? Do the
    tests validate that the code is correct, i.e. doing what the
    documentation says it does? If the change is a bug fix, is a
    non-regression test included? Look at `this
    <https://jeffknupp.com/blog/2013/12/09/improve-your-python-understanding-unit-testing>`_
    to get started with testing in Python.
  - Do the tests pass in the continuous integration build? If appropriate,
    help the contributor understand why tests failed.
  - Do the tests cover every line of code (see the coverage
    report in the build log)? If not, are the lines missing coverage good
    exceptions?
  - Is the code easy to read and low on redundancy? Should variable names be
    improved for clarity or consistency? Should comments be added? Should
    comments be removed as unhelpful or extraneous?
  - Could the code easily be rewritten to run much more efficiently for
    relevant settings?
  - Is the code backwards compatible with previous versions? (or is a
    deprecation cycle necessary?)
  - Will the new code add any dependencies on other libraries? (this is
    unlikely to be accepted)
  - Does the documentation render properly (see the
    :ref:`contribute_documentation` section for more details), and are the
    plots instructive?

  :ref:`saved_replies` includes some frequent comments that reviewers may
  make.

.. _communication:

.. dropdown:: Communication Guidelines

  Reviewing open pull requests (PRs) helps move the project forward. It is a
  great way to get familiar with the codebase and should motivate the
  contributor to keep involved in the project. [1]_

  - Every PR, good or bad, is an act of generosity. Opening with a positive
    comment will help the author feel rewarded, and your subsequent remarks
    may be heard more clearly. You may feel good also.
  - Begin if possible with the large issues, so the author knows they've
    been understood. Resist the temptation to immediately go line by line,
    or to open with small pervasive issues.
  - Do not let perfect be the enemy of the good. If you find yourself making
    many small suggestions that don't fall into the :ref:`code_review`,
    consider the following approaches:

    - refrain from submitting these;
    - prefix them as "Nit" so that the contributor knows it's OK not to
      address;
    - follow up in a subsequent PR, out of courtesy, you may want to let the
      original contributor know.

  - Do not rush, take the time to make your comments clear and justify your
    suggestions.
  - You are the face of the project. Bad days occur to everyone, in that
    occasion you deserve a break: try to take your time and stay offline.

  .. [1] Adapted from the numpy `communication guidelines
     <https://numpy.org/devdocs/dev/reviewer_guidelines.html#communication-guidelines>`_.

Reading the existing code base
------------------------------

Reading and digesting an existing code base is always a difficult exercise
that takes time and experience to master. Even though we try to write simple
code in general, understanding the code can seem overwhelming at first,
given the sheer size of the project. Here is a list of tips that may help
make this task easier and faster (in no particular order).

- Get acquainted with the :ref:`api_overview`: understand what :term:`fit`,
  :term:`predict`, :term:`transform`, etc. are used for.
- Before diving into reading the code of a function/class, go through the
  docstrings first and try to get an idea of what each parameter/attribute
  is doing. It may also help to stop a minute and think *how would I do this
  myself if I had to?*
- The trickiest thing is often to identify which portions of the code are
  relevant, and which are not. In scikit-learn, **a lot** of input checking
  is performed, especially at the beginning of the :term:`fit` methods.
  Sometimes, only a very small portion of the code is doing the actual job.
  For example, looking at the :meth:`~linear_model.LinearRegression.fit`
  method of :class:`~linear_model.LinearRegression`, what you're looking for
  might just be the call to :func:`scipy.linalg.lstsq`, but it is buried
  into multiple lines of input checking and the handling of different kinds
  of parameters.
- Due to the use of `Inheritance
  <https://en.wikipedia.org/wiki/Inheritance_(object-oriented_programming)>`_,
  some methods may be implemented in parent classes. All estimators inherit
  at least from :class:`~base.BaseEstimator`, and from a ``Mixin`` class
  (e.g. :class:`~base.
ClassifierMixin   that enables default   behaviour depending on the nature of the estimator  classifier  regressor    transformer  etc      Sometimes  reading the tests for a given function will give you an idea of   what its intended purpose is  You can use   git grep    see below  to find   all the tests written for a function  Most tests for a specific   function class are placed under the   tests    folder of the module   You ll often see code looking like this      out   Parallel      delayed some function  param  for param in   some iterable     This runs   some function   in parallel using  Joblib    https   joblib readthedocs io        out   is then an iterable containing   the values returned by   some function   for each call    We use  Cython  https   cython org     to write fast code  Cython code is   located in    pyx   and    pxd   files  Cython code has a more C like flavor    we use pointers  perform manual memory allocation  etc  Having some minimal   experience in C   C   is pretty much mandatory here  For more information see    ref  cython     Master your tools       With such a big project  being efficient with your favorite editor or     IDE goes a long way towards digesting the code base  Being able to quickly     jump  or  peek   to a function class attribute definition helps a lot      So does being able to quickly see where a given name is used in a file       Git  https   git scm com book en    also has some built in killer     features  It is often useful to understand how a file changed over time      using e g    git blame     manual      https   git scm com docs git blame      This can also be done directly     on GitHub    git grep     examples      https   git scm com docs git grep  examples     is also extremely     useful to see every occurrence of a pattern  e g  a function call or a     variable  in the code base     Configure  git blame  to ignore the commit that migrated the code style to    black         prompt   bash        
git config blame ignoreRevsFile  git blame ignore revs    Find out more information in black s    documentation for avoiding ruining git blame  https   black readthedocs io en stable guides introducing black to your project html avoiding ruining git blame    "}
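The ``Parallel``/``delayed`` idiom mentioned in the guide above can be sketched with a toy function. This is an illustrative sketch only: ``some_function`` and its squaring body are hypothetical stand-ins for whatever per-item computation scikit-learn dispatches, and it assumes joblib is installed.

```python
from joblib import Parallel, delayed

def some_function(param):
    # Hypothetical stand-in for any per-item computation.
    return param * param

# Each delayed(...) call is dispatched by Parallel; with n_jobs=1 joblib
# falls back to sequential execution (pass n_jobs=-1 to spread the calls
# over all available CPU cores).
out = Parallel(n_jobs=1)(delayed(some_function)(param) for param in range(5))
print(out)  # [0, 1, 4, 9, 16]
```

As the guide notes, ``out`` is then an ordinary iterable (here, a list) containing the values returned by ``some_function`` for each call.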
{"questions":"scikit-learn cython Tips for developing with Cython in scikit learn Cython Best Practices Conventions and Knowledge This documents tips to develop Cython code in scikit learn","answers":".. _cython:\n\nCython Best Practices, Conventions and Knowledge\n================================================\n\nThis documents tips to develop Cython code in scikit-learn.\n\nTips for developing with Cython in scikit-learn\n-----------------------------------------------\n\nTips to ease development\n^^^^^^^^^^^^^^^^^^^^^^^^\n\n* Time spent reading `Cython's documentation <https:\/\/cython.readthedocs.io\/en\/latest\/>`_ is not time lost.\n\n* If you intend to use OpenMP: On MacOS, system's distribution of ``clang`` does not implement OpenMP.\n  You can install the ``compilers`` package available on ``conda-forge`` which comes with an implementation of OpenMP.\n\n* Activating `checks <https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/62a017efa047e9581ae7df8bbaa62cf4c0544ee4\/sklearn\/_build_utils\/__init__.py#L68-L87>`_ might help. E.g. for activating boundscheck use:\n\n  .. code-block:: bash\n\n         export SKLEARN_ENABLE_DEBUG_CYTHON_DIRECTIVES=1\n\n* `Start from scratch in a notebook <https:\/\/cython.readthedocs.io\/en\/latest\/src\/quickstart\/build.html#using-the-jupyter-notebook>`_ to understand how to use Cython and to get feedback on your work quickly.\n  If you plan to use OpenMP for your implementations in your Jupyter Notebook, do add extra compiler and linkers arguments in the Cython magic.\n\n  .. code-block:: python\n\n         # For GCC and for clang\n         %%cython --compile-args=-fopenmp --link-args=-fopenmp\n         # For Microsoft's compilers\n         %%cython --compile-args=\/openmp --link-args=\/openmp\n\n* To debug C code (e.g. a segfault), do use ``gdb`` with:\n\n  .. 
code-block:: bash\n\n         gdb --ex r --args python .\/entrypoint_to_bug_reproducer.py\n\n* To have access to some value in place to debug in ``cdef (nogil)`` context, use:\n\n  .. code-block:: cython\n\n         with gil:\n             print(state_to_print)\n\n* Note that Cython cannot parse f-strings with ``{var=}`` expressions, e.g.\n\n  .. code-block:: python\n\n         print(f\"{test_val=}\")\n\n* The scikit-learn codebase has a lot of non-unified (fused) type (re)definitions.\n  There currently is `ongoing work to simplify and unify that across the codebase\n  <https:\/\/github.com\/scikit-learn\/scikit-learn\/issues\/25572>`_.\n  For now, make sure you understand which concrete types are ultimately used.\n\n* You might find this alias to compile individual Cython extensions handy:\n\n  .. code-block:: bash\n\n      # You might want to add this alias to your shell script config.\n      alias cythonX=\"cython -X language_level=3 -X boundscheck=False -X wraparound=False -X initializedcheck=False -X nonecheck=False -X cdivision=True\"\n\n      # This generates `source.c` as if you had recompiled scikit-learn entirely.\n      cythonX source.pyx\n\n* Using the ``--annotate`` option with this alias generates an HTML report of code annotation.\n  This report indicates interactions with the CPython interpreter on a line-by-line basis.\n  Interactions with the CPython interpreter must be avoided as much as possible in\n  the computationally intensive sections of the algorithms.\n  For more information, please refer to `this section of Cython's tutorial <https:\/\/cython.readthedocs.io\/en\/latest\/src\/tutorial\/cython_tutorial.html#primes>`_.\n\n  ..
code-block::\n\n      # This generates an HTML report (`source.html`) for `source.c`.\n      cythonX --annotate source.pyx\n\nTips for performance\n^^^^^^^^^^^^^^^^^^^^\n\n* Understand the GIL in context for CPython (which problems it solves, what are its limitations)\n  and get a good understanding of when Cython will be mapped to C code free of interactions with\n  CPython, when it will not, and when it cannot (e.g. presence of interactions with Python\n  objects, which include functions). In this regard, `PEP 703 <https:\/\/peps.python.org\/pep-0703\/>`_\n  provides a good overview of the context and of pathways for removal.\n\n* Make sure you have deactivated `checks <https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/62a017efa047e9581ae7df8bbaa62cf4c0544ee4\/sklearn\/_build_utils\/__init__.py#L68-L87>`_.\n\n* Always prefer memoryviews over ``cnp.ndarray`` when possible: memoryviews are lightweight.\n\n* Avoid memoryview slicing: memoryview slicing might be costly or misleading in some cases, and\n  it is better to avoid it, even if handling fewer dimensions in some contexts would be preferable.\n\n* Decorate final classes or methods with ``@final`` (this allows removing virtual tables when needed).\n\n* Inline methods and functions when it makes sense.\n\n* When in doubt, read the generated C or C++ code if you can: \"The fewer C instructions and indirections\n  for a line of Cython code, the better\" is a good rule of thumb.\n\n* ``nogil`` declarations are just hints: when declaring ``cdef`` functions\n  as nogil, it means that they can be called without holding the GIL, but it does not release\n  the GIL when entering them. You have to do that yourself, either by passing ``nogil=True`` to\n  ``cython.parallel.prange`` explicitly, or by using an explicit context manager:\n\n  .. code-block:: cython\n\n      cdef inline void my_func(self) nogil:\n\n          # Some logic interacting with CPython, e.g. 
allocating arrays via NumPy.\n\n          with nogil:\n              # The code here is run as if it were written in C.\n              pass\n\n  This item is based on `this comment from Stefan Behnel <https:\/\/github.com\/cython\/cython\/issues\/2798#issuecomment-459971828>`_.\n\n* Direct calls to BLAS routines are possible via interfaces defined in ``sklearn.utils._cython_blas``.\n\nUsing OpenMP\n^^^^^^^^^^^^\n\nSince scikit-learn can be built without OpenMP, it's necessary to protect each\ndirect call to OpenMP.\n\nThe `_openmp_helpers` module, available in\n`sklearn\/utils\/_openmp_helpers.pyx <https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/main\/sklearn\/utils\/_openmp_helpers.pyx>`_,\nprovides protected versions of the OpenMP routines. To use OpenMP routines, they\nmust be ``cimported`` from this module and not from the OpenMP library directly:\n\n.. code-block:: cython\n\n   from sklearn.utils._openmp_helpers cimport omp_get_max_threads\n   max_threads = omp_get_max_threads()\n\n\nThe parallel loop, `prange`, is already protected by Cython and can be used directly\nfrom `cython.parallel`.\n\nTypes\n~~~~~\n\nCython code requires the use of explicit types. This is one of the reasons you get a\nperformance boost. In order to avoid code duplication, we have a central place\nfor the most used types in\n`sklearn\/utils\/_typedefs.pxd <https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/main\/sklearn\/utils\/_typedefs.pxd>`_.\nIdeally you start by having a look there and `cimport` the types you need, for example:\n\n..
code-block:: cython\n\n    from sklearn.utils._typedefs cimport float32, float64","site":"scikit-learn","answers_cleaned":"    cython   Cython Best Practices  Conventions and Knowledge                                                   This documents tips to develop Cython code in scikit learn   Tips for developing with Cython in scikit learn                                                  Tips to ease development                             Time spent reading  Cython s documentation  https   cython readthedocs io en latest     is not time lost     If you intend to use OpenMP  On MacOS  system s distribution of   clang   does not implement OpenMP    You can install the   compilers   package available on   conda forge   which comes with an implementation of OpenMP     Activating  checks  https   github com scikit learn scikit learn blob 62a017efa047e9581ae7df8bbaa62cf4c0544ee4 sklearn  build utils   init   py L68 L87    might help  E g  for activating boundscheck use        code block   bash           export SKLEARN ENABLE DEBUG CYTHON DIRECTIVES 1     Start from scratch in a notebook  https   cython readthedocs io en latest src quickstart build html using the jupyter notebook    to understand how to use Cython and to get feedback on your work quickly    If you plan to use OpenMP for your implementations in your Jupyter Notebook  do add extra compiler and linkers arguments in the Cython magic        code block   python             For GCC and for clang            cython   compile args  fopenmp   link args  fopenmp            For Microsoft s compilers            cython   compile args  openmp   link args  openmp    To debug C code  e g  a segfault   do use   gdb   with        code block   bash           gdb   ex r   args python   entrypoint to bug reproducer py    To have access to some value in place to debug in   cdef  nogil    context  use        code block   cython           with gil               print state to print     Note that Cython cannot parse f strings 
with    var     expressions  e g        code block   bash           print f  test val        scikit learn codebase has a lot of non unified  fused  types  re definitions    There currently is  ongoing work to simplify and unify that across the codebase    https   github com scikit learn scikit learn issues 25572       For now  make sure you understand which concrete types are used ultimately     You might find this alias to compile individual Cython extension handy        code block            You might want to add this alias to your shell script config        alias cythonX  cython  X language level 3  X boundscheck False  X wraparound False  X initializedcheck False  X nonecheck False  X cdivision True           This generates  source c  as if you had recompiled scikit learn entirely        cythonX   annotate source pyx    Using the     annotate   option with this flag allows generating a HTML report of code annotation    This report indicates interactions with the CPython interpreter on a line by line basis    Interactions with the CPython interpreter must be avoided as much as possible in   the computationally intensive sections of the algorithms    For more information  please refer to  this section of Cython s tutorial  https   cython readthedocs io en latest src tutorial cython tutorial html primes          code block            This generates a HTML report   source html   for  source c         cythonX   annotate source pyx  Tips for performance                         Understand the GIL in context for CPython  which problems it solves  what are its limitations    and get a good understanding of when Cython will be mapped to C code free of interactions with   CPython  when it will not  and when it cannot  e g  presence of interactions with Python   objects  which include functions   In this regard   PEP073  https   peps python org pep 0703       provides a good overview and context and pathways for removal     Make sure you have deactivated  checks  https   
github com scikit learn scikit learn blob 62a017efa047e9581ae7df8bbaa62cf4c0544ee4 sklearn  build utils   init   py L68 L87        Always prefer memoryviews instead over   cnp ndarray   when possible  memoryviews are lightweight     Avoid memoryview slicing  memoryview slicing might be costly or misleading in some cases and   we better not use it  even if handling fewer dimensions in some context would be preferable     Decorate final classes or methods with    final    this allows removing virtual tables when needed     Inline methods and function when it makes sense    In doubt  read the generated C or C   code if you can   The fewer C instructions and indirections   for a line of Cython code  the better  is a good rule of thumb       nogil   declarations are just hints  when declaring the   cdef   functions   as nogil  it means that they can be called without holding the GIL  but it does not release   the GIL when entering them  You have to do that yourself either by passing   nogil True   to     cython parallel prange   explicitly  or by using an explicit context manager        code block   cython        cdef inline void my func self  nogil               Some logic interacting with CPython  e g  allocating arrays via NumPy             with nogil                  The code here is run as is it were written in C             return 0    This item is based on  this comment from St fan s Benhel  https   github com cython cython issues 2798 issuecomment 459971828       Direct calls to BLAS routines are possible via interfaces defined in   sklearn utils  cython blas     Using OpenMP               Since scikit learn can be built without OpenMP  it s necessary to protect each direct call to OpenMP   The   openmp helpers  module  available in  sklearn utils  openmp helpers pyx  https   github com scikit learn scikit learn blob main sklearn utils  openmp helpers pyx    provides protected versions of the OpenMP routines  To use OpenMP routines  they must be   cimported   
from this module and not from the OpenMP library directly      code block   cython     from sklearn utils  openmp helpers cimport omp get max threads    max threads   omp get max threads     The parallel loop   prange   is already protected by cython and can be used directly from  cython parallel    Types        Cython code requires to use explicit types  This is one of the reasons you get a performance boost  In order to avoid code duplication  we have a central place for the most used types in  sklearn utils  typedefs pyd  https   github com scikit learn scikit learn blob main sklearn utils  typedefs pyd     Ideally you start by having a look there and  cimport  types you need  for example     code block   cython      from sklear utils  typedefs cimport float32  float64"}
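Regarding the f-string limitation noted above: the self-documenting ``{var=}`` specifier can be expanded by hand into a spelling that Cython's parser accepts. The snippet below is plain Python (``test_val`` is a made-up variable for illustration); both forms produce the same string, but only the second is usable in a ``.pyx`` file.

```python
test_val = 42

# Python 3.8+ shorthand -- Cython's parser rejects this form in .pyx files:
shorthand = f"{test_val=}"

# Equivalent manual expansion that Cython accepts:
portable = f"test_val={test_val}"

print(shorthand)  # test_val=42
print(portable)   # test_val=42
```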
{"questions":"scikit-learn The is important to the communication in the project it helps priorities For this reason it is important to curate it adding labels bugtriaging to issues and closing issues that are not necessary developers identify major projects to work on as well as to discuss Bug triaging and issue curation","answers":".. _bug_triaging:\n\nBug triaging and issue curation\n===============================\n\nThe `issue tracker <https:\/\/github.com\/scikit-learn\/scikit-learn\/issues>`_\nis important to the communication in the project: it helps\ndevelopers identify major projects to work on, as well as to discuss\npriorities. For this reason, it is important to curate it, adding labels\nto issues and closing issues that are not necessary.\n\nWorking on issues to improve them\n---------------------------------\n\nImproving issues increases their chances of being successfully resolved.\nGuidelines on submitting good issues can be found :ref:`here\n<filing_bugs>`.\nA third party can give useful feedback or even add\ncomments on the issue.\nThe following actions are typically useful:\n\n- documenting issues that are missing elements to reproduce the problem\n  such as code samples\n\n- suggesting better use of code formatting\n\n- suggesting to reformulate the title and description to make them more\n  explicit about the problem to be solved\n\n- linking to related issues or discussions while briefly describing how\n  they are related, for instance \"See also #xyz for a similar attempt\n  at this\" or \"See also #xyz where the same thing happened in\n  SomeEstimator\" provides context and helps the discussion.\n\n.. topic:: Fruitful discussions\n\n   Online discussions may be harder than it seems at first glance, in\n   particular given that a person new to open-source may have a very\n   different understanding of the process than a seasoned maintainer.\n\n   Overall, it is useful to stay positive and assume good will. 
`The\n   following article\n   <https:\/\/gael-varoquaux.info\/programming\/technical-discussions-are-hard-a-few-tips.html>`_\n   explores how to lead online discussions in the context of open source.\n\nWorking on PRs to help review\n-----------------------------\n\nReviewing code is also encouraged. Contributors and users are welcome to\nparticipate in the review process following our :ref:`review guidelines\n<code_review>`.\n\nTriaging operations for members of the core and contributor experience teams\n----------------------------------------------------------------------------\n\nIn addition to the above, members of the core team and the contributor experience team\ncan do the following important tasks:\n\n- Update :ref:`labels for issues and PRs <issue_tracker_tags>`: see the list of\n  the `available github labels\n  <https:\/\/github.com\/scikit-learn\/scikit-learn\/labels>`_.\n\n- :ref:`Determine if a PR must be relabeled as stalled <stalled_pull_request>`\n  or needs help (this is typically very important in the context\n  of sprints, where the risk is to create many unfinished PRs).\n\n- If a stalled PR is taken over by a newer PR, then label the stalled PR as\n  \"Superseded\", leave a comment on the stalled PR linking to the new PR, and\n  likely close the stalled PR.\n\n- Triage issues:\n\n  - **close usage questions** and politely point the reporter to use\n    Stack Overflow instead.\n\n  - **close duplicate issues**, after checking that they are\n    indeed duplicates. 
Ideally, the original submitter moves the\n    discussion to the older, duplicate issue\n\n  - **close issues that cannot be replicated**, after leaving time (at\n    least a week) to add extra information\n\n:ref:`Saved replies <saved_replies>` are useful to gain time and yet be\nwelcoming and polite when triaging.\n\nSee the github description for `roles in the organization\n<https:\/\/docs.github.com\/en\/github\/setting-up-and-managing-organizations-and-teams\/repository-permission-levels-for-an-organization>`_.\n\n.. topic:: Closing issues: a tough call\n\n    When uncertain about whether an issue should be closed or not, it is\n    best to strive for consensus with the original poster, and possibly\n    to seek relevant expertise. However, when the issue is a usage\n    question, or when it has been considered unclear for many years, it\n    should be closed.\n\nA typical workflow for triaging issues\n--------------------------------------\n\nThe following workflow [1]_ is a good way to approach issue triaging:\n\n#. Thank the reporter for opening an issue\n\n   The issue tracker is many people's first interaction with the\n   scikit-learn project itself, beyond just using the library. As such,\n   we want it to be a welcoming, pleasant experience.\n\n#. Is this a usage question? If so, close it with a polite message\n   (:ref:`here is an example <saved_replies>`).\n\n#. Is the necessary information provided?\n\n   If crucial information (like the version of scikit-learn used) is\n   missing, feel free to ask for that and label the issue with \"Needs\n   info\".\n\n#. Is this a duplicate issue?\n\n   We have many open issues. If a new issue seems to be a duplicate,\n   point to the original issue. If it is a clear duplicate, or consensus\n   is that it is redundant, close it. 
Make sure to still thank the\n   reporter, and encourage them to chime in on the original issue, and\n   perhaps try to fix it.\n\n   If the new issue provides relevant information, such as a better or\n   slightly different example, add it to the original issue as a comment\n   or an edit to the original post.\n\n#. Make sure that the title accurately reflects the issue. If you have the\n   necessary permissions, edit it yourself if it's not clear.\n\n#. Is the issue minimal and reproducible?\n\n   For bug reports, we ask that the reporter provide a minimal\n   reproducible example. See `this useful post\n   <https:\/\/matthewrocklin.com\/blog\/work\/2018\/02\/28\/minimal-bug-reports>`_\n   by Matthew Rocklin for a good explanation. If the example is not\n   reproducible, or if it's clearly not minimal, feel free to ask the reporter\n   if they can provide an example or simplify the provided one.\n   Do acknowledge that writing minimal reproducible examples is hard work.\n   If the reporter is struggling, you can try to write one yourself.\n\n   If a reproducible example is provided, but you see a way to simplify it,\n   add your simpler reproducible example.\n\n#. Add the relevant labels, such as \"Documentation\" when the issue is\n   about documentation, \"Bug\" if it is clearly a bug, \"Enhancement\" if it\n   is an enhancement request, ...\n\n   If the issue is clearly defined and the fix seems relatively\n   straightforward, label the issue as \u201cGood first issue\u201d.\n\n   An additional useful step can be to tag the corresponding module e.g.\n   `sklearn.linear_model` when relevant.\n\n#. Remove the \"Needs Triage\" label from the issue if the label exists.\n\n..
[1] Adapted from the pandas project `maintainers guide\n       <https:\/\/pandas.pydata.org\/docs\/development\/maintaining.html>`_","site":"scikit-learn","answers_cleaned":"    bug triaging   Bug triaging and issue curation                                  The  issue tracker  https   github com scikit learn scikit learn issues    is important to the communication in the project  it helps developers identify major projects to work on  as well as to discuss priorities  For this reason  it is important to curate it  adding labels to issues and closing issues that are not necessary   Working on issues to improve them                                    Improving issues increases their chances of being successfully resolved  Guidelines on submitting good issues can be found  ref  here  filing bugs    A third party can give useful feedback or even add comments on the issue  The following actions are typically useful     documenting issues that are missing elements to reproduce the problem   such as code samples    suggesting better use of code formatting    suggesting to reformulate the title and description to make them more   explicit about the problem to be solved    linking to related issues or discussions while briefly describing how   they are related  for instance  See also  xyz for a similar attempt   at this  or  See also  xyz where the same thing happened in   SomeEstimator  provides context and helps the discussion      topic   Fruitful discussions     Online discussions may be harder than it seems at first glance  in    particular given that a person new to open source may have a very    different understanding of the process than a seasoned maintainer      Overall  it is useful to stay positive and assume good will   The    following article     https   gael varoquaux info programming technical discussions are hard a few tips html       explores how to lead online discussions in the context of open source   Working on PRs to help review                       
         Reviewing code is also encouraged  Contributors and users are welcome to participate to the review process following our  ref  review guidelines  code review     Triaging operations for members of the core and contributor experience teams                                                                               In addition to the above  members of the core team and the contributor experience team can do the following important tasks     Update  ref  labels for issues and PRs  issue tracker tags    see the list of   the  available github labels    https   github com scikit learn scikit learn labels         ref  Determine if a PR must be relabeled as stalled  stalled pull request     or needs help  this is typically very important in the context   of sprints  where the risk is to create many unfinished PRs     If a stalled PR is taken over by a newer PR  then label the stalled PR as    Superseded   leave a comment on the stalled PR linking to the new PR  and   likely close the stalled PR     Triage issues         close usage questions   and politely point the reporter to use     Stack Overflow instead         close duplicate issues    after checking that they are     indeed duplicate  Ideally  the original submitter moves the     discussion to the older  duplicate issue        close issues that cannot be replicated    after leaving time  at     least a week  to add extra information   ref  Saved replies  saved replies   are useful to gain time and yet be welcoming and polite when triaging   See the github description for  roles in the organization  https   docs github com en github setting up and managing organizations and teams repository permission levels for an organization         topic   Closing issues  a tough call      When uncertain on whether an issue should be closed or not  it is     best to strive for consensus with the original poster  and possibly     to seek relevant expertise  However  when the issue is a usage     question  or when it 
has been considered as unclear for many years it     should be closed   A typical workflow for triaging issues                                         The following workflow  1   is a good way to approach issue triaging      Thank the reporter for opening an issue     The issue tracker is many people s first interaction with the    scikit learn project itself  beyond just using the library  As such     we want it to be a welcoming  pleasant experience      Is this a usage question  If so close it with a polite message      ref  here is an example  saved replies         Is the necessary information provided      If crucial information  like the version of scikit learn used   is    missing feel free to ask for that and label the issue with  Needs    info       Is this a duplicate issue      We have many open issues  If a new issue seems to be a duplicate     point to the original issue  If it is a clear duplicate  or consensus    is that it is redundant  close it  Make sure to still thank the    reporter  and encourage them to chime in on the original issue  and    perhaps try to fix it      If the new issue provides relevant information  such as a better or    slightly different example  add it to the original issue as a comment    or an edit to the original post      Make sure that the title accurately reflects the issue  If you have the    necessary permissions edit it yourself if it s not clear      Is the issue minimal and reproducible      For bug reports  we ask that the reporter provide a minimal    reproducible example  See  this useful post     https   matthewrocklin com blog work 2018 02 28 minimal bug reports       by Matthew Rocklin for a good explanation  If the example is not    reproducible  or if it s clearly not minimal  feel free to ask the reporter    if they can provide and example or simplify the provided one     Do acknowledge that writing minimal reproducible examples is hard work     If the reporter is struggling  you can try to write one 
yourself      If a reproducible example is provided  but you see a simplification     add your simpler reproducible example      Add the relevant labels  such as  Documentation  when the issue is    about documentation   Bug  if it is clearly a bug   Enhancement  if it    is an enhancement request          If the issue is clearly defined and the fix seems relatively    straightforward  label the issue as  Good first issue       An additional useful step can be to tag the corresponding module e g      sklearn linear models  when relevant      Remove the  Needs Triage  label from the issue if the label exists       1  Adapted from the pandas project  maintainers guide         https   pandas pydata org docs development maintaining html   "}
{"questions":"scikit-learn minimalreproducer Crafting a minimal reproducer for scikit learn question in the discussions being able to craft minimal reproducible examples Whether submitting a bug report designing a suite of tests or simply posting a or minimal workable examples is the key to communicating effectively and","answers":".. _minimal_reproducer:\n\n==============================================\nCrafting a minimal reproducer for scikit-learn\n==============================================\n\n\nWhether submitting a bug report, designing a suite of tests, or simply posting a\nquestion in the discussions, being able to craft minimal, reproducible examples\n(or minimal, workable examples) is the key to communicating effectively and\nefficiently with the community.\n\nThere are very good guidelines on the internet such as `this StackOverflow\ndocument <https:\/\/stackoverflow.com\/help\/mcve>`_ or `this blogpost by Matthew\nRocklin <https:\/\/matthewrocklin.com\/blog\/work\/2018\/02\/28\/minimal-bug-reports>`_\non crafting Minimal Complete Verifiable Examples (referred below as MCVE).\nOur goal is not to be repetitive with those references but rather to provide a\nstep-by-step guide on how to narrow down a bug until you have reached the\nshortest possible code to reproduce it.\n\nThe first step before submitting a bug report to scikit-learn is to read the\n`Issue template\n<https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/main\/.github\/ISSUE_TEMPLATE\/bug_report.yml>`_.\nIt is already quite informative about the information you will be asked to\nprovide.\n\n\n.. _good_practices:\n\nGood practices\n==============\n\nIn this section we will focus on the **Steps\/Code to Reproduce** section of the\n`Issue template\n<https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/main\/.github\/ISSUE_TEMPLATE\/bug_report.yml>`_.\nWe will start with a snippet of code that already provides a failing example but\nthat has room for readability improvement. 
We then craft a MCVE from it.\n\n**Example**\n\n.. code-block:: python\n\n    # I am currently working in a ML project and when I tried to fit a\n    # GradientBoostingRegressor instance to my_data.csv I get a UserWarning:\n    # \"X has feature names, but DecisionTreeRegressor was fitted without\n    # feature names\". You can get a copy of my dataset from\n    # https:\/\/example.com\/my_data.csv and verify my features do have\n    # names. The problem seems to arise during fit when I pass an integer\n    # to the n_iter_no_change parameter.\n\n    df = pd.read_csv('my_data.csv')\n    X = df[[\"feature_name\"]] # my features do have names\n    y = df[\"target\"]\n\n    # We set random_state=42 for the train_test_split\n    X_train, X_test, y_train, y_test = train_test_split(\n        X, y, test_size=0.33, random_state=42\n    )\n\n    scaler = StandardScaler(with_mean=False)\n    X_train = scaler.fit_transform(X_train)\n    X_test = scaler.transform(X_test)\n\n    # An instance with default n_iter_no_change raises no error nor warnings\n    gbdt = GradientBoostingRegressor(random_state=0)\n    gbdt.fit(X_train, y_train)\n    default_score = gbdt.score(X_test, y_test)\n\n    # the bug appears when I change the value for n_iter_no_change\n    gbdt = GradientBoostingRegressor(random_state=0, n_iter_no_change=5)\n    gbdt.fit(X_train, y_train)\n    other_score = gbdt.score(X_test, y_test)\n\n    other_score = gbdt.score(X_test, y_test)\n\n\nProvide a failing code example with minimal comments\n----------------------------------------------------\n\nWriting instructions to reproduce the problem in English is often ambiguous.\nBetter make sure that all the necessary details to reproduce the problem are\nillustrated in the Python code snippet to avoid any ambiguity. 
Besides, by this\npoint you already provided a concise description in the **Describe the bug**\nsection of the `Issue template\n<https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/main\/.github\/ISSUE_TEMPLATE\/bug_report.yml>`_.\n\nThe following code, while **still not minimal**, is already **much better**\nbecause it can be copy-pasted in a Python terminal to reproduce the problem in\none step. In particular:\n\n- it contains **all necessary import statements**;\n- it can fetch the public dataset without having to manually download a\n  file and put it in the expected location on the disk.\n\n**Improved example**\n\n.. code-block:: python\n\n    import pandas as pd\n\n    df = pd.read_csv(\"https:\/\/example.com\/my_data.csv\")\n    X = df[[\"feature_name\"]]\n    y = df[\"target\"]\n\n    from sklearn.model_selection import train_test_split\n\n    X_train, X_test, y_train, y_test = train_test_split(\n        X, y, test_size=0.33, random_state=42\n    )\n\n    from sklearn.preprocessing import StandardScaler\n\n    scaler = StandardScaler(with_mean=False)\n    X_train = scaler.fit_transform(X_train)\n    X_test = scaler.transform(X_test)\n\n    from sklearn.ensemble import GradientBoostingRegressor\n\n    gbdt = GradientBoostingRegressor(random_state=0)\n    gbdt.fit(X_train, y_train)  # no warning\n    default_score = gbdt.score(X_test, y_test)\n\n    gbdt = GradientBoostingRegressor(random_state=0, n_iter_no_change=5)\n    gbdt.fit(X_train, y_train)  # raises warning\n    other_score = gbdt.score(X_test, y_test)\n\n\nBoil down your script to something as small as possible\n-------------------------------------------------------\n\nYou have to ask yourself which lines of code are relevant and which are not for\nreproducing the bug. 
Deleting unnecessary lines of code or simplifying the\nfunction calls by omitting unrelated non-default options will help you and other\ncontributors narrow down the cause of the bug.\n\nIn particular, for this specific example:\n\n- the warning has nothing to do with the `train_test_split` since it already\n  appears in the training step, before we use the test set;\n- similarly, the lines that compute the scores on the test set are not\n  necessary;\n- the bug can be reproduced for any value of `random_state` so leave it to its\n  default;\n- the bug can be reproduced without preprocessing the data with the\n  `StandardScaler`.\n\n**Improved example**\n\n.. code-block:: python\n\n    import pandas as pd\n    df = pd.read_csv(\"https:\/\/example.com\/my_data.csv\")\n    X = df[[\"feature_name\"]]\n    y = df[\"target\"]\n\n    from sklearn.ensemble import GradientBoostingRegressor\n\n    gbdt = GradientBoostingRegressor()\n    gbdt.fit(X, y)  # no warning\n\n    gbdt = GradientBoostingRegressor(n_iter_no_change=5)\n    gbdt.fit(X, y)  # raises warning\n\n\n**DO NOT** report your data unless it is extremely necessary\n------------------------------------------------------------\n\nThe idea is to make the code as self-contained as possible. To do so, you\ncan use a :ref:`synth_data`. It can be generated using numpy, pandas or the\n:mod:`sklearn.datasets` module. Most of the time the bug is not related to a\nparticular structure of your data. Even if it is, try to find an available\ndataset that has similar characteristics to yours and that reproduces the\nproblem. In this particular case, we are interested in data that has labeled\nfeature names.\n\n**Improved example**\n\n.. 
code-block:: python\n\n    import pandas as pd\n    from sklearn.ensemble import GradientBoostingRegressor\n\n    df = pd.DataFrame(\n        {\n            \"feature_name\": [-12.32, 1.43, 30.01, 22.17],\n            \"target\": [72, 55, 32, 43],\n        }\n    )\n    X = df[[\"feature_name\"]]\n    y = df[\"target\"]\n\n    gbdt = GradientBoostingRegressor()\n    gbdt.fit(X, y)  # no warning\n    gbdt = GradientBoostingRegressor(n_iter_no_change=5)\n    gbdt.fit(X, y)  # raises warning\n\nAs already mentioned, the key to communication is the readability of the code,\nand good formatting can really be a plus. Notice that in the previous snippet\nwe:\n\n- try to limit all lines to a maximum of 79 characters to avoid horizontal\n  scrollbars in the code snippet blocks rendered on the GitHub issue;\n- use blank lines to separate groups of related functions;\n- place all the imports in their own group at the beginning.\n\nThe simplification steps presented in this guide can be implemented in a\ndifferent order than the progression we have shown here. The important points\nare:\n\n- a minimal reproducer should be runnable by a simple copy-and-paste in a\n  Python terminal;\n- it should be simplified as much as possible by removing any code steps\n  that are not strictly needed to reproduce the original problem;\n- it should ideally only rely on a minimal dataset generated on-the-fly by\n  running the code instead of relying on external data, if possible.\n\n\nUse markdown formatting\n-----------------------\n\nTo format code or text into its own distinct block, use triple backticks.\n`Markdown\n<https:\/\/docs.github.com\/en\/get-started\/writing-on-github\/getting-started-with-writing-and-formatting-on-github\/basic-writing-and-formatting-syntax>`_\nsupports an optional language identifier to enable syntax highlighting in your\nfenced code block. 
For example::\n\n    ```python\n    from sklearn.datasets import make_blobs\n\n    n_samples = 100\n    n_components = 3\n    X, y = make_blobs(n_samples=n_samples, centers=n_components)\n    ```\n\nwill render a Python-formatted snippet as follows:\n\n.. code-block:: python\n\n    from sklearn.datasets import make_blobs\n\n    n_samples = 100\n    n_components = 3\n    X, y = make_blobs(n_samples=n_samples, centers=n_components)\n\nIt is not necessary to create several blocks of code when submitting a bug\nreport. Remember that other reviewers are going to copy-paste your code, and\nhaving a single cell will make their task easier.\n\nIn the section named **Actual results** of the `Issue template\n<https:\/\/github.com\/scikit-learn\/scikit-learn\/blob\/main\/.github\/ISSUE_TEMPLATE\/bug_report.yml>`_\nyou are asked to provide the error message including the full traceback of the\nexception. In this case, use the `python-traceback` qualifier. For example::\n\n    ```python-traceback\n    ---------------------------------------------------------------------------\n    TypeError                                 Traceback (most recent call last)\n    <ipython-input-1-a674e682c281> in <module>\n        4 vectorizer = CountVectorizer(input=docs, analyzer='word')\n        5 lda_features = vectorizer.fit_transform(docs)\n    ----> 6 lda_model = LatentDirichletAllocation(\n        7     n_topics=10,\n        8     learning_method='online',\n\n    TypeError: __init__() got an unexpected keyword argument 'n_topics'\n    ```\n\nyields the following when rendered:\n\n.. 
code-block:: python\n\n    ---------------------------------------------------------------------------\n    TypeError                                 Traceback (most recent call last)\n    <ipython-input-1-a674e682c281> in <module>\n        4 vectorizer = CountVectorizer(input=docs, analyzer='word')\n        5 lda_features = vectorizer.fit_transform(docs)\n    ----> 6 lda_model = LatentDirichletAllocation(\n        7     n_topics=10,\n        8     learning_method='online',\n\n    TypeError: __init__() got an unexpected keyword argument 'n_topics'\n\n\n.. _synth_data:\n\nSynthetic dataset\n=================\n\nBefore choosing a particular synthetic dataset, you first have to identify the\ntype of problem you are solving: is it classification, regression,\nclustering, etc.?\n\nOnce you have narrowed down the type of problem, you need to provide a synthetic\ndataset accordingly. Most of the time you only need a minimalistic dataset.\nHere is a non-exhaustive list of tools that may help you.\n\nNumPy\n-----\n\nNumPy tools such as `numpy.random.randn\n<https:\/\/numpy.org\/doc\/stable\/reference\/random\/generated\/numpy.random.randn.html>`_\nand `numpy.random.randint\n<https:\/\/numpy.org\/doc\/stable\/reference\/random\/generated\/numpy.random.randint.html>`_\ncan be used to create dummy numeric data.\n\n- regression\n\n  Regressions take continuous numeric data as features and target.\n\n  .. code-block:: python\n\n      import numpy as np\n\n      rng = np.random.RandomState(0)\n      n_samples, n_features = 5, 5\n      X = rng.randn(n_samples, n_features)\n      y = rng.randn(n_samples)\n\nA similar snippet can be used as synthetic data when testing scaling tools such\nas :class:`sklearn.preprocessing.StandardScaler`.\n\n- classification\n\n  If the bug is not raised when encoding a categorical variable, you can\n  feed numeric data to a classifier. Just remember to ensure that the target\n  is indeed an integer.\n\n  .. 
code-block:: python\n\n      import numpy as np\n\n      rng = np.random.RandomState(0)\n      n_samples, n_features = 5, 5\n      X = rng.randn(n_samples, n_features)\n      y = rng.randint(0, 2, n_samples)  # binary target with values in {0, 1}\n\n\n  If the bug only happens with non-numeric class labels, you might want to\n  generate a random target with `numpy.random.choice\n  <https:\/\/numpy.org\/doc\/stable\/reference\/random\/generated\/numpy.random.choice.html>`_.\n\n  .. code-block:: python\n\n      import numpy as np\n\n      rng = np.random.RandomState(0)\n      n_samples, n_features = 50, 5\n      X = rng.randn(n_samples, n_features)\n      y = rng.choice(  # use the seeded RandomState for reproducibility\n          [\"male\", \"female\", \"other\"], size=n_samples, p=[0.49, 0.49, 0.02]\n      )\n\nPandas\n------\n\nSome scikit-learn objects expect pandas dataframes as input. In this case you can\ntransform numpy arrays into pandas objects using `pandas.DataFrame\n<https:\/\/pandas.pydata.org\/docs\/reference\/api\/pandas.DataFrame.html>`_ or\n`pandas.Series\n<https:\/\/pandas.pydata.org\/docs\/reference\/api\/pandas.Series.html>`_.\n\n.. code-block:: python\n\n    import numpy as np\n    import pandas as pd\n\n    rng = np.random.RandomState(0)\n    n_samples, n_features = 5, 5\n    X = pd.DataFrame(\n        {\n            \"continuous_feature\": rng.randn(n_samples),\n            \"positive_feature\": rng.uniform(low=0.0, high=100.0, size=n_samples),\n            \"categorical_feature\": rng.choice([\"a\", \"b\", \"c\"], size=n_samples),\n        }\n    )\n    y = pd.Series(rng.randn(n_samples))\n\nIn addition, scikit-learn includes various :ref:`sample_generators` that can be\nused to build artificial datasets of controlled size and complexity.\n\n`make_regression`\n-----------------\n\nAs hinted by the name, :func:`sklearn.datasets.make_regression` produces\nregression targets with noise as an optionally-sparse random linear combination\nof random features.\n\n.. 
code-block:: python\n\n    from sklearn.datasets import make_regression\n\n    X, y = make_regression(n_samples=1000, n_features=20)\n\n`make_classification`\n---------------------\n\n:func:`sklearn.datasets.make_classification` creates multiclass datasets with multiple Gaussian\nclusters per class. Noise can be introduced by means of correlated, redundant or\nuninformative features.\n\n.. code-block:: python\n\n    from sklearn.datasets import make_classification\n\n    X, y = make_classification(\n        n_features=2, n_redundant=0, n_informative=2, n_clusters_per_class=1\n    )\n\n`make_blobs`\n------------\n\nSimilarly to `make_classification`, :func:`sklearn.datasets.make_blobs` creates\nmulticlass datasets using normally-distributed clusters of points. It provides\ngreater control regarding the centers and standard deviations of each cluster,\nand therefore it is useful to demonstrate clustering.\n\n.. code-block:: python\n\n    from sklearn.datasets import make_blobs\n\n    X, y = make_blobs(n_samples=10, centers=3, n_features=2)\n\nDataset loading utilities\n-------------------------\n\nYou can use the :ref:`datasets` to load and fetch several popular reference\ndatasets. This option is useful when the bug relates to the particular structure\nof the data, e.g. dealing with missing values or image recognition.\n\n.. 
code-block:: python\n\n    from sklearn.datasets import load_breast_cancer\n\n    X, y = load_breast_cancer(return_X_y=True)","site":"scikit-learn"}
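The advice in the guide above (a tiny synthetic dataset built in memory, only the suspect parameter set, a single copy-pasteable cell) can be combined into one compact reproducer. The sketch below additionally captures whatever warnings `fit` emits so they can be pasted into the **Actual results** section of a report; whether the feature-names `UserWarning` actually fires depends on your scikit-learn version, so no particular output is assumed.

```python
# A sketch of a complete minimal reproducer following the guide: minimal
# in-memory data, default parameters except the one under suspicion, and
# the warnings emitted during fit recorded for the bug report.
import warnings

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.DataFrame(
    {
        "feature_name": [-12.32, 1.43, 30.01, 22.17],
        "target": [72, 55, 32, 43],
    }
)
X = df[["feature_name"]]
y = df["target"]

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    GradientBoostingRegressor(n_iter_no_change=5).fit(X, y)

# Print anything that fired, ready to paste into the issue
for w in caught:
    print(f"{w.category.__name__}: {w.message}")
```

Recording the warnings instead of merely triggering them makes the snippet self-reporting: a reviewer running it sees immediately whether their environment reproduces the behavior.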
{"questions":"terraform page title AWS Cost Estimation HCP Terraform Supported AWS resources in HCP Terraform Cost Estimation Learn which AWS resources HCP Terraform includes in cost estimation HCP Terraform can estimate monthly costs for many AWS Terraform resources","answers":"---\npage_title: AWS - Cost Estimation - HCP Terraform\ndescription: >-\n  Learn which AWS resources HCP Terraform includes in cost estimation.\n---\n\n# Supported AWS resources in HCP Terraform Cost Estimation\n\nHCP Terraform can estimate monthly costs for many AWS Terraform resources.\n\n-> **Note:** Terraform Enterprise requires AWS credentials to support cost estimation. These credentials are configured at the instance level, not the organization level. See the [Application Administration docs](\/terraform\/enterprise\/admin\/application\/integration) for more details.\n\n## Supported Resources\n\nCost estimation supports the following resources. Not all possible values for attributes of each resource are supported, e.g. 
newer instance types or EBS volume types.\n\n| Resource | Incurs Cost |\n| ----------- | ----------- |\n| `aws_alb` | X |\n| `aws_autoscaling_group` | X |\n| `aws_cloudhsm_v2_hsm` | X |\n| `aws_cloudwatch_dashboard` | X |\n| `aws_cloudwatch_metric_alarm` | X |\n| `aws_db_instance` | X |\n| `aws_dynamodb_table` | X |\n| `aws_ebs_volume` | X |\n| `aws_elasticache_cluster` | X |\n| `aws_elasticsearch_domain` | X |\n| `aws_elb` | X |\n| `aws_instance` | X |\n| `aws_kms_key` | X |\n| `aws_lb` | X |\n| `aws_rds_cluster_instance` | X |\n| `aws_acm_certificate_validation` | |\n| `aws_alb_listener` | |\n| `aws_alb_listener_rule` | |\n| `aws_alb_target_group` | |\n| `aws_alb_target_group_attachment` | |\n| `aws_api_gateway_api_key` | |\n| `aws_api_gateway_deployment` | |\n| `aws_api_gateway_integration` | |\n| `aws_api_gateway_integration_response` | |\n| `aws_api_gateway_method` | |\n| `aws_api_gateway_method_response` | |\n| `aws_api_gateway_resource` | |\n| `aws_api_gateway_usage_plan_key` | |\n| `aws_appautoscaling_policy` | |\n| `aws_appautoscaling_target` | |\n| `aws_autoscaling_lifecycle_hook` | |\n| `aws_autoscaling_policy` | |\n| `aws_cloudformation_stack` | |\n| `aws_cloudfront_distribution` | |\n| `aws_cloudfront_origin_access_identity` | |\n| `aws_cloudwatch_event_rule` | |\n| `aws_cloudwatch_event_target` | |\n| `aws_cloudwatch_log_group` | |\n| `aws_cloudwatch_log_metric_filter` | |\n| `aws_cloudwatch_log_stream` | |\n| `aws_cloudwatch_log_subscription_filter` | |\n| `aws_codebuild_webhook` | |\n| `aws_codedeploy_deployment_group` | |\n| `aws_cognito_identity_provider` | |\n| `aws_cognito_user_pool` | |\n| `aws_cognito_user_pool_client` | |\n| `aws_cognito_user_pool_domain` | |\n| `aws_config_config_rule` | |\n| `aws_customer_gateway` | |\n| `aws_db_parameter_group` | |\n| `aws_db_subnet_group` | |\n| `aws_dynamodb_table_item` | |\n| `aws_ecr_lifecycle_policy` | |\n| `aws_ecr_repository_policy` | |\n| `aws_ecs_cluster` | |\n| `aws_ecs_task_definition` | |\n| 
`aws_efs_mount_target` | |\n| `aws_eip_association` | |\n| `aws_elastic_beanstalk_application` | |\n| `aws_elastic_beanstalk_application_version` | |\n| `aws_elastic_beanstalk_environment` | |\n| `aws_elasticache_parameter_group` | |\n| `aws_elasticache_subnet_group` | |\n| `aws_flow_log` | |\n| `aws_iam_access_key` | |\n| `aws_iam_account_alias` | |\n| `aws_iam_account_password_policy` | |\n| `aws_iam_group` | |\n| `aws_iam_group_membership` | |\n| `aws_iam_group_policy` | |\n| `aws_iam_group_policy_attachment` | |\n| `aws_iam_instance_profile` | |\n| `aws_iam_policy` | |\n| `aws_iam_policy_attachment` | |\n| `aws_iam_role` | |\n| `aws_iam_role_policy` | |\n| `aws_iam_role_policy_attachment` | |\n| `aws_iam_saml_provider` | |\n| `aws_iam_service_linked_role` | |\n| `aws_iam_user` | |\n| `aws_iam_user_group_membership` | |\n| `aws_iam_user_login_profile` | |\n| `aws_iam_user_policy` | |\n| `aws_iam_user_policy_attachment` | |\n| `aws_iam_user_ssh_key` | |\n| `aws_internet_gateway` | |\n| `aws_key_pair` | |\n| `aws_kms_alias` | |\n| `aws_lambda_alias` | |\n| `aws_lambda_event_source_mapping` | |\n| `aws_lambda_function` | |\n| `aws_lambda_layer_version` | |\n| `aws_lambda_permission` | |\n| `aws_launch_configuration` | |\n| `aws_lb_listener` | |\n| `aws_lb_listener_rule` | |\n| `aws_lb_target_group` | |\n| `aws_lb_target_group_attachment` | |\n| `aws_network_acl` | |\n| `aws_network_acl_rule` | |\n| `aws_network_interface` | |\n| `aws_placement_group` | |\n| `aws_rds_cluster_parameter_group` | |\n| `aws_route` | |\n| `aws_route53_record` | |\n| `aws_route53_zone_association` | |\n| `aws_route_table` | |\n| `aws_route_table_association` | |\n| `aws_s3_bucket` | |\n| `aws_s3_bucket_notification` | |\n| `aws_s3_bucket_object` | |\n| `aws_s3_bucket_policy` | |\n| `aws_s3_bucket_public_access_block` | |\n| `aws_security_group` | |\n| `aws_security_group_rule` | |\n| `aws_service_discovery_service` | |\n| `aws_sfn_state_machine` | |\n| `aws_sns_topic` | |\n| 
`aws_sns_topic_subscription` | |\n| `aws_sqs_queue` | |\n| `aws_sqs_queue_policy` | |\n| `aws_ssm_maintenance_window` | |\n| `aws_ssm_maintenance_window_target` | |\n| `aws_ssm_maintenance_window_task` | |\n| `aws_ssm_parameter` | |\n| `aws_subnet` | |\n| `aws_volume_attachment` | |\n| `aws_vpc` | |\n| `aws_vpc_dhcp_options` | |\n| `aws_vpc_dhcp_options_association` | |\n| `aws_vpc_endpoint` | |\n| `aws_vpc_endpoint_route_table_association` | |\n| `aws_vpc_endpoint_service` | |\n| `aws_vpc_ipv4_cidr_block_association` | |\n| `aws_vpc_peering_connection_accepter` | |\n| `aws_vpc_peering_connection_options` | |\n| `aws_vpn_connection_route` | |\n| `aws_waf_ipset` | |\n| `aws_waf_rule` | |\n| `aws_waf_web_acl` | |","site":"terraform","answers_cleaned":"    page title  AWS   Cost Estimation   HCP Terraform description       Learn which AWS resources HCP Terraform includes in cost estimation         Supported AWS resources in HCP Terraform Cost Estimation  HCP Terraform can estimate monthly costs for many AWS Terraform resources        Note    Terraform Enterprise requires AWS credentials to support cost estimation  These credentials are configured at the instance level  not the organization level  See the  Application Administration docs   terraform enterprise admin application integration  for more details      Supported Resources  Cost estimation supports the following resources  Not all possible values for attributes of each resource are supported  ex  newer instance types or EBS volume types     Resource   Incurs Cost                                    aws alb    X      aws autoscaling group    X      aws cloudhsm v2 hsm    X      aws cloudwatch dashboard    X      aws cloudwatch metric alarm    X      aws db instance    X      aws dynamodb table    X      aws ebs volume    X      aws elasticache cluster    X      aws elasticsearch domain    X      aws elb    X      aws instance    X      aws kms key    X      aws lb    X      aws rds cluster instance    X      aws 
acm certificate validation         aws alb listener         aws alb listener rule         aws alb target group         aws alb target group attachment         aws api gateway api key         aws api gateway deployment         aws api gateway integration         aws api gateway integration response         aws api gateway method         aws api gateway method response         aws api gateway resource         aws api gateway usage plan key         aws appautoscaling policy         aws appautoscaling target         aws autoscaling lifecycle hook         aws autoscaling policy         aws cloudformation stack         aws cloudfront distribution         aws cloudfront origin access identity         aws cloudwatch event rule         aws cloudwatch event target         aws cloudwatch log group         aws cloudwatch log metric filter         aws cloudwatch log stream         aws cloudwatch log subscription filter         aws codebuild webhook         aws codedeploy deployment group         aws cognito identity provider         aws cognito user pool         aws cognito user pool client         aws cognito user pool domain         aws config config rule         aws customer gateway         aws db parameter group         aws db subnet group         aws dynamodb table item         aws ecr lifecycle policy         aws ecr repository policy         aws ecs cluster         aws ecs task definition         aws efs mount target         aws eip association         aws elastic beanstalk application         aws elastic beanstalk application version         aws elastic beanstalk environment         aws elasticache parameter group         aws elasticache subnet group         aws flow log         aws iam access key         aws iam account alias         aws iam account password policy         aws iam group         aws iam group membership         aws iam group policy         aws iam group policy attachment         aws iam instance profile         aws iam policy         aws iam policy 
attachment         aws iam role         aws iam role policy         aws iam role policy attachment         aws iam saml provider         aws iam service linked role         aws iam user         aws iam user group membership         aws iam user login profile         aws iam user policy         aws iam user policy attachment         aws iam user ssh key         aws internet gateway         aws key pair         aws kms alias         aws lambda alias         aws lambda event source mapping         aws lambda function         aws lambda layer version         aws lambda permission         aws launch configuration         aws lb listener         aws lb listener rule         aws lb target group         aws lb target group attachment         aws network acl         aws network acl rule         aws network interface         aws placement group         aws rds cluster parameter group         aws route         aws route53 record         aws route53 zone association         aws route table         aws route table association         aws s3 bucket         aws s3 bucket notification         aws s3 bucket object         aws s3 bucket policy         aws s3 bucket public access block         aws security group         aws security group rule         aws service discovery service         aws sfn state machine         aws sns topic         aws sns topic subscription         aws sqs queue         aws sqs queue policy         aws ssm maintenance window         aws ssm maintenance window target         aws ssm maintenance window task         aws ssm parameter         aws subnet         aws volume attachment         aws vpc         aws vpc dhcp options         aws vpc dhcp options association         aws vpc endpoint         aws vpc endpoint route table association         aws vpc endpoint service         aws vpc ipv4 cidr block association         aws vpc peering connection accepter         aws vpc peering connection options         aws vpn connection route         aws waf ipset         
aws waf rule         aws waf web acl     "}
{"questions":"terraform page title Review deployment plans Review deployment plans Deployment plans are a combination of an individual configuration version and one of your stack s deployments As in the traditional Terraform workflow HCP Terraform creates plans every time a new configuration version introduces potential changes for a deployment Learn how to review deployment plans in HCP Terraform Stacks tfc only true","answers":"---\npage_title: Review deployment plans\ndescription: Learn how to review deployment plans in HCP Terraform Stacks.\ntfc_only: true\n---\n\n# Review deployment plans\n\nDeployment plans are a combination of an individual configuration version and one of your stack\u2019s deployments. As in the traditional Terraform workflow, HCP Terraform creates plans every time a new configuration version introduces potential changes for a deployment. \n\nThis guide explains how to review and approve deployment plans in HCP Terraform.\n\n## Requirements\n\nTo view a Stack and its plans, you must also be a member of a team in your organization with one of the following permissions:\n* [Organization-level **View all projects**](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#view-all-projects) or higher\n* [Project-level **Write**](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#write) or higher\n\n## View a deployment\n\nIf you are not already on your stack\u2019s deployment page, navigate to it:\n\n1. In the navigation menu, click **Projects** under **Manage**.  \n1. Select the project containing your Stack.  \n1. Click **Stacks** in the navigation menu.  \n1. 
Select the Stack you want to review.\n\nA stack\u2019s **Overview** page displays the following information:\n\n* The number of your stack\u2019s components and deployments  \n* Your deployment\u2019s current [health](#check-deployment-health)  \n* The latest configuration version  \n* A chart listing each deployment\u2019s recent configuration versions\n\nTo view all of the plans for a deployment, click the name of the deployment you want to review.\n\nA deployment\u2019s page lists its components and the last five deployment plans. Clicking on a component reveals all of the resources it contains. Opening the **Inspect** dropdown menu on a deployment's page reveals the following options: \n\n* **State description** downloads that deployment\u2019s current state.  \n* **Provider schemas** downloads the schemas of your deployment's providers. \n\n### View plans\n\nA Stack deployment can have multiple plans. On a deployment's page, underneath **Latest plans**, each deployment plan lists:\n\n* The plan name   \n* The trigger that caused HCP Terraform to create this plan  \n* The configuration version HCP Terraform created the plan with  \n* When HCP Terraform created the plan\n\nYou can see a plan\u2019s full details by clicking the plan\u2019s name or an abbreviated list of the plan\u2019s changes by clicking **Quick View**. You can also click **View all plans** to display a list of all the plans for this deployment. You can filter plans by [health status](#check-deployment-health) in the list of all plans. \n\nEach plan includes a timeline detailing when the plan started, when it received its configuration version, and when it was approved.  \n\n### Check deployment health\n\nClick **Overview** in the sidebar of your Stack to view the status of each of your deployments. \n\n| Status | Description |\n| :---- | :---- |\n| Healthy | HCP Terraform is not applying this deployment, no plans await approval, and no diagnostic alerts exist. 
|\n| Deploying | A plan is approved, and HCP Terraform is applying it. |\n| Plans waiting for approval | HCP Terraform created a plan successfully and is waiting for approval. |\n| Error diagnostics | The deployment has error diagnostic alerts. |\n| Warning diagnostics | The deployment has warning diagnostic alerts. |\n\n\n## Download plan data\n\nYou can download Stack plan data to debug and analyze how your Stack changes over time. \n\nSelect one of the following options in the **Inspect** drop-down to perform the associated action:\n\n* **View plan orchestration results** displays HCP Terraform's decisions while making this plan.  \n* **Download plan event stream** downloads a log file of your plan\u2019s events.   \n* **Download plan description** downloads a file with all plan information.  \n* **View apply orchestration results** displays HCP Terraform's decisions to apply this plan.  \n* **Download apply event stream** downloads a log file of your apply\u2019s events.  \n* **Download apply description** downloads a file with all the information about this applied plan.\n\n## Approve plans\n\nLike traditional Terraform plans, Stack deployment plans list the changes that will occur if you approve that plan. Each component lists its expected resource changes, and you can review those changes as you decide whether to apply a plan. \n\nWhen viewing a deployment plan, HCP Terraform notes if that plan is awaiting approval. Click **Approve plan** if you want HCP Terraform to apply a plan to this deployment or **Discard plan** to ignore it. You manage each deployment independently, so any plans you approve only affect the current deployment you are interacting with.\n\n### Convergence checks\n\nAfter applying any plan, HCP Terraform automatically triggers a plan called a convergence check. A convergence check is a re-plan to ensure components do not have any [deferred changes](#deferred-changes). 
HCP Terraform continues to trigger new plans until the convergence check returns a plan that does not contain changes. \n\nBy default, each Stack has an `auto-approve` rule named `empty_plan`, which auto-approves a plan if it does not contain changes. When a convergence check contains no changes, HCP Terraform auto-applies that plan.\n\n## Deferred changes\n\nAs with Terraform configuration files, HCP Terraform generates a dependency graph and creates resources defined in `*.tfstack.hcl` and `*.tfdeploy.hcl` files. \n\nWhen you deploy a Stack with resources that depend on resources provisioned by other components in your stack, HCP Terraform recognizes the dependency between components and automatically defers that plan until HCP Terraform can complete it successfully. Plans with deferred changes include resources that depend on resources that don't exist yet, so they require follow-up plans.\n\n-> **Hands-on**: Complete the [Manage Kubernetes workloads with stacks](\/terraform\/tutorials\/cloud\/stacks-eks-deferred) tutorial to create plans with deferred changes.\n\nHCP Terraform notifies you in the UI if a plan contains deferred changes. Approving a plan with deferred changes causes HCP Terraform to automatically create a follow-up plan to properly set up resources in the order of operations those resources require. \n\nAfter applying a plan with deferred changes, HCP Terraform notifies you of any replans it creates with a link to **View replan**. 
You can review the replan to ensure HCP Terraform created your resources as expected","site":"terraform","answers_cleaned":"    page title  Review deployment plans description  Learn how to review deployment plans in HCP Terraform Stacks  tfc only  true        Review deployment plans  Deployment plans are a combination of an individual configuration version and one of your stack s deployments  As in the traditional Terraform workflow  HCP Terraform creates plans every time a new configuration version introduces potential changes for a deployment    This guide explains how to review and approve deployment plans in HCP Terraform      Requirements  To view a Stack and its plans  you must also be a member of a team in your organization with one of the following permissions     Organization level   View all projects     terraform cloud docs users teams organizations permissions view all projects  or higher    Project level   Write     terraform cloud docs users teams organizations permissions write  or higher     View a deployment  If you are not already on your stack s deployment page  navigate to it   1  In the navigation menu  click   Projects   under   Manage      1  Select the project containing your Stack    1  Click   Stacks   in the navigation menu    1  Select the Stack you want to review   A stack s   Overview   page displays the following information     The number of your stack s components and deployments     Your deployment s current  health   check deployment health      The latest configuration version     A chart listing each deployment s recent configuration versions  To view all of the plans for a deployment  click the name of the deployment you want to review   A deployment s page lists its components and the last five deployment plans  Clicking on a component reveals all of the resources it contains  Opening the   Inspect   dropdown menu on a deployment s page reveals the option to download        State description   downloads that deployment s 
current state        Provider schemas   downloads the schema of the providers of your deployment        View plans  A Stack deployment can have multiple plans  In a deployment s page  underneath   Latest plans    each deployment plan lists     The plan name      The trigger why HCP Terraform created this plan     The configuration version HCP Terraform created the plan with     When HCP Terraform created the plan  You can see a plan s full details by clicking the plan s name or an abbreviated list of the plan s changes by clicking   Quick View    You can also click   View all plans   to display a list of all the plans for this deployment  You can filter plans by  health status   check deployment health  in the list of all plans    Each plan includes a timeline detailing when the plan started  when it received its configuration version  and when it was approved         Check deployment health  Click   Overview   in the sidebar of your Stack to view the status of each of your deployments      Status   Description                       Healthy   HCP Terraform is not applying this deployment  no plans await approval  and no diagnostic alerts exist      Deploying   A plan is approved  and HCP Terraform is applying it      Plans waiting for approval   HCP Terraform created a plan successfully and is waiting for approval      Error diagnostics   The deployment has error diagnostic alerts      Warning diagnostics   The deployment has warning diagnostic alerts         Download plan data  You can download Stack plan data to debug and analyze how your Stack changes over time    Select one of the following options in the   Inspect   drop down to perform the associated action       View plan orchestration results   displays HCP Terraform s decisions while making this plan        Download plan event stream   downloads a log file of your plan s events         Download plan description   downloads a file with all plan information        View apply orchestration results   displays 
HCP Terraform s decisions to apply this plan        Download apply event stream   downloads a log file of your plan s events        Download apply description   downloads a file with all the information about this applied plan      Approve plans  Like traditional Terraform plans  Stack deployment plans list the changes that will occur if you approve that plan  Each component lists its expected resource changes  and you can review those changes as you decide whether to apply a plan    When viewing a deployment plan  HCP Terraform notes if that plan is awaiting approval  Click   Approve plan   if you want HCP Terraform to apply a plan to this deployment or   Discard plan   to ignore it  You manage each deployment independently  so any plans you approve only affect the current deployment you are interacting with       Convergence checks  After applying any plan  HCP Terraform automatically triggers a plan called a convergence check  A convergence check is a re plan to ensure components do not have any  deferred changes   deferred changes   HCP Terraform continues to trigger new plans until the convergence check returns a plan that does not contain changes    By default  each Stack has an  auto approve  rule named  empty plan   which auto approves a plan if it does not contain changes  When a convergence check contains no changes  HCP Terraform auto applies that plan      Deferred changes  Like with Terraform configuration files  HCP Terraform generates a dependency graph and creates resources defined in    tfstack hcl  and    tfdeploy hcl  files    When you deploy a Stack with resources that depend on resources provisioned by other components in your stack  HCP Terraform recognizes the dependency between components and automatically defers that plan until HCP Terraform can complete it successfully  Plans with deferred changes are plans with resources that depend on resources that don t exist yet  requiring follow up plans        Hands on    Complete the  Manage 
Kubernetes workloads with stacks   terraform tutorials cloud stacks eks deferred  tutorial to create plans with deferred changes   HCP Terraform notifies you in the UI if a plan contains deferred changes  Approving a plan with deferred changes makes HCP Terraform automatically create a follow up plan to properly set up resources in the order of operations those resources require    After applying a plan with deferred changes  HCP Terraform notifies you of any replans it creates with a link to   View replan    You can review the replan to ensure HCP Terraform created your resources as expected"}
{"questions":"terraform Manage projects page title Manage projects in HCP Terraform and Terraform Enterprise This topic describes how to create and manage projects in HCP Terraform and Terraform Enterprise A project is a folder containing one or more workspaces Use projects to organize and group workspaces and create ownership boundaries across your infrastructure","answers":"---\npage_title: Manage projects in HCP Terraform and Terraform Enterprise\ndescription: |-\n  Use projects to organize and group workspaces and create ownership boundaries\n  across your infrastructure.\n---\n\n# Manage projects\n\nThis topic describes how to create and manage projects in HCP Terraform and Terraform Enterprise. A project is a folder containing one or more workspaces.\n\n## Requirements\n\nYou must have the following permissions to manage projects:\n\n- You must be a member of a team with the **Manage all Projects** permissions enabled to create a project. Refer to [Organization Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-permissions) for additional information.\n- You must be a member of a team with the **Visible** option enabled under **Visibility** in the organization settings to configure a new team's access to the project. Refer to [Team Visibility](\/terraform\/cloud-docs\/users-teams-organizations\/teams\/manage#team-visibility) for additional information.\n- You must be a member of a team with update and delete permissions to be able to update and delete projects respectively.\n\nTo delete tags on a project, you must be a member of a team with the **Admin** permission group enabled for the project.\n\nTo create tags for a project, you must be a member of a team with the **Write** permission group enabled for the project.\n\n## View a project\n\nTo view your organization's projects:\n\n1. Click **Projects**.\n1. Search for a project that you want to view. 
You can use the following methods:\n   - Sort by column header.\n   - Use the search bar to search on the name of a project or a tag.\n1. Click on a project's name to view more details.\n\n\n## Create a project\n\n1. Click **Projects**.\n1. Click **+ New project**.\n1. Specify a name for the project. The name must be unique within the organization and can only include letters, numbers, inner spaces, hyphens, and underscores.\n1. Add a description for the project. This field is optional.\n   \n    ~> **Adding project tags is in private beta and unavailable for some users.** Skip to step 8 if your interface does not include elements for adding tags. Contact your HashiCorp representative for information about participating in the private beta.  \n\n1. Open the **Add key value tags** menu to add tags to your project. Tags are optional key-value pairs that you can use to organize projects. Any workspaces you create within the project inherit project tags. Refer to [Define project tags](#define-project-tags) for additional information.\n1. Click **+Add tag** and specify a tag key and tag value. If your organization has defined reserved tag keys, they appear in the **Tag key** field as suggestions. Refer to [Create and manage reserved tags](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#create-and-manage-reserved-tags) for additional information.\n1. Click **+add another tag** to attach any additional tags.\n1. Click **Create** to finish creating the project.\n\nHCP Terraform returns a new project page displaying all the project\ninformation.\n\n## Edit a project\n\n1. Click **Projects**.\n1. Click the name of the project you want to edit.\n1. Click **Settings**.\n\nOn this **General settings** page, you can update the project name and project\ndescription, and delete the project. 
On the **Team access** page, you can modify\nteam access to the project.\n\n## Automatically destroy inactive workspaces \n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/ephemeral-workspaces.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nYou can configure HCP Terraform to automatically destroy each workspace's\ninfrastructure in a project after a period of inactivity. A workspace\nis _inactive_ if the workspace's state has not changed within your designated\ntime period.\n\nIf you configure a project to auto-destroy its infrastructure when inactive, \nany run that updates Terraform state further delays the scheduled auto-destroy \ntime by the length of your designated timeframe.\n\n!> **Warning:** Automatic destroy plans _do not_ prompt you for apply approval in the HCP Terraform user interface. We recommend only using this setting for development environments.\n\nTo schedule an auto-destroy run after a period of workspace inactivity:\n\n1. Navigate to the project's **Settings** > **Auto-destroy Workspaces** page.\n1. Click **Set up default**.\n1. Select or customize a desired timeframe of inactivity.\n1. Click **Confirm default**.\n\nYou can configure an individual workspace's auto-destroy settings to override\nthis default configuration. Refer to [automatically destroy workspaces](\/terraform\/cloud-docs\/workspaces\/settings\/deletion#automatically-destroy) for more information.\n\n## Delete a project\n\nYou can only delete projects that do not contain workspaces.\n\nTo delete an empty project:\n\n1. Click **Projects**.\n1. Search for a project that you want to review by scrolling down the table or\n   searching for a project name in the search bar above the project table.\n1. Click **Settings**. The settings view for the selected project appears.\n1. Click the **Delete** button. A **Delete project** modal appears.\n1. 
Click the **Delete** button to confirm the deletion.\n\nHCP Terraform returns to the **Projects** view with the deleted project\nremoved from the list.\n\n## Define project tags\n\n~> **Adding project tags is in private beta and unavailable for some users.** Contact your HashiCorp representative for information about participating in the private beta.  \n\nYou can define tags stored as key-value pairs to help you organize your projects and track resource consumption. HCP Terraform applies tags that you attach to projects to the workspaces created inside the project.\n\nWorkspace administrators with appropriate permissions can attach new key-value pairs to their workspaces to override inherited tags. Refer to [Create workspace tags](\/terraform\/cloud-docs\/workspaces\/tags) for additional information about using tags in workspaces.\n\nTags that you create appear in the tags management screen in the organization settings. Refer to [Organizations](\/terraform\/cloud-docs\/users-teams-organizations\/organizations) for additional information.\n\nThe following rules apply to tag keys and values:\n\n- Tags must be one or more characters.\n- Tags have a 255 character limit.\n- Tags can include letters, numbers, colons, hyphens, and underscores.\n- Tag values are optional.\n- You can create up to 10 unique tags per workspace and 10 unique tags per project. 
As a result, each workspace can have up to 20 tags.\n- You cannot use the following strings at the beginning of a tag key:\n    - `hcp`\n    - `hc`\n    - `ibm`","site":"terraform","answers_cleaned":"    page title  Manage projects in HCP Terraform and Terraform Enterprise description       Use projects to organize and group workspaces and create ownership boundaries   across your infrastructure         Manage projects  This topic describes how to create and manage projects in HCP Terraform and Terraform Enterprise  A project is a folder containing one or more workspaces      Requirements  You must have the following permissions to manage projects     You must be a member of a team with the   Manage all Projects    permissions enabled to create a project  Refer to  Organization Permissions   terraform cloud docs users teams organizations permissions organization permissions  for additional information    You must be a member of a team with the   Visible   option enabled under   Visibility   in the organization settings to configure a new team s access to the project  Refer to  Team Visibility   terraform cloud docs users teams organizations teams manage team visibility  for additional information    You must be a member of a team with update and delete permissions to be able to update and delete teams respectively   To delete tags on a project  you must be member of a team with the   Admin   permission group enabled for the project   To create tags for a project  you must be member of a team with the   Write   permission group enabled for the project      View a project  To view your organization s projects   1  Click   Projects    1  Search for a project that you want to view  You can use the following methods       Sort by column header       Use the search bar to search on the name of a project or a tag  1  Click on a project s name to view more details       Create a project  1  Click   Projects    1  Click     New project    1  Specify a name for the project  
The name must be unique within the organization and can only include letters, numbers, inner spaces, hyphens, and underscores.
1. Add a description for the project. This field is optional.

-> **Note:** Adding project tags is in private beta and unavailable for some users. Skip to step 8 if your interface does not include elements for adding tags. Contact your HashiCorp representative for information about participating in the private beta.

1. Open the **Add key value tags** menu to add tags to your project. Tags are optional key-value pairs that you can use to organize projects. Any workspaces you create within the project inherit project tags. Refer to [Define project tags](#define-project-tags) for additional information.
1. Click **Add tag** and specify a tag key and tag value. If your organization has defined reserved tag keys, they appear in the **Tag key** field as suggestions. Refer to [Create and manage reserved tags](/terraform/cloud-docs/users-teams-organizations/organizations#create-and-manage-reserved-tags) for additional information.
1. Click **add another tag** to attach any additional tags.
1. Click **Create** to finish creating the project.

HCP Terraform returns a new project page displaying all the project information.

## Edit a project

1. Click **Projects**.
1. Click the name of the project you want to edit.
1. Click **Settings**.

On the **General settings** page, you can update the project name, update the project description, and delete the project. On the **Team access** page, you can modify team access to the project.

## Automatically destroy inactive workspaces

<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/ephemeral-workspaces.mdx'
<!-- END: TFC:only name:pnp-callout -->

You can configure HCP Terraform to automatically destroy each workspace's infrastructure in a project after a period of inactivity. A workspace is _inactive_ if the workspace's state has not changed within your designated time period.

If you configure a project to auto-destroy its infrastructure when inactive, any run that updates Terraform state further delays the scheduled auto-destroy time by the length of your designated timeframe.

~> **Warning:** Automatic destroy plans _do not_ prompt you for apply approval in the HCP Terraform user interface. We recommend only using this setting for development environments.

To schedule an auto-destroy run after a period of workspace inactivity:

1. Navigate to the project's **Settings** > **Auto-destroy Workspaces** page.
1. Click **Set up default**.
1. Select or customize a desired timeframe of inactivity.
1. Click **Confirm default**.

You can configure an individual workspace's auto-destroy settings to override this default configuration. Refer to [Automatically destroy workspaces](/terraform/cloud-docs/workspaces/settings/deletion#automatically-destroy) for more information.

## Delete a project

You can only delete projects that do not contain workspaces.

To delete an empty project:

1. Click **Projects**.
1. Search for the project that you want to delete by scrolling down the table or searching for the project name in the search bar above the project table.
1. Click **Settings**. The settings view for the selected project appears.
1. Click the **Delete** button. A **Delete project** modal appears.
1. Click the **Delete** button to confirm the deletion.

HCP Terraform returns to the **Projects** view with the deleted project removed from the list.

## Define project tags

-> **Note:** Adding project tags is in private beta and unavailable for some users. Contact your HashiCorp representative for information about participating in the private beta.

You can define tags stored as key-value pairs to help you organize your projects and track resource consumption. HCP Terraform applies tags that you attach to projects to the workspaces created inside the project. Workspace administrators with appropriate permissions can attach new key-value pairs to their workspaces to override inherited tags. Refer to [Create workspace tags](/terraform/cloud-docs/workspaces/tags) for additional information about using tags in workspaces.

Tags that you create appear in the tags management screen in the organization settings. Refer to [Organizations](/terraform/cloud-docs/users-teams-organizations/organizations) for additional information.

The following rules apply to tag keys and values:

- Tags must be one or more characters.
- Tags have a 255-character limit.
- Tags can include letters, numbers, colons, hyphens, and underscores.
- Tag values are optional.
- You can create up to 10 unique tags per workspace and 10 unique tags per project. As a result, each workspace can have up to 20 tags.
- You cannot use the following strings at the beginning of a tag key:
  - `hcp`
  - `hc`
  - `ibm`
"}
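The tag key rules above can be checked mechanically. The following is a minimal sketch of the stated constraints, not part of HCP Terraform itself; the function and constant names are invented for illustration:

```python
import re

# Reserved strings named in the docs: tag keys cannot begin with these.
RESERVED_PREFIXES = ("hcp", "hc", "ibm")

# One or more characters, up to 255; letters, numbers, colons, hyphens, underscores.
TAG_KEY_PATTERN = re.compile(r"^[A-Za-z0-9:_-]{1,255}$")

def validate_tag_key(key):
    """Return a list of rule violations for a proposed project tag key."""
    errors = []
    if not TAG_KEY_PATTERN.fullmatch(key):
        errors.append("keys must be 1-255 letters, numbers, colons, hyphens, or underscores")
    if key.lower().startswith(RESERVED_PREFIXES):
        errors.append("keys cannot begin with a reserved string: hcp, hc, ibm")
    return errors
```

Tag values are optional, so only keys are validated here; the same character and length limits would apply to values.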
{"questions":"terraform Workspaces organize infrastructure into meaningful groups Learn how to create page title Create workspaces Workspaces HCP Terraform This topic describes how to create and manage workspaces in HCP Terraform and Terraform Enterprise UI A workspace is a group of infrastructure resources managed by Terraform Refer to Workspaces overview terraform cloud docs workspaces for additional information and configure workspaces through the UI Create workspaces","answers":"---\npage_title: Create workspaces - Workspaces - HCP Terraform\ndescription: >-\n  Workspaces organize infrastructure into meaningful groups. Learn how to create\n  and configure workspaces through the UI.\n---\n\n# Create workspaces\n\nThis topic describes how to create and manage workspaces in HCP Terraform and Terraform Enterprise UI. A workspace is a group of infrastructure resources managed by Terraform. Refer to [Workspaces overview](\/terraform\/cloud-docs\/workspaces) for additional information.\n\n> **Hands-on:** Try the [Get Started - HCP Terraform](\/terraform\/tutorials\/cloud-get-started?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorials.\n\n## Introduction\n\nCreate new workspaces when you need to manage a new collection of infrastructure resources.  You can use the following methods to create workspaces:\n\n- HCP Terraform UI: Refer to [Create a workspace](#create-a-workspace) for instructions.\n- Workspaces API: Send a `POST`call to the `\/organizations\/:organization_name\/workspaces` endpoint to create a workspace. Refer to the [API documentation](\/terraform\/cloud-docs\/api-docs\/workspaces#create-a-workspace) for instructions.\n- Terraform Enterprise provider: Install the `tfe` provider and add the `tfe_workspace` resource to your configuration. 
Refer to the [`tfe` provider documentation](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs\/resources\/workspace) in the Terraform registry for instructions.\n- No-code provisioning: Use a no-code module from the registry to create a new workspace and deploy the module's resources. Refer to [Provisioning No-Code Infrastructure](\/terraform\/cloud-docs\/no-code-provisioning\/provisioning) for instructions.\n\nEach workspace belongs to a project. Refer to [Manage projects](\/terraform\/cloud-docs\/projects\/manage) for additional information.\n\n\n## Requirements\n\nYou must be a member of a team with one of the following permissions enabled to create and manage workspaces:\n\n- **Manage all projects**\n- **Manage all workspaces**\n- **Admin** permission group for a project.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Workspace naming\n\nWe recommend using consistent and informative names for new workspaces. One common approach is combining the workspace's important attributes in a consistent order. Attributes can be any defining characteristic of a workspace, such as the component, the component\u2019s run environment, and the region where the workspace is provisioning infrastructure.\n\nThis strategy could produce the following example workspace names:\n- networking-prod-us-east\n- networking-staging-us-east\n- networking-prod-eu-central\n- networking-staging-eu-central\n- monitoring-prod-us-east\n- monitoring-staging-us-east\n- monitoring-prod-eu-central\n- monitoring-staging-eu-central\n\nYou can add additional attributes to your workspace names as needed. 
For example, you may add the infrastructure provider, datacenter, or line of business.\n\nWe recommend using 90 characters or fewer for the name of your workspace.\n\n## Create a workspace\n\n[workdir]: \/terraform\/cloud-docs\/workspaces\/settings#terraform-working-directory\n\n[trigger]: \/terraform\/cloud-docs\/workspaces\/settings\/vcs#automatic-run-triggering\n\n[branch]: \/terraform\/cloud-docs\/workspaces\/settings\/vcs#vcs-branch\n\n[submodules]: \/terraform\/cloud-docs\/workspaces\/settings\/vcs#include-submodules-on-clone\n\nComplete the following steps to use the HCP Terraform or Terraform Enterprise UI to create a workspace:\n\n1. Log in and choose your organization.\n1. Click **New** and choose **Workspace** from the drop-down menu.\n1. If you have multiple projects, HCP Terraform may prompt you to choose the project to create the workspace in. Only users on teams with permissions for the entire project or the specific workspace can access the workspace. Refer to [Manage projects](\/terraform\/cloud-docs\/projects\/manage) for additional information.\n1. Choose a workflow type.\n1. Complete the following steps if you are creating a workspace that follows the VCS workflow:\n   1. Choose an existing version control provider from the list or configure a new system. You must enable the workspace project to connect to your provider. Refer to [Connecting VCS\n      Providers](\/terraform\/cloud-docs\/vcs) for more details.\n   1. If you choose the **GitHub App** provider, choose an organization and repository when prompted. The list only displays the first 100 repositories from your VCS provider. If your repository is missing from the list, enter the repository ID in the text field.\n   1. 
Refer to the following topics for information about configuring workspace settings in the **Advanced options** screen:\n      - [Terraform Working Directory][workdir]\n      - [Automatic Run Triggering][trigger]\n      - [VCS branch][branch]\n      - [Include submodules on clone][submodules]\n1. Specify a name for the workspace. VCS workflow workspaces default to the name of the repository. The name must be unique within the organization and can include letters, numbers, hyphens, and underscores. Refer to [Workspace naming](#workspace-naming) for additional information.\n1. Add an optional description for the workspace. The description appears at the top of the workspace in the HCP Terraform UI.\n1. Click **Create workspace** to finish.\n\nFor CLI- or API-driven workflows, the system opens the new workspace overview. For version control workspaces, the **Configure Terraform variables** page appears.\n\n### Configure Terraform variables for VCS workflows\n\nAfter you create a new workspace from a version control repository, HCP Terraform scans its configuration files for [Terraform variables](\/terraform\/cloud-docs\/workspaces\/variables#terraform-variables) and displays variables without default values or variables that are undefined in an existing [global or project-scoped variable set](\/terraform\/cloud-docs\/workspaces\/variables\/managing-variables#variable-sets). Terraform cannot perform successful runs in the workspace until you set values for these variables.\n\nChoose one of the following actions:\n\n- To skip this step, click **Go to workspace overview**. You can [load these variables from files](\/terraform\/cloud-docs\/workspaces\/variables\/managing-variables#loading-variables-from-files) or create and set values for them later from within the workspace. 
HCP Terraform does not automatically scan your configuration again; you can only add variables from within the workspace individually.\n- To configure variables, enter a value for each variable on the page. You may want to leave a variable empty if you plan to provide it through another source, like an `auto.tfvars` file. Click **Save variables** to add these variables to the workspace.\n\n## Next steps\n\nIf you have already configured all Terraform variables, we recommend [manually starting a run](\/terraform\/cloud-docs\/run\/ui#manually-starting-runs) to prepare VCS-driven workspaces. You may also want to do one or more of the following actions:\n\n- [Upload configuration versions](\/terraform\/cloud-docs\/workspaces\/configurations#providing-configuration-versions): If you chose the API or CLI-Driven workflow, you must upload configuration versions for the workspace.\n- [Edit environment variables](\/terraform\/cloud-docs\/workspaces\/variables): Shell environment variables store credentials and customize Terraform's behavior.\n- [Edit additional workspace settings](\/terraform\/cloud-docs\/workspaces\/settings): This includes notifications, permissions, and run triggers to start runs automatically.\n- [Learn more about running Terraform in your workspace](\/terraform\/cloud-docs\/run\/remote-operations): This includes how Terraform processes runs within the workspace, run modes, run states, and other operations.\n- [Create workspace tags](\/terraform\/cloud-docs\/workspaces\/tags): Add tags to your workspaces so that you can organize and track them. \n- [Browse workspaces](\/terraform\/cloud-docs\/workspaces\/browse): Use the interfaces available in the UI to browse, sort, and filter workspaces so that you can track resource consumption. \n\n### VCS Connection\n\nIf you connected a VCS repository to the workspace, HCP Terraform automatically registers a webhook with your VCS provider. 
A workspace with no runs will not accept new runs from a VCS webhook, so you must [manually start at least one run](\/terraform\/cloud-docs\/run\/ui#manually-starting-runs).\n\nAfter you manually start a run, HCP Terraform automatically queues a plan when new commits appear in the selected branch of the linked repository or someone opens a pull request on that branch. Refer to [Webhooks](\/terraform\/cloud-docs\/vcs#webhooks) for more details.","site":"terraform"}
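As a companion to the UI steps above, the Workspaces API route mentioned earlier (`POST /organizations/:organization_name/workspaces`) accepts a JSON:API document. The following is a minimal sketch of building that request body; the helper name is invented, and the optional `project` relationship shown is an assumption based on the API documentation linked above:

```python
import json

def workspace_create_payload(name, project_id=None):
    """Build a JSON:API body for POST /organizations/:organization_name/workspaces."""
    payload = {"data": {"type": "workspaces", "attributes": {"name": name}}}
    if project_id is not None:
        # Optionally place the workspace in a specific project.
        payload["data"]["relationships"] = {
            "project": {"data": {"type": "projects", "id": project_id}}
        }
    return payload

# Example: a workspace named with the component-environment-region convention.
body = workspace_create_payload("networking-prod-us-east")
print(json.dumps(body, indent=2))
```

Send the body with a `Content-Type: application/vnd.api+json` header and a bearer token, per the HCP Terraform API conventions.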
{"questions":"terraform page title JSON Filtering HCP Terraform viewer terraform cloud docs workspaces state and policy check JSON data About JSON Data Filtering Certain pages where JSON data is displayed such as the state viewer terraform cloud docs policy enforcement sentinel json allow you to filter the results This Learn how to create custom datasets on pages that display JSON data","answers":"---\npage_title: JSON Filtering - HCP Terraform\ndescription: Learn how to create custom datasets on pages that display JSON data.\n---\n\n# About JSON Data Filtering\n\nCertain pages where JSON data is displayed, such as the [state\nviewer](\/terraform\/cloud-docs\/workspaces\/state) and [policy check JSON data\nviewer](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/json), allow you to filter the results. This\nenables you to see just the data you need, and even create entirely new datasets\nto see data in the way you want to see it!\n\n![entering a json filter](\/img\/docs\/json-viewer-intro.png)\n\n-> **NOTE:** _Filtering_ the data in the JSON viewer is separate from\n_searching_ it. To search, press Control-F (or Command-F on MacOS). You can\nsearch and apply a filter at the same time.\n\n## Entering a Filter\n\nFilters are entered by putting the filter in the aptly named **filter** box in\nthe JSON viewer. After entering the filter, pressing **Apply** or the enter key\non your keyboard will apply the filter. The filtered results, if any, are\ndisplayed in result box. Clearing the filter will restore the original JSON\ndata.\n\n![entering a json filter](\/img\/docs\/sentinel-json-enter-filter.png)\n\n## Filter Language\n\nThe JSON filter language is a small subset of the\n[jq](https:\/\/stedolan.github.io\/jq\/) JSON filtering language. Selectors,\nliterals, indexes, slices, iterators, and pipes are supported, as are also array\nand object construction. 
At this time, parentheses and more complex operations\nsuch as mathematical operators, conditionals, and functions are not supported.\n\nBelow is a quick reference of some of the more basic functions to get you\nstarted.\n\n### Selectors\n\nSelectors allow you to pick an index out of a JSON object, and are written as\n`.KEY.SUBKEY`. So, as an example, given an object of\n`{\"foo\": {\"bar\": \"baz\"}}`, and the filter `.foo.bar`, the result would be\ndisplayed as `\"baz\"`.\n\nA single dot (`.`) without anything else always denotes the current value,\nunaltered.\n\n### Indexes\n\nIndexes can be used to fetch array elements, or select non-alphanumeric object\nfields. They are written as `[0]` or `[\"foo-bar\"]`, depending on the purpose.\n\nGiven an object of `{\"foo-bar\": [\"baz\", \"qux\"]}` and the filter of\n`.[\"foo-bar\"][0]`, the result would be displayed as `\"baz\"`.\n\n### Slices\n\nArrays can be sliced to get a subset of an array. The syntax is `[LOW:HIGH]`.\n\nGiven an array of `[0, 1, 2, 3, 4]` and the filter of\n`.[1:3]`, the result would be displayed as `[1, 2]`. This also illustrates that\nthe result of the slice operation is always of length HIGH-LOW.\n\nSlices can also be applied to strings, in which case a substring is returned with the\nsame rules applied, with the first character of the string being index 0.\n\n### Iterators\n\nIterators can iterate over arrays and objects. The syntax is `[]`.\n\nIterators iterate over the _values_ of an object only. So given an object of\n`{\"foo\": 1, \"bar\": 2}`, the filter `.[]` would yield an iteration of `1, 2`.\n\nNote that iteration results are not necessarily always arrays. Iterators are\nhandled in a special fashion when dealing with pipes and object creators (see\nbelow).\n\n### Array Construction\n\nWrapping an expression in brackets (`[ ... ]`) creates an array with the\nsub-expressions inside the array. 
The results are always concatenated.\n\nFor example, for an object of `{\"foo\": [1, 2], \"bar\": [3, 4]}`, the construction\nexpressions `[.foo[], .bar[]]` and `[.[][]]` are the same, producing the\nresulting array `[1, 2, 3, 4]`.\n\n### Object Construction\n\nWrapping an expression in curly braces `{KEY: EXPRESSION, ...}` creates an\nobject.\n\nIterators work uniquely with object construction in that an object is\nconstructed for each _iteration_ that the iterator produces.\n\nAs a basic example, consider an array `[1, 2, 3]`. While the expression\n`{foo: .}` will produce `{\"foo\": [1, 2, 3]}`, adding an iterator to the\nexpression so that it reads `{foo: .[]}` will produce 3 individual objects:\n`{\"foo\": 1}`, `{\"foo\": 2}`, and `{\"foo\": 3}`.\n\n### Pipes\n\nPipes allow the results of one expression to be fed into another. This can be\nused to re-write expressions to help reduce complexity.\n\nIterators work with pipes in a fashion similar to object construction, where the\nexpression on the right-hand side of the pipe is evaluated once for every\niteration.\n\nAs an example, for the object `{\"foo\": {\"a\": 1}, \"bar\": {\"a\": 2}}`, both the\nexpression `{z: .[].a}` and `.[] | {z: .a}` produce the same result: `{\"z\": 1}`\nand `{\"z\": 2}`.","site":"terraform"}
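The selector, index, slice, and iterator behavior described in the reference can be mimicked in a few lines of Python. This is a toy model of the filter semantics for illustration, not the viewer's implementation; the function names are invented:

```python
def select(value, *keys):
    # Selector/index chain: .foo.bar, .["foo-bar"][0], and so on.
    for key in keys:
        value = value[key]
    return value

def slice_filter(value, low, high):
    # Slice .[LOW:HIGH]; the result always has length HIGH - LOW.
    return value[low:high]

def iterate(value):
    # Iterator .[]: yields an object's values, or an array's elements.
    return list(value.values()) if isinstance(value, dict) else list(value)

# Examples from the reference above:
assert select({"foo": {"bar": "baz"}}, "foo", "bar") == "baz"       # .foo.bar
assert select({"foo-bar": ["baz", "qux"]}, "foo-bar", 0) == "baz"   # .["foo-bar"][0]
assert slice_filter([0, 1, 2, 3, 4], 1, 3) == [1, 2]                # .[1:3]
assert iterate({"foo": 1, "bar": 2}) == [1, 2]                      # .[]
# Object construction over an iterator, {foo: .[]}, builds one object per iteration:
assert [{"foo": v} for v in iterate([1, 2, 3])] == [{"foo": 1}, {"foo": 2}, {"foo": 3}]
```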
{"questions":"terraform Health page title Health HCP Terraform HCP Terraform can perform automatic health assessments in a workspace to assess whether its real infrastructure matches the requirements defined in its Terraform configuration Health assessments include the following types of evaluations HCP Terraform can continuously monitor workspaces to assess whether their real infrastructure matches the requirements defined in their Terraform configuration","answers":"---\npage_title: Health - HCP Terraform\ndescription: |-\n HCP Terraform can continuously monitor workspaces to assess whether their real infrastructure matches the requirements defined in their Terraform configuration.\n---\n\n# Health\n\nHCP Terraform can perform automatic health assessments in a workspace to assess whether its real infrastructure matches the requirements defined in its Terraform configuration. Health assessments include the following types of evaluations:\n\n- [Drift detection](#drift-detection) determines whether your real-world infrastructure matches your Terraform configuration.\n- [Continuous validation](#continuous-validation) determines whether custom conditions in the workspace\u2019s configuration continue to pass after Terraform provisions the infrastructure.\n\nWhen you enable health assessments, HCP Terraform periodically runs health assessments for your workspace. 
Refer to [Health Assessment Scheduling](#health-assessment-scheduling) for details.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/health-assessments.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n## Permissions\n\nWorking with health assessments requires the following permissions:\n\n- To view health status for a workspace, you need read access to that workspace.\n- To change organization health settings, you must be an [organization owner](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-owners).\n- To change a workspace\u2019s health settings, you must be an [administrator for that workspace](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#workspace-admins).\n<!-- BEGIN: TFC:only name:health-assessments -->\n- To trigger [on-demand health assessments](\/terraform\/cloud-docs\/workspaces\/health#on-demand-assessments) for a workspace, you must be an [administrator for that workspace](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#workspace-admins).\n<!-- END: TFC:only name:health-assessments -->\n\n## Workspace requirements\n\nWorkspaces require the following settings to receive health assessments:\n- Terraform version 0.15.4+ for drift detection only\n- Terraform version 1.3.0+ for drift detection and continuous validation\n- [Remote execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) or [Agent execution mode](\/terraform\/cloud-docs\/agents\/agent-pools#configure-workspaces-to-use-the-agent) for Terraform runs\n\nThe latest Terraform run in the workspace must have been successful. If the most recent run ended in an errored, canceled, or discarded state, HCP Terraform pauses health assessments until there is a successfully applied run.\n\nThe workspace must also have at least one run in which Terraform successfully applies a configuration. 
HCP Terraform does not perform health assessments in workspaces with no real-world infrastructure.\n\n## Enable health assessments\n\nYou can enforce health assessments across all eligible workspaces in an organization within the [organization settings](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#health). Enforcing health assessments at an organization-level overrides workspace-level settings. You can only enable health assessments within a specific workspace when HCP Terraform is not enforcing health assessments at the organization level.\n\nTo enable health assessments within a workspace:\n\n1. Verify that your workspace satisfies the [requirements](#workspace-requirements).\n1. Go to the workspace and click **Settings > Health**.\n1. Select **Enable** under **Health Assessments**.\n1. Click **Save settings**.\n\n## Health assessment scheduling\n\nWhen you enable health assessments for a workspace, HCP Terraform runs the first health assessment based on whether there are active Terraform runs for the workspace:\n\n- **No active runs:** A few minutes after you enable the feature.\n- **Active speculative plan:** A few minutes after that plan is complete.\n- **Other active runs:** During the next assessment period.\n\nAfter the first health assessment, HCP Terraform starts a new health assessment during the next assessment period if there are no active runs in the workspace. Health assessments may take longer to complete when you enable health assessments in many workspaces at once or your workspace contains a complex configuration with many resources.\n\n\nA health assessment never interrupts or interferes with runs. If you start a new run during a health assessment, HCP Terraform cancels the current assessment and runs the next assessment during the next assessment period. 
This behavior may prevent HCP Terraform from performing health assessments in workspaces with frequent runs.\n\nHCP Terraform pauses health assessments if the latest run ended in an errored state. This behavior occurs for all run types, including plan-only runs and speculative plans. Once the workspace completes a successful run, HCP Terraform restarts health assessments during the next assessment period.\n\nTerraform Enterprise administrators can modify their installation's [assessment frequency and maximum number of concurrent assessments](\/terraform\/enterprise\/admin\/application\/general#health-assessments) from the admin settings console.\n\n<!-- BEGIN: TFC:only name:health-assessments -->\n\n### On-demand assessments\n\n-> **Note:** On-demand assessments are only available in the HCP Terraform user interface.\n\nIf you are an administrator for a workspace and it satisfies all [assessment requirements](\/terraform\/cloud-docs\/workspaces\/health#workspace-requirements), you can trigger a new assessment by clicking **Start health assessment** on the workspace's **Health** page.\n\nAfter clicking **Start health assessment**, the workspace displays a message in the bottom left-hand corner of the page to indicate if it successfully triggered a new assessment. The time it takes to complete an assessment can vary based on network latency and the number of resources managed by the workspace.\n\nYou cannot trigger another assessment while one is in progress. An on-demand assessment resets the scheduling for automated assessments, so HCP Terraform waits to run the next assessment until the next scheduled period.\n\n<!-- END: TFC:only name:health-assessments -->\n\n### Concurrency\n\nIf you enable health assessments on multiple workspaces, assessments may run concurrently. Health assessments do not affect your concurrency limit. 
HCP Terraform also monitors and controls health assessment concurrency to avoid issues for large-scale deployments with thousands of workspaces. However, HCP Terraform performs health assessments in batches, so health assessments may take longer to complete when you enable them in a large number of workspaces.\n\n### Notifications\n\nHCP Terraform sends [notifications](\/terraform\/cloud-docs\/workspaces\/settings\/notifications) about health assessment results according to your workspace\u2019s settings.\n\n## Workspace health status\n\nOn the organization's **Workspaces** page, HCP Terraform displays a **Health warning** status for workspaces with infrastructure drift or failed continuous validation checks.\n\nOn the right of a workspace\u2019s overview page, HCP Terraform displays a **Health** bar that summarizes the results of the last health assessment.\n- The **Drift** summary shows the total number of resources in the configuration and the number of resources that have drifted.\n- The **Checks** summary shows the number of passed, failed, and unknown statuses for objects with continuous validation checks.\n\n<!-- BEGIN: TFC:only name:health-in-explorer -->\n\n### View workspace health in explorer\n\nThe [Explorer page](\/terraform\/cloud-docs\/workspaces\/explorer) presents a condensed overview of the health status of the workspaces within your organization. 
You can see the following information:\n\n- Workspaces that are monitoring workspace health\n- Status of any configured continuous validation checks\n- Count of drifted resources for each workspace\n\nFor additional details on the data available for reporting, refer to the [Explorer](\/terraform\/cloud-docs\/workspaces\/explorer) documentation.\n\n![Viewing Workspace Health in Explorer](\/img\/docs\/tfc-explorer-health.png)\n\n<!-- END: TFC:only name:health-in-explorer -->\n\n## Drift detection\n\nDrift detection helps you identify situations where your actual infrastructure no longer matches the configuration defined in Terraform. This deviation is known as _configuration drift_. Configuration drift occurs when changes are made outside Terraform's regular process, leading to inconsistencies between the remote objects and your configured infrastructure.\n\nFor example, a teammate could create configuration drift by directly updating a storage bucket's settings in the cloud provider's console with values that conflict with your Terraform configuration. Drift detection can detect these differences and recommend steps to address and rectify the discrepancies.\n\nConfiguration drift differs from state drift. Drift detection does not detect state drift.\n\nConfiguration drift happens when external changes affecting remote objects invalidate your infrastructure configuration. State drift occurs when external changes affecting remote objects _do not_ invalidate your infrastructure configuration. Refer to [Refresh-Only Mode](\/terraform\/cloud-docs\/run\/modes-and-options#refresh-only-mode) to learn more about remediating state drift.\n\n### View workspace drift\n\nTo view the drift detection results from the latest health assessment, go to the workspace and click **Health > Drift**. 
If there is configuration drift, HCP Terraform proposes the necessary changes to bring the infrastructure back in sync with its configuration.\n\n### Resolve drift\n\nYou can use one of the following approaches to correct configuration drift:\n- **Overwrite drift**: If you do not want the drift's changes, queue a new plan and apply the changes to revert your real-world infrastructure to match your Terraform configuration.\n- **Update Terraform configuration:** If you want the drift's changes, modify your Terraform configuration to include the changes and push a new configuration version. This prevents Terraform from reverting the drift during the next apply. Refer to the [Manage Resource Drift](\/terraform\/tutorials\/state\/resource-drift) tutorial for a detailed example.\n\n## Continuous validation\n\nContinuous validation regularly verifies whether your configuration\u2019s custom assertions continue to pass, validating your infrastructure. For example, you can monitor whether your website returns an expected status code, or whether an API gateway certificate is valid. Identifying failed assertions helps you resolve the failure and prevent errors during your next Terraform operation.\n\nContinuous validation evaluates preconditions, postconditions, and check blocks as part of an assessment, but we recommend using [check blocks](\/terraform\/language\/checks) for post-apply monitoring. Use check blocks to create custom rules to validate your infrastructure's resources, data sources, and outputs.\n\n### Preventing false positives\n\nHealth assessments create a speculative plan to access the current state of your infrastructure. Terraform evaluates any check blocks in your configuration as the last step of creating the speculative plan. If your configuration relies on data sources and the values queried by a data source change between the time of your last run and the assessment, the speculative plan will include those changes. 
HCP Terraform will not modify your infrastructure as part of an assessment, but it can use those updated values to evaluate checks. This may lead to false positive results for alerts because your infrastructure has not yet changed.\n\nTo ensure your checks evaluate the current state of your configuration instead of a possible future change, use nested data sources that query your actual resource configuration, rather than a computed latest value. Refer to the [AMI image scenario](#asserting-up-to-date-amis-for-compute-instances) below for an example.\n\n### Example use cases\n\nReview the provider documentation for `check` block examples with [AWS](https:\/\/registry.terraform.io\/providers\/hashicorp\/aws\/latest\/docs\/guides\/continuous-validation-examples), [Azure](https:\/\/registry.terraform.io\/providers\/hashicorp\/azurerm\/latest\/docs\/guides\/tfc-check-blocks), and [GCP](https:\/\/registry.terraform.io\/providers\/hashicorp\/google\/latest\/docs\/guides\/google-continuous-validation).\n\n#### Monitoring the health of a provisioned website\n\nThe following example uses the [HTTP](https:\/\/registry.terraform.io\/providers\/hashicorp\/http\/latest\/docs) Terraform provider and a [scoped data source](\/terraform\/language\/checks#scoped-data-sources) within a [`check` block](\/terraform\/language\/checks) to assert the Terraform website returns a `200` status code, indicating it is healthy.\n\n```hcl\ncheck \"health_check\" {\n  data \"http\" \"terraform_io\" {\n    url = \"https:\/\/www.terraform.io\"\n  }\n\n  assert {\n    condition = data.http.terraform_io.status_code == 200\n    error_message = \"${data.http.terraform_io.url} returned an unhealthy status code\"\n  }\n}\n```\n\nContinuous Validation alerts you if the website returns any status code besides `200` while Terraform evaluates this assertion. You can also find failures in your workspace's [Continuous Validation Results](#view-continuous-validation-results) page. 
You can configure continuous validation alerts in your workspace's [notification settings](\/terraform\/cloud-docs\/workspaces\/settings\/notifications).\n\n#### Monitoring certificate expiration\n\n[Vault](https:\/\/www.vaultproject.io\/) lets you secure, store, and tightly control access to tokens, passwords, certificates, encryption keys, and other sensitive data. The following example uses a `check` block to monitor for the expiration of a Vault certificate.\n\n```hcl\nresource \"vault_pki_secret_backend_cert\" \"app\" {\n  backend = vault_mount.intermediate.path\n  name = vault_pki_secret_backend_role.test.name\n  common_name = \"app.my.domain\"\n}\n\ncheck \"certificate_valid\" {\n  assert {\n    condition = !vault_pki_secret_backend_cert.app.renew_pending\n    error_message = \"Vault cert is ready to renew.\"\n  }\n}\n```\n\n#### Asserting up-to-date AMIs for compute instances\n\n[HCP Packer](\/hcp\/docs\/packer) stores metadata about your [Packer](https:\/\/www.packer.io\/) images. 
The following example check fails when there is a newer AMI version available. The assertion compares the AMI reported by the scoped `aws_instance` data source, which reflects the instance's actual current state, against the latest artifact from HCP Packer.\n\n```hcl\ndata \"hcp_packer_artifact\" \"hashiapp_image\" {\n  bucket_name  = \"hashiapp\"\n  channel_name = \"latest\"\n  platform     = \"aws\"\n  region       = \"us-west-2\"\n}\n\nresource \"aws_instance\" \"hashiapp\" {\n  ami                         = data.hcp_packer_artifact.hashiapp_image.external_identifier\n  instance_type               = var.instance_type\n  associate_public_ip_address = true\n  subnet_id                   = aws_subnet.hashiapp.id\n  vpc_security_group_ids      = [aws_security_group.hashiapp.id]\n  key_name                    = aws_key_pair.generated_key.key_name\n\n  tags = {\n    Name = \"hashiapp\"\n  }\n}\n\ncheck \"ami_version_check\" {\n  data \"aws_instance\" \"hashiapp_current\" {\n    instance_tags = {\n      Name = \"hashiapp\"\n    }\n  }\n\n  assert {\n    condition = data.aws_instance.hashiapp_current.ami == data.hcp_packer_artifact.hashiapp_image.external_identifier\n    error_message = \"Must use the latest available AMI, ${data.hcp_packer_artifact.hashiapp_image.external_identifier}.\"\n  }\n}\n```\n\n### View continuous validation results\n\nTo view the continuous validation results from the latest health assessment, go to the workspace and click **Health > Continuous validation**.\n\nThe page shows all of the resources, outputs, and data sources with custom assertions that HCP Terraform evaluated. Next to each object, HCP Terraform reports whether the assertion passed or failed. If one or more assertions fail, HCP Terraform displays the error messages for each assertion.\n\nThe health assessment page displays each assertion by its [named value](\/terraform\/language\/expressions\/references). 
A `check` block's named value combines the prefix `check` with its configuration name.\n\nIf your configuration contains multiple [preconditions and postconditions](\/terraform\/language\/expressions\/custom-conditions#preconditions-and-postconditions) within a single resource, output, or data source, HCP Terraform will not show the results of individual conditions unless they fail. If all custom conditions on the object pass, HCP Terraform reports that the entire check passed. The assessment results will display the results of any preconditions and postconditions alongside the results of any assertions from `check` blocks, identified by the named values of their parent block.","site":"terraform"}
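To make the grouping by named value concrete, here is a minimal, hypothetical HCL sketch (the `aws_s3_bucket` resource and all names are invented for illustration, not taken from the docs above): the `postcondition` result is reported under its parent resource's named value, `aws_s3_bucket.logs`, while the `check` block appears separately as `check.bucket_created`.

```hcl
# Hypothetical configuration illustrating named values in assessment results.
resource "aws_s3_bucket" "logs" {
  bucket = "example-log-bucket"

  lifecycle {
    # Reported under the parent resource's named value, aws_s3_bucket.logs.
    # Individual condition results are only itemized when one fails.
    postcondition {
      condition     = self.arn != ""
      error_message = "The log bucket must have an ARN after apply."
    }
  }
}

# Reported under the named value check.bucket_created.
check "bucket_created" {
  assert {
    condition     = aws_s3_bucket.logs.bucket_domain_name != ""
    error_message = "The log bucket does not expose a domain name."
  }
}
```

If both the postcondition and the assertion pass, the assessment results list each named value as passed without itemizing the individual conditions.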
{"questions":"terraform Surface information from across workspaces and projects in your organization Explorer for Workspace Visibility As your organization grows keeping track of your sprawling infrastructure estate can get increasingly more complicated The explorer for workspace visibility helps surface a wide range of valuable information from across your organization page title Explorer for Workspace Visibility HCP Terraform tfc only true","answers":"---\npage_title: Explorer for Workspace Visibility - HCP Terraform\ndescription: >-\n  Surface information from across workspaces and projects in your organization.\ntfc_only: true\n---\n\n# Explorer for Workspace Visibility\n\nAs your organization grows, keeping track of your sprawling infrastructure estate can get increasingly complicated. The explorer for workspace visibility helps surface a wide range of valuable information from across your organization.\n\nOpen the explorer for workspace visibility by clicking **Explorer** in your organization's top-level side navigation.\n\nThe **Explorer** page displays buttons grouped by **Types** and **Use Cases**. Each button offers a new view into your organization or workspace's data. Clicking a button triggers the explorer to perform a query and display the results in a table of data.\n\n![Explorer Landing Page](\/img\/docs\/terraform-cloud-explorer-landing-page.png)\n\nThe **Types** buttons present generic use cases. For example, \"Workspaces\" displays a paginated and unfiltered list of your organization's workspaces and each workspace's accompanying data. The **Use Cases** buttons present sorted and filtered results to give you a focused view of your organizational data.\n\nYou can sort each column of the explorer results table. Clicking a hyperlinked field shows increasingly specific views of your data. For example, a workspace's modules count field links to a view of that workspace's associated modules. 
\n\nClearing a query takes you back to the explorer landing page. To clear a query, click the back arrow at the top left of your current explorer view page.\n\n## Permissions\n\nThe explorer for workspace visibility requires access to a broad range of an organization's data. To use the explorer, you must have either of the following organization permissions:\n- [Organization owner](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-owners)\n- [View all workspaces](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#view-all-workspaces) or greater\n\n## Types\n\nThe explorer for workspace visibility supports four types:\n\n- [Workspaces](\/terraform\/cloud-docs\/workspaces)\n- [Modules](\/terraform\/language\/modules)\n- [Providers](\/terraform\/language\/providers)\n- [Terraform Versions](\/terraform\/language\/upgrade-guides#upgrading-to-terraform-v1-4)\n\n## Use cases\n\nThe explorer for workspace visibility provides the following queries for specific use cases:\n\n- **Top module versions** shows modules sorted by usage frequency.\n- **Latest Terraform versions** displays a sorted list of Terraform versions in use.\n- **Top provider versions** lists providers sorted by usage frequency.\n- **Workspaces without VCS** lists workspaces not backed by VCS.\n- **Workspace VCS source** lists VCS-backed workspaces sorted by repository name.\n- **Workspaces with failed checks** lists workspaces that failed at least one [continuous validation](\/terraform\/enterprise\/workspaces\/health#continuous-validation) check.\n- **Drifted workspaces** displays [workspaces with drift](\/terraform\/enterprise\/workspaces\/health#drift-detection) and relevant drift information.\n- **All workspace versions** is a simplified view of your workspaces with current run and version information.\n- **Runs by status** provides a run-focused view by 
sorting workspaces by their current run status.\n- **Top Terraform versions** lists all Terraform versions by usage frequency.\n- **Latest updated workspaces** displays your most recently updated workspaces.\n- **Oldest applied workspaces** sorts workspaces by the date of the current applied run.\n\n## Custom filter conditions\n\nThe explorer's query builder allows you to execute queries with custom filter conditions against any of the supported [types](#types).\n\nTo use the query builder, select a type or use case from the explorer home page. Expand the **Modify conditions** section to show the filter conditions in use for the current query and to define new filter conditions.\n\n![Explorer Query Builder](\/img\/docs\/query-builder.png)\n\nEach filter condition is represented by a row of inputs, each made up of a target field, operator, and value.\n\n1. Choose a target field from the first dropdown to select the field that the explorer runs the query against. The available fields vary based on the type you are querying.\n\n1. Choose an operator from the second dropdown. The options available will vary based on the target field's data type.\n\n1. Provide a value in the third field to compare against the target field's value.\n\n1. Click **Run Query** to evaluate the filter conditions.\n\n-> **Tip:** Inspect the filter conditions used by the various pre-canned [**Use Cases**](#use-cases) to learn how they are constructed.\n\nYou can create multiple filter conditions for a query. When you provide multiple conditions, the explorer evaluates them at query time with a logical AND. To add a new condition, use the **Add condition** button below the condition list. To remove a condition, use the trash bin button on the right-hand side of the condition.\n\n## Save a view\n\nYou can save explorer views to revisit a custom query or use case.\n\n-> **Note**: The ability to save views in the explorer is in public beta. 
All APIs and workflows are subject to change.\n\nYou can save the explorer\u2019s view of your data by performing the following steps:\n\n1. Navigate to the **Explorer** page in the sidebar of your organization.  \n1. Click on a tile in the **Types** or **Use cases** section.\n1. Define a query using the query building interface.   \n1. By default, the explorer displays all available information, but you can adjust which columns you want your view to include.  \n1. Open the **Actions** dropdown menu and select **Save view**, saving the last query you performed in the explorer.  \n1. Specify a new, unique name for your saved view.   \n1. Click **Save**.\n\nWhen the explorer saves a view, it saves the last query it performed. If you change a query and do not rerun it, the explorer does not save those changes.\n\nAfter you have saved a view of your data, you can access it from the explorer\u2019s main page underneath the **Saved views** tab. Saved views keep track of the following attributes:\n\n* The name of the saved view. \n* The type of data you are querying: module, workspace, provider, or Terraform versions.\n* The owner of the saved view.  \n* When the saved view was last updated.\n\nYou can rename or delete a saved view from the **Saved views** tab by opening the ellipsis menu next to a view and selecting either **Rename** or **Delete**.\n\n### Manage a saved view\n\nComplete the following steps to update a saved view:\n\n1. Open the view in the explorer and make changes.  \n1. Open the **Actions** dropdown menu and select **Save view**.\n\nComplete the following steps to save a new view based on an existing saved view:\n\n1. Open a saved view in the explorer and make changes.  \n1. Open the **Actions** dropdown menu and select **Save as**.  \n1. Enter a name for this new saved view.  \n1. Click **Save**.\n\nComplete the following steps to delete a saved view:\n\n1. Open a saved view in the explorer.\n1. 
Choose **Delete view** from the **Actions** drop-down menu.\n1. Click **Delete** when prompted to confirm that you want to permanently delete the view.","site":"terraform"}
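The explorer's custom filter conditions are rows of (target field, operator, value) that are combined with a logical AND at query time. The sketch below illustrates only those semantics; the field names, operator set, and workspace data here are invented for illustration and are not HCP Terraform's actual query API.

```python
# Illustrative sketch of the explorer's filter-condition semantics: each
# condition is a (target field, operator, value) row, and multiple rows
# are evaluated together with a logical AND. Operators and fields are
# examples, not HCP Terraform's real implementation.
OPERATORS = {
    "is": lambda field, value: field == value,
    "contains": lambda field, value: value in field,
    "gt": lambda field, value: field > value,
}

def matches(workspace, conditions):
    # A workspace passes only if every condition holds (logical AND).
    return all(OPERATORS[op](workspace[field], value)
               for field, op, value in conditions)

workspaces = [
    {"name": "networking-prod", "module_count": 4, "vcs_repo": "org/net"},
    {"name": "sandbox", "module_count": 0, "vcs_repo": ""},
]

# Example query: VCS-backed workspaces that use at least one module.
conditions = [("vcs_repo", "contains", "/"), ("module_count", "gt", 0)]
print([w["name"] for w in workspaces if matches(w, conditions)])  # ['networking-prod']
```

Adding a third condition row narrows the result further, which matches the documented behavior of the **Add condition** button.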
{"questions":"terraform Create workspace tags Learn how to create tags for your workspaces so that you can organize workspaces Tagging workspaces also lets you sort and filter workspaces in the UI page title Create workspace tags Overview This topic describes how to attach tags to your workspaces so that you can organize workspaces Tagging workspaces also helps you sort and filter workspaces in the UI and enable you to associate Terraform configurations with several workspaces","answers":"---\npage_title: Create workspace tags\ndescription: Learn how to create tags for your workspaces so that you can organize workspaces. Tagging workspaces also lets you sort and filter workspaces in the UI.  \n---\n\n# Create workspace tags\n\nThis topic describes how to attach tags to your workspaces so that you can organize workspaces. Tagging workspaces also helps you sort and filter workspaces in the UI and enables you to associate Terraform configurations with several workspaces. \n\n## Overview\n\nYou can create tags and attach them to your workspaces. Tagging workspaces helps organization administrators organize, sort, and filter workspaces so that they can track resource consumption. For example, you could add a `cost-center` tag so that administrators can sort workspaces according to cost center. \n\nHCP Terraform stores tags as either single-value tags or key-value pairs. You can also migrate existing single-value tags to the key-value scheme. Refer to [Migrating to key-value tags](#migrating-to-key-value-tags) for instructions.\n\n~> **Adding tags stored as key-value pairs is in private beta and unavailable for some users.** Contact your HashiCorp representative for information about participating in the private beta.\n\nSingle-value tags enable you to associate a single Terraform configuration file with several workspaces according to tag. 
Refer to the following topics in the Terraform CLI and configuration language documentation for additional information:\n\n- [`terraform{}.cloud{}.workspaces` reference](\/terraform\/language\/terraform#terraform-cloud-workspaces)\n- [Define connection settings](\/terraform\/cli\/cloud\/settings#define-connection-settings)\n\n### Reserved tags\n\nYou can reserve a set of tag keys for each organization. Reserved tag keys appear as suggestions when people create tags for projects and workspaces so that you can use consistent terms for tags. Refer to [Create and manage reserved tags](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#create-and-manage-reserved-tags) for additional information.\n\n~> **Reserved tags and project tags are in private beta and unavailable for some users.** Contact your HashiCorp representative for information about participating in the private beta.\n\n## Requirements\n\n- You must be a member of a team with the **Write** permission group enabled for the workspace to create tags for a workspace.\n- You must be a member of a team with the **Admin** permission group enabled for the workspace to delete tags on a workspace.\n\nYou cannot create tags for a workspace using the CLI.\n\n## Define tags\n\nComplete the following steps to define workspace tags: \n\n<Tabs>\n\n<Tab heading=\"Key-value tags\">\n\n1. Open your workspace.\n1. Click either the count link for the **Tags** label or **Manage Tags** in the **Tags** card in the right sidebar to open the **Manage workspace tags** drawer.\n1. Click **+Add tag** to define a new key-value pair. Refer to [Tag syntax](#tag-syntax) for information about supported characters. \n1. Tags inherited from the project appear in the **Inherited Tags** section. You can attach new key-value pairs to the workspace to override inherited tags. 
Refer to [Manage projects](\/terraform\/cloud-docs\/projects\/manage) for additional information about using tags in projects.\n\n   You cannot override reserved tag keys created by the organization administrator. Refer to [Create and manage reserved tags](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#create-and-manage-reserved-tags) for additional information.\n\n   You can also click on tag links in the **Inherited Tags** section to view workspaces that use the same tag.\n1. Click **Save**.\n\n<\/Tab>\n<Tab heading=\"Single-value tags\">\n\n1. Open your workspace.\n1. Open the **Add a tag** drop-down menu and enter a value. If a tag value already exists, you can attach it to the workspace. Otherwise, HCP Terraform creates a new tag and attaches it to the workspace. Refer to [Tag syntax](#tag-syntax) for information about supported characters.\n\n<\/Tab>\n<\/Tabs>\n\nTags that you create appear in the tags management screen in the organization settings. Refer to [Organizations](\/terraform\/cloud-docs\/users-teams-organizations\/organizations) for additional information.\n\n## Update tags\n\n<Tabs>\n\n<Tab heading=\"Key-value tags\">\n\n1. Open your workspace.\n1. Click either the count link for the **Tags** label or **Manage Tags** in the **Tags** card in the right sidebar to open the **Manage workspace tags** drawer.\n1. In the **Direct Tags** section, modify a key, a value, or both, and click **Save**.\n\n<\/Tab>\n<Tab heading=\"Single-value tags\">\n\nYou cannot manage single-value tags in the UI. 
Instead, use the following workspace API endpoints to manage single-value tags:\n\n- [`POST \/workspaces\/:workspace_id\/relationships\/tags`](\/terraform\/cloud-docs\/api-docs\/workspaces#add-tags-to-a-workspace): Adds a new single-value workspace tag.\n- [`DELETE \/workspaces\/:workspace_id\/relationships\/tags`](\/terraform\/cloud-docs\/api-docs\/workspaces#remove-tags-from-workspace): Deletes a single-value workspace tag.\n- [`PATCH \/workspaces\/:workspace_id`](\/terraform\/cloud-docs\/api-docs\/workspaces#update-a-workspace): Updates an existing single-value workspace tag.\n\n<\/Tab>\n<\/Tabs>\n\n## Migrating to key-value tags\n\nYou can use the API to migrate single-value tags that may already be in your workspaces to tags stored as key-value pairs. You must have permissions in the workspace to perform the following task. Refer to [Requirements](#requirements) for additional information.\n\nNote that Terraform adds single-value workspace tags that are defined in the associated Terraform `cloud` block configuration to workspaces selected by the configuration. As a result, your workspace may include duplicate tags. Refer to the [Terraform reference documentation](\/terraform\/language\/terraform#terraform-cloud-workspaces) for additional information. \n\n### Re-create existing workspace tags as resource tags\n\n1. Send a `GET` request to the [`\/organizations\/:organization_name\/tags`](\/terraform\/cloud-docs\/api-docs\/organization-tags#list-tags) endpoint to request all workspaces for your organization. The response may span several pages.\n1. For each workspace, check the `tag-names` attribute for existing tags.\n1. Send a `PATCH` request to the [`\/workspaces\/:workspace_id`](\/terraform\/cloud-docs\/api-docs\/workspaces#update-a-workspace) endpoint and include the `tag-binding` relationship in the request body for each workspace tag.\n\n### Delete single-value workspace tags\n\n1. 
Send a `GET` request to the [`\/organizations\/:organization_name\/tags`](\/terraform\/cloud-docs\/api-docs\/organization-tags#list-tags) endpoint to request all tags for your organization.\n1. Enumerate the external IDs for all tags.\n1. Send a `DELETE` request to the [`\/organizations\/:organization_name\/tags`](\/terraform\/cloud-docs\/api-docs\/organization-tags#delete-tags) endpoint to delete tags.\n\n## Tag syntax\n\nThe following rules apply to tags:\n\n- Tags must be one or more characters.\n- Tags have a 255 character limit.\n- Tags can include letters, numbers, colons, hyphens, and underscores.\n- For tags stored as key-value pairs, tag values are optional.","site":"terraform"}
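The documented tag syntax rules (one or more characters, a 255-character limit, and a character set of letters, numbers, colons, hyphens, and underscores) can be expressed as a small validator. This regex is an unofficial sketch of only those stated rules; HCP Terraform's actual validation may impose additional constraints not listed here.

```python
import re

# Unofficial sketch of the documented tag rules: 1-255 characters drawn from
# letters, numbers, colons, hyphens, and underscores. HCP Terraform's real
# validation may differ (for example, extra case or placement rules).
TAG_RE = re.compile(r"[A-Za-z0-9:_-]{1,255}")

def is_valid_tag(name: str) -> bool:
    return TAG_RE.fullmatch(name) is not None

for candidate in ("cost-center", "env:prod", "bad tag!", ""):
    print(f"{candidate!r}: {is_valid_tag(candidate)}")
```

Running a check like this before calling the tag endpoints lets you reject obviously malformed names client-side instead of waiting for an API error.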
{"questions":"terraform to access state from other workspaces page title Terraform State Workspaces HCP Terraform Workspaces have their own separate state data Learn how state is used and how Each HCP Terraform workspace has its own separate state data used for runs within that workspace Terraform State in HCP Terraform","answers":"---\npage_title: Terraform State - Workspaces - HCP Terraform\ndescription: >-\n  Workspaces have their own separate state data. Learn how state is used and how\n  to access state from other workspaces.\n---\n\n# Terraform State in HCP Terraform\n\nEach HCP Terraform workspace has its own separate state data, used for runs within that workspace.\n\n-> **API:** See the [State Versions API](\/terraform\/cloud-docs\/api-docs\/state-versions).\n\n## State Usage in Terraform Runs\n\nIn [remote runs](\/terraform\/cloud-docs\/run\/remote-operations), HCP Terraform automatically configures Terraform to use the workspace's state; the Terraform configuration does not need an explicit backend configuration. (If a backend configuration is present, it will be overridden.)\n\nIn local runs (available for workspaces whose execution mode setting is set to \"local\"), you can use a workspace's state by configuring the [CLI integration](\/terraform\/cli\/cloud) and authenticating with a user token that has permission to read and write state versions for the relevant workspace. When using a Terraform configuration that references outputs from another workspace, the authentication token must also have permission to read state outputs for that workspace. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n<!-- BEGIN: TFC:only name:intermediate-state -->\n\nDuring an HCP Terraform run, Terraform incrementally creates intermediate state versions and marks them as finalized once it uploads the state content.\n\nWhen a workspace is unlocked, HCP Terraform selects the latest state and sets it as the current state version, deletes all other intermediate state versions that were saved as recovery snapshots for the duration of the lock, and discards all pending intermediate state versions that were superseded by newer state versions.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n<!-- END: TFC:only name:intermediate-state -->\n\n## State Versions\n\nIn addition to the current state, HCP Terraform retains historical state versions, which can be used to analyze infrastructure changes over time.\n\nYou can view a workspace's state versions from its **States** tab. Each state in the list indicates which run and which VCS commit (if applicable) it was associated with. Click a state in the list for more details, including a diff against the previous state and a link to the raw state file.\n\n<!-- BEGIN: TFC:only name:managed-resources -->\n## Managed Resources Count\n\n-> **Note:** A managed resources count for each organization is available in your organization's settings.\n\nYour organization\u2019s managed resource count helps you understand the number of infrastructure resources that HCP Terraform manages across all your workspaces.\n\nHCP Terraform reads all the workspaces\u2019 state files to determine the total number of managed resources. Each [resource](\/terraform\/language\/resources\/syntax) in the state equals one managed resource. HCP Terraform includes resources in modules and each resource instance created with the `count` or `for_each` meta-arguments. 
For example, `\"aws_instance\" \"servers\" { count = 10 }` creates ten separate managed resources in state. HCP Terraform does not include [data sources](\/terraform\/language\/data-sources) in the count.\n\n### Examples - Managed Resources\n\nThe following Terraform state excerpt describes a `random` resource. HCP Terraform counts `random` as one managed resource because `\"mode\": \"managed\"`.\n\n```json\n\"resources\": [\n{\n      \"mode\": \"managed\",\n      \"type\": \"random_pet\",\n      \"name\": \"random\",\n      \"provider\": \"provider[\\\"registry.terraform.io\/hashicorp\/random\\\"]\",\n      \"instances\": [\n        {\n          \"schema_version\": 0,\n          \"attributes\": {\n            \"id\": \"puma\",\n            \"keepers\": null,\n            \"length\": 1,\n            \"prefix\": null,\n            \"separator\": \"-\"\n          },\n          \"sensitive_attributes\": []\n        }\n      ]\n    }\n]\n```\n\nA single resource configuration block can describe multiple resource instances with the [`count`](\/terraform\/language\/meta-arguments\/count) or [`for_each`](\/terraform\/language\/meta-arguments\/for_each) meta-arguments. Each of these instances counts as a managed resource.\n\nThe following example shows a Terraform state excerpt with two instances of an `aws_subnet` resource. 
HCP Terraform counts each instance of `aws_subnet` as a separate managed resource.\n\n```json\n{\n      \"module\": \"module.vpc\",\n      \"mode\": \"managed\",\n      \"type\": \"aws_subnet\",\n      \"name\": \"public\",\n      \"provider\": \"provider[\\\"registry.terraform.io\/hashicorp\/aws\\\"]\",\n      \"instances\": [\n        {\n          \"index_key\": 0,\n          \"schema_version\": 1,\n          \"attributes\": {\n            \"arn\": \"arn:aws:ec2:us-east-2:561656980159:subnet\/subnet-024b05c4fba9c9733\",\n            \"assign_ipv6_address_on_creation\": false,\n            \"availability_zone\": \"us-east-2a\",\n            ##...\n            \"private_dns_hostname_type_on_launch\": \"ip-name\",\n            \"tags\": {\n              \"Name\": \"-public-us-east-2a\"\n            },\n            \"tags_all\": {\n              \"Name\": \"-public-us-east-2a\"\n            },\n            \"timeouts\": null,\n            \"vpc_id\": \"vpc-0f693f9721b61333b\"\n          },\n          \"sensitive_attributes\": [],\n          \"private\": \"eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjo2MDAwMDAwMDAwMDAsImRlbGV0ZSI6MTIwMDAwMDAwMDAwMH0sInNjaGVtYV92ZXJzaW9uIjoiMSJ9\",\n          \"dependencies\": [\n            \"data.aws_availability_zones.available\",\n            \"module.vpc.aws_vpc.this\",\n            \"module.vpc.aws_vpc_ipv4_cidr_block_association.this\"\n          ]\n        },\n        {\n          \"index_key\": 1,\n          \"schema_version\": 1,\n          \"attributes\": {\n            \"arn\": \"arn:aws:ec2:us-east-2:561656980159:subnet\/subnet-08924f16617e087b2\",\n            \"assign_ipv6_address_on_creation\": false,\n            \"availability_zone\": \"us-east-2b\",\n            ##...\n            \"private_dns_hostname_type_on_launch\": \"ip-name\",\n            \"tags\": {\n              \"Name\": \"-public-us-east-2b\"\n            },\n            \"tags_all\": {\n              \"Name\": 
\"-public-us-east-2b\"\n            },\n            \"timeouts\": null,\n            \"vpc_id\": \"vpc-0f693f9721b61333b\"\n          },\n          \"sensitive_attributes\": [],\n          \"private\": \"eyJlMmJmYjczMC1lY2FhLTExZTYtOGY4OC0zNDM2M2JjN2M0YzAiOnsiY3JlYXRlIjo2MDAwMDAwMDAwMDAsImRlbGV0ZSI6MTIwMDAwMDAwMDAwMH0sInNjaGVtYV92ZXJzaW9uIjoiMSJ9\",\n          \"dependencies\": [\n            \"data.aws_availability_zones.available\",\n            \"module.vpc.aws_vpc.this\",\n            \"module.vpc.aws_vpc_ipv4_cidr_block_association.this\"\n          ]\n        }\n      ]\n}\n```\n\n### Example - Excluded Data Source\n\nThe following Terraform state excerpt describes an `aws_availability_zones` data source. HCP Terraform does not include `aws_availability_zones` in the managed resource count because `\"mode\": \"data\"`.\n\n```json\n \"resources\": [\n    {\n      \"mode\": \"data\",\n      \"type\": \"aws_availability_zones\",\n      \"name\": \"available\",\n      \"provider\": \"provider[\\\"registry.terraform.io\/hashicorp\/aws\\\"]\",\n      \"instances\": [\n        {\n          \"schema_version\": 0,\n          \"attributes\": {\n            \"all_availability_zones\": null,\n            \"exclude_names\": null,\n            \"exclude_zone_ids\": null,\n            \"filter\": null,\n            \"group_names\": [\n              \"us-east-2\"\n            ],\n            \"id\": \"us-east-2\",\n            \"names\": [\n              \"us-east-2a\",\n              \"us-east-2b\",\n              \"us-east-2c\"\n            ],\n            \"state\": null,\n            \"zone_ids\": [\n              \"use2-az1\",\n              \"use2-az2\",\n              \"use2-az3\"\n            ]\n          },\n          \"sensitive_attributes\": []\n        }\n      ]\n     }\n   ]\n```\n<!-- END: TFC:only name:managed-resources -->\n\n## State Manipulation\n\nCertain tasks (including importing resources, tainting resources, moving or renaming existing 
resources to match a changed configuration, and more) may require modifying Terraform state outside the context of a run, depending on which version of Terraform your HCP Terraform workspace is configured to use.\n\nNewer Terraform features like [`moved` blocks](\/terraform\/language\/modules\/develop\/refactoring), [`import` blocks](\/terraform\/language\/import), and the [`replace` option](\/terraform\/cloud-docs\/run\/modes-and-options#replacing-selected-resources) allow you to accomplish these tasks using the usual plan and apply workflow. However, if the Terraform version you're using doesn't support these features, you may need to fall back to manual state manipulation.\n\nManual state manipulation in HCP Terraform workspaces, with the exception of [rolling back to a previous state version](#rolling-back-to-a-previous-state), requires the use of the Terraform CLI, using the same commands as would be used in a local workflow (`terraform import`, `terraform taint`, etc.). To manipulate state, you must configure the [CLI integration](\/terraform\/cli\/cloud) and authenticate with a user token that has permission to read and write state versions for the relevant workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n### Rolling Back to a Previous State\n\nYou can roll back to a previous, known-good state version using the HCP Terraform UI. Navigate to the state you want to roll back to and click the **Advanced** toggle button. This option requires that you have access to create new state and that you lock the workspace. It works by duplicating the state that you specify and making it the workspace's current state version. The workspace remains locked. 
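The `moved` and `import` blocks mentioned under State Manipulation can be written as ordinary configuration rather than CLI commands; a minimal sketch, where the resource addresses and instance ID are illustrative:

```hcl
# Rename a resource without destroying it: `moved` records the old address
# so a normal plan/apply updates the state entry in place (Terraform v1.1+).
moved {
  from = aws_instance.web
  to   = aws_instance.frontend
}

# Adopt an existing object into state during a normal plan/apply (Terraform v1.5+),
# instead of running `terraform import` against the workspace.
import {
  to = aws_instance.frontend
  id = "i-0abcd1234efgh5678" # illustrative instance ID
}
```

Because these are ordinary configuration changes, they flow through the standard plan and apply workflow and do not require manipulating state with the CLI.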
To undo the rollback operation, roll back to the state version that was previously the latest state.\n\n-> **Note:** You can roll back to any prior state, but you should use caution because replacing state improperly can result in orphaned or duplicated infrastructure resources. This feature is provided as a convenient alternative to manually downloading older state and using state manipulation commands in the CLI to push it to HCP Terraform.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Accessing State from Other Workspaces\n\n-> **Note:** Provider-specific [data sources](\/terraform\/language\/data-sources) are usually the most resilient way to share information between separate Terraform configurations. `terraform_remote_state` is more flexible, but we recommend using specialized data sources whenever it is convenient to do so.\n\nTerraform's built-in [`terraform_remote_state` data source](\/terraform\/language\/state\/remote-state-data) lets you share arbitrary information between configurations via root module [outputs](\/terraform\/language\/values\/outputs).\n\nHCP Terraform automatically manages API credentials for `terraform_remote_state` access during [runs managed by HCP Terraform](\/terraform\/cloud-docs\/run\/remote-operations#remote-operations). This means you do not usually need to include an API token in a `terraform_remote_state` data source's configuration.\n\n## Upgrading State\n\nYou can upgrade a workspace's state version to a new Terraform version without making any configuration changes. To upgrade, we recommend the following steps:\n\n1.  Run a [speculative plan](\/terraform\/cloud-docs\/run\/ui#testing-terraform-upgrades-with-speculative-plans) to test whether your configuration is compatible with the new Terraform version. You can run speculative plans with a Terraform version that is different than the one currently selected for the workspace.\n1.  
Select **Settings > General** and select the desired new **Terraform Version**.\n1.  Click **+ New run** and then select **Allow empty apply** as the run type. An [empty apply](\/terraform\/cloud-docs\/run\/modes-and-options#allow-empty-apply) allows Terraform to apply a plan that produces no infrastructure changes. Terraform upgrades the state file version during the apply process.\n\n-> **Note:** If the desired Terraform version is incompatible with a workspace's existing state version, the run fails and HCP Terraform prompts you to run an apply with a compatible version first. Refer to the [Terraform upgrade guides](\/terraform\/language\/upgrade-guides) for details about upgrading between versions.\n\n### Remote State Access Controls\n\nRemote state access between workspaces is subject to access controls:\n\n- Only workspaces within the same organization can access each other's state.\n- The workspace whose state is being read must be configured to allow that access. State access permissions are configured on a workspace's [general settings page](\/terraform\/cloud-docs\/workspaces\/settings). There are two ways a workspace can allow access:\n  - Globally, to all workspaces within the same organization.\n  - Selectively, to a list of specific approved workspaces.\n\nBy default, new workspaces in HCP Terraform do not allow other workspaces to access their state. We recommend that you follow the principle of least privilege and only enable state access between workspaces that specifically need information from each other.\n\n-> **Note:** The default access permissions for new workspaces in HCP Terraform changed in April 2021. Workspaces created before this change defaulted to allowing global access within their organization. These workspaces can be changed to more restrictive access at any time on their [general settings page](\/terraform\/cloud-docs\/workspaces\/settings). 
Terraform Enterprise administrators can choose whether new workspaces on their instances default to global access or selective access.\n\n### Data Source Configuration\n\nTo configure a `tfe_outputs` data source that references an HCP Terraform workspace, specify the organization and workspace in the `config` argument.\n\nYou must still properly configure the `tfe` provider with a valid authentication token and correct permissions to HCP Terraform.\n\n```hcl\ndata \"tfe_outputs\" \"vpc\" {\n  config = {\n    organization = \"example_corp\"\n    workspaces = {\n      name = \"vpc-prod\"\n    }\n  }\n}\n\nresource \"aws_instance\" \"redis_server\" {\n  # Terraform 0.12 and later: use the \"outputs.<OUTPUT NAME>\" attribute\n  subnet_id = data.tfe_outputs.vpc.outputs.subnet_id\n}\n```\n-> **Note:** Remote state access controls do not apply when using the `tfe_outputs` data source.","site":"terraform"}
{"questions":"terraform You can change a workspace s settings after creation Workspace settings are separated into several pages page title Settings Workspaces HCP Terraform settings for notifications permissions and more Workspaces organize infrastructure Find documentation about workspace Workspace Settings","answers":"---\npage_title: Settings - Workspaces - HCP Terraform\ndescription: >-\n  Workspaces organize infrastructure. Find documentation about workspace\n  settings for notifications, permissions, and more.\n---\n\n# Workspace Settings\n\nYou can change a workspace\u2019s settings after creation. Workspace settings are separated into several pages.\n\n- [General](#general): Settings that determine how the workspace functions, including its name, description, associated project, Terraform version, and execution mode.\n- [Health](\/terraform\/cloud-docs\/workspaces\/health): Settings that let you configure health assessments, including drift detection and continuous validation.\n- [Locking](#locking): Locking a workspace temporarily prevents new plans and applies.\n- [Notifications](#notifications): Settings that let you configure run notifications.\n- [Policies](#policies): Settings that let you toggle between Sentinel policy evaluation experiences.\n- [Run Triggers](#run-triggers): Settings that let you configure run triggers. Run triggers allow runs to queue automatically in your workspace when runs in other workspaces are successful.\n- [SSH Key](#ssh-key): Set a private SSH key for downloading Terraform modules from Git-based module sources.\n- [Team Access](#team-access): Settings that let you manage which teams can view the workspace and use it to provision infrastructure.\n- [Version Control](#version-control): Manage the workspace\u2019s VCS integration.\n- [Destruction and Deletion](#destruction-and-deletion): Remove a workspace and the infrastructure it manages.\n\nChanging settings requires admin access to the relevant workspace. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n-> **API:** See the [Update a Workspace endpoint](\/terraform\/cloud-docs\/api-docs\/workspaces#update-a-workspace) (`PATCH \/organizations\/:organization_name\/workspaces\/:name`).\n\n## General\n\nGeneral settings let you change a workspace's name, description, the project it belongs to, and details about how Terraform runs operate. After changing these settings, click **Save settings** at the bottom of the page.\n\n### ID\n\nEvery workspace has a unique ID that you cannot change. You may need to reference the workspace's ID when using the [HCP Terraform API](\/terraform\/cloud-docs\/api-docs).\n\nClick the icon beside the ID to copy it to your clipboard.\n\n### Name\n\nThe display name of the workspace.\n\n!> **Warning:** Some API calls refer to a workspace by its name, so changing the name may break existing integrations.\n\n### Project\n\nThe [project](\/terraform\/cloud-docs\/projects) that this workspace belongs to. Changing the workspace's project can change the read and write permissions for the workspace and which users can access it. \n\nTo move a workspace, you must have the \"Manage all Projects\" organization permission or explicit team admin privileges on both the source and destination projects. Remember that moving a workspace to another project may affect user visibility for that project's workspaces. 
Refer to [Project Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#project-permissions) for details on workspace access.\n\n### Description (Optional)\n\nEnter a brief description of the workspace's purpose or types of infrastructure.\n\n### Execution Mode\n\nWhether to use HCP Terraform as the Terraform execution platform for this workspace.\n\nBy default, HCP Terraform uses an organization's [default execution mode](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#organization-settings) to choose the execution platform for a workspace. Alternatively, you can choose a custom execution mode for a workspace.\n\nSpecifying the \"Remote\" execution mode instructs HCP Terraform to perform Terraform runs on its own disposable virtual machines. This provides a consistent and reliable run environment and enables advanced features like Sentinel policy enforcement, cost estimation, notifications, version control integration, and more.\n\nTo disable remote execution for a workspace, change its execution mode to \"Local\". This mode lets you perform Terraform runs locally with the [CLI-driven run workflow](\/terraform\/cloud-docs\/run\/cli). The workspace will store state, which Terraform can access with the [CLI integration](\/terraform\/cli\/cloud). HCP Terraform does not evaluate workspace variables or variable sets in local execution mode.\n\nIf you instead need to allow HCP Terraform to communicate with isolated, private, or on-premises infrastructure, consider using [HCP Terraform agents](\/terraform\/cloud-docs\/agents). By deploying a lightweight agent, you can establish a simple connection between your environment and HCP Terraform.\n\nChanging your workspace's execution mode after a run has already been planned will cause the run to error when it is applied.\n\nTo minimize the number of runs that error when changing your workspace's execution mode, you should:\n\n1. 
Disable [auto-apply](\/terraform\/cloud-docs\/workspaces\/settings#auto-apply) if you have it enabled.\n1. Complete any runs that are no longer in the [pending stage](\/terraform\/cloud-docs\/run\/states#the-pending-stage).\n1. [Lock](\/terraform\/cloud-docs\/workspaces\/settings#locking) your workspace to prevent any new runs.\n1. Change the execution mode.\n1. Enable [auto-apply](\/terraform\/cloud-docs\/workspaces\/settings#auto-apply), if you had it enabled before changing your execution mode.\n1. [Unlock](\/terraform\/cloud-docs\/workspaces\/settings#locking) your workspace.\n\n<a id=\"apply-method\"><\/a>\n<a id=\"auto-apply-and-manual-apply\"><\/a>\n\n### Auto-apply\n\nWhether or not HCP Terraform should automatically apply a successful Terraform plan. If you choose manual apply, an operator must confirm a successful plan and choose to apply it.\n\nThe main auto-apply setting affects runs created by the HCP Terraform user interface, API, CLI, and version control webhooks. HCP Terraform also has a separate setting for runs created by [run triggers](\/terraform\/cloud-docs\/workspaces\/settings\/run-triggers) from another workspace.\n\nAuto-apply has the following exception:\n\n- Plans queued by users without permission to apply runs for the workspace must be approved by a user who does have permission. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Terraform Version\n\nThe Terraform version to use for all operations in the workspace. The default value is whichever release was current when HCP Terraform created the workspace. 
You can also update a workspace's Terraform version to an exact version or a valid [version constraint](\/terraform\/language\/expressions\/version-constraints).\n\n> **Hands-on:** Try the [Upgrade Terraform Version in HCP Terraform](\/terraform\/tutorials\/cloud\/cloud-versions) tutorial.\n\n-> **API:** You can specify a Terraform version when you [create a workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#create-a-workspace) with the API.\n\n### Terraform Working Directory\n\nThe directory where Terraform will execute, specified as a relative path from the root of the configuration directory. Defaults to the root of the configuration directory.\n\nHCP Terraform will change to this directory before starting a Terraform run, and will report an error if the directory does not exist.\n\nSetting a working directory creates a default filter for automatic run triggering, and sometimes causes CLI-driven runs to upload additional configuration content.\n\n#### Default Run Trigger Filtering\n\nIn VCS-backed workspaces that specify a working directory, HCP Terraform assumes that only changes within that working directory should trigger a run. You can override this behavior with the [Automatic Run Triggering](\/terraform\/cloud-docs\/workspaces\/settings\/vcs#automatic-run-triggering) settings.\n\n#### Parent Directory Uploads\n\nIf a working directory is configured, HCP Terraform always expects the complete shared configuration directory to be available, since the configuration might use local modules from outside its working directory.\n\nIn [runs triggered by VCS commits](\/terraform\/cloud-docs\/run\/ui), this is automatic. 
In [CLI-driven runs](\/terraform\/cloud-docs\/run\/cli), Terraform's CLI sometimes uploads additional content:\n\n- When the local working directory _does not match_ the name of the configured working directory, Terraform assumes it is the root of the configuration directory, and uploads only the local working directory.\n- When the local working directory _matches_ the name of the configured working directory, Terraform uploads one or more parents of the local working directory, according to the depth of the configured working directory. (For example, a working directory of `production` is only one level deep, so Terraform would upload the immediate parent directory. `consul\/production` is two levels deep, so Terraform would upload the parent and grandparent directories.)\n\nIf you use the working directory setting, always run Terraform from a complete copy of the configuration directory. Moving one subdirectory to a new location can result in unexpected content uploads.\n\n### Remote State Sharing\n\nWhich other workspaces within the organization can access the state of the workspace during [runs managed by HCP Terraform](\/terraform\/cloud-docs\/run\/remote-operations#remote-operations). The [`terraform_remote_state` data source](\/terraform\/language\/state\/remote-state-data) relies on state sharing to access workspace outputs.\n\n- If \"Share state globally\" is enabled, all other workspaces within the organization can access this workspace's state during runs.\n- If global sharing is turned off, you can specify a list of workspaces within the organization that can access this workspace's state; no other workspaces will be allowed.\n\n  The workspace selector is searchable; if you don't initially see a workspace you're looking for, type part of its name.\n\nBy default, new workspaces in HCP Terraform do not allow other workspaces to access their state. 
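The `terraform_remote_state` data source described above reads another workspace's root module outputs once that workspace permits access; a minimal sketch of the consuming configuration, with illustrative organization, workspace, and output names:

```hcl
data "terraform_remote_state" "vpc" {
  backend = "remote"

  config = {
    organization = "example-corp"   # illustrative organization name
    workspaces = {
      name = "vpc-prod"             # the workspace sharing its state
    }
  }
}

# Root module outputs of the source workspace are exposed under `outputs`:
resource "aws_instance" "app" {
  subnet_id = data.terraform_remote_state.vpc.outputs.subnet_id
}
```

If the source workspace has not enabled global sharing or added the reading workspace to its approved list, the data source read fails during the run.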
We recommend that you follow the principle of least privilege and only enable state access between workspaces that specifically need information from each other. To configure remote state sharing, a user must have read access for the destination workspace. If a user does not have access to the destination workspace due to scoped project or workspace permissions, they will not have complete visibility into the list of other workspaces that can access its state.\n\n-> **Note:** The default access permissions for new workspaces in HCP Terraform changed in April 2021. Workspaces created before this change default to allowing global access within their organization. These workspaces can be changed to more restrictive access at any time. Terraform Enterprise administrators can choose whether new workspaces on their instances default to global access or selective access.\n\n### User Interface\n\nSelect the user experience for displaying plan and apply details.\n\nThe default experience is _Structured Run Output_, which displays your plan and apply results in a human-readable format. This includes nodes that you can expand to view details about each resource and any configured output.\n\nThe Console UI experience is the traditional Terraform experience, where live text logging is streamed in real time to the UI. This experience most closely emulates the CLI output.\n\n~> **Note:** Your workspace must be configured to use a Terraform version of 1.0.5 or higher for the Structured Run Output experience to be fully supported. Workspaces running versions from 0.15.2 may see partial functionality. Workspaces running versions below 0.15.2 will default to the \"Console UI\" experience regardless of the User Interface setting.\n\n## Locking\n\n~> **Important:** Unlike other settings, locks can also be managed by users with permission to lock and unlock the workspace. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nIf you need to prevent Terraform runs for any reason, you can lock a workspace. This prevents all applies (and many kinds of plans) from proceeding, and affects runs created via UI, CLI, API, and automated systems. To enable runs again, a user must unlock the workspace.\n\nTwo kinds of run operations can ignore workspace locking because they cannot affect resources or state and do not attempt to lock the workspace themselves:\n\n- Plan-only runs.\n- The planning stages of [saved plan runs](\/terraform\/cloud-docs\/run\/modes-and-options#saved-plans). You can only _apply_ a saved plan if the workspace is unlocked, and applying that plan locks the workspace as usual. Terraform Enterprise does not yet support this workflow.\n\nLocking a workspace also restricts state uploads. In order to upload state, the workspace must be locked by the user who is uploading state.\n\nUsers with permission to lock and unlock a workspace can't unlock a workspace that was locked by another user. Users with admin access to a workspace can force unlock a workspace even if another user has locked it.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nLocks are managed with a single \"Lock\/Unlock\/Force unlock `<WORKSPACE NAME>`\" button. HCP Terraform asks for confirmation when unlocking.\n\nYou can also manage the workspace's lock from the **Actions** menu.\n\n## Notifications\n\nThe \"Notifications\" page allows HCP Terraform to send webhooks to external services whenever specific run events occur in a workspace.\n\nSee [Run Notifications](\/terraform\/cloud-docs\/workspaces\/settings\/notifications) for detailed information about configuring notifications.\n\n## Policies\n\nHCP Terraform offers two experiences for Sentinel policy evaluations. 
On the \"Policies\" page, you can adjust your **Sentinel Experience** settings to your preferred experience. By default, HCP Terraform enables the newest policy evaluation experience.\n\nTo toggle between the two Sentinel policy evaluation experiences, click the **Enable the new Sentinel policy experience** toggle under the **Sentinel Experience** heading. HCP Terraform persists your changes automatically. If HCP Terraform is performing a run on a different page, you must refresh that page to see changes to your policy evaluation experience.\n\n## Run Triggers\n\nThe \"Run Triggers\" page configures connections between a workspace and one or more source workspaces. These connections, called \"run triggers\", allow runs to queue automatically in a workspace on successful apply of runs in any of the source workspaces.\n\nSee [Run Triggers](\/terraform\/cloud-docs\/workspaces\/settings\/run-triggers) for detailed information about configuring run triggers.\n\n## SSH Key\n\nIf a workspace's configuration uses [Git-based module sources](\/terraform\/language\/modules\/sources) to reference Terraform modules in private Git repositories, Terraform needs an SSH key to clone those repositories. The \"SSH Key\" page lets you choose which key it should use.\n\nSee [Using SSH Keys for Cloning Modules](\/terraform\/cloud-docs\/workspaces\/settings\/ssh-keys) for detailed information about this page.\n\n## Team Access\n\nThe \"Team Access\" page configures which teams can perform which actions on a workspace.\n\nSee [Managing Access to Workspaces](\/terraform\/cloud-docs\/workspaces\/settings\/access) for detailed information.\n\n## Version Control\n\nThe \"Version Control\" page configures an optional VCS repository that contains the workspace's Terraform configuration. 
Version control integration is only relevant for workspaces with [remote execution](#execution-mode) enabled.\n\nSee [VCS Connections](\/terraform\/cloud-docs\/workspaces\/settings\/vcs) for detailed information about this page.\n\n## Destruction and Deletion\n\nThe **Destruction and Deletion** page allows [admin users](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) to delete a workspace's managed infrastructure or delete the workspace itself.\n\nRefer to [Destruction and Deletion](\/terraform\/cloud-docs\/workspaces\/settings\/deletion) for detailed information about this page.\n","site":"terraform"}
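The API note in the Terraform Version section above says a Terraform version can be specified when creating a workspace. The following is a minimal sketch of that create-workspace request, assuming a hypothetical organization and workspace name; `terraform-version` accepts an exact version or a constraint string, and the request is printed rather than sent:

```shell
# Hedged sketch: create a workspace pinned to a Terraform version constraint.
# The organization, workspace name, and version are placeholders.
ORG="example-org"
URL="https://app.terraform.io/api/v2/organizations/${ORG}/workspaces"
PAYLOAD='{"data":{"type":"workspaces","attributes":{"name":"networking-prod","terraform-version":"~> 1.5.0"}}}'

# Print the composed request so the sketch runs without credentials.
# With a real token you would send it with:
#   curl -sS -X POST -H "Authorization: Bearer $TOKEN" \
#        -H "Content-Type: application/vnd.api+json" -d "$PAYLOAD" "$URL"
echo "POST ${URL}"
echo "payload: ${PAYLOAD}"
```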
{"questions":"terraform page title Destruction and Deletion Workspaces HCP Terraform HCP Terraform workspaces have two primary delete actions Learn about destroying infrastructure and deleting workspaces in HCP Terraform Destruction and Deletion","answers":"---\npage_title: Destruction and Deletion - Workspaces - HCP Terraform\ndescription: |-\n  Learn about destroying infrastructure and deleting workspaces in HCP Terraform.\n---\n\n# Destruction and Deletion\n\nHCP Terraform workspaces have two primary delete actions:\n\n- [Destroying infrastructure](#destroy-infrastructure) deletes resources managed by the HCP Terraform workspace by triggering a destroy run.\n- [Deleting a workspace](#delete-workspaces) deletes the workspace itself without triggering a destroy run.\n\nIn general, you should perform both actions in the above order when destroying a workspace to ensure resource cleanup for all of a workspace's managed infrastructure.\n\n## Destroy Infrastructure\n\nDestroy plans delete the infrastructure managed by a workspace. We recommend destroying the infrastructure managed by a workspace _before_ deleting the workspace itself. 
Otherwise, the infrastructure resources will continue to exist but become unmanaged, and you must go into your infrastructure providers to delete the resources manually.\n\nBefore queuing a destroy plan, enable the **Allow destroy plans** toggle setting on this page.\n\n### Automatically Destroy\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/ephemeral-workspaces.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nConfiguring automatic infrastructure destruction for a workspace requires [admin permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#workspace-admins) for that workspace.\n\nThere are two main ways to automatically destroy a workspace's resources:\n* Schedule a run to destroy all resources in a workspace at a specific date and time.\n* Configure HCP Terraform to destroy a workspace's infrastructure after a period of workspace inactivity.\n\nWorkspaces can inherit auto-destroy settings from their project. Refer to [managing projects](\/terraform\/cloud-docs\/projects\/manage#automatically-destroy-inactive-workspaces) for more information. You can configure an individual workspace's auto-destroy settings to override the project's configuration.\n\nYou can reduce your spending on infrastructure by automatically destroying temporary resources like development environments.\n\nAfter HCP Terraform performs an auto-destroy run, it unsets the `auto-destroy-at` field on the workspace. If you continue using the workspace, you can schedule another future auto-destroy run to remove any new resources.\n\n!> **Note:** Automatic destroy plans _do not_ prompt you for apply approval in the HCP Terraform user interface. 
We recommend only using this setting for development environments.\n\nYou can schedule an auto-destroy run using the HCP Terraform web user interface, or the [workspace API](\/terraform\/cloud-docs\/api-docs\/workspaces).\n\nYou can also schedule [notifications](\/terraform\/cloud-docs\/workspaces\/settings\/notifications) to alert you 12 and 24 hours before an auto-destroy run, and to report auto-destroy run results.\n\n#### Destroy at a specific day and time\n\nTo schedule an auto-destroy run at a specific time in HCP Terraform:\n1. Navigate to the workspace's **Settings** > **Destruction and Deletion** page.\n1. Under **Automatically destroy**, click **Set up auto-destroy**.\n1. Enter the desired date and time. HCP Terraform defaults to your local time zone for scheduling and displays how long until the scheduled operation.\n1. Click **Confirm auto-destroy**.\n\nTo cancel a scheduled auto-destroy run in HCP Terraform:\n1. Navigate to the workspace's **Settings** > **Destruction and Deletion** page.\n1. Under **Automatically destroy**, click **Edit** next to your scheduled run's details.\n1. Click **Remove**.\n\n#### Destroy if a workspace is inactive\n\nYou can configure HCP Terraform to automatically destroy a workspace's infrastructure after a period of inactivity.\nA workspace is _inactive_ if the workspace's state has not changed within your designated time period.\n\n!> **Caution:** As opposed to configuring an auto-destroy run for a specific date and time, this setting _persists_ after queueing auto-destroy runs.\n\nIf you configure a workspace to auto-destroy its infrastructure when inactive, any run that updates Terraform state further delays the scheduled auto-destroy time by the length of your designated timeframe.\n\nTo schedule an auto-destroy run after a period of workspace inactivity:\n1. Navigate to the workspace's **Settings** > **Destruction and Deletion** page.\n1. Under **Automatically destroy**, click **Set up auto-destroy**.\n1. 
Click the **Destroy if inactive** toggle.\n1. Select or customize a desired timeframe of inactivity.\n1. Click **Confirm auto-destroy**.\n\nWhen configured for the first time, the auto-destroy duration setting displays the scheduled date and time that HCP Terraform will perform the auto-destroy run.\nSubsequent auto-destroy runs and Terraform runs that update state both update the next scheduled auto-destroy date.\n\nAfter HCP Terraform completes a manual or automatic destroy run, it waits until further state updates to schedule a new auto-destroy run.\n\nTo remove your workspace's auto-destroy based on inactivity:\n1. Navigate to the workspace's **Settings** > **Destruction and Deletion** page.\n1. Under **Auto-destroy settings**, click **Edit** to change the auto-destroy settings.\n1. Click **Remove**.\n\n## Delete Workspace\n\nTerraform does not automatically destroy managed infrastructure when you delete a workspace.\n\nAfter you delete the workspace and its state file, Terraform can _no longer track or manage_ that infrastructure. You must manually delete or [import](\/terraform\/cli\/commands\/import) any remaining resources into another Terraform workspace.\n\nBy default, [workspace administrators](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#workspace-admins) can only delete unlocked workspaces that are not managing any infrastructure. Organization owners can force delete a workspace to override these protections. Organization owners can also configure the [organization's settings](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#general) to let workspace administrators force delete their own workspaces.\n\n## Data Retention Policies\n\n<EnterpriseAlert>\nData retention policies are exclusive to Terraform Enterprise, and not available in HCP Terraform. 
<a href=\"https:\/\/developer.hashicorp.com\/terraform\/enterprise\">Learn more about Terraform Enterprise<\/a>.\n<\/EnterpriseAlert>\n\nDefine configurable data retention policies for workspaces to help reduce object storage consumption. You can define a policy that allows Terraform to _soft delete_ the backing data associated with configuration versions and state versions. Soft deleting refers to marking a data object for garbage collection so that Terraform can automatically delete the object after a set number of days.\n\nOnce an object is soft deleted, any attempts to read the object will fail. Until the garbage collection grace period elapses, you can still restore an object using the APIs described in the [configuration version documentation](\/terraform\/enterprise\/api-docs\/configuration-versions) and [state version documentation](\/terraform\/enterprise\/api-docs\/state-versions). After the garbage collection grace period elapses, Terraform permanently deletes the archivist storage.\n\nThe [organization policy](\/terraform\/enterprise\/users-teams-organizations\/organizations#destruction-and-deletion) is the default policy applied to workspaces, but members of individual workspaces can override the policy for their workspaces.\n\nThe workspace policy always overrides the organization policy. 
A workspace admin can set or override the following data retention policies:\n\n- **Organization default policy**\n- **Do not auto-delete**\n- **Auto-delete data**\n\nSetting the data retention policy to **Organization default policy** disables the other data retention policy settings.","site":"terraform"}
default   workspace administrators   terraform cloud docs users teams organizations permissions workspace admins  can only delete unlocked workspaces that are not managing any infrastructure  Organization owners can force delete a workspace to override these protections  Organization owners can also configure the  organization s settings   terraform cloud docs users teams organizations organizations general  to let workspace administrators force delete their own workspaces      Data Retention Policies   EnterpriseAlert  Data retention policies are exclusive to Terraform Enterprise  and not available in HCP Terraform   a href  https   developer hashicorp com terraform enterprise  Learn more about Terraform Enterprise  a     EnterpriseAlert   Define configurable data retention policies for workspaces to help reduce object storage consumption  You can define a policy that allows Terraform to  soft delete  the backing data associated with configuration versions and state versions  Soft deleting refers to marking a data object for garbage collection so that Terraform can automatically delete the object after a set number of days   Once an object is soft deleted  any attempts to read the object will fail  Until the garbage collection grace period elapses  you can still restore an object using the APIs described in the  configuration version documentation   terraform enterprise api docs configuration versions  and  state version documentation   terraform enterprise api docs state versions   After the garbage collection grace period elapses  Terraform permanently deletes the archivist storage   The  organization policy   terraform enterprise users teams organizations organizations destruction and deletion  is the default policy applied to workspaces  but members of individual workspaces can override the policy for their workspaces   The workspace policy always overrides the organization policy  A workspace admin can set or override the following data retention policies     
  Organization default policy       Do not auto delete       Auto delete data    Setting the data retention policy to   Organization default policy   disables the other data retention policy settings "}
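The record above notes that an auto-destroy run can be scheduled through the workspace API as well as the web UI. As a hedged sketch only (the `auto-destroy-at` attribute name and the ISO 8601 timestamp format are assumptions drawn from the Workspaces API, not stated in the text above; the date is illustrative), the body of a `PATCH /api/v2/workspaces/:workspace_id` request to schedule a destroy might look like:

```json
{
  "data": {
    "type": "workspaces",
    "attributes": {
      "auto-destroy-at": "2024-12-31T12:00:00.000Z"
    }
  }
}
```

Per the behavior described above, HCP Terraform unsets this field after it performs the auto-destroy run, so a new timestamp must be set to schedule another one.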
{"questions":"terraform page title Notifications Workspaces HCP Terraform HCP Terraform can use webhooks to notify external systems about run progress and other events Each workspace has its own notification settings and can notify up to 20 destinations Notifications Learn how to use webhooks to notify external systems about run progress and other events Create and enable workspace notifications","answers":"---\npage_title: Notifications - Workspaces - HCP Terraform\ndescription: >-\n  Learn how to use webhooks to notify external systems about run progress and other events. Create and enable workspace notifications.\n---\n\n# Notifications\n\nHCP Terraform can use webhooks to notify external systems about run progress and other events. Each workspace has its own notification settings and can notify up to 20 destinations.\n\n-> **Note:** [Speculative plans](\/terraform\/cloud-docs\/run\/modes-and-options#plan-only-speculative-plan) and workspaces configured with `Local` [execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) do not support notifications.\n\nConfiguring notifications requires admin access to the workspace. Refer to [Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) for details.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n-> **API:** Refer to [Notification Configuration APIs](\/terraform\/cloud-docs\/api-docs\/notification-configurations).\n\n## Viewing and Managing Notification Settings\n\nTo add, edit, or delete notifications for a workspace, go to the workspace and click **Settings > Notifications**. The **Notifications** page appears, showing existing notification configurations.\n\n## Creating a Notification Configuration\n\nA notification configuration specifies a destination URL, a payload type, and the events that should generate a notification. To create a notification configuration:\n\n1.  Click **Settings > Notifications**. 
The **Notifications** page appears.\n\n2.  Click **Create a Notification**. The **Create a Notification** form appears.\n\n3.  Configure the notifications:\n\n- **Destination:** HCP Terraform can deliver either a generic payload or a payload formatted specifically for Slack, Microsoft Teams, or Email. Refer to [Notification Payloads](#notification-payloads) for details.\n- **Name:** A display name for this notification configuration.\n- **Webhook URL** This URL is only available for generic, Slack, and Microsoft Teams webhooks. The webhook URL is the destination for the webhook payload. This URL must accept HTTP or HTTPS `POST` requests and should be able to use the chosen payload type. For details, refer to Slack's documentation on [creating an incoming webhook](https:\/\/api.slack.com\/messaging\/webhooks#create_a_webhook) and Microsoft's documentation on [creating a workflow from a channel in teams](https:\/\/support.microsoft.com\/en-us\/office\/creating-a-workflow-from-a-channel-in-teams-242eb8f2-f328-45be-b81f-9817b51a5f0e).\n- **Token** (Optional) This notification is only available for generic webhooks. A token is an arbitrary secret string that HCP Terraform will use to sign its notification webhooks. Refer to [Notification Authenticity][inpage-hmac] for details. You cannot view the token after you save the notification configuration.\n- **Email Recipients** This notification is only available for emails. Select users that should receive notifications.\n\n- **Workspace Events**: HCP Terraform can send notifications for all events or only for specific events. The following events are available:\n\n  - **Drift**: HCP Terraform detected configuration drift. This notification is only available if you enable [health assessments](\/terraform\/cloud-docs\/workspaces\/health) for the workspace.\n  - **Check Failure:** HCP Terraform detected one or more failed continuous validation checks. 
This notification is only available if you enable health assessments for the workspace.\n  - **Health Assessment Fail**: A health assessment failed. This notification is only available if you enable health assessments for the workspace. Health assessments fail when HCP Terraform cannot perform drift detection, continuous validation, or both. The notification does not specify the cause of the failure, but you can use the [Assessment Result](\/terraform\/cloud-docs\/api-docs\/assessment-results) logs to help diagnose the issue.\n  - **Auto destroy reminder**: Sends reminders 12 and 24 hours before a scheduled auto destroy run.\n  - **Auto destroy results**: HCP Terraform performed an auto destroy run in the workspace. Reports both successful and errored runs.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/health-assessments.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n- **Run Events:** HCP Terraform can send notifications for all events or only for specific events. The following events are available:\n  - **Created**: A run begins and enters the [Pending stage](\/terraform\/enterprise\/run\/states#the-pending-stage).\n  - **Planning**: A run acquires the lock and starts to execute.\n  - **Needs Attention**: A plan has changes and Terraform requires user input to continue. This event may include approving the plan or a [policy override](\/terraform\/enterprise\/run\/states#the-policy-check-stage).\n  - **Applying**: A run enters the [Apply stage](\/terraform\/enterprise\/run\/states#the-apply-stage), where Terraform makes the infrastructure changes described in the plan.\n  - **Completed**: A run completed successfully.\n  - **Errored**: A run terminated early due to error or cancellation.\n\n4.  Click **Create a notification**.\n\n\n## Enabling and Verifying a Configuration\n\nTo enable or disable a configuration, toggle the **Enabled\/Disabled** switch on its detail page. 
HCP Terraform will attempt to verify the configuration for generic and Slack webhooks by sending a test message, and will enable the notification configuration if the test succeeds.\n\nFor a verification to be successful, the destination must respond with a `2xx` HTTP code. If verification fails, HCP Terraform displays the error message and the configuration will remain disabled.\n\nFor both successful and unsuccessful verifications, click the **Last Response** box to view more information about the verification results. You can also send additional test messages with the **Send a Test** link.\n\n## Notification Payloads\n\n### Slack\n\nNotifications to Slack will contain the following information:\n\n- The run's workspace (as a link)\n- The HCP Terraform username and avatar of the person that created the run\n- The run ID (as a link)\n- The reason the run was queued (usually a commit message or a custom message)\n- The time the run was created\n- The event that triggered the notification and the time that event occurred\n\n### Microsoft Teams\n\nNotifications to Microsoft Teams contain the following information:\n\n- The run's workspace (as a link)\n- The HCP Terraform username and avatar of the person that created the run\n- The run ID\n- A link to view the run\n- The reason the run was queued (usually a commit message or a custom message)\n- The time the run was created\n- The event that triggered the notification and the time that event occurred\n\n### Email\n\nEmail notifications will contain the following information:\n\n- The run's workspace (as a link)\n- The run ID (as a link)\n- The event that triggered the notification, and whether the run needs to be acted upon\n\n### Generic\n\nA generic notification will contain information about a run and its state at the time the triggering event occurred. 
The complete generic notification payload is described in the [API documentation][generic-payload].\n\n[generic-payload]: \/terraform\/cloud-docs\/api-docs\/notification-configurations#notification-payload\n\nSome of the values in the payload can be used to retrieve additional information through the API, such as:\n\n- The [run ID](\/terraform\/cloud-docs\/api-docs\/run#get-run-details)\n- The [workspace ID](\/terraform\/cloud-docs\/api-docs\/workspaces#list-workspaces)\n- The [organization name](\/terraform\/cloud-docs\/api-docs\/organizations#show-an-organization)\n\n## Notification Authenticity\n\n[inpage-hmac]: #notification-authenticity\n\nSlack notifications use Slack's own protocols for verifying HCP Terraform's webhook requests.\n\nGeneric notifications can include a signature for verifying the request. For notification configurations that include a secret token, HCP Terraform's webhook requests will include an `X-TFE-Notification-Signature` header, which contains an HMAC signature computed from the token using the SHA-512 digest algorithm. The receiving service is responsible for validating the signature. 
More information, as well as an example of how to validate the signature, can be found in the [API documentation](\/terraform\/cloud-docs\/api-docs\/notification-configurations#notification-authenticity).","site":"terraform","answers_cleaned":"    page title  Notifications   Workspaces   HCP Terraform description       Learn how to use webhooks to notify external systems about run progress and other events  Create and enable workspace notifications         Notifications  HCP Terraform can use webhooks to notify external systems about run progress and other events  Each workspace has its own notification settings and can notify up to 20 destinations        Note     Speculative plans   terraform cloud docs run modes and options plan only speculative plan  and workspaces configured with  Local   execution mode   terraform cloud docs workspaces settings execution mode  do not support notifications   Configuring notifications requires admin access to the workspace  Refer to  Permissions   terraform cloud docs users teams organizations permissions  for details    permissions citation    intentionally unused   keep for maintainers       API    Refer to  Notification Configuration APIs   terraform cloud docs api docs notification configurations       Viewing and Managing Notification Settings  To add  edit  or delete notifications for a workspace  go to the workspace and click   Settings   Notifications    The   Notifications   page appears  showing existing notification configurations      Creating a Notification Configuration  A notification configuration specifies a destination URL  a payload type  and the events that should generate a notification  To create a notification configuration   1   Click   Settings   Notifications    The   Notifications   page appears   2   Click   Create a Notification    The   Create a Notification   form appears   3   Configure the notifications       Destination    HCP Terraform can deliver either a generic payload or a payload formatted 
specifically for Slack  Microsoft Teams  or Email  Refer to  Notification Payloads   notification payloads  for details      Name    A display name for this notification configuration      Webhook URL   This URL is only available for generic  Slack  and Microsoft Teams webhooks  The webhook URL is the destination for the webhook payload  This URL must accept HTTP or HTTPS  POST  requests and should be able to use the chosen payload type  For details  refer to Slack s documentation on  creating an incoming webhook  https   api slack com messaging webhooks create a webhook  and Microsoft s documentation on  creating a workflow from a channel in teams  https   support microsoft com en us office creating a workflow from a channel in teams 242eb8f2 f328 45be b81f 9817b51a5f0e       Token    Optional  This notification is only available for generic webhooks  A token is an arbitrary secret string that HCP Terraform will use to sign its notification webhooks  Refer to  Notification Authenticity  inpage hmac  for details  You cannot view the token after you save the notification configuration      Email Recipients   This notification is only available for emails  Select users that should receive notifications       Workspace Events    HCP Terraform can send notifications for all events or only for specific events  The following events are available         Drift    HCP Terraform detected configuration drift  This notification is only available if you enable  health assessments   terraform cloud docs workspaces health  for the workspace        Check Failure    HCP Terraform detected one or more failed continuous validation checks  This notification is only available if you enable health assessments for the workspace        Health Assessment Fail    A health assessment failed  This notification is only available if you enable health assessments for the workspace  Health assessments fail when HCP Terraform cannot perform drift detection  continuous validation  or both  The 
notification does not specify the cause of the failure  but you can use the  Assessment Result   terraform cloud docs api docs assessment results  logs to help diagnose the issue        Auto destroy reminder    Sends reminders 12 and 24 hours before a scheduled auto destroy run        Auto destroy results    HCP Terraform performed an auto destroy run in the workspace  Reports both successful and errored runs        BEGIN  TFC only name pnp callout      include  tfc package callouts health assessments mdx       END  TFC only name pnp callout          Run Events    HCP Terraform can send notifications for all events or only for specific events  The following events are available        Created    A run begins and enters the  Pending stage   terraform enterprise run states the pending stage         Planning    A run acquires the lock and starts to execute        Needs Attention    A plan has changes and Terraform requires user input to continue  This event may include approving the plan or a  policy override   terraform enterprise run states the policy check stage         Applying    A run enters the  Apply stage   terraform enterprise run states the apply stage   where Terraform makes the infrastructure changes described in the plan        Completed    A run completed successfully        Errored    A run terminated early due to error or cancellation   4   Click   Create a notification         Enabling and Verifying a Configuration  To enable or disable a configuration  toggle the   Enabled Disabled   switch on its detail page  HCP Terraform will attempt to verify the configuration for generic and slack webhooks by sending a test message  and will enable the notification configuration if the test succeeds   For a verification to be successful  the destination must respond with a  2xx  HTTP code  If verification fails  HCP Terraform displays the error message and the configuration will remain disabled   For both successful and unsuccessful verifications  click the   
Last Response   box to view more information about the verification results  You can also send additional test messages with the   Send a Test   link      Notification Payloads      Slack  Notifications to Slack will contain the following information     The run s workspace  as a link    The HCP Terraform username and avatar of the person that created the run   The run ID  as a link    The reason the run was queued  usually a commit message or a custom message    The time the run was created   The event that triggered the notification and the time that event occurred      Microsoft Teams  Notifications to Microsoft Teams contain the following information     The run s workspace  as a link    The HCP Terraform username and avatar of the person that created the run   The run ID   A link to view the run   The reason the run was queued  usually a commit message or a custom message    The time the run was created   The event that triggered the notification and the time that event occurred      Email  Email notifications will contain the following information     The run s workspace  as a link    The run ID  as a link    The event that triggered the notification  and if the run needs to be acted upon or not      Generic  A generic notification will contain information about a run and its state at the time the triggering event occurred  The complete generic notification payload is described in the  API documentation  generic payload     generic payload    terraform cloud docs api docs notification configurations notification payload  Some of the values in the payload can be used to retrieve additional information through the API  such as     The  run ID   terraform cloud docs api docs run get run details    The  workspace ID   terraform cloud docs api docs workspaces list workspaces    The  organization name   terraform cloud docs api docs organizations show an organization      Notification Authenticity   inpage hmac    notification authenticity  Slack notifications use 
Slack s own protocols for verifying HCP Terraform s webhook requests   Generic notifications can include a signature for verifying the request  For notification configurations that include a secret token  HCP Terraform s webhook requests will include an  X TFE Notification Signature  header  which contains an HMAC signature computed from the token using the SHA 512 digest algorithm  The receiving service is responsible for validating the signature  More information  as well as an example of how to validate the signature  can be found in the  API documentation   terraform cloud docs api docs notification configurations notification authenticity  "}
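The `X-TFE-Notification-Signature` check described in the record above can be sketched in shell with `openssl`. This is a minimal sketch under stated assumptions: the token value and request body here are stand-ins, and extracting the header from the actual HTTP request is out of scope, so the received signature is faked for the demo.

```shell
#!/bin/sh
# Recompute the HMAC-SHA512 signature of the webhook body using the shared
# token, then compare it against the X-TFE-Notification-Signature header value.
TOKEN="example-secret-token"                 # token saved with the notification configuration (illustrative)
printf '{"payload_version":1}' > body.json   # stand-in for the raw webhook request body

# openssl labels its output (e.g. "SHA2-512(stdin)= <hex>"); keep only the hex digest.
expected=$(openssl dgst -sha512 -hmac "$TOKEN" < body.json | awk '{print $NF}')

# In a real receiver this would come from the X-TFE-Notification-Signature header;
# here we pretend the header matched, for demonstration only.
received="$expected"

if [ "$received" = "$expected" ]; then
  echo "signature valid"
else
  echo "signature mismatch" >&2
fi
```

`awk '{print $NF}'` keeps the digest regardless of the label openssl prints; a SHA-512 HMAC is always 64 bytes, i.e. 128 hex characters.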
{"questions":"terraform Learn how to use the web UI to connect a workspace to a version control system repository that contains Terraform configuration You can connect any HCP Terraform workspace terraform cloud docs workspaces to a version control system VCS repository that contains a Terraform configuration This page explains the workspace VCS connection settings in the HCP Terraform UI page title VCS Connections Workspaces HCP Terraform Configuring Workspace VCS Connections","answers":"---\npage_title: VCS Connections - Workspaces - HCP Terraform\ndescription: >-\n  Learn how to use the web UI to connect a workspace to a version control system repository that contains Terraform configuration.\n---\n\n# Configuring Workspace VCS Connections\n\nYou can connect any HCP Terraform [workspace](\/terraform\/cloud-docs\/workspaces) to a version control system (VCS) repository that contains a Terraform configuration. This page explains the workspace VCS connection settings in the HCP Terraform UI.\n\nRefer to [Terraform Configurations in HCP Terraform Workspaces](\/terraform\/cloud-docs\/workspaces\/configurations) for details on handling configuration versions and connected repositories. Refer to [Connecting VCS Providers](\/terraform\/cloud-docs\/vcs) for a list of supported VCS providers and details about configuring VCS access, viewing VCS events, etc.\n\n## API\n\nYou can use the [Update a Workspace endpoint](\/terraform\/cloud-docs\/api-docs\/workspaces#update-a-workspace) in the Workspaces API to change one or more VCS settings. We also recommend using this endpoint to automate changing VCS connections for many workspaces at once. For example, when you move a VCS server or remove a deprecated API version.\n\n## Version Control Settings\n\nTo change a workspace's VCS settings:\n\n1. Go to the workspace and click **Settings > Version Control**. The **Version Control** page appears.\n1. 
Choose the desired settings and click **Update VCS settings**.\n\nYou can update the following types of VCS settings for the workspace.\n\n### VCS Connection\n\nYou can take one of the following actions:\n\n- To add a new VCS connection, click **Connect to version control**. Select **Version control workflow** and follow the steps to [select a VCS provider and repository](\/terraform\/cloud-docs\/workspaces\/create#create-a-workspace).\n- To edit an existing VCS connection, click **Change source**. Choose the **Version control workflow** and follow the steps to [select VCS provider and repository](\/terraform\/cloud-docs\/workspaces\/create#create-a-workspace).\n- To remove the VCS connection, click **Change source**. Select either the **CLI-driven workflow** or the **API-driven workflow**, and click **Update VCS settings**. The workspace is no longer connected to VCS.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Terraform Working Directory\n\nSpecify the directory where Terraform will execute runs. This defaults to the root directory in your repository, but you may want to specify another directory if you have directories for multiple different Terraform configurations within the same repository. For example, if you had one `staging` directory and one `production` directory.\n\nA working directory is required when you use [trigger prefixes](#automatic-run-triggering).\n\n### Apply Method\n\nChoose a workflow for Terraform runs.\n\n- **Auto apply:** Terraform will apply changes from successful plans without prompting for approval. A push to the default branch of your repository will trigger a plan and apply cycle. You may want to do this in non-interactive environments, like continuous deployment workflows.\n\n  !> **Warning:** If you choose auto apply, make sure that no one can change your infrastructure outside of your automated build pipeline. 
This reduces the risk of configuration drift and unexpected changes.\n\n- **Manual apply:** Terraform will ask for approval before applying changes from a successful plan. A push to the default branch of your repository will trigger a plan, and then Terraform will wait for confirmation.\n\n### Automatic Run Triggering\n\nHCP Terraform uses your VCS provider's API to retrieve the changed files in your repository. You can choose one of the following options to specify which changes trigger Terraform runs.\n\n#### Always trigger runs\n\nThis option instructs Terraform to begin a run when changes are pushed to any file within the repository. This can be useful for repositories that do not have multiple configurations but require a working directory for some other reason. However, we do not recommend this approach for true monorepos, as it queues unnecessary runs and slows down your ability to provision infrastructure.\n\n#### Only trigger runs when files in specified paths change\n\nThis option instructs Terraform to begin new runs only for changes that affect specified files and directories. This behavior also applies to [speculative plans](\/terraform\/cloud-docs\/run\/remote-operations#speculative-plans) on pull requests.\n\nYou can use trigger patterns and trigger prefixes in the **Add path** field to specify groups of files and directories.\n- **Trigger Patterns:** (Recommended) Use glob patterns to specify the files that should trigger a new run. For example, `\/submodule\/**\/*.tf`, specifies all files with the `.tf` extension that are nested below the `submodule` directory. You can also use more complex patterns like `\/**\/networking\/**\/*`, which specifies all files that have a `networking` folder in their file path. (e.g., `\/submodule\/service-1\/networking\/private\/main.tf`). 
Refer to [Glob Patterns for Automatic Run Triggering](#glob-patterns-for-automatic-run-triggering) for details.\n- **Trigger Prefixes:** HCP Terraform will queue runs for changes in any of the specified trigger directories matching the provided prefixes (including the working directory). For example, if you use a top-level `modules` directory to share Terraform code across multiple configurations, changes to the shared modules are relevant to every workspace that uses that repository. You can add `modules` as a trigger directory for each workspace to track changes to shared code.\n\n-> **Note:** HCP Terraform triggers runs on all attached workspaces if it does not receive a list of changed files or if that list is too large to process. When this happens, HCP Terraform may show several runs with completed plans that do not result in infrastructure changes.\n\n#### Trigger runs when a git tag is published\n\nThis option instructs Terraform to begin new runs only for changes that have a specific tag format.\n\nThe tag format can be chosen between the following options:\n- **Semantic Versioning:** It matches tags in the popular [SemVer format](https:\/\/semver.org\/). For example, `0.4.2`.\n- **Version contains a prefix:** It matches tags which have an additional prefix before the [SemVer format](https:\/\/semver.org\/). For example, `version-0.4.2`.\n- **Version contains a suffix:** It matches tags which have an additional suffix after the [SemVer format](https:\/\/semver.org\/). For example `0.4.2-alpha`.\n- **Custom Regular Expression:** You can define your own regex for HCP Terraform to match against tags.\n\nYou must include an additional `\\` to escape the regex pattern when you manage your workspace with the [hashicorp\/tfe provider](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs\/resources\/workspace#tags_regex) and trigger runs through matching git tags. 
Refer to [Terraform escape sequences](\/terraform\/language\/expressions\/strings#escape-sequences) for more details.\n\n| Tag Format                    | Regex Pattern             | Regex Pattern (Escaped)  |\n|-------------------------------|---------------------------|--------------------------|\n| **Semantic Versioning**       | `^\\d+.\\d+.\\d+$`           | `^\\\\d+.\\\\d+.\\\\d+$`       |\n| **Version contains a prefix** | `\\d+.\\d+.\\d+$`            | `\\\\d+.\\\\d+.\\\\d+$`        |\n| **Version contains a suffix** | `^\\d+.\\d+.\\d+`            | `^\\\\d+.\\\\d+.\\\\d+`        |\n\nHCP Terraform triggers runs for all tags matching this pattern, regardless of the value in the [VCS Branch](#vcs-branch) setting.\n\n### VCS Branch\n\nThis setting designates which branch of the repository HCP Terraform should use when the workspace is set to [Always Trigger Runs](#always-trigger-runs) or [Only trigger runs when files in specified paths change](#only-trigger-runs-when-files-in-specified-paths-change). If you leave this setting blank, HCP Terraform uses the repository's default branch. If the workspace is set to trigger runs when a [git tag is published](#trigger-runs-when-a-git-tag-is-published), all tags will trigger runs, regardless of the branch specified in this setting.\n\n### Automatic Speculative Plans\n\nWhether to perform [speculative plans on pull requests](\/terraform\/cloud-docs\/run\/ui#speculative-plans-on-pull-requests) to the connected repository, to assist in reviewing proposed changes. 
Automatic speculative plans are enabled by default, but you can disable them for any workspace.\n\n### Include Submodules on Clone\n\nSelect **Include submodules on clone** to recursively clone all of the repository's Git submodules when HCP Terraform fetches a configuration.\n\n-> **Note:** The [SSH key for cloning Git submodules](\/terraform\/cloud-docs\/vcs#ssh-keys) is set in the VCS provider settings for the organization and is not related to the workspace's SSH key for Terraform modules.\n\n## Glob Patterns for Automatic Run Triggering\n\nWe support `glob` patterns to describe a set of triggers for automatic runs. Refer to [trigger patterns](#only-trigger-runs-when-files-in-specified-paths-change) for details.\n\nSupported wildcards:\n- `*`  Matches zero or more characters.\n- `?`  Matches exactly one character.\n- `**` Matches directories recursively.\n\nThe following examples demonstrate how to use the supported wildcards:\n- `\/**\/*` matches every file in every directory\n- `\/module\/**\/*` matches all files in any directory below the `module` directory\n- `\/**\/networking\/*` matches every file that is inside any `networking` directory\n- `\/**\/networking\/**\/*` matches every file that has a `networking` directory on its path\n- `\/**\/*.tf` matches every file in any directory that has the `.tf` extension\n- `\/submodule\/*.???` matches every file inside the `submodule` directory that has a three-character extension.","site":"terraform","answers_cleaned":"    page title  VCS Connections   Workspaces   HCP Terraform description       Learn how to use the web UI to connect a workspace to a version control system repository that contains Terraform configuration         Configuring Workspace VCS Connections  You can connect any HCP Terraform  workspace   terraform cloud docs workspaces  to a version control system  VCS  repository that contains a Terraform configuration  This page explains the workspace VCS connection settings in the HCP Terraform UI   
Refer to  Terraform Configurations in HCP Terraform Workspaces   terraform cloud docs workspaces configurations  for details on handling configuration versions and connected repositories  Refer to  Connecting VCS Providers   terraform cloud docs vcs  for a list of supported VCS providers and details about configuring VCS access  viewing VCS events  etc      API  You can use the  Update a Workspace endpoint   terraform cloud docs api docs workspaces update a workspace  in the Workspaces API to change one or more VCS settings  We also recommend using this endpoint to automate changing VCS connections for many workspaces at once  For example  when you move a VCS server or remove a deprecated API version      Version Control Settings  To change a workspace s VCS settings   1  Go to the workspace and click   Settings   Version Control    The   Version Control   page appears  1  Choose the desired settings and click   Update VCS settings     You can update the following types of VCS settings for the workspace       VCS Connection  You can take one of the following actions     To add a new VCS connection  click   Connect to version control    Select   Version control workflow   and follow the steps to  select a VCS provider and repository   terraform cloud docs workspaces create create a workspace     To edit an existing VCS connection  click   Change source    Choose the   Version control workflow   and follow the steps to  select VCS provider and repository   terraform cloud docs workspaces create create a workspace     To remove the VCS connection  click   Change source    Select either the   CLI driven workflow   or the   API driven workflow    and click   Update VCS settings    The workspace is no longer connected to VCS    permissions citation    intentionally unused   keep for maintainers      Terraform Working Directory  Specify the directory where Terraform will execute runs  This defaults to the root directory in your repository  but you may want to specify 
another directory if you have directories for multiple different Terraform configurations within the same repository  For example  if you had one  staging  directory and one  production  directory   A working directory is required when you use  trigger prefixes   automatic run triggering        Apply Method  Choose a workflow for Terraform runs       Auto apply    Terraform will apply changes from successful plans without prompting for approval  A push to the default branch of your repository will trigger a plan and apply cycle  You may want to do this in non interactive environments  like continuous deployment workflows          Warning    If you choose auto apply  make sure that no one can change your infrastructure outside of your automated build pipeline  This reduces the risk of configuration drift and unexpected changes       Manual apply    Terraform will ask for approval before applying changes from a successful plan  A push to the default branch of your repository will trigger a plan  and then Terraform will wait for confirmation       Automatic Run Triggering  HCP Terraform uses your VCS provider s API to retrieve the changed files in your repository  You can choose one of the following options to specify which changes trigger Terraform runs        Always trigger runs  This option instructs Terraform to begin a run when changes are pushed to any file within the repository  This can be useful for repositories that do not have multiple configurations but require a working directory for some other reason  However  we do not recommend this approach for true monorepos  as it queues unnecessary runs and slows down your ability to provision infrastructure        Only trigger runs when files in specified paths change  This option instructs Terraform to begin new runs only for changes that affect specified files and directories  This behavior also applies to  speculative plans   terraform cloud docs run remote operations speculative plans  on pull requests   You 
can use trigger patterns and trigger prefixes in the   Add path   field to specify groups of files and directories      Trigger Patterns     Recommended  Use glob patterns to specify the files that should trigger a new run  For example    submodule      tf   specifies all files with the   tf  extension that are nested below the  submodule  directory  You can also use more complex patterns like      networking        which specifies all files that have a  networking  folder in their file path   e g     submodule service 1 networking private main tf    Refer to  Glob Patterns for Automatic Run Triggering   glob patterns for automatic run triggering  for details      Trigger Prefixes    HCP Terraform will queue runs for changes in any of the specified trigger directories matching the provided prefixes  including the working directory   For example  if you use a top level  modules  directory to share Terraform code across multiple configurations  changes to the shared modules are relevant to every workspace that uses that repository  You can add  modules  as a trigger directory for each workspace to track changes to shared code        Note    HCP Terraform triggers runs on all attached workspaces if it does not receive a list of changed files or if that list is too large to process  When this happens  HCP Terraform may show several runs with completed plans that do not result in infrastructure changes        Trigger runs when a git tag is published  This option instructs Terraform to begin new runs only for changes that have a specific tag format   The tag format can be chosen between the following options      Semantic Versioning    It matches tags in the popular  SemVer format  https   semver org    For example   0 4 2       Version contains a prefix    It matches tags which have an additional prefix before the  SemVer format  https   semver org    For example   version 0 4 2       Version contains a suffix    It matches tags which have an additional suffix after the 
 SemVer format  https   semver org    For example  0 4 2 alpha       Custom Regular Expression    You can define your own regex for HCP Terraform to match against tags   You must include an additional     to escape the regex pattern when you manage your workspace with the  hashicorp tfe provider  https   registry terraform io providers hashicorp tfe latest docs resources workspace tags regex  and trigger runs through matching git tags  Refer to  Terraform escape sequences   terraform language expressions strings escape sequences  for more details     Tag Format                      Regex Pattern               Regex Pattern  Escaped                                                                                                  Semantic Versioning              d   d   d                    d    d    d                Version contains a prefix       d   d   d                    d    d    d                 Version contains a suffix        d   d   d                    d    d    d             HCP Terraform triggers runs for all tags matching this pattern  regardless of the value in the  VCS Branch   vcs branch  setting       VCS Branch  This setting designates which branch of the repository HCP Terraform should use when the workspace is set to  Always Trigger Runs   always trigger runs  or  Only trigger runs when files in specified paths change   only trigger runs when files in specified paths change   If you leave this setting blank  HCP Terraform uses the repository s default branch  If the workspace is set to trigger runs when a  git tag is published   trigger runs when a git tag is published   all tags will trigger runs  regardless of the branch specified in this setting       Automatic Speculative Plans  Whether to perform  speculative plans on pull requests   terraform cloud docs run ui speculative plans on pull requests  to the connected repository  to assist in reviewing proposed changes  Automatic speculative plans are enabled by default  but you can disable them 
for any workspace       Include Submodules on Clone  Select   Include submodules on clone   to recursively clone all of the repository s Git submodules when HCP Terraform fetches a configuration        Note    The  SSH key for cloning Git submodules   terraform cloud docs vcs ssh keys  is set in the VCS provider settings for the organization and is not related to the workspace s SSH key for Terraform modules      Glob Patterns for Automatic Run Triggering  We support  glob  patterns to describe a set of triggers for automatic runs  Refer to  trigger patterns   only trigger runs when files in specified paths change  for details   Supported wildcards         Matches zero or more characters         Matches one or more characters         Matches directories recursively   The following examples demonstrate how to use the supported wildcards            matches every file in every directory     module       matches all files in any directory below the  module  directory        networking    matches every file that is inside any  networking  directory        networking       matches every file that has  networking  directory on its path          tf  matches every file in any directory that has the   tf  extension     submodule        matches every file inside  submodule  directory which has three characters long extension "}
{"questions":"terraform entitlement terraform cloud docs api docs feature entitlements Learn how to integrate third party tools into the run lifecycle Create and delete run tasks and associate them with workspaces Run Tasks page title Run Tasks Workspaces HCP Terraform","answers":"---\npage_title: Run Tasks - Workspaces - HCP Terraform\ndescription: >-\n  Learn how to integrate third-party tools into the run lifecycle. Create and delete run tasks and associate them with workspaces.\n---\n\n[entitlement]: \/terraform\/cloud-docs\/api-docs#feature-entitlements\n\n# Run Tasks\n\nHCP Terraform run tasks let you directly integrate third-party tools and services at certain stages in the HCP Terraform run lifecycle. Use run tasks to validate Terraform configuration files, analyze execution plans before applying them, scan for security vulnerabilities, or perform other custom actions.\n\nRun tasks send data about a run to an external service at [specific run stages](#understanding-run-tasks-within-a-run). The external service processes the data, evaluates whether the run passes or fails, and sends a response to HCP Terraform. HCP Terraform then uses this response and the run task enforcement level to determine if a run can proceed. [Explore run tasks in the Terraform registry](https:\/\/registry.terraform.io\/browse\/run-tasks).\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/run-tasks.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nYou can manage run tasks through the HCP Terraform UI or the [Run Tasks API](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks).\n\n> **Hands-on:** Try the [HCP Packer validation run task](\/packer\/tutorials\/hcp\/setup-hcp-terraform-run-task) tutorial.\n\n## Requirements\n\n**Terraform Version** - You can assign run tasks to workspaces that use a Terraform version of 1.1.9 and later. You can downgrade a workspace with existing runs to use a prior Terraform version without causing an error. 
However, HCP Terraform no longer triggers the run tasks during plan and apply operations.\n\n**Permissions** - To create a run task, you must have a user account with the [Manage Run Tasks permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-run-tasks). To associate run tasks with a workspace, you need the [Manage Workspace Run Tasks permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-workspace-permissions) on that particular workspace.\n\n## Creating a Run Task\n\nExplore the full list of [run tasks in the Terraform Registry](https:\/\/registry.terraform.io\/browse\/run-tasks). \n\nRun tasks send an API payload to an external service. The API payload contains run-related information, including a callback URL, which the service uses to return a pass or fail status to HCP Terraform. \n\nFor example, the [HCP Packer integration](\/terraform\/cloud-docs\/integrations\/run-tasks#hcp-packer-run-task) checks image artifacts within a Terraform configuration for validity. If the configuration references images marked as unusable (revoked), then the run task fails and provides an error message.\n\nTo create a new run task:\n\n1. Navigate to the desired workspace, open the **Settings** menu, and select **Run Tasks**.\n\n1. Click **Create a new run task**. The **Run Tasks** page appears.\n\n1. Enter the information about the run task to be configured:\n\n   - **Enabled** (optional): Whether the run task will run across all associated workspaces. New tasks are enabled by default.\n   - **Name** (required): A human-readable name for the run task. This will be displayed in workspace configuration pages and can contain letters, numbers, dashes and underscores.\n   - **Endpoint URL** (required): The URL for the external service. 
Run tasks will POST the [run tasks payload](\/terraform\/cloud-docs\/integrations\/run-tasks#integration-details) to this URL.\n   - **Description** (optional): A human-readable description for the run task. This information can contain letters, numbers, spaces, and special characters.\n   - **HMAC key** (optional): A secret key that may be required by the external service to verify request authenticity.\n\n1. Click **Create run task**. The run task is now available within the organization, and you can associate it with one or more workspaces.\n\n### Global Run Tasks\n\nWhen you create a new run task, you can choose to apply it globally to every workspace in an organization. Your organization must have the `global-run-task` [entitlement][] to use global run tasks.\n\n1. Select the **Global** checkbox\n\n1. Choose when HCP Terraform should start the run task:\n\n   - **Pre-plan**: Before Terraform creates the plan.\n   - **Post-plan**: After Terraform creates the plan.\n   - **Pre-apply**: Before Terraform applies a plan.\n   - **Post-apply**: After Terraform applies a plan.\n\n1. Choose an enforcement level:\n\n    - **Advisory**: Run tasks can not block a run from completing. If the task fails, the run proceeds with a warning in the user interface.\n    - **Mandatory**: Failed run tasks can block a run from completing. If the task fails (including timeouts or unexpected remote errors), the run stops and errors with a warning in the user interface.\n\n## Associating Run Tasks with a Workspace\n\n1. Click **Workspaces** and then go to the workspace where you want to associate run tasks.\n\n1. Open the **Settings** menu and select **Run Tasks**.\n\n1. Click the **+** next to the task you want to add to the workspace.\n\n1. 
Choose when HCP Terraform should start the run task:\n\n   - **Pre-plan**: Before Terraform creates the plan.\n   - **Post-plan**: After Terraform creates the plan.\n   - **Pre-apply**: Before Terraform applies a plan.\n   - **Post-apply**: After Terraform applies a plan.\n\n1. Choose an enforcement level:\n\n   - **Advisory**: Run tasks can not block a run from completing. If the task fails, the run will proceed with a warning in the UI.\n   - **Mandatory**: Run tasks can block a run from completing. If the task fails (including a timeout or unexpected remote error condition), the run will transition to an Errored state with a warning in the UI.\n\n1. Click **Create**. Your run task is now configured.\n\n## Understanding Run Tasks Within a Run\n\nRun tasks perform actions before and after the [plan](\/terraform\/cloud-docs\/run\/states#the-plan-stage) and [apply](\/terraform\/cloud-docs\/run\/states#the-apply-stage) stages of a [Terraform run](\/terraform\/cloud-docs\/run\/remote-operations). Once all run tasks complete, the run ends based on the most restrictive enforcement level in each associated run task.\n\nFor example, if a mandatory task fails and an advisory task succeeds, the run fails. If an advisory task fails, but a mandatory task succeeds, the run succeeds and proceeds to the apply stage. Regardless of the exit status of a task, HCP Terraform displays the status and any related message data in the UI.\n\n## Removing a Run Task from a Workspace\n\nRemoving a run task from a workspace does not delete it from the organization. To remove a run task from a specific workspace:\n\n1. Navigate to the desired workspace, open the **Settings** menu and select **Run Tasks**.\n\n1. Click the ellipses (...) on the associated run task, and then click **Remove**. The run task will no longer be applied to runs within the workspace.\n\n## Deleting a Run Task\n\nYou must remove a run task from all associated workspaces before you can delete it. 
To delete a run task:\n\n1. Navigate to **Settings** and click **Run Tasks**.\n\n1. Click the ellipses (...) next to the run task you want to delete, and then click **Edit**.\n\n1. Click **Delete run task**.\n\nYou cannot delete run tasks that are still associated with a workspace. If you attempt this, you will see a warning in the UI containing a list of all workspaces that are associated with the run task","site":"terraform","answers_cleaned":"    page title  Run Tasks   Workspaces   HCP Terraform description       Learn how to integrate third party tools into the run lifecycle  Create and delete run tasks and associate them with workspaces        entitlement    terraform cloud docs api docs feature entitlements    Run Tasks  HCP Terraform run tasks let you directly integrate third party tools and services at certain stages in the HCP Terraform run lifecycle  Use run tasks to validate Terraform configuration files  analyze execution plans before applying them  scan for security vulnerabilities  or perform other custom actions   Run tasks send data about a run to an external service at  specific run stages   understanding run tasks within a run   The external service processes the data  evaluates whether the run passes or fails  and sends a response to HCP Terraform  HCP Terraform then uses this response and the run task enforcement level to determine if a run can proceed   Explore run tasks in the Terraform registry  https   registry terraform io browse run tasks         BEGIN  TFC only name pnp callout      include  tfc package callouts run tasks mdx       END  TFC only name pnp callout      You can manage run tasks through the HCP Terraform UI or the  Run Tasks API   terraform cloud docs api docs run tasks run tasks        Hands on    Try the  HCP Packer validation run task   packer tutorials hcp setup hcp terraform run task  tutorial      Requirements    Terraform Version     You can assign run tasks to workspaces that use a Terraform version of 1 1 9 and 
later  You can downgrade a workspace with existing runs to use a prior Terraform version without causing an error  However  HCP Terraform no longer triggers the run tasks during plan and apply operations     Permissions     To create a run task  you must have a user account with the  Manage Run Tasks permission   terraform cloud docs users teams organizations permissions manage run tasks   To associate run tasks with a workspace  you need the  Manage Workspace Run Tasks permission   terraform cloud docs users teams organizations permissions general workspace permissions  on that particular workspace      Creating a Run Task  Explore the full list of  run tasks in the Terraform Registry  https   registry terraform io browse run tasks     Run tasks send an API payload to an external service  The API payload contains run related information  including a callback URL  which the service uses to return a pass or fail status to HCP Terraform    For example  the  HCP Packer integration   terraform cloud docs integrations run tasks hcp packer run task  checks image artifacts within a Terraform configuration for validity  If the configuration references images marked as unusable  revoked   then the run task fails and provides an error message   To create a new run task   1  Navigate to the desired workspace  open the   Settings   menu  and select   Run Tasks     1  Click   Create a new run task    The   Run Tasks   page appears   1  Enter the information about the run task to be configured          Enabled    optional   Whether the run task will run across all associated workspaces  New tasks are enabled by default         Name    required   A human readable name for the run task  This will be displayed in workspace configuration pages and can contain letters  numbers  dashes and underscores         Endpoint URL    required   The URL for the external service  Run tasks will POST the  run tasks payload   terraform cloud docs integrations run tasks integration details  to this 
URL         Description    optional   A human readable description for the run task  This information can contain letters  numbers  spaces  and special characters         HMAC key    optional   A secret key that may be required by the external service to verify request authenticity   1  Click   Create run task    The run task is now available within the organization  and you can associate it with one or more workspaces       Global Run Tasks  When you create a new run task  you can choose to apply it globally to every workspace in an organization  Your organization must have the  global run task   entitlement    to use global run tasks   1  Select the   Global   checkbox  1  Choose when HCP Terraform should start the run task          Pre plan    Before Terraform creates the plan         Post plan    After Terraform creates the plan         Pre apply    Before Terraform applies a plan         Post apply    After Terraform applies a plan   1  Choose an enforcement level           Advisory    Run tasks can not block a run from completing  If the task fails  the run proceeds with a warning in the user interface          Mandatory    Failed run tasks can block a run from completing  If the task fails  including timeouts or unexpected remote errors   the run stops and errors with a warning in the user interface      Associating Run Tasks with a Workspace  1  Click   Workspaces   and then go to the workspace where you want to associate run tasks   1  Open the   Settings   menu and select   Run Tasks     1  Click the       next to the task you want to add to the workspace   1  Choose when HCP Terraform should start the run task          Pre plan    Before Terraform creates the plan         Post plan    After Terraform creates the plan         Pre apply    Before Terraform applies a plan         Post apply    After Terraform applies a plan   1  Choose an enforcement level          Advisory    Run tasks can not block a run from completing  If the task fails  the run will 
proceed with a warning in the UI         Mandatory    Run tasks can block a run from completing  If the task fails  including a timeout or unexpected remote error condition   the run will transition to an Errored state with a warning in the UI   1  Click   Create    Your run task is now configured      Understanding Run Tasks Within a Run  Run tasks perform actions before and after  the  plan   terraform cloud docs run states the plan stage  and  apply   terraform cloud docs run states the apply stage  stages of a  Terraform run   terraform cloud docs run remote operations   Once all run tasks complete  the run ends based on the most restrictive enforcement level in each associated run task   For example  if a mandatory task fails and an advisory task succeeds  the run fails  If an advisory task fails  but a mandatory task succeeds  the run succeeds and proceeds to the apply stage  Regardless of the exit status of a task  HCP Terraform displays the status and any related message data in the UI      Removing a Run Task from a Workspace  Removing a run task from a workspace does not delete it from the organization  To remove a run task from a specific workspace   1  Navigate to the desired workspace  open the   Settings   menu and select   Run Tasks     1  Click the ellipses       on the associated run task  and then click   Remove    The run task will no longer be applied to runs within the workspace      Deleting a Run Task  You must remove a run task from all associated workspaces before you can delete it  To delete a run task   1  Navigate to   Settings   and click   Run Tasks     1  Click the ellipses       next to the run task you want to delete  and then click   Edit     1  Click   Delete run task     You cannot delete run tasks that are still associated with a workspace  If you attempt this  you will see a warning in the UI containing a list of all workspaces that are associated with the run task"}
{"questions":"terraform your HCP Terraform runs Use OpenID Connect to get short term credentials for the Kubernetes and Helm Terraform providers in page title Dynamic Credentials with the Kubernetes and Helm Provider Workspaces HCP Terraform Dynamic Credentials with the Kubernetes and Helm providers Important If you are self hosting HCP Terraform agents terraform cloud docs agents ensure your agents use v1 13 1 terraform cloud docs agents changelog 1 13 1 10 25 2023 or above To use the latest dynamic credentials features upgrade your agents to the latest version terraform cloud docs agents changelog","answers":"---\npage_title: Dynamic Credentials with the Kubernetes and Helm Provider - Workspaces - HCP Terraform\ndescription: >-\n  Use OpenID Connect to get short-term credentials for the Kubernetes and Helm Terraform providers in\n  your HCP Terraform runs.\n---\n\n# Dynamic Credentials with the Kubernetes and Helm providers\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.13.1](\/terraform\/cloud-docs\/agents\/changelog#1-13-1-10-25-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can use HCP Terraform\u2019s native OpenID Connect integration with Kubernetes to use [dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials) for the Kubernetes and Helm providers in your HCP Terraform runs. Configuring the integration requires the following steps:\n\n1. **[Configure Kubernetes](#configure-kubernetes):** Set up a trust configuration between Kubernetes and HCP Terraform. Next, create Kubernetes role bindings for your HCP Terraform identities.\n2. **[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use dynamic credentials.\n3. 
**[Configure the Kubernetes or Helm provider](#configure-the-provider)**: Set the required attributes on the provider block.\n\nOnce you complete the setup, HCP Terraform automatically authenticates to Kubernetes during each run. The Kubernetes and Helm providers' authentication is valid for the length of a plan or apply operation.\n\n## Configure Kubernetes\n\nYou must enable and configure an OIDC identity provider in the Kubernetes API. This workflow changes based on the platform hosting your Kubernetes cluster. HCP Terraform only supports dynamic credentials with Kubernetes in AWS and GCP.\n\n### Configure an OIDC identity provider\n\nRefer to the AWS documentation for guidance on [setting up an EKS cluster for OIDC authentication](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/authenticate-oidc-identity-provider.html). You can also refer to our [example configuration](https:\/\/github.com\/hashicorp-education\/learn-terraform-dynamic-credentials\/tree\/main\/eks\/trust).\n\nRefer to the GCP documentation for guidance on [setting up a GKE cluster for OIDC authentication](https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/oidc). You can also refer to our [example configuration](https:\/\/github.com\/hashicorp-education\/learn-terraform-dynamic-credentials\/tree\/main\/gke\/trust).\n\nWhen inputting an \"issuer URL\", use the address of HCP Terraform (`https:\/\/app.terraform.io` _without_ a trailing slash) or the URL of your Terraform Enterprise instance. The value of \"client ID\" is your audience in OIDC terminology, and it should match the value of the `TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE` environment variable in your workspace.\n\nThe OIDC identity resolves authentication to the Kubernetes API, but it first requires authorization to interact with that API. So, you must bind RBAC roles to the OIDC identity in Kubernetes.\n\nYou can use both \"User\" and \"Group\" subjects in your role bindings. 
For OIDC identities coming from TFC, the \"User\" value is formatted like so: `organization:<MY-ORG-NAME>:project:<MY-PROJECT-NAME>:workspace:<MY-WORKSPACE-NAME>:run_phase:<plan|apply>`. \n\nYou can extract the \"Group\" value from the token claim you configured in your cluster OIDC configuration. For details on the structure of the HCP Terraform token, refer to [Workload Identity](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/workload-identity-tokens).\n\nBelow, we show an example of a `ClusterRoleBinding` for the HCP Terraform OIDC identity.\n\n```hcl\nresource \"kubernetes_cluster_role_binding_v1\" \"oidc_role\" {\n  metadata {\n    name = \"oidc-identity\"\n  }\n\n  role_ref {\n    api_group = \"rbac.authorization.k8s.io\"\n    kind      = \"ClusterRole\"\n    name      = var.rbac_group_cluster_role\n  }\n\n  \/\/ Option A - Bind RBAC roles to groups\n  \/\/\n  \/\/ Groups are extracted from the token claim designated by 'rbac_group_oidc_claim'\n  \/\/\n  subject {\n    api_group = \"rbac.authorization.k8s.io\"\n    kind      = \"Group\"\n    name      = var.tfc_organization_name\n  }\n\n  \/\/ Option B - Bind RBAC roles to user identities\n  \/\/\n  \/\/ Users are extracted from the 'sub' token claim.\n  \/\/ Plan and apply phases are assigned different user identities.\n  \/\/ For HCP Terraform tokens, the format of the user id is always the one described below.\n  \/\/\n  subject {\n    api_group = \"rbac.authorization.k8s.io\"\n    kind      = \"User\"\n    name      = \"${var.tfc_hostname}#organization:${var.tfc_organization_name}:project:${var.tfc_project_name}:workspace:${var.tfc_workspace_name}:run_phase:plan\"\n  }\n\n  subject {\n    api_group = \"rbac.authorization.k8s.io\"\n    kind      = \"User\"\n    name      = \"${var.tfc_hostname}#organization:${var.tfc_organization_name}:project:${var.tfc_project_name}:workspace:${var.tfc_workspace_name}:run_phase:apply\"\n  }\n}\n```\n\nIf binding with \"User\" subjects, be aware that plan 
and apply phases are assigned different identities, each requiring specific bindings. This means you can tailor permissions for each Terraform operation. Planning operations usually require \"read-only\" permissions, while apply operations also require \"write\" access.\n\n!> **Warning**: Always check, at minimum, the audience and the organization's name to prevent unauthorized access from other HCP Terraform organizations.\n\n## Configure HCP Terraform\n\nYou must set certain environment variables in your HCP Terraform workspace to configure HCP Terraform to authenticate with Kubernetes or Helm using dynamic credentials. You can set these as workspace variables, or if you\u2019d like to share one Kubernetes role across multiple workspaces, you can use a variable set. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.\n\n### Required Environment Variables\n| Variable | Value | Notes |\n|----------|-------|-------|\n| `TFC_KUBERNETES_PROVIDER_AUTH`<br \/>`TFC_KUBERNETES_PROVIDER_AUTH[_TAG]`<br 
\/>_(Default variable not supported)_ | `true` | Requires **v1.14.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate to Kubernetes. |\n| `TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE`<br \/>`TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br \/>`TFC_DEFAULT_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE` | The audience name in your cluster's OIDC configuration, such as `kubernetes`. | Requires **v1.14.0** or later if self-managing agents. |\n\n## Configure the provider\n\nThe Kubernetes and Helm providers share the same schema of configuration attributes for the provider block. The example below illustrates using the Kubernetes provider, but the same configuration applies to the Helm provider.\n\nMake sure that you are not using any of the other arguments or methods listed in the [authentication](https:\/\/registry.terraform.io\/providers\/hashicorp\/kubernetes\/latest\/docs#authentication) section of the provider documentation, as these settings may interfere with dynamic provider credentials. The only allowed provider attributes are `host` and `cluster_ca_certificate`.\n\n### Single provider instance\n\nHCP Terraform automatically sets the `KUBE_TOKEN` environment variable to the workload identity token.\n\nThe provider needs to be configured with the URL of the API endpoint using the `host` attribute (or `KUBE_HOST` environment variable). 
In most cases, the `cluster_ca_certificate` (or `KUBE_CLUSTER_CA_CERT_DATA` environment variable) is also required.\n\n#### Example Usage\n\n```hcl\nprovider \"kubernetes\" {\n  host                   = var.cluster-endpoint-url\n  cluster_ca_certificate = base64decode(var.cluster-endpoint-ca)\n}\n```\n\n### Multiple aliases\n\nYou can add additional variables to handle multiple distinct Kubernetes clusters, enabling you to use multiple [provider aliases](\/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.\n\nFor more details, see [Specifying Multiple Configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations).\n\n#### Required Terraform Variable\n\nTo use additional configurations, add the following code to your Terraform configuration. 
This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.\n\n```hcl\nvariable \"tfc_kubernetes_dynamic_credentials\" {\n  description = \"Object containing Kubernetes dynamic credentials configuration\"\n  type = object({\n    default = object({\n      token_path = string\n    })\n    aliases = map(object({\n      token_path = string\n    }))\n  })\n}\n```\n\n#### Example Usage\n\n```hcl\nprovider \"kubernetes\" {\n  alias                  = \"ALIAS1\"\n  host                   = var.alias1-endpoint-url\n  cluster_ca_certificate = base64decode(var.alias1-cluster-ca)\n  token                  = file(var.tfc_kubernetes_dynamic_credentials.aliases[\"ALIAS1\"].token_path)\n}\n\nprovider \"kubernetes\" {\n  alias                  = \"ALIAS2\"\n  host                   = var.alias2-endpoint-url\n  cluster_ca_certificate = base64decode(var.alias2-cluster-ca)\n  token                  = file(var.tfc_kubernetes_dynamic_credentials.aliases[\"ALIAS2\"].token_path)\n}\n```\n\nThe `tfc_kubernetes_dynamic_credentials` variable is also available to use for single provider configurations, instead of the `KUBE_TOKEN` environment variable.\n\n```hcl\nprovider \"kubernetes\" {\n  host                   = var.cluster-endpoint-url\n  cluster_ca_certificate = base64decode(var.cluster-endpoint-ca)\n  token                  = file(var.tfc_kubernetes_dynamic_credentials.default.token_path)\n}\n```","site":"terraform"}
{"questions":"terraform Dynamic Credentials with the Vault Provider your HCP Terraform runs Important If you are self hosting HCP Terraform agents terraform cloud docs agents ensure your agents use v1 7 0 terraform cloud docs agents changelog 1 7 0 03 02 2023 or above To use the latest dynamic credentials features upgrade your agents to the latest version terraform cloud docs agents changelog Use OpenID Connect to get short term credentials for the Vault Terraform provider in page title Dynamic Credentials with the Vault Provider Workspaces HCP Terraform","answers":"---\npage_title: Dynamic Credentials with the Vault Provider - Workspaces - HCP Terraform\ndescription: >-\n  Use OpenID Connect to get short-term credentials for the Vault Terraform provider in\n  your HCP Terraform runs.\n---\n\n# Dynamic Credentials with the Vault Provider\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.7.0](\/terraform\/cloud-docs\/agents\/changelog#1-7-0-03-02-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can use HCP Terraform\u2019s native OpenID Connect integration with Vault to get [dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials) for the Vault provider in your HCP Terraform runs. Configuring the integration requires the following steps:\n\n1. **[Configure Vault](#configure-vault):** Set up a trust configuration between Vault and HCP Terraform. Then, you must create Vault roles and policies for your HCP Terraform workspaces.\n2. **[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials.\n\nOnce you complete the setup, HCP Terraform automatically authenticates to Vault during each run. 
The Vault provider authentication is valid for the length of the plan or apply. Vault does not revoke authentication until the run is complete.\n\nIf you are using Vault's [secrets engines](\/vault\/docs\/secrets), you must complete the following setup before continuing to configure [Vault-backed dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/vault-backed).\n\n## Configure Vault\n\nYou must enable and configure the JWT backend in Vault. These instructions use the Vault CLI commands, but you can also use Terraform to configure Vault. Refer to our [example Terraform configuration](https:\/\/github.com\/hashicorp\/terraform-dynamic-credentials-setup-examples\/tree\/main\/vault).\n\n### Enable the JWT Auth Backend\n\nRun the following command to enable the JWT auth backend in Vault:\n```shell\nvault auth enable jwt\n```\n\n### Configure Trust with HCP Terraform\n\nYou must configure Vault to trust HCP Terraform\u2019s identity tokens and verify them using HCP Terraform\u2019s public key. 
The following command configures the `jwt` auth backend in Vault to trust HCP Terraform as an OIDC identity provider:\n```shell\nvault write auth\/jwt\/config \\\n    oidc_discovery_url=\"https:\/\/app.terraform.io\" \\\n    bound_issuer=\"https:\/\/app.terraform.io\"\n```\nThe `oidc_discovery_url` and `bound_issuer` should both be the root address of HCP Terraform, including the scheme and without a trailing slash.\n\n#### Terraform Enterprise Specific Requirements\n\nIf you are using a custom or self-signed CA certificate, you may need to specify the CA certificate or chain of certificates, in PEM format, via the [`oidc_discovery_ca_pem`](\/vault\/api-docs\/auth\/jwt#oidc_discovery_ca_pem) argument, as shown in the following example command:\n```shell\nvault write auth\/jwt\/config \\\n    oidc_discovery_url=\"https:\/\/app.terraform.io\" \\\n    bound_issuer=\"https:\/\/app.terraform.io\" \\\n    oidc_discovery_ca_pem=@my-cert.pem\n```\n\nIn the example above, `my-cert.pem` is a PEM-formatted file containing the certificate.\n\n### Create a Vault Policy\n\nYou must create a Vault policy that controls what paths and secrets your HCP Terraform workspace can access in Vault.\n\nCreate a file called `tfc-policy.hcl` with the following content:\n```hcl\n# Allow tokens to query themselves\npath \"auth\/token\/lookup-self\" {\n  capabilities = [\"read\"]\n}\n\n# Allow tokens to renew themselves\npath \"auth\/token\/renew-self\" {\n  capabilities = [\"update\"]\n}\n\n# Allow tokens to revoke themselves\npath \"auth\/token\/revoke-self\" {\n  capabilities = [\"update\"]\n}\n\n# Configure the actual secrets the token should have access to\npath \"secret\/*\" {\n  capabilities = [\"read\"]\n}\n```\n\nThen create the policy in Vault:\n\n```shell\nvault policy write tfc-policy tfc-policy.hcl\n```\n\n### Create a JWT Auth Role\n\nCreate a Vault role that HCP Terraform can use when authenticating to Vault.\n\nVault offers a lot of flexibility in determining how to map roles and 
permissions in Vault to workspaces in HCP Terraform. You can have one role for each workspace, one role for a group of workspaces, or one role for all workspaces in an organization. You can also configure different roles for the plan and apply phases of a run.\n\n-> **Note:** If you set your `user_claim` to be per workspace, then Vault ties the entity it creates to that workspace's name. If you rename the workspace tied to your `user_claim`, Vault will create an additional identity object. To avoid this, update the alias name in Vault to your new workspace name before you update it in HCP Terraform.\n\nThe following example creates a role called `tfc-role`. The role is mapped to a single workspace, and HCP Terraform can use it for both plan and apply runs.\n\nCreate a file called `vault-jwt-auth-role.json` with the following content:\n```json\n{\n  \"policies\": [\"tfc-policy\"],\n  \"bound_audiences\": [\"vault.workload.identity\"],\n  \"bound_claims_type\": \"glob\",\n  \"bound_claims\": {\n    \"sub\": \"organization:my-org-name:project:my-project-name:workspace:my-workspace-name:run_phase:*\"\n  },\n  \"user_claim\": \"terraform_full_workspace\",\n  \"role_type\": \"jwt\",\n  \"token_ttl\": \"20m\"\n}\n```\n\nThen run the following command to create a role named `tfc-role` with this configuration in Vault:\n```shell\nvault write auth\/jwt\/role\/tfc-role @vault-jwt-auth-role.json\n```\nTo understand all the available options for matching bound claims, refer to the [Terraform workload identity claim specification](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials) and the [Vault documentation on configuring bound claims](\/vault\/docs\/auth\/jwt#bound-claims). 
To understand all the options available when configuring Vault JWT auth roles, refer to the [Vault API documentation](\/vault\/api-docs\/auth\/jwt#create-role).\n\n!> **Warning:** You should always check, at minimum, the audience and the name of the organization to prevent unauthorized access from other HCP Terraform organizations.\n\n#### Token TTLs\n\nWe recommend setting `token_ttl` to a relatively short value. HCP Terraform can renew the token periodically until the plan or apply is complete, then revoke it to prevent it from being used further.\n\n## Configure HCP Terraform\n\nYou must set certain environment variables in your HCP Terraform workspace to configure HCP Terraform to authenticate with Vault using dynamic credentials. You can set these as workspace variables, or if you\u2019d like to share one Vault role across multiple workspaces, you can use a variable set. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. 
Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.\n\n### Required Environment Variables\n| Variable                                                                                               | Value                                                                            | Notes                                                                                                                                                                                                                                                                                         |\n|--------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_PROVIDER_AUTH`<br \/>`TFC_VAULT_PROVIDER_AUTH[_TAG]`<br \/>_(Default variable not supported)_ | `true`                                                                           | Requires **v1.7.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate to Vault.                                                                                                                                          |\n| `TFC_VAULT_ADDR`<br \/>`TFC_VAULT_ADDR[_TAG]`<br \/>`TFC_DEFAULT_VAULT_ADDR`                             | The address of the Vault instance to authenticate against.                       | Requires **v1.7.0** or later if self-managing agents. Will also be used to set `VAULT_ADDR` in the run environment.                                                                                                                  
                                                         |\n| `TFC_VAULT_RUN_ROLE`<br \/>`TFC_VAULT_RUN_ROLE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_RUN_ROLE`                 | The name of the Vault role to authenticate against (`tfc-role`, in our example). | Requires **v1.7.0** or later if self-managing agents. Optional if `TFC_VAULT_PLAN_ROLE` and `TFC_VAULT_APPLY_ROLE` are both provided. These variables are described [below](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/vault-configuration#optional-environment-variables) |\n\n### Optional Environment Variables\nYou may need to set these variables, depending on your Vault configuration and use case.\n\n| Variable                                                                                                                                     | Value                                                                                          | Notes                                                                                                                                                                                                          |\n|----------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_NAMESPACE`<br \/>`TFC_VAULT_NAMESPACE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_NAMESPACE`                                                    | The namespace to use when authenticating to Vault.                                             | Requires **v1.7.0** or later if self-managing agents. Will also be used to set `VAULT_NAMESPACE` in the run environment.                                                        
|\n| `TFC_VAULT_AUTH_PATH`<br \/>`TFC_VAULT_AUTH_PATH[_TAG]`<br \/>`TFC_DEFAULT_VAULT_AUTH_PATH` | The path where the JWT auth backend is mounted in Vault. Defaults to `jwt`. | Requires **v1.7.0** or later if self-managing agents. |\n| `TFC_VAULT_WORKLOAD_IDENTITY_AUDIENCE`<br \/>`TFC_VAULT_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_WORKLOAD_IDENTITY_AUDIENCE` | Will be used as the `aud` claim for the identity token. Defaults to `vault.workload.identity`. | Requires **v1.7.0** or later if self-managing agents. Must match the `bound_audiences` configured for the role in Vault. |\n| `TFC_VAULT_PLAN_ROLE`<br \/>`TFC_VAULT_PLAN_ROLE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_PLAN_ROLE` | The Vault role to use for the plan phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_RUN_ROLE` if not provided. |\n| `TFC_VAULT_APPLY_ROLE`<br \/>`TFC_VAULT_APPLY_ROLE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_APPLY_ROLE` | The Vault role to use for the apply phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_RUN_ROLE` if not provided. 
|\n| `TFC_VAULT_ENCODED_CACERT`<br \/>`TFC_VAULT_ENCODED_CACERT[_TAG]`<br \/>`TFC_DEFAULT_VAULT_ENCODED_CACERT`                                     | A PEM-encoded CA certificate that has been Base64 encoded.                                     | Requires **v1.9.0** or later if self-managing agents. This certificate will be used when connecting to Vault. May be required when connecting to Vault instances that use a custom or self-signed certificate. |\n\n## Vault Provider Configuration\nOnce you set up dynamic credentials for a workspace, HCP Terraform automatically authenticates to Vault for each run. Do not pass the `address`, `token`, or `namespace` arguments into the provider configuration block. HCP Terraform sets these values as environment variables in the run environment.\n\nYou can use the Vault provider to read static secrets from Vault and use them with other Terraform resources. You can also access the other resources and data sources available in the [Vault provider documentation](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest). You must adjust your [Vault policy](#create-a-vault-policy) to give your HCP Terraform workspace access to all required Vault paths.\n\n~> **Important:** data sources that use secrets engines to generate dynamic secrets must not be used with Vault dynamic credentials. You can use Vault's dynamic secrets engines for AWS, GCP, and Azure by adding additional configurations. For more details, see [Vault-backed dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/vault-backed).\n\n### Specifying Multiple Configurations\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.12.0](\/terraform\/cloud-docs\/agents\/changelog#1-12-0-07-26-2023) or above. 
To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\n~> **Important:** Ensure you are using version **3.18.0** or later of the **Vault provider** as the required [`auth_login_token_file`](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest\/docs#token-file) block was introduced in this provider version.\n\nYou can add additional variables to handle multiple distinct Vault setups, enabling you to use multiple [provider aliases](\/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.\n\nFor more details, see [Specifying Multiple Configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations).\n\n#### Required Terraform Variable\n\nTo use additional configurations, add the following code to your Terraform configuration. 
This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.\n\n```hcl\nvariable \"tfc_vault_dynamic_credentials\" {\n  description = \"Object containing Vault dynamic credentials configuration\"\n  type = object({\n    default = object({\n      token_filename = string\n      address = string\n      namespace = string\n      ca_cert_file = string\n    })\n    aliases = map(object({\n      token_filename = string\n      address = string\n      namespace = string\n      ca_cert_file = string\n    }))\n  })\n}\n```\n\n#### Example Usage\n\n```hcl\nprovider \"vault\" {\n  \/\/ skip_child_token must be explicitly set to true as HCP Terraform manages the token lifecycle\n  skip_child_token = true\n  address          = var.tfc_vault_dynamic_credentials.default.address\n  namespace        = var.tfc_vault_dynamic_credentials.default.namespace\n\n  auth_login_token_file {\n    filename = var.tfc_vault_dynamic_credentials.default.token_filename\n  }\n}\n\nprovider \"vault\" {\n  \/\/ skip_child_token must be explicitly set to true as HCP Terraform manages the token lifecycle\n  skip_child_token = true\n  alias            = \"ALIAS1\"\n  address          = var.tfc_vault_dynamic_credentials.aliases[\"ALIAS1\"].address\n  namespace        = var.tfc_vault_dynamic_credentials.aliases[\"ALIAS1\"].namespace\n\n  auth_login_token_file {\n    filename = var.tfc_vault_dynamic_credentials.aliases[\"ALIAS1\"].token_filename\n  }\n}\n```","site":"terraform","answers_cleaned":"    page title  Dynamic Credentials with the Vault Provider   Workspaces   HCP Terraform description       Use OpenID Connect to get short term credentials for the Vault Terraform provider in   your HCP Terraform runs         Dynamic Credentials with the Vault Provider       Important    If you are self hosting  HCP Terraform agents   terraform cloud docs agents   ensure your agents use  v1 7 0   terraform cloud docs agents 
changelog 1 7 0 03 02 2023  or above  To use the latest dynamic credentials features   upgrade your agents to the latest version   terraform cloud docs agents changelog    You can use HCP Terraform s native OpenID Connect integration with Vault to get  dynamic credentials   terraform cloud docs workspaces dynamic provider credentials  for the Vault provider in your HCP Terraform runs  Configuring the integration requires the following steps   1     Configure Vault   configure vault     Set up a trust configuration between Vault and HCP Terraform  Then  you must create Vault roles and policies for your HCP Terraform workspaces  2     Configure HCP Terraform   configure hcp terraform     Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials   Once you complete the setup  HCP Terraform automatically authenticates to Vault during each run  The Vault provider authentication is valid for the length of the plan or apply  Vault does not revoke authentication until the run is complete   If you are using Vault s  secrets engines   vault docs secrets   you must complete the following set up before continuing to configure  Vault backed dynamic credentials   terraform cloud docs workspaces dynamic provider credentials vault backed       Configure Vault You must enable and configure the JWT backend in Vault  These instructions use the Vault CLI commands  but you can also use Terraform to configure Vault  Refer to our  example Terraform configuration  https   github com hashicorp terraform dynamic credentials setup examples tree main vault       Enable the JWT Auth Backend Run the following command to enable the JWT auth backend in Vault     shell vault auth enable jwt         Configure Trust with HCP Terraform You must configure Vault to trust HCP Terraform s identity tokens and verify them using HCP Terraform s public key  The following command configures the  jwt  auth backend in Vault to trust HCP Terraform as an OIDC identity 
provider     shell vault write auth jwt config       oidc discovery url  https   app terraform io        bound issuer  https   app terraform io      The  oidc discovery url  and  bound issuer  should both be the root address of HCP Terraform  including the scheme and without a trailing slash        Terraform Enterprise Specific Requirements  If you are using a custom or self signed CA certificate you may need to specify the CA certificate or chain of certificates  in PEM format  via the   oidc discovery ca pem    vault api docs auth jwt oidc discovery ca pem  argument as shown in the following example command     shell vault write auth jwt config       oidc discovery url  https   app terraform io        bound issuer  https   app terraform io        oidc discovery ca pem  my cert pem      In the example above   my cert pem  is a PEM formatted file containing the certificate       Create a Vault Policy  You must create a Vault policy that controls what paths and secrets your HCP Terraform workspace can access in Vault  Create a file called tfc policy hcl with the following content     hcl   Allow tokens to query themselves path  auth token lookup self      capabilities     read        Allow tokens to renew themselves path  auth token renew self        capabilities     update        Allow tokens to revoke themselves path  auth token revoke self        capabilities     update        Configure the actual secrets the token should have access to path  secret        capabilities     read          Then create the policy in Vault      shell vault policy write tfc policy tfc policy hcl          Create a JWT Auth Role Create a Vault role that HCP Terraform can use when authenticating to Vault   Vault offers a lot of flexibility in determining how to map roles and permissions in Vault to workspaces in HCP Terraform  You can have one role for each workspace  one role for a group of workspaces  or one role for all workspaces in an organization  You can also configure different 
roles for the plan and apply phases of a run        Note    If you set your  user claim  to be per workspace  then Vault ties the entity it creates to that workspace s name  If you rename the workspace tied to your  user claim   Vault will create an additional identity object  To avoid this  update the alias name in Vault to your new workspace name before you update it in HCP Terraform   The following example creates a role called  tfc role   The role is mapped to a single workspace and HCP Terraform can use it for both plan and apply runs   Create a file called  vault jwt auth role json  with the following content     json      policies     tfc policy       bound audiences     vault workload identity       bound claims type    glob      bound claims          sub    organization my org name project my project name workspace my workspace name run phase            user claim    terraform full workspace      role type    jwt      token ttl    20m         Then run the following command to create a role named  tfc role  with this configuration in Vault     shell vault write auth jwt role tfc role  vault jwt auth role json     To understand all the available options for matching bound claims  refer to the  Terraform workload identity claim specification   terraform cloud docs workspaces dynamic provider credentials  and the  Vault documentation on configuring bound claims   vault docs auth jwt bound claims   To understand all the options available when configuring Vault JWT auth roles  refer to the  Vault API documentation   vault api docs auth jwt create role         Warning    you should always check  at minimum  the audience and the name of the organization in order to prevent unauthorized access from other HCP Terraform organizations        Token TTLs We recommend setting token ttl to a relatively short value  HCP Terraform can renew the token periodically until the plan or apply is complete  then revoke it to prevent it from being used further      Configure HCP 
Terraform You ll need to set some environment variables in your HCP Terraform workspace in order to configure HCP Terraform to authenticate with Vault using dynamic credentials  You can set these as workspace variables  or if you d like to share one Vault role across multiple workspaces  you can use a variable set  When you configure dynamic provider credentials with multiple provider configurations of the same type  use either a default variable or a tagged alias variable name for each provider configuration  Refer to  Specifying Multiple Configurations   specifying multiple configurations  for more details       Required Environment Variables   Variable                                                                                                 Value                                                                              Notes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            TFC VAULT PROVIDER AUTH  br    TFC VAULT PROVIDER AUTH  TAG   br     Default variable not supported      true                                                                              Requires   v1 7 0   or later if self managing agents  Must be present and set to  true   or HCP Terraform will not attempt to authenticate to Vault                                                      
                                                                                          TFC VAULT ADDR  br    TFC VAULT ADDR  TAG   br    TFC DEFAULT VAULT ADDR                                The address of the Vault instance to authenticate against                          Requires   v1 7 0   or later if self managing agents  Will also be used to set  VAULT ADDR  in the run environment                                                                                                                                                                                 TFC VAULT RUN ROLE  br    TFC VAULT RUN ROLE  TAG   br    TFC DEFAULT VAULT RUN ROLE                    The name of the Vault role to authenticate against   tfc role   in our example     Requires   v1 7 0   or later if self managing agents  Optional if  TFC VAULT PLAN ROLE  and  TFC VAULT APPLY ROLE  are both provided  These variables are described  below   terraform cloud docs workspaces dynamic provider credentials vault configuration optional environment variables         Optional Environment Variables You may need to set these variables  depending on your Vault configuration and use case     Variable                                                                                                                                       Value                                                                                            Notes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        
                                                                          TFC VAULT NAMESPACE  br    TFC VAULT NAMESPACE  TAG   br    TFC DEFAULT VAULT NAMESPACE                                                       The namespace to use when authenticating to Vault                                                Requires   v1 7 0   or later if self managing agents  Will also be used to set  VAULT NAMESPACE  in the run environment                                                                                             TFC VAULT AUTH PATH  br    TFC VAULT AUTH PATH  TAG   br    TFC DEFAULT VAULT AUTH PATH                                                       The path where the JWT auth backend is mounted in Vault  Defaults to jwt                         Requires   v1 7 0   or later if self managing agents                                                                                                                                                                TFC VAULT WORKLOAD IDENTITY AUDIENCE  br    TFC VAULT WORKLOAD IDENTITY AUDIENCE  TAG   br    TFC DEFAULT VAULT WORKLOAD IDENTITY AUDIENCE    Will be used as the  aud  claim for the identity token  Defaults to  vault workload identity     Requires   v1 7 0   or later if self managing agents  Must match the  bound audiences  configured for the role in Vault                                                                                             TFC VAULT PLAN ROLE  br    TFC VAULT PLAN ROLE  TAG   br    TFC DEFAULT VAULT PLAN ROLE                                                       The Vault role to use for the plan phase of a run                                                Requires   v1 7 0   or later if self managing agents  Will fall back to the value of  TFC VAULT RUN ROLE  if not provided                                                                                           TFC VAULT APPLY ROLE  br    TFC VAULT APPLY ROLE  TAG   br    TFC DEFAULT VAULT APPLY ROLE                                
                    The Vault role to use for the apply phase of a run                                               Requires   v1 7 0   or later if self managing agents  Will fall back to the value of  TFC VAULT RUN ROLE  if not provided                                                                                           TFC VAULT ENCODED CACERT  br    TFC VAULT ENCODED CACERT  TAG   br    TFC DEFAULT VAULT ENCODED CACERT                                        A PEM encoded CA certificate that has been Base64 encoded                                        Requires   v1 9 0   or later if self managing agents  This certificate will be used when connecting to Vault  May be required when connecting to Vault instances that use a custom or self signed certificate        Vault Provider Configuration Once you set up dynamic credentials for a workspace  HCP Terraform automatically authenticates to Vault for each run  Do not pass the  address    token   or  namespace  arguments into the provider configuration block  HCP Terraform sets these values as environment variables in the run environment   You can use the Vault provider to read static secrets from Vault and use them with other Terraform resources  You can also access the other resources and data sources available in the  Vault provider documentation  https   registry terraform io providers hashicorp vault latest   You must adjust your  Vault policy   create a vault policy  to give your HCP Terraform workspace access to all required Vault paths        Important    data sources that use secrets engines to generate dynamic secrets must not be used with Vault dynamic credentials  You can use Vault s dynamic secrets engines for AWS  GCP  and Azure by adding additional configurations  For more details  see  Vault backed dynamic credentials   terraform cloud docs workspaces dynamic provider credentials vault backed        Specifying Multiple Configurations       Important    If you are self hosting  HCP Terraform 
agents   terraform cloud docs agents   ensure your agents use  v1 12 0   terraform cloud docs agents changelog 1 12 0 07 26 2023  or above  To use the latest dynamic credentials features   upgrade your agents to the latest version   terraform cloud docs agents changelog         Important    Ensure you are using version   3 18 0   or later of the   Vault provider   as the required   auth login token file   https   registry terraform io providers hashicorp vault latest docs token file  block was introduced in this provider version   You can add additional variables to handle multiple distinct Vault setups  enabling you to use multiple  provider aliases   terraform language providers configuration alias multiple provider configurations  within the same workspace  You can configure each set of credentials independently  or use default values by configuring the variables prefixed with  TFC DEFAULT     For more details  see  Specifying Multiple Configurations   terraform cloud docs workspaces dynamic provider credentials specifying multiple configurations         Required Terraform Variable  To use additional configurations  add the following code to your Terraform configuration  This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks      hcl variable  tfc vault dynamic credentials      description    Object containing Vault dynamic credentials configuration    type   object       default   object         token filename   string       address   string       namespace   string       ca cert file   string            aliases   map object         token filename   string       address   string       namespace   string       ca cert file   string                          Example Usage     hcl provider  vault         skip child token must be explicitly set to true as HCP Terraform manages the token lifecycle   skip child token   true   address            var tfc vault dynamic 
credentials default address   namespace          var tfc vault dynamic credentials default namespace    auth login token file       filename   var tfc vault dynamic credentials default token filename        provider  vault         skip child token must be explicitly set to true as HCP Terraform manages the token lifecycle   skip child token   true   alias               ALIAS1    address            var tfc vault dynamic credentials aliases  ALIAS1   address   namespace          var tfc vault dynamic credentials aliases  ALIAS1   namespace    auth login token file       filename   var tfc vault dynamic credentials aliases  ALIAS1   token filename          "}
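One value in the optional-variables table above needs a preprocessing step: `TFC_VAULT_ENCODED_CACERT` expects a PEM CA certificate that has additionally been Base64-encoded. A minimal shell sketch of producing that value (assuming a GNU/BSD `base64` on the PATH; `vault-ca.pem` and its body are placeholder values, not anything the docs mandate):

```shell
# Hedged sketch: build the value for TFC_VAULT_ENCODED_CACERT, described above
# as "a PEM-encoded CA certificate that has been Base64 encoded".
# "vault-ca.pem" is a placeholder file name with a dummy certificate body.
cat > vault-ca.pem <<'EOF'
-----BEGIN CERTIFICATE-----
MIIBplaceholdercertificatebody
-----END CERTIFICATE-----
EOF

# Base64-encode the whole PEM file onto a single line, suitable for use as a
# workspace variable value.
TFC_VAULT_ENCODED_CACERT="$(base64 < vault-ca.pem | tr -d '\n')"

# Sanity check: decoding the variable reproduces the PEM header.
printf '%s' "$TFC_VAULT_ENCODED_CACERT" | base64 -d | head -n 1
```

Decoding the resulting value should round-trip back to the original PEM file, which is a quick way to verify the variable was set correctly.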
{"questions":"terraform your HCP Terraform runs Use OpenID Connect to get short term credentials for the HCP provider in Important If you are self hosting HCP Terraform agents terraform cloud docs agents ensure your agents use v1 15 1 terraform cloud docs agents changelog 1 15 1 05 01 2024 or above To use the latest dynamic credentials features upgrade your agents to the latest version terraform cloud docs agents changelog Dynamic Credentials with the HCP Provider page title Dynamic Credentials with the HCP Provider Workspaces HCP Terraform","answers":"---\npage_title: Dynamic Credentials with the HCP Provider - Workspaces - HCP Terraform\ndescription: >-\n  Use OpenID Connect to get short-term credentials for the HCP provider in\n  your HCP Terraform runs.\n---\n\n# Dynamic Credentials with the HCP Provider\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.15.1](\/terraform\/cloud-docs\/agents\/changelog#1-15-1-05-01-2024) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can use HCP Terraform\u2019s native OpenID Connect integration with HCP to authenticate with the HCP provider using [dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials) in your HCP Terraform runs. Configuring dynamic credentials for the HCP provider requires the following steps:\n\n1. **[Configure HCP](#configure-hcp):** Set up a trust configuration between HCP and HCP Terraform. Then, you must create a [service principal in HCP](https:\/\/developer.hashicorp.com\/hcp\/docs\/hcp\/admin\/iam\/service-principals) for your HCP Terraform workspaces.\n2. 
**[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials.\n\nOnce you complete the setup, HCP Terraform automatically authenticates to HCP during each run.\n\n## Configure HCP\nYou must enable and configure a workload identity pool and provider on HCP. These instructions use the HCP CLI, but you can also use Terraform to configure HCP. Refer to our [example Terraform configuration](https:\/\/github.com\/hashicorp\/terraform-dynamic-credentials-setup-examples\/tree\/main\/hcp).\n\n#### Create a Service Principal\nCreate a service principal for HCP Terraform to assume during runs by running the following HCP command. Note the ID of the service principal you create because you will need it in the following steps. For all remaining steps, replace `HCP_PROJECT_ID` with the ID of the project that contains all the resources and workspaces that you want to manage with this service principal. If you wish to manage more than one project with dynamic credentials, it is recommended that you create multiple service principals, one for each project.\n\n```shell\nhcp iam service-principals create hcp-terraform --project=HCP_PROJECT_ID\n```\n\nGrant your service principal the necessary permissions to manage your infrastructure during runs.\n\n```shell\nhcp projects add-binding \\\n  --project=HCP_PROJECT_ID \\\n  --member=HCP_PRINCIPAL_ID \\\n  --role=roles\/contributor\n```\n\n#### Add a Workload Identity Provider\n\nNext, create a workload identity provider that HCP uses to authenticate the HCP Terraform run. 
Make sure to replace `HCP_PROJECT_ID`, `ORG_NAME`, `PROJECT_NAME`, and `WORKSPACE_NAME` with their respective values before running the command.\n\n```shell\nhcp iam workload-identity-providers create-oidc hcp-terraform-dynamic-credentials \\\n  --service-principal=iam\/project\/HCP_PROJECT_ID\/service-principal\/hcp-terraform \\\n  --issuer=https:\/\/app.terraform.io \\\n  --allowed-audience=hcp.workload.identity \\\n  --conditional-access='jwt_claims.sub matches `^organization:ORG_NAME:project:PROJECT_NAME:workspace:WORKSPACE_NAME:run_phase:.*`' \\\n  --description=\"Allow HCP Terraform agents to act as the hcp-terraform service principal\"\n```\n\n## Configure HCP Terraform\nNext, you need to set environment variables in your HCP Terraform workspace to configure HCP Terraform to authenticate with HCP using dynamic credentials. You can set these as workspace variables or use a variable set to share one HCP service principal across multiple workspaces. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. 
Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.\n\n### Required Environment Variables\n| Variable                                                                                                                               | Value                                                                                                 | Notes                                                                                                                                                                                                   |\n|----------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_HCP_PROVIDER_AUTH`<br \/>`TFC_HCP_PROVIDER_AUTH[_TAG]`<br \/>_(Default variable not supported)_                                     | `true`                                                                                                | Requires **v1.15.1** or later if you use self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to use dynamic credentials to authenticate to HCP.                  |\n| `TFC_HCP_RUN_PROVIDER_RESOURCE_NAME`<br \/>`TFC_HCP_RUN_PROVIDER_RESOURCE_NAME[_TAG]`<br \/>`TFC_DEFAULT_HCP_RUN_PROVIDER_RESOURCE_NAME` | The resource name of the workload identity provider that will be used to assume the service principal | Requires **v1.15.1** or later if you use self-managing agents. Optional if you provide `PLAN_PROVIDER_RESOURCE_NAME` and `APPLY_PROVIDER_RESOURCE_NAME`. [Learn more](#optional-environment-variables). 
|\n\n### Optional Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable                                                                                                                                     | Value                                                                                                                                                                                                                                                                                             | Notes                                                                                                                               |\n|----------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_HCP_WORKLOAD_IDENTITY_AUDIENCE`<br \/>`TFC_HCP_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br \/>`TFC_DEFAULT_HCP_WORKLOAD_IDENTITY_AUDIENCE`       | HCP Terraform uses this as the `aud` claim for the identity token. Defaults to the provider resource name for the current run phase, which HCP Terraform derives from the values you provide for `RUN_PROVIDER_RESOURCE_NAME`, `PLAN_PROVIDER_RESOURCE_NAME`, and `APPLY_PROVIDER_RESOURCE_NAME`. | Requires **v1.15.1** or later if you use self-managing agents. This is one of the default `aud` formats that HCP accepts.           
|\n| `TFC_HCP_PLAN_PROVIDER_RESOURCE_NAME`<br \/>`TFC_HCP_PLAN_PROVIDER_RESOURCE_NAME[_TAG]`<br \/>`TFC_DEFAULT_HCP_PLAN_PROVIDER_RESOURCE_NAME`    | The resource name of the workload identity provider that HCP Terraform will use to authenticate the agent during the plan phase of a run.                                                                                                                                                          | Requires **v1.15.1** or later if self-managing agents. Will fall back to the value of `RUN_PROVIDER_RESOURCE_NAME` if not provided. |\n| `TFC_HCP_APPLY_PROVIDER_RESOURCE_NAME`<br \/>`TFC_HCP_APPLY_PROVIDER_RESOURCE_NAME[_TAG]`<br \/>`TFC_DEFAULT_HCP_APPLY_PROVIDER_RESOURCE_NAME` | The resource name of the workload identity provider that HCP Terraform will use to authenticate the agent during the apply phase of a run.                                                                                                                                                         | Requires **v1.15.1** or later if self-managing agents. Will fall back to the value of `RUN_PROVIDER_RESOURCE_NAME` if not provided. |\n\n## Configure the HCP Provider\n\nDo not set the `HCP_CRED_FILE` environment variable when configuring the HCP provider, or `HCP_CRED_FILE` will conflict with the dynamic credentials authentication process.\n\n### Specifying Multiple Configurations\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.15.1](\/terraform\/cloud-docs\/agents\/changelog#1-15-1-05-01-2024) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can add additional variables to handle multiple distinct HCP setups, enabling you to use multiple [provider aliases](\/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations) within the same workspace. 
You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.\n\nFor more details, refer to [Specifying Multiple Configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations).\n\n#### Required Terraform Variable\n\nAdd the following variable to your Terraform configuration to set up additional dynamic credential configurations with the HCP provider. This variable lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.\n\n```hcl\nvariable \"tfc_hcp_dynamic_credentials\" {\n  description = \"Object containing HCP dynamic credentials configuration\"\n  type = object({\n    default = object({\n      credential_file = string\n    })\n    aliases = map(object({\n      credential_file = string\n    }))\n  })\n}\n```\n\n#### Example Usage\n\n```hcl\nprovider \"hcp\" {\n  credential_file = var.tfc_hcp_dynamic_credentials.default.credential_file\n}\n\nprovider \"hcp\" {\n  alias = \"ALIAS1\"\n  credential_file = var.tfc_hcp_dynamic_credentials.aliases[\"ALIAS1\"].credential_file\n}\n```","site":"terraform"}
{"questions":"terraform Dynamic Credentials with the AWS Provider your HCP Terraform runs Use OpenID Connect to get short term credentials for the AWS Terraform provider in Important If you are self hosting HCP Terraform agents terraform cloud docs agents ensure your agents use v1 7 0 terraform cloud docs agents changelog 1 7 0 03 02 2023 or above To use the latest dynamic credentials features upgrade your agents to the latest version terraform cloud docs agents changelog page title Dynamic Credentials with the AWS Provider Workspaces HCP Terraform","answers":"---\npage_title: Dynamic Credentials with the AWS Provider - Workspaces - HCP Terraform\ndescription: >-\n  Use OpenID Connect to get short-term credentials for the AWS Terraform provider in\n  your HCP Terraform runs.\n---\n\n# Dynamic Credentials with the AWS Provider\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.7.0](\/terraform\/cloud-docs\/agents\/changelog#1-7-0-03-02-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can use HCP Terraform\u2019s native OpenID Connect integration with AWS to get [dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials) for the AWS provider in your HCP Terraform runs. Configuring the integration requires the following steps:\n\n1. **[Configure AWS](#configure-aws):** Set up a trust configuration between AWS and HCP Terraform. Then, you must create AWS roles and policies for your HCP Terraform workspaces.\n2. **[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials.\n\nOnce you complete the setup, HCP Terraform automatically authenticates to AWS during each run. 
The AWS provider authentication is valid for the length of the plan or apply.\n\n## Configure AWS\nYou must enable and configure an OIDC identity provider and accompanying role and trust policy on AWS. These instructions use the AWS console, but you can also use Terraform to configure AWS. Refer to our [example Terraform configuration](https:\/\/github.com\/hashicorp\/terraform-dynamic-credentials-setup-examples\/tree\/main\/aws).\n### Create an OIDC Identity Provider\nAWS documentation for setting this up through the AWS console or API can be found here: [Creating OpenID Connect (OIDC) identity providers - AWS Identity and Access Management](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_roles_providers_create_oidc.html).\n\nThe `provider URL` should be set to the address of HCP Terraform (e.g., https:\/\/app.terraform.io **without** a trailing slash), and the `audience` should be set to `aws.workload.identity` or the value of `TFC_AWS_WORKLOAD_IDENTITY_AUDIENCE`, if configured.\n### Configure a Role and Trust Policy\nYou must configure a role and corresponding trust policy. 
Amazon documentation on setting this up can be found here: [Creating a role for web identity or OpenID Connect Federation (console) - AWS Identity and Access Management](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_roles_create_for-idp_oidc.html).\nThe trust policy will be of the form:\n```json\n{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Principal\": {\n                \"Federated\": \"OIDC_PROVIDER_ARN\"\n            },\n            \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n            \"Condition\": {\n                \"StringEquals\": {\n                    \"SITE_ADDRESS:aud\": \"AUDIENCE_VALUE\",\n                    \"SITE_ADDRESS:sub\": \"organization:ORG_NAME:project:PROJECT_NAME:workspace:WORKSPACE_NAME:run_phase:RUN_PHASE\"\n                }\n            }\n        }\n    ]\n}\n```\nwith the capitalized values replaced with the following:\n* **OIDC_PROVIDER_ARN**: The ARN from the OIDC provider resource created in the previous step\n* **SITE_ADDRESS**: The address of HCP Terraform with `https:\/\/` stripped (e.g., `app.terraform.io`)\n* **AUDIENCE_VALUE**: This should be set to `aws.workload.identity` unless a non-default audience has been specified in TFC\n* **ORG_NAME**: The organization name this policy will apply to, such as `my-org-name`\n* **PROJECT_NAME**: The project name this policy will apply to, such as `my-project-name`\n* **WORKSPACE_NAME**: The workspace name this policy will apply to, such as `my-workspace-name`\n* **RUN_PHASE**: The run phase this policy will apply to, currently one of `plan` or `apply`.\n\n-> **Note:** If you want different permissions for the plan and apply phases, you must create a separate role and trust policy for each run phase to match each phase to the correct access level.\nIf you use the same permissions regardless of run phase, you can modify the condition as shown below to use 
`StringLike` instead of `StringEquals` for the sub and include a `*` after `run_phase:` to perform a wildcard match:\n\n```json\n{\n    \"Condition\": {\n        \"StringEquals\": {\n            \"SITE_ADDRESS:aud\": \"AUDIENCE_VALUE\"\n        },\n        \"StringLike\": {\n            \"SITE_ADDRESS:sub\": \"organization:ORG_NAME:project:PROJECT_NAME:workspace:WORKSPACE_NAME:run_phase:*\"\n        }\n    }\n}\n```\n\n!> **Warning**: You should always check, at minimum, the audience and the organization name to prevent unauthorized access from other HCP Terraform organizations!\n\nYou must add a permissions policy to the role that defines which operations within AWS the role is allowed to perform. For example, the policy below allows fetching a list of S3 buckets:\n\n```json\n{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"s3:ListBucket\"\n            ],\n            \"Resource\": \"*\"\n        }\n    ]\n}\n```\n\n## Configure HCP Terraform\nYou\u2019ll need to set some environment variables in your HCP Terraform workspace to configure HCP Terraform to authenticate with AWS using dynamic credentials. You can set these as workspace variables, or if you\u2019d like to share one AWS role across multiple workspaces, you can use a variable set. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. 
Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.\n\n### Required Environment Variables\n| Variable                                                                                     | Value                                 | Notes                                                                                                                                                                                                                                                                                           |\n|----------------------------------------------------------------------------------------------|---------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_AWS_PROVIDER_AUTH`<br \/>`TFC_AWS_PROVIDER_AUTH[_TAG]`<br \/>_(Default variable not supported)_                                   | `true`                                | Requires **v1.7.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate to AWS.                                                                                                                                              |\n| `TFC_AWS_RUN_ROLE_ARN`<br \/>`TFC_AWS_RUN_ROLE_ARN[_TAG]`<br \/>`TFC_DEFAULT_AWS_RUN_ROLE_ARN` | The ARN of the role to assume in AWS. | Requires **v1.7.0** or later if self-managing agents. Optional if `TFC_AWS_PLAN_ROLE_ARN` and `TFC_AWS_APPLY_ROLE_ARN` are both provided. 
These variables are described [below](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/aws-configuration#optional-environment-variables) |\n\n### Optional Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable                                                                                                                               | Value                                                                                        | Notes                                                                                                                        |\n|----------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_AWS_WORKLOAD_IDENTITY_AUDIENCE`<br \/>`TFC_AWS_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br \/>`TFC_DEFAULT_AWS_WORKLOAD_IDENTITY_AUDIENCE` | Will be used as the `aud` claim for the identity token. Defaults to `aws.workload.identity`. | Requires **v1.7.0** or later if self-managing agents.                                                                        |\n| `TFC_AWS_PLAN_ROLE_ARN`<br \/>`TFC_AWS_PLAN_ROLE_ARN[_TAG]`<br \/>`TFC_DEFAULT_AWS_PLAN_ROLE_ARN`                                        | The ARN of the role to use for the plan phase of a run.                                      | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_AWS_RUN_ROLE_ARN` if not provided. |\n| `TFC_AWS_APPLY_ROLE_ARN`<br \/>`TFC_AWS_APPLY_ROLE_ARN[_TAG]`<br \/>`TFC_DEFAULT_AWS_APPLY_ROLE_ARN`                                     | The ARN of the role to use for the apply phase of a run.                                     
| Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_AWS_RUN_ROLE_ARN` if not provided. |\n\n\n## Configure the AWS Provider\nMake sure that you\u2019re passing a value for the `region` argument into the provider configuration block or setting the `AWS_REGION` variable in your workspace.\n\nMake sure that you\u2019re not using any of the other arguments or methods mentioned in the [authentication and configuration](https:\/\/registry.terraform.io\/providers\/hashicorp\/aws\/latest\/docs#authentication-and-configuration) section of the provider documentation as these settings may interfere with dynamic provider credentials.\n\n### Specifying Multiple Configurations\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.12.0](\/terraform\/cloud-docs\/agents\/changelog#1-12-0-07-26-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can add additional variables to handle multiple distinct AWS setups, enabling you to use multiple [provider aliases](\/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.\n\nFor more details, see [Specifying Multiple Configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations).\n\n#### Required Terraform Variable\n\nTo use additional configurations, add the following code to your Terraform configuration. 
This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.\n\n```hcl\nvariable \"tfc_aws_dynamic_credentials\" {\n  description = \"Object containing AWS dynamic credentials configuration\"\n  type = object({\n    default = object({\n      shared_config_file = string\n    })\n    aliases = map(object({\n      shared_config_file = string\n    }))\n  })\n}\n```\n\n#### Example Usage\n\n```hcl\nprovider \"aws\" {\n  shared_config_files = [var.tfc_aws_dynamic_credentials.default.shared_config_file]\n}\n\nprovider \"aws\" {\n  alias = \"ALIAS1\"\n  shared_config_files = [var.tfc_aws_dynamic_credentials.aliases[\"ALIAS1\"].shared_config_file]\n}\n```","site":"terraform","answers_cleaned":"    page title  Dynamic Credentials with the AWS Provider   Workspaces   HCP Terraform description       Use OpenID Connect to get short term credentials for the AWS Terraform provider in   your HCP Terraform runs         Dynamic Credentials with the AWS Provider       Important    If you are self hosting  HCP Terraform agents   terraform cloud docs agents   ensure your agents use  v1 7 0   terraform cloud docs agents changelog 1 7 0 03 02 2023  or above  To use the latest dynamic credentials features   upgrade your agents to the latest version   terraform cloud docs agents changelog    You can use HCP Terraform s native OpenID Connect integration with AWS to get  dynamic credentials   terraform cloud docs workspaces dynamic provider credentials  for the AWS provider in your HCP Terraform runs  Configuring the integration requires the following steps   1     Configure AWS   configure aws     Set up a trust configuration between AWS and HCP Terraform  Then  you must create AWS roles and policies for your HCP Terraform workspaces  2     Configure HCP Terraform   configure hcp terraform     Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials   Once you 
complete the setup  HCP Terraform automatically authenticates to AWS during each run  The AWS provider authentication is valid for the length of the plan or apply      Configure AWS You must enable and configure an OIDC identity provider and accompanying role and trust policy on AWS  These instructions use the AWS console  but you can also use Terraform to configure AWS  Refer to our  example Terraform configuration  https   github com hashicorp terraform dynamic credentials setup examples tree main aws       Create an OIDC Identity Provider AWS documentation for setting this up through the AWS console or API can be found here   Creating OpenID Connect  OIDC  identity providers   AWS Identity and Access Management  https   docs aws amazon com IAM latest UserGuide id roles providers create oidc html    The  provider URL  should be set to the address of HCP Terraform  e g   https   app terraform io   without   a trailing slash   and the  audience  should be set to  aws workload identity  or the value of  TFC AWS WORKLOAD IDENTITY AUDIENCE   if configured      Configure a Role and Trust Policy You must configure a role and corresponding trust policy  Amazon documentation on setting this up can be found here   Creating a role for web identity or OpenID Connect Federation  console    AWS Identity and Access Management  https   docs aws amazon com IAM latest UserGuide id roles create for idp oidc html   The trust policy will be of the form     json        Version    2012 10 17        Statement                            Effect    Allow                Principal                      Federated    OIDC PROVIDER ARN                              Action    sts AssumeRoleWithWebIdentity                Condition                      StringEquals                          SITE ADDRESS aud    AUDIENCE VALUE                        SITE ADDRESS sub    organization ORG NAME project PROJECT NAME workspace WORKSPACE NAME run phase RUN PHASE                                                 
       with the capitalized values replaced with the following      OIDC PROVIDER ARN    The ARN from the OIDC provider resource created in the previous step     SITE ADDRESS    The address of HCP Terraform with  https     stripped   e g    app terraform io       AUDIENCE VALUE    This should be set to  aws workload identity  unless a non default audience has been specified in TFC     ORG NAME    The organization name this policy will apply to  such as  my org name      PROJECT NAME    The project name that this policy will apply to  such as  my project name      WORKSPACE NAME    The workspace name this policy will apply to  such as  my workspace name      RUN PHASE    The run phase this policy will apply to  currently one of  plan  or  apply         Note    if different permissions are desired for plan and apply  then two separate roles and trust policies must be created for each of these run phases to properly match them to the correct access level  If the same permissions will be used regardless of run phase  then the condition can be modified like the below to use  StringLike  instead of  StringEquals  for the sub and include a     after  run phase   to perform a wildcard match      json        Condition              StringEquals                  SITE ADDRESS aud    AUDIENCE VALUE                      StringLike                  SITE ADDRESS sub    organization ORG NAME project PROJECT NAME workspace WORKSPACE NAME run phase                                Warning    you should always check  at minimum  the audience and the name of the organization in order to prevent unauthorized access from other HCP Terraform organizations   A permissions policy needs to be added to the role which defines what operations within AWS the role is allowed to perform  As an example  the below policy allows for fetching a list of S3 buckets      json        Version    2012 10 17        Statement                            Effect    Allow                Action                      
s3 ListBucket                              Resource                                 Configure HCP Terraform You ll need to set some environment variables in your HCP Terraform workspace in order to configure HCP Terraform to authenticate with AWS using dynamic credentials  You can set these as workspace variables  or if you d like to share one AWS role across multiple workspaces  you can use a variable set  When you configure dynamic provider credentials with multiple provider configurations of the same type  use either a default variable or a tagged alias variable name for each provider configuration  Refer to  Specifying Multiple Configurations   specifying multiple configurations  for more details       Required Environment Variables   Variable                                                                                       Value                                   Notes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           TFC AWS PROVIDER AUTH  br    TFC AWS PROVIDER AUTH  TAG   br     Default variable not supported                                        true                                   Requires   v1 7 0   or later if self managing agents  Must be present and set to  true   or HCP Terraform will not attempt to authenticate to AWS                                                                             
| `TFC_AWS_RUN_ROLE_ARN`<br \/>`TFC_AWS_RUN_ROLE_ARN[_TAG]`<br \/>`TFC_DEFAULT_AWS_RUN_ROLE_ARN` | The ARN of the role to assume in AWS. | Requires **v1.7.0** or later if self-managing agents. Optional if `TFC_AWS_PLAN_ROLE_ARN` and `TFC_AWS_APPLY_ROLE_ARN` are both provided. These variables are described [below](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/aws-configuration#optional-environment-variables) |\n\n### Optional Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable | Value | Notes |\n|----------|-------|-------|\n| `TFC_AWS_WORKLOAD_IDENTITY_AUDIENCE`<br \/>`TFC_AWS_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br \/>`TFC_DEFAULT_AWS_WORKLOAD_IDENTITY_AUDIENCE` | Will be used as the `aud` claim for the identity token. Defaults to `aws.workload.identity`. | Requires **v1.7.0** or later if self-managing agents. |\n| `TFC_AWS_PLAN_ROLE_ARN`<br \/>`TFC_AWS_PLAN_ROLE_ARN[_TAG]`<br \/>`TFC_DEFAULT_AWS_PLAN_ROLE_ARN` | The ARN of the role to use for the plan phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_AWS_RUN_ROLE_ARN` if not provided. |\n| `TFC_AWS_APPLY_ROLE_ARN`<br \/>`TFC_AWS_APPLY_ROLE_ARN[_TAG]`<br \/>`TFC_DEFAULT_AWS_APPLY_ROLE_ARN` | The ARN of the role to use for the apply phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_AWS_RUN_ROLE_ARN` if not provided. |\n\n## Configure the AWS Provider\nMake sure that you\u2019re passing a value for the `region` argument into the provider configuration block or setting the `AWS_REGION` variable in your workspace.\n\nMake sure that you\u2019re not using any of the other arguments or methods mentioned in the [authentication and configuration](https:\/\/registry.terraform.io\/providers\/hashicorp\/aws\/latest\/docs#authentication-and-configuration) section of the provider documentation as these settings may interfere with dynamic provider credentials.\n\n### Specifying Multiple Configurations\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.12.0](\/terraform\/cloud-docs\/agents\/changelog#1-12-0-07-26-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can add additional variables to handle multiple distinct AWS setups, enabling you to use multiple [provider aliases](\/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.\n\nFor more details, see [Specifying Multiple Configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations).\n\n#### Required Terraform Variable\n\nTo use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.\n\n```hcl\nvariable \"tfc_aws_dynamic_credentials\" {\n  description = \"Object containing AWS dynamic credentials configuration\"\n  type = object({\n    default = object({\n      shared_config_file = string\n    })\n    aliases = map(object({\n      shared_config_file = string\n    }))\n  })\n}\n```\n\n#### Example Usage\n\n```hcl\nprovider \"aws\" {\n  shared_config_files = [var.tfc_aws_dynamic_credentials.default.shared_config_file]\n}\n\nprovider \"aws\" {\n  alias               = \"ALIAS1\"\n  shared_config_files = [var.tfc_aws_dynamic_credentials.aliases[\"ALIAS1\"].shared_config_file]\n}\n```"}
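The AWS record above says to pass a `region` into the provider block (or set `AWS_REGION` in the workspace) while HCP Terraform wires the credentials through `shared_config_files`. A minimal sketch combining the two, assuming the `tfc_aws_dynamic_credentials` variable from the record and a placeholder region:

```hcl
# Minimal sketch: region value is a placeholder; HCP Terraform injects the
# shared config file path through the tfc_aws_dynamic_credentials variable.
provider "aws" {
  region              = "us-east-1" # placeholder; alternatively set AWS_REGION in the workspace
  shared_config_files = [var.tfc_aws_dynamic_credentials.default.shared_config_file]
}

# Aliased configuration for a second AWS setup, fed by the tagged
# TFC_AWS_*[_TAG] variables described above.
provider "aws" {
  alias               = "ALIAS1"
  region              = "us-east-1" # placeholder
  shared_config_files = [var.tfc_aws_dynamic_credentials.aliases["ALIAS1"].shared_config_file]
}
```

Avoid adding any other authentication arguments here, since they may interfere with the dynamically supplied credentials.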
{"questions":"terraform your HCP Terraform runs page title Dynamic Credentials with the Azure Providers HCP Terraform Dynamic Credentials with the Azure Provider Important Ensure you are using version 3 25 0 or later of the AzureRM provider and version 2 29 0 or later of the Microsoft Entra ID provider previously Azure Active Directory as required OIDC functionality was introduced in these provider versions Use OpenID Connect to get short term credentials for the Azure Terraform providers in","answers":"---\npage_title: Dynamic Credentials with the Azure Providers - HCP Terraform\ndescription: >-\n  Use OpenID Connect to get short-term credentials for the Azure Terraform providers in\n  your HCP Terraform runs.\n---\n\n# Dynamic Credentials with the Azure Provider\n\n~> **Important:** Ensure you are using version **3.25.0** or later of the **AzureRM provider** and version **2.29.0** or later of the **Microsoft Entra ID provider** (previously Azure Active Directory) as required OIDC functionality was introduced in these provider versions.\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.7.0](\/terraform\/cloud-docs\/agents\/changelog#1-7-0-03-02-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can use HCP Terraform\u2019s native OpenID Connect integration with Azure to get [dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials) for the AzureRM or Microsoft Entra ID providers in your HCP Terraform runs. Configuring the integration requires the following steps:\n\n1. **[Configure Azure](#configure-azure):** Set up a trust configuration between Azure and HCP Terraform. Then, you must create Azure roles and policies for your HCP Terraform workspaces.\n2. 
**[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials.\n\nOnce you complete the setup, HCP Terraform automatically authenticates to Azure during each run. The Azure provider authentication is valid for the length of the plan or apply.\n\n!> **Warning:** Dynamic credentials with the Azure providers do not work when your Terraform Enterprise instance uses a custom or self-signed certificate. This limitation is due to restrictions in Azure.\n\n## Configure Azure\nYou must enable and configure an application and service principal with accompanying federated credentials and permissions on Azure. These instructions use the Azure portal, but you can also use Terraform to configure Azure. Refer to our [example Terraform configuration](https:\/\/github.com\/hashicorp\/terraform-dynamic-credentials-setup-examples\/tree\/main\/azure).\n### Create an Application and Service Principal\nFollow the steps mentioned in the AzureRM provider docs here: [Creating the Application and Service Principal](https:\/\/registry.terraform.io\/providers\/hashicorp\/azurerm\/latest\/docs\/guides\/service_principal_oidc#creating-the-application-and-service-principal).\n\nAs mentioned in the documentation it will be important to make note of the `client_id` for the application as you will use this later for authentication.\n\n-> **Note:** you will want to skip the `\u201cConfigure Microsoft Entra ID Application to Trust a GitHub Repository\u201d` section as this does not apply here.\n\n### Grant the Application Access to Manage Resources in Your Azure Subscription\nYou must now give the created Application permission to modify resources within your Subscription.\n\nFollow the steps mentioned in the AzureRM provider docs here: [Granting the Application access to manage resources in your Azure 
Subscription](https:\/\/registry.terraform.io\/providers\/hashicorp\/azurerm\/latest\/docs\/guides\/service_principal_oidc#granting-the-application-access-to-manage-resources-in-your-azure-subscription).\n\n### Configure Microsoft Entra ID Application to Trust a Generic Issuer\nFinally, you must create federated identity credentials which validate the contents of the token sent to Azure from HCP Terraform.\n\nFollow the steps mentioned in the AzureRM provider docs here: [Configure Azure Microsoft Entra ID Application to Trust a Generic Issuer](https:\/\/registry.terraform.io\/providers\/hashicorp\/azurerm\/latest\/docs\/guides\/service_principal_oidc#configure-azure-active-directory-application-to-trust-a-generic-issuer).\n\nThe following information should be specified:\n* **Federated credential scenario**: Must be set to `Other issuer`.\n* **Issuer**: The address of HCP Terraform (e.g., https:\/\/app.terraform.io).\n    * **Important**: make sure this value starts with **https:\/\/** and does _not_ have a trailing slash.\n* **Subject identifier**: The subject identifier from HCP Terraform that this credential will match. This will be in the form `organization:my-org-name:project:my-project-name:workspace:my-workspace-name:run_phase:plan` where the `run_phase` can be one of `plan` or `apply`.\n* **Name**: A name for the federated credential, such as `tfc-plan-credential`. Note that this cannot be changed later.\n\nThe following is optional, but may be desired:\n* **Audience**: Enter the audience value that will be set when requesting the identity token. This will be `api:\/\/AzureADTokenExchange` by default. 
This should be set to the value of `TFC_AZURE_WORKLOAD_IDENTITY_AUDIENCE` if this has been configured.\n\n-> **Note:** because the `Subject identifier` for federated credentials is a direct string match, two federated identity credentials need to be created for each workspace using dynamic credentials: one that matches `run_phase:plan` and one that matches `run_phase:apply`.\n\n## Configure HCP Terraform\nYou\u2019ll need to set some environment variables in your HCP Terraform workspace in order to configure HCP Terraform to authenticate with Azure using dynamic credentials. You can set these as workspace variables. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.\n### Required Environment Variables\n\n| Variable                                                                                               | Value                                                                                    | Notes                                                                                                                                                                                                                                                                                                   |\n|--------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_AZURE_PROVIDER_AUTH`<br 
\/>`TFC_AZURE_PROVIDER_AUTH[_TAG]`<br \/>_(Default variable not supported)_ | `true`                                                                                   | Requires **v1.7.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate to Azure.                                                                                                                                                    |\n| `TFC_AZURE_RUN_CLIENT_ID`<br \/>`TFC_AZURE_RUN_CLIENT_ID[_TAG]`<br \/>`TFC_DEFAULT_AZURE_RUN_CLIENT_ID`  | The client ID for the Service Principal \/ Application used when authenticating to Azure. | Requires **v1.7.0** or later if self-managing agents. Optional if `TFC_AZURE_PLAN_CLIENT_ID` and `TFC_AZURE_APPLY_CLIENT_ID` are both provided. These variables are described [below](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/azure-configuration#optional-environment-variables) |\n\n### Optional Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable                                                                                                                                     | Value                                                                                             | Notes                                                                                                                           |\n|----------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_AZURE_WORKLOAD_IDENTITY_AUDIENCE`<br \/>`TFC_AZURE_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br \/>`TFC_DEFAULT_AZURE_WORKLOAD_IDENTITY_AUDIENCE` | Will be used as the 
`aud` claim for the identity token. Defaults to `api:\/\/AzureADTokenExchange`. | Requires **v1.7.0** or later if self-managing agents.                                                                           |\n| `TFC_AZURE_PLAN_CLIENT_ID`<br \/>`TFC_AZURE_PLAN_CLIENT_ID[_TAG]`<br \/>`TFC_DEFAULT_AZURE_PLAN_CLIENT_ID`                                     | The client ID for the Service Principal \/ Application to use for the plan phase of a run.         | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_AZURE_RUN_CLIENT_ID` if not provided. |\n| `TFC_AZURE_APPLY_CLIENT_ID`<br \/>`TFC_AZURE_APPLY_CLIENT_ID[_TAG]`<br \/>`TFC_DEFAULT_AZURE_APPLY_CLIENT_ID`                                  | The client ID for the Service Principal \/ Application to use for the apply phase of a run.        | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_AZURE_RUN_CLIENT_ID` if not provided. |\n\n## Configure the AzureRM or Microsoft Entra ID Provider\nMake sure that you\u2019re passing values for the `subscription_id` and `tenant_id` arguments into the provider configuration block or setting the `ARM_SUBSCRIPTION_ID` and `ARM_TENANT_ID` variables in your workspace.\n\nMake sure that you\u2019re _not_ setting values for `client_id`, `use_oidc`, or `oidc_token` in the provider or setting any of `ARM_CLIENT_ID`, `ARM_USE_OIDC`, `ARM_OIDC_TOKEN`.\n\n### Specifying Multiple Configurations\n\n~> **Important:** Ensure you are using version **3.60.0** or later of the **AzureRM provider** and version **2.43.0** or later of the **Microsoft Entra ID provider** as required functionality was introduced in these provider versions.\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.12.0](\/terraform\/cloud-docs\/agents\/changelog#1-12-0-07-26-2023) or above. 
To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can add additional variables to handle multiple distinct Azure setups, enabling you to use multiple [provider aliases](\/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.\n\nFor more details, see [Specifying Multiple Configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations).\n\n#### Required Terraform Variable\n\nTo use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.\n\n```hcl\nvariable \"tfc_azure_dynamic_credentials\" {\n  description = \"Object containing Azure dynamic credentials configuration\"\n  type = object({\n    default = object({\n      client_id_file_path = string\n      oidc_token_file_path = string\n    })\n    aliases = map(object({\n      client_id_file_path = string\n      oidc_token_file_path = string\n    }))\n  })\n}\n```\n\n#### Example Usage\n\n##### AzureRM Provider\n\n```hcl\nprovider \"azurerm\" {\n  features {}\n  \/\/ use_cli should be set to false to yield more accurate error messages on auth failure.\n  use_cli              = false\n  \/\/ use_oidc must be explicitly set to true when using multiple configurations.\n  use_oidc             = true\n  client_id_file_path  = var.tfc_azure_dynamic_credentials.default.client_id_file_path\n  oidc_token_file_path = var.tfc_azure_dynamic_credentials.default.oidc_token_file_path\n  subscription_id      = \"00000000-0000-0000-0000-000000000000\"\n  tenant_id            = 
\"10000000-0000-0000-0000-000000000000\"\n}\n\nprovider \"azurerm\" {\n  features {}\n  \/\/ use_cli should be set to false to yield more accurate error messages on auth failure.\n  use_cli              = false\n  \/\/ use_oidc must be explicitly set to true when using multiple configurations.\n  use_oidc             = true\n  alias                = \"ALIAS1\"\n  client_id_file_path  = var.tfc_azure_dynamic_credentials.aliases[\"ALIAS1\"].client_id_file_path\n  oidc_token_file_path = var.tfc_azure_dynamic_credentials.aliases[\"ALIAS1\"].oidc_token_file_path\n  subscription_id      = \"00000000-0000-0000-0000-000000000000\"\n  tenant_id            = \"20000000-0000-0000-0000-000000000000\"\n}\n```\n\n##### Microsoft Entra ID Provider (formerly Azure AD)\n\n```hcl\nprovider \"azuread\" {\n  features {}\n  \/\/ use_cli should be set to false to yield more accurate error messages on auth failure.\n  use_cli              = false\n  \/\/ use_oidc must be explicitly set to true when using multiple configurations.\n  use_oidc             = true\n  client_id_file_path  = var.tfc_azure_dynamic_credentials.default.client_id_file_path\n  oidc_token_file_path = var.tfc_azure_dynamic_credentials.default.oidc_token_file_path\n  subscription_id      = \"00000000-0000-0000-0000-000000000000\"\n  tenant_id            = \"10000000-0000-0000-0000-000000000000\"\n}\n\nprovider \"azuread\" {\n  features {}\n  \/\/ use_cli should be set to false to yield more accurate error messages on auth failure.\n  use_cli              = false\n  \/\/ use_oidc must be explicitly set to true when using multiple configurations.\n  use_oidc             = true\n  alias                = \"ALIAS1\"\n  client_id_file_path  = var.tfc_azure_dynamic_credentials.aliases[\"ALIAS1\"].client_id_file_path\n  oidc_token_file_path = var.tfc_azure_dynamic_credentials.aliases[\"ALIAS1\"].oidc_token_file_path\n  subscription_id      = \"00000000-0000-0000-0000-000000000000\"\n  tenant_id            = 
\"20000000-0000-0000-0000-000000000000\"\n}\n```","site":"terraform","answers_cleaned":"    page title  Dynamic Credentials with the Azure Providers   HCP Terraform description       Use OpenID Connect to get short term credentials for the Azure Terraform providers in   your HCP Terraform runs         Dynamic Credentials with the Azure Provider       Important    Ensure you are using version   3 25 0   or later of the   AzureRM provider   and version   2 29 0   or later of the   Microsoft Entra ID provider    previously Azure Active Directory  as required OIDC functionality was introduced in these provider versions        Important    If you are self hosting  HCP Terraform agents   terraform cloud docs agents   ensure your agents use  v1 7 0   terraform cloud docs agents changelog 1 7 0 03 02 2023  or above  To use the latest dynamic credentials features   upgrade your agents to the latest version   terraform cloud docs agents changelog    You can use HCP Terraform s native OpenID Connect integration with Azure to get  dynamic credentials   terraform cloud docs workspaces dynamic provider credentials  for the AzureRM or Microsoft Entra ID providers in your HCP Terraform runs  Configuring the integration requires the following steps   1     Configure Azure   configure azure     Set up a trust configuration between Azure and HCP Terraform  Then  you must create Azure roles and policies for your HCP Terraform workspaces  2     Configure HCP Terraform   configure hcp terraform     Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials   Once you complete the setup  HCP Terraform automatically authenticates to Azure during each run  The Azure provider authentication is valid for the length of the plan or apply        Warning    Dynamic credentials with the Azure providers do not  work when your Terraform Enterprise instance uses a custom or self signed certificate  This limitation is due to restrictions in Azure      
Configure Azure You must enable and configure an application and service principal with accompanying federated credentials and permissions on Azure  These instructions use the Azure portal  but you can also use Terraform to configure Azure  Refer to our  example Terraform configuration  https   github com hashicorp terraform dynamic credentials setup examples tree main azure       Create an Application and Service Principal Follow the steps mentioned in the AzureRM provider docs here   Creating the Application and Service Principal  https   registry terraform io providers hashicorp azurerm latest docs guides service principal oidc creating the application and service principal    As mentioned in the documentation it will be important to make note of the  client id  for the application as you will use this later for authentication        Note    you will want to skip the   Configure Microsoft Entra ID Application to Trust a GitHub Repository   section as this does not apply here       Grant the Application Access to Manage Resources in Your Azure Subscription You must now give the created Application permission to modify resources within your Subscription   Follow the steps mentioned in the AzureRM provider docs here   Granting the Application access to manage resources in your Azure Subscription  https   registry terraform io providers hashicorp azurerm latest docs guides service principal oidc granting the application access to manage resources in your azure subscription        Configure Microsoft Entra ID Application to Trust a Generic Issuer Finally  you must create federated identity credentials which validate the contents of the token sent to Azure from HCP Terraform   Follow the steps mentioned in the AzureRM provider docs here   Configure Azure Microsoft Entra ID Application to Trust a Generic Issuer  https   registry terraform io providers hashicorp azurerm latest docs guides service principal oidc configure azure active directory application to trust a 
generic issuer    The following information should be specified      Federated credential scenario    Must be set to  Other issuer       Issuer    The address of HCP Terraform  e g   https   app terraform io           Important    make sure this value starts with   https      and does  not  have a trailing slash      Subject identifier    The subject identifier from HCP Terraform that this credential will match  This will be in the form  organization my org name project my project name workspace my workspace name run phase plan  where the  run phase  can be one of  plan  or  apply       Name    A name for the federated credential  such as  tfc plan credential   Note that this cannot be changed later   The following is optional  but may be desired      Audience    Enter the audience value that will be set when requesting the identity token  This will be  api   AzureADTokenExchange  by default  This should be set to the value of  TFC AZURE WORKLOAD IDENTITY AUDIENCE  if this has been configured        Note    because the  Subject identifier  for federated credentials is a direct string match  two federated identity credentials need to be created for each workspace using dynamic credentials  one that matches  run phase plan  and one that matches  run phase apply       Configure HCP Terraform You ll need to set some environment variables in your HCP Terraform workspace in order to configure HCP Terraform to authenticate with Azure using dynamic credentials  You can set these as workspace variables  When you configure dynamic provider credentials with multiple provider configurations of the same type  use either a default variable or a tagged alias variable name for each provider configuration  Refer to  Specifying Multiple Configurations   specifying multiple configurations  for more details      Required Environment Variables    Variable                                                                                                 Value                                
                                                      Notes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        TFC AZURE PROVIDER AUTH  br    TFC AZURE PROVIDER AUTH  TAG   br     Default variable not supported      true                                                                                      Requires   v1 7 0   or later if self managing agents  Must be present and set to  true   or HCP Terraform will not attempt to authenticate to Azure                                                                                                                                                          TFC AZURE RUN CLIENT ID  br    TFC AZURE RUN CLIENT ID  TAG   br    TFC DEFAULT AZURE RUN CLIENT ID     The client ID for the Service Principal   Application used when authenticating to Azure    Requires   v1 7 0   or later if self managing agents  Optional if  TFC AZURE PLAN CLIENT ID  and  TFC AZURE APPLY CLIENT ID  are both provided  These variables are described  below   terraform cloud docs workspaces dynamic provider credentials azure configuration optional environment variables         Optional Environment Variables You may need to set these variables  depending on your use case     Variable                                                
                                                                                       Value                                                                                               Notes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       TFC AZURE WORKLOAD IDENTITY AUDIENCE  br    TFC AZURE WORKLOAD IDENTITY AUDIENCE  TAG   br    TFC DEFAULT AZURE WORKLOAD IDENTITY AUDIENCE    Will be used as the  aud  claim for the identity token  Defaults to  api   AzureADTokenExchange     Requires   v1 7 0   or later if self managing agents                                                                                 TFC AZURE PLAN CLIENT ID  br    TFC AZURE PLAN CLIENT ID  TAG   br    TFC DEFAULT AZURE PLAN CLIENT ID                                        The client ID for the Service Principal   Application to use for the plan phase of a run            Requires   v1 7 0   or later if self managing agents  Will fall back to the value of  TFC AZURE RUN CLIENT ID  if not provided       TFC AZURE APPLY CLIENT ID  br    TFC AZURE APPLY CLIENT ID  TAG   br    TFC DEFAULT AZURE APPLY CLIENT ID                                     The client ID for the Service Principal   Application to use for the apply phase of a run           Requires   v1 7 0   or later if self managing agents  Will fall back to the value of  TFC AZURE RUN CLIENT ID  if not provided        Configure the AzureRM or Microsoft Entra ID Provider Make sure that you re passing values for the  subscription id  and  tenant id  arguments into the provider configuration 
block or setting the  ARM SUBSCRIPTION ID  and  ARM TENANT ID  variables in your workspace   Make sure that you re  not  setting values for  client id    use oidc   or  oidc token  in the provider or setting any of  ARM CLIENT ID    ARM USE OIDC    ARM OIDC TOKEN        Specifying Multiple Configurations       Important    Ensure you are using version   3 60 0   or later of the   AzureRM provider   and version   2 43 0   or later of the   Microsoft Entra ID provider   as required functionality was introduced in these provider versions        Important    If you are self hosting  HCP Terraform agents   terraform cloud docs agents   ensure your agents use  v1 12 0   terraform cloud docs agents changelog 1 12 0 07 26 2023  or above  To use the latest dynamic credentials features   upgrade your agents to the latest version   terraform cloud docs agents changelog    You can add additional variables to handle multiple distinct Azure setups  enabling you to use multiple  provider aliases   terraform language providers configuration alias multiple provider configurations  within the same workspace  You can configure each set of credentials independently  or use default values by configuring the variables prefixed with  TFC DEFAULT     For more details  see  Specifying Multiple Configurations   terraform cloud docs workspaces dynamic provider credentials specifying multiple configurations         Required Terraform Variable  To use additional configurations  add the following code to your Terraform configuration  This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks      hcl variable  tfc azure dynamic credentials      description    Object containing Azure dynamic credentials configuration    type   object       default   object         client id file path   string       oidc token file path   string            aliases   map object         client id file path   string       oidc 
token file path   string                          Example Usage        AzureRM Provider     hcl provider  azurerm      features         use cli should be set to false to yield more accurate error messages on auth failure    use cli                false      use oidc must be explicitly set to true when using multiple configurations    use oidc               true   client id file path    var tfc azure dynamic credentials default client id file path   oidc token file path   var tfc azure dynamic credentials default oidc token file path   subscription id         00000000 0000 0000 0000 000000000000    tenant id               10000000 0000 0000 0000 000000000000     provider  azurerm      features         use cli should be set to false to yield more accurate error messages on auth failure    use cli                false      use oidc must be explicitly set to true when using multiple configurations    use oidc               true   alias                   ALIAS1    client id file path    var tfc azure dynamic credentials aliases  ALIAS1   client id file path   oidc token file path   var tfc azure dynamic credentials aliases  ALIAS1   oidc token file path   subscription id         00000000 0000 0000 0000 000000000000    tenant id               20000000 0000 0000 0000 000000000000               Microsoft Entra ID Provider  formerly Azure AD      hcl provider  azuread      features         use cli should be set to false to yield more accurate error messages on auth failure    use cli                false      use oidc must be explicitly set to true when using multiple configurations    use oidc               true   client id file path    var tfc azure dynamic credentials default client id file path   oidc token file path   var tfc azure dynamic credentials default oidc token file path   subscription id         00000000 0000 0000 0000 000000000000    tenant id               10000000 0000 0000 0000 000000000000     provider  azuread      features         use cli should be set 
to false to yield more accurate error messages on auth failure    use cli                false      use oidc must be explicitly set to true when using multiple configurations    use oidc               true   alias                   ALIAS1    client id file path    var tfc azure dynamic credentials aliases  ALIAS1   client id file path   oidc token file path   var tfc azure dynamic credentials aliases  ALIAS1   oidc token file path   subscription id         00000000 0000 0000 0000 000000000000    tenant id               20000000 0000 0000 0000 000000000000       "}
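The Azure record notes that two federated identity credentials are needed per workspace (one matching `run_phase:plan`, one matching `run_phase:apply`) because the subject identifier is an exact string match, and it links to an example Terraform configuration for the Azure setup. A hedged sketch of that pattern, assuming an existing `azuread_application` resource named `tfc` and placeholder organization, project, and workspace names:

```hcl
# Sketch only: one federated credential per run phase, since the subject
# is matched as an exact string. All names below are placeholders.
resource "azuread_application_federated_identity_credential" "tfc" {
  for_each              = toset(["plan", "apply"])
  application_object_id = azuread_application.tfc.object_id # assumes an existing application resource
  display_name          = "tfc-${each.value}-credential"    # cannot be changed later
  audiences             = ["api://AzureADTokenExchange"]    # or the TFC_AZURE_WORKLOAD_IDENTITY_AUDIENCE value
  issuer                = "https://app.terraform.io"        # https:// prefix, no trailing slash
  subject               = "organization:my-org-name:project:my-project-name:workspace:my-workspace-name:run_phase:${each.value}"
}
```

The `for_each` over the two run phases keeps the plan and apply credentials in lockstep, so a rename of the workspace only needs to be reflected in one place.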
{"questions":"terraform your HCP Terraform runs Important If you are self hosting HCP Terraform agents terraform cloud docs agents ensure your agents use v1 7 0 terraform cloud docs agents changelog 1 7 0 03 02 2023 or above To use the latest dynamic credentials features upgrade your agents to the latest version terraform cloud docs agents changelog Dynamic Credentials with the GCP Provider page title Dynamic Credentials with the GCP Provider Workspaces HCP Terraform Use OpenID Connect to get short term credentials for the GCP Terraform provider in","answers":"---\npage_title: Dynamic Credentials with the GCP Provider - Workspaces - HCP Terraform\ndescription: >-\n  Use OpenID Connect to get short-term credentials for the GCP Terraform provider in\n  your HCP Terraform runs.\n---\n\n# Dynamic Credentials with the GCP Provider\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.7.0](\/terraform\/cloud-docs\/agents\/changelog#1-7-0-03-02-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\n\nYou can use HCP Terraform\u2019s native OpenID Connect integration with GCP to get [dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials) for the GCP provider in your HCP Terraform runs. Configuring the integration requires the following steps:\n\n1. **[Configure GCP](#configure-gcp):** Set up a trust configuration between GCP and HCP Terraform. Then, you must create GCP roles and policies for your HCP Terraform workspaces.\n2. **[Configure HCP Terraform](#configure-hcp-terraform):** Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials.\n\nOnce you complete the setup, HCP Terraform automatically authenticates to GCP during each run. 
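\n\nIf you prefer to manage the GCP side of this trust configuration with Terraform rather than the GCP console, a minimal sketch using the `google` provider's workload identity pool resources could look like the following; the pool ID, provider ID, and issuer URL are placeholder assumptions:\n\n```hcl\nresource \"google_iam_workload_identity_pool\" \"tfc_pool\" {\n  workload_identity_pool_id = \"my-tfc-pool\"\n}\n\nresource \"google_iam_workload_identity_pool_provider\" \"tfc_provider\" {\n  workload_identity_pool_id          = google_iam_workload_identity_pool.tfc_pool.workload_identity_pool_id\n  workload_identity_pool_provider_id = \"my-tfc-provider\"\n\n  # Minimum required mapping: the token subject becomes the Google subject.\n  attribute_mapping = {\n    \"google.subject\" = \"assertion.sub\"\n  }\n\n  oidc {\n    issuer_uri = \"https:\/\/app.terraform.io\"\n  }\n}\n```\n\nThe sections below describe the same configuration in terms of the GCP console fields.\n\n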
The GCP provider authentication is valid for the length of the plan or apply.\n\n!> **Warning:** Dynamic credentials with the GCP provider do not work if your Terraform Enterprise instance uses a custom or self-signed certificate. This limitation is due to restrictions in GCP.\n\n## Configure GCP\nYou must enable and configure a workload identity pool and provider on GCP. These instructions use the GCP console, but you can also use Terraform to configure GCP. Refer to our [example Terraform configuration](https:\/\/github.com\/hashicorp\/terraform-dynamic-credentials-setup-examples\/tree\/main\/gcp).\n### Add a Workload Identity Pool and Provider\nGoogle documentation for setting this up can be found here: [Configuring workload identity federation with other identity providers](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers).\n\nBefore creating these resources, you must enable the APIs mentioned at the start of [Configure workload identity federation](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers#configure).\n#### Add a Workload Identity Pool\nThe following information should be specified:\n* **Name**: Name for the pool, such as `my-tfc-pool`. The name is also used as the pool ID. You can't change the pool ID later.\n\nThe following is optional, but may be desired:\n* **Pool ID**: The ID for the pool. This defaults to the name as mentioned above, but can be set to another value.\n* **Description**: Text that describes the purpose of the pool.\n\nYou will also want to ensure that the `Enabled Pool` option is enabled before clicking next.\n\n#### Add a Workload Identity Provider\nYou must add a workload identity provider to the pool. The following information should be specified:\n* **Provider type**: Must be `OpenID Connect (OIDC)`.\n* **Provider name**: Name for the identity provider, such as `my-tfc-provider`. The name is also used as the provider ID. 
You can\u2019t change the provider ID later.\n* **Issuer (URL)**: The address of the TFC\/E instance, such as https:\/\/app.terraform.io\n    * **Important**: make sure this value starts with **https:\/\/** and does _not_ have a trailing slash.\n* **Audiences**: This can be left as `Default audience` if you are planning on using the default audience HCP Terraform provides.\n    * **Important**: you must select the `Allowed audiences` toggle and set this to the value of `TFC_GCP_WORKLOAD_IDENTITY_AUDIENCE`, if configured.\n* **Provider attributes mapping**: At a minimum, this must include `assertion.sub` for the `google.subject` entry. You can map other claims in the identity token to attributes by adding `attribute.[claim name]` on the Google side and `assertion.[claim name]` on the OIDC side of a new mapping.\n* **Attribute Conditions**: Conditions to restrict which identity tokens can authenticate using the workload identity pool, such as `assertion.sub.startsWith(\"organization:my-org:project:my-project:workspace:my-workspace\")` to restrict access to identity tokens from a specific workspace. See this page in Google documentation for more information on the expression language: [Attribute conditions](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation#conditions).\n\n!> **Warning**: you should always check, at minimum, the audience and the name of the organization in order to prevent unauthorized access from other HCP Terraform organizations!\n\nThe following is optional, but may be desired:\n* **Provider ID**: The ID for the provider. 
This defaults to the name as mentioned above, but can be set to another value.\n### Add a Service Account and Permissions\nYou must next add a service account and properly configure its permissions.\n#### Create a Service Account\nGoogle documentation for setting this up can be found here: [Creating a service account for the external workload](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers#create_a_service_account_for_the_external_workload).\n\nThe following information should be specified:\n* **Service account name**: Name for the service account, such as `tfc-service-account`. The name is also used as the service account ID. You can't change the service account ID later.\n\nThe following is optional, but may be desired:\n* **Service account ID**: The ID for the service account. This defaults to the name as mentioned above, but can be set to another value.\n* **Description**: Text that describes the purpose of the service account.\n#### Grant IAM Permissions\nThe next step in the setup wizard allows you to grant IAM permissions to the service account. The role you give the service account will vary depending on your specific needs and project setup. In general, grant the minimal set of permissions the service account needs to function properly.\n#### Grant External Permissions\nOnce the service account has been created and granted IAM permissions, you will need to allow the identity pool created above to impersonate the service account. Google documentation for setting this up can be found here: [Allow the external workload to impersonate the service account](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers#allow_the_external_workload_to_impersonate_the_service_account).\n## Configure HCP Terraform\nYou\u2019ll need to set some environment variables in your HCP Terraform workspace in order to configure HCP Terraform to authenticate with GCP using dynamic credentials. 
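\n\nAs a sketch, one way to manage these environment variables is with the `tfe` provider's `tfe_variable` resource; the workspace ID and service account email below are placeholders:\n\n```hcl\nresource \"tfe_variable\" \"gcp_provider_auth\" {\n  workspace_id = \"ws-XXXXXXXX\"\n  key          = \"TFC_GCP_PROVIDER_AUTH\"\n  value        = \"true\"\n  category     = \"env\"\n}\n\nresource \"tfe_variable\" \"gcp_run_service_account_email\" {\n  workspace_id = \"ws-XXXXXXXX\"\n  key          = \"TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL\"\n  value        = \"tfc-service-account@my-project.iam.gserviceaccount.com\"\n  category     = \"env\"\n}\n```\n\n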
You can set these as workspace variables, or if you\u2019d like to share one GCP service account across multiple workspaces, you can use a variable set. When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.\n### Required Environment Variables\n| Variable                                                                                                                            | Value                                                                        | Notes                                                                                                                                                                                                                                                                                                                     |\n|-------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_GCP_PROVIDER_AUTH`<br \/>`TFC_GCP_PROVIDER_AUTH[_TAG]`<br \/>_(Default variable not supported)_                                  | `true`                                                                       | Requires **v1.7.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to use dynamic credentials to authenticate to GCP.                                                        
                                                                                     |\n| `TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL`<br \/>`TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL[_TAG]`<br \/>`TFC_DEFAULT_GCP_RUN_SERVICE_ACCOUNT_EMAIL` | The service account email HCP Terraform will use when authenticating to GCP. | Requires **v1.7.0** or later if self-managing agents. Optional if `TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL` and `TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL` are both provided. These variables are described [below](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/gcp-configuration#optional-environment-variables) |\n\nYou must also include information about the GCP Workload Identity Provider that HCP Terraform will use when authenticating to GCP. You can supply this information in two different ways:\n\n1. By providing one unified variable containing the canonical name of the workload identity provider.\n2. By providing the project number, pool ID, and provider ID as separate variables.\n\nYou should avoid setting both types of variables, but if you do, the unified version will take precedence.\n\n#### Unified Variable\n\n| Variable                                                                                                                   | Value                                                                                                                                                                                                                                                             | Notes                                                                                                                                                                                 
|\n|----------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_GCP_WORKLOAD_PROVIDER_NAME`<br \/>`TFC_GCP_WORKLOAD_PROVIDER_NAME[_TAG]`<br \/>`TFC_DEFAULT_GCP_WORKLOAD_PROVIDER_NAME` | The canonical name of the workload identity provider. This must be in the form mentioned for the `name` attribute [here](https:\/\/registry.terraform.io\/providers\/hashicorp\/google\/latest\/docs\/resources\/iam_workload_identity_pool_provider#attributes-reference) | Requires **v1.7.0** or later if self-managing agents. This will take precedence over `TFC_GCP_PROJECT_NUMBER`, `TFC_GCP_WORKLOAD_POOL_ID`, and `TFC_GCP_WORKLOAD_PROVIDER_ID` if set. |\n\n#### Separate Variables\n\n| Variable                                                                                                             | Value                                                       | Notes                                                                                                        |\n|----------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|\n| `TFC_GCP_PROJECT_NUMBER`<br \/>`TFC_GCP_PROJECT_NUMBER[_TAG]`<br \/>`TFC_DEFAULT_GCP_PROJECT_NUMBER`                   | The project number where the pool and other resources live. 
| Requires **v1.7.0** or later if self-managing agents. This is _not_ the project ID and is a separate number. |\n| `TFC_GCP_WORKLOAD_POOL_ID`<br \/>`TFC_GCP_WORKLOAD_POOL_ID[_TAG]`<br \/>`TFC_DEFAULT_GCP_WORKLOAD_POOL_ID`             | The workload pool ID.                                       | Requires **v1.7.0** or later if self-managing agents.                                                        |\n| `TFC_GCP_WORKLOAD_PROVIDER_ID`<br \/>`TFC_GCP_WORKLOAD_PROVIDER_ID[_TAG]`<br \/>`TFC_DEFAULT_GCP_WORKLOAD_PROVIDER_ID` | The workload identity provider ID.                          | Requires **v1.7.0** or later if self-managing agents.                                                        |\n\n\n### Optional Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable                                                                                                                                  | Value                                                                                                                                                                                                                                                       | Notes                                                                                                                                     |\n|-------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_GCP_WORKLOAD_IDENTITY_AUDIENCE`<br \/>`TFC_GCP_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br 
\/>`TFC_DEFAULT_GCP_WORKLOAD_IDENTITY_AUDIENCE`    | Will be used as the `aud` claim for the identity token. Defaults to a string of the form mentioned [here](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers#oidc_1) in the GCP docs with the leading **https:** stripped. | Requires **v1.7.0** or later if self-managing agents. This is one of the default `aud` formats that GCP accepts.                          |\n| `TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL`<br \/>`TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL[_TAG]`<br \/>`TFC_DEFAULT_GCP_PLAN_SERVICE_ACCOUNT_EMAIL`    | The service account email to use for the plan phase of a run.                                                                                                                                                                                               | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL` if not provided. |\n| `TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL`<br \/>`TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL[_TAG]`<br \/>`TFC_DEFAULT_GCP_APPLY_SERVICE_ACCOUNT_EMAIL` | The service account email to use for the apply phase of a run.                                                                                                                                                                                              | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL` if not provided. 
|\n\n## Configure the GCP Provider\nMake sure that you\u2019re passing values for the `project` and `region` arguments into the provider configuration block.\n\nMake sure that you\u2019re not setting values for the `GOOGLE_CREDENTIALS` or `GOOGLE_APPLICATION_CREDENTIALS` environment variables as these will conflict with the dynamic credentials authentication process.\n\n### Specifying Multiple Configurations\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.12.0](\/terraform\/cloud-docs\/agents\/changelog#1-12-0-07-26-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can add additional variables to handle multiple distinct GCP setups, enabling you to use multiple [provider aliases](\/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.\n\nFor more details, see [Specifying Multiple Configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations).\n\n#### Required Terraform Variable\n\nTo use additional configurations, add the following code to your Terraform configuration. 
This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.\n\n```hcl\nvariable \"tfc_gcp_dynamic_credentials\" {\n  description = \"Object containing GCP dynamic credentials configuration\"\n  type = object({\n    default = object({\n      credentials = string\n    })\n    aliases = map(object({\n      credentials = string\n    }))\n  })\n}\n```\n\n#### Example Usage\n\n```hcl\nprovider \"google\" {\n  credentials = var.tfc_gcp_dynamic_credentials.default.credentials\n}\n\nprovider \"google\" {\n  alias = \"ALIAS1\"\n  credentials = var.tfc_gcp_dynamic_credentials.aliases[\"ALIAS1\"].credentials\n}\n```","site":"terraform","answers_cleaned":"    page title  Dynamic Credentials with the GCP Provider   Workspaces   HCP Terraform description       Use OpenID Connect to get short term credentials for the GCP Terraform provider in   your HCP Terraform runs         Dynamic Credentials with the GCP Provider       Important    If you are self hosting  HCP Terraform agents   terraform cloud docs agents   ensure your agents use  v1 7 0   terraform cloud docs agents changelog 1 7 0 03 02 2023  or above  To use the latest dynamic credentials features   upgrade your agents to the latest version   terraform cloud docs agents changelog     You can use HCP Terraform s native OpenID Connect integration with GCP to get  dynamic credentials   terraform cloud docs workspaces dynamic provider credentials  for the GCP provider in your HCP Terraform runs  Configuring the integration requires the following steps   1     Configure GCP   configure gcp     Set up a trust configuration between GCP and HCP Terraform  Then  you must create GCP roles and policies for your HCP Terraform workspaces  2     Configure HCP Terraform   configure hcp terraform     Add environment variables to the HCP Terraform workspaces where you want to use Dynamic Credentials   Once you complete the setup  HCP Terraform 
automatically authenticates to GCP during each run  The GCP provider authentication is valid for the length of the plan or apply        Warning    Dynamic credentials with the GCP provider do not work if your Terraform Enterprise instance uses a custom or self signed certificate  This limitation is due to restrictions in GCP      Configure GCP You must enable and configure a workload identity pool and provider on GCP  These instructions use the GCP console  but you can also use Terraform to configure GCP  Refer to our  example Terraform configuration  https   github com hashicorp terraform dynamic credentials setup examples tree main gcp       Add a Workload Identity Pool and Provider Google documentation for setting this up can be found here   Configuring workload identity federation with other identity providers  https   cloud google com iam docs workload identity federation with other providers    Before starting to create the resources  you must enable the APIs mentioned at the start of the  Configure workload Identity federation  https   cloud google com iam docs workload identity federation with other providers configure        Add a Workload Identity Pool The following information should be specified      Name    Name for the pool  such as  my tfc pool   The name is also used as the pool ID  You can t change the pool ID later   The following is optional  but may be desired      Pool ID    The ID for the pool  This defaults to the name as mentioned above  but can be set to another value      Description    Text that describes the purpose of the pool   You will also want to ensure that the  Enabled Pool  option is set to be enabled before clicking next        Add a Workload Identity Provider You must add a workload identity provider to the pool  The following information should be specified      Provider type    Must be  OpenID Connect  OIDC        Provider name    Name for the identity provider  such as  my tfc provider   The name is also used as the provider 
ID  You can t change the provider ID later      Issuer  URL     The address of the TFC E instance  such as https   app terraform io         Important    make sure this value starts with   https      and does  not  have a trailing slash      Audiences    This can be left as  Default audience  if you are planning on using the default audience HCP Terraform provides          Important    you must select the  Allowed audiences  toggle and set this to the value of  TFC GCP WORKLOAD IDENTITY AUDIENCE   if configured      Provider attributes mapping    At the minimum this must include  assertion sub  for the  google subject  entry  Other mappings can be added for other claims in the identity token to attributes by adding  attribute  claim name   on the Google side and  assertion  claim name   on the OIDC side of a new mapping      Attribute Conditions    Conditions to restrict which identity tokens can authenticate using the workload identity pool  such as  assertion sub startsWith  organization my org project my project workspace my workspace    to restrict access to identity tokens from a specific workspace  See this page in Google documentation for more information on the expression language   Attribute conditions  https   cloud google com iam docs workload identity federation conditions         Warning    you should always check  at minimum  the audience and the name of the organization in order to prevent unauthorized access from other HCP Terraform organizations   The following is optional  but may be desired      Provider ID    The ID for the provider  This defaults to the name as mentioned above  but can be set to another value      Add a Service Account and Permissions You must next add a service account and properly configure the permissions       Create a Service Account Google documentation for setting this up can be found here   Creating a service account for the external workload  https   cloud google com iam docs workload identity federation with other 
providers create a service account for the external workload    The following information should be specified      Service account name    Name for the service account  such as  tfc service account   The name is also used as the pool ID  You can t change the pool ID later   The following is optional  but may be desired      Service account ID    The ID for the service account  This defaults to the name as mentioned above  but can be set to another value      Description    Text that describes the purpose of the service account       Grant IAM Permissions The next step in the setup wizard will allow for granting IAM permissions for the service account  The role that is given to the service account will vary depending on your specific needs and project setup  This should in general be the most minimal set of permissions needed for the service account to properly function       Grant External Permissions Once the service account has been created and granted IAM permissions  you will need to grant access to the service account for the identity pool created above  Google documentation for setting this up can be found here   Allow the external workload to impersonate the service account  https   cloud google com iam docs workload identity federation with other providers allow the external workload to impersonate the service account      Configure HCP Terraform You ll need to set some environment variables in your HCP Terraform workspace in order to configure HCP Terraform to authenticate with GCP using dynamic credentials  You can set these as workspace variables  or if you d like to share one GCP service account across multiple workspaces  you can use a variable set  When you configure dynamic provider credentials with multiple provider configurations of the same type  use either a default variable or a tagged alias variable name for each provider configuration  Refer to  Specifying Multiple Configurations   specifying multiple configurations  for more details      
Required Environment Variables   Variable                                                                                                                              Value                                                                          Notes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             TFC GCP PROVIDER AUTH  br    TFC GCP PROVIDER AUTH  TAG   br     Default variable not supported                                       true                                                                          Requires   v1 7 0   or later if self managing agents  Must be present and set to  true   or HCP Terraform will not attempt to use dynamic credentials to authenticate to GCP                                                                                                                                                   TFC GCP RUN SERVICE ACCOUNT EMAIL  br    TFC GCP RUN SERVICE ACCOUNT EMAIL  TAG   br    TFC DEFAULT GCP RUN SERVICE ACCOUNT EMAIL    The service account email HCP Terraform will use when authenticating to GCP    Requires   v1 7 0   or later if self managing agents  Optional if  TFC GCP PLAN SERVICE ACCOUNT EMAIL  and  TFC GCP APPLY SERVICE ACCOUNT EMAIL  are both 
provided  These variables are described  below   terraform cloud docs workspaces dynamic provider credentials gcp configuration optional environment variables     You must also include information about the GCP Workload Identity Provider that HCP Terraform will use when authenticating to GCP  You can supply this information in two different ways   1  By providing one unified variable containing the canonical name of the workload identity provider  2  By providing the project number  pool ID  and provider ID as separate variables   You should avoid setting both types of variables  but if you do  the unified version will take precedence        Unified Variable    Variable                                                                                                                     Value                                                                                                                                                                                                                                                               Notes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 TFC GCP WORKLOAD PROVIDER NAME  br    TFC GCP WORKLOAD PROVIDER NAME  TAG   br    TFC DEFAULT GCP WORKLOAD PROVIDER NAME    The canonical name of the workload identity provider  This 
must be in the form mentioned for the `name` attribute [here](https:\/\/registry.terraform.io\/providers\/hashicorp\/google\/latest\/docs\/resources\/iam_workload_identity_pool_provider#attributes-reference). Requires **v1.7.0** or later if self-managing agents. This will take precedence over `TFC_GCP_PROJECT_NUMBER`, `TFC_GCP_WORKLOAD_POOL_ID`, and `TFC_GCP_WORKLOAD_PROVIDER_ID` if set. |\n\n#### Separate Variables\n| Variable | Value | Notes |\n|---|---|---|\n| `TFC_GCP_PROJECT_NUMBER`<br \/>`TFC_GCP_PROJECT_NUMBER[_TAG]`<br \/>`TFC_DEFAULT_GCP_PROJECT_NUMBER` | The project number where the pool and other resources live. | Requires **v1.7.0** or later if self-managing agents. This is _not_ the project ID and is a separate number. |\n| `TFC_GCP_WORKLOAD_POOL_ID`<br \/>`TFC_GCP_WORKLOAD_POOL_ID[_TAG]`<br \/>`TFC_DEFAULT_GCP_WORKLOAD_POOL_ID` | The workload pool ID. | Requires **v1.7.0** or later if self-managing agents. |\n| `TFC_GCP_WORKLOAD_PROVIDER_ID`<br \/>`TFC_GCP_WORKLOAD_PROVIDER_ID[_TAG]`<br \/>`TFC_DEFAULT_GCP_WORKLOAD_PROVIDER_ID` | The workload identity provider ID. | Requires **v1.7.0** or later if self-managing agents. |\n\n### Optional Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable | Value | Notes |\n|---|---|---|\n| `TFC_GCP_WORKLOAD_IDENTITY_AUDIENCE`<br \/>`TFC_GCP_WORKLOAD_IDENTITY_AUDIENCE[_TAG]`<br \/>`TFC_DEFAULT_GCP_WORKLOAD_IDENTITY_AUDIENCE` | Will be used as the `aud` claim for the identity token. Defaults to a string of the form mentioned [here](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers#oidc_1) in the GCP docs with the leading `https:\/\/` stripped. | Requires **v1.7.0** or later if self-managing agents. This is one of the default `aud` formats that GCP accepts. |\n| `TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL`<br \/>`TFC_GCP_PLAN_SERVICE_ACCOUNT_EMAIL[_TAG]`<br \/>`TFC_DEFAULT_GCP_PLAN_SERVICE_ACCOUNT_EMAIL` | The service account email to use for the plan phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL` if not provided. |\n| `TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL`<br \/>`TFC_GCP_APPLY_SERVICE_ACCOUNT_EMAIL[_TAG]`<br \/>`TFC_DEFAULT_GCP_APPLY_SERVICE_ACCOUNT_EMAIL` | The service account email to use for the apply phase of a run. | Requires **v1.7.0** or later if self-managing agents. Will fall back to the value of `TFC_GCP_RUN_SERVICE_ACCOUNT_EMAIL` if not provided. |\n\n## Configure the GCP Provider\nMake sure that you're passing values for the `project` and `region` arguments into the provider configuration block.\n\nMake sure that you're not setting values for the `GOOGLE_CREDENTIALS` or `GOOGLE_APPLICATION_CREDENTIALS` environment variables, as these will conflict with the dynamic credentials authentication process.\n\n## Specifying Multiple Configurations\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.12.0](\/terraform\/cloud-docs\/agents\/changelog#1-12-0-07-26-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can add additional variables to handle multiple distinct GCP setups, enabling you to use multiple [provider aliases](\/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.\n\nFor more details, see [Specifying Multiple Configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations).\n\n### Required Terraform Variable\n\nTo use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.\n\n```hcl\nvariable \"tfc_gcp_dynamic_credentials\" {\n  description = \"Object containing GCP dynamic credentials configuration\"\n  type = object({\n    default = object({\n      credentials = string\n    })\n    aliases = map(object({\n      credentials = string\n    }))\n  })\n}\n```\n\n### Example Usage\n\n```hcl\nprovider \"google\" {\n  credentials = var.tfc_gcp_dynamic_credentials.default.credentials\n}\n\nprovider \"google\" {\n  alias = \"ALIAS1\"\n  credentials = var.tfc_gcp_dynamic_credentials.aliases[\"ALIAS1\"].credentials\n}\n```"}
{"questions":"terraform HCP Vault Secrets Backed Dynamic Credentials with the GCP Provider your HCP Terraform runs page title HCP Vault Secrets Backed Dynamic Credentials with the GCP Provider Workspaces HCP Terraform Important If you are self hosting HCP Terraform agents terraform cloud docs agents ensure your agents use v1 16 0 terraform cloud docs agents changelog 1 16 0 10 02 2024 or above To use the latest dynamic credentials features upgrade your agents to the latest version terraform cloud docs agents changelog Use OpenID Connect and HCP Vault Secrets to get short term credentials for the GCP Terraform provider in","answers":"---\npage_title: HCP Vault Secrets-Backed Dynamic Credentials with the GCP Provider - Workspaces - HCP Terraform\ndescription: >-\n  Use OpenID Connect and HCP Vault Secrets to get short-term credentials for the GCP Terraform provider in\n  your HCP Terraform runs.\n---\n\n# HCP Vault Secrets-Backed Dynamic Credentials with the GCP Provider\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.16.0](\/terraform\/cloud-docs\/agents\/changelog#1-16-0-10-02-2024) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can use HCP Terraform\u2019s native OpenID Connect integration with HCP to use [HCP Vault Secrets-backed dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/hcp-vault-secrets-backed) with the GCP provider in your HCP Terraform runs. Configuring the integration requires the following steps:\n\n1. **[Configure HCP Provider Credentials](#configure-hcp-provider-credentials)**: Set up a trust configuration between HCP Vault Secrets and HCP Terraform, create HCP Vault Secrets roles and policies for your HCP Terraform workspaces, and add environment variables to those workspaces.\n2. 
**[Configure HCP Vault Secrets](#configure-hcp-vault-secrets-gcp-secrets-engine)**: Set up your HCP project's GCP integration and dynamic secret.\n3. **[Configure HCP Terraform](#configure-hcp-terraform)**: Add additional environment variables to the HCP Terraform workspaces where you want to use HCP Vault Secrets-backed dynamic credentials.\n4. **[Configure Terraform Providers](#configure-terraform-providers)**: Configure your Terraform providers to work with HCP Vault Secrets-backed dynamic credentials.\n\nOnce you complete this setup, HCP Terraform automatically authenticates with GCP via HCP Vault Secrets-generated credentials during the plan and apply phase of each run. The GCP provider's authentication is only valid for the length of the plan or apply phase.\n\n## Configure HCP Provider Credentials\nYou must first set up HCP dynamic provider credentials before you can use HCP Vault Secrets-backed dynamic credentials. This includes creating a service principal, configuring trust between HCP and HCP Terraform, and populating the required environment variables in your HCP Terraform workspace.\n\n[See the setup instructions for HCP dynamic provider credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/hcp-configuration).\n\n## Configure HCP Vault Secrets GCP Secrets Engine\nFollow the instructions in the HCP Vault Secrets documentation for [setting up the GCP integration in your HCP project](\/hcp\/docs\/vault-secrets\/dynamic-secrets\/gcp).\n\n## Configure HCP Terraform\nNext, you need to set certain environment variables in your HCP Terraform workspace to authenticate HCP Terraform with GCP using HCP Vault Secrets-backed dynamic credentials. These variables are in addition to those you previously set while configuring [HCP provider credentials](#configure-hcp-provider-credentials). 
You can add these as workspace variables or as a [variable set](\/terraform\/cloud-docs\/workspaces\/variables\/managing-variables#variable-sets).\n\n### Required Common Environment Variables\n| Variable                                                                                                | Value                                                                        | Notes                                                                                                                                                  |\n|---------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_HVS_BACKED_GCP_AUTH`<br \/>`TFC_HVS_BACKED_GCP_AUTH[_TAG]`<br \/>_(Default variable not supported)_  | `true`                                                                       | Requires **v1.16.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate with GCP.   |\n| `TFC_HVS_BACKED_GCP_RUN_SECRET_RESOURCE_NAME`                                                           | The name of the HCP Vault Secrets dynamic secret resource to read.           | Requires **v1.16.0** or later if self-managing agents. Must be present.                                                                                 
|\n\n### Optional Common Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable | Value | Notes |\n|---|---|---|\n| `TFC_HVS_BACKED_GCP_HCP_CONFIG`<br \/>`TFC_HVS_BACKED_GCP_HCP_CONFIG[_TAG]`<br \/>`TFC_DEFAULT_HVS_BACKED_GCP_HCP_CONFIG` | The name of the non-default HCP configuration for workspaces using [multiple HCP configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations). | Requires **v1.16.0** or later if self-managing agents. Will fall back to using the default HCP Vault Secrets configuration if not provided. |\n| `TFC_HVS_BACKED_GCP_PLAN_SECRET_RESOURCE_NAME` | The name of the HCP Vault Secrets dynamic secret resource to read for the plan phase. | Requires **v1.16.0** or later if self-managing agents. Will fall back to the value of `TFC_HVS_BACKED_GCP_RUN_SECRET_RESOURCE_NAME` if not provided. |\n| `TFC_HVS_BACKED_GCP_APPLY_SECRET_RESOURCE_NAME` | The name of the HCP Vault Secrets dynamic secret resource to read for the apply phase. | Requires **v1.16.0** or later if self-managing agents. Will fall back to the value of `TFC_HVS_BACKED_GCP_RUN_SECRET_RESOURCE_NAME` if not provided. |\n\n## Configure Terraform Providers\nThe final step is to directly configure your GCP and HCP Vault Secrets providers.\n\n### Configure the GCP Provider\nEnsure you pass values for the `project` and `region` arguments into the provider configuration block.\n\nEnsure you are not setting values or environment variables for `GOOGLE_CREDENTIALS` or `GOOGLE_APPLICATION_CREDENTIALS`. Otherwise, these values may interfere with dynamic provider credentials.\n\n### Specifying Multiple Configurations\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.16.0](\/terraform\/cloud-docs\/agents\/changelog#1-16-0-10-02-2024) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can add additional variables to handle multiple distinct HCP Vault Secrets-backed GCP setups, enabling you to use multiple [provider aliases](\/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations) within the same workspace. 
You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.\n\nFor more details, see [Specifying Multiple Configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations).\n\n#### Required Terraform Variable\n\nTo use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.\n\n```hcl\nvariable \"tfc_hvs_backed_gcp_dynamic_credentials\" {\n  description = \"Object containing HCP Vault Secrets-backed GCP dynamic credentials configuration\"\n  type = object({\n    default = object({\n      credentials = string\n      access_token = string\n    })\n    aliases = map(object({\n      credentials = string\n      access_token = string\n    }))\n  })\n}\n```\n\n#### Example Usage\n\n##### Access Token\n\n```hcl\nprovider \"google\" {\n  access_token = var.tfc_hvs_backed_gcp_dynamic_credentials.default.access_token\n}\n\nprovider \"google\" {\n  alias = \"ALIAS1\"\n  access_token = var.tfc_hvs_backed_gcp_dynamic_credentials.aliases[\"ALIAS1\"].access_token\n}\n```\n\n##### Credentials\n\n```hcl\nprovider \"google\" {\n  credentials = var.tfc_hvs_backed_gcp_dynamic_credentials.default.credentials\n}\n\nprovider \"google\" {\n  alias = \"ALIAS1\"\n  credentials = var.tfc_hvs_backed_gcp_dynamic_credentials.aliases[\"ALIAS1\"].credentials\n}\n```","site":"terraform"}
{"questions":"terraform your HCP Terraform runs page title Vault Backed Dynamic Credentials with the AWS Provider Workspaces HCP Terraform Vault Backed Dynamic Credentials with the AWS Provider Use OpenID Connect and Vault to get short term credentials for the AWS Terraform provider in Important If you are self hosting HCP Terraform agents terraform cloud docs agents ensure your agents use v1 8 0 terraform cloud docs agents changelog 1 8 0 04 18 2023 or above To use the latest dynamic credentials features upgrade your agents to the latest version terraform cloud docs agents changelog","answers":"---\npage_title: Vault-Backed Dynamic Credentials with the AWS Provider - Workspaces - HCP Terraform\ndescription: >-\n  Use OpenID Connect and Vault to get short-term credentials for the AWS Terraform provider in\n  your HCP Terraform runs.\n---\n\n# Vault-Backed Dynamic Credentials with the AWS Provider\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.8.0](\/terraform\/cloud-docs\/agents\/changelog#1-8-0-04-18-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can use HCP Terraform\u2019s native OpenID Connect integration with Vault to use [Vault-backed dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/vault-backed) with the AWS provider in your HCP Terraform runs. Configuring the integration requires the following steps:\n\n1. **[Configure Vault Dynamic Provider Credentials](#configure-vault-dynamic-provider-credentials)**: Set up a trust configuration between Vault and HCP Terraform, create Vault roles and policies for your HCP Terraform workspaces, and add environment variables to those workspaces.\n2. **[Configure the Vault AWS Secrets Engine](#configure-vault-aws-secrets-engine)**: Set up the AWS secrets engine in your Vault instance.\n3. 
**[Configure HCP Terraform](#configure-hcp-terraform)**: Add additional environment variables to the HCP Terraform workspaces where you want to use Vault-backed dynamic credentials.\n4. **[Configure Terraform Providers](#configure-terraform-providers)**: Configure your Terraform providers to work with Vault-backed dynamic credentials.\n\nOnce you complete this setup, HCP Terraform automatically authenticates with AWS via Vault-generated credentials during the plan and apply phase of each run. The AWS provider's authentication is only valid for the length of the plan or apply phase.\n\n## Configure Vault Dynamic Provider Credentials\nYou must first set up Vault dynamic provider credentials before you can use Vault-backed dynamic credentials. This includes setting up the JWT auth backend in Vault, configuring trust between HCP Terraform and Vault, and populating the required environment variables in your HCP Terraform workspace.\n\n[See the setup instructions for Vault dynamic provider credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/vault-configuration).\n\n## Configure Vault AWS Secrets Engine\nFollow the instructions in the Vault documentation for [setting up the AWS secrets engine in your Vault instance](\/vault\/docs\/secrets\/aws). You can also do this configuration through Terraform. Refer to our [example Terraform configuration](https:\/\/github.com\/hashicorp\/terraform-dynamic-credentials-setup-examples\/tree\/main\/vault-backed\/aws).\n\n~> **Important:** Carefully consider the limitations and differences between each supported credential type in the AWS secrets engine. These limitations carry over to HCP Terraform\u2019s usage of these credentials for authenticating the AWS provider.\n\n## Configure HCP Terraform\nNext, you need to set certain environment variables in your HCP Terraform workspace to authenticate HCP Terraform with AWS using Vault-backed dynamic credentials. 
These variables are in addition to those you previously set while configuring [Vault dynamic provider credentials](#configure-vault-dynamic-provider-credentials). You can add these as workspace variables or as a [variable set](\/terraform\/cloud-docs\/workspaces\/variables\/managing-variables#variable-sets). When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.\n### Common Environment Variables\nThe below variables apply to all AWS auth types.\n\n#### Required Common Environment Variables\n| Variable                                                                                                                                  | Value                                                                                                                                      | Notes                                                                                                                                                                                                                                                  |\n|-------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_BACKED_AWS_AUTH`<br \/>`TFC_VAULT_BACKED_AWS_AUTH[_TAG]`<br \/>_(Default variable not supported)_                                | `true`                                           
                                                                                          | Requires **v1.8.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate with AWS.                                                                                                   |\n| `TFC_VAULT_BACKED_AWS_AUTH_TYPE`<br \/>`TFC_VAULT_BACKED_AWS_AUTH_TYPE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AWS_AUTH_TYPE`                | Specifies the type of authentication to perform with AWS. Must be one of the following: `iam_user`, `assumed_role`, or `federation_token`. | Requires **v1.8.0** or later if self-managing agents.                                                                                                                                                                                                  |\n| `TFC_VAULT_BACKED_AWS_RUN_VAULT_ROLE`<br \/>`TFC_VAULT_BACKED_AWS_RUN_VAULT_ROLE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AWS_RUN_VAULT_ROLE` | The role to use in Vault.                                                                                                                  | Requires **v1.8.0** or later if self-managing agents. Optional if `TFC_VAULT_BACKED_AWS_PLAN_VAULT_ROLE` and `TFC_VAULT_BACKED_AWS_APPLY_VAULT_ROLE` are both provided. These variables are described [below](#optional-common-environment-variables). 
|\n\n#### Optional Common Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable                                                                                                                                        | Value                                                                                                                                                                                                   | Notes                                                                                                                                                  |\n|-------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_BACKED_AWS_MOUNT_PATH`<br \/>`TFC_VAULT_BACKED_AWS_MOUNT_PATH[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AWS_MOUNT_PATH`                   | The mount path of the AWS secrets engine in Vault.                                                                                                                                                      | Requires **v1.8.0** or later if self-managing agents. Defaults to `aws`.                                                                               |\n| `TFC_VAULT_BACKED_AWS_PLAN_VAULT_ROLE`<br \/>`TFC_VAULT_BACKED_AWS_PLAN_VAULT_ROLE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AWS_PLAN_VAULT_ROLE`    | The Vault role to use for the plan phase of a run.                                                                                                                                                      
| Requires **v1.8.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_BACKED_AWS_RUN_VAULT_ROLE` if not provided.            |\n| `TFC_VAULT_BACKED_AWS_APPLY_VAULT_ROLE`<br \/>`TFC_VAULT_BACKED_AWS_APPLY_VAULT_ROLE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AWS_APPLY_VAULT_ROLE` | The Vault role to use for the apply phase of a run.                                                                                                                                                     | Requires **v1.8.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_BACKED_AWS_RUN_VAULT_ROLE` if not provided.            |\n| `TFC_VAULT_BACKED_AWS_SLEEP_SECONDS`<br \/>`TFC_VAULT_BACKED_AWS_SLEEP_SECONDS[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AWS_SLEEP_SECONDS`          | The amount of time to wait, in seconds, after obtaining temporary credentials from Vault. e.g., `30` for 30 seconds. Must be 1500 seconds (25 minutes) or less.                                         | Requires **v1.12.0** or later if self-managing agents. Can be used to mitigate eventual consistency issues in AWS when using the `iam_user` auth type. |\n| `TFC_VAULT_BACKED_AWS_VAULT_CONFIG`<br \/>`TFC_VAULT_BACKED_AWS_VAULT_CONFIG[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AWS_VAULT_CONFIG`             | The name of the non-default Vault configuration for workspaces using [multiple Vault configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations). | Requires **v1.12.0** or later if self-managing agents. Will fall back to using the default Vault configuration if not provided.                        
|\n\n### Assumed Role Specific Environment Variables\nThese environment variables are only valid if the `TFC_VAULT_BACKED_AWS_AUTH_TYPE` is `assumed_role`.\n\n#### Required Assumed Role Specific Environment Variables\n| Variable                                                                                                                            | Value                                 | Notes                                                                                                                                                                                                                                                            |\n|-------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_BACKED_AWS_RUN_ROLE_ARN`<br \/>`TFC_VAULT_BACKED_AWS_RUN_ROLE_ARN[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AWS_RUN_ROLE_ARN` | The ARN of the role to assume in AWS. | Requires **v1.8.0** or later if self-managing agents. Optional if `TFC_VAULT_BACKED_AWS_PLAN_ROLE_ARN` and `TFC_VAULT_BACKED_AWS_APPLY_ROLE_ARN` are both provided. These variables are described [below](#optional-assume-role-specific-environment-variables). 
|\n\n#### Optional Assumed Role Specific Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable                                                                                                                                  | Value                                                    | Notes                                                                                                                                     |\n|-------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_BACKED_AWS_PLAN_ROLE_ARN`<br \/>`TFC_VAULT_BACKED_AWS_PLAN_ROLE_ARN[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AWS_PLAN_ROLE_ARN`    | The ARN of the role to use for the plan phase of a run.  | Requires **v1.8.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_BACKED_AWS_RUN_ROLE_ARN` if not provided. |\n| `TFC_VAULT_BACKED_AWS_APPLY_ROLE_ARN`<br \/>`TFC_VAULT_BACKED_AWS_APPLY_ROLE_ARN[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AWS_APPLY_ROLE_ARN` | The ARN of the role to use for the apply phase of a run. | Requires **v1.8.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_BACKED_AWS_RUN_ROLE_ARN` if not provided. 
|\n\n## Configure Terraform Providers\nThe final step is to directly configure your AWS and Vault providers.\n\n### Configure the AWS Provider\nEnsure you pass a value for the `region` argument in your AWS provider configuration block or set the `AWS_REGION` variable in your workspace.\n\nEnsure you are not using any of the arguments or methods mentioned in the [authentication and configuration](https:\/\/registry.terraform.io\/providers\/hashicorp\/aws\/latest\/docs#authentication-and-configuration) section of the provider documentation. Otherwise, these settings may interfere with dynamic provider credentials.\n\n### Configure the Vault Provider\nIf you were previously using the Vault provider to authenticate the AWS provider, remove any existing usage of the AWS secrets engine from your Terraform code.\nThis includes the [`vault_aws_access_credentials`](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest\/docs\/data-sources\/aws_access_credentials) data source and any instances of [`vault_generic_secret`](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest\/docs\/data-sources\/generic_secret) you previously used to generate AWS credentials.\n\n### Specifying Multiple Configurations\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.12.0](\/terraform\/cloud-docs\/agents\/changelog#1-12-0-07-26-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can add additional variables to handle multiple distinct Vault-backed AWS setups, enabling you to use multiple [provider aliases](\/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations) within the same workspace. 
You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.\n\nFor more details, see [Specifying Multiple Configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations).\n\n#### Required Terraform Variable\n\nTo use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.\n\n```hcl\nvariable \"tfc_vault_backed_aws_dynamic_credentials\" {\n  description = \"Object containing Vault-backed AWS dynamic credentials configuration\"\n  type = object({\n    default = object({\n      shared_credentials_file = string\n    })\n    aliases = map(object({\n      shared_credentials_file = string\n    }))\n  })\n}\n```\n\n#### Example Usage\n\n```hcl\nprovider \"aws\" {\n  shared_credentials_files = [var.tfc_vault_backed_aws_dynamic_credentials.default.shared_credentials_file]\n}\n\nprovider \"aws\" {\n  alias = \"ALIAS1\"\n  shared_credentials_files = [var.tfc_vault_backed_aws_dynamic_credentials.aliases[\"ALIAS1\"].shared_credentials_file]\n}\n```","site":"terraform","answers_cleaned":"    page title  Vault Backed Dynamic Credentials with the AWS Provider   Workspaces   HCP Terraform description       Use OpenID Connect and Vault to get short term credentials for the AWS Terraform provider in   your HCP Terraform runs         Vault Backed Dynamic Credentials with the AWS Provider       Important    If you are self hosting  HCP Terraform agents   terraform cloud docs agents   ensure your agents use  v1 8 0   terraform cloud docs agents changelog 1 8 0 04 18 2023  or above  To use the latest dynamic credentials features   upgrade your agents to the latest version   terraform cloud docs agents changelog    You can use HCP Terraform s native OpenID Connect integration with Vault 
to use  Vault backed dynamic credentials   terraform cloud docs workspaces dynamic provider credentials vault backed  with the AWS provider in your HCP Terraform runs  Configuring the integration requires the following steps   1     Configure Vault Dynamic Provider Credentials   configure vault dynamic provider credentials     Set up a trust configuration between Vault and HCP Terraform  create Vault roles and policies for your HCP Terraform workspaces  and add environment variables to those workspaces  2     Configure the Vault AWS Secrets Engine   configure vault aws secrets engine     Set up the AWS secrets engine in your Vault instance  3     Configure HCP Terraform   configure hcp terraform     Add additional environment variables to the HCP Terraform workspaces where you want to use Vault Backed Dynamic Credentials  4     Configure Terraform Providers   configure terraform providers     Configure your Terraform providers to work with Vault backed dynamic credentials   Once you complete this setup  HCP Terraform automatically authenticates with AWS via Vault generated credentials during the plan and apply phase of each run  The AWS provider s authentication is only valid for the length of the plan or apply phase      Configure Vault Dynamic Provider Credentials You must first set up Vault dynamic provider credentials before you can use Vault backed dynamic credentials  This includes setting up the JWT auth backend in Vault  configuring trust between HCP Terraform and Vault  and populating the required environment variables in your HCP Terraform workspace    See the setup instructions for Vault dynamic provider credentials   terraform cloud docs workspaces dynamic provider credentials vault configuration      Configure Vault AWS Secrets Engine Follow the instructions in the Vault documentation for  setting up the AWS secrets engine in your Vault instance   vault docs secrets aws   You can also do this configuration through Terraform  Refer to our  example 
Terraform configuration  https   github com hashicorp terraform dynamic credentials setup examples tree main vault backed aws         Important    carefully consider the limitations and differences between each supported credential type in the AWS secrets engine  These limitations carry over to HCP Terraform s usage of these credentials for authenticating the AWS provider      Configure HCP Terraform Next  you need to set certain environment variables in your HCP Terraform workspace to authenticate HCP Terraform with AWS using Vault backed dynamic credentials  These variables are in addition to those you previously set while configuring  Vault dynamic provider credentials   configure vault dynamic provider credentials   You can add these as workspace variables or as a  variable set   terraform cloud docs workspaces variables managing variables variable sets   When you configure dynamic provider credentials with multiple provider configurations of the same type  use either a default variable or a tagged alias variable name for each provider configuration  Refer to  Specifying Multiple Configurations   specifying multiple configurations  for more details      Common Environment Variables The below variables apply to all AWS auth types        Required Common Environment Variables   Variable                                                                                                                                    Value                                                                                                                                        Notes                                                                                                                                                                                                                                                                                                                                                                                                                              
                                                                                                                                                                                                                                                                                                                                                                             TFC VAULT BACKED AWS AUTH  br    TFC VAULT BACKED AWS AUTH  TAG   br     Default variable not supported                                     true                                                                                                                                        Requires   v1 8 0   or later if self managing agents  Must be present and set to  true   or HCP Terraform will not attempt to authenticate with AWS                                                                                                         TFC VAULT BACKED AWS AUTH TYPE  br    TFC VAULT BACKED AWS AUTH TYPE  TAG   br    TFC DEFAULT VAULT BACKED AWS AUTH TYPE                   Specifies the type of authentication to perform with AWS  Must be one of the following   iam user    assumed role   or  federation token     Requires   v1 8 0   or later if self managing agents                                                                                                                                                                                                        TFC VAULT BACKED AWS RUN VAULT ROLE  br    TFC VAULT BACKED AWS RUN VAULT ROLE  TAG   br    TFC DEFAULT VAULT BACKED AWS RUN VAULT ROLE    The role to use in Vault                                                                                                                     Requires   v1 8 0   or later if self managing agents  Optional if  TFC VAULT BACKED AWS PLAN VAULT ROLE  and  TFC VAULT BACKED AWS APPLY VAULT ROLE  are both provided  These variables are described  below   optional common environment variables           Optional Common Environment 
Variables You may need to set these variables  depending on your use case     Variable                                                                                                                                          Value                                                                                                                                                                                                     Notes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              TFC VAULT BACKED AWS MOUNT PATH  br    TFC VAULT BACKED AWS MOUNT PATH  TAG   br    TFC DEFAULT VAULT BACKED AWS MOUNT PATH                      The mount path of the AWS secrets engine in Vault                                                                                                                                                         Requires   v1 8 0   or later if self managing agents  Defaults to  aws                                                                                      TFC VAULT BACKED AWS PLAN VAULT ROLE  br    TFC VAULT BACKED AWS PLAN VAULT ROLE  TAG   br    TFC DEFAULT VAULT BACKED AWS PLAN VAULT ROLE       The Vault role to use the plan phase of a run                                                                                                                                                             Requires   v1 8 0   or later if self managing agents  Will fall 
back to the value of  TFC VAULT BACKED AWS RUN VAULT ROLE  if not provided                  TFC VAULT BACKED AWS APPLY VAULT ROLE  br    TFC VAULT BACKED AWS APPLY VAULT ROLE  TAG   br    TFC DEFAULT VAULT BACKED AWS APPLY VAULT ROLE    The Vault role to use for the apply phase of a run                                                                                                                                                        Requires   v1 8 0   or later if self managing agents  Will fall back to the value of  TFC VAULT BACKED AWS RUN VAULT ROLE  if not provided                  TFC VAULT BACKED AWS SLEEP SECONDS  br    TFC VAULT BACKED AWS SLEEP SECONDS  TAG   br    TFC DEFAULT VAULT BACKED AWS SLEEP SECONDS             The amount of time to wait  in seconds  after obtaining temporary credentials from Vault  e g    30  for 30 seconds  Must be 1500 seconds  25 minutes  or less                                            Requires   v1 12 0   or later if self managing agents  Can be used to mitigate eventual consistency issues in AWS when using the  iam user  auth type       TFC VAULT BACKED AWS VAULT CONFIG  br    TFC VAULT BACKED AWS VAULT CONFIG  TAG   br    TFC DEFAULT VAULT BACKED AWS VAULT CONFIG                The name of the non default Vault configuration for workspaces using  multiple Vault configurations   terraform cloud docs workspaces dynamic provider credentials specifying multiple configurations     Requires   v1 12 0   or later if self managing agents  Will fall back to using the default Vault configuration if not provided                                Assumed Role Specific Environment Variables These environment variables are only valid if the  TFC VAULT BACKED AWS AUTH TYPE  is  assumed role         Required Assumed Role Specific Environment Variables   Variable                                                                                                                              Value                                   Notes           
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         TFC VAULT BACKED AWS RUN ROLE ARN  br    TFC VAULT BACKED AWS RUN ROLE ARN  TAG   br    TFC DEFAULT VAULT BACKED AWS RUN ROLE ARN    The ARN of the role to assume in AWS    Requires   v1 8 0   or later if self managing agents  Optional if  TFC VAULT BACKED AWS PLAN ROLE ARN  and  TFC VAULT BACKED AWS APPLY ROLE ARN  are both provided  These variables are described  below   optional assume role specific environment variables           Optional Assumed Role Specific Environment Variables You may need to set these variables  depending on your use case     Variable                                                                                                                                    Value                                                      Notes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               TFC VAULT BACKED AWS PLAN ROLE ARN  br    TFC VAULT BACKED AWS PLAN ROLE ARN 
 TAG   br    TFC DEFAULT VAULT BACKED AWS PLAN ROLE ARN       The ARN of the role to use for the plan phase of a run     Requires   v1 8 0   or later if self managing agents  Will fall back to the value of  TFC VAULT BACKED AWS RUN ROLE ARN  if not provided       TFC VAULT BACKED AWS APPLY ROLE ARN  br    TFC VAULT BACKED AWS APPLY ROLE ARN  TAG   br    TFC DEFAULT VAULT BACKED AWS APPLY ROLE ARN    The ARN of the role to use for the apply phase of a run    Requires   v1 8 0   or later if self managing agents  Will fall back to the value of  TFC VAULT BACKED AWS RUN ROLE ARN  if not provided        Configure Terraform Providers The final step is to directly configure your AWS and Vault providers       Configure the AWS Provider Ensure you pass a value for the  region  argument in your AWS provider configuration block or set the  AWS REGION  variable in your workspace   Ensure you are not using any of the arguments or methods mentioned in the  authentication and configuration  https   registry terraform io providers hashicorp aws latest docs authentication and configuration  section of the provider documentation  Otherwise  these settings may interfere with dynamic provider credentials       Configure the Vault Provider If you were previously using the Vault provider to authenticate the AWS provider  remove any existing usage of the AWS secrets engine from your Terraform Code  This includes the   vault aws access credentials   https   registry terraform io providers hashicorp vault latest docs data sources aws access credentials  data source and any instances of   vault generic secret   https   registry terraform io providers hashicorp vault latest docs data sources generic secret  you previously used to generate AWS credentials       Specifying Multiple Configurations       Important    If you are self hosting  HCP Terraform agents   terraform cloud docs agents   ensure your agents use  v1 12 0   terraform cloud docs agents changelog 1 12 0 07 26 2023  or above  To 
use the latest dynamic credentials features   upgrade your agents to the latest version   terraform cloud docs agents changelog    You can add additional variables to handle multiple distinct Vault backed AWS setups  enabling you to use multiple  provider aliases   terraform language providers configuration alias multiple provider configurations  within the same workspace  You can configure each set of credentials independently  or use default values by configuring the variables prefixed with  TFC DEFAULT     For more details  see  Specifying Multiple Configurations   terraform cloud docs workspaces dynamic provider credentials specifying multiple configurations         Required Terraform Variable  To use additional configurations  add the following code to your Terraform configuration  This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks      hcl variable  tfc vault backed aws dynamic credentials      description    Object containing Vault backed AWS dynamic credentials configuration    type   object       default   object         shared credentials file   string            aliases   map object         shared credentials file   string                          Example Usage     hcl provider  aws      shared credentials files    var tfc vault backed aws dynamic credentials default shared credentials file     provider  aws      alias    ALIAS1    shared credentials files    var tfc vault backed aws dynamic credentials aliases  ALIAS1   shared credentials file       "}
{"questions":"terraform Use OpenID Connect and Vault to get short term credentials for the Azure Terraform providers in page title Vault Backed Dynamic Credentials with the Azure Providers HCP Terraform your HCP Terraform runs Vault Backed Dynamic Credentials with the Azure Provider Important If you are self hosting HCP Terraform agents terraform cloud docs agents ensure your agents use v1 8 0 terraform cloud docs agents changelog 1 8 0 04 18 2023 or above To use the latest dynamic credentials features upgrade your agents to the latest version terraform cloud docs agents changelog","answers":"---\npage_title: Vault-Backed Dynamic Credentials with the Azure Providers - HCP Terraform\ndescription: >-\n  Use OpenID Connect and Vault to get short-term credentials for the Azure Terraform providers in\n  your HCP Terraform runs.\n---\n\n# Vault-Backed Dynamic Credentials with the Azure Provider\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.8.0](\/terraform\/cloud-docs\/agents\/changelog#1-8-0-04-18-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can use HCP Terraform\u2019s native OpenID Connect integration with Vault to use [Vault-backed dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/vault-backed) with the Azure provider in your HCP Terraform runs. Configuring the integration requires the following steps:\n\n1. **[Configure Vault Dynamic Provider Credentials](#configure-vault-dynamic-provider-credentials)**: Set up a trust configuration between Vault and HCP Terraform, create Vault roles and policies for your HCP Terraform workspaces, and add environment variables to those workspaces.\n2. **[Configure the Vault Azure Secrets Engine](#configure-vault-azure-secrets-engine)**: Set up the Azure secrets engine in your Vault instance.\n3. 
**[Configure HCP Terraform](#configure-hcp-terraform)**: Add additional environment variables to the HCP Terraform workspaces where you want to use Vault-backed dynamic credentials.\n4. **[Configure Terraform Providers](#configure-terraform-providers)**: Configure your Terraform providers to work with Vault-backed dynamic credentials.\n\nOnce you complete this setup, HCP Terraform automatically authenticates with Azure via Vault-generated credentials during the plan and apply phase of each run. The Azure provider's authentication is only valid for the length of the plan or apply phase.\n\n## Configure Vault Dynamic Provider Credentials\nYou must first set up Vault dynamic provider credentials before you can use Vault-backed dynamic credentials. This includes setting up the JWT auth backend in Vault, configuring trust between HCP Terraform and Vault, and populating the required environment variables in your HCP Terraform workspace.\n\n[See the setup instructions for Vault dynamic provider credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/vault-configuration).\n\n## Configure Vault Azure Secrets Engine\nFollow the instructions in the Vault documentation for [setting up the Azure secrets engine in your Vault instance](\/vault\/docs\/secrets\/azure). You can also do this configuration through Terraform. Refer to our [example Terraform configuration](https:\/\/github.com\/hashicorp\/terraform-dynamic-credentials-setup-examples\/tree\/main\/vault-backed\/azure).\n\n## Configure HCP Terraform\nNext, you need to set certain environment variables in your HCP Terraform workspace to authenticate HCP Terraform with Azure using Vault-backed dynamic credentials. These variables are in addition to those you previously set while configuring [Vault dynamic provider credentials](#configure-vault-dynamic-provider-credentials). 
You can add these as workspace variables or as a [variable set](\/terraform\/cloud-docs\/workspaces\/variables\/managing-variables#variable-sets). When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.\n\n### Required Environment Variables\n| Variable                                                                                                                                        | Value                     | Notes                                                                                                                                                                                                                                               |\n|-------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_BACKED_AZURE_AUTH`<br \/>`TFC_VAULT_BACKED_AZURE_AUTH[_TAG]`<br \/>_(Default variable not supported)_                                  | `true`                    | Requires **v1.8.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate with Azure.                                                                                              |\n| `TFC_VAULT_BACKED_AZURE_RUN_VAULT_ROLE`<br \/>`TFC_VAULT_BACKED_AZURE_RUN_VAULT_ROLE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AZURE_RUN_VAULT_ROLE` | The role to use in Vault. | Requires **v1.8.0** or later if self-managing agents. 
Optional if `TFC_VAULT_BACKED_AZURE_PLAN_VAULT_ROLE` and `TFC_VAULT_BACKED_AZURE_APPLY_VAULT_ROLE` are both provided. These variables are described [below](#optional-environment-variables). |\n\n### Optional Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable                                                                                                                                              | Value                                                                                                                                                                                                   | Notes                                                                                                                                         |\n|-------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_BACKED_AZURE_MOUNT_PATH`<br \/>`TFC_VAULT_BACKED_AZURE_MOUNT_PATH[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AZURE_MOUNT_PATH`                   | The mount path of the Azure secrets engine in Vault.                                                                                                                                                    | Requires **v1.8.0** or later if self-managing agents. Defaults to `azure`.                                                                    
|\n| `TFC_VAULT_BACKED_AZURE_PLAN_VAULT_ROLE`<br \/>`TFC_VAULT_BACKED_AZURE_PLAN_VAULT_ROLE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AZURE_PLAN_VAULT_ROLE`    | The Vault role to use for the plan phase of a run.                                                                                                                                                      | Requires **v1.8.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_BACKED_AZURE_RUN_VAULT_ROLE` if not provided. |\n| `TFC_VAULT_BACKED_AZURE_APPLY_VAULT_ROLE`<br \/>`TFC_VAULT_BACKED_AZURE_APPLY_VAULT_ROLE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AZURE_APPLY_VAULT_ROLE` | The Vault role to use for the apply phase of a run.                                                                                                                                                     | Requires **v1.8.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_BACKED_AZURE_RUN_VAULT_ROLE` if not provided. |\n| `TFC_VAULT_BACKED_AZURE_SLEEP_SECONDS`<br \/>`TFC_VAULT_BACKED_AZURE_SLEEP_SECONDS[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AZURE_SLEEP_SECONDS`          | The amount of time to wait, in seconds, after obtaining temporary credentials from Vault. e.g., `30` for 30 seconds. Must be 1500 seconds (25 minutes) or less.                                         | Requires **v1.12.0** or later if self-managing agents. Can be used to mitigate eventual consistency issues in Azure.                          |\n| `TFC_VAULT_BACKED_AZURE_VAULT_CONFIG`<br \/>`TFC_VAULT_BACKED_AZURE_VAULT_CONFIG[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_AZURE_VAULT_CONFIG`             | The name of the non-default Vault configuration for workspaces using [multiple Vault configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations). | Requires **v1.12.0** or later if self-managing agents. 
Will fall back to using the default Vault configuration if not provided.               |\n\n## Configure Terraform Providers\nThe final step is to directly configure your Azure and Vault providers.\n\n### Configure the AzureRM or Microsoft Entra ID Provider\nEnsure you pass a value for the `subscription_id` and `tenant_id` arguments in your provider configuration block or set the `ARM_SUBSCRIPTION_ID` and `ARM_TENANT_ID` variables in your workspace.\n\nDo not set values for `client_id`, `use_oidc`, or `oidc_token` in your provider configuration block. Additionally, do not set variable values for `ARM_CLIENT_ID`, `ARM_USE_OIDC`, or `ARM_OIDC_TOKEN`.\n\n### Configure the Vault Provider\nIf you were previously using the Vault provider to authenticate the Azure provider, remove any existing usage of the Azure secrets engine from your Terraform code.\nThis includes the [`vault_azure_access_credentials`](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest\/docs\/data-sources\/azure_access_credentials) data source and any instances of [`vault_generic_secret`](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest\/docs\/data-sources\/generic_secret) you previously used to generate Azure credentials.\n\n### Specifying Multiple Configurations\n\n~> **Important:** Ensure you are using version **3.60.0** or later of the **AzureRM provider** and version **2.43.0** or later of the **Microsoft Entra ID provider** (previously Azure Active Directory), as required functionality was introduced in these provider versions.\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.12.0](\/terraform\/cloud-docs\/agents\/changelog#1-12-0-07-26-2023) or above. 
To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can add additional variables to handle multiple distinct Vault-backed Azure setups, enabling you to use multiple [provider aliases](\/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.\n\nFor more details, see [Specifying Multiple Configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations).\n\n#### Required Terraform Variable\n\nTo use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.\n\n```hcl\nvariable \"tfc_vault_backed_azure_dynamic_credentials\" {\n  description = \"Object containing Vault-backed Azure dynamic credentials configuration\"\n  type = object({\n    default = object({\n      client_id_file_path = string\n      client_secret_file_path = string\n    })\n    aliases = map(object({\n      client_id_file_path = string\n      client_secret_file_path = string\n    }))\n  })\n}\n```\n\n#### Example Usage\n\n##### AzureRM Provider\n\n```hcl\nprovider \"azurerm\" {\n  features {}\n  \/\/ use_cli should be set to false to yield more accurate error messages on auth failure.\n  use_cli                 = false\n  client_id_file_path     = var.tfc_vault_backed_azure_dynamic_credentials.default.client_id_file_path\n  client_secret_file_path = var.tfc_vault_backed_azure_dynamic_credentials.default.client_secret_file_path\n  subscription_id         = \"00000000-0000-0000-0000-000000000000\"\n  tenant_id               = \"10000000-0000-0000-0000-000000000000\"\n}\n\nprovider \"azurerm\" {\n  
features {}\n  \/\/ use_cli should be set to false to yield more accurate error messages on auth failure.\n  use_cli                 = false\n  alias                   = \"ALIAS1\"\n  client_id_file_path     = var.tfc_vault_backed_azure_dynamic_credentials.aliases[\"ALIAS1\"].client_id_file_path\n  client_secret_file_path = var.tfc_vault_backed_azure_dynamic_credentials.aliases[\"ALIAS1\"].client_secret_file_path\n  subscription_id         = \"00000000-0000-0000-0000-000000000000\"\n  tenant_id               = \"20000000-0000-0000-0000-000000000000\"\n}\n```\n\n##### Microsoft Entra ID Provider (previously AzureAD)\n\n```hcl\n\/\/ Note: the azuread provider does not accept the `features` block or the\n\/\/ `subscription_id` argument; those are specific to the azurerm provider.\nprovider \"azuread\" {\n  \/\/ use_cli should be set to false to yield more accurate error messages on auth failure.\n  use_cli                 = false\n  client_id_file_path     = var.tfc_vault_backed_azure_dynamic_credentials.default.client_id_file_path\n  client_secret_file_path = var.tfc_vault_backed_azure_dynamic_credentials.default.client_secret_file_path\n  tenant_id               = \"10000000-0000-0000-0000-000000000000\"\n}\n\nprovider \"azuread\" {\n  \/\/ use_cli should be set to false to yield more accurate error messages on auth failure.\n  use_cli                 = false\n  alias                   = \"ALIAS1\"\n  client_id_file_path     = var.tfc_vault_backed_azure_dynamic_credentials.aliases[\"ALIAS1\"].client_id_file_path\n  client_secret_file_path = var.tfc_vault_backed_azure_dynamic_credentials.aliases[\"ALIAS1\"].client_secret_file_path\n  tenant_id               = \"20000000-0000-0000-0000-000000000000\"\n}\n```","site":"terraform","answers_cleaned":"    page title  Vault Backed Dynamic Credentials with the Azure Providers   HCP Terraform description       Use OpenID Connect and Vault to get short term credentials for the Azure Terraform providers in   your 
HCP Terraform runs         Vault Backed Dynamic Credentials with the Azure Provider       Important    If you are self hosting  HCP Terraform agents   terraform cloud docs agents   ensure your agents use  v1 8 0   terraform cloud docs agents changelog 1 8 0 04 18 2023  or above  To use the latest dynamic credentials features   upgrade your agents to the latest version   terraform cloud docs agents changelog    You can use HCP Terraform s native OpenID Connect integration with Vault to use  Vault backed dynamic credentials   terraform cloud docs workspaces dynamic provider credentials vault backed  with the Azure provider in your HCP Terraform runs  Configuring the integration requires the following steps   1     Configure Vault Dynamic Provider Credentials   configure vault dynamic provider credentials     Set up a trust configuration between Vault and HCP Terraform  create Vault roles and policies for your HCP Terraform workspaces  and add environment variables to those workspaces  2     Configure the Vault Azure Secrets Engine   configure vault azure secrets engine     Set up the Azure secrets engine in your Vault instance  3     Configure HCP Terraform   configure hcp terraform     Add additional environment variables to the HCP Terraform workspaces where you want to use Vault Backed Dynamic Credentials  4     Configure Terraform Providers   configure terraform providers     Configure your Terraform providers to work with Vault backed Dynamic Credentials   Once you complete this setup  HCP Terraform automatically authenticates with Azure via Vault generated credentials during the plan and apply phase of each run  The Azure provider s authentication is only valid for the length of the plan or apply phase      Configure Vault Dynamic Provider Credentials You must first set up Vault dynamic provider credentials before you can use Vault backed dynamic credentials  This includes setting up the JWT auth backend in Vault  configuring trust between HCP Terraform and 
Vault  and populating the required environment variables in your HCP Terraform workspace    See the setup instructions for Vault dynamic provider credentials   terraform cloud docs workspaces dynamic provider credentials vault configuration      Configure Vault Azure Secrets Engine Follow the instructions in the Vault documentation for  setting up the Azure secrets engine in your Vault instance   vault docs secrets azure   You can also do this configuration through Terraform  Refer to our  example Terraform configuration  https   github com hashicorp terraform dynamic credentials setup examples tree main vault backed azure       Configure HCP Terraform Next  you need to set certain environment variables in your HCP Terraform workspace to authenticate HCP Terraform with Azure using Vault backed dynamic credentials  These variables are in addition to those you previously set while configuring  Vault dynamic provider credentials   configure vault dynamic provider credentials   You can add these as workspace variables or as a  variable set   terraform cloud docs workspaces variables managing variables variable sets   When you configure dynamic provider credentials with multiple provider configurations of the same type  use either a default variable or a tagged alias variable name for each provider configuration  Refer to  Specifying Multiple Configurations   specifying multiple configurations  for more details       Required Environment Variables   Variable                                                                                                                                          Value                       Notes                                                                                                                                                                                                                                                                                                                                                                
                                                                                                                                                                                                                                                                                                                          TFC VAULT BACKED AZURE AUTH  br    TFC VAULT BACKED AZURE AUTH  TAG   br     Default variable not supported                                       true                       Requires   v1 8 0   or later if self managing agents  Must be present and set to  true   or HCP Terraform will not attempt to authenticate with Azure                                                                                                    TFC VAULT BACKED AZURE RUN VAULT ROLE  br    TFC VAULT BACKED AZURE RUN VAULT ROLE  TAG   br    TFC DEFAULT VAULT BACKED AZURE RUN VAULT ROLE    The role to use in Vault    Requires   v1 8 0   or later if self managing agents  Optional if  TFC VAULT BACKED AZURE PLAN VAULT ROLE  and  TFC VAULT BACKED AZURE APPLY VAULT ROLE  are both provided  These variables are described  below   optional environment variables          Optional Environment Variables You may need to set these variables  depending on your use case     Variable                                                                                                                                                Value                                                                                                                                                                                                     Notes                                                                                                                                                                                                                                                                                                                                                                                              
                                                                                                                                                                                                                                                                    TFC VAULT BACKED AZURE MOUNT PATH  br    TFC VAULT BACKED AZURE MOUNT PATH  TAG   br    TFC DEFAULT VAULT BACKED AZURE MOUNT PATH                      The mount path of the Azure secrets engine in Vault                                                                                                                                                       Requires   v1 8 0   or later if self managing agents  Defaults to  azure                                                                           TFC VAULT BACKED AZURE PLAN VAULT ROLE  br    TFC VAULT BACKED AZURE PLAN VAULT ROLE  TAG   br    TFC DEFAULT VAULT BACKED AZURE PLAN VAULT ROLE       The Vault role to use the plan phase of a run                                                                                                                                                             Requires   v1 8 0   or later if self managing agents  Will fall back to the value of  TFC VAULT BACKED AZURE RUN VAULT ROLE  if not provided       TFC VAULT BACKED AZURE APPLY VAULT ROLE  br    TFC VAULT BACKED AZURE APPLY VAULT ROLE  TAG   br    TFC DEFAULT VAULT BACKED AZURE APPLY VAULT ROLE    The Vault role to use for the apply phase of a run                                                                                                                                                        Requires   v1 8 0   or later if self managing agents  Will fall back to the value of  TFC VAULT BACKED AZURE RUN VAULT ROLE  if not provided       TFC VAULT BACKED AZURE SLEEP SECONDS  br    TFC VAULT BACKED AZURE SLEEP SECONDS  TAG   br    TFC DEFAULT VAULT BACKED AZURE SLEEP SECONDS             The amount of time to wait  in seconds  after obtaining temporary credentials from Vault 
 e g    30  for 30 seconds  Must be 1500 seconds  25 minutes  or less                                            Requires   v1 12 0   or later if self managing agents  Can be used to mitigate eventual consistency issues in Azure                                TFC VAULT BACKED AZURE VAULT CONFIG  br    TFC VAULT BACKED AZURE VAULT CONFIG  TAG   br    TFC DEFAULT VAULT BACKED AZURE VAULT CONFIG                The name of the non default Vault configuration for workspaces using  multiple Vault configurations   terraform cloud docs workspaces dynamic provider credentials specifying multiple configurations     Requires   v1 12 0   or later if self managing agents  Will fall back to using the default Vault configuration if not provided                      Configure Terraform Providers The final step is to directly configure your Azure and Vault providers       Configure the AzureRM or Microsoft Entra ID Ensure you pass a value for the  subscription id  and  tenant id  arguments in your provider configuration block or set the  ARM SUBSCRIPTION ID  and  ARM TENANT ID  variables in your workspace   Do not set values for  client id    use oidc   or  oidc token  in your provider configuration block  Additionally  do not set variable values for  ARM CLIENT ID    ARM USE OIDC   or  ARM OIDC TOKEN        Configure the Vault Provider If you were previously using the Vault provider to authenticate the Azure provider  remove any existing usage of the Azure secrets engine from your Terraform Code  This includes the   vault azure access credentials   https   registry terraform io providers hashicorp vault latest docs data sources azure access credentials  data source and any instances of   vault generic secret   https   registry terraform io providers hashicorp vault latest docs data sources generic secret  you previously used to generate Azure credentials       Specifying Multiple Configurations       Important    Ensure you are using version   3 60 0   or later of the   AzureRM 
provider   and version   2 43 0   or later of the   Microsoft Entra ID provider    previously Azure Active Directory  as required functionality was introduced in these provider versions        Important    If you are self hosting  HCP Terraform agents   terraform cloud docs agents   ensure your agents use  v1 12 0   terraform cloud docs agents changelog 1 12 0 07 26 2023  or above  To use the latest dynamic credentials features   upgrade your agents to the latest version   terraform cloud docs agents changelog    You can add additional variables to handle multiple distinct Vault backed Azure setups  enabling you to use multiple  provider aliases   terraform language providers configuration alias multiple provider configurations  within the same workspace  You can configure each set of credentials independently  or use default values by configuring the variables prefixed with  TFC DEFAULT     For more details  see  Specifying Multiple Configurations   terraform cloud docs workspaces dynamic provider credentials specifying multiple configurations         Required Terraform Variable  To use additional configurations  add the following code to your Terraform configuration  This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks      hcl variable  tfc vault backed azure dynamic credentials      description    Object containing Vault backed Azure dynamic credentials configuration    type   object       default   object         client id file path   string       client secret file path   string            aliases   map object         client id file path   string       client secret file path   string                          Example Usage        AzureRM Provider     hcl provider  azurerm      features         use cli should be set to false to yield more accurate error messages on auth failure    use cli                   false   client id file path       var tfc vault backed azure 
dynamic credentials default client id file path   client secret file path   var tfc vault backed azure dynamic credentials default client secret file path   subscription id            00000000 0000 0000 0000 000000000000    tenant id                  10000000 0000 0000 0000 000000000000     provider  azurerm      features         use cli should be set to false to yield more accurate error messages on auth failure    use cli                   false   alias                      ALIAS1    client id file path       var tfc vault backed azure dynamic credentials aliases  ALIAS1   client id file path   client secret file path   var tfc vault backed azure dynamic credentials aliases  ALIAS1   client secret file path   subscription id            00000000 0000 0000 0000 000000000000    tenant id                  20000000 0000 0000 0000 000000000000               Microsoft Entra ID Provider  previously AzureAD      hcl provider  azuread      features         use cli should be set to false to yield more accurate error messages on auth failure    use cli                   false   client id file path       var tfc vault backed azure dynamic credentials default client id file path   client secret file path   var tfc vault backed azure dynamic credentials default client secret file path   subscription id            00000000 0000 0000 0000 000000000000    tenant id                  10000000 0000 0000 0000 000000000000     provider  azuread      features         use cli should be set to false to yield more accurate error messages on auth failure    use cli                   false   alias                      ALIAS1    client id file path       var tfc vault backed azure dynamic credentials aliases  ALIAS1   client id file path   client secret file path   var tfc vault backed azure dynamic credentials aliases  ALIAS1   client secret file path   subscription id            00000000 0000 0000 0000 000000000000    tenant id                  20000000 0000 0000 0000 000000000000       "}
{"questions":"terraform your HCP Terraform runs Use OpenID Connect and Vault to get short term credentials for the GCP Terraform provider in Vault Backed Dynamic Credentials with the GCP Provider page title Vault Backed Dynamic Credentials with the GCP Provider Workspaces HCP Terraform Important If you are self hosting HCP Terraform agents terraform cloud docs agents ensure your agents use v1 8 0 terraform cloud docs agents changelog 1 8 0 04 18 2023 or above To use the latest dynamic credentials features upgrade your agents to the latest version terraform cloud docs agents changelog","answers":"---\npage_title: Vault-Backed Dynamic Credentials with the GCP Provider - Workspaces - HCP Terraform\ndescription: >-\n  Use OpenID Connect and Vault to get short-term credentials for the GCP Terraform provider in\n  your HCP Terraform runs.\n---\n\n# Vault-Backed Dynamic Credentials with the GCP Provider\n\n~> **Important:** If you are self-hosting [HCP Terraform agents](\/terraform\/cloud-docs\/agents), ensure your agents use [v1.8.0](\/terraform\/cloud-docs\/agents\/changelog#1-8-0-04-18-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](\/terraform\/cloud-docs\/agents\/changelog).\n\nYou can use HCP Terraform\u2019s native OpenID Connect integration with Vault to use [Vault-backed dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/vault-backed) with the GCP provider in your HCP Terraform runs. Configuring the integration requires the following steps:\n\n1. **[Configure Vault Dynamic Provider Credentials](#configure-vault-dynamic-provider-credentials)**: Set up a trust configuration between Vault and HCP Terraform, create Vault roles and policies for your HCP Terraform workspaces, and add environment variables to those workspaces.\n2. **[Configure the Vault GCP Secrets Engine](#configure-vault-gcp-secrets-engine)**: Set up the GCP secrets engine in your Vault instance.\n3. 
**[Configure HCP Terraform](#configure-hcp-terraform)**: Add additional environment variables to the HCP Terraform workspaces where you want to use Vault-Backed Dynamic Credentials.\n4. **[Configure Terraform Providers](#configure-terraform-providers)**: Configure your Terraform providers to work with Vault-backed dynamic credentials.\n\nOnce you complete this setup, HCP Terraform automatically authenticates with GCP via Vault-generated credentials during the plan and apply phase of each run. The GCP provider's authentication is only valid for the length of the plan or apply phase.\n\n## Configure Vault Dynamic Provider Credentials\nYou must first set up Vault dynamic provider credentials before you can use Vault-backed dynamic credentials. This includes setting up the JWT auth backend in Vault, configuring trust between HCP Terraform and Vault, and populating the required environment variables in your HCP Terraform workspace.\n\n[See the setup instructions for Vault dynamic provider credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/vault-configuration).\n\n## Configure Vault GCP Secrets Engine\nFollow the instructions in the Vault documentation for [setting up the GCP secrets engine in your Vault instance](\/vault\/docs\/secrets\/gcp). You can also do this configuration through Terraform. Refer to our [example Terraform configuration](https:\/\/github.com\/hashicorp\/terraform-dynamic-credentials-setup-examples\/tree\/main\/vault-backed\/gcp).\n\n~> **Important:** Carefully consider the limitations and differences between each supported credential type in the GCP secrets engine. These limitations carry over to HCP Terraform\u2019s usage of these credentials for authenticating the GCP provider.\n\n## Configure HCP Terraform\nNext, you need to set certain environment variables in your HCP Terraform workspace to authenticate HCP Terraform with GCP using Vault-backed dynamic credentials. 
These variables are in addition to those you previously set while configuring [Vault dynamic provider credentials](#configure-vault-dynamic-provider-credentials). You can add these as workspace variables or as a [variable set](\/terraform\/cloud-docs\/workspaces\/variables\/managing-variables#variable-sets). When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.\n\n### Common Environment Variables\nThe below variables apply to all GCP auth types.\n\n#### Required Common Environment Variables\n| Variable                                                                                                                   | Value                                                                                                                                                                                                                  | Notes                                                                                                                                                |\n|----------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_BACKED_GCP_AUTH`<br \/>`TFC_VAULT_BACKED_GCP_AUTH[_TAG]`<br \/>_(Default variable not supported)_                 | `true`                                                                                                                                  
                                                                               | Requires **v1.8.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate with GCP. |\n| `TFC_VAULT_BACKED_GCP_AUTH_TYPE`<br \/>`TFC_VAULT_BACKED_GCP_AUTH_TYPE[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_GCP_AUTH_TYPE` | Specifies the type of authentication to perform with GCP. Must be one of the following: `roleset\/access_token`, `roleset\/service_account_key`, `static_account\/access_token`, or `static_account\/service_account_key`. | Requires **v1.8.0** or later if self-managing agents.                                                                                                |\n\n#### Optional Common Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable                                                                                                                            | Value                                                                                                                                                                                                   | Notes                                                                                                                           |\n|-------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_BACKED_GCP_MOUNT_PATH`<br \/>`TFC_VAULT_BACKED_GCP_MOUNT_PATH[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_GCP_MOUNT_PATH`       | The mount path of the GCP secrets engine in Vault.                         
                                                                                                                             | Requires **v1.8.0** or later if self-managing agents. Defaults to `gcp`.                                                        |\n| `TFC_VAULT_BACKED_GCP_VAULT_CONFIG`<br \/>`TFC_VAULT_BACKED_GCP_VAULT_CONFIG[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_GCP_VAULT_CONFIG` | The name of the non-default Vault configuration for workspaces using [multiple Vault configurations](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/specifying-multiple-configurations). | Requires **v1.12.0** or later if self-managing agents. Will fall back to using the default Vault configuration if not provided. |\n\n### Roleset Specific Environment Variables\nThese environment variables are only valid if the `TFC_VAULT_BACKED_GCP_AUTH_TYPE` is `roleset\/access_token` or `roleset\/service_account_key`.\n\n#### Required Roleset Specific Environment Variables\n| Variable                                                                                                                                           | Value                        | Notes                                                                                                                                                                                                                                                                  |\n|----------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_BACKED_GCP_RUN_VAULT_ROLESET`<br \/>`TFC_VAULT_BACKED_GCP_RUN_VAULT_ROLESET[_TAG]`<br 
\/>`TFC_DEFAULT_VAULT_BACKED_GCP_RUN_VAULT_ROLESET` | The roleset to use in Vault. | Requires **v1.8.0** or later if self-managing agents. Optional if `TFC_VAULT_BACKED_GCP_PLAN_VAULT_ROLESET` and `TFC_VAULT_BACKED_GCP_APPLY_VAULT_ROLESET` are both provided. These variables are described [below](#optional-roleset-specific-environment-variables). |\n\n#### Optional Roleset Specific Environment Variables\nYou may need to set these variables, depending on your use case.\n\n| Variable                                                                                                                                                 | Value                                            | Notes                                                                                                                                          |\n|----------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|\n| `TFC_VAULT_BACKED_GCP_PLAN_VAULT_ROLESET`<br \/>`TFC_VAULT_BACKED_GCP_PLAN_VAULT_ROLESET[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_GCP_PLAN_VAULT_ROLESET`    | The roleset to use for the plan phase of a run.  | Requires **v1.8.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_BACKED_GCP_RUN_VAULT_ROLESET` if not provided. |\n| `TFC_VAULT_BACKED_GCP_APPLY_VAULT_ROLESET`<br \/>`TFC_VAULT_BACKED_GCP_APPLY_VAULT_ROLESET[_TAG]`<br \/>`TFC_DEFAULT_VAULT_BACKED_GCP_APPLY_VAULT_ROLESET` | The roleset to use for the apply phase of a run. | Requires **v1.8.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_BACKED_GCP_RUN_VAULT_ROLESET` if not provided. 

### Static Account Specific Environment Variables

These environment variables are only valid if the `TFC_VAULT_BACKED_GCP_AUTH_TYPE` is `static_account/access_token` or `static_account/service_account_key`.

#### Required Static Account Specific Environment Variables

| Variable | Value | Notes |
|----------|-------|-------|
| `TFC_VAULT_BACKED_GCP_RUN_VAULT_STATIC_ACCOUNT`<br />`TFC_VAULT_BACKED_GCP_RUN_VAULT_STATIC_ACCOUNT[_TAG]`<br />`TFC_DEFAULT_VAULT_BACKED_GCP_RUN_VAULT_STATIC_ACCOUNT` | The static account to use in Vault. | Requires **v1.8.0** or later if self-managing agents. Optional if `TFC_VAULT_BACKED_GCP_PLAN_VAULT_STATIC_ACCOUNT` and `TFC_VAULT_BACKED_GCP_APPLY_VAULT_STATIC_ACCOUNT` are both provided. These variables are described [below](#optional-static-account-specific-environment-variables). |

#### Optional Static Account Specific Environment Variables

You may need to set these variables, depending on your use case.

| Variable | Value | Notes |
|----------|-------|-------|
| `TFC_VAULT_BACKED_GCP_PLAN_VAULT_STATIC_ACCOUNT`<br />`TFC_VAULT_BACKED_GCP_PLAN_VAULT_STATIC_ACCOUNT[_TAG]`<br />`TFC_DEFAULT_VAULT_BACKED_GCP_PLAN_VAULT_STATIC_ACCOUNT` | The static account to use for the plan phase of a run. | Requires **v1.8.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_BACKED_GCP_RUN_VAULT_STATIC_ACCOUNT` if not provided. |
| `TFC_VAULT_BACKED_GCP_APPLY_VAULT_STATIC_ACCOUNT`<br />`TFC_VAULT_BACKED_GCP_APPLY_VAULT_STATIC_ACCOUNT[_TAG]`<br />`TFC_DEFAULT_VAULT_BACKED_GCP_APPLY_VAULT_STATIC_ACCOUNT` | The static account to use for the apply phase of a run. | Requires **v1.8.0** or later if self-managing agents. Will fall back to the value of `TFC_VAULT_BACKED_GCP_RUN_VAULT_STATIC_ACCOUNT` if not provided. |
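The plan/apply split above lets a run read with one identity and write with another. As an illustrative sketch (the account names and the `tfe_workspace.this` reference are placeholders, not values from this guide), you might point the plan phase at a read-only static account and the apply phase at a more privileged one:

```hcl
# Sketch only: static account names and the workspace reference are placeholders.
resource "tfe_variable" "plan_static_account" {
  key          = "TFC_VAULT_BACKED_GCP_PLAN_VAULT_STATIC_ACCOUNT"
  value        = "tfc-plan-readonly" # read-only account for speculative plans
  category     = "env"
  workspace_id = tfe_workspace.this.id
}

resource "tfe_variable" "apply_static_account" {
  key          = "TFC_VAULT_BACKED_GCP_APPLY_VAULT_STATIC_ACCOUNT"
  value        = "tfc-apply-admin" # privileged account for applies only
  category     = "env"
  workspace_id = tfe_workspace.this.id
}
```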

## Configure Terraform Providers

The final step is to directly configure your GCP and Vault providers.

### Configure the GCP Provider

Ensure you pass values for the `project` and `region` arguments into the provider configuration block.

Ensure you are not setting values or environment variables for `GOOGLE_CREDENTIALS` or `GOOGLE_APPLICATION_CREDENTIALS`. Otherwise, these values may interfere with dynamic provider credentials.

### Configure the Vault Provider

If you were previously using the Vault provider to authenticate the GCP provider, remove any existing usage of the GCP secrets engine from your Terraform code. This includes instances of [`vault_generic_secret`](https://registry.terraform.io/providers/hashicorp/vault/latest/docs/data-sources/generic_secret) that you previously used to generate GCP credentials.

### Specifying Multiple Configurations

~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.12.0](/terraform/cloud-docs/agents/changelog#1-12-0-07-26-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).

You can add additional variables to handle multiple distinct Vault-backed GCP setups, enabling you to use multiple [provider aliases](/terraform/language/providers/configuration#alias-multiple-provider-configurations) within the same workspace. You can configure each set of credentials independently, or use default values by configuring the variables prefixed with `TFC_DEFAULT_`.

For more details, see [Specifying Multiple Configurations](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/specifying-multiple-configurations).

#### Required Terraform Variable

To use additional configurations, add the following code to your Terraform configuration. This lets HCP Terraform supply variable values that you can then use to map authentication and configuration details to the correct provider blocks.

```hcl
variable "tfc_vault_backed_gcp_dynamic_credentials" {
  description = "Object containing Vault-backed GCP dynamic credentials configuration"
  type = object({
    default = object({
      credentials  = string
      access_token = string
    })
    aliases = map(object({
      credentials  = string
      access_token = string
    }))
  })
}
```

#### Example Usage

##### Access Token

```hcl
provider "google" {
  access_token = var.tfc_vault_backed_gcp_dynamic_credentials.default.access_token
}

provider "google" {
  alias        = "ALIAS1"
  access_token = var.tfc_vault_backed_gcp_dynamic_credentials.aliases["ALIAS1"].access_token
}
```

##### Credentials

```hcl
provider "google" {
  credentials = var.tfc_vault_backed_gcp_dynamic_credentials.default.credentials
}

provider "google" {
  alias       = "ALIAS1"
  credentials = var.tfc_vault_backed_gcp_dynamic_credentials.aliases["ALIAS1"].credentials
}
```

---
page_title: Vault-Backed Dynamic Credentials with the GCP Provider - Workspaces - HCP Terraform
description: >-
  Use OpenID Connect and Vault to get short-term credentials for the GCP Terraform provider in
  your HCP Terraform runs.
---

# Vault-Backed Dynamic Credentials with the GCP Provider

~> **Important:** If you are self-hosting [HCP Terraform agents](/terraform/cloud-docs/agents), ensure your agents use [v1.8.0](/terraform/cloud-docs/agents/changelog#1-8-0-04-18-2023) or above. To use the latest dynamic credentials features, [upgrade your agents to the latest version](/terraform/cloud-docs/agents/changelog).

You can use HCP Terraform's native OpenID Connect integration with Vault to use [Vault-backed dynamic credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/vault-backed) with the GCP
provider in your HCP Terraform runs. Configuring the integration requires the following steps:

1. **[Configure Vault Dynamic Provider Credentials](#configure-vault-dynamic-provider-credentials)**: Set up a trust configuration between Vault and HCP Terraform, create Vault roles and policies for your HCP Terraform workspaces, and add environment variables to those workspaces.
2. **[Configure the Vault GCP Secrets Engine](#configure-vault-gcp-secrets-engine)**: Set up the GCP secrets engine in your Vault instance.
3. **[Configure HCP Terraform](#configure-hcp-terraform)**: Add additional environment variables to the HCP Terraform workspaces where you want to use Vault-Backed Dynamic Credentials.
4. **[Configure Terraform Providers](#configure-terraform-providers)**: Configure your Terraform providers to work with Vault-backed dynamic credentials.

Once you complete this setup, HCP Terraform automatically authenticates with GCP via Vault-generated credentials during the plan and apply phase of each run. The GCP provider's authentication is only valid for the length of the plan or apply phase.

## Configure Vault Dynamic Provider Credentials

You must first set up Vault dynamic provider credentials before you can use Vault-backed dynamic credentials. This includes setting up the JWT auth backend in Vault, configuring trust between HCP Terraform and Vault, and populating the required environment variables in your HCP Terraform workspace.

See the [setup instructions for Vault dynamic provider credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/vault-configuration).

## Configure Vault GCP Secrets Engine

Follow the instructions in the Vault documentation for [setting up the GCP secrets engine in your Vault instance](/vault/docs/secrets/gcp). You can also do this configuration through Terraform. Refer to our [example Terraform configuration](https://github.com/hashicorp/terraform-dynamic-credentials-setup-examples/tree/main/vault-backed/gcp).
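When configuring the secrets engine through Terraform, the Vault provider exposes resources for the mount and for rolesets. The following is a sketch only, assuming an access-token roleset; the project ID, roleset name, and IAM role are placeholders, and you should verify the exact arguments against the Vault provider documentation:

```hcl
# Sketch only: project, roleset name, and role are placeholders.
resource "vault_gcp_secret_backend" "gcp" {
  path        = "gcp"                     # matches the default TFC_VAULT_BACKED_GCP_MOUNT_PATH
  credentials = file("credentials.json")  # service account key that Vault uses to manage GCP IAM
}

resource "vault_gcp_secret_roleset" "tfc" {
  backend      = vault_gcp_secret_backend.gcp.path
  roleset      = "tfc-roleset"
  secret_type  = "access_token"
  project      = "my-gcp-project"
  token_scopes = ["https://www.googleapis.com/auth/cloud-platform"]

  binding {
    resource = "//cloudresourcemanager.googleapis.com/projects/my-gcp-project"
    roles    = ["roles/viewer"]
  }
}
```

The `roleset` name here is what you would later reference from `TFC_VAULT_BACKED_GCP_RUN_VAULT_ROLESET`.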
~> **Important:** Carefully consider the limitations and differences between each supported credential type in the GCP secrets engine. These limitations carry over to HCP Terraform's usage of these credentials for authenticating the GCP provider.

## Configure HCP Terraform

Next, you need to set certain environment variables in your HCP Terraform workspace to authenticate HCP Terraform with GCP using Vault-backed dynamic credentials. These variables are in addition to those you previously set while configuring [Vault dynamic provider credentials](#configure-vault-dynamic-provider-credentials). You can add these as workspace variables or as a [variable set](/terraform/cloud-docs/workspaces/variables/managing-variables#variable-sets). When you configure dynamic provider credentials with multiple provider configurations of the same type, use either a default variable or a tagged alias variable name for each provider configuration. Refer to [Specifying Multiple Configurations](#specifying-multiple-configurations) for more details.

### Common Environment Variables

The below variables apply to all GCP auth types.

#### Required Common Environment Variables

| Variable | Value | Notes |
|----------|-------|-------|
| `TFC_VAULT_BACKED_GCP_AUTH`<br />`TFC_VAULT_BACKED_GCP_AUTH[_TAG]`<br />*Default variable not supported* | `true` | Requires **v1.8.0** or later if self-managing agents. Must be present and set to `true`, or HCP Terraform will not attempt to authenticate with GCP. |
| `TFC_VAULT_BACKED_GCP_AUTH_TYPE`<br />`TFC_VAULT_BACKED_GCP_AUTH_TYPE[_TAG]`<br />`TFC_DEFAULT_VAULT_BACKED_GCP_AUTH_TYPE` | Specifies the type of authentication to perform with GCP. Must be one of the following: `roleset/access_token`, `roleset/service_account_key`, `static_account/access_token`, or `static_account/service_account_key`. | Requires **v1.8.0** or later if self-managing agents. |

#### Optional Common Environment Variables

You may need to set these variables, depending on your use case.

| Variable | Value | Notes |
|----------|-------|-------|
| `TFC_VAULT_BACKED_GCP_MOUNT_PATH`<br />`TFC_VAULT_BACKED_GCP_MOUNT_PATH[_TAG]`<br />`TFC_DEFAULT_VAULT_BACKED_GCP_MOUNT_PATH` | The mount path of the GCP secrets engine in Vault. | Requires **v1.8.0** or later if self-managing agents. Defaults to `gcp`. |
| `TFC_VAULT_BACKED_GCP_VAULT_CONFIG`<br />`TFC_VAULT_BACKED_GCP_VAULT_CONFIG[_TAG]`<br />`TFC_DEFAULT_VAULT_BACKED_GCP_VAULT_CONFIG` | The name of the non-default Vault configuration for workspaces using [multiple Vault configurations](/terraform/cloud-docs/workspaces/dynamic-provider-credentials/specifying-multiple-configurations). | Requires **v1.12.0** or later if self-managing agents. Will fall back to using the default Vault configuration if not provided. |
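The two required common variables above can be added through the UI, the API, or the `tfe` provider. A minimal sketch of the latter, assuming a workspace resource named `tfe_workspace.this` (a placeholder, not part of this guide):

```hcl
# Sketch only: the workspace reference is a placeholder.
resource "tfe_variable" "enable_vault_backed_gcp" {
  key          = "TFC_VAULT_BACKED_GCP_AUTH"
  value        = "true" # required, or HCP Terraform will not attempt GCP auth
  category     = "env"
  workspace_id = tfe_workspace.this.id
}

resource "tfe_variable" "vault_backed_gcp_auth_type" {
  key          = "TFC_VAULT_BACKED_GCP_AUTH_TYPE"
  value        = "roleset/access_token" # one of the four supported auth types
  category     = "env"
  workspace_id = tfe_workspace.this.id
}
```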
---
page_title: Workspace Variables - HCP Terraform
description: >-
  HCP Terraform workspace variables let you customize configurations, modify
  Terraform's behavior, and store information like provider credentials.
---

# Variables

HCP Terraform workspace variables let you customize configurations, modify Terraform's behavior, set up [dynamic provider credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials), and store information like static provider credentials.

You can set variables specifically for each workspace or you can create variable sets to reuse the same variables across multiple workspaces. For example, you could define a variable set of provider credentials and automatically apply it to all of the workspaces using that provider. You can use the command line to specify variable values for each plan or apply.
Otherwise, HCP Terraform applies workspace variables to all runs within that workspace.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

## Types

You can create both environment variables and Terraform variables in HCP Terraform.

> **Hands-on:** Try the [Create and Use a Variable Set](/terraform/tutorials/cloud-get-started/cloud-create-variable-set) and [Create Infrastructure](/terraform/tutorials/cloud-get-started/cloud-workspace-configure) tutorials to set environment and Terraform variables in HCP Terraform.

### Environment Variables

HCP Terraform performs Terraform runs on disposable Linux worker VMs using a POSIX-compatible shell. Before running Terraform operations, HCP Terraform uses the `export` command to populate the shell with environment variables.

Environment variables can store provider credentials and other data. Refer to your provider's Terraform Registry documentation for a full list of supported shell environment variables (e.g., authentication variables for [AWS](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#environment-variables), [Google Cloud Platform](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/getting_started#adding-credentials), and [Azure](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs#argument-reference)). Environment variables can also [modify Terraform's behavior](/terraform/cli/config/environment-variables). For example, `TF_LOG` enables detailed logs for debugging.

#### Parallelism

You can use the `TFE_PARALLELISM` environment variable when your infrastructure providers produce errors on concurrent operations or use non-standard rate limiting. The `TFE_PARALLELISM` variable sets the `-parallelism=<N>` flag for `terraform plan` and `terraform apply` ([more about `parallelism`](/terraform/internals/graph#walking-the-graph)). Valid values are between 1 and 256, inclusive, and the default is `10`. HCP Terraform agents do not support `TFE_PARALLELISM`, but you can specify flags as environment variables directly via [`TF_CLI_ARGS_name`](/terraform/cli/config/environment-variables#tf-cli-args). In these cases, use `TF_CLI_ARGS_plan="-parallelism=<N>"` or `TF_CLI_ARGS_apply="-parallelism=<N>"` instead.

!> **Warning:** We recommend reading and understanding [Terraform parallelism](https://support.hashicorp.com/hc/en-us/articles/10348130482451) prior to setting `TFE_PARALLELISM`. You can also contact HashiCorp support for direct advice.

#### Dynamic Credentials

You can configure [dynamic credentials](/terraform/cloud-docs/workspaces/dynamic-provider-credentials) for certain providers using environment variables [at the workspace level](/terraform/cloud-docs/workspaces/variables/managing-variables#workspace-specific-variables) or using [variable sets](/terraform/cloud-docs/workspaces/variables/managing-variables#variable-sets).

Dynamic credentials allow you to use temporary per-run credentials, eliminating the need to manually rotate secrets.

### Terraform Variables

Terraform variables refer to [input variables](/terraform/language/values/variables) that define parameters without hardcoding them into the configuration.
For example, you could create variables that let users specify the number and type of Amazon Web Services EC2 instances they want to provision with a Terraform module.

```hcl
variable "instance_count" {
  description = "Number of instances to provision."
  type        = number
  default     = 2
}
```

You can then reference this variable in your configuration.

```hcl
module "ec2_instances" {
  source = "./modules/aws-instance"

  instance_count = var.instance_count
  # ...
}
```

If a required input variable is missing, Terraform plans in the workspace will fail and print an explanation in the log.

## Scope

Each environment and Terraform variable can have one of the following scopes:

| Scope | Description | Resources |
|-------|-------------|-----------|
| Run-Specific | Apply to a specific run within a single workspace. | [Specify Run-Specific Variables](/terraform/cloud-docs/workspaces/variables/managing-variables#run-specific-variables) |
| Workspace-Specific | Apply to a single workspace. | [Create Workspace-Specific Variables](/terraform/cloud-docs/workspaces/variables/managing-variables#workspace-specific-variables), [Loading Variables from Files](/terraform/cloud-docs/workspaces/variables/managing-variables#loading-variables-from-files), [Workspace-Specific Variables API](/terraform/cloud-docs/api-docs/workspace-variables) |
| Workspace-Scoped Variable Set | Apply to multiple workspaces within the same organization. | [Create Variable Sets](/terraform/cloud-docs/workspaces/variables/managing-variables#variable-sets) and [Variable Sets API](/terraform/cloud-docs/api-docs/variable-sets) |
| Project-Scoped Variable Set | Automatically applied to all current and future workspaces within a project. | [Create Variable Sets](/terraform/cloud-docs/workspaces/variables/managing-variables#variable-sets) and [Variable Sets API](/terraform/cloud-docs/api-docs/variable-sets) |
| Global Variable Set | Automatically applied to all current and future workspaces within an organization. | [Create Variable Sets](/terraform/cloud-docs/workspaces/variables/managing-variables#variable-sets) and [Variable Sets API](/terraform/cloud-docs/api-docs/variable-sets) |

## Precedence

> **Hands On:** The [Manage Multiple Variable Sets in HCP Terraform](/terraform/tutorials/cloud/cloud-multiple-variable-sets) tutorial shows how to manage multiple variable sets and demonstrates variable precedence.

There may be cases when a workspace contains conflicting variables of the same type with the same key. HCP Terraform marks overwritten variables in the UI.

HCP Terraform prioritizes and overwrites conflicting variables according to the following precedence:

### 1. Priority global variable sets

If [prioritized](/terraform/cloud-docs/workspaces/variables#precedence-with-priority-variable-sets), variables in a global variable set have precedence over all other variables with the same key.

### 2. Priority project-scoped variable sets

If [prioritized](/terraform/cloud-docs/workspaces/variables#precedence-with-priority-variable-sets), variables in a priority project-scoped variable set have precedence over variables with the same key set at a more specific scope.

### 3. Priority workspace-scoped variable sets

If [prioritized](/terraform/cloud-docs/workspaces/variables#precedence-with-priority-variable-sets), variables in a priority workspace-scoped variable set have precedence over variables with the same key set at a more specific scope.

### 4. Command line argument variables

When using a CLI workflow, variables applied to a run with either `-var` or `-var-file` overwrite workspace-specific and variable set variables that have the same key.

### 5. Local environment variables prefixed with `TF_VAR_`

When using a CLI workflow, local environment variables prefixed with `TF_VAR_` (e.g., `TF_VAR_replicas`) overwrite workspace-specific, variable set, and `.auto.tfvars` file variables that have the same key.

### 6. Workspace-specific variables

Workspace-specific variables always overwrite variables from variable sets that have the same key. Refer to [overwrite variables from variable sets](/terraform/cloud-docs/workspaces/variables/managing-variables#overwrite-variable-sets) for details.

### 7. Workspace-scoped variable sets

Variables in workspace-scoped variable sets are only applied to a subset of workspaces in an organization.

When workspace-scoped variable sets have conflicting variables, HCP Terraform compares the variable set names and uses values from the variable set with lexical precedence. Terraform and HCP Terraform operate on UTF-8 strings, and HCP Terraform sorts variable set names based on the lexical order of Unicode code points.

For example, if you apply `A_Variable_Set` and `B_Variable_Set` to the same workspace, HCP Terraform will use any conflicting variables from `A_Variable_Set`. This is the case regardless of which variable set has been edited most recently. HCP Terraform only considers the lexical ordering of variable set names when determining precedence.

### 8. Project-scoped variable sets

Workspace-specific variables and workspace-scoped variable sets always take precedence over project-scoped variable sets that are applied to workspaces within a project.

Variables in project-scoped variable sets are only applied to the workspaces within the specified projects.

When project-scoped variable sets have conflicting variables, HCP Terraform compares the variable set names and uses values from the variable set with lexical precedence.
Terraform and HCP Terraform operate on UTF-8 strings, and HCP Terraform sorts variable set names based on the lexical order of Unicode code points.\n\nFor example, if you apply `A_Variable_Set` and `B_Variable_Set` to the same project, HCP Terraform uses any conflicting variables from `A_Variable_Set`. This is the case regardless of which variable set has been edited most recently. HCP Terraform only considers the lexical ordering of variable set names when determining precedence.\n\n### 9. Global variable sets\n\nWorkspace and project-scoped variable sets always take precedence over global variable sets that are applied to all workspaces within an organization. Terraform does not allow global variable sets to contain variables with the same key, so they cannot conflict.\n\n### 10. `*.auto.tfvars` variable files\n\nVariables in the HCP Terraform workspace and variables provided through the command line always overwrite variables with the same key from files ending in `.auto.tfvars`.\n\n### 11. `terraform.tfvars` variable file\n\nVariables in the `.auto.tfvars` files take precedence over variables in the `terraform.tfvars` file.\n\n<Note>\n\nAlthough Terraform Cloud uses variables from `terraform.tfvars`, Terraform Enterprise currently ignores this file.\n\n<\/Note>\n\n## Precedence with priority variable sets\n\nYou can choose to prioritize all values of the variables in a variable set.\nWhen a variable set is prioritized, its values take precedence over any variables with the same key set at a more specific scope.\n\nFor example, variables in a priority global variable set would take precedence over all variables with the same key.\n\nIf two priority variable sets with the same scope include the same variable key, HCP Terraform will determine precedence by the alphabetical order of the variable sets' names.\n\nWhile a priority variable set can enforce that Terraform variables use designated values, it does not guarantee that the configuration uses the variable. 
A user can still directly modify the Terraform configuration to remove usage of a variable and replace it with a hard-coded value. For stricter enforcement, we recommend using policy checks or run tasks.\n\n## Precedence example\n\nConsider an example workspace that has the following variables applied:\n\n| (**Scope**) Source                         | Region      | Var1 | Replicas  |\n| ------------------------------------------ | ----------- | ---- | --------- |\n| Priority **global** variable set           | `us-east-1` |      |           |\n| Priority **project-scoped** variable set   | `us-east-2` |      |           |\n| Priority **workspace-scoped** variable set | `us-west-1` |      |           |\n| Command line argument                      | `us-west-2` |      |   `9`     |\n| Local environment variable                 |             |      |   `8`     |\n| **Workspace-specific** variable            |             | `h`  |   `1`     |\n| **Workspace-scoped** variable set          |             | `y`  |   `2`     |\n| **Project-scoped** variable set            |             |      |   `3`     |\n| **Global** variable set                    |             |      |   `4`     |\n\nWhen you trigger a run through the command line, these are the final values Terraform Cloud assigns to each variable:\n\n| Variable | Value        |\n| -------- | ------------ |\n| Region   | `us-east-1`  |\n| Var1     | `h`          |\n| Replicas | `9`          |","site":"terraform"}
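As a sketch tying the precedence example back to configuration, the `region`, `var1`, and `replicas` keys from the table above would need matching input variable declarations before HCP Terraform can assign them values. These declarations are hypothetical, not part of the documented example:

```hcl
# Hypothetical declarations for the variables in the precedence example.
# HCP Terraform supplies the values at run time, so no defaults are set;
# a missing required value causes the plan to fail with an explanation.
variable "region" {
  type = string
}

variable "var1" {
  type = string
}

variable "replicas" {
  type = number
}
```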
{"questions":"terraform reusable variable sets that apply to multiple workspaces You can set variables specifically for each workspace or you can create variable sets to reuse the same variables across multiple workspaces Refer to the variables overview terraform cloud docs workspaces variables documentation for more information about variable types scope and precedence You can also set variable values specifically for each run on the command line Configure Terraform input variables and environment variables and create page title Managing Variables HCP Terraform Managing Variables","answers":"---\npage_title: Managing Variables - HCP Terraform\ndescription: >-\n  Configure Terraform input variables and environment variables and create\n  reusable variable sets that apply to multiple workspaces.\n---\n\n# Managing Variables\n\nYou can set variables specifically for each workspace or you can create variable sets to reuse the same variables across multiple workspaces. Refer to the [variables overview](\/terraform\/cloud-docs\/workspaces\/variables) documentation for more information about variable types, scope, and precedence. 
You can also set variable values specifically for each run on the command line.\n\nYou can create and edit workspace-specific variables through:\n\n- The HCP Terraform UI, as detailed below.\n- The Variables API for [workspace-specific variables](\/terraform\/cloud-docs\/api-docs\/workspace-variables) and [variable sets](\/terraform\/cloud-docs\/api-docs\/variable-sets).\n- The `tfe` provider's [`tfe_variable`](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs\/resources\/variable) resource, which can be more convenient for bulk management.\n\n\n## Permissions\nYou must have [`read variables` permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-workspace-permissions) to view the variables for a particular workspace and to view the variable sets in your organization.\n\nTo create new variable sets and apply them to workspaces, you must be part of a team with [manage workspaces](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-owners) permissions. To create and edit workspace-specific variables within a workspace, you must have [read and write variables](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-workspace-permissions) for that workspace.\n\n\n\n## Run-Specific Variables\n\nTerraform 1.1 and later lets you set [Terraform variable](\/terraform\/cloud-docs\/workspaces\/variables#terraform-variables) values for a particular plan or apply on the command line. These variable values will overwrite workspace-specific and variable set variables with the same key. Refer to the [variable precedence](\/terraform\/cloud-docs\/workspaces\/variables#precedence) documentation for more details.\n\nYou can set run-specific Terraform variable values by:\n\n- Specifying `-var` and `-var-file` arguments. For example:\n\n  ```\n  terraform apply -var=\"key=value\" -var-file=\"testing.tfvars\"\n  ```\n- Creating local environment variables prefixed with `TF_VAR_`. 
For example, if you declare a variable called `replicas` in your configuration, you could create a local environment variable called `TF_VAR_replicas` and set it to a particular value. When you use the [CLI Workflow](\/terraform\/cloud-docs\/run\/cli), Terraform automatically identifies these environment variables and applies their values to the run.\n\nRefer to the [variables on the command line](\/terraform\/language\/values\/variables#variables-on-the-command-line) documentation for more details and examples.\n\n## Workspace-Specific Variables\n\nTo view and manage a workspace's variables, go to the workspace and click the **Variables** tab.\n\nThe **Variables** page appears, showing all workspace-specific variables and variable sets applied to the workspace. This is where you can add, edit, and delete workspace-specific variables. You can also apply and remove variable sets from the workspace.\n\nThe **Variables** page is not available for workspaces configured with `Local` [execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode). HCP Terraform does not evaluate workspace variables or variable sets in local execution mode.\n\n### Add a Variable\n\nTo add a variable:\n\n1. Go to the workspace **Variables** page and click **+ Add variable** in the **Workspace Variables** section.\n\n1. Choose a variable category (Terraform or environment), optionally mark the variable as [sensitive](#sensitive-values), and enter a variable key, value, and optional description. For Terraform variables only, you can check the **HCL** checkbox to enter a value in HashiCorp Configuration Language. \n\n   Refer to [variable values and format](#variable-values-and-format) for variable limits, allowable values, and formatting.\n\n1. Click **Save variable**. The variable now appears in the list of the workspace's variables and HCP Terraform will apply it to runs.\n\n### Edit a Variable\n\nTo edit a variable:\n\n1. 
Click the ellipses next to the variable you want to edit and select **Edit**.\n1. Make any desired changes and click **Save variable**.\n\n### Delete a Variable\n\nTo delete a variable:\n\n1. Click the ellipses next to the variable you want to delete and select **Delete**.\n1. Click **Yes, delete variable** to confirm your action.\n\n## Loading Variables from Files\n\nYou can set [Terraform variable](\/terraform\/cloud-docs\/workspaces\/variables#terraform-variables) values by providing any number of [files ending in `.auto.tfvars`](\/terraform\/language\/values\/variables#variable-files) to workspaces that use Terraform 0.10.0 or later. When you trigger a run, Terraform automatically loads and uses the variables defined in these files. If any variable from the workspace has the same key as a variable in the file, the workspace variable overwrites the variable from the file.\n\nYou can only do this with files ending in `auto.tfvars` or `terraform.tfvars`. You can apply other types of `.tfvars` files [using the command line](#run-specific-variables) for each run.\n\n~> **Note:** HCP Terraform loads variables from files ending in `auto.tfvars` for each Terraform run, but does not automatically persist those variables to the HCP Terraform workspace or display them in the **Variables** section of the workspace UI.\n\n## Variable Sets\n\n> **Hands On:** Try the [Manage Variable Sets in HCP Terraform](\/terraform\/tutorials\/cloud\/cloud-multiple-variable-sets) tutorial.\n\nOnly members of the [Owners Team](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-owners) or members of a team with the [Manage all projects](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-all-projects) or [Manage all workspaces](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-all-workspaces) permission can create, update, and delete variable sets.\n\nHCP Terraform does not evaluate variable sets during Terraform runs for 
workspaces configured with `Local` [execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode).\n\nTo view variable sets for your organization, click **Settings**, then click **Variable sets**.\n\nThe **Variable sets** page appears, listing all of the organization's variable sets. Click on a variable set to open it and review details about its variables and scoping.\n\n### Create Variable Sets\n\nTo create a variable set:\n\n1. Go to the **Variable Sets** page for your organization and click **Create variable set**. The **Create a new variable set** page appears.\n\n1. Choose a descriptive **Name** for the variable set. You can use any combination of numbers, letters, and characters.\n\n1. Write an optional **Description** that tells other users about the purpose of the variable set and what it contains.\n\n1. Choose a variable set scope:\n   - **Apply globally:** HCP Terraform will automatically apply this global variable set to all existing and future workspaces.\n   - **Apply to specific projects and workspaces:** Use the text fields to search for and select workspaces and\/or projects to apply this variable set to. This affects all current and future workspaces for any selected projects. After creation, users can also [add this variable set to their workspaces](#apply-or-remove-variable-sets-from-inside-a-workspace).\n\n1. Add one or more variables: Click **+ Add variable**, choose a variable type (Terraform or environment), optionally mark the variable as [sensitive](#sensitive-values), and enter a variable name, value, and optional description. Then, click **Save variable**.\n\n   Refer to [variable values and format](#variable-values-and-format) for variable limits, allowable values, and formatting.\n\n   ~> **Note:** HCP Terraform will error if you try to declare variables with the same key in multiple global variable sets.\n\n1. 
Click **Create variable set.** HCP Terraform adds the new variable set to any specified workspaces and displays it on the **Variable Sets** page.\n\n### Edit Variable Sets\n\nTo edit or remove a variable set:\n\n1. Go to the organization's settings and then click **Variable Sets**. The **Variable sets** page appears.\n1. Click the variable set you want to edit. That specific variable set page appears, where you can change the variable set settings. Refer to [create variable sets](#create-variable-sets) for details.\n\n### Delete Variable Sets\n\nDeleting a variable set can be a disruptive action, especially if the variables are required to execute runs. We recommend informing organization and workspace owners before removing a variable set.\n\nTo delete a variable set:\n\n1. Click **Settings**, then click **Variable Sets**. The **Variable sets** page appears.\n1. Select **Delete variable set**. Enter the variable set name and click **Delete variable set** to confirm this action. HCP Terraform deletes the variable set and removes it from all workspaces. Runs within those workspaces will no longer use the variables from the variable set.\n\n### Apply or Remove Variable Sets From Inside a Workspace\n\nTo apply a variable set to a specific workspace:\n\n1. Navigate to the workspace and click the **Variables** tab. The **Variables** page appears, showing all workspace-specific variables and variable sets applied to the workspace.\n\n1. In the **Variable sets** section, click **Apply Variable Set**. Select the variable set you want to apply to your workspace, and click **Apply variable set**. The variable set appears in the workspace's variable sets list and HCP Terraform will now apply the variables to runs.\n\nTo remove a variable set from within a workspace:\n\n1. Navigate to the workspace and click the **Variables** tab. The **Variables** page appears, showing all workspace-specific variables and variable sets applied to the workspace.\n1. 
Click the ellipses button next to the variable set and select **Remove variable set**.\n1. Click **Remove variable set** in the dialog box. HCP Terraform removes the variable set from this workspace, but it remains available to other workspaces in the organization.\n\n## Overwrite Variable Sets\n\nYou can overwrite variables defined in variable sets within a workspace. For example, you may want to use a different set of provider credentials in a specific workspace.\n\nTo overwrite a variable from a variable set, [create a new workspace-specific variable](#workspace-specific-variables) of the same type with the same key. HCP Terraform marks any variables that you overwrite with a yellow **OVERWRITTEN** flag. When you click the overwritten variable, HCP Terraform highlights the variable it will use during runs.\n\nVariables within a variable set can also automatically overwrite variables with the same key in other variable sets applied to the same workspace. Though variable sets are created for the organization, these overwrites occur within each workspace. Refer to [variable precedence](\/terraform\/cloud-docs\/workspaces\/variables#precedence) for more details.\n\n## Priority Variable Sets\n\nThe values in priority variable sets overwrite any variables with the same key set at more specific scopes. This includes variables set using command line flags, or through `*.auto.tfvars` and `terraform.tfvars` files.\n\nIt is still possible for a user to directly modify the Terraform configuration and remove usage of a variable and replace it with a hard-coded value. 
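\n\nIf you manage HCP Terraform itself with HashiCorp's `tfe` provider (which the docs above suggest for bulk management), a priority variable set can be sketched as follows. This is an illustration only: the organization, set, and variable names are placeholders, and the workspace attachment assumes a `tfe_workspace.example` resource defined elsewhere.\n\n```hcl\n# Sketch: assumes the hashicorp\/tfe provider is configured for your organization.\nresource \"tfe_variable_set\" \"credentials\" {\n  name         = \"shared-credentials\"   # placeholder name\n  description  = \"Provider credentials shared across workspaces\"\n  organization = \"my-org\"               # placeholder organization\n  priority     = true                   # values overwrite same-key variables at more specific scopes\n}\n\nresource \"tfe_variable\" \"region\" {\n  key             = \"AWS_DEFAULT_REGION\"\n  value           = \"us-east-1\"\n  category        = \"env\"\n  variable_set_id = tfe_variable_set.credentials.id\n}\n\n# Attach the set to a single workspace instead of applying it globally.\nresource \"tfe_workspace_variable_set\" \"attach\" {\n  workspace_id    = tfe_workspace.example.id   # assumed to exist elsewhere\n  variable_set_id = tfe_variable_set.credentials.id\n}\n```\n\n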
For stricter enforcement, we recommend using policy checks or run tasks.\nRefer to [variable precedence](\/terraform\/cloud-docs\/workspaces\/variables#precedence-with-priority-variable-sets) for more details.\n\n## Variable Values and Format\n\nThe limits, allowable values, and required format are the same for both workspace-specific variables and variable sets.\n\n### Security\n\nHCP Terraform encrypts all variable values securely using [Vault's transit backend](\/vault\/docs\/secrets\/transit) prior to saving them. This ensures that no out-of-band party can read these values without proper authorization. However, HCP Terraform stores variable [descriptions](#variable-description) in plain text, so be careful with the information you save in a variable description.\n\nWe also recommend passing credentials to Terraform as environment variables instead of Terraform variables when possible, since Terraform runs receive the full text of all Terraform variable values, including [sensitive](#sensitive-values) ones. It may print the values in logs and state files if the configuration sends the value to an output or a resource parameter. 
Sentinel mocks downloaded from runs will also contain the sensitive values of Terraform variables.\n\nAlthough HCP Terraform does not store environment variables in state, it can include them in log files if `TF_LOG` is set to `TRACE`.\n\n#### Dynamic Credentials\n\nAn alternative to passing static credentials for some providers is to use [dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials).\n\nDynamic credentials let you use temporary per-run credentials and eliminate the need to manually rotate secrets.\n\n### Character Limits\n\nThe following limits apply to variables:\n\n| Component   | Limit          |\n| ----------- | -------------- |\n| description | 512 characters |\n| key         | 128 characters |\n| value       | 256 kilobytes  |\n\n### Multi-Line Text\n\nYou can type or paste multi-line text into variable value text fields.\n\n### HashiCorp Configuration Language (HCL)\n\nYou can use HCL for Terraform variables, but not for environment variables. The same Terraform version that performs runs in the workspace will interpret the HCL.\n\nVariable values are strings by default. To enter list or map values, click the variable\u2019s **HCL** checkbox (visible when editing) and enter the value with the same HCL syntax you would use when writing Terraform code. For example:\n\n```hcl\n{\n    us-east-1 = \"image-1234\"\n    us-west-2 = \"image-4567\"\n}\n```\n\n### Sensitive Values\n\n!> **Warning:** There are some cases when even sensitive variables are included in logs and state files. Refer to [security](#security) for more information.\n\nTerraform often needs cloud provider credentials and other sensitive information that should not be widely available within your organization. 
To protect these secrets, you can mark any Terraform or environment variable as sensitive data by clicking its **Sensitive** checkbox that is visible during editing.\n\nMarking a variable as sensitive makes it write-only and prevents all users (including you) from viewing its value in the HCP Terraform UI or reading it through the Variables API endpoint.\n\nUsers with permission to read and write variables can set new values for sensitive variables, but other attributes of a sensitive variable cannot be modified. To update other attributes, delete the variable and create a new variable to replace it.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Variable Description\n\n!> **Warning:** Variable descriptions are not encrypted, so do not include any sensitive information.\n\nVariable descriptions are optional, and help distinguish between similarly named variables. They are only shown on the **Variables** page and are completely independent from any variable descriptions declared in Terraform CLI.","site":"terraform"}
{"questions":"terraform page title Permissions HCP Terraform permissions and what permissions you can grant to users BEGIN TFC only name pnp callout Learn the difference between workspace level and organization level Permissions","answers":"---\npage_title: Permissions - HCP Terraform\ndescription: >-\n  Learn the difference between workspace-level and organization-level\n  permissions and what permissions you can grant to users.\n---\n\n# Permissions\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n-> **Note:** Team management is available in HCP Terraform **Standard** Edition. [Learn more about HCP Terraform pricing here](https:\/\/www.hashicorp.com\/products\/terraform\/pricing).\n<!-- END: TFC:only name:pnp-callout -->\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n> **Hands-on:** Try the [Manage Permissions in HCP Terraform](\/terraform\/tutorials\/cloud\/cloud-permissions?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial.\n\nHCP Terraform's access model is team-based. In order to perform an action within an HCP Terraform organization, users must belong to a team that has been granted the appropriate permissions.\n\nThe permissions model is split into organization-level, project-level, and workspace-level permissions. Additionally, every organization has a special team named \"owners\", whose members have maximal permissions within the organization.\n\n## Organization Owners\n\nEvery organization has a special \"owners\" team. Members of this team are often referred to as \"organization owners\".\n\nOrganization owners have every available permission within the organization. This includes all organization-level permissions, and the highest level of workspace permissions on every workspace.\n\nThere are also some actions within an organization that are only available to owners. 
These are generally actions that affect the permissions and membership of other teams, or are otherwise fundamental to the organization's security and integrity.\n\nPermissions for the owners team include:\n\n- Manage workspaces (refer to [Organization Permissions][] below; equivalent to admin permissions on every workspace)\n- Manage projects (refer to [Organization Permissions][] below; equivalent to admin permissions on every project and workspace)\n- Manage policies (refer to [Organization Permissions][] below)\n- Manage policy overrides (refer to [Organization Permissions][] below)\n- Manage VCS settings (refer to [Organization Permissions][] below)\n- Manage the private registry (refer to [Organization Permissions][] below)\n- Manage Membership (refer to [Organization Permissions][] below; invite or remove users from the organization itself, and manage the membership of its teams)\n- View all secret teams (refer to [Organization Permissions][] below)\n- Manage agents (refer to [Organization Permissions][] below)\n- Manage organization permissions (refer to [Organization Permissions][] below)\n- Manage all organization settings (owners only)\n- Manage organization billing (owners only, not applicable to Terraform Enterprise)\n- Delete organization (owners only)\n\nThis list is not necessarily exhaustive.\n\n[Organization Permissions]: #organization-permissions\n\n## Workspace Permissions\n\n[workspace]: \/terraform\/cloud-docs\/workspaces\n\nMost of HCP Terraform's permissions system is focused on workspaces. In general, administrators want to delegate access to specific collections of infrastructure; HCP Terraform implements this by granting permissions to teams on a per-workspace basis.\n\nThere are two ways to choose which permissions a given team has on a workspace: fixed permission sets, and custom permissions. 
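\n\nIf you administer teams with HashiCorp's `tfe` provider instead of the UI, granting a team a permission set on a workspace can be sketched as follows; the team and workspace references are placeholders assumed to be defined elsewhere.\n\n```hcl\n# Sketch: assumes the hashicorp\/tfe provider and existing team\/workspace resources.\nresource \"tfe_team_access\" \"deployers\" {\n  team_id      = tfe_team.deployers.id\n  workspace_id = tfe_workspace.app.id\n  access       = \"write\"   # a fixed permission set: \"read\", \"plan\", \"write\", or \"admin\"\n}\n```\n\n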
Additionally, there is a special \"admin\" permission set that grants the highest level of permissions on a workspace.\n\n### Implied Permissions\n\nSome permissions imply other permissions; for example, the run access plan permission also grants permission to read runs.\n\nIf documentation or UI text states that an action requires a specific permission, that action is also available at any permission level that implies it.\n\n### General Workspace Permissions\n\n[General Workspace Permissions]: #general-workspace-permissions\n\nThe following workspace permissions can be granted to teams on a per-workspace basis. They can be granted via either fixed permission sets or custom workspace permissions.\n\n-> **Note:** Throughout the documentation, we refer to the specific permission an action requires (like \"requires permission to apply runs\") rather than the fixed permission set that includes that permission (like \"requires write access\").\n\n- **Run access:**\n  - **Read:** \u2014\u00a0Allows users to view information about remote Terraform runs, including the run history, the status of runs, the log output of each stage of a run (plan, apply, cost estimation, policy check), and configuration versions associated with a run.\n  - **Plan:** \u2014\u00a0_Implies permission to read._ Allows users to queue Terraform plans in a workspace, including both speculative plans and normal plans. Normal plans must be approved by a user with permission to apply runs. This also allows users to comment on runs.\n  - **Apply:** \u2014\u00a0_Implies permission to plan._ Allows users to approve and apply Terraform plans, causing changes to real infrastructure.\n- **Variable access:**\n  - **No access:** \u2014 No access is granted to the values of Terraform variables and environment variables for the workspace.\n  - **Read:** \u2014\u00a0Allows users to view the values of Terraform variables and environment variables for the workspace. 
Note that variables marked as sensitive are write-only, and can't be viewed by any user.\n  - **Read and write:** \u2014\u00a0_Implies permission to read._ Allows users to edit the values of variables in the workspace.\n- **State access:**\n  - **No access:** \u2014 No access is granted to the state file from the workspace.\n  - **Read outputs only:** \u2014\u00a0Allows users to access values in the workspace's most recent Terraform state that have been explicitly marked as public outputs. Output values are often used as an interface between separate workspaces that manage loosely coupled collections of infrastructure, so their contents can be relevant to people who have no direct responsibility for the managed infrastructure but still indirectly use some of its functions. This permission is required to access the [State Version Outputs](\/terraform\/cloud-docs\/api-docs\/state-version-outputs) API endpoint.\n\n    -> **Note:** **Read state versions** permission is required to use the `terraform output` command or the `terraform_remote_state` data source against the workspace.\n  - **Read:** \u2014\u00a0_Implies permission to read outputs only._ Allows users to read complete state files from the workspace. State files are useful for identifying infrastructure changes over time, but often contain sensitive information.\n  - **Read and write:** \u2014\u00a0_Implies permission to read._ Allows users to directly create new state versions in the workspace. Applying a remote Terraform run creates new state versions without this permission, but if the workspace's execution mode is set to \"local\", this permission is required for performing local runs. 
This permission is also required to use any of the Terraform CLI's state manipulation and maintenance commands against this workspace, including `terraform import`, `terraform taint`, and the various `terraform state` subcommands.\n- **Other controls:**\n  - **Download Sentinel mocks:** \u2014\u00a0Allows users to download data from runs in the workspace in a format that can be used for developing Sentinel policies. This run data is very detailed, and often contains unredacted sensitive information.\n  - **Manage Workspace Run Tasks:** \u2014 Allows users to associate or dissociate run tasks with the workspace. HCP Terraform creates Run Tasks at the organization level, where you can manually associate or dissociate them with specific workspaces.\n  - **Lock\/unlock workspace:** \u2014\u00a0Allows users to manually lock the workspace to temporarily prevent runs. When a workspace's execution mode is set to \"local\", users must have this permission to perform local CLI runs using the workspace's state.\n\n### Fixed Permission Sets\n\nFixed permission sets are bundles of specific permissions for workspaces, which you can use to delegate access to workspaces easily.\n\nEach permission set targets a level of authority and responsibility for a given workspace's infrastructure. A permission set may grant some permissions its recipients do not strictly require, but it offers a balance of simplicity and utility.\n\n#### Workspace Admins\n\nMuch like the owners team has full control over an organization, each workspace has a special \"admin\" permissions level that grants full control over the workspace. Members of a team with admin permissions on a workspace are sometimes called \"workspace admins\" for that workspace.\n\nAdmin permissions include the highest level of general permissions for the workspace. 
There are also some permissions that are only available to workspace admins, which generally involve changing the workspace's settings or setting access levels for other teams.\n\nWorkspace admins have all [General Workspace Permissions](#general-workspace-permissions), as well as the ability to do the following tasks:\n\n- Read and write workspace settings. This includes general settings, notification configurations, run triggers, and more.\n- Set or remove workspace permissions for visible teams. Workspace admins cannot view or manage teams with the [**Secret**](\/terraform\/cloud-docs\/users-teams-organizations\/teams\/manage#team-visibility) visibility option enabled unless they are also organization owners.\n- Delete the workspace\n  - Depending on the [organization's settings](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#general), workspace admins may only be able to delete the workspace if it is not actively managing infrastructure. Refer to [Deleting a Workspace With Resources Under Management](\/terraform\/cloud-docs\/workspaces\/settings#deleting-a-workspace-with-resources-under-management) for details.\n\n#### Write\n\nThe \"write\" permission set is for people who do most of the day-to-day work of provisioning and modifying managed infrastructure. Write access grants the following workspace permissions:\n\n- Run access - Apply\n- Variable access - Read and write\n- State access - Read and write\n- Other access - Lock\/unlock workspace\n- Other access - Download Sentinel mocks\n\nSee [General Workspace Permissions][] above for details about specific permissions.\n\n#### Plan\n\nThe \"plan\" permission set is for people who might propose changes to managed infrastructure, but whose proposed changes should be approved before they are applied. 
Plan access grants the following workspace permissions:

- Run access - Plan
- Variable access - Read
- State access - Read

See [General Workspace Permissions][] above for details about specific permissions.

#### Read

The "read" permission set is for people who need to view information about the status and configuration of managed infrastructure in order to do their jobs, but aren't responsible for maintaining that infrastructure. Read access grants the following workspace permissions:

- Run access - Read
- Variable access - Read
- State access - Read

See [General Workspace Permissions][] above for details about specific permissions.

### Custom Workspace Permissions

Custom permissions let you assign specific, finer-grained permissions to a team than the broader fixed permission sets provide. This enables more task-focused permission sets and tighter control of sensitive information.

You can use custom permissions to assign any of the permissions listed above under [General Workspace Permissions][], with the exception of admin-only permissions.

The minimum custom permission set for a workspace is the permission to read runs; the only way to grant a team lower access is to not add them to the workspace at all.

Some permissions - such as the runs permission - are tiered: you can assign one permission per category, since higher permissions include all of the capabilities of the lower ones.

## Project Permissions

You can assign project-specific permissions to teams.

### Implied Permissions

Some permissions imply other permissions.
For example, permission to update a project also grants permission to read a project.

If an action states that it requires a specific permission level, you can perform that action if your permissions _imply_ the stated permission level.

### General Project Permissions

[General Project Permissions]: #general-project-permissions

You can grant the following project permissions to teams on a per-project basis. You can grant these with either fixed permission sets or custom project permissions.

-> **Note:** Throughout the documentation, we refer to the specific permission an action requires (like "requires permission to apply runs") rather than the fixed permission set that includes that permission (like "requires write access").

- **Project access:**
  - **Read:** — Allows users to view information about the project, including its name.
  - **Update:** — _Implies permission to read._ Allows users to update the project name.
  - **Delete:** — _Implies permission to update._ Allows users to delete the project.
  - **Create Workspaces:** — Allows users to create workspaces in the project. This grants read access to all workspaces in the project.
  - **Delete Workspaces:** — Allows users to delete workspaces in the project.
    - Depending on the [organization's settings](/terraform/cloud-docs/users-teams-organizations/organizations#general), users may only be able to delete a workspace if it is not actively managing infrastructure. Refer to [Deleting a Workspace With Resources Under Management](/terraform/cloud-docs/workspaces/settings#deleting-a-workspace-with-resources-under-management) for details.
  - **Move Workspaces:** — Allows users to move workspaces out of the project.
A user _must_ have this permission on both the source _and_ destination project to successfully move a workspace from one project to another.
- **Team management:**
  - **None:** — No access to view teams assigned to the project.
  - **Read:** — Allows users to see visible teams assigned to the project.
  - **Manage:** — _Implies permission to read._ Allows users to set or remove project permissions for visible teams. Project admins cannot view or manage [secret teams](/terraform/cloud-docs/users-teams-organizations/teams/manage#team-visibility) unless they are also organization owners.

See [General Workspace Permissions](#general-workspace-permissions) for the complete list of available permissions for a project's workspaces.

### Fixed Permission Sets

Fixed permission sets are bundles of specific permissions for projects, which you can use to delegate access to a project's workspaces easily.

#### Project Admin

Each project has an "admin" permissions level that grants permissions for both the project and the workspaces that belong to that project. Members with admin permissions on a project are sometimes called that project's "project admins".

Members of teams with "admin" permissions for a project have [General Workspace Permissions](#general-workspace-permissions) for every workspace in the project, and the ability to:

- Read and update project settings.
- Delete the project.
- Create workspaces in the project.
- Move workspaces into or out of the project. This also requires project admin permissions for the source or destination project.
- Grant or revoke project permissions for visible teams.
Project admins **cannot** view or manage access for teams that are [Secret](/terraform/cloud-docs/users-teams-organizations/teams/manage#team-visibility), unless those admins are also organization owners.

#### Maintain

The "maintain" permission set is for people who manage existing infrastructure in a single project and also need to create new workspaces in that project. Maintain access grants full control of everything in the project, including the following permissions:
- Admin access for all workspaces in this project.
- Create workspaces in this project.
- Read the project name.
- Lock and unlock all workspaces in this project.
- Read and write variables for all workspaces in this project.
- Access state for all workspaces in this project.
- Approve runs for all workspaces in this project.

#### Write

The "write" permission set is for people who do most of the day-to-day work of provisioning and modifying managed infrastructure. Write access grants the following workspace permissions:
- Read the project name.
- Lock and unlock all workspaces in this project.
- Read and write variables for all workspaces in this project.
- Access state for all workspaces in this project.
- Approve runs for all workspaces in this project.

#### Read

The "read" permission set is for people who need to view information about the status and configuration of managed infrastructure for their job function, but are not responsible for maintaining that infrastructure. Read access grants the permissions to:
- Read the project name.
- Read the workspaces in the project.

### Custom Project Permissions

Custom permissions enable you to assign specific and granular permissions to a team.
You can use custom permission sets to create task-focused permission sets and control sensitive information.

You can create a set of custom permissions using any of the permissions listed under [General Project Permissions](#general-project-permissions).

Some permissions, such as the project access permission, are tiered. You can only assign one permission per category because higher-level permissions include the capabilities of lower levels.

## Organization Permissions

Separate from project and workspace permissions, you can grant teams permissions to manage or view certain resources or settings across an organization. To set these permissions for a team, go to your organization's **Settings**. Then click **Teams**, and select the team name from the list.

The following organization permissions are available:

### Project permissions

You must select a level of access for projects.

#### None

Members do not have access to projects or workspaces. You can grant permissions to individual projects or workspaces through [Project Permissions](/terraform/cloud-docs/users-teams-organizations/permissions#project-permissions) or [Workspace Permissions](/terraform/cloud-docs/users-teams-organizations/permissions#workspace-permissions).

#### View all projects

Members can view all projects within the organization. This lets users:
- View project names in a given organization.

#### Manage all projects

Members can create and manage all projects and workspaces within the organization.
In addition to the permissions granted by ["Manage all workspaces"](/terraform/cloud-docs/users-teams-organizations/permissions#manage-all-workspaces), this also lets users:
- Manage other teams' access to all projects.
- Create, edit, and delete projects (otherwise only available to organization owners).
- Move workspaces between projects.

### Workspace permissions

You must select a level of access for workspaces.

#### None

Members do not have access to projects or workspaces. You can grant permissions to individual projects or workspaces through [Project Permissions](/terraform/cloud-docs/users-teams-organizations/permissions#project-permissions) or [Workspace Permissions](/terraform/cloud-docs/users-teams-organizations/permissions#workspace-permissions).

#### View all workspaces

Members can view all workspaces within the organization. This lets users:
- View information and features relevant to each workspace (e.g. runs, state versions, variables).

#### Manage all workspaces

Members can create and manage all workspaces within the organization.
This lets users:
- Perform any action that requires admin permissions in those workspaces.
- Create new workspaces within the organization's **Default Project**, an action that is otherwise only available to organization owners.
- Create, update, and delete [Variable Sets](/terraform/cloud-docs/workspaces/variables/managing-variables#variable-sets).

### Manage Policies

Allows members to create, edit, read, list, and delete the organization's Sentinel policies.

This permission implicitly gives permission to read runs on all workspaces, which is necessary to set enforcement of [policy sets](/terraform/cloud-docs/policy-enforcement/manage-policy-sets).

### Manage Run Tasks

Allows members to create, edit, and delete run tasks on the organization.

### Manage Policy Overrides

Allows members to override soft-mandatory policy checks.

This permission implicitly gives permission to read runs on all workspaces, which is necessary to override policy checks.

### Manage VCS Settings

Allows members to manage the set of [VCS providers](/terraform/cloud-docs/vcs) and [SSH keys](/terraform/cloud-docs/vcs#ssh-keys) available within the organization.

### Manage Agent Pools

Allows members to create, edit, and delete agent pools within their organization.

This permission implicitly grants access to read all workspaces, which is necessary for agent pool management.

### Manage Private Registry

Allows members to publish and delete providers, modules, or both in the organization's [private registry](/terraform/cloud-docs/registry). These permissions are otherwise only available to organization owners.

### Team Management Permissions

HCP Terraform has three levels of team management permissions: manage membership, manage teams, and manage organization access. Each permission level grants users the ability to perform specific actions, and each progressively requires prerequisite permissions.
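
Many of the organization permissions above can also be managed declaratively. The following is a sketch only, assuming the `tfe` provider's `tfe_team` resource and its `organization_access` block; the team and organization names are placeholders, and you should verify the exact attribute names against the provider documentation for your version:

```hcl
# Hypothetical team granted a subset of organization permissions.
# Attribute names follow the tfe provider's tfe_team resource schema;
# confirm them in the provider docs before use.
resource "tfe_team" "platform" {
  name         = "platform-engineering" # placeholder
  organization = "example-org"          # placeholder

  organization_access {
    manage_policies         = true  # Manage Policies
    manage_policy_overrides = false # Manage Policy Overrides
    manage_vcs_settings     = true  # Manage VCS Settings
    manage_run_tasks        = true  # Manage Run Tasks
    manage_membership       = true  # Team Management: Manage Membership
  }
}
```
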

For example, to grant a user the manage teams permission, that user must already have the manage membership permission. To grant a user the manage organization access permission, that user must have both the manage teams and manage membership permissions.

#### Manage Membership

Allows members to invite users to the organization, remove users from the organization, and add or remove users from teams within the organization.

This permission grants the ability to view the list of users within the organization, and to view the organization access of other visible teams. It does not permit the creation of teams, the ability to modify the settings of existing teams, or the ability to view secret teams.

To modify the membership of a team, a user with the manage membership permission must have visibility into the team (that is, the team must be ["Visible"](/terraform/cloud-docs/users-teams-organizations/teams/manage#team-visibility), or the user must be on the team). To remove a user from the organization, the holder of this permission must have visibility into all of the teams the user is a member of.

~> This permission is intended to allow owners of large organizations to delegate membership management to another trusted team, and should be granted only to teams of trusted users. **Assign with caution:** Users with this permission can add themselves to any visible team, and inherit the permissions of any visible team.

#### Manage Teams

Allows members to create, update, and delete teams, and to generate, regenerate, and revoke team tokens.

This permission grants the ability to update a team's name, SSO ID, and token management permissions, but does not allow access to organization settings.
On its own, this permission does not allow users to create, update, delete, or otherwise access secret teams.

The manage teams permission confers all permissions granted by the manage membership permission.

This permission allows owners of large organizations to delegate team management to another trusted team. You should only grant it to teams of trusted users.

~> **Assign with caution:** Users with this permission can update or delete any visible team. Because this permission also confers the manage membership permission, a user with the manage teams permission can add themselves to any visible team.

#### Manage Organization Access

Allows members to update a team's organization access settings.

On its own, this permission does not allow users to create, update, delete, or otherwise access secret teams. This permission confers all of the permissions granted by the manage teams and manage membership permissions.

This permission allows owners of large organizations to delegate team management to another trusted team. You should only grant it to teams of trusted users.

~> **Assign with caution:** Members with this permission can update all organization access settings for any team visible to them.

### Include Secret Teams

Allows members access to secret teams at the level permitted by that user's team permissions setting.

This permission acts as a modifier to existing team management permissions. Members with this permission can access secret teams up to the level permitted by their other team management permissions.
For example, if a user has permission to include secret teams and [manage teams](/terraform/cloud-docs/users-teams-organizations/permissions#manage-teams), that user can create secret teams.

### Allow Member Token Management

Allows owners and members with [manage teams](/terraform/cloud-docs/users-teams-organizations/permissions#manage-teams) permissions to enable and disable team token management for team members. This permission defaults to `true`.

When member token management is enabled, members can perform actions on team tokens, including generating and revoking a team token. When it is disabled, members cannot perform these actions.

## Permissions Outside HCP Terraform's Scope

This documentation only refers to permissions that are managed by HCP Terraform itself.

Since HCP Terraform integrates with other systems, the permissions models of those systems can also be relevant to the overall security model of your HCP Terraform organization. For example:

- When a workspace is connected to a VCS repository, anyone who can merge changes to that repository's main branch can indirectly queue plans in that workspace, regardless of whether they have explicit permission to queue plans or are even a member of your HCP Terraform organization.
(And when auto-apply is enabled, merging changes will indirectly apply runs.)
- If you use HCP Terraform's API to create a Slack bot for provisioning infrastructure, anyone able to issue commands to that Slack bot can implicitly act with that bot's permissions, regardless of their own membership and permissions in the HCP Terraform organization.
- When a run task sends a request to an integrator, it provides an access token whose access depends on the run task stage:
  - For post-plan, it grants access to the run's plan JSON and the run task callback.
  - All access tokens created for run tasks have a lifetime of 10 minutes.

When integrating HCP Terraform with other systems, you are responsible for understanding the effects on your organization's security. An integrated system is able to delegate any level of access that it has been granted, so carefully consider the conditions and events that will cause it to delegate that access.
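
The workspace permission sets and custom workspace permissions described earlier on this page can likewise be granted in code rather than through the UI. The following is a minimal sketch, assuming the `tfe` provider's `tfe_team_access` resource; the team and workspace references are placeholders, and the accepted attribute values should be verified against the provider documentation:

```hcl
# Fixed "write" permission set on one workspace (placeholder references).
resource "tfe_team_access" "app_write" {
  team_id      = tfe_team.app.id
  workspace_id = tfe_workspace.prod.id
  access       = "write"
}

# Custom (finer-grained) permissions on another workspace.
resource "tfe_team_access" "app_custom" {
  team_id      = tfe_team.app.id
  workspace_id = tfe_workspace.staging.id

  permissions {
    runs              = "plan"         # tiered: plan implies read
    variables         = "read"
    state_versions    = "read-outputs"
    sentinel_mocks    = "none"
    workspace_locking = false
    run_tasks         = false
  }
}
```
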
level of access for workspaces        None  Members do not have access to projects or workspaces  You can grant permissions to individual projects or workspaces through  Project Permissions   terraform cloud docs users teams organizations permissions project permissions  or  Workspace Permissions   terraform cloud docs users teams organizations permissions workspace permissions         View all workspaces  Members can view all workspaces within the organization  This lets users    View information and features relevant to each workspaces  e g  runs  state versions  variables         Manage all workspaces  Members can create and manage all workspaces within the organization  This lets users    Perform any action that requires admin permissions in those workspaces    Create new workspaces within the organization s   Default Project    an action that is otherwise only available to organization owners    Create  update  and delete  Variable Sets   terraform cloud docs workspaces variables managing variables variable sets        Manage Policies  Allows members to create  edit  read  list and delete the organization s Sentinel policies   This permission implicitly gives permission to read runs on all workspaces  which is necessary to set enforcement of  policy sets   terraform cloud docs policy enforcement manage policy sets        Manage Run Tasks  Allows members to create  edit  and delete run tasks on the organization       Manage Policy Overrides  Allows members to override soft mandatory policy checks   This permission implicitly gives permission to read runs on all workspaces  which is necessary to override policy checks       Manage VCS Settings  Allows members to manage the set of  VCS providers   terraform cloud docs vcs  and  SSH keys   terraform cloud docs vcs ssh keys  available within the organization       Manage Agent Pools  Allows members to create  edit  and delete agent pools within their organization   This permission implicitly grants access to read 
all workspaces  which is necessary for agent pool management       Manage Private Registry  Allows members to publish and delete providers  modules  or both providers and modules in the organization s  private registry   terraform cloud docs registry   These permissions are otherwise only available to organization owners       Team Management Permissions  HCP Terraform has three levels of team management permissions  manage membership  manage teams  and manage organization access  Each permission level grants users the ability to perform specific actions and each progressively requires prerequisite permissions    For example  to grant a user the manage teams permission  that user must already have manage membership permissions  To grant a user the manage organization access permission  a user must have both manage teams and manage membership permissions        Manage Membership  Allows members to invite users to the organization  remove users from the organization  and add or remove users from teams within the organization   This permission grants the ability to view the list of users within the organization  and to view the organization access of other visible teams  It does not permit the creation of teams  the ability to modify the settings of existing teams  or the ability to view secret teams   In order to modify the membership of a team  a user with Manage Membership permissions must have visibility into the team  i e  the team must be   Visible    terraform cloud docs users teams organizations teams manage team visibility   or the user must be on the team   In order to remove a user from the organization  the holder of this permission must have visibility into all of the teams which the user is a member of      This permission is intended to allow owners of large organizations to delegate membership management to another trusted team  and should be granted to only teams of trusted users    Assign with caution    Users with this permission are able to add 
themselves to any visible team  and inherit the permissions of any visible team        Manage Teams  Allows members to create  update  and delete teams  and generate  regenerate  and revoke tokens   This permission grants the ability to update a team s names  SSO IDs  and token management permissions  but does not allow access to organization settings  On its own  this permission does not allow users to create  update  delete  or otherwise access secret teams   The manage teams permission confers all permissions granted by the manage membership permission   This permission allows owners of large organizations to delegate team management to another trusted team  You should only grant it to teams of trusted users        Assign with caution    Users with this permission can update or delete any visible team  Because this permission also confers the manage membership permission  a user with the manage teams permission can add themselves to any visible team        Manage Organization Access  Allows members to update a team s organization access settings   On its own  this permission does not allow users to create  update  delete  or otherwise access secret teams  This permission confers all of the permissions granted by the manage teams and manage membership permissions   This permission allows owners of large organizations to delegate team management to another trusted team  You should only grant it to teams of trusted users         Assign with caution    Members with this permission can update all organization access settings for any team visible to them       Include Secret Teams  Allows members access to secret teams at the level permitted by that user s team permissions setting   This permission acts as a modifier to existing team management permissions  Members with this permission can access secret teams up to the level permitted by other team management permissions  For example  if a user has permission to include secret teams and  manage teams   terraform cloud 
docs users teams organizations permissions manage teams   that user can create secret teams       Allow Member Token Management  Allows owners and members with  manage teams   terraform cloud docs users teams organizations permissions manage teams  permissions to enable and disable team token management for team members  This permission defaults to  true    When member token management is enabled  members will be able to perform actions on team tokens  including generating and revoking a team token   When member token management is disabled  members will be unable to perform actions on team tokens  including generating and revoking a team token      Permissions Outside HCP Terraform s Scope  This documentation only refers to permissions that are managed by HCP Terraform itself   Since HCP Terraform integrates with other systems  the permissions models of those systems can also be relevant to the overall security model of your HCP Terraform organization  For example     When a workspace is connected to a VCS repository  anyone who can merge changes to that repository s main branch can indirectly queue plans in that workspace  regardless of whether they have explicit permission to queue plans or are even a member of your HCP Terraform organization   And when auto apply is enabled  merging changes will indirectly apply runs     If you use HCP Terraform s API to create a Slack bot for provisioning infrastructure  anyone able to issue commands to that Slack bot can implicitly act with that bot s permissions  regardless of their own membership and permissions in the HCP Terraform organization    When a run task sends a request to an integrator  it provides an access token that provides access depending on the run task stage      For post plan  it provides access to the runs plan json and the run task callback     All access tokens created for run tasks have a lifetime of 10 minutes  When integrating HCP Terraform with other systems  you are responsible for understanding 
the effects on your organization s security  An integrated system is able to delegate any level of access that it has been granted  so carefully consider the conditions and events that will cause it to delegate that access "}
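The project and organization permissions described above can also be managed in code with HashiCorp's `tfe` Terraform provider. The sketch below is illustrative only: the organization, team, and project names are placeholders, and it assumes the provider's `tfe_team`, `tfe_project`, and `tfe_team_project_access` resources, with `access` taking one of the fixed permission sets (`read`, `write`, `maintain`, `admin`).

```hcl
# Sketch: grant a team the fixed "maintain" permission set on a project,
# plus two organization-level permissions. All names are placeholders.

resource "tfe_team" "platform" {
  name         = "platform-team"
  organization = "example-org"

  # Organization permissions (see "Organization Permissions" above).
  organization_access {
    manage_policies     = true
    manage_vcs_settings = true
  }
}

resource "tfe_project" "networking" {
  name         = "networking"
  organization = "example-org"
}

# Fixed permission set on the project: "read", "write", "maintain", or "admin".
resource "tfe_team_project_access" "platform_networking" {
  access     = "maintain"
  team_id    = tfe_team.platform.id
  project_id = tfe_project.networking.id
}
```

Because the project access levels are tiered, assigning `maintain` here implies everything the `write` and `read` sets grant.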
{"questions":"terraform page title API Tokens HCP Terraform API Tokens Learn about the level of access granted by user team and organization API This topic describes the distinct types of API tokens you can use to authenticate with HCP Terraform tokens","answers":"---\npage_title: API Tokens - HCP Terraform\ndescription: >-\n  Learn about the level of access granted by user, team, and organization API\n  tokens.\n---\n\n# API Tokens\n\nThis topic describes the distinct types of API tokens you can use to authenticate with HCP Terraform.\n\nNote that HCP Terraform displays an API token only once, when you initially create it, and obfuscates it thereafter. If the token is lost, it must be regenerated.\n\nRefer to [Team Token API](\/terraform\/cloud-docs\/api-docs\/team-tokens) and [Organization Token API](\/terraform\/cloud-docs\/api-docs\/organization-tokens) for additional information about using the APIs.\n\n## User API Tokens\n\nAPI tokens may belong directly to a user. User tokens are the most flexible token type because they inherit permissions from the user they are associated with. For more information on user tokens and how to generate them, see the [Users](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) documentation.\n\n## Team API Tokens\n\nAPI tokens may belong to a specific team. Team API tokens allow access to the workspaces that the team has access to, without being tied to any specific user.\n\nNavigate to the **Organization settings > API Tokens > Team Token** tab to manage API tokens for a team or create new team tokens.\n\nEach team can have **one** valid API token at a time. When a token is regenerated, the previous token immediately becomes invalid.\n\nOwners and users with [manage teams](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-teams) permissions have the ability to enable and disable team token management for a team, which limits the actions that team members can take on a team token. 
Refer to [Allow Member Token Management](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#allow-member-token-management) for more information.\n\nTeam API tokens are designed for performing API operations on workspaces. They have the same access level to the workspaces the team has access to. For example, if a team has permission to apply runs on a workspace, the team's token can create runs and configuration versions for that workspace via the API. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nNote that the individual members of a team can usually perform actions the team itself cannot, since users can belong to multiple teams, can belong to multiple organizations, and can authenticate with Terraform's `atlas` backend for running Terraform locally.\n\nIf an API token is generated for the \"owners\" team, then that API token will have all of the same permissions that an organization owner would.\n\n## Organization API Tokens\n\nAPI tokens may be generated for a specific organization. Organization API tokens allow access to the organization-level settings and resources, without being tied to any specific team or user.\n\nTo manage the API token for an organization, go to **Organization settings > API Token** and use the controls under the \"Organization Tokens\" header.\n\nEach organization can have **one** valid API token at a time. Only [organization owners](\/terraform\/cloud-docs\/users-teams-organizations\/teams#the-owners-team) can generate or revoke an organization's token.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nOrganization API tokens are designed for creating and configuring workspaces and teams. We don't recommend using them as an all-purpose interface to HCP Terraform; their purpose is to do some initial setup before delegating a workspace to a team. 
For more routine interactions with workspaces, use [team API tokens](#team-api-tokens).\n\nOrganization API tokens have permissions across the entire organization. They can perform all CRUD operations on most resources, but have some limitations; most importantly, they cannot start runs or create configuration versions. Any API endpoints that can't be used with an organization API token include a note like the following:\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](#team-api-tokens).\n\n<!-- BEGIN: TFC:only -->\n\n## Audit trail tokens\n\nYou can generate an audit trails token to read an organization's [audit trails](\/terraform\/cloud-docs\/api-docs\/audit-trails). Use this token type to authenticate integrations pulling audit trail data, for example, using the [HCP Terraform for Splunk](\/terraform\/cloud-docs\/integrations\/splunk) app.\n\nTo manage an organization's audit trails token, go to **Organization settings > API Token** and use the settings under the \"Audit Token\" header.\n\nEach organization can only have a _single_ valid audit trails token. Only [organization owners](\/terraform\/cloud-docs\/users-teams-organizations\/teams#the-owners-team) can generate or revoke an organization's audit trails token.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n<!-- END: TFC:only -->\n\n## Agent API Tokens\n\n[Agent pools](\/terraform\/cloud-docs\/agents) have their own set of API tokens which allow agents to communicate with HCP Terraform, scoped to an organization. These tokens are not valid for direct usage in the HCP Terraform API and are only used by agents.\n\n## Access Levels\n\nThe following chart illustrates the various access levels for the supported API token types. 
Some permissions are implicit based on the token type, others are dependent on the permissions of the associated user, team, or organization.\n\n\ud83d\udd35 = Implicit for token type \ud83d\udd36 = Requires explicit permission\n\n|                                    | User tokens | Team tokens | Organization tokens |\n| ---------------------------------- | :---------: | :---------: | :-----------------: |\n| **Users**                          |             |             |                     |\n| Manage account settings            |      \ud83d\udd35     |             |                     |\n| Manage user tokens                 |      \ud83d\udd35     |             |                     |\n| **Workspaces**                     |             |             |                     |\n| Read workspace variables           |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| Write workspace variables          |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| Plan, apply, upload states         |      \ud83d\udd36     |      \ud83d\udd36     |                     |\n| Force cancel runs                  |      \ud83d\udd36     |      \ud83d\udd36     |                     |\n| Create configuration versions      |      \ud83d\udd36     |      \ud83d\udd36     |                     |\n| Create or modify workspaces        |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| Remote operations                  |      \ud83d\udd36     |      \ud83d\udd36     |                     |\n| Manage run triggers                |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| Manage notification configurations |      \ud83d\udd36     |      \ud83d\udd36     |                     |\n| Manage run tasks                   |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35          |\n| **Teams**                          |             |      
       |                     |\n| Create teams                       |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| Modify team                        |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| Read team                          |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| Manage team tokens                 |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| Manage team workspace access       |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| Manage team membership             |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| **Organizations**                  |             |             |                     |\n| Create organizations               |      \ud83d\udd35     |             |                     |\n| Modify organizations               |      \ud83d\udd36     |             |                     |\n| Manage organization tokens         |      \ud83d\udd36     |             |                     |\n| View audit trails                  |             |             |          \ud83d\udd35         |\n| Invite users to organization       |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| **Sentinel**                       |             |             |                     |\n| Manage Sentinel policies           |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| Manage policy sets                 |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35         |\n| Override policy checks             |      \ud83d\udd36     |      \ud83d\udd36     |                     |\n| **Integrations**                   |             |             |                     |\n| Manage VCS connections             |      \ud83d\udd36     |      \ud83d\udd36     |          
\ud83d\udd35         |\n| Manage SSH keys                    |      \ud83d\udd36     |      \ud83d\udd36     |                     |\n| Manage run tasks                   |      \ud83d\udd36     |      \ud83d\udd36     |          \ud83d\udd35          |\n| **Modules**                        |             |             |                     |\n| Manage Terraform modules           |      \ud83d\udd36     | \ud83d\udd35 (owners) |                     |\n\n## Token Expiration\n\nYou can create user, team, and organization tokens with an expiration date and time. Once the expiration time has passed, the token is no longer treated as valid and may not be used to authenticate to any API. Any API requests made with an expired token will fail.\n\nHashiCorp recommends setting an expiration on all new authentication tokens. Creating tokens with an expiration date helps reduce the risk of accidentally leaking valid tokens or forgetting to delete tokens meant for a delegated use once their intended purpose is complete.\n\nYou cannot modify the expiration of a token once you have created it. 
The HCP Terraform UI displays tokens relative to the current user's timezone, but all tokens are passed and displayed in UTC in ISO 8601 format through the HCP Terraform API.","site":"terraform"}
{"questions":"terraform tokens and more invite terraform cloud docs users teams organizations organizations users organizations terraform cloud docs users teams organizations organizations page title Users HCP Terraform teams terraform cloud docs users teams organizations teams Create an account edit settings set up two factor authentication create API","answers":"---\npage_title: Users - HCP Terraform\ndescription: >-\n  Create an account, edit settings, set up two-factor authentication, create API\n  tokens, and more.\n---\n\n[organizations]: \/terraform\/cloud-docs\/users-teams-organizations\/organizations\n[teams]: \/terraform\/cloud-docs\/users-teams-organizations\/teams\n[invite]: \/terraform\/cloud-docs\/users-teams-organizations\/organizations#users\n[owners]: \/terraform\/cloud-docs\/users-teams-organizations\/teams#the-owners-team\n\n# Users\n\nUser accounts belong to individual people. Each user can be part of one or more [teams](\/terraform\/cloud-docs\/users-teams-organizations\/teams), which are granted permissions on workspaces within an organization. A user can be a member of multiple [organizations][].\n\n## API\nUse the [Account API](\/terraform\/cloud-docs\/api-docs\/account) to get account details, update account information, and change your password.\n\n<!-- BEGIN: TFC:only -->\n\n## Log in with a HashiCorp Cloud Platform account\n\nWe recommend using a [HashiCorp Cloud Platform (HCP)](https:\/\/portal.cloud.hashicorp.com\/sign-up) account to log in to HCP Terraform. Your HCP Account grants access to every HashiCorp product and the Terraform Registry. If you use an HCP Account, you manage account settings like multi-factor authentication and password resets from within HCP instead of the HCP Terraform UI.\n\nTo log in with your HCP account, go to the [Sign In to HCP Terraform](https:\/\/app.terraform.io\/) page and click **Continue with HCP account**. 
HCP Terraform may ask if you want to link your account.\n\n### Linking HCP and HCP Terraform accounts\n\nThe first time you log in with your HCP credentials, HCP Terraform searches for an existing HCP Terraform account with the same email address. If you have an unlinked account, HCP Terraform asks if you want to link it to your HCP account. Otherwise, if no account matches your HCP account's email address, HCP Terraform creates and automatically links a new HCP Terraform account to your HCP account.\n\n> **Note**: You can only log in with your HCP credentials after linking your HCP and HCP Terraform accounts. We do not recommend linking your account if you use an SSO provider to log in to HCP Terraform because linking your account may conflict with your existing SSO configuration. \n\nThe only way to log in with your old HCP Terraform credentials is to unlink your HCP Terraform and HCP accounts. If HCP Terraform generated an account for you, you cannot unlink that account from your HCP account. You can unlink a pre-existing HCP Terraform account on the [HCP Account Linking page](#hcp-account-linking) in your **Account settings**.\n\n<!-- END: TFC:only -->\n\n## Creating an account\n\nTo use HCP Terraform or Enterprise, you must create an account through one of the following methods:\n\n- **Invitation Email:** When a user sends you an invitation to join an existing HCP Terraform organization, the email includes a sign-up link. After you create an account, you can automatically join that organization and can begin using HCP Terraform.\n- **Sign-Up Page:** Creating an account requires a username, an email address, and a password. For HCP Terraform, go to [`https:\/\/app.terraform.io\/public\/signup\/account`](https:\/\/app.terraform.io\/public\/signup\/account). For Terraform Enterprise, go to `https:\/\/<TFE HOSTNAME>\/public\/signup\/account`. After you create an account, you do not belong to any organizations. 
To begin using HCP Terraform, you can either [create an organization](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#creating-organizations) or ask an organization owner to send you an invitation email to join their organization.\n\n<!-- BEGIN: TFC:only -->\n\nWe recommend logging into HCP Terraform [with your HCP account](#log-in-with-a-hashicorp-cloud-platform-account) instead of creating a separate HCP Terraform account.\n\n<!-- END: TFC:only -->\n\n## Joining organizations and teams\n\nAn organization owner or a user with [**Manage Membership**](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-membership) permissions enabled must [invite you to join their organization](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#users) and [add you to one or more teams](\/terraform\/cloud-docs\/users-teams-organizations\/teams\/manage#manage-team-membership).\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nHCP Terraform sends user invitations by email. If the invited email address matches an existing HCP Terraform account, the invitee can join the organization with that account. Otherwise, they must create a new account and then join the organization.\n\n## Site admin permissions\n\nOn Terraform Enterprise instances, some user accounts have a special site admin permission that allows them to administer the entire instance.\n\nAdmin permissions are distinct from normal organization-level permissions, and they apply to a different set of UI controls and API endpoints. Admin users can administer any resource across the instance when using the site admin pages or the [admin API](\/terraform\/enterprise\/api-docs\/admin), but they have normal user permissions when using an organization's standard UI controls and API endpoints. 
These normal user permissions are determined by team membership.\n\nRefer to [Administering Terraform Enterprise](\/terraform\/enterprise\/admin) for more details.\n\n## Account settings\n\nTo view your settings page, click your user icon and select **Account settings**. Your **Profile** page appears, showing your username, email address, and avatar.\n\n### Profile\n\nClick **Profile** in the sidebar to view and edit the username and email address associated with your HCP Terraform account.\n\n~> **Important:** HCP Terraform includes your username in URL paths to resources. If external systems make requests to these resources, you must update them before you change your username.\n\nHCP Terraform uses [Gravatar](http:\/\/en.gravatar.com) to display a user icon if you have associated one with your email address. Refer to the [Gravatar documentation](http:\/\/en.gravatar.com\/support\/) for details about changing your user icon.\n\n### Sessions\n\nClick **Sessions** in the sidebar to view a list of sessions associated with your HCP Terraform account. You can revoke any sessions you do not recognize.\n\n<!-- BEGIN: TFC:only -->\n\nThere are two types of Terraform accounts: standalone HCP Terraform accounts and HCP Terraform accounts linked to HCP accounts.\n\n### Idle session timeout\n\nHCP Terraform automatically terminates user sessions if there has been no end-user activity for a certain time period:\n\n- Standalone HCP Terraform accounts can stay idle and valid for up to 14 days by default\n- HCP Terraform accounts linked to an HCP account follow the HCP defaults and can stay idle for 1 hour by default\n\nAfter HCP Terraform terminates a session, you can resume it by logging back in through the HCP Terraform portal. 
This is a security measure to prevent unauthorized access to unmonitored devices.\n\n-> **Note:** HCP Terraform organization owners can reduce the idle session timeout for an organization in the authentication settings for standalone HCP Terraform accounts, but cannot modify settings for HCP Terraform accounts linked to HCP accounts.\n\n### Forced re-authentication\n\nForced re-authentication (e.g., \u201cremember for\u201d) makes a user re-authenticate, regardless of activity. This is a security measure to force a new identity verification to access sensitive IT and data managed by HCP Terraform. In this case, the user must re-authenticate their credentials and may be asked to verify 2FA\/MFA again.\n\n- By default, standalone HCP Terraform accounts are forced to re-authenticate every 14 days\n- By default, HCP Terraform accounts linked to an HCP account follow the HCP defaults and are forced to re-authenticate every 48 hours\n\n-> **Note:** HCP Terraform organization owners can reduce the idle session timeout for standalone HCP Terraform accounts, but cannot modify settings for HCP Terraform accounts linked to HCP accounts.\n\n### Impact to user experience\n\nThe re-authentication defaults force users to re-authenticate at the beginning of each work week (Monday through Friday). Note that several actions immediately terminate active sessions, including:\n\n* Manually logging out of the HCP or HCP Terraform portals\n* Clearing browser session\/cookies\n* Closing all active browser windows\n\nAny of these actions requires you to re-authenticate regardless of session timeout settings.\n\n<!-- END: TFC:only -->\n\n### Organizations\n\nClick **Organizations** in the sidebar to view a list of the organizations where you are a member. If you are on the [owners team][owners], the organization is marked with an **OWNER** badge.\n\nTo leave an organization, click the ellipses (**...**) next to the organization and select **Leave organization**. 
You do not need permission from the owners to leave an organization, but you cannot leave if you are the last member of the owners team. Either add a new owner and then leave, or [delete the organization](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#general).\n\n### Password\n\nClick **Password** in the sidebar to change your password.\n\n-> **Note:** Passwords must be at least 10 characters in length, and you can use any type of character. Password management is not available if your Terraform Enterprise instance uses [SAML single sign on](\/terraform\/enterprise\/saml\/configuration).\n\n### Two-factor authentication\n\nClick **Two Factor Authentication** in the sidebar to enable two-factor authentication. Two-factor authentication requires a TOTP-compliant application or an SMS-capable phone number. An organization can set policies that require two-factor authentication.\n\nRefer to [Two-Factor Authentication](\/terraform\/cloud-docs\/users-teams-organizations\/2fa) for details.\n\n<!-- BEGIN: TFC:only -->\n\n### HCP account linking\n\nClick **HCP Account Linking** in the sidebar to unlink your HCP Terraform account from your HCP Account. You cannot unlink an account that HCP Terraform autogenerated during the linking process. Refer to [Linking HCP and HCP Terraform accounts](#linking-hcp-and-hcp-terraform-accounts) for more details.\n\nAfter you unlink, you can begin using your HCP Terraform credentials to log in. 
You cannot log in with your HCP account again unless you re-link it to your HCP Terraform account.\n\n### SSO identities\n\nClick **SSO Identities** in the sidebar to review and [remove SSO identity links](\/terraform\/cloud-docs\/users-teams-organizations\/single-sign-on\/linking-user-account#remove-sso-identity-link) associated with your account.\n\nYou have an SSO identity for every SSO-enabled HCP Terraform organization. HCP Terraform links each SSO identity to a single HCP Terraform user account. This link determines which account you can use to access each organization.\n\n<!-- END: TFC:only -->\n\n### Tokens\n\nClick **Tokens** in the sidebar to create, manage, and revoke API tokens. HCP Terraform has three kinds of API tokens: user, team, and organization. Users can be members of multiple organizations, so user tokens work with any organization where the associated user is a member. Refer to [API Tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens) for details.\n\nAPI tokens are required for the following tasks:\n\n- Authenticating with the [HCP Terraform API](\/terraform\/cloud-docs\/api-docs). API calls require an `Authorization: Bearer <TOKEN>` HTTP header.\n- Authenticating with the [HCP Terraform CLI integration](\/terraform\/cli\/cloud\/settings) or the [`remote` backend](\/terraform\/language\/settings\/backends\/remote). These require a token in the CLI configuration file or in the backend configuration.\n- Using [private modules](\/terraform\/cloud-docs\/registry\/using) in command-line runs on local machines. This requires [a token in the CLI configuration file](\/terraform\/cloud-docs\/registry\/using#authentication).\n\nProtect your tokens carefully because they contain the same permissions as your user account. For example, if you belong to a team with permission to read and write variables for a workspace, another user could use your API token to authenticate as your user account and also edit variables in that workspace. 
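\n\nA minimal sketch of the CLI configuration file referenced in the list above, assuming the default HCP Terraform hostname (the token value is a placeholder, not a real credential):\n\n```hcl\n# CLI configuration file (~\/.terraformrc on Unix-like systems, terraform.rc on Windows)\ncredentials \"app.terraform.io\" {\n  token = \"xxxxxx.atlasv1.zzzzzzzzzzzzz\"\n}\n```\n\nRunning `terraform login` can generate an equivalent credentials entry for you instead of editing the file by hand.\n\n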
Refer to [permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) for more details.\n\nWe recommend protecting your tokens by creating them with an expiration date and time. Refer to [API Token Expiration](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#token-expiration) for details.\n\n#### Creating a token\n\nTo create a new token:\n  1. Click **Create an API token**. The **Create API token** box appears.\n  1. Enter a **Description** that explains what the token is for and click **Create API token**.\n  1. You can optionally enter the token's expiration date or time, or create a token that never expires. The UI displays a token's expiration date and time in your current time zone.\n  1. Copy your token from the box and save it in a secure location. HCP Terraform only displays the token once, right after you create it. If you lose it, you must revoke the old token and create a new one.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n#### Revoking a token\n\nTo revoke a token, click the **trash can** next to it. That token will no longer be able to authenticate as your user account.\n\n~> **Note**: HCP Terraform does not revoke a user API token's access to an organization when you remove the user from an SSO Identity Provider as the user may still be a member of the organization. To remove access to a user's API token, remove the user from the organization in the UI or with the [Terraform Enterprise provider](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest).\n\n### GitHub app OAuth token\n\nClick **Tokens** in the sidebar to manage your GitHub App token. This token lets you connect a workspace to an available GitHub App installation.\n\n~> **Note:** Only an HCP Terraform user can own a GitHub App token. 
Team and Organization API tokens are not able to own a GitHub App token.\n\nA GitHub App token lets you:\n- Connect workspaces, policy sets, and registry modules to a GitHub App installation with the [HCP Terraform API](\/terraform\/cloud-docs\/api-docs) and UI.\n- View available GitHub App installations with the [HCP Terraform API](\/terraform\/cloud-docs\/api-docs) and UI.\n\nAfter generating this token, you can use it to view information about your available installations for the Terraform Cloud GitHub App.\n\n#### Creating a GitHub app token\n\nTo create a GitHub App token, click **Create a GitHub App token**. The **GitHub App authorization pop-up window** appears requesting authorization of the Terraform Cloud GitHub App.\n\n~> **Note:** This does not grant HCP Terraform access to repositories.\n\n#### Revoking a GitHub app token\n\nTo revoke the GitHub App token, click the **ellipses button (...)**. The dropdown menu appears. Click the **Delete Token** option. This triggers a confirmation window to appear, which asks you to confirm that you want to revoke the token. 
Once confirmed, the token is revoked and you can no longer view GitHub App installations.\n\n#### Additional resources\n\n- [GitHub App permissions in HCP Terraform](\/terraform\/cloud-docs\/vcs\/github-app#github-permissions)","site":"terraform"}
{"questions":"terraform page title Organizations overview Organizations are partitions in HCP Terraform and Terraform Enterprise that let teams within a business unit collaborate on infrastructure as code projects Organizations contain one or more projects which contain one or more workspaces teams terraform cloud docs users teams organizations teams users terraform cloud docs users teams organizations users","answers":"---\npage_title: Organizations overview\ndescription: >-\n  Organizations are partitions in HCP Terraform and Terraform Enterprise that let teams within a business unit collaborate on infrastructure as code projects. Organizations contain one or more projects, which contain one or more workspaces.\n---\n\n[teams]: \/terraform\/cloud-docs\/users-teams-organizations\/teams\n\n[users]: \/terraform\/cloud-docs\/users-teams-organizations\/users\n\n[owners]: \/terraform\/cloud-docs\/users-teams-organizations\/teams#the-owners-team\n\n# Organizations overview\n\nThis topic provides overview information about how to create and manage organizations in HCP Terraform and Terraform Enterprise. An organization contains one or more projects.\n\n## Requirements\n\nThe **admin** permission preset must be enabled on your profile to create and manage organizations in the HCP Terraform UI. Refer to [Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-permissions) for additional information. \n\n## API and Terraform Enterprise Provider\n\nIn addition to the HCP Terraform UI, you can use the following methods to manage organizations:\n- [Organizations API](\/terraform\/cloud-docs\/api-docs\/organizations)\n- The `tfe` provider [`tfe_organization`](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs\/resources\/organization) resource\n\n## Selecting organizations\n\nHCP Terraform displays your current organization in the bottom left of the sidebar. To select an organization:\n\n1. 
Click the current organization name to view a list of all the organizations where you are a member.\n1. Click an organization to select it. HCP Terraform displays a list of workspaces within that organization.\n\n## Joining and leaving organizations\n\nTo join an organization, the organization [owners][] or a user with specific [team management](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#team-management-permissions) permissions must invite you, and you must accept the emailed invitation. [Learn more](#users).\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nTo leave an organization:\n\n1. Click the Terraform logo in the upper left corner to navigate to the **Organizations** page.\n1. Click the ellipses (**...**) next to the organization and select **Leave organization**. \n\nYou do not need permission from the owners to leave an organization, but you cannot leave if you are the last member of the owners team. Either add a new owner and then leave, or [delete the organization](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#general).\n\n## Creating organizations\n\nOn Terraform Enterprise, administrators can restrict your ability to create organizations. Refer to [Administration: General Settings](\/terraform\/enterprise\/admin\/application\/general#organization-creation) for details.\n\nOn HCP Terraform, any user can create a new organization. If you do not belong to any organizations, HCP Terraform prompts you to create one the first time you log in. To create an organization:\n\n1. Click the current organization name and select **Create new organization**. The **Create a new organization** page appears.\n1. Enter a unique **Organization name**. Organization names can include numbers, letters, underscores (`_`), and hyphens (`-`).\n1. Provide an **Email address** to receive notifications about the organization.\n1. 
Click **Create organization**.\n\nHCP Terraform shows the new organization and prompts you to create a new workspace. You can also [invite other users](#users) to join the organization.\n\n<!-- BEGIN: TFC:only name:managed-resources -->\n\n## Managed resources\n\nYour organization\u2019s managed resource count helps you understand the number of infrastructure resources that HCP Terraform manages across all your workspaces.\n\nHCP Terraform reads all the workspaces\u2019 state files to determine the total number of managed resources. Each [resource](\/terraform\/language\/resources\/syntax) instance in the state equals one managed resource. HCP Terraform includes resources in modules and each resource created with the `count` or `for_each` meta-arguments. HCP Terraform does not include [data sources](\/terraform\/language\/data-sources) in the count. Refer to [Managed Resources Count](\/terraform\/cloud-docs\/workspaces\/state#managed-resources-count) in the workspace state documentation for more details.\n\nYou can view your organization's managed resource count on the **Usage** page.\n\n<!-- END: TFC:only name:managed-resources -->\n\n## Create and manage reserved tags\n\n~> **Reserved tags are in private beta and unavailable for some users.** Contact your HashiCorp representative for information about participating in the private beta.\n\nTags are key-value pairs that you can apply to projects and workspaces. Tags help you organize your projects and resources, as well as track resource consumption. Refer to the following topics for information about creating and managing tags attached to projects and workspaces:\n\n- [Create a project](\/terraform\/cloud-docs\/projects\/manage#create-a-project)\n- [Create workspace tags](\/terraform\/cloud-docs\/workspaces\/tags)\n\nYou can define reserved tag keys for your organization so that project and workspace managers can use consistent labels.\n\n1. Click on **Tags** in the sidebar to view all tags in your organization. 
\n1. Click on the **Reserved Keys** tab. You can perform the following actions:\n   1. Use the search bar to find a specific tag key.\n   1. Click on a column header to sort the table.\n   1. Click on the ellipses menu to open the management options for each tag key.\n   1. Click **New tag key** to define a new key.\n\nYou can also view single-value tags that workspace managers added directly to their workspaces. Refer to [Tags](#tags) in the organization settings reference for additional information.\n\n## Managing settings\n\nTo view and manage an organization's settings, click **Settings**.\n\nThe contents of the organization settings depend on your permissions within the organization. All users can review the organization's contact email, view the membership of any teams they belong to, and view the organization's authentication policy. [Organization owners][owners] can view and manage the entire list of organization settings. Refer to [Organization Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-permissions) for details.\n\nYou may be able to manage the following organization settings.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Organization settings\n\n#### General\n\nReview the organization name and contact email. Organization owners can choose to change the organization name, contact email, and the default execution mode, or delete the organization. When an organization owner updates the default execution mode, all workspaces configured to [inherit this value](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) will be affected.\n\nOrganization owners can also choose whether [workspace administrators](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#workspace-admins) can delete workspaces that are managing resources. Deleting a workspace with resources under management introduces risk because Terraform can no longer track or manage the infrastructure. 
The workspace's users must manually delete any remaining resources or [import](\/terraform\/cli\/commands\/import) them into another Terraform workspace.\n\n<!-- BEGIN: TFC:only name:generated-tests -->\n\nOrganization owners using HCP Terraform Plus edition can choose whether members with [module management permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-private-registry) can [generate module tests](\/terraform\/cloud-docs\/registry\/test#generated-module-tests).\n\n<!-- END: TFC:only name:generated-tests -->\n\n##### Renaming an organization\n\n!> **Warning:** Deleting or renaming an organization can be very disruptive. We strongly recommend against deleting or renaming organizations with active members.\n\nTo rename an organization that manages infrastructure:\n\n1. Alert all members of the organization about the name change.\n1. Cancel in-progress and pending runs or wait for them to finish. HCP Terraform cannot change the name of an organization with runs in progress.\n1. Lock all workspaces to ensure that no new runs will start before you change the name.\n1. Rename the organization.\n1. Update all components using the HCP Terraform API to the new organization name. This includes Terraform's `cloud` block CLI integration, the `tfe` Terraform provider, and any external API integrations.\n1. Unlock workspaces and resume normal operations.\n\n#### Plan & Billing\n\nReview the organization's plan and any invoices for previous plan payments. Organization owners can also upgrade to one of HCP Terraform's paid plans, downgrade to a free plan, or begin a free trial of paid features.\n\n#### Tags\n\nClick the **Tags** tab in the **Tags Management** screen to view single-value tags that workspace managers added directly to their workspaces. For information about reserved tag keys, refer to [Create and manage reserved tags](#create-and-manage-reserved-tags).\n\nYou can perform the following actions in the **Tags** tab:\n\n1. 
Search for a tag key or value.\n1. Click on a column header to sort tags.\n\n#### Teams\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/team-management.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nAll users in an organization can access the **Teams** page, which displays a list of [teams][] within the organization. \n\nOrganization owners and users with the [include secret teams permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#include-secret-teams) can:\n  * view all [secret teams](\/terraform\/cloud-docs\/users-teams-organizations\/teams\/manage#team-visibility)\n  * view each team's membership\n  * manage team API tokens\n\nHCP Terraform restricts team creation, team deletion, and management of team API tokens to organization owners and users with the [manage teams](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-teams) permission. Organization owners and users with the [manage membership](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-membership) permission can manage team membership. Remember that users must accept their organization invitations before you can add them to a team.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n#### Users\n\nOrganization owners and users with [manage membership](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-membership) permissions can invite HCP Terraform users into the organization, cancel invitations, and remove existing members.\n\nThe list of users is separated into one tab for active users and one tab for invited users who have not yet accepted their invitations. For active users, the list includes usernames, email addresses, avatar icons, two-factor authentication status, and current team memberships. 
Use the **Search by username or email** field to filter these lists.\n\nUser invitations are always sent by email; you cannot invite someone using their HCP Terraform username. To invite a user to an organization:\n\n1. Click **Invite a user**. The **invite a user** box appears.\n1. Enter the user's email address and optionally add them to one or more teams. If the user accepts the invitation, HCP Terraform automatically adds them to the specified teams.\n\nAll permissions in HCP Terraform are managed through teams. Users can join an organization without belonging to any teams, but they cannot use HCP Terraform features until they belong to a team. Refer to [permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) for details.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n#### Variable Sets\n\nView all of the available variable sets and their variables. Users with [`read and write variables` permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-workspace-permissions) can also create variable sets and assign them to one or more workspaces.\n\nVariable sets let you reuse the same variables across multiple workspaces in the organization. For example, you could define a variable set of provider credentials and automatically apply it to several workspaces, rather than manually defining credential variables in each. Changes to variable sets instantly apply to all appropriate workspaces, saving time and reducing errors from manual updates.\n\nRefer to the [variables overview](\/terraform\/cloud-docs\/workspaces\/variables) documentation for details about variable types, scope, and precedence. 
Refer to [managing variables](\/terraform\/cloud-docs\/workspaces\/variables\/managing-variables) for details about how to create and manage variable sets.\n\n#### Health\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/health-assessments.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nHCP Terraform can perform automatic health assessments in a workspace to assess whether its real infrastructure matches the requirements defined in its Terraform configuration. Health assessments include the following types of evaluations:\n- Drift detection determines whether your real-world infrastructure matches your Terraform configuration. Drift detection requires Terraform version 0.15.4+.\n- Continuous validation determines whether custom conditions in the workspace\u2019s configuration continue to pass after Terraform provisions the infrastructure. Continuous validation requires Terraform version 1.3.0+.\n\nYou can enforce health assessments for all eligible workspaces or let each workspace opt in to health assessments through workspace settings. Refer to [Health](\/terraform\/cloud-docs\/workspaces\/health) in the workspaces documentation for more details.\n\n#### Runs\n\nFrom the Workspaces page, click **Settings** in the sidebar, then **Runs** to view all of the current runs in your organization's workspaces. The **Runs** page displays:\n- The name of the run\n- The run's ID\n- What triggered the run\n- The workspace and project where the run is taking place\n- When the latest change in the run occurred\n- A button allowing you to cancel that run\n\nYou can apply the following filters to limit the runs HCP Terraform displays:\n- Click **Needs Attention** to display runs that require user input to continue, such as approving a plan or overriding a policy. 
\n- Click **Running** to display runs that are in progress.\n- Click **On Hold** to display paused runs.\n\n\nFor precise filtering, click **More filters** and check the boxes to filter runs by specific [run statuses](\/terraform\/cloud-docs\/run\/states), [run operations](\/terraform\/cloud-docs\/run\/modes-and-options), workspaces, or [agent pools](\/terraform\/cloud-docs\/agents\/agent-pools). Click **Apply filters** to list the runs that match your criteria.\n\nYou can dismiss any of your filtering criteria by clicking the **X** next to the filter name above the table displaying your runs. \n\nFor more details about run states, refer to [Run States and Stages](\/terraform\/cloud-docs\/run\/states).\n\n### Integrations\n\n#### Cost Estimation\n\nEnable and disable the [cost estimation](\/terraform\/cloud-docs\/cost-estimation) feature for all workspaces.\n\n#### Policies\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nPolicies let you define and enforce rules for Terraform runs. You can write them using either the [Sentinel](\/terraform\/cloud-docs\/policy-enforcement\/sentinel) or [Open Policy Agent (OPA)](\/terraform\/cloud-docs\/policy-enforcement\/opa) policy-as-code frameworks and then group them into policy sets that you can apply to workspaces in your organization. To create policies and policy sets, you must have [permission to manage policies](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-permissions).\n\n#### Policy Sets\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nCreate groups of policies and enforce those policy sets globally or on specific [projects](\/terraform\/cloud-docs\/projects\/manage) and workspaces. You can create policy sets through the Terraform API, by connecting a VCS repository containing policies, or directly in HCP Terraform. 
To create policies and policy sets, you must have [permission to manage policies](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-permissions).\n\nRefer to [Managing Policy Sets](\/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets) for details.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n#### Run Tasks\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/run-tasks.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nManage the run tasks that you can add to workspaces within the organization. [Run tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks) let you integrate third-party tools and services at specific stages in the HCP Terraform run lifecycle.\n\n### Security\n\n#### Agents\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/agents.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nCreate and manage [HCP Terraform agent pools](\/terraform\/cloud-docs\/agents). HCP Terraform agents let HCP Terraform communicate with isolated, private, or on-premises infrastructure. This is useful for on-premises infrastructure types such as vSphere, Nutanix, OpenStack, enterprise networking providers, and infrastructure within a protected enclave.\n\n#### API Tokens\n\nOrganization owners can set up a special [Organization API Token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens) that is not associated with a specific user or team.\n\n#### Authentication\n\nOrganization owners can determine when users must reauthenticate and require [two-factor authentication](\/terraform\/cloud-docs\/users-teams-organizations\/2fa) for all members of the organization.\n\n#### SSH Keys\n\nManage [SSH keys for cloning Git-based modules](\/terraform\/cloud-docs\/workspaces\/settings\/ssh-keys) during Terraform runs. 
This does not include keys to access a connected VCS provider.\n\n#### SSO\n\nOrganization owners can set up an SSO provider for the organization.\n\n### Version Control\n\n#### VCS General\n\nManage the [Automatically cancel plan-only runs triggered by outdated commits](\/terraform\/cloud-docs\/users-teams-organizations\/organizations\/vcs-speculative-plan-management) setting.\n\n#### VCS Events\n\n-> **Note:** This feature is in beta.\n\nReview the event logs for GitLab.com connections.\n\n#### VCS Providers\n\nConfigure [VCS providers](\/terraform\/cloud-docs\/vcs) to use in the organization. You must have [permission to manage VCS settings](\/terraform\/cloud-docs\/users-teams-organizations\/permissions).\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Destruction and Deletion\n\n#### Data Retention Policies\n\n<EnterpriseAlert>\nData retention policies are exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href=\"https:\/\/developer.hashicorp.com\/terraform\/enterprise\">Learn more about Terraform Enterprise<\/a>.\n<\/EnterpriseAlert>\n\nAn organization owner can set or override the following data retention policies:\n\n- **Admin default policy**\n- **Do not auto-delete**\n- **Auto-delete data**\n\nSetting the data retention policy to **Admin default policy** disables the other data retention policy settings.\n\nBy default, the **Do not auto-delete** option is enabled for an organization. This option directs Terraform Enterprise to retain data associated with configuration and state versions, but organization owners can define configurable data retention policies that allow Terraform to _soft delete_ the backing data associated with configuration versions and state versions. Soft deleting refers to marking a data object for garbage collection so that Terraform can delete the object after a set number of days.\n\nOnce an object is soft deleted, any attempts to read the object will fail. 
Until the garbage collection process begins, you can restore soft-deleted objects using the APIs described in the [configuration version documentation](\/terraform\/enterprise\/api-docs\/configuration-versions) and the [state version documentation](\/terraform\/enterprise\/api-docs\/state-versions). Terraform permanently deletes the archivist storage after the garbage collection grace period elapses.\n\nThe organization policy is the default policy applied to all workspaces, but members of individual workspaces can set overriding policies for their workspaces that take precedence over the organization policy.\n\n## Trial Expired Organizations\n\nHCP Terraform paid features are available as a free trial. When a free trial has expired, the organization displays a banner reading **TRIAL EXPIRED \u2014\u00a0Upgrade Required**.\n\nOrganizations with expired trials return to the feature set of a free organization, but they retain any data created as part of paid features. Specifically, HCP Terraform disables the following features:\n\n- Teams other than `owners`. HCP Terraform locks users who do not belong to the `owners` team out of the organization, but preserves team membership and permissions and re-enables them after you upgrade the organization.\n- Sentinel policy checks. 
HCP Terraform preserves existing policies and policy sets and re-enables them after you upgrade the organization.\n- Cost estimation.","site":"terraform","answers_cleaned":"    page title  Organizations overview description       Organizations are partitions in HCP Terraform and Terraform Enterprise that let teams within a business unit collaborate on infrastructure as code projects  Organizations contain one or more projects  which contain one or more workspaces        teams    terraform cloud docs users teams organizations teams   users    terraform cloud docs users teams organizations users   owners    terraform cloud docs users teams organizations teams the owners team    Organizations overview  This topic provides overview information about how to create and manage organizations in HCP Terraform and Terraform Enterprise  An organization contains one or more projects      Requirements  The   admin   permission preset must be enabled on your profile to create and manage organizations in the HCP Terraform UI  Refer to  Permissions   terraform cloud docs users teams organizations permissions organization permissions  for additional information       API and Terraform Enterprise Provider  In addition to the HCP Terraform UI  you can use the following methods to manage organizations     Organizations API   terraform cloud docs api docs organizations    The  tfe  provider   tfe organization   https   registry terraform io providers hashicorp tfe latest docs resources organization  resource     Selecting organizations  HCP Terraform displays your current organization in the bottom left of the sidebar  To select an organization   1  Click the current organization name to view a list of all the organizations where you are a member  1  Click an organization to select it  HCP Terraform displays list of workspaces within that organization      Joining and leaving organizations  To join an organization  the organization  owners    or a user with specific  team management   
terraform cloud docs users teams organizations permissions team management permissions  permissions must invite you  and you must accept the emailed invitation   Learn more   users     permissions citation    intentionally unused   keep for maintainers  To leave an organization   1  Click the Terraform logo in the upper left corner to navigate to the   Organizations   page  1  Click the ellipses           next to the organization and select   Leave organization      You do not need permission from the owners to leave an organization  but you cannot leave if you are the last member of the owners team  Either add a new owner and then leave  or  delete the organization   terraform cloud docs users teams organizations organizations general       Creating organizations  On Terraform Enterprise  administrators can restrict your ability to create organizations  Refer to  Administration  General Settings   terraform enterprise admin application general organization creation  for details   On HCP Terraform  any user can create a new organization  If you do not belong to any organizations  HCP Terraform prompts you to create one the first time you log in  To create an organization   1  Click the current organization name and select   Create new organization    The   Create a new organization   page appears  1  Enter a unique   Organization name   Organization names can include numbers  letters  underscores        and hyphens        1  Provide an   Email address   to receive notifications about the organization  1  Click   Create organization     HCP Terraform shows the new organization and prompts you to create a new workspace  You can also  invite other users   users  to join the organization        BEGIN  TFC only name managed resources         Managed resources  Your organization s managed resource count helps you understand the number of infrastructure resources that HCP Terraform manages across all your workspaces   HCP Terraform reads all the workspaces  state files to 
determine the total number of managed resources  Each  resource   terraform language resources syntax  instance in the state equals one managed resource  HCP Terraform includes resources in modules and each resource created with the  count  or  for each  meta arguments  HCP Terraform does not include  data sources   terraform language data sources  in the count  Refer to  Managed Resources Count   terraform cloud docs workspaces state managed resources count  in the workspace state documentation for more details   You can view your organization s managed resource count on the   Usage   page        END  TFC only name managed resources         Create and manage reserved tags       Reserved tags is in private beta and unavailable for some users    Contact your HashiCorp representative for information about participating in the private beta   Tags are key value pairs that you can apply to projects and workspaces  Tags help you organize your projects and resources  as well as track resource consumption  Refer to the following topics for information about creating and managing tags attached to projects and workspaces      Create a project   terraform cloud docs projects manage create a project     Create workspace tags   terraform cloud docs workspaces tags   You can define reserved tag keys for your organization so that project and workspace managers can use consistent labels   1  Click on   Tags   in the sidebar to view all tags in your organization   1  Click on the   Reserved Keys   tab  You can perform the following actions     1  Use the search bar to find a specific tag key     1  Click on a column header to sort the table     1  Click on the ellipses menu to open the management options for each tag key     1  Click   New tag key   to define a new key   You can also view single value tags that workspace managers added directly to their workspaces  Refer to  Tags   tags  in the organization settings reference for additional information      Managing settings  To 
view and manage an organization s settings  click   Settings     The contents of the organization settings depends on your permissions within the organization  All users can review the organization s contact email  view the membership of any teams they belong to  and view the organization s authentication policy   Organization owners  owners  can view and manage the entire list of organization settings  Refer to  Organization Permissions   terraform cloud docs users teams organizations permissions organization permissions  for details   You may be able to manage the following organization settings    permissions citation    intentionally unused   keep for maintainers      Organization settings       General  Review the organization name and contact email  Organization owners can choose to change the organization name  contact email  and the default execution mode  or delete the organization  When an organization owner updates the default execution mode  all workspaces configured to  inherit this value   terraform cloud docs workspaces settings execution mode  will be affected   Organization owners can also choose whether  workspace administrators   terraform cloud docs users teams organizations permissions workspace admins  can delete workspaces that are managing resources  Deleting a workspace with resources under management introduces risk because Terraform can no longer track or manage the infrastructure  The workspace s users must manually delete any remaining resources or  import   terraform cli commands import  them into another Terraform workspace        BEGIN  TFC only name generated tests      Organization owners using HCP Terraform Plus edition can choose whether members with  module management permissions   terraform cloud docs users teams organizations permissions manage private registry  can  generate module tests   terraform cloud docs registry test generated module tests         END  TFC only name generated tests            Renaming an organization   
    Warning    Deleting or renaming an organization can be very disruptive  We strongly recommend against deleting or renaming organizations with active members   To rename an organization that manages infrastructure   1  Alert all members of the organization about the name change  1  Cancel in progress and pending runs or wait for them to finish  HCP Terraform cannot change the name of an organization with runs in progress  1  Lock all workspaces to ensure that no new runs will start before you change the name  1  Rename the organization  1  Update all components using the HCP Terraform API to the new organization name  This includes Terraform s  cloud  block CLI integration  the  tfe  Terraform provider  and any external API integrations  1  Unlock workspaces and resume normal operations        Plan   Billing  Review the organization s plan and any invoices for previous plan payments  Organization owners can also upgrade to one of HCP Terraform s paid plans  downgrade to a free plan  or begin a free trial of paid features        Tags  Click the   Tags   tab in the   Tags Management   screen to view single value tags that workspace managers added directly to their workspaces  For information about key value tag keys  refer to  Create and manage reserved tags   create and manage reserved tags    You can perform the following actions in the   Tags   tab   1  Search for a tag key or value  1  Click on a column header to sort tags        Teams       BEGIN  TFC only name pnp callout      include  tfc package callouts team management mdx       END  TFC only name pnp callout      All users in an organization can access the   Teams   page  which displays a list of  teams    within the organization    Organization owners and users with the  include secret teams permission   terraform cloud docs users teams organizations permissions include secret teams  can      view all  secret teams   terraform cloud docs users teams organizations teams manage team visibility      view 
each team s membership     manage team API tokens  HCP Terraform restricts team creation  team deletion  and management of team API tokens to organization owners and users with the  manage teams   terraform cloud docs users teams organizations permissions manage teams  permission  Organization owners and users with the  manage membership   terraform cloud docs users teams organizations permissions manage membership  permission can manage team membership  Remember that users must accept their organization invitations before you can add them to a team    permissions citation    intentionally unused   keep for maintainers       Users  Organization owners and users with  manage membership   terraform cloud docs users teams organizations permissions manage membership  permissions can invite HCP Terraform users into the organization  cancel invitations  and remove existing members   The list of users is separated into one tab for active users and one tab for invited users who have not yet accepted their invitations  For active users  the list includes usernames  email addresses  avatar icons  two factor authentication status  and current team memberships  Use the   Search by username or email   field to filter these lists   User invitations are always sent by email  you cannot invite someone using their HCP Terraform username  To invite a user to an organization   1  Click   Invite a user    The   invite a user   box appears  1  Enter the user s email address and optionally add them to one or more teams  If the user accepts the invitation  HCP Terraform automatically adds them to the specified teams   All permissions in HCP Terraform are managed through teams  Users can join an organization without belonging to any teams  but they cannot use Terraform Cloud features until they belong to a team  Refer to  permissions   terraform cloud docs users teams organizations permissions  for details    permissions citation    intentionally unused   keep for maintainers       
Variable Sets  View all of the available variable sets and their variables  Users with   read and write variables  permissions   terraform cloud docs users teams organizations permissions general workspace permissions  can also create variable sets and assign them to one or more workspaces   Variable sets let you reuse the same variables across multiple workspaces in the organization  For example  you could define a variable set of provider credentials and automatically apply it to several workspaces  rather than manually defining credential variables in each  Changes to variable sets instantly apply to all appropriate workspaces  saving time and reducing errors from manual updates   Refer to the  variables overview   terraform cloud docs workspaces variables  documentation for details about variable types  scope  and precedence  Refer to  managing variables   terraform cloud docs workspaces variables managing variables  for details about how to create and manage variable sets        Health       BEGIN  TFC only name pnp callout      include  tfc package callouts health assessments mdx       END  TFC only name pnp callout      HCP Terraform can perform automatic health assessments in a workspace to assess whether its real infrastructure matches the requirements defined in its Terraform configuration  Health assessments include the following types of evaluations    Drift detection determines whether your real world infrastructure matches your Terraform configuration  Drift detection requires Terraform version 0 15 4     Continuous validation determines whether custom conditions in the workspace s configuration continue to pass after Terraform provisions the infrastructure  Continuous validation requires Terraform version 1 3 0    You can enforce health assessments for all eligible workspaces or let each workspace opt in to health assessments through workspace settings  Refer to  Health   terraform cloud docs workspaces health  in the workspaces documentation for 
more details        Runs From the Workspaces page  click   Settings   in the sidebar  then   Runs   to view all of the current runs in your organization s workspaces  The   Runs   page displays    The name of the run   The run s ID   What triggered the run   The workspace and project where the run is taking place   When the latest change in the run occurred   A button allowing you to cancel that run  You can apply the following filters to limit the runs HCP Terraform displays    Click   Needs Attention   to display runs that require user input to continue  such as approving a plan or overriding a policy     Click   Running   to display runs that are in progress    Click   On Hold   to display paused runs    For precise filtering  click   More filters   and check the boxes to filter runs by specific  run statuses   terraform cloud docs run states    run operations   terraform cloud docs run modes and options   workspaces  or  agent pools   terraform cloud docs agents agent pools   Click   Apply filters   to list the runs that match your criteria   You can dismiss any of your filtering criteria by clicking the   X   next to the filter name above the table displaying your runs    For more details about run states  refer to  Run States and Stages   terraform cloud docs run states        Integrations       Cost Estimation  Enable and disable the  cost estimation   terraform cloud docs cost estimation  feature for all workspaces        Policies       BEGIN  TFC only name pnp callout      include  tfc package callouts policies mdx       END  TFC only name pnp callout      Policies let you define and enforce rules for Terraform runs  You can write them using either the  Sentinel   terraform cloud docs policy enforcement sentinel  or  Open Policy Agent  OPA    terraform cloud docs policy enforcement opa  policy as code frameworks and then group them into policy sets that you can apply to workspaces in your organization  To create policies and policy sets  you must have  
permission to manage policies   terraform cloud docs users teams organizations permissions organization permissions         Policy Sets       BEGIN  TFC only name pnp callout      include  tfc package callouts policies mdx       END  TFC only name pnp callout      Create groups of policies and enforce those policy sets globally or on specific  projects   terraform cloud docs projects manage  and workspaces  You can create policy sets through the Terraform API  by connecting a VCS repository containing policies  or directly in HCP Terraform  To create policies and policy sets  you must have  permission to manage policies   terraform cloud docs users teams organizations permissions organization permissions    Refer to  Managing Policy Sets   terraform cloud docs policy enforcement manage policy sets  for details    permissions citation    intentionally unused   keep for maintainers       Run Tasks       BEGIN  TFC only name pnp callout      include  tfc package callouts run tasks mdx       END  TFC only name pnp callout      Manage the run tasks that you can add to workspaces within the organization   Run tasks   terraform cloud docs workspaces settings run tasks  let you integrate third party tools and services at specific stages in the HCP Terraform run lifecycle       Security       Agents       BEGIN  TFC only name pnp callout      include  tfc package callouts agents mdx       END  TFC only name pnp callout      Create and manage  HCP Terraform agent pools   terraform cloud docs agents   HCP Terraform agents let HCP Terraform communicate with isolated  private  or on premises infrastructure  This is useful for on premises infrastructure types such as vSphere  Nutanix  OpenStack  enterprise networking providers  and infrastructure within a protected enclave        API Tokens  Organization owners can set up a special  Organization API Token   terraform cloud docs users teams organizations api tokens  that is not associated with a specific user or team        
Authentication  Organization owners can determine when users must reauthenticate and require  two factor authentication   terraform cloud docs users teams organizations 2fa  for all members of the organization        SSH Keys  Manage  SSH keys for cloning Git based modules   terraform cloud docs workspaces settings ssh keys  during Terraform runs  This does not include keys to access a connected VCS provider        SSO  Organization owners can set up an SSO provider for the organization       Version Control       VCS General  Configure  Automatically cancel plan only runs triggered by outdated commits   terraform cloud docs users teams organizations organizations vcs speculative plan management  to manage the setting        VCS Events       Note    This feature is in beta   Review the event logs for GitLab com connections        VCS Providers  Configure  VCS providers   terraform cloud docs vcs  to use in the organization  You must have  permission to manage VCS settings   terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers      Destruction and Deletion       Data Retention Policies   EnterpriseAlert  Data retention policies are exclusive to Terraform Enterprise  and not available in HCP Terraform   a href  https   developer hashicorp com terraform enterprise  Learn more about Terraform Enterprise  a     EnterpriseAlert   An organization owner can set or override the following data retention policies       Admin default policy       Do not auto delete       Auto delete data    Setting the data retention policy to   Admin default policy   disables the other data retention policy settings   By default  the   Do not auto delete   option is enabled for an organization  This option directs Terraform Enterprise to retain data associated with configuration and state versions  but organization owners can define configurable data retention policies that allow Terraform to  soft delete  the 
backing data associated with configuration versions and state versions  Soft deleting refers to marking a data object for garbage collection so that Terraform can delete the object after a set number of days   Once an object is soft deleted  any attempts to read the object will fail  Until the garbage collection process begins  you can restore soft deleted objects using the APIs described in the  configuration version documentation   terraform enterprise api docs configuration versions  and the  state version documentation   terraform enterprise api docs state versions   Terraform permanently deletes the archivist storage after the garbage collection grace period elapses   The organization policy is the default policy applied to all workspaces  but members of individual workspaces can set overriding policies for their workspaces that take precedence over the organization policy      Trial Expired Organizations  HCP Terraform paid features are available as a free trial  When a free trial has expired  the organization displays a banner reading   TRIAL EXPIRED   Upgrade Required     Organizations with expired trials return to the feature set of a free organization  but they retain any data created as part of paid features  Specifically  HCP Terraform disables the following features     Teams other than  owners  and locks users who do not belong to the  owners  team out of the organization  HCP Terraform preserves team membership and permissions and re enables them after you upgrade the organization    Sentinel policy checks  HCP Terraform preserves existing policies and policy sets and re enables them after you upgrade the organization    Cost estimation "}
{"questions":"terraform page title Microsoft Entra ID Single Sign on Terraform Cloud The Microsoft Entra ID previously Azure Active Directory SSO integration currently supports the following SAML features Single Sign on Microsoft Entra ID Learn how to configure single sign on with Microsoft Entra ID previously Entra active directory tfc only true","answers":"---\npage_title: Microsoft Entra ID - Single Sign-on - Terraform Cloud\ntfc_only: true\ndescription: >-\n  Learn how to configure single sign-on with Microsoft Entra ID (previously Azure Active Directory).\n---\n\n# Single Sign-on: Microsoft Entra ID\n\nThe Microsoft Entra ID (previously Azure Active Directory) SSO integration currently supports the following SAML features:\n\n- Service Provider (SP) initiated SSO\n- Identity Provider (IdP) initiated SSO\n- Just-in-Time Provisioning\n\nFor more information on the listed features, visit the [Microsoft Entra ID SAML Protocol Documentation](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/single-sign-on-saml-protocol).\n\n## Configuration (Microsoft Entra ID)\n\n1. Sign in to the Entra portal.\n1. On the left navigation pane, select the **Microsoft Entra ID** service.\n1. Navigate to **Enterprise Applications** and then select **All Applications**.\n1. To add a new application, select **New application**.\n1. In the **Add from the gallery** section, type **Terraform Cloud** in the search box.\n1. Select **Terraform Cloud** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.\n1. On the **Terraform Cloud** application integration page, find the **Manage** section and select **single sign-on**.\n1. On the **Select a single sign-on method** page, select **SAML**.\n1. In the SAML Signing Certificate section (you may need to refresh the page), copy the **App Federation Metadata Url**.\n\n## Configuration (HCP Terraform)\n\n1. Visit your organization settings page and click \"SSO\".\n\n1. 
Click \"Setup SSO\".\n\n   ![sso-setup](\/img\/docs\/setup.png)\n\n1. Select \"Microsoft Entra ID\" and click \"Next\".\n\n   ![sso-wizard-choose-provider-entra](\/img\/docs\/wizard-choose-provider-entra.png)\n\n1. Provide your App Federation Metadata URL.\n\n   ![sso-wizard-configure-settings-entra](\/img\/docs\/wizard-configure-settings-entra.png)\n\n1. Save, and you should see a completed Terraform Cloud SAML configuration.\n\n1. Copy the Entity ID and Reply URL.\n\n## Configuration (Microsoft Entra ID)\n\n1. In the Entra portal, on the **Terraform Cloud** application integration page, find the **Manage** section and select **single sign-on**.\n1. On the **Select a single sign-on method** page, select **SAML**.\n1. On the **Set up single sign-on with SAML** page, click the edit\/pen icon for **Basic SAML Configuration** to edit the settings.\n   1. In the **Identifier** text box, paste the **Entity ID**.\n   1. In the **Reply URL** text box, paste the **Reply URL**.\n   1. For Service Provider initiated SSO, type `https:\/\/app.terraform.io\/session` in the **Sign-On URL** text box. Otherwise, leave the box blank.\n   1. Select **Save**.\n1. On the **Single sign-on** page, download the `Certificate (Base64)` file from under **SAML Signing Certificate**.\n1. On the app's overview page, find the **Manage** section and select **Users and groups**.\n1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.\n1. In the **Users and groups** dialog, select your user from the Users list, then click the **Select** button at the bottom of the screen.\n1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the \"Default Access\" role is selected.\n1. In the **Add Assignment** dialog, click the **Assign** button.\n\n## Configuration (HCP Terraform)\n\nTo edit your Entra SSO configuration settings:\n\n   1. Go to **Public Certificate**.\n   1. 
Paste the contents of the SAML Signing Certificate you downloaded from Microsoft Entra ID.\n   1. Save Settings.\n1. [Verify](\/terraform\/cloud-docs\/users-teams-organizations\/single-sign-on\/testing) your settings and click \"Enable\".\n\n1. Your Entra SSO configuration is complete and ready to [use](\/terraform\/cloud-docs\/users-teams-organizations\/single-sign-on#signing-in-with-sso).\n\n   ![sso-settings](\/img\/docs\/settings-entra.png)\n\n## Team and Username Attributes\n\nTo configure team management in your Microsoft Entra ID application:\n\n1. Navigate to the single sign-on page.\n1. Edit step 2, **User Attributes & Claims**. \n1. Add a new group claim.\n1. In **Group Claims**, select **Security Groups**.  \n1. In the **Source Attribute** field, select either **sAMAccountName** to use account names or **Group ID** to use group UUIDs. \n1. Check **Customize the name of the group claim**. \n1. Set **Name (required)** to \"MemberOf\" and leave the namespace field blank.\n\n-> **Note:** When you configure Microsoft Entra ID to use Group Claims, it provides Group UUIDs instead of human readable names in its SAML assertions. We recommend [configuring SSO Team IDs](\/terraform\/cloud-docs\/users-teams-organizations\/single-sign-on#team-names-and-sso-team-ids) for your HCP Terraform teams to match these Entra Group UUIDs.\n\nIf you plan to use SAML to set usernames in your Microsoft Entra ID application:\n\n1. Navigate to the single sign-on page.\n1. Edit step 2, **User Attributes & Claims.**  \n   We recommend naming the claim \"username\", leaving the namespace blank, and sourcing `user.displayname` or `user.mailnickname` as a starting point. If you have a Terraform Enterprise account, you can source `user.mail` or `user.userprincipalname`. 
Note that HCP Terraform usernames only allow lowercase letters, numbers, and dashes.\n\nIf you namespaced any of your claims, then Microsoft Entra ID passes the attribute name using the format `<claim_namespace\/claim_name>`. Consider this format when setting team and username attribute names.\n\n## Troubleshooting the SAML assertion\n[Use this guide](https:\/\/support.hashicorp.com\/hc\/en-us\/articles\/1500005371682-Capturing-a-SAML-Assertion) to verify and validate the claims being sent in the SAML response. ","site":"terraform","answers_cleaned":"    page title  Microsoft Entra ID   Single Sign on   Terraform Cloud tfc only  true description       Learn how to configure single sign on with Microsoft Entra ID  previously Entra active directory          Single Sign on  Microsoft Entra ID  The Microsoft Entra ID  previously Azure Active Directory  SSO integration currently supports the following SAML features     Service Provider  SP  initiated SSO   Identity Provider  IdP  initiated SSO   Just in Time Provisioning  For more information on the listed features  visit the  Microsoft Entra ID SAML Protocol Documentation  https   learn microsoft com en us entra identity platform single sign on saml protocol       Configuration  Microsoft Entra ID   1  Sign in to the Entra portal  1  On the left navigation pane  select the   Microsoft Entra ID   service  1  Navigate to   Enterprise Applications   and then select   All Applications    1  To add new application  select   New application    1  In the   Add from the gallery   section  type   Terraform Cloud   in the search box  1  Select   Terraform Cloud   from results panel and then add the app  Wait a few seconds while the app is added to your tenant  1  On the   Terraform Cloud   application integration page  find the   Manage   section and select   single sign on    1  On the   Select a single sign on method   page  select   SAML    1  In the SAML Signing Certificate section  you may need to refresh the page  copy the 
  App Federation Metadata Url        Configuration  HCP Terraform   1  Visit your organization settings page and click  SSO    1  Click  Setup SSO         sso setup   img docs setup png   1  Select  Microsoft Entra ID  and click  Next         sso wizard choose provider entra   img docs wizard choose provider entra png   1  Provide your App Federation Metadata URL        sso wizard configure settings entra   img docs wizard configure settings entra png   1  Save  and you should see a completed Terraform Cloud SAML configuration   1  Copy Entity ID and Reply URL      Configuration  Microsoft Entra ID   1  In the Entra portal  on the   Terraform Cloud   application integration page  find the   Manage   section and select   single sign on    1  On the   Select a single sign on method   page  select   SAML    1  On the   Set up single sign on with SAML   page  click the edit pen icon for   Basic SAML Configuration   to edit the settings     1  In the   Identifier   text box  paste the   Entity ID       1  In the   Reply URL   text box  paste the   Reply URL       1  For Service Provider initiated SSO  type  https   app terraform io session  in the   Sign On URL   text box  Otherwise  leave the box blank     1  Select   Save    1  On the   Single sign on   page  download the  Certificate  Base64   file from under   SAML Signing Certificate    1  In the app s overview page  find the   Manage   section and select   Users and groups    1  Select   Add user    then select   Users and groups   in the   Add Assignment   dialog  1  In the   Users and groups   dialog  select your user from the Users list  then click the   Select   button at the bottom of the screen  1  If you are expecting a role to be assigned to the users  you can select it from the   Select a role   dropdown  If no role has been set up for this app  you see  Default Access  role selected  1  In the   Add Assignment   dialog  click the   Assign   button      Configuration  HCP Terraform  To edit your Entra SSO 
configuration settings      1  Go to   Public Certificate       1  Paste the contents of the SAML Signing Certificate you downloaded from Microsoft Entra ID     1  Save Settings  1   Verify   terraform cloud docs users teams organizations single sign on testing  your settings and click  Enable    1  Your Entra SSO configuration is complete and ready to  use   terraform cloud docs users teams organizations single sign on signing in with sso         sso settings   img docs settings entra png      Team and Username Attributes  To configure team management in your Microsoft Entra ID application   1  Navigate to the single sign on page  1  Edit step 2    User Attributes   Claims     1  Add a new group claim  1  In   Group Claims    select   Security Groups      1  In the   Source Attribute   field  select either   sAMAccountName   to use account names or   Group ID   to use group UUIDs   1  Check   Customize the name of the group claim     1  Set   Name  required    to  MemberOf  and leave the namespace field blank        Note    When you configure Microsoft Entra ID to use Group Claims  it provides Group UUIDs instead of human readable names in its SAML assertions  We recommend  configuring SSO Team IDs   terraform cloud docs users teams organizations single sign on team names and sso team ids  for your HCP Terraform teams to match these Entra Group UUIDs   If you plan to use SAML to set usernames in your Microsoft Entra ID application   1  Navigate to the single sign on page  1  Edit step 2    User Attributes   Claims         We recommend naming the claim  username   leaving the namespace blank  and sourcing  user displayname  or  user mailnickname  as a starting point  If you have a Terraform Enterprise account  you can source  user mail  or  user userprincipalname   Note that HCP Terraform usernames only allow lowercase letters  numbers  and dashes   If you namespaced any of your claims  then Microsoft Entra ID passes the attribute name using the format   claim 
namespace claim name    Consider this format when setting team and username attribute names      Troubleshooting the SAML assertion  Use this guide  https   support hashicorp com hc en us articles 1500005371682 Capturing a SAML Assertion  to verify and validate the claims being sent in the SAML response  "}
{"questions":"terraform Single Sign on SSO and more page title Single Sign on HCP Terraform Learn how single sign on SSO works in HCP Terraform how to sign in with tfc only true","answers":"---\npage_title: Single Sign-on - HCP Terraform\ndescription: >-\n  Learn how single sign-on (SSO) works in HCP Terraform, how to sign in with\n  SSO, and more.\ntfc_only: true\n---\n\n# Single Sign-on\n\n~> **Important:** This page is about configuring single sign-on in HCP Terraform. Terraform Enterprise's single sign-on is configured differently. If you administer a Terraform Enterprise instance, see [Terraform Enterprise: SAML Configuration](\/terraform\/enterprise\/saml\/configuration).\n\nHCP Terraform allows organizations to configure SAML single sign-on (SSO), an alternative to traditional user management. SSO gives owners more control over access to your organization\u2019s [Projects](\/terraform\/cloud-docs\/projects\/manage), [Workspaces](\/terraform\/cloud-docs\/workspaces), and [Managed Resources](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#managed-resources). 
By using SSO, your organization can centralize management of users for HCP Terraform and other Software-as-a-Service (SaaS) vendors, providing greater accountability and security for an organization's identity and user management.\n\n## Supported Identity Providers (IdPs)\n\nSelect your preferred provider to learn more about what is supported for that provider and how to configure SSO for it.\n\n* [Microsoft Entra ID](\/terraform\/cloud-docs\/users-teams-organizations\/single-sign-on\/entra-id)\n* [Okta](\/terraform\/cloud-docs\/users-teams-organizations\/single-sign-on\/okta)\n* [SAML](\/terraform\/cloud-docs\/users-teams-organizations\/single-sign-on\/saml)\n\n## How SSO Works\n\nOrganization owners can enable SSO for their organization and configure an identity provider to connect to.\n\nOnce SSO is enabled for an organization, all non-owner members must sign in through SSO in order to access the organization. (Owners of an SSO-enabled organization can still access the organization with a username and password so that they can fix problems with SSO.)\n\n### SSO Identities and HCP Terraform User Accounts\n\nSSO does not automatically provision HCP Terraform user accounts. The first time you sign in with SSO, you must either provide a password to create a new HCP Terraform user account (using your SSO email address as the username), or link your SSO identity to an existing HCP Terraform user account. Once the SSO identity is linked, you can only log in to that organization using the linked account. 
You must [remove the SSO link](\/terraform\/cloud-docs\/users-teams-organizations\/single-sign-on\/linking-user-account#remove-sso-identity-link) if you want to access the organization with a different user account.\n\nIf an organization's owners disable SSO, all members can continue to access the organization using their HCP Terraform or HashiCorp Cloud Platform credentials.\n\n### Enforced Access Policy for HCP Terraform Resources\n\nAs a non-owner, when you attempt to access an organization that has SSO configured, you will be redirected to the organization's SAML IdP to authenticate and authorize access using your SAML IdP credentials before you can access the organization's [Projects](\/terraform\/cloud-docs\/projects\/manage), [Workspaces](\/terraform\/cloud-docs\/workspaces), and [Managed Resources](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#managed-resources).\n\nOwners of an SSO-enabled organization can still access the organization's resources through their HCP Terraform credentials or their HCP credentials (if linked to their HCP Terraform account). This is to enable a workaround to problems such as your IdP becoming unavailable, lost access to your MFA or IdP credentials, or other authentication issues.\n\n<Note>\n\nHCP Terraform users are able to use their single HCP Terraform account to access resources in different organizations; however, SAML SSO does not authorize access to:\n- Account Settings (such as to manage 2FA or generate\/revoke User API tokens)\n- Other organizations with SSO configured with a different SAML IdP. You will need to authenticate to each configured IdP separately.\n- Other organizations where SSO is not configured.\n\n<\/Note>\n\nIn order to access these resources, you may be asked to perform \u201cstep-up\u201d authentication using your HCP Terraform or HCP credentials. 
In most situations, a step-up HCP Terraform or HCP authentication login prompt will be required immediately after SSO authentication so that HCP Terraform can establish a broad user [session](\/terraform\/cloud-docs\/users-teams-organizations\/users#sessions) and check access and authorization to different resources in HCP Terraform.\n\nThe diagram below explains the access enforcement policy and the authentication required for an HCP Terraform user account to access different resources in HCP Terraform:\n\n![Screenshot: a diagram of resource access in HCP Terraform with both SSO and non-SSO authentication](\/img\/docs\/tfc-sso-auth.png)\n\n## Signing in with SSO\n\n1. Visit <https:\/\/app.terraform.io> and sign out if you're signed in.\n\n1. Click \"Sign in via SSO\".\n\n1. Provide your organization name and click **Next**.\n\n1. If you've signed in to HCP Terraform with SSO before, proceed to the next step.\n\n   If you're signing in for the first time under this account or for the first time accessing this organization, you'll be required to create a new account or link to an existing account. Use the links below the account creation form if you want to link your SSO identity to an existing account, then fill out and submit the relevant form.\n\n1. You will be redirected to your SSO identity provider. Authenticate your account as necessary.\n\n1. You are now signed in to HCP Terraform.\n\n## Configuring SSO in HCP Terraform Free Edition\n\nSSO is available to all HCP Terraform organizations, but is configured and managed differently in HCP Terraform Free Edition because Team Management is only available in HCP Terraform Standard and Plus Editions.\n\nIn HCP Terraform Free Edition organizations, after you successfully configure SSO, HCP Terraform automatically creates a team named `sso` and adds all current members of the `owners` team into it. 
In the Free Edition, you cannot modify the organization-level permissions for both the `owners` and `sso` teams. These teams grant every member full administrative access to the organization, projects, workspaces, and managed resources.\n\nAfter configuring SSO access, review the `owners` team membership. Members of the `owners` team have permission to bypass SSO in the event that your Identity Provider (IdP) is unavailable to service authentication requests, for example due to an IdP service outage, forgotten SSO credentials, or lost access to a software authenticator. An owner can use their HCP Terraform or HashiCorp Cloud Platform credentials (if linked) to bypass HCP Terraform SSO authentication at any time.\n\nTo encourage least privilege practices, HCP Terraform prompts the user who successfully configures SSO to optionally remove other users from the owners group.\n\n## Managing Owners and SSO Team Membership in the Free Edition\n\nThe Team Management feature set is available in HCP Terraform Standard and Plus Editions only. Inviting users to any Free Edition organization will add them to the `owners` team but not the `sso` team. Review the following to assign the proper team membership between the two teams.\n\nFor users new to HCP Terraform:\n- **Assign new users to the `owners` team** by inviting the user to the organization.\n- **Assign new users to the `sso` team** by asking the user to log in directly to the HCP Terraform organization via the SSO authentication login.\n\nFor managing existing users' permissions:\n- **Assign existing users from the `sso` team to the `owners` team:** Remove the user from the organization. Re-invite the user.\n- **Assign existing users from the `owners` team to the `sso` team:** Remove the user from the organization.  
Ask the user to sign in via SSO authentication login directly.\n\n## Managing Team Membership Through SSO\n\nHCP Terraform can automatically add users to teams based on their SAML assertion, so you can manage team membership in your directory service.\n\nTo enable team membership mapping:\n\n1. Click **Settings** in the navigation bar and then click **SSO** in the sidebar. The SSO configuration page appears.\n1. Toggle **Enable team management** to customize your team attribute.\n\nWhen team management is enabled, you can configure which SAML attribute in the SAMLResponse will control team membership. This defaults to the `MemberOf` attribute. The expected format of the corresponding `AttributeValue` in the SAMLResponse is either a string containing a comma-separated list of teams, or separate `AttributeValue` items specifying teams.\n\nWhen users log in through SAML, Terraform automatically adds them to the teams included in their assertion and automatically removes them from teams that are not included in their assertion. This automatic mapping overrides any manually set team memberships. Each time the user logs in, their team membership is adjusted to match their SAML assertion.\n\nHCP Terraform ignores team names that do not exactly match existing teams and will not create new teams from those listed in the assertion. If the chosen SAML attribute is not provided in the SAMLResponse, Terraform assigns users to a default team named `sso` and does not remove them from any existing teams.\n\nIt is not possible to assign users to the `owners` team through this attribute.\n\n## Team Names and SSO Team IDs\n\nHCP Terraform expects the team names in the team membership SAML attribute to exactly match team names or configured SSO team IDs stored in HCP Terraform. Values are case sensitive and literal. HCP Terraform does not process the value passed by the IdP. 
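The literal, case-sensitive matching described in this section can be sketched roughly as follows (an illustrative approximation only, not HCP Terraform's implementation; all team names, UUIDs, and the function itself are hypothetical):

```python
# Illustrative sketch: SAML attribute values are matched literally and
# case-sensitively against existing team names and configured SSO Team IDs.
# Unmatched values are ignored; "owners" can never be assigned this way.
def map_saml_teams(asserted_values, team_names, sso_team_ids):
    matched = set()
    for value in asserted_values:
        if value in team_names:            # exact team-name match
            matched.add(value)
        elif value in sso_team_ids:        # exact SSO Team ID match
            matched.add(sso_team_ids[value])
    matched.discard("owners")              # owners cannot be assigned via SAML
    return matched

# "DEVS" (wrong case) and a full DN match nothing; the Group UUID maps
# through its SSO Team ID; "owners" is excluded.
print(sorted(map_saml_teams(
    ["devs", "DEVS", "CN=devs,OU=Groups,DC=example,DC=com",
     "f81d4fae-7dec-11d0-a765-00a0c91e6bf6", "owners"],
    team_names={"devs", "ops", "owners"},
    sso_team_ids={"f81d4fae-7dec-11d0-a765-00a0c91e6bf6": "ops"},
)))  # → ['devs', 'ops']
```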
As a result, you cannot use values such as the full distinguished name (DN).\n\nYou can configure SSO Team IDs in the organization's **Teams** page. If an SSO Team ID is configured, HCP Terraform will attempt to match the chosen SAML attribute against both the team name and the SSO Team ID when mapping users to teams. You may want to create an SSO Team ID if the team membership SAML attribute is not human readable and is not used as the team's name in HCP Terraform.\n\nSSO Team IDs are particularly helpful if your SSO or Microsoft Entra ID provider restricts the `MemberOf` attribute in its SAML responses to Group UUIDs, rather than human readable group names. Setting the SSO Team ID allows you to maintain human readable team names in HCP Terraform, while still managing team membership through SSO or Microsoft Entra ID.\n\n## NameID Format\n\nHCP Terraform requires that the NameID format in the SAML response be set to `urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress` with a valid email address being provided as the value for this attribute.","site":"terraform","answers_cleaned":"    page title  Single Sign on   HCP Terraform description       Learn how single sign on  SSO  works in HCP Terraform  how to sign in with   SSO  and more  tfc only  true        Single Sign on       Important    This page is about configuring single sign on in HCP Terraform  Terraform Enterprise s single sign on is configured differently  If you administer a Terraform Enterprise instance  see  Terraform Enterprise  SAML Configuration   terraform enterprise saml configuration    HCP Terraform allows organizations to configure SAML single sign on  SSO   an alternative to traditional user management   SSO gives owners more control to secure accessibility to your organization s  Projects   terraform cloud docs projects manage    Workspaces   terraform cloud docs workspaces   and  Managed Resources   terraform cloud docs users teams organizations organizations managed resources   By using 
SSO  your organization can centralize management of users for HCP Terraform and other Software as a Service  SaaS  vendors  providing greater accountability and security for an organization s identity and user management      Supported Identity Providers  IdPs   Select your preferred provider to learn more about what is supported for that provider and how to configure SSO for it      Microsoft Entra ID   terraform cloud docs users teams organizations single sign on entra id     Okta   terraform cloud docs users teams organizations single sign on okta     SAML   terraform cloud docs users teams organizations single sign on saml      How SSO Works  Organization owners can enable SSO for their organization and configure an identity provider to connect to   Once SSO is enabled for an organization  all non owner members must sign in through SSO in order to access the organization   Owners of an SSO enabled organization can still access the organization through username and password  to enable fixing problems with SSO        SSO Identities and HCP Terraform User Accounts  SSO does not automatically provision HCP Terraform user accounts  The first time you sign in with SSO  you must either provide a password to create a new HCP Terraform user account  using your SSO email address as the username   or link your SSO identity to an existing HCP Terraform user account  Once the SSO identity is linked  you can only log in to that organization using the linked account  You must  remove the SSO link   terraform cloud docs users teams organizations single sign on linking user account remove sso identity link  if you want to access the organization with a different user account   If an organization s owners disable SSO  all members can continue to access the organization using their HCP Terraform or HashiCorp Cloud Platform credentials       Enforced Access Policy for HCP Terraform Resources  As a non owner  when you attempt to access an organization that has SSO configured  you 
will be redirected to the organization s SAML IdP to authenticate and authorize access using your SAML IdP credentials before you can access the organization s  Projects   terraform cloud docs projects manage    Workspaces   terraform cloud docs workspaces   and  Managed Resources   terraform cloud docs users teams organizations organizations managed resources    Owners of an SSO enabled organization can still access the organization s resources through their HCP Terraform credentials or their HCP credentials  if linked to their HCP Terraform account   This is to enable a workaround to problems such as your IdP becoming unavailable  lost access to your MFA or IdP credentials  or other authentication issues    Note   HCP Terraform users are able to use their single HCP Terraform account to access resources in different organizations  however  SAML SSO does not authorize access to    Account Settings  such as to manage 2FA or generate revoke User API tokens    Other organizations with SSO configured with a different SAML IdP   You will need to authenticate to each configured IdP separately    Other organizations where SSO is not configured     Note   In order to access these resources  you may be asked to  step up  authentication using your HCP Terraform or HCP credentials  In most situations  a step up HCP Terraform or HCP authentication login prompt will be required immediately after SSO authentication so that HCP Terraform can establish a broad user  session   terraform cloud docs users teams organizations users sessions  and to check access and authorization to different resources in HCP Terraform   The below diagram explains the the access enforcement policy and the authentication required for an HCP Terraform user account to access different resources in HCP Terraform     Screenshot  a diagram of resource access in HCP Terraform with both SSO and non SSO authentication   img docs tfc sso auth png      Signing in with SSO  1  Visit  https   app terraform io  and 
sign out if you re signed in   1  Click  Sign in via SSO    1  Provide your organization name and click   Next      1  If you ve signed in to HCP Terraform with SSO before  proceed to the next step      If you re signing in for the first time under this account or for the first time accessing this organization  you ll be required to create a new account or link to an existing account  Use the links below the account creation form if you want to link your SSO identity to an existing account  then fill out and submit the relevant form   1  You will be redirected to your SSO identity provider  Authenticate your account as necessary   1  You are now signed in to HCP Terraform      Configuring SSO in HCP Terraform Free Edition   SSO is available to all HCP Terraform organizations  but is configured and managed differently in HCP Terraform Free Edition because Team Management is only available in HCP Terraform Standard and Plus Editions   In HCP Terraform Free Edition organizations  after you successfully configure SSO  HCP Terraform automatically creates a team named  sso  and adds all current members of the  owners  team into it  In the Free Edition  you cannot modify the organization level permissions for both the  owners  and  sso  teams  These teams grant every member full administrative access to the organization  projects  workspaces  and managed resources     After configuring SSO access  review the  owners  team membership  Members of the  owners  team have permission to bypass SSO in the event that your Identity Provider  IDP  is unavailable to service authentication requests  for example due to an IDP service outage  an administrator forgot their SSO credentials  or lost access to their software authenticator  An owner can use their HCP Terraform or HashiCorp Cloud Platform credentials  if linked  to bypass HCP Terraform SSO authentication at any time   To encourage least privilege practices  HCP Terraform prompts the user who successfully configures SSO to 
optionally remove other users from the owners group      Managing Owners and SSO Team Membership in the Free Edition  The Team Management feature set is available in HCP Terraform Standard and Plus Editions only  Inviting users to any Free Edition organization will add them to the  owners  team but not the  sso  team  Review the following to assign the proper team membership between the two teams   For new users new to HCP Terraform     Assign new users to the  owners  team   by inviting the user to the organization      Assign new users to the  sso  team   by asking the user to login directly to the HCP Terraform organization via the SSO authentication login   For managing existing users permissions      Assign existing users from the  sso  team to the  owners  team    Remove the user from the organization  Re invite the user      Assign existing users from the  owners  team to the  sso  team    Remove the user from the organization   Ask the user to sign in via SSO authentication login directly      Managing Team Membership Through SSO  HCP Terraform can automatically add users to teams based on their SAML assertion  so you can manage team membership in your directory service   To enable team membership mapping   1  Click   Settings   in the navigation bar and then click   SSO   in the sidebar  The SSO configuration page appears  1  Toggle the   Enable team management to customize your team attribute     When team management is enabled  you can configure which SAML attribute in the SAMLResponse will control team membership  This defaults to the  MemberOf  attribute  The expected format of the corresponding  AttributeValue  in the SAMLResponse is a either a string containing a comma separated list of teams  or separate  AttributeValue  items specifying teams   When users log in through SAML  Terraform automatically adds them to the teams included in their assertion and automatically removes them from teams that are not included in their assertion  This automatic 
mapping overrides any manually set team memberships  Each time the user logs in  their team membership is adjusted to match their SAML assertion   HCP Terraform ignores team names that do not exactly match existing teams and will not create new teams from those listed in the assertion  If the chosen SAML attribute is not provided in the SAMLResponse  Terraform assigns users to a default team named  sso  and does not remove them from any existing teams   It is not possible to assign users to the  owners  team through this attribute      Team Names and SSO Team IDs  HCP Terraform expects the team names in the team membership SAML attribute to exactly match team names or configured SSO team IDs stored in HCP Terraform  Values are case sensitive and literal  HCP Terraform does not process the value passed by the IdP  As a result  you cannot use values such as the full distinguished name  DN    You can configure SSO Team IDs in the organization s   Teams   page  If an SSO Team ID is configured  HCP Terraform will attempt to match the chosen SAML attribute against both the team name and the SSO Team ID when mapping users to teams  You may want to create an SSO Team ID if the team membership SAML attribute is not human readable and is not used as the team s name in HCP Terraform   SSO Team IDs are particularly helpful if your SSO or Microsoft Entra ID provider restricts the  MemberOf  attribute in its SAML responses to Group UUIDs  rather than human readable group names  Setting the SSO Team ID allows you to maintain human readable team names in HCP Terraform  while still managing team membership through SSO or Microsoft Entra ID      NameID Format  HCP Terraform requires that the NameID format in the SAML response be set to  urn oasis names tc SAML 1 1 nameid format emailAddress  with a valid email address being provided as the value for this attribute "}
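The SSO record above describes the default `MemberOf` team-membership attribute (either separate `AttributeValue` items or one comma-separated string) and the required `emailAddress` NameID format. A minimal sketch of the relevant fragment of a `SAMLResponse` illustrates both; the element names follow the SAML 2.0 assertion schema, and the email address and team names are placeholders:

```xml
<saml:Subject>
  <!-- HCP Terraform requires the emailAddress NameID format,
       with a valid email address as the value -->
  <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
    user@example.com
  </saml:NameID>
</saml:Subject>
<saml:AttributeStatement>
  <!-- Default team-membership attribute: separate AttributeValue items.
       A single AttributeValue holding "devs,ops" is also accepted. -->
  <saml:Attribute Name="MemberOf">
    <saml:AttributeValue>devs</saml:AttributeValue>
    <saml:AttributeValue>ops</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
```

Per the record above, the values must exactly match existing team names or configured SSO Team IDs (case-sensitive); unmatched names are ignored rather than creating new teams.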
{"questions":"terraform include tfc package callouts notifications mdx page title Manage team notifications Learn how to set up team notifications to notify team members on external systems whenever a particular action takes place Manage team notifications HCP Terraform can use webhooks to notify external systems about run progress change requests and other events Team notifications allow you to configure relevant alerts that notify teams you specify whenever a certain event occurs","answers":"---\npage_title: Manage team notifications\ndescription: Learn how to set up team notifications to notify team members on external systems whenever a particular action takes place.\n---\n\n# Manage team notifications\n\nHCP Terraform can use webhooks to notify external systems about run progress, change requests, and other events. Team notifications allow you to configure relevant alerts that notify teams you specify whenever a certain event occurs. \n\n@include 'tfc-package-callouts\/notifications.mdx'\n\nYou can configure an individual team notification to notify up to twenty teams. To set up notifications for teams using the API, refer to the [Notification API](\/terraform\/cloud-docs\/api-docs\/notification-configurations#team-notification-configuration).\n\n## Requirements\n\n-> **Note**: Team notifications are in public beta. All APIs and workflows are subject to change.\n\nTo configure team notifications, you need the [**Manage teams**](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-teams) permissions for the team for which you want to configure notifications.\n\n## View notification configuration settings\n\nTo view your current team notifications, perform the following steps:\n\n1. Navigate to your organization\u2019s **Settings** page.  \n1. Select **Teams** in the sidebar navigation.  \n1. Select the team for which you want to view the notifications from your list of teams.\n1. 
Select **Notifications** in the sidebar navigation.\n\nHCP Terraform displays a list of any notification configurations you have set up. A notification configuration defines how and when you want to send notifications, and once you enable that configuration, it can send notifications. \n\n### Update and enable notification configurations\n\nEach notification configuration includes a brief overview of each configuration\u2019s name, type, the events that can trigger the notification, and the last time the notification was triggered. Clicking on a notification configuration opens a page where you can perform the following actions:\n\n* Enable your configuration to send notifications by toggling the switch.  \n* Delete a configuration by clicking **Delete notification**, then **Yes, delete notification configuration**.  \n* Test your notification\u2019s configuration by clicking **Send test**.   \n* Click **Edit notification** to edit your notification configuration.\n\nAfter creating a notification configuration, you can only edit the following aspects of that configuration:\n\n1. The configuration\u2019s name.  \n1. Whether this configuration notifies everyone on a team or specific members.  \n1. The workspace events that trigger notifications. You can choose from:  \n    * **All events** triggers a notification for every event in your workspace.  \n    * **No events** means that no workspace events trigger a notification.  \n    * **Only certain events** lets you specify which events trigger a notification.\n\nAfter making any changes, click **Update notification** to save your changes.\n\n## Create and configure a notification \n\nTo configure a new notification for a team or a user, perform the following steps:\n\n1. Navigate to your organization\u2019s **Settings** page.  \n1. Select **Teams** in the sidebar navigation.  \n1. Select the team you want to view the notifications for from your list of teams.  \n1. Select **Notifications** in the sidebar navigation.  
\n1. Click **Create a notification**.\n\nYou must complete the following fields for all new notification configurations:\n\n1. The **Destination** where HCP Terraform should deliver either a generic or a specifically formatted payload. Refer to [Notification payloads](#notification-payloads) for details.  \n1. The display **Name** for this notification configuration.  \n1. If you configure an email notification, you can optionally specify which **Email Recipients** will receive this notification.  \n1. If you choose to configure a webhook, you must also specify:   \n    * A **Webhook URL** for the destination of your webhook payload. Your URL must accept HTTP or HTTPS `POST` requests and be able to use the chosen payload type.   \n    * You can optionally configure a **Token** as an arbitrary secret string that HCP Terraform will use to sign its notification webhooks. Refer to [Notification authenticity](#notification-authenticity) for details. You cannot view the token after you save the notification configuration.  \n1. If you choose to specify either a **Slack** or **Microsoft Teams** notification, you must also configure your webhook URL for either service. For details, refer to Slack's documentation on [creating an incoming webhook](https:\/\/api.slack.com\/messaging\/webhooks#create_a_webhook) and Microsoft's documentation on [creating a workflow from a channel in teams](https:\/\/support.microsoft.com\/en-us\/office\/creating-a-workflow-from-a-channel-in-teams-242eb8f2-f328-45be-b81f-9817b51a5f0e).  \n1. Specify which [**Workspace events**](#workspace-events) should trigger this notification.  \n1. 
After you finish configuring your notification, click **Create a notification**.\n\nNote that if you create an email notification, you must have [**Manage membership**](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-membership) permissions on a team to select users from that team as email recipients.\n\n### Workspace events\n\nHCP Terraform can send notifications for all workspace events, no workspace events, or specific events. The following events are available for you to specify:\n\n| Event | Description |\n| :---- | :---- |\n| Change Requests | HCP Terraform will notify this team whenever someone creates a change request on a workspace to which this team has access. |\n\n## Enable and verify a notification\n\nTo configure HCP Terraform to stop sending notifications for a notification configuration, disable the **Enabled** setting on a configuration's detail page.\n\nHCP Terraform enables notifications for email configurations by default. Before enabling any webhook notifications, HCP Terraform attempts to verify the notification\u2019s configuration by sending a test message. If the test succeeds, HCP Terraform enables the notification. \n\nTo verify a notification configuration, the destination must respond with a `2xx` HTTP code. If verification fails, HCP Terraform does not enable the configuration and displays an error message.\n\nFor successful and unsuccessful verifications, click the **Last Response** box to view more information about the verification results. 
You can also send additional test messages by clicking **Send a Test**.\n\n## Notification Payloads\n\nNotification payloads contain different attributes depending on the integration you specified when configuring that notification.\n\n### Slack\n\nNotifications to Slack contain the following information:\n\n* Information about the change request, including the username and avatar of the person who created the change request.\n* The event that triggered the notification and the time that event occurred.\n\n### Microsoft Teams\n\nNotifications to Microsoft Teams contain the following information:\n* Information about the change request, including the username and avatar of the person who created the change request.\n* The event that triggered the notification and the time that event occurred.\n\n### Email\n\nEmail notifications contain the following information:\n* Information about the change request, including the username and avatar of the person who created the change request.\n* The event that triggered the notification and the time that event occurred.\n\n### Generic\n\nA generic notification contains information about the event that triggered it and the time that the event occurred. You can refer to the complete generic notification payload in the [API documentation](\/terraform\/cloud-docs\/api-docs\/notification-configurations#notification-payload).\n\nYou can use some of the values in the payload to retrieve additional information through the API, such as:\n* The [workspace ID](\/terraform\/cloud-docs\/api-docs\/workspaces#list-workspaces)  \n* The [organization name](\/terraform\/cloud-docs\/api-docs\/organizations#show-an-organization)\n\n## Notification Authenticity\n\nSlack notifications use Slack's own protocols to verify HCP Terraform's webhook requests.\n\nGeneric notifications can include a signature to verify the request. 
For notification configurations that include a secret token, HCP Terraform's webhook requests include an `X-TFE-Notification-Signature` header containing an HMAC signature computed from the token using the SHA-512 digest algorithm. The notification\u2019s receiving service is responsible for validating the signature. For more information and an example of how to validate the signature, refer to the [API documentation](\/terraform\/cloud-docs\/api-docs\/notification-configurations#notification-payload)","site":"terraform","answers_cleaned":"    page title  Manage team notifications description  Learn how to set up team notifications to notify team members on external systems whenever a particular action takes place         Manage team notifications  HCP Terraform can use webhooks to notify external systems about run progress  change requests  and other events  Team notifications allow you to configure relevant alerts that notify teams you specify whenever a certain event occurs     include  tfc package callouts notifications mdx   You can configure an individual team notification to notify up to twenty teams  To set up notifications for teams using the API  refer to the  Notification API   terraform cloud docs api docs notification configurations team notification configuration       Requirements       Note    Team notifications are in public beta  All APIs and workflows are subject to change   To configure team notifications  you need the    Manage teams     terraform cloud docs users teams organizations permissions manage teams  permissions for the team for which you want to configure notifications      View notification configuration settings  To view your current team notifications  perform the following steps   1  Navigate to your organization s   Settings   page    1  Select   Teams   in the sidebar navigation    1  Select the team for which you want to view the notifications from your list of teams  1  Select   Notifications   in the sidebar navigation   HCP 
Terraform displays a list of any notification configurations you have set up  A notification configuration defines how and when you want to send notifications  and once you enable that configuration  it can send notifications        Update and enable notification configurations  Each notification configuration includes a brief overview of each configuration s name  type  the events that can trigger the notification  and the last time the notification was triggered  Clicking on a notification configuration opens a page where you can perform the following actions     Enable your configuration to send notifications by toggling the switch      Delete a configuration by clicking   Delete notification    then   Yes  delete notification configuration        Test your notification s configuration by clicking   Send test         Click   Edit notification   to edit your notification configuration   After creating a notification configuration  you can only edit the following aspects of that configuration   1  The configuration s name    1  Whether this configuration notifies everyone on a team or specific members    1  The workspace events that trigger notifications  You can choose from            All events   triggers a notification for every event in your workspace            No events   means that no workspace events trigger a notification            Only certain events   lets you specify which events trigger a notification   After making any changes  click   Update notification   to save your changes      Create and configure a notification   To configure a new notification for a team or a user  perform the following steps   1  Navigate to your organization s   Settings   page    1  Select   Teams   in the sidebar navigation    1  Select the team you want to view the notifications for from your list of teams    1  Select   Notifications   in the sidebar navigation    1  Click   Create a notification     You must complete the following fields for all new notification 
configurations   1  The   Destination   where HCP Terraform should deliver either a generic or a specifically formatted payload  Refer to  Notification payloads   notification payloads  for details    1  The display   Name   for this notification configuration    1  If you configure an email notification  you can optionally specify which   Email Recipients   will receive this notification    1  If you choose to configure a webhook  you must also specify           A   Webhook URL   for the destination of your webhook payload  Your URL must accept HTTP or HTTPS  POST  requests and be able to use the chosen payload type           You can optionally configure a   Token   as an arbitrary secret string that HCP Terraform will use to sign its notification webhooks  Refer to  Notification authenticity   notification authenticity  for details  You cannot view the token after you save the notification configuration    1  If you choose to specify either a   Slack   or   Microsoft Teams   notification  you must also configure your webhook URL for either service  For details  refer to Slack s documentation on  creating an incoming webhook  https   api slack com messaging webhooks create a webhook  and Microsoft s documentation on  creating a workflow from a channel in teams  https   support microsoft com en us office creating a workflow from a channel in teams 242eb8f2 f328 45be b81f 9817b51a5f0e     1  Specify which    Workspace events     workspace events  should trigger this notification    1  After you finish configuring your notification  click   Create a notification     Note that if you are create an email notification  you must have    Manage membership     terraform cloud docs users teams organizations permissions manage membership  permissions on a team to select users from that team as email recipients       Workspace events  HCP Terraform can send notifications for all workspace events  no workspace events  or specific events  The following events are available for 
you to specify     Event   Description                       Change Requests   HCP Terraform will notify this team whenever someone creates a change request on a workspace to which this team has access        Enable and verify a notification  To configure HCP Terraform to stop sending notifications for a notification configuration  disable the   Enabled   setting on a configuration s detail page     HCP Terraform enables notifications for email configurations by default  Before enabling any webhook notifications  HCP Terraform attempts to verify the notification s configuration by sending a test message  If the test succeeds  HCP Terraform enables the notification    To verify a notification configuration  the destination must respond with a  2xx  HTTP code  If verification fails  HCP Terraform does not enable the configuration and displays an error message   For successful and unsuccessful verifications  click the   Last Response   box to view more information about the verification results  You can also send additional test messages by clicking   Send a Test        Notification Payloads  Notification payloads contain different attributes depending on the integration you specified when configuring that notification       Slack  Notifications to Slack contain the following information     Information about the change request  including the username and avatar of the person who created the change request    The event that triggered the notification and the time that event occurred       Microsoft Teams  Notifications to Microsoft Teams contain the following information    Information about the change request  including the username and avatar of the person who created the change request    The event that triggered the notification and the time that event occurred       Email  Email notifications contain the following information    Information about the change request  including the username and avatar of the person who created the change request    The event that 
triggered the notification and the time that event occurred       Generic  A generic notification contains information about the event that triggered it and the time that the event occurred  You can refer to the complete generic notification payload in the  API documentation   terraform cloud docs api docs notification configurations notification payload    You can use some of the values in the payload to retrieve additional information through the API  such as    The  workspace ID   terraform cloud docs api docs workspaces list workspaces      The  organization name   terraform cloud docs api docs organizations show an organization      Notification Authenticity  Slack notifications use Slack s own protocols to verify HCP Terraform s webhook requests   Generic notifications can include a signature to verify the request  For notification configurations that include a secret token  HCP Terraform s webhook requests include an  X TFE Notification Signature  header containing an HMAC signature computed from the token using the SHA 512 digest algorithm  The notification s receiving service is responsible for validating the signature  For more information and an example of how to validate the signature  refer to the  API documentation   terraform cloud docs api docs notification configurations notification payload "}
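The notification-authenticity scheme in the record above (an `X-TFE-Notification-Signature` header carrying an HMAC computed from the shared token with SHA-512) can be sketched on the receiving side in a few lines. This is a minimal illustration, assuming the header carries the hex-encoded digest of the raw request body; the function and variable names are hypothetical, not part of any HCP Terraform SDK:

```python
import hashlib
import hmac

def verify_tfe_signature(body: bytes, token: str, signature_header: str) -> bool:
    """Recompute the HMAC-SHA512 of the raw request body using the shared
    token and compare it against the X-TFE-Notification-Signature header."""
    expected = hmac.new(token.encode(), body, hashlib.sha512).hexdigest()
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest(expected, signature_header)
```

A receiving service would run this check against the raw body before parsing the payload, rejecting any request for which it returns `False`.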
{"questions":"terraform Manage teams You can grant team management abilities to members of teams with either one of the manage teams or manage organization access permissions Refer to Team Permissions terraform cloud docs users teams organizations permissions team permissions for details Learn how to manage team creation team deletion team membership and team access to workspaces projects and organizations page title Manage teams","answers":"---\npage_title: Manage teams\ndescription: |-\n Learn how to manage team creation, team deletion, team membership, and team access to workspaces, projects, and organizations. \n---\n\n# Manage teams\n\nYou can grant team management abilities to members of teams with either the manage teams or the manage organization access permission. Refer to [Team Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#team-permissions) for details.\n\n[Organization owners](\/terraform\/cloud-docs\/users-teams-organizations\/teams#the-owners-team) can also create teams, assign team permissions, or view the full list of teams. Other users can view any teams marked as visible within the organization, plus any secret teams they are members of. Refer to [Team Visibility](#team-visibility) for details.\n\nTo manage teams, perform the following steps:\n\n1. Click **Settings** and then click **Teams**. The **Team Management** page appears, containing a list of all teams within the organization.\n1. Click a team to go to its settings page, which lists the team's settings and current members. Members that have [two-factor authentication](\/terraform\/cloud-docs\/users-teams-organizations\/2fa) enabled have a **2FA** badge.\n\nYou can manage a team on its settings page by adding or removing members, changing its visibility, and controlling access to workspaces, projects, and the organization.\n\n## Create teams\n\nTo create a new team, perform the following steps:\n\n1. Click **Settings** and then click **Teams**. 
The **Team Management** page appears, containing a list of all teams within the organization.\n1. Enter a unique team **Name** and click **Create Team**. Team names can include numbers, letters, underscores (`_`), and hyphens (`-`).\n\nThe new team's settings page appears, where you can add new members and grant permissions.\n\n## Delete teams\n\n~> **Important:** Team deletion is permanent, and you cannot undo it.\n\nTo delete a team, perform the following steps:\n\n1. Click **Settings**, then click **Teams**. The **Team Management** page appears, containing a list of all teams within the organization.\n1. Click the team you want to delete to go to its settings page.\n1. Click **Delete [team name]** at the bottom of the page. The **Deleting team \"[team name]\"** box appears.\n1. Click **Yes, delete team** to permanently delete the team and all of its data from HCP Terraform.\n\n## Manage team membership\n\nTeam structure often resembles your company's organizational structure.\n\n### Add users\n\nIf the user is not yet in the organization, [invite them to join the organization](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#users) and include a list of teams they should belong to in the invitation. Once the user accepts the invitation, HCP Terraform automatically adds them to those teams.\n\nTo add a user that is already in the organization:\n\n1. Click **Settings** and then click **Teams**. The **Team Management** page appears, containing a list of all teams within the organization.\n1. Click the team to go to its settings page.\n1. Choose a user under **Add a New Team Member**. Use the text field to filter the list by username or email.\n1. Click the user to add them to the team. HCP Terraform now displays the user under **Members**.\n\n### Remove users\n\nTo remove a user from a team:\n\n1. Click **Settings** and then click **Teams**. The **Team Management** page appears, containing a list of all teams within the organization.\n1. 
Click the team to go to its settings page.\n1. Click **...** next to the user's name and choose **Remove from team** from the menu. HCP Terraform removes the user from the list of team members.\n\n## Team visibility\n\nThe settings under **Visibility** allow you to control who can see a team within the organization. To edit a team's visibility:\n\n1. Click **Settings** and then click **Teams**. The **Team Management** page appears, containing a list of all teams within the organization.\n1. Click the team to go to its settings page.\n1. Enable one of the following settings:\n   - **Visible:** Every user in the organization can see the team and its membership. Non-members have read-only access.\n   - **Secret:** Only team members and organization owners can view the team and its membership. This is the default setting.\n\nWe recommend making the majority of teams visible to simplify workspace administration. Secret teams should only have\n[organization-level permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-permissions) since workspace admins cannot manage permissions for teams they cannot view.\n\n## Manage workspace access\n\nYou can grant teams various permissions on workspaces. Refer to [Workspace Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#workspace-permissions) for details.\n\nHCP Terraform uses the most permissive permission level from your teams to determine what actions you can take on a particular resource. For example, if you belong to a team that only has permission to read runs for a workspace and another team with admin access to that workspace, HCP Terraform grants you admin access.\n\nHCP Terraform grants the most permissive permissions regardless of whether an organization, project, team, or workspace set those permissions. 
For example, if a team has permission to read runs for a given workspace and has permission to manage that workspace through the organization, then members of that team can manage that workspace. Refer to [organization permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-permissions) and [project permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#project-permissions) for additional information.\n\nAs another example, if a team has organization-level permission to read runs for all workspaces and admin access to a specific workspace, HCP Terraform grants the more permissive admin access to that workspace.\n\nTo manage team permissions on a workspace:\n\n1. Go to the workspace and click **Settings > Team Access**. The **Team Access** page appears.\n1. Click **Add team and permissions** to select a team and assign a pre-built or custom permission set.\n\n## Manage project access\n\nYou can grant teams permissions to manage a project and the workspaces that belong to it. Refer to [Project Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#project-permissions) for details.\n\n## Manage organization access\n\nOrganization owners can grant teams permissions to manage policies, projects and workspaces, team and organization membership, VCS settings, private registry providers and modules, and policy overrides across an organization. 
Refer to [Organization Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-permissions) for details.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers","site":"terraform","answers_cleaned":"    page title  Manage teams description      Learn how to manage team creation  team deletion  team membership  and team access to workspaces  projects  and organizations          Manage teams  You can grant team management abilities to members of teams with either one of the manage teams or manage organization access permissions  Refer to  Team Permissions   terraform cloud docs users teams organizations permissions team permissions  for details    Organization owners   terraform cloud docs users teams organizations teams the owners team  can also create teams  assign team permissions  or view the full list of teams  Other users can view any teams marked as visible within the organization  plus any secret teams they are members of  Refer to  Team Visibility   team visibility  for details   To manage teams  perform the following steps   1  Click   Settings   and then click   Teams    The   Team Management   page appears  containing a list of all teams within the organization  1  Click a team to go to its settings page  which lists the team s settings and current members  Members that have  two factor authentication   terraform cloud docs users teams organizations 2fa  enabled have a   2FA   badge   You can manage a team on its settings page by adding or removing members  changing its visibility  and controlling access to workspaces  projects  and the organization      Create teams  To create a new team  perform the following steps   1  Click   Settings   and then click   Teams    The   Team Management   page appears  containing a list of all teams within the organization  1  Enter a unique team   Name   and click   Create Team    Team names can include numbers  letters  underscores        and hyphens         The new 
team s settings page appears  where you can add new members and grant permissions      Delete teams       Important    Team deletion is permanent  and you cannot undo it   To delete a team  perform the following steps   1  Click   Settings    then click   Teams    The   Team Management   page appears  containing a list of all teams within the organization  1  Click the team you want to delete to go to its settings page  1  Click   Delete  team name    at the bottom of the page  The   Deleting team   team name     box appears  1  Click   Yes  delete team   to permanently delete the team and all of its data from HCP Terraform      Manage team membership  Team structure often resembles your company s organizational structure       Add users  If the user is not yet in the organization   invite them to join the organization   terraform cloud docs users teams organizations organizations users  and include a list of teams they should belong to in the invitation  Once the user accepts the invitation  HCP Terraform automatically adds them to those teams   To add a user that is already in the organization   1  Click   Settings   and then click   Teams    The   Team Management   page appears  containing a list of all teams within the organization  1  Click the team to go to its settings page  1  Choose a user under   Add a New Team Member    Use the text field to filter the list by username or email  1  Click the user to add them to the team  HCP Terraform now displays the user under   Members         Remove users  To remove a user from a team   1  Click   Settings   and then click   Teams    The   Team Management   page appears  containing a list of all teams within the organization  1  Click the team to go to its settings page  1  Click         next to the user s name and choose   Remove from team   from the menu  HCP Terraform removes the user from the list of team members       Team visibility  The settings under   Visibility   allow you to  control who can see a team 
within the organization  To edit a team s visibility   1  Click   Settings   and then click   Teams    The   Team Management   page appears  containing a list of all teams within the organization  1  Click the team to go to its settings page  1  Enable one of the following settings         Visible    Every user in the organization can see the team and its membership  Non members have read only access         Secret    The default setting is that only team members and organization owners can view a team and its membership   We recommend making the majority of teams visible to simplify workspace administration  Secret teams should only have  organization level permissions   terraform cloud docs users teams organizations permissions organization permissions  since workspace admins cannot manage permissions for teams they cannot view      Manage workspace access  You can grant teams various permissions on workspaces  Refer to  Workspace Permissions   terraform cloud docs users teams organizations permissions workspace permissions  for details   HCP Terraform uses the most permissive permission level from your teams to determine what actions you can take on a particular resource  For example  if you belong to a team that only has permission to read runs for a workspace and another team with admin access to that workspace  HCP Terraform grants you admin access   HCP Terraform grants the most permissive permissions regardless of whether an organization  project  team  or workspace set those permissions  For example  if a team has permission to read runs for a given workspace and has permission to manage that workspace through the organization  then members of that team can manage that workspace  Refer to  organization permissions   terraform cloud docs users teams organizations permissions organization permissions  and  project permissions   terraform cloud docs users teams organizations permissions project permissions  for additional information    Another example is 
when a team has permission at the organization level to read runs for all workspaces and admin access to a specific workspace  HCP Terraform grants the more permissive admin permissions to that workspace   To manage team permissions on a workspace   1  Go to the workspace and click   Settings   Team Access    The   Team Access   page appears  1  Click   Add team and permissions   to select a team and assign a pre built or custom permission set      Manage project access  You can grant teams permissions to manage a project and the workspaces that belong to it  Refer to  Project Permissions   terraform cloud docs users teams organizations permissions project permissions  for details      Manage organization access  Organization owners can grant teams permissions to manage policies  projects and workspaces  team and organization membership  VCS settings  private registry providers and modules  and policy overrides across an organization  Refer to  Organization Permissions   terraform cloud docs users teams organizations permissions organization permissions  for details    permissions citation    intentionally unused   keep for maintainers"}
{"questions":"terraform Part 3 3 How to Move from Infrastructure as Code to Collaborative Infrastructure as Code Move to collaborative infrastructure as code workflows Learn about HCP Terraform s run environment create workspaces plan and create teams assign permissions and restrict non Terraform access page title Terraform Recommended Practices tfc only true Part 3 3 From Infrastructure as Code to Collaborative Infrastructure as Code","answers":"---\npage_title: >-\n  Part 3.3: From Infrastructure as Code to Collaborative Infrastructure as Code\n  - Terraform Recommended Practices\ntfc_only: true\ndescription: >-\n  Move to collaborative infrastructure-as-code workflows. Learn about HCP Terraform's run environment, create workspaces, plan and create teams, assign permissions, and restrict non-Terraform access.\n---\n\n# Part 3.3: How to Move from Infrastructure as Code to Collaborative Infrastructure as Code\n\nUsing version-controlled Terraform configurations to manage key infrastructure eliminates a great deal of technical complexity and inconsistency. Now that you have the basics under control, you\u2019re ready to focus on other problems.\n\nYour next goals are to:\n\n- Adopt consistent workflows for Terraform usage across teams\n- Expand the benefits of Terraform beyond the core of engineers who directly edit Terraform code.\n- Manage infrastructure provisioning permissions for users and teams.\n\n[HCP Terraform](https:\/\/www.hashicorp.com\/products\/terraform\/) is the product we\u2019ve built to help you address these next-level problems. The following section describes how to start using it most effectively.\n\nNote: If you aren\u2019t already using mature Terraform code to manage a significant portion of your infrastructure, make sure you follow the steps in the previous section first.\n\n## 1. 
Install or Sign Up for HCP Terraform\n\nYou have two options for using HCP Terraform: the SaaS hosted by HashiCorp, or a private instance you manage with Terraform Enterprise. If you have chosen the SaaS version then you can skip this step; otherwise visit the [Terraform Enterprise documentation](\/terraform\/enterprise) to get started.\n\n## 2. Learn HCP Terraform's Run Environment\n\nGet familiar with how Terraform runs work in HCP Terraform. With Terraform Community Edition, you generally use external VCS tools to get code onto the filesystem, then execute runs from the command line or from a general purpose CI system.\n\nHCP Terraform does things differently: a workspace is associated directly with a VCS repo, and you use HCP Terraform\u2019s UI or API to start and monitor runs. To get familiar with this operating model:\n\n- Read the documentation on how to [perform and configure Terraform runs](\/terraform\/cloud-docs\/run\/remote-operations) in HCP Terraform.\n- Create a proof-of-concept workspace, associate it with Terraform code in a VCS repo, set variables as needed, and use HCP Terraform to perform some Terraform runs with that code.\n\n## 3. Design Your Organization\u2019s Workspace Structure\n\nIn HCP Terraform, each Terraform configuration should manage a specific infrastructure component, and each environment of a given configuration should be a separate workspace \u2014 in other words, Terraform configurations \\* environments = workspaces. A workspace name should be something like \u201cnetworking-dev,\u201d so you can tell at a glance which infrastructure and environment it manages.\n\nThe definition of an \u201cinfrastructure component\u201d depends on your organization\u2019s structure. 
A given workspace might manage an application, a service, or a group of related services; it might provision infrastructure used by a single engineering team, or it might provision shared, foundational infrastructure used by the entire business.\n\nYou should structure your workspaces to match the divisions of responsibility in your infrastructure. You will probably end up with a mixture: some components, like networking, are foundational infrastructure controlled by central IT staff; others are application-specific and should be controlled by the engineering teams that rely on them.\n\nAlso, keep in mind:\n\n- Some workspaces publish output data to be used by other workspaces.\n- The workspaces that make up a configuration\u2019s environments (app1-dev, app1-stage, app1-prod) should be run in order, to ensure code is properly verified.\n\nThe first relationship, a relationship between workspaces for different components but the same environment, creates a graph of dependencies between workspaces, and you should stay aware of it. The second relationship, a relationship between workspaces for the same component but different environments, creates a pipeline between workspaces. HCP Terraform doesn\u2019t currently have the ability to act on these dependencies, but features like cascading updates and promotion are coming soon, and you\u2019ll be able to use them more easily if you already understand how your workspaces relate.\n\n## 4. Create Workspaces\n\nCreate workspaces in HCP Terraform, and map VCS repositories to them. Each workspace reads its Terraform code from your version control system. You\u2019ll need to assign a repository and branch to each workspace.\n\nWe recommend using the same repository and branch for every environment of a given app or service \u2014 write your Terraform code such that you can differentiate the environments via variables, and set those variables appropriately per workspace. 
This might not be practical for your existing code yet, in which case you can use different branches per workspace and handle promotion through your merge strategy, but we believe a model of one canonical branch works best.\n\n![Changes in VCS branches can be merged to master and then promoted between workspaces representing a staging environment, a UAT environment, and finally a production environment.](\/img\/docs\/image1.png)\n\n## 5. Plan and Create Teams\n\nHCP Terraform manages workspace access with teams, which are groups of user accounts.\n\nYour HCP Terraform teams should match your understanding of who's responsible for which infrastructure. That isn't always an exact match for your org chart, so make sure you spend some time thinking about this and talking to people across the organization. Keep in mind:\n\n- Some teams need to administer many workspaces, and others only need permissions on one or two.\n- A team might not have the same permissions on every workspace they use; for example, application developers might have read\/write access to their app\u2019s dev and stage environments, but read-only access to prod.\n\nManaging an accurate and complete map of how responsibilities are delegated is one of the most difficult parts of practicing collaborative infrastructure as code.\n\nWhen managing team membership, you have two options:\n\n- Manage user accounts with [SAML single sign-on](\/terraform\/enterprise\/saml\/configuration). SAML support is exclusive to Terraform Enterprise, and lets users log into HCP Terraform via your organization's existing identity provider. If your organization is at a scale where you use a SAML-compatible identity provider, we recommend this option.\n\n  If your identity provider already has information about your colleagues' teams or groups, you can [manage team membership via your identity provider](\/terraform\/enterprise\/saml\/team-membership). 
Otherwise, you can add users to teams with the UI or with [the team membership API](\/terraform\/cloud-docs\/api-docs\/team-members).\n\n- Manage user accounts in HCP Terraform. Your colleagues must create their own HCP Terraform user accounts, and you can add them to your organization by adding their username to at least one team. You can manage team membership with the UI or with [the team membership API](\/terraform\/cloud-docs\/api-docs\/team-members).\n\n## 6. Assign Permissions\n\nAssign workspace ownership and permissions to teams.\n\nHCP Terraform supports granular team permissions for each workspace. For complete information about the available permissions, see [the HCP Terraform permissions documentation.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nMost workspaces will give access to multiple teams with different permissions.\n\n| Workspace       | Team Permissions                                                                                                                                   |\n| --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |\n| app1-dev        | Team-eng-app1: Apply runs, read and write variables <br \/> Team-owners-app1: Admin <br \/> Team-central-IT: Admin                                   |\n| app1-prod       | Team-eng-app1: Queue plans, read variables <br \/> Team-owners-app1: Apply runs, read and write variables <br \/> Team-central-IT: Admin             |\n| networking-dev  | Team-eng-networking: Apply runs, read and write variables <br \/> Team-owners-networking: Admin <br \/> Team-central-IT: Admin                       |\n| networking-prod | Team-eng-networking: Queue plans, read variables <br \/> Team-owners-networking: Apply runs, read and write variables <br \/> Team-central-IT: Admin |\n\n## 7. 
Restrict Non-Terraform Access\n\nRestrict access to cloud provider UIs and APIs. Since HCP Terraform is now your organization\u2019s primary interface for infrastructure provisioning, you should restrict access to any alternate interface that bypasses HCP Terraform. For almost all users, it should be impossible to manually modify infrastructure without using the organization\u2019s agreed-upon Terraform workflow.\n\nAs long as no one can bypass Terraform, your code review processes and your HCP Terraform workspace permissions are the definitive record of who can modify which infrastructure. This makes everything about your infrastructure more knowable and controllable. HCP Terraform is one workflow to learn, one workflow to secure, and one workflow to audit for provisioning any infrastructure in your organization.\n\n## Next\n\nAt this point, you have successfully adopted a collaborative infrastructure as code workflow with HCP Terraform. You can provision infrastructure across multiple providers using a single workflow, and you have a shared interface that helps manage your organization\u2019s standards around access control and code promotion.\n\nNext, you can make additional improvements to your workflows and practices. 
Continue on to [Part 3.4: Advanced Improvements to Collaborative Infrastructure as Code](\/terraform\/cloud-docs\/recommended-practices\/part3.4).","site":"terraform","answers_cleaned":"    page title       Part 3 3  From Infrastructure as Code to Collaborative Infrastructure as Code     Terraform Recommended Practices tfc only  true description       Move to collaborative infrastructure as code workflows  Learn about HCP Terraform s run environment  create workspaces  plan and create teams  assign permissions  and restrict non Terraform access         Part 3 3  How to Move from Infrastructure as Code to Collaborative Infrastructure as Code  Using version controlled Terraform configurations to manage key infrastructure eliminates a great deal of technical complexity and inconsistency  Now that you have the basics under control  you re ready to focus on other problems   Your next goals are to     Adopt consistent workflows for Terraform usage across teams   Expand the benefits of Terraform beyond the core of engineers who directly edit Terraform code    Manage infrastructure provisioning permissions for users and teams    HCP Terraform  https   www hashicorp com products terraform   is the product we ve built to help you address these next level problems  The following section describes how to start using it most effectively   Note  If you aren t already using mature Terraform code to manage a significant portion of your infrastructure  make sure you follow the steps in the previous section first      1  Install or Sign Up for HCP Terraform  You have two options for using HCP Terraform  the SaaS hosted by HashiCorp  or a private instance you manage with Terraform Enterprise  If you have chosen the SaaS version then you can skip this step  otherwise visit the  Terraform Enterprise documentation   terraform enterprise  to get started      2  Learn HCP Terraform s Run Environment  Get familiar with how Terraform runs work in HCP Terraform  With Terraform Community 
Edition  you generally use external VCS tools to get code onto the filesystem  then execute runs from the command line or from a general purpose CI system   HCP Terraform does things differently  a workspace is associated directly with a VCS repo  and you use HCP Terraform s UI or API to start and monitor runs  To get familiar with this operating model     Read the documentation on how to  perform and configure Terraform runs   terraform cloud docs run remote operations  in HCP Terraform    Create a proof of concept workspace  associate it with Terraform code in a VCS repo  set variables as needed  and use HCP Terraform to perform some Terraform runs with that code      3  Design Your Organization s Workspace Structure  In HCP Terraform  each Terraform configuration should manage a specific infrastructure component  and each environment of a given configuration should be a separate workspace   in other words  Terraform configurations    environments   workspaces  A workspace name should be something like  networking dev   so you can tell at a glance which infrastructure and environment it manages   The definition of an  infrastructure component  depends on your organization s structure  A given workspace might manage an application  a service  or a group of related services  it might provision infrastructure used by a single engineering team  or it might provision shared  foundational infrastructure used by the entire business   You should structure your workspaces to match the divisions of responsibility in your infrastructure  You will probably end up with a mixture  some components  like networking  are foundational infrastructure controlled by central IT staff  others are application specific and should be controlled by the engineering teams that rely on them   Also  keep in mind     Some workspaces publish output data to be used by other workspaces    The workspaces that make up a configuration s environments  app1 dev  app1 stage  app1 prod  should be run in 
order  to ensure code is properly verified   The first relationship  a relationship between workspaces for different components but the same environment  creates a graph of dependencies between workspaces  and you should stay aware of it  The second relationship  a relationship between workspaces for the same component but different environments  creates a pipeline between workspaces  HCP Terraform doesn t currently have the ability to act on these dependencies  but features like cascading updates and promotion are coming soon  and you ll be able to use them more easily if you already understand how your workspaces relate      4  Create Workspaces  Create workspaces in HCP Terraform  and map VCS repositories to them  Each workspace reads its Terraform code from your version control system  You ll need to assign a repository and branch to each workspace   We recommend using the same repository and branch for every environment of a given app or service   write your Terraform code such that you can differentiate the environments via variables  and set those variables appropriately per workspace  This might not be practical for your existing code yet  in which case you can use different branches per workspace and handle promotion through your merge strategy  but we believe a model of one canonical branch works best     Changes in VCS branches can be merged to master and then promoted between workspaces representing a staging environment  a UAT environment  and finally a production environment    img docs image1 png      5  Plan and Create Teams  HCP Terraform manages workspace access with teams  which are groups of user accounts   Your HCP Terraform teams should match your understanding of who s responsible for which infrastructure  That isn t always an exact match for your org chart  so make sure you spend some time thinking about this and talking to people across the organization  Keep in mind     Some teams need to administer many workspaces  and others only need 
permissions on one or two    A team might not have the same permissions on every workspace they use  for example  application developers might have read write access to their app s dev and stage environments  but read only access to prod   Managing an accurate and complete map of how responsibilities are delegated is one of the most difficult parts of practicing collaborative infrastructure as code   When managing team membership  you have two options     Manage user accounts with  SAML single sign on   terraform enterprise saml configuration   SAML support is exclusive to Terraform Enterprise  and lets users log into HCP Terraform via your organization s existing identity provider  If your organization is at a scale where you use a SAML compatible identity provider  we recommend this option     If your identity provider already has information about your colleagues  teams or groups  you can  manage team membership via your identity provider   terraform enterprise saml team membership   Otherwise  you can add users to teams with the UI or with  the team membership API   terraform cloud docs api docs team members      Manage user accounts in HCP Terraform  Your colleagues must create their own HCP Terraform user accounts  and you can add them to your organization by adding their username to at least one team  You can manage team membership with the UI or with  the team membership API   terraform cloud docs api docs team members       6  Assign Permissions  Assign workspace ownership and permissions to teams   HCP Terraform supports granular team permissions for each workspace  For complete information about the available permissions  see  the HCP Terraform permissions documentation    terraform cloud docs users teams organizations permissions    permissions citation    intentionally unused   keep for maintainers  Most workspaces will give access to multiple teams with different permissions     Workspace         Team Permissions                                        
                                                                                                                                                                                                                                                                        app1 dev          Team eng app1  Apply runs  read and write variables  br    Team owners app1  Admin  br    Team central IT  Admin                                       app1 prod         Team eng app1  Queue plans  read variables  br    Team owners app1  Apply runs  read and write variables  br    Team central IT  Admin                 networking dev    Team eng networking  Apply runs  read and write variables  br    Team owners networking  Admin  br    Team central IT  Admin                           networking prod   Team eng networking  Queue plans  read variables  br    Team owners networking  Apply runs  read and write variables  br    Team central IT  Admin       7  Restrict Non Terraform Access  Restrict access to cloud provider UIs and APIs  Since HCP Terraform is now your organization s primary interface for infrastructure provisioning  you should restrict access to any alternate interface that bypasses HCP Terraform  For almost all users  it should be impossible to manually modify infrastructure without using the organization s agreed upon Terraform workflow   As long as no one can bypass Terraform  your code review processes and your HCP Terraform workspace permissions are the definitive record of who can modify which infrastructure  This makes everything about your infrastructure more knowable and controllable  HCP Terraform is one workflow to learn  one workflow to secure  and one workflow to audit for provisioning any infrastructure in your organization      Next  At this point  you have successfully adopted a collaborative infrastructure as code workflow with HCP Terraform  You can provision infrastructure across multiple providers using a single workflow  and you have a shared interface 
that helps manage your organization s standards around access control and code promotion   Next  you can make additional improvements to your workflows and practices  Continue on to  Part 3 4  Advanced Improvements to Collaborative Infrastructure as Code   terraform cloud docs recommended practices part3 4  "}
{"questions":"terraform Learn about our recommended Terraform workflow in this recommended practices guide Part 1 An Overview of Our Recommended Workflow Terraform s purpose is to provide one workflow to provision any infrastructure In this section we ll show you our recommended practices for organizing Terraform usage across a large organization This is the set of practices that we call collaborative infrastructure as code page title Part 1 Overview of Our Recommended Workflow Terraform Recommended Practices tfc only true","answers":"---\npage_title: 'Part 1: Overview of Our Recommended Workflow - Terraform Recommended Practices'\ntfc_only: true\ndescription: >-\n  Learn about our recommended Terraform workflow in this recommended practices guide.\n---\n\n# Part 1: An Overview of Our Recommended Workflow\n\nTerraform's purpose is to provide one workflow to provision any infrastructure. In this section, we'll show you our recommended practices for organizing Terraform usage across a large organization. This is the set of practices that we call \"collaborative infrastructure as code.\"\n\n## Fundamental Challenges in Provisioning\n\nThere are two major challenges everyone faces when trying to improve their provisioning practices: technical complexity and organizational complexity.\n\n1. Technical complexity \u2014 Different infrastructure providers use different interfaces to provision new resources, and the inconsistency between these interfaces imposes extra costs on daily operations. These costs get worse as you add more infrastructure providers and more collaborators.\n\n   Terraform addresses this complexity by separating the provisioning workload. It uses a single core engine to read infrastructure as code configurations and determine the relationships between resources, then uses many [provider plugins](\/terraform\/language\/providers) to create, modify, and destroy resources on the infrastructure providers. These provider plugins can talk to IaaS (e.g. 
AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. GitHub, DNSimple, Cloudflare).\n\n   In other words, Terraform uses a model of workflow-level abstraction, rather than resource-level abstraction. It lets you use a single workflow for managing infrastructure, but acknowledges the uniqueness of each provider instead of imposing generic concepts on non-equivalent resources.\n\n1. Organizational complexity \u2014 As infrastructure scales, it requires more teams to maintain it. For effective collaboration, it's important to delegate ownership of infrastructure across these teams and empower them to work in parallel without conflict. Terraform and HCP Terraform can help delegate infrastructure in the same way components of a large application are delegated.\n\n   To delegate a large application, companies often split it into small, focused microservice components that are owned by specific teams. Each microservice provides an API, and as long as those APIs don't change, microservice teams can make changes in parallel despite relying on each other's functionality.\n\n   Similarly, infrastructure code can be split into smaller Terraform configurations, which have limited scope and are owned by specific teams. These independent configurations use [output variables](\/terraform\/language\/values\/outputs) to publish information and [remote state resources](\/terraform\/language\/state\/remote-state-data) to access output data from other workspaces. Just like microservices communicate and connect via APIs, Terraform workspaces connect via remote state.\n\n   Once you have loosely-coupled Terraform configurations, you can delegate their development and maintenance to different teams. To do this effectively, you need to control access to that code. 
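\n\n   As a rough sketch of that output\/remote-state connection (the workspace, resource, and organization names here are illustrative, not prescribed by this guide):\n\n   ```hcl\n   # In the \"networking\" configuration: publish a value for other workspaces.\n   output \"vpc_id\" {\n     value = aws_vpc.main.id\n   }\n\n   # In a consuming configuration: read the networking workspace's outputs.\n   data \"terraform_remote_state\" \"networking\" {\n     backend = \"remote\"\n\n     config = {\n       organization = \"example-org\"\n       workspaces = {\n         name = \"networking-prod\"\n       }\n     }\n   }\n\n   resource \"aws_subnet\" \"app\" {\n     vpc_id     = data.terraform_remote_state.networking.outputs.vpc_id\n     cidr_block = \"10.0.1.0\/24\"\n   }\n   ```\n\n   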
Version control systems can regulate who can commit code, but since Terraform affects real infrastructure, you also need to regulate who can run the code.\n\n   This is how HCP Terraform solves the organizational complexity of provisioning: by providing a centralized run environment for Terraform that supports and enforces your organization's access control decisions across all workspaces. This helps you delegate infrastructure ownership to enable parallel development.\n\n## Personas, Responsibilities, and Desired User Experiences\n\nThere are four main personas for managing infrastructure at scale. These roles have different responsibilities and needs, and HCP Terraform supports them with different tools and permissions.\n\n### Central IT\n\nThis team is responsible for defining common infrastructure practices, enforcing policy across teams, and maintaining shared services.\n\nCentral IT users want a single dashboard to view the status and compliance of all infrastructure, so they can quickly fix misconfigurations or malfunctions. Since HCP Terraform is tightly integrated with Terraform's run data and is designed around Terraform's concepts of workspaces and runs, it offers a more integrated workflow experience than a general-purpose CI system.\n\n### Organization Architect\n\nThis team defines how global infrastructure is divided and delegated to the teams within the business unit. This team also enables connectivity between workspaces by defining the APIs each workspace must expose, and sets organization-wide variables and policies.\n\nOrganization Architects want a single dashboard to view the status of all workspaces and the graph of connectivity between them.\n\n### Workspace Owner\n\nThis individual owns a specific set of workspaces, which build a given Terraform configuration across several environments. They are responsible for the health of those workspaces, managing the full change lifecycle through dev, UAT, staging, and production. 
They are the main approver of changes to production within their domain.\n\nWorkspace Owners want:\n\n* A single dashboard to view the status of all workspaces that use their infrastructure code.\n* A streamlined way to promote changes between environments.\n* An interface to set variables used by a Terraform configuration across environments.\n\n### Workspace Contributor\n\nContributors submit changes to workspaces by making updates to the infrastructure as code configuration. They usually do not have approval to make changes to production, but can make changes in dev, UAT, and staging.\n\nWorkspace Contributors want a simple workflow to submit changes to a workspace and promote changes between workspaces. They can edit a subset of workspace variables and their own personal variables.\n\nWorkspace contributors are often already familiar with Terraform's operating model and command line interface, and can usually adapt quickly to HCP Terraform's web interface.\n\n## The Recommended Terraform Workspace Structure\n\n### About Workspaces\n\nHCP Terraform's main unit of organization is a workspace. A workspace is a collection of everything Terraform needs to run: a Terraform configuration (usually from a VCS repo), values for that configuration's variables, and state data to keep track of operations between runs.\n\nIn Terraform Community edition, a workspace is an independent state file on the local disk. In HCP Terraform, they're persistent shared resources; you can assign them their own access controls, monitor their run states, and more.\n\n### One Workspace Per Environment Per Terraform Configuration\n\nWorkspaces are HCP Terraform's primary tool for delegating control, which means their structure should match your organizational permissions structure. The best approach is to use one workspace for each environment of a given infrastructure component. 
Or in other words, Terraform configurations \\* environments = workspaces.\n\nThis is different from how some other tools view environments; notably, you shouldn't use a single Terraform workspace to manage everything that makes up your production or staging environment. Instead, make smaller workspaces that are easy to delegate. This also means not every configuration has to use the exact same environments; if a UAT environment doesn't make sense for your security infrastructure, you aren't forced to use one.\n\nName your workspaces with both their component and their environment. For example, if you have a Terraform configuration for managing an internal billing app and another for your networking infrastructure, you could name the workspaces as follows:\n\n* billing-app-dev\n* billing-app-stage\n* billing-app-prod\n* networking-dev\n* networking-stage\n* networking-prod\n\n### Delegating Workspaces\n\nSince each workspace is one environment of one infrastructure component, you can use per-workspace access controls to delegate ownership of components and regulate code promotion across environments. For example:\n\n* Teams that help manage a component can start Terraform runs and edit variables in dev or staging.\n* The owners or senior contributors of a component can start Terraform runs in production, after reviewing other contributors' work.\n* Central IT and organization architects can administer permissions on all workspaces, to ensure everyone has what they need to work.\n* Teams that have no role managing a given component don't have access to its workspaces.\n\nTo use HCP Terraform effectively, you must make sure the division of workspaces and permissions matches your organization's division of responsibilities. If it's difficult to separate your workspaces effectively, it might reveal an area of your infrastructure where responsibility is muddled and unclear. 
If so, this is a chance to disentangle the code and enforce better boundaries of ownership.\n\n## Next\n\nNow that you're familiar with the outlines of the HCP Terraform workflow, it's time to assess your organization's provisioning practices. Continue on to [Part 2: Evaluating Your Current Provisioning Practices](\/terraform\/cloud-docs\/recommended-practices\/part2).","site":"terraform","answers_cleaned":"    page title   Part 1  Overview of Our Recommended Workflow   Terraform Recommended Practices  tfc only  true description       Learn about our recommended Terraform workflow in this recommended practices guide         Part 1  An Overview of Our Recommended Workflow  Terraform s purpose is to provide one workflow to provision any infrastructure  In this section  we ll show you our recommended practices for organizing Terraform usage across a large organization  This is the set of practices that we call  collaborative infrastructure as code       Fundamental Challenges in Provisioning  There are two major challenges everyone faces when trying to improve their provisioning practices  technical complexity and organizational complexity   1  Technical complexity   Different infrastructure providers use different interfaces to provision new resources  and the inconsistency between these interfaces imposes extra costs on daily operations  These costs get worse as you add more infrastructure providers and more collaborators      Terraform addresses this complexity by separating the provisioning workload  It uses a single core engine to read infrastructure as code configurations and determine the relationships between resources  then uses many  provider plugins   terraform language providers  to create  modify  and destroy resources on the infrastructure providers  These provider plugins can talk to IaaS  e g  AWS  GCP  Microsoft Azure  OpenStack   PaaS  e g  Heroku   or SaaS services  e g  GitHub  DNSimple  Cloudflare       In other words  Terraform uses a model of 
workflow level abstraction  rather than resource level abstraction  It lets you use a single workflow for managing  infrastructure  but acknowledges the uniqueness of each provider instead of imposing generic concepts on non equivalent resources   1  Organizational complexity   As infrastructure scales  it requires more teams to maintain it  For effective collaboration  it s important to delegate ownership of infrastructure across these teams and empower them to work in parallel without conflict  Terraform and HCP Terraform can help delegate infrastructure in the same way components of a large application are delegated      To delegate a large application  companies often split it into small  focused microservice components that are owned by specific teams  Each microservice provides an API  and as long as those APIs don t change  microservice teams can make changes in parallel despite relying on each others  functionality      Similarly  infrastructure code can be split into smaller Terraform configurations  which have limited scope and are owned by specific teams  These independent configurations use  output variables   terraform language values outputs  to publish information and  remote state resources   terraform language state remote state data  to access output data from other workspaces  Just like microservices communicate and connect via APIs  Terraform workspaces connect via remote state      Once you have loosely coupled Terraform configurations  you can delegate their development and maintenance to different teams  To do this effectively  you need to control access to that code  Version control systems can regulate who can commit code  but since Terraform affects real infrastructure  you also need to regulate who can run the code      This is how HCP Terraform solves the organizational complexity of provisioning  by providing a centralized run environment for Terraform that supports and enforces your organization s access control decisions across all 
workspaces  This helps you delegate infrastructure ownership to enable parallel development      Personas  Responsibilities  and Desired User Experiences  There are four main personas for managing infrastructure at scale  These roles have different responsibilities and needs  and HCP Terraform supports them with different tools and permissions       Central IT  This team is responsible for defining common infrastructure practices  enforcing policy across teams  and maintaining shared services   Central IT users want a single dashboard to view the status and compliance of all infrastructure  so they can quickly fix misconfigurations or malfunctions  Since HCP Terraform is tightly integrated with Terraform s run data and is designed around Terraform s concepts of workspaces and runs  it offers a more integrated workflow experience than a general purpose CI system       Organization Architect  This team defines how global infrastructure is divided and delegated to the teams within the business unit  This team also enables connectivity between workspaces by defining the APIs each workspace must expose  and sets organization wide variables and policies   Organization Architects want a single dashboard to view the status of all workspaces and the graph of connectivity between them       Workspace Owner  This individual owns a specific set of workspaces  which build a given Terraform configuration across several environments  They are responsible for the health of those workspaces  managing the full change lifecycle through dev  UAT  staging  and production  They are the main approver of changes to production within their domain   Workspace Owners want     A single dashboard to view the status of all workspaces that use their infrastructure code    A streamlined way to promote changes between environments    An interface to set variables used by a Terraform configuration across environments       Workspace Contributor  Contributors submit changes to workspaces by making 
updates to the infrastructure as code configuration  They usually do not have approval to make changes to production  but can make changes in dev  UAT  and staging   Workspace Contributors want a simple workflow to submit changes to a workspace and promote changes between workspaces  They can edit a subset of workspace variables and their own personal variables   Workspace contributors are often already familiar with Terraform s operating model and command line interface  and can usually adapt quickly to HCP Terraform s web interface      The Recommended Terraform Workspace Structure      About Workspaces  HCP Terraform s main unit of organization is a workspace  A workspace is a collection of everything Terraform needs to run  a Terraform configuration  usually from a VCS repo   values for that configuration s variables  and state data to keep track of operations between runs   In Terraform Community edition  a workspace is an independent state file on the local disk  In HCP Terraform  they re persistent shared resources  you can assign them their own access controls  monitor their run states  and more       One Workspace Per Environment Per Terraform Configuration  Workspaces are HCP Terraform s primary tool for delegating control  which means their structure should match your organizational permissions structure  The best approach is to use one workspace for each environment of a given infrastructure component  Or in other words  Terraform configurations    environments   workspaces   This is different from how some other tools view environments  notably  you shouldn t use a single Terraform workspace to manage everything that makes up your production or staging environment  Instead  make smaller workspaces that are easy to delegate  This also means not every configuration has to use the exact same environments  if a UAT environment doesn t make sense for your security infrastructure  you aren t forced to use one   Name your workspaces with both their component 
and their environment  For example  if you have a Terraform configuration for managing an internal billing app and another for your networking infrastructure  you could name the workspaces as follows     billing app dev   billing app stage   billing app prod   networking dev   networking stage   networking prod      Delegating Workspaces  Since each workspace is one environment of one infrastructure component  you can use per workspace access controls to delegate ownership of components and regulate code promotion across environments  For example     Teams that help manage a component can start Terraform runs and edit variables in dev or staging    The owners or senior contributors of a component can start Terraform runs in production  after reviewing other contributors  work    Central IT and organization architects can administer permissions on all workspaces  to ensure everyone has what they need to work    Teams that have no role managing a given component don t have access to its workspaces   To use HCP Terraform effectively  you must make sure the division of workspaces and permissions matches your organization s division of responsibilities  If it s difficult to separate your workspaces effectively  it might reveal an area of your infrastructure where responsibility is muddled and unclear  If so  this is a chance to disentangle the code and enforce better boundaries of ownership      Next  Now that you re familiar with the outlines of the HCP Terraform workflow  it s time to assess your organization s provisioning practices  Continue on to  Part 2  Evaluating Your Current Provisioning Practices   terraform cloud docs recommended practices part2  "}
{"questions":"terraform Part 2 Evaluating Your Current Provisioning Practices Practices page title Use a quiz to evaluate your current Terraform provisioning practices Learn whether your practices are mostly manual semi automated infrastructure as code or collaborative infrastructure as code tfc only true Part 2 Evaluating Your Current Provisioning Practices Terraform Recommended","answers":"---\npage_title: >-\n  Part 2: Evaluating Your Current Provisioning Practices - Terraform Recommended\n  Practices\ntfc_only: true\ndescription: >-\n  Use a quiz to evaluate your current Terraform provisioning practices. Learn whether your practices are mostly manual, semi-automated, infrastructure-as-code, or collaborative infrastructure-as-code.\n---\n\n# Part 2: Evaluating Your Current Provisioning Practices\n\nHCP Terraform depends on several foundational IT practices. Before you can implement HCP Terraform's collaborative infrastructure as code workflows, you need to understand which of those practices you're already using, and which ones you still need to implement.\n\nWe've written the section below in the form of a quiz or interview, with multiple-choice answers that represent the range of operational maturity levels we've seen across many organizations. You should read it with a notepad handy, and take note of any questions where your organization can improve its use of automation and collaboration.\n\nThis quiz doesn't have a passing or failing score, but it's important to know your organization's answers. Once you know which of your IT practices need the most attention, Section 3 will guide you from your current state to our recommended practices in the most direct way.\n\n## Four Levels of Operational Maturity\n\nEach question has several answers, each of which aligns with a different level of operational maturity. Those levels are as follows:\n\n1. 
**Manual**\n   * Infrastructure is provisioned through a UI or CLI.\n   * Configuration changes do not leave a traceable history, and aren't always visible.\n   * Limited or no naming standards in place.\n\n1. **Semi-automated**\n   * Infrastructure is provisioned through a combination of UI\/CLI, infrastructure as code, and scripts or configuration management.\n   * Traceability is limited, since different record-keeping methods are used across the organization.\n   * Rollbacks are hard to achieve due to differing record-keeping methods.\n\n1. **Infrastructure as code**\n   * Infrastructure is provisioned using Terraform OSS.\n   * Provisioning and deployment processes are automated.\n   * Infrastructure configuration is consistent, with all necessary details fully documented (nothing siloed in a sysadmin's head).\n   * Source files are stored in version control to record editing history, and, if necessary, roll back to older versions.\n   * Some Terraform code is split out into modules, to promote consistent reuse of your organization's more common architectural patterns.\n\n1. **Collaborative infrastructure as code**\n   * Users across the organization can safely provision infrastructure with Terraform, without conflicts and with clear understanding of their access permissions.\n   * Expert users within an organization can produce standardized infrastructure templates, and beginner users can consume those to follow infrastructure best practices for the organization.\n   * Per-workspace access control helps committers and approvers on workspaces protect production environments.\n   * Functional groups that don't directly write Terraform code have visibility into infrastructure status and changes through HCP Terraform's UI.\n\nBy the end of this section, you should have a clear understanding of which operational maturity stage you are in. 
Section 3 will explain the recommended steps to move from your current stage to the next one.\n\nAnswering these questions will help you understand your organization's method for provisioning infrastructure, its change workflow, its operation model, and its security model.\n\nOnce you understand your current practices, you can identify the remaining steps for implementing HCP Terraform.\n\n## Your Current Configuration and Provisioning Practices\n\nHow does your organization configure and provision infrastructure today? Automated and consistent practices help make your infrastructure more knowable and reliable, and reduce the amount of time spent on troubleshooting.\n\nThe following questions will help you evaluate your current level of automation for configuration and provisioning.\n\n### Q1. How do you currently manage your infrastructure?\n\n1. Through a UI or CLI. This might seem like the easiest option for one-off tasks, but for recurring operations it is a big consumer of valuable engineering time. It's also difficult to track and manage changes.\n1. Through reusable command line scripts, or a combination of UI and infrastructure as code. This is faster and more reliable than pure ad-hoc management and makes recurring operations repeatable, but the lack of consistency and versioning makes it difficult to manage over time.\n1. Through an infrastructure as code tool (Terraform, CloudFormation). Infrastructure as code enables scalable, repeatable, and versioned infrastructure. It dramatically increases the productivity of each operator and can enforce consistency across environments when used appropriately.\n1. Through a general-purpose automation framework (e.g. Jenkins + scripts \/ Jenkins + Terraform). This centralizes the management workflow, albeit with a tool that isn't built specifically for provisioning tasks.\n\n### Q2. What topology is in place for your service provider accounts?\n\n1. Flat structure, single account. 
All infrastructure is provisioned within the same account.\n1. Flat structure, multiple accounts. Infrastructure is provisioned using different infrastructure providers, with an account per environment.\n1. Tree hierarchy. This features a master billing account, an audit\/security\/logging account, and project\/environment-specific infrastructure accounts.\n\n### Q3. How do you manage the infrastructure for different environments?\n\n1. Manual. Everything is manual, with no configuration management in place.\n1. Siloed. Each application team has its own way of managing infrastructure \u2014 some manually, some using infrastructure as code or custom scripts.\n1. Infrastructure as code with different code bases per environment. Having different code bases for infrastructure as code configurations can lead to untracked changes from one environment to the other if there is no promotion within environments.\n1. Infrastructure as code with a single code base and differing environment variables. All resources, regardless of environment, are provisioned with the same code, ensuring that changes promote through your deployment tiers in a predictable way.\n\n### Q4. How do teams collaborate and share infrastructure configuration and code?\n\n1. N\/A. Infrastructure as code is not used.\n1. Locally. Infrastructure configuration is hosted locally and shared via email, documents or spreadsheets.\n1. Ticketing system. Code is shared through journal entries in change requests or problem\/incident tickets.\n1. Centralized without version control. Code is stored on a shared filesystem and secured through security groups. Changes are not versioned. Rollbacks are only possible through restores from backups or snapshots.\n1. Configuration stored and collaborated in a version control system (VCS) (Git repositories, etc.). Teams collaborate on infrastructure configurations within a VCS workflow, and can review infrastructure changes before they enter production. 
This is the most mature approach, as it offers the best record-keeping and cross-department\/cross-team visibility.\n\n### Q5. Do you use reusable modules for writing infrastructure as code?\n\n1. Everything is manual. No infrastructure as code currently used.\n1. No modularity. Infrastructure as code is used, but primarily as one-off configurations. Users usually don't share or re-use code.\n1. Teams use modules internally but do not share them across teams.\n1. Modules are shared organization-wide. Similar to shared software libraries, a module for a common infrastructure pattern can be updated once and the entire organization benefits.\n\n## Your Current Change Control Workflow\n\nChange control is a formal process to coordinate and approve changes to a product or system. The goals of a change control process include:\n\n* Minimizing disruption to services.\n* Reducing rollbacks.\n* Reducing the overall cost of changes.\n* Preventing unnecessary changes.\n* Allowing users to make changes without impacting changes made by other users.\n\nThe following questions will help you assess the maturity of your change control workflow.\n\n### Q6. How do you govern access to control changes to infrastructure?\n\n1. Access is not restricted or audited. Everyone in the platform team has the flexibility to create, change, and destroy all infrastructure. This leads to a complex system that is unstable and hard to manage.\n1. Access is not restricted, only audited. This makes it easier to track changes after the fact, but doesn't proactively protect your infrastructure's stability.\n1. Access is restricted based on service provider account level. Members of the team have admin access to different accounts based on the environment they are responsible for.\n1. Access is restricted based on user roles. All access is restricted based on user roles at the infrastructure provider level.\n\n### Q7. What is the process for changing existing infrastructure?\n\n1. 
Manual changes by remotely logging into machines. Repetitive manual tasks are inefficient and prone to human errors.\n1. Runtime configuration management (Puppet, Chef, etc.). Configuration management tools let you make fast, automated changes based on readable and auditable code. However, since they don't produce static artifacts, the outcome of a given configuration version isn't always 100% repeatable, making rollbacks only partially reliable.\n1. Immutable infrastructure (images, containers). Immutable components can be replaced for every deployment (rather than being updated in-place), using static deployment artifacts. If you maintain sharp boundaries between ephemeral layers and state-storing layers, immutable infrastructure can be much easier to test, validate, and roll back.\n\n### Q8. How do you deploy applications?\n\n1. Manually (SSH, WinRM, rsync, robocopy, etc.). Repetitive manual tasks are inefficient and prone to human errors.\n1. With scripts (Fabric, Capistrano, custom, etc.).\n1. With a configuration management tool (Chef, Puppet, Ansible, Salt, etc.), or by passing userdata scripts to CloudFormation Templates or Terraform configuration files.\n1. With a scheduler (Kubernetes, Nomad, Mesos, Swarm, ECS, etc.).\n\n## Your Current Security Model\n\n### Q9. How are infrastructure service provider credentials managed?\n\n1. By hardcoding them in the source code. This is highly insecure.\n1. By using infrastructure provider roles (like EC2 instance roles for AWS). Since the service provider knows the identity of the machines it's providing, you can grant some machines permission to make API requests without giving them a copy of your actual credentials.\n1. By using a secrets management solution (like Vault, Keywhiz, or PAR). We recommend this.\n1. By using short-lived tokens. This is one of the most secure methods, since the temporary credentials you distribute expire quickly and are very difficult to exploit. 
However, this can be more complex to use than a secrets management solution.\n\n### Q10. How do you control users and objects hosted by your infrastructure provider (like logins, access and role control, etc.)?\n\n1. A common 'admin' or 'superuser' account shared by engineers. This increases the possibility of a breach into your infrastructure provider account.\n1. Individual named user accounts. This makes a loss of credentials less likely and easier to recover from, but it doesn't scale very well as the team grows.\n1. LDAP and\/or Microsoft Entra ID integration. This is much more secure than shared accounts, but requires additional architectural considerations to ensure that the provider's access into your corporate network is configured correctly.\n1. Single sign-on through OAuth or SAML. This provides token-based access into your infrastructure provider while not requiring your provider to have access to your corporate network.\n\n### Q11. How do you track the changes made by different users in your infrastructure provider's environments?\n\n1. No logging in place. Auditing and troubleshooting can be very difficult without a record of who made which changes when.\n1. Manual changelog. Users manually write down their changes to infrastructure in a shared document. This method is prone to human error.\n1. By logging all API calls to an audit trail service or log management service (like CloudTrail, Loggly, or Splunk). We recommend this. This ensures that an audit trail is available during troubleshooting and\/or security reviews.\n\n### Q12. How is the access of former employees revoked?\n\n1. Immediately, manually. If you don't use infrastructure as code, the easiest and quickest way is by removing access for that employee manually using the infrastructure provider's console.\n1. Delayed, as part of the next release. 
If your release process is extremely coupled and most of your security changes have to pass through a CAB (Change Advisory Board) meeting in order to be executed in production, this could be delayed.\n1. Immediately, writing a hot-fix in the infrastructure as code. This is the most secure and recommended option. Before the employee leaves the building, access must be removed.\n\n## Assessing the Overall Maturity of Your Provisioning Practices\n\nAfter reviewing all of these questions, look back at your notes and assess your organization's overall stage of maturity: are your practices mostly manual, semi-automated, infrastructure as code, or collaborative infrastructure as code?\n\nKeep your current state in mind as you read the next section.\n\n## Next\n\nNow that you've taken a hard look at your current practices, it's time to begin improving them. Continue on to [Part 3: How to Evolve Your Provisioning Practices](\/terraform\/cloud-docs\/recommended-practices\/part3).","site":"terraform","answers_cleaned":"    page title       Part 2  Evaluating Your Current Provisioning Practices   Terraform Recommended   Practices tfc only  true description       Use a quiz to evaluate your current Terraform provisioning practices  Learn whether your practices are mostly manual  semi automated  infrastructure as code  or collaborative infrastructure as code         Part 2  Evaluating Your Current Provisioning Practices  HCP Terraform depends on several foundational IT practices  Before you can implement HCP Terraform s collaborative infrastructure as code workflows  you need to understand which of those practices you re already using  and which ones you still need to implement   We ve written the section below in the form of a quiz or interview  with multiple choice answers that represent the range of operational maturity levels we ve seen across many organizations  You should read it with a notepad handy  and take note of any questions where your organization can improve its 
use of automation and collaboration   This quiz doesn t have a passing or failing score  but it s important to know your organization s answers  Once you know which of your IT practices need the most attention  Section 3 will guide you from your current state to our recommended practices in the most direct way      Four Levels of Operational Maturity  Each question has several answers  each of which aligns with a different level of operational maturity  Those levels are as follows   1    Manual        Infrastructure is provisioned through a UI or CLI       Configuration changes do not leave a traceable history  and aren t always visible       Limited or no naming standards in place   1    Semi automated        Infrastructure is provisioned through a combination of UI CLI  infrastructure as code  and scripts or configuration management       Traceability is limited  since different record keeping methods are used across the organization       Rollbacks are hard to achieve due to differing record keeping methods   1    Infrastructure as code        Infrastructure is provisioned using Terraform OSS       Provisioning and deployment processes are automated       Infrastructure configuration is consistent  with all necessary details fully documented  nothing siloed in a sysadmin s head        Source files are stored in version control to record editing history  and  if necessary  roll back to older versions       Some Terraform code is split out into modules  to promote consistent reuse of your organization s more common architectural patterns   1    Collaborative infrastructure as code        Users across the organization can safely provision infrastructure with Terraform  without conflicts and with clear understanding of their access permissions       Expert users within an organization can produce standardized infrastructure templates  and beginner users can consume those to follow infrastructure best practices for the organization       Per workspace access control 
helps committers and approvers on workspaces protect production environments       Functional groups that don t directly write Terraform code have visibility into infrastructure status and changes through HCP Terraform s UI   By the end of this section  you should have a clear understanding of which operational maturity stage you are in  Section 3 will explain the recommended steps to move from your current stage to the next one   Answering these questions will help you understand your organization s method for provisioning infrastructure  its change workflow  its operation model  and its security model   Once you understand your current practices  you can identify the remaining steps for implementing HCP Terraform      Your Current Configuration and Provisioning Practices  How does your organization configure and provision infrastructure today  Automated and consistent practices help make your infrastructure more knowable and reliable  and reduce the amount of time spent on troubleshooting   The following questions will help you evaluate your current level of automation for configuration and provisioning       Q1  How do you currently manage your infrastructure   1  Through a UI or CLI  This might seem like the easiest option for one off tasks  but for recurring operations it is a big consumer of valuable engineering time  It s also difficult to track and manage changes  1  Through reusable command line scripts  or a combination of UI and infrastructure as code  This is faster and more reliable than pure ad hoc management and makes recurring operations repeatable  but the lack of consistency and versioning makes it difficult to manage over time  1  Through an infrastructure as code tool  Terraform  CloudFormation   Infrastructure as code enables scalable  repeatable  and versioned infrastructure  It dramatically increases the productivity of each operator and can enforce consistency across environments when used appropriately  1  Through a general purpose 
automation framework  i e  Jenkins   scripts   Jenkins   Terraform   This centralizes the management workflow  albeit with a tool that isn t built specifically for provisioning tasks       Q2  What topology is in place for your service provider accounts   1  Flat structure  single account  All infrastructure is provisioned within the same account  1  Flat structure  multiple accounts  Infrastructure is provisioned using different infrastructure providers  with an account per environment  1  Tree hierarchy  This features a master billing account  an audit security logging account  and project environment specific infrastructure accounts       Q3  How do you manage the infrastructure for different environments   1  Manual  Everything is manual  with no configuration management in place  1  Siloed  Each application team has its own way of managing infrastructure   some manually  some using infrastructure as code or custom scripts  1  Infrastructure as code with different code bases per environment  Having different code bases for infrastructure as code configurations can lead to untracked changes from one environment to the other if there is no promotion within environments  1  Infrastructure as code with a single code base and differing environment variables  All resources  regardless of environment  are provisioned with the same code  ensuring that changes promote through your deployment tiers in a predictable way       Q4  How do teams collaborate and share infrastructure configuration and code   1  N A  Infrastructure as code is not used  1  Locally  Infrastructure configuration is hosted locally and shared via email  documents or spreadsheets  1  Ticketing system  Code is shared through journal entries in change requests or problem incident tickets  1  Centralized without version control  Code is stored on a shared filesystem and secured through security groups  Changes are not versioned  Rollbacks are only possible through restores from backups or snapshots  1  
Configuration stored and collaborated in a version control system  VCS   Git repositories  etc    Teams collaborate on infrastructure configurations within a VCS workflow  and can review infrastructure changes before they enter production  This is the most mature approach  as it offers the best record keeping and cross department cross team visibility       Q5  Do you use reusable modules for writing infrastructure as code   1  Everything is manual  No infrastructure as code currently used  1  No modularity  Infrastructure as code is used  but primarily as one off configurations  Users usually don t share or re use code  1  Teams use modules internally but do not share them across teams  1  Modules are shared organization wide  Similar to shared software libraries  a module for a common infrastructure pattern can be updated once and the entire organization benefits      Your Current Change Control Workflow  Change control is a formal process to coordinate and approve changes to a product or system  The goals of a change control process include     Minimizing disruption to services    Reducing rollbacks    Reducing the overall cost of changes    Preventing unnecessary changes    Allowing users to make changes without impacting changes made by other users   The following questions will help you assess the maturity of your change control workflow       Q6  How do you govern the access to control changes to infrastructure   1  Access is not restricted or audited  Everyone in the platform team has the flexibility to create  change  and destroy all infrastructure  This leads to a complex system that is unstable and hard to manage  1  Access is not restricted  only audited  This makes it easier to track changes after the fact  but doesn t proactively protect your infrastructure s stability  1  Access is restricted based on service provider account level  Members of the team have admin access to different accounts based on the environment they are responsible for  1  
Access is restricted based on user roles  All access is restricted based on user roles at infrastructure provider level       Q7  What is the process for changing existing infrastructure   1  Manual changes by remotely logging into machines  Repetitive manual tasks are inefficient and prone to human errors  1  Runtime configuration management  Puppet  Chef  etc    Configuration management tools let you make fast  automated changes based on readable and auditable code  However  since they don t produce static artifacts  the outcome of a given configuration version isn t always 100  repeatable  making rollbacks only partially reliable  1  Immutable infrastructure  images  containers   Immutable components can be replaced for every deployment  rather than being updated in place   using static deployment artifacts  If you maintain sharp boundaries between ephemeral layers and state storing layers  immutable infrastructure can be much easier to test  validate  and roll back       Q8  How do you deploy applications   1  Manually  SSH  WinRM  rsync  robocopy  etc    Repetitive manual tasks are inefficient and prone to human errors  1  With scripts  Fabric  Capistrano  custom  etc    1  With a configuration management tool  Chef  Puppet  Ansible  Salt  etc    or by passing userdata scripts to CloudFormation Templates or Terraform configuration files  1  With a scheduler  Kubernetes  Nomad  Mesos  Swarm  ECS  etc        Your Current Security Model      Q9  How are infrastructure service provider credentials managed   1  By hardcoding them in the source code  This is highly insecure  1  By using infrastructure provider roles  like EC2 instance roles for AWS  Since the service provider knows the identity of the machines it s providing  you can grant some machines permission to make API requests without giving them a copy of your actual credentials  1  By using a secrets management solution  like Vault  Keywhiz  or PAR  We recommend this  1  By using short lived tokens  This 
is one of the most secure methods  since the temporary credentials you distribute expire quickly and are very difficult to exploit  However  this can be more complex to use than a secrets management solution       Q10  How do you control users and objects hosted by your infrastructure provider  like logins  access and role control  etc     1  A common  admin  or  superuser  account shared by engineers  This increases the possibility of a breach into your infrastructure provider account  1  Individual named user accounts  This makes a loss of credentials less likely and easier to recover from  but it doesn t scale very well as the team grows  1  LDAP and or Microsoft Entra ID integration  This is much more secure than shared accounts  but requires additional architectural considerations to ensure that the provider s access into your corporate network is configured correctly  1  Single sign on through OAuth or SAML  This provides token based access into your infrastructure provider while not requiring your provider to have access to your corporate network       Q11  How do you track the changes made by different users in your infrastructure provider s environments   1  No logging in place  Auditing and troubleshooting can be very difficult without a record of who made which changes when  1  Manual changelog  Users manually write down their changes to infrastructure in a shared document  This method is prone to human error  1  By logging all API calls to an audit trail service or log management service  like CloudTrail  Loggly  or Splunk   We recommend this  This ensures that an audit trail is available during troubleshooting and or security reviews       Q12  How is the access of former employees revoked   1  Immediately  manually  If you don t use infrastructure as code  the easiest and quickest way is by removing access for that employee manually using the infrastructure provider s console  1  Delayed  as part of the next release  if your release process is 
extremely coupled and most of your security changes have to pass through a CAB  Change Advisory Board  meeting in order to be executed in production  this could be delayed  1  Immediately  writing a hot fix in the infrastructure as code  this is the most secure and recommended option  Before the employee leaves the building  access must be removed      Assessing the Overall Maturity of Your Provisioning Practices  After reviewing all of these questions  look back at your notes and assess your organization s overall stage of maturity  are your practices mostly manual  semi automated  infrastructure as code  or collaborative infrastructure as code   Keep your current state in mind as you read the next section      Next  Now that you ve taken a hard look at your current practices  it s time to begin improving them  Continue on to  Part 3  How to Evolve Your Provisioning Practices   terraform cloud docs recommended practices part3  "}
{"questions":"terraform ServiceNow Service Graph Connector for Terraform Setup Note Follow the Configure ServiceNow Service Graph Connector for HCP Terraform terraform tutorials it saas servicenow sgc tutorial for hands on instructions on how to import an AWS resource deployed in your HCP Terraform organization to the ServiceNow CMDB by using the Service Graph Connector for Terraform ServiceNow Service Graph Connector for Terraform integrates with HCP Terraform and offers two modes of import for Terraform resources page title Overview ServiceNow Service Graph Connector for Terraform Integration summary of the setup process","answers":"---\npage_title: Overview - ServiceNow Service Graph Connector for Terraform Integration - summary of the setup process\ndescription: >-\n  ServiceNow Service Graph Connector for Terraform integrates with HCP Terraform and offers two modes of import for Terraform resources.\n---\n\n# ServiceNow Service Graph Connector for Terraform \u2013 Setup\n\n-> **Note:** Follow the [Configure ServiceNow Service Graph Connector for HCP Terraform](\/terraform\/tutorials\/it-saas\/servicenow-sgc) tutorial for hands-on instructions on how to import an AWS resource deployed in your HCP Terraform organization to the ServiceNow CMDB by using the Service Graph Connector for Terraform.\n\nThe ServiceNow Service Graph Connector for Terraform is a certified scoped application available in the ServiceNow Store. 
Search for \u201dService Graph Connector for Terraform\u201d published by \u201dHashiCorp Inc\u201d and click **Install**.\n\n## Prerequisites\n\nTo start using the Service Graph Connector for Terraform, you must have:\n\n- An administrator account on a Terraform Enterprise instance or within an HCP Terraform organization.\n- An administrator account on your ServiceNow vendor instance.\n\nThe Service Graph Connector for Terraform supports the following ServiceNow server versions:\n\n- Utah\n- Vancouver\n- Washington DC\n- Xanadu\n\nThe following ServiceNow plugins are required dependencies:\n\n- ITOM Discovery License\n- Integration Commons for CMDB\n- Discovery and Service Mapping Patterns\n- ServiceNow IntegrationHub Standard Pack\n\nAdditionally, you can install the IntegrationHub ETL application if you want to modify the default CMDB mappings.\n\n-> **Note:** Dependent plugins are installed on your ServiceNow instance automatically when the app is downloaded from the ServiceNow Store. Before installing the Service Graph Connector for Terraform, you must activate the ITOM Discovery License plugin in your production instance. \n\n## Connect ServiceNow to HCP Terraform\n\n-> **ServiceNow roles:** `admin`, `x_hashi_service_gr.terraform_user`\n\nOnce the integration is installed, you can proceed to the guided setup form where you will enter your Terraform credentials. This step will establish a secure connection between HCP Terraform and your ServiceNow instance.\n\n### Create and scope Terraform API token\n\nIn order for ServiceNow to connect to HCP Terraform, you must give it an HCP Terraform API token. The permissions of this token determine what resources the Service Graph Connector will import into the CMDB. While you could use a user API token, it could import resources from multiple organizations. By providing a team API token, you can scope permissions to only import resources from specified workspaces within a single organization. 
\n\nVisit your organization\u2019s **Settings** > **Teams** page. Scroll down to the **Team API Token** section and click **Create a team token**.\nSave this token in a safe place since HCP Terraform only displays it once. You will use it to configure ServiceNow in the next section.\n\n![ServiceNow Service Graph Connector Configure Team API token in HCP Terraform](\/img\/docs\/service-now-service-graph-team-token-gen.png)\n\n### Configure Service Graph Connector for Terraform API token\n\nIn the top navigation of your ServiceNow instance's control panel, click on **All**, search for **Service Graph Connector for Terraform**, and click **SG-Setup**. Next, click **Get Started**.\n\nNext, in the **Configure the Terraform connection** section, click **Get Started**.\n\nIn the **Configure Terraform authentication credentials** section, click **Configure**. \n\nIf you want to route traffic between your HCP Terraform and the ServiceNow instance through a MID server acting as a proxy, change the **Applies to** dropdown to \"Specific MID servers\" and select your previously configured MID server name. If you don't use MID servers, leave the default value.\n\nSet the **API Key** to the HCP Terraform team API token that you created in the previous section and click **Update**.\n\n![ServiceNow Service Graph Connector API Key Credentials configuration screen. The API key is provided, then saved by clicking the Update button](\/img\/docs\/service-now-service-graph-apikey.png)\n\nIn the **Configure Terraform authentication credentials** section, click **Mark as Complete**.\n\n### Configure Terraform Webhook Notification token\n\nTo improve security, HCP Terraform includes an HMAC signature on all \"generic\" webhook notifications using a user-provided **token**. This token is an arbitrary secret string that HCP Terraform uses to sign each webhook notification. ServiceNow uses the same token to verify the request authenticity. 
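The check ServiceNow performs on each incoming notification can be sketched in a few lines. This is an illustrative Python sketch, not the connector's actual code; it assumes the documented generic-webhook behavior (an HMAC-SHA512 of the raw request body, hex-encoded in the `X-TFE-Notification-Signature` header), and the token and payload values are placeholders:

```python
import hashlib
import hmac

def verify_notification(raw_body: bytes, signature: str, token: str) -> bool:
    """Recompute the HMAC-SHA512 of the raw request body with the shared
    token and compare it, in constant time, to the received signature."""
    expected = hmac.new(token.encode(), raw_body, hashlib.sha512).hexdigest()
    return hmac.compare_digest(expected, signature)

# Demo with placeholder values: sign a body, then verify it.
token = "shared-secret-token"
body = b'{"run_status": "completed"}'
good_sig = hmac.new(token.encode(), body, hashlib.sha512).hexdigest()

print(verify_notification(body, good_sig, token))          # True
print(verify_notification(body, good_sig, "wrong-token"))  # False
```

Because the signature covers the exact bytes of the body, any receiver must verify against the unparsed request payload, not a re-serialized copy.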
Refer to [Notification Authenticity](\/terraform\/cloud-docs\/api-docs\/notification-configurations#notification-authenticity) for more information.\n\nCreate a token and save it in a safe place. This secret token can be any value but should be treated as sensitive.\n\nIn the **Configure Terraform Webhook token** section, click **Configure**. In the **Token** field, enter the secret token that will be shared between HCP Terraform and your ServiceNow instance and click **Update**. \n\n![ServiceNow Service Graph Connector Webhook token configuration screen. The Token is provided, then saved by clicking the Update button](\/img\/docs\/service-now-service-graph-webhook-token.png)\n\nIn the **Configure Terraform Webhook token** section, click **Mark as Complete**.\n\n### Configure Terraform connection\n\nIn the **Configure Terraform connection** section, click **Configure**.\n\nIf you are using Terraform Enterprise, set the **Connection URL** to the URL of your Terraform Enterprise instance. If you are using HCP Terraform, leave the **Connection URL** as the default value of `https:\/\/app.terraform.io`.\n\n![ServiceNow Service Graph Connector HTTP Connection configuration screen. A Terraform Enterprise URL may be provided in the Connection URL field, then saved by clicking the Update button](\/img\/docs\/service-now-service-graph-tfconn.png)\n\nIf you want to use a MID server, check the **Use MID server** box, change the **MID Selection** dropdown to \"Specific MID server\", and select your previously configured and validated MID server.\n\nClick **Update** to save these settings. In the **Configure Terraform connection** section, click **Mark as Complete**.\n\n## Import Resources\n\nRefer to the documentation explaining the difference between the [two modes of import](\/terraform\/cloud-docs\/integrations\/service-now\/service-graph#import-methods) offered by the Service Graph Connector for Terraform. 
Both options may be enabled, or you may choose to enable only the webhook or scheduled import.\n\n### Configure scheduled import\n\nIn the **Set up scheduled import job** section of the setup form, proceed to **Configure the scheduled jobs** and click **Configure**. \n\nYou can use the **Execute Now** option to run a single import job, which is useful for testing. After refreshing the page, the import set will be displayed in the table below the scheduled import form. Once the import is successfully triggered, click on the **Import Set** field of the record to view the logs associated with the import run, as well as its status.\n\nActivate the job by checking the **Activate** box. Set the **Repeat Interval** and click **Update**. Note that the import processing time depends on the number of organizations and workspaces in HCP Terraform. Setting the import job to run frequently is not recommended for large environments.\n\n![ServiceNow Service Graph Connector scheduled import screen](\/img\/docs\/service-now-service-graph-scheduled-import.png)\n\nYou can also access the scheduler interface by searching for **Service Graph Connector for Terraform** in the top navigation menu and selecting **SG-Import Schedule**.\n\n### Configure Terraform Webhook\n\nIn the top navigation, click on **All**, search for **Scheduled Imports**, and click on **Scheduled Imports**. \n\nSelect the **SG-Terraform Scheduled Process State** record, then click **To edit the record click here**.\n\nClick the **Active** checkbox to enable it. Leave the **Repeat Interval** at its default value of 5 seconds. Click **Update**.\n\n![ServiceNow Service Graph Connector scheduled import screen showing the Active checkbox enabled](\/img\/docs\/service-now-service-graph-webhook-schedule.png)\n\nNext, create the webhook in HCP Terraform. Select a workspace and click **Settings > Notifications**. Click **Create a Notification**.\n\nKeep the **Destination** as the default option of **Webhook**. 
Enter a descriptive **Name**.\n\nIn the **Webhook URL** field, enter `https:\/\/<SERVICENOW_HOSTNAME>\/api\/x_hashi_service_gr\/sg_terraform_webhook`, replacing `<SERVICENOW_HOSTNAME>` with the hostname of your ServiceNow instance.\n\nIn the **Token** field, enter the same string you provided in the **Terraform Webhook token** section of the Service Graph guided setup form.\n\nUnder **Health Events** choose **No events**.\n\nUnder **Run Events** choose **Only certain events** and enable notifications only on **Completed** runs. Click **Create Notification**.\n\n![HCP Terraform notification creation screen, showing a webhook pointing to ServiceNow which is only triggered on completed runs](\/img\/docs\/service-now-service-graph-webhook-tfc.png)\n\nTrigger a run in your workspace. Once the run is successfully completed, a webhook notification request will be sent to your ServiceNow instance.\n\n### Monitor the import job\n\nBy following these steps, you can track the status of import jobs in ServiceNow and verify the completion of the import process before accessing the imported resources in the CMDB.\n\nFor scheduled imports, navigate back to the **SG-Import Schedule** interface. For webhook imports, go to the **SG-Terraform Scheduled Process State** interface.\n\nBelow the form, you will find a table containing all registered import sets. Locate and select the relevant import set record.\n\nClick on the **Import Set** field to open it and view its details. The **Outbound Http Requests** tab lists all requests made by your ServiceNow instance to HCP Terraform in order to retrieve the latest Terraform state.\n\nMonitor the state of the import job. 
Wait for it to change to **Complete**, indicated by a green mark.\nOnce the import job is complete, you can access the imported resources in the CMDB.\n\n![ServiceNow Service Graph Connector: import set with successfully completed status](\/img\/docs\/service-now-service-graph-import-set.png)\n\nYou can also access all import sets, regardless of the import type, by navigating to **All** and selecting **Import Sets** under the **Advanced** category.\n\n### View resources in ServiceNow CMDB\n\nIn the top navigation of ServiceNow, click on **All** and search for **CMDB Workspace**, and click on **CMDB Workspace**. \n\nPerform a search by entering a Configuration Item (CI) name in the **Search** field (for example, **Virtual Machine Instance**). CI names supported by the application are listed on the [resource mapping page](\/terraform\/cloud-docs\/integrations\/service-now\/service-graph\/resource-coverage).\n","site":"terraform","answers_cleaned":"    page title  Overview   ServiceNow Service Graph Connector for Terraform Integration   summary of the setup process description       ServiceNow Service Graph Connector for Terraform integrates with HCP Terraform and offers two modes of import for Terraform resources         ServiceNow Service Graph Connector for Terraform   Setup       Note    Follow the  Configure ServiceNow Service Graph Connector for HCP Terraform   terraform tutorials it saas servicenow sgc  tutorial for hands on instructions on how to import an AWS resource deployed in your HCP Terraform organization to the ServiceNow CMDB by using the Service Graph Connector for Terraform   The ServiceNow Service Graph Connector for Terraform is a certified scoped application available in the ServiceNow Store  Search for  Service Graph Connector for Terraform  published by  HashiCorp Inc  and click   Install        Prerequisites  To start using the Service Graph Connector for Terraform  you must have     An administrator account on a Terraform Enterprise instance 
or within an HCP Terraform organization    An administrator account on your ServiceNow vendor instance   The Service Graph Connector for Terraform supports the following ServiceNow server versions     Utah   Vancouver   Washington DC   Xanadu  The following ServiceNow plugins are required dependencies     ITOM Discovery License   Integration Commons for CMDB   Discovery and Service Mapping Patterns   ServiceNow IntegrationHub Standard Pack  Additionally  you can install the IntegrationHub ETL application if you want to modify the default CMDB mappings        Note    Dependent plugins are installed on your ServiceNow instance automatically when the app is downloaded from the ServiceNow Store  Before installing the Service Graph Connector for Terraform  you must activate the ITOM Discovery License plugin in your production instance       Connect ServiceNow to HCP Terraform       ServiceNow roles     admin    x hashi service gr terraform user   Once the integration is installed  you can proceed to the guided setup form where you will enter your Terraform credentials  This step will establish a secure connection between HCP Terraform and your ServiceNow instance       Create and scope Terraform API token  In order for ServiceNow to connect to HCP Terraform  you must give it an HCP Terraform API token  The permissions of this token determine what resources the Service Graph Connector will import into the CMDB  While you could use a user API token  it could import resources from multiple organizations  By providing a team API token  you can scope permissions to only import resources from specified workspaces within a single organization    Visit your organization s   Settings       Teams   page  Scroll down to the   Team API Token   section and click   Create a team token    Save this token in a safe place since HCP Terraform only displays it once  You will use it to configure ServiceNow in the next section     ServiceNow Service Graph Connector Configure Team API token 
in HCP Terraform   img docs service now service graph team token gen png       Configure Service Graph Connector for Terraform API token  In the top navigation of your ServiceNow instance s control panel  click on   All    search for   Service Graph Connector for Terraform    and click   SG Setup    Next  click   Get Started     Next  in the   Configure the Terraform connection   section  click   Get Started     In the   Configure Terraform authentication credentials   section  click   Configure      If you want to route traffic between your HCP Terraform and the ServiceNow instance through a MID server acting as a proxy  change the   Applies to   dropdown to  Specific MID servers  and select your previously configured MID server name  If you don t use MID servers  leave the default value   Set the   API Key   to the HCP Terraform team API token that you created in the previous section and click   Update       ServiceNow Service Graph Connector API Key Credentials configuration screen  The API key is provided  then saved by clicking the Update button   img docs service now service graph apikey png   In the   Configure Terraform authentication credentials   section  click   Mark as Complete         Configure Terraform Webhook Notification token  To improve security  HCP Terraform includes an HMAC signature on all  generic  webhook notifications using a user provided   token    This token is an arbitrary secret string that HCP Terraform uses to sign each webhook notification  ServiceNow uses the same token to verify the request authenticity  Refer to  Notification Authenticity   terraform cloud docs api docs notification configurations notification authenticity  for more information   Create a token and save it in a safe place  This secret token can be any value but should be treated as sensitive   In the   Configure Terraform Webhook token   section  click   Configure    In the   Token   field  enter the secret token that will be shared between the HCP Terraform and 
your ServiceNow instance and click **Update**.\n\n![ServiceNow Service Graph Connector Webhook token configuration screen. The Token is provided, then saved by clicking the Update button](\/img\/docs\/service-now-service-graph-webhook-token.png)\n\nIn the **Configure Terraform Webhook token** section, click **Mark as Complete**.\n\n## Configure Terraform connection\n\nIn the **Configure Terraform connection** section, click **Configure**. If you are using Terraform Enterprise, set the **Connection URL** to the URL of your Terraform Enterprise instance. If you are using HCP Terraform, leave the **Connection URL** as the default value of `https:\/\/app.terraform.io`.\n\n![ServiceNow Service Graph Connector HTTP Connection configuration screen. A Terraform Enterprise URL may be provided in the Connection URL field, then saved by clicking the Update button](\/img\/docs\/service-now-service-graph-tfconn.png)\n\nIf you want to use a MID server, check the **Use MID server** box, change the **MID Selection** dropdown to \"Specific MID server\", and select your previously configured and validated MID server. Click **Update** to save these settings. In the **Configure Terraform connection** section, click **Mark as Complete**.\n\n## Import Resources\n\nRefer to the documentation explaining the difference between the [two modes of import](\/terraform\/cloud-docs\/integrations\/service-now\/service-graph\/import-methods) offered by the Service Graph Connector for Terraform. Both options may be enabled, or you may choose to enable only the webhook or scheduled import.\n\n### Configure scheduled import\n\nIn the **Set up scheduled import job** section of the setup form, proceed to **Configure the scheduled jobs** and click **Configure**. You can use the **Execute Now** option to run a single import job, which is useful for testing. The import set will be displayed in the table below the scheduled import form after refreshing the page. Once the import is successfully triggered, click on the **Import Set** field of the record to view the logs associated with the import run, as well as its status.\n\nActivate the job by checking the **Activate** box. Set the **Repeat Interval** and click **Update**. Note that the import processing time depends on the number of organizations and workspaces in your HCP Terraform account. Setting the import job to run frequently is not recommended for big environments.\n\n![ServiceNow Service Graph Connector scheduled import screen](\/img\/docs\/service-now-service-graph-scheduled-import.png)\n\nYou can also access the scheduler interface by searching for **Service Graph Connector for Terraform** in the top navigation menu and selecting **SG Import Schedule**.\n\n### Configure Terraform Webhook\n\nIn the top navigation, click on **All**, search for **Scheduled Imports**, and click on **Scheduled Imports**. Select the **SG Terraform Scheduled Process State** record, then click **To edit the record click here**. Click the **Active** checkbox to enable it. Leave the default value for the **Repeat Interval** of 5 seconds. Click **Update**.\n\n![ServiceNow Service Graph Connector scheduled import screen showing the Active checkbox enabled](\/img\/docs\/service-now-service-graph-webhook-schedule.png)\n\nNext, create the webhook in HCP Terraform. Select a workspace and click **Settings > Notifications**. Click **Create a Notification**. Keep the **Destination** as the default option of **Webhook**. Choose a descriptive name in the **Name** field. For the **Webhook URL**, enter `https:\/\/SERVICENOW_HOSTNAME\/api\/x_hashi_service_gr\/sg-terraform-webhook` and replace `SERVICENOW_HOSTNAME` with the hostname of your ServiceNow instance. In the **Token** field, enter the same string you provided in the **Terraform Webhook token** section of the Service Graph guided setup form. Under **Health Events**, choose **No events**. Under **Run Events**, choose **Only certain events** and enable notifications only on **Completed** runs. Click **Create Notification**.\n\n![HCP Terraform notification creation screen, showing a webhook pointing to ServiceNow which is only triggered on completed runs](\/img\/docs\/service-now-service-graph-webhook-tfc.png)\n\nTrigger a run in your workspace. Once the run is successfully completed, a webhook notification request will be sent to your ServiceNow instance.\n\n### Monitor the import job\n\nBy following these steps, you can track the status of import jobs in ServiceNow and verify the completion of the import process before accessing the imported resources in the CMDB. For scheduled imports, navigate back to the **SG Import Schedule** interface. For webhook imports, go to the **SG Terraform Scheduled Process State** interface. Under the form, you will find a table containing all registered import sets. Locate and select the relevant import set record. Click on the **Import Set** field to open it and view its details. The **Outbound Http Requests** tab lists all requests made by your ServiceNow instance to HCP Terraform in order to retrieve the latest Terraform state. Monitor the state of the import job and wait for it to change to **Complete**, indicated by a green mark. Once the import job is complete, you can access the imported resources in the CMDB.\n\n![ServiceNow Service Graph Connector import set with successfully completed status](\/img\/docs\/service-now-service-graph-import-set.png)\n\nYou can also access all import sets, regardless of the import type, by navigating to **All** and selecting **Import Sets** under the **Advanced** category.\n\n### View resources in ServiceNow CMDB\n\nIn the top navigation of ServiceNow, click on **All**, search for **CMDB Workspace**, and click on **CMDB Workspace**. Perform a search by entering a Configuration Item (CI) name in the **Search** field, for example, \"Virtual Machine Instance\". CI names supported by the application are listed on the [resource mapping page](\/terraform\/cloud-docs\/integrations\/service-now\/service-graph\/resource-coverage)."}
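The webhook token configured above is what authenticates incoming notifications: HCP Terraform computes an HMAC-SHA512 over the raw notification body using the shared token and sends it in the `X-TFE-Notification-Signature` header. A minimal sketch of reproducing that signature locally to sanity-check a token; the token, payload, hostname, and endpoint path below are illustrative placeholders, not values from this guide:

```shell
#!/bin/sh
# Sketch: compute the HMAC-SHA512 signature that HCP Terraform attaches to a
# webhook notification body, so a received request can be checked against the
# shared webhook token. TOKEN and PAYLOAD are placeholder values.
TOKEN='example-webhook-token'
PAYLOAD='{"run_status":"applied"}'

SIG=$(printf '%s' "$PAYLOAD" | openssl dgst -sha512 -hmac "$TOKEN" | awk '{print $NF}')
echo "X-TFE-Notification-Signature: $SIG"

# To replay such a request against your own instance (placeholders assumed):
# curl -sS -X POST -H 'Content-Type: application/json' \
#   -H "X-TFE-Notification-Signature: $SIG" -d "$PAYLOAD" \
#   'https://SERVICENOW_HOSTNAME/api/x_hashi_service_gr/sg-terraform-webhook'
```

If the signature you compute from the stored token does not match the header on an incoming request, the request was not produced with the token you configured.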
{"questions":"terraform Google Cloud Platform This page provides details on how Google Cloud resources set up using Terraform are corresponded to the classes within the ServiceNow CMDB ServiceNow Service Graph integration allows to import selected resources from Google Cloud Platform into ServiceNow CMDB page title ServiceNow Service Graph Connector for Terraform Integration GCP Coverage","answers":"---\npage_title: ServiceNow Service Graph Connector for Terraform Integration - GCP Coverage\ndescription: >-\n  ServiceNow Service Graph integration allows you to import selected resources from Google Cloud Platform into ServiceNow CMDB.\n---\n\n# Google Cloud Platform\n\nThis page provides details on how Google Cloud resources, set up using Terraform, correspond to the classes within the ServiceNow CMDB.\n\n## Mapping of Terraform resources to CMDB CI Classes\n\n| Google resource            | Terraform resource name                                               | ServiceNow CMDB CI Class        | ServiceNow CMDB Category Name |\n|----------------------------|-----------------------------------------------------------------------|---------------------------------|-------------------------------|\n| Project ID                 | N\/A                                                                   | `cmdb_ci_cloud_service_account` | Cloud Service Account         |\n| Region (location)          | N\/A                                                                   | `cmdb_ci_google_datacenter`     | Google Datacenter             |\n| Virtual Machine Instance   | `google_compute_instance`                                             | `cmdb_ci_vm_instance`           | Virtual Machine Instance      |\n| Kubernetes Cluster         | `google_container_cluster`                                            | `cmdb_ci_kubernetes_cluster`    | Kubernetes Cluster            |\n| Google Storage             | `google_storage_bucket`                                               | 
`cmdb_ci_cloud_storage_account` | Cloud Storage Account         |\n| Google BigQuery            | `google_bigquery_table`                                               | `cmdb_ci_cloud_database`        | Cloud DataBase                |\n| Google SQL                 | `google_sql_database`                                                 | `cmdb_ci_cloud_database`        | Cloud DataBase                |\n| Google Compute Firewall    | `google_compute_firewall`                                             | `cmdb_ci_compute_security_group`| Compute Security Group        |\n| Cloud Function             | `google_cloudfunctions_function` or `google_cloudfunctions2_function` | `cmdb_ci_cloud_function`        | Cloud Function                |\n| Load Balancer              | `google_compute_forwarding_rule`                                      | `cmdb_ci_cloud_load_balancer`   | Cloud Load Balancer           |\n| VPC                        | `google_compute_network`                                              | `cmdb_ci_network`               | Cloud Network                 |\n| Tags                       | N\/A                                                                   | `cmdb_key_value`                | Key Value                     |\n\n## Resource relationships\n\n| Child CI Class                                             | Relationship type      | Parent CI Class                                           |\n|------------------------------------------------------------|------------------------|-----------------------------------------------------------|\n| Google Datacenter 1 (`cmdb_ci_google_datacenter`)          | Hosted On::Hosts       | Cloud Service Account 4 (`cmdb_ci_cloud_service_account`) |\n| Google Datacenter 2 (`cmdb_ci_google_datacenter`)          | Hosted On::Hosts       | Cloud Service Account 4 (`cmdb_ci_cloud_service_account`) |\n| Virtual Machine Instance 4 (`cmdb_ci_vm_instance`)         | Hosted On::Hosts       | Google Datacenter 1 
(`cmdb_ci_google_datacenter`)         |\n| Virtual Machine Instance 4 (`cmdb_ci_vm_instance`)         | Reference              | Key Value 13 (`cmdb_key_value`)                           |\n| Cloud Network 3 (`cmdb_ci_network`)                        | Hosted On::Hosts       | Google Datacenter 1 (`cmdb_ci_google_datacenter`)         |\n| Cloud Network 3 (`cmdb_ci_network`)                        | Reference              | Key Value 18 (`cmdb_key_value`)                           |\n| Compute Security Group 3 (`cmdb_ci_compute_security_group`)| Hosted On::Hosts       | Google Datacenter 1 (`cmdb_ci_google_datacenter`)         |\n| Compute Security Group 3 (`cmdb_ci_compute_security_group`)| Reference              | Key Value 21 (`cmdb_key_value`)                           |\n| Kubernetes Cluster 3 (`cmdb_ci_kubernetes_cluster`)        | Hosted On::Hosts       | Google Datacenter 1 (`cmdb_ci_google_datacenter`)         |\n| Kubernetes Cluster 3 (`cmdb_ci_kubernetes_cluster`)        | Reference              | Key Value 22 (`cmdb_key_value`)                           |\n| Cloud DataBase 3 (`cmdb_ci_cloud_database` )               | Hosted On::Hosts       | Google Datacenter 1 (`cmdb_ci_google_datacenter`)         |\n| Cloud DataBase 2 (`cmdb_ci_cloud_database` )               | Reference              | Key Value 24 (`cmdb_key_value`)                           |\n| Cloud Function 3 (`cmdb_ci_cloud_function`)                | Hosted On::Hosts       | Google Datacenter 1 (`cmdb_ci_google_datacenter`)         |\n| Cloud Function 3 (`cmdb_ci_cloud_function`)                | Reference              | Key Value 25 (`cmdb_key_value`)                           |\n| Cloud Load Balancer 2 (`cmdb_ci_cloud_load_balancer`)      | Hosted On::Hosts       | Google Datacenter 1 (`cmdb_ci_google_datacenter`)         |\n| Cloud Load Balancer 2 (`cmdb_ci_cloud_load_balancer`)      | Reference              | Key Value 26 (`cmdb_key_value`)                           |\n| Cloud Storage 
Account 2 (`cmdb_ci_cloud_storage_account`)  | Hosted On::Hosts       | Google Datacenter 2 (`cmdb_ci_google_datacenter`)         |\n| Cloud Storage Account 2 (`cmdb_ci_cloud_storage_account`)  | Reference              | Key Value 23 (`cmdb_key_value`)                           |\n\n## Field attributes mapping\n\n### Cloud Service Account (`cmdb_ci_cloud_service_account`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `project`                                                      |\n| Account Id        | `project`                                                      |\n| Datacenter Type   | Defaults to `google`                                           |\n| Object ID         | `project`                                                      |\n| Name              | `project`                                                      |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Google Datacenter (`cmdb_ci_google_datacenter`)  \n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | Concatenation of `project` and region extracted from `id`      |\n| Object Id         | Region extracted from `id`                                     |\n| Region            | Region extracted from `id`                                     |\n| Name              | Region extracted from `id`                                     |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Virtual Machine Instance (`cmdb_ci_vm_instance`)\n\n| CMDB field        | Terraform state field                                          
|\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Category          | `machine_type`                                                 |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Network (`cmdb_ci_network`) \n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Compute Security Group (`cmdb_ci_compute_security_group`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Kubernetes Cluster (`cmdb_ci_kubernetes_cluster`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                            
                              |\n| IP Address        | `endpoint`                                                     |\n| Port              | Defaults to \"6443\"                                             |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Object Storage (`cmdb_ci_cloud_object_storage`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                                                          |\n| Object Id         | `id`                                                           |\n| Cloud Provider    | Resource cloud provider extracted from `arn`                   |\n| Name              | `bucket`                                                       |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Storage Account (`cmdb_ci_cloud_storage_account`) \n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Name              | `location`                                                     |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud DataBase (`cmdb_ci_cloud_database`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source 
Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Name              | Name extracted from `id`                                       |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Function (`cmdb_ci_cloud_function`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Load Balancer (`cmdb_ci_cloud_load_balancer`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |","site":"terraform"}
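The CI classes in the mapping tables above are ordinary CMDB tables, so records produced by an import can be spot-checked through ServiceNow's standard Table API. A hedged sketch; the hostname, credentials, and `jq` post-processing are assumptions for illustration, not part of the connector:

```shell
#!/bin/sh
# Sketch: read back imported CIs via the standard ServiceNow Table API.
# The class name comes from the mapping table above (e.g. a
# google_compute_instance maps to cmdb_ci_vm_instance).
# SERVICENOW_HOSTNAME, SN_USER, and SN_PASS are placeholders.
CLASS='cmdb_ci_vm_instance'
URL="https://SERVICENOW_HOSTNAME/api/now/table/${CLASS}?sysparm_limit=5"
echo "GET $URL"

# Run against a real instance (placeholders assumed):
# curl -sS -u "$SN_USER:$SN_PASS" -H 'Accept: application/json' "$URL" \
#   | jq -r '.result[].name'
```

The same pattern applies to any class in the table; substitute, for example, `cmdb_ci_kubernetes_cluster` or `cmdb_ci_cloud_database` for `CLASS`.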
{"questions":"terraform AWS This page details the mapping rules for importing AWS resources provisioned with Terraform into ServiceNow CMDB page title ServiceNow Service Graph Connector for Terraform Integration AWS Coverage ServiceNow Service Graph integration allows to import selected resources from AWS into ServiceNow CMDB","answers":"---\npage_title: ServiceNow Service Graph Connector for Terraform Integration - AWS Coverage\ndescription: >-\n  ServiceNow Service Graph integration allows you to import selected resources from AWS into ServiceNow CMDB.\n---\n\n# AWS\n\nThis page details the mapping rules for importing AWS resources provisioned with Terraform into ServiceNow CMDB.\n\n## Mapping of Terraform resources to CMDB CI Classes\n\n| AWS resource                                                                         | Terraform resource name    | ServiceNow CMDB CI Class        | ServiceNow CMDB Category Name |\n|--------------------------------------------------------------------------------------|----------------------------|---------------------------------|-------------------------------|\n| AWS account                                                                          | N\/A                        | `cmdb_ci_cloud_service_account` | Cloud Service Account         |\n| AWS region                                                                           | N\/A                        | `cmdb_ci_aws_datacenter`        | AWS Datacenter                |\n| EC2 Instance                                                                         | `aws_instance`             | `cmdb_ci_vm_instance`           | Virtual Machine Instance      |\n| S3 Bucket                                                                            | `aws_s3_bucket`            | `cmdb_ci_cloud_object_storage`  | Cloud Object Storage          |\n| ECS Cluster                                                                          | `aws_ecs_cluster`          | 
`cmdb_ci_cloud_ecs_cluster`     | AWS Cloud ECS Cluster         |\n| EKS Cluster                                                                          | `aws_eks_cluster`          | `cmdb_ci_kubernetes_cluster`    | Kubernetes Cluster            |\n| VPC                                                                                  | `aws_vpc`                  | `cmdb_ci_network`               | Cloud Network                 |\n| Database Instance (*non-Aurora databases: e.g., MySQL, PostgreSQL, SQL Server, etc.*)| `aws_db_instance`          | `cmdb_ci_cloud_database`        | Cloud DataBase                |\n| RDS Aurora Cluster                                                                   | `aws_rds_cluster`          | `cmdb_ci_cloud_db_cluster`      | Cloud DataBase Cluster        |\n| RDS Aurora Instance                                                                  | `aws_rds_cluster_instance` | `cmdb_ci_cloud_database`        | Cloud DataBase                |\n| DynamoDB Global Table                                                                | `aws_dynamodb_global_table`| `cmdb_ci_dynamodb_global_table` | DynamoDB Global Table         |\n| DynamoDB Table                                                                       | `aws_dynamodb_table`       | `cmdb_ci_dynamodb_table`        | DynamoDB Table                |\n| Security Group                                                                       | `aws_security_group`       | `cmdb_ci_compute_security_group`| Compute Security Group        |\n| Lambda                                                                               | `aws_lambda_function`      | `cmdb_ci_cloud_function`        | Cloud Function                |\n| Load Balancer                                                                        | `aws_lb`                   | `cmdb_ci_cloud_load_balancer`   | Cloud Load Balancer           |\n| Tags                                                                             
    | N\/A                        | `cmdb_key_value`                | Key Value                     |\n\n## Resource relationships\n\n| Child CI Class                                             | Relationship type| Parent CI Class                                           |\n|------------------------------------------------------------|------------------|-----------------------------------------------------------|\n| AWS Datacenter 1 (`cmdb_ci_aws_datacenter`)                | Hosted On::Hosts | Cloud Service Account 1 (`cmdb_ci_cloud_service_account`) |\n| AWS Datacenter 2 (`cmdb_ci_aws_datacenter`)                | Hosted On::Hosts | Cloud Service Account 6 (`cmdb_ci_cloud_service_account`) |\n| Virtual Machine Instance 1 (`cmdb_ci_vm_instance`)         | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`)               |\n| Virtual Machine Instance 1 (`cmdb_ci_vm_instance`)         | Reference        | Key Value 1 (`cmdb_key_value`)                            |\n| AWS Cloud ECS Cluster 1 (`cmdb_ci_cloud_ecs_cluster`)      | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`)               |\n| AWS Cloud ECS Cluster 1 (`cmdb_ci_cloud_ecs_cluster`)      | Reference        | Key Value 2 (`cmdb_key_value`)                            |\n| Cloud Object Storage 1 (`cmdb_ci_cloud_object_storage`)    | Hosted On::Hosts | AWS Datacenter 2 (`cmdb_ci_aws_datacenter`)               |\n| Cloud Object Storage 1 (`cmdb_ci_cloud_object_storage`)    | Reference        | Key Value 3 (`cmdb_key_value`)                            |\n| Kubernetes Cluster 1 (`cmdb_ci_kubernetes_cluster`)        | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`)               |\n| Kubernetes Cluster 1 (`cmdb_ci_kubernetes_cluster`)        | Reference        | Key Value 4 (`cmdb_key_value`)                            |\n| Cloud Network 1 (`cmdb_ci_network`)                        | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`)               |\n| 
Cloud Network 1 (`cmdb_ci_network`)                        | Reference        | Key Value 5 (`cmdb_key_value`)                            |\n| Cloud DataBase 1 (`cmdb_ci_cloud_database` )               | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`)               |\n| Cloud DataBase 1 (`cmdb_ci_cloud_database` )               | Reference        | Key Value 6 (`cmdb_key_value`)                            |\n| Cloud DataBase Cluster 1 (`cmdb_ci_cloud_db_cluster`)      | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`)               |\n| Cloud DataBase Cluster 1 (`cmdb_ci_cloud_db_cluster`)      | Reference        | Key Value 7 (`cmdb_key_value`)                            |\n| DynamoDB Global Table 1 (`cmdb_ci_dynamodb_global_table`)  | Hosted On::Hosts | Cloud Service Account 1 (`cmdb_ci_cloud_service_account`) |\n| DynamoDB Table 1 (`cmdb_ci_dynamodb_table`)                | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`)               |\n| DynamoDB Table 1 (`cmdb_ci_dynamodb_table`)                | Reference        | Key Value 8 (`cmdb_key_value`)                            |\n| Compute Security Group 1 (`cmdb_ci_compute_security_group`)| Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`)               |\n| Compute Security Group 1 (`cmdb_ci_compute_security_group`)| Reference        | Key Value 10 (`cmdb_key_value`)                           |\n| Cloud Function 1 (`cmdb_ci_cloud_function`)                | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`)               |\n| Cloud Function 1 (`cmdb_ci_cloud_function`)                | Reference        | Key Value 11 (`cmdb_key_value`)                           |\n| Cloud Load Balancer 1 (`cmdb_ci_cloud_load_balancer`)      | Hosted On::Hosts | AWS Datacenter 1 (`cmdb_ci_aws_datacenter`)               |\n| Cloud Load Balancer 1 (`cmdb_ci_cloud_load_balancer`)      | Reference        | Key Value 12 (`cmdb_key_value`)                           
|\n\n## Field attributes mapping\n\n### Cloud Service Account (`cmdb_ci_cloud_service_account`)\n\n|  CMDB field        | Terraform state field                         |\n|--------------------|-----------------------------------------------|\n| Source Native Key  | Resource account number extracted from `arn`  |\n| Account Id         | Resource account number extracted from `arn`  |\n| Datacenter Type    | Resource cloud provider extracted from `arn`  |\n| Object ID          | Resource id extracted from `arn`              |\n| Name               | Resource name extracted from `arn`            |\n| Operational Status | Defaults to \"1\" (\"Operational\")               |\n\n### AWS Datacenter (`cmdb_ci_aws_datacenter`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | Concatenation of region and account number extracted from `arn`|\n| Object Id         | Region extracted from `arn`                                    |\n| Region            | Region extracted from `arn`                                    |\n| Name              | Region extracted from `arn`                                    |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n\n### Virtual Machine Instance (`cmdb_ci_vm_instance`) \n\n|  CMDB field        | Terraform state field                                         |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                                                          |\n| Object Id         | `id`                                                           |\n| Placement Group ID| `placement_group`                                              |\n| IP Address        | `public_ip`                                                    |\n| Status            | `instance_state`                       
                        |\n| VM Instance ID    | `id`                                                           |\n| Name              | `id`                                                           |\n| State             | `state`                                                        |\n| CPU               | `cpu_core_count`                                               |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### AWS Cloud ECS Cluster (`cmdb_ci_cloud_ecs_cluster`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                                                          |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Object Storage (`cmdb_ci_cloud_object_storage`) \n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                                                          |\n| Object Id         | `id`                                                           |\n| Cloud Provider    | Resource cloud provider extracted from `arn`                   |\n| Name              | `bucket`                                                       |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Kubernetes Cluster (`cmdb_ci_kubernetes_cluster`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native 
Key | `arn`                                                          |\n| IP Address        | `endpoint`                                                     |\n| Port              | Defaults to \"6443\"                                             |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Network (`cmdb_ci_network`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                                                          |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud DataBase (`cmdb_ci_cloud_database`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                                                          |\n| Object Id         | `id`                                                           |\n| Version           | `engine_version`                                               |\n| Type              | `engine`                                                       |\n| TCP port(s)       | `port`                                                         |\n| Category          | `instance_class`                                               |\n| Fully qualified domain name| `endpoint`                                            |\n| Location          | Region extracted from `arn`                                    |\n| Name              | `name`                                    
                     |\n| Vendor            | Resource cloud provider extracted from `arn`                   |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud DataBase Cluster (`cmdb_ci_cloud_db_cluster`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                                                          |\n| Cluster ID        | `cluster_resource_id`                                          |\n| Name              | `name`                                                         |\n| TCP port(s)       | `port`                                                         |\n| Fully qualified domain name| `endpoint`                                            |\n| Vendor            | Resource cloud provider extracted from `arn`                   |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### DynamoDB Table (`cmdb_ci_dynamodb_table`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                                                          |\n| Object Id         | `arn`                                                          |\n| Location          | Region extracted from `arn`                                    |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### DynamoDB Global Table (`cmdb_ci_dynamodb_global_table`) \n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | 
`arn`                                                          |\n| Object Id         | `arn`                                                          |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Compute Security Group (`cmdb_ci_compute_security_group`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                                                          |\n| Object Id         | `id`                                                           |\n| Location          | Region extracted from `arn`                                    |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Function (`cmdb_ci_cloud_function`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                                                          |\n| Object Id         | `arn`                                                          |\n| Language          | `runtime`                                                      |\n| Code Size         | `source_code_size`                                             |\n| Location          | Region extracted from `arn`                                    |\n| Name              | `function_name`                                                |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Load Balancer (`cmdb_ci_cloud_load_balancer`)\n\n| CMDB field        | Terraform state field                            
              |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `arn`                                                          |\n| Object Id         | `id`                                                           |\n| Canonical Hosted Zone Name| `dns_name`                                             |\n| Canonical Hosted Zone ID| `zone_id`                                                |\n| Location          | Region extracted from `arn`                                    |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                ","site":"terraform"}
{"questions":"terraform This page describes how Terraform provisioned Azure resources are mapped to the classes within the ServiceNow CMDB Microsoft Azure ServiceNow Service Graph integration allows to import selected resources from Azure into ServiceNow CMDB page title ServiceNow Service Graph Connector for Terraform Integration Azure Coverage","answers":"---\npage_title: ServiceNow Service Graph Connector for Terraform Integration - Azure Coverage\ndescription: >-\n  The ServiceNow Service Graph integration allows you to import selected resources from Azure into the ServiceNow CMDB.\n---\n\n# Microsoft Azure\n\nThis page describes how Terraform-provisioned Azure resources are mapped to the classes within the ServiceNow CMDB.\n\n## Mapping of Terraform resources to CMDB CI Classes\n\n| Azure resource             | Terraform resource name            | ServiceNow CMDB CI Class        | ServiceNow CMDB Category Name |\n|----------------------------|------------------------------------|---------------------------------|-------------------------------|\n| Azure account              | N\/A                                | `cmdb_ci_cloud_service_account` | Cloud Service Account         |\n| Azure region               | N\/A                                | `cmdb_ci_azure_datacenter`      | Azure Datacenter              |\n| Resource Group             | `azurerm_resource_group`           | `cmdb_ci_resource_group`        | Resource Group                |\n| Windows VM                 | `azurerm_windows_virtual_machine`  | `cmdb_ci_vm_instance`           | Virtual Machine Instance      |\n| Linux VM                   | `azurerm_linux_virtual_machine`    | `cmdb_ci_vm_instance`           | Virtual Machine Instance      |\n| AKS Cluster                | `azurerm_kubernetes_cluster`       | `cmdb_ci_kubernetes_cluster`    | Kubernetes Cluster            |\n| Storage Container          | `azurerm_storage_container`        | `cmdb_ci_cloud_storage_account` | Cloud Storage Account         
|\n| MariaDB Database           | `azurerm_mariadb_server`           | `cmdb_ci_cloud_database`        | Cloud DataBase                |\n| MS SQL Database            | `azurerm_mssql_server`             | `cmdb_ci_cloud_database`        | Cloud DataBase                |\n| MySQL Database             | `azurerm_mysql_server`             | `cmdb_ci_cloud_database`        | Cloud DataBase                |\n| PostgreSQL Database        | `azurerm_postgresql_server`        | `cmdb_ci_cloud_database`        | Cloud DataBase                |\n| Network security group     | `azurerm_network_security_group`   | `cmdb_ci_compute_security_group`| Compute Security Group        |\n| Linux Function App         | `azurerm_linux_function_app`       | `cmdb_ci_cloud_function`        | Cloud Function                |\n| Windows Function App       | `azurerm_windows_function_app`     | `cmdb_ci_cloud_function`        | Cloud Function                |\n| Virtual Network            | `azurerm_virtual_network`          | `cmdb_ci_network`               | Cloud Network                 |\n| Tags                       | N\/A                                | `cmdb_key_value`                | Key Value                     |\n\n## Resource relationships\n\n| Child CI Class                                             | Relationship type      | Parent CI Class                                           |\n|------------------------------------------------------------|------------------------|-----------------------------------------------------------|\n| Azure Datacenter 1 (`cmdb_ci_azure_datacenter`)            | Hosted On::Hosts       | Cloud Service Account 2 (`cmdb_ci_cloud_service_account`) |\n| Azure Datacenter 2 (`cmdb_ci_azure_datacenter`)            | Hosted On::Hosts       | Cloud Service Account 3 (`cmdb_ci_cloud_service_account`) |\n| Azure Datacenter 1 (`cmdb_ci_azure_datacenter`)            | Contains::Contained by | Resource Group 1 (`cmdb_ci_resource_group`)               |\n| 
Cloud Storage Account 1 (`cmdb_ci_cloud_storage_account`)  | Hosted On::Hosts       | Azure Datacenter 2 (`cmdb_ci_azure_datacenter`)           |\n| Virtual Machine Instance 2 (`cmdb_ci_vm_instance`)         | Hosted On::Hosts       | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`)           |\n| Virtual Machine Instance 2 (`cmdb_ci_vm_instance`)         | Reference              | Key Value 14 (`cmdb_key_value`)                           |\n| Virtual Machine Instance 3 (`cmdb_ci_vm_instance`)         | Hosted On::Hosts       | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`)           |\n| Virtual Machine Instance 3 (`cmdb_ci_vm_instance`)         | Reference              | Key Value 15 (`cmdb_key_value`)                           |\n| Kubernetes Cluster 2 (`cmdb_ci_kubernetes_cluster`)        | Hosted On::Hosts       | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`)           |\n| Kubernetes Cluster 2 (`cmdb_ci_kubernetes_cluster`)        | Reference              | Key Value 16 (`cmdb_key_value`)                           |\n| Cloud DataBase 2 (`cmdb_ci_cloud_database`)                | Hosted On::Hosts       | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`)           |\n| Cloud DataBase 2 (`cmdb_ci_cloud_database`)                | Reference              | Key Value 9 (`cmdb_key_value`)                            |\n| Compute Security Group 2 (`cmdb_ci_compute_security_group`)| Hosted On::Hosts       | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`)           |\n| Compute Security Group 2 (`cmdb_ci_compute_security_group`)| Reference              | Key Value 17 (`cmdb_key_value`)                           |\n| Cloud Function 2 (`cmdb_ci_cloud_function`)                | Hosted On::Hosts       | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`)           |\n| Cloud Function 2 (`cmdb_ci_cloud_function`)                | Reference              | Key Value 19 (`cmdb_key_value`)                           |\n| Cloud Network 2 (`cmdb_ci_network`)                        | 
Hosted On::Hosts       | Azure Datacenter 1 (`cmdb_ci_azure_datacenter`)           |\n| Cloud Network 2 (`cmdb_ci_network`)                        | Reference              | Key Value 20 (`cmdb_key_value`)                           |\n\n## Field attributes mapping\n\n### Cloud Service Account (`cmdb_ci_cloud_service_account`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | Subscription ID extracted from `id`                            |\n| Account Id        | Subscription ID extracted from `id`                            |\n| Datacenter Type   | Defaults to `azure`                                            |\n| Object ID         | Subscription ID extracted from `id`                            |\n| Name              | Subscription ID extracted from `id`                            |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Azure Datacenter (`cmdb_ci_azure_datacenter`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | Concatenation of `location` and Subscription ID                |\n| Object Id         | `location`                                                     |\n| Region            | `location`                                                     |\n| Name              | `location`                                                     |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Virtual Machine Instance (`cmdb_ci_vm_instance`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                          
                                 |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Storage Account (`cmdb_ci_cloud_storage_account`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `resource_manager_id`                                          |\n| Object Id         | `resource_manager_id`                                          |\n| Fully qualified domain name| `id`                                                  |\n| Blob Service      | `storage_account_name`                                         |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Resource Group (`cmdb_ci_resource_group`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Location          | `location`                                                     |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Kubernetes Cluster (`cmdb_ci_kubernetes_cluster`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native 
Key | `id`                                                           |\n| IP Address        | `fqdn`                                                         |\n| Port              | Defaults to \"6443\"                                             |\n| Name              | `name`                                                         |\n| Location          | `location`                                                     |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud DataBase (`cmdb_ci_cloud_database`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Version           | `engine_version`                                               |\n| Fully qualified domain name| `fqdn`                                                |\n| Name              | `name`                                                         |\n| Vendor            | Defaults to `azure`                                            |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Compute Security Group (`cmdb_ci_compute_security_group`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Function 
(`cmdb_ci_cloud_function`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |\n\n### Cloud Network (`cmdb_ci_network`)\n\n| CMDB field        | Terraform state field                                          |\n|-------------------|----------------------------------------------------------------|\n| Source Native Key | `id`                                                           |\n| Object Id         | `id`                                                           |\n| Name              | `name`                                                         |\n| Operational Status| Defaults to \"1\" (\"Operational\")                                |","site":"terraform","answers_cleaned":""}
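The resource-to-class mapping above can be illustrated with a minimal, hypothetical Terraform configuration; the resource names and argument values here are examples only and are not taken from the original page:

```hcl
# Hypothetical example: resources the Service Graph Connector would map
# to the CMDB CI classes listed in the mapping table above.
resource "azurerm_resource_group" "example" {
  name     = "rg-example"  # -> cmdb_ci_resource_group "Name"
  location = "westeurope"  # -> "Location"; the region also yields a cmdb_ci_azure_datacenter CI
}

# Maps to cmdb_ci_compute_security_group; per the field attribute tables,
# Source Native Key and Object Id are both taken from `id` in the state.
resource "azurerm_network_security_group" "example" {
  name                = "nsg-example"  # -> "Name"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  tags = {
    environment = "dev"  # each tag is imported as a cmdb_key_value record
  }
}
```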
{"questions":"terraform Example customizations for the ServiceNow Service Catalog Integration This example use case creates a Terraform Catalog Item for requesting resources Cloud Example Customizations ServiceNow Service Catalog Integration Terraform page title Example Customizations","answers":"---\npage_title: >-\n  Example Customizations - ServiceNow Service Catalog Integration - Terraform\n  Cloud\ndescription: Example customizations for the ServiceNow Service Catalog Integration.\n---\n\n# Example Customizations\n\nThis example use case creates a Terraform Catalog Item for requesting resources\nwith custom variable values passed to the Terraform configuration.\n\n## Change the scope\n\nWhen you make a customization to the app, ensure you switch to the \"Terraform\"\nscope. This guarantees that all items you create are correctly assigned to that\nscope. To change the scope in your ServiceNow instance, click the globe icon at\nthe top right of the screen. For detailed instructions on changing the scope,\nrefer to the [ServiceNow\ndocumentation](https:\/\/developer.servicenow.com\/dev.do#!\/learn\/learning-plans\/xanadu\/new_to_servicenow\/app_store_learnv2_buildneedit_xanadu_application_scope).\n\n## Make a copy of the existing Catalog Item\n\nThe ServiceNow Service Catalog for Terraform application provides pre-configured [Catalog\nItems](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/developer-reference#example-service-catalog-flows-and-actions)\nfor immediate use. We recommend creating a copy of the most recent version of the\nCatalog Item to ensure you have access to the latest features and\nimprovements. Make a copy of the most appropriate Catalog Item for your specific\nbusiness requirements by following these steps:\n\n1. Navigate to **All > Service Catalog > Catalogs > Terraform Catalog**, and review the\n   Catalog Items based on flows, whose names use the suffix \"Flow\". 
\n   We recommend choosing Flows over Workflows because Flows provide enhanced functionality and performance and are actively developed by ServiceNow.\nFor more information, refer \n   to [Catalog Items based on Flows vs. Workflows](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/developer-reference#example-service-catalog-flows-and-actions).\n1. Open the Catalog Item in editing mode: \n   1. Click the Catalog Item to open the request form. \n   1. Click **...** in the top right corner. \n   1. Select **Configure Item** from the menu.\n   ![Screenshot: ServiceNow Configure Catalog Item](\/img\/docs\/servicenow-catalog-configure-item.png \"Screenshot of the ServiceNow Configure Catalog Item dropdown menu\")\n1. Click the **Process Engine** tab in the Catalog Item configuration. Take\n   note of the Flow name associated with the Catalog Item, because you need to\n   create a copy of this Flow as well.\n   ![Screenshot: ServiceNow Process Engine](\/img\/docs\/servicenow-catalog-process-engine.png \"Screenshot of the ServiceNow Configure Catalog Item \u2013 Process Engine tab\")\n1. Start the copying process: \n   1. Click the **Copy** button above the **Related Links** section. \n   1. Assign a new name to the copied Catalog Item. \n   1. Optionally, modify the description and short description fields. \n      Right-click the header and select **Save**.\n   ![Screenshot: ServiceNow Copy Item](\/img\/docs\/servicenow-catalog-copied-item.png \"Screenshot of the copied ServiceNow Catalog Item\")\n\n## Adjust the Variable Set\n\nIf a Catalog Item requires users to input variable values,\nyou must update the variable set with those required variables.\nAlthough some default Catalog Items come with pre-defined example variables, it\nis common practice to remove these and replace them with your own custom\nvariables.\n\n1. Create a new Variable Set.\n   1. 
On the Catalog Item's configuration page, under the **Related Links**\n      section, click the **Variable Sets** tab.\n   1. Click the **New** button to create a new variable set. Ensure that the\n   variables in your new set match the variables required by your Terraform\n   configuration.\n   ![Screenshot: ServiceNow New Variable Set](\/img\/docs\/servicenow-catalog-new-varset.png \"Screenshot of the ServiceNow Catalog Item \u2013 new Variable Set\")\n   1. Select **Single-Row Variable Set** and provide a title and description.\n   1. Click **Submit**. Upon submission, you will be redirected back to the Catalog\n   Item's configuration page.\n   ![Screenshot: ServiceNow New Variable Set Form](\/img\/docs\/servicenow-catalog-new-varset-form.png \"Screenshot of the ServiceNow Catalog Item \u2013 new Variable Set\")\n   1. Click the name of your newly created Variable Set and create your \n   variables. You must follow the [naming convention for Terraform\n   variables](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/developer-reference#terraform-variables-and-servicenow-variable-sets).\n   ServiceNow offers various types of variable representation (such as strings,\n   booleans, and dropdown menus). Refer to the [ServiceNow documentation on\n   variables](https:\/\/docs.servicenow.com\/csh?topicname=c_ServiceCatalogVariables.html&version=latest)\n   and select the types that best suit your use case. You can also set default\n   values for the variables in the **Default Value** tab, which ServiceNow prefills for the end users.\n   ![Screenshot: ServiceNow New Variables](\/img\/docs\/servicenow-catalog-variables.png \"Screenshot of the ServiceNow Catalog Item \u2013 new variables\")\n1. Attach the newly created Variable Set to your custom Catalog Item and remove\nthe default Workspace Variables.\n   1. Return to the **Variable Sets** tab on the Catalog Item's configuration page\n   and click the **Edit** button. \n   1. 
Move the \"Workspace Variables\" Set from the right side to the left side\n   and click **Save**. Do not remove the\n   \"Workspace Request Create\" or the \"Workspace Request Update\" Sets.\n   ![Screenshot: ServiceNow Remove Example Variables](\/img\/docs\/servicenow-catalog-remove-example-variables.png \"Screenshot of the ServiceNow Catalog Item \u2013 new variables\")\n\n## Make a copy of the Flow and Action\n\n1. Open the ServiceNow Studio by navigating to **All > Studio** and open the\n      \"Terraform\" application. Once in the **Terraform** application, navigate to\n      **Flow Designer > Flows**.\n      ![Screenshot: ServiceNow Flow Designer Interface](\/img\/docs\/servicenow-catalog-original-flow.png \"Screenshot of the ServiceNow Flow Designer \u2013 selecting a Flow\")\n\n      Another way to access the ServiceNow Studio is to click **All**, select\n      \"Flow Designer\", then select **Flows**. You can set the **Application** \n      filter to \"Terraform\" to quickly find the desired Flow.\n1. Open the Flow referenced in your Catalog Item. Click **...**\n   in the top right corner of the Flow Designer interface and \n   select **Copy flow**. Provide a name for the copied Flow and\n   click **Copy**.\n   ![Screenshot: ServiceNow Copy Flow Interface](\/img\/docs\/servicenow-catalog-copy-flow.png \"Screenshot of the ServiceNow Flow Designer \u2013 copying a Flow\")\n1. Customize your newly copied Flow by clicking **Edit flow**.\n   ![Screenshot: ServiceNow Edit New Flow Interface](\/img\/docs\/servicenow-catalog-edit-flow.png \"Screenshot of the ServiceNow Flow Designer \u2013 editing a Flow\")\n   1. Do not change the **Service Catalog** trigger.\n   1. Update the \"Get Catalog Variables\" action:\n      1. Keep the \"Requested Item Record\" in the **Submitted Request** field.\n      1. Select your newly created Catalog Item from the dropdown menu for\n         **Template Catalog Item**.\n      1. 
Move all of your variables to the **Selected** side in the **Catalog\n         Variables** section. Remove any previous example variables from the\n         **Available** side.\n         ![Screenshot: ServiceNow Get Variables Flow Step](\/img\/docs\/servicenow-catalog-get-variables.png \"Screenshot of the ServiceNow Flow Designer \u2013 getting Variables step\")\n      1. Click **Done** to finish configuring this Action.\n1. Unfold the second Action in the Flow and click the arrow to open it in\n      the Action Designer.\n      ![Screenshot: ServiceNow Open Action Designer](\/img\/docs\/servicenow-catalog-open-action.png \"Screenshot of the ServiceNow Action Designer\")\n      1. Click **...** in the top right corner and select **Copy Action**.\n         ![Screenshot: ServiceNow Copy Action](\/img\/docs\/servicenow-catalog-copy-action.png \"Screenshot of the ServiceNow Copy Action\")\n         Rename it and click **Copy**.\n         ![Screenshot: ServiceNow Rename Action](\/img\/docs\/servicenow-catalog-rename-action.png \"Screenshot of the ServiceNow Rename Action\")\n      1. In the Inputs section, remove any previous example variables.\n      ![Screenshot: ServiceNow Remove Variables From Action](\/img\/docs\/servicenow-catalog-remove-example-variables-from-action.png \"Screenshot of the ServiceNow Action Input Variables\")\n      1. Add your custom variables by clicking the **Create Input** button. Ensure that the variable names match your Catalog Item variables and select the variable type that matches each variable. Click **Save**.\n      ![Screenshot: ServiceNow Add Variables To Action](\/img\/docs\/servicenow-catalog-add-variables-to-action.png \"Screenshot of adding ServiceNow Action Input Variables\")\n      1. Open the **Script step** within the Action. Remove any example variables\n         and add your custom variables by clicking **Create Variable** at the\n         bottom. 
Enter the name of each variable and drag the corresponding data\n         pill from the right into the **Value** field.\n         ![Screenshot: ServiceNow Add Script Step Variables To Action](\/img\/docs\/servicenow-catalog-adjust-script-variables.png \"Screenshot of adjusting ServiceNow Action Script Variables\")\n      1. Click **Save** and then **Publish**.\n1. Reopen the Flow and attach the newly created Action to the Flow\n   after the \"Get Catalog Variables\" step:\n   1. Remove the \"Create Terraform Workspace with Vars\" Action that you copied earlier and replace it with\n      your newly created Action.\n      ![Screenshot: ServiceNow Replace Action Step](\/img\/docs\/servicenow-catalog-replace-action.png \"Screenshot of replacing ServiceNow Action step\")\n   1. Connect the new Action to the Flow by dragging and dropping the data pills\n      from the \"Get Catalog Variables\" Action to the corresponding inputs of\n      your new Action. Click **Done** to save this step.\n      ![Screenshot: ServiceNow Fill Variables for Action](\/img\/docs\/servicenow-catalog-fill-new-action-step.png \"Screenshot of filling out ServiceNow Action variables\")\n   1. Click **Save**.\n   1. Click **Activate** to enable the Flow and make it available for use.\n\n## Set the Flow for your Catalog Item\n\n1. Navigate back to the Catalog by clicking on **All** and then go to **Service\n   Catalog > Catalogs > Terraform Catalog**.\n1. Locate your custom Catalog Item and click **...** at the top\n   of the item. From the dropdown menu, select **Configure item**.\n1. In the configuration settings, click the **Process Engine** tab.\n1. In the **Flow** field, search for the Flow you recently created. 
Click\n   the Flow, then click **Update**.\n   ![Screenshot: ServiceNow Update Process Engine](\/img\/docs\/servicenow-catalog-update-process-engine.png \"Screenshot of updating Process Engine for the Catalog Item\")\n\n## Test the Catalog Item\n\nThe new item is now available in the Terraform Service Catalog. To make the\nnew item accessible to your end users via the Service Portal, follow these\nsteps:\n1. Navigate to the configuration page of the item you want to make available.\n1. Locate the **Catalogs** field on the configuration page and click the lock\n   icon next to it.\n1. In the search bar, type \"Service Catalog\" and select it from the search\n   results. Add \"Service Catalog\" to the list of catalogs associated with the\n   item. Click the lock icon again to lock the changes.\n   ![Screenshot: ServiceNow Enable Service Portal](\/img\/docs\/servicenow-catalog-service-portal.png \"Screenshot of adding the Catalog Item to the Service Portal\")\n1. Click the **Update** button at the top of the page.\n\nAfter completing these steps, end users will be able to\naccess the new item through the Service Portal of your ServiceNow instance. 
You\ncan access the Service Portal by navigating to **All > Service Portal Home**.","site":"terraform"}
{"questions":"terraform infrastructure from ServiceNow page title Setup Instructions ServiceNow Service Catalog Integration HCP Terraform Terraform ServiceNow Service Catalog Integration Setup Instructions Integration version v2 7 0 ServiceNow integration enables your users to order Terraform built","answers":"---\npage_title: Setup Instructions - ServiceNow Service Catalog Integration - HCP Terraform\ndescription: >-\n  ServiceNow integration enables your users to order Terraform-built\n  infrastructure from ServiceNow.\n---\n\n# Terraform ServiceNow Service Catalog Integration Setup Instructions\n\n-> **Integration version:**  v2.7.0\n\nThe Terraform ServiceNow Service Catalog integration enables your end-users to\nprovision self-serve infrastructure via ServiceNow. By connecting ServiceNow to\nHCP Terraform, this integration lets ServiceNow users order Service Items,\ncreate workspaces, and perform Terraform runs using prepared Terraform\nconfigurations hosted in VCS repositories or as [no-code\nmodules](\/terraform\/cloud-docs\/no-code-provisioning\/module-design) for\nself-service provisioning. \n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/servicenow-catalog.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n## Summary of the Setup Process\n\nThe integration relies on Terraform ServiceNow Catalog integration software\ninstalled within your ServiceNow instance. Installing and configuring this\nintegration requires administration in both ServiceNow and HCP Terraform.\nSince administrators of these services within your organization are not\nnecessarily the same person, this documentation refers to a **ServiceNow Admin**\nand a **Terraform Admin**.\n\nFirst, the Terraform Admin configures your HCP Terraform organization with a\ndedicated team for the ServiceNow integration, and obtains a team API token for\nthat team. 
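\n\nTerraform Admins who prefer to codify this setup can also create the dedicated team and its API token with the [tfe provider](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest). The following is a hedged sketch rather than an official setup step, and the organization name is a placeholder:\n\n```hcl\nresource \"tfe_team\" \"servicenow\" {\n  name         = \"ServiceNow\"\n  organization = \"ServiceNowExampleOrg\" # placeholder: use your organization name\n\n  # Grant the workspace-management permission the integration needs.\n  organization_access {\n    manage_workspaces = true\n  }\n}\n\n# Generates the team API token; the value is exposed as the sensitive\n# attribute `tfe_team_token.servicenow.token`.\nresource \"tfe_team_token\" \"servicenow\" {\n  team_id = tfe_team.servicenow.id\n}\n```\n\n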
The Terraform Admin provides the following to your ServiceNow admin:\n* An organization name\n* A team API token\n* The hostname of your HCP Terraform instance\n* Any available no-code modules or version control repositories containing Terraform configurations\n* Any Terraform or environment variables the configurations require\n\nNext, the ServiceNow Admin will install the Terraform ServiceNow Catalog\nintegration to your ServiceNow instance, and configure it using the team API\ntoken and hostname.\n\nFinally, the ServiceNow Admin will create a Service Catalog within ServiceNow\nfor the Terraform integration, and configure it using the version control\nrepositories or no-code modules, and variable definitions provided by the\nTerraform Admin.\n\n| ServiceNow Admin                                                             | Terraform Admin                                                                                                                            |\n| ---------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |\n|                                                                              | Prepare an organization for use with the ServiceNow Catalog.                                                                               |\n|                                                                              | Create a team that can manage workspaces in that organization.                                                                             |\n|                                                                              | Create a team API token so the integration can use that team's permissions.                              
                                  |\n|                                                                              | If using VCS repositories, retrieve the OAuth token IDs and repository identifiers that HCP Terraform uses to identify your VCS repositories. If using a no-code flow, [create a no-code ready module](\/terraform\/cloud-docs\/no-code-provisioning\/provisioning) in your organization's private registry. Learn more in [Configure VCS Repositories or No-Code Modules](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/service-catalog-config#configure-vcs-repositories-or-no-code-modules).|\n|                                                                              | Provide the API token, OAuth token ID, repository identifiers, variable definitions, and HCP Terraform hostname to the ServiceNow Admin. |\n| Install the Terraform integration application from the ServiceNow App Store. |                                                                                                                                            |\n| Connect the integration application with HCP Terraform.                    |                                                                                                                                            |\n| Add the Terraform Service Catalog to ServiceNow.                             |                                                                                                                                            |\n| If you are using the VCS flow, configure the VCS repositories in ServiceNow.         
|                                                                                                                                            |\n| Configure variable sets for use with the VCS repositories or no-code modules.|                                                                                                                                            |\n\nOnce these steps are complete, self-serve infrastructure will be available\nthrough the ServiceNow Catalog. HCP Terraform will provision and manage\nrequested infrastructure and report the status back to ServiceNow.\n\n## Prerequisites\n\nTo start using Terraform with the ServiceNow Catalog Integration, you must have:\n\n- An administrator account on a Terraform Enterprise instance or within an\n  HCP Terraform organization.\n- An administrator account on your ServiceNow instance.\n- If you are using the VCS flow, one or more [supported version control\n  systems](\/terraform\/cloud-docs\/vcs#supported-vcs-providers) (VCSs) with read\n  access to repositories with Terraform configurations.\n- If you are using no-code provisioning, one or more [no-code modules](\/terraform\/cloud-docs\/no-code-provisioning\/provisioning) created in\n  your organization's private registry. 
Refer to the [no-code module\nconfiguration](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/service-catalog-config#no-code-module-configuration)\nfor information about using no-code modules with the ServiceNow Service Catalog\nfor Terraform.\n\nYou can use this integration on the following ServiceNow server versions:\n\n- Utah\n- Vancouver\n- Washington DC\n- Xanadu\n\nIt requires the following ServiceNow plugins as dependencies:\n\n- Flow Designer support for the Service Catalog (`com.glideapp.servicecatalog.flow_designer`)\n- ServiceNow IntegrationHub Action Step - Script (`com.glide.hub.action_step.script`)\n- ServiceNow IntegrationHub Action Step - REST (`com.glide.hub.action_step.rest`)\n\n-> **Note:** Dependent plugins are installed on your ServiceNow instance automatically when the app is downloaded from the ServiceNow Store.\n\n## Configure HCP Terraform\n\nBefore installing the ServiceNow integration, the Terraform Admin will need to\nperform the following steps to configure and gather information from HCP\nTerraform.\n\n1. Either [create an\n   organization](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#creating-organizations)\n   or choose an existing organization where ServiceNow will create new\n   workspaces.\n   **Save the organization name for later.**\n2. [Create a team](\/terraform\/cloud-docs\/users-teams-organizations\/teams) for that\n   organization called \"ServiceNow\", and ensure that it has [permission to\n   manage\n   workspaces](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-all-workspaces).\n   You do not need to add any users to this team.\n   [permissions-citation]: #intentionally-unused---keep-for-maintainers\n3. On the \"ServiceNow\" team's settings page, generate a [team API\n   token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n   **Save the team API token for later.**\n4. 
If you are using the [VCS flow](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/service-catalog-config#vcs-configuration):\n    1. Ensure your Terraform organization is [connected to a VCS provider](\/terraform\/cloud-docs\/vcs). Repositories that are connectable to HCP Terraform workspaces can also be used as workspace templates in the ServiceNow integration.\n    2. On your organization's VCS provider settings page (**Settings** > **VCS Providers**), find the OAuth Token ID for the VCS provider(s) that you intend to use with the ServiceNow integration. HCP Terraform uses the OAuth token ID to identify and authorize the VCS provider. **Save the OAuth token ID for later.**\n    3. Identify the VCS repositories in the VCS provider containing Terraform configurations that the ServiceNow Terraform integration will deploy. Take note of any Terraform or environment variables used by the repositories you select. **Save the Terraform and environment variables for later.**\n5. If using the [no-code flow](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/service-catalog-config#no-code-module-configuration), create one or more no-code modules in the private registry of your HCP Terraform organization. **Save the no-code module names for later.**\n6. Provide the following information to the ServiceNow Admin:\n    * The organization name\n    * The team API token\n    * The hostname of your Terraform Enterprise instance, or of HCP Terraform. 
The hostname of HCP Terraform is `app.terraform.io`.\n    * The no-code module name(s) or the OAuth token ID(s) of your VCS provider(s), and the repository identifier for each VCS repository containing Terraform configurations that will be used by the integration.\n    * Any Terraform or environment variables required by the configurations in the\n   given VCS repositories.\n\n-> **Note:** Repository identifiers are determined by your VCS provider; they\ntypically use a format like `<ORGANIZATION>\/<REPO_NAME>` or\n`<PROJECT_KEY>\/<REPO_NAME>`. Azure DevOps repositories use the format\n`<ORGANIZATION>\/<PROJECT>\/_git\/<REPO_NAME>`. A GitHub repository hosted at\n`https:\/\/github.com\/exampleorg\/examplerepo\/` would have the repository\nidentifier `exampleorg\/examplerepo`.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nFor instance, if you are configuring this integration for your company, `Example\nCorp`, using two GitHub repositories, you would share values like the following\nwith the ServiceNow Admin.\n\n```markdown\nTerraform Enterprise Organization Name: `ServiceNowExampleOrg`\n\nTeam API Token: `q2uPExampleELkQ.atlasv1.A7jGHmvufExampleTeamAPITokenimVYxwunJk0xD8ObVol054`\n\nTerraform Enterprise Hostname: `terraform.corp.example`\n\nOAuth Token ID (GitHub org: example-corp): `ot-DhjEXAMPLELVtFA`\n  - Repository ID (Developer Environment): `example-corp\/developer-repo`\n    - Environment variables:\n      - `AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY`\n      - `AWS_SECRET_ACCESS_KEY=ZB0ExampleSecretAccessKeyGjUiJh`\n      - `AWS_DEFAULT_REGION=us-west-2`\n    - Terraform variables:\n      - `instance_type=t2.medium`\n  - Repository ID (Testing Environment): `example-corp\/testing-repo`\n    - Environment variables:\n      - `AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY`\n      - `AWS_SECRET_ACCESS_KEY=ZB0ExampleSecretAccessKeyGjUiJh`\n      - `AWS_DEFAULT_REGION=us-west-2`\n    - Terraform variables:\n      - `instance_type=t2.large`\n```\n\n## 
Install the ServiceNow Integration\n\nBefore beginning setup, the ServiceNow Admin must install the Terraform\nServiceNow Catalog integration software.\n\nThis can be added to your ServiceNow instance from the [ServiceNow\nStore](https:\/\/store.servicenow.com\/sn_appstore_store.do). Search for the \"Terraform\" integration,\npublished by \"HashiCorp Inc\".\n\n![Screenshot: ServiceNow Store Page](\/img\/docs\/service-now-store.png \"Screenshot of the ServiceNow Store listing for the Terraform Integration\")\n\n## Connect ServiceNow to HCP Terraform\n\n-> **ServiceNow Roles:** `admin` or `x_terraform.config_user`\n\nOnce the integration is installed, the ServiceNow Admin can connect your\nServiceNow instance to HCP Terraform. Before you begin, you will need the\ninformation described in the \"Configure HCP Terraform\" section from your\nTerraform Admin.\n\nOnce you have this information, connect ServiceNow to HCP Terraform with\nthe following steps.\n\n1. Navigate to your ServiceNow Service Management Screen.\n1. Using the left-hand navigation, open the configuration table for the\n   integration to manage the HCP Terraform connection.\n   - Terraform > Configs\n1. Click on \"New\" to create a new HCP Terraform connection.\n   - Set Org Name to the HCP Terraform organization name.\n   - Click on the \"Lock\" icon to set Hostname to the hostname of your Terraform\n     Enterprise instance. If you are using the SaaS version of HCP Terraform,\n     the hostname is `https:\/\/app.terraform.io`. Be sure to include \"https:\/\/\"\n     before the hostname.\n   - Set API Team Token to the HCP Terraform team API token.\n   - (Optional) To use the [MID Server](https:\/\/docs.servicenow.com\/csh?topicname=mid-server-landing.html&version=latest), \n     select the checkbox and type the `name` in the `MID Server Name` field.\n1. 
Click \"Submit\".\n\n![Screenshot: ServiceNow Terraform Config](\/img\/docs\/service-now-updated-config.png \"Screenshot of the ServiceNow Terraform Config New Record page\")\n\n## Create and Populate a Service Catalog\n\nNow that you have connected ServiceNow to HCP Terraform, you are ready to\ncreate a Service Catalog using the VCS repositories or no-code modules provided\nby the Terraform Admin.\n\nNavigate to the [Service Catalog documentation](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/service-catalog-config) to\nbegin. You can also refer to this documentation whenever you need to add or\nupdate request items.\n\n### ServiceNow Developer Reference\n\nServiceNow developers who wish to customize the Terraform integration can refer\nto the [developer documentation](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/developer-reference).\n\n### ServiceNow Administrator's Guide\n\nRefer to the [ServiceNow Administrator documentation](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/admin-guide) for\ninformation about configuring the integration.\n\n### Example Customizations\n\nOnce the ServiceNow integration is installed, you can consult the [example\ncustomizations documentation](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/example-customizations).","site":"terraform","answers_cleaned":"    page title  Setup Instructions   ServiceNow Service Catalog Integration   HCP Terraform description       ServiceNow integration enables your users to order Terraform built   infrastructure from ServiceNow         Terraform ServiceNow Service Catalog Integration Setup Instructions       Integration version     v2 7 0  The Terraform ServiceNow Service Catalog integration enables your end users to provision self serve infrastructure via ServiceNow  By connecting ServiceNow to HCP Terraform  this integration lets ServiceNow users order Service Items  
create workspaces  and perform Terraform runs using prepared Terraform configurations hosted in VCS repositories or as  no code modules   terraform cloud docs no code provisioning module design  for self service provisioning         BEGIN  TFC only name pnp callout      include  tfc package callouts servicenow catalog mdx       END  TFC only name pnp callout         Summary of the Setup Process  The integration relies on Terraform ServiceNow Catalog integration software installed within your ServiceNow instance  Installing and configuring this integration requires administration in both ServiceNow and HCP Terraform  Since administrators of these services within your organization are not necessarily the same person  this documentation refers to a   ServiceNow Admin   and a   Terraform Admin     First  the Terraform Admin configures your HCP Terraform organization with a dedicated team for the ServiceNow integration  and obtains a team API token for that team  The Terraform Admin provides the following to your ServiceNow admin    An Organization name   A team API token   The hostname of your HCP Terraform instance   Any available no code modules or version control repositories containing Terraform configurations    Other required variables token  the hostname of your HCP Terraform instance  and details about no code modules or version control repositories containing Terraform configurations and required variables to the ServiceNow Admin   Next  the ServiceNow Admin will install the Terraform ServiceNow Catalog integration to your ServiceNow instance  and configure it using the team API token and hostname   Finally  the ServiceNow Admin will create a Service Catalog within ServiceNow for the Terraform integration  and configure it using the version control repositories or no code modules  and variable definitions provided by the Terraform Admin     ServiceNow Admin                                                               Terraform Admin                            
                                                                                                                                                                                                                                                                                                                                                                                                                 Prepare an organization for use with the ServiceNow Catalog                                                                                                                                                                   Create a team that can manage workspaces in that organization                                                                                                                                                                 Create a team API token so the integration can use that team s permissions                                                                                                                                                    If using VCS repositories  retrieve the OAuth token IDs and repository identifiers that HCP Terraform uses to identify your VCS repositories  If using a no code flow   create a no code ready module   terraform cloud docs no code provisioning provisioning  in your organization s private registry  Learn more in  Configure VCS Repositories or No Code Modules   terraform cloud docs integrations service now service catalog terraform service catalog config configure vcs repositories or no code modules                                                                                     Provide the API token  OAuth token ID  repository identifiers  variable definitions  and HCP Terraform hostname to the ServiceNow Admin      Install the Terraform integration application from the ServiceNow App Store                                                                                                                                    
               Connect the integration application with HCP Terraform                                                                                                                                                                      Add the Terraform Service Catalog to ServiceNow                                                                                                                                                                               If you are using the VCS flow  configure the VCS repositories in ServiceNow                                                                                                                                                           Configure variable sets for use with the VCS repositories or no code modules                                                                                                                                                 Once these steps are complete  self serve infrastructure will be available through the ServiceNow Catalog  HCP Terraform will provision and manage requested infrastructure and report the status back to ServiceNow      Prerequisites  To start using Terraform with the ServiceNow Catalog Integration  you must have     An administrator account on a Terraform Enterprise instance or within a   HCP Terraform organization    An administrator account on your ServiceNow instance    If you are using the VCS flow  one or more  supported version control   systems   terraform cloud docs vcs supported vcs providers   VCSs  with read   access to repositories with Terraform configurations    If you are using no code provisioning  one or more  no code modules   terraform cloud docs no code provisioning provisioning  created in   your organization s private registry  Refer to the  no code module configuration   terraform cloud docs integrations service now service catalog terraform service catalog config no code module configuration  for information about using no code modules with the 
ServiceNow Service Catalog for Terraform   You can use this integration on the following ServiceNow server versions     Utah   Vancouver   Washington DC   Xanadu  It requires the following ServiceNow plugins as dependencies     Flow Designer support for the Service Catalog   com glideapp servicecatalog flow designer     ServiceNow IntegrationHub Action Step   Script   com glide hub action step script     ServiceNow IntegrationHub Action Step   REST   com glide hub action step rest         Note    Dependent plugins are installed on your ServiceNow instance automatically when the app is downloaded from the ServiceNow Store      Configure HCP Terraform  Before installing the ServiceNow integration  the Terraform Admin will need to perform the following steps to configure and gather information from HCP Terraform   1  Either  create an    organization   terraform cloud docs users teams organizations organizations creating organizations     or choose an existing organization where ServiceNow will create new    workspaces       Save the organization name for later    2   Create a team   terraform cloud docs users teams organizations teams  for that    organization called  ServiceNow   and ensure that it has  permission to    manage    workspaces   terraform cloud docs users teams organizations permissions manage all workspaces      You do not need to add any users to this team      permissions citation    intentionally unused   keep for maintainers 3  On the  ServiceNow  team s settings page  generate a  team API    token   terraform cloud docs users teams organizations api tokens team api tokens        Save the team API token for later    4  If you are using the  VCS flow   terraform cloud docs integrations service now service catalog terraform service catalog config vcs configuration        1  Ensure your Terraform organization is  connected to a VCS provider   terraform cloud docs vcs   Repositories that are connectable to HCP Terraform workspaces can also be used as 
workspace templates in the ServiceNow integration      2  On your organization s VCS provider settings page    Settings       VCS Providers     find the OAuth Token ID for the VCS provider s  that you intend to use with the ServiceNow integration  HCP Terraform uses the OAuth token ID to identify and authorize the VCS provider    Save the OAuth token ID for later        3  Identify the VCS repositories in the VCS provider containing Terraform configurations that the ServiceNow Terraform integration will deploy  Take note of any Terraform or environment variables used by the repositories you select  Save the Terraform and environment variables for later  5  If using the  no code flow   terraform cloud docs integrations service now service catalog terraform service catalog config no code module configuration   create one or more no code modules in the private registry of your HCP Terraform    Save the no code module names for later    6  Provide the following information to the ServiceNow Admin        The organization name        The team API token       The hostname of your Terraform Enterprise instance  or of HCP Terraform  The hostname of HCP Terraform is  app terraform io         The no code module name s  or the OAuth token ID s  of your VCS provider s   and the repository identifier for each VCS repository containing Terraform configurations that will be used by the integration        Any Terraform or environment variables required by the configurations in the    given VCS repositories        Note    Repository identifiers are determined by your VCS provider  they typically use a format like   ORGANIZATION   REPO NAME   or   PROJECT KEY   REPO NAME    Azure DevOps repositories use the format   ORGANIZATION   PROJECT   git  REPO NAME    A GitHub repository hosted at  https   github com exampleorg examplerepo   would have the repository identifier  exampleorg examplerepo     permissions citation    intentionally unused   keep for maintainers  For instance  if you 
are configuring this integration for your company   Example Corp   using two GitHub repositories  you would share values like the following with the ServiceNow Admin      markdown Terraform Enterprise Organization Name   ServiceNowExampleOrg   Team API Token   q2uPExampleELkQ atlasv1 A7jGHmvufExampleTeamAPITokenimVYxwunJk0xD8ObVol054   Terraform Enterprise Hostname   terraform corp example   OAuth Token ID  GitHub org  example corp    ot DhjEXAMPLELVtFA      Repository ID  Developer Environment    example corp developer repo        Environment variables           AWS ACCESS KEY ID AKIAEXAMPLEKEY           AWS SECRET ACCESS KEY ZB0ExampleSecretAccessKeyGjUiJh           AWS DEFAULT REGION us west 2        Terraform variables           instance type t2 medium      Repository ID  Testing Environment    example corp testing repo        Environment variables           AWS ACCESS KEY ID AKIAEXAMPLEKEY           AWS SECRET ACCESS KEY ZB0ExampleSecretAccessKeyGjUiJh           AWS DEFAULT REGION us west 2        Terraform variables           instance type t2 large          Install the ServiceNow Integration  Before beginning setup  the ServiceNow Admin must install the Terraform ServiceNow Catalog integration software   This can be added to your ServiceNow instance from the  ServiceNow Store  https   store servicenow com sn appstore store do   Search for the  Terraform  integration  published by  HashiCorp Inc      Screenshot  ServiceNow Store Page   img docs service now store png  Screenshot of the ServiceNow Store listing for the Terraform Integration       Connect ServiceNow to HCP Terraform       ServiceNow Roles     admin  or  x terraform config user   Once the integration is installed  the ServiceNow Admin can connect your ServiceNow instance to HCP Terraform  Before you begin  you will need the information described in the  Configure HCP Terraform  section from your Terraform Admin   Once you have this information  connect ServiceNow to HCP Terraform with the 
following steps   1  Navigate to your ServiceNow Service Management Screen  1  Using the left hand navigation  open the configuration table for the    integration to manage the HCP Terraform connection       Terraform   Configs 1  Click on  New  to create a new HCP Terraform connection       Set Org Name to the HCP Terraform organization name       Click on the  Lock  icon to set Hostname to the hostname of your Terraform      Enterprise instance  If you are using the SaaS version of HCP Terraform       the hostname is  https   app terraform io   Be sure to include  https          before the hostname       Set API Team Token to the HCP Terraform team API token        Optional  To use the  MID Server  https   docs servicenow com csh topicname mid server landing html version latest         select the checkbox and type the  name  in the  MID Server Name  field  1  Click  Submit      Screenshot  ServiceNow Terraform Config   img docs service now updated config png  Screenshot of the ServiceNow Terraform Config New Record page       Create and Populate a Service Catalog  Now that you have connected ServiceNow to HCP Terraform  you are ready to create a Service Catalog using the VCS repositories or no code modules provided by the Terraform Admin   Navigate to the  Service Catalog documentation   terraform cloud docs integrations service now service catalog terraform service catalog config  to begin  You can also refer to this documentation whenever you need to add or update request items       ServiceNow Developer Reference  ServiceNow developers who wish to customize the Terraform integration can refer to the  developer documentation   terraform cloud docs integrations service now service catalog terraform developer reference        ServiceNow Administrator s Guide   Refer to the  ServiceNow Administrator documentation   terraform cloud docs integrations service now service catalog terraform admin guide  for information about configuring the integration       Example 
Customizations  Once the ServiceNow integration is installed  you can consult the  example customizations documentation   terraform cloud docs integrations service now service catalog terraform example customizations  "}
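The handoff values described above (organization name, team API token, hostname, and repository identifiers) are easy to get subtly wrong. A minimal pre-flight check the Terraform Admin could run before sharing them is sketched below; the helper names and regular expressions are illustrative, not part of the integration, and the token shape is inferred from the `<id>.atlasv1.<secret>` example token shown in the setup instructions:

```python
import re

# Identifier formats from the note above:
#   most VCS providers: <ORGANIZATION>/<REPO_NAME> or <PROJECT_KEY>/<REPO_NAME>
#   Azure DevOps:       <ORGANIZATION>/<PROJECT>/_git/<REPO_NAME>
_GENERIC = re.compile(r"^[^/\s]+/[^/\s]+$")
_AZURE_DEVOPS = re.compile(r"^[^/\s]+/[^/\s]+/_git/[^/\s]+$")

def is_valid_repo_identifier(identifier: str) -> bool:
    """True if the identifier matches one of the documented formats."""
    return bool(_GENERIC.match(identifier) or _AZURE_DEVOPS.match(identifier))

def looks_like_team_token(token: str) -> bool:
    """Team API tokens look like '<id>.atlasv1.<secret>', as in the example above."""
    return ".atlasv1." in token

print(is_valid_repo_identifier("example-corp/developer-repo"))  # True
print(is_valid_repo_identifier("org/project/_git/repo"))        # True
print(is_valid_repo_identifier("not an identifier"))            # False
```

A check like this catches the most common handoff mistakes (spaces in identifiers, pasting a user token instead of a team token) before the ServiceNow Admin ever opens the Configs table.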
{"questions":"terraform page title Troubleshooting tips ServiceNow Service Catalog Integration HCP Terraform HCP Terraform ServiceNow Service Catalog Integration troubleshooting tips It also provides instructions on how to find and read logs to diagnose and resolve issues Troubleshooting tips for ServiceNow Service Catalog Integration This page offers troubleshooting tips for common issues with the ServiceNow Service Catalog Integration for HCP Terraform","answers":"---\npage_title: Troubleshooting tips - ServiceNow Service Catalog Integration - HCP Terraform\ndescription: Troubleshooting tips for ServiceNow Service Catalog Integration.\n---\n\n# HCP Terraform ServiceNow Service Catalog Integration troubleshooting tips\n\nThis page offers troubleshooting tips for common issues with the ServiceNow Service Catalog Integration for HCP Terraform.\nIt also provides instructions on how to find and read logs to diagnose and resolve issues.\n\n## Find logs\n\nLogs are crucial for diagnosing issues. You can find logs in ServiceNow in the following places:\n\n### Workflow logs\n\nTo find workflow logs, click on the RITM number on a failed ticket to open the request item.\nScroll down to **Related Links > Workflow Context** and open the **Workflow Log** tab.\n\n### Flow logs\n\nTo find flow logs, click on the RITM number on a failed ticket to open the request item.\nScroll down to **Related Links > Flow Context > Open Context Record** and open the **Flow engine log entries** tab.\n\n### Application logs\n\nTo find application logs, navigate to **All > System Log > Application Logs.**\nSet the **Application** filter to \"Terraform\".\nSearch for logs around the time your issue occurred. Some records include HTTP status codes and detailed error messages.\n\n### Outbound requests\n\nServiceNow logs all outgoing API calls, including calls to HCP Terraform. 
To view the log of outbound requests, navigate to **All > System Logs > Outbound HTTP Requests.**\nTo customize the table view, add columns like \"URL,\" \"URL Path,\" and \"Application scope.\"\nLogs from the Catalog app are marked with the `x_325709_terraform` scope.\n\n## Enable email notifications\n\nTo enable email notifications and receive updates on your requested item tickets:\n\n1. Log in to your ServiceNow instance as an administrator.\n1. Click **System Properties > Email Properties**.\n1. In the **Outbound Email Configuration** panel, select **Yes** next to the check-box with the email that ServiceNow should send notifications to.\n\nTo ensure you have relevant notifications configured in your instance:\n\n1. Navigate to **System Notification > Email > Notifications.**\n1. Search for \"Request Opened\" and \"Request Item Commented\" and ensure they are activated.\n\nReach out to ServiceNow customer support if you run into any issues with the global configurations.\n\n## Common problems\n\nThis section details frequently encountered issues and how they can be resolved.\n\n### Failure to create a workspace\n\nIf you order the \"create a workspace\" catalog item and nothing happens in ServiceNow and HCP Terraform does not create a workspace, there are several possible reasons why:\n\nEnsure your HCP Terraform token, hostname, and organization name are correct.\n\n1. Make sure to use a **Team API Token**. This can be found in HCP Terraform under \"API Tokens\".\n1. Ensure the team API token has the correct permissions.\n1. Double-check your organization name by copying and pasting it from HCP Terraform or Terraform Enterprise.\n1. Double-check your host name.\n1. Make sure you created your team API token in the same organization you are using.\n1. Test your configuration. 
First click **Update** to process any changes, then **Test Config** to make sure the connection is working.\n\nVerify your VCS configuration.\n\n1. The **Identifier** field should not have any spaces. The ServiceNow Service Catalog Integration requires that you format repository names in the `username\/repo_name` format.\n1. The **Name** can be anything, but it is best to avoid special characters.\n1. Double-check the OAuth token ID in your HCP Terraform\/Terraform Enterprise settings. To retrieve your OAuth token ID, navigate to your HCP Terraform organization's settings page, then click **Provider** in the left navigation bar under **Version Control**.\n\n### Failure to successfully order any catalog item\n\nAfter placing an order for any catalog item, navigate to the comments section in the newly created RITM ticket.\nThe latest comment will contain a response from HCP Terraform.\n\n### Frequency of comments and outputs\n\nWhen you place an order in the Terraform Catalog, ServiceNow submits and processes the order, then attaches additional comments to the order to indicate whether HCP Terraform successfully created the workspace.\n\nBy default, ServiceNow polls HCP Terraform every 5 minutes for the latest status of the Terraform run. ServiceNow does not show any comments until the next ping.\n\nTo configure ServiceNow to poll HCP Terraform more frequently:\n\n1. Navigate to **All > Flow designer**.\n1. Set the **Application** filter to **Terraform**.\n1. Under the **Name** column click **Worker Poll Run State**.\n1. Click on the trigger and adjust the interval to your desired schedule.\n1. Click **Done > Save > Activate** to save your changes.\n\n### Using the no-code modules feature\n\nIf ServiceNow fails to deploy a no-code module catalog item, verify the following:\n\n1. Ensure that your HCP Terraform organization has an [HCP Plus tier](https:\/\/www.hashicorp.com\/products\/terraform\/pricing) subscription.\n1. 
Ensure the name you enter for your no-code module in the catalog user form matches the no-code module in HCP Terraform.\n\n### Updating no-code workspaces\n\nIf the \u201cupdate no-code workspace\u201d catalog item returns the output message \u201cNo update has been made to the workspace\u201d, then you have not upgraded your no-code module in HCP Terraform.\n\n### Application Scope\n\nIf you are making customizations and you encounter unexpected issues, make sure to change the scope from **Global** to **Terraform** and recreate your customized items in the **Terraform scope**.\nFor additional instructions on customizations, refer to the [example customizations](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/example-customizations) documentation.\n\n### MID server\n\nIf you are using a MID server in your configuration, check the connectivity by using the **Test Config** button on the configurations page.\nAdditionally, when ServiceNow provisions a MID server, navigate to **MID Servers > Servers** to check if the status is \u201cup\u201d and \u201cvalidated\u201d.\n\n### Configuration\n\nWhile the app allows multiple config entries, only one should be present, as multiple entries can interfere with the functionality of the app.","site":"terraform","answers_cleaned":"    page title  Troubleshooting tips   ServiceNow Service Catalog Integration   HCP Terraform description  Troubleshooting tips for ServiceNow Service Catalog Integration         HCP Terraform ServiceNow Service Catalog Integration troubleshooting tips    This page offers troubleshooting tips for common issues with the ServiceNow Service Catalog Integration for HCP Terraform   It also provides instructions on how to find and read logs to diagnose and resolve issues      Find logs  Logs are crucial for diagnosing issues  You can find logs in ServiceNow in the following places       Workflow logs  To find workflow logs  click on the RITM number on a failed ticket to open the request 
item   Scroll down to   Related Links   Workflow Context   and open the   Workflow Log   tab        Flow logs  To find flow logs  click on the RITM number on a failed ticket to open the request item  Scroll down to   Related Links   Flow Context   Open Context Record   and open the   Flow engine log entries   tab        Application logs  To find application logs  navigate to   All   System Log   Application Logs    Set the   Application   filter to  Terraform    Search for logs around the time your issue occurred  Some records include HTTP status codes and detailed error messages       Outbound requests  ServiceNow logs all outgoing API calls  including calls to HCP Terraform  To view the log of outbound requests  navigate  to   All   System Logs   Outbound HTTP Requests     To customize the table view  add columns like  URL    URL Path   and  Application scope    Logs from the Catalog app are marked with the  x 325709 terraform  scope      Enable email notifications  To enable email notifications and receive updates on your requested item tickets   1  Log in to your ServiceNow instance as an administrator   1    Click System Properties   Email Properties     1  In the   Outbound Email Configuration   panel  select   Yes   next to the check box with the email that ServiceNow should send notifications to   To ensure you have relevant notifications configured in your instance    1  Navigate to   System Notification   Email   Notifications    1  Search for  Request Opened  and  Request Item Commented  and ensure they are activated   Reaching out to ServiceNow customer support if you run into any issues with the global configurations out to ServiceNow customer support if you run into any issues with the global configurations      Common problems  This section details frequently encountered issues and how they can be resolved        Failure to create a workspace  If you order the  create a workspace  catalog item and  nothing happens in ServiceNow and HCP Terraform does 
not create a workspace then there are several possible reasons why    Ensure your HCP Terraform token  hostname  and organization name is correct   1  Make sure to use a   Team API Token    This can be found in HCP Terraform under  API Tokens    1  Ensure the team API token has the correct permissions   1  Double check your organization name by copying and pasting it from HCP Terraform or Terraform Enterprise  1  Double check your host name   1  Make sure you created your team API token  in the same organization you are using  1  Test your configuration  First click   Update   to process any changes then   Test Config to make sure the connection is working     Verify your VCS configuration   1  The   Identifier   field should not have any spaces   The ServiceNow Service Catalog Integration requires that you format repository names in the  username repo name  format  1  The   Name   can be anything  but it is better to avoid special characters as per naming convention  1  Double check the OAuth token ID in your HCP Terraform Terraform Enterprise settings  To retrieve your OAuth token ID  navigate to your HCP Terraform organization s settings page  then click   Provider   in the left navigation bar under   Version Control          Failure to successfully order any catalog item  After placing an order for any catalog item  navigate to the comments section in the newly created RITM ticket   The latest comment will contain a response from HCP Terraform       Frequency of comments and outputs  When you place an order in the  Terraform Catalog  ServiceNow submits and processes the order  then attaches  additional comments  to the order to indicate whether HCP Terraform successfully created the workspace   By default  ServiceNow polls HCP Terraform every 5 minutes for the latest status of the Terraform run  ServiceNow does not show any comments until the next ping   To configure ServiceNow to poll HCP Terraform more frequently   1  Navigate to   All   Flow designer    1  
Set the   Application   filter to   Terraform    1  Under the   Name   column click    Worker Poll Run State    1  Click on the trigger and adjust the interval to your desired schedule   1  Click   Done   Save   Activate   to save your changes       Using no code modules feature  If ServiceNow fails to deploy a no code module catalog item  verify the following   1  Ensure that your HCP Terraform organization has an  HCP Plus tier  https   www hashicorp com products terraform pricing  subscription   1  Ensure the name you enter for your no code module in the catalog user form matches the no code module in HCP Terraform       Updating no code workspaces  If the  update no code workspace  catalog item returns the output message  No update has been made to the workspace   then you have not upgraded your no code module in HCP Terraform        Application Scope  If you are making customizations and you encounter unexpected issues  make sure to change the scope from   Global   to   Terraform   and recreate your customized items in the   Terraform scope     For additional instructions on customizations  refer to the  example customizations   terraform cloud docs integrations service now service catalog terraform example customizations  documentation       MID server    If you are using a MID server in your configuration  check the connectivity by using the   Test Config   button on the configurations page   Additionally  when ServiceNow provisions a MID server  navigate to   MID Servers   Servers   to check if the status is  up  and  validated        Configuration  While the app allows multiple config entries  only one should be present as this can interfere with the functionality of the app"}
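The token, hostname, and organization checks from the "Failure to create a workspace" section above can also be performed outside ServiceNow by calling the HCP Terraform API directly. A minimal sketch using Python's standard library follows; the function name is illustrative, while the endpoint and headers follow the public HCP Terraform API v2 conventions (`GET /api/v2/organizations/:organization_name` with a bearer token):

```python
import urllib.request

def build_org_check_request(hostname: str, org: str, team_token: str) -> urllib.request.Request:
    """Build the same kind of authenticated call the integration makes to HCP Terraform."""
    return urllib.request.Request(
        f"https://{hostname}/api/v2/organizations/{org}",
        headers={
            "Authorization": f"Bearer {team_token}",
            "Content-Type": "application/vnd.api+json",
        },
    )

req = build_org_check_request("app.terraform.io", "my-org", "<team-api-token>")
print(req.full_url)  # https://app.terraform.io/api/v2/organizations/my-org

# Sending the request requires network access and a real team API token:
#   with urllib.request.urlopen(req) as resp:
#       resp.status  # 200 means the token and organization name line up
# A 401 response points at the token; a 404 usually means a wrong
# organization name (or a hostname that is not an HCP Terraform / TFE instance).
```

Reproducing the failure outside ServiceNow narrows the problem to the credentials themselves rather than the Catalog app configuration.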
{"questions":"terraform When using ServiceNow with the HCP Terraform integration you will configure Service Catalog Configuration page title Service Catalog ServiceNow Service Catalog Integration HCP Terraform provision infrastructure using ServiceNow and HCP Terraform Configure Service Catalogs and VCS repositories to allow end users to","answers":"---\npage_title: Service Catalog - ServiceNow Service Catalog Integration - HCP Terraform\ndescription: |-\n  Configure Service Catalogs and VCS repositories to allow end users to\n  provision infrastructure using ServiceNow and HCP Terraform.\n---\n\n# Service Catalog Configuration\n\nWhen using ServiceNow with the HCP Terraform integration, you will configure\nat least one service catalog item. You will also configure one or more version\ncontrol system (VCS) repositories or no-code modules containing the Terraform\nconfigurations which will be used to provision that infrastructure. End users\nwill request infrastructure from the service catalog, and HCP Terraform will\nfulfill the request by creating a new workspace, applying the configuration, and\nthen reporting the results back to ServiceNow.\n\n## Prerequisites\n\nBefore configuring a service catalog, you must install and configure the\nHCP Terraform integration software on your ServiceNow instance. These steps\nare covered in the [installation documentation](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform).\n\nAdditionally, you must have the following information:\n\n1. The no-code module name or the OAuth token ID and repository identifier for\n   each VCS repository that HCP Terraform will use to provision\n   infrastructure. Your Terraform Admin will provide these to you. Learn more in\n   [Configure VCS Repositories or No-Code\n   Modules](\/terraform\/cloud-docs\/integrations\/service-now\/service-catalog-terraform\/service-catalog-config#configure-vcs-repositories-or-no-code-modules).\n1. 
Any Terraform or environment variables required by the configurations in the\n   given VCS repositories or no-code modules.\n\nOnce these steps are complete, in order for end users to provision\ninfrastructure with ServiceNow and HCP Terraform, the ServiceNow Admin will\nperform the following steps to make Service Items available to your end users.\n\n1. Add at least one service catalog for use with Terraform.\n1. If you are using the VCS flow, configure at least one VCS repository in ServiceNow.\n1. Create variable sets to define Terraform and environment variables to be used\n   by HCP Terraform to provision infrastructure.\n\n## Add the Terraform Service Catalog\n\n-> **ServiceNow Role:** `admin`\n\nFirst, add a Service Catalog for use with the Terraform integration. Depending\non your organization's needs, you might use a single service catalog, or\nseveral. If you already have a Service Catalog to use with Terraform, skip to\nthe next step.\n\n1. In ServiceNow, open the Service Catalog > Catalogs view by searching for\n   \"Service Catalog\" in the left-hand navigation.\n   1. Click the plus sign in the top right.\n   1. Select \"Catalogs > Terraform Catalog > Title and Image\" and choose a\n      location to add the Service Catalog.\n   1. Close the \"Sections\" dialog box by clicking the \"x\" in the upper right-hand\n      corner.\n\n-> **Note:** In step 1, be sure to choose \"Catalogs\", not \"Catalog\" from the\nleft-hand navigation.\n\n## Configure VCS Repositories or No-Code Modules\n\n-> **ServiceNow Roles:** `admin` or `x_terraform.vcs_repositories_user`\n\nTerraform workspaces created through the ServiceNow Service Catalog for\nTerraform can be associated with a VCS\nprovider repository or be backed by a [no-code\nmodule](\/terraform\/cloud-docs\/no-code-provisioning\/provisioning) in your\norganization's private registry. Administrators determine which workspace type\nend users can request from the Terraform Catalog. 
Below are the key\ndifferences between the version control and no-code approaches.\n\n### VCS configuration\n\nTo make infrastructure available to your users through version control\nworkspaces, you must add one or more VCS repositories containing Terraform\nconfigurations to the Service Catalog for Terraform.\n\n1. In ServiceNow, open the \"Terraform > VCS Repositories\" table by searching for\n   \"Terraform\" in the left-hand navigation.\n1. Click \"New\" to add a VCS repository, and fill in the following fields:\n   - Name: The name for this repository. This name will be visible to end\n     users, and does not have to be the same as the repository name as defined\n     by your VCS provider. Ideally it will succinctly describe the\n     infrastructure that will be provisioned by Terraform from the repository.\n   - OAuth Token ID: The OAuth token ID from your HCP Terraform\n     organization's VCS providers settings. This ID specifies which VCS\n     provider configured in HCP Terraform hosts the desired repository.\n   - Identifier: The VCS repository that contains the Terraform configuration\n     for this workspace template. Repository identifiers are determined by your\n     VCS provider; they typically use a format like\n     `<ORGANIZATION>\/<REPO_NAME>` or `<PROJECT KEY>\/<REPO_NAME>`. Azure DevOps\n     repositories use the format `<ORGANIZATION>\/<PROJECT>\/_git\/<REPO_NAME>`.\n   - The remaining fields are optional.\n     - Branch: The branch within the repository, if different from the default\n       branch.\n     - Working Directory: The directory within the repository containing\n       Terraform configuration.\n     - Terraform Version: The version of Terraform to use. This will default to\n       the latest version of Terraform supported by your HCP Terraform\n       instance.\n1. 
Click \"Submit\".\n\n![Screenshot: ServiceNow New VCS Repository](\/img\/docs\/service-now-vcs-repository.png \"Screenshot of the ServiceNow Terraform New VCS Repository page\")\n\nAfter configuring your repositories in ServiceNow, the names of those\nrepositories will be available in the \"VCS Repository\" dropdown menu a user\norders new workspaces through the following items in the Terraform Catalog:\n\n- **Create Workspace**\n- **Create Workspace with Variables**\n- **Provision Resources**\n- **Provision Resources with Variables**\n\n### No-Code Module Configuration\n\nIn version 2.5.0 and newer, ServiceNow administrators can configure\nCatalog Items using [no-code\nmodules](\/terraform\/cloud-docs\/no-code-provisioning\/provisioning). This release\nintroduces two new additions to the Terraform Catalog - no-code workspace\ncreate and update Items. Both utilize no-code modules from the private registry\nin HCP Terraform to enable end users to request infrastructure without writing\ncode.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/nocode.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nThe following Catalog Items allow you to build and manage workspaces with\nno-code modules:\n- **Provision No-Code Workspace and Deploy Resources**: creates a new Terraform\n  workspace based on a no-code module of your choice, supplies required variable\n  values, runs and applies Terraform.\n- **Update No-Code Workspace and Deploy Resources**: Updates an existing no-code\n  workspace to the most recent no-code module version, updates that workspace's\n  attached variable values, and then starts and applies a new Terraform run.   \n\nAdministrators can skip configuring VCS repositories in ServiceNow when using\nno-code modules. The only input required in the no-code workspace request form\nis the name of the no-code module. \n\nBefore utilizing a no-code module, you must publish it to the your organization's\nprivate module registry. 
With this one-time configuration complete, ServiceNow\nAdministrators can then call the modules through Catalog requests without\nrepository management, simplifying the use of infrastructure-as-code.\n\n> **Hands On:** Try the [Self-service enablement with HCP Terraform and ServiceNow tutorial](\/terraform\/tutorials\/it-saas\/servicenow-no-code).\n\n## Configure a Variable Set\n\nMost Terraform configurations can be customized with Terraform variables or\nenvironment variables. You can create a Variable Set within ServiceNow to\ncontain the variables needed for a given configuration. Your Terraform Admin\nshould provide these to you.\n\n1. In ServiceNow, open the \"Service Catalog > Variable Sets\" table by searching for\n   \"variable sets\" in the left-hand navigation.\n1. Click \"New\" to add a Variable Set.\n1. Select \"Single-Row Variable Set\".\n   - Title: User-visible title for the variable set.\n   - Internal name: The internal name for the variable set.\n   - Order: The order in which the variable set will be displayed.\n   - Type: Should be set to \"Single Row\"\n   - Application: Should be set to \"Terraform\"\n   - Display title: Whether the title is displayed to the end user.\n   - Layout: How the variables in the set will be displayed on the screen.\n   - Description: A long description of the variable set.\n1. Click \"Submit\" to create the variable set.\n1. Find and click on the title of the new variable set in the Variable Sets\n   table.\n1. At the bottom of the variable set details page, click \"New\" to add a new\n   variable.\n\n- Type: Should be \"Single Line Text\" for most variables, or \"Masked\" for\n  variables containing sensitive values.\n- Question: The user-visible question or label for the variable.\n- Name: The internal name of the variable. This must be derived from the name of the\n  Terraform or environment variable. 
Consult the table below to determine the\n  proper prefix for each variable name.\n- Tooltip: A tooltip to display for the variable.\n- Example Text: Example text to show in the variable's input box.\n\n1. Under the \"Default Value\" tab, you can set a default value for the variable.\n1. Continue to add new variables corresponding to the Terraform and environment\n   variables the configuration requires.\n\nWhen the Terraform integration applies configuration, it will map ServiceNow\nvariables to Terraform and environment variables using the following convention.\nServiceNow variables that begin with \"sensitive_\" will be saved as sensitive\nvariables within HCP Terraform.\n\n| ServiceNow Variable Name         | HCP Terraform Variable                                   |\n| -------------------------------- | ---------------------------------------------------------- |\n| `tf_var_VARIABLE_NAME`           | Terraform Variable: `VARIABLE_NAME`                        |\n| `tf_env_ENV_NAME`                | Environment Variable: `ENV_NAME`                           |\n| `sensitive_tf_var_VARIABLE_NAME` | Sensitive Terraform Variable (Write Only): `VARIABLE_NAME` |\n| `sensitive_tf_env_ENV_NAME`      | Sensitive Environment Variable (Write Only): `ENV_NAME`    |\n\n## Provision Infrastructure\n\nOnce you configure the Service Catalog for Terraform, ServiceNow users\ncan request infrastructure to be provisioned by HCP Terraform.\n\nThese requests will be fulfilled by HCP Terraform, which will:\n\n1. Create a new workspace from the no-code module or the VCS repository provided by ServiceNow.\n1. Configure variables for that workspace, also provided by ServiceNow.\n1. Plan and apply the change.\n1. 
Report the results, including any outputs from Terraform, to ServiceNow.\n\nOnce this is complete, ServiceNow will reflect that the Request Item has been\nprovisioned.\n\n-> **Note:** The integration creates workspaces with\n[auto-apply](\/terraform\/cloud-docs\/workspaces\/settings#auto-apply-and-manual-apply)\nenabled. HCP Terraform will queue an apply for these workspaces whenever\nchanges are merged to the associated VCS repositories. This is known as the\n[VCS-driven run workflow](\/terraform\/cloud-docs\/run\/ui). It is important to keep in mind\nthat all of the ServiceNow workspaces connected to a given repository will be\nupdated whenever changes are merged to the associated branch in that repository.\n\n## Execution Mode\n\nIf using v2.2.0 or above, the Service Catalog app allows you to set an [execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) for your Terraform workspaces. There are two modes to choose from:\n\n - The default value is \"Remote\", which instructs HCP Terraform to perform runs on its disposable virtual machines.\n - Selecting \"Agent\" mode allows you to run Terraform operations on isolated, private, or on-premises infrastructure. This option requires you to create an Agent Pool in your organization beforehand, then provide that Agent Pool's id when you order a new workspace through the Service Catalog.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/agents.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n## Workspace Name\n\nVersion 2.4.0 of the Service Catalog for Terraform introduces the ability to set custom names for your Terraform workspaces. You can choose a prefix for your workspace name that the Service Catalog app will append the ServiceNow RITM number to. 
If you do not define a workspace prefix, ServiceNow will use RITM number as the workspace name.\n\nWorkspace names can include letters, numbers, dashes (`-`), and underscores (`_`), and should not exceed 90 characters.\nRefer to the [workspace naming recommendations](\/terraform\/cloud-docs\/workspaces\/create#workspace-naming) for best practices.\n\n## Workspace Tags\n\nVersion 2.4.0 also allows you to add custom tags to your Terraform workspaces. Use the \"Workspace Tags\" field to provide a comma-separated list of tags that will be parsed and attached to the workspace in HCP Terraform. \n\nTags give you an easier way to categorize, filter, and manage workspaces provisioned through the Service Catalog for Terraform.\nWe recommend that you set naming conventions for tags with your end users to avoid variations such as `ec2`, `aws-ec2`, `aws_ec2`.\n\nWorkspace tags have a 255 character limit and can contain letters, numbers, colons, hyphens, and underscores. Refer to the [workspace tagging rules](\/terraform\/cloud-docs\/workspaces\/create#workspace-tags) for more details.","site":"terraform","answers_cleaned":"    page title  Service Catalog   ServiceNow Service Catalog Integration   HCP Terraform description       Configure Service Catalogs and VCS repositories to allow end users to   provision infrastructure using ServiceNow and HCP Terraform         Service Catalog Configuration  When using ServiceNow with the HCP Terraform integration  you will configure at least one service catalog item  You will also configure one or more version control system  VCS  repositories or no code modules containing the Terraform configurations which will be used to provision that infrastructure  End users will request infrastructure from the service catalog  and HCP Terraform will fulfill the request by creating a new workspace  applying the configuration  and then reporting the results back to ServiceNow      Prerequisites  Before configuring a service catalog  you must install 
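The workspace name and tag rules above can be sketched in a few lines of JavaScript. This is an illustrative sketch only, not code from the Service Catalog app: the helper names are invented, and the dash separator between the prefix and the RITM number is an assumption (the documentation only says the RITM number is appended).

```javascript
// Sketch of the documented rules -- not part of the Service Catalog app.
// Workspace names: letters, numbers, dashes, underscores, max 90 characters.
// Tags: comma-separated; letters, numbers, colons, hyphens, underscores,
// max 255 characters each.
function buildWorkspaceName(prefix, ritmNumber) {
  // Assumption: a dash joins the prefix and RITM number. Without a prefix,
  // the RITM number alone becomes the workspace name.
  var name = prefix ? prefix + '-' + ritmNumber : ritmNumber;
  if (!/^[A-Za-z0-9_-]{1,90}$/.test(name)) {
    throw new Error('invalid workspace name: ' + name);
  }
  return name;
}

function parseWorkspaceTags(field) {
  // Split the "Workspace Tags" field on commas, trim whitespace, and
  // validate each tag against the documented character set and length.
  return field.split(',')
    .map(function (t) { return t.trim(); })
    .filter(function (t) { return t.length > 0; })
    .map(function (t) {
      if (!/^[A-Za-z0-9:_-]{1,255}$/.test(t)) {
        throw new Error('invalid tag: ' + t);
      }
      return t;
    });
}
```

For example, `parseWorkspaceTags('ec2, aws:prod, team_web')` yields three tags, while a tag containing a space would be rejected.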
---
page_title: Developer Reference - ServiceNow Service Catalog Integration - HCP Terraform
description: Developer reference for the ServiceNow Service Catalog Integration.
---

# Terraform ServiceNow Service Catalog Integration Developer Reference

The Terraform ServiceNow integration can be customized by ServiceNow developers
using the information found in this document.

## Terraform Variables and ServiceNow Variable Sets

ServiceNow has the concept of a Variable Set, which is a collection of
ServiceNow Variables that can be referenced in a Flow from a Service Catalog
item. The Terraform Integration codebase can create [Terraform Variables and
Terraform Environment Variables](/terraform/cloud-docs/workspaces/variables)
via the API using the `tf_variable.createVariablesFromSet()` function.

This function looks for variables following these conventions:

| ServiceNow Variable Name         | HCP Terraform Variable                                     |
| -------------------------------- | ---------------------------------------------------------- |
| `tf_var_VARIABLE_NAME`           | Terraform Variable: `VARIABLE_NAME`                        |
| `tf_env_ENV_NAME`                | Environment Variable: `ENV_NAME`                           |
| `sensitive_tf_var_VARIABLE_NAME` | Sensitive Terraform Variable (Write Only): `VARIABLE_NAME` |
| `sensitive_tf_env_ENV_NAME`      | Sensitive Environment Variable (Write Only): `ENV_NAME`    |

This function takes the ServiceNow Variable Set and an HCP Terraform workspace
ID. It loops through the given variable set collection and creates any
necessary Terraform variables or environment variables in the workspace.

## Customizing with ServiceNow "Script Includes" Libraries

The Terraform/ServiceNow Integration codebase includes [ServiceNow Script
Includes
Classes](https://docs.servicenow.com/csh?topicname=c_ScriptIncludes.html&version=latest)
that are used to interface with HCP Terraform. The codebase also includes
example catalog items and flows that implement the interface to the HCP
Terraform API.

These classes and examples can be used to help create ServiceNow Catalog Items
customized to your specific ServiceNow instance and requirements.

### Script Include Classes

The ServiceNow Script Include Classes can be found in the ServiceNow Studio >
Server Development > Script Include.

| Class Name             | Description                                                    |
| ---------------------- | -------------------------------------------------------------- |
| `tf_config`            | Helper to pull values from the SN Terraform Configs Table      |
| `tf_get_workspace`     | Client-callable script to retrieve workspace data              |
| `tf_http`              | ServiceNow HTTP REST wrapper for requests to the Terraform API |
| `tf_no_code_workspace` | Resources for Terraform no-code module API requests            |
| `tf_run`               | Resources for Terraform run API requests                       |
| `tf_terraform_record`  | Manage ServiceNow Terraform Table Records                      |
| `tf_test_config`       | Client-callable script to test Terraform connectivity          |
| `tf_util`              | Miscellaneous helper functions                                 |
| `tf_variable`          | Resources for Terraform variable API requests                  |
| `tf_vcs_record`        | Manage ServiceNow Terraform VCS repositories table records     |
| `tf_workspace`         | Resources for Terraform workspace API requests                 |

### Example Service Catalog Flows and Actions

The ServiceNow Service Catalog for Terraform provides sample catalog items
that use **Flows** and **Workflows** as their primary process engines.
**Flows** are a newer solution developed by ServiceNow and are generally
preferred over **Workflows**. To see which engine an item uses, open it in
edit mode and navigate to the **Process Engine** tab. For example, **Create
Workspace** uses a **Workflow**, whereas **Create Workspace Flow** is built
upon a **Flow**. You can access both in the **Studio**. You can also manage
**Flows** in the **Flow Designer**. To manage **Workflows**, navigate to
**All > Workflow Editor**.

You can find the ServiceNow Example Flows for Terraform in the **ServiceNow
Studio > Flows** (or **All > Flow Designer**). Search for items that belong to
the **Terraform** application. By default, Flows execute when someone submits
an order request for a catalog item based on a Flow. Admins can customize the
Flows and Actions to add approval flows, set approval rules based on certain
conditions, and configure multiple users or roles as approvers for specific
catalog items.

| Flow Name                     | Description |
| ----------------------------- | ----------- |
| Create Workspace              | Creates a new HCP Terraform workspace from a VCS repository. |
| Create Workspace with Vars    | Creates a new HCP Terraform workspace from a VCS repository and creates any variables provided. |
| Create Run                    | Creates and queues a new run in the HCP Terraform workspace. |
| Apply Run                     | Applies a run in the HCP Terraform workspace. |
| Provision Resources           | Creates a new HCP Terraform workspace (with auto-apply), creates and queues a run, then applies the run when ready. |
| Provision Resources with Vars | Creates a new HCP Terraform workspace (with auto-apply), creates any variables, creates and queues a run, then applies the run when ready. |
| Provision No-Code Workspace and Deploy Resources | Creates a new HCP Terraform workspace based on a no-code module configured in the private registry (with auto-apply), creates any variables, creates and queues a run, then applies the run when ready. |
| Delete Workspace              | Creates a destroy run plan. |
| Worker Poll Run State         | Polls the HCP Terraform API for the current run state of a workspace. |
| Worker Poll Apply Run         | Polls the HCP Terraform API and applies any pending Terraform runs. |
| Worker Poll Destroy Workspace | Queries ServiceNow Terraform Records for resources marked `is_destroyable`, applies the destroy run to destroy resources, and deletes the corresponding Terraform workspace. |
| Update No-Code Workspace and Deploy Resources | Updates an existing no-code workspace to the most recent no-code module version, updates that workspace's attached variable values, and then starts a new Terraform run. |
| Update Workspace              | Updates HCP Terraform workspace configurations, such as VCS repository, description, project, execution mode, and agent pool ID (if applicable). |
| Update Workspace with Vars    | Allows you to change details about the HCP Terraform workspace configurations and attached variable values. |
| Update Resources              | Updates HCP Terraform workspace details and starts a new Terraform run with these new values. |
| Update Resources with Vars    | Updates your existing HCP Terraform workspace and its variables, then starts a Terraform run with these updated values. |

## ServiceNow ACLs

Access control lists (ACLs) restrict user access to objects and operations
based on permissions granted. This integration includes the following roles
that can be used to manage various components.

| Access Control Roles                | Description |
| ----------------------------------- | ----------- |
| `x_terraform.config_user`           | Can manage the connection from the ServiceNow application to your HCP Terraform organization. |
| `x_terraform.terraform_user`        | Can manage all of the Terraform resources created in ServiceNow. |
| `x_terraform.vcs_repositories_user` | Can manage the VCS repositories available for catalog items to be ordered by end-users. |

For users who only need to order from the Terraform Catalog, we recommend
creating another role with read-only permissions for
`x_terraform_vcs_repositories` to view the available repositories for ordering
infrastructure. Install the Terraform ServiceNow Service Catalog integration
by following [the installation
guide](/terraform/cloud-docs/integrations/service-now/service-catalog-terraform).
through the given variable set collection and create any necessary Terraform variables or environment variables in the workspace      Customizing with ServiceNow  Script Includes  Libraries  The Terraform ServiceNow Integration codebase includes  ServiceNow Script Includes Classes  https   docs servicenow com csh topicname c ScriptIncludes html version latest  that are used to interface with HCP Terraform  The codebase also includes example catalog items and flows that implement the interface to the HCP Terraform API   These classes and examples can be used to help create ServiceNow Catalog Items customized to your specific ServiceNow instance and requirements       Script Include Classes  The ServiceNow Script Include Classes can be found in the ServiceNow Studio   Server Development   Script Include     Class Name              Description                                                                                                                                            tf config              Helper to pull values from the SN Terraform Configs Table       tf get workspace       Client callable script to retrieve workspace data               tf http                ServiceNow HTTP REST wrapper for requests to the Terraform API      tf no code workspace   Resources for Terraform no code module API requests            tf run                 Resources for Terraform run API requests                        tf terraform record    Manage ServiceNow Terraform Table Records                       tf test config         Client callable script to test Terraform connectivity           tf util                Miscellaneous helper functions                                  tf variable            Resources for Terraform variable API Requests                   tf vcs record          Manage ServiceNow Terraform VCS repositories table records      tf workspace           Resources for Terraform workspace API requests                    Example Service Catalog Flows and Actions  
The ServiceNow Service Catalog for Terraform provides sample catalog items that use   Flows    and   Workflows   as their primary process engines     Flows   are a newer solution developed  by ServiceNow and are generally preferred over   Workflows    To see which engine an item is using  open it  in the edit mode and navigate to the   Process Engine   tab  For example    Create Workspace   uses a   Workflow     whereas   Create Workspace Flow   is built upon a   Flow    You can access both in the   Studio    You can also  manage   Flows   in the   Flow Designer    To manage   Workflows    navigate to   All   Workflow Editor     You can find the ServiceNow Example Flows for Terraform in the   ServiceNow Studio   Flows    or   All   Flow Designer      Search for items that belong to the   Terraform   application  By default  Flows execute when someone submits an order request  for a catalog item based on a Flow  Admins can customize the Flows and Actions to add approval flows  set approval rules based  on certain conditions  and configure multiple users or roles as approvers for specific catalog items      Flow Name                       Description                                                                                                                                                                                                                                                                                                                                                                                       Create Workspace                Creates a new HCP Terraform workspace from VCS repository                                                                                                                      Create Workspace with Vars      Creates a new HCP Terraform workspace from VCS repository and creates any variables provided                                                                                   Create Run                      Creates and 
queues a new run in the HCP Terraform workspace                                                                                                                        Apply Run                       Applies a run in the HCP Terraform workspace                                                                                                                                   Provision Resources             Creates a new HCP Terraform workspace  with auto apply   creates and queues a run  then applies the run when ready                                                                          Provision Resources with Vars   Creates a new HCP Terraform workspace  with auto apply   creates any variables  creates queues a run  applies the run when ready      Provision No Code Workspace and Deploy Resources   Creates a new HCP Terraform workspace based on a no code module configured in the private registry  with auto apply   creates any variables  creates and queues a run  then applies the run when ready                                                     Delete Workspace                Creates a destroy run plan                                                                                                                                                       Worker Poll Run State           Polls the HCP Terraform API for the current run state of a workspace                                                                                                           Worker Poll Apply Run           Polls the HCP Terraform API and applies any pending Terraform runs                                                                                                             Worker Poll Destroy Workspace   Queries ServiceNow Terraform Records for resources marked  is destroyable   applies the destroy run to destroy resources  and deletes the corresponding Terraform workspace      Update No Code Workspace and Deploy Resources   Updates an existing no code workspace to the most recent 
no code module version  updates that workspace s attached variable values  and then starts a new Terraform run         Update Workspace                Updates HCP Terraform workspace configurations  such as VCS repository  description  project  execution mode  and agent pool ID  if applicable                                     Update Workspace with Vars      Allows you to change details about the HCP Terraform workspace configurations and attached variable values                                                               Update Resources                Updates HCP Terraform workspace details and starts a new Terraform run with these new values                                                                                   Update Resources with Vars      Updates your existing HCP Terraform workspace and its variables  then starts a Terraform run with these updated values                                                           ServiceNow ACLs  Access control lists  ACLs  restrict user access to objects and operations based on permissions granted  This integration includes the following roles that can be used to manage various components     Access Control Roles                  Description                                                                                                                                                                                                                                    x terraform config user              Can manage the connection from the ServiceNow application to your HCP Terraform organization       x terraform terraform user           Can manage all of the Terraform resources created in ServiceNow                                      x terraform vcs repositories user    Can manage the VCS repositories available for catalog items to be ordered by end users             For users who only need to order from the Terraform Catalog  we recommend creating another role with read only permissions for  x terraform vcs 
repositories  to view the available repositories for ordering infrastructure  Install the Terraform ServiceNow Service Catalog integration by following  the installation guide   terraform cloud docs integrations service now service catalog terraform  "}
{"questions":"terraform page title HCP Terraform Run Tasks Integrations Setup Run Tasks Integration Run tasks allow HCP Terraform to execute tasks in external systems at In addition to using existing technology partners integrations HashiCorp HCP Terraform customers can build their own custom run task integrations Custom integrations have access to plan details in between the plan and apply phase and can display custom messages within the run pipeline as well as prevent a run from continuing to the apply phase specific points in the HCP Terraform run lifecycle","answers":"---\npage_title: HCP Terraform Run Tasks Integrations Setup\ndescription: >-\n  Run tasks allow HCP Terraform to execute tasks in external systems at\n  specific points in the HCP Terraform run lifecycle.\n---\n\n# Run Tasks Integration\n\nIn addition to using existing technology partner integrations, HashiCorp HCP Terraform customers can build their own custom run task integrations. Custom integrations have access to plan details in between the plan and apply phase, and can display custom messages within the run pipeline as well as prevent a run from continuing to the apply phase.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/run-tasks.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n## Prerequisites\n\nTo build a custom integration, you must have a server capable of receiving requests from HCP Terraform and responding with a status update to a supplied callback URL. When creating a run task, you supply an endpoint URL to receive the hook. We send a test POST to the supplied URL, and it must respond with a 200 for the run task to be created.\n\nThis feature relies heavily on the proper parsing of [plan JSON output](\/terraform\/internals\/json-format). 
When sending this output to an external system, be certain that the system can properly interpret the information provided.\n\n## Available Run Tasks\n\nYou can view the most up-to-date list of run tasks in the [Terraform Registry](https:\/\/registry.terraform.io\/browse\/run-tasks).\n\n## Integration Details\n\nWhen a run reaches the appropriate phase and a run task is triggered, the supplied URL will receive details about the run in a payload similar to the one below. The server receiving the run task should respond `200 OK`, or Terraform will retry triggering the run task.\n\nRefer to the [Run Task Integration API](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks-integration) for the exact payload specification.\n\n```json\n{\n  \"payload_version\": 1,\n  \"stage\": \"post_plan\",\n  \"access_token\": \"4QEuyyxug1f2rw.atlasv1.iDyxqhXGVZ0ykes53YdQyHyYtFOrdAWNBxcVUgWvzb64NFHjcquu8gJMEdUwoSLRu4Q\",\n  \"capabilities\": {\n    \"outcomes\": true\n  },\n  \"configuration_version_download_url\": \"https:\/\/app.terraform.io\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\/download\",\n  \"configuration_version_id\": \"cv-ntv3HbhJqvFzamy7\",\n  \"is_speculative\": false,\n  \"organization_name\": \"hashicorp\",\n  \"plan_json_api_url\": \"https:\/\/app.terraform.io\/api\/v2\/plans\/plan-6AFmRJW1PFJ7qbAh\/json-output\",\n  \"run_app_url\": \"https:\/\/app.terraform.io\/app\/hashicorp\/my-workspace\/runs\/run-i3Df5to9ELvibKpQ\",\n  \"run_created_at\": \"2021-09-02T14:47:13.036Z\",\n  \"run_created_by\": \"username\",\n  \"run_id\": \"run-i3Df5to9ELvibKpQ\",\n  \"run_message\": \"Triggered via UI\",\n  \"task_result_callback_url\": \"https:\/\/app.terraform.io\/api\/v2\/task-results\/5ea8d46c-2ceb-42cd-83f2-82e54697bddd\/callback\",\n  \"task_result_enforcement_level\": \"mandatory\",\n  \"task_result_id\": \"taskrs-2nH5dncYoXaMVQmJ\",\n  \"vcs_branch\": \"main\",\n  \"vcs_commit_url\": 
\"https:\/\/github.com\/hashicorp\/terraform-random\/commit\/7d8fb2a2d601edebdb7a59ad2088a96673637d22\",\n  \"vcs_pull_request_url\": null,\n  \"vcs_repo_url\": \"https:\/\/github.com\/hashicorp\/terraform-random\",\n  \"workspace_app_url\": \"https:\/\/app.terraform.io\/app\/hashicorp\/my-workspace\",\n  \"workspace_id\": \"ws-ck4G5bb1Yei5szRh\",\n  \"workspace_name\": \"tfr_github_0\",\n  \"workspace_working_directory\": \"\/terraform\"\n}\n```\n\nOnce your server receives this payload, HCP Terraform expects you to call back to the supplied `task_result_callback_url` using the `access_token` as an [Authentication Header](\/terraform\/cloud-docs\/api-docs#authentication) with a [jsonapi](\/terraform\/cloud-docs\/api-docs#json-api-formatting) payload of the form:\n\n```json\n{\n  \"data\": {\n    \"type\": \"task-results\",\n    \"attributes\": {\n      \"status\": \"running\",\n      \"message\": \"Hello task\",\n      \"url\": \"https:\/\/example.com\",\n      \"outcomes\": [...]\n    }\n  }\n}\n```\n\nHCP Terraform expects this callback within 10 minutes, or the task will be considered to have `errored`. The supplied message attribute will be displayed in HCP Terraform on the run details page. 
The status can be `running`, `passed` or `failed`.\n\nHere's what the data flow looks like:\n\n![Screenshot: a diagram of the user and data flow for an HCP Terraform run task](\/img\/docs\/terraform-cloud-run-tasks-diagram.png)\n\n\nRefer to the [run task integration API](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks-integration#structured-results) for the exact payload specifications, and the [run task JSON schema](https:\/\/github.com\/hashicorp\/terraform-docs-common\/blob\/main\/website\/public\/schema\/run-tasks\/runtask-result.json) for code generation and payload validation.\n\n\n## Securing your Run Task\n\nWhen creating your run task, you can supply an HMAC key which HCP Terraform will use to create a signature of the payload in the `X-Tfc-Task-Signature` header when calling your service.\n\nThe signature is a sha512 sum of the webhook body using the provided HMAC key. The generation of the signature depends on your implementation, however an example of how to generate a signature in bash is provided below.\n\n```bash\n$ echo -n $WEBHOOK_BODY | openssl dgst -sha512 -hmac \"$HMAC_KEY\"\n```\n\n## HCP Packer Run Task\n> **Hands On:** Try the [Set Up HCP Terraform Run Task for HCP Packer](\/packer\/tutorials\/hcp\/setup-hcp-terraform-run-task), [Standard tier run task image validation](\/packer\/tutorials\/hcp\/run-tasks-data-source-image-validation), and [Plus tier run task image validation](\/packer\/tutorials\/hcp\/run-tasks-resource-image-validation) tutorials to set up and test the HCP Terraform Run Task integration end to end.\n\n[Packer](https:\/\/www.packer.io\/) lets you create identical machine images for multiple platforms from a single source template. 
The [HCP Packer registry](\/hcp\/docs\/packer) lets you track golden images, designate images for test and production environments, and query images to use in Packer and Terraform configurations.\n\nThe HCP Packer validation run task checks the image artifacts within a Terraform configuration. If the configuration references images marked as unusable (revoked), the run task fails and provides an error message containing the number of revoked artifacts and whether HCP Packer has metadata for newer versions. For HCP Packer Plus registries, run tasks also help you identify hardcoded and untracked images that may not meet security and compliance requirements.\n\nTo get started, [create an HCP Packer account](https:\/\/cloud.hashicorp.com\/products\/packer) and follow the instructions in the [HCP Packer Run Task](\/hcp\/docs\/packer\/manage-image-use\/terraform-cloud-run-tasks) documentation.","site":"terraform","answers_cleaned":"    page title  HCP Terraform Run Tasks Integrations Setup description       Run tasks allow HCP Terraform to execute tasks in external systems at   specific points in the HCP Terraform run lifecycle         Run Tasks Integration  In addition to using existing technology partners integrations  HashiCorp HCP Terraform customers can build their own custom run task integrations  Custom integrations have access to plan details in between the plan and apply phase  and can display custom messages within the run pipeline as well as prevent a run from continuing to the apply phase        BEGIN  TFC only name pnp callout      include  tfc package callouts run tasks mdx       END  TFC only name pnp callout         Prerequisites  To build a custom integration  you must have a server capable of receiving requests from HCP Terraform and responding with a status update to a supplied callback URL  When creating a run task  you supply an endpoint url to receive the hook  We send a test POST to the supplied URL  and it must respond with a 200 for the run task 
to be created   This feature relies heavily on the proper parsing of  plan JSON output   terraform internals json format   When sending this output to an external system  be certain that system can properly interpret the information provided      Available Run Tasks  You can view the most up to date list of run tasks in the  Terraform Registry  https   registry terraform io browse run tasks       Integration Details  When a run reaches the appropriate phase and a run task is triggered  the supplied URL will receive details about the run in a payload similar to the one below  The server receiving the run task should respond  200 OK   or Terraform will retry to trigger the run task   Refer to the  Run Task Integration API   terraform cloud docs api docs run tasks run tasks integration  for the exact payload specification      json      payload version   1     stage    post plan      access token    4QEuyyxug1f2rw atlasv1 iDyxqhXGVZ0ykes53YdQyHyYtFOrdAWNBxcVUgWvzb64NFHjcquu8gJMEdUwoSLRu4Q      capabilities          outcomes   true         configuration version download url    https   app terraform io api v2 configuration versions cv ntv3HbhJqvFzamy7 download      configuration version id    cv ntv3HbhJqvFzamy7      is speculative   false     organization name    hashicorp      plan json api url    https   app terraform io api v2 plans plan 6AFmRJW1PFJ7qbAh json output      run app url    https   app terraform io app hashicorp my workspace runs run i3Df5to9ELvibKpQ      run created at    2021 09 02T14 47 13 036Z      run created by    username      run id    run i3Df5to9ELvibKpQ      run message    Triggered via UI      task result callback url    https   app terraform io api v2 task results 5ea8d46c 2ceb 42cd 83f2 82e54697bddd callback      task result enforcement level    mandatory      task result id    taskrs 2nH5dncYoXaMVQmJ      vcs branch    main      vcs commit url    https   github com hashicorp terraform random commit 7d8fb2a2d601edebdb7a59ad2088a96673637d22  
    vcs pull request url   null     vcs repo url    https   github com hashicorp terraform random      workspace app url    https   app terraform io app hashicorp my workspace      workspace id    ws ck4G5bb1Yei5szRh      workspace name    tfr github 0      workspace working directory     terraform         Once your server receives this payload  HCP Terraform expects you to callback to the supplied  task result callback url  using the  access token  as an  Authentication Header   terraform cloud docs api docs authentication  with a  jsonapi   terraform cloud docs api docs json api formatting  payload of the form       json      data          type    task results          attributes              status    running            message    Hello task            url    https   example com            outcomes                            HCP Terraform expects this callback within 10 minutes  or the task will be considered to have  errored   The supplied message attribute will be displayed in HCP Terraform on the run details page  The status can be  running    passed  or  failed    Here s what the data flow looks like     Screenshot  a diagram of the user and data flow for an HCP Terraform run task   img docs terraform cloud run tasks diagram png    Refer to the  run task integration API   terraform cloud docs api docs run tasks run tasks integration structured results  for the exact payload specifications  and the  run task JSON schema  https   github com hashicorp terraform docs common blob main website public schema run tasks runtask result json  for code generation and payload validation       Securing your Run Task  When creating your run task  you can supply an HMAC key which HCP Terraform will use to create a signature of the payload in the  X Tfc Task Signature  header when calling your service   The signature is a sha512 sum of the webhook body using the provided HMAC key  The generation of the signature depends on your implementation  however an example of how to 
generate a signature in bash is provided below      bash   echo  n  WEBHOOK BODY   openssl dgst  sha512  hmac   HMAC KEY          HCP Packer Run Task     Hands On    Try the  Set Up HCP Terraform Run Task for HCP Packer   packer tutorials hcp setup hcp terraform run task    Standard tier run task image validation   packer tutorials hcp run tasks data source image validation   and  Plus tier run task image validation   packer tutorials hcp run tasks resource image validation  tutorials to set up and test the HCP Terraform Run Task integration end to end    Packer  https   www packer io   lets you create identical machine images for multiple platforms from a single source template  The  HCP Packer registry   hcp docs packer  lets you track golden images  designate images for test and production environments  and query images to use in Packer and Terraform configurations   The HCP Packer validation run task checks the image artifacts within a Terraform configuration  If the configuration references images marked as unusable  revoked   the run task fails and provides an error message containing the number of revoked artifacts and whether HCP Packer has metadata for newer versions  For HCP Packer Plus registries  run tasks also help you identify hardcoded and untracked images that may not meet security and compliance requirements   To get started   create an HCP Packer account  https   cloud hashicorp com products packer  and follow the instructions in the  HCP Packer Run Task   hcp docs packer manage image use terraform cloud run tasks  documentation "}
{"questions":"terraform Warning Version 1 of the HCP Terraform Operator for Kubernetes is deprecated and no longer maintained If you are installing the operator for the first time refer to Set up the HCP Terraform Operator for Kubernetes terraform cloud docs integrations kubernetes setup for guidance HCP Terraform Operator for Kubernetes v2 Migration Guide Upgrade the Terraform Kubernetes Operator from version 1 to version 2 page title HCP Terraform Operator for Kubernetes v2 Migration Guide","answers":"---\npage_title: HCP Terraform Operator for Kubernetes v2 Migration Guide\ndescription: >-\n  Upgrade the Terraform Kubernetes Operator from version 1 to version 2.\n---\n\n# HCP Terraform Operator for Kubernetes v2 Migration Guide\n\n~> **Warning**: Version 1 of the HCP Terraform Operator for Kubernetes is **deprecated** and no longer maintained. If you are installing the operator for the first time, refer to [Set up the HCP Terraform Operator for Kubernetes](\/terraform\/cloud-docs\/integrations\/kubernetes\/setup) for guidance.\n\nTo upgrade the HCP Terraform Operator for Kubernetes from version 1 to version 2, you must complete a one-time process. This process upgrades the operator to the newest version and migrates your custom resources. 
\n\n## Prerequisites\n\nThe migration process requires the following tools to be installed locally:\n\n- [kubectl](https:\/\/kubernetes.io\/docs\/tasks\/tools\/#kubectl)\n- [Helm](https:\/\/helm.sh\/docs\/intro\/install\/)\n\n## Prepare for the upgrade\n\nConfigure an environment variable named `RELEASE_NAMESPACE` with the value of the namespace that the Helm chart is installed in.\n\n```shell-session\n$ export RELEASE_NAMESPACE=<NAMESPACE>\n```\n\nNext, create an environment variable named `RELEASE_NAME` with the name that you gave your Helm chart installation.\n\n```shell-session\n$ export RELEASE_NAME=<INSTALLATION_NAME>\n```\n\nBefore you migrate to HCP Terraform Operator for Kubernetes v2, you must first update v1 of the operator to the latest version, including the custom resource definitions.\n\n```shell-session\n$ helm upgrade --namespace ${RELEASE_NAMESPACE} ${RELEASE_NAME} hashicorp\/terraform\n```\n\nNext, back up the workspace resources.\n\n```shell-session\n$ kubectl get workspace --all-namespaces -o yaml > backup_tfc_operator_v1.yaml\n```\n\n## Manifest schema migration\n\nVersion 2 of the HCP Terraform Operator for Kubernetes renames and moves many existing fields. When you migrate, you must update your specification to match version 2's field names.\n\n### Workspace controller\n\nThe table below lists the field mapping of the `Workspace` controller between v1 and v2 of the operator.\n\n| Version 1 | Version 2 | Changes between versions |\n| --- | --- | --- |\n| `apiVersion: app.terraform.io\/v1alpha1` | `apiVersion: app.terraform.io\/v1alpha2` | The `apiVersion` is now `v1alpha2`. |\n| `kind: Workspace` | `kind: Workspace` | None. |\n| `metadata` | `metadata` | None. |\n| `spec.organization` | `spec.organization` | None. |\n| `spec.secretsMountPath` | `spec.token.secretKeyRef` | In v2 the operator keeps the HCP Terraform access token in a Kubernetes Secret. 
|\n| `spec.vcs` | `spec.versionControl` | Renamed the `vcs` field to `versionControl`. |\n| `spec.vcs.token_id` | `spec.versionControl.oAuthTokenID` | Renamed the `token_id` field to `oAuthTokenID`. |\n| `spec.vcs.repo_identifier` | `spec.versionControl.repository` | Renamed the `repo_identifier` field to `repository`. |\n| `spec.vcs.branch` | `spec.versionControl.branch` | None. |\n| `spec.vcs.ingress_submodules` | `spec.workingDirectory` | Moved. |\n| `spec.variables.[*]` | `spec.environmentVariables.[*]` OR `spec.terraformVariables.[*]` | <a id=\"workspace-variables\"><\/a>We split variables into two possible places. In v1's CRD, if `spec.variables.environmentVariable` was `true`, migrate those variables to `spec.environmentVariables`. If `false`, migrate those variables to `spec.terraformVariables`. |\n| `spec.variables.[*]key` | `spec.environmentVariables.[*]name` OR `spec.terraformVariables.[*]name` | Renamed the `key` field as `name`. [Learn more](#workspace-variables).|\n| `spec.variables.[*]value` | `spec.environmentVariables.[*]value` OR `spec.terraformVariables.[*]value` | [Learn more](#workspace-variables). |\n| `spec.variables.[*]valueFrom` | `spec.environmentVariables.[*]valueFrom` OR `spec.terraformVariables.[*]valueFrom` | [Learn more](#workspace-variables). |\n| `spec.variables.[*]hcl` | `spec.environmentVariables.[*]hcl` OR `spec.terraformVariables.[*]hcl` | [Learn more](#workspace-variables). |\n| `spec.variables.sensitive` |  `spec.environmentVariables.[*]sensitive` OR `spec.terraformVariables.[*]sensitive` | [Learn more](#workspace-variables). |\n| `spec.variables.environmentVariable` | N\/A | Removed, variables are split between `spec.environmentVariables` and `spec.terraformVariables`. |\n| `spec.runTriggers.[*]` | `spec.runTriggers.[*]` | None. |\n| `spec.runTriggers.[*].sourceableName` | `spec.runTriggers.[*].name` | The `sourceableName` field is now `name`. |\n| `spec.sshKeyID` | `spec.sshKey.id` | Moved the `sshKeyID` to `spec.sshKey.id`. 
|\n| `spec.outputs` | N\/A | Removed. |\n| `spec.terraformVersion` | `spec.terraformVersion` | None. |\n| `spec.notifications.[*]` | `spec.notifications.[*]` | None. |\n| `spec.notifications.[*].type` | `spec.notifications.[*].type` | None. |\n| `spec.notifications.[*].enabled` | `spec.notifications.[*].enabled` | None. |\n| `spec.notifications.[*].name` | `spec.notifications.[*].name` | None. |\n| `spec.notifications.[*].url` | `spec.notifications.[*].url` | None. |\n| `spec.notifications.[*].token` | `spec.notifications.[*].token` | None. |\n| `spec.notifications.[*].triggers.[*]` | `spec.notifications.[*].triggers.[*]` | None. |\n| `spec.notifications.[*].recipients.[*]` | `spec.notifications.[*].emailAddresses.[*]` | Renamed the `recipients` field to `emailAddresses`. |\n| `spec.notifications.[*].users.[*]` | `spec.notifications.[*].emailUsers.[*]` | Renamed the `users` field to `emailUsers`. |\n| `spec.omitNamespacePrefix` | N\/A | Removed. In v1 `spec.omitNamespacePrefix` is a boolean field that affects how the operator generates a workspace name. In v2, you must explicitly set workspace names in `spec.name`. |\n| `spec.agentPoolID` | `spec.agentPool.id` | Moved the `agentPoolID` field to `spec.agentPool.id`. |\n| `spec.agentPoolName` | `spec.agentPool.name` | Moved the `agentPoolName` field to `spec.agentPool.name`. |\n| `spec.module` | N\/A | Removed. You now configure modules with a separate `Module` CRD. [Learn more](#module-controller). |\n\nBelow is an example of configuring a variable in v1 of the operator. 
\n\n<CodeBlockConfig filename=\"v1.yaml\">\n\n```yaml\napiVersion: app.terraform.io\/v1alpha1\nkind: Workspace\nmetadata:\n  name: migration\nspec:\n  variables:\n    - key: username\n      value: \"user\"\n      hcl: true\n      sensitive: false\n      environmentVariable: false\n    - key: SECRET_KEY\n      value: \"s3cr3t\"\n      hcl: false\n      sensitive: false\n      environmentVariable: true\n```\n\n<\/CodeBlockConfig>\n\nIn v2 of the operator, you must configure Terraform variables in `spec.terraformVariables` and environment variables in `spec.environmentVariables`.\n\n<CodeBlockConfig filename=\"v2.yaml\">\n\n```yaml\napiVersion: app.terraform.io\/v1alpha2\nkind: Workspace\nmetadata:\n  name: migration\nspec:\n  terraformVariables:\n    - name: username\n      value: \"user\"\n      hcl: true\n      sensitive: false\n  environmentVariables:\n    - name: SECRET_KEY\n      value: \"s3cr3t\"\n      hcl: false\n      sensitive: false\n```\n\n<\/CodeBlockConfig>\n\n### Module controller\n\nHCP Terraform Operator for Kubernetes v2 configures modules in a new `Module` controller separate from the `Workspace` controller. Below is a template of a custom resource manifest:\n\n```yaml\napiVersion: app.terraform.io\/v1alpha2\nkind: Module\nmetadata:\n  name: <NAME>\nspec:\n  organization: <ORG-NAME>\n  token:\n    secretKeyRef:\n      name: <SECRET-NAME>\n      key: <KEY-NAME>\n  name: operator\n```\n\nThe table below describes the mapping between the `Workspace` controller from v1 and the `Module` controller in v2 of the operator.\n\n| Version 1 (Workspace CRD) | Version 2 (Module CRD) | Notes |\n| --- | --- | --- |\n| `spec.module` | N\/A | In v2 of the operator, a `Module` is a separate controller with its own CRD. |\n| N\/A | `spec.name: operator` | In v1 of the operator, the name of the generated module is hardcoded to `operator`. In v2, the default name of the generated module is `this`, but you can rename it. 
|\n| `spec.module.source` | `spec.module.source` | This supports all Terraform [module sources](\/terraform\/language\/modules\/sources). |\n| `spec.module.version` | `spec.module.version` | Refer to [module sources](\/terraform\/language\/modules\/sources) for versioning information for each module source. |\n| `spec.variables.[*]` | `spec.variables.[*].name` | You should include variable names in the module. This is a reference to variables in the workspace that is executing the module. |\n| `spec.outputs.[*].key` | `spec.outputs.[*].name` | You should include output names in the module. This is a reference to the output variables produced by the module. |\n| `status.workspaceID` OR `metadata.namespace-metadata.name` | `spec.workspace.id` OR `spec.workspace.name` | The workspace where the module is executed. The workspace must be in the same organization. |\n\nBelow is an example migration of a `Module` between v1 and v2 of the operator:\n\n<CodeBlockConfig filename=\"v1.yaml\">\n\n```yaml\napiVersion: app.terraform.io\/v1alpha1\nkind: Workspace\nmetadata:\n  name: migration\nspec:\n  module:\n    source: app.terraform.io\/org-name\/module-name\/provider\n    version: 0.0.42\n  variables:\n    - key: username\n      value: \"user\"\n      hcl: true\n      sensitive: false\n      environmentVariable: false\n    - key: SECRET_KEY\n      value: \"s3cr3t\"\n      hcl: false\n      sensitive: false\n      environmentVariable: true\n```\n\n<\/CodeBlockConfig>\n\nIn v2 of the operator, separate controllers manage workspaces and modules.\n\n<CodeBlockConfig filename=\"workspace-v2.yaml\">\n\n```yaml\napiVersion: app.terraform.io\/v1alpha2\nkind: Workspace\nmetadata:\n  name: migration\nspec:\n  terraformVariables:\n    - name: username\n      value: \"user\"\n      hcl: true\n      sensitive: false\n  environmentVariables:\n    - name: SECRET_KEY\n      value: \"s3cr3t\"\n      hcl: false\n      sensitive: false\n
```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig filename=\"module-v2.yaml\">\n\n```yaml\napiVersion: app.terraform.io\/v1alpha2\nkind: Module\nmetadata:\n  name: migration\nspec:\n  name: operator\n  module:\n    source: app.terraform.io\/org-name\/module-name\/provider\n    version: 0.0.42\n  workspace:\n    name: migration\n```\n\n<\/CodeBlockConfig>\n\n## Upgrade the operator\n\nDownload Workspace CRD patch A:\n\n```shell-session\n$ curl -sO https:\/\/raw.githubusercontent.com\/hashicorp\/hcp-terraform-operator\/main\/docs\/migration\/crds\/workspaces_patch_a.yaml\n```\n\nView the changes that patch A applies to the workspace CRD.\n\n```shell-session\n$ kubectl diff --filename workspaces_patch_a.yaml\n```\n\nPatch the workspace CRD with patch A. This patch adds `app.terraform.io\/v1alpha2` support, but excludes `.status.runStatus` because it has a different format in `app.terraform.io\/v1alpha1` and causes JSON un-marshalling issues.\n\n!> **Upgrade warning**: Once you apply a patch, Kubernetes converts existing `app.terraform.io\/v1alpha1` custom resources to `app.terraform.io\/v1alpha2` according to the updated schema, meaning that v1 of the operator can no longer serve custom resources. Before patching, update your existing custom resources to satisfy the v2 schema requirements. [Learn more](#manifest-schema-migration).\n\n```shell-session\n$ kubectl patch crd workspaces.app.terraform.io --patch-file workspaces_patch_a.yaml\n```\n\nInstall the Operator v2 Helm chart with the `helm install` command. Be sure to set the `operator.watchedNamespaces` value to the list of namespaces your Workspace resources are deployed to. 
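The `operator.watchedNamespaces` setting can also be kept in a Helm values file instead of being passed on the command line; a minimal sketch, reusing the placeholder namespace names and worker counts from the example command (pass the file to Helm with `-f values.yaml`):

```yaml
# values.yaml -- hypothetical values file; the keys mirror the chart's
# `--set` flags shown in this guide.
operator:
  watchedNamespaces:
    - white
    - blue
    - red
controllers:
  agentPool:
    workers: 5
  module:
    workers: 5
  workspace:
    workers: 5
```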
If this value is not provided, the operator will watch all namespaces in the Kubernetes cluster.\n\n```shell-session\n$ helm install \\\n  ${RELEASE_NAME} hashicorp\/hcp-terraform-operator \\\n  --version 2.4.0 \\\n  --namespace ${RELEASE_NAMESPACE} \\\n  --set 'operator.watchedNamespaces={white,blue,red}' \\\n  --set controllers.agentPool.workers=5 \\\n  --set controllers.module.workers=5 \\\n  --set controllers.workspace.workers=5\n```\n\nNext, create a Kubernetes secret to store the HCP Terraform API token following the [Usage Guide](https:\/\/github.com\/hashicorp\/hcp-terraform-operator\/blob\/main\/docs\/usage.md#prerequisites). The API token can be copied from the Kubernetes secret that you created for v1 of the operator. By default, this is named `terraformrc`. Use the `kubectl get secret` command to get the API token.\n\n```shell-session\n$ kubectl --namespace ${RELEASE_NAMESPACE} get secret terraformrc -o json | jq '.data.credentials' | tr -d '\"' | base64 -d\n```\n\nUpdate existing custom resources [according to the schema migration guidance](#manifest-schema-migration) and apply your changes.\n\n```shell-session\n$ kubectl apply --filename <UPDATED_V2_WORKSPACE_MANIFEST.yaml>\n```\n\nDownload Workspace CRD patch B.\n\n```shell-session\n$ curl -sO https:\/\/raw.githubusercontent.com\/hashicorp\/hcp-terraform-operator\/main\/docs\/migration\/crds\/workspaces_patch_b.yaml\n```\n\nView the changes that patch B applies to the workspace CRD.\n\n```shell-session\n$ kubectl diff --filename workspaces_patch_b.yaml\n```\n\nPatch the workspace CRD with patch B. This patch adds `.status.runStatus` support, which was excluded in patch A.\n\n```shell-session\n$ kubectl patch crd workspaces.app.terraform.io --patch-file workspaces_patch_b.yaml\n```\n\nThe v2 operator will fail to proceed if a custom resource has the v1 finalizer `finalizer.workspace.app.terraform.io`. 
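One way to spot affected resources ahead of time is to scan the YAML backup taken earlier (for example, a dump produced with `kubectl get workspace --all-namespaces -o yaml`) for the old finalizer string; a minimal local sketch using a stand-in file:

```shell
# Stand-in for the backup file written before the upgrade; in practice,
# grep the real YAML dump you saved.
cat > /tmp/backup-v1-example.yaml <<'EOF'
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: migration
  finalizers:
    - finalizer.workspace.app.terraform.io
EOF
# Count manifest lines that still carry the v1 finalizer.
grep -c 'finalizer\.workspace\.app\.terraform\.io' /tmp/backup-v1-example.yaml
```

A non-zero count means at least one custom resource still needs its finalizer patched before the v2 operator can reconcile it.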
If you encounter an error, check the logs for more information.\n\n```shell-session\n$ kubectl logs -f <POD_NAME>\n```\n\nSpecifically, look for an error message such as the following.\n\n```\nERROR\tMigration\t{\"workspace\": \"default\/<WORKSPACE_NAME>\", \"msg\": \"spec contains old finalizer finalizer.workspace.app.terraform.io\"}\n```\n\nThe `finalizer` exists to provide greater control over the migration process. Verify the custom resource, and when you\u2019re ready to migrate it, use the `kubectl patch` command to update the `finalizer` value.\n\n```shell-session\n$ kubectl patch workspace migration --type=merge --patch '{\"metadata\": {\"finalizers\": [\"workspace.app.terraform.io\/finalizer\"]}}'\n```\n\nReview the operator logs once more and verify there are no error messages.\n\n```shell-session\n$ kubectl logs -f <POD_NAME>\n```\nThe operator reconciles resources during the next sync period. This interval is set by the `operator.syncPeriod` configuration of the operator and defaults to five minutes. \n\nIf you have any migrated `Module` custom resources, apply them now.\n\n```shell-session\n$ kubectl apply --filename <MIGRATED_V2_MODULE_MANIFEST.yaml>\n```\n\nIn v2 of the operator, the `applyMethod` is set to `manual` by default. In this case, a new run in a managed workspace requires manual approval. 
Run the following command for each `Workspace` resource to change it to `auto` approval.\n\n```shell-session\n$ kubectl patch workspace <WORKSPACE_NAME> --type=merge --patch '{\"spec\": {\"applyMethod\": \"auto\"}}'\n```","site":"terraform"}
{"questions":"terraform The HCP Terraform Operator for Kubernetes allows you to provision infrastructure directly from HCP Terraform Operator for Kubernetes overview page title HCP Terraform Operator for Kubernetes The HCP Terraform Operator for Kubernetes https github com hashicorp hcp terraform operator allows you to manage HCP Terraform resources with Kubernetes custom resources You can provision infrastructure internal or external to your Kubernetes cluster directly from the Kubernetes control plane the Kubernetes control plane","answers":"---\npage_title: HCP Terraform Operator for Kubernetes \ndescription: >-\n  The HCP Terraform Operator for Kubernetes allows you to provision infrastructure directly from\n  the Kubernetes control plane.\n---\n\n# HCP Terraform Operator for Kubernetes overview\n\nThe [HCP Terraform Operator for Kubernetes](https:\/\/github.com\/hashicorp\/hcp-terraform-operator) allows you to manage HCP Terraform resources with Kubernetes custom resources. You can provision infrastructure internal or external to your Kubernetes cluster directly from the Kubernetes control plane. \n\nThe operator's CustomResourceDefinitions (CRD) let you dynamically create HCP Terraform workspaces with Terraform modules, populate workspace variables, and provision infrastructure with Terraform runs.\n\n## Key benefits\n\nThe HCP Terraform Operator for Kubernetes v2 offers several improvements over v1:\n\n- **Flexible resource management**: The operator now features multiple custom resources, each with separate controllers for different HCP Terraform resources. 
This provides additional flexibility and the ability to manage more custom resources concurrently, significantly improving performance for large-scale deployments.\n\n- **Namespace management**: The `--namespace` option allows you to tailor the operator's watch scope to specific namespaces, which enables more fine-grained resource management.\n\n- **Configurable synchronization**: The `--sync-period` option allows you to configure the synchronization frequency between custom resources and HCP Terraform, ensuring timely updates and smoother operations.\n\n\n## Supported HCP Terraform features\n\nThe HCP Terraform Operator for Kubernetes allows you to create agent pools, deploy modules, and manage workspaces through Kubernetes controllers. These controllers enable you to automate and manage HCP Terraform resources using custom resources in Kubernetes.\n\n### Agent pools\n\nAgent pools in HCP Terraform manage the execution environment for Terraform runs. The HCP Terraform Operator for Kubernetes allows you to create and manage agent pools as part of your Kubernetes infrastructure.\n\nThe following example creates a new agent pool with the name `agent-pool-development` and generates an agent token with the name `token-red`.\n\n```yaml\n---\napiVersion: app.terraform.io\/v1alpha2\nkind: AgentPool\nmetadata:\n  name: my-agent-pool\nspec:\n  organization: kubernetes-operator\n  token:\n    secretKeyRef:\n      name: tfc-operator\n      key: token\n  name: agent-pool-development\n  agentTokens:\n    - name: token-red\n```\n\nThe operator stores the `token-red` agent token in a Kubernetes secret named `my-agent-pool-token-red`.\n\nYou can also enable agent autoscaling by providing a `.spec.autoscaling` configuration in your `AgentPool` specification.\n\n<CodeBlockConfig highlight=\"17-26\">\n\n```yaml\n---\napiVersion: app.terraform.io\/v1alpha2\nkind: AgentPool\nmetadata:\n  name: this\nspec:\n  organization: kubernetes-operator\n  token:\n    secretKeyRef:\n      name: 
tfc-operator\n      key: token\n  name: agent-pool-development\n  agentTokens:\n  - name: token-red\n  agentDeployment:\n    replicas: 1\n  autoscaling:\n    targetWorkspaces:\n    - name: us-west-development\n    - id: ws-NUVHA9feCXzAmPHx\n    - wildcardName: eu-development-*\n    minReplicas: 1\n    maxReplicas: 3\n    cooldownPeriod:\n      scaleUpSeconds: 30\n      scaleDownSeconds: 30\n```\n\n<\/CodeBlockConfig>\n\nIn the above example, the operator ensures that at least one agent pod is continuously running and dynamically scales the number of pods up to a maximum of three based on the workload or resource demand. The operator then monitors resource demands by observing the load of the designated workspaces specified by the `name`, `id`, or `wildcardName` patterns. When the workload decreases, the operator downscales the number of agent pods.\n\nRefer to the [agent pool API reference](\/terraform\/cloud-docs\/integrations\/kubernetes\/api-reference#agentpool) for the complete `AgentPool` specification.\n\n### Module\n\nThe `Module` controller enforces an [API-driven Run workflow](\/terraform\/cloud-docs\/run\/api) and lets you deploy Terraform modules within workspaces.\n\nThe following example deploys version `1.0.0` of the `hashicorp\/module\/random` module in the `workspace-name` workspace.\n\n```yaml\n---\napiVersion: app.terraform.io\/v1alpha2\nkind: Module\nmetadata:\n  name: my-module\nspec:\n  organization: kubernetes-operator\n  token:\n    secretKeyRef:\n      name: tfc-operator\n      key: token\n  module:\n    source: hashicorp\/module\/random\n    version: 1.0.0\n  workspace:\n    name: workspace-name\n  variables:\n  - name: string_length\n  outputs:\n  - name: random_string\n```\n\nThe operator passes the workspace's `string_length` variable to the module and stores the `random_string` outputs as either a Kubernetes secret or a ConfigMap. 
If the workspace marks the output as `sensitive`, the operator stores the `random_string` as a Kubernetes secret; otherwise, the operator stores it as a ConfigMap. The variables must be accessible within the workspace as a workspace variable, workspace variable set, or project variable set.\n\nRefer to the [module API reference](\/terraform\/cloud-docs\/integrations\/kubernetes\/api-reference#module) for the complete `Module` specification.\n\n### Project\n\nProjects let you organize your workspaces and scope access to workspace resources. The `Project` controller allows you to create, configure, and manage [projects](\/terraform\/tutorials\/cloud\/projects) directly from Kubernetes.\n\nThe following example creates a new project named `testing`.\n\n```yaml\n---\napiVersion: app.terraform.io\/v1alpha2\nkind: Project\nmetadata:\n  name: testing\nspec:\n  organization: kubernetes-operator\n  token:\n    secretKeyRef:\n      name: tfc-operator\n      key: token\n  name: project-demo\n```\n\nThe `Project` controller allows you to manage team access [permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#project-permissions).\n\nThe following example creates a project named `testing` and grants the `qa` team admin access to the project.\n\n<CodeBlockConfig highlight=\"13-16\">\n\n```yaml\n---\napiVersion: app.terraform.io\/v1alpha2\nkind: Project\nmetadata:\n  name: testing\nspec:\n  organization: kubernetes-operator\n  token:\n    secretKeyRef:\n      name: tfc-operator\n      key: token\n  name: project-demo\n  teamAccess:\n  - team:\n      name: qa\n    access: admin\n```\n\n<\/CodeBlockConfig>\n\nRefer to the [project API reference](\/terraform\/cloud-docs\/integrations\/kubernetes\/api-reference#project) for the complete `Project` specification.\n\n### Workspace\n\nHCP Terraform workspaces organize and manage Terraform configurations. 
The HCP Terraform Operator for Kubernetes allows you to create, configure, and manage workspaces directly from Kubernetes.\n\nThe following example creates a new workspace named `us-west-development`, configured to use Terraform version `1.6.2`. This workspace has two variables, `nodes` and `rds-secret`. The variable `rds-secret` is treated as sensitive, and the operator reads the value for the variable from a Kubernetes secret named `us-west-development-secrets`.\n\n```yaml\n---\napiVersion: app.terraform.io\/v1alpha2\nkind: Workspace\nmetadata:\n  name: us-west-development\nspec:\n  organization: kubernetes-operator\n  token:\n    secretKeyRef:\n      name: tfc-operator\n      key: token\n  name: us-west-development\n  description: US West development workspace\n  terraformVersion: 1.6.2\n  applyMethod: auto\n  agentPool:\n    name: ap-us-west-development\n  terraformVariables:\n    - name: nodes\n      value: 2\n    - name: rds-secret\n      sensitive: true\n      valueFrom:\n        secretKeyRef:\n          name: us-west-development-secrets\n          key: rds-secret\n  runTasks:\n    - name: rt-us-west-development\n      stage: pre_plan\n```\n\nIn the above example, the `applyMethod` has the value of `auto`, so HCP Terraform automatically applies any changes to this workspace. The specification also configures the workspace to use the `ap-us-west-development` agent pool and run the `rt-us-west-development` run task at the `pre_plan` stage.\n\nThe operator stores the value of the workspace outputs as Kubernetes secrets or ConfigMaps. 
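Values read back from a Secret come out base64-encoded, as with any Kubernetes Secret; the decode step can be sketched locally with a stand-in value (the encoded string below is an example, not a real workspace output):

```shell
# Kubernetes stores Secret data base64-encoded; decoding a stand-in output value.
echo 'cmFuZG9tLXZhbHVlLTQy' | base64 -d
# prints: random-value-42
```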
If the outputs are marked as `sensitive`, they are stored as Kubernetes secrets; otherwise, they are stored as ConfigMaps.\n\n-> **Note**: The operator rolls back any external modifications made to the workspace to match the state specified in the custom resource definition.\n\nRefer to the [workspace API reference](\/terraform\/cloud-docs\/integrations\/kubernetes\/api-reference#workspace) for the complete `Workspace` specification.\n","site":"terraform","answers_cleaned":"
named  testing       yaml     apiVersion  app terraform io v1alpha2 kind  Project metadata    name  testing spec    organization  kubernetes operator   token      secretKeyRef        name  tfc operator       key  token   name  project demo      The  Project  controller allows you to manage team access  permissions   terraform cloud docs users teams organizations permissions project permissions    The following example creates a project named  testing  and grants the  qa  team admin access to the project    CodeBlockConfig highlight  13 16       yaml     apiVersion  app terraform io v1alpha2 kind  Project metadata    name  testing spec    organization  kubernetes operator   token      secretKeyRef        name  tfc operator       key  token   name  project demo   teamAccess      team        name  qa     access  admin        CodeBlockConfig   Refer to the  project API reference   terraform cloud docs integrations kubernetes api reference project  for the complete  Project  specification       Workspace  HCP Terraform workspaces organize and manage Terraform configurations  The HCP Terraform Operator for Kubernetes allows you to create  configure  and manage workspaces directly from Kubernetes   The following example creates a new workspace named  us west development   configured to use Terraform version  1 6 2   This workspace has two variables   nodes  and  rds secret   The variable  rds secret  is treated as sensitive  and the operator reads the value for the variable from a Kubernetes secret named  us west development secrets       yaml     apiVersion  app terraform io v1alpha2 kind  Workspace metadata    name  us west development spec    organization  kubernetes operator   token      secretKeyRef        name  tfc operator       key  token   name  us west development   description  US West development workspace   terraformVersion  1 6 2   applyMethod  auto   agentPool      name  ap us west development   terraformVariables        name  nodes       value  2       
name  rds secret       sensitive  true       valueFrom          secretKeyRef            name  us west development secrets           key  rds secret   runTasks        name  rt us west development       stage  pre plan      In the above example  the  applyMethod  has the value of  auto   so HCP Terraform automatically applies any changes to this workspace  The specification also configures the workspace to use the  ap us west development  agent pool and run the  rt us west development  run task at the  pre plan  stage   The operator stores the value of the workspace outputs as Kubernetes secrets or ConfigMaps  If the outputs are marked as  sensitive   they are stored as Kubernetes secrets  otherwise they are stored as ConfigMaps        Note    The operator rolls back any external modifications made to the workspace to match the state specified in the custom resource definition   Refer to the  workspace API reference   terraform cloud docs integrations kubernetes api reference workspace  for the complete  Workspace  specification  "}
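The `Workspace` examples above configure variables, an agent pool, and run tasks one feature at a time. As a hedged sketch, a single `Workspace` resource can also combine several of the settings defined in the API reference that follows (field names such as `project`, `tags`, `teamAccess`, `notifications`, and `deletionPolicy` are taken from that reference; all organization, team, secret, and webhook names here are placeholders, not values from the original examples):

```yaml
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: staging
spec:
  organization: kubernetes-operator   # placeholder organization name
  token:
    secretKeyRef:
      name: tfc-operator              # Kubernetes secret holding the HCP Terraform API token
      key: token
  name: staging
  project:
    name: networking                  # WorkspaceProject: set exactly one of `id` or `name`
  tags:
    - "env:staging"                   # must match ^[A-Za-z0-9][A-Za-z0-9:_-]*$
  deletionPolicy: retain              # keep the HCP Terraform workspace if the custom resource is deleted
  teamAccess:
    - team:
        name: qa                      # Team: set exactly one of `id` or `name`
      access: read
  notifications:
    - name: qa-slack
      type: slack                     # one of: email, generic, microsoft-teams, slack
      url: https://hooks.slack.com/services/EXAMPLE
      triggers:
        - "run:errored"
```

Each nested object follows the same one-of-`id`-or-`name` convention the reference documents for `WorkspaceProject`, `Team`, and `WorkspaceAgentPool`, so the operator can resolve resources either by HCP Terraform ID or by display name.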
{"questions":"terraform API Reference app terraform io v1alpha2 appterraformiov1alpha2 page title HCP Terraform Operator for Kubernetes API reference Packages API reference for the HCP Terraform Operator for Kubernetes","answers":"---\npage_title: HCP Terraform Operator for Kubernetes API reference\ndescription: >-\n  API reference for the HCP Terraform Operator for Kubernetes.\n---\n\n# API Reference\n\n## Packages\n- [app.terraform.io\/v1alpha2](#appterraformiov1alpha2)\n\n## app.terraform.io\/v1alpha2\n\nPackage v1alpha2 contains API Schema definitions for the app v1alpha2 API group\n\n### Resource Types\n- [AgentPool](#agentpool)\n- [Module](#module)\n- [Project](#project)\n- [Workspace](#workspace)\n\n#### AgentDeployment\n\n_Appears in:_\n- [AgentPoolSpec](#agentpoolspec)\n\n| Field | Description |\n| --- | --- |\n| `replicas` _integer_ |  |\n| `spec` _[PodSpec](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.27\/#podspec-v1-core)_ |  |\n| `annotations` _object (keys:string, values:string)_ | The annotations that the operator will apply to the pod template in the deployment. |\n| `labels` _object (keys:string, values:string)_ | The labels that the operator will apply to the pod template in the deployment. |\n\n#### AgentDeploymentAutoscaling\n\nAgentDeploymentAutoscaling configures the operator to scale the deployment for an AgentPool up and down to meet demand.\n\n_Appears in:_\n- [AgentPoolSpec](#agentpoolspec)\n\n| Field | Description |\n| --- | --- |\n| `maxReplicas` _integer_ | MaxReplicas is the maximum number of replicas for the Agent deployment. |\n| `minReplicas` _integer_ | MinReplicas is the minimum number of replicas for the Agent deployment. |\n| `targetWorkspaces` _[TargetWorkspace](#targetworkspace)_ | TargetWorkspaces is a list of HCP Terraform Workspaces which the agent pool should scale up to meet demand. When this field is omitted the autoscaler will target all workspaces that are associated with the AgentPool. 
|\n| `cooldownPeriodSeconds` _integer_ | CooldownPeriodSeconds is the time to wait between scaling events. Defaults to 300. |\n| `cooldownPeriod` _[AgentDeploymentAutoscalingCooldownPeriod](#agentdeploymentautoscalingcooldownperiod)_ | CoolDownPeriod is the period to wait between scaling up and scaling down. |\n\n#### AgentDeploymentAutoscalingCooldownPeriod\n\nAgentDeploymentAutoscalingCooldownPeriod configures the period to wait between scaling up and scaling down.\n\n_Appears in:_\n- [AgentDeploymentAutoscaling](#agentdeploymentautoscaling)\n\n| Field | Description |\n| --- | --- |\n| `scaleUpSeconds` _integer_ | ScaleUpSeconds is the time to wait before scaling up. |\n| `scaleDownSeconds` _integer_ | ScaleDownSeconds is the time to wait before scaling down. |\n\n#### AgentDeploymentAutoscalingStatus\n\nAgentDeploymentAutoscalingStatus\n\n_Appears in:_\n- [AgentPoolStatus](#agentpoolstatus)\n\n| Field | Description |\n| --- | --- |\n| `desiredReplicas` _integer_ | Desired number of agent replicas. |\n| `lastScalingEvent` _[Time](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.27\/#time-v1-meta)_ | Last time the agent pool was scaled. |\n\n#### AgentPool\n\nAgentPool is the Schema for the agentpools API.\n\n| Field | Description |\n| --- | --- |\n| `apiVersion` _string_ | `app.terraform.io\/v1alpha2`\n| `kind` _string_ | `AgentPool`\n| `kind` _string_ | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. [More information](https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds) |\n| `apiVersion` _string_ | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
[More information](https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources) |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.27\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |\n| `spec` _[AgentPoolSpec](#agentpoolspec)_ |  |\n\n#### AgentPoolSpec\n\nAgentPoolSpec defines the desired state of AgentPool.\n\n_Appears in:_\n- [AgentPool](#agentpool)\n\n| Field | Description |\n| --- | --- |\n| `name` _string_ | Agent Pool name. [More information](\/terraform\/cloud-docs\/agents\/agent-pools). |\n| `organization` _string_ | Organization name where the Workspace will be created. [More information](\/terraform\/cloud-docs\/users-teams-organizations\/organizations). |\n| `token` _[Token](#token)_ | API Token to be used for API calls. |\n| `agentTokens` _[AgentToken](#agenttoken) array_ | List of the agent tokens to generate. |\n| `agentDeployment` _[AgentDeployment](#agentdeployment)_ | Agent deployment settings |\n| `autoscaling` _[AgentDeploymentAutoscaling](#agentdeploymentautoscaling)_ | Agent deployment settings |\n\n#### AgentToken\n\nTerraform uses `AgentTokens` to connect to the Terraform agent pool. Only the field `Name` is allowed in the `spec` list. Use the other fields in the `status` list. [More information](\/terraform\/cloud-docs\/agents).\n\n_Appears in:_\n- [AgentPoolSpec](#agentpoolspec)\n- [AgentPoolStatus](#agentpoolstatus)\n\n| Field | Description |\n| --- | --- |\n| `name` _string_ | Agent Token name. |\n| `id` _string_ | Agent Token ID. |\n| `createdAt` _integer_ | Timestamp of when the agent token was created. |\n| `lastUsedAt` _integer_ | Timestamp of when the agent token was last used. 
|\n\n#### ConfigurationVersionStatus\n\nA configuration version is a resource used to reference the uploaded configuration files.\nMore information:\n  - [Configuration versions API](\/terraform\/cloud-docs\/api-docs\/configuration-versions)\n  - [The API-driven run workflow](\/terraform\/cloud-docs\/run\/api)\n\n_Appears in:_\n- [ModuleStatus](#modulestatus)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | Configuration Version ID. |\n\n#### ConsumerWorkspace\n\nConsumerWorkspace allows access to the state for specific workspaces within the same organization. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. [More information](\/terraform\/cloud-docs\/workspaces\/state#remote-state-access-controls).\n\n_Appears in:_\n- [RemoteStateSharing](#remotestatesharing)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | Consumer Workspace ID. Must match pattern: `^ws-[a-zA-Z0-9]+$` |\n| `name` _string_ | Consumer Workspace name. |\n\n#### CustomPermissions\n\nCustom permissions let you assign specific, finer-grained permissions to a team than the broader fixed permission sets provide. [More information](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#custom-workspace-permissions).\n\n_Appears in:_\n- [TeamAccess](#teamaccess)\n\n| Field | Description |\n| --- | --- |\n| `runs` _string_ | Run access. Must be one of the following values: `apply`, `plan`, `read`. Default: `read`. |\n| `runTasks` _boolean_ | Manage Workspace Run Tasks. Default: `false`. |\n| `sentinel` _string_ | Download Sentinel mocks. Must be one of the following values: `none`, `read`. Default: `none`. |\n| `stateVersions` _string_ | State access. Must be one of the following values: `none`, `read`, `read-outputs`, `write`. Default: `none`. |\n| `variables` _string_ | Variable access. Must be one of the following values: `none`, `read`, `write`. Default: `none`. 
|\n| `workspaceLocking` _boolean_ | Lock\/unlock workspace. Default: `false`. |\n\n#### CustomProjectPermissions\n\nCustom permissions let you assign specific, finer-grained permissions to a team than the broader fixed permission sets provide.\nMore information:\n  - [Custom project permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#custom-project-permissions)\n  - [General workspace permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-workspace-permissions)\n\n_Appears in:_\n- [ProjectTeamAccess](#projectteamaccess)\n\n| Field | Description |\n| --- | --- |\n| `projectAccess` _[ProjectSettingsPermissionType](#projectsettingspermissiontype)_ | Project access. Must be one of the following values: `delete`, `read`, `update`. Default: `read`. |\n| `teamManagement` _[ProjectTeamsPermissionType](#projectteamspermissiontype)_ | Team management. Must be one of the following values: `manage`, `none`, `read`. Default: `none`. |\n| `createWorkspace` _boolean_ | Allow users to create workspaces in the project. This grants read access to all workspaces in the project. Default: `false`. |\n| `deleteWorkspace` _boolean_ | Allows users to delete workspaces in the project. Default: `false`. |\n| `moveWorkspace` _boolean_ | Allows users to move workspaces out of the project. A user must have this permission on both the source and destination project to successfully move a workspace from one project to another. Default: `false`. |\n| `lockWorkspace` _boolean_ | Allows users to manually lock the workspace to temporarily prevent runs. When a workspace's execution mode is set to \"local\", users must have this permission to perform local CLI runs using the workspace's state. Default: `false`. |\n| `runs` _[WorkspaceRunsPermissionType](#workspacerunspermissiontype)_ | Run access. Must be one of the following values: `apply`, `plan`, `read`. Default: `read`. |\n| `runTasks` _boolean_ | Manage Workspace Run Tasks. Default: `false`. 
|\n| `sentinelMocks` _[WorkspaceSentinelMocksPermissionType](#workspacesentinelmockspermissiontype)_ | Download Sentinel mocks. Must be one of the following values: `none`, `read`. Default: `none`. |\n| `stateVersions` _[WorkspaceStateVersionsPermissionType](#workspacestateversionspermissiontype)_ | State access. Must be one of the following values: `none`, `read`, `read-outputs`, `write`. Default: `none`. |\n| `variables` _[WorkspaceVariablesPermissionType](#workspacevariablespermissiontype)_ | Variable access. Must be one of the following values: `none`, `read`, `write`. Default: `none`. |\n\n#### DeletionPolicy\n\n_Underlying type:_ _string_\n\nDeletionPolicy defines the strategy the Kubernetes operator uses when you delete a resource, either manually or by a system event.\n\n\nYou must use one of the following values:\n- `retain`: When you delete the custom resource, the operator does not delete the workspace.\n- `soft`: Attempts to delete the associated workspace only if it does not contain any managed resources.\n- `destroy`: Executes a destroy operation to remove all resources managed by the associated workspace. Once the destruction of these resources is successful, the operator deletes the workspace, and then deletes the custom resource.\n- `force`: Forcefully and immediately deletes the workspace and the custom resource.\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n\n#### Module\n\nModule is the Schema for the modules API Module implements the API-driven Run Workflow. [More information](\/terraform\/cloud-docs\/run\/api).\n\n| Field | Description |\n| --- | --- |\n| `apiVersion` _string_ | `app.terraform.io\/v1alpha2`\n| `kind` _string_ | `Module`\n| `kind` _string_ | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
[More information](https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds) |\n| `apiVersion` _string_ | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. [More information](https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources) |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.27\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |\n| `spec` _[ModuleSpec](#modulespec)_ |  |\n\n#### ModuleOutput\n\nModule outputs to store in ConfigMap(non-sensitive) or Secret(sensitive).\n\n_Appears in:_\n- [ModuleSpec](#modulespec)\n\n| Field | Description |\n| --- | --- |\n| `name` _string_ | Output name must match with the module output. |\n| `sensitive` _boolean_ | Specify whether or not the output is sensitive. Default: `false`. |\n\n#### ModuleSource\n\nModule source and version to execute.\n\n_Appears in:_\n- [ModuleSpec](#modulespec)\n\n| Field | Description |\n| --- | --- |\n| `source` _string_ | Non local Terraform module source. [More information](\/terraform\/language\/modules\/sources). |\n| `version` _string_ | Terraform module version. |\n\n#### ModuleSpec\n\nModuleSpec defines the desired state of Module.\n\n_Appears in:_\n- [Module](#module)\n\n| Field | Description |\n| --- | --- |\n| `organization` _string_ | Organization name where the Workspace will be created. [More information](\/terraform\/cloud-docs\/users-teams-organizations\/organizations). |\n| `token` _[Token](#token)_ | API Token to be used for API calls. |\n| `module` _[ModuleSource](#modulesource)_ | Module source and version to execute. |\n| `workspace` _[ModuleWorkspace](#moduleworkspace)_ | Workspace to execute the module. 
|\n| `name` _string_ | Name of the module that will be uploaded and executed. Default: `this`. |\n| `variables` _[ModuleVariable](#modulevariable) array_ | Variables to pass to the module; they must exist in the Workspace. |\n| `outputs` _[ModuleOutput](#moduleoutput) array_ | Module outputs to store in ConfigMap(non-sensitive) or Secret(sensitive). |\n| `destroyOnDeletion` _boolean_ | Specify whether or not to execute a Destroy run when the object is deleted from Kubernetes. Default: `false`. |\n| `restartedAt` _string_ | Allows executing a new Run without changing any Workspace or Module attributes. Example: ``kubectl patch KIND NAME --type=merge --patch '{\"spec\": {\"restartedAt\": \"'\`date -u -Iseconds\`'\"}}'`` |\n\n#### ModuleVariable\n\nVariables to pass to the module.\n\n_Appears in:_\n- [ModuleSpec](#modulespec)\n\n| Field | Description |\n| --- | --- |\n| `name` _string_ | Variable name must exist in the Workspace. |\n\n#### ModuleWorkspace\n\nWorkspace to execute the module. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory.\n\n_Appears in:_\n- [ModuleSpec](#modulespec)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | Module Workspace ID. Must match pattern: `^ws-[a-zA-Z0-9]+$` |\n| `name` _string_ | Module Workspace Name. |\n\n\n#### Notification\n\nNotifications allow you to send messages to other applications based on run and workspace events. [More information](\/terraform\/cloud-docs\/workspaces\/settings\/notifications).\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n| Field | Description |\n| --- | --- |\n| `name` _string_ | Notification name. |\n| `type` _[NotificationDestinationType](#notificationdestinationtype)_ | The type of the notification. Must be one of the following values: `email`, `generic`, `microsoft-teams`, `slack`. |\n| `enabled` _boolean_ | Whether the notification configuration should be enabled or not. Default: `true`. 
|\n| `token` _string_ | The token of the notification. |\n| `triggers` _[NotificationTrigger](#notificationtrigger) array_ | The list of run events that trigger notifications. Triggers are notifications that Terraform sends when a run transitions to a different state. <br\/><br\/>The following triggers notify you about health events: `assessment:check_failure`, `assessment:drifted`, `assessment:failed`. <br\/><br\/>The following triggers notify you about run events: `run:applying`, `run:completed`, `run:created`, `run:errored`, `run:needs_attention`, `run:planning`. |\n| `url` _string_ | The URL of the notification. Must match pattern: `^https?:\/\/.*` |\n| `emailAddresses` _string array_ | The list of email addresses that will receive notification emails. It is only available for Terraform Enterprise users. It is not available in HCP Terraform. |\n| `emailUsers` _string array_ | The list of users belonging to the organization that will receive notification emails. |\n\n\n#### NotificationTrigger\n\n_Underlying type:_ _string_\n\nNotificationTrigger represents the notifications Terraform sends when a run transitions to a different state. This resource must align with `go-tfe` type `NotificationTriggerType`. You must use one of the following values:  `run:applying`, `assessment:check_failure`, `run:completed`, `run:created`, `assessment:drifted`, `run:errored`, `assessment:failed`, `run:needs_attention`, `run:planning`.\n\n_Appears in:_\n- [Notification](#notification)\n\n#### OutputStatus\n\nOutputs status.\n\n_Appears in:_\n- [ModuleStatus](#modulestatus)\n\n| Field | Description |\n| --- | --- |\n| `runID` _string_ | Run ID of the latest run that updated the outputs. |\n\n#### PlanStatus\n\n_Appears in:_\n- [WorkspaceStatus](#workspacestatus)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | Latest plan-only\/speculative plan HCP Terraform run ID. |\n| `terraformVersion` _string_ | The version of Terraform to use for this run. 
|\n\n#### Project\n\nProject is the Schema for the projects API\n\n| Field | Description |\n| --- | --- |\n| `apiVersion` _string_ | `app.terraform.io\/v1alpha2`\n| `kind` _string_ | `Project`\n| `kind` _string_ | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. [More information](https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds) |\n| `apiVersion` _string_ | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. [More information](https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources) |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.27\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |\n| `spec` _[ProjectSpec](#projectspec)_ |  |\n\n#### ProjectSpec\n\nProjectSpec defines the desired state of Project. [More information](\/terraform\/cloud-docs\/workspaces\/organize-workspaces-with-projects).\n\n_Appears in:_\n- [Project](#project)\n\n| Field | Description |\n| --- | --- |\n| `organization` _string_ | Organization name where the Workspace will be created. [More information](\/terraform\/cloud-docs\/users-teams-organizations\/organizations). |\n| `token` _[Token](#token)_ | API Token to be used for API calls. |\n| `name` _string_ | Name of the Project. |\n| `teamAccess` _[ProjectTeamAccess](#projectteamaccess) array_ | HCP Terraform's access model is team-based. In order to perform an action within a HCP Terraform organization, users must belong to a team that has been granted the appropriate permissions. You can assign project-specific permissions to teams. 
More information: <br \/> [Project permissions](\/terraform\/cloud-docs\/workspaces\/organize-workspaces-with-projects#permissions) - [Team project permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#project-permissions) |\n\n#### ProjectTeamAccess\n\nHCP Terraform's access model is team-based. In order to perform an action within a HCP Terraform organization, users must belong to a team that has been granted the appropriate permissions. You can assign project-specific permissions to teams. More information:\n\n  - [Project permissions API](\/terraform\/cloud-docs\/workspaces\/organize-workspaces-with-projects#permissions)\n  - [Project permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#project-permissions)\n\n_Appears in:_\n- [ProjectSpec](#projectspec)\n\n| Field | Description |\n| --- | --- |\n| `team` _[Team](#team)_ | Team to grant access. [More information](\/terraform\/cloud-docs\/users-teams-organizations\/teams). |\n| `access` _[TeamProjectAccessType](#teamprojectaccesstype)_ | There are two ways to choose which permissions a given team has on a project: fixed permission sets, and custom permissions. Must be one of the following values: `admin`, `custom`, `maintain`, `read`, `write`. More information: <br \/> - [Project permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#project-permissions) <br \/>- [General project permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-project-permissions) |\n| `custom` _[CustomProjectPermissions](#customprojectpermissions)_ | Custom permissions let you assign specific, finer-grained permissions to a team than the broader fixed permission sets provide. [More information](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#custom-project-permissions). 
|\n\n#### RemoteStateSharing\n\nRemoteStateSharing allows remote state access between workspaces.\nBy default, new workspaces in HCP Terraform do not allow other workspaces to access their state. [More information](\/terraform\/cloud-docs\/workspaces\/state#accessing-state-from-other-workspaces).\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n| Field | Description |\n| --- | --- |\n| `allWorkspaces` _boolean_ | Allow access to the state for all workspaces within the same organization. Default: `false`. |\n| `workspaces` _[ConsumerWorkspace](#consumerworkspace) array_ | Allow access to the state for specific workspaces within the same organization. |\n\n#### RunStatus\n\n_Appears in:_\n- [ModuleStatus](#modulestatus)\n- [WorkspaceStatus](#workspacestatus)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | Current(both active and finished) HCP Terraform run ID. |\n| `configurationVersion` _string_ | The configuration version of this run. |\n| `outputRunID` _string_ | Run ID of the latest run that could update the outputs. |\n\n#### RunTrigger\n\nRunTrigger allows you to connect this workspace to one or more source workspaces.\nThese connections allow runs to queue automatically in this workspace on successful apply of runs in any of the source workspaces.\nOnly one of the fields `ID` or `Name` is allowed.\nAt least one of the fields `ID` or `Name` is mandatory. [More information](\/terraform\/cloud-docs\/workspaces\/settings\/run-triggers).\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | Source Workspace ID. Must match pattern: `^ws-[a-zA-Z0-9]+$` |\n| `name` _string_ | Source Workspace Name. |\n\n#### SSHKey\n\nSSH key used to clone Terraform modules. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. 
[More information](\/terraform\/cloud-docs\/workspaces\/settings\/ssh-keys).\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | SSH key ID. Must match pattern: `^sshkey-[a-zA-Z0-9]+$` |\n| `name` _string_ | SSH key name. |\n\n#### Tag\n\n_Underlying type:_ _string_\n\nTags allow you to correlate, organize, and even filter workspaces based on the assigned tags. Tags must be one or more characters; can include letters, numbers, colons, hyphens, and underscores; and must begin and end with a letter or number. Must match pattern: `^[A-Za-z0-9][A-Za-z0-9:_-]*$`\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n#### TargetWorkspace\n\nTargetWorkspace is the name or ID of the workspace you want to autoscale against.\n\n_Appears in:_\n- [AgentDeploymentAutoscaling](#agentdeploymentautoscaling)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | Workspace ID |\n| `name` _string_ | Workspace Name |\n| `wildcardName` _string_ | Wildcard Name to match workspace names using `*` on name suffix, prefix, or both. |\n\n#### Team\n\nTeams are groups of HCP Terraform users within an organization. If a user belongs to at least one team in an organization, they are considered a member of that organization. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. [More information](\/terraform\/cloud-docs\/users-teams-organizations\/teams).\n\n_Appears in:_\n- [ProjectTeamAccess](#projectteamaccess)\n- [TeamAccess](#teamaccess)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | Team ID. Must match pattern: `^team-[a-zA-Z0-9]+$` |\n| `name` _string_ | Team name. |\n\n#### TeamAccess\n\nHCP Terraform workspaces can only be accessed by users with the correct permissions. You can manage permissions for a workspace on a per-team basis. 
When a workspace is created, only the owners team and teams with the \"manage workspaces\" permission can access it, with full admin permissions. These teams' access can't be removed from a workspace. [More information](\/terraform\/cloud-docs\/workspaces\/settings\/access).\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n| Field | Description |\n| --- | --- |\n| `team` _[Team](#team)_ | Team to grant access. [More information](\/terraform\/cloud-docs\/users-teams-organizations\/teams). |\n| `access` _string_ | There are two ways to choose which permissions a given team has on a workspace: fixed permission sets, and custom permissions. Must be one of the following values: `admin`, `custom`, `plan`, `read`, `write`. [More information](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#workspace-permissions). |\n| `custom` _[CustomPermissions](#custompermissions)_ | Custom permissions let you assign specific, finer-grained permissions to a team than the broader fixed permission sets provide. [More information](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#custom-workspace-permissions). |\n\n#### Token\n\nToken refers to a Kubernetes Secret object within the same namespace as the Workspace object.\n\n_Appears in:_\n- [AgentPoolSpec](#agentpoolspec)\n- [ModuleSpec](#modulespec)\n- [ProjectSpec](#projectspec)\n- [WorkspaceSpec](#workspacespec)\n\n| Field | Description |\n| --- | --- |\n| `secretKeyRef` _[SecretKeySelector](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.27\/#secretkeyselector-v1-core)_ | Selects a key of a secret in the workspace's namespace |\n\n#### ValueFrom\n\nValueFrom source for the variable's value. 
Cannot be used if value is not empty.\n\n_Appears in:_\n- [Variable](#variable)\n\n| Field | Description |\n| --- | --- |\n| `configMapKeyRef` _[ConfigMapKeySelector](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.27\/#configmapkeyselector-v1-core)_ | Selects a key of a ConfigMap. |\n| `secretKeyRef` _[SecretKeySelector](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.27\/#secretkeyselector-v1-core)_ | Selects a key of a Secret. |\n\n#### Variable\n\nVariables let you customize configurations, modify Terraform's behavior, and store information like provider credentials. [More information](\/terraform\/cloud-docs\/workspaces\/variables).\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n| Field | Description |\n| --- | --- |\n| `name` _string_ | Name of the variable. |\n| `description` _string_ | Description of the variable. |\n| `hcl` _boolean_ | Parse this field as HashiCorp Configuration Language (HCL). This allows you to interpolate values at runtime. Default: `false`. |\n| `sensitive` _boolean_ | Sensitive variables are never shown in the UI or API. They may appear in Terraform logs if your configuration is designed to output them. Default: `false`. |\n| `value` _string_ | Value of the variable. |\n| `valueFrom` _[ValueFrom](#valuefrom)_ | Source for the variable's value. Cannot be used if value is not empty. |\n\n#### VariableStatus\n\n_Appears in:_\n- [WorkspaceStatus](#workspacestatus)\n\n| Field | Description |\n| --- | --- |\n| `name` _string_ | Name of the variable. |\n| `id` _string_ | ID of the variable. |\n| `versionID` _string_ | VersionID is a hash of the variable on the HCP Terraform end. |\n| `valueID` _string_ | ValueID is a hash of the variable on the CRD end. |\n| `category` _string_ | Category of the variable. |\n\n#### VersionControl\n\nVersionControl settings for the workspace's VCS repository, enabling the UI\/VCS-driven run workflow. 
Omit this argument to utilize the CLI-driven and API-driven workflows, where runs are not driven by webhooks on your VCS provider. More information:\n\n  - [The UI- and VCS-driven run workflow](\/terraform\/cloud-docs\/run\/ui)\n  - [Connecting VCS providers to HCP Terraform](\/terraform\/cloud-docs\/vcs)\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n| Field | Description |\n| --- | --- |\n| `oAuthTokenID` _string_ | The VCS Connection (OAuth Connection + Token) to use. Must match pattern: `^ot-[a-zA-Z0-9]+$` |\n| `repository` _string_ | A reference to your VCS repository in the format `<organization>\/<repository>` where `<organization>` and `<repository>` refer to the organization and repository in your VCS provider. |\n| `branch` _string_ | The repository branch that runs will execute from. This defaults to the repository's default branch (e.g. main). |\n| `speculativePlans` _boolean_ | Whether this workspace allows automatic speculative plans on pull requests. Default: `true`. More information: <br \/>- [Speculative plans on pull requests](\/terraform\/cloud-docs\/run\/ui#speculative-plans-on-pull-requests) <br \/>- [Speculative plans](\/terraform\/cloud-docs\/run\/remote-operations#speculative-plans) |\n\n#### Workspace\n\nWorkspace is the Schema for the workspaces API.\n\n| Field | Description |\n| --- | --- |\n| `apiVersion` _string_ | `app.terraform.io\/v1alpha2` |\n| `kind` _string_ | `Workspace` |\n| `kind` _string_ | Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. [More information](https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#types-kinds) |\n| `apiVersion` _string_ | APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
[More information](https:\/\/git.k8s.io\/community\/contributors\/devel\/sig-architecture\/api-conventions.md#resources) |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.27\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |\n| `spec` _[WorkspaceSpec](#workspacespec)_ |  |\n\n#### WorkspaceAgentPool\n\nAgentPool allows HCP Terraform to communicate with isolated, private, or on-premises infrastructure. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. [More information](\/terraform\/cloud-docs\/agents).\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | Agent Pool ID. Must match pattern: `^apool-[a-zA-Z0-9]+$` |\n| `name` _string_ | Agent Pool name. |\n\n#### WorkspaceProject\n\nProjects let you organize your workspaces into groups. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. [More information](\/terraform\/tutorials\/cloud\/projects).\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | Project ID. Must match pattern: `^prj-[a-zA-Z0-9]+$` |\n| `name` _string_ | Project name. |\n\n#### WorkspaceRunTask\n\nRun tasks allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle. Only one of the fields `ID` or `Name` is allowed. At least one of the fields `ID` or `Name` is mandatory. [More information](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks).\n\n_Appears in:_\n- [WorkspaceSpec](#workspacespec)\n\n| Field | Description |\n| --- | --- |\n| `id` _string_ | Run Task ID. Must match pattern: `^task-[a-zA-Z0-9]+$` |\n| `name` _string_ | Run Task Name. |\n| `enforcementLevel` _string_ | Run Task Enforcement Level. Can be one of `advisory` or `mandatory`. Default: `advisory`. 
|\n| `stage` _string_ | Run Task Stage. Must be one of the following values: `pre_apply`, `pre_plan`, `post_plan`. Default: `post_plan`. |\n\n#### WorkspaceSpec\n\nWorkspaceSpec defines the desired state of Workspace.\n\n_Appears in:_\n- [Workspace](#workspace)\n\n| Field | Description |\n| --- | --- |\n| `name` _string_ | Workspace name. |\n| `organization` _string_ | Organization name where the Workspace will be created. [More information](\/terraform\/cloud-docs\/users-teams-organizations\/organizations). |\n| `token` _[Token](#token)_ | API Token to be used for API calls. |\n| `applyMethod` _string_ | Defines whether changes are applied automatically (`auto`) or require an operator to confirm them (`manual`). Must be one of the following values: `auto`, `manual`. Default: `manual`. [More information](\/terraform\/cloud-docs\/workspaces\/settings#auto-apply-and-manual-apply). |\n| `allowDestroyPlan` _boolean_ | Allows a destroy plan to be created and applied. Default: `true`. [More information](\/terraform\/cloud-docs\/workspaces\/settings#destruction-and-deletion). |\n| `description` _string_ | Workspace description. |\n| `agentPool` _[WorkspaceAgentPool](#workspaceagentpool)_ | HCP Terraform Agents allow HCP Terraform to communicate with isolated, private, or on-premises infrastructure. [More information](\/terraform\/cloud-docs\/agents). |\n| `executionMode` _string_ | Defines where the Terraform code will be executed. Must be one of the following values: `agent`, `local`, `remote`. Default: `remote`. [More information](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode). |\n| `runTasks` _[WorkspaceRunTask](#workspaceruntask) array_ | Run tasks allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle. [More information](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks). 
|\n| `tags` _[Tag](#tag) array_ | Workspace tags are used to help identify and group together workspaces. Tags must be one or more characters; can include letters, numbers, colons, hyphens, and underscores; and must begin and end with a letter or number. |\n| `teamAccess` _[TeamAccess](#teamaccess) array_ | HCP Terraform workspaces can only be accessed by users with the correct permissions. You can manage permissions for a workspace on a per-team basis. When a workspace is created, only the owners team and teams with the \"manage workspaces\" permission can access it, with full admin permissions. These teams' access can't be removed from a workspace. [More information](\/terraform\/cloud-docs\/workspaces\/settings\/access). |\n| `terraformVersion` _string_ | The version of Terraform to use for this workspace. If not specified, the latest available version will be used. Must match pattern: `^\\\\d{1}\\\\.\\\\d{1,2}\\\\.\\\\d{1,2}$` [More information](\/terraform\/cloud-docs\/workspaces\/settings#terraform-version). |\n| `workingDirectory` _string_ | The directory where Terraform will execute, specified as a relative path from the root of the configuration directory. [More information](\/terraform\/cloud-docs\/workspaces\/settings#terraform-working-directory). |\n| `environmentVariables` _[Variable](#variable) array_ | Terraform Environment variables for all plans and applies in this workspace. Variables defined within a workspace always overwrite variables from variable sets that have the same type and the same key. More information: <br \/> - [Workspace variables](\/terraform\/cloud-docs\/workspaces\/variables) <br \/> - [Environment variables](\/terraform\/cloud-docs\/workspaces\/variables#environment-variables) |\n| `terraformVariables` _[Variable](#variable) array_ | Terraform variables for all plans and applies in this workspace. Variables defined within a workspace always overwrite variables from variable sets that have the same type and the same key. 
More information: <br \/> - [Workspace variables](\/terraform\/cloud-docs\/workspaces\/variables) <br \/>- [Terraform variables](\/terraform\/cloud-docs\/workspaces\/variables#terraform-variables) |\n| `remoteStateSharing` _[RemoteStateSharing](#remotestatesharing)_ | Remote state access between workspaces. By default, new workspaces in HCP Terraform do not allow other workspaces to access their state. [More information](\/terraform\/cloud-docs\/workspaces\/state#accessing-state-from-other-workspaces). |\n| `runTriggers` _[RunTrigger](#runtrigger) array_ | Run triggers allow you to connect this workspace to one or more source workspaces. These connections allow runs to queue automatically in this workspace on successful apply of runs in any of the source workspaces. [More information](\/terraform\/cloud-docs\/workspaces\/settings\/run-triggers). |\n| `versionControl` _[VersionControl](#versioncontrol)_ | Settings for the workspace's VCS repository, enabling the UI\/VCS-driven run workflow. Omit this argument to utilize the CLI-driven and API-driven workflows, where runs are not driven by webhooks on your VCS provider. More information: <br \/>- [The UI- and VCS-driven run workflow](\/terraform\/cloud-docs\/run\/ui) <br \/>- [Connecting VCS providers to HCP Terraform](\/terraform\/cloud-docs\/vcs) |\n| `sshKey` _[SSHKey](#sshkey)_ | SSH key used to clone Terraform modules. [More information](\/terraform\/cloud-docs\/workspaces\/settings\/ssh-keys). |\n| `notifications` _[Notification](#notification) array_ | Notifications allow you to send messages to other applications based on run and workspace events. [More information](\/terraform\/cloud-docs\/workspaces\/settings\/notifications). |\n| `project` _[WorkspaceProject](#workspaceproject)_ | Projects let you organize your workspaces into groups. Default: default organization project. [More information](\/terraform\/tutorials\/cloud\/projects). 
|\n| `deletionPolicy` _[DeletionPolicy](#deletionpolicy)_ | The Deletion Policy specifies the behavior of the custom resource and its associated workspace when the custom resource is deleted.<br \/>- `retain`: When you delete the custom resource, the operator does not delete the workspace.<br \/>- `soft`: Attempts to delete the associated workspace only if it does not contain any managed resources.<br \/>- `destroy`: Executes a destroy operation to remove all resources managed by the associated workspace. Once the destruction of these resources is successful, the operator deletes the workspace, and then deletes the custom resource.<br \/>- `force`: Forcefully and immediately deletes the workspace and the custom resource.<br \/>Default: `retain`. |\n\n","site":"terraform"
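The WorkspaceSpec fields above map directly onto a `Workspace` custom resource manifest. The following is an illustrative sketch only, not an official example; the organization, Secret, repository, and OAuth token identifiers are hypothetical placeholders:

```yaml
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: example-workspace
spec:
  organization: my-org              # hypothetical organization name
  name: example-workspace
  token:
    secretKeyRef:
      name: tfc-operator            # hypothetical Secret in the same namespace
      key: token
  applyMethod: auto                 # apply runs without operator confirmation
  executionMode: remote
  terraformVersion: 1.6.2           # must match ^\d{1}\.\d{1,2}\.\d{1,2}$
  terraformVariables:
    - name: instance_type
      value: t3.micro
    - name: db_password
      sensitive: true
      valueFrom:                    # cannot be combined with `value`
        secretKeyRef:
          name: db-credentials      # hypothetical Secret holding the value
          key: password
  versionControl:
    oAuthTokenID: ot-abc123        # hypothetical; must match ^ot-[a-zA-Z0-9]+$
    repository: my-org/infrastructure
    branch: main
    speculativePlans: true
  deletionPolicy: destroy           # run a destroy before deleting the workspace
```

Any field omitted from `spec` falls back to its documented default, for example `applyMethod: manual` and `deletionPolicy: retain`.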
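Team access entries grant either a fixed permission set or, with `access: custom`, the finer-grained CustomPermissions fields. A hedged fragment of a `Workspace` spec (team names are hypothetical):

```yaml
spec:
  teamAccess:
    - team:
        name: developers        # hypothetical team; `id` (team-xxx) also accepted
      access: write             # fixed permission set
    - team:
        name: auditors          # hypothetical team
      access: custom            # enables the `custom` block below
      custom:
        runs: read              # apply | plan | read
        variables: read         # none | read | write
        workspaceLocking: false
```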
                   name   string    Name of the variable       id   string    ID of the variable       versionID   string    VersionID is a hash of the variable on the HCP Terraform end       valueID   string    ValueID is a hash of the variable on the CRD end       category   string    Category of the variable          VersionControl  VersionControl settings for the workspace s VCS repository  enabling the UI VCS driven run workflow  Omit this argument to utilize the CLI driven and API driven workflows  where runs are not driven by webhooks on your VCS provider  More information        The UI  and VCS driven run workflow   terraform cloud docs run ui       Connecting VCS providers to HCP Terraform   terraform cloud docs vcs    Appears in      WorkspaceSpec   workspacespec     Field   Description                    oAuthTokenID   string    The VCS Connection  OAuth Connection   Token  to use  Must match pattern    ot  a zA Z0 9          repository   string    A reference to your VCS repository in the format   organization   repository   where   organization   and   repository   refer to the organization and repository in your VCS provider       branch   string    The repository branch that Run will execute from  This defaults to the repository s default branch  e g  main        speculativePlans   boolean    Whether this workspace allows automatic speculative plans on PR  Default   true   More information   br      Speculative plans on pull requests   terraform cloud docs run ui speculative plans on pull requests   br      Speculative plans   terraform cloud docs run remote operations speculative plans          Workspace  Workspace is the Schema for the workspaces API    Field   Description                    apiVersion   string     app terraform io v1alpha2     kind   string     Workspace     kind   string    Kind is a string value representing the REST resource this object represents  Servers may infer this from the endpoint the client submits requests to  Cannot 
be updated  In CamelCase   More information  https   git k8s io community contributors devel sig architecture api conventions md types kinds       apiVersion   string    APIVersion defines the versioned schema of this representation of an object  Servers should convert recognized schemas to the latest internal value  and may reject unrecognized values   More information  https   git k8s io community contributors devel sig architecture api conventions md resources       metadata    ObjectMeta  https   kubernetes io docs reference generated kubernetes api v1 27  objectmeta v1 meta     Refer to Kubernetes API documentation for fields of  metadata        spec    WorkspaceSpec   workspacespec              WorkspaceAgentPool  AgentPool allows HCP Terraform to communicate with isolated  private  or on premises infrastructure  Only one of the fields  ID  or  Name  is allowed  At least one of the fields  ID  or  Name  is mandatory   More information   terraform cloud docs agents     Appears in      WorkspaceSpec   workspacespec     Field   Description                    id   string    Agent Pool ID  Must match pattern    apool  a zA Z0 9          name   string    Agent Pool name          WorkspaceProject  Projects let you organize your workspaces into groups  Only one of the fields  ID  or  Name  is allowed  At least one of the fields  ID  or  Name  is mandatory   More information   terraform tutorials cloud projects     Appears in      WorkspaceSpec   workspacespec     Field   Description                    id   string    Project ID  Must match pattern    prj  a zA Z0 9          name   string    Project name          WorkspaceRunTask  Run tasks allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle  Only one of the fields  ID  or  Name  is allowed  At least one of the fields  ID  or  Name  is mandatory   More information   terraform cloud docs workspaces settings run tasks     Appears in      WorkspaceSpec   
workspacespec     Field   Description                    id   string    Run Task ID  Must match pattern    task  a zA Z0 9          name   string    Run Task Name       enforcementLevel   string    Run Task Enforcement Level  Can be one of  advisory  or  mandatory   Default   advisory   Must be one of the following values   advisory    mandatory  Default   advisory        stage   string    Run Task Stage  Must be one of the following values   pre apply    pre plan    post plan   Default   post plan           WorkspaceSpec  WorkspaceSpec defines the desired state of Workspace    Appears in      Workspace   workspace     Field   Description                    name   string    Workspace name       organization   string    Organization name where the Workspace will be created   More information   terraform cloud docs users teams organizations organizations        token    Token   token     API Token to be used for API calls       applyMethod   string    Define either change will be applied automatically auto  or require an operator to confirm manual   Must be one of the following values   auto    manual   Default   manual    More information   terraform cloud docs workspaces settings auto apply and manual apply        allowDestroyPlan   boolean    Allows a destroy plan to be created and applied  Default   true    More information   terraform cloud docs workspaces settings destruction and deletion        description   string    Workspace description       agentPool    WorkspaceAgentPool   workspaceagentpool     HCP Terraform Agents allow HCP Terraform to communicate with isolated  private  or on premises infrastructure   More information   terraform cloud docs agents        executionMode   string    Define where the Terraform code will be executed  Must be one of the following values   agent    local    remote   Default   remote    More information   terraform cloud docs workspaces settings execution mode        runTasks    WorkspaceRunTask   workspaceruntask  array    
Run tasks allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle   More information   terraform cloud docs workspaces settings run tasks        tags    Tag   tag  array    Workspace tags are used to help identify and group together workspaces  Tags must be one or more characters  can include letters  numbers  colons  hyphens  and underscores  and must begin and end with a letter or number       teamAccess    TeamAccess   teamaccess  array    HCP Terraform workspaces can only be accessed by users with the correct permissions  You can manage permissions for a workspace on a per team basis  When a workspace is created  only the owners team and teams with the  manage workspaces  permission can access it  with full admin permissions  These teams  access can t be removed from a workspace   More information   terraform cloud docs workspaces settings access        terraformVersion   string    The version of Terraform to use for this workspace  If not specified  the latest available version will be used  Must match pattern      d 1      d 1 2      d 1 2    More information       cloud docs workspaces settings terraform version      workingDirectory   string    The directory where Terraform will execute  specified as a relative path from the root of the configuration directory  More information       cloud docs workspaces settings terraform working directory      environmentVariables    Variable   variable  array    Terraform Environment variables for all plans and applies in this workspace  Variables defined within a workspace always overwrite variables from variable sets that have the same type and the same key  More information   br       Workspace variables   terraform cloud docs workspaces variables   br       Environment variables   terraform cloud docs workspaces variables environment variables       terraformVariables    Variable   variable  array    Terraform variables for all plans and applies in this workspace  
Variables defined within a workspace always overwrite variables from variable sets that have the same type and the same key   More information   br       Workspace variables   terraform cloud docs workspaces variables   br      Terraform variables   terraform cloud docs workspaces variables terraform variables       remoteStateSharing    RemoteStateSharing   remotestatesharing     Remote state access between workspaces  By default  new workspaces in HCP Terraform do not allow other workspaces to access their state   More information   terraform cloud docs workspaces state accessing state from other workspaces        runTriggers    RunTrigger   runtrigger  array    Run triggers allow you to connect this workspace to one or more source workspaces  These connections allow runs to queue automatically in this workspace on successful apply of runs in any of the source workspaces   More information   terraform cloud docs workspaces settings run triggers        versionControl    VersionControl   versioncontrol     Settings for the workspace s VCS repository  enabling the UI VCS driven run workflow  Omit this argument to utilize the CLI driven and API driven workflows  where runs are not driven by webhooks on your VCS provider  More information       cloud docs run ui      cloud docs vcs      sshKey    SSHKey   sshkey     SSH key used to clone Terraform modules   More information   terraform cloud docs workspaces settings ssh keys        notifications    Notification   notification  array    Notifications allow you to send messages to other applications based on run and workspace events   More information   terraform cloud docs workspaces settings notifications        project    WorkspaceProject   workspaceproject     Projects let you organize your workspaces into groups  Default  default organization project   More information   terraform tutorials cloud projects        deletionPolicy    DeletionPolicy   deletionpolicy     The Deletion Policy specifies the behavior of the 
custom resource and its associated workspace when the custom resource is deleted  br      retain   When you delete the custom resource  the operator does not delete the workspace  br      soft   Attempts to delete the associated workspace only if it does not contain any managed resources  br      destroy   Executes a destroy operation to remove all resources managed by the associated workspace  Once the destruction of these resources is successful  the operator deletes the workspace  and then deletes the custom resource  br      force   Forcefully and immediately deletes the workspace and the custom resource  br   Default   retain    "}
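The `WorkspaceSpec` fields in the reference above come together in a single Kubernetes manifest. A minimal sketch of a Workspace custom resource follows; the organization, secret, and workspace names are hypothetical placeholders, not values from this reference:

```yaml
# Minimal Workspace custom resource for the HCP Terraform Operator.
# All names below (my-org, tfc-credentials, my-workspace) are examples.
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: example
  namespace: default
spec:
  organization: my-org            # HCP Terraform organization name
  token:
    secretKeyRef:                 # Token: a Secret in the same namespace
      name: tfc-credentials
      key: token
  name: my-workspace              # Workspace name in HCP Terraform
  applyMethod: manual             # one of: auto, manual (default: manual)
  executionMode: remote           # one of: agent, local, remote
  tags:
    - example-tag
  deletionPolicy: retain          # one of: retain, soft, destroy, force
```

Apply it with `kubectl apply -f workspace.yaml`; the operator then creates and reconciles the workspace in the named organization.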
{"questions":"terraform Learn how to automatically migrate state to HCP Terraform or Terraform Enterprise using the Terraform migrate CLI tool page title Terraform migrate HCP Terraform Terraform migrate Terraform migrate automatically migrates your Terraform Community Edition state to HCP Terraform or Terraform Enterprise It also updates your local configuration with any necessary changes and optionally creates a pull request to update your code repository","answers":"---\npage_title: Terraform migrate - HCP Terraform\ndescription: >-\n  Learn how to automatically migrate state to HCP Terraform or Terraform Enterprise using the Terraform migrate CLI tool \n---\n\n# Terraform migrate\n\nTerraform migrate automatically migrates your Terraform Community Edition state to HCP Terraform or Terraform Enterprise. It also updates your local configuration with any necessary changes and optionally creates a pull request to update your code repository.\n\n\n## Overview\n\nComplete the following steps to migrate Terraform state using the CLI:\n\n1. Download and install `tf-migrate`: You can manually download and install or use Homebrew if you are on macOS. \n1. `tf-migrate prepare`: This step scans the current working directory and generates Terraform configuration to migrate your state. The generated migration plan depends on the structure of your configuration. For more information, refer to [tf-migrate prepare](\/terraform\/cloud-docs\/migrate\/tf-migrate\/reference\/prepare).\n\n1. `tf-migrate execute`: This step directs Terraform to run the `init`, `plan`, and `apply` commands to perform the migration to HCP Terraform or Terraform Enterprise. At the end of the migration, `tf-migrate` displays a summary of what it migrated, links to the workspaces it created, and, if configured, a link to the pull request it created. 
For more information, refer to [tf-migrate execute](\/terraform\/cloud-docs\/migrate\/tf-migrate\/reference\/execute).\n\n## Install\n\n<Tabs>\n\n<Tab heading=\"Manual installation\">\n\nHashiCorp distributes Terraform migrate as a binary package. To install Terraform migrate, find the [appropriate binary](https:\/\/releases.hashicorp.com\/tf-migrate\/) for your operating system and download it as a zip archive.\n\nAfter you download Terraform migrate, unzip the archive. Terraform migrate runs as a single binary named `tf-migrate`.\n\nFinally, make sure that the `tf-migrate` binary is available in a directory that is in your system's `PATH`. \n\n### Verify the installation\n\nEvery build of Terraform migrate includes a `SHA256SUMS` and a `SHA256SUMS.sig` file to validate your downloaded binary. Refer to the [verify HashiCorp binary downloads tutorial](https:\/\/developer.hashicorp.com\/well-architected-framework\/operational-excellence\/verify-hashicorp-binary) for more information.\n\n<\/Tab>\n\n<Tab heading=\"Homebrew on macOS\">\n\n[Homebrew](https:\/\/brew.sh\/) is a free and open-source package management system for macOS. You can install the official [Terraform migrate](https:\/\/github.com\/hashicorp\/homebrew-tap) formula from the terminal.\n\nFirst, install the HashiCorp tap, a repository of all our Homebrew packages.\n\n```\n$ brew tap hashicorp\/tap\n```\n\nNow, install Terraform migrate with the `hashicorp\/tap\/tf-migrate` formula.\n\n```\n$ brew install hashicorp\/tap\/tf-migrate\n```\n\n<\/Tab>\n\n<\/Tabs>\n\n## Authentication\n\nTerraform migrate uses your locally configured Terraform CLI API token. 
If you have not authenticated your local Terraform installation with HCP Terraform, use the `terraform login` command to create an authentication token.\n\n```\n$ terraform login\n\nTerraform will request an API token for app.terraform.io using your browser.\n\nIf login is successful, Terraform will store the token in plain text in\nthe following file for use by subsequent commands:\n   \/Users\/redacted\/.terraform.d\/credentials.tfrc.json\n\nDo you want to proceed?\n Only 'yes' will be accepted to confirm.\n\n Enter a value: yes\n```\n\nTerraform opens a browser to the HCP Terraform login screen. Enter a token name in the web UI, or leave the default name. Click **Create API token** to generate the authentication token.\n\nHCP Terraform only displays your token once. Copy this token, then when the Terraform CLI prompts you, paste the user token exactly once into your terminal. Press **Enter** to complete the authentication process.\n\nTerraform migrate uses the GitHub API to create a pull request to update your configuration. Set the `GITHUB_TOKEN` environment variable to provide your GitHub API token. Your token must have the `repo` scope.\n\n```\n$ export GITHUB_TOKEN=<TOKEN>\n```\n\n## Logging\n\nYou can enable detailed logging by setting the `TF_MIGRATE_ENABLE_LOG` environment variable to `true`. 
When you enable this setting, Terraform migrate writes the logs to the following locations, depending on your operating system:\n\n| Platform | Location |\n| -------- | -------- |\n| macOS\/Linux | `\/Users\/<username>\/.tf-migrate\/logs\/<commandName>\/<date>.log` |\n| Windows | `C:\\Users\\<username>\\.tf-migrate\\logs\\<commandName>\\<date>.log` |\n\nYou can set the `TF_MIGRATE_LOG_LEVEL` environment variable to one of the following values to change the verbosity of the logs (in order of decreasing verbosity):\n\n- `TRACE`\n- `DEBUG`\n- `INFO`\n- `WARN`\n- `ERROR`","site":"terraform"}
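The `terraform login` flow described above stores the generated API token in the CLI credentials file (`~/.terraform.d/credentials.tfrc.json` on macOS and Linux). A minimal sketch of that file's shape, with a placeholder token value:

```json
{
  "credentials": {
    "app.terraform.io": {
      "token": "YOUR-API-TOKEN"
    }
  }
}
```

For Terraform Enterprise, the hostname key would be your server's hostname instead of `app.terraform.io`. Because the token is stored in plain text, protect this file with restrictive permissions.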
{"questions":"terraform tf migrate prepare page title tf migrate prepare Terraform migrate HCP Terraform Gather information and create a plan to migrate your Terraform Community Edition state The tf migrate prepare command recursively scans the current directory for Terraform state files then generates new Terraform configuration to migrate the state to HCP Terraform or Terraform Enterprise","answers":"---\npage_title: tf-migrate prepare - Terraform migrate - HCP Terraform\ndescription: >-\n  Gather information and create a plan to migrate your Terraform Community Edition state.\n---\n\n# tf-migrate prepare\n\nThe `tf-migrate prepare` command recursively scans the current directory for Terraform state files, then generates new Terraform configuration to migrate the state to HCP Terraform or Terraform Enterprise. \n\n## Usage\n\n```\n$ tf-migrate prepare [options]\n```\n\n## Description\n\nThe `tf-migrate prepare` command prompts you for the following information:\n\n- The HCP Terraform or Terraform Enterprise organization to migrate your state to.\n- If you would like to create a new branch named `hcp-migrate-main`.  \n- If you would like it to automatically create a pull request with the updated code change when the migration is complete.\n\nAfter you run the `prepare` command, Terraform migrate generates a new Terraform configuration in the `hcp-migrate-config` directory to perform the migration. This configuration creates the following resources:\n\n- One workspace per state file  \n- One project to store all workspaces  \n- A new local git branch if you responded to the prompt to create a new branch with `yes`.  
\n- A new pull request in the remote git repository if you responded to the prompt to create a pull request with `yes`.\n\nThe `tf-migrate` CLI tool adds the generated configuration to the `.gitignore` file so that the configuration is not committed to source control.\n\nTerraform migrate creates the following structure in HCP Terraform or Terraform Enterprise depending on your local configuration:\n\n| Source | Result |\n| :---- | :---- |\n| Single configuration, single state | Single HCP workspace |\n| Single configuration, multiple states for each Community Edition workspace | One HCP workspace per state |\n| Multiple configurations, one state per configuration | One HCP workspace per configuration |\n| Multiple configurations, multiple states per configuration | One HCP workspace per combination of configuration and state |\n\n## Example\n\nThe following configuration uses the AWS S3 backend and has a single state file.\n\n<CodeBlockConfig hideClipboard>\n\n```\n.\n\u251c\u2500\u2500 LICENSE\n\u251c\u2500\u2500 README.md\n\u2514\u2500\u2500 main.tf\n```\n\n<\/CodeBlockConfig>\n\nThe `tf-migrate prepare` command generates the configuration to migrate this state to a single HCP Terraform workspace.\n\n<CodeBlockConfig hideClipboard>\n\n```\n$ tf-migrate prepare\n\u2713 Current working directory: \/tmp\/backend-test\/learn-terraform-migrate\n\u2713 Environment readiness checks completed\n\u2713 Found 3 HCP Terraform organizations\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 Available Orgs             \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 my-org-1                   \u2502\n\u2502 my-org-2                   \u2502\n\u2502 my-org-3                   
\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\nEnter the name of the HCP Terraform organization to migrate to:  my-org-1\n\u2713 You have selected organization my-org-1 for migration\n\u2713 Found 1 directories with Terraform files\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502   Terraform File Directories   \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 learn-terraform-migrate        \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\nCreate a local branch named hcp-migrate-main from the current branch main: ... ?\n\n\n  Only 'yes or no' will be accepted as input.\n  Type 'yes' to approve the step\n  Type 'no' to to skip\n\n\nEnter a value:  yes\n\n\u2713 Successfully created branch hcp-migrate-main\nDo you want to open a pull request from hcp-migrate-main ... ?\n\n\n  Only 'yes or no' will be accepted as input.\n  Type 'yes' to approve the step\n  Type 'no' to to skip\n\n\nEnter a value:  yes\n\n\u2713 Migration config generation completed\n```\n\n<\/CodeBlockConfig>\n\n## Available options\n\nYou can include the following flags when you run the `tf-migrate prepare` command:\n\n| Option | Description | Default | Required |\n| ------ | ----------- | ------- | -------- |\n| `-hostname` | The hostname of your Terraform Enterprise server. If you do not provide a hostname, Terraform migrate defaults to HCP Terraform. 
| `app.terraform.io` | No ","site":"terraform","answers_cleaned":"    page title  tf migrate prepare   Terraform migrate   HCP Terraform description       Gather information and create a plan to migrate your Terraform Community Edition state         tf migrate prepare  The  tf migrate prepare  command recursively scans the current directory for Terraform state files  then generates new Terraform configuration to migrate the state to HCP Terraform or Terraform Enterprise       Usage        tf migrate prepare  options          Description  The  tf migrate prepare  command prompts you for the following information     The HCP Terraform or Terraform Enterprise organization to migrate your state to    If you would like to create a new branch named  hcp migrate main       If you would like it to automatically create a pull request with the updated code change when the migration is complete   After you run the  prepare  command  Terraform migrate generates a new Terraform configuration in the  hcp migrate config  directory to perform the migration  This configuration creates the following resources     One workspace per state file     One project to store all workspaces     A new local git branch if you responded to the prompt to create a new branch with  yes       A new pull request in the remote git repository if you responded to the prompt to create a pull request with  yes    The  tf migrate  CLI tool adds the generated configuration to the   gitignore  file so that the configuration is not committed to source control   Terraform migrate creates the following structure in HCP Terraform or Terraform Enterprise depending on your local configuration     Source   Result                       Single configuration  single state   Single HCP workspace     Single configuration  multiple states for each Community Edition workspace   One HCP workspace per state     Multiple configurations  one state per configuration   One HCP workspace per configuration     Multiple 
configurations  multiple states per configuration   One HCP workspace per combination of configuration and state       Example  The following configuration uses the AWS S3 backend and has a single state file    CodeBlockConfig hideClipboard             LICENSE     README md     main tf        CodeBlockConfig   The  tf migrate prepare  command generates the configuration to migrate this state to a single HCP Terraform workspace    CodeBlockConfig hideClipboard         tf migrate prepare   Current working directory   tmp backend test learn terraform migrate   Environment readiness checks completed   Found 3 HCP Terraform organizations                                  Available Orgs                                                my org 1                       my org 2                       my org 3                                                    Enter the name of the HCP Terraform organization to migrate to   my org 1   You have selected organization my org 1 for migration   Found 1 directories with Terraform files                                        Terraform File Directories                                          learn terraform migrate                                             Create a local branch named hcp migrate main from the current branch main            Only  yes or no  will be accepted as input    Type  yes  to approve the step   Type  no  to to skip   Enter a value   yes    Successfully created branch hcp migrate main Do you want to open a pull request from hcp migrate main           Only  yes or no  will be accepted as input    Type  yes  to approve the step   Type  no  to to skip   Enter a value   yes    Migration config generation completed        CodeBlockConfig      Available options  You can include the following flags when you run the  tf migrate prepare  command     Option   Description   Default   Required                                                     hostname    The hostname of your Terraform Enterprise server  If you do not 
provide a hostname  Terraform migrate defaults to HCP Terraform     app terraform io    No "}
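Per the options table above, pointing the CLI at a Terraform Enterprise server is just a matter of passing `-hostname`. A minimal sketch (the hostname `tfe.example.com` is a placeholder; the directory name comes from the example transcript above):

```shell-session
$ cd ~/learn-terraform-migrate          # directory containing your Terraform state
$ tf-migrate prepare -hostname=tfe.example.com
```

Omitting `-hostname` makes `tf-migrate` target HCP Terraform at `app.terraform.io`, per the default listed in the table.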
{"questions":"terraform private module registry terraform cloud docs registry policy sets terraform cloud docs policy enforcement manage policy sets tfc only true page title GitHub com GitHub App VCS Providers HCP Terraform Learn how to use GitHub com repositories with workspaces and private registry modules without requiring an organization owner to configure an OAuth connection","answers":"---\npage_title: GitHub.com (GitHub App) - VCS Providers - HCP Terraform\ntfc_only: true\ndescription: >-\n  Learn how to use GitHub.com repositories with workspaces and private registry modules, without requiring an organization owner to configure an OAuth connection.\n---\n\n[private module registry]: \/terraform\/cloud-docs\/registry\n\n[policy sets]: \/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets\n\n[vcs settings]: \/terraform\/cloud-docs\/workspaces\/settings\/vcs\n\n[create]: \/terraform\/cloud-docs\/workspaces\/create\n\n[owners]: \/terraform\/cloud-docs\/users-teams-organizations\/teams#the-owners-team\n\n# Configuration-Free GitHub Usage\n\nThese instructions are for using repositories from GitHub.com with HCP Terraform workspaces and private registry modules, without requiring an organization owner to configure an OAuth connection.\n\nThis method uses a preconfigured GitHub App, and only works with GitHub.com. There are separate instructions for connecting to [GitHub.com via OAuth](\/terraform\/cloud-docs\/vcs\/github), connecting to [GitHub Enterprise](\/terraform\/cloud-docs\/vcs\/github-enterprise), and connecting to [other supported VCS providers.](\/terraform\/cloud-docs\/vcs)\n\n-> **Note:** This VCS Provider is only available on HCP Terraform. 
If you are using Terraform Enterprise, you can follow the instructions for creating [GitHub App for TFE](\/terraform\/enterprise\/admin\/application\/github-app-integration) or connecting to [GitHub.com via OAuth](\/terraform\/cloud-docs\/vcs\/github).\n\n## Using GitHub Repositories\n\nChoose \"GitHub.com\" on the \"Connect to a version control provider\" screen, which is shown when [creating a new workspace][create] or [changing a workspace's VCS connection][vcs settings]. Authorize access to GitHub if necessary. On the next screen, select a GitHub account or organization from the drop-down menu (or add a new organization) and choose a repository from the list.\n\nThe controls on the \"Connect to a version control provider\" screen can vary, depending on your permissions and your organization's settings:\n\n- In organizations with no VCS connections configured:\n  - Users with permission to manage VCS settings ([more about permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)) will see several drop-down menus, sorted by product family. Choose \"GitHub.com\" (_not_ \"GitHub.com (Custom)\") from the GitHub menu.\n  - Other users will see a \"GitHub\" button.\n- In organizations with an existing VCS connection, only the connected providers are shown. 
Click the \"Connect to a different VCS\" link to reveal the provider menus (if you can manage VCS settings) or the GitHub button (others).\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## GitHub Permissions\n\nWhen using the Terraform Cloud GitHub App, each HCP Terraform user authenticates individually, and can use GitHub resources within HCP Terraform according to their own GitHub organization memberships and access permissions.\n\n-> **Note:** This is different from OAuth connections, where an HCP Terraform organization always acts as one particular GitHub user.\n\nTo enable this personalized access, HCP Terraform requests two kinds of permissions:\n\n- **Per user:** Each HCP Terraform user must _authorize_ HCP Terraform for their own GitHub account. This lets HCP Terraform determine which organizations and repositories they have access to.\n- **Per GitHub organization:** Each GitHub organization (or personal account) must _install_ the Terraform Cloud app, either globally or for specific repositories. 
This allows HCP Terraform to access repository contents and events.\n\nIndividual HCP Terraform users can access GitHub repositories where both of the following are true:\n\n- The user has at least read access to that repository on GitHub.\n- The repository's owner has installed the Terraform Cloud app and allowed it to access that repository.\n\nThis means that different HCP Terraform users within the same organization can see different sets of repositories available for their workspaces.\n\n### Authorizing\n\nHCP Terraform requests GitHub authorization from each user, displaying a pop-up window the first time they choose GitHub on the \"Connect to a version control provider\" screen.\n\n![Screenshot: GitHub asking whether you want to authorize \"Terraform Cloud by HashiCorp\".](\/img\/docs\/gh-app-authorize.png)\n\nOnce you authorize the app, you can use GitHub in any of your HCP Terraform organizations without needing to re-authorize.\n\nAfter installing the GitHub App and creating your VCS provider instance, you cannot reinstall the application. However, you can modify your existing GitHub App configuration.\n\nIf you are a repository owner, you can adjust an existing GitHub App configuration by:\n1. Click a repository's **Settings** tab.\n1. Select **GitHub Apps** in the sidebar.\n1. Next to **Terraform Cloud**, click **Configure**.\n1. Underneath **Repository access**, adjust the repositories your GitHub App can access.\n\nNow that your GitHub App has access to your desired repository, you can [create a new workspace](\/terraform\/cloud-docs\/workspaces\/create#create-a-workspace) with your existing, newly updated GitHub App connection.\n\nYou can also adjust your GitHub App's access within HCP Terraform itself. Whenever you create a new workspace, you can choose which organizations or repositories to install the GitHub App into. To adjust your GitHub App's configuration, create a new workspace:\n1. Click **Workspaces** in the sidebar.\n1. 
Click **New**, then select **Workspace**.\n1. Choose the project where you want to create your workspace, then click **Create**.\n1. Click **Version Control Workflow**, then choose the **GitHub App**.\n1. Click on your GitHub organization's name to reveal a dropdown, then select **Add another organization** to configure the resources your GitHub app has access to.\n\n\n#### Deauthorizing\n\nYou can use GitHub's web interface to deauthorize HCP Terraform for your GitHub account.\n\nOpen your GitHub personal settings, then go to the \"Applications\" section and the \"Authorized GitHub Apps\" tab. (Or, browse directly to `https:\/\/github.com\/settings\/apps\/authorizations`.) Click the **Revoke** button for HCP Terraform to deauthorize it.\n\nAfter deauthorizing, you won't be able to connect GitHub repositories to HCP Terraform workspaces until you authorize again. Existing connections will still work.\n\n### Installing\n\nHCP Terraform requests installation when a user chooses \"Add another organization\" from the repository list's organization menu.\n\nThe installation interface is a pop-up GitHub window, which lists your personal account and the organizations you can access. 
Note that installing an app for a GitHub organization requires appropriate organization permissions; see [GitHub's permissions documentation](https:\/\/help.github.com\/en\/github\/setting-up-and-managing-organizations-and-teams\/permission-levels-for-an-organization#github-app-managers) for details.\n\n![Screenshot: GitHub asking which organization or account to install the Terraform Cloud app in](\/img\/docs\/gh-app-install-pick.png)\n\nFor a given organization or account, the app can be installed globally or only for specific repositories.\n\nOnce the application is installed for an organization (or a subset of its repositories), its members can select any affected repositories they have access to when using HCP Terraform.\n\nAccess is not restricted to a specific HCP Terraform organization; members of a GitHub organization can use its repositories in any HCP Terraform organization they belong to.\n\n#### Configuring and Uninstalling\n\nYou can use GitHub's web interface to configure or uninstall HCP Terraform for an organization or account.\n\nOpen your GitHub personal settings or organization settings, then go to the **Applications** section and the **Installed GitHub Apps** tab. Click the **Configure** button for HCP Terraform to change its settings.\n\nIn the app's settings you can change which repositories HCP Terraform has access to, or uninstall it entirely.\n\nIf you disallow access to a repository that is currently connected to any HCP Terraform workspaces, those workspaces will be unable to retrieve configuration versions until you change their VCS settings and connect them to an allowed repository.\n\n## Feature Limitations\n\nYou can use the Terraform Cloud GitHub App to create workspaces and private registry modules from the UI, the API, or the TFE Terraform provider. 
The following tools can use any version of HCP Terraform to access these features, but require a minimum version of Terraform Enterprise:\n\n- For the UI, use Terraform Enterprise v202302-1 or above.\n- For the API, use Terraform Enterprise v202303-1 or above.\n- Using at least v1.19.0 of [`go_tfe`](https:\/\/github.com\/hashicorp\/go-tfe), use Terraform Enterprise v202303-1 and above.\n- Using at least v0.43.0 of [`tfe_provider`](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs), use Terraform Enterprise v202303-1 and above.\n\nOnce you decide to start using these other features, a user with permission to manage VCS settings can configure [GitHub OAuth access](\/terraform\/cloud-docs\/vcs\/github) for your organization. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n","site":"terraform","answers_cleaned":"    page title  GitHub com  GitHub App    VCS Providers   HCP Terraform tfc only  true description       Learn how to use GitHub com repositories with workspaces and private registry modules  without requiring an organization owner to configure an OAuth connection        private module registry    terraform cloud docs registry   policy sets    terraform cloud docs policy enforcement manage policy sets   vcs settings    terraform cloud docs workspaces settings vcs   create    terraform cloud docs workspaces create   owners    terraform cloud docs users teams organizations teams the owners team    Configuration Free GitHub Usage  These instructions are for using repositories from GitHub com with HCP Terraform workspaces and private registry modules  without requiring an organization owner to configure an OAuth connection   This method uses a preconfigured GitHub App  and only works with GitHub com  There are separate instructions for connecting to  GitHub com via OAuth   terraform cloud docs vcs github   connecting to  
GitHub Enterprise   terraform cloud docs vcs github enterprise   and connecting to  other supported VCS providers    terraform cloud docs vcs        Note    This VCS Provider is only available on HCP Terraform  If you are using Terraform Enterprise  you can follow the instructions for creating  GitHub App for TFE   terraform enterprise admin application github app integration  or connecting to  GitHub com via OAuth   terraform cloud docs vcs github       Using GitHub Repositories  Choose  GitHub com  on the  Connect to a version control provider  screen  which is shown when  creating a new workspace  create  or  changing a workspace s VCS connection  vcs settings   Authorize access to GitHub if necessary  On the next screen  select a GitHub account or organization from the drop down menu  or add a new organization  and choose a repository from the list   The controls on the  Connect to a version control provider  screen can vary  depending on your permissions and your organization s settings     In organizations with no VCS connections configured      Users with permission to manage VCS settings   more about permissions   terraform cloud docs users teams organizations permissions   will see several drop down menus  sorted by product family  Choose  GitHub com    not   GitHub com  Custom    from the GitHub menu      Other users will see a  GitHub  button    In organizations with an existing VCS connection  only the connected providers are shown  Click the  Connect to a different VCS  link to reveal the provider menus  if you can manage VCS settings  or the GitHub button  others     permissions citation    intentionally unused   keep for maintainers     GitHub Permissions  When using the Terraform Cloud GitHub App  each HCP Terraform user authenticates individually  and can use GitHub resources within HCP Terraform according to their own GitHub organization memberships and access permissions        Note    This is different from OAuth connections  where an HCP 
Terraform organization always acts as one particular GitHub user   To enable this personalized access  HCP Terraform requests two kinds of permissions       Per user    Each HCP Terraform user must  authorize  HCP Terraform for their own GitHub account  This lets HCP Terraform determine which organizations and repositories they have access to      Per GitHub organization    Each GitHub organization  or personal account  must  install  the Terraform Cloud app  either globally or for specific repositories  This allows HCP Terraform to access repository contents and events   Individual HCP Terraform users can access GitHub repositories where both of the following are true     The user has at least read access to that repository on GitHub    The repository s owner has installed the Terraform Cloud app and allowed it to access that repository   This means that different HCP Terraform users within the same organization can see different sets of repositories available for their workspaces       Authorizing  HCP Terraform requests GitHub authorization from each user  displaying a pop up window the first time they choose GitHub on the  Connect to a version control provider  screen     Screenshot  GitHub asking whether you want to authorize  Terraform Cloud by HashiCorp     img docs gh app authorize png   Once you authorize the app  you can use GitHub in any of your HCP Terraform organizations without needing to re authorize   After installing the GitHub App and creating your VCS provider instance  you cannot reinstall the application again  However  you can modify your existing GitHub App configuration   If you are a repository owner  you can adjust an existing Github App configuration by  1  Click a repository s   Settings   tab  1  Select   GitHub Apps   in the sidebar  1  Next to   Terraform Cloud    click   Configure     1  Underneath   Repository access    adjust the repositories your GitHub app can access   Now that your Github app has access to your desired 
repository  you can  create a new workspace   terraform cloud docs workspaces create create a workspace  with your existing  newly updated GitHub App connection   You can also adjust your GitHub App s access within HCP Terraform itself   Whenever you create a new workspace  you can choose which organizations or repositories to install the GitHub App into  To adjust your GitHub App s configuration  create a new workspace  1  Click   Workspaces   in the sidebar   1  Click   New    then select   Workspace    1  Choose the project where you want to create your workspace  then click   Create    1  Click   Version Control Workflow    then choose the   GitHub App    1  Click on your GitHub organization s name to reveal a dropdown  then select   Add another organization   to configure the resources your GitHub app has access to         Deauthorizing  You can use GitHub s web interface to deauthorize HCP Terraform for your GitHub account   Open your GitHub personal settings  then go to the  Applications  section and the  Authorized GitHub Apps  tab   Or  browse directly to  https   github com settings apps authorizations    Click the   Revoke   button for HCP Terraform to deauthorize it   After deauthorizing  you won t be able to connect GitHub repositories to HCP Terraform workspaces until you authorize again  Existing connections will still work       Installing  HCP Terraform requests installation when a user chooses  Add another organization  from the repository list s organization menu   The installation interface is a pop up GitHub window  which lists your personal account and the organizations you can access  Note that installing an app for a GitHub organization requires appropriate organization permissions  see  GitHub s permissions documentation  https   help github com en github setting up and managing organizations and teams permission levels for an organization github app managers  for details     Screenshot  GitHub asking which organization or account to 
install the Terraform Cloud app in   img docs gh app install pick png   For a given organization or account  the app can be installed globally or only for specific repositories   Once the application is installed for an organization  or a subset of its repositories   its members can select any affected repositories they have access to when using HCP Terraform   Access is not restricted to a specific HCP Terraform organization  members of a GitHub organization can use its repositories in any HCP Terraform organization they belong to        Configuring and Uninstalling  You can use GitHub s web interface to configure or uninstall HCP Terraform for an organization or account   Open your GitHub personal settings or organization settings  then go to the   Applications   section and the  Installed GitHub Apps   tab  Click the   Configure   button for HCP Terraform to change its settings   In the app s settings you can change which repositories HCP Terraform has access to  or uninstall it entirely   If you disallow access to a repository that is currently connected to any HCP Terraform workspaces  those workspaces will be unable to retrieve configuration versions until you change their VCS settings and connect them to an allowed repository      Feature Limitations  You can use the Terraform Cloud GitHub App to create workspaces and private registry modules from the UI  the API  or the TFE Terraform provider   The following tools can use any version of HCP Terraform to access these features  but require a minimum version of Terraform Enterprise     For the UI  use Terraform Enterprise v202302 1 or above    For the API  use Terraform Enterprise v202303 1 or above    Using at least v1 19 0 of   go tfe   https   github com hashicorp go tfe   use Terraform Enterprise v202303 1 and above    Using at least v0 43 0 of   tfe provider   https   registry terraform io providers hashicorp tfe latest docs   use Terraform Enterprise v202303 1 and above   Once you decide to start using 
these other features  a user with permission to manage VCS settings can configure  GitHub OAuth access   terraform cloud docs vcs github  for your organization    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers "}
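Since the record above notes that GitHub App support requires v0.43.0 or later of the TFE Terraform provider, a workspace backed by a GitHub App connection can be declared roughly as follows. This is a hedged sketch, not verbatim provider documentation: the organization, workspace, repository, and installation ID values are all placeholders.

```hcl
terraform {
  required_providers {
    tfe = {
      source  = "hashicorp/tfe"
      version = ">= 0.43.0" # first release with GitHub App support, per the notes above
    }
  }
}

resource "tfe_workspace" "app" {
  name         = "my-app-workspace" # placeholder
  organization = "my-hcp-org"       # placeholder

  vcs_repo {
    identifier = "my-github-org/my-repo" # placeholder
    branch     = "main"
    # With the configuration-free GitHub App flow, a GitHub App
    # installation ID takes the place of an OAuth token ID.
    github_app_installation_id = "ghain-example" # placeholder
  }
}
```

The installation ID can be looked up in the HCP Terraform UI or via the GitHub App Installations API once the Terraform Cloud app is installed for the GitHub organization.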
{"questions":"terraform page title GitLab com VCS Providers HCP Terraform These instructions are for using GitLab com for HCP Terraform s VCS features GitLab CE and GitLab EE have separate instructions terraform cloud docs vcs gitlab eece as do the other supported VCS providers terraform cloud docs vcs Learn how to use GitLab com for VCS features Configuring GitLab com Access","answers":"---\npage_title: GitLab.com - VCS Providers - HCP Terraform\ndescription: >-\n  Learn how to use GitLab.com for VCS features.\n---\n\n# Configuring GitLab.com Access\n\nThese instructions are for using GitLab.com for HCP Terraform's VCS features. [GitLab CE and GitLab EE have separate instructions,](\/terraform\/cloud-docs\/vcs\/gitlab-eece) as do the [other supported VCS providers.](\/terraform\/cloud-docs\/vcs)\n\nConfiguring a new VCS provider requires permission to manage VCS settings for the organization. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nConnecting HCP Terraform to your VCS involves four steps:\n\n| On your VCS                                                                    | On HCP Terraform                                                          |\n| ------------------------------------------------------------------------------ | --------------------------------------------------------------------------- |\n| \u00a0                                                                              | Create a new connection in HCP Terraform. Get redirect URI.               |\n| Register your HCP Terraform organization as a new app. Provide redirect URI. | \u00a0                                                                           |\n| \u00a0                                                                              | Provide HCP Terraform with application ID and secret. Request VCS access. |\n| Approve access request.                      
                                  | \u00a0                                                                           |\n\nThe rest of this page explains the GitLab.com versions of these steps.\n\n-> **Note:** Alternately, you can skip the OAuth configuration process and authenticate with a personal access token. This requires using HCP Terraform's API. For details, see [the OAuth Clients API page](\/terraform\/cloud-docs\/api-docs\/oauth-clients).\n\n## Step 1: On HCP Terraform, Begin Adding a New VCS Provider\n\n1. Go to your organization's settings and then click **Providers**. The **VCS Providers** page appears.\n\n1. Click **Add VCS Provider**. The **VCS Providers** page appears.\n\n1. Select **GitLab** and then select **GitLab.com** from the menu. The page moves to the next step.\n\n1. Locate the \"Redirect URI\" and copy it to your clipboard; you'll paste it in the next step.\n\nLeave the page open in a browser tab. In the next step you will copy values from this page, and in later steps you will continue configuring HCP Terraform.\n\n## Step 2: On GitLab, Create a New Application\n\n1. In a new browser tab, open [gitlab.com](https:\/\/gitlab.com) and log in as whichever account you want HCP Terraform to act as. For most organizations this should be a dedicated service user, but a personal account will also work.\n\n   ~> **Important:** The account you use for connecting HCP Terraform **must have Maintainer access** to any shared repositories of Terraform configurations, since creating webhooks requires Maintainer permissions. Refer to [the GitLab documentation](https:\/\/docs.gitlab.com\/ee\/user\/permissions.html#project-members-permissions) for details.\n\n1. Navigate to GitLab's [User Settings > Applications](https:\/\/gitlab.com\/-\/profile\/applications) page.\n\n   This page is located at <https:\/\/gitlab.com\/-\/profile\/applications>. 
You can also reach it through GitLab's menus:\n\n   - Click your profile picture and choose \"Settings.\"\n   - Click \"Applications.\"\n\n1. This page has a list of applications and a form for adding new ones. The form has two text fields and some checkboxes.\n\n   Fill out the fields and checkboxes with the corresponding values currently displayed in your HCP Terraform browser tab. HCP Terraform lists the values in the order they appear, and includes controls for copying values to your clipboard.\n\n   Fill out the form as follows:\n\n   | Field                   | Value                                                                                            |\n   | ----------------------- | ------------------------------------------------------------------------------------------------ |\n   | Name                    | HCP Terraform (`<YOUR ORGANIZATION NAME>`)                                                     |\n   | Redirect URI            | `https:\/\/app.terraform.io\/<YOUR CALLBACK URL>`, the redirect URI you copied from HCP Terraform |\n   | Confidential (checkbox) | \u2714\ufe0f (enabled)                                                                                     |\n   | Scopes (all checkboxes) | api                                                                                              |\n\n\n1. Click the \"Save application\" button, which creates the application and takes you to its page.\n\n1. Leave this page open in a browser tab. In the next step, you will copy and paste the unique **Application ID** and **Secret.**\n\n## Step 3: On HCP Terraform, Set up Your Provider\n\n1. Enter the **Application ID** and **Secret** from the previous step, as well as an optional **Name** for this VCS connection.\n\n1. Click **Connect and continue.** This takes you to a page on GitLab.com, which asks if you want to authorize the app.\n\n1. 
Click the green **Authorize** button at the bottom of the authorization page.\n\n## Step 4: On HCP Terraform, Configure Advanced Settings (Optional)\n\nThe settings in this section are optional. The Advanced Settings you can configure are:\n- **Scope of VCS Provider** - You can configure which workspaces can use repositories from this VCS provider. By default the **All Projects** option is selected, meaning this VCS provider is available to be used by all workspaces in the organization.\n- **Set up SSH Keypair** - Most organizations will not need to add an SSH key. However, if the organization repositories include Git submodules that can only be accessed via SSH, an SSH key can be added along with the OAuth credentials. You can add or update the SSH key at a later time.\n\n### If You Don't Need to Configure Advanced Settings:\n\n1. Click the **Skip and Finish** button. This returns you to HCP Terraform's VCS Provider page, which now includes your new GitLab client.\n\n### If You Need to Limit the Scope of this VCS Provider:\n\n1. Select the **Selected Projects** option and use the text field that appears to search for and select projects to enable. All current and future workspaces for any selected projects can use repositories from this VCS Provider. \n\n1. Click the **Update VCS Provider** button to save your selections.\n\n### If You Do Need an SSH Keypair:\n\n#### Important Notes\n\n- SSH will only be used to clone Git submodules. All other Git operations will still use HTTPS.\n- Do not use your personal SSH key to connect HCP Terraform and GitLab; generate a new one or use an existing key reserved for service access.\n- In the following steps, you must provide HCP Terraform with the private key. Although HCP Terraform does not display the text of the key to users after it is entered, it retains it and will use it when authenticating to GitLab.\n- **Protect this private key carefully.** It can push code to the repositories you use to manage your infrastructure. 
Take note of your organization's policies for protecting important credentials and be sure to follow them.\n\n1. On a secure workstation, create an SSH keypair that HCP Terraform can use to connect to GitLab.com. The exact command depends on your OS, but is usually something like:\n   `ssh-keygen -t rsa -m PEM -f \"\/Users\/<NAME>\/.ssh\/service_terraform\" -C \"service_terraform_enterprise\"`\n   This creates a `service_terraform` file with the private key, and a `service_terraform.pub` file with the public key. This SSH key **must have an empty passphrase**. HCP Terraform cannot use SSH keys that require a passphrase.\n\n1. While logged into the GitLab.com account you want HCP Terraform to act as, navigate to the SSH Keys settings page, add a new SSH key and paste the value of the SSH public key you just created.\n\n1. In HCP Terraform's **Add VCS Provider** page, paste the text of the **SSH private key** you just created, and click the **Add SSH Key** button.\n\n## Finished\n\nAt this point, GitLab.com access for HCP Terraform is fully configured, and you can create Terraform workspaces based on your organization's shared repositories.","site":"terraform"}
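The `ssh-keygen` invocation in the SSH-keypair steps above can be exercised end to end. This sketch adds `-N ""` (to make the required empty passphrase explicit) and `-q`, and writes to a throwaway directory rather than `~/.ssh`; all paths are illustrative:

```shell
# Generate a passphrase-less RSA keypair for service access (sketch).
# -m PEM requests the PEM private-key format named in the docs;
# -N "" sets the empty passphrase HCP Terraform requires.
tmpdir=$(mktemp -d)
ssh-keygen -q -t rsa -m PEM -N "" \
  -f "$tmpdir/service_terraform" -C "service_terraform_enterprise"
# Private key (PEM header confirms -m PEM took effect):
head -n 1 "$tmpdir/service_terraform"
# Public key -- the value to paste into the SSH Keys settings page:
cat "$tmpdir/service_terraform.pub"
rm -rf "$tmpdir"
```

The `.pub` line goes into GitLab's SSH Keys settings page; the private-key file's contents go into HCP Terraform's **Add VCS Provider** page.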
{"questions":"terraform page title GitHub Enterprise VCS Providers HCP Terraform These instructions are for using a self hosted installation of GitHub Enterprise for HCP Terraform s VCS features GitHub com has separate instructions terraform cloud docs vcs github enterprise as do the other supported VCS providers terraform cloud docs vcs Learn how to use an on premise installation of GitHub Enterprise for VCS features Configuring GitHub Enterprise Access","answers":"---\npage_title: GitHub Enterprise - VCS Providers - HCP Terraform\ndescription: >-\n  Learn how to use an on-premise installation of GitHub Enterprise for VCS features.\n---\n\n# Configuring GitHub Enterprise Access\n\nThese instructions are for using a self-hosted installation of GitHub Enterprise for HCP Terraform's VCS features. [GitHub.com has separate instructions,](\/terraform\/cloud-docs\/vcs\/github-enterprise) as do the [other supported VCS providers.](\/terraform\/cloud-docs\/vcs)\n\nConfiguring a new VCS provider requires permission to manage VCS settings for the organization. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nConnecting HCP Terraform to your VCS involves four steps:\n\n| On your VCS                                                                                    | On HCP Terraform                                            |\n| ---------------------------------------------------------------------------------------------- | ------------------------------------------------------------- |\n| \u00a0                                                                                              | Create a new connection in HCP Terraform. Get callback URL. |\n| Register your HCP Terraform organization as a new app. Provide callback URL. Get ID and key. 
| \u00a0                                                             |\n| \u00a0                                                                                              | Provide HCP Terraform with ID and key. Request VCS access.  |\n| Approve access request.                                                                        | \u00a0                                                             |\n\nThe rest of this page explains the GitHub Enterprise versions of these steps.\n\n~> **Important:** HCP Terraform needs to contact your GitHub Enterprise instance during setup and during normal operation. For the SaaS version of HCP Terraform, this means GitHub Enterprise must be internet-accessible; for Terraform Enterprise, you must have network connectivity between your Terraform Enterprise and GitHub Enterprise instances.\n\n-> **Note:** Alternately, you can skip the OAuth configuration process and authenticate with a personal access token. This requires using HCP Terraform's API. For details, see [the OAuth Clients API page](\/terraform\/cloud-docs\/api-docs\/oauth-clients).\n\n## Step 1: On HCP Terraform, begin adding a new VCS provider\n\n1. Go to your organization's settings and then click **Providers** in the Version Control section. The **VCS Providers** page appears.\n\n1. Click **Add a VCS provider**. The **Add VCS Provider** page appears.\n\n1. Select **GitHub** and then select **GitHub Enterprise** from the menu. The page moves to the next step.\n\n1. In the \"Set up provider\" step, fill in the **HTTP URL** and **API URL** of your GitHub Enterprise instance, as well as an optional **Name** for this VCS connection.\n\n   | Field    | Value                                       |\n   | -------- | ------------------------------------------- |\n   | HTTP URL | `https:\/\/<GITHUB INSTANCE HOSTNAME>`        |\n   | API URL  | `https:\/\/<GITHUB INSTANCE HOSTNAME>\/api\/v3` |\n\nLeave the page open in a browser tab. 
In the next step you will copy values from this page, and in later steps you will continue configuring HCP Terraform.\n\n## Step 2: On GitHub, create a new OAuth application\n\n1. In a new browser tab, open your GitHub Enterprise instance and log in as whichever account you want HCP Terraform to act as. For most organizations this should be a dedicated service user, but a personal account will also work.\n\n   ~> **Important:** The account you use for connecting HCP Terraform **must have admin access** to any shared repositories of Terraform configurations, since creating webhooks requires admin permissions.\n\n1. Navigate to GitHub's Register a New OAuth Application page.\n\n   This page is located at `https:\/\/<GITHUB INSTANCE HOSTNAME>\/settings\/applications\/new`. You can also reach it through GitHub's menus:\n\n   - Click your profile picture and choose \"Settings.\"\n   - Click \"OAuth Apps\" (under the \"Developer settings\" section).\n   - Click the \"Register a new application\" button.\n\n1. This page has a form with four text fields.\n\n   Fill out the fields with the corresponding values currently displayed in your HCP Terraform browser tab. HCP Terraform lists the values in the order they appear, and includes controls for copying values to your clipboard.\n\n   Fill out the text fields as follows:\n\n   | Field name                 | Value                                                                         |\n   | -------------------------- | ----------------------------------------------------------------------------- |\n   | Application Name           | HCP Terraform (`<YOUR ORGANIZATION NAME>`)                                  |\n   | Homepage URL               | `https:\/\/app.terraform.io` (or the URL of your Terraform Enterprise instance) |\n   | Application Description    | Any description of your choice.                                               
|\n   | Authorization callback URL | `https:\/\/app.terraform.io\/<YOUR CALLBACK URL>`                                |\n\n1. Click the \"Register application\" button, which creates the application and takes you to its page.\n\n1. <a href=\"https:\/\/content.hashicorp.com\/api\/assets?product=terraform-docs-common&version=main&asset=website\/img\/docs\/hcp-terraform-logo-on-white.png\" download>Download this image of the HCP Terraform logo<\/a> and upload it with the \"Upload new logo\" button or the drag-and-drop target. This optional step helps you identify HCP Terraform's pull request checks at a glance.\n\n1. Click the \"Generate a new client secret\" button. You will need this secret in the next step.\n\n1. Leave this page open in a browser tab. In the next step, you will copy and paste the unique **Client ID** and **Client Secret.**\n\n## Step 3: On HCP Terraform, set up your provider\n\n1. Enter the **Client ID** and **Client Secret** from the previous step.\n\n1. Click \"Connect and continue.\" This takes you to a page on your GitHub Enterprise instance, asking whether you want to authorize the app.\n\n1. The authorization page lists any GitHub organizations this account belongs to. If there is a **Request** button next to the organization that owns your Terraform code repositories, click it now. Note that you need to do this even if you are only connecting workspaces to private forks of repositories in those organizations since those forks are subject to the organization's access restrictions.  See [About OAuth App access restrictions](https:\/\/docs.github.com\/en\/organizations\/managing-oauth-access-to-your-organizations-data\/about-oauth-app-access-restrictions).\n\n   If it results in a 500 error, it usually means HCP Terraform was unable to reach your GitHub Enterprise instance.\n\n1. Click the green \"Authorize `<GITHUB USER>`\" button at the bottom of the authorization page. 
GitHub might request your password or multi-factor token to confirm the operation.\n\n## Step 4: On HCP Terraform, configure advanced settings (optional)\n\nThe settings in this section are optional. The Advanced Settings you can configure are:\n- **Scope of VCS Provider** - You can configure which workspaces can use repositories from this VCS provider. By default the **All Projects** option is selected, meaning this VCS provider is available to be used by all workspaces in the organization.\n- **Set up SSH Keypair** - Most organizations will not need to add an SSH key. However, if the organization repositories include Git submodules that can only be accessed via SSH, an SSH key can be added along with the OAuth credentials. You can add or update the SSH key at a later time.\n\n### If you don't need to configure advanced settings:\n\n1. Click the **Skip and finish** button. This returns you to HCP Terraform's VCS Providers page, which now includes your new GitHub Enterprise client.\n\n### If you need to limit the scope of this VCS provider:\n\n1. Select the **Selected Projects** option and use the text field that appears to search for and select projects to enable. All current and future workspaces for any selected projects can use repositories from this VCS Provider. \n\n1. Click the **Update VCS Provider** button to save your selections.\n\n### If you need an SSH keypair:\n\n#### Important notes\n\n- SSH will only be used to clone Git submodules. All other Git operations will still use HTTPS.\n- Do not use your personal SSH key to connect HCP Terraform and GitHub Enterprise; generate a new one or use an existing key reserved for service access.\n- In the following steps, you must provide HCP Terraform with the private key. 
Although HCP Terraform does not display the text of the key to users after it is entered, it retains it and will use it when authenticating to GitHub Enterprise.\n- **Protect this private key carefully.** It can push code to the repositories you use to manage your infrastructure. Take note of your organization's policies for protecting important credentials and be sure to follow them.\n\n1. On a secure workstation, create an SSH keypair that HCP Terraform can use to connect to GitHub Enterprise. The exact command depends on your OS, but is usually something like:\n   `ssh-keygen -t rsa -m PEM -f \"\/Users\/<NAME>\/.ssh\/service_terraform\" -C \"service_terraform_enterprise\"`\n   This creates a `service_terraform` file with the private key, and a `service_terraform.pub` file with the public key. This SSH key **must have an empty passphrase**. HCP Terraform cannot use SSH keys that require a passphrase.\n\n1. While logged into the GitHub Enterprise account you want HCP Terraform to act as, navigate to the SSH Keys settings page, add a new SSH key and paste the value of the SSH public key you just created.\n\n1. In HCP Terraform's **Add VCS Provider** page, paste the text of the **SSH private key** you just created, and click the **Add SSH Key** button.\n\n## Step 5: Contact your GitHub organization admins\n\nIf your organization uses OAuth app access restrictions, you had to click a **Request** button when authorizing HCP Terraform, which sent an automated email to the administrators of your GitHub organization. 
An administrator must approve the request before HCP Terraform can access your organization's shared repositories.\n\nIf you're a GitHub administrator, check your email now and respond to the request; otherwise, contact whoever is responsible for GitHub accounts in your organization, and wait for confirmation that they've approved your request.\n\n## Finished\n\nAt this point, GitHub access for HCP Terraform is fully configured, and you can create Terraform workspaces based on your organization's shared GitHub Enterprise repositories.","site":"terraform"}
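The **HTTP URL**/**API URL** pair from Step 1, and the OAuth-app registration path from Step 2, follow a fixed pattern relative to the instance hostname. A minimal sketch (the hostname is illustrative; substitute your own instance):

```shell
# Derive the URLs used in Steps 1-2 from a GitHub Enterprise hostname.
GHE_HOSTNAME="ghe.example.com"
HTTP_URL="https://${GHE_HOSTNAME}"                   # Step 1: HTTP URL
API_URL="${HTTP_URL}/api/v3"                         # Step 1: API URL (GHE REST API root)
NEW_APP_URL="${HTTP_URL}/settings/applications/new"  # Step 2: OAuth app registration form
printf '%s\n' "$HTTP_URL" "$API_URL" "$NEW_APP_URL"
```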
{"questions":"terraform These instructions are for using dev azure com for HCP Terraform s VCS features Other supported VCS providers terraform cloud docs vcs have separate instructions page title Azure DevOps Services VCS Providers HCP Terraform Configuring Azure DevOps Services Access Learn how to use Azure DevOps Services dev azure com for VCS features","answers":"---\npage_title: Azure DevOps Services - VCS Providers - HCP Terraform\ndescription: >-\n  Learn how to use Azure DevOps Services (dev.azure.com) for VCS features.\n---\n\n# Configuring Azure DevOps Services Access\n\nThese instructions are for using `dev.azure.com` for HCP Terraform's VCS features. [Other supported VCS providers](\/terraform\/cloud-docs\/vcs) have separate instructions.\n\nThis page explains the four main steps required to connect HCP Terraform to your Azure DevOps Services VCS:\n\n1. Create a new connection in HCP Terraform and get the callback URL.\n1. On your VCS, register your HCP Terraform organization as a new application. Provide the callback URL and get the application ID and key.\n1. Provide HCP Terraform with the application ID and key. Then, request VCS access.\n1. On your VCS, approve the access request from HCP Terraform.\n\n~> **Important:** HCP Terraform only supports Azure DevOps connections that use the `dev.azure.com` domain. 
If your Azure DevOps project uses the older `visualstudio.com` domain, you must migrate using the [steps in the Microsoft documentation](https:\/\/docs.microsoft.com\/en-us\/azure\/devops\/release-notes\/2018\/sep-10-azure-devops-launch#switch-existing-organizations-to-use-the-new-domain-name-url).\n\n## Requirements\n\nConfiguring a new VCS provider requires permission to [manage VCS settings](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-vcs-settings) for the organization.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nBefore you begin, enable `Third-party application access via OAuth` in Azure DevOps Services settings.\n\n1. Log in to [Azure DevOps Services](https:\/\/dev.azure.com\/).\n1. Click **Organization settings**.\n1. Click **Policies** under **Security**.\n1. Enable the **Third-party application access via OAuth** setting.\n\n   ![Azure DevOps Services Screenshot: Policies Third-party application access via Oauth](\/img\/docs\/azure-devops-services-oauth-policies.png)\n\n## Step 1: On HCP Terraform, Begin Adding a New VCS Provider\n\n1. Go to your organization's settings and then click **Providers**. The **VCS Providers** page appears.\n\n1. Click **Add VCS Provider**. The **Add VCS Provider** page appears.\n\n1. Select **Azure DevOps** and then select **Azure DevOps Services** from the menu. The page moves to the next step.\n\nLeave this page open in a browser tab. You will copy values from this page into Azure DevOps in the next step, and in later steps you will continue configuring HCP Terraform.\n\n## Step 2: From your Azure DevOps Services Profile, Create a New Application\n\n1. In a new browser tab, open your [Azure DevOps Services Profile](https:\/\/aex.dev.azure.com), and log in to your Azure DevOps Services account if necessary. 
A page with a list of your organizations appears.\n\n   ~> **Important:** The Azure DevOps Services account you use for connecting HCP Terraform must have Project Collection Administrator access to any projects containing repositories of Terraform configurations, since creating webhooks requires admin permissions. It is not possible to create custom access roles with lower levels of privilege, as Microsoft does not currently allow delegation of this capability. If you're unable to load the link above, you can create a new application for the next step at one of the following links: `https:\/\/aex.dev.azure.com\/app\/register?mkt=en-US` or `https:\/\/app.vsaex.visualstudio.com\/app\/register?mkt=en-US`.\n\n1. Go into your preferred organization.\n\n1. Click your user icon and then click the **ellipses (...)** and select **User settings**.\n\n1. From the User settings menu, click **Profile**. Your profile page appears.\n\n1. Click **Authorizations**. The Authorized OAuth Apps page appears.\n\n1. Click the link to register a new app. A form appears asking for your company and application information.\n\n1. Fill out the fields and checkboxes with the corresponding values currently displayed in your HCP Terraform browser tab. HCP Terraform lists the values in the order they appear and includes controls for copying values to your clipboard. 
Here is an example:\n\n   | Field name                 | Value                                                                         |\n   | -------------------------- | ----------------------------------------------------------------------------- |\n   | Company name               | HashiCorp                                                                     |\n   | Application Name           | HCP Terraform (`<YOUR ORGANIZATION NAME>`)                                  |\n   | Application website        | `https:\/\/app.terraform.io` (or the URL of your Terraform Enterprise instance) |\n   | Authorization callback URL | `https:\/\/app.terraform.io\/<YOUR CALLBACK URL>`                                |\n\n   In the **Authorized scopes** section, select only **Code (read)** and **Code (status)** and then click **Create Application.**\n\n   ![Azure DevOps Services Screenshot: Required permissions when creating a new application in your Azure DevOps Services Profile](\/img\/docs\/azure-devops-services-application-permissions.png)\n\n   ~> **Important:** Do not add any additional scopes beyond **Code (read)** and **Code (status),** as this can prevent HCP Terraform from connecting. Note that these authorized scopes cannot be updated after the application is created; to fix incorrect scopes you must delete and re-create the application.\n\n1. After creating the application, the next page displays its details. Leave this page open in a browser tab. In the next step, you will copy and paste the unique **App ID** and **Client Secret** from this page.\n\n   If you accidentally close this details page and need to find it later, you can reach it from the **Applications and Services** links in your profile.\n\n## Step 3: On HCP Terraform, Set up Your Provider\n\n1. (Optional) Enter a **Name** for this VCS connection.\n\n1. Enter your Azure DevOps Services application's **App ID** and **Client Secret**. 
These can be found in the application's details, which should still be open in the browser tab from Step 2.\n\n1. Click **Connect and continue.** This takes you to a page on Azure DevOps Services, asking whether you want to authorize the app. Click the **Accept** button and you'll be redirected back to HCP Terraform.\n\n   -> **Note:** If you receive a 404 error from Azure DevOps Services, it likely means your callback URL has not been configured correctly.\n\n## Step 4: On HCP Terraform, Configure Advanced Settings (Optional)\n\nThe settings in this section are optional. The Advanced Settings you can configure are:\n- **Scope of VCS Provider** - You can configure which workspaces can use repositories from this VCS provider. By default the **All Projects** option is selected, meaning this VCS provider is available to be used by all workspaces in the organization.\n- **Set up SSH Keypair** - Most organizations will not need to add an SSH key. However, if the organization repositories include Git submodules that can only be accessed via SSH, an SSH key can be added along with the OAuth credentials. You can add or update the SSH key at a later time.\n\n### If You Don't Need to Configure Advanced Settings:\n\n1. Click the **Skip and Finish** button. This returns you to HCP Terraform's VCS Provider page, which now includes your new Azure DevOps Services client.\n\n### If You Need to Limit the Scope of this VCS Provider:\n\n1. Select the **Selected Projects** option and use the text field that appears to search for and select projects to enable. All current and future workspaces for any selected projects can use repositories from this VCS Provider. \n\n1. Click the **Update VCS Provider** button to save your selections.\n\n### If You Do Need an SSH Keypair:\n\n#### Important Notes\n\n- SSH will only be used to clone Git submodules. 
All other Git operations will still use HTTPS.\n- Do not use your personal SSH key to connect HCP Terraform and Azure DevOps Services; generate a new one or use an existing key reserved for service access.\n- In the following steps, you must provide HCP Terraform with the private key. Although HCP Terraform does not display the text of the key to users after it is entered, it retains it and will use it when authenticating to Azure DevOps Services.\n- **Protect this private key carefully.** It can push code to the repositories you use to manage your infrastructure. Take note of your organization's policies for protecting important credentials and be sure to follow them.\n\n1. On a secure workstation, create an SSH keypair that HCP Terraform can use to connect to Azure DevOps Services. The exact command depends on your OS, but is usually something like:\n   `ssh-keygen -t rsa -m PEM -f \"\/Users\/<NAME>\/.ssh\/service_terraform\" -C \"service_terraform_enterprise\"`\n   This creates a `service_terraform` file with the private key, and a `service_terraform.pub` file with the public key. This SSH key **must have an empty passphrase**. HCP Terraform cannot use SSH keys that require a passphrase.\n\n1. While logged into the Azure DevOps Services account you want HCP Terraform to act as, navigate to the SSH Keys settings page, add a new SSH key, and paste the value of the SSH public key you just created.\n\n1. 
In HCP Terraform's **Add VCS Provider** page, paste the text of the **SSH private key** you just created, and click the **Add SSH Key** button.\n\n## Finished\n\nAt this point, Azure DevOps Services access for HCP Terraform is fully configured, and you can create Terraform workspaces based on your organization's repositories.","site":"terraform","answers_cleaned":"    page title  Azure DevOps Services   VCS Providers   HCP Terraform description       Learn how to use Azure DevOps Services  dev azure com  for VCS features         Configuring Azure DevOps Services Access  These instructions are for using  dev azure com  for HCP Terraform s VCS features   Other supported VCS providers   terraform cloud docs vcs  have separate instructions   This page explains the four main steps required to connect HCP Terraform to your Azure DevOps Services VCS   1  Create a new connection in HCP Terraform and get the callback URL  1  On your VCS  register your HCP Terraform organization as a new application  Provide the callback URL and get the application ID and key  1  Provide HCP Terraform with the application ID and key  Then  request VCS access  1  On your VCS  approve the access request from HCP Terraform        Important    HCP Terraform only supports Azure DevOps connections that use the  dev azure com  domain  If your Azure DevOps project uses the older  visualstudio com  domain  you must migrate using the  steps in the Microsoft documentation  https   docs microsoft com en us azure devops release notes 2018 sep 10 azure devops launch switch existing organizations to use the new domain name url       Requirements  Configuring a new VCS provider requires permission to  manage VCS settings   terraform cloud docs users teams organizations permissions manage vcs settings  for the organization    permissions citation    intentionally unused   keep for maintainers  Before you begin  enable  Third party application access via OAuth  in Azure DevOps Services settings   1  Log in to 
 Azure DevOps Services  https   dev azure com    1  Click   Organization settings    1  Click   Policies   under   Security    1  Enable the   Third party application access via OAuth   setting        Azure DevOps Services Screenshot  Policies Third party application access via Oauth   img docs azure devops services oauth policies png      Step 1  On HCP Terraform  Begin Adding a New VCS Provider  1  Go to your organization s settings and then click   Providers    The   VCS Providers   page appears   1  Click   Add VCS Provider    The   VCS Providers   page appears   1  Select   Azure DevOps   and then select   Azure DevOps Services   from the menu  The page moves to the next step   Leave this page open in a browser tab  You will copy values from this page into Azure DevOps in the next step  and in later steps you will continue configuring HCP Terraform      Step 2  From your Azure DevOps Services Profile  Create a New Application  1  In a new browser tab  open your  Azure DevOps Services Profile  https   aex dev azure com   and log in to your Azure DevOps Services account if necessary  A page with a list of your organizations appears           Important    The Azure DevOps Services account you use for connecting HCP Terraform must have Project Collection Administrator access to any projects containing repositories of Terraform configurations  since creating webhooks requires admin permissions  It is not possible to create custom access roles with lower levels of privilege  as Microsoft does not currently allow delegation of this capability  If you re unable to load the link above  you can create a new application for the next step at one of the following links   https   aex dev azure com app register mkt en US  or  https   app vsaex visualstudio com app register mkt en US    1  Go into your preferred organization   1  Click your user icon and then click the   ellipses          and select   User settings     1  From the User settings menu  click   Profile    Your 
profile page appears   1  Click   Authorizations    The Authorized OAuth Apps page appears   1  Click the link to register a new app  A form appears asking for your company and application information   1  Fill out the fields and checkboxes with the corresponding values currently displayed in your HCP Terraform browser tab  HCP Terraform lists the values in the order they appear and includes controls for copying values to your clipboard  Here is an example        Field name                   Value                                                                                                                                                                                                  Company name                 HashiCorp                                                                            Application Name             HCP Terraform    YOUR ORGANIZATION NAME                                            Application website           https   app terraform io   or the URL of your Terraform Enterprise instance         Authorization callback URL    https   app terraform io  YOUR CALLBACK URL                                        In the   Authorized scopes   section  select only   Code  read    and   Code  status    and then click   Create Application          Azure DevOps Services Screenshot  Required permissions when creating a new application in your Azure DevOps Services Profile   img docs azure devops services application permissions png           Important    Do not add any additional scopes beyond   Code  read    and   Code  status     as this can prevent HCP Terraform from connecting  Note that these authorized scopes cannot be updated after the application is created  to fix incorrect scopes you must delete and re create the application   1  After creating the application  the next page displays its details  Leave this page open in a browser tab  In the next step  you will copy and paste the unique   App ID   and   Client Secret   from this page      If 
you accidentally close this details page and need to find it later  you can reach it from the   Applications and Services   links in your profile      Step 3  On HCP Terraform  Set up Your Provider  1   Optional  Enter a   Name   for this VCS connection   1  Enter your Azure DevOps Services application s   App ID   and   Client Secret    These can be found in the application s details  which should still be open in the browser tab from Step 2   1  Click   Connect and continue    This takes you to a page on Azure DevOps Services  asking whether you want to authorize the app  Click the   Accept   button and you ll be redirected back to HCP Terraform           Note    If you receive a 404 error from Azure DevOps Services  it likely means your callback URL has not been configured correctly      Step 4  On HCP Terraform  Configure Advanced Settings  Optional   The settings in this section are optional  The Advanced Settings you can configure are      Scope of VCS Provider     You can configure which workspaces can use repositories from this VCS provider  By default the   All Projects   option is selected  meaning this VCS provider is available to be used by all workspaces in the organization      Set up SSH Keypair     Most organizations will not need to add an SSH key  However  if the organization repositories include Git submodules that can only be accessed via SSH  an SSH key can be added along with the OAuth credentials  You can add or update the SSH key at a later time       If You Don t Need to Configure Advanced Settings   1  Click the   Skip and Finish   button  This returns you to HCP Terraform s VCS Provider page  which now includes your new Azure DevOps Services client       If You Need to Limit the Scope of this VCS Provider   1  Select the   Selected Projects   option and use the text field that appears to search for and select projects to enable  All current and future workspaces for any selected projects can use repositories from this VCS Provider    1  
Click the   Update VCS Provider   button to save your selections       If You Do Need an SSH Keypair        Important Notes    SSH will only be used to clone Git submodules  All other Git operations will still use HTTPS    Do not use your personal SSH key to connect HCP Terraform and Azure DevOps Services  generate a new one or use an existing key reserved for service access    In the following steps  you must provide HCP Terraform with the private key  Although HCP Terraform does not display the text of the key to users after it is entered  it retains it and will use it when authenticating to Azure DevOps Services      Protect this private key carefully    It can push code to the repositories you use to manage your infrastructure  Take note of your organization s policies for protecting important credentials and be sure to follow them   1  On a secure workstation  create an SSH keypair that HCP Terraform can use to connect to Azure DevOps Services com  The exact command depends on your OS  but is usually something like      ssh keygen  t rsa  m PEM  f   Users  NAME   ssh service terraform   C  service terraform enterprise      This creates a  service terraform  file with the private key  and a  service terraform pub  file with the public key  This SSH key   must have an empty passphrase    HCP Terraform cannot use SSH keys that require a passphrase   1  While logged into the Azure DevOps Services account you want HCP Terraform to act as  navigate to the SSH Keys settings page  add a new SSH key and paste the value of the SSH public key you just created   1  In HCP Terraform s   Add VCS Provider   page  paste the text of the   SSH private key   you just created  and click the   Add SSH Key   button      Finished  At this point  Azure DevOps Services access for HCP Terraform is fully configured  and you can create Terraform workspaces based on your organization s repositories "}
{"questions":"terraform Configuring GitHub com Access OAuth Learn how to use GitHub com for VCS features using a per organization OAuth connection with the permissions of an individual GitHub user page title GitHub com OAuth VCS Providers HCP Terraform These instructions are for using GitHub com for HCP Terraform s VCS features using a per organization OAuth connection with the permissions of one particular GitHub user GitHub Enterprise has separate instructions terraform enterprise vcs github enterprise as do the other supported VCS providers terraform enterprise vcs","answers":"---\npage_title: GitHub.com (OAuth) - VCS Providers - HCP Terraform\ndescription: >-\n  Learn how to use GitHub.com for VCS features, using a per-organization OAuth connection with the permissions of an individual GitHub user.\n---\n\n# Configuring GitHub.com Access (OAuth)\n\nThese instructions are for using GitHub.com for HCP Terraform's VCS features, using a per-organization OAuth connection with the permissions of one particular GitHub user. [GitHub Enterprise has separate instructions,](\/terraform\/enterprise\/vcs\/github-enterprise) as do the [other supported VCS providers.](\/terraform\/enterprise\/vcs)\n\n<!-- BEGIN: TFC:only -->\nFor new users on HCP Terraform, we recommend using our [configuration-free GitHub App](\/terraform\/cloud-docs\/vcs\/github-app) to access repositories instead.\n<!-- END: TFC:only -->\n\nFor Terraform Enterprise site admins, you can create your own [GitHub App](\/terraform\/enterprise\/admin\/application\/github-app-integration) to access repositories.\n\nConfiguring a new VCS provider requires permission to manage VCS settings for the organization. 
([More about permissions.](\/terraform\/enterprise\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nConnecting HCP Terraform to your VCS involves four steps:\n\n| On your VCS                                                                    | On HCP Terraform                                            |\n| ------------------------------------------------------------------------------ | ------------------------------------------------------------- |\n| \u00a0                                                                              | Create a new connection in HCP Terraform. Get callback URL. |\n| Register your HCP Terraform organization as a new app. Provide callback URL. | \u00a0                                                             |\n| \u00a0                                                                              | Provide HCP Terraform with ID and key. Request VCS access.  |\n| Approve access request.                                                        | \u00a0                                                             |\n\nThe rest of this page explains the GitHub versions of these steps.\n\n-> **Note:** Alternately, you can skip the OAuth configuration process and authenticate with a personal access token. This requires using HCP Terraform's API. For details, see [the OAuth Clients API page](\/terraform\/cloud-docs\/api-docs\/oauth-clients).\n\n## Step 1: On HCP Terraform, begin adding a new VCS provider\n\n1. Go to your organization's settings and then click **Providers** in the **Version Control** section. The **VCS Providers** page appears.\n\n1. Click **Add a VCS provider**. The **Add VCS Provider** page appears.\n\n1. Select **GitHub** and then select **GitHub.com (Custom)** from the menu. The page moves to the next step.\n\nLeave the page open in a browser tab. 
In the next step you will copy values from this page, and in later steps you will continue configuring HCP Terraform.\n\n## Step 2: On GitHub, create a new OAuth application\n\nOn the HCP Terraform **Add VCS Provider** page, click **register a new OAuth Application**. This opens GitHub.com in a new browser tab with the OAuth application settings pre-filled.\n\nAlternately, create the OAuth application manually on GitHub.com.\n\n### Manual steps\n\n1. In a new browser tab, open [github.com](https:\/\/github.com) and log in as whichever account you want HCP Terraform to act as. For most organizations this should be a dedicated service user, but a personal account will also work.\n\n   ~> **Important:** The account you use for connecting HCP Terraform **must have admin access** to any shared repositories of Terraform configurations, since creating webhooks requires admin permissions.\n\n1. Navigate to GitHub's [Register a New OAuth Application](https:\/\/github.com\/settings\/applications\/new) page.\n\n   This page is located at <https:\/\/github.com\/settings\/applications\/new>. You can also reach it through GitHub's menus:\n\n   - Click your profile picture and choose \"Settings.\"\n   - Click \"Developer settings,\" then make sure you're on the \"OAuth Apps\" page (not \"GitHub Apps\").\n   - Click the \"New OAuth App\" button.\n\n1. This page has a form with four text fields.\n\n   Fill out the fields with the corresponding values currently displayed in your HCP Terraform browser tab. 
HCP Terraform lists the values in the order they appear, and includes controls for copying values to your clipboard.\n\n   Fill out the text fields as follows:\n\n   | Field name                 | Value                                                                         |\n   | -------------------------- | ----------------------------------------------------------------------------- |\n   | Application Name           | HCP Terraform (`<YOUR ORGANIZATION NAME>`)                                  |\n   | Homepage URL               | `https:\/\/app.terraform.io` (or the URL of your Terraform Enterprise instance) |\n   | Application Description    | Any description of your choice.                                               |\n   | Authorization callback URL | `https:\/\/app.terraform.io\/<YOUR CALLBACK URL>`                                |\n\n### Register the OAuth application\n\n1. Click the \"Register application\" button, which creates the application and takes you to its page.\n\n1. <a href=\"https:\/\/content.hashicorp.com\/api\/assets?product=terraform-docs-common&version=main&asset=website\/img\/docs\/hcp-terraform-logo-on-white.png\" download>Download this image of the HCP Terraform logo<\/a> and upload it with the \"Upload new logo\" button or the drag-and-drop target. This optional step helps you identify HCP Terraform's pull request checks at a glance.\n\n1. Click the **Generate a new client secret** button. You will need this secret in the next step.\n\n1. Leave this page open in a browser tab. In the next step, you will copy and paste the unique **Client ID** and **Client Secret.**\n\n## Step 3: On HCP Terraform, set up your provider\n\n1. Enter the **Client ID** and **Client Secret** from the previous step, as well as an optional **Name** for this VCS connection.\n\n1. Click \"Connect and continue.\" This takes you to a page on GitHub.com, asking whether you want to authorize the app.\n\n1. 
The authorization page lists any GitHub organizations this account belongs to. If there is a **Request** button next to the organization that owns your Terraform code repositories, click it now. Note that you need to do this even if you are only connecting workspaces to private forks of repositories in those organizations, since those forks are subject to the organization's access restrictions. See [About OAuth App access restrictions](https:\/\/docs.github.com\/en\/organizations\/managing-oauth-access-to-your-organizations-data\/about-oauth-app-access-restrictions).\n\n1. Click the green \"Authorize `<GITHUB USER>`\" button at the bottom of the authorization page. GitHub might request your password or multi-factor token to confirm the operation.\n\n## Step 4: On HCP Terraform, configure advanced settings (optional)\n\nThe settings in this section are optional. The Advanced Settings you can configure are:\n- **Scope of VCS Provider** - You can configure which workspaces can use repositories from this VCS provider. By default the **All Projects** option is selected, meaning this VCS provider is available to be used by all workspaces in the organization.\n- **Set up SSH Keypair** - Most organizations will not need to add an SSH key. However, if the organization repositories include Git submodules that can only be accessed via SSH, an SSH key can be added along with the OAuth credentials. You can add or update the SSH key at a later time.\n\n### If you don't need to configure advanced settings:\n\n1. Click the **Skip and finish** button. This returns you to HCP Terraform's **VCS Providers** page, which now includes your new GitHub client.\n\n### If you need to limit the scope of this VCS provider:\n\n1. Select the **Selected Projects** option and use the text field that appears to search for and select projects to enable. All current and future workspaces for any selected projects can use repositories from this VCS Provider. \n\n1. 
Click the **Update VCS Provider** button to save your selections.\n\n### If you need an SSH keypair:\n\n#### Important notes\n\n- SSH will only be used to clone Git submodules. All other Git operations will still use HTTPS.\n- Do not use your personal SSH key to connect HCP Terraform and GitHub; generate a new one or use an existing key reserved for service access.\n- In the following steps, you must provide HCP Terraform with the private key. Although HCP Terraform does not display the text of the key to users after it is entered, it retains it and will use it when authenticating to GitHub.\n- **Protect this private key carefully.** It can push code to the repositories you use to manage your infrastructure. Take note of your organization's policies for protecting important credentials and be sure to follow them.\n\n1. On a secure workstation, create an SSH keypair that HCP Terraform can use to connect to GitHub.com. The exact command depends on your OS, but is usually something like:\n   `ssh-keygen -t rsa -m PEM -f \"\/Users\/<NAME>\/.ssh\/service_terraform\" -C \"service_terraform_enterprise\"`\n   This creates a `service_terraform` file with the private key, and a `service_terraform.pub` file with the public key. This SSH key **must have an empty passphrase**. HCP Terraform cannot use SSH keys that require a passphrase.\n\n1. While logged into the GitHub.com account you want HCP Terraform to act as, navigate to the SSH Keys settings page, add a new SSH key and paste the value of the SSH public key you just created.\n\n1. In HCP Terraform's **Add VCS Provider** page, paste the text of the **SSH private key** you just created, and click the **Add SSH Key** button.\n\n## Step 5: Contact your GitHub organization admins\n\nIf your organization uses OAuth app access restrictions, you had to click a **Request** button when authorizing HCP Terraform, which sent an automated email to the administrators of your GitHub organization. 
An administrator must approve the request before HCP Terraform can access your organization's shared repositories.\n\nIf you're a GitHub administrator, check your email now and respond to the request; otherwise, contact whoever is responsible for GitHub accounts in your organization, and wait for confirmation that they've approved your request.\n\n## Finished\n\nAt this point, GitHub access for HCP Terraform is fully configured, and you can create Terraform workspaces based on your organization's shared GitHub repositories.","site":"terraform","answers_cleaned":"    page title  GitHub com  OAuth    VCS Providers   HCP Terraform description       Learn how to use GitHub com for VCS features  using a per organization OAuth connection with the permissions of an individual GitHub user         Configuring GitHub com Access  OAuth   These instructions are for using GitHub com for HCP Terraform s VCS features  using a per organization OAuth connection with the permissions of one particular GitHub user   GitHub Enterprise has separate instructions    terraform enterprise vcs github enterprise  as do the  other supported VCS providers    terraform enterprise vcs        BEGIN  TFC only     For new users on HCP Terraform  we recommend using our  configuration free GitHub App   terraform cloud docs vcs github app  to access repositories instead       END  TFC only      For Terraform Enterprise site admins  you can create your own  GitHub App   terraform enterprise admin application github app integration  to access repositories   Configuring a new VCS provider requires permission to manage VCS settings for the organization    More about permissions    terraform enterprise users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers  Connecting HCP Terraform to your VCS involves four steps     On your VCS                                                                      On HCP Terraform                                          
                                                                                                                                                                                                                                          Create a new connection in HCP Terraform  Get callback URL      Register your HCP Terraform organization as a new app  Provide callback URL                                                                                                                                                       Provide HCP Terraform with ID and key  Request VCS access       Approve access request                                                                                                                            The rest of this page explains the GitHub versions of these steps        Note    Alternately  you can skip the OAuth configuration process and authenticate with a personal access token  This requires using HCP Terraform s API  For details  see  the OAuth Clients API page   terraform cloud docs api docs oauth clients       Step 1  On HCP Terraform  begin adding a new VCS provider  1  Go to your organization s settings and then click   Providers   in the   Version Control   section  The   VCS Providers   page appears   1  Click   Add a VCS provider    The   Add VCS Provider   page appears   1  Select   GitHub   and then select   GitHub com  Custom    from the menu  The page moves to the next step   Leave the page open in a browser tab  In the next step you will copy values from this page  and in later steps you will continue configuring HCP Terraform      Step 2  On GitHub  create a new OAuth application  On the HCP Terraform   Add VCS Provider   page  click   register a new OAuth Application    This opens GitHub com in a new browser tab with the OAuth application settings pre filled   Alternately  create the OAuth application manually on GitHub com       Manual steps  1  In a new browser tab  open  github com  https   github com  and log in as 
whichever account you want HCP Terraform to act as. For most organizations this should be a dedicated service user, but a personal account will also work.

   ~> **Important:** The account you use for connecting HCP Terraform **must have admin access** to any shared repositories of Terraform configurations, since creating webhooks requires admin permissions.

1. Navigate to GitHub's [Register a New OAuth Application](https://github.com/settings/applications/new) page.

   This page is located at `https://github.com/settings/applications/new`. You can also reach it through GitHub's menus:

   - Click your profile picture and choose "Settings."
   - Click "Developer settings," then make sure you're on the "OAuth Apps" page (not "GitHub Apps").
   - Click the "New OAuth App" button.

1. This page has a form with four text fields.

   Fill out the fields with the corresponding values currently displayed in your HCP Terraform browser tab. HCP Terraform lists the values in the order they appear, and includes controls for copying values to your clipboard.

   Fill out the text fields as follows:

   | Field name                 | Value                                                                         |
   | -------------------------- | ----------------------------------------------------------------------------- |
   | Application Name           | HCP Terraform (`<YOUR ORGANIZATION NAME>`)                                    |
   | Homepage URL               | `https://app.terraform.io` (or the URL of your Terraform Enterprise instance) |
   | Application Description    | Any description of your choice                                                |
   | Authorization callback URL | `https://app.terraform.io/<YOUR CALLBACK URL>`                                |

1. Register the OAuth application: click the "Register application" button, which creates the application and takes you to its page.

1. [Download this image of the HCP Terraform logo](https://content.hashicorp.com/api/assets/product/terraform-docs-common/version/main/asset/website/img/docs/hcp-terraform-logo-on-white.png) and upload it with the "Upload new logo" button or the drag-and-drop target. This optional step helps you identify HCP Terraform's pull request checks at a glance.

1. Click the **Generate a new client secret** button. You will need this secret in the next step.

1. Leave this page open in a browser tab. In the next step, you will copy and paste the unique **Client ID** and **Client Secret**.

## Step 3: On HCP Terraform, Set up Your Provider

1. Enter the **Client ID** and **Client Secret** from the previous step, as well as an optional **Name** for this VCS connection.

1. Click **Connect and continue.** This takes you to a page on GitHub.com, asking whether you want to authorize the app.

1. The authorization page lists any GitHub organizations this account belongs to. If there is a **Request** button next to the organization that owns your Terraform code repositories, click it now. Note that you need to do this even if you are only connecting workspaces to private forks of repositories in those organizations, since those forks are subject to the organization's access restrictions. See [About OAuth App access restrictions](https://docs.github.com/en/organizations/managing-oauth-access-to-your-organizations-data/about-oauth-app-access-restrictions).

1. Click the green **Authorize `<GITHUB USER>`** button at the bottom of the authorization page. GitHub might request your password or multi-factor token to confirm the operation.

## Step 4: On HCP Terraform, Configure Advanced Settings (Optional)

The settings in this section are optional. The Advanced Settings you can configure are:

- **Scope of VCS Provider** - You can configure which workspaces can use repositories from this VCS provider. By default the **All Projects** option is selected, meaning this VCS provider is available to be used by all workspaces in the organization.
- **Set up SSH Keypair** - Most organizations will not need to add an SSH key. However, if the organization repositories include Git submodules that can only be accessed via SSH, an SSH key can be added along with the OAuth credentials. You can add or update the SSH key at a later time.

### If You Don't Need to Configure Advanced Settings:

1. Click the **Skip and finish** button. This returns you to HCP Terraform's **VCS Providers** page, which now includes your new GitHub client.

### If You Need to Limit the Scope of This VCS Provider:

1. Select the **Selected Projects** option and use the text field that appears to search for and select projects to enable. All current and future workspaces for any selected projects can use repositories from this VCS Provider.

1. Click the **Update VCS Provider** button to save your selections.

### If You Need an SSH Keypair:

#### Important Notes

- SSH will only be used to clone Git submodules. All other Git operations will still use HTTPS.
- Do not use your personal SSH key to connect HCP Terraform and GitHub; generate a new one or use an existing key reserved for service access.
- In the following steps, you must provide HCP Terraform with the private key. Although HCP Terraform does not display the text of the key to users after it is entered, it retains it and will use it when authenticating to GitHub.
- **Protect this private key carefully.** It can push code to the repositories you use to manage your infrastructure. Take note of your organization's policies for protecting important credentials and be sure to follow them.

1. On a secure workstation, create an SSH keypair that HCP Terraform can use to connect to GitHub.com. The exact command depends on your OS, but is usually something like:
   `ssh-keygen -t rsa -m PEM -f "/Users/<NAME>/.ssh/service_terraform" -C "service_terraform_enterprise"`
   This creates a `service_terraform` file with the private key, and a `service_terraform.pub` file with the public key. This SSH key **must have an empty passphrase**. HCP Terraform cannot use SSH keys that require a passphrase.

1. While logged into the GitHub.com account you want HCP Terraform to act as, navigate to the SSH Keys settings page, add a new SSH key and paste the value of the SSH public key you just created.

1. In HCP Terraform's **Add VCS Provider** page, paste the text of the **SSH private key** you just created, and click the **Add SSH Key** button.

## Step 5: Contact Your GitHub Organization Admins

If your organization uses OAuth app access restrictions, you had to click a **Request** button when authorizing HCP Terraform, which sent an automated email to the administrators of your GitHub organization. An administrator must approve the request before HCP Terraform can access your organization's shared repositories.

If you're a GitHub administrator, check your email now and respond to the request; otherwise, contact whoever is responsible for GitHub accounts in your organization, and wait for confirmation that they've approved your request.

## Finished

At this point, GitHub access for HCP Terraform is fully configured, and you can create Terraform workspaces based on your organization's shared GitHub repositories."}
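The keypair-creation step above can be sketched end to end. This is a minimal sketch assuming a POSIX shell and OpenSSH; the output path and key comment are illustrative, and `-N ""` supplies the empty passphrase HCP Terraform requires:

```shell
# Generate a PEM-formatted RSA keypair with an empty passphrase (-N "").
# File name and comment are placeholders; use a dedicated service key.
ssh-keygen -t rsa -b 4096 -m PEM -N "" -f ./service_terraform -C "service_terraform_enterprise"

# The private key is what you paste into HCP Terraform; confirm it is PEM formatted:
head -n 1 ./service_terraform    # -----BEGIN RSA PRIVATE KEY-----

# The public key is what you paste into GitHub's SSH Keys settings page:
cat ./service_terraform.pub
```

If `head` prints an OpenSSH-format header instead of the RSA PEM header, the `-m PEM` flag was dropped; regenerate the key, since a non-PEM or passphrase-protected key may be rejected.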
{"questions":"terraform Configuring GitLab EE and CE Access Learn how to use on premise installation of GitLab Enterprise Edition EE or GitLab Community Edition CE for VCS features page title GitLab EE and CE VCS Providers HCP Terraform These instructions are for using an on premise installation of GitLab Enterprise Edition EE or GitLab Community Edition CE for HCP Terraform s VCS features GitLab com has separate instructions terraform cloud docs vcs gitlab com as do the other supported VCS providers terraform cloud docs vcs","answers":"---\npage_title: GitLab EE and CE - VCS Providers - HCP Terraform\ndescription: >-\n  Learn how to use on-premise installation of GitLab Enterprise Edition (EE) or GitLab Community Edition (CE) for VCS features.\n---\n\n# Configuring GitLab EE and CE Access\n\nThese instructions are for using an on-premise installation of GitLab Enterprise Edition (EE) or GitLab Community Edition (CE) for HCP Terraform's VCS features. [GitLab.com has separate instructions,](\/terraform\/cloud-docs\/vcs\/gitlab-com) as do the [other supported VCS providers.](\/terraform\/cloud-docs\/vcs)\n\nConfiguring a new VCS provider requires permission to manage VCS settings for the organization. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nConnecting HCP Terraform to your VCS involves four steps:\n\n| On your VCS                                                                                    | On HCP Terraform                                                          |\n| ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------- |\n| \u00a0                                                                                              | Create a new connection in HCP Terraform. Get redirect URI.               
|\n| Register your HCP Terraform organization as a new app. Provide redirect URI. Get ID and key. | \u00a0                                                                           |\n| \u00a0                                                                                              | Provide HCP Terraform with application ID and secret. Request VCS access. |\n| Approve access request.                                                                        | \u00a0                                                                           |\n\nThe rest of this page explains the on-premise GitLab versions of these steps.\n\n~> **Important:** HCP Terraform needs to contact your GitLab instance during setup and during normal operation. For the SaaS version of HCP Terraform, this means GitLab must be internet-accessible; for Terraform Enterprise, you must have network connectivity between your Terraform Enterprise and GitLab instances.\n\n-> **Note:** Alternately, you can skip the OAuth configuration process and authenticate with a personal access token. This requires using HCP Terraform's API. For details, see [the OAuth Clients API page](\/terraform\/cloud-docs\/api-docs\/oauth-clients).\n\n-> **Version Note:** HCP Terraform supports GitLab versions 9.0 and newer. HashiCorp does not test older versions of GitLab with HCP Terraform, and they might not work as expected. Also note that, although we do not deliberately remove support for versions that have reached end of life (per the [GitLab Support End of Life Policy](https:\/\/docs.gitlab.com\/ee\/policy\/maintenance.html#patch-releases)), our ability to resolve customer issues with end of life versions might be limited.\n\n## Step 1: On HCP Terraform, Begin Adding a New VCS Provider\n\n1. Go to your organization's settings and then click **Providers**. The **VCS Providers** page appears.\n\n1. Click **Add VCS Provider**. The **Add VCS Provider** page appears.\n\n1. 
Select **GitLab** and then select **GitLab Enterprise Edition** or **GitLab Community Edition** from the menu. The page moves to the next step.\n\n1. In the \"Set up provider\" step, fill in the **HTTP URL** and **API URL** of your GitLab Enterprise Edition or GitLab Community Edition instance, as well as an optional **Name** for this VCS connection. Click \"Continue.\"\n\n   | Field    | Value                                       |\n   | -------- | ------------------------------------------- |\n   | HTTP URL | `https:\/\/<GITLAB INSTANCE HOSTNAME>`        |\n   | API URL  | `https:\/\/<GITLAB INSTANCE HOSTNAME>\/api\/v4` |\n\n   Note that HCP Terraform uses GitLab's v4 API.\n\nLeave the page open in a browser tab. In the next step you will copy values from this page, and in later steps you will continue configuring HCP Terraform.\n\n## Step 2: On GitLab, Create a New Application\n\n1. In a new browser tab, open your GitLab instance and log in as whichever account you want HCP Terraform to act as. For most organizations this should be a dedicated service user, but a personal account will also work.\n\n   ~> **Important:** The account you use for connecting HCP Terraform **must have admin (master) access** to any shared repositories of Terraform configurations, since creating webhooks requires admin permissions. Do not create the application as an administrative application not owned by a user; HCP Terraform needs user access to repositories to create webhooks and ingress configurations.\n\n   ~> **Important**: In GitLab CE or EE 10.6 and up, you may also need to enable **Allow requests to the local network from hooks and services** on the \"Outbound requests\" section inside the Admin area under Settings (`\/admin\/application_settings\/network`). Refer to [the GitLab documentation](https:\/\/docs.gitlab.com\/ee\/security\/webhooks.html) for details.\n\n1. 
Navigate to GitLab's \"User Settings > Applications\" page.\n\n   This page is located at `https:\/\/<GITLAB INSTANCE HOSTNAME>\/profile\/applications`. You can also reach it through GitLab's menus:\n\n   - Click your profile picture and choose \"Settings.\"\n   - Click \"Applications.\"\n\n1. This page has a list of applications and a form for adding new ones. The form has two text fields and some checkboxes.\n\n   Fill out the fields and checkboxes with the corresponding values currently displayed in your HCP Terraform browser tab. HCP Terraform lists the values in the order they appear, and includes controls for copying values to your clipboard.\n\n   Fill out the form as follows:\n\n   | Field                   | Value                                          |\n   | ----------------------- | ---------------------------------------------- |\n   | Name                    | HCP Terraform (`<YOUR ORGANIZATION NAME>`)   |\n   | Redirect URI            | `https:\/\/app.terraform.io\/<YOUR CALLBACK URL>` |\n   | Confidential (checkbox) | \u2714\ufe0f (enabled)                                   |\n   | Expire access tokens (checkbox) | (no longer required)                   |\n   | Scopes (all checkboxes) | api                                            |\n\n   -> **Note:** For previous versions of HCP Terraform and GitLab, we recommended disabling a setting called `Expire access tokens`. This action was required because GitLab marked OAuth tokens as expired after 2 hours, but HCP Terraform only refreshed tokens after 6 hours. This setting does not exist on GitLab v15+ and HCP Terraform now refreshes tokens more often.\n\n1. Click the \"Save application\" button, which creates the application and takes you to its page.\n\n1. Leave this page open in a browser tab. In the next step, you will copy and paste the unique **Application ID** and **Secret.**\n\n\n## Step 3: On HCP Terraform, Set up Your Provider\n\n1. 
On the \"Configure settings\" step on HCP Terraform, enter the **Application ID** and **Secret** from the previous step.\n\n1. Click **Connect and continue.** This takes you to a page on GitLab asking whether you want to authorize the app. If you are redirected to a 500 error instead, it usually means HCP Terraform was unable to reach your GitLab instance.\n\n1. Click the green **Authorize** button at the bottom of the authorization page.\n\n## Step 4: On HCP Terraform, Configure Advanced Settings (Optional)\n\nThe settings in this section are optional. The Advanced Settings you can configure are:\n- **Scope of VCS Provider** - You can configure which workspaces can use repositories from this VCS provider. By default the **All Projects** option is selected, meaning this VCS provider is available to be used by all workspaces in the organization.\n- **Set up a PEM formatted SSH Keypair** - Most organizations will not need to add an SSH key. However, if the organization repositories include Git submodules that can only be accessed via SSH, an SSH key can be added along with the OAuth credentials. You can add or update the SSH key at a later time.\n\n### If You Don't Need to Configure Advanced Settings:\n\n1. Click the **Skip and Finish** button. This returns you to HCP Terraform's VCS Provider page, which now includes your new GitLab client.\n\n### If You Need to Limit the Scope of this VCS Provider:\n\n1. Select the **Selected Projects** option and use the text field that appears to search for and select projects to enable. All current and future workspaces for any selected projects can use repositories from this VCS Provider.\n\n1. Click the **Update VCS Provider** button to save your selections.\n\n### If You Do Need a PEM formatted SSH Keypair:\n\n#### Important Notes\n\n- SSH will only be used to clone Git submodules. 
All other Git operations will still use HTTPS.\n- Do not use your personal SSH key to connect HCP Terraform and GitLab; generate a new one or use an existing key reserved for service access.\n- In the following steps, you must provide HCP Terraform with the private key. Although HCP Terraform does not display the text of the key to users after it is entered, it retains it and will use it when authenticating to GitLab.\n- **Protect this private key carefully.** It can push code to the repositories you use to manage your infrastructure. Take note of your organization's policies for protecting important credentials and be sure to follow them.\n\n1. On a secure workstation, create a PEM formatted SSH keypair that HCP Terraform can use to connect to GitLab. The exact command depends on your OS, but is usually something like:\n   `ssh-keygen -t rsa -m PEM -f \"\/Users\/<NAME>\/.ssh\/service_terraform\" -C \"service_terraform_enterprise\"`\n   This creates a `service_terraform` file with the private key, and a `service_terraform.pub` file with the public key. This SSH key **must have an empty passphrase**. HCP Terraform cannot use SSH keys that require a passphrase.\n\n1. While logged into the GitLab account you want HCP Terraform to act as, navigate to the SSH Keys settings page, add a new SSH key and paste the value of the SSH public key you just created.\n\n1. 
In HCP Terraform's **Add VCS Provider** page, paste the text of the **SSH private key** you just created, and click the **Add SSH Key** button.\n\n## Finished\n\nAt this point, GitLab access for HCP Terraform is fully configured, and you can create Terraform workspaces based on your organization's shared repositories.","site":"terraform"}
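The **HTTP URL** and **API URL** fields in Step 1 are related mechanically: HCP Terraform talks to GitLab's v4 API, so the API URL is always the instance URL plus `/api/v4`. A minimal sketch of deriving both values, assuming a placeholder hostname:

```shell
# Placeholder instance; substitute your GitLab EE/CE hostname.
HTTP_URL="https://gitlab.example.com"
# HCP Terraform uses GitLab's v4 API, so the API URL is the HTTP URL plus /api/v4.
API_URL="${HTTP_URL}/api/v4"

echo "HTTP URL: ${HTTP_URL}"
echo "API URL:  ${API_URL}"

# With network access and a token, you could also confirm the API answers:
# curl -fsS --header "PRIVATE-TOKEN: <token>" "${API_URL}/version"
```

This is only a pre-flight sanity check; HCP Terraform itself performs the OAuth exchange against the application you registered in Step 2.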
{"questions":"terraform HCP Terraform is more powerful when you integrate it with your version control system VCS provider Although you can use many of HCP Terraform s features without one a VCS connection provides additional features and improved workflows In particular Connecting VCS Providers to HCP Terraform page title Connecting VCS Providers HCP Terraform Learn how to connect a version control system VCS to your organization to integrate HCP Terraform into your development workflow and access additional features","answers":"---\npage_title: Connecting VCS Providers - HCP Terraform\ndescription: >-\n  Learn how to connect a version control system (VCS) to your organization to integrate HCP Terraform into your development workflow and access additional features. \n---\n\n# Connecting VCS Providers to HCP Terraform\n\nHCP Terraform is more powerful when you integrate it with your version control system (VCS) provider. Although you can use many of HCP Terraform's features without one, a VCS connection provides additional features and improved workflows. In particular:\n\n- When workspaces are linked to a VCS repository, HCP Terraform can [automatically initiate Terraform runs](\/terraform\/cloud-docs\/run\/ui) when changes are committed to the specified branch.\n- HCP Terraform makes code review easier by [automatically predicting](\/terraform\/cloud-docs\/run\/ui#speculative-plans-on-pull-requests) how pull requests will affect infrastructure.\n- Publishing new versions of a [private Terraform module](\/terraform\/cloud-docs\/registry\/publish-modules) is as easy as pushing a tag to the module's repository.\n\nWe recommend configuring VCS access when first setting up an organization, and you might need to add additional VCS providers later depending on how your organization grows.\n\nConfiguring a new VCS provider requires permission to manage VCS settings for the organization. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Supported VCS Providers\n\nHCP Terraform supports the following VCS providers:\n<!-- BEGIN: TFC:only -->\n-   [GitHub.com](\/terraform\/cloud-docs\/vcs\/github-app)\n\n<!-- END: TFC:only -->\n\n- [GitHub App for TFE](\/terraform\/enterprise\/admin\/application\/github-app-integration)\n- [GitHub.com (OAuth)](\/terraform\/cloud-docs\/vcs\/github)\n- [GitHub Enterprise](\/terraform\/cloud-docs\/vcs\/github-enterprise)\n- [GitLab.com](\/terraform\/cloud-docs\/vcs\/gitlab-com)\n- [GitLab EE and CE](\/terraform\/cloud-docs\/vcs\/gitlab-eece)\n- [Bitbucket Cloud](\/terraform\/cloud-docs\/vcs\/bitbucket-cloud)\n- [Bitbucket Data Center](\/terraform\/cloud-docs\/vcs\/bitbucket-data-center)\n- [Azure DevOps Server](\/terraform\/cloud-docs\/vcs\/azure-devops-server)\n- [Azure DevOps Services](\/terraform\/cloud-docs\/vcs\/azure-devops-services)\n\nUse the links above to see details on configuring VCS access for each supported provider. If you use another VCS that is not supported, you can build an integration via [the API-driven run workflow](\/terraform\/cloud-docs\/run\/api).\n\n## How HCP Terraform Uses VCS Access\n\nMost workspaces in HCP Terraform are associated with a VCS repository, which provides Terraform configurations for that workspace. 
To find out which repos are available, access their contents, and create webhooks, HCP Terraform needs access to your VCS provider.\n\nAlthough HCP Terraform's API lets you create workspaces and push configurations to them without a VCS connection, the primary workflow expects every workspace to be backed by a repository.\n\nTo use configurations from VCS, HCP Terraform needs to do several things:\n\n- Access a list of repositories, to let you search for repos when creating new workspaces.\n- Register webhooks with your VCS provider, to get notified of new commits to a chosen branch.\n- Download the contents of a repository at a specific commit in order to run Terraform with that code.\n\n~> **Important:** HCP Terraform usually performs VCS actions using a designated VCS user account, but it has no other knowledge about your VCS's authorization controls and does not associate HCP Terraform user accounts with VCS user accounts. This means HCP Terraform's VCS user might have a different level of access to repositories than any given HCP Terraform user. Keep this in mind when selecting a VCS user, as it may affect your security posture in one or both systems.\n\n### Webhooks\n\nHCP Terraform uses webhooks to monitor new commits and pull requests.\n\n- When someone adds new commits to a branch, any HCP Terraform workspaces based on that branch will begin a Terraform run. Usually a user must inspect the plan output and approve an apply, but you can also enable automatic applies on a per-workspace basis. You can prevent automatic runs by locking a workspace. A run will only occur if the workspace has not previously processed a run for the commit SHA.\n- When someone submits a pull request\/merge request to a branch, any HCP Terraform workspaces based on that branch will perform a [speculative plan](\/terraform\/cloud-docs\/run\/remote-operations#speculative-plans) with the contents of the request and links to the results on the PR's page. 
This helps you avoid merging PRs that cause plan failures.\n\n~> **Important:** In Terraform Enterprise, integration with a SaaS VCS provider (GitHub.com, GitLab.com, Bitbucket Cloud, or Azure DevOps Services) requires ingress from the public internet. This lets the inbound webhooks reach Terraform Enterprise. You should also configure appropriate security controls, such as a Web Application Firewall (WAF).\n\n### SSH Keys\n\nFor most supported VCS providers, HCP Terraform does not need an SSH key. This is because Terraform can do everything it needs with the provider's API and an OAuth token. The exceptions are Azure DevOps Server and Bitbucket Data Center, which require an SSH key for downloading repository contents. Refer to the setup instructions for [Azure DevOps Server](\/terraform\/cloud-docs\/vcs\/azure-devops-server) and [Bitbucket Data Center](\/terraform\/cloud-docs\/vcs\/bitbucket-data-center) for details.\n\nFor other VCS providers, most organizations will not need to add an SSH private key. However, if the organization repositories include Git submodules that can only be accessed via SSH, an SSH key can be added along with the OAuth credentials.\n\nFor VCS providers where adding an SSH private key is optional, SSH will only be used to clone Git submodules. All other Git operations will still use HTTPS.\n\nIf submodules will be cloned via SSH from a private VCS instance, SSH must be running on the standard port 22 on the VCS server.\n\nTo add an SSH key to a VCS connection, finish configuring OAuth in the organization settings, and then use the \"add a private SSH key\" link on the VCS Provider settings page to add a private key that has access to the submodule repositories. When setting up a workspace, if submodules are required, select \"Include submodules on clone\". 
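The "Include submodules on clone" option amounts to a recursive clone, with submodules fetched over SSH. A minimal local sketch of the equivalent Git operations (the repository paths, the `modules/sub` location, and the committer identity are illustrative stand-ins, not anything HCP Terraform actually names):

```shell
# Illustrative only: local stand-ins for provider-hosted repositories.
set -e
tmp="$(mktemp -d)"

# A stand-in for a submodule repository (in practice, one reachable only via SSH).
git init -q "$tmp/sub"
git -C "$tmp/sub" -c user.email=svc@example.com -c user.name=svc \
    commit -q --allow-empty -m "init submodule"

# A stand-in for the main configuration repository, referencing the submodule.
git init -q "$tmp/main"
git -C "$tmp/main" -c protocol.file.allow=always \
    submodule --quiet add "$tmp/sub" modules/sub
git -C "$tmp/main" -c user.email=svc@example.com -c user.name=svc \
    commit -q -m "add submodule"

# "Include submodules on clone" corresponds to a recursive clone:
git -c protocol.file.allow=always clone -q --recurse-submodules \
    "$tmp/main" "$tmp/checkout"
```

(`protocol.file.allow=always` is only needed because this sketch uses local file-path URLs; real SSH submodule URLs would not require it.)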
For more information, refer to [Workspace settings](\/terraform\/cloud-docs\/workspaces\/settings).\n\n### Multiple VCS Connections\n\nIf your infrastructure code is spread across multiple VCS providers, you can configure multiple VCS connections. You can choose which VCS connection to use whenever you create a new workspace.\n\n#### Scoping VCS Connections using Projects\n\nYou can configure which projects can use repositories from a VCS connection. By default, each VCS connection is enabled for all workspaces in the organization. If you need to limit which projects can use repositories from a given VCS connection, you can change this setting to enable the connection for only workspaces in the selected projects.\n\n## Configuring VCS Access\n\nHCP Terraform uses the OAuth protocol to authenticate with VCS providers.\n\n~> **Important:** Even if you've used OAuth before, read the instructions carefully. Since HCP Terraform's security model treats each _organization_ as a separate OAuth application, we authenticate with OAuth's developer workflow, which is more complex than the standard user workflow.\n\nThe exact steps to authenticate are different for each VCS provider, but they follow this general order:\n\n| On your VCS                                                              | On HCP Terraform                                                               |\n| ------------------------------------------------------------------------ | -------------------------------------------------------------------------------- |\n| Register your HCP Terraform organization as a new app. Get ID and key. | \u00a0                                                                                |\n| \u00a0                                                                        | Tell HCP Terraform how to reach VCS, and provide ID and key. Get callback URL. |\n| Provide callback URL.                                                    
| \u00a0                                                                                |\n| \u00a0                                                                        | Request VCS access.                                                              |\n| Approve access request.                                                  | \u00a0                                                                                |\n\nFor complete details, click the link for your VCS provider:\n\n- [GitHub](\/terraform\/cloud-docs\/vcs\/github)\n- [GitHub Enterprise](\/terraform\/cloud-docs\/vcs\/github-enterprise)\n- [GitLab.com](\/terraform\/cloud-docs\/vcs\/gitlab-com)\n- [GitLab EE and CE](\/terraform\/cloud-docs\/vcs\/gitlab-eece)\n- [Bitbucket Cloud](\/terraform\/cloud-docs\/vcs\/bitbucket-cloud)\n- [Bitbucket Data Center](\/terraform\/cloud-docs\/vcs\/bitbucket-data-center)\n- [Azure DevOps Server](\/terraform\/cloud-docs\/vcs\/azure-devops-server)\n- [Azure DevOps Services](\/terraform\/cloud-docs\/vcs\/azure-devops-services)\n\n-> **Note:** Alternatively, you can skip the OAuth configuration process and authenticate with a personal access token. This requires using HCP Terraform's API. For details, see [the OAuth Clients API page](\/terraform\/cloud-docs\/api-docs\/oauth-clients).\n\n<!-- BEGIN: TFC:only -->\n### Private VCS\n\nYou can use self-hosted HCP Terraform Agents to connect HCP Terraform to your private VCS provider, such as GitHub Enterprise, GitLab Enterprise, and Bitbucket Data Center. For more information, refer to [Connect to Private VCS Providers](\/terraform\/cloud-docs\/vcs\/private).\n\n<!-- END: TFC:only -->\n\n## Viewing events\n\nVCS events describe changes within your organization for VCS-related actions. The VCS events page only displays events from previously processed commits in the past 30 days. 
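As described under Webhooks, a workspace starts at most one run per commit SHA. A sketch of that dedup rule in shell (`handle_push` and the SHAs are illustrative names, not HCP Terraform's implementation):

```shell
# Illustrative only: skip processing for commit SHAs seen before.
processed=""
handle_push() {
  case " $processed " in
    *" $1 "*) echo "Processing skipped for duplicate commit SHA" ;;
    *) processed="$processed $1"; echo "Run queued for $1" ;;
  esac
}

handle_push a6ec84d   # first delivery for this SHA: a run is queued
handle_push a6ec84d   # repeat delivery: processing is skipped
```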
The VCS page indicates previously processed commits with the message, `\"Processing skipped for duplicate commit SHA\"`.\n\nViewing VCS events requires permission to manage VCS settings for the organization. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nTo view VCS events for your organization, go to your organization's settings and click **Events**. The **VCS Events** page appears.","site":"terraform","answers_cleaned":"    page title  Connecting VCS Providers   HCP Terraform description       Learn how to connect a version control system  VCS  to your organization to integrate HCP Terraform into your development workflow and access additional features          Connecting VCS Providers to HCP Terraform  HCP Terraform is more powerful when you integrate it with your version control system  VCS  provider  Although you can use many of HCP Terraform s features without one  a VCS connection provides additional features and improved workflows  In particular     When workspaces are linked to a VCS repository  HCP Terraform can  automatically initiate Terraform runs   terraform cloud docs run ui  when changes are committed to the specified branch    HCP Terraform makes code review easier by  automatically predicting   terraform cloud docs run ui speculative plans on pull requests  how pull requests will affect infrastructure    Publishing new versions of a  private Terraform module   terraform cloud docs registry publish modules  is as easy as pushing a tag to the module s repository   We recommend configuring VCS access when first setting up an organization  and you might need to add additional VCS providers later depending on how your organization grows   Configuring a new VCS provider requires permission to manage VCS settings for the organization    More about permissions    terraform cloud docs users teams organizations permissions     permissions 
citation    intentionally unused   keep for maintainers     Supported VCS Providers  HCP Terraform supports the following VCS providers       BEGIN  TFC only          GitHub com   terraform cloud docs vcs github app        END  TFC only         GitHub App for TFE   terraform enterprise admin application github app integration     GitHub com  OAuth    terraform cloud docs vcs github     GitHub Enterprise   terraform cloud docs vcs github enterprise     GitLab com   terraform cloud docs vcs gitlab com     GitLab EE and CE   terraform cloud docs vcs gitlab eece     Bitbucket Cloud   terraform cloud docs vcs bitbucket cloud     Bitbucket Data Center   terraform cloud docs vcs bitbucket data center     Azure DevOps Server   terraform cloud docs vcs azure devops server     Azure DevOps Services   terraform cloud docs vcs azure devops services   Use the links above to see details on configuring VCS access for each supported provider  If you use another VCS that is not supported  you can build an integration via  the API driven run workflow   terraform cloud docs run api       How HCP Terraform Uses VCS Access  Most workspaces in HCP Terraform are associated with a VCS repository  which provides Terraform configurations for that workspace  To find out which repos are available  access their contents  and create webhooks  HCP Terraform needs access to your VCS provider   Although HCP Terraform s API lets you create workspaces and push configurations to them without a VCS connection  the primary workflow expects every workspace to be backed by a repository   To use configurations from VCS  HCP Terraform needs to do several things     Access a list of repositories  to let you search for repos when creating new workspaces    Register webhooks with your VCS provider  to get notified of new commits to a chosen branch    Download the contents of a repository at a specific commit in order to run Terraform with that code        Important    HCP Terraform usually performs VCS 
actions using a designated VCS user account  but it has no other knowledge about your VCS s authorization controls and does not associate HCP Terraform user accounts with VCS user accounts  This means HCP Terraform s VCS user might have a different level of access to repositories than any given HCP Terraform user  Keep this in mind when selecting a VCS user  as it may affect your security posture in one or both systems       Webhooks  HCP Terraform uses webhooks to monitor new commits and pull requests     When someone adds new commits to a branch  any HCP Terraform workspaces based on that branch will begin a Terraform run  Usually a user must inspect the plan output and approve an apply  but you can also enable automatic applies on a per workspace basis  You can prevent automatic runs by locking a workspace  A run will only occur if the workspace has not previously processed a run for the commit SHA    When someone submits a pull request merge request to a branch  any HCP Terraform workspaces based on that branch will perform a  speculative plan   terraform cloud docs run remote operations speculative plans  with the contents of the request and links to the results on the PR s page  This helps you avoid merging PRs that cause plan failures        Important    In Terraform Enterprise  integration with a SaaS VCS provider  GitHub com  GitLab com  Bitbucket Cloud  or Azure DevOps Services  requires ingress from the public internet  This lets the inbound web hooks reach Terraform Enterprise  You should also configure appropriate security controls  such as a Web Application Firewall  WAF        SSH Keys  For most supported VCS providers  HCP Terraform does not need an SSH key  This is because Terraform can do everything it needs with the provider s API and an OAuth token  The exceptions are Azure DevOps Server and Bitbucket Data Center  which require an SSH key for downloading repository contents  Refer to the setup instructions for   Azure DevOps Server   terraform 
cloud docs vcs azure devops server  and  Bitbucket Data Center   terraform cloud docs vcs bitbucket data center  for details   For other VCS providers  most organizations will not need to add an SSH private key  However  if the organization repositories include Git submodules that can only be accessed via SSH  an SSH key can be added along with the OAuth credentials   For VCS providers where adding an SSH private key is optional  SSH will only be used to clone Git submodules  All other Git operations will still use HTTPS   If submodules will be cloned via SSH from a private VCS instance  SSH must be running on the standard port 22 on the VCS server   To add an SSH key to a VCS connection  finish configuring OAuth in the organization settings  and then use the  add a private SSH key  link on the VCS Provider settings page to add a private key that has access to the submodule repositories  When setting up a workspace  if submodules are required  select  Include submodules on clone   More at  Workspace settings   terraform cloud docs workspaces settings        Multiple VCS Connections  If your infrastructure code is spread across multiple VCS providers  you can configure multiple VCS connections  You can choose which VCS connection to use whenever you create a new workspace        Scoping VCS Connections using Projects  You can configure which projects can use repositories from a VCS connection  By default each VCS connection is enabled for all workspaces in the organization  If you need to limit which projects can use repositories from a given VCS connection  you can change this setting to enable the connection for only workspaces in the selected projects      Configuring VCS Access  HCP Terraform uses the OAuth protocol to authenticate with VCS providers        Important    Even if you ve used OAuth before  read the instructions carefully  Since HCP Terraform s security model treats each  organization  as a separate OAuth application  we authenticate with OAuth s 
developer workflow  which is more complex than the standard user workflow   The exact steps to authenticate are different for each VCS provider  but they follow this general order     On your VCS                                                                On HCP Terraform                                                                                                                                                                                                                                   Register your HCP Terraform organization as a new app  Get ID and key                                                                                                                                                                    Tell HCP Terraform how to reach VCS  and provide ID and key  Get callback URL      Provide callback URL                                                                                                                                                                                                                       Request VCS access                                                                   Approve access request                                                                                                                                         For complete details  click the link for your VCS provider      GitHub   terraform cloud docs vcs github     GitHub Enterprise   terraform cloud docs vcs github enterprise     GitLab com   terraform cloud docs vcs gitlab com     GitLab EE and CE   terraform cloud docs vcs gitlab eece     Bitbucket Cloud   terraform cloud docs vcs bitbucket cloud     Bitbucket Data Center   terraform cloud docs vcs bitbucket data center     Azure DevOps Server   terraform cloud docs vcs azure devops server     Azure DevOps Services   terraform cloud docs vcs azure devops services        Note    Alternatively  you can skip the OAuth configuration process and authenticate with a personal access token  This 
requires using HCP Terraform s API  For details  see  the OAuth Clients API page   terraform cloud docs api docs oauth clients         BEGIN  TFC only         Private VCS  You can use self hosted HCP Terraform Agents to connect HCP Terraform to your private VCS provider  such as GitHub Enterprise  GitLab Enterprise  and BitBucket Data Center  For more information  refer to  Connect to Private VCS Providers   terraform cloud docs vcs private         END  TFC only         Viewing events  VCS events describe changes within your organization for VCS related actions  The VCS events page only displays events from previously processed commits in the past 30 days  The VCS page indicates previously processed commits with the message    Processing skipped for duplicate commit SHA     Viewing VCS events requires permission to manage VCS settings for the organization    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers  To view VCS events for your organization  go to your organization s settings and click   Events    The   VCS Events   page appears "}
{"questions":"terraform This topic describes how to connect Bitbucket Cloud to HCP Terraform Bitbucket Cloud is the cloud hosted version of Bitbucket For self hosted Bitbucket Data Center instances refer to Configuring Bitbucket Data Center Access terraform cloud docs vcs bitbucket data center Refer to Connecting VCS Providers to HCP Terraform terraform cloud docs vcs for other supported VCS providers Configuring Bitbucket Cloud Access Learn how to use Bitbucket Cloud for VCS features page title Bitbucket Cloud VCS Providers HCP Terraform","answers":"---\npage_title: Bitbucket Cloud - VCS Providers - HCP Terraform\ndescription: >-\n  Learn how to use Bitbucket Cloud for VCS features.\n---\n\n# Configuring Bitbucket Cloud Access\n\nThis topic describes how to connect Bitbucket Cloud to HCP Terraform. Bitbucket Cloud is the cloud-hosted version of Bitbucket. For self-hosted Bitbucket Data Center instances, refer to [Configuring Bitbucket Data Center Access](\/terraform\/cloud-docs\/vcs\/bitbucket-data-center). Refer to [Connecting VCS Providers to HCP Terraform](\/terraform\/cloud-docs\/vcs) for other supported VCS providers.\n\nConfiguring a new VCS provider requires permission to manage VCS settings for the organization. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nConnecting HCP Terraform to your VCS involves four steps:\n\n| On your VCS                                                                                    | On HCP Terraform                                            |\n| ---------------------------------------------------------------------------------------------- | ------------------------------------------------------------- |\n| \u00a0                                                                                              | Create a new connection in HCP Terraform. Get callback URL. 
|\n| Register your HCP Terraform organization as a new app. Provide callback URL. Get ID and key. | \u00a0                                                             |\n| \u00a0                                                                                              | Provide HCP Terraform with ID and key. Request VCS access.  |\n| Approve access request.                                                                        | \u00a0                                                             |\n\nThe rest of this page explains the Bitbucket Cloud-specific versions of these steps.\n\n## Step 1: On HCP Terraform, Begin Adding a New VCS Provider\n\n1. Go to your organization's settings and then click **Providers**. The **VCS Providers** page appears.\n\n1. Click **Add VCS Provider**. The **Add VCS Provider** page appears.\n\n1. Select **Bitbucket** and then select **Bitbucket Cloud** from the menu. The page moves to the next step.\n\nLeave the page open in a browser tab. In the next step you will copy values from this page, and in later steps you will continue configuring HCP Terraform.\n\n## Step 2: On Bitbucket Cloud, Create a New OAuth Consumer\n\n1. In a new browser tab, open [Bitbucket Cloud](https:\/\/bitbucket.org) and log in as whichever account you want HCP Terraform to act as. For most organizations this should be a dedicated service user, but a personal account will also work.\n\n   ~> **Important:** The account you use for connecting HCP Terraform **must have admin access** to any shared repositories of Terraform configurations, since creating webhooks requires admin permissions.\n\n1. Navigate to Bitbucket's \"Add OAuth Consumer\" page.\n\n   This page is located at `https:\/\/bitbucket.org\/<YOUR WORKSPACE NAME>\/workspace\/settings\/oauth-consumers\/new`. 
You can also reach it through Bitbucket's menus:\n\n   - Click your profile picture and choose the workspace you want to access.\n   - Click \"Settings\".\n   - Click \"OAuth consumers,\" which is in the \"Apps and Features\" section.\n   - On the OAuth settings page, click the \"Add consumer\" button.\n\n1. This page has a form with several text fields and checkboxes.\n\n   Fill out the fields and checkboxes with the corresponding values currently displayed in your HCP Terraform browser tab. HCP Terraform lists the values in the order they appear, and includes controls for copying values to your clipboard.\n\n   Fill out the text fields as follows:\n\n   | Field        | Value                                                                         |\n   | ------------ | ----------------------------------------------------------------------------- |\n   | Name         | HCP Terraform (`<YOUR ORGANIZATION NAME>`)                                  |\n   | Description  | Any description of your choice.                                               |\n   | Callback URL | `https:\/\/app.terraform.io\/<YOUR CALLBACK URL>`                                |\n   | URL          | `https:\/\/app.terraform.io` (or the URL of your Terraform Enterprise instance) |\n\n   Ensure that the \"This is a private consumer\" option is checked. Then, activate the following permissions checkboxes:\n\n   | Permission type | Permission level |\n   | --------------- | ---------------- |\n   | Account         | Write            |\n   | Repositories    | Admin            |\n   | Pull requests   | Write            |\n   | Webhooks        | Read and write   |\n\n1. Click the \"Save\" button, which returns you to the OAuth settings page.\n\n1. Find your new OAuth consumer under the \"OAuth Consumers\" heading, and click its name to reveal its details.\n\n   Leave this page open in a browser tab. 
In the next step, you will copy and paste the unique **Key** and **Secret**.\n\n## Step 3: On HCP Terraform, Set up Your Provider\n\n1. Enter the **Key** and **Secret** from the previous step, as well as an optional **Name** for this VCS connection.\n\n1. Click \"Connect and continue.\" This takes you to a page on Bitbucket Cloud asking whether you want to authorize the app.\n\n1. Click the blue \"Grant access\" button to proceed.\n\n## Step 4: On HCP Terraform, Configure Advanced Settings (Optional)\n\nThe settings in this section are optional. The Advanced Settings you can configure are:\n- **Scope of VCS Provider** - You can configure which workspaces can use repositories from this VCS provider. By default, the **All Projects** option is selected, meaning this VCS provider is available to be used by all workspaces in the organization.\n- **Set up SSH Keypair** - Most organizations will not need to add an SSH key. However, if the organization repositories include Git submodules that can only be accessed via SSH, an SSH key can be added along with the OAuth credentials. You can add or update the SSH key at a later time.\n\n### If You Don't Need to Configure Advanced Settings:\n\n1. Click the **Skip and Finish** button. This returns you to HCP Terraform's VCS Provider page, which now includes your new Bitbucket Cloud client.\n\n### If You Need to Limit the Scope of this VCS Provider:\n\n1. Select the **Selected Projects** option and use the text field that appears to search for and select projects to enable. All current and future workspaces for any selected projects can use repositories from this VCS Provider.\n\n1. Click the **Update VCS Provider** button to save your selections.\n\n### If You Do Need an SSH Keypair:\n\n#### Important Notes\n\n- SSH will only be used to clone Git submodules. 
All other Git operations will still use HTTPS.\n- Do not use your personal SSH key to connect HCP Terraform and Bitbucket Cloud; generate a new one or use an existing key reserved for service access.\n- In the following steps, you must provide HCP Terraform with the private key. Although HCP Terraform does not display the text of the key to users after it is entered, it retains it and will use it when authenticating to Bitbucket Cloud.\n- **Protect this private key carefully.** It can push code to the repositories you use to manage your infrastructure. Take note of your organization's policies for protecting important credentials and be sure to follow them.\n\n1. On a secure workstation, create an SSH keypair that HCP Terraform can use to connect to Bitbucket Cloud. The exact command depends on your OS, but is usually something like:\n   `ssh-keygen -t rsa -m PEM -f \"\/Users\/<NAME>\/.ssh\/service_terraform\" -C \"service_terraform_enterprise\"`\n   This creates a `service_terraform` file with the private key, and a `service_terraform.pub` file with the public key. This SSH key **must have an empty passphrase**. HCP Terraform cannot use SSH keys that require a passphrase.\n\n1. While logged into the Bitbucket Cloud account you want HCP Terraform to act as, navigate to the SSH Keys settings page, add a new SSH key and paste the value of the SSH public key you just created.\n\n1. 
In HCP Terraform's **Add VCS Provider** page, paste the text of the **SSH private key** you just created, and click the **Add SSH Key** button.\n\n## Finished\n\nAt this point, Bitbucket Cloud access for HCP Terraform is fully configured, and you can create Terraform workspaces based on your organization's shared repositories.","site":"terraform","answers_cleaned":"    page title  Bitbucket Cloud   VCS Providers   HCP Terraform description       Learn how to use Bitbucket Cloud for VCS features         Configuring Bitbucket Cloud Access  This topic describes how to connect Bitbucket Cloud to HCP Terraform  Bitbucket Cloud is the cloud hosted version of Bitbucket  For self hosted Bitbucket Data Center instances  refer to  Configuring Bitbucket Data Center Access   terraform cloud docs vcs bitbucket data center   Refer to  Connecting VCS Providers to HCP Terraform   terraform cloud docs vcs  for other supported VCS providers   Configuring a new VCS provider requires permission to manage VCS settings for the organization    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers  Connecting HCP Terraform to your VCS involves four steps     On your VCS                                                                                      On HCP Terraform                                                                                                                                                                                                                                                                                                                    Create a new connection in HCP Terraform  Get callback URL      Register your HCP Terraform organization as a new app  Provide callback URL  Get ID and key                                                                                                                                                                       
Provide HCP Terraform with ID and key  Request VCS access       Approve access request                                                                                                                                            The rest of this page explains the Bitbucket Cloud specific versions of these steps      Step 1  On HCP Terraform  Begin Adding a New VCS Provider  1  Go to your organization s settings and then click   Providers    The   VCS Providers   page appears   1  Click   Add VCS Provider    The   VCS Providers   page appears   1  Select   Bitbucket   and then select   Bitbucket Cloud   from the menu  The page moves to the next step   Leave the page open in a browser tab  In the next step you will copy values from this page  and in later steps you will continue configuring HCP Terraform      Step 2  On Bitbucket Cloud  Create a New OAuth Consumer  1  In a new browser tab  open  Bitbucket Cloud  https   bitbucket org  and log in as whichever account you want HCP Terraform to act as  For most organizations this should be a dedicated service user  but a personal account will also work           Important    The account you use for connecting HCP Terraform   must have admin access   to any shared repositories of Terraform configurations  since creating webhooks requires admin permissions   1  Navigate to Bitbucket s  Add OAuth Consumer  page      This page is located at  https   bitbucket org  YOUR WORKSPACE NAME  workspace settings oauth consumers new   You can also reach it through Bitbucket s menus        Click your profile picture and choose the workspace you want to access       Click  Settings        Click  OAuth consumers   which is in the  Apps and Features  section       On the OAuth settings page  click the  Add consumer  button   1  This page has a form with several text fields and checkboxes      Fill out the fields and checkboxes with the corresponding values currently displayed in your HCP Terraform browser tab  HCP Terraform lists the values 
in the order they appear  and includes controls for copying values to your clipboard      Fill out the text fields as follows        Field          Value                                                                                                                                                                                    Name           HCP Terraform    YOUR ORGANIZATION NAME                                            Description    Any description of your choice                                                       Callback URL    https   app terraform io  YOUR CALLBACK URL                                         URL             https   app terraform io   or the URL of your Terraform Enterprise instance        Ensure that the  This is a private consumer  option is checked  Then  activate the following permissions checkboxes        Permission type   Permission level                                                  Account           Write                   Repositories      Admin                   Pull requests     Write                   Webhooks          Read and write      1  Click the  Save  button  which returns you to the OAuth settings page   1  Find your new OAuth consumer under the  OAuth Consumers  heading  and click its name to reveal its details      Leave this page open in a browser tab  In the next step  you will copy and paste the unique   Key   and   Secret         Step 3  On HCP Terraform  Set up Your Provider  1  Enter the   Key   and   Secret   from the previous step  as well as an optional   Name   for this VCS connection   1  Click  Connect and continue   This takes you to a page on Bitbucket Cloud asking whether you want to authorize the app   1  Click the blue  Grant access  button to proceed      Step 4  On HCP Terraform  Configure Advanced Settings  Optional   The settings in this section are optional  The Advanced Settings you can configure are      Scope of VCS Provider     You can configure which workspaces can use repositories 
from this VCS provider. By default, the **All Projects** option is selected, meaning this VCS provider is available to be used by all workspaces in the organization.\n\n### Set up SSH Keypair\n\nMost organizations will not need to add an SSH key. However, if the organization repositories include Git submodules that can only be accessed via SSH, an SSH key can be added along with the OAuth credentials. You can add or update the SSH key at a later time.\n\n#### If You Don't Need to Configure Advanced Settings\n\n1. Click the **Skip and Finish** button. This returns you to HCP Terraform's VCS Provider page, which now includes your new Bitbucket Cloud client.\n\n#### If You Need to Limit the Scope of this VCS Provider\n\n1. Select the **Selected Projects** option and use the text field that appears to search for and select projects to enable. All current and future workspaces for any selected projects can use repositories from this VCS Provider.\n1. Click the **Update VCS Provider** button to save your selections.\n\n#### If You Do Need an SSH Keypair\n\n**Important Notes:**\n\n- SSH will only be used to clone Git submodules. All other Git operations will still use HTTPS.\n- Do not use your personal SSH key to connect HCP Terraform and Bitbucket Cloud; generate a new one or use an existing key reserved for service access.\n- In the following steps, you must provide HCP Terraform with the private key. Although HCP Terraform does not display the text of the key to users after it is entered, it retains it and will use it when authenticating to Bitbucket Cloud.\n- **Protect this private key carefully.** It can push code to the repositories you use to manage your infrastructure. Take note of your organization's policies for protecting important credentials and be sure to follow them.\n\n1. On a secure workstation, create an SSH keypair that HCP Terraform can use to connect to Bitbucket Cloud. The exact command depends on your OS, but is usually something like `ssh-keygen -t rsa -m PEM -f \"\/Users\/<NAME>\/.ssh\/service_terraform\" -C \"service_terraform_enterprise\"`. This creates a `service_terraform` file with the private key and a `service_terraform.pub` file with the public key. This SSH key **must have an empty passphrase**; HCP Terraform cannot use SSH keys that require a passphrase.\n1. While logged into the Bitbucket Cloud account you want HCP Terraform to act as, navigate to the SSH Keys settings page, add a new SSH key, and paste the value of the SSH public key you just created.\n1. In HCP Terraform's **Add VCS Provider** page, paste the text of the **SSH private key** you just created, and click the **Add SSH Key** button.\n\n## Finished\n\nAt this point, Bitbucket Cloud access for HCP Terraform is fully configured, and you can create Terraform workspaces based on your organization's shared repositories."}
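The keypair step in the record above can also be run non-interactively. A minimal sketch for Linux or macOS, assuming the `~/.ssh` directory already exists; `-N ""` supplies the empty passphrase that HCP Terraform requires:

```shell-session
$ ssh-keygen -t rsa -m PEM -N "" -C "service_terraform_enterprise" -f ~/.ssh/service_terraform
$ cat ~/.ssh/service_terraform.pub   # public key to paste into Bitbucket Cloud
```

The `-m PEM` flag keeps the private key in the classic PEM encoding shown in the docs rather than the newer OpenSSH private-key format.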
{"questions":"terraform Learn how to use Bitbucket Data Center for VCS features Configuring Bitbucket Data Center Access page title Bitbucket Data Center VCS Providers HCP Terraform This topic describes how to connect Bitbucket Data Center to HCP Terraform For instructions on how to connect Bitbucket Cloud refer to Configuring Bitbucket Cloud Access terraform cloud docs vcs bitbucket cloud Refer to Connecting VCS Providers to HCP Terraform terraform cloud docs vcs for other supported VCS providers","answers":"---\npage_title: Bitbucket Data Center - VCS Providers - HCP Terraform\ndescription: >-\n   Learn how to use Bitbucket Data Center for VCS features.\n---\n\n# Configuring Bitbucket Data Center Access\n\nThis topic describes how to connect Bitbucket Data Center to HCP Terraform. For instructions on how to connect Bitbucket Cloud, refer to [Configuring Bitbucket Cloud Access](\/terraform\/cloud-docs\/vcs\/bitbucket-cloud). Refer to [Connecting VCS Providers to HCP Terraform](\/terraform\/cloud-docs\/vcs) for other supported VCS providers.\n\n**Bitbucket Server is deprecated**. Atlassian ended support for Bitbucket Server on February 15, 2024, and recommends using either Bitbucket Data Center (v8.0 or newer) or Bitbucket Cloud instead. Refer to the [Atlassian documentation](https:\/\/bitbucket.org\/blog\/cloud-migration-benefits) for additional information.\n\nHCP Terraform will end support for Bitbucket Server on August 15, 2024. Terraform Enterprise will also end support for Bitbucket Server in Terraform Enterprise v202410. [Contact HashiCorp support](https:\/\/support.hashicorp.com\/hc\/en-us) if you have any questions regarding this change.\n\n## Overview\n\nThe following steps provide an overview of how to connect HCP Terraform and Terraform Enterprise to Bitbucket Data Center:\n\n1. Add a new VCS provider to HCP Terraform or Terraform Enterprise.\n1. Create a new application link in Bitbucket.\n1. Create an SSH key pair. 
SSH keys must have an empty passphrase because HCP Terraform cannot use SSH keys that require a passphrase.\n1. Add an SSH key to Bitbucket. You must complete this step as a non-administrator user in Bitbucket.\n1. Add the private SSH key to Terraform.\n\n## Requirements\n\n- You must have permission to manage VCS settings for the organization. Refer to [Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) for additional information. \n\n[permissions-citation]:#intentionally-unused---keep-for-maintainers\n\n- You must have OAuth authentication credentials for Bitbucket Data Center.\n\n- Your instance of Bitbucket Data Center must be internet-accessible on its SSH and HTTP(S) ports. This is because HCP Terraform must be able to contact Bitbucket Data Center over both SSH and HTTP or HTTPS during setup and during normal operation.\n\n- HCP Terraform must have network connectivity to Bitbucket Data Center instances. Note that [Bitbucket Data Center's default ports](https:\/\/confluence.atlassian.com\/bitbucketserverkb\/which-ports-does-bitbucket-server-listen-on-and-what-are-they-used-for-806029586.html) are `7999` for SSH and `7990` for HTTP. Check your configuration to confirm your Bitbucket instance's real ports.\n\n## Add a new VCS provider to Terraform\n\n1. Go to your organization's settings and then click **Providers**. The **VCS Providers** page appears.\n\n1. Click **Add VCS Provider**. The **Add VCS Provider** page appears.\n\n1. Choose **Bitbucket Data Center** from the **Bitbucket** drop-down menu.\n\n1. (Optional) Enter a **Name** for this VCS connection.\n\n1. Specify the URL of your Bitbucket Data Center instance in the **HTTP URL** and **API URL** fields. If the context path is not set for your Bitbucket Data Center instance, the **API URL** is the same as the **HTTP URL**. 
Refer to the [Atlassian documentation](https:\/\/confluence.atlassian.com\/bitbucketserver\/moving-bitbucket-server-to-a-different-context-path-776640153.html) for additional information. Specify the following values if the context path is set for your Bitbucket Data Center instance:\n\n   - Set the **HTTP URL** field to your Bitbucket Data Center instance URL and add the context path: `https:\/\/<BITBUCKET INSTANCE HOSTNAME>\/<CONTEXT PATH>`.\n   - Set the **API URL** field to your Bitbucket Data Center instance URL: `https:\/\/<BITBUCKET INSTANCE HOSTNAME>`.\n\n   By default, HCP Terraform uses port `80` for HTTP and `443` for HTTPS. If Bitbucket Data Center is configured to use non-standard ports or is behind a reverse proxy, you may need to include the port number in the URL.\n\n1. You can either generate new consumer and public keys to use when creating a new application link in Bitbucket Data Center, as described in [Create an application link](#create-an-application-link), or use keys from an existing application link:\n   - To generate new keys, click **Continue**. Do not leave this screen until you have copied the key values.\n   - To use existing keys, enable the **Use Custom Keys** option and enter them into the fields.\n\n## Create an application link\n\n1. Log into Bitbucket Data Center as an admin.\n1. Open the **Application Links** administration page using the navigation or by entering `https:\/\/<BITBUCKET INSTANCE HOSTNAME>\/plugins\/servlet\/applinks\/listApplicationLinks` in your browser's address bar.\n\n1. Click **Application Links** in the sidebar, then click **Create new link**.\n1. Choose **Atlassian product** as the link type. This option also works for external applications and lets you continue to use OAuth 1.0 integrations.\n1. Enter `https:\/\/app.terraform.io` or the hostname of your Terraform Enterprise instance when prompted. You can only specify the main URL once. 
To connect multiple HCP Terraform organizations to the same Bitbucket Data Center instance, enter the organization URL when creating the link instead. The organization URL is the HCP Terraform URL or Terraform Enterprise hostname appended with `\/app\/<ORG NAME>`.\n1. When prompted, confirm that you wish to use the URL as entered. If you specified HCP Terraform's main URL, click **Continue**.  If you specified an organization URL, enable the **Use this URL** option and then click **Continue**.\n\n1. In the **Link applications** dialog, configure the following settings:\n   - Specify `HCP Terraform <ORG NAME>` in the **Application Name** field\n   - Choose **Generic Application** from the **Application Type** drop-down menu\n   - Enable the **Create incoming link** option\n\n   Leave all the other fields empty.\n1. Click **Continue**. The **Link applications** screen progresses to the second configuration screen.\n1. In the **Consumer Key** and **Public Key** fields, enter the key values you created in the [Add a new VCS provider to Terraform](#add-a-new-vcs-provider-to-terraform) instructions.\n1. In the **Consumer Name** field, enter `HCP Terraform (<ORG NAME>)`.\n1. Click **Continue**. Bitbucket prompts you to authorize Terraform to make changes. Before you proceed, verify that you are logged in with the user account that HCP Terraform will use to access Bitbucket and not as a Bitbucket administrator. If Bitbucket returns a 500 error instead of the authorization screen, Terraform may have been unable to reach your Bitbucket Data Center instance.\n1. Click **Allow**  and enter the SSH key when prompted.\n\n## Create an SSH key for Terraform\n\nOn a secure workstation, create an SSH keypair that HCP Terraform or Terraform Enterprise can use to connect to Bitbucket Data Center. The command for generating SSH keys depends on your OS. 
The following example for Linux creates a `service_terraform` file with the private key and a `service_terraform.pub` file with the public key:\n\n ```shell-session\n $ ssh-keygen -t rsa -m PEM -f \"\/Users\/<NAME>\/.ssh\/service_terraform\" -C \"service_terraform_enterprise\"\n ```\n\nDo not specify a passphrase because Terraform cannot use SSH keys that require a passphrase.\n\n## Add an SSH key to Bitbucket\n\nIn the following steps, you must provide HCP Terraform with the private SSH key you created in [Create an SSH key for Terraform](#create-an-ssh-key-for-terraform). Although HCP Terraform does not display the text of the key to users after it is entered, it retains the key and uses it for authenticating to Bitbucket Data Center.\n\n1. If you are logged into Bitbucket Data Center as an administrator, log out before proceeding.\n1. Log in with the account that you want to enable HCP Terraform or Terraform Enterprise to log in with. Many organizations use a dedicated service user account for this purpose. The account you use for connecting HCP Terraform must have admin access to any shared repositories of Terraform configurations because creating webhooks requires admin permissions. Refer to [Requirements](#requirements) for additional information.\n1. Click the profile icon.\n1. Choose **Manage account**.\n1. Click **SSH keys** or enter `https:\/\/<BITBUCKET INSTANCE HOSTNAME>\/plugins\/servlet\/ssh\/account\/keys` in the address bar to go to the **SSH keys** screen.\n1. Click **Add key** and enter the SSH public key you created in [Create an SSH key for Terraform](#create-an-ssh-key-for-terraform) into the text field. Open the `.pub` file to get the key value.\n1. Click **Add key** to finish adding the key.\n\n## Add an SSH private key\n\nComplete the following steps in HCP Terraform or Terraform Enterprise to request access to Bitbucket and add the SSH private key.\n\n1. 
Open the **SSH keys** settings page and click **Add a private SSH key**. A large text field appears.\n1. Enter the text of the **SSH private key** you created in [Create an SSH key for Terraform](#create-an-ssh-key-for-terraform) and click **Add SSH Key**.\n\n## Next steps\n\nAfter completing these instructions, you can create Terraform workspaces based on your organization's shared repositories. Refer to the following resources for additional guidance:\n\n- [Creating Workspaces](\/terraform\/cloud-docs\/workspaces\/create) in HCP Terraform\n- [Creating Workspaces](\/terraform\/enterprise\/workspaces\/create) in Terraform Enterprise","site":"terraform"}
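Because HCP Terraform must reach Bitbucket Data Center over both HTTP(S) and SSH, it can save troubleshooting time to confirm the ports are reachable before configuring the provider. A minimal sketch, assuming the hypothetical hostname `bitbucket.example.com` and the default ports `7990`/`7999`; substitute your instance's real values:

```shell-session
$ # HTTP(S) port: any HTTP status code printed means the port is reachable
$ curl -sS -o /dev/null -w "%{http_code}\n" http://bitbucket.example.com:7990/
$ # SSH port: a successful TCP connect confirms reachability
$ nc -vz bitbucket.example.com 7999
```

A connection error or timeout from either command indicates a firewall or routing problem to fix before continuing.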
{"questions":"terraform page title Troubleshooting VCS Providers HCP Terraform Troubleshooting VCS Integration in HCP Terraform This page collects solutions to the most common problems our users encounter with VCS integration in HCP Terraform Learn how to address common problems in VCS integrations","answers":"---\npage_title: Troubleshooting - VCS Providers - HCP Terraform\ndescription: >-\n  Learn how to address common problems in VCS integrations.\n---\n\n# Troubleshooting VCS Integration in HCP Terraform\n\nThis page collects solutions to the most common problems our users encounter with VCS integration in HCP Terraform.\n\n## Azure DevOps\n\n### Required status checks not sending\n\nWhen configuring [status checks with Azure DevOps](https:\/\/learn.microsoft.com\/en-us\/azure\/devops\/repos\/git\/pr-status-policy), the web interface may auto-populate the Genre and Name fields (beneath \"Status to check\") with incorrect values that do not reflect what HCP Terraform is sending.\nTo function correctly as required checks, the Genre must be populated with \"Terraform Cloud\" (or the first segment for a Terraform Enterprise install), and the remainder of the status check goes in the Name field. This requires selecting the \"Enter genre\/name separately\" checkbox instead of accepting the default configuration.\n\nIn the example below, the status check is named `Terraform Cloud\/paul-hcp\/gianni-test-1` and needs to be configured with Genre `Terraform Cloud` and Name `paul-hcp\/gianni-test-1`.\n\n![Azure DevOps screenshot: configuring required status checks correctly](\/img\/docs\/ado-required-status-check.png)\n\nAn older version of Azure DevOps Server may not allow entering the Genre and Name separately in the web interface, in which case you must create the status check via the [API](https:\/\/learn.microsoft.com\/en-us\/rest\/api\/azure\/devops\/policy\/configurations\/create).\n\n## Bitbucket Data Center\n\nThe following errors are specific to Bitbucket Data Center integrations.\n\n### Clicking \"Connect organization `<X>`\" with Bitbucket Data Center raises an error message in HCP Terraform\n\nHCP Terraform uses OAuth 1 to authenticate the user to Bitbucket Data Center. The first step in the authentication process is for HCP Terraform to call Bitbucket Data Center to obtain a request token. After the call completes, HCP Terraform redirects you to Bitbucket Data Center with the request token.\n\nAn error occurs when HCP Terraform calls Bitbucket Data Center to obtain the request token but the request is rejected. Some common reasons for the request to be rejected are:\n\n- The API endpoint is unreachable; this can happen if the address or port is incorrect or the domain name doesn't resolve.\n- The certificate used on Bitbucket Data Center is rejected by the HCP Terraform HTTP client because the SSL verification fails. This is often the case with self-signed certificates or when the Terraform Enterprise instance is not configured to trust the signing chain of the Bitbucket Data Center SSL certificate.\n\nTo fix this issue, do the following:\n\n- Verify that the instance running Terraform Enterprise can resolve the domain name and can reach Bitbucket Data Center.\n- Verify that the HCP Terraform client accepts the HTTPS connection to Bitbucket Data Center. This can be done by performing a `curl` from the Terraform Enterprise instance to Bitbucket Data Center; it should not return any SSL errors.\n- Verify that the Consumer Key, Consumer Name, and the Public Key are configured properly in Bitbucket Data Center.\n- Verify that the HTTP URL and API URL in HCP Terraform are correct for your Bitbucket Data Center instance. 
This includes the proper scheme (HTTP vs HTTPS), as well as the port.\n\n### Creating a workspace from a repository hangs indefinitely, displaying a spinner on the confirm button\n\nIf you were able to connect HCP Terraform to Bitbucket Data Center but cannot create workspaces, it often means HCP Terraform isn't able to automatically add webhook URLs for that repository.\n\nTo fix this issue:\n\n- Make sure you haven't manually entered any webhook URLs for the affected repository or project. Although the Bitbucket Web Post Hooks Plugin documentation describes how to manually enter a hook URL, HCP Terraform handles this automatically. Manually entered URLs can interfere with HCP Terraform's operation.\n\n  To check the hook URLs for a repository, go to the repository's settings, then go to the \"Hooks\" page (in the \"Workflow\" section) and click on the \"Post-Receive WebHooks\" link.\n\n  Also note that some Bitbucket Data Center versions might allow you to set per-project or server-wide hook URLs in addition to per-repository hooks. These should all be empty; if you set a hook URL that might affect more than one repo when installing the plugin, go back and delete it.\n- Make sure you aren't trying to connect too many workspaces to a single repository. Bitbucket Data Center's webhooks plugin can only attach five hooks to a given repo. You might need to create additional repositories if you need to make more than five workspaces from a single configuration repo.\n\n## Bitbucket Cloud\n\n### HCP Terraform fails to obtain repositories\n\nThis typically happens when the HCP Terraform application in Bitbucket Cloud wasn't configured to have the full set of permissions. 
Go to the OAuth section of the Bitbucket settings, find your HCP Terraform OAuth consumer, click the edit link in the \"...\" menu, and ensure it has the required permissions enabled:\n\n| Permission type | Permission level |\n| --------------- | ---------------- |\n| Account         | Write            |\n| Repositories    | Admin            |\n| Pull requests   | Write            |\n| Webhooks        | Read and write   |\n\n## GitHub\n\n### \"Host key verification failed\" error in `terraform init` when attempting to ingress Terraform modules via Git over SSH\n\nThis is most common when running Terraform 0.10.3 or 0.10.4, which had a bug in handling SSH submodule ingress. Try upgrading affected HCP Terraform workspaces to the latest Terraform version or 0.10.8 (the latest in the 0.10 series).\n\n### HCP Terraform can't ingress Git submodules, with auth errors during init\n\nThis usually happens when an SSH key isn't associated with the VCS provider's OAuth client.\n\n- Go to your organization's \"VCS Provider\" settings page and check your GitHub client. If it still says \"You can add a private SSH key to this connection to be used for git clone operations\" (instead of \"A private SSH key has been added...\"), you need to click the \"add a private SSH key\" link and add a key.\n- Check the settings page for affected workspaces and ensure that \"Include submodules on clone\" is enabled.\n\nNote that the \"SSH Key\" section in a workspace's settings is only used for mid-run operations like cloning Terraform modules. 
It isn't used when cloning the linked repository before a run.\n\n## General\n\nThe following errors may occur for all VCS providers except Bitbucket Data Center.\n\n### HCP Terraform returns 500 after authenticating with the VCS provider\n\nThe Callback URL in the OAuth application configuration in the VCS provider probably wasn't updated in the last step of the instructions and still points to the default \"\/\" path (or an example.com link) instead of the full callback URL.\n\nThe fix is to update the callback URL in your VCS provider's application settings. You can look up the real callback URL in HCP Terraform's settings.\n\n### Can't delete a workspace or module, resulting in 500 errors\n\nThis often happens when the VCS connection has been somehow broken: it might have had permissions revoked, been reconfigured, or had the repository removed. Check for these possibilities and contact HashiCorp support for further assistance, and include any information you collected in your support ticket.\n\n### `redirect_uri_mismatch` error on \"Connect\"\n\nThe domain name for HCP Terraform's SaaS release changed on 02\/22 at 9AM from `atlas.hashicorp.com` to `app.terraform.io`. If the OAuth client was originally configured on the old domain, using it for a new VCS connection can result in this error.\n\nThe fix is to update the OAuth Callback URL in your VCS provider to use `app.terraform.io` instead of `atlas.hashicorp.com`.\n\n### Can't trigger workspace runs from VCS webhook\n\nA workspace with no runs will not accept new runs from a VCS webhook. You must queue at least one run manually.\n\nA workspace will not process a webhook if the workspace previously processed a webhook with the same commit SHA and created a run. To trigger a run, create a new commit. 
If a workspace receives a webhook with a previously processed commit, HCP Terraform adds a new event to the [VCS Events](\/terraform\/cloud-docs\/vcs#viewing-events) page documenting the received webhook.\n\n### Changing the URL for a VCS provider\n\nOn rare occasions, you might need HCP Terraform to change the URL it uses to reach your VCS provider. This usually only happens if you move your VCS server or the VCS vendor changes their supported API versions.\n\nHCP Terraform does not allow you to change the API URL for an existing VCS connection, but you can create a new VCS connection and update existing resources to use it. This is most efficient if you script the necessary updates using HCP Terraform's API. In brief:\n\n1. [Configure a new VCS connection](\/terraform\/cloud-docs\/vcs) with the updated URL.\n1. Obtain the [oauth-token IDs](\/terraform\/cloud-docs\/api-docs\/oauth-tokens) for the old and new OAuth clients.\n1. [List all workspaces](\/terraform\/cloud-docs\/api-docs\/workspaces#list-workspaces) (dealing with pagination if necessary), and use a JSON filtering tool like `jq` to make a list of all workspace IDs whose `attributes.vcs-repo.oauth-token-id` matches the old VCS connection.\n1. Iterate over the list of workspaces and [PATCH each one](\/terraform\/cloud-docs\/api-docs\/workspaces#update-a-workspace) to use the new `oauth-token-id`.\n1. [List all registry modules](\/terraform\/registry\/api-docs#list-modules) and use their `source` property to determine which ones came from the old VCS connection.\n1. [Delete each affected module](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules#delete-a-module), then [create a new module](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules#publish-a-private-module-from-a-vcs) from the new connection's version of the relevant repo.\n1. 
Delete the old VCS connection.\n\n\n### Reauthorizing VCS OAuth Providers\n\nIf a VCS OAuth connection breaks, you can reauthorize an existing VCS provider while retaining any VCS connected resources, like workspaces. We recommend only using this feature to fix broken VCS connections. We also recommend reauthorizing using the same VCS account to avoid permission changes to your repositories. \n\n~> **Important:** Reauthorizing is not available when the [TFE Provider](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs\/resources\/oauth_client) is managing the OAuth Client. Instead, you can update the [oauth_token](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs\/resources\/oauth_client#oauth_token) argument with a new token from your VCS Provider.\n\nTo reauthorize a VCS connection, complete the following steps:\n1. Go to your organization's settings and click **Providers** under **Version Control**.\n1. Click **Reauthorize** in the **OAuth Token ID** column. \n1. Confirm the reauthorization. 
HCP Terraform redirects you to your VCS Provider where you can reauthorize the connection.\n\n## Certificate Errors on Terraform Enterprise\n\nWhen debugging failures of VCS connections due to certificate errors, running additional diagnostics using the OpenSSL command may provide more information about the failure.\n\nFirst, attach a bash session to the application container:\n\n```\ndocker exec -it ptfe_atlas sh -c \"stty rows 50 && stty cols 150 && bash\"\n```\n\nThen run the `openssl s_client` command, using the certificate at `\/tmp\/cust-ca-certificates.crt` in the container:\n\n```\nopenssl s_client -showcerts -CAfile \/tmp\/cust-ca-certificates.crt -connect git-server-hostname:443\n```\n\nFor example, a Gitlab server that uses a self-signed certificate might result in an error like `verify error:num=18:self signed certificate`, as shown in the output below:\n\n```\nbash-4.3# openssl s_client -showcerts -CAfile \/tmp\/cust-ca-certificates.crt -connect gitlab.local:443\nCONNECTED(00000003)\ndepth=0 CN = gitlab.local\nverify error:num=18:self signed certificate\nverify return:1\ndepth=0 CN = gitlab.local\nverify return:1\n---\nCertificate chain\n 0 s:\/CN=gitlab.local\n   i:\/CN=gitlab.local\n-----BEGIN 
CERTIFICATE-----\nMIIC\/DCCAeSgAwIBAgIJAIhG2GWtcj7lMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV\nBAMMFWdpdGxhYi1sb2NhbC5oYXNoaS5jbzAeFw0xODA2MDQyMjAwMDhaFw0xOTA2\nMDQyMjAwMDhaMCAxHjAcBgNVBAMMFWdpdGxhYi1sb2NhbC5oYXNoaS5jbzCCASIw\nDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMMgrpo3zsoy2BP\/AoGIgrYwEMnj\nPwSOFGNHbclmiVBCW9jvrZrtva8Qh+twU7CSQdkeSP34ZgLrRp1msmLvUuVMgPts\ni7isrI5hug\/IHLLOGO5xMvxOcrHknvySYJRmvYFriEBPNRPYJGJ9O1ZUVUYeNwW\/\nl9eegBDpJrdsjGmFKCOzZEdUA3zu7PfNgf788uIi4UkVXZNa\/OFHsZi63OYyfOc2\nZm0\/vRKOn17dewOOesHhw77yYbBH8OFsEiC10JCe5y3MD9yrhV1h9Z4niK8rHPXz\nXEh3JfV+BBArodmDbvi4UtT+IGdDueUllXv7kbwqvQ67OFmmek0GZOY7ZvMCAwEA\nAaM5MDcwIAYDVR0RBBkwF4IVZ2l0bGFiLWxvY2FsLmhhc2hpLmNvMBMGA1UdJQQM\nMAoGCCsGAQUFBwMBMA0GCSqGSIb3DQEBCwUAA4IBAQCfkukNV\/9vCA\/8qoEbPt1M\nmvf2FHyUD69p\/Gq\/04IhGty3sno4eVcwWEc5EvfNt8vv1FykFQ6zMJuWA0jL9x2s\nLbC8yuRDnsAlukSBvyazCZ9pt3qseGOLskaVCeOqG3b+hJqikZihFUD95IvWNFQs\nRpvGvnA\/AH2Lqqeyk2ITtLYj1AcSB1hBSnG\/0fdtao9zs0JQsrS59CD1lbbTPPRN\norbKtVTWF2JlJxl2watfCNTw6nTCPI+51CYd687T3MuRN7LsTgglzP4xazuNjbWB\nQGAiQRd6aKj+xAJnqjzXt9wl6a493m8aNkyWrxZGHfIA1W70RtMqIC\/554flZ4ia\n-----END CERTIFICATE-----\n---\nServer certificate\nsubject=\/CN=gitlab.local\nissuer=\/CN=gitlab.local\n---\nNo client certificate CA names sent\nPeer signing digest: SHA512\nServer Temp Key: ECDH, P-256, 256 bits\n---\nSSL handshake has read 1443 bytes and written 433 bytes\n---\nNew, TLSv1\/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384\nServer public key is 2048 bit\nSecure Renegotiation IS supported\nCompression: NONE\nExpansion: NONE\nNo ALPN negotiated\nSSL-Session:\n    Protocol  : TLSv1.2\n    Cipher    : ECDHE-RSA-AES256-GCM-SHA384\n    Session-ID: AF5286FB7C7725D377B4A5F556DEB6DDC38B302153DDAE90C552ACB5DC4D86B8\n    Session-ID-ctx:\n    Master-Key: DB75AEC12C6E7B62246C653C8CB8FC3B90DE86886D68CB09898A6A6F5D539007F7760BC25EC4563A893D34ABCFAAC28A\n    Key-Arg   : None\n    PSK identity: None\n    PSK identity hint: None\n    SRP username: None\n    TLS session ticket lifetime hint: 300 
(seconds)\n    TLS session ticket:\n    0000 - 03 c1 35 c4 ff 6d 24 a8-6c 70 61 fb 2c dc 2e b8   ..5..m$.lpa.,...\n    0010 - de 4c 6d b0 2c 13 8e b6-63 95 18 ee 4d 33 a6 dc   .Lm.,...c...M3..\n    0020 - 0d 64 24 f0 8d 3f 9c aa-b8 a4 e2 4f d3 c3 4d 88   .d$..?.....O..M.\n    0030 - 58 99 10 73 83 93 70 4a-2c 61 e7 2d 41 74 d3 e9   X..s..pJ,a.-At..\n    0040 - 83 8c 4a 7f ae 7b e8 56-5c 51 fc 6f fe e3 a0 ec   ..J..{.V\\Q.o....\n    0050 - 3c 2b 6b 13 fc a0 e5 15-a8 31 16 19 11 98 56 43   <+k......1....VC\n    0060 - 16 86 c4 cd 53 e6 c3 61-e2 6c 1b 99 86 f5 a8 bd   ....S..a.l......\n    0070 - 3c 49 c0 0a ce 81 a9 33-9b 95 2c e1 f4 6d 05 1e   <I.....3..,..m..\n    0080 - 18 fa bf 2e f2 27 cc 0b-df 08 13 7e 4d 5a c8 41   .....'.....~MZ.A\n    0090 - 93 26 23 90 f1 bb ba 3a-15 17 1b 09 6a 14 a8 47   .&#....:....j..G\n    00a0 - 61 eb d9 91 0a 5c 4d e0-4a 8f 4d 50 ab 4b 81 aa   a....\\M.J.MP.K..\n\n    Start Time: 1528152434\n    Timeout   : 300 (sec)\n    Verify return code: 18 (self signed certificate)\n---\nclosed\n```","site":"terraform","answers_cleaned":"    page title  Troubleshooting   VCS Providers   HCP Terraform description       Learn how to address common problems in VCS integrations         Troubleshooting VCS Integration in HCP Terraform  This page collects solutions to the most common problems our users encounter with VCS integration in HCP Terraform      Azure DevOps      Required status checks not sending  When configuring  status checks with Azure DevOps  https   learn microsoft com en us azure devops repos git pr status policy  the web interface may auto populate Genre and Name fields  beneath  Status to check   with incorrect values that do not reflect what HCP Terraform is sending  To function correctly as required checks the Genre must be populated with  Terraform Cloud   or the first segment for a Terraform Enterprise install   and the remainder of the status check goes in the Name field  This requires using the  Enter genre name separately  
checkbox to not use the default configuration   In the example below the status check is named  Terraform Cloud paul hcp gianni test 1  and needs to be configured with Genre  Terraform Cloud  and Name  paul hcp gianni test 1      Azure DevOps screenshot  configuring required status checks correctly   img docs ado required status check png   With an older version of Azure DevOps Server it may be that the web interface does not allow entering the Genre and Name separately  In which case the status check will need to be created via the  API  https   learn microsoft com en us rest api azure devops policy configurations create       Bitbucket Data Center  The following errors are specific to Bitbucket Data Center integrations       Clicking  Connect organization   X    with Bitbucket Data Center raises an error message in HCP Terraform  HCP Terraform uses OAuth 1 to authenticate the user to Bitbucket Data Center  The first step in the authentication process is for HCP Terraform to call Bitbucket Data Center to obtain a request token  After the call completes  HCP Terraform redirects you to Bitbucket Data Center with the request token   An error occurs when HCP Terraform calls to Bitbucket Data Center to obtain the request token but the request is rejected  Some common reasons for the request to be rejected are     The API endpoint is unreachable  this can happen if the address or port is incorrect or the domain name doesn t resolve    The certificate used on Bitbucket Data Center is rejected by the HCP Terraform HTTP client because the SSL verification fails  This is often the case with self signed certificates or when the Terraform Enterprise instance is not configured to trust the signing chain of the Bitbucket Data Center SSL certificate   To fix this issue  do the following     Verify that the instance running Terraform Enterprise can resolve the domain name and can reach Bitbucket Data Center    Verify that the HCP Terraform client accepts the HTTPS connection to 
Bitbucket Data Center  This can be done by performing a  curl  from the Terraform Enterprise instance to Bitbucket Data Center  it should not return any SSL errors    Verify that the Consumer Key  Consumer Name  and the Public Key are configured properly in Bitbucket Data Center    Verify that the HTTP URL and API URL in HCP Terraform are correct for your Bitbucket Data Center instance  This includes the proper scheme  HTTP vs HTTPS   as well as the port       Creating a workspace from a repository hangs indefinitely  displaying a spinner on the confirm button  If you were able to connect HCP Terraform to Bitbucket Data Center but cannot create workspaces  it often means HCP Terraform isn t able to automatically add webhook URLs for that repository   To fix this issue     Make sure you haven t manually entered any webhook URLs for the affected repository or project  Although the Bitbucket Web Post Hooks Plugin documentation describes how to manually enter a hook URL  HCP Terraform handles this automatically  Manually entered URLs can interfere with HCP Terraform s operation     To check the hook URLs for a repository  go to the repository s settings  then go to the  Hooks  page  in the  Workflow  section  and click on the  Post Receive WebHooks  link     Also note that some Bitbucket Data Center versions might allow you to set per project or server wide hook URLs in addition to per repository hooks  These should all be empty  if you set a hook URL that might affect more than one repo when installing the plugin  go back and delete it    Make sure you aren t trying to connect too many workspaces to a single repository  Bitbucket Data Center s webhooks plugin can only attach five hooks to a given repo  You might need to create additional repositories if you need to make more than five workspaces from a single configuration repo      Bitbucket Cloud      HCP Terraform fails to obtain repositories  This typically happens when the HCP Terraform application in Bitbucket 
Cloud wasn t configured to have the full set of permissions  Go to the OAuth section of the Bitbucket settings  find your HCP Terraform OAuth consumer  click the edit link in the       menu  and ensure it has the required permissions enabled     Permission type   Permission level                                            Account           Write                Repositories      Admin                Pull requests     Write                Webhooks          Read and write         GitHub       Host key verification failed  error in  terraform init  when attempting to ingress Terraform modules via Git over SSH  This is most common when running Terraform 0 10 3 or 0 10 4  which had a bug in handling SSH submodule ingress  Try upgrading affected HCP Terraform workspaces to the latest Terraform version or 0 10 8  the latest in the 0 10 series        HCP Terraform can t ingress Git submodules  with auth errors during init  This usually happens when an SSH key isn t associated with the VCS provider s OAuth client     Go to your organization s  VCS Provider  settings page and check your GitHub client  If it still says  You can add a private SSH key to this connection to be used for git clone operations   instead of  A private SSH key has been added       you need to click the  add a private SSH key  link and add a key    Check the settings page for affected workspaces and ensure that  Include submodules on clone  is enabled   Note that the  SSH Key  section in a workspace s settings is only used for mid run operations like cloning Terraform modules  It isn t used when cloning the linked repository before a run      General  The following errors may occur for all VCS providers except Bitbucket Data Center       HCP Terraform returns 500 after authenticating with the VCS provider   The Callback URL in the OAuth application configuration in the VCS provider probably wasn t updated in the last step of the instructions and still points to the default     path  or an example com 
link  instead of the full callback url   The fix is to update the callback URL in your VCS provider s application settings  You can look up the real callback URL in HCP Terraform s settings       Can t delete a workspace or module  resulting in 500 errors  This often happens when the VCS connection has been somehow broken  it might have had permissions revoked  been reconfigured  or had the repository removed  Check for these possibilities and contact HashiCorp support for further assistance  including any information you collected in your support ticket        redirect uri mismatch  error on  Connect   The domain name for HCP Terraform s SaaS release changed on 02 22 at 9AM from  atlas hashicorp com  to  app terraform io   If the OAuth client was originally configured on the old domain  using it for a new VCS connection can result in this error   The fix is to update the OAuth Callback URL in your VCS provider to use app terraform io instead of atlas hashicorp com       Can t trigger workspace runs from VCS webhook  A workspace with no runs will not accept new runs from a VCS webhook  You must queue at least one run manually   A workspace will not process a webhook if the workspace previously processed a webhook with the same commit SHA and created a run  To trigger a run  create a new commit  If a workspace receives a webhook with a previously processed commit  HCP Terraform adds a new event to the  VCS Events   terraform cloud docs vcs viewing events  page documenting the received webhook       Changing the URL for a VCS provider  On rare occasions  you might need HCP Terraform to change the URL it uses to reach your VCS provider  This usually only happens if you move your VCS server or the VCS vendor changes their supported API versions   HCP Terraform does not allow you to change the API URL for an existing VCS connection  but you can create a new VCS connection and update existing resources to use it  This is most efficient if you script the necessary updates 
using HCP Terraform s API  In brief   1   Configure a new VCS connection   terraform cloud docs vcs  with the updated URL  1  Obtain the  oauth token IDs   terraform cloud docs api docs oauth tokens  for the old and new OAuth clients  1   List all workspaces   terraform cloud docs api docs workspaces list workspaces   dealing with pagination if necessary   and use a JSON filtering tool like  jq  to make a list of all workspace IDs whose  attributes vcs repo oauth token id  matches the old VCS connection  1  Iterate over the list of workspaces and  PATCH each one   terraform cloud docs api docs workspaces update a workspace  to use the new  oauth token id   1   List all registry modules   terraform registry api docs list modules  and use their  source  property to determine which ones came from the old VCS connection  1   Delete each affected module   terraform cloud docs api docs private registry modules delete a module   then  create a new module   terraform cloud docs api docs private registry modules publish a private module from a vcs  from the new connection s version of the relevant repo  1  Delete the old VCS connection        Reauthorizing VCS OAuth Providers  If a VCS OAuth connection breaks  you can reauthorize an existing VCS provider while retaining any VCS connected resources  like workspaces  We recommend only using this feature to fix broken VCS connections  We also recommend reauthorizing using the same VCS account to avoid permission changes to your repositories         Important    Reauthorizing is not available when the  TFE Provider  https   registry terraform io providers hashicorp tfe latest docs resources oauth client  is managing the OAuth Client  Instead  you can update the  oauth token  https   registry terraform io providers hashicorp tfe latest docs resources oauth client oauth token  argument with a new token from your VCS Provider   To reauthorize a VCS connection  complete the following steps  1  Go to your organization s settings and 
click   Providers   under   Version Control    1  Click   Reauthorize   in the   OAuth Token ID   column   1  Confirm the reauthorization  HCP Terraform redirects you to your VCS Provider where you can reauthorize the connection      Certificate Errors on Terraform Enterprise  When debugging failures of VCS connections due to certificate errors  running additional diagnostics using the OpenSSL command may provide more information about the failure   First  attach a bash session to the application container       docker exec  it ptfe atlas sh  c  stty rows 50    stty cols 150    bash       Then run the  openssl s client  command  using the certificate at   tmp cust ca certificates crt  in the container       openssl s client  showcerts  CAfile  tmp cust ca certificates crt  connect git server hostname 443      For example  a Gitlab server that uses a self signed certificate might result in an error like  verify error num 18 self signed certificate   as shown in the output below       bash 4 3  openssl s client  showcerts  CAfile  tmp cust ca certificates crt  connect gitlab local 443 CONNECTED 00000003  depth 0 CN   gitlab local verify error num 18 self signed certificate verify return 1 depth 0 CN   gitlab local verify return 1     Certificate chain  0 s  CN gitlab local    i  CN gitlab local      BEGIN CERTIFICATE      MIIC DCCAeSgAwIBAgIJAIhG2GWtcj7lMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV BAMMFWdpdGxhYi1sb2NhbC5oYXNoaS5jbzAeFw0xODA2MDQyMjAwMDhaFw0xOTA2 MDQyMjAwMDhaMCAxHjAcBgNVBAMMFWdpdGxhYi1sb2NhbC5oYXNoaS5jbzCCASIw DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMMgrpo3zsoy2BP AoGIgrYwEMnj PwSOFGNHbclmiVBCW9jvrZrtva8Qh twU7CSQdkeSP34ZgLrRp1msmLvUuVMgPts i7isrI5hug IHLLOGO5xMvxOcrHknvySYJRmvYFriEBPNRPYJGJ9O1ZUVUYeNwW  l9eegBDpJrdsjGmFKCOzZEdUA3zu7PfNgf788uIi4UkVXZNa OFHsZi63OYyfOc2 Zm0 vRKOn17dewOOesHhw77yYbBH8OFsEiC10JCe5y3MD9yrhV1h9Z4niK8rHPXz XEh3JfV BBArodmDbvi4UtT IGdDueUllXv7kbwqvQ67OFmmek0GZOY7ZvMCAwEA AaM5MDcwIAYDVR0RBBkwF4IVZ2l0bGFiLWxvY2FsLmhhc2hpLmNvMBMGA1UdJQQM 
MAoGCCsGAQUFBwMBMA0GCSqGSIb3DQEBCwUAA4IBAQCfkukNV 9vCA 8qoEbPt1M mvf2FHyUD69p Gq 04IhGty3sno4eVcwWEc5EvfNt8vv1FykFQ6zMJuWA0jL9x2s LbC8yuRDnsAlukSBvyazCZ9pt3qseGOLskaVCeOqG3b hJqikZihFUD95IvWNFQs RpvGvnA AH2Lqqeyk2ITtLYj1AcSB1hBSnG 0fdtao9zs0JQsrS59CD1lbbTPPRN orbKtVTWF2JlJxl2watfCNTw6nTCPI 51CYd687T3MuRN7LsTgglzP4xazuNjbWB QGAiQRd6aKj xAJnqjzXt9wl6a493m8aNkyWrxZGHfIA1W70RtMqIC 554flZ4ia      END CERTIFICATE          Server certificate subject  CN gitlab local issuer  CN gitlab local     No client certificate CA names sent Peer signing digest  SHA512 Server Temp Key  ECDH  P 256  256 bits     SSL handshake has read 1443 bytes and written 433 bytes     New  TLSv1 SSLv3  Cipher is ECDHE RSA AES256 GCM SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression  NONE Expansion  NONE No ALPN negotiated SSL Session      Protocol    TLSv1 2     Cipher      ECDHE RSA AES256 GCM SHA384     Session ID  AF5286FB7C7725D377B4A5F556DEB6DDC38B302153DDAE90C552ACB5DC4D86B8     Session ID ctx      Master Key  DB75AEC12C6E7B62246C653C8CB8FC3B90DE86886D68CB09898A6A6F5D539007F7760BC25EC4563A893D34ABCFAAC28A     Key Arg     None     PSK identity  None     PSK identity hint  None     SRP username  None     TLS session ticket lifetime hint  300  seconds      TLS session ticket      0000   03 c1 35 c4 ff 6d 24 a8 6c 70 61 fb 2c dc 2e b8     5  m  lpa          0010   de 4c 6d b0 2c 13 8e b6 63 95 18 ee 4d 33 a6 dc    Lm     c   M3       0020   0d 64 24 f0 8d 3f 9c aa b8 a4 e2 4f d3 c3 4d 88    d         O  M      0030   58 99 10 73 83 93 70 4a 2c 61 e7 2d 41 74 d3 e9   X  s  pJ a  At       0040   83 8c 4a 7f ae 7b e8 56 5c 51 fc 6f fe e3 a0 ec     J    V Q o         0050   3c 2b 6b 13 fc a0 e5 15 a8 31 16 19 11 98 56 43     k      1    VC     0060   16 86 c4 cd 53 e6 c3 61 e2 6c 1b 99 86 f5 a8 bd       S  a l           0070   3c 49 c0 0a ce 81 a9 33 9b 95 2c e1 f4 6d 05 1e    I     3     m       0080   18 fa bf 2e f2 27 cc 0b df 08 13 7e 4d 5a c8 41               MZ A 
    0090   93 26 23 90 f1 bb ba 3a 15 17 1b 09 6a 14 a8 47               j  G     00a0   61 eb d9 91 0a 5c 4d e0 4a 8f 4d 50 ab 4b 81 aa   a     M J MP K        Start Time  1528152434     Timeout     300  sec      Verify return code  18  self signed certificate      closed    "}
{"questions":"terraform Learn to prepare private modules for publishing add them to the registry and page title Publishing Private Modules Private Registry HCP Terraform Publishing Private Modules to the HCP Terraform Private Registry vcs terraform cloud docs vcs release new versions","answers":"---\npage_title: Publishing Private Modules - Private Registry - HCP Terraform\ndescription: >-\n  Learn to prepare private modules for publishing, add them to the registry, and\n  release new versions.\n---\n\n[vcs]: \/terraform\/cloud-docs\/vcs\n\n# Publishing Private Modules to the HCP Terraform Private Registry\n\n> **Hands-on:** Try the [Share Modules in the Private Module Registry](\/terraform\/tutorials\/modules\/module-private-registry-share) tutorial.\n\nIn addition to [adding modules from the Terraform Registry](\/terraform\/cloud-docs\/registry\/add), you can publish private modules to an organization's HCP Terraform private registry. The registry handles downloads and controls access with HCP Terraform API tokens, so consumers do not need access to the module's source repository, even when running Terraform from the command line.\n\nThe private registry uses your configured [Version Control System (VCS) integrations][vcs] and defers to your VCS provider for most management tasks. For example, your VCS provider handles new version releases. The only manual tasks are adding a new module and deleting module versions.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Permissions\n\nPrivate modules are only available to members of the organization where you add them. 
In Terraform Enterprise, they are also available to organizations that you configure to [share modules](\/terraform\/enterprise\/admin\/application\/registry-sharing) with that organization.\n\nMembers of the [owners team](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-owners) and teams with [Manage Private Registry permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-private-registry) can publish and delete modules from the private registry.\n\n\n## Preparing a Module Repository\n\nAfter you configure at least one [connection to a VCS provider][vcs], you can publish a new module by specifying a properly formatted VCS repository (details below). The registry automatically detects the rest of the information it needs, including the module's name and its available versions.\n\nA module repository must meet all of the following requirements before you can add it to the registry:\n\n- **Location and permissions:** The repository must be in one of\n  your configured [VCS providers][vcs], and HCP Terraform's VCS user account must have admin access to the repository. The registry needs admin access to create the webhooks to import new module versions. GitLab repositories must be in the main organization or group, and not in any subgroups.\n\n- **Named `terraform-<PROVIDER>-<NAME>`:** Module repositories must use this\n  three-part name format, where `<NAME>` reflects the type of infrastructure\n  the module manages and `<PROVIDER>` is the main provider where it creates that\n  infrastructure. The `<PROVIDER>` segment must be all lowercase. The `<NAME>`\n  segment can contain additional hyphens. 
Examples: `terraform-google-vault` or\n  `terraform-aws-ec2-instance`.\n\n- **Standard module structure:** The module must adhere to the\n  [standard module structure](\/terraform\/language\/modules\/develop\/structure).\n  This allows the registry to inspect your module and generate documentation,\n  track resource usage, and more.\n\n## Publishing a New Module\n\nYou can publish modules through the UI as shown below or with the [Registry Modules API](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules). The API also supports publishing modules without a VCS repo as the source, which is not possible in the UI.\n\nTo publish a new module:\n\n  1. Click **Registry**. The **Registry** page appears.\n\n  1. Click **Publish** and select **Module**.\n\n    The **Add Module** page appears with a list of available repositories.\n\n  1. Select the repository containing the module you want to publish.\n\n    You can search the list by typing part or all of a repository name into the filter field. Remember that VCS providers use `<NAMESPACE>\/<REPO NAME>` strings to locate repositories. The namespace is an organization name for most providers, but Bitbucket Data Center, not Bitbucket Cloud, uses project keys, like `INFRA`.\n\n  1. When prompted, choose either the **Tag** or **Branch** module publishing type.\n\n  1.  (Optional) If this module is a [no-code ready module](\/terraform\/cloud-docs\/no-code-provisioning\/module-design), select the **Add Module to no-code provision allowlist** checkbox.\n\n  1.  Click **Publish module**.\n\n    HCP Terraform displays a loading page while it imports the module versions and then takes you to the new module's details page. 
On the details page, you can view available versions, read documentation, and copy a usage example.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/nocode.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n### Tag-based publishing considerations\n\nWhen using the **Tag** module publishing type, the registry uses `x.y.z` formatted release tags to identify module versions. Your repository must contain at least one release tag for you to publish a module. Release tag names must be a [semantic version](http:\/\/semver.org), which you can optionally prefix with a `v`. For example, `v1.0.4` and `0.9.2`. The registry ignores tags that do not match these formats.\n\n<!-- BEGIN: TFC:only name:branch-based-publishing -->\n\n### Branch-based publishing considerations\n\nWhen using the **Branch** module publishing type, you must provide the name of an existing branch in your VCS repository and give the module a **Module version**. Your VCS repository does not need to contain a matching tag or release.\n\nYou can only enable testing on modules published using branch-based publishing. Refer to the [test-integrated modules](\/terraform\/cloud-docs\/registry\/test) documentation for more information.\n\n<!-- END: TFC:only name:branch-based-publishing -->\n\n## Releasing New Versions of a Module\n\n<!-- BEGIN: TFC:only name:branch-based-publishing -->\n\nThe process to release a new module version differs between the tag-based and branch-based publishing workflows.\n\n### Tag-Based Publishing Workflow\n\n<!-- END: TFC:only name:branch-based-publishing -->\n\nTo release a new version of a module in the tag-based publishing workflow, push a new release tag to its VCS repository. 
The registry automatically imports the new version.\n\nRefer to [Preparing a Module Repository](#preparing-a-module-repository) for details about release tag requirements.\n\n<!-- BEGIN: TFC:only name:branch-based-publishing -->\n\n### Branch-Based Publishing Workflow\n\nTo release a new version of a module using the branch-based publishing workflow, navigate to the module overview screen, then click the **Publish New Version** button. Select the commit SHA that the new version will point to, and assign a new module version. You cannot re-use an existing module version.\n\n## Update Publish Settings\n\nAfter publishing your module, you can change between tag-based and branch-based publishing. To update your module's publish settings, navigate to the module overview page, click the **Manage Module for Organization** dropdown, and then click **Publish Settings**.\n\n- To change from tag-based to branch-based publishing, you must configure a **Module branch** and [create a new module version](#branch-based-publishing-workflow), as HCP Terraform will not automatically create one.\n\n- To change from branch-based publishing to tag-based publishing, you must create at least one tag in your VCS repository.\n\n<!-- END: TFC:only name:branch-based-publishing -->\n\n## Deleting Versions and Modules\n\n-> **Note:** Deleting a tag from your VCS repository does not automatically remove the version from the private registry.\n\nYou can delete individual versions of a module or the entire module. If deleting a module version would leave a module with no versions, HCP Terraform removes the entire module. To delete a module or version:\n\n1. Navigate to the module's details page.\n\n1. If you want to delete a single version, use the **Versions** menu to select it.\n\n1. Click **Delete module**.\n\n1. 
Select an action from the menu:\n\n   - **Delete only this module version:** Deletes only the version of the module you were viewing when you clicked **Delete module**.\n   - **Delete all versions for this provider for this module:** Deletes the entire module for a single provider. This action is important if you have modules with the same name but with different providers. For example, if you have module repos named `terraform-aws-appserver` and `terraform-azure-appserver`, the registry treats them as alternate providers of the same `appserver` module.\n   - **Delete all providers and versions for this module:** Deletes all modules with this name, even if they are from different providers. For example, this action deletes both `terraform-aws-appserver` and `terraform-azure-appserver`.\n\n1. Type the module name and click **Delete**.\n\n### Restoring a Deleted Module or Version\n\nDeletion is permanent, but there are ways to restore deleted modules and module versions.\n\n- To restore a deleted module, re-add it as a new module.\n- To restore a deleted version, either delete the corresponding tag from your VCS and push a new tag with the same name, or delete the entire module from the registry and re-add it.\n\n## Sharing Modules Across Organizations\n\nHCP Terraform does not typically allow one organization's workspaces to use private modules from a different organization. This restriction is because HCP Terraform gives Terraform temporary credentials to access modules that are only valid for that workspace's organization. Although it is possible to mix modules from multiple organizations when you run Terraform on the command line, we strongly recommend against it.\n\nInstead, you can share modules across organizations by sharing the underlying VCS repository. Grant each organization access to the module's repository, and then add the module to each organization's registry. 
When you push tags to publish new module versions, both organizations update accordingly.\n\nTerraform Enterprise administrators can configure [module sharing](\/terraform\/enterprise\/admin\/application\/registry-sharing) to allow organizations to use private modules from other organizations.\n\n## Generating Module Tests (Beta)\n\nYou can generate and run generated tests for your module with [the `terraform test` command](\/terraform\/cli\/commands\/test).\n\nBefore you can generate tests for a module, it must have at least one version published. Tests can only be generated once per module and are intended to be reviewed by the module's authors before being checked into version control and maintained with the rest of the module's content. If the module's configuration files exceed 128KB in total size, HCP Terraform will not be able to generate tests for that module.\n\nYou must have [permission to manage registry modules](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-private-registry) and [permission to manage module test generation](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-module-test-generation-beta) to generate tests.","site":"terraform","answers_cleaned":"    page title  Publishing Private Modules   Private Registry   HCP Terraform description       Learn to prepare private modules for publishing  add them to the registry  and   release new versions        vcs    terraform cloud docs vcs    Publishing Private Modules to the HCP Terraform Private Registry      Hands on    Try the  Share Modules in the Private Module Registry   terraform tutorials modules module private registry share  tutorial   In addition to  adding modules from the Terraform Registry   terraform cloud docs registry add   you can publish private modules to an organization s HCP Terraform private registry  The registry handles downloads and controls access with HCP Terraform API tokens  so consumers do not need access to the module s 
source repository  even when running Terraform from the command line   The private registry uses your configured  Version Control System  VCS  integrations  vcs  and defers to your VCS provider for most management tasks  For example  your VCS provider handles new version releases  The only manual tasks are adding a new module and deleting module versions    permissions citation    intentionally unused   keep for maintainers     Permissions  Private modules are only available to members of the organization where you add them  In Terraform Enterprise  they are also available to organizations that you configure to  share modules   terraform enterprise admin application registry sharing  with that organization   Members of the  owners team   terraform cloud docs users teams organizations permissions organization owners  and teams with  Manage Private Registry permissions   terraform cloud docs users teams organizations permissions manage private registry  can publish and delete modules from the private registry       Preparing a Module Repository  After you configure at least one  connection to a VCS provider  vcs   you can publish a new module by specifying a properly formatted VCS repository  details below   The registry automatically detects the rest of the information it needs  including the module s name and its available versions   A module repository must meet all of the following requirements before you can add it to the registry       Location and permissions    The repository must be in one of   your configured  VCS providers  vcs   and HCP Terraform s VCS user account must have admin access to the repository  The registry needs admin access to create the webhooks to import new module versions  GitLab repositories must be in the main organization or group  and not in any subgroups       Named  terraform  PROVIDER   NAME      Module repositories must use this   three part name format  where   NAME   reflects the type of infrastructure   the module manages and  
 PROVIDER   is the main provider where it creates that   infrastructure  The   PROVIDER   segment must be all lowercase  The   NAME     segment can contain additional hyphens  Examples   terraform google vault  or    terraform aws ec2 instance        Standard module structure    The module must adhere to the    standard module structure   terraform language modules develop structure     This allows the registry to inspect your module and generate documentation    track resource usage  and more      Publishing a New Module  You can publish modules through the UI as shown below or with the  Registry Modules API   terraform cloud docs api docs private registry modules   The API also supports publishing modules without a VCS repo as the source  which is not possible in the UI   To publish a new module     1  Click   Registry    The   Registry   page appears     1  Click   Publish   and select   Module         The   Add Module   page appears with a list of available repositories     1  Select the repository containing the module you want to publish       You can search the list by typing part or all of a repository name into the filter field  Remember that VCS providers use   NAMESPACE   REPO NAME   strings to locate repositories  The namespace is an organization name for most providers  but Bitbucket Data Center  not Bitbucket Cloud  uses project keys  like  INFRA      1  When prompted  choose either the   Tag   or   Branch   module publishing type     1    Optional  If this module is a  no code ready module   terraform cloud docs no code provisioning module design   select the   Add Module to no code provision allowlist   checkbox     1   Click   Publish module         HCP Terraform displays a loading page while it imports the module versions and then takes you to the new module s details page  On the details page  you can view available versions  read documentation  and copy a usage example        BEGIN  TFC only name pnp callout      include  tfc package callouts 
nocode mdx       END  TFC only name pnp callout          Tag based publishing considerations  When using the   Tag   module publishing type  the registry uses  x y z  formatted release tags to identify module versions  Your repository must contain at least one release tag for you to publish a module  Release tag names must be a  semantic version  http   semver org   which you can optionally prefix with a  v   For example   v1 0 4  and  0 9 2   The registry ignores tags that do not match these formats        BEGIN  TFC only name branch based publishing          Branch based publishing considerations  When using the   Branch   module publishing type  you must provide the name of an existing branch in your VCS repository and give the module a   Module version    Your VCS repository does not need to contain a matching tag or release   You can only enable testing on modules published using branch based publishing  Refer to the  test integrated modules   terraform cloud docs registry test  documentation for more information        END  TFC only name branch based publishing         Releasing New Versions of a Module       BEGIN  TFC only name branch based publishing      The process to release a new module version differs between the tag based and branch based publishing workflows       Tag Based Publishing Workflow       END  TFC only name branch based publishing      To release a new version of a module in the tag based publishing workflow  push a new release tag to its VCS repository  The registry automatically imports the new version   Refer to  Preparing a Module Repository   preparing a module repository  for details about release tag requirements        BEGIN  TFC only name branch based publishing          Branch Based Publishing Workflow  To release a new version of a module using the branch based publishing workflow  navigate to the module overview screen  then click the   Publish New Version   button  Select the commit SHA that the new version will point to  and 
assign a new module version  You cannot re use an existing module version      Update Publish Settings  After publishing your module  you can change between tag based and branch based publishing  To update your module s publish settings  navigate to the module overview page  click the   Manage Module for Organization   dropdown  and then click   Publish Settings       To change from tag based to branch based publishing  you must configure a   Module branch   and  create a new module version   branch based publishing workflow   as HCP Terraform will not automatically create one     To change from branch based publishing to tag based publishing  you must create at least one tag in your VCS repository        END  TFC only name branch based publishing         Deleting Versions and Modules       Note    Deleting a tag from your VCS repository does not automatically remove the version from the private registry   You can delete individual versions of a module or the entire module  If deleting a module version would leave a module with no versions  HCP Terraform removes the entire module  To delete a module or version   1  Navigate to the module s details page   1  If you want to delete a single version  use the   Versions   menu to select it   1  Click   Delete module     1  Select an action from the menu          Delete only this module version    Deletes only the version of the module you were viewing when you clicked   Delete module           Delete all versions for this provider for this module    Deletes the entire module for a single provider  This action is important if you have modules with the same name but with different providers  For example  if you have module repos named  terraform aws appserver  and  terraform azure appserver   the registry treats them as alternate providers of the same  appserver  module         Delete all providers and versions for this module    Deletes all modules with this name  even if they are from different providers  For 
example  this action deletes both  terraform aws appserver  and  terraform azure appserver    1  Type the module name and click   Delete         Restoring a Deleted Module or Version  Deletion is permanent  but there are ways to restore deleted modules and module versions     To restore a deleted module  re add it as a new module    To restore a deleted version  either delete the corresponding tag from your VCS and push a new tag with the same name  or delete the entire module from the registry and re add it      Sharing Modules Across Organizations  HCP Terraform does not typically allow one organization s workspaces to use private modules from a different organization  This restriction is because HCP Terraform gives Terraform temporary credentials to access modules that are only valid for that workspace s organization  Although it is possible to mix modules from multiple organizations when you run Terraform on the command line  we strongly recommend against it   Instead  you can share modules across organizations by sharing the underlying VCS repository  Grant each organization access to the module s repository  and then add the module to each organization s registry  When you push tags to publish new module versions  both organizations update accordingly   Terraform Enterprise administrators can configure  module sharing   terraform enterprise admin application registry sharing  to allow organizations to use private modules from other organizations      Generating Module Tests  Beta   You can generate and run generated tests for your module with  the  terraform test  command   terraform cli commands test    Before you can generate tests for a module  it must have at least one version published  Tests can only be generated once per module and are intended to be reviewed by the module s authors before being checked into version control and maintained with the rest of the module s content  If the module s configuration files exceed 128KB in total size  HCP 
Terraform will not be able to generate tests for that module   You must have  permission to manage registry modules   terraform cloud docs users teams organizations permissions manage private registry  and  permission to manage module test generation   terraform cloud docs users teams organizations permissions manage module test generation beta  to generate tests "}
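The tag-based module workflow above keys off `x.y.z` release tags, and the registry silently ignores tags that are not semantic versions. A rough sketch of guarding a release against a malformed tag (the `is_semver_tag` helper is hypothetical, not part of HCP Terraform; run the commented `git` lines inside the module repository):

```shell
# Hypothetical helper: check a tag against the semver format the registry
# accepts -- x.y.z, optionally prefixed with "v" (e.g. v1.0.4 or 0.9.2).
is_semver_tag() {
  printf '%s\n' "$1" | grep -Eq '^v?[0-9]+\.[0-9]+\.[0-9]+$'
}

TAG="v1.0.4"
if is_semver_tag "$TAG"; then
  echo "ok: $TAG"
  # git tag "$TAG" && git push origin "$TAG"   # pushing the tag triggers import
else
  echo "skipped: $TAG is not a semver tag" >&2
fi
```

Pushing a valid tag is all that is required; the registry's webhook imports the new version automatically.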
{"questions":"terraform page title Publishing Private Providers Private Registry In addition to curating public providers from the Terraform Registry terraform cloud docs registry add you can publish private providers to an organization s HCP Terraform private registry Once you have published a private provider through the API members of your organization can search for it in the private registry UI and use it in configurations and release new versions Publishing private providers to the HCP Terraform private registry Prepare private providers for publishing add them to the private registry","answers":"---\npage_title: Publishing Private Providers - Private Registry\ndescription: >-\n  Prepare private providers for publishing, add them to the private registry,\n  and release new versions.\n---\n\n# Publishing private providers to the HCP Terraform private registry\n\nIn addition to [curating public providers from the Terraform Registry](\/terraform\/cloud-docs\/registry\/add), you can publish private providers to an organization's HCP Terraform private registry. Once you have published a private provider through the API, members of your organization can search for it in the private registry UI and use it in configurations.\n\n\n## Requirements\n\nReview the following before publishing a new provider or provider version.\n\n### Permissions\n\nUsers must be members of an organization to access its registry and private providers. 
In Terraform Enterprise, providers are also available to organizations that you configure to [share registry access](\/terraform\/enterprise\/admin\/application\/registry-sharing).\n\nYou must be a member of the [owners team](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-owners) or a team with [Manage Private Registry permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-private-registry) to publish and delete private providers from the private registry.\n\n### Release files\n\nYou must publish at least one version of your provider that follows [semantic versioning format](http:\/\/semver.org). For each version, you must upload the `SHA256SUMS` file, `SHA256SUMS.sig` file, and one or more provider binaries. Using GoReleaser to [create a release on GitHub](\/terraform\/registry\/providers\/publishing#creating-a-github-release) or [create a release locally](\/terraform\/registry\/providers\/publishing#using-goreleaser-locally) generates these files automatically. The private registry does not have strict naming conventions, but we recommend using GoReleaser file naming schemes for consistency.\n\nPrivate providers do not currently support documentation.\n\n### Signed releases\n\nGPG signing is required for private providers, and you must upload the public key of the GPG keypair used to sign the release. Refer to [Preparing and Adding a Signing Key](\/terraform\/registry\/providers\/publishing#preparing-and-adding-a-signing-key) for more details. Unlike the public Terraform Registry, the private registry does not automatically upload new releases. You must manually add new provider versions and the associated release files.\n\n-> **Note**: If you are using the [provider API](\/terraform\/cloud-docs\/api-docs\/private-registry\/providers) to upload an official HashiCorp public provider into your private registry, use [HashiCorp's public PGP key](https:\/\/www.hashicorp.com\/.well-known\/pgp-key.txt). 
You do not need to upload this public key, and it is automatically included in Terraform Enterprise version v202309-1 and newer.\n\n## Publishing a provider\n\nBefore consumers can use a private provider, you must do the following:\n1. [Create the provider](#create-the-provider)\n2. [Upload a GPG signing key](#add-your-public-key)\n3. [Create at least one version](#create-a-version)\n4. [Create at least one platform for that version](#create-a-provider-platform)\n5. [Upload release files](#upload-provider-binary)\n\n### Create the provider\n\nCreate a file named `provider.json` with the following contents. Replace `PROVIDER_NAME` with the name of your provider and replace `ORG_NAME` with the name of your organization.\n\n```json\n{\n  \"data\": {\n    \"type\": \"registry-providers\",\n    \"attributes\": {\n      \"name\": \"PROVIDER_NAME\",\n      \"namespace\": \"ORG_NAME\",\n      \"registry-name\": \"private\"\n    }\n  }\n}\n```\n\nUse the [Create a Provider endpoint](\/terraform\/cloud-docs\/api-docs\/private-registry\/providers#create-a-provider) to create the provider in HCP Terraform. Replace `TOKEN` in the `Authorization` header with your HCP Terraform API token and replace `ORG_NAME` with the name of your organization.\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @provider.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/ORG_NAME\/registry-providers\n```\n\nThe provider is now available in your organization\u2019s HCP Terraform private registry, but consumers cannot use it until you add a version and a platform. 
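One wrinkle when assembling `key.json` by hand in the next step: the `ascii-armor` value is a single JSON string, so each real newline in the exported key block must become the two-character sequence `\n`. A minimal sketch of that conversion (the key material below is a truncated placeholder, not a real key; in practice export your own with `gpg --armor --export <key-id> > pubkey.asc`):

```shell
# Write a truncated placeholder armored key (stand-in for a real gpg export).
printf -- '-----BEGIN PGP PUBLIC KEY BLOCK-----\nmQINB...=txfz\n-----END PGP PUBLIC KEY BLOCK-----\n' > pubkey.asc

# Replace each literal newline with the two characters "\n" so the key can be
# embedded in the "ascii-armor" field of key.json.
ARMOR=$(awk '{printf "%s\\n", $0}' pubkey.asc)
printf '"ascii-armor": "%s"\n' "$ARMOR"
```

The resulting one-line string is what the example `key.json` shows between the `BEGIN` and `END` markers.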
\n\nTo create a version and a platform, you need the following resources:\n* The provider binaries\n* A public GPG signing key\n* A `SHA256SUMS` file\n* A `SHA256SUMS.sig` file from at least one release\n\n\n### Add your public key\n\n-> **Note**: If you are uploading an official HashiCorp public provider into your private registry, skip this step and instead use [HashiCorp's public PGP key](https:\/\/www.hashicorp.com\/.well-known\/pgp-key.txt) in the [create a version](#create-a-version) step. The key ID for HashiCorp's public key is `34365D9472D7468F`, and you can verify the ID by [importing the public key locally](\/terraform\/tutorials\/cli\/verify-archive#download-and-import-hashicorp-s-public-key).\n\nCreate a file named `key.json` with the following contents. Replace `ORG_NAME` with the name of your organization and supply your public key in the `ascii-armor` field.\n\n```json\n{\n  \"data\": {\n    \"type\": \"gpg-keys\",\n    \"attributes\": {\n      \"namespace\": \"ORG_NAME\",\n      \"ascii-armor\": \"-----BEGIN PGP PUBLIC KEY BLOCK-----\\n\\nmQINB...=txfz\\n-----END PGP PUBLIC KEY BLOCK-----\\n\"\n    }\n  }\n}\n```\n\nUse the [Add a GPG key endpoint](\/terraform\/cloud-docs\/api-docs\/private-registry\/gpg-keys#add-a-gpg-key) to add the public key that matches the signing key for the release. Replace `TOKEN` in the `Authorization` header with your HCP Terraform API token.\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @key.json \\\n  https:\/\/app.terraform.io\/api\/registry\/private\/v2\/gpg-keys\n```\n\nThe response contains a `key-id` that you will use to create a provider version.\n\n```json\n\"key-id\": \"34365D9472D7468F\"\n```\n\n### Create a version\n\nCreate a file named `version.json` with the following contents. 
Replace the value of the `version` field with the version of your provider, and replace the `key-id` field with the ID of the GPG key that you created in the [Add your public key](#add-your-public-key) step. If you are uploading an official HashiCorp public provider, use the value `34365D9472D7468F` for your `key-id`.\n\n```json\n{\n  \"data\": {\n    \"type\": \"registry-provider-versions\",\n    \"attributes\": {\n      \"version\": \"5.14.0\",\n      \"key-id\": \"34365D9472D7468F\",\n      \"protocols\": [\"5.0\"]\n    }\n  }\n}\n```\n\nUse the [Create a Provider Version endpoint](\/terraform\/cloud-docs\/api-docs\/private-registry\/provider-versions-platforms#create-a-provider-version) to create a version for your provider. Replace `TOKEN` in the `Authorization` header with your HCP Terraform API token, and replace both instances of `ORG_NAME` with the name of your organization. If you are not using the `aws` provider, then replace `aws` with your provider's name.\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @version.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/ORG_NAME\/registry-providers\/private\/ORG_NAME\/aws\/versions\n```\n\nThe response includes URL links that you will use to upload the `SHA256SUMS` and `SHA256SUMS.sig` files.\n\n```json\n\"links\": {\n    \"shasums-upload\":     \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b64hd73ghd63\",\n    \"shasums-sig-upload\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b37dj37dh33d\"\n  }\n```\n\n### Upload signatures\n\nUpload the `SHA256SUMS` and `SHA256SUMS.sig` files to the URLs [returned in the previous step](#create-a-version). The example command below uploads the files from your local machine. 
First upload the `SHA256SUMS` file to the URL returned in the `shasums-upload` field.\n\n```shell-session\n$ curl \\\n  -T terraform-provider-aws_5.14.0_SHA256SUMS \\\n  https:\/\/archivist.terraform.io\/v1\/object\/dmF1b64hd73ghd63...\n```\n\nNext, upload the `SHA256SUMS.sig` file to the URL returned in the `shasums-sig-upload` field.\n\n```shell-session\n$ curl \\\n  -T terraform-provider-aws_5.14.0_SHA256SUMS.72D7468F.sig \\\n  https:\/\/archivist.terraform.io\/v1\/object\/dmF1b37dj37dh33d...\n```\n\n### Create a provider platform\n\nFirst, calculate the SHA256 hash of the provider binary that you intend to upload. This should match the SHA256 hash of the file listed in the `SHA256SUMS` file.\n\n```shell-session\n$ shasum -a 256 terraform-provider-aws_5.14.0_linux_amd64.zip\nf1d83b3e5a29bae471f9841a4e0153eac5bccedbdece369e2f6186e9044db64e  terraform-provider-aws_5.14.0_linux_amd64.zip\n```\n\nNext, create a file named `platform.json`. Replace the `os`, `arch`, `filename`, and `shasum` fields with the values that match the provider you intend to upload. \n\n```json\n{\n  \"data\": {\n    \"type\": \"registry-provider-version-platforms\",\n    \"attributes\": {\n      \"os\": \"linux\",\n      \"arch\": \"amd64\",\n      \"shasum\": \"f1d83b3e5a29bae471f9841a4e0153eac5bccedbdece369e2f6186e9044db64e\",\n      \"filename\": \"terraform-provider-aws_5.14.0_linux_amd64.zip\"\n    }\n  }\n}\n```\n\nUse the [Create a Provider Platform endpoint](\/terraform\/cloud-docs\/api-docs\/private-registry\/provider-versions-platforms#create-a-provider-platform) to create a platform for the version. Platforms are binaries that allow the provider to run on a particular operating system and architecture combination (e.g., Linux and AMD64).  
Replace `TOKEN` in the `Authorization` header with your HCP Terraform API token and replace both instances of `ORG_NAME` with the name of your organization.\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @platform.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/ORG_NAME\/registry-providers\/private\/ORG_NAME\/aws\/versions\/5.14.0\/platforms\n```\n\nThe response includes a `provider-binary-upload` URL that you will use to upload the binary file for the platform.\n\n```json\n\"links\": {\n      \"provider-binary-upload\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b45c367djh45nj78\"\n    }\n```\n\n### Upload provider binary\n\nUpload the platform binary file to the `provider-binary-upload` URL returned in the [previous step](#create-a-provider-platform). The example command below uploads the binary from your local machine.\n\n```shell-session\n$ curl \\\n  -T local-example\/terraform-provider-aws_5.14.0_linux_amd64.zip \\\n  https:\/\/archivist.terraform.io\/v1\/object\/dmF1b45c367djh45nj78\n```\n\nThe version is available in the HCP Terraform user interface. Consumers can now begin using this provider version in configurations. You can repeat these steps starting from [Create a provider platform](#create-a-provider-platform) to add additional platform binaries for the release.\n\n## Checking Release Files\n\nConsumers cannot use a private provider version until you upload all required [release files](#release-files). To determine whether these files have been uploaded:\n\n1. Click **Registry** and click the private provider to go to its details page.\n1. Use the version menu to navigate to the version you want to check. The UI shows a warning banner for versions that do not have all required release files.\n1. Open the **Manage Provider** menu and select **Show release files**. 
The **Release Files** page appears containing lists of uploaded and missing files for the current version.\n\n\n## Managing private providers\n\nUse the HCP Terraform API to create, read, update, and delete the following:\n\n- [GPG keys](\/terraform\/cloud-docs\/api-docs\/private-registry\/gpg-keys)\n- [Private providers](\/terraform\/cloud-docs\/api-docs\/private-registry\/providers)\n- [Provider versions and platforms](\/terraform\/cloud-docs\/api-docs\/private-registry\/provider-versions-platforms)\n\n## Deleting private providers and versions\n\nIn addition to the [Registry Providers API](\/terraform\/cloud-docs\/api-docs\/private-registry\/providers#delete-a-provider), you can delete providers and provider versions through the HCP Terraform UI. To delete providers and versions in the UI:\n\n1. Click **Registry** and click the private provider to go to its details page.\n1. If you want to delete a single version, use the **Versions** menu to select it.\n1. Open the **Manage Provider** menu and select **Delete Provider**. The **Delete Provider from Organization** box appears.\n1. Select an action from the menu:\n\n    - **Delete only this provider version:** Deletes only the version of the provider you are currently viewing.\n    - **Delete all versions for this provider:** Deletes the entire provider and all associated versions.\n\n1. Type the provider name into the confirmation box and click **Delete**.\n\nThe provider version or entire provider has been deleted from this organization's private registry and its data has been removed. 
Consumers will no longer be able to reference it in configurations.\n\n### Restoring a deleted provider\n\nDeletion is permanent, but you can restore a deleted private provider by re-adding it to your organization and recreating its versions and platforms.\n\n","site":"terraform","answers_cleaned":"    page title  Publishing Private Providers   Private Registry description       Prepare private providers for publishing  add them to the private registry    and release new versions         Publishing private providers to the HCP Terraform private registry  In addition to  curating public providers from the Terraform Registry   terraform cloud docs registry add   you can publish private providers to an organization s HCP Terraform private registry  Once you have published a private provider through the API  members of your organization can search for it in the private registry UI and use it in configurations       Requirements  Review the following before publishing a new provider or provider version       Permissions  Users must be members of an organization to access its registry and private providers  In Terraform Enterprise  providers are also available to organizations that you configure to  share registry access   terraform enterprise admin application registry sharing    You must be a member of the  owners team   terraform cloud docs users teams organizations permissions organization owners  or a team with  Manage Private Registry permissions   terraform cloud docs users teams organizations permissions manage private registry  to publish and delete private providers from the private registry       Release files  You must publish at least one version of your provider that follows  semantic versioning format  http   semver org   For each version  you must upload the  SHA256SUMS  file   SHA256SUMS sig  file  and one or more provider binaries  Using GoReleaser to  create a release on GitHub   terraform registry providers publishing creating a github release  or  
create a release locally   terraform registry providers publishing using goreleaser locally  generates these files automatically  The private registry does not have strict naming conventions  but we recommend using GoReleaser file naming schemes for consistency   Private providers do not currently support documentation       Signed releases  GPG signing is required for private providers  and you must upload the public key of the GPG keypair used to sign the release  Refer to  Preparing and Adding a Signing Key   terraform registry providers publishing preparing and adding a signing key  for more details  Unlike the public Terraform Registry  the private registry does not automatically upload new releases  You must manually add new provider versions and the associated release files        Note    If you are using the  provider API   terraform cloud docs api docs private registry providers  to upload an official HashiCorp public provider into your private registry  use  HashiCorp s public PGP key  https   www hashicorp com  well known pgp key txt   You do not need to upload this public key  and it is automatically included in Terraform Enterprise version v202309 1 and newer      Publishing a provider  Before consumers can use a private provider  you must do the following  1   Create the provider   create the provider  2   Upload a GPG signing key   add your public key  3   Create at least one version   create a version  4   Create at least one platform for that version   create a provider platform  5   Upload release files   upload provider binary       Create the provider  Create a file named  provider json  with the following contents  Replace  PROVIDER NAME  with the name of your provider and replace  ORG NAME  with the name of your organization      json      data          type    registry providers        attributes            name    PROVIDER NAME          namespace    ORG NAME          registry name    private                   Use the  Create a Provider 
endpoint   terraform cloud docs api docs private registry providers create a provider  to create the provider in HCP Terraform  Replace  TOKEN  in the  Authorization  header with your HCP Terraform API token and replace  ORG NAME  with the name of your organization      shell session   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  provider json     https   app terraform io api v2 organizations ORG NAME registry providers      The provider is now available in your organization s HCP Terraform private registry  but consumers cannot use it until you add a version and a platform    To create a version and a platform  you need the following resources    The provider binaries   A public GPG signing key   A  SHA256SUMS  file   A  SHA256SUMS sig  file from at least one release       Add your public key       Note    If you are uploading an official HashiCorp public provider into your private registry  skip this step and instead use  HashiCorp s public PGP key  https   www hashicorp com  well known pgp key txt  in the  create a version   create a version  step  The key ID for HashiCorp s public key is  34365D9472D7468F   and you can verify the ID by  importing the public key locally   terraform tutorials cli verify archive download and import hashicorp s public key    Create a file named  key json  with the following contents  Replace  ORG NAME  with the name of your organization and supply your public key in the  ascii armor  field      json      data          type    gpg keys        attributes            namespace    ORG NAME          ascii armor         BEGIN PGP PUBLIC KEY BLOCK      n nmQINB    txfz n     END PGP PUBLIC KEY BLOCK      n                  Use the  Add a GPG key endpoint   terraform cloud docs api docs private registry gpg keys add a gpg key  to add the public key that matches the signing key for the release  Replace  TOKEN  in the  Authorization  header with your 
HCP Terraform API token      shell session   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  key json     https   app terraform io api registry private v2 gpg keys      The response contains a  key id  that you will use to create a provider version      json  key id    34365D9472D7468F           Create a version  Create a file named  version json  with the following contents  Replace the value of the  version  field with the version of your provider  and replace the  key id  field with the ID of the GPG key that you created in the  Add your public key   add your public key  step  If you are uploading an official HashiCorp public provider  use the value  34365D9472D7468F  for your  key id       json      data          type    registry provider versions        attributes            version    5 14 0          key id    34365D9472D7468F          protocols     5 0                    Use the  Create a Provider Version endpoint   terraform cloud docs api docs private registry provider versions platforms create a provider version  to create a version for your provider  Replace  TOKEN  in the  Authorization  header with your HCP Terraform API token  and replace both instances of  ORG NAME  with the name of your organization  If you are not using the  aws  provider  then replace  aws  with your provider s name      shell session   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  version json     https   app terraform io api v2 organizations ORG NAME registry providers private ORG NAME aws versions       The response includes URL links that you will use to upload the  SHA256SUMS  and  SHA256SUMS sig  files      json  links          shasums upload        https   archivist terraform io v1 object dmF1b64hd73ghd63        shasums sig upload    https   archivist terraform io v1 object dmF1b37dj37dh33d               Upload 
signatures

Upload the `SHA256SUMS` and `SHA256SUMS.sig` files to the URLs returned in the [previous step](#create-a-version). The example commands below upload the files from your local machine.

First, upload the `SHA256SUMS` file to the URL returned in the `shasums-upload` field:

```shell-session
curl -T terraform-provider-aws_5.14.0_SHA256SUMS https://archivist.terraform.io/v1/object/dmF1b64hd73ghd63
```

Next, upload the `SHA256SUMS.sig` file to the URL returned in the `shasums-sig-upload` field:

```shell-session
curl -T terraform-provider-aws_5.14.0_SHA256SUMS.72D7468F.sig https://archivist.terraform.io/v1/object/dmF1b37dj37dh33d
```

### Create a provider platform

First, calculate the SHA256 hash of the provider binary that you intend to upload. This should match the SHA256 hash of the file listed in the `SHA256SUMS` file.

```shell-session
shasum -a 256 terraform-provider-aws_5.14.0_linux_amd64.zip
f1d83b3e5a29bae471f9841a4e0153eac5bccedbdece369e2f6186e9044db64e  terraform-provider-aws_5.14.0_linux_amd64.zip
```

Next, create a file named `platform.json`. Replace the `os`, `arch`, `filename`, and `shasum` fields with the values that match the provider you intend to upload.

```json
{
  "data": {
    "type": "registry-provider-version-platforms",
    "attributes": {
      "os": "linux",
      "arch": "amd64",
      "shasum": "f1d83b3e5a29bae471f9841a4e0153eac5bccedbdece369e2f6186e9044db64e",
      "filename": "terraform-provider-aws_5.14.0_linux_amd64.zip"
    }
  }
}
```

Use the [Create a Provider Platform endpoint](/terraform/cloud-docs/api-docs/private-registry/provider-versions-platforms#create-a-provider-platform) to create a platform for the version. Platforms are binaries that allow the provider to run on a particular operating system and architecture combination (e.g., Linux and AMD64). Replace `TOKEN` in the `Authorization` header with your HCP Terraform API token and replace both instances of `ORG_NAME` with the name of your organization.

```shell-session
curl \
  --header "Authorization: Bearer TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @platform.json \
  https://app.terraform.io/api/v2/organizations/ORG_NAME/registry-providers/private/ORG_NAME/aws/versions/5.14.0/platforms
```

The response includes a `provider-binary-upload` URL that you will use to upload the binary file for the platform.

```json
"links": {
  "provider-binary-upload": "https://archivist.terraform.io/v1/object/dmF1b45c367djh45nj78"
}
```

### Upload provider binary

Upload the platform binary file to the `provider-binary-upload` URL returned in the [previous step](#create-a-version). The example command below uploads the binary from your local machine.

```shell-session
curl -T local-example/terraform-provider-random_5.14.0_linux_amd64.zip https://archivist.terraform.io/v1/object/dmF1b45c367djh45nj78
```

The version is available in the HCP Terraform user interface. Consumers can now begin using this provider version in configurations. You can repeat these steps starting from [Create a provider platform](#create-a-provider-platform) to add additional platform binaries for the release.

## Checking Release Files

Consumers cannot use a private provider version until you upload all required [release files](#release-files). To determine whether these files have been uploaded:

1. Click **Registry** and click the private provider to go to its details page.
1. Use the version menu to navigate to the version you want to check. The UI shows a warning banner for versions that do not have all required release files.
1. Open the **Manage Provider** menu and select **Show release files**. The **Release Files** page appears containing lists of uploaded and missing files for the current version.

## Managing private providers

Use the HCP Terraform API to create, read, update, and delete the following:

- [GPG keys](/terraform/cloud-docs/api-docs/private-registry/gpg-keys)
- [Private providers](/terraform/cloud-docs/api-docs/private-registry/providers)
- [Provider versions and platforms](/terraform/cloud-docs/api-docs/private-registry/provider-versions-platforms)

### Deleting private providers and versions

In addition to the [Registry Providers API](/terraform/cloud-docs/api-docs/private-registry/providers#delete-a-provider), you can delete providers and provider versions through the HCP Terraform UI. To delete providers and versions in the UI:

1. Click **Registry** and click the private provider to go to its details page.
1. If you want to delete a single version, use the **Versions** menu to select it.
1. Open the **Manage Provider** menu and select **Delete Provider**. The **Delete Provider from Organization** box appears.
1. Select an action from the menu:
   - **Delete only this provider version**: Deletes only the version of the provider you are currently viewing.
   - **Delete all versions for this provider**: Deletes the entire provider and all associated versions.
1. Type the provider name into the confirmation box and click **Delete**.

The provider version or entire provider has been deleted from this organization's private registry and its data has been removed. Consumers will no longer be able to reference it in configurations.

### Restoring a deleted provider

Deletion is permanent, but you can restore a deleted private provider by re-adding it to your organization and recreating its versions and platforms.
"}
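The create-platform step above boils down to: hash the platform zip, then send that hash and filename in a JSON:API body. A minimal Python sketch of building that body; the helper name `platform_payload` is illustrative and not part of the HCP Terraform API, and serializing/POSTing the result is left to the `curl` command shown in the walkthrough.

```python
import hashlib
from pathlib import Path

def platform_payload(zip_path: str, os_name: str, arch: str) -> dict:
    """Build the request body for the 'Create a Provider Platform' endpoint.

    The shasum must match this file's entry in the SHA256SUMS file, so we
    compute it directly from the binary.
    """
    digest = hashlib.sha256(Path(zip_path).read_bytes()).hexdigest()
    return {
        "data": {
            "type": "registry-provider-version-platforms",
            "attributes": {
                "os": os_name,
                "arch": arch,
                "shasum": digest,
                "filename": Path(zip_path).name,
            },
        }
    }
```

Dump the returned dict with `json.dumps` into `platform.json` and POST it exactly as in the `curl` example.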
{"questions":"terraform Location Search for providers modules and usage examples in the HCP Terraform private registry UI All users in an organization can view the HCP Terraform private registry and use the available providers and modules A private registry has some key requirements and differences from the public Terraform Registry terraform registry Using Providers and Modules from the Private Registry Find available providers and modules and include them in configurations page title Using Providers and Modules Private Module Registry HCP Terraform","answers":"---\npage_title: Using Providers and Modules - Private Module Registry - HCP Terraform\ndescription: Find available providers and modules and include them in configurations. \n---\n\n# Using Providers and Modules from the Private Registry\n\nAll users in an organization can view the HCP Terraform private registry and use the available providers and modules. A private registry has some key requirements and differences from the [public Terraform Registry](\/terraform\/registry):\n\n- **Location:** Search for providers, modules, and usage examples in the HCP Terraform private registry UI.\n- **Provider and Module block `source` argument:** Private providers and modules use a [different format](\/terraform\/cloud-docs\/registry\/using#using-private-providers-and-modules-in-configurations).\n- **Terraform version:** HCP Terraform workspaces using version 0.11 and higher can automatically access your private modules during Terraform runs, and workspaces using version 0.13 and higher can also automatically access private providers.\n- **Authentication:** If you run Terraform on the command line, you must [authenticate](\/terraform\/cloud-docs\/registry\/using#authentication) to HCP Terraform or your instance to use providers and modules in your organization\u2019s private registry.\n\nHCP Terraform supports using modules in written configuration or through the [no-code provisioning 
workflow](\/terraform\/cloud-docs\/no-code-provisioning\/provisioning).\n\n## Finding Providers and Modules\n\nTo find available providers and modules, click the **Registry** button. The **Registry** page appears.\n\nClick **Providers** and **Modules** to toggle back and forth between lists of available providers and modules in the private registry. You can also use the search field to filter for titles that contain a specific keyword. The search does not include READMEs or resource details.\n\n### Shared Providers and Modules - Terraform Enterprise\n\nOn Terraform Enterprise, your [registry sharing](\/terraform\/enterprise\/admin\/application\/registry-sharing) configuration may grant you access to another organization's providers and modules. Providers and modules that are shared with your current organization have a **Shared** badge in the private registry (below). Providers and modules in your current organization that are shared with other organizations have a badge that says **Sharing**.\n\n### Viewing Provider and Module Details and Versions\n\nClick a provider or module to view its details page. Use the **Versions** menu in the upper right to switch between the available versions, and use the **Readme**, **Inputs**, **Outputs**, **Dependencies**, and **Resources** tabs to view more information about the selected version.\n\n### Viewing Nested Modules and Examples\n\nUse the **Submodules** menu to navigate to the detail pages for any nested modules. Use the **Examples** menu to navigate to the detail pages for any available example modules.\n\n## Provisioning Infrastructure from No-Code Ready Modules\n\nYou can use modules marked **No-Code Ready** to create a new workspace and automatically provision the module's resources without writing any Terraform configuration. 
Refer to [Provisioning No-Code Infrastructure](\/terraform\/cloud-docs\/no-code-provisioning\/provisioning) for details.\n\n## Using Public Providers and Modules in Configurations\n\n> **Hands-on:** Try the [Use Modules from the Registry](\/terraform\/tutorials\/modules\/module-use) tutorial.\n\nThe syntax for public providers in a private registry is the same as for providers that you use directly from the public Terraform Registry. The syntax for the [provider block](\/terraform\/language\/providers\/configuration#provider-configuration-1) `source` argument is `<NAMESPACE>\/<PROVIDER_NAME>`.\n\n```hcl\nterraform {\n  required_providers {\n    google = {\n      source = \"hashicorp\/google\"\n      version = \"4.0.0\"\n    }\n  }\n}\n```\n\nThe syntax for referencing public modules in the [module block](\/terraform\/language\/modules\/syntax) `source` argument is `<NAMESPACE>\/<MODULE_NAME>\/<PROVIDER_NAME>`.\n\n```hcl\nmodule \"subnets\" {\n  source  = \"hashicorp\/subnets\/cidr\"\n  version = \"1.0.0\"\n}\n```\n\n## Using Private Providers and Modules in Configurations\n\nThe syntax for referencing private providers in the [provider block](\/terraform\/language\/providers\/configuration#provider-configuration-1) `source` argument is `<HOSTNAME>\/<NAMESPACE>\/<PROVIDER_NAME>`. For the SaaS version of HCP Terraform, the hostname is `app.terraform.io`.\n\n```hcl\nterraform {\n  required_providers {\n    random = {\n      source = \"app.terraform.io\/demo-custom-provider\/random\"\n      version = \"1.1.0\"\n    }\n  }\n}\n```\n\nThe syntax for referencing private modules in the [module block](\/terraform\/language\/modules\/syntax) `source` argument is `<HOSTNAME>\/<ORGANIZATION>\/<MODULE_NAME>\/<PROVIDER_NAME>`.\n\n- **Hostname:** For the SaaS version of HCP Terraform, use `app.terraform.io`. 
In Terraform Enterprise, use the hostname for your instance or the [generic hostname](\/terraform\/cloud-docs\/registry\/using#generic-hostname-terraform-enterprise).\n- **Organization:** If you are using a shared module with Terraform Enterprise, the module's organization name may be different from your organization's name. Check the source string at the top of the module's registry page to find the proper organization name.\n\n```hcl\nmodule \"vpc\" {\n  source  = \"app.terraform.io\/example_corp\/vpc\/aws\"\n  version = \"1.0.4\"\n}\n```\n\n\n### Generic Hostname - HCP Terraform and Terraform Enterprise\n\nYou can use the generic hostname `localterraform.com` in module sources to reference modules without modifying the HCP Terraform or Terraform Enterprise instance. When you run Terraform, it automatically requests any `localterraform.com` modules from the instance it runs on.\n\n```hcl\nmodule \"vpc\" {\n  source  = \"localterraform.com\/example_corp\/vpc\/aws\"\n  version = \"1.0.4\"\n}\n```\n\n~> **Important**: CLI-driven workflows require Terraform CLI v1.4.0 or above.\n\nTo test configurations on a developer workstation without the remote backend configured, you must replace the generic hostname with a literal hostname in all module sources and then change them back before committing to VCS. We are working on making this workflow smoother, but we only recommend `localterraform.com` for large organizations that use multiple Terraform Enterprise instances.\n\n### Provider and Module Availability\n\nA workspace can only use private providers and modules from its own organization's registry. 
When using providers or modules from multiple organizations in the same configuration, we recommend:\n\n- **HCP Terraform:** [Add providers and modules to the registry](\/terraform\/cloud-docs\/registry\/publish-modules#sharing-modules-across-organizations) for each organization that requires access.\n\n- **Terraform Enterprise:** Check your site's [registry sharing](\/terraform\/enterprise\/admin\/application\/registry-sharing) configuration. Workspaces can also use private providers and modules from organizations that are sharing with the workspace's organization.\n\n## Running Configurations with Private Providers and Modules\n\n### Version Requirements\n\nTerraform version 0.11 or later is required to use private modules in HCP Terraform workspaces and to use the CLI to apply configurations with private modules. Terraform version 0.13 and later is required to use private providers in HCP Terraform workspaces and apply configurations with private providers.\n\n\n### Authentication\n\nTo authenticate with HCP Terraform, you can use either a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or a [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens). The type of token you choose may grant different permissions.\n\n- **User Token**: Allows you to access providers and modules from any organization in which you are a member. You are a member of an organization if you belong to any team in that organization. You can also access modules from any organization that is sharing modules with any of your organizations.\n\n -> **Note:** When SAML SSO is enabled, there is a [session timeout for user API tokens](\/terraform\/enterprise\/saml\/login#api-token-expiration), requiring you to periodically re-authenticate through the web UI. Expired tokens produce a _401 Unauthorized_ error. 
A SAML SSO account with [IsServiceAccount](\/terraform\/enterprise\/saml\/attributes#isserviceaccount) is treated as a service account and will not have the session timeout.\n\n- **Team Token**: Allows you to access the private registry of that team's organization and the registries from any other organizations that have configured sharing.\n\n\n_Permissions Example_\n\nA user belongs to three organizations (1, 2, and 3), and organizations 1 and 2 share access with each other. In this case, the user's token gives them access to the private registries for all of the organizations they belong to: 1, 2, and 3. However, a team token from a team in organization 1 only gives the user access to the private registry in organizations 1 and 2.\n\n#### Configure Authentication\n\nTo configure authentication to HCP Terraform or your Terraform Enterprise instance, you can:\n\n- (Terraform 0.12.21 or later) Use the [`terraform login`](\/terraform\/cli\/commands\/login) command to obtain and save a user API token.\n- Create a token and [manually configure credentials in the CLI config file][cli-credentials].\n\nMake sure the hostname matches the hostname you use in provider and module sources because if the same HCP Terraform server is available at two hostnames, Terraform will not know that they reference the same server. 
To support multiple hostnames for provider and module sources, use the `terraform login` command multiple times and specify a different hostname each time.\n\n[user-token]: \/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens\n\n[cli-credentials]: \/terraform\/cli\/config\/config-file#credentials\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers","site":"terraform"}
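The registry source-address formats described in the record above differ only in their leading component: public addresses have three segments, private ones are prefixed with a hostname. A small sketch of that distinction; the function name and the dotted-hostname heuristic are mine, not Terraform's actual source-address parser, and only the two shapes shown in the document are handled.

```python
def classify_module_source(source: str) -> str:
    """Rough classification of registry module source strings:

    <NAMESPACE>/<MODULE_NAME>/<PROVIDER_NAME>        -> public registry
    <HOSTNAME>/<ORG>/<MODULE_NAME>/<PROVIDER_NAME>   -> private registry
    """
    parts = source.split("/")
    if len(parts) == 3:
        return "public-registry"
    if len(parts) == 4 and "." in parts[0]:
        # Hostnames such as app.terraform.io or localterraform.com
        return f"private-registry:{parts[0]}"
    raise ValueError(f"unrecognized module source: {source!r}")
```

Note that the generic hostname `localterraform.com` classifies as private here too, which matches its behavior: it resolves to whichever instance runs the configuration.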
{"questions":"terraform We recommend that you test your Sentinel policies extensively before deploying page title Mocking Terraform Sentinel Data Sentinel HCP Terraform Learn how to use the UI or API to generate mock Sentinel data and how to use Mocking Terraform Sentinel Data that data for testing","answers":"---\npage_title: Mocking Terraform Sentinel Data - Sentinel - HCP Terraform\ndescription: >-\n  Learn how to use the UI or API to generate mock Sentinel data and how to use\n  that data for testing.\n---\n\n# Mocking Terraform Sentinel Data\n\nWe recommend that you test your Sentinel policies extensively before deploying\nthem within HCP Terraform. An important part of this process is mocking\nthe data that you wish your policies to operate on.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nDue to the highly variable structure of data that can be produced by an\nindividual Terraform configuration, HCP Terraform provides the ability to\ngenerate mock data from existing configurations. This can be used to create\nsample data for a new policy, or data to reproduce issues in an existing one.\n\nTesting policies is done using the [Sentinel\nCLI](https:\/\/docs.hashicorp.com\/sentinel\/commands). More general information on\ntesting Sentinel policies can be found in the [Testing\nsection](https:\/\/docs.hashicorp.com\/sentinel\/writing\/testing) of the [Sentinel\nruntime documentation](https:\/\/docs.hashicorp.com\/sentinel).\n\n~> **Be careful!** Mock data generated by HCP Terraform directly exposes any\nand all data within the configuration, plan, and state. Terraform attempts to\nscrub sensitive data from these mocks, but we do not guarantee 100% accuracy.\nTreat this data with care, and avoid generating mocks with live sensitive data\nwhen possible. 
Access to this information requires [permission to download\nSentinel mocks](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) for the\nworkspace where the data was generated.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Generating Mock Data Using the UI\n\nMock data can be generated using the UI by expanding the plan status section of\nthe run page, and clicking on the **Download Sentinel mocks** button.\n\n![sentinel mock generate ui](\/img\/docs\/download-mocks.png)\n\nFor more information on creating a run, see the\n[Terraform Runs and Remote Operations](\/terraform\/cloud-docs\/run\/remote-operations) section of the docs.\n\nIf the button is not visible, then the plan is ineligible for mock generation or\nthe user doesn't have the necessary permissions. See [Mock Data\nAvailability](#mock-data-availability) for more details.\n\n## Generating Mock Data Using the API\n\nMock data can also be created with the [Plan Export\nAPI](\/terraform\/cloud-docs\/api-docs\/plan-exports).\n\nMultiple steps are required for mock generation. The export process is\nasynchronous, so you must monitor the request to know when the data is generated\nand available for download.\n\n1. Get the plan ID for the run that you want to generate the mock for by\n   [getting the run details](\/terraform\/cloud-docs\/api-docs\/run#get-run-details).\n   Look for the `id` of the `plan` object within the `relationships` section of\n   the return data.\n1. [Request a plan\n   export](\/terraform\/cloud-docs\/api-docs\/plan-exports#create-a-plan-export) using the\n   discovered plan ID. Supply the Sentinel export type `sentinel-mock-bundle-v0`.\n1. Monitor the export request by [viewing the plan\n   export](\/terraform\/cloud-docs\/api-docs\/plan-exports#show-a-plan-export). When the\n   status is `finished`, the data is ready for download.\n1. 
Finally, [download the export\n   data](\/terraform\/cloud-docs\/api-docs\/plan-exports#download-exported-plan-data).\n   You have up to an hour from the completion of the export request - after\n   that, the mock data expires and must be re-generated.\n\n## Using Mock Data\n\n-> **Note:** The v2 mock files are only available on Terraform 0.12 and higher.\n\nMock data is supplied as a bundled tarball, containing the following files:\n\n```\nmock-tfconfig.sentinel    # tfconfig mock data\nmock-tfconfig-v2.sentinel # tfconfig\/v2 mock data\nmock-tfplan.sentinel      # tfplan mock data\nmock-tfplan-v2.sentinel   # tfplan\/v2 mock data\nmock-tfstate.sentinel     # tfstate mock data\nmock-tfstate-v2.sentinel  # tfstate\/v2 mock data\nmock-tfrun.sentinel       # tfrun mock data\nsentinel.hcl              # sample configuration file\n```\n\nThe sample `sentinel.hcl` file contains mappings to the mocks so that you\ncan get started testing with `sentinel apply` right away. For `sentinel test`,\nhowever, we recommend a more detailed layout.\n\nWe recommend placing the files for `sentinel test` in a subdirectory\nof the repository holding your policies, so they don't interfere with the\ncommand's automatic policy detection. 
While the test data is Sentinel code, it's\nnot a policy and will produce errors if evaluated like one.\n\n```\n.\n\u251c\u2500\u2500 foo.sentinel\n\u251c\u2500\u2500 sentinel.hcl\n\u251c\u2500\u2500 test\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 foo\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 fail.hcl\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 pass.hcl\n\u2514\u2500\u2500 testdata\n    \u251c\u2500\u2500 mock-tfconfig.sentinel\n    \u251c\u2500\u2500 mock-tfconfig-v2.sentinel\n    \u251c\u2500\u2500 mock-tfplan.sentinel\n    \u251c\u2500\u2500 mock-tfplan-v2.sentinel\n    \u251c\u2500\u2500 mock-tfstate.sentinel\n    \u251c\u2500\u2500 mock-tfstate-v2.sentinel\n    \u2514\u2500\u2500 mock-tfrun.sentinel\n```\n\nEach configuration that needs access to the mock should reference the mock data\nfiles within the `mock` block in the Sentinel configuration file.\n\nFor `sentinel apply`, this path is relative to the working directory. Assuming\nyou always run this command from the repository root, the `sentinel.hcl`\nconfiguration file would look like:\n\n```hcl\nmock \"tfconfig\" {\n  module {\n    source = \"testdata\/mock-tfconfig.sentinel\"\n  }\n}\n\nmock \"tfconfig\/v1\" {\n  module {\n    source = \"testdata\/mock-tfconfig.sentinel\"\n  }\n}\n\nmock \"tfconfig\/v2\" {\n  module {\n    source = \"testdata\/mock-tfconfig-v2.sentinel\"\n  }\n}\n\nmock \"tfplan\" {\n  module {\n    source = \"testdata\/mock-tfplan.sentinel\"\n  }\n}\n\nmock \"tfplan\/v1\" {\n  module {\n    source = \"testdata\/mock-tfplan.sentinel\"\n  }\n}\n\nmock \"tfplan\/v2\" {\n  module {\n    source = \"testdata\/mock-tfplan-v2.sentinel\"\n  }\n}\n\nmock \"tfstate\" {\n  module {\n    source = \"testdata\/mock-tfstate.sentinel\"\n  }\n}\n\nmock \"tfstate\/v1\" {\n  module {\n    source = \"testdata\/mock-tfstate.sentinel\"\n  }\n}\n\nmock \"tfstate\/v2\" {\n  module {\n    source = \"testdata\/mock-tfstate-v2.sentinel\"\n  }\n}\n\nmock \"tfrun\" {\n  module {\n    source = 
\"testdata\/mock-tfrun.sentinel\"\n  }\n}\n```\n\nFor `sentinel test`, the paths are relative to the specific test configuration\nfile. For example, the contents of `pass.hcl`, asserting that the result of the\n`main` rule was `true`, would be:\n\n```\nmock \"tfconfig\" {\n  module {\n    source = \"..\/..\/testdata\/mock-tfconfig.sentinel\"\n  }\n}\n\nmock \"tfconfig\/v1\" {\n  module {\n    source = \"..\/..\/testdata\/mock-tfconfig.sentinel\"\n  }\n}\n\nmock \"tfconfig\/v2\" {\n  module {\n    source = \"..\/..\/testdata\/mock-tfconfig-v2.sentinel\"\n  }\n}\n\nmock \"tfplan\" {\n  module {\n    source = \"..\/..\/testdata\/mock-tfplan.sentinel\"\n  }\n}\n\nmock \"tfplan\/v1\" {\n  module {\n    source = \"..\/..\/testdata\/mock-tfplan.sentinel\"\n  }\n}\n\nmock \"tfplan\/v2\" {\n  module {\n    source = \"..\/..\/testdata\/mock-tfplan-v2.sentinel\"\n  }\n}\n\nmock \"tfstate\" {\n  module {\n    source = \"..\/..\/testdata\/mock-tfstate.sentinel\"\n  }\n}\n\nmock \"tfstate\/v1\" {\n  module {\n    source = \"..\/..\/testdata\/mock-tfstate.sentinel\"\n  }\n}\n\nmock \"tfstate\/v2\" {\n  module {\n    source = \"..\/..\/testdata\/mock-tfstate-v2.sentinel\"\n  }\n}\n\nmock \"tfrun\" {\n  module {\n    source = \"..\/..\/testdata\/mock-tfrun.sentinel\"\n  }\n}\n\ntest {\n  rules = {\n    main = true\n  }\n}\n```\n\n## Mock Data Availability\n\nThe following factors can prevent you from generating mock data:\n\n* You do not have permission to download Sentinel mocks for the workspace.\n  ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n  Permission is required to protect the possibly sensitive data which can be\n  produced via mock generation.\n* The run has not progressed past the planning stage, or did not create a plan\n  successfully.\n* The run progressed past the planning stage prior to July 23, 2021. 
Prior to this date, HCP Terraform only kept JSON plans for 7 days.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nIf a plan cannot have its mock data exported due to any of these reasons, the\n**Download Sentinel mocks** button within the plan status section of the UI will\nnot be visible.\n\n-> **Note:** Only a successful plan is required for mock generation. Sentinel can still generate the data if apply or policy checks fail.","site":"terraform","answers_cleaned":"    page title  Mocking Terraform Sentinel Data   Sentinel   HCP Terraform description       Learn how to use the UI or API to generate mock Sentinel data and how to use   that data for testing         Mocking Terraform Sentinel Data  We recommend that you test your Sentinel policies extensively before deploying them within HCP Terraform  An important part of this process is mocking the data that you wish your policies to operate on        BEGIN  TFC only name pnp callout      include  tfc package callouts policies mdx       END  TFC only name pnp callout      Due to the highly variable structure of data that can be produced by an individual Terraform configuration  HCP Terraform provides the ability to generate mock data from existing configurations  This can be used to create sample data for a new policy  or data to reproduce issues in an existing one   Testing policies is done using the  Sentinel CLI  https   docs hashicorp com sentinel commands   More general information on testing Sentinel policies can be found in the  Testing section  https   docs hashicorp com sentinel writing testing  of the  Sentinel runtime documentation  https   docs hashicorp com sentinel         Be careful    Mock data generated by HCP Terraform directly exposes any and all data within the configuration  plan  and state  Terraform attempts to scrub sensitive data from these mocks  but we do not guarantee 100  accuracy  Treat this data with care  and avoid generating mocks with live sensitive data 
when possible  Access to this information requires  permission to download Sentinel mocks   terraform cloud docs users teams organizations permissions  for the workspace where the data was generated    permissions citation    intentionally unused   keep for maintainers     Generating Mock Data Using the UI  Mock data can be generated using the UI by expanding the plan status section of the run page  and clicking on the   Download Sentinel mocks   button     sentinel mock generate ui   img docs download mocks png   For more information on creating a run  see the  Terraform Runs and Remote Operations   terraform cloud docs run remote operations  section of the docs   If the button is not visible  then the plan is ineligible for mock generation or the user doesn t have the necessary permissions  See  Mock Data Availability   mock data availability  for more details      Generating Mock Data Using the API  Mock data can also be created with the  Plan Export API   terraform cloud docs api docs plan exports    Multiple steps are required for mock generation  The export process is asynchronous  so you must monitor the request to know when the data is generated and available for download   1  Get the plan ID for the run that you want to generate the mock for by     getting the run details   terraform cloud docs api docs run get run details      Look for the  id  of the  plan  object within the  relationships  section of    the return data  1   Request a plan    export   terraform cloud docs api docs plan exports create a plan export  using the    discovered plan ID  Supply the Sentinel export type  sentinel mock bundle v0   1  Monitor the export request by  viewing the plan    export   terraform cloud docs api docs plan exports show a plan export   When the    status is  finished   the data is ready for download  1  Finally   download the export    data   terraform cloud docs api docs plan exports download exported plan data      You have up to an hour from the completion 
of the export request   after    that  the mock data expires and must be re generated      Using Mock Data       Note    The v2 mock files are only available on Terraform 0 12 and higher   Mock data is supplied as a bundled tarball  containing the following files       mock tfconfig sentinel      tfconfig mock data mock tfconfig v2 sentinel   tfconfig v2 mock data mock tfplan sentinel        tfplan mock data mock tfplan v2 sentinel     tfplan v2 mock data mock tfstate sentinel       tfstate mock data mock tfstate v2 sentinel    tfstate v2 mock data mock tfrun sentinel         tfrun mock data sentinel hcl                sample configuration file      The sample  sentinel hcl  file contains mappings to the mocks so that you can get started testing with  sentinel apply  right away  For  sentinel test   however  we recommend a more detailed layout   We recommend placing the files for  sentinel test  in a subdirectory of the repository holding your policies  so they don t interfere with the command s automatic policy detection  While the test data is Sentinel code  it s not a policy and will produce errors if evaluated like one             foo sentinel     sentinel hcl     test         foo             fail hcl             pass hcl     testdata         mock tfconfig sentinel         mock tfconfig v2 sentinel         mock tfplan sentinel         mock tfplan v2 sentinel         mock tfstate sentinel         mock tfstate v2 sentinel         mock tfrun sentinel      Each configuration that needs access to the mock should reference the mock data files within the  mock  block in the Sentinel configuration file   For  sentinel apply   this path is relative to the working directory  Assuming you always run this command from the repository root  the  sentinel hcl  configuration file would look like      hcl mock  tfconfig      module       source    testdata mock tfconfig sentinel         mock  tfconfig v1      module       source    testdata mock tfconfig sentinel         mock  
tfconfig v2      module       source    testdata mock tfconfig v2 sentinel         mock  tfplan      module       source    testdata mock tfplan sentinel         mock  tfplan v1      module       source    testdata mock tfplan sentinel         mock  tfplan v2      module       source    testdata mock tfplan v2 sentinel         mock  tfstate      module       source    testdata mock tfstate sentinel         mock  tfstate v1      module       source    testdata mock tfstate sentinel         mock  tfstate v2      module       source    testdata mock tfstate v2 sentinel         mock  tfrun      module       source    testdata mock tfrun sentinel             For  sentinel test   the paths are relative to the specific test configuration file  For example  the contents of  pass hcl   asserting that the result of the  main  rule was  true   would be       mock  tfconfig      module       source          testdata mock tfconfig sentinel         mock  tfconfig v1      module       source          testdata mock tfconfig sentinel         mock  tfconfig v2      module       source          testdata mock tfconfig v2 sentinel         mock  tfplan      module       source          testdata mock tfplan sentinel         mock  tfplan v1      module       source          testdata mock tfplan sentinel         mock  tfplan v2      module       source          testdata mock tfplan v2 sentinel         mock  tfstate      module       source          testdata mock tfstate sentinel         mock  tfstate v1      module       source          testdata mock tfstate sentinel         mock  tfstate v2      module       source          testdata mock tfstate v2 sentinel         mock  tfrun      module       source          testdata mock tfrun sentinel         test     rules         main   true               Mock Data Availability  The following factors can prevent you from generating mock data     You do not have permission to download Sentinel mocks for the workspace      More about permissions    
terraform cloud docs users teams organizations permissions     Permission is required to protect the possibly sensitive data which can be   produced via mock generation    The run has not progressed past the planning stage  or did not create a plan   successfully    The run progressed past the planning stage prior to July 23  2021  Prior to this date  HCP Terraform only kept JSON plans for 7 days    permissions citation    intentionally unused   keep for maintainers  If a plan cannot have its mock data exported due to any of these reasons  the   Download Sentinel mocks   button within the plan status section of the UI will not be visible        Note    Only a successful plan is required for mock generation  Sentinel can still generate the data if apply or policy checks fail "}
{"questions":"terraform Policies and policy sets Add policies to HCP Terraform group policies into policy sets and apply policy sets to workspaces HCP Terraform checks the Terraform plan against the policy set for each run Policies are rules that HCP Terraform enforces on Terraform runs You can define policies using either the Sentinel terraform cloud docs policy enforcement sentinel or Open Policy Agent OPA terraform cloud docs policy enforcement opa policy as code frameworks page title Manage Policies and Policy Sets HCP Terraform","answers":"---\npage_title: Manage Policies and Policy Sets - HCP Terraform\ndescription: >-\n Add policies to HCP Terraform, group policies into policy sets, and apply policy sets to workspaces. HCP Terraform checks the Terraform plan against the policy set for each run.\n---\n\n# Policies and policy sets\n\nPolicies are rules that HCP Terraform enforces on Terraform runs. You can define policies using either the [Sentinel](\/terraform\/cloud-docs\/policy-enforcement\/sentinel) or [Open Policy Agent (OPA)](\/terraform\/cloud-docs\/policy-enforcement\/opa) policy-as-code frameworks.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n\n@include 'tfc-package-callouts\/policies.mdx'\n\n<!-- END: TFC:only name:pnp-callout -->\n\nPolicy sets are collections of policies you can apply globally or to specific [projects](\/terraform\/cloud-docs\/projects\/manage) and workspaces in your organization. For each run in the applicable workspaces, HCP Terraform checks the Terraform plan against the policy set. Depending on the [enforcement level](#policy-enforcement-levels), failed policies can stop a run in a workspace. 
If you do not want to enforce a policy set on a specific workspace, you can exclude the workspace from that set.\n\n## Permissions\n\nTo view and manage policies and policy sets, you must have [manage policy permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-policies) for your organization.\n\n## Policy checks versus policy evaluations\n\nPolicy checks and evaluations can access different types of data and enable slightly different workflows.\n\n### Policy checks\n\nOnly Sentinel policies can run as policy checks. Checks can access cost estimation data but can only use the latest version of Sentinel.\n\n~> **Warning:** Policy checks are deprecated and will be permanently removed in August 2025. We recommend that you start using policy evaluations to avoid disruptions.\n\n### Policy evaluations\n\nOPA policy sets can only run as policy evaluations, and you can enable policy evaluations for Sentinel policy sets by selecting the `Enhanced` policy set type. Policy evaluations run within the [HCP Terraform agent](\/terraform\/cloud-docs\/agents) in HCP Terraform's infrastructure.\n\nFor Sentinel policy sets, using policy evaluations lets you:\n- Enable overrides for soft-mandatory and hard-mandatory policies, letting any user with [Manage Policy Overrides permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-policy-overrides) proceed with a run in the event of policy failure.\n- Select a specific Sentinel runtime version for the policy set.\n\nPolicy evaluations **cannot** access cost estimation data, so use policy checks if your policies rely on cost estimates.\n\n~> **Tip:** Sentinel runtime version pinning is supported only for Sentinel 0.23.1 and above, as well as HCP Terraform agent versions 1.13.1 and above.\n\n## Policy enforcement levels\n\nYou can set an enforcement level for each policy that determines what happens when a Terraform plan does not pass the policy rule. 
Sentinel and OPA policies have different enforcement levels available.\n\n### Sentinel\n\nSentinel provides three policy enforcement levels:\n\n- **advisory:** Failed policies never interrupt the run. They provide information about policy check failures in the UI.\n- **soft mandatory:** Failed policies stop the run, but any user with [Manage Policy Overrides permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-policy-overrides) can override these failures and allow the run to complete.\n- **hard mandatory:** Failed policies stop the run. Terraform does not apply runs with failed **hard mandatory** policies until a user fixes the issue that caused the failure.\n\n### OPA\n\nOPA provides two policy enforcement levels:\n\n- **advisory:** Failed policies never interrupt the run. They provide information about policy failures in the UI.\n- **mandatory:** Failed policies stop the run, but any user with [Manage Policy Overrides permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-policy-overrides) can override these failures and allow the run to complete.\n\n## Policy publishing workflows\n\nYou can create policies and policy sets for your HCP Terraform organization in one of three ways:\n- **HCP Terraform web UI:** Add individually-managed policies manually in the HCP Terraform UI, and store your policy code in HCP Terraform. This workflow is ideal for initial experimentation with policy enforcement, but we do not recommend it for organizations with large numbers of policies.\n- **Version control:** Connect HCP Terraform to a version control repository containing a policy set. 
When you push changes to the repository, HCP Terraform automatically uses the updated policy set.\n- **Automated:** Push versions of policy sets to HCP Terraform with the [HCP Terraform Policy Sets API](\/terraform\/cloud-docs\/api-docs\/policy-sets#create-a-policy-set-version) or the  `tfe` provider [`tfe_policy_set`](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs\/resources\/policy_set) resource. This workflow is ideal for automated Continuous Integration and Deployment (CI\/CD) pipelines.\n\n### Manage individual policies in the web UI\n\nYou can add policies directly to HCP Terraform using the web UI. This process requires you to paste completed, valid Sentinel or Rego code into the UI. We recommend validating your policy code before adding it to HCP Terraform.\n\n#### Add managed policies\n\nTo add an individually managed policy:\n\n1. Go to **Policies** in your organization\u2019s settings. A list of managed policies in HCP Terraform appears. Each policy designates its policy framework (Sentinel or OPA) and associated policy sets.\n1. Click **Create a new policy**.\n1. Choose the **Policy framework** you want to use. You can only create a policy set from policies written using the same framework. You cannot change the framework type after you create the policy.\n1. Complete the following fields to define the policy:\n   - **Policy Name:** Add a name containing letters, numbers, `-`, and `_`. HCP Terraform displays this name in the UI. The name must be unique within your organization.\n   - **Description:** Describe the policy\u2019s purpose. The description supports Markdown rendering, and HCP Terraform displays this text in the UI.\n   - **Enforcement mode:** Choose whether this policy can stop Terraform runs and whether users can override it. Refer to [policy enforcement levels](#policy-enforcement-levels) for more details.\n   - **(OPA Only) Query:** Write a query to identify a specific policy rule within your rego code. 
HCP Terraform uses this query to determine the result of the policy. The query is typically a combination of the policy package name and rule name, such as `terraform.deny`. The result of this query must be an array. The policy passes when the array is empty.\n   - **Policy code:** Paste the code for the policy: either Sentinel code or Rego code for OPA policies. The UI provides syntax highlighting for the policy language.\n   - **(Optional) Policy sets:** Select one or more existing managed policy sets where you want to add the new policy. You can only select policy sets compatible with the chosen policy set framework. If there are no policy sets available, you can [create a new one](#create-policy-sets).\n\nThe policy is now available in the HCP Terraform UI for you to edit and add to one or more policy sets.\n\n#### Edit managed policies\n\nTo edit a managed policy:\n\n- Go to **Policies** in your organization\u2019s settings.\n- Click the policy you want to edit to go to its details page.\n- Edit the policy's fields and then click **Update policy**.\n\n#### Delete managed policies\n\n~> **Warning:** Deleting a policy that applies to an active run causes that run\u2019s policy evaluation stage to error. We recommend warning other members of your organization before you delete widely used policies.\n\nYou cannot restore policies after deletion. You must manually re-add them to HCP Terraform. 
You may want to save the policy code in a separate location before you delete the policy.\n\nTo delete a managed policy:\n\n- Go to **Policies** in your organization\u2019s settings.\n- Click the policy you want to delete to go to its details page.\n- Click **Delete policy** and then click **Yes, delete policy** to confirm.\n\nThe policy no longer appears in HCP Terraform and in any associated policy sets.\n\n## Manage policy sets\n\nPolicy sets are collections of policies that you can apply globally or to specific [projects](\/terraform\/cloud-docs\/projects\/manage) and workspaces.\n\nTo view and manage policy sets, go to the **Policy Sets** section of your organization\u2019s settings. This page contains all of the policy sets available in the organization, including those added through the API.\n\nThe way you set up and configure a new policy set depends on your workflow and where you store policies.\n\n- For [managed policies](#managed-policies), you use the UI to create a policy set and add managed policies.\n- For policy sets in a version control system, you use the UI to create a policy set connected to that repository. HCP Terraform automatically refreshes the policy set when you change relevant files in that repository. Version control policy sets have specific organization and formatting requirements. Refer to [Sentinel VCS Repositories](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/vcs) and [OPA VCS Repositories](\/terraform\/cloud-docs\/policy-enforcement\/opa\/vcs) for details.\n- For automated workflows like continuous deployment, you can use the UI to create an empty policy set and then use the [Policy Sets API](\/terraform\/cloud-docs\/api-docs\/policy-sets) to add policies. You can also use the API or the [`tfe` provider (Sentinel Only)](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs\/resources\/policy_set) to add an entire, packaged policy set.\n\n### Create policy sets\n\nTo create a policy set:\n1. 
Go to **Policy Sets** in your organization\u2019s settings.\n1. Click **Connect a new policy set**.\n1. Choose your workflow.\n   - For managed policies, click **create a policy set with individually managed policies**. HCP Terraform shows a form to create a policy set and add individually managed policies.\n   - For version control policies, choose a version control provider and then select the repository with your policy set. HCP Terraform shows a form to create a policy set connected to that repository.\n   - For automated workflows, click **No VCS Connection**. HCP Terraform shows a form to create an empty policy set. You can use the API to add policies to this empty policy set later.\n1. Choose a **Policy framework** for the policies you want to add. A policy set can only contain policies that use the same framework (OPA or Sentinel). You cannot change a policy set's framework type after creation.\n1. Choose a policy set scope:\n   - **Policies enforced globally:** HCP Terraform automatically enforces this global policy set on all of an organization's existing and future workspaces.\n   - **Policies enforced on selected projects and workspaces:** Use the text fields to find and select the workspaces and projects to enforce this policy set on. This affects all current and future workspaces for any chosen projects.\n1. **(Optional)** Add **Policy exclusions** for this policy set. Specify any workspaces in the policy set's scope that HCP Terraform will not enforce this policy set on.\n1. **(Sentinel Only)** Choose a policy set type:\n   - **Standard:** This is the default workflow. A Sentinel policy set uses a [policy check](#policy-checks) in HCP Terraform and lets you access cost estimation data.\n   - **Enhanced:** A Sentinel policy set uses a [policy evaluation](#policy-evaluations) in HCP Terraform. This lets you enable policy overrides and enforce a Sentinel runtime version.\n1. **(OPA Only)** Select a **Runtime version** for this policy set.\n1. 
**(OPA Only)** Allow **Overrides**, which enables users with override policy permissions to apply plans that have [mandatory policy](#policy-enforcement-levels) failures.\n1. **(VCS Only)** Optionally specify the **VCS branch** within your VCS repository where HCP Terraform should import new versions of policies. If you do not set this field, HCP Terraform uses your selected VCS repository's default branch.\n1. **(VCS Only)** Specify where your policy set files live using the **Policies path**. This lets you maintain multiple policy sets within a single repository. Use a relative path from your root directory to the directory that contains either the `sentinel.hcl` (Sentinel) or `policies.hcl` (OPA) configuration files. If you do not set this field, HCP Terraform uses the repository's root directory.\n1. **(Managed Policies Only)** Select managed **Policies** to add to the policy set. You can only add policies written with the same policy framework you selected for this set.\n1. Choose a descriptive and unique **Name** for the policy set. You can use any combination of letters, numbers, `-`, and `_`.\n1. Write an optional **Description** that tells other users about the purpose of the policy set and what it contains.\n\n### Edit policy sets\n\nTo edit a policy set:\n\n1. Go to the **Policy Sets** section of your organization\u2019s settings.\n1. Click the policy set you want to edit to go to its settings page.\n1. Adjust the settings and click **Update policy set**.\n\n### Evaluate a policy runtime upgrade\n\nYou can validate that changing a policy runtime version does not introduce any breaking changes.\n\nTo perform a policy evaluation:\n\n1. Go to the **Policy Sets** section of your organization\u2019s settings.\n1. Click the policy set you want to upgrade.\n1. Click the **Evaluate** tab.\n1. Select the **Runtime version** you wish to upgrade to.\n1. Select a **Workspace** to test the policy and upgraded version against.\n1. 
Click **Evaluate**.\n\nHCP Terraform executes the policy set using the specified version and the latest plan data for the selected workspace, then displays the evaluation results. If the evaluation returns a `Failed` status, inspect the JSON output to determine whether the issue is related to a non-compliant resource or is due to a syntax issue.\nIf the evaluation results in an error, check that the policy configuration is valid.\n\n### Delete policy sets\n\n~> **Warning:** Deleting a policy set that applies to an active run causes that run\u2019s policy evaluation stage to error. We recommend warning other members of your organization before you delete widely used policy sets.\n\nYou cannot restore policy sets after deletion. You must manually re-add them to HCP Terraform.\n\nTo delete a policy set:\n\n1. Go to **Policy Sets** in your organization\u2019s settings.\n2. Click the policy set you want to delete to go to its details page.\n3. Click **Delete policy set** and then click **Yes, delete policy set** to confirm.\n\nThe policy set no longer appears in the UI and HCP Terraform no longer applies it to any workspaces. For managed policy sets, all of the individual policies are still available in HCP Terraform. You must delete each policy individually to remove it from your organization.\n\n### (Sentinel only) Sentinel parameters\n\n[Sentinel parameters](https:\/\/docs.hashicorp.com\/sentinel\/language\/parameters) are a list of key\/value pairs that HCP Terraform sends to the Sentinel runtime when performing policy checks on workspaces. If the value parses as JSON, HCP Terraform sends it to Sentinel as the corresponding type (string, boolean, integer, map, or list). 
If the value fails JSON validation, HCP Terraform sends it as a string.\n\nYou can set Sentinel parameters when you [edit a policy set](#edit-policy-sets).","site":"terraform","answers_cleaned":"    page title  Manage Policies and Policy Sets   HCP Terraform description      Add policies to HCP Terraform  group policies into policy sets  and apply policy sets to workspaces  HCP Terraform checks the Terraform plan against the policy set for each run         Policies and policy sets  Policies are rules that HCP Terraform enforces on Terraform runs  You can define policies using either the  Sentinel   terraform cloud docs policy enforcement sentinel  or  Open Policy Agent  OPA    terraform cloud docs policy enforcement opa  policy as code frameworks        BEGIN  TFC only name pnp callout       include  tfc package callouts policies mdx        END  TFC only name pnp callout      Policy sets are collections of policies you can apply globally or to specific  projects   terraform cloud docs projects manage  and workspaces in your organization  For each run in the applicable workspaces  HCP Terraform checks the Terraform plan against the policy set  Depending on the  enforcement level   policy enforcement levels   failed policies can stop a run in a workspace  If you do not want to enforce a policy set on a specific workspace  you can exclude the workspace from that set      Permissions  To view and manage policies and policy sets  you must have  manage policy permissions   terraform cloud docs users teams organizations permissions manage policies  for your organization      Policy checks versus policy evaluations  Policy checks and evaluations can access different types of data and enable slightly different workflows       Policy checks  Only Sentinel policies can run as policy checks  Checks can access cost estimation data but can only use the latest version of Sentinel        Warning    Policy checks are deprecated and will be permanently removed in August 2025  We 
recommend that you start using policy evaluations to avoid disruptions       Policy evaluations  OPA policy sets can only run as policy evaluations  and you can enable policy evaluations for Sentinel policy sets by selecting the  Enhanced  policy set type  Policy evaluations run within the  HCP Terraform agent   terraform cloud docs agents  in HCP Terraform s infrastructure   For Sentinel policy sets  using policy evaluations lets you    Enable overrides for soft mandatory and hard mandatory policies  letting any user with  Manage Policy Overrides permission   terraform cloud docs users teams organizations permissions manage policy overrides  proceed with a run in the event of policy failure    Select a specific Sentinel runtime version for the policy set   Policy evaluations   cannot   access cost estimation data  so use policy checks if your policies rely on cost estimates        Tip    Sentinel runtime version pinning is supported only for Sentinel 0 23 1 and above  as well as HCP Terraform agent versions 1 13 1 and above     Policy enforcement levels  You can set an enforcement level for each policy that determines what happens when a Terraform plan does not pass the policy rule  Sentinel and OPA policies have different enforcement levels available       Sentinel  Sentinel provides three policy enforcement levels       advisory    Failed policies never interrupt the run  They provide information about policy check failures in the UI      soft mandatory    Failed policies stop the run  but any user with  Manage Policy Overrides permission   terraform cloud docs users teams organizations permissions manage policy overrides  can override these failures and allow the run to complete      hard mandatory    Failed policies stop the run  Terraform does not apply runs with failed   hard mandatory   policies until a user fixes the issue that caused the failure       OPA  OPA provides two policy enforcement levels       advisory   Failed policies never interrupt the run  
They provide information about policy failures in the UI.
- **Mandatory:** Failed policies stop the run, but any user with [Manage Policy Overrides permission](/terraform/cloud-docs/users-teams-organizations/permissions#manage-policy-overrides) can override these failures and allow the run to complete.

## Policy publishing workflows

You can create policies and policy sets for your HCP Terraform organization in one of three ways:

- **HCP Terraform web UI:** Add individually managed policies manually in the HCP Terraform UI, and store your policy code in HCP Terraform. This workflow is ideal for initial experimentation with policy enforcement, but we do not recommend it for organizations with large numbers of policies.
- **Version control:** Connect HCP Terraform to a version control repository containing a policy set. When you push changes to the repository, HCP Terraform automatically uses the updated policy set.
- **Automated:** Push versions of policy sets to HCP Terraform with the [HCP Terraform Policy Sets API](/terraform/cloud-docs/api-docs/policy-sets#create-a-policy-set-version) or the `tfe` provider [`tfe_policy_set`](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/policy_set) resource. This workflow is ideal for automated Continuous Integration and Deployment (CI/CD) pipelines.

## Manage individual policies in the web UI

You can add policies directly to HCP Terraform using the web UI. This process requires you to paste completed, valid Sentinel or Rego code into the UI. We recommend validating your policy code before adding it to HCP Terraform.

### Add managed policies

To add an individually managed policy:

1. Go to **Policies** in your organization's settings. A list of managed policies in HCP Terraform appears. Each policy designates its policy framework (Sentinel or OPA) and associated policy sets.
1. Click **Create a new policy**.
1. Choose the **Policy framework** you want to use. You can only create a policy set from policies written using the same framework. You cannot change the framework type after you create the policy.
1. Complete the following fields to define the policy:
   - **Policy Name:** Add a name containing letters, numbers, `-`, and `_`. HCP Terraform displays this name in the UI. The name must be unique within your organization.
   - **Description:** Describe the policy's purpose. The description supports Markdown rendering, and HCP Terraform displays this text in the UI.
   - **Enforcement mode:** Choose whether this policy can stop Terraform runs and whether users can override it. Refer to [policy enforcement levels](#policy-enforcement-levels) for more details.
   - **(OPA Only) Query:** Write a query to identify a specific policy rule within your rego code. HCP Terraform uses this query to determine the result of the policy. The query is typically a combination of the policy package name and rule name, such as `terraform.deny`. The result of this query must be an array. The policy passes when the array is empty.
   - **Policy code:** Paste the code for the policy: either Sentinel code or Rego code for OPA policies. The UI provides syntax highlighting for the policy language.
   - **(Optional) Policy sets:** Select one or more existing managed policy sets where you want to add the new policy. You can only select policy sets compatible with the chosen policy framework. If there are no policy sets available, you can [create a new one](#create-policy-sets).

The policy is now available in the HCP Terraform UI for you to edit and add to one or more policy sets.

### Edit managed policies

To edit a managed policy:

1. Go to **Policies** in your organization's settings.
1. Click the policy you want to edit to go to its details page.
1. Edit the policy's fields and then click **Update policy**.

### Delete managed policies

~> **Warning:** Deleting a policy that applies to an active run causes that run's policy evaluation stage to error. We recommend warning other members of your organization before you delete widely used policies.

You cannot restore policies after deletion. You must manually re-add them to HCP Terraform. You may want to save the policy code in a separate location before you delete the policy.

To delete a managed policy:

1. Go to **Policies** in your organization's settings.
1. Click the policy you want to delete to go to its details page.
1. Click **Delete policy** and then click **Yes, delete policy** to confirm.

The policy no longer appears in HCP Terraform or in any associated policy sets.

## Manage policy sets

Policy sets are collections of policies that you can apply globally or to specific [projects](/terraform/cloud-docs/projects/manage) and workspaces.

To view and manage policy sets, go to the **Policy Sets** section of your organization's settings. This page contains all of the policy sets available in the organization, including those added through the API.

The way you set up and configure a new policy set depends on your workflow and where you store policies.

- For [managed policies](#manage-individual-policies-in-the-web-ui), you use the UI to create a policy set and add managed policies.
- For policy sets in a version control system, you use the UI to create a policy set connected to that repository. HCP Terraform automatically refreshes the policy set when you change relevant files in that repository. Version control policy sets have specific organization and formatting requirements. Refer to [Sentinel VCS Repositories](/terraform/cloud-docs/policy-enforcement/sentinel/vcs) and [OPA VCS Repositories](/terraform/cloud-docs/policy-enforcement/opa/vcs) for details.
- For automated workflows like continuous deployment, you can use the UI to create an empty policy set and then use the [Policy Sets API](/terraform/cloud-docs/api-docs/policy-sets) to add policies. You can also use the API or the `tfe` provider ([Sentinel Only](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/resources/policy_set)) to add an entire, packaged policy set.

### Create policy sets

To create a policy set:

1. Go to **Policy Sets** in your organization's settings.
1. Click **Connect a new policy set**.
1. Choose your workflow:
   - For managed policies, click **create a policy set with individually managed policies**. HCP Terraform shows a form to create a policy set and add individually managed policies.
   - For version control policies, choose a version control provider and then select the repository with your policy set. HCP Terraform shows a form to create a policy set connected to that repository.
   - For automated workflows, click **No VCS Connection**. HCP Terraform shows a form to create an empty policy set. You can use the API to add policies to this empty policy set later.
1. Choose a **Policy framework** for the policies you want to add. A policy set can only contain policies that use the same framework (OPA or Sentinel). You cannot change a policy set's framework type after creation.
1. Choose a policy set scope:
   - **Policies enforced globally:** HCP Terraform automatically enforces this global policy set on all of an organization's existing and future workspaces.
   - **Policies enforced on selected projects and workspaces:** Use the text fields to find and select the workspaces and projects to enforce this policy set on. This affects all current and future workspaces for any chosen projects.
1. **(Optional)** Add **Policy exclusions** for this policy set. Specify any workspaces in the policy set's scope that HCP Terraform will not enforce this policy set on.
1. **(Sentinel Only)** Choose a policy set type:
   - **Standard:** This is the default workflow. A Sentinel policy set uses a [policy check](#policy-checks) in HCP Terraform and lets you access cost estimation data.
   - **Enhanced:** A Sentinel policy set uses a [policy evaluation](#policy-evaluations) in HCP Terraform. This lets you enable policy overrides and enforce a Sentinel runtime version.
1. **(OPA Only)** Select a **Runtime version** for this policy set.
1. **(OPA Only)** Allow **Overrides**, which enables users with override policy permissions to apply plans that have [mandatory policy](#policy-enforcement-levels) failures.
1. **(VCS Only)** Optionally specify the **VCS branch** within your VCS repository where HCP Terraform should import new versions of policies. If you do not set this field, HCP Terraform uses your selected VCS repository's default branch.
1. **(VCS Only)** Specify where your policy set files live using the **Policies path**. This lets you maintain multiple policy sets within a single repository. Use a relative path from your root directory to the directory that contains either the `sentinel.hcl` (Sentinel) or `policies.hcl` (OPA) configuration files. If you do not set this field, HCP Terraform uses the repository's root directory.
1. **(Managed Policies Only)** Select managed **Policies** to add to the policy set. You can only add policies written with the same policy framework you selected for this set.
1. Choose a descriptive and unique **Name** for the policy set. You can use any combination of letters, numbers, `-`, and `_`.
1. Write an optional **Description** that tells other users about the purpose of the policy set and what it contains.

### Edit policy sets

To edit a policy set:

1. Go to the **Policy Sets** section of your organization's settings.
1. Click the policy set you want to edit to go to its settings page.
1. Adjust the settings and click **Update policy set**.

### Evaluate a policy runtime upgrade

You can validate that changing a policy runtime version does not introduce any breaking changes.

To perform a policy evaluation:

1. Go to the **Policy Sets** section of your organization's settings.
1. Click the policy set you want to upgrade.
1. Click the **Evaluate** tab.
1. Select the **Runtime version** you wish to upgrade to.
1. Select a **Workspace** to test the policy and upgraded version against.
1. Click **Evaluate**.

HCP Terraform will execute the policy set using the specified version and the latest plan data for the selected workspace, and will display the evaluation results. If the evaluation returns a **Failed** status, inspect the JSON output to determine whether the issue is related to a non-compliant resource or is due to a syntax issue. If the evaluation results in an error, check that the policy configuration is valid.

### Delete policy sets

~> **Warning:** Deleting a policy set that applies to an active run causes that run's policy evaluation stage to error. We recommend warning other members of your organization before you delete widely used policy sets.

You cannot restore policy sets after deletion. You must manually re-add them to HCP Terraform.

To delete a policy set:

1. Go to **Policy Sets** in your organization's settings.
2. Click the policy set you want to delete to go to its details page.
3. Click **Delete policy** and then click **Yes, delete policy set** to confirm.

The policy set no longer appears in the UI, and HCP Terraform no longer applies it to any workspaces. For managed policy sets, all of the individual policies are still available in HCP Terraform. You must delete each policy individually to remove it from your organization.

## (Sentinel only) Sentinel parameters

[Sentinel parameters](https://docs.hashicorp.com/sentinel/language/parameters) are a list of key/value pairs that HCP Terraform sends to the Sentinel runtime when performing policy checks on workspaces. If the value parses as JSON, HCP Terraform sends it to Sentinel as the corresponding type (string, boolean, integer, map, or list). If the value fails JSON validation, HCP Terraform sends it as a string.

You can set Sentinel parameters when you [edit a policy set](#edit-policy-sets).
"}
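The OPA **Query** field described above points at a package/rule combination such as `terraform.deny`, whose result must be an array that is empty when the policy passes. The following is a minimal sketch of a Rego policy such a query could target; the `input.plan.resource_changes` shape, the `aws_instance` type, and the tag check are illustrative assumptions, not taken from this document:

```rego
package terraform

# Illustrative rule: flag planned aws_instance resources without a "Name" tag.
# The Query `terraform.deny` evaluates this rule; the result is an array of
# messages, and the policy passes when that array is empty.
deny[msg] {
  r := input.plan.resource_changes[_]
  r.type == "aws_instance"
  not r.change.after.tags.Name
  msg := sprintf("%s is missing a Name tag", [r.address])
}
```

Because the query result must be an array, partial-set rules like `deny[msg]` are a natural fit: each matching resource contributes one element to the result.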
{"questions":"terraform Sentinel Policy Set VCS Repositories To enable policy enforcement you must group Sentinel policies into policy sets You can then apply those policy sets globally or to specific projects terraform cloud docs projects manage and workspaces page title Policy Set VCS Repositories Sentinel HCP Terraform Configure a Sentinel policy set version control repository to use in HCP Terraform A policy set repository has a configuration file policy files and module files","answers":"---\npage_title: Policy Set VCS Repositories - Sentinel - HCP Terraform\ndescription: >-\n Configure a Sentinel policy set version control repository to use in HCP Terraform. A policy set repository has a configuration file, policy files, and module files. \n---\n\n# Sentinel Policy Set VCS Repositories\n\nTo enable policy enforcement, you must group Sentinel policies into policy sets. You can then apply those policy sets globally or to specific [projects](\/terraform\/cloud-docs\/projects\/manage) and workspaces.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nOne way to create policy sets is by connecting HCP Terraform to a version control repository. When you push changes to the repository, HCP Terraform automatically uses the updated policy set. Refer to [Managing Policy Sets](\/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets) for more details.\n\nA Sentinel policy set repository contains a Sentinel configuration file, policy files, and module files.\n\n## Configuration File\n\nYour repository must contain a configuration file named `sentinel.hcl` that defines the following features of the policy set:\n\n- Each policy included in the set. The policy name must match the names of individual [policy code files](#policy-code-files) exactly. HCP Terraform ignores policy files in the repository that are not listed in the configuration file. 
For each policy, the configuration file must designate the policy\u2019s [enforcement level](\/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets#policy-enforcement-levels) and [source](#policy-source).\n- [Terraform modules](#modules) that policies in the set need to access.\n\nThe following example shows a portion of a `sentinel.hcl` configuration file that defines a policy named `terraform-maintenance-windows`. The policy has a `hard-mandatory` enforcement level, meaning that it can block Terraform runs when it fails and users cannot override it.\n\n```hcl\npolicy \"terraform-maintenance-windows\" {\n source            = \".\/terraform-maintenance-windows.sentinel\"\n enforcement_level = \"hard-mandatory\"\n}\n```\n\nTo configure a module, add a `module` entry to your `sentinel.hcl` file. The following example adds a module called `timezone`.\n\n```hcl\nmodule \"timezone\" {\n source = \".\/modules\/timezone.sentinel\"\n}\n```\n\nThe repositories for [policy libraries on the Terraform Registry](https:\/\/registry.terraform.io\/browse\/policies) contain more examples.\n\n## Policy Code Files\n\nDefine each Sentinel policy in a separate file within your repository. All local policy files must reside in the same directory as the `sentinel.hcl` configuration file and end with the `.sentinel` suffix.\n\n### Policy Source\n\nA policy's `source` field can either reference a file within the policy repository, or it can reference a remote source. For example, the configuration could reference a policy from HashiCorp's [foundational policies library](https:\/\/github.com\/hashicorp\/terraform-foundational-policies-library). Sentinel only supports HTTP and HTTPS remote sources.\n\nTo specify a local source, prefix the `source` with a `.\/`, or `..\/`. 
The following example shows how to reference a local source policy called `terraform-maintenance-windows.sentinel`.\n\n```hcl\npolicy \"terraform-maintenance-windows\" {\n source            = \".\/terraform-maintenance-windows.sentinel\"\n enforcement_level = \"hard-mandatory\"\n}\n```\n\nTo specify a remote source, supply the URL as the `source`. The following example references a policy from HashiCorp's foundational policies library.\n\n```hcl\npolicy \"deny-public-ssh-nsg-rules\" {\n  source = \"https:\/\/registry.terraform.io\/v2\/policies\/hashicorp\/azure-networking-terraform\/1.0.2\/policy\/deny-public-ssh-nsg-rules.sentinel?checksum=sha256:75c95bf1d6eb48153cb31f15c49c237bf7829549beebe20effa07bcdd3f3cb74\"\n  enforcement_level = \"advisory\"\n}\n```\n\nFor GitHub, you must use the URL of the raw policy content. Other URL types cause HCP Terraform to error when checking the policy. For example, do not use `https:\/\/github.com\/hashicorp\/policy-library-azure-networking-terraform\/blob\/main\/policies\/deny-public-ssh-nsg-rules\/deny-public-ssh-nsg-rules.sentinel`.\n\nTo access the raw URL, open the Sentinel file in your Github repository, right-click **Raw** on the top right of the page, and save the link address.\n\n### Example Policy\n\nThe following example policy uses the `time` and `tfrun` imports and a custom `timezone` module to do the following tasks:\n\n1. Load the time when the Terraform run occurred\n1. Convert the loaded time with the correct offset using the [Timezone API](https:\/\/timezoneapi.io\/)\n1. Verify that the provisioning operation occurs only on a specific day\n\nThe example policy also uses a [rule expression](https:\/\/docs.hashicorp.com\/sentinel\/language\/spec#rule-expressions) with the `when` predicate. If the value of `tfrun.workspace.auto_apply` is false, the rule is not evaluated and returns true.\n\nFinally, the example uses parameters to facilitate module reuse within Terraform. 
Refer to the [Sentinel parameter documentation](https:\/\/docs.hashicorp.com\/sentinel\/language\/parameters) for details.\n\n\n```hcl\nimport \"time\"\nimport \"tfrun\"\nimport \"timezone\"\n\nparam token default \"WbNKULOBheqV\"\nparam maintenance_days default [\"Friday\", \"Saturday\", \"Sunday\"]\nparam timezone_id default \"America\/Los_Angeles\"\n\ntfrun_created_at = time.load(tfrun.created_at)\n\nsupported_maintenance_day = rule when tfrun.workspace.auto_apply is true {\n  tfrun_created_at.add(time.hour * timezone.offset(timezone_id, token)).weekday_name in maintenance_days\n}\n\nmain = rule {\n  supported_maintenance_day\n}\n```\n\nTo expand the policy, you could use the [time.hour](https:\/\/docs.hashicorp.com\/sentinel\/imports\/time#time-hour) function to also restrict provisioning to specific times of day.\n\n## Modules\n\nHCP Terraform supports [Sentinel modules](https:\/\/docs.hashicorp.com\/sentinel\/extending\/modules). Modules let you write reusable policy code that you can import and use within several policies at once.\n\nYou can store modules locally or retrieve them from a remote HTTP or HTTPS source.\n\n-> **Note:** We recommend reviewing [Sentinel runtime's modules documentation](https:\/\/docs.hashicorp.com\/sentinel\/extending\/modules) to learn how to use modules within Sentinel. However, the configuration examples in the runtime documentation are relevant to the Sentinel CLI and not HCP Terraform.\n\nThe following example module loads the code at `.\/modules\/timezone.sentinel` relative to the policy set working directory. 
Other modules can access this code with the statement `import \"timezone\"`.\n\n```hcl\nimport \"http\"\nimport \"json\"\nimport \"decimal\"\n\nhttpGet = func(id, token){\n  uri = \"https:\/\/timezoneapi.io\/api\/timezone\/?\" + id + \"&token=\" + token\n  request = http.get(uri)\n  return json.unmarshal(request.body)\n}\n\noffset = func(id, token) {\n  tz = httpGet(id, token)\n  offset = decimal.new(tz.data.datetime.offset_hours).int\n  return offset\n}\n```","site":"terraform"}
{"questions":"terraform page title tfstate v2 Imports Sentinel HCP Terraform Sentinel import designed specifically for Terraform 0 12 This import requires Terraform 0 12 or higher and must currently be loaded by path using an alias The tfstate v2 import provides access to a Terraform state Note This is documentation for the next version of the tfstate example import tfstate v2 as tfstate","answers":"---\npage_title: tfstate\/v2 - Imports - Sentinel - HCP Terraform\ndescription: The tfstate\/v2 import provides access to a Terraform state.\n---\n\n-> **Note:** This is documentation for the next version of the `tfstate`\nSentinel import, designed specifically for Terraform 0.12. This import requires\nTerraform 0.12 or higher, and must currently be loaded by path, using an alias,\nexample: `import \"tfstate\/v2\" as tfstate`.\n\n# Import: tfstate\/v2\n\nThe `tfstate\/v2` import provides access to a Terraform state.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nThe _state_ is the data that Terraform has recorded about a workspace at a\nparticular point in its lifecycle, usually after an apply. You can read more\ngeneral information about how Terraform uses state\n[here](\/terraform\/language\/state).\n\n-> **NOTE:** Since HCP Terraform currently only supports policy checks at plan\ntime, the usefulness of this import is somewhat limited, as it will usually give\nyou the state _prior_ to the plan the policy check is currently being run for.\nDepending on your needs, you may find the\n[`planned_values`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan-v2#the-planned_values-collection) collection in\n`tfplan\/v2` more useful, which will give you a _predicted_ state by applying\nplan data to the data found here. 
The one exception to this rule is _data\nsources_, which will always give up to date data here, as long as the data\nsource could be evaluated at plan time.\n\nThe data in the `tfstate\/v2` import is sourced from the JSON configuration file\nthat is generated by the [`terraform show -json`](\/terraform\/cli\/commands\/show#json-output) command. For more information on\nthe file format, see the [JSON Output Format](\/terraform\/internals\/json-format)\npage.\n\n## Import Overview\n\nThe `tfstate\/v2` import is structured as currently two _collections_, keyed in\nresource address and output name, respectively.\n\n```\n(tfstate\/v2)\n\u251c\u2500\u2500 terraform_version (string)\n\u251c\u2500\u2500 resources\n\u2502   \u2514\u2500\u2500 (indexed by address)\n\u2502       \u251c\u2500\u2500 address (string)\n\u2502       \u251c\u2500\u2500 module_address (string)\n\u2502       \u251c\u2500\u2500 mode (string)\n\u2502       \u251c\u2500\u2500 type (string)\n\u2502       \u251c\u2500\u2500 name (string)\n\u2502       \u251c\u2500\u2500 index (float (number) or string)\n\u2502       \u251c\u2500\u2500 provider_name (string)\n\u2502       \u251c\u2500\u2500 values (map)\n\u2502       \u251c\u2500\u2500 depends_on (list of strings)\n\u2502       \u251c\u2500\u2500 tainted (boolean)\n\u2502       \u2514\u2500\u2500 deposed_key (string)\n\u2514\u2500\u2500 outputs\n    \u2514\u2500\u2500 (indexed by name)\n        \u251c\u2500\u2500 name (string)\n        \u251c\u2500\u2500 sensitive (boolean)\n        \u2514\u2500\u2500 value (value)\n```\n\nThe collections are:\n\n* [`resources`](#the-resources-collection) - The state of all resources across\n  all modules in the state.\n* [`outputs`](#the-outputs-collection) - The state of all outputs from the root module in the state.\n\nThese collections are specifically designed to be used with the\n[`filter`](https:\/\/docs.hashicorp.com\/sentinel\/language\/collection-operations#filter-expression)\nquantifier expression in Sentinel, 
so that one can collect a list of resources\nto perform policy checks on without having to write complex module traversal. As\nan example, the following code will return all `aws_instance` resource types\nwithin the state, regardless of what module they are in:\n\n```\nall_aws_instances = filter tfstate.resources as _, r {\n\tr.mode is \"managed\" and\n\t\tr.type is \"aws_instance\"\n}\n```\n\nYou can add specific attributes to the filter to narrow the search, such as the\nmodule address. The following code would return resources in a module named\n`foo` only:\n\n```\nall_aws_instances = filter tfstate.resources as _, r {\n\tr.module_address is \"module.foo\" and\n\t\tr.mode is \"managed\" and\n\t\tr.type is \"aws_instance\"\n}\n```\n\n## The `terraform_version` Value\n\nThe top-level `terraform_version` value in this import gives the Terraform\nversion that recorded the state. This can be used to do version validation.\n\n```\nimport \"tfstate\/v2\" as tfstate\nimport \"strings\"\n\nv = strings.split(tfstate.terraform_version, \".\")\nversion_major = int(v[1])\nversion_minor = int(v[2])\n\nmain = rule {\n\tversion_major is 12 and version_minor >= 19\n}\n```\n\n-> **NOTE:** The above example will give errors when working with pre-release\nversions (example: `0.12.0beta1`). 
Future versions of this import will include\nhelpers to assist with processing versions that will account for these kinds of\nexceptions.\n\n## The `resources` Collection\n\nThe `resources` collection is a collection representing all of the resources in\nthe state, across all modules.\n\nThis collection is indexed on the complete resource address as the key.\n\nAn element in the collection has the following values:\n\n* `address` - The absolute resource address - also the key for the collection's\n  index.\n\n* `module_address` - The address portion of the absolute resource address.\n\n* `mode` - The resource mode, either `managed` (resources) or `data` (data\n  sources).\n\n* `type` - The resource type, example: `aws_instance` for `aws_instance.foo`.\n\n* `name` - The resource name, example: `foo` for `aws_instance.foo`.\n\n* `index` - The resource index. Can be either a number or a string.\n\n* `provider_name` - The name of the provider this resource belongs to. This\n  allows the provider to be interpreted unambiguously in the unusual situation\n  where a provider offers a resource type whose name does not start with its own\n  name, such as the `googlebeta` provider offering `google_compute_instance`.\n\n  -> **Note:** Starting with Terraform 0.13, the `provider_name` field contains the\n  _full_ source address to the provider in the Terraform Registry. Example:\n  `registry.terraform.io\/hashicorp\/null` for the null provider.\n\n* `values` - An object (map) representation of the attribute values of the\n  resource, whose structure depends on the resource type schema. 
When accessing\n  proposed state through the [`planned_values`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan-v2#the-planned_values-collection)\n  collection of the tfplan\/v2 import, unknown values will be omitted.\n\n* `depends_on` - The addresses of the resources that this resource depends on.\n\n* `tainted` - `true` if the resource has been explicitly marked as\n  [tainted](\/terraform\/cli\/commands\/taint) in the state.\n\n* `deposed_key` - Set if the resource has been marked deposed and will be\n  destroyed on the next apply. This matches the deposed field in the\n  [`resource_changes`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan-v2#the-resource_changes-collection)\n  collection in the [`tfplan\/v2`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan-v2) import.\n\n## The `outputs` Collection\n\nThe `outputs` collection is a collection of outputs from the root module of the\nstate.\n\nNote that no child modules are included in this output set, and there is no way\nto fetch child module output values. 
This is to encourage the correct flow of\noutputs to the recommended root consumption level.\n\nThe collection is indexed on the output name, with the following fields:\n\n* `name`: The name of the output, also the collection key.\n* `sensitive`: Whether or not the value was marked as\n  [sensitive](\/terraform\/language\/values\/outputs#sensitive-suppressing-values-in-cli-output)\n  in configuration.\n* `value`: The value of the output.","site":"terraform"}
named  foo  only       all aws instances   filter tfstate resources as    r    r module address is  module foo  and   r mode is  managed  and   r type is  aws instance            The  terraform version  Value  The top level  terraform version  value in this import gives the Terraform version that recorded the state  This can be used to do version validation       import  tfstate v2  as tfstate import  strings   v   strings split tfstate terraform version       version major   int v 1   version minor   int v 2    main   rule    version major is 12 and version minor    19             NOTE    The above example will give errors when working with pre release versions  example   0 12 0beta1    Future versions of this import will include helpers to assist with processing versions that will account for these kinds of exceptions      The  resources  Collection  The  resources  collection is a collection representing all of the resources in the state  across all modules   This collection is indexed on the complete resource address as the key   An element in the collection has the following values      address    The absolute resource address   also the key for the collection s   index      module address    The address portion of the absolute resource address      mode    The resource mode  either  managed   resources  or  data   data   sources       type    The resource type  example   aws instance  for  aws instance foo       name    The resource name  example   foo  for  aws instance foo       index    The resource index  Can be either a number or a string      provider name    The name of the provider this resource belongs to  This   allows the provider to be interpreted unambiguously in the unusual situation   where a provider offers a resource type whose name does not start with its own   name  such as the  googlebeta  provider offering  google compute instance           Note    Starting with Terraform 0 13  the  provider name  field contains the    full  source 
address to the provider in the Terraform Registry  Example     registry terraform io hashicorp null  for the null provider      values    An object  map  representation of the attribute values of the   resource  whose structure depends on the resource type schema  When accessing   proposed state through the   planned values    terraform cloud docs policy enforcement sentinel import tfplan v2 the planned values collection    collection of the tfplan v2 import  unknown values will be omitted      depends on    The addresses of the resources that this resource depends on      tainted     true  if the resource has been explicitly marked as    tainted   terraform cli commands taint  in the state      deposed key    Set if the resource has been marked deposed and will be   destroyed on the next apply  This matches the deposed field in the     resource changes    terraform cloud docs policy enforcement sentinel import tfplan v2 the resource changes collection    collection in the   tfplan v2    terraform cloud docs policy enforcement sentinel import tfplan v2  import      The  outputs  Collection  The  outputs  collection is a collection of outputs from the root module of the state   Note that no child modules are included in this output set  and there is no way to fetch child module output values  This is to encourage the correct flow of outputs to the recommended root consumption level   The collection is indexed on the output name  with the following fields      name   The name of the output  also the collection key     sensitive   Whether or not the value was marked as    sensitive   terraform language values outputs sensitive suppressing values in cli output    in   configuration     value   The value of the output "}
{"questions":"terraform The tfrun import provides access to data associated with a Terraform run The tfrun import provides access to data associated with a Terraform run run glossary BEGIN TFC only name pnp callout page title tfrun Imports Sentinel HCP Terraform Import tfrun","answers":"---\npage_title: tfrun - Imports - Sentinel - HCP Terraform\ndescription: The `tfrun` import provides access to data associated with a Terraform run.\n---\n\n# Import: tfrun\n\nThe `tfrun` import provides access to data associated with a [Terraform run][run-glossary].\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nThis import currently consists of run attributes, as well as namespaces for the `project`, `organization`, `workspace`, and `cost_estimate`. Each namespace provides static data regarding the HCP Terraform application that can then be consumed by Sentinel during a policy evaluation.\n\n```\ntfrun\n\u251c\u2500\u2500 id (string)\n\u251c\u2500\u2500 created_at (string)\n\u251c\u2500\u2500 created_by (string)\n\u251c\u2500\u2500 message (string)\n\u251c\u2500\u2500 commit_sha (string)\n\u251c\u2500\u2500 is_destroy (boolean)\n\u251c\u2500\u2500 refresh (boolean)\n\u251c\u2500\u2500 refresh_only (boolean)\n\u251c\u2500\u2500 replace_addrs (array of strings)\n\u251c\u2500\u2500 speculative (boolean)\n\u251c\u2500\u2500 target_addrs (array of strings)\n\u251c\u2500\u2500 project\n\u2502   \u251c\u2500\u2500 id (string)\n\u2502   \u2514\u2500\u2500 name (string)\n\u251c\u2500\u2500 variables (map of keys)\n\u251c\u2500\u2500 organization\n\u2502   \u2514\u2500\u2500 name (string)\n\u251c\u2500\u2500 workspace\n\u2502   \u251c\u2500\u2500 id (string)\n\u2502   \u251c\u2500\u2500 name (string)\n\u2502   \u251c\u2500\u2500 created_at (string)\n\u2502   \u251c\u2500\u2500 description (string)\n\u2502   \u251c\u2500\u2500 execution_mode (string)\n\u2502   \u251c\u2500\u2500 auto_apply (bool)\n\u2502   
\u251c\u2500\u2500 tags (array of strings)\n\u2502   \u251c\u2500\u2500 tag_bindings (array of objects)\n\u2502   \u251c\u2500\u2500 working_directory (string)\n\u2502   \u2514\u2500\u2500 vcs_repo (map of keys)\n\u2514\u2500\u2500 cost_estimate\n    \u251c\u2500\u2500 prior_monthly_cost (string)\n    \u251c\u2500\u2500 proposed_monthly_cost (string)\n    \u2514\u2500\u2500 delta_monthly_cost (string)\n```\n\n-> **Note:** When writing policies using this import, keep in mind that workspace\ndata is generally editable by users outside of the context of policy\nenforcement. For example, consider the case of omitting the enforcement of\npolicy rules for development workspaces by the workspace name (allowing the\npolicy to pass if the workspace ends in `-dev`). While this is useful for\nextremely granular exceptions, the workspace name could be edited by\nworkspace admins, effectively bypassing the policy. In this case, where an\nextremely strict separation of policy managers vs. workspace practitioners is\nrequired, using [policy sets](\/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets)\nto only enforce the policy on non-development workspaces is more appropriate.\n\n[run-glossary]: \/terraform\/docs\/glossary#run\n\n[workspace-glossary]: \/terraform\/docs\/glossary#workspace\n\n## Namespace: root\n\nThe **root namespace** contains data associated with the current run.\n\n### Value: `id`\n\n* **Value Type:** String.\n\nSpecifies the ID that is associated with the current Terraform run.\n\n### Value: `created_at`\n\n* **Value Type:** String.\n\nThe `created_at` value within the [root namespace](#namespace-root) specifies the time that the run was created. The timestamp returned follows the format outlined in [RFC3339](https:\/\/datatracker.ietf.org\/doc\/html\/rfc3339).\n\nUsers can use the `time` import to [load](https:\/\/docs.hashicorp.com\/sentinel\/imports\/time#time-load-timeish) a run timestamp and create a new time namespace from the specified value. 
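For example, a run's creation time can feed a scheduling rule. The following is a minimal sketch (illustrative only; it assumes the `weekday_name` property of the loaded time namespace):

```
import "tfrun"
import "time"

# Load the run's RFC3339 creation timestamp into a time namespace
run_created = time.load(tfrun.created_at)

# Illustrative rule: disallow runs created on weekends
main = rule {
    run_created.weekday_name is not "Saturday" and
    run_created.weekday_name is not "Sunday"
}
```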
See the `time` import [documentation](https:\/\/docs.hashicorp.com\/sentinel\/imports\/time#import-time) for available actions that can be performed on time namespaces.\n\n### Value: `created_by`\n\n* **Value Type:** String.\n\nThe `created_by` value within the [root namespace](#namespace-root) is a string that specifies the user name of the HCP Terraform user for the specific run.\n\n### Value: `message`\n\n* **Value Type:** String.\n\nSpecifies the message that is associated with the Terraform run.\n\nThe default value is _\"Queued manually via the Terraform Enterprise API\"_.\n\n### Value: `commit_sha`\n\n* **Value Type:** String.\n\nSpecifies the checksum hash (SHA) that identifies the commit.\n\n### Value: `is_destroy`\n\n* **Value Type:** Boolean.\n\nSpecifies if the plan is a destroy plan, which will destroy all provisioned resources.\n\n### Value: `refresh`\n\n* **Value Type:** Boolean.\n\nSpecifies whether the state was refreshed prior to the plan.\n\n### Value: `refresh_only`\n\n* **Value Type:** Boolean.\n\nSpecifies whether the plan is in refresh-only mode, which ignores configuration changes and updates state with any changes made outside of Terraform.\n\n### Value: `replace_addrs`\n\n* **Value Type:** An array of strings representing [resource addresses](\/terraform\/cli\/state\/resource-addressing).\n\nProvides the targets specified using the [`-replace`](\/terraform\/cli\/commands\/plan#resource-targeting) flag in the CLI or the `replace-addrs` attribute in the API. 
Will be null if no resource targets are specified.\n\n### Value: `speculative`\n\n* **Value Type:** Boolean.\n\nSpecifies whether the plan associated with the run is a [speculative plan](\/terraform\/cloud-docs\/run\/remote-operations#speculative-plans) only.\n\n### Value: `target_addrs`\n\n* **Value Type:** An array of strings representing [resource addresses](\/terraform\/cli\/state\/resource-addressing).\n\nProvides the targets specified using the [`-target`](\/terraform\/cli\/commands\/plan#resource-targeting) flag in the CLI or the `target-addrs` attribute in the API. Will be null if no resource targets are specified.\n\nTo prohibit targeted runs altogether, make sure the `target_addrs` value is null or empty:\n\n```\nimport \"tfrun\"\n\nmain = tfrun.target_addrs is null or tfrun.target_addrs is empty\n```\n\n### Value: `variables`\n\n* **Value Type:** A string-keyed map of values.\n\nProvides the names of the variables that are configured within the run and the [sensitivity](\/terraform\/cloud-docs\/workspaces\/variables\/managing-variables#sensitive-values) state of the value.\n\n```\nvariables (map of keys)\n\u2514\u2500\u2500 name (string)\n    \u2514\u2500\u2500 category (string)\n    \u2514\u2500\u2500 sensitive (boolean)\n```\n\n## Namespace: project\n\nThe **project namespace** contains data associated with the current run's [projects](\/terraform\/cloud-docs\/api-docs\/projects).\n\n### Value: `id`\n\n* **Value Type:** String.\n\nSpecifies the ID that is associated with the current project.\n\n### Value: `name`\n\n* **Value Type:** String.\n\nSpecifies the name assigned to the HCP Terraform project.\n\n## Namespace: organization\n\nThe **organization namespace** contains data associated with the current run's HCP Terraform [organization](\/terraform\/cloud-docs\/users-teams-organizations\/organizations).\n\n### Value: `name`\n\n* **Value Type:** String.\n\nSpecifies the name assigned to the HCP Terraform organization.\n\n## Namespace: workspace\n\nThe 
**workspace namespace** contains data associated with the current run's workspace.\n\n### Value: `id`\n\n* **Value Type:** String.\n\nSpecifies the ID that is associated with the Terraform workspace.\n\n### Value: `name`\n\n* **Value Type:** String.\n\nThe name of the workspace, which can only include letters, numbers, `-`, and `_`.\n\nAs an example, in a workspace named `app-us-east-dev` the following policy would evaluate to `true`:\n\n```\n# Enforces production rules on all non-development workspaces\n\nimport \"tfrun\"\nimport \"strings\"\n\n# (Actual policy logic omitted)\nevaluate_production_policy = rule { ... }\n\nmain = rule when strings.has_suffix(tfrun.workspace.name, \"-dev\") is false {\n    evaluate_production_policy\n}\n```\n\n### Value: `created_at`\n\n* **Value Type:** String.\n\nSpecifies the time that the workspace was created. The timestamp returned follows the format outlined in [RFC3339](https:\/\/datatracker.ietf.org\/doc\/html\/rfc3339).\n\nUsers can use the `time` import to [load](https:\/\/docs.hashicorp.com\/sentinel\/imports\/time#time-load-timeish) a workspace timestamp and create a new time namespace from the specified value. 
See the `time` import [documentation](https:\/\/docs.hashicorp.com\/sentinel\/imports\/time#import-time) for available actions that can be performed on time namespaces.\n\n### Value: `description`\n\n* **Value Type:** String.\n\nContains the description for the workspace.\n\nThis value can be `null`.\n\n### Value: `auto_apply`\n\n* **Value Type:** Boolean.\n\nContains the workspace's [auto-apply](\/terraform\/cloud-docs\/workspaces\/settings#auto-apply-and-manual-apply) setting.\n\n### Value: `tags`\n\n* **Value Type:** Array of strings.\n\nContains the list of tag names for the workspace, as well as the keys from tag bindings.\n\n### Value: `tag_bindings`\n\n* **Value Type:** Array of objects.\n\nContains the complete list of tag bindings for the workspace, which includes inherited tag bindings, as well as the workspace key-only tags. Each binding has a string `key`, a nullable string `value`, and a boolean `inherited` property.\n\n```\ntag_bindings (array of objects)\n\u251c\u2500\u2500 key (string)\n\u251c\u2500\u2500 value (string or null)\n\u2514\u2500\u2500 inherited (boolean)\n```\n\n### Value: `working_directory`\n\n* **Value Type:** String.\n\nContains the configured [Terraform working directory](\/terraform\/cloud-docs\/workspaces\/settings#terraform-working-directory) of the workspace.\n\nThis value can be `null`.\n\n### Value: `execution_mode`\n\n* **Value Type:** String.\n\nContains the configured [Terraform execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) of the workspace.\n\nThe default value is `remote`.\n\n### Value: `vcs_repo`\n\n* **Value Type:** A string-keyed map of values.\n\nContains data associated with a VCS repository connected to the workspace.\n\nDetails regarding each attribute can be found in the documentation for the HCP Terraform [Workspaces API](\/terraform\/cloud-docs\/api-docs\/workspaces).\n\nThis value can be `null`.\n\n```\nvcs_repo (map of keys)\n\u251c\u2500\u2500 identifier 
(string)\n\u251c\u2500\u2500 display_identifier (string)\n\u251c\u2500\u2500 branch (string)\n\u2514\u2500\u2500 ingress_submodules (bool)\n```\n\n## Namespace: cost_estimate\n\nThe **cost_estimate namespace** contains data associated with the current run's cost estimate.\n\nThis namespace is only present if a cost estimate is available.\n\n-> Cost estimation is disabled for runs using [resource targeting](\/terraform\/cli\/commands\/plan#resource-targeting), which may cause unexpected failures.\n\n-> **Note:** Cost estimates are not available for Terraform 0.11.\n\n### Value: `prior_monthly_cost`\n\n* **Value Type:** String.\n\nContains the monthly cost estimate at the beginning of a plan.\n\nThis value contains a positive decimal and can be `\"0.0\"`.\n\n### Value: `proposed_monthly_cost`\n\n* **Value Type:** String.\n\nContains the monthly cost estimate if the plan were to be applied.\n\nThis value contains a positive decimal and can be `\"0.0\"`.\n\n### Value: `delta_monthly_cost`\n\n* **Value Type:** String.\n\nContains the difference between the prior and proposed monthly cost estimates.\n\nThis value may contain a positive or negative decimal and can be `\"0.0\"`.","site":"terraform"}
{"questions":"terraform Warning The tfconfig import is now deprecated and will be permanently removed in August 2025 We recommend that you start using the updated tfconfig v2 terraform cloud docs policy enforcement sentinel import tfconfig v2 import as soon as possible to avoid disruptions Import tfconfig The tfconfig v2 import offers improved functionality and is designed to better support your policy enforcement needs page title tfconfig Imports Sentinel HCP Terraform The tfconfig import provides access to a Terraform configuration","answers":"---\npage_title: tfconfig - Imports - Sentinel - HCP Terraform\ndescription: The tfconfig import provides access to a Terraform configuration.\n---\n\n# Import: tfconfig\n\n~> **Warning:** The `tfconfig` import is now deprecated and will be permanently removed in August 2025.\nWe recommend that you start using the updated [tfconfig\/v2](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfconfig-v2) import as soon as possible to avoid disruptions.\nThe `tfconfig\/v2` import offers improved functionality and is designed to better support your policy enforcement needs.\n\nThe `tfconfig` import provides access to a Terraform configuration.\n\nThe Terraform configuration is the set of `*.tf` files that are used to\ndescribe the desired infrastructure state. 
Policies using the `tfconfig`\nimport can access all aspects of the configuration: providers, resources,\ndata sources, modules, and variables.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nSome use cases for `tfconfig` include:\n\n* **Organizational naming conventions**: requiring that configuration elements\n  are named in a way that conforms to some organization-wide standard.\n* **Required inputs and outputs**: organizations may require a particular set\n  of input variable names across all workspaces or may require a particular\n  set of outputs for asset management purposes.\n* **Enforcing particular modules**: organizations may provide a number of\n  \"building block\" modules and require that each workspace be built only from\n  combinations of these modules.\n* **Enforcing particular providers or resources**: an organization may wish to\n  require or prevent the use of providers and\/or resources so that configuration\n  authors cannot use alternative approaches to work around policy\n  restrictions.\n\nNote with these use cases that this import is concerned with object _names_\nin the configuration. Since this is the configuration and not an invocation\nof Terraform, you can't see values for variables, the state, or the diff for\na pending plan. If you want to write policy around expressions used\nwithin configuration blocks, you likely want to use the\n[`tfplan`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan) import.\n\n## Namespace Overview\n\nThe following is a tree view of the import namespace. For more detail on a\nparticular part of the namespace, see below.\n\n-> **Note:** The root-level alias keys shown here (`data`, `modules`, `outputs`,\n`providers`, `resources`, and `variables`) are shortcuts to a [module\nnamespace](#namespace-module) scoped to the root module. 
For more details, see\nthe section on [root namespace aliases](#root-namespace-aliases).\n\n```\ntfconfig\n\u251c\u2500\u2500 module() (function)\n\u2502   \u2514\u2500\u2500 (module namespace)\n\u2502       \u251c\u2500\u2500 data\n\u2502       \u2502   \u2514\u2500\u2500 TYPE.NAME\n\u2502       \u2502       \u251c\u2500\u2500 config (map of keys)\n\u2502       \u2502       \u251c\u2500\u2500 references (map of keys) (TF 0.12 and later)\n\u2502       \u2502       \u2514\u2500\u2500 provisioners\n\u2502       \u2502           \u2514\u2500\u2500 NUMBER\n\u2502       \u2502               \u251c\u2500\u2500 config (map of keys)\n\u2502       \u2502               \u251c\u2500\u2500 references (map of keys) (TF 0.12 and later)\n\u2502       \u2502               \u2514\u2500\u2500 type (string)\n\u2502       \u251c\u2500\u2500 modules\n\u2502       \u2502   \u2514\u2500\u2500 NAME\n\u2502       \u2502       \u251c\u2500\u2500 config (map of keys)\n\u2502       \u2502       \u251c\u2500\u2500 references (map of keys) (TF 0.12 and later)\n\u2502       \u2502       \u251c\u2500\u2500 source (string)\n\u2502       \u2502       \u2514\u2500\u2500 version (string)\n\u2502       \u251c\u2500\u2500 outputs\n\u2502       \u2502   \u2514\u2500\u2500 NAME\n\u2502       \u2502       \u251c\u2500\u2500 depends_on (list of strings)\n\u2502       \u2502       \u251c\u2500\u2500 description (string)\n\u2502       \u2502       \u251c\u2500\u2500 sensitive (boolean)\n\u2502       \u2502       \u251c\u2500\u2500 references (list of strings) (TF 0.12 and later)\n\u2502       \u2502       \u2514\u2500\u2500 value (value)\n\u2502       \u251c\u2500\u2500 providers\n\u2502       \u2502   \u2514\u2500\u2500 TYPE\n\u2502       \u2502       \u251c\u2500\u2500 alias\n\u2502       \u2502       \u2502   \u2514\u2500\u2500 ALIAS\n\u2502       \u2502       \u2502       \u251c\u2500\u2500 config (map of keys)\n\u2502       \u2502       \u2502       \u251c\u2500\u2500 references (map of keys) (TF 0.12 and 
later)\n\u2502       \u2502       \u2502       \u2514\u2500\u2500 version (string)\n\u2502       \u2502       \u251c\u2500\u2500 config (map of keys)\n\u2502       \u2502       \u251c\u2500\u2500 references (map of keys) (TF 0.12 and later)\n\u2502       \u2502       \u2514\u2500\u2500 version (string)\n\u2502       \u251c\u2500\u2500 resources\n\u2502       \u2502   \u2514\u2500\u2500 TYPE.NAME\n\u2502       \u2502       \u251c\u2500\u2500 config (map of keys)\n\u2502       \u2502       \u251c\u2500\u2500 references (map of keys) (TF 0.12 and later)\n\u2502       \u2502       \u2514\u2500\u2500 provisioners\n\u2502       \u2502           \u2514\u2500\u2500 NUMBER\n\u2502       \u2502               \u251c\u2500\u2500 config (map of keys)\n\u2502       \u2502               \u251c\u2500\u2500 references (map of keys) (TF 0.12 and later)\n\u2502       \u2502               \u2514\u2500\u2500 type (string)\n\u2502       \u2514\u2500\u2500 variables\n\u2502           \u2514\u2500\u2500 NAME\n\u2502               \u251c\u2500\u2500 default (value)\n\u2502               \u2514\u2500\u2500 description (string)\n\u251c\u2500\u2500 module_paths ([][]string)\n\u2502\n\u251c\u2500\u2500 data (root module alias)\n\u251c\u2500\u2500 modules (root module alias)\n\u251c\u2500\u2500 outputs (root module alias)\n\u251c\u2500\u2500 providers (root module alias)\n\u251c\u2500\u2500 resources (root module alias)\n\u2514\u2500\u2500 variables (root module alias)\n```\n\n### `references` with Terraform 0.12\n\n**With Terraform 0.11 or earlier**, if a configuration value is defined as an\nexpression (and not a static value), the value will be accessible in its raw,\nnon-interpolated string (just as with a constant value).\n\nAs an example, consider the following resource block:\n\n```hcl\nresource \"local_file\" \"accounts\" {\n  content  = \"some text\"\n  filename = \"${var.subdomain}.${var.domain}\/accounts.txt\"\n}\n```\n\nIn this example, one might want to ensure `domain` and 
`subdomain` input\nvariables are used within `filename` in this configuration. With Terraform 0.11 or\nearlier, the following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\n# filename_value is the raw, non-interpolated string\nfilename_value = tfconfig.resources.local_file.accounts.config.filename\n\nmain = rule {\n\tfilename_value contains \"${var.domain}\" and\n\tfilename_value contains \"${var.subdomain}\"\n}\n```\n\n**With Terraform 0.12 or later**, any non-static\nvalues (such as interpolated strings) are not present within the\nconfiguration value and `references` should be used instead:\n\n```python\nimport \"tfconfig\"\n\n# filename_references is a list of string values containing the references used in the expression\nfilename_references = tfconfig.resources.local_file.accounts.references.filename\n\nmain = rule {\n  filename_references contains \"var.domain\" and\n  filename_references contains \"var.subdomain\"\n}\n```\n\nThe `references` value is present in any namespace where non-constant\nconfiguration values can be expressed. This is essentially every namespace\nwhich has a `config` value as well as the `outputs` namespace.\n\n-> **Note:** Remember, this import enforces policy around the literal Terraform\nconfiguration and not the final values as a result of invoking Terraform. 
If\nyou want to write policy around the _result_ of expressions used within\nconfiguration blocks (for example, if you wanted to ensure the final value of\n`filename` above includes `accounts.txt`), you likely want to use the\n[`tfplan`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan) import.\n\n## Namespace: Root\n\nThe root-level namespace consists of the values and functions documented below.\n\nIn addition to this, the root-level `data`, `modules`, `providers`, `resources`,\nand `variables` keys all alias to their corresponding namespaces within the\n[module namespace](#namespace-module).\n\n<a id=\"root-function-module\" \/>\n\n### Function: `module()`\n\n```\nmodule = func(ADDR)\n```\n\n* **Return Type:** A [module namespace](#namespace-module).\n\nThe `module()` function in the [root namespace](#namespace-root) returns the\n[module namespace](#namespace-module) for a particular module address.\n\nThe address must be a list and is the module address, split on the period (`.`),\nexcluding the root module.\n\nHence, a module with an address of simply `foo` (or `root.foo`) would be\n`[\"foo\"]`, and a module within that (so address `foo.bar`) would be read as\n`[\"foo\", \"bar\"]`.\n\n[`null`][ref-null] is returned if a module address is invalid, or if the module\nis not present in the configuration.\n\n[ref-null]: https:\/\/docs.hashicorp.com\/sentinel\/language\/spec#null\n\nAs an example, given the following module block:\n\n```hcl\nmodule \"foo\" {\n  # ...\n}\n```\n\nIf the module contained the following content:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.module([\"foo\"]).resources.null_resource.foo.config.triggers[0].foo is \"bar\" }\n```\n\n<a id=\"root-value-module_paths\" \/>\n\n### Value: `module_paths`\n\n* **Value Type:** List of a list of strings.\n\nThe 
`module_paths` value within the [root namespace](#namespace-root) is a list\nof all of the modules within the Terraform configuration.\n\nModules not present in the configuration will not be present here, even if they\nare present in the diff or state.\n\nThis data is represented as a list of a list of strings, with the inner list\nbeing the module address, split on the period (`.`).\n\nThe root module is included in this list, represented as an empty inner list.\n\nAs an example, if the following module block was present within a Terraform\nconfiguration:\n\n```hcl\nmodule \"foo\" {\n  # ...\n}\n```\n\nThe value of `module_paths` would be:\n\n```\n[\n\t[],\n\t[\"foo\"],\n]\n```\n\nAnd the following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.module_paths contains [\"foo\"] }\n```\n\n#### Iterating Through Modules\n\nIterating through all modules to find particular resources can be useful. This\n[example][iterate-over-modules] shows how to use `module_paths` with the\n[`module()` function](#function-module-) to find all resources of a\nparticular type from all modules using the `tfplan` import. 
By changing `tfplan`\nin this function to `tfconfig`, you could make a similar function find all\nresources of a specific type in the Terraform configuration.\n\n[iterate-over-modules]: \/terraform\/cloud-docs\/policy-enforcement\/sentinel#sentinel-imports\n\n## Namespace: Module\n\nThe **module namespace** can be loaded by calling [`module()`](#root-function-module)\nfor a particular module.\n\nIt can be used to load the following child namespaces:\n\n* `data` - Loads the [resource namespace](#namespace-resources-data-sources),\n  filtered against data sources.\n* `modules` - Loads the [module configuration\n  namespace](#namespace-module-configuration).\n* `outputs` - Loads the [output namespace](#namespace-outputs).\n* `providers` - Loads the [provider namespace](#namespace-providers).\n* `resources` - Loads the [resource\n  namespace](#namespace-resources-data-sources), filtered against resources.\n* `variables` - Loads the [variable namespace](#namespace-variables).\n\n### Root Namespace Aliases\n\nThe root-level `data`, `modules`, `providers`, `resources`, and `variables` keys\nall alias to their corresponding namespaces within the module namespace, loaded\nfor the root module. They are the equivalent of running `module([]).KEY`.\n\n<a id=\"namespace-resources\"><\/a>\n\n## Namespace: Resources\/Data Sources\n\nThe **resource namespace** is a namespace _type_ that applies to both resources\n(accessed by using the `resources` namespace key) and data sources (accessed\nusing the `data` namespace key).\n\nAccessing an individual resource or data source within each respective namespace\ncan be accomplished by specifying the type and name, in the syntax\n`[resources|data].TYPE.NAME`.\n\nIn addition, each of these namespace levels is a map, allowing you to filter\nbased on type and name. Some examples of multi-level access are below:\n\n* To fetch all `aws_instance` resources within the root module, you can specify\n  `tfconfig.resources.aws_instance`. 
This would give you a map of resource\n  namespaces indexed from the names of each resource (`foo`, `bar`, and so\n  on).\n* To fetch all resources within the root module, irrespective of type, use\n  `tfconfig.resources`. This is indexed by type, as shown above with\n  `tfconfig.resources.aws_instance`, with names being the next level down.\n\nAs an example, perhaps you wish to deny use of the `local_file` resource\nin your configuration. Consider the following resource block:\n\n```hcl\nresource \"local_file\" \"foo\" {\n  content  = \"foo!\"\n  filename = \"${path.module}\/foo.bar\"\n}\n```\n\nThe following policy would fail:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.resources not contains \"local_file\" }\n```\n\nFurther explanation of the namespace will be in the context of resources. As\nmentioned, when operating on data sources, use the same syntax, except with\n`data` in place of `resources`.\n\n<a id=\"resources-value-config\" \/>\n\n### Value: `config`\n\n* **Value Type:** A string-keyed map of values.\n\nThe `config` value within the [resource\nnamespace](#namespace-resources-data-sources) is a map of key-value pairs that\ndirectly map to Terraform config keys and values.\n\n-> **With Terraform 0.11 or earlier**, if the config value is defined as an\nexpression (and not a static value), the value will be in its raw,\nnon-interpolated string. **With Terraform 0.12 or later**, any non-static\nvalues (such as interpolated strings) are not present and\n[`references`](#resources-value-references) should be used instead.\n\nAs an example, consider the following resource block:\n\n```hcl\nresource \"local_file\" \"accounts\" {\n  content  = \"some text\"\n  filename = \"accounts.txt\"\n}\n```\n\nIn this example, one might want to access `filename` to validate that the correct\nfile name is used. 
Given the above example, the following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule {\n\ttfconfig.resources.local_file.accounts.config.filename is \"accounts.txt\"\n}\n```\n\n<a id=\"resources-value-references\" \/>\n\n### Value: `references`\n\n* **Value Type:** A string-keyed map of list values containing strings.\n\n-> **Note:** This value is only present when using Terraform 0.12 or later.\n\nThe `references` value within the [resource namespace](#namespace-resources-data-sources)\ncontains the identifiers within non-constant expressions found in [`config`](#resources-value-config).\nSee the [documentation on `references`](#references-with-terraform-0-12) for more information.\n\n<a id=\"resources-value-provisioners\" \/>\n\n### Value: `provisioners`\n\n* **Value Type:** List of [provisioner namespaces](#namespace-provisioners).\n\nThe `provisioners` value within the [resource namespace](#namespace-resources)\nrepresents the [provisioners][ref-tf-provisioners] within a specific resource.\n\nProvisioners are listed in the order they were provided in the configuration\nfile.\n\nWhile the `provisioners` value will be present within data sources, it will\nalways be an empty map (in Terraform 0.11) or `null` (in Terraform 0.12) since\ndata sources cannot actually have provisioners.\n\nThe data within a provisioner can be inspected via the returned [provisioner\nnamespace](#namespace-provisioners).\n\n[ref-tf-provisioners]: \/terraform\/language\/resources\/provisioners\/syntax\n\n## Namespace: Provisioners\n\nThe **provisioner namespace** represents the configuration for a particular\n[provisioner][ref-tf-provisioners] within a specific resource.\n\n<a id=\"provisioners-value-config\" \/>\n\n### Value: `config`\n\n* **Value Type:** A string-keyed map of values.\n\nThe `config` value within the [provisioner namespace](#namespace-provisioners)\nrepresents the values of the keys within the provisioner.\n\n-> **With Terraform 0.11 or 
earlier**, if the config value is defined as an\nexpression (and not a static value), the value will be in its raw,\nnon-interpolated string. **With Terraform 0.12 or later**, any non-static\nvalues (such as interpolated strings) are not present and\n[`references`](#provisioners-value-references) should be used instead.\n\nAs an example, given the following resource block:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  # ...\n\n  provisioner \"local-exec\" {\n    command = \"echo ${self.private_ip} > file.txt\"\n  }\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule {\n\ttfconfig.resources.null_resource.foo.provisioners[0].config.command is \"echo ${self.private_ip} > file.txt\"\n}\n```\n\n<a id=\"provisioners-value-references\" \/>\n\n### Value: `references`\n\n* **Value Type:** A string-keyed map of list values containing strings.\n\n-> **Note:** This value is only present when using Terraform 0.12 or later.\n\nThe `references` value within the [provisioner namespace](#namespace-provisioners)\ncontains the identifiers within non-constant expressions found in [`config`](#provisioners-value-config).\nSee the [documentation on `references`](#references-with-terraform-0-12) for more information.\n\n<a id=\"provisioners-value-type\" \/>\n\n### Value: `type`\n\n* **Value Type:** String.\n\nThe `type` value within the [provisioner namespace](#namespace-provisioners)\nrepresents the type of the specific provisioner.\n\nAs an example, in the following resource block:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  # ...\n\n  provisioner \"local-exec\" {\n    command = \"echo ${self.private_ip} > file.txt\"\n  }\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.resources.null_resource.foo.provisioners[0].type is \"local-exec\" }\n```\n\n## Namespace: Module Configuration\n\nThe **module configuration** namespace displays data on _module 
configuration_\nas it is given within a `module` block. This means that the namespace concerns\nitself with the contents of the declaration block (example: the `source`\nparameter and variable assignment keys), not the data within the module\n(example: any contained resources or data sources). For the latter, the module\ninstance would need to be looked up with the [`module()`\nfunction](#root-function-module).\n\n<a id=\"modules-value-source\" \/>\n\n### Value: `source`\n\n* **Value Type:** String.\n\nThe `source` value within the [module configuration\nnamespace](#namespace-module-configuration) represents the module source path as\nsupplied to the module configuration.\n\nAs an example, given the module declaration block:\n\n```hcl\nmodule \"foo\" {\n  source = \".\/foo\"\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.modules.foo.source is \".\/foo\" }\n```\n\n<a id=\"modules-value-version\" \/>\n\n### Value: `version`\n\n* **Value Type:** String.\n\nThe `version` value within the [module configuration\nnamespace](#namespace-module-configuration) represents the [version\nconstraint][module-version-constraint] for modules that support it, such as\nmodules within the [Terraform Module Registry][terraform-module-registry] or the\n[HCP Terraform private module registry][tfe-private-registry].\n\n[module-version-constraint]: \/terraform\/language\/modules#module-versions\n\n[terraform-module-registry]: https:\/\/registry.terraform.io\/\n\n[tfe-private-registry]: \/terraform\/cloud-docs\/registry\n\nAs an example, given the module declaration block:\n\n```hcl\nmodule \"foo\" {\n  source  = \"foo\/bar\"\n  version = \"~> 1.2\"\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.modules.foo.version is \"~> 1.2\" }\n```\n\n<a id=\"modules-value-config\" \/>\n\n### Value: `config`\n\n* **Value Type:** A string-keyed map of values.\n\n-> 
**With Terraform 0.11 or earlier**, if the config value is defined as an\nexpression (and not a static value), the value will be in its raw,\nnon-interpolated string. **With Terraform 0.12 or later**, any non-static\nvalues (such as interpolated strings) are not present and\n[`references`](#modules-value-references) should be used instead.\n\nThe `config` value within the [module configuration\nnamespace](#namespace-module-configuration) represents the values of the keys\nwithin the module configuration. This is every key within a module declaration\nblock except [`source`](#modules-value-source) and [`version`](#modules-value-version), which\nhave their own values.\n\nAs an example, given the module declaration block:\n\n```hcl\nmodule \"foo\" {\n  source = \".\/foo\"\n\n  bar = \"baz\"\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.modules.foo.config.bar is \"baz\" }\n```\n\n<a id=\"modules-value-references\" \/>\n\n### Value: `references`\n\n* **Value Type:** A string-keyed map of list values containing strings.\n\n-> **Note:** This value is only present when using Terraform 0.12 or later.\n\nThe `references` value within the [module configuration namespace](#namespace-module-configuration)\ncontains the identifiers within non-constant expressions found in [`config`](#modules-value-config).\nSee the [documentation on `references`](#references-with-terraform-0-12) for more information.\n\n## Namespace: Outputs\n\nThe **output namespace** represents _declared_ output data within a\nconfiguration. As such, configuration for the [`value`](#outputs-value-value) attribute\nwill be in its raw form, and not yet interpolated. 
For fully interpolated output\nvalues, see the [`tfstate` import][ref-tfe-sentinel-tfstate].\n\n[ref-tfe-sentinel-tfstate]: \/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfstate\n\nThis namespace is indexed by output name.\n\n<a id=\"outputs-value-depends_on\" \/>\n\n### Value: `depends_on`\n\n* **Value Type:** A list of strings.\n\nThe `depends_on` value within the [output namespace](#namespace-outputs)\nrepresents any _explicit_ dependencies for this output. For more information,\nsee the [depends_on output setting][ref-depends_on] within the general Terraform\ndocumentation.\n\n[ref-depends_on]: \/terraform\/language\/values\/outputs#depends_on\n\nAs an example, given the following output declaration block:\n\n```hcl\noutput \"id\" {\n  depends_on = [\"null_resource.bar\"]\n  value      = \"${null_resource.foo.id}\"\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.outputs.id.depends_on[0] is \"null_resource.bar\" }\n```\n\n<a id=\"outputs-value-description\" \/>\n\n### Value: `description`\n\n* **Value Type:** String.\n\nThe `description` value within the [output namespace](#namespace-outputs)\nrepresents the defined description for this output.\n\nAs an example, given the following output declaration block:\n\n```hcl\noutput \"id\" {\n  description = \"foobar\"\n  value       = \"${null_resource.foo.id}\"\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.outputs.id.description is \"foobar\" }\n```\n\n<a id=\"outputs-value-sensitive\" \/>\n\n### Value: `sensitive`\n\n* **Value Type:** Boolean.\n\nThe `sensitive` value within the [output namespace](#namespace-outputs)\nrepresents if this value has been marked as sensitive or not.\n\nAs an example, given the following output declaration block:\n\n```hcl\noutput \"id\" {\n  sensitive = true\n  value     = \"${null_resource.foo.id}\"\n}\n```\n\nThe following policy 
would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.outputs.id.sensitive }\n```\n\n<a id=\"outputs-value-value\" \/>\n\n### Value: `value`\n\n* **Value Type:** Any primitive type, list, or map.\n\nThe `value` value within the [output namespace](#namespace-outputs) represents\nthe defined value for the output as declared in the configuration. Primitives\nwill bear the implicit type of their declaration (string, int, float, or bool),\nand maps and lists will be represented as such.\n\n-> **With Terraform 0.11 or earlier**, if the config value is defined as an\nexpression (and not a static value), the value will be in its raw,\nnon-interpolated string. **With Terraform 0.12 or later**, any non-static\nvalues (such as interpolated strings) are not present and\n[`references`](#outputs-value-references) should be used instead.\n\nAs an example, given the following output declaration block:\n\n```hcl\noutput \"id\" {\n  value = \"${null_resource.foo.id}\"\n}\n```\n\nWith Terraform 0.11 or earlier the following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.outputs.id.value is \"${null_resource.foo.id}\" }\n```\n\n<a id=\"outputs-value-references\" \/>\n\n### Value: `references`\n\n* **Value Type:**
List of strings.\n\n-> **Note:** This value is only present when using Terraform 0.12 or later.\n\nThe `references` value within the [output namespace](#namespace-outputs)\ncontains the names of any referenced identifiers when [`value`](#outputs-value-value)\nis a non-constant expression.\n\nAs an example, given the following output declaration block:\n\n```hcl\noutput \"id\" {\n  value = \"${null_resource.foo.id}\"\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.outputs.id.references contains \"null_resource.foo.id\" }\n```\n\n## Namespace: Providers\n\nThe **provider namespace** represents data on the declared providers within a\nnamespace.\n\nThis namespace is indexed by provider type and _only_ contains data about\nproviders when actually declared. If you are using a completely implicit\nprovider configuration, this namespace will be empty.\n\nThis namespace is populated based on the following criteria:\n\n* The top-level namespace [`config`](#providers-value-config) and\n  [`version`](#providers-value-version) values are populated with the configuration and\n  version information from the default provider (the provider declaration that\n  lacks an alias).\n* Any aliased providers are added as namespaces within the\n  [`alias`](#providers-value-alias) value.\n* If a module lacks a default provider configuration, the top-level `config` and\n  `version` values will be empty.\n\n<a id=\"providers-value-alias\" \/>\n\n### Value: `alias`\n\n* **Value Type:** A map of [provider namespaces](#namespace-providers), indexed\n  by alias.\n\nThe `alias` value within the [provider namespace](#namespace-providers)\nrepresents all declared [non-default provider\ninstances][ref-tf-provider-instances] for a specific provider type, indexed by\ntheir specific alias.\n\n[ref-tf-provider-instances]: \/terraform\/language\/providers\/configuration#alias-multiple-provider-configurations\n\nThe return type is a 
provider namespace with the data for the instance in\nquestion loaded. The `alias` key will not be available within this namespace.\n\nAs an example, given the following provider declaration block:\n\n```hcl\nprovider \"aws\" {\n  alias  = \"east\"\n  region = \"us-east-1\"\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.providers.aws.alias.east.config.region is \"us-east-1\" }\n```\n\n<a id=\"providers-value-config\" \/>\n\n### Value: `config`\n\n* **Value Type:** A string-keyed map of values.\n\n-> **With Terraform 0.11 or earlier**, if the config value is defined as an\nexpression (and not a static value), the value will be in its raw,\nnon-interpolated string. **With Terraform 0.12 or later**, any non-static\nvalues (such as interpolated strings) are not present and\n[`references`](#providers-value-references) should be used instead.\n\nThe `config` value within the [provider namespace](#namespace-providers)\nrepresents the values of the keys within the provider's configuration, with the\nexception of the provider version, which is represented by the\n[`version`](#providers-value-version) value. 
[`alias`](#providers-value-alias) is also not included\nwhen the provider is aliased.\n\nAs an example, given the following provider declaration block:\n\n```hcl\nprovider \"aws\" {\n  region = \"us-east-1\"\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.providers.aws.config.region is \"us-east-1\" }\n```\n\n<a id=\"providers-value-references\" \/>\n\n### Value: `references`\n\n* **Value Type:** A string-keyed map of list values containing strings.\n\n-> **Note:** This value is only present when using Terraform 0.12 or later.\n\nThe `references` value within the [provider namespace](#namespace-providers)\ncontains the identifiers within non-constant expressions found in [`config`](#providers-value-config).\nSee the [documentation on `references`](#references-with-terraform-0-12) for more information.\n\n<a id=\"providers-value-version\" \/>\n\n### Value: `version`\n\n* **Value Type:** String.\n\nThe `version` value within the [provider namespace](#namespace-providers)\nrepresents the explicit expected version of the supplied provider. This includes\nthe pessimistic operator.\n\nAs an example, given the following provider declaration block:\n\n```hcl\nprovider \"aws\" {\n  version = \"~> 1.34\"\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.providers.aws.version is \"~> 1.34\" }\n```\n\n## Namespace: Variables\n\nThe **variable namespace** represents _declared_ variable data within a\nconfiguration. 
As such, static data can be extracted, such as defaults, but not\ndynamic data, such as the current value of a variable within a plan (although\nthis can be extracted within the [`tfplan` import][ref-tfe-sentinel-tfplan]).\n\n[ref-tfe-sentinel-tfplan]: \/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan\n\nThis namespace is indexed by variable name.\n\n<a id=\"variables-value-default\" \/>\n\n### Value: `default`\n\n* **Value Type:** Any primitive type, list, map, or `null`.\n\nThe `default` value within the [variable namespace](#namespace-variables)\nrepresents the default for the variable as declared in the configuration.\n\nThe actual value will be as configured. Primitives will bear the implicit type\nof their declaration (string, int, float, or bool), and maps and lists will be\nrepresented as such.\n\nIf no default is present, the value will be [`null`][ref-sentinel-null] (not to\nbe confused with [`undefined`][ref-sentinel-undefined]).\n\n[ref-sentinel-null]: https:\/\/docs.hashicorp.com\/sentinel\/language\/spec#null\n\n[ref-sentinel-undefined]: https:\/\/docs.hashicorp.com\/sentinel\/language\/undefined\n\nAs an example, given the following variable blocks:\n\n```hcl\nvariable \"foo\" {\n  default = \"bar\"\n}\n\nvariable \"number\" {\n  default = 42\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfconfig\"\n\ndefault_foo = rule { tfconfig.variables.foo.default is \"bar\" }\ndefault_number = rule { tfconfig.variables.number.default is 42 }\n\nmain = rule { default_foo and default_number }\n```\n\n<a id=\"variables-value-description\" \/>\n\n### Value: `description`\n\n* **Value Type:** String.\n\nThe `description` value within the [variable namespace](#namespace-variables)\nrepresents the description of the variable, as provided in configuration.\n\nAs an example, given the following variable block:\n\n```hcl\nvariable \"foo\" {\n  description = \"foobar\"\n}\n```\n\nThe following policy would evaluate to 
`true`:\n\n```python\nimport \"tfconfig\"\n\nmain = rule { tfconfig.variables.foo.description is \"foobar\" }\n```
and  references  should be used instead      python import  tfconfig     filename references is a list of string values containing the references used in the expression filename references   tfconfig resources local file accounts references filename  main   rule     filename references contains  var domain  and   filename references contains  var subdomain         The  references  value is present in any namespace where non constant configuration values can be expressed  This is essentially every namespace which has a  config  value as well as the  outputs  namespace        Note    Remember  this import enforces policy around the literal Terraform configuration and not the final values as a result of invoking Terraform  If you want to write policy around the  result  of expressions used within configuration blocks  for example  if you wanted to ensure the final value of  filename  above includes  accounts txt    you likely want to use the   tfplan    terraform cloud docs policy enforcement sentinel import tfplan  import      Namespace  Root  The root level namespace consists of the values and functions documented below   In addition to this  the root level  data    modules    providers    resources   and  variables  keys all alias to their corresponding namespaces within the  module namespace   namespace module     a id  root function module          Function   module         module   func ADDR           Return Type    A  module namespace   namespace module    The  module    function in the  root namespace   namespace root  returns the  module namespace   namespace module  for a particular module address   The address must be a list and is the module address  split on the period        excluding the root module   Hence  a module with an address of simply  foo   or  root foo   would be    foo     and a module within that  so address  foo bar   would be read as    foo    bar        null   ref null  is returned if a module address is invalid  or if the module is not 
present in the configuration    ref null   https   docs hashicorp com sentinel language spec null  As an example  given the following module block      hcl module  foo                   If the module contained the following content      hcl resource  null resource   foo      triggers         foo    bar             The following policy would evaluate to  true       python import  tfconfig   main   rule   subject module   foo    resources null resource foo config triggers 0  foo is  bar          a id  root value module paths          Value   module paths       Value Type    List of a list of strings   The  module paths  value within the  root namespace   namespace root  is a list of all of the modules within the Terraform configuration   Modules not present in the configuration will not be present here  even if they are present in the diff or state   This data is represented as a list of a list of strings  with the inner list being the module address  split on the period         The root module is included in this list  represented as an empty inner list   As an example  if the following module block was present within a Terraform configuration      hcl module  foo                   The value of  module paths  would be                 foo           And the following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig module paths contains   foo               Iterating Through Modules  Iterating through all modules to find particular resources can be useful  This  example  iterate over modules  shows how to use  module paths  with the   module    function   function module   to find all resources of a particular type from all modules using the  tfplan  import  By changing  tfplan  in this function to  tfconfig   you could make a similar function find all resources of a specific type in the Terraform configuration    iterate over modules    terraform cloud docs policy enforcement sentinel sentinel imports     Namespace  Module  The   
module namespace   can be loaded by calling   module      root function module  for a particular module   It can be used to load the following child namespaces      data    Loads the  resource namespace   namespace resources data sources     filtered against data sources     modules    Loads the  module configuration   namespace   namespace module configuration      outputs    Loads the  output namespace   namespace outputs      providers    Loads the  provider namespace   namespace providers      resources    Loads the  resource   namespace   namespace resources data sources   filtered against resources     variables    Loads the  variable namespace   namespace variables        Root Namespace Aliases  The root level  data    modules    providers    resources   and  variables  keys all alias to their corresponding namespaces within the module namespace  loaded for the root module  They are the equivalent of running  module     KEY     a id  namespace resources    a      Namespace  Resources Data Sources  The   resource namespace   is a namespace  type  that applies to both resources  accessed by using the  resources  namespace key  and data sources  accessed using the  data  namespace key    Accessing an individual resource or data source within each respective namespace can be accomplished by specifying the type and name  in the syntax   resources data  TYPE NAME    In addition  each of these namespace levels is a map  allowing you to filter based on type and name  Some examples of multi level access are below     To fetch all  aws instance  resources within the root module  you can specify    tfconfig resources aws instance   This would give you a map of resource   namespaces indexed from the names of each resource   foo    bar   and so   on     To fetch all resources within the root module  irrespective of type  use    tfconfig resources   This is indexed by type  as shown above with    tfconfig resources aws instance   with names being the next level down   As 
an example  perhaps you wish to deny use of the  local file  resource in your configuration  Consider the following resource block      hcl resource  local file   foo        content        foo       filename      path module  foo bar         The following policy would fail      python import  tfconfig   main   rule   tfconfig resources not contains  local file         Further explanation of the namespace will be in the context of resources  As mentioned  when operating on data sources  use the same syntax  except with  data  in place of  resources     a id  resources value config          Value   config       Value Type    A string keyed map of values   The  config  value within the  resource namespace   namespace resources data sources  is a map of key value pairs that directly map to Terraform config keys and values        With Terraform 0 11 or earlier    if the config value is defined as an expression  and not a static value   the value will be in its raw  non interpolated string    With Terraform 0 12 or later    any non static values  such as interpolated strings  are not present and   references    resources value references  should be used instead   As an example  consider the following resource block      hcl resource  local file   accounts      content     some text    filename    accounts txt         In this example  one might want to access  filename  to validate that the correct file name is used  Given the above example  the following policy would evaluate to  true       python import  tfconfig   main   rule    tfconfig resources local file accounts config filename is  accounts txt          a id  resources value references          Value   references       Value Type    A string keyed map of list values containing strings        Note    This value is only present when using Terraform 0 12 or later   The  references  value within the  resource namespace   namespace resources data sources  contains the identifiers within non constant expressions found 
in   config    resources value config   See the  documentation on  references    references with terraform 0 12  for more information    a id  resources value provisioners          Value   provisioners       Value Type    List of  provisioner namespaces   namespace provisioners    The  provisioners  value within the  resource namespace   namespace resources  represents the  provisioners  ref tf provisioners  within a specific resource   Provisioners are listed in the order they were provided in the configuration file   While the  provisioners  value will be present within data sources  it will always be an empty map  in Terraform 0 11  or  null   in Terraform 0 12  since data sources cannot actually have provisioners   The data within a provisioner can be inspected via the returned  provisioner namespace   namespace provisioners     ref tf provisioners    terraform language resources provisioners syntax     Namespace  Provisioners  The   provisioner namespace   represents the configuration for a particular  provisioner  ref tf provisioners  within a specific resource    a id  provisioners value config          Value   config       Value Type    A string keyed map of values   The  config  value within the  provisioner namespace   namespace provisioners  represents the values of the keys within the provisioner        With Terraform 0 11 or earlier    if the config value is defined as an expression  and not a static value   the value will be in its raw  non interpolated string    With Terraform 0 12 or later    any non static values  such as interpolated strings  are not present and   references    provisioners value references  should be used instead   As an example  given the following resource block      hcl resource  null resource   foo               provisioner  local exec        command    echo   self private ip    file txt             The following policy would evaluate to  true       python import  tfconfig   main   rule    tfconfig resources null resource foo 
provisioners 0  config command is  echo   self private ip    file txt          a id  provisioners value references          Value   references       Value Type    A string keyed map of list values containing strings        Note    This value is only present when using Terraform 0 12 or later   The  references  value within the  provisioner namespace   namespace provisioners  contains the identifiers within non constant expressions found in   config    provisioners value config   See the  documentation on  references    references with terraform 0 12  for more information    a id  provisioners value type          Value   type       Value Type    String   The  type  value within the  provisioner namespace   namespace provisioners  represents the type of the specific provisioner   As an example  in the following resource block      hcl resource  null resource   foo               provisioner  local exec        command    echo   self private ip    file txt             The following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig resources null resource foo provisioners 0  type is  local exec            Namespace  Module Configuration  The   module configuration   namespace displays data on  module configuration  as it is given within a  module  block  This means that the namespace concerns itself with the contents of the declaration block  example  the  source  parameter and variable assignment keys   not the data within the module  example  any contained resources or data sources   For the latter  the module instance would need to be looked up with the   module    function   root function module     a id  modules value source          Value   source       Value Type    String   The  source  value within the  module configuration namespace   namespace module configuration  represents the module source path as supplied to the module configuration   As an example  given the module declaration block      hcl module  foo      source     
 foo         The following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig modules foo source is    foo          a id  modules value version          Value   version       Value Type    String   The  version  value within the  module configuration namespace   namespace module configuration  represents the  version constraint  module version constraint  for modules that support it  such as modules within the  Terraform Module Registry  terraform module registry  or the  HCP Terraform private module registry  tfe private registry     module version constraint    terraform language modules module versions   terraform module registry   https   registry terraform io    tfe private registry    terraform cloud docs registry  As an example  given the module declaration block      hcl module  foo      source     foo bar    version       1 2         The following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig modules foo version is     1 2          a id  modules value config          Value   config       Value Type    A string keyed map of values        With Terraform 0 11 or earlier    if the config value is defined as an expression  and not a static value   the value will be in its raw  non interpolated string    With Terraform 0 12 or later    any non static values  such as interpolated strings  are not present and   references    modules value references  should be used instead   The  config  value within the  module configuration namespace   namespace module configuration  represents the values of the keys within the module configuration  This is every key within a module declaration block except   source    modules value source  and   version    modules value version   which have their own values   As an example  given the module declaration block      hcl module  foo      source      foo     bar    baz         The following policy would evaluate to  true       python import  tfconfig   main   
rule   tfconfig modules foo config bar is  baz          a id  modules value references          Value   references       Value Type    A string keyed map of list values containing strings        Note    This value is only present when using Terraform 0 12 or later   The  references  value within the  module configuration namespace   namespace module configuration  contains the identifiers within non constant expressions found in   config    modules value config   See the  documentation on  references    references with terraform 0 12  for more information      Namespace  Outputs  The   output namespace   represents  declared  output data within a configuration  As such  configuration for the   value    outputs value value  attribute will be in its raw form  and not yet interpolated  For fully interpolated output values  see the   tfstate  import  ref tfe sentinel tfstate     ref tfe sentinel tfstate    terraform cloud docs policy enforcement sentinel import tfstate  This namespace is indexed by output name    a id  outputs value depends on          Value   depends on       Value Type    A list of strings   The  depends on  value within the  output namespace   namespace outputs  represents any  explicit  dependencies for this output  For more information  see the  depends on output setting  ref depends on  within the general Terraform documentation    ref depends on    terraform language values outputs depends on  As an example  given the following output declaration block      hcl output  id      depends on     null resource bar     value           null resource foo id          The following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig outputs id depends on 0  is  null resource bar          a id  outputs value description          Value   description       Value Type    String   The  description  value within the  output namespace   namespace outputs  represents the defined description for this output   As an example  given 
the following output declaration block      hcl output  id      description    foobar    value            null resource foo id          The following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig outputs id description is  foobar          a id  outputs value sensitive          Value   sensitive       Value Type    Boolean   The  sensitive  value within the  output namespace   namespace outputs  represents if this value has been marked as sensitive or not   As an example  given the following output declaration block      hcl output  id      sensitive   true   value          null resource foo id          The following policy would evaluate to  true       python import  tfconfig   main   rule   subject outputs id sensitive         a id  outputs value value          Value   value       Value Type    Any primitive type  list or map   The  value  value within the  output namespace   namespace outputs  represents the defined value for the output as declared in the configuration  Primitives will bear the implicit type of their declaration  string  int  float  or bool   and maps and lists will be represented as such        With Terraform 0 11 or earlier    if the config value is defined as an expression  and not a static value   the value will be in its raw  non interpolated string    With Terraform 0 12 or later    any non static values  such as interpolated strings  are not present and   references    outputs value references  should be used instead   As an example  given the following output declaration block      hcl output  id      value      null resource foo id          With Terraform 0 11 or earlier the following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig outputs id value is    null resource foo id           a id  outputs value references          Value   references       Value Type     List of strings        Note    This value is only present when using Terraform 0 12 or later   The  
references  value within the  output namespace   namespace outputs  contains the names of any referenced identifiers when   value    outputs value value  is a non constant expression   As an example  given the following output declaration block      hcl output  id      value      null resource foo id          The following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig outputs id references contains  null resource foo id            Namespace  Providers  The   provider namespace   represents data on the declared providers within a namespace   This namespace is indexed by provider type and  only  contains data about providers when actually declared  If you are using a completely implicit provider configuration  this namespace will be empty   This namespace is populated based on the following criteria     The top level namespace   config    providers value config  and     version    providers value version  values are populated with the configuration and   version information from the default provider  the provider declaration that   lacks an alias     Any aliased providers are added as namespaces within the     alias    providers value alias  value    If a module lacks a default provider configuration  the top level  config  and    version  values will be empty    a id  providers value alias          Value   alias       Value Type    A map of  provider namespaces   namespace providers   indexed   by alias   The  alias  value within the  provider namespace   namespace providers  represents all declared  non default provider instances  ref tf provider instances  for a specific provider type  indexed by their specific alias    ref tf provider instances    terraform language providers configuration alias multiple provider configurations  The return type is a provider namespace with the data for the instance in question loaded  The  alias  key will not be available within this namespace   As an example  given the following provider 
declaration block      hcl provider  aws      alias     east    region    us east 1         The following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig providers aws alias east config region is  us east 1          a id  providers value config          Value   config       Value Type    A string keyed map of values        With Terraform 0 11 or earlier    if the config value is defined as an expression  and not a static value   the value will be in its raw  non interpolated string    With Terraform 0 12 or later    any non static values  such as interpolated strings  are not present and   references    providers value references  should be used instead   The  config  value within the  provider namespace   namespace providers  represents the values of the keys within the provider s configuration  with the exception of the provider version  which is represented by the   version    providers value version  value    alias    providers value alias  is also not included when the provider is aliased   As an example  given the following provider declaration block      hcl provider  aws      region    us east 1         The following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig providers aws config region is  us east 1          a id  providers value references          Value   references       Value Type    A string keyed map of list values containing strings        Note    This value is only present when using Terraform 0 12 or later   The  references  value within the  provider namespace   namespace providers  contains the identifiers within non constant expressions found in   config    providers value config   See the  documentation on  references    references with terraform 0 12  for more information    a id  providers value version          Value   version       Value Type    String   The  version  value within the  provider namespace   namespace providers  represents the explicit expected 
version of the supplied provider  This includes the pessimistic operator   As an example  given the following provider declaration block      hcl provider  aws      version       1 34         The following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig providers aws version is     1 34            Namespace  Variables  The   variable namespace   represents  declared  variable data within a configuration  As such  static data can be extracted  such as defaults  but not dynamic data  such as the current value of a variable within a plan  although this can be extracted within the   tfplan  import  ref tfe sentinel tfplan      ref tfe sentinel tfplan    terraform cloud docs policy enforcement sentinel import tfplan  This namespace is indexed by variable name    a id  variables value default          Value   default       Value Type    Any primitive type  list  map  or  null    The  default  value within the  variable namespace   namespace variables  represents the default for the variable as declared in the configuration   The actual value will be as configured  Primitives will bear the implicit type of their declaration  string  int  float  or bool   and maps and lists will be represented as such   If no default is present  the value will be   null   ref sentinel null   not to be confused with   undefined   ref sentinel undefined      ref sentinel null   https   docs hashicorp com sentinel language spec null   ref sentinel undefined   https   docs hashicorp com sentinel language undefined  As an example  given the following variable blocks      hcl variable  foo      default    bar     variable  number      default   42        The following policy would evaluate to  true       python import  tfconfig   default foo   rule   tfconfig variables foo default is  bar    default number   rule   tfconfig variables number default is 42    main   rule   default foo and default number         a id  variables value description          Value  
 description       Value Type    String   The  description  value within the  variable namespace   namespace variables  represents the description of the variable  as provided in configuration   As an example  given the following variable block      hcl variable  foo      description    foobar         The following policy would evaluate to  true       python import  tfconfig   main   rule   tfconfig variables foo description is  foobar       "}
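The record above documents the address convention used by the Sentinel `module()` function: a dotted module address is split on the period, with the root module excluded (the root itself is the empty list, and a leading `root.` segment is equivalent to omitting it). As a quick illustration outside of Sentinel, here is a minimal Python sketch of that convention; the `module_addr` helper is hypothetical and not part of any Terraform or Sentinel API:

```python
def module_addr(address: str) -> list:
    """Convert a dotted module address into the list form that the
    Sentinel module() function expects, per the docs above."""
    if address in ("", "root"):
        return []  # the root module is represented as an empty list
    parts = address.split(".")
    if parts[0] == "root":
        parts = parts[1:]  # "root.foo" is equivalent to "foo"
    return parts

# Examples matching the documented convention:
assert module_addr("foo") == ["foo"]
assert module_addr("root.foo") == ["foo"]
assert module_addr("foo.bar") == ["foo", "bar"]
assert module_addr("root") == []
```

In Sentinel itself you simply pass the literal list, e.g. `tfconfig.module(["foo", "bar"])`; the helper only illustrates how a dotted address maps to that list.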
{"questions":"terraform Import tfplan page title tfplan Imports Sentinel HCP Terraform The tfplan import provides access to a Terraform plan A Terraform plan is the file created as a result of terraform plan and is the input to terraform apply The plan represents the changes that Terraform needs to make to infrastructure to reach the desired state represented by the configuration","answers":"---\npage_title: tfplan - Imports - Sentinel - HCP Terraform\ndescription: >-\n  The tfplan import provides access to a Terraform plan. A Terraform plan is the\n  file created as a result of `terraform plan` and is the input to `terraform\n  apply`. The plan represents the changes that Terraform needs to make to\n  infrastructure to reach the desired state represented by the configuration.\n---\n\n# Import: tfplan\n\n~> **Warning:** The `tfplan` import is now deprecated and will be permanently removed in August 2025.\nWe recommend that you start using the updated [tfplan\/v2](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan-v2) import as soon as possible to avoid disruptions.\nThe `tfplan\/v2` import offers improved functionality and is designed to better support your policy enforcement needs.\n\nThe `tfplan` import provides access to a Terraform plan. A Terraform plan is the\nfile created as a result of `terraform plan` and is the input to `terraform\napply`. The plan represents the changes that Terraform needs to make to\ninfrastructure to reach the desired state represented by the configuration.\n\nIn addition to the diff data available in the plan, there is an\n[`applied`](#value-applied) state available that merges the plan with the state\nto create the planned state after apply.\n\nFinally, this import also allows you to access the configuration files and the\nTerraform state at the time the plan was run. 
See the section on [accessing a\nplan's state and configuration\ndata](#accessing-a-plan-39-s-state-and-configuration-data) for more information.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n## Namespace Overview\n\nThe following is a tree view of the import namespace. For more detail on a\nparticular part of the namespace, see below.\n\n-> Note that the root-level alias keys shown here (`data`, `path`, and\n`resources`) are shortcuts to a [module namespace](#namespace-module) scoped to\nthe root module. For more details, see the section on [root namespace\naliases](#root-namespace-aliases).\n\n```\ntfplan\n\u251c\u2500\u2500 module() (function)\n\u2502   \u2514\u2500\u2500 (module namespace)\n\u2502       \u251c\u2500\u2500 path ([]string)\n\u2502       \u251c\u2500\u2500 data\n\u2502       \u2502   \u2514\u2500\u2500 TYPE.NAME[NUMBER]\n\u2502       \u2502       \u251c\u2500\u2500 applied (map of keys)\n\u2502       \u2502       \u2514\u2500\u2500 diff\n\u2502       \u2502           \u2514\u2500\u2500 KEY\n\u2502       \u2502               \u251c\u2500\u2500 computed (bool)\n\u2502       \u2502               \u251c\u2500\u2500 new (string)\n\u2502       \u2502               \u2514\u2500\u2500 old (string)\n\u2502       \u2514\u2500\u2500 resources\n\u2502           \u2514\u2500\u2500 TYPE.NAME[NUMBER]\n\u2502               \u251c\u2500\u2500 applied (map of keys)\n\u2502               \u251c\u2500\u2500 destroy (bool)\n\u2502               \u251c\u2500\u2500 requires_new (bool)\n\u2502               \u2514\u2500\u2500 diff\n\u2502                   \u2514\u2500\u2500 KEY\n\u2502                       \u251c\u2500\u2500 computed (bool)\n\u2502                       \u251c\u2500\u2500 new (string)\n\u2502                       \u2514\u2500\u2500 old (string)\n\u251c\u2500\u2500 module_paths ([][]string)\n\u251c\u2500\u2500 terraform_version (string)\n\u251c\u2500\u2500 
variables (map of keys)\n\u2502\n\u251c\u2500\u2500 data (root module alias)\n\u251c\u2500\u2500 path (root module alias)\n\u251c\u2500\u2500 resources (root module alias)\n\u2502\n\u251c\u2500\u2500 config (tfconfig namespace alias)\n\u2514\u2500\u2500 state (tfstate import alias)\n```\n\n## Namespace: Root\n\nThe root-level namespace consists of the values and functions documented below.\n\nIn addition to this, the root-level `data`, `path`, and `resources` keys alias\nto their corresponding namespaces or values within the [module\nnamespace](#namespace-module).\n\n### Accessing a Plan's State and Configuration Data\n\nThe `config` and `state` keys alias to the [`tfconfig`][import-tfconfig] and\n[`tfstate`][import-tfstate] namespaces, respectively, with the data sourced from\nthe Terraform _plan_ (as opposed to actual configuration and state).\n\n[import-tfconfig]: \/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfconfig\n\n[import-tfstate]: \/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfstate\n\n-> Note that these aliases are not represented as maps. While they will appear\nempty when viewed as maps, the specific import namespace keys will still be\naccessible.\n\n-> Note that while current versions of HCP Terraform source configuration and\nstate data from the plan for the Terraform run in question, future versions may\nsource data accessed through the `tfconfig` and `tfstate` imports (as opposed to\n`tfplan.config` and `tfplan.state`) from actual config bundles, or state as\nstored by HCP Terraform. 
When this happens, the distinction here will be useful -\nthe data in the aliased namespaces will be the config and state data as the\n_plan_ sees it, versus the actual \"physical\" data.\n\n### Function: `module()`\n\n```\nmodule = func(ADDR)\n```\n\n* **Return Type:** A [module namespace](#namespace-module).\n\nThe `module()` function in the [root namespace](#namespace-root) returns the\n[module namespace](#namespace-module) for a particular module address.\n\nThe address must be a list and is the module address, split on the period (`.`),\nexcluding the root module.\n\nHence, a module with an address of simply `foo` (or `root.foo`) would be\n`[\"foo\"]`, and a module within that (so address `foo.bar`) would be read as\n`[\"foo\", \"bar\"]`.\n\n[`null`][ref-null] is returned if a module address is invalid, or if the module\nis not present in the diff.\n\n[ref-null]: https:\/\/docs.hashicorp.com\/sentinel\/language\/spec#null\n\nAs an example, given the following module block:\n\n```hcl\nmodule \"foo\" {\n  # ...\n}\n```\n\nIf the module contained the following content:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfplan\"\n\nmain = rule { tfplan.module([\"foo\"]).resources.null_resource.foo[0].applied.triggers.foo is \"bar\" }\n```\n\n### Value: `module_paths`\n\n* **Value Type:** List of a list of strings.\n\nThe `module_paths` value within the [root namespace](#namespace-root) is a list\nof all of the modules within the Terraform diff for the current plan.\n\nModules not present in the diff will not be present here, even if they are\npresent in the configuration or state.\n\nThis data is represented as a list of a list of strings, with the inner list\nbeing the module address, split on the period (`.`).\n\nThe root module is included in this list, represented as an empty inner list, as\nlong as there are changes.\n\nAs an example, if the 
following module block was present within a Terraform\nconfiguration:\n\n```hcl\nmodule \"foo\" {\n  # ...\n}\n```\n\nThe value of `module_paths` would be:\n\n```\n[\n\t[],\n\t[\"foo\"],\n]\n```\n\nAnd the following policy would evaluate to `true`:\n\n```python\nimport \"tfplan\"\n\nmain = rule { tfplan.module_paths contains [\"foo\"] }\n```\n\n-> Note the above example only applies if the module is present in the diff.\n\n#### Iterating Through Modules\n\nIterating through all modules to find particular resources can be useful. This\n[example][iterate-over-modules] shows how to use `module_paths` with the\n[`module()` function](#function-module-) to find all resources of a\nparticular type from all modules that have pending changes using the `tfplan`\nimport.\n\n[iterate-over-modules]: \/terraform\/cloud-docs\/policy-enforcement\/sentinel#sentinel-imports\n\n### Value: `terraform_version`\n\n* **Value Type:** String.\n\nThe `terraform_version` value within the [root namespace](#namespace-root)\nrepresents the version of Terraform used to create the plan. This can be used to\nenforce a specific version of Terraform in a policy check.\n\nAs an example, the following policy would evaluate to `true`, as long as the\nplan was made with a version of Terraform in the 0.11.x series, excluding any\npre-release versions (example: `-beta1` or `-rc1`):\n\n```python\nimport \"tfplan\"\n\nmain = rule { tfplan.terraform_version matches \"^0\\\\.11\\\\.\\\\d+$\" }\n```\n\n### Value: `variables`\n\n* **Value Type:** A string-keyed map of values.\n\nThe `variables` value within the [root namespace](#namespace-root) represents\nall of the variables that were set when creating the plan. 
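\n\nFor instance, a minimal sketch (the root-module variable name `environment` and its expected value are assumptions for illustration):\n\n```python\nimport \"tfplan\"\n\n# fails the policy check unless the root-module variable\n# \"environment\" was set to \"production\" for this plan\nmain = rule { tfplan.variables.environment is \"production\" }\n```\n\n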
This will only\ncontain variables set for the root module.\n\nNote that unlike the [`default`][import-tfconfig-variables-default] value in the\n[`tfconfig` variables namespace][import-tfconfig-variables], primitive values\nhere are stringified, and type conversion must be performed to compare\nint, float, or boolean values. This only applies to variables\nthat are primitives themselves and not primitives within maps and lists, which\nwill be their original types.\n\n[import-tfconfig-variables-default]: \/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfconfig#value-default\n\n[import-tfconfig-variables]: \/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfconfig#namespace-variables\n\nIf a default was accepted for the particular variable, the default value will be\npopulated here.\n\nAs an example, given the following variable blocks:\n\n```hcl\nvariable \"foo\" {\n  default = \"bar\"\n}\n\nvariable \"number\" {\n  default = 42\n}\n\nvariable \"map\" {\n  default = {\n    foo    = \"bar\"\n    number = 42\n  }\n}\n```\n\nThe following policy would evaluate to `true`, if no values were entered to\nchange these variables:\n\n```python\nimport \"tfplan\"\n\ndefault_foo = rule { tfplan.variables.foo is \"bar\" }\ndefault_number = rule { tfplan.variables.number is \"42\" }\ndefault_map_string = rule { tfplan.variables.map[\"foo\"] is \"bar\" }\ndefault_map_int = rule { tfplan.variables.map[\"number\"] is 42 }\n\nmain = rule { default_foo and default_number and default_map_string and default_map_int }\n```\n\n## Namespace: Module\n\nThe **module namespace** can be loaded by calling\n[`module()`](#function-module-) for a particular module.\n\nIt can be used to load the following child namespaces, in addition to the values\ndocumented below:\n\n* `data` - Loads the [resource namespace](#namespace-resources-data-sources),\n  filtered against data sources.\n* `resources` - Loads the [resource\n  
namespace](#namespace-resources-data-sources), filtered against resources.\n\n### Root Namespace Aliases\n\nThe root-level `data` and `resources` keys both alias to their corresponding\nnamespaces within the module namespace, loaded for the root module. They are the\nequivalent of running `module([]).KEY`.\n\n### Value: `path`\n\n* **Value Type:** List of strings.\n\nThe `path` value within the [module namespace](#namespace-module) contains the\npath of the module that the namespace represents. This is represented as a list\nof strings.\n\nAs an example, if the following module block was present within a Terraform\nconfiguration:\n\n```hcl\nmodule \"foo\" {\n  # ...\n}\n```\n\nThe following policy would evaluate to `true` _only_ if the diff had changes for\nthat module:\n\n```python\nimport \"tfplan\"\n\nmain = rule { tfplan.module([\"foo\"]).path contains \"foo\" }\n```\n\n## Namespace: Resources\/Data Sources\n\nThe **resource namespace** is a namespace _type_ that applies to both resources\n(accessed by using the `resources` namespace key) and data sources (accessed\nusing the `data` namespace key).\n\nAccessing an individual resource or data source within each respective namespace\ncan be accomplished by specifying the type, name, and resource number (as if the\nresource or data source had a `count` value in it) in the syntax\n`[resources|data].TYPE.NAME[NUMBER]`. 
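\n\nAs a minimal sketch of this access syntax (the `aws_instance` resource named `foo` and its `instance_type` value are assumptions for illustration, and the resource must be present in the diff):\n\n```python\nimport \"tfplan\"\n\n# TYPE = aws_instance, NAME = foo, NUMBER = 0 (first instance)\nmain = rule { tfplan.resources.aws_instance.foo[0].applied.instance_type is \"t2.micro\" }\n```\n\n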
Note that NUMBER is always needed, even if\nyou did not use `count` in the resource.\n\nIn addition, each of these namespace levels is a map, allowing you to filter\nbased on type and name.\n\n-> The (somewhat strange) notation here of `TYPE.NAME[NUMBER]` may imply that\nthe inner resource index map is actually a list, but it's not - using the square\nbracket notation over the dotted notation (`TYPE.NAME.NUMBER`) is required here\nas an identifier cannot start with a number.\n\nSome examples of multi-level access are below:\n\n* To fetch all `aws_instance.foo` resource instances within the root module, you\n  can specify `tfplan.resources.aws_instance.foo`. This would then be indexed by\n  resource count index (`0`, `1`, `2`, and so on). Note that as mentioned above,\n  these elements must be accessed using square-bracket map notation (so `[0]`,\n  `[1]`, `[2]`, and so on) instead of dotted notation.\n* To fetch all `aws_instance` resources within the root module, you can specify\n  `tfplan.resources.aws_instance`. This would be indexed from the names of\n  each resource (`foo`, `bar`, and so on), with each of those maps containing\n  instances indexed by resource count index as per above.\n* To fetch all resources within the root module, irrespective of type, use\n  `tfplan.resources`. This is indexed by type, as shown above with\n  `tfplan.resources.aws_instance`, with names being the next level down, and so\n  on.\n\n~> When [resource targeting](\/terraform\/cli\/commands\/plan#resource-targeting) is in effect, `tfplan.resources` will only include the resources specified as targets for the run. This may lead to unexpected outcomes if a policy expects a resource to be present in the plan. To prohibit targeted runs altogether, ensure [`tfrun.target_addrs`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfrun#value-target_addrs) is undefined or empty.\n\nFurther explanation of the namespace will be in the context of resources. 
As\nmentioned, when operating on data sources, use the same syntax, except with\n`data` in place of `resources`.\n\n### Value: `applied`\n\n* **Value Type:** A string-keyed map of values.\n\nThe `applied` value within the [resource\nnamespace](#namespace-resources-data-sources) contains a \"predicted\"\nrepresentation of the resource's state post-apply. It's created by merging the\npending resource's diff on top of the existing data from the resource's state\n(if any). The map is a complex representation of these values with data going\nas far down as needed to represent any state values such as maps, lists, and\nsets.\n\nAs an example, given the following resource:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n```\n\nThe following policy would evaluate to `true` if the resource was in the diff:\n\n```python\nimport \"tfplan\"\n\nmain = rule { tfplan.resources.null_resource.foo[0].applied.triggers.foo is \"bar\" }\n```\n\n-> Note that some values will not be available in the `applied` state because\nthey cannot be known until the plan is actually applied. In Terraform 0.11 or\nearlier, these values are represented by a placeholder (the UUID value\n`74D93920-ED26-11E3-AC10-0800200C9A66`) and in Terraform 0.12 or later they\nare `undefined`. **In either case**, you should instead use the\n[`computed`](#value-computed) key within the [diff\nnamespace](#namespace-resource-diff) to determine that a computed value will\nexist.\n\n-> If a resource is being destroyed, its `applied` value is omitted from the\nnamespace and trying to fetch it will return undefined.\n\n### Value: `diff`\n\n* **Value Type:** A map of [diff namespaces](#namespace-resource-diff).\n\nThe `diff` value within the [resource\nnamespace](#namespace-resources-data-sources) contains the diff for a particular\nresource. 
Each key within the map links to a [diff\nnamespace](#namespace-resource-diff) for that particular key.\n\nNote that unlike the [`applied`](#value-applied) value, this map is not complex;\nthe map is only 1 level deep with each key possibly representing a diff for a\nparticular complex value within the resource.\n\nSee the below section for more details on the diff namespace, in addition to\nusage examples.\n\n### Value: `destroy`\n\n* **Value Type:** Boolean.\n\nThe `destroy` value within the [resource\nnamespace](#namespace-resources-data-sources) is `true` if a resource is being\ndestroyed for _any_ reason, including cases where it's being deleted as part of\na resource re-creation, in which case [`requires_new`](#value-requires_new) will\nalso be set.\n\nAs an example, given the following resource:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n```\n\nThe following policy would evaluate to `true` when `null_resource.foo` is being\ndestroyed:\n\n```python\nimport \"tfplan\"\n\nmain = rule { tfplan.resources.null_resource.foo[0].destroy }\n```\n\n### Value: `requires_new`\n\n* **Value Type:** Boolean.\n\nThe `requires_new` value within the [resource\nnamespace](#namespace-resources-data-sources) is `true` if the resource is still\npresent in the configuration, but must be replaced to satisfy its current diff.\nWhenever `requires_new` is `true`, [`destroy`](#value-destroy) is also `true`.\n\nAs an example, given the following resource:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n```\n\nThe following policy would evaluate to `true` if one of the `triggers` in\n`null_resource.foo` was being changed:\n\n```python\nimport \"tfplan\"\n\nmain = rule { tfplan.resources.null_resource.foo[0].requires_new }\n```\n\n## Namespace: Resource Diff\n\nThe **diff namespace** is a namespace that represents the diff for a specific\nattribute within a resource. 
For details on reading a particular attribute,\nsee the [`diff`](#value-diff) value in the [resource\nnamespace](#namespace-resources-data-sources).\n\n### Value: `computed`\n\n* **Value Type:** Boolean.\n\nThe `computed` value within the [diff namespace](#namespace-resource-diff) is\n`true` if the resource key in question depends on another value that isn't yet\nknown. Typically, that means the value it depends on belongs to a resource that\neither doesn't exist yet, or is changing state in such a way as to affect the\ndependent value so that it can't be known until the apply is complete.\n\n-> Keep in mind that when using `computed` with complex structures such as maps,\nlists, and sets, it's sometimes necessary to test the count attribute for the\nstructure, versus a key within it, depending on whether or not the diff has\nmarked the whole structure as computed. This is demonstrated in the example\nbelow.  Count keys are `%` for maps, and `#` for lists and sets. If you are\nhaving trouble determining the type of specific field within a resource, contact\nthe support team.\n\nAs an example, given the following resource:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n\nresource \"null_resource\" \"bar\" {\n  triggers = {\n    foo_id = \"${null_resource.foo.id}\"\n  }\n}\n```\n\nThe following policy would evaluate to `true`, if the `id` of\n`null_resource.foo` was currently not known, such as when the resource is\npending creation, or is being deleted and re-created:\n\n```python\nimport \"tfplan\"\n\nmain = rule { tfplan.resources.null_resource.bar[0].diff[\"triggers.%\"].computed }\n```\n\n### Value: `new`\n\n* **Value Type:** String.\n\nThe `new` value within the [diff namespace](#namespace-resource-diff) contains\nthe new value of a changing attribute, _if_ the value is known at plan time.\n\n-> `new` will be an empty string if the attribute's value is currently unknown.\nFor more details on detecting unknown values, 
see [`computed`](#value-computed).\n\nNote that this value is _always_ a string, regardless of the actual type of the\nvalue changing. [Type conversion][ref-sentinel-type-conversion] within policy\nmay be necessary to achieve the comparison needed.\n\n[ref-sentinel-type-conversion]: https:\/\/docs.hashicorp.com\/sentinel\/language\/values#type-conversion\n\nAs an example, given the following resource:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n```\n\nThe following policy would evaluate to `true`, if the resource was in the diff\nand each of the concerned keys were changing to new values:\n\n```python\nimport \"tfplan\"\n\nmain = rule { tfplan.resources.null_resource.foo[0].diff[\"triggers.foo\"].new is \"bar\" }\n```\n\n### Value: `old`\n\n* **Value Type:** String.\n\nThe `old` value within the [diff namespace](#namespace-resource-diff) contains\nthe old value of a changing attribute.\n\nNote that this value is _always_ a string, regardless of the actual type of the\nvalue changing. 
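\n\nBecause `old` is always a string, a numeric comparison needs an explicit conversion first; a sketch, assuming a hypothetical `triggers.number` key that previously held a number:\n\n```python\nimport \"tfplan\"\n\n# convert the stringified old value before comparing numerically\nmain = rule { int(tfplan.resources.null_resource.foo[0].diff[\"triggers.number\"].old) < 50 }\n```\n\n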
[Type conversion][ref-sentinel-type-conversion] within policy\nmay be necessary to achieve the comparison needed.\n\nIf the value did not exist in the previous state, `old` will always be an empty\nstring.\n\nAs an example, given the following resource:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"baz\"\n  }\n}\n```\n\nIf that resource was previously in config as:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfplan\"\n\nmain = rule { tfplan.resources.null_resource.foo[0].diff[\"triggers.foo\"].old is \"bar\" }\n```","site":"terraform","answers_cleaned":"    page title  tfplan   Imports   Sentinel   HCP Terraform description       The tfplan import provides access to a Terraform plan  A Terraform plan is the   file created as a result of  terraform plan  and is the input to  terraform   apply   The plan represents the changes that Terraform needs to make to   infrastructure to reach the desired state represented by the configuration         Import  tfplan       Warning    The  tfplan  import is now deprecated and will be permanently removed in August 2025  We recommend that you start using the updated  tfplan v2   terraform cloud docs policy enforcement sentinel import tfplan v2  import as soon as possible to avoid disruptions  The  tfplan v2  import offers improved functionality and is designed to better support your policy enforcement needs   The  tfplan  import provides access to a Terraform plan  A Terraform plan is the file created as a result of  terraform plan  and is the input to  terraform apply   The plan represents the changes that Terraform needs to make to infrastructure to reach the desired state represented by the configuration   In addition to the diff data available in the plan  there is an   applied    value applied  state available that merges the plan with the state to create the planned state 
after apply   Finally  this import also allows you to access the configuration files and the Terraform state at the time the plan was run  See the section on  accessing a plan s state and configuration data   accessing a plan 39 s state and configuration data  for more information        BEGIN  TFC only name pnp callout      include  tfc package callouts policies mdx       END  TFC only name pnp callout         Namespace Overview  The following is a tree view of the import namespace  For more detail on a particular part of the namespace  see below      Note that the root level alias keys shown here   data    path   and  resources   are shortcuts to a  module namespace   namespace module  scoped to the root module  For more details  see the section on  root namespace aliases   root namespace aliases        tfplan     module    function           module namespace              path    string              data                 TYPE NAME NUMBER                      applied  map of keys                      diff                         KEY                             computed  bool                              new  string                              old  string              resources                 TYPE NAME NUMBER                      applied  map of keys                      destroy  bool                      requires new  bool                      diff                         KEY                             computed  bool                              new  string                              old  string      module paths      string      terraform version  string      variables  map of keys        data  root module alias      path  root module alias      resources  root module alias        config  tfconfig namespace alias      state  tfstate import alias          Namespace  Root  The root level namespace consists of the values and functions documented below   In addition to this  the root level  data    path   and  resources  keys alias to their corresponding 
namespaces or values within the  module namespace   namespace module        Accessing a Plan s State and Configuration Data  The  config  and  state  keys alias to the   tfconfig   import tfconfig  and   tfstate   import tfstate  namespaces  respectively  with the data sourced from the Terraform  plan   as opposed to actual configuration and state     import tfconfig    terraform cloud docs policy enforcement sentinel import tfconfig   import tfstate    terraform cloud docs policy enforcement sentinel import tfstate     Note that these aliases are not represented as maps  While they will appear empty when viewed as maps  the specific import namespace keys will still be accessible      Note that while current versions of HCP Terraform source configuration and state data from the plan for the Terraform run in question  future versions may source data accessed through the  tfconfig  and  tfstate  imports  as opposed to  tfplan config  and  tfplan state   from actual config bundles  or state as stored by HCP Terraform  When this happens  the distinction here will be useful   the data in the aliased namespaces will be the config and state data as the  plan  sees it  versus the actual  physical  data       Function   module         module   func ADDR           Return Type    A  module namespace   namespace module    The  module    function in the  root namespace   namespace root  returns the  module namespace   namespace module  for a particular module address   The address must be a list and is the module address  split on the period        excluding the root module   Hence  a module with an address of simply  foo   or  root foo   would be    foo     and a module within that  so address  foo bar   would be read as    foo    bar        null   ref null  is returned if a module address is invalid  or if the module is not present in the diff    ref null   https   docs hashicorp com sentinel language spec null  As an example  given the following module block      hcl module  
foo                   If the module contained the following content      hcl resource  null resource   foo      triggers         foo    bar             The following policy would evaluate to  true       python import  tfplan   main   rule   tfplan module   foo    resources null resource foo 0  applied triggers foo is  bar             Value   module paths       Value Type    List of a list of strings   The  module paths  value within the  root namespace   namespace root  is a list of all of the modules within the Terraform diff for the current plan   Modules not present in the diff will not be present here  even if they are present in the configuration or state   This data is represented as a list of a list of strings  with the inner list being the module address  split on the period         The root module is included in this list  represented as an empty inner list  as long as there are changes   As an example  if the following module block was present within a Terraform configuration      hcl module  foo                   The value of  module paths  would be                 foo           And the following policy would evaluate to  true       python import  tfplan   main   rule   tfplan module paths contains   foo             Note the above example only applies if the module is present in the diff        Iterating Through Modules  Iterating through all modules to find particular resources can be useful  This  example  iterate over modules  shows how to use  module paths  with the   module    function   function module   to find all resources of a particular type from all modules that have pending changes using the  tfplan  import    iterate over modules    terraform cloud docs policy enforcement sentinel sentinel imports      Value   terraform version       Value Type    String   The  terraform version  value within the  root namespace   namespace root  represents the version of Terraform used to create the plan  This can be used to enforce a specific version of 
Terraform in a policy check   As an example  the following policy would evaluate to  true   as long as the plan was made with a version of Terraform in the 0 11 x series  excluding any pre release versions  example    beta1  or   rc1        python import  tfplan   main   rule   tfplan terraform version matches   0   11     d               Value   variables       Value Type    A string keyed map of values   The  variables  value within the  root namespace   namespace root  represents all of the variables that were set when creating the plan  This will only contain variables set for the root module   Note that unlike the   default   import tfconfig variables default  value in the   tfconfig  variables namespace  import tfconfig variables   primitive values here are stringified  and type conversion will need to be performed to perform comparison for int  float  or boolean values  This only applies to variables that are primitives themselves and not primitives within maps and lists  which will be their original types    import tfconfig variables default    terraform cloud docs policy enforcement sentinel import tfconfig value default   import tfconfig variables    terraform cloud docs policy enforcement sentinel import tfconfig namespace variables  If a default was accepted for the particular variable  the default value will be populated here   As an example  given the following variable blocks      hcl variable  foo      default    bar     variable  number      default   42    variable  map      default         foo       bar      number   42            The following policy would evaluate to  true   if no values were entered to change these variables      python import  tfplan   default foo   rule   tfplan variables foo is  bar    default number   rule   tfplan variables number is  42    default map string   rule   tfplan variables map  foo   is  bar    default map int   rule   tfplan variables map  number   is 42    main   rule   default foo and default number and 
default map string and default map int           Namespace  Module  The   module namespace   can be loaded by calling   module      function module   for a particular module   It can be used to load the following child namespaces  in addition to the values documented below      data    Loads the  resource namespace   namespace resources data sources     filtered against data sources     resources    Loads the  resource   namespace   namespace resources data sources   filtered against resources       Root Namespace Aliases  The root level  data  and  resources  keys both alias to their corresponding namespaces within the module namespace  loaded for the root module  They are the equivalent of running  module     KEY        Value   path       Value Type    List of strings   The  path  value within the  module namespace   namespace module  contains the path of the module that the namespace represents  This is represented as a list of strings   As an example  if the following module block was present within a Terraform configuration      hcl module  foo                   The following policy would evaluate to  true   only  if the diff had changes for that module      python import  tfplan   main   rule   tfplan module   foo    path contains  foo            Namespace  Resources Data Sources  The   resource namespace   is a namespace  type  that applies to both resources  accessed by using the  resources  namespace key  and data sources  accessed using the  data  namespace key    Accessing an individual resource or data source within each respective namespace can be accomplished by specifying the type  name  and resource number  as if the resource or data source had a  count  value in it  in the syntax   resources data  TYPE NAME NUMBER    Note that NUMBER is always needed  even if you did not use  count  in the resource   In addition  each of these namespace levels is a map  allowing you to filter based on type and name      The  somewhat strange  notation here of  TYPE 
NAME NUMBER   may imply that the inner resource index map is actually a list  but it s not   using the square bracket notation over the dotted notation   TYPE NAME NUMBER   is required here as an identifier cannot start with a number   Some examples of multi level access are below     To fetch all  aws instance foo  resource instances within the root module  you   can specify  tfplan resources aws instance foo   This would then be indexed by   resource count index   0    1    2   and so on   Note that as mentioned above    these elements must be accessed using square bracket map notation  so   0        1      2    and so on  instead of dotted notation    To fetch all  aws instance  resources within the root module  you can specify    tfplan resources aws instance   This would be indexed from the names of   each resource   foo    bar   and so on   with each of those maps containing   instances indexed by resource count index as per above    To fetch all resources within the root module  irrespective of type  use    tfplan resources   This is indexed by type  as shown above with    tfplan resources aws instance   with names being the next level down  and so   on      When  resource targeting   terraform cli commands plan resource targeting  is in effect   tfplan resources  will only include the resources specified as targets for the run  This may lead to unexpected outcomes if a policy expects a resource to be present in the plan  To prohibit targeted runs altogether  ensure   tfrun target addrs    terraform cloud docs policy enforcement sentinel import tfrun value target addrs  is undefined or empty   Further explanation of the namespace will be in the context of resources  As mentioned  when operating on data sources  use the same syntax  except with  data  in place of  resources        Value   applied       Value Type    A string keyed map of values   The  applied  value within the  resource namespace   namespace resources data sources  contains a  predicted  
representation of the resource s state post apply  It s created by merging the pending resource s diff on top of the existing data from the resource s state  if any   The map is a complex representation of these values with data going as far down as needed to represent any state values such as maps  lists  and sets   As an example  given the following resource      hcl resource  null resource   foo      triggers         foo    bar             The following policy would evaluate to  true  if the resource was in the diff      python import  tfplan   main   rule   tfplan resources null resource foo 0  applied triggers foo is  bar            Note that some values will not be available in the  applied  state because they cannot be known until the plan is actually applied  In Terraform 0 11 or earlier  these values are represented by a placeholder  the UUID value  74D93920 ED26 11E3 AC10 0800200C9A66   and in Terraform 0 12 or later they are  undefined     In either case    you should instead use the   computed    value computed  key within the  diff namespace   namespace resource diff  to determine that a computed value will exist      If a resource is being destroyed  its  applied  value is omitted from the namespace and trying to fetch it will return undefined       Value   diff       Value Type    A map of  diff namespaces   namespace resource diff    The  diff  value within the  resource namespace   namespace resources data sources  contains the diff for a particular resource  Each key within the map links to a  diff namespace   namespace resource diff  for that particular key   Note that unlike the   applied    value applied  value  this map is not complex  the map is only 1 level deep with each key possibly representing a diff for a particular complex value within the resource   See the below section for more details on the diff namespace  in addition to usage examples       Value   destroy       Value Type    Boolean   The  destroy  value within the  resource 
namespace](#namespace-resources-data-sources) is `true` if a resource is being destroyed for _any_ reason, including cases where it's being deleted as part of a resource re-creation, in which case [`requires_new`](#value-requires_new) will also be set.

As an example, given the following resource:

```hcl
resource "null_resource" "foo" {
  triggers = {
    "foo" = "bar"
  }
}
```

The following policy would evaluate to `true` when `null_resource.foo` is being destroyed:

```python
import "tfplan"

main = rule { tfplan.resources.null_resource.foo[0].destroy }
```

### Value: `requires_new`

**Value Type:** Boolean

The `requires_new` value within the [resource namespace](#namespace-resources-data-sources) is `true` if the resource is still present in the configuration, but must be replaced to satisfy its current diff. Whenever `requires_new` is `true`, [`destroy`](#value-destroy) is also `true`.

As an example, given the following resource:

```hcl
resource "null_resource" "foo" {
  triggers = {
    "foo" = "bar"
  }
}
```

The following policy would evaluate to `true` if one of the `triggers` in `null_resource.foo` was being changed:

```python
import "tfplan"

main = rule { tfplan.resources.null_resource.foo[0].requires_new }
```

## Namespace: Resource Diff

The **diff namespace** is a namespace that represents the diff for a specific attribute within a resource. For details on reading a particular attribute, see the [`diff`](#value-diff) value in the [resource namespace](#namespace-resources-data-sources).

### Value: `computed`

**Value Type:** Boolean

The `computed` value within the [diff namespace](#namespace-resource-diff) is `true` if the resource key in question depends on another value that isn't yet known. Typically, that means the value it depends on belongs to a resource that either doesn't exist yet, or is changing state in such a way as to affect the dependent value so that it can't be known until the apply is complete.

-> Keep in mind that when using `computed` with complex structures such as maps, lists, and sets, it's sometimes necessary to test the count attribute for the structure, versus a key within it, depending on whether or not the diff has marked the whole structure as computed. This is demonstrated in the example below. Count keys are `%` for maps, and `#` for lists and sets. If you are having trouble determining the type of a specific field within a resource, contact the support team.

As an example, given the following resources:

```hcl
resource "null_resource" "foo" {
  triggers = {
    "foo" = "bar"
  }
}

resource "null_resource" "bar" {
  triggers = {
    "foo_id" = "${null_resource.foo.id}"
  }
}
```

The following policy would evaluate to `true` if the `id` of `null_resource.foo` was currently not known, such as when the resource is pending creation, or is being deleted and re-created:

```python
import "tfplan"

main = rule { tfplan.resources.null_resource.bar[0].diff["triggers.%"].computed }
```

### Value: `new`

**Value Type:** String

The `new` value within the [diff namespace](#namespace-resource-diff) contains the new value of a changing attribute, _if_ the value is known at plan time.

-> `new` will be an empty string if the attribute's value is currently unknown. For more details on detecting unknown values, see [`computed`](#value-computed).

Note that this value is _always_ a string, regardless of the actual type of the value changing. [Type conversion][ref-sentinel-type-conversion] within policy may be necessary to achieve the comparison needed.

[ref-sentinel-type-conversion]: https://docs.hashicorp.com/sentinel/language/values#type-conversion

As an example, given the following resource:

```hcl
resource "null_resource" "foo" {
  triggers = {
    "foo" = "bar"
  }
}
```

The following policy would evaluate to `true` if the resource was in the diff and each of the concerned keys were changing to new values:

```python
import "tfplan"

main = rule { tfplan.resources.null_resource.foo[0].diff["triggers.foo"].new is "bar" }
```

### Value: `old`

**Value Type:** String

The `old` value within the [diff namespace](#namespace-resource-diff) contains the old value of a changing attribute.

Note that this value is _always_ a string, regardless of the actual type of the value changing. [Type conversion][ref-sentinel-type-conversion] within policy may be necessary to achieve the comparison needed.

If the value did not exist in the previous state, `old` will always be an empty string.

As an example, given the following resource:

```hcl
resource "null_resource" "foo" {
  triggers = {
    "foo" = "baz"
  }
}
```

If that resource was previously in config as:

```hcl
resource "null_resource" "foo" {
  triggers = {
    "foo" = "bar"
  }
}
```

The following policy would evaluate to `true`:

```python
import "tfplan"

main = rule { tfplan.resources.null_resource.foo[0].diff["triggers.foo"].old is "bar" }
```
"}
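To make the diff shapes concrete, here is a small Python sketch of the data described above. It is an illustrative mock, not the Sentinel runtime: the names `mock_diff`, `is_computed`, `changed_to`, and `previous_value` are invented here, and the dictionary layout simply mirrors the namespace structure (string-typed `old`/`new` values, a `computed` flag, and a `%` count key for a map whose whole structure is unknown):

```python
# Mock of the diff namespace for two attributes of a resource.
# All old/new values are strings, matching the docs above; the "%"
# count key for the map carries the computed flag when the whole
# structure is unknown until apply.
mock_diff = {
    "triggers.%":   {"computed": True,  "new": "",    "old": ""},
    "triggers.foo": {"computed": False, "new": "baz", "old": "bar"},
}

def is_computed(diff, key):
    """Mirror of `diff[key].computed`: value not known until apply."""
    return diff[key]["computed"]

def changed_to(diff, key, value):
    """Mirror of `diff[key].new is value`. Diff values are always
    strings, so non-string comparisons need explicit conversion."""
    return diff[key]["new"] == str(value)

def previous_value(diff, key):
    """Mirror of `diff[key].old`; empty string if the value did not
    exist in the previous state."""
    return diff[key]["old"]

print(is_computed(mock_diff, "triggers.%"))          # True
print(changed_to(mock_diff, "triggers.foo", "baz"))  # True
print(previous_value(mock_diff, "triggers.foo"))     # bar
```

The same pattern of testing the `%` count key (or `#` for lists and sets) before individual keys is what the `computed` example policy above relies on.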
{"questions":"terraform Note This is documentation for the next version of the tfconfig Sentinel import designed specifically for Terraform 0 12 This import requires page title tfconfig v2 Imports Sentinel HCP Terraform example import tfconfig v2 as tfconfig The tfconfig v2 import provides access to a Terraform configuration Terraform 0 12 or higher and must currently be loaded by path using an alias","answers":"---\npage_title: tfconfig\/v2 - Imports - Sentinel - HCP Terraform\ndescription: The tfconfig\/v2 import provides access to a Terraform configuration.\n---\n\n-> **Note:** This is documentation for the next version of the `tfconfig`\nSentinel import, designed specifically for Terraform 0.12. This import requires\nTerraform 0.12 or higher, and must currently be loaded by path, using an alias,\nexample: `import \"tfconfig\/v2\" as tfconfig`.\n\n# Import: tfconfig\/v2\n\nThe `tfconfig\/v2` import provides access to a Terraform configuration.\n\nThe Terraform configuration is the set of `*.tf` files that are used to\ndescribe the desired infrastructure state. 
Policies using the `tfconfig`\nimport can access all aspects of the configuration: providers, resources,\ndata sources, modules, and variables.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nSome use cases for `tfconfig` include:\n\n* **Organizational naming conventions**: requiring that configuration elements\n  are named in a way that conforms to some organization-wide standard.\n* **Required inputs and outputs**: organizations may require a particular set\n  of input variable names across all workspaces or may require a particular\n  set of outputs for asset management purposes.\n* **Enforcing particular modules**: organizations may provide a number of\n  \"building block\" modules and require that each workspace be built only from\n  combinations of these modules.\n* **Enforcing particular providers or resources**: an organization may wish to\n  require or prevent the use of providers and\/or resources so that configuration\n  authors cannot use alternative approaches to work around policy\n  restrictions.\n\nThe data in the `tfconfig\/v2` import is sourced from the JSON configuration file\nthat is generated by the [`terraform show -json`](\/terraform\/cli\/commands\/show#json-output) command. 
For more information on\nthe file format, see the [JSON Output Format](\/terraform\/internals\/json-format)\npage.\n\n## Import Overview\n\nThe `tfconfig\/v2` import is structured as a series of _collections_, keyed as a\nspecific format, such as resource address, module address, or a\nspecifically-formatted provider key.\n\n```\ntfconfig\/v2\n\u251c\u2500\u2500 strip_index() (function)\n\u251c\u2500\u2500 providers\n\u2502   \u2514\u2500\u2500 (indexed by [module_address:]provider[.alias])\n\u2502       \u251c\u2500\u2500 provider_config_key (string)\n\u2502       \u251c\u2500\u2500 name (string)\n\u2502       \u251c\u2500\u2500 full_name (string)\n\u2502       \u251c\u2500\u2500 alias (string)\n\u2502       \u251c\u2500\u2500 module_address (string)\n\u2502       \u251c\u2500\u2500 config (block expression representation)\n\u2502       \u2514\u2500\u2500 version_constraint (string)\n\u251c\u2500\u2500 resources\n\u2502   \u2514\u2500\u2500 (indexed by address)\n\u2502       \u251c\u2500\u2500 address (string)\n\u2502       \u251c\u2500\u2500 module_address (string)\n\u2502       \u251c\u2500\u2500 mode (string)\n\u2502       \u251c\u2500\u2500 type (string)\n\u2502       \u251c\u2500\u2500 name (string)\n\u2502       \u251c\u2500\u2500 provider_config_key (string)\n\u2502       \u251c\u2500\u2500 provisioners (list)\n\u2502       \u2502   \u2514\u2500\u2500 (ordered provisioners for this resource only)\n\u2502       \u251c\u2500\u2500 config (block expression representation)\n\u2502       \u251c\u2500\u2500 count (expression representation)\n\u2502       \u251c\u2500\u2500 for_each (expression representation)\n\u2502       \u2514\u2500\u2500 depends_on (list of strings)\n\u251c\u2500\u2500 provisioners\n\u2502   \u2514\u2500\u2500 (indexed by resource_address:index)\n\u2502       \u251c\u2500\u2500 resource_address (string)\n\u2502       \u251c\u2500\u2500 type (string)\n\u2502       \u251c\u2500\u2500 index (string)\n\u2502       \u2514\u2500\u2500 config (block 
expression representation)\n\u251c\u2500\u2500 variables\n\u2502   \u2514\u2500\u2500 (indexed by module_address:name)\n\u2502       \u251c\u2500\u2500 module_address (string)\n\u2502       \u251c\u2500\u2500 name (string)\n\u2502       \u251c\u2500\u2500 default (value)\n\u2502       \u2514\u2500\u2500 description (string)\n\u251c\u2500\u2500 outputs\n\u2502   \u2514\u2500\u2500 (indexed by module_address:name)\n\u2502       \u251c\u2500\u2500 module_address (string)\n\u2502       \u251c\u2500\u2500 name (string)\n\u2502       \u251c\u2500\u2500 sensitive (boolean)\n\u2502       \u251c\u2500\u2500 value (expression representation)\n\u2502       \u251c\u2500\u2500 description (string)\n\u2502       \u2514\u2500\u2500 depends_on (list of strings)\n\u2514\u2500\u2500 module_calls\n    \u2514\u2500\u2500 (indexed by module_address:name)\n        \u251c\u2500\u2500 module_address (string)\n        \u251c\u2500\u2500 name (string)\n        \u251c\u2500\u2500 source (string)\n        \u251c\u2500\u2500 config (block expression representation)\n        \u251c\u2500\u2500 count (expression representation)\n        \u251c\u2500\u2500 depends_on (expression representation)\n        \u251c\u2500\u2500 for_each (expression representation)\n        \u2514\u2500\u2500 version_constraint (string)\n```\n\nThe collections are:\n\n* [`providers`](#the-providers-collection) - The configuration for all provider\n  instances across all modules in the configuration.\n* [`resources`](#the-resources-collection) - The configuration of all resources\n  across all modules in the configuration.\n* [`variables`](#the-variables-collection) - The configuration of all variable\n  definitions across all modules in the configuration.\n* [`outputs`](#the-outputs-collection) - The configuration of all output\n  definitions across all modules in the configuration.\n* [`module_calls`](#the-module_calls-collection) - The configuration of all module\n  calls (individual 
[`module`](\/terraform\/language\/modules) blocks) across\n  all modules in the configuration.\n\nThese collections are specifically designed to be used with the\n[`filter`](https:\/\/docs.hashicorp.com\/sentinel\/language\/collection-operations#filter-expression)\nquantifier expression in Sentinel, so that one can collect a list of resources\nto perform policy checks on without having to write complex module or\nconfiguration traversal. As an example, the following code will return all\n`aws_instance` resource types within the configuration, regardless of what\nmodule they are in:\n\n```\nall_aws_instances = filter tfconfig.resources as _, r {\n\tr.mode is \"managed\" and\n\t\tr.type is \"aws_instance\"\n}\n```\n\nYou can add specific attributes to the filter to narrow the search, such as the\nmodule address. The following code would return resources in a module named\n`foo` only:\n\n```\nall_aws_instances = filter tfconfig.resources as _, r {\n\tr.module_address is \"module.foo\" and\n\t\tr.mode is \"managed\" and\n\t\tr.type is \"aws_instance\"\n}\n```\n\n### Address Differences Between `tfconfig`, `tfplan`, and `tfstate`\n\nThis import deals with configuration before it is expanded into a\nresource graph by Terraform. 
As such, it is not possible to compute an index as\nthe import is building its collections and computing addresses for resources and\nmodules.\n\nAs such, addresses found here may not always match the expanded addresses found\nin the [`tfplan\/v2`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan-v2) and [`tfstate\/v2`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfstate-v2)\nimports, specifically when\n[`count`](\/terraform\/language\/resources#count-multiple-resource-instances-by-count)\nand\n[`for_each`](\/terraform\/language\/resources#for_each-multiple-resource-instances-defined-by-a-map-or-set-of-strings),\nare used.\n\nAs an example, consider a resource named `null_resource.foo` with a count of `2`\nlocated in a module named `bar`. While there will possibly be entries in the\nother imports for `module.bar.null_resource.foo[0]` and\n`module.bar.null_resource.foo[1]`, in `tfconfig\/v2`, there will only be a\n`module.bar.null_resource.foo`. As mentioned in the start of this section, this\nis because configuration actually _defines_ this scaling, whereas _expansion_\nactually happens when the resource graph is built, which happens as a natural\npart of the refresh and planning process.\n\nThe `strip_index` helper function, found in this import, can assist in\nremoving the indexes from addresses found in the `tfplan\/v2` and `tfstate\/v2`\nimports so that data from those imports can be used to reference data in this\none.\n\n## The `strip_index` Function\n\nThe `strip_index` helper function can be used to remove indexes from addresses\nfound in [`tfplan\/v2`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan-v2) and [`tfstate\/v2`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfstate-v2),\nby removing the indexes from each resource.\n\nThis can be used to help facilitate cross-import lookups for data between plan,\nstate, and config.\n\n```\nimport \"tfconfig\/v2\" as tfconfig\nimport 
\"tfplan\/v2\" as tfplan\n\nmain = rule {\n\tall filter tfplan.resource_changes as _, rc {\n\t\trc.mode is \"managed\" and\n\t\t\trc.type is \"aws_instance\"\n\t} as _, rc {\n\t\ttfconfig.resources[tfconfig.strip_index(rc.address)].config.ami.constant_value is \"ami-abcdefgh012345\"\n\t}\n}\n```\n\n## Expression Representations\n\nMost collections in this import will have one of two kinds of _expression\nrepresentations_. This is a verbose format for expressing a (parsed)\nconfiguration value independent of the configuration source code, which is not\n100% available to a policy check in HCP Terraform.\n\n```\n(expression representation)\n\u251c\u2500\u2500 constant_value (value)\n\u2514\u2500\u2500 references (list of strings)\n```\n\nThere are two major parts to an expression representation:\n\n* Any _strictly constant value_ is expressed as an expression with a\n  `constant_value` field.\n* Any expression that requires some degree of evaluation to generate the final\n  value - even if that value is known at plan time - is not expressed in\n  configuration. Instead, any particular references that are made are added to\n  the `references` field. More details on this field can be found in the\n  [expression\n  representation](\/terraform\/internals\/json-format#expression-representation)\n  section of the JSON output format documentation.\n\nFor example, to determine if an output is based on a particular\nresource value, one could do:\n\n```\nimport \"tfconfig\/v2\" as tfconfig\n\nmain = rule {\n\ttfconfig.outputs[\"instance_id\"].value.references is [\"aws_instance.foo\"]\n}\n```\n\n-> **Note:** The representation does not account for\ncomplex interpolations or other expressions that combine constants with other\nexpression data. 
For example, the partially constant data in `\"foo${var.bar}\"` would be lost.\n\n### Block Expression Representation\n\nExpanding on the above, a multi-value expression representation (such as the\nkind found in a [`resources`](#the-resources-collection) collection element) is\nsimilar, but the root value is a keyed map of expression representations. This\nis repeated until a \"scalar\" expression value is encountered, ie: a field that\nis not a block in the resource's schema.\n\n```\n(block expression representation)\n\u2514\u2500\u2500 (attribute key)\n    \u251c\u2500\u2500 (child block expression representation)\n    \u2502   \u2514\u2500\u2500 (...)\n    \u251c\u2500\u2500 constant_value (value)\n    \u2514\u2500\u2500 references (list of strings)\n```\n\nAs an example, one can validate expressions in an\n[`aws_instance`](https:\/\/registry.terraform.io\/providers\/hashicorp\/aws\/latest\/docs\/resources\/instance) resource using the\nfollowing:\n\n```\nimport \"tfconfig\/v2\" as tfconfig\n\nmain = rule {\n\ttfconfig.resources[\"aws_instance.foo\"].config.ami.constant_value is \"ami-abcdefgh012345\"\n}\n```\n\nNote that _nested blocks_, sometimes known as _sub-resources_, will be nested in\nconfiguration as a list of blocks (reflecting their ultimate nature as a list\nof objects). An example would be the `aws_instance` resource's\n[`ebs_block_device`](https:\/\/registry.terraform.io\/providers\/hashicorp\/aws\/latest\/docs\/resources\/instance#ebs-ephemeral-and-root-block-devices) block:\n\n```\nimport \"tfconfig\/v2\" as tfconfig\n\nmain = rule {\n\ttfconfig.resources[\"aws_instance.foo\"].config.ebs_block_device[0].volume_size < 10\n}\n```\n\n## The `providers` Collection\n\nThe `providers` collection is a collection representing the configurations of\nall provider instances across all modules in the configuration.\n\nThis collection is indexed by an opaque key. 
This is currently\n`module_address:provider.alias`, the same value as found in the\n`provider_config_key` field. `module_address` and the colon delimiter are\nomitted for the root module.\n\nThe `provider_config_key` field is also found in the `resources` collection and\ncan be used to locate a provider that belongs to a configured resource.\n\nThe fields in this collection are as follows:\n\n* `provider_config_key` - The opaque configuration key, used as the index key.\n* `name` - The name of the provider, ie: `aws`.\n* `full_name` - The fully-qualified name of the provider, e.g. `registry.terraform.io\/hashicorp\/aws`.\n* `alias` - The alias of the provider, ie: `east`. Empty for a default provider.\n* `module_address` - The address of the module this provider appears in.\n* `config` - A [block expression\n  representation](#block-expression-representation) with provider configuration\n  values.\n* `version_constraint` - The defined version constraint for this provider.\n\n## The `resources` Collection\n\nThe `resources` collection is a collection representing all of the resources\nfound in all modules in the configuration.\n\nThis collection is indexed by the resource address.\n\nThe fields in this collection are as follows:\n\n* `address` - The resource address. This is the index of the collection.\n* `module_address` - The module address that this resource was found in.\n* `mode` - The resource mode, either `managed` (resources) or `data` (data\n  sources).\n* `type` - The type of resource, ie: `null_resource` in `null_resource.foo`.\n* `name` - The name of the resource, ie: `foo` in `null_resource.foo`.\n* `provider_config_key` - The opaque configuration key that serves as the index\n  of the [`providers`](#the-providers-collection) collection.\n* `provisioners` - The ordered list of provisioners for this resource. 
The\n  syntax of the provisioners matches those found in the\n  [`provisioners`](#the-provisioners-collection) collection, but is a list\n  indexed by the order the provisioners show up in the resource.\n* `config` - The [block expression\n  representation](#block-expression-representation) of the configuration values\n  found in the resource.\n* `count` - The [expression data](#expression-representations) for the `count`\n  value in the resource.\n* `for_each` - The [expression data](#expression-representations) for the\n  `for_each` value in the resource.\n* `depends_on` - The contents of the `depends_on` config directive, which\n  declares explicit dependencies for this resource.\n\n## The `provisioners` Collection\n\nThe `provisioners` collection is a collection of all of the provisioners found\nacross all resources in the configuration.\n\nWhile normally bound to a resource in an ordered fashion, this collection allows\nfor the filtering of provisioners within a single expression.\n\nThis collection is indexed with a key following the format\n`resource_address:index`, with each field matching their respective field in the\nparticular element below:\n\n* `resource_address`: The address of the resource that the provisioner was found\n  in. This can be found in the [`resources`](#the-resources-collection)\n  collection.\n* `type`: The provisioner type, ie: `local_exec`.\n* `index`: The provisioner index as it shows up in the resource provisioner\n  order.\n* `config`: The [block expression\n  representation](#block-expression-representation) of the configuration values\n  in the provisioner.\n\n## The `variables` Collection\n\nThe `variables` collection is a collection of all variables across all modules\nin the configuration.\n\nNote that this tracks variable definitions, not values. 
See the [`tfplan\/v2`\n`variables` collection](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan-v2#the-variables-collection) for variable\nvalues set within a plan.\n\nThis collection is indexed by the key format `module_address:name`, with each\nfield matching their respective name below. `module_address` and the colon\ndelimiter are omitted for the root module.\n\n* `module_address` - The address of the module the variable was found in.\n* `name` - The name of the variable.\n* `default` - The defined default value of the variable.\n* `description` - The description of the variable.\n\n## The `outputs` Collection\n\nThe `outputs` collection is a collection of all outputs across all modules in\nthe configuration.\n\nNote that this tracks output definitions, not values. See the [`tfstate\/v2`\n`outputs` collection](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfstate-v2#the-outputs-collection) for the final\nvalues of outputs set within a state. The [`tfplan\/v2` `output_changes`\ncollection](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan-v2#the-output_changes-collection) also contains a more\ncomplex collection of planned output changes.\n\nThis collection is indexed by the key format `module_address:name`, with each\nfield matching their respective name below. `module_address` and the colon\ndelimiter are omitted for the root module.\n\n* `module_address` - The address of the module the output was found in.\n* `name` - The name of the output.\n* `sensitive` - Indicates whether or not the output was marked as\n  [`sensitive`](\/terraform\/language\/values\/outputs#sensitive-suppressing-values-in-cli-output).\n* `value` - An [expression representation](#expression-representations) for the output.\n* `description` - The description of the output.\n* `depends_on` - A list of resource names that the output depends on. 
These are\n  the hard-defined output dependencies as defined in the\n  [`depends_on`](\/terraform\/language\/values\/outputs#depends_on-explicit-output-dependencies)\n  field in an output declaration, not the dependencies that get derived from\n  natural evaluation of the output expression (these can be found in the\n  `references` field of the expression representation).\n\n## The `module_calls` Collection\n\nThe `module_calls` collection is a collection of all module declarations at all\nlevels within the configuration.\n\nNote that this is the\n[`module`](\/terraform\/language\/modules#calling-a-child-module) stanza in\nany particular configuration, and not the module itself. Hence, a declaration\nfor `module.foo` would actually be declared in the root module, which would be\nrepresented by a blank field in `module_address`.\n\nThis collection is indexed by the key format `module_address:name`, with each\nfield matching their respective name below. `module_address` and the colon\ndelimiter are omitted for the root module.\n\n* `module_address` - The address of the module the declaration was found in.\n* `name` - The name of the module.\n* `source` - The contents of the `source` field.\n* `config` - A [block expression\n  representation](#block-expression-representation) for all parameter values\n  sent to the module.\n* `count` - An [expression representation](#expression-representations) for the\n  `count` field.\n* `depends_on`: An [expression representation](#expression-representations) for the\n  `depends_on` field.\n* `for_each` - An [expression representation](#expression-representations) for\n  the `for_each` field.\n* `version_constraint` - The string value found in the `version` field of the\n  module declaration.","site":"terraform","answers_cleaned":"    page title  tfconfig v2   Imports   Sentinel   HCP Terraform description  The tfconfig v2 import provides access to a Terraform configuration            Note    This is documentation for the next 
version of the  tfconfig  Sentinel import  designed specifically for Terraform 0 12  This import requires Terraform 0 12 or higher  and must currently be loaded by path  using an alias  example   import  tfconfig v2  as tfconfig      Import  tfconfig v2  The  tfconfig v2  import provides access to a Terraform configuration   The Terraform configuration is the set of    tf  files that are used to describe the desired infrastructure state  Policies using the  tfconfig  import can access all aspects of the configuration  providers  resources  data sources  modules  and variables        BEGIN  TFC only name pnp callout      include  tfc package callouts policies mdx       END  TFC only name pnp callout      Some use cases for  tfconfig  include       Organizational naming conventions    requiring that configuration elements   are named in a way that conforms to some organization wide standard      Required inputs and outputs    organizations may require a particular set   of input variable names across all workspaces or may require a particular   set of outputs for asset management purposes      Enforcing particular modules    organizations may provide a number of    building block  modules and require that each workspace be built only from   combinations of these modules      Enforcing particular providers or resources    an organization may wish to   require or prevent the use of providers and or resources so that configuration   authors cannot use alternative approaches to work around policy   restrictions   The data in the  tfconfig v2  import is sourced from the JSON configuration file that is generated by the   terraform show  json    terraform cli commands show json output  command  For more information on the file format  see the  JSON Output Format   terraform internals json format  page      Import Overview  The  tfconfig v2  import is structured as a series of  collections   keyed as a specific format  such as resource address  module address  or a 
specifically formatted provider key       tfconfig v2     strip index    function      providers          indexed by  module address  provider  alias               provider config key  string              name  string              full name  string              alias  string              module address  string              config  block expression representation              version constraint  string      resources          indexed by address              address  string              module address  string              mode  string              type  string              name  string              provider config key  string              provisioners  list                   ordered provisioners for this resource only              config  block expression representation              count  expression representation              for each  expression representation              depends on  list of strings      provisioners          indexed by resource address index              resource address  string              type  string              index  string              config  block expression representation      variables          indexed by module address name              module address  string              name  string              default  value              description  string      outputs          indexed by module address name              module address  string              name  string              sensitive  boolean              value  expression representation              description  string              depends on  list of strings      module calls          indexed by module address name              module address  string              name  string              source  string              config  block expression representation              count  expression representation              depends on  expression representation              for each  expression representation              version constraint  string       The collections are       providers    
the providers collection    The configuration for all provider   instances across all modules in the configuration      resources    the resources collection    The configuration of all resources   across all modules in the configuration      variables    the variables collection    The configuration of all variable   definitions across all modules in the configuration      outputs    the outputs collection    The configuration of all output   definitions across all modules in the configuration      module calls    the module calls collection    The configuration of all module   calls  individual   module    terraform language modules  blocks  across   all modules in the configuration   These collections are specifically designed to be used with the   filter   https   docs hashicorp com sentinel language collection operations filter expression  quantifier expression in Sentinel  so that one can collect a list of resources to perform policy checks on without having to write complex module or configuration traversal  As an example  the following code will return all  aws instance  resource types within the configuration  regardless of what module they are in       all aws instances   filter tfconfig resources as    r    r mode is  managed  and   r type is  aws instance         You can add specific attributes to the filter to narrow the search  such as the module address  The following code would return resources in a module named  foo  only       all aws instances   filter tfconfig resources as    r    r module address is  module foo  and   r mode is  managed  and   r type is  aws instance             Address Differences Between  tfconfig    tfplan   and  tfstate   This import deals with configuration before it is expanded into a resource graph by Terraform  As such  it is not possible to compute an index as the import is building its collections and computing addresses for resources and modules   As such  addresses found here may not always match the expanded 
addresses found in the   tfplan v2    terraform cloud docs policy enforcement sentinel import tfplan v2  and   tfstate v2    terraform cloud docs policy enforcement sentinel import tfstate v2  imports  specifically when   count    terraform language resources count multiple resource instances by count  and   for each    terraform language resources for each multiple resource instances defined by a map or set of strings   are used   As an example  consider a resource named  null resource foo  with a count of  2  located in a module named  bar   While there will possibly be entries in the other imports for  module bar null resource foo 0   and  module bar null resource foo 1    in  tfconfig v2   there will only be a  module bar null resource foo   As mentioned in the start of this section  this is because configuration actually  defines  this scaling  whereas  expansion  actually happens when the resource graph is built  which happens as a natural part of the refresh and planning process   The  strip index  helper function  found in this import  can assist in removing the indexes from addresses found in the  tfplan v2  and  tfstate v2  imports so that data from those imports can be used to reference data in this one      The  strip index  Function  The  strip index  helper function can be used to remove indexes from addresses found in   tfplan v2    terraform cloud docs policy enforcement sentinel import tfplan v2  and   tfstate v2    terraform cloud docs policy enforcement sentinel import tfstate v2   by removing the indexes from each resource   This can be used to help facilitate cross import lookups for data between plan  state  and config       import  tfconfig v2  as tfconfig import  tfplan v2  as tfplan  main   rule    all filter tfplan resource changes as    rc     rc mode is  managed  and    rc type is  aws instance     as    rc     tfconfig resources tfconfig strip index rc address   config ami constant value is  ami abcdefgh012345               Expression 
Representations  Most collections in this import will have one of two kinds of  expression representations   This is a verbose format for expressing a  parsed  configuration value independent of the configuration source code  which is not 100  available to a policy check in HCP Terraform        expression representation      constant value  value      references  list of strings       There are two major parts to an expression representation     Any  strictly constant value  is expressed as an expression with a    constant value  field    Any expression that requires some degree of evaluation to generate the final   value   even if that value is known at plan time   is not expressed in   configuration  Instead  any particular references that are made are added to   the  references  field  More details on this field can be found in the    expression   representation   terraform internals json format expression representation    section of the JSON output format documentation   For example  to determine if an output is based on a particular resource value  one could do       import  tfconfig v2  as tfconfig  main   rule    tfconfig outputs  instance id   value references is   aws instance foo               Note    The representation does not account for complex interpolations or other expressions that combine constants with other expression data  For example  the partially constant data in   foo  var bar    would be lost       Block Expression Representation  Expanding on the above  a multi value expression representation  such as the kind found in a   resources    the resources collection  collection element  is similar  but the root value is a keyed map of expression representations  This is repeated until a  scalar  expression value is encountered  ie  a field that is not a block in the resource s schema        block expression representation       attribute key           child block expression representation                            constant value  value        
* `references` - A list of strings.

As an example, one can validate expressions in an [`aws_instance`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance) resource using the following:

```
import "tfconfig/v2" as tfconfig

main = rule {
	tfconfig.resources["aws_instance.foo"].config.ami.constant_value is "ami-abcdefgh012345"
}
```

Note that _nested blocks_, sometimes known as _sub-resources_, will be nested in configuration as a list of blocks, reflecting their ultimate nature as a list of objects. An example would be the `aws_instance` resource's [`ebs_block_device`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#ebs-ephemeral-and-root-block-devices) block:

```
import "tfconfig/v2" as tfconfig

main = rule {
	tfconfig.resources["aws_instance.foo"].config.ebs_block_device[0].volume_size.constant_value is 10
}
```

## The `providers` Collection

The `providers` collection is a collection representing the configurations of all provider instances across all modules in the configuration.

This collection is indexed by an opaque key. This is currently `module_address:provider.alias`, the same value as found in the `provider_config_key` field. `module_address` and the colon delimiter are omitted for the root module.

The `provider_config_key` field is also found in the `resources` collection and can be used to locate a provider that belongs to a configured resource.

The fields in this collection are as follows:

* `provider_config_key` - The opaque configuration key, used as the index key.
* `name` - The name of the provider, ie: `aws`.
* `full_name` - The fully qualified name of the provider, e.g. `registry.terraform.io/hashicorp/aws`.
* `alias` - The alias of the provider, ie: `east`. Empty for a default provider.
* `module_address` - The address of the module this provider appears in.
* `config` - A [block expression representation](#block-expression-representation) with provider configuration values.
* `version_constraint` - The defined version constraint for this provider.

## The `resources` Collection

The `resources` collection is a collection representing all of the resources found in all modules in the configuration.

This collection is indexed by the resource address.

The fields in this collection are as follows:

* `address` - The resource address. This is the index of the collection.
* `module_address` - The module address that this resource was found in.
* `mode` - The resource mode, either `managed` (resources) or `data` (data sources).
* `type` - The type of resource, ie: `null_resource` in `null_resource.foo`.
* `name` - The name of the resource, ie: `foo` in `null_resource.foo`.
* `provider_config_key` - The opaque configuration key that serves as the index of the [`providers`](#the-providers-collection) collection.
* `provisioners` - The ordered list of provisioners for this resource. The syntax of the provisioners matches those found in the [`provisioners`](#the-provisioners-collection) collection, but is a list indexed by the order the provisioners show up in the resource.
* `config` - The [block expression representation](#block-expression-representation) of the configuration values found in the resource.
* `count` - The [expression data](#expression-representations) for the `count` value in the resource.
* `for_each` - The [expression data](#expression-representations) for the `for_each` value in the resource.
* `depends_on` - The contents of the `depends_on` config directive, which declares explicit dependencies for this resource.

## The `provisioners` Collection

The `provisioners` collection is a collection of all of the provisioners found across all resources in the configuration.

While normally bound to a resource in an ordered fashion, this collection allows for the filtering of provisioners within a single expression.

This collection is indexed with a key following the format `resource_address:index`, with each field matching their respective field in the particular element below:

* `resource_address` - The address of the resource that the provisioner was found in. This can be found in the [`resources`](#the-resources-collection) collection.
* `type` - The provisioner type, ie: `local-exec`.
* `index` - The provisioner index as it shows up in the resource provisioner order.
* `config` - The [block expression representation](#block-expression-representation) of the configuration values in the provisioner.

## The `variables` Collection

The `variables` collection is a collection of all variables across all modules in the configuration.

Note that this tracks variable definitions, not values. See the [`tfplan/v2` `variables` collection](/terraform/cloud-docs/policy-enforcement/sentinel/import/tfplan-v2#the-variables-collection) for variable values set within a plan.

This collection is indexed by the key format `module_address:name`, with each field matching their respective name below. `module_address` and the colon delimiter are omitted for the root module.

* `module_address` - The address of the module the variable was found in.
* `name` - The name of the variable.
* `default` - The defined default value of the variable.
* `description` - The description of the variable.

## The `outputs` Collection

The `outputs` collection is a collection of all outputs across all modules in the configuration.

Note that this tracks output definitions, not values. See the [`tfstate/v2` `outputs` collection](/terraform/cloud-docs/policy-enforcement/sentinel/import/tfstate-v2#the-outputs-collection) for the final values of outputs set within a state. The [`tfplan/v2` `output_changes` collection](/terraform/cloud-docs/policy-enforcement/sentinel/import/tfplan-v2#the-output_changes-collection) also contains a more complex collection of planned output changes.

This collection is indexed by the key format `module_address:name`, with each field matching their respective name below. `module_address` and the colon delimiter are omitted for the root module.

* `module_address` - The address of the module the output was found in.
* `name` - The name of the output.
* `sensitive` - Indicates whether or not the output was marked as [`sensitive`](/terraform/language/values/outputs#sensitive-suppressing-values-in-cli-output).
* `value` - An [expression representation](#expression-representations) for the output.
* `description` - The description of the output.
* `depends_on` - A list of resource names that the output depends on. These are the hard-defined output dependencies as defined in the [`depends_on`](/terraform/language/values/outputs#depends_on-explicit-output-dependencies) field in an output declaration, not the dependencies that get derived from natural evaluation of the output expression (these can be found in the `references` field of the expression representation).

## The `module_calls` Collection

The `module_calls` collection is a collection of all module declarations at all levels within the configuration.

Note that this is the [`module`](/terraform/language/modules#calling-a-child-module) stanza in any particular configuration, and not the module itself. Hence, a declaration for `module.foo` would actually be declared in the root module, which would be represented by a blank field in `module_address`.

This collection is indexed by the key format `module_address:name`, with each field matching their respective name below. `module_address` and the colon delimiter are omitted for the root module.

* `module_address` - The address of the module the declaration was found in.
* `name` - The name of the module.
* `source` - The contents of the `source` field.
* `config` - A [block expression representation](#block-expression-representation) for all parameter values sent to the module.
* `count` - An [expression representation](#expression-representations) for the `count` field.
* `depends_on` - An [expression representation](#expression-representations) for the `depends_on` field.
* `for_each` - An [expression representation](#expression-representations) for the `for_each` field.
* `version_constraint` - The string value found in the `version` field of the module declaration.
"}
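As a concrete use of the `module_calls` collection, the policy below enforces a module-source convention across all module declarations. This is a minimal sketch: the registry prefix `app.terraform.io/example-org/` is an assumed organizational naming convention, not anything defined by the import itself.

```
import "tfconfig/v2" as tfconfig

# Collect every module declaration whose source falls outside the
# assumed organization registry namespace (the prefix is hypothetical).
violations = filter tfconfig.module_calls as _, mc {
	not (mc.source matches "^app\\.terraform\\.io/example-org/")
}

main = rule {
	length(violations) is 0
}
```

The `matches` operator performs a regular-expression match on strings, and `filter` over a collection returns the subset of elements for which the body expression is true, so an empty result means every declaration conforms.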
{"questions":"terraform page title tfplan v2 Imports Sentinel HCP Terraform example import tfplan v2 as tfplan import designed specifically for Terraform 0 12 This import requires Note This is documentation for the next version of the tfplan Sentinel Terraform 0 12 or higher and must currently be loaded by path using an alias The tfplan v2 import provides access to a Terraform plan","answers":"---\npage_title: tfplan\/v2 - Imports - Sentinel - HCP Terraform\ndescription: The tfplan\/v2 import provides access to a Terraform plan.\n---\n\n-> **Note:** This is documentation for the next version of the `tfplan` Sentinel\nimport, designed specifically for Terraform 0.12. This import requires\nTerraform 0.12 or higher, and must currently be loaded by path, using an alias,\nexample: `import \"tfplan\/v2\" as tfplan`.\n\n# Import: tfplan\/v2\n\nThe `tfplan\/v2` import provides access to a Terraform plan.\n\nA Terraform plan is the file created as a result of `terraform plan` and is the\ninput to `terraform apply`. The plan represents the changes that Terraform needs\nto make to infrastructure to reach the desired state represented by the\nconfiguration.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nIn addition to the diff data available in the plan, there is a \"planned state\"\nthat is available through this import, via the\n[`planned_values`](#the-planned_values-collection) collection. This collection\npresents the Terraform state as how it might look after the plan data is\napplied, but is not guaranteed to be the final state.\n\nThe data in the `tfplan\/v2` import is sourced from the JSON configuration file\nthat is generated by the [`terraform show -json`](\/terraform\/cli\/commands\/show#json-output) command. 
For more information on\nthe file format, see the [JSON Output Format](\/terraform\/internals\/json-format)\npage.\n\nThe entirety of the JSON output file is exposed as a Sentinel map via the\n[`raw`](#the-raw-collection) collection. This allows direct, low-level access to\nthe JSON data, but should only be used in complex situations where the\nhigher-level collections do not serve the purpose.\n\n## Import Overview\n\nThe `tfplan\/v2` import is structured as a series of _collections_, keyed as a\nspecific format depending on the collection.\n\n```\ntfplan\/v2\n\u251c\u2500\u2500 terraform_version (string)\n\u251c\u2500\u2500 variables\n\u2502   \u2514\u2500\u2500 (indexed by name)\n\u2502       \u251c\u2500\u2500 name (string)\n\u2502       \u2514\u2500\u2500 value (value)\n\u251c\u2500\u2500 planned_values\n\u2502   \u251c\u2500\u2500 outputs (tfstate\/v2 outputs representation)\n\u2502   \u2514\u2500\u2500 resources (tfstate\/v2 resources representation)\n\u251c\u2500\u2500 resource_changes\n\u2502   \u2514\u2500\u2500 (indexed by address[:deposed])\n\u2502       \u251c\u2500\u2500 address (string)\n\u2502       \u251c\u2500\u2500 module_address (string)\n\u2502       \u251c\u2500\u2500 mode (string)\n\u2502       \u251c\u2500\u2500 type (string)\n\u2502       \u251c\u2500\u2500 name (string)\n\u2502       \u251c\u2500\u2500 index (float (number) or string)\n\u2502       \u251c\u2500\u2500 provider_name (string)\n\u2502       \u251c\u2500\u2500 deposed (string)\n\u2502       \u2514\u2500\u2500 change (change representation)\n\u251c\u2500\u2500 resource_drift\n\u2502   \u2514\u2500\u2500 (indexed by address[:deposed])\n\u2502       \u251c\u2500\u2500 address (string)\n\u2502       \u251c\u2500\u2500 module_address (string)\n\u2502       \u251c\u2500\u2500 mode (string)\n\u2502       \u251c\u2500\u2500 type (string)\n\u2502       \u251c\u2500\u2500 name (string)\n\u2502       \u251c\u2500\u2500 index (float (number) or string)\n\u2502       \u251c\u2500\u2500 
provider_name (string)\n\u2502       \u251c\u2500\u2500 deposed (string)\n\u2502       \u2514\u2500\u2500 change (change representation)\n\u251c\u2500\u2500 output_changes\n\u2502   \u2514\u2500\u2500 (indexed by name)\n\u2502       \u251c\u2500\u2500 name (string)\n\u2502       \u2514\u2500\u2500 change (change representation)\n\u2514\u2500\u2500 raw (map)\n```\n\nThe collections are:\n\n* [`variables`](#the-variables-collection) - The values of variables that have\n  been set in the plan itself. This collection only contains variables set in\n  the root module.\n* [`planned_values`](#the-planned_values-collection) - The state representation\n  of _planned values_, or an estimation of what the state will look like after\n  the plan is applied.\n* [`resource_changes`](#the-resource_changes-and-resource_drift-collections) - The set of change\n  operations for resources and data sources within this plan.\n* [`resource_drift`](#the-resource_changes-and-resource_drift-collections) - A description of the\n  changes Terraform detected when it compared the most recent state to the prior saved state.\n* [`output_changes`](#the-output_changes-collection) - The changes to outputs\n  within this plan. This collection only contains outputs set in the root\n  module.\n* [`raw`](#the-raw-collection) - Access to the raw plan data as stored by\n  HCP Terraform.\n\nThese collections are specifically designed to be used with the\n[`filter`](https:\/\/docs.hashicorp.com\/sentinel\/language\/collection-operations#filter-expression)\nquantifier expression in Sentinel, so that one can collect a list of resources\nto perform policy checks on without having to write complex discovery code. 
As\nan example, the following code will return all `aws_instance` resource changes,\nacross all modules in the plan:\n\n```\nall_aws_instances = filter tfplan.resource_changes as _, rc {\n\trc.mode is \"managed\" and\n\t\trc.type is \"aws_instance\"\n}\n```\n\nYou can add specific attributes to the filter to narrow the search, such as the\nmodule address, or the operation being performed. The following code would\nreturn resources in a module named `foo` only, and further narrow the search\ndown to only resources that were being created:\n\n```\nall_aws_instances = filter tfplan.resource_changes as _, rc {\n\trc.module_address is \"module.foo\" and\n\t\trc.mode is \"managed\" and\n\t\trc.type is \"aws_instance\" and\n\t\trc.change.actions is [\"create\"]\n}\n```\n\n### Change Representation\n\nCertain collections in this import contain a _change representation_, an object\nwith details about changes to a particular entity, such as a resource (within\nthe [`resource_changes`](#the-resource_changes-collection) collection), or\noutput (within the [`output_changes`](#the-output_changes-collection)\ncollection).\n\n```\n(change representation)\n\u251c\u2500\u2500 actions (list)\n\u251c\u2500\u2500 before (value, or map)\n\u251c\u2500\u2500 after (value, or map)\n\u2514\u2500\u2500 after_unknown (boolean, or map of booleans)\n```\n\nThis change representation contains the following fields:\n\n* `actions` - A list of actions being carried out for this change. The order is\n  important, for example a regular replace operation is denoted by `[\"delete\",\n  \"create\"]`, but a\n  [`create_before_destroy`](\/terraform\/language\/meta-arguments\/lifecycle#create_before_destroy)\n  resource will have an operation order of `[\"create\", \"delete\"]`.\n* `before` - The representation of the resource data object value before the\n  action. For create-only actions, this is unset. 
For no-op actions, this value\n  will be identical with `after`.\n* `after` - The representation of the resource data object value after the\n  action. For delete-only actions, this is unset. For no-op actions, this value\n  will be identical with `before`. Note that unknown values will not show up in\n  this field.\n* `after_unknown` - A deep object of booleans that denotes any values that are\n  unknown in a resource. These values were previously referred to as \"computed\"\n  values. If the value cannot be found in this map, then its value should be\n  available within `after`, so long as the operation supports it.\n\n#### Actions\n\nAs mentioned above, actions show up within the `actions` field of a change\nrepresentation and indicate the type of actions being performed as part of the\nchange, and the order that they are being performed in.\n\nThe current list of actions are as follows:\n\n* `create` - The action will create the associated entity. Depending on the\n  order this appears in, the entity may be created alongside a copy of the\n  entity before replacing it.\n* `read`  - The action will read the associated entity. In practice, seeing this\n  change type should be rare, as reads generally happen before a plan is\n  executed (usually during a refresh).\n* `update` - The action will update the associated entity in a way that alters its state\n  in some way.\n* `delete` - The action will remove the associated entity, deleting any\n  applicable state and associated real resources or infrastructure.\n* `no-op` - No action will be performed on the associated entity.\n\nThe `actions` field is a list, as some real-world actions are actually a\ncomposite of more than one primitive action. 
At this point in time, this\nis generally only applicable to resource replacement, in which the following\naction orders apply:\n\n* **Normal replacement:** `[\"delete\", \"create\"]` - Applies to default lifecycle\n  configurations.\n* **Create-before-destroy:** `[\"create\", \"delete\"]` - Applies when\n  [`create_before_destroy`](\/terraform\/language\/meta-arguments\/lifecycle#create_before_destroy)\n  is used in a lifecycle configuration.\n\nNote that, in most situations, the plan will list all \"changes\", including no-op\nchanges. This makes filtering on change type crucial to the accurate selection\nof data if you are concerned with the state change of a particular resource.\n\nTo filter on a change type, use exact list comparison. For example, the\nfollowing example from the [Import Overview](#import-overview) filters on\nexactly the resources being created _only_:\n\n```\nall_aws_instances = filter tfplan.resource_changes as _, rc {\n\trc.module_address is \"module.foo\" and\n\t\trc.mode is \"managed\" and\n\t\trc.type is \"aws_instance\" and\n    rc.change.actions is [\"create\"]\n}\n```\n\n#### `before`, `after`, and `after_unknown`\n\nThe exact attribute changes for a particular operation are outlined in the\n`before` and `after` attributes. 
Depending on the entity being operated on, this\nwill either be a map (as with\n[`resource_changes`](#the-resource_changes-collection)) or a singular value (as\nwith [`output_changes`](#the-output_changes-collection)).\n\nWhat you can expect in these fields varies depending on the operation:\n\n* For fresh create operations, `before` will generally be `null`, and `after`\n  will contain the data you can expect to see after the change.\n* For full delete operations, this will be reversed - `before` will contain\n  data, and `after` will be `null`.\n* Update or replace operations will have data in both fields relevant to their\n  states before and after the operation.\n* No-op operations should have identical data in `before` and `after`.\n\nFor resources, if a field cannot be found in `after`, it generally means one of\ntwo things:\n\n* The attribute does not exist in the resource schema. Generally, known\n  attributes that do not have a value will show up as `null` or otherwise empty\n  in `after`.\n* The attribute is _unknown_, that is, it was unable to be determined at plan\n  time and will only be available after apply-time values have been able to be\n  calculated.\n\nIn the latter case, there should be a value for the particular attribute in\n`after_unknown`, which can be checked to assert that the value is indeed\nunknown, versus invalid:\n\n```\nimport \"tfplan\/v2\" as tfplan\n\nno_unknown_amis = rule {\n\tall filter tfplan.resource_changes as _, rc {\n\t\trc.module_address is \"module.foo\" and\n\t\t\trc.mode is \"managed\" and\n\t\t\trc.type is \"aws_instance\" and\n\t\t\trc.change.actions is [\"create\"]\n\t} as _, rc {\n\t\trc.change.after_unknown.ami else false is false\n\t}\n}\n```\n\nFor output changes, `after_unknown` will simply be `true` if the value won't be\nknown until the plan is applied.\n\n## The `terraform_version` Value\n\nThe top-level `terraform_version` value in this import gives the Terraform\nversion that made the plan. 
This can be used to do version validation.\n\n```\nimport \"tfplan\/v2\" as tfplan\nimport \"strings\"\n\nv = strings.split(tfplan.terraform_version, \".\")\nversion_major = int(v[1])\nversion_minor = int(v[2])\n\nmain = rule {\n\tversion_major is 12 and version_minor >= 19\n}\n```\n\n-> **NOTE:** The above example will give errors when working with pre-release\nversions (example: `0.12.0beta1`). Future versions of this import will include\nhelpers to assist with processing versions that will account for these kinds of\nexceptions.\n\n## The `variables` Collection\n\nThe `variables` collection is a collection of the variables set in the root\nmodule when creating the plan.\n\nThis collection is indexed on the name of the variable.\n\nThe valid values are:\n\n* `name` - The name of the variable, also used as the collection key.\n* `value` - The value of the variable assigned during the plan.\n\n## The `planned_values` Collection\n\nThe `planned_values` collection is a special collection in that it contains two\nfields that alias to state collections with the _planned_ state set. This is the\nbest prediction of what the state will look like after the plan is executed.\n\nThe two fields are:\n\n* `outputs` - The prediction of what output values will look like after the\n  state is applied. For more details on the structure of this collection, see\n  the [`outputs`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfstate-v2#the-outputs-collection) collection in the\n  [`tfstate\/v2`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfstate-v2) documentation.\n* `resources` - The prediction of what resource values will look like after the\n  state is applied. 
For more details on the structure of this collection, see\n  the [`resources`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfstate-v2#the-resources-collection) collection in the\n  [`tfstate\/v2`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfstate-v2) documentation.\n\n-> **NOTE:** Unknown values are omitted from the `planned_values` state\nrepresentations, regardless of whether or not they existed before. Use\n[`resource_changes`](#the-resource_changes-and-resource_drift-collections) if awareness of unknown\ndata is important.\n\n## The `resource_changes` and `resource_drift` Collections\n\nThe `resource_changes` and `resource_drift` collections are a set of change operations for resources\nand data sources within this plan.\n\nThe `resource_drift` collection provides a description of the changes Terraform detected\nwhen it compared the most recent state to the prior saved state.\n\nThe `resource_changes` collection includes all resources that have been found in the configuration and state,\nregardless of whether or not they are changing.\n\n~> When [resource targeting](\/terraform\/cli\/commands\/plan#resource-targeting) is in effect, the `resource_changes` collection will only include the resources specified as targets for the run. This may lead to unexpected outcomes if a policy expects a resource to be present in the plan. To prohibit targeted runs altogether, ensure [`tfrun.target_addrs`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfrun#value-target_addrs) is undefined or empty.\n\nThis collection is indexed on the complete resource address as the key. 
If\n`deposed` is non-empty, it is appended to the end, and may look something like\n`aws_instance.foo:deposed-abc123`.\n\nAn element contains the following fields:\n\n* `address` - The absolute resource address - also the key for the collection's\n  index, if `deposed` is empty.\n\n* `module_address` - The module portion of the absolute resource address.\n\n* `mode` - The resource mode, either `managed` (resources) or `data` (data\n  sources).\n\n* `type` - The resource type, example: `aws_instance` for `aws_instance.foo`.\n\n* `name` - The resource name, example: `foo` for `aws_instance.foo`.\n\n* `index` - The resource index. Can be either a number or a string.\n\n* `provider_name` - The name of the provider this resource belongs to. This\n  allows the provider to be interpreted unambiguously in the unusual situation\n  where a provider offers a resource type whose name does not start with its own\n  name, such as the `googlebeta` provider offering `google_compute_instance`.\n\n  -> **Note:** Starting with Terraform 0.13, the `provider_name` field contains the\n  _full_ source address to the provider in the Terraform Registry. Example:\n  `registry.terraform.io\/hashicorp\/null` for the null provider.\n\n* `deposed` - An identifier used during replacement operations, and can be used\n  to identify the exact resource being replaced in state.\n\n* `change` - The data describing the change that will be made to this resource.\n  For more details, see [Change Representation](#change-representation).\n\n## The `output_changes` Collection\n\nThe `output_changes` collection is a collection of the change operations for\noutputs within this plan.\n\nOnly outputs for the root module are included.\n\nThis collection is indexed by the name of the output. 
The fields in a collection\nvalue are below:\n\n* `name` -  The name of the output, also the index key.\n* `change` - The data describing the change that will be made to this output.\n  For more details, see [Change Representation](#change-representation).\n\n## The `raw` Collection\n\nThe `raw` collection exposes the raw, unprocessed plan data, direct from the\ndata stored by HCP Terraform.\n\nThis is the same data that is produced by [`terraform show -json`](\/terraform\/cli\/commands\/show#json-output) on the plan file for the run this\npolicy check is attached to.\n\nUse of this data is only recommended in expert situations where the data the\ncollections present may not exactly serve the needs of the policy. For more\ninformation on the file format, see the [JSON Output\nFormat](\/terraform\/internals\/json-format) page.\n\n-> **NOTE:** Although designed to be relatively stable, the actual makeup for\nthe JSON output format is a Terraform CLI concern and as such not managed by\nHCP Terraform. 
Use at your own risk, follow the [Terraform CLI\nproject](https:\/\/github.com\/hashicorp\/terraform), and watch the file format\ndocumentation for any changes.","site":"terraform","answers_cleaned":"    page title  tfplan v2   Imports   Sentinel   HCP Terraform description  The tfplan v2 import provides access to a Terraform plan            Note    This is documentation for the next version of the  tfplan  Sentinel import  designed specifically for Terraform 0 12  This import requires Terraform 0 12 or higher  and must currently be loaded by path  using an alias  example   import  tfplan v2  as tfplan      Import  tfplan v2  The  tfplan v2  import provides access to a Terraform plan   A Terraform plan is the file created as a result of  terraform plan  and is the input to  terraform apply   The plan represents the changes that Terraform needs to make to infrastructure to reach the desired state represented by the configuration        BEGIN  TFC only name pnp callout      include  tfc package callouts policies mdx       END  TFC only name pnp callout      In addition to the diff data available in the plan  there is a  planned state  that is available through this import  via the   planned values    the planned values collection  collection  This collection presents the Terraform state as how it might look after the plan data is applied  but is not guaranteed to be the final state   The data in the  tfplan v2  import is sourced from the JSON configuration file that is generated by the   terraform show  json    terraform cli commands show json output  command  For more information on the file format  see the  JSON Output Format   terraform internals json format  page   The entirety of the JSON output file is exposed as a Sentinel map via the   raw    the raw collection  collection  This allows direct  low level access to the JSON data  but should only be used in complex situations where the higher level collections do not serve the purpose      Import Overview  The  
tfplan v2  import is structured as a series of  collections   keyed as a specific format depending on the collection       tfplan v2     terraform version  string      variables          indexed by name              name  string              value  value      planned values         outputs  tfstate v2 outputs representation          resources  tfstate v2 resources representation      resource changes          indexed by address  deposed               address  string              module address  string              mode  string              type  string              name  string              index  float  number  or string              provider name  string              deposed  string              change  change representation      resource drift          indexed by address  deposed               address  string              module address  string              mode  string              type  string              name  string              index  float  number  or string              provider name  string              deposed  string              change  change representation      output changes          indexed by name              name  string              change  change representation      raw  map       The collections are       variables    the variables collection    The values of variables that have   been set in the plan itself  This collection only contains variables set in   the root module      planned values    the planned values collection    The state representation   of  planned values   or an estimation of what the state will look like after   the plan is applied      resource changes    the resource changes and resource drift collections    The set of change   operations for resources and data sources within this plan      resource drift    the resource changes and resource drift collections    A description of the   changes Terraform detected when it compared the most recent state to the prior saved state      output changes    the output changes 
collection - The changes to outputs within this plan. This collection only contains outputs set in the root module.
* [`raw`](#the-raw-collection) - Access to the raw plan data as stored by HCP Terraform.

These collections are specifically designed to be used with the [`filter`](https://docs.hashicorp.com/sentinel/language/collection-operations#filter-expression) quantifier expression in Sentinel, so that one can collect a list of resources to perform policy checks on without having to write complex discovery code.

As an example, the following code will return all `aws_instance` resource changes, across all modules in the plan:

```python
all_aws_instances = filter tfplan.resource_changes as _, rc {
	rc.mode is "managed" and
	rc.type is "aws_instance"
}
```

You can add specific attributes to the filter to narrow the search, such as the module address, or the operation being performed. The following code would return resources in a module named "foo" only, and further narrow the search down to only resources that were being created:

```python
all_aws_instances = filter tfplan.resource_changes as _, rc {
	rc.module_address is "module.foo" and
	rc.mode is "managed" and
	rc.type is "aws_instance" and
	rc.change.actions is ["create"]
}
```

## Change Representation

Certain collections in this import contain a _change representation_: an object with details about changes to a particular entity, such as a resource (within the [`resource_changes`](#the-resource_changes-collection) collection) or output (within the [`output_changes`](#the-output_changes-collection) collection).

```
(change representation)
├── actions (list)
├── before (value or map)
├── after (value or map)
└── after_unknown (boolean, or map of booleans)
```

This change representation contains the following fields:

* `actions` - A list of actions being carried out for this change. The order is important; for example, a regular replace operation is denoted by `["delete", "create"]`, but a [`create_before_destroy`](/terraform/language/meta-arguments/lifecycle#create_before_destroy) resource will have an operation order of `["create", "delete"]`.
* `before` - The representation of the resource data object value before the action. For create-only actions, this is unset. For no-op actions, this value will be identical with `after`.
* `after` - The representation of the resource data object value after the action. For delete-only actions, this is unset. For no-op actions, this value will be identical with `before`. Note that unknown values will not show up in this field.
* `after_unknown` - A deep object of booleans that denotes any values that are unknown in a resource. These values were previously referred to as "computed" values. If the value cannot be found in this map, then its value should be available within `after`, so long as the operation supports it.

### Actions

As mentioned above, actions show up within the `actions` field of a change representation and indicate the type of actions being performed as part of the change, and the order that they are being performed in.

The current list of actions are as follows:

* `create` - The action will create the associated entity. Depending on the order this appears in, the entity may be created alongside a copy of the entity before replacing it.
* `read` - The action will read the associated entity. In practice, seeing this change type should be rare, as reads generally happen before a plan is executed, usually during a refresh.
* `update` - The action will update the associated entity in a way that alters its state in some way.
* `delete` - The action will remove the associated entity, deleting any applicable state and associated real resources or infrastructure.
* `no-op` - No action will be performed on the associated entity.

The `actions` field is a list, as some real-world actions are actually a composite of more than one primitive action. At this point in time, this is generally only applicable to resource replacement, in which the following action orders apply:

* **Normal replacement:** `["delete", "create"]` - Applies to default lifecycle configurations.
* **Create before destroy:** `["create", "delete"]` - Applies when [`create_before_destroy`](/terraform/language/meta-arguments/lifecycle#create_before_destroy) is used in a lifecycle configuration.

Note that, in most situations, the plan will list _all_ changes, including no-op changes. This makes filtering on change type crucial to the accurate selection of data if you are concerned with the state change of a particular resource.

To filter on a change type, use exact list comparison. For example, the following example from the [Import Overview](#import-overview) filters on exactly the resources being created, only:

```python
all_aws_instances = filter tfplan.resource_changes as _, rc {
	rc.module_address is "module.foo" and
	rc.mode is "managed" and
	rc.type is "aws_instance" and
	rc.change.actions is ["create"]
}
```

### `before`, `after`, and `after_unknown`

The exact attribute changes for a particular operation are outlined in the `before` and `after` attributes. Depending on the entity being operated on, this will either be a map, as with [`resource_changes`](#the-resource_changes-collection), or a singular value, as with [`output_changes`](#the-output_changes-collection).

What you can expect in these fields varies depending on the operation:

* For fresh create operations, `before` will generally be `null`, and `after` will contain the data you can expect to see after the change.
* For full delete operations, this will be reversed: `before` will contain data, and `after` will be `null`.
* Update or replace operations will have data in both fields relevant to their states before and after the operation.
* No-op operations should have identical data in `before` and `after`.
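The point about exact list comparison matters because a replacement also *contains* the `create` action. A minimal Python sketch illustrates the difference, using hypothetical dicts shaped like `resource_changes` entries (not the Sentinel import itself):

```python
# Distinguish plain creates from replacements by comparing the full
# "actions" list exactly, not by membership: a replacement
# (["delete", "create"]) also *contains* "create".
changes = [
    {"address": "aws_instance.new", "change": {"actions": ["create"]}},
    {"address": "aws_instance.replaced", "change": {"actions": ["delete", "create"]}},
]

created_only = [c["address"] for c in changes if c["change"]["actions"] == ["create"]]
contains_create = [c["address"] for c in changes if "create" in c["change"]["actions"]]

print(created_only)     # ['aws_instance.new']
print(contains_create)  # ['aws_instance.new', 'aws_instance.replaced']
```

The same reasoning is why the Sentinel examples use `rc.change.actions is ["create"]` rather than a containment check.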
For resources, if a field cannot be found in `after`, it generally means one of two things:

* The attribute does not exist in the resource schema. Generally, known attributes that do not have a value will show up as `null` or otherwise empty in `after`.
* The attribute is _unknown_; that is, it was unable to be determined at plan time and will only be available after apply-time values have been able to be calculated.

In the latter case, there should be a value for the particular attribute in `after_unknown`, which can be checked to assert that the value is indeed unknown, versus invalid:

```python
import "tfplan/v2" as tfplan

no_unknown_amis = rule {
	all filter tfplan.resource_changes as _, rc {
		rc.module_address is "module.foo" and
		rc.mode is "managed" and
		rc.type is "aws_instance" and
		rc.change.actions is ["create"]
	} as _, rc {
		rc.change.after_unknown.ami else false is false
	}
}
```

For output changes, `after_unknown` will simply be `true` if the value won't be known until the plan is applied.

## The `terraform_version` Value

The top-level `terraform_version` value in this import gives the Terraform version that made the plan. This can be used to do version validation:

```python
import "tfplan/v2" as tfplan
import "strings"

v = strings.split(tfplan.terraform_version, ".")
version_major = int(v[1])
version_minor = int(v[2])

main = rule {
	version_major is 12 and version_minor >= 19
}
```

-> **NOTE:** The above example will give errors when working with pre-release versions (example: `0.12.0beta1`). Future versions of this import will include helpers to assist with processing versions that will account for these kinds of exceptions.

## The `variables` Collection

The `variables` collection is a collection of the variables set in the root module when creating the plan.

This collection is indexed on the name of the variable.

The valid values are:

* `name` - The name of the variable, also used as the collection key.
* `value` - The value of the variable assigned during the plan.
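When version strings are post-processed outside Sentinel, the pre-release caveat noted above can be handled with slightly more tolerant parsing. A rough Python sketch, with a hypothetical `parse_terraform_version` helper (not part of the import):

```python
import re

def parse_terraform_version(s):
    """Split a Terraform version into (major, minor, patch, prerelease).

    Accepts pre-release suffixes such as "0.12.0-beta1" or "0.12.0beta1",
    which a naive split-on-dots-and-int approach would choke on.
    """
    m = re.match(r"^(\d+)\.(\d+)\.(\d+)-?(.*)$", s)
    if not m:
        raise ValueError("unrecognized version: " + s)
    major, minor, patch, pre = m.groups()
    return int(major), int(minor), int(patch), pre or None

print(parse_terraform_version("0.12.19"))     # (0, 12, 19, None)
print(parse_terraform_version("0.12.0beta1"))  # (0, 12, 0, 'beta1')
```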
## The `planned_values` Collection

The `planned_values` collection is a special collection in that it contains two fields that alias to state collections with the _planned_ state set. This is the best prediction of what the state will look like after the plan is executed.

The two fields are:

* `outputs` - The prediction of what output values will look like after the state is applied. For more details on the structure of this collection, see the [`outputs`](/terraform/cloud-docs/policy-enforcement/sentinel/import/tfstate-v2#the-outputs-collection) collection in the [`tfstate/v2`](/terraform/cloud-docs/policy-enforcement/sentinel/import/tfstate-v2) documentation.
* `resources` - The prediction of what resource values will look like after the state is applied. For more details on the structure of this collection, see the [`resources`](/terraform/cloud-docs/policy-enforcement/sentinel/import/tfstate-v2#the-resources-collection) collection in the [`tfstate/v2`](/terraform/cloud-docs/policy-enforcement/sentinel/import/tfstate-v2) documentation.

-> **NOTE:** Unknown values are omitted from the `planned_values` state representations, regardless of whether or not they existed before. Use [`resource_changes`](#the-resource_changes-collection) if awareness of unknown data is important.

## The `resource_changes` and `resource_drift` Collections

The `resource_changes` and `resource_drift` collections are a set of change operations for resources and data sources within this plan.

The `resource_drift` collection provides a description of the changes Terraform detected when it compared the most recent state to the prior saved state.

The `resource_changes` collection includes all resources that have been found in the configuration and state, regardless of whether or not they are changing.

-> When [resource targeting](/terraform/cli/commands/plan#resource-targeting) is in effect, the `resource_changes` collection will only include the resources specified as targets for the run. This
may lead to unexpected outcomes if a policy expects a resource to be present in the plan. To prohibit targeted runs altogether, ensure [`tfrun.target_addrs`](/terraform/cloud-docs/policy-enforcement/sentinel/import/tfrun#value-target_addrs) is undefined or empty.

This collection is indexed on the complete resource address as the key. If `deposed` is non-empty, it is appended to the end, and may look something like `aws_instance.foo:deposed-abc123`.

An element contains the following fields:

* `address` - The absolute resource address, also the key for the collection's index, if `deposed` is empty.
* `module_address` - The module portion of the absolute resource address.
* `mode` - The resource mode, either `managed` (resources) or `data` (data sources).
* `type` - The resource type, example: `aws_instance` for `aws_instance.foo`.
* `name` - The resource name, example: `foo` for `aws_instance.foo`.
* `index` - The resource index. Can be either a number or a string.
* `provider_name` - The name of the provider this resource belongs to. This allows the provider to be interpreted unambiguously in the unusual situation where a provider offers a resource type whose name does not start with its own name, such as the `googlebeta` provider offering `google_compute_instance`.

  -> **Note:** Starting with Terraform 0.13, the `provider_name` field contains the _full_ source address to the provider in the Terraform Registry. Example: `registry.terraform.io/hashicorp/null` for the null provider.

* `deposed` - An identifier used during replacement operations, and can be used to identify the exact resource being replaced in state.
* `change` - The data describing the change that will be made to this resource. For more details, see [Change Representation](#change-representation).
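The address-plus-deposed indexing described above is simple enough to sketch in Python. This is a hypothetical helper (the `:deposed-` key shape follows the example address above; the element dicts mirror the fields listed):

```python
# Build the collection index key for a resource_changes element:
# the absolute address, with a ":deposed-<id>" suffix when deposed is set.
def collection_key(element):
    addr = element["address"]
    deposed = element.get("deposed", "")
    if deposed:
        return addr + ":deposed-" + deposed
    return addr

live = {"address": "aws_instance.foo", "deposed": ""}
old = {"address": "aws_instance.foo", "deposed": "abc123"}

print(collection_key(live))  # aws_instance.foo
print(collection_key(old))   # aws_instance.foo:deposed-abc123
```

Keying deposed objects separately means a replaced resource and its lingering old copy can both appear in the collection without colliding.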
## The `output_changes` Collection

The `output_changes` collection is a collection of the change operations for outputs within this plan. Only outputs for the root module are included.

This collection is indexed by the name of the output. The fields in a collection value are below:

* `name` - The name of the output, also the index key.
* `change` - The data describing the change that will be made to this output. For more details, see [Change Representation](#change-representation).

## The `raw` Collection

The `raw` collection exposes the raw, unprocessed plan data, direct from the data stored by HCP Terraform.

This is the same data that is produced by [`terraform show -json`](/terraform/cli/commands/show#json-output) on the plan file for the run this policy check is attached to.

Use of this data is only recommended in expert situations where the data the collections present may not exactly serve the needs of the policy. For more information on the file format, see the [JSON Output Format](/terraform/internals/json-format) page.

-> **NOTE:** Although designed to be relatively stable, the actual makeup of the JSON output format is a Terraform CLI concern and as such is not managed by HCP Terraform. Use at your own risk, follow the [Terraform CLI project](https://github.com/hashicorp/terraform), and watch the file format documentation for any changes.
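Since the raw data matches `terraform show -json` output, the Sentinel filter examples in this import translate directly to post-processing that output outside Sentinel. A minimal Python sketch, using a trimmed, hypothetical plan document (real plans carry many more fields):

```python
import json

# A trimmed stand-in for `terraform show -json` output.
plan_json = """
{
  "resource_changes": [
    {"address": "module.foo.aws_instance.a", "module_address": "module.foo",
     "mode": "managed", "type": "aws_instance",
     "change": {"actions": ["create"]}},
    {"address": "aws_instance.b", "module_address": "",
     "mode": "managed", "type": "aws_instance",
     "change": {"actions": ["no-op"]}}
  ]
}
"""

plan = json.loads(plan_json)

# Mirror the Sentinel filter: managed aws_instance resources being created.
created = [
    rc["address"]
    for rc in plan["resource_changes"]
    if rc["mode"] == "managed"
    and rc["type"] == "aws_instance"
    and rc["change"]["actions"] == ["create"]
]
print(created)  # ['module.foo.aws_instance.a']
```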
{"questions":"terraform The tfstate import provides access to a Terraform state Warning The tfstate import is now deprecated and will be permanently removed in August 2025 page title tfstate Imports Sentinel HCP Terraform Import tfstate We recommend that you start using the updated tfstate v2 terraform cloud docs policy enforcement sentinel import tfstate v2 import as soon as possible to avoid disruptions The tfstate v2 import offers improved functionality and is designed to better support your policy enforcement needs","answers":"---\npage_title: tfstate - Imports - Sentinel - HCP Terraform\ndescription: The tfstate import provides access to a Terraform state.\n---\n\n# Import: tfstate\n\n~> **Warning:** The `tfstate` import is now deprecated and will be permanently removed in August 2025.\nWe recommend that you start using the updated [tfstate\/v2](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfstate-v2) import as soon as possible to avoid disruptions.\nThe `tfstate\/v2` import offers improved functionality and is designed to better support your policy enforcement needs.\n\nThe `tfstate` import provides access to the Terraform state.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nThe _state_ is the data that Terraform has recorded about a workspace at a\nparticular point in its lifecycle, usually after an apply. 
You can read more\ngeneral information about how Terraform uses state [here][ref-tf-state].\n\n[ref-tf-state]: \/terraform\/language\/state\n\n-> **Note:** Since HCP Terraform currently only supports policy checks at plan\ntime, the usefulness of this import is somewhat limited, as it will usually give\nyou the state _prior_ to the plan the policy check is currently being run for.\nDepending on your needs, you may find the\n[`applied`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan#value-applied) collection in `tfplan` more useful,\nwhich will give you a _predicted_ state by applying plan data to the data found\nhere. The one exception to this rule is _data sources_, which will always give\nup to date data here, as long as the data source could be evaluated at plan\ntime.\n\n## Namespace Overview\n\nThe following is a tree view of the import namespace. For more detail on a\nparticular part of the namespace, see below.\n\n-> Note that the root-level alias keys shown here (`data`, `outputs`, `path`,\nand `resources`) are shortcuts to a [module namespace](#namespace-module) scoped\nto the root module. 
For more details, see the section on [root namespace\naliases](#root-namespace-aliases).\n\n```\ntfstate\n\u251c\u2500\u2500 module() (function)\n\u2502   \u2514\u2500\u2500 (module namespace)\n\u2502       \u251c\u2500\u2500 path ([]string)\n\u2502       \u251c\u2500\u2500 data\n\u2502       \u2502   \u2514\u2500\u2500 TYPE.NAME[NUMBER]\n\u2502       \u2502       \u251c\u2500\u2500 attr (map of keys)\n\u2502       \u2502       \u251c\u2500\u2500 depends_on ([]string)\n\u2502       \u2502       \u251c\u2500\u2500 id (string)\n\u2502       \u2502       \u2514\u2500\u2500 tainted (boolean)\n\u2502       \u251c\u2500\u2500 outputs (root module only in TF 0.12 or later)\n\u2502       \u2502   \u2514\u2500\u2500 NAME\n\u2502       \u2502       \u251c\u2500\u2500 sensitive (bool)\n\u2502       \u2502       \u251c\u2500\u2500 type (string)\n\u2502       \u2502       \u2514\u2500\u2500 value (value)\n\u2502       \u2514\u2500\u2500 resources\n\u2502           \u2514\u2500\u2500 TYPE.NAME[NUMBER]\n\u2502               \u251c\u2500\u2500 attr (map of keys)\n\u2502               \u251c\u2500\u2500 depends_on ([]string)\n\u2502               \u251c\u2500\u2500 id (string)\n\u2502               \u2514\u2500\u2500 tainted (boolean)\n\u2502\n\u251c\u2500\u2500 module_paths ([][]string)\n\u251c\u2500\u2500 terraform_version (string)\n\u2502\n\u251c\u2500\u2500 data (root module alias)\n\u251c\u2500\u2500 outputs (root module alias)\n\u251c\u2500\u2500 path (root module alias)\n\u2514\u2500\u2500 resources (root module alias)\n```\n\n## Namespace: Root\n\nThe root-level namespace consists of the values and functions documented below.\n\nIn addition to this, the root-level `data`, `outputs`, `path`, and `resources`\nkeys alias to their corresponding namespaces or values within the [module\nnamespace](#namespace-module).\n\n### Function: `module()`\n\n```\nmodule = func(ADDR)\n```\n\n* **Return Type:** A [module namespace](#namespace-module).\n\nThe `module()` function in the [root 
namespace](#namespace-root) returns the\n[module namespace](#namespace-module) for a particular module address.\n\nThe address must be a list and is the module address, split on the period (`.`),\nexcluding the root module.\n\nHence, a module with an address of simply `foo` (or `root.foo`) would be\n`[\"foo\"]`, and a module within that (so address `foo.bar`) would be read as\n`[\"foo\", \"bar\"]`.\n\n[`null`][ref-null] is returned if a module address is invalid, or if the module\nis not present in the state.\n\n[ref-null]: https:\/\/docs.hashicorp.com\/sentinel\/language\/spec#null\n\nAs an example, given the following module block:\n\n```hcl\nmodule \"foo\" {\n  # ...\n}\n```\n\nIf the module contained the following content:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n```\n\nThe following policy would evaluate to `true` if the resource was present in\nthe state:\n\n```python\nimport \"tfstate\"\n\nmain = rule { tfstate.module([\"foo\"]).resources.null_resource.foo[0].attr.triggers.foo is \"bar\" }\n```\n\n### Value: `module_paths`\n\n* **Value Type:** List of a list of strings.\n\nThe `module_paths` value within the [root namespace](#namespace-root) is a list\nof all of the modules within the Terraform state at plan-time.\n\nModules not present in the state will not be present here, even if they are\npresent in the configuration or the diff.\n\nThis data is represented as a list of a list of strings, with the inner list\nbeing the module address, split on the period (`.`).\n\nThe root module is included in this list, represented as an empty inner list, as\nlong as it is present in state.\n\nAs an example, if the following module block was present within a Terraform\nconfiguration:\n\n```hcl\nmodule \"foo\" {\n  # ...\n}\n```\n\nThe value of `module_paths` would be:\n\n```\n[\n\t[],\n\t[\"foo\"],\n]\n```\n\nAnd the following policy would evaluate to `true`:\n\n```python\nimport \"tfstate\"\n\nmain = rule { 
tfstate.module_paths contains [\"foo\"] }\n```\n\n-> Note the above example only applies if the module is present in the state.\n\n#### Iterating Through Modules\n\nIterating through all modules to find particular resources can be useful. This\n[example][iterate-over-modules] shows how to use `module_paths` with the\n[`module()` function](#function-module-) to find all resources of a\nparticular type from all modules using the `tfplan` import. By changing `tfplan`\nin this function to `tfstate`, you could make a similar function find all\nresources of a specific type in the current state.\n\n[iterate-over-modules]: \/terraform\/cloud-docs\/policy-enforcement\/sentinel#sentinel-imports\n\n### Value: `terraform_version`\n\n* **Value Type:** String.\n\nThe `terraform_version` value within the [root namespace](#namespace-root)\nrepresents the version of Terraform in use when the state was saved. This can be\nused to enforce a specific version of Terraform in a policy check.\n\nAs an example, the following policy would evaluate to `true` as long as the\nstate was made with a version of Terraform in the 0.11.x series, excluding any\npre-release versions (example: `-beta1` or `-rc1`):\n\n```python\nimport \"tfstate\"\n\nmain = rule { tfstate.terraform_version matches \"^0\\\\.11\\\\.\\\\d+$\" }\n```\n\n-> **NOTE:** This value is also available via the [`tfplan`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan)\nimport, which will be more current when a policy check is run against a plan.\nIt's recommended you use the value in `tfplan` until HCP Terraform\nsupports policy checks in other stages of the workspace lifecycle. 
See the\n[`terraform_version`][import-tfplan-terraform-version] reference within the\n`tfplan` import for more details.\n\n[import-tfplan-terraform-version]: \/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan#value-terraform_version\n\n## Namespace: Module\n\nThe **module namespace** can be loaded by calling\n[`module()`](#function-module-) for a particular module.\n\nIt can be used to load the following child namespaces, in addition to the values\ndocumented below:\n\n* `data` - Loads the [resource namespace](#namespace-resources-data-sources),\n  filtered against data sources.\n* `outputs` - Loads the [output namespace](#namespace-outputs), which supplies the\n  outputs present in this module's state. Note that with Terraform 0.12 or\n  later, this value is only available for the root namespace.\n* `resources` - Loads the [resource\n  namespace](#namespace-resources-data-sources), filtered against resources.\n\n### Root Namespace Aliases\n\nThe root-level `data`, `outputs`, and `resources` keys all alias to their\ncorresponding namespaces within the module namespace, loaded for the root\nmodule. They are the equivalent of running `module([]).KEY`.\n\n### Value: `path`\n\n* **Value Type:** List of strings.\n\nThe `path` value within the [module namespace](#namespace-module) contains the\npath of the module that the namespace represents. 
This is represented as a list\nof strings.\n\nAs an example, if the following module block was present within a Terraform\nconfiguration:\n\n```hcl\nmodule \"foo\" {\n  # ...\n}\n```\n\nThe following policy would evaluate to `true`, _only_ if the module was present\nin the state:\n\n```python\nimport \"tfstate\"\n\nmain = rule { tfstate.module([\"foo\"]).path contains \"foo\" }\n```\n\n## Namespace: Resources\/Data Sources\n\nThe **resource namespace** is a namespace _type_ that applies to both resources\n(accessed by using the `resources` namespace key) and data sources (accessed\nusing the `data` namespace key).\n\nAccessing an individual resource or data source within each respective namespace\ncan be accomplished by specifying the type, name, and resource number (as if the\nresource or data source had a `count` value in it) in the syntax\n`[resources|data].TYPE.NAME[NUMBER]`. Note that NUMBER is always needed, even if\nyou did not use `count` in the resource.\n\nIn addition, each of these namespace levels is a map, allowing you to filter\nbased on type and name.\n\n-> The (somewhat strange) notation here of `TYPE.NAME[NUMBER]` may imply that\nthe inner resource index map is actually a list, but it's not - using the square\nbracket notation over the dotted notation (`TYPE.NAME.NUMBER`) is required here\nas an identifier cannot start with a number.\n\nSome examples of multi-level access are below:\n\n* To fetch all `aws_instance.foo` resource instances within the root module, you\n  can specify `tfstate.resources.aws_instance.foo`. This would then be indexed\n  by resource count index (`0`, `1`, `2`, and so on). Note that as mentioned\n  above, these elements must be accessed using square-bracket map notation (so\n  `[0]`, `[1]`, `[2]`, and so on) instead of dotted notation.\n* To fetch all `aws_instance` resources within the root module, you can specify\n  `tfstate.resources.aws_instance`. 
This would be indexed from the names of\n  each resource (`foo`, `bar`, and so on), with each of those maps containing\n  instances indexed by resource count index as per above.\n* To fetch all resources within the root module, irrespective of type, use\n  `tfstate.resources`. This is indexed by type, as shown above with\n  `tfstate.resources.aws_instance`, with names being the next level down, and so\n  on.\n\nFurther explanation of the namespace will be in the context of resources. As\nmentioned, when operating on data sources, use the same syntax, except with\n`data` in place of `resources`.\n\n### Value: `attr`\n\n* **Value Type:** A string-keyed map of values.\n\nThe `attr` value within the [resource\nnamespace](#namespace-resources-data-sources) is a direct mapping to the state\nof the resource.\n\nThe map is a complex representation of these values with data going as far down\nas needed to represent any state values such as maps, lists, and sets.\n\nAs an example, given the following resource:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n```\n\nThe following policy would evaluate to `true` if the resource was in the state:\n\n```python\nimport \"tfstate\"\n\nmain = rule { tfstate.resources.null_resource.foo[0].attr.triggers.foo is \"bar\" }\n```\n\n### Value: `depends_on`\n\n* **Value Type:** A list of strings.\n\nThe `depends_on` value within the [resource\nnamespace](#namespace-resources-data-sources) contains the dependencies for the\nresource.\n\nThis is a list of full resource addresses, relative to the module (example:\n`null_resource.foo`).\n\nAs an example, given the following resources:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n\nresource \"null_resource\" \"bar\" {\n  # ...\n\n  depends_on = [\n    \"null_resource.foo\",\n  ]\n}\n```\n\nThe following policy would evaluate to `true` if the resource was in the state:\n\n```python\nimport 
\"tfstate\"\n\nmain = rule { tfstate.resources.null_resource.bar[0].depends_on contains \"null_resource.foo\" }\n```\n\n### Value: `id`\n\n* **Value Type:** String.\n\nThe `id` value within the [resource\nnamespace](#namespace-resources-data-sources) contains the id of the resource.\n\n-> **NOTE:** The example below uses a _data source_ here because the\n[`null_data_source`][ref-tf-null-data-source] data source gives a static ID,\nwhich makes documenting the example easier. As previously mentioned, data\nsources share the same namespace as resources, but need to be loaded with the\n`data` key. For more information, see the\n[synopsis](#namespace-resources-data-sources) for the namespace itself.\n\n[ref-tf-null-data-source]: https:\/\/registry.terraform.io\/providers\/hashicorp\/null\/latest\/docs\/data-sources\/data_source\n\nAs an example, given the following data source:\n\n```hcl\ndata \"null_data_source\" \"foo\" {\n  # ...\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfstate\"\n\nmain = rule { tfstate.data.null_data_source.foo[0].id is \"static\" }\n```\n\n### Value: `tainted`\n\n* **Value Type:** Boolean.\n\nThe `tainted` value within the [resource\nnamespace](#namespace-resources-data-sources) is `true` if the resource is\nmarked as tainted in Terraform state.\n\nAs an example, given the following resource:\n\n```hcl\nresource \"null_resource\" \"foo\" {\n  triggers = {\n    foo = \"bar\"\n  }\n}\n```\n\nThe following policy would evaluate to `true`, if the resource was marked as\ntainted in the state:\n\n```python\nimport \"tfstate\"\n\nmain = rule { tfstate.resources.null_resource.foo[0].tainted }\n```\n\n## Namespace: Outputs\n\nThe **output namespace** represents all of the outputs present within a\n[module](#namespace-module). 
Outputs are present in a state if they were saved\nduring a previous apply, or if they were updated with known values during the\npre-plan refresh.\n\n**With Terraform 0.11 or earlier** this can be used to fetch both the outputs\nof the root module, and the outputs of any module in the state below the root.\nThis makes it possible to see outputs that have not been threaded to the root\nmodule.\n\n**With Terraform 0.12 or later** outputs are available in the top-level (root\nmodule) namespace only and not accessible within submodules.\n\nThis namespace is indexed by output name.\n\n### Value: `sensitive`\n\n* **Value Type:** Boolean.\n\nThe `sensitive` value within the [output namespace](#namespace-outputs) is\n`true` when the output has been [marked as sensitive][ref-tf-sensitive-outputs].\n\n[ref-tf-sensitive-outputs]: \/terraform\/language\/values\/outputs#sensitive-suppressing-values-in-cli-output\n\nAs an example, given the following output:\n\n```hcl\noutput \"foo\" {\n  sensitive = true\n  value     = \"bar\"\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfstate\"\n\nmain = rule { tfstate.outputs.foo.sensitive }\n```\n\n### Value: `type`\n\n* **Value Type:** String.\n\nThe `type` value within the [output namespace](#namespace-outputs) gives the\noutput's type. This will be one of `string`, `list`, or `map`. 
These are\ncurrently the only types available for outputs in Terraform.\n\nAs an example, given the following output:\n\n```hcl\noutput \"string\" {\n  value = \"foo\"\n}\n\noutput \"list\" {\n  value = [\n    \"foo\",\n    \"bar\",\n  ]\n}\n\noutput \"map\" {\n  value = {\n    foo = \"bar\"\n  }\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfstate\"\n\ntype_string = rule { tfstate.outputs.string.type is \"string\" }\ntype_list = rule { tfstate.outputs.list.type is \"list\" }\ntype_map = rule { tfstate.outputs.map.type is \"map\" }\n\nmain = rule { type_string and type_list and type_map }\n```\n\n### Value: `value`\n\n* **Value Type:** String, list, or map.\n\nThe `value` value within the [output namespace](#namespace-outputs) is the value\nof the output in question.\n\nNote that the only valid primitive output type in Terraform is currently a\nstring, which means that any int, float, or boolean value will need to be\nconverted before it can be used in comparison. 
This does not apply to primitives\nwithin maps and lists, which will be their original types.\n\nAs an example, given the following output blocks:\n\n```hcl\noutput \"foo\" {\n  value = \"bar\"\n}\n\noutput \"number\" {\n  value = \"42\"\n}\n\noutput \"map\" {\n  value = {\n    foo    = \"bar\"\n    number = 42\n  }\n}\n```\n\nThe following policy would evaluate to `true`:\n\n```python\nimport \"tfstate\"\n\nvalue_foo = rule { tfstate.outputs.foo.value is \"bar\" }\nvalue_number = rule { int(tfstate.outputs.number.value) is 42 }\nvalue_map_string = rule { tfstate.outputs.map.value[\"foo\"] is \"bar\" }\nvalue_map_int = rule { tfstate.outputs.map.value[\"number\"] is 42 }\n\nmain = rule { value_foo and value_number and value_map_string and value_map_int }\n```","site":"terraform","answers_cleaned":"    page title  tfstate   Imports   Sentinel   HCP Terraform description  The tfstate import provides access to a Terraform state         Import  tfstate       Warning    The  tfstate  import is now deprecated and will be permanently removed in August 2025  We recommend that you start using the updated  tfstate v2   terraform cloud docs policy enforcement sentinel import tfstate v2  import as soon as possible to avoid disruptions  The  tfstate v2  import offers improved functionality and is designed to better support your policy enforcement needs   The  tfstate  import provides access to the Terraform state        BEGIN  TFC only name pnp callout      include  tfc package callouts policies mdx       END  TFC only name pnp callout      The  state  is the data that Terraform has recorded about a workspace at a particular point in its lifecycle  usually after an apply  You can read more general information about how Terraform uses state  here  ref tf state     ref tf state    terraform language state       Note    Since HCP Terraform currently only supports policy checks at plan time  the usefulness of this import is somewhat limited  as it will usually give you the state  
prior  to the plan the policy check is currently being run for  Depending on your needs  you may find the   applied    terraform cloud docs policy enforcement sentinel import tfplan value applied  collection in  tfplan  more useful  which will give you a  predicted  state by applying plan data to the data found here  The one exception to this rule is  data sources   which will always give up to date data here  as long as the data source could be evaluated at plan time      Namespace Overview  The following is a tree view of the import namespace  For more detail on a particular part of the namespace  see below      Note that the root level alias keys shown here   data    outputs    path   and  resources   are shortcuts to a  module namespace   namespace module  scoped to the root module  For more details  see the section on  root namespace aliases   root namespace aliases        tfstate     module    function           module namespace              path    string              data                 TYPE NAME NUMBER                      attr  map of keys                      depends on    string                      id  string                      tainted  boolean              outputs  root module only in TF 0 12 or later                  NAME                     sensitive  bool                      type  string                      value  value              resources                 TYPE NAME NUMBER                      attr  map of keys                      depends on    string                      id  string                      tainted  boolean        module paths      string      terraform version  string        data  root module alias      outputs  root module alias      path  root module alias      resources  root module alias          Namespace  Root  The root level namespace consists of the values and functions documented below   In addition to this  the root level  data    outputs    path   and  resources  keys alias to their corresponding namespaces or 
values within the [module namespace](#namespace-module).

## Function: `module()`

```
module = func(ADDR)
```

- **Return Type:** A [module namespace](#namespace-module).

The `module()` function in the [root namespace](#namespace-root) returns the [module namespace](#namespace-module) for a particular module address.

The address must be a list and is the module address, split on the period (`.`), excluding the root module. Hence, a module with an address of simply `foo` (or `root.foo`) would be `["foo"]`, and a module within that (so, address `foo.bar`) would be read as `["foo", "bar"]`.

[`null`][ref-null] is returned if a module address is invalid, or if the module is not present in the state.

[ref-null]: https://docs.hashicorp.com/sentinel/language/spec#null

As an example, given the following module block:

```hcl
module "foo" {
  # ...
}
```

If the module contained the following content:

```hcl
resource "null_resource" "foo" {
  triggers = {
    foo = "bar"
  }
}
```

The following policy would evaluate to `true` if the resource was present in the state:

```python
import "tfstate"

main = rule { tfstate.module(["foo"]).resources.null_resource.foo[0].attr.triggers.foo is "bar" }
```

## Value: `module_paths`

- **Value Type:** List of a list of strings.

The `module_paths` value within the [root namespace](#namespace-root) is a list of all of the modules within the Terraform state at plan time.

Modules not present in the state will not be present here, even if they are present in the configuration or the diff.

This data is represented as a list of a list of strings, with the inner list being the module address, split on the period (`.`). The root module is included in this list, represented as an empty inner list, as long as it is present in state.

As an example, if the following module block was present within a Terraform configuration:

```hcl
module "foo" {
  # ...
}
```

The value of `module_paths` would be:

```
[
  [],
  ["foo"],
]
```

And the following policy would evaluate to `true`:

```python
import "tfstate"

main = rule { tfstate.module_paths contains ["foo"] }
```

-> **Note:** The above example only applies if the module is present in the state.

### Iterating Through Modules

Iterating through all modules to find particular resources can be useful. This [example][iterate-over-modules] shows how to use `module_paths` with the [`module()` function](#function-module) to find all resources of a particular type from all modules using the `tfplan` import. By changing `tfplan` in this function to `tfstate`, you could make a similar function find all resources of a specific type in the current state.

[iterate-over-modules]: /terraform/cloud-docs/policy-enforcement/sentinel#sentinel-imports

## Value: `terraform_version`

- **Value Type:** String.

The `terraform_version` value within the [root namespace](#namespace-root) represents the version of Terraform in use when the state was saved. This can be used to enforce a specific version of Terraform in a policy check.

As an example, the following policy would evaluate to `true` as long as the state was made with a version of Terraform in the 0.11.x series, excluding any pre-release versions (example: `-beta1` or `-rc1`):

```python
import "tfstate"

main = rule { tfstate.terraform_version matches "^0\\.11\\.\\d+$" }
```

-> **Note:** This value is also available via the [`tfplan`](/terraform/cloud-docs/policy-enforcement/sentinel/import/tfplan) import, which will be more current when a policy check is run against a plan. It's recommended you use the value in `tfplan` until HCP Terraform supports policy checks in other stages of the workspace lifecycle. See the [`terraform_version`][import-tfplan-terraform-version] reference within the `tfplan` import for more details.

[import-tfplan-terraform-version]: /terraform/cloud-docs/policy-enforcement/sentinel/import/tfplan#value-terraform_version

## Namespace: Module

The **module namespace** can be loaded by calling [`module()`](#function-module) for a particular module.

It can be used to load the following child namespaces, in addition to the values documented below:

- `data` - Loads the [resource namespace](#namespace-resourcesdata-sources), filtered against data sources.
- `outputs` - Loads the [output namespace](#namespace-outputs), which supplies the outputs present in this module's state. Note that with Terraform 0.12 or later, this value is only available for the root namespace.
- `resources` - Loads the [resource namespace](#namespace-resourcesdata-sources), filtered against resources.

### Root Namespace Aliases

The root-level `data`, `outputs`, and `resources` keys alias to their corresponding namespaces within the module namespace, loaded for the root module. They are the equivalent of running `module([]).KEY`.

### Value: `path`

- **Value Type:** List of strings.

The `path` value within the [module namespace](#namespace-module) contains the path of the module that the namespace represents. This is represented as a list of strings.

As an example, if the following module block was present within a Terraform configuration:

```hcl
module "foo" {
  # ...
}
```

The following policy would evaluate to `true`, _only_ if the module was present in the state:

```python
import "tfstate"

main = rule { tfstate.module(["foo"]).path contains "foo" }
```

## Namespace: Resources/Data Sources

The **resource namespace** is a namespace _type_ that applies to both resources (accessed by using the `resources` namespace key) and data sources (accessed using the `data` namespace key).

Accessing an individual resource or data source within each respective namespace can be accomplished by specifying the type, name, and resource number (as if the resource or data source had a `count` value in it) in the syntax `[resources|data].TYPE.NAME[NUMBER]`. Note that `NUMBER` is always needed, even if you did not use `count` in the resource.

In addition, each of these namespace levels is a map, allowing you to filter based on type and name.

-> The (somewhat strange) notation here of `TYPE.NAME[NUMBER]` may imply that the inner resource index map is actually a list, but it's not; using the square-bracket notation over the dotted notation (`TYPE.NAME.NUMBER`) is required here, as an identifier cannot start with a number.

Some examples of multi-level access are below:

- To fetch all `aws_instance.foo` resource instances within the root module, you can specify `tfstate.resources.aws_instance.foo`. This would then be indexed by resource count index (`0`, `1`, `2`, and so on). Note that, as mentioned above, these elements must be accessed using square-bracket map notation (so `[0]`, `[1]`, `[2]`, and so on) instead of dotted notation.
- To fetch all `aws_instance` resources within the root module, you can specify `tfstate.resources.aws_instance`. This would be indexed from the names of each resource (`foo`, `bar`, and so on), with each of those maps containing instances indexed by resource count index as per above.
- To fetch all resources within the root module, irrespective of type, use `tfstate.resources`. This is indexed by type, as shown above with `tfstate.resources.aws_instance`, with names being the next level down, and so on.

Further explanation of the namespace will be in the context of resources. As mentioned, when operating on data sources, use the same syntax, except with `data` in place of `resources`.

### Value: `attr`

- **Value Type:** A string-keyed map of values.

The `attr` value within the [resource namespace](#namespace-resourcesdata-sources) is a direct mapping to the state of the resource. The map is a complex representation of these values, with data going as far down as needed to represent any state values such as maps, lists, and sets.

As an example, given the following resource:

```hcl
resource "null_resource" "foo" {
  triggers = {
    foo = "bar"
  }
}
```

The following policy would evaluate to `true` if the resource was in the state:

```python
import "tfstate"

main = rule { tfstate.resources.null_resource.foo[0].attr.triggers.foo is "bar" }
```

### Value: `depends_on`

- **Value Type:** A list of strings.

The `depends_on` value within the [resource namespace](#namespace-resourcesdata-sources) contains the dependencies for the resource. This is a list of full resource addresses, relative to the module (example: `null_resource.foo`).

As an example, given the following resources:

```hcl
resource "null_resource" "foo" {
  triggers = {
    foo = "bar"
  }
}

resource "null_resource" "bar" {
  # ...

  depends_on = [
    "null_resource.foo",
  ]
}
```

The following policy would evaluate to `true` if the resource was in the state:

```python
import "tfstate"

main = rule { tfstate.resources.null_resource.bar[0].depends_on contains "null_resource.foo" }
```

### Value: `id`

- **Value Type:** String.

The `id` value within the [resource namespace](#namespace-resourcesdata-sources) contains the id of the resource.

-> **Note:** The example below uses a _data source_ here because the [`null_data_source`][ref-tf-null-data-source] data source gives a static ID, which makes documenting the example easier. As previously mentioned, data sources share the same namespace as resources, but need to be loaded with the `data` key. For more information, see the [synopsis](#namespace-resourcesdata-sources) for the namespace itself.

[ref-tf-null-data-source]: https://registry.terraform.io/providers/hashicorp/null/latest/docs/data-sources/data_source

As an example, given the following data source:

```hcl
data "null_data_source" "foo" {
  # ...
}
```

The following policy would evaluate to `true`:

```python
import "tfstate"

main = rule { tfstate.data.null_data_source.foo[0].id is "static" }
```

### Value: `tainted`

- **Value Type:** Boolean.

The `tainted` value within the [resource namespace](#namespace-resourcesdata-sources) is `true` if the resource is marked as tainted in Terraform state.

As an example, given the following resource:

```hcl
resource "null_resource" "foo" {
  triggers = {
    foo = "bar"
  }
}
```

The following policy would evaluate to `true` if the resource was marked as tainted in the state:

```python
import "tfstate"

main = rule { tfstate.resources.null_resource.foo[0].tainted }
```

## Namespace: Outputs

The **output namespace** represents all of the outputs present within a [module](#namespace-module). Outputs are present in a state if they were saved during a previous apply, or if they were updated with known values during the pre-plan refresh.

_With Terraform 0.11 or earlier_, this can be used to fetch both the outputs of the root module, and the outputs of any module in the state below the root. This makes it possible to see outputs that have not been threaded to the root module.

_With Terraform 0.12 or later_, outputs are available in the top-level (root module) namespace only, and are not accessible within submodules.

This namespace is indexed by output name.

### Value: `sensitive`

- **Value Type:** Boolean.

The `sensitive` value within the [output namespace](#namespace-outputs) is `true` when the output has been [marked as sensitive][ref-tf-sensitive-outputs].

[ref-tf-sensitive-outputs]: /terraform/language/values/outputs#sensitive-suppressing-values-in-cli-output

As an example, given the following output:

```hcl
output "foo" {
  sensitive = true
  value     = "bar"
}
```

The following policy would evaluate to `true`:

```python
import "tfstate"

main = rule { tfstate.outputs.foo.sensitive }
```

### Value: `type`

- **Value Type:** String.

The `type` value within the [output namespace](#namespace-outputs) gives the output's type. This will be one of `string`, `list`, or `map`. These are currently the only types available for outputs in Terraform.

As an example, given the following outputs:

```hcl
output "string" {
  value = "foo"
}

output "list" {
  value = [
    "foo",
    "bar",
  ]
}

output "map" {
  value = {
    foo = "bar"
  }
}
```

The following policy would evaluate to `true`:

```python
import "tfstate"

type_string = rule { tfstate.outputs.string.type is "string" }
type_list = rule { tfstate.outputs.list.type is "list" }
type_map = rule { tfstate.outputs.map.type is "map" }

main = rule { type_string and type_list and type_map }
```

### Value: `value`

- **Value Type:** String, list, or map.

The `value` value within the [output namespace](#namespace-outputs) is the value of the output in question.

Note that the only valid primitive output type in Terraform is currently a string, which means that any int, float, or boolean value will need to be converted before it can be used in comparison. This does not apply to primitives within maps and lists, which will be their original types.

As an example, given the following output blocks:

```hcl
output "foo" {
  value = "bar"
}

output "number" {
  value = "42"
}

output "map" {
  value = {
    foo    = "bar"
    number = 42
  }
}
```

The following policy would evaluate to `true`:

```python
import "tfstate"

value_foo = rule { tfstate.outputs.foo.value is "bar" }
value_number = rule { int(tfstate.outputs.number.value) is 42 }
value_map_string = rule { tfstate.outputs.map.value["foo"] is "bar" }
value_map_int = rule { tfstate.outputs.map.value["number"] is 42 }

main = rule { value_foo and value_number and value_map_string and value_map_int }
```
"}
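The tfstate namespaces above are, structurally, nested maps keyed by type, name, and count index. As a rough illustration (plain Python, not Sentinel; the `tfstate` dict below is a hypothetical stand-in for the real import data), the resource lookup and output-value conversion patterns can be modeled like this:

```python
# Illustrative Python model of the tfstate namespace shapes described above.
# NOT Sentinel and not real import data: `tfstate` is a hypothetical stand-in
# dict used only to show the nesting.

tfstate = {
    # resources are nested maps: type -> name -> count index -> instance
    "resources": {
        "null_resource": {
            "foo": {
                0: {"attr": {"triggers": {"foo": "bar"}}, "tainted": False},
            },
        },
    },
    # outputs are indexed by name; primitive output values are strings
    "outputs": {
        "number": {"type": "string", "value": "42", "sensitive": False},
    },
}

def resource_attr(state, rtype, name, index):
    """Mimic the tfstate.resources.TYPE.NAME[NUMBER].attr lookup."""
    return state["resources"][rtype][name][index]["attr"]

# Equivalent of: tfstate.resources.null_resource.foo[0].attr.triggers.foo is "bar"
rule_main = resource_attr(tfstate, "null_resource", "foo", 0)["triggers"]["foo"] == "bar"

# Equivalent of: int(tfstate.outputs.number.value) is 42 -- the output value
# is a string and must be converted before a numeric comparison
rule_number = int(tfstate["outputs"]["number"]["value"]) == 42

print(rule_main and rule_number)  # → True
```

The bracketed Python access also mirrors why the count index always needs square brackets in Sentinel: it is a map key that begins with a number, and identifiers cannot start with one.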
{"questions":"terraform Learn how to use Sentinel policy language to create custom policies including Define custom Sentinel policies imports to define rules and useful functions This topic describes how to create and manage custom policies using Sentinel policy language For instructions about how to use pre written Sentinel policies from the registry refer to Run pre written Sentinel policies terraform cloud docs policy enforcement define policies prewritten sentinel page title Define custom Sentinel policies","answers":"---\npage_title: Define custom Sentinel policies\ndescription: >-\n  Learn how to use Sentinel policy language to create custom policies, including\n  imports to define rules and useful functions.\n---\n\n# Define custom Sentinel policies \n\nThis topic describes how to create and manage custom policies using Sentinel policy language.  For instructions about how to use pre-written Sentinel policies from the registry, refer to [Run pre-written Sentinel policies](\/terraform\/cloud-docs\/policy-enforcement\/define-policies\/prewritten-sentinel).\n\n## Overview\n\nTo define a policy, create a file and declare an `import` function to include reusable libraries, external data, and other functions. Sentinel policy language includes several types of elements you can import using the  `import` function. \n\nDeclare and configure additional Sentinel policy language elements. The details depend on which elements you want to use in your policy. Refer to the Sentinel documentation for additional information. \n\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n## Declare an `import` function\n\nA policy can include imports that enable a policy to access reusable libraries, external data, and functions. 
Refer to [imports](https:\/\/docs.hashicorp.com\/sentinel\/concepts\/imports) in the Sentinel documentation for more details.\n\nHCP Terraform provides four imports to define policy rules for the plan, configuration, state, and run associated with a policy check.\n\n- [tfplan](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan) - Access a Terraform plan, which is the file created as a result of the [`terraform plan` command](\/terraform\/cli\/commands\/plan). The plan represents the changes that Terraform must make to reach the desired infrastructure state described in the configuration.\n- [tfconfig](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfconfig) - Access a Terraform configuration. The configuration is the set of `.tf` files that describe the desired infrastructure state.\n- [tfstate](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfstate) - Access the Terraform [state](\/terraform\/language\/state). Terraform uses state to map real-world resources to your configuration.\n- [tfrun](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfrun) - Access data associated with a [run in HCP Terraform](\/terraform\/cloud-docs\/run\/remote-operations). For example, you could retrieve the run's workspace.\n\nYou can create mocks of these imports to use with the [Sentinel\nCLI](https:\/\/docs.hashicorp.com\/sentinel\/commands) mocking and testing features. Refer to [Mocking Terraform Sentinel Data](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/mock) for more details.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\nHCP Terraform does not support custom imports.\n<!-- END: TFC:only name:pnp-callout -->\n\n\n## Declare additional elements\n\nThe following functions and idioms will be useful as you start writing Sentinel\npolicies for Terraform.\n\n### Iterate over modules and find resources\n\nThe most basic Sentinel task for Terraform is to enforce a rule on all resources\nof a given type. 
Before you can do that, you need to get a collection of all the\nrelevant resources from all modules. The easiest way to do that is to copy a\nfunction like the following into your policies.\n\nThe following example uses the `tfplan` import. You can find similar\nfunctions that iterate over the `tfconfig` and `tfstate` imports\n[here](https:\/\/github.com\/hashicorp\/terraform-guides\/tree\/master\/governance\/second-generation\/common-functions).\n\n```python\nimport \"tfplan\"\nimport \"strings\"\n\n# Find all resources of a specific type from all modules using the tfplan import\nfind_resources_from_plan = func(type) {\n    resources = {}\n    for tfplan.module_paths as path {\n        for tfplan.module(path).resources[type] else {} as name, instances {\n            for instances as index, r {\n                # Get the address of the resource instance\n                if length(path) == 0 {\n                    # root module\n                    address = type + \".\" + name + \"[\" + string(index) + \"]\"\n                } else {\n                    # non-root module\n                    address = \"module.\" + strings.join(path, \".module.\") + \".\" +\n                              type + \".\" + name + \"[\" + string(index) + \"]\"\n                }\n                # Add the instance to resources, setting the key to the address\n                resources[address] = r\n            }\n        }\n    }\n    return resources\n}\n```\n\nCall the function to get all resources of a desired type by passing the\ntype as a string in quotation marks:\n\n```python\naws_instances = find_resources_from_plan(\"aws_instance\")\n```\n\nThis example function does several useful things while finding resources:\n\n- It checks every module (including the root module) for resources of the\n  specified type by iterating over the `module_paths` namespace. 
The top-level\n  `resources` namespace is more convenient, but it only reveals resources from\n  the root module.\n- It iterates over the named resources and [resource\n  instances](\/terraform\/language\/expressions\/references#resources)\n  found in each module, starting with `tfplan.module(path).resources[type]`\n  which is a series of nested maps keyed by resource names and instance counts.\n- It uses the Sentinel [`else`\n  operator](https:\/\/docs.hashicorp.com\/sentinel\/language\/spec#else-operator) to\n  recover from `undefined` values which would occur for modules that don't have\n  any resources of the specified type.\n- It builds a flat `resources` map of all resource instances of the specified\n  type. Using a flat map simplifies the code used by Sentinel policies to\n  evaluate rules.\n- It computes an `address` variable for each resource instance and uses this as\n  the key in the `resources` map. This allows writers of Sentinel policies to\n  print the full [address](\/terraform\/cli\/state\/resource-addressing) of each\n  resource instance that violates a policy, using the same address format used in\n  plan and apply logs. Doing this tells users who see violation messages exactly\n  which resources they need to modify in their Terraform code to comply with the\n  Sentinel policies.\n- It sets the value of the `resources` map to the data associated with the\n  resource instance (`r`). 
This is the data that Sentinel policies apply rules\n  against.\n\n### Validate resource attributes\n\nOnce you have a collection of resource instances of a desired type indexed by\ntheir addresses, you usually want to validate that one or more resource\nattributes meet some conditions by iterating over the resource instances.\n\nWhile you could use Sentinel's [`all` and `any`\nexpressions](https:\/\/docs.hashicorp.com\/sentinel\/language\/boolexpr#any-all-expressions)\ndirectly inside Sentinel rules, your rules would only report the first violation\nbecause Sentinel uses short-circuit logic. It is therefore usually preferred to\nuse a [`for` loop](https:\/\/docs.hashicorp.com\/sentinel\/language\/loops) outside\nof your rules so that you can report all violations that occur. You can do this\ninside functions or directly in the policy itself.\n\nHere is a function that calls the `find_resources_from_plan` function and\nvalidates that the instance types of all EC2 instances being provisioned are in\na given list:\n\n```python\n# Validate that all EC2 instances have instance_type in the allowed_types list\nvalidate_ec2_instance_types = func(allowed_types) {\n    validated = true\n    aws_instances = find_resources_from_plan(\"aws_instance\")\n    for aws_instances as address, r {\n        # Determine if the attribute is computed\n        if r.diff[\"instance_type\"].computed else false is true {\n            print(\"EC2 instance\", address,\n                  \"has attribute, instance_type, that is computed.\")\n        } else {\n            # Validate that each instance has allowed value\n            if (r.applied.instance_type else \"\") not in allowed_types {\n                print(\"EC2 instance\", address, \"has instance_type\",\n                    r.applied.instance_type, \"that is not in the allowed list:\",\n                    allowed_types)\n                validated = false\n            }\n        }\n    }\n    return validated\n}\n```\n\nThe boolean 
variable `validated` is initially set to `true`, but it is set to\n`false` if any resource instance violates the condition requiring that the\n`instance_type` attribute be in the `allowed_types` list. Since the function\nreturns `true` or `false`, it can be called inside Sentinel rules.\n\nNote that this function prints a warning message for **every** resource instance\nthat violates the condition. This allows writers of Terraform code to fix all\nviolations after just one policy check. It also prints warnings when the\nattribute being evaluated is\n[computed](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan#value-computed) and does\nnot evaluate the condition in this case since the applied value will not be\nknown.\n\nWhile this function allows a rule to validate an attribute against a list, some\nrules will only need to validate an attribute against a single value; in those\ncases, you could either use a list with a single value or embed that value\ninside the function itself, drop the `allowed_types` parameter from the function\ndefinition, and use the `is` operator instead of the `in` operator to compare\nthe resource attribute against the embedded value.\n\n### Write Rules\n\nHaving used the standardized `find_resources_from_plan` function and having\nwritten your own function to validate that resource instances of a specific\ntype satisfy some condition, you can define a list with allowed values and write\na rule that evaluates the value returned by your validation function.\n\n```python\n# Allowed Types\nallowed_types = [\n    \"t2.small\",\n    \"t2.medium\",\n    \"t2.large\",\n]\n\n# Main rule\nmain = rule {\n    validate_ec2_instance_types(allowed_types)\n}\n```\n\n### Validate multiple conditions in a single policy\n\nIf you want a policy to validate multiple conditions against resources of a\nspecific type, you could define a separate validation function for each\ncondition or use a single function to evaluate all the 
conditions. In the latter\ncase, you would make this function return a list of boolean values, using one\nfor each condition. You can then use multiple Sentinel rules that evaluate\nthose boolean values or evaluate all of them in your `main` rule. Here is a\npartial example:\n\n```python\n# Function to validate that S3 buckets have private ACL and use KMS encryption\nvalidate_private_acl_and_kms_encryption = func() {\n    result = {\n        \"private\":          true,\n        \"encrypted_by_kms\": true,\n    }\n    s3_buckets = find_resources_from_plan(\"aws_s3_bucket\")\n    # Iterate over resource instances and check that S3 buckets\n    # have private ACL and are encrypted by a KMS key\n    # If an S3 bucket is not private, set result[\"private\"] to false\n    # If an S3 bucket is not encrypted, set result[\"encrypted_by_kms\"] to false\n    for s3_buckets as joined_path, resource_map {\n        #...\n    }\n    return result\n}\n\n# Call the validation function\nvalidations = validate_private_acl_and_kms_encryption()\n\n# ACL rule\nis_private = rule {\n    validations[\"private\"]\n}\n\n# KMS Encryption Rule\nis_encrypted_by_kms = rule {\n    validations[\"encrypted_by_kms\"]\n}\n\n# Main rule\nmain = rule {\n    is_private and is_encrypted_by_kms\n}\n```\n\nYou can write similar functions and policies to restrict Terraform configurations using the `tfconfig` import and to restrict Terraform state using the `tfstate` import.\n\n## Next steps\n\n1. Group your policies into sets and apply them to your workspaces. Refer to [Create policy sets](\/terraform\/cloud-docs\/policy-enforcement\/create-sets) for additional information.\n1. View results and address Terraform runs that do not comply with your policies. Refer to [View results](\/terraform\/cloud-docs\/policy-enforcement\/view-results) for additional information.\n1. You can also view Sentinel policy results in JSON format. 
Refer to [View Sentinel JSON results](\/terraform\/cloud-docs\/policy-enforcement\/view-json-results) for additional information. ","site":"terraform","answers_cleaned":"
additional information  1  View results and address Terraform runs that do not comply with your policies  Refer to  View results   terraform cloud docs policy enforcement view results  for additional information  1  You can also view Sentinel policy results in JSON format  Refer to  View Sentinel JSON results    terraform cloud docs policy enforcement view json results  for additional information  "}
{"questions":"terraform Define OPA policies This topic describes how to create and manage custom policies using the open policy agent OPA framework Refer to the following topics for instructions on using HashiCorp Sentinel policies Use the Rego policy language to define Open Policy Agent OPA policies for HCP Terraform page title Defining Policies Open Policy Agent HCP Terraform","answers":"---\npage_title: Defining Policies - Open Policy Agent - HCP Terraform\ndescription: Use the Rego policy language to define Open Policy Agent (OPA) policies for HCP Terraform.\n---\n\n# Define OPA policies\n\nThis topic describes how to create and manage custom policies using the Open Policy Agent (OPA) framework. Refer to the following topics for instructions on using HashiCorp Sentinel policies:\n\n- [Define custom Sentinel policies](\/terraform\/cloud-docs\/policy-enforcement\/define-policies\/custom-sentinel)\n- [Copy pre-written Sentinel policies](\/terraform\/cloud-docs\/policy-enforcement\/define-policies\/prewritten-sentinel)\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n> **Hands-on:** Try the [Detect Infrastructure Drift and Enforce OPA Policies](\/terraform\/tutorials\/cloud\/drift-and-opa) tutorial.\n\n## Overview\n\nYou can write OPA policies using the Rego policy language, which is the native query language for the OPA framework. Refer to the following topics in the [OPA documentation](https:\/\/www.openpolicyagent.org\/docs\/latest\/policy-language\/) for additional information:\n\n- [How Do I Write Rego Policies?](https:\/\/www.openpolicyagent.org\/docs\/v0.13.5\/how-do-i-write-policies\/)\n- [Rego Policy Playground](https:\/\/play.openpolicyagent.org\/)\n\n## OPA query\n\nYou must write a query to identify a specific policy rule within your Rego code. 
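For instance, the query `data.terraform.deny` identifies a rule named `deny` in a package named `terraform`. The following is a minimal sketch (the rule body is illustrative only, not taken from the source):\n\n```rego\npackage terraform\n\ndeny[msg] {\n\tinput.plan.terraform_version == \"0.12.0\"\n\tmsg := \"Terraform 0.12.0 is not allowed\"\n}\n```\n\n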
The query may evaluate code from multiple Rego files.\n\nThe result of each query must return an array, which HCP Terraform uses to determine whether the policy has passed or failed. If the array is empty, HCP Terraform reports that the policy has passed.\n\nThe query is typically a combination of the policy package name and rule name, such as `data.terraform.deny`.\n\n## OPA input\n\nHCP Terraform combines the output from the Terraform run and plan into a single JSON file and passes that file to OPA as input. Refer to the [OPA Overview documentation](https:\/\/www.openpolicyagent.org\/docs\/latest\/#the-input-document) for more details about how OPA uses JSON input data.\n\nThe run data contains information like workspace details and the organization name. To access the properties from the Terraform plan data in your policies, use `input.plan`. To access properties from the Terraform run, use `input.run`.\n\nThe following example shows sample OPA input data.\n\n```json\n{\n  \"plan\": {\n    \"format_version\": \"1.1\",\n    \"output_changes\": {},\n    \"planned_values\": {},\n    \"resource_changes\": [],\n    \"terraform_version\": \"1.2.7\"\n  },\n  \"run\": {\n    \"organization\": {\n      \"name\": \"hashicorp\"\n    },\n    \"workspace\": {}\n  }\n}\n```\n\nUse the [Retrieve JSON Execution Plan endpoint](\/terraform\/cloud-docs\/api-docs\/plans#retrieve-the-json-execution-plan) to retrieve Terraform plan output data for testing. 
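With input of this shape, a single rule can reference both sections; the following is a minimal sketch (the package name and checks are illustrative only, not taken from the source):\n\n```rego\npackage terraform.policies.example\n\nimport input.plan as plan\nimport input.run as run\n\ndeny[msg] {\n\tplan.terraform_version != \"1.2.7\"\n\trun.organization.name == \"hashicorp\"\n\tmsg := \"unexpected Terraform version for this organization\"\n}\n```\n\n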
Refer to [Terraform Run Data](#terraform-run-data) for the properties included in Terraform run output data.\n\n## Example policies\n\nThe following example policy parses a Terraform plan and checks whether it includes security group updates that allow ingress traffic from all CIDRs (`0.0.0.0\/0`).\n\nThe OPA query for this example policy is `data.terraform.policies.public_ingress.deny`.\n\n```rego\npackage terraform.policies.public_ingress\n\nimport input.plan as plan\n\ndeny[msg] {\n  r := plan.resource_changes[_]\n  r.type == \"aws_security_group\"\n  r.change.after.ingress[_].cidr_blocks[_] == \"0.0.0.0\/0\"\n  msg := sprintf(\"%v has 0.0.0.0\/0 as allowed ingress\", [r.address])\n}\n```\n\nThe following example policy ensures that databases are no larger than 128 GB.\n\nThe OPA query for this policy is `data.terraform.policies.fws.database.fws_db_001.rule`.\n\n```rego\npackage terraform.policies.fws.database.fws_db_001\n\nimport future.keywords.in\nimport input.plan as tfplan\n\nactions := [\n\t[\"no-op\"],\n\t[\"create\"],\n\t[\"update\"],\n]\n\ndb_size := 128\n\nresources := [resource_changes |\n\tresource_changes := tfplan.resource_changes[_]\n\tresource_changes.type == \"fakewebservices_database\"\n\tresource_changes.mode == \"managed\"\n\tresource_changes.change.actions in actions\n]\n\nviolations := [resource |\n\tresource := resources[_]\n\tnot resource.change.after.size == db_size\n]\n\nviolators[address] {\n\taddress := violations[_].address\n}\n\nrule[msg] {\n\tcount(violations) != 0\n\tmsg := sprintf(\n\t\t\"%d %q severity resource violation(s) have been detected.\",\n\t\t[count(violations), rego.metadata.rule().custom.severity]\n\t)\n}\n```\n\n## Test policies\n\nYou can write tests for your policies by [mocking](https:\/\/www.openpolicyagent.org\/docs\/latest\/policy-testing\/#data-and-function-mocking) the input data the policies use during Terraform runs.\n\nThe following example policy called `block_auto_apply_runs` checks whether or not an HCP 
Terraform workspace has been configured to automatically apply a successful Terraform plan.\n\n```rego\npackage terraform.tfc.block_auto_apply_runs\n\nimport input.run as run\n\ndeny[msg] {\n\trun.workspace.auto_apply != false\n\tmsg := sprintf(\n\t\t\"HCP Terraform workspace %s has been configured to automatically provision Terraform infrastructure. Change the workspace Apply Method settings to 'Manual Apply'\",\n\t\t[run.workspace.name],\n\t)\n}\n```\n\nThe following test validates `block_auto_apply_runs`. The test is written in Rego and uses the OPA [test format](https:\/\/www.openpolicyagent.org\/docs\/latest\/policy-testing\/#test-format) to check that the workspace [apply method](\/terraform\/cloud-docs\/workspaces\/settings#apply-method) is not configured to auto apply. You can run this test with the `opa test` CLI command. Refer to [Policy Testing](https:\/\/www.openpolicyagent.org\/docs\/latest\/policy-testing\/) in the OPA documentation for more details.\n\n```rego\npackage terraform.tfc.block_auto_apply_runs\n\nimport future.keywords\n\ntest_run_workspace_auto_apply if {\n\tdeny with input as {\"run\": {\"workspace\": {\"auto_apply\": true}}}\n}\n```\n\n## Terraform run data\n\nEach [Terraform run](\/terraform\/docs\/glossary#run) outputs data describing the run settings and the associated workspace.\n\n### Schema\n\nThe following code shows the schema for Terraform run data.\n\n```\nrun\n\u251c\u2500\u2500 id (string)\n\u251c\u2500\u2500 created_at (string)\n\u251c\u2500\u2500 created_by (string)\n\u251c\u2500\u2500 message (string)\n\u251c\u2500\u2500 commit_sha (string)\n\u251c\u2500\u2500 is_destroy (boolean)\n\u251c\u2500\u2500 refresh (boolean)\n\u251c\u2500\u2500 refresh_only (boolean)\n\u251c\u2500\u2500 replace_addrs (array of strings)\n\u251c\u2500\u2500 speculative (boolean)\n\u251c\u2500\u2500 target_addrs (array of strings)\n\u251c\u2500\u2500 project\n\u2502    \u251c\u2500\u2500 id (string)\n\u2502    \u2514\u2500\u2500 name 
(string)\n\u251c\u2500\u2500 variables (map of keys)\n\u251c\u2500\u2500 organization\n\u2502   \u2514\u2500\u2500 name (string)\n\u2514\u2500\u2500 workspace\n    \u251c\u2500\u2500 id (string)\n    \u251c\u2500\u2500 name (string)\n    \u251c\u2500\u2500 created_at (string)\n    \u251c\u2500\u2500 description (string)\n    \u251c\u2500\u2500 execution_mode (string)\n    \u251c\u2500\u2500 auto_apply (bool)\n    \u251c\u2500\u2500 tags (array of strings)\n    \u251c\u2500\u2500 working_directory (string)\n    \u2514\u2500\u2500 vcs_repo (map of keys)\n```\n\n### Properties\n\nThe following sections contain details about each property in Terraform run data.\n\n#### Run namespace\n\nThe following table contains the attributes for the `run` namespace.\n\n| Property Name |  Type  | Description |\n|---------| ----------| ----------|\n|  `id`   | String | The ID associated with the current Terraform run |\n|  `created_at` | String | The time Terraform created the run. The timestamp follows the [standard timestamp format in RFC 3339](https:\/\/datatracker.ietf.org\/doc\/html\/rfc3339). |\n|  `created_by` | String | A string that specifies the user name of the HCP Terraform user for the specific run. |\n|  `message` |  String | The message associated with the Terraform run. The default value is \"Queued manually via the Terraform Enterprise API\". |\n|  `commit_sha` |  String |  The checksum hash (SHA) that identifies the commit |\n|  `is_destroy` |  Boolean | Whether the plan is a destroy plan that destroys all provisioned resources |\n|  `refresh` | Boolean |  Whether the state was refreshed prior to the plan |\n|   `refresh_only` |  Boolean | Whether the plan is in refresh-only mode. In refresh-only mode, Terraform ignores configuration changes and updates state with any changes made outside of Terraform. 
|\n|  `replace_addrs` | An array of strings representing [resource addresses](\/terraform\/cli\/state\/resource-addressing) | The targets specified using the [`-replace`](\/terraform\/cli\/commands\/plan#replace-address) flag in the CLI or the `replace-addrs` property in the API. Undefined if there are no specified resource targets. |\n|  `speculative` | Boolean | Whether the plan associated with the run is a [speculative plan](\/terraform\/cloud-docs\/run\/remote-operations#speculative-plans) only |\n|  `target_addrs` |  An array of strings representing [resource addresses](\/terraform\/cli\/state\/resource-addressing). | The targets specified using the [`-target`](\/terraform\/cli\/commands\/plan#resource-targeting) flag in the CLI or the `target-addrs` property in the API. Undefined if there are no specified resource targets. |\n|   `variables` |  A string-keyed map of values. | Provides the variables configured within the run. Each variable `name` maps to two properties: `category` and `sensitive`. The `category` property is a string indicating the variable type, either \"input\" or \"environment\". The `sensitive` property is a boolean, indicating whether the variable is a [sensitive value](\/terraform\/cloud-docs\/workspaces\/variables\/managing-variables#sensitive-values). |\n\n#### Project namespace\n\nThe following table contains the properties for the `project` namespace.\n\n| Property Name |  Type  | Description |\n|---------- | ------------ | -----------|\n| `id` | String | The ID associated with the Terraform project |\n|  `name` |  String | The name of the project, which can only include letters, numbers, spaces, `-`, and `_`. |\n\n#### Organization namespace\n\nThe `organization` namespace has one property called `name`. 
The `name` property is a string that specifies the name of the HCP Terraform organization for the run.\n\n#### Workspace namespace\n\nThe following table contains the properties for the `workspace` namespace.\n\n| Property Name |  Type  | Description |\n|---------- | ------------ | -----------|\n| `id` | String | The ID associated with the Terraform workspace |\n|  `name` |  String | The name of the workspace, which can only include letters, numbers, `-`, and `_` |\n|  `created_at` | String | The time of the workspace's creation. The timestamp follows the [standard timestamp format in RFC 3339](https:\/\/datatracker.ietf.org\/doc\/html\/rfc3339). |\n|  `description` | String | The description for the workspace. This value can be `null`. |\n|  `auto_apply` | Boolean | The workspace's [auto-apply](\/terraform\/cloud-docs\/workspaces\/settings#apply-method) setting |\n|  `tags` | Array of strings | The list of tag names for the workspace |\n|  `working_directory` |  String | The configured [Terraform working directory](\/terraform\/cloud-docs\/workspaces\/settings#terraform-working-directory) of the workspace. This value can be `null`. |\n|  `execution_mode` |  String | The configured Terraform execution mode of the workspace. The default value is `remote`. |\n|  `vcs_repo` |  A string-keyed map to objects | Data associated with a VCS repository connected to the workspace. The map contains `identifier` (string), `display_identifier` (string), `branch` (string), and `ingress_submodules` (boolean). Refer to the HCP Terraform [Workspaces API documentation](\/terraform\/cloud-docs\/api-docs\/workspaces) for details about each property. This value can be `null`. |\n\n## Next steps\n\n- Group your policies into sets and apply them to your workspaces. Refer to [Create policy sets](\/terraform\/cloud-docs\/policy-enforcement\/create-sets) for additional information.\n- View results and address Terraform runs that do not comply with your policies. 
Refer to [View results](\/terraform\/cloud-docs\/policy-enforcement\/view-results) for additional information.\n- You can also view Sentinel policy results in JSON format. Refer to [View Sentinel JSON results]((\/terraform\/cloud-docs\/policy-enforcement\/view-json-results) for additional information. ","site":"terraform","answers_cleaned":"    page title   Defining Policies   Open Policy Agent   HCP Terraform description  Use the Rego policy language to define Open Policy Agent  OPA  policies for HCP Terraform          Define OPA policies   This topic describes how to create and manage custom policies using the open policy agent  OPA  framework  Refer to the following topics for instructions on using HashiCorp Sentinel policies    Define custom Sentinel policies   terraform cloud docs policy enforcement define policies custom sentinel   Copy pre written Sentinel policies   terraform cloud docs policy enforcement define policies prewritten sentinel        BEGIN  TFC only name pnp callout      include  tfc package callouts policies mdx       END  TFC only name pnp callout          Hands on    Try the  Detect Infrastructure Drift and Enforce OPA Policies   terraform tutorials cloud drift and opa  tutorial      Overview  You can write OPA policies using the Rego policy language  which is the native query language for the OPA framework  Refer to the following topics in the  OPA documentation  https   www openpolicyagent org docs latest policy language   for additional information    How Do I Write Rego Policies   https   www openpolicyagent org docs v0 13 5 how do i write policies    Rego Policy Playground  https   play openpolicyagent org        OPA query  You must write a query to identify a specific policy rule within your Rego code  The query may evaluate code from multiple Rego files   The result of each query must return an array  which HCP Terraform uses to determine whether the policy has passed or failed  If the array is empty  HCP Terraform reports that the 
policy has passed   The query is typically a combination of the policy package name and rule name  such as  data terraform deny       OPA input  HCP Terraform combines the output from the Terraform run and plan into a single JSON file and passes that file to OPA as input  Refer to the  OPA Overview documentation  https   www openpolicyagent org docs latest  the input document  for more details about how OPA uses JSON input data   The run data contains information like workspace details and the organization name  To access the properties from the Terraform plan data in your policies  use   input plan   To access properties from the Terraform run  use   input run    The following example shows sample OPA input data      json    plan       format version    1 1     output changes           planned values             resource changes           terraform version    1 2 7       run        organization        name    hashicorp          workspace                  Use the  Retrieve JSON Execution Plan endpoint   terraform cloud docs api docs plans retrieve the json execution plan  to retrieve Terraform plan output data for testing  Refer to  Terraform Run Data   terraform run data  for the properties included in Terraform run output data      Example Policies  The following example policy parses a Terraform plan and checks whether it includes security group updates that allow ingress traffic from all CIDRs   0 0 0 0 0     The OPA query for this example policy is  data terraform policies public ingress deny       rego package terraform policies public ingress  import input plan as plan  deny msg      r    plan resource changes      r type     aws security group    r change after ingress    cidr blocks        0 0 0 0 0    msg    sprintf   v has 0 0 0 0 0 as allowed ingress    r address          The following example policy ensures that databases are no larger than 128 GB   The OPA query for this policy is  data terraform policies fws database fws db 001 rule       rego 
package terraform policies fws database fws db 001  import future keywords in import input plan as tfplan  actions         no op       create       update       db size    128  resources     resource changes    resource changes    tfplan resource changes     resource changes type     fakewebservices database   resource changes mode     managed   resource changes change actions in actions    violations     resource    resource    resources     not resource change after size    db size    violators address     address    violations    address    rule msg     count violations     0   msg    sprintf        d  q severity resource violation s  have been detected       count violations   rego metadata rule   custom severity               Test policies  You can write tests for your policies by  mocking  https   www openpolicyagent org docs latest policy testing  data and function mocking  the input data the policies use during Terraform runs   The following example policy called  block auto apply runs  checks whether or not an HCP Terraform workspace has been configured to automatically apply a successful Terraform plan      rego package terraform tfc block auto apply runs  import input run as run  deny msg     run workspace auto apply    false  msg    sprintf     HCP Terraform workspace  s has been configured to automatically provision Terraform infrastructure  Change the workspace Apply Method settings to  Manual Apply       run workspace name             The following test validates  block auto apply runs   The test is written in rego and uses the OPA  test format  https   www openpolicyagent org docs latest policy testing  test format  to check that the workspace  apply method   terraform cloud docs workspaces settings apply method  is not configured to auto apply  You can run this test with the  opa test  CLI command  Refer to  Policy Testing  https   www openpolicyagent org docs latest policy testing   in the OPA documentation for more details      rego package 
terraform tfc block auto apply runs  import future keywords  test run workspace auto apply if    deny with input as   run     workspace     auto apply   true              Terraform run data  Each  Terraform run   terraform docs glossary run  outputs data describing the run settings and the associated workspace       Schema  The following code shows the schema for Terraform run data       run     id  string      created at  string      created by  string      message  string      commit sha  string      is destroy  boolean      refresh  boolean      refresh only  boolean      replace addrs  array of strings      speculative  boolean      target addrs  array of strings      project          id  string           name  string      variables  map of keys      organization         name  string      workspace         id  string          name  string          created at  string          description  string          execution mode  string          auto apply  bool          tags  array of strings          working directory  string          vcs repo  map of keys           Properties  The following sections contain details about each property in Terraform run data        Run namespace  The following table contains the attributes for the  run  namespace     Properties Name    Type    Description                                           id      String   The ID associated with the current Terraform run       created at    String   The time Terraform created the run  The timestamp follows the  standard timestamp format in RFC 3339  https   datatracker ietf org doc html rfc3339         created by    String   A string that specifies the user name of the HCP Terraform user for the specific run       message     String   The message associated with the Terraform run  The default value is  Queued manually via the Terraform Enterprise API         commit sha     String    The checksum hash  SHA  that identifies the commit       is destroy     Boolean   Whether the plan is a destroy plan 
that destroys all provisioned resources       refresh    Boolean    Whether the state refreshed prior to the plan        refresh only     Boolean   Whether the plan is in refresh only mode  In refresh only mode  Terraform ignores configuration changes and updates state with any changes made outside of Terraform        replace addrs    An array of strings representing  resource addresses   terraform cli state resource addressing    The targets specified using the    replace    terraform cli commands plan replace address  flag in the CLI or the  replace addrs  property in the API  Undefined if there are no specified resource targets        speculative    Boolean   Whether the plan associated with the run is a  speculative plan   terraform cloud docs run remote operations speculative plans  only       target addrs     An array of strings representing  resource addresses   terraform cli state resource addressing     The targets specified using the    target    terraform cli commands plan resource targeting  flag in the CLI or the  target addrs  property in the API  Undefined if there are no specified resource targets         variables     A string keyed map of values    Provides the variables configured within the run  Each variable  name  maps to two properties   category  and  sensitive    The  category  property is a string indicating the variable type  either  input  or  environment   The  sensitive  property is a boolean  indicating whether the variable is a  sensitive value   terraform cloud docs workspaces variables managing variables sensitive values            Project Namespace  The following table contains the properties for the  project  namespace     Property Name    Type    Description                                                id    String   The ID associated with the Terraform project       name     String   The name of the project  which can only include letters  numbers  spaces       and               Organization namespace  The  organization  
namespace has one property called  name   The  name  property is a string that specifies the name of the HCP Terraform organization for the run        Workspace namespace  The following table contains the properties for the  workspace  namespace     Property Name    Type    Description                                                id    String   The ID associated with the Terraform workspace       name     String   The name of the workspace  which can only include letters  numbers       and           created at    String   The time of the workspace s creation  The timestamp follows the  standard timestamp format in RFC 3339  https   datatracker ietf org doc html rfc3339         description    String   The description for the workspace  This value can be  null         auto apply    Boolean   The workspace s  auto apply   terraform cloud docs workspaces settings apply method  setting       tags    Array of strings   The list of tag names for the workspace       working directory     String   The configured  Terraform working directory   terraform cloud docs workspaces settings terraform working directory  of the workspace  This value can be  null         execution mode     String   The configured Terraform execution mode of the workspace  The default value is  remote         vcs repo     A string keyed map to objects   Data associated with a VCS repository connected to the workspace  The map contains  identifier   string     display identifier   string    branch   string   and   ingress submodules   boolean   Refer to the HCP Terraform  Workspaces API documentation   terraform cloud docs api docs workspaces  for details about each property  This value can be  null         Next steps    Group your policies into sets and apply them to your workspaces  Refer to  Create policy sets   terraform cloud docs  policy enforcement create sets  for additional information    View results and address Terraform runs that do not comply with your policies  Refer to  View results   
terraform cloud docs policy enforcement view results  for additional information    You can also view Sentinel policy results in JSON format  Refer to  View Sentinel JSON results    terraform cloud docs policy enforcement view json results  for additional information  "}
{"questions":"terraform This topic describes how to run Sentinel policies created and maintained by HashiCorp For instructions about how to create your own custom Sentinel policies refer to Define custom Sentinel policies terraform cloud docs policy enforcement define policies custom sentinel Learn how to download and install pre written Sentinel policies created and maintained by HashiCorp Run pre written Sentinel policies page title Run pre written Sentinel policies Overview","answers":"---\npage_title: Run pre-written Sentinel policies\ndescription: Learn how to download and install pre-written Sentinel policies created and maintained by HashiCorp.\n---\n\n# Run pre-written Sentinel policies \n\nThis topic describes how to run Sentinel policies created and maintained by HashiCorp. For instructions about how to create your own custom Sentinel policies, refer to [Define custom Sentinel policies](\/terraform\/cloud-docs\/policy-enforcement\/define-policies\/custom-sentinel).\n\n## Overview\n\nPre-written Sentinel policy libraries streamline your compliance processes and enhance security across your infrastructure. HashiCorp's ready-to-use policies can help you enforce best practices and security standards across your AWS environment.\n\nRefer to the following resources for details about working with pre-written policies and information about the Sentinel language and framework: \n\n- [Sentinel documentation](\/sentinel\/docs).\n- The `README.md` documentation included with each of the policy libraries.\n\nComplete the following steps to implement pre-written Sentinel policies in your workspaces:\n\n1. Obtain the policies you want to implement. Download policies directly into your repository or create a fork of the HashiCorp repositories. Alternatively, you can add the Terraform module to your configuration, which acquires the policies and connects them to your workspaces in a single step.\n1. Connect policies to your workspace. 
After you download policies or fork policy repositories, you must connect them to your HCP Terraform or Terraform Enterprise workspaces.       \n\n## Requirements\n\nYou must use one of the following Terraform applications:\n\n- HCP Terraform\n- Terraform Enterprise v202406-1 or newer\n\n### Permissions\n\nTo create new policy sets and policies, your HCP Terraform or Terraform Enterprise user account must either be a member of the owners team or have the **Manage Policies** organization-level permissions enabled. Refer to the following topics for additional information:\n\n- [Organization owners](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-owners)\n- [Manage policies](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-policies)\n\n### Version control system\n\nYou must have a GitHub account connected to HCP Terraform or Terraform Enterprise to manually connect policy sets to your workspaces. Refer to [Connecting VCS Providers](\/terraform\/cloud-docs\/vcs) for instructions.\n\n## Obtain policies\n\nYou can use the policy libraries created and maintained by HashiCorp. 
The libraries are stored in the following GitHub repositories:\n\n- [policy-library-cis-aws-efs-terraform](https:\/\/github.com\/hashicorp\/policy-library-cis-aws-efs-terraform)\n- [policy-library-cis-aws-rds-terraform](https:\/\/github.com\/hashicorp\/policy-library-cis-aws-rds-terraform)\n- [policy-library-cis-aws-vpc-terraform](https:\/\/github.com\/hashicorp\/policy-library-cis-aws-vpc-terraform)\n- [policy-library-cis-aws-iam-terraform](https:\/\/github.com\/hashicorp\/policy-library-cis-aws-iam-terraform)\n- [policy-library-cis-aws-s3-terraform](https:\/\/github.com\/hashicorp\/policy-library-cis-aws-s3-terraform)\n- [policy-library-cis-aws-cloudtrail-terraform](https:\/\/github.com\/hashicorp\/policy-library-cis-aws-cloudtrail-terraform)\n- [policy-library-cis-aws-kms-terraform](https:\/\/github.com\/hashicorp\/policy-library-cis-aws-kms-terraform)\n- [policy-library-cis-aws-ec2-terraform](https:\/\/github.com\/hashicorp\/policy-library-cis-aws-ec2-terraform)\n\nUse one of the following methods to obtain pre-written policies:\n\n- **Download policies from the registry**: Use this method if you want to assemble custom policy sets without customizing policies.  \n- **Fork the HashiCorp policy GitHub repository**: Use this method if you intend to customize the policies. \n- **Add the Terraform module to your configuration**: Use this method to implement specific versions of the policies as-is. This method also connects the policies to workspaces in the Terraform configuration file instead of connecting them as a separate step. \n\n### Download policies from the registry\n\nComplete the following steps to download policies from the registry and apply them directly to your workspaces. \n\n1. Browse the policy libraries available in the [Terraform registry](https:\/\/registry.terraform.io\/browse\/policies).\n1. Click on a policy library and click **Choose policies**.\n1. Select the policies you want to implement. 
The registry generates code in the **USAGE INSTRUCTIONS** box.\n1. Click **Copy Code Snippet** to copy the code to your clipboard. \n1. Create a GitHub repository to store the policies and the policy set configuration file.\n1. Create a file called `sentinel.hcl` in the repository. \n1. Paste the code from your clipboard into `sentinel.hcl` and commit your changes. \n1. Complete the instructions for [connecting the policies to your workspace](#connect-policies-to-your-workspace).\n\n### Create a fork of the policy libraries \n\nCreate a fork of the repository containing the policies you want to implement. Refer to the [GitHub documentation](https:\/\/docs.github.com\/en\/pull-requests\/collaborating-with-pull-requests\/working-with-forks\/fork-a-repo) for instructions on how to create a fork. \n\nHashiCorp Sentinel policy libraries include a `sentinel.hcl` file. The file defines an example policy set using the policies included in the library. Modify the file to customize your policy set. Refer to [Sentinel Policy Set VCS Repositories](\/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets\/sentinel-vcs) for additional information.\n\nAfter forking the repository, complete the instructions for [connecting the policies to your workspace](#connect-policies-to-your-workspace).\n\n### Add the Terraform module to your configuration\n\nThis method enables you to connect the policies to workspaces in the Terraform configuration file. As a result, you can skip the instructions described in [Connect policies to your workspace](#connect-policies-to-your-workspace). \n\n1. Go to the [module in the Terraform registry](https:\/\/registry.terraform.io\/modules\/hashicorp\/CIS-Policy-Set\/AWS\/latest) and copy the code generated in the **Provision Instructions** tile.  \n1. 
Add the `module` block to your Terraform configuration and define the following arguments: \n   - `source`: Specify the path to the module you downloaded.\n   - `tfe_organization`: Specify the name of your organization on Terraform Enterprise or HCP Terraform.\n   - `policy_set_workspace_names`: Specify a list of workspace names that you want to apply the policies to. \n\n   The following example configuration invokes the module for `target_workspace_1`:\n\n   ```hcl\n   module \"cis_v1-2-0_policies\" {\n      source                           = \"..\/prewritten-policy\"\n      name                             = \"cis-1-2-0\"\n      tfe_organization                 = \"<your-tfe-org>\"\n      policy_set_workspace_names       = [\"target_workspace_1\"]\n   }\n   ```\n\n1. Run `terraform plan` to view the plan.\n1. Run `terraform apply` to apply the changes. After running the command, Terraform evaluates Sentinel policies for each subsequent run in the workspaces you specified.\n\n## Connect policies to your workspace\n\nSkip this step if you [added the Terraform module](#add-the-terraform-module-to-your-configuration) to your configuration. When you use the module, the `policy_set_workspace_names` argument instructs Terraform to connect the policies to the HCP Terraform workspaces specified in the configuration. \n\n1. Log into your organization and click **Settings** in the sidebar. \n1. Click **Policy Sets** and click **Connect a new policy set**.\n1. Click the **Version control provider (VCS)** tile.\n1. Enable the **Sentinel** option as the policy framework.\n1. Specify a name and description for the set.\n1. Configure any additional options for the policy set and click **Next**.\n1. Choose the GitHub connection type, then choose the repository you created in [Download policies from the registry](#download-policies-from-the-registry) or forked in [Create a fork of the policy libraries](#create-a-fork-of-the-policy-libraries). \n1. If the `sentinel.hcl` policy set file is stored in a subfolder, specify the path to the file in the **Policies path** field. 
The default is the root directory.\n1. If you want to apply updated policy sets to the workspace from a specific branch, specify the name in the **VCS branch** field. The default is the default branch configured for the repository.\n1. Click **Next**, specify any additional parameters you want to pass to the Sentinel runtime, and click **Connect policy set** to finish applying the policies to the workspace. \n\nRun a plan in the workspace to trigger the connected policies. Refer to [Start a Terraform run](\/terraform\/cloud-docs\/run\/remote-operations#starting-runs) for additional information.\n\n## Next steps\n\n- Group your policies into sets and apply them to your workspaces. Refer to [Create policy sets](\/terraform\/cloud-docs\/policy-enforcement\/create-sets) for additional information.\n- View results and address Terraform runs that do not comply with your policies. Refer to [View results](\/terraform\/cloud-docs\/policy-enforcement\/view-results) for additional information.\n- You can also view Sentinel policy results in JSON format. 
Refer to [View Sentinel JSON results](\/terraform\/cloud-docs\/policy-enforcement\/view-json-results) for additional information.","site":"terraform","answers_cleaned":"    page title  Run pre written Sentinel policies description  Learn how to download and install pre written Sentinel policies created and maintained by HashiCorp         Run pre written Sentinel policies   This topic describes how to run Sentinel policies created and maintained by HashiCorp  For instructions about how to create your own custom Sentinel policies  refer to  Define custom Sentinel policies   terraform cloud docs policy enforcement define policies custom sentinel       Overview  Pre written Sentinel policy libraries streamline your compliance processes and enhance security across your infrastructure  HashiCorp s ready to use policies can help you enforce best practices and security standards across your AWS environment   Refer to the following resources for details about working with pre written policies and information about the Sentinel language and framework       Sentinel documentation   sentinel docs     The  README md  documentation included with each of the policy libraries   Complete the following steps to implement pre written Sentinel policies in your workspaces   1  Obtain the policies you want to implement  Download policies directly into your repository or create a fork of the HashiCorp repositories  Alternatively  you can add the Terraform module to your configuration  which acquires the policies and connects them to your workspaces in a single step  1  Connect policies to your workspace  After you download policies or fork policy repositories  you must connect them to your HCP Terraform or Terraform Enterprise workspaces             Requirements  You must use one of the following Terraform applications     HCP Terraform   Terraform Enterprise v202406 1 or newer      Permissions  To create new policy sets and policies  your HCP Terraform or Terraform Enterprise user 
account must either be a member of the owners team or have the   Manage Policies   organization level permissions enabled  Refer to the following topics for additional information      Organization owners   terraform cloud docs users teams organizations permissions organization owners     Manage policies   terraform cloud docs users teams organizations permissions manage policies       Version control system  You must have a GitHub account connected to HCP Terraform or Terraform Enterprise to manually connect policy sets to your workspaces  Refer to  Connecting VCS Providers   terraform cloud docs vcs  for instructions      Obtain policies  You can use the policy libraries created and maintained by HashiCorp  The libraries are stored in the following GitHub repositories      policy library cis aws efs terraform  https   github com hashicorp policy library cis aws efs terraform     policy library cis aws rds terraform  https   github com hashicorp policy library cis aws rds terraform     policy library cis aws vpc terraform  https   github com hashicorp policy library cis aws vpc terraform     policy library cis aws iam terraform  https   github com hashicorp policy library cis aws iam terraform     policy library cis aws s3 terraform  https   github com hashicorp policy library cis aws s3 terraform     policv library cis aws cloudtrail terraform  https   github com hashicorp policy library cis aws cloudtrail terraform     policy library cis aws kms terraform  https   github com hashicorp policy library cis aws kms terraform     policy library cis aws ec2 terraform  https   github com hashicorp policy library cis aws ec2 terraform   Use one of the following methods to obtain pre written policies       Download policies from the registry    Use this method if you want to assemble custom policy sets without customizing policies        Fork the HashiCorp policy GitHub repository    Use this method if you intend to customize the policies       Add the Terraform module 
to your configuration    Use this method to implement specific versions of the policies as is  This method also connects the policies to workspaces in the Terraform configuration file instead of connecting them as a separate step        Download policies from the registry  Complete the following steps to download policies from the registry and apply them directly to your workspaces    1  Browse the policy libraries available in the  Terraform registry  https   registry terraform io browse policies   1  Click on a policy library and click   Choose policies    1  Select the policies you want to implement  The registry generates code in the   USAGE INSTRUCTIONS   box  1  Click   Copy Code Snippet   to copy the code to your clipboard   1  Create a GitHub repository to store the policies and the policy set configuration file  1  Create a file called  sentinel hcl  in the repository   1  Paste the code from your clipboard into  sentinel hcl  and commit your changes   1  Complete the instructions for  connecting the policies to your workspace   connect policies to your workspace        Create a fork of the policy libraries   Create a fork of the repository containing the policies you want to implement  Refer to the  GitHub documentation  https   docs github com en pull requests collaborating with pull requests working with forks fork a repo  for instructions on how to create a fork    HashiCorp Sentinel policy libraries include a  sentinel hcl  file  The file defines an example policy set using the policies included in the library  Modify the file to customize your policy set  Refer to  Sentinel Policy Set VCS Repositories   terraform cloud docs policy enforcement manage policy sets sentinel vcs  for additional information   After forking the repository  complete the instructions for  connecting the policies to your workspace   connect policies to your workspace         Add the Terraform module to your configuration  This method enables you to connect the policies to 
workspaces in the Terraform configuration file  As a result  you can skip the instructions described in  Connect policies to your workspaces   connect policies to your workspaces       1  Go to the  module in the Terraform registry  https   registry terraform io modules hashicorp CIS Policy Set AWS latest  and copy the code generated in the   Provision Instructions   tile    1  Add the  module  block to your Terraform configuration and define the following arguments         source   Specify the path to the module you downloaded        tfe organization   Specify the name of your organization on Terraform Enterprise or HCP Terraform        policy set workspace names   Specify a list of workspace names that you want to apply the policies to       The following example configuration applies invokes the module for  target workspace 1          hcl    module  cis v1 2 0 policies          source       prewritten policy        name                                cis 1 2 0        tfe organization                     your tfe org         policy set workspace names           target workspace 1                1  Run  terraform plan  to view the plan  1  Run  terraform apply  to apply the changes  After running the command  Terraform will evaluate Sentinel policies for each following run of the workspaces you specified      Connect policies to your workspace  Skip this step if you  added the Terraform module   add the terraform module to your configuration  to your configuration  When you use the module  the  policy set workspace names  argument instructs Terraform to connect the policies to the HCP Terraform workspaces specified in the configuration    1  Log into your organization and click   Settings   in the sidebar   1  Click   Policy Sets   and click   Connect a new policy set    1  Click the   Version control provider  VCS    tile  1  Enable the   Sentinel   option as the policy framework  1  Specify a name and description for the set  1  Configure any additional options 
for the policy set and click   Next    1  Choose the GitHub connection type  then choose the repository you created in  Set up a repository for the policies   set up a repository for the policies    1  If the  sentinel hcl  policy set file is stored in a subfolder  specify the path to the file in the   Policies path   field  The default is the root directory  1  If you want to apply updated policy sets to the workspace from a specific branch  specify the name in the   VCS branch   field  The default is the default branch configured for the repository  1  Click   Next   and specify any additional parameters you want to pass to the Sentinel runtime and click   Connect policy set   to finish applying the policies to the workspace    Run a plan in the workspace to trigger the connected policies  Refer to  Start a Terraform run   terraform cloud docs run remote operations starting runs  for additional information      Next steps    Group your policies into sets and apply them to your workspaces  Refer to  Create policy sets   terraform cloud docs  policy enforcement create sets  for additional information    View results and address Terraform runs that do not comply with your policies  Refer to  View results   terraform cloud docs policy enforcement view results  for additional information    You can also view Sentinel policy results in JSON format  Refer to  View Sentinel JSON results   terraform cloud docs policy enforcement view json results  for additional information "}
{"questions":"terraform page title CLI driven Runs Runs HCP Terraform Trigger runs from your terminal using the Terraform CLI Learn the required private terraform cloud docs registry speculative plan terraform cloud docs run remote operations speculative plans configuration for remote CLI runs","answers":"---\npage_title: CLI-driven Runs - Runs - HCP Terraform\ndescription: >-\n  Trigger runs from your terminal using the Terraform CLI. Learn the required\n  configuration for remote CLI runs.\n---\n\n[private]: \/terraform\/cloud-docs\/registry\n\n[speculative plan]: \/terraform\/cloud-docs\/run\/remote-operations#speculative-plans\n\n[tfe-provider]: https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs\n\n# The CLI-driven Run Workflow\n\n> **Hands-on:** Try the [Log in to HCP Terraform from the CLI](\/terraform\/tutorials\/0-13\/cloud-login) tutorial.\n\nHCP Terraform has three workflows for managing Terraform runs.\n\n- The [UI\/VCS-driven run workflow](\/terraform\/cloud-docs\/run\/ui), which is the primary mode of operation.\n- The [API-driven run workflow](\/terraform\/cloud-docs\/run\/api), which is more flexible but requires you to create some tooling.\n- The CLI-driven run workflow described below, which uses Terraform's standard CLI tools to execute runs in HCP Terraform.\n\n## Summary\n\nThe [CLI integration](\/terraform\/cli\/cloud) brings HCP Terraform's collaboration features into the familiar Terraform CLI workflow. It offers the best of both worlds to developers who are already comfortable with using the Terraform CLI, and it can work with existing CI\/CD pipelines.\n\nYou can start runs with the standard `terraform plan` and `terraform apply` commands and then watch the progress of the run from your terminal. 
These runs execute remotely in HCP Terraform, use variables from the appropriate workspace, enforce any applicable [Sentinel or OPA policies](\/terraform\/cloud-docs\/policy-enforcement), and can access HCP Terraform's [private registry][private] and remote state inputs.\n\nHCP Terraform offers a few types of CLI-driven runs, to support different stages of your workflow:\n\n- `terraform plan` starts a [speculative plan][] in an HCP Terraform workspace, using configuration files from a local directory. You can quickly check the results of edits (including compliance with Sentinel policies) without needing to copy sensitive variables to your local machine.\n\n  Speculative plans work with all workspaces, and can co-exist with the [VCS-driven workflow](\/terraform\/cloud-docs\/run\/ui).\n\n- `terraform apply` starts a standard plan and apply in an HCP Terraform workspace, using configuration files from a local directory.\n\n  Remote `terraform apply` is for workspaces without a linked VCS repository. It replaces the VCS-driven workflow with a more traditional CLI workflow.\n\n- `terraform plan -out <FILE>` and `terraform apply <FILE>` perform a two-part [saved plan run](\/terraform\/cloud-docs\/run\/modes-and-options\/#saved-plans) in an HCP Terraform workspace, using configuration files from a local directory. The first command performs and saves the plan, and the second command applies it. You can use `terraform show <FILE>` to inspect a saved plan.\n\n    Like remote `terraform apply`, remote saved plans are for workspaces without a linked VCS repository.\n\n    Saved plans require at least Terraform CLI v1.6.0.\n\nTo supplement these remote operations, you can also use the optional [Terraform Enterprise Provider][tfe-provider], which interacts with the HCP Terraform-supported resources. This provider is useful for editing variables and workspace settings through the Terraform CLI.\n\n## Configuration\n\nTo enable the CLI-driven workflow, you must:\n\n1. 
Run `terraform login` to authenticate with HCP Terraform. Alternatively, you can manually configure credentials in the CLI config file or through environment variables. Refer to [CLI Configuration](\/terraform\/cli\/config\/config-file#environment-variable-credentials) for details.\n\n1. Add the `cloud` block to your Terraform configuration. You can define its arguments directly in your configuration file or supply them through environment variables, which can be useful for [non-interactive workflows](#non-interactive-workflows). Refer to [Using HCP Terraform](\/terraform\/cli\/cloud) for configuration details.\n\n   The following example shows how to map CLI workspaces to HCP Terraform workspaces with a specific tag.\n\n   ```\n   terraform {\n     cloud {\n       organization = \"my-org\"\n       workspaces {\n         tags = [\"networking\"]\n       }\n     }\n   }\n   ```\n\n   -> **Note:** The `cloud` block is available in Terraform v1.1 and later. Previous versions can use the [`remote` backend](\/terraform\/language\/settings\/backends\/remote) to configure the CLI workflow and migrate state.\n\n1. Run `terraform init`.\n\n   ```\n   $ terraform init\n\n   Initializing HCP Terraform...\n\n   Initializing provider plugins...\n   - Reusing previous version of hashicorp\/random from the dependency lock file\n   - Using previously-installed hashicorp\/random v3.0.1\n\n   HCP Terraform has been successfully initialized!\n\n   You may now begin working with HCP Terraform. Try running \"terraform plan\"\n   to see any changes that are required for your infrastructure.\n\n   If you ever set or change modules or Terraform Settings,\n   run \"terraform init\" again to reinitialize your working directory.\n   ```\n\n### Implicit Workspace Creation\n\nIf you configure the `cloud` block to use a workspace that doesn't yet exist in your organization, HCP Terraform will create a new workspace with that name when you run `terraform init`. 
The output of `terraform init` will inform you when this happens.\n\nAutomatically created workspaces might not be immediately ready to use, so use HCP Terraform's UI to check a workspace's settings and data before performing any runs. In particular, note that:\n\n- No Terraform variables or environment variables are created by default, unless your organization has configured one or more [global variable sets](\/terraform\/cloud-docs\/workspaces\/variables#scope). HCP Terraform will use `*.auto.tfvars` files if they are present, but you will usually still need to set some workspace-specific variables.\n- The execution mode defaults to \"Remote,\" so that runs occur within HCP Terraform's infrastructure instead of on your workstation.\n- New workspaces are not automatically connected to a VCS repository and do not have a working directory specified.\n- A new workspace's Terraform version defaults to the most recent release of Terraform at the time the workspace was created.\n\n### Implicit Project Creation\n\nIf you configure the [`workspaces` block](\/terraform\/cli\/cloud\/settings#workspaces) to use a [project](\/terraform\/cli\/cloud\/settings#project) that does not yet exist in your organization, HCP Terraform will attempt to create a new project with that name when you run `terraform init` and notify you in the command output.\n\nIf you specify both the `project` argument and [`TF_CLOUD_PROJECT`](\/terraform\/cli\/cloud\/settings#tf_cloud_project) environment variable, the `project` argument takes precedence.\n\n## Variables in CLI-Driven Runs\n\nRemote runs in HCP Terraform use:\n\n- Run-specific variables set through the command line or in your local environment. 
Terraform can use shell environment variables prefixed with `TF_VAR_` as input variables for the run, but you must still set all required environment variables, like provider credentials, inside the workspace.\n- Workspace-specific Terraform and environment variables set in the workspace.\n- Any variable sets applied globally, on the project containing the workspace, or on the workspace itself.\n- Terraform variables from any `*.auto.tfvars` files included in the configuration.\n\nRefer to [Variables](\/terraform\/cloud-docs\/workspaces\/variables) for more details about variable types, variable scopes, variable precedence, and how to set run-specific variables through the command line.\n\n## Remote Working Directories\n\nIf you manage your Terraform configurations in self-contained repositories, the remote working directory always has the same content as the local working directory.\n\nIf you use a combined repository and [specify a working directory on workspaces](\/terraform\/cloud-docs\/workspaces\/settings#terraform-working-directory), you can run Terraform from either the real working directory or from the root of the combined configuration directory. In both cases, Terraform will upload the entire combined configuration directory.\n\n## Excluding Files from Upload\n\n-> **Version note:** `.terraformignore` support was added in Terraform 0.12.11.\n\nCLI-driven runs upload an archive of your configuration directory\nto HCP Terraform. If the directory contains files you want to exclude from upload,\nyou can do so by defining a [`.terraformignore` file in your configuration directory](\/terraform\/cli\/cloud\/settings).\n\n## Remote Speculative Plans\n\nYou can run speculative plans in any workspace where you have [permission to queue plans](\/terraform\/cloud-docs\/users-teams-organizations\/permissions). 
Speculative plans use the configuration code from the local working directory, but will use variable values from the specified workspace.\n\nTo run a [speculative plan][] on your configuration, use the `terraform plan` command. The plan will run in HCP Terraform, and the logs will stream back to the command line along with a URL to view the plan in the HCP Terraform UI.\n\n```\n$ terraform plan\n\nRunning plan in HCP Terraform. Output will stream here. Pressing Ctrl-C\nwill stop streaming the logs, but will not stop the plan running remotely.\n\nPreparing the remote plan...\n\nTo view this run in a browser, visit:\nhttps:\/\/app.terraform.io\/app\/hashicorp-learn\/docs-workspace\/runs\/run-cfh2trDbvMU2Rkf1\n\nWaiting for the plan to start...\n\n[...]\n\nPlan: 1 to add, 0 to change, 0 to destroy.\n\nChanges to Outputs:\n  + pet_name = (known after apply)\n```\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n\n## Remote Applies\n\nIn workspaces that are not connected to a VCS repository, users with [permission to apply runs](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-workspace-permissions) can use the CLI to trigger remote applies. Remote applies use the configuration code from the local working directory, but use the variable values from the specified workspace.\n\n~> **Note:** You cannot run remote applies in workspaces that are linked to a VCS repository, since the repository serves as the workspace\u2019s source of truth. To apply changes in a VCS-linked workspace, merge your changes to the designated branch.\n\nWhen you are ready to apply configuration changes, use the `terraform apply` command. HCP Terraform will plan your changes, and the command line will prompt you for approval before applying them.\n\n```\n$ terraform apply\n\nRunning apply in HCP Terraform. Output will stream here. Pressing Ctrl-C\nwill cancel the remote apply if it's still pending. 
If the apply started it\nwill stop streaming the logs, but will not stop the apply running remotely.\n\nPreparing the remote apply...\n\nTo view this run in a browser, visit:\nhttps:\/\/app.terraform.io\/app\/hashicorp-learn\/docs-workspace\/runs\/run-Rcc12TkNW1PDa7GH\n\nWaiting for the plan to start...\n\n[...]\n\nPlan: 1 to add, 0 to change, 0 to destroy.\n\nChanges to Outputs:\n  + pet_name = (known after apply)\n\nDo you want to perform these actions in workspace \"docs-workspace\"?\n  Terraform will perform the actions described above.\n  Only 'yes' will be accepted to approve.\n\n  Enter a value: yes\n\n[...]\n\nApply complete! Resources: 1 added, 0 changed, 0 destroyed.\n```\n\n### Non-Interactive Workflows\n\n> **Hands On:** Try the [Deploy Infrastructure with HCP Terraform and CircleCI](\/terraform\/tutorials\/automation\/circle-ci) tutorial.\n\nExternal systems cannot run the traditional apply workflow because Terraform requires console input from the user to approve plans. We recommend using the [API-driven Run Workflow](\/terraform\/cloud-docs\/run\/api) for non-interactive workflows when possible.\n\nIf you prefer to use the CLI in a non-interactive environment, we recommend first running a [speculative plan](\/terraform\/cloud-docs\/run\/remote-operations#speculative-plans) to preview the changes Terraform will make to your infrastructure. Then, use one of the following approaches with the `-auto-approve` flag based on the [execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) of your workspace. 
The [`-auto-approve`](\/terraform\/cli\/commands\/apply#auto-approve) flag skips prompting you to approve the plan.\n\n- **Local Execution:**  Save the approved speculative plan and then run `terraform apply -auto-approve` with the saved plan.\n- **Remote Execution:** HCP Terraform does not support uploading saved plans for remote execution, so we recommend running `terraform apply -auto-approve` immediately after approving the speculative plan to prevent the plan from becoming stale.\n\n  !> **Warning:** Remote execution with non-interactive workflows requires auto-approved deployments. Minimize the risk of unpredictable infrastructure changes and configuration drift by making sure that no one can change your infrastructure outside of your automated build pipeline.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Remote Saved Plans\n\n-> **Version note:** Saved plans require at least Terraform CLI v1.6.0.\n\nIn workspaces that support `terraform apply`, you also have the option of performing the plan and apply as separate steps, using the standard variations of the relevant Terraform commands:\n\n- `terraform plan -out <FILE>` performs and saves a plan.\n- `terraform apply <FILE>` applies a previously saved plan.\n- `terraform show <FILE>` (and `terraform show -json <FILE>`) inspect a plan you previously saved.\n\nSaved plan runs are halfway between [speculative plans](#remote-speculative-plans) and standard [plan and apply runs](#remote-applies). They allow you to:\n\n- Perform cheap exploratory plans while retaining the option of applying a specific plan you are satisfied with.\n- Perform other tasks in your terminal between the plan and apply stages.\n- Perform the plan and apply operations on separate machines (as is common in continuous integration workflows).\n\nSaved plans become _stale_ once the state Terraform planned them against is no longer valid (usually due to someone applying a different run). 
In HCP Terraform, stale saved plan runs are automatically detected and discarded. When examining a remote saved plan, the `terraform show` command (without the `-json` option) informs you if a plan has been discarded or is otherwise unable to be applied.\n\n### File Contents and Permissions\n\nYou can only apply remote saved plans in the same remote HCP Terraform workspace that performed the plan. Additionally, you cannot apply locally executed saved plans in a remote workspace.\n\nIn order to abide by HCP Terraform's permissions model, remote saved plans do not use the same local file format as locally executed saved plans. Instead, remote saved plans are a thin reference to a remote run, and the Terraform CLI relies on authenticated network requests to inspect and apply remote plans. This helps avoid the accidental exposure of credentials or other sensitive information.\n\nThe `terraform show -json` command requires [workspace admin permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#workspace-admins) to inspect a remote saved plan; this is because the [machine-readable JSON plan format](\/terraform\/internals\/json-format) contains unredacted sensitive information (alongside redaction hints for use by systems that consume the format). The human-readable version of `terraform show` only requires the read runs permission, because it uses pre-redacted information.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Policy Enforcement\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nPolicies are rules that HCP Terraform enforces on Terraform runs. You can use two policy-as-code frameworks to define fine-grained, logic-based policies: Sentinel and Open Policy Agent (OPA).\n\nIf the specified workspace uses policies, HCP Terraform runs those policies against all speculative plans and remote applies in that workspace. 
Failed policies can pause or prevent an apply, depending on the enforcement level. Refer to [Policy Enforcement](\/terraform\/cloud-docs\/policy-enforcement) for details.\n\nFor Sentinel, the Terraform CLI prints policy results for CLI-driven runs. CLI support for policy results is not available for OPA.\n\nThe following example shows Sentinel policy output in the terminal.\n\n```\n$ terraform apply\n\n[...]\n\nPlan: 1 to add, 0 to change, 1 to destroy.\n\n------------------------------------------------------------------------\n\nOrganization policy check:\n\nSentinel Result: false\n\nSentinel evaluated to false because one or more Sentinel policies evaluated\nto false. This false was not due to an undefined value or runtime error.\n\n1 policies evaluated.\n## Policy 1: my-policy.sentinel (soft-mandatory)\n\nResult: false\n\nFALSE - my-policy.sentinel:1:1 - Rule \"main\"\n\nDo you want to override the soft failed policy check?\n  Only 'override' will be accepted to override.\n\n  Enter a value: override\n```\n\n## Options for Plans and Applies\n\n[Run Modes and Options](\/terraform\/cloud-docs\/run\/modes-and-options) contains more details about the various options available for plans and applies when you use the CLI-driven workflow.\n\n## Networking\/Connection Issues\n\nSometimes during a CLI-driven run, errors relating to network connectivity issues arise. Examples of these kinds of errors include:\n\n* `Client.Timeout exceeded while awaiting headers`\n* `context deadline exceeded`\n* `TLS handshake timeout`\n\nSometimes there are network problems beyond our control. If you have network errors, verify your network connection is operational. 
Then, check the following common configuration settings:\n\n* Determine if any firewall software on your system blocks the `terraform` command and explicitly approve it.\n* Verify that you have a valid DNS server IP address.\n* Remove any expired TLS certificates for your system.","site":"terraform"}
{"questions":"terraform HCP Terraform is designed as an execution platform for Terraform and most of its features are based around its ability to perform Terraform runs in a fleet of disposable worker VMs This page describes some features of the run environment for Terraform runs managed by HCP Terraform page title HCP Terraform s Run Environment Runs HCP Terraform Learn about worker virtual machines network access concurrency for run HCP Terraform s Run Environment queueing state access authentication and environment variables","answers":"---\npage_title: HCP Terraform's Run Environment - Runs - HCP Terraform\ndescription: >-\n  Learn about worker virtual machines, network access, concurrency for run\n  queueing, state access authentication, and environment variables.\n---\n\n# HCP Terraform's Run Environment\n\nHCP Terraform is designed as an execution platform for Terraform, and most of its features are based around its ability to perform Terraform runs in a fleet of disposable worker VMs. This page describes some features of the run environment for Terraform runs managed by HCP Terraform.\n\n## The Terraform Worker VMs\n\nHCP Terraform performs Terraform runs in single-use Linux virtual machines, running on an x86_64 architecture.\n\nThe operating system and other software installed on the worker VMs is an internal implementation detail of HCP Terraform. It is not part of a stable public interface, and is subject to change at any time.\n\nBefore Terraform is executed, the worker VM's shell environment is populated with environment variables from the workspace, the selected version of Terraform is installed, and the run's Terraform configuration version is made available.\n\nChanges made to the worker during a run are not persisted to subsequent runs, since the VM is destroyed after the run is completed. 
Notably, this requires some additional care when installing additional software with a `local-exec` provisioner; see [Installing Additional Tools](\/terraform\/cloud-docs\/run\/install-software#installing-additional-tools) for more details.\n\n> **Hands-on:** Try the [Upgrade Terraform Version in HCP Terraform](\/terraform\/tutorials\/cloud\/cloud-versions) tutorial.\n\n## Network Access to VCS and Infrastructure Providers\n\nIn order to perform Terraform runs, HCP Terraform needs network access to all of the resources being managed by Terraform.\n\nIf you are using the SaaS version of HCP Terraform, this means your VCS provider and any private infrastructure providers you manage with Terraform (including VMware vSphere, OpenStack, other private clouds, and more) _must be internet accessible._\n\nTerraform Enterprise instances must have network connectivity to any connected VCS providers or managed infrastructure providers.\n\n## Concurrency and Run Queuing\n\nHCP Terraform uses multiple concurrent worker VMs, which take jobs from a global queue of runs that are ready for processing. (This includes confirmed applies, and plans that have just become the current run on their workspace.)\n\nIf the global queue has more runs than the workers can handle at once, some of them must wait until a worker becomes available. When the queue is backed up, HCP Terraform gives different priorities to different kinds of runs:\n\n- Applies that will make changes to infrastructure have the highest priority.\n- Normal plans have the next highest priority.\n- Speculative plans have the lowest priority.\n\nHCP Terraform can also delay some runs in order to make performance more consistent across organizations. 
If an organization requests a large number of runs at once, HCP Terraform queues some of them immediately, and delays the rest until some of the initial batch have finished; this allows every organization to continue performing runs even during periods of especially heavy load.\n\n## State Access and Authentication\n\n[CLI config file]: \/terraform\/cli\/config\/config-file\n\n[cloud]: \/terraform\/cli\/cloud\n\nHCP Terraform stores state for its workspaces.\n\nWhen you trigger runs via the [CLI workflow](\/terraform\/cloud-docs\/run\/cli), Terraform reads from and writes to HCP Terraform's stored state. HCP Terraform uses [the `cloud` block][cloud] for runs, overriding any existing [backend](\/terraform\/language\/settings\/backends\/configuration) in the configuration.\n\n-> **Note:** The `cloud` block is available in Terraform v1.1 and later. Previous versions can use the [`remote` backend](\/terraform\/language\/settings\/backends\/remote) to configure the CLI workflow and migrate state.\n\n### Autogenerated API Token\n\nInstead of using existing user credentials, HCP Terraform generates a unique per-run API token and provides it to the Terraform worker in the [CLI config file][]. When you run Terraform on the command line against a workspace configured for remote operations, you must have [the `cloud` block][cloud] in your configuration and have a user or team API token with the appropriate permissions specified in your [CLI config file][]. However, the run itself occurs within one of HCP Terraform's worker VMs and uses the per-run token for state access.\n\nThe per-run token can read and write state data for the workspace associated with the run, can download modules from the [private registry](\/terraform\/cloud-docs\/registry), and may be granted access to read state from other workspaces in the organization. (Refer to [cross-workspace state access](\/terraform\/cloud-docs\/workspaces\/state#accessing-state-from-other-workspaces) for more details.) 
Per-run tokens cannot make any other calls to the HCP Terraform API and are not considered to be user, team, or organization tokens. They become invalid after the run is completed.\n\n### User Token\n\nHCP Terraform uses the user token to access a workspace's state when you:\n\n- Run Terraform on the command line against a workspace that is _not_ configured for remote operations. The user must have permission to read and write state versions for the workspace.\n\n- Run Terraform's state manipulation commands against an HCP Terraform workspace. The user must have permission to read and write state versions for the workspace.\n\nRefer to [Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#workspace-permissions) for more details about workspace permissions.\n\n### Provider Authentication\n\nRuns in HCP Terraform typically require some form of credentials to authenticate with infrastructure providers. Credentials can be provided statically through Environment or Terraform [variables](\/terraform\/cloud-docs\/workspaces\/variables), or can be generated on a per-run basis through [dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials) for certain providers. 
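\n\nFor instance, static credentials for the AWS provider are typically supplied as sensitive Environment variables on the workspace; the names below are the AWS provider's standard variables, shown purely as an illustration:\n\n```\nAWS_ACCESS_KEY_ID     (sensitive environment variable)\nAWS_SECRET_ACCESS_KEY (sensitive environment variable)\n```\n\n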
Below are pros and cons to each approach.\n\n#### Static Credentials\n\n##### Pros\n\n- Simple to set up\n- Broad support across providers\n\n##### Cons\n\n- Requires regular manual rotation for enhanced security posture\n- Large blast radius if a credential is exposed and needs to be revoked\n\n#### Dynamic Credentials\n\n##### Pros\n\n- Eliminates the need for manual rotation of credentials on HCP Terraform\n- HCP Terraform metadata - including the run's project, workspace, and run-phase - is encoded into every token to allow for granular permission scoping on the target cloud platform\n- Credentials are short-lived, which reduces the blast radius of potential credential exposure\n\n##### Cons\n\n- More complicated initial setup compared to using static credentials\n- Not supported for all providers\n\nThe full list of supported providers and setup instructions can be found in the [dynamic credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials) documentation.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Environment Variables\n\nHCP Terraform automatically injects the following environment variables for each run:\n\n| Variable Name | Description | Example |\n| --- | --- | --- |\n| `TFC_RUN_ID` | A unique identifier for this run. | `run-CKuwsxMGgMd4W7Ui` |\n| `TFC_WORKSPACE_NAME` | The name of the workspace used in this run. | `prod-load-balancers` |\n| `TFC_WORKSPACE_SLUG` | The full slug of the configuration used in this run. This consists of the organization name and workspace name, joined with a slash. | `acme-corp\/prod-load-balancers` |\n| `TFC_CONFIGURATION_VERSION_GIT_BRANCH` | The name of the branch that the associated Terraform configuration version was ingressed from. | `main` |\n| `TFC_CONFIGURATION_VERSION_GIT_COMMIT_SHA` | The full commit hash of the commit that the associated Terraform configuration version was ingressed from. | `abcd1234...` |\n| `TFC_CONFIGURATION_VERSION_GIT_TAG` | The name of the tag that the associated Terraform configuration version was ingressed from. | `v0.1.0` |\n| `TFC_PROJECT_NAME` | The name of the project used in this run. | `proj-name` |\n\nThey are also available as Terraform input variables by defining a variable with the same name. 
For example:\n\n```terraform\nvariable \"TFC_RUN_ID\" {}\n```","site":"terraform","answers_cleaned":"
CKuwsxMGgMd4W7Ui                TFC WORKSPACE NAME                          The name of the workspace used in this run                                                                                              prod load balancers                 TFC WORKSPACE SLUG                          The full slug of the configuration used in this run  This consists of the organization name and workspace name  joined with a slash     acme corp prod load balancers       TFC CONFIGURATION VERSION GIT BRANCH        The name of the branch that the associated Terraform configuration version was ingressed from                                           main                                TFC CONFIGURATION VERSION GIT COMMIT SHA    The full commit hash of the commit that the associated Terraform configuration version was ingressed from                               abcd1234                            TFC CONFIGURATION VERSION GIT TAG           The name of the tag that the associated Terraform configuration version was ingressed from                                              v0 1 0                              TFC PROJECT NAME                            The name of the project used in this run                                                                                                proj name                         They are also available as Terraform input variables by defining a variable with the same name  For example      terraform variable  TFC RUN ID        "}
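The run-environment document above notes that the injected `TFC_*` environment variables are also available as Terraform input variables when you declare variables with the same names. A minimal sketch expanding the `TFC_RUN_ID` example (the empty `default` values and the `common_tags` local are illustrative assumptions, not part of the docs):

```terraform
# Variables named after the environment variables HCP Terraform injects
# into each run; HCP Terraform populates them automatically.
variable "TFC_RUN_ID" {
  type    = string
  default = "" # assumption: empty default keeps plans working outside HCP Terraform
}

variable "TFC_WORKSPACE_NAME" {
  type    = string
  default = ""
}

# Hypothetical use: record which run and workspace produced a resource.
locals {
  common_tags = {
    tfc_run_id    = var.TFC_RUN_ID
    tfc_workspace = var.TFC_WORKSPACE_NAME
  }
}
```

Because an input variable with no default would prompt or fail in a local run, the empty defaults let the same configuration plan both inside and outside HCP Terraform.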
{"questions":"terraform Each plan and apply run passes through several stages of action pending plan cost estimation policy check apply and completion HCP Terraform shows a run s progress through each stage as a run state policy check apply and complete Learn what happens during each stage of a run pending plan cost estimation page title Run States and Stages Runs HCP Terraform Run States and Stages","answers":"---\npage_title: Run States and Stages - Runs - HCP Terraform\ndescription: >-\n  Learn what happens during each stage of a run: pending, plan, cost estimation,\n  policy check, apply, and complete.\n---\n\n# Run States and Stages\n\nEach plan and apply run passes through several stages of action: pending, plan, cost estimation, policy check, apply, and completion. HCP Terraform shows a run's progress through each stage as a run state.\n\nIn the list of workspaces on HCP Terraform's main page, each workspace shows the state of the run it's currently processing. If no run is in progress, HCP Terraform displays the state of the most recently completed run.\n\n## The Pending Stage\n\n_States in this stage:_\n\n- **Pending:** HCP Terraform hasn't started action on a run yet. HCP Terraform processes each workspace's runs in the order they were queued, and a run remains pending until every run before it has completed.\n\n_Leaving this stage:_\n\n- If the user discards the run before it starts, the run does not continue (**Discarded** state).\n- If the run is first in the queue, it proceeds automatically to the plan stage (**Planning** state).\n\n## The Fetching Stage\n\nHCP Terraform may need to fetch the configuration from VCS prior to starting the plan. 
HCP Terraform automatically archives configuration versions created through VCS when all runs are complete and then re-fetches the files for subsequent runs.\n\n_States in this stage:_\n\n- **Fetching:** If HCP Terraform has not yet fetched the configuration from VCS, the run will go into this state until the configuration is available.\n\n_Leaving this stage:_\n\n- If HCP Terraform encounters an error when fetching the configuration from VCS, the run does not continue (**Plan Errored** state).\n- If Terraform successfully fetches the configuration, the run moves to the next stage.\n\n## The Pre-Plan Stage\n\nThe pre-plan phase only occurs if the workspace has enabled [run tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks) that are configured to begin before Terraform creates the plan. HCP Terraform sends information about the run to the configured external system and waits for a `passed` or `failed` response to determine whether the run can continue. The information sent to the external system includes the configuration version of the run.\n\nAll runs can enter this phase, including [speculative plans](\/terraform\/cloud-docs\/run\/remote-operations#speculative-plans).\n\n_States in this stage:_\n\n- **Pre-plan running:** HCP Terraform is waiting for a response from the configured external system(s).\n  - External systems must respond initially with a `200 OK` acknowledging the request is in progress. After that, they have 10 minutes to return a status of `passed`, `running`, or `failed`. If the timeout expires, HCP Terraform assumes that the run task is in the `failed` status.\n\n_Leaving this stage:_\n\n- If any mandatory tasks failed, the run skips to completion (**Plan Errored** state).\n- If any advisory tasks failed, the run proceeds to the **Planning** state, with a visible warning regarding the failed task.\n- If a single run has a combination of mandatory and advisory tasks, Terraform takes the most restrictive action. 
For example, the run fails if there are two advisory tasks that succeed and one mandatory task that fails.\n- If a user canceled the run, the run ends in the **Canceled** state.\n\n## The Plan Stage\n\nA run goes through different steps during the plan stage depending on whether or not HCP Terraform needs to fetch the configuration from VCS. HCP Terraform automatically archives configuration versions created through VCS when all runs are complete and then re-fetches the files for subsequent runs.\n\n_States in this stage:_\n\n- **Planning:** HCP Terraform is currently running `terraform plan`.\n- **Needs Confirmation:** `terraform plan` has finished. Runs sometimes pause in this state, depending on the workspace and organization settings.\n\n_Leaving this stage:_\n\n- If the `terraform plan` command failed, the run does not continue (**Plan Errored** state).\n- If a user canceled the plan by pressing the \"Cancel Run\" button, the run does not continue (**Canceled** state).\n- If the plan succeeded with no changes and neither cost estimation nor Sentinel policy checks will be done, HCP Terraform considers the run complete (**Planned and Finished** state).\n- If the plan succeeded and requires changes:\n  - If cost estimation is enabled, the run proceeds automatically to the cost estimation stage.\n  - If cost estimation is disabled and [Sentinel policies](\/terraform\/enterprise\/policy-enforcement\/sentinel) are enabled, the run proceeds automatically to the policy check stage.\n  - If there are no Sentinel policies and the plan can be auto-applied, the run proceeds automatically to the apply stage. Plans can be auto-applied if the auto-apply setting is enabled on the workspace and the plan was queued by a new VCS commit or by a user with permission to apply runs. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n  - If there are no Sentinel policies and HCP Terraform cannot auto-apply the plan, the run pauses in the **Needs Confirmation** state until a user with permission to apply runs takes action. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)) If an authorized user approves the apply, the run proceeds to the apply stage. If an authorized user rejects the apply, the run does not continue (**Discarded** state).\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n-> **Note:** If you want to directly integrate third-party tools and services between your plan and apply stages, see [Run Tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks).\n\n## The Post-Plan Stage\n\nThe post-plan phase only occurs if you configure [run tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks) on a workspace to begin after Terraform successfully completes a plan operation.\nAll runs can enter this phase, including [speculative plans](\/terraform\/cloud-docs\/run\/remote-operations#speculative-plans). During this phase, HCP Terraform sends information about the run to the configured external system and waits for a `passed` or `failed` response to determine whether the run can continue.\n\n-> **Note:** The information sent to the configured external system includes the [JSON output](\/terraform\/internals\/json-format) of the Terraform plan.\n\n_States in this stage:_\n\n- **Post-plan running:** HCP Terraform is waiting for a response from the configured external system(s).\n  - External systems must respond initially with a `200 OK` acknowledging the request is in progress. 
After that, they have 10 minutes to return a status of `passed`, `running`, or `failed`. If the timeout expires, HCP Terraform assumes that the run task is in the `failed` status.\n\n_Leaving this stage:_\n\n- If any mandatory tasks failed, the run skips to completion (**Plan Errored** state).\n- If any advisory tasks failed, the run proceeds to the **Applying** state, with a visible warning regarding the failed task.\n- If a single run has a combination of mandatory and advisory tasks, Terraform takes the most restrictive action. For example, if there are two advisory tasks that succeed and one mandatory task that fails, the run fails. If one mandatory task succeeds and two advisory tasks fail, the run succeeds with a warning.\n- If a user canceled the run, the run ends in the **Canceled** state.\n\n## The OPA Policy Check Stage\n\nThis stage only occurs if you enabled [Open Policy Agent (OPA) policies](\/terraform\/cloud-docs\/policy-enforcement\/opa). It runs after a successful `terraform plan` and before cost estimation. In this stage, HCP Terraform checks whether the plan adheres to the policies in the OPA policy sets for the workspace.\n\n_States in this stage:_\n\n- **Policy Check:** HCP Terraform is checking the plan against the OPA policy sets.\n- **Policy Override:** The policy check finished, but a mandatory policy failed. The run pauses, and Terraform cannot perform an apply unless a user manually overrides the policy check failure. Refer to [Policy Results](\/terraform\/cloud-docs\/policy-enforcement\/policy-results) for details.\n- **Policy Checked:** The policy check succeeded, and Terraform can apply the plan. The run may pause in this state if the workspace is not set up to auto-apply runs.\n\n_Leaving this stage:_\n\nIf any mandatory policies failed, the run pauses in the **Policy Override** state. 
The run completes one of the following workflows:\n\n- The run stops and enters the **Discarded** state when a user with [permission to apply runs](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-workspace-permissions) discards the run.\n- The run proceeds to the **Policy Checked** state when a user with [permission to manage policy overrides](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) overrides the failed policy. The **Policy Checked** state means that no mandatory policies failed or that a user performed a manual override.\n\nOnce the run reaches the **Policy Checked** state, the run completes one of the following workflows:\n\n- The run proceeds to the **Apply** stage if Terraform can automatically apply the plan. An auto-apply requires that the **Auto apply** setting is enabled on the workspace.\n- If Terraform cannot automatically apply the plan, the run pauses in the **Policy Checked** state until a user with permission to apply runs takes action. If the user approves the apply, the run proceeds to the **Apply** stage. If the user rejects the apply, the run stops and enters the **Discarded** state.\n\n## The Cost Estimation Stage\n\nThis stage only occurs if cost estimation is enabled. After a successful `terraform plan`, HCP Terraform uses plan data to estimate costs for each resource found in the plan.\n\n_States in this stage:_\n\n- **Cost Estimating:** HCP Terraform is currently estimating the resources in the plan.\n- **Cost Estimated:** The cost estimate completed.\n\n_Leaving this stage:_\n\n- If cost estimation succeeds or errors, the run moves to the next stage.\n- If there are no policy checks or applies, the run does not continue (**Planned and Finished** state).\n\n## The Sentinel Policy Check Stage\n\nThis stage only occurs if [Sentinel policies](\/terraform\/cloud-docs\/policy-enforcement\/sentinel) are enabled. 
After a successful `terraform plan`, HCP Terraform checks whether the plan obeys policy to determine whether it can be applied.\n\n_States in this stage:_\n\n- **Policy Check:** HCP Terraform is currently checking the plan against the organization's policies.\n- **Policy Override:** The policy check finished, but a soft-mandatory policy failed, so an apply cannot proceed without approval from a user with permission to manage policy overrides for the organization. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)) The run pauses in this state.\n- **Policy Checked:** The policy check succeeded, and Sentinel will allow an apply to proceed. The run sometimes pauses in this state, depending on workspace settings.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n_Leaving this stage:_\n\n- If any hard-mandatory policies failed, the run does not continue (**Plan Errored** state).\n- If any soft-mandatory policies failed, the run pauses in the **Policy Override** state.\n  - If a user with permission to manage policy overrides overrides the failed policy, the run proceeds to the **Policy Checked** state. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n  - If a user with permission to apply runs discards the run, the run does not continue (**Discarded** state). ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n- If the run reaches the **Policy Checked** state (no mandatory policies failed, or soft-mandatory policies were overridden):\n  - If the plan can be auto-applied, the run proceeds automatically to the apply stage. Plans can be auto-applied if the auto-apply setting is enabled on the workspace and the plan was queued by a new VCS commit or by a user with permission to apply runs. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n  - If the plan can't be auto-applied, the run pauses in the **Policy Checked** state until a user with permission to apply runs takes action. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)) The run proceeds to the apply stage if they approve the apply, or does not continue (**Discarded** state) if they reject the apply.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## The Pre-Apply Stage\n\nThe pre-apply phase only occurs if the workspace has [run tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks) configured to begin before Terraform creates the apply. HCP Terraform sends information about the run to the configured external system and waits for a `passed` or `failed` response to determine whether the run can continue. The information sent to the external system includes the configuration version of the run.\n\nOnly confirmed runs can enter this phase.\n\n_States in this stage:_\n\n- **Pre-apply running:** HCP Terraform is waiting for a response from the configured external system(s).\n  - External systems must respond initially with a `200 OK` acknowledging the request is in progress. After that, they have 10 minutes to return a status of `passed`, `running`, or `failed`. If the timeout expires, HCP Terraform assumes that the run task is in the `failed` status.\n\n_Leaving this stage:_\n\n- If any mandatory tasks failed, the run skips to completion.\n- If any advisory tasks failed, the run proceeds to the **Applying** state, with a visible warning regarding the failed task.\n- If a single run has a combination of mandatory and advisory tasks, Terraform takes the most restrictive action. 
For example, the run fails if there are two advisory tasks that succeed and one mandatory task that fails.\n- If a user canceled the run, the run ends in the **Canceled** state.\n\n## The Apply Stage\n\n_States in this stage:_\n\n- **Applying:** HCP Terraform is currently running `terraform apply`.\n\n_Leaving this stage:_\n\nAfter applying, the run proceeds automatically to completion.\n\n- If the apply succeeded, the run ends in the **Applied** state.\n- If the apply failed, the run ends in the **Apply Errored** state.\n- If a user canceled the apply by pressing **Cancel Run**, the run ends in the **Canceled** state.\n\n## The Post-Apply Stage\n\nThe post-apply phase only occurs if you configure [run tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks) on a workspace to begin after Terraform successfully completes an apply operation. During this phase, HCP Terraform sends information about the run to the configured external system and waits for a `passed` or `failed` response. However, unlike other stages in the run task process, a failed outcome does not halt the run since HCP Terraform has already provisioned the infrastructure.\n\n_States in this stage:_\n\n- **Post-apply running:** HCP Terraform is waiting for a response from the configured external system(s).\n  - External systems must respond initially with a `200 OK` acknowledging the request is in progress. After that, they have 10 minutes to return a status of `passed`, `running`, or `failed`. 
If the timeout expires, HCP Terraform assumes that the run task is in the `failed` status.\n\n_Leaving this stage:_\n\n- There are only advisory tasks in this stage.\n- If any advisory tasks failed, the run proceeds to the **Applied** state, with a visible warning regarding the failed task.\n- If a user cancels the run, the run ends in the **Canceled** state.\n\n## Completion\n\nA run is complete if it finishes applying, if any part of the run fails, if there is nothing to do, or if a user chooses not to continue. Once a run completes, the next run in the queue can enter the plan stage.\n\n_States in this stage:_\n\n- **Applied:** The run was successfully applied.\n- **Planned and Finished:** `terraform plan`'s output already matches the current infrastructure state, so `terraform apply` doesn't need to do anything.\n- **Apply Errored:** The `terraform apply` command failed, possibly due to a missing or misconfigured provider or an illegal operation on a provider.\n- **Plan Errored:** The `terraform plan` command failed (usually requiring fixes to variables or code), or a hard-mandatory Sentinel policy failed. 
The run cannot be applied.\n- **Discarded:** A user chose not to continue this run.\n- **Canceled:** A user interrupted the `terraform plan` or `terraform apply` command with the \"Cancel Run\" button.","site":"terraform","answers_cleaned":"    page title  Run States and Stages   Runs   HCP Terraform description       Learn what happens during each stage of a run  pending  plan  cost estimation    policy check  apply  and complete         Run States and Stages  Each plan and apply run passes through several stages of action  pending  plan  cost estimation  policy check  apply  and completion  HCP Terraform shows a run s progress through each stage as a run state   In the list of workspaces on HCP Terraform s main page  each workspace shows the state of the run it s currently processing  If no run is in progress  HCP Terraform displays the state of the most recently completed run      The Pending Stage   States in this stage        Pending    HCP Terraform hasn t started action on a run yet  HCP Terraform processes each workspace s runs in the order they were queued  and a run remains pending until every run before it has completed    Leaving this stage      If the user discards the run before it starts  the run does not continue    Discarded   state     If the run is first in the queue  it proceeds automatically to the plan stage    Planning   state       The Fetching Stage  HCP Terraform may need to fetch the configuration from VCS prior to starting the plan  HCP Terraform automatically archives configuration versions created through VCS when all runs are complete and then re fetches the files for subsequent runs    States in this stage        Fetching    If HCP Terraform has not yet fetched the configuration from VCS  the run will go into this state until the configuration is available    Leaving this stage      If HCP Terraform encounters an error when fetching the configuration from VCS  the run does not continue    Plan Errored   state     If Terraform 
successfully fetches the configuration  the run moves to the next stage      The Pre Plan Stage  The pre plan phase only occurs if there are enabled  run tasks   terraform cloud docs workspaces settings run tasks  in the workspace that are configured to begin before Terraform creates the plan  HCP Terraform sends information about the run to the configured external system and waits for a  passed  or  failed  response to determine whether the run can continue  The information sent to the external system includes the configuration version of the run   All runs can enter this phase  including  speculative plans   terraform cloud docs run remote operations speculative plans     States in this stage        Pre plan running    HCP Terraform is waiting for a response from the configured external system s       External systems must respond initially with a  200 OK  acknowledging the request is in progress  After that  they have 10 minutes to return a status of  passed    running   or  failed   If the timeout expires  HCP Terraform assumes that the run tasks is in the  failed  status    Leaving this stage      If any mandatory tasks failed  the run skips to completion    Plan Errored   state     If any advisory tasks failed  the run proceeds to the   Planning   state  with a visible warning regarding the failed task    If a single run has a combination of mandatory and advisory tasks  Terraform takes the most restrictive action  For example  the run fails if there are two advisory tasks that succeed and one mandatory task that fails    If a user canceled the run  the run ends in the   Canceled   state      The Plan Stage  A run goes through different steps during the plan stage depending on whether or not HCP Terraform needs to fetch the configuration from VCS  HCP Terraform automatically archives configuration versions created through VCS when all runs are complete and then re fetches the files for subsequent runs    States in this stage        Planning    HCP Terraform 
is currently running  terraform plan       Needs Confirmation     terraform plan  has finished  Runs sometimes pause in this state  depending on the workspace and organization settings    Leaving this stage      If the  terraform plan  command failed  the run does not continue    Plan Errored   state     If a user canceled the plan by pressing the  Cancel Run  button  the run does not continue    Canceled   state     If the plan succeeded with no changes and neither cost estimation nor Sentinel policy checks will be done  HCP Terraform considers the run complete    Planned and Finished   state     If the plan succeeded and requires changes      If cost estimation is enabled  the run proceeds automatically to the cost estimation stage      If cost estimation is disabled and  Sentinel policies   terraform enterprise policy enforcement sentinel  are enabled  the run proceeds automatically to the policy check stage      If there are no Sentinel policies and the plan can be auto applied  the run proceeds automatically to the apply stage  Plans can be auto applied if the auto apply setting is enabled on the workspace and the plan was queued by a new VCS commit or by a user with permission to apply runs    More about permissions    terraform cloud docs users teams organizations permissions       If there are no Sentinel policies and HCP Terraform cannot auto apply the plan  the run pauses in the   Needs Confirmation   state until a user with permission to apply runs takes action    More about permissions    terraform cloud docs users teams organizations permissions   If an authorized user approves the apply  the run proceeds to the apply stage  If an authorized user rejects the apply  the run does not continue    Discarded   state     permissions citation    intentionally unused   keep for maintainers  Note  if you want to directly integrate third party tools and services between your plan and apply stages  see  Run Tasks   terraform cloud docs workspaces settings run 
tasks       The Post Plan Stage  The post plan phase only occurs if you configure  run tasks   terraform cloud docs workspaces settings run tasks  on a workspace to begin after Terraform successfully completes a plan operation  All runs can enter this phase  including  speculative plans   terraform cloud docs run remote operations speculative plans   During this phase  HCP Terraform sends information about the run to the configured external system and waits for a  passed  or  failed  response to determine whether the run can continue        Note    The information sent to the configured external system includes the  JSON output   terraform internals json format  of the Terraform plan    States in this stage        Post plan running    HCP Terraform is waiting for a response from the configured external system s       External systems must respond initially with a  200 OK  acknowledging the request is in progress  After that  they have 10 minutes to return a status of  passed    running   or  failed   or the timeout will expire and the task will be assumed to be in the  failed  status    Leaving this stage      If any mandatory tasks failed  the run skips to completion    Plan Errored   state     If any advisory tasks failed  the run proceeds to the   Applying   state  with a visible warning regarding the failed task    If a single run has a combination of mandatory and advisory tasks  Terraform takes the most restrictive action  For example  if there are two advisory tasks that succeed and one mandatory task that failed  the run fails  If one mandatory task succeeds and two advisory tasks fail  the run succeeds with a warning    If a user canceled the run  the run ends in the   Canceled   state      The OPA Policy Check Stage  This stage only occurs if you enabled  Open Policy Agent  OPA  policies   terraform cloud docs policy enforcement opa  and runs after a successful  terraform plan  and before Cost Estimation  In this stage  HCP Terraform checks whether the 
plan adheres to the policies in the OPA policy sets for the workspace    States in this stage        Policy Check    HCP Terraform is checking the plan against the OPA policy sets      Policy Override    The policy check finished  but a mandatory policy failed  The run pauses  and Terraform cannot perform an apply unless a user manually overrides the policy check failure  Refer to  Policy Results   terraform cloud docs policy enforcement policy results  for details      Policy Checked    The policy check succeeded  and Terraform can apply the plan  The run may pause in this state if the workspace is not set up to auto apply runs    Leaving this stage    If any mandatory policies failed  the run pauses in the   Policy Override   state  The run completes one of the following workflows     The run stops and enters the   Discarded   state when a user with  permission to apply runs   terraform cloud docs users teams organizations permissions general workspace permissions manage policy overrides  discards the run    The run proceeds to the   Policy Checked   state when a user with  permission to manage policy overrides   terraform cloud docs users teams organizations permissions  overrides the failed policy  The   Policy Checked   state means that no mandatory policies failed or that a user performed a manual override   Once the run reaches the   Policy Checked   state  the run completes one of the following workflows     The run proceeds to the   Apply   stage if Terraform can automatically apply the plan  An auto apply requires that the   Auto apply   setting is enabled on the workspace    If Terraform cannot automatically apply the plan  the run pauses in the   Policy Checked   state until a user with permission to apply runs takes action  If the user approves the apply  the run proceeds to the   Apply   stage  If the user rejects the apply  the run stops and enters the   Discarded   state      The Cost Estimation Stage  This stage only occurs if cost estimation is 
 enabled  After a successful  terraform plan   HCP Terraform uses plan data to estimate costs for each resource found in the plan    States in this stage        Cost Estimating    HCP Terraform is currently estimating the resources in the plan      Cost Estimated    The cost estimate completed    Leaving this stage      If cost estimation succeeded or errored  the run moves to the next stage    If there are no policy checks or applies  the run does not continue    Planned and Finished   state       The Sentinel Policy Check Stage  This stage only occurs if  Sentinel policies   terraform cloud docs policy enforcement sentinel  are enabled  After a successful  terraform plan   HCP Terraform checks whether the plan obeys policy to determine whether it can be applied    States in this stage        Policy Check    HCP Terraform is currently checking the plan against the organization s policies      Policy Override    The policy check finished  but a soft mandatory policy failed  so an apply cannot proceed without approval from a user with permission to manage policy overrides for the organization    More about permissions    terraform cloud docs users teams organizations permissions   The run pauses in this state      Policy Checked    The policy check succeeded  and Sentinel will allow an apply to proceed  The run sometimes pauses in this state  depending on workspace settings    permissions citation    intentionally unused   keep for maintainers   Leaving this stage      If any hard mandatory policies failed  the run does not continue    Plan Errored   state     If any soft mandatory policies failed  the run pauses in the   Policy Override   state      If a user with permission to manage policy overrides  overrides the failed policy  the run proceeds to the   Policy Checked   state    More about permissions    terraform cloud docs users teams organizations permissions       If a user with permission to apply runs discards the run  the run does not continue    Discarded  
 state     More about permissions    terraform cloud docs users teams organizations permissions     If the run reaches the   Policy Checked   state  no mandatory policies failed  or soft mandatory policies were overridden       If the plan can be auto applied  the run proceeds automatically to the apply stage  Plans can be auto applied if the auto apply setting is enabled on the workspace and the plan was queued by a new VCS commit or by a user with permission to apply runs    More about permissions    terraform cloud docs users teams organizations permissions       If the plan can t be auto applied  the run pauses in the   Policy Checked   state until a user with permission to apply runs takes action    More about permissions    terraform cloud docs users teams organizations permissions   The run proceeds to the apply stage if they approve the apply  or does not continue    Discarded   state  if they reject the apply    permissions citation    intentionally unused   keep for maintainers     The Pre Apply Stage  The pre apply phase only occurs if the workspace has  run tasks   terraform cloud docs workspaces settings run tasks  configured to begin before Terraform creates the apply  HCP Terraform sends information about the run to the configured external system and waits for a  passed  or  failed  response to determine whether the run can continue  The information sent to the external system includes the configuration version of the run   Only confirmed runs can enter this phase    States in this stage        Pre apply running    HCP Terraform is waiting for a response from the configured external system s       External systems must respond initially with a  200 OK  acknowledging the request is in progress  After that  they have 10 minutes to return a status of  passed    running   or  failed   If the timeout expires  HCP Terraform assumes that the run task is in the  failed  status    Leaving this stage      If any mandatory tasks failed  the run skips to 
 completion    If any advisory tasks failed  the run proceeds to the   Applying   state  with a visible warning regarding the failed task    If a single run has a combination of mandatory and advisory tasks  Terraform takes the most restrictive action  For example  the run fails if there are two advisory tasks that succeed and one mandatory task that fails    If a user canceled the run  the run ends in the   Canceled   state      The Apply Stage   States in this stage        Applying    HCP Terraform is currently running  terraform apply     Leaving this stage    After applying  the run proceeds automatically to completion     If the apply succeeded  the run ends in the   Applied   state    If the apply failed  the run ends in the   Apply Errored   state    If a user canceled the apply by pressing   Cancel Run    the run ends in the   Canceled   state      The Post Apply Stage  The post apply phase only occurs if you configure  run tasks   terraform cloud docs workspaces settings run tasks  on a workspace to begin after Terraform successfully completes an apply operation  During this phase  HCP Terraform sends information about the run to the configured external system and waits for a  passed  or  failed  response  However  unlike other stages in the run task process  a failed outcome does not halt the run since HCP Terraform has already provisioned the infrastructure    States in this stage        Post apply running    HCP Terraform is waiting for a response from the configured external system s     External systems must respond initially with a  200 OK  acknowledging the request is in progress  After that  they have 10 minutes to return a status of  passed    running   or  failed   If the timeout expires  HCP Terraform assumes that the run task is in the  failed  status    Leaving this stage      There are only advisory tasks in this stage    If any advisory tasks failed  the run proceeds to the   Applied   state  with a visible warning regarding the failed task  
  If a user cancels the run  the run ends in the   Canceled   state      Completion  A run is complete if it finishes applying  if any part of the run fails  if there is nothing to do  or if a user chooses not to continue  Once a run completes  the next run in the queue can enter the plan stage    States in this stage        Applied    The run was successfully applied      Planned and Finished     terraform plan  s output already matches the current infrastructure state  so  terraform apply  doesn t need to do anything      Apply Errored    The  terraform apply  command failed  possibly due to a missing or misconfigured provider or an illegal operation on a provider      Plan Errored    The  terraform plan  command failed  usually requiring fixes to variables or code   or a hard mandatory Sentinel policy failed  The run cannot be applied      Discarded    A user chose not to continue this run      Canceled    A user interrupted the  terraform plan  or  terraform apply  command with the  Cancel Run  button "}
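The "most restrictive action" rule for runs with a mix of mandatory and advisory run tasks can be sketched as a small decision function. This is only an illustrative model of the behavior documented above, not HCP Terraform's actual implementation; the names `TaskResult` and `run_outcome` are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    name: str
    enforcement: str  # "mandatory" or "advisory"
    status: str       # "passed" or "failed"; a task still "running" after the
                      # 10-minute timeout is treated as "failed"

def run_outcome(results):
    """Apply the most restrictive action across all run task results."""
    failed = [r for r in results if r.status != "passed"]
    if any(r.enforcement == "mandatory" for r in failed):
        return "failed"  # any failed mandatory task: the run skips to completion
    if failed:
        return "passed-with-warning"  # advisory failures only: run proceeds with a warning
    return "passed"

# The two examples from the text: two advisory passes plus one mandatory
# failure fails the run; one mandatory pass plus two advisory failures
# succeeds with a warning.
print(run_outcome([TaskResult("a", "advisory", "passed"),
                   TaskResult("b", "advisory", "passed"),
                   TaskResult("c", "mandatory", "failed")]))  # failed
print(run_outcome([TaskResult("a", "mandatory", "passed"),
                   TaskResult("b", "advisory", "failed"),
                   TaskResult("c", "advisory", "failed")]))   # passed-with-warning
```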
{"questions":"terraform Run Terraform remotely through the UI API or CLI Learn how HCP Terraform page title Remote Operations HCP Terraform Hands on Try the Get Started HCP Terraform terraform tutorials cloud get started utm source WEBSITE utm medium WEB IO utm offer ARTICLE PAGE utm content DOCS tutorials Remote Operations manages runs","answers":"---\npage_title: Remote Operations - HCP Terraform\ndescription: >-\n  Run Terraform remotely through the UI, API, or CLI. Learn how HCP Terraform\n  manages runs.\n---\n\n# Remote Operations\n\n> **Hands-on:** Try the [Get Started \u2014\u00a0HCP Terraform](\/terraform\/tutorials\/cloud-get-started?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorials.\n\nHCP Terraform provides a central interface for running Terraform within a large collaborative organization. If you're accustomed to running Terraform from your workstation, the way HCP Terraform manages runs can be unfamiliar.\n\nThis page describes the basics of how runs work in HCP Terraform.\n\n## Remote Operations\n\nHCP Terraform is designed as an execution platform for Terraform, and can perform Terraform runs on its own disposable virtual machines. This provides a consistent and reliable run environment, and enables advanced features like Sentinel policy enforcement, cost estimation, notifications, version control integration, and more.\n\nTerraform runs managed by HCP Terraform are called _remote operations._ Remote runs can be initiated by webhooks from your VCS provider, by UI controls within HCP Terraform, by API calls, or by Terraform CLI. 
When using Terraform CLI to perform remote operations, the progress of the run is streamed to the user's terminal, to provide an experience equivalent to local operations.\n\n### Disabling Remote Operations\n\n[execution_mode]: \/terraform\/cloud-docs\/workspaces\/settings#execution-mode\n\nMany of HCP Terraform's features rely on remote execution and are not available when using local operations. This includes features like Sentinel policy enforcement, cost estimation, and notifications.\n\nYou can disable remote operations for any workspace by changing its [Execution Mode][execution_mode] to **Local**. This causes the workspace to act only as a remote backend for Terraform state, with all execution occurring on your own workstations or continuous integration workers.\n\n### Protecting Private Environments\n\n[HCP Terraform agents](\/terraform\/cloud-docs\/agents) are a paid feature that allows HCP Terraform to communicate with isolated, private, or on-premises infrastructure. The agent polls HCP Terraform or Terraform Enterprise for any changes to your configuration and executes the changes locally, so you do not need to allow public ingress traffic to your resources. Agents allow you to control infrastructure in private environments without modifying your network perimeter.\n\nHCP Terraform agents also support running custom programs, called _hooks_, during strategic points of a Terraform run. For example, you may create a hook to dynamically download software required by the Terraform run or send an HTTP request to a system to kick off an external workflow.\n\n## Runs and Workspaces\n\nHCP Terraform always performs Terraform runs in the context of a [workspace](\/terraform\/cloud-docs\/run\/remote-operations). 
The workspace serves the same role that a persistent working directory serves when running Terraform locally: it provides the configuration, state, and variables for the run.\n\n### Configuration Versions\n\nEach workspace is associated with a particular Terraform configuration, but that configuration is expected to change over time. Thus, HCP Terraform manages configurations as a series of _configuration versions._\n\nMost commonly, a workspace is linked to a VCS repository, and its configuration versions are tied to revisions in the specified VCS branch. In workspaces that aren't linked to a repository, new configuration versions can be uploaded via Terraform CLI or via the API.\n\n### Ordering and Timing\n\nEach workspace in HCP Terraform maintains its own queue of runs, and processes those runs in order.\n\nWhenever a new run is initiated, it's added to the end of the queue. If there's already a run in progress, the new run won't start until the current one has completely finished \u2014\u00a0HCP Terraform won't even plan the run yet, because the current run might change what a future run would do. Runs that are waiting for other runs to finish are in a _pending_ state, and a workspace might have any number of pending runs.\n\nThere are two exceptions to the run queue, which can proceed at any time and do not block the progress of other runs:\n\n- Plan-only runs.\n- The planning stages of [saved plan runs](\/terraform\/cloud-docs\/run\/modes-and-options\/#saved-plans). You can only _apply_ a saved plan if no other run is in progress, and applying that plan blocks the run queue as usual. Terraform Enterprise does not yet support this workflow.\n\nWhen you initiate a run, HCP Terraform locks the run to a particular configuration version and set of variable values. 
If you change variables or commit new code before the run finishes, it will only affect future runs, not runs that are already pending, planning, or awaiting apply.\n\n### Workspace Locks\n\nWhen a workspace is _locked,_ HCP Terraform can create new runs (automatically or manually), but those runs do not begin until you unlock the workspace.\n\nWhen a run is in progress, that run locks the workspace, as described above under \"Ordering and Timing\".\n\nThere are two kinds of run operation that can ignore workspace locking:\n\n- Plan-only runs.\n- The planning stages of [saved plan runs](\/terraform\/cloud-docs\/run\/modes-and-options\/#saved-plans). You can only _apply_ a saved plan if the workspace is unlocked, and applying that plan locks the workspace as usual. Terraform Enterprise does not yet support this workflow.\n\nA user or team can also deliberately lock a workspace, to perform maintenance or for any other reason. For more details, see [Locking Workspaces (Preventing Runs)](\/terraform\/cloud-docs\/run\/manage#locking-workspaces-preventing-runs-).\n\n## Starting Runs\n\nHCP Terraform has three main workflows for managing runs, and your chosen workflow determines when and how Terraform runs occur. 
For detailed information, see:\n\n- The [UI\/VCS-driven run workflow](\/terraform\/cloud-docs\/run\/ui), which is the primary mode of operation.\n- The [API-driven run workflow](\/terraform\/cloud-docs\/run\/api), which is more flexible but requires you to create some tooling.\n- The [CLI-driven run workflow](\/terraform\/cloud-docs\/run\/cli), which uses Terraform's standard CLI tools to execute runs in HCP Terraform.\n\nYou can use the following methods to initiate HCP Terraform runs: \n- Click the **+ New run** button on the workspace's page\n- Implement VCS webhooks \n- Run the standard `terraform apply` command when the CLI integration is configured\n- Call [the Runs API](\/terraform\/cloud-docs\/api-docs\/run) using any API tool\n\n## Plans and Applies\n\nHCP Terraform enforces Terraform's division between _plan_ and _apply_ operations. It always plans first, then uses that plan's output for the apply.\n\nIn the default configuration, HCP Terraform waits for user approval before running an apply, but you can configure workspaces to [automatically apply](\/terraform\/cloud-docs\/workspaces\/settings#auto-apply-and-manual-apply) successful plans. Some plans can't be auto-applied, like plans queued by [run triggers](\/terraform\/cloud-docs\/workspaces\/settings\/run-triggers) or by users without permission to apply runs for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nIf a plan contains no changes, HCP Terraform does not attempt to apply it. Instead, the run ends with a status of \"Planned and finished\". The [allow empty apply](\/terraform\/cloud-docs\/run\/modes-and-options#allow-empty-apply) run mode can override this behavior.\n\n### Speculative Plans\n\nIn addition to normal runs, HCP Terraform can run _speculative plans_ to test changes to a configuration during editing and code review. 
Speculative plans are plan-only runs. They show possible changes, and policies affected by those changes, but cannot apply any changes.\n\nSpeculative plans can begin without waiting for other runs to finish because they don't affect real infrastructure. HCP Terraform lists past speculative plan runs alongside a workspace's other runs.\n\nThere are three ways to run speculative plans:\n\n- In VCS-backed workspaces, pull requests start speculative plans, and the VCS provider's pull request interface includes a link to the plan. See [UI\/VCS Runs: Speculative Plans on Pull Requests](\/terraform\/cloud-docs\/run\/ui#speculative-plans-on-pull-requests) for more details.\n- With the [CLI integration](\/terraform\/cli\/cloud) configured, running `terraform plan` on the command line starts a speculative plan. The plan output streams to the terminal, and a link to the plan is also included.\n- The runs API creates speculative plans whenever the specified configuration version is marked as speculative. See [the `configuration-versions` API](\/terraform\/cloud-docs\/api-docs\/configuration-versions#create-a-configuration-version) for more information.\n\n#### Retry a speculative plan in the UI\n\nIf a speculative plan fails due to an external factor, you can run it again using the \"Retry Run\" button on its page.\n\nRetrying a plan requires permission to queue plans for that workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)) Only failed or canceled plans can be retried.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nRetrying the run will create a new run with the same configuration version. 
If it is a VCS-backed workspace, the pull request interface will receive the status of the new run, along with a link to the new run.\n\n### Saved Plans\n\n-> **Version note:** Using saved plans from the CLI with HCP Terraform requires at least Terraform CLI v1.6.0.\n\nHCP Terraform also supports saved plan runs. If you have configured the [CLI integration](\/terraform\/cli\/cloud) you can use `terraform plan -out <FILE>` to perform and save a plan, `terraform apply <FILE>` to apply a saved plan, and `terraform show <FILE>` to inspect a saved plan before applying it. You can also create saved plan runs via the API by using the `save-plan` option.\n\nSaved plan runs affect the run queue differently from normal runs, and can sometimes be automatically discarded. For more details, refer to [Run Modes and Options: Saved Plans](\/terraform\/cloud-docs\/run\/modes-and-options#saved-plans).\n\n## Planning Modes and Options\n\nIn addition to the normal run workflows described above, HCP Terraform supports destroy runs, refresh-only runs, and several planning options that can modify the behavior of a run. For more details, see [Run Modes and Options](\/terraform\/cloud-docs\/run\/modes-and-options).\n\n## Run States\n\nHCP Terraform shows the progress of each run as it passes through each run state (pending, plan, policy check, apply, and completion). In some states, the run might require confirmation before continuing or ending; see [Managing Runs: Interacting with Runs](\/terraform\/cloud-docs\/run\/manage#interacting-with-runs) for more information.\n\nIn the list of workspaces on HCP Terraform's main page, each workspace shows the state of the run it's currently processing. 
(Or, if no run is in progress, the state of the most recent completed run.)\n\nFor full details about the stages of a run, see [Run States and Stages][].\n\n[Run States and Stages]: \/terraform\/cloud-docs\/run\/states\n\n## Import\n\nWe recommend using [`import` blocks](\/terraform\/language\/import), introduced in Terraform 1.5, to import resources in HCP Terraform.\n\nHCP Terraform does not support remote execution for the `terraform import` command. For this command the workspace acts only as a remote backend for Terraform state, with all execution occurring on your own workstations or continuous integration workers.\n\nSince `terraform import` runs locally, environment variables defined in the workspace are not available. Any environment variables required by the provider you're importing from must be defined within your local execution scope.","site":"terraform","answers_cleaned":"    page title  Remote Operations   HCP Terraform description       Run Terraform remotely through the UI  API  or CLI  Learn how HCP Terraform   manages runs         Remote Operations      Hands on    Try the  Get Started   HCP Terraform   terraform tutorials cloud get started utm source WEBSITE utm medium WEB IO utm offer ARTICLE PAGE utm content DOCS  tutorials   HCP Terraform provides a central interface for running Terraform within a large collaborative organization  If you re accustomed to running Terraform from your workstation  the way HCP Terraform manages runs can be unfamiliar   This page describes the basics of how runs work in HCP Terraform      Remote Operations  HCP Terraform is designed as an execution platform for Terraform  and can perform Terraform runs on its own disposable virtual machines  This provides a consistent and reliable run environment  and enables advanced features like Sentinel policy enforcement  cost estimation  notifications  version control integration  and more   Terraform runs managed by HCP Terraform are called  remote operations   Remote runs 
can be initiated by webhooks from your VCS provider  by UI controls within HCP Terraform  by API calls  or by Terraform CLI  When using Terraform CLI to perform remote operations  the progress of the run is streamed to the user s terminal  to provide an experience equivalent to local operations       Disabling Remote Operations   execution mode    terraform cloud docs workspaces settings execution mode  Many of HCP Terraform s features rely on remote execution and are not available when using local operations  This includes features like Sentinel policy enforcement  cost estimation  and notifications   You can disable remote operations for any workspace by changing its  Execution Mode  execution mode  to   Local    This causes the workspace to act only as a remote backend for Terraform state  with all execution occurring on your own workstations or continuous integration workers       Protecting Private Environments   HCP Terraform agents   terraform cloud docs agents  are a paid feature that allows HCP Terraform to communicate with isolated  private  or on premises infrastructure  The agent polls HCP Terraform or Terraform Enterprise for any changes to your configuration and executes the changes locally  so you do not need to allow public ingress traffic to your resources  Agents allow you to control infrastructure in private environments without modifying your network perimeter   HCP Terraform agents also support running custom programs  called  hooks   during strategic points of a Terraform run  For example  you may create a hook to dynamically download software required by the Terraform run or send an HTTP request to a system to kick off an external workflow      Runs and Workspaces  HCP Terraform always performs Terraform runs in the context of a  workspace   terraform cloud docs run remote operations   The workspace serves the same role that a persistent working directory serves when running Terraform locally  it provides the configuration  state  and 
variables for the run       Configuration Versions  Each workspace is associated with a particular Terraform configuration  but that configuration is expected to change over time  Thus  HCP Terraform manages configurations as a series of  configuration versions    Most commonly  a workspace is linked to a VCS repository  and its configuration versions are tied to revisions in the specified VCS branch  In workspaces that aren t linked to a repository  new configuration versions can be uploaded via Terraform CLI or via the API       Ordering and Timing  Each workspace in HCP Terraform maintains its own queue of runs  and processes those runs in order   Whenever a new run is initiated  it s added to the end of the queue  If there s already a run in progress  the new run won t start until the current one has completely finished   HCP Terraform won t even plan the run yet  because the current run might change what a future run would do  Runs that are waiting for other runs to finish are in a  pending  state  and a workspace might have any number of pending runs   There are two exceptions to the run queue  which can proceed at any time and do not block the progress of other runs     Plan only runs    The planning stages of  saved plan runs   terraform cloud docs run modes and options  saved plans   You can only  apply  a saved plan if no other run is in progress  and applying that plan blocks the run queue as usual  Terraform Enterprise does not yet support this workflow   When you initiate a run  HCP Terraform locks the run to a particular configuration version and set of variable values  If you change variables or commit new code before the run finishes  it will only affect future runs  not runs that are already pending  planning  or awaiting apply       Workspace Locks  When a workspace is  locked   HCP Terraform can create new runs  automatically or manually   but those runs do not begin until you unlock the workspace   When a run is in progress  that run locks the 
workspace  as described above under  Ordering and Timing    There are two kinds of run operation that can ignore workspace locking     Plan only runs    The planning stages of  saved plan runs   terraform cloud docs run modes and options  saved plans   You can only  apply  a saved plan if the workspace is unlocked  and applying that plan locks the workspace as usual  Terraform Enterprise does not yet support this workflow   A user or team can also deliberately lock a workspace  to perform maintenance or for any other reason  For more details  see  Locking Workspaces  Preventing Runs    terraform cloud docs run manage locking workspaces preventing runs        Starting Runs  HCP Terraform has three main workflows for managing runs  and your chosen workflow determines when and how Terraform runs occur  For detailed information  see     The  UI VCS driven run workflow   terraform cloud docs run ui   which is the primary mode of operation    The  API driven run workflow   terraform cloud docs run api   which is more flexible but requires you to create some tooling    The  CLI driven run workflow   terraform cloud docs run cli   which uses Terraform s standard CLI tools to execute runs in HCP Terraform   You can use the following methods to initiate HCP Terraform runs     Click the     New run   button on the workspace s page   Implement VCS webhooks    Run the standard  terraform apply  command when the CLI integration is configured   Call  the Runs API   terraform cloud docs api docs run  using any API tool     Plans and Applies  HCP Terraform enforces Terraform s division between  plan  and  apply  operations  It always plans first  then uses that plan s output for the apply   In the default configuration  HCP Terraform waits for user approval before running an apply  but you can configure workspaces to  automatically apply   terraform cloud docs workspaces settings auto apply and manual apply  successful plans  Some plans can t be auto applied  like plans queued by  
run triggers   terraform cloud docs workspaces settings run triggers  or by users without permission to apply runs for the workspace    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers  If a plan contains no changes  HCP Terraform does not attempt to apply it  Instead  the run ends with a status of  Planned and finished   The  allow empty apply   terraform cloud docs run modes and options allow empty apply  run mode can override this behavior       Speculative Plans  In addition to normal runs  HCP Terraform can run  speculative plans  to test changes to a configuration during editing and code review  Speculative plans are plan only runs  They show possible changes  and policies affected by those changes  but cannot apply any changes   Speculative plans can begin without waiting for other runs to finish because they don t affect real infrastructure  HCP Terraform lists past speculative plan runs alongside a workspace s other runs   There are three ways to run speculative plans     In VCS backed workspaces  pull requests start speculative plans  and the VCS provider s pull request interface includes a link to the plan  See  UI VCS Runs  Speculative Plans on Pull Requests   terraform cloud docs run ui speculative plans on pull requests  for more details    With the  CLI integration   terraform cli cloud  configured  running  terraform plan  on the command line starts a speculative plan  The plan output streams to the terminal  and a link to the plan is also included    The runs API creates speculative plans whenever the specified configuration version is marked as speculative  See  the  configuration versions  API   terraform cloud docs api docs configuration versions create a configuration version  for more information        Retry a speculative plan in the UI  If a speculative plan fails due to an external factor  you can run it again using the  Retry Run  
button on its page   Retrying a plan requires permission to queue plans for that workspace    More about permissions    terraform cloud docs users teams organizations permissions   Only failed or canceled plans can be retried    permissions citation    intentionally unused   keep for maintainers  Retrying the run will create a new run with the same configuration version  If it is a VCS backed workspace  the pull request interface will receive the status of the new run  along with a link to the new run       Saved Plans       Version note    Using saved plans from the CLI with HCP Terraform requires at least Terraform CLI v1 6 0   HCP Terraform also supports saved plan runs  If you have configured the  CLI integration   terraform cli cloud  you can use  terraform plan  out  FILE   to perform and save a plan   terraform apply  FILE   to apply a saved plan  and  terraform show  FILE   to inspect a saved plan before applying it  You can also create saved plan runs via the API by using the  save plan  option   Saved plan runs affect the run queue differently from normal runs  and can sometimes be automatically discarded  For more details  refer to  Run Modes and Options  Saved Plans   terraform cloud docs run modes and options saved plans       Planning Modes and Options  In addition to the normal run workflows described above  HCP Terraform supports destroy runs  refresh only runs  and several planning options that can modify the behavior of a run  For more details  see  Run Modes and Options   terraform cloud docs run modes and options       Run States  HCP Terraform shows the progress of each run as it passes through each run state  pending  plan  policy check  apply  and completion   In some states  the run might require confirmation before continuing or ending  see  Managing Runs  Interacting with Runs   terraform cloud docs run manage interacting with runs  for more information   In the list of workspaces on HCP Terraform s main page  each workspace shows the 
state of the run it s currently processing   Or  if no run is in progress  the state of the most recent completed run    For full details about the stages of a run  see  Run States and Stages       Run States and Stages    terraform cloud docs run states     Import  We recommend using   import  blocks   terraform language import   introduced in Terraform 1 5  to import resources in HCP Terraform   HCP Terraform does not support remote execution for the  terraform import  command  For this command the workspace acts only as a remote backend for Terraform state  with all execution occurring on your own workstations or continuous integration workers   Since  terraform import  runs locally  environment variables defined in the workspace are not available  Any environment variables required by the provider you re importing from must be defined within your local execution scope "}
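The per-workspace queueing rules described in this record (one run in progress at a time, later runs pending in order, plan-only runs bypassing the queue entirely) can be sketched as follows. This is an illustrative model of the documented behavior, not HCP Terraform's implementation; the class and method names are invented for this sketch.

```python
from collections import deque

class WorkspaceRunQueue:
    """Illustrative model: one run in progress at a time, others pending in
    order, plan-only runs never enter the queue and never block it."""
    def __init__(self):
        self.pending = deque()
        self.current = None

    def submit(self, run_id, plan_only=False):
        if plan_only:
            return "planning"  # plan-only runs proceed immediately
        self.pending.append(run_id)
        self._advance()
        return "planning" if self.current == run_id else "pending"

    def finish_current(self):
        """The in-progress run completes; the next pending run (if any) starts."""
        self.current = None
        self._advance()

    def _advance(self):
        if self.current is None and self.pending:
            self.current = self.pending.popleft()

q = WorkspaceRunQueue()
print(q.submit("run-1"))                  # planning
print(q.submit("run-2"))                  # pending (run-1 still in progress)
print(q.submit("run-3", plan_only=True))  # planning (speculative plans don't wait)
q.finish_current()
print(q.current)                          # run-2
```

Applying a saved plan would re-enter this queue like a normal run, which matches the note that only the planning stage of saved-plan runs bypasses the queue.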
{"questions":"terraform Terraform relies on provider plugins to manage resources In most cases Terraform can automatically download the required plugins but there are cases where plugins must be managed explicitly Installing Software in the Run Environment Learn how to install Terraform providers and additional software on Terraform Cloud workers including cloud CLIs or configuration management tools page title Installing Software in the Run Environment Runs HCP Terraform","answers":"---\npage_title: Installing Software in the Run Environment - Runs - HCP Terraform\ndescription: >-\n  Learn how to install Terraform providers and additional software on Terraform\n  Cloud workers, including cloud CLIs or configuration management tools.\n---\n\n# Installing Software in the Run Environment\n\nTerraform relies on provider plugins to manage resources. In most cases, Terraform can automatically download the required plugins, but there are cases where plugins must be managed explicitly.\n\nIn rare cases, it might also be necessary to install extra software on the Terraform worker, such as a configuration management tool or cloud CLI.\n\n## Installing Terraform Providers\n\nThe mechanics of provider installation changed in Terraform 0.13, thanks to the introduction of the [Terraform Registry][registry] for providers which allows custom and community providers to be installed via `terraform init`. 
Prior to Terraform 0.13, Terraform could only automatically install providers distributed by HashiCorp.\n\n### Terraform 0.13 and later\n\n#### Providers From the Terraform Registry\n\nThe [Terraform Registry][registry] allows anyone to publish and distribute providers which can be automatically downloaded and installed via `terraform init`.\n\nTerraform Enterprise instances must be able to access `registry.terraform.io` to use providers from the public registry; otherwise, you can install providers using [the `terraform-bundle` tool][bundle].\n\n[registry]: https:\/\/registry.terraform.io\/browse\/providers\n\n#### In-House Providers\n\nIf you have a custom provider that you'd rather not publish in the public Terraform Registry, you have a few options:\n\n- Add the provider binary to the VCS repo (or manually-uploaded configuration version). Place the compiled `linux_amd64` version of the plugin at `terraform.d\/plugins\/<SOURCE HOST>\/<SOURCE NAMESPACE>\/<PLUGIN NAME>\/<VERSION>\/linux_amd64`, relative to the root of the directory.\n\n  The source host and namespace will need to match the source given in the  `required_providers` block within the configuration, but can otherwise be arbitrary identifiers. For instance, if your `required_providers` block looks like this:\n\n  ```\n  terraform {\n    required_providers {\n      custom = {\n        source = \"my-host\/my-namespace\/custom\"\n        version = \"1.0.0\"\n      }\n    }\n  }\n  ```\n\n  HCP Terraform will be able to use your compiled provider if you place it at `terraform.d\/plugins\/my-host\/my-namespace\/custom\/1.0.0\/linux_amd64\/terraform-provider-custom`.\n\n- Use a privately-owned provider registry service which implements the [provider registry protocol](\/terraform\/internals\/provider-registry-protocol) to distribute custom providers. 
Be sure to include the full [source address](\/terraform\/language\/providers\/requirements#source-addresses), including the hostname, when referencing providers.\n\n- **Terraform Enterprise only:** Use [the `terraform-bundle` tool][bundle] to add custom providers.\n\n-> **Note:** Using a [network mirror](\/terraform\/internals\/provider-network-mirror-protocol) to host custom providers for installation is not currently supported in HCP Terraform, since the network mirror cannot be activated without a [`provider_installation`](\/terraform\/cli\/config\/config-file#explicit-installation-method-configuration) block in the CLI configuration file.\n\n### Terraform 0.12 and earlier\n\n#### Providers Distributed by HashiCorp\n\nHCP Terraform can automatically install providers distributed by HashiCorp. Terraform Enterprise instances can do this as well as long as they can access `releases.hashicorp.com`.\n\nIf that isn't feasible due to security requirements, you can manually install providers. Use [the `terraform-bundle` tool][bundle] to build a custom version of Terraform that includes the necessary providers, and configure your workspaces to use that bundled version.\n\n[bundle]: https:\/\/github.com\/hashicorp\/terraform\/tree\/master\/tools\/terraform-bundle#installing-a-bundle-in-on-premises-terraform-enterprise\n\n#### Custom and Community Providers\n\nTo use community providers or your own custom providers with Terraform versions prior to 0.13, you must install them yourself.\n\nThere are two ways to accomplish this:\n\n- Add the provider binary to the VCS repo (or manually-uploaded configuration version) for any workspace that uses it. Place the compiled `linux_amd64` version of the plugin at `terraform.d\/plugins\/linux_amd64\/<PLUGIN NAME>` (as a relative path from the root of the working directory). 
The plugin name should follow the [naming scheme](\/terraform\/language\/v1.1.x\/configuration-0-11\/providers#plugin-names-and-versions) and the plugin file must have read and execute permissions. (Third-party plugins are often distributed with an appropriate filename already set in the distribution archive.)\n\n  You can add plugins directly to a configuration repo, or you can add them as Git submodules and symlink the executable files into `terraform.d\/plugins\/`. Submodules are a good choice when many workspaces use the same custom provider, since they keep your repos smaller. If using submodules, enable the [\"Include submodules on clone\" setting](\/terraform\/cloud-docs\/workspaces\/settings\/vcs#include-submodules-on-clone) on any affected workspace.\n\n- **Terraform Enterprise only:** Use [the `terraform-bundle` tool][bundle] to add custom providers to a custom Terraform version. This keeps custom providers out of your configuration repos entirely, and is easier to update when many workspaces use the same provider.\n\n## Installing Additional Tools\n\n### Avoid Installing Extra Software\n\nWhenever possible, don't install software on the worker. There are a number of reasons for this:\n\n- Provisioners are a last resort in Terraform; they greatly increase the risk of creating unknown states with unmanaged and partially-managed infrastructure, and the `local-exec` provisioner is especially hazardous. [The Terraform CLI docs on provisioners](\/terraform\/language\/resources\/provisioners\/syntax#provisioners-are-a-last-resort) explain the hazards in more detail, with more information about the preferred alternatives. (In summary: use Packer, use cloud-init, try to make your infrastructure more immutable, and always prefer real provider features.)\n- We don't guarantee the stability of the operating system on the Terraform build workers. 
It's currently the latest version of Ubuntu LTS, but we reserve the right to change that at any time.\n- The build workers are disposable and are destroyed after each use, which makes managing extra software even more complex than when running Terraform CLI in a persistent environment. Custom software must be installed on every run, which also increases run times.\n\n### Only Install Standalone Binaries\n\nHCP Terraform does not allow you to elevate a command's permissions with `sudo` during Terraform runs. This means you cannot install packages using the worker OS's normal package management tools. However, you can install and execute standalone binaries in Terraform's working directory.\n\nYou have two options for getting extra software into the configuration directory:\n\n- Include it in the configuration repository as a submodule. (Make sure the workspace is configured to clone submodules.)\n- Use `local-exec` to download it with `curl`. For example:\n\n  ```hcl\n  resource \"aws_instance\" \"example\" {\n    ami           = var.ami\n    instance_type = \"t2.micro\"\n    provisioner \"local-exec\" {\n      command = <<EOH\n  curl -o jq https:\/\/github.com\/stedolan\/jq\/releases\/download\/jq-1.6\/jq-linux64\n  chmod 0755 jq\n  # Do some kind of JSON processing with .\/jq\n  EOH\n    }\n  }\n  ```\n\nWhen downloading software with `local-exec`, try to associate the provisioner block with the resource(s) that the software will interact with. If you use a null resource with a `local-exec` provisioner, you must ensure it can be properly configured with [triggers](\/terraform\/language\/resources\/provisioners\/null_resource#example-usage). Otherwise, a null resource with the `local-exec` provisioner will only install software on the initial run where the `null_resource` is created. 
The `null_resource` will not be automatically recreated in subsequent runs and the related software won't be installed, which may cause runs to encounter errors.\n\n-> **Note:** Terraform Enterprise instances can be configured to allow `sudo` commands during Terraform runs. However, even when `sudo` is allowed, using the worker OS's package tools during runs is still usually a bad idea. You will have a much better experience if you can move your provisioner actions into a custom provider or an immutable machine image.","site":"terraform","answers_cleaned":"    page title  Installing Software in the Run Environment   Runs   HCP Terraform description       Learn how to install Terraform providers and additional software on Terraform   Cloud workers  including cloud CLIs or configuration management tools         Installing Software in the Run Environment  Terraform relies on provider plugins to manage resources  In most cases  Terraform can automatically download the required plugins  but there are cases where plugins must be managed explicitly   In rare cases  it might also be necessary to install extra software on the Terraform worker  such as a configuration management tool or cloud CLI      Installing Terraform Providers  The mechanics of provider installation changed in Terraform 0 13  thanks to the introduction of the  Terraform Registry  registry  for providers which allows custom and community providers to be installed via  terraform init   Prior to Terraform 0 13  Terraform could only automatically install providers distributed by HashiCorp       Terraform 0 13 and later       Providers From the Terraform Registry  The  Terraform Registry  registry  allows anyone to publish and distribute providers which can be automatically downloaded and installed via  terraform init    Terraform Enterprise instances must be able to access  registry terraform io  to use providers from the public registry  otherwise  you can install providers using  the  terraform bundle  
tool  bundle     registry   https   registry terraform io browse providers       In House Providers  If you have a custom provider that you d rather not publish in the public Terraform Registry  you have a few options     Add the provider binary to the VCS repo  or manually uploaded configuration version   Place the compiled  linux amd64  version of the plugin at  terraform d plugins  SOURCE HOST   SOURCE NAMESPACE   PLUGIN NAME   VERSION  linux amd64   relative to the root of the directory     The source host and namespace will need to match the source given in the   required providers  block within the configuration  but can otherwise be arbitrary identifiers  For instance  if your  required providers  block looks like this           terraform       required providers         custom             source    my host my namespace custom          version    1 0 0                             HCP Terraform will be able to use your compiled provider if you place it at  terraform d plugins my host my namespace custom 1 0 0 linux amd64 terraform provider custom      Use a privately owned provider registry service which implements the  provider registry protocol   terraform internals provider registry protocol  to distribute custom providers  Be sure to include the full  source address   terraform language providers requirements source addresses   including the hostname  when referencing providers       Terraform Enterprise only    Use  the  terraform bundle  tool  bundle  to add custom providers        Note    Using a  network mirror   terraform internals provider network mirror protocol  to host custom providers for installation is not currently supported in HCP Terraform  since the network mirror cannot be activated without a   provider installation    terraform cli config config file explicit installation method configuration  block in the CLI configuration file       Terraform 0 12 and earlier       Providers Distributed by HashiCorp  HCP Terraform can automatically 
install providers distributed by HashiCorp  Terraform Enterprise instances can do this as well as long as they can access  releases hashicorp com    If that isn t feasible due to security requirements  you can manually install providers  Use  the  terraform bundle  tool  bundle  to build a custom version of Terraform that includes the necessary providers  and configure your workspaces to use that bundled version    bundle   https   github com hashicorp terraform tree master tools terraform bundle installing a bundle in on premises terraform enterprise       Custom and Community Providers  To use community providers or your own custom providers with Terraform versions prior to 0 13  you must install them yourself   There are two ways to accomplish this     Add the provider binary to the VCS repo  or manually uploaded configuration version  for any workspace that uses it  Place the compiled  linux amd64  version of the plugin at  terraform d plugins linux amd64  PLUGIN NAME    as a relative path from the root of the working directory   The plugin name should follow the  naming scheme   terraform language v1 1 x configuration 0 11 providers plugin names and versions  and the plugin file must have read and execute permissions   Third party plugins are often distributed with an appropriate filename already set in the distribution archive      You can add plugins directly to a configuration repo  or you can add them as Git submodules and symlink the executable files into  terraform d plugins    Submodules are a good choice when many workspaces use the same custom provider  since they keep your repos smaller  If using submodules  enable the   Include submodules on clone  setting   terraform cloud docs workspaces settings vcs include submodules on clone  on any affected workspace       Terraform Enterprise only    Use  the  terraform bundle  tool  bundle  to add custom providers to a custom Terraform version  This keeps custom providers out of your configuration repos 
entirely  and is easier to update when many workspaces use the same provider      Installing Additional Tools      Avoid Installing Extra Software  Whenever possible  don t install software on the worker  There are a number of reasons for this     Provisioners are a last resort in Terraform  they greatly increase the risk of creating unknown states with unmanaged and partially managed infrastructure  and the  local exec  provisioner is especially hazardous   The Terraform CLI docs on provisioners   terraform language resources provisioners syntax provisioners are a last resort  explain the hazards in more detail  with more information about the preferred alternatives   In summary  use Packer  use cloud init  try to make your infrastructure more immutable  and always prefer real provider features     We don t guarantee the stability of the operating system on the Terraform build workers  It s currently the latest version of Ubuntu LTS  but we reserve the right to change that at any time    The build workers are disposable and are destroyed after each use  which makes managing extra software even more complex than when running Terraform CLI in a persistent environment  Custom software must be installed on every run  which also increases run times       Only Install Standalone Binaries  HCP Terraform does not allow you to elevate a command s permissions with  sudo  during Terraform runs  This means you cannot install packages using the worker OS s normal package management tools  However  you can install and execute standalone binaries in Terraform s working directory   You have two options for getting extra software into the configuration directory     Include it in the configuration repository as a submodule   Make sure the workspace is configured to clone submodules     Use  local exec  to download it with  curl   For example        hcl   resource  aws instance   example        ami                var ami       instance type    t2 micro      provisioner  local exec  
        command     EOH   curl  o jq https   github com stedolan jq releases download jq 1 6 jq linux64   chmod 0755 jq     Do some kind of JSON processing with   jq   EOH                  When downloading software with  local exec   try to associate the provisioner block with the resource s  that the software will interact with  If you use a null resource with a  local exec  provisioner  you must ensure it can be properly configured with  triggers   terraform language resources provisioners null resource example usage   Otherwise  a null resource with the  local exec  provisioner will only install software on the initial run where the  null resource  is created  The  null resource  will not be automatically recreated in subsequent runs and the related software won t be installed  which may cause runs to encounter errors        Note    Terraform Enterprise instances can be configured to allow  sudo  commands during Terraform runs  However  even when  sudo  is allowed  using the worker OS s package tools during runs is still usually a bad idea  You will have a much better experience if you can move your provisioner actions into a custom provider or an immutable machine image "}
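The record above notes that a `null_resource` with a `local-exec` provisioner only installs software on the run that creates the resource, unless `triggers` force it to be recreated. One common pattern, sketched under the assumption of the `jq` download example (the resource name is illustrative):

```hcl
# Re-download jq onto every disposable worker by forcing replacement of
# the null_resource on each run: timestamp() yields a new value on every
# plan, so the trigger changes and the provisioner re-executes.
resource "null_resource" "install_jq" {
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = <<EOH
curl -L -o jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
chmod 0755 jq
EOH
  }
}
```

Note that this makes every plan non-empty (the resource is always replaced), which is usually acceptable for a helper resource but worth flagging to reviewers.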
{"questions":"terraform HCP Terraform has three workflows for managing Terraform runs The UI and VCS driven Run Workflow page title UI VCS driven Runs Runs HCP Terraform Workspaces are associated with a VCS repository branch and HCP Terraform automatically queues runs when new commits are merged","answers":"---\npage_title: UI\/VCS-driven Runs - Runs - HCP Terraform\ndescription: >-\n  Workspaces are associated with a VCS repository branch, and HCP Terraform\n  automatically queues runs when new commits are merged.\n---\n\n# The UI- and VCS-driven Run Workflow\n\nHCP Terraform has three workflows for managing Terraform runs.\n\n- The UI\/VCS-driven run workflow described below, which is the primary mode of operation.\n- The [API-driven run workflow](\/terraform\/cloud-docs\/run\/api), which is more flexible but requires you to create some tooling.\n- The [CLI-driven run workflow](\/terraform\/cloud-docs\/run\/cli), which uses Terraform's standard CLI tools to execute runs in HCP Terraform.\n\n## Summary\n\nIn the UI and VCS workflow, every workspace is associated with a specific branch of a VCS repo of Terraform configurations. HCP Terraform registers webhooks with your VCS provider when you create a workspace, then automatically queues a Terraform run whenever new commits are merged to that branch of the workspace's linked repository.\n\nHCP Terraform also performs a [speculative plan][] when a pull request is opened against that branch. 
HCP Terraform posts a link to the plan in the pull request, and re-runs the plan if the pull request is updated.\n\n[speculative plan]: \/terraform\/cloud-docs\/run\/remote-operations#speculative-plans\n\nThe Terraform code for a normal run always comes from version control, and is always associated with a specific commit.\n\n## Automatically Starting Runs\n\nIn a workspace linked to a VCS repository, runs start automatically when you merge or commit changes to version control.\n\nIf you use GitHub as your VCS provider and merge a PR changing 300 or more files, HCP Terraform automatically triggers runs for every workspace connected to that repository. The GitHub API has a limit of 300 reported changed files for a PR merge. To address this, HCP Terraform initiates workspace runs proactively, preventing oversight of file changes beyond this limit.\n\nA workspace is linked to one branch of a VCS repository and ignores changes to other branches. You can specify which files and directories within your repository trigger runs. HCP Terraform can also automatically trigger runs when you create Git tags. Refer to [Automatic Run Triggering](\/terraform\/cloud-docs\/workspaces\/settings\/vcs#automatic-run-triggering) for details.\n\n-> **Note:** A workspace with no runs will not accept new runs via VCS webhook. At least one run must be manually queued to confirm that the workspace is ready for further runs.\n\nA workspace will not process a webhook if the workspace previously processed a webhook with the same commit SHA and created a run. To trigger a run, create a new commit. If a workspace receives a webhook with a previously processed commit, HCP Terraform will add a new event to the [VCS Events](\/terraform\/cloud-docs\/vcs#viewing-events) page documenting the received webhook.\n\n## Manually Starting Runs\n\nYou can manually trigger a run using the UI. 
Manual runs let you apply configuration changes when you update variable values but the configuration in version control is unchanged. You must manually trigger an initial run in any new VCS-driven workspace.\n\nTo start a run, click the **+ New run** button on the workspace page. This opens a **Start a new run** dialog. Select the run mode and provide an optional message. Review the [run modes documentation](\/terraform\/cloud-docs\/run\/modes-and-options) for more detail on supported options.\n\nRun modes that have a plan phase support debugging mode. This is equivalent to setting the `TF_LOG` environment variable to `TRACE` for this run only. To enable debugging, click **Additional planning options** under the run mode and click **Enable debugging mode**.  See [Debugging Terraform](\/terraform\/internals\/debugging) for more information.\n\nTo [replace](\/terraform\/cloud-docs\/run\/modes-and-options#replacing-selected-resources) specific resources as part of a standard plan and apply run, expand the **Additional planning options** section and select the resources to replace.\n\nManually starting a run requires permission to queue plans for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nIf the workspace has a plan that is still in the [plan stage](\/terraform\/cloud-docs\/run\/states#the-plan-stage) when a new plan is queued, you can either wait for it to complete, or visit the **Current Run** page and click **Run this plan now**. 
Be aware that this action terminates the current plan and unlocks the workspace, which can lead to anomalies in behavior, but can be useful if the plans are long-running and the current plan does not have all the desired changes.\n\n## Automatically cancel plan-only runs triggered by outdated commits\n\nRefer to [Automatically cancel plan-only runs triggered by outdated commits](\/terraform\/cloud-docs\/users-teams-organizations\/organizations\/vcs-speculative-plan-management) for additional information.\n\n## Confirming or Discarding Plans\n\nBy default, run plans require confirmation before HCP Terraform will apply them. Users with permission to apply runs for the workspace can navigate to a run that has finished planning and click the \"Confirm & Apply\" or \"Discard Plan\" button to finish or cancel a run. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)) If necessary, use the \"View Plan\" button for more details about what the run will change.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n![confirm button](\/img\/docs\/runs-confirm.png)\n\nUsers can also leave comments if there's something unusual involved in a run.\n\nNote that once the plan stage is completed, until you apply or discard a plan, HCP Terraform can't start another run in that workspace.\n\n### Auto apply\n\nIf you would rather automatically apply plans that don't have errors, you can [enable auto apply](\/terraform\/cloud-docs\/workspaces\/settings#auto-apply-and-manual-apply) on the workspace's \"General Settings\" page. Some plans can't be auto-applied, like plans queued by [run triggers](\/terraform\/cloud-docs\/workspaces\/settings\/run-triggers) or by users without permission to apply runs. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Speculative Plans on Pull Requests\n\nTo help you review proposed changes, HCP Terraform can automatically run [speculative plans][speculative plan] for pull requests or merge requests.\n\n### Viewing Pull Request Plans\n\nYou can view speculative plans in a workspace's list of normal runs. Additionally, HCP Terraform adds a link to the run in the pull request itself, along with an indicator of the run's status.\n\nA single pull request can include links to multiple plans, depending on how many workspaces connect to the destination branch. If you update a pull request, HCP Terraform performs new speculative plans and updates the links.\n\nAlthough any contributor to the repository can see the status indicators for pull request plans, only members of your HCP Terraform organization with permission to read runs for the affected workspaces can click through and view the complete plan output. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Rules for Triggering Pull Request Plans\n\nWhenever a pull request is _created or updated,_ HCP Terraform checks whether it should run speculative plans in workspaces connected to that repository, based on the following rules:\n\n- Only pull requests that originate from within the same repository can trigger speculative plans.\n\n  To avoid executing malicious code or exposing sensitive information, HCP Terraform doesn't run speculative plans for pull requests that originate from forks of a repository.\n\n  -> **Note:** On Terraform Enterprise, administrators can choose to allow speculative plans on pull requests that originate from forks. 
To learn more about this setting, refer to the [general settings documentation](\/terraform\/enterprise\/admin\/application\/general#allow-speculative-plans-on-pull-requests-from-forks)\n\n- Pull requests can only trigger runs in workspaces where automatic speculative plans are allowed. You can [disable automatic speculative plans](\/terraform\/cloud-docs\/workspaces\/settings\/vcs#automatic-speculative-plans) in a workspace's VCS settings.\n\n- A pull request will only trigger speculative plans in workspaces that are connected to that pull request's destination branch.\n\n  The destination branch is the branch that a pull request proposes to make changes to; this is often the repository's main branch, but not always.\n\n- If a workspace is configured to only treat certain directories in a repository as relevant, pull requests that don't affect those directories won't trigger speculative plans in that workspace. For more information, see [VCS settings: automatic run triggering](\/terraform\/cloud-docs\/workspaces\/settings\/vcs#automatic-run-triggering).\n\n  -> **Note:** If HCP Terraform skips a plan because the changes weren't relevant, it will still post a passing commit status to the pull request.\n\n- HCP Terraform does not update the status checks on a pull request with the status of an associated apply. This means that a commit with a successful plan but an errored apply will still show the passing commit status from the plan.\n\n### Contents of Pull Request Plans\n\nSpeculative plans for pull requests use the contents of the head branch (the branch that the PR proposes to merge into the destination branch), and they compare against the workspace's current state at the time of the plan. This means that if the destination branch changes significantly after the head branch is created, the speculative plan might not accurately show the results of accepting the PR. 
To get a more accurate view, you can rebase the head branch onto a more recent commit, or merge the destination branch into the head branch.\n\n## Testing Terraform Upgrades with Speculative Plans\n\nYou can start a new [speculative plan][speculative plan] in the UI with the workspace's current configuration version and any Terraform version available to the organization.\n\n1. Click **+ New run**.\n1. Select **Plan only** as the run type.\n1. Select a version from the **Choose Terraform version** menu. The speculative plan will use this version without changing the workspace's settings.\n1. Click **Start run**.\n\nIf the run is successful, you can change the workspace's Terraform version and [upgrade the state](\/terraform\/cloud-docs\/workspaces\/state#upgrading-state).\n\n## Speculative Plans During Development\n\nYou can also use the CLI to run speculative plans on demand before making a pull request. Refer to the [CLI-driven run workflow](\/terraform\/cloud-docs\/run\/cli#remote-speculative-plans) for more details.","site":"terraform","answers_cleaned":"    page title  UI VCS driven Runs   Runs   HCP Terraform description       Workspaces are associated with a VCS repository branch  and HCP Terraform   automatically queues runs when new commits are merged         The UI  and VCS driven Run Workflow  HCP Terraform has three workflows for managing Terraform runs     The UI VCS driven run workflow described below  which is the primary mode of operation    The  API driven run workflow   terraform cloud docs run api   which is more flexible but requires you to create some tooling    The  CLI driven run workflow   terraform cloud docs run cli   which uses Terraform s standard CLI tools to execute runs in HCP Terraform      Summary  In the UI and VCS workflow  every workspace is associated with a specific branch of a VCS repo of Terraform configurations  HCP Terraform registers webhooks with your VCS provider when you create a workspace  then automatically queues a 
Terraform run whenever new commits are merged to that branch of workspace s linked repository   HCP Terraform also performs a  speculative plan    when a pull request is opened against that branch  HCP Terraform posts a link to the plan in the pull request  and re runs the plan if the pull request is updated    speculative plan    terraform cloud docs run remote operations speculative plans  The Terraform code for a normal run always comes from version control  and is always associated with a specific commit      Automatically Starting Runs  In a workspace linked to a VCS repository  runs start automatically when you merge or commit changes to version control   If you use GitHub as your VCS provider and merge a PR changing 300 or more files  HCP Terraform automatically triggers runs for every workspace connected to that repository  The GitHub API has a limit of 300 reported changed files for a PR merge  To address this  HCP Terraform initiates workspace runs proactively  preventing oversight of file changes beyond this limit   A workspace is linked to one branch of a VCS repository and ignores changes to other branches  You can specify which files and directories within your repository trigger runs  HCP Terraform can also automatically trigger runs when you create Git tags  Refer to  Automatic Run Triggering   terraform cloud docs workspaces settings vcs automatic run triggering  for details        Note    A workspace with no runs will not accept new runs via VCS webhook  At least one run must be manually queued to confirm that the workspace is ready for further runs   A workspace will not process a webhook if the workspace previously processed a webhook with the same commit SHA and created a run  To trigger a run  create a new commit  If a workspace receives a webhook with a previously processed commit  HCP Terraform will add a new event to the  VCS Events   terraform cloud docs vcs viewing events  page documenting the received webhook      Manually Starting Runs  
You can manually trigger a run using the UI  Manual runs let you apply configuration changes when you update variable values but the configuration in version control is unchanged  You must manually trigger an initial run in any new VCS driven workspace   To start a run  click the     New run   button on the workspace page  This opens a   Start a new run   dialog  Select the run mode and provide an optional message  Review the  run modes documentation   terraform cloud docs run modes and options  for more detail on supported options   Run modes that have a plan phase support debugging mode  This is equivalent to setting the  TF LOG  environment variable to  TRACE  for this run only  To enable debugging  click   Additional planning options   under the run mode and click   Enable debugging mode     See  Debugging Terraform   terraform internals debugging  for more information   To  replace   terraform cloud docs run modes and options replacing selected resources  specific resources as part of a standard plan and apply run  expand the   Additional planning options   section and select the resources to replace   Manually starting a run requires permission to queue plans for the workspace    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers  If the workspace has a plan that is still in the  plan stage   terraform cloud docs run states the plan stage  when a new plan is queued  you can either wait for it to complete  or visit the   Current Run   page and click   Run this plan now    Be aware that this action terminates the current plan and unlocks the workspace  which can lead to anomalies in behavior  but can be useful if the plans are long running and the current plan does not have all the desired changes      Automatically cancel plan only runs triggered by outdated commits  Refer to  Automatically cancel plan only runs triggered by outdated commits   terraform 
cloud docs users teams organizations organizations vcs speculative plan management  for additional information      Confirming or Discarding Plans  By default  run plans require confirmation before HCP Terraform will apply them  Users with permission to apply runs for the workspace can navigate to a run that has finished planning and click the  Confirm   Apply  or  Discard Plan  button to finish or cancel a run    More about permissions    terraform cloud docs users teams organizations permissions   If necessary  use the  View Plan  button for more details about what the run will change    permissions citation    intentionally unused   keep for maintainers    confirm button   img docs runs confirm png   Users can also leave comments if there s something unusual involved in a run   Note that once the plan stage is completed  until you apply or discard a plan  HCP Terraform can t start another run in that workspace       Auto apply  If you would rather automatically apply plans that don t have errors  you can  enable auto apply   terraform cloud docs workspaces settings auto apply and manual apply  on the workspace s  General Settings  page  Some plans can t be auto applied  like plans queued by  run triggers   terraform cloud docs workspaces settings run triggers  or by users without permission to apply runs    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers     Speculative Plans on Pull Requests  To help you review proposed changes  HCP Terraform can automatically run  speculative plans  speculative plan  for pull requests or merge requests       Viewing Pull Request Plans  You can view speculative plans in a workspace s list of normal runs  Additionally  HCP Terraform adds a link to the run in the pull request itself  along with an indicator of the run s status   A single pull request can include links to multiple plans  depending on how many workspaces 
connect to the destination branch  If you update a pull request  HCP Terraform performs new speculative plans and updates the links   Although any contributor to the repository can see the status indicators for pull request plans  only members of your HCP Terraform organization with permission to read runs for the affected workspaces can click through and view the complete plan output    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers      Rules for Triggering Pull Request Plans  Whenever a pull request is  created or updated   HCP Terraform checks whether it should run speculative plans in workspaces connected to that repository  based on the following rules     Only pull requests that originate from within the same repository can trigger speculative plans     To avoid executing malicious code or exposing sensitive information  HCP Terraform doesn t run speculative plans for pull requests that originate from forks of a repository          Note    On Terraform Enterprise  administrators can choose to allow speculative plans on pull requests that originate from forks  To learn more about this setting  refer to the  general settings documentation   terraform enterprise admin application general allow speculative plans on pull requests from forks     Pull requests can only trigger runs in workspaces where automatic speculative plans are allowed  You can  disable automatic speculative plans   terraform cloud docs workspaces settings vcs automatic speculative plans  in a workspace s VCS settings     A pull request will only trigger speculative plans in workspaces that are connected to that pull request s destination branch     The destination branch is the branch that a pull request proposes to make changes to  this is often the repository s main branch  but not always     If a workspace is configured to only treat certain directories in a repository as relevant  
pull requests that don t affect those directories won t trigger speculative plans in that workspace  For more information  see  VCS settings  automatic run triggering   terraform cloud docs workspaces settings vcs automatic run triggering           Note    If HCP Terraform skips a plan because the changes weren t relevant  it will still post a passing commit status to the pull request     HCP Terraform does not update the status checks on a pull request with the status of an associated apply  This means that a commit with a successful plan but an errored apply will still show the passing commit status from the plan       Contents of Pull Request Plans  Speculative plans for pull requests use the contents of the head branch  the branch that the PR proposes to merge into the destination branch   and they compare against the workspace s current state at the time of the plan  This means that if the destination branch changes significantly after the head branch is created  the speculative plan might not accurately show the results of accepting the PR  To get a more accurate view  you can rebase the head branch onto a more recent commit  or merge the destination branch into the head branch      Testing Terraform Upgrades with Speculative Plans  You can start a new  speculative plan  speculative plan  in the UI with the workspace s current configuration version and any Terraform version available to the organization   1  Click     New run    1  Select   Plan only   as the run type  1  Select a version from the   Choose Terraform version   menu  The speculative plan will use this version without changing the workspace s settings  1  Click   Start run     If the run is successful  you can change the workspace s Terraform version and  upgrade the state   terraform cloud docs workspaces state upgrading state       Speculative Plans During Development  You can also use the CLI to run speculative plans on demand before making a pull request  Refer to the  CLI driven run 
workflow   terraform cloud docs run cli remote speculative plans  for more details "}
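Beyond the CLI workflow described above, a speculative (plan-only) run can also be created through the Runs API. Below is a minimal sketch of building the request payload with `jq`; the workspace ID and Terraform version are hypothetical placeholders, and the attribute names follow the `plan-only` and `terraform-version` options described in the run modes documentation.

```shell
# Hypothetical values for illustration only.
WORKSPACE_ID="ws-XXXXXXXXXXXXXXXX"
TF_VERSION="1.6.0"

# Build a plan-only run payload; POST it to the runs endpoint
# (https://app.terraform.io/api/v2/runs) with the usual
# Authorization and Content-Type headers.
PAYLOAD=$(jq -n --arg ws "$WORKSPACE_ID" --arg tfv "$TF_VERSION" '{
  data: {
    type: "runs",
    attributes: { "plan-only": true, "terraform-version": $tfv },
    relationships: {
      workspace: { data: { type: "workspaces", id: $ws } }
    }
  }
}')

echo "$PAYLOAD" | jq -r '.data.attributes["plan-only"]'
```

The same payload shape with `save-plan` or `allow-empty-apply` attributes selects the other API run modes covered later in this file.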
{"questions":"terraform HCP Terraform has three workflows for managing Terraform runs trigger runs page title API driven Runs Runs HCP Terraform The API driven Run Workflow External tools communicate with HCP Terraform to upload configurations and","answers":"---\npage_title: API-driven Runs - Runs - HCP Terraform\ndescription: >-\n  External tools communicate with HCP Terraform to upload configurations and\n  trigger runs.\n---\n\n# The API-driven Run Workflow\n\nHCP Terraform has three workflows for managing Terraform runs.\n\n- The [UI\/VCS-driven run workflow](\/terraform\/cloud-docs\/run\/ui), which is the primary mode of operation.\n- The API-driven run workflow described below, which is more flexible but requires you to create some tooling.\n- The [CLI-driven run workflow](\/terraform\/cloud-docs\/run\/cli), which uses Terraform's standard CLI tools to execute runs in HCP Terraform.\n\n## Summary\n\nIn the API-driven workflow, workspaces are not directly associated with a VCS repo, and runs are not driven by webhooks on your VCS provider.\n\nInstead, one of your organization's other tools is in charge of deciding when configuration has changed and a run should occur. Usually this is something like a CI system, or something else capable of monitoring changes to your Terraform code and performing actions in response.\n\nOnce your other tooling has decided a run should occur, it must make a series of calls to HCP Terraform's `runs` and `configuration-versions` APIs to upload configuration files and perform a run with them. For the exact series of API calls, see the [pushing a new configuration version](#pushing-a-new-configuration-version) section.\n\nThe most significant difference in this workflow is that HCP Terraform _does not_ fetch configuration files from version control. Instead, your own tooling must upload the configurations as a `.tar.gz` file. 
This allows you to work with configurations from unsupported version control systems, automatically generate Terraform configurations from some other source of data, or build a variety of other integrations.\n\n~> **Important:** The script below is provided to illustrate the run process, and is not intended for production use. If you want to drive HCP Terraform runs from the command line, please see the [CLI-driven run workflow](\/terraform\/cloud-docs\/run\/cli).\n\n## Pushing a New Configuration Version\n\nPushing a new configuration to an existing workspace is a multi-step process. This section walks through each step in detail, using an example bash script to illustrate.\n\nYou need queue plans permission to create new configuration versions for the workspace. Refer to the [permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-workspace-permissions) documentation for more details.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### 1. Define Variables\n\nTo perform an upload, a few user parameters must be set:\n\n- **path_to_content_directory** is the folder with the Terraform configuration. 
There must be at least one `.tf` file in the root of this path.\n- **organization** is the organization name (not ID) for your HCP Terraform organization.\n- **workspace** is the workspace name (not ID) in the HCP Terraform organization.\n- **$TOKEN** is the API Token used for [authenticating with the HCP Terraform API](\/terraform\/cloud-docs\/api-docs#authentication).\n\nThis script extracts the `path_to_content_directory`, `organization`, and `workspace` from command line arguments, and expects the `$TOKEN` as an environment variable.\n\n```bash\n#!\/bin\/bash\n\nif [ -z \"$1\" ] || [ -z \"$2\" ]; then\n  echo \"Usage: $0 <path_to_content_directory> <organization>\/<workspace>\"\n  exit 0\nfi\n\nCONTENT_DIRECTORY=\"$1\"\nORG_NAME=\"$(cut -d'\/' -f1 <<<\"$2\")\"\nWORKSPACE_NAME=\"$(cut -d'\/' -f2 <<<\"$2\")\"\n```\n\n### 2. Create the File for Upload\n\nThe [configuration version API](\/terraform\/cloud-docs\/api-docs\/configuration-versions) requires a `tar.gz` file to use the configuration version for a run, so you must package the directory containing the Terraform configuration into a `tar.gz` file.\n\n~> **Important:** The configuration directory must be the root of the tar file, with no intermediate directories. In other words, when the tar file is extracted the result must be paths like `.\/main.tf` rather than `.\/terraform-appserver\/main.tf`.\n\n```bash\nUPLOAD_FILE_NAME=\".\/content-$(date +%s).tar.gz\"\ntar -zcvf \"$UPLOAD_FILE_NAME\" -C \"$CONTENT_DIRECTORY\" .\n```\n\n### 3. Look Up the Workspace ID\n\nThe first step identified the organization name and the workspace name; however, the [configuration version API](\/terraform\/cloud-docs\/api-docs\/configuration-versions) expects the workspace ID. As such, the ID has to be looked up. If the workspace ID is already known, this step can be skipped. 
This step uses the [`jq` tool](https:\/\/stedolan.github.io\/jq\/) to parse the JSON output and extract the ID value into the `WORKSPACE_ID` variable.\n\n```bash\nWORKSPACE_ID=($(curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/workspaces\/$WORKSPACE_NAME \\\n  | jq -r '.data.id'))\n```\n\n### 4. Create a New Configuration Version\n\nBefore uploading the configuration files, you must create a `configuration-version` to associate uploaded content with the workspace. This API call performs two tasks: it creates the new configuration version and it extracts the upload URL to be used in the next step.\n\n```bash\necho '{\"data\":{\"type\":\"configuration-versions\"}}' > .\/create_config_version.json\n\nUPLOAD_URL=($(curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @create_config_version.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/$WORKSPACE_ID\/configuration-versions \\\n  | jq -r '.data.attributes.\"upload-url\"'))\n```\n\n### 5. Upload the Configuration Content File\n\nNext, upload the configuration version `tar.gz` file to the upload URL extracted from the previous step. If a file is not uploaded, the configuration version will not be usable, since it will have no Terraform configuration files.\n\nHCP Terraform automatically creates a new run with a plan once the new file is uploaded. If the workspace is configured to auto-apply, it will also apply if the plan succeeds; otherwise, an apply can be triggered via the [Run Apply API](\/terraform\/cloud-docs\/api-docs\/run#apply). If the API token used for the upload lacks permission to apply runs for the workspace, the run can't be auto-applied. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n```bash\ncurl \\\n  --header \"Content-Type: application\/octet-stream\" \\\n  --request PUT \\\n  --data-binary @\"$UPLOAD_FILE_NAME\" \\\n  $UPLOAD_URL\n```\n\n### 6. Delete Temporary Files\n\nIn the previous steps a few files were created; they are no longer needed, so they should be deleted.\n\n```bash\nrm \"$UPLOAD_FILE_NAME\"\nrm .\/create_config_version.json\n```\n\n### Complete Script\n\nCombine all of the code blocks into a single file, `.\/terraform-enterprise-push.sh` and give execution permission to create a combined bash script to perform all of the operations.\n\n```shell\nchmod +x .\/terraform-enterprise-push.sh\n.\/terraform-enterprise-push.sh .\/content my-organization\/my-workspace\n```\n\n**Note**: This script does not have error handling, so for a more robust script consider adding error checking.\n\n**`.\/terraform-enterprise-push.sh`:**\n\n```bash\n#!\/bin\/bash\n\n# Complete script for API-driven runs.\n\n# 1. Define Variables\n\nif [ -z \"$1\" ] || [ -z \"$2\" ]; then\n  echo \"Usage: $0 <path_to_content_directory> <organization>\/<workspace>\"\n  exit 0\nfi\n\nCONTENT_DIRECTORY=\"$1\"\nORG_NAME=\"$(cut -d'\/' -f1 <<<\"$2\")\"\nWORKSPACE_NAME=\"$(cut -d'\/' -f2 <<<\"$2\")\"\n\n# 2. Create the File for Upload\n\nUPLOAD_FILE_NAME=\".\/content-$(date +%s).tar.gz\"\ntar -zcvf \"$UPLOAD_FILE_NAME\" -C \"$CONTENT_DIRECTORY\" .\n\n# 3. Look Up the Workspace ID\n\nWORKSPACE_ID=($(curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/workspaces\/$WORKSPACE_NAME \\\n  | jq -r '.data.id'))\n\n# 4. 
Create a New Configuration Version\n\necho '{\"data\":{\"type\":\"configuration-versions\"}}' > .\/create_config_version.json\n\nUPLOAD_URL=($(curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @create_config_version.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/$WORKSPACE_ID\/configuration-versions \\\n  | jq -r '.data.attributes.\"upload-url\"'))\n\n# 5. Upload the Configuration Content File\n\ncurl \\\n  --header \"Content-Type: application\/octet-stream\" \\\n  --request PUT \\\n  --data-binary @\"$UPLOAD_FILE_NAME\" \\\n  $UPLOAD_URL\n\n# 6. Delete Temporary Files\n\nrm \"$UPLOAD_FILE_NAME\"\nrm .\/create_config_version.json\n```\n\n## Advanced Use Cases\n\nFor advanced use cases refer to the [Terraform Enterprise Automation Script](https:\/\/github.com\/hashicorp\/terraform-guides\/tree\/master\/operations\/automation-script) repository for automating interactions with HCP Terraform, including the creation of a workspace, uploading code, setting variables, and triggering the `plan` and `apply` operations.\n\nIn addition to uploading configurations and starting runs, you can use HCP Terraform's APIs to create and modify workspaces, edit variable values, and more. 
See the [API documentation](\/terraform\/cloud-docs\/api-docs) for more details.","site":"terraform","answers_cleaned":"    page title  API driven Runs   Runs   HCP Terraform description       External tools communicate with HCP Terraform to upload configurations and   trigger runs         The API driven Run Workflow  HCP Terraform has three workflows for managing Terraform runs     The  UI VCS driven run workflow   terraform cloud docs run ui   which is the primary mode of operation    The API driven run workflow described below  which is more flexible but requires you to create some tooling    The  CLI driven run workflow   terraform cloud docs run cli   which uses Terraform s standard CLI tools to execute runs in HCP Terraform      Summary  In the API driven workflow  workspaces are not directly associated with a VCS repo  and runs are not driven by webhooks on your VCS provider   Instead  one of your organization s other tools is in charge of deciding when configuration has changed and a run should occur  Usually this is something like a CI system  or something else capable of monitoring changes to your Terraform code and performing actions in response   Once your other tooling has decided a run should occur  it must make a series of calls to HCP Terraform s  runs  and  configuration versions  APIs to upload configuration files and perform a run with them  For the exact series of API calls  see the  pushing a new configuration version   pushing a new configuration version  section   The most significant difference in this workflow is that HCP Terraform  does not  fetch configuration files from version control  Instead  your own tooling must upload the configurations as a   tar gz  file  This allows you to work with configurations from unsupported version control systems  automatically generate Terraform configurations from some other source of data  or build a variety of other integrations        Important    The script below is provided to illustrate the run 
process  and is not intended for production use  If you want to drive HCP Terraform runs from the command line  please see the  CLI driven run workflow   terraform cloud docs run cli       Pushing a New Configuration Version  Pushing a new configuration to an existing workspace is a multi step process  This section walks through each step in detail  using an example bash script to illustrate   You need queue plans permission to create new configuration versions for the workspace  Refer to the  permissions   terraform cloud docs users teams organizations permissions general workspace permissions  documentation for more details    permissions citation    intentionally unused   keep for maintainers      1  Define Variables  To perform an upload  a few user parameters must be set       path to content directory   is the folder with the terraform configuration  There must be at least one   tf  file in the root of this path      organization   is the organization name  not ID  for your HCP Terraform organization      workspace   is the workspace name  not ID  in the HCP Terraform organization       TOKEN   is the API Token used for  authenticating with the HCP Terraform API   terraform cloud docs api docs authentication    This script extracts the  path to content directory    organization   and  workspace  from command line arguments  and expects the   TOKEN  as an environment variable      bash    bin bash  if    z   1          z   2     then   echo  Usage   0  path to content directory   organization   workspace     exit 0 fi  CONTENT DIRECTORY   1  ORG NAME    cut  d     f1      2    WORKSPACE NAME    cut  d     f2      2             2  Create the File for Upload  The  configuration version API   terraform cloud docs api docs configuration versions  requires a  tar gz  file to use the configuration version for a run  so you must package the directory containing the Terraform configuration into a  tar gz  file        Important    The configuration directory must be 
the root of the tar file  with no intermediate directories  In other words  when the tar file is extracted the result must be paths like    main tf  rather than    terraform appserver main tf       bash UPLOAD FILE NAME    content   date   s  tar gz  tar  zcvf   UPLOAD FILE NAME   C   CONTENT DIRECTORY             3  Look Up the Workspace ID  The first step identified the organization name and the workspace name  however  the  configuration version API   terraform cloud docs api docs configuration versions  expects the workspace ID  As such  the ID has to be looked up  If the workspace ID is already known  this step can be skipped  This step uses the   jq  tool  https   stedolan github io jq   to parse the JSON output and extract the ID value into the  WORKSPACE ID  variable      bash WORKSPACE ID    curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations  ORG NAME workspaces  WORKSPACE NAME       jq  r   data id             4  Create a New Configuration Version  Before uploading the configuration files  you must create a  configuration version  to associate uploaded content with the workspace  This API call performs two tasks  it creates the new configuration version and it extracts the upload URL to be used in the next step      bash echo    data    type   configuration versions         create config version json  UPLOAD URL    curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  create config version json     https   app terraform io api v2 workspaces  WORKSPACE ID configuration versions       jq  r   data attributes  upload url              5  Upload the Configuration Content File  Next  upload the configuration version  tar gz  file to the upload URL extracted from the previous step  If a file is not uploaded  the configuration version will not be usable  since it will have no 
Terraform configuration files   HCP Terraform automatically creates a new run with a plan once the new file is uploaded  If the workspace is configured to auto apply  it will also apply if the plan succeeds  otherwise  an apply can be triggered via the  Run Apply API   terraform cloud docs api docs run apply   If the API token used for the upload lacks permission to apply runs for the workspace  the run can t be auto applied    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers     bash curl       header  Content Type  application octet stream        request PUT       data binary    UPLOAD FILE NAME       UPLOAD URL          6  Delete Temporary Files  In the previous steps a few files were created  they are no longer needed  so they should be deleted      bash rm   UPLOAD FILE NAME  rm   create config version json          Complete Script  Combine all of the code blocks into a single file     terraform enterprise push sh  and give execution permission to create a combined bash script to perform all of the operations      shell chmod  x   terraform enterprise push sh   terraform enterprise push sh   content my organization my workspace        Note    This script does not have error handling  so for a more robust script consider adding error checking        terraform enterprise push sh         bash    bin bash    Complete script for API driven runs     1  Define Variables  if    z   1          z   2     then   echo  Usage   0  path to content directory   organization   workspace     exit 0 fi  CONTENT DIRECTORY   1  ORG NAME    cut  d     f1      2    WORKSPACE NAME    cut  d     f2      2       2  Create the File for Upload  UPLOAD FILE NAME    content   date   s  tar gz  tar  zcvf   UPLOAD FILE NAME   C   CONTENT DIRECTORY       3  Look Up the Workspace ID  WORKSPACE ID    curl       header  Authorization  Bearer  TOKEN        header  Content Type  application 
vnd api json      https   app terraform io api v2 organizations  ORG NAME workspaces  WORKSPACE NAME       jq  r   data id       4  Create a New Configuration Version  echo    data    type   configuration versions         create config version json  UPLOAD URL    curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  create config version json     https   app terraform io api v2 workspaces  WORKSPACE ID configuration versions       jq  r   data attributes  upload url        5  Upload the Configuration Content File  curl       header  Content Type  application octet stream        request PUT       data binary    UPLOAD FILE NAME       UPLOAD URL    6  Delete Temporary Files  rm   UPLOAD FILE NAME  rm   create config version json         Advanced Use Cases  For advanced use cases refer to the  Terraform Enterprise Automation Script  https   github com hashicorp terraform guides tree master operations automation script  repository for automating interactions with HCP Terraform  including the creation of a workspace  uploading code  setting variables  and triggering the  plan  and  apply  operations   In addition to uploading configurations and starting runs  you can use HCP Terraform s APIs to create and modify workspaces  edit variable values  and more  See the  API documentation   terraform cloud docs api docs  for more details "}
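The root-of-archive requirement from step 2 of the workflow above is easy to get wrong, so it is worth checking the archive with `tar -t` before uploading. A self-contained sketch using a throwaway temp directory (nothing here contacts HCP Terraform):

```shell
# Build a sample archive the way step 2 does, then list its contents
# to confirm the entries sit at the archive root (./main.tf, not
# ./some-subdir/main.tf).
WORKDIR=$(mktemp -d)
mkdir -p "$WORKDIR/config"
printf 'terraform {}\n' > "$WORKDIR/config/main.tf"

# -C changes into the configuration directory so "." is the archive root.
tar -zcf "$WORKDIR/content.tar.gz" -C "$WORKDIR/config" .

LISTING=$(tar -ztf "$WORKDIR/content.tar.gz")
echo "$LISTING"

rm -rf "$WORKDIR"
```

If the listing shows an intermediate directory (for example `./config/main.tf`), the upload would produce an unusable configuration version, since the API expects the `.tf` files at the root of the archive.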
{"questions":"terraform customize behavior during runs Learn the effect of the Destroy and Refresh Only modes as well as options to Run Modes and Options HCP Terraform runs support many of the same modes and options available in the Terraform CLI page title Run Modes and Options Runs HCP Terraform","answers":"---\npage_title: Run Modes and Options - Runs - HCP Terraform\ndescription: >-\n  Learn the effect of the Destroy and Refresh-Only modes as well as options to\n  customize behavior during runs.\n---\n\n# Run Modes and Options\n\nHCP Terraform runs support many of the same modes and options available in the Terraform CLI.\n\n## Plan and Apply (Standard)\n\nThe default run mode of HCP Terraform is to perform a plan and then apply it. If you have enabled auto-apply, a successful plan applies immediately. Otherwise, the run waits for user confirmation before applying.\n\n- **CLI:** Use `terraform apply` (without providing a saved plan file).\n- **API:** Create a run without specifying any options that select a different mode.\n- **UI:** From the workspace's overview page, click **+ New run**, and then choose **Plan and apply (standard)** as the run type.\n- **VCS:** When a workspace is connected to a VCS repository, HCP Terraform automatically starts a standard plan and apply when you add new commits to the selected branch of that repository.\n\n## Destroy Mode\n\n[Destroy mode](\/terraform\/cli\/commands\/plan#planning-modes) instructs Terraform to create a plan which destroys all objects, regardless of configuration changes.\n\n- **CLI:** Use `terraform plan -destroy` or `terraform destroy`\n- **API:** Use the `is-destroy` option.\n- **UI:** Use a workspace's **Destruction and Deletion** settings page.\n\n## Plan Only\/Speculative Plan\n\nThis option creates a [speculative plan](\/terraform\/cloud-docs\/run\/remote-operations#speculative-plans). 
The speculative plan shows a set of possible changes and checks them against Sentinel policies, but Terraform can _not_ apply this plan.\n\nYou can create speculative plans with a different Terraform version than the one currently selected for a workspace. This lets you check whether your configuration is compatible with a newer Terraform version without changing the workspace settings.\n\nPlan-only runs ignore the per-workspace run queue. Plan-only runs can proceed even if another run is in progress, can not become the workspace's current run, and do not block progress on a workspace's other runs.\n\n- **API:** Set the `plan-only` option to `true` and specify an available terraform version using the `terraform-version` field.\n- **UI:** From the workspace's overview page, click **+ New run**, and then choose **Plan only** as the run type.\n- **VCS:** When a workspace is connected to a VCS repository, HCP Terraform automatically starts a speculative plan when someone opens a pull request (or merge request) against the selected branch of that repository.  The pull\/merge request view in your VCS links to the speculative plan, and you can also find it in the workspace's run list.\n\n## Saved Plans\n\n-> **Version note:** Using saved plans from the CLI with HCP Terraform requires at least Terraform CLI v1.6.0.\n\nSaved plan runs are very similar to standard plan and apply runs: they perform a plan and then optionally apply it. There are three main differences:\n\n1. _No wait for planning._ Saved plan runs ignore the per-workspace run queue during their plan and checks. Like plan-only runs, saved plans can begin planning even if another run is in progress, without blocking progress on other runs.\n2. _No auto-apply._ Saved plan runs are never auto-applied, even if you enabled auto-apply for the workspace. Saved plans only apply if you confirm them.\n3. 
_Automatic discard for stale plans._ If another run is applied (or the state is otherwise modified) before a saved plan run is confirmed, HCP Terraform automatically discards that saved plan. HCP Terraform may also automatically discard saved plans if they are not confirmed within a few weeks.\n\nSaved plans are ideal for interactive CLI workflows, where you can perform many exploratory plans and then choose one to apply, or for custom continuous integration workflows where the default run queue behavior isn't suitable.\n\n- **CLI:** Use `terraform plan -out <FILE>` to perform and save a plan, then use `terraform apply <FILE>` to apply the saved plan. Use `terraform show <FILE>` to inspect a saved plan before applying it.\n- **API:** Use the `save-plan` option when creating a run. If you create a new configuration version for a saved plan run, use the `provisional` option so that it will not become the workspace's current configuration version until you decide to apply the run.\n\n## Allow Empty Apply\n\nA no-operation (empty) apply enables HCP Terraform to apply a run from a plan that contains no infrastructure changes. During apply, Terraform can upgrade the state version if required. You can use this option to upgrade the state in your HCP Terraform workspace to a new Terraform version. Only some Terraform versions require this, most notably 0.13.\n\nTo make such upgrades easier, empty apply runs will always auto-apply if their plan contains no changes.\n\n~> **Warning:** HCP Terraform cannot guarantee that a plan in this mode will produce no changes. 
We recommend checking the plan for drift before proceeding to the apply stage.\n\n- **API:** Set the `allow-empty-apply` field to `true`.\n- **UI:** From the workspace's overview page, click **+ New run**, and then choose **Allow empty apply** as the run type.\n\n## Refresh-Only Mode\n\n> **Hands-on:** Try the [Use Refresh-Only Mode to Sync Terraform State](\/terraform\/tutorials\/state\/refresh) tutorial.\n\n-> **Version note:** Refresh-only support requires a workspace using at least Terraform CLI v0.15.4.\n\n[Refresh-only mode](\/terraform\/cli\/commands\/plan#planning-modes) instructs Terraform to create a plan that updates the Terraform state to match changes made to remote objects outside of Terraform. This is useful if state drift has occurred and you want to reconcile your state file to match the drifted remote objects. Applying a refresh-only run does not result in further changes to remote objects.\n\n- **CLI:** Use `terraform plan -refresh-only` or `terraform apply -refresh-only`.\n- **API:** Use the `refresh-only` option.\n- **UI:** From the workspace's overview page, click **+ New run**, and then choose **Refresh state** as the run type.\n\n## Skipping Automatic State Refresh\n\nThe [`-refresh=false` option](\/terraform\/cli\/commands\/plan#refresh-false) is used in normal planning mode to skip the default behavior of refreshing Terraform state before checking for configuration changes.\n\n- **CLI:** Use `terraform plan -refresh=false` or `terraform apply -refresh=false`.\n- **API:** Use the `refresh` option.\n\n## Replacing Selected Resources\n\n-> **Version note:** Replace support requires a workspace using at least Terraform CLI v0.15.2.\n\nThe [replace option](\/terraform\/cli\/commands\/plan#replace-address) instructs Terraform to replace the object with the given resource address.\n\n- **CLI:** Use `terraform plan -replace=ADDRESS` or `terraform apply -replace=ADDRESS`.\n- **API:** Use the `replace-addrs` option.\n- **UI:** Click **+ New run** 
and select the **Plan and apply (standard)** run type. Then click **Additional planning options** to reveal the **Replace resources** option. Type the address of the resource that you want to replace. You can replace multiple resources.\n\n## Targeted Plan and Apply\n\n[Resource Targeting](\/terraform\/cli\/commands\/plan#resource-targeting) is intended for exceptional circumstances only and should not be used routinely.\n\n- **CLI:** Use `terraform plan -target=ADDRESS` or `terraform apply -target=ADDRESS`.\n- **API:** Use the `target-addrs` option.\n\nThe usual caveats for targeting in local operations imply some additional limitations on HCP Terraform features for remote plans created with targeting:\n\n- [Sentinel](\/terraform\/cloud-docs\/policy-enforcement) policy checks for targeted plans will see only the selected subset of resource instances planned for changes in [the `tfplan` import](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan) and [the `tfplan\/v2` import](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfplan-v2), which may cause an unintended failure for any policy that requires a planned change to a particular resource instance selected by its address.\n\n- [Cost Estimation](\/terraform\/cloud-docs\/cost-estimation) is disabled for any run created with `-target` set, to prevent producing a misleading underestimate of cost due to resource instances being excluded from the plan.\n\nYou can disable or constrain use of targeting in a particular workspace using a Sentinel policy based on [the `tfrun.target_addrs` value](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfrun#value-target_addrs).\n\n## Generating Configuration\n\n-> **Version note:** Support for `import` blocks and generating configuration requires a workspace using at least Terraform CLI v1.5.0.\n\nWhen using [`import` blocks](\/terraform\/language\/import) to import existing resources, Terraform can [automatically generate 
configuration](\/terraform\/language\/import\/generating-configuration) during the plan for any imported resources that don't have an existing `resource` block. This option is enabled by default for runs started from the UI or from a VCS webhook.\n\n- **CLI:** Use `terraform plan -generate-config-out=generated.tf`.\n- **API:** Use the `allow-config-generation` option.\n\nYou can find generated configuration displayed in the plan UI. If you're using the CLI workflow, Terraform will write generated configuration to the file you specify when running `terraform plan`.\n\nOnce Terraform has generated configuration for you, you'll need to review it, incorporate it in your Terraform configuration (including committing it to version control), then run another plan. If you try to directly apply a plan with generated configuration, the run will error.","site":"terraform"}
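As a sketch of the `save-plan` API option described in the run-modes document above: the option rides in an ordinary create-run request body. This is a minimal illustration only — the workspace ID is a placeholder, and the exact payload shape should be confirmed against the Runs API documentation:

```python
import json

# Illustrative sketch (not the authoritative schema): a JSON:API request
# body for creating a run with the "save-plan" option. The workspace ID
# "ws-example123" is a placeholder.
def saved_plan_run_payload(workspace_id: str) -> str:
    payload = {
        "data": {
            "type": "runs",
            "attributes": {
                # plan now; apply only if the saved plan is later confirmed
                "save-plan": True,
            },
            "relationships": {
                "workspace": {
                    "data": {"type": "workspaces", "id": workspace_id}
                }
            },
        }
    }
    return json.dumps(payload)

body = saved_plan_run_payload("ws-example123")
```

The resulting string would be POSTed to the runs endpoint with the usual `Authorization` and `Content-Type: application/vnd.api+json` headers.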
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the agents and agent pools endpoints List show and delete agents and list show create update and delete agent pools using the HTTP API page title Agent Pools and Agents API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201","answers":"---\npage_title: Agent Pools and Agents - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/agents` and `\/agent-pools` endpoints. List, show, and delete agents, and list, show, create, update, and delete agent pools using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Agent Pools and Agents API\n\nAn Agent Pool represents a group of Agents, often related to one another by sharing a common network 
segment or purpose.\nA workspace may be configured to use one of the organization's agent pools to run remote operations with isolated,\nprivate, or on-premises infrastructure.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/agents.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n## List Agent Pools\n\n`GET \/organizations\/:organization_name\/agent-pools`\n\n| Parameter            | Description                   |\n| -------------------- | ----------------------------- |\n| `:organization_name` | The name of the organization. |\n\nThis endpoint allows you to list agent pools, their agents, and their tokens for an organization.\n\n| Status  | Response                                      | Reason                 |\n| ------- | --------------------------------------------- | ---------------------- |\n| [200][] | [JSON API document][] (`type: \"agent-pools\"`) | Success                |\n| [404][] | [JSON API error object][]                     | Organization not found |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). 
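The pagination parameter names contain literal `[` and `]`, which must be percent-encoded in the request URL. A minimal sketch of building the query string with Python's standard library (the organization name and parameter values are illustrative):

```python
from urllib.parse import urlencode

# urlencode percent-encodes "[" as %5B and "]" as %5D automatically,
# so the bracketed pagination parameters can be passed as plain keys.
params = {"page[number]": 1, "page[size]": 20, "q": "example"}
query = urlencode(params)
# query == "page%5Bnumber%5D=1&page%5Bsize%5D=20&q=example"

url = (
    "https://app.terraform.io/api/v2/organizations/"
    "my-organization/agent-pools?" + query
)
```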
Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter | Description |\n|-----------|-------------|\n| `q` | **Optional.** A search query string. Agent pools are searchable by name. |\n| `sort` | **Optional.** Allows sorting the returned agent pools. Valid values are `\"name\"` and `\"created-at\"`. Prepending a hyphen to the sort parameter will reverse the order (e.g. `\"-name\"`). |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |\n| `page[size]` | **Optional.** If omitted, the endpoint will return 20 agent pools per page.
|\n| `filter[allowed_workspaces][name]` | **Optional.** Filters agent pools to those associated with the given workspace. The workspace must have permission to use the agent pool. Refer to [Scoping Agent Pools to Specific Workspaces](\/terraform\/cloud-docs\/agents#scope-an-agent-pool-to-specific-workspaces). |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/agent-pools\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"apool-yoGUFz5zcRMMz53i\",\n      \"type\": \"agent-pools\",\n      \"attributes\": {\n        \"name\": \"example-pool\",\n        \"created-at\": \"2020-08-05T18:10:26.964Z\",\n        \"organization-scoped\": false,\n        \"agent-count\": 3\n      },\n      \"relationships\": {\n        \"agents\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\/agents\"\n          }\n        },\n        \"authentication-tokens\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\/authentication-tokens\"\n          }\n        },\n        \"workspaces\": {\n          \"data\": [\n            {\n              \"id\": \"ws-9EEkcEQSA3XgWyGe\",\n              \"type\": \"workspaces\"\n            }\n          ]\n        },\n        \"allowed-workspaces\": {\n          \"data\": [\n            {\n              \"id\": \"ws-x9taqV23mxrGcDrn\",\n              \"type\": \"workspaces\"\n            }\n          ]\n        }\n      },\n      \"links\": {\n        
\"self\": \"\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\"\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/agent-pools?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/agent-pools?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/agent-pools?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 1\n    },\n    \"status-counts\": {\n      \"total\": 1,\n      \"matching\": 1\n    }\n  }\n}\n```\n\n## List Agents\n\n`GET \/agent-pools\/:agent_pool_id\/agents`\n\n| Parameter        | Description                       |\n| ---------------- | --------------------------------- |\n| `:agent_pool_id` | The ID of the Agent Pool to list. |\n\n| Status  | Response                                 | Reason                                                       |\n| ------- | ---------------------------------------- | ------------------------------------------------------------ |\n| [200][] | [JSON API document][] (`type: \"agents\"`) | Success                                                      |\n| [404][] | [JSON API error object][]                | Agent Pool not found, or user unauthorized to perform action |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). 
Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter                 | Description                                                                  |\n| ------------------------- | ---------------------------------------------------------------------------- |\n| `filter[last-ping-since]` | **Optional.** Accepts a date in ISO8601 format (ex. `2020-08-11T10:41:23Z`). |\n| `page[number]`            | **Optional.** If omitted, the endpoint will return the first page.           |\n| `page[size]`              | **Optional.** If omitted, the endpoint will return 20 agents per page.       |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-xkuMi7x4LsEnBUdY\/agents\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"agent-A726QeosTCpCumAs\",\n      \"type\": \"agents\",\n      \"attributes\": {\n        \"name\": \"my-cool-agent\",\n        \"status\": \"idle\",\n        \"ip-address\": \"123.123.123.123\",\n        \"last-ping-at\": \"2020-10-09T18:52:25.246Z\"\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/agents\/agent-A726QeosTCpCumAs\"\n      }\n    },\n    {\n      \"id\": \"agent-4cQzjbr1cnM6Pcxr\",\n      \"type\": \"agents\",\n      \"attributes\": {\n        \"name\": \"my-other-cool-agent\",\n        \"status\": \"exited\",\n        \"ip-address\": \"123.123.123.123\",\n        \"last-ping-at\": \"2020-08-12T15:25:09.726Z\"\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/agents\/agent-4cQzjbr1cnM6Pcxr\"\n      }\n    },\n    {\n      \"id\": \"agent-yEJjXQCucpNxtxd2\",\n      \"type\": \"agents\",\n      \"attributes\": {\n        \"name\": null,\n        \"status\": \"errored\",\n        \"ip-address\": \"123.123.123.123\",\n        \"last-ping-at\": 
\"2020-08-11T06:22:20.300Z\"\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/agents\/agent-yEJjXQCucpNxtxd2\"\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\/agents?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\/agents?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\/agents?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 3\n    }\n  }\n}\n```\n\n## Show an Agent Pool\n\n`GET \/agent-pools\/:id`\n\n| Parameter | Description                      |\n| --------- | -------------------------------- |\n| `:id`     | The ID of the Agent Pool to show |\n\n| Status  | Response                                      | Reason                                                       |\n| ------- | --------------------------------------------- | ------------------------------------------------------------ |\n| [200][] | [JSON API document][] (`type: \"agent-pools\"`) | Success                                                      |\n| [404][] | [JSON API error object][]                     | Agent Pool not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-MCf6kkxu5FyHbqhd\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"apool-yoGUFz5zcRMMz53i\",\n    \"type\": \"agent-pools\",\n    \"attributes\": {\n      \"name\": \"example-pool\",\n      \"created-at\": 
\"2020-08-05T18:10:26.964Z\",\n      \"organization-scoped\": false\n    },\n    \"relationships\": {\n      \"agents\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\/agents\"\n        }\n      },\n      \"authentication-tokens\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\/authentication-tokens\"\n        }\n      },\n      \"workspaces\": {\n        \"data\": [\n          {\n            \"id\": \"ws-9EEkcEQSA3XgWyGe\",\n            \"type\": \"workspaces\"\n          }\n        ]\n      },\n      \"allowed-workspaces\": {\n        \"data\": [\n          {\n            \"id\": \"ws-x9taqV23mxrGcDrn\",\n            \"type\": \"workspaces\"\n          }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\"\n    }\n  }\n}\n```\n\n## Show an Agent\n\n`GET \/agents\/:id`\n\n| Parameter | Description                 |\n| --------- | --------------------------- |\n| `:id`     | The ID of the agent to show |\n\n| Status  | Response                                 | Reason                                                  |\n| ------- | ---------------------------------------- | ------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"agents\"`) | Success                                                 |\n| [404][] | [JSON API error object][]                | Agent not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/agents\/agent-73PJNzbZB5idR7AQ\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"agent-Zz9PTEcUgBtYzht8\",\n    \"type\": \"agents\",\n    \"attributes\": {\n      \"name\": \"my-agent\",\n      \"status\": \"busy\",\n      
\"ip-address\": \"123.123.123.123\",\n      \"last-ping-at\": \"2020-09-08T18:47:35.361Z\"\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/agents\/agent-Zz9PTEcUgBtYzht8\"\n    }\n  }\n}\n```\n\nThis endpoint lists details about an agent along with that agent's status. [Learn more about agents statuses](\/terraform\/cloud-docs\/agents\/agent-pools#view-agent-statuses).\n\n## Delete an Agent\n\n`DELETE \/agents\/:id`\n\n| Parameter | Description                   |\n| --------- | ----------------------------- |\n| `:id`     | The ID of the agent to delete |\n\n| Status  | Response                  | Reason                                                                                                       |\n| ------- | ------------------------- | ------------------------------------------------------------------------------------------------------------ |\n| [204][] | No Content                | Success                                                                                                      |\n| [412][] | [JSON API error object][] | Agent is not deletable. Agents must have a status of `unknown`, `errored`, or `exited` before being deleted. |\n| [404][] | [JSON API error object][] | Agent not found, or user unauthorized to perform action                                                      |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/agents\/agent-73PJNzbZB5idR7AQ\n```\n\n## Create an Agent Pool\n\n`POST \/organizations\/:organization_name\/agent-pools`\n\n| Parameter            | Description                   |\n| -------------------- | ----------------------------- |\n| `:organization_name` | The name of the organization. |\n\nThis endpoint allows you to create an Agent Pool for an organization. 
Only one Agent Pool may exist for an organization.\n\n| Status  | Response                                      | Reason                                                         |\n| ------- | --------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"agent-pools\"`) | Agent Pool successfully created                                |\n| [404][] | [JSON API error object][]                     | Organization not found or user unauthorized to perform action  |\n| [422][] | [JSON API error object][]                     | Malformed request body (missing attributes, wrong types, etc.) |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                          | Type   | Default | Description                                                                                                                                                                                                                                      |\n|---------------------------------------------------|--------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `data.type`                                       | string |         | Must be `\"agent-pools\"`.                                                                                                                                                                                                                         |\n| `data.attributes.name`                            | string |         | The name of the agent pool, which can only include letters, numbers, `-`, and `_`. 
This will be used as an identifier and must be unique in the organization. |\n| `data.attributes.organization-scoped` | bool | true | The scope of the agent pool. If true, all workspaces in the organization can use the agent pool. |\n| `data.relationships.allowed-workspaces.data.type` | string |  | Must be `\"workspaces\"`. |\n| `data.relationships.allowed-workspaces.data.id` | string |  | The ID of the workspace that has permission to use the agent pool. Refer to [Scoping Agent Pools to Specific Workspaces](\/terraform\/cloud-docs\/agents#scope-an-agent-pool-to-specific-workspaces). |\n\n### Sample Payload\n\n```json\n{\n    \"data\": {\n        \"type\": \"agent-pools\",\n        \"attributes\": {\n            \"name\": \"my-pool\",\n            \"organization-scoped\": false\n        },\n        \"relationships\": {\n          \"allowed-workspaces\": {\n            \"data\": [\n              {\n                \"id\": \"ws-x9taqV23mxrGcDrn\",\n                \"type\": \"workspaces\"\n              }\n            ]\n          }\n        }\n    }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/agent-pools\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"apool-55jZekR57npjHHYQ\",\n    \"type\": \"agent-pools\",\n    \"attributes\": {\n      \"name\": \"my-pool\",\n      \"created-at\": \"2020-10-13T16:32:45.165Z\",\n      \"organization-scoped\": 
false\n    },\n    \"relationships\": {\n      \"agents\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/agent-pools\/apool-55jZekR57npjHHYQ\/agents\"\n        }\n      },\n      \"authentication-tokens\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/agent-pools\/apool-55jZekR57npjHHYQ\/authentication-tokens\"\n        }\n      },\n      \"workspaces\": {\n        \"data\": []\n      },\n      \"allowed-workspaces\": {\n        \"data\": [\n          {\n            \"id\": \"ws-x9taqV23mxrGcDrn\",\n            \"type\": \"workspaces\"\n          }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/agent-pools\/apool-55jZekR57npjHHYQ\"\n    }\n  }\n}\n```\n\n## Update an Agent Pool\n\n`PATCH \/agent-pools\/:id`\n\n| Parameter | Description                        |\n| --------- | ---------------------------------- |\n| `:id`     | The ID of the Agent Pool to update |\n\n| Status  | Response                                      | Reason                                                         |\n| ------- | --------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"agent-pools\"`) | Success                                                        |\n| [404][] | [JSON API error object][]                     | Agent Pool not found, or user unauthorized to perform action   |\n| [422][] | [JSON API error object][]                     | Malformed request body (missing attributes, wrong types, etc.) 
|\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                          | Type   | Default          | Description                                                                                                                                                                                                                                      |\n|---------------------------------------------------|--------|------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `data.type`                                       | string |                  | Must be `\"agent-pools\"`.                                                                                                                                                                                                                         |\n| `data.attributes.name`                            | string | (previous value) | The name of the agent pool, which can only include letters, numbers, `-`, and `_`. This will be used as an identifier and must be unique in the organization.                                                                                    |\n| `data.attributes.organization-scoped`             | bool   | true             | The scope of the agent pool. If true, all workspaces in the organization can use the agent pool.                                                                                          |\n| `data.relationships.allowed-workspaces.data.type` | string |                  | Must be `\"workspaces\"`.                                                                                                                                             
                      |\n| `data.relationships.allowed-workspaces.data.id`   | string |                  | The ID of the workspace that has permission to use the agent pool. Refer to [Scoping Agent Pools to Specific Workspaces](\/terraform\/cloud-docs\/agents#scope-an-agent-pool-to-specific-workspaces). |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"agent-pools\",\n    \"attributes\": {\n      \"name\": \"example-pool\",\n      \"organization-scoped\": false\n    },\n    \"relationships\": {\n      \"allowed-workspaces\": {\n        \"data\": [\n          {\n            \"id\": \"ws-x9taqV23mxrGcDrn\",\n            \"type\": \"workspaces\"\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-MCf6kkxu5FyHbqhd\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"apool-yoGUFz5zcRMMz53i\",\n    \"type\": \"agent-pools\",\n    \"attributes\": {\n      \"name\": \"example-pool\",\n      \"created-at\": \"2020-08-05T18:10:26.964Z\"\n    },\n    \"relationships\": {\n      \"agents\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\/agents\"\n        }\n      },\n      \"authentication-tokens\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\/authentication-tokens\"\n        }\n      },\n      \"workspaces\": {\n        \"data\": [\n          {\n            \"id\": \"ws-9EEkcEQSA3XgWyGe\",\n            \"type\": \"workspaces\"\n          }\n        ]\n      },\n      \"allowed-workspaces\": {\n        \"data\": [\n          {\n            \"id\": \"ws-x9taqV23mxrGcDrn\",\n            \"type\": \"workspaces\"\n          }\n        ]\n      }\n    },\n    \"links\": {\n      
\"self\": \"\/api\/v2\/agent-pools\/apool-yoGUFz5zcRMMz53i\"\n    }\n  }\n}\n```\n\n## Delete an Agent Pool\n\n`DELETE \/agent-pools\/:agent_pool_id`\n\n| Parameter        | Description                        |\n| ---------------- | ---------------------------------- |\n| `:agent_pool_id` | The ID of the agent pool to delete |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-MCf6kkxu5FyHbqhd\n```\n\n### Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\n| Resource Name  | Description                                 |\n| -------------- | ------------------------------------------- |\n| `workspaces`   | The workspaces attached to this agent pool. 
|","site":"terraform"}
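Hand-editing JSON API payloads like the samples above makes it easy to introduce syntax errors such as trailing commas. As an illustrative sketch (not part of the official documentation), the create-pool payload can instead be generated with `jq`, reusing the sample pool name and workspace ID from the examples above:

```shell
# Build payload.json for the "Create an Agent Pool" endpoint with jq,
# so the JSON is guaranteed to be well-formed. The values below are the
# sample ones used throughout this page; substitute your own.
POOL_NAME="my-pool"
WORKSPACE_ID="ws-x9taqV23mxrGcDrn"

jq -n --arg name "$POOL_NAME" --arg ws "$WORKSPACE_ID" '{
  data: {
    type: "agent-pools",
    attributes: {name: $name, "organization-scoped": false},
    relationships: {
      "allowed-workspaces": {data: [{id: $ws, type: "workspaces"}]}
    }
  }
}' > payload.json

# Then POST it exactly as in the sample request:
# curl --header "Authorization: Bearer $TOKEN" \
#      --header "Content-Type: application/vnd.api+json" \
#      --request POST --data @payload.json \
#      https://app.terraform.io/api/v2/organizations/my-organization/agent-pools
```

Because `jq` constructs the document rather than interpolating strings, names containing shell-special characters are quoted correctly and the output always parses as JSON.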
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 Use these endpoints to control availability of no code provisioning and designate variable values for a no code module in your organization page title No Code Provisioning API Docs HCP Terraform","answers":"---\npage_title: No-Code Provisioning - API Docs - HCP Terraform\ndescription: >-\n  Use these endpoints to control availability of no-code provisioning and designate variable values for a no-code module in your organization.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: http:\/\/jsonapi.org\/format\/#error-objects\n\n# No-code provisioning API\n\nThe no-code provisioning API allows you to create, configure, and deploy Terraform modules in the no-code provisioning 
workflows within HCP Terraform. For more information on no-code modules, refer to [Designing No-Code Ready Modules](\/terraform\/cloud-docs\/no-code-provisioning\/module-design).\n\n## Enable no-code provisioning for a module\n\n`POST \/organizations\/:organization_name\/no-code-modules`\n\n| Parameter            | Description                                          |\n| -------------------- | ---------------------------------------------------- |\n| `:organization_name` | The name of the organization the module belongs to.  |\n\nTo deploy a module using the no-code provisioning workflow, the following requirements must be met:\n\n1. The module must exist in your organization's private registry.\n1. The module must meet the [design requirements](\/terraform\/cloud-docs\/no-code-provisioning\/module-design#requirements) for a no-code module.\n1. You must enable no-code provisioning for the module.\n\nYou can send a `POST` request to the `\/no-code-modules` endpoint to enable no-code provisioning for a specific module. You can also call this endpoint to set options for the allowed values of a variable for a no-code module in your organization.\n\n-> **Note**: This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                                          | Reason                                                                |\n| ------- | ------------------------------------------------- | --------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"no-code-modules\"`) | Successfully enabled a module for no-code provisioning.               
|\n| [404][] | [JSON API error object][]                         | Not found, or the user is unauthorized to perform this action.        |\n| [422][] | [JSON API error object][]                         | Malformed request body (e.g., missing attributes, wrong types, etc.). |\n| [500][] | [JSON API error object][]                         | Internal system failure.                                              |\n\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type   | Default | Description |\n| --- | --- | --- | --- |\n| `data.type` | string | | Must be `\"no-code-modules\"`. |\n| `data.attributes.version-pin` | string   |  Latest version of the module | The module version to use in no-code provisioning workflows. |\n| `data.attributes.enabled` | boolean   |  `false`  | Set to `true` to enable no-code provisioning workflows. |\n| `data.relationships.registry-module.data.id` | string |  | The ID of a module in the organization's private registry. |\n| `data.relationships.registry-module.data.type` | string |  | Must be `\"registry-module\"`. |\n| `data.relationships.variable-options.data[].type` | string | | Must be `\"variable-options\"`. |\n| `data.relationships.variable-options.data[].attributes.variable-name` | string | | The name of a variable within the module. |\n| `data.relationships.variable-options.data[].attributes.variable-type` | string | | The data type for the variable. Can be [any type supported by Terraform](\/terraform\/language\/expressions\/types#types).  |\n| `data.relationships.variable-options.data[].attributes.options` | Any[] | | A list of allowed values for the variable. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"no-code-modules\",\n    \"attributes\": {\n      \"version-pin\":  \"1.0.1\",\n      \"enabled\": true\n    },\n    \"relationships\": {\n      \"registry-module\": {\n        \"data\": {\n          \"id\": \"mod-2aaFrmRPZs2N9epr\",\n          \"type\": \"registry-module\"\n        }\n      },\n      \"variable-options\": {\n        \"data\": [\n          {\n            \"type\": \"variable-options\",\n            \"attributes\": {\n              \"variable-name\": \"amis\",\n              \"variable-type\": \"string\",\n              \"options\": [\n                \"ami-1\",\n                \"ami-2\",\n                \"ami-3\"\n              ]\n            }\n          },\n          {\n            \"type\": \"variable-options\",\n            \"attributes\": {\n              \"variable-name\": \"region\",\n              \"variable-type\": \"string\",\n              \"options\": [\n                \"eu-north-1\",\n                \"us-east-2\",\n                \"us-west-1\"\n              ]\n            }\n          }\n        ]\n      }\n    }\n  }\n}\n\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/no-code-modules\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"nocode-9HE91XDNY3faePn2\",\n    \"type\": \"no-code-modules\",\n    \"attributes\": {\n      \"enabled\": true,\n      \"version-pin\": \"1.0.1\"\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\"\n        }\n      },\n      \"registry-module\": {\n        \"data\": {\n         
 \"id\": \"mod-2aaFrmRPZs2N9epr\",\n          \"type\": \"registry-modules\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/registry-modules\/mod-2aaFrmRPZs2N9epr\"\n        }\n      },\n      \"variable-options\": {\n        \"data\": [\n          {\n            \"id\": \"ncvaropt-fcHDfnZ1EGdRzFNC\",\n            \"type\": \"variable-options\"\n          },\n          {\n            \"id\": \"ncvaropt-dZMfdh9KBcwFjyv2\",\n            \"type\": \"variable-options\"\n          }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/no-code-modules\/nocode-9HE91XDNY3faePn2\"\n    }\n  }\n}\n```\n\n## Update no-code provisioning settings\n\n`PATCH \/no-code-modules\/:id`\n\n| Parameter | Description                                  |\n| --------- | -------------------------------------------- |\n| `:id`     | The unique identifier of the no-code module. |\n\nSend a `PATCH` request to the `\/no-code-modules\/:id` endpoint to update the settings for the no-code provisioning of a module. You can update the following settings:\n\n- Enable or disable no-code provisioning.\n- Adjust the set of options for allowed variable values.\n- Change the module version being provisioned.\n- Change the module being provisioned.\n\nThe [API call that enables no-code provisioning for a module](#enable-no-code-provisioning-for-a-module) returns that module's unique identifier.\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). 
You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                                          | Reason                                                                |\n| ------- | ------------------------------------------------- | --------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"no-code-modules\"`) | Successfully updated a no-code module.                                |\n| [404][] | [JSON API error object][]                         | Not found, or the user is unauthorized to perform this action.        |\n| [422][] | [JSON API error object][]                         | Malformed request body (e.g., missing attributes, wrong types, etc.). |\n| [500][] | [JSON API error object][]                         | Internal system failure.                                              |\n\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type | Default | Description |\n| --- | --- | --- | --- |\n| `data.type` | string | | Must be `\"no-code-modules\"`. |\n| `data.attributes.version-pin` | string   | Existing value | The module version to use in no-code provisioning workflows. |\n| `data.attributes.enabled` | boolean   | Existing value | Set to `true` to enable no-code provisioning workflows, or `false` to disable them. |\n| `data.relationships.registry-module.data.id` | string | Existing value | The ID of a module in the organization's Private Registry. |\n| `data.relationships.registry-module.data.type` | string | Existing value | Must be `\"registry-module\"`. |\n| `data.relationships.variable-options.data[].id` | string | | The ID of an existing variable-options set. 
If provided, a new variable-options set replaces the set with this ID. If not provided, this creates a new variable-option set. |\n| `data.relationships.variable-options.data[].type` | string | | Must be `\"variable-options\"`. |\n| `data.relationships.variable-options.data[].attributes.variable-name` | string | | The name of a variable within the module. |\n| `data.relationships.variable-options.data[].attributes.variable-type` | string | | The data type for the variable. Can be [any type supported by Terraform](\/terraform\/language\/expressions\/types#types). |\n| `data.relationships.variable-options.data[].attributes.options` | Any[] | | A list of allowed values for the variable. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"no-code-modules\",\n    \"attributes\": {\n      \"enabled\": false\n    },\n    \"relationships\": {\n      \"registry-module\": {\n        \"data\": {\n          \"id\": \"mod-zyai9dwH4VPPaVuC\",\n          \"type\": \"registry-module\"\n        }\n      },\n      \"variable-options\": {\n        \"data\": [\n          {\n            \"id\": \"ncvaropt-fcHDfnZ1EGdRzFNC\",\n            \"type\": \"variable-options\",\n            \"attributes\": {\n              \"variable-name\": \"Linux AMIs\",\n              \"variable-type\": \"array\",\n              \"options\": [\n                \"Xenial Xerus\",\n                \"Trusty Tahr\"\n              ]\n            }\n          },\n          {\n            \"id\": \"ncvaropt-dZMfdh9KBcwFjyv2\",\n            \"type\": \"variable-options\",\n            \"attributes\": {\n              \"variable-name\": \"region\",\n              \"variable-type\": \"array\",\n              \"options\": [\n                \"eu-north-1\",\n                \"us-east-2\",\n                \"us-west-1\"\n              ]\n            }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  
--header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/no-code-modules\/nocode-9HE91XDNY3faePn2\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"nocode-9HE91XDNY3faePn2\",\n    \"type\": \"no-code-modules\",\n    \"attributes\": {\n      \"enabled\": true,\n      \"version-pin\": \"1.0.1\"\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\"\n        }\n      },\n      \"registry-module\": {\n        \"data\": {\n          \"id\": \"mod-2aaFrmRPZs2N9epr\",\n          \"type\": \"registry-modules\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/registry-modules\/mod-2aaFrmRPZs2N9epr\"\n        }\n      },\n      \"variable-options\": {\n        \"data\": [\n          {\n            \"id\": \"ncvaropt-fcHDfnZ1EGdRzFNC\",\n            \"type\": \"variable-options\"\n          },\n          {\n            \"id\": \"ncvaropt-dZMfdh9KBcwFjyv2\",\n            \"type\": \"variable-options\"\n          }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/no-code-modules\/nocode-9HE91XDNY3faePn2\"\n    }\n  }\n}\n```\n\n## Read a no-code module's properties\n\n\n`GET \/no-code-modules\/:id`\n\n\n| Parameter | Description                                  |\n| --------- | -------------------------------------------- |\n| `:id`     | The unique identifier of the no-code module. 
|\n\nUse this API to read the details of an existing no-code module.\n\nThe [API call that enables no-code provisioning for a module](#enable-no-code-provisioning-for-a-module) returns that module's unique identifier.\n\n| Status  | Response                                          | Reason                                                                |\n| ------- | ------------------------------------------------- | --------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"no-code-modules\"`) | Successfully read the no-code module.                                 |\n| [400][] | [JSON API error object][]                         | Invalid `include` parameter.                                          |\n| [404][] | [JSON API error object][]                         | Not found, or the user is unauthorized to perform this action.        |\n| [422][] | [JSON API error object][]                         | Malformed request body (e.g., missing attributes, wrong types, etc.). |\n| [500][] | [JSON API error object][]                         | Internal system failure.                                              |\n\n### Query Parameters\n\nThis endpoint uses our [standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Use HTML URL encoding syntax, such as `%5B` to represent `[` and `%5D` to represent `]`, if your tooling does not automatically encode URLs.\n\nTerraform returns related resources when you add the [`include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources) to the request.\n\n| Parameter | Description                                        |\n| --------- | -------------------------------------------------- |\n| `include` | List related resources to include in the response. 
|\n\nThe following resource types are available:\n\n| Resource Name      | Description                                               |\n| ------------------ | --------------------------------------------------------- |\n| `variable_options` | Module variables with a configured set of allowed values. |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/no-code-modules\/nocode-9HE91XDNY3faePn2?include=variable_options\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"nocode-9HE91XDNY3faePn2\",\n    \"type\": \"no-code-modules\",\n    \"attributes\": {\n      \"enabled\": true,\n      \"version-pin\": \"1.0.1\"\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\"\n        }\n      },\n      \"registry-module\": {\n        \"data\": {\n          \"id\": \"mod-2aaFrmRPZs2N9epr\",\n          \"type\": \"registry-modules\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/registry-modules\/mod-2aaFrmRPZs2N9epr\"\n        }\n      },\n      \"variable-options\": {\n        \"data\": [\n          {\n            \"id\": \"ncvaropt-fcHDfnZ1EGdRzFNC\",\n            \"type\": \"variable-options\"\n          },\n          {\n            \"id\": \"ncvaropt-dZMfdh9KBcwFjyv2\",\n            \"type\": \"variable-options\"\n          }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/no-code-modules\/nocode-9HE91XDNY3faePn2\"\n    }\n  },\n  \"included\": [\n    {\n      \"id\": \"ncvaropt-fcHDfnZ1EGdRzFNC\",\n      \"type\": \"variable-options\",\n      \"attributes\": {\n        \"variable-name\": \"Linux AMIs\",\n        \"variable-type\": \"array\",\n        
\"options\": [\n          \"Xenial Xerus\",\n          \"Trusty Tahr\"\n        ]\n      },\n      \"relationships\": {\n        \"no-code-allowed-module\": {\n          \"data\": {\n            \"id\": \"nocode-9HE91XDNY3faePn2\",\n            \"type\": \"no-code-allowed-modules\"\n          }\n        }\n      }\n    },\n    {\n      \"id\": \"ncvaropt-dZMfdh9KBcwFjyv2\",\n      \"type\": \"variable-options\",\n      \"attributes\": {\n        \"variable-name\": \"region\",\n        \"variable-type\": \"array\",\n        \"options\": [\n          \"eu-north-1\",\n          \"us-east-2\",\n          \"us-west-1\"\n        ]\n      },\n      \"relationships\": {\n        \"no-code-allowed-module\": {\n          \"data\": {\n            \"id\": \"nocode-9HE91XDNY3faePn2\",\n            \"type\": \"no-code-allowed-modules\"\n          }\n        }\n      }\n    }\n  ]\n}\n```\n\n## Create a no-code module workspace\n\nThis endpoint creates a workspace from a no-code module.\n\n`POST \/no-code-modules\/:id\/workspaces`\n\n| Parameter            | Description                                |\n| -------------------- | ------------------------------------------ |\n| `:id`                | The ID of the no-code module to provision. |\n\nEach HCP Terraform organization has a list of which modules you can use to create workspaces using no-code provisioning. You can use this API to create a workspace by selecting a no-code module to enable a no-code provisioning workflow.\n\n-> **Note**: This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). 
You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                                     | Reason                                                                           |\n| ------- | -------------------------------------------- | -------------------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"workspaces\"`) | Successfully created a workspace from a no-code module for no-code provisioning. |\n| [404][] | [JSON API error object][]                    | Not found, or the user is unauthorized to perform this action.                   |\n| [422][] | [JSON API error object][]                    | Malformed request body (e.g., missing attributes, wrong types, etc.).            |\n| [500][] | [JSON API error object][]                    | Internal system failure.                                                         |\n\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type   | Default | Description |\n| --- | --- | --- | ---  |\n| `data.type`                                             | string  | none      | Must be `\"workspaces\"`. |\n| `data.attributes.agent-pool-id`                         | string  | none | Required when `execution-mode` is set to `agent`. The ID of the agent pool belonging to the workspace's organization. This value must not be specified if `execution-mode` is set to `remote`. 
|\n| `data.attributes.auto_apply`                            | boolean |  `false`  | If `true`, Terraform automatically applies changes when a Terraform `plan` is successful. |\n| `data.attributes.description`                           | string  | `\"\"`      | A description for the workspace. |\n| `data.attributes.execution-mode`                        | string  | none | Which [execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) to use. Valid values are `remote` and `agent`. When not provided, defaults to `remote`. |\n| `data.attributes.name`                                  | string  | none      | The name of the workspace. You can only include letters, numbers, `-`, and `_`. Terraform uses this value to identify the workspace; it must be unique in the organization. |\n| `data.attributes.source-name`                           | string  | none | A friendly name for the application or client creating this workspace. If set, this will be displayed on the workspace as \"Created via `<SOURCE NAME>`\". |\n| `data.attributes.source-url`                            | string  | (nothing) | A URL for the application or client creating this workspace. 
This can be the URL of a related resource in another app, or a link to documentation or other info about the client. |\n| `data.relationships.project.data.id`                    | string  |           | The ID of the project to create the workspace in. You must have permission to create workspaces in the project, either by organization-level permissions or team admin access to a specific project. |\n| `data.relationships.project.data.type`                  | string  |           | Must be `\"project\"`. |\n| `data.relationships.vars.data[].type`                   | string  |           | Must be `\"vars\"`. |\n| `data.relationships.vars.data[].attributes.key`         | string  |           | The name of the variable. |\n| `data.relationships.vars.data[].attributes.value`       | string  | `\"\"`      | The value of the variable. |\n| `data.relationships.vars.data[].attributes.description` | string  |           | The description of the variable. |\n| `data.relationships.vars.data[].attributes.category`    | string  |           | Whether this is a Terraform or environment variable. Valid values are `\"terraform\"` or `\"env\"`. |\n| `data.relationships.vars.data[].attributes.hcl`         | boolean | `false`   | Whether to evaluate the value of the variable as a string of HCL code. Has no effect for environment variables. |\n| `data.relationships.vars.data[].attributes.sensitive`   | boolean | `false`   | Whether the value is sensitive. If `true` then the variable is written once and not visible thereafter. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"workspaces\",\n    \"attributes\": {\n      \"name\":  \"no-code-workspace\",\n      \"description\": \"A workspace to enable the no-code provisioning workflow.\"\n    },\n    \"relationships\": {\n      \"project\": {\n        \"data\": {\n          \"id\": \"prj-yuEN6sJVra5t6XVy\",\n          \"type\": \"project\"\n        }\n      },\n      \"vars\": {\n        \"data\": [\n          {\n            \"type\": \"vars\",\n            \"attributes\": {\n              \"key\": \"region\",\n              \"value\": \"eu-central-1\",\n              \"category\": \"terraform\",\n              \"hcl\": true,\n              \"sensitive\": false\n            }\n          },\n          {\n            \"type\": \"vars\",\n            \"attributes\": {\n              \"key\": \"ami\",\n              \"value\": \"ami-077062\",\n              \"category\": \"terraform\",\n              \"hcl\": true,\n              \"sensitive\": false\n            }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/no-code-modules\/nocode-WGckovT2RQxupyt1\/workspaces\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"ws-qACTToFUM5BvDKhC\",\n    \"type\": \"workspaces\",\n    \"attributes\": {\n      \"allow-destroy-plan\": true,\n      \"auto-apply\": false,\n      \"auto-destroy-at\": null,\n      \"auto-destroy-status\": null,\n      \"created-at\": \"2023-09-08T10:36:04.391Z\",\n      \"environment\": \"default\",\n      \"locked\": false,\n      \"name\": \"no-code-workspace\",\n      \"queue-all-runs\": false,\n      \"speculative-enabled\": true,\n      \"structured-run-output-enabled\": true,\n      \"terraform-version\": \"1.5.6\",\n      
\"working-directory\": null,\n      \"global-remote-state\": true,\n      \"updated-at\": \"2023-09-08T10:36:04.427Z\",\n      \"resource-count\": 0,\n      \"apply-duration-average\": null,\n      \"plan-duration-average\": null,\n      \"policy-check-failures\": null,\n      \"run-failures\": null,\n      \"workspace-kpis-runs-count\": null,\n      \"latest-change-at\": \"2023-09-08T10:36:04.391Z\",\n      \"operations\": true,\n      \"execution-mode\": \"remote\",\n      \"vcs-repo\": null,\n      \"vcs-repo-identifier\": null,\n      \"permissions\": {\n        \"can-update\": true,\n        \"can-destroy\": true,\n        \"can-queue-run\": true,\n        \"can-read-variable\": true,\n        \"can-update-variable\": true,\n        \"can-read-state-versions\": true,\n        \"can-read-state-outputs\": true,\n        \"can-create-state-versions\": true,\n        \"can-queue-apply\": true,\n        \"can-lock\": true,\n        \"can-unlock\": true,\n        \"can-force-unlock\": true,\n        \"can-read-settings\": true,\n        \"can-manage-tags\": true,\n        \"can-manage-run-tasks\": true,\n        \"can-force-delete\": true,\n        \"can-manage-assessments\": true,\n        \"can-manage-ephemeral-workspaces\": true,\n        \"can-read-assessment-results\": true,\n        \"can-queue-destroy\": true\n      },\n      \"actions\": {\n        \"is-destroyable\": true\n      },\n      \"description\": null,\n      \"file-triggers-enabled\": true,\n      \"trigger-prefixes\": [],\n      \"trigger-patterns\": [],\n      \"assessments-enabled\": false,\n      \"last-assessment-result-at\": null,\n      \"source\": \"tfe-module\",\n      \"source-name\": null,\n      \"source-url\": null,\n      \"source-module-id\": \"private\/my-organization\/lambda\/aws\/1.0.9\",\n      \"no-code-upgrade-available\": false,\n      \"tag-names\": [],\n      \"setting-overwrites\": {\n        \"execution-mode\": false,\n        \"agent-pool\": false\n      }\n    },\n    
\"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        }\n      },\n      \"current-run\": {\n        \"data\": null\n      },\n      \"latest-run\": {\n        \"data\": null\n      },\n      \"outputs\": {\n        \"data\": []\n      },\n      \"remote-state-consumers\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/workspaces\/ws-qACTToFUM5BvDKhC\/relationships\/remote-state-consumers\"\n        }\n      },\n      \"current-state-version\": {\n        \"data\": null\n      },\n      \"current-configuration-version\": {\n        \"data\": {\n          \"id\": \"cv-vizi2p3mnrt3utgA\",\n          \"type\": \"configuration-versions\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/configuration-versions\/cv-vizi2p3mnrt3utgA\"\n        }\n      },\n      \"agent-pool\": {\n        \"data\": null\n      },\n      \"readme\": {\n        \"data\": null\n      },\n      \"project\": {\n        \"data\": {\n          \"id\": \"prj-yuEN6sJVra5t6XVy\",\n          \"type\": \"projects\"\n        }\n      },\n      \"current-assessment-result\": {\n        \"data\": null\n      },\n      \"no-code-module-version\": {\n        \"data\": {\n          \"id\": \"nocodever-vFcQjZLs3ZHTe4TU\",\n          \"type\": \"no-code-module-versions\"\n        }\n      },\n      \"vars\": {\n        \"data\": []\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/my-organization\/workspaces\/no-code-workspace\",\n      \"self-html\": \"\/app\/my-organization\/workspaces\/no-code-workspace\"\n    }\n  }\n}\n```\n\n## Initiate a no-code workspace update\n\nUpgrading a workspace's no-code provisioning settings is a multi-step process. \n\n1. Use this API to initiate the update. HCP Terraform starts a new plan, which describes the resources to add, update, or remove from the workspace.\n1. 
Poll the [read workspace upgrade plan status API](#read-workspace-upgrade-plan-status) to wait for HCP Terraform to complete the plan.\n1. Use the [confirm and apply a workspace upgrade plan API](#confirm-and-apply-a-workspace-upgrade-plan) to complete the workspace upgrade.\n\n`POST \/no-code-modules\/:no_code_module_id\/workspaces\/:id\/upgrade`\n\n| Parameter            | Description                    |\n| -------------------- | -------------------------------|\n| `:no_code_module_id` | The ID of the no-code module.  |\n| `:id`                | The ID of the workspace.       |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type   | Default | Description |\n| --- | --- | --- | --- |\n| `data.type`                                             | string  |           | Must be `\"workspaces\"`. |\n| `data.relationships.vars.data[].type`                   | string  |           | Must be `\"vars\"`.    |\n| `data.relationships.vars.data[].attributes.key`         | string  |           | The name of the variable. |\n| `data.relationships.vars.data[].attributes.value`       | string  | `\"\"`      | The value of the variable. |\n| `data.relationships.vars.data[].attributes.description` | string  |           | The description of the variable. |\n| `data.relationships.vars.data[].attributes.category`    | string  |           | Whether this is a Terraform or environment variable. Valid values are `\"terraform\"` or `\"env\"`. |\n| `data.relationships.vars.data[].attributes.hcl`         | boolean | `false`   | Whether to evaluate the value of the variable as a string of HCL code. Has no effect for environment variables. |\n| `data.relationships.vars.data[].attributes.sensitive`   | boolean | `false`   | Whether the value is sensitive. If `true` then the variable is written once and not visible thereafter. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"workspaces\",\n    \"relationships\": {\n      \"vars\": {\n        \"data\": [\n          {\n            \"type\": \"vars\",\n            \"attributes\": {\n              \"key\": \"region\",\n              \"value\": \"eu-central-1\",\n              \"category\": \"terraform\",\n              \"hcl\": true,\n              \"sensitive\": false\n            }\n          },\n          {\n            \"type\": \"vars\",\n            \"attributes\": {\n              \"key\": \"ami\",\n              \"value\": \"ami-077062\",\n              \"category\": \"terraform\",\n              \"hcl\": true,\n              \"sensitive\": false\n            }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/no-code-modules\/nocode-WGckovT2RQxupyt1\/workspaces\/ws-qACTToFUM5BvDKhC\/upgrade\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"run-Cyij8ctBHM1g5xdX\",\n    \"type\": \"workspace-upgrade\",\n    \"attributes\": {\n      \"status\": \"planned\",\n      \"plan-url\": \"https:\/\/app.terraform.io\/app\/my-organization\/no-code-workspace\/runs\/run-Cyij8ctBHM1g5xdX\"\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-VvKtcfueHNkR6GqP\",\n          \"type\": \"workspaces\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Read workspace upgrade plan status\n\nThis endpoint returns the plan details and status for updating a workspace to new no-code provisioning settings.\n\n`GET \/no-code-modules\/:no_code_module_id\/workspaces\/:workspace_id\/upgrade\/:id`\n\n| Parameter            | Description                    |\n| -------------------- | ------------------------------ |\n| 
`:no_code_module_id` | The ID of the no-code module.  |\n| `:workspace_id`      | The ID of the workspace.       |\n| `:id`                | The ID of the update.          |\n\nReturns the details of a no-code workspace update run, including the run's current state, such as `pending`, `fetching`, `planning`, `planned`, or `cost_estimated`. Refer to [Run States and Stages](\/terraform\/enterprise\/run\/states) for more information on the states a run can return.\n\n| Status  | Response                                                | Reason                                                              |\n| ------- | ------------------------------------------------------- | ------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"workspace-upgrade\"`)     | Success                                                             |\n| [404][] | [JSON API error object][]                               | Workspace upgrade not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/no-code-modules\/nocode-WGckovT2RQxupyt1\/workspaces\/ws-qACTToFUM5BvDKhC\/upgrade\/run-Cyij8ctBHM1g5xdX\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"run-Cyij8ctBHM1g5xdX\",\n    \"type\": \"workspace-upgrade\",\n    \"attributes\": {\n      \"status\": \"planned_and_finished\",\n      \"plan-url\": \"https:\/\/app.terraform.io\/app\/my-organization\/no-code-workspace\/runs\/run-Cyij8ctBHM1g5xdX\"\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-VvKtcfueHNkR6GqP\",\n          \"type\": \"workspaces\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Confirm and apply a workspace upgrade plan\n\nUse this endpoint to confirm an update and finalize the update for a 
workspace to use new no-code provisioning settings.\n\n`POST \/no-code-modules\/:no_code_module_id\/workspaces\/:workspace_id\/upgrade\/:id`\n\n| Parameter            | Description                    |\n| -------------------- | ------------------------------ |\n| `:no_code_module_id` | The ID of the no-code module.  |\n| `:workspace_id`      | The ID of the workspace.       |\n| `:id`                | The ID of the update.          |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/no-code-modules\/nocode-WGckovT2RQxupyt1\/workspaces\/ws-qACTToFUM5BvDKhC\/upgrade\/run-Cyij8ctBHM1g5xdX\n```\n\n### Sample Response\n\n```json\n{ \"Workspace update completed\" }\n```","site":"terraform","answers_cleaned":"    page title  No Code Provisioning   API Docs   HCP Terraform description       Use these endpoints to control availability of no code provisioning and designate variable values for a no code module in your organization        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   
workspace id upgrade  id     Parameter              Description                                                                                    no code module id    The ID of the no code module         workspace id         The ID of workspace                  id                   The ID of update                  Returns the details of a no code workspace update run  including the run s current state  such as  pending    fetching    planning    planned   or  cost estimated   Refer to  Run States and Stages   terraform enterprise run states  for more information on the states a run can return     Status    Response                                                  Reason                                                                                                                                                                                                               200       JSON API document      type   workspace upgrade          Success                                                                  404       JSON API error object                                    Workspace upgrade not found  or user unauthorized to perform action         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 no code modules nocode WGckovT2RQxupyt1 workspaces ws qACTToFUM5BvDKhC upgrade run Cyij8ctBHM1g5xdX          Sample Response     json      data          id    run Cyij8ctBHM1g5xdX        type    workspace upgrade        attributes            status    planned and finished          plan url    https   app terraform io app my organization no code workspace runs run Cyij8ctBHM1g5xdX              relationships            workspace              data                id    ws VvKtcfueHNkR6GqP              type    workspaces                                        Confirm and apply a workspace upgrade plan  Use this endpoint to confirm an update and 
finalize the update for a workspace to use new no code provisioning settings    POST  no code modules  no code module id workspaces  workspace id upgrade  id     Parameter              Description                                                                                    no code module id    The ID of the no code module         workspace id         The ID of workspace                  id                   The ID of update                      Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST     https   app terraform io api v2 no code modules nocode WGckovT2RQxupyt1 workspaces ws qACTToFUM5BvDKhC upgrade run Cyij8ctBHM1g5xdX          Sample Response     json    Workspace update completed       "}
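The multi-step upgrade flow described above (initiate, then poll the status endpoint, then confirm) can be sketched as follows. This is an illustrative helper, not part of the HCP Terraform API or client libraries: `get_status` is an injected callable standing in for a `GET` to the upgrade-status endpoint that returns `data.attributes.status`, and the in-progress states follow the run states the docs list.

```python
import time

# Hypothetical sketch: poll a workspace-upgrade plan (step 2 of the flow)
# until it leaves an in-progress run state. `get_status` would, in practice,
# GET /no-code-modules/:no_code_module_id/workspaces/:workspace_id/upgrade/:id
# and return the "status" attribute from the response.
IN_PROGRESS = {"pending", "fetching", "planning"}

def wait_for_plan(get_status, interval=1.0, max_polls=60):
    """Return the first status outside the in-progress set (e.g. "planned")."""
    for _ in range(max_polls):
        status = get_status()
        if status not in IN_PROGRESS:
            return status
        time.sleep(interval)  # avoid hammering the API between polls
    raise TimeoutError("upgrade plan did not reach a terminal state")
```

Once this returns `"planned"`, step 3 is a `POST` to the same upgrade path to confirm and apply the plan.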
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title Variables API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 Use the vars endpoint to manage organization level variables List create update and delete variables using the HTTP API","answers":"---\npage_title: Variables - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/vars` endpoint to manage organization-level variables. List, create, update, and delete variables using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Variables API\n\n~> **Important**: The Variables API is **deprecated** and will be removed in a future release. 
All existing integrations with this API should transition to the [Workspace Variables API](\/terraform\/cloud-docs\/api-docs\/workspace-variables).\n\nThis set of APIs covers create, update, list and delete operations on variables.\n\n## Create a Variable\n\n`POST \/vars`\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                 | Type   | Default | Description                                                                                                                                                                                                           |\n| ---------------------------------------- | ------ | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                              | string |         | Must be `\"vars\"`.                                                                                                                                                                                                     |\n| `data.attributes.key`                    | string |         | The name of the variable.                                                                                                                                                                                             |\n| `data.attributes.value`                  | string | `\"\"`    | The value of the variable.                                                                                                                                                                                            |\n| `data.attributes.description`            | string |         | The description of the variable.                                                  
                                                                                                                                    |\n| `data.attributes.category`               | string |         | Whether this is a Terraform or environment variable. Valid values are `\"terraform\"` or `\"env\"`.                                                                                                                       |\n| `data.attributes.hcl`                    | bool   | `false` | Whether to evaluate the value of the variable as a string of HCL code. Has no effect for environment variables.                                                                                                       |\n| `data.attributes.sensitive`              | bool   | `false` | Whether the value is sensitive. If true then the variable is written once and not visible thereafter.                                                                                                                 |\n| `data.relationships.workspace.data.type` | string |         | Must be `\"workspaces\"`.                                                                                                                                                                                               |\n| `data.relationships.workspace.data.id`   | string |         | The ID of the workspace that owns the variable. Obtain workspace IDs from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. 
|\n\n**Deprecation warning**: The custom `filter` properties are replaced by JSON API `relationships` and will be removed from future versions of the API!\n\n| Key path                   | Type   | Default | Description                                           |\n| -------------------------- | ------ | ------- | ----------------------------------------------------- |\n| `filter.workspace.name`    | string |         | The name of the workspace that owns the variable.     |\n| `filter.organization.name` | string |         | The name of the organization that owns the workspace. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"some_key\",\n      \"value\":\"some_value\",\n      \"description\":\"some description\",\n      \"category\":\"terraform\",\n      \"hcl\":false,\n      \"sensitive\":false\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\":\"ws-4j8p6jX1w33MiDC7\",\n          \"type\":\"workspaces\"\n        }\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/vars\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-EavQ1LztoRTQHSNT\",\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"some_key\",\n      \"value\":\"some_value\",\n      \"description\":\"some description\",\n      \"sensitive\":false,\n      \"category\":\"terraform\",\n      \"hcl\":false\n    },\n    \"relationships\": {\n      \"configurable\": {\n        \"data\": {\n          \"id\":\"ws-4j8p6jX1w33MiDC7\",\n          \"type\":\"workspaces\"\n        },\n        \"links\": {\n          \"related\":\"\/api\/v2\/organizations\/my-organization\/workspaces\/my-workspace\"\n        }\n      }\n    },\n    \"links\": {\n      
\"self\":\"\/api\/v2\/vars\/var-EavQ1LztoRTQHSNT\"\n    }\n  }\n}\n```\n\n## List Variables\n\n`GET \/vars`\n\n### Query Parameters\n\n[These are standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter                    | Description                                                                |\n| ---------------------------- | -------------------------------------------------------------------------- |\n| `filter[workspace][name]`    | **Required** The name of one workspace to list variables for.              |\n| `filter[organization][name]` | **Required** The name of the organization that owns the desired workspace. |\n\nThese two parameters are optional but linked; if you include one, you must include both. Without a filter, this method lists variables for all workspaces where you have permission to read variables. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n\"https:\/\/app.terraform.io\/api\/v2\/vars?filter%5Borganization%5D%5Bname%5D=my-organization&filter%5Bworkspace%5D%5Bname%5D=my-workspace\"\n# ?filter[organization][name]=my-organization&filter[workspace][name]=demo01\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\":\"var-AD4pibb9nxo1468E\",\n      \"type\":\"vars\",\"attributes\": {\n        \"key\":\"name\",\n        \"value\":\"hello\",\n        \"description\":\"some description\",\n        \"sensitive\":false,\n        \"category\":\"terraform\",\n        \"hcl\":false\n      },\n      \"relationships\": {\n        \"configurable\": {\n          \"data\": {\n            
\"id\":\"ws-cZE9LERN3rGPRAmH\",\n            \"type\":\"workspaces\"\n          },\n          \"links\": {\n            \"related\":\"\/api\/v2\/organizations\/my-organization\/workspaces\/my-workspace\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\":\"\/api\/v2\/vars\/var-AD4pibb9nxo1468E\"\n      }\n    }\n  ]\n}\n```\n\n## Update Variables\n\n`PATCH \/vars\/:variable_id`\n\n| Parameter      | Description                           |\n| -------------- | ------------------------------------- |\n| `:variable_id` | The ID of the variable to be updated. |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path          | Type   | Default | Description                                                                                                                                                                                                                                                                                          |\n| ----------------- | ------ | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`       | string |         | Must be `\"vars\"`.                                                                                                                                                                                                                                                                                    |\n| `data.id`         | string |         | The ID of the variable to update.                                                                                                                                                             
                                                                                                       |\n| `data.attributes` | object |         | New attributes for the variable. This object can include `key`, `value`, `description`, `category`, `hcl`, and `sensitive` properties, which are described above under [create a variable](#create-a-variable). All of these properties are optional; if omitted, a property will be left unchanged. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-yRmifb4PJj7cLkMG\",\n    \"attributes\": {\n      \"key\":\"name\",\n      \"value\":\"mars\",\n      \"description\": \"new description\",\n      \"category\":\"terraform\",\n      \"hcl\": false,\n      \"sensitive\": false\n    },\n    \"type\":\"vars\"\n  }\n}\n```\n\n### Sample Request\n\n```bash\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/vars\/var-yRmifb4PJj7cLkMG\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-yRmifb4PJj7cLkMG\",\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"name\",\n      \"value\":\"mars\",\n      \"description\":\"new description\",\n      \"sensitive\":false,\n      \"category\":\"terraform\",\n      \"hcl\":false\n    },\n    \"relationships\": {\n      \"configurable\": {\n        \"data\": {\n          \"id\":\"ws-4j8p6jX1w33MiDC7\",\n          \"type\":\"workspaces\"\n        },\n        \"links\": {\n          \"related\":\"\/api\/v2\/organizations\/workspace-v2-06\/workspaces\/workspace-v2-06\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/vars\/var-yRmifb4PJj7cLkMG\"\n    }\n  }\n}\n```\n\n## Delete Variables\n\n`DELETE \/vars\/:variable_id`\n\n| Parameter      | Description                           |\n| -------------- | ------------------------------------- |\n| `:variable_id` | The ID of the 
variable to be deleted. |\n\n### Sample Request\n\n```bash\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/vars\/var-yRmifb4PJj7cLkMG\n```","site":"terraform","answers_cleaned":"    page title  Variables   API Docs   HCP Terraform description       Use the   vars  endpoint to manage organization level variables  List  create  update  and delete variables using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Variables API       Important    The Variables API is   deprecated   and will be removed in a future release  All existing integrations with this API should transition to the  Workspace Variables API   terraform cloud docs api docs workspace variables    This set of APIs covers create  update  list and delete operations on variables      Create a Variable   POST  vars       
Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                                   Type     Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        data type                                 string             Must be   vars                                                                                                                                                                                                             data attributes key                       string             The name of the variable                                                                                                                                                                                                   data attributes value                     string             The value of the variable                                                                                                                                                                                                  data attributes description               string             The description of the variable                                                                                                                                                                                            data attributes category                  string             Whether this is a Terraform or environment variable  Valid values are   terraform   or   env     
                                                                                                                          data attributes hcl                       bool      false    Whether to evaluate the value of the variable as a string of HCL code  Has no effect for environment variables                                                                                                             data attributes sensitive                 bool      false    Whether the value is sensitive  If true then the variable is written once and not visible thereafter                                                                                                                       data relationships workspace data type    string             Must be   workspaces                                                                                                                                                                                                       data relationships workspace data id      string             The ID of the workspace that owns the variable  Obtain workspace IDs from the  workspace settings   terraform cloud docs workspaces settings  or the  Show Workspace   terraform cloud docs api docs workspaces show workspace  endpoint       Deprecation warning    The custom  filter  properties are replaced by JSON API  relationships  and will be removed from future versions of the API     Key path                     Type     Default   Description                                                                                                                                                          filter workspace name       string             The name of the workspace that owns the variable           filter organization name    string             The name of the organization that owns the workspace         Sample Payload     json      data          type   vars        attributes            key   some key          value   some value          description   some 
description          category   terraform          hcl  false         sensitive  false             relationships            workspace              data                id   ws 4j8p6jX1w33MiDC7              type   workspaces                                         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 vars          Sample Response     json      data          id   var EavQ1LztoRTQHSNT        type   vars        attributes            key   some key          value   some value          description   some description          sensitive  false         category   terraform          hcl  false             relationships            configurable              data                id   ws 4j8p6jX1w33MiDC7              type   workspaces                      links                related    api v2 organizations my organization workspaces my workspace                                links            self    api v2 vars var EavQ1LztoRTQHSNT                      List Variables   GET  vars       Query Parameters   These are standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter                      Description                                                                                                                                                                                   filter workspace  name          Required   The name of one workspace to list variables for                    filter organization  name       Required   The name of the organization that owns the desired workspace     These two parameters are optional but linked  if you include one  you must include both  Without a filter  this method lists variables for all workspaces where you have permission to 
read variables    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers      Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json     https   app terraform io api v2 vars filter 5Borganization 5D 5Bname 5D my organization filter 5Bworkspace 5D 5Bname 5D my workspace     filter organization  name  my organization filter workspace  name  demo01          Sample Response     json      data                  id   var AD4pibb9nxo1468E          type   vars   attributes              key   name            value   hello            description   some description            sensitive  false           category   terraform            hcl  false                 relationships              configurable                data                  id   ws cZE9LERN3rGPRAmH                type   workspaces                          links                  related    api v2 organizations my organization workspaces my workspace                                        links              self    api v2 vars var AD4pibb9nxo1468E                              Update Variables   PATCH  vars  variable id     Parameter        Description                                                                                            variable id    The ID of the variable to be updated         Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path            Type     Default   Description                                                                                                                                                                                                                                                                                                                                                         
| Key path          | Type   | Description |
| ----------------- | ------ | ----------- |
| `data.type`       | string | Must be `"vars"`. |
| `data.id`         | string | The ID of the variable to update. |
| `data.attributes` | object | New attributes for the variable. This object can include `key`, `value`, `description`, `category`, `hcl`, and `sensitive` properties, which are described above under [create a variable](#create-a-variable). All of these properties are optional; if omitted, a property will be left unchanged. |

### Sample Payload

```json
{
  "data": {
    "id": "var-yRmifb4PJj7cLkMG",
    "attributes": {
      "key": "name",
      "value": "mars",
      "description": "new description",
      "category": "terraform",
      "hcl": false,
      "sensitive": false
    },
    "type": "vars"
  }
}
```

### Sample Request

```bash
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/vars/var-yRmifb4PJj7cLkMG
```

### Sample Response

```json
{
  "data": {
    "id": "var-yRmifb4PJj7cLkMG",
    "type": "vars",
    "attributes": {
      "key": "name",
      "value": "mars",
      "description": "new description",
      "sensitive": false,
      "category": "terraform",
      "hcl": false
    },
    "relationships": {
      "configurable": {
        "data": {
          "id": "ws-4j8p6jX1w33MiDC7",
          "type": "workspaces"
        },
        "links": {
          "related": "/api/v2/organizations/workspace-v2-06/workspaces/workspace-v2-06"
        }
      }
    },
    "links": {
      "self": "/api/v2/vars/var-yRmifb4PJj7cLkMG"
    }
  }
}
```

## Delete Variables

`DELETE /vars/:variable_id`

| Parameter      | Description                           |
| -------------- | ------------------------------------- |
| `:variable_id` | The ID of the variable to be deleted. |

### Sample Request

```bash
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/vars/var-yRmifb4PJj7cLkMG
```
"}
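The update-variable PATCH payload described above can also be assembled programmatically. Below is a minimal Python sketch, not part of the official docs: `build_variable_update_payload` is a hypothetical helper, and the actual HTTP call (with the `Authorization: Bearer` and `Content-Type: application/vnd.api+json` headers shown in the curl sample) is omitted.

```python
import json


def build_variable_update_payload(variable_id, **attributes):
    """Build a JSON:API payload for PATCH /api/v2/vars/:variable_id.

    Hypothetical helper for illustration. Only the attributes passed in
    are included; per the docs above, omitted attributes are left
    unchanged by the API.
    """
    return {
        "data": {
            "id": variable_id,
            "type": "vars",
            "attributes": dict(attributes),
        }
    }


# Sample values taken from the documentation above.
payload = build_variable_update_payload(
    "var-yRmifb4PJj7cLkMG",
    key="name",
    value="mars",
    description="new description",
    category="terraform",
    hcl=False,
    sensitive=False,
)
print(json.dumps(payload, indent=2))
```

The resulting JSON matches the Sample Payload above and can be sent as the request body of the PATCH request.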
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title OAuth Clients API Docs HCP Terraform Use the oauth clients endpoint to manage OAuth connections between your organization and a VCS provider List show create update and destroy clients using the HTTP API Attach and detach OAuth clients to projects","answers":"---\npage_title: OAuth Clients - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/oauth-clients` endpoint to manage OAuth connections between your organization and a VCS provider. List, show, create, update, and destroy clients using the HTTP API. Attach and detach OAuth clients to projects.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# OAuth 
Clients API\n\nAn OAuth client represents the connection between an organization and a VCS provider. By default, an OAuth client is accessible throughout the organization, or you can scope its access to one or more [projects](\/terraform\/cloud-docs\/projects\/manage).\n\n## List OAuth Clients\n\n`GET \/organizations\/:organization_name\/oauth-clients`\n\n| Parameter            | Description                   |\n| -------------------- | ----------------------------- |\n| `:organization_name` | The name of the organization. |\n\nThis endpoint allows you to list VCS connections between an organization and a VCS provider (GitHub, Bitbucket, or GitLab) for use when creating or setting up workspaces.\n\n| Status  | Response                                        | Reason                 |\n| ------- | ----------------------------------------------- | ---------------------- |\n| [200][] | [JSON API document][] (`type: \"oauth-clients\"`) | Success                |\n| [404][] | [JSON API error object][]                       | Organization not found |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs. If no pagination query parameters are provided, the endpoint is not paginated and returns all results.\n\n| Parameter      | Description                                                                   |\n| -------------- | ----------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.            |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 OAuth clients per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/oauth-clients\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"oc-XKFwG6ggfA9n7t1K\",\n      \"type\": \"oauth-clients\",\n      \"attributes\": {\n        \"created-at\": \"2018-04-16T20:42:53.771Z\",\n        \"callback-url\": \"https:\/\/app.terraform.io\/auth\/35936d44-842c-4ddd-b4d4-7c741383dc3a\/callback\",\n        \"connect-path\": \"\/auth\/35936d44-842c-4ddd-b4d4-7c741383dc3a?organization_id=1\",\n        \"service-provider\": \"github\",\n        \"service-provider-display-name\": \"GitHub\",\n        \"name\": null,\n        \"http-url\": \"https:\/\/github.com\",\n        \"api-url\": \"https:\/\/api.github.com\",\n        \"key\": null,\n        \"rsa-public-key\": null,\n        \"organization-scoped\": false\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/my-organization\"\n          }\n        },\n        \"projects\": {\n          \"data\": [\n            { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n          ]\n        }, \n        \"oauth-tokens\": {\n          \"data\": [],\n          \"links\": {\n            \"related\": \"\/api\/v2\/oauth-tokens\/ot-KaeqH4cy72VPXFQT\"\n          }\n        },\n        \"agent-pool\": {\n            \"data\": { \n              \"id\": \"apool-VsmjEMcYkShrckpK\", \n              \"type\": \"agent-pools\" \n            },\n            \"links\": { \n              \"related\": \"\/api\/v2\/agent-pools\/apool-VsmjEMcYkShrckpK\" \n            }\n        }\n      }\n    }\n  ]\n}\n```\n\n## Show an OAuth 
Client\n\n`GET \/oauth-clients\/:id`\n\n| Parameter | Description                        |\n| --------- | ---------------------------------- |\n| `:id`     | The ID of the OAuth Client to show |\n\n| Status  | Response                                        | Reason                                                         |\n| ------- | ----------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"oauth-clients\"`) | Success                                                        |\n| [404][] | [JSON API error object][]                       | OAuth Client not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/oauth-clients\/oc-XKFwG6ggfA9n7t1K\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"oc-XKFwG6ggfA9n7t1K\",\n    \"type\": \"oauth-clients\",\n    \"attributes\": {\n      \"created-at\": \"2018-04-16T20:42:53.771Z\",\n      \"callback-url\": \"https:\/\/app.terraform.io\/auth\/35936d44-842c-4ddd-b4d4-7c741383dc3a\/callback\",\n      \"connect-path\": \"\/auth\/35936d44-842c-4ddd-b4d4-7c741383dc3a?organization_id=1\",\n      \"service-provider\": \"github\",\n      \"service-provider-display-name\": \"GitHub\",\n      \"name\": null,\n      \"http-url\": \"https:\/\/github.com\",\n      \"api-url\": \"https:\/\/api.github.com\",\n      \"key\": null,\n      \"rsa-public-key\": null,\n      \"organization-scoped\": false\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\"\n        }\n      },\n      \"projects\": {\n        
\"data\": [\n          { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n        ]\n      },\n      \"oauth-tokens\": {\n        \"data\": [],\n        \"links\": {\n          \"related\": \"\/api\/v2\/oauth-tokens\/ot-KaeqH4cy72VPXFQT\"\n        }\n      },\n      \"agent-pool\": {\n        \"data\": { \n          \"id\": \"apool-VsmjEMcYkShrckpK\",\n          \"type\": \"agent-pools\"\n        },\n        \"links\": { \n          \"related\": \"\/api\/v2\/agent-pools\/apool-VsmjEMcYkShrckpK\" \n        }\n      }\n    }\n  }\n}\n```\n\n## Create an OAuth Client\n\n`POST \/organizations\/:organization_name\/oauth-clients`\n\n| Parameter            | Description                                                                                                                                                                                                                                                          |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization that will be connected to the VCS provider. The organization must already exist in the system, and the user must have permission to manage VCS settings. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)) |\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nThis endpoint allows you to create a VCS connection between an organization and a VCS provider (GitHub or GitLab) for use when creating or setting up workspaces. 
By using this API endpoint, you can provide a pre-generated OAuth token string instead of going through the process of creating a GitHub or GitLab OAuth Application.\n\nTo learn how to generate one of these token strings for your VCS provider, you can read the following documentation:\n\n* [GitHub and GitHub Enterprise](https:\/\/docs.github.com\/en\/authentication\/keeping-your-account-and-data-secure\/creating-a-personal-access-token)\n* [GitLab, GitLab Community Edition, and GitLab Enterprise Edition](https:\/\/docs.gitlab.com\/ee\/user\/profile\/personal_access_tokens.html#create-a-personal-access-token)\n* [Azure DevOps Server](https:\/\/docs.microsoft.com\/en-us\/azure\/devops\/organizations\/accounts\/use-personal-access-tokens-to-authenticate?view=azure-devops-2019&tabs=preview-page)\n\n| Status  | Response                                        | Reason                                                         |\n| ------- | ----------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"oauth-clients\"`) | OAuth Client successfully created                              |\n| [404][] | [JSON API error object][]                       | Organization not found or user unauthorized to perform action  |\n| [422][] | [JSON API error object][]                       | Malformed request body (missing attributes, wrong types, etc.) 
|\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                             | Type   | Default | Description                                                                                                                                                                                    |\n| ------------------------------------ | ------ | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                          | string |         | Must be `\"oauth-clients\"`.                                                                                                                                                                     |\n| `data.attributes.service-provider`   | string |         | The VCS provider being connected with. Valid options are `\"github\"`, `\"github_enterprise\"`, `\"gitlab_hosted\"`, `\"gitlab_community_edition\"`, `\"gitlab_enterprise_edition\"`, or `\"ado_server\"`. |\n| `data.attributes.name`               | string | `null`  | An optional display name for the OAuth Client. If left `null`, the UI will default to the display name of the VCS provider.                                                                    |\n| `data.attributes.key`                | string |         | The OAuth Client key. It can refer to a Consumer Key, Application Key, or another type of client key for the VCS provider.                                                                     |\n| `data.attributes.http-url`           | string |         | The homepage of your VCS provider (e.g. `\"https:\/\/github.com\"` or `\"https:\/\/ghe.example.com\"` or `\"https:\/\/gitlab.com\"`).                                                                      
|\n| `data.attributes.api-url`            | string |         | The base URL as per your VCS provider's API documentation (e.g. `\"https:\/\/api.github.com\"`, `\"https:\/\/ghe.example.com\/api\/v3\"` or `\"https:\/\/gitlab.com\/api\/v4\"`).                              |\n| `data.attributes.oauth-token-string` | string |         | The token string you were given by your VCS provider                                                                                                                                           |\n| `data.attributes.private-key`        | string |         | **Required for Azure DevOps Server. Not used for any other providers.** The text of the SSH private key associated with your Azure DevOps Server account.                                      |\n| `data.attributes.secret`             | string |         | The OAuth client secret. For Bitbucket Data Center, the secret is the text of the SSH private key associated with your Bitbucket Data Center application link.                                      |\n| `data.attributes.rsa-public-key`     | string |         | **Required for Bitbucket Data Center in conjunction with the `secret`. Not used for any other providers.** The text of the SSH public key associated with your Bitbucket Data Center application link.   |\n| `data.attributes.organization-scoped` | boolean |         | Whether or not the OAuth client is scoped to all projects and workspaces in the organization. Defaults to `true`.\n| `data.relationships.projects.data[]` | array\\[object] | `[]`      | A list of resource identifier objects that defines which projects are associated with the OAuth client. If `data.attributes.organization-scoped` is set to `false`, the OAuth client is only available to this list of projects. Each object in this list must contain a project `id` and use the `\"projects\"` type. For example, `{ \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }`. 
|\n| `data.relationships.agent-pool.data` | object | `{}`    | The Agent Pool associated to the VCS connection. This pool will be responsible for forwarding VCS Provider API calls and cloning VCS repositories.                         |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"oauth-clients\",\n    \"attributes\": {\n      \"service-provider\": \"github\",\n      \"http-url\": \"https:\/\/github.com\",\n      \"api-url\": \"https:\/\/api.github.com\",\n      \"oauth-token-string\": \"4306823352f0009d0ed81f1b654ac17a\",\n      \"organization-scoped\": false\n    },\n    \"relationships\": {\n      \"projects\": {\n        \"data\": [\n          { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n        ]\n      },\n      \"agent-pool\": {\n        \"data\": { \n            \"id\": \"apool-VsmjEMcYkShrckzzz\",\n            \"type\": \"agent-pools\"\n        } \n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/oauth-clients\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"oc-XKFwG6ggfA9n7t1K\",\n    \"type\": \"oauth-clients\",\n    \"attributes\": {\n      \"created-at\": \"2018-04-16T20:42:53.771Z\",\n      \"callback-url\": \"https:\/\/app.terraform.io\/auth\/35936d44-842c-4ddd-b4d4-7c741383dc3a\/callback\",\n      \"connect-path\": \"\/auth\/35936d44-842c-4ddd-b4d4-7c741383dc3a?organization_id=1\",\n      \"service-provider\": \"github\",\n      \"service-provider-display-name\": \"GitHub\",\n      \"name\": null,\n      \"http-url\": \"https:\/\/github.com\",\n      \"api-url\": \"https:\/\/api.github.com\",\n      \"key\": null,\n      \"rsa-public-key\": null,\n      \"organization-scoped\": false\n    },\n    \"relationships\": {\n      \"organization\": 
{\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\"\n        }\n      },\n      \"projects\": {\n        \"data\": [\n          { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n        ]\n      },\n      \"oauth-tokens\": {\n        \"data\": [],\n        \"links\": {\n          \"related\": \"\/api\/v2\/oauth-tokens\/ot-KaeqH4cy72VPXFQT\"\n        }\n      },\n      \"agent-pool\": {\n        \"data\": { \n            \"id\": \"apool-VsmjEMcYkShrckzzz\",\n            \"type\": \"agent-pools\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Update an OAuth Client\n\n`PATCH \/oauth-clients\/:id`\n\n| Parameter | Description                           |\n| --------- | ------------------------------------- |\n| `:id`     | The ID of the OAuth Client to update. |\n\nUse caution when changing attributes with this endpoint; editing an OAuth client that workspaces are currently using can have unexpected effects.\n\n| Status  | Response                                        | Reason                                                         |\n| ------- | ----------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"oauth-clients\"`) | The request was successful                                     |\n| [404][] | [JSON API error object][]                       | OAuth Client not found or user unauthorized to perform action  |\n| [422][] | [JSON API error object][]                       | Malformed request body (missing attributes, wrong types, etc.) 
|\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\n| Key path                         | Type   | Default          | Description                                                                                                                                                                                  |\n| -------------------------------- | ------ | ---------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                      | string |                  | Must be `\"oauth-clients\"`.                                                                                                                                                                   |\n| `data.attributes.name`           | string | (previous value) | An optional display name for the OAuth Client. If set to `null`, the UI will default to the display name of the VCS provider.                                                                |\n| `data.attributes.key`            | string | (previous value) | The OAuth Client key. It can refer to a Consumer Key, Application Key, or another type of client key for the VCS provider.                                                                   |\n| `data.attributes.secret`         | string | (previous value) | The OAuth client secret. For Bitbucket Data Center, this secret is the text of the SSH private key associated with your Bitbucket Data Center application link.                                    |\n| `data.attributes.rsa-public-key` | string | (previous value) | **Required for Bitbucket Data Center in conjunction with the `secret`. Not used for any other providers.** The text of the SSH public key associated with your Bitbucket Data Center application link. 
|\n| `data.attributes.organization-scoped` | boolean | (previous value) | Whether or not the OAuth client is available to all projects and workspaces in the organization.                                                                      |\n| `data.relationships.projects`    | array\\[object] | (previous value) | A list of resource identifier objects that defines which projects are associated with the OAuth client. If `data.attributes.organization-scoped` is set to `false`, the OAuth client is only available to this list of projects. Each object in this list must contain a project `id` and use the `\"projects\"` type. For example, `{ \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }`. Sending an empty array clears all project assignments.                                               |\n| `data.relationships.agent-pool.data` | object | `{}`    | The Agent Pool associated to the VCS connection. This pool will be responsible for forwarding VCS Provider API calls and cloning VCS repositories.                            
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"id\": \"oc-XKFwG6ggfA9n7t1K\",\n    \"type\": \"oauth-clients\",\n    \"attributes\": {\n      \"key\": \"key\",\n      \"secret\": \"secret\",\n      \"organization-scoped\": false\n    },\n    \"relationships\": {\n      \"projects\": {\n        \"data\": [\n          { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n        ]\n      },\n      \"agent-pool\": {\n        \"data\": { \n            \"id\": \"apool-VsmjEMcYkShrckzzz\", \n            \"type\": \"agent-pools\" \n        } \n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/oauth-clients\/oc-XKFwG6ggfA9n7t1K\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"oc-XKFwG6ggfA9n7t1K\",\n    \"type\": \"oauth-clients\",\n    \"attributes\": {\n      \"created-at\": \"2018-04-16T20:42:53.771Z\",\n      \"callback-url\": \"https:\/\/app.terraform.io\/auth\/35936d44-842c-4ddd-b4d4-7c741383dc3a\/callback\",\n      \"connect-path\": \"\/auth\/35936d44-842c-4ddd-b4d4-7c741383dc3a?organization_id=1\",\n      \"service-provider\": \"github\",\n      \"service-provider-display-name\": \"GitHub\",\n      \"name\": null,\n      \"http-url\": \"https:\/\/github.com\",\n      \"api-url\": \"https:\/\/api.github.com\",\n      \"key\": null,\n      \"rsa-public-key\": null,\n      \"organization-scoped\": false\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\"\n        }\n      },\n      \"projects\": {\n        \"data\": [\n          { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n        ]\n   
   },\n      \"oauth-tokens\": {\n        \"data\": [],\n        \"links\": {\n          \"related\": \"\/api\/v2\/oauth-tokens\/ot-KaeqH4cy72VPXFQT\"\n        }\n      },\n      \"agent-pool\": {\n        \"data\": { \n          \"id\": \"apool-VsmjEMcYkShrckzzz\", \n          \"type\": \"agent-pools\" \n        } \n      }\n    }\n  }\n}\n```\n\n## Destroy an OAuth Client\n\n`DELETE \/oauth-clients\/:id`\n\n| Parameter | Description                           |\n| --------- | ------------------------------------- |\n| `:id`     | The ID of the OAuth Client to destroy |\n\nThis endpoint allows you to remove an existing connection between an organization and a VCS provider (GitHub, Bitbucket, or GitLab).\n\n**Note:** Removing the OAuth Client will unlink workspaces that use this connection from their repositories, and these workspaces will need to be manually linked to another repository.\n\n| Status  | Response                  | Reason                                                                         |\n| ------- | ------------------------- | ------------------------------------------------------------------------------ |\n| [204][] | Empty response            | The OAuth Client was successfully destroyed                                    |\n| [404][] | [JSON API error object][] | Organization or OAuth Client not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/oauth-clients\/oc-XKFwG6ggfA9n7t1K\n```\n\n## Attach to a project\n\n`POST \/oauth-clients\/:id\/relationships\/projects`\n\n| Parameter | Description                                                                                        |\n| --------- | -------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the OAuth 
client to attach to a project. Use the [List OAuth Clients](#list-oauth-clients) endpoint to reference your OAuth client IDs. |\n\n| Status  | Response                  | Reason                                                                     |\n| ------- | ------------------------- | -------------------------------------------------------------------------- |\n| [204][] | Nothing                   | The request was successful                                                 |\n| [404][] | [JSON API error object][] | OAuth Client not found or user unauthorized to perform action                |\n| [422][] | [JSON API error object][] | Malformed request body (one or more projects not found, wrong types, etc.) |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type           | Default | Description                                                                                                                                                                                                                                                          |\n| -------- | -------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data[]` | array\\[object] |   `[]`  | A list of resource identifier objects that defines which projects to attach the OAuth client to. These objects must contain `id` and `type` properties, and the `type` property must be `projects` (e.g. `{ \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }`).    
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" },\n    { \"id\": \"prj-2HRvNs49EWPjDqT1\", \"type\": \"projects\" }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/oauth-clients\/oc-XKFwG6ggfA9n7t1K\/relationships\/projects\n```\n\n## Detach an OAuth Client from projects\n\n`DELETE \/oauth-clients\/:id\/relationships\/projects`\n\n| Parameter | Description                                                                                        |\n| --------- | -------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the OAuth client to detach from the specified projects. Use the [List OAuth Clients](#list-oauth-clients) endpoint to reference your OAuth client IDs. |\n\n| Status  | Response                  | Reason                                                                     |\n| ------- | ------------------------- | -------------------------------------------------------------------------- |\n| [204][] | Nothing                   | The request was successful                                                 |\n| [404][] | [JSON API error object][] | OAuth Client not found or user unauthorized to perform action                |\n| [422][] | [JSON API error object][] | Malformed request body (one or more projects not found, wrong types, etc.) 
|\n\n### Request Body\n\nThis DELETE endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type           | Default | Description                                                                                                                                                                                                                                                            |\n| -------- | -------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data[]` | array\\[object] |  `[]`   | A list of resource identifier objects that defines which projects are associated with the OAuth client. If `data.attributes.organization-scoped` is set to `false`, the OAuth client is only available to this list of projects. Each object in this list must contain a project `id` and use the `\"projects\"` type. For example, `{ \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }`. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" },\n    { \"id\": \"prj-2HRvNs49EWPjDqT1\", \"type\": \"projects\" }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/oauth-clients\/oc-XKFwG6ggfA9n7t1K\/relationships\/projects\n```\n\n### Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). 
The following resource types are available:\n\n| Resource Name  | Description                              |\n| -------------- | ---------------------------------------- |\n| `oauth_tokens` | The OAuth tokens managed by this client  |\n| `projects`     | The projects scoped to this client       |\n","site":"terraform","answers_cleaned":"
callback url    https   app terraform io auth 35936d44 842c 4ddd b4d4 7c741383dc3a callback            connect path     auth 35936d44 842c 4ddd b4d4 7c741383dc3a organization id 1            service provider    github            service provider display name    GitHub            name   null           http url    https   github com            api url    https   api github com            key   null           rsa public key   null           organization scoped   false                 relationships              organization                data                  id    my organization                type    organizations                          links                  related     api v2 organizations my organization                                  projects                data                    id    prj AwfuCJTkdai4xj9w    type    projects                                     oauth tokens                data                  links                  related     api v2 oauth tokens ot KaeqH4cy72VPXFQT                                  agent pool                  data                     id    apool VsmjEMcYkShrckpK                   type    agent pools                               links                     related     api v2 agent pools apool VsmjEMcYkShrckpK                                                       Show an OAuth Client   GET  oauth clients  id     Parameter   Description                                                                                 id        The ID of the OAuth Client to show      Status    Response                                          Reason                                                                                                                                                                                             200       JSON API document      type   oauth clients      Success                                                             404       JSON API error object                            OAuth Client not found  or 
user unauthorized to perform action        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 oauth clients oc XKFwG6ggfA9n7t1K          Sample Response     json      data          id    oc XKFwG6ggfA9n7t1K        type    oauth clients        attributes            created at    2018 04 16T20 42 53 771Z          callback url    https   app terraform io auth 35936d44 842c 4ddd b4d4 7c741383dc3a callback          connect path     auth 35936d44 842c 4ddd b4d4 7c741383dc3a organization id 1          service provider    github          service provider display name    GitHub          name   null         http url    https   github com          api url    https   api github com          key   null         rsa public key   null         organization scoped   false             relationships            organization              data                id    my organization              type    organizations                      links                related     api v2 organizations my organization                            projects              data                  id    prj AwfuCJTkdai4xj9w    type    projects                              oauth tokens              data                links                related     api v2 oauth tokens ot KaeqH4cy72VPXFQT                            agent pool              data                 id    apool VsmjEMcYkShrckpK              type    agent pools                      links                 related     api v2 agent pools apool VsmjEMcYkShrckpK                                         Create an OAuth Client   POST  organizations  organization name oauth clients     Parameter              Description                                                                                                                                                                                                                            
                                                                                                                                                                                                                                                                                                                                    organization name    The name of the organization that will be connected to the VCS provider  The organization must already exist in the system  and the user must have permission to manage VCS settings    More about permissions    terraform cloud docs users teams organizations permissions       permissions citation    intentionally unused   keep for maintainers  This endpoint allows you to create a VCS connection between an organization and a VCS provider  GitHub or GitLab  for use when creating or setting up workspaces  By using this API endpoint  you can provide a pre generated OAuth token string instead of going through the process of creating a GitHub or GitLab OAuth Application   To learn how to generate one of these token strings for your VCS provider  you can read the following documentation      GitHub and GitHub Enterprise  https   docs github com en authentication keeping your account and data secure creating a personal access token     GitLab  GitLab Community Edition  and GitLab Enterprise Edition  https   docs gitlab com ee user profile personal access tokens html create a personal access token     Azure DevOps Server  https   docs microsoft com en us azure devops organizations accounts use personal access tokens to authenticate view azure devops 2019 tabs preview page     Status    Response                                          Reason                                                                                                                                                                                             201       JSON API document      type   oauth clients      OAuth Client successfully created                      
             404       JSON API error object                            Organization not found or user unauthorized to perform action       422       JSON API error object                            Malformed request body  missing attributes  wrong types  etc          Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                               Type     Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                      data type                             string             Must be   oauth clients                                                                                                                                                                             data attributes service provider      string             The VCS provider being connected with  Valid options are   github      github enterprise      gitlab hosted      gitlab community edition      gitlab enterprise edition    or   ado server         data attributes name                  string    null     An optional display name for the OAuth Client  If left  null   the UI will default to the display name of the VCS provider                                                                          data attributes key                   string             The OAuth Client key  It can refer to a Consumer Key  Application Key  or another type of client key for the VCS provider                                                                           data attributes http url              string       
      The homepage of your VCS provider  e g    https   github com   or   https   ghe example com   or   https   gitlab com                                                                               data attributes api url               string             The base URL as per your VCS provider s API documentation  e g    https   api github com      https   ghe example com api v3   or   https   gitlab com api v4                                       data attributes oauth token string    string             The token string you were given by your VCS provider                                                                                                                                                data attributes private key           string               Required for Azure DevOps Server  Not used for any other providers    The text of the SSH private key associated with your Azure DevOps Server account                                            data attributes secret                string             The OAuth client secret  For Bitbucket Data Center  the secret is the text of the SSH private key associated with your Bitbucket Data Center application link                                            data attributes rsa public key        string               Required for Bitbucket Data Center in conjunction with the  secret   Not used for any other providers    The text of the SSH public key associated with your Bitbucket Data Center application link         data attributes organization scoped    boolean             Whether or not the OAuth client is scoped to all projects and workspaces in the organization  Defaults to  true      data relationships projects data      array  object                A list of resource identifier objects that defines which projects are associated with the OAuth client  If  data attributes organization scoped  is set to  false   the OAuth client is only available to this list of projects  Each object in this list must contain a project  
id  and use the   projects   type  For example      id    prj AwfuCJTkdai4xj9w    type    projects           data relationships agent pool data    object             The Agent Pool associated to the VCS connection  This pool will be responsible for forwarding VCS Provider API calls and cloning VCS repositories                                 Sample Payload     json      data          type    oauth clients        attributes            service provider    github          http url    https   github com          api url    https   api github com          oauth token string    4306823352f0009d0ed81f1b654ac17a          organization scoped   false             relationships            projects              data                  id    prj AwfuCJTkdai4xj9w    type    projects                              agent pool              data                   id    apool VsmjEMcYkShrckzzz                type    agent pools                                          Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 organizations my organization oauth clients          Sample Response     json      data          id    oc XKFwG6ggfA9n7t1K        type    oauth clients        attributes            created at    2018 04 16T20 42 53 771Z          callback url    https   app terraform io auth 35936d44 842c 4ddd b4d4 7c741383dc3a callback          connect path     auth 35936d44 842c 4ddd b4d4 7c741383dc3a organization id 1          service provider    github          service provider display name    GitHub          name   null         http url    https   github com          api url    https   api github com          key   null         rsa public key   null         organization scoped   false             relationships            organization              data                id    my organization              type    organizations             
         links                related     api v2 organizations my organization                            projects              data                  id    prj AwfuCJTkdai4xj9w    type    projects                              oauth tokens              data                links                related     api v2 oauth tokens ot KaeqH4cy72VPXFQT                            agent pool              data                   id    apool VsmjEMcYkShrckzzz                type    agent pools                                        Update an OAuth Client   PATCH  oauth clients  id     Parameter   Description                                                                                       id        The ID of the OAuth Client to update     Use caution when changing attributes with this endpoint  editing an OAuth client that workspaces are currently using can have unexpected effects     Status    Response                                          Reason                                                                                                                                                                                             200       JSON API document      type   oauth clients      The request was successful                                          404       JSON API error object                            OAuth Client not found or user unauthorized to perform action       422       JSON API error object                            Malformed request body  missing attributes  wrong types  etc          Request Body  This PATCH endpoint requires a JSON object with the following properties as a request payload     Key path                           Type     Default            Description                                                                                                                                                                                                                                                                                                 
                                                                                                                                                      data type                         string                      Must be   oauth clients                                                                                                                                                                           data attributes name              string    previous value    An optional display name for the OAuth Client  If set to  null   the UI will default to the display name of the VCS provider                                                                      data attributes key               string    previous value    The OAuth Client key  It can refer to a Consumer Key  Application Key  or another type of client key for the VCS provider                                                                         data attributes secret            string    previous value    The OAuth client secret  For Bitbucket Data Center  this secret is the text of the SSH private key associated with your Bitbucket Data Center application link                                          data attributes rsa public key    string    previous value      Required for Bitbucket Data Center in conjunction with the  secret   Not used for any other providers    The text of the SSH public key associated with your Bitbucket Data Center application link       data attributes organization scoped    boolean    previous value    Whether or not the OAuth client is available to all projects and workspaces in the organization                                                                            data relationships projects       array  object     previous value    A list of resource identifier objects that defines which projects are associated with the OAuth client  If  data attributes organization scoped  is set to  false   the OAuth client is only available to this list of projects  Each object in this 
list must contain a project  id  and use the   projects   type  For example      id    prj AwfuCJTkdai4xj9w    type    projects      Sending an empty array clears all project assignments                                                     data relationships agent pool data    object             The Agent Pool associated to the VCS connection  This pool will be responsible for forwarding VCS Provider API calls and cloning VCS repositories                                    Sample Payload     json      data          id    oc XKFwG6ggfA9n7t1K        type    oauth clients        attributes            key    key          secret    secret          organization scoped   false             relationships            projects              data                  id    prj AwfuCJTkdai4xj9w    type    projects                              agent pool              data                   id    apool VsmjEMcYkShrckzzz                 type    agent pools                                           Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json     https   app terraform io api v2 oauth clients oc XKFwG6ggfA9n7t1K          Sample Response     json      data          id    oc XKFwG6ggfA9n7t1K        type    oauth clients        attributes            created at    2018 04 16T20 42 53 771Z          callback url    https   app terraform io auth 35936d44 842c 4ddd b4d4 7c741383dc3a callback          connect path     auth 35936d44 842c 4ddd b4d4 7c741383dc3a organization id 1          service provider    github          service provider display name    GitHub          name   null         http url    https   github com          api url    https   api github com          key   null         rsa public key   null         organization scoped   false             relationships            organization              data                id    my organization              type    
organizations                      links                related     api v2 organizations my organization                            projects              data                  id    prj AwfuCJTkdai4xj9w    type    projects                              oauth tokens              data                links                related     api v2 oauth tokens ot KaeqH4cy72VPXFQT                            agent pool              data                 id    apool VsmjEMcYkShrckzzz               type    agent pools                                          Destroy an OAuth Client   DELETE  oauth clients  id     Parameter   Description                                                                                       id        The ID of the OAuth Client to destroy    This endpoint allows you to remove an existing connection between an organization and a VCS provider  GitHub  Bitbucket  or GitLab      Note    Removing the OAuth Client will unlink workspaces that use this connection from their repositories  and these workspaces will need to be manually linked to another repository     Status    Response                    Reason                                                                                                                                                                                                       204      Empty response              The OAuth Client was successfully destroyed                                         404       JSON API error object      Organization or OAuth Client not found  or user unauthorized to perform action        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 oauth clients oc XKFwG6ggfA9n7t1K         Attach to a project   POST  oauth clients  id relationships projects     Parameter   Description                                                                                                       
                                                                                                          id        The ID of the OAuth client to attach to a project  Use the  List OAuth Clients   list oauth clients  endpoint to reference your OAuth client IDs       Status    Response                    Reason                                                                                                                                                                                               204      Nothing                     The request was successful                                                      404       JSON API error object      OAuth Client not found or user unauthorized to perform action                     422       JSON API error object      Malformed request body  one or more projects not found  wrong types  etc          Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path   Type             Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              data      array  object              A list of resource identifier objects that defines which projects to attach the OAuth client to  These objects must contain  id  and  type  properties  and the  type  property must be  projects   e g      id    prj AwfuCJTkdai4xj9w    type    projects                 Sample Payload     json      data            id    prj 
AwfuCJTkdai4xj9w    type    projects            id    prj 2HRvNs49EWPjDqT1    type    projects                   Sample Request     shell curl      H  Authorization  Bearer  TOKEN       H  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 oauth clients oc XKFwG6ggfA9n7t1K relationships projects         Detach an OAuth Client from projects   DELETE  oauth clients  id relationships projects     Parameter   Description                                                                                                                                                                                                                 id        The ID of the oauth client you want to detach from the specified projects  Use the  List OAuth Clients  endpoint to find IDs       Status    Response                    Reason                                                                                                                                                                                               204      Nothing                     The request was successful                                                      404       JSON API error object      OAuth Client not found or user unauthorized to perform action                     422       JSON API error object      Malformed request body  one or more projects not found  wrong types  etc          Request Body  This DELETE endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path   Type             Default   Description                                                                                                                                                                                                                                                                                                                                                                       
                                                                                                                                                                                                           data      array  object              A list of resource identifier objects that defines which projects are associated with the OAuth client  If  data attributes organization scoped  is set to  false   the OAuth client is only available to this list of projects  Each object in this list must contain a project  id  and use the   projects   type  For example      id    prj AwfuCJTkdai4xj9w    type    projects             Sample Payload     json      data            id    prj AwfuCJTkdai4xj9w    type    projects            id    prj 2HRvNs49EWPjDqT1    type    projects                   Sample Request     shell curl      H  Authorization  Bearer  TOKEN       H  Content Type  application vnd api json        request DELETE       data  payload json     https   app terraform io api v2 oauth clients oc XKFwG6ggfA9n7t1K relationships projects          Available Related Resources  The GET endpoints above can optionally return related resources  if requested with  the  include  query parameter   terraform cloud docs api docs inclusion of related resources   The following resource types are available     Resource Name    Description                                                                                                 oauth tokens    The OAuth tokens managed by this client       projects        The projects scoped to this client         "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title OAuth Tokens API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 Use the oauth tokens endpoint to manage the OAuth tokens used to connect a workspace to a VCS provider List show update and destroy tokens using the HTTP API","answers":"---\npage_title: OAuth Tokens - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/oauth-tokens` endpoint to manage the OAuth tokens used to connect a workspace to a VCS provider. List, show, update, and destroy tokens using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# OAuth Tokens\n\nThe `oauth-token` object represents a VCS configuration which includes the OAuth connection and the 
associated OAuth token. This object is used when creating a workspace to identify which VCS connection to use.\n\n## List OAuth Tokens\n\nList all the OAuth Tokens for a given OAuth Client\n\n`GET \/oauth-clients\/:oauth_client_id\/oauth-tokens`\n\n| Parameter          | Description                |\n| ------------------ | -------------------------- |\n| `:oauth_client_id` | The ID of the OAuth Client |\n\n| Status  | Response                                       | Reason                                                         |\n| ------- | ---------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"oauth-tokens\"`) | Success                                                        |\n| [404][] | [JSON API error object][]                      | OAuth Client not found, or user unauthorized to perform action |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.  If neither pagination query parameters are provided, the endpoint will not be paginated and will return all results.\n\n| Parameter      | Description                                                                  |\n| -------------- | ---------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.           |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 oauth tokens per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/oauth-clients\/oc-GhHqb5rkeK19mLB8\/oauth-tokens\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"ot-hmAyP66qk2AMVdbJ\",\n      \"type\": \"oauth-tokens\",\n      \"attributes\": {\n        \"created-at\":\"2017-11-02T06:37:49.284Z\",\n        \"service-provider-user\":\"skierkowski\",\n        \"has-ssh-key\": false\n      },\n      \"relationships\": {\n        \"oauth-client\": {\n          \"data\": {\n            \"id\": \"oc-GhHqb5rkeK19mLB8\",\n            \"type\": \"oauth-clients\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/oauth-clients\/oc-GhHqb5rkeK19mLB8\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/oauth-tokens\/ot-hmAyP66qk2AMVdbJ\"\n      }\n    }\n  ]\n}\n```\n\n## Show an OAuth Token\n\n`GET \/oauth-tokens\/:id`\n\n| Parameter | Description                       |\n| --------- | --------------------------------- |\n| `:id`     | The ID of the OAuth token to show |\n\n| Status  | Response                                       | Reason                                                        |\n| ------- | ---------------------------------------------- | ------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"oauth-tokens\"`) | Success                                                       |\n| [404][] | [JSON API error object][]                      | OAuth Token not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/oauth-tokens\/ot-29t7xkUKiNC2XasL\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"ot-29t7xkUKiNC2XasL\",\n    \"type\": 
\"oauth-tokens\",\n    \"attributes\": {\n      \"created-at\": \"2018-08-29T14:07:22.144Z\",\n      \"service-provider-user\": \"EM26Jj0ikRsIFFh3fE5C\",\n      \"has-ssh-key\": false\n    },\n    \"relationships\": {\n      \"oauth-client\": {\n        \"data\": {\n          \"id\": \"oc-WMipGbuW8q7xCRmJ\",\n          \"type\": \"oauth-clients\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/oauth-clients\/oc-WMipGbuW8q7xCRmJ\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/oauth-tokens\/ot-29t7xkUKiNC2XasL\"\n    }\n  }\n}\n```\n\n## Update an OAuth Token\n\n`PATCH \/oauth-tokens\/:id`\n\n| Parameter | Description                         |\n| --------- | ----------------------------------- |\n| `:id`     | The ID of the OAuth token to update |\n\n| Status  | Response                                       | Reason                                                         |\n| ------- | ---------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"oauth-tokens\"`) | OAuth Token successfully updated                               |\n| [404][] | [JSON API error object][]                      | OAuth Token not found or user unauthorized to perform action   |\n| [422][] | [JSON API error object][]                      | Malformed request body (missing attributes, wrong types, etc.) |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                  | Type   | Default | Description               |\n| ------------------------- | ------ | ------- | ------------------------- |\n| `data.type`               | string |         | Must be `\"oauth-tokens\"`. 
|\n| `data.attributes.ssh-key` | string |         | **Optional.** The SSH key |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"id\": \"ot-29t7xkUKiNC2XasL\",\n    \"type\": \"oauth-tokens\",\n    \"attributes\": {\n      \"ssh-key\": \"...\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/oauth-tokens\/ot-29t7xkUKiNC2XasL\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"ot-29t7xkUKiNC2XasL\",\n    \"type\": \"oauth-tokens\",\n    \"attributes\": {\n      \"created-at\": \"2018-08-29T14:07:22.144Z\",\n      \"service-provider-user\": \"EM26Jj0ikRsIFFh3fE5C\",\n      \"has-ssh-key\": false\n    },\n    \"relationships\": {\n      \"oauth-client\": {\n        \"data\": {\n          \"id\": \"oc-WMipGbuW8q7xCRmJ\",\n          \"type\": \"oauth-clients\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/oauth-clients\/oc-WMipGbuW8q7xCRmJ\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/oauth-tokens\/ot-29t7xkUKiNC2XasL\"\n    }\n  }\n}\n```\n\n## Destroy an OAuth Token\n\n`DELETE \/oauth-tokens\/:id`\n\n| Parameter | Description                          |\n| --------- | ------------------------------------ |\n| `:id`     | The ID of the OAuth Token to destroy |\n\n| Status  | Response                  | Reason                                                        |\n| ------- | ------------------------- | ------------------------------------------------------------- |\n| [204][] | Empty response            | The OAuth Token was successfully destroyed                    |\n| [404][] | [JSON API error object][] | OAuth Token not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header 
\"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/oauth-tokens\/ot-29t7xkUKiNC2XasL\n```","site":"terraform","answers_cleaned":"    page title  OAuth Tokens   API Docs   HCP Terraform description       Use the   oauth tokens  endpoint to manage the OAuth tokens used to connect a workspace to a VCS provider  List  show  update  and destroy tokens using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    OAuth Tokens  The  oauth token  object represents a VCS configuration which includes the OAuth connection and the associated OAuth token  This object is used when creating a workspace to identify which VCS connection to use      List OAuth Tokens  List all the OAuth Tokens for a given OAuth Client   GET  oauth clients  oauth client id oauth tokens     Parameter            Description                                                                         
 oauth client id    The ID of the OAuth Client      Status    Response                                         Reason                                                                                                                                                                                            200       JSON API document      type   oauth tokens      Success                                                             404       JSON API error object                           OAuth Client not found  or user unauthorized to perform action        Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs   If neither pagination query parameters are provided  the endpoint will not be paginated and will return all results     Parameter        Description                                                                                                                                                                         page number       Optional    If omitted  the endpoint will return the first page                 page size         Optional    If omitted  the endpoint will return 20 oauth tokens per page         Sample Request     shell curl       header  Authorization  Bearer  TOKEN      https   app terraform io api v2 oauth clients oc GhHqb5rkeK19mLB8 oauth tokens          Sample Response     json      data                  id    ot hmAyP66qk2AMVdbJ          type    oauth tokens          attributes              created at   2017 11 02T06 37 49 284Z            service provider user   skierkowski            has ssh key   false                 relationships              oauth client                data                  id    oc GhHqb5rkeK19mLB8                type    oauth clients                          links                  related     api v2 oauth clients oc 
GhHqb5rkeK19mLB8                                        links              self     api v2 oauth tokens ot hmAyP66qk2AMVdbJ                              Show an OAuth Token   GET  oauth tokens  id     Parameter   Description                                                                               id        The ID of the OAuth token to show      Status    Response                                         Reason                                                                                                                                                                                          200       JSON API document      type   oauth tokens      Success                                                            404       JSON API error object                           OAuth Token not found  or user unauthorized to perform action        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 oauth tokens ot 29t7xkUKiNC2XasL          Sample Response     json      data          id    ot 29t7xkUKiNC2XasL        type    oauth tokens        attributes            created at    2018 08 29T14 07 22 144Z          service provider user    EM26Jj0ikRsIFFh3fE5C          has ssh key   false             relationships            oauth client              data                id    oc WMipGbuW8q7xCRmJ              type    oauth clients                      links                related     api v2 oauth clients oc WMipGbuW8q7xCRmJ                                links            self     api v2 oauth tokens ot 29t7xkUKiNC2XasL                      Update an OAuth Token   PATCH  oauth tokens  id     Parameter   Description                                                                                   id        The ID of the OAuth token to update      Status    Response                                         Reason                                        
                                                                                                                                                    200       JSON API document      type   oauth tokens      OAuth Token successfully updated                                    404       JSON API error object                           OAuth Token not found or user unauthorized to perform action        422       JSON API error object                           Malformed request body  missing attributes  wrong types  etc          Request Body  This PATCH endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                    Type     Default   Description                                                                                                 data type                  string             Must be   oauth tokens         data attributes ssh key    string               Optional    The SSH key        Sample Payload     json      data          id    ot 29t7xkUKiNC2XasL        type    oauth tokens        attributes            ssh key                              Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json     https   app terraform io api v2 oauth tokens ot 29t7xkUKiNC2XasL          Sample Response     json      data          id    ot 29t7xkUKiNC2XasL        type    oauth tokens        attributes            created at    2018 08 29T14 07 22 144Z          service provider user    EM26Jj0ikRsIFFh3fE5C          has ssh key   false             relationships            oauth client              data                id    oc WMipGbuW8q7xCRmJ              type    oauth clients                      links                related     api v2 oauth clients oc WMipGbuW8q7xCRmJ                                links            self     api v2 oauth tokens ot 
29t7xkUKiNC2XasL                      Destroy an OAuth Token   DELETE  oauth tokens  id     Parameter   Description                                                                                     id        The ID of the OAuth Token to destroy      Status    Response                    Reason                                                                                                                                                                     204      Empty response              The OAuth Token was successfully destroyed                         404       JSON API error object      OAuth Token not found  or user unauthorized to perform action        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 oauth tokens ot 29t7xkUKiNC2XasL    "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the authentication tokens endpoint to manage agent tokens List show create and destroy tokens using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201 page title Agent Tokens API Docs HCP Terraform","answers":"---\npage_title: Agent Tokens - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/authentication-tokens` endpoint to manage agent tokens. List, show, create, and destroy tokens using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Agent Tokens API\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/agents.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n## List Agent Tokens\n\n`GET 
\/agent-pools\/:agent_pool_id\/authentication-tokens`\n\n| Parameter        | Description               |\n| ---------------- | ------------------------- |\n| `:agent_pool_id` | The ID of the Agent Pool. |\n\nThe objects returned by this endpoint only contain metadata, and do not include the secret text of any authentication tokens. A token is only shown upon creation, and cannot be recovered later.\n\n| Status  | Response                                                | Reason                                                       |\n| ------- | ------------------------------------------------------- | ------------------------------------------------------------ |\n| [200][] | [JSON API document][] (`type: \"authentication-tokens\"`) | Success                                                      |\n| [404][] | [JSON API error object][]                               | Agent Pool not found, or user unauthorized to perform action |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                            |\n| -------------- | ---------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.     |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 tokens per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-MCf6kkxu5FyHbqhd\/authentication-tokens\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": [\n        {\n            \"id\": \"at-bonpPzYqv2bGD7vr\",\n            \"type\": \"authentication-tokens\",\n            \"attributes\": {\n                \"created-at\": \"2020-08-07T19:38:20.868Z\",\n                \"last-used-at\": \"2020-08-07T19:40:55.139Z\",\n                \"description\": \"asdfsdf\",\n                \"token\": null\n            },\n            \"relationships\": {\n                \"created-by\": {\n                    \"data\": {\n                        \"id\": \"user-Nxv6svuhVrTW7eb1\",\n                        \"type\": \"users\"\n                    }\n                }\n            }\n        }\n    ],\n    \"links\": {\n        \"self\": \"https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-MCf6kkxu5FyHbqhd\/authentication-tokens?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n        \"first\": \"https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-MCf6kkxu5FyHbqhd\/authentication-tokens?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n        \"prev\": null,\n        \"next\": null,\n        \"last\": \"https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-MCf6kkxu5FyHbqhd\/authentication-tokens?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n    },\n    \"meta\": {\n        \"pagination\": {\n            \"current-page\": 1,\n            \"prev-page\": null,\n            \"next-page\": null,\n            \"total-pages\": 1,\n            \"total-count\": 1\n        }\n    }\n}\n```\n\n## Show an Agent Token\n\n`GET \/authentication-tokens\/:id`\n\n| Parameter | Description                       |\n| --------- | --------------------------------- |\n| `:id`     | The ID of the Agent Token to show |\n\n| Status  | 
Response                                                | Reason                                                        |\n| ------- | ------------------------------------------------------- | ------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"authentication-tokens\"`) | Success                                                       |\n| [404][] | [JSON API error object][]                               | Agent Token not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/authentication-tokens\/at-bonpPzYqv2bGD7vr\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"id\": \"at-bonpPzYqv2bGD7vr\",\n        \"type\": \"authentication-tokens\",\n        \"attributes\": {\n            \"created-at\": \"2020-08-07T19:38:20.868Z\",\n            \"last-used-at\": \"2020-08-07T19:40:55.139Z\",\n            \"description\": \"test token\",\n            \"token\": null\n        },\n        \"relationships\": {\n            \"created-by\": {\n                \"data\": {\n                    \"id\": \"user-Nxv6svuhVrTW7eb1\",\n                    \"type\": \"users\"\n                }\n            }\n        }\n    }\n}\n```\n\n## Create an Agent Token\n\n`POST \/agent-pools\/:agent_pool_id\/authentication-tokens`\n\n| Parameter        | Description              |\n| ---------------- | ------------------------ |\n| `:agent_pool_id` | The ID of the Agent Pool |\n\nThis endpoint returns the secret text of the created authentication token. 
A token is only shown upon creation, and cannot be recovered later.\n\n| Status  | Response                                                | Reason                                                         |\n| ------- | ------------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"authentication-tokens\"`) | The request was successful                                     |\n| [404][] | [JSON API error object][]                               | Agent Pool not found or user unauthorized to perform action    |\n| [422][] | [JSON API error object][]                               | Malformed request body (missing attributes, wrong types, etc.) |\n| [500][] | [JSON API error object][]                               | Failure during Agent Token creation                            |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                      | Type   | Default | Description                          |\n| ----------------------------- | ------ | ------- | ------------------------------------ |\n| `data.type`                   | string |         | Must be `\"authentication-tokens\"`.   |\n| `data.attributes.description` | string |         | The description for the Agent Token. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"authentication-tokens\",\n    \"attributes\": {\n      \"description\":\"api\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/agent-pools\/apool-xkuMi7x4LsEnBUdY\/authentication-tokens\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"id\": \"at-2rG2oYU9JEvfaqji\",\n        \"type\": \"authentication-tokens\",\n        \"attributes\": {\n            \"created-at\": \"2020-08-10T22:29:21.907Z\",\n            \"last-used-at\": null,\n            \"description\": \"api\",\n            \"token\": \"eHub7TsW7fz7LQ.atlasv1.cHGFcvf2VxVfUH4PZ7UNdaGB6SjyKWs5phdZ371zkI2KniZs2qKgrAcazhlsiy02awk\"\n        },\n        \"relationships\": {\n            \"created-by\": {\n                \"data\": {\n                    \"id\": \"user-Nxv6svuhVrTW7eb1\",\n                    \"type\": \"users\"\n                }\n            }\n        }\n    }\n}\n```\n\n## Destroy an Agent Token\n\n`DELETE \/api\/v2\/authentication-tokens\/:id`\n\n| Parameter | Description                           |\n| --------- | ------------------------------------- |\n| `:id`     | The ID of the Agent Token to destroy. 
|\n\n| Status  | Response                  | Reason                                                        |\n| ------- | ------------------------- | ------------------------------------------------------------- |\n| [204][] | Empty response            | The Agent Token was successfully destroyed                    |\n| [404][] | [JSON API error object][] | Agent Token not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/authentication-tokens\/at-6yEmxNAhaoQLH1Da\n```","site":"terraform","answers_cleaned":"    page title  Agent Tokens   API Docs   HCP Terraform description       Use the   authentication tokens  endpoint to manage agent tokens  List  show  create  and destroy tokens using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   
jsonapi org format  error objects    Agent Tokens API       BEGIN  TFC only name pnp callout      include  tfc package callouts agents mdx       END  TFC only name pnp callout         List Agent Tokens   GET  agent pools  agent pool id authentication tokens     Parameter          Description                                                                      agent pool id    The ID of the Agent Pool     The objects returned by this endpoint only contain metadata  and do not include the secret text of any authentication tokens  A token is only shown upon creation  and cannot be recovered later     Status    Response                                                  Reason                                                                                                                                                                                                 200       JSON API document      type   authentication tokens      Success                                                           404       JSON API error object                                    Agent Pool not found  or user unauthorized to perform action        Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter        Description                                                                                                                                                             page number       Optional    If omitted  the endpoint will return the first page           page size         Optional    If omitted  the endpoint will return 20 tokens per page         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 agent pools apool MCf6kkxu5FyHbqhd authentication tokens 
         Sample Response     json        data                            id    at bonpPzYqv2bGD7vr                type    authentication tokens                attributes                      created at    2020 08 07T19 38 20 868Z                    last used at    2020 08 07T19 40 55 139Z                    description    asdfsdf                    token   null                             relationships                      created by                          data                              id    user Nxv6svuhVrTW7eb1                            type    users                                                                              links              self    https   app terraform io api v2 agent pools apool MCf6kkxu5FyHbqhd authentication tokens page 5Bnumber 5D 1 page 5Bsize 5D 20            first    https   app terraform io api v2 agent pools apool MCf6kkxu5FyHbqhd authentication tokens page 5Bnumber 5D 1 page 5Bsize 5D 20            prev   null           next   null           last    https   app terraform io api v2 agent pools apool MCf6kkxu5FyHbqhd authentication tokens page 5Bnumber 5D 1 page 5Bsize 5D 20              meta              pagination                  current page   1               prev page   null               next page   null               total pages   1               total count   1                           Show an Agent Token   GET  authentication tokens  id     Parameter   Description                                                                               id        The ID of the Agent Token to show      Status    Response                                                  Reason                                                                                                                                                                                                   200       JSON API document      type   authentication tokens      Success                                                            404       JSON API error object   
                                 Agent Token not found  or user unauthorized to perform action        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 authentication tokens at bonpPzYqv2bGD7vr          Sample Response     json        data              id    at bonpPzYqv2bGD7vr            type    authentication tokens            attributes                  created at    2020 08 07T19 38 20 868Z                last used at    2020 08 07T19 40 55 139Z                description    test token                token   null                     relationships                  created by                      data                          id    user Nxv6svuhVrTW7eb1                        type    users                                                            Create an Agent Token   POST  agent pools  agent pool id authentication tokens     Parameter          Description                                                                    agent pool id    The ID of the Agent Pool    This endpoint returns the secret text of the created authentication token  A token is only shown upon creation  and cannot be recovered later     Status    Response                                                  Reason                                                                                                                                                                                                     201       JSON API document      type   authentication tokens      The request was successful                                          404       JSON API error object                                    Agent Pool not found or user unauthorized to perform action         422       JSON API error object                                    Malformed request body  missing attributes  wrong types  etc        500       JSON API error object                             
       Failure during Agent Token creation                                   Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                        Type     Default   Description                                                                                                                           data type                      string             Must be   authentication tokens           data attributes description    string             The description for the Agent Token         Sample Payload     json      data          type    authentication tokens        attributes            description   api                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 agent pools apool xkuMi7x4LsEnBUdY authentication tokens          Sample Response     json        data              id    at 2rG2oYU9JEvfaqji            type    authentication tokens            attributes                  created at    2020 08 10T22 29 21 907Z                last used at   null               description    api                token    eHub7TsW7fz7LQ atlasv1 cHGFcvf2VxVfUH4PZ7UNdaGB6SjyKWs5phdZ371zkI2KniZs2qKgrAcazhlsiy02awk                      relationships                  created by                      data                          id    user Nxv6svuhVrTW7eb1                        type    users                                                            Destroy an Agent Token   DELETE  api v2 authentication tokens  id     Parameter   Description                                                                                       id        The ID of the Agent Token to destroy       Status    Response                    Reason                                                                                  
                                                                                   204      Empty response              The Agent Token was successfully destroyed                         404       JSON API error object      Agent Token not found  or user unauthorized to perform action        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 authentication tokens at 6yEmxNAhaoQLH1Da    "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Account API Docs HCP Terraform Use the account endpoint to manage the current user Get user details update account info and change the user password with the HTTP API","answers":"---\npage_title: Account - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/account` endpoint to manage the current user. Get user details, update account info, and change the user password with the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Account API\n\nAccount represents the current user interacting with Terraform. 
It returns the same type of object as the [Users](\/terraform\/cloud-docs\/api-docs\/users) API, but also includes an email address, which is hidden when viewing info about other users. \n\nFor internal reasons, HCP Terraform associates team and organization tokens with a synthetic user account called _service user_. HCP Terraform returns the associated service user for account requests authenticated by a team or organization token. Use the `authenticated-resource` relationship to access the underlying team or organization associated with a token. For user tokens, you can use the user itself.\n\n## Get your account details\n\n`GET \/account\/details`\n\n| Status  | Response                                | Reason                     |\n| ------- | --------------------------------------- | -------------------------- |\n| [200][] | [JSON API document][] (`type: \"users\"`) | The request was successful |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/account\/details\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"user-V3R563qtJNcExAkN\",\n    \"type\": \"users\",\n    \"attributes\": {\n      \"username\": \"admin\",\n      \"is-service-account\": false,\n      \"auth-method\": \"tfc\",\n      \"avatar-url\": \"https:\/\/www.gravatar.com\/avatar\/9babb00091b97b9ce9538c45807fd35f?s=100&d=mm\",\n      \"v2-only\": false,\n      \"is-site-admin\": true,\n      \"is-sso-login\": false,\n      \"email\": \"admin@hashicorp.com\",\n      \"unconfirmed-email\": null,\n      \"permissions\": {\n        \"can-create-organizations\": true,\n        \"can-change-email\": true,\n        \"can-change-username\": true\n      }\n    },\n    \"relationships\": {\n      \"authentication-tokens\": {\n        \"links\": {\n          \"related\": 
\"\/api\/v2\/users\/user-V3R563qtJNcExAkN\/authentication-tokens\"\n        }\n      },\n      \"authenticated-resource\": {\n        \"data\": {\n          \"id\": \"user-V3R563qtJNcExAkN\",\n          \"type\": \"users\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/users\/user-V3R563qtJNcExAkN\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/users\/user-V3R563qtJNcExAkN\"\n    }\n  }\n}\n```\n\n## Update your account info\n\nYour username and email address can be updated with this endpoint.\n\n`PATCH \/account\/update`\n\n| Status  | Response                                | Reason                                                         |\n| ------- | --------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"users\"`) | Your info was successfully updated                             |\n| [401][] | [JSON API error object][]               | Unauthorized                                                   |\n| [422][] | [JSON API error object][]               | Malformed request body (missing attributes, wrong types, etc.) 
|\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\n| Key path                   | Type   | Default | Description                                                     |\n| -------------------------- | ------ | ------- | --------------------------------------------------------------- |\n| `data.type`                | string |         | Must be `\"users\"`                                               |\n| `data.attributes.username` | string |         | New username                                                    |\n| `data.attributes.email`    | string |         | New email address (must be confirmed afterwards to take effect) |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"users\",\n    \"attributes\": {\n      \"email\": \"admin@example.com\",\n      \"username\": \"admin\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/account\/update\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"user-V3R563qtJNcExAkN\",\n    \"type\": \"users\",\n    \"attributes\": {\n      \"username\": \"admin\",\n      \"is-service-account\": false,\n      \"auth-method\": \"hcp_username_password\",\n      \"avatar-url\": \"https:\/\/www.gravatar.com\/avatar\/9babb00091b97b9ce9538c45807fd35f?s=100&d=mm\",\n      \"v2-only\": false,\n      \"is-site-admin\": true,\n      \"is-sso-login\": false,\n      \"email\": \"admin@hashicorp.com\",\n      \"unconfirmed-email\": null,\n      \"permissions\": {\n        \"can-create-organizations\": true,\n        \"can-change-email\": true,\n        \"can-change-username\": true\n      }\n    },\n    \"relationships\": {\n      \"authentication-tokens\": {\n        \"links\": {\n          \"related\": 
\"\/api\/v2\/users\/user-V3R563qtJNcExAkN\/authentication-tokens\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/users\/user-V3R563qtJNcExAkN\"\n    }\n  }\n}\n```\n\n## Change your password\n\n`PATCH \/account\/password`\n\n| Status  | Response                                | Reason                                                         |\n| ------- | --------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"users\"`) | Your password was successfully changed                         |\n| [401][] | [JSON API error object][]               | Unauthorized                                                   |\n| [422][] | [JSON API error object][]               | Malformed request body (missing attributes, wrong types, etc.) |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\n| Key path                                | Type   | Default | Description                 |\n| --------------------------------------- | ------ | ------- | --------------------------- |\n| `data.type`                             | string |         | Must be `\"users\"`           |\n| `data.attributes.current_password`      | string |         | Current password            |\n| `data.attributes.password`              | string |         | New password (must be at least 10 characters in length)               |\n| `data.attributes.password_confirmation` | string |         | New password (confirmation) |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"users\",\n    \"attributes\": {\n      \"current_password\": \"current password e.g. 2:C)e'G4{D\\n06:[d1~y\",\n      \"password\": \"new password e.g. 34rk492+jgLL0@xhfyisj\",\n      \"password_confirmation\": \"new password e.g. 
34rk492+jgLL0@xhfyisj\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/account\/password\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"user-V3R563qtJNcExAkN\",\n    \"type\": \"users\",\n    \"attributes\": {\n      \"username\": \"admin\",\n      \"is-service-account\": false,\n      \"auth-method\": \"hcp_github\",\n      \"avatar-url\": \"https:\/\/www.gravatar.com\/avatar\/9babb00091b97b9ce9538c45807fd35f?s=100&d=mm\",\n      \"v2-only\": false,\n      \"is-site-admin\": true,\n      \"is-sso-login\": false,\n      \"email\": \"admin@hashicorp.com\",\n      \"unconfirmed-email\": null,\n      \"permissions\": {\n        \"can-create-organizations\": true,\n        \"can-change-email\": true,\n        \"can-change-username\": true\n      }\n    },\n    \"relationships\": {\n      \"authentication-tokens\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/users\/user-V3R563qtJNcExAkN\/authentication-tokens\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/users\/user-V3R563qtJNcExAkN\"\n    }\n  }\n}\n```","site":"terraform","answers_cleaned":"    page title  Account   API Docs   HCP Terraform description       Use the   account  endpoint to manage the current user  Get user details  update account info  and change the user password with the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer 
mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Account API  Account represents the current user interacting with Terraform  It returns the same type of object as the  Users   terraform cloud docs api docs users  API  but also includes an email address  which is hidden when viewing info about other users    For internal reasons  HCP Terraform associates team and organization tokens with a synthetic user account called  service user   HCP Terraform returns the associated service user for account requests authenticated by a team or organization token  Use the  authenticated resource  relationship to access the underlying team or organization associated with a token  For user tokens  you can use the user  itself      Get your account details   GET  account details     Status    Response                                  Reason                                                                                                             200       JSON API document      type   users      The request was successful        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 account details          Sample Response     json      data          id    user V3R563qtJNcExAkN        type    users        attributes            username    admin     
     is service account   false         auth method    tfc          avatar url    https   www gravatar com avatar 9babb00091b97b9ce9538c45807fd35f s 100 d mm          v2 only   false         is site admin   true         is sso login   false         email    admin hashicorp com          unconfirmed email   null         permissions              can create organizations   true           can change email   true           can change username   true                     relationships            authentication tokens              links                related     api v2 users user V3R563qtJNcExAkN authentication tokens                            authenticated resource              data                id    user V3R563qtJNcExAkN              type    users                      links                related     api v2 users user V3R563qtJNcExAkN                                links            self     api v2 users user V3R563qtJNcExAkN                      Update your account info  Your username and email address can be updated with this endpoint    PATCH  account update     Status    Response                                  Reason                                                                                                                                                                                     200       JSON API document      type   users      Your info was successfully updated                                  401       JSON API error object                    Unauthorized                                                        422       JSON API error object                    Malformed request body  missing attributes  wrong types  etc          Request Body  This PATCH endpoint requires a JSON object with the following properties as a request payload     Key path                     Type     Default   Description                                                                                                                                                        
                      data type                   string             Must be   users                                                      data attributes username    string             New username                                                         data attributes email       string             New email address  must be confirmed afterwards to take effect         Sample Payload     json      data          type    users        attributes            email    admin example com          username    admin                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json     https   app terraform io api v2 account update          Sample Response     json      data          id    user V3R563qtJNcExAkN        type    users        attributes            username    admin          is service account   false         auth method    hcp username password          avatar url    https   www gravatar com avatar 9babb00091b97b9ce9538c45807fd35f s 100 d mm          v2 only   false         is site admin   true         is sso login   false         email    admin hashicorp com          unconfirmed email   null         permissions              can create organizations   true           can change email   true           can change username   true                     relationships            authentication tokens              links                related     api v2 users user V3R563qtJNcExAkN authentication tokens                                links            self     api v2 users user V3R563qtJNcExAkN                      Change your password   PATCH  account password     Status    Response                                  Reason                                                                                                                                                                                     200       JSON API document      type   users     
 Your password was successfully changed                              401       JSON API error object                    Unauthorized                                                        422       JSON API error object                    Malformed request body  missing attributes  wrong types  etc          Request Body  This PATCH endpoint requires a JSON object with the following properties as a request payload     Key path                                  Type     Default   Description                                                                                                                   data type                                string             Must be   users                  data attributes current password         string             Current password                 data attributes password                 string             New password  must be at least 10 characters in length                     data attributes password confirmation    string             New password  confirmation         Sample Payload     json      data          type    users        attributes            current password    current password e g  2 C e G4 D n06  d1 y          password    new password e g  34rk492 jgLL0 xhfyisj          password confirmation    new password e g  34rk492 jLL0 xhfyisj                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json     https   app terraform io api v2 account password          Sample Response     json      data          id    user V3R563qtJNcExAkN        type    users        attributes            username    admin          is service account   false         auth method    hcp github          avatar url    https   www gravatar com avatar 9babb00091b97b9ce9538c45807fd35f s 100 d mm          v2 only   false         is site admin   true         is sso login   false         email    admin hashicorp com        
  unconfirmed email   null         permissions              can create organizations   true           can change email   true           can change username   true                     relationships            authentication tokens              links                related     api v2 users user V3R563qtJNcExAkN authentication tokens                                links            self     api v2 users user V3R563qtJNcExAkN                 "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the github app installations endpoint to view Terraform Cloud GitHub App installations List installations and get details on a specific installation using the HTTP API page title GitHub App Installations API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201","answers":"---\npage_title: GitHub App Installations - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/github-app\/installations` endpoint to view Terraform Cloud GitHub App installations. List installations and get details on a specific installation using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# GitHub App Installations API\n\nYou can create a GitHub App 
installation using the HCP Terraform UI. Learn how to [create a GitHub App installation](\/terraform\/cloud-docs\/vcs\/github-app).\n\n~> **Note:** To use this resource in Terraform Enterprise installations, you must configure the GitHub App in the site admin area.\n\n~> **Note:** You can only use this API if you have already authorized the Terraform Cloud GitHub App. Manage your [GitHub App token](\/terraform\/cloud-docs\/users-teams-organizations\/users#github-app-oauth-token) from **Account Settings** > **Tokens**.\n\n## List Installations\n\nThis endpoint lists GitHub App installations available to the current User.\n\n`GET \/github-app\/installations`\n\n### Query Parameters\n\nQueries only return GitHub App Installations that the current user has access to within GitHub.\n\n| Parameter                               | Description                                                                                                                                             |\n|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `filter[name]`                          | **Optional.** If present, returns a list of available GitHub App installations that match the GitHub organization or login.                             |\n| `filter[installation_id]`               | **Optional.** If present, returns a list of available GitHub App installations that match the installation ID within GitHub. 
(**Not HCP Terraform**) |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/github-app\/installations\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": [\n        {\n            \"id\": \"ghain-BYrbNeGQ8nAzKouu\",\n            \"type\": \"github-app-installations\",\n            \"attributes\": {\n                \"name\": \"octouser\",\n                \"installation-id\": 54810170,\n                \"icon-url\": \"https:\/\/avatars.githubusercontent.com\/u\/29916665?v=4\",\n                \"installation-type\": \"User\",\n                \"installation-url\": \"https:\/\/github.com\/settings\/installations\/54810170\"\n            }\n        }\n    ]\n}\n```\n\n## Show Installation\n\n`GET \/github-app\/installation\/:gh_app_installation_id`\n\n| Parameter                 | Description                    |\n|---------------------------|--------------------------------|\n| `:gh_app_installation_id` | The GitHub App Installation ID |\n\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/github-app\/installation\/ghain-R4xmKTaxnhLFioUq\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"id\": \"ghain-R4xmKTaxnhLFioUq\",\n        \"type\": \"github-app-installations\",\n        \"attributes\": {\n            \"name\": \"octouser\",\n            \"installation-id\": 54810170,\n            \"icon-url\": \"https:\/\/avatars.githubusercontent.com\/u\/29916665?v=4\",\n            \"installation-type\": \"User\",\n            \"installation-url\": \"https:\/\/github.com\/settings\/installations\/54810170\"\n        }\n    }\n}\n```","site":"terraform","answers_cleaned":"    page title  GitHub App Installations   API Docs   HCP Terraform description       Use the   github 
app installations  endpoint to view Terraform Cloud GitHub App installations  List installations and get details on a specific installation using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    GitHub App Installations API  You can create a GitHub App installation using the HCP Terraform UI  Learn how to  create a GitHub App installation   terraform cloud docs vcs github app         Note    To use this resource in Terraform Enterprise installations  you must configure the GitHub App in the site admin area        Note    You can only use this API if you have already authorized the Terraform Cloud GitHub App  Manage your  GitHub App token   terraform cloud docs users teams organizations users github app oauth token  from   Account Settings       Tokens        List Installations  This endpoint lists GitHub App installations available to the current User    GET  github app installations       Query Parameters  
Queries only return GitHub App Installations that the current user has access to within GitHub     Parameter                                 Description                                                                                                                                                                                                                                                                                                                                                        filter name                                Optional    If present  returns a list of available GitHub App installations that match the GitHub organization or login                                   filter installation id                     Optional    If present  returns a list of available GitHub App installations that match the installation ID within GitHub     Not HCP Terraform           Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 github app installations          Sample Response     json        data                            id    ghain BYrbNeGQ8nAzKouu                type    github app installations                attributes                      name    octouser                    installation id   54810170                   icon url    https   avatars githubusercontent com u 29916665 v 4                    installation type    User                    installation url    https   github com settings installations 54810170                                          Show Installation   GET  github app installation  gh app installation id     Parameter                   Description                                                                                         gh app installation id    The Github App Installation ID         Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json 
     https   app terraform io api v2 github app installation ghain R4xmKTaxnhLFioUq          Sample Response     json        data              id    ghain R4xmKTaxnhLFioUq            type    github app installations            attributes                  name    octouser                installation id   54810170               icon url    https   avatars githubusercontent com u 29916665 v 4                installation type    User                installation url    https   github com settings installations 54810170                       "}
{"questions":"terraform VCS Events API Note The VCS Events API is still in beta as support is being added for additional VCS providers Currently only GitLab com connections established after December 2020 are supported page title VCS Events API Docs HCP Terraform Use the vcs events endpoint to query VCS related events List changes within your organization using the HTTP API","answers":"---\npage_title: VCS Events - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/vcs-events` endpoint to query VCS-related events. List changes within your organization using the HTTP API.\n---\n\n# VCS Events API\n\n-> **Note**: The VCS Events API is still in beta as support is being added for additional VCS providers. Currently only GitLab.com connections established after December 2020 are supported.\n\nVCS (version control system) events describe changes within your organization for VCS-related actions. Events are only stored for 10 days. If information about the [OAuth Client](\/terraform\/cloud-docs\/api-docs\/oauth-clients) or [OAuth Token](\/terraform\/cloud-docs\/api-docs\/oauth-tokens) is available at the time of the event, it will be logged with the event.\n\n## List VCS events\n\nThis endpoint lists VCS events for an organization.\n\n`GET \/organizations\/:organization_name\/vcs-events`\n\n| Parameter            | Description                                                                                                                                                        |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to list VCS events from. The organization must already exist in the system and the user must have permissions to manage VCS settings. 
|\n\n-> **Note:** Viewing VCS events is restricted to the owners team, teams with the \"Manage VCS Settings\" permission, and the [organization API token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). ([More about permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions).)\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter                           | Description                                                                                                                                                                                                                     |\n| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `page[number]`                      | **Optional.** If omitted, the endpoint will return the first page.                                                                                                                                                              |\n| `page[size]`                        | **Optional.** If omitted, the endpoint will return 20 VCS events per page.                                                                                                                                                      |\n| `filter[from]`                      | **Optional.** Must be RFC3339 formatted and in UTC. If omitted, the endpoint will default to 10 days ago.                                                                                                                       
|\n| `filter[to]`                        | **Optional.** Must be RFC3339 formatted and in UTC. If omitted, the endpoint will default to now.                                                                                                                               |\n| `filter[oauth_client_external_ids]` | **Optional.** Format as a comma-separated string. If omitted, the endpoint will return all events.                                                                                                                              |\n| `filter[levels]`                    | **Optional.** `info` and `error` are the only accepted values. If omitted, the endpoint will return both info and error events.                                                                                                 |\n| `include`                           | **Optional.** Allows including related resource data. This endpoint only supports `oauth_client` as a value. Only the `name`, `service-provider`, and `id` will be returned on the OAuth Client object in the `included` block. 
|\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/vcs-events?filter%5Bfrom%5D=2021-02-02T14%3A09%3A00Z&filter%5Bto%5D=2021-02-12T14%3A09%3A59Z&filter%5Boauth_client_external_ids%5D=oc-hhTM7WNUUgbXJpkW&filter%5Blevels%5D=info&include=oauth_client\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"ve-DJpbEwZc98ZedHZG\",\n      \"type\": \"vcs-events\",\n      \"attributes\": {\n        \"created-at\": \"2021-02-09 20:07:49.686182 +0000 UTC\",\n        \"level\": \"info\",\n        \"message\": \"Loaded 11 repositories\",\n        \"organization-id\": \"org-SBVreZxVessAmCZG\"\n      },\n      \"relationships\": {\n        \"oauth-client\": {\n          \"data\": {\n            \"id\": \"oc-LePsVhHXhCM6jWf3\",\n            \"type\": \"oauth-clients\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/oauth-clients\/oc-LePsVhHXhCM6jWf3\"\n          }\n        },\n        \"oauth-token\": {\n          \"data\": {\n            \"id\": \"ot-Ma2cs8tzjv3LYZHw\",\n            \"type\": \"oauth-tokens\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/oauth-tokens\/ot-Ma2cs8tzjv3LYZHw\"\n          }\n        }\n      }\n    }\n  ],\n  \"included\": [\n    {\n      \"id\": \"oc-LePsVhHXhCM6jWf3\",\n      \"type\": \"oauth-clients\",\n      \"attributes\": {\n        \"name\": \"working\",\n        \"service-provider\": \"gitlab_hosted\"\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/my-organization\"\n          }\n        },\n        \"oauth-tokens\": {\n          \"data\": [\n            {\n              
\"id\": \"ot-Ma2cs8tzjv3LYZHw\",\n              \"type\": \"oauth-tokens\"\n            }\n          ]\n        }\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/vcs-events?filter%5Bfrom%5D=2021-02-02T14%3A09%3A00Z\\u0026filter%5Blevels%5D=info\\u0026filter%5Boauth_client_external_ids%5D=oc-LePsVhHXhCM6jWf3\\u0026filter%5Bto%5D=2021-02-12T14%3A09%3A59Z\\u0026include=oauth_client\\u0026organization_name=my-organization\\u0026page%5Bnumber%5D=1\\u0026page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/vcs-events?filter%5Bfrom%5D=2021-02-02T14%3A09%3A00Z\\u0026filter%5Blevels%5D=info\\u0026filter%5Boauth_client_external_ids%5D=oc-LePsVhHXhCM6jWf3\\u0026filter%5Bto%5D=2021-02-12T14%3A09%3A59Z\\u0026include=oauth_client\\u0026organization_name=my-organization\\u0026page%5Bnumber%5D=1\\u0026page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/vcs-events?filter%5Bfrom%5D=2021-02-02T14%3A09%3A00Z\\u0026filter%5Blevels%5D=info\\u0026filter%5Boauth_client_external_ids%5D=oc-LePsVhHXhCM6jWf3\\u0026filter%5Bto%5D=2021-02-12T14%3A09%3A59Z\\u0026include=oauth_client\\u0026organization_name=my-organization\\u0026page%5Bnumber%5D=1\\u0026page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 8\n    }\n  }\n}\n```","site":"terraform","answers_cleaned":"    page title  VCS Events   API Docs   HCP Terraform description       Use the   vcs events  endpoint to query VCS related events  List changes within your organization using the HTTP API         VCS Events API       Note    The VCS Events API is still in beta as support is being added for additional VCS providers  Currently only GitLab com connections established 
after December 2020 are supported   VCS  version control system  events describe changes within your organization for VCS related actions  Events are only stored for 10 days  If information about the  OAuth Client   terraform cloud docs api docs oauth clients  or  OAuth Token   terraform cloud docs api docs oauth tokens  are available at the time of the event  it will be logged with the event      List VCS events  This endpoint lists VCS events for an organization   GET  organizations  organization name vcs events     Parameter              Description                                                                                                                                                                                                                                                                                                                                                            organization name    The name of the organization to list VCS events from  The organization must already exist in the system and the user must have permissions to manage VCS settings          Note    Viewing VCS events is restricted to the owners team  teams with the  Manage VCS Settings   and the  organization API token   terraform cloud docs users teams organizations api tokens organization api tokens     More about permissions   terraform cloud docs users teams organizations permissions         Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter                             Description                                                                                                                                                                                                                                                                                                    
                                                                                                                                                                                                page number                            Optional    If omitted  the endpoint will return the first page                                                                                                                                                                    page size                              Optional    If omitted  the endpoint will return 20 workspaces per page                                                                                                                                                            filter from                            Optional    Must be RFC3339 formatted and in UTC  If omitted  the endpoint will default to 10 days ago                                                                                                                             filter to                              Optional    Must be RFC3339 formatted and in UTC  If omitted  the endpoint will default to now                                                                                                                                     filter oauth client external ids       Optional    Format as a comma separated string  If omitted  the endpoint will return all events                                                                                                                                    filter levels                          Optional     info  and  error  are the only accepted values  If omitted  the endpoint will return both info and error events                                                                                                       include                                Optional    Allows including related resource data  This endpoint only supports  oauth client  as a value  Only the  name    service provider   and  id  will be returned on 
the OAuth Client object in the  included  block         Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations my organization vcs events filter 5Bfrom 5D 2021 02 02T14 3A09 3A00Z filter 5Bto 5D 2021 02 12T14 3A09 3A59Z filter 5Boauth client external ids 5D oc hhTM7WNUUgbXJpkW filter 5Blevels 5D info include oauth client          Sample Response     json      data                  id    ve DJpbEwZc98ZedHZG          type    vcs events          attributes              created at    2021 02 09 20 07 49 686182  0000 UTC            level    info            message    Loaded 11 repositories            organization id    org SBVreZxVessAmCZG                  relationships              oauth client                data                  id    oc LePsVhHXhCM6jWf3                type    oauth clients                          links                  related     api v2 oauth clients oc LePsVhHXhCM6jWf3                                  oauth token                data                  id    ot Ma2cs8tzjv3LYZHw                type    oauth tokens                          links                  related     api v2 oauth tokens ot Ma2cs8tzjv3LYZHw                                              included                  id    oc LePsVhHXhCM6jWf3          type    oauth clients          attributes              name    working            service provider    gitlab hosted                  relationships              organization                data                  id    my organization                type    organizations                          links                  related     api v2 organizations my organization                                  oauth tokens                data                                  id    ot Ma2cs8tzjv3LYZHw                  type    oauth tokens                                                            links          self    https   app 
terraform io api v2 organizations my organization vcs events filter 5Bfrom 5D 2021 02 02T14 3A09 3A00Z u0026filter 5Blevels 5D info u0026filter 5Boauth client external ids 5D oc LePsVhHXhCM6jWf3 u0026filter 5Bto 5D 2021 02 12T14 3A09 3A59Z u0026include oauth client u0026organization name my organization u0026page 5Bnumber 5D 1 u0026page 5Bsize 5D 20        first    https   app terraform io api v2 organizations my organization vcs events filter 5Bfrom 5D 2021 02 02T14 3A09 3A00Z u0026filter 5Blevels 5D info u0026filter 5Boauth client external ids 5D oc LePsVhHXhCM6jWf3 u0026filter 5Bto 5D 2021 02 12T14 3A09 3A59Z u0026include oauth client u0026organization name my organization u0026page 5Bnumber 5D 1 u0026page 5Bsize 5D 20        prev   null       next   null       last    https   app terraform io api v2 organizations my organization vcs events filter 5Bfrom 5D 2021 02 02T14 3A09 3A00Z u0026filter 5Blevels 5D info u0026filter 5Boauth client external ids 5D oc LePsVhHXhCM6jWf3 u0026filter 5Bto 5D 2021 02 12T14 3A09 3A59Z u0026include oauth client u0026organization name my organization u0026page 5Bnumber 5D 1 u0026page 5Bsize 5D 20          meta          pagination            current page   1         prev page   null         next page   null         total pages   1         total count   8                "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the comments endpoint to manage a Terraform run s comments List show and create comments using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201 page title Comments API Docs HCP Terraform","answers":"---\npage_title: Comments - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/comments` endpoint to manage a Terraform run's comments. List, show, and create comments using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[307]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/307\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Comments API\n\nComments allow users to leave feedback or record decisions about a run. 
\n\n## List Comments for a Run\n\n`GET \/runs\/:id\/comments`\n\n| Parameter | Description        |\n| --------- | ------------------ |\n| `id`      | The ID of the run. |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-KTuq99JSzgmDSvYj\/comments\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"wsc-JdFX3u8o114F4CWf\",\n      \"type\": \"comments\",\n      \"attributes\": {\n        \"body\": \"A comment body\"\n      },\n      \"relationships\": {\n        \"run-event\": {\n          \"data\": {\n            \"id\": \"re-fo1YXZ8W5bp5GBKM\",\n            \"type\": \"run-events\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/run-events\/re-fo1YXZ8W5bp5GBKM\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/comments\/wsc-JdFX3u8o114F4CWf\"\n      }\n    },\n    {\n      \"id\": \"wsc-QdhSPFTNoCTpfafp\",\n      \"type\": \"comments\",\n      \"attributes\": {\n        \"body\": \"Another comment body\"\n      },\n      \"relationships\": {\n        \"run-event\": {\n          \"data\": {\n            \"id\": \"re-fo1YXZ8W5bp5GBKM\",\n            \"type\": \"run-events\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/run-events\/re-fo1YXZ8W5bp5GBKM\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/comments\/wsc-QdhSPFTNoCTpfafp\"\n      }\n    }\n  ]\n}\n```\n\n## Show a Comment\n\n`GET \/comments\/:id`\n\n| Parameter | Description            |\n| --------- | ---------------------- |\n| `id`      | The ID of the comment. 
|\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/comments\/wsc-gTFq83JSzjmAvYj\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"wsc-gTFq83JSzjmAvYj\",\n    \"type\": \"comments\",\n    \"attributes\": {\n      \"body\": \"Another comment\"\n    },\n    \"relationships\": {\n      \"run-event\": {\n        \"data\": {\n            \"id\": \"re-8RB5ZaFrDanG2hGY\",\n            \"type\": \"run-events\"\n        },\n        \"links\": {\n            \"related\": \"\/api\/v2\/run-events\/re-8RB5ZaFrDanG2hGY\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/comments\/wsc-gTFq83JSzjmAvYj\"\n    }\n  }\n}\n```\n\n## Create Comment\n\n`POST \/runs\/:id\/comments`\n\n| Parameter | Description        |\n| --------- | ------------------ |\n| `id`      | The ID of the run. |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as the request payload.\n\n| Key Path                 | Type   | Default | Description\n| ------------------------ | ------ | ------- | ----------------\n| `data.type`              | string |         | Must be `\"comments\"`.\n| `data.attributes.body`   | string |         | The body of the comment.\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"body\": \"A comment about the run\"\n    },\n    \"type\": \"comments\"\n  }\n}\n```\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-KTuq99JSzgmDSvYj\/comments\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"wsc-oRiShushpgLU4JD2\",\n    \"type\": \"comments\",\n    \"attributes\": {\n      \"body\": \"A comment about the 
run\"\n    },\n    \"relationships\": {\n      \"run-event\": {\n          \"data\": {\n            \"id\": \"re-E3xsBX11F1fbm2zV\",\n            \"type\": \"run-events\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/run-events\/re-E3xsBX11F1fbm2zV\"\n          }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/comments\/wsc-oRiShushpgLU4JD2\"\n    }\n  }\n}\n```","site":"terraform","answers_cleaned":"    page title  Comments   API Docs   HCP Terraform description       Use the   comments  endpoint to manage with a Terraform run s comments  List  show  and create comments using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   307   https   developer mozilla org en US docs Web HTTP Status 307   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Comments API  Comments allow users to leave feedback or record decisions about a run       List Comments for a Run   GET  runs  id comments     Parameter   Description     
                                           id         The ID of the run         Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 runs run KTuq99JSzgmDSvYj comments          Sample Response     json      data                  id    wsc JdFX3u8o114F4CWf          type    comments          attributes              body    A comment body                  relationships              run event                data                  id    re fo1YXZ8W5bp5GBKM                type    run events                          links                  related     api v2 run events re fo1YXZ8W5bp5GBKM                                        links              self     api v2 comments wsc JdFX3u8o114F4CWf                              id    wsc QdhSPFTNoCTpfafp          type    comments          attributes              body    Another comment body                  relationships              run event                data                  id    re fo1YXZ8W5bp5GBKM                type    run events                          links                  related     api v2 run events re fo1YXZ8W5bp5GBKM                                        links              self     api v2 comments wsc QdhSPFTNoCTpfafp                              Show a Comment   GET  comments  id     Parameter   Description                                                        id         The ID of the comment         Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 comments wsc gTFq83JSzjmAvYj          Sample Response     json      data          id    wsc gTFq83JSzjmAvYj        type    comments        attributes            body    Another comment              relationships            run event              data                  id    re 8RB5ZaFrDanG2hGY                type    run events                  
    links                  related     api v2 run events re 8RB5ZaFrDanG2hGY                                links            self     api v2 comments wsc gTFq83JSzjmAvYj                      Create Comment   POST  runs  id comments     Parameter   Description                                                id         The ID of the run         Request Body  This POST endpoint requires a JSON object with the following properties as the request payload     Key Path                   Type     Default   Description                                                                     data type                 string             Must be   comments       data attributes body      string             The body of the comment       Sample Payload     json      data          attributes            body    A comment about the run               type    comments                 Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 runs run KTuq99JSzgmDSvYj comments          Sample Response     json      data          id    wsc oRiShushpgLU4JD2        type    comments        attributes            body    A comment about the run              relationships            run event                data                  id    re E3xsBX11F1fbm2zV                type    run events                          links                  related     api v2 run events re E3xsBX11F1fbm2zV                                  links            self     api v2 comments wsc oRiShushpgLU4JD2                 "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Notification Configurations API Docs HCP Terraform Use the notification configurations endpoint to manage notification configurations List show create update verify and delete notification configurations for a workspace using the HTTP API","answers":"---\npage_title: Notification Configurations - API Docs - HCP Terraform\ndescription: >-\n  Use the `notification-configurations` endpoint to manage notification configurations. List, show, create, update, verify, and delete notification configurations for a workspace using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Notification configuration 
API\n\nHCP Terraform can send notifications for run state transitions and workspace events. You can specify a destination URL, request type, and what events will trigger the notification. Each workspace can have up to 20 notification configurations, and they apply to all runs for that workspace.\n\nInteracting with notification configurations requires admin access to the relevant workspace. ([More about permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions).)\n\n-> **Note:** [Speculative plans](\/terraform\/cloud-docs\/run\/modes-and-options#plan-only-speculative-plan) and workspaces configured with `Local` [execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) do not support notifications.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Notification triggers\n\nNotifications are sent in response to triggers related to workspace events, and can be defined at workspace and team levels. You can specify workspace events in the `triggers` array attribute. \n\n### Workspace notification triggers\n\nThe following triggers are available for workspace notifications.\n\n| Display Name       | Value                   | Description                                                                                              |\n| ------------------ | ----------------------- | -------------------------------------------------------------------------------------------------------- |\n| Created            | `\"run:created\"`         | A run is created and enters the [Pending stage](\/terraform\/cloud-docs\/run\/states#the-pending-stage)                                                    |\n| Planning           | `\"run:planning\"`        | A run acquires the lock and starts to execute.                                                      |\n| Needs Attention    | `\"run:needs_attention\"` | A plan has changes and Terraform requires user input to continue. 
This input may include approving the plan or a [policy override](\/terraform\/cloud-docs\/run\/states#the-policy-check-stage). |\n| Applying           | `\"run:applying\"`        | A run enters the [Apply stage](\/terraform\/cloud-docs\/run\/states#the-apply-stage), where Terraform makes the infrastructure changes described in the plan.                            |\n| Completed          | `\"run:completed\"`       | A run completes successfully.                                     |\n| Errored            | `\"run:errored\"`         | A run terminates early due to error or cancellation.                                          |\n| Drifted            | `\"assessment:drifted\"`  | HCP Terraform detected configuration drift. This option is only available if you enabled drift detection for the workspace.                                                    |\n| Checks Failed      | `\"assessment:check_failure\"` | One or more continuous validation checks did not pass. This option is only available if you enabled drift detection for the workspace. |\n| Health Assessment Failed | `\"assessment:failed\"`   | A health assessment failed. This option is only available if you enable health assessments for the workspace.                                        |\n| Auto Destroy Reminder | `\"workspace:auto_destroy_reminder\"`   | An automated workspace destroy run is imminent. |\n| Auto Destroy Results | `\"workspace:auto_destroy_run_results\"`   | HCP Terraform attempted an automated workspace destroy run. 
|\n\n### Team notification triggers\n\nThe following triggers are available for [team notifications](#team-notification-configuration).\n\n| Display Name       | Value                   | Description                                                                                              |\n| ------------------ | ----------------------- | -------------------------------------------------------------------------------------------------------- |\n| Change Request | `\"team:change_request\"`   |  HCP Terraform sent a change request to a workspace that the specified team has explicit access to.  |\n\n## Notification payload\n\nThe notification is an HTTP POST request with a detailed payload. The content depends on the type of notification.\n\nFor Slack and Microsoft Teams notifications, the payload conforms to the respective webhook API and results in a notification message with informational attachments. Refer to [Slack Notification Payloads](\/terraform\/cloud-docs\/workspaces\/settings\/notifications#slack) and [Microsoft Teams Notification Payloads](\/terraform\/cloud-docs\/workspaces\/settings\/notifications#microsoft-teams) for examples. For generic notifications, the payload varies based on whether the notification contains information about run events or workspace events.\n\n### Run notification payload\n\nRun events include detailed information about a specific run, including the time it began and the associated workspace and organization. 
Generic notifications for run events contain the following information:\n\n| Name                             | Type   | Description                                                                                                                                                                                                                                    |\n| -------------------------------- | ------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `payload_version`                | number | Always \"1\".                                                                                                                                                                                                                                    |\n| `notification_configuration_id`  | string | The ID of the configuration associated with this notification.                                                                                                                                                                                 |\n| `run_url`                        | string | URL used to access the run UI page.                                                                                                                                                                                                            |\n| `run_id`                         | string | ID of the run which triggered this notification.                                                                                                                                                                                               |\n| `run_message`                    | string | The reason the run was queued.                                                                                                                       
                                                                                          |\n| `run_created_at`                 | string | Timestamp of the run's creation.                                                                                                                                                                                                               |\n| `run_created_by`                 | string | Username of the user who created the run.                                                                                                                                                                                                      |\n| `workspace_id`                   | string | ID of the run's workspace.                                                                                                                                                                                                                     |\n| `workspace_name`                 | string | Human-readable name of the run's workspace.                                                                                                                                                                                                    |\n| `organization_name`              | string | Human-readable name of the run's organization.                                                                                                                                                                                                 |\n| `notifications`                  | array  | List of events which caused this notification to be sent, with each event represented by an object. At present, this is always one event, but in the future HCP Terraform may roll up several notifications for a run into a single request. |\n| `notifications[].message`        | string | Human-readable reason for the notification.                                                                                            
                                                                                                        |\n| `notifications[].trigger`        | string | Value of the trigger which caused the notification to be sent.                                                                                                                                                                                 |\n| `notifications[].run_status`     | string | Status of the run at the time of notification.                                                                                                                                                                                                 |\n| `notifications[].run_updated_at` | string | Timestamp of the run's update.                                                                                                                                                                                                                 |\n| `notifications[].run_updated_by` | string | Username of the user who caused the run to update.                                                                                                                                                                                             
|\n\n#### Sample payload\n\n```json\n{\n  \"payload_version\": 1,\n  \"notification_configuration_id\": \"nc-AeUQ2zfKZzW9TiGZ\",\n  \"run_url\": \"https:\/\/app.terraform.io\/app\/acme-org\/my-workspace\/runs\/run-FwnENkvDnrpyFC7M\",\n  \"run_id\": \"run-FwnENkvDnrpyFC7M\",\n  \"run_message\": \"Add five new queue workers\",\n  \"run_created_at\": \"2019-01-25T18:34:00.000Z\",\n  \"run_created_by\": \"sample-user\",\n  \"workspace_id\": \"ws-XdeUVMWShTesDMME\",\n  \"workspace_name\": \"my-workspace\",\n  \"organization_name\": \"acme-org\",\n  \"notifications\": [\n    {\n      \"message\": \"Run Canceled\",\n      \"trigger\": \"run:errored\",\n      \"run_status\": \"canceled\",\n      \"run_updated_at\": \"2019-01-25T18:37:04.000Z\",\n      \"run_updated_by\": \"sample-user\"\n    }\n  ]\n}\n```\n\n### Change request notification payload\n\nChange request events contain the following fields in their payloads.\n\n| Name                             | Type   | Description                                                                                                                                                                                                                                    |\n| -------------------------------- | ------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `payload_version`                | number | Always \"1\".                                                                                                                                                                                                                                    |\n| `notification_configuration_id`  | string | The ID of the configuration associated with this notification.                                                                                                 
                                                                                |\n| `change_request_url`             | string | URL used to access the change request UI page.                                                 |\n| `change_request_subject`         | string | Title of the change request which triggered this notification.                                 |\n| `change_request_message`         | string | The contents of the change request.                                                            |\n| `change_request_created_at`      | string | Timestamp of the change request's creation.                                                    |\n| `change_request_created_by`      | string | Username of the user who created the change request.                                           |\n| `workspace_id`                   | string | ID of the change request's workspace.                                                          |\n| `workspace_name`                 | string | Human-readable name of the change request's workspace.                                         
|\n| `organization_name`              | string | Human-readable name of the change request's organization.                                      |\n\n#### Send a test payload\n\nThis is a sample payload you can send to test if notifications are working. The payload does not have a `run` or `workspace` context, resulting in null values.\n\nYou can trigger a test notification from the workspace notification settings page. You can read more about verifying a [notification configuration](\/terraform\/enterprise\/workspaces\/settings\/notifications#enabling-and-verifying-a-configuration).\n\n```json\n{\n  \"payload_version\": 1,\n  \"notification_configuration_id\": \"nc-jWvVsmp5VxsaCeXm\",\n  \"run_url\": null,\n  \"run_id\": null,\n  \"run_message\": null,\n  \"run_created_at\": null,\n  \"run_created_by\": null,\n  \"workspace_id\": null,\n  \"workspace_name\": null,\n  \"organization_name\": null,\n  \"notifications\": [\n    {\n      \"message\": \"Verification of test\",\n      \"trigger\": \"verification\",\n      \"run_status\": null,\n      \"run_updated_at\": null,\n      \"run_updated_by\": null\n    }\n  ]\n}\n```\n\n### Workspace notification payload\n\nWorkspace events include detailed information about workspace-level validation events like [health assessments](\/terraform\/cloud-docs\/workspaces\/health) if you enable them for the workspace. Much of the information provides details about the associated [assessment result](\/terraform\/cloud-docs\/api-docs\/assessment-results), which HCP Terraform uses to track instances of continuous validation.\n\nHCP Terraform returns different attributes in the payload details, depending on the type of `trigger_scope`. 
There are two main values for `trigger_scope`: `assessment` and `workspace`, examples of which you can see below.\n\n#### Health assessments\n\nHealth assessment notifications for workspace events contain the following information:\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/health-assessments.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n| Name                                         | Type   | Description                                                                          |\n| -------------------------------------------- | ------ | ------------------------------------------------------------------------------------ |\n| `payload_version`                            | number | Always \"2\".                                                                          |\n| `notification_configuration_id`              | string | The ID of the configuration associated with this notification.                       |\n| `notification_configuration_url`             | string | URL to get the notification configuration from the HCP Terraform API.              |\n| `trigger_scope`                              | string | Always \"assessment\" for workspace assessment notifications.                          |\n| `trigger`                                    | string | Value of the trigger that caused the notification to be sent.                        |\n| `message`                                    | string | Human-readable reason for the notification.                                          |\n| `details`                                    | object | Object containing details specific to the notification.                              |\n| `details.new_assessment_result`              | object | The most recent assessment result. This result triggered the notification.           |\n| `details.new_assessment_result.id`           | string | ID of the assessment result.                                                         
|\n| `details.new_assessment_result.url`          | string | URL to get the assessment result from the HCP Terraform API.                               |\n| `details.new_assessment_result.succeeded`    | bool   | Whether assessment succeeded.                                                              |\n| `details.new_assessment_result.all_checks_succeeded`  | bool | Whether all conditions passed.                                                       |\n| `details.new_assessment_result.checks_passed`         | number | The number of resources, data sources, and outputs passing their conditions.               |\n| `details.new_assessment_result.checks_failed`         | number | The number of resources, data sources, and outputs with one or more failing conditions.    |\n| `details.new_assessment_result.checks_errored`        | number | The number of resources, data sources, and outputs that had a condition error.             |\n| `details.new_assessment_result.checks_unknown`        | number | The number of resources, data sources, and outputs that had conditions left unevaluated.   |\n| `details.new_assessment_result.drifted`      | bool   | Whether assessment detected drift.                                                         |\n| `details.new_assessment_result.resources_drifted`     | number | The number of resources whose configuration does not match the workspace's state file.     |\n| `details.new_assessment_result.resources_undrifted`   | number | The number of resources whose configuration matches the workspace's state file.            |\n| `details.new_assessment_result.created_at`   | string | Timestamp for when HCP Terraform created the assessment result.                            |\n| `details.prior_assessment_result`            | object | The assessment result immediately prior to the one that triggered the notification.  
|\n| `details.prior_assessment_result.id`         | string | ID of the assessment result.                                                             |\n| `details.prior_assessment_result.url`        | string | URL to get the assessment result from the HCP Terraform API.                             |\n| `details.prior_assessment_result.succeeded`  | bool   | Whether assessment succeeded.                                                            |\n| `details.prior_assessment_result.all_checks_succeeded`  | bool | Whether all conditions passed.                                                     |\n| `details.prior_assessment_result.checks_passed`         | number | The number of resources, data sources, and outputs passing their conditions.             |\n| `details.prior_assessment_result.checks_failed`         | number | The number of resources, data sources, and outputs with one or more failing conditions.  |\n| `details.prior_assessment_result.checks_errored`        | number | The number of resources, data sources, and outputs that had a condition error.           |\n| `details.prior_assessment_result.checks_unknown`        | number | The number of resources, data sources, and outputs that had conditions left unevaluated. |\n| `details.prior_assessment_result.drifted`    | bool   | Whether assessment detected drift.                                                       |\n| `details.prior_assessment_result.resources_drifted`     | number | The number of resources whose configuration does not match the workspace's state file.   |\n| `details.prior_assessment_result.resources_undrifted`   | number | The number of resources whose configuration matches the workspace's state file.          |\n| `details.prior_assessment_result.created_at` | string | Timestamp of the assessment result.                                                  
|\n| `details.workspace_id`                       | string | ID of the workspace that generated the notification.                                 |\n| `details.workspace_name`                     | string | Human-readable name of the workspace.                                                |\n| `details.organization_name`                  | string | Human-readable name of the organization.                                             |\n\n##### Sample payload\n\nHealth assessment payloads have information about resource drift and continuous validation checks.\n\n```json\n{\n  \"payload_version\": \"2\",\n  \"notification_configuration_id\": \"nc-SZ3V3cLFxK6sqLKn\",\n  \"notification_configuration_url\": \"https:\/\/app.terraform.io\/api\/v2\/notification-configurations\/nc-SZ3V3cLFxK6sqLKn\",\n  \"trigger_scope\": \"assessment\",\n  \"trigger\": \"assessment:drifted\",\n  \"message\": \"Drift Detected\",\n  \"details\": {\n      \"new_assessment_result\": {\n        \"id\": \"asmtres-vRVQxpqq64EA9V5a\",\n        \"url\": \"https:\/\/app.terraform.io\/api\/v2\/assessment-results\/asmtres-vRVQxpqq64EA9V5a\",\n        \"succeeded\": true,\n        \"drifted\": true,\n        \"all_checks_succeeded\": true,\n        \"resources_drifted\": 4,\n        \"resources_undrifted\": 55,\n        \"checks_passed\": 33,\n        \"checks_failed\": 0,\n        \"checks_errored\": 0,\n        \"checks_unknown\": 0,\n        \"created_at\": \"2022-06-09T05:23:10Z\"\n      },\n      \"prior_assessment_result\": {\n        \"id\": \"asmtres-A6zEbpGArqP74fdL\",\n        \"url\": \"https:\/\/app.terraform.io\/api\/v2\/assessment-results\/asmtres-A6zEbpGArqP74fdL\",\n        \"succeeded\": true,\n        \"drifted\": true,\n        \"all_checks_succeeded\": true,\n        \"resources_drifted\": 4,\n        \"resources_undrifted\": 55,\n        \"checks_passed\": 33,\n        \"checks_failed\": 0,\n        \"checks_errored\": 0,\n        \"checks_unknown\": 0,\n        \"created_at\": 
\"2022-06-09T05:22:51Z\"\n      },\n    \"workspace_id\": \"ws-XdeUVMWShTesDMME\",\n    \"workspace_name\": \"my-workspace\",\n    \"organization_name\": \"acme-org\"\n  }\n}\n```\n\n#### Automatic destroy runs\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/ephemeral-workspaces.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nAutomatic destroy run notifications for workspace events contain the following information:\n\n| Name                                         | Type   | Description                                                                                                 |\n| -------------------------------------------- | ------ | ------------------------------------------------------------------------------------------------------------|\n| `payload_version`                            | string | Always 2.                                                                                                   |\n| `notification_configuration_id`                | string | The ID of the notification's configuration.                                                                   |\n| `notification_configuration_url`               | string | The URL to get the notification's configuration from the HCP Terraform API.                                 |\n| `trigger_scope`                              | string | Always \"workspace\" for ephemeral workspace notifications                                                     |\n| `trigger`                                    | string | Value of the trigger that caused HCP Terraform to send the notification.                                   |\n| `message`                                    | string | Human-readable reason for the notification.                                                                  |\n| `details`                                    | object | Object containing details specific to the notification.                                                       
|\n| `details.auto_destroy_at`                    | string | Timestamp when HCP Terraform will schedule the next destroy run. Only applies to reminder notifications.   |\n| `details.run_created_at`                     | string | Timestamp of when HCP Terraform successfully created a destroy run. Only applies to results notifications. |\n| `details.run_status`                         | string | Status of the scheduled destroy run. Only applies to results notifications.                                  |\n| `details.run_external_id`                    | string | The ID of the scheduled destroy run. Only applies to results notifications.                                  |\n| `details.run_create_error_message`           | string | Message detailing why the run was unable to be queued. Only applies to results notifications.                |\n| `details.trigger_type`                       | string | The type of notification; the value is either \"reminder\" or \"results\".                                       |\n| `details.workspace_name`                     | string | Human-readable name of the workspace.                                                                       |\n| `details.organization_name`                  | string | Human-readable name of the organization.                                                                    |\n\n##### Sample payload\n\nThe shape of data in auto destroy notification payloads may differ depending on the success of the run HCP Terraform created. 
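A receiver can tell the two payload shapes apart by branching on `details.trigger_type`; the sketch below is illustrative only (the `describe_auto_destroy` helper is our own naming, not part of any HCP Terraform API), using the field names from the table above.

```ruby
require "json"

# Hypothetical helper: summarize an auto destroy notification payload by
# branching on details.trigger_type ("reminder" vs. "results").
def describe_auto_destroy(payload)
  details = payload.fetch("details")
  case details.fetch("trigger_type")
  when "reminder"
    # Reminder payloads carry only auto_destroy_at; the run fields are null.
    "destroy run scheduled for #{details["auto_destroy_at"]}"
  when "results"
    if details["run_external_id"]
      "destroy run #{details["run_external_id"]} finished with status #{details["run_status"]}"
    else
      # Failed run creation: run fields are null, only the error message is set.
      "destroy run could not be created: #{details["run_create_error_message"]}"
    end
  else
    raise ArgumentError, "unknown trigger_type"
  end
end

reminder = JSON.parse(<<~PAYLOAD)
  {"details": {"trigger_type": "reminder", "auto_destroy_at": "2025-01-01T00:00:00Z"}}
PAYLOAD
puts describe_auto_destroy(reminder)
# => destroy run scheduled for 2025-01-01T00:00:00Z
```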
Refer to the specific examples below.\n\n###### Reminder\n\nReminders that HCP Terraform will trigger a destroy run at some point in the future.\n\n```json\n{\n  \"payload_version\": \"2\",\n  \"notification_configuration_id\": \"nc-SZ3V3cLFxK6sqLKn\",\n  \"notification_configuration_url\": \"https:\/\/app.terraform.io\/api\/v2\/notification-configurations\/nc-SZ3V3cLFxK6sqLKn\",\n  \"trigger_scope\": \"workspace\",\n  \"trigger\": \"workspace:auto_destroy_reminder\",\n  \"message\": \"Auto Destroy Reminder\",\n  \"details\": {\n      \"auto_destroy_at\": \"2025-01-01T00:00:00Z\",\n      \"run_created_at\": null,\n      \"run_status\": null,\n      \"run_external_id\": null,\n      \"run_create_error_message\": null,\n      \"trigger_type\": \"reminder\",\n      \"workspace_name\": \"learned-english-dog\",\n      \"organization_name\": \"acme-org\"\n  }\n}\n```\n\n###### Results\n\nThe final result of the scheduled auto destroy run includes additional metadata about the run.\n\n```json\n{\n  \"payload_version\": \"2\",\n  \"notification_configuration_id\": \"nc-SZ3V3cLFxK6sqLKn\",\n  \"notification_configuration_url\": \"https:\/\/app.terraform.io\/api\/v2\/notification-configurations\/nc-SZ3V3cLFxK6sqLKn\",\n  \"trigger_scope\": \"workspace\",\n  \"trigger\": \"workspace:auto_destroy_results\",\n  \"message\": \"Auto Destroy Results\",\n  \"details\": {\n      \"auto_destroy_at\": null,\n      \"run_created_at\": \"2022-06-09T05:22:51Z\",\n      \"run_status\": \"applied\",\n      \"run_external_id\": \"run-vRVQxpqq64EA9V5a\",\n      \"run_create_error_message\": null,\n      \"trigger_type\": \"results\",\n      \"workspace_name\": \"learned-english-dog\",\n      \"organization_name\": \"acme-org\"\n  }\n}\n```\n\n###### Failed run creation\n\nRun-specific values are empty when HCP Terraform was unable to create an auto destroy run.\n\n```json\n{\n  \"payload_version\": \"2\",\n  \"notification_configuration_id\": \"nc-SZ3V3cLFxK6sqLKn\",\n  
\"notification_configuration_url\": \"https:\/\/app.terraform.io\/api\/v2\/notification-configurations\/nc-SZ3V3cLFxK6sqLKn\",\n  \"trigger_scope\": \"workspace\",\n  \"trigger\": \"workspace:auto_destroy_results\",\n  \"message\": \"Auto Destroy Results\",\n  \"details\": {\n      \"auto_destroy_at\": null,\n      \"run_created_at\": null,\n      \"run_status\": null,\n      \"run_external_id\": null,\n      \"run_create_error_message\": \"Configuration version is missing\",\n      \"trigger_type\": \"results\",\n      \"workspace_name\": \"learned-english-dog\",\n      \"organization_name\": \"acme-org\"\n  }\n}\n```\n\n## Notification authenticity\n\nIf a `token` is configured, HCP Terraform provides an HMAC signature on all `\"generic\"` notification requests, using the `token` as the key. This is sent in the `X-TFE-Notification-Signature` header. The digest algorithm used is SHA-512. Notification target servers should verify the source of the HTTP request by computing the HMAC of the request body using the same shared secret, and dropping any requests with invalid signatures.\n\nSample Ruby code for verifying the HMAC:\n\n```ruby\ntoken = SecureRandom.hex\nhmac = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new(\"sha512\"), token, @request.body)\nfail \"Invalid HMAC\" if hmac != @request.headers[\"X-TFE-Notification-Signature\"]\n```\n\n## Notification verification and delivery responses\n\nWhen saving a configuration with `enabled` set to `true`, or when using the [verify API][], HCP Terraform sends a verification request to the configured URL. The response to this request is stored and available in the `delivery-responses` array of the `notification-configuration` resource.\n\nConfigurations cannot be enabled if the verification request fails. 
Success is defined as an HTTP response with a status code of `2xx`.\nConfigurations with `destination-type` `email` can only be verified manually; they do not require an HTTP response.\n\nThe most recent response is stored in the `delivery-responses` array.\n\nEach delivery response has several fields:\n\n| Name         | Type   | Description                                                                                                                                                   |\n| ------------ | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `body`       | string | Response body (may be truncated).                                                                                                                             |\n| `code`       | string | HTTP status code, e.g. `400`.                                                                                                                                 |\n| `headers`    | object | All HTTP headers received, represented as an object with keys for each header name (lowercased) and an array of string values (most arrays will be size one). |\n| `sent-at`    | date   | The UTC timestamp when the notification was sent.                                                                                                             |\n| `successful` | bool   | Whether HCP Terraform considers this response to be successful.                                                                                               |\n| `url`        | string | The URL to which the request was sent.                                                                                                                        
|\n\n[verify API]: #verify-a-notification-configuration\n\n## Create a notification configuration\n\n`POST \/workspaces\/:workspace_id\/notification-configurations`\n\n| Parameter       | Description                                                                                                                                                                                                      |\n| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The ID of the workspace to create a notification configuration for. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. |\n\n| Status  | Response                                                      | Reason                                                         |\n| ------- | ------------------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"notification-configurations\"`) | Successfully created a notification configuration              |\n| [400][] | [JSON API error object][]                                     | Unable to complete verification request to destination URL     |\n| [404][] | [JSON API error object][]                                     | Workspace not found, or user unauthorized to perform action    |\n| [422][] | [JSON API error object][]                                     | Malformed request body (missing attributes, wrong types, etc.) 
|\n\n### Request body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\nIf `enabled` is set to `true`, a verification request will be sent before saving the configuration. If this request receives no response or the response is not successful (HTTP 2xx), the configuration will not save.\n\n| Key path                           | Type           | Default | Description                                                                                                                                                                              |\n| ---------------------------------- | -------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                        | string         |         | Must be `\"notification-configuration\"`.                                                                                                                                                  |\n| `data.attributes.destination-type` | string         |         | Type of notification payload to send. Valid values are `\"generic\"`, `\"email\"`, `\"slack\"` or `\"microsoft-teams\"`.                                                                                              |\n| `data.attributes.enabled`          | bool           | `false` | Disabled configurations will not send any notifications.                                                                                                                                 |\n| `data.attributes.name`             | string         |         | Human-readable name for the configuration.                                                                                                                                               
|\n| `data.attributes.token`            | string or null | `null`  | Optional write-only secure token, which can be used at the receiving server to verify request authenticity. See [Notification Authenticity][notification-authenticity] for more details. |\n| `data.attributes.triggers`         | array          | `[]`    | Array of triggers for which this configuration will send notifications. See [Notification Triggers][notification-triggers] for more details and a list of allowed values.                |\n| `data.attributes.url`              | string         |         | HTTP or HTTPS URL to which notification requests will be made; only for configurations with `destination-type` `\"slack\"`, `\"microsoft-teams\"`, or `\"generic\"`.                           |\n| `data.relationships.users`         | array          |         | Array of users who are part of the organization; only for configurations with `destination-type` `\"email\"`.                                                                              |\n\n[notification-authenticity]: #notification-authenticity\n\n[notification-triggers]: #notification-triggers\n\n### Sample payload for generic notification configurations\n\n```json\n{\n  \"data\": {\n    \"type\": \"notification-configuration\",\n    \"attributes\": {\n      \"destination-type\": \"generic\",\n      \"enabled\": true,\n      \"name\": \"Webhook server test\",\n      \"url\": \"https:\/\/httpstat.us\/200\",\n      \"triggers\": [\n        \"run:applying\",\n        \"run:completed\",\n        \"run:created\",\n        \"run:errored\",\n        \"run:needs_attention\",\n        \"run:planning\"\n      ]\n    }\n  }\n}\n```\n\n### Sample payload for email notification configurations\n\n```json\n{\n  \"data\": {\n    \"type\": \"notification-configurations\",\n    \"attributes\": {\n      \"destination-type\": \"email\",\n      \"enabled\": true,\n      \"name\": \"Notify organization users about run\",\n      
\"triggers\": [\n        \"run:applying\",\n        \"run:completed\",\n        \"run:created\",\n        \"run:errored\",\n        \"run:needs_attention\",\n        \"run:planning\"\n      ]\n    },\n    \"relationships\": {\n       \"users\": {\n          \"data\": [ { \"id\": \"organization-user-id\", \"type\": \"users\" } ]\n       }\n    }\n  }\n}\n```\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-XdeUVMWShTesDMME\/notification-configurations\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"nc-AeUQ2zfKZzW9TiGZ\",\n    \"type\": \"notification-configurations\",\n    \"attributes\": {\n      \"enabled\": true,\n      \"name\": \"Webhook server test\",\n      \"url\": \"https:\/\/httpstat.us\/200\",\n      \"destination-type\": \"generic\",\n      \"token\": null,\n      \"triggers\": [\n        \"run:applying\",\n        \"run:completed\",\n        \"run:created\",\n        \"run:errored\",\n        \"run:needs_attention\",\n        \"run:planning\"\n      ],\n      \"delivery-responses\": [\n        {\n          \"url\": \"https:\/\/httpstat.us\/200\",\n          \"body\": \"\\\"200 OK\\\"\",\n          \"code\": \"200\",\n          \"headers\": {\n            \"cache-control\": [\n              \"private\"\n            ],\n            \"content-length\": [\n              \"129\"\n            ],\n            \"content-type\": [\n              \"application\/json; charset=utf-8\"\n            ],\n            \"content-encoding\": [\n              \"gzip\"\n            ],\n            \"vary\": [\n              \"Accept-Encoding\"\n            ],\n            \"server\": [\n              \"Microsoft-IIS\/10.0\"\n            ],\n            \"x-aspnetmvc-version\": [\n              \"5.1\"\n            ],\n            
\"access-control-allow-origin\": [\n              \"*\"\n            ],\n            \"x-aspnet-version\": [\n              \"4.0.30319\"\n            ],\n            \"x-powered-by\": [\n              \"ASP.NET\"\n            ],\n            \"set-cookie\": [\n              \"ARRAffinity=77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9;Path=\/;HttpOnly;Domain=httpstat.us\"\n            ],\n            \"date\": [\n              \"Tue, 08 Jan 2019 21:34:37 GMT\"\n            ]\n          },\n          \"sent-at\": \"2019-01-08 21:34:37 UTC\",\n          \"successful\": \"true\"\n        }\n      ],\n      \"created-at\": \"2019-01-08T21:32:14.125Z\",\n      \"updated-at\": \"2019-01-08T21:34:37.274Z\"\n    },\n    \"relationships\": {\n      \"subscribable\": {\n        \"data\": {\n          \"id\": \"ws-XdeUVMWShTesDMME\",\n          \"type\": \"workspaces\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/notification-configurations\/nc-AeUQ2zfKZzW9TiGZ\"\n    }\n  }\n}\n```\n\n## List notification configurations\n\nUse the following endpoint to list all notification configurations for a workspace.\n\n`GET \/workspaces\/:workspace_id\/notification-configurations`\n\n| Parameter       | Description                                                                                                                                                                                                                                                                                                                             |\n| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The ID of the workspace to list configurations 
from. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. If no pagination query parameters are provided, the endpoint is not paginated and returns all results. |\n\n### Query parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                                 |\n| -------------- | ------------------------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.                          |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 notification configurations per page. 
|\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-XdeUVMWShTesDMME\/notification-configurations\n```\n\n### Sample response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"nc-W6VGEi8A7Cfoaf4K\",\n      \"type\": \"notification-configurations\",\n      \"attributes\": {\n        \"enabled\": false,\n        \"name\": \"Slack: #devops\",\n        \"url\": \"https:\/\/hooks.slack.com\/services\/T00000000\/BC012345\/0PWCpQmYyD4bTTRYZ53q4w\",\n        \"destination-type\": \"slack\",\n        \"token\": null,\n        \"triggers\": [\n          \"run:errored\",\n          \"run:needs_attention\"\n        ],\n        \"delivery-responses\": [],\n        \"created-at\": \"2019-01-08T21:34:28.367Z\",\n        \"updated-at\": \"2019-01-08T21:34:28.367Z\"\n      },\n      \"relationships\": {\n        \"subscribable\": {\n          \"data\": {\n            \"id\": \"ws-XdeUVMWShTesDMME\",\n            \"type\": \"workspaces\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/notification-configurations\/nc-W6VGEi8A7Cfoaf4K\"\n      }\n    },\n    {\n      \"id\": \"nc-AeUQ2zfKZzW9TiGZ\",\n      \"type\": \"notification-configurations\",\n      \"attributes\": {\n        \"enabled\": true,\n        \"name\": \"Webhook server test\",\n        \"url\": \"https:\/\/httpstat.us\/200\",\n        \"destination-type\": \"generic\",\n        \"token\": null,\n        \"triggers\": [\n          \"run:applying\",\n          \"run:completed\",\n          \"run:created\",\n          \"run:errored\",\n          \"run:needs_attention\",\n          \"run:planning\"\n        ],\n        \"delivery-responses\": [\n          {\n            \"url\": \"https:\/\/httpstat.us\/200\",\n            \"body\": \"\\\"200 OK\\\"\",\n            \"code\": \"200\",\n            \"headers\": 
{\n              \"cache-control\": [\n                \"private\"\n              ],\n              \"content-length\": [\n                \"129\"\n              ],\n              \"content-type\": [\n                \"application\/json; charset=utf-8\"\n              ],\n              \"content-encoding\": [\n                \"gzip\"\n              ],\n              \"vary\": [\n                \"Accept-Encoding\"\n              ],\n              \"server\": [\n                \"Microsoft-IIS\/10.0\"\n              ],\n              \"x-aspnetmvc-version\": [\n                \"5.1\"\n              ],\n              \"access-control-allow-origin\": [\n                \"*\"\n              ],\n              \"x-aspnet-version\": [\n                \"4.0.30319\"\n              ],\n              \"x-powered-by\": [\n                \"ASP.NET\"\n              ],\n              \"set-cookie\": [\n                \"ARRAffinity=77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9;Path=\/;HttpOnly;Domain=httpstat.us\"\n              ],\n              \"date\": [\n                \"Tue, 08 Jan 2019 21:34:37 GMT\"\n              ]\n            },\n            \"sent-at\": \"2019-01-08 21:34:37 UTC\",\n            \"successful\": \"true\"\n          }\n        ],\n        \"created-at\": \"2019-01-08T21:32:14.125Z\",\n        \"updated-at\": \"2019-01-08T21:34:37.274Z\"\n      },\n      \"relationships\": {\n        \"subscribable\": {\n          \"data\": {\n            \"id\": \"ws-XdeUVMWShTesDMME\",\n            \"type\": \"workspaces\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/notification-configurations\/nc-AeUQ2zfKZzW9TiGZ\"\n      }\n    }\n  ]\n}\n\n```\n\n## Show a notification configuration\n\n`GET \/notification-configurations\/:notification-configuration-id`\n\n| Parameter                        | Description                                       |\n| -------------------------------- | 
------------------------------------------------- |\n| `:notification-configuration-id` | The id of the notification configuration to show. |\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/notification-configurations\/nc-AeUQ2zfKZzW9TiGZ\n```\n\n### Sample response\n\nThe `type` and `id` attributes in `relationships.subscribable` may also reference a `\"teams\"` and team ID, respectively.\n\n```json\n{\n  \"data\": {\n    \"id\": \"nc-AeUQ2zfKZzW9TiGZ\",\n      \"type\": \"notification-configurations\",\n      \"attributes\": {\n        \"enabled\": true,\n        \"name\": \"Webhook server test\",\n        \"url\": \"https:\/\/httpstat.us\/200\",\n        \"destination-type\": \"generic\",\n        \"token\": null,\n        \"triggers\": [\n          \"run:applying\",\n          \"run:completed\",\n          \"run:created\",\n          \"run:errored\",\n          \"run:needs_attention\",\n          \"run:planning\"\n        ],\n        \"delivery-responses\": [\n        {\n          \"url\": \"https:\/\/httpstat.us\/200\",\n          \"body\": \"\\\"200 OK\\\"\",\n          \"code\": \"200\",\n          \"headers\": {\n            \"cache-control\": [\n              \"private\"\n            ],\n            \"content-length\": [\n              \"129\"\n            ],\n            \"content-type\": [\n              \"application\/json; charset=utf-8\"\n            ],\n            \"content-encoding\": [\n              \"gzip\"\n            ],\n            \"vary\": [\n              \"Accept-Encoding\"\n            ],\n            \"server\": [\n              \"Microsoft-IIS\/10.0\"\n            ],\n            \"x-aspnetmvc-version\": [\n              \"5.1\"\n            ],\n            \"access-control-allow-origin\": [\n              \"*\"\n            ],\n            \"x-aspnet-version\": [\n           
   \"4.0.30319\"\n            ],\n            \"x-powered-by\": [\n              \"ASP.NET\"\n            ],\n            \"set-cookie\": [\n              \"ARRAffinity=77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9;Path=\/;HttpOnly;Domain=httpstat.us\"\n            ],\n            \"date\": [\n              \"Tue, 08 Jan 2019 21:34:37 GMT\"\n            ]\n          },\n          \"sent-at\": \"2019-01-08 21:34:37 UTC\",\n          \"successful\": \"true\"\n        }\n        ],\n        \"created-at\": \"2019-01-08T21:32:14.125Z\",\n        \"updated-at\": \"2019-01-08T21:34:37.274Z\"\n      },\n      \"relationships\": {\n        \"subscribable\": {\n          \"data\": {\n            \"id\": \"ws-XdeUVMWShTesDMME\",\n            \"type\": \"workspaces\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/notification-configurations\/nc-AeUQ2zfKZzW9TiGZ\"\n      }\n  }\n}\n```\n\n## Update a notification configuration\n\n`PATCH \/notification-configurations\/:notification-configuration-id`\n\n| Parameter                        | Description                                         |\n| -------------------------------- | --------------------------------------------------- |\n| `:notification-configuration-id` | The id of the notification configuration to update. |\n\nIf the `enabled` attribute is true, updating the configuration will cause HCP Terraform to send a verification request. If a response is received, it will be stored and returned in the `delivery-responses` attribute. 
More details in the [Notification Verification and Delivery Responses][] section above.\n\n[Notification Verification and Delivery Responses]: #notification-verification-and-delivery-responses\n\n| Status  | Response                                                      | Reason                                                                       |\n| ------- | ------------------------------------------------------------- | ---------------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"notification-configurations\"`) | Successfully updated the notification configuration                          |\n| [400][] | [JSON API error object][]                                     | Unable to complete verification request to destination URL                   |\n| [404][] | [JSON API error object][]                                     | Notification configuration not found, or user unauthorized to perform action |\n| [422][] | [JSON API error object][]                                     | Malformed request body (missing attributes, wrong types, etc.)               |\n\n### Request body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nIf `enabled` is set to `true`, a verification request will be sent before saving the configuration. 
If this request fails to send, or the response is not successful (HTTP 2xx), the configuration will not save.\n\n| Key path                   | Type   | Default          | Description                                                                                                                                                                              |\n| -------------------------- | ------ | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                | string | (previous value) | Must be `\"notification-configuration\"`.                                                                                                                                                  |\n| `data.attributes.enabled`  | bool   | (previous value) | Disabled configurations will not send any notifications.                                                                                                                                 |\n| `data.attributes.name`     | string | (previous value) | User-readable name for the configuration.                                                                                                                                                |\n| `data.attributes.token`    | string | (previous value) | Optional write-only secure token, which can be used at the receiving server to verify request authenticity. See [Notification Authenticity][notification-authenticity] for more details. |\n| `data.attributes.triggers` | array  | (previous value) | Array of triggers for sending notifications. See [Notification Triggers][notification-triggers] for more details.                                                                        
|\n| `data.attributes.url`      | string | (previous value) | HTTP or HTTPS URL to which notification requests will be made; only for configurations with `\"destination-type\"` of `\"slack\"`, `\"microsoft-teams\"`, or `\"generic\"`.                    |\n| `data.relationships.users` | array  |                  | Array of users who are part of the organization; only for configurations with `\"destination-type\"` of `\"email\"`.                                                                        |\n\n[notification-authenticity]: #notification-authenticity\n\n[notification-triggers]: #notification-triggers\n\n### Sample payload\n\n```json\n{\n  \"data\": {\n    \"id\": \"nc-W6VGEi8A7Cfoaf4K\",\n    \"type\": \"notification-configurations\",\n    \"attributes\": {\n      \"enabled\": false,\n      \"name\": \"Slack: #devops\",\n      \"url\": \"https:\/\/hooks.slack.com\/services\/T00000001\/BC012345\/0PWCpQmYyD4bTTRYZ53q4w\",\n      \"destination-type\": \"slack\",\n      \"token\": null,\n      \"triggers\": [\n        \"run:created\",\n        \"run:errored\",\n        \"run:needs_attention\"\n      ]\n    }\n  }\n}\n```\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/notification-configurations\/nc-W6VGEi8A7Cfoaf4K\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"nc-W6VGEi8A7Cfoaf4K\",\n    \"type\": \"notification-configurations\",\n    \"attributes\": {\n      \"enabled\": false,\n      \"name\": \"Slack: #devops\",\n      \"url\": \"https:\/\/hooks.slack.com\/services\/T00000001\/BC012345\/0PWCpQmYyD4bTTRYZ53q4w\",\n      \"destination-type\": \"slack\",\n      \"token\": null,\n      \"triggers\": [\n        \"run:created\",\n        \"run:errored\",\n        \"run:needs_attention\"\n      ],\n      \"delivery-responses\": 
[],\n      \"created-at\": \"2019-01-08T21:34:28.367Z\",\n      \"updated-at\": \"2019-01-08T21:49:02.103Z\"\n    },\n    \"relationships\": {\n      \"subscribable\": {\n        \"data\": {\n          \"id\": \"ws-XdeUVMWShTesDMME\",\n          \"type\": \"workspaces\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/notification-configurations\/nc-W6VGEi8A7Cfoaf4K\"\n    }\n  }\n}\n```\n\n## Verify a notification configuration\n\n`POST \/notification-configurations\/:notification-configuration-id\/actions\/verify`\n\n| Parameter                        | Description                                         |\n| -------------------------------- | --------------------------------------------------- |\n| `:notification-configuration-id` | The id of the notification configuration to verify. |\n\nThis will cause HCP Terraform to send a verification request for the specified configuration. If a response is received, it will be stored and returned in the `delivery-responses` attribute.\n
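Since verification only requires an HTTP 2xx reply, a throwaway receiver is enough to exercise this endpoint. A sketch using only the Python standard library (the bound address and the `OK` body are arbitrary choices):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationReceiver(BaseHTTPRequestHandler):
    """Accepts notification POSTs and replies 200 so verification succeeds."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)  # raw JSON notification payload
        # A real receiver would check the signature here (see Notification
        # Authenticity) before acting on the payload.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

    def log_message(self, *args):  # keep the sketch quiet
        pass

# Port 0 asks the OS for any free port; use a fixed, reachable port in practice.
server = HTTPServer(("127.0.0.1", 0), NotificationReceiver)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

The receiver's status code, headers, and body are what end up recorded in the configuration's `delivery-responses` attribute.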
More details in the [Notification Verification and Delivery Responses][] section above.\n\n| Status  | Response                                                      | Reason                                                     |\n| ------- | ------------------------------------------------------------- | ---------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"notification-configurations\"`) | Successfully verified the notification configuration       |\n| [400][] | [JSON API error object][]                                     | Unable to complete verification request to destination URL |\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/notification-configurations\/nc-AeUQ2zfKZzW9TiGZ\/actions\/verify\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"nc-AeUQ2zfKZzW9TiGZ\",\n      \"type\": \"notification-configurations\",\n      \"attributes\": {\n        \"enabled\": true,\n        \"name\": \"Webhook server test\",\n        \"url\": \"https:\/\/httpstat.us\/200\",\n        \"destination-type\": \"generic\",\n        \"token\": null,\n        \"triggers\": [\n          \"run:applying\",\n          \"run:completed\",\n          \"run:created\",\n          \"run:errored\",\n          \"run:needs_attention\",\n          \"run:planning\"\n        ],\n        \"delivery-responses\": [\n        {\n          \"url\": \"https:\/\/httpstat.us\/200\",\n          \"body\": \"\\\"200 OK\\\"\",\n          \"code\": \"200\",\n          \"headers\": {\n            \"cache-control\": [\n              \"private\"\n            ],\n            \"content-length\": [\n              \"129\"\n            ],\n            \"content-type\": [\n              \"application\/json; charset=utf-8\"\n            ],\n            \"content-encoding\": [\n              
\"gzip\"\n            ],\n            \"vary\": [\n              \"Accept-Encoding\"\n            ],\n            \"server\": [\n              \"Microsoft-IIS\/10.0\"\n            ],\n            \"x-aspnetmvc-version\": [\n              \"5.1\"\n            ],\n            \"access-control-allow-origin\": [\n              \"*\"\n            ],\n            \"x-aspnet-version\": [\n              \"4.0.30319\"\n            ],\n            \"x-powered-by\": [\n              \"ASP.NET\"\n            ],\n            \"set-cookie\": [\n              \"ARRAffinity=77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9;Path=\/;HttpOnly;Domain=httpstat.us\"\n            ],\n            \"date\": [\n              \"Tue, 08 Jan 2019 21:34:37 GMT\"\n            ]\n          },\n          \"sent-at\": \"2019-01-08 21:34:37 UTC\",\n          \"successful\": \"true\"\n        }\n        ],\n        \"created-at\": \"2019-01-08T21:32:14.125Z\",\n        \"updated-at\": \"2019-01-08T21:34:37.274Z\"\n      },\n      \"relationships\": {\n        \"subscribable\": {\n          \"data\": {\n            \"id\": \"ws-XdeUVMWShTesDMME\",\n            \"type\": \"workspaces\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/notification-configurations\/nc-AeUQ2zfKZzW9TiGZ\"\n      }\n  }\n}\n```\n\n## Delete a notification configuration\n\nThis endpoint deletes a notification configuration.\n\n`DELETE \/notification-configurations\/:notification-configuration-id`\n\n| Parameter                        | Description                                         |\n| -------------------------------- | --------------------------------------------------- |\n| `:notification-configuration-id` | The id of the notification configuration to delete. 
|\n\n| Status  | Response                  | Reason                                                                       |\n| ------- | ------------------------- | ---------------------------------------------------------------------------- |\n| [204][] | None                      | Successfully deleted the notification configuration                          |\n| [404][] | [JSON API error object][] | Notification configuration not found, or user unauthorized to perform action |\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/notification-configurations\/nc-AeUQ2zfKZzW9TiGZ\n```\n\n## Team notification configuration\n\n-> **Note**: Team notifications are in public beta. All APIs and workflows are subject to change.\n\nTeam notifications let you configure alerts that notify the teams you specify whenever a certain event occurs.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/notifications.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n### Create a team notification configuration\n\nBy default, every team has a default email notification configuration with no users assigned.\n
If a notification configuration has no users assigned, HCP Terraform sends email notifications to all team members.\n\nUse this endpoint to create a notification configuration to notify a team.\n\n`POST \/teams\/:team_id\/notification-configurations`\n\n| Parameter  | Description                                       |\n| ---------- | ------------------------------------------------- |\n| `:team_id` | The ID of the team to create a configuration for. |\n\n| Status  | Response                                                      | Reason                                                         |\n| ------- | ------------------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"notification-configurations\"`) | Successfully created a notification configuration              |\n| [400][] | [JSON API error object][]                                     | Unable to complete verification request to destination URL     |\n| [404][] | [JSON API error object][]                                     | Team not found, or user unauthorized to perform action         |\n| [422][] | [JSON API error object][]                                     | Malformed request body (missing attributes, wrong types, etc.) |\n\n#### Request body\n\nThis `POST` endpoint requires a JSON object with the following properties as a request payload. Properties without a default value are required.\n\nIf `enabled` is set to `true`, HCP Terraform sends a verification request before saving the configuration.\n
If this request does not receive a response or the response is not successful (HTTP 2xx), the configuration will not be saved.\n\n| Key path                           | Type           | Default | Description                                                                                                                                                                              |\n| ---------------------------------- | -------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                        | string         |         | Must be `\"notification-configuration\"`.                                                                                                                                                  |\n| `data.attributes.destination-type` | string         |         | Type of notification payload to send. Valid values are `\"generic\"`, `\"email\"`, `\"slack\"` or `\"microsoft-teams\"`.                                                                                              |\n| `data.attributes.enabled`          | bool           | `false` | Disabled configurations will not send any notifications.                                                                                                                                 |\n| `data.attributes.name`             | string         |         | Human-readable name for the configuration.                                                                                                                                               |\n| `data.attributes.token`            | string or null | `null`  | Optional write-only secure token, which can be used at the receiving server to verify request authenticity. See [Notification Authenticity][notification-authenticity] for more details. 
|\n| `data.attributes.triggers`         | array          | `[]`    | Array of triggers for which this configuration will send notifications. See [Notification Triggers][notification-triggers] for more details and a list of allowed values.                |\n| `data.attributes.url`              | string         |         | HTTP or HTTPS URL to which notification requests will be made, only for configurations with `\"destination_type:\"` `\"slack\"`, `\"microsoft-teams\"` or `\"generic\"`                                               |\n| `data.attributes.email_all_members`| bool           |         | Email all team members, only for configurations with `\"destination_type:\" \"email\"`. |\n| `data.relationships.users`         | array          |         | Array of users part of the organization, only for configurations with `\"destination_type:\"` `\"email\"`                                                                                    |\n\n[notification-authenticity]: #notification-authenticity\n\n[notification-triggers]: #notification-triggers\n\n#### Sample payload for generic notification configurations\n\n```json\n{\n  \"data\": {\n    \"type\": \"notification-configuration\",\n    \"attributes\": {\n      \"destination-type\": \"generic\",\n      \"enabled\": true,\n      \"name\": \"Webhook server test\",\n      \"url\": \"https:\/\/httpstat.us\/200\",\n      \"triggers\": [\n        \"change_request:created\"\n      ]\n    }\n  }\n}\n```\n\n#### Sample payload for email notification configurations\n\n```json\n{\n  \"data\": {\n    \"type\": \"notification-configurations\",\n    \"attributes\": {\n      \"destination-type\": \"email\",\n      \"enabled\": true,\n      \"name\": \"Email teams about change requests\",\n      \"triggers\": [\n        \"change_request:created\"\n      ]\n    },\n    \"relationships\": {\n       \"users\": {\n          \"data\": [ { \"id\": \"organization-user-id\", \"type\": \"users\" } ]\n       }\n    }\n  }\n}\n```\n\n#### 
Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/team-6p5jTwJQXwqZBncC\/notification-configurations\n```\n\n#### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"nc-AeUQ2zfKZzW9TiGZ\",\n    \"type\": \"notification-configurations\",\n    \"attributes\": {\n      \"enabled\": true,\n      \"name\": \"Webhook server test\",\n      \"url\": \"https:\/\/httpstat.us\/200\",\n      \"destination-type\": \"generic\",\n      \"token\": null,\n      \"triggers\": [\n        \"change_request:created\"\n      ],\n      \"delivery-responses\": [\n        {\n          \"url\": \"https:\/\/httpstat.us\/200\",\n          \"body\": \"\\\"200 OK\\\"\",\n          \"code\": \"200\",\n          \"headers\": {\n            \"cache-control\": [\n              \"private\"\n            ],\n            \"content-length\": [\n              \"129\"\n            ],\n            \"content-type\": [\n              \"application\/json; charset=utf-8\"\n            ],\n            \"content-encoding\": [\n              \"gzip\"\n            ],\n            \"vary\": [\n              \"Accept-Encoding\"\n            ],\n            \"server\": [\n              \"Microsoft-IIS\/10.0\"\n            ],\n            \"x-aspnetmvc-version\": [\n              \"5.1\"\n            ],\n            \"access-control-allow-origin\": [\n              \"*\"\n            ],\n            \"x-aspnet-version\": [\n              \"4.0.30319\"\n            ],\n            \"x-powered-by\": [\n              \"ASP.NET\"\n            ],\n            \"set-cookie\": [\n              \"ARRAffinity=77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9;Path=\/;HttpOnly;Domain=httpstat.us\"\n            ],\n            \"date\": [\n              \"Tue, 08 Jan 2024 21:34:37 GMT\"\n            ]\n          },\n        
  \"sent-at\": \"2024-01-08 21:34:37 UTC\",\n          \"successful\": \"true\"\n        }\n      ],\n      \"created-at\": \"2024-01-08T21:32:14.125Z\",\n      \"updated-at\": \"2024-01-08T21:34:37.274Z\"\n    },\n    \"relationships\": {\n      \"subscribable\": {\n        \"data\": {\n          \"id\": \"team-6p5jTwJQXwqZBncC\",\n          \"type\": \"teams\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/notification-configurations\/nc-AeUQ2zfKZzW9TiGZ\"\n    }\n  }\n}\n```\n\n### List team notification configurations\n\nUse this endpoint to list notification configurations for a team.\n\n`GET \/teams\/:team_id\/notification-configurations`\n\n\n| Parameter       | Description                                                                                                                                                                                                                                                                                                                             |\n| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:team_id` | The ID of the teams to list configurations from. |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). 
Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter      | Description                                                                                 |
| -------------- | ------------------------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.                          |
| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 notification configurations per page. |

#### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/teams/team-6p5jTwJQXwqZBncC/notification-configurations
```

#### Sample response

```json
{
  "data": [
    {
      "id": "nc-W6VGEi8A7Cfoaf4K",
      "type": "notification-configurations",
      "attributes": {
        "enabled": false,
        "name": "Slack: #devops",
        "url": "https://hooks.slack.com/services/T00000000/BC012345/0PWCpQmYyD4bTTRYZ53q4w",
        "destination-type": "slack",
        "token": null,
        "triggers": [
          "change_request:created"
        ],
        "delivery-responses": [],
        "created-at": "2019-01-08T21:34:28.367Z",
        "updated-at": "2019-01-08T21:34:28.367Z"
      },
      "relationships": {
        "subscribable": {
          "data": {
            "id": "team-TdeUVMWShTesDMME",
            "type": "teams"
          }
        }
      },
      "links": {
        "self": "/api/v2/notification-configurations/nc-W6VGEi8A7Cfoaf4K"
      }
    },
    {
      "id": "nc-AeUQ2zfKZzW9TiGZ",
      "type": "notification-configurations",
      "attributes": {
        "enabled": true,
        "name": "Webhook server test",
        "url": "https://httpstat.us/200",
        "destination-type": "generic",
        "token": null,
        "triggers": [
          "change_request:created"
        ],
        "delivery-responses": [
          {
            "url": "https://httpstat.us/200",
            "body": "\"200 OK\"",
            "code": "200",
            "headers": {
              "cache-control": ["private"],
              "content-length": ["129"],
              "content-type": ["application/json; charset=utf-8"],
              "content-encoding": ["gzip"],
              "vary": ["Accept-Encoding"],
              "server": ["Microsoft-IIS/10.0"],
              "x-aspnetmvc-version": ["5.1"],
              "access-control-allow-origin": ["*"],
              "x-aspnet-version": ["4.0.30319"],
              "x-powered-by": ["ASP.NET"],
              "set-cookie": ["ARRAffinity=77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9;Path=/;HttpOnly;Domain=httpstat.us"],
              "date": ["Tue, 08 Jan 2019 21:34:37 GMT"]
            },
            "sent-at": "2019-01-08 21:34:37 UTC",
            "successful": "true"
          }
        ],
        "created-at": "2019-01-08T21:32:14.125Z",
        "updated-at": "2019-01-08T21:34:37.274Z"
      },
      "relationships": {
        "subscribable": {
          "data": {
            "id": "team-XdeUVMWShTesDMME",
            "type": "teams"
          }
        }
      },
      "links": {
        "self": "/api/v2/notification-configurations/nc-AeUQ2zfKZzW9TiGZ"
      }
    }
  ]
}
```

---
page_title: Notification Configurations - API Docs - HCP Terraform
description: >-
  Use the `/notification-configurations` endpoint to manage notification
  configurations. List, show, create, update, verify, and delete notification
  configurations for a workspace using the HTTP API.
---

[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects

# Notification configuration API

HCP Terraform can send notifications for run state transitions and workspace events. You can specify a destination URL, request type, and what events will trigger the notification. Each workspace can have up to 20 notification configurations, and they apply to all runs for that workspace.

Interacting with notification configurations requires admin access to the relevant workspace.
([More about permissions](/terraform/cloud-docs/users-teams-organizations/permissions).)

-> **Note:** [Speculative plans](/terraform/cloud-docs/run/modes-and-options#plan-only-speculative-plan) and workspaces configured with [Local execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) do not support notifications.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

## Notification triggers

Notifications are sent in response to triggers related to workspace events, and can be defined at workspace and team levels. You can specify workspace events in the `triggers` array attribute.

### Workspace notification triggers

The following triggers are available for workspace notifications:

| Display Name             | Value                                | Description                                                                                                                                                                                      |
| ------------------------ | ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Created                  | `run:created`                        | A run is created and enters the [Pending stage](/terraform/cloud-docs/run/states#the-pending-stage).                                                                                             |
| Planning                 | `run:planning`                       | A run acquires the lock and starts to execute.                                                                                                                                                   |
| Needs Attention          | `run:needs_attention`                | A plan has changes and Terraform requires user input to continue. This input may include approving the plan or a [policy override](/terraform/cloud-docs/run/states#the-policy-check-stage).      |
| Applying                 | `run:applying`                       | A run enters the [Apply stage](/terraform/cloud-docs/run/states#the-apply-stage), where Terraform makes the infrastructure changes described in the plan.                                         |
| Completed                | `run:completed`                      | A run completes successfully.                                                                                                                                                                    |
| Errored                  | `run:errored`                        | A run terminates early due to error or cancellation.                                                                                                                                             |
| Drifted                  | `assessment:drifted`                 | HCP Terraform detected configuration drift. This option is only available if you enabled drift detection for the workspace.                                                                      |
| Checks Failed            | `assessment:check_failure`           | One or more continuous validation checks did not pass. This option is only available if you enabled drift detection for the workspace.                                                           |
| Health Assessment Failed | `assessment:failed`                  | A health assessment failed. This option is only available if you enable health assessments for the workspace.                                                                                    |
| Auto Destroy Reminder    | `workspace:auto_destroy_reminder`    | An automated workspace destroy run is imminent.                                                                                                                                                  |
| Auto Destroy Results     | `workspace:auto_destroy_run_results` | HCP Terraform attempted an automated workspace destroy run.                                                                                                                                      |

### Team notification triggers

The following triggers are available for [team notifications](#team-notification-configuration):

| Display Name   | Value                 | Description                                                                                       |
| -------------- | --------------------- | ------------------------------------------------------------------------------------------------- |
| Change Request | `team:change_request` | HCP Terraform sent a change request to a workspace that the specified team has explicit access to. |

## Notification payload

The notification is an HTTP POST request with a detailed payload. The content depends on the type of notification.

For Slack and Microsoft Teams notifications, the payload conforms to the respective webhook API and results in a notification message with informational attachments.
Refer to [Slack Notification Payloads](/terraform/cloud-docs/workspaces/settings/notifications#slack) and [Microsoft Teams Notification Payloads](/terraform/cloud-docs/workspaces/settings/notifications#microsoft-teams) for examples. For generic notifications, the payload varies based on whether the notification contains information about run events or workspace events.

### Run notification payload

Run events include detailed information about a specific run, including the time it began and the associated workspace and organization. Generic notifications for run events contain the following information:

| Name                            | Type   | Description                                                                                                                                                                                                      |
| ------------------------------- | ------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `payload_version`               | number | Always `1`.                                                                                                                                                                                                       |
| `notification_configuration_id` | string | The ID of the configuration associated with this notification.                                                                                                                                                    |
| `run_url`                       | string | URL used to access the run UI page.                                                                                                                                                                               |
| `run_id`                        | string | ID of the run which triggered this notification.                                                                                                                                                                  |
| `run_message`                   | string | The reason the run was queued.                                                                                                                                                                                    |
| `run_created_at`                | string | Timestamp of the run's creation.                                                                                                                                                                                  |
| `run_created_by`                | string | Username of the user who created the run.                                                                                                                                                                         |
| `workspace_id`                  | string | ID of the run's workspace.                                                                                                                                                                                        |
| `workspace_name`                | string | Human-readable name of the run's workspace.                                                                                                                                                                       |
| `organization_name`             | string | Human-readable name of the run's organization.                                                                                                                                                                    |
| `notifications`                 | array  | List of events which caused this notification to be sent, with each event represented by an object. At present, this is always one event, but in the future HCP Terraform may roll up several notifications for a run into a single request. |
| `notifications[].message`       | string | Human-readable reason for the notification.                                                                                                                                                                       |
| `notifications[].trigger`       | string | Value of the trigger which caused the notification to be sent.                                                                                                                                                    |
| `notifications[].run_status`    | string | Status of the run at the time of notification.                                                                                                                                                                    |
| `notifications[].run_updated_at` | string | Timestamp of the run's update.                                                                                                                                                                                   |
| `notifications[].run_updated_by` | string | Username of the user who caused the run to update.                                                                                                                                                               |

#### Sample payload

```json
{
  "payload_version": 1,
  "notification_configuration_id": "nc-AeUQ2zfKZzW9TiGZ",
  "run_url": "https://app.terraform.io/app/acme-org/my-workspace/runs/run-FwnENkvDnrpyFC7M",
  "run_id": "run-FwnENkvDnrpyFC7M",
  "run_message": "Add five new queue workers",
  "run_created_at": "2019-01-25T18:34:00.000Z",
  "run_created_by": "sample-user",
  "workspace_id": "ws-XdeUVMWShTesDMME",
  "workspace_name": "my-workspace",
  "organization_name": "acme-org",
  "notifications": [
    {
      "message": "Run Canceled",
      "trigger": "run:errored",
      "run_status": "canceled",
      "run_updated_at": "2019-01-25T18:37:04.000Z",
      "run_updated_by": "sample-user"
    }
  ]
}
```
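
To illustrate consuming the payload above, here is a minimal Python sketch of a receiver-side helper. The function name and the one-line summary format are hypothetical, not part of the API; the field names come from the table above:

```python
import json

def summarize_run_notification(raw_body: str) -> str:
    """Extract a one-line summary from a generic run notification payload."""
    payload = json.loads(raw_body)
    # At present the `notifications` array always holds exactly one event.
    event = payload["notifications"][0]
    return (f"{payload['workspace_name']}: {event['message']} "
            f"(trigger={event['trigger']}, status={event['run_status']})")

# Abbreviated sample body, mirroring the documented payload shape.
sample = json.dumps({
    "payload_version": 1,
    "workspace_name": "my-workspace",
    "notifications": [{
        "message": "Run Canceled",
        "trigger": "run:errored",
        "run_status": "canceled",
    }],
})
print(summarize_run_notification(sample))
# my-workspace: Run Canceled (trigger=run:errored, status=canceled)
```

Because HCP Terraform may roll up several events per request in the future, production code should iterate over `notifications` rather than assume a single element.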

### Change request notification payload

Change request events contain the following fields in their payloads:

| Name                            | Type   | Description                                                      |
| ------------------------------- | ------ | ---------------------------------------------------------------- |
| `payload_version`               | number | Always `1`.                                                      |
| `notification_configuration_id` | string | The ID of the configuration associated with this notification.   |
| `change_request_url`            | string | URL used to access the change request UI page.                   |
| `change_request_subject`        | string | Title of the change request which triggered this notification.   |
| `change_request_message`        | string | The contents of the change request.                              |
| `change_request_created_at`     | string | Timestamp of the change request's creation.                      |
| `change_request_created_by`     | string | Username of the user who created the change request.             |
| `workspace_id`                  | string | ID of the run's workspace.                                       |
| `workspace_name`                | string | Human-readable name of the run's workspace.                      |
| `organization_name`             | string | Human-readable name of the run's organization.                   |

### Send a test payload

This is a sample payload you can send to test if notifications are working. The payload does not have a run or workspace context, resulting in null values.

You can trigger a test notification from the workspace notification settings page. You can read more about verifying a [notification configuration](/terraform/enterprise/workspaces/settings/notifications#enabling-and-verifying-a-configuration).

```json
{
  "payload_version": 1,
  "notification_configuration_id": "nc-jWvVsmp5VxsaCeXm",
  "run_url": null,
  "run_id": null,
  "run_message": null,
  "run_created_at": null,
  "run_created_by": null,
  "workspace_id": null,
  "workspace_name": null,
  "organization_name": null,
  "notifications": [
    {
      "message": "Verification of test",
      "trigger": "verification",
      "run_status": null,
      "run_updated_at": null,
      "run_updated_by": null
    }
  ]
}
```

### Workspace notification payload

Workspace events include detailed information about workspace-level validation events like [health assessments](/terraform/cloud-docs/workspaces/health), if you enable them for the workspace. Much of the information provides details about the associated [assessment result](/terraform/cloud-docs/api-docs/assessment-results), which HCP Terraform uses to track instances of continuous validation.

HCP Terraform returns different attributes in the payload details, depending on the type of `trigger_scope`. There are two main values for `trigger_scope`: `assessment` and `workspace`, examples of which you can see below.

#### Health assessments

Health assessment notifications for workspace events contain the following information:

<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/health-assessments.mdx'
<!-- END: TFC:only name:pnp-callout -->

| Name                                                   | Type   | Description                                                                              |
| ------------------------------------------------------ | ------ | ---------------------------------------------------------------------------------------- |
| `payload_version`                                      | number | Always `2`.                                                                              |
| `notification_configuration_id`                        | string | The ID of the configuration associated with this notification.                           |
| `notification_configuration_url`                       | string | URL to get the notification configuration from the HCP Terraform API.                    |
| `trigger_scope`                                        | string | Always `assessment` for workspace assessment notifications.                              |
| `trigger`                                              | string | Value of the trigger that caused the notification to be sent.                            |
| `message`                                              | string | Human-readable reason for the notification.                                              |
| `details`                                              | object | Object containing details specific to the notification.                                  |
| `details.new_assessment_result`                        | object | The most recent assessment result. This result triggered the notification.               |
| `details.new_assessment_result.id`                     | string | ID of the assessment result.                                                             |
| `details.new_assessment_result.url`                    | string | URL to get the assessment result from the HCP Terraform API.                             |
| `details.new_assessment_result.succeeded`              | bool   | Whether assessment succeeded.                                                            |
| `details.new_assessment_result.all_checks_succeeded`   | bool   | Whether all conditions passed.                                                           |
| `details.new_assessment_result.checks_passed`          | number | The number of resources, data sources, and outputs passing their conditions.             |
| `details.new_assessment_result.checks_failed`          | number | The number of resources, data sources, and outputs with one or more failing conditions.  |
| `details.new_assessment_result.checks_errored`         | number | The number of resources, data sources, and outputs that had a condition error.           |
| `details.new_assessment_result.checks_unknown`         | number | The number of resources, data sources, and outputs that had conditions left unevaluated. |
| `details.new_assessment_result.drifted`                | bool   | Whether assessment detected drift.                                                       |
| `details.new_assessment_result.resources_drifted`      | number | The number of resources whose configuration does not match the workspace's state file.   |
| `details.new_assessment_result.resources_undrifted`    | number | The number of resources whose configuration matches the workspace's state file.          |
| `details.new_assessment_result.created_at`             | string | Timestamp for when HCP Terraform created the assessment result.                          |
| `details.prior_assessment_result`                      | object | The assessment result immediately prior to the one that triggered the notification.      |
| `details.prior_assessment_result.id`                   | string | ID of the assessment result.                                                             |
| `details.prior_assessment_result.url`                  | string | URL to get the assessment result from the HCP Terraform API.                             |
| `details.prior_assessment_result.succeeded`            | bool   | Whether assessment succeeded.                                                            |
| `details.prior_assessment_result.all_checks_succeeded` | bool   | Whether all conditions passed.                                                           |
| `details.prior_assessment_result.checks_passed`        | number | The number of resources, data sources, and outputs passing their conditions.             |
| `details.prior_assessment_result.checks_failed`        | number | The number of resources, data sources, and outputs with one or more failing conditions.  |
| `details.prior_assessment_result.checks_errored`       | number | The number of resources, data sources, and outputs that had a condition error.           |
| `details.prior_assessment_result.checks_unknown`       | number | The number of resources, data sources, and outputs that had conditions left unevaluated. |
| `details.prior_assessment_result.drifted`              | bool   | Whether assessment detected drift.                                                       |
| `details.prior_assessment_result.resources_drifted`    | number | The number of resources whose configuration does not match the workspace's state file.   |
| `details.prior_assessment_result.resources_undrifted`  | number | The number of resources whose configuration matches the workspace's state file.          |
| `details.prior_assessment_result.created_at`           | string | Timestamp of the assessment result.                                                      |
| `details.workspace_id`                                 | string | ID of the workspace that generated the notification.                                     |
| `details.workspace_name`                               | string | Human-readable name of the workspace.                                                    |
| `details.organization_name`                            | string | Human-readable name of the organization.                                                 |

#### Sample payload

Health assessment payloads have information about resource drift and continuous validation checks.

```json
{
  "payload_version": 2,
  "notification_configuration_id": "nc-SZ3V3cLFxK6sqLKn",
  "notification_configuration_url": "https://app.terraform.io/api/v2/notification-configurations/nc-SZ3V3cLFxK6sqLKn",
  "trigger_scope": "assessment",
  "trigger": "assessment:drifted",
  "message": "Drift Detected",
  "details": {
    "new_assessment_result": {
      "id": "asmtres-vRVQxpqq64EA9V5a",
      "url": "https://app.terraform.io/api/v2/assessment-results/asmtres-vRVQxpqq64EA9V5a",
      "succeeded": true,
      "drifted": true,
      "all_checks_succeeded": true,
      "resources_drifted": 4,
      "resources_undrifted": 55,
      "checks_passed": 33,
      "checks_failed": 0,
      "checks_errored": 0,
      "checks_unknown": 0,
      "created_at": "2022-06-09T05:23:10Z"
    },
    "prior_assessment_result": {
      "id": "asmtres-A6zEbpGArqP74fdL",
      "url": "https://app.terraform.io/api/v2/assessment-results/asmtres-A6zEbpGArqP74fdL",
      "succeeded": true,
      "drifted": true,
      "all_checks_succeeded": true,
      "resources_drifted": 4,
      "resources_undrifted": 55,
      "checks_passed": 33,
      "checks_failed": 0,
      "checks_errored": 0,
      "checks_unknown": 0,
      "created_at": "2022-06-09T05:22:51Z"
    },
    "workspace_id": "ws-XdeUVMWShTesDMME",
    "workspace_name": "my-workspace",
    "organization_name": "acme-org"
  }
}
```
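
Since each payload carries both the new and prior assessment results, a receiver can compare them to decide how urgent the notification is. A minimal Python sketch (the helper names are hypothetical; the keys are from the table above):

```python
def drift_newly_detected(details: dict) -> bool:
    """True when the triggering assessment found drift the prior one did not."""
    new = details["new_assessment_result"]
    prior = details["prior_assessment_result"]
    return bool(new["drifted"] and not prior["drifted"])

def drift_delta(details: dict) -> int:
    """Change in the number of drifted resources between assessments."""
    return (details["new_assessment_result"]["resources_drifted"]
            - details["prior_assessment_result"]["resources_drifted"])

# Values taken from the sample payload above: both assessments report
# drift on 4 resources, so nothing is newly drifted and the delta is 0.
details = {
    "new_assessment_result": {"drifted": True, "resources_drifted": 4},
    "prior_assessment_result": {"drifted": True, "resources_drifted": 4},
}
print(drift_newly_detected(details), drift_delta(details))
# False 0
```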

#### Automatic destroy runs

<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/ephemeral-workspaces.mdx'
<!-- END: TFC:only name:pnp-callout -->

Automatic destroy run notifications for workspace events contain the following information:

| Name                               | Type   | Description                                                                                                  |
| ---------------------------------- | ------ | ------------------------------------------------------------------------------------------------------------ |
| `payload_version`                  | string | Always `2`.                                                                                                  |
| `notification_configuration_id`    | string | The ID of the notification's configuration.                                                                  |
| `notification_configuration_url`   | string | The URL to get the notification's configuration from the HCP Terraform API.                                  |
| `trigger_scope`                    | string | Always `workspace` for ephemeral workspace notifications.                                                    |
| `trigger`                          | string | Value of the trigger that caused HCP Terraform to send the notification.                                     |
| `message`                          | string | Human-readable reason for the notification.                                                                  |
| `details`                          | object | Object containing details specific to the notification.                                                      |
| `details.auto_destroy_at`          | string | Timestamp when HCP Terraform will schedule the next destroy run. Only applies to reminder notifications.     |
| `details.run_created_at`           | string | Timestamp of when HCP Terraform successfully created a destroy run. Only applies to results notifications.   |
| `details.run_status`               | string | Status of the scheduled destroy run. Only applies to results notifications.                                  |
| `details.run_external_id`          | string | The ID of the scheduled destroy run. Only applies to results notifications.                                  |
| `details.run_create_error_message` | string | Message detailing why the run was unable to be queued. Only applies to results notifications.                |
| `details.trigger_type`             | string | The type of notification; the value is either `reminder` or `results`.                                       |
| `details.workspace_name`           | string | Human-readable name of the workspace.                                                                        |
| `details.organization_name`        | string | Human-readable name of the organization.                                                                     |

#### Sample payload

The shape of data in auto-destroy notification payloads may differ depending on the success of the run HCP Terraform created. Refer to the specific examples below.

##### Reminder

A reminder that HCP Terraform will trigger a destroy run at some point in the future:

```json
{
  "payload_version": "2",
  "notification_configuration_id": "nc-SZ3V3cLFxK6sqLKn",
  "notification_configuration_url": "https://app.terraform.io/api/v2/notification-configurations/nc-SZ3V3cLFxK6sqLKn",
  "trigger_scope": "workspace",
  "trigger": "workspace:auto_destroy_reminder",
  "message": "Auto Destroy Reminder",
  "details": {
    "auto_destroy_at": "2025-01-01T00:00:00Z",
    "run_created_at": null,
    "run_status": null,
    "run_external_id": null,
    "run_create_error_message": null,
    "trigger_type": "reminder",
    "workspace_name": "learned-english-dog",
    "organization_name": "acme-org"
  }
}
```

##### Results

The final result of the scheduled auto-destroy run includes additional metadata about the run:

```json
{
  "payload_version": "2",
  "notification_configuration_id": "nc-SZ3V3cLFxK6sqLKn",
  "notification_configuration_url": "https://app.terraform.io/api/v2/notification-configurations/nc-SZ3V3cLFxK6sqLKn",
  "trigger_scope": "workspace",
  "trigger": "workspace:auto_destroy_results",
  "message": "Auto Destroy Results",
  "details": {
    "auto_destroy_at": null,
    "run_created_at": "2022-06-09T05:22:51Z",
    "run_status": "applied",
    "run_external_id": "run-vRVQxpqq64EA9V5a",
    "run_create_error_message": null,
    "trigger_type": "results",
    "workspace_name": "learned-english-dog",
    "organization_name": "acme-org"
  }
}
```

##### Failed run creation

Run-specific values are empty when HCP Terraform was unable to create an auto-destroy run:

```json
{
  "payload_version": "2",
  "notification_configuration_id": "nc-SZ3V3cLFxK6sqLKn",
  "notification_configuration_url": "https://app.terraform.io/api/v2/notification-configurations/nc-SZ3V3cLFxK6sqLKn",
  "trigger_scope": "workspace",
  "trigger": "workspace:auto_destroy_results",
  "message": "Auto Destroy Results",
  "details": {
    "auto_destroy_at": null,
    "run_created_at": null,
    "run_status": null,
    "run_external_id": null,
    "run_create_error_message": "Configuration version is missing",
    "trigger_type": "results",
    "workspace_name": "learned-english-dog",
    "organization_name": "acme-org"
  }
}
```

## Notification authenticity

If a `token` is configured, HCP Terraform provides an HMAC signature on all "generic" notification requests, using the `token` as the key. This is sent in the `X-TFE-Notification-Signature` header. The digest algorithm used is SHA-512. Notification target servers should verify the source of the HTTP request by computing the HMAC of the request body using the same shared secret, and dropping any requests with invalid signatures.

Sample Ruby code for verifying the HMAC:

```ruby
token = SecureRandom.hex
hmac = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("sha512"), token, request.body)
fail "Invalid HMAC" if hmac != request.headers["X-TFE-Notification-Signature"]
```
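
The same check can be performed in any language with an HMAC library. A Python sketch of the verification step (how you read the request body and header depends on your web framework, which is assumed here):

```python
import hashlib
import hmac

def valid_signature(shared_token: str, request_body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA512 hex digest of the raw request body and
    compare it to the X-TFE-Notification-Signature header in constant time."""
    expected = hmac.new(shared_token.encode(), request_body, hashlib.sha512).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# Simulate a signed request to show the round trip.
body = b'{"payload_version": 1}'
good = hmac.new(b"shared-secret", body, hashlib.sha512).hexdigest()
print(valid_signature("shared-secret", body, good))         # True
print(valid_signature("shared-secret", b"tampered", good))  # False
```

Using `hmac.compare_digest` rather than `==` avoids leaking timing information about how much of the signature matched.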
## Notification verification and delivery responses

When saving a configuration with `enabled` set to `true`, or when using the [verify API](#verify-a-notification-configuration), HCP Terraform sends a verification request to the configured URL. The response to this request is stored and available in the `delivery-responses` array of the `notification-configuration` resource.

Configurations cannot be enabled if the verification request fails. Success is defined as an HTTP response with a status code of 2xx. Configurations with `destination-type` of `email` can only be verified manually; they do not require an HTTP response.

The most recent response is stored in the `delivery-responses` array.

Each delivery response has several fields:

| Name         | Type   | Description                                                                                                                                                |
|--------------|--------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `body`       | string | Response body (may be truncated)                                                                                                                             |
| `code`       | string | HTTP status code, e.g. `400`                                                                                                                                 |
| `headers`    | object | All HTTP headers received, represented as an object with keys for each header name (lowercased) and an array of string values (most arrays will be size one) |
| `sent-at`    | date   | The UTC timestamp when the notification was sent                                                                                                             |
| `successful` | bool   | Whether HCP Terraform considers this response to be successful                                                                                               |
| `url`        | string | The URL to which the request was sent                                                                                                                        |
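As a sketch of consuming these fields on the client side (the helper name is illustrative and not part of the API), the most recent delivery response can be pulled out of a parsed notification-configuration document and checked for `successful`:

```ruby
require "json"

# Hypothetical helper: return the most recent delivery response recorded on a
# parsed notification-configuration resource, or nil if none was recorded yet.
def latest_delivery_response(resource)
  (resource.dig("data", "attributes", "delivery-responses") || []).last
end

# Abbreviated document shaped like the sample API responses in this document.
doc = JSON.parse(<<~JSON)
  {
    "data": {
      "attributes": {
        "delivery-responses": [
          {
            "url": "https://httpstat.us/200",
            "code": "200",
            "successful": true,
            "sent-at": "2019-01-08 21:34:37 UTC"
          }
        ]
      }
    }
  }
JSON

latest = latest_delivery_response(doc)
```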
## Create a notification configuration

`POST /workspaces/:workspace_id/notification-configurations`

| Parameter       | Description |
|-----------------|-------------|
| `:workspace_id` | The ID of the workspace to create a configuration for. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](/terraform/cloud-docs/api-docs/workspaces#show-workspace) endpoint. |

| Status | Response | Reason |
|--------|----------|--------|
| 201 | JSON API document (`type: "notification-configurations"`) | Successfully created a notification configuration |
| 400 | JSON API error object | Unable to complete verification request to destination URL |
| 404 | JSON API error object | Workspace not found, or user unauthorized to perform action |
| 422 | JSON API error object | Malformed request body (missing attributes, wrong types, etc.) |

### Request body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

If `enabled` is set to `true`, a verification request will be sent before saving the configuration. If this request receives no response or the response is not successful (HTTP 2xx), the configuration will not save.

| Key path | Type | Default | Description |
|----------|------|---------|-------------|
| `data.type` | string | | Must be `"notification-configuration"` |
| `data.attributes.destination-type` | string | | Type of notification payload to send. Valid values are `"generic"`, `"email"`, `"slack"`, or `"microsoft-teams"`. |
| `data.attributes.enabled` | bool | `false` | Disabled configurations will not send any notifications. |
| `data.attributes.name` | string | | Human-readable name for the configuration. |
| `data.attributes.token` | string or null | `null` | Optional write-only secure token, which can be used at the receiving server to verify request authenticity. See [Notification Authenticity](#notification-authenticity) for more details. |
| `data.attributes.triggers` | array | | Array of triggers for which this configuration will send notifications. See [Notification Triggers](#notification-triggers) for more details and a list of allowed values. |
| `data.attributes.url` | string | | HTTP or HTTPS URL to which notification requests will be made (only for configurations with `destination-type` of `"slack"`, `"microsoft-teams"`, or `"generic"`). |
| `data.relationships.users` | array | | Array of users part of the organization (only for configurations with `destination-type` of `"email"`). |

### Sample payload for generic notification configurations

```json
{
  "data": {
    "type": "notification-configuration",
    "attributes": {
      "destination-type": "generic",
      "enabled": true,
      "name": "Webhook server test",
      "url": "https://httpstat.us/200",
      "triggers": [
        "run:applying",
        "run:completed",
        "run:created",
        "run:errored",
        "run:needs_attention",
        "run:planning"
      ]
    }
  }
}
```

### Sample payload for email notification configurations

```json
{
  "data": {
    "type": "notification-configurations",
    "attributes": {
      "destination-type": "email",
      "enabled": true,
      "name": "Notify organization users about run",
      "triggers": [
        "run:applying",
        "run:completed",
        "run:created",
        "run:errored",
        "run:needs_attention",
        "run:planning"
      ]
    },
    "relationships": {
      "users": {
        "data": [
          {
            "id": "organization-user-id",
            "type": "users"
          }
        ]
      }
    }
  }
}
```

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-XdeUVMWShTesDMME/notification-configurations
```

### Sample response

```json
{
  "data": {
    "id": "nc-AeUQ2zfKZzW9TiGZ",
    "type": "notification-configurations",
    "attributes": {
      "enabled": true,
      "name": "Webhook server test",
      "url": "https://httpstat.us/200",
      "destination-type": "generic",
      "token": null,
      "triggers": [
        "run:applying",
        "run:completed",
        "run:created",
        "run:errored",
        "run:needs_attention",
        "run:planning"
      ],
      "delivery-responses": [
        {
          "url": "https://httpstat.us/200",
          "body": "\"200 OK\"",
          "code": "200",
          "headers": {
            "cache-control": [
              "private"
            ],
            "content-length": [
              "129"
            ],
            "content-type": [
              "application/json; charset=utf-8"
            ],
            "content-encoding": [
              "gzip"
            ],
            "vary": [
              "Accept-Encoding"
            ],
            "server": [
              "Microsoft-IIS/10.0"
            ],
            "x-aspnetmvc-version": [
              "5.1"
            ],
            "access-control-allow-origin": [
              "*"
            ],
            "x-aspnet-version": [
              "4.0.30319"
            ],
            "x-powered-by": [
              "ASP.NET"
            ],
            "set-cookie": [
              "ARRAffinity=77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9;Path=/;HttpOnly;Domain=httpstat.us"
            ],
            "date": [
              "Tue, 08 Jan 2019 21:34:37 GMT"
            ]
          },
          "sent-at": "2019-01-08 21:34:37 UTC",
          "successful": true
        }
      ],
      "created-at": "2019-01-08T21:32:14.125Z",
      "updated-at": "2019-01-08T21:34:37.274Z"
    },
    "relationships": {
      "subscribable": {
        "data": {
          "id": "ws-XdeUVMWShTesDMME",
          "type": "workspaces"
        }
      }
    },
    "links": {
      "self": "/api/v2/notification-configurations/nc-AeUQ2zfKZzW9TiGZ"
    }
  }
}
```
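The `payload.json` passed to the create request above can be assembled programmatically rather than written by hand. A minimal sketch (the builder function is illustrative, not part of any SDK):

```ruby
require "json"

# Illustrative builder: assemble the request body for creating a generic
# webhook notification configuration, mirroring the sample payload above.
def build_generic_notification_config(name:, url:, triggers:)
  {
    "data" => {
      "type" => "notification-configuration",
      "attributes" => {
        "destination-type" => "generic",
        "enabled" => true,
        "name" => name,
        "url" => url,
        "triggers" => triggers
      }
    }
  }
end

payload = JSON.pretty_generate(
  build_generic_notification_config(
    name: "Webhook server test",
    url: "https://httpstat.us/200",
    triggers: ["run:created", "run:errored"]
  )
)
# Write `payload` to payload.json, then pass it with: curl --data @payload.json
```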
## List notification configurations

Use the following endpoint to list all notification configurations for a workspace.

`GET /workspaces/:workspace_id/notification-configurations`

| Parameter       | Description |
|-----------------|-------------|
| `:workspace_id` | The ID of the workspace to list configurations from. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](/terraform/cloud-docs/api-docs/workspaces#show-workspace) endpoint. |

If no pagination query parameters are provided, the endpoint is not paginated and returns all results.

### Query parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter      | Description |
|----------------|-------------|
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 notification configurations per page. |
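Standard form-encoding helpers already handle the bracket escaping. For example, in Ruby (`URI.encode_www_form` is from the standard library; the parameter values here are arbitrary):

```ruby
require "uri"

# URI.encode_www_form percent-encodes "[" as %5B and "]" as %5D automatically,
# so page[number] and page[size] can be passed as plain strings.
params = { "page[number]" => 2, "page[size]" => 20 }
query = URI.encode_www_form(params)
# query => "page%5Bnumber%5D=2&page%5Bsize%5D=20"
```

The resulting query string can then be appended to the list endpoint URL after a `?`.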
### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/workspaces/ws-XdeUVMWShTesDMME/notification-configurations
```

### Sample response

```json
{
  "data": [
    {
      "id": "nc-W6VGEi8A7Cfoaf4K",
      "type": "notification-configurations",
      "attributes": {
        "enabled": false,
        "name": "Slack: #devops",
        "url": "https://hooks.slack.com/services/T00000000/BC012345/0PWCpQmYyD4bTTRYZ53q4w",
        "destination-type": "slack",
        "token": null,
        "triggers": [
          "run:errored",
          "run:needs_attention"
        ],
        "delivery-responses": [],
        "created-at": "2019-01-08T21:34:28.367Z",
        "updated-at": "2019-01-08T21:34:28.367Z"
      },
      "relationships": {
        "subscribable": {
          "data": {
            "id": "ws-XdeUVMWShTesDMME",
            "type": "workspaces"
          }
        }
      },
      "links": {
        "self": "/api/v2/notification-configurations/nc-W6VGEi8A7Cfoaf4K"
      }
    },
    {
      "id": "nc-AeUQ2zfKZzW9TiGZ",
      "type": "notification-configurations",
      "attributes": {
        "enabled": true,
        "name": "Webhook server test",
        "url": "https://httpstat.us/200",
        "destination-type": "generic",
        "token": null,
        "triggers": [
          "run:applying",
          "run:completed",
          "run:created",
          "run:errored",
          "run:needs_attention",
          "run:planning"
        ],
        "delivery-responses": [
          {
            "url": "https://httpstat.us/200",
            "body": "\"200 OK\"",
            "code": "200",
            "headers": {
              "cache-control": [
                "private"
              ],
              "content-length": [
                "129"
              ],
              "content-type": [
                "application/json; charset=utf-8"
              ],
              "content-encoding": [
                "gzip"
              ],
              "vary": [
                "Accept-Encoding"
              ],
              "server": [
                "Microsoft-IIS/10.0"
              ],
              "x-aspnetmvc-version": [
                "5.1"
              ],
              "access-control-allow-origin": [
                "*"
              ],
              "x-aspnet-version": [
                "4.0.30319"
              ],
              "x-powered-by": [
                "ASP.NET"
              ],
              "set-cookie": [
                "ARRAffinity=77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9;Path=/;HttpOnly;Domain=httpstat.us"
              ],
              "date": [
                "Tue, 08 Jan 2019 21:34:37 GMT"
              ]
            },
            "sent-at": "2019-01-08 21:34:37 UTC",
            "successful": true
          }
        ],
        "created-at": "2019-01-08T21:32:14.125Z",
        "updated-at": "2019-01-08T21:34:37.274Z"
      },
      "relationships": {
        "subscribable": {
          "data": {
            "id": "ws-XdeUVMWShTesDMME",
            "type": "workspaces"
          }
        }
      },
      "links": {
        "self": "/api/v2/notification-configurations/nc-AeUQ2zfKZzW9TiGZ"
      }
    }
  ]
}
```

## Show a notification configuration

`GET /notification-configurations/:notification-configuration-id`

| Parameter                        | Description |
|----------------------------------|-------------|
| `:notification-configuration-id` | The id of the notification configuration to show. |

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/notification-configurations/nc-AeUQ2zfKZzW9TiGZ
```
### Sample response

The `type` and `id` attributes in `relationships.subscribable` may also reference a `"teams"` type and team ID, respectively.

```json
{
  "data": {
    "id": "nc-AeUQ2zfKZzW9TiGZ",
    "type": "notification-configurations",
    "attributes": {
      "enabled": true,
      "name": "Webhook server test",
      "url": "https://httpstat.us/200",
      "destination-type": "generic",
      "token": null,
      "triggers": [
        "run:applying",
        "run:completed",
        "run:created",
        "run:errored",
        "run:needs_attention",
        "run:planning"
      ],
      "delivery-responses": [
        {
          "url": "https://httpstat.us/200",
          "body": "\"200 OK\"",
          "code": "200",
          "headers": {
            "cache-control": [
              "private"
            ],
            "content-length": [
              "129"
            ],
            "content-type": [
              "application/json; charset=utf-8"
            ],
            "content-encoding": [
              "gzip"
            ],
            "vary": [
              "Accept-Encoding"
            ],
            "server": [
              "Microsoft-IIS/10.0"
            ],
            "x-aspnetmvc-version": [
              "5.1"
            ],
            "access-control-allow-origin": [
              "*"
            ],
            "x-aspnet-version": [
              "4.0.30319"
            ],
            "x-powered-by": [
              "ASP.NET"
            ],
            "set-cookie": [
              "ARRAffinity=77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9;Path=/;HttpOnly;Domain=httpstat.us"
            ],
            "date": [
              "Tue, 08 Jan 2019 21:34:37 GMT"
            ]
          },
          "sent-at": "2019-01-08 21:34:37 UTC",
          "successful": true
        }
      ],
      "created-at": "2019-01-08T21:32:14.125Z",
      "updated-at": "2019-01-08T21:34:37.274Z"
    },
    "relationships": {
      "subscribable": {
        "data": {
          "id": "ws-XdeUVMWShTesDMME",
          "type": "workspaces"
        }
      }
    },
    "links": {
      "self": "/api/v2/notification-configurations/nc-AeUQ2zfKZzW9TiGZ"
    }
  }
}
```

## Update a notification configuration

`PATCH /notification-configurations/:notification-configuration-id`

| Parameter                        | Description |
|----------------------------------|-------------|
| `:notification-configuration-id` | The id of the notification configuration to update. |

If the `enabled` attribute is true, updating the configuration will cause HCP Terraform to send a verification request. If a response is received, it will be stored and returned in the `delivery-responses` attribute. More details in the [Notification Verification and Delivery Responses](#notification-verification-and-delivery-responses) section above.

| Status | Response | Reason |
|--------|----------|--------|
| 200 | JSON API document (`type: "notification-configurations"`) | Successfully updated the notification configuration |
| 400 | JSON API error object | Unable to complete verification request to destination URL |
| 404 | JSON API error object | Notification configuration not found, or user unauthorized to perform action |
| 422 | JSON API error object | Malformed request body (missing attributes, wrong types, etc.) |

### Request body
This PATCH endpoint requires a JSON object with the following properties as a request payload.

If `enabled` is set to `true`, a verification request will be sent before saving the configuration. If this request fails to send, or the response is not successful (HTTP 2xx), the configuration will not save.

| Key path | Type | Default | Description |
|----------|------|---------|-------------|
| `data.type` | string | (previous value) | Must be `"notification-configuration"` |
| `data.attributes.enabled` | bool | (previous value) | Disabled configurations will not send any notifications. |
| `data.attributes.name` | string | (previous value) | User-readable name for the configuration. |
| `data.attributes.token` | string | (previous value) | Optional write-only secure token, which can be used at the receiving server to verify request authenticity. See [Notification Authenticity](#notification-authenticity) for more details. |
| `data.attributes.triggers` | array | (previous value) | Array of triggers for sending notifications. See [Notification Triggers](#notification-triggers) for more details. |
| `data.attributes.url` | string | (previous value) | HTTP or HTTPS URL to which notification requests will be made (only for configurations with `destination-type` of `"slack"`, `"microsoft-teams"`, or `"generic"`). |
| `data.relationships.users` | array | | Array of users part of the organization (only for configurations with `destination-type` of `"email"`). |

### Sample payload

```json
{
  "data": {
    "id": "nc-W6VGEi8A7Cfoaf4K",
    "type": "notification-configurations",
    "attributes": {
      "enabled": false,
      "name": "Slack: #devops",
      "url": "https://hooks.slack.com/services/T00000001/BC012345/0PWCpQmYyD4bTTRYZ53q4w",
      "destination-type": "slack",
      "token": null,
      "triggers": [
        "run:created",
        "run:errored",
        "run:needs_attention"
      ]
    }
  }
}
```

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/notification-configurations/nc-W6VGEi8A7Cfoaf4K
```

### Sample response

```json
{
  "data": {
    "id": "nc-W6VGEi8A7Cfoaf4K",
    "type": "notification-configurations",
    "attributes": {
      "enabled": false,
      "name": "Slack: #devops",
      "url": "https://hooks.slack.com/services/T00000001/BC012345/0PWCpQmYyD4bTTRYZ53q4w",
      "destination-type": "slack",
      "token": null,
      "triggers": [
        "run:created",
        "run:errored",
        "run:needs_attention"
      ],
      "delivery-responses": [],
      "created-at": "2019-01-08T21:34:28.367Z",
      "updated-at": "2019-01-08T21:49:02.103Z"
    },
    "relationships": {
      "subscribable": {
        "data": {
          "id": "ws-XdeUVMWShTesDMME",
          "type": "workspaces"
        }
      }
    },
    "links": {
      "self": "/api/v2/notification-configurations/nc-W6VGEi8A7Cfoaf4K"
    }
  }
}
```

## Verify a notification configuration

`POST /notification-configurations/:notification-configuration-id/actions/verify`

| Parameter                        | Description |
|----------------------------------|-------------|
| `:notification-configuration-id` | The id of the notification configuration to verify. |

This will cause HCP Terraform to send a verification request for the specified configuration. If a response is received, it will be stored and returned in the `delivery-responses` attribute. More details in the [Notification Verification and Delivery Responses](#notification-verification-and-delivery-responses) section above.

| Status | Response | Reason |
|--------|----------|--------|
| 200 | JSON API document (`type: "notification-configurations"`) | Successfully verified the notification configuration |
| 400 | JSON API error object | Unable to complete verification request to destination URL |

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  https://app.terraform.io/api/v2/notification-configurations/nc-AeUQ2zfKZzW9TiGZ/actions/verify
```

### Sample response

```json
{
  "data": {
    "id": "nc-AeUQ2zfKZzW9TiGZ",
    "type": "notification-configurations",
    "attributes": {
      "enabled": true,
      "name": "Webhook server test",
      "url": "https://httpstat.us/200",
      "destination-type": "generic",
      "token": null,
      "triggers": [
        "run:applying",
        "run:completed",
        "run:created",
        "run:errored",
        "run:needs_attention",
        "run:planning"
      ],
      "delivery-responses": [
        {
          "url": "https://httpstat.us/200",
          "body": "\"200 OK\"",
          "code": "200",
          "headers": {
            "cache-control": [
              "private"
            ],
            "content-length": [
              "129"
            ],
            "content-type": [
              "application/json; charset=utf-8"
            ],
            "content-encoding": [
              "gzip"
            ],
            "vary": [
              "Accept-Encoding"
            ],
            "server": [
              "Microsoft-IIS/10.0"
            ],
            "x-aspnetmvc-version": [
              "5.1"
            ],
            "access-control-allow-origin": [
              "*"
            ],
            "x-aspnet-version": [
              "4.0.30319"
            ],
            "x-powered-by": [
              "ASP.NET"
            ],
            "set-cookie": [
              "ARRAffinity=77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9;Path=/;HttpOnly;Domain=httpstat.us"
            ],
            "date": [
              "Tue, 08 Jan 2019 21:34:37 GMT"
            ]
          },
          "sent-at": "2019-01-08 21:34:37 UTC",
          "successful": true
        }
      ],
      "created-at": "2019-01-08T21:32:14.125Z",
      "updated-at": "2019-01-08T21:34:37.274Z"
    },
    "relationships": {
      "subscribable": {
        "data": {
          "id": "ws-XdeUVMWShTesDMME",
          "type": "workspaces"
        }
      }
    },
    "links": {
      "self": "/api/v2/notification-configurations/nc-AeUQ2zfKZzW9TiGZ"
    }
  }
}
```

## Delete a notification configuration

This endpoint deletes a notification configuration.
`DELETE /notification-configurations/:notification-configuration-id`

| Parameter                        | Description |
|----------------------------------|-------------|
| `:notification-configuration-id` | The id of the notification configuration to delete. |

| Status | Response | Reason |
|--------|----------|--------|
| 204 | None | Successfully deleted the notification configuration |
| 404 | JSON API error object | Notification configuration not found, or user unauthorized to perform action |

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/notification-configurations/nc-AeUQ2zfKZzW9TiGZ
```

## Team notification configuration

~> **Note**: Team notifications are in public beta. All APIs and workflows are subject to change.

Team notifications allow you to configure relevant alerts that notify teams you specify whenever a certain event occurs.

<!-- BEGIN: TFC:only name:pnp-callout -->
@include 'tfc-package-callouts/notifications.mdx'
<!-- END: TFC:only name:pnp-callout -->

## Create a team notification configuration

By default, every team has a default email notification configuration with no users assigned. If a notification configuration has no users assigned, HCP Terraform sends email notifications to all team members.

Use this endpoint to create a notification configuration to notify a team.

`POST /teams/:team_id/notification-configurations`
| Parameter  | Description |
|------------|-------------|
| `:team_id` | The ID of the team to create a configuration for. |

| Status | Response | Reason |
|--------|----------|--------|
| 201 | JSON API document (`type: "notification-configurations"`) | Successfully created a notification configuration |
| 400 | JSON API error object | Unable to complete verification request to destination URL |
| 404 | JSON API error object | Team not found, or user unauthorized to perform action |
| 422 | JSON API error object | Malformed request body (missing attributes, wrong types, etc.) |

### Request body

This POST endpoint requires a JSON object with the following properties as a request payload. Properties without a default value are required.

If `enabled` is set to `true`, HCP Terraform sends a verification request before saving the configuration. If this request does not receive a response or the response is not successful (HTTP 2xx), the configuration will not be saved.

| Key path | Type | Default | Description |
|----------|------|---------|-------------|
| `data.type` | string | | Must be `"notification-configuration"` |
| `data.attributes.destination-type` | string | | Type of notification payload to send. Valid values are `"generic"`, `"email"`, `"slack"`, or `"microsoft-teams"`. |
| `data.attributes.enabled` | bool | `false` | Disabled configurations will not send any notifications. |
| `data.attributes.name` | string | | Human-readable name for the configuration. |
| `data.attributes.token` | string or null | `null` | Optional write-only secure token, which can be used at the receiving server to verify request authenticity. See [Notification Authenticity](#notification-authenticity) for more details. |
| `data.attributes.triggers` | array | | Array of triggers for which this configuration will send notifications. See [Notification Triggers](#notification-triggers) for more details and a list of allowed values. |
| `data.attributes.url` | string | | HTTP or HTTPS URL to which notification requests will be made (only for configurations with `destination-type` of `"slack"`, `"microsoft-teams"`, or `"generic"`). |
| `data.attributes.email-all-members` | bool | | Email all team members (only for configurations with `destination-type` of `"email"`). |
| `data.relationships.users` | array | | Array of users part of the organization (only for configurations with `destination-type` of `"email"`). |

### Sample payload for generic notification configurations

```json
{
  "data": {
    "type": "notification-configuration",
    "attributes": {
      "destination-type": "generic",
      "enabled": true,
      "name": "Webhook server test",
      "url": "https://httpstat.us/200",
      "triggers": [
        "change_request:created"
      ]
    }
  }
}
```

### Sample payload for email notification configurations

```json
{
  "data": {
    "type": "notification-configurations",
    "attributes": {
      "destination-type": "email",
      "enabled": true,
      "name": "Email teams about change requests",
      "triggers": [
        "change_request:created"
      ]
    },
    "relationships": {
      "users": {
        "data": [
          {
            "id": "organization-user-id",
            "type": "users"
          }
        ]
      }
    }
  }
}
```

### Sample request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/teams/team-6p5jTwJQXwqZBncC/notification-configurations
```

### Sample response

```json
{
  "data": {
    "id": "nc-AeUQ2zfKZzW9TiGZ",
    "type": "notification-configurations",
    "attributes": {
      "enabled": true,
      "name": "Webhook server test",
      "url": "https://httpstat.us/200",
      "destination-type": "generic",
      "token": null,
      "triggers": [
        "change_request:created"
      ],
      "delivery-responses": [
        {
          "url": "https://httpstat.us/200",
          "body": "\"200 OK\"",
          "code": "200",
    headers                  cache control                    private                              content length                    129                              content type                    application json  charset utf 8                              content encoding                    gzip                              vary                    Accept Encoding                              server                    Microsoft IIS 10 0                              x aspnetmvc version                    5 1                              access control allow origin                                                   x aspnet version                    4 0 30319                              x powered by                    ASP NET                              set cookie                    ARRAffinity 77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9 Path   HttpOnly Domain httpstat us                              date                    Tue  08 Jan 2024 21 34 37 GMT                                        sent at    2024 01 08 21 34 37 UTC              successful    true                            created at    2024 01 08T21 32 14 125Z          updated at    2024 01 08T21 34 37 274Z              relationships            subscribable              data                id    team 6p5jTwJQXwqZBncC              type    teams                                links            self     api v2 notification configurations nc AeUQ2zfKZzW9TiGZ                       List team notification configurations  Use this endpoint to list notification configurations for a team    GET  teams  team id notification configurations      Parameter         Description                                                                                                                                                                                                                                                                                                                                       
                                                                                                                                                                                                                                                                                                                                                          team id    The ID of the teams to list configurations from         Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter        Description                                                                                                                                                                                                       page number       Optional    If omitted  the endpoint will return the first page                                page size         Optional    If omitted  the endpoint will return 20 notification configurations per page          Sample request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 teams team 6p5jTwJQXwqZBncC notification configurations           Sample response     json      data                  id    nc W6VGEi8A7Cfoaf4K          type    notification configurations          attributes              enabled   false           name    Slack   devops            url    https   hooks slack com services T00000000 BC012345 0PWCpQmYyD4bTTRYZ53q4w            destination type    slack            token   null           triggers                change request created                      delivery responses                created at    2019 01 08T21 34 28 367Z            updated at    2019 01 08T21 34 28 367Z                  relationships              subscribable                
data                  id    team TdeUVMWShTesDMME                type    teams                                        links              self     api v2 notification configurations nc W6VGEi8A7Cfoaf4K                              id    nc AeUQ2zfKZzW9TiGZ          type    notification configurations          attributes              enabled   true           name    Webhook server test            url    https   httpstat us 200            destination type    generic            token   null           triggers                change request created                      delivery responses                              url    https   httpstat us 200                body      200 OK                  code    200                headers                    cache control                      private                                  content length                      129                                  content type                      application json  charset utf 8                                  content encoding                      gzip                                  vary                      Accept Encoding                                  server                      Microsoft IIS 10 0                                  x aspnetmvc version                      5 1                                  access control allow origin                                                         x aspnet version                      4 0 30319                                  x powered by                      ASP NET                                  set cookie                      ARRAffinity 77c477e3e649643e5771873e1a13179fb00983bc73c71e196bf25967fd453df9 Path   HttpOnly Domain httpstat us                                  date                      Tue  08 Jan 2019 21 34 37 GMT                                              sent at    2019 01 08 21 34 37 UTC                successful    true                                  created at    2019 01 08T21 32 14 125Z            updated at    2019 
01 08T21 34 37 274Z                  relationships              subscribable                data                  id    team XdeUVMWShTesDMME                type    teams                                        links              self     api v2 notification configurations nc AeUQ2zfKZzW9TiGZ                          "}
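The create endpoint above takes a JSON:API document as its body. As an illustration only, here is a minimal Python sketch that assembles the generic-notification payload from the sample; the `generic_notification_payload` helper is hypothetical, and the name, URL, and trigger values are just the documentation's sample data:

```python
import json

def generic_notification_payload(name, url, triggers):
    # Hypothetical helper: builds the JSON:API body for a "generic"
    # team notification configuration, mirroring the sample payload.
    return {
        "data": {
            "type": "notification-configuration",
            "attributes": {
                "destination-type": "generic",
                "enabled": True,
                "name": name,
                "url": url,
                "triggers": list(triggers),
            },
        }
    }

payload = generic_notification_payload(
    "Webhook server test",
    "https://httpstat.us/200",
    ["change_request:created"],
)
# Serialize for use as the --data body of the POST request.
print(json.dumps(payload, indent=2))
```

The serialized output can then be sent as the request body with the `Content-Type: application/vnd.api+json` header shown in the sample request.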
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Plans API Docs HCP Terraform Use the plans endpoint to access Terraform run plans Show a plan and get the JSON formatted execution plan using the HTTP API","answers":"---\npage_title: Plans - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/plans` endpoint to access Terraform run plans. Show a plan and get the JSON-formatted execution plan using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[307]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/307\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Plans API\n\nA plan represents the execution plan of a Run in a Terraform workspace.\n\n## Attributes\n\n### Plan 
States\n\nThe plan state is found in `data.attributes.status`, and you can reference the following list of possible states.\n\n| State                     | Description                                                                   |\n| ------------------------- | ----------------------------------------------------------------------------- |\n| `pending`                 | The initial status of a plan once it has been created.                        |\n| `managed_queued`\/`queued` | The plan has been queued, awaiting backend service capacity to run terraform. |\n| `running`                 | The plan is running.                                                          |\n| `errored`                 | The plan has errored. This is a final state.                                  |\n| `canceled`                | The plan has been canceled. This is a final state.                            |\n| `finished`                | The plan has completed successfully. This is a final state.                   |\n| `unreachable`             | The plan will not run. This is a final state.                                 |\n\n## Show a plan\n\n`GET \/plans\/:id`\n\n| Parameter | Description                 |\n| --------- | --------------------------- |\n| `id`      | The ID of the plan to show. |\n\nThere is no endpoint to list plans. 
You can find the ID for a plan in the\n`relationships.plan` property of a run object.\n\n| Status  | Response                                | Reason                                                 |\n| ------- | --------------------------------------- | ------------------------------------------------------ |\n| [200][] | [JSON API document][] (`type: \"plans\"`) | The request was successful                             |\n| [404][] | [JSON API error object][]               | Plan not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/plans\/plan-8F5JFydVYAmtTjET\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"plan-8F5JFydVYAmtTjET\",\n    \"type\": \"plans\",\n    \"attributes\": {\n      \"execution-details\": {\n        \"mode\": \"remote\"\n      },\n      \"generated-configuration\": false,\n      \"has-changes\": true,\n      \"resource-additions\": 0,\n      \"resource-changes\": 1,\n      \"resource-destructions\": 0,\n      \"resource-imports\": 0,\n      \"status\": \"finished\",\n      \"status-timestamps\": {\n        \"queued-at\": \"2018-07-02T22:29:53+00:00\",\n        \"pending-at\": \"2018-07-02T22:29:53+00:00\",\n        \"started-at\": \"2018-07-02T22:29:54+00:00\",\n        \"finished-at\": \"2018-07-02T22:29:58+00:00\"\n      },\n      \"log-read-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djE6OFA1eEdlSFVHRSs4YUcwaW83a1dRRDA0U2E3T3FiWk1HM2NyQlNtcS9JS1hHN3dmTXJmaFhEYTlHdTF1ZlgxZ2wzVC9kVTlNcjRPOEJkK050VFI3U3dvS2ZuaUhFSGpVenJVUFYzSFVZQ1VZYno3T3UyYjdDRVRPRE5pbWJDVTIrNllQTENyTndYd1Y0ak1DL1dPVlN1VlNxKzYzbWlIcnJPa2dRRkJZZGtFeTNiaU84YlZ4QWs2QzlLY3VJb3lmWlIrajF4a1hYZTlsWnFYemRkL2pNOG9Zc0ZDakdVMCtURUE3dDNMODRsRnY4cWl1dUN5dUVuUzdnZzFwL3BNeHlwbXNXZWRrUDhXdzhGNnF4c3dqaXlZS29oL3FKakI5dm9uYU5ZKzAybnloREdnQ3J2Rk5WMlBJemZQTg\"\n    
},\n    \"relationships\": {\n      \"state-versions\": {\n        \"data\": []\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/plans\/plan-8F5JFydVYAmtTjET\",\n      \"json-output\": \"\/api\/v2\/plans\/plan-8F5JFydVYAmtTjET\/json-output\"\n    }\n  }\n}\n```\n\n_Using HCP Terraform agents_\n\n[HCP Terraform agents](\/terraform\/cloud-docs\/api-docs\/agents) allow HCP Terraform to communicate with isolated, private, or on-premises infrastructure. When a workspace is set to use the agent execution mode, the plan response will include additional details about the agent pool and agent used.\n\n```json\n{\n  \"data\": {\n    \"id\": \"plan-8F5JFydVYAmtTjET\",\n    \"type\": \"plans\",\n    \"attributes\": {\n      \"execution-details\": {\n        \"agent-id\": \"agent-S1Y7tcKxXPJDQAvq\",\n        \"agent-name\": \"agent_01\",\n        \"agent-pool-id\": \"apool-Zigq2VGreKq7nwph\",\n        \"agent-pool-name\": \"first-pool\",\n        \"mode\": \"agent\"\n      },\n      \"generated-configuration\": false,\n      \"has-changes\": true,\n      \"resource-additions\": 0,\n      \"resource-changes\": 1,\n      \"resource-destructions\": 0,\n      \"resource-imports\": 0,\n      \"status\": \"finished\",\n      \"status-timestamps\": {\n        \"queued-at\": \"2018-07-02T22:29:53+00:00\",\n        \"pending-at\": \"2018-07-02T22:29:53+00:00\",\n        \"started-at\": \"2018-07-02T22:29:54+00:00\",\n        \"finished-at\": \"2018-07-02T22:29:58+00:00\"\n      },\n      \"log-read-url\": 
\"https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djE6OFA1eEdlSFVHRSs4YUcwaW83a1dRRDA0U2E3T3FiWk1HM2NyQlNtcS9JS1hHN3dmTXJmaFhEYTlHdTF1ZlgxZ2wzVC9kVTlNcjRPOEJkK050VFI3U3dvS2ZuaUhFSGpVenJVUFYzSFVZQ1VZYno3T3UyYjdDRVRPRE5pbWJDVTIrNllQTENyTndYd1Y0ak1DL1dPVlN1VlNxKzYzbWlIcnJPa2dRRkJZZGtFeTNiaU84YlZ4QWs2QzlLY3VJb3lmWlIrajF4a1hYZTlsWnFYemRkL2pNOG9Zc0ZDakdVMCtURUE3dDNMODRsRnY4cWl1dUN5dUVuUzdnZzFwL3BNeHlwbXNXZWRrUDhXdzhGNnF4c3dqaXlZS29oL3FKakI5dm9uYU5ZKzAybnloREdnQ3J2Rk5WMlBJemZQTg\"\n    },\n    \"relationships\": {\n      \"state-versions\": {\n        \"data\": []\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/plans\/plan-8F5JFydVYAmtTjET\",\n      \"json-output\": \"\/api\/v2\/plans\/plan-8F5JFydVYAmtTjET\/json-output\"\n    }\n  }\n}\n```\n\n## Retrieve the JSON execution plan\n\n`GET \/plans\/:id\/json-output`\n\n`GET \/runs\/:id\/plan\/json-output`\n\nThese endpoints generate a temporary authenticated URL to the location of the [JSON formatted execution plan](\/terraform\/internals\/json-format#format-summary).  When successful, this endpoint responds with a temporary redirect that should be followed.  If using a client that can follow redirects, you can use this endpoint to save the `.json` file locally without needing to save the temporary URL.\n\nThis temporary URL provided by the redirect has a life of **1 minute**, and should not be relied upon beyond the initial request.  If you need repeat access, you should use this endpoint to generate a new URL each time.\n\n-> **Note:** This endpoint is available for plans using Terraform 0.12 and later. For Terraform Enterprise, this endpoint is available from v202005-1, and its stability was improved in v202007-1.\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). 
You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens) that has admin level access to the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Status  | Response                  | Reason                                                        |\n| ------- | ------------------------- | ------------------------------------------------------------- |\n| [204][] | No Content                | Plan JSON supported, but plan has not yet completed.          |\n| [307][] | Temporary Redirect        | Plan JSON found and temporary download URL generated          |\n| [422][] | [JSON API error object][] | Plan does not use a supported version of terraform (< 0.12.X) |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --location \\\n  https:\/\/app.terraform.io\/api\/v2\/plans\/plan-8F5JFydVYAmtTjET\/json-output \\\n  > json-output.json\n```","site":"terraform"}
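The plan states table above distinguishes in-progress states (`pending`, `queued`, `running`) from final ones, so a client polling the show-plan endpoint needs a terminal-state check before reading results. A minimal Python sketch under that assumption; `plan_is_final` is a hypothetical helper and the document below is a trimmed copy of the doc's sample response, with no real API call made:

```python
import json

# Final plan states per the states table; anything else means keep polling.
FINAL_STATES = {"errored", "canceled", "finished", "unreachable"}

def plan_is_final(plan_document: dict) -> bool:
    """True when data.attributes.status is a terminal state."""
    return plan_document["data"]["attributes"]["status"] in FINAL_STATES

# Trimmed version of the sample show-plan response.
sample = json.loads("""
{
  "data": {
    "id": "plan-8F5JFydVYAmtTjET",
    "type": "plans",
    "attributes": { "status": "finished", "has-changes": true }
  }
}
""")

print(plan_is_final(sample))  # → True: "finished" is a final state
```

A real client would GET `/plans/:id` on an interval, stop once `plan_is_final` returns true, and only then follow the `json-output` link.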
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Learn about API authentication response codes versioning formatting rate limiting and clients page title API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201","answers":"---\npage_title: API Docs - HCP Terraform\ndescription: >-\n  Learn about API authentication, response codes, versioning, formatting, rate limiting, and clients.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# HCP Terraform API Documentation\n\nHCP Terraform provides an API for a subset of its features. 
If you have any questions or want to request new API features, please email <support@hashicorp.com>.\n\n-> **Note:** Before planning an API integration, consider whether [the `tfe` Terraform provider](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs) meets your needs. It can't create or approve runs in response to arbitrary events, but it's a useful tool for managing your organizations, teams, and workspaces as code.\n\nHashiCorp provides a [stability policy](\/terraform\/cloud-docs\/api-docs\/stability-policy) for the HCP Terraform API, ensuring backwards compatibility for stable endpoints. The [changelog](\/terraform\/cloud-docs\/api-docs\/changelog) tracks changes to the API for HCP Terraform and Terraform Enterprise.\n\n## Authentication\n\nAll requests must be authenticated with a bearer token. Use the HTTP header `Authorization` with the value `Bearer <token>`. If the token is absent or invalid, HCP Terraform responds with [HTTP status 401][401] and a [JSON API error object][]. The 401 status code is reserved for problems with the authentication token; forbidden requests with a valid token result in a 404.\n\nYou can use the following types of tokens to authenticate:\n\n- [User tokens](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) \u2014\u00a0each HCP Terraform user can have any number of API tokens, which can make requests on their behalf.\n- [Team tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens) \u2014\u00a0each team can have one API token at a time. This is intended for performing plans and applies via a CI\/CD pipeline.\n- [Organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens) \u2014\u00a0each organization can have one API token at a time. This is intended for automating the management of teams, team membership, and workspaces. 
The organization token cannot perform plans and applies.\n<!-- BEGIN: TFC:only -->\n- [Audit trails token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#audit-trails-tokens) -\u00a0each organization can have a single token that can read that organization's audit trails. Use this token type to authenticate integrations pulling audit trail data, for example, using the [HCP Terraform for Splunk](\/terraform\/cloud-docs\/integrations\/splunk) app.\n<!-- END: TFC:only -->\n\n### Blob Storage Authentication\n\nHCP Terraform relies on a HashiCorp-developed blob storage service for storing statefiles and multiple other pieces of customer data, all of which are documented on our [data security page](\/terraform\/cloud-docs\/architectural-details\/data-security).\n\nUnlike the HCP Terraform API, this service does not require that a bearer token be submitted with each request. Instead, each URL includes a securely generated secret and is only valid for 25 hours.\n\nFor example, the [state versions api](\/terraform\/cloud-docs\/api-docs\/state-versions) returns a field named `hosted-state-download`, which is a URL of this form:\n`https:\/\/archivist.terraform.io\/v1\/object\/<secret value>`\n\nThis is a broadly accepted pattern for secure access. It is important to treat these URLs themselves as secrets. They should not be logged, nor shared with untrusted parties.\n\n## Feature Entitlements\n\nHCP Terraform is available at multiple pricing tiers (including free), which offer different feature sets.\n\nEach organization has a set of _entitlements_ that corresponds to its pricing tier. 
These entitlements determine which HCP Terraform features the organization can use.\n\nIf an organization doesn't have the necessary entitlement to use a given feature, HCP Terraform returns a 404 error for API requests to any endpoints devoted to that feature.\n\nThe [show entitlement set](\/terraform\/cloud-docs\/api-docs\/organizations#show-the-entitlement-set) endpoint can return information about an organization's current entitlements, which is useful if your client needs to change its interface when a given feature isn't available.\n\nThe following entitlements are available:\n\n- `agents` \u2014 Allows isolated, private or on-premises infrastructure to communicate with an organization in HCP Terraform. Affects the [agent pools][], [agents][], and [agent tokens][] endpoints.\n<!-- BEGIN: TFC:only name:tfc-audit-log -->\n- `audit-logging` \u2014 Allows an organization to access [audit trails][].\n<!-- END: TFC:only name:tfc-audit-log -->\n- `configuration-designer` \u2014 Allows an organization to use the [Configuration Designer][].\n- `cost-estimation` \u2014 Allows an organization to access [cost estimation][].\n- `global-run-tasks` \u2014 Allows an organization to apply [run tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks) to every workspace. Affects the [run tasks][] endpoints. This feature is currently in beta.\n- `module-tests-generation` - Allows an organization to generate tests for private registry modules. This feature is currently in beta.\n- `operations` \u2014\u00a0Allows an organization to perform runs within HCP Terraform. Affects the [runs][], [plans][], and [applies][] endpoints.\n- `policy-enforcement` \u2014\u00a0Allows an organization to use [Sentinel][]. Affects the [policies][], [policy sets][], and [policy checks][] endpoints.\n- `private-module-registry` \u2014\u00a0Allows an organization to publish and use modules with the [private module registry][]. 
Affects the [registry modules][] endpoints.\n- `private-policy-agents` \u2014 Allows an organization to ensure that HTTP-enabled [Sentinel][] and OPA [policies][] can communicate with isolated, private, or on-premises infrastructure.\n- `run-tasks` \u2014\u00a0Allows an organization to use [run tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks). Affects the [run tasks][] endpoints.\n- `self-serve-billing` \u2014\u00a0Allows an organization to pay via credit card using the in-app billing UI.\n- `sentinel` \u2014 **DEPRECATED.** Use `policy-enforcement` instead.\n- `state-storage` \u2014 Allows an organization to store state versions in its workspaces, which enables local Terraform runs with HCP Terraform. Affects the [state versions][] endpoints.\n- `sso` \u2014 Allows an organization to manage and authenticate users with [single sign on][].\n- `teams` \u2014\u00a0Allows an organization to manage access to its workspaces with [teams](\/terraform\/cloud-docs\/users-teams-organizations\/teams). Without this entitlement, an organization only has an owners team. Affects the [teams][], [team members][], [team access][], and [team tokens][] endpoints.\n- `user-limit` \u2014 An integer value representing the maximum number of users allowed for the organization. If blank, there is no limit.\n- `vcs-integrations` \u2014\u00a0Allows an organization to [connect with a VCS provider][vcs integrations] and link VCS repositories to workspaces. 
Affects the [OAuth Clients][o-clients] and [OAuth Tokens][o-tokens] endpoints, and determines whether the `data.attributes.vcs-repo` property can be set for [workspaces][].\n\n[agents]: \/terraform\/cloud-docs\/api-docs\/agents\n\n[agent pools]: \/terraform\/cloud-docs\/api-docs\/agents\n\n[agent tokens]: \/terraform\/cloud-docs\/api-docs\/agent-tokens\n\n[applies]: \/terraform\/cloud-docs\/api-docs\/applies\n\n<!-- BEGIN: TFC:only name:tfc-audit-log -->\n[audit trails]: \/terraform\/cloud-docs\/api-docs\/audit-trails\n<!-- END: TFC:only name:tfc-audit-log -->\n\n[Configuration Designer]: \/terraform\/cloud-docs\/registry\/design\n\n[cost estimation]: \/terraform\/cloud-docs\/cost-estimation\n\n[o-clients]: \/terraform\/cloud-docs\/api-docs\/oauth-clients\n\n[o-tokens]: \/terraform\/cloud-docs\/api-docs\/oauth-tokens\n\n[plans]: \/terraform\/cloud-docs\/api-docs\/plans\n\n[policies]: \/terraform\/cloud-docs\/api-docs\/policies\n\n[policy checks]: \/terraform\/cloud-docs\/api-docs\/policy-checks\n\n[policy sets]: \/terraform\/cloud-docs\/api-docs\/policy-sets\n\n[private module registry]: \/terraform\/cloud-docs\/registry\n\n[registry modules]: \/terraform\/cloud-docs\/api-docs\/private-registry\/modules\n\n[registry providers]: \/terraform\/cloud-docs\/api-docs\/private-registry\/providers\n\n[runs]: \/terraform\/cloud-docs\/api-docs\/run\n\n[run tasks]: \/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks\n\n[Sentinel]: \/terraform\/cloud-docs\/policy-enforcement\/sentinel\n\n[single sign on]: \/terraform\/cloud-docs\/users-teams-organizations\/single-sign-on\n\n[state versions]: \/terraform\/cloud-docs\/api-docs\/state-versions\n\n[teams]: \/terraform\/cloud-docs\/api-docs\/teams\n\n[team access]: \/terraform\/cloud-docs\/api-docs\/team-access\n\n[team members]: \/terraform\/cloud-docs\/api-docs\/team-members\n\n[team tokens]: \/terraform\/cloud-docs\/api-docs\/team-tokens\n\n[vcs integrations]: \/terraform\/cloud-docs\/vcs\n\n[workspaces]: 
\/terraform\/cloud-docs\/api-docs\/workspaces\n\n## Response Codes\n\nThis API returns standard HTTP response codes.\n\nWe return 404 Not Found codes for resources that a user doesn't have access to, as well as for resources that don't exist. This is to avoid telling a potential attacker that a given resource exists.\n\n## Versioning\n\nThe API documented in these pages is the second version of HCP Terraform's API, and resides under the `\/v2` prefix.\n\nFuture APIs will increment this version, leaving the `\/v1` API intact, though in the future we might deprecate certain features. In that case, we'll provide ample notice to migrate to the new API.\n\n## Paths\n\nAll V2 API endpoints use `\/api\/v2` as a prefix unless otherwise specified.\n\nFor example, if the API endpoint documentation defines the path `\/runs` then the full path is `\/api\/v2\/runs`.\n\n## JSON API Formatting\n\nThe HCP Terraform endpoints use the [JSON API specification](https:\/\/jsonapi.org\/), which specifies key aspects of the API. Most notably:\n\n- [HTTP error codes](https:\/\/jsonapi.org\/examples\/#error-objects-error-codes)\n- [Error objects](https:\/\/jsonapi.org\/examples\/#error-objects-basics)\n- [Document structure][document]\n- [HTTP request\/response headers](https:\/\/jsonapi.org\/format\/#content-negotiation)\n\n[document]: https:\/\/jsonapi.org\/format\/#document-structure\n\n### JSON API Documents\n\nSince our API endpoints use the JSON API spec, most of them return [JSON API documents][document].\n\nEndpoints that use the POST method also require a JSON API document as the request payload. 
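Such a document can be submitted with any standard HTTP client. As a minimal offline sketch using Python's standard library (the token, variable attributes, and `/vars` path here are illustrative placeholders, not a definitive call), a request might be constructed like this:

```python
import json
import urllib.request

# Illustrative placeholders -- substitute a real API token and payload.
TOKEN = "example-token"
payload = {
    "data": {
        "type": "vars",
        "attributes": {
            "key": "some_key",
            "value": "some_value",
            "category": "terraform",
        },
    }
}

# Build (but do not send) a request carrying the JSON API document and the
# headers the API expects: a bearer token and the JSON API content type.
req = urllib.request.Request(
    "https://app.terraform.io/api/v2/vars",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/vnd.api+json",
    },
    method="POST",
)
print(req.get_method(), req.full_url)
```

Note the `application/vnd.api+json` content type; the request above is only constructed, never sent.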
A request object usually looks something like this:\n\n```json\n{\n  \"data\": {\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"some_key\",\n      \"value\":\"some_value\",\n      \"category\":\"terraform\",\n      \"hcl\":false,\n      \"sensitive\":false\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\":\"ws-4j8p6jX1w33MiDC7\",\n          \"type\":\"workspaces\"\n        }\n      }\n    }\n  }\n}\n```\n\nThese objects always include a top-level `data` property, which:\n\n- Must have a `type` property to indicate what type of API object you're interacting with.\n- Often has an `attributes` property to specify attributes of the object you're creating or modifying.\n- Sometimes has a `relationships` property to specify other objects that are linked to what you're working with.\n\nIn the documentation for each API method, we use dot notation to explain the structure of nested objects in the request. For example, the properties of the request object above are listed as follows:\n\n| Key path                                 | Type   | Default | Description                                                                                                     |\n| ---------------------------------------- | ------ | ------- | --------------------------------------------------------------------------------------------------------------- |\n| `data.type`                              | string |         | Must be `\"vars\"`.                                                                                               |\n| `data.attributes.key`                    | string |         | The name of the variable.                                                                                       |\n| `data.attributes.value`                  | string |         | The value of the variable.                                                                                      
|\n| `data.attributes.category`               | string |         | Whether this is a Terraform or environment variable. Valid values are `\"terraform\"` or `\"env\"`.                 |\n| `data.attributes.hcl`                    | bool   | `false` | Whether to evaluate the value of the variable as a string of HCL code. Has no effect for environment variables. |\n| `data.attributes.sensitive`              | bool   | `false` | Whether the value is sensitive. If true then the variable is written once and not visible thereafter.           |\n| `data.relationships.workspace.data.type` | string |         | Must be `\"workspaces\"`.                                                                                         |\n| `data.relationships.workspace.data.id`   | string |         | The ID of the workspace that owns the variable.                                                                 |\n\nWe also always include a sample payload object, to show the document structure more visually.\n\n### Query Parameters\n\nAlthough most of our API endpoints use the POST method and receive their parameters as a JSON object in the request payload, some of them use the GET method. These GET endpoints sometimes require URL query parameters, in the standard `...path?key1=value1&key2=value2` format.\n\nSince these parameters were originally designed as part of a JSON object, they sometimes have characters that must be [percent-encoded](https:\/\/en.wikipedia.org\/wiki\/Percent-encoding) in a query parameter. For example, `[` becomes `%5B` and `]` becomes `%5D`.\n\nFor more about URI structure and query strings, see [the specification (RFC 3986)](https:\/\/tools.ietf.org\/html\/rfc3986) or [the Wikipedia page on URIs](https:\/\/en.wikipedia.org\/wiki\/Uniform_Resource_Identifier).\n\n### Pagination\n\nMost of the endpoints that return lists of objects support pagination. 
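As a quick sketch of the percent-encoding described above, applied to the bracketed pagination parameters (the organization path and parameter values are illustrative), Python's standard library handles the encoding automatically:

```python
from urllib.parse import urlencode

# urlencode percent-encodes the brackets: [ becomes %5B, ] becomes %5D.
params = {"page[number]": 2, "page[size]": 50}
query = urlencode(params)
print(query)  # page%5Bnumber%5D=2&page%5Bsize%5D=50

# Illustrative full URL for a paginated listing request:
url = "https://app.terraform.io/api/v2/organizations/hashicorp/workspaces?" + query
print(url)
```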
A client may pass the following query parameters to control pagination on supported endpoints:\n\n| Parameter      | Description                                                                                                        |\n| -------------- | ------------------------------------------------------------------------------------------------------------------ |\n| `page[number]` | **Optional.** The page number to return. If omitted, the endpoint returns the first page.                          |\n| `page[size]`   | **Optional.** The number of items per page. If omitted, the endpoint returns 20 items per page. The maximum page size is 100. |\n\nAdditional data is returned in the `links` and `meta` top-level attributes of the response.\n\n```json\n{\n  \"data\": [...],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/workspaces?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/workspaces?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/workspaces?page%5Bnumber%5D=2&page%5Bsize%5D=20\",\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/workspaces?page%5Bnumber%5D=2&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"prev-page\": null,\n      \"next-page\": 2,\n      \"total-pages\": 2,\n      \"total-count\": 21\n    }\n  }\n}\n```\n\n### Inclusion of Related Resources\n\nSome of the API's GET endpoints can return additional information about nested resources by adding an `include` query parameter, whose value is a comma-separated list of resource types.\n\nThe related resource options are listed in each endpoint's documentation where available.\n\nThe related resources will appear in an `included` section of the response.\n\nExample:\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: 
application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/team-n8UQ6wfhyym25sMe?include=users\n```\n\n```json\n{\n  \"data\": {\n    \"id\": \"team-n8UQ6wfhyym25sMe\",\n    \"type\": \"teams\",\n    \"attributes\": {\n      \"name\": \"owners\",\n      \"users-count\": 1\n      ...\n    },\n    \"relationships\": {\n      \"users\": {\n        \"data\": [\n          {\n              \"id\": \"user-62goNpx1ThQf689e\",\n              \"type\": \"users\"\n          }\n        ]\n      } ...\n    }\n    ...\n  },\n  \"included\": [\n    {\n      \"id\": \"user-62goNpx1ThQf689e\",\n      \"type\": \"users\",\n      \"attributes\": {\n        \"username\": \"hashibot\"\n        ...\n      } ...\n    }\n  ]\n}\n```\n\n## Rate Limiting\n\nYou can make up to 30 requests per second to the API as an authenticated or unauthenticated request. If you reach the rate limit then your access will be throttled and an error response will be returned. Some endpoints have lower rate limits to prevent abuse, including endpoints that poll Terraform for a list of runs and endpoints related to user authentication. The adjusted limits are unnoticeable under normal use. If you receive a rate-limited response, the limit is reflected in the `x-ratelimit-limit` header once triggered.\n\nAuthenticated requests are allocated to the user associated with the authentication token. This means that a user with multiple tokens will still be limited to 30 requests per second, additional tokens will not allow you to increase the requests per second permitted.\n\nUnauthenticated requests are associated with the requesting IP address.\n\n| Status  | Response                  | Reason                       |\n| ------- | ------------------------- | ---------------------------- |\n| [429][] | [JSON API error object][] | Rate limit has been reached. 
|\n\n```json\n{\n  \"errors\": [\n    {\n      \"detail\": \"You have exceeded the API's rate limit.\",\n      \"status\": 429,\n      \"title\": \"Too many requests\"\n    }\n  ]\n}\n```\n\n## Client libraries and tools\n\nHashiCorp maintains [go-tfe](https:\/\/github.com\/hashicorp\/go-tfe), a Go client for HCP Terraform's API.\n\nAdditionally, the community of HCP Terraform users and vendors have built client libraries in other languages. These client libraries and tools are not tested nor officially maintained by HashiCorp, but are listed below in order to help users find them easily.\n\nIf you have built a client library and would like to add it to this community list, please [contribute](https:\/\/github.com\/hashicorp\/terraform-website#contributions-welcome) to [this page](https:\/\/github.com\/hashicorp\/terraform-docs-common\/blob\/main\/website\/docs\/cloud-docs\/api-docs\/index.mdx#client-libraries-and-tools).\n\n- [tfh](https:\/\/github.com\/hashicorp-community\/tf-helper): UNIX shell console app\n- [tf_api_gateway](https:\/\/github.com\/PlethoraOfHate\/tf_api_gateway): Python API library and console app\n- [terrasnek](https:\/\/github.com\/dahlke\/terrasnek): Python API library\n- [terraform-enterprise-client](https:\/\/github.com\/skierkowski\/terraform-enterprise-client): Ruby API library and console app\n- [pyterprise](https:\/\/github.com\/JFryy\/terraform-enterprise-api-python-client): A simple Python API client library.\n- [Tfe.NetClient](https:\/\/github.com\/everis-technology\/Tfe.NetClient): .NET Client Library","site":"terraform","answers_cleaned":"    page title  API Docs   HCP Terraform description       Learn about API authentication  response codes  versioning  formatting  rate limiting  and clients        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer 
mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    HCP Terraform API Documentation  HCP Terraform provides an API for a subset of its features  If you have any questions or want to request new API features  please email  support hashicorp com         Note    Before planning an API integration  consider whether  the  tfe  Terraform provider  https   registry terraform io providers hashicorp tfe latest docs  meets your needs  It can t create or approve runs in response to arbitrary events  but it s a useful tool for managing your organizations  teams  and workspaces as code   HashiCorp provides a  stability policy   terraform cloud docs api docs stability policy  for the HCP Terraform API  ensuring backwards compatibility for stable endpoints  The  changelog   terraform cloud docs api docs changelog  tracks changes to the API for HCP Terraform and Terraform Enterprise      Authentication  All requests must be authenticated with a bearer token  Use the HTTP header  Authorization  with the value  Bearer  token    If the token is absent or invalid  HCP Terraform responds with  HTTP status 401  401  and a  JSON API error object     The 401 status code is 
reserved for problems with the authentication token  forbidden requests with a valid token result in a 404   You can use the following types of tokens to authenticate      User tokens   terraform cloud docs users teams organizations users api tokens    each HCP Terraform user can have any number of API tokens  which can make requests on their behalf     Team tokens   terraform cloud docs users teams organizations api tokens team api tokens    each team can have one API token at a time  This is intended for performing plans and applies via a CI CD pipeline     Organization tokens   terraform cloud docs users teams organizations api tokens organization api tokens    each organization can have one API token at a time  This is intended for automating the management of teams  team membership  and workspaces  The organization token cannot perform plans and applies       BEGIN  TFC only        Audit trails token   terraform cloud docs users teams organizations api tokens audit trails tokens    each organization can have a single token that can read that organization s audit trails  Use this token type to authenticate integrations pulling audit trail data  for example  using the  HCP Terraform for Splunk   terraform cloud docs integrations splunk  app       END  TFC only          Blob Storage Authentication  HCP Terraform relies on a HashiCorp developed blob storage service for storing statefiles and multiple other pieces of customer data  all of which are documented on our  data security page   terraform cloud docs architectural details data security    Unlike the HCP Terraform API  this service does not require that a bearer token be submitted with each request  Instead  each URL includes a securely generated secret and is only valid for 25 hours   For example  the  state versions api   terraform cloud docs api docs state versions  returns a field named  hosted state download   which is a URL of this form   https   archivist terraform io v1 object  secret value    This 
is a broadly accepted pattern for secure access  It is important to treat these URLs themselves as secrets  They should not be logged  nor shared with untrusted parties      Feature Entitlements  HCP Terraform is available at multiple pricing tiers  including free   which offer different feature sets   Each organization has a set of  entitlements  that corresponds to its pricing tier  These entitlements determine which HCP Terraform features the organization can use   If an organization doesn t have the necessary entitlement to use a given feature  HCP Terraform returns a 404 error for API requests to any endpoints devoted to that feature   The  show entitlement set   terraform cloud docs api docs organizations show the entitlement set  endpoint can return information about an organization s current entitlements  which is useful if your client needs to change its interface when a given feature isn t available   The following entitlements are available      agents    Allows isolated  private or on premises infrastructure to communicate with an organization in HCP Terraform  Affects the  agent pools      agents     and  agent tokens    endpoints       BEGIN  TFC only name tfc audit log        audit logging    Allows an organization to access  audit trails          END  TFC only name tfc audit log        configuration designer    Allows an organization to use the  Configuration Designer        cost estimation    Allows an organization to access  cost estimation        global run tasks    Allows an organization to apply  run tasks   terraform cloud docs workspaces settings run tasks  to every workspace  Affects the  run tasks    endpoints  This feature is currently in beta     module tests generation    Allows an organization to generate tests for private registry modules  This feature is currently in beta     operations    Allows an organization to perform runs within HCP Terraform  Affects the  runs      plans     and  applies    endpoints     policy enforcement    
Allows an organization to use  Sentinel     Affects the  policies      policy sets     and  policy checks    endpoints     private module registry    Allows an organization to publish and use modules with the  private module registry     Affects the  registry modules    endpoints     private policy agents    Allows an organization to ensure that HTTP enabled  Sentinel    and OPA  policies    can communicate with isolated  private  or on premises infrastructure     run tasks    Allows an organization to use  run tasks   terraform cloud docs workspaces settings run tasks   Affects the  run tasks    endpoints     self serve billing    Allows an organization to pay via credit card using the in app billing UI     sentinel       DEPRECATED   Use  policy enforcement  instead     state storage    Allows an organization to store state versions in its workspaces  which enables local Terraform runs with HCP Terraform  Affects the  state versions    endpoints     sso    Allows an organization to manage and authenticate users with single sign on     teams    Allows an organization to manage access to its workspaces with  teams   terraform cloud docs users teams organizations teams   Without this entitlement  an organization only has an owners team  Affects the  teams      team members      team access     and  team tokens    endpoints     user limit    An integer value representing the maximum number of users allowed for the organization  If blank  there is no limit     vcs integrations    Allows an organization to  connect with a VCS provider  vcs integrations  and link VCS repositories to workspaces  Affects the  OAuth Clients  o clients   and  OAuth Tokens  o tokens  endpoints  and determines whether the  data attributes vcs repo  property can be set for  workspaces       agents    terraform cloud docs api docs agents   agent pools    terraform cloud docs api docs agents   agent tokens    terraform cloud docs api docs agent tokens   applies    terraform cloud docs api docs 
applies       BEGIN  TFC only name tfc audit log      audit trails    terraform cloud docs api docs audit trails      END  TFC only name tfc audit log       Configuration Designer    terraform cloud docs registry design   cost estimation    terraform cloud docs cost estimation   o clients    terraform cloud docs api docs oauth clients   o tokens    terraform cloud docs api docs oauth tokens   plans    terraform cloud docs api docs plans   policies    terraform cloud docs api docs policies   policy checks    terraform cloud docs api docs policy checks   policy sets    terraform cloud docs api docs policy sets   private module registry    terraform cloud docs registry   registry modules    terraform cloud docs api docs private registry modules   registry providers    terraform cloud docs api docs private registry providers   runs    terraform cloud docs api docs run   run tasks    terraform cloud docs api docs run tasks run tasks   Sentinel    terraform cloud docs policy enforcement sentinel   single sign on    terraform cloud docs users teams organizations single sign on   state versions    terraform cloud docs api docs state versions   teams    terraform cloud docs api docs teams   team access    terraform cloud docs api docs team access   team members    terraform cloud docs api docs team members   team tokens    terraform cloud docs api docs team tokens   vcs integrations    terraform cloud docs vcs   workspaces    terraform cloud docs api docs workspaces     Response Codes  This API returns standard HTTP response codes   We return 404 Not Found codes for resources that a user doesn t have access to  as well as for resources that don t exist  This is to avoid telling a potential attacker that a given resource exists      Versioning  The API documented in these pages is the second version of HCP Terraform s API  and resides under the   v2  prefix   Future APIs will increment this version  leaving the   v1  API intact  though in the future we might deprecate 
certain features  In that case  we ll provide ample notice to migrate to the new API      Paths  All V2 API endpoints use   api v2  as a prefix unless otherwise specified   For example  if the API endpoint documentation defines the path   runs  then the full path is   api v2 runs       JSON API Formatting  The HCP Terraform endpoints use the  JSON API specification  https   jsonapi org    which specifies key aspects of the API  Most notably      HTTP error codes  https   jsonapi org examples  error objects error codes     Error objects  https   jsonapi org examples  error objects basics     Document structure  document     HTTP request response headers  https   jsonapi org format  content negotiation    document   https   jsonapi org format  document structure      JSON API Documents  Since our API endpoints use the JSON API spec  most of them return  JSON API documents  document    Endpoints that use the POST method also require a JSON API document as the request payload  A request object usually looks something like this      json      data          type   vars        attributes            key   some key          value   some value          category   terraform          hcl  false         sensitive  false             relationships            workspace              data                id   ws 4j8p6jX1w33MiDC7              type   workspaces                                     These objects always include a top level  data  property  which     Must have a  type  property to indicate what type of API object you re interacting with    Often has an  attributes  property to specify attributes of the object you re creating or modifying    Sometimes has a  relationships  property to specify other objects that are linked to what you re working with   In the documentation for each API method  we use dot notation to explain the structure of nested objects in the request  For example  the properties of the request object above are listed as follows     Key path                
                   Type     Default   Description                                                                                                                                                                                                                                                                                            data type                                 string             Must be   vars                                                                                                       data attributes key                       string             The name of the variable                                                                                             data attributes value                     string             The value of the variable                                                                                            data attributes category                  string             Whether this is a Terraform or environment variable  Valid values are   terraform   or   env                         data attributes hcl                       bool      false    Whether to evaluate the value of the variable as a string of HCL code  Has no effect for environment variables       data attributes sensitive                 bool      false    Whether the value is sensitive  If true then the variable is written once and not visible thereafter                 data relationships workspace data type    string             Must be   workspaces                                                                                                 data relationships workspace data id      string             The ID of the workspace that owns the variable                                                                     We also always include a sample payload object  to show the document structure more visually       Query Parameters  Although most of our API endpoints use the POST method and receive their parameters as a JSON object in the request payload  
some of them use the GET method. These GET endpoints sometimes require URL query parameters, in the standard `path?key1=value1&key2=value2` format. Since these parameters were originally designed as part of a JSON object, they sometimes have characters that must be [percent-encoded](https:\/\/en.wikipedia.org\/wiki\/Percent-encoding) in a query parameter. For example, `[` becomes `%5B` and `]` becomes `%5D`.\n\nFor more about URI structure and query strings, see [the specification (RFC 3986)](https:\/\/tools.ietf.org\/html\/rfc3986) or [the Wikipedia page on URIs](https:\/\/en.wikipedia.org\/wiki\/Uniform_Resource_Identifier).\n\n## Pagination\n\nMost of the endpoints that return lists of objects support pagination. A client may pass the following query parameters to control pagination on supported endpoints:\n\n| Parameter      | Description                                                                                         |\n| -------------- | --------------------------------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.                                  |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 items per page. The maximum page size is 100. |\n\nAdditional data is returned in the `links` and `meta` top-level attributes of the response.\n\n```json\n{\n  \"data\": [...],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/workspaces?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/workspaces?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/workspaces?page%5Bnumber%5D=2&page%5Bsize%5D=20\",\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/workspaces?page%5Bnumber%5D=2&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"prev-page\": null,\n      \"next-page\": 2,\n      \"total-pages\": 2,\n      \"total-count\": 21\n    }\n  }\n}\n```\n\n## Inclusion of Related Resources\n\nSome of the API's GET endpoints can return additional information about nested resources by adding an `include` query parameter, whose value is a comma-separated list of resource types.\n\nThe related resource options are listed in each endpoint's documentation where available.\n\nThe related resources will appear in an `included` section of the response.\n\nExample:\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  \"https:\/\/app.terraform.io\/api\/v2\/teams\/team-n8UQ6wfhyym25sMe?include=users\"\n```\n\n```json\n{\n  \"data\": {\n    \"id\": \"team-n8UQ6wfhyym25sMe\",\n    \"type\": \"teams\",\n    \"attributes\": {\n      \"name\": \"owners\",\n      \"users-count\": 1\n    },\n    \"relationships\": {\n      \"users\": {\n        \"data\": [\n          {\n            \"id\": \"user-62goNpx1ThQf689e\",\n            \"type\": \"users\"\n          }\n        ]\n      }\n    }\n  },\n  \"included\": [\n    {\n      \"id\": \"user-62goNpx1ThQf689e\",\n      \"type\": \"users\",\n      \"attributes\": {\n        \"username\": \"hashibot\"\n      }\n    }\n  ]\n}\n```\n\n## Rate Limiting\n\nYou can make up to 30 requests per second to the API as an authenticated or unauthenticated request. If you reach the rate limit then your access will be throttled and an error response will be returned. Some endpoints have lower rate limits to prevent abuse, including endpoints that poll Terraform for a list of runs and endpoints related to user authentication. The adjusted limits are unnoticeable under normal use. If you receive a rate-limited response, the limit is reflected in the `x-ratelimit-limit` header once triggered.\n\nAuthenticated requests are allocated to the user associated with the authentication token. This means that a user with multiple tokens will still be limited to 30 requests per second; additional tokens will not allow you to increase the requests per second permitted. Unauthenticated requests are associated with the requesting IP address.\n\n| Status | Response              | Reason                       |\n| ------ | --------------------- | ---------------------------- |\n| 429    | JSON API error object | Rate limit has been reached. |\n\n```json\n{\n  \"errors\": [\n    {\n      \"detail\": \"You have exceeded the API's rate limit.\",\n      \"status\": \"429\",\n      \"title\": \"Too many requests\"\n    }\n  ]\n}\n```\n\n## Client libraries and tools\n\nHashiCorp maintains [go-tfe](https:\/\/github.com\/hashicorp\/go-tfe), a Go client for HCP Terraform's API.\n\nAdditionally, the community of HCP Terraform users and vendors have built client libraries in other languages. These client libraries and tools are not tested nor officially maintained by HashiCorp, but are listed below in order to help users find them easily.\n\nIf you have built a client library and would like to add it to this community list, please [contribute](https:\/\/github.com\/hashicorp\/terraform-website#contributions-welcome) to [this page](https:\/\/github.com\/hashicorp\/terraform-docs-common\/blob\/main\/website\/docs\/cloud-docs\/api-docs\/index.mdx#client-libraries-and-tools).\n\n- [tfh](https:\/\/github.com\/hashicorp-community\/tf-helper): UNIX shell console app\n- [tf_api_gateway](https:\/\/github.com\/PlethoraOfHate\/tf_api_gateway): Python API library and console app\n- [terrasnek](https:\/\/github.com\/dahlke\/terrasnek): Python API library\n- [terraform-enterprise-client](https:\/\/github.com\/skierkowski\/terraform-enterprise-client): Ruby API library and console app\n- [pyterprise](https:\/\/github.com\/JFryy\/terraform-enterprise-api-python-client): A simple Python API client library\n- [Tfe.NetClient](https:\/\/github.com\/everis-technology\/Tfe.NetClient): .NET Client Library"}
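The pagination record above requires `page[number]` and `page[size]` query parameters with percent-encoded brackets (`[` as `%5B`, `]` as `%5D`). A minimal Python sketch of building such a URL; the helper name `paginated_url` is ours, and the base URL comes from the sample response:

```python
from urllib.parse import urlencode

def paginated_url(base, page_number=1, page_size=20):
    """Build a paginated HCP Terraform API URL.

    urlencode percent-encodes the brackets in page[number]/page[size]
    ([ -> %5B, ] -> %5D), matching the encoding shown in the `links`
    attribute of paginated responses.
    """
    query = urlencode({"page[number]": page_number, "page[size]": page_size})
    return f"{base}?{query}"

# Request page 2 of the sample workspaces listing, 20 items per page.
url = paginated_url(
    "https://app.terraform.io/api/v2/organizations/hashicorp/workspaces",
    page_number=2,
)
print(url)  # the query string ends in page%5Bnumber%5D=2&page%5Bsize%5D=20
```

The encoded URL can then be passed to any HTTP client (curl, `requests`, go-tfe's underlying transport); clients that encode query strings themselves make the explicit encoding unnecessary.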
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the policy checks endpoint to manage the Sentinel checks performed on a Terraform run List show and override policy checks using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201 page title Policy Checks API Docs HCP Terraform","answers":"---\npage_title: Policy Checks - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/policy-checks` endpoint to manage the Sentinel checks performed on a Terraform run. List, show and override policy checks using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Policy Checks API\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only 
name:pnp-callout -->\n\nPolicy checks are the default workflow for Sentinel. Policy checks use the latest version of the Sentinel runtime and have access to cost estimation data.\nThis set of APIs provides endpoints to get, list, and override policy checks.\n\n~> **Warning:** Policy checks are deprecated and will be permanently removed in August 2025. We recommend that you start using policy evaluations to avoid disruptions.\n\n## List Policy Checks\n\nThis endpoint lists the policy checks in a run.\n\n-> **Note**: The `sentinel` hash in the `result` attribute structure represents low-level Sentinel details generated by the policy engine. The keys or structure may change over time. Use the data in this hash at your own risk.\n\n`GET \/runs\/:run_id\/policy-checks`\n\n| Parameter | Description                                  |\n| --------- | -------------------------------------------- |\n| `run_id`  | The ID of the run to list policy checks for. |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs. If neither pagination query parameter is provided, the endpoint will not be paginated and will return all results.\n\n| Parameter      | Description                                                                   |\n| -------------- | ----------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.            |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 policy checks per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-CZcmD7eagjhyXavN\/policy-checks\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"polchk-9VYRc9bpfJEsnwum\",\n      \"type\": \"policy-checks\",\n      \"attributes\": {\n        \"result\": {\n          \"result\": false,\n          \"passed\": 0,\n          \"total-failed\": 1,\n          \"hard-failed\": 0,\n          \"soft-failed\": 1,\n          \"advisory-failed\": 0,\n          \"duration-ms\": 0,\n          \"sentinel\": {...}\n        },\n        \"scope\": \"organization\",\n        \"status\": \"soft_failed\",\n        \"status-timestamps\": {\n          \"queued-at\": \"2017-11-29T20:02:17+00:00\",\n          \"soft-failed-at\": \"2017-11-29T20:02:20+00:00\"\n        },\n        \"actions\": {\n          \"is-overridable\": true\n        },\n        \"permissions\": {\n          \"can-override\": false\n        }\n      },\n      \"relationships\": {\n        \"run\": {\n          \"data\": {\n            \"id\": \"run-veDoQbv6xh6TbnJD\",\n            \"type\": \"runs\"\n          }\n        }\n      },\n      \"links\": {\n        \"output\": \"\/api\/v2\/policy-checks\/polchk-9VYRc9bpfJEsnwum\/output\"\n      }\n    }\n  ]\n}\n```\n\n## Show Policy Check\n\nThis endpoint gets information about a specific policy check ID. Policy check IDs can appear in audit logs.\n\n-> **Note**: The `sentinel` hash in the `result` attribute structure represents low-level Sentinel details generated by the policy engine. The keys or structure may change over time. Use the data in this hash at your own risk.\n\n`GET \/policy-checks\/:id`\n\n| Parameter | Description                         |\n| --------- | ----------------------------------- |\n| `id`      | The ID of the policy check to show. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-checks\/polchk-9VYRc9bpfJEsnwum\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"polchk-9VYRc9bpfJEsnwum\",\n    \"type\": \"policy-checks\",\n    \"attributes\": {\n      \"result\": {\n        \"result\": false,\n        \"passed\": 0,\n        \"total-failed\": 1,\n        \"hard-failed\": 0,\n        \"soft-failed\": 1,\n        \"advisory-failed\": 0,\n        \"duration-ms\": 0,\n        \"sentinel\": {...}\n      },\n      \"scope\": \"organization\",\n      \"status\": \"soft_failed\",\n      \"status-timestamps\": {\n        \"queued-at\": \"2017-11-29T20:02:17+00:00\",\n        \"soft-failed-at\": \"2017-11-29T20:02:20+00:00\"\n      },\n      \"actions\": {\n        \"is-overridable\": true\n      },\n      \"permissions\": {\n        \"can-override\": false\n      }\n    },\n    \"relationships\": {\n      \"run\": {\n        \"data\": {\n          \"id\": \"run-veDoQbv6xh6TbnJD\",\n          \"type\": \"runs\"\n        }\n      }\n    },\n    \"links\": {\n      \"output\": \"\/api\/v2\/policy-checks\/polchk-9VYRc9bpfJEsnwum\/output\"\n    }\n  }\n}\n```\n\n## Override Policy\n\nThis endpoint overrides a soft-mandatory or warning policy.\n\n-> **Note**: The `sentinel` hash in the `result` attribute structure represents low-level Sentinel details generated by the policy engine. The keys or structure may change over time. Use the data in this hash at your own risk.\n\n`POST \/policy-checks\/:id\/actions\/override`\n\n| Parameter | Description                             |\n| --------- | --------------------------------------- |\n| `id`      | The ID of the policy check to override. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-checks\/polchk-EasPB4Srx5NAiWAU\/actions\/override\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"polchk-EasPB4Srx5NAiWAU\",\n    \"type\": \"policy-checks\",\n    \"attributes\": {\n      \"result\": {\n        \"result\": false,\n        \"passed\": 0,\n        \"total-failed\": 1,\n        \"hard-failed\": 0,\n        \"soft-failed\": 1,\n        \"advisory-failed\": 0,\n        \"duration-ms\": 0,\n        \"sentinel\": {...}\n      },\n      \"scope\": \"organization\",\n      \"status\": \"overridden\",\n      \"status-timestamps\": {\n        \"queued-at\": \"2017-11-29T20:13:37+00:00\",\n        \"soft-failed-at\": \"2017-11-29T20:13:40+00:00\",\n        \"overridden-at\": \"2017-11-29T20:14:11+00:00\"\n      },\n      \"actions\": {\n        \"is-overridable\": true\n      },\n      \"permissions\": {\n        \"can-override\": false\n      }\n    },\n    \"links\": {\n      \"output\": \"\/api\/v2\/policy-checks\/polchk-EasPB4Srx5NAiWAU\/output\"\n    }\n  }\n}\n```\n\n### Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\n| Resource Name      | Description                            |\n| ------------------ | -------------------------------------- |\n| `run`              | The run this policy check belongs to.  |\n| `run.workspace`    | The associated workspace of the run.   
|","site":"terraform"}
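Whether the override endpoint in the record above will succeed hinges on two fields of a policy check: `actions.is-overridable` (only soft-mandatory checks can be overridden) and `permissions.can-override` (the caller's token must grant it). A minimal Python sketch of that client-side gate; the field names and values mirror the documented sample responses, while the trimmed document and the helper name `can_override` are ours:

```python
import json

# A policy check trimmed to the fields relevant for overrides; the
# values mirror the documented "Show Policy Check" sample response.
check = json.loads("""
{
  "id": "polchk-9VYRc9bpfJEsnwum",
  "type": "policy-checks",
  "attributes": {
    "status": "soft_failed",
    "actions": {"is-overridable": true},
    "permissions": {"can-override": false}
  }
}
""")

def can_override(pc):
    """True when POST /policy-checks/:id/actions/override should succeed:
    the check must be overridable and the token must carry the
    can-override permission."""
    attrs = pc["attributes"]
    return bool(attrs["actions"]["is-overridable"]
                and attrs["permissions"]["can-override"])

print(can_override(check))  # False: the sample token lacks can-override
```

Checking `can-override` before issuing the POST avoids a predictable `403` for tokens that can read policy checks but not override them.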
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the projects endpoint to manage an organization s projects List show create update and delete projects using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201 page title Projects API Docs HCP Terraform","answers":"---\npage_title: Projects - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/projects` endpoint to manage an organization's projects. List, show, create, update, and delete projects using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n[speculative plans]: \/terraform\/cloud-docs\/run\/remote-operations#speculative-plans\n\n# Projects API\n\nThis topic provides reference information about the projects API.\n\nThe scope of the 
API includes the following endpoints:\n\n| Method | Path | Action |\n| --- | --- | --- |\n| `POST` | `\/organizations\/:organization_name\/projects` |  Call this endpoint to [create a project](#create-a-project). |\n| `PATCH`| `\/projects\/:project_id` | Call this endpoint to [update an existing project](#update-a-project). |\n| `GET` | `\/organizations\/:organization_name\/projects`| Call this endpoint to [list existing projects](#list-projects). |\n| `GET` | `\/projects\/:project_id` | Call this endpoint to [show project details](#show-project). |\n| `DELETE` | `\/projects\/:project_id` | Call this endpoint to [delete a project](#delete-a-project). |\n| `GET` | `\/projects\/:project_id\/tag-bindings` | Call this endpoint to [list project tag bindings](#list-project-tag-bindings). (For projects, this returns the same result set as the `effective-tag-bindings` endpoint.) |\n| `GET` | `\/projects\/:project_id\/effective-tag-bindings` | Call this endpoint to [list project effective tag bindings](#list-project-tag-bindings). (For projects, this returns the same result set as the `tag-bindings` endpoint.) |\n\n## Requirements\n\nYou must be on a team with one of the **Owners** or **Manage projects** permissions settings enabled to create and manage projects. Refer to [Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) for additional information.\n\nYou can also provide an organization API token to call project API endpoints. 
Refer to [Organization API Tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens) for additional information.\n\n## Create a Project\n\n`POST \/organizations\/:organization_name\/projects`\n\n| Parameter            | Description                                                                                                                                                          |\n|----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `:organization_name` | The name of the organization to create the project in. The organization must already exist in the system, and the user must have permissions to create new projects. |\n\n-> **Note:** Project creation is restricted to the owners team, teams with the \"Manage Projects\" permission, and the [organization API token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens).\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type | Default | Description |\n|---|---|---|--- |\n| `data.type` | string | none | Must be `\"projects\"`. |\n| `data.attributes.name` | string | | The name of the project. The name can contain letters, numbers, spaces, `-`, and `_`, but cannot start or end with spaces. It must be at least three characters long and no more than 40 characters long. |\n| `data.attributes.description` | string | none | Optional. The description of the project. It must be no more than 256 characters long. |\n| `data.attributes.auto-destroy-activity-duration`| string  | none | Specifies the default for how long each workspace in the project should wait before automatically destroying its infrastructure. 
You can specify a duration of up to four digits that is greater than `0`, followed by either a `d` for days or an `h` for hours. For example, to queue destroy runs after fourteen days of inactivity set `auto-destroy-activity-duration: \"14d\"`. All future workspaces in this project inherit this default value. Refer to [Automatically destroy inactive workspaces](\/terraform\/cloud-docs\/projects\/managing#automatically-destroy-inactive-workspaces) for additional information. | \n| `data.relationships.tag-bindings.data` | list of objects | none | Specifies a list of tags to attach to the project. |\n| `data.relationships.tag-bindings.data.type` | string | none | Must be `tag-bindings` for each object in the list. |\n| `data.relationships.tag-bindings.data.attributes.key` | string | none | Specifies the tag key for each object in the list. |\n| `data.relationships.tag-bindings.data.attributes.value` | string | none | Specifies the tag value for each object in the list. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"Test Project\",\n      \"description\": \"An example project for documentation.\"\n    },\n    \"type\": \"projects\",\n    \"relationships\": {\n      \"tag-bindings\": {\n        \"data\": [\n          {\n            \"type\": \"tag-bindings\",\n            \"attributes\": {\n              \"key\": \"environment\",\n              \"value\": \"development\"\n            }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/projects\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"prj-WsVcWRr7SfxRci1v\",\n    \"type\": \"projects\",\n    \"attributes\": {\n      \"name\": \"Test Project\",\n      \"description\": \"An 
example project for documentation.\",\n      \"permissions\": {\n        \"can-update\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\"\n        }\n      },\n      \"tag-bindings\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\/tag-bindings\"\n        }\n      },\n      \"effective-tag-bindings\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\/effective-tag-bindings\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\"\n    }\n  }\n}\n```\n\n## Update a Project\n\nCall the following endpoint to update a project:\n\n`PATCH \/projects\/:project_id`\n\n| Parameter     | Description                     |\n|---------------|---------------------------------|\n| `:project_id` | The ID of the project to update |\n\n### Request Body\n\nThese PATCH endpoints require a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type | Default | Description |\n|---|---|---|---|\n| `data.type` | string | none | Must be `\"projects\"`. |\n| `data.attributes.name` | string | existing value | A new name for the project. The name can contain letters, numbers, spaces, `-`, and `_`, but cannot start or end with spaces. It must be at least 3 characters long and no more than 40 characters long. |\n| `data.attributes.description` | string | existing value | The new description for the project. It must be no more than 256 characters long. |\n| `data.attributes.auto-destroy-activity-duration`| string  | none | Specifies how long each workspace in the project should wait before automatically destroying its infrastructure by default. 
You can specify a duration of up to four digits that is greater than `0`, followed by either a `d` for days or an `h` for hours. For example, to queue destroy runs after fourteen days of inactivity set `auto-destroy-activity-duration: \"14d\"`. When you update this value, all workspaces in the project receive the new value unless you previously configured an override. Refer to [Automatically destroy inactive workspaces](\/terraform\/cloud-docs\/projects\/managing#automatically-destroy-inactive-workspaces) for additional information. |\n| `data.relationships.tag-bindings.data` | list of objects | none | Specifies a list of tags to attach to the project. |\n| `data.relationships.tag-bindings.data.type` | string | none | Must be `tag-bindings` for each object in the list. |\n| `data.relationships.tag-bindings.data.attributes.key` | string | none | Specifies the tag key for each object in the list. |\n| `data.relationships.tag-bindings.data.attributes.value` | string | none | Specifies the tag value for each object in the list. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"Infrastructure Project\"\n    },\n    \"type\": \"projects\",\n    \"relationships\": {\n      \"tag-bindings\": {\n        \"data\": [\n          {\n            \"type\": \"tag-bindings\",\n            \"attributes\": {\n              \"key\": \"environment\",\n              \"value\": \"staging\"\n            }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"prj-WsVcWRr7SfxRci1v\",\n    \"type\": \"projects\",\n    \"attributes\": {\n      \"name\": \"Infrastructure Project\",\n      \"description\": null,\n      \"workspace-count\": 4,\n      \"team-count\": 2,\n      \"permissions\": {\n        \"can-update\": true,\n        \"can-destroy\": true,\n        \"can-create-workspace\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\"\n        }\n      },\n      \"tag-bindings\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\/tag-bindings\"\n        }\n      },\n      \"effective-tag-bindings\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\/effective-tag-bindings\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\"\n    }\n  }\n}\n```\n\n## List projects\n\nThis endpoint lists projects in the organization.\n\n`GET 
\/organizations\/:organization_name\/projects`\n\n| Parameter            | Description                                           |\n|----------------------|-------------------------------------------------------|\n| `:organization_name` | The name of the organization to list the projects of. |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter                               | Description                                                                                                                                                                                                                                                         |\n|-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `page[number]`                          | **Optional.** If omitted, the endpoint will return the first page.                                                                                                                                                                                                  |\n| `page[size]`                            | **Optional.** If omitted, the endpoint will return 20 projects per page.                                                                                                                                                                                            |\n| `q`                                     | **Optional.** A search query string. This query searches projects by name. This search is case-insensitive. 
If both `q` and `filter[names]` are specified, `filter[names]` will be used.                                                                            |\n| `filter[names]`                         | **Optional.** If specified, returns the project with the matching name. This filter is case-insensitive. If multiple comma separated values are specified, projects matching any of the names are returned.                                                         |\n| `filter[permissions][create-workspace]` | **Optional.** If present, returns a list of projects that the authenticated user can create workspaces in.                                                                                                                                                          |\n| `filter[permissions][update]`           | **Optional.** If present, returns a list of projects that the authenticated user can update.                                                                                                                                                                        |\n| `sort`                                  | **Optional.** Allows sorting the organization's projects by `\"name\"`. Prepending a hyphen to the sort parameter reverses the order. For example, `\"-name\"` sorts by name in reverse alphabetical order. If omitted, the default sort order is arbitrary but stable. 
|\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/projects\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"prj-W6k9K23oSXRHGpj3\",\n      \"type\": \"projects\",\n      \"attributes\": {\n        \"name\": \"Default Project\",\n        \"description\": null,\n        \"workspace-count\": 2,\n        \"team-count\": 1,\n        \"permissions\": {\n          \"can-update\": true,\n          \"can-destroy\": true,\n          \"can-create-workspace\": true\n        }\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/my-organization\"\n          }\n        },\n        \"tag-bindings\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/projects\/prj-W6k9K23oSXRHGpj3\/tag-bindings\"\n          }\n        },\n        \"effective-tag-bindings\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/projects\/prj-W6k9K23oSXRHGpj3\/effective-tag-bindings\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/projects\/prj-W6k9K23oSXRHGpj3\"\n      }\n    },\n    {\n      \"id\": \"prj-YoriCxAawTMDLswn\",\n      \"type\": \"projects\",\n      \"attributes\": {\n        \"name\": \"Infrastructure Project\",\n        \"description\": null,\n        \"workspace-count\": 4,\n        \"team-count\": 2,\n        \"permissions\": {\n          \"can-update\": true,\n          \"can-destroy\": true,\n          \"can-create-workspace\": true\n        }\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": 
\"organizations\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/my-organization\"\n          }\n        },\n        \"tag-bindings\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/projects\/prj-YoriCxAawTMDLswn\/tag-bindings\"\n          }\n        },\n        \"effective-tag-bindings\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/projects\/prj-YoriCxAawTMDLswn\/effective-tag-bindings\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/projects\/prj-YoriCxAawTMDLswn\"\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/projects?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/projects?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/projects?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"status-counts\": {\n      \"total\": 2,\n      \"matching\": 2\n    },\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 2\n    }\n  }\n}\n```\n\n## Show project\n\n`GET \/projects\/:project_id`\n\n| Parameter     | Description    |\n|---------------|----------------|\n| `:project_id` | The project ID |\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"prj-WsVcWRr7SfxRci1v\",\n    \"type\": \"projects\",\n    \"attributes\": {\n      \"name\": \"Infrastructure Project\",\n      \"description\": 
null,\n      \"workspace-count\": 4,\n      \"team-count\": 2,\n      \"permissions\": {\n        \"can-update\": true,\n        \"can-destroy\": true,\n        \"can-create-workspace\": true\n      },\n      \"auto-destroy-activity-duration\": \"2d\"\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\"\n        }\n      },\n      \"tag-bindings\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\/tag-bindings\"\n        }\n      },\n      \"effective-tag-bindings\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\/effective-tag-bindings\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\"\n    }\n  }\n}\n```\n\n## Delete a project\n\nA project cannot be deleted if it contains workspaces.\n\n`DELETE \/projects\/:project_id`\n\n| Parameter     | Description                     |\n|---------------|---------------------------------|\n| `:project_id` | The ID of the project to delete |\n\n| Status  | Response                  | Reason(s)                                                         |\n|---------|---------------------------|-------------------------------------------------------------------|\n| [204][] | No Content                | Successfully deleted the project                                  |\n| [403][] | [JSON API error object][] | Not authorized to perform a force delete on the project           |\n| [404][] | [JSON API error object][] | Project not found, or user unauthorized to perform project delete |\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  
https:\/\/app.terraform.io\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\n```\n\n## List project tag bindings\n\nCall the following endpoints to list the tags associated with a project.\n\n- `GET \/projects\/:project_id\/effective-tag-bindings`: Lists the complete set of tags for the project, including any that have been inherited from a parent resource. For projects, this endpoint returns the same set of data as the `tag-bindings` endpoint, so only that endpoint is documented here.\n\n\n| Parameter     | Description            |\n|---------------|----------------------- |\n| `:project_id` | The ID of the project. |\n\n### Sample request\n\nThe following request returns all tags bound to a project with the ID\n`prj-WsVcWRr7SfxRci1v`.\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/projects\/prj-WsVcWRr7SfxRci1v\/effective-tag-bindings\n```\n\n### Sample response\n\n```json\n{\n    \"data\": [\n        {\n            \"type\": \"effective-tag-bindings\",\n            \"attributes\": {\n                \"key\": \"added\",\n                \"value\": \"new\"\n            }\n        },\n        {\n            \"type\": \"effective-tag-bindings\",\n            \"attributes\": {\n                \"key\": \"aww\",\n                \"value\": \"aww\"\n            }\n        }\n    ]\n}\n```\n\n## Move workspaces into a project\n\nThis endpoint allows you to move one or more workspaces into a project. You must have permission to move workspaces on\nthe destination project as well as any source project(s). 
If you are not authorized to move any of the workspaces in the\nrequest, or if any workspaces in the request are not found, then no workspaces will be moved.\n\n`POST \/projects\/:project_id\/relationships\/workspaces`\n\n| Parameter     | Description                       |\n|---------------|-----------------------------------|\n| `:project_id` | The ID of the destination project |\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\n| Key path      | Type   | Default | Description                                                |\n|---------------|--------|---------|------------------------------------------------------------|\n| `data[].type` | string |         | Must be `\"workspaces\"`                                     |\n| `data[].id`   | string |         | The ids of workspaces to move into the destination project |\n\n| Status  | Response                  | Reason(s)                                                                                                |\n|---------|---------------------------|----------------------------------------------------------------------------------------------------------|\n| [204][] | No Content                | Successfully moved workspace(s)                                                                          |\n| [403][] | [JSON API error object][] | Workspace(s) not found, or user is not authorized to move all workspaces out of their current project(s) |\n| [404][] | [JSON API error object][] | Project not found, or user unauthorized to move workspaces into project                                  |\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"workspaces\",\n      \"id\": \"ws-AQEct2XFuH4HBsmS\"\n    }\n  ]\n}\n\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  
https:\/\/app.terraform.io\/api\/v2\/projects\/prj-zXm4y2BjeGPgHtkp\/relationships\/workspaces\n```\n\n### Sample Error Response\n\n```json\n{\n  \"errors\": [\n    {\n      \"status\": \"403\",\n      \"title\": \"forbidden\",\n      \"detail\": \"Workspace(s) not found, or you are not authorized to move them: ws-AQEct2XFuH4HBmS\"\n    }\n  ]\n}\n```","site":"terraform","answers_cleaned":"    page title  Projects   API Docs   HCP Terraform description       Use the   projects  endpoint to manage an organization s projects  List  show  create  update  and delete projects using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects   speculative plans    terraform cloud docs run remote operations speculative plans    Projects API  This topic provides reference information about the projects API   The scope of the API includes the following endpoints     Method   Path   Action                          POST      
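Because the move-workspaces payload is a flat JSON:API array, it is easy to generate for a batch of workspaces. The sketch below is not part of the official docs; the `build_move_payload` helper name is ours, and it only assembles the request body described in the table above.

```python
import json

def build_move_payload(workspace_ids):
    """Build the JSON:API body for POST /projects/:project_id/relationships/workspaces.

    Per the request body table, each element must set "type" to "workspaces"
    and carry the ID of a workspace to move into the destination project.
    """
    return {"data": [{"type": "workspaces", "id": ws_id} for ws_id in workspace_ids]}

# Workspace ID taken from the sample payload above.
payload = build_move_payload(["ws-AQEct2XFuH4HBsmS"])
print(json.dumps(payload, indent=2))
```

Write the result to `payload.json` and pass it with `--data @payload.json`, as in the sample request above. Remember that the move is all-or-nothing: one unauthorized or missing workspace fails the whole request with a 403.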
Response     json      errors                  status    403          title    forbidden          detail    Workspace s  not found  or you are not authorized to move them  ws AQEct2XFuH4HBmS                 "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the policies endpoint to manage Sentinel and OPA policies List show create upload update and delete policies using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201 page title Policies API Docs HCP Terraform","answers":"---\npage_title: Policies - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/policies` endpoint to manage Sentinel and OPA policies. List, show, create, upload, update, and delete policies using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Policies API\n\nPolicies are rules that HCP Terraform enforces on Terraform runs. 
You can use policies to validate that the Terraform plan complies with security rules and best practices. HCP Terraform policy enforcement lets you use the policy-as-code frameworks Sentinel and Open Policy Agent (OPA) to apply policy checks to HCP Terraform workspaces. Refer to [Policy Enforcement](\/terraform\/cloud-docs\/policy-enforcement) for more details.\n\n[Policy sets](\/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets) are collections of policies that you can apply globally or to specific [projects](\/terraform\/cloud-docs\/projects\/manage) and workspaces in your organization. For each run in the selected workspaces, HCP Terraform checks the Terraform plan against the policy set and displays the results in the UI.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nThis page documents the API endpoints to create, read, update, and delete policies in an organization. To manage policy sets, use the [Policy Sets API](\/terraform\/cloud-docs\/api-docs\/policy-sets). To manage the policy results, use the [Runs API](\/terraform\/cloud-docs\/api-docs\/run).\n\n## Create a Policy\n\n`POST \/organizations\/:organization_name\/policies`\n\n| Parameter            | Description |\n| -------------------- | ----------- |\n| `:organization_name` | The organization to create the policy in. 
The organization must already exist in the system, and the token authenticating the API request must have permission to manage policies. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)) |\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nThis creates a new policy object for the organization, but does not upload the actual policy code. After creation, you must use the [Upload a Policy endpoint (below)](#upload-a-policy) with the new policy's upload path. (This endpoint's response body includes the upload path in its `links.upload` property.)\n\n| Status  | Response                                   | Reason                                                         |\n| ------- | ------------------------------------------ | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"policies\"`) | Successfully created a policy                                  |\n| [404][] | [JSON API error object][]                  | Organization not found, or user unauthorized to perform action |\n| [422][] | [JSON API error object][]                  | Malformed request body (missing attributes, wrong types, etc.) 
|\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n|                Key path                 |      Type      |  Default   |                                                                                                                                                                                                                               Description                                                                                                                                                                                                                                |\n| --------------------------------------- | -------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `data.type`                             | string         |            | Must be `\"policies\"`.                                                                                                                                                                                                                                                                                                                                                                                                                                                    |\n| `data.attributes.name`                  | string         |            | The name of the policy, which can include letters, numbers, `-`, and `_`. You cannot modify this name after creation.                               
                                                                                                                                                                                                                                                                                                                     |\n| `data.attributes.description`           | string         | `null`     | Text describing the policy's purpose. This field supports Markdown and appears in the HCP Terraform UI.                                                                                                                                                                                                                                                                                                                                                                |\n| `data.attributes.kind`                  | string         | `sentinel` | The policy-as-code framework for the policy. Valid values are `\"sentinel\"` and `\"opa\"`.                                                                                                                                                                                                                                                                                                                                                                                  |\n| `data.attributes.query`                 | string         |            | The OPA query to run. Only valid for OPA policies.                                                                                                                                                                                                                                                                                                                                                                                                                       
|\n| `data.attributes.enforcement-level`     | string         |            | The enforcement level of the policy. For Sentinel, valid values are `\"hard-mandatory\"`, `\"soft-mandatory\"`, and `\"advisory\"`. For OPA, valid values are `\"mandatory\"` and `\"advisory\"`. Refer to [Managing Policies](\/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets) for details. |\n| `data.attributes.enforce`               | array\\[object] |            | **DEPRECATED** Use `enforcement-level` instead. An array of enforcement configurations that map policy file paths to their enforcement modes. Policies support a single file, so this array should consist of a single element. For Sentinel, if the path in the enforcement map does not match the Sentinel policy (`<NAME>.sentinel`), then HCP Terraform uses the default `hard-mandatory` enforcement level. For OPA, the default enforcement level is `advisory`. |\n| `data.attributes.enforce[].path`        | string         |            | **DEPRECATED** For Sentinel, must be `<NAME>.sentinel`, where `<NAME>` has the same value as `data.attributes.name`. For OPA, must be `<NAME>.rego`. |\n| `data.attributes.enforce[].mode`        | string         |            | **DEPRECATED** Use `enforcement-level` instead. The enforcement level of the policy. For Sentinel, valid values are `\"hard-mandatory\"`, `\"soft-mandatory\"`, and `\"advisory\"`. For OPA, valid values are `\"mandatory\"` and `\"advisory\"`. 
Refer to [Managing Policies](\/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets) for details.                                                                                                                            |\n| `data.relationships.policy-sets.data[]` | array\\[object] | `[]`       | A list of resource identifier objects to define which policy sets include the new policy. These objects must contain `id` and `type` properties, and the `type` property must be `policy-sets`. For example,`{ \"id\": \"polset-3yVQZvHzf5j3WRJ1\",\"type\": \"policy-sets\" }`.                                                                                                                                                                                                 |\n\n### Sample Payload (Sentinel)\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"enforcement-level\": \"hard-mandatory\",\n      \"name\": \"my-example-policy\",\n      \"description\": \"An example policy.\",\n      \"kind\": \"sentinel\"\n    },\n    \"relationships\": {\n      \"policy-sets\": {\n        \"data\": [\n          { \"id\": \"polset-3yVQZvHzf5j3WRJ1\", \"type\": \"policy-sets\" }\n        ]\n      }\n    },\n    \"type\": \"policies\"\n  }\n}\n```\n\n### Sample Payload (OPA)\n\n-> **Note**: We have deprecated the `enforce` property in requests and responses but continue to provide it for backward compatibility. 
The below sample uses the deprecated `enforce` property.\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"enforce\": [\n        {\n          \"path\": \"my-example-policy.rego\",\n          \"mode\": \"advisory\"\n        }\n      ],\n      \"name\": \"my-example-policy\",\n      \"description\": \"An example policy.\",\n      \"kind\": \"opa\",\n      \"query\": \"terraform.main\"\n    },\n    \"relationships\": {\n      \"policy-sets\": {\n        \"data\": [\n          { \"id\": \"polset-3yVQZvHzf5j3WRJ1\", \"type\": \"policy-sets\" }\n        ]\n      }\n    },\n    \"type\": \"policies\"\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/policies\n```\n\n### Sample Response (Sentinel)\n\n```json\n{\n  \"data\": {\n    \"id\":\"pol-u3S5p2Uwk21keu1s\",\n    \"type\":\"policies\",\n    \"attributes\": {\n      \"name\":\"my-example-policy\",\n      \"description\":\"An example policy.\",\n      \"enforcement-level\":\"advisory\",\n      \"enforce\": [\n        {\n          \"path\":\"my-example-policy.sentinel\",\n          \"mode\":\"advisory\"\n        }\n      ],\n      \"policy-set-count\": 1,\n      \"updated-at\": null\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n      },\n      \"policy-sets\": {\n        \"data\": [\n          { \"id\": \"polset-3yVQZvHzf5j3WRJ1\", \"type\": \"policy-sets\" }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\",\n      \"upload\":\"\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\/upload\"\n    }\n  }\n}\n```\n\n### Sample Response (OPA)\n\n```json\n{\n  \"data\": {\n    \"id\":\"pol-u3S5p2Uwk21keu1s\",\n    \"type\":\"policies\",\n    
\"attributes\": {\n      \"name\":\"my-example-policy\",\n      \"description\":\"An example policy.\",\n      \"kind\": \"opa\",\n      \"query\": \"terraform.main\",\n      \"enforcement-level\": \"advisory\",\n      \"enforce\": [\n        {\n          \"path\":\"my-example-policy.rego\",\n          \"mode\":\"advisory\"\n        }\n      ],\n      \"policy-set-count\": 1,\n      \"updated-at\": null\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n      },\n      \"policy-sets\": {\n        \"data\": [\n          { \"id\": \"polset-3yVQZvHzf5j3WRJ1\", \"type\": \"policy-sets\" }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\",\n      \"upload\":\"\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\/upload\"\n    }\n  }\n}\n```\n\n## Show a Policy\n\n`GET \/policies\/:policy_id`\n\n| Parameter    | Description                                                                 |\n| ------------ | --------------------------------------------------------------------------- |\n| `:policy_id` | The ID of the policy to show. Refer to [List Policies](\/terraform\/cloud-docs\/api-docs\/policies#list-policies) for reference information about finding IDs. 
|\n\n| Status  | Response                                   | Reason                                                  |\n| ------- | ------------------------------------------ | ------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"policies\"`) | The request was successful                              |\n| [404][] | [JSON API error object][]                  | Policy not found or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl --request GET \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/policies\/pol-oXUppaX2ximkqp8w\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"pol-oXUppaX2ximkqp8w\",\n    \"type\": \"policies\",\n    \"attributes\": {\n      \"name\": \"my-example-policy\",\n      \"description\":\"An example policy.\",\n      \"kind\": \"sentinel\",\n      \"enforcement-level\": \"soft-mandatory\",\n      \"enforce\": [\n        {\n          \"path\": \"my-example-policy.sentinel\",\n          \"mode\": \"soft-mandatory\"\n        }\n      ],\n      \"policy-set-count\": 1,\n      \"updated-at\": \"2018-09-11T18:21:21.784Z\"\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n      },\n      \"policy-sets\": {\n        \"data\": [\n          { \"id\": \"polset-3yVQZvHzf5j3WRJ1\", \"type\": \"policy-sets\" }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/policies\/pol-oXUppaX2ximkqp8w\",\n      \"upload\": \"\/api\/v2\/policies\/pol-oXUppaX2ximkqp8w\/upload\",\n      \"download\": \"\/api\/v2\/policies\/pol-oXUppaX2ximkqp8w\/download\"\n    }\n  }\n}\n```\n\n## Upload a Policy\n\n`PUT \/policies\/:policy_id\/upload`\n\n| Parameter    | Description                                                                                                              
            |\n| ------------ | ------------------------------------------------------------------------------------------------------------------------------------ |\n| `:policy_id` | The ID of the policy to upload code to. Refer to [List Policies](\/terraform\/cloud-docs\/api-docs\/policies#list-policies) for reference information about finding the policy ID. The ID also appears in the response when you create a policy.  |\n\nThis endpoint uploads code to an existing Sentinel or OPA policy.\n\n-> **Note**: This endpoint does not use JSON-API's conventions for HTTP headers and body serialization.\n\n-> **Note**: This endpoint limits the size of uploaded policies to 10MB. If a larger payload is uploaded, an HTTP 413 error will be returned, and the policy will not be saved. Consider refactoring policies into multiple smaller, more concise documents if you reach this limit.\n\n### Request Body\n\nThis PUT endpoint requires the text of a valid Sentinel or OPA policy with a `Content-Type` of `application\/octet-stream`.\n\n- Refer to [Defining Sentinel Policies](\/terraform\/cloud-docs\/policy-enforcement\/sentinel) for details about writing Sentinel code.\n- Refer to [Defining OPA Policies](\/terraform\/cloud-docs\/policy-enforcement\/opa) for details about writing OPA code.\n\n### Sample Payload\n\n```plain\nmain = rule { true }\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/octet-stream\" \\\n  --request PUT \\\n  --data-binary @payload.file \\\n  https:\/\/app.terraform.io\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\/upload\n```\n\n## Update a Policy\n\n`PATCH \/policies\/:policy_id`\n\n| Parameter    | Description                                                                   |\n| ------------ | ----------------------------------------------------------------------------- |\n| `:policy_id` | The ID of the policy to update. 
Refer to [List Policies](\/terraform\/cloud-docs\/api-docs\/policies#list-policies) for reference information about finding IDs. |\n\nThis endpoint can update the enforcement mode of an existing policy. To update the policy code itself, use the upload endpoint.\n\n| Status  | Response                                   | Reason                                                         |\n| ------- | ------------------------------------------ | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"policies\"`) | Successfully updated the policy                                |\n| [404][] | [JSON API error object][]                  | Policy not found, or user unauthorized to perform action       |\n| [422][] | [JSON API error object][]                  | Malformed request body (missing attributes, wrong types, etc.) |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n|              Key path               |      Type      | Default |                                                                                                                                                                                                                               Description                                                                                                                                                                                                                                |\n| ----------------------------------- | -------------- | ------- | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `data.type`                         | string         |         | Must be `\"policies\"`.                                                                                                                                                                                                                                                                                                                                                                                                                                                    |\n| `data.attributes.description`       | string         | `null`  | Text describing the policy's purpose. This field supports Markdown and appears in the HCP Terraform UI.                                                                                                                                                                                                                                                                                                                                                                |\n| `data.attributes.query`             | string         |         | The OPA query to run. This is only valid for OPA policies.                                                                                                                                                                                                                                                                                                                                                                         
|\n| `data.attributes.enforcement-level` | string         |         | The enforcement level of the policy. For Sentinel, valid values are `\"hard-mandatory\"`, `\"soft-mandatory\"`, and `\"advisory\"`. For OPA, valid values are `\"mandatory\"` and `\"advisory\"`. Refer to [Managing Policies](\/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets) for details. |\n| `data.attributes.enforce`           | array\\[object] |         | **DEPRECATED** Use `enforcement-level` instead. An array of enforcement configurations that map policy file paths to their enforcement modes. Policies support a single file, so this array should consist of a single element. For Sentinel, if the path in the enforcement map does not match the Sentinel policy (`<NAME>.sentinel`), then HCP Terraform uses the default `hard-mandatory` enforcement level. For OPA, the default enforcement level is `advisory`. |\n| `data.attributes.enforce[].path`    | string         |         | **DEPRECATED** For Sentinel, must be `<NAME>.sentinel`, where `<NAME>` has the same value as `data.attributes.name`. For OPA, must be `<NAME>.rego`. |\n| `data.attributes.enforce[].mode`    | string         |         | **DEPRECATED** Use `enforcement-level` instead. The enforcement level of the policy. For Sentinel, valid values are `\"hard-mandatory\"`, `\"soft-mandatory\"`, and `\"advisory\"`. For OPA, valid values are `\"mandatory\"` and `\"advisory\"`. 
Refer to [Managing Policies](\/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets) for details. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"enforcement-level\": \"soft-mandatory\"\n    },\n    \"type\":\"policies\"\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"pol-u3S5p2Uwk21keu1s\",\n    \"type\":\"policies\",\n    \"attributes\": {\n      \"name\":\"my-example-policy\",\n      \"description\":\"An example policy.\",\n      \"kind\": \"sentinel\",\n      \"enforcement-level\": \"soft-mandatory\",\n      \"enforce\": [\n        {\n          \"path\":\"my-example-policy.sentinel\",\n          \"mode\":\"soft-mandatory\"\n        }\n      ],\n      \"policy-set-count\": 0,\n      \"updated-at\":\"2017-10-10T20:58:04.621Z\"\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\",\n      \"upload\":\"\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\/upload\",\n      \"download\":\"\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\/download\"\n    }\n  }\n}\n```\n\n## List Policies\n\n`GET \/organizations\/:organization_name\/policies`\n\n| Parameter            | Description                            |\n| -------------------- | -------------------------------------- |\n| `:organization_name` | The organization to list policies for. 
|\n\n| Status  | Response                                             | Reason                                                         |\n| ------- | ---------------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | Array of [JSON API document][]s (`type: \"policies\"`) | Success                                                        |\n| [404][] | [JSON API error object][]                            | Organization not found, or user unauthorized to perform action |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                                                                       |\n| -------------- | --------------------------------------------------------------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.                                                                |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 policies per page.                                                          |\n| `search[name]` | **Optional.** Allows searching the organization's policies by name.                                                               |\n| `filter[kind]` | **Optional.** If specified, restricts results to those with the matching policy kind value. 
Valid values are `sentinel` and `opa`. |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/policies\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"attributes\": {\n        \"enforcement-level\": \"advisory\",\n        \"enforce\": [\n          {\n            \"mode\": \"advisory\",\n            \"path\": \"my-example-policy.sentinel\"\n          }\n        ],\n        \"name\": \"my-example-policy\",\n        \"description\": \"An example policy.\",\n        \"policy-set-count\": 0,\n        \"updated-at\": \"2017-10-10T20:52:13.898Z\",\n        \"kind\": \"sentinel\"\n      },\n      \"id\": \"pol-u3S5p2Uwk21keu1s\",\n      \"relationships\": {\n        \"organization\": {\n          \"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n        }\n      },\n      \"links\": {\n        \"download\": \"\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\/download\",\n        \"self\": \"\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\",\n        \"upload\": \"\/api\/v2\/policies\/pol-u3S5p2Uwk21keu1s\/upload\"\n      },\n      \"type\": \"policies\"\n    },\n    {\n      \"id\":\"pol-vM2rBfj7V2Faq8By\",\n      \"type\":\"policies\",\n      \"attributes\":{\n        \"name\":\"policy1\",\n        \"description\":null,\n        \"enforcement-level\": \"advisory\",\n        \"enforce\":[\n          {\n            \"path\":\"policy1.rego\",\n            \"mode\":\"advisory\"\n          }\n        ],\n        \"policy-set-count\":1,\n        \"updated-at\":\"2022-09-13T04:57:43.516Z\",\n        \"kind\":\"opa\",\n        \"query\":\"data.terraform.rules.policy1.rule\"\n      },\n      \"relationships\":{\n        \"organization\":{\n          \"data\":{\n            \"id\":\"hashicorp\",\n            \"type\":\"organizations\"\n          }\n        },\n        \"policy-sets\":{\n          
\"data\":[\n            {\n              \"id\":\"polset-FYu3k5WY5eecwwdt\",\n              \"type\":\"policy-sets\"\n            }\n          ]\n        }\n      },\n      \"links\":{\n        \"self\":\"\/api\/v2\/policies\/pol-vM2rBfj7V2Faq8By\",\n        \"upload\":\"\/api\/v2\/policies\/pol-vM2rBfj7V2Faq8By\/upload\",\n        \"download\":\"\/api\/v2\/policies\/pol-vM2rBfj7V2Faq8By\/download\"\n      }\n    }\n  ]\n}\n```\n\n## Delete a Policy\n\n`DELETE \/policies\/:policy_id`\n\n| Parameter    | Description                                                                   |\n| ------------ | ----------------------------------------------------------------------------- |\n| `:policy_id` | The ID of the policy to delete. Refer to [List Policies](\/terraform\/cloud-docs\/api-docs\/policies#list-policies) for reference information about finding IDs. |\n\n| Status  | Response                  | Reason                                                   |\n| ------- | ------------------------- | -------------------------------------------------------- |\n| [204][] | No Content                | Successfully deleted the policy                          |\n| [404][] | [JSON API error object][] | Policy not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/policies\/pl-u3S5p2Uwk21keu1s\n```\n\n## Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). 
The following resource types are available:\n\n| Resource Name | Description                                            |\n| ------------- | ------------------------------------------------------ |\n| `policy_sets` | Policy sets that any returned policies are members of. |","site":"terraform"}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the explorer endpoint to explore your data Query filter and sort data page title Explorer API Docs HCP Terraform from a variety of views using the HTTP API tfc only true","answers":"---\npage_title: Explorer - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/explorer` endpoint to explore your data. Query, filter, and sort data\n  from a variety of views using the HTTP API.\ntfc_only: true\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n[speculative plans]: \/terraform\/cloud-docs\/run\/remote-operations#speculative-plans\n\n# Explorer API\n\nExplorer allows you to query your HCP Terraform data across workspaces in an organization.\nYou can select from a variety of available 
views and supply optional sort and\nfilter parameters to present your data in the format that best suits your needs.\n\nQueries are scoped to the organization level. You must have owner permissions in the organization, or read access to workspaces and projects.\n\n## Execute a query\n\n`GET \/organizations\/:organization_name\/explorer`\n\n| Parameter            | Description |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to query. The organization must already exist and you must have permissions to read the organization's workspaces and projects. |\n\n-> **Note:** Explorer is restricted to the owners team, teams with the \"Read all\nworkspaces\" permission, teams with the \"Read all projects\" permission, and the [organization API token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens).\n\nExplorer queries will time out after 60 seconds. If the desired query is timing\nout, try a simpler query that uses fewer filters.\n\nRequests to the explorer are subject to [rate\nlimiting](\/terraform\/cloud-docs\/api-docs#rate-limiting). Explorer will return a\n429 status code when rate limiting is active for the authenticated context.\n\nData presented by the explorer is eventually consistent. 
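Because the endpoint answers with a 429 status while rate limiting is active, API clients often wrap explorer queries in a small retry loop. A minimal Python sketch (not part of the API; the `fetch` callable, retry count, and backoff values are all illustrative):

```python
import time

def query_with_retry(fetch, max_retries=3, backoff=1.0):
    """Call fetch() -- any callable returning (status_code, body) --
    retrying with exponential backoff while the explorer returns 429."""
    for attempt in range(max_retries + 1):
        status, body = fetch()
        if status != 429:
            return status, body
        if attempt < max_retries:
            # Sleep 1s, 2s, 4s, ... before retrying the rate-limited call.
            time.sleep(backoff * (2 ** attempt))
    return status, body
```

In practice `fetch` would issue the GET request with the `Authorization: Bearer` header shown in the sample requests, for example via `urllib.request`.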
Query results typically\ncontain data that represents the current state of the system, but in some cases\na small delay may be necessary before querying for data that has recently been\nupdated.\n\n### Query parameters\n\nThis GET endpoint requires a query string that supports the following properties.\n\n| Parameter      | Description |\n| -------------- | ----------------------------------------------------------------------------- |\n| `type`         | **Required.** Must be one of the following available views: `workspaces`, `tf_versions`, `providers`, or `modules`. See the [View Types](#view-types) section for more information about each. |\n| `sort`         | Optionally sort the returned data by the specified field name. The field name must use snake case and exist within the supplied view type. The field name supplied can be prefixed with a hyphen (`-`) to perform a descending sort. |\n| `filter`       | Optionally specify one or more filters to limit the data returned. See [Filtering](#filtering) for more information. |\n| `fields`       | Optionally limit the data returned only to the specified fields. The field names supplied must use snake case and be submitted in comma-separated format. If omitted, all available fields will be returned for the requested view type. |\n| `page[number]` | Optional. If omitted, the endpoint will return the first page. |\n| `page[size]`   | Optional. If omitted, the endpoint will use a default page size. |\n\n### View Types\n\nExplorer queries each require a view, specified by the `type` query string parameter.\nThere are several contextual views available for querying:\n\n| Type          | Description |\n| ------------- | ----------------------------------------------------------------------------- |\n| `workspaces`  | Information about the workspaces in the target organization and any current runs associated with them. Each row returned represents a single workspace. 
See [View Type: workspaces](#view-type-workspaces) for available fields. |\n| `tf_versions` | Information about the Terraform versions in use across this organization. Each row returned represents a Terraform version in use. See [View Type: tf_versions](#view-type-tf_versions) for available fields. |\n| `providers`   | Information about the Terraform providers in use across the target organization. Each row returned represents a Terraform provider in use. See [View Type: providers](#view-type-providers) for available fields. |\n| `modules`     | Information about the Terraform modules in use across the target organization. Each row returned represents a Terraform module in use. See [View Type: modules](#view-type-modules) for available fields. |\n\nThe fields returned by a given query are specific to the view type used. The\nfields available for each view type are detailed below:\n\n#### View Type: `workspaces`\n\n| Field                             | Type       | Description |\n| --------------------------------- | ---------- | ----------------------------------------------------------------------------- |\n| `all_checks_succeeded`            | `boolean`  | True if checks have succeeded for the workspace, false otherwise. |\n| `checks_errored`                  | `number`   | The number of checks that errored without completing. |\n| `checks_failed`                   | `number`   | The number of checks that completed and failed. |\n| `checks_passed`                   | `number`   | The number of checks that completed and passed. |\n| `checks_unknown`                  | `number`   | The number of checks that could not be assessed. |\n| `current_run_applied_at`          | `datetime` | Represents the time that this workspace's current run was applied. |\n| `current_run_external_id`         | `string`   | The external ID of this workspace's current run. |\n| `current_run_status`              | `string`   | The status of this workspace's current run. 
|\n| `drifted`                         | `boolean`  | True if drift has been detected for the workspace, false otherwise. |\n| `external_id`                     | `string`   | The external ID of this workspace. |\n| `module_count`                    | `number`   | The number of distinct modules used by this workspace. |\n| `modules`                         | `string`   | A comma-separated list of modules used by this workspace. |\n| `organization_name`               | `string`   | The name of the organization this workspace belongs to. |\n| `project_external_id`             | `string`   | The external ID of the project this workspace belongs to. |\n| `project_name`                    | `string`   | The name of the project this workspace belongs to. |\n| `provider_count`                  | `number`   | The number of distinct providers used in this workspace. |\n| `providers`                       | `string`   | A comma-separated list of providers used in this workspace. |\n| `resources_drifted`               | `number`   | The count of resources that drift was detected for. |\n| `resources_undrifted`             | `number`   | The count of resources that drift was not detected for. |\n| `state_version_terraform_version` | `string`   | The Terraform version used to create the current run's state file. |\n| `vcs_repo_identifier`             | `string`   | The name of the repository configured for this workspace, if available. |\n| `workspace_created_at`            | `datetime` | The time this workspace was created. |\n| `workspace_name`                  | `string`   | The name of this workspace. |\n| `workspace_terraform_version`     | `string`   | The Terraform version currently configured for this workspace. |\n| `workspace_updated_at`            | `datetime` | The time this workspace was last updated. 
|\n\n#### View Type: `tf_versions`\n\n| Field              | Type     | Description |\n| ------------------ | -------- | ------------------------------------------------------------------ |\n| `version`          | `string` | The semantic version string for this Terraform version. |\n| `workspace_count`  | `number` | The number of workspaces using this Terraform version. |\n| `workspaces`       | `string` | A comma-separated list of workspaces using this Terraform version. |\n\n#### View Type: `providers`\n\n| Field             | Type     | Description |\n| ------------------| -------- | --------------------------------------------------------- |\n| `name`            | `string` | The name of this provider. |\n| `source`          | `string` | The source of this provider. |\n| `version`         | `string` | The semantic version string for this provider. |\n| `workspace_count` | `number` | The number of workspaces using this provider. |\n| `workspaces`      | `string` | A comma-separated list of workspaces using this provider. |\n\n#### View Type: `modules`\n\n| Field             | Type     | Description |\n| ------------------| -------- | ------------------------------------------------------- |\n| `name`            | `string` | The name of this module. |\n| `source`          | `string` | The source of this module. |\n| `version`         | `string` | The semantic version string for this module. |\n| `workspace_count` | `number` | The number of workspaces using this module version. |\n| `workspaces`      | `string` | A comma-separated list of workspaces using this module version. |\n\n### Filtering\n\nThe explorer can present filtered data from any view type using a variety of\noperators.\n\nFilter strings begin with the reserved string `filter`, and are passed as URL\nquery string parameters using key-value pairs. 
The parameter key contains the\nfilter's target field and operator, and the parameter value contains the filter\nvalue to be used during the query.\n\nMultiple filters can be used in a query by providing multiple filter query\nstring parameters separated with `&`. When multiple filters are used, a logical AND is used\nto evaluate the results.\n\nEach filter string must use the following format:\n\n```\nfilter[<FILTER_INDEX>][<FIELD_NAME>][<FIELD_OPERATOR>][<FIELD_VALUE_INDEX>]=<FILTER_VALUE>\n```\n\n**Filter string sub-parameters**\n\n- **FILTER_INDEX**: The index of the filter. Each filter requires a unique\n  index. The first filter should use a `0` and each additional filter should\n  use an index that is the previous filter's index incremented by 1.\n- **FIELD_NAME**: The name of the field to apply the filter to. The field must\n  be a valid field for the view type being queried. See [View\n  Types](#view-types) for a list of fields available for each type.\n- **FIELD_OPERATOR**: The operator to use when filtering. Must be supported by\n  the field type being filtered. See [Filter Operators](#filter-operators) for the\n  available operators and their supported field types.\n- **FIELD_VALUE_INDEX**: Must be `0`. 
This sub-parameter is reserved for future use.\n- **FILTER_VALUE**: The filter value used by the filter during the query.\n\nOnce the desired filter strings have been added to a request URI, they should be\npercent-encoded along with the rest of the query string parameters.\n\n#### Sample Filter Strings\n\n**View Type: `workspaces`**\n\n_Show workspaces that contain `test` in their name:_\n\n```\nfilter[0][workspace_name][contains][0]=test\n```\n\n_After encoding:_\n```\nfilter%5B0%5D%5Bworkspace_name%5D%5Bcontains%5D%5B0%5D=test\n```\n\n**View Type: `modules`**\n\n_Show modules that contain `aws` in their name and a version string equal to `1.1`:_\n\n```\nfilter[0][name][contains][0]=aws&filter[1][version][is][0]=1.1\n```\n\n_After encoding:_\n```\nfilter%5B0%5D%5Bname%5D%5Bcontains%5D%5B0%5D=aws&filter%5B1%5D%5Bversion%5D%5Bis%5D%5B0%5D=1.1\n```\n\n#### Filter Operators\n\n| Operator           | Supported Field Types         | Description |\n| ------------------ | ----------------------------- | ----------- |\n| `is`               | `boolean`, `number`, `string` | Filters data to rows where the field value is equivalent to the filter value. |\n| `is_not`           | `number`, `string`            | Filters data to rows where the field value is different from the filter value. |\n| `contains`         | `string`                      | Filters data to rows where the field value contains the filter value as a sub-string. |\n| `does_not_contain` | `string`                      | Filters data to rows where the field value does not contain the filter value as a sub-string. |\n| `is_empty`         | `boolean`, `number`, `string` | Filters data to rows where the field does not contain a value. |\n| `is_not_empty`     | `boolean`, `number`, `string` | Filters data to rows where the field contains any value. |\n| `gt`               | `number`                      | Filters data to rows where the field value is greater than the filter value. 
|\n| `lt`               | `number`                      | Filters data to rows where the field value is less than the filter value. |\n| `gteq`             | `number`                      | Filters data to rows where the field value is greater than or equal to the filter value. |\n| `lteq`             | `number`                      | Filters data to rows where the field value is less than or equal to the filter value. |\n| `is_before`        | `datetime`                    | Filters data to rows where the field value is earlier than the filter value. ISO 8601 formatted timestamps are required for filter values. If no time zone offset is provided, the filter value will be interpreted as UTC. |\n| `is_after`         | `datetime`                    | Filters data to rows where the field value is later than the filter value. ISO 8601 formatted timestamps are required for filter values. If no time zone offset is provided, the filter value will be interpreted as UTC. |\n\n### Pagination\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). 
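Bracketed parameter names such as `page[number]` or the filter strings above can be percent-encoded programmatically instead of by hand. A minimal Python sketch (standard library only; the helper name is illustrative) that reproduces the encoded filter string from the filtering examples:

```python
from urllib.parse import quote

def encoded_filter(index, field, operator, value):
    # Build filter[<index>][<field>][<operator>][0]=<value>, then
    # percent-encode it so "[" becomes %5B and "]" becomes %5D.
    key = f"filter[{index}][{field}][{operator}][0]"
    return f"{quote(key, safe='')}={quote(str(value), safe='')}"

print(encoded_filter(0, "workspace_name", "contains", "test"))
# filter%5B0%5D%5Bworkspace_name%5D%5Bcontains%5D%5B0%5D=test
```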
Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n### Sample requests\n\n**View Type: `workspaces`**\n\n_Show data for all workspaces_\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer?type=workspaces'\n```\n\n_Show data for all workspaces with `test` in their name_\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer?type=workspaces&filter%5B0%5D%5Bworkspace_name%5D%5Bcontains%5D%5B0%5D=test'\n```\n\n**View Type: `modules`**\n\n_Show modules used across all workspaces_\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer?type=modules'\n```\n\n_Show public modules only_\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer?type=modules&filter%5B0%5D%5Bregistry_type%5D%5Bis%5D%5B0%5D=public'\n```\n\n**View Type: `providers`**\n\n_Show providers used across all workspaces_\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer?type=providers'\n```\n\n_Show most used providers across all workspaces_\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer?type=providers&sort=-workspace_count'\n```\n\n**View Type: `tf_versions`**\n\n_Show Terraform versions used across all workspaces_\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer?type=tf_versions'\n```\n\n_Show all Terraform versions used in more than 10 workspaces_\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer 
$TOKEN\" \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer?type=tf_versions&filter%5B0%5D%5Bworkspace_count%5D%5Bgt%5D%5B0%5D=10'\n```\n\n### Sample response\n\n_Show data for all workspaces_\n\n```json\n{\n  \"data\": [\n    {\n      \"attributes\": {\n        \"all-checks-succeeded\": true,\n        \"checks-errored\": 0,\n        \"checks-failed\": 0,\n        \"checks-passed\": 0,\n        \"checks-unknown\": 0,\n        \"current-run-applied-at\": null,\n        \"current-run-external-id\": \"run-rdoEKh2Gy3K6SbCZ\",\n        \"current-run-status\": \"planned_and_finished\",\n        \"drifted\": false,\n        \"external-id\": \"ws-j2sAeWRxou1b5HYf\",\n        \"module-count\": 0,\n        \"modules\": null,\n        \"organization-name\": \"acme-corp\",\n        \"project-external-id\": \"prj-V61QLE8tvs4NvVZG\",\n        \"project-name\": \"Default Project\",\n        \"provider-count\": 0,\n        \"providers\": null,\n        \"resources-drifted\": 0,\n        \"resources-undrifted\": 0,\n        \"state-version-terraform-version\": \"1.5.7\",\n        \"vcs-repo-identifier\": null,\n        \"workspace-created-at\": \"2023-10-17T21:56:45.570+00:00\",\n        \"workspace-name\": \"payments-service\",\n        \"workspace-terraform-version\": \"1.5.7\",\n        \"workspace-updated-at\": \"2023-12-08T19:58:15.513+00:00\"\n      },\n      \"id\": \"ws-j2sAWeRxuo1b5HYf\",\n      \"type\": \"visibility-workspace\"\n    },\n    {\n      \"attributes\": {\n        \"all-checks-succeeded\": true,\n        \"checks-errored\": 0,\n        \"checks-failed\": 0,\n        \"checks-passed\": 0,\n        \"checks-unknown\": 0,\n        \"current-run-applied-at\": \"2023-08-18T16:24:59.000+00:00\",\n        \"current-run-external-id\": \"run-9MmJaoQTvDCh5wUa\",\n        \"current-run-status\": \"applied\",\n        \"drifted\": true,\n        \"external-id\": \"ws-igUVNQSYVXRkhYuo\",\n        \"module-count\": 0,\n        \"modules\": 
null,\n        \"organization-name\": \"acme-corp\",\n        \"project-external-id\": \"prj-uU2xNqGeG86U9WgT\",\n        \"project-name\": \"web\",\n        \"provider-count\": 0,\n        \"providers\": null,\n        \"resources-drifted\": 3,\n        \"resources-undrifted\": 3,\n        \"state-version-terraform-version\": \"1.4.5\",\n        \"vcs-repo-identifier\": \"acmecorp\/api\",\n        \"workspace-created-at\": \"2023-04-25T17:09:14.960+00:00\",\n        \"workspace-name\": \"api-service\",\n        \"workspace-terraform-version\": \"1.4.5\",\n        \"workspace-updated-at\": \"2023-12-08T20:43:16.779+00:00\"\n      },\n      \"id\": \"ws-igUVNQSYVXRkhYuo\",\n      \"type\": \"visibility-workspace\"\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/acme-corp\/explorer?page%5Bnumber%5D=1&page%5Bsize%5D=20&type=workspaces\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/acme-corp\/explorer?page%5Bnumber%5D=1&page%5Bsize%5D=20&type=workspaces\",\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/acme-corp\/explorer?page%5Bnumber%5D=1&page%5Bsize%5D=20&type=workspaces\",\n    \"prev\": null,\n    \"next\": null\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"next-page\": null,\n      \"prev-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 2\n    }\n  }\n}\n```\n\n## Export data as CSV\n\n`GET \/organizations\/:organization_name\/explorer\/export\/csv`\n\n| Parameter            | Description |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to query. The organization must already exist in the system, and the user must have permissions to read the workspaces and projects within it. 
|\n\nThis endpoint can be used to download a full, unpaged export of the query results\nin CSV format (with the filter, sort, and field selections applied).\n\nExplorer queries will time out after 60 seconds. If the desired query is timing\nout, try a simpler query that uses fewer filters.\n\nRequests to the explorer are subject to [rate\nlimiting](\/terraform\/cloud-docs\/api-docs#rate-limiting). Explorer will return a\n429 status code when rate limiting is active for the authenticated context.\n\nData presented by the explorer is eventually consistent. Query results typically\ncontain data that represents the current state of the system, but in some cases\na small delay may be necessary before querying for data that has recently been\nupdated.\n\n### Query parameters\n\nThis GET endpoint supports the same query string parameters as the Explorer [Query\nendpoint](#execute-a-query).\n\n### Sample requests\n\n**View Type: `workspaces`**\n\n_Show data for all workspaces_\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer\/export\/csv?type=workspaces\n```\n\n### Sample response\n\n_Show data for all workspaces_\n\n```csv\nall_checks_succeeded,checks_errored,checks_failed,checks_passed,checks_unknown,current_run_applied_at,current_run_external_id,current_run_status,drifted,external_id,module_count,modules,organization_name,project_external_id,project_name,provider_count,providers,resources_drifted,resources_undrifted,state_version_terraform_version,vcs_repo_identifier,workspace_created_at,workspace_name,workspace_terraform_version,workspace_updated_at\n\"true\",\"0\",\"0\",\"0\",\"0\",\"\",\"run-rdoEKh2Gy3K6SbCZ\",\"planned_and_finished\",\"false\",\"ws-j2sAeWRxou1b5HYf\",\"0\",\"\",\"acme-corp\",\"prj-V61QLE8tvs4NvVZG\",\"Default 
Project\",\"0\",\"\",\"0\",\"0\",\"1.5.7\",\"\",\"2023-10-17T21:56:45+00:00\",\"payments-service\",\"1.5.7\",\"2023-12-13T15:48:16+00:00\"\n\"true\",\"0\",\"0\",\"0\",\"0\",\"2023-08-18T16:24:59+00:00\",\"run-9MmJaoQTvDCh5wUa\",\"applied\",\"true\",\"ws-igUVNQSYVXRkhYuo\",\"0\",\"\",\"acme-corp\",\"prj-uU2xNqGeG86U9WgT\",\"web\",\"0\",\"\",\"3\",\"3\",\"1.4.5\",\"acmecorp\/api\",\"2023-04-25T17:09:14+00:00\",\"api-service\",\"1.4.5\",\"2023-12-13T15:29:03+00:00\"\n```\n\n## List saved views\n\n-> **Note**: The ability to save views in the explorer is in public beta. All APIs and workflows are subject to change.\n\nUse this endpoint to display all of the explorer's [saved views](\/terraform\/cloud-docs\/workspaces\/explorer#saved-views) in an organization.\n\n`GET \/api\/v2\/organizations\/:organization_name\/explorer\/views`\n\n| Parameter            | Description |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to query. To view a query, you must have permission to read the workspaces and projects within that query. 
|\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --request GET \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer\/views'\n```\n\n### Sample response\n\n```json\n{\n  \"data\": [{\n    \"id\": \"sq-P9Yn6Ad6EiVMz77s\",\n    \"type\": \"explorer-saved-queries\",\n    \"attributes\": {\n      \"name\": \"my saved query\",\n      \"created-at\": \"2024-10-11T16:18:51.442Z\",\n      \"query\": {\n        \"filter\": [{\n          \"field\": \"workspace_name\",\n          \"operator\": \"contains\",\n          \"value\": [\"child\"]\n        },\n        {\n          \"field\": \"workspace_name\",\n          \"operator\": \"contains\",\n          \"value\": [\"-\"]\n        }],\n        \"type\": \"workspaces\"\n      },\n      \"query-type\": \"workspaces\"\n    }\n  }]\n}\n```\n\n## Create saved view\n\n-> **Note**: The ability to save views in the explorer is in public beta. All APIs and workflows are subject to change.\n\nUse this endpoint to create a [saved view](\/terraform\/cloud-docs\/workspaces\/explorer#saved-views) in the explorer.\n\n`POST \/api\/v2\/organizations\/:organization_name\/explorer\/views`\n\n| Parameter            | Description |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to query. To view a query, you must have permission to read the workspaces and projects within that query. |\n\nTo create a saved view, provide the following data in your payload.\n\n| Key path   | Type | Default | Description |\n|------------|------|---------|-------------|\n| `data.name`         | string |   | The name of the saved view. |\n| `data.query_type`   | string |   | The primary type that the view is querying. Refer to [View Types](#view-types) for details. 
|\n| `data.query`        | object |   | A list of filters, fields, and sorting rules for the view. Refer to [Query parameters](#query-parameters) for full details. |\n| `data.query.filter`          | array\[objects] |   | A list of filters composed of fields, operators, and values. |\n| `data.query.filter.field`    | string        |   | The field name to filter by. |\n| `data.query.filter.operator` | string        |   | The operator applied to field and value. |\n| `data.query.filter.value`    | array\[strings] |   | A list of values to filter by. |\n| `data.query.fields` | array\[strings] |   | A list of fields to include in the view. |\n| `data.query.sort`   | array\[strings] |   | A list of fields to sort by. Prefix with `-` for descending sort. |\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --request POST \\\n  --data @payload.json \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer\/views'\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"sq-P9Yn6Ad6EiVMz77s\",\n    \"type\": \"explorer-saved-queries\",\n    \"attributes\": {\n      \"name\": \"my saved query\",\n      \"created-at\": \"2024-10-11T16:18:51.442Z\",\n      \"query\": {\n        \"filter\": [{\n          \"field\": \"workspace_name\",\n          \"operator\": \"contains\",\n          \"value\": [\"child\"]\n        },\n        {\n          \"field\": \"workspace_name\",\n          \"operator\": \"contains\",\n          \"value\": [\"-\"]\n        }],\n        \"type\": \"workspaces\"\n      },\n      \"query-type\": \"workspaces\"\n    }\n  }\n}\n```\n\n## Show saved view\n\nUse this endpoint to display a specific saved view from the explorer.\n\n`GET \/api\/v2\/organizations\/:organization_name\/explorer\/views\/:view_id`\n\n| Parameter            | Description |\n| -------------------- | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to query. To view a query, you must have permission to read the workspaces and projects within that query. |\n| `:view_id`           | The ID of the saved view to show. |\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --request GET \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer\/views\/$VIEW_ID'\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"sq-P9Yn6Ad6EiVMz77s\",\n    \"type\": \"explorer-saved-queries\",\n    \"attributes\": {\n      \"name\": \"my saved query\",\n      \"created-at\": \"2024-10-11T16:18:51.442Z\",\n      \"query\": {\n        \"filter\": [{\n          \"field\": \"workspace_name\",\n          \"operator\": \"contains\",\n          \"value\": [\"child\"]\n        },\n        {\n          \"field\": \"workspace_name\",\n          \"operator\": \"contains\",\n          \"value\": [\"-\"]\n        }],\n        \"type\": \"workspaces\"\n      },\n      \"query-type\": \"workspaces\"\n    }\n  }\n}\n```\n\n## Update a saved view\n\nUse this endpoint to update the data a saved view queries.\n\n`PATCH \/api\/v2\/organizations\/:organization_name\/explorer\/views\/:view_id`\n\n| Parameter            | Description |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to query. To view a query, you must have permission to read the workspaces and projects within that query. |\n| `:view_id`           | The ID of the saved view to update. 
|\n\nYour request payload must include the following fields.\n\n| Key path   | Type | Default | Description |\n|------------|------|---------|-------------|\n| `data.name`         | string |   | The name of the saved view. |\n| `data.query`        | object |   | A list of filters, fields, and sorting rules for the view. See [Query parameters](#query-parameters) for full details. |\n| `data.query.filter`          | array\[objects] |   | A list of filters composed of fields, operators, and values. See [Filtering](#filtering) for more information. |\n| `data.query.filter.field`    | string        |   | The field name to filter by. |\n| `data.query.filter.operator` | string        |   | The operator applied to field and value. |\n| `data.query.filter.value`    | array\[strings] |   | A list of values to filter by. |\n| `data.query.fields` | array\[strings] |   | A list of fields to include in the view. Refer to [execute a query](#execute-a-query) for a list of available parameters. |\n| `data.query.sort`   | array\[strings] |   | A list of fields to sort by. Prefix with `-` for descending sort. 
|\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer\/views\/$VIEW_ID'\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"sq-P9Yn6Ad6EiVMz77s\",\n    \"type\": \"explorer-saved-queries\",\n    \"attributes\": {\n      \"name\": \"my saved query\",\n      \"created-at\": \"2024-10-11T16:18:51.442Z\",\n      \"query\": {\n        \"filter\": [{\n          \"field\": \"workspace_name\",\n          \"operator\": \"contains\",\n          \"value\": [\"candy\"]\n        },\n        {\n          \"field\": \"workspace_name\",\n          \"operator\": \"contains\",\n          \"value\": [\"pumpkins\"]\n        }],\n        \"type\": \"workspaces\"\n      },\n      \"query-type\": \"workspaces\"\n    }\n  }\n}\n```\n\n## Delete a saved view\n\nUse this endpoint to delete a saved view.\n\n`DELETE \/api\/v2\/organizations\/:organization_name\/explorer\/views\/:view_id`\n\n| Parameter            | Description |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to query. To view a query, you must have permission to read the workspaces and projects within that query. |\n| `:view_id`           | The ID of the saved view you want to delete. 
|\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --request DELETE \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer\/views\/$VIEW_ID'\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"sq-P9Yn6Ad6EiVMz77s\",\n    \"type\": \"explorer-saved-queries\",\n    \"attributes\": {\n      \"name\": \"my saved query\",\n      \"created-at\": \"2024-10-11T16:18:51.442Z\",\n      \"query\": {\n        \"filter\": [{\n          \"field\": \"workspace_name\",\n          \"operator\": \"contains\",\n          \"value\": [\"candy\"]\n        },\n        {\n          \"field\": \"workspace_name\",\n          \"operator\": \"contains\",\n          \"value\": [\"pumpkins\"]\n        }],\n        \"type\": \"workspaces\"\n      },\n      \"query-type\": \"workspaces\"\n    }\n  }\n}\n```\n\n## Query a saved view\n\nRe-execute a saved view's query to retrieve current results.\n\n`GET \/api\/v2\/organizations\/:organization_name\/explorer\/views\/:view_id\/results`\n\n| Parameter            | Description |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to query. To view a query, you must have permission to read the workspaces and projects within that query. |\n| `:view_id`           | The ID of the saved view to re-query. 
|\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --request GET \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer\/views\/$VIEW_ID\/results'\n```\n\n### Sample response\n\n```json\n{\n  \"data\": [\n    {\n      \"attributes\": {\n        \"all-checks-succeeded\": true,\n        \"checks-errored\": 0,\n        \"checks-failed\": 0,\n        \"checks-passed\": 0,\n        \"checks-unknown\": 0,\n        \"current-run-applied-at\": null,\n        \"current-run-external-id\": \"run-rdoEKh2Gy3K6SbCZ\",\n        \"current-run-status\": \"planned_and_finished\",\n        \"drifted\": false,\n        \"external-id\": \"ws-j2sAeWRxou1b5HYf\",\n        \"module-count\": 0,\n        \"modules\": null,\n        \"organization-name\": \"acme-corp\",\n        \"project-external-id\": \"prj-V61QLE8tvs4NvVZG\",\n        \"project-name\": \"Default Project\",\n        \"provider-count\": 0,\n        \"providers\": null,\n        \"resources-drifted\": 0,\n        \"resources-undrifted\": 0,\n        \"state-version-terraform-version\": \"1.5.7\",\n        \"vcs-repo-identifier\": null,\n        \"workspace-created-at\": \"2023-10-17T21:56:45.570+00:00\",\n        \"workspace-name\": \"candy distribution system\",\n        \"workspace-terraform-version\": \"1.5.7\",\n        \"workspace-updated-at\": \"2023-12-08T19:58:15.513+00:00\"\n      },\n      \"id\": \"ws-j2sAeWRxou1b5HYf\",\n      \"type\": \"visibility-workspace\"\n    },\n    {\n      \"attributes\": {\n        \"all-checks-succeeded\": true,\n        \"checks-errored\": 0,\n        \"checks-failed\": 0,\n        \"checks-passed\": 0,\n        \"checks-unknown\": 0,\n        \"current-run-applied-at\": \"2023-08-18T16:24:59.000+00:00\",\n        \"current-run-external-id\": \"run-9MmJaoQTvDCh5wUa\",\n        \"current-run-status\": \"applied\",\n        \"drifted\": true,\n        \"external-id\": \"ws-igUVNQSYVXRkhYuo\",\n        
\"module-count\": 0,\n        \"modules\": null,\n        \"organization-name\": \"acme-corp\",\n        \"project-external-id\": \"prj-uU2xNqGeG86U9WgT\",\n        \"project-name\": \"web\",\n        \"provider-count\": 0,\n        \"providers\": null,\n        \"resources-drifted\": 3,\n        \"resources-undrifted\": 3,\n        \"state-version-terraform-version\": \"1.4.5\",\n        \"vcs-repo-identifier\": \"acmecorp\/api\",\n        \"workspace-created-at\": \"2023-04-25T17:09:14.960+00:00\",\n        \"workspace-name\": \"pumpkin smasher\",\n        \"workspace-terraform-version\": \"1.4.5\",\n        \"workspace-updated-at\": \"2023-12-08T20:43:16.779+00:00\"\n      },\n      \"id\": \"ws-igUNVQSYXVRkhYuo\",\n      \"type\": \"visibility-workspace\"\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/acme-corp\/explorer?page%5Bnumber%5D=1&page%5Bsize%5D=20&type=workspaces\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/acme-corp\/explorer?page%5Bnumber%5D=1&page%5Bsize%5D=20&type=workspaces\",\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/acme-corp\/explorer?page%5Bnumber%5D=1&page%5Bsize%5D=20&type=workspaces\",\n    \"prev\": null,\n    \"next\": null\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"next-page\": null,\n      \"prev-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 2\n    }\n  }\n}\n```\n\n## Query a saved view as CSV\n\nRe-execute a saved view's query to retrieve current results, HCP Terraform returns results as a CSV.\n\n`GET \/api\/v2\/organizations\/:organization_name\/explorer\/views\/:view_id\/csv`\n\n| Parameter            | Description |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name 
of the organization to query. To view a query, you must have permission to read the workspaces and projects within that query. |\n| `:view_id`           | The ID of the saved view to query. |\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --request GET \\\n  'https:\/\/app.terraform.io\/api\/v2\/organizations\/$ORG_NAME\/explorer\/views\/$VIEW_ID\/csv'\n```\n\n### Sample response\n\n```csv\nall_checks_succeeded,checks_errored,checks_failed,checks_passed,checks_unknown,current_run_applied_at,current_run_external_id,current_run_status,drifted,external_id,module_count,modules,organization_name,project_external_id,project_name,provider_count,providers,resources_drifted,resources_undrifted,state_version_terraform_version,vcs_repo_identifier,workspace_created_at,workspace_name,workspace_terraform_version,workspace_updated_at\n\"true\",\"0\",\"0\",\"0\",\"0\",\"\",\"run-rdoEKh2Gy3K6SbCZ\",\"planned_and_finished\",\"false\",\"ws-j2sAeWRxou1b5HYf\",\"0\",\"\",\"acme-corp\",\"prj-V61QLE8tvs4NvVZG\",\"Default Project\",\"0\",\"\",\"0\",\"0\",\"1.5.7\",\"\",\"2023-10-17T21:56:45+00:00\",\"candy distribution service\",\"1.5.7\",\"2023-12-13T15:48:16+00:00\"\n\"true\",\"0\",\"0\",\"0\",\"0\",\"2023-08-18T16:24:59+00:00\",\"run-9MmJaoQTvDCh5wUa\",\"applied\",\"true\",\"ws-igUVNQSYVXRkhYuo\",\"0\",\"\",\"acme-corp\",\"prj-uU2xNqGeG86U9WgT\",\"web\",\"0\",\"\",\"3\",\"3\",\"1.4.5\",\"acmecorp\/api\",\"2023-04-25T17:09:14+00:00\",\"api-service\",\"1.4.5\",\"2023-12-13T15:29:03+00:00\"\n```
that query        view id              The external id of the saved view to update     You must have the following fields in the payload for your request     Key path     Type   Default   Description                                                    data name            string       The name of the saved view       data query           object       A list of filters  fields  and sorting rules for the view  See  Query parameters   query parameters  for full details       data query filter             array  objects       A list of filters composed of fields  operators  and values  See  Filtering   filtering  for more information       data query filter field       string              The field name to filter by       data query filter operator    string              The operator applied to field and value       data query filter value       array  strings        A list of values to filter by       data query fields    array  strings        A list of fields to include in the view  Refer to  execute a query   execute a query  for a list of available parameters        data query sort      array  strings        A list of fields to sort by  Prefix with     for descending sort         Sample request     shell curl       header  Authorization  Bearer  TOKEN        request PATCH       data  payload json      https   app terraform io api v2 organizations  ORG NAME explorer viewsi  VIEW ID           Sample response     json      data          id    sq P9Yn6Ad6EiVMz77s        type    explorer saved queries        attributes            name    my saved query          created at    2024 10 11T16 18 51 442Z          query              filter                 field    workspace name              operator    contains              value     candy                                    field    workspace name              operator    contains              value     pumpkins                         type    workspaces                  query type    workspaces                      Delete a 
saved view  Use this endpoint to delete a saved view    DELETE  api v2 organizations  organization name explorer views  view id     Parameter              Description                                                                                                                                                                                                           organization name    The name of the organization to query  To view a query  you must have permission to read the workspaces and projects within that query        view id              The ID of the saved view you want to delete         Sample request     shell curl       header  Authorization  Bearer  TOKEN        request DELETE      https   app terraform io api v2 organizations  ORG NAME explorer views  VIEW ID           Sample response     json      data          id    sq P9Yn6Ad6EiVMz77s        type    explorer saved queries        attributes            name    my saved query          created at    2024 10 11T16 18 51 442Z          query              filter                 field    workspace name              operator    contains              value     candy                                    field    workspace name              operator    contains              value     pumpkins                         type    workspaces                  query type    workspaces                      Query a saved view  Re execute a saved view s query to retrieve current results     GET  api v2 organizations  organization name explorer views  view id results     Parameter              Description                                                                                                                                                                                                           organization name    The name of the organization to query  To view a query  you must have permission to read the workspaces and projects within that query        view id              The ID of the saved view to re query      
   Sample request     shell curl       header  Authorization  Bearer  TOKEN        request GET      https   app terraform io api v2 organizations  ORG NAME explorer views  VIEW ID results           Sample response     json      data                  attributes              all checks succeeded   true           checks errored   0           checks failed   0           checks passed   0           checks unknown   0           current run applied at   null           current run external id    run rdoEKh2Gy3K6SbCZ            current run status    planned and finished            drifted   false           external id    ws j2sAeWRxou1b5HYf            module count   0           modules   null           organization name    acme corp            project external id    prj V61QLE8tvs4NvVZG            project name    Default Project            provider count   0           providers   null           resources drifted   0           resources undrifted   0           state version terraform version    1 5 7            vcs repo identifier   null           workspace created at    2023 10 17T21 56 45 570 00 00            workspace name    candy distribution system            workspace terraform version    1 5 7            workspace updated at    2023 12 08T19 58 15 513 00 00                  id    ws j2sAWeRxuo1b5HYf          type    visibility workspace                      attributes              all checks succeeded   true           checks errored   0           checks failed   0           checks passed   0           checks unknown   0           current run applied at    2023 08 18T16 24 59 000 00 00            current run external id    run 9MmJaoQTvDCh5wUa            current run status    applied            drifted   true           external id    ws igUVNQSYVXRkhYuo            module count   0           modules   null           organization name    acme corp            project external id    prj uU2xNqGeG86U9WgT            project name    web            provider count   0          
 providers   null           resources drifted   3           resources undrifted   3           state version terraform version    1 4 5            vcs repo identifier    acmecorp api            workspace created at    2023 04 25T17 09 14 960 00 00            workspace name    pumpkin smasher            workspace terraform version    1 4 5            workspace updated at    2023 12 08T20 43 16 779 00 00                  id    ws igUNVQSYXVRkhYuo          type    visibility workspace                links          self    https   app terraform io api v2 organizations acme corp explorer page 5Bnumber 5D 1 page 5Bsize 5D 20 type workspaces        first    https   app terraform io api v2 organizations acme corp explorer page 5Bnumber 5D 1 page 5Bsize 5D 20 type workspaces        last    https   app terraform io api v2 organizations acme corp explorer page 5Bnumber 5D 1 page 5Bsize 5D 20 type workspaces        prev   null       next   null         meta          pagination            current page   1         page size   20         next page   null         prev page   null         total pages   1         total count   2                     Query a saved view as CSV  Re execute a saved view s query to retrieve current results  HCP Terraform returns results as a CSV    GET  api v2 organizations  organization name explorer views  view id csv     Parameter              Description                                                                                                                                                                                                           organization name    The name of the organization to query  To view a query  you must have permission to read the workspaces and projects within that query        view id              The external id of the saved view to query         Sample request     shell curl       header  Authorization  Bearer  TOKEN        request GET      https   app terraform io api v2 organizations  ORG NAME explorer views  
VIEW ID csv           Sample response     csv all checks succeeded checks errored checks failed checks passed checks unknown current run applied at current run external id current run status drifted external id module count modules organization name project external id project name provider count providers resources drifted resources undrifted state version terraform version vcs repo identifier workspace created at workspace name workspace terraform version workspace updated at  true   0   0   0   0      run rdoEKh2Gy3K6SbCZ   planned and finished   false   ws j2sAeWRxou1b5HYf   0      acme corp   prj V61QLE8tvs4NvVZG   Default Project   0      0   0   1 5 7      2023 10 17T21 56 45 00 00   candy distribution service   1 5 7   2023 12 13T15 48 16 00 00   true   0   0   0   0   2023 08 18T16 24 59 00 00   run 9MmJaoQTvDCh5wUa   applied   true   ws igUVNQSYVXRkhYuo   0      acme corp   prj uU2xNqGeG86U9WgT   web   0      3   3   1 4 5   acmecorp api   2023 04 25T17 09 14 00 00   api service   1 4 5   2023 12 13T15 29 03 00 00    "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title Configuration Versions API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 Use the configuration versions endpoint to manage configuration versions for a workspace List show and create configuration versions and their files using the HTTP API","answers":"---\npage_title: Configuration Versions - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/configuration-versions` endpoint to manage configuration versions for a workspace. List, show, and create configuration versions and their files using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[302]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/302\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# 
Configuration Versions API\n\n-> **Note:** Before working with the runs or configuration versions APIs, read the [API-driven run workflow](\/terraform\/cloud-docs\/run\/api) page, which includes both a full overview of this workflow and a walkthrough of a simple implementation of it.\n\nA configuration version (`configuration-version`) is a resource used to reference the uploaded configuration files. It is associated with a run, which uses the uploaded configuration files to perform the plan and apply.\n\nYou need read runs permission to list and view configuration versions for a workspace, and you need queue plans permission to create new configuration versions. Refer to the [permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-workspace-permissions) documentation for more details.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Attributes\n\n### Configuration Version States\n\nThe configuration version state is found in `data.attributes.status`, and you can reference the following list of possible states.\n\nA configuration version created through the API or CLI can only be used in runs if it is in an `uploaded` state.
A configuration version created through a linked VCS repository may also be used in runs if it is in an `archived` state.\n\n| State                  | Description                                                                                                                                                                                                                                                                                                                                                                                                                        |\n| ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `pending`              | The initial status of a configuration version after it has been created. Pending configuration versions cannot be used to create new runs.                                                                                                                                                               |\n| `fetching`             | For configuration versions created from a commit to a connected VCS repository, HCP Terraform is currently fetching the associated files from VCS.                                                                                                                                                     |\n| `uploaded`             | The configuration version is fully processed and can be used in runs.                                                                                                                                                                                                                                    
|\n| `archived`             | All immediate runs are complete and HCP Terraform has discarded the files associated with this configuration version. If the configuration version was created through a connected VCS repository, it can still be used in new runs. In those cases, HCP Terraform will re-fetch the files from VCS. |\n| `errored`              | HCP Terraform could not process this configuration version, and it cannot be used to create new runs. You can try again by pushing a new commit to your linked VCS repository or creating a new configuration version with the API or CLI. |\n| `backing_data_soft_deleted`  | <EnterpriseAlert inline \/> Indicates that the configuration version's backing data is marked for garbage collection. If no action is taken, the backing data associated with this configuration version is permanently deleted after a set number of days. You can restore the backing data associated with the configuration version before it is permanently deleted. |\n| `backing_data_permanently_deleted`  | <EnterpriseAlert inline \/> The configuration version's backing data has been permanently deleted and can no longer be restored.
|\n\n## List Configuration Versions\n\n`GET \/workspaces\/:workspace_id\/configuration-versions`\n\n| Parameter       | Description                                                                                                                                                                                                       |\n| --------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The ID of the workspace to list configurations from. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                            |\n| -------------- | -------------------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.                     |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 configuration versions per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-2Qhk7LHgbMrm3grF\/configuration-versions\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"cv-ntv3HbhJqvFzamy7\",\n      \"type\": \"configuration-versions\",\n      \"attributes\": {\n        \"error\": null,\n        \"error-message\": null,\n        \"source\": \"gitlab\",\n        \"speculative\":false,\n        \"status\": \"uploaded\",\n        \"status-timestamps\": {},\n        \"provisional\": false\n      },\n      \"relationships\": {\n        \"ingress-attributes\": {\n          \"data\": {\n            \"id\": \"ia-i4MrTxmQXYxH2nYD\",\n            \"type\": \"ingress-attributes\"\n          },\n          \"links\": {\n            \"related\":\n              \"\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\/ingress-attributes\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\",\n        \"download\": \"\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\/download\"\n      }\n    }\n  ]\n}\n```\n\n## Show a Configuration Version\n\n`GET \/configuration-versions\/:configuration-id`\n\n| Parameter           | Description                          |\n| ------------------- | ------------------------------------ |\n| `:configuration-id` | The id of the configuration to show. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"cv-ntv3HbhJqvFzamy7\",\n    \"type\": \"configuration-versions\",\n    \"attributes\": {\n      \"error\": null,\n      \"error-message\": null,\n      \"source\": \"gitlab\",\n      \"speculative\":false,\n      \"status\": \"uploaded\",\n      \"status-timestamps\": {},\n      \"provisional\": false\n    },\n    \"relationships\": {\n      \"ingress-attributes\": {\n        \"data\": {\n          \"id\": \"ia-i4MrTxmQXYxH2nYD\",\n          \"type\": \"ingress-attributes\"\n        },\n        \"links\": {\n          \"related\":\n            \"\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\/ingress-attributes\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\",\n      \"download\": \"\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\/download\"\n    }\n  }\n}\n```\n\n## Show a Configuration Version's Commit Information\n\nAn ingress attributes resource (`ingress-attributes`) is used to reference commit information for configuration versions created in a workspace with a VCS repository.\n\n`GET \/configuration-versions\/:configuration-id\/ingress-attributes`\n\n| Parameter           | Description                          |\n| ------------------- | ------------------------------------ |\n| `:configuration-id` | The id of the configuration to show. 
|\n\nIngress attributes can also be fetched as part of a query to a particular configuration version via `include`:\n\n`GET \/configuration-versions\/:configuration-id?include=ingress-attributes`\n\n| Parameter           | Description                          |\n| ------------------- | ------------------------------------ |\n| `:configuration-id` | The id of the configuration to show. |\n\n<!-- Note: \/ingress-attributes\/:ingress-attributes-id is purposely not documented here, as its\nusefulness is questionable given the routes above; IAs are inherently a part of a CV and their\nseparate resource is a vestige of the original Terraform Enterprise -->\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/configuration-versions\/cv-TrHjxIzad9Ae9i8x\/ingress-attributes\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"ia-zqHjxJzaB9Ae6i9t\",\n    \"type\": \"ingress-attributes\",\n    \"attributes\": {\n      \"branch\": \"add-cool-stuff\",\n      \"clone-url\": \"https:\/\/github.com\/hashicorp\/foobar.git\",\n      \"commit-message\": \"Adding really cool infrastructure\",\n      \"commit-sha\": \"1e1c1018d1bbc0b8517d072718e0d87c1a0eda95\",\n      \"commit-url\": \"https:\/\/github.com\/hashicorp\/foobar\/commit\/1e1c1018d1bbc0b8517d072718e0d87c1a0eda95\",\n      \"compare-url\": \"https:\/\/github.com\/hashicorp\/foobar\/pull\/163\",\n      \"identifier\": \"hashicorp\/foobar\",\n      \"is-pull-request\": true,\n      \"on-default-branch\": false,\n      \"pull-request-number\": 163,\n      \"pull-request-url\": \"https:\/\/github.com\/hashicorp\/foobar\/pull\/163\",\n      \"pull-request-title\": \"Adding really cool infrastructure\",\n      \"pull-request-body\": \"These are changes to add really cool stuff. 
We should absolutely merge this.\",\n      \"tag\": null,\n      \"sender-username\": \"chrisarcand\",\n      \"sender-avatar-url\": \"https:\/\/avatars.githubusercontent.com\/u\/2430490?v=4\",\n      \"sender-html-url\": \"https:\/\/github.com\/chrisarcand\"\n    },\n    \"relationships\": {\n      \"created-by\": {\n        \"data\": {\n          \"id\": \"user-PQk2Z3dfXAax18P6s\",\n          \"type\": \"users\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/ingress-attributes\/ia-zqHjxJzaB9Ae6i9t\/created-by\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/ingress-attributes\/ia-zqHjxJzaB9Ae6i9t\"\n    }\n  }\n}\n```\n\n## Create a Configuration Version\n\n`POST \/workspaces\/:workspace_id\/configuration-versions`\n\n| Parameter       | Description                                                                                                                                                                                                                      |\n| --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The ID of the workspace to create the new configuration version in. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. |\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). 
You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                          | Type    | Default | Description                                                                                                                                                                                                                   |\n| --------------------------------- | ------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.attributes.auto-queue-runs` | boolean | true    | When true, runs are queued automatically when the configuration version is uploaded.                                                                                                                                          |\n| `data.attributes.speculative`     | boolean | false   | When true, this configuration version may only be used to create runs which are speculative, that is, can neither be confirmed nor applied.                                                                                   |\n| `data.attributes.provisional`     | boolean | false   | When true, this configuration version does not immediately become the workspace current configuration version. 
If the associated run is applied, it then becomes the current configuration version unless a newer one exists.|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"configuration-versions\",\n    \"attributes\": {\n      \"auto-queue-runs\": true\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-2Qhk7LHgbMrm3grF\/configuration-versions\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"cv-UYwHEakurukz85nW\",\n    \"type\": \"configuration-versions\",\n    \"attributes\": {\n      \"auto-queue-runs\": true,\n      \"error\": null,\n      \"error-message\": null,\n      \"source\": \"tfe-api\",\n      \"speculative\":false,\n      \"status\": \"pending\",\n      \"status-timestamps\": {},\n      \"upload-url\":\n        \"https:\/\/archivist.terraform.io\/v1\/object\/9224c6b3-2e14-4cd7-adff-ed484d7294c2\",\n      \"provisional\": false\n    },\n    \"relationships\": {\n      \"ingress-attributes\": {\n        \"data\": null,\n        \"links\": {\n          \"related\":\n            \"\/api\/v2\/configuration-versions\/cv-UYwHEakurukz85nW\/ingress-attributes\"\n        }\n      }\n    },\n    \"links\": { \"self\": \"\/api\/v2\/configuration-versions\/cv-UYwHEakurukz85nW\" }\n  }\n}\n```\n\n### Configuration Files Upload URL\n\nOnce a configuration version is created, use the `upload-url` attribute to [upload configuration files](#upload-configuration-files) associated with that version. 
The `upload-url` attribute is only provided in the response when creating configuration versions.\n\n## Upload Configuration Files\n\n-> **Note**: If `auto-queue-runs` was either not provided or set to `true` during creation of the configuration version, a run using this configuration version will be automatically queued on the workspace. If `auto-queue-runs` was set to `false` explicitly, then it is necessary to [create a run on the workspace](\/terraform\/cloud-docs\/api-docs\/run#create-a-run) manually after the configuration version is uploaded.\n\n`PUT https:\/\/archivist.terraform.io\/v1\/object\/<UNIQUE OBJECT ID>`\n\n**The URL is provided in the `upload-url` attribute when creating a `configuration-versions` resource. After creation, the URL is hidden on the resource and no longer available.**\n\n### Sample Request\n\n**@filename is the name of the configuration file you wish to upload.**\n\n```shell\ncurl \\\n  --header \"Content-Type: application\/octet-stream\" \\\n  --request PUT \\\n  --data-binary @filename \\\n  https:\/\/archivist.terraform.io\/v1\/object\/4c44d964-eba7-4dd5-ad29-1ece7b99e8da\n```\n\n## Archive a Configuration Version\n\n`POST \/configuration-versions\/:configuration_version_id\/actions\/archive`\n\n| Parameter                   | Description                                    |\n| --------------------------- | ---------------------------------------------- |\n| `configuration_version_id`  | The ID of the configuration version to archive. |\n\nThis endpoint notifies HCP Terraform to discard the uploaded `.tar.gz` file associated with this configuration version. This endpoint can only archive configuration versions that were created with the API or CLI, are in an `uploaded` [state](#configuration-version-states), have no runs in progress, and are not the current configuration version for any workspace.
Otherwise, calling this endpoint will result in an error.\n\nHCP Terraform automatically archives configuration versions created through VCS when associated runs are complete and then re-fetches the files for subsequent runs.\n\n| Status  | Response                  | Reason(s)                                                                                                                                     |\n| ------- | ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |\n| [202][] | none                      | Successfully initiated the archive process.                                                                                                   |\n| [409][] | [JSON API error object][] | Configuration version was in a non-archivable state or the configuration version was created with VCS and cannot be archived through the API. |\n| [404][] | [JSON API error object][] | Configuration version was not found or user not authorized.                                                                                   |\n\n### Request Body\n\nThis POST endpoint does not take a request body.\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\/actions\/archive\n```\n\n## Download Configuration Files\n\n`GET \/configuration-versions\/:configuration_version_id\/download`\n\n| Parameter                   | Description                                      |\n| --------------------------- | ------------------------------------------------ |\n| `configuration_version_id`  | The ID of the configuration version to download. 
|\n\n`GET \/runs\/:run_id\/configuration-version\/download`\n\n| Parameter                   | Description                                                         |\n| --------------------------- | ------------------------------------------------------------------- |\n| `run_id`                    | The ID of the run whose configuration version should be downloaded. |\n\nThese endpoints generate a temporary URL to the location of the configuration version in a `.tar.gz` archive, and then redirect to that link. If using a client that can follow redirects, you can use these endpoints to save the `.tar.gz` archive locally without needing to save the temporary URL. These endpoints will return an error if attempting to download a configuration version that is not in an `uploaded` [state](#configuration-version-states).\n\n| Status  | Response                  | Reason                                                                                                                      |\n|---------|---------------------------|-----------------------------------------------------------------------------------------------------------------------------|\n| [302][] | HTTP Redirect             | Configuration version found and temporary download URL generated                                                            |\n| [404][] | [JSON API error object][] | Configuration version not found, or specified configuration version is not uploaded, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --location \\\n  https:\/\/app.terraform.io\/api\/v2\/configuration-versions\/cv-C6Py6WQ1cUXQX2x2\/download \\\n  > export.tar.gz\n```\n\n## Mark a Configuration Version for Garbage Collection\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. 
<a href=\"https:\/\/developer.hashicorp.com\/terraform\/enterprise\">Learn more about Terraform Enterprise<\/a>.\n<\/EnterpriseAlert>\n\n`POST \/api\/v2\/configuration-versions\/:configuration-id\/actions\/soft_delete_backing_data`\n\n| Parameter                   | Description                                    |\n| --------------------------- | ---------------------------------------------- |\n| `:configuration-id`  | The ID of the configuration version to soft delete. |\n\nThis endpoint directs Terraform Enterprise to _soft delete_ the backing files associated with the configuration version. Soft deletion refers to marking the configuration version for garbage collection. Terraform permanently deletes configuration versions marked for soft deletion after a set number of days unless the configuration version is restored. Once a configuration version is soft deleted, any attempts to read the configuration version will fail. Refer to [Configuration Version States](#configuration-version-states) for information about all data states.\n\nThis endpoint can only soft delete configuration versions that meet the following criteria:\n\n- Were created using the API or CLI,\n- are in an [`uploaded` state](#configuration-version-states),\n- and are not the current configuration version.\n\nOtherwise, the endpoint returns an error.\n\n| Status  | Response                  | Reason(s)                                                                                                                                     |\n| ------- | ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |\n| [200][] | none                      | Terraform successfully marked the data for garbage collection.                                                                                               
|\n| [400][] | [JSON API error object][] | Terraform failed to transition the state to `backing_data_soft_deleted`.                                                                          |\n| [404][] | [JSON API error object][] | Terraform did not find the configuration version or the user is not authorized to modify the configuration version state.                                                                                     |\n\n### Request Body\n\nThis POST endpoint does not take a request body.\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\/actions\/soft_delete_backing_data\n```\n\n## Restore Configuration Versions Marked for Garbage Collection\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href=\"https:\/\/developer.hashicorp.com\/terraform\/enterprise\">Learn more about Terraform Enterprise<\/a>.\n<\/EnterpriseAlert>\n\n`POST \/api\/v2\/configuration-versions\/:configuration-id\/actions\/restore_backing_data`\n\n| Parameter                   | Description                                    |\n| --------------------------- | ---------------------------------------------- |\n| `:configuration-id`  | The ID of the configuration version to restore back to its `uploaded` state, if applicable. |\n\nThis endpoint directs Terraform Enterprise to restore the backing files associated with this configuration version. This endpoint can only restore configuration versions that meet the following criterion:\n\n- are not in a [`backing_data_permanently_deleted` state](#configuration-version-states).\n\nOtherwise, the endpoint returns an error. 
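The criteria for the Terraform Enterprise backing-data endpoints (soft delete above, restore here, and permanent delete below) amount to a small state check. A hypothetical guard, using the `data.attributes.status` values from [Configuration Version States](#configuration-version-states), might encode them like this:

```python
def allowed_backing_data_actions(state: str, is_current: bool, api_or_cli: bool) -> set:
    # Illustrative only: maps a configuration version's state to the
    # backing-data actions this document says Terraform Enterprise accepts.
    # `state` is data.attributes.status; `is_current` marks the workspace's
    # current configuration version; `api_or_cli` marks versions created
    # through the API or CLI rather than VCS.
    actions = set()
    # Soft delete: API/CLI-created, uploaded, and not the current version.
    if api_or_cli and state == "uploaded" and not is_current:
        actions.add("soft_delete_backing_data")
    # Restore: anything not already permanently deleted.
    if state != "backing_data_permanently_deleted":
        actions.add("restore_backing_data")
    # Permanent delete: API/CLI-created, soft-deleted, and not current.
    if api_or_cli and state == "backing_data_soft_deleted" and not is_current:
        actions.add("permanently_delete_backing_data")
    return actions
```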
Terraform restores applicable configuration versions back to their `uploaded` state.\n\n| Status  | Response                  | Reason(s)                                                                                                                                     |\n| ------- | ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |\n| [200][] | none                      | Terraform successfully initiated the restore process.                                                                                         |\n| [400][] | [JSON API error object][] | Terraform failed to transition the state to `uploaded`.                                                                                       |\n| [404][] | [JSON API error object][] | Terraform did not find the configuration version or the user is not authorized to modify the configuration version state.                        |\n\n### Request Body\n\nThis POST endpoint does not take a request body.\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\/actions\/restore_backing_data\n```\n\n## Permanently Delete a Configuration Version\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. 
<a href=\"https:\/\/developer.hashicorp.com\/terraform\/enterprise\">Learn more about Terraform Enterprise<\/a>.\n<\/EnterpriseAlert>\n\n`POST \/api\/v2\/configuration-versions\/:configuration-id\/actions\/permanently_delete_backing_data`\n\n| Parameter                   | Description                                    |\n| --------------------------- | ---------------------------------------------- |\n| `:configuration-id`  | The ID of the configuration version to permanently delete. |\n\nThis endpoint directs Terraform Enterprise to permanently delete backing files associated with this configuration version. This endpoint can only permanently delete configuration versions that meet the following criteria:\n\n- Were created using the API or CLI,\n- are in a [`backing_data_soft_deleted` state](#configuration-version-states),\n- and are not the current configuration version.\n\nOtherwise, the endpoint returns an error.\n\n| Status  | Response                  | Reason(s)                                                                                                                                     |\n| ------- | ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |\n| [200][] | none                      | Terraform successfully deleted the data permanently.                                                                                          |\n| [400][] | [JSON API error object][] | Terraform failed to transition the state to `backing_data_permanently_deleted`.                                                               |\n| [404][] | [JSON API error object][] | Terraform did not find the configuration version or the user is not authorized to modify the configuration version state.                        
|\n\n### Request Body\n\nThis POST endpoint does not take a request body.\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\/actions\/permanently_delete_backing_data\n```\n\n## Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\n| Resource Name        | Description                                       |\n| -------------------- | ------------------------------------------------- |\n| `ingress_attributes` | The commit information used in the configuration. |\n| `run`                | The run created by the configuration.             |","site":"terraform","answers_cleaned":"    page title  Configuration Versions   API Docs   HCP Terraform description       Use the   configuration versions  endpoint to manage configuration versions for a workspace  List  show  and create configuration versions and their files using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   302   https   developer mozilla org en US docs Web HTTP Status 302   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web 
Content Type  application vnd api json        request POST     https   app terraform io api v2 configuration versions cv ntv3HbhJqvFzamy7 actions soft delete backing data     data   data     attributes     delete older than n days   23            Restore Configuration Versions Marked for Garbage Collection   EnterpriseAlert  This endpoint is exclusive to Terraform Enterprise  and not available in HCP Terraform   a href  https   developer hashicorp com terraform enterprise  Learn more about Terraform Enterprise  a     EnterpriseAlert    POST  api v2 configuration versions  configuration id actions restore backing data     Parameter                     Description                                                                                                                           configuration id     The ID of the configuration version to restore back to its uploaded state if applicable     This endpoint directs Terraform Enterprise to restore backing files associated with this configuration version  This endpoint can only restore delete configuration versions that meet the following criteria     are not in a   backing data permanently deleted  state   configuration version states    Otherwise  the endpoint returns an error  Terraform restores applicable configuration versions back to their  uploaded  state     Status    Response                    Reason s                                                                                                                                                                                                                                                                                                                                   200      none                        Terraform successfully initiated the restore process                                                                                               400       JSON API error object      Terraform failed to transition the state to  uploaded                      
                                                                        404       JSON API error object      Terraform did not find the configuration version or the user is not authorized to modify the configuration version state                                Request Body  This POST endpoint does not take a request body       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST     https   app terraform io api v2 configuration versions cv ntv3HbhJqvFzamy7 actions restore backing data         Permanently Delete a Configuration Version   EnterpriseAlert  This endpoint is exclusive to Terraform Enterprise  and not available in HCP Terraform   a href  https   developer hashicorp com terraform enterprise  Learn more about Terraform Enterprise  a     EnterpriseAlert    POST  api v2 configuration versions  configuration id actions permanently delete backing data     Parameter                     Description                                                                                                                           configuration id     The ID of the configuration version to permanently delete     This endpoint directs Terraform Enterprise to permanently delete backing files associated with this configuration version  This endpoint can only permanently delete configuration versions that meet the following criteria     Were created using the API or CLI    are in a   backing data soft deleted  state   configuration version states     and are not the current configuration version   Otherwise  the endpoint returns an error     Status    Response                    Reason s                                                                                                                                                                                                                                                                                                                  
                 200      none                        Terraform successfully deleted the data permanently                                                                                                400       JSON API error object      Terraform failed to transition the state to  backing data permanently deleted                                                                      404       JSON API error object      Terraform did not find the configuration version or the user is not authorized to modify the configuration version state                                Request Body  This POST endpoint does not take a request body       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST     https   app terraform io api v2 configuration versions cv ntv3HbhJqvFzamy7 actions permanently delete backing data         Available Related Resources  The GET endpoints above can optionally return related resources  if requested with  the  include  query parameter   terraform cloud docs api docs inclusion of related resources   The following resource types are available     Resource Name          Description                                                                                                                         ingress attributes    The commit information used in the configuration       run                   The run created by the configuration               "}
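The upload and download endpoints above operate on `.tar.gz` archives of the configuration directory. A minimal sketch of producing such an archive locally before handing it to the upload URL; the directory name, file contents, and `$UPLOAD_URL` placeholder are illustrative, not part of the API:

```shell
# Sketch: package a Terraform configuration directory into the .tar.gz
# format the archivist upload URL expects. Names below are hypothetical.
set -e
mkdir -p example-config
printf 'terraform {\n  required_version = ">= 1.0"\n}\n' > example-config/main.tf

# Archive from inside the directory so member paths are relative,
# matching the layout HCP Terraform unpacks for a run.
tar -czf config.tar.gz -C example-config .

# The archive would then be uploaded with a raw PUT (no JSON wrapper), e.g.:
#   curl --header "Content-Type: application/octet-stream" \
#        --request PUT --data-binary @config.tar.gz "$UPLOAD_URL"
tar -tzf config.tar.gz
```

Note that the upload is a plain binary `PUT` to the one-time `upload-url`, not a JSON:API request, so no `Authorization` header or `application/vnd.api+json` content type is involved in that step.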
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the authentication token endpoint to manage organization level API tokens Generate and delete organization tokens using the HTTP API page title Organization Tokens API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201","answers":"---\npage_title: Organization Tokens - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/authentication-token` endpoint to manage organization-level API tokens. Generate and delete organization tokens using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Organization Token API\n\n## Generate a new organization token\n\n`POST \/organizations\/:organization_name\/authentication-token`\n\n| Parameter        
    | Description                                           |\n| -------------------- | ----------------------------------------------------- |\n| `:organization_name` | The name of the organization to generate a token for. |\n\nGenerates a new [organization API token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens), replacing any existing token.\n\nOnly members of the owners team, the owners [team API token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens), and the [organization API token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens) can access this endpoint.\n\nThis endpoint returns the secret text of the new authentication token. You can only access this token when you create it and can not recover it later.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Status  | Response                                                | Reason              |\n| ------- | ------------------------------------------------------- | ------------------- |\n| [201][] | [JSON API document][] (`type: \"authentication-tokens\"`) | Success             |\n| [404][] | [JSON API error object][]                               | User not authorized |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\n\n| Key path                      | Type   | Default | Description                                                                                                             |\n| ----------------------------- | ------ | ------- | ----------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                   | string |         | Must be `\"authentication-token\"`.                                                                                       
|\n| `data.attributes.expired-at`  | string | `null`  | The UTC date and time that the Organization Token will expire, in ISO 8601 format. If omitted or set to `null` the token will never expire. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"authentication-token\",\n    \"attributes\": {\n      \"expired-at\": \"2023-04-06T12:00:00.000Z\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/authentication-token\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"4111756\",\n    \"type\": \"authentication-tokens\",\n    \"attributes\": {\n      \"created-at\": \"2017-11-29T19:11:28.075Z\",\n      \"last-used-at\": null,\n      \"description\": null,\n      \"token\": \"ZgqYdzuvlv8Iyg.atlasv1.6nV7t1OyFls341jo1xdZTP72fN0uu9VL55ozqzekfmToGFbhoFvvygIRy2mwVAXomOE\",\n      \"expired-at\": \"2023-04-06T12:00:00.000Z\"\n    },\n    \"relationships\": {\n      \"created-by\": {\n        \"data\": {\n          \"id\": \"user-62goNpx1ThQf689e\",\n          \"type\": \"users\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Delete the organization token\n\n`DELETE \/organizations\/:organization_name\/authentication-token`\n\n| Parameter            | Description                                   |\n| -------------------- | --------------------------------------------- |\n| `:organization_name` | Which organization's token should be deleted. 
|\n\nOnly members of the owners team, the owners [team API token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens), and the [organization API token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens) can access this endpoint.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Status  | Response                  | Reason              |\n| ------- | ------------------------- | ------------------- |\n| [204][] | No Content                | Success             |\n| [404][] | [JSON API error object][] | User not authorized |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/authentication-token\n```","site":"terraform","answers_cleaned":"    page title  Organization Tokens   API Docs   HCP Terraform description       Use the   authentication token  endpoint to manage organization level API tokens  Generate and delete organization tokens using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https  
 developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Organization Token API     Generate a new organization token   POST  organizations  organization name authentication token     Parameter              Description                                                                                                                                  organization name    The name of the organization to generate a token for     Generates a new  organization API token   terraform cloud docs users teams organizations api tokens organization api tokens   replacing any existing token   Only members of the owners team  the owners  team API token   terraform cloud docs users teams organizations api tokens team api tokens   and the  organization API token   terraform cloud docs users teams organizations api tokens organization api tokens  can access this endpoint   This endpoint returns the secret text of the new authentication token  You can only access this token when you create it and can not recover it later    permissions citation    intentionally unused   keep for maintainers    Status    Response                                                  Reason                                                                                                               201       JSON API document      type   authentication tokens      Success                  404       JSON API error object                                    User not authorized        Request Body  This POST endpoint requires a JSON object with the following properties as a request payload      Key path                        Type     Default   Description                                                                                                                                                             
                                                                                                                                    data type                      string             Must be   authentication token                                                                                               data attributes expired at     string    null     The UTC date and time that the Organization Token will expire  in ISO 8601 format  If omitted or set to  null  the token will never expire         Sample Payload     json      data          type    authentication token        attributes            expired at    2023 04 06T12 00 00 000Z                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 organizations my organization authentication token          Sample Response     json      data          id    4111756        type    authentication tokens        attributes            created at    2017 11 29T19 11 28 075Z          last used at   null         description   null         token    ZgqYdzuvlv8Iyg atlasv1 6nV7t1OyFls341jo1xdZTP72fN0uu9VL55ozqzekfmToGFbhoFvvygIRy2mwVAXomOE          expired at    2023 04 06T12 00 00 000Z              relationships            created by              data                id    user 62goNpx1ThQf689e              type    users                                        Delete the organization token   DELETE  organizations  organization authentication token     Parameter              Description                                                                                                                  organization name    Which organization s token should be deleted     Only members of the owners team  the owners  team API token   terraform cloud docs users teams organizations api tokens team api tokens   and the  organization API token   terraform cloud docs users 
teams organizations api tokens organization api tokens  can access this endpoint    permissions citation    intentionally unused   keep for maintainers    Status    Response                    Reason                                                                                 204      No Content                  Success                  404       JSON API error object      User not authorized        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 organizations my organization authentication token    "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 Use the subscriptions endpoint to access subscription information Get an organization s subscription and access a subscription by ID using the HTTP API tfc only true page title Subscriptions API Docs HCP Terraform","answers":"---\npage_title: Subscriptions - API Docs - HCP Terraform\ntfc_only: true\ndescription: >-\n  Use the `\/subscriptions` endpoint to access subscription information. Get an organization's subscription, and access a subscription by ID using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Subscriptions API\n\n-> **Note:** The subscriptions API is only available in HCP Terraform.\n\nAn 
organization can subscribe to different [feature sets](\/terraform\/cloud-docs\/api-docs\/feature-sets), which represent the [pricing plans](\/terraform\/cloud-docs\/overview) available in HCP Terraform. An organization's [entitlement set](\/terraform\/cloud-docs\/api-docs#feature-entitlements) is calculated using its subscription and feature set.\n\nTo change the subscription for an organization, use the billing settings in the HCP Terraform UI.\n\n## Show Subscription For Organization\n\n`GET \/organizations\/:organization_name\/subscription`\n\n| Parameter            | Description                   |\n| -------------------- | ----------------------------- |\n| `:organization_name` | The name of the organization. |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/subscription\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"sub-kyjptCZYXQ6amEVu\",\n    \"type\": \"subscriptions\",\n    \"attributes\": {\n      \"end-at\": null,\n      \"is-active\": true,\n      \"start-at\": \"2021-01-20T07:03:53.492Z\",\n      \"runs-ceiling\": 1,\n      \"contract-start-at\": null,\n      \"contract-user-limit\": null,\n      \"contract-apply-limit\": null,\n      \"run-task-limit\": null,\n      \"run-task-workspace-limit\": null,\n      \"run-task-mandatory-enforcement-limit\": null,\n      \"policy-set-limit\": null,\n      \"policy-limit\": null,\n      \"policy-mandatory-enforcement-limit\": null,\n      \"versioned-policy-set-limit\": null,\n      \"agents-ceiling\": 0,\n      \"is-public-free-tier\": true,\n      \"is-self-serve-trial\": false\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"hashicorp\",\n          \"type\": \"organizations\"\n        }\n      },\n      \"billing-account\": {\n        \"data\": 
null\n      },\n      \"feature-set\": {\n        \"data\": {\n          \"id\": \"fs-EvCGYfpx9CVRzteA\",\n          \"type\": \"feature-sets\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/subscriptions\/sub-kyjptCZYXQ6amEVu\"\n    }\n  },\n  \"included\": [\n    {\n      \"id\": \"fs-EvCGYfpx9CVRzteA\",\n      \"type\": \"feature-sets\",\n      \"attributes\": {\n        \"audit-logging\": false,\n        \"comparison-description\": \"Essential collaboration features for practitioners and small teams.\",\n        \"cost-estimation\": false,\n        \"cost\": 0,\n        \"description\": \"State storage, locking, run history, VCS integration, private module registry, and remote operations\",\n        \"global-run-tasks\": false,\n        \"identifier\": \"free\",\n        \"is-current\": true,\n        \"is-free-tier\": true,\n        \"module-tests-generation\": false,\n        \"name\": \"Free\",\n        \"plan\": null,\n        \"policy-enforcement\": false,\n        \"policy-limit\": null,\n        \"policy-mandatory-enforcement-limit\": null,\n        \"policy-set-limit\": null,\n        \"private-networking\": false,\n        \"private-policy-agents\": false,\n        \"private-vcs\": false,\n        \"run-task-limit\": null,\n        \"run-task-mandatory-enforcement-limit\": null,\n        \"run-task-workspace-limit\": null,\n        \"run-tasks\": false,\n        \"self-serve-billing\": true,\n        \"sentinel\": false,\n        \"sso\": false,\n        \"teams\": false,\n        \"user-limit\": 5,\n        \"versioned-policy-set-limit\": null\n      }\n    }\n  ]\n}\n```\n\n## Show Subscription By ID\n\n`GET \/subscriptions\/:id`\n\n| Parameter | Description                        |\n| --------- | ---------------------------------- |\n| `:id`     | The ID of the Subscription to show |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: 
application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/subscriptions\/sub-kyjptCZYXQ6amEVu\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"sub-kyjptCZYXQ6amEVu\",\n    \"type\": \"subscriptions\",\n    \"attributes\": {\n      \"end-at\": null,\n      \"is-active\": true,\n      \"start-at\": \"2021-01-20T07:03:53.492Z\",\n      \"runs-ceiling\": 1,\n      \"contract-start-at\": null,\n      \"contract-user-limit\": null,\n      \"contract-apply-limit\": null,\n      \"agents-ceiling\": 0,\n      \"run-task-limit\": null,\n      \"run-task-workspace-limit\": null,\n      \"run-task-mandatory-enforcement-limit\": null,\n      \"policy-set-limit\": null,\n      \"policy-limit\": null,\n      \"policy-mandatory-enforcement-limit\": null,\n      \"versioned-policy-set-limit\": null,\n      \"is-public-free-tier\": true,\n      \"is-self-serve-trial\": false\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"hashicorp\",\n          \"type\": \"organizations\"\n        }\n      },\n      \"billing-account\": {\n        \"data\": null\n      },\n      \"feature-set\": {\n        \"data\": {\n          \"id\": \"fs-EvCGYfpx9CVRzteA\",\n          \"type\": \"feature-sets\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/subscriptions\/sub-kyjptCZYXQ6amEVu\"\n    }\n  },\n  \"included\": [\n    {\n      \"id\": \"fs-EvCGYfpx9CVRzteA\",\n      \"type\": \"feature-sets\",\n      \"attributes\": {\n        \"audit-logging\": false,\n        \"comparison-description\": \"Essential collaboration features for practitioners and small teams.\",\n        \"cost-estimation\": false,\n        \"cost\": 0,\n        \"description\": \"State storage, locking, run history, VCS integration, private module registry, and remote operations\",\n        \"global-run-tasks\": false,\n        \"identifier\": \"free\",\n        \"is-current\": true,\n        
\"is-free-tier\": true,\n        \"module-tests-generation\": false,\n        \"name\": \"Free\",\n        \"plan\": null,\n        \"policy-enforcement\": false,\n        \"policy-limit\": null,\n        \"policy-mandatory-enforcement-limit\": null,\n        \"policy-set-limit\": null,\n        \"private-networking\": false,\n        \"private-policy-agents\": false,\n        \"private-vcs\": false,\n        \"run-task-limit\": null,\n        \"run-task-mandatory-enforcement-limit\": null,\n        \"run-task-workspace-limit\": null,\n        \"run-tasks\": false,\n        \"self-serve-billing\": true,\n        \"sentinel\": false,\n        \"sso\": false,\n        \"teams\": false,\n        \"user-limit\": 5,\n        \"versioned-policy-set-limit\": null\n      }\n    }\n  ]\n}\n```","site":"terraform","answers_cleaned":"    page title  Subscriptions   API Docs   HCP Terraform tfc only  true description       Use the   subscriptions  endpoint to access subscription information  Get an organization s subscription  and access a subscription by ID using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   
developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Subscriptions API\n\n-> **Note:** The subscriptions API is only available in HCP Terraform.\n\nAn organization can subscribe to different [feature sets](\/terraform\/cloud-docs\/api-docs\/feature-sets), which represent the [pricing plans](\/terraform\/cloud-docs\/overview) available in HCP Terraform. An organization's [entitlement set](\/terraform\/cloud-docs\/api-docs#feature-entitlements) is calculated using its subscription and feature set.\n\nTo change the subscription for an organization, use the billing settings in the HCP Terraform UI.\n\n## Show Subscription For Organization\n\n`GET \/organizations\/:organization_name\/subscription`\n\n| Parameter            | Description                  |\n| -------------------- | ---------------------------- |\n| `:organization_name` | The name of the organization |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/subscription\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"sub-kyjptCZYXQ6amEVu\",\n    \"type\": \"subscriptions\",\n    \"attributes\": {\n      \"end-at\": null,\n      \"is-active\": true,\n      \"start-at\": \"2021-01-20T07:03:53.492Z\",\n      \"runs-ceiling\": 1,\n      \"contract-start-at\": null,\n      \"contract-user-limit\": null,\n      \"contract-apply-limit\": null,\n      \"run-task-limit\": null,\n      \"run-task-workspace-limit\": null,\n      \"run-task-mandatory-enforcement-limit\": null,\n      \"policy-set-limit\": null,\n      \"policy-limit\": null,\n      \"policy-mandatory-enforcement-limit\": null,\n      \"versioned-policy-set-limit\": null,\n      \"agents-ceiling\": 0,\n      \"is-public-free-tier\": true,\n      \"is-self-serve-trial\": false\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"hashicorp\",\n          \"type\": \"organizations\"\n        }\n      },\n      \"billing-account\": {\n        \"data\": null\n      },\n      \"feature-set\": {\n        \"data\": {\n          \"id\": \"fs-EvCGYfpx9CVRzteA\",\n          \"type\": \"feature-sets\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/subscriptions\/sub-kyjptCZYXQ6amEVu\"\n    }\n  },\n  \"included\": [\n    {\n      \"id\": \"fs-EvCGYfpx9CVRzteA\",\n      \"type\": \"feature-sets\",\n      \"attributes\": {\n        \"audit-logging\": false,\n        \"comparison-description\": \"Essential collaboration features for practitioners and small teams\",\n        \"cost-estimation\": false,\n        \"cost\": 0,\n        \"description\": \"State storage, locking, run history, VCS integration, private module registry, and remote operations\",\n        \"global-run-tasks\": false,\n        \"identifier\": \"free\",\n        \"is-current\": true,\n        \"is-free-tier\": true,\n        \"module-tests-generation\": false,\n        \"name\": \"Free\",\n        \"plan\": null,\n        \"policy-enforcement\": false,\n        \"policy-limit\": null,\n        \"policy-mandatory-enforcement-limit\": null,\n        \"policy-set-limit\": null,\n        \"private-networking\": false,\n        \"private-policy-agents\": false,\n        \"private-vcs\": false,\n        \"run-task-limit\": null,\n        \"run-task-mandatory-enforcement-limit\": null,\n        \"run-task-workspace-limit\": null,\n        \"run-tasks\": false,\n        \"self-serve-billing\": true,\n        \"sentinel\": false,\n        \"sso\": false,\n        \"teams\": false,\n        \"user-limit\": 5,\n        \"versioned-policy-set-limit\": null\n      }\n    }\n  ]\n}\n```\n\n## Show Subscription By ID\n\n`GET \/subscriptions\/:id`\n\n| Parameter | Description                        |\n| --------- | ---------------------------------- |\n| `:id`     | The ID of the Subscription to show |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/subscriptions\/sub-kyjptCZYXQ6amEVu\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"sub-kyjptCZYXQ6amEVu\",\n    \"type\": \"subscriptions\",\n    \"attributes\": {\n      \"end-at\": null,\n      \"is-active\": true,\n      \"start-at\": \"2021-01-20T07:03:53.492Z\",\n      \"runs-ceiling\": 1,\n      \"contract-start-at\": null,\n      \"contract-user-limit\": null,\n      \"contract-apply-limit\": null,\n      \"agents-ceiling\": 0,\n      \"run-task-limit\": null,\n      \"run-task-workspace-limit\": null,\n      \"run-task-mandatory-enforcement-limit\": null,\n      \"policy-set-limit\": null,\n      \"policy-limit\": null,\n      \"policy-mandatory-enforcement-limit\": null,\n      \"versioned-policy-set-limit\": null,\n      \"is-public-free-tier\": true,\n      \"is-self-serve-trial\": false\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"hashicorp\",\n          \"type\": \"organizations\"\n        }\n      },\n      \"billing-account\": {\n        \"data\": null\n      },\n      \"feature-set\": {\n        \"data\": {\n          \"id\": \"fs-EvCGYfpx9CVRzteA\",\n          \"type\": \"feature-sets\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/subscriptions\/sub-kyjptCZYXQ6amEVu\"\n    }\n  },\n  \"included\": [\n    {\n      \"id\": \"fs-EvCGYfpx9CVRzteA\",\n      \"type\": \"feature-sets\",\n      \"attributes\": {\n        \"audit-logging\": false,\n        \"comparison-description\": \"Essential collaboration features for practitioners and small teams\",\n        \"cost-estimation\": false,\n        \"cost\": 0,\n        \"description\": \"State storage, locking, run history, VCS integration, private module registry, and remote operations\",\n        \"global-run-tasks\": false,\n        \"identifier\": \"free\",\n        \"is-current\": true,\n        \"is-free-tier\": true,\n        \"module-tests-generation\": false,\n        \"name\": \"Free\",\n        \"plan\": null,\n        \"policy-enforcement\": false,\n        \"policy-limit\": null,\n        \"policy-mandatory-enforcement-limit\": null,\n        \"policy-set-limit\": null,\n        \"private-networking\": false,\n        \"private-policy-agents\": false,\n        \"private-vcs\": false,\n        \"run-task-limit\": null,\n        \"run-task-mandatory-enforcement-limit\": null,\n        \"run-task-workspace-limit\": null,\n        \"run-tasks\": false,\n        \"self-serve-billing\": true,\n        \"sentinel\": false,\n        \"sso\": false,\n        \"teams\": false,\n        \"user-limit\": 5,\n        \"versioned-policy-set-limit\": null\n      }\n    }\n  ]\n}\n```"}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 Use the workspace resources endpoint to interact with workspace resources List resources using the HTTP API page title Workspace Resources API Docs HCP Terraform","answers":"---\npage_title: Workspace Resources - API Docs - HCP Terraform\ndescription: >-\n  Use the workspace `\/resources` endpoint to interact with workspace resources. List resources using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Workspace Resources API\n\n## List Workspace Resources\n\n`GET \/workspaces\/:workspace_id\/resources`\n\n| Parameter       | Description                                                                          
                                                                                                                            |\n| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The ID of the workspace to retrieve resources from. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [show workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. |\n\n| Status  | Response                                    | Reason                                                      |\n| ------- | ------------------------------------------- | ----------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"resources\"`) | Request was successful.                                     |\n| [404][] | [JSON API error object][]                   | Workspace not found or user unauthorized to perform action. |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                         |\n| -------------- | ----------------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.                  |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 workspace resources per page. |\n\n### Permissions\n\nTo list resources the user must have permission to read resources for the specified workspace. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Sample Request\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-DiTzUDRpjrArAfSS\/resources\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"wsr-KNYb3Jj3JTBgoBFs\",\n      \"type\": \"resources\",\n      \"attributes\": {\n        \"address\": \"random_pet.animal\",\n        \"name\": \"animal\",\n        \"created-at\": \"2021-10-27\",\n        \"updated-at\": \"2021-10-27\",\n        \"module\": \"root\",\n        \"provider\": \"hashicorp\/random\",\n        \"provider-type\": \"random_pet\",\n        \"modified-by-state-version-id\": \"sv-y4pjfGHkGUBAa9AX\",\n        \"name-index\": null\n      }\n    },\n    {\n      \"id\": \"wsr-kYsf5A3hQ1y9zFWq\",\n      \"type\": \"resources\",\n      \"attributes\": {\n        \"address\": \"random_pet.animal2\",\n        \"name\": \"animal2\",\n        \"created-at\": \"2021-10-27\",\n        \"updated-at\": \"2021-10-27\",\n        \"module\": \"root\",\n        \"provider\": \"hashicorp\/random\",\n        \"provider-type\": \"random_pet\",\n        \"modified-by-state-version-id\": \"sv-y4pjfGHkGUBAa9AX\",\n        \"name-index\": null\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-DiTzUDRpjrArAfSS\/resources?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-DiTzUDRpjrArAfSS\/resources?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-DiTzUDRpjrArAfSS\/resources?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  
...\n}\n```","site":"terraform"}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title Workspaces API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 Use the workspaces endpoint to interact with workspaces List show create update lock unlock and delete a workspace and manage SSH keys remote state consumers and tags using the HTTP API","answers":"---\npage_title: Workspaces - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/workspaces` endpoint to interact with workspaces. List, show, create, update, lock, unlock, and delete a workspace, and manage SSH keys, remote state consumers, and tags using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n[speculative plans]: 
\/terraform\/cloud-docs\/run\/remote-operations#speculative-plans\n\n# Workspaces API\n\nThis topic provides reference information about the workspaces API. Workspaces represent running infrastructure managed by Terraform.\n\n## Overview\n\nThe scope of the API includes the following endpoints:\n\n| Method | Path | Action |\n| --- | --- | --- |\n| `POST` | `\/organizations\/:organization_name\/workspaces` | Call this endpoint to [create a workspace](#create-a-workspace). You can apply tags stored as key-value pairs when creating the workspace. |\n| `POST` | `\/organizations\/:organization_name\/workspaces\/:name\/actions\/safe-delete` | Call this endpoint to [safely delete a workspace](#safe-delete-a-workspace) by querying the organization and workspace names. |\n| `POST` | `\/workspaces\/:workspace_id\/actions\/safe-delete` | Call this endpoint to [safely delete a workspace](#safe-delete-a-workspace) by querying the workspace ID. |\n| `POST` | `\/workspaces\/:workspace_id\/actions\/lock` | Call this endpoint to [lock a workspace](#lock-a-workspace). |\n| `POST` | `\/workspaces\/:workspace_id\/actions\/unlock` | Call this endpoint to [unlock a workspace](#unlock-a-workspace). |\n| `POST` | `\/workspaces\/:workspace_id\/actions\/force-unlock` | Call this endpoint to [force a workspace to unlock](#force-unlock-a-workspace). |\n| `POST` | `\/workspaces\/:workspace_id\/relationships\/remote-state-consumers` | Call this endpoint to [add remote state consumers](#get-remote-state-consumers). |\n| `POST` | `\/workspaces\/:workspace_id\/relationships\/tags` | Call this endpoint to [bind flat string tags to an existing workspace](#add-tags-to-a-workspace). |\n| `POST` | `\/workspaces\/:workspace_id\/relationships\/data-retention-policy` | Call this endpoint to [show the workspace data retention policy](#show-data-retention-policy). |\n| `GET` | `\/organizations\/:organization_name\/workspaces` | Call this endpoint to [list existing workspaces](#list-workspaces). 
Each workspace in the response contains a link to `effective-tag-bindings` and `tag-bindings` collections. You can filter the response by tag keys and values using a query string parameter. |\n| `GET` | `\/organizations\/:organization_name\/workspaces\/:name` | Call this endpoint to [show workspace details](#show-workspace) by querying the organization and workspace names. |\n| `GET` | `\/workspaces\/:workspace_id` | Call this endpoint to [show workspace details](#show-workspace). |\n| `GET` | `\/workspaces\/:workspace_id\/relationships\/remote-state-consumers` | Call this endpoint to [list remote state consumers](#get-remote-state-consumers). |\n| `GET` | `\/workspaces\/:workspace_id\/relationships\/tags` | Call this endpoint to [list flat string workspace tags](#get-tags). |\n| `GET` | `\/workspaces\/:workspace_id\/tag-bindings` | Call this endpoint to [list workspace key-value tags](#get-tags) bound directly to this workspace. |\n| `GET` | `\/workspaces\/:workspace_id\/effective-tag-bindings` | Call this endpoint to [list all workspace key-value tags](#get-tags), including both those bound directly to the workspace as well as those inherited from the parent project. |\n| `GET` | `\/workspaces\/:workspace_id\/relationships\/data-retention-policy` | Call this endpoint to [show the workspace data retention policy](#show-data-retention-policy). <alert enterprise\/> |\n| `PATCH` | `\/workspaces\/:workspace_id\/relationships\/ssh-key` | Call this endpoint to manage SSH key assignments for workspaces. Refer to [Assign an SSH key to a workspace](#assign-an-ssh-key-to-a-workspace) and [Unassign an SSH key from a workspace](#unassign-an-ssh-key-from-a-workspace) for instructions. |\n| `PATCH` | `\/workspaces\/:workspace_id` | Call this endpoint to [update a workspace](#update-a-workspace). You can apply tags stored as key-value pairs when updating the workspace. 
|\n| `PATCH` | `\/organizations\/:organization_name\/workspaces\/:name` | Call this endpoint to [update a workspace](#update-a-workspace) by querying the organization and workspace names. |\n| `PATCH` | `\/workspaces\/:workspace_id\/relationships\/remote-state-consumers` | Call this endpoint to [replace remote state consumers](#replace-remote-state-consumers). |\n| `DELETE` | `\/workspaces\/:workspace_id\/relationships\/remote-state-consumers` | Call this endpoint to [delete remote state consumers](#delete-remote-state-consumers). |\n| `DELETE` | `\/workspaces\/:workspace_id\/relationships\/tags` | Call this endpoint to [delete flat string workspace tags](#remove-tags-from-workspace) from the workspace. |\n| `DELETE` | `\/workspaces\/:workspace_id\/relationships\/data-retention-policy` | Call this endpoint to [remove a workspace data retention policy](#remove-data-retention-policy). |\n| `DELETE` | `\/workspaces\/:workspace_id` | Call this endpoint to [force delete a workspace](#force-delete-a-workspace), which deletes the workspace without first checking for managed resources.|\n| `DELETE` | `\/organizations\/:organization_name\/workspaces\/:name` | Call this endpoint to [force delete a workspace](#force-delete-a-workspace), which deletes the workspace without first checking for managed resources, by querying the organization and workspace names. 
|\n\n## Requirements\n\n- You must be a member of a team with the **Read** permission enabled for Terraform runs to view workspaces.\n- You must be a member of a team with the **Admin** permissions enabled on the workspace to change settings and force-unlock it.\n- You must be a member of a team with the **Lock\/unlock** permission enabled to lock and unlock the workspace.\n- You must meet one of the following requirements to create a workspace:\n  - Be the team owner\n  - Be on a team with the **Manage all workspaces** permission enabled\n  - Present an [organization API token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens) when calling the API.\n\nRefer to [Workspace Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#workspace-permissions) for additional information.\n\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Create a Workspace\n\nUse the following endpoint to create a new workspace:\n\n`POST \/organizations\/:organization_name\/workspaces`\n\n| Parameter | Description |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to create the workspace in. The organization must already exist in the system, and the user must have permissions to create new workspaces. |\n\n\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\nBy supplying the necessary attributes under a `vcs-repository` object, you can create a workspace that is configured against a VCS Repository.\n\n| Key path | Type | Default | Description |\n|--- | --- | ---| ---|\n| `data.type` | string  | none | Must be `\"workspaces\"`. 
|\n| `data.attributes.name` | string  | none | The name of the workspace. Workspace names can only include letters, numbers, `-`, and `_`. The name is a unique identifier in the organization. |\n| `data.attributes.agent-pool-id` | string  | none | Required when `execution-mode` is set to `agent`. The ID of the agent pool belonging to the workspace's organization. This value must not be specified if `execution-mode` is set to `remote` or `local` or if `operations` is set to `true`. |\n| `data.attributes.allow-destroy-plan` | boolean | `true` | Whether destroy plans can be queued on the workspace. |\n| `data.attributes.assessments-enabled`           | boolean | `false`   | (previously `drift-detection`) Whether or not HCP Terraform performs health assessments for the workspace. May be overridden by the organization setting `assessments-enforced`. Only available for Plus tier organizations, in workspaces running Terraform version 0.15.4+ and operating in [Remote execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode). |\n| `data.attributes.auto-apply` | boolean | `false`   | Whether to automatically apply changes when a Terraform plan is successful in runs initiated by VCS, UI or CLI, [with some exceptions](\/terraform\/cloud-docs\/workspaces\/settings#auto-apply-and-manual-apply). |\n| `data.attributes.auto-apply-run-trigger`        | boolean | `false`   | Whether to automatically apply changes when a Terraform plan is successful in runs initiated by run triggers. |\n| `data.attributes.auto-destroy-at` | string  | (nothing) | Timestamp when the next scheduled destroy run will occur, refer to [Scheduled Destroy](\/terraform\/cloud-docs\/workspaces\/settings\/deletion#automatically-destroy). |\n| `data.attributes.auto-destroy-activity-duration`| string  | (nothing) | Value and units for [automatically scheduled destroy runs based on workspace activity](\/terraform\/cloud-docs\/workspaces\/settings\/deletion#automatically-destroy). 
Valid values are greater than 0 and four digits or less. Valid units are `d` and `h`. For example, to queue destroy runs after fourteen days of inactivity set `auto-destroy-activity-duration: \"14d\"`. |\n| `data.attributes.description` | string  | (nothing) | A description for the workspace. |\n| `data.attributes.execution-mode`                | string  | (nothing) | Which [execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) to use. Valid values are `remote`, `local`, and `agent`. When set to `local`, the workspace will be used for state storage only. This value _must not_ be specified if `operations` is specified, and _must_ be specified if `setting-overwrites.execution-mode` is set to `true`. |\n| `data.attributes.file-triggers-enabled` | boolean | `true` | Whether to filter runs based on the changed files in a VCS push. If enabled, it uses either `trigger-prefixes` in conjunction with `working_directory` or `trigger-patterns` to describe the set of changed files that will start a run. If disabled, any push triggers a run. |\n| `data.attributes.global-remote-state`           | boolean | `false`   | Whether the workspace should allow all workspaces in the organization to [access its state data](\/terraform\/cloud-docs\/workspaces\/state) during runs. If `false`, then only specifically approved workspaces can access its state. Manage allowed workspaces using the [Remote State Consumers](\/terraform\/cloud-docs\/api-docs\/workspaces#get-remote-state-consumers) endpoints, documented later on this page. Terraform Enterprise admins can choose the default value for new workspaces if this attribute is omitted. |\n| `data.attributes.operations` | boolean | `true`    | **DEPRECATED** Use `execution-mode` instead. Whether to use remote execution mode. When set to `false`, the workspace will be used for state storage only. This value must not be specified if `execution-mode` is specified. 
|\n| `data.attributes.queue-all-runs` | boolean | `false`   | Whether runs should be queued immediately after workspace creation. When set to false, runs triggered by a VCS change will not be queued until at least one run is manually queued. |\n| `data.attributes.source-name` | string  | none | A friendly name for the application or client creating this workspace. If set, this will be displayed on the workspace as \"Created via `<SOURCE NAME>`\". |\n| `data.attributes.source-url` | string  | none | A URL for the application or client creating this workspace. This can be the URL of a related resource in another app, or a link to documentation or other info about the client. |\n| `data.attributes.speculative-enabled`           | boolean | `true`      | Whether this workspace allows automatic [speculative plans][]. Setting this to `false` prevents HCP Terraform from running plans on pull requests, which can improve security if the VCS repository is public or includes untrusted contributors. It doesn't prevent manual speculative plans via the CLI or the runs API.  |\n| `data.attributes.terraform-version` | string  | latest release | Specifies the version of Terraform to use for this workspace. You can specify an exact version or a [version constraint](\/terraform\/language\/expressions\/version-constraints) such as `~> 1.0.0`. If you specify a constraint, the workspace always uses the newest release that meets that constraint. If omitted when creating a workspace, this defaults to the latest released version. |\n| `data.attributes.trigger-patterns` | array   | `[]`      | List of glob patterns that describe the files HCP Terraform monitors for changes. Trigger patterns are always appended to the root directory of the repository.  |\n| `data.attributes.trigger-prefixes`               | array   | `[]`      | List of trigger prefixes that describe the paths HCP Terraform monitors for changes, in addition to the working directory. 
Trigger prefixes are always appended to the root directory of the repository. HCP Terraform starts a run when files are changed in any directory path matching the provided set of prefixes. |\n| `data.attributes.vcs-repo.branch`               | string  | repository's default branch | The repository branch that Terraform executes from. If omitted or submitted as an empty string, this defaults to the repository's default branch. |\n| `data.attributes.vcs-repo.identifier` | string  | none | A reference to your VCS repository in the format :org\/:repo where :org and :repo refer to the organization and repository in your VCS provider. The format for Azure DevOps is `:org\/:project\/_git\/:repo`. |\n| `data.attributes.vcs-repo.ingress-submodules`   | boolean | `false`   | Whether submodules should be fetched when cloning the VCS repository. |\n| `data.attributes.vcs-repo.oauth-token-id`       | string  | none | Specifies the VCS OAuth connection and token. Call the [`oauth-tokens`](\/terraform\/cloud-docs\/api-docs\/oauth-tokens) endpoint to retrieve the OAuth ID. |\n| `data.attributes.vcs-repo.tags-regex`           | string  | none | A regular expression used to match Git tags. HCP Terraform triggers a run when this value is present and a VCS event occurs that contains a matching Git tag for the regular expression.|\n| `data.attributes.vcs-repo` | object  | none | Settings for the workspace's VCS repository. If omitted, the workspace is created without a VCS repo. If included, you must specify at least the `oauth-token-id` and `identifier` keys. |\n| `data.attributes.working-directory`             | string  | (nothing) | A relative path that Terraform will execute within. This defaults to the root of your repository and is typically set to a subdirectory matching the environment when multiple environments exist within the same repository. 
|\n| `data.attributes.setting-overwrites` | object  | none |   The keys in this object are attributes that have organization-level defaults. Each attribute key stores a boolean value which is `true` by default. To overwrite the default inherited value, set an attribute's value to `false`. For example, to set `execution-mode` as the organization default, set `setting-overwrites.execution-mode` to `false`. |\n| `data.relationships` | object | none | Specifies a group of workspace associations. |\n| `data.relationships.project.data.id` | string  | default project | The ID of the project to create the workspace in. If left blank, Terraform creates the workspace in the organization's default project. You must have permission to create workspaces in the project, either by organization-level permissions or team admin access to a specific project. |\n| `data.relationships.tag-bindings.data`          | list of objects | none | Specifies a list of tags to attach to the workspace. |\n| `data.relationships.tag-bindings.data.type`     | string | none | Must be `tag-bindings` for each object in the list. |\n| `data.relationships.tag-bindings.data.attributes.key`   | string | none | Specifies the tag key for each object in the list. |\n| `data.relationships.tag-bindings.data.attributes.value` | string | none | Specifies the tag value for each object in the list. 
|\n\n\n### Sample Payload\n\n_Without a VCS repository_\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"workspace-1\"\n    },\n    \"type\": \"workspaces\",\n    \"relationships\": {\n      \"tag-bindings\": {\n        \"data\": [\n          {\n            \"type\": \"tag-bindings\",\n            \"attributes\": { \"key\": \"env\", \"value\": \"test\" }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n_With a VCS repository_\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"workspace-2\",\n      \"terraform-version\": \"0.11.1\",\n      \"working-directory\": \"\",\n      \"vcs-repo\": {\n        \"identifier\": \"example\/terraform-test-proj\",\n        \"oauth-token-id\": \"ot-hmAyP66qk2AMVdbJ\",\n        \"branch\": \"\",\n        \"tags-regex\": null\n      }\n    },\n    \"type\": \"workspaces\",\n    \"relationships\": {\n      \"tag-bindings\": {\n        \"data\": [\n          {\n            \"type\": \"tag-bindings\",\n            \"attributes\": { \"key\": \"env\", \"value\": \"test\" }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n_Using Git Tags_\n\nHCP Terraform triggers a run when you push a Git tag that matches the regular expression (SemVer): `1.2.3`, `22.33.44`, etc.\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"workspace-3\",\n      \"terraform-version\": \"0.12.1\",\n      \"file-triggers-enabled\": false,\n      \"working-directory\": \"\/networking\",\n      \"vcs-repo\": {\n        \"identifier\": \"example\/terraform-test-proj-monorepo\",\n        \"oauth-token-id\": \"ot-hmAyP66qk2AMVdbJ\",\n        \"branch\": \"\",\n        \"tags-regex\": \"\\\\d+.\\\\d+.\\\\d+\"\n      }\n    },\n    \"type\": \"workspaces\",\n    \"relationships\": {\n      \"tag-bindings\": {\n        \"data\": [\n          {\n            \"type\": \"tag-bindings\",\n            \"attributes\": { \"key\": \"env\", \"value\": \"test\" }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n_For 
a monorepo using trigger prefixes_\n\nA run will be triggered in this workspace when changes are detected in any of the specified directories: `\/networking`, `\/modules`, or `\/vendor`.\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"workspace-3\",\n      \"terraform-version\": \"0.12.1\",\n      \"file-triggers-enabled\": true,\n      \"trigger-prefixes\": [\"\/modules\", \"\/vendor\"],\n      \"working-directory\": \"\/networking\",\n      \"vcs-repo\": {\n        \"identifier\": \"example\/terraform-test-proj-monorepo\",\n        \"oauth-token-id\": \"ot-hmAyP66qk2AMVdbJ\",\n        \"branch\": \"\"\n      }\n    },\n    \"type\": \"workspaces\",\n    \"relationships\": {\n      \"tag-bindings\": {\n        \"data\": [\n          {\n            \"type\": \"tag-bindings\",\n            \"attributes\": { \"key\": \"env\", \"value\": \"test\" }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n_For a monorepo using trigger patterns_\n\nA run will be triggered in this workspace when HCP Terraform detects any of the following changes:\n- A file with the extension `tf` in any directory structure in which the last folder is named `networking` (e.g., `root\/networking` and `root\/module\/networking`)\n- Any file changed in the folder `\/base`; subfolders are not included\n- Any file changed in the folder `\/submodule` and all of its subfolders\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"workspace-4\",\n      \"terraform-version\": \"1.2.2\",\n      \"file-triggers-enabled\": true,\n      \"trigger-patterns\": [\"\/**\/networking\/*.tf\", \"\/base\/*\", \"\/submodule\/**\/*\"],\n      \"vcs-repo\": {\n        \"identifier\": \"example\/terraform-test-proj-monorepo\",\n        \"oauth-token-id\": \"ot-hmAyP66qk2AMVdbJ\",\n        \"branch\": \"\"\n      }\n    },\n    \"type\": \"workspaces\",\n    
\"relationships\": {\n      \"tag-bindings\": [\n        {\n          \"data\": {\n            \"type\": \"tag-binding\",\n            \"attributes\": { \"key\": \"env\", \"value\": \"test\"}\n          }\n        }\n      ]\n    }\n  }\n}\n```\n\n_Using HCP Terraform agents_\n\n[HCP Terraform agents](\/terraform\/cloud-docs\/api-docs\/agents) allow HCP Terraform to communicate with isolated, private, or on-premises infrastructure.\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\":\"workspace-1\",\n      \"execution-mode\": \"agent\",\n      \"agent-pool-id\": \"apool-ZjT6A7mVFm5WHT5a\"\n    }\n  },\n  \"type\": \"workspaces\",\n  \"relationships\": {\n    \"tag-bindings\": [\n      {\n        \"data\": {\n          \"type\": \"tag-binding\",\n          \"attributes\": { \"key\": \"env\", \"value\": \"test\"}\n        }\n      }\n    ]\n  }\n}\n```\n\n_Using an organization default execution mode_\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\":\"workspace-with-default\",\n      \"setting-overwrites\": {\n        \"execution-mode\": false\n      }\n    }\n  },\n  \"type\": \"workspaces\",\n  \"relationships\": {\n    \"tag-bindings\": [\n      {\n        \"data\": {\n          \"type\": \"tag-binding\",\n          \"attributes\": { \"key\": \"env\", \"value\": \"test\"}\n        }\n      }\n    ]\n  }\n}\n\n```\n\n_With a project_\n\n```json\n{\n  \"data\": {\n    \"type\": \"workspaces\",\n    \"attributes\": {\n      \"name\": \"workspace-in-project\"\n    },\n    \"relationships\": {\n      \"project\": {\n        \"data\": {\n          \"type\": \"projects\",\n          \"id\": \"prj-jT92VLSFpv8FwKtc\"\n        }\n      },\n      \"tag-bindings\": [\n        {\n          \"data\": {\n            \"type\": \"tag-binding\",\n            \"attributes\": { \"key\": \"env\", \"value\": \"test\"}\n          }\n        }\n      ]\n    }\n  }\n}\n```\n\n_With key-value tags_\n\n```json\n{\n  \"data\": {\n    \"type\": 
\"workspaces\",\n    \"attributes\": {\n      \"name\": \"workspace-in-project\"\n    },\n    \"relationships\": {\n      \"tag-bindings\": {\n        \"data\": {\n          \"key\": \"cost-center\",\n          \"value\": \"engineering\"\n        }\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/workspaces\n```\n\n### Sample Response\n\n_Without a VCS repository_\n\n**Note:** The `assessments-enabled` property is only accepted by or returned from HCP Terraform.\n\n@include 'api-code-blocks\/workspace.mdx'\n\n_With a VCS repository_\n\n@include 'api-code-blocks\/workspace-with-vcs.mdx'\n\n\n_With a project_\n\n```json\n{\n  \"data\": {\n    \"id\": \"ws-HRkJLSYWF97jucqQ\",\n    \"type\": \"workspaces\",\n    \"attributes\": {\n      \"allow-destroy-plan\": true,\n      \"auto-apply\": false,\n      \"auto-apply-run-trigger\": false,\n      \"auto-destroy-at\": null,\n      \"auto-destroy-activity-duration\": null,\n      \"created-at\": \"2022-12-05T20:57:13.829Z\",\n      \"environment\": \"default\",\n      \"locked\": false,\n      \"locked-reason\": \"\",\n      \"name\": \"workspace-in-project\",\n      \"queue-all-runs\": false,\n      \"speculative-enabled\": true,\n      \"structured-run-output-enabled\": true,\n      \"terraform-version\": \"1.3.5\",\n      \"working-directory\": null,\n      \"global-remote-state\": true,\n      \"updated-at\": \"2022-12-05T20:57:13.829Z\",\n      \"resource-count\": 0,\n      \"apply-duration-average\": null,\n      \"plan-duration-average\": null,\n      \"policy-check-failures\": null,\n      \"run-failures\": null,\n      \"workspace-kpis-runs-count\": null,\n      \"latest-change-at\": \"2022-12-05T20:57:13.829Z\",\n      \"operations\": true,\n      
\"execution-mode\": \"remote\",\n      \"vcs-repo\": null,\n      \"vcs-repo-identifier\": null,\n      \"permissions\": {\n        \"can-update\": true,\n        \"can-destroy\": true,\n        \"can-queue-run\": true,\n        \"can-read-variable\": true,\n        \"can-update-variable\": true,\n        \"can-read-state-versions\": true,\n        \"can-read-state-outputs\": true,\n        \"can-create-state-versions\": true,\n        \"can-queue-apply\": true,\n        \"can-lock\": true,\n        \"can-unlock\": true,\n        \"can-force-unlock\": true,\n        \"can-read-settings\": true,\n        \"can-manage-tags\": true,\n        \"can-manage-run-tasks\": false,\n        \"can-force-delete\": true,\n        \"can-manage-assessments\": true,\n        \"can-read-assessment-results\": true,\n        \"can-queue-destroy\": true\n      },\n      \"actions\": {\n        \"is-destroyable\": true\n      },\n      \"description\": null,\n      \"file-triggers-enabled\": true,\n      \"trigger-prefixes\": [],\n      \"trigger-patterns\": [],\n      \"assessments-enabled\": false,\n      \"last-assessment-result-at\": null,\n      \"source\": \"tfe-api\",\n      \"source-name\": null,\n      \"source-url\": null,\n      \"tag-names\": [],\n      \"setting-overwrites\": {\n        \"execution-mode\": false,\n        \"agent-pool\": false\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        }\n      },\n      \"current-run\": {\n        \"data\": null\n      },\n      \"effective-tag-bindings\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/workspaces\/ws-HRkJLSYWF97jucqQ\/effective-tag-bindings\"\n        }\n      },\n      \"latest-run\": {\n        \"data\": null\n      },\n      \"outputs\": {\n        \"data\": []\n      },\n      \"remote-state-consumers\": {\n        \"links\": {\n          \"related\": 
\"\/api\/v2\/workspaces\/ws-HRkJLSYWF97jucqQ\/relationships\/remote-state-consumers\"\n        }\n      },\n      \"current-state-version\": {\n        \"data\": null\n      },\n      \"current-configuration-version\": {\n        \"data\": null\n      },\n      \"agent-pool\": {\n        \"data\": null\n      },\n      \"readme\": {\n        \"data\": null\n      },\n      \"project\": {\n        \"data\": {\n          \"id\": \"prj-jT92VLSFpv8FwKtc\",\n          \"type\": \"projects\"\n        }\n      },\n      \"current-assessment-result\": {\n        \"data\": null\n      },\n      \"tag-bindings\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/workspaces\/ws-HRkJLSYWF97jucqQ\/tag-bindings\"\n      }\n      \"vars\": {\n        \"data\": []\n      },\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/my-organization\/workspaces\/workspace-in-project\"\n    }\n  }\n}\n```\n\n## Update a Workspace\n\nUse one of the following endpoint to update a workspace:\n\n- `PATCH \/organizations\/:organization_name\/workspaces\/:name`\n- `PATCH \/workspaces\/:workspace_id`\n\n| Parameter       | Description                       |\n| --------------- | --------------------------------- |\n| `:workspace_id` | The ID of the workspace to update |\n| `:organization_name` | The name of the organization the workspace belongs to. |\n| `:name` | The name of the workspace to update. Workspace names are unique identifiers in the organization and can only include letters, numbers, `-`, and `_`. 
|\n\n\n### Request Body\n\nThese PATCH endpoints require a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                      | Type           | Default          | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |\n|-----------------------------------------------|----------------|------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `data.type`                                   | string         |                  | Must be `\"workspaces\"`.                                                                                                                                                                                                                                                                                                                                                                                                                                                         
|\n| `data.attributes.name`                        | string         | (previous value) | A new name for the workspace, which can only include letters, numbers, `-`, and `_`. This will be used as an identifier and must be unique in the organization. **Warning:** Changing a workspace's name changes its URL in the API and UI.                                                                                                                                                                                                                                     |\n| `data.attributes.agent-pool-id`               | string         | (previous value) | Required when `execution-mode` is set to `agent`. The ID of the agent pool belonging to the workspace's organization. This value must not be specified if `execution-mode` is set to `remote` or `local` or if `operations` is set to `true`.                                                                                                                                                                                                                                   |\n| `data.attributes.allow-destroy-plan`          | boolean        | (previous value) | Whether destroy plans can be queued on the workspace.                                                                                                                                                                                                                                                                                                                                                                                                                           |\n| `data.attributes.assessments-enabled`         | boolean        | `false`          | (previously `drift-detection`) Whether or not HCP Terraform performs health assessments for the workspace. May be overridden by the organization setting `assessments-enforced`. 
Only available for Plus tier organizations, in workspaces running Terraform version 0.15.4+ and operating in [Remote execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode).                                                                                               |\n| `data.attributes.auto-apply`                  | boolean        | (previous value) | Whether to automatically apply changes when a Terraform plan is successful in runs initiated by VCS, UI or CLI, [with some exceptions](\/terraform\/cloud-docs\/workspaces\/settings#auto-apply-and-manual-apply).                                                                                                                                                                                                                                                                                                      |\n| `data.attributes.auto-apply-run-trigger`      | boolean        | (previous value) | Whether to automatically apply changes when a Terraform plan is successful in runs initiated by run triggers.                                                                                                                                                                                                                                                                                                    |\n| `data.attributes.auto-destroy-at`             | string         | (previous value) | Timestamp when the next scheduled destroy run will occur, refer to [Scheduled Destroy](\/terraform\/cloud-docs\/workspaces\/settings\/deletion#automatically-destroy).                                                                                                                                                                                                                                                                                                               
|\n| `data.attributes.auto-destroy-activity-duration`| string       | (previous value) | Value and units for [automatically scheduled destroy runs based on workspace activity](\/terraform\/cloud-docs\/workspaces\/settings\/deletion#automatically-destroy). Valid values are greater than 0 and four digits or less. Valid units are `d` and `h`. For example, to queue destroy runs after fourteen days of inactivity set `auto-destroy-activity-duration: \"14d\"`.                                                                                                       |\n| `data.attributes.description`                 | string         | (previous value) | A description for the workspace.                                                                                                                                                                                                                                                                                                                                                                                                                                                |\n| `data.attributes.execution-mode`              | string         | (previous value) | Which [execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) to use. Valid values are `remote`, `local`, and `agent`. When set to `local`, the workspace will be used for state storage only. This value _must not_ be specified if `operations` is specified, and _must_ be specified if `setting-overwrites.execution-mode` is set to `true`.                                                                                                                                                                                                |\n| `data.attributes.file-triggers-enabled`       | boolean        | (previous value) | Whether to filter runs based on the changed files in a VCS push. 
If enabled, it uses either `trigger-prefixes` in conjunction with `working_directory` or `trigger-patterns` to describe the set of changed files that will start a run. If disabled, any push will trigger a run.                                                                                                                                                                                              |\n| `data.attributes.global-remote-state`         | boolean        | (previous value) | Whether the workspace should allow all workspaces in the organization to [access its state data](\/terraform\/cloud-docs\/workspaces\/state) during runs. If `false`, then only specifically approved workspaces can access its state. Manage allowed workspaces using the [Remote State Consumers](\/terraform\/cloud-docs\/api-docs\/workspaces#get-remote-state-consumers) endpoints, documented later on this page.                                                                 |\n| `data.attributes.operations`                  | boolean        | (previous value) | **DEPRECATED** Use `execution-mode` instead. Whether to use remote execution mode. When set to `false`, the workspace will be used for state storage only. This value must not be specified if `execution-mode` is specified.                                                                                                                                                                                                                                                   |\n| `data.attributes.queue-all-runs`              | boolean        | (previous value) | Whether runs should be queued immediately after workspace creation. When set to false, runs triggered by a VCS change will not be queued until at least one run is manually queued.                                                                                                                                                                                                                           
                                                                  |\n| `data.attributes.speculative-enabled`         | boolean        | (previous value) | Whether this workspace allows automatic [speculative plans][]. Setting this to `false` prevents HCP Terraform from running plans on pull requests, which can improve security if the VCS repository is public or includes untrusted contributors. It doesn't prevent manual speculative plans via the CLI or the runs API.                                                                                                                                                    |\n| `data.attributes.terraform-version`           | string         | (previous value) | The version of Terraform to use for this workspace. This can be either an exact version or a [version constraint](\/terraform\/language\/expressions\/version-constraints) (like `~> 1.0.0`); if you specify a constraint, the workspace will always use the newest release that meets that constraint.                                                                                                                                                                             |\n| `data.attributes.trigger-patterns`            | array          | (previous value) | List of glob patterns that describe the files HCP Terraform monitors for changes. Trigger patterns are always appended to the root directory of the repository.                                                                                                                                                                                                                                                                                                               |\n| `data.attributes.trigger-prefixes`            | array          | (previous value) | List of trigger prefixes that describe the paths HCP Terraform monitors for changes, in addition to the working directory. 
Trigger prefixes are always appended to the root directory of the repository. HCP Terraform will start a run when files are changed in any directory path matching the provided set of prefixes.                                                                                                                                                 |\n| `data.attributes.vcs-repo.branch`             | string         | (previous value) | The repository branch that Terraform will execute from.                                                                                                                                                                                                                                                                                                                                                                                                                         |\n| `data.attributes.vcs-repo.identifier`         | string         | (previous value) | A reference to your VCS repository in the format :org\/:repo where :org and :repo refer to the organization and repository in your VCS provider. The format for Azure DevOps is `:org\/:project\/_git\/:repo`.                                                                                                                                                                                                                                                                      |\n| `data.attributes.vcs-repo.ingress-submodules` | boolean        | (previous value) | Whether submodules should be fetched when cloning the VCS repository.                                                                                                                                                                                                                                                                                                                                                                                                           
|\n| `data.attributes.vcs-repo.oauth-token-id`     | string  |         | The VCS OAuth connection and token to use, identified by its ID. Get this ID from the [`oauth-tokens`](\/terraform\/cloud-docs\/api-docs\/oauth-tokens) endpoint. You cannot specify this value if `github-app-installation-id` is specified. |\n| `data.attributes.vcs-repo.github-app-installation-id` | string |  | The VCS Connection GitHub App Installation to use. Find this ID on the account settings page. Requires previously authorizing the GitHub App and generating a user-to-server token. Manage the token from **Account Settings** within HCP Terraform. You cannot specify this value if `oauth-token-id` is specified. |\n| `data.attributes.vcs-repo.tags-regex`         | string         | (previous value) | A regular expression used to match Git tags. HCP Terraform triggers a run when this value is present and a VCS event occurs that contains a matching Git tag for the regular expression. |\n| `data.attributes.vcs-repo`                    | object or null | (previous value) | To delete a workspace's existing VCS repo, specify `null` instead of an object. To modify a workspace's existing VCS repo, include whichever of the keys below you wish to modify. 
To add a new VCS repo to a workspace that didn't previously have one, include at least the `oauth-token-id` and `identifier` keys. |\n| `data.attributes.working-directory`           | string         | (previous value) | A relative path that Terraform will execute within. This defaults to the root of your repository and is typically set to a subdirectory matching the environment when multiple environments exist within the same repository. |\n| `data.attributes.setting-overwrites`          | object  |      | The keys in this object are attributes that have organization-level defaults. Each attribute key stores a boolean value which is `true` by default. To overwrite the default inherited value, set an attribute's value to `false`. For example, to set `execution-mode` as the organization default, set `setting-overwrites.execution-mode` to `false`. |\n| `data.relationships` | object | none | Specifies a group of workspace relationships. |\n| `data.relationships.project.data.id`          | string         | existing value | The ID of the project to move the workspace to. If left blank or unchanged, the workspace will not be moved. You must have admin permissions on both the source project and destination project to move a workspace between projects. |\n| `data.relationships.tag-bindings.data`        | list of objects | none | Specifies a list of tags to attach to the workspace. |\n| `data.relationships.tag-bindings.data.type`   | string | none | Must be `tag-bindings` for each object in the list. 
|\n| `data.relationships.tag-bindings.data.attributes.key`   | string | none | Specifies the tag key for each object in the list. |\n| `data.relationships.tag-bindings.data.attributes.value` | string | none | Specifies the tag value for each object in the list. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"workspace-2\",\n      \"terraform-version\": \"0.11.1\",\n      \"working-directory\": \"\",\n      \"vcs-repo\": {\n        \"identifier\": \"example\/terraform-test-proj\",\n        \"branch\": \"\",\n        \"ingress-submodules\": false,\n        \"oauth-token-id\": \"ot-hmAyP66qk2AMVdbJ\"\n      }\n    },\n    \"relationships\": {\n      \"project\": {\n        \"data\": {\n          \"type\": \"projects\",\n          \"id\": \"prj-7HWWPGY3fYxztELU\"\n        }\n      },\n      \"tag-bindings\": {\n        \"data\": [\n          {\n            \"type\": \"tag-bindings\",\n            \"attributes\": {\n              \"key\": \"environment\",\n              \"value\": \"development\"\n            }\n          }\n        ]\n      }\n    },\n    \"type\": \"workspaces\"\n  }\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/workspaces\/workspace-2\n```\n\n### Sample Response\n\n@include 'api-code-blocks\/workspace-with-vcs.mdx'\n\n## List workspaces\n\nThis endpoint lists workspaces in the organization.\n\n`GET \/organizations\/:organization_name\/workspaces`\n\n| Parameter            | Description                                             |\n| -------------------- | ------------------------------------------------------- |\n| `:organization_name` | The name of the organization to list the 
workspaces of. |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter                     | Description                                                                                                                                                                                                                                                                                                                                                                                                          |\n|-------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `page[number]`                | **Optional.** If omitted, the endpoint will return the first page.                                                                                                                                                                                                                                                                                                                                                   |\n| `page[size]`                  | **Optional.** If omitted, the endpoint will return 20 workspaces per page.                                                                                                                                                                                                                                                                                               
                                            |\n| `search[name]`                | **Optional.** If specified, restricts results to workspaces with a name that matches the search string using a fuzzy search.                                                                                                                                                                                                                                                                                         |\n| `search[tags]`                | **Optional.** If specified, restricts results to workspaces with that tag. If multiple comma separated values are specified, results matching all of the tags are returned.                                                                                                                                                                                                                                          |\n| `search[exclude-tags]`        | **Optional.** If specified, results exclude workspaces with that tag. If multiple comma separated values are specified, workspaces with tags matching any of the tags are excluded.                                                                                                                                                                                                                                  |\n| `search[wildcard-name]`       | **Optional.** If specified, restricts results to workspaces with partial matching, using `*` on prefix, suffix, or both. For example, `search[wildcard-name]=*-prod` returns all workspaces ending in `-prod`, `search[wildcard-name]=prod-*` returns all workspaces beginning with `prod-`, and `search[wildcard-name]=*-prod-*` returns all workspaces with substring `-prod-` regardless of prefix and\/or suffix. |\n| `sort`                        | **Optional.** Allows sorting the organization's workspaces by a provided value. 
You can sort by `\"name\"`, `\"current-run.created-at\"` (the time of the current run), and `\"latest-change-at\"` (the creation time of the latest state version or the workspace itself if no state version exists). Prepending a hyphen to the sort parameter reverses the order. For example, `\"-name\"` sorts by name in reverse alphabetical order. If omitted, the default sort order is arbitrary but stable. |\n| `filter[project][id]`         | **Optional.** If specified, restricts results to workspaces in the specific project.                                                                                                                                                                                                                                                                                                                                 |\n| `filter[current-run][status]` | **Optional.** If specified, restricts results to workspaces that match the status of a current run.                                                                                                                                                                                                                                                                                                                  |\n| `filter[tagged][i][key]`      | **Optional.** If specified, restricts results to workspaces that are tagged with the provided key. Use a value of \"0\" for `i` if you are only using a single filter. For multiple tag filters, use an incrementing integer value for each filter. Multiple tag filters will be combined together with a logical AND when filtering results. |\n| `filter[tagged][i][value]`    | **Optional.** If specified, restricts results to workspaces that are tagged with the provided value. This is useful when combined with a `key` filter for more specificity. Use a value of \"0\" for `i` if you are only using a single filter. 
For multiple tag filters, use an incrementing integer value for each filter. Multiple tag filters will be combined together with a logical AND when filtering results. |\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/workspaces\n```\n\n_With multiple tag filters_\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/workspaces?filter%5Btagged%5D%5B0%5D%5Bkey%5D=environment&filter%5Btagged%5D%5B0%5D%5Bvalue%5D=development&filter%5Btagged%5D%5B1%5D%5Bkey%5D=meets-compliance\"\n```\n\n### Sample Response\n\n@include 'api-code-blocks\/workspaces-list.mdx'\n\n## Show workspace\n\nDetails on a workspace can be retrieved from two endpoints, which behave identically.\n\nOne refers to a workspace by its ID:\n\n`GET \/workspaces\/:workspace_id`\n\n| Parameter       | Description      |\n| --------------- | ---------------- |\n| `:workspace_id` | The workspace ID |\n\nThe other refers to a workspace by its name and organization:\n\n`GET \/organizations\/:organization_name\/workspaces\/:name`\n\n| Parameter            | Description                                                                                           |\n| -------------------- | ----------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization the workspace belongs to.                                                |\n| `:name`              | The name of the workspace to show details for, which can only include letters, numbers, `-`, and `_`. 
|\n\n### Workspace performance attributes\n\nThe following attributes are helpful in determining the overall health and performance of your workspace configuration.  These metrics refer to the **past 30 runs that have either resulted in an error or successfully applied**.\n\n| Parameter                                  | Type   | Description                                                                             |\n| ------------------------------------------ | ------ | --------------------------------------------------------------------------------------- |\n| `data.attributes.apply-duration-average`   | number | This is the average time runs spend in the **apply** phase, represented in milliseconds |\n| `data.attributes.plan-duration-average`    | number | This is the average time runs spend in the **plan** phase, represented in milliseconds  |\n| `data.attributes.policy-check-failures`    | number | Reports the number of run failures resulting from a policy check failure                |\n| `data.attributes.run-failures`             | number | Reports the number of failed runs                                                       |\n| `data.attributes.workspace-kpis-run-count` | number | Total number of runs taken into account by these metrics                                |\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/workspaces\/workspace-1\n```\n\n### Sample Response\n\n@include 'api-code-blocks\/workspace.mdx'\n\n## Safe Delete a workspace\n\nWhen you delete an HCP Terraform workspace with resources, Terraform can no longer track or manage that infrastructure. During a safe delete, HCP Terraform only deletes the workspace if it is not managing resources.\n\nYou can safe delete a workspace using two endpoints that behave identically. 
The first endpoint identifies a workspace with the workspace ID, and the other identifies the workspace by its name and organization.\n\n`POST \/workspaces\/:workspace_id\/actions\/safe-delete`\n\n| Parameter       | Description                       |\n| --------------- | --------------------------------- |\n| `:workspace_id` | The ID of the workspace to delete. |\n\n`POST \/organizations\/:organization_name\/workspaces\/:name\/actions\/safe-delete`\n\n| Parameter            | Description                                                                                 |\n| -------------------- | ------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the workspace's organization.                                      |\n| `:name`              | The name of the workspace to delete, which can only include letters, numbers, `-`, and `_`. |\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | --------------------------| --------------------------------------------------------------------- |\n| [204][] | No Content                | Successfully deleted the workspace                                    |\n| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform workspace delete |\n| [409][] | [JSON API error object][] | Workspace is not safe to delete because it is managing resources                                       |\n\n## Force Delete a workspace\n\nDuring a force delete, HCP Terraform removes the specified workspace without checking whether it is managing resources. We recommend using the [safe delete endpoint](#safe-delete-a-workspace) instead, when possible.\n\n!> **Warning:** Terraform cannot track or manage the workspace's infrastructure after deletion. 
We recommend [destroying the workspace's infrastructure](\/terraform\/cloud-docs\/run\/modes-and-options#destroy-mode) before you delete it.\n\nBy default, only organization owners can force delete workspaces. Organization owners can also update the [organization's settings](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#general) to let workspace admins force delete their own workspaces.\n\nYou can use two endpoints to force delete a workspace, which behave identically. One endpoint identifies the workspace with its workspace ID and the other endpoint identifies the workspace with its name and organization.\n\n`DELETE \/workspaces\/:workspace_id`\n\n| Parameter       | Description                       |\n| --------------- | --------------------------------- |\n| `:workspace_id` | The ID of the workspace to delete |\n\n`DELETE \/organizations\/:organization_name\/workspaces\/:name`\n\n| Parameter            | Description                                                                                 |\n| -------------------- | ------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization the workspace belongs to.                                      |\n| `:name`              | The name of the workspace to delete, which can only include letters, numbers, `-`, and `_`. 
|\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | --------------------------| --------------------------------------------------------------------- |\n| [204][] | No Content                | Successfully deleted the workspace                                    |\n| [403][] | [JSON API error object][] | Not authorized to perform a force delete on the workspace             |\n| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform workspace delete |\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/workspaces\/workspace-1\n```\n\n## Lock a workspace\n\nThis endpoint locks a workspace.\n\n`POST \/workspaces\/:workspace_id\/actions\/lock`\n\n| Parameter       | Description                                                                                                                                             |\n| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The workspace ID to lock. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. 
|\n\n| Status  | Response                                     | Reason(s)                                                   |\n| ------- | -------------------------------------------- | ----------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"workspaces\"`) | Successfully locked the workspace                           |\n| [404][] | [JSON API error object][]                    | Workspace not found, or user unauthorized to perform action |\n| [409][] | [JSON API error object][]                    | Workspace already locked                                    |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type   | Default | Description                           |\n| -------- | ------ | ------- | ------------------------------------- |\n| `reason` | string | `\"\"`    | The reason for locking the workspace. |\n\n### Sample Payload\n\n```json\n{\n  \"reason\": \"Locking workspace-1\"\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-SihZTyXKfNXUWuUa\/actions\/lock\n```\n\n### Sample Response\n\n@include 'api-code-blocks\/workspace.mdx'\n\n## Unlock a workspace\n\nThis endpoint unlocks a workspace. Unlocking a workspace sets the current state version to the latest finalized intermediate state version. If intermediate state versions are available, but HCP Terraform has not yet finalized the latest intermediate state version, the unlock will fail with a 503 response. For this particular error, it's recommended to retry the unlock operation for a short period of time until the platform finalizes the state version. 
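The short retry window described above can be handled by a client script. The following is a minimal, unofficial sketch using only the Python standard library; the token value, the retry count, and the delay are illustrative placeholders, not recommended settings:

```python
import time
import urllib.error
import urllib.request


def should_retry(status: int) -> bool:
    """Only 503 signals a not-yet-finalized state version; other errors are not transient."""
    return status == 503


def unlock_workspace(workspace_id: str, token: str, retries: int = 5, delay: float = 2.0) -> int:
    """POST to the unlock action, retrying briefly while HCP Terraform returns 503."""
    url = f"https://app.terraform.io/api/v2/workspaces/{workspace_id}/actions/unlock"
    req = urllib.request.Request(
        url,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/vnd.api+json",
        },
    )
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status  # 200 means the workspace was unlocked
        except urllib.error.HTTPError as err:
            if should_retry(err.code) and attempt < retries - 1:
                time.sleep(delay)  # wait for the state version to be finalized
                continue
            raise
    raise RuntimeError("unreachable")


# Usage (hypothetical IDs): unlock_workspace("ws-SihZTyXKfNXUWuUa", token="...")
```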
If you must force-unlock a workspace under these conditions, ensure that state was saved successfully by inspecting the latest state version using the [State Version List API](\/terraform\/cloud-docs\/api-docs\/state-versions#list-state-versions-for-a-workspace)\n\n`POST \/workspaces\/:workspace_id\/actions\/unlock`\n\n| Parameter       | Description                                                                                                                                               |\n| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The workspace ID to unlock. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. |\n\n| Status  | Response                                     | Reason(s)                                                   |\n| ------- | -------------------------------------------- | ----------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"workspaces\"`) | Successfully unlocked the workspace                         |\n| [404][] | [JSON API error object][]                    | Workspace not found, or user unauthorized to perform action |\n| [409][] | [JSON API error object][]                    | Workspace already unlocked, or locked by a different user   |\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-SihZTyXKfNXUWuUa\/actions\/unlock\n```\n\n### Sample Response\n\n@include 'api-code-blocks\/workspace.mdx'\n\n## Force Unlock a workspace\n\nThis endpoint force unlocks a workspace. 
Only users with admin access are authorized to force unlock a workspace.\n\n`POST \/workspaces\/:workspace_id\/actions\/force-unlock`\n\n| Parameter       | Description                                                                                                                                                     |\n| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The workspace ID to force unlock. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. |\n\n| Status  | Response                                     | Reason(s)                                                   |\n| ------- | -------------------------------------------- | ----------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"workspaces\"`) | Successfully force unlocked the workspace                   |\n| [404][] | [JSON API error object][]                    | Workspace not found, or user unauthorized to perform action |\n| [409][] | [JSON API error object][]                    | Workspace already unlocked                                  |\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-SihZTyXKfNXUWuUa\/actions\/force-unlock\n```\n\n### Sample Response\n\n@include 'api-code-blocks\/workspace-with-vcs.mdx'\n\n## Assign an SSH key to a workspace\n\nThis endpoint assigns an SSH key to a workspace.\n\n`PATCH \/workspaces\/:workspace_id\/relationships\/ssh-key`\n\n| Parameter       | Description                                                                                                                                             
                  |\n| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:workspace_id` | The workspace ID to assign the SSH key to. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path             | Type   | Default | Description                                                                                        |\n| -------------------- | ------ | ------- | -------------------------------------------------------------------------------------------------- |\n| `data.type`          | string |         | Must be `\"workspaces\"`.                                                                            |\n| `data.attributes.id` | string |         | The SSH key ID to assign. Obtain this from the [ssh-keys](\/terraform\/cloud-docs\/api-docs\/ssh-keys) endpoint. 
|\n\n#### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"id\": \"sshkey-GxrePWre1Ezug7aM\"\n    },\n    \"type\": \"workspaces\"\n  }\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-SihZTyXKfNXUWuUa\/relationships\/ssh-key\n```\n\n### Sample Response\n\n@include 'api-code-blocks\/workspace-with-vcs.mdx'\n\n## Unassign an SSH key from a workspace\n\nThis endpoint unassigns the currently assigned SSH key from a workspace.\n\n`PATCH \/workspaces\/:workspace_id\/relationships\/ssh-key`\n\n| Parameter       | Description                                                                                                                                                              |\n| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:workspace_id` | The workspace ID to unassign the SSH key from. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path             | Type   | Default | Description             |\n| -------------------- | ------ | ------- | ----------------------- |\n| `data.type`          | string |         | Must be `\"workspaces\"`. |\n| `data.attributes.id` | string |         | Must be `null`.         
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"id\": null\n    },\n    \"type\": \"workspaces\"\n  }\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-SihZTyXKfNXUWuUa\/relationships\/ssh-key\n```\n\n### Sample Response\n\n@include 'api-code-blocks\/workspace-with-vcs.mdx'\n\n## Get Remote State Consumers\n\n`GET \/workspaces\/:workspace_id\/relationships\/remote-state-consumers`\n\n| Parameter       | Description                                                                                                                                                                       |\n| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The workspace ID to get remote state consumers for. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. 
|\n\nThis endpoint retrieves the list of other workspaces that are allowed to access the given workspace's state during runs.\n\n* If `global-remote-state` is set to false for the workspace, this will return the list of other workspaces that are specifically authorized to access the workspace's state.\n* If `global-remote-state` is set to true, this will return a list of every workspace in the organization except for the subject workspace.\n\nThe list returned by this endpoint is subject to the caller's normal workspace permissions; it will not include workspaces that the provided API token is unable to read.\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                |\n| -------------- | -------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.         |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 workspaces per page. 
|\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-SihZTyXKfNXUWuUa\/relationships\/remote-state-consumers\n```\n\n### Sample Response\n\n@include 'api-code-blocks\/workspaces-list.mdx'\n\n## Replace Remote State Consumers\n\n`PATCH \/workspaces\/:workspace_id\/relationships\/remote-state-consumers`\n\n| Parameter       | Description                                                                                                                                                                           |\n| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The workspace ID to replace remote state consumers for. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. |\n\nThis endpoint updates the workspace's remote state consumers to be _exactly_ the list of workspaces specified in the payload. It can only be used for workspaces where `global-remote-state` is false.\n\nThis endpoint can only be used by teams with permission to manage workspaces for the entire organization \u2014\u00a0only those who can _view_ the entire list of consumers can _replace_ the entire list. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)) Teams with admin permissions on specific workspaces can still modify remote state consumers for those workspaces, but must use the add (POST) and remove (DELETE) endpoints listed below instead of this PATCH endpoint.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | ------------------------- | --------------------------------------------------------------------- |\n| [204][] | No Content                | Successfully updated remote state consumers                           |\n| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action           |\n| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path      | Type   | Default | Description                                                 |\n| ------------- | ------ | ------- | ----------------------------------------------------------- |\n| `data[].type` | string |         | Must be `\"workspaces\"`.                                     |\n| `data[].id`   | string |         | The ID of a workspace to be set as a remote state consumer. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"ws-7aiqKYf6ejMFdtWS\",\n      \"type\": \"workspaces\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-UYv6RYM8fVhzeGG5\/relationships\/remote-state-consumers\n```\n\n### Response\n\nNo response body.\n\nStatus code `204`.\n\n## Add Remote State Consumers\n\n`POST \/workspaces\/:workspace_id\/relationships\/remote-state-consumers`\n\n| Parameter       | Description                                                                                                                                                                       |\n| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The workspace ID to add remote state consumers for. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. |\n\nThis endpoint adds one or more remote state consumers to the workspace. It can only be used for workspaces where `global-remote-state` is false.\n\n* The workspaces specified as consumers must be readable to the API token that makes the request.\n* A workspace cannot be added as a consumer of itself. 
(A workspace can always read its own state, regardless of access settings.)\n* You can safely add a consumer workspace that is already present; it will be ignored, and the rest of the consumers in the request will be processed normally.\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | ------------------------- | --------------------------------------------------------------------- |\n| [204][] | No Content                | Successfully updated remote state consumers                           |\n| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action           |\n| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path      | Type   | Default | Description                                                 |\n| ------------- | ------ | ------- | ----------------------------------------------------------- |\n| `data[].type` | string |         | Must be `\"workspaces\"`.                                     |\n| `data[].id`   | string |         | The ID of a workspace to be set as a remote state consumer. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"ws-7aiqKYf6ejMFdtWS\",\n      \"type\": \"workspaces\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-UYv6RYM8fVhzeGG5\/relationships\/remote-state-consumers\n```\n\n### Response\n\nNo response body.\n\nStatus code `204`.\n\n## Delete Remote State Consumers\n\n`DELETE \/workspaces\/:workspace_id\/relationships\/remote-state-consumers`\n\n| Parameter       | Description                                                                                                                                                                          |\n| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:workspace_id` | The workspace ID to remove remote state consumers for. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. |\n\nThis endpoint removes one or more remote state consumers from a workspace, according to the contents of the payload. 
It can only be used for workspaces where `global-remote-state` is false.\n\n* The workspaces specified as consumers must be readable to the API token that makes the request.\n* You can safely remove a consumer workspace that is already absent; it will be ignored, and the rest of the consumers in the request will be processed normally.\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | ------------------------- | --------------------------------------------------------------------- |\n| [204][] | No Content                | Successfully updated remote state consumers                           |\n| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action           |\n| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object |\n\n### Request Body\n\nThis DELETE endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path      | Type   | Default | Description                                                      |\n| ------------- | ------ | ------- | ---------------------------------------------------------------- |\n| `data[].type` | string |         | Must be `\"workspaces\"`.                                          |\n| `data[].id`   | string |         | The ID of a workspace to remove from the remote state consumers. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"ws-7aiqKYf6ejMFdtWS\",\n      \"type\": \"workspaces\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-UYv6RYM8fVhzeGG5\/relationships\/remote-state-consumers\n```\n\n### Response\n\nNo response body.\n\nStatus code `204`.\n\n## Get tags\n\nCall the following endpoints to list the tags attached to a workspace:\n\n- `GET \/workspaces\/:workspace_id\/relationships\/tags`: Lists flat string tags attached to the workspace.\n- `GET \/workspaces\/:workspace_id\/tag-bindings`: Lists key-value tags directly bound to the workspace.\n- `GET \/workspaces\/:workspace_id\/effective-tag-bindings`: Lists all key-value tags bound to the workspace, including those inherited from the parent project.\n\n### Path parameters\n\n| Parameter       | Description                                                                                                                                                       |\n| --------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The workspace ID to fetch tags for. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. |\n\n### Query Parameters\n\nThe flat string tags endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). 
Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                |\n| -------------- | -------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.         |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 tags per page.       |\n\n### Sample Requests\n\n<CodeBlockConfig hideClipboard heading=\"List flat string tags attached to the workspace\">\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/workspace-2\/relationships\/tags\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig hideClipboard heading=\"List key-value tags bound directly to the workspace\">\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/workspace-2\/tag-bindings\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig hideClipboard heading=\"List all key-value tags bound to the workspace, including those inherited from the parent project\">\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/workspace-2\/effective-tag-bindings\n```\n\n<\/CodeBlockConfig>\n\n### Sample Responses\n\n<CodeBlockConfig hideClipboard heading=\"List flat string tags attached to the workspace\">\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"tag-1\",\n      \"type\": \"tags\",\n      \"attributes\": {\n        \"name\": \"tag1\",\n        \"created-at\":  \"2022-03-09T06:04:39.585Z\",\n        \"instance-count\": 1\n      },\n      \"relationships\": {\n   
     \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      }\n    },\n    {\n      \"id\": \"tag-2\",\n      \"type\": \"tags\",\n      \"attributes\": {\n        \"name\": \"tag2\",\n        \"created-at\":  \"2022-03-09T06:04:39.585Z\",\n        \"instance-count\": 2\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      }\n    }\n  ]\n}\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig hideClipboard heading=\"List key-value tags bound directly to the workspace\">\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"tb-RKs9JSC2lInns32K\",\n      \"type\": \"tag-bindings\",\n      \"attributes\": {\n        \"key\": \"ws1-key\",\n        \"value\": \"ws1-value\"\n      }\n    }\n  ]\n}\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig hideClipboard heading=\"List all key-value tags bound to the workspace, including those inherited from the parent project\">\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"effective-tag-bindings\",\n      \"attributes\": {\n        \"key\": \"ws1-key\",\n        \"value\": \"ws1-value\"\n      }\n    },\n    {\n      \"type\": \"effective-tag-bindings\",\n      \"attributes\": {\n        \"key\": \"key1\",\n        \"value\": \"value1\"\n      }\n    },\n    {\n      \"type\": \"effective-tag-bindings\",\n      \"attributes\": {\n        \"key\": \"key2\",\n        \"value\": \"value2\"\n      }\n    }\n  ]\n}\n```\n\n<\/CodeBlockConfig>\n\n## Add flat string tags to a workspace\n\n`POST \/workspaces\/:workspace_id\/relationships\/tags`\n\nTo add key-value tags to an existing workspace, call the `PATCH \/workspaces\/:workspace_id` and provide workspace tag bindings in the JSON payload. 
Refer to [Update a workspace](#update-a-workspace) for additional information.\n\nYou can also bind key-value tags when creating a workspace. Refer to [Create a workspace](#create-a-workspace) for additional information.\n\nRefer to [Define project tags](\/terraform\/cloud-docs\/projects\/manage#define-project-tags) for information about supported tag values.\n\n| Parameter       | Description                                                                                                                                                    |\n| --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The workspace ID to add tags to. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. |\n\n| Status  | Response                  | Reason(s)                                                   |\n| ------- | ------------------------- | ----------------------------------------------------------- |\n| [204][] | No Content                | Successfully added tags to workspace                        |\n| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nIt is important to note that `type`, as well as one of `id` _or_ `attributes.name` is required.\n\n| Key path                 | Type   | Default | Description                 |\n| ------------------------ | ------ | ------- | --------------------------- |\n| `data[].type`            | string |         | Must be `\"tags\"`.           |\n| `data[].id`              | string |         | The ID of the tag to add.   |\n| `data[].attributes.name` | string |         | The name of the tag to add. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"tags\",\n      \"attributes\": {\n        \"name\": \"foo\"\n      }\n    },\n    {\n      \"type\": \"tags\",\n      \"attributes\": {\n        \"name\": \"bar\"\n      }\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/workspace-2\/relationships\/tags\n```\n\n### Sample Response\n\nNo response body.\n\nStatus code `204`.\n\n## Remove tags from workspace\n\nThis endpoint removes one or more tags from a workspace. The workspace must already exist, and any tag\nelement that supplies an `id` attribute must exist. If the `name` attribute is used, and no matching\norganization tag is found, no action will occur for that entry. Tags removed from all workspaces will be\nremoved from the organization-wide list.\n\nTo remove key-value tags from an existing workspace, call the `PATCH \/workspaces\/:workspace_id` endpoint and provide workspace tag bindings in the JSON payload. Refer to [Update a workspace](#update-a-workspace) for additional information.\n\n`DELETE \/workspaces\/:workspace_id\/relationships\/tags`\n\n| Parameter       | Description                                                                                                                                                         |\n| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The workspace ID to remove tags from. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](#show-workspace) endpoint. 
|\n\n| Status  | Response                  | Reason(s)                                                   |\n| ------- | ------------------------- | ----------------------------------------------------------- |\n| [204][] | No Content                | Successfully removed tags from workspace                    |\n| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action |\n\n### Request Body\n\nThis DELETE endpoint requires a JSON object with the following properties as a request payload.\n\nIt is important to note that `type`, as well as one of `id` _or_ `attributes.name`, is required.\n\n| Key path                 | Type   | Default | Description                    |\n| ------------------------ | ------ | ------- | ------------------------------ |\n| `data[].type`            | string |         | Must be `\"tags\"`.              |\n| `data[].id`              | string |         | The ID of the tag to remove.   |\n| `data[].attributes.name` | string |         | The name of the tag to remove. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"tags\",\n      \"id\": \"tag-Yfha4YpPievQ8wJw\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/workspace-2\/relationships\/tags\n```\n\n### Sample Response\n\nNo response body.\n\nStatus code `204`.\n\n## Show data retention policy\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform.\n<\/EnterpriseAlert>\n\n`GET \/workspaces\/:workspace_id\/relationships\/data-retention-policy`\n\n| Parameter            | Description                                                         |\n| ---------------------| --------------------------------------------------------------------|\n| `:workspace_id`      | The ID of the workspace to show the data retention policy for. Obtain this from the [workspace settings](\/terraform\/enterprise\/workspaces\/settings) or by sending a `GET` request to the [`\/workspaces`](#show-workspace) endpoint. |\n\n\nThis endpoint shows the data retention policy set explicitly on the workspace.\nWhen no data retention policy is set for the workspace, the endpoint returns the default policy configured for the organization. 
Refer to [Data Retention Policies](\/terraform\/enterprise\/workspaces\/settings\/deletion#data-retention-policies) for instructions on configuring data retention policies for workspaces.\n\nRefer to [Data Retention Policy API](\/terraform\/enterprise\/api-docs\/data-retention-policies#show-data-retention-policy) in the Terraform Enterprise documentation for details.\n\n## Create or update data retention policy\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform.\n<\/EnterpriseAlert>\n\n`POST \/workspaces\/:workspace_id\/relationships\/data-retention-policy`\n\n| Parameter       | Description                                                                                                                                                         |\n| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The workspace ID to update the data retention policy for. Obtain this from the [workspace settings](\/terraform\/enterprise\/workspaces\/settings) or by sending a `GET` request to the [`\/workspaces`](#show-workspace) endpoint. |\n\nThis endpoint creates a data retention policy for a workspace or updates the existing policy. 
\nRefer to [Data Retention Policies](\/terraform\/enterprise\/workspaces\/settings\/deletion#data-retention-policies) for additional information.\n\nRefer to [Data Retention Policy API](\/terraform\/enterprise\/api-docs\/data-retention-policies#create-or-update-data-retention-policy) in the Terraform Enterprise documentation for details.\n\n## Remove data retention policy\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform.\n<\/EnterpriseAlert>\n\n`DELETE \/workspaces\/:workspace_id\/relationships\/data-retention-policy`\n\n| Parameter       | Description                                                                                                                                                                          |\n| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:workspace_id` | The workspace ID to remove the data retention policy for. Obtain this from the [workspace settings](\/terraform\/enterprise\/workspaces\/settings) or by sending a `GET` request to the [`\/workspaces`](#show-workspace) endpoint. |\n\nThis endpoint removes the data retention policy explicitly set on a workspace.\nAfter the policy is removed, the workspace uses the default data retention policy configured for the organization. 
Refer to [Data Retention Policies](\/terraform\/enterprise\/users-teams-organizations\/organizations#destruction-and-deletion) for instructions on configuring data retention policies for organizations.\n\nRead more about [workspace data retention policies](\/terraform\/enterprise\/workspaces\/settings\/deletion#data-retention-policies).\n\nRefer to [Data Retention Policy API](\/terraform\/enterprise\/api-docs\/data-retention-policies#remove-data-retention-policy) in the Terraform Enterprise documentation for details.\n\n## Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\n* `current_configuration_version` - The last configuration this workspace received, excluding plan-only configurations. Terraform uses this configuration for new runs, unless you provide a different one.\n* `current_configuration_version.ingress_attributes` - The commit information for the current configuration version.\n* `current_run` - Additional information about the current run.\n* `current_run.configuration_version` - The configuration used in the current run.\n* `current_run.configuration_version.ingress_attributes` - The commit information used in the current run.\n* `current_run.plan` - The plan used in the current run.\n* `locked_by` - The user, team, or run responsible for locking the workspace, if the workspace is currently locked.\n* `organization` - The full organization record.\n* `outputs` - The outputs for the most recently applied run.\n* `project` - The full project record.\n* `readme` - The most recent workspace README.md.
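The `include` query parameter, like the bracketed pagination parameters described earlier, must be percent-encoded if your HTTP client does not encode URLs automatically. As a minimal sketch (not part of the official samples; the workspace ID and parameter values are illustrative), Python's standard library handles the encoding:

```python
from urllib.parse import urlencode

# Illustrative values; substitute your own workspace ID and related resources.
base = "https://app.terraform.io/api/v2/workspaces/ws-UYv6RYM8fVhzeGG5"
params = {
    "include": "current_run,outputs",  # comma-separated related resource types
    "page[number]": 1,
    "page[size]": 20,
}
# urlencode percent-encodes the brackets: [ -> %5B, ] -> %5D
url = f"{base}?{urlencode(params)}"
print(url)
```

Pass the resulting URL to `curl` exactly as you would the unencoded form; HCP Terraform treats `page%5Bnumber%5D` and `page[number]` identically.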
 Each attribute key stores a boolean value which is  true  by default  To overwrite the default inherited value  set an attribute s value to  false   For example  to set  execution mode  as the organization default  set  setting overwrites execution mode  to  false        data relationships    object   none   Specifies a group of workspace associations       data relationships project data id    string    default project   The ID of the project to create the workspace in  If left blank  Terraform creates the workspace in the organization s default project  You must have permission to create workspaces in the project  either by organization level permissions or team admin access to a specific project       data relationships tag bindings data             list of objects   none   Specifies a list of tags to attach to the workspace       data relationships tag bindings data type        string   none   Must be  tag bindings  for each object in the list       data relationships tag bindings data attributes key      string   none   Specifies the tag key for each object in the list       data relationships tag bindings data attributes value    string   none   Specifies the tag value for each object in the list          Sample Payload   Without a VCS repository      json      data          attributes            name    workspace 1              type    workspaces        relationships            tag bindings                          data                  type    tag binding                attributes      key    env    value    test                                                    With a VCS repository      json      data          attributes            name    workspace 2          terraform version    0 11 1          working directory              vcs repo              identifier    example terraform test proj            oauth token id    ot hmAyP66qk2AMVdbJ            branch                tags regex   null                     type    workspaces        relationships        
    tag bindings                          data                  type    tag binding                attributes      key    env    value    test                                                  Using Git Tags   HCP Terraform triggers a run when you push a Git tag that matches the regular expression  SemVer    1 2 3    22 33 44   etc      json      data          attributes            name    workspace 3          terraform version    0 12 1          file triggers enabled   false         working directory     networking          vcs repo              identifier    example terraform test proj monorepo            oauth token id    ot hmAyP66qk2AMVdbJ            branch                tags regex     d   d   d                       type    workspaces        relationships            tag bindings                          data                  type    tag binding                attributes      key    env    value    test                                                   For a monorepo using trigger prefixes   A run will be triggered in this workspace when changes are detected in any of the specified directories    networking     modules   or   vendor       json      data          attributes            name    workspace 3          terraform version    0 12 1          file triggers enabled   true         trigger prefixes      modules     vendor           working directory     networking          vcs repo              identifier    example terraform test proj monorepo            oauth token id    ot hmAyP66qk2AMVdbJ            branch                      updated at    2017 11 29T19 18 09 976Z              type    workspaces        relationships            tag bindings                          data                  type    tag binding                attributes      key    env    value    test                                                   For a monorepo using trigger patterns   A run will be triggered in this workspace when HCP Terraform detects any of the following changes    A 
file with the extension  tf  in any directory structure in which the last folder is named  networking   e g    root networking  and  root module networking     Any file changed in the folder   base   no subfolders are included   Any file changed in the folder   submodule  and all of its subfolders     json      data          attributes            name    workspace 4          terraform version    1 2 2          file triggers enabled   true         trigger patterns         networking   tf     base       submodule                vcs repo              identifier    example terraform test proj monorepo            oauth token id    ot hmAyP66qk2AMVdbJ            branch                      updated at    2022 06 09T19 18 09 976Z              type    workspaces        relationships            tag bindings                          data                  type    tag binding                attributes      key    env    value    test                                                   Using HCP Terraform agents    HCP Terraform agents   terraform cloud docs api docs agents  allow HCP Terraform to communicate with isolated  private  or on premises infrastructure      json      data          attributes            name   workspace 1          execution mode    agent          agent pool id    apool ZjT6A7mVFm5WHT5a                type    workspaces      relationships          tag bindings                      data                type    tag binding              attributes      key    env    value    test                                       Using an organization default execution mode      json      data          attributes            name   workspace with default          setting overwrites              execution mode   false                       type    workspaces      relationships          tag bindings                      data                type    tag binding              attributes      key    env    value    test                                        With a project      
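The payloads above all share the same JSON:API shape: a `data` object with `type`, `attributes`, and optional `relationships`. As a minimal sketch (not part of the official docs — the helper name `create_workspace_payload` and its parameters are hypothetical), the structure can be assembled programmatically and serialized before sending:

```python
import json

def create_workspace_payload(name, execution_mode=None, agent_pool_id=None,
                             trigger_patterns=None, tags=None):
    """Build a JSON:API request body for workspace creation.

    Only `name` is required; omitted attributes fall back to the
    defaults described in the request-body table above.
    """
    attributes = {"name": name}
    if execution_mode is not None:
        attributes["execution-mode"] = execution_mode
    if agent_pool_id is not None:
        # agent-pool-id is only meaningful when execution-mode is "agent"
        attributes["agent-pool-id"] = agent_pool_id
    if trigger_patterns is not None:
        attributes["file-triggers-enabled"] = True
        attributes["trigger-patterns"] = trigger_patterns

    payload = {"data": {"type": "workspaces", "attributes": attributes}}
    if tags:
        payload["data"]["relationships"] = {
            "tag-bindings": {
                "data": [
                    {"type": "tag-bindings",
                     "attributes": {"key": k, "value": v}}
                    for k, v in tags.items()
                ]
            }
        }
    return payload

print(json.dumps(create_workspace_payload(
    "workspace-agent", execution_mode="agent",
    agent_pool_id="apool-ZjT6A7mVFm5WHT5a", tags={"env": "test"}), indent=2))
```

The resulting document can be written to `payload.json` and posted with the `curl` invocation shown under Sample Request.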
#### With a project

```json
{
  "data": {
    "type": "workspaces",
    "attributes": {
      "name": "workspace-in-project"
    },
    "relationships": {
      "project": {
        "data": {
          "type": "projects",
          "id": "prj-jT92VLSFpv8FwKtc"
        }
      },
      "tag-bindings": {
        "data": [
          {
            "type": "tag-bindings",
            "attributes": {
              "key": "env",
              "value": "test"
            }
          }
        ]
      }
    }
  }
}
```

#### With key-value tags

```json
{
  "data": {
    "type": "workspaces",
    "attributes": {
      "name": "workspace-in-project"
    },
    "relationships": {
      "tag-bindings": {
        "data": [
          {
            "key": "cost-center",
            "value": "engineering"
          }
        ]
      }
    }
  }
}
```

### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/my-organization/workspaces
```

### Sample Response

#### Without a VCS repository

-> **Note:** The `assessments-enabled` property is only accepted by or returned from HCP Terraform.

@include 'api-code-blocks/workspace.mdx'

#### With a VCS repository

@include 'api-code-blocks/workspace-with-vcs.mdx'

#### With a project

```json
{
  "data": {
    "id": "ws-HRkJLSYWF97jucqQ",
    "type": "workspaces",
    "attributes": {
      "allow-destroy-plan": true,
      "auto-apply": false,
      "auto-apply-run-trigger": false,
      "auto-destroy-at": null,
      "auto-destroy-activity-duration": null,
      "created-at": "2022-12-05T20:57:13.829Z",
      "environment": "default",
      "locked": false,
      "locked-reason": "",
      "name": "workspace-in-project",
      "queue-all-runs": false,
      "speculative-enabled": true,
      "structured-run-output-enabled": true,
      "terraform-version": "1.3.5",
      "working-directory": null,
      "global-remote-state": true,
      "updated-at": "2022-12-05T20:57:13.829Z",
      "resource-count": 0,
      "apply-duration-average": null,
      "plan-duration-average": null,
      "policy-check-failures": null,
      "run-failures": null,
      "workspace-kpis-runs-count": null,
      "latest-change-at": "2022-12-05T20:57:13.829Z",
      "operations": true,
      "execution-mode": "remote",
      "vcs-repo": null,
      "vcs-repo-identifier": null,
      "permissions": {
        "can-update": true,
        "can-destroy": true,
        "can-queue-run": true,
        "can-read-variable": true,
        "can-update-variable": true,
        "can-read-state-versions": true,
        "can-read-state-outputs": true,
        "can-create-state-versions": true,
        "can-queue-apply": true,
        "can-lock": true,
        "can-unlock": true,
        "can-force-unlock": true,
        "can-read-settings": true,
        "can-manage-tags": true,
        "can-manage-run-tasks": false,
        "can-force-delete": true,
        "can-manage-assessments": true,
        "can-read-assessment-results": true,
        "can-queue-destroy": true
      },
      "actions": {
        "is-destroyable": true
      },
      "description": null,
      "file-triggers-enabled": true,
      "trigger-prefixes": [],
      "trigger-patterns": [],
      "assessments-enabled": false,
      "last-assessment-result-at": null,
      "source": "tfe-api",
      "source-name": null,
      "source-url": null,
      "tag-names": [],
      "setting-overwrites": {
        "execution-mode": false,
        "agent-pool": false
      }
    },
    "relationships": {
      "organization": {
        "data": {
          "id": "my-organization",
          "type": "organizations"
        }
      },
      "current-run": {
        "data": null
      },
      "effective-tag-bindings": {
        "links": {
          "related": "/api/v2/workspaces/ws-HRkJLSYWF97jucqQ/effective-tag-bindings"
        }
      },
      "latest-run": {
        "data": null
      },
      "outputs": {
        "data": []
      },
      "remote-state-consumers": {
        "links": {
          "related": "/api/v2/workspaces/ws-HRkJLSYWF97jucqQ/relationships/remote-state-consumers"
        }
      },
      "current-state-version": {
        "data": null
      },
      "current-configuration-version": {
        "data": null
      },
      "agent-pool": {
        "data": null
      },
      "readme": {
        "data": null
      },
      "project": {
        "data": {
          "id": "prj-jT92VLSFpv8FwKtc",
          "type": "projects"
        }
      },
      "current-assessment-result": {
        "data": null
      },
      "tag-bindings": {
        "links": {
          "related": "/api/v2/workspaces/ws-HRkJLSYWF97jucqQ/tag-bindings"
        }
      },
      "vars": {
        "data": []
      }
    },
    "links": {
      "self": "/api/v2/organizations/my-organization/workspaces/workspace-in-project"
    }
  }
}
```

## Update a Workspace

Use one of the following endpoints to update a workspace:

- `PATCH /organizations/:organization_name/workspaces/:name`
- `PATCH /workspaces/:workspace_id`

| Parameter | Description |
|---|---|
| `:workspace_id` | The ID of the workspace to update. |
| `:organization_name` | The name of the organization the workspace belongs to. |
| `:name` | The name of the workspace to update. Workspace names are unique identifiers in the organization and can only include letters, numbers, `-`, and `_`. |

### Request Body

These PATCH endpoints require a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path | Type | Default | Description |
|---|---|---|---|
| `data.type` | string | | Must be `"workspaces"`. |
| `data.attributes.name` | string | (previous value) | A new name for the workspace, which can only include letters, numbers, `-`, and `_`. This will be used as an identifier and must be unique in the organization. **Warning:** Changing a workspace's name changes its URL in the API and UI. |
| `data.attributes.agent-pool-id` | string | (previous value) | Required when `execution-mode` is set to `agent`. The ID of the agent pool belonging to the workspace's organization. This value must not be specified if `execution-mode` is set to `remote` or `local` or if `operations` is set to `true`. |
| `data.attributes.allow-destroy-plan` | boolean | (previous value) | Whether destroy plans can be queued on the workspace. |
| `data.attributes.assessments-enabled` | boolean | false | (previously drift-detection) Whether or not HCP Terraform performs health assessments for the workspace. May be overridden by the organization setting `assessments-enforced`. Only available for Plus tier organizations, in workspaces running Terraform version 0.15.4+ and operating in [Remote execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode). |
| `data.attributes.auto-apply` | boolean | (previous value) | Whether to automatically apply changes when a Terraform plan is successful in runs initiated by VCS, UI, or CLI, [with some exceptions](/terraform/cloud-docs/workspaces/settings#auto-apply-and-manual-apply). |
| `data.attributes.auto-apply-run-trigger` | boolean | (previous value) | Whether to automatically apply changes when a Terraform plan is successful in runs initiated by run triggers. |
| `data.attributes.auto-destroy-at` | string | (previous value) | Timestamp when the next scheduled destroy run will occur; refer to [Scheduled Destroy](/terraform/cloud-docs/workspaces/settings/deletion#automatically-destroy). |
| `data.attributes.auto-destroy-activity-duration` | string | (previous value) | Value and units for [automatically scheduled destroy runs based on workspace activity](/terraform/cloud-docs/workspaces/settings/deletion#automatically-destroy). Valid values are greater than 0 and four digits or less. Valid units are `d` and `h`. For example, to queue destroy runs after fourteen days of inactivity, set `auto-destroy-activity-duration: "14d"`. |
| `data.attributes.description` | string | (previous value) | A description for the workspace. |
| `data.attributes.execution-mode` | string | (previous value) | Which [execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) to use. Valid values are `remote`, `local`, and `agent`. When set to `local`, the workspace will be used for state storage only. This value _must not_ be specified if `operations` is specified, and _must_ be specified if `setting-overwrites.execution-mode` is set to `true`. |
| `data.attributes.file-triggers-enabled` | boolean | (previous value) | Whether to filter runs based on the changed files in a VCS push. If enabled, it uses either `trigger-prefixes` in conjunction with `working-directory` or `trigger-patterns` to describe the set of changed files that will start a run. If disabled, any push will trigger a run. |
| `data.attributes.global-remote-state` | boolean | (previous value) | Whether the workspace should allow all workspaces in the organization to [access its state data](/terraform/cloud-docs/workspaces/state) during runs. If `false`, then only specifically approved workspaces can access its state. Manage allowed workspaces using the [Remote State Consumers](/terraform/cloud-docs/api-docs/workspaces#get-remote-state-consumers) endpoints, documented later on this page. |
| `data.attributes.operations` | boolean | (previous value) | **DEPRECATED** Use `execution-mode` instead. Whether to use remote execution mode. When set to `false`, the workspace will be used for state storage only. This value must not be specified if `execution-mode` is specified. |
| `data.attributes.queue-all-runs` | boolean | (previous value) | Whether runs should be queued immediately after workspace creation. When set to `false`, runs triggered by a VCS change will not be queued until at least one run is manually queued. |
| `data.attributes.speculative-enabled` | boolean | (previous value) | Whether this workspace allows automatic speculative plans. Setting this to `false` prevents HCP Terraform from running plans on pull requests, which can improve security if the VCS repository is public or includes untrusted contributors. It doesn't prevent manual speculative plans via the CLI or the runs API. |
| `data.attributes.terraform-version` | string | (previous value) | The version of Terraform to use for this workspace. This can be either an exact version or a [version constraint](/terraform/language/expressions/version-constraints) like `~> 1.0.0`; if you specify a constraint, the workspace will always use the newest release that meets that constraint. |
| `data.attributes.trigger-patterns` | array | (previous value) | List of glob patterns that describe the files HCP Terraform monitors for changes. Trigger patterns are always appended to the root directory of the repository. |
| `data.attributes.trigger-prefixes` | array | (previous value) | List of trigger prefixes that describe the paths HCP Terraform monitors for changes, in addition to the working directory. Trigger prefixes are always appended to the root directory of the repository. HCP Terraform will start a run when files are changed in any directory path matching the provided set of prefixes. |
| `data.attributes.vcs-repo.branch` | string | (previous value) | The repository branch that Terraform will execute from. |
| `data.attributes.vcs-repo.identifier` | string | (previous value) | A reference to your VCS repository in the format `:org/:repo` where `:org` and `:repo` refer to the organization and repository in your VCS provider. The format for Azure DevOps is `:org/:project/_git/:repo`. |
| `data.attributes.vcs-repo.ingress-submodules` | boolean | (previous value) | Whether submodules should be fetched when cloning the VCS repository. |
| `data.attributes.vcs-repo.oauth-token-id` | string | | The VCS Connection (OAuth Connection + Token) to use. Get this ID from the [oauth-tokens](/terraform/cloud-docs/api-docs/oauth-tokens) endpoint. You can not specify this value if `github-app-installation-id` is specified. |
| `data.attributes.vcs-repo.github-app-installation-id` | string | | The VCS Connection GitHub App Installation to use. Find this ID on the account settings page. Requires previously authorizing the GitHub App and generating a user-to-server token. Manage the token from **Account Settings** within HCP Terraform. You can not specify this value if `oauth-token-id` is specified. |
| `data.attributes.vcs-repo.tags-regex` | string | (previous value) | A regular expression used to match Git tags. HCP Terraform triggers a run when this value is present and a VCS event occurs that contains a matching Git tag for the regular expression. |
| `data.attributes.vcs-repo` | object or null | (previous value) | To delete a workspace's existing VCS repo, specify `null` instead of an object. To modify a workspace's existing VCS repo, include whichever of the keys below you wish to modify. To add a new VCS repo to a workspace that didn't previously have one, include at least the `oauth-token-id` and `identifier` keys. |
| `data.attributes.working-directory` | string | (previous value) | A relative path that Terraform will execute within. This defaults to the root of your repository and is typically set to a subdirectory matching the environment when multiple environments exist within the same repository. |
| `data.attributes.setting-overwrites` | object | | The keys in this object are attributes that have organization-level defaults. Each attribute key stores a boolean value which is `true` by default. To overwrite the default inherited value, set an attribute's value to `false`. For example, to set `execution-mode` as the organization default, you set `setting-overwrites.execution-mode` to `false`. |
| `data.relationships` | object | none | Specifies a group of workspace relationships. |
| `data.relationships.project.data.id` | string | (existing value) | The ID of the project to move the workspace to. If left blank or unchanged, the workspace will not be moved. You must have admin permissions on both the source project and destination project in order to move a workspace between projects. |
| `data.relationships.tag-bindings.data` | list of objects | none | Specifies a list of tags to attach to the workspace. |
| `data.relationships.tag-bindings.data.type` | string | none | Must be `tag-bindings` for each object in the list. |
| `data.relationships.tag-bindings.data.attributes.key` | string | none | Specifies the tag key for each object in the list. |
| `data.relationships.tag-bindings.data.attributes.value` | string | none | Specifies the tag value for each object in the list. |

### Sample Payload

```json
{
  "data": {
    "attributes": {
      "name": "workspace-2",
      "resource-count": 0,
      "terraform-version": "0.11.1",
      "working-directory": "",
      "vcs-repo": {
        "identifier": "example/terraform-test-proj",
        "branch": "",
        "ingress-submodules": false,
        "oauth-token-id": "ot-hmAyP66qk2AMVdbJ"
      },
      "updated-at": "2017-11-29T19:18:09.976Z"
    },
    "relationships": {
      "project": {
        "data": {
          "type": "projects",
          "id": "prj-7HWWPGY3fYxztELU"
        }
      },
      "tag-bindings": {
        "data": [
          {
            "type": "tag-bindings",
            "attributes": {
              "key": "environment",
              "value": "development"
            }
          }
        ]
      }
    },
    "type": "workspaces"
  }
}
```
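Because omitted keys keep their previous values, a PATCH body only needs the attributes being changed, and `vcs-repo` is the one key where an explicit `null` is itself meaningful (it detaches the repo). A minimal sketch of building such a body — the helper name `update_payload` and its sentinel-based API are hypothetical, not part of HCP Terraform's tooling — which also checks the workspace-name character rule from the table above:

```python
import re

# Workspace names may only include letters, numbers, '-', and '_'.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9_-]+$")

_UNSET = object()  # sentinel: distinguishes "omitted" from an explicit None

def update_payload(new_name=None, auto_apply=None, vcs_repo=_UNSET):
    """Build a PATCH request body; omitted keys keep their previous values."""
    attributes = {}
    if new_name is not None:
        if not NAME_PATTERN.match(new_name):
            raise ValueError(f"invalid workspace name: {new_name!r}")
        attributes["name"] = new_name
    if auto_apply is not None:
        attributes["auto-apply"] = auto_apply
    if vcs_repo is not _UNSET:
        # An explicit null detaches the workspace's VCS repo.
        attributes["vcs-repo"] = vcs_repo
    return {"data": {"type": "workspaces", "attributes": attributes}}

# Detach the VCS repo and enable auto-apply in one request body:
print(update_payload(auto_apply=True, vcs_repo=None))
```

The sentinel keeps "don't touch `vcs-repo`" distinct from "set `vcs-repo` to `null`", mirroring the table's "object or null" semantics.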
Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json     https   app terraform io api v2 organizations my organization workspaces workspace 2          Sample Response   include  api code blocks workspace with vcs mdx      List workspaces  This endpoint lists workspaces in the organization    GET  organizations  organization name workspaces     Parameter              Description                                                                                                                                      organization name    The name of the organization to list the workspaces of         Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter                       Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        page number                      Optional    If omitted  the endpoint will return the first page                                                                                                 
                                                                                                                                                                                                                                                        page size                        Optional    If omitted  the endpoint will return 20 workspaces per page                                                                                                                                                                                                                                                                                                                                                 search name                      Optional    If specified  restricts results to workspaces with a name that matches the search string using a fuzzy search                                                                                                                                                                                                                                                                                               search tags                      Optional    If specified  restricts results to workspaces with that tag  If multiple comma separated values are specified  results matching all of the tags are returned                                                                                                                                                                                                                                                search exclude tags              Optional    If specified  results exclude workspaces with that tag  If multiple comma separated values are specified  workspaces with tags matching any of the tags are excluded                                                                                                                                                                                                                            
            search wildcard name             Optional    If specified  restricts results to workspaces with partial matching  using     on prefix  suffix  or both  For example   search wildcard name    prod  returns all workspaces ending in   prod    search wildcard name  prod    returns all workspaces beginning with  prod    and  search wildcard name    prod    returns all workspaces with substring   prod   regardless of prefix and or suffix       sort                             Optional    Allows sorting the organization s workspaces by a provided value  You can sort by   name      current run created at    the time of the current run   and   latest change at    the creation time of the latest state version or the workspace itself if no state version exists   Prepending a hyphen to the sort parameter reverses the order  For example     name   sorts by name in reverse alphabetical order  If omitted  the default sort order is arbitrary but stable       filter project  id               Optional    If specified  restricts results to workspaces in the specific project                                                                                                                                                                                                                                                                                                                                       filter current run  status       Optional    If specified  restricts results to workspaces that match the status of a current run                                                                                                                                                                                                                                                                                                                        filter tagged  i  key            Optional    If specified  restricts results to workspaces that are tagged with the provided key  Use a value of  0  
for  i  if you are only using a single filter  For multiple tag filters  use an incrementing integer value for each filter  Multiple tag filters will be combined together with a logical AND when filtering results       filter tagged  i  value          Optional    If specified  restricts results to workspaces that are tagged with the provided value  This is useful when combined with a  key  filter for more specificity  Use a value of  0  for  i  if you are only using a single filter  For multiple tag filters  use an incrementing integer value for each filter  Multiple tag filters will be combined together with a logical AND when filtering results         Sample Request     shell session   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations my organization workspaces       With multiple tag filters      shell session   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations my organization workspaces filter 5B tagged5D 5B0 5D 5Bkey 5D environment filter 5B tagged5D 5B0 5D 5Bvalue 5D development filter 5B tagged5D 5B1 5D 5Bkey 5D meets compliance          Sample Response   include  api code blocks workspaces list mdx      Show workspace  Details on a workspace can be retrieved from two endpoints  which behave identically   One refers to a workspace by its ID    GET  workspaces  workspace id     Parameter         Description                                                   workspace id    The workspace ID    The other refers to a workspace by its name and organization    GET  organizations  organization name workspaces  name     Parameter              Description                                                                                                                                                                                                                        
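Percent-encoding the bracketed parameters by hand is error-prone. A minimal sketch (the helper function is hypothetical, not part of the API; the parameter names are the real ones documented above) that builds the multi-tag-filter query, including the incrementing `i` index:

```python
from urllib.parse import urlencode

def tag_filter_params(filters):
    """Build filter[tagged][i][key/value] pairs with an incrementing index.

    Each element of `filters` is a (key, value) tuple; pass value=None for a
    key-only filter. Filters combine with a logical AND on the server side.
    """
    params = {}
    for i, (key, value) in enumerate(filters):
        params[f"filter[tagged][{i}][key]"] = key
        if value is not None:
            params[f"filter[tagged][{i}][value]"] = value
    return params

# urlencode percent-encodes "[" as %5B and "]" as %5D automatically.
query = urlencode({
    "page[number]": 1,
    "page[size]": 20,
    **tag_filter_params([("environment", "development"), ("meets-compliance", None)]),
})
print(query)
```

The resulting string matches the "multiple tag filters" sample request above and can be appended after `?` on the list-workspaces URL.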
## Show workspace

Details on a workspace can be retrieved from two endpoints, which behave identically.

One refers to a workspace by its ID:

`GET /workspaces/:workspace_id`

| Parameter       | Description      |
|-----------------|------------------|
| `:workspace_id` | The workspace ID |

The other refers to a workspace by its name and organization:

`GET /organizations/:organization_name/workspaces/:name`

| Parameter            | Description                                                                                  |
|----------------------|----------------------------------------------------------------------------------------------|
| `:organization_name` | The name of the organization the workspace belongs to.                                       |
| `:name`              | The name of the workspace to show details for, which can only include letters, numbers, `-`, and `_`. |

### Workspace performance attributes

The following attributes are helpful in determining the overall health and performance of your workspace configuration. These metrics refer to the past 30 runs that have either resulted in an error or successfully applied.

| Parameter | Type | Description |
|---|---|---|
| `data.attributes.apply-duration-average` | number | This is the average time runs spend in the `apply` phase, represented in milliseconds |
| `data.attributes.plan-duration-average` | number | This is the average time runs spend in the `plan` phase, represented in milliseconds |
| `data.attributes.policy-check-failures` | number | Reports the number of run failures resulting from a policy check failure |
| `data.attributes.run-failures` | number | Reports the number of failed runs |
| `data.attributes.workspace-kpis-run-count` | number | Total number of runs taken into account by these metrics |

### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/organizations/my-organization/workspaces/workspace-1
```

### Sample Response

@include 'api-code-blocks/workspace.mdx'

## Safe Delete a workspace

When you delete an HCP Terraform workspace with resources, Terraform can no longer track or manage that infrastructure. During a safe delete, HCP Terraform only deletes the workspace if it is not managing resources.

You can safe delete a workspace using two endpoints that behave identically. The first endpoint identifies a workspace with the workspace ID, and the other identifies the workspace by its name and organization.

`POST /workspaces/:workspace_id/actions/safe-delete`

| Parameter       | Description                        |
|-----------------|------------------------------------|
| `:workspace_id` | The ID of the workspace to delete. |

`POST /organizations/:organization_name/workspaces/:name/actions/safe-delete`

| Parameter            | Description                                                                              |
|----------------------|------------------------------------------------------------------------------------------|
| `:organization_name` | The name of the workspace's organization.                                                |
| `:name`              | The name of the workspace to delete, which can only include letters, numbers, `-`, and `_`. |

| Status  | Response                  | Reason(s)                                                              |
|---------|---------------------------|------------------------------------------------------------------------|
| [204][] | No Content                | Successfully deleted the workspace                                     |
| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform workspace delete  |
| [409][] | [JSON API error object][] | Workspace is not safe to delete because it is managing resources       |

## Force Delete a workspace

During a force delete, HCP Terraform removes the specified workspace without checking whether it is managing resources. We recommend using the [safe delete endpoint](#safe-delete-a-workspace) instead, when possible.

~> **Warning:** Terraform cannot track or manage the workspace's infrastructure after deletion. We recommend [destroying the workspace's infrastructure](/terraform/cloud-docs/run/modes-and-options#destroy-mode) before you delete it.

By default, only organization owners can force delete workspaces. Organization owners can also update the [organization's settings](/terraform/cloud-docs/users-teams-organizations/organizations#general) to let workspace admins force delete their own workspaces.

You can use two endpoints to force delete a workspace, which behave identically. One endpoint identifies the workspace with its workspace ID and the other endpoint identifies the workspace with its name and organization.

`DELETE /workspaces/:workspace_id`

| Parameter       | Description                        |
|-----------------|------------------------------------|
| `:workspace_id` | The ID of the workspace to delete. |

`DELETE /organizations/:organization_name/workspaces/:name`

| Parameter            | Description                                                                              |
|----------------------|------------------------------------------------------------------------------------------|
| `:organization_name` | The name of the organization the workspace belongs to.                                   |
| `:name`              | The name of the workspace to delete, which can only include letters, numbers, `-`, and `_`. |

| Status  | Response                  | Reason(s)                                                              |
|---------|---------------------------|------------------------------------------------------------------------|
| [204][] | No Content                | Successfully deleted the workspace                                     |
| [403][] | [JSON API error object][] | Not authorized to perform a force delete on the workspace              |
| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform workspace delete  |
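The recommended flow above (safe delete first, force delete only as a fallback) can be sketched as follows. This is a hypothetical helper, not part of the API; `post` and `delete` stand in for real authenticated HTTP calls and are injected so the logic runs without network access.

```python
def delete_workspace(workspace_id, post, delete, allow_force=False):
    """Try the safe-delete action; optionally fall back to force delete.

    A 409 from safe-delete means the workspace still manages resources,
    which is the only case where a force delete is considered here.
    """
    status = post(f"/api/v2/workspaces/{workspace_id}/actions/safe-delete")
    if status == 204:
        return "safe-deleted"
    if status == 409 and allow_force:
        if delete(f"/api/v2/workspaces/{workspace_id}") == 204:
            return "force-deleted"
    return "not-deleted"

# Stub transport simulating a workspace that still manages resources.
result = delete_workspace(
    "ws-UYv6RYM8fVhzeGG5",
    post=lambda path: 409,
    delete=lambda path: 204,
    allow_force=True,
)
print(result)  # force-deleted
```

Gating the fallback behind an explicit `allow_force` flag mirrors the docs' warning: force delete destroys tracking of live infrastructure, so it should never be the default path.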
### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/organizations/my-organization/workspaces/workspace-1
```

## Lock a workspace

This endpoint locks a workspace.

`POST /workspaces/:workspace_id/actions/lock`

| Parameter       | Description                                                                                                                                                |
|-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to lock. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](#show-workspace) endpoint. |

| Status  | Response                                      | Reason(s)                                                   |
|---------|-----------------------------------------------|-------------------------------------------------------------|
| [200][] | [JSON API document][] (`type: "workspaces"`)  | Successfully locked the workspace                           |
| [404][] | [JSON API error object][]                     | Workspace not found, or user unauthorized to perform action |
| [409][] | [JSON API error object][]                     | Workspace already locked                                    |

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path | Type   | Default | Description                         |
|----------|--------|---------|-------------------------------------|
| `reason` | string | `""`    | The reason for locking the workspace |

### Sample Payload

```json
{
  "reason": "Locking workspace-1"
}
```

### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-SihZTyXKfNXUWuUa/actions/lock
```

### Sample Response

@include 'api-code-blocks/workspace.mdx'

## Unlock a workspace

This endpoint unlocks a workspace. Unlocking a workspace sets the current state version to the latest finalized intermediate state version. If intermediate state versions are available, but HCP Terraform has not yet finalized the latest intermediate state version, the unlock will fail with a 503 response. For this particular error, it's recommended to retry the unlock operation for a short period of time until the platform finalizes the state version. If you must force unlock a workspace under these conditions, ensure that state was saved successfully by inspecting the latest state version using the [State Version List API](/terraform/cloud-docs/api-docs/state-versions#list-state-versions-for-a-workspace).

`POST /workspaces/:workspace_id/actions/unlock`

| Parameter       | Description                                                                                                                                                  |
|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to unlock. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](#show-workspace) endpoint. |

| Status  | Response                                      | Reason(s)                                                   |
|---------|-----------------------------------------------|-------------------------------------------------------------|
| [200][] | [JSON API document][] (`type: "workspaces"`)  | Successfully unlocked the workspace                         |
| [404][] | [JSON API error object][]                     | Workspace not found, or user unauthorized to perform action |
| [409][] | [JSON API error object][]                     | Workspace already unlocked, or locked by a different user   |

### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  https://app.terraform.io/api/v2/workspaces/ws-SihZTyXKfNXUWuUa/actions/unlock
```

### Sample Response

@include 'api-code-blocks/workspace.mdx'

## Force Unlock a workspace

This endpoint force unlocks a workspace. Only users with admin access are authorized to force unlock a workspace.

`POST /workspaces/:workspace_id/actions/force-unlock`

| Parameter       | Description                                                                                                                                                        |
|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to force unlock. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](#show-workspace) endpoint. |

| Status  | Response                                      | Reason(s)                                                   |
|---------|-----------------------------------------------|-------------------------------------------------------------|
| [200][] | [JSON API document][] (`type: "workspaces"`)  | Successfully force unlocked the workspace                   |
| [404][] | [JSON API error object][]                     | Workspace not found, or user unauthorized to perform action |
| [409][] | [JSON API error object][]                     | Workspace already unlocked                                  |
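The retry guidance for the unlock endpoint above (retry on 503 while the platform finalizes the latest intermediate state version) can be sketched as a small loop. This is a hypothetical helper; `unlock` stands in for the real POST to `/actions/unlock` and is injected so the logic runs without network access.

```python
import time

def unlock_with_retry(unlock, attempts=5, delay=0.0):
    """Retry an unlock call while it returns 503, for a bounded number of attempts.

    Any other status (200 success, or hard errors like 404/409) is returned
    immediately; only the transient 503 is worth retrying.
    """
    for _ in range(attempts):
        status = unlock()
        if status != 503:
            return status
        time.sleep(delay)  # give the platform time to finalize state
    return 503

# Stub: the first two attempts hit 503, then the unlock succeeds.
responses = iter([503, 503, 200])
print(unlock_with_retry(lambda: next(responses)))  # 200
```

If the loop exhausts its attempts and you resort to force-unlock, the docs recommend first confirming via the State Version List API that the latest state was saved successfully.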
### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  https://app.terraform.io/api/v2/workspaces/ws-SihZTyXKfNXUWuUa/actions/force-unlock
```

### Sample Response

@include 'api-code-blocks/workspace-with-vcs.mdx'

## Assign an SSH key to a workspace

This endpoint assigns an SSH key to a workspace.

`PATCH /workspaces/:workspace_id/relationships/ssh-key`

| Parameter       | Description                                                                                                                                                             |
|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to assign the SSH key to. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](#show-workspace) endpoint. |

### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path             | Type   | Default | Description                                                                                  |
|----------------------|--------|---------|----------------------------------------------------------------------------------------------|
| `data.type`          | string |         | Must be `"workspaces"`.                                                                      |
| `data.attributes.id` | string |         | The SSH key ID to assign. Obtain this from the [ssh-keys](/terraform/cloud-docs/api-docs/ssh-keys) endpoint. |

### Sample Payload

```json
{
  "data": {
    "attributes": {
      "id": "sshkey-GxrePWre1Ezug7aM"
    },
    "type": "workspaces"
  }
}
```

### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-SihZTyXKfNXUWuUa/relationships/ssh-key
```

### Sample Response

@include 'api-code-blocks/workspace-with-vcs.mdx'

## Unassign an SSH key from a workspace

This endpoint unassigns the currently assigned SSH key from a workspace.

`PATCH /workspaces/:workspace_id/relationships/ssh-key`

| Parameter       | Description                                                                                                                                                                 |
|-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to unassign the SSH key from. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](#show-workspace) endpoint. |

### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path             | Type   | Default | Description             |
|----------------------|--------|---------|-------------------------|
| `data.type`          | string |         | Must be `"workspaces"`. |
| `data.attributes.id` | string |         | Must be `null`.         |

### Sample Payload

```json
{
  "data": {
    "attributes": {
      "id": null
    },
    "type": "workspaces"
  }
}
```

### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-SihZTyXKfNXUWuUa/relationships/ssh-key
```

### Sample Response

@include 'api-code-blocks/workspace-with-vcs.mdx'
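Assign and unassign use the same PATCH endpoint and differ only in whether `data.attributes.id` is an SSH key ID or `null`. A minimal sketch (the helper is hypothetical, not part of the API) that builds both payload shapes:

```python
import json

def ssh_key_payload(ssh_key_id):
    """Build the JSON:API payload for the ssh-key relationship endpoint.

    Pass an SSH key ID to assign it, or None to unassign: json.dumps
    serializes None as the JSON null the unassign request requires.
    """
    return {
        "data": {
            "attributes": {"id": ssh_key_id},
            "type": "workspaces",
        }
    }

print(json.dumps(ssh_key_payload("sshkey-GxrePWre1Ezug7aM")))
print(json.dumps(ssh_key_payload(None)))  # unassign: "id" is null
```

Either result can be written to `payload.json` and sent with `--request PATCH --data @payload.json` as in the samples above.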
## Get Remote State Consumers

`GET /workspaces/:workspace_id/relationships/remote-state-consumers`

| Parameter       | Description                                                                                                                                                                       |
|-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to get remote state consumers for. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](#show-workspace) endpoint. |

This endpoint retrieves the list of other workspaces that are allowed to access the given workspace's state during runs.

- If `global-remote-state` is set to false for the workspace, this will return the list of other workspaces that are specifically authorized to access the workspace's state.
- If `global-remote-state` is set to true, this will return a list of every workspace in the organization except for the subject workspace.

The list returned by this endpoint is subject to the caller's normal workspace permissions; it will not include workspaces that the provided API token is unable to read.

### Query Parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter      | Description                                                          |
|----------------|----------------------------------------------------------------------|
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.   |
| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 workspaces per page. |

### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/workspaces/ws-SihZTyXKfNXUWuUa/relationships/remote-state-consumers
```

### Sample Response

@include 'api-code-blocks/workspaces-list.mdx'

## Replace Remote State Consumers

`PATCH /workspaces/:workspace_id/relationships/remote-state-consumers`

| Parameter       | Description                                                                                                                                                                           |
|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to replace remote state consumers for. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](#show-workspace) endpoint. |

This endpoint updates the workspace's remote state consumers to be *exactly* the list of workspaces specified in the payload. It can only be used for workspaces where `global-remote-state` is false.

This endpoint can only be used by teams with permission to manage workspaces for the entire organization: only those who can *view* the entire list of consumers can *replace* the entire list. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions)) Teams with admin permissions on specific workspaces can still modify remote state consumers for those workspaces, but must use the add (POST) and remove (DELETE) endpoints listed below instead of this PATCH endpoint.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Status  | Response                  | Reason(s)                                                              |
|---------|---------------------------|------------------------------------------------------------------------|
| [204][] | No Content                | Successfully updated remote state consumers                            |
| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action            |
| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object  |

### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path       | Type   | Default | Description                                               |
|----------------|--------|---------|-----------------------------------------------------------|
| `data[].type`  | string |         | Must be `"workspaces"`.                                   |
| `data[].id`    | string |         | The ID of a workspace to be set as a remote state consumer. |

### Sample Payload

```json
{
  "data": [
    {
      "id": "ws-7aiqKYf6ejMFdtWS",
      "type": "workspaces"
    }
  ]
}
```

### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-UYv6RYM8fVhzeGG5/relationships/remote-state-consumers
```

### Response

No response body.

Status code `204`.

## Add Remote State Consumers

`POST /workspaces/:workspace_id/relationships/remote-state-consumers`

| Parameter       | Description                                                                                                                                                                       |
|-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to add remote state consumers for. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](#show-workspace) endpoint. |

This endpoint adds one or more remote state consumers to the workspace. It can only be used for workspaces where `global-remote-state` is false.

- The workspaces specified as consumers must be readable to the API token that makes the request.
- A workspace cannot be added as a consumer of itself. (A workspace can always read its own state, regardless of access settings.)
- You can safely add a consumer workspace that is already present; it will be ignored, and the rest of the consumers in the request will be processed normally.

| Status  | Response                  | Reason(s)                                                              |
|---------|---------------------------|------------------------------------------------------------------------|
| [204][] | No Content                | Successfully updated remote state consumers                            |
| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action            |
| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object  |

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path       | Type   | Default | Description                                               |
|----------------|--------|---------|-----------------------------------------------------------|
| `data[].type`  | string |         | Must be `"workspaces"`.                                   |
| `data[].id`    | string |         | The ID of a workspace to be set as a remote state consumer. |

### Sample Payload

```json
{
  "data": [
    {
      "id": "ws-7aiqKYf6ejMFdtWS",
      "type": "workspaces"
    }
  ]
}
```

### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-UYv6RYM8fVhzeGG5/relationships/remote-state-consumers
```

### Response

No response body.

Status code `204`.

## Delete Remote State Consumers

`DELETE /workspaces/:workspace_id/relationships/remote-state-consumers`

| Parameter       | Description                                                                                                                                                                           |
|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to remove remote state consumers for. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](#show-workspace) endpoint. |

This endpoint removes one or more remote state consumers from a workspace, according to the contents of the payload. It can only be used for workspaces where `global-remote-state` is false.

- The workspaces specified as consumers must be readable to the API token that makes the request.
- You can safely remove a consumer workspace that is already absent; it will be ignored, and the rest of the consumers in the request will be processed normally.

| Status  | Response                  | Reason(s)                                                              |
|---------|---------------------------|------------------------------------------------------------------------|
| [204][] | No Content                | Successfully updated remote state consumers                            |
| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action            |
| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object  |

### Request Body

This DELETE endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path       | Type   | Default | Description                                                    |
|----------------|--------|---------|----------------------------------------------------------------|
| `data[].type`  | string |         | Must be `"workspaces"`.                                        |
| `data[].id`    | string |         | The ID of a workspace to remove from the remote state consumers. |

### Sample Payload

```json
{
  "data": [
    {
      "id": "ws-7aiqKYf6ejMFdtWS",
      "type": "workspaces"
    }
  ]
}
```

### Sample Request

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-UYv6RYM8fVhzeGG5/relationships/remote-state-consumers
```

### Response

No response body.

Status code `204`.

## Get tags

Call the following endpoints to list the tags attached to a workspace.

- `GET /workspaces/:workspace_id/relationships/tags`: Lists flat string tags attached to the workspace.
- `GET /workspaces/:workspace_id/tag-bindings`: Lists key-value tags directly bound to the workspace.
- `GET /workspaces/:workspace_id/effective-tag-bindings`: Lists all key-value tags bound to the workspace, including those inherited from the parent project.

### Path parameters

| Parameter       | Description                                                                                                                                                      |
|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `:workspace_id` | The workspace ID to fetch tags for. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](#show-workspace) endpoint. |

### Query Parameters

The flat string tags endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter      | Description                                                          |
|----------------|----------------------------------------------------------------------|
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.   |
| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 workspaces per page. |

### Sample Requests

<CodeBlockConfig hideClipboard heading="List flat string tags attached to the workspace">

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/workspaces/workspace-2/relationships/tags
```

</CodeBlockConfig>

<CodeBlockConfig hideClipboard heading="List key-value tags bound directly to the workspace">

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/workspaces/workspace-2/tag-bindings
```

</CodeBlockConfig>

<CodeBlockConfig hideClipboard heading="List all key-value tags bound to the workspace, including those inherited from the parent project">

```shell-session
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/workspaces/workspace-2/effective-tag-bindings
```

</CodeBlockConfig>

### Sample Responses

<CodeBlockConfig hideClipboard heading="List flat string tags attached to the workspace">

```json
{
  "data": [
    {
      "id": "tag-1",
      "type": "tags",
      "attributes": {
        "name": "tag1",
        "created-at": "2022-03-09T06:04:39.585Z",
        "instance-count": 1
      },
      "relationships": {
        "organization": {
          "data": {
            "id": "my
organization                type    organizations                                                    id    tag 2          type    tags          attributes              name    tag2            created at     2022 03 09T06 04 39 585Z            instance count   2                 relationships              organization                data                  id    my organization                type    organizations                                                   CodeBlockConfig    CodeBlockConfig hideClipboard heading  List key value tags bound directly to the workspace       json      data                  id    tb RKs9JSC2lInns32K          type    tag bindings          attributes              key    ws1 key            value    ws1 value                             CodeBlockConfig    CodeBlockConfig hideClipboard heading  List all key value tags bound to the workspace  including those inherited from the parent project       json      data                  type    effective tag bindings          attributes              key    ws1 key            value    ws1 value                              type    effective tag bindings          attributes              key    key1            value    value1                              type    effective tag bindings          attributes              key    key2            value    value2                             CodeBlockConfig      Add flat string tags to a workspace   POST  workspaces  workspace id relationships tags   To add key value tags to an existing workspace  call the  PATCH  workspaces  workspace id  and provide workspace tag bindings in the JSON payload  Refer to  Update a workspace   update a workspace  for additional information   You can also bind key value tags when creating a workspace  Refer to  Create a workspace   create a workspace  for additional information   Refer to  Define project tags   terraform cloud docs projects manage define project tags  for information about supported tag values     Parameter       
  Description                                                                                                                                                                                                                                                                                                                                               workspace id    The workspace ID to add tags to  Obtain this from the  workspace settings   terraform cloud docs workspaces settings  or the  Show Workspace   show workspace  endpoint       Status    Response                    Reason s                                                                                                                                                               204      No Content                  Successfully added tags to workspace                             404       JSON API error object      Workspace not found  or user unauthorized to perform action        Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   It is important to note that  type   as well as one of  id   or   attributes name  is required     Key path                   Type     Default   Description                                                                                                    data   type               string             Must be   tags                   data   id                 string             The ID of the tag to add         data   attributes name    string             The name of the tag to add         Sample Payload     json      data                  type    tags          attributes              name    foo                              type    tags          attributes              name    bar                               Sample Request     shell session   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 
workspaces workspace 2 relationships tags          Sample Response  No response body   Status code  204       Remove tags from workspace  This endpoint removes one or more tags from a workspace  The workspace must already exist  and tag element that supplies an  id  attribute must exist  If the  name  attribute is used  and no matching organization tag is found  no action will occur for that entry  Tags removed from all workspaces will be removed from the organization wide list   To remove key value tags to an existing workspace  call the  PATCH  workspaces  workspace id  and provide workspace tag bindings in the JSON payload  Refer to  Update a workspace   update a workspace  for additional information    DELETE  workspaces  workspace id relationships tags     Parameter         Description                                                                                                                                                                                                                                                                                                                                                         workspace id    The workspace ID to remove tags from  Obtain this from the  workspace settings   terraform cloud docs workspaces settings  or the  Show Workspace   show workspace  endpoint       Status    Response                    Reason s                                                                                                                                                               204      No Content                  Successfully removed tags to workspace                           404       JSON API error object      Workspace not found  or user unauthorized to perform action        Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   It is important to note that  type   as well as one of  id   or   attributes name  is required     Key path                   Type     
Default   Description                                                                                                          data   type               string             Must be   tags                      data   id                 string             The ID of the tag to remove         data   attributes name    string             The name of the tag to remove         Sample Payload     json      data                  type    tags          id    tag Yfha4YpPievQ8wJw                       Sample Request     shell session   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE       data  payload json     https   app terraform io api v2 workspaces workspace 2 relationships tags          Sample Response  No response body   Status code  204       Show data retention policy   EnterpriseAlert  This endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform    EnterpriseAlert    GET  workspaces  workspace id relationships data retention policy     Parameter              Description                                                                                                                                                              workspace id         The ID of the workspace to show the data retention policy for  Obtain this from the  workspace settings   terraform enterprise workspaces settings  or by sending a  GET  request to the    workspaces    show workspace  endpoint      This endpoint shows the data retention policy set explicitly on the workspace  When no data retention policy is set for the workspace  the endpoint returns the default policy configured for the organization  Refer to  Data Retention Policies   terraform enterprise workspaces settings deletion data retention policies  for instructions on configuring data retention policies for workspaces   Refer to  Data Retention Policy API   terraform enterprise api docs data retention policies show data retention 
policy  in the Terraform Enterprise documentation for details      Create or update data retention policy   EnterpriseAlert  This endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform    EnterpriseAlert    POST  workspaces  workspace id relationships data retention policy     Parameter         Description                                                                                                                                                                                                                                                                                                                                                         workspace id    The workspace ID to update the data retention policy for  Obtain this from the  workspace settings   terraform enterprise workspaces settings  or by sending a  GET  request to the    workspaces    show workspace  endpoint     This endpoint creates a data retention policy for a workspace or updates the existing policy   Refer to  Data Retention Policies   terraform enterprise workspaces settings deletion data retention policies  for additional information   Refer to  Data Retention Policy API   terraform enterprise api docs data retention policies create or update data retention policy  in the Terraform Enterprise documentation for details      Remove data retention policy   EnterpriseAlert  This endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform    EnterpriseAlert    DELETE  workspaces  workspace id relationships data retention policy     Parameter         Description                                                                                                                                                                                                                                                                                                                                                                                           workspace 
id    The workspace ID to remove the data retenetion policy for  Obtain this from the  workspace settings   terraform enterprise workspaces settings  or by sending a  GET  request to the    workspaces    show workspace  endpoint     This endpoint removes the data retention policy explicitly set on a workspace  When no data retention policy is set for the workspace  the endpoint returns the default policy configured for the organization  Refer to  Data Retention Policies   terraform enterprise users teams organizations organizations destruction and deletion  for instructions on configuring data retention policies for organizations   Read more about  workspace data retention policies   terraform enterprise workspaces settings deletion data retention policies    Refer to  Data Retention Policy API   terraform enterprise api docs data retention policies remove data retention policy  in the Terraform Enterprise documentation for details      Available Related Resources  The GET endpoints above can optionally return related resources  if requested with  the  include  query parameter   terraform cloud docs api docs inclusion of related resources   The following resource types are available      current configuration version    The last configuration this workspace received  excluding plan only configurations  Terraform uses this configuration for new runs  unless you provide a different one     current configuration version ingress attributes    The commit information for the current configuration version     current run    Additional information about the current run     current run configuration version    The configuration used in the current run     current run configuration version ingress attributes    The commit information used in the current run     current run plan    The plan used in the current run     locked by    The user  team  or run responsible for locking the workspace  if the workspace is currently locked     organization    The full organization record 
    outputs    The outputs for the most recently applied run     project    The full project record     readme    The most recent workspace README md "}
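The `include` values above can be assembled into a request URL programmatically. A minimal sketch in Python, assuming the underscore-separated resource names listed above; the base URL helper and workspace ID are illustrative, not part of the documented API:

```python
from urllib.parse import urlencode


def show_workspace_url(base, workspace_id, include=()):
    """Build a Show Workspace URL, optionally requesting related resources."""
    url = f"{base}/workspaces/{workspace_id}"
    if include:
        # JSON:API convention: related resource types are comma-separated,
        # and urlencode handles the percent-encoding (comma becomes %2C).
        url += "?" + urlencode({"include": ",".join(include)})
    return url


print(show_workspace_url(
    "https://app.terraform.io/api/v2",
    "ws-UYv6RYM8fVhzeGG5",
    include=("current_run", "outputs"),
))
# -> https://app.terraform.io/api/v2/workspaces/ws-UYv6RYM8fVhzeGG5?include=current_run%2Coutputs
```

Letting `urlencode` handle the escaping avoids the manual percent-encoding concerns called out for the pagination parameters above.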
---
page_title: Audit Trail Tokens - API Docs - HCP Terraform
description: >-
  Use the `/authentication-token` endpoint with the `token` query param to manage audit trails API tokens. Generate and delete audit trail tokens using the HTTP API.
tfc_only: true
---

[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200

[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201

[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202

[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204

[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400

[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401

[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403

[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404

[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409

[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412

[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422

[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429

[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500

[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504

[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents

[JSON API error object]: https://jsonapi.org/format/#error-objects

# Audit Trail Tokens API

Audit trail tokens are used to authenticate integrations pulling audit trail data, for example, using the [HCP Terraform for Splunk](/terraform/cloud-docs/integrations/splunk) app.

## Generate a new token

`POST /organizations/:organization_name/authentication-token?token=audit-trails`

| Parameter            | Description                                           |
| -------------------- | ----------------------------------------------------- |
| `:organization_name` | The name of the organization to generate a token for. |

Generates a new [audit trails token](/terraform/cloud-docs/users-teams-organizations/api-tokens#audit-trails-tokens), replacing any existing token.

Only members of the owners team, the owners [team API token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens), and the [organization API token](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens) can access this endpoint.

This endpoint returns the secret text of the new authentication token. You can only access this token when you create it and cannot recover it later.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Status  | Response                                                | Reason              |
| ------- | ------------------------------------------------------- | ------------------- |
| [201][] | [JSON API document][] (`type: "authentication-tokens"`) | Success             |
| [404][] | [JSON API error object][]                               | User not authorized |

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

| Key path                     | Type   | Default | Description                                                                                                                             |
| ---------------------------- | ------ | ------- | --------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type`                  | string |         | Must be `"authentication-token"`.                                                                                                       |
| `data.attributes.expired-at` | string | `null`  | The UTC date and time that the audit trails token expires, in ISO 8601 format. If omitted or set to `null` the token will never expire. |

### Sample Payload

```json
{
  "data": {
    "type": "authentication-token",
    "attributes": {
      "expired-at": "2023-04-06T12:00:00.000Z"
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/my-organization/authentication-token?token=audit-trails
```

### Sample Response

```json
{
  "data": {
    "id": "4111756",
    "type": "authentication-tokens",
    "attributes": {
      "created-at": "2017-11-29T19:11:28.075Z",
      "last-used-at": null,
      "description": null,
      "token": "ZgqYdzuvlv8Iyg.atlasv1.6nV7t1OyFls341jo1xdZTP72fN0uu9VL55ozqzekfmToGFbhoFvvygIRy2mwVAXomOE",
      "expired-at": "2023-04-06T12:00:00.000Z"
    },
    "relationships": {
      "created-by": {
        "data": {
          "id": "user-62goNpx1ThQf689e",
          "type": "users"
        }
      }
    }
  }
}
```

## Delete a token

`DELETE /organizations/:organization_name/authentication-token?token=audit-trails`

| Parameter            | Description                                   |
| -------------------- | --------------------------------------------- |
| `:organization_name` | Which organization's token should be deleted. |

Only members of the owners team, the owners [team API token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens), and the [organization API token](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens) can access this endpoint.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Status  | Response                  | Reason              |
| ------- | ------------------------- | ------------------- |
| [204][] | No Content                | Success             |
| [404][] | [JSON API error object][] | User not authorized |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/organizations/my-organization/authentication-token?token=audit-trails
```
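Since an omitted `expired-at` produces a token that never expires, clients often want to compute an expiry timestamp when creating one. A minimal sketch of building the documented request body in Python; the `ttl_days` convenience parameter is an assumption of this example, not part of the API:

```python
import json
from datetime import datetime, timedelta, timezone


def audit_token_payload(ttl_days=None):
    """Build the body for POST .../authentication-token?token=audit-trails.

    ttl_days is an illustrative helper: the API itself only accepts the
    ISO 8601 `expired-at` timestamp (or null for a non-expiring token).
    """
    attributes = {}
    if ttl_days is not None:
        expiry = datetime.now(timezone.utc) + timedelta(days=ttl_days)
        # Format matches the sample payload, e.g. "2023-04-06T12:00:00.000Z".
        attributes["expired-at"] = expiry.strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {"data": {"type": "authentication-token", "attributes": attributes}}


print(json.dumps(audit_token_payload(30), indent=2))
```

Remember that the secret token appears only in the creation response, so store it immediately; it cannot be retrieved again later.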
{"questions":"terraform 307 https developer mozilla org en US docs Web HTTP Status 307 200 https developer mozilla org en US docs Web HTTP Status 200 page title Applies API Docs HCP Terraform Use the applies endpoint to show the results of a Terraform apply using the HTTP API","answers":"---\npage_title: Applies - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/applies` endpoint to show the results of a Terraform apply using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[307]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/307\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Applies API\n\nAn apply represents the results of applying a Terraform Run's execution plan.\n\n## Attributes\n\n### Apply States\n\nYou'll find the apply state in `data.attributes.status`, as one of the following values.\n\n| State                     | Description                                                                    |\n| ------------------------- | ------------------------------------------------------------------------------ |\n| `pending`                 | The initial status of a apply once it has been created.                        |\n| `managed_queued`\/`queued` | The apply has been queued, awaiting backend service capacity to run terraform. |\n| `running`                 | The apply is running.                                                          |\n| `errored`                 | The apply has errored. This is a final state.                                  |\n| `canceled`                | The apply has been canceled. This is a final state.                            |\n| `finished`                | The apply has completed successfully. This is a final state.                   
|\n| `unreachable`             | The apply will not run. This is a final state.                                 |\n\n## Show an apply\n\n`GET \/applies\/:id`\n\n| Parameter | Description                  |\n| --------- | ---------------------------- |\n| `id`      | The ID of the apply to show. |\n\nThere is no endpoint to list applies. You can find the ID for an apply in the\n`relationships.apply` property of a run object.\n\n| Status  | Response                                  | Reason                                                  |\n| ------- | ----------------------------------------- | ------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"applies\"`) | The request was successful                              |\n| [404][] | [JSON API error object][]                 | Apply not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/applies\/apply-47MBvjwzBG8YKc2v\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"apply-47MBvjwzBG8YKc2v\",\n    \"type\": \"applies\",\n    \"attributes\": {\n      \"execution-details\": {\n        \"mode\": \"remote\",\n      },\n      \"status\": \"finished\",\n      \"status-timestamps\": {\n        \"queued-at\": \"2018-10-17T18:58:27+00:00\",\n        \"started-at\": \"2018-10-17T18:58:29+00:00\",\n        \"finished-at\": \"2018-10-17T18:58:37+00:00\"\n      },\n      \"log-read-url\": 
\"https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djE6OFA1eEdlSFVHRSs4YUcwaW83a1dRRDA0U2E3T3FiWk1HM2NyQlNtcS9JS1hHN3dmTXJmaFhEYTlHdTF1ZlgxZ2wzVC9kVTlNcjRPOEJkK050VFI3U3dvS2ZuaUhFSGpVenJVUFYzSFVZQ1VZYno3T3UyYjdDRVRPRE5pbWJDVTIrNllQTENyTndYd1Y0ak1DL1dPVlN1VlNxKzYzbWlIcnJPa2dRRkJZZGtFeTNiaU84YlZ4QWs2QzlLY3VJb3lmWlIrajF4a1hYZTlsWnFYemRkL2pNOG9Zc0ZDakdVMCtURUE3dDNMODRsRnY4cWl1dUN5dUVuUzdnZzFwL3BNeHlwbXNXZWRrUDhXdzhGNnF4c3dqaXlZS29oL3FKakI5dm9uYU5ZKzAybnloREdnQ3J2Rk5WMlBJemZQTg\",\n      \"resource-additions\": 1,\n      \"resource-changes\": 0,\n      \"resource-destructions\": 0,\n      \"resource-imports\": 0\n    },\n    \"relationships\": {\n      \"state-versions\": {\n        \"data\": [\n          {\n            \"id\": \"sv-TpnsuD3iewwsfeRD\",\n            \"type\": \"state-versions\"\n          },\n          {\n            \"id\": \"sv-Fu1n6a3TgJ1Typq9\",\n            \"type\": \"state-versions\"\n          }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/applies\/apply-47MBvjwzBG8YKc2v\"\n    }\n  }\n}\n```\n\n_Using HCP Terraform agents_\n\n[HCP Terraform agents](\/terraform\/cloud-docs\/api-docs\/agents) allow HCP Terraform to communicate with isolated, private, or on-premises infrastructure. 
When a workspace is set to use the agent execution mode, the apply response will include additional details about the agent pool and agent used.\n\n```json\n{\n  \"data\": {\n    \"id\": \"apply-47MBvjwzBG8YKc2v\",\n    \"type\": \"applies\",\n    \"attributes\": {\n      \"execution-details\": {\n        \"agent-id\": \"agent-S1Y7tcKxXPJDQAvq\",\n        \"agent-name\": \"agent_01\",\n        \"agent-pool-id\": \"apool-Zigq2VGreKq7nwph\",\n        \"agent-pool-name\": \"first-pool\",\n        \"mode\": \"agent\"\n      },\n      \"status\": \"finished\",\n      \"status-timestamps\": {\n        \"queued-at\": \"2018-10-17T18:58:27+00:00\",\n        \"started-at\": \"2018-10-17T18:58:29+00:00\",\n        \"finished-at\": \"2018-10-17T18:58:37+00:00\"\n      },\n      \"log-read-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djE6OFA1eEdlSFVHRSs4YUcwaW83a1dRRDA0U2E3T3FiWk1HM2NyQlNtcS9JS1hHN3dmTXJmaFhEYTlHdTF1ZlgxZ2wzVC9kVTlNcjRPOEJkK050VFI3U3dvS2ZuaUhFSGpVenJVUFYzSFVZQ1VZYno3T3UyYjdDRVRPRE5pbWJDVTIrNllQTENyTndYd1Y0ak1DL1dPVlN1VlNxKzYzbWlIcnJPa2dRRkJZZGtFeTNiaU84YlZ4QWs2QzlLY3VJb3lmWlIrajF4a1hYZTlsWnFYemRkL2pNOG9Zc0ZDakdVMCtURUE3dDNMODRsRnY4cWl1dUN5dUVuUzdnZzFwL3BNeHlwbXNXZWRrUDhXdzhGNnF4c3dqaXlZS29oL3FKakI5dm9uYU5ZKzAybnloREdnQ3J2Rk5WMlBJemZQTg\",\n      \"resource-additions\": 1,\n      \"resource-changes\": 0,\n      \"resource-destructions\": 0,\n      \"resource-imports\": 0\n    },\n    \"relationships\": {\n      \"state-versions\": {\n        \"data\": [\n          {\n            \"id\": \"sv-TpnsuD3iewwsfeRD\",\n            \"type\": \"state-versions\"\n          },\n          {\n            \"id\": \"sv-Fu1n6a3TgJ1Typq9\",\n            \"type\": \"state-versions\"\n          }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/applies\/apply-47MBvjwzBG8YKc2v\"\n    }\n  }\n}\n```\n\n## Recover a failed state upload after applying\n\n`GET \/applies\/:id\/errored-state`\n\n| Parameter | Description                      
         |\n| --------- | ----------------------------------------- |\n| `id`      | The ID of the apply to recover state for. |\n\nIt is possible that during the course of a Run, Terraform may fail to upload a\nstate file. This can happen for a variety of reasons, but should be an\nexceptionally rare occurrence. HCP Terraform agent versions greater than 1.12.0\ninclude a fallback mechanism which attempts to upload the state directly to\nHCP Terraform's backend storage system in these cases. This endpoint then\nmakes the raw data from these failed uploads available to users who are\nauthorized to read the state contents.\n\n| Status  | Response                                     | Reason                                                                              |\n| ------- | -------------------------------------------- | ----------------------------------------------------------------------------------- |\n| [307][] | HTTP temporary redirect to state storage URL | Errored state available and user is authorized to read it                           |\n| [404][] | [JSON API error object][]                    | Apply not found, errored state not uploaded, or user unauthorized to perform action |\n\nWhen a 307 redirect is returned, the storage URL to the raw state file will be\npresent in the `Location` header of the response. The URL in the `Location`\nheader will expire after one minute. It is recommended for the API client to\nfollow the redirect immediately. 
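\n\nFor example, a client can let curl follow the redirect in a single invocation (a sketch reusing the sample apply ID from below; `--location` follows the 307, and recent curl versions do not forward the `Authorization` header to the different storage host, which is fine because the storage URL is pre-signed):\n\n```shell\ncurl --location --fail \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --output errored.tfstate \\\n  https:\/\/app.terraform.io\/api\/v2\/applies\/apply-47MBvjwzBG8YKc2v\/errored-state\n```\n\n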
Each successful request to the errored-state\nendpoint will generate a new, unique storage URL with the same one minute\nexpiration window.\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/applies\/apply-47MBvjwzBG8YKc2v\/errored-state\n```\n\n### Sample Response\n\n```\nHTTP\/1.1 307 Temporary Redirect\nContent-Length: 22\nContent-Type: text\/plain\nLocation: https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djE6OFA1eEdlSFVHRSs4YUcwaW83a1dRRDA0U2E3T3FiWk1HM2NyQlNtcS9JS1hHN3dmTXJmaFhEYTlHdTF1ZlgxZ2wzVC9kVTlNcjRPOEJkK050VFI3U3dvS2ZuaUhFSGpVenJVUFYzSFVZQ1VZYno3T3UyYjdDRVRPRE5pbWJDVTIrNllQTENyTndYd1Y0ak1DL1dPVlN1VlNxKzYzbWlIcnJPa2dRRkJZZGtFeTNiaU84YlZ4QWs2QzlLY3VJb3lmWlIrajF4a1hYZTlsWnFYemRkL2pNOG9Zc0ZDakdVMCtURUE3dDNMODRsRnY4cWl1dUN5dUVuUzdnZzFwL3BNeHlwbXNXZWRrUDhXdzhGNnF4c3dqaXlZS29oL3FKakI5dm9uYU5ZKzAybnloREdnQ3J2Rk5WMlBJemZQTg\n\n307 Temporary Redirect\n```","site":"terraform","answers_cleaned":"    page title  Applies   API Docs   HCP Terraform description       Use the   applies  endpoint to show the results of a Terraform apply using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   307   https   developer mozilla org en US docs Web HTTP Status 307   404   https   developer mozilla org en US docs Web HTTP Status 404   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Applies API  An apply represents the results of applying a Terraform Run s execution plan      Attributes      Apply States  You ll find the apply state in  data attributes status   as one of the following values     State                       Description                                                                                                                                                                                        pending                    The initial status of a 
apply once it has been created                              managed queued   queued    The apply has been queued  awaiting backend service capacity to run terraform       running                    The apply is running                                                                errored                    The apply has errored  This is a final state                                        canceled                   The apply has been canceled  This is a final state                                  finished                   The apply has completed successfully  This is a final state                         unreachable                The apply will not run  This is a final state                                        Show an apply   GET  applies  id     Parameter   Description                                                                    id         The ID of the apply to show     There is no endpoint to list applies  You can find the ID for an apply in the  relationships apply  property of a run object     Status    Response                                    Reason                                                                                                                                                                         200       JSON API document      type   applies      The request was successful                                   404       JSON API error object                      Apply not found  or user unauthorized to perform action        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 applies apply 47MBvjwzBG8YKc2v          Sample Response     json      data          id    apply 47MBvjwzBG8YKc2v        type    applies        attributes            execution details              mode    remote                   status    finished          status timestamps              queued at    2018 10 17T18 58 27 00 00            started at  
  2018 10 17T18 58 29 00 00            finished at    2018 10 17T18 58 37 00 00                  log read url    https   archivist terraform io v1 object dmF1bHQ6djE6OFA1eEdlSFVHRSs4YUcwaW83a1dRRDA0U2E3T3FiWk1HM2NyQlNtcS9JS1hHN3dmTXJmaFhEYTlHdTF1ZlgxZ2wzVC9kVTlNcjRPOEJkK050VFI3U3dvS2ZuaUhFSGpVenJVUFYzSFVZQ1VZYno3T3UyYjdDRVRPRE5pbWJDVTIrNllQTENyTndYd1Y0ak1DL1dPVlN1VlNxKzYzbWlIcnJPa2dRRkJZZGtFeTNiaU84YlZ4QWs2QzlLY3VJb3lmWlIrajF4a1hYZTlsWnFYemRkL2pNOG9Zc0ZDakdVMCtURUE3dDNMODRsRnY4cWl1dUN5dUVuUzdnZzFwL3BNeHlwbXNXZWRrUDhXdzhGNnF4c3dqaXlZS29oL3FKakI5dm9uYU5ZKzAybnloREdnQ3J2Rk5WMlBJemZQTg          resource additions   1         resource changes   0         resource destructions   0         resource imports   0             relationships            state versions              data                              id    sv TpnsuD3iewwsfeRD                type    state versions                                        id    sv Fu1n6a3TgJ1Typq9                type    state versions                                            links            self     api v2 applies apply 47MBvjwzBG8YKc2v                    Using HCP Terraform agents    HCP Terraform agents   terraform cloud docs api docs agents  allow HCP Terraform to communicate with isolated  private  or on premises infrastructure  When a workspace is set to use the agent execution mode  the apply response will include additional details about the agent pool and agent used      json      data          id    apply 47MBvjwzBG8YKc2v        type    applies        attributes            execution details              agent id    agent S1Y7tcKxXPJDQAvq            agent name    agent 01            agent pool id    apool Zigq2VGreKq7nwph            agent pool name    first pool            mode    agent                   status    finished          status timestamps              queued at    2018 10 17T18 58 27 00 00            started at    2018 10 17T18 58 29 00 00            finished at    2018 10 17T18 58 37 00 00                  log 
read url    https   archivist terraform io v1 object dmF1bHQ6djE6OFA1eEdlSFVHRSs4YUcwaW83a1dRRDA0U2E3T3FiWk1HM2NyQlNtcS9JS1hHN3dmTXJmaFhEYTlHdTF1ZlgxZ2wzVC9kVTlNcjRPOEJkK050VFI3U3dvS2ZuaUhFSGpVenJVUFYzSFVZQ1VZYno3T3UyYjdDRVRPRE5pbWJDVTIrNllQTENyTndYd1Y0ak1DL1dPVlN1VlNxKzYzbWlIcnJPa2dRRkJZZGtFeTNiaU84YlZ4QWs2QzlLY3VJb3lmWlIrajF4a1hYZTlsWnFYemRkL2pNOG9Zc0ZDakdVMCtURUE3dDNMODRsRnY4cWl1dUN5dUVuUzdnZzFwL3BNeHlwbXNXZWRrUDhXdzhGNnF4c3dqaXlZS29oL3FKakI5dm9uYU5ZKzAybnloREdnQ3J2Rk5WMlBJemZQTg          resource additions   1         resource changes   0         resource destructions   0         resource imports   0             relationships            state versions              data                              id    sv TpnsuD3iewwsfeRD                type    state versions                                        id    sv Fu1n6a3TgJ1Typq9                type    state versions                                            links            self     api v2 applies apply 47MBvjwzBG8YKc2v                      Recover a failed state upload after applying   GET  applies  id errored state     Parameter   Description                                                                                              id         The ID of the apply to recover state for     It is possible that during the course of a Run  Terraform may fail to upload a state file  This can happen for a variety of reasons  but should be an exceptionally rare occurrence  HCP Terraform agent versions greater than 1 12 0 include a fallback mechanism which attempts to upload the state directly to HCP Terraform s backend storage system in these cases  This endpoint then makes the raw data from these failed uploads available to users who are authorized to read the state contents     Status    Response                                       Reason                                                                                                                                                                                      
                                              307      HTTP temporary redirect to state storage URL   Errored state available and user is authorized to read it                                404       JSON API error object                         Apply not found  errored state not uploaded  or user unauthorized to perform action    When a 307 redirect is returned  the storage URL to the raw state file will be present in the  Location  header of the response  The URL in the  Location  header will expire after one minute  It is recommended for the API client to follow the redirect immediately  Each successful request to the errored state endpoint will generate a new  unique storage URL with the same one minute expiration window       Sample Request     shell curl       header  Authorization  Bearer  TOKEN      https   app terraform io api v2 applies apply 47MBvjwzBG8YKc2v errored state          Sample Response      HTTP 1 1 307 Temporary Redirect Content Length  22 Content Type  text plain Location  https   archivist terraform io v1 object dmF1bHQ6djE6OFA1eEdlSFVHRSs4YUcwaW83a1dRRDA0U2E3T3FiWk1HM2NyQlNtcS9JS1hHN3dmTXJmaFhEYTlHdTF1ZlgxZ2wzVC9kVTlNcjRPOEJkK050VFI3U3dvS2ZuaUhFSGpVenJVUFYzSFVZQ1VZYno3T3UyYjdDRVRPRE5pbWJDVTIrNllQTENyTndYd1Y0ak1DL1dPVlN1VlNxKzYzbWlIcnJPa2dRRkJZZGtFeTNiaU84YlZ4QWs2QzlLY3VJb3lmWlIrajF4a1hYZTlsWnFYemRkL2pNOG9Zc0ZDakdVMCtURUE3dDNMODRsRnY4cWl1dUN5dUVuUzdnZzFwL3BNeHlwbXNXZWRrUDhXdzhGNnF4c3dqaXlZS29oL3FKakI5dm9uYU5ZKzAybnloREdnQ3J2Rk5WMlBJemZQTg  307 Temporary Redirect    "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Change Requests API Docs HCP Terraform Use the change requests endpoint to view and archive change requests on given workspace","answers":"---\npage_title: Change Requests - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/change_requests` endpoint to view and archive change requests on a given workspace.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Change requests API\n\n-> **Note**: Change requests are in public beta. All APIs and workflows are subject to change.\n\nYou can use change requests to keep track of workspace to-dos directly on that workspace. 
Change requests are a backlog of the changes that a workspace requires, often to ensure that the workspace keeps up with your company's compliance and best practices. Change requests can also trigger [team notifications](\/terraform\/cloud-docs\/users-teams-organizations\/teams\/notifications) to directly notify team members whenever someone creates a new change request.\n\n@include 'tfc-package-callouts\/change-requests.mdx'\n\n## List change requests\n\nUse this endpoint to list a workspace's change requests.\n\n`GET api\/v2\/workspaces\/:workspace_name\/change-requests`\n\n| Parameter               | Description              |\n| ----------------------- | ------------------------ |\n| `:workspace_name`       | The name of the workspace you want to list the change requests of. |\n\n### Query parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter       | Description                                                                                                                                                                  |\n| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `page[number]`  | **Optional.** If omitted, the endpoint will return the first page.                                                                                                           |\n| `page[size]`    | **Optional.** If omitted, the endpoint will return 20 change requests per page.                                                                                              
|\n\n### Sample request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/:workspace_name\/change-requests\n```\n\n### Sample response\n\n```json\n{\n  \"data\": [{\n    \"type\": \"workspace_change_requests\",\n    \"id\": \"wscr-yjLcKUMFTs6kr9iC\",\n    \"attributes\": {\n      \"subject\": \"[Action Required] Github Actions Pinning (SEC-090)\",\n      \"message\": \"Some long description here\",\n      \"archived-by\": null,\n      \"archived-at\": null,\n      \"created-at\": \"2024-06-12T21:45:11.594Z\",\n      \"updated-at\": \"2024-06-12T21:45:11.594Z\"\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-FTs6kr9iCwweku\",\n          \"type\": \"workspaces\"\n        }\n      }\n    }\n  },\n  {\n    \"type\": \"workspace_change_requests\",\n    \"id\": \"wscr-jXQMBkPAFGD6eyFs\",\n    \"attributes\": {\n      \"subject\": \"[Action Required] Github Actions Pinning (SEC-100)\",\n      \"message\": \"Some long description here\",\n      \"archived-by\": null,\n      \"archived-at\": null,\n      \"created-at\": \"2024-06-12T21:45:11.594Z\",\n      \"updated-at\": \"2024-06-12T21:45:11.594Z\"\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-FTs6kr9iCwweku\",\n          \"type\": \"workspaces\"\n        }\n      }\n    }\n  }],\n  \"links\": {\n    \"self\": \"https:\/\/localhost\/api\/v2\/workspaces\/example-workspace-name\/change-requests?page%5Bnumber%5D=1\\u0026page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/localhost\/api\/v2\/workspaces\/example-workspace-name\/change-requests?page%5Bnumber%5D=1\\u0026page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": 
\"https:\/\/localhost\/api\/v2\/workspaces\/example-workspace-name\/change-requests?page%5Bnumber%5D=1\\u0026page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 2\n    }\n  }\n}\n```\n\n## Show a change request\n\nUse this endpoint to show the details of a specific change request.\n\n`GET api\/v2\/workspaces\/change-requests\/:change_request_id`\n\n| Parameter               | Description              |\n| ----------------------- | ------------------------ |\n| `:change_request_id`    | The change request ID    |\n\n### Sample request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/change-requests\/wscr-jXQMBkPAFGD6eyFs\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"type\": \"workspace_change_requests\",\n    \"id\": \"wscr-jXQMBkPAFGD6eyFs\",\n    \"attributes\": {\n      \"subject\": \"[Action Required] Github Actions Pinning (SEC-100)\",\n      \"message\": \"Some long description here\",\n      \"archived-by\": null,\n      \"archived-at\": null,\n      \"created-at\": \"2024-06-12T21:45:11.594Z\",\n      \"updated-at\": \"2024-06-12T21:45:11.594Z\"\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-FTs6kr9iCwweku\",\n          \"type\": \"workspaces\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Archive change request\n\nIf someone completes a change request, they can archive it to reflect the request's completion. You can still view archived change requests; however, they are sorted separately and marked as `\"archived\"`. 
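\n\nFor example, a client that only wants unarchived change requests could filter the list response client-side (a hypothetical convenience, assuming `jq` is available; the request is the list endpoint shown above):\n\n```shell\ncurl --silent \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/:workspace_name\/change-requests \\\n  | jq '[.data[] | select(.attributes[\"archived-at\"] == null) | .id]'\n```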
\n\n`POST api\/v2\/workspaces\/change-requests\/:change_request_id`\n\n| Parameter               | Description              |\n| ----------------------- | ------------------------ |\n| `:change_request_id`    | The ID of the change request to archive.    |\n\n### Sample request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/change-requests\/wscr-jXQMBkPAFGD6eyFs\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"type\": \"workspace_change_requests\",\n    \"id\": \"wscr-jXQMBkPAFGD6eyFs\",\n    \"attributes\": {\n      \"subject\": \"[Action Required] Github Actions Pinning (SEC-100)\",\n      \"message\": \"Some long description here\",\n      \"archived-by\": \"user-2n4yj45149kf\",\n      \"archived-at\": \"2024-08-12T10:12:44.745Z\",\n      \"created-at\": \"2024-06-12T21:45:11.594Z\",\n      \"updated-at\": \"2024-06-12T21:45:11.594Z\"\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-FTs6kr9iCwweku\",\n          \"type\": \"workspaces\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Create new change request\n\nYou can make change requests through the [explorer's](\/terraform\/cloud-docs\/api-docs\/explorer) experimental `\"bulk actions\"` endpoint.\n\n`POST \/api\/v2\/organizations\/:organization_name\/explorer\/bulk-actions`\n\n| Parameter               | Description              |\n| ----------------------- | ------------------------ |\n| `:organization_name`    | The name of the organization you want to create a change request in.       
|\n\nYou must specify the following fields in your payload when creating a new change request.\n\n| Key path   | Type | Required | Description |\n|------------|------|---------|-------------|\n| `data.action_type`          | string |   Required   | The action to take. To create a change request, specify `\"change_requests\"`. |\n| `data.action_inputs`        | object |   Required   | Arguments for the bulk action. |\n| `data.action_inputs.subject` | string |  Required   | The change request's subject line. |\n| `data.action_inputs.message` | string |   Required   | The change request's message, which HCP Terraform treats as markdown. |\n| `data.target_ids`           | array  |   Optional   | The IDs of the workspaces you want to associate with this change request. You do not need to specify this field if you provide `data.query`. |\n| `data.query`                | object |    Optional  | An [explorer query](\/terraform\/cloud-docs\/api-docs\/explorer#execute-a-query) with workspace data. You do not need to specify this field if you provide `data.target_ids`. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"bulk_actions\",\n    \"attributes\": {\n      \"action_type\": \"change_requests\",\n      \"action_inputs\": {\n        \"subject\": \"[Action Required] Github Actions Pinning (SEC-090)\",\n        \"message\": \"Some long description here\"\n      }\n    },\n    \"query\": {\n      \"type\": \"workspaces\",\n      \"filter\": [\n        {\n          \"field\": \"workspace_name\",\n          \"operator\": \"contains\",\n          \"value\": [\"dev\"]\n        }\n      ]\n    }\n  }\n}\n```\n\n### Sample request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/:organization_name\/explorer\/bulk-actions\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"eba-3jk34n5k2bs\",\n    \"attributes\": {\n      \"organization_id\": \"org-9mvjfwpq098\",\n      \"action_type\": \"change_requests\",\n      \"action_inputs\": {\n        \"subject\": \"[Action Required] Github Actions Pinning (SEC-090)\",\n        \"message\": \"Some long description here\"\n      },\n      \"created_by\": {\n        \"id\": \"user-n86dcbcrwtw9\",\n        \"type\": \"users\"\n      }\n    },\n    \"type\": \"explorer_bulk_actions\"\n  }\n}\n```","site":"terraform","answers_cleaned":"    page title  Change Requests   API Docs   HCP Terraform description       Use the   change requests  endpoint to view and archive change requests on given workspace        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 
401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Change requests API       Note    Change requests are in public beta  All APIs and workflows are subject to change   You can use change requests to keep track of workspace to dos directly on that workspace  Change requests are a backlog of the changes that a workspace requires  often to ensure that workspace keeps up with compliance and best practices of your company  Change requests can also trigger  team notifications   terraform cloud docs users teams organizations teams notifications  to directly notify team members whenever someone creates a new change request    include  tfc package callouts change requests mdx      List change requests  Use this endpoint to list a workspace s change requests    GET api v2 workspaces  workspace name change requests     Parameter                 Description                                                                           workspace name          The name of workspace you want to list the change requests of          Query parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter         Description                                              
                                                                                                                                                                                                                                                                                                                            page number        Optional    If omitted  the endpoint will return the first page                                                                                                                 page size          Optional    If omitted  the endpoint will return 20 teams per page                                                                                                                Sample request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 workspaces  workspace name change requests          Sample response     json      data          type    workspace change requests       id    wscr yjLcKUMFTs6kr9iC      attributes            subject     Action Required  Github Actions Pinning  SEC 090           message    Some long description here          archived by   null         archived at   null         created at    2024 06 12T21 45 11 594Z          updated at    2024 06 12T21 45 11 594Z             relationships            workspace               data                id    ws FTs6kr9iCwweku              type    workspaces                                        type    workspace change requests        id    wscr jXQMBkPAFGD6eyFs        attributes            subject     Action Required  Github Actions Pinning  SEC 100           message    Some long description here          archived by   null         archived at   null         created at    2024 06 12T21 45 11 594Z          updated at    2024 06 12T21 45 11 594Z                relationships             workspace                data                id    ws FTs6kr9iCwweku              
\"type\": \"workspaces\"\n          }\n        }\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/localhost\/api\/v2\/workspaces\/example-workspace-name\/change-requests?page%5Bnumber%5D=1\u0026page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/localhost\/api\/v2\/workspaces\/example-workspace-name\/change-requests?page%5Bnumber%5D=1\u0026page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/localhost\/api\/v2\/workspaces\/example-workspace-name\/change-requests?page%5Bnumber%5D=1\u0026page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 1\n    }\n  }\n}\n```\n\n## Show a change request\n\nUse this endpoint to list the details of a specific change request.\n\n`GET \/api\/v2\/workspaces\/change-requests\/:change_request_id`\n\n| Parameter            | Description            |\n| -------------------- | ---------------------- |\n| `:change_request_id` | The change request ID. |\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/change-requests\/wscr-jXQMBkPAFGD6eyFs\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"type\": \"workspace-change-requests\",\n    \"id\": \"wscr-jXQMBkPAFGD6eyFs\",\n    \"attributes\": {\n      \"subject\": \"Action Required: Github Actions Pinning (SEC-100)\",\n      \"message\": \"Some long description here\",\n      \"archived-by\": null,\n      \"archived-at\": null,\n      \"created-at\": \"2024-06-12T21:45:11.594Z\",\n      \"updated-at\": \"2024-06-12T21:45:11.594Z\"\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-FTs6kr9iCwweku\",\n          \"type\": \"workspaces\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Archive a change request\n\nIf someone completes a change request, they can archive it to reflect the request's completion. You can still view archived change requests; however, they are sorted separately and marked as \"archived\".\n\n`POST \/api\/v2\/workspaces\/change-requests\/:change_request_id`\n\n| Parameter            | Description                              |\n| -------------------- | ---------------------------------------- |\n| `:change_request_id` | The ID of the change request to archive. |\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/change-requests\/wscr-jXQMBkPAFGD6eyFs\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"type\": \"workspace-change-requests\",\n    \"id\": \"wscr-jXQMBkPAFGD6eyFs\",\n    \"attributes\": {\n      \"subject\": \"Action Required: Github Actions Pinning (SEC-100)\",\n      \"message\": \"Some long description here\",\n      \"archived-by\": \"user-2n4yj45149kf\",\n      \"archived-at\": \"2024-08-12T10:12:44.745Z\",\n      \"created-at\": \"2024-06-12T21:45:11.594Z\",\n      \"updated-at\": \"2024-06-12T21:45:11.594Z\"\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-FTs6kr9iCwweku\",\n          \"type\": \"workspaces\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Create new change request\n\nYou can make change requests through the [explorer's](\/terraform\/cloud-docs\/api-docs\/explorer) experimental **bulk actions** endpoint.\n\n`POST \/api\/v2\/organizations\/:organization_name\/explorer\/bulk-actions`\n\n| Parameter            | Description                                                          |\n| -------------------- | -------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization you want to create a change request in. |\n\nYou must specify the following fields in your payload when creating a new change request:\n\n| Key path                                | Type   | Required | Description                                                                                                                                  |\n| --------------------------------------- | ------ | -------- | -------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.attributes.action-type`           | string | Required | The action to take. To create a change request, specify `change-requests`.                                                                   |\n| `data.attributes.action-inputs`         | object | Required | Arguments for the bulk action.                                                                                                               |\n| `data.attributes.action-inputs.subject` | string | Required | The change request's subject line.                                                                                                           |\n| `data.attributes.action-inputs.message` | string | Required | The change request's message, which HCP Terraform treats as markdown.                                                                        |\n| `data.target-ids`                       | array  | Optional | The IDs of the workspaces you want to associate with this change request. You do not need to specify this field if you provide `data.query`. |\n| `data.query`                            | object | Optional | An [explorer query](\/terraform\/cloud-docs\/api-docs\/explorer#execute-a-query) with workspace data. You do not need to specify this field if you provide `data.target-ids`. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"bulk-actions\",\n    \"attributes\": {\n      \"action-type\": \"change-requests\",\n      \"action-inputs\": {\n        \"subject\": \"Action Required: Github Actions Pinning (SEC-090)\",\n        \"message\": \"Some long description here\"\n      }\n    },\n    \"query\": {\n      \"type\": \"workspaces\",\n      \"filter\": {\n        \"field\": \"workspace-name\",\n        \"operator\": \"contains\",\n        \"value\": \"dev\"\n      }\n    }\n  }\n}\n```\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/:organization_name\/explorer\/bulk-actions\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"eba-3jk34n5k2bs\",\n    \"attributes\": {\n      \"organization-id\": \"org-9mvjfwpq098\",\n      \"action-type\": \"change-requests\",\n      \"action-inputs\": {\n        \"subject\": \"Action Required: Github Actions Pinning (SEC-090)\",\n        \"message\": \"Some long description here\"\n      },\n      \"created-by\": {\n        \"id\": \"user-n86dcbcrwtw9\",\n        \"type\": \"users\"\n      }\n    },\n    \"type\": \"explorer-bulk-actions\"\n  }\n}\n```\n"}
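The create-change-request payload nests the subject and message under the bulk action's inputs, so a typo in the key path fails server-side with a generic validation error. A minimal pre-flight sketch with `jq` can catch missing fields locally before you POST; this assumes `jq` is installed, and the `payload.json` file name is illustrative.

```shell
# Hypothetical pre-flight check before POSTing to the bulk-actions endpoint.
# Write the payload (here via a heredoc; normally you'd author the file directly).
cat > payload.json <<'EOF'
{
  "data": {
    "type": "bulk-actions",
    "attributes": {
      "action-type": "change-requests",
      "action-inputs": {
        "subject": "Action Required: Github Actions Pinning (SEC-090)",
        "message": "Some long description here"
      }
    }
  }
}
EOF

# jq -e sets the exit status from the filter result, so the check fails
# if either required action-inputs field is missing or null.
jq -e '.data.attributes["action-inputs"] | .subject and .message' payload.json > /dev/null \
  && echo "payload OK"
```

If the check exits non-zero, the payload is missing a required field and there is no point sending the request.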
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Policy Sets API Docs HCP Terraform Use the policy sets endpoint to manage groups of Sentinel and OPA policies List show create and update policy sets and policy set versions Attach detach and exclude policies from workspaces or attach and detach policies to projects","answers":"---\npage_title: Policy Sets - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/policy-sets` endpoint to manage groups of Sentinel and OPA policies. List, show, create, and update policy sets and policy set versions. Attach, detach, and exclude policies from workspaces, or attach and detach policies to projects.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: 
https:\/\/jsonapi.org\/format\/#error-objects\n\n# Policy sets API\n\n[Policy Enforcement](\/terraform\/cloud-docs\/policy-enforcement) lets you use the policy-as-code frameworks Sentinel and Open Policy Agent (OPA) to apply policy checks to HCP Terraform workspaces.\n\n[Policy sets](\/terraform\/cloud-docs\/policy-enforcement\/manage-policy-sets) are collections of policies that you can apply globally or to specific [projects](\/terraform\/cloud-docs\/projects\/manage) and workspaces. For each run in the selected workspaces, HCP Terraform checks the Terraform plan against the policy set.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nThis API provides endpoints to create, read, update, and delete policy sets in an HCP Terraform organization. To view and manage individual policies, use the [Policies API](\/terraform\/cloud-docs\/api-docs\/policies).\n\nMany of these endpoints let you create policy sets from a designated repository in a Version Control System (VCS). For more information about how to configure a policy set VCS repository, refer to [Sentinel Policy Set VCS Repositories](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/vcs) and [OPA Policy Set VCS Repositories](\/terraform\/cloud-docs\/policy-enforcement\/opa\/vcs).\n\nInstead of connecting HCP Terraform to a VCS repository, you can use the [Policy Set Versions endpoint](#create-a-policy-set-version) to create an entire policy set from a `tar.gz` file.\n\nInteracting with policy sets requires permission to manage policies. 
([More about permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions).)\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Create a policy set\n\n`POST \/organizations\/:organization_name\/policy-sets`\n\n| Parameter            | Description                                                                                                                                                                                                                                                           |\n| -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The organization to create the policy set in. The organization must already exist in the system, and the token authenticating the API request must have permission to manage policies. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)) |\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Status  | Response                                      | Reason                                                         |\n| ------- | --------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"policy-sets\"`) | Successfully created a policy set                              |\n| [404][] | [JSON API error object][]                     | Organization not found, or user unauthorized to perform action |\n| [422][] | [JSON API error object][]                     | Malformed request body (missing attributes, wrong types, etc.) 
|\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                              | Type           | Default   | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |\n| ----------------------------------------------------- | -------------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                                           | string         |            | Must be `\"policy-sets\"`.                                                                                                                                                                                                                                                                                                                                                                                                                                                        
                                                                                 |\n| `data.attributes.name`                                | string         |            | The name of the policy set. Can include letters, numbers, `-`, and `_`.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |\n| `data.attributes.description`                         | string         | `null`     | Text describing the policy set's purpose. This field supports Markdown and appears in the HCP Terraform UI.                                                                                                                                                                                                                                                                                                                                                                                                                                                    |\n| `data.attributes.global`                              | boolean        | `false`    | Whether HCP Terraform should automatically apply this policy set to all of an organization's workspaces.                                                                                                                                                                                                                                                                                                                                                                                                                                                       
|\n| `data.attributes.kind`                                | string         | `sentinel` | The policy-as-code framework associated with the policy. Valid values are `sentinel` and `opa`.                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |\n| `data.attributes.overridable`                         | boolean        | `false`    | Whether or not users can override this policy when it fails during a run. Valid for sentinel policies only if `agent-enabled` is set to `true`.                                                                                                                                                                                                                                                                                                                                                                                                                  |\n| `data.attributes.vcs-repo`                            | object         | `null`     | VCS repository information. When present, HCP Terraform sources the policies and configuration from the specified VCS repository. This attribute and `policies` relationships are mutually exclusive, and you cannot use them simultaneously.                                                                                                                                                                                                                                                                                                                  
|\n| `data.attributes.vcs-repo.branch`                     | string         | `null`     | The branch of the VCS repository where HCP Terraform should retrieve the policy set. If empty, HCP Terraform uses the default branch.                                                                                                                                                                                                                                                                                                                                                                                                                        |\n| `data.attributes.vcs-repo.identifier`                 | string         |            | The VCS repository identifier in the format `<namespace>\/<repo>`. For example, `hashicorp\/my-policy-set`. The format for Azure DevOps is `<org>\/<project>\/_git\/<repo>`.                                                                                                                                                                                                                                                                                                                                                                                          |\n| `data.attributes.vcs-repo.oauth-token-id`             | string         |            | The OAuth Token ID HCP Terraform should use to connect to the VCS host. This value must not be specified if `github-app-installation-id` is specified.                                                                                                                                                                                                                                                                                                                                                                                                         
|\n| `data.attributes.vcs-repo.github-app-installation-id` | string         |            | The VCS Connection GitHub App Installation to use. Find this ID on the account settings page. Requires previously authorizing the GitHub App and generating a user-to-server token. Manage the token from **Account Settings** within HCP Terraform. You can not specify this value if `oauth-token-id` is specified.                                                                                                                                                                                                                                                |\n| `data.attributes.vcs-repo.ingress-submodules`         | boolean        | `false`    | Whether HCP Terraform should instantiate repository submodules when retrieving the policy set.                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |\n| `data.attributes.policies-path`                       | string         | `null`     | The VCS repository subdirectory that contains the policies for this policy set. HCP Terraform ignores files and directories outside of this sub-path and does not update the policy set when those files change. This attribute is only valid when you specify a VCS repository for the policy set.                                                                                                                                                                                                                                                            
|\n| `data.relationships.projects.data[]`                  | array\\[object] | `[]`       | A list of resource identifier objects that defines which projects are associated with the policy set. These objects must contain `id` and `type` properties, and the `type` property must be `projects`. For example, `{ \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }`. You can only specify this attribute when `data.attributes.global` is `false`.                                                                                                                                                                                                      |\n| `data.relationships.workspaces.data[]`                | array\\[object] | `[]`       | A list of resource identifier objects that defines which workspaces are associated with the policy set. These objects must contain `id` and `type` properties, and the `type` property must be `workspaces`. For example, `{ \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }`. Obtain workspace IDs from the [workspace's settings page](\/terraform\/enterprise\/workspaces\/settings) or the [Show Workspace endpoint](\/terraform\/enterprise\/api-docs\/workspaces#show-workspace). You can only specify this attribute when `data.attributes.global` is `false`.|\n| `data.relationships.workspace-exclusions.data[]`      | array\\[object] | `[]`       | A list of resource identifier objects specifying which workspaces HCP Terraform excludes from a policy set's enforcement. These objects must contain `id` and `type` properties, and the `type` property must be `workspaces`. For example, `{ \"id\": \"ws-FVVvzCDaykN1oHiw\", \"type\": \"workspaces\" }`.                                                                                                                                                                                                                                                           
|\n| `data.relationships.policies.data[]`                  | array\\[object] | `[]`       | A list of resource identifier objects that defines which policies are members of the policy set. These objects must contain `id` and `type` properties, and the `type` property must be `policies`. For example, `{ \"id\": \"pol-u3S5p2Uwk21keu1s\", \"type\": \"policies\" }`.                                                                                                                                                                                                                                                                                         |\n| `data.attributes.agent-enabled`  **(beta)**        | boolean        | `false`    | Only valid for `sentinel` policy sets. Whether this policy set should run as a policy evaluation in the HCP Terraform agent.                                                                                                                                                                                                                                                                                                                                                                                                                                     |\n| `data.attributes.policy-tool-version`  **(beta)**  | string         | `latest`  | The version of the tool that HCP Terraform uses to evaluate policies. You can only set a policy tool version for 'sentinel' policy sets if `agent-enabled` is `true`.                                                                                                                                                                                                                                                                                                                                                                                         
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"policy-sets\",\n    \"attributes\": {\n      \"name\": \"production\",\n      \"description\": \"This set contains policies that should be checked on all production infrastructure workspaces.\",\n      \"global\": false,\n      \"kind\": \"sentinel\",\n      \"agent-enabled\": true,\n      \"policy-tool-version\": \"0.23.0\",\n      \"overridable\": true,\n      \"policies-path\": \"\/policy-sets\/foo\",\n      \"vcs-repo\": {\n        \"branch\": \"main\",\n        \"identifier\": \"hashicorp\/my-policy-sets\",\n        \"ingress-submodules\": false,\n        \"oauth-token-id\": \"ot-7Fr9d83jWsi8u23A\"\n      }\n    },\n    \"relationships\": {\n      \"projects\": {\n        \"data\": [\n          { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n        ]\n      },\n      \"workspaces\": {\n        \"data\": [\n          { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n        ]\n      },\n      \"workspace-exclusions\": {\n        \"data\": [\n          { \"id\": \"ws-FVVvzCDaykN1oHiw\", \"type\": \"workspaces\" }\n        ]\n      }\n    }\n  }\n}\n```\n\n### Sample payload with individual policy relationships\n\n```json\n{\n  \"data\": {\n    \"type\": \"policy-sets\",\n    \"attributes\": {\n      \"name\": \"production\",\n      \"description\": \"This set contains policies that should be checked on all production infrastructure workspaces.\",\n      \"kind\": \"sentinel\",\n      \"global\": false,\n      \"agent-enabled\": true,\n      \"policy-tool-version\": \"0.23.0\",\n      \"overridable\": true\n    },\n    \"relationships\": {\n      \"policies\": {\n        \"data\": [\n          { \"id\": \"pol-u3S5p2Uwk21keu1s\", \"type\": \"policies\" }\n        ]\n      },\n      \"workspaces\": {\n        \"data\": [\n          { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n        ]\n      }\n    }\n  }\n}\n```\n\n### Sample 
Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/policy-sets\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"polset-3yVQZvHzf5j3WRJ1\",\n    \"type\":\"policy-sets\",\n    \"attributes\": {\n      \"name\": \"production\",\n      \"description\": \"This set contains policies that should be checked on all production infrastructure workspaces.\",\n      \"kind\": \"sentinel\",\n      \"global\": false,\n      \"agent-enabled\": true,\n      \"policy-tool-version\": \"0.23.0\",\n      \"overridable\": true,\n      \"workspace-count\": 1,\n      \"policies-path\": \"\/policy-sets\/foo\",\n      \"versioned\": true,\n      \"vcs-repo\": {\n        \"branch\": \"main\",\n        \"identifier\": \"hashicorp\/my-policy-sets\",\n        \"ingress-submodules\": false,\n        \"oauth-token-id\": \"ot-7Fr9d83jWsi8u23A\"\n      },\n      \"created-at\": \"2018-09-11T18:21:21.784Z\",\n      \"updated-at\": \"2018-09-11T18:21:21.784Z\"\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n      },\n      \"projects\": {\n        \"data\": [\n          { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n        ]\n      },\n      \"workspaces\": {\n        \"data\": [\n          { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n        ]\n      },\n      \"workspace-exclusions\": {\n        \"data\": [\n          { \"id\": \"ws-FVVvzCDaykN1oHiw\", \"type\": \"workspaces\" }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\"\n    }\n  }\n}\n```\n\n### Sample response with individual policy relationships\n\n```json\n{\n  \"data\": {\n    \"id\":\"polset-3yVQZvHzf5j3WRJ1\",\n    
\"type\":\"policy-sets\",\n    \"attributes\": {\n      \"name\": \"production\",\n      \"description\": \"This set contains policies that should be checked on all production infrastructure workspaces.\",\n      \"kind\": \"sentinel\",\n      \"global\": false,\n      \"agent-enabled\": true,\n      \"policy-tool-version\": \"0.23.0\",\n      \"overridable\": true,\n      \"policy-count\": 1,\n      \"workspace-count\": 1,\n      \"versioned\": false,\n      \"created-at\": \"2018-09-11T18:21:21.784Z\",\n      \"updated-at\": \"2018-09-11T18:21:21.784Z\"\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n      },\n      \"policies\": {\n        \"data\": [\n          { \"id\": \"pol-u3S5p2Uwk21keu1s\", \"type\": \"policies\" }\n        ]\n      },\n      \"workspaces\": {\n        \"data\": [\n          { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\"\n    }\n  }\n}\n```\n\n## List policy sets\n\n`GET \/organizations\/:organization_name\/policy-sets`\n\n| Parameter            | Description                               |\n| -------------------- | ----------------------------------------- |\n| `:organization_name` | The organization to list policy sets for. 
|\n\n| Status  | Response                                      | Reason                                                         |\n| ------- | --------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"policy-sets\"`) | Request was successful                                         |\n| [404][] | [JSON API error object][]                     | Organization not found, or user unauthorized to perform action |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter           | Description                                                                                                                                                                                                                                                                                                                                      |\n| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `filter[versioned]` | **Optional.** Allows filtering policy sets based on whether they are versioned (VCS-managed or API-managed), or use individual policy relationships. Accepts a boolean true\/false value. A `true` value returns versioned sets, and a `false` value returns sets with individual policy relationships. If omitted, all policy sets are returned. |\n| `filter[kind]`      | **Optional.** If specified, restricts results to those with the matching policy kind value. 
Valid values are `sentinel` and `opa`.                                                                                                                                                                                                               |\n| `include`           | **Optional.** Enables you to include related resource data. Value must be a comma-separated list containing one or more of `projects`, `workspaces`, `workspace-exclusions`, `policies`, `newest_version`, or `current_version`. See the [relationships section](#relationships) for details.                                                     |\n| `page[number]`      | **Optional.** If omitted, the endpoint will return the first page.                                                                                                                                                                                                                                                                               |\n| `page[size]`        | **Optional.** If omitted, the endpoint will return 20 policy sets per page.                                                                                                                                                                                                                                                                      |\n| `search[name]`      | **Optional.** Allows searching the organization's policy sets by name.                                                                                                                                                                                                                                                                           
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/policy-sets\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\":\"polset-3yVQZvHzf5j3WRJ1\",\n      \"type\":\"policy-sets\",\n      \"attributes\": {\n        \"name\": \"production\",\n        \"description\": \"This set contains policies that should be checked on all production infrastructure workspaces.\",\n        \"kind\": \"sentinel\",\n        \"global\": false,\n        \"agent-enabled\": true,\n        \"policy-tool-version\": \"0.23.0\",\n        \"overridable\": true,\n        \"workspace-count\": 1,\n        \"policies-path\": \"\/policy-sets\/foo\",\n        \"versioned\": true,\n        \"vcs-repo\": {\n          \"branch\": \"main\",\n          \"identifier\": \"hashicorp\/my-policy-sets\",\n          \"ingress-submodules\": false,\n          \"oauth-token-id\": \"ot-7Fr9d83jWsi8u23A\"\n        },\n        \"created-at\": \"2018-09-11T18:21:21.784Z\",\n        \"updated-at\": \"2018-09-11T18:21:21.784Z\"\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n        },\n        \"projects\": {\n          \"data\": [\n            { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n          ]\n        },\n        \"workspaces\": {\n          \"data\": [\n            { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n          ]\n        },\n        \"workspace-exclusions\": {\n          \"data\": [\n            { \"id\": \"ws-FVVvzCDaykN1oHiw\", \"type\": \"workspaces\" }\n          ]\n        }\n      },\n      \"links\": {\n        \"self\":\"\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\"\n      }\n    }\n  ]\n}\n```\n\n### Sample response with individual policy relationships\n\n```json\n{\n  \"data\": [\n    {\n      
\"id\":\"polset-3yVQZvHzf5j3WRJ1\",\n      \"type\":\"policy-sets\",\n      \"attributes\": {\n        \"name\": \"production\",\n        \"description\": \"This set contains policies that should be checked on all production infrastructure workspaces.\",\n        \"kind\": \"sentinel\",\n        \"global\": false,\n        \"agent-enabled\": true,\n        \"policy-tool-version\": \"0.23.0\",\n        \"overridable\": true,\n        \"policy-count\": 1,\n        \"workspace-count\": 1,\n        \"versioned\": false,\n        \"created-at\": \"2018-09-11T18:21:21.784Z\",\n        \"updated-at\": \"2018-09-11T18:21:21.784Z\"\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n        },\n        \"policies\": {\n          \"data\": [\n            { \"id\": \"pol-u3S5p2Uwk21keu1s\", \"type\": \"policies\" }\n          ]\n        },\n        \"workspaces\": {\n          \"data\": [\n            { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n          ]\n        }\n      },\n      \"links\": {\n        \"self\":\"\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\"\n      }\n    },\n  ]\n}\n```\n\n## Show a policy set\n\n`GET \/policy-sets\/:id`\n\n| Parameter | Description                                                                        |\n| --------- | ---------------------------------------------------------------------------------- |\n| `:id`     | The ID of the policy set to show. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. 
|\n\n| Status  | Response                                      | Reason                                                      |\n| ------- | --------------------------------------------- | ----------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"policy-sets\"`) | The request was successful                                  |\n| [404][] | [JSON API error object][]                     | Policy set not found or user unauthorized to perform action |\n\n| Parameter | Description                                                                                                                                                                                                                                                                                   |\n| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `include` | **Optional.**  Enables you to include related resource data. Value must be a comma-separated list containing one or more of `projects`, `workspaces`, `workspace-exclusions`, `policies`, `newest_version`, or `current_version`. See the [relationships section](#relationships) for details. 
|\n\n### Sample Request\n\n```shell\ncurl --request GET \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1?include=current_version\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"polset-3yVQZvHzf5j3WRJ1\",\n    \"type\":\"policy-sets\",\n    \"attributes\": {\n      \"name\": \"production\",\n      \"description\": \"This set contains policies that should be checked on all production infrastructure workspaces.\",\n      \"kind\": \"sentinel\",\n      \"global\": false,\n      \"agent-enabled\": true,\n      \"policy-tool-version\": \"0.23.0\",\n      \"overridable\": true,\n      \"policy-count\": 0,\n      \"workspace-count\": 1,\n      \"policies-path\": \"\/policy-sets\/foo\",\n      \"versioned\": true,\n      \"vcs-repo\": {\n        \"branch\": \"main\",\n        \"identifier\": \"hashicorp\/my-policy-sets\",\n        \"ingress-submodules\": false,\n        \"oauth-token-id\": \"ot-7Fr9d83jWsi8u23A\"\n      },\n      \"created-at\": \"2018-09-11T18:21:21.784Z\",\n      \"updated-at\": \"2018-09-11T18:21:21.784Z\"\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n      },\n      \"current-version\": {\n        \"data\": {\n          \"id\": \"polsetver-m4yhbUBCgyDVpDL4\",\n          \"type\": \"policy-set-versions\"\n        }\n      },\n      \"projects\": {\n        \"data\": [\n          { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n        ]\n      },\n      \"workspaces\": {\n        \"data\": [\n          { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n        ]\n      },\n      \"workspace-exclusions\": {\n        \"data\": [\n          { \"id\": \"ws-FVVvzCDaykN1oHiw\", \"type\": \"workspaces\" }\n        ]\n      }\n    },\n    \"links\": {\n      
\"self\":\"\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\"\n    }\n  },\n  \"included\": [\n    {\n      \"id\": \"polsetver-m4yhbUBCgyDVpDL4\",\n      \"type\": \"policy-set-versions\",\n      \"attributes\": {\n        \"source\": \"github\",\n        \"status\": \"ready\",\n        \"status-timestamps\": {\n          \"ready-at\": \"2019-06-21T21:29:48+00:00\",\n          \"ingressing-at\": \"2019-06-21T21:29:47+00:00\"\n        },\n        \"error\": null,\n        \"ingress-attributes\": {\n          \"commit-sha\": \"8766a423cb902887deb0f7da4d9beaed432984bb\",\n          \"commit-url\": \"https:\/\/github.com\/hashicorp\/my-policy-sets\/commit\/8766a423cb902887deb0f7da4d9beaed432984bb\",\n          \"identifier\": \"hashicorp\/my-policy-sets\"\n        },\n        \"created-at\": \"2019-06-21T21:29:47.792Z\",\n        \"updated-at\": \"2019-06-21T21:29:48.887Z\"\n      },\n      \"relationships\": {\n        \"policy-set\": {\n          \"data\": {\n            \"id\": \"polset-a2mJwtmKygrA11dh\",\n            \"type\": \"policy-sets\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/policy-set-versions\/polsetver-E4S7jz8HMjBienLS\"\n      }\n    }\n  ]\n}\n```\n\n### Sample response with individual policy relationships\n\n```json\n{\n  \"data\": {\n    \"id\":\"polset-3yVQZvHzf5j3WRJ1\",\n    \"type\":\"policy-sets\",\n    \"attributes\": {\n      \"name\": \"production\",\n      \"description\": \"This set contains policies that should be checked on all production infrastructure workspaces.\",\n      \"kind\": \"sentinel\",\n      \"global\": false,\n      \"agent-enabled\": true,\n      \"policy-tool-version\": \"0.23.0\",\n      \"overridable\": true,\n      \"policy-count\": 1,\n      \"workspace-count\": 1,\n      \"versioned\": false,\n      \"created-at\": \"2018-09-11T18:21:21.784Z\",\n      \"updated-at\": \"2018-09-11T18:21:21.784Z\",\n    },\n    \"relationships\": {\n      \"organization\": {\n        
\"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n      },\n      \"policies\": {\n        \"data\": [\n          { \"id\": \"pol-u3S5p2Uwk21keu1s\", \"type\": \"policies\" }\n        ]\n      },\n      \"workspaces\": {\n        \"data\": [\n          { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\"\n    }\n  }\n}\n```\n\n-> **Note:** The `data.relationships.projects` and `data.relationships.workspaces` refer to the projects and workspaces attached to the policy set. HCP Terraform omits these keys for policy sets marked as global, which are implicitly related to all of the organization's workspaces.\n\n## Update a policy set\n\n`PATCH \/policy-sets\/:id`\n\n| Parameter | Description                                                                          |\n| --------- | ------------------------------------------------------------------------------------ |\n| `:id`     | The ID of the policy set to update. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |\n\n| Status  | Response                                      | Reason                                                         |\n| ------- | --------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"policy-sets\"`) | The request was successful                                     |\n| [404][] | [JSON API error object][]                     | Policy set not found or user unauthorized to perform action    |\n| [422][] | [JSON API error object][]                     | Malformed request body (missing attributes, wrong types, etc.) 
|\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                             | Type           | Default          | Description                                                                                                                                                                                                                                                                                                  |\n| ---------------------------------------------------- | -------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `data.type`                                          | string         |                  | Must be `\"policy-sets\"`.                                                                                                                                                                                                                                                                                     |\n| `data.attributes.name`                               | string         | (previous value) | The name of the policy set. Can include letters, numbers, `-`, and `_`.                                                                                                                                                                                                                                      |\n| `data.attributes.description`                        | string         | (previous value) | A description of the set's purpose. This field supports markdown and appears in the HCP Terraform UI.                                         
    |\n| `data.attributes.global`                             | boolean        | (previous value) | Whether or not the policies in this set should be checked in all of the organization's workspaces or only in workspaces directly attached to the set.                                                                                      |\n| `data.attributes.vcs-repo`                           | object         | (previous value) | VCS repository information. When present, HCP Terraform sources the policies and configuration from the specified VCS repository instead of using definitions from HCP Terraform. Note that this option and `policies` relationships are mutually exclusive and may not be used simultaneously.                    |\n| `data.attributes.vcs-repo.branch`                    | string         | (previous value) | The branch of the VCS repo. When empty, HCP Terraform uses the VCS provider's default branch value.                                                                                      |\n| `data.attributes.vcs-repo.identifier`                | string         | (previous value) | The VCS repository identifier in the following format: `<namespace>\/<repo>`. An example identifier in GitHub is `hashicorp\/my-policy-set`. The format for Azure DevOps is `<org>\/<project>\/_git\/<repo>`.                                                                                       |\n| `data.attributes.vcs-repo.oauth-token-id`            | string         | (previous value) | The OAuth token ID to use to connect to the VCS host.    
    |\n| `data.attributes.vcs-repo.ingress-submodules`        | boolean        | (previous value) | Determines whether HCP Terraform instantiates repository submodules during the clone operation.                                                                                      |\n| `data.attributes.policies-path`                      | string         | (previous value) | The subdirectory of the attached VCS repository that contains the policies for this policy set. HCP Terraform ignores files and directories outside of the sub-path. Changes to unrelated files do not update the policy set. You can only enable this option when a VCS repository is present. |\n| `data.relationships.projects`                        | array\\[object] | (previous value) | An array of references to projects that the policy set should be assigned to. Sending an empty array clears all project assignments. You can only specify this attribute when `data.attributes.global` is `false`.                                                                                           |\n| `data.relationships.workspaces`                      | array\\[object] | (previous value) | An array of references to workspaces that the policy set should be assigned to. Sending an empty array clears all workspace assignments. You can only specify this attribute when `data.attributes.global` is `false`.                                                                                       |\n| `data.relationships.workspace-exclusions`            | array\\[object] | (previous value) | An array of references to workspaces on which HCP Terraform will not enforce this policy set. Sending an empty array clears all exclusion assignments.    
    | \n| `data.attributes.agent-enabled`  **(beta)**          | boolean        | `false`          | Only valid for `sentinel` policy sets. Whether this policy set should run as a policy evaluation in the HCP Terraform agent.                                                                                      |\n| `data.attributes.policy-tool-version`  **(beta)**    | string         | `latest`         | The version of the tool that HCP Terraform uses to evaluate policies. You can only set a policy tool version for `sentinel` policy sets if `agent-enabled` is `true`.                                                                                      |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"policy-sets\",\n    \"attributes\": {\n      \"name\": \"workspace-scoped-policy-set\",\n      \"description\": \"Policies added to this policy set will be enforced on specified workspaces\",\n      \"global\": false,\n      \"agent-enabled\": true,\n      \"policy-tool-version\": \"0.23.0\"\n    },\n    \"relationships\": {\n      \"projects\": {\n        \"data\": [\n          { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n        ]\n      },\n      \"workspaces\": {\n        \"data\": [\n          { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n        ]\n      },\n      \"workspace-exclusions\": {\n        \"data\": [\n          { \"id\": \"ws-FVVvzCDaykN1oHiw\", \"type\": \"workspaces\" }\n        ]\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  
https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"polset-3yVQZvHzf5j3WRJ1\",\n    \"type\":\"policy-sets\",\n    \"attributes\": {\n      \"name\": \"workspace-scoped-policy-set\",\n      \"description\": \"Policies added to this policy set will be enforced on specified workspaces\",\n      \"global\": false,\n      \"kind\": \"sentinel\",\n      \"agent-enabled\": true,\n      \"policy-tool-version\": \"0.23.0\",\n      \"overridable\": true,\n      \"policy-count\": 1,\n      \"workspace-count\": 1,\n      \"versioned\": false,\n      \"created-at\": \"2018-09-11T18:21:21.784Z\",\n      \"updated-at\": \"2018-09-11T18:21:21.784Z\"\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": { \"id\": \"my-organization\", \"type\": \"organizations\" }\n      },\n      \"policies\": {\n        \"data\": [\n          { \"id\": \"pol-u3S5p2Uwk21keu1s\", \"type\": \"policies\" }\n        ]\n      },\n      \"projects\": {\n        \"data\": [\n          { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }\n        ]\n      },\n      \"workspaces\": {\n        \"data\": [\n          { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n        ]\n      },\n      \"workspace-exclusions\": {\n        \"data\": [\n          { \"id\": \"ws-FVVvzCDaykN1oHiw\", \"type\": \"workspaces\" }\n        ]\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\"\n    }\n  }\n}\n```\n\n## Add policies to the policy set\n\n`POST \/policy-sets\/:id\/relationships\/policies`\n\n| Parameter | Description                                                                                   |\n| --------- | --------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the policy set to add policies to. 
Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |\n\n| Status  | Response                  | Reason                                                                     |\n| ------- | ------------------------- | -------------------------------------------------------------------------- |\n| [204][] | No Content                | The request was successful                                                 |\n| [404][] | [JSON API error object][] | Policy set not found or user unauthorized to perform action                |\n| [422][] | [JSON API error object][] | Malformed request body (one or more policies not found, wrong types, etc.) |\n\n~> **Note:** This endpoint may only be used when there is no VCS repository associated with the policy set.\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type           | Default | Description                                                                                                                                                                                                                                                  |\n| -------- | -------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `data[]` | array\\[object] |         | A list of resource identifier objects that defines which policies will be added to the set. These objects must contain `id` and `type` properties, and the `type` property must be `policies` (e.g. `{ \"id\": \"pol-u3S5p2Uwk21keu1s\", \"type\": \"policies\" }`). 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    { \"id\": \"pol-u3S5p2Uwk21keu1s\", \"type\": \"policies\" },\n    { \"id\": \"pol-2HRvNs49EWPjDqT1\", \"type\": \"policies\" }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\/relationships\/policies\n```\n\n## Attach a policy set to projects\n\n`POST \/policy-sets\/:id\/relationships\/projects`\n\n| Parameter | Description                                                                                      |\n| --------- | ------------------------------------------------------------------------------------------------ |\n| `:id`     | The ID of the policy set to attach to projects. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |\n\n-> **Note:** You cannot attach global policy sets to individual projects.\n\n| Status  | Response                  | Reason                                                                     |\n| ------- | ------------------------- | -------------------------------------------------------------------------- |\n| [204][] | No Content                | The request was successful                                                 |\n| [404][] | [JSON API error object][] | Policy set not found or user unauthorized to perform action                |\n| [422][] | [JSON API error object][] | Malformed request body (one or more projects not found, wrong types, etc.) 
|\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type           | Default | Description                                                                                                                                                                                                                                                          |\n| -------- | -------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data[]` | array\\[object] |         | A list of resource identifier objects that defines which projects to attach the policy set to. These objects must contain `id` and `type` properties, and the `type` property must be `projects` (e.g. `{ \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }`). |\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" },\n    { \"id\": \"prj-2HRvNs49EWPjDqT1\", \"type\": \"projects\" }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\/relationships\/projects\n```\n\n## Attach a policy set to workspaces\n\n`POST \/policy-sets\/:id\/relationships\/workspaces`\n\n| Parameter | Description                                                                                        |\n| --------- | -------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the policy set to attach to workspaces. 
Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |\n\n-> **Note:** Policy sets marked as global cannot be attached to individual workspaces.\n\n| Status  | Response                  | Reason                                                                       |\n| ------- | ------------------------- | ---------------------------------------------------------------------------- |\n| [204][] | No Content                | The request was successful                                                   |\n| [404][] | [JSON API error object][] | Policy set not found or user unauthorized to perform action                  |\n| [422][] | [JSON API error object][] | Malformed request body (one or more workspaces not found, wrong types, etc.) |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type           | Default | Description                                                                                                                                                                                                                                                               |\n| -------- | -------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data[]` | array\\[object] |         | A list of resource identifier objects that defines the workspaces the policy set will be attached to. These objects must contain `id` and `type` properties, and the `type` property must be `workspaces` (e.g. `{ \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }`). 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    { \"id\": \"ws-u3S5p2Uwk21keu1s\", \"type\": \"workspaces\" },\n    { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\/relationships\/workspaces\n```\n\n## Exclude a workspace from a policy set\n\n`POST \/policy-sets\/:id\/relationships\/workspace-exclusions`\n\n| Parameter | Description                                                                                                                                       |\n| --------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of a policy set that you want HCP Terraform to exclude from the workspaces you specify. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |\n\n| Status  | Response                  | Reason                                                                                |\n| ------- | ------------------------- | ------------------------------------------------------------------------------------- |\n| [204][] | No Content                | The request was successful                                                            |\n| [404][] | [JSON API error object][] | Policy set not found or user unauthorized to perform action                           |\n| [422][] | [JSON API error object][] | Malformed request body (one or more excluded workspaces not found, wrong types, etc.) 
|\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type           | Default | Description                                                                                                                                                                                                                                                                        |\n| -------- | -------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data[]` | array\\[object] |         | A list of resource identifier objects that defines which workspaces to exclude from the policy set. These objects must contain `id` and `type` properties, and the `type` property must be `workspaces` (e.g. `{ \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }`). 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    { \"id\": \"ws-u3S5p2Uwk21keu1s\", \"type\": \"workspaces\" },\n    { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\/relationships\/workspace-exclusions\n```\n\n## Remove policies from the policy set\n\n`DELETE \/policy-sets\/:id\/relationships\/policies`\n\n| Parameter | Description                                                                                        |\n| --------- | -------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the policy set to remove policies from. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |\n\n| Status  | Response                  | Reason                                                      |\n| ------- | ------------------------- | ----------------------------------------------------------- |\n| [204][] | No Content                | The request was successful                                  |\n| [404][] | [JSON API error object][] | Policy set not found or user unauthorized to perform action |\n| [422][] | [JSON API error object][] | Malformed request body (wrong types, etc.)                  
|\n\n~> **Note:** This endpoint may only be used when there is no VCS repository associated with the policy set.\n\n### Request Body\n\nThis DELETE endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type           | Default | Description                                                                                                                                                                                                                                                      |\n| -------- | -------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data[]` | array\\[object] |         | A list of resource identifier objects that defines which policies will be removed from the set. These objects must contain `id` and `type` properties, and the `type` property must be `policies` (e.g. `{ \"id\": \"pol-u3S5p2Uwk21keu1s\", \"type\": \"policies\" }`). 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    { \"id\": \"pol-u3S5p2Uwk21keu1s\", \"type\": \"policies\" },\n    { \"id\": \"pol-2HRvNs49EWPjDqT1\", \"type\": \"policies\" }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\/relationships\/policies\n```\n\n## Detach a policy set from projects\n\n`DELETE \/policy-sets\/:id\/relationships\/projects`\n\n| Parameter | Description |\n| --------- | ----------- |\n| `:id`     | The ID of the policy set you want to detach from the specified projects. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |\n\n-> **Note:** You cannot detach global policy sets from individual projects.\n\n| Status  | Response                  | Reason                                                                     |\n| ------- | ------------------------- | -------------------------------------------------------------------------- |\n| [204][] | No Content                | The request was successful                                                 |\n| [404][] | [JSON API error object][] | Policy set not found or user unauthorized to perform action                |\n| [422][] | [JSON API error object][] | Malformed request body (one or more projects not found, wrong types, etc.) 
|\n\n### Request Body\n\nThis DELETE endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type           | Default | Description                                                                                                                                                                                                                                                            |\n| -------- | -------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data[]` | array\\[object] |         | A list of resource identifier objects that defines the projects the policy set will be detached from. These objects must contain `id` and `type` properties, and the `type` property must be `projects`. For example, `{ \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" }`. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    { \"id\": \"prj-AwfuCJTkdai4xj9w\", \"type\": \"projects\" },\n    { \"id\": \"prj-2HRvNs49EWPjDqT1\", \"type\": \"projects\" }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\/relationships\/projects\n```\n\n## Detach the policy set from workspaces\n\n`DELETE \/policy-sets\/:id\/relationships\/workspaces`\n\n| Parameter | Description                                                                                          |\n| --------- | ---------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the policy set to detach from workspaces. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |\n\n-> **Note:** Policy sets marked as global cannot be detached from individual workspaces.\n\n| Status  | Response                  | Reason                                                      |\n| ------- | ------------------------- | ----------------------------------------------------------- |\n| [204][] | No Content                | The request was successful                                  |\n| [404][] | [JSON API error object][] | Policy set not found or user unauthorized to perform action |\n| [422][] | [JSON API error object][] | Malformed request body (wrong types, etc.)                  
|\n\n### Request Body\n\nThis DELETE endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type           | Default | Description                                                                                                                                                                                                                                                                                                                                                                                                                                         |\n| -------- | -------------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data[]` | array\\[object] |         | A list of resource identifier objects that defines which workspaces the policy set will be detached from. These objects must contain `id` and `type` properties, and the `type` property must be `workspaces` (e.g. `{ \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }`). Obtain workspace IDs from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    { \"id\": \"ws-u3S5p2Uwk21keu1s\", \"type\": \"workspaces\" },\n    { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\/relationships\/workspaces\n```\n\n## Reinclude a workspace to a policy set\n\n`DELETE \/policy-sets\/:id\/relationships\/workspace-exclusions`\n\n| Parameter | Description                                                                                                                                       |\n| --------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the policy set HCP Terraform should reinclude (enforce) on the specified workspaces. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |\n\n| Status  | Response                  | Reason                                                      |\n| ------- | ------------------------- | ----------------------------------------------------------- |\n| [204][] | No Content                | The request was successful                                  |\n| [404][] | [JSON API error object][] | Policy set not found or user unauthorized to perform action |\n| [422][] | [JSON API error object][] | Malformed request body (wrong types, etc.)                  
|\n\n### Request Body\n\nThis DELETE endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type           | Default | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |\n| -------- | -------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `data[]` | array\\[object] |         | A list of resource identifier objects that defines which workspaces HCP Terraform should reinclude (enforce) this policy set on. These objects must contain `id` and `type` properties, and the `type` property must be `workspaces` (e.g. `{ \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }`). Obtain workspace IDs from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    { \"id\": \"ws-u3S5p2Uwk21keu1s\", \"type\": \"workspaces\" },\n    { \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\/relationships\/workspace-exclusions\n```\n\n## Delete a policy set\n\n`DELETE \/policy-sets\/:id`\n\n| Parameter | Description                                                                          |\n| --------- | ------------------------------------------------------------------------------------ |\n| `:id`     | The ID of the policy set to delete. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |\n\n| Status  | Response                  | Reason                                                       |\n| ------- | ------------------------- | ------------------------------------------------------------ |\n| [204][] | No Content                | Successfully deleted the policy set                          |\n| [404][] | [JSON API error object][] | Policy set not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\n```\n\n## Create a policy set version\n\nFor versioned policy sets which have no VCS repository attached, versions of policy code may be uploaded directly to the API by creating a new policy set version and, in a subsequent request, uploading a tarball (tar.gz) of data to it.\n\n`POST \/policy-sets\/:id\/versions`\n\n| Parameter | Description                                           |\n| --------- | 
----------------------------------------------------- |\n| `:id`     | The ID of the policy set to create a new version for. |\n\n| Status  | Response                                              | Reason                                                      |\n| ------- | ----------------------------------------------------- | ----------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"policy-set-versions\"`) | The request was successful.                                 |\n| [404][] | [JSON API error object][]                             | Policy set not found or user unauthorized to perform action |\n| [422][] | [JSON API error object][]                             | The policy set does not support uploading versions.         |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-3yVQZvHzf5j3WRJ1\/versions\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"polsetver-cXciu9nQwmk9Cfrn\",\n    \"type\": \"policy-set-versions\",\n    \"attributes\": {\n      \"source\": \"tfe-api\",\n      \"status\": \"pending\",\n      \"status-timestamps\": {},\n      \"error\": null,\n      \"created-at\": \"2019-06-28T23:53:15.875Z\",\n      \"updated-at\": \"2019-06-28T23:53:15.875Z\"\n    },\n    \"relationships\": {\n      \"policy-set\": {\n        \"data\": {\n          \"id\": \"polset-ws1CZBzm2h5K6ZT5\",\n          \"type\": \"policy-sets\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/policy-set-versions\/polsetver-cXciu9nQwmk9Cfrn\",\n      \"upload\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox...\"\n    }\n  }\n}\n```\n\nThe `upload` link URL in the above response is valid for one hour after creation. 
Make a `PUT` request to this URL directly, sending the policy set contents in `tar.gz` format as the request body. Once uploaded successfully, you can request the [Show Policy Set](#show-a-policy-set) endpoint again to verify that the status has changed from `pending` to `ready`.\n\n## Upload policy set versions\n\n`PUT https:\/\/archivist.terraform.io\/v1\/object\/<UNIQUE OBJECT ID>`\n\nThe URL is provided in the `upload` attribute in the `policy-set-versions` resource.\n\n### Sample Request\n\nIn the example below, `policy-set.tar.gz` is the local filename of the policy set version file to upload.\n\n```shell\ncurl \\\n  --header \"Content-Type: application\/octet-stream\" \\\n  --request PUT \\\n  --data-binary @policy-set.tar.gz \\\n  https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox...\n```\n\n## Show a policy set version\n\n`GET \/policy-set-versions\/:id`\n\n| Parameter | Description                               |\n| --------- | ----------------------------------------- |\n| `:id`     | The ID of the policy set version to show. |\n\n| Status  | Response                                              | Reason                                                              |\n| ------- | ----------------------------------------------------- | ------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"policy-set-versions\"`) | The request was successful.                                         
|\n| [404][] | [JSON API error object][]                             | Policy set version not found or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-set-versions\/polsetver-cXciu9nQwmk9Cfrn\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"polsetver-cXciu9nQwmk9Cfrn\",\n    \"type\": \"policy-set-versions\",\n    \"attributes\": {\n      \"source\": \"tfe-api\",\n      \"status\": \"pending\",\n      \"status-timestamps\": {},\n      \"error\": null,\n      \"created-at\": \"2019-06-28T23:53:15.875Z\",\n      \"updated-at\": \"2019-06-28T23:53:15.875Z\"\n    },\n    \"relationships\": {\n      \"policy-set\": {\n        \"data\": {\n          \"id\": \"polset-ws1CZBzm2h5K6ZT5\",\n          \"type\": \"policy-sets\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/policy-set-versions\/polsetver-cXciu9nQwmk9Cfrn\",\n      \"upload\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox...\"\n    }\n  }\n}\n```\n\nThe `upload` link URL in the above response is valid for one hour after the `created_at` timestamp of the policy set version. Make a `PUT` request to this URL directly, sending the policy set contents in `tar.gz` format as the request body. Once uploaded successfully, you can request the [Show Policy Set Version](#show-a-policy-set-version) endpoint again to verify that the status has changed from `pending` to `ready`.\n\n## Available related resources\n\nThe GET endpoints above can optionally return related resources for policy sets, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). 
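Compound documents requested with `include` carry the related records in a top-level `included` array, which clients resolve back to relationships by matching `type` and `id`. A minimal sketch of that client-side resolution, using an abbreviated, hypothetical response (not verbatim API output):

```python
# Abbreviated, hypothetical compound document, e.g. from
# GET /policy-sets/:id?include=workspaces
response = {
    "data": {
        "id": "polset-3yVQZvHzf5j3WRJ1",
        "type": "policy-sets",
        "relationships": {
            "workspaces": {
                "data": [{"id": "ws-u3S5p2Uwk21keu1s", "type": "workspaces"}]
            }
        },
    },
    "included": [
        {
            "id": "ws-u3S5p2Uwk21keu1s",
            "type": "workspaces",
            "attributes": {"name": "app-dev"},  # hypothetical workspace name
        }
    ],
}

# Index included resources by (type, id) for O(1) lookup.
included = {(r["type"], r["id"]): r for r in response.get("included", [])}

# Dereference the policy set's workspace relationship into full records.
workspace_refs = response["data"]["relationships"]["workspaces"]["data"]
names = [included[(ref["type"], ref["id"])]["attributes"]["name"]
         for ref in workspace_refs]
print(names)  # -> ['app-dev']
```

The (type, id) index is worth the extra pass because `included` is a flat, unordered array; relationships only reference resources by identifier, never inline.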
The following resource types are available:\n\n| Resource Name          | Description                                                                                                                                                                                    |\n| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `current_version`      | The most recent **successful** policy set version.                                                                                                                                             |\n| `newest_version`       | The most recently created policy set version, regardless of status. Note that this relationship may include an errored and unusable version, and is intended to allow checking for VCS errors. |\n| `policies`             | Individually managed policies which are associated with the policy set.                                                                                                                        |\n| `projects`             | The projects this policy set applies to.                                                                                                                                                       |\n| `workspaces`           | The workspaces this policy set applies to.                                                                                                                                                     |\n| `workspace-exclusions` | The workspaces excluded from this policy set's enforcement.                                                                                                                                    
|\n\n\nThe following resource types may be included for policy set versions:\n\n| Resource Name | Description                                                      |\n| ------------- | ---------------------------------------------------------------- |\n| `policy_set`  | The policy set associated with the specified policy set version. |\n\n## Relationships\n\nThe following relationships may be present in various responses for policy sets:\n\n| Resource Name          | Description                                                                                                                                                                                    |\n| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `current-version`      | The most recent **successful** policy set version.                                                                                                                                             |\n| `newest-version`       | The most recently created policy set version, regardless of status. Note that this relationship may include an errored and unusable version, and is intended to allow checking for VCS errors. |\n| `organization`         | The organization associated with the specified policy set.                                                                                                                                     |\n| `policies`             | Individually managed policies which are associated with the policy set.                                                                                                                        |\n| `projects`             | The projects this policy set applies to.                                                                                                                                                       
|\n| `workspaces`           | The workspaces this policy set applies to.                  |\n| `workspace-exclusions` | The workspaces excluded from this policy set's enforcement. |\n\nThe following relationships may be present in various responses for policy set versions:\n\n| Resource Name | Description                                                      |\n| ------------- | ---------------------------------------------------------------- |\n| `policy-set`  | The policy set associated with the specified policy set version. |
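The policy set version flow described above spans two requests: the `POST .../versions` response carries a short-lived `upload` link, the tarball is then `PUT` to that link, and the version's `status` attribute moves from `pending` to `ready` once ingestion completes. A minimal sketch of the client-side bookkeeping, mirroring the sample response shape above (the object ID in the upload URL is a hypothetical placeholder):

```python
# Shape mirrors the create/show policy-set-version sample responses above;
# the upload object ID is a made-up placeholder, not a real Archivist URL.
create_response = {
    "data": {
        "id": "polsetver-cXciu9nQwmk9Cfrn",
        "type": "policy-set-versions",
        "attributes": {"status": "pending"},
        "links": {
            "self": "/api/v2/policy-set-versions/polsetver-cXciu9nQwmk9Cfrn",
            "upload": "https://archivist.terraform.io/v1/object/EXAMPLE-OBJECT-ID",
        },
    }
}

def upload_target(response):
    """Return the one-hour upload URL from a create/show version response."""
    return response["data"]["links"]["upload"]

def is_ready(response):
    """True once a show-version response reports the tarball was ingested."""
    return response["data"]["attributes"]["status"] == "ready"

assert upload_target(create_response).startswith("https://archivist.terraform.io/")
assert not is_ready(create_response)  # still "pending" until the PUT is processed
```

Because the `upload` link expires an hour after creation, a client that fails the `PUT` should create a fresh version rather than retry the stale URL.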
                   data attributes vcs repo ingress submodules            boolean           false       Whether HCP Terraform should instantiate repository submodules when retrieving the policy set                                                                                                                                                                                                                                                                                                                                                                                                                                                                       data attributes policies path                          string            null        The VCS repository subdirectory that contains the policies for this policy set  HCP Terraform ignores files and directories outside of this sub path and does not update the policy set when those files change  This attribute is only valid when you specify a VCS repository for the policy set                                                                                                                                                                                                                                                                  data relationships projects data                       array  object                 A list of resource identifier objects that defines which projects are associated with the policy set  These objects must contain  id  and  type  properties  and the  type  property must be  projects   For example      id    prj AwfuCJTkdai4xj9w    type    projects      You can only specify this attribute when  data attributes global  is  false                                                                                                                                                                                                             data relationships workspaces data                     array  object            
     A list of resource identifier objects that defines which workspaces are associated with the policy set  These objects must contain  id  and  type  properties  and the  type  property must be  workspaces   For example      id    ws 2HRvNs49EWPjDqT1    type    workspaces      Obtain workspace IDs from the  workspace s settings page   terraform enterprise workspaces settings  or the  Show Workspace endpoint   terraform enterprise api docs workspaces show workspace   You can only specify this attribute when  data attributes global  is  false       data relationships workspace exclusions data           array  object                 A list of resource identifier objects specifying which workspaces HCP Terraform excludes from a policy set s enforcement  These objects must contain  id  and  type  properties  and the  type  property must be  workspaces   For example      id    ws FVVvzCDaykN1oHiw    type    workspaces                                                                                                                                                                                                                                                                     data relationships policies data                       array  object                 A list of resource identifier objects that defines which policies are members of the policy set  These objects must contain  id  and  type  properties  and the  type  property must be  policies   For example      id    pol u3S5p2Uwk21keu1s    type    policies                                                                                                                                                                                                                                                                                                   data attributes agent enabled      beta             boolean           false       Only valid for  sentinel  policy sets  Whether this policy set should run as a policy 
evaluation in the HCP Terraform agent                                                                                                                                                                                                                                                                                                                                                                                                                                           data attributes policy tool version      beta       string            latest     The version of the tool that HCP Terraform uses to evaluate policies  You can only set a policy tool version for  sentinel  policy sets if  agent enabled  is  true                                                                                                                                                                                                                                                                                                                                                                                                  Sample Payload     json      data          type    policy sets        attributes            name    production          description    This set contains policies that should be checked on all production infrastructure workspaces           global   false         kind    sentinel          agent enabled   true         policy tool version    0 23 0          overridable   true         policies path     policy sets foo          vcs repo              branch    main            identifier    hashicorp my policy sets            ingress submodules   false           oauth token id    ot 7Fr9d83jWsi8u23A                      relationships            projects              data                  id    prj AwfuCJTkdai4xj9w    type    projects                              workspaces              data                  id    ws 2HRvNs49EWPjDqT1    type    workspaces                              workspace 
exclusions              data                  id    ws FVVvzCDaykN1oHiw    type    workspaces                                           Sample payload with individual policy relationships     json      data          type    policy sets        attributes            name    production          description    This set contains policies that should be checked on all production infrastructure workspaces           kind    sentinel          global   false         agent enabled   true         policy tool version    0 23 0          overridable   true             relationships            policies              data                  id    pol u3S5p2Uwk21keu1s    type    policies                              workspaces              data                  id    ws 2HRvNs49EWPjDqT1    type    workspaces                                           Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 organizations my organization policy sets          Sample Response     json      data          id   polset 3yVQZvHzf5j3WRJ1        type   policy sets        attributes            name    production          description    This set contains policies that should be checked on all production infrastructure workspaces           kind    sentinel          global   false         agent enabled   true         policy tool version    0 23 0          overridable   true         workspace count   1         policies path     policy sets foo          versioned   true         vcs repo              branch    main            identifier    hashicorp my policy sets            ingress submodules   false           oauth token id    ot 7Fr9d83jWsi8u23A                  created at    2018 09 11T18 21 21 784Z          updated at    2018 09 11T18 21 21 784Z              relationships            organization              data      id    my organization    type    
organizations                    projects              data                  id    prj AwfuCJTkdai4xj9w    type    projects                                    workspaces              data                  id    ws 2HRvNs49EWPjDqT1    type    workspaces                              workspace exclusions              data                  id    ws FVVvzCDaykN1oHiw    type    workspaces                                   links            self    api v2 policy sets polset 3yVQZvHzf5j3WRJ1                       Sample response with individual policy relationships     json      data          id   polset 3yVQZvHzf5j3WRJ1        type   policy sets        attributes            name    production          description    This set contains policies that should be checked on all production infrastructure workspaces           kind    sentinel          global   false         agent enabled   true         policy tool version    0 23 0          overridable   true         policy count   1         workspace count   1         versioned   false         created at    2018 09 11T18 21 21 784Z          updated at    2018 09 11T18 21 21 784Z              relationships            organization              data      id    my organization    type    organizations                    policies              data                  id    pol u3S5p2Uwk21keu1s    type    policies                              workspaces              data                  id    ws 2HRvNs49EWPjDqT1    type    workspaces                                  links            self    api v2 policy sets polset 3yVQZvHzf5j3WRJ1                      List policy sets   GET  organizations  organization name policy sets     Parameter              Description                                                                                                          organization name    The organization to list policy sets for       Status    Response                                        Reason                                               
                                                                                                                                            200       JSON API document      type   policy sets      Request was successful                                              404       JSON API error object                          Organization not found  or user unauthorized to perform action        Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter             Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      filter versioned       Optional    Allows filtering policy sets based on whether they are versioned  VCS managed or API managed   or use individual policy relationships  Accepts a boolean true false value  A  true  value returns versioned sets  and a  false  value returns sets with individual policy relationships  If omitted  all policy sets are returned       filter kind            Optional    If specified  restricts results to those with the matching policy kind value  Valid values are  sentinel  and  opa                                                                                                                       
| `include` | **Optional.** Enables you to include related resource data. Value must be a comma-separated list containing one or more of `projects`, `workspaces`, `workspace_exclusions`, `policies`, `newest_version`, or `current_version`. See the [relationships section](#relationships) for details. |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]` | **Optional.** If omitted, the endpoint will return 20 policy sets per page. |
| `search[name]` | **Optional.** Allows searching the organization's policy sets by name. |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/organizations/my-organization/policy-sets
```

### Sample Response

```json
{
  "data": [
    {
      "id": "polset-3yVQZvHzf5j3WRJ1",
      "type": "policy-sets",
      "attributes": {
        "name": "production",
        "description": "This set contains policies that should be checked on all production infrastructure workspaces.",
        "kind": "sentinel",
        "global": false,
        "agent-enabled": true,
        "policy-tool-version": "0.23.0",
        "overridable": true,
        "workspace-count": 1,
        "policies-path": "/policy-sets/foo",
        "versioned": true,
        "vcs-repo": {
          "branch": "main",
          "identifier": "hashicorp/my-policy-sets",
          "ingress-submodules": false,
          "oauth-token-id": "ot-7Fr9d83jWsi8u23A"
        },
        "created-at": "2018-09-11T18:21:21.784Z",
        "updated-at": "2018-09-11T18:21:21.784Z"
      },
      "relationships": {
        "organization": {
          "data": { "id": "my-organization", "type": "organizations" }
        },
        "projects": {
          "data": [
            { "id": "prj-AwfuCJTkdai4xj9w", "type": "projects" }
          ]
        },
        "workspaces": {
          "data": [
            { "id": "ws-2HRvNs49EWPjDqT1", "type": "workspaces" }
          ]
        },
        "workspace-exclusions": {
          "data": [
            { "id": "ws-FVVvzCDaykN1oHiw", "type": "workspaces" }
          ]
        }
      },
      "links": {
        "self": "/api/v2/policy-sets/polset-3yVQZvHzf5j3WRJ1"
      }
    }
  ]
}
```

### Sample response with individual policy relationships

```json
{
  "data": [
    {
      "id": "polset-3yVQZvHzf5j3WRJ1",
      "type": "policy-sets",
      "attributes": {
        "name": "production",
        "description": "This set contains policies that should be checked on all production infrastructure workspaces.",
        "kind": "sentinel",
        "global": false,
        "agent-enabled": true,
        "policy-tool-version": "0.23.0",
        "overridable": true,
        "policy-count": 1,
        "workspace-count": 1,
        "versioned": false,
        "created-at": "2018-09-11T18:21:21.784Z",
        "updated-at": "2018-09-11T18:21:21.784Z"
      },
      "relationships": {
        "organization": {
          "data": { "id": "my-organization", "type": "organizations" }
        },
        "policies": {
          "data": [
            { "id": "pol-u3S5p2Uwk21keu1s", "type": "policies" }
          ]
        },
        "workspaces": {
          "data": [
            { "id": "ws-2HRvNs49EWPjDqT1", "type": "workspaces" }
          ]
        }
      },
      "links": {
        "self": "/api/v2/policy-sets/polset-3yVQZvHzf5j3WRJ1"
      }
    }
  ]
}
```

## Show a policy set

`GET /policy-sets/:id`

| Parameter | Description |
|---|---|
| `:id` | The ID of the policy set to show. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |

| Status | Response | Reason |
|---|---|---|
| 200 | JSON API document (`type: "policy-sets"`) | The request was successful. |
| 404 | JSON API error object | Policy set not found or user unauthorized to perform action. |

| Parameter | Description |
|---|---|
| `include` | **Optional.** Enables you to include related resource data. Value must be a comma-separated list containing one or more of `projects`, `workspaces`, `workspace_exclusions`, `policies`, `newest_version`, or `current_version`. See the [relationships section](#relationships) for details. |

### Sample Request

```shell
curl --request GET \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/policy-sets/polset-3yVQZvHzf5j3WRJ1?include=current_version"
```

### Sample Response

```json
{
  "data": {
    "id": "polset-3yVQZvHzf5j3WRJ1",
    "type": "policy-sets",
    "attributes": {
      "name": "production",
      "description": "This set contains policies that should be checked on all production infrastructure workspaces.",
      "kind": "sentinel",
      "global": false,
      "agent-enabled": true,
      "policy-tool-version": "0.23.0",
      "overridable": true,
      "policy-count": 0,
      "workspace-count": 1,
      "policies-path": "/policy-sets/foo",
      "versioned": true,
      "vcs-repo": {
        "branch": "main",
        "identifier": "hashicorp/my-policy-sets",
        "ingress-submodules": false,
        "oauth-token-id": "ot-7Fr9d83jWsi8u23A"
      },
      "created-at": "2018-09-11T18:21:21.784Z",
      "updated-at": "2018-09-11T18:21:21.784Z"
    },
    "relationships": {
      "organization": {
        "data": { "id": "my-organization", "type": "organizations" }
      },
      "current-version": {
        "data": {
          "id": "polsetver-m4yhbUBCgyDVpDL4",
          "type": "policy-set-versions"
        }
      },
      "projects": {
        "data": [
          { "id": "prj-AwfuCJTkdai4xj9w", "type": "projects" }
        ]
      },
      "workspaces": {
        "data": [
          { "id": "ws-2HRvNs49EWPjDqT1", "type": "workspaces" }
        ]
      },
      "workspace-exclusions": {
        "data": [
          { "id": "ws-FVVvzCDaykN1oHiw", "type": "workspaces" }
        ]
      }
    },
    "links": {
      "self": "/api/v2/policy-sets/polset-3yVQZvHzf5j3WRJ1"
    }
  },
  "included": [
    {
      "id": "polsetver-m4yhbUBCgyDVpDL4",
      "type": "policy-set-versions",
      "attributes": {
        "source": "github",
        "status": "ready",
        "status-timestamps": {
          "ready-at": "2019-06-21T21:29:48+00:00",
          "ingressing-at": "2019-06-21T21:29:47+00:00"
        },
        "error": null,
        "ingress-attributes": {
          "commit-sha": "8766a423cb902887deb0f7da4d9beaed432984bb",
          "commit-url": "https://github.com/hashicorp/my-policy-sets/commit/8766a423cb902887deb0f7da4d9beaed432984bb",
          "identifier": "hashicorp/my-policy-sets"
        },
        "created-at": "2019-06-21T21:29:47.792Z",
        "updated-at": "2019-06-21T21:29:48.887Z"
      },
      "relationships": {
        "policy-set": {
          "data": {
            "id": "polset-a2mJwtmKygrA11dh",
            "type": "policy-sets"
          }
        }
      },
      "links": {
        "self": "/api/v2/policy-set-versions/polsetver-E4S7jz8HMjBienLS"
      }
    }
  ]
}
```

### Sample response with individual policy relationships

```json
{
  "data": {
    "id": "polset-3yVQZvHzf5j3WRJ1",
    "type": "policy-sets",
    "attributes": {
      "name": "production",
      "description": "This set contains policies that should be checked on all production infrastructure workspaces.",
      "kind": "sentinel",
      "global": false,
      "agent-enabled": true,
      "policy-tool-version": "0.23.0",
      "overridable": true,
      "policy-count": 1,
      "workspace-count": 1,
      "versioned": false,
      "created-at": "2018-09-11T18:21:21.784Z",
      "updated-at": "2018-09-11T18:21:21.784Z"
    },
    "relationships": {
      "organization": {
        "data": { "id": "my-organization", "type": "organizations" }
      },
      "policies": {
        "data": [
          { "id": "pol-u3S5p2Uwk21keu1s", "type": "policies" }
        ]
      },
      "workspaces": {
        "data": [
          { "id": "ws-2HRvNs49EWPjDqT1", "type": "workspaces" }
        ]
      }
    },
    "links": {
      "self": "/api/v2/policy-sets/polset-3yVQZvHzf5j3WRJ1"
    }
  }
}
```

-> **Note**: The `data.relationships.projects` and `data.relationships.workspaces` refer to the projects and workspaces attached to the policy set. HCP Terraform omits these keys for policy sets marked as global, which are implicitly related to all of the organization's workspaces.

## Update a policy set

`PATCH /policy-sets/:id`

| Parameter | Description |
|---|---|
| `:id` | The ID of the policy set to update. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |

| Status | Response | Reason |
|---|---|---|
| 200 | JSON API document (`type: "policy-sets"`) | The request was successful. |
| 404 | JSON API error object | Policy set not found or user unauthorized to perform action. |
| 422 | JSON API error object | Malformed request body (missing attributes, wrong types, etc.). |

### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path | Type | Default | Description |
|---|---|---|---|
| `data.type` | string | | Must be `"policy-sets"`. |
| `data.attributes.name` | string | (previous value) | The name of the policy set. Can include letters, numbers, `-`, and `_`. |
| `data.attributes.description` | string | (previous value) | A description of the set's purpose. This field supports markdown and appears in the HCP Terraform UI. |
| `data.attributes.global` | boolean | (previous value) | Whether or not the policies in this set should be checked in all of the organization's workspaces or only in workspaces directly attached to the set. |
| `data.attributes.vcs-repo` | object | (previous value) | VCS repository information. When present, HCP Terraform sources the policies and configuration from the specified VCS repository instead of using definitions from HCP Terraform. Note that this option and `policies` relationships are mutually exclusive and may not be used simultaneously. |
| `data.attributes.vcs-repo.branch` | string | (previous value) | The branch of the VCS repo. When empty, HCP Terraform uses the VCS provider's default branch value. |
| `data.attributes.vcs-repo.identifier` | string | (previous value) | The VCS repository identifier in the following format: `:namespace/:repo`. An example identifier in GitHub is `hashicorp/my-policy-set`. The format for Azure DevOps is `:org/:project/_git/:repo`. |
| `data.attributes.vcs-repo.oauth-token-id` | string | (previous value) | The OAuth token ID to use to connect to the VCS host. |
| `data.attributes.vcs-repo.ingress-submodules` | boolean | (previous value) | Determines whether HCP Terraform instantiates repository submodules during the clone operation. |
| `data.attributes.policies-path` | string | (previous value) | The subdirectory of the attached VCS repository that contains the policies for this policy set. HCP Terraform ignores files and directories outside of the sub-path. Changes to the unrelated files do not update the policy set. You can only enable this option when a VCS repository is present. |
| `data.relationships.projects` | array[object] | (previous value) | An array of references to projects that the policy set should be assigned to. Sending an empty array clears all project assignments. You can only specify this attribute when `data.attributes.global` is `false`. |
| `data.relationships.workspaces` | array[object] | (previous value) | An array of references to workspaces that the policy set should be assigned to. Sending an empty array clears all workspace assignments. You can only specify this attribute when `data.attributes.global` is `false`. |
| `data.relationships.workspace-exclusions` | array[object] | (previous value) | An array of references to excluded workspaces that HCP Terraform will not enforce this policy set upon. Sending an empty array clears all exclusion assignments. |
| `data.attributes.agent-enabled` (beta) | boolean | `false` | Only valid for `sentinel` policy sets. Whether this policy set should run as a policy evaluation in the HCP Terraform agent. |
| `data.attributes.policy-tool-version` (beta) | string | `latest` | The version of the tool that HCP Terraform uses to evaluate policies. You can only set a policy tool version for `sentinel` policy sets if `agent-enabled` is `true`. |

### Sample Payload

```json
{
  "data": {
    "type": "policy-sets",
    "attributes": {
      "name": "workspace-scoped-policy-set",
      "description": "Policies added to this policy set will be enforced on specified workspaces.",
      "global": false,
      "agent-enabled": true,
      "policy-tool-version": "0.23.0"
    },
    "relationships": {
      "projects": {
        "data": [
          { "id": "prj-AwfuCJTkdai4xj9w", "type": "projects" }
        ]
      },
      "workspaces": {
        "data": [
          { "id": "ws-2HRvNs49EWPjDqT1", "type": "workspaces" }
        ]
      },
      "workspace-exclusions": {
        "data": [
          { "id": "ws-FVVvzCDaykN1oHiw", "type": "workspaces" }
        ]
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/policy-sets/polset-3yVQZvHzf5j3WRJ1
```

### Sample Response

```json
{
  "data": {
    "id": "polset-3yVQZvHzf5j3WRJ1",
    "type": "policy-sets",
    "attributes": {
      "name": "workspace-scoped-policy-set",
      "description": "Policies added to this policy set will be enforced on specified workspaces.",
      "global": false,
      "kind": "sentinel",
      "agent-enabled": true,
      "policy-tool-version": "0.23.0",
      "overridable": true,
      "policy-count": 1,
      "workspace-count": 1,
      "versioned": false,
      "created-at": "2018-09-11T18:21:21.784Z",
      "updated-at": "2018-09-11T18:21:21.784Z"
    },
    "relationships": {
      "organization": {
        "data": { "id": "my-organization", "type": "organizations" }
      },
      "policies": {
        "data": [
          { "id": "pol-u3S5p2Uwk21keu1s", "type": "policies" }
        ]
      },
      "projects": {
        "data": [
          { "id": "prj-AwfuCJTkdai4xj9w", "type": "projects" }
        ]
      },
      "workspaces": {
        "data": [
          { "id": "ws-2HRvNs49EWPjDqT1", "type": "workspaces" }
        ]
      },
      "workspace-exclusions": {
        "data": [
          { "id": "ws-FVVvzCDaykN1oHiw", "type": "workspaces" }
        ]
      }
    },
    "links": {
      "self": "/api/v2/policy-sets/polset-3yVQZvHzf5j3WRJ1"
    }
  }
}
```

## Add policies to the policy set

`POST /policy-sets/:id/relationships/policies`

| Parameter | Description |
|---|---|
| `:id` | The ID of the policy set to add policies to. Refer to [List Policy Sets](#list-policy-sets) for reference information about finding IDs. |

| Status | Response | Reason |
|---|---|---|
| 204 | No Content | The request was successful. |
| 404 | JSON API error object | Policy set not found or user unauthorized to perform action. |
| 422 | JSON API error object | Malformed request body (one or more policies not found, wrong types, etc.). |

-> **Note**: This endpoint may only be used when there is no VCS repository associated with the policy set.

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path | Type | Default | Description |
|---|---|---|---|
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       data      array  object              A list of resource identifier objects that defines which policies will be added to the set  These objects must contain  id  and  type  properties  and the  type  property must be  policies   e g      id    pol u3S5p2Uwk21keu1s    type    policies              Sample Payload     json      data            id    pol u3S5p2Uwk21keu1s    type    policies            id    pol 2HRvNs49EWPjDqT1    type    policies                   Sample Request     shell curl      H  Authorization  Bearer  TOKEN       H  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 policy sets polset 3yVQZvHzf5j3WRJ1 relationships policies         Attach a policy set to projects   POST  policy sets  id relationships projects     Parameter   Description                                                                                                                                                                                                             id        The ID of the policy set to attach to projects  Refer to  List Policy Sets   list policy sets  for reference information about finding IDs          Note    You can not attach global policy sets to individual projects     Status    Response                    Reason                                                                                                                                                       
                                        204      Nothing                     The request was successful                                                      404       JSON API error object      Policy set not found or user unauthorized to perform action                     422       JSON API error object      Malformed request body  one or more projects not found  wrong types  etc          Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path   Type             Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              data      array  object              A list of resource identifier objects that defines which projects to attach the policy set to  These objects must contain  id  and  type  properties  and the  type  property must be  projects   e g      id    prj AwfuCJTkdai4xj9w    type    projects              Sample Payload     json      data            id    prj AwfuCJTkdai4xj9w    type    projects            id    prj 2HRvNs49EWPjDqT1    type    projects                   Sample Request     shell curl      H  Authorization  Bearer  TOKEN       H  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 policy sets polset 3yVQZvHzf5j3WRJ1 relationships projects         Attach a policy set to workspaces   POST  policy sets  id relationships workspaces     Parameter   
Description                                                                                                                                                                                                                 id        The ID of the policy set to attach to workspaces  Refer to  List Policy Sets   list policy sets  for reference information about finding IDs          Note    Policy sets marked as global cannot be attached to individual workspaces     Status    Response                    Reason                                                                                                                                                                                                   204      No Content                  The request was successful                                                        404       JSON API error object      Policy set not found or user unauthorized to perform action                       422       JSON API error object      Malformed request body  one or more workspaces not found  wrong types  etc          Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path   Type             Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        data      array  object              A list of resource identifier objects that defines the workspaces the policy set will be attached to  These objects must 
contain  id  and  type  properties  and the  type  property must be  workspaces   e g      id    ws 2HRvNs49EWPjDqT1    type    workspaces              Sample Payload     json      data            id    ws u3S5p2Uwk21keu1s    type    workspaces            id    ws 2HRvNs49EWPjDqT1    type    workspaces                   Sample Request     shell curl      H  Authorization  Bearer  TOKEN       H  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 policy sets polset 3yVQZvHzf5j3WRJ1 relationships workspaces         Exclude a workspace from a policy set   POST  policy sets  id relationships workspace exclusions     Parameter   Description                                                                                                                                                                                                                                                                                                               id        The ID of a policy set that you want HCP Terraform to exclude from the workspaces you specify  Refer to  List Policy Sets   list policy sets  for reference information about finding IDs       Status    Response                    Reason                                                                                                                                                                                                                     204      No Content                  The request was successful                                                                 404       JSON API error object      Policy set not found or user unauthorized to perform action                                422       JSON API error object      Malformed request body  one or more excluded workspaces not found  wrong types  etc          Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default 
value are required     Key path   Type             Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          data      array  object              A list of resource identifier objects that defines the excluded workspaces the policy set will be attached to  These objects must contain  id  and  type  properties  and the  type  property must be  workspaces   e g      id    ws 2HRvNs49EWPjDqT1    type    workspaces              Sample Payload     json      data            id    ws u3S5p2Uwk21keu1s    type    workspaces            id    ws 2HRvNs49EWPjDqT1    type    workspaces                   Sample Request     shell curl      H  Authorization  Bearer  TOKEN       H  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 policy sets polset 3yVQZvHzf5j3WRJ1 relationships workspace exclusions         Remove policies from the policy set   DELETE  policy sets  id relationships policies     Parameter   Description                                                                                                                                                                                                                 id        The ID of the policy set to remove policies from  Refer to  List Policy Sets   list policy sets  for reference information about finding IDs       Status    Response                    Reason                                                        
                                                                                                         204      No Content                  The request was successful                                       404       JSON API error object      Policy set not found or user unauthorized to perform action      422       JSON API error object      Malformed request body  wrong types  etc                            Note    This endpoint may only be used when there is no VCS repository associated with the policy set       Request Body  This DELETE endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path   Type             Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      data      array  object              A list of resource identifier objects that defines which policies will be removed from the set  These objects must contain  id  and  type  properties  and the  type  property must be  policies   e g      id    pol u3S5p2Uwk21keu1s    type    policies              Sample Payload     json      data            id    pol u3S5p2Uwk21keu1s    type    policies            id    pol 2HRvNs49EWPjDqT1    type    policies                   Sample Request     shell curl      H  Authorization  Bearer  TOKEN       H  Content Type  application vnd api json        request DELETE       data  payload json     https   app terraform io api v2 policy sets polset 3yVQZvHzf5j3WRJ1 
relationships policies         Detach a policy set from projects   DELETE  policy sets  id relationships projects     Parameter   Description                                                                                                                                                                                                                 id        The ID of the policy set you want to detach from the specified projects  Refer to  List Policy Sets   list policy sets  for reference information about finding IDs          Note    You can not attach global policy sets to individual projects     Status    Response                    Reason                                                                                                                                                                                               204      Nothing                     The request was successful                                                      404       JSON API error object      Policy set not found or user unauthorized to perform action                     422       JSON API error object      Malformed request body  one or more projects not found  wrong types  etc          Request Body  This DELETE endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path   Type             Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  data      array  object         
     A list of resource identifier objects that defines the projects the policy set will be detached from  These objects must contain  id  and  type  properties  and the  type  property must be  projects   For example      id    prj AwfuCJTkdai4xj9w    type    projects             Sample Payload     json      data            id    prj AwfuCJTkdai4xj9w    type    projects            id    prj 2HRvNs49EWPjDqT1    type    projects                   Sample Request     shell curl      H  Authorization  Bearer  TOKEN       H  Content Type  application vnd api json        request DELETE       data  payload json     https   app terraform io api v2 policy sets polset 3yVQZvHzf5j3WRJ1 relationships projects         Detach the policy set from workspaces   DELETE  policy sets  id relationships workspaces     Parameter   Description                                                                                                                                                                                                                     id        The ID of the policy set to detach from workspaces  Refer to  List Policy Sets   list policy sets  for reference information about finding IDs          Note    Policy sets marked as global cannot be detached from individual workspaces     Status    Response                    Reason                                                                                                                                                                 204      No Content                  The request was successful                                       404       JSON API error object      Policy set not found or user unauthorized to perform action      422       JSON API error object      Malformed request body  wrong types  etc                           Request Body  This DELETE endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path   Type             
Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            data      array  object              A list of resource identifier objects that defines which workspaces the policy set will be detached from  These objects must contain  id  and  type  properties  and the  type  property must be  workspaces   e g      id    ws 2HRvNs49EWPjDqT1    type    workspaces       Obtain workspace IDs from the  workspace settings   terraform cloud docs workspaces settings  or the  Show Workspace   terraform cloud docs api docs workspaces show workspace  endpoint         Sample Payload     json      data            id    ws u3S5p2Uwk21keu1s    type    workspaces            id    ws 2HRvNs49EWPjDqT1    type    workspaces                   Sample Request     shell curl      H  Authorization  Bearer  TOKEN       H  Content Type  application vnd api json        request DELETE       data  payload json     https   app terraform io api v2 policy sets polset 3yVQZvHzf5j3WRJ1 relationships workspaces         Reinclude a workspace to a policy set   DELETE  policy sets  id relationships workspace exclusions     Parameter   Description        
                                                                                                                                                                                                                                                                                                       id        The ID of the policy set HCP Terraform should reinclude  enforce  on the specified workspaces  Refer to  List Policy Sets   list policy sets  for reference information about finding IDs       Status    Response                    Reason                                                                                                                                                                 204      No Content                  The request was successful                                       404       JSON API error object      Policy set not found or user unauthorized to perform action      422       JSON API error object      Malformed request body  wrong types  etc                           Request Body  This DELETE endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path   Type             Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          
                                                                                                                                                                                                                            data      array  object              A list of resource identifier objects that defines which workspaces HCP Terraform should reinclude  enforce  this policy set on  These objects must contain  id  and  type  properties  and the  type  property must be  workspaces   e g      id    ws 2HRvNs49EWPjDqT1    type    workspaces       Obtain workspace IDs from the  workspace settings   terraform cloud docs workspaces settings  or the  Show Workspace   terraform cloud docs api docs workspaces show workspace  endpoint         Sample Payload     json      data            id    ws u3S5p2Uwk21keu1s    type    workspaces            id    ws 2HRvNs49EWPjDqT1    type    workspaces                   Sample Request     shell curl      H  Authorization  Bearer  TOKEN       H  Content Type  application vnd api json        request DELETE       data  payload json     https   app terraform io api v2 policy sets polset 3yVQZvHzf5j3WRJ1 relationships workspace exclusions         Delete a policy set   DELETE  policy sets  id     Parameter   Description                                                                                                                                                                                     id        The ID of the policy set to delete  Refer to  List Policy Sets   list policy sets  for reference information about finding IDs       Status    Response                    Reason                                                                                                                                                                   204      No Content                  Successfully deleted the policy set                               404       JSON API error object      Policy set not found  or user unauthorized to perform action      
### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/policy-sets/polset-3yVQZvHzf5j3WRJ1
```

## Create a policy set version

For versioned policy sets which have no VCS repository attached, versions of policy code may be uploaded directly to the API by creating a new policy set version and, in a subsequent request, uploading a tarball (tar.gz) of data to it.

`POST /policy-sets/:id/versions`

| Parameter | Description |
| --------- | ----------- |
| `:id`     | The ID of the policy set to create a new version for. |

| Status  | Response                                              | Reason |
| ------- | ----------------------------------------------------- | ------ |
| [201][] | [JSON API document][] (`type: "policy-set-versions"`) | The request was successful |
| [404][] | [JSON API error object][]                             | Policy set not found or user unauthorized to perform action |
| [422][] | [JSON API error object][]                             | The policy set does not support uploading versions |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  https://app.terraform.io/api/v2/policy-sets/polset-3yVQZvHzf5j3WRJ1/versions
```

### Sample Response

```json
{
  "data": {
    "id": "polsetver-cXciu9nQwmk9Cfrn",
    "type": "policy-set-versions",
    "attributes": {
      "source": "tfe-api",
      "status": "pending",
      "status-timestamps": {},
      "error": null,
      "created-at": "2019-06-28T23:53:15.875Z",
      "updated-at": "2019-06-28T23:53:15.875Z"
    },
    "relationships": {
      "policy-set": {
        "data": {
          "id": "polset-ws1CZBzm2h5K6ZT5",
          "type": "policy-sets"
        }
      }
    },
    "links": {
      "self": "/api/v2/policy-set-versions/polsetver-cXciu9nQwmk9Cfrn",
      "upload": "https://archivist.terraform.io/v1/object/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox"
    }
  }
}
```

The `upload` link URL in the above response is valid for one hour after creation. Make a `PUT` request to this URL directly, sending the policy set contents in `tar.gz` format as the request body. Once uploaded successfully, you can request the [Show Policy Set](#show-a-policy-set) endpoint again to verify that the status has changed from `pending` to `ready`.

## Upload policy set versions

`PUT https://archivist.terraform.io/v1/object/<UNIQUE OBJECT ID>`

The URL is provided in the `upload` attribute in the `policy-set-versions` resource.

### Sample Request

In the example below, `policy-set.tar.gz` is the local filename of the policy set version file to upload.

```shell
curl \
  --header "Content-Type: application/octet-stream" \
  --request PUT \
  --data-binary @policy-set.tar.gz \
  https://archivist.terraform.io/v1/object/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox
```

## Show a policy set version

`GET /policy-set-versions/:id`

| Parameter | Description |
| --------- | ----------- |
| `:id`     | The ID of the policy set version to show. |

| Status  | Response                                              | Reason |
| ------- | ----------------------------------------------------- | ------ |
| [200][] | [JSON API document][] (`type: "policy-set-versions"`) | The request was successful |
| [404][] | [JSON API error object][]                             | Policy set version not found or user unauthorized to perform action |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --request GET \
  https://app.terraform.io/api/v2/policy-set-versions/polsetver-cXciu9nQwmk9Cfrn
```

### Sample Response

```json
{
  "data": {
    "id": "polsetver-cXciu9nQwmk9Cfrn",
    "type": "policy-set-versions",
    "attributes": {
      "source": "tfe-api",
      "status": "pending",
      "status-timestamps": {},
      "error": null,
      "created-at": "2019-06-28T23:53:15.875Z",
      "updated-at": "2019-06-28T23:53:15.875Z"
    },
    "relationships": {
      "policy-set": {
        "data": {
          "id": "polset-ws1CZBzm2h5K6ZT5",
          "type": "policy-sets"
        }
      }
    },
    "links": {
      "self": "/api/v2/policy-set-versions/polsetver-cXciu9nQwmk9Cfrn",
      "upload": "https://archivist.terraform.io/v1/object/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox"
    }
  }
}
```

The `upload` link URL in the above response is valid for one hour after the `created-at` timestamp of the policy set version. Make a `PUT` request to this URL directly, sending the policy set contents in `tar.gz` format as the request body. Once uploaded successfully, you can request the [Show a Policy Set Version](#show-a-policy-set-version) endpoint again to verify that the status has changed from `pending` to `ready`.

## Available related resources

The GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](/terraform/cloud-docs/api-docs#inclusion-of-related-resources). The following resource types are available:

| Resource Name          | Description |
| ---------------------- | ----------- |
| `current-version`      | The most recent **successful** policy set version. |
| `newest-version`       | The most recently created policy set version, regardless of status. Note that this relationship may include an errored and unusable version, and is intended to allow checking for VCS errors. |
| `policies`             | Individually managed policies which are associated with the policy set. |
| `projects`             | The projects this policy set applies to. |
| `workspaces`           | The workspaces this policy set applies to. |
| `workspace-exclusions` | The workspaces excluded from this policy set's enforcement. |

The following resource types may be included for policy set versions:

| Resource Name | Description |
| ------------- | ----------- |
| `policy-set`  | The policy set associated with the specified policy set version. |

## Relationships

The following relationships may be present in various responses for policy sets:

| Resource Name          | Description |
| ---------------------- | ----------- |
| `current-version`      | The most recent **successful** policy set version. |
| `newest-version`       | The most recently created policy set version, regardless of status. Note that this relationship may include an errored and unusable version, and is intended to allow checking for VCS errors. |
| `organization`         | The organization associated with the specified policy set. |
| `policies`             | Individually managed policies which are associated with the policy set. |
| `projects`             | The projects this policy set applies to. |
| `workspaces`           | The workspaces this policy set applies to. |
| `workspace-exclusions` | The workspaces excluded from this policy set's enforcement. |

The following relationships may be present in various responses for policy set versions:

| Resource Name | Description |
| ------------- | ----------- |
| `policy-set`  | The policy set associated with the specified policy set version. |
"}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Teams API Docs HCP Terraform Use the teams endpoint to manage teams List show create update and delete teams using the HTTP API","answers":"---\npage_title: Teams - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/teams` endpoint to manage teams. List, show, create, update, and delete teams using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Teams API\n\nThe Teams API is used to create, edit, and destroy teams as well as manage a team's organization-level permissions. The [Team Membership API](\/terraform\/cloud-docs\/api-docs\/team-members) is used to add or remove users from a team. 
Use the [Team Access API](\/terraform\/cloud-docs\/api-docs\/team-access) to associate a team with privileges on an individual workspace.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/team-management.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nAny member of an organization can view visible teams and any secret teams they are a member of. Only organization owners can modify teams or view the full set of secret teams. The organization token and the owners team token can act as an owner on these endpoints. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Organization Membership\n\n-> **Note:** Users must be invited to join organizations before they can be added to teams. See [the Organization Memberships API documentation](\/terraform\/cloud-docs\/api-docs\/organization-memberships) for more information. Invited users who have not yet accepted will not appear in Teams API responses.\n\n## List teams\n\n`GET \/organizations\/:organization_name\/teams`\n\n| Parameter            | Description                                      |\n| -------------------- | ------------------------------------------------ |\n| `:organization_name` | The name of the organization to list teams from. |\n\nThe response may identify HashiCorp API service accounts, for example `api-team_XXXXXX`, as members of a team. However, API service accounts do not appear in the UI. As a result, there may be differences between the number of team members reported by the UI and the API. For example, the UI may report `0` members on a team while the API reports `1`.\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). 
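\n\nAs a quick local illustration (hypothetical values, not part of the official docs), Python's standard library produces the percent-encoded form of the bracketed pagination keys:\n\n```python
from urllib.parse import urlencode

# Build the query string for page 2 with 50 teams per page.
# urlencode escapes the brackets in page[number] and page[size]
# as %5B and %5D automatically.
params = {'page[number]': 2, 'page[size]': 50}
query = urlencode(params)
```\n\nAppending the resulting `query` string to the endpoint URL yields a correctly encoded paginated request.\n\n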
Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter       | Description                                                                                                                                                                  |\n| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `q`             | **Optional.** Allows querying a list of teams by name. This search is case-insensitive.                                                                                      |\n| `filter[names]` | **Optional.** If specified, restricts results to a team with a matching name. If multiple comma separated values are specified, teams matching any of the names are returned.|\n| `page[number]`  | **Optional.** If omitted, the endpoint will return the first page.                                                                                                           |\n| `page[size]`    | **Optional.** If omitted, the endpoint will return 20 teams per page.                                                                                                        
|\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/teams\n```\n\n### Sample Response\n\nThe `sso-team-id` attribute is only returned in Terraform Enterprise 202204-1 and later, or in HCP Terraform.\nThe `allow-member-token-management` attribute is set to `false` for Terraform Enterprise versions older than 202408-1.\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"team-6p5jTwJQXwqZBncC\",\n      \"type\": \"teams\",\n      \"attributes\": {\n        \"name\": \"team-creation-test\",\n        \"sso-team-id\": \"cb265c8e41bddf3f9926b2cf3d190f0e1627daa4\",\n        \"users-count\": 0,\n        \"visibility\": \"organization\",\n        \"allow-member-token-management\": true,\n        \"permissions\": {\n          \"can-update-membership\": true,\n          \"can-destroy\": true,\n          \"can-update-organization-access\": true,\n          \"can-update-api-token\": true,\n          \"can-update-visibility\": true\n        },\n        \"organization-access\": {\n          \"manage-policies\": true,\n          \"manage-policy-overrides\": false,\n          \"manage-run-tasks\": true,\n          \"manage-workspaces\": false,\n          \"manage-vcs-settings\": false,\n          \"manage-agent-pools\": false,\n          \"manage-projects\": false,\n          \"read-projects\": false,\n          \"read-workspaces\": false\n        }\n      },\n      \"relationships\": {\n        \"users\": {\n          \"data\": []\n        },\n        \"authentication-token\": {\n          \"meta\": {}\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/teams\/team-6p5jTwJQXwqZBncC\"\n      }\n    }\n  ]\n}\n```\n\n## Create a Team\n\n`POST \/organizations\/:organization_name\/teams`\n\n| Parameter            | Description                                                 
                                                                                                   |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to create the team in. The organization must already exist in the system, and the user must have permissions to create new teams. |\n\n| Status  | Response                                | Reason                                                         |\n| ------- | --------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"teams\"`) | Successfully created a team                                    |\n| [400][] | [JSON API error object][]               | Invalid `include` parameter                                    |\n| [404][] | [JSON API error object][]               | Organization not found, or user unauthorized to perform action |\n| [422][] | [JSON API error object][]               | Malformed request body (missing attributes, wrong types, etc.) |\n| [500][] | [JSON API error object][]               | Failure during team creation                                   |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n-> **Note:** You cannot set `manage-workspaces` to `false` when setting `manage-projects` to `true`, since project permissions cascade down to workspaces. This is also the case for `read-workspaces` and `read-projects`. 
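\n\nThe cascade rule in this note can be checked client-side before sending a request; the sketch below is a hypothetical helper, not part of the API:\n\n```python
# Validate the organization-access cascade rules:
# manage-projects implies manage-workspaces,
# and read-projects implies read-workspaces.
def valid_organization_access(access):
    if access.get('manage-projects') and not access.get('manage-workspaces'):
        return False
    if access.get('read-projects') and not access.get('read-workspaces'):
        return False
    return True

# A payload granting project access must also grant workspace access.
assert valid_organization_access({'manage-projects': True, 'manage-workspaces': True})
assert not valid_organization_access({'read-projects': True})
```\n\n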
If `read-projects` is `true`, `read-workspaces` must be `true` as well.\n\n| Key path                                | Type   | Default    | Description                                                                                                                                                                                                                                                                                                       |\n|-----------------------------------------|--------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `data.type`                             | string |            | Must be `\"teams\"`.                                                                                                                                                                                                                                                                                                |\n| `data.attributes.name`                  | string |            | The name of the team, which can only include letters, numbers, `-`, and `_`. This will be used as an identifier and must be unique in the organization.                                                                                                                                                           |\n| `data.attributes.sso-team-id`           | string | (nothing)  | The unique identifier of the team from the SAML `MemberOf` attribute. Only available in Terraform Enterprise 202204-1 and later, or in HCP Terraform.                                                                                             |\n| `data.attributes.organization-access`   | object | (nothing)  | Settings for the team's organization access. 
This object can include the `manage-policies`, `manage-policy-overrides`, `manage-run-tasks`, `manage-workspaces`, `manage-vcs-settings`, `manage-agent-pools`, `manage-providers`, `manage-modules`, `manage-projects`, `read-projects`, `read-workspaces`, `manage-membership`, `manage-teams`, and `manage-organization-access` properties with boolean values. All properties default to `false`. |\n| `data.attributes.visibility` **(beta)** | string | `\"secret\"` | The team's visibility. Must be `\"secret\"` or `\"organization\"` (visible).                                                                                                                                                                                                                                          |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"teams\",\n    \"attributes\": {\n      \"name\": \"team-creation-test\",\n      \"sso-team-id\": \"cb265c8e41bddf3f9926b2cf3d190f0e1627daa4\",\n      \"organization-access\": {\n        \"manage-workspaces\": true\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/teams\n```\n\n### Sample Response\n\nThe `sso-team-id` attribute is only returned in Terraform Enterprise 202204-1 and later, or in HCP Terraform.\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"team-creation-test\",\n      \"sso-team-id\": \"cb265c8e41bddf3f9926b2cf3d190f0e1627daa4\",\n      \"organization-access\": {\n        \"manage-policies\": false,\n        \"manage-policy-overrides\": false,\n        \"manage-run-tasks\": false,\n        \"manage-vcs-settings\": false,\n        \"manage-agent-pools\": false,\n        \"manage-workspaces\": true,\n        \"manage-providers\": false,\n        
\"manage-modules\": false,\n        \"manage-projects\": false,\n        \"read-projects\": false,\n        \"read-workspaces\": true,\n        \"manage-membership\": false,\n        \"manage-teams\": false,\n        \"manage-organization-access\": false\n      },\n      \"permissions\": {\n        \"can-update-membership\": true,\n        \"can-destroy\": true,\n        \"can-update-organization-access\": true,\n        \"can-update-api-token\": true,\n        \"can-update-visibility\": true\n      },\n      \"users-count\": 0,\n      \"visibility\": \"secret\",\n      \"allow-member-token-management\": true\n    },\n    \"id\": \"team-6p5jTwJQXwqZBncC\",\n    \"links\": {\n      \"self\": \"\/api\/v2\/teams\/team-6p5jTwJQXwqZBncC\"\n    },\n    \"relationships\": {\n      \"authentication-token\": {\n        \"meta\": {}\n      },\n      \"users\": {\n        \"data\": []\n      }\n    },\n    \"type\": \"teams\"\n  }\n}\n```\n\n## Show Team Information\n\n`GET \/teams\/:team_id`\n\n| Parameter  | Description              |\n| ---------- | ------------------------ |\n| `:team_id` | The team ID to be shown. 
|\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/team-6p5jTwJQXwqZBncC\n```\n\n### Sample Response\n\nThe `sso-team-id` attribute is only returned in Terraform Enterprise 202204-1 and later, or in HCP Terraform.\n\n```json\n{\n  \"data\": {\n    \"id\": \"team-6p5jTwJQXwqZBncC\",\n    \"type\": \"teams\",\n    \"attributes\": {\n      \"name\": \"team-creation-test\",\n      \"sso-team-id\": \"cb265c8e41bddf3f9926b2cf3d190f0e1627daa4\",\n      \"users-count\": 0,\n      \"visibility\": \"organization\",\n      \"allow-member-token-management\": true,\n      \"permissions\": {\n        \"can-update-membership\": true,\n        \"can-destroy\": true,\n        \"can-update-organization-access\": true,\n        \"can-update-api-token\": true,\n        \"can-update-visibility\": true\n      },\n      \"organization-access\": {\n        \"manage-policies\": true,\n        \"manage-policy-overrides\": false,\n        \"manage-run-tasks\": true,\n        \"manage-workspaces\": false,\n        \"manage-vcs-settings\": false,\n        \"manage-agent-pools\": false,\n        \"manage-providers\": false,\n        \"manage-modules\": false,\n        \"manage-projects\": false,\n        \"read-projects\": false,\n        \"read-workspaces\": false,\n        \"manage-membership\": false,\n        \"manage-teams\": false,\n        \"manage-organization-access\": false\n      }\n    },\n    \"relationships\": {\n      \"users\": {\n        \"data\": []\n      },\n      \"authentication-token\": {\n        \"meta\": {}\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/teams\/team-6p5jTwJQXwqZBncC\"\n    }\n  }\n}\n```\n\n## Update a Team\n\n`PATCH \/teams\/:team_id`\n\n| Parameter  | Description                |\n| ---------- | -------------------------- |\n| `:team_id` | The team ID to be updated. 
|\n\n| Status  | Response                                | Reason                                                         |\n| ------- | --------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"teams\"`) | Successfully updated the team                                  |\n| [400][] | [JSON API error object][]               | Invalid `include` parameter                                    |\n| [404][] | [JSON API error object][]               | Team not found, or user unauthorized to perform action         |\n| [422][] | [JSON API error object][]               | Malformed request body (missing attributes, wrong types, etc.) |\n| [500][] | [JSON API error object][]               | Failure during team update                                     |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n-> **Note:** You cannot set `manage-workspaces` to `false` when setting `manage-projects` to `true`, since project permissions cascade down to workspaces. This is also the case for `read-workspaces` and `read-projects`. 
If `read-projects` is `true`, `read-workspaces` must be `true` as well.\n\n\n| Key path                                | Type   | Default          | Description                                                                                                                                                                                                                                                                                                       |\n|-----------------------------------------|--------|------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `data.type`                             | string |                  | Must be `\"teams\"`.                                                                                                                                                                                                                                                                                                |\n| `data.attributes.name`                  | string | (previous value) | The name of the team, which can only include letters, numbers, `-`, and `_`. This will be used as an identifier and must be unique in the organization.                                                                                                                                                           |\n| `data.attributes.sso-team-id`           | string | (previous value) | The unique identifier of the team from the SAML `MemberOf` attribute. Only available in Terraform Enterprise 202204-1 and later, or in HCP Terraform.                                                                                             
|\n| `data.attributes.organization-access`   | object | (previous value) | Settings for the team's organization access. This object can include the `manage-policies`, `manage-policy-overrides`, `manage-run-tasks`, `manage-workspaces`, `manage-vcs-settings`, `manage-agent-pools`, `manage-providers`, `manage-modules`, `manage-projects`, `read-projects`, `read-workspaces`, `manage-membership`, `manage-teams`, and `manage-organization-access` properties with boolean values. All properties default to `false`. |\n| `data.attributes.visibility` **(beta)** | string | (previous value) | The team's visibility. Must be `\"secret\"` or `\"organization\"` (visible).                                                                                                                                                                                                                                          |\n| `data.attributes.allow-member-token-management` | boolean | (previous value) | The ability to enable and disable team token management for a team. Defaults to `true`.
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"teams\",\n    \"attributes\": {\n      \"visibility\": \"organization\",\n      \"allow-member-token-management\": true,\n      \"organization-access\": {\n        \"manage-vcs-settings\": true\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/team-6p5jTwJQXwqZBncC\n```\n\n### Sample Response\n\nThe `sso-team-id` attribute is only returned in Terraform Enterprise 202204-1 and later, or in HCP Terraform.\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"team-creation-test\",\n      \"sso-team-id\": \"cb265c8e41bddf3f9926b2cf3d190f0e1627daa4\",\n      \"organization-access\": {\n        \"manage-policies\": false,\n        \"manage-policy-overrides\": false,\n        \"manage-run-tasks\": true,\n        \"manage-vcs-settings\": true,\n        \"manage-agent-pools\": false,\n        \"manage-workspaces\": true,\n        \"manage-providers\": false,\n        \"manage-modules\": false,\n        \"manage-projects\": false,\n        \"read-projects\": false,\n        \"read-workspaces\": true,\n        \"manage-membership\": false,\n        \"manage-teams\": false,\n        \"manage-organization-access\": false\n      },\n      \"visibility\": \"organization\",\n      \"allow-member-token-management\": true,\n      \"permissions\": {\n        \"can-update-membership\": true,\n        \"can-destroy\": true,\n        \"can-update-organization-access\": true,\n        \"can-update-api-token\": true,\n        \"can-update-visibility\": true\n      },\n      \"users-count\": 0\n    },\n    \"id\": \"team-6p5jTwJQXwqZBncC\",\n    \"links\": {\n      \"self\": \"\/api\/v2\/teams\/team-6p5jTwJQXwqZBncC\"\n    },\n    \"relationships\": {\n      
\"authentication-token\": {\n        \"meta\": {}\n      },\n      \"users\": {\n        \"data\": []\n      }\n    },\n    \"type\": \"teams\"\n  }\n}\n```\n\n## Delete a Team\n\n`DELETE \/teams\/:team_id`\n\n| Parameter  | Description                |\n| ---------- | -------------------------- |\n| `:team_id` | The team ID to be deleted. |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/team-6p5jTwJQXwqZBncC\n```\n\n## Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\n- `users` (`string`) - Returns the full user record for every member of a team.\n- `organization-memberships` (`string`) - Returns the full organization membership record for every member of a team.","site":"terraform","answers_cleaned":"    page title  Teams   API Docs   HCP Terraform description       Use the   teams  endpoint to manage teams  List  show  create  update  and delete teams using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla 
org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Teams API  The Teams API is used to create  edit  and destroy teams as well as manage a team s organization level permissions  The  Team Membership API   terraform cloud docs api docs team members  is used to add or remove users from a team  Use the  Team Access API   terraform cloud docs api docs team access  to associate a team with privileges on an individual workspace        BEGIN  TFC only name pnp callout      include  tfc package callouts team management mdx       END  TFC only name pnp callout      Any member of an organization can view visible teams and any secret teams they are a member of  Only organization owners can modify teams or view the full set of secret teams  The organization token and the owners team token can act as an owner on these endpoints    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers     Organization Membership       Note    Users must be invited to join organizations before they can be added to teams  See  the Organization Memberships API documentation   terraform cloud docs api docs organization memberships  for more information  Invited users who have not yet accepted will not appear in Teams API responses      List teams   GET organizations  organization name teams     Parameter              Description                                                                                                                        organization name    The name of the organization to list teams from     The response may identify HashiCorp API service 
accounts  for example  api team XXXXXX   as a members of a team  However  API service accounts do not appear in the UI  As a result  there may be differences between the number of team members reported by the UI and the API  For example  the UI may report  0  members on a team when and the API reports  1        Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter         Description                                                                                                                                                                                                                                                                                                                                                                          q                  Optional    Allows querying a list of teams by name  This search is case insensitive                                                                                            filter names       Optional    If specified  restricts results to a team with a matching name  If multiple comma separated values are specified  teams matching any of the names are returned      page number        Optional    If omitted  the endpoint will return the first page                                                                                                                 page size          Optional    If omitted  the endpoint will return 20 teams per page                                                                                                                Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 organizations my organization teams          Sample Response  The  sso team id  
attribute is only returned in Terraform Enterprise 202204 1 and later  or in HCP Terraform  The  allow member token management  attribute is set to  false  for Terraform Enterprise versions older than 202408 1      json      data                  id    team 6p5jTwJQXwqZBncC          type    teams          attributes              name    team creation test            sso team id    cb265c8e41bddf3f9926b2cf3d190f0e1627daa4            users count   0           visibility    organization            allow member token management   true           permissions                can update membership   true             can destroy   true             can update organization access   true             can update api token   true             can update visibility   true                     organization access                manage policies   true             manage policy overrides   false             manage run tasks   true             manage workspaces   false             manage vcs settings   false             manage agent pools   false             manage projects   false             read projects   false             read workspaces   false                           relationships              users                data                          authentication token                meta                                links              self     api v2 teams team 6p5jTwJQXwqZBncC                              Create a Team   POST  organizations  organization name teams     Parameter              Description                                                                                                                                                                                                                                                                                                                                                    organization name    The name of the organization to create the team in  The organization must already exist in the system  and the user must have 
permissions to create new teams.

| Status | Response                            | Reason                                                         |
| ------ | ----------------------------------- | -------------------------------------------------------------- |
| 200    | JSON API document (`type: "teams"`) | Successfully created a team                                    |
| 400    | JSON API error object               | Invalid `include` parameter                                    |
| 404    | JSON API error object               | Organization not found, or user unauthorized to perform action |
| 422    | JSON API error object               | Malformed request body (missing attributes, wrong types, etc.) |
| 500    | JSON API error object               | Failure during team creation                                   |

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

-> **Note**: You cannot set `manage-workspaces` to `false` when setting `manage-projects` to `true`, since project permissions cascade down to workspaces. This is also the case for `read-workspaces` and `read-projects`: if `read-projects` is `true`, `read-workspaces` must be `true` as well.

| Key path                              | Type   | Default    | Description                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| ------------------------------------- | ------ | ---------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type`                           | string |            | Must be `"teams"`.                                                                                                                                                                                                                                                                                                                                                                                                                              |
| `data.attributes.name`                | string |            | The name of the team, which can only include letters, numbers, `-`, and `_`. This will be used as an identifier and must be unique in the organization.                                                                                                                                                                                                                                                                                          |
| `data.attributes.sso-team-id`         | string | (nothing)  | The unique identifier of the team from the SAML `MemberOf` attribute. Only available in Terraform Enterprise 202204-1 and later, or in HCP Terraform.                                                                                                                                                                                                                                                                                            |
| `data.attributes.organization-access` | object | (nothing)  | Settings for the team's organization access. This object can include the `manage-policies`, `manage-policy-overrides`, `manage-run-tasks`, `manage-workspaces`, `manage-vcs-settings`, `manage-agent-pools`, `manage-providers`, `manage-modules`, `manage-projects`, `read-projects`, `read-workspaces`, `manage-membership`, `manage-teams`, and `manage-organization-access` properties with boolean values. All properties default to `false`. |
| `data.attributes.visibility` (beta)   | string | `"secret"` | The team's visibility. Must be `"secret"` or `"organization"` (visible).                                                                                                                                                                                                                                                                                                                                                                        |
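The cascade rule in the note above can also be enforced client-side before a payload is sent. The following is a minimal sketch; the helper function and its name are illustrative, not part of the HCP Terraform API or any official client:

```python
# Illustrative client-side check for the organization-access cascade rules
# described above: "manage-projects" implies "manage-workspaces", and
# "read-projects" implies "read-workspaces". Hypothetical helper only.

def validate_organization_access(access: dict) -> list[str]:
    """Return a list of cascade violations in an organization-access object."""
    cascades = {
        "manage-projects": "manage-workspaces",
        "read-projects": "read-workspaces",
    }
    errors = []
    for child, parent in cascades.items():
        # All organization-access properties default to false when omitted.
        if access.get(child, False) and not access.get(parent, False):
            errors.append(f'"{child}" requires "{parent}" to be true')
    return errors

payload = {
    "data": {
        "type": "teams",
        "attributes": {
            "name": "team-creation-test",
            "organization-access": {"read-projects": True},
        },
    }
}
problems = validate_organization_access(
    payload["data"]["attributes"]["organization-access"]
)
# One violation: read-projects is set without read-workspaces.
```

Running the check before the POST turns the API's 422 response into an immediate local error message.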
### Sample Payload

```json
{
  "data": {
    "type": "teams",
    "attributes": {
      "name": "team-creation-test",
      "sso-team-id": "cb265c8e41bddf3f9926b2cf3d190f0e1627daa4",
      "organization-access": {
        "manage-workspaces": true
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/my-organization/teams
```

### Sample Response

The `sso-team-id` attribute is only returned in Terraform Enterprise 202204-1 and later, or in HCP Terraform.

```json
{
  "data": {
    "attributes": {
      "name": "team-creation-test",
      "sso-team-id": "cb265c8e41bddf3f9926b2cf3d190f0e1627daa4",
      "organization-access": {
        "manage-policies": false,
        "manage-policy-overrides": false,
        "manage-run-tasks": false,
        "manage-vcs-settings": false,
        "manage-agent-pools": false,
        "manage-workspaces": true,
        "manage-providers": false,
        "manage-modules": false,
        "manage-projects": false,
        "read-projects": false,
        "read-workspaces": true,
        "manage-membership": false,
        "manage-teams": false,
        "manage-organization-access": false
      },
      "permissions": {
        "can-update-membership": true,
        "can-destroy": true,
        "can-update-organization-access": true,
        "can-update-api-token": true,
        "can-update-visibility": true
      },
      "users-count": 0,
      "visibility": "secret",
      "allow-member-token-management": true
    },
    "id": "team-6p5jTwJQXwqZBncC",
    "links": {
      "self": "/api/v2/teams/team-6p5jTwJQXwqZBncC"
    },
    "relationships": {
      "authentication-token": {
        "meta": {}
      },
      "users": {
        "data": []
      }
    },
    "type": "teams"
  }
}
```

## Show Team Information

`GET /teams/:team_id`

| Parameter  | Description              |
| ---------- | ------------------------ |
| `:team_id` | The team ID to be shown. |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/teams/team-6p5jTwJQXwqZBncC
```

### Sample Response

The `sso-team-id` attribute is only returned in Terraform Enterprise 202204-1 and later, or in HCP Terraform.

```json
{
  "data": {
    "id": "team-6p5jTwJQXwqZBncC",
    "type": "teams",
    "attributes": {
      "name": "team-creation-test",
      "sso-team-id": "cb265c8e41bddf3f9926b2cf3d190f0e1627daa4",
      "users-count": 0,
      "visibility": "organization",
      "allow-member-token-management": true,
      "permissions": {
        "can-update-membership": true,
        "can-destroy": true,
        "can-update-organization-access": true,
        "can-update-api-token": true,
        "can-update-visibility": true
      },
      "organization-access": {
        "manage-policies": true,
        "manage-policy-overrides": false,
        "manage-run-tasks": true,
        "manage-workspaces": false,
        "manage-vcs-settings": false,
        "manage-agent-pools": false,
        "manage-providers": false,
        "manage-modules": false,
        "manage-projects": false,
        "read-projects": false,
        "read-workspaces": false,
        "manage-membership": false,
        "manage-teams": false,
        "manage-organization-access": false
      }
    },
    "relationships": {
      "users": {
        "data": []
      },
      "authentication-token": {
        "meta": {}
      }
    },
    "links": {
      "self": "/api/v2/teams/team-6p5jTwJQXwqZBncC"
    }
  }
}
```

## Update a Team

`PATCH /teams/:team_id`

| Parameter  | Description                |
| ---------- | -------------------------- |
| `:team_id` | The team ID to be updated. |

| Status | Response                            | Reason                                                         |
| ------ | ----------------------------------- | -------------------------------------------------------------- |
| 200    | JSON API document (`type: "teams"`) | Successfully updated a team                                    |
| 400    | JSON API error object               | Invalid `include` parameter                                    |
| 404    | JSON API error object               | Team not found, or user unauthorized to perform action         |
| 422    | JSON API error object               | Malformed request body (missing attributes, wrong types, etc.) |
| 500    | JSON API error object               | Failure during team update                                     |

### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

-> **Note**: You cannot set `manage-workspaces` to `false` when setting `manage-projects` to `true`, since project permissions cascade down to workspaces. This is also the case for `read-workspaces` and `read-projects`: if `read-projects` is `true`, `read-workspaces` must be `true` as well.

| Key path                                        | Type    | Default          | Description                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| ----------------------------------------------- | ------- | ---------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type`                                     | string  |                  | Must be `"teams"`.                                                                                                                                                                                                                                                                                                                                                                                                                              |
| `data.attributes.name`                          | string  | (previous value) | The name of the team, which can only include letters, numbers, `-`, and `_`. This will be used as an identifier and must be unique in the organization.                                                                                                                                                                                                                                                                                          |
| `data.attributes.sso-team-id`                   | string  | (previous value) | The unique identifier of the team from the SAML `MemberOf` attribute. Only available in Terraform Enterprise 202204-1 and later, or in HCP Terraform.                                                                                                                                                                                                                                                                                            |
| `data.attributes.organization-access`           | object  | (previous value) | Settings for the team's organization access. This object can include the `manage-policies`, `manage-policy-overrides`, `manage-run-tasks`, `manage-workspaces`, `manage-vcs-settings`, `manage-agent-pools`, `manage-providers`, `manage-modules`, `manage-projects`, `read-projects`, `read-workspaces`, `manage-membership`, `manage-teams`, and `manage-organization-access` properties with boolean values. All properties default to `false`. |
| `data.attributes.visibility` (beta)             | string  | (previous value) | The team's visibility. Must be `"secret"` or `"organization"` (visible).                                                                                                                                                                                                                                                                                                                                                                        |
| `data.attributes.allow-member-token-management` | boolean | (previous value) | The ability to enable and disable team token management for a team. Defaults to `true`.                                                                                                                                                                                                                                                                                                                                                         |

### Sample Payload

```json
{
  "data": {
    "type": "teams",
    "attributes": {
      "visibility": "organization",
      "allow-member-token-management": true,
      "organization-access": {
        "manage-vcs-settings": true
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/teams/team-6p5jTwJQXwqZBncC
```

### Sample Response

The `sso-team-id` attribute is only returned in Terraform Enterprise 202204-1 and later, or in HCP Terraform.

```json
{
  "data": {
    "attributes": {
      "name": "team-creation-test",
      "sso-team-id": "cb265c8e41bddf3f9926b2cf3d190f0e1627daa4",
      "organization-access": {
        "manage-policies": false,
        "manage-policy-overrides": false,
        "manage-run-tasks": true,
        "manage-vcs-settings": true,
        "manage-agent-pools": false,
        "manage-workspaces": true,
        "manage-providers": false,
        "manage-modules": false,
        "manage-projects": false,
        "read-projects": false,
        "read-workspaces": true,
        "manage-membership": false,
        "manage-teams": false,
        "manage-organization-access": false
      },
      "visibility": "organization",
      "allow-member-token-management": true,
      "permissions": {
        "can-update-membership": true,
        "can-destroy": true,
        "can-update-organization-access": true,
        "can-update-api-token": true,
        "can-update-visibility": true
      },
      "users-count": 0
    },
    "id": "team-6p5jTwJQXwqZBncC",
    "links": {
      "self": "/api/v2/teams/team-6p5jTwJQXwqZBncC"
    },
    "relationships": {
      "authentication-token": {
        "meta": {}
      },
      "users": {
        "data": []
      }
    },
    "type": "teams"
  }
}
```

## Delete a Team

`DELETE /teams/:team_id`

| Parameter  | Description                |
| ---------- | -------------------------- |
| `:team_id` | The team ID to be deleted. |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/teams/team-6p5jTwJQXwqZBncC
```

## Available Related Resources

The GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](/terraform/cloud-docs/api-docs#inclusion-of-related-resources). The following resource types are available:

- `users` (string) - Returns the full user record for every member of a team.
- `organization-memberships` (string) - Returns the full organization membership record for every member of a team.
---
page_title: Policy Evaluations - API Docs - HCP Terraform
description: >-
  Use the `/policy-evaluations` endpoint to manage the Sentinel and OPA policy evaluations performed on a Terraform run. List and show policy outcomes and list policy evaluations using the HTTP API.
---

[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200

[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404

[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents

[JSON API error object]: https://jsonapi.org/format/#error-objects

# Policy Evaluations API

Policy evaluations are run within the [HCP Terraform agents](/terraform/cloud-docs/api-docs/agents) in HCP Terraform's infrastructure. Policy evaluations do not have access to cost estimation data.
This set of APIs provides endpoints to list and get policy evaluations and policy outcomes.

## List Policy Evaluations in the Task Stage

Each run passes through several stages of action (pending, plan, policy check, apply, and completion), and shows the progress through those stages as [run states](/terraform/cloud-docs/run/states).
This endpoint allows you to list policy evaluations that are part of the task stage.

`GET /task-stages/:task_stage_id/policy-evaluations`

| Parameter        | Description               |
| ---------------- | ------------------------- |
| `:task_stage_id` | The task stage ID to get. |

| Status  | Response                  | Reason               |
| ------- | ------------------------- | -------------------- |
| [200][] | [JSON API document][]     | Success              |
| [404][] | [JSON API error object][] | Task stage not found |

### Query Parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling does not automatically encode URLs.

| Parameter      | Description                                                                    |
| -------------- | ------------------------------------------------------------------------------ |
| `page[number]` | **Optional.** If omitted, the endpoint returns the first page.                 |
| `page[size]`   | **Optional.** If omitted, the endpoint returns 20 policy evaluations per page. |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/task-stages/ts-rL5ZsuwfjqfPJcdi/policy-evaluations
```

### Sample Response

```json
{
   "data":[
      {
         "id":"poleval-8Jj9Hfoz892D9WMX",
         "type":"policy-evaluations",
         "attributes":{
            "status":"passed",
            "policy-kind":"opa",
            "policy-tool-version": "0.44.0",
            "result-count": {
              "advisory-failed":0,
              "errored":0,
              "mandatory-failed":0,
              "passed":1
            },
            "status-timestamps":{
               "passed-at":"2022-09-16T01:40:30+00:00",
               "queued-at":"2022-09-16T01:40:04+00:00",
               "running-at":"2022-09-16T01:40:08+00:00"
            },
            "created-at":"2022-09-16T01:39:07.782Z",
            "updated-at":"2022-09-16T01:40:30.010Z"
         },
         "relationships":{
            "policy-attachable":{
               "data":{
                  "id":"ts-yxskot8Gz5yHa38W",
                  "type":"task-stages"
               }
            },
            "policy-set-outcomes":{
               "links":{
                  "related":"/api/v2/policy-evaluations/poleval-8Jj9Hfoz892D9WMX/policy-set-outcomes"
               }
            }
         },
         "links":{
            "self":"/api/v2/policy-evaluations/poleval-8Jj9Hfoz892D9WMX"
         }
      }
   ]
}
```

## List Policy Outcomes

`GET /policy-evaluations/:policy_evaluation_id/policy-set-outcomes`

| Parameter               | Description                                             |
| ----------------------- | ------------------------------------------------------- |
| `:policy_evaluation_id` | The ID of the policy evaluation the outcome belongs to. |

This endpoint allows you to list policy set outcomes that are part of the policy evaluation.

| Status  | Response                  | Reason                      |
| ------- | ------------------------- | --------------------------- |
| [200][] | [JSON API document][]     | Success                     |
| [404][] | [JSON API error object][] | Policy evaluation not found |

### Query Parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling does not automatically encode URLs.

| Parameter                     | Description                                                                                                                        |
| ----------------------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
| `page[number]`                | **Optional.** If omitted, the endpoint returns the first page.                                                                     |
| `page[size]`                  | **Optional.** If omitted, the endpoint returns 20 policy sets per page.                                                            |
| `filter[n][status]`           | **Optional.** If omitted, the endpoint returns all policies regardless of status. Must be either "passed", "failed", or "errored". |
| `filter[n][enforcementLevel]` | **Optional.** Only used if paired with a non-errored status filter. Must be either "advisory" or "mandatory."                      |

-> **Note**: You can use `filter[n]` to combine statuses and enforcement levels. Policy outcomes with an errored status do not have an enforcement level.

### Sample Request

The following example requests demonstrate how to call the `policy-set-outcomes` endpoint using curl.

#### All Policy Outcomes

The following example call returns all policy set outcomes.

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/policy-evaluations/poleval-8Jj9Hfoz892D9WMX/policy-set-outcomes
```

#### Failed and Errored Policy Outcomes

The following example call filters the response so that it only contains failed outcomes and errors.

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/policy-evaluations/poleval-8Jj9Hfoz892D9WMX/policy-set-outcomes?filter[0][status]=errored&filter[1][status]=failed&filter[1][enforcementLevel]=mandatory
```

### Sample Response

The following example response shows that the `policyVCS` policy failed.

```json
{
   "data":[
      {
         "id":"psout-cu8E9a97LBepZZXd",
         "type":"policy-set-outcomes",
         "attributes":{
            "outcomes":[
               {
                  "enforcement_level":"advisory",
                  "query":"data.terraform.main.main",
                  "status":"failed",
                  "policy_name":"policyVCS",
                  "description":""
               }
            ],
            "error":"",
            "overridable":true,
            "policy-set-name":"opa-policies-vcs",
            "policy-set-description":null,
            "result-count":{
               "advisory-failed":1,
               "errored":0,
               "mandatory-failed":0,
               "passed":0
            },
            "policy-tool-version": "0.54.0"
         },
         "relationships":{
            "policy-evaluation":{
               "data":{
                  "id":"poleval-8Jj9Hfoz892D9WMX",
                  "type":"policy-evaluations"
               }
            }
         }
      }
   ],
   "links":{
      "self":"/api/v2/policy-evaluations/poleval-8Jj9Hfoz892D9WMX/policy-set-outcomes?page%5Bnumber%5D=1\u0026page%5Bsize%5D=20",
      "first":"/api/v2/policy-evaluations/poleval-8Jj9Hfoz892D9WMX/policy-set-outcomes?page%5Bnumber%5D=1\u0026page%5Bsize%5D=20",
      "prev":null,
      "next":null,
      "last":"/api/v2/policy-evaluations/poleval-8Jj9Hfoz892D9WMX/policy-set-outcomes?page%5Bnumber%5D=1\u0026page%5Bsize%5D=20"
   },
   "meta":{
      "pagination":{
         "current-page":1,
         "page-size":20,
         "prev-page":null,
         "next-page":null,
         "total-pages":1,
         "total-count":1
      }
   }
}
```

## Show a Policy Outcome

`GET /policy-set-outcomes/:policy_set_outcome_id`

| Parameter                | Description                                                                                                                                   |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------- |
| `:policy_set_outcome_id` | The ID of the policy outcome to show. Refer to [List the Policy Outcomes](#list-policy-outcomes) for reference information about finding IDs. |

| Status  | Response                  | Reason                                                              |
| ------- | ------------------------- | ------------------------------------------------------------------- |
| [200][] | [JSON API document][]     | The request was successful                                          |
| [404][] | [JSON API error object][] | Policy set outcome not found or user unauthorized to perform action |

### Sample Request

The following example request gets the outcomes for the `psout-cu8E9a97LBepZZXd` policy set.

```shell
curl --request GET \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/policy-set-outcomes/psout-cu8E9a97LBepZZXd
```

### Sample Response

The following example response shows that the `policyVCS` policy failed.

```json
{
   "data":{
      "id":"psout-cu8E9a97LBepZZXd",
      "type":"policy-set-outcomes",
      "attributes":{
         "outcomes":[
            {
               "enforcement_level":"advisory",
               "query":"data.terraform.main.main",
               "status":"failed",
               "policy_name":"policyVCS",
               "description":""
            }
         ],
         "error":"",
         "overridable":true,
         "policy-set-name":"opa-policies-vcs",
         "policy-set-description":null,
         "result-count":{
            "advisory-failed":1,
            "errored":0,
            "mandatory-failed":0,
            "passed":0
         },
         "policy-tool-version": "0.54.0"
      },
      "relationships":{
         "policy-evaluation":{
            "data":{
               "id":"poleval-8Jj9Hfoz892D9WMX",
               "type":"policy-evaluations"
            }
         }
      }
   }
}
```
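The `result-count` object in the responses above summarizes each outcome by enforcement level and status. A minimal sketch of interpreting it, assuming (as the enforcement levels suggest) that mandatory failures and errors are the blocking cases while advisory failures only warn; the helper is illustrative, not part of the API:

```python
# Illustrative classification of a policy-set-outcome's result-count object.
# Assumption: mandatory failures and errored policies block a run, while
# advisory failures are warnings only. Hypothetical helper, not an official client.

def evaluation_blocks_run(result_count: dict) -> bool:
    """True if the result-count indicates a blocking policy outcome."""
    return (
        result_count.get("mandatory-failed", 0) > 0
        or result_count.get("errored", 0) > 0
    )

# The sample response above reports one advisory failure and nothing mandatory,
# so under this reading it would not block the run.
sample = {"advisory-failed": 1, "errored": 0, "mandatory-failed": 0, "passed": 0}
```

Note that `overridable` in the response indicates whether a blocking outcome can still be overridden by a user with the appropriate permissions.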
{"questions":"terraform API Changelog Keep track of changes to the API for HCP Terraform and Terraform Enterprise page title API Changelog API Docs HCP Terraform page id api changelog Learn about the changes made to the HCP Terraform and Enterprise APIs","answers":"---\npage_title: API Changelog - API Docs - HCP Terraform\npage_id: api-changelog\ndescription: >-\n  Learn about the changes made to the HCP Terraform and Enterprise APIs.\n---\n\n# API Changelog\n\nKeep track of changes to the API for HCP Terraform and Terraform Enterprise.\n\n## 2024-10-15\n* Add new documentation for the ability to deprecate, and revert the deprecation of, module versions. Learn more about [Managing module versions](\/terraform\/cloud-docs\/api-docs\/private-registry\/manage-module-versions).\n\n## 2024-10-14\n* Update the [Organizations API](\/terraform\/cloud-docs\/api-docs\/organizations) to support the `speculative-plan-management-enabled` attribute, which controls [automatic cancellation of plan-only runs triggered by outdated commits](\/terraform\/cloud-docs\/users-teams-organizations\/organizations\/vcs-speculative-plan-management).\n\n## 2024-10-11\n* Add documentation for the new timeframe filter on List endpoints for the [Runs](\/terraform\/cloud-docs\/api-docs\/run) API.\n\n## 2024-09-02\n* Add warning about the deprecation and future removal of the [Policy Checks](\/terraform\/cloud-docs\/api-docs\/policy-checks) API.\n\n## 2024-08-16\n* Fix Workspace API responses to be consistent and contain all attributes and relationships.\n\n## 2024-08-14\n* Add documentation for a new API endpoint that lists an [organization's team tokens](\/terraform\/cloud-docs\/api-docs\/team-tokens).\n\n## 2024-08-01\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform.\n<\/EnterpriseAlert>\n\n* Update the [admin settings API](\/terraform\/enterprise\/api-docs\/admin\/settings#update-general-settings) and [admin organizations 
API](\/terraform\/enterprise\/api-docs\/admin\/organizations#update-an-organization) to indicate that the `terraform-build-worker-plan-timeout` and `terraform-build-worker-apply-timeout` attributes are deprecated and will be replaced by `plan-timeout` and `apply-timeout`, respectively.\n\n<!-- BEGIN: TFC:only name:audit-trail-tokens -->\n## 2024-07-24\n* Remove beta tags from documentation for audit trail tokens.\n<!-- END: TFC:only name:audit-trail-tokens -->\n\n## 2024-07-15\n* Update the [Team API](\/terraform\/cloud-docs\/api-docs\/teams) to include `allow-member-token-management`.\n\n<!-- BEGIN: TFC:only name:audit-trail-tokens -->\n## 2024-07-12\n* Add beta tags to documentation for audit trail tokens.\n<!-- END: TFC:only name:audit-trail-tokens -->\n\n## 2024-06-25\n* Add API documentation for new [team token management setting](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens).\n* Update API documentation for the [manage teams permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#team-management-permissions).\n\n## 2024-05-29\n* Add API documentation for the new [audit trails token](\/terraform\/cloud-docs\/api-docs\/audit-trails-tokens).\n\n## 2024-05-23\n* Update the [registry modules API](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules#create-a-module-version) for publishing new versions of branch-based modules.\n\n## 2024-05-10\n* Add API documentation for new [manage agent pools permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-agent-pools).\n\n## 2024-04-25\n* Project names can now be up to 40 characters.\n\n## 2024-04-08\n* Add API documentation for new [team management permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#team-management-permissions).\n\n## 2024-04-04\n* Add a `sort` parameter to the [Projects list API](\/terraform\/cloud-docs\/api-docs\/projects#query-parameters) to allow sorting projects by name.\n* Add a `description` 
attribute to the [Projects API](\/terraform\/cloud-docs\/api-docs\/projects).\n* Add `project-count` and `workspace-count` attributes to sample [Projects API](\/terraform\/cloud-docs\/api-docs\/projects) responses.\n\n## 2024-03-27\n* Add `private-vcs` to [Feature Entitlements](\/terraform\/cloud-docs\/api-docs#feature-entitlements).\n\n## 2024-03-26\n* Add API documentation for searching [variable sets](\/terraform\/cloud-docs\/api-docs\/variable-sets#list-variable-sets) by name.\n\n## 2024-03-14\n* Add documentation for creating runs with debugging mode enabled.\n\n## 2024-03-12\n* Update OAuth Client API endpoints to create, update, and return projects associated with an OAuth client.\n* Add API endpoints to [Attach an OAuth Client](\/terraform\/cloud-docs\/api-docs\/oauth-clients#attach-an-oauth-client-to-projects) and [Detach an OAuth Client](\/terraform\/cloud-docs\/api-docs\/oauth-clients#detach-an-oauth-client-from-projects) from a project.\n* Add `organization-scoped` attribute to the [OAuth Clients API](\/terraform\/cloud-docs\/api-docs\/oauth-clients).\n\n## 2024-02-29\n* Update [run task stages](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-task-stages-and-results) with new multi-stage payload format.\n* Update [run tasks](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks) with global run tasks request\/response payloads.\n\n## 2024-02-27\n* Add `private-policy-agents` to [Feature Entitlements](\/terraform\/cloud-docs\/api-docs#feature-entitlements).\n\n## 2024-02-20\n* Add documentation for configuring organization and workspace data retention policies through the API and on the different [types of data retention policies](\/terraform\/enterprise\/api-docs\/data-retention-policies).\n<!-- BEGIN: TFC:only name:explorer -->\n## 2024-02-08\n* Add [Explorer API documentation](\/terraform\/cloud-docs\/api-docs\/explorer).\n<!-- END: TFC:only name:explorer -->\n## 2024-01-30\n* Update the [Audit trails](\/terraform\/cloud-docs\/api-docs\/audit-trails) 
documentation to expand on the payloads for each event.\n\n## 2024-01-24\n- Introduce configurable data retention policies at the site-wide level and extend data retention policies at the organization and workspace levels.\n- Added and\/or updated data retention policy documentation to the following topics:\n    - [Admin Settings Documentation](\/terraform\/enterprise\/application-administration\/general#data-retention-policies)\n    - [Admin API Documentation](\/terraform\/enterprise\/api-docs\/admin\/settings#data-retention-policies)\n    - [Organization Documentation](\/terraform\/enterprise\/users-teams-organizations\/organizations#data-retention-policies)\n    - [Workspace Documentation](\/terraform\/enterprise\/workspaces\/settings\/deletion#data-retention-policies)\n\n## 2024-01-04\n* Update the [Organizations API](\/terraform\/cloud-docs\/api-docs\/organizations) to support the `aggregated-commit-status-enabled` attribute, which controls whether [Aggregated Status Checks](\/terraform\/cloud-docs\/users-teams-organizations\/organizations\/vcs-status-checks) are enabled.\n\n## 2023-11-17\n\n-   Added the [`opa-versions` endpoint](\/terraform\/enterprise\/api-docs\/admin\/opa-versions) to allow administrators to manage available Open Policy Agent (OPA) versions.\n-   Added the [`sentinel-versions` endpoint](\/terraform\/enterprise\/api-docs\/admin\/sentinel-versions) to allow administrators to manage available Sentinel versions.\n-   Add `authenticated-resource` relationship to the [`account` API](\/terraform\/enterprise\/api-docs\/account).\n\n## 2023-11-15\n\n- Introduce configurable data retention policies at the [organization](\/terraform\/enterprise\/users-teams-organizations\/organizations#data-retention-policies) and [workspace](\/terraform\/enterprise\/workspaces\/settings\/deletion#data-retention-policies) levels.\n- Added data retention policy documentation to the following topics:\n   - [`state-versions` API 
documentation](\/terraform\/enterprise\/api-docs\/state-versions)\n   - [`configuration-versions` API documentation](\/terraform\/enterprise\/api-docs\/configuration-versions)\n   - [Organizations documentation](\/terraform\/enterprise\/users-teams-organizations\/organizations#destruction-and-deletion)\n   - [Workspaces documentation](\/terraform\/enterprise\/workspaces\/settings\/deletion#data-retention-policies)\n\n## 2023-11-07\n* Add `auto_destroy_activity_duration` to the [Workspaces API](\/terraform\/cloud-docs\/api-docs\/workspaces), which allows Terraform Cloud to schedule auto-destroy runs [based on workspace inactivity](\/terraform\/cloud-docs\/workspaces\/settings\/deletion#automatically-destroy).\n\n## 2023-10-31\n* Update the [Workspaces API](\/terraform\/cloud-docs\/api-docs\/workspaces) to support the `auto-apply-run-trigger` attribute, which controls whether run trigger runs are auto-applied.\n\n## 2023-10-30\n* Add `priority` attribute to the [Variable Sets API](\/terraform\/cloud-docs\/api-docs\/variable-sets).\n\n## 2023-10-04\n* Updates to the [run task integration API](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks-integration):\n  * Fix invalid JSON in the example payload.\n  * Clarify the expected JSON:API payload fields.\n* Add `policy-tool-version` attribute to [Policy Set Outcomes](\/terraform\/cloud-docs\/api-docs\/policy-evaluations#list-policy-outcomes).\n\n## 2023-10-03\n* Update [Policy Sets API](\/terraform\/cloud-docs\/api-docs\/policy-sets) to include `agent-enabled` and `policy-tool-version`.\n* Update [Policy Evaluations API](\/terraform\/cloud-docs\/api-docs\/policy-evaluations) to include `policy-tool-version`.\n\n## 2023-09-29\n* Add support for [streamlined run task reviews](\/terraform\/cloud-docs\/integrations\/run-tasks), enabling run task integrations to return high-fidelity results.\n  * Update the [Terraform Cloud run task API](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks) to enable streamlined run task 
reviews.\n  * The [run task integration API](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks-integration) now guides integrations through sending rich results.\n  * Updated the run task payload [JSON Schema](https:\/\/github.com\/hashicorp\/terraform-docs-common\/blob\/main\/website\/public\/schema\/run-tasks\/runtask-result.json).\n\n## 2023-09-25\n* Add `intermediate` boolean attribute to the [State Versions API](\/terraform\/cloud-docs\/api-docs\/state-versions).\n\n## 2023-09-19\n* Add [failed state upload recovery](\/terraform\/cloud-docs\/api-docs\/applies#recover-a-failed-state-upload-after-applying) endpoint.\n\n## 2023-09-15\n* Add `auto-destroy-at` attribute to the [Workspaces API](\/terraform\/cloud-docs\/api-docs\/workspaces).\n* Update the [Notification Configurations API](\/terraform\/cloud-docs\/api-docs\/notification-configurations) to include [automatic destroy run](\/terraform\/cloud-docs\/api-docs\/notification-configurations#automatic-destroy-runs) details.\n\n## 2023-09-08\n* Update the [Organizations API](\/terraform\/cloud-docs\/api-docs\/organizations) to include `default-execution-mode` and `default-agent-pool`.\n* Update the [Workspaces API](\/terraform\/cloud-docs\/api-docs\/workspaces) to add a `setting-overwrites` object to allow you to overwrite `default-execution-mode` and `default-agent-pool`.\n\n## 2023-09-06\n* Update Policy Sets API endpoints to create, update, and return excluded workspaces associated with a policy set.\n* Add API endpoints to [Attach a Policy Set](\/terraform\/cloud-docs\/api-docs\/policy-sets#attach-a-policy-set-to-exclusions) and [Detach a Policy Set](\/terraform\/cloud-docs\/api-docs\/policy-sets#detach-a-policy-set-to-exclusions) from excluded workspaces.\n\n## 2023-08-21\n* Add `save-plan` attribute, `planned_and_saved` status, and `save_plan` operation type to [Runs endpoints](\/terraform\/cloud-docs\/api-docs\/run).\n\n## 2023-08-10\n* Add `provisional` to `configuration-versions` endpoint.\n\n## 
2023-07-26\n\n* Add support for a `custom` option to the `team_project` access level along with various customizable permissions. The `project-access` permissions apply to the project itself, and `workspace-access` permissions apply to all workspaces within the project. For more information, see [Project Team Access](\/terraform\/cloud-docs\/api-docs\/project-team-access).\n\n## 2023-06-09\n\n* Introduce support for [`import` blocks](\/terraform\/language\/import\/generating-configuration).\n  * [Runs](\/terraform\/cloud-docs\/api-docs\/run#create-a-run) have a new `allow-config-generation` option.\n  * [Plans](\/terraform\/cloud-docs\/api-docs\/plans#show-a-plan) have new `resource-imports` and `generated-configuration` properties.\n  * [Applies](\/terraform\/cloud-docs\/api-docs\/applies#show-an-apply) have a new `resource-imports` property.\n* The workspaces associated with a policy set can now be updated using the [Policy Sets PATCH endpoint](\/terraform\/cloud-docs\/api-docs\/policy-sets#update-a-policy-set)\n* Update the [Workspaces](\/terraform\/cloud-docs\/api-docs\/workspaces) API endpoints to include the associated [project](\/terraform\/cloud-docs\/api-docs\/projects).\n\n## 2023-05-25\n\n* Terraform Cloud sets the `configuration_version_download_url`, `configuration_version_id`, and `workspace_working_directory` properties for all stages of the [Run Task Request](\/terraform\/enterprise\/api-docs\/run-tasks\/run-tasks-integration#request-body).\n* Add the new `enforcement-level` property in the request and response of [Policies endpoints](\/terraform\/cloud-docs\/api-docs\/policies).\n* Deprecate the old `enforce` property in the request and response of [Policies endpoints](\/terraform\/cloud-docs\/api-docs\/policies).\n* Add new properties to limit run tasks and policies for the Terraform Cloud free tier. 
We updated the [entitlement set](\/terraform\/cloud-docs\/api-docs\/organizations#show-the-entitlement-set), [feature set](\/terraform\/cloud-docs\/api-docs\/feature-sets#sample-response), and [subscription](\/terraform\/cloud-docs\/api-docs\/subscriptions#sample-response) endpoints with the following properties:\n  * `run-task-limit`\n  * `run-task-workspace-limit`\n  * `run-task-mandatory-enforcement-limit`\n  * `policy-set-limit`\n  * `policy-limit`\n  * `policy-mandatory-enforcement-limit`\n  * `versioned-policy-set-limit`\n\n## 2023-04-25\n\n* Add the `version-id` property to the response for the Create, List, and Update [Workspace Variables endpoints](\/terraform\/cloud-docs\/api-docs\/workspaces-variables).\n\n## 2023-03-30\n\n* Add the `sort` query parameter to the Workspaces API's [list workspaces endpoint](\/terraform\/cloud-docs\/api-docs\/workspaces#list-workspaces).\n\n## 2023-03-24\n\n* Update the [Variable Sets](\/terraform\/cloud-docs\/api-docs\/variable-sets) API endpoints to include assigning variable sets to projects.\n\n## 2023-03-20\n\n* Add a names filter to the [Projects list API](\/terraform\/cloud-docs\/api-docs\/projects#query-parameters) to allow fetching a list of projects by name.\n\n## 2023-03-13\n\n* Update [Project Team Access API](\/terraform\/cloud-docs\/api-docs\/project-team-access) to include additional Project roles.\n* Update [Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) to reflect the decoupling of projects and workspaces in the Organization Permissions UI.\n\n## 2023-03-08\n\n* Introduced the [GitHub App Installation APIs](\/terraform\/cloud-docs\/api-docs\/github-app-installations).\n* Updated [Workspaces API](\/terraform\/cloud-docs\/api-docs\/workspaces) to accept `vcs-repo.github-app-installation-id` to connect a workspace to a GitHub App Installation.\n* Updated [Registry Module API](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules) to accept 
`vcs-repo.github-app-installation-id` to connect to a GitHub App Installation.\n* Updated [Policy Sets API](\/terraform\/cloud-docs\/api-docs\/policy-sets) to accept `vcs-repo.github-app-installation-id` to connect to a GitHub App Installation.\n\n## 2023-02-16\n\n* Add `manage-membership` to the organization access settings of the [Team API](\/terraform\/cloud-docs\/api-docs\/teams).\n\n## 2023-02-03\n\n* Updated the [List Runs API](\/terraform\/cloud-docs\/api-docs\/run#list-runs-in-a-workspace) to note that the filter query parameters accept comma-separated lists.\n\n## 2023-01-18\n\n* Updated the [Team API](\/terraform\/cloud-docs\/api-docs\/teams) to include the `read-workspaces` and `read-projects` permissions, which grant teams view access to workspaces and projects.\n\n## 2023-01-17\n\n* Add [Projects API](\/terraform\/cloud-docs\/api-docs\/projects) for creating, updating, and deleting projects.\n* Add [Project Team Access API](\/terraform\/cloud-docs\/api-docs\/project-team-access) for managing access for teams to individual projects.\n* Update [Workspaces API](\/terraform\/cloud-docs\/api-docs\/workspaces) to include examples of creating a workspace in a project and moving a workspace between projects.\n* Update [List Teams API](\/terraform\/cloud-docs\/api-docs\/teams#query-parameters) to accept a search parameter `q`, so that teams can be searched by name.\n\n## 2023-01-12\n\n* Added a new rollback-to-previous-state endpoint to the [State Versions API](\/terraform\/cloud-docs\/api-docs\/state-versions).\n\n## 2022-12-22\n\n* Updated [Safe Delete a workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#safe-delete-a-workspace) to correct the HTTP verb to `POST`.\n\n## 2022-11-18\n\n* Update [Policies API](\/terraform\/cloud-docs\/api-docs\/policies) to fix policy enforcement level defaults. 
Enforcement level is a required field, so no defaults are available.\n\n## 2022-11-03\n\n* Update [Policy Checks](\/terraform\/cloud-docs\/api-docs\/policy-checks) to fix policy set outcome return data type.\n\n### 2022-10-17\n\n* Updated the [Organizations API](\/terraform\/cloud-docs\/api-docs\/organizations) with the `allow-force-delete-workspaces` attribute, which controls whether workspace administrators can delete workspaces with resources under management.\n* Updated the [Workspaces API](\/terraform\/cloud-docs\/api-docs\/workspaces) with a safe delete endpoint that guards against deleting workspaces that are managing resources.\n\n### 2022-10-12\n\n* Update [Policy Checks](\/terraform\/cloud-docs\/api-docs\/policy-checks) with result counts and support for filtering policy set outcomes.\n* Update [Team Membership API](\/terraform\/cloud-docs\/api-docs\/team-members) to include adding and removing users from teams using organization membership ID.\n\n<!-- BEGIN: TFC:only name:opa-policies -->\n### 2022-10-06\n\n* Updated the [Policies API](\/terraform\/cloud-docs\/api-docs\/policies) with support for Open Policy Agent (OPA) policies.\n* Update [Policy Sets](\/terraform\/cloud-docs\/api-docs\/policy-sets) with support for OPA policy sets.\n* Updated [Policy Checks](\/terraform\/cloud-docs\/api-docs\/policy-checks) to add support for listing policy evaluations and policy set outcomes.\n* Update [Run Tasks Stage](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-task-stages-and-results) to include the new `policy_evaluations` attribute in the output.\n<!-- END: TFC:only name:opa-policies -->\n\n### 2022-09-21\n\n* Update [State Versions](\/terraform\/cloud-docs\/api-docs\/state-versions#create) with optional `json-state-outputs` and `json-state` attributes, which are base-64 encoded external JSON representations of the Terraform state. 
The read-only `hosted-json-state-download-url` attribute links to this version of the state file when available.\n* Update [State Version Outputs](\/terraform\/cloud-docs\/api-docs\/state-version-outputs) with a `detailed-type` attribute, which refines the output with the precise Terraform type.\n\n### 2022-07-26\n\n* Updated the [Run status list](\/terraform\/cloud-docs\/api-docs\/run#run-states) with `fetching`, `queuing`, `pre_plan_running`, and `pre_plan_completed`.\n* Update [Run Tasks](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks) to include the new `stages` attribute when attaching or updating workspace tasks.\n* Updated [Run Tasks Integration](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks-integration) to specify the different request payloads for different stages.\n\n### 2022-06-23\n<!-- BEGIN: TFC:only name:health-assessments -->\n* Added the [Assessments API](\/terraform\/cloud-docs\/api-docs\/assessments).\n* Updated [Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#create-a-workspace) and\n[Notification Configurations](\/terraform\/cloud-docs\/api-docs\/notification-configurations#notification-triggers) to account for assessments.\n<!-- END: TFC:only name:health-assessments -->\n\n* Added new query parameters to the [List Runs endpoint](\/terraform\/cloud-docs\/api-docs\/run#list-runs-in-a-workspace).\n\n### 2022-06-21\n* Updated [Admin Organizations](\/terraform\/enterprise\/api-docs\/admin\/organizations) endpoints with new `workspace-limit` setting. This is available in Terraform Enterprise v202207-1 and later.\n* Updated [List Agent Pools](\/terraform\/cloud-docs\/api-docs\/agents#list-agent-pools) to accept a filter parameter `filter[allowed_workspaces][name]` so that agent pools can be filtered by name of an associated workspace. The given workspace must be allowed to use the agent pool. 
Refer to [Scoping Agent Pools to Specific Workspaces](\/terraform\/cloud-docs\/agents#scope-an-agent-pool-to-specific-workspaces).\n* Added new `organization-scoped` attribute and `allowed-workspaces` relationship to the request\/response body of the following endpoints. This is available in Terraform Enterprise v202207-1 and later.\n  * [Show an Agent Pool](\/terraform\/cloud-docs\/api-docs\/agents#show-an-agent-pool)\n  * [Create an Agent Pool](\/terraform\/cloud-docs\/api-docs\/agents#create-an-agent-pool)\n  * [Update an Agent Pool](\/terraform\/cloud-docs\/api-docs\/agents#update-an-agent-pool)\n\n### 2022-06-17\n* Updated the [Creating a Run Task](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks#creating-a-run-task) section to include the new description field for configuring a run task.\n* Update [Run Tasks](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks) to include the new description attribute.\n\n### 2022-06-09\n\n* Updated [List Agent Pools](\/terraform\/cloud-docs\/api-docs\/agents#list-agent-pools) to accept a search parameter `q`, so that agent pools can be searched by `name`. This is available in Terraform Enterprise v202207-1 and later.\n* Fixed [List Workspaces](\/terraform\/cloud-docs\/api-docs\/workspaces#list-workspaces) to add missing `search[tags]` query parameter.\n* Updated [List Workspaces](\/terraform\/cloud-docs\/api-docs\/workspaces#list-workspaces) to add new `search[exclude_tags]` query parameter. 
This is available in Terraform Enterprise v202207-1 and later.\n\n### 2022-05-11\n\n* Updated Run Tasks permission to the following endpoints:\n  * [Organizations](\/terraform\/cloud-docs\/api-docs\/organizations#list-organizations).\n  * [Team Access](\/terraform\/cloud-docs\/api-docs\/team-access#list-team-access-to-a-workspace).\n  * [Teams](\/terraform\/cloud-docs\/api-docs\/teams#list-teams).\n\n### 2022-05-04\n\n* Updated [Feature Sets](\/terraform\/cloud-docs\/api-docs\/feature-sets#list-feature-sets) to add new `run-tasks` attribute.\n\n### 2022-05-03\n\n* Added Run Tasks permission to the following endpoints:\n  * [Organizations](\/terraform\/cloud-docs\/api-docs\/organizations#list-organizations)\n  * [Workspaces](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace)\n\n### 2022-04-29\n\n* Updated [Run Tasks Integration](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks-integration) to specify the allowed `status` attribute values.\n* Updated [Organization Memberships](\/terraform\/cloud-docs\/api-docs\/organization-memberships#query-parameters) to add new `filter[email]` query parameter.\n* Updated [Teams](\/terraform\/cloud-docs\/api-docs\/teams#query-parameters) to add new `filter[names]` query parameter.\n\n### 2022-04-04\n\n* Added the [Run Tasks Integration](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks-integration) endpoint.\n\n### 2022-03-14\n\n* Added the [Download Configuration Files](\/terraform\/cloud-docs\/api-docs\/configuration-versions#download-configuration-files) endpoints.\n\n### 2022-03-11\n\n* Introduced [Archiving Configuration Versions](\/terraform\/cloud-docs\/workspaces\/configurations#archiving-configuration-versions).\n  * Updated [Configuration Versions](\/terraform\/cloud-docs\/api-docs\/configuration-versions#attributes) to add new `fetching` and `archived` states.\n  * Updated [Runs](\/terraform\/cloud-docs\/api-docs\/run#attributes) to add new `fetching` state.\n  * Added the [Archive a Configuration 
Version](\/terraform\/cloud-docs\/api-docs\/configuration-versions#archive-a-configuration-version) endpoint.\n\n### 2022-02-28\n\n* Introduced the [Registry Providers](\/terraform\/cloud-docs\/api-docs\/private-registry\/providers) endpoints to manage private providers for a private registry.\n\n### 2022-02-25\n\n* Updated [Workspace Run Tasks](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks#show-a-run-task) to add new `enabled` attribute.\n\n### 2021-12-09\n\n* Added `variables` field for POST \/runs and the run resource, enabling run-specific variable values.\n\n### 2021-12-03\n\n* OAuth API updated to handle `secret` and `rsa_public_key` fields for POST\/PUT.\n\n### 2021-11-17\n\n* Added pagination support to the following endpoints:\n  * [Feature Sets](\/terraform\/cloud-docs\/api-docs\/feature-sets#list-feature-sets) - `GET \/feature-sets`\n  * [Notification Configurations](\/terraform\/cloud-docs\/api-docs\/notification-configurations#list-notification-configurations) - `GET \/workspaces\/:workspace_id\/notification-configurations`\n  * [Oauth Clients](\/terraform\/cloud-docs\/api-docs\/oauth-clients#list-oauth-clients) - `GET \/organizations\/:organization_name\/oauth-clients`\n  * [Oauth Tokens](\/terraform\/cloud-docs\/api-docs\/oauth-tokens#list-oauth-tokens) - `GET \/oauth-clients\/:oauth_client_id\/oauth-tokens`\n  * [Organization Feature Sets](\/terraform\/cloud-docs\/api-docs\/feature-sets#list-feature-sets-for-organization) - `GET \/organizations\/:organization_name\/feature-sets`\n  * [Organizations](\/terraform\/cloud-docs\/api-docs\/organizations#list-organizations) - `GET \/organizations`\n  * [Policy Checks](\/terraform\/cloud-docs\/api-docs\/policy-checks#list-policy-checks) - `GET \/runs\/:run_id\/policy-checks`\n  * [Policy Set Parameters](\/terraform\/cloud-docs\/api-docs\/policy-set-params#list-parameters) - `GET \/policy-sets\/:policy_set_id\/parameters`\n  * [SSH Keys](\/terraform\/cloud-docs\/api-docs\/ssh-keys#list-ssh-keys) - 
`GET \/organizations\/:organization_name\/ssh-keys`\n  * [User Tokens](\/terraform\/cloud-docs\/api-docs\/user-tokens#list-user-tokens) - `GET \/users\/:user_id\/authentication-tokens`\n\n### 2021-11-18\n\n* Introduced the [Registry Providers](\/terraform\/cloud-docs\/api-docs\/private-registry\/providers) endpoint to manage public providers for a\n  private registry. These endpoints will be available in the following Terraform Enterprise Release: `v202112-1`\n\n### 2021-11-11\n\n* Introduced the [Variable Sets](\/terraform\/cloud-docs\/api-docs\/variable-sets) endpoints for viewing and administering Variable Sets.\n\n### 2021-09-12\n\n* Added [Run Tasks Stages and Results](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-task-stages-and-results) endpoint.\n\n### 2021-08-18\n\n* Introduced the [State Version Outputs](\/terraform\/cloud-docs\/api-docs\/state-versions) endpoint to retrieve the Outputs for a\n  given State Version.\n\n### 2021-08-11\n\n* **BREAKING CHANGE:** Security fix to [Configuration versions](\/terraform\/cloud-docs\/api-docs\/configuration-versions): upload-url attribute for [uploading configuration files](\/terraform\/cloud-docs\/api-docs\/configuration-versions#upload-configuration-files) is now only available on the create response.\n\n### 2021-07-30\n\n* Introduced Workspace Tagging.\n  * Updated [Workspaces](\/terraform\/cloud-docs\/api-docs\/workspaces):\n    * added `tag-names` attribute.\n    * added `POST \/workspaces\/:workspace_id\/relationships\/tags`\n    * added `DELETE \/workspaces\/:workspace_id\/relationships\/tags`\n  * Added [Organization Tags](\/terraform\/cloud-docs\/api-docs\/organization-tags).\n  * Added `tags` attribute to [`tfrun`](\/terraform\/cloud-docs\/policy-enforcement\/sentinel\/import\/tfrun).\n\n### 2021-07-19\n\n* [Notification configurations](\/terraform\/cloud-docs\/api-docs\/notification-configurations): Gave organization tokens permission to create and manage notification configurations.\n\n### 
2021-07-09\n\n* [State versions](\/terraform\/cloud-docs\/api-docs\/state-versions): Fixed the ID format for the workspace relationship of a state version. Previously, the reported ID was unusable due to a bug.\n* [Workspaces](\/terraform\/cloud-docs\/api-docs\/workspaces): Added `locked_by` as an includable related resource.\n* Added [Run Tasks](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks) API endpoint.\n\n### 2021-06-8\n\n* Updated [Registry Module APIs](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules).\n  * added `registry_name` scoped APIs.\n  * added `organization_name` scoped APIs.\n  * added [Module List API](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules#list-registry-modules-for-an-organization).\n  * updated [Module Delete APIs](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules#delete-a-module) (see deprecation note below).\n  * **CLOUD**: added public registry module related APIs.\n* **DEPRECATION**: The following [Registry Module APIs](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules) have been replaced with newer apis and will be removed in the future.\n  * The following endpoints to delete modules are replaced with [Module Delete APIs](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules#delete-a-module)\n    * `POST \/registry-modules\/actions\/delete\/:organization_name\/:name\/:provider\/:version` replaced with `DELETE \/organizations\/:organization_name\/registry-modules\/:registry_name\/:namespace\/:name\/:provider\/:version`\n    * `POST \/registry-modules\/actions\/delete\/:organization_name\/:name\/:provider` replaced with `DELETE \/organizations\/:organization_name\/registry-modules\/:registry_name\/:namespace\/:name\/:provider`\n    * `POST \/registry-modules\/actions\/delete\/:organization_name\/:name` replaced with `DELETE \/organizations\/:organization_name\/registry-modules\/:registry_name\/:namespace\/:name`\n  * `POST \/registry-modules` replaced with [`POST 
\/organizations\/:organization_name\/registry-modules\/vcs`](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules#publish-a-private-module-from-a-vcs)\n  * `POST \/registry-modules\/:organization_name\/:name\/:provider\/versions` replaced with [`POST \/organizations\/:organization_name\/registry-modules\/:registry_name\/:namespace\/:name\/:provider\/versions`](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules#create-a-module-version)\n  * `GET \/registry-modules\/show\/:organization_name\/:name\/:provider` replaced with [`GET \/organizations\/:organization_name\/registry-modules\/:registry_name\/:namespace\/:name\/:provider`](\/terraform\/cloud-docs\/api-docs\/private-registry\/modules#get-a-module)\n\n### 2021-05-27\n\n* **CLOUD**: [Agents](\/terraform\/cloud-docs\/api-docs\/agents): added [delete endpoint](\/terraform\/cloud-docs\/api-docs\/agents#delete-an-agent).\n\n### 2021-05-19\n\n* [Runs](\/terraform\/cloud-docs\/api-docs\/run): added `refresh`, `refresh-only`, and `replace-addrs` attributes.\n\n### 2021-04-16\n\n* Introduced [Controlled Remote State Access](https:\/\/www.hashicorp.com\/blog\/announcing-controlled-remote-state-access-for-terraform-cloud-and-enterprise).\n  * [Admin Settings](\/terraform\/enterprise\/api-docs\/admin\/settings): added `default-remote-state-access` attribute.\n  * [Workspaces](\/terraform\/cloud-docs\/api-docs\/workspaces):\n    * added `global-remote-state` attribute.\n    * added [Remote State Consumers](\/terraform\/cloud-docs\/api-docs\/workspaces#get-remote-state-consumers) relationship.\n\n### 2021-04-13\n\n* [Teams](\/terraform\/cloud-docs\/api-docs\/teams): added `manage-policy-overrides` property to the `organization-access` attribute.\n\n### 2021-03-23\n\n* **ENTERPRISE**: `v202103-1` Introduced [Share Modules Across Organizations with Terraform Enterprise](https:\/\/www.hashicorp.com\/blog\/share-modules-across-organizations-terraform-enterprise).\n  * [Admin 
Organizations](\/terraform\/enterprise\/api-docs\/admin\/organizations):\n    * added new query parameters to [List all Organizations endpoint](\/terraform\/enterprise\/api-docs\/admin\/organizations#query-parameters)\n    * added module-consumers link in `relationships` response\n    * added [update module consumers endpoint](\/terraform\/enterprise\/api-docs\/admin\/organizations#update-an-organization-39-s-module-consumers)\n    * added [list module consumers endpoint](\/terraform\/enterprise\/api-docs\/admin\/organizations#list-module-consumers-for-an-organization)\n  * [Organizations](\/terraform\/cloud-docs\/api-docs\/organizations): added [Module Producers](\/terraform\/cloud-docs\/api-docs\/organizations#show-module-producers)\n  * **DEPRECATION**: [Admin Module Sharing](\/terraform\/enterprise\/api-docs\/admin\/module-sharing): is replaced with a new JSON::Api compliant [endpoint](\/terraform\/enterprise\/api-docs\/admin\/organizations#update-an-organization-39-s-module-consumers)\n\n### 2021-03-18\n\n* **CLOUD**: Introduced [New Workspace Overview for Terraform Cloud](https:\/\/www.hashicorp.com\/blog\/new-workspace-overview-for-terraform-cloud).\n  * [Workspaces](\/terraform\/cloud-docs\/api-docs\/workspaces):\n    * added `resource-count` and `updated-at` attributes.\n    * added [performance attributes](\/terraform\/cloud-docs\/api-docs\/workspaces#workspace-performance-attributes) (`apply-duration-average`, `plan-duration-average`, `policy-check-failures`, `run-failures`, `workspaces-kpis-run-count`).\n    * added `readme` and `outputs` [related resources](\/terraform\/cloud-docs\/api-docs\/workspaces#available-related-resources).\n  * [Team Access](\/terraform\/cloud-docs\/api-docs\/team-access): updated to support pagination.\n\n### 2021-03-11\n\n* Added [VCS Events](\/terraform\/cloud-docs\/api-docs\/vcs-events), limited to GitLab.com connections.\n\n### 2021-03-08\n\n* [Workspaces](\/terraform\/cloud-docs\/api-docs\/workspaces): added 
`current_configuration_version` and `current_configuration_version.ingress_attributes` as includable related resources.","site":"terraform","answers_cleaned":"    page title  API Changelog   API Docs   HCP Terraform page id  api changelog description       Learn about the changes made to the HCP Terraform and Enterprise APIs         API Changelog  Keep track of changes to the API for HCP Terraform and Terraform Enterprise       2024 10 15   Add new documentation for the ability to deprecate  and revert the deprecation of  module versions  Learn more about  Managing module versions   terraform cloud docs api docs private registry manage module versions       2024 10 14   Update the  Organizations API   terraform cloud docs api docs organizations  to support the  speculative plan management enabled  attribute  which controls  automatic cancellation of plan only runs triggered by outdated commits   terraform cloud docs users teams organizations organizations vcs speculative plan management       2024 10 11   Add documentation for new timeframe filter on List endpoints for  Runs   terraform cloud docs api docs run  API     2024 09 02   Add warning about the deprecation and future removal of the  Policy Checks   terraform cloud docs api docs policy checks  API      2024 08 16   Fixes Workspace API responses to be consistent and contain all attributes and relationships      2024 08 14   Add documentation for a new API endpoint that lists an  organization s team tokens   terraform cloud docs api docs team tokens       2024 08 01   EnterpriseAlert  This endpoint is exclusive to Terraform Enterprise  and not available in HCP Terraform    EnterpriseAlert     Update the  admin settings API   terraform enterprise api docs admin settings  update general settings  and  admin organizations API   terraform enterprise api docs admin organizations update an organization  to indicate that the  terraform build worker plan timeout  and  terraform build worker apply timeout  attributes 
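As a sketch of what the 2021-11-17 pagination support looks like in practice, the list endpoints accept the JSON:API-style `page[number]` and `page[size]` query parameters. The organization name below is hypothetical, and the brackets are percent-encoded so the URL is safe to pass to any HTTP client:

```shell
# Compose a paginated list request for an organization's SSH keys
# (hypothetical org name; %5B / %5D are the encoded [ and ] brackets).
ORG="my-org"
URL="https://app.terraform.io/api/v2/organizations/${ORG}/ssh-keys?page%5Bnumber%5D=2&page%5Bsize%5D=20"
echo "$URL"
# An actual request would add the auth header, e.g.:
#   curl --header "Authorization: Bearer $TOKEN" "$URL"
```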
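The 2021-06-08 deprecation mapping for module deletion is mechanical path rewriting. A minimal sketch with hypothetical module coordinates, assuming a private module where `:registry_name` is `private` and `:namespace` matches the organization name:

```shell
# Hypothetical module coordinates for illustration only.
ORG="my-org"; NAME="my-module"; PROVIDER="aws"; VERSION="1.0.0"

# Deprecated form: POST /registry-modules/actions/delete/...
OLD_PATH="/api/v2/registry-modules/actions/delete/${ORG}/${NAME}/${PROVIDER}/${VERSION}"

# Replacement form: DELETE /organizations/:organization_name/registry-modules/...
NEW_PATH="/api/v2/organizations/${ORG}/registry-modules/private/${ORG}/${NAME}/${PROVIDER}/${VERSION}"

echo "$OLD_PATH"
echo "$NEW_PATH"
```

Dropping the `:version` (and then the `:provider`) segment from both paths yields the provider-level and module-level variants listed above.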
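The run-specific `variables` field added on 2021-12-09 rides along in the `attributes` of a POST /runs payload. A hedged sketch (the workspace ID is a placeholder, and the exact encoding of each `value` follows the Runs API documentation, which expects HCL-encoded strings):

```shell
# Sketch of a create-run payload carrying a run-specific variable value.
PAYLOAD='{
  "data": {
    "type": "runs",
    "relationships": {
      "workspace": { "data": { "type": "workspaces", "id": "ws-XXXXXXXX" } }
    },
    "attributes": {
      "variables": [
        { "key": "replicas", "value": "3" }
      ]
    }
  }
}'
echo "$PAYLOAD"
```

Values supplied this way apply only to that run; they do not persist on the workspace.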
terraform enterprise api docs admin module sharing   is replaced with a new JSON  Api compliant  endpoint   terraform enterprise api docs admin organizations update an organization 39 s module consumers       2021 03 18      CLOUD    Introduced  New Workspace Overview for Terraform Cloud  https   www hashicorp com blog new workspace overview for terraform cloud        Workspaces   terraform cloud docs api docs workspaces         added  resource count  and  updated at  attributes        added  performance attributes   terraform cloud docs api docs workspaces workspace performance attributes    apply duration average    plan duration average    policy check failures    run failures    workspaces kpis run count          added  readme  and  outputs   related resources   terraform cloud docs api docs workspaces available related resources        Team Access   terraform cloud docs api docs team access   updated to support pagination       2021 03 11    Added  VCS Events   terraform cloud docs api docs vcs events   limited to GitLab com connections       2021 03 08     Workspaces   terraform cloud docs api docs workspaces   added  current configuration version  and  current configuration version ingress attributes  as includable related resources "}
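The deprecated registry module delete endpoints listed above map mechanically onto the newer `DELETE` routes. As a rough sketch of that mapping (the helper name is ours, not part of any official client; the path shape comes from the replacement endpoints named in the changelog):

```python
# Hypothetical helper (not an official API client) showing the shape of the
# newer DELETE routes that replace the deprecated
# POST /registry-modules/actions/delete/... endpoints.
def module_delete_path(organization_name, registry_name, namespace, name,
                       provider=None, version=None):
    """Build the DELETE path for a registry module, module provider, or module version."""
    parts = ["organizations", organization_name, "registry-modules",
             registry_name, namespace, name]
    if provider is not None:
        parts.append(provider)
        if version is not None:
            # Deleting a single version targets the most specific path.
            parts.append(version)
    return "/" + "/".join(parts)
```

Omitting `provider` and `version` yields the module-level path, mirroring how the three deprecated `POST .../actions/delete` variants collapse into progressively shorter `DELETE` paths.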
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the authentication token endpoint to manage team API tokens Generate delete and list team API tokens using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201 page title Team Tokens API Docs HCP Terraform","answers":"---\npage_title: Team Tokens - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/authentication-token` endpoint to manage team API tokens. Generate, delete, and list team API tokens using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Team token API\n\nTeam API tokens grant access to a team's workspaces. Each team can have an API token that is not associated with a specific user. 
You can create and delete team tokens and list an organization's team tokens.\n\n## Generate a new team token\n\nGenerates a new team token and overrides the existing token if one exists.\n\n| Method | Path                                 |\n| :----- | :----------------------------------- |\n| POST   | \/teams\/:team_id\/authentication-token |\n\nThis endpoint returns the secret text of the new authentication token. You can only access this token when you create it and cannot recover it later.\n\n### Parameters\n\n- `:team_id` (`string: <required>`) - specifies the team ID for generating the team token\n\n### Request body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\n\n| Key path                      | Type   | Default | Description                                                                                                     |\n| ----------------------------- | ------ | ------- | --------------------------------------------------------------------------------------------------------------- |\n| `data.type`                   | string |         | Must be `\"authentication-token\"`.                                                                               |\n| `data.attributes.expired-at`  | string | `null`  | The UTC date and time that the Team Token will expire, in ISO 8601 format. If omitted or set to `null`, the token will never expire. 
|\n\n### Sample payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"authentication-token\",\n    \"attributes\": {\n      \"expired-at\": \"2023-04-06T12:00:00.000Z\"\n    }\n  }\n}\n```\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/team-BUHBEM97xboT8TVz\/authentication-token\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"4111797\",\n    \"type\": \"authentication-tokens\",\n    \"attributes\": {\n      \"created-at\": \"2017-11-29T19:18:09.976Z\",\n      \"last-used-at\": null,\n      \"description\": null,\n      \"token\": \"QnbSxjjhVMHJgw.atlasv1.gxZnWIjI5j752DGqdwEUVLOFf0mtyaQ00H9bA1j90qWb254lEkQyOdfqqcq9zZL7Sm0\",\n      \"expired-at\": \"2023-04-06T12:00:00.000Z\"\n    },\n    \"relationships\": {\n      \"team\": {\n        \"data\": {\n          \"id\": \"team-Y7RyjccPVBKVEdp7\",\n          \"type\": \"teams\"\n        }\n      },\n      \"created-by\": {\n        \"data\": {\n          \"id\": \"user-62goNpx1ThQf689e\",\n          \"type\": \"users\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Delete the team token\n\n| Method | Path                                 |\n| :----- | :----------------------------------- |\n| DELETE | \/teams\/:team_id\/authentication-token |\n\n### Parameters\n\n- `:team_id` (`string: <required>`) - specifies the team_id from which to delete the token\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/team-BUHBEM97xboT8TVz\/authentication-token\n```\n\n## List team tokens\n\nLists the [team tokens](\/terraform\/cloud-docs\/users-teams-organizations\/teams#api-tokens) in an organization.\n\n`GET 
organizations\/:organization_name\/team-tokens`\n\n| Parameter            | Description                                              |\n|----------------------|----------------------------------------------------------|\n| `:organization_name` | The name of the organization whose team tokens you want to list. |\n\nThis endpoint returns object metadata and does not include secret authentication details of tokens. You can only view a token when you create it and cannot recover it later.\n\nBy default, this endpoint returns tokens by ascending expiration date.\n\n| Status  | Response                                                | Reason                                                                                |\n| ------- | ------------------------------------------------------- | ------------------------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"team-tokens\"`)           | The request was successful.                                                            |\n| [200][] | Empty [JSON API document][]                             | The specified organization has no team tokens.                                        |\n| [404][] | [JSON API error object][]                               | Organization not found.                                                                |\n\n### Query parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters) and searching with the `q` parameter. Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                         |\n|----------------|---------------------------------------------------------------------|\n| `page[number]` | **Optional.** If omitted, the endpoint returns the first page.      
|\n| `page[size]`   | **Optional.** If omitted, the endpoint returns 20 tokens per page.  |\n| `q`            | **Optional.** A search query string. You can search for a team authentication token using the team name. |\n| `sort`         | **Optional.** Allows sorting the team tokens by `\"team-name\"`, `\"created-by\"`, `\"expired-at\"`, and `\"last-used-at\"`. Prepending a hyphen to the sort parameter reverses the order. For example, `\"-team-name\"` sorts by name in reverse alphabetical order. If omitted, the default sort order is ascending. |\n\n### Sample response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"at-TLhN8cc6ro6qYDvp\",\n      \"type\": \"authentication-tokens\",\n      \"attributes\": {\n        \"created-at\": \"2024-06-19T18:28:25.267Z\",\n        \"last-used-at\": null,\n        \"description\": null,\n        \"token\": null,\n        \"expired-at\": \"2024-07-19T18:28:25.030Z\"\n      },\n      \"relationships\": {\n        \"team\": {\n          \"data\": {\n            \"id\": \"team-Y7RyjccPVBKVEdp7\",\n            \"type\": \"teams\"\n          }\n        },\n        \"created-by\": {\n          \"data\": {\n            \"id\": \"user-ccU6h629sszLJBpY\",\n            \"type\": \"users\"\n          }\n        }\n      }\n    },\n    {\n      \"id\": \"at-qfc2wqqJ1T5sCamM\",\n      \"type\": \"authentication-tokens\",\n      \"attributes\": {\n        \"created-at\": \"2024-06-19T18:44:44.051Z\",\n        \"last-used-at\": null,\n        \"description\": null,\n        \"token\": null,\n        \"expired-at\": \"2024-07-19T18:44:43.818Z\"\n      },\n      \"relationships\": {\n        \"team\": {\n          \"data\": {\n            \"id\": \"team-58pFiBffTLMxLphR\",\n            \"type\": \"teams\"\n          }\n        },\n        \"created-by\": {\n          \"data\": {\n            \"id\": \"user-ccU6h629hhzLJBpY\",\n            \"type\": \"users\"\n          }\n        }\n      }\n    }\n  ]\n}\n```\n\n## Show a team token\n\nUse 
this endpoint to display a [team token](\/terraform\/cloud-docs\/users-teams-organizations\/teams#api-tokens) for a particular team.\n\n`GET \/teams\/:team-id\/authentication-token`\n\n| Parameter  | Description               |\n| ---------- | ------------------------- |\n| `:team-id` | The ID of the Team.       |\n\nYou can also fetch a team token directly by using the token's ID with the `authentication-tokens\/` endpoint.\n\n`GET \/authentication-tokens\/:token-id`\n\n| Parameter   | Description               |\n| ----------- | ------------------------- |\n| `:token-id` | The ID of the Team Token. |\n\nThe object returned by this endpoint only contains metadata, and does not include the secret text of the authentication token. A token's secret text is only shown upon creation, and cannot be recovered later.\n\n| Status  | Response                                                | Reason                                                       |\n| ------- | ------------------------------------------------------- | ------------------------------------------------------------ |\n| [200][] | [JSON API document][] (`type: \"authentication-tokens\"`) | The request was successful                                   |\n| [404][] | [JSON API error object][]                               | Team Token not found, or unauthorized to view the Team Token |\n\n### Sample request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/team-6yEmxNAhaoQLH1Da\/authentication-token\n```\n\n### Sample response\n\n```json\n{\n  \"data\": {\n    \"id\": \"at-6yEmxNAhaoQLH1Da\",\n    \"type\": \"authentication-tokens\",\n    \"attributes\": {\n      \"created-at\": \"2023-11-25T22:31:30.624Z\",\n      \"last-used-at\": \"2023-11-26T20:34:59.487Z\",\n      \"description\": null,\n      \"token\": null,\n      \"expired-at\": \"2024-04-06T12:00:00.000Z\"\n  
  },\n    \"relationships\": {\n      \"team\": {\n        \"data\": {\n          \"id\": \"team-LnREdjodkvZFGdXL\",\n          \"type\": \"teams\"\n        }\n      },\n      \"created-by\": {\n        \"data\": {\n          \"id\": \"user-MA4GL63FmYRpSFxa\",\n          \"type\": \"users\"\n        }\n      }\n    }\n  }\n}\n```\n","site":"terraform","answers_cleaned":"    page title  Team Tokens   API Docs   HCP Terraform description       Use the   authentication token  endpoint to manage team API tokens  Generate  delete  and list team API tokens using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Team token API  Team API tokens grant access to a team s workspaces  Each team can have an API token that is not associated with a specific user  You can create and delete team tokens and list an organization s team tokens      Generate a new team token  Generates a new team token and overrides existing 
token if one exists     Method   Path                                                                                       POST      teams  team id authentication token    This endpoint returns the secret text of the new authentication token  You can only access this token when you create it and can not recover it later       Parameters      team id    string   required      specifies the team ID for generating the team token      Request body  This POST endpoint requires a JSON object with the following properties as a request payload      Key path                        Type     Default   Description                                                                                                                                                                                                                                                                                 data type                      string             Must be   authentication token                                                                                       data attributes expired at     string    null     The UTC date and time that the Team Token will expire  in ISO 8601 format  If omitted or set to  null  the token will never expire         Sample payload     json      data          type    authentication token        attributes            expired at    2023 04 06T12 00 00 000Z                       Sample request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 teams team BUHBEM97xboT8TVz authentication token          Sample response     json      data          id    4111797        type    authentication tokens        attributes            created at    2017 11 29T19 18 09 976Z          last used at   null         description   null         token    QnbSxjjhVMHJgw atlasv1 gxZnWIjI5j752DGqdwEUVLOFf0mtyaQ00H9bA1j90qWb254lEkQyOdfqqcq9zZL7Sm0       
   expired at    2023 04 06T12 00 00 000Z              relationships            team              data                id    team Y7RyjccPVBKVEdp7              type    teams                            created by              data                id    user 62goNpx1ThQf689e              type    users                                        Delete the team token    Method   Path                                                                                       DELETE    teams  team id authentication token        Parameters      team id    string   required      specifies the team id from which to delete the token      Sample request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 teams team BUHBEM97xboT8TVz authentication token         List team tokens  Lists the  team tokens   terraform cloud docs users teams organizations teams api tokens  in an organization    GET organizations  organization name team tokens     Parameter              Description                                                                                                                                        organization name    The name of the organization whose team tokens you want to list     This endpoint returns object metadata and does not include secret authentication details of tokens  You can only view a token when you create it and cannot recover it later   By default  this endpoint returns tokens by ascending expiration date     Status    Response                                                  Reason                                                                                                                                                                                                                                                   200       JSON API document      type   team tokens                The request was successful                                 
                                 200      Empty  JSON API document                                  The specified organization has no team tokens                                              404       JSON API error object                                    Organization not found                                                                        Query parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters  and searching with the  q  parameter  Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter        Description                                                                                                                                                       page number       Optional    If omitted  the endpoint returns the first page            page size         Optional    If omitted  the endpoint returns 20 tokens per page        q                 Optional    A search query string  You can search for a team authentication token using the team name       sort              Optional    Allows sorting the team tokens by   team name      created by      expired at    and   last used at    Prepending a hyphen to the sort parameter reverses the order  For example     team name   sorts by name in reverse alphabetical order  If omitted  the default sort order ascending         Sample response     json      data                  id    at TLhN8cc6ro6qYDvp          type    authentication tokens          attributes              created at    2024 06 19T18 28 25 267Z            last used at   null           description   null           token   null           expired at    2024 07 19T18 28 25 030Z                  relationships              team                data                  id    team Y7RyjccPVBKVEdp7                type    teams                                  created by                data                  id    user 
ccU6h629sszLJBpY                type    users                                                    id    at qfc2wqqJ1T5sCamM          type    authentication tokens          attributes              created at    2024 06 19T18 44 44 051Z            last used at   null           description   null           token   null           expired at    2024 07 19T18 44 43 818Z                  relationships              team                data                  id    team 58pFiBffTLMxLphR                type    teams                                  created by                data                  id    user ccU6h629hhzLJBpY                type    users                                                     Show a team token  Use this endpoint to display a  team token   terraform cloud docs users teams organizations teams api tokens  for a particular team    GET  teams  team id authentication token     Parameter    Description                                                                team id    The ID of the Team           You can also fetch a team token directly by using the token s ID with the  authentication tokens   endpoint    GET  authentication tokens  token id     Parameter     Description                                                                 token id    The ID of the Team Token     The object returned by this endpoint only contains metadata  and does not include the secret text of the authentication token  A token s secret test is only shown upon creation  and cannot be recovered later     Status    Response                                                  Reason                                                                                                                                                                                                 200       JSON API document      type   authentication tokens      The request was successful                                        404       JSON API error object                                    Team Token 
not found  or unauthorized to view the Team Token        Sample request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 teams team 6yEmxNAhaoQLH1Da authentication token          Sample response     json      data          id    at 6yEmxNAhaoQLH1Da        type    authentication tokens        attributes            created at    2023 11 25T22 31 30 624Z          last used at    2023 11 26T20 34 59 487Z          description   null         token   null         expired at    2024 04 06T12 00 00 000Z              relationships            team              data                id    team LnREdjodkvZFGdXL              type    teams                            created by              data                id    user MA4GL63FmYRpSFxa              type    users                                    "}
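The token-generation request documented above is simple to assemble by hand. A minimal sketch using only the standard library (the helper name is ours; the payload shape, `expired-at` semantics, and endpoint come from the docs):

```python
import json

def team_token_payload(expired_at=None):
    """Request body for POST /teams/:team_id/authentication-token.

    expired_at: ISO 8601 UTC timestamp string, or None for a token
    that never expires (per the documented default).
    """
    return {
        "data": {
            "type": "authentication-token",
            "attributes": {"expired-at": expired_at},
        }
    }

# Serialized, this matches the sample payload shown in the docs above.
body = json.dumps(team_token_payload("2023-04-06T12:00:00.000Z"))
```

The resulting `body` string is what you would pass to the endpoint (for example via `--data` in the documented curl request), with the `Content-Type: application/vnd.api+json` header.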
{"questions":"terraform A variable set terraform cloud docs workspaces variables scope is a resource that allows you to reuse the same variables across multiple workspaces and projects For example you could define a variable set of provider credentials and automatically apply it to a selection of workspaces all workspaces in a project or all workspaces in an organization Variable Sets API Use the varsets endpoint to manage variable sets List show create update and delete variable sets and apply or remove variable sets from workspaces using the HTTP API page title Variable Sets API Docs HCP Terraform","answers":"---\npage_title: Variable Sets - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/varsets` endpoint to manage variable sets. List, show, create, update, and delete variable sets, and apply or remove variable sets from workspaces using the HTTP API.\n---\n\n# Variable Sets API\n\nA [variable set](\/terraform\/cloud-docs\/workspaces\/variables#scope) is a resource that allows you to reuse the same variables across multiple workspaces and projects. For example, you could define a variable set of provider credentials and automatically apply it to a selection of workspaces, all workspaces in a project, or all workspaces in an organization.\n\nYou need [**Read** variables permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-workspace-permissions) to view the variables for a particular workspace and to view the variable sets in the owning organization. 
To create or edit variable sets, your team must have [**Manage all workspaces** organization access](\/terraform\/cloud-docs\/users-teams-organizations\/teams\/manage#managing-organization-access).\n\n## Create a Variable Set\n\n`POST organizations\/:organization_name\/varsets`\n\n| Parameter            | Description                                            |\n| -------------------- | ------------------------------------------------------ |\n| `:organization_name` | The name of the organization the workspace belongs to. |\n\n### Request Body\n\nProperties without a default value are required.\n\n| Key path                        | Type    | Default | Description                                                                                                                 |\n|---------------------------------|---------|---------|-----------------------------------------------------------------------------------------------------------------------------|\n| `data.attributes.name`          | string  |         | The name of the variable set.                                                                                               |\n| `data.attributes.description`   | string  | `\"\"`    | Text displayed in the UI to contextualize the variable set and its purpose.                                                 |\n| `data.attributes.global`        | boolean | `false` | When true, HCP Terraform automatically applies the variable set to all current and future workspaces in the organization. |\n| `data.attributes.priority`      | boolean | `false` | When true, the variables in the set override any other variable values with a more specific scope, including values set on the command line. |\n| `data.relationships.workspaces` | array   | `[]`    | Array of references to workspaces that the variable set should be assigned to.                                              
|\n| `data.relationships.projects`   | array   | `[]`    | Array of references to projects that the variable set should be assigned to.                                                |\n| `data.relationships.vars`       | array   | `[]`    | Array of complete variable definitions that comprise the variable set.                                                      |\n\nHCP Terraform does not allow different global variable sets to contain conflicting variables with the same name and type. You will receive a 422 response if you try to create a global variable set that contains conflicting variables.\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | ------------------------- | --------------------------------------------------------------------- |\n| [200][] | [JSON API document][]     | Successfully added variable set                                       |\n| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action        |\n| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"varsets\",\n    \"attributes\": {\n      \"name\": \"MyVarset\",\n      \"description\": \"Full of vars and such for mass reuse\",\n      \"global\": false,\n      \"priority\": false\n    },\n    \"relationships\": {\n      \"workspaces\": {\n        \"data\": [\n          {\n            \"id\": \"ws-z6YvbWEYoE168kpq\",\n            \"type\": \"workspaces\"\n          }\n        ]\n      },\n      \"vars\": {\n        \"data\": [\n          {\n            \"type\": \"vars\",\n            \"attributes\": {\n              \"key\": \"c2e4612d993c18e42ef30405ea7d0e9ae\",\n              \"value\": \"8676328808c5bf56ac5c8c0def3b7071\",\n              \"category\": \"terraform\"\n            }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n### 
Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/varsets\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"varset-kjkN545LH2Sfercv\",\n    \"type\": \"varsets\",\n    \"attributes\": {\n      \"name\": \"MyVarset\",\n      \"description\": \"Full of vars and such for mass reuse\",\n      \"global\": false,\n      \"priority\": false,\n    },\n    \"relationships\": {\n      \"workspaces\": {\n        \"data\": [\n          {\n            \"id\": \"ws-z6YvbWEYoE168kpq\",\n            \"type\": \"workspaces\"\n          }\n        ]\n      },\n      \"projects\": {\n        \"data\": [\n          {\n            \"id\": \"prj-lkjasdlkfjs\",\n            \"type\": \"projects\"\n          }\n        ]\n      },\n      \"vars\": {\n        \"data\": [\n          {\n            \"id\": \"var-Nh0doz0hzj9hrm34qq\",\n            \"type\": \"vars\",\n            \"attributes\": {\n              \"key\": \"c2e4612d993c18e42ef30405ea7d0e9ae\",\n              \"value\": \"8676328808c5bf56ac5c8c0def3b7071\",\n              \"category\": \"terraform\"\n            }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n## Update a Variable Set\n\n`PUT\/PATCH varsets\/:varset_id`\n\n| Parameter    | Description         |\n| ------------ | ------------------- |\n| `:varset_id` | The variable set ID |\n\nHCP Terraform does not allow global variable sets to contain conflicting variables with the same name and type. 
You will receive a 422 response if you try to create a global variable set that contains conflicting variables.\n\n### Request Body\n\n| Key path                        | Type    | Default | Description                                                                                                                                          |\n|---------------------------------|---------|---------|------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `data.attributes.name`          | string  |         | The name of the variable set.                                                                                                                        |\n| `data.attributes.description`   | string  |         | Text displayed in the UI to contextualize the variable set and its purpose.                                                                          |\n| `data.attributes.global`        | boolean |         | When true, HCP Terraform automatically applies the variable set to all current and future workspaces in the organization.                          |\n| `data.attributes.priority`      | boolean | `false` | When true, the variables in the set override any other variable values set with a more specific scope, including values set on the command line. |\n| `data.relationships.workspaces` | array   |         | **Optional** Array of references to workspaces that the variable set should be assigned to. Sending an empty array clears all workspace assignments. |\n| `data.relationships.projects`   | array   |         | **Optional** Array of references to projects that the variable set should be assigned to. Sending an empty array clears all project assignments.     |\n| `data.relationships.vars`       | array   |         | **Optional** Array of complete variable definitions to add to the variable set.                                                                      
|\n\n| Status  | Response                  | Reason(s)                                                                      |\n| ------- | ------------------------- | ------------------------------------------------------------------------------ |\n| [200][] | [JSON API document][]     | Successfully updated variable set                                              |\n| [404][] | [JSON API error object][] | Organization or variable set not found, or user unauthorized to perform action |\n| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object          |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"varsets\",\n    \"attributes\": {\n      \"name\": \"MyVarset\",\n      \"description\": \"Full of vars and such for mass reuse. Now global!\",\n      \"global\": true,\n      \"priority\": true\n    },\n    \"relationships\": {\n      \"workspaces\": {\n        \"data\": [\n          {\n            \"id\": \"ws-FRFwkYoUoGn1e34b\",\n            \"type\": \"workspaces\"\n          }\n        ]\n      },\n      \"projects\": {\n        \"data\": [\n          {\n            \"id\": \"prj-kFjgSzcZSr5c3imE\",\n            \"type\": \"projects\"\n          }\n        ]\n      },\n      \"vars\": {\n        \"data\": [\n          {\n            \"type\": \"vars\",\n            \"attributes\": {\n              \"key\": \"c2e4612d993c18e42ef30405ea7d0e9ae\",\n              \"value\": \"8676328808c5bf56ac5c8c0def3b7071\",\n              \"category\": \"terraform\"\n            }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/varsets\/varset-kjkN545LH2Sfercv\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"varset-kjkN545LH2Sfercv\",\n    
\"type\": \"varsets\",\n    \"attributes\": {\n      \"name\": \"MyVarset\",\n      \"description\": \"Full of vars and such for mass reuse. Now global!\",\n      \"global\": true,\n      \"priority\": true\n    },\n    \"relationships\": {\n      \"vars\": {\n        \"data\": [\n          {\n            \"id\": \"var-Nh0doz0hzj9hrm34qq\",\n            \"type\": \"vars\",\n            \"attributes\": {\n              \"key\": \"c2e4612d993c18e42ef30405ea7d0e9ae\",\n              \"value\": \"8676328808c5bf56ac5c8c0def3b7071\",\n              \"category\": \"terraform\"\n            }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n## Delete a Variable Set\n\n`DELETE varsets\/:varset_id`\n\n| Parameter    | Description         |\n| ------------ | ------------------- |\n| `:varset_id` | The variable set ID |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/varsets\/varset-kjkN545LH2Sfercv\n```\n\nOn success, this endpoint responds with no content.\n\n## Show Variable Set\n\nFetch details about the specified variable set.\n\n`GET varsets\/:varset_id`\n\n| Parameter    | Description         |\n| ------------ | ------------------- |\n| `:varset_id` | The variable set ID |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/varsets\/varset-kjkN545LH2Sfercv\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"varset-kjkN545LH2Sfercv\",\n    \"type\": \"varsets\",\n    \"attributes\": {\n      \"name\": \"MyVarset\",\n      \"description\": \"Full of vars and such for mass reuse\",\n      \"global\": false,\n      \"priority\": false,\n      \"updated-at\": \"2023-03-06T21:48:33.588Z\",\n      \"var-count\": 5,\n      
\"workspace-count\": 2,\n      \"project-count\": 2\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"hashicorp\",\n          \"type\": \"organizations\"\n        }\n      },\n      \"vars\": {\n        \"data\": [\n          {\n            \"id\": \"var-mMqadSCxZtrQJAv8\",\n            \"type\": \"vars\"\n          },\n          {\n            \"id\": \"var-hFxUUKSk35QMsRVH\",\n            \"type\": \"vars\"\n          },\n          {\n            \"id\": \"var-fkd6N48tXRmoaPxH\",\n            \"type\": \"vars\"\n          },\n          {\n            \"id\": \"var-abcbBMBMWcZw3WiV\",\n            \"type\": \"vars\"\n          },\n          {\n            \"id\": \"var-vqvRKK1ZoqQCiMwN\",\n            \"type\": \"vars\"\n          }\n        ]\n      },\n      \"workspaces\": {\n        \"data\": [\n          {\n            \"id\": \"ws-UohFdKAHUGsQ8Dtf\",\n            \"type\": \"workspaces\"\n          },\n          {\n            \"id\": \"ws-XhGhaaCrsx9ATson\",\n            \"type\": \"workspaces\"\n          }\n        ]\n      },\n      \"projects\": {\n        \"data\": [\n          {\n            \"id\": \"prj-1JMwvPHFsdpsPhnt\",\n            \"type\": \"projects\"\n          },\n          {\n            \"id\": \"prj-SLDGqbYqELXE1obp\",\n            \"type\": \"projects\"\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n## List Variable Sets\n\nList all variable sets for an organization.\n\n`GET organizations\/:organization_name\/varsets`\n\n| Parameter            | Description                                              |\n|----------------------|----------------------------------------------------------|\n| `:organization_name` | The name of the organization the variable sets belong to |\n\nList all variable sets for a project. 
This includes global variable sets from the project's organization.\n\n`GET projects\/:project_id\/varsets`\n\n| Parameter     | Description    |\n|---------------|----------------|\n| `:project_id` | The project ID |\n\nList all variable sets for a workspace. This includes global variable sets from the workspace's organization and variable sets\nattached to the project this workspace is contained within.\n\n`GET workspaces\/:workspace_id\/varsets`\n\n| Parameter       | Description      |\n|-----------------|------------------|\n| `:workspace_id` | The workspace ID |\n\n### Query Parameters\n\nAll list endpoints support pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters) and searching with the `q` parameter. Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                         |\n|----------------|---------------------------------------------------------------------|\n| `page[number]` | **Optional.** If omitted, the endpoint returns the first page.      |\n| `page[size]`   | **Optional.** If omitted, the endpoint returns 20 varsets per page. |\n| `q`            | **Optional.** A search query string. You can search for a variable set using its name. 
|\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"varset-kjkN545LH2Sfercv\",\n      \"type\": \"varsets\",\n      \"attributes\": {\n        \"name\": \"MyVarset\",\n        \"description\": \"Full of vars and such for mass reuse\",\n        \"global\": false,\n        \"priority\": false,\n        \"updated-at\": \"2023-03-06T21:48:33.588Z\",\n        \"var-count\": 5,\n        \"workspace-count\": 2,\n        \"project-count\": 2\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"hashicorp\",\n            \"type\": \"organizations\"\n          }\n        },\n        \"vars\": {\n          \"data\": [\n            {\n              \"id\": \"var-mMqadSCxZtrQJAv8\",\n              \"type\": \"vars\"\n            },\n            {\n              \"id\": \"var-hFxUUKSk35QMsRVH\",\n              \"type\": \"vars\"\n            },\n            {\n              \"id\": \"var-fkd6N48tXRmoaPxH\",\n              \"type\": \"vars\"\n            },\n            {\n              \"id\": \"var-abcbBMBMWcZw3WiV\",\n              \"type\": \"vars\"\n            },\n            {\n              \"id\": \"var-vqvRKK1ZoqQCiMwN\",\n              \"type\": \"vars\"\n            }\n          ]\n        },\n        \"workspaces\": {\n          \"data\": [\n            {\n              \"id\": \"ws-UohFdKAHUGsQ8Dtf\",\n              \"type\": \"workspaces\"\n            },\n            {\n              \"id\": \"ws-XhGhaaCrsx9ATson\",\n              \"type\": \"workspaces\"\n            }\n          ]\n        },\n        \"projects\": {\n          \"data\": [\n            {\n              \"id\": \"prj-1JMwvPHFsdpsPhnt\",\n              \"type\": \"projects\"\n            },\n            {\n              \"id\": \"prj-SLDGqbYqELXE1obp\",\n              \"type\": \"projects\"\n            }\n          ]\n        }\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": 
\"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/varsets?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/varsets?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/varsets?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 1\n    }\n  }\n}\n```\n\n### Variable Relationships\n\n## Add Variable\n\n`POST varsets\/:varset_external_id\/relationships\/vars`\n\n| Parameter    | Description         |\n| ------------ | ------------------- |\n| `:varset_id` | The variable set ID |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                      | Type   | Default | Description                                                                                                     |\n| ----------------------------- | ------ | ------- | --------------------------------------------------------------------------------------------------------------- |\n| `data.type`                   | string |         | Must be `\"vars\"`.                                                                                               |\n| `data.attributes.key`         | string |         | The name of the variable.                                                                                       |\n| `data.attributes.value`       | string | `\"\"`    | The value of the variable.                                                                                      |\n| `data.attributes.description` | string |         | The description of the variable.                              
                                                  |\n| `data.attributes.category`    | string |         | Whether this is a Terraform or environment variable. Valid values are `\"terraform\"` or `\"env\"`.                 |\n| `data.attributes.hcl`         | bool   | `false` | Whether to evaluate the value of the variable as a string of HCL code. Has no effect for environment variables. |\n| `data.attributes.sensitive`   | bool   | `false` | Whether the value is sensitive. If true, variable is not visible in the UI.                                     |\n\nHCP Terraform does not allow different global variable sets to contain conflicting variables with the same name and type. You will receive a 422 response if you try to add a conflicting variable to a global variable set.\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | ------------------------- | --------------------------------------------------------------------- |\n| [200][] | [JSON API document][]     | Successfully added variable to variable set                           |\n| [404][] | [JSON API error object][] | Variable set not found, or user unauthorized to perform action        |\n| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"vars\",\n    \"attributes\": {\n      \"key\": \"g6e45ae7564a17e81ef62fd1c7fa86138\",\n      \"value\": \"61e400d5ccffb3782f215344481e6c82\",\n      \"description\": \"cheeeese\",\n      \"sensitive\": false,\n      \"category\": \"terraform\",\n      \"hcl\": false\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  
https:\/\/app.terraform.io\/api\/v2\/varsets\/varset-4q8f7H0NHG733bBH\/relationships\/vars\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-EavQ1LztoRTQHSNT\",\n    \"type\": \"vars\",\n    \"attributes\": {\n      \"key\": \"g6e45ae7564a17e81ef62fd1c7fa86138\",\n      \"value\": \"61e400d5ccffb3782f215344481e6c82\",\n      \"description\": \"cheeeese\",\n      \"sensitive\": false,\n      \"category\": \"terraform\",\n      \"hcl\": false\n    }\n  }\n}\n```\n\n## Update a Variable in a Variable Set\n\n`PATCH varsets\/:varset_id\/relationships\/vars\/:var_id`\n\n| Parameter    | Description                      |\n| ------------ | -------------------------------- |\n| `:varset_id` | The variable set ID              |\n| `:var_id`    | The ID of the variable to update |\n\nHCP Terraform does not allow different global variable sets to contain conflicting variables with the same name and type. You will receive a 422 response if you try to add a conflicting variable to a global variable set.\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | ------------------------- | --------------------------------------------------------------------- |\n| [200][] | [JSON API document][]     | Successfully updated variable for variable set                        |\n| [404][] | [JSON API error object][] | Variable set not found, or user unauthorized to perform action        |\n| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"vars\",\n    \"attributes\": {\n      \"key\": \"g6e45ae7564a17e81ef62fd1c7fa86138\",\n      \"value\": \"61e400d5ccffb3782f215344481e6c82\",\n      \"description\": \"new cheeeese\",\n      \"sensitive\": false,\n      \"category\": \"terraform\",\n      \"hcl\": false\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl 
\\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/varsets\/varset-4q8f7H0NHG733bBH\/relationships\/vars\/var-EavQ1LztoRTQHSNT\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-EavQ1LztoRTQHSNT\",\n    \"type\": \"vars\",\n    \"attributes\": {\n      \"key\": \"g6e45ae7564a17e81ef62fd1c7fa86138\",\n      \"value\": \"61e400d5ccffb3782f215344481e6c82\",\n      \"description\": \"new cheeeese\",\n      \"sensitive\": false,\n      \"category\": \"terraform\",\n      \"hcl\": false\n    }\n  }\n}\n```\n\n## Delete a Variable in a Variable Set\n\n`DELETE varsets\/:varset_id\/relationships\/vars\/:var_id`\n\n| Parameter    | Description                      |\n| ------------ | -------------------------------- |\n| `:varset_id` | The variable set ID              |\n| `:var_id`    | The ID of the variable to delete |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/varsets\/varset-4q8f7H0NHG733bBH\/relationships\/vars\/var-EavQ1LztoRTQHSNT\n```\n\nOn success, this endpoint responds with no content.\n\n## List Variables in a Variable Set\n\n`GET varsets\/:varset_id\/relationships\/vars`\n\n| Parameter    | Description         |\n| ------------ | ------------------- |\n| `:varset_id` | The variable set ID |\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"var-134r1k34nj5kjn\",\n      \"type\": \"vars\",\n      \"attributes\": {\n        \"key\": \"F115037558b045dd82da40b089e5db745\",\n        \"value\": \"1754288480dfd3060e2c37890422905f\",\n        \"sensitive\": false,\n        \"category\": \"terraform\",\n        \"hcl\": false,\n        \"created-at\": \"2021-10-29T18:54:29.379Z\",\n        \"description\": 
\"\"\n      },\n      \"relationships\": {\n        \"varset\": {\n          \"data\": {\n            \"id\": \"varset-992UMULdeDuebi1x\",\n            \"type\": \"varsets\"\n          },\n          \"links\": { \"related\": \"\/api\/v2\/varsets\/1\" }\n        }\n      },\n      \"links\": { \"self\": \"\/api\/v2\/vars\/var-BEPU9NjPVCiCfrXj\" }\n    }\n  ],\n  \"links\": {\n    \"self\": \"app.terraform.io\/app\/varsets\/varset-992UMULdeDuebi1x\/vars\",\n    \"first\": \"app.terraform.io\/app\/varsets\/varset-992UMULdeDuebi1x\/vars?page=1\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"app.terraform.io\/app\/varsets\/varset-992UMULdeDuebi1x\/vars?page=1\"\n  }\n}\n```\n\n## Apply Variable Set to Workspaces\n\nAccepts a list of workspaces to add the variable set to.\n\n`POST varsets\/:varset_id\/relationships\/workspaces`\n\n| Parameter    | Description         |\n| ------------ | ------------------- |\n| `:varset_id` | The variable set ID |\n\nProperties without a default value are required.\n\n| Key path      | Type   | Default | Description                                        |\n| ------------- | ------ | ------- | -------------------------------------------------- |\n| `data[].type` | string |         | Must be `\"workspaces\"`                             |\n| `data[].id`   | string |         | The id of the workspace to add the variable set to |\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | ------------------------- | --------------------------------------------------------------------- |\n| [204][] |                           | Successfully added variable set to the requested workspaces           |\n| [404][] | [JSON API error object][] | Variable set not found or user unauthorized to perform action         |\n| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object |\n\n### Sample Payload\n\n```json\n{\n  
\"data\": [\n    {\n      \"type\": \"workspaces\",\n      \"id\": \"ws-YwfuBJZkdai4xj9w\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/varsets\/varset-kjkN545LH2Sfercv\/relationships\/workspaces\n```\n\n## Remove a Variable Set from Workspaces\n\nAccepts a list of workspaces to remove the variable set from.\n\n`DELETE varsets\/:varset_id\/relationships\/workspaces`\n\n| Parameter    | Description         |\n| ------------ | ------------------- |\n| `:varset_id` | The variable set ID |\n\nProperties without a default value are required.\n\n| Key path      | Type   | Default | Description                                             |\n| ------------- | ------ | ------- | ------------------------------------------------------- |\n| `data[].type` | string |         | Must be `\"workspaces\"`                                  |\n| `data[].id`   | string |         | The id of the workspace to delete the variable set from |\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | ------------------------- | --------------------------------------------------------------------- |\n| [204][] |                           | Successfully removed variable set from the requested workspaces       |\n| [404][] | [JSON API error object][] | Variable set not found or user unauthorized to perform action         |\n| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object |\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"workspaces\",\n      \"id\": \"ws-YwfuBJZkdai4xj9w\"\n    },\n    {\n      \"type\": \"workspaces\",\n      \"id\": \"ws-YwfuBJZkdai4xj9w\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  
--header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/varsets\/varset-kjkN545LH2Sfercv\/relationships\/workspaces\n```\n\n## Apply Variable Set to Projects\n\nAccepts a list of projects to add the variable set to. When you apply a variable set to a project, all the workspaces in that project will have the variable set applied to them.\n\n`POST varsets\/:varset_id\/relationships\/projects`\n\n| Parameter    | Description         |\n| ------------ | ------------------- |\n| `:varset_id` | The variable set ID |\n\nProperties without a default value are required.\n\n| Key path      | Type   | Default | Description                                        |\n| ------------- | ------ | ------- | -------------------------------------------------- |\n| `data[].type` | string |         | Must be `\"projects\"`                               |\n| `data[].id`   | string |         | The id of the project to add the variable set to   |\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | ------------------------- | --------------------------------------------------------------------- |\n| [204][] |                           | Successfully added variable set to the requested projects             |\n| [404][] | [JSON API error object][] | Variable set not found or user unauthorized to perform action         |\n| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object |\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"projects\",\n      \"id\": \"prj-YwfuBJZkdai4xj9w\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  
https:\/\/app.terraform.io\/api\/v2\/varsets\/varset-kjkN545LH2Sfercv\/relationships\/projects\n```\n\n## Remove a Variable Set from Projects\n\nAccepts a list of projects to remove the variable set from.\n\n`DELETE varsets\/:varset_id\/relationships\/projects`\n\n| Parameter    | Description         |\n| ------------ | ------------------- |\n| `:varset_id` | The variable set ID |\n\nProperties without a default value are required.\n\n| Key path      | Type   | Default | Description                                             |\n| ------------- | ------ | ------- | ------------------------------------------------------- |\n| `data[].type` | string |         | Must be `\"projects\"`                                    |\n| `data[].id`   | string |         | The id of the project to delete the variable set from   |\n\n| Status  | Response                  | Reason(s)                                                             |\n| ------- | ------------------------- | --------------------------------------------------------------------- |\n| [204][] |                           | Successfully removed variable set from the requested projects         |\n| [404][] | [JSON API error object][] | Variable set not found or user unauthorized to perform action         |\n| [422][] | [JSON API error object][] | Problem with payload or request; details provided in the error object |\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"projects\",\n      \"id\": \"prj-YwfuBJZkdai4xj9w\"\n    },\n    {\n      \"type\": \"projects\",\n      \"id\": \"prj-lkjasdfiojwerlkj\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/varsets\/varset-kjkN545LH2Sfercv\/relationships\/projects\n```\n\n\n[200]: 
https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[json api document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[json api error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n\n## Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\n| Resource Name        | Description                                       |\n| -------------------- | ------------------------------------------------- |\n| `vars`               | Show each variable in a variable set and all of their attributes including `id`, `key`, `value`, `sensitive`, `category`, `hcl`, `created_at`,  and `description`. 
|
type    varsets        attributes            name    MyVarset          description    Full of vars and such for mass reuse          global   false         priority   false              relationships            workspaces              data                              id    ws z6YvbWEYoE168kpq                type    workspaces                                        vars              data                              type    vars                attributes                    key    c2e4612d993c18e42ef30405ea7d0e9ae                  value    8676328808c5bf56ac5c8c0def3b7071                  category    terraform                                                                   Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 organizations my organization varsets          Sample Response     json      data          id    varset kjkN545LH2Sfercv        type    varsets        attributes            name    MyVarset          description    Full of vars and such for mass reuse          global   false         priority   false              relationships            workspaces              data                              id    ws z6YvbWEYoE168kpq                type    workspaces                                        projects              data                              id    prj lkjasdlkfjs                type    projects                                        vars              data                              id    var Nh0doz0hzj9hrm34qq                type    vars                attributes                    key    c2e4612d993c18e42ef30405ea7d0e9ae                  value    8676328808c5bf56ac5c8c0def3b7071                  category    terraform                                                                  Update a Variable Set   PUT PATCH varsets  varset id     Parameter      Description                         
                             varset id    The variable set ID    HCP Terraform does not allow global variable sets to contain conflicting variables with the same name and type  You will receive a 422 response if you try to create a global variable set that contains conflicting variables       Request Body    Key path                          Type      Default   Description                                                                                                                                                                                                                                                                                                                                                              data attributes name             string              The name of the variable set                                                                                                                              data attributes description      string              Text displayed in the UI to contextualize the variable set and its purpose                                                                                data attributes global           boolean             When true  HCP Terraform automatically applies the variable set to all current and future workspaces in the organization                                data attributes priority         boolean    false    When true  the variables in the set override any other variable values set with a more specific scope  including values set on the command line       data relationships workspaces    array                 Optional   Array of references to workspaces that the variable set should be assigned to  Sending an empty array clears all workspace assignments       data relationships projects      array                 Optional   Array of references to projects that the variable set should be assigned to  Sending an empty array clears all project assignments           data relationships vars          array 
                Optional   Array of complete variable definitions to add to the variable set                                                                            Status    Response                    Reason s                                                                                                                                                                                                     200       JSON API document          Successfully updated variable set                                                   404       JSON API error object      Organization or variable set not found  or user unauthorized to perform action      422       JSON API error object      Problem with payload or request  details provided in the error object                 Sample Payload     json      data          type    varsets        attributes            name    MyVarset          description    Full of vars and such for mass reuse  Now global           global   true         priority   true              relationships            workspaces              data                              id    ws FRFwkYoUoGn1e34b                type    workspaces                                        projects              data                              id    prj kFjgSzcZSr5c3imE                type    projects                                        vars              data                              type    vars                attributes                    key    c2e4612d993c18e42ef30405ea7d0e9ae                  value    8676328808c5bf56ac5c8c0def3b7071                  category    terraform                                                                   Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json     https   app terraform io api v2 varsets varset kjkN545LH2Sfercv          Sample Response     json      data          id    varset kjkN545LH2Sfercv        
type    varsets        attributes            name    MyVarset          description    Full of vars and such for mass reuse  Now global           global   true         priority   true             relationships            vars              data                              id    var Nh0doz0hzj9hrm34qq                type    vars                attributes                    key    c2e4612d993c18e42ef30405ea7d0e9ae                  value    8676328808c5bf56ac5c8c0def3b7071                  category    terraform                                                                  Delete a Variable Set   DELETE varsets  varset id     Parameter      Description                                                      varset id    The variable set ID        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 varsets varset kjkN545LH2Sfercv      On success  this endpoint responds with no content      Show Variable Set  Fetch details about the specified variable set    GET varsets  varset id     Parameter      Description                                                      varset id    The variable set ID        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 varsets varset kjkN545LH2Sfercv          Sample Response     json      data          id    varset kjkN545LH2Sfercv        type    varsets        attributes            name    MyVarset          description    Full of vars and such for mass reuse          global   false         priority   false         updated at    2023 03 06T21 48 33 588Z          var count   5         workspace count   2         project count   2             relationships            organization              data                id    hashicorp              type    organizations                     
       vars              data                              id    var mMqadSCxZtrQJAv8                type    vars                                        id    var hFxUUKSk35QMsRVH                type    vars                                        id    var fkd6N48tXRmoaPxH                type    vars                                        id    var abcbBMBMWcZw3WiV                type    vars                                        id    var vqvRKK1ZoqQCiMwN                type    vars                                        workspaces              data                              id    ws UohFdKAHUGsQ8Dtf                type    workspaces                                        id    ws XhGhaaCrsx9ATson                type    workspaces                                        projects              data                              id    prj 1JMwvPHFsdpsPhnt                type    projects                                        id    prj SLDGqbYqELXE1obp                type    projects                                                    List Variable Sets  List all variable sets for an organization    GET organizations  organization name varsets     Parameter              Description                                                                                                                                        organization name    The name of the organization the variable sets belong to    List all variable sets for a project  This includes global variable sets from the project s organization    GET projects  project id varsets     Parameter       Description                                             project id    The project ID    List all variable sets for a workspace  This includes global variable sets from the workspace s organization and variable sets attached to the project this workspace is contained within    GET workspaces  workspace id varsets     Parameter         Description                                                   workspace id    The 
workspace ID        Query Parameters  All list endpoints support pagination  with standard URL query parameters   terraform cloud docs api docs query parameters  and searching with the  q  parameter  Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter        Description                                                                                                                                                       page number       Optional    If omitted  the endpoint returns the first page            page size         Optional    If omitted  the endpoint returns 20 varsets per page       q                 Optional    A search query string  You can search for a variable set using its name         Sample Response     json      data                  id    varset kjkN545LH2Sfercv          type    varsets          attributes              name    MyVarset            description    Full of vars and such for mass reuse            global   false           priority   false           updated at    2023 03 06T21 48 33 588Z            var count   5           workspace count   2           project count   2                 relationships              organization                data                  id    hashicorp                type    organizations                                  vars                data                                  id    var mMqadSCxZtrQJAv8                  type    vars                                              id    var hFxUUKSk35QMsRVH                  type    vars                                              id    var fkd6N48tXRmoaPxH                  type    vars                                              id    var abcbBMBMWcZw3WiV                  type    vars                                              id    var vqvRKK1ZoqQCiMwN                  type    vars                                                workspaces                data                                  id  
  ws UohFdKAHUGsQ8Dtf                  type    workspaces                                              id    ws XhGhaaCrsx9ATson                  type    workspaces                                                projects                data                                  id    prj 1JMwvPHFsdpsPhnt                  type    projects                                              id    prj SLDGqbYqELXE1obp                  type    projects                                                            links          self    https   app terraform io api v2 organizations hashicorp varsets page 5Bnumber 5D 1 page 5Bsize 5D 20        first    https   app terraform io api v2 organizations hashicorp varsets page 5Bnumber 5D 1 page 5Bsize 5D 20        prev   null       next   null       last    https   app terraform io api v2 organizations hashicorp varsets page 5Bnumber 5D 1 page 5Bsize 5D 20          meta          pagination            current page   1         page size   20         prev page   null         next page   null         total pages   1         total count   1                      Variable Relationships     Add Variable   POST varsets  varset external id relationships vars     Parameter      Description                                                      varset id    The variable set ID        Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                        Type     Default   Description                                                                                                                                                                                                                                                                                 data type                      string             Must be   vars                                                                                                       data attributes key   
         string             The name of the variable                                                                                             data attributes value          string             The value of the variable                                                                                            data attributes description    string             The description of the variable                                                                                      data attributes category       string             Whether this is a Terraform or environment variable  Valid values are   terraform   or   env                         data attributes hcl            bool      false    Whether to evaluate the value of the variable as a string of HCL code  Has no effect for environment variables       data attributes sensitive      bool      false    Whether the value is sensitive  If true  variable is not visible in the UI                                         HCP Terraform does not allow different global variable sets to contain conflicting variables with the same name and type  You will receive a 422 response if you try to add a conflicting variable to a global variable set     Status    Response                    Reason s                                                                                                                                                                                   200       JSON API document          Successfully added variable to variable set                                404       JSON API error object      Variable set not found  or user unauthorized to perform action             422       JSON API error object      Problem with payload or request  details provided in the error object        Sample Payload     json      data          type    vars        attributes            key    g6e45ae7564a17e81ef62fd1c7fa86138          value    61e400d5ccffb3782f215344481e6c82          description    cheeeese          sensitive   
false         category    terraform          hcl   false                      Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 varsets varset 4q8f7H0NHG733bBH relationships vars          Sample Response     json      data          id   var EavQ1LztoRTQHSNT        type    vars        attributes            key    g6e45ae7564a17e81ef62fd1c7fa86138          value    61e400d5ccffb3782f215344481e6c82          description    cheeeese          sensitive   false         category    terraform          hcl   false                     Update a Variable in a Variable Set   PATCH varsets  varset id relationships vars  var id     Parameter      Description                                                                                varset id    The variable set ID                    var id       The ID of the variable to delete    HCP Terraform does not allow different global variable sets to contain conflicting variables with the same name and type  You will receive a 422 response if you try to add a conflicting variable to a global variable set     Status    Response                    Reason s                                                                                                                                                                                   200       JSON API document          Successfully updated variable for variable set                             404       JSON API error object      Variable set not found  or user unauthorized to perform action             422       JSON API error object      Problem with payload or request  details provided in the error object        Sample Payload     json      data          type    vars        attributes            key    g6e45ae7564a17e81ef62fd1c7fa86138          value    61e400d5ccffb3782f215344481e6c82          description    new cheeeese          
sensitive   false         category    terraform          hcl   false                      Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json     https   app terraform io api v2 varsets varset 4q8f7H0NHG733bBH relationships vars var EavQ1LztoRTQHSNT          Sample Response     json      data          id   var EavQ1LztoRTQHSNT        type    vars        attributes            key    g6e45ae7564a17e81ef62fd1c7fa86138          value    61e400d5ccffb3782f215344481e6c82          description    new cheeeese          sensitive   false         category    terraform          hcl   false                     Delete a Variable in a Variable Set   DELETE varsets  varset id relationships vars  var id     Parameter      Description                                                                                varset id    The variable set ID                    var id       The ID of the variable to delete        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 varsets varset 4q8f7H0NHG733bBH relationships vars var EavQ1LztoRTQHSNT      On success  this endpoint responds with no content      List Variables in a Variable Set   GET varsets  varset id relationships vars     Parameter      Description                                                      varset id    The variable set ID        Sample Response     json      data                  id    var 134r1k34nj5kjn          type    vars          attributes              key    F115037558b045dd82da40b089e5db745            value    1754288480dfd3060e2c37890422905f            sensitive   false           category    terraform            hcl   false           created at    2021 10 29T18 54 29 379Z            description                      relationships              varset             
   data                  id    varset 992UMULdeDuebi1x                type    varsets                          links      related     api v2 varsets 1                              links      self     api v2 vars var BEPU9NjPVCiCfrXj                  links          self    app terraform io app varsets varset 992UMULdeDuebi1x vars        first    app terraform io app varsets varset 992UMULdeDuebi1x vars page 1        prev   null       next   null       last    app terraform io app varsets varset 992UMULdeDuebi1x vars page 1                Apply Variable Set to Workspaces  Accepts a list of workspaces to add the variable set to    POST varsets  varset id relationships workspaces     Parameter      Description                                                      varset id    The variable set ID    Properties without a default value are required     Key path        Type     Default   Description                                                                                                                                       data   type    string             Must be   workspaces                                    data   id      string             The id of the workspace to add the variable set to      Status    Response                    Reason s                                                                                                                                                                                   204                                  Successfully added variable set to the requested workspaces                404       JSON API error object      Variable set not found or user unauthorized to perform action              422       JSON API error object      Problem with payload or request  details provided in the error object        Sample Payload     json      data                  type    workspaces          id    ws YwfuBJZkdai4xj9w                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  
Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 varsets varset kjkN545LH2Sfercv relationships workspaces         Remove a Variable Set from Workspaces  Accepts a list of workspaces to remove the variable set from    DELETE varsets  varset id relationships workspaces     Parameter      Description                                                      varset id    The variable set ID    Properties without a default value are required     Key path        Type     Default   Description                                                                                                                                                 data   type    string             Must be   workspaces                                         data   id      string             The id of the workspace to delete the variable set from      Status    Response                    Reason s                                                                                                                                                                                   204                                  Successfully removed variable set from the requested workspaces            404       JSON API error object      Variable set not found or user unauthorized to perform action              422       JSON API error object      Problem with payload or request  details provided in the error object        Sample Payload     json      data                  type    workspaces          id    ws YwfuBJZkdai4xj9w                      type    workspaces          id    ws YwfuBJZkdai4xj9w                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE       data  payload json     https   app terraform io api v2 varsets varset kjkN545LH2Sfercv relationships workspaces         Apply Variable Set to Projects  Accepts a list of projects to add 
the variable set to  When you apply a variable set to a project  all the workspaces in that project will have the variable set applied to them    POST varsets  varset id relationships projects     Parameter      Description                                                      varset id    The variable set ID    Properties without a default value are required     Key path        Type     Default   Description                                                                                                                                       data   type    string             Must be   projects                                      data   id      string             The id of the project to add the variable set to        Status    Response                    Reason s                                                                                                                                                                                   204                                  Successfully added variable set to the requested projects                  404       JSON API error object      Variable set not found or user unauthorized to perform action              422       JSON API error object      Problem with payload or request  details provided in the error object        Sample Payload     json      data                  type    projects          id    prj YwfuBJZkdai4xj9w                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 varsets varset kjkN545LH2Sfercv relationships projects         Remove a Variable Set from Projects  Accepts a list of projects to remove the variable set from    DELETE varsets  varset id relationships projects     Parameter      Description                                                      varset id    The variable set ID    Properties without a default value are 
required     Key path        Type     Default   Description                                                                                                                                                 data   type    string             Must be   projects                                           data   id      string             The id of the project to delete the variable set from        Status    Response                    Reason s                                                                                                                                                                                   204                                  Successfully removed variable set from the requested projects              404       JSON API error object      Variable set not found or user unauthorized to perform action              422       JSON API error object      Problem with payload or request  details provided in the error object        Sample Payload     json      data                  type    projects          id    prj YwfuBJZkdai4xj9w                      type    projects          id    prj lkjasdfiojwerlkj                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE       data  payload json     https   app terraform io api v2 varsets varset kjkN545LH2Sfercv relationships projects        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   
developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   json api document    terraform cloud docs api docs json api documents   json api error object   https   jsonapi org format  error objects      Available Related Resources  The GET endpoints above can optionally return related resources  if requested with  the  include  query parameter   terraform cloud docs api docs inclusion of related resources   The following resource types are available     Resource Name          Description                                                                                                                         vars                  Show each variable in a variable set and all of their attributes including  id    key    value    sensitive    category    hcl    created at    and  description    "}
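The create and update endpoints above both accept the same JSON:API-shaped request body. As a rough client-side sketch, here is how that body might be assembled before sending it with curl or an HTTP library; the helper name `create_varset_payload` and the example variable values are illustrative, not part of the API:

```python
import json

def create_varset_payload(name, description="", is_global=False, priority=False,
                          workspace_ids=(), variables=()):
    """Build the documented request body for
    POST organizations/:organization_name/varsets.

    `variables` is an iterable of attribute dicts, each with at least
    "key", "value", and "category" ("terraform" or "env")."""
    return {
        "data": {
            "type": "varsets",
            "attributes": {
                "name": name,
                "description": description,
                "global": is_global,
                "priority": priority,
            },
            "relationships": {
                "workspaces": {
                    "data": [{"id": wid, "type": "workspaces"} for wid in workspace_ids]
                },
                "vars": {
                    "data": [{"type": "vars", "attributes": attrs} for attrs in variables]
                },
            },
        }
    }

# Example: a payload equivalent in shape to the Sample Payload above.
payload = create_varset_payload(
    "MyVarset",
    description="Full of vars and such for mass reuse",
    workspace_ids=["ws-z6YvbWEYoE168kpq"],
    variables=[{"key": "AWS_REGION", "value": "us-east-1", "category": "env"}],
)
print(json.dumps(payload, indent=2))
```

Sending the payload still requires the `Authorization: Bearer $TOKEN` and `Content-Type: application/vnd.api+json` headers shown in the sample requests.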
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the assessment results endpoint to query health assessments Get continuous validation and drift detection health assessment results using the HTTP API 400 https developer mozilla org en US docs Web HTTP Status 400 page title Assessments API Docs HCP Terraform","answers":"---\npage_title: Assessments - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/assessment-results` endpoint to query health assessments. Get continuous validation and drift detection health assessment results using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n# Assessment Results API\n\nAn Assessment Result is the summary record of an instance of health assessment. HCP Terraform can perform automatic health assessments in a workspace to assess whether its real infrastructure matches the requirements defined in its Terraform configuration. 
Refer to [Health](\/terraform\/cloud-docs\/workspaces\/health) for more details.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/health-assessments.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n## Show Assessment Result\n\nAny user with read access to a workspace can retrieve assessment results for the workspace.\n\n`GET api\/v2\/assessment-results\/:assessment_result_id`\n\n| Parameter               | Description              |\n| ----------------------- | ------------------------ |\n| `:assessment_result_id` | The assessment result ID |\n\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/assessment-results\/asmtres-cHh5777xm\n```\n\n### Sample Response\n\n```json\n{\n  \"id\": \"asmtres-UG5rE9L1373hMYMA\",\n  \"type\": \"assessment-results\",\n  \"data\": {\n    \"attributes\": {\n      \"drifted\": true,\n      \"succeeded\": true,\n      \"error-msg\": null,\n      \"created-at\": \"2022-07-02T22:29:58+00:00\"\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/assessment-results\/asmtres-UG5rE9L1373hMYMA\/\",\n      \"json-output\": \"\/api\/v2\/assessment-results\/asmtres-UG5rE9L1373hMYMA\/json-output\",\n      \"json-schema\": \"\/api\/v2\/assessment-results\/asmtres-UG5rE9L1373hMYMA\/json-schema\",\n      \"log-output\": \"\/api\/v2\/assessment-results\/asmtres-UG5rE9L1373hMYMA\/log-output\"\n    }\n  }\n}\n```\n\n## Retrieve the JSON output from the assessment execution\n\nThe following endpoints retrieve files documenting the plan, schema, and logged runtime associated with the specified assessment result. They provide complete context for an assessment result. The responses do not adhere to the JSON API spec. \n\nYou cannot access these endpoints with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). 
You must access them with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens) that has admin level access to the workspace. Refer to [Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) for details.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### JSON Plan\n\nThe following endpoint returns the JSON plan output associated with the assessment result.\n\n`GET api\/v2\/assessment-results\/:assessment_result_id\/json-output`\n\n#### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/assessment-results\/asmtres-cHh5777xm\/json-output\n```\n\n### JSON Schema file\n\nThe following endpoint returns the JSON [provider schema](\/terraform\/cli\/commands\/providers\/schema) associated with the assessment result.\n\n`GET api\/v2\/assessment-results\/:assessment_result_id\/json-schema`\n\n#### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/assessment-results\/asmtres-cHh5777xm\/json-schema\n```\n\n### JSON Log Output\n\nThe following endpoint returns Terraform JSON log output.\n\n`GET api\/v2\/assessment-results\/:assessment_result_id\/log-output`\n\n#### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/assessment-results\/asmtres-cHh5777xm\/log-output\n```","site":"terraform","answers_cleaned":"
"}
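Taken together, the endpoints above return a small JSON document plus three follow-up files. As a minimal sketch (Python standard library only; the authenticated fetch shown in the sample requests is omitted), here is how a client might pick the assessment flags out of the Show Assessment Result payload. The payload is the sample response from these docs:

```python
import json

# Sample "Show Assessment Result" response body, copied from the docs above.
# Note the shape: "id"/"type" at the top level, attributes nested under "data".
body = """
{
  "id": "asmtres-UG5rE9L1373hMYMA",
  "type": "assessment-results",
  "data": {
    "attributes": {
      "drifted": true,
      "succeeded": true,
      "error-msg": null,
      "created-at": "2022-07-02T22:29:58+00:00"
    },
    "links": {
      "json-output": "/api/v2/assessment-results/asmtres-UG5rE9L1373hMYMA/json-output",
      "log-output": "/api/v2/assessment-results/asmtres-UG5rE9L1373hMYMA/log-output"
    }
  }
}
"""

result = json.loads(body)
attrs = result["data"]["attributes"]

# A workspace whose assessment succeeded may still have drifted resources.
if attrs["succeeded"] and attrs["drifted"]:
    # The "links" entries are relative API paths for the follow-up files.
    print("drift detected; details at", result["data"]["links"]["json-output"])
```

In a real client, `body` would come from `GET api/v2/assessment-results/:assessment_result_id` with the bearer-token header shown in the sample requests.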
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Project Team Access API Docs HCP Terraform Use the team projects endpoint to manage team access to a project List show add update and remove team access from a project using the HTTP API","answers":"---\npage_title: Project Team Access - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/team-projects` endpoint to manage team access to a project. List, show, add, update, and remove team access from a project using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Project Team Access API\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n-> **Note:** Team management is available in HCP Terraform 
**Standard** Edition. [Learn more about HCP Terraform pricing](https:\/\/www.hashicorp.com\/products\/terraform\/pricing).\n<!-- END: TFC:only name:pnp-callout -->\n\nThe team access APIs are used to associate a team with permissions on a project. A single `team-project` resource contains the relationship between the Team and Project, including the privileges the team has on the project.\n\n## Resource permissions\n\nA `team-project` resource represents a team's local permissions on a specific project. Teams can also have organization-level permissions that grant access to projects. HCP Terraform uses the more permissive access level. For example, a team with the **Manage projects** permission enabled has admin access on all projects, even if their `team-project` on a particular project only grants read access. For more information, refer to [Managing Project Access](\/terraform\/cloud-docs\/users-teams-organizations\/teams\/manage#managing-project-access).\n\nAny member of an organization can view team access relative to their own team memberships, including secret teams of which they are a member. Organization owners and project admins can modify team access or view the full set of secret team accesses. The organization token and the owners team token can act as an owner on these endpoints. 
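The interaction between organization-level and project-level grants described under Resource permissions can be sketched as follows. This is an illustration only: the `effective_access` helper and the simple ranking are assumptions made for this sketch, not HCP Terraform API objects:

```python
# Rank the fixed project access levels from least to most permissive.
# ("custom" is excluded here: its effective permissions depend on the
# individual permission flags rather than a single rank.)
LEVELS = ["read", "write", "maintain", "admin"]

def effective_access(team_project_access: str, org_level_access: str) -> str:
    """Illustrative helper: the more permissive of the two grants wins."""
    return max(team_project_access, org_level_access, key=LEVELS.index)

# A team with the organization-wide "Manage projects" permission effectively
# has admin on every project, even where its team-project only grants read:
print(effective_access("read", "admin"))  # admin
```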
Refer to [Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) for additional information.\n\n## Project Team Access Levels\n| Access Level    | Description                                                                                                                                                      |\n|-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `read`          | Read project and Read workspace access role on project workspaces                                                                                                |\n| `write`         | Read project and Write workspace access role on project workspaces                                                                                               |\n| `maintain`      | Read project and Admin workspace access role on project workspaces                                                                                               |\n| `admin`         | Admin project, Admin workspace access role on project workspaces, create workspaces within project, move workspaces between projects, manage project team access |\n| `custom`        | Custom access permissions on project and project's workspaces                                                                                                   |\n\n## List Team Access to a Project\n\n`GET \/team-projects`\n\n| Status  | Response                                        | Reason                                                   |\n|---------|-------------------------------------------------|----------------------------------------------------------|\n| [200][] | [JSON API document][] (`type: \"team-projects\"`) | The request was successful                               |\n| [404][] | [JSON API error object][]                       | Project not found or user unauthorized to perform action |\n\n### 
Query Parameters\n\n[These are standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters).\n\n| Parameter             | Description                                           |\n|-----------------------|-------------------------------------------------------|\n| `filter[project][id]` | **Required.** The project ID to list team access for. |\n| `page[number]`        | **Optional.**                                         |\n| `page[size]`          | **Optional.**                                         |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  \"https:\/\/app.terraform.io\/api\/v2\/team-projects?filter%5Bproject%5D%5Bid%5D=prj-ckZoJwdERaWcFHwi\"\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"tprj-TLznAnYdcsD2Dcmm\",\n      \"type\": \"team-projects\",\n      \"attributes\": {\n        \"access\": \"read\",\n        \"project-access\": {\n          \"settings\": \"read\",\n          \"teams\": \"none\"\n        },\n        \"workspace-access\": {\n          \"create\": false,\n          \"move\": false,\n          \"locking\": false,\n          \"delete\": false,\n          \"runs\": \"read\",\n          \"variables\": \"read\",\n          \"state-versions\": \"read\",\n          \"sentinel-mocks\": \"none\",\n          \"run-tasks\": false\n        }\n      },\n      \"relationships\": {\n        \"team\": {\n          \"data\": {\n            \"id\": \"team-KpibQGL5GqRAWBwT\",\n            \"type\": \"teams\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/teams\/team-KpibQGL5GqRAWBwT\"\n          }\n        },\n      
  \"project\": {\n          \"data\": {\n            \"id\": \"prj-ckZoJwdERaWcFHwi\",\n            \"type\": \"projects\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/projects\/prj-ckZoJwdERaWcFHwi\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/team-projects\/tprj-TLznAnYdcsD2Dcmm\"\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/team-projects?filter%5Bproject%5D%5Bid%5D=prj-ckZoJwdERaWcFHwi&page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/team-projects?filter%5Bproject%5D%5Bid%5D=prj-ckZoJwdERaWcFHwi&page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/team-projects?filter%5Bproject%5D%5Bid%5D=prj-ckZoJwdERaWcFHwi&page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 1\n    }\n  }\n}\n```\n\n## Show a Team Access relationship\n\n`GET \/team-projects\/:id`\n\n| Status  | Response                                        | Reason                                                       |\n|---------|-------------------------------------------------|--------------------------------------------------------------|\n| [200][] | [JSON API document][] (`type: \"team-projects\"`) | The request was successful                                   |\n| [404][] | [JSON API error object][]                       | Team access not found or user unauthorized to perform action |\n\n| Parameter | Description                                                                                                                              
|\n|-----------|------------------------------------------------------------------------------------------------------------------------------------------|\n| `:id`     | The ID of the team\/project relationship. Obtain this from the [list team access action](#list-team-access-to-a-project) described above. |\n\nAs mentioned in [Add Team Access to a Project](#add-team-access-to-a-project) and [Update Team Access to a Project](#update-team-access-to-a-project), several permission attributes are not editable unless you set `access` to `custom`. If you set `access` to `read`, `write`, `maintain`, or `admin`, certain attributes are read-only and reflect the _implicit permissions_ granted to the current access level.\n\nFor example, if you set `access` to `read`, the implicit permission level for project settings and workspace runs is \"read\". Conversely, if you set the access level to `admin`, the implicit permission level for project settings is \"delete\", while the workspace runs permission is \"apply\". 
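Those implicit levels can be expressed as a lookup. The values below are transcribed from the Implied Custom Permission Levels table at the end of this document; the `implied` helper itself is illustrative, not part of the API:

```python
# Implicit permission levels granted by two of the fixed access levels,
# transcribed from the "Implied Custom Permission Levels" table.
IMPLIED = {
    "read":  {"project-access.settings": "read",   "workspace-access.runs": "read"},
    "admin": {"project-access.settings": "delete", "workspace-access.runs": "apply"},
}

def implied(access: str, permission: str) -> str:
    """Illustrative lookup of the read-only value reported for `permission`."""
    return IMPLIED[access][permission]

print(implied("read", "workspace-access.runs"))    # read
print(implied("admin", "project-access.settings")) # delete
```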
To see all of the implied permissions at different access levels, see [Implied Custom Permission Levels](#implied-custom-permission-levels).\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/team-projects\/tprj-s68jV4FWCDwWvQq8\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"tprj-TLznAnYdcsD2Dcmm\",\n    \"type\": \"team-projects\",\n    \"attributes\": {\n      \"access\": \"read\",\n      \"project-access\": {\n        \"settings\": \"read\",\n        \"teams\": \"none\"\n      },\n      \"workspace-access\": {\n        \"create\": false,\n        \"move\": false,\n        \"locking\": false,\n        \"delete\": false,\n        \"runs\": \"read\",\n        \"variables\": \"read\",\n        \"state-versions\": \"read\",\n        \"sentinel-mocks\": \"none\",\n        \"run-tasks\": false\n      }\n    },\n    \"relationships\": {\n      \"team\": {\n        \"data\": {\n          \"id\": \"team-KpibQGL5GqRAWBwT\",\n          \"type\": \"teams\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/teams\/team-KpibQGL5GqRAWBwT\"\n        }\n      },\n      \"project\": {\n        \"data\": {\n          \"id\": \"prj-ckZoJwdERaWcFHwi\",\n          \"type\": \"projects\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/projects\/prj-ckZoJwdERaWcFHwi\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/team-projects\/tprj-TLznAnYdcsD2Dcmm\"\n    }\n  }\n}\n```\n\n## Add Team Access to a Project\n\n`POST \/team-projects`\n\n| Status  | Response                                        | Reason                                                           |\n|---------|-------------------------------------------------|------------------------------------------------------------------|\n| [200][] | [JSON API document][] (`type: 
\"team-projects\"`) | The request was successful                                       |\n| [404][] | [JSON API error object][]                       | Project or Team not found or user unauthorized to perform action |\n| [422][] | [JSON API error object][]                       | Malformed request body (missing attributes, wrong types, etc.)   |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                                  | Type    | Default | Description                                                                                        |\n|-----------------------------------------------------------|-------- |---------|----------------------------------------------------------------------------------------------------|\n| `data.type`                                               | string  |         | Must be `\"team-projects\"`.                                                                         |\n| `data.attributes.access`                                  | string  |         | The type of access to grant. Valid values are `read`, `write`, `maintain`, `admin`, or `custom`.     |\n| `data.relationships.project.data.type`                    | string  |         | Must be `projects`.                                                                                |\n| `data.relationships.project.data.id`                      | string  |         | The project ID to which the team is to be added.                                                   |\n| `data.relationships.team.data.type`                       | string  |         | Must be `teams`.                                                                                   |\n| `data.relationships.team.data.id`                         | string  |         | The ID of the team to add to the project.                                                          
|\n| `data.attributes.project-access.settings`                 | string  | \"read\"  | If `access` is `custom`, the permission to grant for the project's settings. Can only be used when `access` is `custom`. Valid values include `read`, `update`, or `delete`.                                                   |\n| `data.attributes.project-access.teams`                    | string  | \"none\"  | If `access` is `custom`, the permission to grant for the project's teams. Can only be used when `access` is `custom`. Valid values include `none`, `read`, or `manage`.                                                   |\n| `data.attributes.workspace-access.runs`                   | string  | \"read\"  | If `access` is `custom`, the permission to grant for the project's workspaces' runs. Can only be used when `access` is `custom`. Valid values include `read`, `plan`, or `apply`.                                                    |\n| `data.attributes.workspace-access.sentinel-mocks`         | string  | \"none\"  | If `access` is `custom`, the permission to grant for the project's workspaces' Sentinel mocks. Can only be used when `access` is `custom`. Valid values include `none`, or `read`.                                                     |\n| `data.attributes.workspace-access.state-versions`         | string  | \"none\"  | If `access` is `custom`, the permission to grant for the project's workspaces state versions. Can only be used when `access` is `custom`. Valid values include `none`, `read-outputs`, `read`, or `write`.                                                    |\n| `data.attributes.workspace-access.variables`              | string  | \"none\"  | If `access` is `custom`, the permission to grant for the project's workspaces' variables. Can only be used when `access` is `custom`. Valid values include `none`, `read`, or `write`.                                                    
|\n| `data.attributes.workspace-access.create`                 | boolean | false  | If `access` is `custom`, this permission allows the team to create workspaces in the project.                 |\n| `data.attributes.workspace-access.locking`                | boolean | false  | If `access` is `custom`, the permission granting the ability to manually lock or unlock the project's workspaces.                                                   |\n| `data.attributes.workspace-access.delete`                | boolean | false  | If `access` is `custom`, the permission granting the ability to delete the project's workspaces.                                                   |\n| `data.attributes.workspace-access.move`                   | boolean | false  | If `access` is `custom`, this permission allows the team to move workspaces into and out of the project. The team must also have permissions to the project(s) receiving the workspace(s).       |\n| `data.attributes.workspace-access.run-tasks`              | boolean | false  | If `access` is `custom`, this permission allows the team to manage run tasks within the project's workspaces. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"access\": \"read\"\n    },\n    \"relationships\": {\n      \"project\": {\n        \"data\": {\n          \"type\": \"projects\",\n          \"id\": \"prj-ckZoJwdERaWcFHwi\"\n        }\n      },\n      \"team\": {\n        \"data\": {\n          \"type\": \"teams\",\n          \"id\": \"team-xMGyoUhKmTkTzmAy\"\n        }\n      }\n    },\n    \"type\": \"team-projects\"\n  }\n}\n```\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/team-projects\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"tprj-WbG7p5KnT7S7HZqw\",\n    \"type\": \"team-projects\",\n    \"attributes\": {\n      \"access\": \"read\",\n      \"project-access\": {\n        \"settings\": \"read\",\n        \"teams\": \"none\"\n      },\n      \"workspace-access\": {\n        \"create\": false,\n        \"move\": false,\n        \"locking\": false,\n        \"runs\": \"read\",\n        \"variables\": \"read\",\n        \"state-versions\": \"read\",\n        \"sentinel-mocks\": \"none\",\n        \"run-tasks\": false\n      }\n    },\n    \"relationships\": {\n      \"team\": {\n        \"data\": {\n          \"id\": \"team-xMGyoUhKmTkTzmAy\",\n          \"type\": \"teams\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/teams\/team-xMGyoUhKmTkTzmAy\"\n        }\n      },\n      \"project\": {\n        \"data\": {\n          \"id\": \"prj-ckZoJwdERaWcFHwi\",\n          \"type\": \"projects\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/projects\/prj-ckZoJwdERaWcFHwi\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/team-projects\/tprj-WbG7p5KnT7S7HZqw\"\n    }\n  }\n}\n```\n\n## Update Team Access to a Project\n\n`PATCH 
\/team-projects\/:id`\n\n| Status  | Response                                        | Reason                                                         |\n|---------|-------------------------------------------------|----------------------------------------------------------------|\n| [200][] | [JSON API document][] (`type: \"team-projects\"`) | The request was successful                                     |\n| [404][] | [JSON API error object][]                       | Team Access not found or user unauthorized to perform action   |\n| [422][] | [JSON API error object][]                       | Malformed request body (missing attributes, wrong types, etc.) |\n\n| Parameter                | Type   | Default | Description                                                                                                                              |\n|--------------------------|--------|---------|------------------------------------------------------------------------------------------------------------------------------------------|\n| `:id`                    |        |         | The ID of the team\/project relationship. Obtain this from the [list team access action](#list-team-access-to-a-project) described above. |\n| `data.attributes.access` | string |         | The type of access to grant. Valid values are `read`, `write`, `maintain`, `admin`, or `custom`.                                         
|\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/team-projects\/tprj-WbG7p5KnT7S7HZqw\n```\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"id\": \"tprj-WbG7p5KnT7S7HZqw\",\n    \"attributes\": {\n      \"access\": \"custom\",\n      \"project-access\": {\n        \"settings\": \"delete\",\n        \"teams\": \"manage\"\n      },\n      \"workspace-access\": {\n        \"runs\": \"apply\",\n        \"sentinel-mocks\": \"read\",\n        \"state-versions\": \"write\",\n        \"variables\": \"write\",\n        \"create\": true,\n        \"locking\": true,\n        \"delete\": true,\n        \"move\": true,\n        \"run-tasks\": true\n      }\n    }\n  }\n}\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"tprj-WbG7p5KnT7S7HZqw\",\n    \"type\": \"team-projects\",\n    \"attributes\": {\n      \"access\": \"custom\",\n      \"project-access\": {\n        \"settings\": \"delete\",\n        \"teams\": \"manage\"\n      },\n      \"workspace-access\": {\n        \"runs\": \"apply\",\n        \"sentinel-mocks\": \"read\",\n        \"state-versions\": \"write\",\n        \"variables\": \"write\",\n        \"create\": true,\n        \"locking\": true,\n        \"delete\": true,\n        \"move\": true,\n        \"run-tasks\": true\n      }\n    },\n    \"relationships\": {\n      \"team\": {\n        \"data\": {\n          \"id\": \"team-xMGyoUhKmTkTzmAy\",\n          \"type\": \"teams\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/teams\/team-xMGyoUhKmTkTzmAy\"\n        }\n      },\n      \"project\": {\n        \"data\": {\n          \"id\": \"prj-ckZoJwdERaWcFHwi\",\n          \"type\": \"projects\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/projects\/prj-ckZoJwdERaWcFHwi\"\n        }\n      
}\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/team-projects\/tprj-WbG7p5KnT7S7HZqw\"\n    }\n  }\n}\n```\n\n## Remove Team Access from a Project\n\n`DELETE \/team-projects\/:id`\n\n| Status  | Response                  | Reason                                                       |\n|---------|---------------------------|--------------------------------------------------------------|\n| [204][] |                           | The Team Access was successfully destroyed                   |\n| [404][] | [JSON API error object][] | Team Access not found or user unauthorized to perform action |\n\n| Parameter | Description                                                                                                                              |\n|-----------|------------------------------------------------------------------------------------------------------------------------------------------|\n| `:id`     | The ID of the team\/project relationship. Obtain this from the [list team access action](#list-team-access-to-a-project) described above. |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/team-projects\/tprj-WbG7p5KnT7S7HZqw\n```\n\n## Implied Custom Permission Levels\n\nInstead of one of the fixed access levels (`read`, `write`, `maintain`, or `admin`), you can use the `custom` access level and set each of the following permissions individually.\nThe table below lists each access level alongside its implicit custom permission levels. 
If you use the `custom` access level and do not specify a certain permission's level, that permission uses the default value listed below.\n\n| Permissions                     | `read` | `write` | `maintain` | `admin`  | `custom` default |\n|---------------------------------|--------|---------|------------|----------|------------------|\n| project-access.settings         | \"read\" | \"read\"  | \"read\"     | \"delete\" | \"read\"           |\n| project-access.teams            | \"none\" | \"none\"  | \"none\"     | \"manage\" | \"none\"           |\n| workspace-access.runs           | \"read\" | \"apply\" | \"apply\"    | \"apply\"  | \"read\"           |\n| workspace-access.sentinel-mocks | \"none\" | \"read\"  | \"read\"     | \"read\"   | \"none\"           |\n| workspace-access.state-versions | \"read\" | \"write\" | \"write\"    | \"write\"  | \"none\"           |\n| workspace-access.variables      | \"read\" | \"write\" | \"write\"    | \"write\"  | \"none\"           |\n| workspace-access.create         | false  | false   | true       | true     | false            |\n| workspace-access.locking        | false  | true    | true       | true     | false            |\n| workspace-access.delete         | false  | false   | true       | true     | false            |\n| workspace-access.move           | false  | false   | false      | true     | false            |\n| workspace-access.run-tasks      | false  | false   | true       | true     | false            |","site":"terraform"}
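The implied-permission table above is simply a per-permission lookup keyed by access level. The following Python sketch encodes it (illustrative only: the dictionary keys mirror the API's attribute key paths, and the `implied_permission` helper is not part of any official client):

```python
# Illustrative lookup for the "Implied Custom Permission Levels" table.
# Each tuple lists the value implied by the read, write, maintain, and
# admin access levels, followed by the default used for "custom" access.
ACCESS_LEVELS = ("read", "write", "maintain", "admin", "custom")

IMPLIED_PERMISSIONS = {
    "project-access.settings":         ("read", "read", "read", "delete", "read"),
    "project-access.teams":            ("none", "none", "none", "manage", "none"),
    "workspace-access.runs":           ("read", "apply", "apply", "apply", "read"),
    "workspace-access.sentinel-mocks": ("none", "read", "read", "read", "none"),
    "workspace-access.state-versions": ("read", "write", "write", "write", "none"),
    "workspace-access.variables":      ("read", "write", "write", "write", "none"),
    "workspace-access.create":         (False, False, True, True, False),
    "workspace-access.locking":        (False, True, True, True, False),
    "workspace-access.delete":         (False, False, True, True, False),
    "workspace-access.move":           (False, False, False, True, False),
    "workspace-access.run-tasks":      (False, False, True, True, False),
}

def implied_permission(permission: str, access: str):
    """Value implied for `permission` at `access`; for "custom", the default."""
    return IMPLIED_PERMISSIONS[permission][ACCESS_LEVELS.index(access)]
```

For example, `implied_permission("workspace-access.runs", "write")` returns `"apply"`, matching the `write` column of the table.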
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the organization audit trail endpoint to query audit events Get a list of an organization s audit events for the past 14 days using the HTTP API page title Audit Trails API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 tfc only true","answers":"---\npage_title: Audit Trails - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/organization\/audit-trail` endpoint to query audit events. Get a list of an organization's audit events for the past 14 days using the HTTP API.\ntfc_only: true\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Audit Trails API\n\nHCP Terraform retains 14 days of audit log information. 
The audit trails API exposes a stream of audit events, which describe changes to the application entities (workspaces, runs, etc.) that belong to an HCP Terraform organization. Unlike other APIs, the Audit Trails API does not use the [JSON API specification](\/terraform\/cloud-docs\/api-docs#json-api-formatting).\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/audit-trails.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\n## List an organization's audit events\n\n`GET \/organization\/audit-trail`\n\n-> **Note:** This endpoint cannot be accessed with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens). You must access it with an [organization token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens).\n\n### Query Parameters\n\n[These are standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                                                                                                                      |\n| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `since`        | **Optional.** Returns only audit trails created after this date (UTC and in [ISO8601 Format](https:\/\/www.iso.org\/iso-8601-date-and-time-format.html) - YYYY-MM-DDTHH:MM:SS.SSSZ) |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.                                                                                                               
|\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 1000 audit events per page.                                                                                                   |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --request GET \\\n  \"https:\/\/app.terraform.io\/api\/v2\/organization\/audit-trail?page[number]=1&since=2020-05-30T17:52:46.000Z\"\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"ae66e491-db59-457c-8445-9c908ee726ae\",\n      \"version\": \"0\",\n      \"type\": \"Resource\",\n      \"timestamp\": \"2020-06-30T17:52:46.000Z\",\n      \"auth\": {\n        \"accessor_id\": \"user-MaPuLxAXvtq2PWTH\",\n        \"description\": \"pveverka\",\n        \"type\": \"Client\",\n        \"impersonator_id\": null,\n        \"organization_id\": \"org-AGLwRmx1snv34Yts\"\n      },\n      \"request\": {\n        \"id\": \"4df584d4-7e2a-01e6-6cc0-4adbefa020e6\"\n      },\n      \"resource\": {\n        \"id\": \"at-sjt83qTw3GZatuPm\",\n        \"type\": \"authentication_token\",\n        \"action\": \"create\",\n        \"meta\": null\n      }\n    }\n  ],\n  \"pagination\": {\n    \"current_page\": 1,\n    \"prev_page\": null,\n    \"next_page\": 2,\n    \"total_pages\": 8,\n    \"total_count\": 778\n  }\n}\n```\n\n## Standard response fields\n\nEvery audit log event in the response array includes the following standard fields:\n\n| Key                    | Data Type or format   | Description                                                 |\n| ---------------------- | ------------ | ----------------------------------------------------------- |\n| `id`                   | UUID         | The ID of this audit trail                     |\n| `version`              | number            | The audit trail schema version                              |\n| `type`                 | string            | The type of audit trail (defaults to `Resource`)            |\n| 
`timestamp`            | string | UTC ISO8601 DateTime (for example, `2020-06-16T20:26:58.000Z`)      |\n| `auth.accessor_id`     | string            | The ID of the audited actor (for example, `user-V3R563qtJNcExAkN`)      |\n| `auth.description`     | string       | Username of the audited actor                                   |\n| `auth.type`            | string       | Authentication type; either `Client`, `Impersonated`, or `System`. |\n| `auth.impersonator_id` | string            | The ID of the impersonating actor (if available)                |\n| `auth.organization_id` | string            | The ID of the organization (for example, `org-QpXoEnULx3r2r1CA`)        |\n| `request.id`           | UUID         | The ID of the request (if available)             |\n| `resource.id`          | string            | The ID of the resource (for example, `run-FwnENkvDnrpyFC7M`)            |\n| `resource.type`        | string       | Type of resource (for example, `run`)                               |\n| `resource.action`      | string       | Action audited (for example, `applied`)                             |\n| `resource.meta`        | map            | Key-value metadata about this audited event (defaults to `null`)                 |\n\nThe following audit trail events _only_ contain these standard fields:\n\n| Event                      | Action                                                       | Description                                                |\n|----------------------------|--------------------------------------------------------------|------------------------------------------------------------|\n| `agent`                    | `destroy`                                                    | Logged when an agent is destroyed.                         |\n| `authentication_token`     | `create`, `show`, `destroy`                                 | Events related to authentication tokens.                  
|\n| `configuration_version`    | `show`, `download`                                           | Events related to configuration versions.                 |\n| `notification_configuration` | `create`, `update`, `destroy`, `enable`                     | Events related to notification configurations.            |\n| `oauth_client`             | `create`, `update`, `destroy`                                | Events related to OAuth clients.                           |\n| `oauth_token`              | `index`, `show`, `update`, `destroy`                        | Events related to OAuth tokens.                            |\n| `organization`             | `create`, `update`, `destroy`                               | Events related to organizations.                           |\n| `organization_user`        | `create`, `update`, `destroy`                               | Events related to organization users.                      |\n| `policy`                   | `update`, `destroy`                                          | Events related to policies.                                |\n| `policy_check`             | `override`                                                   | Events related to policy checks.                           |\n| `policy_config`            | `create`                                                     | Events related to policy configurations.                  |\n| `policy_set`               | `destroy`                                                    | Events related to policy sets.                             |\n| `policy_version`           | `create`                                                     | Events related to policy versions.                         |\n| `project`                  | `create`, `update`, `destroy`                               | Events related to projects.                                |\n| `registry_module`          | `destroy`, `update`                                          | Events related to registry modules.  
                      |\n| `registry_provider`        | `create`, `destroy`                                          | Events related to registry providers.                      |\n| `registry_provider_platform` | `create`, `destroy`                                          | Events related to registry provider platforms.             |\n| `registry_provider_version` | `create`, `destroy`                                          | Events related to registry provider versions.             |\n| `run`                      | `apply`, `cancel`, `force_cancel`, `discard`, `force_execute`, `create` | Events related to runs.                                    |\n| `run_trigger`              | `create`, `destroy`                                          | Events related to run triggers.                            |\n| `saml_configuration`       | `create`, `update`, `destroy`, `enable`, `disable`          | Events related to SAML configurations.                    |\n| `ssh_key`                  | `index`, `show`, `create`, `update`, `destroy`              | Events related to SSH keys.                                |\n| `stack`                     | `create`                                                    | Events related to creating stacks.                         |\n| `state_version`            | `index`, `show`, `create`, `soft_delete_backing_data`, `restore_backing_data`, `permanently_delete_backing_data` | Events related to state versions.                         |\n| `task_stage`                  | `override`              | Events related to an overridden task stage.                               |\n| `user`                  | `index`              | Events related to indexing users.                                |\n| `var`                  | `index`, `show`, `create`, `update`, `destroy`              | Events related to variables.                             
|\n| `vcs_repo`                 | `create`                                                    | Events related to creating a connection to a VCS repo.                     |\n\nThe following sections list the audit log events containing both the standard response schema and a specific payload for each action.\n\n## Data retention policy events\n\nYou can define [data retention policies](\/terraform\/cloud-docs\/workspaces\/settings\/deletion#data-retention-policies) to help reduce object storage consumption. \n\n### Destroy\n\nAn HCP Terraform organization emits this event when it destroys a data retention policy (`data_retention_policy`). Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key                        | Data Type | Description                                             |\n| ---                        | ---       | ---                                                     |\n| `delete_older_than_n_days` | integer   | The number of days to retain data before deletion. Must be greater than 0. |\n| `organization`             | string    | The organization associated with the data retention policy. |\n| `organization_id`          | string    | The ID of the organization associated with the data retention policy. |\n\n## Project events\n\n[Projects](\/terraform\/cloud-docs\/projects\/manage) let you organize your workspaces and scope access to workspace resources.\n\n### Add Team\n\nAn HCP Terraform organization emits this event when someone in your organization adds a team (`add_team`) to a project. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key          | Data Type | Description                           |\n| ---          | ---       | ---                                   |\n| `team`       | string    | The name or identifier of the team.    
|\n| `permissions` | string   | The permissions granted to the team.  |\n\n### Update Team Permissions\n\nAn HCP Terraform organization emits this event when someone in your organization updates the permissions (`update_team_permissions`) of a team on a project. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key          | Data Type | Description                           |\n| ---          | ---       | ---                                   |\n| `team`       | string    | The name or identifier of the team.    |\n| `permissions` | string   | The updated permissions for the team. |\n\n### Remove Team\n\nAn HCP Terraform organization emits this event when someone in your organization removes a team (`remove_team`) from a project. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key          | Data Type | Description                           |\n| ---          | ---       | ---                                   |\n| `team`       | string    | The name or identifier of the team.    |\n\n## Policy checks\n\n[Policy checks run Sentinel policies](\/terraform\/enterprise\/policy-enforcement\/manage-policy-sets). Policy checks have a lifecycle that includes the following events:\n  * `queued`\n  * `passed`\n  * `soft_failed`\n  * `hard_failed`\n  * `overridden`\n  * `canceled`\n  * `force_canceled`\n  * `errored`\n\nEvery policy check event in the response array includes the following standard fields:\n\n| Key                   | Data Type    | Description                                     |\n| ---                   | ---          | ---                                             |\n| `comment`             | `null` or string       | The comment associated with the policy check. |\n| `run.id`              | string       | The ID of the associated run.                    
|\n| `run.message`         | string       | The message associated with the run.            |\n| `workspace.id`        | string       | The ID of the associated workspace.             |\n| `workspace.name`      | string       | The name of the associated workspace.           |\n\n### Passed, soft failed, hard failed, and errored\n\nAlongside the [standard audit trail fields](#standard-response-fields) and [standard policy check fields](#policy-checks), certain policy check events include a result (`includes_result`) and return the following fields:\n\n| Key                   | Data Type       | Description                                     |\n| --------------------- | --------------- | ----------------------------------------------- |\n| `result`              | [Policy Result] | An array of policy results.                  |\n| `passed`              | integer         | Number of policy checks passed.                 |\n| `total_failed`        | integer         | Total number of failed policy checks.           |\n| `hard_failed`         | integer         | Number of policy checks with hard failures.     |\n| `soft_failed`         | integer         | Number of policy checks with soft failures.     |\n| `advisory_failed`     | integer         | Number of policy checks with advisory failures. |\n| `duration_ms`         | integer         | Duration of the policy check in milliseconds.   |\n| `sentinel`            | string          | The Sentinel policy associated with the check.  |\n\n### Overridden, canceled, and force canceled\n\nAlongside the [standard audit trail fields](#standard-response-fields) and [standard policy check fields](#policy-checks), certain event responses include the following fields:\n\n| Key                   | Data Type    | Description                                     |\n| ---                   | ---          | ---                                             |\n| `actor`               | string       | The user who performed the action.      
        |\n\n## Policy evaluation\n\n[Policy evaluations](\/terraform\/enterprise\/policy-enforcement\/policy-results#view-policy-results) run in the HCP Terraform agent within HCP Terraform's infrastructure. Policy evaluations have a lifecycle that includes the following events:\n  * `queue`\n  * `run`\n  * `pass`\n  * `fail`\n  * `override`\n  * `error`\n  * `cancel`\n  * `force_cancel`\n\nEvery policy evaluation event in the response array includes the following standard fields:\n\n| Key                   | Data Type    | Description                                     |\n| ---                   | ---          | ---                                             |\n| `comment`             | `null` or string       | The comment associated with the policy evaluation. |\n| `payload.workspace.id` | string       | The ID of the associated workspace.             |\n| `payload.workspace.name` | string     | The name of the associated workspace.           |\n| `payload.run.id`      | string       | The ID of the associated run.                    |\n| `payload.run.message` | string       | The message associated with the run.            |\n| `actor`               | string       | The policy owner.              |\n\nPolicy checks and policy evaluations serve the same purpose, but have different workflows for enforcing policies. For more information, refer to [Differences between policy checks and policy evaluations](\/terraform\/enterprise\/policy-enforcement\/manage-policy-sets#differences-between-policy-checks-and-policy-evaluations).\n\n## Registry module events\n\n[HCP Terraform's private registry](\/terraform\/cloud-docs\/registry) lets you share Terraform providers and Terraform modules across your organization.\n\n### Destroy version\n\nAn HCP Terraform organization emits this event when someone in your organization destroys a specific version of a registry module (`destroy_version`). 
Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key       | Data Type | Description                                      |\n| ---       | ---       | ---                                              |\n| `version` | integer   | The version number of the registry module to destroy. |\n\n## Run task events\n\nYou can create HCP Terraform [run tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks) at an organization level to directly integrate third-party tools and services at certain stages in the HCP Terraform run lifecycle. You can use run tasks to validate Terraform configuration files, analyze execution plans before applying them, scan for security vulnerabilities, or perform other custom actions.\n\n### Create\n\nAn HCP Terraform organization emits this event when someone in your organization creates a task. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key                   | Data Type    | Description                                               |\n| ---                   | ---          | ---                                                       |\n| `task_name`           | string       | The name of the task.                                     |\n| `task_category`       | string       | The task's category |\n| `task_url`            | string       | The task's URL.                         |\n| `task_enabled`        | boolean      | Indicates whether the task is enabled or not.            |\n| `task_hmac_key_present` | boolean    | Specifies if an HMAC key is present for the task.         |\n\n### Destroy\n\nAn HCP Terraform organization emits this event when someone in your organization destroys a task. 
Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key                   | Data Type    | Description                                               |\n| ---                   | ---          | ---                                                       |\n| `task_name`           | string       | The name of the task to be destroyed.                    |\n| `task_category`       | string       | The category of the task to be destroyed.                |\n| `task_url`            | string       | The URL associated with the task to be destroyed.        |\n| `task_enabled`        | boolean      | Indicates whether the task to be destroyed is enabled.   |\n| `task_hmac_key_present` | boolean    | Specifies if an HMAC key is present for the task to be destroyed. |\n\n### Update\n\nAn HCP Terraform organization emits this event when someone in your organization updates a task. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key                   | Data Type    | Description                                               |\n| ---                   | ---          | ---                                                       |\n| `task_name`           | string       | The name of the task to be updated.                       |\n| `task_category`       | string       | The updated category of the task.                         |\n| `task_url`            | string       | The updated URL associated with the task.                |\n| `task_enabled`        | boolean      | Indicates whether the task is enabled after the update.  |\n| `task_hmac_key_present` | boolean    | Specifies if an HMAC key is present for the updated task. |\n| `task_hmac_key_changed` | boolean    | Indicates whether the HMAC key for the task has changed during the update. 
|\n\n### Callback\n\nAn HCP Terraform organization emits this event when a [run task](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks) result's (`task_result`) callback action occurs. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key                          | Data Type          | Description                                               |\n| ---                          | ---                | ---                                                       |\n| `task_result_status`         | string             | The status of the task result.                            |\n| `task_result_url`            | string             | The URL associated with the task result.                 |\n| `task_result_message`        | string             | A message associated with the task result.               |\n| `workspace`                  | string             | The workspace related to the task result.                |\n| `workspace_id`               | string             | The ID of the workspace related to the task result.       |\n| `run_id`                     | string             | The ID of the run associated with the task result.        |\n| `run_created_at`             | date (iso8601)     | The timestamp when the run was created.                   |\n| `run_created_by`             | string             | The user who created the run.                            |\n| `run_message`                | string             | The message associated with the run.                     |\n| `run_is_speculative`         | boolean            | Indicates whether the run is speculative or not.         |\n| `workspace_task_id`          | string             | The ID of the workspace task associated with the result.  |\n| `workspace_task_stage`       | string             | The stage of the workspace task.                          
|\n| `workspace_task_enforcement` | string             | The enforcement level of the workspace task.             |\n| `organization_task`          | string             | The organization task related to the task result.        |\n| `organization_task_id`       | string             | The ID of the organization task related to the result.    |\n| `organization_task_url`      | string             | The URL associated with the organization task.           |\n\n### Create a workspace's run task \n\nAn HCP Terraform organization emits this event when someone in your organization [associates a new run task with a workspace](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks#associating-run-tasks-with-a-workspace) (`workspace_task`). Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key                      | Data Type    | Description                     |\n| ---                      | ---          | ---                             |\n| `task_enforcement_level` | string       | The task's enforcement level.    |\n| `task_stage`             | string       | The stage of the task.           |\n| `workspace`              | string       | The workspace associated with the task. |\n| `workspace_id`           | string       | The ID of the workspace.         |\n| `organization_task`      | string       | The organization's task.         |\n| `organization_task_id`   | string       | The ID of the organization's task.|\n\n### Update a workspace's run task \n\nAn HCP Terraform organization emits this event when someone in your organization updates a workspace's associated run task (`workspace_task`). 
Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key                      | Data Type    | Description                     |\n| ---                      | ---          | ---                             |\n| `task_enforcement_level` | string       | The task's enforcement level.    |\n| `task_stage`             | string       | The stage of the task.           |\n| `workspace`              | string       | The workspace associated with the task. |\n| `workspace_id`           | string       | The ID of the workspace.         |\n| `organization_task`      | string       | The organization's task.         |\n| `organization_task_id`   | string       | The ID of the organization's task.|\n\n### Destroy a workspace's run task \n\nAn HCP Terraform organization emits this event when someone in your organization destroys a workspace's associated run task (`workspace_task`). Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key                      | Data Type    | Description                     |\n| ---                      | ---          | ---                             |\n| `task_enforcement_level` | string       | The task's enforcement level.    |\n| `task_stage`             | string       | The stage of the task.           |\n| `workspace`              | string       | The workspace associated with the task. |\n| `workspace_id`           | string       | The ID of the workspace.         |\n| `organization_task`      | string       | The organization's task.         |\n| `organization_task_id`   | string       | The ID of the organization's task.|\n\n## Team events\n\nTeams are [groups of HCP Terraform users within an organization](\/terraform\/cloud-docs\/users-teams-organizations\/teams). 
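Team membership changes surface in the audit stream alongside the standard schema, with a `user` field identifying the affected member. As a rough sketch of consuming these events, the helper below filters a decoded event list for membership changes; the exact placement of the `user` payload (shown here under `resource.meta`) is an assumption to verify against real responses:

```python
def membership_changes(events):
    """Collect team add_member/remove_member events from decoded audit events.

    `events` is the `data` array of an audit-trail response, already parsed
    from JSON. Field names follow the standard schema documented above.
    """
    changes = []
    for event in events:
        resource = event.get("resource", {})
        if resource.get("type") == "team" and resource.get("action") in (
            "add_member",
            "remove_member",
        ):
            changes.append({
                "timestamp": event.get("timestamp"),
                "action": resource.get("action"),
                # `auth.description` is the username of the audited actor.
                "actor": event.get("auth", {}).get("description"),
                # Assumed location of the `user` payload -- check real output.
                "meta": resource.get("meta"),
            })
    return changes
```

The same pattern extends to any of the event types above: match on `resource.type` and `resource.action`, then pull the event-specific payload.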
\n\n### Add Member\n\nAn HCP Terraform organization emits this event when someone in your organization adds a member (`add_member`) to a team. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key      | Data Type | Description         |\n| ---      | ---       | ---                 |\n| `user`   | string    | The user being added to the team. |\n\n### Remove Member\n\nAn HCP Terraform organization emits this event when someone in your organization removes a member (`remove_member`) from a team. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key      | Data Type | Description         |\n| ---      | ---       | ---                 |\n| `user`   | string    | The user being removed from the team. |\n\n## Workspace comment events\n\n[Workspace comments](\/terraform\/cloud-docs\/api-docs\/comments) allow users to leave feedback or record decisions about a run.\n\n### Create\n\nAn HCP Terraform organization emits this event when someone in your organization creates a comment on a workspace (`workspace_comment`). Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key       | Data Type | Description             |\n| ---       | ---       | ---                     |\n| `resource`| string    | The resource associated with the comment. |\n\n## Workspace events\n\nHCP Terraform manages infrastructure collections with [workspaces](\/terraform\/cloud-docs\/workspaces).\n\n### Add Team\n\nAn HCP Terraform organization emits this event when someone in your organization adds a team (`add_team`) to a workspace. 
Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key          | Data Type | Description                           |\n| ---          | ---       | ---                                   |\n| `team`       | string    | The name or identifier of the team.    |\n| `permissions` | string   | The permissions granted to the team.  |\n\n### Update Team Permissions\n\nAn HCP Terraform organization emits this event when someone in your organization updates the permissions(`update_team_permissions`) of a team on a workspace. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key          | Data Type | Description                           |\n| ---          | ---       | ---                                   |\n| `team`       | string    | The name or identifier of the team.    |\n| `permissions` | string   | The updated permissions for the team. |\n\n### Remove Team\n\nAn HCP Terraform organization emits this event when someone in your organization removes a team(`remove_team`) from a workspace. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:\n\n| Key          | Data Type | Description                           |\n| ---          | ---       | ---                                   |\n| `team`       | string    | The name or identifier of the team.   
","site":"terraform","answers_cleaned":"    page title  Audit Trails   API Docs   HCP Terraform description       Use the   organization audit trail  endpoint to query audit events  Get a list of an organization s audit events for the past 14 days using the HTTP API  tfc only  true       200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Audit Trails API  HCP Terraform retains 14 days of audit log information  The audit trails API exposes a stream of audit events  which describe changes to the application entities  workspaces  runs  etc   that belong to an HCP Terraform organization  Unlike other APIs  the Audit Trails API does not use the  JSON API specification   terraform cloud docs api docs json api formatting         BEGIN  TFC only name pnp callout      include  tfc package callouts audit trails mdx       END  TFC only name pnp callout         List an organization s audit events   GET  organization audit trail        Note   
 This endpoint cannot be accessed with a  user token   terraform cloud docs users teams organizations users api tokens  or  team token   terraform cloud docs users teams organizations api tokens team api tokens   You must access it with an  organization token   terraform cloud docs users teams organizations api tokens organization api tokens        Query Parameters   These are standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter        Description                                                                                                                                                                                                                                                                                                                                                                                 since             Optional    Returns only audit trails created after this date  UTC and in  ISO8601 Format  https   www iso org iso 8601 date and time format html    YYYY MM DDTHH MM SS SSSZ       page number       Optional    If omitted  the endpoint will return the first page                                                                                                                     page size         Optional    If omitted  the endpoint will return 1000 audit events per page                                                                                                           Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        request GET      https   app terraform io api v2 organization audit trail page number  1 since 2020 05 30T17 52 46 000Z           Sample Response     json      data                  id    ae66e491 db59 457c 8445 9c908ee726ae          version    0          type    Resource          timestamp    2020 06 30T17 52 46 000Z          auth              
        "accessor_id": "user-MaPuLxAXvtq2PWTH",
        "description": "pveverka",
        "type": "Client",
        "impersonator_id": null,
        "organization_id": "org-AGLwRmx1snv34Yts"
      },
      "request": {
        "id": "4df584d4-7e2a-01e6-6cc0-4adbefa020e6"
      },
      "resource": {
        "id": "at-sjt83qTw3GZatuPm",
        "type": "authentication_token",
        "action": "create",
        "meta": null
      }
    }
  ],
  "pagination": {
    "current_page": 1,
    "prev_page": null,
    "next_page": 2,
    "total_pages": 8,
    "total_count": 778
  }
}

## Standard response fields

Every audit log event in the response array includes the following standard fields:

| Key                    | Data Type or format           | Description                                                      |
| ---------------------- | ----------------------------- | ---------------------------------------------------------------- |
| `id`                   | UUID                          | The ID of this audit trail                                       |
| `version`              | number                        | The audit trail schema version                                   |
| `type`                 | string                        | The type of audit trail, defaults to `Resource`                  |
| `timestamp`            | string (UTC ISO8601 DateTime) | For example, `2020-06-16T20:26:58.000Z`                          |
| `auth.accessor_id`     | string                        | The ID of the audited actor, for example `user-V3R563qtJNcExAkN` |
| `auth.description`     | string                        | Username of the audited actor                                    |
| `auth.type`            | string                        | Authentication type, one of `Client`, `Impersonated`, or `System` |
| `auth.impersonator_id` | string                        | The ID of the impersonating actor, if available                  |
| `auth.organization_id` | string                        | The ID of the organization, for example `org-QpXoEnULx3r2r1CA`   |
| `request.id`           | UUID                          | The ID of the request, if available                              |
| `resource.id`          | string                        | The ID of the resource, for example `run-FwnENkvDnrpyFC7M`       |
| `resource.type`        | string                        | Type of resource, for example `run`                              |
| `resource.action`      | string                        | Action audited, for example `applied`                            |
| `resource.meta`        | map                           | Key-value metadata about this audited event, defaults to `null`  |

The following audit trail events **only** contain these standard fields:

| Event                      | Action                                                                                             | Description                                           |
| -------------------------- | -------------------------------------------------------------------------------------------------- | ----------------------------------------------------- |
| agent                      | destroy                                                                                            | Logged when an agent is destroyed                     |
| authentication token       | create, show, destroy                                                                              | Events related to authentication tokens               |
| configuration version      | show, download                                                                                     | Events related to configuration versions              |
| notification configuration | create, update, destroy, enable                                                                    | Events related to notification configurations         |
| oauth client               | create, update, destroy                                                                            | Events related to OAuth clients                       |
| oauth token                | index, show, update, destroy                                                                       | Events related to OAuth tokens                        |
| organization               | create, update, destroy                                                                            | Events related to organizations                       |
| organization user          | create, update, destroy                                                                            | Events related to organization users                  |
| policy                     | update, destroy                                                                                    | Events related to policies                            |
| policy check               | override                                                                                           | Events related to policy checks                       |
| policy config              | create                                                                                             | Events related to policy configurations               |
| policy set                 | destroy                                                                                            | Events related to policy sets                         |
| policy version             | create                                                                                             | Events related to policy versions                     |
| project                    | create, update, destroy                                                                            | Events related to projects                            |
| registry module            | destroy, update                                                                                    | Events related to registry modules                    |
| registry provider          | create, destroy                                                                                    | Events related to registry providers                  |
| registry provider platform | create, destroy                                                                                    | Events related to registry provider platforms         |
| registry provider version  | create, destroy                                                                                    | Events related to registry provider versions          |
| run                        | apply, cancel, force cancel, discard, force execute, create                                        | Events related to runs                                |
| run trigger                | create, destroy                                                                                    | Events related to run triggers                        |
| saml configuration         | create, update, destroy, enable, disable                                                           | Events related to SAML configurations                 |
| ssh key                    | index, show, create, update, destroy                                                               | Events related to SSH keys                            |
| stack                      | create                                                                                             | Events related to creating stacks                     |
| state version              | index, show, create, soft delete backing data, restore backing data, permanently delete backing data | Events related to state versions                    |
| task stage                 | override                                                                                           | Events related to an overridden task stage            |
| user                       | index                                                                                              | Events related to indexing users                      |
| var                        | index, show, create, update, destroy                                                               | Events related to variables                           |
| vcs repo                   | create                                                                                             | Events related to creating a connection to a VCS repo |

The following sections list the audit log events containing both the standard response schema and a specific payload for each action.

## Data retention policy events

You can define [data retention policies](/terraform/cloud-docs/workspaces/settings/deletion/data-retention-policies) to help reduce object storage consumption.

### Destroy

An HCP Terraform organization emits this event when it destroys a data retention policy (`data-retention-policy`). Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key                        | Data Type | Description                                                          |
| -------------------------- | --------- | -------------------------------------------------------------------- |
| `delete_older_than_n_days` | integer   | The number of days to retain data before deletion. Must be greater than 0. |
| `organization`             | string    | The organization associated with the data retention policy           |
| `organization_id`          | string    | The ID of the organization associated with the data retention policy |

## Project events

[Projects](/terraform/cloud-docs/projects/manage) let you organize your workspaces and scope access to workspace resources.

### Add Team

An HCP Terraform organization emits this event when someone in your organization adds a team (`add-team`) to a project. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key           | Data Type | Description                          |
| ------------- | --------- | ------------------------------------ |
| `team`        | string    | The name or identifier of the team   |
| `permissions` | string    | The permissions granted to the team  |

### Update Team Permissions

An HCP Terraform organization emits this event when someone in your organization updates the permissions (`update-team-permissions`) of a team on a project. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key           | Data Type | Description                          |
| ------------- | --------- | ------------------------------------ |
| `team`        | string    | The name or identifier of the team   |
| `permissions` | string    | The updated permissions for the team |

### Remove Team

An HCP Terraform organization emits this event when someone in your organization removes a team (`remove-team`) from a project. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key    | Data Type | Description                        |
| ------ | --------- | ---------------------------------- |
| `team` | string    | The name or identifier of the team |

## Policy checks

Policy checks run [Sentinel policies](/terraform/enterprise/policy-enforcement/manage-policy-sets). Policy checks have a lifecycle that includes the following events:

- queued
- passed
- soft failed
- hard failed
- overridden
- canceled
- force canceled
- errored

Every policy check event in the response array includes the following standard fields:

| Key              | Data Type        | Description                                   |
| ---------------- | ---------------- | --------------------------------------------- |
| `comment`        | `null` or string | The comment associated with the policy check  |
| `run_id`         | string           | The ID of the associated run                  |
| `run_message`    | string           | The message associated with the run           |
| `workspace_id`   | string           | The ID of the associated workspace            |
| `workspace_name` | string           | The name of the associated workspace          |

### Passed, soft failed, hard failed, and errored

Alongside the [standard audit trail fields](#standard-response-fields) and [standard policy check fields](#policy-checks), policy check events that include a result return the following fields:

| Key               | Data Type     | Description                                    |
| ----------------- | ------------- | ---------------------------------------------- |
| `result`          | Policy Result | An array of policy results                     |
| `passed`          | integer       | Number of policy checks passed                 |
| `total_failed`    | integer       | Total number of failed policy checks           |
| `hard_failed`     | integer       | Number of policy checks with hard failures     |
| `soft_failed`     | integer       | Number of policy checks with soft failures     |
| `advisory_failed` | integer       | Number of policy checks with advisory failures |
| `duration_ms`     | integer       | Duration of the policy check in milliseconds   |
| `sentinel`        | string        | The Sentinel policy associated with the check  |

### Overridden, canceled, and force canceled

Alongside the [standard audit trail fields](#standard-response-fields) and [standard policy check fields](#policy-checks), these event responses include the following fields:

| Key     | Data Type | Description                       |
| ------- | --------- | --------------------------------- |
| `actor` | string    | The user who performed the action |

## Policy evaluation

[Policy evaluations](/terraform/enterprise/policy-enforcement/policy-results#view-policy-results) run in the HCP Terraform agent in HCP Terraform's infrastructure. Policy evaluations have a lifecycle that includes the following events:

- queue
- run
- pass
- fail
- override
- error
- cancel
- force cancel

Every policy evaluation event in the response array includes the following standard fields:

| Key                      | Data Type        | Description                                       |
| ------------------------ | ---------------- | ------------------------------------------------- |
| `comment`                | `null` or string | The comment associated with the policy evaluation |
| `payload.workspace_id`   | string           | The ID of the associated workspace                |
| `payload.workspace_name` | string           | The name of the associated workspace              |
| `payload.run_id`         | string           | The ID of the associated run                      |
| `payload.run_message`    | string           | The message associated with the run               |
| `actor`                  | string           | The policy owner                                  |
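The audit log response shown earlier is paginated with a `pagination` block (`current_page`, `next_page`, `total_pages`, and so on), where `next_page` is `null` on the last page. A minimal sketch of consuming that scheme, with `fetch_page` as a hypothetical stand-in for the real authenticated HTTP call (the stub and its canned data are illustrative, not part of the API):

```python
# Sketch: walk paginated audit-trail results by following pagination.next_page.
# fetch_page stands in for an authenticated GET of the audit-trail endpoint
# with a page-number parameter; here it serves canned pages so the paging
# logic itself can be exercised without network access.

def collect_audit_events(fetch_page):
    """Accumulate events from every page until next_page is None (JSON null)."""
    events, page = [], 1
    while page is not None:
        body = fetch_page(page)
        events.extend(body["data"])
        page = body["pagination"]["next_page"]  # None on the final page
    return events

def fetch_page(number):
    # Canned two-page response mimicking the sample response's shape.
    pages = {
        1: {"data": [{"id": "4df584d4"}],
            "pagination": {"current_page": 1, "next_page": 2}},
        2: {"data": [{"id": "9a1b2c3d"}],
            "pagination": {"current_page": 2, "next_page": None}},
    }
    return pages[number]

print(len(collect_audit_events(fetch_page)))  # 2 events gathered across both pages
```

Treating `null` (`None`) rather than an empty `data` array as the stop condition matches the pagination metadata shown in the sample response.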
Policy checks and policy evaluations serve the same purpose, but have different workflows for enforcing policies. For more information, refer to [Differences between policy checks and policy evaluations](/terraform/enterprise/policy-enforcement/manage-policy-sets#differences-between-policy-checks-and-policy-evaluations).

## Registry module events

[HCP Terraform's private registry](/terraform/cloud-docs/registry) lets you share Terraform providers and Terraform modules across your organization.

### Destroy version

An HCP Terraform organization emits this event when someone in your organization destroys a specific version of a registry module (`destroy-version`). Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key       | Data Type | Description                                          |
| --------- | --------- | ---------------------------------------------------- |
| `version` | integer   | The version number of the registry module to destroy |

## Run task events

You can create HCP Terraform [run tasks](/terraform/cloud-docs/workspaces/settings/run-tasks) at an organization level to directly integrate third-party tools and services at certain stages in the HCP Terraform run lifecycle. You can use run tasks to validate Terraform configuration files, analyze execution plans before applying them, scan for security vulnerabilities, or perform other custom actions.

### Create

An HCP Terraform organization emits this event when someone in your organization creates a task. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key                    | Data Type | Description                                      |
| ---------------------- | --------- | ------------------------------------------------ |
| `task_name`            | string    | The name of the task                             |
| `task_category`        | string    | The task's category                              |
| `task_url`             | string    | The task's URL                                   |
| `task_enabled`         | boolean   | Indicates whether the task is enabled or not     |
| `task_hmac_key_present` | boolean  | Specifies if an HMAC key is present for the task |

### Destroy

An HCP Terraform organization emits this event when someone in your organization destroys a task. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key                    | Data Type | Description                                                      |
| ---------------------- | --------- | ---------------------------------------------------------------- |
| `task_name`            | string    | The name of the task to be destroyed                             |
| `task_category`        | string    | The category of the task to be destroyed                         |
| `task_url`             | string    | The URL associated with the task to be destroyed                 |
| `task_enabled`         | boolean   | Indicates whether the task to be destroyed is enabled            |
| `task_hmac_key_present` | boolean  | Specifies if an HMAC key is present for the task to be destroyed |

### Update

An HCP Terraform organization emits this event when someone in your organization updates a task. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key                    | Data Type | Description                                                            |
| ---------------------- | --------- | ---------------------------------------------------------------------- |
| `task_name`            | string    | The name of the task to be updated                                     |
| `task_category`        | string    | The updated category of the task                                       |
| `task_url`             | string    | The updated URL associated with the task                               |
| `task_enabled`         | boolean   | Indicates whether the task is enabled after the update                 |
| `task_hmac_key_present` | boolean  | Specifies if an HMAC key is present for the updated task               |
| `task_hmac_key_changed` | boolean  | Indicates whether the HMAC key for the task has changed during the update |

### Callback

An HCP Terraform organization emits this event when a [run task](/terraform/cloud-docs/workspaces/settings/run-tasks) result's (`task-result`) callback action occurs. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key                          | Data Type       | Description                                              |
| ---------------------------- | --------------- | -------------------------------------------------------- |
| `task_result_status`         | string          | The status of the task result                            |
| `task_result_url`            | string          | The URL associated with the task result                  |
| `task_result_message`        | string          | A message associated with the task result                |
| `workspace`                  | string          | The workspace related to the task result                 |
| `workspace_id`               | string          | The ID of the workspace related to the task result       |
| `run_id`                     | string          | The ID of the run associated with the task result        |
| `run_created_at`             | date (iso8601)  | The timestamp when the run was created                   |
| `run_created_by`             | string          | The user who created the run                             |
| `run_message`                | string          | The message associated with the run                      |
| `run_is_speculative`         | boolean         | Indicates whether the run is speculative or not          |
| `workspace_task_id`          | string          | The ID of the workspace task associated with the result  |
| `workspace_task_stage`       | string          | The stage of the workspace task                          |
| `workspace_task_enforcement` | string          | The enforcement level of the workspace task              |
| `organization_task`          | string          | The organization task related to the task result         |
| `organization_task_id`       | string          | The ID of the organization task related to the result    |
| `organization_task_url`      | string          | The URL associated with the organization task            |

### Create a workspace's run task

An HCP Terraform organization emits this event when someone in your organization [associates a new run task with a workspace](/terraform/cloud-docs/workspaces/settings/run-tasks#associating-run-tasks-with-a-workspace) (`workspace-task`). Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key                      | Data Type | Description                            |
| ------------------------ | --------- | -------------------------------------- |
| `task_enforcement_level` | string    | The task's enforcement level           |
| `task_stage`             | string    | The stage of the task                  |
| `workspace`              | string    | The workspace associated with the task |
| `workspace_id`           | string    | The ID of the workspace                |
| `organization_task`      | string    | The organization's task                |
| `organization_task_id`   | string    | The ID of the organization's task      |

### Update a workspace's run task

An HCP Terraform organization emits this event when someone in your organization updates a workspace's associated run task (`workspace-task`). Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key                      | Data Type | Description                            |
| ------------------------ | --------- | -------------------------------------- |
| `task_enforcement_level` | string    | The task's enforcement level           |
| `task_stage`             | string    | The stage of the task                  |
| `workspace`              | string    | The workspace associated with the task |
| `workspace_id`           | string    | The ID of the workspace                |
| `organization_task`      | string    | The organization's task                |
| `organization_task_id`   | string    | The ID of the organization's task      |

### Destroy a workspace's run task

An HCP Terraform organization emits this event when someone in your organization destroys a workspace's associated run task (`workspace-task`). Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key                      | Data Type | Description                            |
| ------------------------ | --------- | -------------------------------------- |
| `task_enforcement_level` | string    | The task's enforcement level           |
| `task_stage`             | string    | The stage of the task                  |
| `workspace`              | string    | The workspace associated with the task |
| `workspace_id`           | string    | The ID of the workspace                |
| `organization_task`      | string    | The organization's task                |
| `organization_task_id`   | string    | The ID of the organization's task      |

## Team events

Teams are [groups of HCP Terraform users within an organization](/terraform/cloud-docs/users-teams-organizations/teams).

### Add Member

An HCP Terraform organization emits this event when someone in your organization adds a member (`add-member`) to a team. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key    | Data Type | Description                      |
| ------ | --------- | -------------------------------- |
| `user` | string    | The user being added to the team |

### Remove Member

An HCP Terraform organization emits this event when someone in your organization removes a member (`remove-member`) from a team. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key    | Data Type | Description                          |
| ------ | --------- | ------------------------------------ |
| `user` | string    | The user being removed from the team |

## Workspace comment events

[Workspace comments](/terraform/cloud-docs/api-docs/comments) allow users to leave feedback or record decisions about a run.

### Create

An HCP Terraform organization emits this event when someone in your organization creates a comment on a workspace (`workspace-comment`). Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key        | Data Type | Description                              |
| ---------- | --------- | ---------------------------------------- |
| `resource` | string    | The resource associated with the comment |

## Workspace events

HCP Terraform manages infrastructure collections with [workspaces](/terraform/cloud-docs/workspaces).

### Add Team

An HCP Terraform organization emits this event when someone in your organization adds a team (`add-team`) to a workspace. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key           | Data Type | Description                         |
| ------------- | --------- | ----------------------------------- |
| `team`        | string    | The name or identifier of the team  |
| `permissions` | string    | The permissions granted to the team |

### Update Team Permissions

An HCP Terraform organization emits this event when someone in your organization updates the permissions (`update-team-permissions`) of a team on a workspace. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key           | Data Type | Description                          |
| ------------- | --------- | ------------------------------------ |
| `team`        | string    | The name or identifier of the team   |
| `permissions` | string    | The updated permissions for the team |

### Remove Team

An HCP Terraform organization emits this event when someone in your organization removes a team (`remove-team`) from a workspace. Alongside the [standard audit trail fields](#standard-response-fields), this event response includes the following fields:

| Key    | Data Type | Description                        |
| ------ | --------- | ---------------------------------- |
| `team` | string    | The name or identifier of the team |
"}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Invoices API Docs HCP Terraform Use the invoices endpoint to access an organization s invoices List previous invoices and get the next invoice using the HTTP API","answers":"---\npage_title: Invoices - API Docs - HCP Terraform\ndescription: >-\n  Use the `invoices` endpoint to access an organization's invoices. List previous invoices and get the next invoice using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Invoices API\n\n-> **Note:** The invoices API is only available in HCP Terraform.\n\nOrganizations on credit-card-billed plans may view their previous and upcoming invoices.\n\n## List 
Invoices\n\nThis endpoint lists the previous invoices for an organization.\n\nIt uses a pagination scheme that's somewhat different from [our standard pagination](\/terraform\/cloud-docs\/api-docs#pagination). The page size is always 10 items and is not configurable; if there are no more items, `meta.continuation` will be null. The current page is controlled by the `cursor` parameter, described below.\n\n`GET \/organizations\/:organization_name\/invoices`\n\n| Parameter            | Description                                                                                                           |\n| -------------------- | --------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization you'd like to view invoices for                                                          |\n| `:cursor`            | **Optional.** The ID of the invoice where the page should start. If omitted, the endpoint will return the first page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/invoices\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"in_1I4sraHcjZv6Wm0g7nC34mAi\",\n      \"type\": \"billing-invoices\",\n      \"attributes\": {\n        \"created-at\": \"2021-01-01T19:00:38Z\",\n        \"external-link\": \"https:\/\/pay.stripe.com\/invoice\/acct_1Eov7THcjZv6Wm0g\/invst_IgFMMfdzAZzMQq8GXyUbrk9lFMqvp9SX\/pdf\",\n        \"number\": \"2F8CA1AE-0006\",\n        \"paid\": true,\n        \"status\": \"paid\",\n        \"total\": 21000\n      }\n    },\n    {...}\n    {\n      \"id\": \"in_1Hte5nHcjZv6Wm0g2Q8hFctH\",\n      \"type\": \"billing-invoices\",\n      \"attributes\": {\n        \"created-at\": \"2020-06-01T19:00:51Z\",\n        \"external-link\": \"https:\/\/pay.stripe.com\/invoice\/acct_1Eov7THcjZv6Wm0g\/invst_IUdMM6wl0JfA95tgWGZxpBGXYtJwmBgY\/pdf\",\n        \"number\": \"2F8CA1AE-0005\",\n        \"paid\": true,\n        \"status\": \"paid\",\n        \"total\": 21000\n      }\n    }\n  ],\n  \"meta\": {\n    \"continuation\": \"in_1IBpkEHcjZv6Wm0gHcgc2uwN\"\n  }\n}\n```\n\n## Get Next Invoice\n\nThis endpoint lists the next month's invoice for an organization.\n\n`GET \/organizations\/:organization_name\/invoices\/next`\n\n| Parameter           | Description                  |\n| ------------------- | ---------------------------- |\n| `organization_name` | The name of the organization |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/invoices\/next\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"in_upcoming_510DEB1F-0002\",\n    \"type\": \"billing-invoices\",\n    \"attributes\": {\n      
\"created-at\": \"2021-02-01T20:00:00Z\",\n      \"external-link\": \"\",\n      \"number\": \"510DEB1F-0002\",\n      \"paid\": false,\n      \"status\": \"draft\",\n      \"total\": 21000\n    }\n  }\n}\n```","site":"terraform","answers_cleaned":"    page title  Invoices   API Docs   HCP Terraform description       Use the  invoices  endpoint to access an organization s invoices  List previous invoices and get the next invoice using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Invoices API       Note    The invoices API is only available in HCP Terraform   Organizations on credit card billed plans may view their previous and upcoming invoices      List Invoices  This endpoint lists the previous invoices for an organization   It uses a pagination scheme that s somewhat different from  our standard pagination   terraform cloud docs api docs pagination   The page size is always 10 items and is not 
configurable  if there are no more items   meta continuation  will be null  The current page is controlled by the  cursor  parameter  described below    GET  organizations  organization name invoices     Parameter              Description                                                                                                                                                                                                                                                                  organization name    The name of the organization you d like to view invoices for                                                                cursor                 Optional    The ID of the invoice where the page should start  If omitted  the endpoint will return the first page         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations hashicorp invoices          Sample Response     json      data                  id    in 1I4sraHcjZv6Wm0g7nC34mAi          type    billing invoices          attributes              created at    2021 01 01T19 00 38Z            external link    https   pay stripe com invoice acct 1Eov7THcjZv6Wm0g invst IgFMMfdzAZzMQq8GXyUbrk9lFMqvp9SX pdf            number    2F8CA1AE 0006            paid   true           status    paid            total   21000                                       id    in 1Hte5nHcjZv6Wm0g2Q8hFctH          type    billing invoices          attributes              created at    2020 06 01T19 00 51Z            external link    https   pay stripe com invoice acct 1Eov7THcjZv6Wm0g invst IUdMM6wl0JfA95tgWGZxpBGXYtJwmBgY pdf            number    2F8CA1AE 0005            paid   true           status    paid            total   21000                       meta          continuation    in 1IBpkEHcjZv6Wm0gHcgc2uwN                Get Next Invoice  This endpoint lists the next month s invoice for an organization    
GET  organizations  organization name invoices next     Parameter             Description                                                                              organization name    The name of the organization        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations hashicorp invoices next          Sample Response     json      data          id    in upcoming 510DEB1F 0002        type    billing invoices        attributes            created at    2021 02 01T20 00 00Z          external link              number    510DEB1F 0002          paid   false         status    draft          total   21000                "}
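The invoices endpoint above is cursor-paginated: each page's `meta.continuation` holds the ID to pass as the next `cursor` parameter, and a `null` continuation means there are no more pages. A minimal sketch of a client-side pagination loop over that response shape; `fetch_page` is a hypothetical stand-in for the actual HTTP GET (no real network call is made here):

```python
def iter_invoices(fetch_page):
    """Yield invoice records across all pages of
    GET /organizations/:organization_name/invoices.

    fetch_page(cursor) is assumed to return the parsed JSON body shown in
    the docs: {"data": [...], "meta": {"continuation": <invoice id or null>}}.
    Passing cursor=None requests the first page.
    """
    cursor = None
    while True:
        body = fetch_page(cursor)
        yield from body["data"]
        # meta.continuation is null on the last page.
        cursor = body.get("meta", {}).get("continuation")
        if cursor is None:
            return
```

The loop terminates on the documented `null` continuation rather than on an empty `data` array, matching the API's stated contract.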
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 Use the team membership API to manage a team s users Add and remove a user from a team using the HTTP API page title Team Membership API Docs HCP Terraform","answers":"---\npage_title: Team Membership - API Docs - HCP Terraform\ndescription: >-\n  Use the team membership API to manage a team's users. Add and remove a user from a team using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Team Membership API\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n-> **Note:** Team management is available in HCP Terraform **Standard** Edition. 
Free organizations can also use this API, but can only manage membership of their owners team. [Learn more about HCP Terraform pricing here](https:\/\/www.hashicorp.com\/products\/terraform\/pricing).\n<!-- END: TFC:only name:pnp-callout -->\n\nThe Team Membership API is used to add or remove users from teams. The [Team API](\/terraform\/cloud-docs\/api-docs\/teams) is used to create or destroy teams.\n\n## Organization Membership\n\n-> **Note:** To add users to a team, they must first receive and accept the invitation to join the organization by email. This process ensures that you do not accidentally add the wrong person by mistyping a username. Refer to [the Organization Memberships API documentation](\/terraform\/cloud-docs\/api-docs\/organization-memberships) for more information.\n\n## Add a User to Team (With user ID)\n\nThis method adds multiple users to a team using the user ID. Both users and teams must already exist.\n\n`POST \/teams\/:team_id\/relationships\/users`\n\n| Parameter  | Description         |\n| ---------- | ------------------- |\n| `:team_id` | The ID of the team. |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path      | Type   | Default | Description                      |\n| ------------- | ------ | ------- | -------------------------------- |\n| `data[].type` | string |         | Must be `\"users\"`.               |\n| `data[].id`   | string |         | The ID of the user you want to add to this team. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"users\",\n      \"id\": \"myuser1\"\n    },\n    {\n      \"type\": \"users\",\n      \"id\": \"myuser2\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/257525\/relationships\/users\n```\n\n## Add a User to Team (With organization membership ID)\n\nThis method adds multiple users to a team using the organization membership ID. Unlike the user ID method, the user only needs an invitation to the organization.\n\n`POST \/teams\/:team_id\/relationships\/organization-memberships`\n\n| Parameter  | Description         |\n| ---------- | ------------------- |\n| `:team_id` | The ID of the team. |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path      | Type   | Default | Description                      |\n| ------------- | ------ | ------- | -------------------------------- |\n| `data[].type` | string |         | Must be `\"organization-memberships\"`.               |\n| `data[].id`   | string |         | The organization membership ID of the user to add. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"organization-memberships\",\n      \"id\": \"ou-nX7inDHhmC3quYgy\"\n    },\n    {\n      \"type\": \"organization-memberships\",\n      \"id\": \"ou-tTJph1AQVK5ZmdND\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/257525\/relationships\/organization-memberships\n```\n\n## Delete a User from Team (With user ID)\n\nThis method removes multiple users from a team using the user ID. Both users and teams must already exist. This method only removes a user from this team. It does not delete that user overall.\n\n`DELETE \/teams\/:team_id\/relationships\/users`\n\n| Parameter  | Description         |\n| ---------- | ------------------- |\n| `:team_id` | The ID of the team. |\n\n### Request Body\n\nThis DELETE endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path      | Type   | Default | Description                         |\n| ------------- | ------ | ------- | ----------------------------------- |\n| `data[].type` | string |         | Must be `\"users\"`.                  |\n| `data[].id`   | string |         | The ID of the user to remove from this team. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"users\",\n      \"id\": \"myuser1\"\n    },\n    {\n      \"type\": \"users\",\n      \"id\": \"myuser2\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/257525\/relationships\/users\n```\n\n## Delete a User from Team (With organization membership ID)\n\nThis method removes multiple users from a team using the organization membership ID. This method only removes a user from this team. It does not delete that user overall.\n\n`DELETE \/teams\/:team_id\/relationships\/organization-memberships`\n\n| Parameter  | Description         |\n| ---------- | ------------------- |\n| `:team_id` | The ID of the team. |\n\n### Request Body\n\nThis DELETE endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path      | Type   | Default | Description                         |\n| ------------- | ------ | ------- | ----------------------------------- |\n| `data[].type` | string |         | Must be `\"organization-memberships\"`.                  |\n| `data[].id`   | string |         | The organization membership ID of the user to remove. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"organization-memberships\",\n      \"id\": \"ou-nX7inDHhmC3quYgy\"\n    },\n    {\n      \"type\": \"organization-memberships\",\n      \"id\": \"ou-tTJph1AQVK5ZmdND\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/teams\/257525\/relationships\/organization-memberships\n``","site":"terraform","answers_cleaned":"    page title  Team Membership   API Docs   HCP Terraform description       Use the team membership API to manage a team s users  Add and remove a user from a team using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Team Membership API       BEGIN  TFC only name pnp callout          Note    Team management is 
available in HCP Terraform   Standard   Edition  Free organizations can also use this API  but can only manage membership of their owners team   Learn more about HCP Terraform pricing here  https   www hashicorp com products terraform pricing        END  TFC only name pnp callout      The Team Membership API is used to add or remove users from teams  The  Team API   terraform cloud docs api docs teams  is used to create or destroy teams      Organization Membership       Note    To add users to a team  they must first receive and accept the invitation to join the organization by email  This process ensures that you do not accidentally add the wrong person by mistyping a username  Refer to  the Organization Memberships API documentation   terraform cloud docs api docs organization memberships  for more information      Add a User to Team  With user ID   This method adds multiple users to a team using the user ID  Both users and teams must already exist    POST  teams  team id relationships users     Parameter    Description                                                    team id    The ID of the team         Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path        Type     Default   Description                                                                                                   data   type    string             Must be   users                       data   id      string             The ID of the user you want to add to this team         Sample Payload     json      data                  type    users          id    myuser1                      type    users          id    myuser2                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 teams 257525 relationships 
users         Add a User to Team  With organization membership ID   This method adds multiple users to a team using the organization membership ID  Unlike the user ID method  the user only needs an invitation to the organization    POST  teams  team id relationships organization memberships     Parameter    Description                                                    team id    The ID of the team         Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path        Type     Default   Description                                                                                                   data   type    string             Must be   organization memberships                       data   id      string             The organization membership ID of the user to add         Sample Payload     json      data                  type    organization memberships          id    ou nX7inDHhmC3quYgy                      type    organization memberships          id    ou tTJph1AQVK5ZmdND                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 teams 257525 relationships organization memberships         Delete a User from Team  With user ID   This method removes multiple users from a team using the user ID  Both users and teams must already exist  This method only removes a user from this team  It does not delete that user overall    DELETE  teams  team id relationships users     Parameter    Description                                                    team id    The ID of the team         Request Body  This DELETE endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path        Type     Default   
Description                                                                                                         data   type    string             Must be   users                          data   id      string             The ID of the user to remove from this team         Sample Payload     json      data                  type    users          id    myuser1                      type    users          id    myuser2                       Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE       data  payload json     https   app terraform io api v2 teams 257525 relationships users         Delete a User from Team  With organization membership ID   This method removes multiple users from a team using the organization membership ID  This method only removes a user from this team  It does not delete that user overall    DELETE  teams  team id relationships organization memberships     Parameter    Description                                                    team id    The ID of the team         Request Body  This DELETE endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path        Type     Default   Description                                                                                                         data   type    string             Must be   organization memberships                          data   id      string             The organization membership ID of the user to remove         Sample Payload     json      data                  type    organization memberships          id    ou nX7inDHhmC3quYgy                      type    organization memberships          id    ou tTJph1AQVK5ZmdND                       Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE       
data  payload json     https   app terraform io api v2 teams 257525 relationships organization memberships   "}
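The add/remove endpoints above all take the same JSON:API request body: a `data` array of `{type, id}` objects. A small sketch of building that payload programmatically instead of hand-writing `payload.json`; the helper name is illustrative, not part of the API:

```python
import json

def team_membership_payload(ids, type_name="users"):
    """Build the documented request body for
    POST/DELETE /teams/:team_id/relationships/users (type "users") or
    .../relationships/organization-memberships
    (type "organization-memberships")."""
    return {"data": [{"type": type_name, "id": i} for i in ids]}

# Matches the "Add a User to Team (With user ID)" sample payload.
payload = team_membership_payload(["myuser1", "myuser2"])
# Serialized form suitable for curl's --data @payload.json body.
body = json.dumps(payload)
```

The same builder covers the organization-membership variants by passing `type_name="organization-memberships"` with `ou-*` IDs.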
{"questions":"terraform Plan exports allow users to download data exported from the plan of a Run in a Terraform workspace Currently the only supported format for exporting plan data is to generate mock data for Sentinel page title Plan Exports API Docs HCP Terraform Plan Exports API Use the plan exports endpoint to manage plan exports for a Terraform run Create and show plan exports and download and delete exported plan data using the HTTP API","answers":"---\npage_title: Plan Exports - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/plan-exports` endpoint to manage plan exports for a Terraform run. Create and show plan exports, and download and delete exported plan data using the HTTP API.\n---\n\n# Plan Exports API\n\nPlan exports allow users to download data exported from the plan of a Run in a Terraform workspace. Currently, the only supported format for exporting plan data is to generate mock data for Sentinel.\n\n## Create a plan export\n\n`POST \/plan-exports`\n\nThis endpoint exports data from a plan in the specified format. The export process is asynchronous, and the resulting data becomes downloadable when its status is `\"finished\"`. The data is then available for one hour before expiring. 
After the hour is up, a new export can be created.\n\n| Status  | Response                                       | Reason                                                                                                                                                        |\n| ------- | ---------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"plan-exports\"`) | Successfully created a plan export                                                                                                                            |\n| [404][] | [JSON API error object][]                      | Plan not found, or user unauthorized to perform action                                                                                                        |\n| [422][] | [JSON API error object][]                      | Malformed request body (missing attributes, wrong types, etc.), or a plan export of the provided `data-type` is already pending or downloadable for this plan |\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                       | Type   | Default | Description                                                                                                                                                                                                                  
                                                                        |\n| ------------------------------ | ------ | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                    | string |         | Must be `\"plan-exports\"`.                                                                                                                                                                                                                                                                            |\n| `data.attributes.data-type`    | string |         | The format for the export. Currently, the only supported format is `\"sentinel-mock-bundle-v0\"`.                                                                                                                                                                                                      |\n| `data.relationships.plan.data` | object |         | A JSON API relationship object that represents the plan being exported. 
This object must have a `type` of `plans`, and the `id` of a finished Terraform plan that does not already have a downloadable export of the specified `data-type` (e.g: `{\"type\": \"plans\", \"id\": \"plan-8F5JFydVYAmtTjET\"}`) |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"plan-exports\",\n    \"attributes\": {\n      \"data-type\": \"sentinel-mock-bundle-v0\"\n    },\n    \"relationships\": {\n      \"plan\": {\n        \"data\": {\n          \"id\": \"plan-8F5JFydVYAmtTjET\",\n          \"type\": \"plans\"\n        }\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/plan-exports\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"pe-3yVQZvHzf5j3WRJ1\",\n    \"type\": \"plan-exports\",\n    \"attributes\": {\n      \"data-type\": \"sentinel-mock-bundle-v0\",\n      \"status\": \"queued\",\n      \"status-timestamps\": {\n        \"queued-at\": \"2019-03-04T22:29:53+00:00\",\n      },\n    },\n    \"relationships\": {\n      \"plan\": {\n        \"data\": {\n          \"id\": \"plan-8F5JFydVYAmtTjET\",\n          \"type\": \"plans\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/plan-exports\/pe-3yVQZvHzf5j3WRJ1\",\n    }\n  }\n}\n```\n\n## Show a plan export\n\n`GET \/plan-exports\/:id`\n\n| Parameter | Description                        |\n| --------- | ---------------------------------- |\n| `id`      | The ID of the plan export to show. |\n\nThere is no endpoint to list plan exports. 
You can find IDs for plan exports in the\n`relationships.exports` property of a plan object.\n\n| Status  | Response                                       | Reason                                                        |\n| ------- | ---------------------------------------------- | ------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"plan-exports\"`) | The request was successful                                    |\n| [404][] | [JSON API error object][]                      | Plan export not found, or user unauthorized to perform action |\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/plan-exports\/pe-3yVQZvHzf5j3WRJ1\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"pe-3yVQZvHzf5j3WRJ1\",\n    \"type\": \"plan-exports\",\n    \"attributes\": {\n      \"data-type\": \"sentinel-mock-bundle-v0\",\n      \"status\": \"finished\",\n      \"status-timestamps\": {\n        \"queued-at\": \"2019-03-04T22:29:53+00:00\",\n        \"finished-at\": \"2019-03-04T22:29:58+00:00\",\n        \"expired-at\": \"2019-03-04T23:29:58+00:00\"\n      },\n    },\n    \"relationships\": {\n      \"plan\": {\n        \"data\": {\n          \"id\": \"plan-8F5JFydVYAmtTjET\",\n          \"type\": \"plans\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/plan-exports\/pe-3yVQZvHzf5j3WRJ1\",\n      \"download\": \"\/api\/v2\/plan-exports\/pe-3yVQZvHzf5j3WRJ1\/download\"\n    }\n  }\n}\n```\n\n## Download exported plan data\n\n`GET 
\/plan-exports\/:id\/download`\n\nThis endpoint generates a temporary URL to the location of the exported plan data in a `.tar.gz` archive, and then redirects to that link. If using a client that can follow redirects, you can use this endpoint to save the `.tar.gz` archive locally without needing to save the temporary URL.\n\n| Status  | Response                  | Reason                                                        |\n| ------- | ------------------------- | ------------------------------------------------------------- |\n| [302][] | HTTP Redirect             | Plan export found and temporary download URL generated        |\n| [404][] | [JSON API error object][] | Plan export not found, or user unauthorized to perform action |\n\n[302]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/302\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --location \\\n  https:\/\/app.terraform.io\/api\/v2\/plan-exports\/pe-3yVQZvHzf5j3WRJ1\/download \\\n  > export.tar.gz\n```\n\n## Delete exported plan data\n\n`DELETE \/plan-exports\/:id`\n\nPlan exports expire after being available for one hour, but they can be deleted manually as well.\n\n| Status  | Response                  | Reason                                                        |\n| ------- | ------------------------- | ------------------------------------------------------------- |\n| [204][] | No content                | Plan export deleted successfully                              |\n| [404][] | [JSON API error object][] | Plan export not found, or user unauthorized to perform action |\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[404]: 
https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  -X DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/plan-exports\/pe-3yVQZvHzf5j3WRJ1\n```","site":"terraform","answers_cleaned":"    page title  Plan Exports   API Docs   HCP Terraform description       Use the   plan exports  endpoint to manage plan exports for a Terraform run  Create and show plan exports  and download and delete exported plan data using the HTTP API         Plan Exports API  Plan exports allow users to download data exported from the plan of a Run in a Terraform workspace  Currently  the only supported format for exporting plan data is to generate mock data for Sentinel      Create a plan export   POST  plan exports   This endpoint exports data from a plan in the specified format  The export process is asynchronous  and the resulting data becomes downloadable when its status is   finished    The data is then available for one hour before expiring  After the hour is up  a new export can be created     Status    Response                                         Reason                                                                                                                                                                                                                                                                                                                                                                                          201       JSON API document      type   plan exports      Successfully created a plan export                                                                                                                                 404       JSON API error object                           Plan not found  or user unauthorized to 
perform action                                                                                                             422       JSON API error object                           Malformed request body  missing attributes  wrong types  etc    or a plan export of the provided  data type  is already pending or downloadable for this plan     201   https   developer mozilla org en US docs Web HTTP Status 201   404   https   developer mozilla org en US docs Web HTTP Status 404   422   https   developer mozilla org en US docs Web HTTP Status 422   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects      Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                         Type     Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            data type                       string             Must be   plan exports                                                                                                                                                                                                                                                                                    data attributes data type       string             The format for the export  Currently  the only 
supported format is `sentinel-mock-bundle-v0`. |
| `data.relationships.plan.data` | object |  | A JSON API relationship object that represents the plan being exported. This object must have a `type` of `plans`, and the `id` of a finished Terraform plan that does not already have a downloadable export of the specified `data-type` (e.g. `{ "type": "plans", "id": "plan-8F5JFydVYAmtTjET" }`). |

### Sample Payload

```json
{
  "data": {
    "type": "plan-exports",
    "attributes": {
      "data-type": "sentinel-mock-bundle-v0"
    },
    "relationships": {
      "plan": {
        "data": {
          "id": "plan-8F5JFydVYAmtTjET",
          "type": "plans"
        }
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/plan-exports
```

### Sample Response

```json
{
  "data": {
    "id": "pe-3yVQZvHzf5j3WRJ1",
    "type": "plan-exports",
    "attributes": {
      "data-type": "sentinel-mock-bundle-v0",
      "status": "queued",
      "status-timestamps": {
        "queued-at": "2019-03-04T22:29:53+00:00"
      }
    },
    "relationships": {
      "plan": {
        "data": {
          "id": "plan-8F5JFydVYAmtTjET",
          "type": "plans"
        }
      }
    },
    "links": {
      "self": "/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1"
    }
  }
}
```

## Show a plan export

`GET /plan-exports/:id`

| Parameter | Description                        |
| --------- | ---------------------------------- |
| `:id`     | The ID of the plan export to show. |

There is no endpoint to list plan exports. You can find IDs for plan exports in the `relationships.exports` property of a plan object.

| Status  | Response                                       | Reason                                                        |
| ------- | ---------------------------------------------- | ------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "plan-exports"`) | The request was successful                                    |
| [404][] | [JSON API error object][]                      | Plan export not found, or user unauthorized to perform action |

[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1
```

### Sample Response

```json
{
  "data": {
    "id": "pe-3yVQZvHzf5j3WRJ1",
    "type": "plan-exports",
    "attributes": {
      "data-type": "sentinel-mock-bundle-v0",
      "status": "finished",
      "status-timestamps": {
        "queued-at": "2019-03-04T22:29:53+00:00",
        "finished-at": "2019-03-04T22:29:58+00:00",
        "expired-at": "2019-03-04T23:29:58+00:00"
      }
    },
    "relationships": {
      "plan": {
        "data": {
          "id": "plan-8F5JFydVYAmtTjET",
          "type": "plans"
        }
      }
    },
    "links": {
      "self": "/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1",
      "download": "/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1/download"
    }
  }
}
```

## Download exported plan data

`GET /plan-exports/:id/download`

This endpoint generates a temporary URL to the location of the exported plan data in a `.tar.gz` archive, and then redirects to that link. If using a client that can follow redirects, you can use this endpoint to save the `.tar.gz` archive locally without needing to save the temporary URL.

| Status  | Response                  | Reason                                                        |
| ------- | ------------------------- | ------------------------------------------------------------- |
| [302][] | HTTP Redirect             | Plan export found and temporary download URL generated        |
| [404][] | [JSON API error object][] | Plan export not found, or user unauthorized to perform action |

[302]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/302

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --location \
  https://app.terraform.io/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1/download \
  > export.tar.gz
```

## Delete exported plan data

`DELETE /plan-exports/:id`

Plan exports expire after being available for one hour, but they can be deleted manually as well.

| Status  | Response                  | Reason                                                        |
| ------- | ------------------------- | ------------------------------------------------------------- |
| [204][] | No Content                | Plan export deleted successfully                              |
| [404][] | [JSON API error object][] | Plan export not found, or user unauthorized to perform action |

[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  -X DELETE \
  https://app.terraform.io/api/v2/plan-exports/pe-3yVQZvHzf5j3WRJ1
```
"}
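Taken together, the plan-export endpoints form a create, poll, then download workflow. A minimal sketch follows, assuming a finished plan export ID and a `TOKEN` environment variable; the helper function names are illustrative, and only the URL composition actually runs here (the network call is shown commented out):

```shell
# Compose the endpoint URLs for a plan export ID (helper names are illustrative).
plan_export_url() {
  printf 'https://app.terraform.io/api/v2/plan-exports/%s' "$1"
}
plan_export_download_url() {
  printf '%s/download' "$(plan_export_url "$1")"
}

plan_export_download_url "pe-3yVQZvHzf5j3WRJ1"

# With a real $TOKEN, poll GET "$(plan_export_url <id>)" until "status" is
# "finished", then follow the 302 redirect to save the archive:
# curl --header "Authorization: Bearer $TOKEN" \
#      --location "$(plan_export_download_url pe-3yVQZvHzf5j3WRJ1)" > export.tar.gz
```

Because exports expire an hour after they finish, a script should download promptly after the status reaches `finished` rather than storing the temporary URL.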
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title Workspace Variables API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 Use the workspace vars endpoint to manage workspace specific variables List create update and delete variables using the HTTP API","answers":"---\npage_title: Workspace Variables - API Docs - HCP Terraform\ndescription: >-\n  Use the workspace `\/vars` endpoint to manage workspace-specific variables. List, create, update, and delete variables using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Workspace Variables API\n\nThis set of APIs covers create, update, list and delete operations on workspace variables.\n\nViewing variables requires permission to 
read variables for their workspace. Creating, updating, and deleting variables requires permission to read and write variables for their workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Create a Variable\n\n`POST \/workspaces\/:workspace_id\/vars`\n\n| Parameter       | Description                                        |\n| --------------- | -------------------------------------------------- |\n| `:workspace_id` | The ID of the workspace to create the variable in. |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                      | Type   | Default | Description                                                                                                     |\n| ----------------------------- | ------ | ------- | --------------------------------------------------------------------------------------------------------------- |\n| `data.type`                   | string |         | Must be `\"vars\"`.                                                                                               |\n| `data.attributes.key`         | string |         | The name of the variable.                                                                                       |\n| `data.attributes.value`       | string | `\"\"`    | The value of the variable.                                                                                      |\n| `data.attributes.description` | string |         | The description of the variable.                                                                                |\n| `data.attributes.category`    | string |         | Whether this is a Terraform or environment variable. Valid values are `\"terraform\"` or `\"env\"`.                 
|\n| `data.attributes.hcl`         | bool   | `false` | Whether to evaluate the value of the variable as a string of HCL code. Has no effect for environment variables. |\n| `data.attributes.sensitive`   | bool   | `false` | Whether the value is sensitive. If true then the variable is written once and not visible thereafter.           |\n\n**Deprecation warning**: The custom `filter` properties are replaced by JSON API `relationships` and will be removed from future versions of the API!\n\n| Key path                   | Type   | Default | Description                                           |\n| -------------------------- | ------ | ------- | ----------------------------------------------------- |\n| `filter.workspace.name`    | string |         | The name of the workspace that owns the variable.     |\n| `filter.organization.name` | string |         | The name of the organization that owns the workspace. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"some_key\",\n      \"value\":\"some_value\",\n      \"description\":\"some description\",\n      \"category\":\"terraform\",\n      \"hcl\":false,\n      \"sensitive\":false\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-4j8p6jX1w33MiDC7\/vars\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-EavQ1LztoRTQHSNT\",\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"some_key\",\n      \"value\":\"some_value\",\n      \"description\":\"some description\",\n      \"sensitive\":false,\n      \"category\":\"terraform\",\n      \"hcl\":false,\n      \"version-id\":\"1aa07d63ea8ff4df941c94ca9ddfd5d2bd04\"\n    },\n    \"relationships\": {\n      \"configurable\": {\n        \"data\": {\n          
\"id\":\"ws-4j8p6jX1w33MiDC7\",\n          \"type\":\"workspaces\"\n        },\n        \"links\": {\n          \"related\":\"\/api\/v2\/organizations\/my-organization\/workspaces\/my-workspace\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/workspaces\/ws-4j8p6jX1w33MiDC7\/vars\/var-EavQ1LztoRTQHSNT\"\n    }\n  }\n}\n```\n\n## List Variables\n\n`GET \/workspaces\/:workspace_id\/vars`\n\n| Parameter       | Description                                    |\n| --------------- | ---------------------------------------------- |\n| `:workspace_id` | The ID of the workspace to list variables for. |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n\"https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-cZE9LERN3rGPRAmH\/vars\"\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\":\"var-AD4pibb9nxo1468E\",\n      \"type\":\"vars\",\"attributes\": {\n        \"key\":\"name\",\n        \"value\":\"hello\",\n        \"description\":\"some description\",\n        \"sensitive\":false,\n        \"category\":\"terraform\",\n        \"hcl\":false,\n        \"version-id\":\"1aa07d63ea8ff4df941c94ca9ddfd5d2bd04\"\n      },\n      \"relationships\": {\n        \"configurable\": {\n          \"data\": {\n            \"id\":\"ws-cZE9LERN3rGPRAmH\",\n            \"type\":\"workspaces\"\n          },\n          \"links\": {\n            \"related\":\"\/api\/v2\/organizations\/my-organization\/workspaces\/my-workspace\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\":\"\/api\/v2\/workspaces\/ws-cZE9LERN3rGPRAmH\/vars\/var-AD4pibb9nxo1468E\"\n      }\n    }\n  ]\n}\n```\n\n## Update Variables\n\n`PATCH \/workspaces\/:workspace_id\/vars\/:variable_id`\n\n| Parameter       | Description                                     |\n| --------------- | ----------------------------------------------- |\n| `:workspace_id` | 
The ID of the workspace that owns the variable. |\n| `:variable_id`  | The ID of the variable to be updated.           |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path          | Type   | Default | Description                                                                                                                                                                                                                                                                                          |\n| ----------------- | ------ | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`       | string |         | Must be `\"vars\"`.                                                                                                                                                                                                                                                                                    |\n| `data.id`         | string |         | The ID of the variable to update.                                                                                                                                                                                                                                                                    |\n| `data.attributes` | object |         | New attributes for the variable. This object can include `key`, `value`, `description`, `category`, `hcl`, and `sensitive` properties, which are described above under [create a variable](#create-a-variable). All of these properties are optional; if omitted, a property will be left unchanged. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-yRmifb4PJj7cLkMG\",\n    \"attributes\": {\n      \"key\":\"name\",\n      \"value\":\"mars\",\n      \"description\":\"some description\",\n      \"category\":\"terraform\",\n      \"hcl\": false,\n      \"sensitive\": false\n    },\n    \"type\":\"vars\"\n  }\n}\n```\n\n### Sample Request\n\n```bash\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-4j8p6jX1w33MiDC7\/vars\/var-yRmifb4PJj7cLkMG\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-yRmifb4PJj7cLkMG\",\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"name\",\n      \"value\":\"mars\",\n      \"description\":\"some description\",\n      \"sensitive\":false,\n      \"category\":\"terraform\",\n      \"hcl\":false,\n      \"version-id\":\"1aa07d63ea8ff4df941c94ca9ddfd5d2bd04\"\n    },\n    \"relationships\": {\n      \"configurable\": {\n        \"data\": {\n          \"id\":\"ws-4j8p6jX1w33MiDC7\",\n          \"type\":\"workspaces\"\n        },\n        \"links\": {\n          \"related\":\"\/api\/v2\/organizations\/workspace-v2-06\/workspaces\/workspace-v2-06\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/workspaces\/ws-4j8p6jX1w33MiDC7\/vars\/var-yRmifb4PJj7cLkMG\"\n    }\n  }\n}\n```\n\n## Delete Variables\n\n`DELETE \/workspaces\/:workspace_id\/vars\/:variable_id`\n\n| Parameter       | Description                                     |\n| --------------- | ----------------------------------------------- |\n| `:workspace_id` | The ID of the workspace that owns the variable. |\n| `:variable_id`  | The ID of the variable to be deleted.           
|\n\n### Sample Request\n\n```bash\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-4j8p6jX1w33MiDC7\/vars\/var-yRmifb4PJj7cLkMG\n```","site":"terraform"}
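The create/update payloads above are plain JSON:API documents, so it is easy to assemble one from shell variables and validate it locally before POSTing. A sketch, assuming the endpoint shown in the workspace variables document; `KEY` and `VALUE` are illustrative placeholders, and the actual API call is shown commented out:

```shell
# Sketch: assemble the create-variable payload and check it is valid JSON
# before sending it. KEY and VALUE are illustrative placeholders.
KEY="some_key"
VALUE="some_value"
cat > payload.json <<EOF
{
  "data": {
    "type": "vars",
    "attributes": {
      "key": "$KEY",
      "value": "$VALUE",
      "category": "terraform",
      "hcl": false,
      "sensitive": false
    }
  }
}
EOF
# Fails loudly on malformed JSON (e.g. an unescaped quote in VALUE).
python3 -m json.tool payload.json > /dev/null && echo "payload OK"

# Then, with a real $TOKEN and workspace ID:
# curl --header "Authorization: Bearer $TOKEN" \
#      --header "Content-Type: application/vnd.api+json" \
#      --request POST --data @payload.json \
#      https://app.terraform.io/api/v2/workspaces/ws-4j8p6jX1w33MiDC7/vars
```

Validating first matters because values containing quotes or backslashes must be JSON-escaped; otherwise the API returns a 400 for the malformed document.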
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title Team Access API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 Use the team workspaces endpoint to manage team access to a workspace List show add update and remove team access using the HTTP API","answers":"---\npage_title: Team Access - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/team-workspaces` endpoint to manage team access to a workspace. List, show, add, update, and remove team access using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Team Access API\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n-> **Note:** Team management is available in HCP Terraform **Standard** Edition. 
[Learn more about HCP Terraform pricing here](https:\/\/www.hashicorp.com\/products\/terraform\/pricing).\n<!-- END: TFC:only name:pnp-callout -->\n\nThe team access APIs are used to associate a team with permissions on a workspace. A single `team-workspace` resource contains the relationship between the Team and Workspace, including the privileges the team has on the workspace.\n\n## Resource permissions\n\nA `team-workspace` resource represents a team's local permissions on a specific workspace. Teams can also have organization-level permissions that grant access to workspaces. HCP Terraform uses the more restrictive access level. For example, a team with the **Manage workspaces** permission enabled has admin access on all workspaces, even if their `team-workspace` on a particular workspace only grants read access. For more information, refer to [Managing Workspace Access](\/terraform\/cloud-docs\/users-teams-organizations\/teams\/manage#managing-workspace-access).\n\nAny member of an organization can view team access relative to their own team memberships, including secret teams of which they are a member. Organization owners and workspace admins can modify team access or view the full set of secret team accesses. The organization token and the owners team token can act as an owner on these endpoints. 
Refer to [Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) for additional information.\n\n## List Team Access to a Workspace\n\n`GET \/team-workspaces`\n\n| Status  | Response                                          | Reason                                                     |\n| ------- | ------------------------------------------------- | ---------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"team-workspaces\"`) | The request was successful                                 |\n| [404][] | [JSON API error object][]                         | Workspace not found or user unauthorized to perform action |\n\n### Query Parameters\n\n[These are standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). If neither pagination query parameter is provided, the endpoint will not be paginated and will return all results.\n\n| Parameter               | Description                                                                                                                                                                                                          |\n| ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `filter[workspace][id]` | **Required.** The workspace ID to list team access for. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. 
|\n| `page[number]`          | **Optional.**                                                                                                                                                                                                        |\n| `page[size]`            | **Optional.**                                                                                                                                                                                                        |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  \"https:\/\/app.terraform.io\/api\/v2\/team-workspaces?filter%5Bworkspace%5D%5Bid%5D=ws-XGA52YVykdTgryTN\"\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"tws-19iugLwoNgtWZbKP\",\n      \"type\": \"team-workspaces\",\n      \"attributes\": {\n        \"access\": \"custom\",\n        \"runs\": \"apply\",\n        \"variables\": \"none\",\n        \"state-versions\": \"none\",\n        \"sentinel-mocks\": \"none\",\n        \"workspace-locking\": false,\n        \"run-tasks\": false\n      },\n      \"relationships\": {\n        \"team\": {\n          \"data\": {\n            \"id\": \"team-DBycxkdQrGFf5zEM\",\n            \"type\": \"teams\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/teams\/team-DBycxkdQrGFf5zEM\"\n          }\n        },\n        \"workspace\": {\n          \"data\": {\n            \"id\": \"ws-XGA52YVykdTgryTN\",\n            \"type\": \"workspaces\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/my-organization\/workspaces\/my-workspace\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/team-workspaces\/tws-19iugLwoNgtWZbKP\"\n      }\n    }\n  ]\n}\n```\n\n## Show a Team Access relationship\n\n`GET \/team-workspaces\/:id`\n\n| Status  | Response   
                                       | Reason                                                       |\n| ------- | ------------------------------------------------- | ------------------------------------------------------------ |\n| [200][] | [JSON API document][] (`type: \"team-workspaces\"`) | The request was successful                                   |\n| [404][] | [JSON API error object][]                         | Team access not found or user unauthorized to perform action |\n\n| Parameter | Description                                                                                                                                  |\n| --------- | -------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the team\/workspace relationship. Obtain this from the [list team access action](#list-team-access-to-a-workspace) described above. |\n\n-> **Note:** As mentioned in [Add Team Access to a Workspace](#add-team-access-to-a-workspace) and [Update Team Access\nto a Workspace](#update-team-access-to-a-workspace), several permission attributes are not editable unless `access` is\nset to `custom`. 
When access is `read`, `plan`, `write`, or `admin`, these attributes are read-only and reflect the\nimplicit permissions granted to the current access level.\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/team-workspaces\/tws-s68jV4FWCDwWvQq8\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"tws-s68jV4FWCDwWvQq8\",\n    \"type\": \"team-workspaces\",\n    \"attributes\": {\n      \"access\": \"write\",\n      \"runs\": \"apply\",\n      \"variables\": \"write\",\n      \"state-versions\": \"write\",\n      \"sentinel-mocks\": \"read\",\n      \"workspace-locking\": true,\n      \"run-tasks\": false\n    },\n    \"relationships\": {\n      \"team\": {\n        \"data\": {\n          \"id\": \"team-DBycxkdQrGFf5zEM\",\n          \"type\": \"teams\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/teams\/team-DBycxkdQrGFf5zEM\"\n        }\n      },\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-XGA52YVykdTgryTN\",\n          \"type\": \"workspaces\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\/workspaces\/my-workspace\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/team-workspaces\/tws-s68jV4FWCDwWvQq8\"\n    }\n  }\n}\n```\n\n## Add Team Access to a Workspace\n\n`POST \/team-workspaces`\n\n| Status  | Response                                          | Reason                                                             |\n| ------- | ------------------------------------------------- | ------------------------------------------------------------------ |\n| [200][] | [JSON API document][] (`type: \"team-workspaces\"`) | The request was successful                                         |\n| [404][] | [JSON API error object][]                         | 
Workspace or Team not found or user unauthorized to perform action |\n| [422][] | [JSON API error object][]                         | Malformed request body (missing attributes, wrong types, etc.)     |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                 | Type    | Default | Description                                                                                                                                                                                       |\n| ---------------------------------------- | ------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                              | string  |         | Must be `\"team-workspaces\"`.                                                                                                                                                                      |\n| `data.attributes.access`                 | string  |         | The type of access to grant. Valid values are `read`, `plan`, `write`, `admin`, or `custom`.                                                                                                      |\n| `data.attributes.runs`                   | string  | \"read\"  | If `access` is `custom`, the permission to grant for the workspace's runs. Can only be used when `access` is `custom`. Valid values include `read`, `plan`, or `apply`.                           |\n| `data.attributes.variables`              | string  | \"none\"  | If `access` is `custom`, the permission to grant for the workspace's variables. Can only be used when `access` is `custom`. Valid values include `none`, `read`, or `write`.                      
|\n| `data.attributes.state-versions`         | string  | \"none\"  | If `access` is `custom`, the permission to grant for the workspace's state versions. Can only be used when `access` is `custom`. Valid values include `none`, `read-outputs`, `read`, or `write`. |\n| `data.attributes.sentinel-mocks`         | string  | \"none\"  | If `access` is `custom`, the permission to grant for the workspace's Sentinel mocks. Can only be used when `access` is `custom`. Valid values include `none` or `read`.                           |\n| `data.attributes.workspace-locking`      | boolean | false   | If `access` is `custom`, the permission granting the ability to manually lock or unlock the workspace. Can only be used when `access` is `custom`.                                                |\n| `data.attributes.run-tasks`              | boolean | false   | If `access` is `custom`, this permission allows the team to manage run tasks within the workspace.                                                                                                |\n| `data.relationships.workspace.data.type` | string  |         | Must be `workspaces`.                                                                                                                                                                             |\n| `data.relationships.workspace.data.id`   | string  |         | The workspace ID to which the team is to be added.                                                                                                                                                |\n| `data.relationships.team.data.type`      | string  |         | Must be `teams`.                                                                                                                                                                                  |\n| `data.relationships.team.data.id`        | string  |         | The ID of the team to add to the workspace.   
                                                                                                    |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"access\": \"custom\",\n      \"runs\": \"apply\",\n      \"variables\": \"none\",\n      \"state-versions\": \"read-outputs\",\n      \"plan-outputs\": \"none\",\n      \"sentinel-mocks\": \"read\",\n      \"workspace-locking\": false,\n      \"run-tasks\": false\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"type\": \"workspaces\",\n          \"id\": \"ws-XGA52YVykdTgryTN\"\n        }\n      },\n      \"team\": {\n        \"data\": {\n          \"type\": \"teams\",\n          \"id\": \"team-DBycxkdQrGFf5zEM\"\n        }\n      }\n    },\n    \"type\": \"team-workspaces\"\n  }\n}\n```\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/team-workspaces\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"tws-sezDAcCYWLnd3xz2\",\n    \"type\": \"team-workspaces\",\n    \"attributes\": {\n      \"access\": \"custom\",\n      \"runs\": \"apply\",\n      \"variables\": \"none\",\n      \"state-versions\": \"read-outputs\",\n      \"sentinel-mocks\": \"read\",\n      \"workspace-locking\": false,\n      \"run-tasks\": false\n    },\n    \"relationships\": {\n      \"team\": {\n        \"data\": {\n          \"id\": \"team-DBycxkdQrGFf5zEM\",\n          \"type\": \"teams\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/teams\/team-DBycxkdQrGFf5zEM\"\n        }\n      },\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-XGA52YVykdTgryTN\",\n          \"type\": \"workspaces\"\n        },\n        \"links\": {\n          \"related\": 
\"\/api\/v2\/organizations\/my-organization\/workspaces\/my-workspace\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/team-workspaces\/tws-sezDAcCYWLnd3xz2\"\n    }\n  }\n}\n```\n\n## Update Team Access to a Workspace\n\n`PATCH \/team-workspaces\/:id`\n\n| Status  | Response                                          | Reason                                                         |\n| ------- | ------------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"team-workspaces\"`) | The request was successful                                     |\n| [404][] | [JSON API error object][]                         | Team Access not found or user unauthorized to perform action   |\n| [422][] | [JSON API error object][]                         | Malformed request body (missing attributes, wrong types, etc.) |\n\n| Parameter                           |         |        | Description                                                                                                                                        |\n| ----------------------------------- | ------- | ------ | -------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:id`                               |         |        | The ID of the team\/workspace relationship. Obtain this from the [list team access action](#list-team-access-to-a-workspace) described above.       |\n| `data.attributes.access`            | string  |        | The type of access to grant. Valid values are `read`, `plan`, `write`, `admin`, or `custom`.                                                       |\n| `data.attributes.runs`              | string  | \"read\" | If `access` is `custom`, the permission to grant for the workspace's runs. Can only be used when `access` is `custom`.                             
|\n| `data.attributes.variables`         | string  | \"none\" | If `access` is `custom`, the permission to grant for the workspace's variables. Can only be used when `access` is `custom`.                        |\n| `data.attributes.state-versions`    | string  | \"none\" | If `access` is `custom`, the permission to grant for the workspace's state versions. Can only be used when `access` is `custom`.                   |\n| `data.attributes.sentinel-mocks`    | string  | \"none\" | If `access` is `custom`, the permission to grant for the workspace's Sentinel mocks. Can only be used when `access` is `custom`.                   |\n| `data.attributes.workspace-locking` | boolean | false  | If `access` is `custom`, the permission granting the ability to manually lock or unlock the workspace. Can only be used when `access` is `custom`. |\n| `data.attributes.run-tasks`         | boolean | false  | If `access` is `custom`, this permission allows the team to manage run tasks within the workspace.                                                 
|\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/team-workspaces\/tws-s68jV4FWCDwWvQq8\n```\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"access\": \"custom\",\n      \"state-versions\": \"none\"\n    }\n  }\n}\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"tws-s68jV4FWCDwWvQq8\",\n    \"type\": \"team-workspaces\",\n    \"attributes\": {\n      \"access\": \"custom\",\n      \"runs\": \"apply\",\n      \"variables\": \"write\",\n      \"state-versions\": \"none\",\n      \"sentinel-mocks\": \"read\",\n      \"workspace-locking\": true,\n      \"run-tasks\": true\n    },\n    \"relationships\": {\n      \"team\": {\n        \"data\": {\n          \"id\": \"team-DBycxkdQrGFf5zEM\",\n          \"type\": \"teams\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/teams\/team-DBycxkdQrGFf5zEM\"\n        }\n      },\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-XGA52YVykdTgryTN\",\n          \"type\": \"workspaces\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\/workspaces\/my-workspace\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/team-workspaces\/tws-s68jV4FWCDwWvQq8\"\n    }\n  }\n}\n```\n\n## Remove Team Access to a Workspace\n\n`DELETE \/team-workspaces\/:id`\n\n| Status  | Response                  | Reason                                                       |\n| ------- | ------------------------- | ------------------------------------------------------------ |\n| [204][] |                           | The Team Access was successfully destroyed                   |\n| [404][] | [JSON API error object][] | Team Access not found or user unauthorized to perform action |\n\n| Parameter 
| Description                                                                                                                                  |\n| --------- | -------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the team\/workspace relationship. Obtain this from the [list team access action](#list-team-access-to-a-workspace) described above. |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/team-workspaces\/tws-sezDAcCYWLnd3xz2\n```
api v2 organizations my organization workspaces my workspace                                links            self     api v2 team workspaces tws s68jV4FWCDwWvQq8                      Remove Team Access to a Workspace   DELETE  team workspaces  id     Status    Response                    Reason                                                                                                                                                                   204                                  The Team Access was successfully destroyed                        404       JSON API error object      Team Access not found or user unauthorized to perform action      Parameter   Description                                                                                                                                                                                                                                                                                                     id        The ID of the team workspace relationship  Obtain this from the  list team access action   list team access to a workspace  described above         Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 team workspaces tws sezDAcCYWLnd3xz2    "}
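The record above ends the team-access section of the HCP Terraform API docs: `PATCH /team-workspaces/:id` takes a `data.attributes.access` level (`read`, `plan`, `write`, `admin`, or `custom`), and the fine-grained attributes (`runs`, `variables`, `state-versions`, and so on) are only valid when `access` is `custom`. As a minimal sketch of that constraint — the endpoint shape and attribute names come from the record, but the helper itself is hypothetical and not part of any SDK:

```python
# Hypothetical helper: build the JSON:API body for
# PATCH /api/v2/team-workspaces/:id (update team access to a workspace).
# Attribute names ("access", "runs", "state-versions", ...) come from the
# docs above; the function name and validation are illustrative only.

VALID_ACCESS = {"read", "plan", "write", "admin", "custom"}

def team_access_payload(access, **custom_permissions):
    """Return the PATCH payload; fine-grained keyword permissions
    (runs, variables, state_versions, ...) require access="custom"."""
    if access not in VALID_ACCESS:
        raise ValueError(f"invalid access level: {access!r}")
    if custom_permissions and access != "custom":
        raise ValueError("fine-grained permissions require access='custom'")
    attributes = {"access": access}
    # JSON:API attribute keys are dash-separated, not underscore-separated.
    attributes.update(
        {key.replace("_", "-"): value for key, value in custom_permissions.items()}
    )
    return {"data": {"attributes": attributes}}

payload = team_access_payload("custom", state_versions="none")
```

Serializing `payload` with `json.dumps` yields the same shape as the sample PATCH payload in the record.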
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the reserved tag keys API endpoints to denote tag keys that have special meaning for your organization Reserving tag keys allows project and workspace managers can follow a consistent tagging strategy and also provides project admins with a means of disabling overriding inherited tags 201 https developer mozilla org en US docs Web HTTP Status 201 page title Reserved Tag Keys API Docs HCP Terraform","answers":"---\npage_title: Reserved Tag Keys - API Docs - HCP Terraform\ndescription: >-\n    Use the `\/reserved-tag-keys` API endpoints to denote tag keys that have special meaning for your organization. Reserving tag keys lets project and workspace managers follow a consistent tagging strategy, and also provides project admins with a means of disabling overrides for inherited tags.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: 
https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n[speculative plans]: \/terraform\/cloud-docs\/run\/remote-operations#speculative-plans\n\n# Reserved Tag Keys API\n\nUse the `\/reserved-tag-keys` API endpoints to define and manage tag keys that\nhave special meaning for your organization. Reserving tag keys enables project\nand workspace managers to follow a consistent tagging strategy across the\norganization. You can also use them to provide project admins with a means of\ndisabling overrides for inherited tags.\n\nThe following table describes the available endpoints:\n\n| Method | Path | Description |\n| --- | --- | --- |\n| `GET` | `\/organizations\/:organization_name\/reserved-tag-keys` | [List reserved tag keys](#list-reserved-tag-keys) for the specified organization. |\n| `POST` | `\/organizations\/:organization_name\/reserved-tag-keys` | [Create a reserved tag key](#create-a-reserved-tag-key) in the specified organization. |\n| `PATCH` | `\/reserved-tags\/:reserved_tag_key_id` | [Update a reserved tag key](#update-a-reserved-tag-key) with the specified ID. |\n| `DELETE` | `\/reserved-tags\/:reserved_tag_key_id` | [Delete a reserved tag key](#delete-a-reserved-tag-key) with the specified ID. |\n\n\n## Path parameters\n\nThe `\/reserved-tag-keys\/` API endpoints require the following path parameters:\n\n| Parameter     | Description    |\n|---------------|----------------|\n| `:reserved_tag_key_id` | The external ID of the reserved tag key. |\n| `:organization_name` | The name of the organization containing the reserved tags. 
|\n\n## List reserved tag keys\n\n`GET \/organizations\/:organization_name\/reserved-tag-keys`\n\n### Sample payload\n\nThis endpoint does not require a payload.\n\n### Sample request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/reserved-tag-keys\n```\n\n### Sample response\n\n```json\n{\n    \"data\": [\n        {\n            \"id\": \"rtk-jjnTseo8NN1jACbk\",\n            \"type\": \"reserved-tag-keys\",\n            \"attributes\": {\n                \"key\": \"environment\",\n                \"disable-overrides\": false,\n                \"created-at\": \"2024-08-13T23:06:42.523Z\",\n                \"updated-at\": \"2024-08-13T23:06:42.523Z\"\n            }\n        },\n        {\n            \"id\": \"rtk-F1s7kKUShAQxhA1b\",\n            \"type\": \"reserved-tag-keys\",\n            \"attributes\": {\n                \"key\": \"cost-center\",\n                \"disable-overrides\": false,\n                \"created-at\": \"2024-08-13T23:06:51.445Z\",\n                \"updated-at\": \"2024-08-13T23:06:51.445Z\"\n            }\n        }\n    ],\n    \"links\": {\n        \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/reserved-tag-keys?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n        \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/reserved-tag-keys?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n        \"prev\": null,\n        \"next\": null,\n        \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/reserved-tag-keys?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n    },\n    \"meta\": {\n        \"pagination\": {\n            \"current-page\": 1,\n            \"page-size\": 20,\n            \"prev-page\": null,\n            \"next-page\": null,\n            \"total-pages\": 1,\n            
\"total-count\": 2\n        }\n    }\n}\n```\n\n## Create a reserved tag key\n\n`POST \/organizations\/:organization_name\/reserved-tag-keys`\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type | Default | Description |\n|---|---|---|--- |\n| `data.type` | string | none | Must be `reserved-tag-keys`. |\n| `data.attributes.key` | string | none | The key targeted by this reserved tag key. |\n| `data.attributes.disable-overrides` | boolean | none | If `true`, disables overriding inherited tags with the specified key at the workspace level. |\n\n### Sample payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"reserved-tag-keys\",\n    \"attributes\": {\n      \"key\": \"environment\",\n      \"disable-overrides\": false\n    }\n  }\n}\n```\n\n### Sample request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/${ORGANIZATION_NAME}\/reserved-tag-keys\n```\n\n### Sample response\n\n```json\n{\n    \"data\": {\n        \"id\": \"rtk-Tj86UdGahKGDiYXY\",\n        \"type\": \"reserved-tag-keys\",\n        \"attributes\": {\n            \"key\": \"environment\",\n            \"disable-overrides\": false,\n            \"created-at\": \"2024-09-04T05:02:06.794Z\",\n            \"updated-at\": \"2024-09-04T05:02:06.794Z\"\n        }\n    }\n}\n```\n\n## Update a reserved tag key\n\n`PATCH \/reserved-tags\/:reserved_tag_key_id`\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type | Default | Description |\n|---|---|---|--- |\n| `data.type` | string | none | Must be `reserved-tag-keys`. |\n| `data.attributes.key` | string | none | The key targeted by this reserved tag key. 
|\n| `data.attributes.disable-overrides` | boolean | none | If `true`, disables overriding inherited tags with the specified key at the workspace level. |\n\n### Sample payload\n\n```json\n{\n  \"data\": {\n    \"id\": \"rtk-Tj86UdGahKGDiYXY\",\n    \"type\": \"reserved-tag-keys\",\n    \"attributes\": {\n      \"key\": \"env\",\n      \"disable-overrides\": true,\n      \"created-at\": \"2024-09-04T05:02:06.794Z\",\n      \"updated-at\": \"2024-09-04T05:02:06.794Z\"\n    }\n  }\n}\n```\n\n### Sample request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/reserved-tags\/${RESERVED_TAG_ID}\n```\n\n### Sample response\n\n```json\n{\n    \"data\": {\n        \"id\": \"rtk-zMtWLDftAjY3b5pA\",\n        \"type\": \"reserved-tag-keys\",\n        \"attributes\": {\n            \"key\": \"env\",\n            \"disable-overrides\": true,\n            \"created-at\": \"2024-09-04T05:05:10.449Z\",\n            \"updated-at\": \"2024-09-04T05:05:13.486Z\"\n        }\n    }\n}\n```\n\n## Delete a reserved tag key\n\n`DELETE \/reserved-tags\/:reserved_tag_key_id`\n\n### Sample payload\n\nThis endpoint does not require a payload.\n\n### Sample request\n\n```shell-session\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/reserved-tags\/rtk-zMtWLDftAjY3b5pA\n```\n\n### Sample response\n\nThis endpoint does not return a response body.\n","site":"terraform","answers_cleaned":"    page title  Reserved Tag Keys   API Docs   HCP Terraform description         Use the   reserved tag keys  API endpoints to denote tag keys that have special meaning for your organization  Reserving tag keys allows project and workspace managers can follow a consistent tagging strategy  and also provides project admins with a means 
of disabling overriding inherited tags        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects   speculative plans    terraform cloud docs run remote operations speculative plans    Reserved Tag Keys API  Use the   reserved tag keys  API endpoints to define and manage tag keys that have special meaning for your organization  Reserving tag keys enable project and workspace managers to follow a consistent tagging strategy across the organization  You can also use them to provide project managers with a means of disabling overrides for inherited tags   The following table describes the available endpoints     Method   Path   Description                          GET      organizations  organization name reserved tag keys     List reserved tag keys   list reserved tag keys  for the specified organization       POST      organizations  organization name reserved tag keys     Add a reserved tag key   add a reserved tag key  to the 
specified organization       PATCH      reserved tags  reserved tag key id      Update a reserved tag key   add a reserved tag value  with the specified ID       DELETE      reserved tags  reserved tag key id     Delete a reserved tag key   delete a reserved tag key  with the specified ID         Path parameters  The   reserved tag keys   API endpoints require the following path parameters     Parameter       Description                                             reserved tag key id    The external ID of the reserved tag key        organization name    The name of the organization containing the reserved tags        List reserved tag keys   GET  organizations  organization name reserved tag keys       Sample payload  This endpoint does not require a payload       Sample request     shell session   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 organizations my organization reserved tag keys          Sample response     json        data                            id    rtk jjnTseo8NN1jACbk                type    reserved tag keys                attributes                      key    environment                    disable overrides   false                   created at    2024 08 13T23 06 42 523Z                    updated at    2024 08 13T23 06 42 523Z                                                  id    rtk F1s7kKUShAQxhA1b                type    reserved tag keys                attributes                      key    cost center                    disable overrides   false                   created at    2024 08 13T23 06 51 445Z                    updated at    2024 08 13T23 06 51 445Z                                       links              self    https   app terraform io api v2 organizations my organization reserved tag keys page 5Bnumber 5D 1 page 5Bsize 5D 20            first    https   app terraform io api v2 organizations my organization reserved 
tag keys page 5Bnumber 5D 1 page 5Bsize 5D 20            prev   null           next   null           last    https   app terraform io api v2 organizations my organization reserved tag keys page 5Bnumber 5D 1 page 5Bsize 5D 20              meta              pagination                  current page   1               page size   20               prev page   null               next page   null               total pages   1               total count   2                            Create a reserved tag key   POST  organizations  organization name reserved tag keys   This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path   Type   Default   Description                         data type    string   none   Must be  reserved tag keys        data attributes key    string   none   The key targeted by this reserved tag key       data attributes disable overrides    boolean   none   If  true   disables overriding inherited tags with the specified key at the workspace level         Sample payload     json      data          type    reserved tag keys        attributes            key    environment          disable overrides   false                      Sample request     shell session   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST     https   app terraform io api v2 organizations   ORGANIZATION NAME  reserved tag keys          Sample response     json        data              id    rtk Tj86UdGahKGDiYXY            type    reserved tag keys            attributes                  key    environment                disable overrides   false               created at    2024 09 04T05 02 06 794Z                updated at    2024 09 04T05 02 06 794Z                            Update a reserved tag key   PATCH  reserved tags  reserved tag key id   This PATCH endpoint requires a JSON object with the following 
properties as a request payload   Properties without a default value are required     Key path   Type   Default   Description                         data type    string   none   Must be  reserved tag keys        data attributes key    string   none   The key targeted by this reserved tag key       data attributes disable overrides    boolean   none   If  true   disables overriding inherited tags with the specified key at the workspace level         Sample payload     json      data          id    rtk Tj86UdGahKGDiYXY        type    reserved tag keys        attributes            key    env          disable overrides   true         created at    2024 09 04T05 02 06 794Z          updated at    2024 09 04T05 02 06 794Z                       Sample request     shell session   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH     https   app terraform io api v2 reserved tags   RESERVED TAG ID           Sample response     json        data              id    rtk zMtWLDftAjY3b5pA            type    reserved tag keys            attributes                  key    env                disable overrides   true               created at    2024 09 04T05 05 10 449Z                updated at    2024 09 04T05 05 13 486Z                            Delete a reserved tag key   DELETE  reserved tags  reserved tag key id       Sample payload  This endpoint does not require a payload       Sample request     shell session   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 reserved tags rtk zMtWLDftAjY3b5pA          Sample response  This endpoint does not return a response body  "}
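The reserved-tag-keys record above documents four endpoints that all use the same bearer token and `application/vnd.api+json` media type. As a hedged sketch of assembling the create call — the path, headers, and attribute names come from the record, while `API_BASE` and the helper names are illustrative:

```python
# Hypothetical request builder for
# POST /api/v2/organizations/:organization_name/reserved-tag-keys.
# Paths, media type, and attribute names are taken from the docs above;
# the functions themselves are illustrative, not part of any SDK.

API_BASE = "https://app.terraform.io/api/v2"

def reserved_tag_key_payload(key, disable_overrides=False):
    """Body for creating a reserved tag key (data.type must be the
    literal string "reserved-tag-keys")."""
    return {
        "data": {
            "type": "reserved-tag-keys",
            "attributes": {
                "key": key,
                "disable-overrides": disable_overrides,
            },
        }
    }

def create_reserved_tag_key_request(token, organization, key, disable_overrides=False):
    """Return (url, headers, body) for any HTTP client to send."""
    url = f"{API_BASE}/organizations/{organization}/reserved-tag-keys"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/vnd.api+json",
    }
    return url, headers, reserved_tag_key_payload(key, disable_overrides)

url, headers, body = create_reserved_tag_key_request("$TOKEN", "my-organization", "environment")
```

Returning the pieces instead of performing the request keeps the sketch client-agnostic; the same triple works with `urllib.request`, `requests`, or `httpx`.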
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the feature sets endpoint to review feature sets List available feature sets and which feature sets an organization is eligible for using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201 page title Feature Sets API Docs HCP Terraform tfc only true","answers":"---\npage_title: Feature Sets - API Docs - HCP Terraform\ntfc_only: true\ndescription: >-\n  Use the `\/feature-sets` endpoint to review feature sets. List available feature sets and which feature sets an organization is eligible for using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Feature Sets API\n\n-> **Note:** The feature sets API is only available in HCP Terraform.\n\nFeature 
sets represent the different [pricing plans](\/terraform\/cloud-docs\/overview) available to HCP Terraform organizations. An organization's [entitlement set](\/terraform\/cloud-docs\/api-docs#feature-entitlements) is calculated using its [subscription](\/terraform\/cloud-docs\/api-docs\/subscriptions) and feature set.\n\n## List Feature Sets\n\nThis endpoint lists the feature sets available in HCP Terraform.\n\n`GET \/feature-sets`\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs. If no pagination query parameters are provided, the endpoint will not be paginated and will return all results.\n\n| Parameter      | Description                                                                  |\n| -------------- | ---------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.           |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 feature sets per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/feature-sets\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"fs-GN3kSR1GqWNfcFaW\",\n      \"type\": \"feature-sets\",\n      \"attributes\": {\n        \"assessments\": false,\n        \"audit-logging\": false,\n        \"comparison-description\": \"\",\n        \"concurrency-override\": false,\n        \"cost-estimation\": true,\n        \"cost\": 0,\n        \"default-agents-ceiling\": 1,\n        \"default-runs-ceiling\": 1,\n        \"description\": \"Free 500 managed resources, then downgrade to limited features\",\n        \"global-run-tasks\": false,\n        \"identifier\": \"free_standard\",\n        \"is-current\": true,\n        \"is-free-tier\": true,\n        \"module-tests-generation\": false,\n        \"name\": \"Free\",\n        \"no-code-modules\": false,\n        \"plan\": null,\n        \"policy-enforcement\": true,\n        \"policy-limit\": null,\n        \"policy-mandatory-enforcement-limit\": null,\n        \"policy-set-limit\": null,\n        \"private-networking\": true,\n        \"private-policy-agents\": false,\n        \"private-vcs\": false,\n        \"run-task-limit\": null,\n        \"run-task-mandatory-enforcement-limit\": null,\n        \"run-task-workspace-limit\": null,\n        \"run-tasks\": true,\n        \"self-serve-billing\": true,\n        \"sentinel\": true,\n        \"sso\": true,\n        \"teams\": false,\n        \"user-limit\": null,\n        \"versioned-policy-set-limit\": null\n      }\n    },\n    {\n      \"id\": \"fs-f3xYUkkXwY8ZGP9g\",\n      \"type\": \"feature-sets\",\n      \"attributes\": {\n        \"assessments\": false,\n        \"audit-logging\": false,\n        \"comparison-description\": \"\",\n        \"concurrency-override\": true,\n        \"cost-estimation\": true,\n        
\"cost\": 0,\n        \"default-agents-ceiling\": 10,\n        \"default-runs-ceiling\": 10,\n        \"description\": \"Automated infrastructure provisioning at any scale. First 500 free managed resources included.\",\n        \"global-run-tasks\": false,\n        \"identifier\": \"standard\",\n        \"is-current\": true,\n        \"is-free-tier\": true,\n        \"module-tests-generation\": false,\n        \"name\": \"Standard\",\n        \"no-code-modules\": false,\n        \"plan\": null,\n        \"policy-enforcement\": true,\n        \"policy-limit\": null,\n        \"policy-mandatory-enforcement-limit\": null,\n        \"policy-set-limit\": null,\n        \"private-networking\": true,\n        \"private-policy-agents\": false,\n        \"private-vcs\": false,\n        \"run-task-limit\": null,\n        \"run-task-mandatory-enforcement-limit\": null,\n        \"run-task-workspace-limit\": null,\n        \"run-tasks\": true,\n        \"self-serve-billing\": true,\n        \"sentinel\": true,\n        \"sso\": true,\n        \"teams\": false,\n        \"user-limit\": null,\n        \"versioned-policy-set-limit\": null\n      }\n    },\n    {\n      \"id\": \"fs-JhVd6dwBSZ3THzHV\",\n      \"type\": \"feature-sets\",\n      \"attributes\": {\n        \"assessments\": true,\n        \"audit-logging\": true,\n        \"comparison-description\": \"\",\n        \"concurrency-override\": true,\n        \"cost-estimation\": true,\n        \"cost\": 0,\n        \"default-agents-ceiling\": 10,\n        \"default-runs-ceiling\": 10,\n        \"description\": \"Automated infrastructure provisioning and management at any scale\",\n        \"global-run-tasks\": true,\n        \"identifier\": \"plus\",\n        \"is-current\": true,\n        \"is-free-tier\": true,\n        \"module-tests-generation\": true,\n        \"name\": \"Plus\",\n        \"no-code-modules\": true,\n        \"plan\": null,\n        \"policy-enforcement\": true,\n        \"policy-limit\": null,\n      
  \"policy-mandatory-enforcement-limit\": null,\n        \"policy-set-limit\": null,\n        \"private-networking\": true,\n        \"private-policy-agents\": false,\n        \"private-vcs\": false,\n        \"run-task-limit\": null,\n        \"run-task-mandatory-enforcement-limit\": null,\n        \"run-task-workspace-limit\": null,\n        \"run-tasks\": true,\n        \"self-serve-billing\": true,\n        \"sentinel\": true,\n        \"sso\": true,\n        \"teams\": true,\n        \"user-limit\": null,\n        \"versioned-policy-set-limit\": null\n      }\n    }\n  ]\n}\n```\n\n## List Feature Sets for Organization\n\nThis endpoint lists the feature sets a particular organization is eligible to access. The results may differ from the global endpoint above; for instance, if the organization has already had a free trial, the trial feature set will not appear in this list.\n\n`GET \/organizations\/:organization_name\/feature-sets`\n\n| Parameter           | Description                  |\n| ------------------- | ---------------------------- |\n| `organization_name` | The name of the organization |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs. If no pagination query parameters are provided, the endpoint will not be paginated and will return all results.\n\n| Parameter      | Description                                                                               |\n| -------------- | ----------------------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.                        |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 organization feature sets per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/feature-sets\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"fs-GN3kSR1GqWNfcFaW\",\n      \"type\": \"feature-sets\",\n      \"attributes\": {\n        \"assessments\": false,\n        \"audit-logging\": false,\n        \"comparison-description\": \"\",\n        \"concurrency-override\": false,\n        \"cost-estimation\": true,\n        \"cost\": 0,\n        \"default-agents-ceiling\": 1,\n        \"default-runs-ceiling\": 1,\n        \"description\": \"Free 500 managed resources, then downgrade to limited features\",\n        \"global-run-tasks\": false,\n        \"identifier\": \"free_standard\",\n        \"is-current\": true,\n        \"is-free-tier\": true,\n        \"module-tests-generation\": false,\n        \"name\": \"Free\",\n        \"no-code-modules\": false,\n        \"plan\": null,\n        \"policy-enforcement\": true,\n        \"policy-limit\": 5,\n        \"policy-mandatory-enforcement-limit\": 1,\n        \"policy-set-limit\": 1,\n        \"private-networking\": true,\n        \"private-policy-agents\": false,\n        \"private-vcs\": false,\n        \"run-task-limit\": 1,\n        \"run-task-mandatory-enforcement-limit\": 1,\n        \"run-task-workspace-limit\": 10,\n        \"run-tasks\": true,\n        \"self-serve-billing\": true,\n        \"sentinel\": true,\n        \"sso\": true,\n        \"teams\": false,\n        \"user-limit\": null,\n        \"versioned-policy-set-limit\": 0\n      }\n    },\n    {\n      \"id\": \"fs-f3xYUkkXwY8ZGP9g\",\n      \"type\": \"feature-sets\",\n      \"attributes\": {\n        \"assessments\": false,\n        \"audit-logging\": false,\n        \"comparison-description\": \"\",\n        \"concurrency-override\": true,\n        \"cost-estimation\": true,\n     
   \"cost\": 0,\n        \"default-agents-ceiling\": 10,\n        \"default-runs-ceiling\": 10,\n        \"description\": \"Automated infrastructure provisioning at any scale. First 500 free managed resources included.\",\n        \"global-run-tasks\": false,\n        \"identifier\": \"standard\",\n        \"is-current\": true,\n        \"is-free-tier\": true,\n        \"module-tests-generation\": false,\n        \"name\": \"Standard\",\n        \"no-code-modules\": false,\n        \"plan\": null,\n        \"policy-enforcement\": true,\n        \"policy-limit\": null,\n        \"policy-mandatory-enforcement-limit\": null,\n        \"policy-set-limit\": null,\n        \"private-networking\": true,\n        \"private-policy-agents\": false,\n        \"private-vcs\": false,\n        \"run-task-limit\": null,\n        \"run-task-mandatory-enforcement-limit\": null,\n        \"run-task-workspace-limit\": null,\n        \"run-tasks\": true,\n        \"self-serve-billing\": true,\n        \"sentinel\": true,\n        \"sso\": true,\n        \"teams\": false,\n        \"user-limit\": null,\n        \"versioned-policy-set-limit\": null\n      }\n    },\n    {\n      \"id\": \"fs-JhVd6dwBSZ3THzHV\",\n      \"type\": \"feature-sets\",\n      \"attributes\": {\n        \"assessments\": true,\n        \"audit-logging\": true,\n        \"comparison-description\": \"\",\n        \"concurrency-override\": true,\n        \"cost-estimation\": true,\n        \"cost\": 0,\n        \"default-agents-ceiling\": 10,\n        \"default-runs-ceiling\": 10,\n        \"description\": \"Automated infrastructure provisioning and management at any scale\",\n        \"global-run-tasks\": true,\n        \"identifier\": \"plus\",\n        \"is-current\": true,\n        \"is-free-tier\": true,\n        \"module-tests-generation\": true,\n        \"name\": \"Plus\",\n        \"no-code-modules\": true,\n        \"plan\": null,\n        \"policy-enforcement\": true,\n        \"policy-limit\": null,\n   
     \"policy-mandatory-enforcement-limit\": null,\n        \"policy-set-limit\": null,\n        \"private-networking\": true,\n        \"private-policy-agents\": false,\n        \"private-vcs\": false,\n        \"run-task-limit\": null,\n        \"run-task-mandatory-enforcement-limit\": null,\n        \"run-task-workspace-limit\": null,\n        \"run-tasks\": true,\n        \"self-serve-billing\": true,\n        \"sentinel\": true,\n        \"sso\": true,\n        \"teams\": true,\n        \"user-limit\": null,\n        \"versioned-policy-set-limit\": null\n      }\n    }\n  ]\n}\n```","site":"terraform"}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title Organization Tags API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 Use the tags endpoint to manage an organization s workspace tags Assign list and delete tags using the HTTP API","answers":"---\npage_title: Organization Tags - API Docs - HCP Terraform\ndescription: >-\n  Use the `tags` endpoint to manage an organization's workspace tags. Assign, list, and delete tags using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Organization Tags API\n\nThis API returns the list of tags used in workspaces across the organization. Tags can be added to this pool via workspaces. 
Tags deleted here will be removed from all other workspaces. Tags can be added, applied, removed and deleted in bulk.\n\nTags are subject to the following rules:\n\n- Workspace tags or tags must be one or more characters, have a 255 character limit, and can include letters, numbers, colons, hyphens, and underscores.\n- You can create tags for a workspace using the user interface or the API. After you create a tag, you can assign it to other workspaces in the same organization.\n- You cannot create tags for a workspace using the CLI.\n- You cannot set tags at the project level, so there is no tag inheritance from projects to workspaces.\n\n## List Tags\n\n`GET \/organizations\/:organization_name\/tags`\n\n| Parameter            | Description                                    |\n| -------------------- | ---------------------------------------------- |\n| `:organization_name` | The name of the organization to list tags from |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter                       | Description                                                                              |\n| ------------------------------- | ---------------------------------------------------------------------------------------- |\n| `q`                             | **Optional.** A search query string.  Organization tags are searchable by name likeness. |\n| `filter[exclude][taggable][id]` | **Optional.** If specified, omits organization's related workspace's tags.               |\n| `page[number]`                  | **Optional.** If omitted, the endpoint will return the first page.                       |\n| `page[size]`                    | **Optional.** If omitted, the endpoint will return 20 organization tags per page.        
|\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/tags\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"tag-1\",\n      \"type\": \"tags\",\n      \"attributes\": {\n        \"name\": \"tag1\",\n        \"created-at\":  \"2022-03-09T06:04:39.585Z\",\n        \"instance-count\": 1\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      }\n    },\n    {\n      \"id\": \"tag-2\",\n      \"type\": \"tags\",\n      \"attributes\": {\n        \"name\": \"tag2\",\n        \"created-at\":  \"2022-03-09T06:04:39.585Z\",\n        \"instance-count\": 2\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      }\n    }\n  ]\n}\n```\n\n## Delete tags\n\nThis endpoint deletes one or more tags from an organization. The organization and tags must already\nexist. 
Tags deleted here will be removed from all other resources.\n\n`DELETE \/organizations\/:organization_name\/tags`\n\n| Parameter            | Description                                      |\n| -------------------- | ------------------------------------------------ |\n| `:organization_name` | The name of the organization to delete tags from |\n\n| Status  | Response                  | Reason(s)                                                      |\n| ------- | ------------------------- | -------------------------------------------------------------- |\n| [204][] | No Content                | Successfully removed tags from organization                    |\n| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action |\n\n### Request Body\n\nThis DELETE endpoint requires a JSON object with the following properties as a request payload.\n\nIt is important to note that `type` and `id` are required.\n\n| Key path      | Type   | Default | Description                  |\n| ------------- | ------ | ------- | ---------------------------- |\n| `data[].type` | string |         | Must be `\"tags\"`.            |\n| `data[].id`   | string |         | The id of the tag to remove. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"tags\",\n      \"id\": \"tag-Yfha4YpPievQ8wJw\"\n    }\n  ]\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/tags\n```\n\n### Sample Response\n\nNo response body.\n\nStatus code `204`.\n\n## Add workspaces to a tag\n\n`POST \/tags\/:tag_id\/relationships\/workspaces`\n\n| Parameter | Description                                          |\n| --------- | ---------------------------------------------------- |\n| `:tag_id` | The ID of the tag that workspaces should have added. |\n\n| Status  | Response                  | Reason(s)                                             |\n| ------- | ------------------------- | ----------------------------------------------------- |\n| [204][] | No Content                | Successfully added workspaces to tag                  |\n| [404][] | [JSON API error object][] | Tag not found, or user unauthorized to perform action |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\n| Key path      | Type   | Default | Description                     |\n| ------------- | ------ | ------- | ------------------------------- |\n| `data[].type` | string |         | Must be `\"workspaces\"`.         |\n| `data[].id`   | string |         | The id of the workspace to add. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/tags\/tag-2\/relationships\/workspaces\n```\n\n### Sample Payload\n\n```json\n{\n  \"data\": [\n      {\n          \"type\": \"workspaces\",\n          \"id\": \"ws-pmKTbUwH2VPiiTC4\"\n      }\n  ]\n}\n```\n\n### Sample Response\n\nNo response body.\n\nStatus code `204`.","site":"terraform"}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 Use the state versions endpoint to manage Terraform state versions List create show and roll back state versions using the HTTP API page title State Versions API Docs HCP Terraform","answers":"---\npage_title: State Versions - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/state-versions` endpoint to manage Terraform state versions. List, create, show, and roll back state versions using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# State Versions API\n\n## Attributes\n\nState version API objects represent an instance of Terraform state data, but do not directly contain the stored state. 
Instead, they contain information about the state, its properties, and its contents, and include one or more URLs from which the state can be downloaded.\n\nSome of the information returned in a state version API object might be **populated asynchronously** by HCP Terraform. This includes resources, modules, providers, and the [state version outputs](\/terraform\/cloud-docs\/api-docs\/state-version-outputs) associated with the state version. These values might not be immediately available after the state version is uploaded. The `resources-processed` property on the state version object indicates whether or not HCP Terraform has finished any necessary asynchronous processing. If you need to use these values, be sure to wait for `resources-processed` to become `true` before assuming that the values are in fact empty.\n\nAttribute                        | Description\n---------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n`billable-rum-count`             | Count of billable Resources Under Management (RUM). Only present for Organization members on RUM plans who have visibility to see billable RUM usage in the Usage page\n`hosted-json-state-download-url` | A URL from which you can download the state data in a [stable format](\/terraform\/internals\/json-format) appropriate for external integrations to consume. Only available if the state was created by Terraform 1.3+.\n`hosted-state-download-url`      | A URL from which you can download the raw state data, in the format used internally by Terraform.\n`hosted-json-state-upload-url`   | A URL to which you can upload state data in a [stable format](\/terraform\/internals\/json-format) appropriate for external integrations to consume. 
You can upload JSON state content once per state version.\n`hosted-state-upload-url`        | A URL to which you can upload state data in the format Terraform uses internally. You can upload state data once per state version.\n`modules`                        | Extracted information about the Terraform modules in this state data. Populated asynchronously.\n`providers`                      | Extracted information about the Terraform providers used for resources in this state data. Populated asynchronously.\n`intermediate`                   | A boolean flag that indicates the state version is a snapshot and not yet set as the current state version for a workspace. The last intermediate state version becomes the current state version when the workspace is unlocked. Not yet supported in Terraform Enterprise.\n`resources`                      | Extracted information about the resources in this state data. Populated asynchronously.\n`resources-processed`            | A Boolean flag indicating whether HCP Terraform has finished asynchronously extracting outputs, resources, and other information about this state data.\n`serial`                         | The serial number of this state instance, which increases every time Terraform creates new state in the workspace.\n`state-version`                  | The version of the internal state format used for this state. Different Terraform versions read and write different format versions, but it only changes infrequently.\n`status`                         | Indicates a state version's content upload [status](\/terraform\/cloud-docs\/api-docs\/state-versions#state-version-status). This status can be `pending`, `finalized`, or `discarded`.\n`terraform-version`              | The Terraform version that created this state. Populated asynchronously.\n`vcs-commit-sha`                 | The SHA of the configuration commit used in the Terraform run that produced this state. 
Only present if the workspace is connected to a VCS repository.\n`vcs-commit-url`                 | A link to the configuration commit used in the Terraform run that produced this state. Only present if the workspace is connected to a VCS repository.\n\n### State Version Status\n\nThe state version status is found in `data.attributes.status`, and you can reference the following list of possible statuses.\nA state version created through the API or CLI will only be listed in the UI if it has a `finalized` status.\n\n| State | Description |\n| --- | --- |\n| `pending` | Indicates that a state version has been created but the state data is not encoded within the request. Pending state versions do not contain state data and do not appear in the UI. You cannot unlock the workspace until the latest state version is finalized. |\n| `finalized` | Indicates that the state version has been successfully uploaded to HCP Terraform or that the state version was created with a valid `state` attribute. |\n| `discarded`  | The state version was discarded because it was superseded by a newer state version before it could be uploaded. |\n| `backing_data_soft_deleted`  | <EnterpriseAlert inline \/> The backing files associated with this state version are marked for garbage collection. Terraform permanently deletes backing files associated with this state version after a set number of days, but you can restore the backing data associated with it before it is permanently deleted. |\n| `backing_data_permanently_deleted`  | <EnterpriseAlert inline \/> The backing files associated with this state version have been permanently deleted and can no longer be restored. 
|\n\n## Create a State Version\n\n> **Hands-on:** Try the [Version Remote State with the HCP Terraform API](\/terraform\/tutorials\/cloud\/cloud-state-api) tutorial to download a remote state file and use the Terraform API to create a new state version.\n\n`POST \/workspaces\/:workspace_id\/state-versions`\n\n| Parameter       | Description                                                                                                                                                                                                       |\n|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `:workspace_id` | The workspace ID to create the new state version in. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. |\n\nCreates a state version and sets it as the current state version for the given workspace. The workspace must be locked by the user creating a state version. The workspace may be locked [with the API](\/terraform\/cloud-docs\/api-docs\/workspaces#lock-a-workspace) or [with the UI](\/terraform\/cloud-docs\/workspaces\/settings#locking). This is most useful for migrating existing state from Terraform Community edition into a new HCP Terraform workspace.\n\nCreating state versions requires permission to read and write state versions for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n!> **Warning:** Use caution when uploading state to workspaces that have already performed Terraform runs. 
Replacing state improperly can result in orphaned or duplicated infrastructure resources.\n\n-> **Note:** For Free Tier organizations, HCP Terraform always retains at least the last 100 states (across all workspaces) and at least the most recent state for every workspace. Additional states beyond the last 100 are retained for six months, and are then deleted.\n\n-> **Note:** You cannot access this endpoint with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                  | Reason                                                            |\n|---------|---------------------------|-------------------------------------------------------------------|\n| [201][] | [JSON API document][]     | Successfully created a state version.                             |\n| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action.      |\n| [409][] | [JSON API error object][] | Conflict; check the error object for more information.            |\n| [412][] | [JSON API error object][] | Precondition failed; check the error object for more information. |\n| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.).   
|\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                             | Type    | Default   | Description                                                                                                                                                                                                                                                                                |\n|--------------------------------------|---------|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `data.type`                          | string  |           | Must be `\"state-versions\"`.                                                                                                                                                                                                                                                                |\n| `data.attributes.serial`             | integer |           | The serial of the state version. Must match the serial value extracted from the raw state file.                                                                                                                                                                                            |\n| `data.attributes.md5`                | string  |           | An MD5 hash of the raw state version.                                                                                                                                                                                                                                                       
|\n| `data.attributes.state`              | string  | (nothing) | **Optional** Base64 encoded raw state file. If omitted, you must use the upload method below to complete the state version creation. The workspace may not be unlocked normally until the state version is uploaded.                                                                                                                                                                                                                                                              |\n| `data.attributes.lineage`            | string  | (nothing) | **Optional** Lineage of the state version. Should match the lineage extracted from the raw state file. Early versions of terraform did not have the concept of lineage, so this is an optional attribute.                                                                                  |\n| `data.attributes.json-state`         | string  | (nothing) | **Optional** Base64 encoded json state, as expressed by `terraform show -json`. See [JSON Output Format](\/terraform\/internals\/json-format) for more details.                                                                                                                                          |\n| `data.attributes.json-state-outputs` | string  | (nothing) | **Optional** Base64 encoded output values as represented by `terraform show -json` (the contents of the values\/outputs key). If provided, the workspace outputs populate immediately. If omitted, HCP Terraform populates the workspace outputs from the given state after a short time. |\n| `data.relationships.run.data.id`     | string  | (nothing) | **Optional** The ID of the run to associate with the state version.                                                                                                                                                                                                                        
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\":\"state-versions\",\n    \"attributes\": {\n      \"serial\": 1,\n      \"md5\": \"d41d8cd98f00b204e9800998ecf8427e\",\n      \"lineage\": \"871d1b4a-e579-fb7c-ffdb-f0c858a647a7\",\n      \"state\": \"...\",\n      \"json-state\": \"...\",\n      \"json-state-outputs\": \"...\"\n    },\n    \"relationships\": {\n      \"run\": {\n        \"data\": {\n          \"type\": \"runs\",\n          \"id\": \"run-bWSq4YeYpfrW4mx7\"\n        }\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-6fHMCom98SDXSQUv\/state-versions\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"id\": \"sv-DmoXecHePnNznaA4\",\n        \"type\": \"state-versions\",\n        \"attributes\": {\n            \"vcs-commit-sha\": null,\n            \"vcs-commit-url\": null,\n            \"created-at\": \"2018-07-12T20:32:01.490Z\",\n            \"hosted-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/f55b739b-ff03-4716-b436-726466b96dc4\",\n            \"hosted-json-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/4fde7951-93c0-4414-9a40-f3abc4bac490\",\n            \"hosted-state-upload-url\": null,\n            \"hosted-json-state-upload-url\": null,\n            \"status\": \"finalized\",\n            \"intermediate\": true,\n            \"serial\": 1\n        },\n        \"links\": {\n            \"self\": \"\/api\/v2\/state-versions\/sv-DmoXecHePnNznaA4\"\n        }\n    }\n}\n```\n\n## Upload State and JSON State\n\n You can upload state version content in the same request when creating a state version. 
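When state content is included directly in the create request, the `serial`, `md5`, and base64 `state` attributes must agree with the raw state file, per the request body table above. A minimal Python sketch of deriving them from raw state bytes (the helper name and run-ID handling are illustrative, not part of the API):

```python
import base64
import hashlib
import json

def build_state_version_payload(raw_state, run_id=None):
    """Derive create-state-version attributes from raw state bytes.

    `serial` and `lineage` are read out of the state file itself so they
    match the values HCP Terraform extracts, and `md5` is the hex MD5 of
    the raw bytes. `state` is the base64-encoded file content.
    """
    parsed = json.loads(raw_state)
    payload = {
        "data": {
            "type": "state-versions",
            "attributes": {
                "serial": parsed["serial"],
                # Lineage is optional; early state files may not have one.
                "lineage": parsed.get("lineage"),
                "md5": hashlib.md5(raw_state).hexdigest(),
                "state": base64.b64encode(raw_state).decode("ascii"),
            },
        }
    }
    if run_id is not None:
        # Optional run association, per data.relationships.run.data.id
        payload["data"]["relationships"] = {
            "run": {"data": {"type": "runs", "id": run_id}}
        }
    return payload
```

If you omit `state` from the attributes, the version is instead created with a `pending` status and must be finished with the separate content upload.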
However, we _strongly_ recommend that you upload content separately.\n\n`PUT https:\/\/archivist.terraform.io\/v1\/object\/<UNIQUE OBJECT ID>`\n\nHCP Terraform returns a `hosted-state-upload-url` or `hosted-json-state-upload-url` when you create a `state-version`. Once you upload state content, this URL is hidden on the resource and _no longer available_.\n\n### Sample Request\n\nIn the example below, `@filename` is the name of the Terraform state file you wish to upload.\n\n```shell\ncurl \\\n  --header \"Content-Type: application\/octet-stream\" \\\n  --request PUT \\\n  --data-binary @filename \\\n  https:\/\/archivist.terraform.io\/v1\/object\/4c44d964-eba7-4dd5-ad29-1ece7b99e8da\n```\n\n## List State Versions for a Workspace\n\n`GET \/state-versions`\n\nListing state versions requires permission to read state versions for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter                    | Description                                                                            |\n|------------------------------|----------------------------------------------------------------------------------------|\n| `filter[workspace][name]`    | **Required.** The name of one workspace to list versions for.                          |\n| `filter[organization][name]` | **Required.** The name of the organization that owns the desired workspace.            |\n| `filter[status]`             | **Optional.** Filter state versions by status: `pending`, `finalized`, or `discarded`. 
|\n| `page[number]`               | **Optional.** If omitted, the endpoint will return the first page.                     |\n| `page[size]`                 | **Optional.** If omitted, the endpoint will return 20 state versions per page.         |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  \"https:\/\/app.terraform.io\/api\/v2\/state-versions?filter%5Bworkspace%5D%5Bname%5D=my-workspace&filter%5Borganization%5D%5Bname%5D=my-organization\"\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": [\n        {\n            \"id\": \"sv-g4rqST72reoHMM5a\",\n            \"type\": \"state-versions\",\n            \"attributes\": {\n                \"created-at\": \"2021-06-08T01:22:03.794Z\",\n                \"size\": 940,\n                \"hosted-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n                \"hosted-state-upload-url\": null,\n                \"hosted-json-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n                \"hosted-json-state-upload-url\": null,\n                \"status\": \"finalized\",\n                \"intermediate\": false,\n                \"modules\": {\n                    \"root\": {\n                        \"null-resource\": 1,\n                        \"data.terraform-remote-state\": 1\n                    }\n                },\n                \"providers\": {\n                    \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\": {\n                        \"data.terraform-remote-state\": 1\n                    },\n                    \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\": {\n                        \"null-resource\": 1\n                    }\n                },\n                \"resources\": [\n                    {\n                        \"name\": \"other_username\",\n                        \"type\": 
\"data.terraform_remote_state\",\n                        \"count\": 1,\n                        \"module\": \"root\",\n                        \"provider\": \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\"\n                    },\n                    {\n                        \"name\": \"random\",\n                        \"type\": \"null_resource\",\n                        \"count\": 1,\n                        \"module\": \"root\",\n                        \"provider\": \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\"\n                    }\n                ],\n                \"resources-processed\": true,\n                \"serial\": 9,\n                \"state-version\": 4,\n                \"terraform-version\": \"0.15.4\",\n                \"vcs-commit-url\": \"https:\/\/gitlab.com\/my-organization\/terraform-test\/-\/commit\/abcdef12345\",\n                \"vcs-commit-sha\": \"abcdef12345\"\n            },\n            \"relationships\": {\n                \"run\": {\n                    \"data\": {\n                        \"id\": \"run-YfmFLWpgTv31VZsP\",\n                        \"type\": \"runs\"\n                    }\n                },\n                \"created-by\": {\n                    \"data\": {\n                        \"id\": \"user-onZs69ThPZjBK2wo\",\n                        \"type\": \"users\"\n                    },\n                    \"links\": {\n                        \"self\": \"\/api\/v2\/users\/user-onZs69ThPZjBK2wo\",\n                        \"related\": \"\/api\/v2\/runs\/run-YfmFLWpgTv31VZsP\/created-by\"\n                    }\n                },\n                \"workspace\": {\n                    \"data\": {\n                        \"id\": \"ws-noZcaGXsac6aZSJR\",\n                        \"type\": \"workspaces\"\n                    }\n                },\n                \"outputs\": {\n                    \"data\": [\n                        {\n                            \"id\": 
\"wsout-V22qbeM92xb5mw9n\",\n                            \"type\": \"state-version-outputs\"\n                        },\n                        {\n                            \"id\": \"wsout-ymkuRnrNFeU5wGpV\",\n                            \"type\": \"state-version-outputs\"\n                        },\n                        {\n                            \"id\": \"wsout-v82BjkZnFEcscipg\",\n                            \"type\": \"state-version-outputs\"\n                        }\n                    ]\n                }\n            },\n            \"links\": {\n                \"self\": \"\/api\/v2\/state-versions\/sv-g4rqST72reoHMM5a\"\n            }\n        },\n        {\n            \"id\": \"sv-QYKf6GvNv75ZPTBr\",\n            \"type\": \"state-versions\",\n            \"attributes\": {\n                \"created-at\": \"2021-06-01T21:40:25.941Z\",\n                \"size\": 819,\n                \"hosted-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n                \"hosted-state-upload-url\": null,\n                \"hosted-json-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n                \"hosted-json-state-upload-url\": null,\n                \"status\": \"finalized\",\n                \"intermediate\": false,\n                \"modules\": {\n                    \"root\": {\n                        \"data.terraform-remote-state\": 1\n                    }\n                },\n                \"providers\": {\n                    \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\": {\n                        \"data.terraform-remote-state\": 1\n                    }\n                },\n                \"resources\": [\n                    {\n                        \"name\": \"other_username\",\n                        \"type\": \"data.terraform_remote_state\",\n                        \"count\": 1,\n                        \"module\": \"root\",\n                        \"provider\": 
\"provider[\\\"terraform.io\/builtin\/terraform\\\"]\"\n                    }\n                ],\n                \"resources-processed\": true,\n                \"serial\": 8,\n                \"state-version\": 4,\n                \"terraform-version\": \"0.15.4\",\n                \"vcs-commit-url\": \"https:\/\/gitlab.com\/my-organization\/terraform-test\/-\/commit\/12345abcdef\",\n                \"vcs-commit-sha\": \"12345abcdef\"\n            },\n            \"relationships\": {\n                \"run\": {\n                    \"data\": {\n                        \"id\": \"run-cVtxks6R8wsjCZMD\",\n                        \"type\": \"runs\"\n                    }\n                },\n                \"created-by\": {\n                    \"data\": {\n                        \"id\": \"user-onZs69ThPZjBK2wo\",\n                        \"type\": \"users\"\n                    },\n                    \"links\": {\n                        \"self\": \"\/api\/v2\/users\/user-onZs69ThPZjBK2wo\",\n                        \"related\": \"\/api\/v2\/runs\/run-YfmFLWpgTv31VZsP\/created-by\"\n                    }\n                },\n                \"workspace\": {\n                    \"data\": {\n                        \"id\": \"ws-noZcaGXsac6aZSJR\",\n                        \"type\": \"workspaces\"\n                    }\n                },\n                \"outputs\": {\n                    \"data\": [\n                        {\n                            \"id\": \"wsout-MmqMhmht6jFmLRvh\",\n                            \"type\": \"state-version-outputs\"\n                        },\n                        {\n                            \"id\": \"wsout-Kuo9TCHg3oDLDQqa\",\n                            \"type\": \"state-version-outputs\"\n                        }\n                    ]\n                }\n            },\n            \"links\": {\n                \"self\": \"\/api\/v2\/state-versions\/sv-QYKf6GvNv75ZPTBr\"\n            }\n        }\n    ],\n    
\"links\": {\n        \"self\": \"https:\/\/app.terraform.io\/api\/v2\/state-versions?filter%5Borganization%5D%5Bname%5D=hashicorp&filter%5Bworkspace%5D%5Bname%5D=my-workspace&page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n        \"first\": \"https:\/\/app.terraform.io\/api\/v2\/state-versions?filter%5Borganization%5D%5Bname%5D=hashicorp&filter%5Bworkspace%5D%5Bname%5D=my-workspace&page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n        \"prev\": null,\n        \"next\": null,\n        \"last\": \"https:\/\/app.terraform.io\/api\/v2\/state-versions?filter%5Borganization%5D%5Bname%5D=hashicorp&filter%5Bworkspace%5D%5Bname%5D=my-workspace&page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n    },\n    \"meta\": {\n        \"pagination\": {\n            \"current-page\": 1,\n            \"page-size\": 20,\n            \"prev-page\": null,\n            \"next-page\": null,\n            \"total-pages\": 1,\n            \"total-count\": 10\n        }\n    }\n}\n```\n\n## Fetch the Current State Version for a Workspace\n\n`GET \/workspaces\/:workspace_id\/current-state-version`\n\n| Parameter       | Description                                                                                                                                                                                                                          |\n|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `:workspace_id` | The ID for the workspace whose current state version you want to fetch. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. |\n\nFetches the current state version for the given workspace. 
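A hedged sketch of consuming the response, assuming only the attribute names shown in the sample responses in this document (prefer the stable JSON download URL when present, and read asynchronously populated fields only once `resources-processed` is `true`):

```python
def state_download_url(response):
    """Choose a download URL from a state version API response.

    Prefers `hosted-json-state-download-url` (the stable format, only
    present for state created by Terraform 1.3+) and falls back to the
    raw internal-format URL.
    """
    attrs = response["data"]["attributes"]
    return (attrs.get("hosted-json-state-download-url")
            or attrs["hosted-state-download-url"])

def resources_ready(response):
    """True once HCP Terraform has finished asynchronous extraction
    of resources, modules, providers, and outputs."""
    return bool(response["data"]["attributes"].get("resources-processed"))
```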
This state version\nwill be the input state when running terraform operations.\n\nViewing state versions requires permission to read state versions for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Status  | Response                  | Reason                                                                                                        |\n|---------|---------------------------|---------------------------------------------------------------------------------------------------------------|\n| [200][] | [JSON API document][]     | Successfully returned current state version for the given workspace.                                          |\n| [404][] | [JSON API error object][] | Workspace not found, workspace does not have a current state version, or user unauthorized to perform action. |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-6fHMCom98SDXSQUv\/current-state-version\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"id\": \"sv-g4rqST72reoHMM5a\",\n        \"type\": \"state-versions\",\n        \"attributes\": {\n            \"billable-rum-count\": 0,\n            \"created-at\": \"2021-06-08T01:22:03.794Z\",\n            \"size\": 940,\n            \"hosted-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-state-upload-url\": null,\n            \"hosted-json-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-json-state-upload-url\": null,\n            \"status\": \"finalized\",\n            \"intermediate\": false,\n            \"modules\": {\n                \"root\": {\n                    \"null-resource\": 1,\n                    
\"data.terraform-remote-state\": 1\n                }\n            },\n            \"providers\": {\n                \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\": {\n                    \"data.terraform-remote-state\": 1\n                },\n                \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\": {\n                    \"null-resource\": 1\n                }\n            },\n            \"resources\": [\n                {\n                    \"name\": \"other_username\",\n                    \"type\": \"data.terraform_remote_state\",\n                    \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\"\n                },\n                {\n                    \"name\": \"random\",\n                    \"type\": \"null_resource\",\n                    \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\"\n                }\n            ],\n            \"resources-processed\": true,\n            \"serial\": 9,\n            \"state-version\": 4,\n            \"terraform-version\": \"0.15.4\",\n            \"vcs-commit-url\": \"https:\/\/gitlab.com\/my-organization\/terraform-test\/-\/commit\/abcdef12345\",\n            \"vcs-commit-sha\": \"abcdef12345\"\n        },\n        \"relationships\": {\n            \"run\": {\n                \"data\": {\n                    \"id\": \"run-YfmFLWpgTv31VZsP\",\n                    \"type\": \"runs\"\n                }\n            },\n            \"created-by\": {\n                \"data\": {\n                    \"id\": \"user-onZs69ThPZjBK2wo\",\n                    \"type\": \"users\"\n                },\n                \"links\": {\n                    \"self\": \"\/api\/v2\/users\/user-onZs69ThPZjBK2wo\",\n                    \"related\": \"\/api\/v2\/runs\/run-YfmFLWpgTv31VZsP\/created-by\"\n 
               }\n            },\n            \"workspace\": {\n                \"data\": {\n                    \"id\": \"ws-noZcaGXsac6aZSJR\",\n                    \"type\": \"workspaces\"\n                }\n            },\n            \"outputs\": {\n                \"data\": [\n                    {\n                        \"id\": \"wsout-V22qbeM92xb5mw9n\",\n                        \"type\": \"state-version-outputs\"\n                    },\n                    {\n                        \"id\": \"wsout-ymkuRnrNFeU5wGpV\",\n                        \"type\": \"state-version-outputs\"\n                    },\n                    {\n                        \"id\": \"wsout-v82BjkZnFEcscipg\",\n                        \"type\": \"state-version-outputs\"\n                    }\n                ]\n            }\n        },\n        \"links\": {\n            \"self\": \"\/api\/v2\/state-versions\/sv-g4rqST72reoHMM5a\"\n        }\n    }\n}\n```\n\n## Show a State Version\n\n`GET \/state-versions\/:state_version_id`\n\nViewing state versions requires permission to read state versions for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Parameter           | Description                          |\n|---------------------|--------------------------------------|\n| `:state_version_id` | The ID of the desired state version. |\n\n| Status  | Response                  | Reason                                                                                                        |\n|---------|---------------------------|---------------------------------------------------------------------------------------------------------------|\n| [200][] | [JSON API document][]     | Successfully returned current state version for the given workspace.                                          
|\n| [404][] | [JSON API error object][] | State version not found, or user unauthorized to perform action.                                              |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/state-versions\/sv-SDboVZC8TCxXEneJ\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"id\": \"sv-g4rqST72reoHMM5a\",\n        \"type\": \"state-versions\",\n        \"attributes\": {\n            \"created-at\": \"2021-06-08T01:22:03.794Z\",\n            \"size\": 940,\n            \"hosted-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-state-upload-url\": null,\n            \"hosted-json-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-json-state-upload-url\": null,\n            \"status\": \"finalized\",\n            \"intermediate\": false,\n            \"modules\": {\n                \"root\": {\n                    \"null-resource\": 1,\n                    \"data.terraform-remote-state\": 1\n                }\n            },\n            \"providers\": {\n                \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\": {\n                    \"data.terraform-remote-state\": 1\n                },\n                \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\": {\n                    \"null-resource\": 1\n                }\n            },\n            \"resources\": [\n                {\n                    \"name\": \"other_username\",\n                    \"type\": \"data.terraform_remote_state\",\n                    \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\"\n                },\n                {\n                    \"name\": \"random\",\n                    \"type\": 
\"null_resource\",\n                    \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\"\n                }\n            ],\n            \"resources-processed\": true,\n            \"serial\": 9,\n            \"state-version\": 4,\n            \"terraform-version\": \"0.15.4\",\n            \"vcs-commit-url\": \"https:\/\/gitlab.com\/my-organization\/terraform-test\/-\/commit\/abcdef12345\",\n            \"vcs-commit-sha\": \"abcdef12345\"\n        },\n        \"relationships\": {\n            \"run\": {\n                \"data\": {\n                    \"id\": \"run-YfmFLWpgTv31VZsP\",\n                    \"type\": \"runs\"\n                }\n            },\n            \"created-by\": {\n                \"data\": {\n                    \"id\": \"user-onZs69ThPZjBK2wo\",\n                    \"type\": \"users\"\n                },\n                \"links\": {\n                    \"self\": \"\/api\/v2\/users\/user-onZs69ThPZjBK2wo\",\n                    \"related\": \"\/api\/v2\/runs\/run-YfmFLWpgTv31VZsP\/created-by\"\n                }\n            },\n            \"workspace\": {\n                \"data\": {\n                    \"id\": \"ws-noZcaGXsac6aZSJR\",\n                    \"type\": \"workspaces\"\n                }\n            },\n            \"outputs\": {\n                \"data\": [\n                    {\n                        \"id\": \"wsout-V22qbeM92xb5mw9n\",\n                        \"type\": \"state-version-outputs\"\n                    },\n                    {\n                        \"id\": \"wsout-ymkuRnrNFeU5wGpV\",\n                        \"type\": \"state-version-outputs\"\n                    },\n                    {\n                        \"id\": \"wsout-v82BjkZnFEcscipg\",\n                        \"type\": \"state-version-outputs\"\n                    }\n                ]\n            }\n        },\n        
\"links\": {\n            \"self\": \"\/api\/v2\/state-versions\/sv-g4rqST72reoHMM5a\"\n        }\n    }\n}\n```\n\n## Rollback to a Previous State Version\n\n`PATCH \/workspaces\/:workspace_id\/state-versions`\n\n| Parameter       | Description                                                                                                                                                                                                       |\n|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `:workspace_id` | The workspace ID to create the new state version in. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. |\n\nCreates a state version by duplicating the specified state version and sets it as the current state version for the given workspace. The workspace must be locked by the user creating a state version. The workspace may be locked [with the API](\/terraform\/cloud-docs\/api-docs\/workspaces#lock-a-workspace) or [with the UI](\/terraform\/cloud-docs\/workspaces\/settings#locking). This is most useful for rolling back to a known-good state when an operation such as a Terraform upgrade does not go as planned.\n\nCreating state versions requires permission to read and write state versions for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n!> **Warning:** Use caution when rolling back to a previous state. 
Replacing state improperly can result in orphaned or duplicated infrastructure resources.\n\n-> **Note:** You cannot access this endpoint with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                  | Reason                                                          |\n|---------|---------------------------|-----------------------------------------------------------------|\n| [201][] | [JSON API document][]     | Successfully rolled back.                                       |\n| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action.    |\n| [409][] | [JSON API error object][] | Conflict; check the error object for more information.          |\n| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.). |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                            | Type   | Default | Description                                                    |\n|-----------------------------------------------------|--------|---------|----------------------------------------------------------------|\n| `data.type`                                         | string |         | Must be `\"state-versions\"`.                                    |\n| `data.relationships.rollback-state-version.data.id` | string |         | The ID of the state version to use for the rollback operation. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\":\"state-versions\",\n    \"relationships\": {\n      \"rollback-state-version\": {\n        \"data\": {\n          \"type\": \"state-versions\",\n          \"id\": \"sv-bWfq4Y1YpRKW4mx7\"\n        }\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-6fHMCom98SDXSQUv\/state-versions\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"id\": \"sv-DmoXecHePnNznaA4\",\n        \"type\": \"state-versions\",\n        \"attributes\": {\n            \"created-at\": \"2022-11-22T01:22:03.794Z\",\n            \"size\": 940,\n            \"hosted-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-state-upload-url\": null,\n            \"hosted-json-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-json-state-upload-url\": null,\n            \"modules\": {\n                \"root\": {\n                    \"null-resource\": 1,\n                    \"data.terraform-remote-state\": 1\n                }\n            },\n            \"providers\": {\n                \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\": {\n                    \"data.terraform-remote-state\": 1\n                },\n                \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\": {\n                    \"null-resource\": 1\n                }\n            },\n            \"resources\": [\n                {\n                    \"name\": \"other_username\",\n                    \"type\": \"data.terraform_remote_state\",\n                    \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": 
\"provider[\\\"terraform.io\/builtin\/terraform\\\"]\"\n                },\n                {\n                    \"name\": \"random\",\n                    \"type\": \"null_resource\",\n                    \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\"\n                }\n            ],\n            \"resources-processed\": true,\n            \"serial\": 9,\n            \"state-version\": 4,\n            \"terraform-version\": \"1.3.5\"\n        },\n        \"relationships\": {\n            \"rollback-state-version\": {\n                \"data\": {\n                    \"id\": \"sv-YfmFLgTv31VZsP\",\n                    \"type\": \"state-versions\"\n                }\n            }\n        },\n        \"links\": {\n            \"self\": \"\/api\/v2\/state-versions\/sv-DmoXecHePnNznaA4\"\n        }\n    }\n}\n```\n\n## Mark a State Version for Garbage Collection\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href=\"https:\/\/developer.hashicorp.com\/terraform\/enterprise\">Learn more about Terraform Enterprise<\/a>.\n<\/EnterpriseAlert>\n\n`POST \/api\/v2\/state-versions\/:state_version_id\/actions\/soft_delete_backing_data`\n\nThis endpoint directs Terraform Enterprise to _soft delete_ the backing files associated with this state version. Soft deletion marks the state version for garbage collection. Terraform permanently deletes state versions after a set number of days unless the state version is restored. Once a state version is soft deleted, any attempts to read the state version will fail. Refer to [State Version Status](#state-version-status) for information about all data states.\n\nThis endpoint can only soft delete state versions that are in a [`finalized` state](#state-version-status) and are not the current state version. 
Otherwise, calling this endpoint results in an error.\n\nYou must have organization owner permissions to soft delete state versions. Refer to [Permissions](\/terraform\/enterprise\/users-teams-organizations\/permissions) for additional information about permissions.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Parameter           | Description                          |\n|---------------------|--------------------------------------|\n| `:state_version_id` | The ID of the state version to mark for garbage collection. |\n\n| Status  | Response                  | Reason                                                                                                        |\n|---------|---------------------------|---------------------------------------------------------------------------------------------------------------|\n| [200][] | [JSON API document][]     | Terraform successfully marked the data for garbage collection.                                                |\n| [400][] | [JSON API error object][] | Terraform failed to transition the state to `backing_data_soft_deleted`.                                      |\n| [404][] | [JSON API error object][] | Terraform did not find the state version or the user is not authorized to modify the state version.            
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data '{\"data\": {\"attributes\": {\"delete-older-than-n-days\": 23}}}' \\\n  https:\/\/app.terraform.io\/api\/v2\/state-versions\/sv-ntv3HbhJqvFzamy7\/actions\/soft_delete_backing_data\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"id\": \"sv-g4rqST72reoHMM5a\",\n        \"type\": \"state-versions\",\n        \"attributes\": {\n            \"created-at\": \"2021-06-08T01:22:03.794Z\",\n            \"size\": 940,\n            \"hosted-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-state-upload-url\": null,\n            \"hosted-json-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-json-state-upload-url\": null,\n            \"status\": \"backing_data_soft_deleted\",\n            \"intermediate\": false,\n            \"delete-older-than-n-days\": 23,\n            \"modules\": {\n                \"root\": {\n                    \"null-resource\": 1,\n                    \"data.terraform-remote-state\": 1\n                }\n            },\n            \"providers\": {\n                \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\": {\n                    \"data.terraform-remote-state\": 1\n                },\n                \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\": {\n                    \"null-resource\": 1\n                }\n            },\n            \"resources\": [\n                {\n                    \"name\": \"other_username\",\n                    \"type\": \"data.terraform_remote_state\",\n                    \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\"\n                },\n                {\n                    \"name\": 
\"random\",\n                    \"type\": \"null_resource\",\n                    \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\"\n                }\n            ],\n            \"resources-processed\": true,\n            \"serial\": 9,\n            \"state-version\": 4,\n            \"terraform-version\": \"0.15.4\",\n            \"vcs-commit-url\": \"https:\/\/gitlab.com\/my-organization\/terraform-test\/-\/commit\/abcdef12345\",\n            \"vcs-commit-sha\": \"abcdef12345\"\n        },\n        \"relationships\": {\n            \"run\": {\n                \"data\": {\n                    \"id\": \"run-YfmFLWpgTv31VZsP\",\n                    \"type\": \"runs\"\n                }\n            },\n            \"created-by\": {\n                \"data\": {\n                    \"id\": \"user-onZs69ThPZjBK2wo\",\n                    \"type\": \"users\"\n                },\n                \"links\": {\n                    \"self\": \"\/api\/v2\/users\/user-onZs69ThPZjBK2wo\",\n                    \"related\": \"\/api\/v2\/runs\/run-YfmFLWpgTv31VZsP\/created-by\"\n                }\n            },\n            \"workspace\": {\n                \"data\": {\n                    \"id\": \"ws-noZcaGXsac6aZSJR\",\n                    \"type\": \"workspaces\"\n                }\n            },\n            \"outputs\": {\n                \"data\": [\n                    {\n                        \"id\": \"wsout-V22qbeM92xb5mw9n\",\n                        \"type\": \"state-version-outputs\"\n                    },\n                    {\n                        \"id\": \"wsout-ymkuRnrNFeU5wGpV\",\n                        \"type\": \"state-version-outputs\"\n                    },\n                    {\n                        \"id\": \"wsout-v82BjkZnFEcscipg\",\n                        \"type\": \"state-version-outputs\"\n                    }\n                ]\n 
           }\n        },\n        \"links\": {\n            \"self\": \"\/api\/v2\/state-versions\/sv-g4rqST72reoHMM5a\"\n        }\n    }\n}\n```\n\n## Restore a State Version Marked for Garbage Collection\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href=\"https:\/\/developer.hashicorp.com\/terraform\/enterprise\">Learn more about Terraform Enterprise<\/a>.\n<\/EnterpriseAlert>\n\n`POST \/api\/v2\/state-versions\/:state_version_id\/actions\/restore_backing_data`\n\nThis endpoint directs Terraform Enterprise to restore backing files associated with this state version. This endpoint can only restore state versions that are not in a [`backing_data_permanently_deleted` state](#state-version-status); otherwise, calling this endpoint results in an error. Terraform restores applicable state versions back to their `finalized` state.\n\nYou must have organization owner permissions to restore state versions. Refer to [Permissions](\/terraform\/enterprise\/users-teams-organizations\/permissions) for additional information about permissions.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Parameter           | Description                          |\n|---------------------|--------------------------------------|\n| `:state_version_id` | The ID of the state version to restore. |\n\n| Status  | Response                  | Reason                                                                                                        |\n|---------|---------------------------|---------------------------------------------------------------------------------------------------------------|\n| [200][] | [JSON API document][]     | Terraform successfully initiated the restore process.                                                         |\n| [400][] | [JSON API error object][] | Terraform failed to transition the state to `finalized`.                                               
            |\n| [404][] | [JSON API error object][] | Terraform did not find the state version or the user is not authorized to modify the state version.                                           |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/state-versions\/sv-ntv3HbhJqvFzamy7\/actions\/restore_backing_data\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"id\": \"sv-g4rqST72reoHMM5a\",\n        \"type\": \"state-versions\",\n        \"attributes\": {\n            \"created-at\": \"2021-06-08T01:22:03.794Z\",\n            \"size\": 940,\n            \"hosted-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-state-upload-url\": null,\n            \"hosted-json-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-json-state-upload-url\": null,\n            \"status\": \"uploaded\",\n            \"intermediate\": false,\n            \"modules\": {\n                \"root\": {\n                    \"null-resource\": 1,\n                    \"data.terraform-remote-state\": 1\n                }\n            },\n            \"providers\": {\n                \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\": {\n                    \"data.terraform-remote-state\": 1\n                },\n                \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\": {\n                    \"null-resource\": 1\n                }\n            },\n            \"resources\": [\n                {\n                    \"name\": \"other_username\",\n                    \"type\": \"data.terraform_remote_state\",\n                    \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\"\n                },\n     
           {\n                    \"name\": \"random\",\n                    \"type\": \"null_resource\",\n                    \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\"\n                }\n            ],\n            \"resources-processed\": true,\n            \"serial\": 9,\n            \"state-version\": 4,\n            \"terraform-version\": \"0.15.4\",\n            \"vcs-commit-url\": \"https:\/\/gitlab.com\/my-organization\/terraform-test\/-\/commit\/abcdef12345\",\n            \"vcs-commit-sha\": \"abcdef12345\"\n        },\n        \"relationships\": {\n            \"run\": {\n                \"data\": {\n                    \"id\": \"run-YfmFLWpgTv31VZsP\",\n                    \"type\": \"runs\"\n                }\n            },\n            \"created-by\": {\n                \"data\": {\n                    \"id\": \"user-onZs69ThPZjBK2wo\",\n                    \"type\": \"users\"\n                },\n                \"links\": {\n                    \"self\": \"\/api\/v2\/users\/user-onZs69ThPZjBK2wo\",\n                    \"related\": \"\/api\/v2\/runs\/run-YfmFLWpgTv31VZsP\/created-by\"\n                }\n            },\n            \"workspace\": {\n                \"data\": {\n                    \"id\": \"ws-noZcaGXsac6aZSJR\",\n                    \"type\": \"workspaces\"\n                }\n            },\n            \"outputs\": {\n                \"data\": [\n                    {\n                        \"id\": \"wsout-V22qbeM92xb5mw9n\",\n                        \"type\": \"state-version-outputs\"\n                    },\n                    {\n                        \"id\": \"wsout-ymkuRnrNFeU5wGpV\",\n                        \"type\": \"state-version-outputs\"\n                    },\n                    {\n                        \"id\": \"wsout-v82BjkZnFEcscipg\",\n                        \"type\": 
\"state-version-outputs\"\n                    }\n                ]\n            }\n        },\n        \"links\": {\n            \"self\": \"\/api\/v2\/state-versions\/sv-g4rqST72reoHMM5a\"\n        }\n    }\n}\n```\n\n## Permanently Delete a State Version\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href=\"https:\/\/developer.hashicorp.com\/terraform\/enterprise\">Learn more about Terraform Enterprise<\/a>.\n<\/EnterpriseAlert>\n\n`POST \/api\/v2\/state-versions\/:state_version_id\/actions\/permanently_delete_backing_data`\n\nThis endpoint directs Terraform Enterprise to permanently delete backing files associated with this state version. This endpoint can only permanently delete state versions that are in a [`backing_data_soft_deleted` state](#state-version-status) and are not the current state version. Otherwise, calling this endpoint results in an error.\n\nYou must have organization owner permissions to permanently delete state versions. Refer to [Permissions](\/terraform\/enterprise\/users-teams-organizations\/permissions) for additional information about permissions.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Parameter           | Description                          |\n|---------------------|--------------------------------------|\n| `:state_version_id` | The ID of the state version to permanently delete. |\n\n| Status  | Response                  | Reason                                                                                                        |\n|---------|---------------------------|---------------------------------------------------------------------------------------------------------------|\n| [200][] | [JSON API document][]     | Terraform deleted the data permanently.                                                        
|\n| [400][] | [JSON API error object][] | Terraform failed to transition the state to `backing_data_permanently_deleted`.                                   |\n| [404][] | [JSON API error object][] | Terraform did not find the state version or the user is not authorized to modify the state version data.                                           |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/state-versions\/sv-ntv3HbhJqvFzamy7\/actions\/permanently_delete_backing_data\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"id\": \"sv-g4rqST72reoHMM5a\",\n        \"type\": \"state-versions\",\n        \"attributes\": {\n            \"created-at\": \"2021-06-08T01:22:03.794Z\",\n            \"size\": 940,\n            \"hosted-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-state-upload-url\": null,\n            \"hosted-json-state-download-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/...\",\n            \"hosted-json-state-upload-url\": null,\n            \"status\": \"backing_data_permanently_deleted\",\n            \"intermediate\": false,\n            \"modules\": {\n                \"root\": {\n                    \"null-resource\": 1,\n                    \"data.terraform-remote-state\": 1\n                }\n            },\n            \"providers\": {\n                \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\": {\n                    \"data.terraform-remote-state\": 1\n                },\n                \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\": {\n                    \"null-resource\": 1\n                }\n            },\n            \"resources\": [\n                {\n                    \"name\": \"other_username\",\n                    \"type\": \"data.terraform_remote_state\",\n        
            \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": \"provider[\\\"terraform.io\/builtin\/terraform\\\"]\"\n                },\n                {\n                    \"name\": \"random\",\n                    \"type\": \"null_resource\",\n                    \"count\": 1,\n                    \"module\": \"root\",\n                    \"provider\": \"provider[\\\"registry.terraform.io\/hashicorp\/null\\\"]\"\n                }\n            ],\n            \"resources-processed\": true,\n            \"serial\": 9,\n            \"state-version\": 4,\n            \"terraform-version\": \"0.15.4\",\n            \"vcs-commit-url\": \"https:\/\/gitlab.com\/my-organization\/terraform-test\/-\/commit\/abcdef12345\",\n            \"vcs-commit-sha\": \"abcdef12345\"\n        },\n        \"relationships\": {\n            \"run\": {\n                \"data\": {\n                    \"id\": \"run-YfmFLWpgTv31VZsP\",\n                    \"type\": \"runs\"\n                }\n            },\n            \"created-by\": {\n                \"data\": {\n                    \"id\": \"user-onZs69ThPZjBK2wo\",\n                    \"type\": \"users\"\n                },\n                \"links\": {\n                    \"self\": \"\/api\/v2\/users\/user-onZs69ThPZjBK2wo\",\n                    \"related\": \"\/api\/v2\/runs\/run-YfmFLWpgTv31VZsP\/created-by\"\n                }\n            },\n            \"workspace\": {\n                \"data\": {\n                    \"id\": \"ws-noZcaGXsac6aZSJR\",\n                    \"type\": \"workspaces\"\n                }\n            },\n            \"outputs\": {\n                \"data\": [\n                    {\n                        \"id\": \"wsout-V22qbeM92xb5mw9n\",\n                        \"type\": \"state-version-outputs\"\n                    },\n                    {\n                        \"id\": \"wsout-ymkuRnrNFeU5wGpV\",\n                        \"type\": 
\"state-version-outputs\"\n                    }\n                ]\n            }\n        },\n        \"links\": {\n            \"self\": \"\/api\/v2\/state-versions\/sv-g4rqST72reoHMM5a\"\n        }\n    }\n}\n```\n\n## List State Version Outputs\n\nThe output values from a state version are also available via the API. For details, see the [state version outputs documentation](\/terraform\/cloud-docs\/api-docs\/state-version-outputs#list-state-version-outputs).\n\n### Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\n* `created_by` - The user that created the state version. For state versions created via a run executed by HCP Terraform, this is an internal user account.\n* `run` - The run that created the state version, if applicable.\n* `run.created_by` - The user that manually triggered the run, if applicable.\n* `run.configuration_version` - The configuration version used in the run.\n* `outputs` - The parsed outputs for this state version.
the workspace   state version                     The version of the internal state format used for this state  Different Terraform versions read and write different format versions  but it only changes infrequently   status                            Indicates a state version s content upload  status   terraform cloud docs api docs state versions state version status   This status can be  pending    finalized  or  discarded    terraform version                 The Terraform version that created this state  Populated asynchronously   vcs commit sha                    The SHA of the configuration commit used in the Terraform run that produced this state  Only present if the workspace is connected to a VCS repository   vcs commit url                    A link to the configuration commit used in the Terraform run that produced this state  Only present if the workspace is connected to a VCS repository       State Version Status The state version status is found in  data attributes status   and you can reference the following list of possible statuses  A state version created through the API or CLI will only be listed in the UI if it is has a  finalized  status     State   Description                    pending    Indicates that a state version has been created but the state data is not encoded within the request  Pending state versions do not contain state data and do not appear in the UI  You cannot unlock the workspace until the latest state version is finalized       finalized    Indicates that the state version has been successfully uploaded to HCP Terraform or that the state version was created with a valid  state  attribute       discarded     The state version was discarded because it was superseded by a newer state version before it could be uploaded       backing data soft deleted      EnterpriseAlert inline    The backing files associated with this state version are marked for garbage collection  Terraform permanently deletes backing files associated with 
this state version after a set number of days  but you can restore the backing data associated with it before it is permanently deleted       backing data permanently deleted      EnterpriseAlert inline    The backing files associated with this state version have been permanently deleted and can no longer be restored        Create a State Version      Hands on    Try the  Version Remote State with the HCP Terraform API   terraform tutorials cloud cloud state api  tutorial to download a remote state file and use the Terraform API to create a new state version    POST  workspaces  workspace id state versions     Parameter         Description                                                                                                                                                                                                                                                                                                                                                                                                                                                     workspace id    The workspace ID to create the new state version in  Obtain this from the  workspace settings   terraform cloud docs workspaces settings  or the  Show Workspace   terraform cloud docs api docs workspaces show workspace  endpoint     Creates a state version and sets it as the current state version for the given workspace  The workspace must be locked by the user creating a state version  The workspace may be locked  with the API   terraform cloud docs api docs workspaces lock a workspace  or  with the UI   terraform cloud docs workspaces settings locking   This is most useful for migrating existing state from Terraform Community edition into a new HCP Terraform workspace   Creating state versions requires permission to read and write state versions for the workspace    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    
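Because the workspace must be locked by the same user that creates the state version, a client typically issues the lock call first. The sketch below builds that request as plain data; the `/actions/lock` path follows the linked workspace-locking documentation and the workspace ID is a placeholder, so verify both against the current API reference before relying on this.

```python
# Sketch: describe the POST request that locks a workspace before a state
# version is created. The endpoint path is an assumption taken from the
# linked workspace-locking docs; the workspace ID and token are placeholders.
BASE = "https://app.terraform.io/api/v2"

def lock_workspace_request(workspace_id: str, token: str) -> dict:
    """Return the method, URL, and headers for the workspace lock call."""
    return {
        "method": "POST",
        "url": f"{BASE}/workspaces/{workspace_id}/actions/lock",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/vnd.api+json",
        },
    }

req = lock_workspace_request("ws-6fHMCom98SDXSQUv", "example-token")
print(req["url"])
```

You would then send this request with your HTTP client of choice, create the state version while the workspace is locked, and unlock afterwards.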
~> **Warning:** Use caution when uploading state to workspaces that have already performed Terraform runs. Replacing state improperly can result in orphaned or duplicated infrastructure resources.

-> **Note:** For Free Tier organizations, HCP Terraform always retains at least the last 100 states (across all workspaces) and at least the most recent state for every workspace. Additional states beyond the last 100 are retained for six months, and are then deleted.

-> **Note:** You cannot access this endpoint with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).

| Status  | Response                  | Reason |
|---------|---------------------------|--------|
| [201][] | [JSON API document][]     | Successfully created a state version. |
| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action. |
| [409][] | [JSON API error object][] | Conflict; check the error object for more information. |
| [412][] | [JSON API error object][] | Precondition failed; check the error object for more information. |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.). |

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                             | Type    | Default   | Description |
|--------------------------------------|---------|-----------|-------------|
| `data.type`                          | string  |           | Must be `"state-versions"`. |
| `data.attributes.serial`             | integer |           | The serial of the state version. Must match the serial value extracted from the raw state file. |
| `data.attributes.md5`                | string  |           | An MD5 hash of the raw state version. |
| `data.attributes.state`              | string  | (nothing) | **Optional** Base64 encoded raw state file. If omitted, you must use the upload method below to complete the state version creation. The workspace may not be unlocked normally until the state version is uploaded. |
| `data.attributes.lineage`            | string  | (nothing) | **Optional** Lineage of the state version. Should match the lineage extracted from the raw state file. Early versions of Terraform did not have the concept of lineage, so this is an optional attribute. |
| `data.attributes.json-state`         | string  | (nothing) | **Optional** Base64 encoded JSON state, as expressed by `terraform show -json`. See [JSON Output Format](/terraform/internals/json-format) for more details. |
| `data.attributes.json-state-outputs` | string  | (nothing) | **Optional** Base64 encoded output values as represented by `terraform show -json` (the contents of the `values/outputs` key). If provided, the workspace outputs populate immediately. If omitted, HCP Terraform populates the workspace outputs from the given state after a short time. |
| `data.relationships.run.data.id`     | string  | (nothing) | **Optional** The ID of the run to associate with the state version. |

### Sample Payload

```json
{
  "data": {
    "type": "state-versions",
    "attributes": {
      "serial": 1,
      "md5": "d41d8cd98f00b204e9800998ecf8427e",
      "lineage": "871d1b4a-e579-fb7c-ffdb-f0c858a647a7",
      "state": "...",
      "json-state": "...",
      "json-state-outputs": "..."
    },
    "relationships": {
      "run": {
        "data": {
          "type": "runs",
          "id": "run-bWSq4YeYpfrW4mx7"
        }
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-6fHMCom98SDXSQUv/state-versions
```
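The `serial`, `lineage`, `md5`, and `state` attributes of the payload can all be derived from a raw state file on disk. A minimal sketch, where the `build_payload` helper and the file name are illustrative rather than part of the API:

```python
# Sketch: derive the create-state-version attributes from a raw state file.
# serial and lineage are read out of the state JSON itself; md5 and state
# are computed over the raw bytes, as the request body table requires.
import base64
import hashlib
import json

def build_payload(state_path: str) -> dict:
    with open(state_path, "rb") as f:
        raw = f.read()
    parsed = json.loads(raw)
    return {
        "data": {
            "type": "state-versions",
            "attributes": {
                "serial": parsed["serial"],           # must match the raw state
                "lineage": parsed.get("lineage"),     # optional; old state lacks it
                "md5": hashlib.md5(raw).hexdigest(),  # MD5 of the raw bytes
                "state": base64.b64encode(raw).decode("ascii"),
            },
        }
    }

# Demonstration with a tiny stand-in state file (illustrative content only):
with open("example.tfstate", "wb") as f:
    f.write(b'{"version": 4, "serial": 7, "lineage": "871d1b4a-e579-fb7c-ffdb-f0c858a647a7"}')
payload = build_payload("example.tfstate")
print(json.dumps(payload, indent=2))
```

The resulting dictionary can be serialized to `payload.json` and sent with the sample request above.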
### Sample Response

```json
{
  "data": {
    "id": "sv-DmoXecHePnNznaA4",
    "type": "state-versions",
    "attributes": {
      "vcs-commit-sha": null,
      "vcs-commit-url": null,
      "created-at": "2018-07-12T20:32:01.490Z",
      "hosted-state-download-url": "https://archivist.terraform.io/v1/object/f55b739b-ff03-4716-b436-726466b96dc4",
      "hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/4fde7951-93c0-4414-9a40-f3abc4bac490",
      "hosted-state-upload-url": null,
      "hosted-json-state-upload-url": null,
      "status": "finalized",
      "intermediate": true,
      "serial": 1
    },
    "links": {
      "self": "/api/v2/state-versions/sv-DmoXecHePnNznaA4"
    }
  }
}
```

## Upload State and JSON State

You can upload state version content in the same request when creating a state version. However, we **strongly** recommend that you upload content separately.

`PUT https://archivist.terraform.io/v1/object/<UNIQUE OBJECT ID>`

HCP Terraform returns a `hosted-state-upload-url` or `hosted-json-state-upload-url` when you create a state version. Once you upload state content, this URL is hidden on the resource and no longer available.

### Sample Request

In the example below, `@filename` is the name of the Terraform state file you wish to upload.

```shell
curl \
  --header "Content-Type: application/octet-stream" \
  --request PUT \
  --data-binary @filename \
  https://archivist.terraform.io/v1/object/4c44d964-eba7-4dd5-ad29-1ece7b99e8da
```

## List State Versions for a Workspace

`GET /state-versions`

Listing state versions requires permission to read state versions for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

[permissions-citation]: #intentionally-unused---keep-for-maintainers
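The bracketed filter parameters this endpoint takes must be percent-encoded in the request URL. A sketch of producing the encoded query string with Python's standard library (the workspace and organization names are placeholders):

```python
# Sketch: percent-encode the bracketed filter parameters for the
# list-state-versions endpoint. urlencode escapes "[" as %5B and "]" as %5D.
from urllib.parse import urlencode

params = {
    "filter[workspace][name]": "my-workspace",        # placeholder workspace
    "filter[organization][name]": "my-organization",  # placeholder organization
    "page[size]": 20,
}
url = "https://app.terraform.io/api/v2/state-versions?" + urlencode(params)
print(url)
```

This yields the same `%5B`/`%5D` escaping shown in the sample request below, so the URL can be passed directly to `curl` or any HTTP client.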
### Query Parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter                    | Description |
|------------------------------|-------------|
| `filter[workspace][name]`    | **Required.** The name of one workspace to list versions for. |
| `filter[organization][name]` | **Required.** The name of the organization that owns the desired workspace. |
| `filter[status]`             | **Optional.** Filter state versions by status: `pending`, `finalized`, or `discarded`. |
| `page[number]`               | **Optional.** If omitted, the endpoint will return the first page. |
| `page[size]`                 | **Optional.** If omitted, the endpoint will return 20 state versions per page. |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/state-versions?filter%5Bworkspace%5D%5Bname%5D=my-workspace&filter%5Borganization%5D%5Bname%5D=my-organization"
```

### Sample Response

```json
{
  "data": [
    {
      "id": "sv-g4rqST72reoHMM5a",
      "type": "state-versions",
      "attributes": {
        "created-at": "2021-06-08T01:22:03.794Z",
        "size": 940,
        "hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
        "hosted-state-upload-url": null,
        "hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
        "hosted-json-state-upload-url": null,
        "status": "finalized",
        "intermediate": false,
        "modules": {
          "root": {
            "null_resource": 1,
            "data.terraform_remote_state": 1
          }
        },
        "providers": {
          "provider[\"terraform.io/builtin/terraform\"]": {
            "data.terraform_remote_state": 1
          },
          "provider[\"registry.terraform.io/hashicorp/null\"]": {
            "null_resource": 1
          }
        },
        "resources": [
          {
            "name": "other_username",
            "type": "data.terraform_remote_state",
            "count": 1,
            "module": "root",
            "provider": "provider[\"terraform.io/builtin/terraform\"]"
          },
          {
            "name": "random",
            "type": "null_resource",
            "count": 1,
            "module": "root",
            "provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
          }
        ],
        "resources-processed": true,
        "serial": 9,
        "state-version": 4,
        "terraform-version": "0.15.4",
        "vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/abcdef12345",
        "vcs-commit-sha": "abcdef12345"
      },
      "relationships": {
        "run": {
          "data": { "id": "run-YfmFLWpgTv31VZsP", "type": "runs" }
        },
        "created-by": {
          "data": { "id": "user-onZs69ThPZjBK2wo", "type": "users" },
          "links": {
            "self": "/api/v2/users/user-onZs69ThPZjBK2wo",
            "related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
          }
        },
        "workspace": {
          "data": { "id": "ws-noZcaGXsac6aZSJR", "type": "workspaces" }
        },
        "outputs": {
          "data": [
            { "id": "wsout-V22qbeM92xb5mw9n", "type": "state-version-outputs" },
            { "id": "wsout-ymkuRnrNFeU5wGpV", "type": "state-version-outputs" },
            { "id": "wsout-v82BjkZnFEcscipg", "type": "state-version-outputs" }
          ]
        }
      },
      "links": {
        "self": "/api/v2/state-versions/sv-g4rqST72reoHMM5a"
      }
    },
    {
      "id": "sv-QYKf6GvNv75ZPTBr",
      "type": "state-versions",
      "attributes": {
        "created-at": "2021-06-01T21:40:25.941Z",
        "size": 819,
        "hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
        "hosted-state-upload-url": null,
        "hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
        "hosted-json-state-upload-url": null,
        "status": "finalized",
        "intermediate": false,
        "modules": {
          "root": {
            "data.terraform_remote_state": 1
          }
        },
        "providers": {
          "provider[\"terraform.io/builtin/terraform\"]": {
            "data.terraform_remote_state": 1
          }
        },
        "resources": [
          {
            "name": "other_username",
            "type": "data.terraform_remote_state",
            "count": 1,
            "module": "root",
            "provider": "provider[\"terraform.io/builtin/terraform\"]"
          }
        ],
        "resources-processed": true,
        "serial": 8,
        "state-version": 4,
        "terraform-version": "0.15.4",
        "vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/12345abcdef",
        "vcs-commit-sha": "12345abcdef"
      },
      "relationships": {
        "run": {
          "data": { "id": "run-cVtxks6R8wsjCZMD", "type": "runs" }
        },
        "created-by": {
          "data": { "id": "user-onZs69ThPZjBK2wo", "type": "users" },
          "links": {
            "self": "/api/v2/users/user-onZs69ThPZjBK2wo",
            "related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
          }
        },
        "workspace": {
          "data": { "id": "ws-noZcaGXsac6aZSJR", "type": "workspaces" }
        },
        "outputs": {
          "data": [
            { "id": "wsout-MmqMhmht6jFmLRvh", "type": "state-version-outputs" },
            { "id": "wsout-Kuo9TCHg3oDLDQqa", "type": "state-version-outputs" }
          ]
        }
      },
      "links": {
        "self": "/api/v2/state-versions/sv-QYKf6GvNv75ZPTBr"
      }
    }
  ],
  "links": {
    "self": "https://app.terraform.io/api/v2/state-versions?filter%5Borganization%5D%5Bname%5D=hashicorp&filter%5Bworkspace%5D%5Bname%5D=my-workspace&page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "first": "https://app.terraform.io/api/v2/state-versions?filter%5Borganization%5D%5Bname%5D=hashicorp&filter%5Bworkspace%5D%5Bname%5D=my-workspace&page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "prev": null,
    "next": null,
    "last": "https://app.terraform.io/api/v2/state-versions?filter%5Borganization%5D%5Bname%5D=hashicorp&filter%5Bworkspace%5D%5Bname%5D=my-workspace&page%5Bnumber%5D=1&page%5Bsize%5D=20"
  },
  "meta": {
    "pagination": {
      "current-page": 1,
      "page-size": 20,
      "prev-page": null,
      "next-page": null,
      "total-pages": 1,
      "total-count": 10
    }
  }
}
```

## Fetch the Current State Version for a Workspace

`GET /workspaces/:workspace_id/current-state-version`

| Parameter       | Description |
|-----------------|-------------|
| `:workspace_id` | The ID for the workspace whose current state version you want to fetch. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](/terraform/cloud-docs/api-docs/workspaces#show-workspace) endpoint. |
Fetches the current state version for the given workspace. This state version will be the input state when running Terraform operations.

Viewing state versions requires permission to read state versions for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Status  | Response                  | Reason |
|---------|---------------------------|--------|
| [200][] | [JSON API document][]     | Successfully returned current state version for the given workspace. |
| [404][] | [JSON API error object][] | Workspace not found, workspace does not have a current state version, or user unauthorized to perform action. |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/workspaces/ws-6fHMCom98SDXSQUv/current-state-version
```

### Sample Response

```json
{
  "data": {
    "id": "sv-g4rqST72reoHMM5a",
    "type": "state-versions",
    "attributes": {
      "billable-rum-count": 0,
      "created-at": "2021-06-08T01:22:03.794Z",
      "size": 940,
      "hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
      "hosted-state-upload-url": null,
      "hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
      "hosted-json-state-upload-url": null,
      "status": "finalized",
      "intermediate": false,
      "modules": {
        "root": {
          "null_resource": 1,
          "data.terraform_remote_state": 1
        }
      },
      "providers": {
        "provider[\"terraform.io/builtin/terraform\"]": {
          "data.terraform_remote_state": 1
        },
        "provider[\"registry.terraform.io/hashicorp/null\"]": {
          "null_resource": 1
        }
      },
      "resources": [
        {
          "name": "other_username",
          "type": "data.terraform_remote_state",
          "count": 1,
          "module": "root",
          "provider": "provider[\"terraform.io/builtin/terraform\"]"
        },
        {
          "name": "random",
          "type": "null_resource",
          "count": 1,
          "module": "root",
          "provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
        }
      ],
      "resources-processed": true,
      "serial": 9,
      "state-version": 4,
      "terraform-version": "0.15.4",
      "vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/abcdef12345",
      "vcs-commit-sha": "abcdef12345"
    },
    "relationships": {
      "run": {
        "data": { "id": "run-YfmFLWpgTv31VZsP", "type": "runs" }
      },
      "created-by": {
        "data": { "id": "user-onZs69ThPZjBK2wo", "type": "users" },
        "links": {
          "self": "/api/v2/users/user-onZs69ThPZjBK2wo",
          "related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
        }
      },
      "workspace": {
        "data": { "id": "ws-noZcaGXsac6aZSJR", "type": "workspaces" }
      },
      "outputs": {
        "data": [
          { "id": "wsout-V22qbeM92xb5mw9n", "type": "state-version-outputs" },
          { "id": "wsout-ymkuRnrNFeU5wGpV", "type": "state-version-outputs" },
          { "id": "wsout-v82BjkZnFEcscipg", "type": "state-version-outputs" }
        ]
      }
    },
    "links": {
      "self": "/api/v2/state-versions/sv-g4rqST72reoHMM5a"
    }
  }
}
```

## Show a State Version

`GET /state-versions/:state_version_id`

Viewing state versions requires permission to read state versions for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Parameter           | Description |
|---------------------|-------------|
| `:state_version_id` | The ID of the desired state version. |

| Status  | Response                  | Reason |
|---------|---------------------------|--------|
| [200][] | [JSON API document][]     | Successfully returned current state version for the given workspace. |
| [404][] | [JSON API error object][] | Workspace not found, workspace does not have a current state version, or user unauthorized to perform action. |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/state-versions/sv-SDboVZC8TCxXEneJ
```
SDboVZC8TCxXEneJ          Sample Response     json        data              id    sv g4rqST72reoHMM5a            type    state versions            attributes                  created at    2021 06 08T01 22 03 794Z                size   940               hosted state download url    https   archivist terraform io v1 object                    hosted state upload url   null               hosted json state download url    https   archivist terraform io v1 object                    hosted json state upload url   null               status    finalized                intermediate   false               modules                      root                          null resource   1                       data terraform remote state   1                                               providers                      provider   terraform io builtin terraform                             data terraform remote state   1                                     provider   registry terraform io hashicorp null                             null resource   1                                               resources                                            name    other username                        type    data terraform remote state                        count   1                       module    root                        provider    provider   terraform io builtin terraform                                                               name    random                        type    null resource                        count   1                       module    root                        provider    provider   registry terraform io hashicorp null                                                   resources processed   true               serial   9               state version   4               terraform version    0 15 4                vcs commit url    https   gitlab com my organization terraform test   commit abcdef12345                vcs commit sha    abcdef12345                      
relationships                  run                      data                          id    run YfmFLWpgTv31VZsP                        type    runs                                                created by                      data                          id    user onZs69ThPZjBK2wo                        type    users                                      links                          self     api v2 users user onZs69ThPZjBK2wo                        related     api v2 runs run YfmFLWpgTv31VZsP created by                                                workspace                      data                          id    ws noZcaGXsac6aZSJR                        type    workspaces                                                outputs                      data                                                    id    wsout V22qbeM92xb5mw9n                            type    state version outputs                                                                        id    wsout ymkuRnrNFeU5wGpV                            type    state version outputs                                                                        id    wsout v82BjkZnFEcscipg                            type    state version outputs                                                                            links                  self     api v2 state versions sv g4rqST72reoHMM5a                            Rollback to a Previous State Version   PATCH  workspaces  workspace id state versions     Parameter         Description                                                                                                                                                                                                                                                                                                                                                                                                                                                     workspace id    The workspace ID to create 
## Rollback to a Previous State Version

`PATCH /workspaces/:workspace_id/state-versions`

| Parameter       | Description                                                                                                                                                                                                 |
| --------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:workspace_id` | The workspace ID to create the new state version in. Obtain this from the [workspace settings](/terraform/cloud-docs/workspaces/settings) or the [Show Workspace](/terraform/cloud-docs/api-docs/workspaces#show-workspace) endpoint. |

Creates a state version by duplicating the specified state version and sets it as the current state version for the given workspace. The workspace must be locked by the user creating a state version. The workspace may be locked [with the API](/terraform/cloud-docs/api-docs/workspaces#lock-a-workspace) or [with the UI](/terraform/cloud-docs/workspaces/settings#locking). This is most useful for rolling back to a known-good state after an operation such as a Terraform upgrade didn't go as planned.

Creating state versions requires permission to read and write state versions for the workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

[permissions-citation]: #intentionally-unused---keep-for-maintainers

~> **Warning:** Use caution when rolling back to a previous state. Replacing state improperly can result in orphaned or duplicated infrastructure resources.

-> **Note:** You cannot access this endpoint with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).

| Status  | Response                  | Reason                                                         |
| ------- | ------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][]     | Successfully rolled back                                       |
| [404][] | [JSON API error object][] | Workspace not found, or user unauthorized to perform action    |
| [409][] | [JSON API error object][] | Conflict; check the error object for more information          |
| [422][] | [JSON API error object][] | Malformed request body (missing attributes, wrong types, etc.) |

### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                                               | Type   | Default | Description                                                     |
| ------------------------------------------------------ | ------ | ------- | ---------------------------------------------------------------- |
| `data.type`                                            | string |         | Must be `"state-versions"`.                                      |
| `data.relationships.rollback-state-version.data.id`    | string |         | The ID of the state version to use for the rollback operation.   |

### Sample Payload

```json
{
  "data": {
    "type": "state-versions",
    "relationships": {
      "rollback-state-version": {
        "data": {
          "type": "state-versions",
          "id": "sv-bWfq4Y1YpRKW4mx7"
        }
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-6fHMCom98SDXSQUv/state-versions
```

### Sample Response

```json
{
  "data": {
    "id": "sv-DmoXecHePnNznaA4",
    "type": "state-versions",
    "attributes": {
      "created-at": "2022-11-22T01:22:03.794Z",
      "size": 940,
      "hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
      "hosted-state-upload-url": null,
      "hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
      "hosted-json-state-upload-url": null,
      "modules": {
        "root": {
          "null_resource": 1,
          "data.terraform_remote_state": 1
        }
      },
      "providers": {
        "provider[\"terraform.io/builtin/terraform\"]": {
          "data.terraform_remote_state": 1
        },
        "provider[\"registry.terraform.io/hashicorp/null\"]": {
          "null_resource": 1
        }
      },
      "resources": [
        {
          "name": "other_username",
          "type": "data.terraform_remote_state",
          "count": 1,
          "module": "root",
          "provider": "provider[\"terraform.io/builtin/terraform\"]"
        },
        {
          "name": "random",
          "type": "null_resource",
          "count": 1,
          "module": "root",
          "provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
        }
      ],
      "resources-processed": true,
      "serial": 9,
      "state-version": 4,
      "terraform-version": "1.3.5"
    },
    "relationships": {
      "rollback-state-version": {
        "data": {
          "id": "sv-YfmFLgTv31VZsP",
          "type": "state-versions"
        }
      }
    },
    "links": {
      "self": "/api/v2/state-versions/sv-DmoXecHePnNznaA4"
    }
  }
}
```
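The rollback payload above is small enough to assemble programmatically before serializing it for the `PATCH` request. A hedged sketch (the function name is illustrative, not part of any SDK):

```python
import json

def rollback_payload(state_version_id: str) -> dict:
    """Build the PATCH body that rolls a workspace back to the
    given previous state version."""
    return {
        "data": {
            "type": "state-versions",
            "relationships": {
                "rollback-state-version": {
                    "data": {
                        "type": "state-versions",
                        "id": state_version_id,
                    }
                }
            },
        }
    }

# Serialize for use as the request body, e.g. with --data in curl:
payload = rollback_payload("sv-bWfq4Y1YpRKW4mx7")
body = json.dumps(payload)
```

Remember that the workspace must be locked by the caller before sending this request, or the API will reject it.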
## Mark a State Version for Garbage Collection

<EnterpriseAlert>
This endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href="https://developer.hashicorp.com/terraform/enterprise">Learn more about Terraform Enterprise</a>.
</EnterpriseAlert>

`POST /api/v2/state-versions/:state_version_id/actions/soft_delete_backing_data`

This endpoint directs Terraform Enterprise to _soft delete_ the backing files associated with this state version. Soft deletion marks the state version for garbage collection. Terraform permanently deletes state versions after a set number of days unless the state version is restored. Once a state version is soft deleted, any attempts to read the state version will fail. Refer to [State Version Status](#state-version-status) for information about all data states.

This endpoint can only soft delete state versions that are in a [`finalized` state](#state-version-status) and are not the current state version. Otherwise, calling this endpoint results in an error.

You must have organization owner permissions to soft delete state versions. Refer to [Permissions](/terraform/enterprise/users-teams-organizations/permissions) for additional information about permissions.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Parameter           | Description                                                  |
| ------------------- | ------------------------------------------------------------ |
| `:state_version_id` | The ID of the state version to mark for garbage collection.  |

| Status  | Response                  | Reason                                                                                             |
| ------- | ------------------------- | --------------------------------------------------------------------------------------------------- |
| [200][] | [JSON API document][]     | Terraform successfully marked the data for garbage collection                                       |
| [400][] | [JSON API error object][] | Terraform failed to transition the state to `backing_data_soft_deleted`                             |
| [404][] | [JSON API error object][] | Terraform did not find the state version or the user is not authorized to modify the state version  |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  https://app.terraform.io/api/v2/state-versions/sv-ntv3HbhJqvFzamy7/actions/soft_delete_backing_data \
  --data '{"data": {"attributes": {"delete-older-than-n-days": 23}}}'
```

### Sample Response

```json
{
  "data": {
    "id": "sv-g4rqST72reoHMM5a",
    "type": "state-versions",
    "attributes": {
      "created-at": "2021-06-08T01:22:03.794Z",
      "size": 940,
      "hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
      "hosted-state-upload-url": null,
      "hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
      "hosted-json-state-upload-url": null,
      "status": "backing_data_soft_deleted",
      "intermediate": false,
      "delete-older-than-n-days": 23,
      "modules": {
        "root": {
          "null_resource": 1,
          "data.terraform_remote_state": 1
        }
      },
      "providers": {
        "provider[\"terraform.io/builtin/terraform\"]": {
          "data.terraform_remote_state": 1
        },
        "provider[\"registry.terraform.io/hashicorp/null\"]": {
          "null_resource": 1
        }
      },
      "resources": [
        {
          "name": "other_username",
          "type": "data.terraform_remote_state",
          "count": 1,
          "module": "root",
          "provider": "provider[\"terraform.io/builtin/terraform\"]"
        },
        {
          "name": "random",
          "type": "null_resource",
          "count": 1,
          "module": "root",
          "provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
        }
      ],
      "resources-processed": true,
      "serial": 9,
      "state-version": 4,
      "terraform-version": "0.15.4",
      "vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/abcdef12345",
      "vcs-commit-sha": "abcdef12345"
    },
    "relationships": {
      "run": {
        "data": {
          "id": "run-YfmFLWpgTv31VZsP",
          "type": "runs"
        }
      },
      "created-by": {
        "data": {
          "id": "user-onZs69ThPZjBK2wo",
          "type": "users"
        },
        "links": {
          "self": "/api/v2/users/user-onZs69ThPZjBK2wo",
          "related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
        }
      },
      "workspace": {
        "data": {
          "id": "ws-noZcaGXsac6aZSJR",
          "type": "workspaces"
        }
      },
      "outputs": {
        "data": [
          {
            "id": "wsout-V22qbeM92xb5mw9n",
            "type": "state-version-outputs"
          },
          {
            "id": "wsout-ymkuRnrNFeU5wGpV",
            "type": "state-version-outputs"
          },
          {
            "id": "wsout-v82BjkZnFEcscipg",
            "type": "state-version-outputs"
          }
        ]
      }
    },
    "links": {
      "self": "/api/v2/state-versions/sv-g4rqST72reoHMM5a"
    }
  }
}
```
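Building the soft-delete request from its parts can be sketched as follows (the base URL constant and function name are illustrative; the endpoint path and `delete-older-than-n-days` attribute are from the docs above):

```python
API_BASE = "https://app.terraform.io/api/v2"  # assumed HCP/TFE base URL

def soft_delete_request(state_version_id: str, delete_older_than_n_days: int):
    """Return the (url, body) pair for marking a state version's
    backing data for garbage collection."""
    url = (
        f"{API_BASE}/state-versions/{state_version_id}"
        "/actions/soft_delete_backing_data"
    )
    body = {
        "data": {
            "attributes": {
                "delete-older-than-n-days": delete_older_than_n_days,
            }
        }
    }
    return url, body

url, body = soft_delete_request("sv-ntv3HbhJqvFzamy7", 23)
```

The request would then be issued as an authenticated `POST` with a `Content-Type` of `application/vnd.api+json`.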
## Restore a State Version Marked for Garbage Collection

<EnterpriseAlert>
This endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href="https://developer.hashicorp.com/terraform/enterprise">Learn more about Terraform Enterprise</a>.
</EnterpriseAlert>

`POST /api/v2/state-versions/:state_version_id/actions/restore_backing_data`

This endpoint directs Terraform Enterprise to restore backing files associated with this state version. This endpoint can only restore state versions that are not in a [`backing_data_permanently_deleted` state](#state-version-status). Terraform restores applicable state versions back to their `finalized` state. Otherwise, calling this endpoint results in an error.

You must have organization owner permissions to restore state versions. Refer to [Permissions](/terraform/enterprise/users-teams-organizations/permissions) for additional information about permissions.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Parameter           | Description                                |
| ------------------- | ------------------------------------------ |
| `:state_version_id` | The ID of the state version to restore.    |

| Status  | Response                  | Reason                                                                                             |
| ------- | ------------------------- | --------------------------------------------------------------------------------------------------- |
| [200][] | [JSON API document][]     | Terraform successfully initiated the restore process                                                |
| [400][] | [JSON API error object][] | Terraform failed to transition the state to `finalized`                                             |
| [404][] | [JSON API error object][] | Terraform did not find the state version or the user is not authorized to modify the state version  |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  https://app.terraform.io/api/v2/state-versions/sv-ntv3HbhJqvFzamy7/actions/restore_backing_data
```

### Sample Response

```json
{
  "data": {
    "id": "sv-g4rqST72reoHMM5a",
    "type": "state-versions",
    "attributes": {
      "created-at": "2021-06-08T01:22:03.794Z",
      "size": 940,
      "hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
      "hosted-state-upload-url": null,
      "hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
      "hosted-json-state-upload-url": null,
      "status": "uploaded",
      "intermediate": false,
      "modules": {
        "root": {
          "null_resource": 1,
          "data.terraform_remote_state": 1
        }
      },
      "providers": {
        "provider[\"terraform.io/builtin/terraform\"]": {
          "data.terraform_remote_state": 1
        },
        "provider[\"registry.terraform.io/hashicorp/null\"]": {
          "null_resource": 1
        }
      },
      "resources": [
        {
          "name": "other_username",
          "type": "data.terraform_remote_state",
          "count": 1,
          "module": "root",
          "provider": "provider[\"terraform.io/builtin/terraform\"]"
        },
        {
          "name": "random",
          "type": "null_resource",
          "count": 1,
          "module": "root",
          "provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
        }
      ],
      "resources-processed": true,
      "serial": 9,
      "state-version": 4,
      "terraform-version": "0.15.4",
      "vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/abcdef12345",
      "vcs-commit-sha": "abcdef12345"
    },
    "relationships": {
      "run": {
        "data": {
          "id": "run-YfmFLWpgTv31VZsP",
          "type": "runs"
        }
      },
      "created-by": {
        "data": {
          "id": "user-onZs69ThPZjBK2wo",
          "type": "users"
        },
        "links": {
          "self": "/api/v2/users/user-onZs69ThPZjBK2wo",
          "related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
        }
      },
      "workspace": {
        "data": {
          "id": "ws-noZcaGXsac6aZSJR",
          "type": "workspaces"
        }
      },
      "outputs": {
        "data": [
          {
            "id": "wsout-V22qbeM92xb5mw9n",
            "type": "state-version-outputs"
          },
          {
            "id": "wsout-ymkuRnrNFeU5wGpV",
            "type": "state-version-outputs"
          },
          {
            "id": "wsout-v82BjkZnFEcscipg",
            "type": "state-version-outputs"
          }
        ]
      }
    },
    "links": {
      "self": "/api/v2/state-versions/sv-g4rqST72reoHMM5a"
    }
  }
}
```
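Taken together, the soft-delete and restore actions above, plus the permanently-delete action documented below, imply a small lifecycle for a state version's backing data. The preconditions as this document states them can be sketched as a guard function (the constant and function names are illustrative, not part of the API):

```python
# Action names mirror the endpoint paths in these docs.
SOFT_DELETE = "soft_delete_backing_data"
RESTORE = "restore_backing_data"
PERMANENT_DELETE = "permanently_delete_backing_data"

def can_call(action: str, status: str, is_current: bool) -> bool:
    """Check the documented preconditions for each Terraform Enterprise
    backing-data action against a state version's status."""
    if action == SOFT_DELETE:
        # Only finalized, non-current state versions can be soft deleted.
        return status == "finalized" and not is_current
    if action == RESTORE:
        # Anything short of permanent deletion can be restored.
        return status != "backing_data_permanently_deleted"
    if action == PERMANENT_DELETE:
        # Only soft-deleted, non-current state versions can be purged.
        return status == "backing_data_soft_deleted" and not is_current
    raise ValueError(f"unknown action: {action}")
```

Calling an endpoint whose precondition fails returns a `400` with a JSON API error object, so a guard like this only saves a round trip; the server remains the source of truth.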
## Permanently Delete a State Version

<EnterpriseAlert>
This endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform. <a href="https://developer.hashicorp.com/terraform/enterprise">Learn more about Terraform Enterprise</a>.
</EnterpriseAlert>

`POST /api/v2/state-versions/:state_version_id/actions/permanently_delete_backing_data`

This endpoint directs Terraform Enterprise to permanently delete backing files associated with this state version. This endpoint can only permanently delete state versions that are in a [`backing_data_soft_deleted` state](#state-version-status) and are not the current state version. Otherwise, calling this endpoint results in an error.

You must have organization owner permissions to permanently delete state versions. Refer to [Permissions](/terraform/enterprise/users-teams-organizations/permissions) for additional information about permissions.

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Parameter           | Description                                        |
| ------------------- | -------------------------------------------------- |
| `:state_version_id` | The ID of the state version to permanently delete. |

| Status  | Response                  | Reason                                                                                                  |
| ------- | ------------------------- | -------------------------------------------------------------------------------------------------------- |
| [200][] | [JSON API document][]     | Terraform deleted the data permanently                                                                    |
| [400][] | [JSON API error object][] | Terraform failed to transition the state to `backing_data_permanently_deleted`                            |
| [404][] | [JSON API error object][] | Terraform did not find the state version or the user is not authorized to modify the state version data   |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  https://app.terraform.io/api/v2/state-versions/sv-ntv3HbhJqvFzamy7/actions/permanently_delete_backing_data
```

### Sample Response

```json
{
  "data": {
    "id": "sv-g4rqST72reoHMM5a",
    "type": "state-versions",
    "attributes": {
      "created-at": "2021-06-08T01:22:03.794Z",
      "size": 940,
      "hosted-state-download-url": "https://archivist.terraform.io/v1/object/...",
      "hosted-state-upload-url": null,
      "hosted-json-state-download-url": "https://archivist.terraform.io/v1/object/...",
      "hosted-json-state-upload-url": null,
      "status": "backing_data_permanently_deleted",
      "intermediate": false,
      "modules": {
        "root": {
          "null_resource": 1,
          "data.terraform_remote_state": 1
        }
      },
      "providers": {
        "provider[\"terraform.io/builtin/terraform\"]": {
          "data.terraform_remote_state": 1
        },
        "provider[\"registry.terraform.io/hashicorp/null\"]": {
          "null_resource": 1
        }
      },
      "resources": [
        {
          "name": "other_username",
          "type": "data.terraform_remote_state",
          "count": 1,
          "module": "root",
          "provider": "provider[\"terraform.io/builtin/terraform\"]"
        },
        {
          "name": "random",
          "type": "null_resource",
          "count": 1,
          "module": "root",
          "provider": "provider[\"registry.terraform.io/hashicorp/null\"]"
        }
      ],
      "resources-processed": true,
      "serial": 9,
      "state-version": 4,
      "terraform-version": "0.15.4",
      "vcs-commit-url": "https://gitlab.com/my-organization/terraform-test/-/commit/abcdef12345",
      "vcs-commit-sha": "abcdef12345"
    },
    "relationships": {
      "run": {
        "data": {
          "id": "run-YfmFLWpgTv31VZsP",
          "type": "runs"
        }
      },
      "created-by": {
        "data": {
          "id": "user-onZs69ThPZjBK2wo",
          "type": "users"
        },
        "links": {
          "self": "/api/v2/users/user-onZs69ThPZjBK2wo",
          "related": "/api/v2/runs/run-YfmFLWpgTv31VZsP/created-by"
        }
      },
      "workspace": {
        "data": {
          "id": "ws-noZcaGXsac6aZSJR",
          "type": "workspaces"
        }
      },
      "outputs": {
        "data": [
          {
            "id": "wsout-V22qbeM92xb5mw9n",
            "type": "state-version-outputs"
          },
          {
            "id": "wsout-ymkuRnrNFeU5wGpV",
            "type": "state-version-outputs"
          },
          {
            "id": "wsout-v82BjkZnFEcscipg",
            "type": "state-version-outputs"
          }
        ]
      }
    },
    "links": {
      "self": "/api/v2/state-versions/sv-g4rqST72reoHMM5a"
    }
  }
}
```

## List State Version Outputs

The output values from a state version are also available via the API. For details, see the [state version outputs documentation](/terraform/cloud-docs/api-docs/state-version-outputs#list-state-version-outputs).

## Available Related Resources

The GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](/terraform/cloud-docs/api-docs#inclusion-of-related-resources). The following resource types are available:

- `created_by` - The user that created the state version. For state versions created via a run executed by HCP Terraform, this is an internal user account.
- `run` - The run that created the state version, if applicable.
- `run.created_by` - The user that manually triggered the run, if applicable.
- `run.configuration_version` - The configuration version used in the run.
- `outputs` - The parsed outputs for this state version.
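Assembling the `include` query parameter from the related-resource names listed above can be sketched as follows (the helper name is illustrative; the resource-type strings come from this document):

```python
from urllib.parse import urlencode

# Related resource types documented for the ?include parameter.
RELATED = {"created_by", "run", "run.created_by", "run.configuration_version", "outputs"}

def with_includes(base_url: str, includes: list) -> str:
    """Append an `include` query parameter selecting related resources."""
    unknown = set(includes) - RELATED
    if unknown:
        raise ValueError(f"unsupported related resources: {sorted(unknown)}")
    # JSON:API uses a single comma-separated include parameter.
    return base_url + "?" + urlencode({"include": ",".join(includes)})

url = with_includes(
    "https://app.terraform.io/api/v2/state-versions/sv-g4rqST72reoHMM5a",
    ["run", "outputs"],
)
```

Note that `urlencode` percent-encodes the comma (`%2C`), which the API accepts like a literal comma.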
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the ssh keys endpoint to manage an organization s SSH keys List get create update and delete SSH keys using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201 page title SSH Keys API Docs HCP Terraform","answers":"---\npage_title: SSH Keys - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/ssh-keys` endpoint to manage an organization's SSH keys. List, get, create, update, and delete SSH keys using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# SSH Keys\n\nThe `ssh-key` object represents an SSH key which includes a name and the SSH private key. 
An organization can have multiple SSH keys available.\n\nSSH keys can be used in two places:\n\n- You can assign them to VCS provider integrations, which are available in the API as `oauth-tokens`. Refer to [OAuth Tokens](\/terraform\/cloud-docs\/api-docs\/oauth-tokens) for additional information. Azure DevOps Server and Bitbucket Data Center require an SSH key. Other providers only require an SSH key when your repositories include submodules that are only accessible using an SSH connection instead of your VCS provider's API.\n- They can be [assigned to workspaces](\/terraform\/cloud-docs\/api-docs\/workspaces#assign-an-ssh-key-to-a-workspace) and used when Terraform needs to clone modules from a Git server. This is only necessary when your configurations directly reference modules from a Git server; you do not need to do this if you use HCP Terraform's [private module registry](\/terraform\/cloud-docs\/registry).\n\nListing and viewing SSH keys requires either permission to manage VCS settings for the organization, or admin access to at least one workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n~> **Important:** The list and read methods on this API only provide metadata about SSH keys. The actual private key text is write-only, and HCP Terraform never provides it to users via the API or UI.\n\n## List SSH Keys\n\n`GET \/organizations\/:organization_name\/ssh-keys`\n\n| Parameter            | Description                                        |\n| -------------------- | -------------------------------------------------- |\n| `:organization_name` | The name of the organization to list SSH keys for. |\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). 
You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                                             | Reason                                        |\n| ------- | ---------------------------------------------------- | --------------------------------------------- |\n| [200][] | Array of [JSON API document][]s (`type: \"ssh-keys\"`) | Success                                       |\n| [404][] | [JSON API error object][]                            | Organization not found or user not authorized |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.  If neither pagination query parameters are provided, the endpoint will not be paginated and will return all results.\n\n| Parameter      | Description                                                              |\n| -------------- | ------------------------------------------------------------------------ |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.       |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 ssh keys per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/ssh-keys\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"attributes\": {\n        \"name\": \"SSH Key\"\n      },\n      \"id\": \"sshkey-GxrePWre1Ezug7aM\",\n      \"links\": {\n        \"self\": \"\/api\/v2\/ssh-keys\/sshkey-GxrePWre1Ezug7aM\"\n      },\n      \"type\": \"ssh-keys\"\n    }\n  ]\n}\n```\n\n## Get an SSH Key\n\n`GET \/ssh-keys\/:ssh_key_id`\n\n| Parameter     | Description            |\n| ------------- | ---------------------- |\n| `:ssh_key_id` | The SSH key ID to get. |\n\nThis endpoint is for looking up the name associated with an SSH key ID. It does not retrieve the key text.\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                                   | Reason                                   |\n| ------- | ------------------------------------------ | ---------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"ssh-keys\"`) | Success                                  |\n| [404][] | [JSON API error object][]                  | SSH key not found or user not authorized |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/ssh-keys\/sshkey-GxrePWre1Ezug7aM\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"SSH Key\"\n    },\n    \"id\": \"sshkey-GxrePWre1Ezug7aM\",\n    \"links\": {\n      \"self\": \"\/api\/v2\/ssh-keys\/sshkey-GxrePWre1Ezug7aM\"\n    },\n    
\"type\": \"ssh-keys\"\n  }\n}\n```\n\n## Create an SSH Key\n\n`POST \/organizations\/:organization_name\/ssh-keys`\n\n| Parameter            | Description                                                                                                                                                                                                                                                         |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to create an SSH key in. The organization must already exist, and the token authenticating the API request must have permission to manage VCS settings. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)) |\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                                   | Reason                                                         |\n| ------- | ------------------------------------------ | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"ssh-keys\"`) | Success                                                        |\n| [422][] | [JSON API error object][]                  | Malformed request body (missing attributes, wrong types, etc.) 
|\n| [404][] | [JSON API error object][]                  | User not authorized                                            |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                | Type   | Default | Description                      |\n| ----------------------- | ------ | ------- | -------------------------------- |\n| `data.type`             | string |         | Must be `\"ssh-keys\"`.            |\n| `data.attributes.name`  | string |         | A name to identify the SSH key.  |\n| `data.attributes.value` | string |         | The text of the SSH private key. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"ssh-keys\",\n    \"attributes\": {\n      \"name\": \"SSH Key\",\n      \"value\": \"-----BEGIN RSA PRIVATE KEY-----\\nMIIEowIBAAKCAQEAm6+JVgl...\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/ssh-keys\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"SSH Key\"\n    },\n    \"id\": \"sshkey-GxrePWre1Ezug7aM\",\n    \"links\": {\n      \"self\": \"\/api\/v2\/ssh-keys\/sshkey-GxrePWre1Ezug7aM\"\n    },\n    \"type\": \"ssh-keys\"\n  }\n}\n```\n\n## Update an SSH Key\n\n`PATCH \/ssh-keys\/:ssh_key_id`\n\n| Parameter     | Description               |\n| ------------- | ------------------------- |\n| `:ssh_key_id` | The SSH key ID to update. |\n\nThis endpoint replaces the name of an existing SSH key.\n\nEditing SSH keys requires permission to manage VCS settings. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                                   | Reason                                   |\n| ------- | ------------------------------------------ | ---------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"ssh-keys\"`) | Success                                  |\n| [404][] | [JSON API error object][]                  | SSH key not found or user not authorized |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                | Type   | Default   | Description                                                                 |\n| ----------------------- | ------ | --------- | --------------------------------------------------------------------------- |\n| `data.type`             | string |           | Must be `\"ssh-keys\"`.                                                       |\n| `data.attributes.name`  | string | (nothing) | A name to identify the SSH key. If omitted, the existing name is preserved. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"SSH Key for GitHub\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/ssh-keys\/sshkey-GxrePWre1Ezug7aM\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"name\": \"SSH Key for GitHub\"\n    },\n    \"id\": \"sshkey-GxrePWre1Ezug7aM\",\n    \"links\": {\n      \"self\": \"\/api\/v2\/ssh-keys\/sshkey-GxrePWre1Ezug7aM\"\n    },\n    \"type\": \"ssh-keys\"\n  }\n}\n```\n\n## Delete an SSH Key\n\n`DELETE \/ssh-keys\/:ssh_key_id`\n\n| Parameter     | Description               |\n| ------------- | ------------------------- |\n| `:ssh_key_id` | The SSH key ID to delete. |\n\nDeleting SSH keys requires permission to manage VCS settings. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). 
You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                  | Reason                                   |\n| ------- | ------------------------- | ---------------------------------------- |\n| [204][] | No Content                | Success                                  |\n| [404][] | [JSON API error object][] | SSH key not found or user not authorized |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/ssh-keys\/sshkey-GxrePWre1Ezug7aM\n```","site":"terraform"}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Policy Set Parameters API Docs HCP Terraform Use the policy sets endpoint to manage key value pairs used by Sentinel policy checks List create update and delete parameters using the HTTP API","answers":"---\npage_title: Policy Set Parameters - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/policy-sets` endpoint to manage key\/value pairs used by Sentinel policy checks. List, create, update, and delete parameters using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Policy Set Parameters API\n\n[Sentinel parameters](https:\/\/docs.hashicorp.com\/sentinel\/language\/parameters) are a list of 
key\/value pairs that HCP Terraform sends to the Sentinel runtime when performing policy checks on workspaces. They can help you avoid hardcoding sensitive parameters into a policy. \n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nParameters are only available for Sentinel policies. This set of APIs provides endpoints to create, update, list and delete parameters.\n\n## Create a Parameter\n\n`POST \/policy-sets\/:policy_set_id\/parameters`\n\n| Parameter        | Description                                          |\n| ---------------- | ---------------------------------------------------- |\n| `:policy_set_id` | The ID of the policy set to create the parameter in. |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                    | Type   | Default | Description                                                                                            |\n| --------------------------- | ------ | ------- | ------------------------------------------------------------------------------------------------------ |\n| `data.type`                 | string |         | Must be `\"vars\"`.                                                                                      |\n| `data.attributes.key`       | string |         | The name of the parameter.                                                                             |\n| `data.attributes.value`     | string | `\"\"`    | The value of the parameter.                                                                            |\n| `data.attributes.category`  | string |         | The category of the parameters. Must be `\"policy-set\"`.                                                |\n| `data.attributes.sensitive` | bool   | `false` | Whether the value is sensitive. 
If true then the parameter is written once and not visible thereafter. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"some_key\",\n      \"value\":\"some_value\",\n      \"category\":\"policy-set\",\n      \"sensitive\":false\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-u3S5p2Uwk21keu1s\/parameters\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-EavQ1LztoRTQHSNT\",\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"some_key\",\n      \"value\":\"some_value\",\n      \"sensitive\":false,\n      \"category\":\"policy-set\"\n    },\n    \"relationships\": {\n      \"configurable\": {\n        \"data\": {\n          \"id\":\"pol-u3S5p2Uwk21keu1s\",\n          \"type\":\"policy-sets\"\n        },\n        \"links\": {\n          \"related\":\"\/api\/v2\/policy-sets\/polset-u3S5p2Uwk21keu1s\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/policy-sets\/polset-u3S5p2Uwk21keu1s\/parameters\/var-EavQ1LztoRTQHSNT\"\n    }\n  }\n}\n```\n\n## List Parameters\n\n`GET \/policy-sets\/:policy_set_id\/parameters`\n\n| Parameter        | Description                                      |\n| ---------------- | ------------------------------------------------ |\n| `:policy_set_id` | The ID of the policy set to list parameters for. |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.  
If neither pagination query parameters are provided, the endpoint will not be paginated and will return all results.\n\n| Parameter      | Description                                                                |\n| -------------- | -------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.         |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 parameters per page. |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  \"https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-u3S5p2Uwk21keu1s\/parameters\"\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\":\"var-AD4pibb9nxo1468E\",\n      \"type\":\"vars\",\n      \"attributes\": {\n        \"key\":\"name\",\n        \"value\":\"hello\",\n        \"sensitive\":false,\n        \"category\":\"policy-set\"\n      },\n      \"relationships\": {\n        \"configurable\": {\n          \"data\": {\n            \"id\":\"pol-u3S5p2Uwk21keu1s\",\n            \"type\":\"policy-sets\"\n          },\n          \"links\": {\n            \"related\":\"\/api\/v2\/policy-sets\/polset-u3S5p2Uwk21keu1s\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\":\"\/api\/v2\/policy-sets\/polset-u3S5p2Uwk21keu1s\/parameters\/var-AD4pibb9nxo1468E\"\n      }\n    }\n  ]\n}\n```\n\n## Update Parameters\n\n`PATCH \/policy-sets\/:policy_set_id\/parameters\/:parameter_id`\n\n| Parameter        | Description                                       |\n| ---------------- | ------------------------------------------------- |\n| `:policy_set_id` | The ID of the policy set that owns the parameter. |\n| `:parameter_id`  | The ID of the parameter to be updated.            
|\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path          | Type   | Default | Description                                                                                                                                                                                                                                                                     |\n| ----------------- | ------ | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`       | string |         | Must be `\"vars\"`.                                                                                                                                                                                                                                                               |\n| `data.id`         | string |         | The ID of the parameter to update.                                                                                                                                                                                                                                              |\n| `data.attributes` | object |         | New attributes for the parameter. This object can include `key`, `value`, `category` and `sensitive` properties, which are described above under [create a parameter](#create-a-parameter). All of these properties are optional; if omitted, a property will be left unchanged. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-yRmifb4PJj7cLkMG\",\n    \"attributes\": {\n      \"key\":\"name\",\n      \"value\":\"mars\",\n      \"category\":\"policy-set\",\n      \"sensitive\": false\n    },\n    \"type\":\"vars\"\n  }\n}\n```\n\n### Sample Request\n\n```bash\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-u3S5p2Uwk21keu1s\/parameters\/var-yRmifb4PJj7cLkMG\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-yRmifb4PJj7cLkMG\",\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"name\",\n      \"value\":\"mars\",\n      \"sensitive\":false,\n      \"category\":\"policy-set\"\n    },\n    \"relationships\": {\n      \"configurable\": {\n        \"data\": {\n          \"id\":\"pol-u3S5p2Uwk21keu1s\",\n          \"type\":\"policy-sets\"\n        },\n        \"links\": {\n          \"related\":\"\/api\/v2\/policy-sets\/polset-u3S5p2Uwk21keu1s\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/policy-sets\/polset-u3S5p2Uwk21keu1s\/parameters\/var-yRmifb4PJj7cLkMG\"\n    }\n  }\n}\n```\n\n## Delete Parameters\n\n`DELETE \/policy-sets\/:policy_set_id\/parameters\/:parameter_id`\n\n| Parameter        | Description                                       |\n| ---------------- | ------------------------------------------------- |\n| `:policy_set_id` | The ID of the policy set that owns the parameter. |\n| `:parameter_id`  | The ID of the parameter to be deleted.            
|\n\n### Sample Request\n\n```bash\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/policy-sets\/polset-u3S5p2Uwk21keu1s\/parameters\/var-yRmifb4PJj7cLkMG\n```","site":"terraform","answers_cleaned":"    page title  Policy Set Parameters   API Docs   HCP Terraform description       Use the   policy sets  endpoint to manage key value pairs used by Sentinel policy checks  List  create  update  and delete parameters using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Policy Set Parameters API   Sentinel parameters  https   docs hashicorp com sentinel language parameters  are a list of key value pairs that HCP Terraform sends to the Sentinel runtime when performing policy checks on workspaces  They can help you avoid hardcoding sensitive parameters into a policy         BEGIN  TFC 
only name pnp callout      include  tfc package callouts policies mdx       END  TFC only name pnp callout      Parameters are only available for Sentinel policies  This set of APIs provides endpoints to create  update  list and delete parameters      Create a Parameter   POST  policy sets  policy set id parameters     Parameter          Description                                                                                                                            policy set id    The ID of the policy set to create the parameter in         Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                      Type     Default   Description                                                                                                                                                                                                                                                             data type                    string             Must be   vars                                                                                              data attributes key          string             The name of the parameter                                                                                   data attributes value        string             The value of the parameter                                                                                  data attributes category     string             The category of the parameters  Must be   policy set                                                        data attributes sensitive    bool      false    Whether the value is sensitive  If true then the parameter is written once and not visible thereafter         Sample Payload     json      data          type   vars        attributes            key   some key          value   some value          category   policy set          sensitive  false         
             Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 policy sets polset u3S5p2Uwk21keu1s parameters          Sample Response     json      data          id   var EavQ1LztoRTQHSNT        type   vars        attributes            key   some key          value   some value          sensitive  false         category   policy set              relationships            configurable              data                id   pol u3S5p2Uwk21keu1s              type   policy sets                      links                related    api v2 policy sets polset u3S5p2Uwk21keu1s                                links            self    api v2 policy sets polset u3S5p2Uwk21keu1s parameters var EavQ1LztoRTQHSNT                      List Parameters   GET  policy sets  policy set id parameters     Parameter          Description                                                                                                                    policy set id    The ID of the policy set to list parameters for         Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs   If neither pagination query parameters are provided  the endpoint will not be paginated and will return all results     Parameter        Description                                                                                                                                                                     page number       Optional    If omitted  the endpoint will return the first page               page size         Optional    If omitted  the endpoint will return 20 parameters per page         Sample Request     shell   curl       header  Authorization  Bearer  TOKEN   
     --header  Content Type  application vnd api json     https   app terraform io api v2 policy sets polset u3S5p2Uwk21keu1s parameters           Sample Response     json      data                  id   var AD4pibb9nxo1468E          type   vars          attributes              key   name            value   hello            sensitive  false           category   policy set                   relationships              configurable                data                  id   pol u3S5p2Uwk21keu1s                type   policy sets                          links                  related    api v2 policy sets polset u3S5p2Uwk21keu1s                                        links              self    api v2 policy sets polset u3S5p2Uwk21keu1s parameters var AD4pibb9nxo1468E                              Update Parameters   PATCH  policy sets  policy set id parameters  parameter id     Parameter          Description                                                                                                                      policy set id    The ID of the policy set that owns the parameter        parameter id     The ID of the parameter to be updated                    Request Body  This PATCH endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path            Type     Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       data type          string   
          Must be   vars                                                                                                                                                                                                                                                                        data id            string             The ID of the parameter to update                                                                                                                                                                                                                                                     data attributes    object             New attributes for the parameter  This object can include  key    value    category  and  sensitive  properties  which are described above under  create a parameter   create a parameter   All of these properties are optional  if omitted  a property will be left unchanged         Sample Payload     json      data          id   var yRmifb4PJj7cLkMG        attributes            key   name          value   mars          category   policy set          sensitive   false             type   vars                 Sample Request     bash   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json     https   app terraform io api v2 policy sets polset u3S5p2Uwk21keu1s parameters var yRmifb4PJj7cLkMG          Sample Response     json      data          id   var yRmifb4PJj7cLkMG        type   vars        attributes            key   name          value   mars          sensitive  false         category   policy set               relationships            configurable              data                id   pol u3S5p2Uwk21keu1s              type   policy sets                      links                related    api v2 policy sets polset u3S5p2Uwk21keu1s                                links            self    api v2 policy sets polset u3S5p2Uwk21keu1s parameters var 
yRmifb4PJj7cLkMG                      Delete Parameters   DELETE  policy sets  policy set id parameters  parameter id     Parameter          Description                                                                                                                      policy set id    The ID of the policy set that owns the parameter        parameter id     The ID of the parameter to be deleted                    Sample Request     bash   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 policy sets polset u3S5p2Uwk21keu1s parameters var yRmifb4PJj7cLkMG    "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the organizations endpoint to interact with organizations List organizations entitlement sets and module producers and show create update and destroy organizations using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201 page title Organizations API Docs HCP Terraform","answers":"---\npage_title: Organizations - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/organizations` endpoint to interact with organizations. List organizations, entitlement sets, and module producers, and show, create, update, and destroy organizations using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Organizations API\n\nThe Organizations API is used to 
list, show, create, update, and destroy organizations.\n\n## List Organizations\n\n`GET \/organizations`\n\n| Status  | Response                                        | Reason                                                        |\n| ------- | ----------------------------------------------- | ------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"organizations\"`) | The request was successful                                    |\n| [404][] | [JSON API error object][]                       | Organization not found or user unauthorized to perform action |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\nCurrently, this endpoint returns a full, unpaginated list of organizations (without pagination metadata) if both of the pagination query parameters are omitted. To avoid inconsistent behavior, we recommend always supplying pagination parameters when building against this API.\n\n| Parameter      | Description                                                                   |\n| -------------- | ----------------------------------------------------------------------------- |\n| `q`            | **Optional.** A search query string. Organizations are searchable by name and notification email. This query takes precedence over the attribute specific searches `q[email]` or `q[name]`. |\n| `q[email]`     | **Optional.** A search query string. This query searches organizations by notification email. If used with `q[name]`, it returns organizations that match both queries. |\n| `q[name]`      | **Optional.** A search query string. This query searches organizations by name. If used with `q[email]`, it returns organizations that match both queries. 
|\n| `page[number]` | **Optional.** Defaults to the first page, if omitted when `page[size]` is provided. |\n| `page[size]`   | **Optional.** Defaults to 20 organizations per page, if omitted when `page[number]` is provided. |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\\?page\\[number\\]\\=1\\&page\\[size\\]\\=20\n```\n\n### Sample Response\n\n**Note:** Only HCP Terraform organizations return the `two-factor-conformant` and `assessments-enforced` properties.\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"hashicorp\",\n      \"type\": \"organizations\",\n      \"attributes\": {\n        \"external-id\": \"org-Hysjx5eUviuKVCJY\",\n        \"created-at\": \"2021-08-24T23:10:04.675Z\",\n        \"email\": \"hashicorp@example.com\",\n        \"session-timeout\": null,\n        \"session-remember\": null,\n        \"collaborator-auth-policy\": \"password\",\n        \"plan-expired\": false,\n        \"plan-expires-at\": null,\n        \"plan-is-trial\": false,\n        \"plan-is-enterprise\": false,\n        \"plan-identifier\": \"developer\",\n        \"cost-estimation-enabled\": true,\n        \"send-passing-statuses-for-untriggered-speculative-plans\": true,\n        \"aggregated-commit-status-enabled\": false,\n        \"speculative-plan-management-enabled\": true,\n        \"allow-force-delete-workspaces\": true,\n        \"name\": \"hashicorp\",\n        \"permissions\": {\n          \"can-update\": true,\n          \"can-destroy\": true,\n          \"can-access-via-teams\": true,\n          \"can-create-module\": true,\n          \"can-create-team\": true,\n          \"can-create-workspace\": true,\n          \"can-manage-users\": true,\n          \"can-manage-subscription\": true,\n          \"can-manage-sso\": true,\n          \"can-update-oauth\": true,\n          
\"can-update-sentinel\": true,\n          \"can-update-ssh-keys\": true,\n          \"can-update-api-token\": true,\n          \"can-traverse\": true,\n          \"can-start-trial\": true,\n          \"can-update-agent-pools\": true,\n          \"can-manage-tags\": true,\n          \"can-manage-varsets\": true,\n          \"can-read-varsets\": true,\n          \"can-manage-public-providers\": true,\n          \"can-create-provider\": true,\n          \"can-manage-public-modules\": true,\n          \"can-manage-custom-providers\": false,\n          \"can-manage-run-tasks\": false,\n          \"can-read-run-tasks\": false,\n          \"can-create-project\": true\n        },\n        \"fair-run-queuing-enabled\": true,\n        \"saml-enabled\": false,\n        \"owners-team-saml-role-id\": null,\n        \"two-factor-conformant\": false,\n        \"assessments-enforced\": false,\n        \"default-execution-mode\": \"remote\"\n      },\n      \"relationships\": {\n        \"default-agent-pool\": {\n            \"data\": null\n        },\n        \"oauth-tokens\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/hashicorp\/oauth-tokens\"\n          }\n        },\n        \"authentication-token\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/hashicorp\/authentication-token\"\n          }\n        },\n        \"entitlement-set\": {\n          \"data\": {\n            \"id\": \"org-Hysjx5eUviuKVCJY\",\n            \"type\": \"entitlement-sets\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/hashicorp\/entitlement-set\"\n          }\n        },\n        \"subscription\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/hashicorp\/subscription\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/organizations\/hashicorp\"\n      }\n    },\n    {\n      \"id\": \"hashicorp-two\",\n      \"type\": 
\"organizations\",\n      \"attributes\": {\n        \"external-id\": \"org-iJ5tr4WgB4WpA1hD\",\n        \"created-at\": \"2022-01-04T18:57:16.036Z\",\n        \"email\": \"hashicorp@example.com\",\n        \"session-timeout\": null,\n        \"session-remember\": null,\n        \"collaborator-auth-policy\": \"password\",\n        \"plan-expired\": false,\n        \"plan-expires-at\": null,\n        \"plan-is-trial\": false,\n        \"plan-is-enterprise\": false,\n        \"plan-identifier\": \"free\",\n        \"cost-estimation-enabled\": false,\n        \"send-passing-statuses-for-untriggered-speculative-plans\": false,\n        \"aggregated-commit-status-enabled\": true,\n        \"speculative-plan-management-enabled\": true,\n        \"allow-force-delete-workspaces\": false,\n        \"name\": \"hashicorp-two\",\n        \"permissions\": {\n          \"can-update\": true,\n          \"can-destroy\": true,\n          \"can-access-via-teams\": true,\n          \"can-create-module\": true,\n          \"can-create-team\": false,\n          \"can-create-workspace\": true,\n          \"can-manage-users\": true,\n          \"can-manage-subscription\": true,\n          \"can-manage-sso\": false,\n          \"can-update-oauth\": true,\n          \"can-update-sentinel\": false,\n          \"can-update-ssh-keys\": true,\n          \"can-update-api-token\": true,\n          \"can-traverse\": true,\n          \"can-start-trial\": true,\n          \"can-update-agent-pools\": false,\n          \"can-manage-tags\": true,\n          \"can-manage-varsets\": true,\n          \"can-read-varsets\": true,\n          \"can-manage-public-providers\": true,\n          \"can-create-provider\": true,\n          \"can-manage-public-modules\": true,\n          \"can-manage-custom-providers\": false,\n          \"can-manage-run-tasks\": false,\n          \"can-read-run-tasks\": false,\n          \"can-create-project\": false\n        },\n        \"fair-run-queuing-enabled\": true,\n        
\"saml-enabled\": false,\n        \"owners-team-saml-role-id\": null,\n        \"two-factor-conformant\": false,\n        \"assessments-enforced\": false,\n        \"default-execution-mode\": \"remote\"\n      },\n      \"relationships\": {\n        \"default-agent-pool\": {\n          \"data\": null\n        },\n        \"oauth-tokens\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/hashicorp-two\/oauth-tokens\"\n          }\n        },\n        \"authentication-token\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/hashicorp-two\/authentication-token\"\n          }\n        },\n        \"entitlement-set\": {\n          \"data\": {\n            \"id\": \"org-iJ5tr4WgB4WpA1hD\",\n            \"type\": \"entitlement-sets\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/hashicorp-two\/entitlement-set\"\n          }\n        },\n        \"subscription\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/hashicorp-two\/subscription\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/organizations\/hashicorp-two\"\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/tfe-zone-b0c8608c.ngrok.io\/api\/v2\/organizations?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/tfe-zone-b0c8608c.ngrok.io\/api\/v2\/organizations?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/tfe-zone-b0c8608c.ngrok.io\/api\/v2\/organizations?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 2\n    }\n  }\n}\n```\n\n## Show an Organization\n\n`GET \/organizations\/:organization_name`\n\n| Parameter            | Description                        
  |\n| -------------------- | ------------------------------------ |\n| `:organization_name` | The name of the organization to show |\n\n| Status  | Response                                        | Reason                                                        |\n| ------- | ----------------------------------------------- | ------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"organizations\"`) | The request was successful                                    |\n| [404][] | [JSON API error object][]                       | Organization not found or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\n```\n\n### Sample Response\n\n**Note:** Only HCP Terraform organizations return the `two-factor-conformant` and `assessments-enforced` properties.\n\n```json\n{\n  \"data\": {\n    \"id\": \"hashicorp\",\n    \"type\": \"organizations\",\n    \"attributes\": {\n      \"external-id\": \"org-WV6DfwfxxXvLfvfs\",\n      \"created-at\": \"2020-03-26T22:13:38.456Z\",\n      \"email\": \"user@example.com\",\n      \"session-timeout\": null,\n      \"session-remember\": null,\n      \"collaborator-auth-policy\": \"password\",\n      \"plan-expired\": false,\n      \"plan-expires-at\": null,\n      \"plan-is-trial\": false,\n      \"plan-is-enterprise\": false,\n      \"cost-estimation-enabled\": false,\n      \"send-passing-statuses-for-untriggered-speculative-plans\": false,\n      \"aggregated-commit-status-enabled\": true,\n      \"speculative-plan-management-enabled\": true,\n      \"allow-force-delete-workspaces\": false,\n      \"name\": \"hashicorp\",\n      \"permissions\": {\n        \"can-update\": true,\n        \"can-destroy\": true,\n        \"can-access-via-teams\": true,\n        
\"can-create-module\": true,\n        \"can-create-team\": false,\n        \"can-create-workspace\": true,\n        \"can-manage-users\": true,\n        \"can-manage-subscription\": true,\n        \"can-manage-sso\": false,\n        \"can-update-oauth\": true,\n        \"can-update-sentinel\": false,\n        \"can-update-ssh-keys\": true,\n        \"can-update-api-token\": true,\n        \"can-traverse\": true,\n        \"can-start-trial\": true,\n        \"can-update-agent-pools\": false,\n        \"can-manage-tags\": true,\n        \"can-manage-public-modules\": true,\n        \"can-manage-public-providers\": false,\n        \"can-manage-run-tasks\": false,\n        \"can-read-run-tasks\": false,\n        \"can-create-provider\": false,\n        \"can-create-project\": true\n      },\n      \"fair-run-queuing-enabled\": true,\n      \"saml-enabled\": false,\n      \"owners-team-saml-role-id\": null,\n      \"two-factor-conformant\": false,\n      \"assessments-enforced\": false,\n      \"default-execution-mode\": \"remote\"\n    },\n    \"relationships\": {\n      \"default-agent-pool\": {\n        \"data\": null\n      },\n      \"oauth-tokens\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/oauth-tokens\"\n        }\n      },\n      \"authentication-token\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/authentication-token\"\n        }\n      },\n      \"entitlement-set\": {\n        \"data\": {\n          \"id\": \"org-WV6DfwfxxXvLfvfs\",\n          \"type\": \"entitlement-sets\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/entitlement-set\"\n        }\n      },\n      \"subscription\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/subscription\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/hashicorp\"\n    }\n  }\n}\n```\n\n## Create an 
Organization\n\n`POST \/organizations`\n\n| Status  | Response                                        | Reason                                                         |\n| ------- | ----------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"organizations\"`) | The organization was successfully created                      |\n| [404][] | [JSON API error object][]                       | Organization not found or user unauthorized to perform action  |\n| [422][] | [JSON API error object][]                       | Malformed request body (missing attributes, wrong types, etc.) |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                                                  | Type    | Default   | Description                                                                                                                                                                                                                                                                                                                                                                        |\n| ------------------------------------------------------------------------- | ------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                                                               | string  |           | Must be `\"organizations\"`                                                                      
                                                                                                                                                                                                                                                                                    |\n| `data.attributes.name`                                                    | string  |           | Name of the organization                                                                                                                                                                                                                                                                                                                                                           |\n| `data.attributes.email`                                                   | string  |           | Admin email address                                                                                                                                                                                                                                                                                                                                                                |\n| `data.attributes.session-timeout`                                         | integer | 20160     | Session timeout after inactivity (minutes)                                                                                                                                                                                                                                                                                                                                         |\n| `data.attributes.session-remember`                                        | integer | 20160     | Session expiration (minutes)                                                                                                                                                                           
                                                                                                                                                                            |\n| `data.attributes.collaborator-auth-policy`                                | string  | password  | Authentication policy (`password` or `two_factor_mandatory`)                                                                                                                                                                                                                                                                                                                       |\n| `data.attributes.cost-estimation-enabled`                                 | boolean | false      | Whether or not the cost estimation feature is enabled for all workspaces in the organization. Defaults to false. In Terraform Enterprise, you must also enable cost estimation in [Site Administration](\/terraform\/enterprise\/admin\/application\/integration#cost-estimation-integration). |\n| `data.attributes.send-passing-statuses-for-untriggered-speculative-plans` | boolean | false     | Whether or not to send VCS status updates for untriggered speculative plans. This can be useful if large numbers of untriggered workspaces are exhausting request limits for connected version control service providers like GitHub. Defaults to false. In Terraform Enterprise, this setting is always false and cannot be changed but is also available in Site Administration. |\n| `data.attributes.aggregated-commit-status-enabled`                        | boolean | true      | Whether or not to aggregate VCS status updates for triggered workspaces. This is useful for monorepo projects with configuration spanning many workspaces. Defaults to `true`. You cannot use this option if `send-passing-statuses-for-untriggered-speculative-plans` is set to `true`.   
|\n| `data.attributes.speculative-plan-management-enabled`                     | boolean | true      | Whether or not to enable [Automatically cancel plan-only runs](\/terraform\/cloud-docs\/users-teams-organizations\/organizations\/vcs-speculative-plan-management). Defaults to `true`.                                                                                                                                                                                                 |\n| `data.attributes.owners-team-saml-role-id`                                | string  | (nothing) | **Optional.** **SAML only** The name of the [\"owners\" team](\/terraform\/enterprise\/saml\/team-membership#managing-membership-of-the-owners-team)                                                                                                                                                                                                                               |\n| `data.attributes.assessments-enforced`                                    | boolean | (false)   | Whether or not to compel health assessments for all eligible workspaces. When true, health assessments occur on all compatible workspaces, regardless of the value of the workspace setting `assessments-enabled`. When false, health assessments only occur for workspaces that opt in by setting `assessments-enabled: true`.         |\n| `data.attributes.allow-force-delete-workspaces`                           | boolean | (false)   | Whether workspace administrators can [delete workspaces with resources under management](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#general). If false, only organization owners may delete these workspaces.                                                                                                                                                                              
|\n| `data.attributes.default-execution-mode`                                  | string  | `remote`  | Which [execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) to use by default. Valid values are `remote`, `local`, and `agent`.                                                                                           |\n| `data.attributes.default-agent-pool-id`                                   | string  | (previous value) | Required when `default-execution-mode` is set to `agent`. The ID of the agent pool belonging to the organization. Do _not_ specify this value if you set `execution-mode` to `remote` or `local`.                                                                                           |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"organizations\",\n    \"attributes\": {\n      \"name\": \"hashicorp\",\n      \"email\": \"user@example.com\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\n```\n\n### Sample Response\n\n**Note:** Only HCP Terraform organizations return the `two-factor-conformant` and `assessments-enforced` properties.\n\n```json\n{\n  \"data\": {\n    \"id\": \"hashicorp\",\n    \"type\": \"organizations\",\n    \"attributes\": {\n      \"external-id\": \"org-Bzyc2JuegvVLAibn\",\n      \"created-at\": \"2021-08-30T18:09:57.561Z\",\n      \"email\": \"user@example.com\",\n      \"session-timeout\": null,\n      \"session-remember\": null,\n      \"collaborator-auth-policy\": \"password\",\n      \"plan-expired\": false,\n      \"plan-expires-at\": null,\n      \"plan-is-trial\": false,\n      \"plan-is-enterprise\": false,\n      \"cost-estimation-enabled\": 
false,\n      \"send-passing-statuses-for-untriggered-speculative-plans\": false,\n      \"aggregated-commit-status-enabled\": true,\n      \"speculative-plan-management-enabled\": true,\n      \"allow-force-delete-workspaces\": false,\n      \"name\": \"hashicorp\",\n      \"permissions\": {\n        \"can-update\": true,\n        \"can-destroy\": true,\n        \"can-access-via-teams\": true,\n        \"can-create-module\": true,\n        \"can-create-team\": false,\n        \"can-create-workspace\": true,\n        \"can-manage-users\": true,\n        \"can-manage-subscription\": true,\n        \"can-manage-sso\": false,\n        \"can-update-oauth\": true,\n        \"can-update-sentinel\": false,\n        \"can-update-ssh-keys\": true,\n        \"can-update-api-token\": true,\n        \"can-traverse\": true,\n        \"can-start-trial\": true,\n        \"can-update-agent-pools\": false,\n        \"can-manage-tags\": true,\n        \"can-manage-public-modules\": true,\n        \"can-manage-public-providers\": false,\n        \"can-manage-run-tasks\": false,\n        \"can-read-run-tasks\": false,\n        \"can-create-provider\": false,\n        \"can-create-project\": true\n      },\n      \"fair-run-queuing-enabled\": true,\n      \"saml-enabled\": false,\n      \"owners-team-saml-role-id\": null,\n      \"two-factor-conformant\": false,\n      \"assessments-enforced\": false,\n      \"default-execution-mode\": \"remote\"\n    },\n    \"relationships\": {\n      \"default-agent-pool\": {\n        \"data\": null\n      },\n      \"oauth-tokens\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/oauth-tokens\"\n        }\n      },\n      \"authentication-token\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/authentication-token\"\n        }\n      },\n      \"entitlement-set\": {\n        \"data\": {\n          \"id\": \"org-Bzyc2JuegvVLAibn\",\n          \"type\": 
\"entitlement-sets\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/entitlement-set\"\n        }\n      },\n      \"subscription\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/subscription\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/hashicorp\"\n    }\n  },\n  \"included\": [\n    {\n      \"id\": \"org-Bzyc2JuegvVLAibn\",\n      \"type\": \"entitlement-sets\",\n      \"attributes\": {\n        \"agents\": false,\n        \"audit-logging\": false,\n        \"configuration-designer\": true,\n        \"cost-estimation\": false,\n        \"global-run-tasks\": false,\n        \"module-tests-generation\": false,\n        \"operations\": true,\n        \"policy-enforcement\": false,\n        \"policy-limit\": null,\n        \"policy-mandatory-enforcement-limit\": null,\n        \"policy-set-limit\": null,\n        \"private-module-registry\": true,\n        \"run-task-limit\": null,\n        \"run-task-mandatory-enforcement-limit\": null,\n        \"run-task-workspace-limit\": null,\n        \"run-tasks\": false,\n        \"self-serve-billing\": true,\n        \"sentinel\": false,\n        \"sso\": false,\n        \"state-storage\": true,\n        \"teams\": false,\n        \"usage-reporting\": false,\n        \"user-limit\": 5,\n        \"vcs-integrations\": true,\n        \"versioned-policy-set-limit\": null\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/entitlement-sets\/org-Bzyc2JuegvVLAibn\"\n      }\n    }\n  ]\n}\n```\n\n## Update an Organization\n\n`PATCH \/organizations\/:organization_name`\n\n| Parameter            | Description                            |\n| -------------------- | -------------------------------------- |\n| `:organization_name` | The name of the organization to update |\n\n| Status  | Response                                        | Reason                                                  
       |\n| ------- | ----------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"organizations\"`) | The organization was successfully updated                      |\n| [404][] | [JSON API error object][]                       | Organization not found or user unauthorized to perform action  |\n| [422][] | [JSON API error object][]                       | Malformed request body (missing attributes, wrong types, etc.) |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\n| Key path                                                                  | Type    | Default   | Description                                                                                                                                                                                                                                                                                                                                                                        |\n| ------------------------------------------------------------------------- | ------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                                                               | string  |           | Must be `\"organizations\"`                                                                                                                                                                                                                                                                                         
                                                                 |\n| `data.attributes.name`                                                    | string  |           | Name of the organization                                                                                                                                                                                                                                                                                                                                                           |\n| `data.attributes.email`                                                   | string  |           | Admin email address                                                                                                                                                                                                                                                                                                                                                                |\n| `data.attributes.session-timeout`                                         | integer | 20160     | Session timeout after inactivity (minutes)                                                                                                                                                                                                                                                                                                                                         |\n| `data.attributes.session-remember`                                        | integer | 20160     | Session expiration (minutes)                                                                                                                                                                                                                                                                                                                                                       |\n| 
`data.attributes.collaborator-auth-policy`                                | string  | password  | Authentication policy (`password` or `two_factor_mandatory`)                                                                                           |\n| `data.attributes.cost-estimation-enabled`                                 | boolean | false     | Whether or not the cost estimation feature is enabled for all workspaces in the organization. Defaults to false. In Terraform Enterprise, you must also enable cost estimation in [Site Administration](\/terraform\/enterprise\/admin\/application\/integration#cost-estimation-integration). |\n| `data.attributes.send-passing-statuses-for-untriggered-speculative-plans` | boolean | false     | Whether or not to send VCS status updates for untriggered speculative plans. This can be useful if large numbers of untriggered workspaces are exhausting request limits for connected version control service providers like GitHub. Defaults to false. In Terraform Enterprise, this setting is always false; it appears in Site Administration but cannot be changed. |\n| `data.attributes.aggregated-commit-status-enabled`                        | boolean | true      | Whether or not to aggregate VCS status updates for triggered workspaces. This is useful for monorepo projects with configuration spanning many workspaces. Defaults to `true`. You cannot use this option if `send-passing-statuses-for-untriggered-speculative-plans` is set to `true`.   |\n| `data.attributes.speculative-plan-management-enabled`                     | boolean | true      | Whether or not to enable [Automatically cancel plan-only runs](\/terraform\/cloud-docs\/users-teams-organizations\/organizations\/vcs-speculative-plan-management). 
Defaults to `true`.                                                                                           |\n| `data.attributes.owners-team-saml-role-id`                                | string  | (nothing) | **Optional.** **SAML only.** The name of the [\"owners\" team](\/terraform\/enterprise\/saml\/team-membership#managing-membership-of-the-owners-team)                                                                                           |\n| `data.attributes.assessments-enforced`                                    | boolean | false     | Whether or not to compel health assessments for all eligible workspaces. When true, health assessments occur on all compatible workspaces, regardless of the value of the workspace setting `assessments-enabled`. When false, health assessments only occur for workspaces that opt in by setting `assessments-enabled: true`.             |\n| `data.attributes.allow-force-delete-workspaces`                           | boolean | false     | Whether workspace administrators can [delete workspaces with resources under management](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#general). If false, only organization owners may delete these workspaces.                                                                                           |\n| `data.attributes.default-execution-mode`                                  | string  | `remote`  | Which [execution mode](\/terraform\/cloud-docs\/workspaces\/settings#execution-mode) to use by default. Valid values are `remote`, `local`, and `agent`.                                           
                                                    |\n| `data.attributes.default-agent-pool-id`                                   | string  | (previous value) | Required when `default-execution-mode` is set to `agent`. The ID of the agent pool belonging to the organization. Do _not_ specify this value if you set `execution-mode` to `remote` or `local`.                                                                                                                      |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"organizations\",\n    \"attributes\": {\n      \"email\": \"admin@example.com\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\n```\n\n### Sample Response\n\n**Note:** The `two-factor-conformant` and `assessments-enforced` properties are only returned from HCP Terraform organizations.\n\n```json\n{\n  \"data\": {\n    \"id\": \"hashicorp\",\n    \"type\": \"organizations\",\n    \"attributes\": {\n      \"external-id\": \"org-Bzyc2JuegvVLAibn\",\n      \"created-at\": \"2021-08-30T18:09:57.561Z\",\n      \"email\": \"admin@example.com\",\n      \"session-timeout\": null,\n      \"session-remember\": null,\n      \"collaborator-auth-policy\": \"password\",\n      \"plan-expired\": false,\n      \"plan-expires-at\": null,\n      \"plan-is-trial\": false,\n      \"plan-is-enterprise\": false,\n      \"cost-estimation-enabled\": false,\n      \"send-passing-statuses-for-untriggered-speculative-plans\": false,\n      \"aggregated-commit-status-enabled\": true,\n      \"speculative-plan-management-enabled\": true,\n      \"name\": \"hashicorp\",\n      \"permissions\": {\n        \"can-update\": true,\n        \"can-destroy\": true,\n        \"can-access-via-teams\": true,\n        \"can-create-module\": 
true,\n        \"can-create-team\": false,\n        \"can-create-workspace\": true,\n        \"can-manage-users\": true,\n        \"can-manage-subscription\": true,\n        \"can-manage-sso\": false,\n        \"can-update-oauth\": true,\n        \"can-update-sentinel\": false,\n        \"can-update-ssh-keys\": true,\n        \"can-update-api-token\": true,\n        \"can-traverse\": true,\n        \"can-start-trial\": true,\n        \"can-update-agent-pools\": false,\n        \"can-manage-tags\": true,\n        \"can-manage-public-modules\": true,\n        \"can-manage-public-providers\": false,\n        \"can-manage-run-tasks\": false,\n        \"can-read-run-tasks\": false,\n        \"can-create-provider\": false,\n        \"can-create-project\": true\n      },\n      \"fair-run-queuing-enabled\": true,\n      \"saml-enabled\": false,\n      \"owners-team-saml-role-id\": null,\n      \"two-factor-conformant\": false,\n      \"assessments-enforced\": false,\n      \"default-execution-mode\": \"remote\"\n    },\n    \"relationships\": {\n      \"default-agent-pool\": {\n        \"data\": null\n      },\n      \"oauth-tokens\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/oauth-tokens\"\n        }\n      },\n      \"authentication-token\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/authentication-token\"\n        }\n      },\n      \"entitlement-set\": {\n        \"data\": {\n          \"id\": \"org-Bzyc2JuegvVLAibn\",\n          \"type\": \"entitlement-sets\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/entitlement-set\"\n        }\n      },\n      \"subscription\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/subscription\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/hashicorp\"\n    }\n  }\n}\n```\n\n## Destroy an Organization\n\n`DELETE 
\/organizations\/:organization_name`\n\n| Parameter            | Description                             |\n| -------------------- | --------------------------------------- |\n| `:organization_name` | The name of the organization to destroy |\n\n| Status  | Response                  | Reason                                                        |\n| ------- | ------------------------- | ------------------------------------------------------------- |\n| [204][] |                           | The organization was successfully destroyed                   |\n| [404][] | [JSON API error object][] | Organization not found or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\n```\n\n### Sample Response\n\nThe response body will be empty if successful.\n\n## Show the Entitlement Set\n\nThis endpoint shows the [entitlements](\/terraform\/cloud-docs\/api-docs#feature-entitlements) for an organization.\n\n`GET \/organizations\/:organization_name\/entitlement-set`\n\n| Parameter            | Description                                            |\n| -------------------- | ------------------------------------------------------ |\n| `:organization_name` | The name of the organization's entitlement set to view |\n\n| Status  | Response                                           | Reason                                                        |\n| ------- | -------------------------------------------------- | ------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"entitlement-sets\"`) | The request was successful                                    |\n| [404][] | [JSON API error object][]                          | Organization not found or user unauthorized to perform action |\n\n### Sample 
Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/entitlement-set\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"org-Bzyc2JuegvVLAibn\",\n    \"type\": \"entitlement-sets\",\n    \"attributes\": {\n      \"agents\": false,\n      \"audit-logging\": false,\n      \"configuration-designer\": true,\n      \"cost-estimation\": false,\n      \"global-run-tasks\": false,\n      \"module-tests-generation\": false,\n      \"operations\": true,\n      \"policy-enforcement\": false,\n      \"policy-limit\": 5,\n      \"policy-mandatory-enforcement-limit\": null,\n      \"policy-set-limit\": 1,\n      \"private-module-registry\": true,\n      \"private-policy-agents\": false,\n      \"private-vcs\": false,\n      \"run-task-limit\": 1,\n      \"run-task-mandatory-enforcement-limit\": 1,\n      \"run-task-workspace-limit\": 10,\n      \"run-tasks\": false,\n      \"self-serve-billing\": true,\n      \"sentinel\": false,\n      \"sso\": false,\n      \"state-storage\": true,\n      \"teams\": false,\n      \"usage-reporting\": false,\n      \"user-limit\": 5,\n      \"vcs-integrations\": true,\n      \"versioned-policy-set-limit\": null\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/entitlement-sets\/org-Bzyc2JuegvVLAibn\"\n    }\n  }\n}\n```\n\n## Show Module Producers\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise, and not available in HCP Terraform.\n<\/EnterpriseAlert>\n\nThis endpoint shows organizations that are configured to share modules with an organization through [Module Sharing](\/terraform\/enterprise\/admin\/application\/module-sharing).\n\n`GET \/organizations\/:organization_name\/relationships\/module-producers`\n\n| Parameter            | Description                                             |\n| -------------------- | 
------------------------------------------------------- |\n| `:organization_name` | The name of the organization's module producers to view |\n\n| Status  | Response                                        | Reason                                                        |\n| ------- | ----------------------------------------------- | ------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"organizations\"`) | The request was successful                                    |\n| [404][] | [JSON API error object][]                       | Organization not found or user unauthorized to perform action |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                      |\n| -------------- | -------------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.               |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 module producers per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/tfe.example.com\/api\/v2\/organizations\/hashicorp\/relationships\/module-producers\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"hc-nomad\",\n      \"type\": \"organizations\",\n      \"attributes\": {\n        \"name\": \"hc-nomad\",\n        \"external-id\": \"org-ArQSQMAkFQsSUZjB\"\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/organizations\/hc-nomad\"\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/tfe.example.com\/api\/v2\/organizations\/hashicorp\/relationships\/module-producers?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/tfe.example.com\/api\/v2\/organizations\/hashicorp\/relationships\/module-producers?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/tfe.example.com\/api\/v2\/organizations\/hashicorp\/relationships\/module-producers?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 1\n    }\n  }\n}\n```\n\n## Show data retention policy\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform.\n<\/EnterpriseAlert>\n\n`GET \/organizations\/:organization_name\/relationships\/data-retention-policy`\n\n| Parameter            | Description                                                         |\n| ---------------------| --------------------------------------------------------------------|\n| `:organization_name` | The name of the organization to show the data retention policy for. 
|\n\nThis endpoint shows the data retention policy set explicitly on the organization.\n\nWhen no data retention policy is set for the organization, the endpoint returns the default policy configured for the Terraform Enterprise installation. Read more about [organization data retention policies](\/terraform\/enterprise\/users-teams-organizations\/organizations#data-retention-policies).\n\nFor additional information, refer to [Data Retention Policy Types](\/terraform\/enterprise\/api-docs\/data-retention-policies#data-retention-policy-types) in the Terraform Enterprise documentation.\n\n## Create or update data retention policy\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform.\n<\/EnterpriseAlert>\n\n`POST \/organizations\/:organization_name\/relationships\/data-retention-policy`\n\n| Parameter            | Description                                                                |\n| -------------------- | -------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to update the data retention policy for.      
|\n\nThis endpoint creates a data retention policy for an organization or updates the existing policy.\n\nRead more about [organization data retention policies](\/terraform\/enterprise\/users-teams-organizations\/organizations#data-retention-policies).\n\nRefer to [Data Retention Policy API](\/terraform\/enterprise\/api-docs\/data-retention-policies#create-or-update-data-retention-policy) in the Terraform Enterprise documentation for details.\n\n## Remove data retention policy\n\n<EnterpriseAlert>\nThis endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform.\n<\/EnterpriseAlert>\n\n`DELETE \/organizations\/:organization_name\/relationships\/data-retention-policy`\n\n| Parameter            | Description                                                                                                                                                                          |\n| ---------------------| -------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to remove the data retention policy for. |\n\nThis endpoint removes the data retention policy explicitly set on an organization.\nWhen the data retention policy is deleted, the organization inherits the default policy configured for the Terraform Enterprise installation. 
Refer to [Data Retention Policies](\/terraform\/enterprise\/application-administration\/general#data-retention-policies) for additional information.\n\nRefer to [Data Retention Policies](\/terraform\/enterprise\/users-teams-organizations\/organizations#data-retention-policies) for information about configuring data retention policies for an organization.\n\nRefer to [Data Retention Policy API](\/terraform\/enterprise\/api-docs\/data-retention-policies#remove-data-retention-policy) in the Terraform Enterprise documentation for details.\n\n## Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\n| Resource Name         | Description                                                                                  |\n| --------------------- | -------------------------------------------------------------------------------------------- |\n| `entitlement_set`     | The entitlement set that determines which HCP Terraform features the organization can use. |\n\n## Relationships\n\nThe following relationships may be present in various responses.\n\n| Resource Name           | Description                                                                                                                   |\n| ---------------------   | ----------------------------------------------------------------------------------------------------------------------------- |\n| `module-producers`      | Other organizations configured to share modules with the organization.                                                        |\n| `oauth-tokens`          | OAuth tokens associated with VCS configurations for the organization.                                                         |\n| `authentication-token`  | The API token for an organization.                                                 
                                           |\n| `entitlement-set`       | The entitlement set that determines which HCP Terraform features the organization can use.                                  |\n| `subscription`          | The current subscription for an organization.                                                                                 |\n| `default-agent-pool`    | An organization's default agent pool. Set this value if your `default-execution-mode` is `agent`.                             |\n| `data-retention-policy` | <EnterpriseAlert inline\/> Specifies an organization's data retention policy. Refer to [Data Retention Policy APIs](\/terraform\/enterprise\/api-docs\/data-retention-policies) in the Terraform Enterprise documentation for more details. |
update sentinel   true             can update ssh keys   true             can update api token   true             can traverse   true             can start trial   true             can update agent pools   true             can manage tags   true             can manage varsets   true             can read varsets   true             can manage public providers   true             can create provider   true             can manage public modules   true             can manage custom providers   false             can manage run tasks   false             can read run tasks   false             can create project   true                     fair run queuing enabled   true           saml enabled   false           owners team saml role id   null           two factor conformant   false           assessments enforced   false           default execution mode    remote                  relationships              default agent pool                  data   null                     oauth tokens                links                  related     api v2 organizations hashicorp oauth tokens                                  authentication token                links                  related     api v2 organizations hashicorp authentication token                                  entitlement set                data                  id    org Hysjx5eUviuKVCJY                type    entitlement sets                          links                  related     api v2 organizations hashicorp entitlement set                                  subscription                links                  related     api v2 organizations hashicorp subscription                                        links              self     api v2 organizations hashicorp                              id    hashicorp two          type    organizations          attributes              external id    org iJ5tr4WgB4WpA1hD            created at    2022 01 04T18 57 16 036Z            email    hashicorp example com            session 
timeout   null           session remember   null           collaborator auth policy    password            plan expired   false           plan expires at   null           plan is trial   false           plan is enterprise   false           plan identifier    free            cost estimation enabled   false           send passing statuses for untriggered speculative plans   false           aggregated commit status enabled   true           speculative plan management enabled   true           allow force delete workspaces   false           name    hashicorp two            permissions                can update   true             can destroy   true             can access via teams   true             can create module   true             can create team   false             can create workspace   true             can manage users   true             can manage subscription   true             can manage sso   false             can update oauth   true             can update sentinel   false             can update ssh keys   true             can update api token   true             can traverse   true             can start trial   true             can update agent pools   false             can manage tags   true             can manage varsets   true             can read varsets   true             can manage public providers   true             can create provider   true             can manage public modules   true             can manage custom providers   false             can manage run tasks   false             can read run tasks   false             can create project   false                     fair run queuing enabled   true           saml enabled   false           owners team saml role id   null           two factor conformant   false           assessments enforced   false           default execution mode    remote                  relationships              default agent pool                data   null                     oauth tokens                links                  
## Show an Organization

`GET /organizations/:organization_name`

| Parameter | Description |
|-----------|-------------|
| `:organization_name` | The name of the organization to show |

| Status | Response | Reason |
|--------|----------|--------|
| `200` | [JSON API document](/terraform/cloud-docs/api-docs#json-api-documents) (`type: "organizations"`) | The request was successful |
| `404` | [JSON API error object](https://jsonapi.org/format/#error-objects) | Organization not found or user unauthorized to perform action |
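When scripting against this endpoint, the two documented status codes can be mapped to outcomes explicitly; a small sketch (the helper name is illustrative, not part of the API). Note that `404` covers both a missing organization and an unauthorized token, so the code alone cannot distinguish the two cases:

```shell
# describe_status: translate the response codes documented above.
describe_status() {
  case "$1" in
    200) echo "request successful" ;;
    404) echo "organization not found or unauthorized" ;;
    *)   echo "unexpected status: $1" ;;
  esac
}

describe_status 404   # -> organization not found or unauthorized
```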
### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request GET \
  https://app.terraform.io/api/v2/organizations/hashicorp
```

### Sample Response

**Note:** Only HCP Terraform organizations return the `two-factor-conformant` and `assessments-enforced` properties.

```json
{
  "data": {
    "id": "hashicorp",
    "type": "organizations",
    "attributes": {
      "external-id": "org-WV6DfwfxxXvLfvfs",
      "created-at": "2020-03-26T22:13:38.456Z",
      "email": "user@example.com",
      "session-timeout": null,
      "session-remember": null,
      "collaborator-auth-policy": "password",
      "plan-expired": false,
      "plan-expires-at": null,
      "plan-is-trial": false,
      "plan-is-enterprise": false,
      "cost-estimation-enabled": false,
      "send-passing-statuses-for-untriggered-speculative-plans": false,
      "aggregated-commit-status-enabled": true,
      "speculative-plan-management-enabled": true,
      "allow-force-delete-workspaces": false,
      "name": "hashicorp",
      "permissions": {
        "can-update": true,
        "can-destroy": true,
        "can-access-via-teams": true,
        "can-create-module": true,
        "can-create-team": false,
        "can-create-workspace": true,
        "can-manage-users": true,
        "can-manage-subscription": true,
        "can-manage-sso": false,
        "can-update-oauth": true,
        "can-update-sentinel": false,
        "can-update-ssh-keys": true,
        "can-update-api-token": true,
        "can-traverse": true,
        "can-start-trial": true,
        "can-update-agent-pools": false,
        "can-manage-tags": true,
        "can-manage-public-modules": true,
        "can-manage-public-providers": false,
        "can-manage-run-tasks": false,
        "can-read-run-tasks": false,
        "can-create-provider": false,
        "can-create-project": true
      },
      "fair-run-queuing-enabled": true,
      "saml-enabled": false,
      "owners-team-saml-role-id": null,
      "two-factor-conformant": false,
      "assessments-enforced": false,
      "default-execution-mode": "remote"
    },
    "relationships": {
      "default-agent-pool": {
        "data": null
      },
      "oauth-tokens": {
        "links": {
          "related": "/api/v2/organizations/hashicorp/oauth-tokens"
        }
      },
      "authentication-token": {
        "links": {
          "related": "/api/v2/organizations/hashicorp/authentication-token"
        }
      },
      "entitlement-set": {
        "data": {
          "id": "org-WV6DfwfxxXvLfvfs",
          "type": "entitlement-sets"
        },
        "links": {
          "related": "/api/v2/organizations/hashicorp/entitlement-set"
        }
      },
      "subscription": {
        "links": {
          "related": "/api/v2/organizations/hashicorp/subscription"
        }
      }
    },
    "links": {
      "self": "/api/v2/organizations/hashicorp"
    }
  }
}
```

## Create an Organization

`POST /organizations`

| Status | Response | Reason |
|--------|----------|--------|
| `201` | [JSON API document](/terraform/cloud-docs/api-docs#json-api-documents) (`type: "organizations"`) | The organization was successfully created |
| `404` | [JSON API error object](https://jsonapi.org/format/#error-objects) | Organization not found or user unauthorized to perform action |
| `422` | [JSON API error object](https://jsonapi.org/format/#error-objects) | Malformed request body (missing attributes, wrong types, etc.) |

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.
| Key path | Type | Default | Description |
|----------|------|---------|-------------|
| `data.type` | string | | Must be `"organizations"`. |
| `data.attributes.name` | string | | Name of the organization. |
| `data.attributes.email` | string | | Admin email address. |
| `data.attributes.session-timeout` | integer | 20160 | Session timeout after inactivity (minutes). |
| `data.attributes.session-remember` | integer | 20160 | Session expiration (minutes). |
| `data.attributes.collaborator-auth-policy` | string | `password` | Authentication policy (`password` or `two_factor_mandatory`). |
| `data.attributes.cost-estimation-enabled` | boolean | false | Whether or not the cost estimation feature is enabled for all workspaces in the organization. Defaults to false. In Terraform Enterprise, you must also enable cost estimation in [Site Administration](/terraform/enterprise/admin/application/integration#cost-estimation-integration). |
| `data.attributes.send-passing-statuses-for-untriggered-speculative-plans` | boolean | false | Whether or not to send VCS status updates for untriggered speculative plans. This can be useful if large numbers of untriggered workspaces are exhausting request limits for connected version control service providers like GitHub. Defaults to false. In Terraform Enterprise, this setting is always false and cannot be changed but is also available in Site Administration. |
| `data.attributes.aggregated-commit-status-enabled` | boolean | true | Whether or not to aggregate VCS status updates for triggered workspaces. This is useful for monorepo projects with configuration spanning many workspaces. Defaults to `true`. You cannot use this option if `send-passing-statuses-for-untriggered-speculative-plans` is set to `true`. |
| `data.attributes.speculative-plan-management-enabled` | boolean | true | Whether or not to enable [Automatically cancel plan-only runs](/terraform/cloud-docs/users-teams-organizations/organizations#vcs-speculative-plan-management). Defaults to `true`. |
| `data.attributes.owners-team-saml-role-id` | string | (nothing) | **Optional.** **SAML only.** The name of the ["owners" team](/terraform/enterprise/saml/team-membership#managing-membership-of-the-owners-team). |
| `data.attributes.assessments-enforced` | boolean | false | Whether or not to compel health assessments for all eligible workspaces. When true, health assessments occur on all compatible workspaces, regardless of the value of the workspace setting `assessments-enabled`. When false, health assessments only occur for workspaces that opt in by setting `assessments-enabled: true`. |
| `data.attributes.allow-force-delete-workspaces` | boolean | false | Whether workspace administrators can [delete workspaces with resources under management](/terraform/cloud-docs/users-teams-organizations/organizations#general). If false, only organization owners may delete these workspaces. |
| `data.attributes.default-execution-mode` | string | `remote` | Which [execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) to use by default. Valid values are `remote`, `local`, and `agent`. |
| `data.attributes.default-agent-pool-id` | string | (previous value) | Required when `default-execution-mode` is set to `agent`. The ID of the agent pool belonging to the organization. Do **not** specify this value if you set `execution-mode` to `remote` or `local`. |

### Sample Payload

```json
{
  "data": {
    "type": "organizations",
    "attributes": {
      "name": "hashicorp",
      "email": "user@example.com"
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations
```
### Sample Response

**Note:** Only HCP Terraform organizations return the `two-factor-conformant` and `assessments-enforced` properties.

```json
{
  "data": {
    "id": "hashicorp",
    "type": "organizations",
    "attributes": {
      "external-id": "org-Bzyc2JuegvVLAibn",
      "created-at": "2021-08-30T18:09:57.561Z",
      "email": "user@example.com",
      "session-timeout": null,
      "session-remember": null,
      "collaborator-auth-policy": "password",
      "plan-expired": false,
      "plan-expires-at": null,
      "plan-is-trial": false,
      "plan-is-enterprise": false,
      "cost-estimation-enabled": false,
      "send-passing-statuses-for-untriggered-speculative-plans": false,
      "aggregated-commit-status-enabled": true,
      "speculative-plan-management-enabled": true,
      "allow-force-delete-workspaces": false,
      "name": "hashicorp",
      "permissions": {
        "can-update": true,
        "can-destroy": true,
        "can-access-via-teams": true,
        "can-create-module": true,
        "can-create-team": false,
        "can-create-workspace": true,
        "can-manage-users": true,
        "can-manage-subscription": true,
        "can-manage-sso": false,
        "can-update-oauth": true,
        "can-update-sentinel": false,
        "can-update-ssh-keys": true,
        "can-update-api-token": true,
        "can-traverse": true,
        "can-start-trial": true,
        "can-update-agent-pools": false,
        "can-manage-tags": true,
        "can-manage-public-modules": true,
        "can-manage-public-providers": false,
        "can-manage-run-tasks": false,
        "can-read-run-tasks": false,
        "can-create-provider": false,
        "can-create-project": true
      },
      "fair-run-queuing-enabled": true,
      "saml-enabled": false,
      "owners-team-saml-role-id": null,
      "two-factor-conformant": false,
      "assessments-enforced": false,
      "default-execution-mode": "remote"
    },
    "relationships": {
      "default-agent-pool": {
        "data": null
      },
      "oauth-tokens": {
        "links": {
          "related": "/api/v2/organizations/hashicorp/oauth-tokens"
        }
      },
      "authentication-token": {
        "links": {
          "related": "/api/v2/organizations/hashicorp/authentication-token"
        }
      },
      "entitlement-set": {
        "data": {
          "id": "org-Bzyc2JuegvVLAibn",
          "type": "entitlement-sets"
        },
        "links": {
          "related": "/api/v2/organizations/hashicorp/entitlement-set"
        }
      },
      "subscription": {
        "links": {
          "related": "/api/v2/organizations/hashicorp/subscription"
        }
      }
    },
    "links": {
      "self": "/api/v2/organizations/hashicorp"
    }
  },
  "included": [
    {
      "id": "org-Bzyc2JuegvVLAibn",
      "type": "entitlement-sets",
      "attributes": {
        "agents": false,
        "audit-logging": false,
        "configuration-designer": true,
        "cost-estimation": false,
        "global-run-tasks": false,
        "module-tests-generation": false,
        "operations": true,
        "policy-enforcement": false,
        "policy-limit": null,
        "policy-mandatory-enforcement-limit": null,
        "policy-set-limit": null,
        "private-module-registry": true,
        "run-task-limit": null,
        "run-task-mandatory-enforcement-limit": null,
        "run-task-workspace-limit": null,
        "run-tasks": false,
        "self-serve-billing": true,
        "sentinel": false,
        "sso": false,
        "state-storage": true,
        "teams": false,
        "usage-reporting": false,
        "user-limit": 5,
        "vcs-integrations": true,
        "versioned-policy-set-limit": null
      },
      "links": {
        "self": "/api/v2/entitlement-sets/org-Bzyc2JuegvVLAibn"
      }
    }
  ]
}
```
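The `payload.json` file referenced in the sample request above can be produced with a here-doc; this sketch includes only the two required attributes, with illustrative `name` and `email` values:

```shell
# Write a minimal create-organization payload containing only the
# required attributes from the request-body table.
cat > payload.json <<'EOF'
{
  "data": {
    "type": "organizations",
    "attributes": {
      "name": "hashicorp",
      "email": "user@example.com"
    }
  }
}
EOF
```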
## Update an Organization

`PATCH /organizations/:organization_name`

| Parameter | Description |
|-----------|-------------|
| `:organization_name` | The name of the organization to update |

| Status | Response | Reason |
|--------|----------|--------|
| `200` | [JSON API document](/terraform/cloud-docs/api-docs#json-api-documents) (`type: "organizations"`) | The organization was successfully updated |
| `404` | [JSON API error object](https://jsonapi.org/format/#error-objects) | Organization not found or user unauthorized to perform action |
| `422` | [JSON API error object](https://jsonapi.org/format/#error-objects) | Malformed request body (missing attributes, wrong types, etc.) |

### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload.

| Key path | Type | Default | Description |
|----------|------|---------|-------------|
| `data.type` | string | | Must be `"organizations"`. |
| `data.attributes.name` | string | | Name of the organization. |
| `data.attributes.email` | string | | Admin email address. |
| `data.attributes.session-timeout` | integer | 20160 | Session timeout after inactivity (minutes). |
| `data.attributes.session-remember` | integer | 20160 | Session expiration (minutes). |
| `data.attributes.collaborator-auth-policy` | string | `password` | Authentication policy (`password` or `two_factor_mandatory`). |
| `data.attributes.cost-estimation-enabled` | boolean | false | Whether or not the cost estimation feature is enabled for all workspaces in the organization. Defaults to false. In Terraform Enterprise, you must also enable cost estimation in [Site Administration](/terraform/enterprise/admin/application/integration#cost-estimation-integration). |
| `data.attributes.send-passing-statuses-for-untriggered-speculative-plans` | boolean | false | Whether or not to send VCS status updates for untriggered speculative plans. This can be useful if large numbers of untriggered workspaces are exhausting request limits for connected version control service providers like GitHub. Defaults to false. In Terraform Enterprise, this setting is always false and cannot be changed but is also available in Site Administration. |
| `data.attributes.aggregated-commit-status-enabled` | boolean | true | Whether or not to aggregate VCS status updates for triggered workspaces. This is useful for monorepo projects with configuration spanning many workspaces. Defaults to `true`. You cannot use this option if `send-passing-statuses-for-untriggered-speculative-plans` is set to `true`. |
| `data.attributes.speculative-plan-management-enabled` | boolean | true | Whether or not to enable [Automatically cancel plan-only runs](/terraform/cloud-docs/users-teams-organizations/organizations#vcs-speculative-plan-management). Defaults to `true`. |
| `data.attributes.owners-team-saml-role-id` | string | (nothing) | **Optional.** **SAML only.** The name of the ["owners" team](/terraform/enterprise/saml/team-membership#managing-membership-of-the-owners-team). |
| `data.attributes.assessments-enforced` | boolean | false | Whether or not to compel health assessments for all eligible workspaces. When true, health assessments occur on all compatible workspaces, regardless of the value of the workspace setting `assessments-enabled`. When false, health assessments only occur for workspaces that opt in by setting `assessments-enabled: true`. |
| `data.attributes.allow-force-delete-workspaces` | boolean | false | Whether workspace administrators can [delete workspaces with resources under management](/terraform/cloud-docs/users-teams-organizations/organizations#general). If false, only organization owners may delete these workspaces. |
| `data.attributes.default-execution-mode` | string | `remote` | Which [execution mode](/terraform/cloud-docs/workspaces/settings#execution-mode) to use by default. Valid values are `remote`, `local`, and `agent`. |
| `data.attributes.default-agent-pool-id` | string | (previous value) | Required when `default-execution-mode` is set to `agent`. The ID of the agent pool belonging to the organization. Do **not** specify this value if you set `execution-mode` to `remote` or `local`. |

### Sample Payload

```json
{
  "data": {
    "type": "organizations",
    "attributes": {
      "email": "admin@example.com"
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/hashicorp
```

### Sample Response

**Note:** The `two-factor-conformant` and `assessments-enforced` properties are only returned from HCP Terraform organizations.

```json
{
  "data": {
    "id": "hashicorp",
    "type": "organizations",
    "attributes": {
      "external-id": "org-Bzyc2JuegvVLAibn",
      "created-at": "2021-08-30T18:09:57.561Z",
      "email": "admin@example.com",
      "session-timeout": null,
      "session-remember": null,
      "collaborator-auth-policy": "password",
      "plan-expired": false,
      "plan-expires-at": null,
      "plan-is-trial": false,
      "plan-is-enterprise": false,
      "cost-estimation-enabled": false,
      "send-passing-statuses-for-untriggered-speculative-plans": false,
      "aggregated-commit-status-enabled": true,
      "speculative-plan-management-enabled": true,
      "name": "hashicorp",
      "permissions": {
        "can-update": true,
        "can-destroy": true,
        "can-access-via-teams": true,
        "can-create-module": true,
        "can-create-team": false,
        "can-create-workspace": true,
        "can-manage-users": true,
        "can-manage-subscription": true,
        "can-manage-sso": false,
        "can-update-oauth": true,
        "can-update-sentinel": false,
        "can-update-ssh-keys": true,
        "can-update-api-token": true,
        "can-traverse": true,
        "can-start-trial": true,
        "can-update-agent-pools": false,
        "can-manage-tags": true,
        "can-manage-public-modules": true,
        "can-manage-public-providers": false,
        "can-manage-run-tasks": false,
        "can-read-run-tasks": false,
        "can-create-provider": false,
        "can-create-project": true
      },
      "fair-run-queuing-enabled": true,
      "saml-enabled": false,
      "owners-team-saml-role-id": null,
      "two-factor-conformant": false,
      "assessments-enforced": false,
      "default-execution-mode": "remote"
    },
    "relationships": {
      "default-agent-pool": {
        "data": null
      },
      "oauth-tokens": {
        "links": {
          "related": "/api/v2/organizations/hashicorp/oauth-tokens"
        }
      },
      "authentication-token": {
        "links": {
          "related": "/api/v2/organizations/hashicorp/authentication-token"
        }
      },
      "entitlement-set": {
        "data": {
          "id": "org-Bzyc2JuegvVLAibn",
          "type": "entitlement-sets"
        },
        "links": {
          "related": "/api/v2/organizations/hashicorp/entitlement-set"
        }
      },
      "subscription": {
        "links": {
          "related": "/api/v2/organizations/hashicorp/subscription"
        }
      }
    },
    "links": {
      "self": "/api/v2/organizations/hashicorp"
    }
  }
}
```

## Destroy an Organization

`DELETE /organizations/:organization_name`

| Parameter | Description |
|-----------|-------------|
    Description                                                                                                      organization name    The name of the organization to destroy      Status    Response                    Reason                                                                                                                                                                     204                                  The organization was successfully destroyed                        404       JSON API error object      Organization not found or user unauthorized to perform action        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 organizations hashicorp          Sample Response  The response body will be empty if successful      Show the Entitlement Set  This endpoint shows the  entitlements   terraform cloud docs api docs feature entitlements  for an organization    GET  organizations  organization name entitlement set     Parameter              Description                                                                                                                                    organization name    The name of the organization s entitlement set to view      Status    Response                                             Reason                                                                                                                                                                                              200       JSON API document      type   entitlement sets      The request was successful                                         404       JSON API error object                               Organization not found or user unauthorized to perform action        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app 
terraform io api v2 organizations hashicorp entitlement set          Sample Response     json      data          id    org Bzyc2JuegvVLAibn        type    entitlement sets        attributes            agents   false         audit logging   false         configuration designer   true         cost estimation   false         global run tasks   false         module tests generation   false         operations   true         policy enforcement   false         policy limit   5         policy mandatory enforcement limit   null         policy set limit   1         private module registry   true         private policy agents   false         private vcs   false         run task limit   1         run task mandatory enforcement limit   1         run task workspace limit   10         run tasks   false         self serve billing   true         sentinel   false         sso   false         state storage   true         teams   false         usage reporting   false         user limit   5         vcs integrations   true         versioned policy set limit   null             links            self     api v2 entitlement sets org Bzyc2JuegvVLAibn                      Show Module Producers   EnterpriseAlert  This endpoint is exclusive to Terraform Enterprise  and not available in HCP Terraform    EnterpriseAlert   This endpoint shows organizations that are configured to share modules with an organization through  Module Sharing   terraform enterprise admin application module sharing     GET  organizations  organization name relationships module producers     Parameter              Description                                                                                                                                      organization name    The name of the organization s module producers to view      Status    Response                                          Reason                                                                                                                            
                                                               200       JSON API document      type   organizations      The request was successful                                         404       JSON API error object                            Organization not found or user unauthorized to perform action        Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter        Description                                                                                                                                                                                 page number       Optional    If omitted  the endpoint will return the first page                     page size         Optional    If omitted  the endpoint will return 20 module producers per page         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   tfe example com api v2 organizations hashicorp relationships module producers          Sample Response     json      data                  id    hc nomad          type    organizations          attributes              name    hc nomad            external id    org ArQSQMAkFQsSUZjB                  links              self     api v2 organizations hc nomad                        links          self    https   tfe example com api v2 organizations hashicorp relationships module producers page 5Bnumber 5D 1 page 5Bsize 5D 20        first    https   tfe example com api v2 organizations hashicorp relationships module producers page 5Bnumber 5D 1 page 5Bsize 5D 20        prev   null       next   null       last    https   tfe example com api v2 organizations hashicorp relationships module producers page 5Bnumber 5D 1 page 5Bsize 5D 20          meta          pagination          
  current page   1         prev page   null         next page   null         total pages   1         total count   1                     Show data retention policy   EnterpriseAlert  This endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform    EnterpriseAlert    GET  organizations  organization name relationships data retention policy     Parameter              Description                                                                                                                                                              organization name    The name of the organization to show the data retention policy for     This endpoint shows the data retention policy set explicitly on the organization   When no data retention policy is set for the organization  the endpoint returns the default policy configured for the Terraform Enterprise installation  Read more about  organization data retention policies   terraform enterprise users teams organizations organizations data retention policies    For additional information  refer to  Data Retention Policy Types   terraform enterprise api docs data retention policies data retention policy types  in the Terraform Enterprise documentation      Create or update data retention policy   EnterpriseAlert  This endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform    EnterpriseAlert    POST  organizations  organization name relationships data retention policy     Parameter              Description                                                                                                                                                                            organization name    The name of the organization to update the data retention policy for          This endpoint creates a data retention policy for an organization or updates the existing policy   Read more about  organization data retention policies   terraform enterprise users teams organizations 
organizations data retention policies    Refer to  Data Retention Policy API   terraform enterprise api docs data retention policies create or update data retention policy  in the Terraform Enterprise documentation for details      Remove data retention policy   EnterpriseAlert  This endpoint is exclusive to Terraform Enterprise and is not available in HCP Terraform    EnterpriseAlert    DELETE  organizations  organization name relationships data retention policy     Parameter              Description                                                                                                                                                                                                                                                                                      organization name    The name of the organization to remove the data retention policy for     This endpoint removes the data retention policy explicitly set on an organization  When the data retention policy is deleted  the organization inherits the default policy configured for the Terraform Enterprise installation  Refer to  Data Retention Policies   terraform enterprise application administration general data retention policies  for additional information   Refer to  Data Retention Policies   terraform enterprise users teams organizations organizations data retention policies  for information about configuring data retention policies for an organization   Refer to  Data Retention Policy API   terraform enterprise api docs data retention policies remove data retention policy  in the Terraform Enterprise documentation for details      Available Related Resources  The GET endpoints above can optionally return related resources  if requested with  the  include  query parameter   terraform cloud docs api docs inclusion of related resources   The following resource types are available     Resource Name           Description                                                                              
                                                                                                                                  entitlement set        The entitlement set that determines which HCP Terraform features the organization can use        Relationships  The following relationships may be present in various responses     Resource Name             Description                                                                                                                                                                                                                                                                                    module producers         Other organizations configured to share modules with the organization                                                              oauth tokens             OAuth tokens associated with VCS configurations for the organization                                                               authentication token     The API token for an organization                                                                                                  entitlement set          The entitlement set that determines which HCP Terraform features the organization can use                                        subscription             The current subscription for an organization                                                                                       default agent pool       An organization s default agent pool  Set this value if your  default execution mode  is  agent                                    data retention policy     EnterpriseAlert inline   Specifies an organization s data retention policy  Refer to  Data Retention Policy APIs   terraform enterprise api docs data retention policies  in the Terraform Enterprise documentation for more details   "}
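Both records above stress percent-encoding `[` as `%5B` and `]` as `%5D` in pagination and filter query parameters. As a minimal illustrative sketch (not part of the official docs; the host and path are the docs' own sample values), Python's standard-library `urllib.parse.urlencode` performs this encoding automatically:

```python
from urllib.parse import urlencode

# Pagination parameters with brackets, as used by the HCP Terraform / TFE API.
params = {
    "page[number]": 1,
    "page[size]": 20,
}

# urlencode percent-encodes "[" as %5B and "]" as %5D automatically.
query = urlencode(params)

# Sample host/path from the module-producers request above.
base = "https://tfe.example.com/api/v2/organizations/hashicorp/relationships/module-producers"
url = f"{base}?{query}"

print(url)
# https://tfe.example.com/api/v2/organizations/hashicorp/relationships/module-producers?page%5Bnumber%5D=1&page%5Bsize%5D=20
```

HTTP clients such as `requests` apply the same encoding when the parameters are passed as a dict, so manual encoding is typically only needed when assembling URLs by hand, as in the `curl` samples.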
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title Run Triggers API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 Use the run triggers endpoint to manage run triggers List show create and delete run triggers using the HTTP API","answers":"---\npage_title: Run Triggers - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/run-triggers` endpoint to manage run triggers. List, show, create, and delete run triggers using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Run Triggers API\n\n## Create a Run Trigger\n\n`POST \/workspaces\/:workspace_id\/run-triggers`\n\n| Parameter       | Description                                                                                  
                                                                                                                      |\n| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:workspace_id` | The ID of the workspace to create the run trigger in. Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. |\n\n| Status  | Response                                       | Reason                                                                   |\n| ------- | ---------------------------------------------- | ------------------------------------------------------------------------ |\n| [201][] | [JSON API document][] (`type: \"run-triggers\"`) | Successfully created a run trigger                                       |\n| [404][] | [JSON API error object][]                      | Workspace or sourceable not found or user unauthorized to perform action |\n| [422][] | [JSON API error object][]                      | Malformed request body (missing attributes, wrong types, etc.)           |\n\n### Permissions\n\nIn order to create a run trigger, the user must have admin access to the specified workspace and permission to read runs for the sourceable workspace. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                             | Type   | Default | Description                                                                                                                                                                                                                                                                                                                                                                                                                   |\n| ------------------------------------ | ------ | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.relationships.sourceable.data` | object |         | A JSON API relationship object that represents the source workspace for the run trigger. This object must have `id` and `type` properties, and the `type` property must be `workspaces` (e.g. `{ \"id\": \"ws-2HRvNs49EWPjDqT1\", \"type\": \"workspaces\" }`). Obtain workspace IDs from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"relationships\": {\n      \"sourceable\": {\n        \"data\": {\n          \"id\": \"ws-2HRvNs49EWPjDqT1\",\n          \"type\": \"workspaces\"\n        }\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --request POST \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-XdeUVMWShTesDMME\/run-triggers\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"rt-3yVQZvHzf5j3WRJ1\",\n    \"type\": \"run-triggers\",\n     \"attributes\": {\n       \"workspace-name\": \"workspace-1\",\n       \"sourceable-name\": \"workspace-2\",\n       \"created-at\": \"2018-09-11T18:21:21.784Z\"\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-XdeUVMWShTesDMME\",\n          \"type\": \"workspaces\"\n        }\n      },\n      \"sourceable\": {\n        \"data\": {\n          \"id\": \"ws-2HRvNs49EWPjDqT1\",\n          \"type\": \"workspaces\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/run-triggers\/rt-3yVQZvHzf5j3WRJ1\"\n    }\n  }\n}\n```\n\n## List Run Triggers\n\n`GET \/workspaces\/:workspace_id\/run-triggers`\n\n| Parameter       | Description                                                                                                                                                                                                    |\n| --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:workspace_id` | The ID of the workspace to list run triggers for. 
Obtain this from the [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings) or the [Show Workspace](\/terraform\/cloud-docs\/api-docs\/workspaces#show-workspace) endpoint. |\n\n| Status  | Response                                       | Reason                                                                                       |\n| ------- | ---------------------------------------------- | -------------------------------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"run-triggers\"`) | Request was successful                                                                       |\n| [400][] | [JSON API error object][]                      | Required parameter `filter[run-trigger][type]` is missing or has been given an invalid value |\n| [404][] | [JSON API error object][]                      | Workspace not found or user unauthorized to perform action                                   |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter                   | Description                                                                                                                                                                                                            |\n| --------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `filter[run-trigger][type]` | **Required** Which type of run triggers to list; valid values are `inbound` or `outbound`. `inbound` run triggers create runs in the specified workspace, and `outbound` run triggers create runs in other workspaces. 
|\n| `page[number]`              | **Optional.** If omitted, the endpoint will return the first page.                                                                                                                                                     |\n| `page[size]`                | **Optional.** If omitted, the endpoint will return 20 run triggers per page.                                                                                                                                           |\n\n### Permissions\n\nIn order to list run triggers, the user must have permission to read runs for the specified workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Sample Request\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-XdeUVMWShTesDMME\/run-triggers?filter%5Brun-trigger%5D%5Btype%5D=inbound\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"rt-WygcwSBuYaQWrM39\",\n      \"type\": \"run-triggers\",\n      \"attributes\": {\n        \"workspace-name\": \"workspace-1\",\n        \"sourceable-name\": \"workspace-2\",\n        \"created-at\": \"2018-09-11T18:21:21.784Z\"\n      },\n      \"relationships\": {\n        \"workspace\": {\n          \"data\": {\n            \"id\": \"ws-XdeUVMWShTesDMME\",\n            \"type\": \"workspaces\"\n          }\n        },\n        \"sourceable\": {\n          \"data\": {\n            \"id\": \"ws-2HRvNs49EWPjDqT1\",\n            \"type\": \"workspaces\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/run-triggers\/rt-WygcwSBuYaQWrM39\"\n      }\n    },\n    {\n      \"id\": \"rt-8F5JFydVYAmtTjET\",\n      \"type\": \"run-triggers\",\n      \"attributes\": {\n        
\"workspace-name\": \"workspace-1\",\n        \"sourceable-name\": \"workspace-3\",\n        \"created-at\": \"2018-09-11T18:21:21.784Z\"\n      },\n      \"relationships\": {\n        \"workspace\": {\n          \"data\": {\n            \"id\": \"ws-XdeUVMWShTesDMME\",\n            \"type\": \"workspaces\"\n          }\n        },\n        \"sourceable\": {\n          \"data\": {\n            \"id\": \"ws-BUHBEM97xboT8TVz\",\n            \"type\": \"workspaces\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/run-triggers\/rt-8F5JFydVYAmtTjET\"\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-xdiJLyGpCugbFDE1\/run-triggers?filter%5Brun-trigger%5D%5Btype%5D=inbound&page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-xdiJLyGpCugbFDE1\/run-triggers?filter%5Brun-trigger%5D%5Btype%5D=inbound&page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-xdiJLyGpCugbFDE1\/run-triggers?filter%5Brun-trigger%5D%5Btype%5D=inbound&page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 2\n    }\n  }\n}\n```\n\n## Show a Run Trigger\n\n`GET \/run-triggers\/:run_trigger_id`\n\n| Parameter         | Description                                                                          |\n| ----------------- | ------------------------------------------------------------------------------------ |\n| `:run_trigger_id` | The ID of the run trigger to show. Send a `GET` request to the `run-triggers` endpoint to find IDs. Refer to [List Run Triggers](#list-run-triggers) for details. 
|\n\n| Status  | Response                                       | Reason                                                       |\n| ------- | ---------------------------------------------- | ------------------------------------------------------------ |\n| [200][] | [JSON API document][] (`type: \"run-triggers\"`) | The request was successful                                   |\n| [404][] | [JSON API error object][]                      | Run trigger not found or user unauthorized to perform action |\n\n### Permissions\n\nIn order to show a run trigger, the user must have permission to read runs for either the workspace or sourceable workspace of the specified run trigger. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Sample Request\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/run-triggers\/rt-3yVQZvHzf5j3WRJ1\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"rt-3yVQZvHzf5j3WRJ1\",\n    \"type\": \"run-triggers\",\n     \"attributes\": {\n       \"workspace-name\": \"workspace-1\",\n       \"sourceable-name\": \"workspace-2\",\n       \"created-at\": \"2018-09-11T18:21:21.784Z\"\n    },\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"id\": \"ws-XdeUVMWShTesDMME\",\n          \"type\": \"workspaces\"\n        }\n      },\n      \"sourceable\": {\n        \"data\": {\n          \"id\": \"ws-2HRvNs49EWPjDqT1\",\n          \"type\": \"workspaces\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/run-triggers\/rt-3yVQZvHzf5j3WRJ1\"\n    }\n  }\n}\n```\n\n## Delete a Run Trigger\n\n`DELETE \/run-triggers\/:run_trigger_id`\n\n| Parameter         | Description                                                                    
        |\n| ----------------- | -------------------------------------------------------------------------------------- |\n| `:run_trigger_id` | The ID of the run trigger to delete. Send a `GET` request to the `run-triggers` endpoint to find IDs. Refer to [List Run Triggers](#list-run-triggers) for details. |\n\n| Status  | Response                  | Reason                                                       |\n| ------- | ------------------------- | ------------------------------------------------------------ |\n| [204][] | No Content                | Successfully deleted the run trigger                         |\n| [404][] | [JSON API error object][] | Run trigger not found or user unauthorized to perform action |\n\n### Permissions\n\nIn order to delete a run trigger, the user must have admin access to the specified workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n### Sample Request\n\n```shell\ncurl \\\n  --request DELETE \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/run-triggers\/rt-3yVQZvHzf5j3WRJ1\n```\n\n## Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\nThese includes respect read permissions. 
If you do not have access to read the related resource, it will not be returned.\n\n* `workspace` - The full workspace object.\n* `sourceable` - The full source workspace object.","site":"terraform","answers_cleaned":"    page title  Run Triggers   API Docs   HCP Terraform description       Use the   run triggers  endpoint to manage run triggers  List  show  create  and delete run triggers using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Run Triggers API     Create a Run Trigger   POST  workspaces  workspace id run triggers     Parameter         Description                                                                                                                                                                                                                                                                                                                                                                 
                                                                                      workspace id    The ID of the workspace to create the run trigger in  Obtain this from the  workspace settings   terraform cloud docs workspaces settings  or the  Show Workspace   terraform cloud docs api docs workspaces show workspace  endpoint       Status    Response                                         Reason                                                                                                                                                                                                                201       JSON API document      type   run triggers      Successfully created a run trigger                                            404       JSON API error object                           Workspace or sourceable not found or user unauthorized to perform action      422       JSON API error object                           Malformed request body  missing attributes  wrong types  etc                    Permissions  In order to create a run trigger  the user must have admin access to the specified workspace and permission to read runs for the sourceable workspace    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers      Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                               Type     Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                     
                                                                                                                                                                                                                                                                                                                                                                                                                                                                               data relationships sourceable data    object             A JSON API relationship object that represents the source workspace for the run trigger  This object must have  id  and  type  properties  and the  type  property must be  workspaces   e g      id    ws 2HRvNs49EWPjDqT1    type    workspaces       Obtain workspace IDs from the  workspace settings   terraform cloud docs workspaces settings  or the  Show Workspace   terraform cloud docs api docs workspaces show workspace  endpoint         Sample Payload     json      data          relationships            sourceable              data                id    ws 2HRvNs49EWPjDqT1              type    workspaces                                         Sample Request     shell curl       request POST       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        data  payload json     https   app terraform io api v2 workspaces ws XdeUVMWShTesDMME run triggers          Sample Response     json      data          id    rt 3yVQZvHzf5j3WRJ1        type    run triggers         attributes             workspace name    workspace 1           sourceable name    workspace 2           created at    2018 09 11T18 21 21 784Z              relationships            workspace              data                id    ws XdeUVMWShTesDMME              type    workspaces                            sourceable              data                id    ws 2HRvNs49EWPjDqT1              type    workspaces                                links            
self     api v2 run triggers rt 3yVQZvHzf5j3WRJ1                      List Run Triggers   GET  workspaces  workspace id run triggers     Parameter         Description                                                                                                                                                                                                                                                                                                                                                                                                                                               workspace id    The ID of the workspace to list run triggers for  Obtain this from the  workspace settings   terraform cloud docs workspaces settings  or the  Show Workspace   terraform cloud docs api docs workspaces show workspace  endpoint       Status    Response                                         Reason                                                                                                                                                                                                                                                        200       JSON API document      type   run triggers      Request was successful                                                                            400       JSON API error object                           Required parameter  filter run trigger  type   is missing or has been given an invalid value      404       JSON API error object                           Workspace not found or user unauthorized to perform action                                          Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter                     Description                                                                                 
                                                                                                                                                                                                                                                                                                                                                                                         filter run trigger  type       Required   Which type of run triggers to list  valid values are  inbound  or  outbound    inbound  run triggers create runs in the specified workspace  and  outbound  run triggers create runs in other workspaces       page number                    Optional    If omitted  the endpoint will return the first page                                                                                                                                                           page size                      Optional    If omitted  the endpoint will return 20 run triggers per page                                                                                                                                                   Permissions  In order to list run triggers  the user must have permission to read runs for the specified workspace    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers      Sample Request     shell curl       request GET       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 workspaces ws XdeUVMWShTesDMME run triggers filter 5Brun trigger 5D 5Btype 5D inbound          Sample Response     json      data                  id    rt WygcwSBuYaQWrM39          type    run triggers          attributes              workspace name    workspace 1            sourceable name    workspace 2            created at    2018 09 11T18 21 21 784Z                  relationships              workspace 
               data                  id    ws XdeUVMWShTesDMME                type    workspaces                                  sourceable                data                  id    ws 2HRvNs49EWPjDqT1                type    workspaces                                        links              self     api v2 run triggers rt WygcwSBuYaQWrM39                              id    rt 8F5JFydVYAmtTjET          type    run triggers          attributes              workspace name    workspace 1            sourceable name    workspace 3            created at    2018 09 11T18 21 21 784Z                  relationships              workspace                data                  id    ws XdeUVMWShTesDMME                type    workspaces                                  sourceable                data                  id    ws BUHBEM97xboT8TVz                type    workspaces                                        links              self     api v2 run triggers rt 8F5JFydVYAmtTjET                        links          self    https   app terraform io api v2 workspaces ws xdiJLyGpCugbFDE1 run triggers filter 5Brun trigger 5D 5Btype 5D inbound page 5Bnumber 5D 1 page 5Bsize 5D 20        first    https   app terraform io api v2 workspaces ws xdiJLyGpCugbFDE1 run triggers filter 5Brun trigger 5D 5Btype 5D inbound page 5Bnumber 5D 1 page 5Bsize 5D 20        prev   null       next   null       last    https   app terraform io api v2 workspaces ws xdiJLyGpCugbFDE1 run triggers filter 5Brun trigger 5D 5Btype 5D inbound page 5Bnumber 5D 1 page 5Bsize 5D 20          meta          pagination            current page   1         prev page   null         next page   null         total pages   1         total count   2                     Show a Run Trigger   GET  run triggers  run trigger id     Parameter           Description                                                                                                                                                                        
                     run trigger id    The ID of the run trigger to show  Send a  GET  request to the  run triggers  endpoint to find IDs  Refer to  List Run Triggers   list run triggers  for details       Status    Response                                         Reason                                                                                                                                                                                        200       JSON API document      type   run triggers      The request was successful                                        404       JSON API error object                           Run trigger not found or user unauthorized to perform action        Permissions  In order to show a run trigger  the user must have permission to read runs for either the workspace or sourceable workspace of the specified run trigger    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers      Sample Request     shell curl       request GET       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 run triggers rt 3yVQZvHzf5j3WRJ1          Sample Response     json      data          id    rt 3yVQZvHzf5j3WRJ1        type    run triggers         attributes             workspace name    workspace 1           sourceable name    workspace 2           created at    2018 09 11T18 21 21 784Z              relationships            workspace              data                id    ws XdeUVMWShTesDMME              type    workspaces                            sourceable              data                id    ws 2HRvNs49EWPjDqT1              type    workspaces                                links            self     api v2 run triggers rt 3yVQZvHzf5j3WRJ1                      Delete a Run Trigger   DELETE  run triggers  run trigger id     Parameter           Description          
                                                                                                                                                                                        run trigger id    The ID of the run trigger to delete  Send a  GET  request to the  run triggers  endpoint to find IDs  Refer to  List Run Triggers   list run triggers  for details       Status    Response                    Reason                                                                                                                                                                   204      No Content                  Successfully deleted the run trigger                              404       JSON API error object      Run trigger not found or user unauthorized to perform action        Permissions  In order to delete a run trigger  the user must have admin access to the specified workspace    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers      Sample Request     shell curl       request DELETE       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 run triggers rt 3yVQZvHzf5j3WRJ1         Available Related Resources  The GET endpoints above can optionally return related resources  if requested with  the  include  query parameter   terraform cloud docs api docs inclusion of related resources   The following resource types are available   These includes respect read permissions  If you do not have access to read the related resource  it will not be returned      workspace    The full workspace object     sourceable    The full source workspace object "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title Organization Memberships API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 Use the organization memberships endpoint to manage user membership within an organization Invite users and list show and remove memberships using the HTTP API","answers":"---\npage_title: Organization Memberships - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/organization-memberships` endpoint to manage user membership within an organization. Invite users, and list, show, and remove memberships using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Organization Memberships API\n\nUsers are added to organizations by inviting them to join. 
Once accepted, they become members of the organization. The Organization Membership resource represents this membership.\n\nYou can invite users who already have an account, as well as new users. If the user has an existing account with the same email address used to invite them, they can reuse the same login.\n\n-> **Note:** Once a user is a member of the organization, you can manage their team memberships using [the Team Membership API](\/terraform\/cloud-docs\/api-docs\/team-members).\n\n## Invite a User to an Organization\n\n`POST \/organizations\/:organization_name\/organization-memberships`\n\n| Parameter            | Description                                                                                                                               |\n| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization the user will be invited to join. The inviting user must have permission to manage organization memberships. 
|\n\n-> **Note:** Organization membership management is restricted to members of the owners team, the owners [team API token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens), the [organization API token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens), and users or teams with one of the [Team Management](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#team-management-permissions) permissions.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Status  | Response                  | Reason                                                         |\n| ------- | ------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][]     | Successfully invited the user                                  |\n| [400][] | [JSON API error object][] | Unable to invite user due to organization limits               |\n| [404][] | [JSON API error object][] | Organization not found, or user unauthorized to perform action |\n| [422][] | [JSON API error object][] | Unable to invite user due to validation errors                 |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                          | Type           | Default | Description                                                                                                                                                                                                                                                                                                                                                                                           |\n| --------------------------------- | -------------- | ------- | 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                       | string         |         | Must be `\"organization-memberships\"`.                                                                                                                                                                                                                                                                                                                                                                 |\n| `data.attributes.email`           | string         |         | The email address of the user to be invited.                                                                                                                                                                                                                                                                                                                                                          |\n| `data.relationships.teams.data[]` | array\\[object] |         | A list of resource identifier objects that defines which teams the invited user will be a member of. These objects must contain `id` and `type` properties, and the `type` property must be `teams` (e.g. `{ \"id\": \"team-GeLZkdnK6xAVjA5H\", \"type\": \"teams\" }`). Obtain team IDs from the [List Teams](\/terraform\/cloud-docs\/api-docs\/teams#list-teams) endpoint. All users must be added to at least one team. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"email\": \"test@example.com\"\n    },\n    \"relationships\": {\n      \"teams\": {\n        \"data\": [\n          {\n            \"type\": \"teams\",\n            \"id\": \"team-GeLZkdnK6xAVjA5H\"\n          }\n        ]\n      }\n    },\n    \"type\": \"organization-memberships\"\n  }\n}\n```\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/organization-memberships\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"ou-nX7inDHhmC3quYgy\",\n    \"type\": \"organization-memberships\",\n    \"attributes\": {\n      \"status\": \"invited\"\n    },\n    \"relationships\": {\n      \"teams\": {\n        \"data\": [\n          {\n            \"id\": \"team-GeLZkdnK6xAVjA5H\",\n            \"type\": \"teams\"\n          }\n        ]\n      },\n      \"user\": {\n        \"data\": {\n          \"id\": \"user-J8oxGmRk5eC2WLfX\",\n          \"type\": \"users\"\n        }\n      },\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        }\n      }\n    }\n  },\n  \"included\": [\n    {\n      \"id\": \"user-J8oxGmRk5eC2WLfX\",\n      \"type\": \"users\",\n      \"attributes\": {\n        \"username\": null,\n        \"is-service-account\": false,\n        \"auth-method\": \"hcp_sso\",\n        \"avatar-url\": \"https:\/\/www.gravatar.com\/avatar\/55502f40dc8b7c769880b10874abc9d0?s=100&d=mm\",\n        \"two-factor\": {\n          \"enabled\": false,\n          \"verified\": false\n        },\n        \"email\": \"test@example.com\",\n        \"permissions\": {\n          \"can-create-organizations\": true,\n          \"can-change-email\": true,\n          
\"can-change-username\": true,\n          \"can-manage-user-tokens\": false\n        }\n      },\n      \"relationships\": {\n        \"authentication-tokens\": {\n          \"links\": {\n            \"related\": \"\/api\/v2\/users\/user-J8oxGmRk5eC2WLfX\/authentication-tokens\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/users\/user-J8oxGmRk5eC2WLfX\"\n      }\n    }\n  ]\n}\n```\n\n## List Memberships for an Organization\n\n`GET \/organizations\/:organization_name\/organization-memberships`\n\n| Parameter            | Description                                              |\n| -------------------- | -------------------------------------------------------- |\n| `:organization_name` | The name of the organization to list the memberships of. |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter        | Description                                                                                                                                                                                   |\n| ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `q`              | **Optional.** A search query string. Organization memberships are searchable by user name and email.                                                                                          |\n| `filter[status]` | **Optional.** If specified, restricts results to those with the matching status value. Valid values are `invited` and `active`.                                                               
|\n| `filter[email]`  | **Optional.** If specified, restricts results to those with a matching user email address. If multiple comma separated values are specified, results matching any of the values are returned. | \n| `page[number]`   | **Optional.** If omitted, the endpoint will return the first page.                                                                                                                            |\n| `page[size]`     | **Optional.** If omitted, the endpoint will return 20 users per page.                                                                                                                         |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/organization-memberships\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"ou-tTJph1AQVK5ZmdND\",\n      \"type\": \"organization-memberships\",\n      \"attributes\": {\n        \"status\": \"active\"\n      },\n      \"relationships\": {\n        \"teams\": {\n          \"data\": [\n            {\n              \"id\": \"team-yUrEehvfG4pdmSjc\",\n              \"type\": \"teams\"\n            }\n          ]\n        },\n        \"user\": {\n          \"data\": {\n            \"id\": \"user-vaQqszES9JnuK4eB\",\n            \"type\": \"users\"\n          }\n        },\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      }\n    },\n    {\n      \"id\": \"ou-D6HPYFt4GzeBt3gB\",\n      \"type\": \"organization-memberships\",\n      \"attributes\": {\n        \"status\": \"active\"\n      },\n      \"relationships\": {\n        \"teams\": {\n          \"data\": [\n            {\n              \"id\": \"team-yUrEehvfG4pdmSjc\",\n              \"type\": \"teams\"\n            
}\n          ]\n        },\n        \"user\": {\n          \"data\": {\n            \"id\": \"user-oqCgH7NgTn95jTGc\",\n            \"type\": \"users\"\n          }\n        },\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      }\n    },\n    {\n      \"id\": \"ou-x1E2eBwYwusLDC7h\",\n      \"type\": \"organization-memberships\",\n      \"attributes\": {\n        \"status\": \"invited\"\n      },\n      \"relationships\": {\n        \"teams\": {\n          \"data\": [\n            {\n              \"id\": \"team-yUrEehvfG4pdmSjc\",\n              \"type\": \"teams\"\n            }\n          ]\n        },\n        \"user\": {\n          \"data\": {\n            \"id\": \"user-UntUdBTHsVRQMzC8\",\n            \"type\": \"users\"\n          }\n        },\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/organization-memberships?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/organization-memberships?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/organization-memberships?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"status-counts\": {\n      \"total\": 3,\n      \"active\": 2,\n      \"invited\": 1\n    },\n    \"pagination\": {\n      \"current-page\": 1,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 3\n    }\n  }\n}\n```\n\n## List User's Own Memberships\n\n`GET \/organization-memberships`\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header 
\"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organization-memberships\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"ou-VgJgfbDVN3APUm2F\",\n      \"type\": \"organization-memberships\",\n      \"attributes\": {\n        \"status\": \"invited\"\n      },\n      \"relationships\": {\n        \"teams\": {\n          \"data\": [\n            {\n              \"id\": \"team-4QrJKzxB3J5N4cJc\",\n              \"type\": \"teams\"\n            }\n          ]\n        },\n        \"user\": {\n          \"data\": {\n            \"id\": \"user-vaQqszES9JnuK4eB\",\n            \"type\": \"users\"\n          }\n        },\n        \"organization\": {\n          \"data\": {\n            \"id\": \"acme-corp\",\n            \"type\": \"organizations\"\n          }\n        }\n      }\n    },\n    {\n      \"id\": \"ou-tTJph1AQVK5ZmdND\",\n      \"type\": \"organization-memberships\",\n      \"attributes\": {\n        \"status\": \"active\"\n      },\n      \"relationships\": {\n        \"teams\": {\n          \"data\": [\n            {\n              \"id\": \"team-yUrEehvfG4pdmSjc\",\n              \"type\": \"teams\"\n            }\n          ]\n        },\n        \"user\": {\n          \"data\": {\n            \"id\": \"user-vaQqszES9JnuK4eB\",\n            \"type\": \"users\"\n          }\n        },\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      }\n    }\n  ]\n}\n```\n\n## Show a Membership\n\n`GET \/organization-memberships\/:organization_membership_id`\n\n| Parameter                     | Description                 |\n| ----------------------------- | --------------------------- |\n| `:organization_membership_id` | The organization membership |\n\n| Status  | Response                                                   | Reason        
                                                            |\n| ------- | ---------------------------------------------------------- | ------------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"organization-memberships\"`) | The request was successful                                                |\n| [404][] | [JSON API error object][]                                  | Organization membership not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organization-memberships\/ou-kit6GaMo3zPGCzWb\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"id\": \"ou-kit6GaMo3zPGCzWb\",\n        \"type\": \"organization-memberships\",\n        \"attributes\": {\n            \"status\": \"active\"\n        },\n        \"relationships\": {\n            \"teams\": {\n                \"data\": [\n                    {\n                        \"id\": \"team-97LkM7QciNkwb2nh\",\n                        \"type\": \"teams\"\n                    }\n                ]\n            },\n            \"user\": {\n                \"data\": {\n                    \"id\": \"user-hn6v2WK1naDpGadd\",\n                    \"type\": \"users\"\n                }\n            },\n            \"organization\": {\n                \"data\": {\n                    \"id\": \"hashicorp\",\n                    \"type\": \"organizations\"\n                }\n            }\n        }\n    }\n}\n```\n\n## Remove User from Organization\n\n`DELETE \/organization-memberships\/:organization_membership_id`\n\n| Parameter                     | Description                 |\n| ----------------------------- | --------------------------- |\n| `:organization_membership_id` | The organization membership |\n\n| Status  | Response                  | Reason 
                                                                                |\n| ------- | ------------------------- | -------------------------------------------------------------------------------------- |\n| [204][] | Empty body                | Successfully removed the user from the organization                                    |\n| [403][] | [JSON API error object][] | Unable to remove the user: you cannot remove yourself from organizations which you own |\n| [404][] | [JSON API error object][] | Organization membership not found, or user unauthorized to perform action              |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/organization-memberships\/ou-tTJph1AQVK5ZmdND\n```\n\n## Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). 
The following resource types are available:\n\n* `user` - The user associated with the membership.\n* `teams` - Teams the user is a member of.","site":"terraform"}
{"questions":"terraform page title IP Ranges API Docs HCP Terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 Use the meta ip ranges endpoint to query HCP Terraform s IP ranges Get a list of the IP ranges used by HCP Terraform using the HTTP API tfc only true","answers":"---\npage_title: IP Ranges - API Docs - HCP Terraform\ntfc_only: true\ndescription: >-\n  Use the `\/meta\/ip-ranges` endpoint to query HCP Terraform's IP ranges. Get a list of the IP ranges used by HCP Terraform using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[304]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/304\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[If-Modified-Since]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Headers\/If-Modified-Since\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: 
https:\/\/jsonapi.org\/format\/#error-objects\n\n[CIDR Notation]: https:\/\/en.wikipedia.org\/wiki\/Classless_Inter-Domain_Routing#CIDR_notation\n\n[run task requests]: \/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks-integration#run-task-request\n\n# IP Ranges API\n\nIP Ranges provides a list of HCP Terraform's IP ranges. For more information about HCP Terraform's IP ranges, view our documentation about [HCP Terraform IP Ranges](\/terraform\/cloud-docs\/architectural-details\/ip-ranges).\n\n## IP Ranges Payload\n\n| Name            | Type  | Description                                                                                      |\n| --------------- | ----- | ------------------------------------------------------------------------------------------------ |\n| `api`           | array | List of IP ranges in [CIDR notation] used for connections from user site to HCP Terraform APIs |\n| `notifications` | array | List of IP ranges in [CIDR notation] used for notifications and outbound [run task requests]     |\n| `sentinel`      | array | List of IP ranges in [CIDR notation] used for outbound requests from Sentinel policies           |\n| `vcs`           | array | List of IP ranges in [CIDR notation] used for connecting to VCS providers                        |\n\n-> **Note:** The IP ranges for each feature returned by the IP Ranges API may overlap. Additionally, these published ranges do not currently allow for execution of Terraform runs against local resources.\n\n-> **Note:** Under normal circumstances, HashiCorp will publish any expected changes to HCP Terraform's IP ranges at least 24 hours in advance of implementing them. This should allow sufficient time for users to update any connected systems to reflect the changes. 
In the event of an emergency outage or failover operation, it may not be possible to pre-publish these changes.\n\n## Get IP Ranges\n\n-> **Note:** The IP Ranges API does not require authentication\n\n-> **Note:** This endpoint supports the [If-Modified-Since][] HTTP request header\n\n`GET \/meta\/ip-ranges`\n\n| Status  | Response           | Reason                                                                                                         |\n| ------- | ------------------ | -------------------------------------------------------------------------------------------------------------- |\n| [200][] | `application\/json` | The request was successful                                                                                     |\n| [304][] | empty body         | The request was successful; IP ranges were not modified since the specified date in `If-Modified-Since` header |\n\n### Sample Request\n\n```shell\ncurl \\\n  --request GET \\\n  -H \"If-Modified-Since: Tue, 26 May 2020 15:10:05 GMT\" \\\n  https:\/\/app.terraform.io\/api\/meta\/ip-ranges\n```\n\n### Sample Response\n\n```json\n{\n  \"api\": [\n    \"75.2.98.97\/32\",\n    \"99.83.150.238\/32\"\n  ],\n  \"notifications\": [\n    \"10.0.0.1\/32\",\n    \"192.168.0.1\/32\",\n    \"172.16.0.1\/32\"\n  ],\n  \"sentinel\": [\n    \"10.0.0.1\/32\",\n    \"192.168.0.1\/32\",\n    \"172.16.0.1\/32\"\n  ],\n  \"vcs\": [\n    \"10.0.0.1\/32\",\n    \"192.168.0.1\/32\",\n    \"172.16.0.1\/32\"\n  ]\n}\n```","site":"terraform"}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the state version outputs endpoint to access output values from a Terraform state version List and show state version outputs and show current state version outputs for a workspace using the HTTP API page title State Version Outputs API Docs HCP Terraform 404 https developer mozilla org en US docs Web HTTP Status 404","answers":"---\npage_title: State Version Outputs - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/state-version-outputs` endpoint to access output values from a Terraform state version. List and show state version outputs, and show current state version outputs for a workspace using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# State Version Outputs API\n\nState version outputs are the [output values](\/terraform\/language\/values\/outputs) from a Terraform state file. They include\nthe name and value of the output, as well as a sensitive boolean if the value should be hidden by default in UIs.\n\n~> **Important:** The state version outputs for a state version (as well as some other information about it) might be **populated asynchronously** by HCP Terraform. These values might not be immediately available after the state version is uploaded. The `resources-processed` property on the associated [state version object](\/terraform\/cloud-docs\/api-docs\/state-versions) indicates whether or not HCP Terraform has finished any necessary asynchronous processing. 
If you need to use these values, be sure to wait for `resources-processed` to become `true` before assuming that the values are in fact empty.\n\n## List State Version Outputs\n\n`GET \/state-versions\/:state_version_id\/outputs`\n\nListing state version outputs requires permission to read state outputs for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n| Parameter           | Description                          |\n| ------------------- | ------------------------------------ |\n| `:state_version_id` | The ID of the desired state version. |\n\n| Status  | Response                  | Reason                                                               |\n| ------- | ------------------------- | -------------------------------------------------------------------- |\n| [200][] | [JSON API document][]     | Successfully returned a list of outputs for the given state version. |\n| [404][] | [JSON API error object][] | State version not found, or user unauthorized to perform action.     |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                           |\n| -------------- | ------------------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.                    |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 state version outputs per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/state-versions\/sv-SDboVZC8TCxXEneJ\/outputs\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"wsout-xFAmCR3VkBGepcee\",\n      \"type\": \"state-version-outputs\",\n      \"attributes\": {\n        \"name\": \"fruits\",\n        \"sensitive\": false,\n        \"type\": \"array\",\n        \"value\": [\n          \"apple\",\n          \"strawberry\",\n          \"blueberry\",\n          \"rasberry\"\n        ],\n        \"detailed_type\": [\n          \"tuple\",\n          [\n            \"string\",\n            \"string\",\n            \"string\",\n            \"string\"\n          ]\n        ]\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/state-version-outputs\/wsout-xFAmCR3VkBGepcee\"\n      }\n    },\n    {\n      \"id\": \"wsout-vspuB754AUNkfxwo\",\n      \"type\": \"state-version-outputs\",\n      \"attributes\": {\n        \"name\": \"vegetables\",\n        \"sensitive\": false,\n        \"type\": \"array\",\n        \"value\": [\n          \"carrots\",\n          \"potato\",\n          \"tomato\",\n          \"onions\"\n        ],\n        \"detailed_type\": [\n          \"tuple\",\n          [\n            \"string\",\n            \"string\",\n            \"string\",\n            \"string\"\n          ]\n        ]\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/state-version-outputs\/wsout-vspuB754AUNkfxwo\"\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/state-versions\/sv-SVB5wMrDL1XUgJ4G\/outputs?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/state-versions\/sv-SVB5wMrDL1XUgJ4G\/outputs?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": 
\"https:\/\/app.terraform.io\/api\/v2\/state-versions\/sv-SVB5wMrDL1XUgJ4G\/outputs?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 2\n    }\n  }\n}\n```\n\n## Show a State Version Output\n\n`GET \/state-version-outputs\/:state_version_output_id`\n\n| Parameter                  | Description                                 |\n| -------------------------- | ------------------------------------------- |\n| `:state_version_output_id` | The ID of the desired state version output. |\n\nState version output IDs must be obtained from a [state version object](\/terraform\/cloud-docs\/api-docs\/state-versions). When requesting a state version, you can optionally add `?include=outputs` to include full details for all of that state version's outputs.\n\n| Status  | Response                                                | Reason                                                 |\n| ------- | ------------------------------------------------------- | ------------------------------------------------------ |\n| [200][] | [JSON API document][] (`type: \"state-version-outputs\"`) | Success.                                               |\n| [404][] | [JSON API error object][]                               | State version output not found or user not authorized. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/state-version-outputs\/wsout-J2zM24JPFbfc7bE5\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"wsout-J2zM24JPFbfc7bE5\",\n    \"type\": \"state-version-outputs\",\n    \"attributes\": {\n      \"name\": \"flavor\",\n      \"sensitive\": false,\n      \"type\": \"string\",\n      \"value\": \"Peanut Butter\",\n      \"detailed-type\": \"string\"\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/state-version-outputs\/wsout-J2zM24JPFbfc7bE5\"\n    }\n  }\n}\n```\n\n## Show Current State Version Outputs for a Workspace\n\nThis endpoint allows organization users, who do not have permissions to read state versions, to fetch the latest [output values](\/terraform\/language\/values\/outputs) for a workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n-> **Note:** Sensitive values are not revealed and will be returned as `null`. To fetch an output including sensitive values see [Show a State Version Output](\/terraform\/cloud-docs\/api-docs\/state-version-outputs#show-a-state-version-output).\n\n`GET \/workspaces\/:workspace_id\/current-state-version-outputs`\n\n| Parameter       | Description                                  |\n| --------------- | -------------------------------------------- |\n| `:workspace_id` | The ID of the workspace to read outputs from.|\n\n| Status  | Response                                                | Reason                                                                          |\n| ------- | ------------------------------------------------------- | ------------------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"state-version-outputs\"`) | Successfully returned a list of outputs for the given workspace.                
|\n| [404][] | [JSON API error object][]                               | State version outputs not found or user not authorized.                         |\n| [503][] | [JSON API error object][]                               | State version outputs are being processed and are not ready. Retry the request. |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-G4zM299PFbfc10E5\/current-state-version-outputs\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"wsout-J2zM24JPFbfc7bE5\",\n      \"type\": \"state-version-outputs\",\n      \"attributes\": {\n        \"name\": \"flavor\",\n        \"sensitive\": false,\n        \"type\": \"string\",\n        \"value\": \"Peanut Butter\",\n        \"detailed-type\": \"string\"\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/state-version-outputs\/wsout-J2zM24JPFbfc7bE5\"\n      }\n    },\n    {\n      \"id\": \"wsout-FLzM23Gcd5f37bE5\",\n      \"type\": \"state-version-outputs\",\n      \"attributes\": {\n        \"name\": \"recipe\",\n        \"sensitive\": true,\n        \"type\": \"string\",\n        \"value\": \"Don Douglas' Peanut Butter Frenzy\",\n        \"detailed-type\": \"string\"\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/state-version-outputs\/wsout-FLzM23Gcd5f37bE5\"\n      }\n    }\n  ]\n}\n```\n","site":"terraform","answers_cleaned":"    page title  State Version Outputs   API Docs   HCP Terraform description       Use the   state version outputs  endpoint to access output values from a Terraform state version  List and show state version outputs  and show current state version outputs for a workspace using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   404   https   developer mozilla org en US docs Web HTTP Status 404   JSON API document    terraform cloud 
docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    State Version Outputs API  State version outputs are the  output values   terraform language values outputs  from a Terraform state file  They include the name and value of the output  as well as a sensitive boolean if the value should be hidden by default in UIs        Important    The state version outputs for a state version  as well as some other information about it  might be   populated asynchronously   by HCP Terraform  These values might not be immediately available after the state version is uploaded  The  resources processed  property on the associated  state version object   terraform cloud docs api docs state versions  indicates whether or not HCP Terraform has finished any necessary asynchronous processing  If you need to use these values  be sure to wait for  resources processed  to become  true  before assuming that the values are in fact empty      List State Version Outputs   GET  state versions  state version id outputs   Listing state version outputs requires permission to read state outputs for the workspace    More about permissions    terraform cloud docs users teams organizations permissions      Parameter             Description                                                                                               state version id    The ID of the desired state version       Status    Response                    Reason                                                                                                                                                                                   200       JSON API document          Successfully returned a list of outputs for the given state version       404       JSON API error object      State version not found  or user unauthorized to perform action             Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs 
query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter        Description                                                                                                                                                                                           page number       Optional    If omitted  the endpoint will return the first page                          page size         Optional    If omitted  the endpoint will return 20 state version outputs per page         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 state versions sv SDboVZC8TCxXEneJ outputs          Sample Response     json      data                  id    wsout xFAmCR3VkBGepcee          type    state version outputs          attributes              name    fruits            sensitive   false           type    array            value                apple              strawberry              blueberry              rasberry                      detailed type                tuple                            string                string                string                string                                        links              self     api v2 state version outputs wsout xFAmCR3VkBGepcee                              id    wsout vspuB754AUNkfxwo          type    state version outputs          attributes              name    vegetables            sensitive   false           type    array            value                carrots              potato              tomato              onions                      detailed type                tuple                            string                string                string                string                                        links              self     api v2 state version outputs wsout vspuB754AUNkfxwo                        links          self 
   https   app terraform io api v2 state versions sv SVB5wMrDL1XUgJ4G outputs page 5Bnumber 5D 1 page 5Bsize 5D 20        first    https   app terraform io api v2 state versions sv SVB5wMrDL1XUgJ4G outputs page 5Bnumber 5D 1 page 5Bsize 5D 20        prev   null       next   null       last    https   app terraform io api v2 state versions sv SVB5wMrDL1XUgJ4G outputs page 5Bnumber 5D 1 page 5Bsize 5D 20          meta          pagination            current page   1         page size   20         prev page   null         next page   null         total pages   1         total count   2                     Show a State Version Output   GET  state version outputs  state version output id     Parameter                    Description                                                                                                                    state version output id    The ID of the desired state version output     State version output IDs must be obtained from a  state version object   terraform cloud docs api docs state versions   When requesting a state version  you can optionally add   include outputs  to include full details for all of that state version s outputs     Status    Response                                                  Reason                                                                                                                                                                                     200       JSON API document      type   state version outputs      Success                                                     404       JSON API error object                                    State version output not found or user not authorized         Sample Request     shell curl       header  Authorization  Bearer  TOKEN      https   app terraform io api v2 state version outputs wsout J2zM24JPFbfc7bE5          Sample Response     json      data          id    wsout J2zM24JPFbfc7bE5        type    state version outputs        attributes          
  name    flavor          sensitive   false         type    string          value    Peanut Butter          detailed type    string              links            self     api v2 state version outputs wsout J2zM24JPFbfc7bE5                      Show Current State Version Outputs for a Workspace  This endpoint allows organization users  who do not have permissions to read state versions  to fetch the latest  output values   terraform language values outputs  for a workspace    More about permissions    terraform cloud docs users teams organizations permissions         Note    Sensitive values are not revealed and will be returned as  null   To fetch an output including sensitive values see  Show a State Version Output   terraform cloud docs api docs state version outputs show a state version output     GET  workspaces  workspace id current state version outputs     Parameter         Description                                                                                                           workspace id    The ID of the workspace to read outputs from      Status    Response                                                  Reason                                                                                                                                                                                                                                       200       JSON API document      type   state version outputs      Successfully returned a list of outputs for the given workspace                      404       JSON API error object                                    State version outputs not found or user not authorized                               503       JSON API error object                                    State version outputs are being processed and are not ready  Retry the request         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io 
api v2 workspaces ws G4zM299PFbfc10E5 current state version outputs          Sample Response     json      data                  id    wsout J2zM24JPFbfc7bE5          type    state version outputs          attributes              name    flavor            sensitive   false           type    string            value    Peanut Butter            detailed type    string                  links              self     api v2 state version outputs wsout J2zM24JPFbfc7bE5                              id    wsout FLzM23Gcd5f37bE5          type    state version outputs          attributes              name    recipe            sensitive   true           type    string            value    Don Douglas  Peanut Butter Frenzy            detailed type    string                  links              self     api v2 state version outputs wsout FLzM23Gcd5f37bE5                          "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 Use the users endpoint to query user details Show details for a user using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201 page title Users API Docs HCP Terraform","answers":"---\npage_title: Users - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/users` endpoint to query user details. Show details for a user using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Users API\n\nHCP Terraform's user objects do not contain any identifying information about a user, other than their HCP Terraform username and avatar image; they are intended for displaying names and avatars in contexts that refer to a user by ID, like lists of team members or 
the details of a run. Most of these contexts can already include user objects via an `?include` parameter, so you shouldn't usually need to make a separate call to this endpoint.\n\n## Show a User\n\nShows details for a given user.\n\n`GET \/users\/:user_id`\n\n| Parameter  | Description                 |\n| ---------- | --------------------------- |\n| `:user_id` | The ID of the desired user. |\n\nTo find the ID that corresponds to a given username, you can request a [team object](\/terraform\/cloud-docs\/api-docs\/teams) for a team that user belongs to, specify `?include=users` in the request, and look for the user's name in the included list of user objects.\n\n| Status  | Response                                | Reason                                           |\n| ------- | --------------------------------------- | ------------------------------------------------ |\n| [200][] | [JSON API document][] (`type: \"users\"`) | The request was successful                       |\n| [401][] | [JSON API error object][]               | Unauthorized                                     |\n| [404][] | [JSON API error object][]               | User not found, or unauthorized to view the user |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/users\/user-MA4GL63FmYRpSFxa\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"user-MA4GL63FmYRpSFxa\",\n    \"type\": \"users\",\n    \"attributes\": {\n      \"username\": \"admin\",\n      \"is-service-account\": false,\n      \"auth-method\": \"hcp_sso\",\n      \"avatar-url\": \"https:\/\/www.gravatar.com\/avatar\/fa1f0c9364253d351bf1c7f5c534cd40?s=100&d=mm\",\n      \"v2-only\": true,\n      \"permissions\": {\n        \"can-create-organizations\": false,\n        \"can-change-email\": true,\n        \"can-change-username\": true\n      }\n    },\n    
\"relationships\": {\n      \"authentication-tokens\": {\n        \"links\": {\n          \"related\": \"\/api\/v2\/users\/user-MA4GL63FmYRpSFxa\/authentication-tokens\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/users\/user-MA4GL63FmYRpSFxa\"\n    }\n  }\n}\n```","site":"terraform","answers_cleaned":"    page title  Users   API Docs   HCP Terraform description       Use the   users  endpoint to query user details  Show details for a user using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Users API  HCP Terraform s user objects do not contain any identifying information about a user  other than their HCP Terraform username and avatar image  they are intended for displaying names and avatars in contexts that refer to a user by ID  like lists of team members or the details of a run  Most of these contexts can already include user objects via an   include  parameter  so you 
shouldn t usually need to make a separate call to this endpoint      Show a User  Shows details for a given user    GET  users  user id     Parameter    Description                                                                    user id    The ID of the desired user     To find the ID that corresponds to a given username  you can request a  team object   terraform cloud docs api docs teams  for a team that user belongs to  specify   include users  in the request  and look for the user s name in the included list of user objects     Status    Response                                  Reason                                                                                                                                                         200       JSON API document      type   users      The request was successful                            401       JSON API error object                    Unauthorized                                          404       JSON API error object                    User not found  or unauthorized to view the user        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 users user MA4GL63FmYRpSFxa          Sample Response     json      data          id    user MA4GL63FmYRpSFxa        type    users        attributes            username    admin          is service account   false         auth method    hcp sso          avatar url    https   www gravatar com avatar fa1f0c9364253d351bf1c7f5c534cd40 s 100 d mm          v2 only   true         permissions              can create organizations   false           can change email   true           can change username   true                     relationships            authentication tokens              links                related     api v2 users user MA4GL63FmYRpSFxa authentication tokens                                links            self     api v2 users user 
MA4GL63FmYRpSFxa                 "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title User Tokens API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 Use the authentication tokens endpoint to manage user specific API tokens List show create and destroy user tokens using the HTTP API","answers":"---\npage_title: User Tokens - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/authentication-tokens` endpoint to manage user-specific API tokens. List, show, create, and destroy user tokens using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# User Tokens API\n\n## List User Tokens\n\n`GET \/users\/:user_id\/authentication-tokens`\n\n| Parameter  | Description         |\n| ---------- | ------------------- |\n| 
`:user_id` | The ID of the User. |\n\nUse the [Account API](\/terraform\/cloud-docs\/api-docs\/account) to find your own user ID.\n\nThe objects returned by this endpoint only contain metadata, and do not include the secret text of any authentication tokens. A token is only shown upon creation, and cannot be recovered later.\n\n-> **Note:** You must access this endpoint with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens), and it will only return useful data for that token's user account.\n\n| Status  | Response                                                | Reason                                                                                |\n| ------- | ------------------------------------------------------- | ------------------------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"authentication-tokens\"`) | The request was successful                                                            |\n| [200][] | Empty [JSON API document][] (no type)                   | User has no authentication tokens, or request was made by someone other than the user |\n| [404][] | [JSON API error object][]                               | User not found                                                                        |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.  If neither pagination query parameters are provided, the endpoint will not be paginated and will return all results.\n\n| Parameter      | Description                                                                 |\n| -------------- | --------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.         
 |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 user tokens per page. |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/users\/user-MA4GL63FmYRpSFxa\/authentication-tokens\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"at-QmATJea6aWj1xR2t\",\n      \"type\": \"authentication-tokens\",\n      \"attributes\": {\n        \"created-at\": \"2018-11-06T22:56:10.203Z\",\n        \"last-used-at\": null,\n        \"description\": null,\n        \"token\": null,\n        \"expired-at\": null\n      },\n      \"relationships\": {\n        \"created-by\": {\n          \"data\": null\n        }\n      }\n    },\n    {\n      \"id\": \"at-6yEmxNAhaoQLH1Da\",\n      \"type\": \"authentication-tokens\",\n      \"attributes\": {\n        \"created-at\": \"2018-11-25T22:31:30.624Z\",\n        \"last-used-at\": \"2018-11-26T20:27:54.931Z\",\n        \"description\": \"api\",\n        \"token\": null,\n        \"expired-at\": \"2023-04-06T12:00:00.000Z\"\n      },\n      \"relationships\": {\n        \"created-by\": {\n          \"data\": {\n            \"id\": \"user-MA4GL63FmYRpSFxa\",\n            \"type\": \"users\"\n          }\n        }\n      }\n    }\n  ]\n}\n```\n\n## Show a User Token\n\n`GET \/authentication-tokens\/:id`\n\n| Parameter | Description               |\n| --------- | ------------------------- |\n| `:id`     | The ID of the User Token. |\n\nThe objects returned by this endpoint only contain metadata, and do not include the secret text of any authentication tokens. 
A token is only shown upon creation, and cannot be recovered later.\n\n-> **Note:** You must access this endpoint with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens), and it will only return useful data for that token's user account.\n\n| Status  | Response                                                | Reason                                                       |\n| ------- | ------------------------------------------------------- | ------------------------------------------------------------ |\n| [200][] | [JSON API document][] (`type: \"authentication-tokens\"`) | The request was successful                                   |\n| [404][] | [JSON API error object][]                               | User Token not found, or unauthorized to view the User Token |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/v2\/authentication-tokens\/at-6yEmxNAhaoQLH1Da\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"at-6yEmxNAhaoQLH1Da\",\n    \"type\": \"authentication-tokens\",\n    \"attributes\": {\n      \"created-at\": \"2018-11-25T22:31:30.624Z\",\n      \"last-used-at\": \"2018-11-26T20:34:59.487Z\",\n      \"description\": \"api\",\n      \"token\": null,\n      \"expired-at\": \"2023-04-06T12:00:00.000Z\"\n    },\n    \"relationships\": {\n      \"created-by\": {\n        \"data\": {\n          \"id\": \"user-MA4GL63FmYRpSFxa\",\n          \"type\": \"users\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Create a User Token\n\n`POST \/users\/:user_id\/authentication-tokens`\n\n| Parameter  | Description         |\n| ---------- | ------------------- |\n| `:user_id` | The ID of the User. 
|\n\nUse the [Account API](\/terraform\/cloud-docs\/api-docs\/account) to find your own user ID.\n\nThis endpoint returns the secret text of the created authentication token. A token is only shown upon creation, and cannot be recovered later.\n\n-> **Note:** You must access this endpoint with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens), and it will only create new tokens for that token's user account.\n\n| Status  | Response                                                | Reason                                                         |\n| ------- | ------------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"authentication-tokens\"`) | The request was successful                                     |\n| [404][] | [JSON API error object][]                               | User not found or user unauthorized to perform action          |\n| [422][] | [JSON API error object][]                               | Malformed request body (missing attributes, wrong types, etc.) |\n| [500][] | [JSON API error object][]                               | Failure during User Token creation                             |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                      | Type   | Default | Description                                                                                                     |\n| ----------------------------- | ------ | ------- | --------------------------------------------------------------------------------------------------------------- |\n| `data.type`                   | string |         | Must be `\"authentication-tokens\"`.                                                                              
|\n| `data.attributes.description` | string |         | The description for the User Token.                                                                             |\n| `data.attributes.expired-at`  | string | `null`  | The UTC date and time that the User Token will expire, in ISO 8601 format. If omitted or set to `null` the token will never expire. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"authentication-tokens\",\n    \"attributes\": {\n      \"description\":\"api\",\n      \"expired-at\": \"2023-04-06T12:00:00.000Z\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/users\/user-MA4GL63FmYRpSFxa\/authentication-tokens\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"at-MKD1X3i4HS3AuD41\",\n    \"type\": \"authentication-tokens\",\n    \"attributes\": {\n      \"created-at\": \"2018-11-26T20:48:35.054Z\",\n      \"last-used-at\": null,\n      \"description\": \"api\",\n      \"token\": \"6tL24nM38M7XWQ.atlasv1.KmWckRfzeNmUVFNvpvwUEChKaLGznCSD6fPf3VPzqMMVzmSxFU0p2Ibzpo2h5eTGwPU\",\n      \"expired-at\": \"2023-04-06T12:00:00.000Z\"\n    },\n    \"relationships\": {\n      \"created-by\": {\n        \"data\": {\n          \"id\": \"user-MA4GL63FmYRpSFxa\",\n          \"type\": \"users\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Destroy a User Token\n\n`DELETE \/authentication-tokens\/:id`\n\n| Parameter | Description                          |\n| --------- | ------------------------------------ |\n| `:id`     | The ID of the User Token to destroy. 
|\n\n-> **Note:** You must access this endpoint with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens), and it will only delete tokens for that token's user account.\n\n| Status  | Response                  | Reason                                                       |\n| ------- | ------------------------- | ------------------------------------------------------------ |\n| [204][] | Empty response            | The User Token was successfully destroyed                    |\n| [404][] | [JSON API error object][] | User Token not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/authentication-tokens\/at-6yEmxNAhaoQLH1Da\n```","site":"terraform","answers_cleaned":"    page title  User Tokens   API Docs   HCP Terraform description       Use the   authentication tokens  endpoint to manage user specific API tokens  List  show  create  and destroy user tokens using the HTTP API        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla 
org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    User Tokens API     List User Tokens   GET  users  user id authentication tokens     Parameter    Description                                                    user id    The ID of the User     Use the  Account API   terraform cloud docs api docs account  to find your own user ID   The objects returned by this endpoint only contain metadata  and do not include the secret text of any authentication tokens  A token is only shown upon creation  and cannot be recovered later        Note    You must access this endpoint with a  user token   terraform cloud docs users teams organizations users api tokens   and it will only return useful data for that token s user account     Status    Response                                                  Reason                                                                                                                                                                                                                                                   200       JSON API document      type   authentication tokens      The request was successful                                                                 200      Empty  JSON API document     no type                      User has no authentication tokens  or request was made by someone other than the user      404       JSON API error object                                    User not found                                                                               Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   Remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs   If neither pagination query 
parameters are provided  the endpoint will not be paginated and will return all results     Parameter        Description                                                                                                                                                                       page number       Optional    If omitted  the endpoint will return the first page                page size         Optional    If omitted  the endpoint will return 20 user tokens per page         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 users user MA4GL63FmYRpSFxa authentication tokens          Sample Response     json      data                  id    at QmATJea6aWj1xR2t          type    authentication tokens          attributes              created at    2018 11 06T22 56 10 203Z            last used at   null           description   null           token   null           expired at   null                 relationships              created by                data   null                                       id    at 6yEmxNAhaoQLH1Da          type    authentication tokens          attributes              created at    2018 11 25T22 31 30 624Z            last used at    2018 11 26T20 27 54 931Z            description    api            token   null           expired at    2023 04 06T12 00 00 000Z                  relationships              created by                data                  id    user MA4GL63FmYRpSFxa                type    users                                                    Show a User Token   GET  authentication tokens  id     Parameter   Description                                                               id        The ID of the User Token     The objects returned by this endpoint only contain metadata  and do not include the secret text of any authentication tokens  A token is only shown upon creation  and cannot be 
recovered later        Note    You must access this endpoint with a  user token   terraform cloud docs users teams organizations users api tokens   and it will only return useful data for that token s user account     Status    Response                                                  Reason                                                                                                                                                                                                 200       JSON API document      type   authentication tokens      The request was successful                                        404       JSON API error object                                    User Token not found  or unauthorized to view the User Token        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request GET     https   app terraform io api v2 authentication tokens at 6yEmxNAhaoQLH1Da          Sample Response     json      data          id    at 6yEmxNAhaoQLH1Da        type    authentication tokens        attributes            created at    2018 11 25T22 31 30 624Z          last used at    2018 11 26T20 34 59 487Z          description    api          token   null         expired at    2023 04 06T12 00 00 000Z              relationships            created by              data                id    user MA4GL63FmYRpSFxa              type    users                                        Create a User Token   POST  users  user id authentication tokens     Parameter    Description                                                    user id    The ID of the User     Use the  Account API   terraform cloud docs api docs account  to find your own user ID   This endpoint returns the secret text of the created authentication token  A token is only shown upon creation  and cannot be recovered later        Note    You must access this endpoint with a  user token   terraform cloud docs users teams 
organizations users api tokens   and it will only create new tokens for that token s user account     Status    Response                                                  Reason                                                                                                                                                                                                     201       JSON API document      type   authentication tokens      The request was successful                                          404       JSON API error object                                    User not found or user unauthorized to perform action               422       JSON API error object                                    Malformed request body  missing attributes  wrong types  etc        500       JSON API error object                                    Failure during User Token creation                                    Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                        Type     Default   Description                                                                                                                                                                                                                                                                                 data type                      string             Must be   authentication tokens                                                                                      data attributes description    string             The description for the User Token                                                                                   data attributes expired at     string    null     The UTC date and time that the User Token will expire  in ISO 8601 format  If omitted or set to  null  the token will never expire         Sample Payload     json      data          type    authentication 
tokens        attributes            description   api          expired at    2023 04 06T12 00 00 000Z                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 users user MA4GL63FmYRpSFxa authentication tokens          Sample Response     json      data          id    at MKD1X3i4HS3AuD41        type    authentication tokens        attributes            created at    2018 11 26T20 48 35 054Z          last used at   null         description    api          token    6tL24nM38M7XWQ atlasv1 KmWckRfzeNmUVFNvpvwUEChKaLGznCSD6fPf3VPzqMMVzmSxFU0p2Ibzpo2h5eTGwPU          expired at    2023 04 06T12 00 00 000Z              relationships            created by              data                id    user MA4GL63FmYRpSFxa              type    users                                        Destroy a User Token   DELETE  authentication tokens  id     Parameter   Description                                                                                     id        The ID of the User Token to destroy          Note    You must access this endpoint with a  user token   terraform cloud docs users teams organizations users api tokens   and it will only delete tokens for that token s user account     Status    Response                    Reason                                                                                                                                                                   204      Empty response              The User Token was successfully destroyed                         404       JSON API error object      User Token not found  or user unauthorized to perform action        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 authentication tokens at 
6yEmxNAhaoQLH1Da    "}
{"questions":"terraform A meaningful description of this endpoint Boilerplate link references This entire list should be included at the top of every API page so that the tables can use short links freely This SHOULD be all of the status codes we use in HCP Terraform s API if we need to add more update the list on every page 200 https developer mozilla org en US docs Web HTTP Status 200 Follow this template to format each API method There are usually multiple sections like this on a given API endpoint page page title Something API Docs HCP Terraform","answers":"---\npage_title: Something - API Docs - HCP Terraform\ndescription: A meaningful description of this endpoint.\n---\n\nFollow this template to format each API method. There are usually multiple sections like this on a given API endpoint page.\n\n<!-- Boilerplate link references: This entire list should be included at the top of every API page, so that the tables can use short links freely. This SHOULD be all of the status codes we use in HCP Terraform's API; if we need to add more, update the list on every page. 
-->\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: http:\/\/jsonapi.org\/format\/#error-objects\n\n## Create a Something\n\n<!-- Header: \"Verb a Noun\" or \"Verb Nouns.\" -->\n\n`POST \/organizations\/:organization_name\/somethings`\n\n<!-- ^ The method and path are styled as a single code span, with global prefix (`\/api\/v2`) omitted and the method capitalized. 
-->\n\n| Parameter            | Description                                                                                                                                                              |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to create the something in. The organization must already exist in the system, and the user must have permissions to create new somethings. |\n\n<!-- ^ The list of URL path parameters goes directly below the method and path, without a header of its own. They're simpler than other parameters because they're always strings and they're always mandatory, so this table only has two columns. Prefix URL path parameter names with a colon.\n\nIf further explanation of this method is needed beyond its title, write it here, after the parameter list. -->\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n<!-- ^ Include a note like the above if the endpoint CANNOT be used with a given token type. Most endpoints don't need this. 
-->\n\n| Status  | Response                                     | Reason                                                         |\n| ------- | -------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"somethings\"`) | Successfully created a something                               |\n| [400][] | [JSON API error object][]                    | Invalid `include` parameter                                    |\n| [404][] | [JSON API error object][]                    | Organization not found, or user unauthorized to perform action |\n| [422][] | [JSON API error object][]                    | Malformed request body (missing attributes, wrong types, etc.) |\n| [500][] | [JSON API error object][]                    | Failure during something creation                              |\n\n<!-- ^ Include status codes even if they're plain 200\/404.\nIf a JSON API document is returned, specify the `type`.\nIf the table includes links, use reference-style links to keep the table size small. The references should be included once per API page, at the very top.\n -->\n\n### Query Parameters\n\n[These are standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n<!-- ^ Query parameters get their own header and boilerplate. Omit the whole section if this method takes no query parameters; we only use them for certain GET requests. -->\n\n| Parameter               | Description                                                   |\n| ----------------------- | ------------------------------------------------------------- |\n| `filter[workspace][id]` | **Required.** The workspace ID where this action will happen. |\n\n<!-- ^ This table is flexible. 
If we somehow end up with a case where there's a long list of parameters, in a mix of optional and required, you could add a \"Required?\" or \"Default\" column or something; likewise if there are multiple data types in play. But in the usual minimal case, keep the table minimal and style important information as strong emphasis.\n\nDo not prefix query parameter names with a question mark. -->\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n<!-- ^ Payload parameters go under this header and boilerplate. -->\n\n| Key path                    | Type   | Default | Description                                                                                            |\n| --------------------------- | ------ | ------- | ------------------------------------------------------------------------------------------------------ |\n| `data.type`                 | string |         | Must be `\"somethings\"`.                                                                                |\n| `data[].type`               | string |         | ... <!-- use data[].x when data is an array of objects. -->                                            |\n| `data.attributes.category`  | string |         | Whether this is a blue or red something. Valid values are `\"blue\"` or `\"red\"`.                         |\n| `data.attributes.sensitive` | bool   | `false` | Whether the value is sensitive. If true then the something is written once and not visible thereafter. |\n| `filter.workspace.name`     | string |         | The name of the workspace that owns the something.                                                     |\n| `filter.organization.name`  | string |         | The name of the organization that owns the workspace.                                                  
|\n\n<!--\n- Name the paths to these object properties with dot notation, starting from the\n  root of the JSON object. So, `data.attributes.category` instead of just\n  `category`. Since our API format uses deeply nested structures and is finicky\n  about the details, err on the side of being very explicit about where the user\n  puts everything.\n- Style key paths as code spans.\n- Style data types as plain text.\n- Style string values as code spans with interior double-quotes, to distinguish\nthem from unquoted values like booleans and nulls.\n- If a limited number of values are valid, list them in the description.\n- In the rare case where a parameter is optional but has no default, you can\n  list something like \"(nothing)\" as the default and explain in the description.\n- List the properties in the simplest order you can... but the concept of\n  \"simple\" can be a little complex. ;) As a general guideline:\n    - The first level of sorting is _importance._ This is open to interpretation,\n      but at least put the type and name first.\n    - The second level of sorting is _complexity._ If one of the properties is a\n      huge object with a bunch of sub-properties, put it last \u2014\u00a0this lets the\n      reader clear the simpler properties out of their head before dealing with\n      it, without having to remember where they were in the list and without\n      having to remember to pop back out of the \"big sub-object\" context when\n      they hit the end of it.\n    - The third order of sorting is _predictability,_ which basically means that\n      within a group of properties of equal relative importance and complexity,\n      you should probably list them alphabetically so it's easier to find a\n      specific property.\n-->\n\n### Available Related Resources\n\n<!-- Omit this subheader and section if it's not applicable. 
-->\n\nThis GET endpoint can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\n| Resource Name      | Description                                   |\n| ------------------ | --------------------------------------------- |\n| `organization`     | The full organization record.                 |\n| `current_run`      | Additional information about the current run. |\n| `current_run.plan` | The plan used in the current run.             |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\":\"somethings\",\n    \"attributes\": {\n      \"category\":\"red\",\n      \"sensitive\":true\n    }\n  },\n  \"filter\": {\n    \"organization\": {\n      \"name\":\"my-organization\"\n    },\n    \"workspace\": {\n      \"name\":\"my-workspace\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/somethings\n```\n\n<!-- In curl examples, you can use the `$TOKEN` environment variable. If it's a GET request with query parameters, you can use double-quotes to have curl handle the URL encoding for you.\n\nMake sure to test a query that's very nearly the same as the example, to avoid errors. 
-->\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"som-EavQ1LztoRTQHSNT\",\n    \"type\":\"somethings\",\n    \"attributes\": {\n      \"sensitive\":true,\n      \"category\":\"red\"\n    },\n    \"relationships\": {\n      \"configurable\": {\n        \"data\": {\n          \"id\":\"ws-4j8p6jX1w33MiDC7\",\n          \"type\":\"workspaces\"\n        },\n        \"links\": {\n          \"related\":\"\/api\/v2\/organizations\/my-organization\/workspaces\/my-workspace\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\":\"\/api\/v2\/somethings\/som-EavQ1LztoRTQHSNT\"\n    }\n  }\n}\n```\n\n<!-- Make sure to mangle any real IDs this might expose. -->","site":"terraform","answers_cleaned":"    page title  Something   API Docs   HCP Terraform description  A meaningful description of this endpoint       Follow this template to format each API method  There are usually multiple sections like this on a given API endpoint page        Boilerplate link references  This entire list should be included at the top of every API page  so that the tables can use short links freely  This SHOULD be all of the status codes we use in HCP Terraform s API  if we need to add more  update the list on every page        200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422  
 429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   http   jsonapi org format  error objects     Create a Something       Header   Verb a Noun  or  Verb Nouns         POST  organizations  organization name somethings          The method and path are styled as a single code span  with global prefix    api v2   omitted and the method capitalized         Parameter              Description                                                                                                                                                                                                                                                                                                                                                                        organization name    The name of the organization to create the something in  The organization must already exist in the system  and the user must have permissions to create new somethings            The list of URL path parameters goes directly below the method and path  without a header of its own  They re simpler than other parameters because they re always strings and they re always mandatory  so this table only has two columns  Prefix URL path parameter names with a colon   If further explanation of this method is needed beyond its title  write it here  after the parameter list            Note    This endpoint cannot be accessed with  organization tokens   terraform cloud docs users teams organizations api tokens organization api tokens   You must access it with a  user token   terraform cloud docs users teams organizations users api tokens  or  team token   terraform cloud docs users teams organizations api tokens team api tokens           Include a note like the above if the 
endpoint CANNOT be used with a given token type  Most endpoints don t need this         Status    Response                                       Reason                                                                                                                                                                                          200       JSON API document      type   somethings      Successfully created a team                                         400       JSON API error object                         Invalid  include  parameter                                         404       JSON API error object                         Organization not found  or user unauthorized to perform action      422       JSON API error object                         Malformed request body  missing attributes  wrong types  etc        500       JSON API error object                         Failure during team creation                                             Include status codes even if they re plain 200 404  If a JSON API document is returned  specify the  type   If the table includes links  use reference style links to keep the table size small  The references should be included once per API page  at the very top            Query Parameters   These are standard URL query parameters   terraform cloud docs api docs query parameters   remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs          Query parameters get their own header and boilerplate  Omit the whole section if this method takes no query parameters  we only use them for certain GET requests         Parameter                 Description                                                                                                                                                    filter workspace  id       Required    The workspace ID where this action will happen            This table is flexible  If we somehow end up with a case where there s a long list of 
parameters in a mix of optional and required, you could add a **Required** or **Default** column or something; likewise if there are multiple data types in play. But in the usual minimal case, keep the table minimal and style important information as strong emphasis. Do not prefix query parameter names with a question mark.\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n*Payload parameters go under this header and boilerplate.*\n\n| Key path | Type | Default | Description |\n|---|---|---|---|\n| `data.type` | string | | Must be `\"somethings\"`. |\n| `data[].type` | string | | Use `data[].x` when `data` is an array of objects. |\n| `data.attributes.category` | string | | Whether this is a blue or red something. Valid values are `\"blue\"` or `\"red\"`. |\n| `data.attributes.sensitive` | bool | `false` | Whether the value is sensitive. If true then the something is written once and not visible thereafter. |\n| `filter.workspace.name` | string | | The name of the workspace that owns the something. |\n| `filter.organization.name` | string | | The name of the organization that owns the workspace. |\n\nName the paths to these object properties with dot notation, starting from the root of the JSON object. So `data.attributes.category` instead of just `category`. Since our API format uses deeply nested structures and is finicky about the details, err on the side of being very explicit about where the user puts everything.\n\n- Style key paths as code spans.\n- Style data types as plain text.\n- Style string values as code spans with interior double quotes, to distinguish them from unquoted values like booleans and nulls.\n- If a limited number of values are valid, list them in the description.\n- In the rare case where a parameter is optional but has no default, you can list something like `(nothing)` as the default and explain in the description.\n- List the properties in the simplest order you can, but the concept of \"simple\" can be a little complex. As a general guideline:\n  - The first level of sorting is importance. This is open to interpretation, but at least put the type and name first.\n  - The second level of sorting is complexity. If one of the properties is a huge object with a bunch of sub-properties, put it last; this lets the reader clear the simpler properties out of their head before dealing with it, without having to remember where they were in the list and without having to remember to pop back out of the \"big sub-object\" context when they hit the end of it.\n  - The third order of sorting is predictability, which basically means that within a group of properties of equal relative importance and complexity, you should probably list them alphabetically so it's easier to find a specific property.\n\n### Available Related Resources\n\n*Omit this subheader and section if it's not applicable.*\n\nThis GET endpoint can optionally return related resources, if requested with the `include` query parameter ([more about inclusion of related resources](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources)). The following resource types are available:\n\n| Resource Name | Description |\n|---|---|\n| `organization` | The full organization record |\n| `current_run` | Additional information about the current run |\n| `current_run.plan` | The plan used in the current run |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"somethings\",\n    \"attributes\": {\n      \"category\": \"red\",\n      \"sensitive\": true\n    }\n  },\n  \"filter\": {\n    \"organization\": {\n      \"name\": \"my-organization\"\n    },\n    \"workspace\": {\n      \"name\": \"my-workspace\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/somethings\n```\n\nIn curl examples, you can use the `$TOKEN` environment variable. If it's a GET request with query parameters, you can use double quotes to have curl handle the URL encoding for you. Make sure to test a query that's very nearly the same as the example, to avoid errors.\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"som-EavQ1LztoRTQHSNT\",\n    \"type\": \"somethings\",\n    \"attributes\": {\n      \"sensitive\": true,\n      \"category\": \"red\"\n    },\n    \"relationships\": {\n      \"configurable\": {\n        \"data\": {\n          \"id\": \"ws-4j8p6jX1w33MiDC7\",\n          \"type\": \"workspaces\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/my-organization\/workspaces\/my-workspace\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/somethings\/som-EavQ1LztoRTQHSNT\"\n    }\n  }\n}\n```\n\nMake sure to mangle any real IDs this might expose.\n"}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title Runs API Docs HCP Terraform Use the runs endpoint to manage Terraform runs List get create apply discard execute and cancel runs using the HTTP API 201 https developer mozilla org en US docs Web HTTP Status 201","answers":"---\npage_title: Runs - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/runs` endpoint to manage Terraform runs. List, get, create, apply, discard, execute, and cancel runs using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Runs API\n\n-> **Note:** Before working with the runs or configuration versions APIs, read the [API-driven run workflow](\/terraform\/cloud-docs\/run\/api) page, which includes both a full overview of this 
workflow and a walkthrough of a simple implementation of it.\n\nPerforming a run on a new configuration is a multi-step process.\n\n1. [Create a configuration version on the workspace](\/terraform\/cloud-docs\/api-docs\/configuration-versions#create-a-configuration-version).\n1. [Upload configuration files to the configuration version](\/terraform\/cloud-docs\/api-docs\/configuration-versions#upload-configuration-files).\n1. [Create a run on the workspace](#create-a-run); this is done automatically when a configuration file is uploaded.\n1. [Create and queue an apply on the run](#apply-a-run), if the run can't be auto-applied.\n\nAlternatively, you can create a run with a pre-existing configuration version, even one from another workspace. This is useful for promoting known-good code from one workspace to another.\n\n## Attributes\n\n### Run States\n\nThe run state is found in `data.attributes.status`, and you can reference the following list of possible states.\n\n| State | Description |\n|---|---|\n| `pending` | The initial status of a run after creation. 
                                                                                                                                                                                                                                                                                                                                                    |\n| `fetching`             | The run is waiting for HCP Terraform to fetch the configuration from VCS.                                                                                                                                                                                                                                                                                                                                                                 |\n| `fetching_completed`   | HCP Terraform has fetched the configuration from VCS and the run will continue.                                                                                                                                                                                                                                                                                                                                                           |\n| `pre_plan_running`     | The pre-plan phase of the run is in progress.                                                                                                                                                                                                                                                                                                                                                                                               |\n| `pre_plan_completed`   | The pre-plan phase of the run has completed.                                                                                                                                                                                                                        
                                                                                                                                                                        |\n| `queuing`              | HCP Terraform is queuing the run to start the planning phase.                                                                                                                                                                                                                                                                                                                                                                                        |\n| `plan_queued`          | HCP Terraform is waiting for its backend services to start the plan. |\n| `planning`             | The planning phase of a run is in progress.                                                                                                                                                                                                                                                                                                                                                                                                 |\n| `planned`              | The planning phase of a run has completed.                                                                                                                                                                                                                                                                                                                                                                                                  |\n| `cost_estimating`      | The cost estimation phase of a run is in progress.                                                                                                                                                                                                                                                                              
|\n| `cost_estimated` | The cost estimation phase of a run has completed. |\n| `policy_checking` | The Sentinel policy checking phase of a run is in progress. |\n| `policy_override` | A Sentinel policy has soft failed, and a user can override it to continue the run. |\n| `policy_soft_failed` | A Sentinel policy has soft failed for a plan-only run. This is a final state. 
|\n| `policy_checked` | The Sentinel policy checking phase of a run has completed. |\n| `confirmed` | A user has confirmed the plan. |\n| `post_plan_running` | The post-plan phase of the run is in progress. |\n| `post_plan_completed` | The post-plan phase of the run has completed. |\n| `planned_and_finished` | The run is completed. This status only exists for plan-only runs and runs that produce a plan with no changes to apply. This is a final state. 
|\n| `planned_and_saved` | The run has finished its planning, checks, and estimates, and can be confirmed for apply. This status is only used for saved plan runs. |\n| `apply_queued` | Once the changes in the plan have been confirmed, the run will transition to `apply_queued`. This status indicates that the run should start as soon as the backend services that run Terraform have available capacity. In HCP Terraform, you should seldom see this status, as our aim is to always have capacity. However, in Terraform Enterprise this status will be more common due to the self-hosted nature of the platform. |\n| `applying` | Terraform is applying the changes specified in the plan. |\n| `applied` | Terraform has applied the changes specified in the plan. |\n| `discarded` | The run has been discarded. This is a final state. 
|\n| `errored` | The run has errored. This is a final state. |\n| `canceled` | The run has been canceled. |\n| `force_canceled` | A workspace admin forcefully canceled the run. |\n\n### Run Operations\n\nThe run operation specifies the Terraform execution mode. 
You can reference the following list of possible execution modes and use them as query parameters in the [workspace](\/terraform\/cloud-docs\/api-docs\/run#list-runs-in-a-workspace) and [organization](\/terraform\/cloud-docs\/api-docs\/run#list-runs-in-a-organization) runs lists.\n\n| Operation | Description |\n| --- | --- |\n| `plan_only` | The run does not have an apply phase. This is also called a [_speculative plan_](\/terraform\/cloud-docs\/run\/modes-and-options#plan-only-speculative-plan). |\n| `plan_and_apply` | The run includes both plan and apply phases. |\n| `save_plan` | The run is a saved plan run. It can include both plan and apply phases, but only becomes the workspace's current run if a user chooses to apply it. |\n| `refresh_only` | The run should update Terraform state, but not make changes to resources. |\n| `destroy` | The run should destroy all objects, regardless of configuration changes. |\n| `empty_apply` | The run should perform an apply with no changes to resources. This is most commonly used to [upgrade Terraform state versions](\/terraform\/cloud-docs\/workspaces\/state#upgrading-state). 
|\n\n### Run Sources\n\nYou can use the following sources as query parameters in [workspace](\/terraform\/cloud-docs\/api-docs\/run#list-runs-in-a-workspace) and [organization](\/terraform\/cloud-docs\/api-docs\/run#list-runs-in-a-organization) runs lists.\n\n| Source | Description |\n|---|---|\n| `tfe-ui` | Indicates a run was queued from the HCP Terraform UI. |\n| `tfe-api` | Indicates a run was queued from the HCP Terraform API. |\n| `tfe-configuration-version` | Indicates a run was queued from a Configuration Version, triggered from a VCS provider. |\n\n### Run Status Groups\n\nThe run status group specifies a collection of run states by logical category.\n\n| Group | Description |\n|---|---|\n| `non_final` | Inclusive of runs that are currently running, require user confirmation, 
or are queued\/pending.                                                                                                                                                                                                                                                                                                                                                              |\n| `final`             | Inclusive of runs that have reached their final and terminal state.                                                                                                                                                                                                                                                                                                                                                                 |\n| `discardable`   | Inclusive of runs whose state falls under the following: `planned`, `planned_and_saved`, `cost_estimated`, `policy_checked`, `policy_override`, `post_plan_running`, `post_plan_completed`                                                                                                                                                                                                                                                                                            |\n\n\n## Create a Run\n\n`POST \/runs`\n\nA run performs a plan and apply, using a configuration version and the workspace\u2019s current variables. You can specify a configuration version when creating a run; if you don\u2019t provide one, the run defaults to the workspace\u2019s most recently used version. (A configuration version is \u201cused\u201d when it is created or used for a run in this workspace.)\n\nCreating a run requires permission to queue plans for the specified workspace. 
([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\nWhen creating a run, you may optionally provide a list of variable objects containing key and value attributes. These values apply to that run specifically and take precedence over variables with the same key applied to the workspace (e.g., variable sets). Refer to [Variable Precedence](\/terraform\/cloud-docs\/workspaces\/variables#precedence) for more information. All values must be expressed as an HCL literal in the same syntax you would use when writing Terraform code. Refer to [Types](\/terraform\/language\/expressions\/types#types) in the Terraform documentation for more details.\n\nSetting `debugging_mode: true` enables debugging mode for the queued run only. This is equivalent to setting the `TF_LOG` environment variable to `TRACE` for this run. See [Debugging Terraform](\/terraform\/internals\/debugging) for more information.\n\n**Sample Run Variables:**\n\n```json\n\"attributes\": {\n  \"variables\": [\n    { \"key\": \"replicas\", \"value\": \"2\" },\n    { \"key\": \"access_key\", \"value\": \"\\\"ABCDE12345\\\"\" }\n  ]\n}\n```\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). 
You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type | Default | Description |\n| --- | --- | --- | --- |\n| `data.attributes.allow-empty-apply` | bool | none | Specifies whether Terraform can apply the run even when the plan [contains no changes](\/terraform\/cloud-docs\/run\/modes-and-options#allow-empty-apply). Use this property to [upgrade state](\/terraform\/cloud-docs\/workspaces\/state#upgrading-state) after upgrading a workspace to a new Terraform version. 
|\n| `data.attributes.allow-config-generation`          | bool                 | `false`                                              | Specifies whether Terraform can [generate resource configuration](\/terraform\/language\/import\/generating-configuration) when planning to import new resources. When set to `false`, Terraform returns an error when `import` blocks do not have a corresponding `resource` block.                                                                                                                                                                                  |\n| `data.attributes.auto-apply`                       | bool                 | Defaults to the [Auto Apply](\/terraform\/cloud-docs\/workspaces\/settings#auto-apply-and-manual-apply) workspace setting.  | Determines if Terraform automatically applies the configuration on a successful `terraform plan`. |\n| `data.attributes.debugging-mode`                   | bool                 | `false`                                              | When set to `true`, enables verbose logging for the queued plan. |\n| `data.attributes.is-destroy`                       | bool                 | `false`                                              | When set to `true`, the plan destroys all provisioned resources. Mutually exclusive with `refresh-only`.                                                                                                                                                                                                                                                                                            |\n| `data.attributes.message`                          | string               | `\"Queued manually via the Terraform Enterprise API\"` | Specifies the message associated with this run.                                                                                                                                                                                                   
|\n| `data.attributes.refresh` | bool | `true` | Specifies whether or not to refresh the state before a plan. |\n| `data.attributes.refresh-only` | bool | `false` | When set to `true`, this run refreshes the state without modifying any resources. Mutually exclusive with `is-destroy`. |\n| `data.attributes.replace-addrs` | array\\[string] | | Specifies an optional list of resource addresses to be passed to the `-replace` flag. 
|\n| `data.attributes.target-addrs` | array\\[string] | | Specifies an optional list of resource addresses to be passed to the `-target` flag. |\n| `data.attributes.variables` | array\\[{key, value}] | (empty array) | Specifies an optional list of run-specific variable values. Refer to [Run-Specific Variables](\/terraform\/cloud-docs\/workspaces\/variables\/managing-variables#run-specific-variables) for details. |\n| `data.attributes.plan-only` | bool | (from configuration version) | Specifies if this is a [speculative, plan-only](\/terraform\/cloud-docs\/run\/modes-and-options#plan-only-speculative-plan) run that Terraform cannot apply. Often used in conjunction with `terraform-version` in order to test whether an upgrade would succeed. |\n| `data.attributes.save-plan` | bool | `false` | When set to `true`, the run is executed as a `save plan` run. A `save plan` run plans and checks the configuration without becoming the workspace's current run. 
These runs only become the current run if you confirm that you want to apply them when prompted. When creating new [configuration versions](\/terraform\/enterprise\/api-docs\/configuration-versions) for saved plan runs, be sure to make them `provisional`. |\n| `data.attributes.terraform-version`                | string               | none | Specifies the Terraform version to use in this run. Only valid for plan-only runs; must be a valid Terraform version available to the organization.                                                                                                                                                                                                                                                                           |\n| `data.relationships.workspace.data.id`             | string               | none | Specifies the workspace ID to execute the run in.                                                                                                                                                                                                                                                                                                                                                                    |\n| `data.relationships.configuration-version.data.id` | string               | none | Specifies the configuration version to use for this run. If the `configuration-version` object is omitted, Terraform uses the workspace's latest configuration version to create the run.                                                                                                                                                        
|\n\n| Status  | Response                               | Reason                                                                      |\n|---------|----------------------------------------|-----------------------------------------------------------------------------|\n| [201][] | [JSON API document][] (`type: \"runs\"`) | Successfully created a run                                                  |\n| [404][] | [JSON API error object][]              | Organization or workspace not found, or user unauthorized to perform action |\n| [422][] | [JSON API error object][]              | Malformed request body (missing attributes, wrong types, etc.)              |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"message\": \"Custom message\"\n    },\n    \"type\":\"runs\",\n    \"relationships\": {\n      \"workspace\": {\n        \"data\": {\n          \"type\": \"workspaces\",\n          \"id\": \"ws-LLGHCr4SWy28wyGN\"\n        }\n      },\n      \"configuration-version\": {\n        \"data\": {\n          \"type\": \"configuration-versions\",\n          \"id\": \"cv-n4XQPBa2QnecZJ4G\"\n        }\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"run-CZcmD7eagjhyX0vN\",\n    \"type\": \"runs\",\n    \"attributes\": {\n      \"actions\": {\n        \"is-cancelable\": true,\n        \"is-confirmable\": false,\n        \"is-discardable\": false,\n        \"is-force-cancelable\": false\n      },\n      \"canceled-at\": null,\n      \"created-at\": \"2021-05-24T07:38:04.171Z\",\n      \"has-changes\": false,\n      \"auto-apply\": false,\n      \"allow-empty-apply\": false,\n      \"allow-config-generation\": false,\n      \"is-destroy\": false,\n      
\"message\": \"Custom message\",\n      \"plan-only\": false,\n      \"source\": \"tfe-api\",\n      \"status-timestamps\": {\n        \"plan-queueable-at\": \"2021-05-24T07:38:04+00:00\"\n      },\n      \"status\": \"pending\",\n      \"trigger-reason\": \"manual\",\n      \"target-addrs\": null,\n      \"permissions\": {\n        \"can-apply\": true,\n        \"can-cancel\": true,\n        \"can-comment\": true,\n        \"can-discard\": true,\n        \"can-force-execute\": true,\n        \"can-force-cancel\": true,\n        \"can-override-policy-check\": true\n      },\n      \"refresh\": false,\n      \"refresh-only\": false,\n      \"replace-addrs\": null,\n      \"save-plan\": false,\n      \"variables\": []\n    },\n    \"relationships\": {\n      \"apply\": {...},\n      \"comments\": {...},\n      \"configuration-version\": {...},\n      \"cost-estimate\": {...},\n      \"created-by\": {...},\n      \"input-state-version\": {...},\n      \"plan\": {...},\n      \"run-events\": {...},\n      \"policy-checks\": {...},\n      \"workspace\": {...},\n      \"workspace-run-alerts\": {...}\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/runs\/run-CZcmD7eagjhyX0vN\"\n    }\n  }\n}\n```\n\n## Apply a Run\n\n`POST \/runs\/:run_id\/actions\/apply`\n\n| Parameter | Description         |\n|-----------|---------------------|\n| `run_id`  | The run ID to apply |\n\nApplies a run that is paused waiting for confirmation after a plan. This includes runs in the \"needs confirmation\" and \"policy checked\" states. This action is only required for runs that can't be auto-applied. 
Plans can be auto-applied if the auto-apply setting is enabled on the workspace and the plan was queued by a new VCS commit or by a user with permission to apply runs for the workspace.\n\n-> **Note:** If the run has a soft failed sentinel policy, you will need to [override the policy check](\/terraform\/cloud-docs\/api-docs\/policy-checks#override-policy) before Terraform can apply the run. You can find policy check details in the `relationships` section of the [run details endpoint](#get-run-details) response.\n\nApplying a run requires permission to apply runs for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nThis endpoint queues the request to perform an apply; the apply might not happen immediately.\n\nSince this endpoint represents an action (not a resource), it does not return any object in the response body.\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                  | Reason(s)                                               |\n|---------|---------------------------|---------------------------------------------------------|\n| [202][] | none                      | Successfully queued an apply request.                   |\n| [409][] | [JSON API error object][] | Run was not paused for confirmation; apply not allowed. 
|\n\n### Request Body\n\nThis POST endpoint allows an optional JSON object with the following properties as a request payload.\n\n| Key path  | Type   | Default | Description                        |\n|-----------|--------|---------|------------------------------------|\n| `comment` | string | `null`  | An optional comment about the run. |\n\n### Sample Payload\n\nThis payload is optional, so the `curl` command will work without the `--data @payload.json` option too.\n\n```json\n{\n  \"comment\":\"Looks good to me\"\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-DQGdmrWMX8z9yWQB\/actions\/apply\n```\n\n## List Runs in a Workspace\n\n`GET \/workspaces\/:workspace_id\/runs`\n\n| Parameter      | Description                        |\n|----------------|------------------------------------|\n| `workspace_id` | The workspace ID to list runs for. |\n\nBy default, `plan_only` runs will be excluded from the results. To see all runs, use `filter[operation]` with all available operations included as a comma-separated list.\nThis endpoint has an adjusted rate limit of 30 requests per minute. 
Note that most endpoints are limited to 30 requests per second.\n\n| Status  | Response                                         | Reason                   |\n|---------|--------------------------------------------------|--------------------------|\n| [200][] | Array of [JSON API document][]s (`type: \"runs\"`) | Successfully listed runs |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter                  | Description                                                                                                                                                                                                                  | Required |\n| -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- |\n| `page[number]`             | If omitted, the endpoint returns the first page.                                                                                                                                                                           | Optional |\n| `page[size]`               | If omitted, the endpoint returns 20 runs per page.                                                                                                                                                                         | Optional |\n| `filter[operation]`        | A comma-separated list of run operations. The result lists runs that perform one of these operations. For details on options, refer to [Run operations](\/terraform\/enterprise\/api-docs\/run#run-operations).                    
| Optional |\n| `filter[status]`           | A comma-separated list of run statuses. The result lists runs that are in one of the statuses you specify. For details on options, refer to [Run states](\/terraform\/enterprise\/api-docs\/run#run-states).                                  | Optional |\n| `filter[agent_pool_names]` | A comma-separated list of agent pool names. The result lists runs that use one of the agent pools you specify. | Optional |                                                                      \n| `filter[source]`           | A comma-separated list of run sources. The result lists runs that came from one of the sources you specify. Options are listed in [Run Sources](\/terraform\/enterprise\/api-docs\/run#run-sources).                               | Optional |\n| `filter[status_group]`     | A single status group. The result lists runs whose status falls under this status group. For details on options, refer to [Run status groups](\/terraform\/enterprise\/api-docs\/run#run-status-groups).                            | Optional |\n| `filter[timeframe]`        | A single year period. The result lists runs that were created within the year you specify. An integer year or the string \"year\" for the past year are valid values. If omitted, the endpoint returns all runs since the creation of the workspace.      | Optional |\n| `search[user]`             | Searches for runs that match the VCS username you supply.                                                                                                                                                                          | Optional |\n| `search[commit]`           | Searches for runs that match the commit sha you specify.                                                                                                                                                                            
| Optional |\n| `search[basic]`            | Searches for runs that match the VCS username, commit sha, run_id, or run message you specify. HCP Terraform prioritizes `search[commit]` or `search[user]` and ignores `search[basic]` in favor of the higher-priority parameters if you include them in your query.            | Optional |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-yF7z4gyEQRhaCNG9\/runs\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"run-CZcmD7eagjhyX0vN\",\n      \"type\": \"runs\",\n      \"attributes\": {\n        \"actions\": {\n          \"is-cancelable\": true,\n          \"is-confirmable\": false,\n          \"is-discardable\": false,\n          \"is-force-cancelable\": false\n        },\n        \"canceled-at\": null,\n        \"created-at\": \"2021-05-24T07:38:04.171Z\",\n        \"has-changes\": false,\n        \"auto-apply\": false,\n        \"allow-empty-apply\": false,\n        \"allow-config-generation\": false,\n        \"is-destroy\": false,\n        \"message\": \"Custom message\",\n        \"plan-only\": false,\n        \"source\": \"tfe-api\",\n        \"status-timestamps\": {\n          \"plan-queueable-at\": \"2021-05-24T07:38:04+00:00\"\n        },\n        \"status\": \"pending\",\n        \"trigger-reason\": \"manual\",\n        \"target-addrs\": null,\n        \"permissions\": {\n          \"can-apply\": true,\n          \"can-cancel\": true,\n          \"can-comment\": true,\n          \"can-discard\": true,\n          \"can-force-execute\": true,\n          \"can-force-cancel\": true,\n          \"can-override-policy-check\": true\n        },\n        \"refresh\": false,\n        \"refresh-only\": false,\n        \"replace-addrs\": null,\n        \"save-plan\": false,\n        \"variables\": []\n      },\n      \"relationships\": {\n       
 \"apply\": {...},\n        \"comments\": {...},\n        \"configuration-version\": {...},\n        \"cost-estimate\": {...},\n        \"created-by\": {...},\n        \"input-state-version\": {...},\n        \"plan\": {...},\n        \"run-events\": {...},\n        \"policy-checks\": {...},\n        \"workspace\": {...},\n        \"workspace-run-alerts\": {...}\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/runs\/run-bWSq4YeYpfrW4mx7\"\n      }\n    },\n    {...}\n  ]\n}\n```\n\n## List Runs in an Organization\n\n`GET \/organizations\/:organization_name\/runs`\n\n| Parameter      | Description                        |\n|----------------|------------------------------------|\n| `organization_name` | The organization name to list runs for. |\n\nThis endpoint has an adjusted rate limit of 30 requests per minute. Note that most endpoints are limited to 30 requests per second.\n\n| Status  | Response                                         | Reason                   |\n|---------|--------------------------------------------------|--------------------------|\n| [200][] | Array of [JSON API document][]s (`type: \"runs\"`) | Successfully listed runs |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter                  | Description                                                                                                                                                                                                                  | Required |\n| -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- |\n| `page[number]`            
 | If omitted, the endpoint returns the first page.                                                                                                                                                                           | Optional |\n| `page[size]`               | If omitted, the endpoint returns 20 runs per page.                                                                                                                                                                         | Optional |\n| `filter[operation]`        | A comma-separated list of run operations. The result lists runs that perform one of these operations. For details on options, refer to [Run operations](\/terraform\/enterprise\/api-docs\/run#run-operations).                    | Optional |\n| `filter[status]`           | A comma-separated list of run statuses. The result lists runs that are in one of the statuses you specify. For details on options, refer to [Run states](\/terraform\/enterprise\/api-docs\/run#run-states).                                  | Optional |\n| `filter[agent_pool_names]` | A comma-separated list of agent pool names. The result lists runs that use one of the agent pools you specify.                                                                                                          | Optional |\n| `filter[workspace_names]`  | A comma-separated list of workspace names. The result lists runs that belong to one of the workspaces you specify.                                                                                                                | Optional |\n| `filter[source]`           | A comma-separated list of run sources. The result lists runs that came from one of the sources you specify. Options are listed in [Run Sources](\/terraform\/enterprise\/api-docs\/run#run-sources).                               | Optional |\n| `filter[status_group]`     | A single status group. The result lists runs whose status falls under this status group. 
For details on options, refer to [Run status groups](\/terraform\/enterprise\/api-docs\/run#run-status-groups).                            | Optional |\n| `filter[timeframe]`        | A single year period. The result lists runs that were created within the year you specify. An integer year or the string \"year\" for the past year are valid values. If omitted, the endpoint returns runs created in the last year.                | Optional |\n| `search[user]`             | Searches for runs that match the VCS username you supply.                                                                                                                                                                          | Optional |\n| `search[commit]`           | Searches for runs that match the commit sha you specify.                                                                                                                                                                            | Optional |\n| `search[basic]`            | Searches for runs that match the VCS username, commit sha, run_id, or run message you specify. HCP Terraform prioritizes `search[commit]` or `search[user]` and ignores `search[basic]` in favor of the higher-priority parameters if you include them in your query.            
| Optional |\n\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/runs\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"run-CZcmD7eagjhyX0vN\",\n      \"type\": \"runs\",\n      \"attributes\": {\n        \"actions\": {\n          \"is-cancelable\": true,\n          \"is-confirmable\": false,\n          \"is-discardable\": false,\n          \"is-force-cancelable\": false\n        },\n        \"canceled-at\": null,\n        \"created-at\": \"2021-05-24T07:38:04.171Z\",\n        \"has-changes\": false,\n        \"auto-apply\": false,\n        \"allow-empty-apply\": false,\n        \"allow-config-generation\": false,\n        \"is-destroy\": false,\n        \"message\": \"Custom message\",\n        \"plan-only\": false,\n        \"source\": \"tfe-api\",\n        \"status-timestamps\": {\n          \"plan-queueable-at\": \"2021-05-24T07:38:04+00:00\"\n        },\n        \"status\": \"pending\",\n        \"trigger-reason\": \"manual\",\n        \"target-addrs\": null,\n        \"permissions\": {\n          \"can-apply\": true,\n          \"can-cancel\": true,\n          \"can-comment\": true,\n          \"can-discard\": true,\n          \"can-force-execute\": true,\n          \"can-force-cancel\": true,\n          \"can-override-policy-check\": true\n        },\n        \"refresh\": false,\n        \"refresh-only\": false,\n        \"replace-addrs\": null,\n        \"save-plan\": false,\n        \"variables\": []\n      },\n      \"relationships\": {\n        \"apply\": {...},\n        \"comments\": {...},\n        \"configuration-version\": {...},\n        \"cost-estimate\": {...},\n        \"created-by\": {...},\n        \"input-state-version\": {...},\n        \"plan\": {...},\n        \"run-events\": {...},\n        \"policy-checks\": {...},\n        \"workspace\": 
{...},\n        \"workspace-run-alerts\": {...}\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/runs\/run-bWSq4YeYpfrW4mx7\"\n      }\n    },\n    {...}\n  ]\n}\n```\n\n## Get run details\n\n`GET \/runs\/:run_id`\n\n| Parameter | Description        |\n|-----------|--------------------|\n| `:run_id` | The run ID to get. |\n\nThis endpoint is used for showing details of a specific run.\n\n| Status  | Response                               | Reason                               |\n|---------|----------------------------------------|--------------------------------------|\n| [200][] | [JSON API document][] (`type: \"runs\"`) | Success                              |\n| [404][] | [JSON API error object][]              | Run not found or user not authorized |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-bWSq4YeYpfrW4mx7\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"run-CZcmD7eagjhyX0vN\",\n    \"type\": \"runs\",\n    \"attributes\": {\n      \"actions\": {\n        \"is-cancelable\": true,\n        \"is-confirmable\": false,\n        \"is-discardable\": false,\n        \"is-force-cancelable\": false\n      },\n      \"canceled-at\": null,\n      \"created-at\": \"2021-05-24T07:38:04.171Z\",\n      \"has-changes\": false,\n      \"auto-apply\": false,\n      \"allow-empty-apply\": false,\n      \"allow-config-generation\": false,\n      \"is-destroy\": false,\n      \"message\": \"Custom message\",\n      \"plan-only\": false,\n      \"source\": \"tfe-api\",\n      \"status-timestamps\": {\n        \"plan-queueable-at\": \"2021-05-24T07:38:04+00:00\"\n      },\n      \"status\": \"pending\",\n      \"trigger-reason\": \"manual\",\n      \"target-addrs\": null,\n      \"permissions\": {\n        \"can-apply\": true,\n        \"can-cancel\": true,\n        \"can-comment\": true,\n        \"can-discard\": true,\n        
\"can-force-execute\": true,\n        \"can-force-cancel\": true,\n        \"can-override-policy-check\": true\n      },\n      \"refresh\": false,\n      \"refresh-only\": false,\n      \"replace-addrs\": null,\n      \"save-plan\": false,\n      \"variables\": []\n    },\n    \"relationships\": {\n      \"apply\": {...},\n      \"comments\": {...},\n      \"configuration-version\": {...},\n      \"cost-estimate\": {...},\n      \"created-by\": {...},\n      \"input-state-version\": {...},\n      \"plan\": {...},\n      \"run-events\": {...},\n      \"policy-checks\": {...},\n      \"task-stages\": {...},\n      \"workspace\": {...},\n      \"workspace-run-alerts\": {...}\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/runs\/run-bWSq4YeYpfrW4mx7\"\n    }\n  }\n}\n```\n\n## Discard a Run\n\n`POST \/runs\/:run_id\/actions\/discard`\n\n| Parameter | Description           |\n|-----------|-----------------------|\n| `run_id`  | The run ID to discard |\n\nThe `discard` action can be used to skip any remaining work on runs that are paused waiting for confirmation or priority. This includes runs in the \"pending,\" \"needs confirmation,\" \"policy checked,\" and \"policy override\" states.\n\nDiscarding a run requires permission to apply runs for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nThis endpoint queues the request to perform a discard; the discard might not happen immediately. After discarding, the run is completed and later runs can proceed.\n\nThis endpoint represents an action as opposed to a resource. As such, it does not return any object in the response body.\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). 
You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                  | Reason(s)                                                             |\n|---------|---------------------------|-----------------------------------------------------------------------|\n| [202][] | none                      | Successfully queued a discard request.                                |\n| [409][] | [JSON API error object][] | Run was not paused for confirmation or priority; discard not allowed. |\n\n### Request Body\n\nThis POST endpoint allows an optional JSON object with the following properties as a request payload.\n\n| Key path  | Type   | Default | Description                                            |\n|-----------|--------|---------|--------------------------------------------------------|\n| `comment` | string | `null`  | An optional explanation for why the run was discarded. |\n\n### Sample Payload\n\nThis payload is optional, so the `curl` command will work without the `--data @payload.json` option too.\n\n```json\n{\n  \"comment\": \"This run was discarded\"\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-DQGdmrWMX8z9yWQB\/actions\/discard\n```\n\n## Cancel a Run\n\n`POST \/runs\/:run_id\/actions\/cancel`\n\n| Parameter | Description          |\n|-----------|----------------------|\n| `run_id`  | The run ID to cancel |\n\nThe `cancel` action can be used to interrupt a run that is currently planning or applying. Performing a cancel is roughly equivalent to hitting ctrl+c during a Terraform plan or apply on the CLI. 
The running Terraform process is sent an `INT` signal, which instructs Terraform to end its work and wrap up in the safest way possible.\n\nCanceling a run requires permission to apply runs for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nThis endpoint queues the request to perform a cancel; the cancel might not happen immediately. After canceling, the run is completed and later runs can proceed.\n\nThis endpoint represents an action as opposed to a resource. As such, it does not return any object in the response body.\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n| Status  | Response                  | Reason(s)                                             |\n|---------|---------------------------|-------------------------------------------------------|\n| [202][] | none                      | Successfully queued a cancel request.                 |\n| [409][] | [JSON API error object][] | Run was not planning or applying; cancel not allowed. |\n| [404][] | [JSON API error object][] | Run was not found or user not authorized.             |\n\n### Request Body\n\nThis POST endpoint allows an optional JSON object with the following properties as a request payload.\n\n| Key path  | Type   | Default | Description                                           |\n|-----------|--------|---------|-------------------------------------------------------|\n| `comment` | string | `null`  | An optional explanation for why the run was canceled. 
|\n\n### Sample Payload\n\nThis payload is optional, so the `curl` command will work without the `--data @payload.json` option too.\n\n```json\n{\n  \"comment\": \"This run was stuck and would never finish.\"\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-DQGdmrWMX8z9yWQB\/actions\/cancel\n```\n\n## Forcefully cancel a run\n\n`POST \/runs\/:run_id\/actions\/force-cancel`\n\n| Parameter | Description          |\n|-----------|----------------------|\n| `run_id`  | The run ID to cancel |\n\nThe `force-cancel` action is like [cancel](#cancel-a-run), but ends the run immediately. Once invoked, the run is placed into a `canceled` state, and the running Terraform process is terminated. The workspace is immediately unlocked, allowing further runs to be queued. The `force-cancel` operation requires admin access to the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nThis endpoint enforces two prerequisites: a [non-forceful cancel](#cancel-a-run) must be performed first, and a cool-off period must have elapsed. To determine whether these criteria are met, check the `data.attributes.is-force-cancelable` value in the [run details endpoint](#get-run-details) response. The same response reports the time at which the force-cancel action becomes available in the `data.attributes.force_cancel_available_at` key. Note that this key is only present in the payload after the initial cancel has been initiated.\n\nThis endpoint represents an action as opposed to a resource. 
As such, it does not return any object in the response body.\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n~> **Warning:** This endpoint has potentially dangerous side-effects, including loss of any in-flight state in the running Terraform process. Use this operation with extreme caution.\n\n| Status  | Response                  | Reason(s)                                                                                                          |\n|---------|---------------------------|--------------------------------------------------------------------------------------------------------------------|\n| [202][] | none                      | Successfully queued a cancel request.                                                                              |\n| [409][] | [JSON API error object][] | Run was not planning or applying, has not been canceled non-forcefully, or the cool-off period has not yet passed. |\n| [404][] | [JSON API error object][] | Run was not found or user not authorized.                                                                          |\n\n### Request Body\n\nThis POST endpoint allows an optional JSON object with the following properties as a request payload.\n\n| Key path  | Type   | Default | Description                                           |\n|-----------|--------|---------|-------------------------------------------------------|\n| `comment` | string | `null`  | An optional explanation for why the run was canceled. 
|\n\n### Sample Payload\n\nThis payload is optional, so the `curl` command will work without the `--data @payload.json` option too.\n\n```json\n{\n  \"comment\": \"This run was stuck and would never finish.\"\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-DQGdmrWMX8z9yWQB\/actions\/force-cancel\n```\n\n## Forcefully execute a run\n\n`POST \/runs\/:run_id\/actions\/force-execute`\n\n| Parameter | Description           |\n|-----------|-----------------------|\n| `run_id`  | The run ID to execute |\n\nThe force-execute action cancels all prior runs that are not already complete, unlocking the run's workspace and allowing the run to be executed. (It initiates the same actions as the \"Run this plan now\" button at the top of the view of a pending run.)\n\nForce-executing a run requires permission to apply runs for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nThis endpoint enforces the following prerequisites:\n\n- The target run is in the \"pending\" state.\n- The workspace is locked by another run.\n- The run locking the workspace can be discarded.\n\nThis endpoint represents an action as opposed to a resource. As such, it does not return any object in the response body.\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). 
You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n~> **Note:** While useful at times, force-executing a run circumvents the typical workflow of applying runs using HCP Terraform. It is not intended for regular use. If you find yourself using it frequently, please reach out to HashiCorp Support for help in developing an alternative approach.\n\n| Status  | Response                  | Reason(s)                                                                                     |\n|---------|---------------------------|-----------------------------------------------------------------------------------------------|\n| [202][] | none                      | Successfully initiated the force-execution process.                                           |\n| [403][] | [JSON API error object][] | Run is not pending, its workspace was not locked, or its workspace association was not found. |\n| [409][] | [JSON API error object][] | The run locking the workspace was not in a discardable state.                                 |\n| [404][] | [JSON API error object][] | Run was not found or user not authorized.                                                     |\n\n### Request Body\n\nThis POST endpoint does not take a request body.\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-DQGdmrWMX8z9yWQB\/actions\/force-execute\n```\n\n## Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). 
The following resource types are available:

- `plan` - Additional information about plans.
- `apply` - Additional information about applies.
- `created_by` - Full user records of the users responsible for creating the runs.
- `cost_estimate` - Additional information about cost estimates.
- `configuration_version` - The configuration record used in the run.
- `configuration_version.ingress_attributes` - The commit information used in the run.

---
page_title: Runs - API Docs - HCP Terraform
description: >-
  Use the `/runs` endpoint to manage Terraform runs. List, get, create, apply,
  discard, execute, and cancel runs using the HTTP API.
---

[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects

# Runs API

-> **Note:** Before working with the runs or configuration versions APIs, read the [API-driven run workflow](/terraform/cloud-docs/run/api) page, which includes both a full overview of this workflow and a walkthrough of a simple implementation of it.

Performing a run on a new configuration is a multi-step process.

1. [Create a configuration version on the workspace](/terraform/cloud-docs/api-docs/configuration-versions#create-a-configuration-version).
1. [Upload configuration files to the configuration version](/terraform/cloud-docs/api-docs/configuration-versions#upload-configuration-files).
1. [Create a run on the workspace](#create-a-run); this is done automatically when a configuration file is uploaded.
1. [Create and queue an apply on the run](#apply-a-run), if the run can't be auto-applied.

Alternatively, you can create a run with a pre-existing configuration version, even one from another workspace. This is useful for promoting known-good code from one workspace to another.

## Attributes

### Run States

The run state is found in `data.attributes.status`, and you can reference the following list of possible states.

| State                  | Description                                                                                                                                                                                                                                                                                                                                            |
|------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `pending`              | The initial status of a run after creation.                                                                                                                                                                                                                                                                                                            |
| `fetching`             | The run is waiting for HCP Terraform to fetch the configuration from VCS.                                                                                                                                                                                                                                                                              |
| `fetching_completed`   | HCP Terraform has fetched the configuration from VCS and the run will continue.                                                                                                                                                                                                                                                                        |
| `pre_plan_running`     | The pre-plan phase of the run is in progress.                                                                                                                                                                                                                                                                                                          |
| `pre_plan_completed`   | The pre-plan phase of the run has completed.                                                                                                                                                                                                                                                                                                           |
| `queuing`              | HCP Terraform is queuing the run to start the planning phase.                                                                                                                                                                                                                                                                                          |
| `plan_queued`          | HCP Terraform is waiting for its backend services to start the plan.                                                                                                                                                                                                                                                                                   |
| `planning`             | The planning phase of a run is in progress.                                                                                                                                                                                                                                                                                                            |
| `planned`              | The planning phase of a run has completed.                                                                                                                                                                                                                                                                                                             |
| `cost_estimating`      | The cost estimation phase of a run is in progress.                                                                                                                                                                                                                                                                                                     |
| `cost_estimated`       | The cost estimation phase of a run has completed.                                                                                                                                                                                                                                                                                                      |
| `policy_checking`      | The Sentinel policy checking phase of a run is in progress.                                                                                                                                                                                                                                                                                            |
| `policy_override`      | A Sentinel policy has soft failed, and a user can override it to continue the run.                                                                                                                                                                                                                                                                     |
| `policy_soft_failed`   | A Sentinel policy has soft failed for a plan-only run. This is a final state.                                                                                                                                                                                                                                                                          |
| `policy_checked`       | The Sentinel policy checking phase of a run has completed.                                                                                                                                                                                                                                                                                             |
| `confirmed`            | A user has confirmed the plan.                                                                                                                                                                                                                                                                                                                         |
| `post_plan_running`    | The post-plan phase of the run is in progress.                                                                                                                                                                                                                                                                                                         |
| `post_plan_completed`  | The post-plan phase of the run has completed.                                                                                                                                                                                                                                                                                                          |
| `planned_and_finished` | The run is completed. This status only exists for plan-only runs and runs that produce a plan with no changes to apply. This is a final state.                                                                                                                                                                                                         |
| `planned_and_saved`    | The run has finished its planning, checks, and estimates, and can be confirmed for apply. This status is only used for saved plan runs.                                                                                                                                                                                                                |
| `apply_queued`         | Once the changes in the plan have been confirmed, the run will transition to `apply_queued`. This status indicates that the run should start as soon as the backend services that run Terraform have available capacity. In HCP Terraform, you should seldom see this status, as our aim is to always have capacity. However, in Terraform Enterprise this status will be more common due to the self-hosted nature. |
| `applying`             | Terraform is applying the changes specified in the plan.                                                                                                                                                                                                                                                                                               |
| `applied`              | Terraform has applied the changes specified in the plan.                                                                                                                                                                                                                                                                                               |
| `discarded`            | The run has been discarded. This is a final state.                                                                                                                                                                                                                                                                                                     |
| `errored`              | The run has errored. This is a final state.                                                                                                                                                                                                                                                                                                            |
| `canceled`             | The run has been canceled.                                                                                                                                                                                                                                                                                                                             |
| `force_canceled`       | A workspace admin forcefully canceled the run.                                                                                                                                                                                                                                                                                                         |

### Run Operations

The run operation specifies the Terraform execution mode. You can reference the following list of possible execution modes and use them as query parameters in the [workspace](#list-runs-in-a-workspace) and [organization](#list-runs-in-an-organization) runs lists.

| Operation        | Description                                                                                                                                                       |
|------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `plan_only`      | The run does not have an apply phase. This is also called a [speculative plan](/terraform/cloud-docs/run/modes-and-options#plan-only-speculative-plan).           |
| `plan_and_apply` | The run includes both plan and apply phases.                                                                                                                      |
| `save_plan`      | The run is a saved plan run. It can include both plan and apply phases, but only becomes the workspace's current run if a user chooses to apply it.               |
| `refresh_only`   | The run should update Terraform state, but not make changes to resources.                                                                                         |
| `destroy`        | The run should destroy all objects, regardless of configuration changes.                                                                                          |
| `empty_apply`    | The run should perform an apply with no changes to resources. This is most commonly used to [upgrade Terraform state versions](/terraform/cloud-docs/workspaces/state#upgrading-state). |

### Run Sources

You can use the following sources as query parameters in [workspace](#list-runs-in-a-workspace) and [organization](#list-runs-in-an-organization) runs lists.

| Source                      | Description                                                                             |
|-----------------------------|------------------------------------------------------------------------------------------|
| `tfe-ui`                    | Indicates a run was queued from HCP Terraform UI.                                       |
| `tfe-api`                   | Indicates a run was queued from HCP Terraform API.                                      |
| `tfe-configuration-version` | Indicates a run was queued from a Configuration Version, triggered from a VCS provider. |

### Run Status Groups

The run status group specifies a collection of run states by logical category.

| Group         | Description                                                                                                                                                                       |
|---------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `non_final`   | Inclusive of runs that are currently running, require user confirmation, or are queued or pending.                                                                                |
| `final`       | Inclusive of runs that have reached their final and terminal state.                                                                                                               |
| `discardable` | Inclusive of runs whose state falls under the following: `planned`, `planned_and_saved`, `cost_estimated`, `policy_checked`, `policy_override`, `post_plan_running`, `post_plan_completed` |

## Create a Run

`POST /runs`

A run performs a plan and apply, using a configuration version and the workspace's current variables. You can specify a configuration version when creating a run; if you don't provide one, the run defaults to the workspace's most recently used version. (A configuration version is "used" when it is created or used for a run in this workspace.)

Creating a run requires permission to queue plans for the specified workspace. ([More about permissions.](/terraform/cloud-docs/users-teams-organizations/permissions))

When creating a run, you may optionally provide a list of variable objects containing key and value attributes. These values apply to that run specifically and take precedence over variables with the same key applied to the workspace (e.g. variable sets). Refer to [Variable Precedence](/terraform/cloud-docs/workspaces/variables#precedence) for more information. All values must be expressed as an HCL literal in the same syntax you would use when writing Terraform code. Refer to [Types](/terraform/language/expressions/types#types) in the Terraform documentation for more details.

Setting `debugging_mode: true` enables debugging mode for the queued run only. This is equivalent to setting the `TF_LOG` environment variable to `TRACE` for this run. See [Debugging Terraform](/terraform/internals/debugging) for more information.

**Sample Run Variables**

```json
"attributes": {
  "variables": [
    { "key": "replicas", "value": "2" },
    { "key": "access_key", "value": "\"ABCDE12345\"" }
  ]
}
```

[permissions-citation]: #intentionally-unused---keep-for-maintainers

-> **Note:** This endpoint cannot be accessed with [organization tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens#organization-api-tokens). You must access it with a [user token](/terraform/cloud-docs/users-teams-organizations/users#api-tokens) or [team token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens).

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                                           | Type                 | Default                                            | Description                                                                                                                                                                                                                                                                                                          |
|----------------------------------------------------|----------------------|----------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `data.attributes.allow-empty-apply`                | bool                 | none                                               | Specifies whether Terraform can apply the run even when the plan [contains no changes](/terraform/cloud-docs/run/modes-and-options#allow-empty-apply). Use this property to [upgrade state](/terraform/cloud-docs/workspaces/state#upgrading-state) after upgrading a workspace to a new terraform version.            |
| `data.attributes.allow-config-generation`          | bool                 | `false`                                            | Specifies whether Terraform can [generate resource configuration](/terraform/language/import/generating-configuration) when planning to import new resources. When set to `false`, Terraform returns an error when `import` blocks do not have a corresponding `resource` block.                                       |
| `data.attributes.auto-apply`                       | bool                 | Defaults to the [Auto Apply](/terraform/cloud-docs/workspaces/settings#auto-apply-and-manual-apply) workspace setting | Determines if Terraform automatically applies the configuration on a successful `terraform plan`.                                                                                                                                                             |
| `data.attributes.debugging-mode`                   | bool                 | `false`                                            | When set to `true`, enables verbose logging for the queued plan.                                                                                                                                                                                                                                                     |
| `data.attributes.is-destroy`                       | bool                 | `false`                                            | When set to `true`, the plan destroys all provisioned resources. Mutually exclusive with `refresh-only`.                                                                                                                                                                                                             |
| `data.attributes.message`                          | string               | `"Queued manually via the Terraform Enterprise API"` | Specifies the message associated with this run.                                                                                                                                                                                                                                                                    |
| `data.attributes.refresh`                          | bool                 | `true`                                             | Specifies whether or not to refresh the state before a plan.                                                                                                                                                                                                                                                         |
| `data.attributes.refresh-only`                     | bool                 | `false`                                            | When set to `true`, this run refreshes the state without modifying any resources. Mutually exclusive with `is-destroy`.                                                                                                                                                                                              |
| `data.attributes.replace-addrs`                    | array[string]        |                                                    | Specifies an optional list of resource addresses to be passed to the `-replace` flag.                                                                                                                                                                                                                                |
| `data.attributes.target-addrs`                     | array[string]        |                                                    | Specifies an optional list of resource addresses to be passed to the `-target` flag.                                                                                                                                                                                                                                 |
| `data.attributes.variables`                        | array[{key, value}]  | (empty array)                                      | Specifies an optional list of run-specific variable values. Refer to [Run-Specific Variables](/terraform/cloud-docs/workspaces/variables/managing-variables#run-specific-variables) for details.                                                                                                                      |
| `data.attributes.plan-only`                        | bool                 | (from configuration version)                       | Specifies if this is a [speculative, plan-only](/terraform/cloud-docs/run/modes-and-options#plan-only-speculative-plan) run that Terraform cannot apply. Often used in conjunction with terraform-version in order to test whether an upgrade would succeed.                                                          |
| `data.attributes.save-plan`                        | bool                 | `false`                                            | When set to `true`, the run is executed as a save plan run. A save plan run plans and checks the configuration without becoming the workspace's current run. These runs only become the current run if you confirm that you want to apply them when prompted. When creating new [configuration versions](/terraform/enterprise/api-docs/configuration-versions) for saved plan runs, be sure to make them `provisional`. |
| `data.attributes.terraform-version`                | string               | none                                               | Specifies the Terraform version to use in this run. Only valid for plan-only runs; must be a valid Terraform version available to the organization.                                                                                                                                                                  |
| `data.relationships.workspace.data.id`             | string               | none                                               | Specifies the workspace ID to execute the run in.                                                                                                                                                                                                                                                                    |
| `data.relationships.configuration-version.data.id` | string               | none                                               | Specifies the configuration version to use for this run. If the `configuration-version` object is omitted, Terraform uses the workspace's latest configuration version to create the run.                                                                                                                            |

| Status | Response | Reason |
                             201       JSON API document      type   runs      Successfully created a run                                                       404       JSON API error object                   Organization or workspace not found  or user unauthorized to perform action      422       JSON API error object                   Malformed request body  missing attributes  wrong types  etc                       Sample Payload     json      data          attributes            message    Custom message              type   runs        relationships            workspace              data                type    workspaces              id    ws LLGHCr4SWy28wyGN                            configuration version              data                type    configuration versions              id    cv n4XQPBa2QnecZJ4G                                         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 runs          Sample Response     json      data          id    run CZcmD7eagjhyX0vN        type    runs        attributes            actions              is cancelable   true           is confirmable   false           is discardable   false           is force cancelable   false                 canceled at   null         created at    2021 05 24T07 38 04 171Z          has changes   false         auto apply   false         allow empty apply   false         allow config generation   false         is destroy   false         message    Custom message          plan only   false         source    tfe api          status timestamps              plan queueable at    2021 05 24T07 38 04 00 00                  status    pending          trigger reason    manual          target addrs   null         permissions              can apply   true           can cancel   true           can comment   true           can discard   true 
          can force execute   true           can force cancel   true           can override policy check   true                 refresh   false         refresh only   false         replace addrs   null         save plan   false         variables                  relationships            apply                 comments                 configuration version                 cost estimate                 created by                 input state version                 plan                 run events                 policy checks                 workspace                 workspace run alerts                     links            self     api v2 runs run CZcmD7eagjhyX0vN                      Apply a Run   POST  runs  run id actions apply     Parameter   Description                                                  run id     The run ID to apply    Applies a run that is paused waiting for confirmation after a plan  This includes runs in the  needs confirmation  and  policy checked  states  This action is only required for runs that can t be auto applied  Plans can be auto applied if the auto apply setting is enabled on the workspace and the plan was queued by a new VCS commit or by a user with permission to apply runs for the workspace        Note    If the run has a soft failed sentinel policy  you will need to  override the policy check   terraform cloud docs api docs policy checks override policy  before Terraform can apply the run  You can find policy check details in the  relationships  section of the  run details endpoint   get run details  response   Applying a run requires permission to apply runs for the workspace    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers  This endpoint queues the request to perform an apply  the apply might not happen immediately   Since this endpoint represents an action  not a resource   it does not return any object in the response 
body        Note    This endpoint cannot be accessed with  organization tokens   terraform cloud docs users teams organizations api tokens organization api tokens   You must access it with a  user token   terraform cloud docs users teams organizations users api tokens  or  team token   terraform cloud docs users teams organizations api tokens team api tokens      Status    Response                    Reason s                                                                                                                                                       202      none                        Successfully queued an apply request                         409       JSON API error object      Run was not paused for confirmation  apply not allowed         Request Body  This POST endpoint allows an optional JSON object with the following properties as a request payload     Key path    Type     Default   Description                                                                                                   comment    string    null     An optional comment about the run         Sample Payload  This payload is optional  so the  curl  command will work without the    data  payload json  option too      json      comment   Looks good to me             Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 runs run DQGdmrWMX8z9yWQB actions apply         List Runs in a Workspace   GET  workspaces  workspace id runs     Parameter        Description                                                                                     workspace id    The workspace ID to list runs for     By default   plan only  runs will be excluded from the results  To see all runs  use  filter operation   with all available operations included as a comma separated list  This endpoint has an adjusted rate limit of 30 requests per minute  Note 
that most endpoints are limited to 30 requests per second     Status    Response                                           Reason                                                                                                                  200      Array of  JSON API document   s   type   runs      Successfully listed runs        Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter                    Description                                                                                                                                                                                                                    Required                                                                                                                                                                                                                                                                               page number                 If omitted  the endpoint returns the first page                                                                                                                                                                              Optional      page size                   If omitted  the endpoint returns 20 runs per page                                                                                                                                                                            Optional      filter operation            A comma separated list of run operations  The result lists runs that perform one of these operations  For details on options  refer to  Run operations   terraform enterprise api docs run run operations                        Optional      filter status               A comma separated list of run statuses  The result lists runs 
that are in one of the statuses you specify  For details on options  refer to  Run states   terraform enterprise api docs run run states                                      Optional      filter agent pool names     A comma separated list of agent pool names  The result lists runs that use one of the agent pools you specify    Optional                                                                            filter source               A comma separated list of run sources  The result lists runs that came from one of the sources you specify  Options are listed in  Run Sources   terraform enterprise api docs run run sources                                   Optional      filter status group         A single status group  The result lists runs whose status falls under this status group  For details on options  refer to  Run status groups   terraform enterprise api docs run run status groups                                Optional      filter timeframe            A single year period  The result lists runs that were created within the year you specify  An integer year or the string  year  for the past year are valid values  If omitted  the endpoint returns all runs since the creation of the workspace         Optional      search user                 Searches for runs that match the VCS username you supply                                                                                                                                                                             Optional      search commit               Searches for runs that match the commit sha you specify                                                                                                                                                                               Optional      search basic                Searches for runs that match the VCS username  commit sha  run id  or run message your specify  HCP Terraform prioritizes  search commit   or  search user   and ignores  search basic   in 
favor of the higher priority parameters if you include them in your query               Optional        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 workspaces ws yF7z4gyEQRhaCNG9 runs          Sample Response     json      data                  id    run CZcmD7eagjhyX0vN          type    runs          attributes              actions                is cancelable   true             is confirmable   false             is discardable   false             is force cancelable   false                     canceled at   null           created at    2021 05 24T07 38 04 171Z            has changes   false           auto apply   false           allow empty apply   false           allow config generation   false           is destroy   false           message    Custom message            plan only   false           source    tfe api            status timestamps                plan queueable at    2021 05 24T07 38 04 00 00                      status    pending            trigger reason    manual            target addrs   null           permissions                can apply   true             can cancel   true             can comment   true             can discard   true             can force execute   true             can force cancel   true             can override policy check   true                     refresh   false           refresh only   false           replace addrs   null           save plan   false           variables                      relationships              apply                   comments                   configuration version                   cost estimate                   created by                   input state version                   plan                   run events                   policy checks                   workspace                   workspace run alerts                         links              self     api v2 runs run 
bWSq4YeYpfrW4mx7                                         List Runs in an Organization   GET  organizations  organization name runs     Parameter        Description                                                                                     organization name    The organization name to list runs for     This endpoint has an adjusted rate limit of 30 requests per minute  Note that most endpoints are limited to 30 requests per second     Status    Response                                           Reason                                                                                                                  200      Array of  JSON API document   s   type   runs      Successfully listed runs        Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter                    Description                                                                                                                                                                                                                    Required                                                                                                                                                                                                                                                                               page number                 If omitted  the endpoint returns the first page                                                                                                                                                                              Optional      page size                   If omitted  the endpoint returns 20 runs per page                                                                                                                                                               
             Optional      filter operation            A comma separated list of run operations  The result lists runs that perform one of these operations  For details on options  refer to  Run operations   terraform enterprise api docs run run operations                        Optional      filter status               A comma separated list of run statuses  The result lists runs that are in one of the statuses you specify  For details on options  refer to  Run states   terraform enterprise api docs run run states                                      Optional      filter agent pool names     A comma separated list of agent pool names  The result lists runs that use one of the agent pools you specify                                                                                                             Optional      filter workspace names      A comma separated list of workspace names  The result lists runs that belong to one of the workspaces your specify                                                                                                                   Optional      filter source               A comma separated list of run sources  The result lists runs that came from one of the sources you specify  Options are listed in  Run Sources   terraform enterprise api docs run run sources                                   Optional      filter status group         A single status group  The result lists runs whose status falls under this status group  For details on options  refer to  Run status groups   terraform enterprise api docs run run status groups                                Optional      filter timeframe            A single year period  The result lists runs that were created within the year you specify  An integer year or the string  year  for the past year are valid values  If omitted  the endpoint returns runs created in the last year                   Optional      search user                 Searches for runs that match the VCS username 
you supply                                                                                                                                                                             Optional      search commit               Searches for runs that match the commit sha you specify                                                                                                                                                                               Optional      search basic                Searches for runs that match the VCS username  commit sha  run id  or run message your specify  HCP Terraform prioritizes  search commit   or  search user   and ignores  search basic   in favor of the higher priority parameters if you include them in your query               Optional         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations hashicorp runs          Sample Response     json      data                  id    run CZcmD7eagjhyX0vN          type    runs          attributes              actions                is cancelable   true             is confirmable   false             is discardable   false             is force cancelable   false                     canceled at   null           created at    2021 05 24T07 38 04 171Z            has changes   false           auto apply   false           allow empty apply   false           allow config generation   false           is destroy   false           message    Custom message            plan only   false           source    tfe api            status timestamps                plan queueable at    2021 05 24T07 38 04 00 00                      status    pending            trigger reason    manual            target addrs   null           permissions                can apply   true             can cancel   true             can comment   true             can discard   true             can force 
execute   true             can force cancel   true             can override policy check   true                     refresh   false           refresh only   false           replace addrs   null           save plan   false           variables                      relationships              apply                   comments                   configuration version                   cost estimate                   created by                   input state version                   plan                   run events                   policy checks                   workspace                   workspace run alerts                         links              self     api v2 runs run bWSq4YeYpfrW4mx7                                         Get run details   GET  runs  run id     Parameter   Description                                                 run id    The run ID to get     This endpoint is used for showing details of a specific run     Status    Response                                 Reason                                                                                                                                200       JSON API document      type   runs      Success                                   404       JSON API error object                   Run not found or user not authorized        Sample Request     shell curl       header  Authorization  Bearer  TOKEN      https   app terraform io api v2 runs run bWSq4YeYpfrW4mx7          Sample Response     json      data          id    run CZcmD7eagjhyX0vN        type    runs        attributes            actions              is cancelable   true           is confirmable   false           is discardable   false           is force cancelable   false                 canceled at   null         created at    2021 05 24T07 38 04 171Z          has changes   false         auto apply   false         allow empty apply   false         allow config generation   false         is destroy   false         message    Custom 
message          plan only   false         source    tfe api          status timestamps              plan queueable at    2021 05 24T07 38 04 00 00                  status    pending          trigger reason    manual          target addrs   null         permissions              can apply   true           can cancel   true           can comment   true           can discard   true           can force execute   true           can force cancel   true           can override policy check   true                 refresh   false         refresh only   false         replace addrs   null         save plan   false         variables                  relationships            apply                 comments                 configuration version                 cost estimate                 created by                 input state version                 plan                 run events                 policy checks                 task stages                 workspace                 workspace run alerts                     links            self     api v2 runs run bWSq4YeYpfrW4mx7                      Discard a Run   POST  runs  run id actions discard     Parameter   Description                                                      run id     The run ID to discard    The  discard  action can be used to skip any remaining work on runs that are paused waiting for confirmation or priority  This includes runs in the  pending    needs confirmation    policy checked   and  policy override  states   Discarding a run requires permission to apply runs for the workspace    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers  This endpoint queues the request to perform a discard  the discard might not happen immediately  After discarding  the run is completed and later runs can proceed   This endpoint represents an action as opposed to a resource  As such  it does not return any object in 
the response body        Note    This endpoint cannot be accessed with  organization tokens   terraform cloud docs users teams organizations api tokens organization api tokens   You must access it with a  user token   terraform cloud docs users teams organizations users api tokens  or  team token   terraform cloud docs users teams organizations api tokens team api tokens      Status    Response                    Reason s                                                                                                                                                                                   202      none                        Successfully queued a discard request                                      409       JSON API error object      Run was not paused for confirmation or priority  discard not allowed         Request Body  This POST endpoint allows an optional JSON object with the following properties as a request payload     Key path    Type     Default   Description                                                                                                                                           comment    string    null     An optional explanation for why the run was discarded         Sample Payload  This payload is optional  so the  curl  command will work without the    data  payload json  option too      json      comment    This run was discarded             Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 runs run DQGdmrWMX8z9yWQB actions discard         Cancel a Run   POST  runs  run id actions cancel     Parameter   Description                                                    run id     The run ID to cancel    The  cancel  action can be used to interrupt a run that is currently planning or applying  Performing a cancel is roughly equivalent to hitting ctrl c during a Terraform plan 
or apply on the CLI  The running Terraform process is sent an  INT  signal  which instructs Terraform to end its work and wrap up in the safest way possible   Canceling a run requires permission to apply runs for the workspace    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers  This endpoint queues the request to perform a cancel  the cancel might not happen immediately  After canceling  the run is completed and later runs can proceed   This endpoint represents an action as opposed to a resource  As such  it does not return any object in the response body        Note    This endpoint cannot be accessed with  organization tokens   terraform cloud docs users teams organizations api tokens organization api tokens   You must access it with a  user token   terraform cloud docs users teams organizations users api tokens  or  team token   terraform cloud docs users teams organizations api tokens team api tokens      Status    Response                    Reason s                                                                                                                                                   202      none                        Successfully queued a cancel request                       409       JSON API error object      Run was not planning or applying  cancel not allowed       404       JSON API error object      Run was not found or user not authorized                     Request Body  This POST endpoint allows an optional JSON object with the following properties as a request payload     Key path    Type     Default   Description                                                                                                                                         comment    string    null     An optional explanation for why the run was canceled         Sample Payload  This payload is optional  so the  curl  command will work without the    data  
payload json  option too      json      comment    This run was stuck and would never finish              Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 runs run DQGdmrWMX8z9yWQB actions cancel         Forcefully cancel a run   POST  runs  run id actions force cancel     Parameter   Description                                                    run id     The run ID to cancel    The  force cancel  action is like  cancel   cancel a run   but ends the run immediately  Once invoked  the run is placed into a  canceled  state  and the running Terraform process is terminated  The workspace is immediately unlocked  allowing further runs to be queued  The  force cancel  operation requires admin access to the workspace    More about permissions    terraform cloud docs users teams organizations permissions     permissions citation    intentionally unused   keep for maintainers  This endpoint enforces a prerequisite that a  non forceful cancel   cancel a run  is performed first  and a cool off period has elapsed  To determine if this criteria is met  it is useful to check the  data attributes is force cancelable  value of the  run details endpoint   get run details   The time at which the force cancel action will become available can be found using the  run details endpoint   get run details   in the key  data attributes force cancel available at   Note that this key is only present in the payload after the initial cancel has been initiated   This endpoint represents an action as opposed to a resource  As such  it does not return any object in the response body        Note    This endpoint cannot be accessed with  organization tokens   terraform cloud docs users teams organizations api tokens organization api tokens   You must access it with a  user token   terraform cloud docs users teams organizations users api 
tokens or [team tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n~> **Warning:** This endpoint has potentially dangerous side-effects, including loss of any in-flight state in the running Terraform process. Use this operation with extreme caution.\n\n| Status  | Response                  | Reason(s)                                                                                                          |\n| ------- | ------------------------- | ------------------------------------------------------------------------------------------------------------------ |\n| [202][] | none                      | Successfully queued a cancel request.                                                                              |\n| [409][] | [JSON API error object][] | Run was not planning or applying, has not been canceled non-forcefully, or the cool-off period has not yet passed. |\n| [404][] | [JSON API error object][] | Run was not found or user not authorized.                                                                          |\n\n### Request Body\n\nThis POST endpoint allows an optional JSON object with the following properties as a request payload.\n\n| Key path  | Type   | Default | Description                                           |\n| --------- | ------ | ------- | ----------------------------------------------------- |\n| `comment` | string | `null`  | An optional explanation for why the run was canceled. |\n\n### Sample Payload\n\nThis payload is optional, so the `curl` command will work without the `--data @payload.json` option too.\n\n```json\n{\n  \"comment\": \"This run was stuck and would never finish.\"\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-DQGdmrWMX8z9yWQB\/actions\/force-cancel\n```\n\n## Forcefully execute a run\n\n`POST \/runs\/:run_id\/actions\/force-execute`\n\n| Parameter | Description            |\n| --------- | ---------------------- |\n| `:run_id` | The run ID to execute. |\n\nThe force-execute action cancels all prior runs that are not already complete, unlocking the run's workspace and allowing the run to be executed. It initiates the same actions as the \"Run this plan now\" button at the top of the view of a pending run.\n\nForce-executing a run requires permission to apply runs for the workspace. ([More about permissions.](\/terraform\/cloud-docs\/users-teams-organizations\/permissions))\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nThis endpoint enforces the following prerequisites:\n\n- The target run is in the \"pending\" state.\n- The workspace is locked by another run.\n- The run locking the workspace can be discarded.\n\nThis endpoint represents an action as opposed to a resource. As such, it does not return any object in the response body.\n\n-> **Note:** This endpoint cannot be accessed with [organization tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#organization-api-tokens). You must access it with a [user token](\/terraform\/cloud-docs\/users-teams-organizations\/users#api-tokens) or [team token](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens#team-api-tokens).\n\n-> **Note:** While useful at times, force-executing a run circumvents the typical workflow of applying runs using HCP Terraform. It is not intended for regular use. If you find yourself using it frequently, please reach out to HashiCorp Support for help in developing an alternative approach.\n\n| Status  | Response                  | Reason(s)                                                                                     |\n| ------- | ------------------------- | --------------------------------------------------------------------------------------------- |\n| [202][] | none                      | Successfully initiated the force-execution process.                                           |\n| [403][] | [JSON API error object][] | Run is not pending, its workspace was not locked, or its workspace association was not found. |\n| [409][] | [JSON API error object][] | The run locking the workspace was not in a discardable state.                                 |\n| [404][] | [JSON API error object][] | Run was not found or user not authorized.                                                     |\n\n### Request Body\n\nThis POST endpoint does not take a request body.\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-DQGdmrWMX8z9yWQB\/actions\/force-execute\n```\n\n## Available Related Resources\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\n- `plan` - Additional information about plans.\n- `apply` - Additional information about applies.\n- `created_by` - Full user records of the users responsible for creating the runs.\n- `cost_estimate` - Additional information about cost estimates.\n- `configuration_version` - The configuration record used in the run.\n- `configuration_version.ingress_attributes` - The commit information used in the run."}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Tests API Docs HCP Terraform Use the tests endpoint to manage Terraform tests List get create and cancel tests using the HTTP API","answers":"---\npage_title: Tests - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/tests` endpoint to manage Terraform tests. List, get, create, and cancel tests using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Tests API\nTests are Terraform operations (runs) and are referred to as Test Runs within the HCP Terraform API.\n\nPerforming a test on a new configuration is a multi-step process.\n\n1. 
[Create a configuration version on the registry module](#create-a-configuration-version-for-a-test).\n1. [Upload configuration files to the configuration version](#upload-configuration-files-for-a-test).\n1. [Create a test on the module](#create-a-test-run); HCP Terraform completes this step automatically when you upload a configuration file.\n\nAlternatively, you can create a test with a pre-existing configuration version, even one from another module. This is useful for promoting known good code from one module to another.\n\n## Attributes\n\nThe `tests` API endpoint has the following attributes.\n\n### Test Run States\n\nThe state of the test operation is found in `data.attributes.status`, and you can reference the following list of possible states.\n\n| State      | Description                                             |\n| ---------- | ------------------------------------------------------- |\n| `pending`  | The initial status of a run after creation.             |\n| `queued`   | HCP Terraform has queued the test operation to start. |\n| `running`  | HCP Terraform is executing the test.                  |\n| `errored`  | The test has errored. This is a final state.            |\n| `canceled` | The test has been canceled. This is a final state.      |\n| `finished` | The test has completed. This is a final state.          |\n\n### Test run status\n\nThe final test status is found in `data.attributes.test-status`, and you can reference the following list of possible states.\n\n| Status | Description                  | \n| ------ | ---------------------------- |\n| `pass` | The given tests have passed. | \n| `fail` | The given tests have failed. 
|\n\n### Detailed test status\n\nThe test results can be found via the following attributes:\n\n| Status                          | Description                                 |\n| ------------------------------- | ------------------------------------------- |\n| `data.attributes.tests-passed`  | The number of tests that have passed.       |\n| `data.attributes.tests-failed`  | The number of tests that have failed.       |\n| `data.attributes.tests-errored` | The number of tests that have errored out.  |\n| `data.attributes.tests-skipped` | The number of tests that have been skipped. |\n\n### Test Sources\nYou can use the following sources as query parameters when you [list tests for a module](\/terraform\/cloud-docs\/api-docs\/private-registry\/tests#list-tests-for-a-module).\n\n| Source                      | Description                                                                              |\n|-----------------------------|------------------------------------------------------------------------------------------|\n| `terraform`                 | Indicates a test was queued from the HCP Terraform CLI.                                  |\n| `tfe-api`                   | Indicates a test was queued from the HCP Terraform API.                                  |\n| `tfe-configuration-version` | Indicates a test was queued from a Configuration Version, triggered from a VCS provider. 
|\n\n\n## Create a Test\n\n`POST \/organizations\/:organization_name\/tests\/registry-modules\/private\/:namespace\/:name\/:provider\/test-runs`\n\n| Parameter            | Description                                                                                                                                                                                              |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. |\n| `:namespace`         | The namespace of the module for which the test is being created. For private modules this is the same as the `:organization_name` parameter.                                                           |\n| `:name`              | The name of the module for which the test is being created.                                                                                                                                           |\n| `:provider`          | The name of the provider for which the test is being created.                                                                                                                                         |\n\nA test run executes tests against a registry module, using a configuration version and the modules\u2019s current environment variables.\n\nCreating a test run requires permission to access the specified module. Refer to [Permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) for more information.\n\nWhen creating a test run, you may optionally provide a list of variable objects containing key and value attributes. 
These values apply to that test run specifically and take precedence over variables with the same key that are created within the module. All values must be expressed as an HCL literal in the same syntax you would use when writing Terraform code.\n\n**Sample Test Variables:**\n\n```json\n\"attributes\": {\n  \"variables\": [\n    { \"key\": \"replicas\", \"value\": \"2\" },\n    { \"key\": \"access_key\", \"value\": \"\\\"ABCDE12345\\\"\" }\n  ]\n}\n```\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                           | Type                 | Default       | Description                                                                                        |\n| -------------------------------------------------- | -------------------- | ------------- | -------------------------------------------------------------------------------------------------- |\n| `data.attributes.verbose`                          | bool                 | `false`         | Specifies whether Terraform should print the plan or state for each test run block as it executes. |\n| `data.attributes.test-directory`                   | string               | `\"tests\"`       | Sets the directory where HCP Terraform executes the tests.                                                  |\n| `data.attributes.filters`                          | array\\[string]       | (empty array) | When specified, HCP Terraform only executes the test files contained within this array.              |\n| `data.attributes.variables`                        | array\\[{key, value}] | (empty array) | Specifies an optional list of test-specific environment variable values.                           |\n| `data.relationships.configuration-version.data.id` | string               |  none   | Specifies the configuration version that HCP Terraform executes the test against.      
                      |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"verbose\": true,\n      \"filters\": [\"tests\/test.tftest.hcl\"],\n      \"test-directory\": \"tests\",\n      \"variables\": [\n        { \"key\" : \"number\", \"value\": 4}\n      ]\n    },\n    \"type\":\"test-runs\"\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tests\/registry-modules\/private\/my-organization\/private\/registry-provider\/test-runs\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"trun-KFg8DSiRz4E37mdJ\",\n    \"type\": \"test-runs\",\n    \"attributes\": {\n      \"status\": \"queued\",\n      \"status-timestamps\": {\n          \"queued-at\": \"2023-10-03T18:27:39+00:00\"\n      },\n      \"created-at\": \"2023-10-03T18:27:39.239Z\",\n      \"updated-at\": \"2023-10-03T18:27:39.264Z\",\n      \"test-configurable-type\": \"RegistryModule\",\n      \"test-configurable-id\": \"mod-9rjVHLCUE9QD3k6L\",\n      \"variables\": [\n          {\n              \"key\": \"number\",\n              \"value\": \"4\"\n          }\n      ],\n      \"filters\": [\n          \"tests\/test.tftest.hcl\"\n      ],\n      \"test-directory\": \"tests\",\n      \"verbose\": true,\n      \"test-status\": null,\n      \"tests-passed\": null,\n      \"tests-failed\": null,\n      \"tests-errored\": null,\n      \"tests-skipped\": null,\n      \"source\": \"tfe-api\",\n      \"message\": \"Queued manually via the Terraform Enterprise API\"\n    },\n    \"relationships\": {\n        \"configuration-version\": {\n            \"data\": {\n                \"id\": \"cv-d3zBGFf5DfWY4GY9\",\n                \"type\": \"configuration-versions\"\n            },\n            \"links\": {\n                \"related\": 
\"\/api\/v2\/configuration-versions\/cv-d3zBGFf5DfWY4GY9\"\n            }\n        },\n        \"created-by\": {\n            \"data\": {\n                \"id\": \"user-zsRFs3AGaAHzbEfs\",\n                \"type\": \"users\"\n            },\n            \"links\": {\n                \"related\": \"\/api\/v2\/users\/user-zsRFs3AGaAHzbEfs\"\n            }\n        }\n    }\n  }\n}\n```\n\n## Create a Configuration Version for a Test\n\n`POST \/organizations\/:organization_name\/tests\/registry-modules\/private\/:namespace\/:name\/:provider\/test-runs\/configuration-versions`\n\n| Parameter            | Description                                                                                                                                                                                              |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. |\n| `:namespace`         | The namespace of the module for which the configuration version is being created. For private modules this is the same as the `:organization_name` parameter.                                                           |\n| `:name`              | The name of the module for which the configuration version is being created.                                                                                                                                           |\n| `:provider`          | The name of the provider for which the configuration version is being created.                                                                                                                                         
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tests\/registry-modules\/private\/my-organization\/registry-name\/registry-provider\/test-runs\/configuration-versions\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"cv-aaady7niJMY1wAvx\",\n    \"type\": \"configuration-versions\",\n    \"attributes\": {\n        \"auto-queue-runs\": true,\n        \"error\": null,\n        \"error-message\": null,\n        \"source\": \"tfe-api\",\n        \"speculative\": false,\n        \"status\": \"pending\",\n        \"status-timestamps\": {},\n        \"changed-files\": [],\n        \"provisional\": false,\n        \"upload-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djM6eFliQ0l1ZEhNUDRMZmdWeExoYWZ1WnFwaCtYQUFSQjFaWVcySkEyT0tyZTZXQ0hjN3ZYQkFvbkJHWkg2Y0U2MDRHRXFvQVl6cUJqQzJ0VkppVHBXTlJNWmpVc1ZTekg5Q1hMZ0hNaUpNdUhib1hGS1RpT3czRGdRaWtPZFZ3VWpDQ1U0S2dhK2xLTUQ2ZFZDaUZ3SktiNytrMlpoVHd0cXdGVHIway8zRkFmejdzMSt0Rm9TNFBTV3dWYjZUTzJVNE1jaW9UZ2VKVFJNRnUvbjBudUp4U0l6VzFDYkNzVVFsb2VFbC9DRFlCTWFsbXBMNzZLUGQxeTJHb09ZTkxHL1d2K1NtcmlEQXptZTh1Q1BwR1dhbVBXQTRiREdlTkI3Qyt1YTRRamFkRzBWYUg3NE52TGpqT1NKbzFrZ3J3QmxnMGhHT3VaTHNhSmo0eXpv\"\n    },\n    \"relationships\": {\n        \"ingress-attributes\": {\n            \"data\": null,\n            \"links\": {\n              \"related\": \"\/api\/v2\/configuration-versions\/cv-aaady7niJMY1wAvx\/ingress-attributes\"\n            }\n        }\n    },\n    \"links\": {\n        \"self\": \"\/api\/v2\/configuration-versions\/cv-aaady7niJMY1wAvx\"\n    }\n  }\n}\n```\n\n## Upload Configuration Files for a Test\n\n`PUT https:\/\/archivist.terraform.io\/v1\/object\/<UNIQUE OBJECT ID>`\n\n**The URL is provided in the `upload-url` attribute when creating a `configuration-versions` resource. 
After creation, the URL is hidden on the resource and no longer available.**\n\n### Sample Request\n\n**@filename is the name of the configuration file you wish to upload.**\n\n```shell\ncurl \\\n  --header \"Content-Type: application\/octet-stream\" \\\n  --request PUT \\\n  --data-binary @filename \\\n  https:\/\/archivist.terraform.io\/v1\/object\/4c44d964-eba7-4dd5-ad29-1ece7b99e8da\n```\n\n## List Tests for a Module\n\n`GET \/organizations\/:organization_name\/tests\/registry-modules\/private\/:namespace\/:name\/:provider\/test-runs\/`\n\n| Parameter            | Description                                                                                                                                                                                              |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. |\n| `:namespace`         | The namespace of the module which the tests have executed against. For private modules this is the same as the `:organization_name` parameter.                                                           |\n| `:name`              | The name of the module which the tests have executed against.                                                                                                                                           |\n| `:provider`          | The name of the provider which the tests have executed against.                                                                                                                                         
|\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling does not automatically encode URLs.\n\n| Parameter        | Description                                                                                                                                                                                                       | Required |\n| ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------- |\n| `page[number]`   | If omitted, the endpoint returns the first page.                                                                                                                                                                  | Optional |\n| `page[size]`     | If omitted, the endpoint returns 20 runs per page.                                                                                                                                                                | Optional |\n| `filter[source]` | A comma-separated list of test sources; the result will only include tests that came from one of these sources. Options are listed in [Test Sources](\/terraform\/cloud-docs\/api-docs\/private-registry\/tests#test-sources). | Optional |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tests\/registry-modules\/private\/my-organization\/registry-name\/registry-provider\/test-runs\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"trun-KFg8DSiRz4E37mdJ\",\n      \"type\": \"test-runs\",\n      \"attributes\": {\n        \"status\": \"finished\",\n        \"status-timestamps\": {\n          \"queued-at\": \"2023-10-03T18:27:39+00:00\",\n          \"started-at\": \"2023-10-03T18:27:41+00:00\",\n          \"finished-at\": \"2023-10-03T18:27:53+00:00\"\n        },\n        \"log-read-url\": 
\"https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djM6eFliQ0l1ZEhNUDRMZmdWeExoYWZ1WnFwaCtYQUFSQjFaWVcySkEyT0tyZTZXQ0hjN3ZYQkFvbkJHWkg2Y0U2MDRHRXFvQVl6cUJqQzJ0VkppVHBXTlJNWmpVc1ZTekg5Q1hMZ0hNaUpNdUhib1hGS1RpT3czRGdRaWtPZFZ3VWpDQ1U0S2dhK2xLTUQ2ZFZDaUZ3SktiNytrMlpoVHd0cXdGVHIway8zRkFmejdzMSt0Rm9TNFBTV3dWYjZUTzJVNE1jaW9UZ2VKVFJNRnUvbjBudUp4U0l6VzFDYkNzVVFsb2VFbC9DRFlCTWFsbXBMNzZLUGQxeTJHb09ZTkxHL1d2K1NtcmlEQXptZTh1Q1BwR1dhbVBXQTRiREdlTkI3Qyt1YTRRamFkRzBWYUg3NE52TGpqT1NKbzFrZ3J3QmxnMGhHT3VaTHNhSmo0eXpv\",\n        \"created-at\": \"2023-10-03T18:27:39.239Z\",\n        \"updated-at\": \"2023-10-03T18:27:53.574Z\",\n        \"test-configurable-type\": \"RegistryModule\",\n        \"test-configurable-id\": \"mod-9rjVHLCUE9QD3k6L\",\n        \"variables\": [\n          {\n            \"key\": \"number\",\n            \"value\": \"4\"\n          }\n        ],\n        \"filters\": [\n          \"tests\/test.tftest.hcl\"\n        ],\n        \"test-directory\": \"tests\",\n        \"verbose\": true,\n        \"test-status\": \"pass\",\n        \"tests-passed\": 1,\n        \"tests-failed\": 0,\n        \"tests-errored\": 0,\n        \"tests-skipped\": 0,\n        \"source\": \"tfe-api\",\n        \"message\": \"Queued manually via the Terraform Enterprise API\"\n      },\n      \"relationships\": {\n        \"configuration-version\": {\n            \"data\": {\n              \"id\": \"cv-d3zBGFf5DfWY4GY9\",\n              \"type\": \"configuration-versions\"\n            },\n            \"links\": {\n              \"related\": \"\/api\/v2\/configuration-versions\/cv-d3zBGFf5DfWY4GY9\"\n            }\n        },\n        \"created-by\": {\n          \"data\": {\n            \"id\": \"user-zsRFs3AGaAHzbEfs\",\n            \"type\": \"users\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/users\/user-zsRFs3AGaAHzbEfs\"\n          }\n        }\n      }\n    },\n    {...}\n  ]\n}\n```\n\n## Get Test Details\n\n`GET 
\/organizations\/:organization_name\/tests\/registry-modules\/private\/:namespace\/:name\/:provider\/test-runs\/:test_run_id`\n\n| Parameter            | Description                                                                                                                                                                                              |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. |\n| `:namespace`         | The namespace of the module which the test was executed against. For private modules this is the same as the `:organization_name` parameter.                                                           |\n| `:name`              | The name of the module which the test was executed against.                                                                                                                                           |\n| `:provider`          | The name of the provider which the test was executed against.                                                                                                                                         |\n| `:test_run_id`       | The test ID to get.                                                                                                                                                                                      
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tests\/registry-modules\/private\/my-organization\/registry-name\/registry-provider\/test-runs\/trun-xFMAHM3FhkFBL6Z7\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"trun-KFg8DSiRz4E37mdJ\",\n    \"type\": \"test-runs\",\n    \"attributes\": {\n      \"status\": \"finished\",\n      \"status-timestamps\": {\n        \"queued-at\": \"2023-10-03T18:27:39+00:00\",\n        \"started-at\": \"2023-10-03T18:27:41+00:00\",\n        \"finished-at\": \"2023-10-03T18:27:53+00:00\"\n      },\n      \"log-read-url\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djM6eFliQ0l1ZEhNUDRMZmdWeExoYWZ1WnFwaCtYQUFSQjFaWVcySkEyT0tyZTZXQ0hjN3ZYQkFvbkJHWkg2Y0U2MDRHRXFvQVl6cUJqQzJ0VkppVHBXTlJNWmpVc1ZTekg5Q1hMZ0hNaUpNdUhib1hGS1RpT3czRGdRaWtPZFZ3VWpDQ1U0S2dhK2xLTUQ2ZFZDaUZ3SktiNytrMlpoVHd0cXdGVHIway8zRkFmejdzMSt0Rm9TNFBTV3dWYjZUTzJVNE1jaW9UZ2VKVFJNRnUvbjBudUp4U0l6VzFDYkNzVVFsb2VFbC9DRFlCTWFsbXBMNzZLUGQxeTJHb09ZTkxHL1d2K1NtcmlEQXptZTh1Q1BwR1dhbVBXQTRiREdlTkI3Qyt1YTRRamFkRzBWYUg3NE52TGpqT1NKbzFrZ3J3QmxnMGhHT3VaTHNhSmo0eXpv\",\n      \"created-at\": \"2023-10-03T18:27:39.239Z\",\n      \"updated-at\": \"2023-10-03T18:27:53.574Z\",\n      \"test-configurable-type\": \"RegistryModule\",\n      \"test-configurable-id\": \"mod-9rjVHLCUE9QD3k6L\",\n      \"variables\": [\n        {\n          \"key\": \"number\",\n          \"value\": \"4\"\n        }\n      ],\n      \"filters\": [\n        \"tests\/test.tftest.hcl\"\n      ],\n      \"test-directory\": \"tests\",\n      \"verbose\": true,\n      \"test-status\": \"pass\",\n      \"tests-passed\": 1,\n      \"tests-failed\": 0,\n      \"tests-errored\": 0,\n      \"tests-skipped\": 0,\n      \"source\": \"tfe-api\",\n      \"message\": \"Queued manually via the Terraform Enterprise API\"\n    
},\n    \"relationships\": {\n      \"configuration-version\": {\n          \"data\": {\n            \"id\": \"cv-d3zBGFf5DfWY4GY9\",\n            \"type\": \"configuration-versions\"\n          },\n          \"links\": {\n            \"related\": \"\/api\/v2\/configuration-versions\/cv-d3zBGFf5DfWY4GY9\"\n          }\n      },\n      \"created-by\": {\n        \"data\": {\n          \"id\": \"user-zsRFs3AGaAHzbEfs\",\n          \"type\": \"users\"\n        },\n        \"links\": {\n          \"related\": \"\/api\/v2\/users\/user-zsRFs3AGaAHzbEfs\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Cancel a Test\n\n`POST \/organizations\/:organization_name\/tests\/registry-modules\/private\/:namespace\/:name\/:provider\/test-runs\/:test_run_id\/cancel`\n\n| Parameter            | Description                                                                                                                                                                                              |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to create a test in. The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. |\n| `:namespace`         | The namespace of the module for which the test is being canceled. For private modules this is the same as the `:organization_name` parameter.                                                           |\n| `:name`              | The name of the module for which the test is being canceled.                                                                                                                                           |\n| `:provider`          | The name of the provider for which the test is being canceled.        
                                                                                                                                 |\n| `:test_run_id`       | The test ID to cancel. |\n\nUse the `cancel` action to interrupt a test that is currently running. The action sends an `INT` signal to the running Terraform process, which instructs Terraform to safely end the tests and attempt to teardown any infrastructure that your tests create.\n\n| Status  | Response                  | Reason(s)                                  |\n| ------- | ------------------------- | ------------------------------------------ |\n| [202][] | none                      | Successfully queued a cancel request.      |\n| [409][] | [JSON API error object][] | Test was not running; cancel not allowed.  |\n| [404][] | [JSON API error object][] | Test was not found or user not authorized. |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tests\/registry-modules\/private\/my-organization\/registry-name\/registry-provider\/test-runs\/trun-xFMAHM3FhkFBL6Z7\/cancel\n```\n\n## Forcefully cancel a Test\n\n`POST \/organizations\/:organization_name\/tests\/registry-modules\/private\/:namespace\/:name\/:provider\/test-runs\/:test_run_id\/force-cancel`\n\n| Parameter            | Description                                                                                                                                                                                              |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization for the module. 
The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |\n| `:namespace`         | The namespace of the module for which the test is being force-canceled. For private modules this is the same as the `:organization_name` parameter. |\n| `:name`              | The name of the module for which the test is being force-canceled.                                                                                  |\n| `:provider`          | The name of the provider for which the test is being force-canceled.                                                                                |\n| `:test_run_id`       | The test ID to cancel.                                                                                                                              |\n\nThe `force-cancel` action ends the test immediately. Once invoked, Terraform places the test into a `canceled` state and terminates the running Terraform process.\n\n~> **Warning:** This endpoint has potentially dangerous side-effects, including loss of any in-flight state in the running Terraform process. Use this operation with extreme caution.\n\n| Status  | Response                  | Reason(s)                                                      |\n| ------- | ------------------------- | -------------------------------------------------------------- |\n| [202][] | none                      | Successfully queued a cancel request.                          |\n| [409][] | [JSON API error object][] | Test was not running, or has not been canceled non-forcefully. |\n| [404][] | [JSON API error object][] | Test was not found or user not authorized.                     
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tests\/registry-modules\/private\/my-organization\/registry-name\/registry-provider\/test-runs\/trun-xFMAHM3FhkFBL6Z7\/force-cancel\n```\n\n## Create an Environment Variable for Module Tests\n\n`POST \/organizations\/:organization_name\/tests\/registry-modules\/private\/:namespace\/:name\/:provider\/vars`\n\n| Parameter            | Description                                                                                                                                                                                              |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization of the module. The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. |\n| `:namespace`         | The namespace of the module for which the testing environment variable is being created. For private modules this is the same as the `:organization_name` parameter.                                                           |\n| `:name`              | The name of the module for which the testing environment variable is being created.                                                                                                                                           |\n| `:provider`          | The name of the provider for which the testing environment variable is being created.                                                                                                                                         
|\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                      | Type   | Default | Description |\n| ----------------------------- | ------ | ------- | ----------- |\n| `data.type`                   | string | none    | Must be `\"vars\"`. |\n| `data.attributes.key`         | string | none    | The variable's name. Test variable keys must begin with a letter or underscore and can only contain letters, numbers, and underscores. |\n| `data.attributes.value`       | string | `\"\"`    | The value of the variable. |\n| `data.attributes.description` | string | none    | The description of the variable. |\n| `data.attributes.category`    | string | none    | This must be `\"env\"`. |\n| `data.attributes.sensitive`   | bool   | `false` | Whether the value is sensitive. When set to `true`, the variable is write-only and its value is not visible after you set it. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"some_key\",\n      \"value\":\"some_value\",\n      \"description\":\"some description\",\n      \"category\":\"env\",\n      \"sensitive\":false\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tests\/registry-modules\/private\/my-organization\/registry-name\/registry-provider\/vars\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"var-xSCUzCxdqMs2ygcg\",\n    \"type\": \"vars\",\n    \"attributes\": {\n      \"key\": \"keykey\",\n      \"value\": \"some_value\",\n      \"sensitive\": false,\n      \"category\": \"env\",\n      \"hcl\": false,\n      \"created-at\": \"2023-10-03T19:47:05.393Z\",\n      \"description\": \"some description\",\n      \"version-id\": \"699b14ea5d5e5c02f6352fac6bfd0a1424c21d32be14d1d9eb79f5e1f28f663a\"\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/vars\/var-xSCUzCxdqMs2ygcg\"\n    }\n  }\n}\n```\n\n## List Test Variables for a Module\n\n`GET \/organizations\/:organization_name\/tests\/registry-modules\/private\/:namespace\/:name\/:provider\/vars`\n\n| Parameter            | Description |\n| -------------------- | ----------- |\n| `:organization_name` | The name of the organization for the module. 
The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |\n| `:namespace`         | The namespace of the module which the test environment variables were created for. For private modules this is the same as the `:organization_name` parameter.                                                           |\n| `:name`              | The name of the module which the test environment variables were created for.                                                                                                                                           |\n| `:provider`          | The name of the provider which the test environment variables were created for.                                                                                                                                         |\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tests\/registry-modules\/private\/my-organization\/registry-name\/registry-provider\/vars\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"var-xSCUzCxdqMs2ygcg\",\n      \"type\": \"vars\",\n      \"attributes\": {\n          \"key\": \"keykey\",\n          \"value\": \"some_value\",\n          \"sensitive\": false,\n          \"category\": \"env\",\n          \"hcl\": false,\n          \"created-at\": \"2023-10-03T19:47:05.393Z\",\n          \"description\": \"some description\",\n          \"version-id\": \"699b14ea5d5e5c02f6352fac6bfd0a1424c21d32be14d1d9eb79f5e1f28f663a\"\n      },\n      \"links\": {\n          \"self\": \"\/api\/v2\/vars\/var-xSCUzCxdqMs2ygcg\"\n      }\n    }\n  ]\n}\n```\n\n## Update Test Variables for a Module\n\n`PATCH 
\/organizations\/:organization_name\/tests\/registry-modules\/private\/:namespace\/:name\/:provider\/vars\/:variable_id`\n\n| Parameter            | Description |\n| -------------------- | ----------- |\n| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |\n| `:namespace`         | The namespace of the module for which the test environment variable is being updated. For private modules this is the same as the `:organization_name` parameter. |\n| `:name`              | The name of the module for which the test environment variable is being updated. |\n| `:provider`          | The name of the provider for which the test environment variable is being updated. |\n| `:variable_id`       | The ID of the variable to update. 
|\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path          | Type   | Default | Description |\n| ----------------- | ------ | ------- | ----------- |\n| `data.type`       | string | none    | Must be `\"vars\"`. |\n| `data.attributes` | object | none    | New attributes for the variable. This object can include `key`, `value`, `description`, `category`, and `sensitive` properties. Refer to [Create an Environment Variable for Module Tests](#create-an-environment-variable-for-module-tests) for additional information. All properties are optional. 
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"key\":\"name\",\n      \"value\":\"mars\",\n      \"description\": \"new description\",\n      \"category\":\"env\",\n      \"sensitive\": false\n    },\n    \"type\":\"vars\"\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tests\/registry-modules\/private\/my-organization\/registry-name\/registry-provider\/vars\/var-yRmifb4PJj7cLkMG\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\":\"var-yRmifb4PJj7cLkMG\",\n    \"type\":\"vars\",\n    \"attributes\": {\n      \"key\":\"name\",\n      \"value\":\"mars\",\n      \"description\":\"new description\",\n      \"sensitive\":false,\n      \"category\":\"env\",\n      \"hcl\":false\n    }\n  }\n}\n```\n\n## Delete Test Variable for a Module\n\n`DELETE \/organizations\/:organization_name\/tests\/registry-modules\/private\/:namespace\/:name\/:provider\/vars\/:variable_id`\n\n| Parameter            | Description |\n| -------------------- | ----------- |\n| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |\n| `:namespace`         | The namespace of the module for which the test environment variable is being deleted. 
For private modules this is the same as the `:organization_name` parameter. |\n| `:name`              | The name of the module for which the test environment variable is being deleted. |\n| `:provider`          | The name of the provider for which the test environment variable is being deleted. |\n| `:variable_id`       | The ID of the variable to delete. |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tests\/registry-modules\/private\/my-organization\/registry-name\/registry-provider\/vars\/var-yRmifb4PJj7cLkMG\n```","site":"terraform","answers_cleaned":"---\npage_title: Tests - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/tests` endpoint to manage Terraform tests. List, get, create, and cancel tests using the HTTP API.\n---\n\n[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200\n\n[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201\n\n[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202\n\n[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204\n\n[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400\n\n[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401\n\n[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403\n\n[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404\n\n[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409\n\n[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412\n\n[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422\n\n[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429\n\n[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500\n\n[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504\n\n[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents\n\n[JSON API error object]: https://jsonapi.org/format/#error-objects\n\n# Tests API\n\nTests are terraform operations (runs) and are referred to as Test Runs within the HCP Terraform API.\n\nPerforming a test on a new configuration is a multi-step process:\n\n1. [Create a configuration version on the registry module](#create-a-configuration-version-for-a-test).\n1. [Upload configuration files to the configuration version](#upload-configuration-files-for-a-test).\n1. [Create a test on the module](#create-a-test-run). HCP Terraform completes this step automatically when you upload a configuration file.\n\nAlternatively, you can create a test with a pre-existing configuration version, even one from another module. This is useful for promoting known good code from one module to another.\n\n## Attributes\n\nThe `tests` API endpoint has the following attributes:\n\n### Test Run States\n\nThe state of the test operation is found in `data.attributes.status`, and you can reference the following list of possible states.\n\n| State      | Description |\n| ---------- | ----------- |\n| `pending`  | The initial status of a run after creation. |\n| `queued`   | HCP Terraform has queued the test operation to start. |\n| `running`  | HCP Terraform is executing the test. |\n| `errored`  | The test has errored. This is a final state. |\n| `canceled` | The test has been canceled. This is a final state. |\n| `finished` | The test has completed. This is a final state. |\n\n### Test run status\n\nThe final test status is found in `data.attributes.test-status`, and you can reference the following list of possible states.\n\n| Status | Description |\n| ------ | ----------- |\n| `pass` | The given tests have passed. |\n| `fail` | The given tests have failed. |\n\n### Detailed test status\n\nThe test results can be found via the following attributes:\n\n| Status | Description |\n| ------ | ----------- |\n| `data.attributes.tests-passed`  | The number of tests that have passed. |\n| `data.attributes.tests-failed`  | The number of tests that have failed. |\n| `data.attributes.tests-errored` | The number of tests that have errored out. |\n| `data.attributes.tests-skipped` | The number of tests that have been skipped. |\n\n### Test Sources\n\nYou can use the following sources as [tests list](/terraform/cloud-docs/api-docs/private-registry/tests#list-tests-for-a-module) query parameters.\n\n| Source | Description |\n| ------ | ----------- |\n| `terraform`                 | Indicates a test was queued from HCP Terraform CLI. |\n| `tfe-api`                   | Indicates a test was queued from HCP Terraform API. |\n| `tfe-configuration-version` | Indicates a test was queued from a Configuration Version, triggered from a VCS provider. |\n\n## Create a Test\n\n`POST /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/test-runs`\n\n| Parameter            | Description |\n| -------------------- | ----------- |\n| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |\n| `:namespace`         | The namespace of the module for which the test is being created. For private modules this is the same as the `:organization_name` parameter. |\n| `:name`              | The name of the module for which the test is being created. |\n| `:provider`          | The name of the provider for which the test is being created. |\n\nA test run executes tests against a registry module, using a configuration version and the module's current environment variables.\n\nCreating a test run requires permission to access the specified module. Refer to [Permissions](/terraform/cloud-docs/users-teams-organizations/permissions) for more information.\n\nWhen creating a test run, you may optionally provide a list of variable objects containing key and value attributes. These values apply to that test run specifically and take precedence over variables with the same key that are created within the module. All values must be expressed as an HCL literal in the same syntax you would use when writing Terraform code.\n\n### Sample Test Variables\n\n```json\n\"attributes\": {\n  \"variables\": [\n    { \"key\": \"replicas\", \"value\": \"2\" },\n    { \"key\": \"access_key\", \"value\": \"\\\"ABCDE12345\\\"\" }\n  ]\n}\n```\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path | Type | Default | Description |\n| -------- | ---- | ------- | ----------- |\n| `data.attributes.verbose`        | bool                | `false`       | Specifies whether Terraform should print the plan or state for each test run block as it executes. |\n| `data.attributes.test-directory` | string              | `\"tests\"`     | Sets the directory where HCP Terraform executes the tests. |\n| `data.attributes.filters`        | array[string]       | (empty array) | When specified, HCP Terraform only executes the test files contained within this array. |\n| `data.attributes.variables`      | array[{key, value}] | (empty array) | Specifies an optional list of test-specific environment variable values. |\n| `data.relationships.configuration-version.data.id` | string | none | Specifies the configuration version that HCP Terraform executes the test against. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"verbose\": true,\n      \"filters\": [\"tests/test.tftest.hcl\"],\n      \"test-directory\": \"tests\",\n      \"variables\": [\n        { \"key\": \"number\", \"value\": \"4\" }\n      ]\n    },\n    \"type\": \"test-runs\"\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"trun-KFg8DSiRz4E37mdJ\",\n    \"type\": \"test-runs\",\n    \"attributes\": {\n      \"status\": \"queued\",\n      \"status-timestamps\": {\n        \"queued-at\": \"2023-10-03T18:27:39+00:00\"\n      },\n      \"created-at\": \"2023-10-03T18:27:39.239Z\",\n      \"updated-at\": \"2023-10-03T18:27:39.264Z\",\n      \"test-configurable-type\": \"RegistryModule\",\n      \"test-configurable-id\": \"mod-9rjVHLCUE9QD3k6L\",\n      \"variables\": [\n        {\n          \"key\": \"number\",\n          \"value\": \"4\"\n        }\n      ],\n      \"filters\": [\n        \"tests/test.tftest.hcl\"\n      ],\n      \"test-directory\": \"tests\",\n      \"verbose\": true,\n      \"test-status\": null,\n      \"tests-passed\": null,\n      \"tests-failed\": null,\n      \"tests-errored\": null,\n      \"tests-skipped\": null,\n      \"source\": \"tfe-api\",\n      \"message\": \"Queued manually via the Terraform Enterprise API\"\n    },\n    \"relationships\": {\n      \"configuration-version\": {\n        \"data\": {\n          \"id\": \"cv-d3zBGFf5DfWY4GY9\",\n          \"type\": \"configuration-versions\"\n        },\n        \"links\": {\n          \"related\": \"/api/v2/configuration-versions/cv-d3zBGFf5DfWY4GY9\"\n        }\n      },\n      \"created-by\": {\n        \"data\": {\n          \"id\": \"user-zsRFs3AGaAHzbEfs\",\n          \"type\": \"users\"\n        },\n        \"links\": {\n          \"related\": \"/api/v2/users/user-zsRFs3AGaAHzbEfs\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Create a Configuration Version for a Test\n\n`POST /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/test-runs/configuration-versions`\n\n| Parameter            | Description |\n| -------------------- | ----------- |\n| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |\n| `:namespace`         | The namespace of the module for which the configuration version is being created. For private modules this is the same as the `:organization_name` parameter. |\n| `:name`              | The name of the module for which the configuration version is being created. |\n| `:provider`          | The name of the provider for which the configuration version is being created. |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application/vnd.api+json\" \\\n  --request POST \\\n  https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs/configuration-versions\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"cv-aaady7niJMY1wAvx\",\n    \"type\": \"configuration-versions\",\n    \"attributes\": {\n      \"auto-queue-runs\": true,\n      \"error\": null,\n      \"error-message\": null,\n      \"source\": \"tfe-api\",\n      \"speculative\": false,\n      \"status\": \"pending\",\n      \"status-timestamps\": {},\n      \"changed-files\": [],\n      \"provisional\": false,\n      \"upload-url\": \"https://archivist.terraform.io/v1/object/dmF1bHQ6djM6eFliQ0l1ZEhNUDRMZmdWeExoYWZ1WnFwaCtYQUFSQjFaWVcySkEyT0tyZTZXQ0hjN3ZYQkFvbkJHWkg2Y0U2MDRHRXFvQVl6cUJqQzJ0VkppVHBXTlJNWmpVc1ZTekg5Q1hMZ0hNaUpNdUhib1hGS1RpT3czRGdRaWtPZFZ3VWpDQ1U0S2dhK2xLTUQ2ZFZDaUZ3SktiNytrMlpoVHd0cXdGVHIway8zRkFmejdzMSt0Rm9TNFBTV3dWYjZUTzJVNE1jaW9UZ2VKVFJNRnUvbjBudUp4U0l6VzFDYkNzVVFsb2VFbC9DRFlCTWFsbXBMNzZLUGQxeTJHb09ZTkxHL1d2K1NtcmlEQXptZTh1Q1BwR1dhbVBXQTRiREdlTkI3Qyt1YTRRamFkRzBWYUg3NE52TGpqT1NKbzFrZ3J3QmxnMGhHT3VaTHNhSmo0eXpv\"\n    },\n    \"relationships\": {\n      \"ingress-attributes\": {\n        \"data\": null,\n        \"links\": {\n          \"related\": \"/api/v2/configuration-versions/cv-aaady7niJMY1wAvx/ingress-attributes\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"/api/v2/configuration-versions/cv-aaady7niJMY1wAvx\"\n    }\n  }\n}\n```\n\n## Upload Configuration Files for a Test\n\n`PUT https://archivist.terraform.io/v1/object/<UNIQUE OBJECT ID>`\n\nThe URL is provided in the `upload-url` attribute when creating a `configuration-versions` resource. After creation, the URL is hidden on the resource and no longer available.\n\n### Sample Request\n\n`@filename` is the name of the configuration file you wish to upload.\n\n```shell\ncurl \\\n  --header \"Content-Type: application/octet-stream\" \\\n  --request PUT \\\n  --data-binary @filename \\\n  https://archivist.terraform.io/v1/object/4c44d964-eba7-4dd5-ad29-1ece7b99e8da\n```\n\n## List Tests for a Module\n\n`GET /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/test-runs`\n\n| Parameter            | Description |\n| -------------------- | ----------- |\n| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |\n| `:namespace`         | The namespace of the module which the tests have executed against. For private modules this is the same as the `:organization_name` parameter. |\n| `:name`              | The name of the module which the tests have executed against. |\n| `:provider`          | The name of the provider which the tests have executed against. |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling does not automatically encode URLs.\n\n| Parameter        | Description | Required |\n| ---------------- | ----------- | -------- |\n| `page[number]`   | If omitted, the endpoint returns the first page. | Optional |\n| `page[size]`     | If omitted, the endpoint returns 20 runs per page. | Optional |\n| `filter[source]` | A comma-separated list of test sources; the result will only include tests that came from one of these sources. Options are listed in [Test Sources](/terraform/cloud-docs/api-docs/private-registry/tests#test-sources). | Optional |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application/vnd.api+json\" \\\n  https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"trun-KFg8DSiRz4E37mdJ\",\n      \"type\": \"test-runs\",\n      \"attributes\": {\n        \"status\": \"finished\",\n        \"status-timestamps\": {\n          \"queued-at\": \"2023-10-03T18:27:39+00:00\",\n          \"started-at\": \"2023-10-03T18:27:41+00:00\",\n          \"finished-at\": \"2023-10-03T18:27:53+00:00\"\n        },\n        \"log-read-url\": \"https://archivist.terraform.io/v1/object/dmF1bHQ6djM6eFliQ0l1ZEhNUDRMZmdWeExoYWZ1WnFwaCtYQUFSQjFaWVcySkEyT0tyZTZXQ0hjN3ZYQkFvbkJHWkg2Y0U2MDRHRXFvQVl6cUJqQzJ0VkppVHBXTlJNWmpVc1ZTekg5Q1hMZ0hNaUpNdUhib1hGS1RpT3czRGdRaWtPZFZ3VWpDQ1U0S2dhK2xLTUQ2ZFZDaUZ3SktiNytrMlpoVHd0cXdGVHIway8zRkFmejdzMSt0Rm9TNFBTV3dWYjZUTzJVNE1jaW9UZ2VKVFJNRnUvbjBudUp4U0l6VzFDYkNzVVFsb2VFbC9DRFlCTWFsbXBMNzZLUGQxeTJHb09ZTkxHL1d2K1NtcmlEQXptZTh1Q1BwR1dhbVBXQTRiREdlTkI3Qyt1YTRRamFkRzBWYUg3NE52TGpqT1NKbzFrZ3J3QmxnMGhHT3VaTHNhSmo0eXpv\",\n        \"created-at\": \"2023-10-03T18:27:39.239Z\",\n        \"updated-at\": \"2023-10-03T18:27:53.574Z\",\n        \"test-configurable-type\": \"RegistryModule\",\n        \"test-configurable-id\": \"mod-9rjVHLCUE9QD3k6L\",\n        \"variables\": [\n          {\n            \"key\": \"number\",\n            \"value\": \"4\"\n          }\n        ],\n        \"filters\": [\n          \"tests/test.tftest.hcl\"\n        ],\n        \"test-directory\": \"tests\",\n        \"verbose\": true,\n        \"test-status\": \"pass\",\n        \"tests-passed\": 1,\n        \"tests-failed\": 0,\n        \"tests-errored\": 0,\n        \"tests-skipped\": 0,\n        \"source\": \"tfe-api\",\n        \"message\": \"Queued manually via the Terraform Enterprise API\"\n      },\n      \"relationships\": {\n        \"configuration-version\": {\n          \"data\": {\n            \"id\": \"cv-d3zBGFf5DfWY4GY9\",\n            \"type\": \"configuration-versions\"\n          },\n          \"links\": {\n            \"related\": \"/api/v2/configuration-versions/cv-d3zBGFf5DfWY4GY9\"\n          }\n        },\n        \"created-by\": {\n          \"data\": {\n            \"id\": \"user-zsRFs3AGaAHzbEfs\",\n            \"type\": \"users\"\n          },\n          \"links\": {\n            \"related\": \"/api/v2/users/user-zsRFs3AGaAHzbEfs\"\n          }\n        }\n      }\n    }\n  ]\n}\n```\n\n## Get Test Details\n\n`GET /organizations/:organization_name/tests/registry-modules/private/:namespace/:name/:provider/test-runs/:test_run_id`\n\n| Parameter            | Description |\n| -------------------- | ----------- |\n| `:organization_name` | The name of the organization for the module. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |\n| `:namespace`         | The namespace of the module which the test was executed against. For private modules this is the same as the `:organization_name` parameter. |\n| `:name`              | The name of the module which the test was executed against. |\n| `:provider`          | The name of the provider which the test was executed against. |\n| `:test_run_id`       | The test ID to get. |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application/vnd.api+json\" \\\n  https://app.terraform.io/api/v2/organizations/my-organization/tests/registry-modules/private/my-organization/registry-name/registry-provider/test-runs/trun-xFMAHM3FhkFBL6Z7\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"trun-KFg8DSiRz4E37mdJ\",\n    \"type\": \"test-runs\",\n    \"attributes\": {\n      \"status\": \"finished\",\n      \"status-timestamps\": {\n        \"queued-at\": \"2023-10-03T18:27:39+00:00\",\n        \"started-at\": \"2023-10-03T18:27:41+00:00\",\n        \"finished-at\": \"2023-10-03T18:27:53+00:00\"\n      },\n      \"log-read-url\": \"https://archivist.terraform.io/v1/object/dmF1bHQ6djM6eFliQ0l1ZEhNUDRMZmdWeExoYWZ1WnFwaCtYQUFSQjFaWVcySkEyT0tyZTZXQ0hjN3ZYQkFvbkJHWkg2Y0U2MDRHRXFvQVl6cUJqQzJ0VkppVHBXTlJNWmpVc1ZTekg5Q1hMZ0hNaUpNdUhib1hGS1RpT3czRGdRaWtPZFZ3VWpDQ1U0S2dhK2xLTUQ2ZFZDaUZ3SktiNytrMlpoVHd0cXdGVHIway8zRkFmejdzMSt0Rm9TNFBTV3dWYjZUTzJVNE1jaW9UZ2VKVFJNRnUvbjBudUp4U0l6VzFDYkNzVVFsb2VFbC9DRFlCTWFsbXBMNzZLUGQxeTJHb09ZTkxHL1d2K1NtcmlEQXptZTh1Q1BwR1dhbVBXQTRiREdlTkI3Qyt1YTRRamFkRzBWYUg3NE52TGpqT1NKbzFrZ3J3QmxnMGhHT3VaTHNhSmo0eXpv\",\n      \"created-at\": \"2023-10-03T18:27:39.239Z\",\n      \"updated-at\": \"2023-10-03T18:27:53.574Z\",\n      \"test-configurable-type\": \"RegistryModule\",\n      \"test-configurable-id\": \"mod-9rjVHLCUE9QD3k6L\",\n      \"variables\": [\n        {\n          \"key\": \"number\",\n          \"value\": \"4\"\n        }\n      ],\n      \"filters\": [\n        \"tests/test.tftest.hcl\"\n      ],\n      \"test-directory\": \"tests\",\n      \"verbose\": true,\n      \"test-status\": \"pass\",\n      \"tests-passed\": 1,\n      \"tests-failed\": 0,\n      \"tests-errored\": 0,\n      \"tests-skipped\": 0,\n      \"source\": \"tfe-api\",\n      \"message\": \"Queued manually via the Terraform Enterprise API\"\n    },\n    \"relationships\": {\n      \"configuration-version\": {\n        \"data\": {\n          \"id\": \"cv-d3zBGFf5DfWY4GY9\",\n          \"type\": \"configuration-versions\"\n        },\n        \"links\": {
related     api v2 configuration versions cv d3zBGFf5DfWY4GY9                              created by              data                id    user zsRFs3AGaAHzbEfs              type    users                      links                related     api v2 users user zsRFs3AGaAHzbEfs                                        Cancel a Test   POST  organizations  organization name tests registry modules private  namespace  name  provider test runs  test run id cancel     Parameter              Description                                                                                                                                                                                                                                                                                                                                                                                                                                        organization name    The name of the organization to create a test in  The organization must already exist  and the token authenticating the API request must belong to the  owners  team or a member of the  owners  team        namespace            The namespace of the module for which the test is being canceled  For private modules this is the same as the   organization name  parameter                                                                  name                 The name of the module for which the test is being canceled                                                                                                                                                  provider             The name of the provider for which the test is being canceled                                                                                                                                                test run id          The test ID to cancel     Use the  cancel  action to interrupt a test that is currently running  The action sends an  INT  signal to the running 
Terraform process  which instructs Terraform to safely end the tests and attempt to teardown any infrastructure that your tests create     Status    Response                    Reason s                                                                                                                             202      none                        Successfully queued a cancel request            409       JSON API error object      Test was not running  cancel not allowed        404       JSON API error object      Test was not found or user not authorized         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST     https   app terraform io api v2 organizations my organization tests registry modules private my organization registry name registry provider test runs trun xFMAHM3FhkFBL6Z7 cancel         Forcefully cancel a Test   POST  organizations  organization name tests registry modules private  namespace  name  provider test runs  test run id force cancel     Parameter              Description                                                                                                                                                                                                                                                                                                                                                                                                                                        organization name    The name of the organization for the module  The organization must already exist  and the token authenticating the API request must belong to the  owners  team or a member of the  owners  team        namespace            The namespace of the module for which the test is being force canceled  For private modules this is the same as the   organization name  parameter                                                                  name              
   The name of the module for which the test is being force canceled                                                                                                                                                  provider             The name of the provider for which the test is being force canceled                                                                                                                                                test run id          The test ID to cancel    The  force cancel  action ends the test immediately  Once invoked  Terraform places the test into a  canceled  state and terminates the running Terraform process        Warning    This endpoint has potentially dangerous side effects  including loss of any in flight state in the running Terraform process  Use this operation with extreme caution     Status    Response                    Reason s                                                                                                                                                                     202      none                        Successfully queued a cancel request                                409       JSON API error object      Test was not running  or has not been canceled non forcefully       404       JSON API error object      Test was not found or user not authorized                             Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST     https   app terraform io api v2 organizations my organization tests registry modules private my organization registry name registry provider test runs trun xFMAHM3FhkFBL6Z7 force cancel         Create an Environment Variable for Module Tests   POST  organizations  organization name tests registry modules private  namespace  name  provider vars     Parameter              Description                                                                                               
                                                                                                                                                                                                                                                                                                                                         organization name    The name of the organization of the module  The organization must already exist  and the token authenticating the API request must belong to the  owners  team or a member of the  owners  team        namespace            The namespace of the module for which the testing environment variable is being created  For private modules this is the same as the   organization name  parameter                                                                  name                 The name of the module for which the testing environment variable is being created                                                                                                                                                  provider             The name of the provider for which the testing environment variable is being created                                                                                                                                                 Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                        Type     Default   Description                                                                                                                                                                                                                                                             data type                      string   none   Must be   vars                                                                                             data attributes key            string   none   The variable s name  
Test variable keys must begin with a letter or underscore and can only contain letters  numbers  and underscores                                                                                data attributes value          string             The value of the variable                                                                                  data attributes description    string   none   The description of the variable                                                                            data attributes category       string   none   This must be   env                                                                                           data attributes sensitive      bool      false    Whether the value is sensitive  When set to  true   Terraform writes the variable once and is not visible thereafter         Sample Payload     json      data          type   vars        attributes            key   some key          value   some value          description   some description          category   env          sensitive  false                      Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 organizations my organization tests registry modules private my organization registry name registry provider vars          Sample Response           data          id    var xSCUzCxdqMs2ygcg        type    vars        attributes            key    keykey          value    some value          sensitive   false         category    env          hcl   false         created at    2023 10 03T19 47 05 393Z          description    some description          version id    699b14ea5d5e5c02f6352fac6bfd0a1424c21d32be14d1d9eb79f5e1f28f663a              links            self     api v2 vars var xSCUzCxdqMs2ygcg                      List Test Variables for a Module   GET  organizations  organization name tests registry modules 
private  namespace  name  provider vars     Parameter              Description                                                                                                                                                                                                                                                                                                                                                                                                                                        organization name    The name of the organization for the module  The organization must already exist  and the token authenticating the API request must belong to the  owners  team or a member of the  owners  team        namespace            The namespace of the module which the test environment variables were created for  For private modules this is the same as the   organization name  parameter                                                                  name                 The name of the module which the test environment variables were created for                                                                                                                                                  provider             The name of the provider which the test environment variables were created for                                                                                                                                                 Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations my organization tests registry modules private my organization registry name registry provider vars          Sample Response     json      data                  id    var xSCUzCxdqMs2ygcg          type    vars          attributes                key    keykey              value    some value              sensitive   false             category    env              
hcl   false             created at    2023 10 03T19 47 05 393Z              description    some description              version id    699b14ea5d5e5c02f6352fac6bfd0a1424c21d32be14d1d9eb79f5e1f28f663a                  links                self     api v2 vars var xSCUzCxdqMs2ygcg                              Update Test Variables for a Module   PATCH  organizations  organization name tests registry modules private  namespace  name  provider vars variable id     Parameter              Description                                                                                                                                                                                                                                                                                                                                                                                                                                        organization name    The name of the organization for the module  The organization must already exist  and the token authenticating the API request must belong to the  owners  team or a member of the  owners  team        namespace            The namespace of the module for which the test environment variable is being updated  For private modules this is the same as the   organization name  parameter                                                                  name                 The name of the module for which the test environment variable is being updated                                                                                                                                                  provider             The name of the provider for which the test environment variable is being updated                                                                                                                                                variable id          The ID of the variable to update         Request Body  This PATCH endpoint requires a JSON 
object with the following properties as a request payload   Properties without a default value are required     Key path            Type     Default   Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             data type          string             Must be   vars                                                                                                                                                                                                                                                                                                                   data attributes    object   none   New attributes for the variable  This object can include  key    value    description    category   and  sensitive  properties  Refer to  Create an Environment Variable for Module Tests   create an environment variable for module tests  for additional information  All properties are optional         Sample Payload     json      data          attributes            key   name          value   mars          description    new description          category   env          sensitive   false             type   vars                 Sample Request     bash   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json       https   app terraform io api v2 organizations my organization 
tests registry modules private my organization registry name registry provider vars var yRmifb4PJj7cLkMG          Sample Response     json      data          id   var yRmifb4PJj7cLkMG        type   vars        attributes            key   name          value   mars          description   new description          sensitive  false         category   env          hcl  false                     Delete Test Variable for a Module   DELETE  organizations  organization name tests registry modules private  namespace  name  provider vars variable id      Parameter              Description                                                                                                                                                                                                                                                                                                                                                                                                                                        organization name    The name of the organization for the module  The organization must already exist  and the token authenticating the API request must belong to the  owners  team or a member of the  owners  team        namespace            The namespace of the module for which the test environment variable is being deleted  For private modules this is the same as the   organization name  parameter                                                                  name                 The name of the module for which the test environment variable is being deleted                                                                                                                                                  provider             The name of the provider for which the test environment variable is being deleted                                                                                                                                                variable id          The ID of 
the variable to delete         Sample Request     bash   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE       https   app terraform io api v2 organizations my organization tests registry modules private my organization registry name registry provider vars var yRmifb4PJj7cLkMG    "}
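The test-run and test-variable endpoints above share one long URL prefix that only varies in its trailing segments. As an unofficial sketch (the helper names `module_tests_path`, `cancel_url`, and `env_var_payload` are my own, not part of any HashiCorp client library), the URL assembly and the variable-creation payload could look like:

```python
import json

BASE = "https://app.terraform.io/api/v2"


def module_tests_path(org, namespace, name, provider):
    # Shared prefix of the registry-module test endpoints documented above.
    # For private modules, the namespace equals the organization name.
    return (f"{BASE}/organizations/{org}/tests/registry-modules/"
            f"private/{namespace}/{name}/{provider}")


def cancel_url(org, namespace, name, provider, test_run_id, force=False):
    # "cancel" sends an INT signal to the test; "force-cancel" ends it
    # immediately and should be used with extreme caution.
    action = "force-cancel" if force else "cancel"
    return (f"{module_tests_path(org, namespace, name, provider)}"
            f"/test-runs/{test_run_id}/{action}")


def env_var_payload(key, value, description="", sensitive=False):
    # JSON:API payload for "Create an Environment Variable for Module Tests".
    # The category must be "env" for module-test variables.
    return {
        "data": {
            "type": "vars",
            "attributes": {
                "key": key,
                "value": value,
                "description": description,
                "category": "env",
                "sensitive": sensitive,
            },
        }
    }


url = cancel_url("my-organization", "my-organization", "registry-name",
                 "registry-provider", "trun-xFMAHM3FhkFBL6Z7", force=True)
print(url)
print(json.dumps(env_var_payload("some_key", "some_value"), indent=2))
```

Actually sending the requests still requires an HTTP client that attaches the `Authorization: Bearer $TOKEN` and `Content-Type: application/vnd.api+json` headers shown in the curl samples.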
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Manage module versions API Docs HCP Terraform tfc only true Use these module management endpoints to deprecate and revert the deprecation of module versions you published to an organization s private registry","answers":"---\npage_title: Manage module versions - API Docs - HCP Terraform\ndescription: |-\n Use these module management endpoints to deprecate and revert the deprecation of module versions you published to an organization's private registry. \ntfc_only: true\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[503]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/503\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Manage module versions API\n\nThis topic provides reference information about API endpoints that 
let you deprecate module versions in your organization\u2019s private registry. \n\n## Introduction\n\nWhen you deprecate a module version, HCP Terraform adds warnings to the module's registry page and to run outputs when anyone uses the deprecated version. \n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include \"tfc-package-callouts\/manage-module-versions.mdx\"\n<!-- END: TFC:only name:pnp-callout -->\n\nAfter deprecating a module version, you can revert that deprecated status to remove the warnings from that version in the registry and outputs. For more details on module deprecation, refer to [Deprecate module versions](\/terraform\/cloud-docs\/registry\/manage-module-versions).\n\n@include \"public-beta\/manage-module-versions.mdx\"\n\n## Deprecate a module version\n\nUse this endpoint to deprecate a module version.\n\n`PATCH \/api\/v2\/organizations\/:organization_name\/registry-modules\/private\/:organization_name\/:module_name\/:module_provider\/:module_version`\n\n\n| Parameter | Description |\n| :---- | :---- |\n| `:organization_name` | The name of the organization the module belongs to. |\n| `:module_name` | The name of the module whose version you want to deprecate. |\n| `:module_provider` | Specifies the Terraform provider that this module is used for. |\n| `:module_version` | The module version you want to deprecate. |\n\nThis endpoint allows you to deprecate a specific module version. Deprecating a module version adds warnings to the run output of any consumers using this module.  \n\n\n| Status | Response | Reason |\n| :---- | :---- | :---- |\n| [200](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200) | [JSON API document](\/terraform\/cloud-docs\/api-docs#json-api-documents)  | Successfully deprecated a module version.  
|\n| [404](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404) | [JSON API error object](http:\/\/jsonapi.org\/format\/#error-objects) | This organization is not authorized to deprecate this module version, or the module version does not exist. |\n| [422](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422) | [JSON API error object](http:\/\/jsonapi.org\/format\/#error-objects) | Malformed request body, for example the request is missing attributes or uses the wrong types. |\n| [500](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500) or [503](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/503) | [JSON API error object](http:\/\/jsonapi.org\/format\/#error-objects) | Failure occurred while deprecating a module version. |\n\n### **Sample Payload**\n\n```json\n{\n  \"data\": {\n    \"type\": \"module-versions\",\n    \"attributes\": {\n      \"deprecation\": {\n        \"deprecated-status\": \"Deprecated\",\n        \"reason\": \"Deprecated due to a security vulnerability issue.\",\n        \"link\": \"https:\/\/www.hashicorp.com\/\"\n      }\n    }\n  }\n}\n```\n\n### **Sample Request**\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\nhttps:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-modules\/private\/hashicorp\/lb-http\/google\/11.0.0\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": {\n        \"type\": \"module-versions\",\n        \"id\": \"1\",\n        \"relationships\": {\n            \"deprecation\": {\n                \"data\": {\n                    \"id\": \"2\",\n                    \"type\": \"deprecations\"\n                }\n            }\n        }\n    },\n    \"included\": [\n        {\n            \"type\": \"deprecations\",\n            \"id\": \"2\",\n            \"attributes\": {\n                \"link\": 
\"https:\/\/www.hashicorp.com\/\",\n                \"reason\": \"Deprecated due to a security vulnerability issue. Applies will be blocked in 15 days.\"\n            }\n        }\n    ]\n}\n```\n\n\n## Revert the deprecation status for a module version\n\nUse this endpoint to revert the deprecation of a module version.\n\n`PATCH \/api\/v2\/organizations\/:organization_name\/registry-modules\/private\/:organization_name\/:module_name\/:module_provider\/:module_version`\n\n| Parameter | Description |\n| :---- | :---- |\n| `:organization_name` | The name of the organization the module belongs to. |\n| `:module_name` | The name of the module you want to revert the deprecation of. |\n| `:module_provider` | Specifies the Terraform provider that this module is used for. |\n| `:module_version` | The module version you want to revert the deprecation of. |\n\nDeprecating a module version adds warnings to the run output of any consumers using this module. Reverting the deprecation status removes warnings from the output of consumers and fully reinstates the module version. \n\n| Status | Response | Reason |\n| :---- | :---- | :---- |\n| [200](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200) | [JSON API document](\/terraform\/cloud-docs\/api-docs#json-api-documents)  | Successfully reverted a module version\u2019s deprecation status and reinstated that version. |\n| [404](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404) | [JSON API error object](http:\/\/jsonapi.org\/format\/#error-objects) | This organization is not authorized to revert the deprecation of this module version, or the module version does not exist. |\n| [422](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422) | [JSON API error object](http:\/\/jsonapi.org\/format\/#error-objects) | Malformed request body, for example the request is missing attributes or uses the wrong types. 
|\n| [500](https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500) or [503] | [JSON API error object](http:\/\/jsonapi.org\/format\/#error-objects) | Failure occurred while reverting the deprecation of a module version. |\n\n### **Sample Request**\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\nhttps:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-modules\/private\/hashicorp\/lb-http\/google\/11.0.0\n```\n\n**Sample payload**\n\n```json\n{\n  \"data\": {\n    \"type\": \"module-versions\",\n    \"attributes\": {\n      \"deprecation\": {\n        \"deprecated-status\": \"Undeprecated\"\n      }\n    }\n  }\n}\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n     \"type\": \"module-versions\",\n     \"id\": \"1\"\n  }\n}\n```","site":"terraform","answers_cleaned":"    page title  Manage module versions   API Docs   HCP Terraform description      Use these module management endpoints to deprecate and revert the deprecation of module versions you published to an organization s private registry   tfc only  true       200   https   developer mozilla org en US docs Web HTTP Status 200   201   https   developer mozilla org en US docs Web HTTP Status 201   202   https   developer mozilla org en US docs Web HTTP Status 202   204   https   developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en 
version           Sample Request       shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json   https   app terraform io api v2 organizations hashicorp registry modules private hashicorp lb http google 11 0 0        Sample payload       json      data          type    module versions        attributes            deprecation              deprecated status    Undeprecated                               Sample Response     json      data           type    module versions         id    1           "}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Providers API Docs HCP Terraform Use the gpg keys endpoint to manage the GPG keys used to sign private providers List add get update and delete GPG keys using the HTTP API","answers":"---\npage_title: Providers - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/gpg-keys` endpoint to manage the GPG keys used to sign private providers. List, add, get, update, and delete GPG keys using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# GPG Keys API\n\nThese endpoints are only relevant to private providers. 
When you [publish a private provider](\/terraform\/cloud-docs\/registry\/publish-providers) to the HCP Terraform private registry, you must upload the public key of the GPG keypair used to sign the release. Refer to [Preparing and Adding a Signing Key](\/terraform\/registry\/providers\/publishing#preparing-and-adding-a-signing-key) for more details.\n\nYou need [owners team](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-owners) or [Manage Private Registry](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-private-registry) permissions to add, update, or delete GPG keys in a private registry.\n\n## List GPG Keys\n\n`GET \/api\/registry\/:registry_name\/v2\/gpg-keys`\n\n### Parameters\n\n| Parameter        | Description        |\n|------------------|--------------------|\n| `:registry_name` | Must be `private`. |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling does not automatically encode URLs.\n\n| Parameter           | Description                                                                                                          |\n|---------------------|----------------------------------------------------------------------------------------------------------------------|\n| `filter[namespace]` | **Required.** A comma-separated list of one or more namespaces. Each namespace must be an authorized HCP Terraform or Terraform Enterprise organization name. |\n| `page[number]`      | **Optional.** If omitted, the endpoint returns the first page.                                                   |\n| `page[size]`        | **Optional.** If omitted, the endpoint returns 20 GPG keys per page.                                             
|\n\nGets a list of GPG keys belonging to the specified namespaces.\n\n| Status  | Response                                   | Reason                                                    |\n|---------|--------------------------------------------|-----------------------------------------------------------|\n| [200][] | [JSON API document][] (`type: \"gpg-keys\"`) | Successfully fetched GPG keys                             |\n| [400][] | [JSON API error object][]                  | Error - missing namespaces in request                     |\n| [403][] | [JSON API error object][]                  | Forbidden - no authorized namespaces specified in request |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  \"https:\/\/app.terraform.io\/api\/registry\/private\/v2\/gpg-keys?filter%5Bnamespace%5D=my-organization,my-other-organization\"\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"type\": \"gpg-keys\",\n      \"id\": \"1\",\n      \"attributes\": {\n        \"ascii-armor\": \"-----BEGIN PGP PUBLIC KEY BLOCK-----...\",\n        \"created-at\": \"2022-02-08T19:15:47Z\",\n        \"key-id\": \"C4E5E6C66C79C778\",\n        \"namespace\": \"my-other-organization\",\n        \"source\": \"\",\n        \"source-url\": null,\n        \"trust-signature\": \"\",\n        \"updated-at\": \"2022-02-08T19:15:47Z\"\n      },\n      \"links\": {\n        \"self\": \"\/v2\/gpg-keys\/1\"\n      }\n    },\n    {\n      \"type\": \"gpg-keys\",\n      \"id\": \"140\",\n      \"attributes\": {\n        \"ascii-armor\": \"-----BEGIN PGP PUBLIC KEY BLOCK-----...\",\n        \"created-at\": \"2022-04-28T21:32:11Z\",\n        \"key-id\": \"C4E5E6C66C79C778\",\n        \"namespace\": \"my-organization\",\n        \"source\": \"\",\n        \"source-url\": null,\n        \"trust-signature\": \"\",\n        \"updated-at\": \"2022-04-28T21:32:11Z\"\n    
  },\n      \"links\": {\n        \"self\": \"\/v2\/gpg-keys\/140\"\n      }\n    }\n  ],\n  \"links\": {\n    \"first\": \"\/v2\/gpg-keys?filter%5Bnamespace%5D=my-organization%2Cmy-other-organization&page%5Bnumber%5D=1&page%5Bsize%5D=15\",\n    \"last\": \"\/v2\/gpg-keys?filter%5Bnamespace%5D=my-organization%2Cmy-other-organization&page%5Bnumber%5D=1&page%5Bsize%5D=15\",\n    \"next\": null,\n    \"prev\": null\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"page-size\": 15,\n      \"current-page\": 1,\n      \"next-page\": null,\n      \"prev-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 2\n    }\n  }\n}\n```\n\n## Add a GPG Key\n\n`POST \/api\/registry\/:registry_name\/v2\/gpg-keys`\n\n### Parameters\n\n| Parameter            | Description          |\n| -------------------- | -------------------- |\n| `:registry_name`     | Must be `private`.   |\n\nUploads a GPG Key to a private registry scoped with a namespace. The response will provide a \"key-id\", which is required to [Create a Provider Version](\/terraform\/cloud-docs\/api-docs\/private-registry\/provider-versions-platforms#create-a-provider-version).\n\n| Status  | Response                                                   | Reason                                                         |\n| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"gpg-keys\"`)                 | Successfully uploads a GPG key to a private provider           |\n| [422][] | [JSON API error object][]                                  | Malformed request body (missing attributes, wrong types, etc.) 
|\n| [403][] | [JSON API error object][]                                  | Forbidden - not available for public providers                 |\n| [404][] | [JSON API error object][]                                  | User not authorized                                            |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                      | Type   | Default | Description                                                                                  |\n| ----------------------------- | ------ | ------- | -------------------------------------------------------------------------------------------- |\n| `data.type`                   | string |         | Must be `\"gpg-keys\"`.                                                                        |\n| `data.attributes.namespace`   | string |         | The namespace of the provider. Must be the same as the `organization_name` for the provider. |\n| `data.attributes.ascii-armor` | string |         | A valid gpg-key string.                                                                      
|\n\n### Sample Payload\n\n```json\n{\n  "data": {\n    "type": "gpg-keys",\n    "attributes": {\n      "namespace": "hashicorp",\n      "ascii-armor": "-----BEGIN PGP PUBLIC KEY BLOCK-----\\n\\nmQINB...=txfz\\n-----END PGP PUBLIC KEY BLOCK-----\\n"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header "Authorization: Bearer $TOKEN" \\\n  --header "Content-Type: application\/vnd.api+json" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/registry\/private\/v2\/gpg-keys\n```\n\n### Sample Response\n\n```json\n{\n  "data": {\n    "type": "gpg-keys",\n    "id": "23",\n    "attributes": {\n      "ascii-armor": "-----BEGIN PGP PUBLIC KEY BLOCK-----\\n\\nmQINB...=txfz\\n-----END PGP PUBLIC KEY BLOCK-----\\n",\n      "created-at": "2022-02-11T19:16:59Z",\n      "key-id": "32966F3FB5AC1129",\n      "namespace": "hashicorp",\n      "source": "",\n      "source-url": null,\n      "trust-signature": "",\n      "updated-at": "2022-02-11T19:16:59Z"\n    },\n    "links": {\n      "self": "\/v2\/gpg-keys\/23"\n    }\n  }\n}\n```\n\n## Get GPG Key\n\n`GET \/api\/registry\/:registry_name\/v2\/gpg-keys\/:namespace\/:key_id`\n\n### Parameters\n\n| Parameter            | Description                                          |\n| -------------------- | ---------------------------------------------------- |\n| `:registry_name`     | Must be `private`.                                   |\n| `:namespace`         | The namespace of the provider scoped to the GPG key. |\n| `:key_id`            | The id of the GPG key.                               
|\n\nGets the content of a GPG key.\n\n| Status  | Response                                                   | Reason                                                         |\n| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"gpg-keys\"`)                 | Successfully fetched GPG key                                   |\n| [403][] | [JSON API error object][]                                  | Forbidden - not available for public providers                 |\n| [404][] | [JSON API error object][]                                  | GPG key not found or user not authorized                       |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request GET \\\n  https:\/\/app.terraform.io\/api\/registry\/private\/v2\/gpg-keys\/hashicorp\/32966F3FB5AC1129\n```\n\n### Sample Response\n```json\n{\n  \"data\": {\n    \"type\": \"gpg-keys\",\n    \"id\": \"2\",\n    \"attributes\": {\n      \"ascii-armor\": \"-----BEGIN PGP PUBLIC KEY BLOCK-----\\n\\nmQINB...=txfz\\n-----END PGP PUBLIC KEY BLOCK-----\\n\",\n      \"created-at\": \"2022-02-24T17:07:25Z\",\n      \"key-id\": \"32966F3FB5AC1129\",\n      \"namespace\": \"hashicorp\",\n      \"source\": \"\",\n      \"source-url\": null,\n      \"trust-signature\": \"\",\n      \"updated-at\": \"2022-02-24T17:07:25Z\"\n    },\n    \"links\": {\n      \"self\": \"\/v2\/gpg-keys\/2\"\n    }\n  }\n}\n```\n\n## Update a GPG Key\n\n`PATCH \/api\/registry\/:registry_name\/v2\/gpg-keys\/:namespace\/:key_id`\n\n### Parameters\n\n| Parameter            | Description                                          |\n| -------------------- | ---------------------------------------------------- |\n| `:registry_name`     | Must be `private`.                                   
|\n| `:namespace`         | The namespace of the provider scoped to the GPG key. |\n| `:key_id`            | The id of the GPG key.                               |\n\nUpdates the specified GPG key. Only the `namespace` attribute can be updated, and `namespace` has to match an `organization` the user has permission to access.\n\n| Status  | Response                                                   | Reason                                                         |\n| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"gpg-keys\"`)                 | Successfully updates a GPG key                                 |\n| [422][] | [JSON API error object][]                                  | Malformed request body (missing attributes, wrong types, etc.) |\n| [403][] | [JSON API error object][]                                  | Forbidden - not available for public providers                 |\n| [404][] | [JSON API error object][]                                  | GPG key not found or user not authorized                       |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                      | Type   | Default | Description                                                                                  |\n| ----------------------------- | ------ | ------- | -------------------------------------------------------------------------------------------- |\n| `data.type`                   | string |         | Must be `\"gpg-keys\"`.                                                                        |\n| `data.attributes.namespace`   | string |         | The namespace of the provider. Must be the same as the `organization_name` for the provider. 
|\n\n### Sample Payload\n\n```json\n{\n  "data": {\n    "type": "gpg-keys",\n    "attributes": {\n      "namespace": "new-namespace"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header "Authorization: Bearer $TOKEN" \\\n  --header "Content-Type: application\/vnd.api+json" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/registry\/private\/v2\/gpg-keys\/hashicorp\/32966F3FB5AC1129\n```\n\n### Sample Response\n\n```json\n{\n  "data": {\n    "type": "gpg-keys",\n    "id": "2",\n    "attributes": {\n      "ascii-armor": "-----BEGIN PGP PUBLIC KEY BLOCK-----\\n\\nmQINB...=txfz\\n-----END PGP PUBLIC KEY BLOCK-----\\n",\n      "created-at": "2022-02-24T17:07:25Z",\n      "key-id": "32966F3FB5AC1129",\n      "namespace": "new-namespace",\n      "source": "",\n      "source-url": null,\n      "trust-signature": "",\n      "updated-at": "2022-02-24T17:12:10Z"\n    },\n    "links": {\n      "self": "\/v2\/gpg-keys\/2"\n    }\n  }\n}\n```\n\n## Delete a GPG Key\n\n`DELETE \/api\/registry\/:registry_name\/v2\/gpg-keys\/:namespace\/:key_id`\n\n### Parameters\n\n| Parameter            | Description                                          |\n| -------------------- | ---------------------------------------------------- |\n| `:registry_name`     | Must be `private`.                                   |\n| `:namespace`         | The namespace of the provider scoped to the GPG key. |\n| `:key_id`            | The id of the GPG key.                               
|\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Status  | Response                                                   | Reason                                                         |\n| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: "gpg-keys"`)                 | Successfully deletes a GPG key                                 |\n| [422][] | [JSON API error object][]                                  | Malformed request body (missing attributes, wrong types, etc.) |\n| [403][] | [JSON API error object][]                                  | Forbidden - not available for public providers                 |\n| [404][] | [JSON API error object][]                                  | GPG key not found or user not authorized                       |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header "Authorization: Bearer $TOKEN" \\\n  --header "Content-Type: application\/vnd.api+json" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/registry\/private\/v2\/gpg-keys\/hashicorp\/32966F3FB5AC1129\n```","site":"terraform","answers_cleaned":"
                                                                                         data type                      string             Must be   gpg keys                                                                                data attributes namespace      string             The namespace of the provider  Must be the same as the  organization name  for the provider         Sample Payload     json      data          type    gpg keys        attributes            namespace    new namespace                        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json     https   app terraform io api registry private v2 gpg keys hashicorp 32966F3FB5AC1129          Sample Response     json      data          type    gpg keys        id    2        attributes            ascii armor         BEGIN PGP PUBLIC KEY BLOCK      n nmQINB    txfz n     END PGP PUBLIC KEY BLOCK      n          created at    2022 02 24T17 07 25Z          key id    32966F3FB5AC1129          namespace    new name          source              source url   null         trust signature              updated at    2022 02 24T17 12 10Z              links            self     v2 gpg keys 2                      Delete a GPG Key   DELETE  api registry  registry name v2 gpg keys  namespace  key id       Parameters    Parameter              Description                                                                                                                                registry name        Must be  private                                           namespace            The namespace of the provider scoped to the GPG key        key id               The id of the GPG key                                    permissions citation    intentionally unused   keep for maintainers    Status    Response                                                     Reason                                 
                                                                                                                                                                       201       JSON API document      type   gpg keys                      Successfully deletes a GPG key                                      422       JSON API error object                                       Malformed request body  missing attributes  wrong types  etc        403       JSON API error object                                       Forbidden   not available for public providers                      404       JSON API error object                                       GPG key not found or user not authorized                              Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE       data  payload json     https   app terraform io api registry private v2 gpg keys hashicorp 32966F3FB5AC1129    "}
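The four GPG-key operations above follow one URL pattern and a JSON:API payload shape. As an informal illustration (not part of the official docs), the sketch below builds the method/URL/payload triples for each call; the helper names, `BASE` constant, and the idea of returning triples for a separate HTTP client are assumptions for this example.

```python
# Hypothetical helpers that assemble requests for the private registry
# GPG-key endpoints. They only build (method, url, payload) triples;
# sending them (with "Authorization: Bearer $TOKEN" and
# "Content-Type: application/vnd.api+json" headers) is left to any
# HTTP client of your choice.

BASE = "https://app.terraform.io/api/registry/private/v2/gpg-keys"

def add_gpg_key(namespace: str, ascii_armor: str):
    """POST /api/registry/private/v2/gpg-keys"""
    payload = {
        "data": {
            "type": "gpg-keys",
            "attributes": {"namespace": namespace, "ascii-armor": ascii_armor},
        }
    }
    return ("POST", BASE, payload)

def get_gpg_key(namespace: str, key_id: str):
    """GET /api/registry/private/v2/gpg-keys/:namespace/:key_id"""
    return ("GET", f"{BASE}/{namespace}/{key_id}", None)

def update_gpg_key(namespace: str, key_id: str, new_namespace: str):
    """PATCH — only the namespace attribute can be updated."""
    payload = {
        "data": {
            "type": "gpg-keys",
            "attributes": {"namespace": new_namespace},
        }
    }
    return ("PATCH", f"{BASE}/{namespace}/{key_id}", payload)

def delete_gpg_key(namespace: str, key_id: str):
    """DELETE /api/registry/private/v2/gpg-keys/:namespace/:key_id"""
    return ("DELETE", f"{BASE}/{namespace}/{key_id}", None)
```

For example, `update_gpg_key("hashicorp", "32966F3FB5AC1129", "new-namespace")` yields the same PATCH request as the curl sample above.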
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Providers API Docs HCP Terraform Use the registry providers endpoint to curate providers in your private registry List create get and delete providers using the HTTP API","answers":"---\npage_title: Providers - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/registry-providers` endpoint to curate providers in your private registry. List, create, get, and delete providers using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Registry Providers API\n\nYou can add publicly curated providers from the [Terraform Registry](https:\/\/registry.terraform.io\/) and custom, private providers to your 
HCP Terraform private registry. The private registry stores a pointer to public providers so that you can view their data from within HCP Terraform. This lets you clearly designate all of the providers that are recommended for the organization and makes them centrally accessible.\n\n\nAll members of an organization can view and use both public and private providers, but you need [owners team](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-owners) or [Manage Private Registry](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-private-registry) permissions to add, update, or delete them in the private registry.\n\n## HCP Terraform Registry Implementation\n\nFor publicly curated providers, the HCP Terraform Registry acts as a proxy to the [Terraform Registry](https:\/\/registry.terraform.io) for the following:\n\n- The public registry discovery endpoints have the path prefix provided in the [discovery document](\/terraform\/registry\/api-docs#service-discovery) which is currently `\/api\/registry\/public\/v1`.\n- [Authentication](\/terraform\/cloud-docs\/api-docs#authentication) is handled the same as all other HCP Terraform endpoints.\n\n## List Terraform Registry Providers for an Organization\n\n`GET \/organizations\/:organization_name\/registry-providers`\n\n### Parameters\n\n| Parameter            | Description                                                    |\n| -------------------- | -------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to list available providers from. 
|\n\nLists the providers included in the private registry for the specified organization.\n\n| Status  | Response                                             | Reason                                                     |\n| ------- | ---------------------------------------------------- | ---------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"registry-providers\"`) | Success                                                    |\n| [404][] | [JSON API error object][]                            | Providers not found or user unauthorized to perform action |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter            | Description                                                                                                                                            |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `q`                  | **Optional.** A search query string.  Providers are searchable by both their name and their namespace fields.                                          |\n| `filter[field name]` | **Optional.** If specified, restricts results to those with the matching field name value. Valid values are `registry_name`, and `organization_name`. |\n| `page[number]`       | **Optional.** If omitted, the endpoint will return the first page.                                                                                     |\n| `page[size]`         | **Optional.** If omitted, the endpoint will return 20 registry providers per page.                                                                     
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-providers\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"prov-kwt1cBiX2SdDz38w\",\n      \"type\": \"registry-providers\",\n      \"attributes\": {\n        \"name\": \"aws\",\n        \"namespace\": \"my-organization\",\n        \"created-at\": \"2021-04-07T19:01:18.528Z\",\n        \"updated-at\": \"2021-04-07T19:01:19.863Z\",\n        \"registry-name\": \"public\",\n        \"permissions\": {\n          \"can-delete\": true\n        }\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/organizations\/my-organization\/registry-providers\/public\/my-organization\/aws\"\n      }\n    },\n    {\n      \"id\": \"prov-PopQnMtYDCcd3PRX\",\n      \"type\": \"registry-providers\",\n      \"attributes\": {\n        \"name\": \"aurora\",\n        \"namespace\": \"my-organization\",\n        \"created-at\": \"2021-04-07T19:04:41.375Z\",\n        \"updated-at\": \"2021-04-07T19:04:42.828Z\",\n        \"registry-name\": \"public\",\n        \"permissions\": {\n          \"can-delete\": true\n        }\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/organizations\/my-organization\/registry-providers\/public\/my-organization\/aurora\"\n      }\n    },\n    ...,\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-providers?page%5Bnumber%5D=1&page%5Bsize%5D=6\",\n    \"first\": 
\"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-providers?page%5Bnumber%5D=1&page%5Bsize%5D=6\",\n    \"prev\": null,\n    \"next\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-providers?page%5Bnumber%5D=2&page%5Bsize%5D=6\",\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-providers?page%5Bnumber%5D=29&page%5Bsize%5D=6\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 6,\n      \"prev-page\": null,\n      \"next-page\": 2,\n      \"total-pages\": 29,\n      \"total-count\": 169\n    }\n  }\n}\n```\n\n## Create a Provider\n\n`POST \/organizations\/:organization_name\/registry-providers`\n\nUse this endpoint to create both public and private providers:\n\n- **Public providers:** The public provider record will be available in the organization's registry provider list immediately after creation. You cannot create versions for public providers; you must use the versions available on the Terraform Registry.\n- **Private providers:** The private provider record will be available in the organization's registry provider list immediately after creation, but you must [create a version and upload release assets](\/terraform\/cloud-docs\/registry\/publish-providers#publishing-a-provider-and-creating-a-version) before consumers can use it. The private registry does not automatically update private providers when you release new versions. 
You must add each new version with the [Create a Provider Version](\/terraform\/cloud-docs\/api-docs\/private-registry\/provider-versions-platforms#create-a-provider-version) endpoint.\n\n\n### Parameters\n\n| Parameter            | Description                                                                                                                                                                                                |\n| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to create a provider in. The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. |\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n\n| Status  | Response                                             | Reason                                                         |\n| ------- | ---------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"registry-providers\"`) | Successfully published provider                                |\n| [422][] | [JSON API error object][]                            | Malformed request body (missing attributes, wrong types, etc.) |\n| [403][] | [JSON API error object][]                            | Forbidden - public provider curation disabled                  |\n| [404][] | [JSON API error object][]                            | User not authorized                                            |\n\n### Request Body\n\n~> **Important:** For private providers, you must also create a version, a platform, and upload release assets before consumers can use the provider. 
Refer to [Publishing a Private Provider](\/terraform\/cloud-docs\/registry\/publish-providers) for more details.\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                        | Type   | Default | Description                                                                                                  |\n| ------------------------------- | ------ | ------- | ------------------------------------------------------------------------------------------------------------ |\n| `data.type`                     | string |         | Must be `\"registry-providers\"`.                                                                              |\n| `data.attributes.name`          | string |         | The name of the provider.                                                                                    |\n| `data.attributes.namespace`     | string |         | The namespace of the provider. For private providers this is the same as the `:organization_name` parameter. |\n| `data.attributes.registry-name` | string |         | Whether this is a publicly maintained provider or private. Must be either `public` or `private`.             
|\n\n### Sample Payload (Private Provider)\n\n```json\n{\n  \"data\": {\n    \"type\": \"registry-providers\",\n    \"attributes\": {\n      \"name\": \"aws\",\n      \"namespace\": \"hashicorp\",\n      \"registry-name\": \"private\"\n    }\n  }\n}\n```\n\n### Sample Payload (Public Provider)\n\n```json\n{\n  \"data\": {\n    \"type\": \"registry-providers\",\n    \"attributes\": {\n      \"name\": \"aws\",\n      \"namespace\": \"hashicorp\",\n      \"registry-name\": \"public\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-providers\n```\n\n### Sample Response (Private Provider)\n\n```json\n{\n  \"data\": {\n    \"id\": \"prov-cmEmLstBfjNNA9F3\",\n    \"type\": \"registry-providers\",\n    \"attributes\": {\n      \"name\": \"aws\",\n      \"namespace\": \"hashicorp\",\n      \"registry-name\": \"private\",\n      \"created-at\": \"2022-02-11T19:16:59.533Z\",\n      \"updated-at\": \"2022-02-11T19:16:59.533Z\",\n      \"permissions\": {\n        \"can-delete\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"hashicorp\",\n          \"type\": \"organizations\"\n        }\n      },\n      \"versions\": {\n        \"data\": [],\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\"\n    }\n  }\n}\n```\n\n### Sample Response (Public Provider)\n\n```json\n{\n  \"data\": {\n    \"id\": \"prov-fZn7uHu99ZCpAKZJ\",\n    \"type\": \"registry-providers\",\n    \"attributes\": {\n      \"name\": \"aws\",\n      \"namespace\": 
\"hashicorp\",\n      \"registry-name\": \"public\",\n      \"created-at\": \"2020-07-09T19:36:56.288Z\",\n      \"updated-at\": \"2020-07-09T19:36:56.288Z\",\n      \"permissions\": {\n        \"can-delete\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/my-organization\/registry-providers\/public\/hashicorp\/aws\"\n    }\n  }\n}\n```\n\n## Get a Provider\n\n`GET \/organizations\/:organization_name\/registry-providers\/:registry_name\/:namespace\/:name`\n\n### Parameters\n\n| Parameter            | Description                                                                                                  |\n| -------------------- | ------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization the provider belongs to.                                                        |\n| `:registry_name`     | Whether this is a publicly maintained provider or private. Must be either `public` or `private`.             |\n| `:namespace`         | The namespace of the provider. For private providers this is the same as the `:organization_name` parameter. |\n| `:name`              | The provider name.                                                                                           
|\n\n| Status  | Response                                             | Reason                                                    |\n| ------- | ---------------------------------------------------- | --------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"registry-providers\"`) | Success                                                   |\n| [403][] | [JSON API error object][]                            | Forbidden - public provider curation disabled             |\n| [404][] | [JSON API error object][]                            | Provider not found or user unauthorized to perform action |\n\n### Sample Request (Private Provider)\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\n```\n\n### Sample Request (Public Provider)\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-providers\/public\/hashicorp\/aws\n```\n### Sample Response (Private Provider)\n\n```json\n{\n  \"data\": {\n    \"id\": \"prov-cmEmLstBfjNNA9F3\",\n    \"type\": \"registry-providers\",\n    \"attributes\": {\n      \"name\": \"aws\",\n      \"namespace\": \"hashicorp\",\n      \"created-at\": \"2022-02-11T19:16:59.533Z\",\n      \"updated-at\": \"2022-02-11T19:16:59.533Z\",\n      \"registry-name\": \"private\",\n      \"permissions\": {\n        \"can-delete\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"hashicorp\",\n          \"type\": \"organizations\"\n        }\n      },\n      \"versions\": {\n        \"data\": [\n          {\n            \"id\": \"provver-y5KZUsSBRLV9zCtL\",\n            
\"type\": \"registry-provider-versions\"\n          }\n        ],\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\"\n    }\n  }\n}\n```\n\n### Sample Response (Public Provider)\n\n```json\n{\n  \"data\": {\n    \"id\": \"prov-fZn7uHu99ZCpAKZJ\",\n    \"type\": \"registry-providers\",\n    \"attributes\": {\n      \"name\": \"aws\",\n      \"namespace\": \"hashicorp\",\n      \"registry-name\": \"public\",\n      \"created-at\": \"2020-07-09T19:36:56.288Z\",\n      \"updated-at\": \"2020-07-09T20:16:20.538Z\",\n      \"permissions\": {\n        \"can-delete\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/my-organization\/registry-providers\/public\/hashicorp\/aws\"\n    }\n  }\n}\n```\n\n## Delete a Provider\n\n`DELETE \/organizations\/:organization_name\/registry-providers\/:registry_name\/:namespace\/:name`\n\n### Parameters\n\n| Parameter            | Description                                                                                                                                                                                                  |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to delete a provider from. The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. 
|\n| `:registry_name`     | Whether this is a publicly maintained provider or private. Must be either `public` or `private`.                                                                                                             |\n| `:namespace`         | The namespace of the provider that will be deleted.                                                                                                                                                          |\n| `:name`              | The name of the provider that will be deleted.                                                                                                                                                               |\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Status  | Response                  | Reason                                                      |\n| ------- | ------------------------- | ----------------------------------------------------------- |\n| [204][] | No Content                | Success                                                     |\n| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled               |\n| [404][] | [JSON API error object][] | Provider not found or user not authorized to perform action |\n\n\n### Sample Request (Private Provider)\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\n```\n\n### Sample Request (Public Provider)\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-providers\/public\/hashicorp\/aws\n```","site":"terraform","answers_cleaned":"    page title  Providers   API Docs   HCP 
platform  and upload release assets before consumers can use the provider  Refer to  Publishing a Private Provider   terraform cloud docs registry publish providers  for more details   This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                          Type     Default   Description                                                                                                                                                                                                                                                                             data type                        string             Must be   registry providers                                                                                      data attributes name             string             The name of the provider                                                                                          data attributes namespace        string             The namespace of the provider  For private providers this is the same as the   organization name  parameter       data attributes registry name    string             Whether this is a publicly maintained provider or private  Must be either  public  or  private                      Sample Payload  Private Provider      json      data          type    registry providers        attributes            name    aws          namespace    hashicorp          registry name    private                       Sample Payload  Public Provider      json      data          type    registry providers        attributes            name    aws          namespace    hashicorp          registry name    public                       Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 organizations my 
organization registry providers          Sample Response  Private Provider      json      data          id    prov cmEmLstBfjNNA9F3        type    registry providers        attributes            name    aws          namespace    hashicorp          registry name    private          created at    2022 02 11T19 16 59 533Z          updated at    2022 02 11T19 16 59 533Z          permissions              can delete   true                     relationships            organization              data                id    hashicorp              type    organizations                            versions              data                links                related     api v2 organizations hashicorp registry providers private hashicorp aws                                links            self     api v2 organizations hashicorp registry providers private hashicorp aws                       Sample Response  Public Provider      json      data          id    prov fZn7uHu99ZCpAKZJ        type    registry providers        attributes            name    aws          namespace    hashicorp          registry name    public          created at    2020 07 09T19 36 56 288Z          updated at    2020 07 09T19 36 56 288Z          permissions              can delete   true                     relationships            organization              data                id    my organization              type    organizations                                links            self     api v2 organizations my organization registry providers public hashicorp aws                      Get a Provider   GET  organizations  organization name registry providers  registry name  namespace  name       Parameters    Parameter              Description                                                                                                                                                                                                                                                organization name    The name 
of the organization the provider belongs to                                                               registry name        Whether this is a publicly maintained provider or private  Must be either  public  or  private                     namespace            The namespace of the provider  For private providers this is the same as the   organization name  parameter        name                 The provider name                                                                                                 Status    Response                                               Reason                                                                                                                                                                                        200       JSON API document      type   registry providers      Success                                                        403       JSON API error object                                 Forbidden   public provider curation disabled                  404       JSON API error object                                 Provider not found or user unauthorized to perform action        Sample Request  Private Provider      shell curl       request GET       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations hashicorp registry providers private hashicorp aws          Sample Request  Public Provider      shell curl       request GET       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations my organization registry providers public hashicorp aws         Sample Response  Private Provider      json      data          id    prov cmEmLstBfjNNA9F3        type    registry providers        attributes            name    aws          namespace    hashicorp          created at    2022 02 11T19 16 59 533Z          updated at    2022 02 11T19 16 59 533Z   
       registry name    private          permissions              can delete   true                     relationships            organization              data                id    hashicorp              type    organizations                            versions              data                              id    provver y5KZUsSBRLV9zCtL                type    registry provider versions                                  links                related     api v2 organizations hashicorp registry providers private hashicorp aws                                links            self     api v2 organizations hashicorp registry providers private hashicorp aws                       Sample Response  Public Provider      json      data          id    prov fZn7uHu99ZCpAKZJ        type    registry providers        attributes            name    aws          namespace    hashicorp          registry name    public          created at    2020 07 09T19 36 56 288Z          updated at    2020 07 09T20 16 20 538Z          permissions              can delete   true                     relationships            organization              data                id    my organization              type    organizations                                links            self     api v2 organizations my organization registry providers public hashicorp aws                      Delete a Provider   DELETE  organizations  organization name registry providers  registry name  namespace  name       Parameters    Parameter              Description                                                                                                                                                                                                                                                                                                                                                                                                                                                organization name    The name of the 
organization to delete a provider from  The organization must already exist  and the token authenticating the API request must belong to the  owners  team or a member of the  owners  team        registry name        Whether this is a publicly maintained provider or private  Must be either  public  or  private                                                                                                                     namespace            The namespace of the provider that will be deleted                                                                                                                                                                 name                 The name of the provider that will be deleted                                                                                                                                                                    permissions citation    intentionally unused   keep for maintainers    Status    Response                    Reason                                                                                                                                                                 204      No Content                  Success                                                          403       JSON API error object      Forbidden   public provider curation disabled                    404       JSON API error object      Provider not found or user not authorized to perform action         Sample Request  Private Provider      shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 organizations hashicorp registry providers private hashicorp aws          Sample Request  Public Provider      shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 organizations my organization registry 
providers public hashicorp aws    "}
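The "Create a Provider" request above is straightforward to script. Below is a minimal sketch using only Python's standard library; the helper names (`create_provider_payload`, `create_provider_request`, `API_BASE`) are illustrative, not part of any HashiCorp SDK, while the endpoint path, headers, and body shape follow the documentation above.

```python
import json
import os
import urllib.request

# Base URL for HCP Terraform's v2 API, per the docs above.
API_BASE = "https://app.terraform.io/api/v2"


def create_provider_payload(name: str, namespace: str, registry_name: str) -> dict:
    """Build the JSON:API body for POST /organizations/:organization_name/registry-providers."""
    if registry_name not in ("public", "private"):
        raise ValueError("registry_name must be 'public' or 'private'")
    return {
        "data": {
            "type": "registry-providers",
            "attributes": {
                "name": name,
                "namespace": namespace,
                "registry-name": registry_name,
            },
        }
    }


def create_provider_request(org: str, token: str, payload: dict) -> urllib.request.Request:
    """Assemble (but do not send) the authenticated POST request."""
    return urllib.request.Request(
        url=f"{API_BASE}/organizations/{org}/registry-providers",
        data=json.dumps(payload).encode("utf-8"),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/vnd.api+json",
        },
    )


if __name__ == "__main__":
    payload = create_provider_payload("aws", "hashicorp", "private")
    req = create_provider_request("my-organization", os.environ.get("TOKEN", ""), payload)
    print(req.get_method(), req.full_url)
```

To actually issue the call, pass the assembled request to `urllib.request.urlopen(req)` with a valid `TOKEN`; a `201` response carries the provider record shown in the sample responses.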
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 page title Modules API Docs HCP Terraform 201 https developer mozilla org en US docs Web HTTP Status 201 Use the registry modules endpoint to manage modules published to an organization s private registry List get create publish and delete modules and versions using the HTTP API","answers":"---\npage_title: Modules - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/registry-modules` endpoint to manage modules published to an organization's private registry. List, get, create, publish, and delete modules and versions using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Registry Modules API\n\n-> **Note:** Public Module Curation is only available in HCP Terraform. 
Where applicable, the `registry_name` parameter must be `private` for Terraform Enterprise.\n\n## HCP Terraform Registry Implementation\n\nThe HCP Terraform Module Registry implements the [Registry standard API](\/terraform\/registry\/api-docs) for consuming\/exposing private modules. Refer to the [Module Registry HTTP API](\/terraform\/registry\/api-docs) to perform the following:\n\n- Browse available modules\n- Search modules by keyword\n- List available versions for a specific module\n- Download source code for a specific module version\n- List latest version of a module for all providers\n- Get the latest version for a specific module provider\n- Get a specific module\n- Download the latest version of a module\n\nFor publicly curated modules, the HCP Terraform Module Registry acts as a proxy to the [Terraform Registry](https:\/\/registry.terraform.io) for the following:\n\n- List available versions for a specific module\n- Get a specific module\n- Get the latest version for a specific module provider\n\nThe HCP Terraform Module Registry endpoints differ from the Module Registry endpoints in the following ways:\n\n- The `:namespace` parameter should be replaced with the organization name for private modules.\n- The private module registry discovery endpoints have the path prefix provided in the [discovery document](\/terraform\/registry\/api-docs#service-discovery) which is currently `\/api\/registry\/v1`.\n- The public module registry discovery endpoints have the path prefix provided in the [discovery document](\/terraform\/registry\/api-docs#service-discovery) which is currently `\/api\/registry\/public\/v1`.\n- [Authentication](\/terraform\/cloud-docs\/api-docs#authentication) is handled the same as all other HCP Terraform endpoints.\n\n### Sample Registry Request (private module)\n\nList available versions for the `consul` module for the `aws` provider on the module registry published from the GitHub organization `my-gh-repo-org`:\n\n```shell\n$ curl 
https:\/\/registry.terraform.io\/v1\/modules\/my-gh-repo-org\/consul\/aws\/versions\n```\n\nThe same request for the same module and provider on the HCP Terraform module registry for the `my-cloud-org` organization:\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/registry\/v1\/modules\/my-cloud-org\/consul\/aws\/versions\n```\n\n### Sample Proxy Request (public module)\n\nList available versions for the `consul` module for the `aws` provider on the module registry published from the Github organization `my-gh-repo-org`:\n\n```shell\n$ curl https:\/\/registry.terraform.io\/v1\/modules\/my-gh-repo-org\/consul\/aws\/versions\n```\n\nThe same request for the same module and provider on the HCP Terraform module registry:\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/registry\/public\/v1\/modules\/my-gh-repo-org\/consul\/aws\/versions\n```\n\n## List Registry Modules for an Organization\n\n`GET \/organizations\/:organization_name\/registry-modules`\n\n| Parameter            | Description                                                  |\n| -------------------- | ------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to list available modules from. |\n\nLists the modules that are available to a given organization. 
This includes the full list of publicly curated and private modules and is filterable.\n\n| Status  | Response                                           | Reason                                                   |\n| ------- | -------------------------------------------------- | -------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"registry-modules\"`) | The request was successful                               |\n| [404][] | [JSON API error object][]                          | Modules not found or user unauthorized to perform action |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter            | Description                                                                                                                                                        |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `q`                  | **Optional.** A search query string.  Modules are searchable by name, namespace, provider fields.                                                                  |\n| `filter[field name]` | **Optional.** If specified, restricts results to those with the matching field name value.  Valid values are `registry_name`, `provider`, and `organization_name`. |\n| `page[number]`       | **Optional.** If omitted, the endpoint will return the first page.                                                                                                 |\n| `page[size]`         | **Optional.** If omitted, the endpoint will return 20 registry modules per page.                                                                                   
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-modules\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"mod-kwt1cBiX2SdDz38w\",\n      \"type\": \"registry-modules\",\n      \"attributes\": {\n        \"name\": \"api-gateway\",\n        \"namespace\": \"my-organization\",\n        \"provider\": \"alicloud\",\n        \"status\": \"setup_complete\",\n        \"version-statuses\": [\n          {\n            \"version\": \"1.1.0\",\n            \"status\": \"ok\"\n          }\n        ],\n        \"created-at\": \"2021-04-07T19:01:18.528Z\",\n        \"updated-at\": \"2021-04-07T19:01:19.863Z\",\n        \"registry-name\": \"private\",\n        \"permissions\": {\n          \"can-delete\": true,\n          \"can-resync\": true,\n          \"can-retry\": true\n        }\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/organizations\/my-organization\/registry-modules\/private\/my-organization\/api-gateway\/alicloud\"\n      }\n    },\n    {\n      \"id\": \"mod-PopQnMtYDCcd3PRX\",\n      \"type\": \"registry-modules\",\n      \"attributes\": {\n        \"name\": \"aurora\",\n        \"namespace\": \"my-organization\",\n        \"provider\": \"aws\",\n        \"status\": \"setup_complete\",\n        \"version-statuses\": [\n          {\n            \"version\": \"4.1.0\",\n            \"status\": \"ok\"\n          }\n        ],\n        \"created-at\": \"2021-04-07T19:04:41.375Z\",\n        \"updated-at\": \"2021-04-07T19:04:42.828Z\",\n        \"registry-name\": \"private\",\n        \"permissions\": {\n          \"can-delete\": true,\n          \"can-resync\": true,\n          
\"can-retry\": true\n        }\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"my-organization\",\n            \"type\": \"organizations\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/organizations\/my-organization\/registry-modules\/private\/my-organization\/aurora\/aws\"\n      }\n    },\n    ...,\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-modules?page%5Bnumber%5D=1&page%5Bsize%5D=6\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-modules?page%5Bnumber%5D=1&page%5Bsize%5D=6\",\n    \"prev\": null,\n    \"next\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-modules?page%5Bnumber%5D=2&page%5Bsize%5D=6\",\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-modules?page%5Bnumber%5D=29&page%5Bsize%5D=6\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 6,\n      \"prev-page\": null,\n      \"next-page\": 2,\n      \"total-pages\": 29,\n      \"total-count\": 169\n    }\n  }\n}\n```\n\n## Publish a Private Module from a VCS\n\n~> **Deprecation warning**: the following endpoint `POST \/registry-modules` is replaced by the below endpoint and will be removed from future versions of the API!\n\n`POST \/organizations\/:organization_name\/registry-modules\/vcs`\n\n| Parameter            | Description                                                                                                                                                                                              |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| 
`:organization_name` | The name of the organization to create a module in. The organization must already exist, and the token authenticating the API request must belong to a team or team member with the **Manage modules** permission enabled. |\n\nPublishes a new registry private module from a VCS repository, with module versions managed automatically by the repository's tags. The publishing process will fetch all tags in the source repository that look like [SemVer](https:\/\/semver.org\/) versions with optional 'v' prefix. For each version, the tag is cloned and the config parsed to populate module details (input and output variables, readme, submodules, etc.). The [Module Registry Requirements](\/terraform\/registry\/modules\/publish#requirements) define additional requirements on naming, standard module structure and tags for releases.\n\n| Status  | Response                                           | Reason                                                         |\n| ------- | -------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"registry-modules\"`) | Successfully published module                                  |\n| [422][] | [JSON API error object][]                          | Malformed request body (missing attributes, wrong types, etc.) 
|\n| [404][] | [JSON API error object][]                          | User not authorized                                            |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                      | Type    | Default | Description                                                                                                                                                      |\n| --------------------------------------------- | ------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                                   | string  |         | Must be `\"registry-modules\"`.                                                                                                                                    |\n| `data.attributes.vcs-repo.identifier`         | string  |         | The repository from which to ingress the configuration.                                                                                                          |\n| `data.attributes.vcs-repo.oauth-token-id`     | string  |         | The VCS Connection (OAuth Connection + Token) to use as identified. Get this ID from the [oauth-tokens](\/terraform\/cloud-docs\/api-docs\/oauth-tokens) endpoint. You can not specify this value if `github-app-installation-id` is specified. |\n| `data.attributes.vcs-repo.github-app-installation-id` | string |  | The VCS Connection GitHub App Installation to use. Find this ID on the account settings page. Requires previously authorizing the GitHub App and generating a user-to-server token. Manage the token from **Account Settings** within HCP Terraform. You can not specify this value if `oauth-token-id` is specified.                                                                 
                                                                                     |\n| `data.attributes.vcs-repo.display_identifier` | string  |         | The display identifier for the repository. For most VCS providers outside of Bitbucket Cloud, this identifier matches the `data.attributes.vcs-repo.identifier` string.  |\n| `data.attributes.no-code`                     | boolean |         | Allows you to enable or disable the no-code publishing workflow for a module.  |\n| `data.attributes.vcs-repo.branch`             | string  |         | The repository branch to publish the module from if you are using the branch-based publishing workflow. If omitted, the module will be published using the tag-based publishing workflow.  |\n\nA VCS repository identifier is a reference to a VCS repository in the format `:org\/:repo`, where `:org` and `:repo` refer to the organization, or project key for Bitbucket Data Center, and repository in your VCS provider. The format for Azure DevOps is `:org\/:project\/_git\/:repo`.\n\nThe OAuth Token ID identifies the VCS connection, and therefore the organization, that the module will be created in.\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"vcs-repo\": {\n        \"identifier\":\"lafentres\/terraform-aws-my-module\",\n        \"oauth-token-id\":\"ot-hmAyP66qk2AMVdbJ\",\n        \"display_identifier\":\"lafentres\/terraform-aws-my-module\",\n        \"branch\": \"main\"\n      },\n      \"no-code\": true\n    },\n    \"type\":\"registry-modules\"\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-modules\/vcs\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"mod-fZn7uHu99ZCpAKZJ\",\n    \"type\": \"registry-modules\",\n    
\"attributes\": {\n      \"name\": \"my-module\",\n      \"namespace\": \"my-organization\",\n      \"registry-name\": \"private\",\n      \"provider\": \"aws\",\n      \"status\": \"pending\",\n      \"version-statuses\": [],\n      \"created-at\": \"2020-07-09T19:36:56.288Z\",\n      \"updated-at\": \"2020-07-09T19:36:56.288Z\",\n      \"vcs-repo\": {\n        \"branch\": \"\",\n        \"ingress-submodules\": true,\n        \"identifier\": \"lafentres\/terraform-aws-my-module\",\n        \"display-identifier\": \"lafentres\/terraform-aws-my-module\",\n        \"oauth-token-id\": \"ot-hmAyP66qk2AMVdbJ\",\n        \"webhook-url\": \"https:\/\/app.terraform.io\/webhooks\/vcs\/a12b3456...\"\n      },\n      \"permissions\": {\n        \"can-delete\": true,\n        \"can-resync\": true,\n        \"can-retry\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/my-organization\/registry-modules\/private\/my-organization\/my-module\/aws\"\n    }\n  }\n}\n```\n\n## Create a Module (with no VCS connection)\n\n`POST \/organizations\/:organization_name\/registry-modules`\n\n| Parameter            | Description                                                                                                                                                                                              |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to create a module in. 
The organization must already exist, and the token authenticating the API request must belong to a team or team member with the **Manage modules** permission enabled. |\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nCreates a new registry module without a backing VCS repository.\n\n#### Private modules\n\nAfter creating a module, a version must be created and uploaded in order to be usable. Modules created this way do not automatically update with new versions; instead, you must explicitly create and upload each new version with the [Create a Module Version](#create-a-module-version) endpoint.\n\n#### Public modules\n\nWhen created, the public module record will be available in the organization's registry module list. You cannot create versions for public modules as they are maintained in the public registry.\n\n| Status  | Response                                           | Reason                                                         |\n| ------- | -------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"registry-modules\"`) | Successfully published module                                  |\n| [422][] | [JSON API error object][]                          | Malformed request body (missing attributes, wrong types, etc.) 
|\n| [403][] | [JSON API error object][]                          | Forbidden - public module curation disabled                    |\n| [404][] | [JSON API error object][]                          | User not authorized                                            |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                        | Type    | Default | Description                                                                                                                                                                                                      |\n| ------------------------------- | ------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                     | string  |         | Must be `\"registry-modules\"`.                                                                                                                                                                                    |\n| `data.attributes.name`          | string  |         | The name of this module. May contain alphanumeric characters, with dashes and underscores allowed in non-leading or trailing positions. Maximum length is 64 characters.                                         |\n| `data.attributes.provider`      | string  |         | Specifies the Terraform provider that this module is used for. May contain lowercase alphanumeric characters. Maximum length is 64 characters.                                                                   |\n| `data.attributes.namespace`     | string  |         | The namespace of this module. Cannot be set for private modules. 
May contain alphanumeric characters, with dashes and underscores allowed in non-leading or trailing positions. Maximum length is 64 characters. |\n| `data.attributes.registry-name` | string  |         | Indicates whether this is a publicly maintained module or private. Must be either `public` or `private`.                                                                                                         |\n| `data.attributes.no-code`       | boolean |         | Allows you to enable or disable the no-code publishing workflow for a module.\n\n### Sample Payload (private module)\n\n```json\n{\n  \"data\": {\n    \"type\": \"registry-modules\",\n    \"attributes\": {\n      \"name\": \"my-module\",\n      \"provider\": \"aws\",\n      \"registry-name\": \"private\",\n      \"no-code\": true\n    }\n  }\n}\n```\n\n### Sample Payload (public module)\n\n```json\n{\n  \"data\": {\n    \"type\": \"registry-modules\",\n    \"attributes\": {\n      \"name\": \"vpc\",\n      \"namespace\": \"terraform-aws-modules\",\n      \"provider\": \"aws\",\n      \"registry-name\": \"public\",\n      \"no-code\": true\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-modules\n```\n\n### Sample Response (private module)\n\n```json\n{\n  \"data\": {\n    \"id\": \"mod-fZn7uHu99ZCpAKZJ\",\n    \"type\": \"registry-modules\",\n    \"attributes\": {\n      \"name\": \"my-module\",\n      \"namespace\": \"my-organization\",\n      \"registry-name\": \"private\",\n      \"provider\": \"aws\",\n      \"status\": \"pending\",\n      \"version-statuses\": [],\n      \"created-at\": \"2020-07-09T19:36:56.288Z\",\n      \"updated-at\": \"2020-07-09T19:36:56.288Z\",\n      \"permissions\": {\n        \"can-delete\": true,\n        \"can-resync\": 
true,\n        \"can-retry\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/my-organization\/registry-modules\/private\/my-organization\/my-module\/aws\"\n    }\n  }\n}\n```\n\n### Sample Response (public module)\n\n```json\n{\n  \"data\": {\n    \"id\": \"mod-fZn7uHu99ZCpAKZJ\",\n    \"type\": \"registry-modules\",\n    \"attributes\": {\n      \"name\": \"vpc\",\n      \"namespace\": \"terraform-aws-modules\",\n      \"registry-name\": \"public\",\n      \"provider\": \"aws\",\n      \"status\": \"pending\",\n      \"version-statuses\": [],\n      \"created-at\": \"2020-07-09T19:36:56.288Z\",\n      \"updated-at\": \"2020-07-09T19:36:56.288Z\",\n      \"permissions\": {\n        \"can-delete\": true,\n        \"can-resync\": true,\n        \"can-retry\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/my-organization\/registry-modules\/public\/terraform-aws-modules\/vpc\/aws\"\n    }\n  }\n}\n```\n\n## Create a Module Version\n\n~> **Deprecation warning**: the following endpoint `POST \/registry-modules\/:organization_name\/:name\/:provider\/versions` is replaced by the below endpoint and will be removed from future versions of the API!\n\n`POST \/organizations\/:organization_name\/registry-modules\/:registry_name\/:namespace\/:name\/:provider\/versions`\n\n| Parameter            | Description                                                                                                                                                                                              |\n| -------------------- | 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to create a module in. The organization must already exist, and the token authenticating the API request must belong to a team or team member with the **Manage modules** permission enabled. |\n| `:namespace`         | The namespace of the module for which the version is being created. For private modules this is the same as the `:organization_name` parameter.                                                          |\n| `:name`              | The name of the module for which the version is being created.                                                                                                                                           |\n| `:provider`          | The name of the provider for which the version is being created.                                                                                                                                         |\n| `:registry_name`     | Must be `private`.                                                                                                                                                                                       |\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nCreates a new registry module version. This endpoint only applies to private modules without a VCS repository and to VCS-linked branch-based modules. VCS-linked tag-based modules automatically create new versions for new tags. 
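The creation call chains naturally into the upload step described under "Add a Module Version". A minimal shell sketch, assuming `jq` is installed; the creation response is stubbed here with the documented sample response so the extraction logic is visible:

```shell
# In practice RESPONSE comes from the POST shown in the Sample Request, e.g.
#   RESPONSE="$(curl -s --header "Authorization: Bearer $TOKEN" ... /versions)"
# Here it is stubbed with the documented sample response.
RESPONSE='{"data":{"id":"modver-qjjF7ArLXJSWU3WU","type":"registry-module-versions","links":{"upload":"https://archivist.terraform.io/v1/object/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox..."}}}'

# Pull the one-time upload URL out of the creation response.
UPLOAD_URL="$(printf '%s' "$RESPONSE" | jq -r '.data.links.upload')"
echo "$UPLOAD_URL"

# Then upload the packaged module (see "Add a Module Version"):
#   curl --header "Content-Type: application/octet-stream" \
#        --request PUT --data-binary @module.tar.gz "$UPLOAD_URL"
```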
After creating the version for a non-VCS backed module, you should upload the module to the link that HCP Terraform returns.\n\n| Status  | Response                                                   | Reason                                                         |\n| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"registry-module-versions\"`) | Successfully published module version                          |\n| [422][] | [JSON API error object][]                                  | Malformed request body (missing attributes, wrong types, etc.) |\n| [403][] | [JSON API error object][]                                  | Forbidden - not available for public modules                   |\n| [404][] | [JSON API error object][]                                  | User not authorized                                            |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                  | Type   | Default | Description                           |\n| ---------------------------- | ------ | ------- | ---------------------------------------------------- |\n| `data.type`                  | string |         | Must be `\"registry-module-versions\"`.                |\n| `data.attributes.version`    | string |         | A valid semver version string.                       |\n| `data.attributes.commit-sha` | string |         | The commit SHA to use to create the module version.        
|\n\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"registry-module-versions\",\n    \"attributes\": {\n      \"version\": \"1.2.3\",\n      \"commit-sha\": \"abcdef12345\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-modules\/private\/my-organization\/my-module\/aws\/versions\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"modver-qjjF7ArLXJSWU3WU\",\n    \"type\": \"registry-module-versions\",\n    \"attributes\": {\n      \"source\": \"tfe-api\",\n      \"status\": \"pending\",\n      \"version\": \"1.2.3\",\n      \"created-at\": \"2018-09-24T20:47:20.931Z\",\n      \"updated-at\": \"2018-09-24T20:47:20.931Z\"\n    },\n    \"relationships\": {\n      \"registry-module\": {\n        \"data\": {\n          \"id\": \"1881\",\n          \"type\": \"registry-modules\"\n        }\n      }\n    },\n    \"links\": {\n      \"upload\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox...\"\n    }\n  }\n}\n```\n\n## Add a Module Version (Private Module)\n\n`PUT https:\/\/archivist.terraform.io\/v1\/object\/<UNIQUE OBJECT ID>`\n\n**The URL is provided in the `upload` links attribute in the `registry-module-versions` resource.**\n\n### Expected Archive Format\n\nHCP Terraform expects the module version uploaded to be a gzip tarball with the module in the root (not in a subdirectory).\n\nGiven the following folder structure:\n\n```\nterraform-null-test\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 examples\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 default\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 README.md\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 main.tf\n\u2514\u2500\u2500 main.tf\n```\n\nPackage the files in an archive format by running `tar zcvf 
module.tar.gz *` in the module's directory.\n\n```\n~$ cd terraform-null-test\nterraform-null-test$ tar zcvf module.tar.gz *\na README.md\na examples\na examples\/default\na examples\/default\/main.tf\na examples\/default\/README.md\na main.tf\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Content-Type: application\/octet-stream\" \\\n  --request PUT \\\n  --data-binary @module.tar.gz \\\n  https:\/\/archivist.terraform.io\/v1\/object\/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox...\n```\n\nAfter the registry module version is successfully parsed, its status will become `\"ok\"`.\n\n## Get a Module\n\n~> **Deprecation warning**: the following endpoint `GET \/registry-modules\/show\/:organization_name\/:name\/:provider` is replaced by the below endpoint and will be removed from future versions of the API!\n\n`GET \/organizations\/:organization_name\/registry-modules\/:registry_name\/:namespace\/:name\/:provider`\n\n### Parameters\n\n| Parameter            | Description                                                                                                 |\n| -------------------- | ----------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization the module belongs to.                                                         |\n| `:namespace`         | The namespace of the module. For private modules this is the name of the organization that owns the module. |\n| `:name`              | The module name.                                                                                            |\n| `:provider`          | The module provider. Must be lowercase alphanumeric.                                                        |\n| `:registry_name`     | Either `public` or `private`.                                                                               
|\n\n| Status  | Response                                           | Reason                                                  |\n| ------- | -------------------------------------------------- | ------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"registry-modules\"`) | The request was successful                              |\n| [403][] | [JSON API error object][]                          | Forbidden - public module curation disabled             |\n| [404][] | [JSON API error object][]                          | Module not found or user unauthorized to perform action |\n\n### Sample Request (private module)\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-modules\/private\/my-organization\/my-module\/aws\n```\n\n### Sample Request (public module)\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-modules\/public\/terraform-aws-modules\/vpc\/aws\n```\n\n### Sample Response (private module)\n\n```json\n{\n  \"data\": {\n    \"id\": \"mod-fZn7uHu99ZCpAKZJ\",\n    \"type\": \"registry-modules\",\n    \"attributes\": {\n      \"name\": \"my-module\",\n      \"provider\": \"aws\",\n      \"namespace\": \"my-organization\",\n      \"registry-name\": \"private\",\n      \"status\": \"setup_complete\",\n      \"version-statuses\": [\n        {\n          \"version\": \"1.0.0\",\n          \"status\": \"ok\"\n        }\n      ],\n      \"created-at\": \"2020-07-09T19:36:56.288Z\",\n      \"updated-at\": \"2020-07-09T20:16:20.538Z\",\n      \"vcs-repo\": {\n        \"branch\": \"\",\n        \"ingress-submodules\": true,\n        \"identifier\": 
\"lafentres\/terraform-aws-my-module\",\n        \"display-identifier\": \"lafentres\/terraform-aws-my-module\",\n        \"oauth-token-id\": \"ot-hmAyP66qk2AMVdbJ\",\n        \"webhook-url\": \"https:\/\/app.terraform.io\/webhooks\/vcs\/a12b3456...\"\n      },\n      \"permissions\": {\n        \"can-delete\": true,\n        \"can-resync\": true,\n        \"can-retry\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/my-organization\/registry-modules\/private\/my-organization\/my-module\/aws\"\n    }\n  }\n}\n```\n\n### Sample Response (public module)\n\n```json\n{\n  \"data\": {\n    \"id\": \"mod-fZn7uHu99ZCpAKZJ\",\n    \"type\": \"registry-modules\",\n    \"attributes\": {\n      \"name\": \"vpc\",\n      \"provider\": \"aws\",\n      \"namespace\": \"terraform-aws-modules\",\n      \"registry-name\": \"public\",\n      \"status\": \"setup_complete\",\n      \"version-statuses\": [],\n      \"created-at\": \"2020-07-09T19:36:56.288Z\",\n      \"updated-at\": \"2020-07-09T20:16:20.538Z\",\n      \"permissions\": {\n        \"can-delete\": true,\n        \"can-resync\": true,\n        \"can-retry\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/organizations\/my-organization\/registry-modules\/public\/terraform-aws-modules\/vpc\/aws\"\n    }\n  }\n}\n```\n## Update a Private Registry Module\n`PATCH \/organizations\/:organization_name\/registry-modules\/private\/:namespace\/:name\/:provider\/`\n\n### Parameters\n\n| Parameter            | Description                                                                                                     
                                                                                           |\n| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to update a module from. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |\n| `:namespace`         | The module namespace that the update affects. For private modules this is the name of the organization that owns the module.                                                                           |\n| `:name`              | The module name that the update affects.                                                                                                                                                               |\n| `:provider`          | The name of the provider of the module that is being updated.                                                                                                                                  
|\n\n### Request Body\n\nThese PATCH endpoints require a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                                      | Type           | Default          | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |\n|-----------------------------------------------|----------------|------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `data.type`                                   | string         |                  | Must be `\"registry-modules\"`.                                                                                                                                                                                                                                                                                                                                                                                                                                                         
|\n| `data.attributes.vcs-repo.branch`                        | string         | (previous value) | The repository branch that Terraform executes tests and publishes new versions from. This cannot be used with the `data.attributes.vcs-repo.tags` key.                                                                                                                                                                                     |\n| `data.attributes.vcs-repo.tags`               | boolean         | (previous value) | Whether the registry module should be tag-based. This cannot be used with the `data.attributes.vcs-repo.branch` key.                                                                                                                                                                                                                                  |\n| `data.attributes.test-config.tests-enabled`               | boolean         | (previous value) | Allows you to enable or disable tests for the module.                                                                                                                                                                                                                               
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"attributes\": {\n      \"vcs-repo\": {\n        \"branch\": \"main\",\n        \"tags\": false\n      },\n      \"test-config\": {\n        \"tests-enabled\": true\n      }\n    },\n    \"type\": \"registry-modules\"\n  }\n}\n```\n\n### Sample Request\n\n```shell\n$ curl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/registry-modules\/private\/my-organization\/registry-name\/registry-provider\/\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"mod-fZn7uHu99ZCpAKZJ\",\n    \"type\": \"registry-modules\",\n    \"attributes\": {\n      \"name\": \"my-module\",\n      \"namespace\": \"my-organization\",\n      \"registry-name\": \"private\",\n      \"provider\": \"aws\",\n      \"status\": \"pending\",\n      \"version-statuses\": [],\n      \"created-at\": \"2020-07-09T19:36:56.288Z\",\n      \"updated-at\": \"2020-07-09T19:36:56.288Z\",\n      \"vcs-repo\": {\n        \"branch\": \"main\",\n        \"ingress-submodules\": true,\n        \"identifier\": \"lafentres\/terraform-aws-my-module\",\n        \"display-identifier\": \"lafentres\/terraform-aws-my-module\",\n        \"oauth-token-id\": \"ot-hmAyP66qk2AMVdbJ\",\n        \"webhook-url\": \"https:\/\/app.terraform.io\/webhooks\/vcs\/a12b3456...\"\n      },\n      \"permissions\": {\n        \"can-delete\": true,\n        \"can-resync\": true,\n        \"can-retry\": true\n      },\n      \"test-config\": {\n        \"id\": \"tc-tcR6bxV5zE75Zb3B\",\n        \"tests-enabled\": true\n      }\n    },\n    \"relationships\": {\n      \"organization\": {\n        \"data\": {\n          \"id\": \"my-organization\",\n          \"type\": \"organizations\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": 
\"\/api\/v2\/organizations\/my-organization\/registry-modules\/private\/my-organization\/my-module\/aws\"\n    }\n  }\n}\n```\n\n\n## Delete a Module\n\n<div className=\"alert alert-warning\" role=\"alert\">\n  **Deprecation warning**: the following endpoints:\n\n- `POST \/registry-modules\/actions\/delete\/:organization_name\/:name\/:provider\/:version`\n- `POST \/registry-modules\/actions\/delete\/:organization_name\/:name\/:provider`\n- `POST \/registry-modules\/actions\/delete\/:organization_name\/:name`\n\nare replaced by the below endpoints and will be removed from future versions of the API!\n\n<\/div>\n\n- `DELETE \/organizations\/:organization_name\/registry-modules\/:registry_name\/:namespace\/:name\/:provider\/:version`\n- `DELETE \/organizations\/:organization_name\/registry-modules\/:registry_name\/:namespace\/:name\/:provider`\n- `DELETE \/organizations\/:organization_name\/registry-modules\/:registry_name\/:namespace\/:name`\n\n### Parameters\n\n| Parameter            | Description                                                                                                                                                                                                |\n| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to delete a module from. The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. |\n| `:namespace`         | The module namespace that the deletion will affect. For private modules this is the name of the organization that owns the module.                                                                         |\n| `:name`              | The module name that the deletion will affect.                          
                                                                                                                                   |\n| `:provider`          | If specified, the provider for the module that the deletion will affect.                                                                                                                                   |\n| `:version`           | If specified, the version for the module and provider that will be deleted.                                                                                                                                |\n| `:registry_name`     | Either `public` or `private`.                                                                                                                                                                              |\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\nWhen removing modules, there are three versions of the endpoint, depending on how many parameters are specified.\n\n- If all parameters (module namespace, name, provider, and version) are specified, the specified version for the given provider of the module is deleted.\n- If module namespace, name, and provider are specified, the specified provider for the given module is deleted along with all its versions.\n- If only module namespace and name are specified, the entire module is deleted.\n\nFor public modules, only the endpoint specifying the module namespace and name is valid. The other DELETE endpoints will 404.\nFor public modules, this only removes the record from the organization's HCP Terraform Registry and does not remove the public module from registry.terraform.io.\n\nIf a version deletion would leave a provider with no versions, the provider will be deleted. 
If a provider deletion would leave a module with no providers, the module will be deleted.

| Status  | Response                  | Reason                                                        |
| ------- | ------------------------- | ------------------------------------------------------------- |
| [204][] | No Content                | Success                                                       |
| [403][] | [JSON API error object][] | Forbidden - public module curation disabled                   |
| [404][] | [JSON API error object][] | Module, provider, or version not found or user not authorized |

### Sample Request (private module)

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws/2.0.0
```

### Sample Request (public module)

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/public/terraform-aws-modules/vpc/aws
```

---
page_title: Modules - API Docs - HCP Terraform
description: >-
  Use the `/registry-modules` endpoint to manage modules published to an
  organization's private registry. List, get, create, publish, and delete
  modules and versions using the HTTP API.
---

[200]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200
[201]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/201
[202]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/202
[204]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/204
[400]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
[401]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
[403]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[404]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
[409]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/409
[412]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/412
[422]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422
[429]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
[500]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500
[504]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504
[JSON API document]: /terraform/cloud-docs/api-docs#json-api-documents
[JSON API error object]: https://jsonapi.org/format/#error-objects

# Registry Modules API

-> **Note:** Public Module Curation is only available in HCP Terraform. Where applicable, the `registry_name` parameter must be `private` for Terraform Enterprise.

## HCP Terraform Registry Implementation

The HCP Terraform Module Registry implements the [Registry standard API](/terraform/registry/api-docs) for consuming/exposing private modules. Refer to the [Module Registry HTTP API](/terraform/registry/api-docs) to perform the following:

- Browse available modules
- Search modules by keyword
- List available versions for a specific module
- Download source code for a specific module version
- List latest version of a module for all providers
- Get the latest version for a specific module provider
- Get a specific module
- Download the latest version of a module

For publicly curated modules, the HCP Terraform Module Registry acts as a proxy to the [Terraform Registry](https://registry.terraform.io) for the following:

- List available versions for a specific module
- Get a specific module
- Get the latest version for a specific module provider

The HCP Terraform Module Registry endpoints differ from the Module Registry endpoints in the following ways:

- The `:namespace` parameter should be replaced with the organization name for private modules.
- The private module registry discovery endpoints have the path prefix provided in the [discovery document](/terraform/registry/api-docs#service-discovery), which is currently `/api/registry/v1`.
- The public module registry discovery endpoints have the path prefix provided in the [discovery document](/terraform/registry/api-docs#service-discovery), which is currently `/api/registry/public/v1`.
- [Authentication](/terraform/cloud-docs/api-docs#authentication) is handled the same as all other HCP Terraform endpoints.

### Sample Registry Request (private module)

List available versions for the `consul` module for the `aws` provider on the module registry published from the GitHub organization `my-gh-repo-org`:

```shell
curl https://registry.terraform.io/v1/modules/my-gh-repo-org/consul/aws/versions
```

The same request for the same module and provider on the HCP Terraform module registry for the `my-cloud-org` organization:

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/registry/v1/modules/my-cloud-org/consul/aws/versions
```

### Sample Proxy Request (public module)

List available versions for the `consul` module for the `aws` provider on the module registry published from the GitHub organization `my-gh-repo-org`:

```shell
curl https://registry.terraform.io/v1/modules/my-gh-repo-org/consul/aws/versions
```

The same request for the same module and provider on the HCP Terraform module registry:

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/registry/public/v1/modules/my-gh-repo-org/consul/aws/versions
```

## List Registry Modules for an Organization

`GET /organizations/:organization_name/registry-modules`

| Parameter            | Description                                                  |
| -------------------- | ------------------------------------------------------------ |
| `:organization_name` | The name of the organization to list available modules from. |

Lists the modules that are available to a given organization. This includes the full list of publicly curated and private modules and is filterable.

| Status  | Response                                           | Reason                                                   |
| ------- | -------------------------------------------------- | -------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "registry-modules"`) | The request was successful                               |
| [404][] | [JSON API error object][]                          | Modules not found or user unauthorized to perform action |

### Query Parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter            | Description                                                                                                                                                     |
| -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `q`                  | **Optional.** A search query string. Modules are searchable by name, namespace, and provider fields.                                                            |
| `filter[field name]` | **Optional.** If specified, restricts results to those with the matching field name value. Valid values are `registry_name`, `provider`, and `organization_name`. |
| `page[number]`       | **Optional.** If omitted, the endpoint will return the first page.                                                                                              |
| `page[size]`         | **Optional.** If omitted, the endpoint will return 20 registry modules per page.                                                                                |
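As an illustration of combining these parameters, the following sketch builds a filtered, paginated listing URL. The organization name and search term are placeholders, and the `curl` call is left commented out because it needs a real token:

```shell
# Combine search, filter, and pagination parameters, percent-encoding
# "[" as %5B and "]" as %5D in filter[...] and page[...].
BASE="https://app.terraform.io/api/v2/organizations/my-organization/registry-modules"
QUERY="q=vpc&filter%5Bregistry_name%5D=private&page%5Bsize%5D=10"
echo "${BASE}?${QUERY}"
# curl --header "Authorization: Bearer $TOKEN" "${BASE}?${QUERY}"
```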
### Sample Request

```shell
curl \
  --request GET \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/organizations/my-organization/registry-modules
```

### Sample Response

```json
{
  "data": [
    {
      "id": "mod-kwt1cBiX2SdDz38w",
      "type": "registry-modules",
      "attributes": {
        "name": "api-gateway",
        "namespace": "my-organization",
        "provider": "alicloud",
        "status": "setup_complete",
        "version-statuses": [
          {
            "version": "1.1.0",
            "status": "ok"
          }
        ],
        "created-at": "2021-04-07T19:01:18.528Z",
        "updated-at": "2021-04-07T19:01:19.863Z",
        "registry-name": "private",
        "permissions": {
          "can-delete": true,
          "can-resync": true,
          "can-retry": true
        }
      },
      "relationships": {
        "organization": {
          "data": {
            "id": "my-organization",
            "type": "organizations"
          }
        }
      },
      "links": {
        "self": "/api/v2/organizations/my-organization/registry-modules/private/my-organization/api-gateway/alicloud"
      }
    },
    {
      "id": "mod-PopQnMtYDCcd3PRX",
      "type": "registry-modules",
      "attributes": {
        "name": "aurora",
        "namespace": "my-organization",
        "provider": "aws",
        "status": "setup_complete",
        "version-statuses": [
          {
            "version": "4.1.0",
            "status": "ok"
          }
        ],
        "created-at": "2021-04-07T19:04:41.375Z",
        "updated-at": "2021-04-07T19:04:42.828Z",
        "registry-name": "private",
        "permissions": {
          "can-delete": true,
          "can-resync": true,
          "can-retry": true
        }
      },
      "relationships": {
        "organization": {
          "data": {
            "id": "my-organization",
            "type": "organizations"
          }
        }
      },
      "links": {
        "self": "/api/v2/organizations/my-organization/registry-modules/private/my-organization/aurora/aws"
      }
    }
  ],
  "links": {
    "self": "https://app.terraform.io/api/v2/organizations/my-organization/registry-modules?page%5Bnumber%5D=1&page%5Bsize%5D=6",
    "first": "https://app.terraform.io/api/v2/organizations/my-organization/registry-modules?page%5Bnumber%5D=1&page%5Bsize%5D=6",
    "prev": null,
    "next": "https://app.terraform.io/api/v2/organizations/my-organization/registry-modules?page%5Bnumber%5D=2&page%5Bsize%5D=6",
    "last": "https://app.terraform.io/api/v2/organizations/my-organization/registry-modules?page%5Bnumber%5D=29&page%5Bsize%5D=6"
  },
  "meta": {
    "pagination": {
      "current-page": 1,
      "page-size": 6,
      "prev-page": null,
      "next-page": 2,
      "total-pages": 29,
      "total-count": 169
    }
  }
}
```

## Publish a Private Module from a VCS

~> **Deprecation warning**: the following endpoint `POST /registry-modules` is replaced by the below endpoint and will be removed from future versions of the API!

`POST /organizations/:organization_name/registry-modules/vcs`

| Parameter            | Description                                                                                                                                                                                                             |
| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to create a module in. The organization must already exist, and the token authenticating the API request must belong to a team or team member with the "Manage modules" permission enabled. |

Publishes a new registry private module from a VCS repository, with module versions managed automatically by the repository's tags. The publishing process will fetch all tags in the source repository that look like [SemVer](https://semver.org/) versions with optional 'v' prefix. For each version, the tag is cloned and the config parsed to populate module details (input and output variables, readme, submodules, etc.). The [Module Registry Requirements](/terraform/registry/modules/publish#requirements) define additional requirements on naming, standard module structure, and tags for releases.

| Status  | Response                                           | Reason                                                         |
| ------- | -------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "registry-modules"`) | Successfully published module                                  |
| [422][] | [JSON API error object][]                          | Malformed request body (missing attributes, wrong types, etc.) |
| [404][] | [JSON API error object][]                          | User not authorized                                            |

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                                              | Type    | Default | Description                                                                                                                                                                                                                                                                                             |
| ----------------------------------------------------- | ------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type`                                           | string  |         | Must be `"registry-modules"`.                                                                                                                                                                                                                                                                            |
| `data.attributes.vcs-repo.identifier`                 | string  |         | The repository from which to ingress the configuration.                                                                                                                                                                                                                                                  |
| `data.attributes.vcs-repo.oauth-token-id`             | string  |         | The VCS Connection (OAuth Connection + Token) to use. Get this ID from the [oauth-tokens](/terraform/cloud-docs/api-docs/oauth-tokens) endpoint. You cannot specify this value if `github-app-installation-id` is specified.                                                                              |
| `data.attributes.vcs-repo.github-app-installation-id` | string  |         | The VCS Connection GitHub App Installation to use. Find this ID on the account settings page. Requires previously authorizing the GitHub App and generating a user-to-server token. Manage the token from **Account Settings** within HCP Terraform. You cannot specify this value if `oauth-token-id` is specified. |
| `data.attributes.vcs-repo.display_identifier`         | string  |         | The display identifier for the repository. For most VCS providers outside of Bitbucket Cloud, this identifier matches the `data.attributes.vcs-repo.identifier` string.                                                                                                                                   |
| `data.attributes.no-code`                             | boolean |         | Allows you to enable or disable the no-code publishing workflow for a module.                                                                                                                                                                                                                            |
| `data.attributes.vcs-repo.branch`                     | string  |         | The repository branch to publish the module from if you are using the branch-based publishing workflow. If omitted, the module will be published using the tag-based publishing workflow.                                                                                                                 |

A VCS repository identifier is a reference to a VCS repository in the format `:org/:repo`, where `:org` and `:repo` refer to the organization (or project key, for Bitbucket Data Center) and repository in your VCS provider. The format for Azure DevOps is `:org/:project/_git/:repo`.
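To tie the table above together, one way to assemble and sanity-check the request payload before POSTing it is shown below. The repository identifier and OAuth token ID are placeholders, and the final `curl` is left commented out since it needs a real token and organization:

```shell
# Write the publish payload described in the table above; all IDs are dummies.
cat > payload.json <<'EOF'
{
  "data": {
    "type": "registry-modules",
    "attributes": {
      "vcs-repo": {
        "identifier": "my-gh-org/terraform-aws-example",
        "oauth-token-id": "ot-XXXXXXXXXXXXXXXX",
        "display_identifier": "my-gh-org/terraform-aws-example"
      },
      "no-code": false
    }
  }
}
EOF
grep '"identifier"' payload.json   # quick check that the repo slug is set
# curl --header "Authorization: Bearer $TOKEN" \
#      --header "Content-Type: application/vnd.api+json" \
#      --request POST --data @payload.json \
#      https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/vcs
```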
The OAuth Token ID identifies the VCS connection, and therefore the organization, that the module will be created in.

### Sample Payload

```json
{
  "data": {
    "attributes": {
      "vcs-repo": {
        "identifier": "lafentres/terraform-aws-my-module",
        "oauth-token-id": "ot-hmAyP66qk2AMVdbJ",
        "display_identifier": "lafentres/terraform-aws-my-module",
        "branch": "main"
      },
      "no-code": true
    },
    "type": "registry-modules"
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/vcs
```

### Sample Response

```json
{
  "data": {
    "id": "mod-fZn7uHu99ZCpAKZJ",
    "type": "registry-modules",
    "attributes": {
      "name": "my-module",
      "namespace": "my-organization",
      "registry-name": "private",
      "provider": "aws",
      "status": "pending",
      "version-statuses": [],
      "created-at": "2020-07-09T19:36:56.288Z",
      "updated-at": "2020-07-09T19:36:56.288Z",
      "vcs-repo": {
        "branch": "",
        "ingress-submodules": true,
        "identifier": "lafentres/terraform-aws-my-module",
        "display-identifier": "lafentres/terraform-aws-my-module",
        "oauth-token-id": "ot-hmAyP66qk2AMVdbJ",
        "webhook-url": "https://app.terraform.io/webhooks/vcs/a12b3456"
      },
      "permissions": {
        "can-delete": true,
        "can-resync": true,
        "can-retry": true
      }
    },
    "relationships": {
      "organization": {
        "data": {
          "id": "my-organization",
          "type": "organizations"
        }
      }
    },
    "links": {
      "self": "/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws"
    }
  }
}
```

## Create a Module (with no VCS connection)

`POST /organizations/:organization_name/registry-modules`

| Parameter            | Description                                                                                                                                                                                                             |
| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to create a module in. The organization must already exist, and the token authenticating the API request must belong to a team or team member with the "Manage modules" permission enabled. |

[permissions-citation]: #intentionally-unused---keep-for-maintainers

Creates a new registry module without a backing VCS repository.

**Private modules:** After creating a module, a version must be created and uploaded in order to be usable. Modules created this way do not automatically update with new versions; instead, you must explicitly create and upload each new version with the [Create a Module Version](#create-a-module-version) endpoint.

**Public modules:** When created, the public module record will be available in the organization's registry module list. You cannot create versions for public modules as they are maintained in the public registry.

| Status  | Response                                           | Reason                                                         |
| ------- | -------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "registry-modules"`) | Successfully published module                                  |
| [422][] | [JSON API error object][]                          | Malformed request body (missing attributes, wrong types, etc.) |
| [403][] | [JSON API error object][]                          | Forbidden - public module curation disabled                    |
| [404][] | [JSON API error object][]                          | User not authorized                                            |

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                         | Type    | Default | Description                                                                                                                                                                                                     |
| -------------------------------- | ------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type`                      | string  |         | Must be `"registry-modules"`.                                                                                                                                                                                    |
| `data.attributes.name`           | string  |         | The name of this module. May contain alphanumeric characters, with dashes and underscores allowed in non-leading or trailing positions. Maximum length is 64 characters.                                         |
| `data.attributes.provider`       | string  |         | Specifies the Terraform provider that this module is used for. May contain lowercase alphanumeric characters. Maximum length is 64 characters.                                                                   |
| `data.attributes.namespace`      | string  |         | The namespace of this module. Cannot be set for private modules. May contain alphanumeric characters, with dashes and underscores allowed in non-leading or trailing positions. Maximum length is 64 characters. |
| `data.attributes.registry-name`  | string  |         | Indicates whether this is a publicly maintained module or private. Must be either `public` or `private`.                                                                                                         |
| `data.attributes.no-code`        | boolean |         | Allows you to enable or disable the no-code publishing workflow for a module.                                                                                                                                    |

### Sample Payload (private module)

```json
{
  "data": {
    "type": "registry-modules",
    "attributes": {
      "name": "my-module",
      "provider": "aws",
      "registry-name": "private",
      "no-code": true
    }
  }
}
```

### Sample Payload (public module)

```json
{
  "data": {
    "type": "registry-modules",
    "attributes": {
      "name": "vpc",
      "namespace": "terraform-aws-modules",
      "provider": "aws",
      "registry-name": "public",
      "no-code": true
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/my-organization/registry-modules
```

### Sample Response (private module)

```json
{
  "data": {
    "id": "mod-fZn7uHu99ZCpAKZJ",
    "type": "registry-modules",
    "attributes": {
      "name": "my-module",
      "namespace": "my-organization",
      "registry-name": "private",
      "provider": "aws",
      "status": "pending",
      "version-statuses": [],
      "created-at": "2020-07-09T19:36:56.288Z",
      "updated-at": "2020-07-09T19:36:56.288Z",
      "permissions": {
        "can-delete": true,
        "can-resync": true,
        "can-retry": true
      }
    },
    "relationships": {
      "organization": {
        "data": {
          "id": "my-organization",
          "type": "organizations"
        }
      }
    },
    "links": {
      "self": "/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws"
    }
  }
}
```

### Sample Response (public module)

```json
{
  "data": {
    "id": "mod-fZn7uHu99ZCpAKZJ",
    "type": "registry-modules",
    "attributes": {
      "name": "vpc",
      "namespace": "terraform-aws-modules",
      "registry-name": "public",
      "provider": "aws",
      "status": "pending",
      "version-statuses": [],
      "created-at": "2020-07-09T19:36:56.288Z",
      "updated-at": "2020-07-09T19:36:56.288Z",
      "permissions": {
        "can-delete": true,
        "can-resync": true,
        "can-retry": true
      }
    },
    "relationships": {
      "organization": {
        "data": {
          "id": "my-organization",
          "type": "organizations"
        }
      }
    },
    "links": {
      "self": "/api/v2/organizations/my-organization/registry-modules/public/terraform-aws-modules/vpc/aws"
    }
  }
}
```

## Create a Module Version

~> **Deprecation warning**: the following endpoint `POST /registry-modules/:organization_name/:name/:provider/versions` is replaced by the below endpoint and will be removed from future versions of the API!

`POST /organizations/:organization_name/registry-modules/:registry_name/:namespace/:name/:provider/versions`

| Parameter            | Description                                                                                                                                                                                                             |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `:organization_name` | The name of the organization to create a module in. The organization must already exist, and the token authenticating the API request must belong to a team or team member with the "Manage modules" permission enabled. |
| `:namespace`         | The namespace of the module for which the version is being created. For private modules this is the same as the `:organization_name` parameter.                                                                          |
| `:name`              | The name of the module for which the version is being created.                                                                                                                                                           |
| `:provider`          | The name of the provider for which the version is being created.                                                                                                                                                         |
| `:registry_name`     | Must be `private`.                                                                                                                                                                                                       |

[permissions-citation]: #intentionally-unused---keep-for-maintainers

Creates a new registry module version. This endpoint only applies to private modules without a VCS repository and VCS-linked branch-based modules; VCS-linked tag-based modules automatically create new versions for new tags. After creating the version for a non-VCS backed module, you should upload the module to the link that HCP Terraform returns.

| Status  | Response                                                   | Reason                                                         |
| ------- | ---------------------------------------------------------- | -------------------------------------------------------------- |
| [201][] | [JSON API document][] (`type: "registry-module-versions"`) | Successfully published module version                          |
| [422][] | [JSON API error object][]                                  | Malformed request body (missing attributes, wrong types, etc.) |
| [403][] | [JSON API error object][]                                  | Forbidden - not available for public modules                   |
| [404][] | [JSON API error object][]                                  | User not authorized                                            |
### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                      | Type   | Default | Description                                        |
| ----------------------------- | ------ | ------- | -------------------------------------------------- |
| `data.type`                   | string |         | Must be `"registry-module-versions"`.              |
| `data.attributes.version`     | string |         | A valid semver version string.                     |
| `data.attributes.commit-sha`  | string |         | The commit SHA to use to create the module version. |

### Sample Payload

```json
{
  "data": {
    "type": "registry-module-versions",
    "attributes": {
      "version": "1.2.3",
      "commit-sha": "abcdef12345"
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws/versions
```

### Sample Response

```json
{
  "data": {
    "id": "modver-qjjF7ArLXJSWU3WU",
    "type": "registry-module-versions",
    "attributes": {
      "source": "tfe-api",
      "status": "pending",
      "version": "1.2.3",
      "created-at": "2018-09-24T20:47:20.931Z",
      "updated-at": "2018-09-24T20:47:20.931Z"
    },
    "relationships": {
      "registry-module": {
        "data": {
          "id": "1881",
          "type": "registry-modules"
        }
      }
    },
    "links": {
      "upload": "https://archivist.terraform.io/v1/object/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox"
    }
  }
}
```

## Add a Module Version (Private Module)

`PUT https://archivist.terraform.io/v1/object/<UNIQUE OBJECT ID>`

**The URL is provided in the `upload` links attribute in the `registry-module-versions` resource.**

### Expected Archive Format

HCP Terraform expects the module version uploaded to be a gzip tarball with the module in the root (not in a subdirectory).

Given the following folder structure:

```
terraform-null-test
├── README.md
├── examples
│   └── default
│       ├── README.md
│       └── main.tf
└── main.tf
```

Package the files in an archive format by running `tar zcvf module.tar.gz *` in the module's directory:

```shell
~$ cd terraform-null-test
terraform-null-test$ tar zcvf module.tar.gz *
a README.md
a examples
a examples/default
a examples/default/main.tf
a examples/default/README.md
a main.tf
```

### Sample Request

```shell
curl \
  --header "Content-Type: application/octet-stream" \
  --request PUT \
  --data-binary @module.tar.gz \
  https://archivist.terraform.io/v1/object/dmF1bHQ6djE6NWJPbHQ4QjV4R1ox
```

After the registry module version is successfully parsed, its status will become `ok`.

## Get a Module

~> **Deprecation warning**: the following endpoint `GET /registry-modules/show/:organization_name/:name/:provider` is replaced by the below endpoint and will be removed from future versions of the API!

`GET /organizations/:organization_name/registry-modules/:registry_name/:namespace/:name/:provider`

### Parameters

| Parameter            | Description                                                                                                  |
| -------------------- | ------------------------------------------------------------------------------------------------------------ |
| `:organization_name` | The name of the organization the module belongs to.                                                          |
| `:namespace`         | The namespace of the module. For private modules this is the name of the organization that owns the module.  |
| `:name`              | The module name.                                                                                             |
| `:provider`          | The module provider. Must be lowercase alphanumeric.                                                         |
| `:registry_name`     | Either `public` or `private`.                                                                                |

| Status  | Response                                           | Reason                                                  |
| ------- | -------------------------------------------------- | ------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "registry-modules"`) | The request was successful                              |
| [403][] | [JSON API error object][]                          | Forbidden - public module curation disabled             |
| [404][] | [JSON API error object][]                          | Module not found or user unauthorized to perform action |

### Sample Request (private module)

```shell
curl \
  --request GET \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws
```

### Sample Request (public module)

```shell
curl \
  --request GET \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/organizations/my-organization/registry-modules/public/terraform-aws-modules/vpc/aws
```

### Sample Response (private module)

```json
{
  "data": {
    "id": "mod-fZn7uHu99ZCpAKZJ",
    "type": "registry-modules",
    "attributes": {
      "name": "my-module",
      "provider": "aws",
      "namespace": "my-organization",
      "registry-name": "private",
      "status": "setup_complete",
      "version-statuses": [
        {
          "version": "1.0.0",
          "status": "ok"
        }
      ],
      "created-at": "2020-07-09T19:36:56.288Z",
      "updated-at": "2020-07-09T20:16:20.538Z",
      "vcs-repo": {
        "branch": "",
        "ingress-submodules": true,
        "identifier": "lafentres/terraform-aws-my-module",
        "display-identifier": "lafentres/terraform-aws-my-module",
        "oauth-token-id": "ot-hmAyP66qk2AMVdbJ",
        "webhook-url": "https://app.terraform.io/webhooks/vcs/a12b3456"
      },
      "permissions": {
        "can-delete": true,
        "can-resync": true,
        "can-retry": true
      }
    },
    "relationships": {
      "organization": {
        "data": {
          "id": "my-organization",
          "type": "organizations"
        }
      }
    },
    "links": {
      "self": "/api/v2/organizations/my-organization/registry-modules/private/my-organization/my-module/aws"
    }
  }
}
```

### Sample Response (public module)

```json
{
  "data": {
    "id": "mod-fZn7uHu99ZCpAKZJ",
    "type": "registry-modules",
    "attributes": {
      "name": "vpc",
      "provider": "aws",
      "namespace": "terraform-aws-modules",
      "registry-name": "public",
      "status": "setup_complete",
      "version-statuses": [],
      "created-at": "2020-07-09T19:36:56.288Z",
      "updated-at": "2020-07-09T20:16:20.538Z",
      "permissions": {
        "can-delete": true,
        "can-resync": true,
        "can-retry": true
      }
    },
    "relationships": {
      "organization": {
        "data": {
          "id": "my-organization",
          "type": "organizations"
        }
      }
    },
    "links": {
      "self": "/api/v2/organizations/my-organization/registry-modules/public/terraform-aws-modules/vpc/aws"
    }
  }
}
```

## Update a Private Registry Module

`PATCH /organizations/:organization_name/registry-modules/private/:namespace/:name/:provider`

### Parameters

| Parameter            | Description                                                                                                                                                                            |
| -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to update a module from. The organization must already exist, and the token authenticating the API request must belong to the `owners` team or a member of the `owners` team. |
| `:namespace`         | The module namespace that the update affects. For private modules this is the name of the organization that owns the module.                                                            |
| `:name`              | The module name that the update affects.                                                                                                                                                |
| `:provider`          | The name of the provider of the module that is being updated.                                                                                                                           |

### Request Body

These PATCH endpoints require a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                                    | Type    | Default          | Description                                                                                                                                   |
| ------------------------------------------- | ------- | ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type`                                 | string  |                  | Must be `"registry-modules"`.                                                                                                                  |
| `data.attributes.vcs-repo.branch`           | string  | (previous value) | The repository branch that Terraform executes tests and publishes new versions from. This cannot be used with the `data.attributes.vcs-repo.tags` key. |
| `data.attributes.vcs-repo.tags`             | boolean | (previous value) | Whether the registry module should be tag-based. This cannot be used with the `data.attributes.vcs-repo.branch` key.                           |
| `data.attributes.test-config.tests-enabled` | boolean | (previous value) | Allows you to enable or disable tests for the module.                                                                                          |
                                                                                                                           Sample Payload     json      data          attributes            vcs repo              branch    main            tags   false                 test config              tests enabled   true                     type    registry modules                 Sample Request     shell   curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request PATCH       data  payload json     https   app terraform io api v2 organizations my organization registry modules private my organization registry name registry provider           Sample Response     json      data          id    mod fZn7uHu99ZCpAKZJ        type    registry modules        attributes            name    my module          namespace    my organization          registry name    private          provider    aws          status    pending          version statuses              created at    2020 07 09T19 36 56 288Z          updated at    2020 07 09T19 36 56 288Z          vcs repo              branch    main            ingress submodules   true           identifier    lafentres terraform aws my module            display identifier    lafentres terraform aws my module            oauth token id    ot hmAyP66qk2AMVdbJ            webhook url    https   app terraform io webhooks vcs a12b3456                     permissions              can delete   true           can resync   true           can retry   true                 test config              id    tc tcR6bxV5zE75Zb3B            tests enabled   true                     relationships            organization              data                id    my organization              type    organizations                                links            self     api v2 organizations my organization registry modules private my organization my module aws                       Delete a Module   div className  
alert alert warning  role  alert       Deprecation warning    the following endpoints      POST  registry modules actions delete  organization name  name  provider  version     POST  registry modules actions delete  organization name  name  provider     POST  registry modules actions delete  organization name  name   are replaced by the below endpoints and will be removed from future versions of the API     div      DELETE  organizations  organization name registry modules  registry name  namespace  name  provider  version     DELETE  organizations  organization name registry modules  registry name  namespace  name  provider     DELETE  organizations  organization name registry modules  registry name  namespace  name       Parameters    Parameter              Description                                                                                                                                                                                                                                                                                                                                                                                                                                            organization name    The name of the organization to delete a module from  The organization must already exist  and the token authenticating the API request must belong to the  owners  team or a member of the  owners  team        namespace            The module namespace that the deletion will affect  For private modules this is the name of the organization that owns the module                                                                                name                 The module name that the deletion will affect                                                                                                                                                                    provider             If specified  the provider for the module that the deletion will affect         
                                                                                                                                 version              If specified  the version for the module and provider that will be deleted                                                                                                                                       registry name        Either  public  or  private                                                                                                                                                                                    permissions citation    intentionally unused   keep for maintainers  When removing modules  there are three versions of the endpoint  depending on how many parameters are specified     If all parameters  module namespace  name  provider  and version  are specified  the specified version for the given provider of the module is deleted    If module namespace  name  and provider are specified  the specified provider for the given module is deleted along with all its versions    If only module namespace and name are specified  the entire module is deleted   For public modules  only the the endpoint specifying the module namespace and name is valid  The other DELETE endpoints will 404  For public modules  this only removes the record from the organization s HCP Terraform Registry and does not remove the public module from registry terraform io   If a version deletion would leave a provider with no versions  the provider will be deleted  If a provider deletion would leave a module with no providers  the module will be deleted     Status    Response                    Reason                                                                                                                                                                     204      No Content                  Success                                                            403       JSON API error object      Forbidden   public 
module curation disabled                        404       JSON API error object      Module  provider  or version not found or user not authorized        Sample Request  private module      shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 organizations my organization registry modules private my organization my module aws 2 0 0          Sample Request  public module      shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 organizations my organization registry modules public terraform aws modules vpc aws    "}
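The three DELETE endpoint shapes above differ only in how much of the path you supply; each extra segment narrows the scope of the deletion. A minimal sketch of how the paths nest (all names and the version are placeholders for illustration, and each `echo` stands in for a `curl --request DELETE` call with the usual `Authorization` and `Content-Type` headers):

```shell
# Placeholder values -- substitute your own organization, module, and provider.
ORG="my-organization"          # :organization_name
NAMESPACE="my-organization"    # :namespace (same as the org for private modules)
NAME="my-module"               # :name
PROVIDER="aws"                 # :provider
VERSION="2.0.0"                # :version

# Common prefix shared by all three deletion endpoints.
BASE="https://app.terraform.io/api/v2/organizations/$ORG/registry-modules/private/$NAMESPACE/$NAME"

# Delete one version of one provider:
echo "DELETE $BASE/$PROVIDER/$VERSION"

# Delete a provider and all of its versions:
echo "DELETE $BASE/$PROVIDER"

# Delete the entire module:
echo "DELETE $BASE"
```

Because deletions cascade upward (removing the last version removes the provider, and removing the last provider removes the module), the narrowest endpoint that matches what you want gone is usually the safest choice.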
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 Use the registry providers to manage private providers in your private registry Create get and delete versions and create get and delete platforms using the HTTP API page title Private Provider Versions and Platforms API Docs HCP Terraform","answers":"---\npage_title: Private Provider Versions and Platforms - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/registry-providers` to manage private providers in your private registry. Create, get, and delete versions, and create, get, and delete platforms using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Private Provider Versions and Platforms 
API\n\nThese endpoints are only relevant to private providers. When you [publish a private provider](\/terraform\/cloud-docs\/registry\/publish-providers) to the HCP Terraform private registry, you must also create at least one version and at least one platform for that version before consumers can use the provider in configurations. Unlike the public Terraform Registry, the private registry does not automatically upload new releases. You must manually add new provider versions and the associated release files.\n\nAll members of an organization can view and use both public and private providers, but you need [owners team](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-owners) or [Manage Private Registry](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-private-registry) permissions to add, update, or delete provider versions and platforms in the private registry.\n\n## Create a Provider Version\n\n`POST \/organizations\/:organization_name\/registry-providers\/:registry_name\/:namespace\/:name\/versions`\n\nThe private registry does not automatically update private providers when you release new versions. You must use this endpoint to add each new version. Consumers cannot use new versions until you upload all [required release files](\/terraform\/cloud-docs\/registry\/publish-providers#release-files) and [Create a Provider Platform](#create-a-provider-platform).\n\n### Parameters\n\n| Parameter            | Description                                                                                                                                                                                                |\n| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to create a provider in. 
The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. |\n| `:registry_name`     | Must be `private`.                                                                                                                                                                                         |\n| `:namespace`         | The namespace of the provider for which the version is being created. For private providers this is the same as the `:organization_name` parameter.                                                        |\n| `:name`              | The name of the provider for which the version is being created.                                                                                                                                           |\n\n\nCreates a new registry provider version. This endpoint only applies to private providers.\n\n| Status  | Response                                                     | Reason                                                         |\n| ------- | ----------------------------------------------------------   | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"registry-provider-versions\"`) | Success                                                        |\n| [422][] | [JSON API error object][]                                    | Malformed request body (missing attributes, wrong types, etc.) 
|\n| [403][] | [JSON API error object][]                                    | Forbidden - not available for public providers                 |\n| [404][] | [JSON API error object][]                                    | User not authorized                                            |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                    | Type   | Default | Description                                                       |\n| --------------------------- | ------ | ------- | ----------------------------------------------------------------- |\n| `data.type`                 | string |         | Must be `\"registry-provider-versions\"`.                           |\n| `data.attributes.version`   | string |         | A valid semver version string.                                    |\n| `data.attributes.key-id`    | string |         | A valid gpg-key string.                                           |\n| `data.attributes.protocols` | array  |         | An array of Terraform provider API versions that this version supports. Must be one or all of the following values `[\"4.0\",\"5.0\",\"6.0\"]`. |\n\n-> **Note:** Only Terraform 0.13 and later support third-party provider registries, and that Terraform version requires provider API version 5.0 or later. 
So you do not need to list major versions 4.0 or earlier in the `protocols` attribute.\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"registry-provider-versions\",\n    \"attributes\": {\n      \"version\": \"3.1.1\",\n      \"key-id\": \"32966F3FB5AC1129\",\n      \"protocols\": [\"5.0\"]\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"provver-y5KZUsSBRLV9zCtL\",\n    \"type\": \"registry-provider-versions\",\n    \"attributes\": {\n      \"version\": \"3.1.1\",\n      \"created-at\": \"2022-02-11T19:16:59.876Z\",\n      \"updated-at\": \"2022-02-11T19:16:59.876Z\",\n      \"key-id\": \"32966F3FB5AC1129\",\n      \"protocols\": [\"5.0\"],\n      \"permissions\": {\n        \"can-delete\": true,\n        \"can-upload-asset\": true\n      },\n      \"shasums-uploaded\": false,\n      \"shasums-sig-uploaded\": false\n    },\n    \"relationships\": {\n      \"registry-provider\": {\n        \"data\": {\n          \"id\": \"prov-cmEmLstBfjNNA9F3\",\n          \"type\": \"registry-providers\"\n        }\n      },\n      \"platforms\": {\n        \"data\": [],\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\/3.1.1\/platforms\"\n        }\n      }\n    },\n    \"links\": {\n      \"shasums-upload\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b...\",\n      \"shasums-sig-upload\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b...\"\n    }\n  }\n}\n\n```\n\n## Get All Versions for a Single Provider\n\n`GET 
\/organizations\/:organization_name\/registry-providers\/:registry_name\/:namespace\/:name\/versions\/`\n\n### Parameters\n\n| Parameter            | Description                                                                                  |\n| -------------------- | -------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization the provider belongs to.                                        |\n| `:registry_name`     | Must be `private`.                                                                           |\n| `:namespace`         | The namespace of the provider. Must be the same as the `organization_name` for the provider. |\n| `:name`              | The provider name.                                                                           |\n\n| Status  | Response                                             | Reason                                                    |\n| ------- | ---------------------------------------------------- | --------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"registry-providers\"`) | Success                                                   |\n| [403][] | [JSON API error object][]                            | Forbidden - public provider curation disabled             |\n| [404][] | [JSON API error object][]                            | Provider not found or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"provver-y5KZUsSBRLV9zCtL\",\n      \"type\": \"registry-provider-versions\",\n      \"attributes\": {\n        \"version\": 
\"3.1.1\",\n        \"created-at\": \"2022-02-11T19:16:59.876Z\",\n        \"updated-at\": \"2022-02-11T19:16:59.876Z\",\n        \"key-id\": \"32966F3FB5AC1129\",\n        \"protocols\": [\"5.0\"],\n        \"permissions\": {\n          \"can-delete\": true,\n          \"can-upload-asset\": true\n        },\n        \"shasums-uploaded\": true,\n        \"shasums-sig-uploaded\": true\n      },\n      \"relationships\": {\n        \"registry-provider\": {\n          \"data\": {\n            \"id\": \"prov-cmEmLstBfjNNA9F3\",\n            \"type\": \"registry-providers\"\n          }\n        },\n        \"platforms\": {\n          \"data\": [\n            {\n              \"id\": \"provpltfrm-GSHhNzptr9s3WoLD\",\n              \"type\": \"registry-provider-platforms\"\n            },\n            {\n              \"id\": \"provpltfrm-A1PHitiM2KkKpVoM\",\n              \"type\": \"registry-provider-platforms\"\n            },\n            {\n              \"id\": \"provpltfrm-BLJWvWyJ2QMs525k\",\n              \"type\": \"registry-provider-platforms\"\n            },\n            {\n              \"id\": \"provpltfrm-qQYosUguetYtXGzJ\",\n              \"type\": \"registry-provider-platforms\"\n            },\n            {\n              \"id\": \"provpltfrm-pjDHFN46y193bS7t\",\n              \"type\": \"registry-provider-platforms\"\n            }\n          ],\n          \"links\": {\n            \"related\": \"\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\/3.1.1\/platforms\"\n          }\n        }\n      },\n      \"links\": {\n        \"shasums-download\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b...\",\n        \"shasums-sig-download\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b...\"\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n  
  \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 1\n    }\n  }\n}\n```\n\n**Note:** The `shasums-uploaded` and `shasums-sig-uploaded` properties will be false if those files have not been uploaded to Archivist. In this case, instead of including links to `shasums-download` and `shasums-sig-download`, the response will include upload links (`shasums-upload` and `shasums-sig-upload`).\n\n## Get a Version\n\n`GET \/organizations\/:organization_name\/registry-providers\/:registry_name\/:namespace\/:name\/versions\/:version`\n\n### Parameters\n\n| Parameter            | Description                                                                                  |\n| -------------------- | -------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization the provider belongs to.                                        |\n| `:registry_name`     | Must be `private`.                                                                           |\n| `:namespace`         | The namespace of the provider. Must be the same as the `organization_name` for the provider. |\n| `:name`              | The provider name.                                                                           |\n| `:version`           | The version of the provider being created to which different platforms can be added.         
|\n\n| Status  | Response                                             | Reason                                                    |\n| ------- | ---------------------------------------------------- | --------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"registry-providers\"`) | Success                                                   |\n| [403][] | [JSON API error object][]                            | Forbidden - public provider curation disabled             |\n| [404][] | [JSON API error object][]                            | Provider not found or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\/3.1.1\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"provver-y5KZUsSBRLV9zCtL\",\n    \"type\": \"registry-provider-versions\",\n    \"attributes\": {\n      \"version\": \"3.1.1\",\n      \"created-at\": \"2022-02-11T19:16:59.876Z\",\n      \"updated-at\": \"2022-02-11T19:16:59.876Z\",\n      \"key-id\": \"32966F3FB5AC1129\",\n      \"protocols\": [\"5.0\"],\n      \"permissions\": {\n        \"can-delete\": true,\n        \"can-upload-asset\": true\n      },\n      \"shasums-uploaded\": true,\n      \"shasums-sig-uploaded\": true\n    },\n    \"relationships\": {\n      \"registry-provider\": {\n        \"data\": {\n          \"id\": \"prov-cmEmLstBfjNNA9F3\",\n          \"type\": \"registry-providers\"\n        }\n      },\n      \"platforms\": {\n        \"data\": [\n          {\n            \"id\": \"provpltfrm-GSHhNzptr9s3WoLD\",\n            \"type\": \"registry-provider-platforms\"\n          },\n          {\n            \"id\": \"provpltfrm-A1PHitiM2KkKpVoM\",\n            \"type\": 
\"registry-provider-platforms\"\n          },\n          {\n            \"id\": \"provpltfrm-BLJWvWyJ2QMs525k\",\n            \"type\": \"registry-provider-platforms\"\n          },\n          {\n            \"id\": \"provpltfrm-qQYosUguetYtXGzJ\",\n            \"type\": \"registry-provider-platforms\"\n          },\n          {\n            \"id\": \"provpltfrm-pjDHFN46y193bS7t\",\n            \"type\": \"registry-provider-platforms\"\n          }\n        ],\n        \"links\": {\n          \"related\": \"\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\/3.1.1\/platforms\"\n        }\n      }\n    },\n    \"links\": {\n      \"shasums-download\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b...\",\n      \"shasums-sig-download\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b...\"\n    }\n  }\n}\n```\n\n**Note:** `shasums-uploaded` and `shasums-sig-uploaded` will be false if those files haven't been uploaded to Archivist yet. In this case, instead of including links to `shasums-download` and `shasums-sig-download`, the response will include upload links (`shasums-upload` and `shasums-sig-upload`).\n\n## Delete a Version\n\n`DELETE \/organizations\/:organization_name\/registry-providers\/:registry_name\/:namespace\/:name\/versions\/:provider_version`\n\n### Parameters\n\n| Parameter            | Description                                                                                                                                                                                                          |\n| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization to delete a provider version from. 
The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. |\n| `:registry_name`     | Must be `private`.                                                                                                                                                                                                   |\n| `:namespace`         | The namespace of the provider for which the version is being deleted. For private providers this is the same as the `:organization_name` parameter.                                                                  |\n| `:name`              | The name of the provider for which the version is being deleted.                                                                                                                                                   |\n| `:version`           | The version for the provider that will be deleted along with its corresponding platforms.                                                                                                                            |\n\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Status  | Response                  | Reason                                                      |\n| ------- | ------------------------- | ----------------------------------------------------------- |\n| [204][] | No Content                | Success                                                     |\n| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled               |\n| [404][] | [JSON API error object][] | Provider not found or user not authorized to perform action |\n\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  
https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\/3.1.1\n```\n\n\n## Create a Provider Platform\n\n`POST \/organizations\/:organization_name\/registry-providers\/:registry_name\/:namespace\/:name\/versions\/:version\/platforms`\n\nPlatforms are binaries that allow the provider to run on a particular operating system and architecture combination (e.g., Linux and AMD64). GoReleaser creates binaries automatically when you [create a release on GitHub](\/terraform\/registry\/providers\/publishing#creating-a-github-release) or [create a release locally](\/terraform\/registry\/providers\/publishing#using-goreleaser-locally).\n\nYou must upload one or more platforms for each version of a private provider. After you create a platform, you must upload the platform binary file to the `provider-binary-upload` URL.\n\n### Parameters\n\n| Parameter            | Description                                                                                                                                                                                                         |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `:organization_name` | The name of the organization to create a provider platform in. The organization must already exist, and the token authenticating the API request must belong to the \"owners\" team or a member of the \"owners\" team. |\n| `:registry_name`     | Must be `private`.                                                                                                                                                                                                  |\n| `:namespace`         | The namespace of the provider for which the platform is being created. 
For private providers this is the same as the `:organization_name` parameter.                                                                |\n| `:name`              | The name of the provider for which the platform is being created.                                                                                                                                                   |\n| `:version`           | The provider version of the provider for which the platform is being created.                                                                                                                                       |\n\nCreates a new registry provider platform. This endpoint only applies to private providers.\n\n| Status  | Response                                                      | Reason                                                         |\n| ------- | ------------------------------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"registry-provider-platforms\"`) | Success                                                        |\n| [422][] | [JSON API error object][]                                     | Malformed request body (missing attributes, wrong types, etc.) 
|\n| [403][] | [JSON API error object][]                                     | Forbidden - not available for public providers                 |\n| [404][] | [JSON API error object][]                                     | User not authorized                                            |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                   | Type   | Default  | Description                              |\n| -------------------------  | ------ | -------  | -------------------------------------    |\n| `data.type`                | string |          | Must be `\"registry-provider-platforms\"`. |\n| `data.attributes.os`       | string |          | A valid operating system string.         |\n| `data.attributes.arch`     | string |          | A valid architecture string.             |\n| `data.attributes.shasum`   | string |          | A valid shasum string.                   |\n| `data.attributes.filename` | string |          | A valid filename string.                 
|\n\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"registry-provider-version-platforms\",\n    \"attributes\": {\n      \"os\": \"linux\",\n      \"arch\": \"amd64\",\n      \"shasum\": \"8f69533bc8afc227b40d15116358f91505bb638ce5919712fbb38a2dec1bba38\",\n      \"filename\": \"terraform-provider-aws_3.1.1_linux_amd64.zip\"\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\/3.1.1\/platforms\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"provpltfrm-BLJWvWyJ2QMs525k\",\n    \"type\": \"registry-provider-platforms\",\n    \"attributes\": {\n      \"os\": \"linux\",\n      \"arch\": \"amd64\",\n      \"filename\": \"terraform-provider-aws_3.1.1_linux_amd64.zip\",\n      \"shasum\": \"8f69533bc8afc227b40d15116358f91505bb638ce5919712fbb38a2dec1bba38\",\n      \"permissions\": {\n        \"can-delete\": true,\n        \"can-upload-asset\": true\n      },\n      \"provider-binary-uploaded\": false\n    },\n    \"relationships\": {\n      \"registry-provider-version\": {\n        \"data\": {\n          \"id\": \"provver-y5KZUsSBRLV9zCtL\",\n          \"type\": \"registry-provider-versions\"\n        }\n      }\n    },\n    \"links\": {\n      \"provider-binary-upload\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b...\"\n    }\n  }\n}\n\n```\n\n## Get All Platforms for a Single Version\n\n`GET \/organizations\/:organization_name\/registry-providers\/:registry_name\/:namespace\/:name\/versions\/:version\/platforms`\n\n### Parameters\n\n| Parameter            | Description                                                                                  |\n| -------------------- | 
-------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization the provider belongs to.                                        |\n| `:registry_name`     | Must be `private`.                                                                           |\n| `:namespace`         | The namespace of the provider. Must be the same as the `organization_name` for the provider. |\n| `:name`              | The provider name.                                                                           |\n| `:version`           | The version of the provider.                                                                 |\n\n| Status  | Response                                             | Reason                                                    |\n| ------- | ---------------------------------------------------- | --------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"registry-providers\"`) | Success                                                   |\n| [403][] | [JSON API error object][]                            | Forbidden - public provider curation disabled             |\n| [404][] | [JSON API error object][]                            | Provider not found or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\/3.1.1\/platforms\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"provpltfrm-GSHhNzptr9s3WoLD\",\n      \"type\": \"registry-provider-platforms\",\n      \"attributes\": {\n        \"os\": \"darwin\",\n        \"arch\": \"amd64\",\n        \"filename\": \"terraform-provider-aws_3.1.1_darwin_amd64.zip\",\n        \"shasum\": 
\"fd580e71bd76d76913e1925f2641be9330c536464af9a08a5b8994da65a26cbc\",\n        \"permissions\": {\n          \"can-delete\": true,\n          \"can-upload-asset\": true\n        },\n        \"provider-binary-uploaded\": true\n      },\n      \"relationships\": {\n        \"registry-provider-version\": {\n          \"data\": {\n            \"id\": \"provver-y5KZUsSBRLV9zCtL\",\n            \"type\": \"registry-provider-versions\"\n          }\n        }\n      },\n      \"links\": {\n        \"provider-binary-download\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b...\"\n      }\n    },\n    {\n      \"id\": \"provpltfrm-A1PHitiM2KkKpVoM\",\n      \"type\": \"registry-provider-platforms\",\n      \"attributes\": {\n        \"os\": \"darwin\",\n        \"arch\": \"arm64\",\n        \"filename\": \"terraform-provider-aws_3.1.1_darwin_arm64.zip\",\n        \"shasum\": \"de3c351d7f35a3c8c583c0da5c1c4d558b8cea3731a49b15f63de5bbbafc0165\",\n        \"permissions\": {\n          \"can-delete\": true,\n          \"can-upload-asset\": true\n        },\n        \"provider-binary-uploaded\": true\n      },\n      \"relationships\": {\n        \"registry-provider-version\": {\n          \"data\": {\n            \"id\": \"provver-y5KZUsSBRLV9zCtL\",\n            \"type\": \"registry-provider-versions\"\n          }\n        }\n      },\n      \"links\": {\n        \"provider-binary-download\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b...\"\n      }\n    }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\/3.1.1\/platforms?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\/3.1.1\/platforms?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": 
\"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\/3.1.1\/platforms?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 2\n    }\n  }\n}\n```\n\n**Note:** The `provider-binary-uploaded` property will be `false` if that file has not been uploaded to Archivist. In this case, instead of including a link to `provider-binary-download`, the response will include an upload link `provider-binary-upload`.\n\n## Get a Platform\n\n`GET \/organizations\/:organization_name\/registry-providers\/:registry_name\/:namespace\/:name\/versions\/:version\/platforms\/:os\/:arch`\n\n### Parameters\n\n| Parameter            | Description                                                                                  |\n| -------------------- | -------------------------------------------------------------------------------------------- |\n| `:organization_name` | The name of the organization the provider belongs to.                                        |\n| `:registry_name`     | Must be `private`.                                                                           |\n| `:namespace`         | The namespace of the provider. Must be the same as the `organization_name` for the provider. |\n| `:name`              | The provider name.                                                                           |\n| `:version`           | The version of the provider.                                                                 |\n| `:os`                | The operating system of the provider platform.                                               |\n| `:arch`              | The architecture of the provider platform.                                                   
|\n\n| Status  | Response                                             | Reason                                                    |\n| ------- | ---------------------------------------------------- | --------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"registry-providers\"`) | Success                                                   |\n| [403][] | [JSON API error object][]                            | Forbidden - public provider curation disabled             |\n| [404][] | [JSON API error object][]                            | Provider not found or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --request GET \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/registry-providers\/private\/hashicorp\/aws\/versions\/3.1.1\/platforms\/linux\/amd64\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"provpltfrm-BLJWvWyJ2QMs525k\",\n    \"type\": \"registry-provider-platforms\",\n    \"attributes\": {\n      \"os\": \"linux\",\n      \"arch\": \"amd64\",\n      \"filename\": \"terraform-provider-aws_3.1.1_linux_amd64.zip\",\n      \"shasum\": \"8f69533bc8afc227b40d15116358f91505bb638ce5919712fbb38a2dec1bba38\",\n      \"permissions\": {\n        \"can-delete\": true,\n        \"can-upload-asset\": true\n      },\n      \"provider-binary-uploaded\": true\n    },\n    \"relationships\": {\n      \"registry-provider-version\": {\n        \"data\": {\n          \"id\": \"provver-y5KZUsSBRLV9zCtL\",\n          \"type\": \"registry-provider-versions\"\n        }\n      }\n    },\n    \"links\": {\n      \"provider-binary-download\": \"https:\/\/archivist.terraform.io\/v1\/object\/dmF1b...\"\n    }\n  }\n}\n```\n\n**Note:** The `provider-binary-uploaded` property will be `false` if that file has not been uploaded to Archivist. 
In this case, instead of including a link to `provider-binary-download`, the response will include an upload link `provider-binary-upload`.

## Delete a Platform

`DELETE /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version/platforms/:os/:arch`

### Parameters

| Parameter            | Description                                                                                                                                                                                                             |
| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `:organization_name` | The name of the organization to delete a provider platform from. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or to a member of the "owners" team. |
| `:registry_name`     | Must be `private`.                                                                                                                                                                                                      |
| `:namespace`         | The namespace of the provider for which the platform is being deleted. For private providers this is the same as the `:organization_name` parameter.                                                                    |
| `:name`              | The name of the provider for which the platform is being deleted.                                                                                                                                                       |
| `:version`           | The version for which the platform is being deleted.                                                                                                                                                                    |
| `:os`                | The operating system of the provider platform that is being deleted.                                                                                                                                                    |
| `:arch`              | The architecture of the provider platform that is being deleted.                                                                                                                                                        |

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Status  | Response                  | Reason                                                      |
| ------- | ------------------------- | ----------------------------------------------------------- |
| [204][] | No Content                | Success                                                     |
| [403][] | [JSON API error object][] | Forbidden - public provider curation disabled               |
| [404][] | [JSON API error object][] | Provider not found or user not authorized to perform action |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms/linux/amd64
```
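Putting the platform workflow together: the `shasum` sent to the Create a Provider Platform endpoint is the SHA-256 checksum of the release zip, and the binary itself is uploaded separately to the `provider-binary-upload` link returned in the response. The sketch below (Python, standard library only; the function name and file path are illustrative, not part of the API) shows one way to assemble the create-platform payload for a single zip:

```python
import hashlib
import os


def platform_payload(zip_path: str, os_name: str, arch: str) -> dict:
    """Build the Create Platform request body for one release zip.

    The shasum attribute is the hex-encoded SHA-256 of the zip file,
    and the filename attribute is the zip's base name.
    """
    sha256 = hashlib.sha256()
    with open(zip_path, "rb") as f:
        # Hash in chunks so large provider binaries don't load into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "data": {
            "type": "registry-provider-platforms",
            "attributes": {
                "os": os_name,
                "arch": arch,
                "shasum": sha256.hexdigest(),
                "filename": os.path.basename(zip_path),
            },
        }
    }
```

After POSTing this payload to the `.../versions/:version/platforms` endpoint, upload the zip bytes to the response's `links.provider-binary-upload` URL (for example with `curl -T <file> <upload-url>`, which issues an HTTP PUT); once Archivist has the file, the platform's `provider-binary-uploaded` property should report `true`.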
developer mozilla org en US docs Web HTTP Status 204   400   https   developer mozilla org en US docs Web HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects    Private Provider Versions and Platforms API  These endpoints are only relevant to private providers  When you  publish a private provider   terraform cloud docs registry publish providers  to the HCP Terraform private registry  you must also create at least one version and at least one platform for that version before consumers can use the provider in configurations  Unlike the public Terraform Registry  the private registry does not automatically upload new releases  You must manually add new provider versions and the associated release files    All members of an organization can view and use both public and private providers  but you need  owners team   terraform cloud docs users teams organizations permissions organization owners  or  Manage Private Registry   terraform cloud docs users teams organizations permissions manage private registry  permissions to add  update  or delete provider versions and platforms in private registry      Create a Provider Version   POST  organizations  organization name registry providers  registry name  namespace  name versions   The private registry does not 
automatically update private providers when you release new versions  You must use this endpoint to add each new version  Consumers cannot use new versions until you upload all  required release files   terraform cloud docs registry publish providers release files  and  Create a Provider Platform   create a provider platform        Parameters    Parameter              Description                                                                                                                                                                                                                                                                                                                                                                                                                                            organization name    The name of the organization to create a provider in  The organization must already exist  and the token authenticating the API request must belong to the  owners  team or a member of the  owners  team        registry name        Must be  private                                                                                                                                                                                                 namespace            The namespace of the provider for which the version is being created  For private providers this is the same as the   organization name  parameter                                                               name                 The name of the provider for which the version is being created                                                                                                                                                Creates a new registry provider version  This endpoint only applies to private providers     Status    Response                                                       Reason                                                                                                    
                                                                                                      201       JSON API document      type   registry provider versions      Success                                                             422       JSON API error object                                         Malformed request body  missing attributes  wrong types  etc        403       JSON API error object                                         Forbidden   not available for public providers                      404       JSON API error object                                         User not authorized                                                   Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required     Key path                      Type     Default   Description                                                                                                                                                                                   data type                    string             Must be   registry provider versions                                   data attributes version      string             A valid semver version string                                          data attributes key id       string             A valid gpg key string                                                 data attributes protocols    array              An array of Terraform provider API versions that this version supports  Must be one or all of the following values    4 0   5 0   6 0             Note    Only Terraform 0 13 and later support third party provider registries  and that Terraform version requires provider API version 5 0 or later  So you do not need to list major versions 4 0 or earlier in the  protocols  attribute       Sample Payload     json      data          type    registry provider versions        attributes            version    3 1 1          key id    
32966F3FB5AC1129          protocols     5 0                        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST       data  payload json     https   app terraform io api v2 organizations hashicorp registry providers private hashicorp aws versions          Sample Response     json      data          id    provver y5KZUsSBRLV9zCtL        type    registry provider versions        attributes            version    3 1 1          created at    2022 02 11T19 16 59 876Z          updated at    2022 02 11T19 16 59 876Z          key id    32966F3FB5AC1129          protocols     5 0           permissions              can delete   true           can upload asset   true                 shasums uploaded   false         shasums sig uploaded   false             relationships            registry provider              data                id    prov cmEmLstBfjNNA9F3              type    registry providers                            platforms              data                links                related     api v2 organizations hashicorp registry providers private hashicorp aws versions 3 1 1 platforms                                links            shasums upload    https   archivist terraform io v1 object dmF1b             shasums sig upload    https   archivist terraform io v1 object dmF1b                          Get All Versions for a Single Provider   GET  organizations  organization name registry providers  registry name  namespace  name versions        Parameters    Parameter              Description                                                                                                                                                                                                                organization name    The name of the organization the provider belongs to                                               registry name        Must be  private                     
                                                              namespace            The namespace of the provider  Must be the same as the  organization name  for the provider        name                 The provider name                                                                                 Status    Response                                               Reason                                                                                                                                                                                        200       JSON API document      type   registry providers      Success                                                        403       JSON API error object                                 Forbidden   public provider curation disabled                  404       JSON API error object                                 Provider not found or user unauthorized to perform action        Sample Request     shell curl       request GET       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations hashicorp registry providers private hashicorp aws versions          Sample Response     json      data                  id    provver y5KZUsSBRLV9zCtL          type    registry provider versions          attributes              version    3 1 1            created at    2022 02 11T19 16 59 876Z            updated at    2022 02 11T19 16 59 876Z            key id    32966F3FB5AC1129            protocols     5 0             permissions                can delete   true             can upload asset   true                     shasums uploaded   true           shasums sig uploaded   true                 relationships              registry provider                data                  id    prov cmEmLstBfjNNA9F3                type    registry providers                                  platforms                data                                  id    
provpltfrm GSHhNzptr9s3WoLD                  type    registry provider platforms                                              id    provpltfrm A1PHitiM2KkKpVoM                  type    registry provider platforms                                              id    provpltfrm BLJWvWyJ2QMs525k                  type    registry provider platforms                                              id    provpltfrm qQYosUguetYtXGzJ                  type    registry provider platforms                                              id    provpltfrm pjDHFN46y193bS7t                  type    registry provider platforms                                        links                  related     api v2 organizations hashicorp registry providers private hashicorp aws versions 3 1 1 platforms                                        links              shasums download    https   archivist terraform io v1 object dmF1b               shasums sig download    https   archivist terraform io v1 object dmF1b                           links          self    https   app terraform io api v2 organizations hashicorp registry providers private hashicorp aws versions page 5Bnumber 5D 1 page 5Bsize 5D 20        first    https   app terraform io api v2 organizations hashicorp registry providers private hashicorp aws versions page 5Bnumber 5D 1 page 5Bsize 5D 20        prev   null       next   null       last    https   app terraform io api v2 organizations hashicorp registry providers private hashicorp aws versions page 5Bnumber 5D 1 page 5Bsize 5D 20          meta          pagination            current page   1         page size   20         prev page   null         next page   null         total pages   1         total count   1                    Note    The  shasums uploaded  and  shasums sig uploaded  properties will be false if those files have not been uploaded to Archivist  In this case  instead of including links to  shasums download  and  shasums sig download   the response will include upload 
links   shasums upload  and  shasums sig upload        Get a Version   GET  organizations  organization name registry providers  registry name  namespace  name versions  version       Parameters    Parameter              Description                                                                                                                                                                                                                organization name    The name of the organization the provider belongs to                                               registry name        Must be  private                                                                                   namespace            The namespace of the provider  Must be the same as the  organization name  for the provider        name                 The provider name                                                                                  version              The version of the provider being created to which different platforms can be added               Status    Response                                               Reason                                                                                                                                                                                        200       JSON API document      type   registry providers      Success                                                        403       JSON API error object                                 Forbidden   public provider curation disabled                  404       JSON API error object                                 Provider not found or user unauthorized to perform action        Sample Request     shell curl       request GET       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 organizations hashicorp registry providers private hashicorp aws versions 3 1 1          Sample Response     json      data          id    
provver y5KZUsSBRLV9zCtL        type    registry provider versions        attributes            version    3 1 1          created at    2022 02 11T19 16 59 876Z          updated at    2022 02 11T19 16 59 876Z          key id    32966F3FB5AC1129          protocols     5 0           permissions              can delete   true           can upload asset   true                 shasums uploaded   true         shasums sig uploaded   true             relationships            registry provider              data                id    prov cmEmLstBfjNNA9F3              type    registry providers                            platforms              data                              id    provpltfrm GSHhNzptr9s3WoLD                type    registry provider platforms                                        id    provpltfrm A1PHitiM2KkKpVoM                type    registry provider platforms                                        id    provpltfrm BLJWvWyJ2QMs525k                type    registry provider platforms                                        id    provpltfrm qQYosUguetYtXGzJ                type    registry provider platforms                                        id    provpltfrm pjDHFN46y193bS7t                type    registry provider platforms                                  links                related     api v2 organizations hashicorp registry providers private hashicorp aws versions 3 1 1 platforms                                links            shasums download    https   archivist terraform io v1 object dmF1b             shasums sig download    https   archivist terraform io v1 object dmF1b                        Note     shasums uploaded  and  shasums sig uploaded  will be false if those files haven t been uploaded to Archivist yet  In this case  instead of including links to  shasums download  and  shasums sig download   the response will include upload links   shasums upload  and  shasums sig upload        Delete a Version   DELETE  organizations  organization 
`DELETE /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version`

### Parameters

| Parameter            | Description |
|----------------------|-------------|
| `:organization_name` | The name of the organization to delete a provider version from. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:registry_name`     | Must be `private`. |
| `:namespace`         | The namespace of the provider for which the version is being deleted. For private providers this is the same as the `:organization_name` parameter. |
| `:name`              | The name of the provider for which the version is being deleted. |
| `:version`           | The version for the provider that will be deleted along with its corresponding platforms. |

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Status | Response              | Reason |
|--------|-----------------------|--------|
| 204    | No Content            | Success |
| 403    | JSON API error object | Forbidden - public provider curation disabled |
| 404    | JSON API error object | Provider not found or user not authorized to perform action |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/public/hashicorp/aws/versions/3.1.1
```

## Create a Provider Platform

`POST /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version/platforms`

Platforms are binaries that allow the provider to run on a particular operating system and architecture combination (e.g., Linux and AMD64). GoReleaser creates binaries automatically when you [create a release on GitHub](/terraform/registry/providers/publishing#creating-a-github-release) or [create a release locally](/terraform/registry/providers/publishing#using-goreleaser-locally).

You must upload one or more platforms for each version of a private provider. After you create a platform, you must upload the platform binary file to the `provider-binary-upload` URL.

### Parameters

| Parameter            | Description |
|----------------------|-------------|
| `:organization_name` | The name of the organization to create a provider platform in. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:registry_name`     | Must be `private`. |
| `:namespace`         | The namespace of the provider for which the platform is being created. For private providers this is the same as the `:organization_name` parameter. |
| `:name`              | The name of the provider for which the platform is being created. |
| `:version`           | The provider version of the provider for which the platform is being created. |

Creates a new registry provider platform. This endpoint only applies to private providers.

| Status | Response | Reason |
|--------|----------|--------|
| 201    | JSON API document (`type: "registry-provider-platforms"`) | Success |
| 422    | JSON API error object | Malformed request body (missing attributes, wrong types, etc.) |
| 403    | JSON API error object | Forbidden - not available for public providers |
| 404    | JSON API error object | User not authorized |
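The `shasum` this endpoint expects is the SHA-256 digest of the platform's release zip. A minimal sketch of assembling the request payload follows; the placeholder zip and the `payload.json` filename are illustrative only, not part of the API:

```shell
# Illustrative stand-in for the real GoReleaser artifact, so the
# commands run end to end; point BINARY at your actual release zip.
BINARY="terraform-provider-aws_3.1.1_linux_amd64.zip"
printf 'placeholder' > "$BINARY"

# SHA-256 digest of the platform binary, used for data.attributes.shasum.
SHASUM="$(sha256sum "$BINARY" | awk '{print $1}')"

# Assemble the request body for the POST above.
cat > payload.json <<EOF
{
  "data": {
    "type": "registry-provider-version-platforms",
    "attributes": {
      "os": "linux",
      "arch": "amd64",
      "shasum": "${SHASUM}",
      "filename": "${BINARY}"
    }
  }
}
EOF
```

The resulting `payload.json` can be passed to curl with `--data @payload.json`, as in the sample request.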
### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                   | Type   | Default | Description |
|----------------------------|--------|---------|-------------|
| `data.type`                | string |         | Must be `"registry-provider-version-platforms"`. |
| `data.attributes.os`       | string |         | A valid operating system string. |
| `data.attributes.arch`     | string |         | A valid architecture string. |
| `data.attributes.shasum`   | string |         | A valid shasum string. |
| `data.attributes.filename` | string |         | A valid filename string. |

### Sample Payload

```json
{
  "data": {
    "type": "registry-provider-version-platforms",
    "attributes": {
      "os": "linux",
      "arch": "amd64",
      "shasum": "8f69533bc8afc227b40d15116358f91505bb638ce5919712fbb38a2dec1bba38",
      "filename": "terraform-provider-aws_3.1.1_linux_amd64.zip"
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms
```

### Sample Response

```json
{
  "data": {
    "id": "provpltfrm-BLJWvWyJ2QMs525k",
    "type": "registry-provider-platforms",
    "attributes": {
      "os": "linux",
      "arch": "amd64",
      "filename": "terraform-provider-aws_3.1.1_linux_amd64.zip",
      "shasum": "8f69533bc8afc227b40d15116358f91505bb638ce5919712fbb38a2dec1bba38",
      "permissions": {
        "can-delete": true,
        "can-upload-asset": true
      },
      "provider-binary-uploaded": false
    },
    "relationships": {
      "registry-provider-version": {
        "data": {
          "id": "provver-y5KZUsSBRLV9zCtL",
          "type": "registry-provider-versions"
        }
      }
    },
    "links": {
      "provider-binary-upload": "https://archivist.terraform.io/v1/object/dmF1b..."
    }
  }
}
```

## Get All Platforms for a Single Version

`GET /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version/platforms`

### Parameters

| Parameter            | Description |
|----------------------|-------------|
| `:organization_name` | The name of the organization the provider belongs to. |
| `:registry_name`     | Must be `private`. |
| `:namespace`         | The namespace of the provider. Must be the same as the `organization_name` for the provider. |
| `:name`              | The provider name. |
| `:version`           | The version of the provider. |

| Status | Response | Reason |
|--------|----------|--------|
| 200    | JSON API document (`type: "registry-providers"`) | Success |
| 403    | JSON API error object | Forbidden - public provider curation disabled |
| 404    | JSON API error object | Provider not found or user unauthorized to perform action |

### Sample Request

```shell
curl \
  --request GET \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms
```

### Sample Response

```json
{
  "data": [
    {
      "id": "provpltfrm-GSHhNzptr9s3WoLD",
      "type": "registry-provider-platforms",
      "attributes": {
        "os": "darwin",
        "arch": "amd64",
        "filename": "terraform-provider-aws_3.1.1_darwin_amd64.zip",
        "shasum": "fd580e71bd76d76913e1925f2641be9330c536464af9a08a5b8994da65a26cbc",
        "permissions": {
          "can-delete": true,
          "can-upload-asset": true
        },
        "provider-binary-uploaded": true
      },
      "relationships": {
        "registry-provider-version": {
          "data": {
            "id": "provver-y5KZUsSBRLV9zCtL",
            "type": "registry-provider-versions"
          }
        }
      },
      "links": {
        "provider-binary-download": "https://archivist.terraform.io/v1/object/dmF1b..."
      }
    },
    {
      "id": "provpltfrm-A1PHitiM2KkKpVoM",
      "type": "registry-provider-platforms",
      "attributes": {
        "os": "darwin",
        "arch": "arm64",
        "filename": "terraform-provider-aws_3.1.1_darwin_arm64.zip",
        "shasum": "de3c351d7f35a3c8c583c0da5c1c4d558b8cea3731a49b15f63de5bbbafc0165",
        "permissions": {
          "can-delete": true,
          "can-upload-asset": true
        },
        "provider-binary-uploaded": true
      },
      "relationships": {
        "registry-provider-version": {
          "data": {
            "id": "provver-y5KZUsSBRLV9zCtL",
            "type": "registry-provider-versions"
          }
        }
      },
      "links": {
        "provider-binary-download": "https://archivist.terraform.io/v1/object/dmF1b..."
      }
    }
  ],
  "links": {
    "self": "https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "first": "https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "prev": null,
    "next": null,
    "last": "https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms?page%5Bnumber%5D=1&page%5Bsize%5D=20"
  },
  "meta": {
    "pagination": {
      "current-page": 1,
      "page-size": 20,
      "prev-page": null,
      "next-page": null,
      "total-pages": 1,
      "total-count": 2
    }
  }
}
```

-> **Note:** The `provider-binary-uploaded` property will be `false` if that file has not been uploaded to Archivist. In this case, instead of including a link to `provider-binary-download`, the response will include an upload link `provider-binary-upload`.

## Get a Platform

`GET /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version/platforms/:os/:arch`

### Parameters

| Parameter            | Description |
|----------------------|-------------|
| `:organization_name` | The name of the organization the provider belongs to. |
| `:registry_name`     | Must be `private`. |
| `:namespace`         | The namespace of the provider. Must be the same as the `organization_name` for the provider. |
| `:name`              | The provider name. |
| `:version`           | The version of the provider. |
| `:os`                | The operating system of the provider platform. |
| `:arch`              | The architecture of the provider platform. |

| Status | Response | Reason |
|--------|----------|--------|
| 200    | JSON API document (`type: "registry-providers"`) | Success |
| 403    | JSON API error object | Forbidden - public provider curation disabled |
| 404    | JSON API error object | Provider not found or user unauthorized to perform action |

### Sample Request

```shell
curl \
  --request GET \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms/linux/amd64
```

### Sample Response

```json
{
  "data": {
    "id": "provpltfrm-BLJWvWyJ2QMs525k",
    "type": "registry-provider-platforms",
    "attributes": {
      "os": "linux",
      "arch": "amd64",
      "filename": "terraform-provider-aws_3.1.1_linux_amd64.zip",
      "shasum": "8f69533bc8afc227b40d15116358f91505bb638ce5919712fbb38a2dec1bba38",
      "permissions": {
        "can-delete": true,
        "can-upload-asset": true
      },
      "provider-binary-uploaded": true
    },
    "relationships": {
      "registry-provider-version": {
        "data": {
          "id": "provver-y5KZUsSBRLV9zCtL",
          "type": "registry-provider-versions"
        }
      }
    },
    "links": {
      "provider-binary-download": "https://archivist.terraform.io/v1/object/dmF1b..."
    }
  }
}
```

-> **Note:** The `provider-binary-uploaded` property will be `false` if that file has not been uploaded to Archivist. In this case, instead of including a link to `provider-binary-download`, the response will include an upload link `provider-binary-upload`.

## Delete a Platform

`DELETE /organizations/:organization_name/registry-providers/:registry_name/:namespace/:name/versions/:version/platforms/:os/:arch`

### Parameters

| Parameter            | Description |
|----------------------|-------------|
| `:organization_name` | The name of the organization to delete a provider platform from. The organization must already exist, and the token authenticating the API request must belong to the "owners" team or a member of the "owners" team. |
| `:registry_name`     | Must be `private`. |
| `:namespace`         | The namespace of the provider for which the platform is being deleted. For private providers this is the same as the `:organization_name` parameter. |
| `:name`              | The name of the provider for which the platform is being deleted. |
| `:version`           | The version for which the platform is being deleted. |
| `:os`                | The operating system of the provider platform that is being deleted. |
| `:arch`              | The architecture of the provider platform that is being deleted. |

[permissions-citation]: #intentionally-unused---keep-for-maintainers

| Status | Response              | Reason |
|--------|-----------------------|--------|
| 204    | No Content            | Success |
| 403    | JSON API error object | Forbidden - public provider curation disabled |
| 404    | JSON API error object | Provider not found or user not authorized to perform action |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/organizations/hashicorp/registry-providers/private/hashicorp/aws/versions/3.1.1/platforms/linux/amd64
```
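Tying the platform endpoints together: the create-platform response carries a one-time `provider-binary-upload` link, and the binary has to be uploaded there before `provider-binary-uploaded` becomes `true`. A hedged sketch of extracting that link from a saved response; the stub `response.json` and the truncated Archivist URL are illustrative stand-ins, and curl's `--upload-file` issuing a PUT is an assumption based on curl defaults:

```shell
# Stub of a create-platform response; a real one comes from the POST
# request shown earlier, saved with curl's -o flag.
cat > response.json <<'EOF'
{"data":{"links":{"provider-binary-upload":"https://archivist.terraform.io/v1/object/dmF1b"}}}
EOF

# Pull out the one-time upload URL (sed keeps this dependency-free).
UPLOAD_URL="$(sed -n 's/.*"provider-binary-upload":"\([^"]*\)".*/\1/p' response.json)"
echo "$UPLOAD_URL"

# Then upload the release zip; --upload-file makes curl send a PUT:
# curl --upload-file terraform-provider-aws_3.1.1_linux_amd64.zip "$UPLOAD_URL"
```

After the upload, fetching the platform again should show `provider-binary-uploaded: true` and a `provider-binary-download` link in place of the upload link.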
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 404 https developer mozilla org en US docs Web HTTP Status 404 Use the task stages endpoint to manage run task stages and results List show and override task stages and show run task results using the HTTP API page title Run Task Stages and Results API Docs HCP Terraform","answers":"---\npage_title: Run Task Stages and Results - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/task-stages` endpoint to manage run task stages and results. List, show, and override task stages, and show run task results using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API documents]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n[run]: \/terraform\/cloud-docs\/run\/states\n\n# Run Task Stages and Results API\n\nHCP Terraform uses run task stages and run task results to track [run task](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks) execution.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/run-tasks.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nWhen HCP Terraform creates a [run][], it reads the run tasks associated to the workspace. Each run task in the workspace is configured to begin during a specific [run stage](\/terraform\/cloud-docs\/run\/states). HCP Terraform creates a run task stage object for each run stage that triggers run tasks. 
You can configure run tasks during the [Pre-Plan Stage](\/terraform\/cloud-docs\/run\/states#the-pre-plan-stage), [Post-Plan Stage](\/terraform\/cloud-docs\/run\/states#the-post-plan-stage), [Pre-Apply Stage](\/terraform\/cloud-docs\/run\/states#the-pre-apply-stage) and [Post-Apply Stage](\/terraform\/cloud-docs\/run\/states#the-post-apply-stage).\n\nRun task stages then create a run task result for each run task. For example, a workspace has two run tasks called `alpha` and `beta`. For each run, HCP Terraform creates one run task stage called `post-plan`. That run task stage has two run task results: one for the `alpha` run task and one for the `beta` run task.\n\nThis page lists the endpoints to retrieve run task stages and run task results. Refer to the [Run Tasks API](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks) for endpoints to create and manage run tasks within HCP Terraform. Refer to the [Run Tasks Integration API](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks-integration) for endpoints to build custom run tasks for the HCP Terraform ecosystem.\n\n## Attributes\n\n### Run Task Stage Status\n\nThe run task stage status is found in `data.attributes.status`, and you can reference the following list of possible values.\n\n| Status              | Description                                                                                                                                |\n|-------------------- |------------------------------------------------------------------------------------------------------------------------------------------- |\n| `pending`           | The initial status of a run task stage after creation.                                                                                     |\n| `running`           | The run task stage is executing one or more tasks, which have not yet completed.                                                           
|\n| `passed`            | All of the run task results in the stage passed.                                                                                           |\n| `failed`            | One or more results in the run task stage failed.                                                                                          |\n| `awaiting_override` | The task stage is waiting for user input. Once a user manually overrides the failed run tasks, the run returns to the `running` state. |\n| `errored`           | The run task stage has errored.                                                                                                            |\n| `canceled`          | The run task stage has been canceled.                                                                                                      |\n| `unreachable`       | The run task stage could not be executed.                                                                                                  |\n\n### Run Task Result Status\n\nThe run task result status is found in `data.attributes.status`, and you can reference the following list of possible values.\n\n| Status        | Description                                                            |\n|---------------|------------------------------------------------------------------------|\n| `pending`     | The initial status of a run task result after creation.                |\n| `running`     | The associated run task has begun execution and has not yet completed. |\n| `passed`      | The associated run task executed and returned a passing result.        |\n| `failed`      | The associated run task executed and returned a failed result.         |\n| `errored`     | The associated run task has errored during execution.                  |\n| `canceled`    | The associated run task execution has been canceled.                   |\n| `unreachable` | The associated run task could not be executed.                         
|\n\n## List the Run Task Stages in a Run\n\n`GET \/runs\/:run_id\/task-stages`\n\n| Parameter | Description                         |\n|-----------|-------------------------------------|\n| `:run_id` | The run ID to list task stages for. |\n\n| Status  | Response                                                | Reason                          |\n|---------|---------------------------------------------------------|---------------------------------|\n| [200][] | Array of [JSON API documents][] (`type: \"task-stages\"`) | Successfully listed task-stages |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters); remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                 |\n|----------------|------------------------------------------------------------------------------|\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.          |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 task stages per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/runs\/run-XdgtChJuuUwLoSmw\/task-stages\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n    {\n      \"id\": \"ts-rL5ZsuwfjqfPJcdi\",\n      \"type\": \"task-stages\",\n      \"attributes\": {\n        \"status\": \"passed\",\n        \"stage\": \"post_plan\",\n        \"status-timestamps\": {\n          \"passed-at\": \"2022-06-08T20:32:12+08:00\",\n          \"running-at\": \"2022-06-08T20:32:11+08:00\"\n        },\n        \"created-at\": \"2022-06-08T12:31:56.94Z\",\n        \"updated-at\": \"2022-06-08T12:32:12.315Z\"\n      },\n      \"relationships\": {\n        \"run\": {\n          \"data\": {\n            \"id\": \"run-XdgtChJuuUwLoSmw\",\n            \"type\": \"runs\"\n          }\n        },\n        \"task-results\": {\n          \"data\": [\n            {\n              \"id\": \"taskrs-EmnmsEDL1jgd1GTP\",\n              \"type\": \"task-results\"\n            }\n          ]\n        },\n        \"policy-evaluations\":{\n          \"data\":[\n            {\n              \"id\":\"poleval-iouaha9KLgGWkBRQ\",\n              \"type\":\"policy-evaluations\"\n            }\n          ]\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/task-stages\/ts-rL5ZsuwfjqfPJcdi\"\n      }\n    }\n  ]\n}\n```\n\n## Show a Run Task Stage\n\n`GET \/task-stages\/:task_stage_id`\n\n| Parameter        | Description               |\n|------------------|---------------------------|\n| `:task_stage_id` | The run task stage ID to get. 
|\n\nThis endpoint shows details of a specific task stage.\n\n| Status  | Response                                      | Reason                                      |\n|---------|-----------------------------------------------|---------------------------------------------|\n| [200][] | [JSON API document][] (`type: \"task-stages\"`) | Success                                     |\n| [404][] | [JSON API error object][]                     | Task stage not found or user not authorized |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/task-stages\/ts-rL5ZsuwfjqfPJcdi\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"ts-rL5ZsuwfjqfPJcdi\",\n    \"type\": \"task-stages\",\n    \"attributes\": {\n      \"status\": \"passed\",\n      \"stage\": \"post_plan\",\n      \"status-timestamps\": {\n        \"passed-at\": \"2022-06-08T20:32:12+08:00\",\n        \"running-at\": \"2022-06-08T20:32:11+08:00\"\n      },\n      \"created-at\": \"2022-06-08T12:31:56.94Z\",\n      \"updated-at\": \"2022-06-08T12:32:12.315Z\"\n    },\n    \"relationships\": {\n      \"run\": {\n        \"data\": {\n          \"id\": \"run-XdgtChJuuUwLoSmw\",\n          \"type\": \"runs\"\n        }\n      },\n      \"task-results\": {\n        \"data\": [\n          {\n            \"id\": \"taskrs-EmnmsEDL1jgd1GTP\",\n            \"type\": \"task-results\"\n          }\n        ]\n      },\n      \"policy-evaluations\":{\n        \"data\":[\n          {\n            \"id\":\"poleval-iouaha9KLgGWkBRQ\",\n            \"type\":\"policy-evaluations\"\n          }\n        ]\n     }\n    },\n    \"links\": {\n      \"self\": \"\/api\/v2\/task-stages\/ts-rL5ZsuwfjqfPJcdi\"\n    }\n  }\n}\n```\n\n## Show a Run Task Result\n\n`GET \/task-results\/:task_result_id`\n\n| Parameter         | Description                
|\n|-------------------|----------------------------|\n| `:task_result_id` | The run task result ID to get. |\n\nThis endpoint shows the details for a specific run task result.\n\n| Status  | Response                                       | Reason                                       |\n|---------|------------------------------------------------|----------------------------------------------|\n| [200][] | [JSON API document][] (`type: \"task-results\"`) | Success                                      |\n| [404][] | [JSON API error object][]                      | Task result not found or user not authorized |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/task-results\/taskrs-EmnmsEDL1jgd1GZz\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"taskrs-EmnmsEDL1jgd1GZz\",\n    \"type\": \"task-results\",\n    \"attributes\": {\n      \"message\": \"No issues found.\\nSeverity threshold is set to low.\",\n      \"status\": \"passed\",\n      \"status-timestamps\": {\n        \"passed-at\": \"2022-06-08T20:32:12+08:00\",\n        \"running-at\": \"2022-06-08T20:32:11+08:00\"\n      },\n      \"url\": \"https:\/\/external.service\/project\/task-123abc\",\n      \"created-at\": \"2022-06-08T12:31:56.954Z\",\n      \"updated-at\": \"2022-06-08T12:32:12.27Z\",\n      \"task-id\": \"task-b6MaHZmGopHDtqhn\",\n      \"task-name\": \"example-task\",\n      \"task-url\": \"https:\/\/external.service\/task-123abc\",\n      \"stage\": \"post_plan\",\n      \"is-speculative\": false,\n      \"workspace-task-id\": \"wstask-258juqenQeWb3DZz\",\n      \"workspace-task-enforcement-level\": \"mandatory\"\n    },\n    \"relationships\": {\n      \"task-stage\": {\n        \"data\": {\n          \"id\": \"ts-rL5ZsuwfjqfPJczZ\",\n          \"type\": \"task-stages\"\n        }\n      }\n    },\n    \"links\": {\n      \"self\": 
\"\/api\/v2\/task-results\/taskrs-EmnmsEDL1jgd1GZz\"\n    }\n  }\n}\n```\n\n## Available Related Resources\n\n### Task Stage\n\nThe GET endpoints above can optionally return related resources, if requested with [the `include` query parameter](\/terraform\/cloud-docs\/api-docs#inclusion-of-related-resources). The following resource types are available:\n\n| Resource             | Description                                               |\n|--------------------- |---------------------------------------------------------- |\n| `run`                | Information about the associated run.                     |\n| `run.workspace`      | Information about the associated workspace.               |\n| `task-results`       | Information about the results for a task-stage.           |\n| `policy-evaluations`  | Information about the policy evaluations for a task-stage. |\n\n## Override a Task Stage\n\n`POST \/task-stages\/:task_stage_id\/actions\/override`\n\n| Parameter            | Description                                                                                     |\n| -------------------- | ----------------------------------------------------------------------------------------------- |\n| `:task_stage_id`     | The ID of the task stage to override.                                                           
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  https:\/\/app.terraform.io\/api\/v2\/task-stages\/ts-rL5ZsuwfjqfPJcdi\/actions\/override\n```\n\n### Sample Response\n\n```json\n{\n   \"data\":{\n      \"id\":\"ts-F7MumZQcJzVh1ZZk\",\n      \"type\":\"task-stages\",\n      \"attributes\":{\n         \"status\":\"running\",\n         \"stage\":\"post_plan\",\n         \"status-timestamps\":{\n            \"running-at\":\"2022-09-21T06:36:54+00:00\",\n            \"awaiting-override-at\":\"2022-09-21T06:31:50+00:00\"\n         },\n         \"created-at\":\"2022-09-21T06:29:44.632Z\",\n         \"updated-at\":\"2022-09-21T06:36:54.952Z\",\n         \"permissions\":{\n            \"can-override-policy\":true,\n            \"can-override-tasks\":false,\n            \"can-override\":true\n         },\n         \"actions\":{\n            \"is-overridable\":false\n         }\n      },\n      \"relationships\":{\n         \"run\":{\n            \"data\":{\n               \"id\":\"run-K6N4BAz8NfUyR2QB\",\n               \"type\":\"runs\"\n            }\n         },\n         \"task-results\":{\n            \"data\":[\n\n            ]\n         },\n         \"policy-evaluations\":{\n            \"data\":[\n               {\n                  \"id\":\"poleval-atNKxwvjYy4Gwk3k\",\n                  \"type\":\"policy-evaluations\"\n               }\n            ]\n         }\n      },\n      \"links\":{\n         \"self\":\"\/api\/v2\/task-stages\/ts-F7MumZQcJzVh1ZZk\"\n      }\n   }\n}\n```","site":"terraform"}
                                                                                                                                  pending        The initial status of a run task result after creation                     running        The associated run task is begun execution and has not yet completed       passed         The associated run task executed and returned a passing result             failed         The associated run task executed and returned a failed result              errored        The associated run task has errored during execution                       canceled       The associated run task execution has been canceled                        unreachable    The associated run task could not be executed                               List the Run Task Stages in a Run   GET  runs  run id task stages     Parameter   Description                                                                                  run id     The run ID to list task stages for       Status    Response                                                  Reason                                                                                                                                       200      Array of  JSON API documents      type   task stages      Successfully listed task stages        Query Parameters  This endpoint supports pagination  with standard URL query parameters   terraform cloud docs api docs query parameters   remember to percent encode     as   5B  and     as   5D  if your tooling doesn t automatically encode URLs     Parameter        Description                                                                                                                                                         page number       Optional    If omitted  the endpoint will return the first page         page size         Optional    If omitted  the endpoint will return 20 runs per page         Sample Request     shell curl       header  Authorization  Bearer  TOKEN        
header  Content Type  application vnd api json      https   app terraform io api v2 runs run XdgtChJuuUwLoSmw task stages          Sample Response     json      data                  id    ts rL5ZsuwfjqfPJcdi          type    task stages          attributes              status    passed            stage    post plan            status timestamps                passed at    2022 06 08T20 32 12 08 00              running at    2022 06 08T20 32 11 08 00                      created at    2022 06 08T12 31 56 94Z            updated at    2022 06 08T12 32 12 315Z                  relationships              run                data                  id    run XdgtChJuuUwLoSmw                type    runs                                  task results                data                                  id    taskrs EmnmsEDL1jgd1GTP                  type    task results                                                policy evaluations               data                                 id   poleval iouaha9KLgGWkBRQ                  type   policy evaluations                                                      links              self     api v2 task stages ts rL5ZsuwfjqfPJcdi                              Show a Run Task Stage   GET  task stages  task stage id     Parameter          Description                                                                      task stage id    The run task stage ID to get     This endpoint shows details of a specific task stage     Status    Response                                        Reason                                                                                                                                                     200       JSON API document      type   task stages      Success                                          404       JSON API error object                          Task stage not found or user not authorized        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content 
Type  application vnd api json      https   app terraform io api v2 task stages ts rL5ZsuwfjqfPJcdi          Sample Response     json      data          id    ts rL5ZsuwfjqfPJcdi        type    task stages        attributes            status    passed          stage    post plan          status timestamps              passed at    2022 06 08T20 32 12 08 00            running at    2022 06 08T20 32 11 08 00                  created at    2022 06 08T12 31 56 94Z          updated at    2022 06 08T12 32 12 315Z              relationships            run              data                id    run XdgtChJuuUwLoSmw              type    runs                            task results              data                              id    taskrs EmnmsEDL1jgd1GTP                type    task results                                        policy evaluations             data                             id   poleval iouaha9KLgGWkBRQ                type   policy evaluations                                           links            self     api v2 task stages ts rL5ZsuwfjqfPJcdi                      Show a Run Task Result   GET  task results  task result id     Parameter           Description                                                                         task result id    The run task result ID to get     This endpoint shows the details for a specific run task result     Status    Response                                         Reason                                                                                                                                                        200       JSON API document      type   task results      Success                                           404       JSON API error object                           Task result not found or user not authorized        Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json      https   app terraform io api v2 task results 
taskrs EmnmsEDL1jgd1GZz          Sample Response     json      data          id    taskrs EmnmsEDL1jgd1GZz        type    task results        attributes            message    No issues found  nSeverity threshold is set to low           status    passed          status timestamps              passed at    2022 06 08T20 32 12 08 00            running at    2022 06 08T20 32 11 08 00                  url    https   external service project task 123abc          created at    2022 06 08T12 31 56 954Z          updated at    2022 06 08T12 32 12 27Z          task id    task b6MaHZmGopHDtqhn          task name    example task          task url    https   external service task 123abc          stage    post plan          is speculative   false         workspace task id    wstask 258juqenQeWb3DZz          workspace task enforcement level    mandatory              relationships            task stage              data                id    ts rL5ZsuwfjqfPJczZ              type    task stages                                links            self     api v2 task results taskrs EmnmsEDL1jgd1GZz                      Available Related Resources      Task Stage  The GET endpoints above can optionally return related resources  if requested with  the  include  query parameter   terraform cloud docs api docs inclusion of related resources   The following resource types are available     Resource               Description                                                                                                                                         run                   Information about the associated run                           run workspace         Information about the associated workspace                     task results          Information about the results for a task stage                 policy evaluations     Information about the policy evaluations for a task stage        Override a Task Stage   POST  task stages  task stage id actions override     Parameter              
Description                                                                                                                                                                                                                      task stage id        The ID of the task stage to override                                                                   Sample Request     shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request POST     https   app terraform io api v2 task stages ts rL5ZsuwfjqfPJcdi actions override          Sample Response     json       data           id   ts F7MumZQcJzVh1ZZk          type   task stages          attributes              status   running             stage   post plan             status timestamps                 running at   2022 09 21T06 36 54 00 00                awaiting override at   2022 09 21T06 31 50 00 00                        created at   2022 09 21T06 29 44 632Z             updated at   2022 09 21T06 36 54 952Z             permissions                 can override policy  true               can override tasks  false               can override  true                       actions                 is overridable  false                            relationships              run                 data                    id   run K6N4BAz8NfUyR2QB                   type   runs                                      task results                 data                                         policy evaluations                 data                                        id   poleval atNKxwvjYy4Gwk3k                      type   policy evaluations                                                            links              self    api v2 task stages ts F7MumZQcJzVh1ZZk                    "}
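The override response above reports both a `permissions` block and an `actions.is-overridable` flag; a client should check both before attempting the override call. A minimal sketch of that check (the helper name and the abridged sample are illustrative, not part of the API):

```python
import json

def stage_override_state(doc: dict) -> dict:
    """Summarize a task-stages JSON:API document: the stage's current
    status and whether the caller may override it right now."""
    attrs = doc["data"]["attributes"]
    return {
        "status": attrs["status"],
        # Both the server-side action flag and the caller's permission
        # must be true for an override request to succeed.
        "overridable": bool(
            attrs.get("actions", {}).get("is-overridable", False)
            and attrs.get("permissions", {}).get("can-override", False)
        ),
    }

# Abridged from the sample override response above.
sample = json.loads("""
{"data": {"id": "ts-F7MumZQcJzVh1ZZk", "type": "task-stages",
  "attributes": {"status": "running", "stage": "post_plan",
    "permissions": {"can-override-policy": true,
                    "can-override-tasks": false,
                    "can-override": true},
    "actions": {"is-overridable": false}}}}
""")
print(stage_override_state(sample))  # {'status': 'running', 'overridable': False}
```

Because the sample stage is already `running`, `is-overridable` is false even though the token still holds the `can-override` permission.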
{"questions":"terraform page title Run Tasks Integration API Docs HCP Terraform 200 https developer mozilla org en US docs Web HTTP Status 200 401 https developer mozilla org en US docs Web HTTP Status 401 Use run tasks to make requests when a run reaches a specific phase Learn about the run task request and callback formats","answers":"---\npage_title: Run Tasks Integration - API Docs - HCP Terraform\ndescription: >-\n  Use run tasks to make requests when a run reaches a specific phase. Learn about the run task request and callback formats.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n# Run Tasks Integration API\n\n[Run tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks) allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle.\nThis page lists the API endpoints used to trigger a run task and the expected response from the integration.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/run-tasks.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nRefer to [run tasks](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks) for the API endpoints to create and manage run tasks within HCP Terraform. 
You can also access a complete list of all run tasks in the [Terraform Registry](https:\/\/registry.terraform.io\/browse\/run-tasks).\n\n## Run Task Request\n\nWhen a run reaches the appropriate phase and a run task is triggered, HCP Terraform will send a request to the run task's URL.\nThe service receiving the run task request should respond with `200 OK`, or HCP Terraform will retry to trigger the run task.\n\n`POST :url`\n\n| Parameter | Description                                             |\n|-----------|---------------------------------------------------------|\n| `:url`    | The URL configured in the run task to send requests to. |\n\n| Status  | Response   | Reason                            |\n|---------|------------|-----------------------------------|\n| [200][] | No Content | Successfully submitted a run task |\n\n### Request Body\n\nThe POST request submits a JSON object with the following properties as a request payload.\n\n#### Common Properties\n\nAll request payloads contain the following properties.\n\n|               Key path               |  Type   |                Values                |                                                                                                  Description                                                                                                   |\n| ------------------------------------ | ------- | ------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `payload_version`                    | integer | `1`                                  | Schema version of the payload. Only `1` is supported.                                                                                                                                                          
|\n| `stage`                              | string  | `pre_plan`, `post_plan`, `pre_apply`, `post_apply` | The [run stage](\/terraform\/cloud-docs\/run\/states) when HCP Terraform triggers the run task.                                                                                                                  |\n| `access_token`                       | string  |                                      | Bearer token to use when calling back to HCP Terraform.                                                                                                                                                      |\n| `capabilities`                       | object  |                                      | A map of the capabilities that the caller supports. |\n| `capabilities.outcomes`              | bool    |                                      | A flag indicating the caller accepts detailed run task outcomes. |\n| `configuration_version_download_url` | string  |                                      | The URL to [download the configuration version](\/terraform\/cloud-docs\/api-docs\/configuration-versions#download-configuration-files). This is `null` if the configuration version is not available to download. |\n| `configuration_version_id`           | string  |                                      | The ID of the [configuration version](\/terraform\/cloud-docs\/api-docs\/configuration-versions) for the run.                                                                                                      |\n| `is_speculative`                     | bool    |                                      | Whether the task is part of a [speculative run](\/terraform\/cloud-docs\/run\/remote-operations#speculative-plans).                                                                                                |\n| `organization_name`                  | string  |                                      | Name of the organization the task is configured within.                       
|\n| `run_app_url` | string | | URL within HCP Terraform to the run. |\n| `run_created_at` | string | | When the run was started. |\n| `run_created_by` | string | | Who created the run. |\n| `run_id` | string | | ID of the run this task is part of. |\n| `run_message` | string | | Message that was associated with the run. |\n| `task_result_callback_url` | string | | URL that should be called back with the result of this task. 
|\n| `task_result_enforcement_level` | string | `mandatory`, `advisory` | Enforcement level for this task. |\n| `task_result_id` | string | | ID of the task result within HCP Terraform. |\n| `vcs_branch` | string | | Repository branch that the workspace executes from. This is `null` if the workspace does not have a VCS repository. |\n| `vcs_commit_url` | string | | URL to the commit that triggered this run. This is `null` if the workspace does not have a VCS repository. |\n| `vcs_pull_request_url` | string | | URL to the Pull Request\/Merge Request that triggered this run. This is `null` if the run was not triggered by a pull request. |\n| `vcs_repo_url` | string | | URL to the workspace's VCS repository. This is `null` if the workspace does not have a VCS repository. |\n| `workspace_app_url` | string | | URL within HCP Terraform to the workspace. 
|\n| `workspace_id` | string | | ID of the workspace the task is associated with. |\n| `workspace_name` | string | | Name of the workspace. |\n| `workspace_working_directory` | string | | The working directory specified in the run's [workspace settings](\/terraform\/cloud-docs\/workspaces\/settings#terraform-working-directory). |\n\n#### Post-Plan, Pre-Apply, and Post-Apply Properties\n\nRequests with `stage` set to `post_plan`, `pre_apply`, or `post_apply` contain the following additional properties.\n\n| Key path | Type | Values | Description |\n| -------- | ---- | ------ | ----------- |\n| `plan_json_api_url` | string | | The URL to retrieve the JSON Terraform plan for this run. 
|\n\n### Sample Payload\n\n```json\n{\n  \"payload_version\": 1,\n  \"stage\": \"post_plan\",\n  \"access_token\": \"4QEuyyxug1f2rw.atlasv1.iDyxqhXGVZ0ykes53YdQyHyYtFOrdAWNBxcVUgWvzb64NFHjcquu8gJMEdUwoSLRu4Q\",\n  \"capabilities\": {\n    \"outcomes\": true\n  },\n  \"configuration_version_download_url\": \"https:\/\/app.terraform.io\/api\/v2\/configuration-versions\/cv-ntv3HbhJqvFzamy7\/download\",\n  \"configuration_version_id\": \"cv-ntv3HbhJqvFzamy7\",\n  \"is_speculative\": false,\n  \"organization_name\": \"hashicorp\",\n  \"plan_json_api_url\": \"https:\/\/app.terraform.io\/api\/v2\/plans\/plan-6AFmRJW1PFJ7qbAh\/json-output\",\n  \"run_app_url\": \"https:\/\/app.terraform.io\/app\/hashicorp\/my-workspace\/runs\/run-i3Df5to9ELvibKpQ\",\n  \"run_created_at\": \"2021-09-02T14:47:13.036Z\",\n  \"run_created_by\": \"username\",\n  \"run_id\": \"run-i3Df5to9ELvibKpQ\",\n  \"run_message\": \"Triggered via UI\",\n  \"task_result_callback_url\": \"https:\/\/app.terraform.io\/api\/v2\/task-results\/5ea8d46c-2ceb-42cd-83f2-82e54697bddd\/callback\",\n  \"task_result_enforcement_level\": \"mandatory\",\n  \"task_result_id\": \"taskrs-2nH5dncYoXaMVQmJ\",\n  \"vcs_branch\": \"main\",\n  \"vcs_commit_url\": \"https:\/\/github.com\/hashicorp\/terraform-random\/commit\/7d8fb2a2d601edebdb7a59ad2088a96673637d22\",\n  \"vcs_pull_request_url\": null,\n  \"vcs_repo_url\": \"https:\/\/github.com\/hashicorp\/terraform-random\",\n  \"workspace_app_url\": \"https:\/\/app.terraform.io\/app\/hashicorp\/my-workspace\",\n  \"workspace_id\": \"ws-ck4G5bb1Yei5szRh\",\n  \"workspace_name\": \"tfr_github_0\",\n  \"workspace_working_directory\": \"\/terraform\"\n}\n```\n\n### Request Headers\n\nThe POST request submits the following properties as the request headers.\n\n|          Name          |                   Value                    |                                                                                                                    Description                              
                                                                                      |\n| ---------------------- | ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `Content-Type`         | `application\/json`                         | Specifies the type of data in the request body                                                                                                                                                                                                    |\n| `User-Agent`           | `TFC\/1.0 (+https:\/\/app.terraform.io; TFC)` | Identifies the request is coming from HCP Terraform                                                                                                                                                                                             |\n| `X-TFC-Task-Signature` | string                                     | If the run task is configured with an [HMAC Key](\/terraform\/cloud-docs\/integrations\/run-tasks#securing-your-run-task), this header contains the signed SHA512 sum of the request payload using the configured HMAC key. Otherwise, this is an empty string. |\n\n## Run Task Callback\n\nWhile a run task runs, it may send progressive updates to HCP Terraform with a `running` status. Once an integrator determines that Terraform supports detailed run task outcomes, they can send these outcomes by appending to the run task's callback payload.\n\nOnce the external integration fulfills the request, that integration must call back into HCP Terraform with the overall result of either `passed` or `failed`. 
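The `X-TFC-Task-Signature` check from the request-headers table above can be performed by recomputing the HMAC over the raw request body. A sketch, assuming the header carries a hex-encoded HMAC-SHA512 digest (the key and body below are made up for illustration):

```python
import hashlib
import hmac

def verify_task_signature(payload: bytes, header_value: str, hmac_key: str) -> bool:
    """Recompute HMAC-SHA512 over the raw request body and compare it
    against the X-TFC-Task-Signature header in constant time."""
    expected = hmac.new(hmac_key.encode(), payload, hashlib.sha512).hexdigest()
    return hmac.compare_digest(expected, header_value)

# Illustrative values only; a real service reads the body and header
# from the incoming HTTP request.
body = b'{"payload_version": 1, "stage": "post_plan"}'
key = "my-shared-hmac-key"
signature = hmac.new(key.encode(), body, hashlib.sha512).hexdigest()

print(verify_task_signature(body, signature, key))          # True
print(verify_task_signature(body, signature, "wrong-key"))  # False
```

Using `hmac.compare_digest` rather than `==` avoids leaking information through comparison timing.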
Terraform expects this callback within 10 minutes, or the request is considered errored.\n\nYou can send outcomes with a status of `running`, `passed`, or `failed`, but it is a good practice only to send outcomes when a run task is `running`.\n\n`PATCH :callback_url`\n\n|    Parameter    |                           Description                            |\n| --------------- | ---------------------------------------------------------------- |\n| `:callback_url` | The `task_result_callback_url` specified in the run task request. Typically `\/task-results\/:guid\/callback`. |\n\n| Status  |         Response          |                  Reason                  |\n| ------- | ------------------------- | ---------------------------------------- |\n| [200][] | No Content                | Successfully submitted a run task result |\n| [401][] | [JSON API error object][] | Not authorized to perform action         |\n| [422][] | [JSON API error object][] | Invalid response payload. This could be caused by invalid attributes, or sending a status that is not accepted. |\n\n### Request Body\n\nThe PATCH request submits a JSON object with the following properties as a request payload. This payload is also described in the [JSON API schema for run task results](https:\/\/github.com\/hashicorp\/terraform-docs-common\/blob\/main\/website\/public\/schema\/run-tasks\/runtask-result.json).\n\n|         Key path          |  Type  |                                           Description                                           |\n| ------------------------- | ------ | ----------------------------------------------------------------------------------------------- |\n| `data.type`               | string | Must be `\"task-results\"`.                                                                       |\n| `data.attributes.status`  | string | The current status of the task. Only `passed`, `failed` or `running` are allowed.               
|\n| `data.attributes.message` | string | (Recommended, but optional) A short message describing the status of the task.                                   |\n| `data.attributes.url`     | string | (Optional) A URL where users can obtain more information about the task.                        |\n| `relationships.outcomes.data`     | array | (Recommended, but optional) A collection of detailed run task outcomes.  |\n\nStatus values other than passed, failed, or running return an error. Both the passed and failed statuses represent a final state for a run task. The running status allows one or more partial updates until the task has reached a final state.\n\n```json\n{\n  \"data\": {\n    \"type\": \"task-results\",\n    \"attributes\": {\n      \"status\": \"passed\",\n      \"message\": \"4 passed, 0 skipped, 0 failed\",\n      \"url\": \"https:\/\/external.service.dev\/terraform-plan-checker\/run-i3Df5to9ELvibKpQ\"\n    },\n    \"relationships\": {\n      \"outcomes\": {\n        \"data\": [...]\n      }\n    }\n  }\n}\n```\n\n#### Outcomes Payload Body\n\nA run task result may optionally contain one or more detailed outcomes, which improves result visibility and content in the HCP Terraform user interface. The following attributes define the outcome.\n\n|         Key path          |  Type  |                                           Description                                           |\n| ------------------------- | ------ | ----------------------------------------------------------------------------------------------- |\n| `outcome-id`     | string | A partner supplied identifier for this outcome. |\n| `description`     | string | A one-line description of the result. |\n| `body`     | string | (Optional)  A detailed message for the result in Markdown format. |\n| `url`     | string | (Optional) A URL that a user can navigate to for more information about this result. 
|\n| `tags`     | object | (Optional) An object containing tag arrays, named by the property key. |\n| `tags.key`     | string | The two or three word name of the header tag. [Special handling](#severity-and-status-tags) is given to `severity` and `status` keys. |\n| `tags.key[].label`     | string | The text value of the tag. |\n| `tags.key[].level`     | enum string | (Optional) The error level for the tag. Defaults to `none`, but accepts `none`, `info`, `warning`, or `error`. For levels other than `none`, labels render with a color and icon for that level. |\n\n##### Severity and Status Tags\n\nRun task outcomes with tags named \"severity\" or \"status\" are enriched within the outcomes display list in HCP Terraform, enabling an earlier response to issues with severity and status.\n\n```json\n{\n  \"type\": \"task-result-outcomes\",\n  \"attributes\": {\n    \"outcome-id\": \"PRTNR-CC-TF-127\",\n    \"description\": \"ST-2942: S3 Bucket will not enforce MFA login on delete requests\",\n    \"tags\": {\n      \"Status\":  [\n        {\n          \"label\": \"Denied\", \n          \"level\": \"error\" \n        }\n      ],\n      \"Severity\": [\n        {\n          \"label\": \"High\", \n          \"level\": \"error\" \n        },\n        {\n          \"label\": \"Recoverable\",\n          \"level\": \"info\" \n        }\n      ],\n      \"Cost Centre\": [\n        {\n          \"label\": \"IT-OPS\"\n        }\n      ]\n    },\n    \"body\": \"# Resolution for issue ST-2942\\n\\n## Impact\\n\\nFollow instructions in the [AWS S3 docs](https:\/\/docs.aws.amazon.com\/AmazonS3\/latest\/userguide\/MultiFactorAuthenticationDelete.html) to manually configure the MFA setting.\\n\u2014-- Payload truncated \u2014--\",\n    \"url\": \"https:\/\/external.service.dev\/result\/PRTNR-CC-TF-127\"\n  }\n}\n```\n\n##### Complete Callback Payload Example\n\nThe example below shows a complete payload explaining the data structure of a callback payload, including all the necessary 
fields.\n\n```json\n{\n  \"data\": {\n    \"type\": \"task-results\",\n    \"attributes\": {\n      \"status\": \"failed\",\n      \"message\": \"0 passed, 0 skipped, 1 failed\",\n      \"url\": \"https:\/\/external.service.dev\/terraform-plan-checker\/run-i3Df5to9ELvibKpQ\"\n    },\n    \"relationships\": {\n      \"outcomes\": {\n        \"data\": [\n          {\n            \"type\": \"task-result-outcomes\",\n            \"attributes\": {\n              \"outcome-id\": \"PRTNR-CC-TF-127\",\n              \"description\": \"ST-2942: S3 Bucket will not enforce MFA login on delete requests\",\n              \"tags\": {\n                \"Status\":  [\n                  {\n                    \"label\": \"Denied\",\n                    \"level\": \"error\"\n                  }\n                ],\n                \"Severity\": [\n                  {\n                    \"label\": \"High\",\n                    \"level\": \"error\"\n                  },\n                  {\n                    \"label\": \"Recoverable\",\n                    \"level\": \"info\"\n                  }\n                ],\n                \"Cost Centre\": [\n                  {\n                    \"label\": \"IT-OPS\"\n                  }\n                ]\n              },\n              \"body\": \"# Resolution for issue ST-2942\\n\\n## Impact\\n\\nFollow instructions in the [AWS S3 docs](https:\/\/docs.aws.amazon.com\/AmazonS3\/latest\/userguide\/MultiFactorAuthenticationDelete.html) to manually configure the MFA setting.\\n\u2014-- Payload truncated \u2014--\",\n              \"url\": \"https:\/\/external.service.dev\/result\/PRTNR-CC-TF-127\"\n            }\n          }\n        ]\n      }\n    }\n  }\n}\n```\n\n### Request Headers\n\nThe PATCH request must use the token supplied in the originating request (`access_token`) for [authentication](\/terraform\/cloud-docs\/api-docs#authentication).","site":"terraform"}
{"questions":"terraform 200 https developer mozilla org en US docs Web HTTP Status 200 201 https developer mozilla org en US docs Web HTTP Status 201 page title Run Tasks API Docs HCP Terraform Use the tasks endpoint to manage run tasks List show create update and delete run tasks and list show update delete and associate workspace run tasks using the HTTP API","answers":"---\npage_title: Run Tasks - API Docs - HCP Terraform\ndescription: >-\n  Use the `\/tasks` endpoint to manage run tasks. List, show, create, update, and delete run tasks, and list, show, update, delete and associate workspace run tasks using the HTTP API.\n---\n\n[200]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/200\n\n[201]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/201\n\n[202]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/202\n\n[204]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/204\n\n[400]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/400\n\n[401]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/401\n\n[403]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/403\n\n[404]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/404\n\n[409]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/409\n\n[412]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/412\n\n[422]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/422\n\n[429]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/429\n\n[500]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/500\n\n[504]: https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Status\/504\n\n[JSON API document]: \/terraform\/cloud-docs\/api-docs#json-api-documents\n\n[JSON API error object]: https:\/\/jsonapi.org\/format\/#error-objects\n\n[JSON API Schema document]: 
https:\/\/github.com\/hashicorp\/terraform-docs-common\/blob\/main\/website\/public\/schema\/run-tasks\/runtask-results.json\n\n# Run Tasks API\n\n[Run tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks) allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle. Run tasks are reusable configurations that you can associate with any workspace in an organization. This page lists the API endpoints for run tasks in an organization and explains how to associate run tasks with workspaces.\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/run-tasks.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nRefer to [Run Tasks Integration](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks-integration) for the API endpoints related to triggering run tasks and the expected integration response.\n\n## Required Permissions\n\nTo interact with run tasks on an organization, you need the [Manage Run Tasks permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-run-tasks). 
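\n\nAs a sketch, a request using a token with this permission to list the organization's run tasks (`$TOKEN` and `my-organization` are placeholders):\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tasks\n```\n\n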
To associate or dissociate run tasks in a workspace, you need the [Manage Workspace Run Tasks permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#general-workspace-permissions) on that particular workspace.\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n## Create a Run Task\n\n`POST \/organizations\/:organization_name\/tasks`\n\n| Parameter            | Description                                                                                                                                                                                                                     |\n| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `:organization_name` | The organization to create a run task in. The organization must already exist in HCP Terraform, and the token authenticating the API request must have [owner permission](\/terraform\/cloud-docs\/users-teams-organizations\/permissions). |\n\n[permissions-citation]: #intentionally-unused---keep-for-maintainers\n\n| Status  | Response                                | Reason                                                         |\n| ------- | --------------------------------------- | -------------------------------------------------------------- |\n| [201][] | [JSON API document][] (`type: \"tasks\"`) | Successfully created a run task                                |\n| [404][] | [JSON API error object][]               | Organization not found, or user unauthorized to perform action |\n| [422][] | [JSON API error object][]               | Malformed request body (missing attributes, wrong types, etc.) 
|\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required unless otherwise specified.\n\n| Key path                                                 | Type            | Default | Description                                                                                                                                                      |\n| -------------------------------------------------------- | --------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                                              | string          |         | Must be `\"tasks\"`.                                                                                                                                               |\n| `data.attributes.name`                                   | string          |         | The name of the task. Can include letters, numbers, `-`, and `_`.                                                                                                |\n| `data.attributes.url`                                    | string          |         | URL to send a run task payload.                                                                                                                                  |\n| `data.attributes.description`                            | string          |         | The description of the run task. Can be up to 300 characters long including spaces, letters, numbers, and special characters.                                    |\n| `data.attributes.category`                               | string          |         | Must be `\"task\"`.                                                                                                                                                
|\n| `data.attributes.hmac-key`                               | string          |         | (Optional) HMAC key to verify run task.                                                                                                                          |\n| `data.attributes.enabled`                                | bool            | true    | (Optional) Whether the task will be run.                                                                                                                         |\n| `data.attributes.global-configuration.enabled`           | bool            | false   | (Optional) Whether the task will be associated on all workspaces.                                                                                                |\n| `data.attributes.global-configuration.stages`            | array |         | (Optional) An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `\"pre_plan\"`, `\"post_plan\"`, `\"pre_apply\"`, or `\"post_apply\"`. |\n| `data.attributes.global-configuration.enforcement-level` | string          |         | (Optional) The enforcement level of the workspace task. Must be `\"advisory\"` or `\"mandatory\"`.                                                                   
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"tasks\",\n    \"attributes\": {\n      \"name\": \"example\",\n      \"url\": \"http:\/\/example.com\",\n      \"description\": \"Simple description\",\n      \"hmac-key\": \"secret\",\n      \"enabled\": true,\n      \"category\": \"task\",\n      \"global-configuration\": {\n        \"enabled\": true,\n        \"stages\": [\"pre_plan\"],\n        \"enforcement-level\": \"mandatory\"\n      }\n    }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tasks\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"task-7oD7doVTQdAFnMLV\",\n      \"type\": \"tasks\",\n      \"attributes\": {\n        \"category\": \"task\",\n        \"name\": \"my-run-task\",\n        \"url\": \"http:\/\/example.com\",\n        \"description\": \"Simple description\",\n        \"enabled\": true,\n        \"hmac-key\": null,\n        \"global-configuration\": {\n          \"enabled\": true,\n          \"stages\": [\"pre_plan\"],\n          \"enforcement-level\": \"mandatory\"\n        }\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"hashicorp\",\n            \"type\": \"organizations\"\n          }\n        },\n        \"tasks\": {\n          \"data\": []\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/tasks\/task-7oD7doVTQdAFnMLV\"\n      }\n  }\n}\n```\n\n## List Run Tasks\n\n`GET \/organizations\/:organization_name\/tasks`\n\n| Parameter            | Description                         |\n| -------------------- | ----------------------------------- |\n| `:organization_name` | The organization to list tasks for. 
|\n\n| Status  | Response                                | Reason                                                         |\n| ------- | --------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"tasks\"`) | Request was successful                                         |\n| [404][] | [JSON API error object][]               | Organization not found, or user unauthorized to perform action |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                                                                                                            |\n| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `include`      | **Optional.** Allows including related resource data. Value must be a comma-separated list containing one or more of `workspace_tasks` or `workspace_tasks.workspace`. |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.                                                                                                     |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 run tasks per page.                                                                                            
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/organizations\/my-organization\/tasks\n```\n\n### Sample Response\n\n```json\n{\n    \"data\": [\n        {\n            \"id\": \"task-7oD7doVTQdAFnMLV\",\n            \"type\": \"tasks\",\n            \"attributes\": {\n                \"category\": \"task\",\n                \"name\": \"my-task\",\n                \"url\": \"http:\/\/example.com\",\n                \"description\": \"Simple description\",\n                \"enabled\": true,\n                \"hmac-key\": null,\n                \"global-configuration\": {\n                  \"enabled\": true,\n                  \"stages\": [\"pre_plan\"],\n                  \"enforcement-level\": \"mandatory\"\n                }\n            },\n            \"relationships\": {\n                \"organization\": {\n                    \"data\": {\n                        \"id\": \"hashicorp\",\n                        \"type\": \"organizations\"\n                    }\n                },\n                \"tasks\": {\n                    \"data\": []\n                }\n            },\n            \"links\": {\n                \"self\": \"\/api\/v2\/tasks\/task-7oD7doVTQdAFnMLV\"\n            }\n        }\n    ],\n    \"links\": {\n        \"self\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n        \"first\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n        \"prev\": null,\n        \"next\": null,\n        \"last\": \"https:\/\/app.terraform.io\/api\/v2\/organizations\/hashicorp\/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n    },\n    \"meta\": {\n        \"pagination\": {\n            \"current-page\": 1,\n            \"page-size\": 20,\n            \"prev-page\": null,\n            \"next-page\": null,\n            \"total-pages\": 1,
\n            \"total-count\": 1\n        }\n    }\n}\n```\n\n## Show a Run Task\n\n`GET \/tasks\/:id`\n\n| Parameter | Description                                                                                   |\n| --------- | --------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the task to show. Use the [\"List Run Tasks\"](#list-run-tasks) endpoint to find IDs. |\n\n| Status  | Response                                | Reason                                                    |\n| ------- | --------------------------------------- | --------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"tasks\"`) | The request was successful                                |\n| [404][] | [JSON API error object][]               | Run task not found or user unauthorized to perform action |\n\n### Query Parameters\n\n| Parameter | Description                                                                                                                                                            |\n| --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `include` | **Optional.** Allows including related resource data. Value must be a comma-separated list containing one or more of `workspace_tasks` or `workspace_tasks.workspace`. 
|\n\n### Sample Request\n\n```shell\ncurl --request GET \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/tasks\/task-7oD7doVTQdAFnMLV\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"task-7oD7doVTQdAFnMLV\",\n      \"type\": \"tasks\",\n      \"attributes\": {\n        \"category\": \"task\",\n        \"name\": \"my-task\",\n        \"url\": \"http:\/\/example.com\",\n        \"description\": \"Simple description\",\n        \"enabled\": true,\n        \"hmac-key\": null\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"hashicorp\",\n            \"type\": \"organizations\"\n          }\n        },\n        \"tasks\": {\n          \"data\": [\n          {\n            \"id\": \"task-xjKZw9KaeXda61az\",\n            \"type\": \"tasks\"\n          }\n          ]\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/tasks\/task-7oD7doVTQdAFnMLV\"\n      }\n  }\n}\n```\n\n## Update a Run Task\n\n`PATCH \/tasks\/:id`\n\n| Parameter | Description                                                                                     |\n| --------- | ----------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the task to update. Use the [\"List Run Tasks\"](#list-run-tasks) endpoint to find IDs. 
|\n\n| Status  | Response                                | Reason                                                         |\n| ------- | --------------------------------------- | -------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"tasks\"`) | The request was successful                                     |\n| [404][] | [JSON API error object][]               | Run task not found or user unauthorized to perform action      |\n| [422][] | [JSON API error object][]               | Malformed request body (missing attributes, wrong types, etc.) |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required unless otherwise specified.\n\n| Key path                                                 | Type            | Default          | Description                                                                                                                                                      |\n| -------------------------------------------------------- | --------------- | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `data.type`                                              | string          |                  | Must be `\"tasks\"`.                                                                                                                                               |\n| `data.attributes.name`                                   | string          | (previous value) | The name of the run task. Can include letters, numbers, `-`, and `_`.                                                                                            |\n| `data.attributes.url`                                    | string          | (previous value) | URL to send a run task payload.               
                                                                                                                   |\n| `data.attributes.description`                            | string          | (previous value) | The description of the run task. Can be up to 300 characters long including spaces, letters, numbers, and special characters.                                    |\n| `data.attributes.category`                               | string          | (previous value) | Must be `\"task\"`.                                                                                                                                                |\n| `data.attributes.hmac-key`                               | string          | (previous value) | (Optional) HMAC key to verify run task.                                                                                                                          |\n| `data.attributes.enabled`                                | bool            | (previous value) | (Optional) Whether the task will be run.                                                                                                                         |\n| `data.attributes.global-configuration.enabled`           | bool            | (previous value) | (Optional) Whether the task will be associated on all workspaces.                                                                                                |\n| `data.attributes.global-configuration.stages`            | array           | (previous value) | (Optional) An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `\"pre_plan\"`, `\"post_plan\"`, `\"pre_apply\"`, or `\"post_apply\"`. |\n| `data.attributes.global-configuration.enforcement-level` | string          | (previous value) | (Optional) The enforcement level of the workspace task. Must be `\"advisory\"` or `\"mandatory\"`.                                                                   
|\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"tasks\",\n      \"attributes\": {\n        \"name\": \"new-example\",\n        \"url\": \"http:\/\/new-example.com\",\n        \"description\": \"New description\",\n        \"hmac-key\": \"new-secret\",\n        \"enabled\": false,\n        \"category\": \"task\",\n        \"global-configuration\": {\n          \"enabled\": false\n         }\n      }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/tasks\/task-7oD7doVTQdAFnMLV\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"task-7oD7doVTQdAFnMLV\",\n      \"type\": \"tasks\",\n      \"attributes\": {\n        \"category\": \"task\",\n        \"name\": \"new-example\",\n        \"url\": \"http:\/\/new-example.com\",\n        \"description\": \"New description\",\n        \"enabled\": false,\n        \"hmac-key\": null,\n        \"global-configuration\": {\n          \"enabled\": false,\n          \"stages\": [\"pre_plan\"],\n          \"enforcement-level\": \"mandatory\"\n        }\n      },\n      \"relationships\": {\n        \"organization\": {\n          \"data\": {\n            \"id\": \"hashicorp\",\n            \"type\": \"organizations\"\n          }\n        },\n        \"tasks\": {\n          \"data\": [\n          {\n            \"id\": \"wstask-xjKZw9KaeXda61az\",\n            \"type\": \"workspace-tasks\"\n          }\n          ]\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/tasks\/task-7oD7doVTQdAFnMLV\"\n      }\n  }\n}\n```\n\n## Delete a Run Task\n\n`DELETE \/tasks\/:id`\n\n| Parameter | Description                                                                                         |\n| --------- | 
--------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the run task to delete. Use the [\"List Run Tasks\"](#list-run-tasks) endpoint to find IDs. |\n\n| Status  | Response                  | Reason                                                     |\n| ------- | ------------------------- | ---------------------------------------------------------- |\n| [204][] | No Content                | Successfully deleted the run task                          |\n| [404][] | [JSON API error object][] | Run task not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/tasks\/task-7oD7doVTQdAFnMLV\n```\n\n## Associate a Run Task to a Workspace\n\n`POST \/workspaces\/:workspace_id\/tasks`\n\n| Parameter       | Description              |\n| --------------- | ------------------------ |\n| `:workspace_id` | The ID of the workspace. |\n\nThis endpoint associates an existing run task to a specific workspace.\n\nThis involves setting the run task enforcement level, which determines whether the run task blocks runs from completing.\n\n- Advisory run tasks can not block a run from completing. If the task fails, the run will proceed with a warning.\n\n- Mandatory run tasks block a run from completing. If the task fails (including a timeout or unexpected remote error condition), the run stops with an error.\n\nYou may also configure the run task to begin during specific [run stages](\/terraform\/cloud-docs\/run\/states). 
Run tasks use the [Post-Plan Stage](\/terraform\/cloud-docs\/run\/states#the-post-plan-stage) by default.\n\n| Status  | Response                  | Reason                                                                 |\n| ------- | ------------------------- | ---------------------------------------------------------------------- |\n| [204][] | No Content                | The request was successful                                             |\n| [404][] | [JSON API error object][] | Workspace or run task not found or user unauthorized to perform action |\n| [422][] | [JSON API error object][] | Malformed request body                                                 |\n\n### Request Body\n\nThis POST endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                            | Type   | Default         | Description                                                                                                                                                                            |\n|-------------------------------------|--------|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `data.type`                         | string |                 | Must be `\"workspace-tasks\"`.                                                                                                                                                           |\n| `data.attributes.enforcement-level` | string |                 | The enforcement level of the workspace task. Must be `\"advisory\"` or `\"mandatory\"`.                                                                                                    |\n| `data.attributes.stage`             | string | `\"post_plan\"`   | **DEPRECATED** Use `stages` instead. 
The stage in the run lifecycle when the run task should begin. Must be `\"pre_plan\"`, `\"post_plan\"`, `\"pre_apply\"`, or `\"post_apply\"`.             |\n| `data.attributes.stages`            | array  | `[\"post_plan\"]` | An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `\"pre_plan\"`, `\"post_plan\"`, `\"pre_apply\"`, or `\"post_apply\"`. |\n| `data.relationships.task.data.id`   | string |                 | The ID of the run task.                                                                                                                                                                |\n| `data.relationships.task.data.type` | string |                 | Must be `\"tasks\"`.                                                                                                                                                                     |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"workspace-tasks\",\n      \"attributes\": {\n        \"enforcement-level\": \"advisory\",\n        \"stages\": [\"post_plan\"]\n      },\n      \"relationships\": {\n        \"task\": {\n          \"data\": {\n            \"id\": \"task-7oD7doVTQdAFnMLV\",\n            \"type\": \"tasks\"\n          }\n        }\n      }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  --request POST \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-PphL7ix3yGasYGrq\/tasks\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"wstask-tBXYu8GVAFBpcmPm\",\n      \"type\": \"workspace-tasks\",\n      \"attributes\": {\n        \"enforcement-level\": \"advisory\",\n        \"stage\": \"post_plan\",\n        \"stages\": [\"post_plan\"]\n      },\n      \"relationships\": {\n        \"task\": {\n          \"data\": {\n            \"id\": 
\"task-7oD7doVTQdAFnMLV\",\n            \"type\": \"tasks\"\n          }\n        },\n        \"workspace\": {\n          \"data\": {\n            \"id\": \"ws-PphL7ix3yGasYGrq\",\n            \"type\": \"workspaces\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/workspaces\/ws-PphL7ix3yGasYGrq\/tasks\/task-tBXYu8GVAFBpcmPm\"\n      }\n  }\n}\n```\n\n## List Workspace Run Tasks\n\n`GET \/workspaces\/:workspace_id\/tasks`\n\n| Parameter       | Description                      |\n| --------------- | -------------------------------- |\n| `:workspace_id` | The workspace to list tasks for. |\n\n| Status  | Response                                | Reason                                                      |\n| ------- | --------------------------------------- | ----------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"tasks\"`) | Request was successful                                      |\n| [404][] | [JSON API error object][]               | Workspace not found, or user unauthorized to perform action |\n\n### Query Parameters\n\nThis endpoint supports pagination [with standard URL query parameters](\/terraform\/cloud-docs\/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.\n\n| Parameter      | Description                                                                 |\n| -------------- | --------------------------------------------------------------------------- |\n| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.          |\n| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 run tasks per page. 
|\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-kRsDRPtTmtcEme4t\/tasks\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": [\n  {\n    \"id\": \"wstask-tBXYu8GVAFBpcmPm\",\n      \"type\": \"workspace-tasks\",\n      \"attributes\": {\n        \"enforcement-level\": \"advisory\",\n        \"stage\": \"post_plan\",\n        \"stages\": [\"post_plan\"]\n      },\n      \"relationships\": {\n        \"task\": {\n          \"data\": {\n            \"id\": \"task-hu74ST39g566Q4m5\",\n            \"type\": \"tasks\"\n          }\n        },\n        \"workspace\": {\n          \"data\": {\n            \"id\": \"ws-kRsDRPtTmtcEme4t\",\n            \"type\": \"workspaces\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/workspaces\/ws-kRsDRPtTmtcEme4t\/tasks\/wstask-tBXYu8GVAFBpcmPm\"\n      }\n  }\n  ],\n  \"links\": {\n    \"self\": \"https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-kRsDRPtTmtcEme4t\/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"first\": \"https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-kRsDRPtTmtcEme4t\/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20\",\n    \"prev\": null,\n    \"next\": null,\n    \"last\": \"https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-kRsDRPtTmtcEme4t\/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20\"\n  },\n  \"meta\": {\n    \"pagination\": {\n      \"current-page\": 1,\n      \"page-size\": 20,\n      \"prev-page\": null,\n      \"next-page\": null,\n      \"total-pages\": 1,\n      \"total-count\": 1\n    }\n  }\n}\n```\n\n## Show Workspace Run Task\n\n`GET \/workspaces\/:workspace_id\/tasks\/:id`\n\n| Parameter | Description                                                                                                                 |\n| --------- | 
`:id`     | The ID of the workspace task to show. Use the [\"List Workspace Run Tasks\"](#list-workspace-run-tasks) endpoint to find IDs. |\n\n| Status  | Response                                | Reason                                                              |\n| ------- | --------------------------------------- | ------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"tasks\"`) | The request was successful                                          |\n| [404][] | [JSON API error object][]               | Workspace run task not found or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl --request GET \\\n  -H \"Authorization: Bearer $TOKEN\" \\\n  -H \"Content-Type: application\/vnd.api+json\" \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-kRsDRPtTmtcEme4t\/tasks\/wstask-tBXYu8GVAFBpcmPm\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"wstask-tBXYu8GVAFBpcmPm\",\n      \"type\": \"workspace-tasks\",\n      \"attributes\": {\n        \"enforcement-level\": \"advisory\",\n        \"stage\": \"post_plan\",\n        \"stages\": [\"post_plan\"]\n      },\n      \"relationships\": {\n        \"task\": {\n          \"data\": {\n            \"id\": \"task-hu74ST39g566Q4m5\",\n            \"type\": \"tasks\"\n          }\n        },\n        \"workspace\": {\n          \"data\": {\n            \"id\": \"ws-kRsDRPtTmtcEme4t\",\n            \"type\": \"workspaces\"\n          }\n        }\n      },\n      \"links\": {\n        \"self\": \"\/api\/v2\/workspaces\/ws-kRsDRPtTmtcEme4t\/tasks\/wstask-tBXYu8GVAFBpcmPm\"\n      }\n  }\n}\n```\n\n## Update Workspace Run Task\n\n`PATCH \/workspaces\/:workspace_id\/tasks\/:id`\n\n| Parameter | Description                                                                                                         |\n| --------- | 
------------------------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the task to update. Use the [\"List Workspace Run Tasks\"](#list-workspace-run-tasks) endpoint to find IDs. |\n\n| Status  | Response                                | Reason                                                              |\n| ------- | --------------------------------------- | ------------------------------------------------------------------- |\n| [200][] | [JSON API document][] (`type: \"tasks\"`) | The request was successful                                          |\n| [404][] | [JSON API error object][]               | Workspace run task not found or user unauthorized to perform action |\n| [422][] | [JSON API error object][]               | Malformed request body (missing attributes, wrong types, etc.)      |\n\n### Request Body\n\nThis PATCH endpoint requires a JSON object with the following properties as a request payload.\n\nProperties without a default value are required.\n\n| Key path                            | Type   | Default          | Description                                                                                                                                                                            |\n|-------------------------------------|--------|------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `data.type`                         | string | (previous value) | Must be `\"workspace-tasks\"`.                                                                                                                                                           |\n| `data.attributes.enforcement-level` | string | (previous value) | The enforcement level of the workspace run task. Must be `\"advisory\"` or `\"mandatory\"`.                      
                                                                          |\n| `data.attributes.stage`             | string | (previous value) | **DEPRECATED** Use `stages` instead. The stage in the run lifecycle when the run task should begin. Must be `\"pre_plan\"`, `\"post_plan\"`, `\"pre_apply\"`, or `\"post_apply\"`.             |\n| `data.attributes.stages`            | array  | (previous value) | An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `\"pre_plan\"`, `\"post_plan\"`, `\"pre_apply\"`, or `\"post_apply\"`. |\n\n### Sample Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"workspace-tasks\",\n      \"attributes\": {\n        \"enforcement-level\": \"mandatory\",\n        \"stages\": [\"post_plan\"]\n      }\n  }\n}\n```\n\n#### Deprecated Payload\n\n```json\n{\n  \"data\": {\n    \"type\": \"workspace-tasks\",\n      \"attributes\": {\n        \"enforcement-level\": \"mandatory\",\n        \"stage\": \"post_plan\"\n      }\n  }\n}\n```\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request PATCH \\\n  --data @payload.json \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-kRsDRPtTmtcEme4t\/tasks\/wstask-tBXYu8GVAFBpcmPm\n```\n\n### Sample Response\n\n```json\n{\n  \"data\": {\n    \"id\": \"wstask-tBXYu8GVAFBpcmPm\",\n      \"type\": \"workspace-tasks\",\n      \"attributes\": {\n        \"enforcement-level\": \"mandatory\",\n        \"stage\": \"post_plan\",\n        \"stages\": [\"post_plan\"]\n      },\n      \"relationships\": {\n        \"task\": {\n          \"data\": {\n            \"id\": \"task-hu74ST39g566Q4m5\",\n            \"type\": \"tasks\"\n          }\n        },\n        \"workspace\": {\n          \"data\": {\n            \"id\": \"ws-kRsDRPtTmtcEme4t\",\n            \"type\": \"workspaces\"\n          }\n        }\n      },\n      \"links\": {
\n        \"self\": \"\/api\/v2\/workspaces\/ws-kRsDRPtTmtcEme4t\/tasks\/wstask-tBXYu8GVAFBpcmPm\"\n      }\n  }\n}\n```\n\n## Delete Workspace Run Task\n\n`DELETE \/workspaces\/:workspace_id\/tasks\/:id`\n\n| Parameter | Description                                                                                                                       |\n| --------- | --------------------------------------------------------------------------------------------------------------------------------- |\n| `:id`     | The ID of the workspace run task to delete. Use the [\"List Workspace Run Tasks\"](#list-workspace-run-tasks) endpoint to find IDs. |\n\n| Status  | Response                  | Reason                                                               |\n| ------- | ------------------------- | -------------------------------------------------------------------- |\n| [204][] | No Content                | Successfully deleted the workspace run task                          |\n| [404][] | [JSON API error object][] | Workspace run task not found, or user unauthorized to perform action |\n\n### Sample Request\n\n```shell\ncurl \\\n  --header \"Authorization: Bearer $TOKEN\" \\\n  --header \"Content-Type: application\/vnd.api+json\" \\\n  --request DELETE \\\n  https:\/\/app.terraform.io\/api\/v2\/workspaces\/ws-kRsDRPtTmtcEme4t\/tasks\/wstask-tBXYu8GVAFBpcmPm\n```
HTTP Status 400   401   https   developer mozilla org en US docs Web HTTP Status 401   403   https   developer mozilla org en US docs Web HTTP Status 403   404   https   developer mozilla org en US docs Web HTTP Status 404   409   https   developer mozilla org en US docs Web HTTP Status 409   412   https   developer mozilla org en US docs Web HTTP Status 412   422   https   developer mozilla org en US docs Web HTTP Status 422   429   https   developer mozilla org en US docs Web HTTP Status 429   500   https   developer mozilla org en US docs Web HTTP Status 500   504   https   developer mozilla org en US docs Web HTTP Status 504   JSON API document    terraform cloud docs api docs json api documents   JSON API error object   https   jsonapi org format  error objects   JSON API Schema document   https   github com hashicorp terraform docs common blob main website public schema run tasks runtask results json    Run Tasks API   Run tasks   terraform cloud docs workspaces settings run tasks  allow HCP Terraform to interact with external systems at specific points in the HCP Terraform run lifecycle  Run tasks are reusable configurations that you can associate to any workspace in an organization  This page lists the API endpoints for run tasks in an organization and explains how to associate run tasks to workspaces        BEGIN  TFC only name pnp callout      include  tfc package callouts run tasks mdx       END  TFC only name pnp callout      Refer to  run tasks Integration   terraform cloud docs api docs run tasks run tasks integration  for the API endpoints related triggering run tasks and the expected integration response      Required Permissions  To interact with run tasks on an organization  you need the  Manage Run Tasks permission   terraform cloud docs users teams organizations permissions manage run tasks   To associate or dissociate run tasks in a workspace  you need the  Manage Workspace Run Tasks permission   terraform cloud docs users teams organizations 
permissions general workspace permissions  on that particular workspace    permissions citation    intentionally unused   keep for maintainers     Create a Run Task   POST  organizations  organization name tasks     Parameter              Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      organization name    The organization to create a run task in  The organization must already exist in HCP Terraform  and the token authenticating the API request must have  owner permission   terraform cloud docs users teams organizations permissions       permissions citation    intentionally unused   keep for maintainers    Status    Response                                  Reason                                                                                                                                                                                     201       JSON API document      type   tasks      Successfully created a run task                                     404       JSON API error object                    Organization not found  or user unauthorized to perform action      422       JSON API error object                    Malformed request body  missing attributes  wrong types  etc          Request Body  This POST endpoint requires a JSON object with the following properties as a request payload   Properties without a default value are required unless otherwise specified     Key path                                                   Type              Default   Description                                                                    
| Key path                                                 | Type   | Default | Description                                                                                                                                                                                 |
| -------------------------------------------------------- | ------ | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type`                                              | string |         | Must be `"tasks"`.                                                                                                                                                                           |
| `data.attributes.name`                                   | string |         | The name of the task. Can include letters, numbers, `-`, and `_`.                                                                                                                            |
| `data.attributes.url`                                    | string |         | URL to send a run task payload.                                                                                                                                                              |
| `data.attributes.description`                            | string |         | The description of the run task. Can be up to 300 characters long including spaces, letters, numbers, and special characters.                                                                |
| `data.attributes.category`                               | string |         | Must be `"task"`.                                                                                                                                                                            |
| `data.attributes.hmac-key`                               | string |         | (Optional) HMAC key to verify run task.                                                                                                                                                      |
| `data.attributes.enabled`                                | bool   | true    | (Optional) Whether the task will be run.                                                                                                                                                     |
| `data.attributes.global-configuration.enabled`           | bool   | false   | (Optional) Whether the task will be associated on all workspaces.                                                                                                                            |
| `data.attributes.global-configuration.stages`            | array  |         | (Optional) An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.attributes.global-configuration.enforcement-level` | string |         | (Optional) The enforcement level of the workspace task. Must be `"advisory"` or `"mandatory"`.                                                                                               |

### Sample Payload

```json
{
  "data": {
    "type": "tasks",
    "attributes": {
      "name": "example",
      "url": "http://example.com",
      "description": "Simple description",
      "hmac-key": "secret",
      "enabled": true,
      "category": "task",
      "global-configuration": {
        "enabled": true,
        "stages": ["pre_plan"],
        "enforcement-level": "mandatory"
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/organizations/my-organization/tasks
```

### Sample Response

```json
{
  "data": {
    "id": "task-7oD7doVTQdAFnMLV",
    "type": "tasks",
    "attributes": {
      "category": "task",
      "name": "my-run-task",
      "url": "http://example.com",
      "description": "Simple description",
      "enabled": true,
      "hmac-key": null,
      "global-configuration": {
        "enabled": true,
        "stages": ["pre_plan"],
        "enforcement-level": "mandatory"
      }
    },
    "relationships": {
      "organization": {
        "data": {
          "id": "hashicorp",
          "type": "organizations"
        }
      },
      "tasks": {
        "data": []
      }
    },
    "links": {
      "self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
    }
  }
}
```

## List Run Tasks

`GET /organizations/:organization_name/tasks`

| Parameter            | Description                        |
| -------------------- | ---------------------------------- |
| `:organization_name` | The organization to list tasks for. |

| Status  | Response                                | Reason                                                         |
| ------- | --------------------------------------- | -------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | Request was successful                                         |
| [404][] | [JSON API error object][]               | Organization not found, or user unauthorized to perform action |

### Query Parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter      | Description                                                                                                                                                             |
| -------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `include`      | **Optional.** Allows including related resource data. Value must be a comma-separated list containing one or more of `workspace_tasks` or `workspace_tasks.workspace`. |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.                                                                                                      |
| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 run tasks per page.                                                                                               |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/organizations/my-organization/tasks
```

### Sample Response

```json
{
  "data": [
    {
      "id": "task-7oD7doVTQdAFnMLV",
      "type": "tasks",
      "attributes": {
        "category": "task",
        "name": "my-task",
        "url": "http://example.com",
        "description": "Simple description",
        "enabled": true,
        "hmac-key": null,
        "global-configuration": {
          "enabled": true,
          "stages": ["pre_plan"],
          "enforcement-level": "mandatory"
        }
      },
      "relationships": {
        "organization": {
          "data": {
            "id": "hashicorp",
            "type": "organizations"
          }
        },
        "tasks": {
          "data": []
        }
      },
      "links": {
        "self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
      }
    }
  ],
  "links": {
    "self": "https://app.terraform.io/api/v2/organizations/hashicorp/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "first": "https://app.terraform.io/api/v2/organizations/hashicorp/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "prev": null,
    "next": null,
    "last": "https://app.terraform.io/api/v2/organizations/hashicorp/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20"
  },
  "meta": {
    "pagination": {
      "current-page": 1,
      "page-size": 20,
      "prev-page": null,
      "next-page": null,
      "total-pages": 1,
      "total-count": 1
    }
  }
}
```

## Show a Run Task

`GET /tasks/:id`

| Parameter | Description                                                                                   |
| --------- | ---------------------------------------------------------------------------------------------- |
| `:id`     | The ID of the task to show. Use the ["List Run Tasks"](#list-run-tasks) endpoint to find IDs. |

| Status  | Response                                | Reason                                                    |
| ------- | --------------------------------------- | --------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful                                |
| [404][] | [JSON API error object][]               | Run task not found or user unauthorized to perform action |

| Parameter | Description                                                                                                                                                            |
| --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `include` | **Optional.** Allows including related resource data. Value must be a comma-separated list containing one or more of `workspace_tasks` or `workspace_tasks.workspace`. |

### Sample Request

```shell
curl --request GET \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/tasks/task-7oD7doVTQdAFnMLV
```

### Sample Response

```json
{
  "data": {
    "id": "task-7oD7doVTQdAFnMLV",
    "type": "tasks",
    "attributes": {
      "category": "task",
      "name": "my-task",
      "url": "http://example.com",
      "description": "Simple description",
      "enabled": true,
      "hmac-key": null
    },
    "relationships": {
      "organization": {
        "data": {
          "id": "hashicorp",
          "type": "organizations"
        }
      },
      "tasks": {
        "data": [
          {
            "id": "task-xjKZw9KaeXda61az",
            "type": "tasks"
          }
        ]
      }
    },
    "links": {
      "self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
    }
  }
}
```

## Update a Run Task

`PATCH /tasks/:id`

| Parameter | Description                                                                                     |
| --------- | ------------------------------------------------------------------------------------------------ |
| `:id`     | The ID of the task to update. Use the ["List Run Tasks"](#list-run-tasks) endpoint to find IDs. |

| Status  | Response                                | Reason                                                         |
| ------- | --------------------------------------- | -------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful                                     |
| [404][] | [JSON API error object][]               | Run task not found or user unauthorized to perform action      |
| [422][] | [JSON API error object][]               | Malformed request body (missing attributes, wrong types, etc.) |

### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required unless otherwise specified.
| Key path                                                 | Type   | Default          | Description                                                                                                                   |
| -------------------------------------------------------- | ------ | ---------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `data.type`                                              | string |                  | Must be `"tasks"`.                                                                                                             |
| `data.attributes.name`                                   | string | (previous value) | The name of the run task. Can include letters, numbers, `-`, and `_`.                                                          |
| `data.attributes.url`                                    | string | (previous value) | URL to send a run task payload.                                                                                                |
| `data.attributes.description`                            | string |                  | The description of the run task. Can be up to 300 characters long including spaces, letters, numbers, and special characters.  |
| `data.attributes.category`                               | string | (previous value) | Must be `"task"`.                                                                                                              |
| `data.attributes.hmac-key`                               | string | (previous value) | (Optional) HMAC key to verify run task.                                                                                        |
| `data.attributes.enabled`                                | bool   | (previous value) | (Optional) Whether the task will be run.                                                                                       |
| `data.attributes.global-configuration.enabled`           | bool   | (previous value) | (Optional) Whether the task will be associated on all workspaces.                                                              |
| `data.attributes.global-configuration.stages`            | array  | (previous value) | (Optional) An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.attributes.global-configuration.enforcement-level` | string | (previous value) | (Optional) The enforcement level of the workspace task. Must be `"advisory"` or `"mandatory"`.                                 |

### Sample Payload

```json
{
  "data": {
    "type": "tasks",
    "attributes": {
      "name": "new-example",
      "url": "http://new-example.com",
      "description": "New description",
      "hmac-key": "new-secret",
      "enabled": false,
      "category": "task",
      "global-configuration": {
        "enabled": false
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/tasks/task-7oD7doVTQdAFnMLV
```

### Sample Response

```json
{
  "data": {
    "id": "task-7oD7doVTQdAFnMLV",
    "type": "tasks",
    "attributes": {
      "category": "task",
      "name": "new-example",
      "url": "http://new-example.com",
      "description": "New description",
      "enabled": false,
      "hmac-key": null,
      "global-configuration": {
        "enabled": false,
        "stages": ["pre_plan"],
        "enforcement-level": "mandatory"
      }
    },
    "relationships": {
      "organization": {
        "data": {
          "id": "hashicorp",
          "type": "organizations"
        }
      },
      "tasks": {
        "data": [
          {
            "id": "wstask-xjKZw9KaeXda61az",
            "type": "workspace-tasks"
          }
        ]
      }
    },
    "links": {
      "self": "/api/v2/tasks/task-7oD7doVTQdAFnMLV"
    }
  }
}
```

## Delete a Run Task

`DELETE /tasks/:id`

| Parameter | Description                                                                                         |
| --------- | ---------------------------------------------------------------------------------------------------- |
| `:id`     | The ID of the run task to delete. Use the ["List Run Tasks"](#list-run-tasks) endpoint to find IDs. |

| Status  | Response                  | Reason                                                     |
| ------- | ------------------------- | ---------------------------------------------------------- |
| [204][] | No Content                | Successfully deleted the run task                          |
| [404][] | [JSON API error object][] | Run task not found, or user unauthorized to perform action |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/tasks/task-7oD7doVTQdAFnMLV
```

## Associate a Run Task to a Workspace

`POST /workspaces/:workspace_id/tasks`

| Parameter       | Description              |
| --------------- | ------------------------ |
| `:workspace_id` | The ID of the workspace. |

This endpoint associates an existing run task to a specific workspace.

This involves setting the run task enforcement level, which determines whether the run task blocks runs from completing.

- Advisory run tasks can not block a run from completing. If the task fails, the run will proceed with a warning.
- Mandatory run tasks block a run from completing. If the task fails (including a timeout or unexpected remote error condition), the run stops with an error.
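As a minimal sketch, the enforcement rules above can be modeled as a small decision function. This is illustrative only: `run_outcome` and its return strings are hypothetical, not fields of the API.

```python
def run_outcome(task_passed: bool, enforcement_level: str) -> str:
    """Illustrative model of how a run task result affects a run."""
    if enforcement_level not in ("advisory", "mandatory"):
        raise ValueError("enforcement_level must be 'advisory' or 'mandatory'")
    if task_passed:
        return "proceed"
    if enforcement_level == "advisory":
        # Advisory run tasks can not block a run; it proceeds with a warning.
        return "proceed_with_warning"
    # Mandatory run tasks block the run; it stops with an error.
    return "errored"
```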
You may also configure the run task to begin during specific [run stages](/terraform/cloud-docs/run/states). Run tasks use the [Post-Plan Stage](/terraform/cloud-docs/run/states#the-post-plan-stage) by default.

| Status  | Response                  | Reason                                                                 |
| ------- | ------------------------- | ---------------------------------------------------------------------- |
| [204][] | No Content                | The request was successful                                             |
| [404][] | [JSON API error object][] | Workspace or run task not found or user unauthorized to perform action |
| [422][] | [JSON API error object][] | Malformed request body                                                 |

### Request Body

This POST endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                            | Type   | Default         | Description                                                                                                                                                                        |
| ----------------------------------- | ------ | --------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data.type`                         | string |                 | Must be `"workspace-tasks"`.                                                                                                                                                         |
| `data.attributes.enforcement-level` | string |                 | The enforcement level of the workspace task. Must be `"advisory"` or `"mandatory"`.                                                                                                  |
| `data.attributes.stage`             | string | `"post_plan"`   | **DEPRECATED** Use `stages` instead. The stage in the run lifecycle when the run task should begin. Must be `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`.            |
| `data.attributes.stages`            | array  | `["post_plan"]` | An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |
| `data.relationships.task.data.id`   | string |                 | The ID of the run task.                                                                                                                                                              |
| `data.relationships.task.data.type` | string |                 | Must be `"tasks"`.                                                                                                                                                                   |

### Sample Payload

```json
{
  "data": {
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "advisory",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": {
          "id": "task-7oD7doVTQdAFnMLV",
          "type": "tasks"
        }
      }
    }
  }
}
```

### Sample Request

```shell
curl \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  --request POST \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-PphL7ix3yGasYGrq/tasks
```

### Sample Response

```json
{
  "data": {
    "id": "wstask-tBXYu8GVAFBpcmPm",
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "advisory",
      "stage": "post_plan",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": {
          "id": "task-7oD7doVTQdAFnMLV",
          "type": "tasks"
        }
      },
      "workspace": {
        "data": {
          "id": "ws-PphL7ix3yGasYGrq",
          "type": "workspaces"
        }
      }
    },
    "links": {
      "self": "/api/v2/workspaces/ws-PphL7ix3yGasYGrq/tasks/task-tBXYu8GVAFBpcmPm"
    }
  }
}
```

## List Workspace Run Tasks

`GET /workspaces/:workspace_id/tasks`

| Parameter       | Description                     |
| --------------- | ------------------------------- |
| `:workspace_id` | The workspace to list tasks for. |

| Status  | Response                                | Reason                                                      |
| ------- | --------------------------------------- | ----------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | Request was successful                                      |
| [404][] | [JSON API error object][]               | Workspace not found, or user unauthorized to perform action |

### Query Parameters

This endpoint supports pagination [with standard URL query parameters](/terraform/cloud-docs/api-docs#query-parameters). Remember to percent-encode `[` as `%5B` and `]` as `%5D` if your tooling doesn't automatically encode URLs.

| Parameter      | Description                                                               |
| -------------- | ------------------------------------------------------------------------- |
| `page[number]` | **Optional.** If omitted, the endpoint will return the first page.        |
| `page[size]`   | **Optional.** If omitted, the endpoint will return 20 run tasks per page. |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks
```

### Sample Response

```json
{
  "data": [
    {
      "id": "wstask-tBXYu8GVAFBpcmPm",
      "type": "workspace-tasks",
      "attributes": {
        "enforcement-level": "advisory",
        "stage": "post_plan",
        "stages": ["post_plan"]
      },
      "relationships": {
        "task": {
          "data": {
            "id": "task-hu74ST39g566Q4m5",
            "type": "tasks"
          }
        },
        "workspace": {
          "data": {
            "id": "ws-kRsDRPtTmtcEme4t",
            "type": "workspaces"
          }
        }
      },
      "links": {
        "self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/task-tBXYu8GVAFBpcmPm"
      }
    }
  ],
  "links": {
    "self": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "first": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20",
    "prev": null,
    "next": null,
    "last": "https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks?page%5Bnumber%5D=1&page%5Bsize%5D=20"
  },
  "meta": {
    "pagination": {
      "current-page": 1,
      "page-size": 20,
      "prev-page": null,
      "next-page": null,
      "total-pages": 1,
      "total-count": 1
    }
  }
}
```

## Show Workspace Run Task

`GET /workspaces/:workspace_id/tasks/:id`

| Parameter | Description                                                                                                                   |
| --------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `:id`     | The ID of the workspace task to show. Use the ["List Workspace Run Tasks"](#list-workspace-run-tasks) endpoint to find IDs.   |

| Status  | Response                                | Reason                                                              |
| ------- | --------------------------------------- | ------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful                                          |
| [404][] | [JSON API error object][]               | Workspace run task not found or user unauthorized to perform action |
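The `wstask-` IDs these per-task endpoints take come from the "List Workspace Run Tasks" response above. As a minimal sketch (the abbreviated response body here is hypothetical), the IDs can be collected from a saved JSON:API list document:

```python
import json

def workspace_task_ids(response_body: str) -> list:
    # Each element of "data" in a JSON:API list document is a resource
    # object; its "id" is what the show/update/delete endpoints expect.
    return [item["id"] for item in json.loads(response_body)["data"]]

# Abbreviated body from GET /workspaces/:workspace_id/tasks
body = '{"data": [{"id": "wstask-tBXYu8GVAFBpcmPm", "type": "workspace-tasks"}]}'
```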
### Sample Request

```shell
curl --request GET \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
```

### Sample Response

```json
{
  "data": {
    "id": "wstask-tBXYu8GVAFBpcmPm",
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "advisory",
      "stage": "post_plan",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": {
          "id": "task-hu74ST39g566Q4m5",
          "type": "tasks"
        }
      },
      "workspace": {
        "data": {
          "id": "ws-kRsDRPtTmtcEme4t",
          "type": "workspaces"
        }
      }
    },
    "links": {
      "self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm"
    }
  }
}
```

## Update Workspace Run Task

`PATCH /workspaces/:workspace_id/tasks/:id`

| Parameter | Description                                                                                                           |
| --------- | ---------------------------------------------------------------------------------------------------------------------- |
| `:id`     | The ID of the task to update. Use the ["List Workspace Run Tasks"](#list-workspace-run-tasks) endpoint to find IDs.   |

| Status  | Response                                | Reason                                                              |
| ------- | --------------------------------------- | ------------------------------------------------------------------- |
| [200][] | [JSON API document][] (`type: "tasks"`) | The request was successful                                          |
| [404][] | [JSON API error object][]               | Workspace run task not found or user unauthorized to perform action |
| [422][] | [JSON API error object][]               | Malformed request body (missing attributes, wrong types, etc.)      |
### Request Body

This PATCH endpoint requires a JSON object with the following properties as a request payload.

Properties without a default value are required.

| Key path                            | Type   | Default          | Description                                                                                                                                                                          |
| ----------------------------------- | ------ | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `data.type`                         | string | (previous value) | Must be `"workspace-tasks"`.                                                                                                                                                           |
| `data.attributes.enforcement-level` | string | (previous value) | The enforcement level of the workspace run task. Must be `"advisory"` or `"mandatory"`.                                                                                                |
| `data.attributes.stage`             | string | (previous value) | **DEPRECATED** Use `stages` instead. The stage in the run lifecycle when the run task should begin. Must be `"pre_plan"` or `"post_plan"`.                                              |
| `data.attributes.stages`            | array  | (previous value) | An array of strings representing the stages of the run lifecycle when the run task should begin. Must be one or more of `"pre_plan"`, `"post_plan"`, `"pre_apply"`, or `"post_apply"`. |

### Sample Payload

```json
{
  "data": {
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "mandatory",
      "stages": ["post_plan"]
    }
  }
}
```

### Deprecated Payload

```json
{
  "data": {
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "mandatory",
      "stage": "post_plan"
    }
  }
}
```

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request PATCH \
  --data @payload.json \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
```

### Sample Response

```json
{
  "data": {
    "id": "wstask-tBXYu8GVAFBpcmPm",
    "type": "workspace-tasks",
    "attributes": {
      "enforcement-level": "mandatory",
      "stage": "post_plan",
      "stages": ["post_plan"]
    },
    "relationships": {
      "task": {
        "data": {
          "id": "task-hu74ST39g566Q4m5",
          "type": "tasks"
        }
      },
      "workspace": {
        "data": {
          "id": "ws-kRsDRPtTmtcEme4t",
          "type": "workspaces"
        }
      }
    },
    "links": {
      "self": "/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/task-tBXYu8GVAFBpcmPm"
    }
  }
}
```

## Delete Workspace Run Task

`DELETE /workspaces/:workspace_id/tasks/:id`

| Parameter | Description                                                                                                                       |
| --------- | ---------------------------------------------------------------------------------------------------------------------------------- |
| `:id`     | The ID of the Workspace run task to delete. Use the ["List Workspace Run Tasks"](#list-workspace-run-tasks) endpoint to find IDs. |

| Status  | Response                  | Reason                                                               |
| ------- | ------------------------- | -------------------------------------------------------------------- |
| [204][] | No Content                | Successfully deleted the workspace run task                          |
| [404][] | [JSON API error object][] | Workspace run task not found, or user unauthorized to perform action |

### Sample Request

```shell
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request DELETE \
  https://app.terraform.io/api/v2/workspaces/ws-kRsDRPtTmtcEme4t/tasks/wstask-tBXYu8GVAFBpcmPm
```
shell curl       header  Authorization  Bearer  TOKEN        header  Content Type  application vnd api json        request DELETE     https   app terraform io api v2 workspaces ws kRsDRPtTmtcEme4t tasks wstask tBXYu8GVAFBpcmPm    "}
{"questions":"terraform How HCP Terraform and Terraform Enterprise help teams use Terraform to HCP Terraform Plans and Features page title Plans and Features HCP Terraform tfc only true manage infrastructure at scale","answers":"---\npage_title: Plans and Features - HCP Terraform\ndescription: >-\n  How HCP Terraform and Terraform Enterprise help teams use Terraform to\n  manage infrastructure at scale.\ntfc_only: true\n---\n\n# HCP Terraform Plans and Features\n\n[cli]: \/terraform\/cli\n\n[speculative plans]: \/terraform\/cloud-docs\/run\/remote-operations#speculative-plans\n\n[remote_state]: \/terraform\/language\/state\/remote-state-data\n\n[outputs]: \/terraform\/language\/values\/outputs\n\n[modules]: \/terraform\/language\/modules\/develop\n\n[terraform enterprise]: \/terraform\/enterprise\n\nHCP Terraform is a platform that performs Terraform runs to provision infrastructure, either on demand or in response to various events. Unlike a general-purpose continuous integration (CI) system, it is deeply integrated with Terraform's workflows and data, which allows it to make Terraform significantly more convenient and powerful.\n\n> **Hands On:** Try our [What is HCP Terraform - Intro and Sign Up](\/terraform\/tutorials\/cloud-get-started\/cloud-sign-up) tutorial.\n\n## Free and Paid Plans\n\nHCP Terraform is a commercial SaaS product developed by HashiCorp. Many of its features are free for small teams, including remote state storage, remote runs, and VCS connections. We also offer paid plans for larger teams that include additional collaboration and governance features.\n\nHCP Terraform manages plans and billing at the [organization level](\/terraform\/cloud-docs\/users-teams-organizations\/organizations). Each HCP Terraform user can belong to multiple organizations, which might subscribe to different billing plans. 
The set of features available depends on which organization you are currently working in.\n\nRefer to [Terraform pricing](https:\/\/www.hashicorp.com\/products\/terraform\/pricing) for details about available plans and their features.\n\n### Free Organizations\n\nSmall teams can use most of HCP Terraform's features for free, including remote Terraform execution, VCS integration, the private module registry, single sign-on, policy enforcement, run tasks, and more.\n\nFree organizations are limited to 500 managed resources. Refer to [What is a managed resource](\/terraform\/cloud-docs\/overview\/estimate-hcp-terraform-cost#what-is-a-managed-resource) for more details.\n\n### Paid Features\n\nSome of HCP Terraform's features are limited to particular paid upgrade plans.\n\nEach higher paid upgrade plan is a strict superset of any lower plan \u2014\u00a0for example, the **Plus** edition includes all of the features of the **Standard** edition. Paid feature callouts in the documentation indicate the _lowest_ edition at which the feature is available, but any higher plans also include that feature.\n\nTerraform Enterprise generally includes all of HCP Terraform's paid features, plus additional features geared toward large enterprises. However, some features are implemented differently due to the differences between self-hosted and SaaS environments, and some features might be absent due to being impractical or irrelevant in the types of organizations that need Terraform Enterprise. Cloud-only or Enterprise-only features are clearly indicated in documentation.\n\n### Changing Your Payment Plan\n\n[Organization owners](\/terraform\/cloud-docs\/users-teams-organizations\/teams#the-owners-team) can manage an organization's billing plan. The plan and billing settings include an integrated storefront, and you can subscribe to paid plans with a credit card.\n\nTo change an organization's plan:\n\n1. Click **Settings** in the navigation bar.\n1. Click **Plan and billing**. 
The **Plan and Billing** page appears showing your current plan and any available invoices.\n1. Click **Change plan**.\n1. Select a plan, enter your billing information, and click **Update plan**.\n\n## Terraform Workflow\n\nHCP Terraform runs [Terraform CLI][cli] to provision infrastructure.\n\nIn its default state, Terraform CLI uses a local workflow, performing operations on the workstation where it is invoked and storing state in a local directory.\n\nSince teams must share responsibilities and awareness to avoid single points of failure, working with Terraform in a team requires a remote workflow. At minimum, state must be shared; ideally, Terraform should execute in a consistent remote environment.\n\nHCP Terraform offers a team-oriented remote Terraform workflow, designed to be comfortable for existing Terraform users and easily learned by new users. The foundations of this workflow are remote Terraform execution, a workspace-based organizational model, version control integration, command-line integration, remote state management with cross-workspace data sharing, and a private Terraform module registry.\n\n### Remote Terraform Execution\n\nHCP Terraform runs Terraform on disposable virtual machines in its own cloud infrastructure by default. You can leverage [HCP Terraform agents](\/terraform\/cloud-docs\/agents) to run Terraform on your own isolated, private, or on-premises infrastructure. Remote Terraform execution is sometimes referred to as \"remote operations.\"\n\nRemote execution helps provide consistency and visibility for critical provisioning operations. 
It also enables powerful features like Sentinel policy enforcement, cost estimation, notifications, version control integration, and more.\n\n- More info: [Terraform Runs and Remote Operations](\/terraform\/cloud-docs\/run\/remote-operations)\n\n#### Support for Local Execution\n\n[execution_mode]: \/terraform\/cloud-docs\/workspaces\/settings#execution-mode\n\nRemote execution can be disabled on specific workspaces with the [\"Execution Mode\" setting][execution_mode]. The workspace will still host remote state, and Terraform CLI can use that state for local runs via the [HCP Terraform CLI integration](\/terraform\/cli\/cloud).\n\n## Organize Infrastructure with Projects and Workspaces\n\nTerraform's local workflow manages a collection of infrastructure with a persistent working directory, which contains configuration, state data, and variables. You can use separate directories to organize infrastructure resources into meaningful groups, and Terraform will use the configuration in the directory you invoke Terraform commands from.\n\nHCP Terraform organizes infrastructure into projects and workspaces instead of directories. Each workspace contains everything necessary to manage a given collection of infrastructure, and Terraform uses that content when it runs in the context of that workspace.\n\nYou can use projects to organize your workspaces into groups. Organizations with HCP Terraform [Standard](https:\/\/www.hashicorp.com\/products\/terraform\/pricing) Edition can assign teams permissions for specific projects. \n\nThis lets you grant access to collections of workspaces instead of using workspace-specific or organization-wide permissions, making it easier to limit access to only the resources required for a team member's job function. \n\nRefer to [Workspaces](\/terraform\/cloud-docs\/workspaces) and [Organizing Workspaces with Projects](\/terraform\/cloud-docs\/workspaces\/projects) for more details. 
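Projects and workspaces can themselves be managed as code. A minimal sketch using the `hashicorp/tfe` provider, assuming an API token with permission to manage projects and workspaces (the organization, project, and workspace names here are hypothetical):

```hcl
provider "tfe" {}

# Hypothetical project grouping related workspaces.
resource "tfe_project" "networking" {
  organization = "example-org"
  name         = "networking"
}

# Hypothetical workspace assigned to that project.
resource "tfe_workspace" "vpc_prod" {
  organization = "example-org"
  name         = "vpc-prod"
  project_id   = tfe_project.networking.id
}
```

Granting a team permissions at the project level then applies to every workspace the project contains.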
\n\n### Remote State Management, Data Sharing, and Run Triggers\n\nHCP Terraform acts as a remote backend for your Terraform state. State storage is tied to workspaces, which helps keep state associated with the configuration that created it.\n\nHCP Terraform also enables you to share information between workspaces with root-level [outputs][]. Separate groups of infrastructure resources often need to share a small amount of information, and workspace outputs are an ideal interface for these dependencies.\n\nWorkspaces that use remote operations can use [`terraform_remote_state` data sources][remote_state] to access other workspaces' outputs, subject to per-workspace access controls. And since new information from one workspace might change the desired infrastructure state in another, you can create workspace-to-workspace run triggers to ensure downstream workspaces react when their dependencies change.\n\n- More info: [Terraform State in HCP Terraform](\/terraform\/cloud-docs\/workspaces\/state), [Run Triggers](\/terraform\/cloud-docs\/workspaces\/settings\/run-triggers)\n\n### Version Control Integration\n\nLike other kinds of code, infrastructure-as-code belongs in version control, so HCP Terraform is designed to work directly with your version control system (VCS) provider.\n\nEach workspace can be linked to a VCS repository that contains its Terraform configuration, optionally specifying a branch and subdirectory. 
HCP Terraform automatically retrieves configuration content from the repository, and will also watch the repository for changes:\n\n- When new commits are merged, linked workspaces automatically run Terraform plans with the new code.\n- When pull requests are opened, linked workspaces run speculative plans with the proposed code changes and post the results as a pull request check; reviewers can see at a glance whether the plan was successful, and can click through to view the proposed changes in detail.\n\nVCS integration is powerful, but optional; if you use an unsupported VCS or want to preserve an existing validation and deployment pipeline, you can use the API or Terraform CLI to upload new configuration versions. You'll still get the benefits of remote execution and HCP Terraform's other features.\n\n- More info: [VCS-driven Runs](\/terraform\/cloud-docs\/run\/ui)\n- More info: [Supported VCS Providers](\/terraform\/cloud-docs\/vcs#supported-vcs-providers)\n\n### Command Line Integration\n\nRemote execution offers major benefits to a team, but local execution offers major benefits to individual developers; for example, most Terraform users run `terraform plan` to interactively check their work while editing configurations.\n\nHCP Terraform offers the best of both worlds, allowing you to run remote plans from your local command line. Configure the [HCP Terraform CLI integration](\/terraform\/cli\/cloud), and the `terraform plan` command will start a remote run in the configured HCP Terraform workspace. 
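The CLI integration is enabled with a `cloud` block in the working directory's Terraform settings; a minimal sketch, with a hypothetical organization and workspace name:

```hcl
terraform {
  cloud {
    # Hypothetical names for illustration.
    organization = "example-org"

    workspaces {
      name = "networking-prod"
    }
  }
}
```

After `terraform login` and `terraform init`, running `terraform plan` in this directory starts a remote run in that workspace.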
The output of the run streams directly to your terminal, and you can also share a link to the remote run with your teammates.\n\nRemote CLI-driven runs use the current working directory's Terraform configuration and the remote workspace's variables, so you don't need to obtain production cloud credentials just to preview a configuration change.\n\nThe HCP Terraform CLI integration also supports state manipulation commands like `terraform import` or `terraform taint`.\n\n-> **Note:** When used with HCP Terraform, the `terraform plan` command runs [speculative plans][], which preview changes without modifying real infrastructure. You can also use `terraform apply` to perform full remote runs, but only with workspaces that are _not_ connected to a VCS repository. This helps ensure that your VCS remains the source of record for all real infrastructure changes.\n\n- More info: [CLI-driven Runs](\/terraform\/cloud-docs\/run\/cli)\n\n### Private Registry\n\nEven small teams can benefit greatly by codifying commonly used infrastructure patterns into reusable [modules][].\n\nTerraform can fetch providers and modules from many sources. HCP Terraform makes it easier to find providers and modules to use with a private registry. Users throughout your organization can browse a directory of internal providers and modules, and can specify flexible version constraints for the modules they use in their configurations. Easy versioning lets downstream teams use private modules with confidence, and frees upstream teams to iterate faster.\n\nThe private registry uses your VCS as the source of truth, relying on Git tags to manage module versions. 
Tell HCP Terraform which repositories contain modules, and the registry handles the rest.\n\n- More info: [Private Registry](\/terraform\/cloud-docs\/registry)\n\n## Integrations\n\nIn addition to providing powerful extensions to the core Terraform workflow, HCP Terraform makes it simple to integrate infrastructure provisioning with your business's other systems.\n\n### Full API\n\nNearly all of HCP Terraform's features are available in [its API](\/terraform\/cloud-docs\/api-docs), which means other services can create or configure workspaces, upload configurations, start Terraform runs, and more. There's even [a Terraform provider based on the API](https:\/\/registry.terraform.io\/providers\/hashicorp\/tfe\/latest\/docs), so you can manage your HCP Terraform teams and workspaces as a Terraform configuration.\n\n- More info: [API](\/terraform\/cloud-docs\/api-docs)\n\n### Notifications\n\nHCP Terraform can send notifications about Terraform runs to other systems, including Slack and any other service that accepts webhooks. Notifications can be configured per-workspace.\n\n- More info: [Notifications](\/terraform\/cloud-docs\/workspaces\/settings\/notifications)\n\n### Run Tasks\n\nRun Tasks allow HCP Terraform to execute tasks in external systems at specific points in the HCP Terraform run lifecycle.\n\nThere are several [partner integrations](https:\/\/www.hashicorp.com\/integrations) already available, or you can create your own based on the [API](\/terraform\/cloud-docs\/api-docs\/run-tasks\/run-tasks).\n\n- More info: [Run Tasks](\/terraform\/cloud-docs\/workspaces\/settings\/run-tasks)\n\n## Access Control and Governance\n\nLarger organizations are more complex, and tend to use access controls and explicit policies to help manage that complexity. 
HCP Terraform's paid upgrade plans provide extra features to help meet the control and governance needs of large organizations.\n\n- More info: [Free and Paid Plans](\/terraform\/cloud-docs\/overview)\n\n### Team-Based Permissions System\n\nWith HCP Terraform's team management, you can define groups of users that match your organization's real-world teams and assign them only the permissions they need. When combined with the access controls your VCS provider already offers for code, workspace permissions are an effective way to follow the principle of least privilege.\n\n- More info: [Users, Teams, and Organizations](\/terraform\/cloud-docs\/users-teams-organizations\/permissions)\n\n### Policy Enforcement\n\n<!-- BEGIN: TFC:only name:pnp-callout -->\n@include 'tfc-package-callouts\/policies.mdx'\n<!-- END: TFC:only name:pnp-callout -->\n\nPolicy-as-code lets you define and enforce granular policies for how your organization provisions infrastructure. You can limit the size of compute VMs, confine major updates to defined maintenance windows, and much more.\n\nYou can use the Sentinel and Open Policy Agent (OPA) policy-as-code frameworks to define policies. Depending on the settings, policies can act as advisory warnings, firm requirements that prevent Terraform from provisioning infrastructure, or soft requirements that your compliance team can bypass when appropriate.\n\nRefer to [Policy Enforcement](\/terraform\/cloud-docs\/policy-enforcement) for details.\n\n### Cost Estimation\n\nBefore making changes to infrastructure in the major cloud providers, HCP Terraform can display an estimate of its total cost, as well as any change in cost caused by the proposed updates. 
Cost estimates can also be used in Sentinel policies to provide warnings for major price shifts.\n\n- More info: [Cost Estimation](\/terraform\/cloud-docs\/cost-estimation)","site":"terraform"}
{"questions":"terraform Learn the authorization model potential security threats and our recommendations for securely using HCP Terraform HCP Terraform security model page title Security Model HCP Terraform tfc only true","answers":"---\npage_title: Security Model - HCP Terraform\ndescription: >-\n  Learn the authorization model, potential security threats, and our\n  recommendations for securely using HCP Terraform.\ntfc_only: true\n---\n\n# HCP Terraform security model\n\n## Purpose of this document\n\nThis document explains the security model of HCP Terraform and the security controls available to end users. Additionally, it provides best practices for securely managing your infrastructure with HCP Terraform.\n\n## Important concepts\n\n### Projects, workspaces, and teams\nHCP Terraform organizes infrastructure with workspaces. Workspaces represent a logical security boundary within the organization. Variables, state, SSH keys, and log output are local to a workspace. You can grant teams [read, plan, write, admin, or a customized set of permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions) within a workspace.\n\nProjects let you group related workspaces in your organization. You can use projects to assign [read, write, maintain, admin, or a customized set of permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#project-permissions) to a particular team which grants specific permissions to all workspaces in the project.\n\n### Terraform runs - plans and applies\n\nHCP Terraform will provision infrastructure according to your Terraform configuration which you can upload through the VCS-driven, API-driven, or CLI-driven workflows. You can read more about the different workflows [here](\/terraform\/cloud-docs\/run\/remote-operations#starting-runs). It\u2019s important to note that HCP Terraform performs all Terraform operations within the same privilege context. 
Both the plan and apply operations have access to the full workspace variables, state versions, and Terraform configuration.\n\n### Terraform state file\n\nHCP Terraform retains the current and all historical [state](\/terraform\/language\/state) versions for each workspace. Depending on the resources that are used in your Terraform configuration, these state versions may contain sensitive data such as database passwords, resource IDs, etc.\n\n## Personas\n\n### Organization owners\n\nMembers of the [owners team](\/terraform\/cloud-docs\/users-teams-organizations\/teams#the-owners-team) have administrator-level privileges within an organization. Members of this team will have access to workspaces, projects, and settings within the organization. This role is intended for users who will perform administrative tasks in your organization.\n\n### Workspace and project team members\nTeams let you group users within an organization. You can grant teams [read, plan, write, admin, or a customized set of permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions), each of which allows them to perform various functions within the workspace. You can also grant teams [read, write, maintain, admin, or a customized set of permissions for a project](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#project-permissions), which grants specific permissions to any workspaces in that project. At a higher level, you can use [organization-level privileges](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#organization-permissions), which apply to projects and workspaces across the organization.\n\n### Contributors to connected VCS repositories\n\nHCP Terraform executes Terraform configuration from connected VCS repositories. 
Depending on the configuration, HCP Terraform may automatically trigger Terraform operations when the connected repositories receive new contributions.\n\n## Authorization model\n\n[![HCP Terraform authorization model diagram](\/img\/docs\/terraform-cloud-authorization.png)](\/img\/docs\/terraform-cloud-authorization.png)\n_Click on the diagram for a larger view\\._\n\n~> **Note:** This diagram displays a useful subset of HCP Terraform's authorization model, but is not comprehensive. Some details were omitted for the sake of clarity. More information is available in our [Permissions documentation](\/terraform\/cloud-docs\/users-teams-organizations\/permissions).\n\nWorkspaces provide a logical security boundary within the organization. Environment variables and Terraform configurations are isolated within a workspace, and access to a workspace is granted on a per-team basis.\n\nAll organizations in HCP Terraform contain an \u201cowners\u201d team, which grants admin-level access to the organization and all its workspaces.\n\n~> **Note:** Teams are not available to free-tier users on HCP Terraform. Organizations at the free level will only have an owners team.\n\nAll workspaces in an organization belong to a project. You can grant teams [read, write, maintain, admin, or a customized set of permissions for the project](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#project-permissions), which grants specific permissions on all workspaces within the project. You can also grant teams [read, plan, write, admin, or a customized set of permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#workspace-permissions) for a specific workspace. It\u2019s important to note that, from a security perspective, the plan permission is equivalent to the write permission. The plan permission is provided to protect against accidental Terraform runs but is not intended to stop malicious actors from accessing sensitive data within a workspace. 
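Any user who can queue a plan can also run code of their choosing: data sources, for example, are read during the plan phase. The following is an illustrative sketch only (the command is hypothetical, and the `hashicorp/external` provider must be available to the configuration):

```hcl
# Illustrative sketch: data sources are read during `terraform plan`,
# so this program runs inside the build environment (with access to
# the run's environment variables) before any apply ever happens.
data "external" "plan_phase_code" {
  # Hypothetical command; the external data source requires its
  # program to print a JSON object on stdout.
  program = ["sh", "-c", "env | wc -l >/dev/null && echo '{\"ok\":\"true\"}'"]
}
```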
Terraform `plan` and `apply` operations can execute arbitrary code within the ephemeral build environment. Both of these operations happen in the same security context with access to the full set of workspace variables, Terraform configuration, and Terraform state.\n\nBy default, teams with read privileges within a workspace can view the workspace's state. You can remove this access by using [customized workspace permissions](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#custom-workspace-permissions); however, this will only apply to state file access through the API or UI. Terraform must access the state file in order to perform plan and apply operations, so any user with the ability to upload Terraform configurations and initiate runs will transitively have access to the workspaces' state.\n\nState may be shared across workspaces via the [remote state access workspace setting](\/terraform\/cloud-docs\/workspaces\/state#accessing-state-from-other-workspaces).\n\nTerraform configuration files in connected VCS repositories are inherently trusted. Commits to connected repositories will automatically queue a plan within the corresponding workspace. Pull requests to connected repositories will initiate a speculative plan, though this behavior may be disabled via the [speculative plan setting](\/terraform\/cloud-docs\/workspaces\/settings\/vcs#automatic-speculative-plans) on the workspace settings page. 
HCP Terraform has no knowledge of your VCS's authorization controls and does not associate HCP Terraform user accounts with VCS user accounts \u2014 the two should be considered separate identities.\n\n## Threat model\n\nHCP Terraform is designed to execute Terraform operations and manage the state file to ensure that infrastructure is reliably created, updated, and destroyed by multiple users of an organization.\n\nThe following are part of the HCP Terraform threat model:\n\n### Confidentiality and integrity of communication between Terraform clients and HCP Terraform\n\nAll communication between clients and HCP Terraform is encrypted end-to-end using TLS. HCP Terraform currently supports TLS version 1.2. HCP Terraform communicates with linked VCS repositories using the OAuth 2.0 authorization protocol. HCP Terraform can also be configured to fetch Terraform modules from private repositories using the SSH protocol with a customer-provided private key.\n\n### Confidentiality of state versions, Terraform configurations, and stored variables\n\nAs a user, you will entrust HCP Terraform with information that is very sensitive to your organization such as API tokens, your Terraform configurations, and your Terraform state file. HCP Terraform is designed to ensure the confidentiality of this information; it relies on [Vault Transit](\/vault\/docs\/secrets\/transit) for encrypting workspace variables. Terraform configurations and state are encrypted at rest with uniquely derived encryption keys backed by Vault. You can view how all customer data is encrypted and stored on our [data security page](\/terraform\/cloud-docs\/architectural-details\/data-security).\n\n### Enforcement of authentication and authorization policies for data access and actions taken through the UI or API\n\nHCP Terraform enforces authorization checks for all actions taken within the API or through the UI. 
More information about HCP Terraform workspace-level and organization-level permissions is available [here](\/terraform\/cloud-docs\/users-teams-organizations\/permissions).\n\n### Isolation of Terraform executions\n\nEach Terraform operation (plan and apply) happens in an ephemeral environment that is created immediately before the run and destroyed after it is completed. The build environment is designed to provide isolation between Terraform executions and between HCP Terraform tenants.\n\n### Reliability and availability of HCP Terraform\n\nHCP Terraform is spread across multiple availability zones for reliability; we perform regular backups of our production data stores and have a process for recovering in case of a major outage.\n\n## What isn\u2019t part of the threat model\n\n### Malicious contributions to Terraform configuration in VCS repositories\n\nCommits and pull requests to connected VCS repositories will trigger a plan operation within the workspace. HCP Terraform does not perform any authentication or authorization checks against commits in linked VCS repositories, and cannot prevent malicious Terraform configuration from exfiltrating sensitive data during plan operations. For this reason, it is important to restrict access to connected VCS repositories. Speculative plans for pull requests may be disabled on the [workspace settings page](\/terraform\/cloud-docs\/workspaces\/settings\/vcs#automatic-speculative-plans).\n\n-> **Note:** HCP Terraform will not automatically trigger plans for pull requests from forked repositories.\n\n### Malicious Terraform providers or modules\n\nTerraform providers and modules used in your Terraform configuration will have full access to the variables and Terraform state within a workspace. HCP Terraform cannot prevent malicious providers and modules from exfiltrating this sensitive data. 
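One common mitigation, sketched below, is to pin exact provider versions and commit the dependency lock file so that runs verify recorded checksums before using a provider release (the version shown is illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Exact pin: new releases are adopted deliberately, not implicitly.
      version = "= 5.31.0" # illustrative version
    }
  }
}
```

Running `terraform init` records provider checksums in `.terraform.lock.hcl`; committing that file causes later runs to fail if a downloaded provider artifact does not match the recorded hashes.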
We recommend only using trusted modules and providers within your Terraform configuration.\n\n### Malicious bypasses of Terraform policies\nThe policy-as-code frameworks used by the Terraform [Policy Enforcement](\/terraform\/cloud-docs\/policy-enforcement) feature are embedded within HCP Terraform and can be used to ensure the infrastructure provisioned using Terraform complies with defined organizational policies. The goal of this feature is to enforce compliance with organizational policies and best practices when provisioning infrastructure using Terraform.\n\nIt is important to note that the policy-as-code integration in HCP Terraform should be viewed as a guide or set of guardrails, not a security boundary. It is not designed to prevent malicious actors from executing malicious Terraform configurations or modifying infrastructure.\n\n### Malicious or insecure third-party run tasks\nTerraform [Run Tasks](\/terraform\/cloud-docs\/integrations\/run-tasks) are provided with access to all Terraform configuration and plan data. HCP Terraform does not have the capability to prevent malicious Run Tasks from potentially exfiltrating sensitive data that may be present in either the Terraform configuration or plan.\n\nIn order to minimize potential security risks, it is highly recommended to only utilize trusted technology partners for Run Tasks within your Terraform organization and limit the number of users who have been assigned the [Manage Run Tasks](\/terraform\/cloud-docs\/users-teams-organizations\/permissions#manage-run-tasks) permission.\n\n### Access to sensitive variables or state from Terraform operations\n\nMarking a variable as \u201csensitive\u201d will prevent it from being displayed in the UI, but will not prevent it from being read by Terraform during plan or apply operations. 
Similarly, customized workspace permissions allow you to restrict access to workspace state via the UI and API, but will not prevent it from being read during Terraform operations.\n\n### Redaction of sensitive variables in Terraform logs\n\nThe logs from a Terraform plan or apply operation are visible to any user with at least \u201cread\u201d level access in the associated workspace. While Terraform tries to avoid writing sensitive information to logs, redactions are best-effort. This feature should not be treated as a security boundary, but instead as a mechanism to mitigate accidental exposure. Additionally, HCP Terraform is unable to protect against malicious users who attempt to use Terraform logs to exfiltrate sensitive data.\n\n## Recommendations for securely using HCP Terraform\n\n### Enforce strong authentication\n\nHCP Terraform supports [two factor authentication](\/terraform\/cloud-docs\/users-teams-organizations\/2fa) via SMS or TOTP. Organizations can configure mandatory 2FA for all members in the [organization settings](\/terraform\/cloud-docs\/users-teams-organizations\/organizations#authentication). Organizations may choose to configure [SSO for their organization](\/terraform\/cloud-docs\/users-teams-organizations\/single-sign-on).\n\n### Minimize the number of users in the owners team\n\nUsers of the [owners team](\/terraform\/cloud-docs\/users-teams-organizations\/teams#the-owners-team) will have full access to all workspaces within the organization. If SSO is enabled, members of the \u201cOwners\u201d team will still be able to authenticate with their username and password. This group should be reserved for only a small number of administrators, and membership should be audited periodically.\n\n### Apply the principle of least privilege to workspace membership\n\n[Teams](\/terraform\/cloud-docs\/users-teams-organizations\/teams) allow you to group users and assign them various privileges within workspaces. 
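Team access can itself be managed as code with the `hashicorp/tfe` provider; the sketch below grants a team read-only access to one workspace (organization, team, and workspace identifiers are illustrative):

```hcl
resource "tfe_team" "auditors" {
  name         = "auditors"    # illustrative team name
  organization = "example-org" # illustrative organization
}

resource "tfe_team_access" "auditors_read" {
  team_id      = tfe_team.auditors.id
  workspace_id = "ws-XXXXXXXXXXXXXXXX" # illustrative workspace ID
  access       = "read" # one of: read, plan, write, admin, custom
}
```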
We recommend applying the [principle of least privilege](https:\/\/en.wikipedia.org\/wiki\/Principle_of_least_privilege) when creating teams and assigning permissions so that each user within your organization has the minimum required privileges.\n\n### Protect API keys\n\nHCP Terraform allows you to create [user, team, and organization API tokens](\/terraform\/cloud-docs\/api-docs#authentication). You should take care to store these tokens securely, and rotate them periodically.\nVault users can leverage the [Terraform Cloud secret backend](\/vault\/docs\/secrets\/terraform), which allows you to generate ephemeral tokens.\n\n### Control access to source code\n\nBy default, commits and pull requests to connected VCS repositories will automatically trigger a plan operation in an HCP Terraform workspace. HCP Terraform cannot protect against malicious code in linked repositories, so you should take care to only grant trusted operators access to these repositories.\nWorkspaces may be configured to [enable or disable speculative plans for pull requests](\/terraform\/cloud-docs\/workspaces\/settings\/vcs#automatic-speculative-plans) to linked repositories. You should disable this setting if you allow untrusted users to open pull requests in connected VCS repositories.\n\n-> **Note:** HCP Terraform will not automatically trigger plans for pull requests from forked repositories.\n\n### Restrict access to workspace state\n\nWorkspaces may be configured to share their state with other workspaces within the organization or globally with the entire organization via the [remote state setting](\/terraform\/cloud-docs\/workspaces\/state#accessing-state-from-other-workspaces). 
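A consuming workspace typically reads shared state through the `terraform_remote_state` data source, as in this sketch (organization and workspace names are illustrative):

```hcl
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    organization = "example-org" # illustrative
    workspaces = {
      name = "network-production" # illustrative sharing workspace
    }
  }
}

# Only the sharing workspace's root-level outputs are accessible here,
# e.g. data.terraform_remote_state.network.outputs.vpc_id
```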
Because workspace state may contain sensitive information, we recommend that you follow the principle of least privilege and only enable state access between workspaces that specifically need information from each other.\n\n### Use separate agent pools for sensitive workspaces\n\nYou can share [HCP Terraform Agents](\/terraform\/cloud-docs\/agents) across all workspaces within an organization or [scope them to specific workspaces](\/terraform\/cloud-docs\/agents#scope-an-agent-pool-to-specific-workspaces). If multiple workspaces share agent pools, a malicious actor in one of those workspaces could exfiltrate the agent\u2019s API token, access private resources from the perspective of the agent, or modify the agent\u2019s environment, potentially impacting other workspaces. For this reason, we recommend creating separate agent pools for sensitive workspaces and using the agent scoping setting to restrict which workspaces can target each agent pool.\n\n### Treat Archivist URLs as secrets\n\nHCP Terraform uses a blob storage service called Archivist for storing various pieces of customer data. Archivist URLs have the origin `https:\/\/archivist.terraform.io` and are returned by various HCP Terraform APIs, such as the [state versions API](\/terraform\/cloud-docs\/api-docs\/state-versions#fetch-the-current-state-version-for-a-workspace). You do not need to submit a bearer token with each request to call the Archivist API. Instead, Archivist URLs contain a short-term signed authorization token that performs authorization checks. The expiry time depends on the API endpoints you used to generate the Archivist link. As a result, you must treat Archivist URLs as secrets and avoid logging or sharing them.\n\n### Use dynamic credentials\n\nStoring static credentials in HCP Terraform increases the inherent risk of a malicious user or a compromised plan or apply operation exposing your credentials. 
Because static credentials are usually long-lived and exposed in many locations, they are troublesome to revoke and replace.\n\nUsing [dynamic provider credentials](\/terraform\/cloud-docs\/workspaces\/dynamic-provider-credentials\/) eliminates the need to store static credentials in HCP Terraform, reducing the risk of exposure. Dynamic provider credentials generate new temporary credentials for each operation and expire after that operation completes. ","site":"terraform","answers_cleaned":"    page title  Security Model   HCP Terraform description       Learn the authorization model  potential security threats  and our   recommendations for securely using HCP Terraform  tfc only  true        HCP Terraform security model     Purpose of this document  This document explains the security model of HCP Terraform and the security controls available to end users  Additionally  it provides best practices for securely managing your infrastructure with HCP Terraform      Important concepts      Projects  workspaces  and teams HCP Terraform organizes infrastructure with workspaces  Workspaces represent a logical security boundary within the organization  Variables  state  SSH keys  and log output are local to a workspace  You can grant teams  read  plan  write  admin  or a customized set of permissions   terraform cloud docs users teams organizations permissions  within a workspace   Projects let you group related workspaces in your organization  You can use projects to assign  read  write  maintain  admin  or a customized set of permissions   terraform cloud docs users teams organizations permissions project permissions  to a particular team which grants specific permissions to all workspaces in the project       Terraform runs   plans and applies  HCP Terraform will provision infrastructure according to your Terraform configuration which you can upload through the VCS driven  API driven  or CLI driven workflows  You can read more about the different workflows  here   
Archivist URLs as secrets and avoid logging or sharing them       Use dynamic credentials  Storing static credentials in HCP Terraform increases the inherent risk of a malicious user or a compromised plan or apply operation exposing your credentials  Because static credentials are usually long lived and exposed in many locations  they are troublesome to revoke and replace   Using  dynamic provider credentials   terraform cloud docs workspaces dynamic provider credentials   eliminates the need to store static credentials in HCP Terraform  reducing the risk of exposure  Dynamic provider credentials generate new temporary credentials for each operation and expire after that operation completes  "}
{"questions":"consul to config There may be a few other special cases not included but this covers off as you go you can mark them as done by replace with so github renders them as checked Then please include the completed lists you worked This is a checklist of all the places you need to update when adding a new field We suggest you copy the raw markdown into a gist or local file and check them through in your PR description the majority of configs Adding a Consul Config Field","answers":"# Adding a Consul Config Field\n\nThis is a checklist of all the places you need to update when adding a new field\nto config. There may be a few other special cases not included but this covers\nthe majority of configs.\n\nWe suggest you copy the raw markdown into a gist or local file and check them\noff as you go (you can mark them as done by replacing `[ ]` with `[x]` so GitHub\nrenders them as checked). Then **please include the completed lists you worked\nthrough in your PR description**.\n\nExamples of special cases this doesn't cover are:\n - If the config needs special treatment like a different default in `-dev` mode\n   or differences between CE and Enterprise.\n - If custom logic is needed to support backwards compatibility when changing\n   syntax or semantics of anything.\n\nThere are four specific cases covered with increasing complexity:\n 1. adding a simple config field only used by client agents\n 1. adding a CLI flag to mirror that config field\n 1. adding a config field that needs to be used in Consul servers\n 1. 
adding a field to the Service Definition\n\n## Adding a Simple Config Field for Client Agents\n\n - [ ] Add the field to the Config struct (or an appropriate sub-struct) in\n   `agent\/config\/config.go`.\n - [ ] Add the field to the actual RuntimeConfig struct in\n   `agent\/config\/runtime.go`.\n - [ ] Add an appropriate parser\/setter in `agent\/config\/builder.go` to\n   translate.\n - [ ] Add the new field with a random value to both the JSON and HCL files in\n   `agent\/config\/testdata\/full-config.*`, which should cause the test to fail.\n   Then update the expected value in `TestLoad_FullConfig` in\n   `agent\/config\/runtime_test.go` to make the test pass again.\n - [ ] Run `go test -run TestRuntimeConfig_Sanitize .\/agent\/config -update` to update\n   the expected value for `TestRuntimeConfig_Sanitize`. Look at `git diff` to\n   make sure the value changed as you expect.\n - [ ] **If** your new config field needed some validation as it's only valid in\n   some cases or with some values (often true).\n      - [ ] Add validation to Validate in `agent\/config\/builder.go`.\n      - [ ] Add a test case to the table test `TestLoad_IntegrationWithFlags` in\n        `agent\/config\/runtime_test.go`.\n - [ ] **If** your new config field needs a non-zero-value default.\n      - [ ] Add that to `DefaultSource` in `agent\/config\/defaults.go`.\n      - [ ] Add a test case to the table test `TestLoad_IntegrationWithFlags` in\n        `agent\/config\/runtime_test.go`.\n      - [ ] If the config needs to be defaulted for the test server used in unit tests,\n            also add it to `DefaultConfig()` in `agent\/consul\/config.go`.\n - [ ] **If** your config should take effect on a reload\/HUP.\n      - [ ] Add necessary code to trigger a safe (locked or atomic) update to\n        any state the feature needs changing. 
This needs to be added to one or\n        more of the following places:\n         - `ReloadConfig` in `agent\/agent.go` if it needs to affect the local\n           client state or another client agent component.\n         - `ReloadConfig` in `agent\/consul\/client.go` if it needs to affect\n           state for client agent's RPC client.\n      - [ ] Add a test to `agent\/agent_test.go` similar to others with prefix\n        `TestAgent_reloadConfig*`.\n - [ ] Add documentation to `website\/content\/docs\/agent\/config\/config-files.mdx`.\n\nDone! You can now use your new field in a client agent by accessing\n`s.agent.Config.<FieldName>`.\n\nIf you need a CLI flag, need access to the variable in a Server context, or are\nchanging the Service Definition, make sure you continue on to follow the\nappropriate checklists below.\n\n## Adding a CLI Flag Corresponding to the new Field\nIf the config field also needs a CLI flag, then follow these steps.\n\n - [ ] Do all of the steps in [Adding a Simple Config\n   Field For Client Agents](#adding-a-simple-config-field-for-client-agents).\n - [ ] Add the new flag to `agent\/config\/flags.go`.\n - [ ] Add a test case to TestParseFlags in `agent\/config\/flag_test.go`.\n - [ ] Add a test case (or extend one if appropriate) to the table test\n   `TestLoad_IntegrationWithFlags` in `agent\/config\/runtime_test.go` to ensure setting the\n   flag works.\n - [ ] Add flag (as well as config file) documentation to\n   `website\/source\/docs\/agent\/config\/config-files.mdx` and `website\/source\/docs\/agent\/config\/cli-flags.mdx`.\n\n## Adding a Simple Config Field for Servers\nConsul servers have a separate Config struct for reasons. Note that Consul\nserver agents are actually also client agents, so in some cases config that is\nonly destined for servers doesn't need to follow this checklist provided it's\nonly needed during the bootstrapping of the server (which is done in code shared\nby both server and client components in `agent.go`). 
For example WAN Gossip\nconfigs are only valid on server agents but since WAN Gossip is set up in\n`agent.go` they don't need to follow this checklist. The simplest (and mostly\naccurate) rule is:\n\n> If you need to access the config field from code in `agent\/consul` (e.g. RPC\n> endpoints), then you need to follow this. If it's only in `agent` (e.g. HTTP\n> endpoints or agent startup) you don't.\n\nA final word of warning - **you should never need to pass config into the FSM\n(`agent\/consul\/fsm`) or state store (`agent\/consul\/state`)**. Doing so is **_very\ndangerous_** and can violate consistency guarantees and corrupt databases. If\nyou think you need this then please discuss the design with the Consul team\nbefore writing code!\n\nConsul's server components for historical reasons don't use the `RuntimeConfig`\nstruct; they have their own struct called `Config` in `agent\/consul\/config.go`.\n\n - [ ] Do all of the steps in [Adding a Simple Config\n   Field For Client Agents](#adding-a-simple-config-field-for-client-agents).\n - [ ] Add the new field to the Config struct in `agent\/consul\/config.go`.\n - [ ] Add code to set the values from the `RuntimeConfig` in the `newConsulConfig` method in `agent\/agent.go`.\n - [ ] **If needed**, add a test to `agent_test.go` if there is some non-trivial\n   behavior in the code you added in the previous step. 
We tend not to test\n   simple assignments from one to the other since these are typically caught by\n   higher-level tests of the actual functionality that matters, but some examples\n   can be found prefixed with `TestAgent_consulConfig*`.\n - [ ] **If** your config should take effect on a reload\/HUP\n      - [ ] Add necessary code to `ReloadConfig` in `agent\/consul\/server.go`; this\n        needs to be adequately synchronized with any readers of the state being\n        updated.\n      - [ ] Add a new test or a new assertion to `TestServer_ReloadConfig`.\n\nYou can now access that field from `s.srv.config.<FieldName>` inside an RPC\nhandler.\n\n## Adding a New Field to Service Definition\nThe [Service Definition](https:\/\/www.consul.io\/docs\/agent\/services.html) syntax\nappears both in Consul config files and in the `\/v1\/agent\/service\/register`\nAPI.\n\nFor wonderful historical reasons, our config files have always used `snake_case`\nattribute names in both JSON and HCL (even before we supported HCL!!) while our\nAPI uses `CamelCase`.\n\nBecause we want documentation examples to work in both config files and API\nbodies to avoid needless confusion, we have to accept both snake case and camel\ncase field names for the service definition.\n\nFinally, adding a field to the service definition implies adding the field to\nseveral internal structs and to all API outputs that display services from the\ncatalog. That explains the multiple layers needed below.\n\nThis list assumes a new field in the base service definition struct. Adding new\nfields to health checks is similar but mostly needs `HealthCheck` structs and\nmethods updating instead. Adding fields to embedded structs like `ProxyConfig`\nis largely the same pattern but may need different test methods etc. 
updating.\n\n - [ ] Do all of the steps in [Adding a Simple Config\n   Field For Client Agents](#adding-a-simple-config-field-for-client-agents).\n - [ ] `agent\/structs` package\n      - [ ] Add the field to `ServiceDefinition` (`service_definition.go`)\n      - [ ] Add the field to `NodeService` (`structs.go`)\n      - [ ] Add the field to `ServiceNode` (`structs.go`)\n      - [ ] Update `ServiceDefinition.ToNodeService` to translate the field\n      - [ ] Update `NodeService.ToServiceNode` to translate the field\n      - [ ] Update `ServiceNode.ToNodeService` to translate the field\n      - [ ] Update `TestStructs_ServiceNode_Conversions`\n      - [ ] Update `ServiceNode.PartialClone`\n      - [ ] Update `TestStructs_ServiceNode_PartialClone` (`structs_test.go`)\n      - [ ] If needed, update `NodeService.Validate` to ensure the field value is\n        reasonable\n      - [ ] Add a test like `TestStructs_NodeService_Validate*` in\n        `structs_test.go`\n      - [ ] Add comparison in `NodeService.IsSame`\n      - [ ] Update `TestStructs_NodeService_IsSame`\n      - [ ] Add comparison in `ServiceNode.IsSameService`\n      - [ ] Update `TestStructs_ServiceNode_IsSameService`\n      - [ ] **If** your field name has MultipleWords,\n          - [ ] Add it to the `aux` inline struct in\n            `ServiceDefinition.UnmarshalJSON` (`service_definition.go`). \n            - Note: if the field is embedded higher up in a nested struct,\n              follow the chain and update the necessary struct's `UnmarshalJSON`\n              method - you may need to add one if there are no other case\n              transformations being done; copy an existing example. 
\n            - Note: the tests that exercise this are in the agent endpoint for\n              historical reasons (this is where the translation used to happen).\n - [ ] `agent` package\n      - [ ] Update `testAgent_RegisterService` and\/or add a new test to ensure\n        your fields register correctly via the API (`agent_endpoint_test.go`)\n      - [ ] **If** your field name has MultipleWords,\n          - [ ] Update `testAgent_RegisterService_TranslateKeys` to include\n            examples with it set in `snake_case` and ensure it is parsed\n            correctly. Run this via `TestAgent_RegisterService_TranslateKeys`\n            (agent_endpoint_test.go).\n - [ ] `api` package\n      - [ ] Add the field to `AgentService` (`agent.go`)\n      - [ ] Add\/update an appropriate test in `agent_test.go`\n        - (Note you need to use `make test` or ensure the `consul` binary on\n          your `$PATH` is a build with your new field - usually `make dev`\n          ensures this unless your path is funky or you have a consul binary\n          even further up the shell's `$PATH`).\n - [ ] Docs\n      - [ ] Update docs in `website\/source\/docs\/agent\/services.html.md`\n      - [ ] Consider if it's worth adding examples to feature docs or API docs\n        that show the new field's usage.\n\nNote that although the new field will show up in the API output of\n`\/agent\/services`, `\/catalog\/services` and `\/health\/services`, those tests\nright now don't exercise anything that's super useful unless custom logic is\nrequired since they don't even encode the response object as JSON and just\nassert on the structs you already modified. If custom presentation logic is\nneeded, tests for these endpoints might be warranted too. 
It's usual to use\n`omitempty` for new fields that will typically not be used by existing\nregistrations, although we don't currently test for that systematically.","site":"consul"}
{"questions":"consul write requests from the RPC subsystem See the Consul Architecture Guide for an introduction to the Consul deployment architecture and the Consensus Protocol used by generic resource oriented storage layer introduced in Consul 1 16 Please see Note While the content of this document is still accurate it doesn t cover the new for more information Cluster Persistence The cluster persistence subsystem runs entirely in Server Agents It handles both read and","answers":"# Cluster Persistence\n\n> **Note**  \n> While the content of this document is still accurate, it doesn't cover the new\n> generic resource-oriented storage layer introduced in Consul 1.16. Please see\n> [Resources](..\/v2-architecture\/controller-architecture) for more information.\n\nThe cluster persistence subsystem runs entirely in Server Agents. It handles both read and\nwrite requests from the [RPC] subsystem. See the [Consul Architecture Guide] for an\nintroduction to the Consul deployment architecture and the [Consensus Protocol] used by\nthe cluster persistence subsystem.\n\n[RPC]: ..\/rpc\n[Consul Architecture Guide]: https:\/\/www.consul.io\/docs\/architecture\n[Consensus Protocol]: https:\/\/www.consul.io\/docs\/architecture\/consensus\n\n\n![Overview](.\/overview.svg)\n\n<sup>[source](.\/overview.mmd)<\/sup>\n\n\n## Raft and FSM\n\n[hashicorp\/raft] is at the core of cluster persistence. Raft requires an [FSM], a\nfinite-state machine implementation, to persist state changes. The Consul FSM is\nimplemented in [agent\/consul\/fsm] as a set of commands.\n\n[FSM]: https:\/\/pkg.go.dev\/github.com\/hashicorp\/raft#FSM\n[hashicorp\/raft]: https:\/\/github.com\/hashicorp\/raft\n[agent\/consul\/fsm]: https:\/\/github.com\/hashicorp\/consul\/tree\/main\/agent\/consul\/fsm\n\nRaft also requires a [LogStore] to persist logs to disk. Consul uses [hashicorp\/raft-boltdb]\nwhich implements [LogStore] using [boltdb]. 
In the near future we should be updating to\nuse [bbolt].\n\n\n[LogStore]: https:\/\/pkg.go.dev\/github.com\/hashicorp\/raft#LogStore\n[hashicorp\/raft-boltdb]: https:\/\/github.com\/hashicorp\/raft-boltdb\n[boltdb]: https:\/\/github.com\/boltdb\/bolt\n[bbolt]: https:\/\/github.com\/etcd-io\/bbolt\n\nSee [diagrams](#diagrams) below for more details on the interaction.\n\n## State Store\n\nConsul stores the full state of the cluster in memory using the state store. The state store is\nimplemented in [agent\/consul\/state] and uses [hashicorp\/go-memdb] to maintain indexes of\ndata stored in a set of tables. The main entrypoint to the state store is [NewStateStore].\n\n[agent\/consul\/state]: https:\/\/github.com\/hashicorp\/consul\/tree\/main\/agent\/consul\/state\n[hashicorp\/go-memdb]: https:\/\/github.com\/hashicorp\/go-memdb\n[NewStateStore]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/consul\/state\/state_store.go\n\n### Tables, Schemas, and Indexes\n\nThe state store is organized as a set of tables, and each table has a set of indexes.\n`newDBSchema` in [schema.go] shows the full list of tables, and each schema function shows\nthe full list of indexes.\n\n[schema.go]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/consul\/state\/schema.go\n\nThere are two styles for defining table indexes. The original style uses generic indexer\nimplementations from [hashicorp\/go-memdb] (ex: `StringFieldIndex`). These indexes use\n[reflect] to find values for an index. These generic indexers work well when the index\nvalue is a single value available directly from the struct field, and there are no\nce\/enterprise differences.\n\nThe second style of indexers is custom indexers implemented using only functions and\nbased on the types defined in [indexer.go]. 
This style of index works well when the index\nvalue is a value derived from one or multiple fields, or when there are ce\/enterprise\ndifferences between the indexes.\n\n[reflect]: https:\/\/golang.org\/pkg\/reflect\/\n[indexer.go]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/consul\/state\/indexer.go\n\n\n## Snapshot and Restore\n\nSnapshots are the primary mechanism used to backup the data stored by cluster persistence.\nIf all Consul servers fail, a snapshot can be used to restore the cluster back\nto its previous state.\n\nNote that there are two different snapshot and restore concepts that exist at different\nlayers. First there is the `Snapshot` and `Restore` methods on the raft [FSM] interface,\nthat Consul must implement. These methods are implemented as mostly passthrough to the\nstate store. These methods may be called internally by raft to perform log compaction\n(snapshot) or to bootstrap a new follower (restore). Consul implements snapshot and\nrestore using the `Snapshot` and `Restore` types in [agent\/consul\/state].\n\nSnapshot and restore also exist as actions that a user may perform. There are [CLI]\ncommands, [HTTP API] endpoints, and [RPC] endpoints that allow a user to capture an\narchive which contains a snapshot of the state, and restore that state to a running\ncluster. The [consul\/snapshot] package provides some of the logic for creating and reading\nthe snapshot archives for users. See [commands\/snapshot] for a reference to these user\nfacing operations.\n\n[CLI]: ..\/cli\n[HTTP API]: ..\/http-api\n[commands\/snapshot]: https:\/\/www.consul.io\/commands\/snapshot\n[consul\/snapshot]: https:\/\/github.com\/hashicorp\/consul\/tree\/main\/snapshot\n\nFinally, there is also a [snapshot agent] (enterprise only) that uses the snapshot API\nendpoints to periodically capture a snapshot, and optionally send it somewhere for\nstorage. 
\n\n[snapshot agent]: https:\/\/www.consul.io\/commands\/snapshot\/agent\n\n## Raft Autopilot\n\n[hashicorp\/raft-autopilot] is used by Consul to automate some parts of the upgrade process.\n\n\n[hashicorp\/raft-autopilot]: https:\/\/github.com\/hashicorp\/raft-autopilot\n\n## Diagrams\n### High-level life of a write\n![Overview](.\/write-overview.png)\n\n### Deep-dive into write through Raft\n![Deep dive](.\/write-deep-dive.png","site":"consul","answers_cleaned":"  Cluster Persistence      Note       While the content of this document is still accurate  it doesn t cover the new   generic resource oriented storage layer introduced in Consul 1 16  Please see    Resources     v2 architecture controller architecture  for more information   The cluser persistence subsystem runs entirely in Server Agents  It handles both read and write requests from the  RPC  subsystem  See the  Consul Architecture Guide  for an introduction to the Consul deployment architecture and the  Consensus Protocol  used by the cluster persistence subsystem    RPC      rpc  Consul Architecture Guide   https   www consul io docs architecture  Consensus Protocol   https   www consul io docs architecture consensus     Overview    overview svg    sup  source    overview mmd   sup       Raft and FSM   hashicorp raft  is at the core of cluster persistence  Raft requires an  FSM   a finite state machine implementation  to persist state changes  The Consul FSM is implemented in  agent consul fsm  as a set of commands    FSM   https   pkg go dev github com hashicorp raft FSM  hashicorp raft   https   github com hashicorp raft  agent consul fsm   https   github com hashicorp consul tree main agent consul fsm  Raft also requires a  LogStore  to persist logs to disk  Consul uses  hashicorp raft boltdb  which implements  LogStore  using  boltdb   In the near future we should be updating to use  bbolt      LogStore   https   pkg go dev github com hashicorp raft LogStore  hashicorp raft boltdb   https   github 
com hashicorp raft boltdb  boltdb   https   github com boltdb bolt  bbolt   https   github com etcd io bbolt  See  diagrams   diagrams  below for more details on the interaction      State Store  Consul stores the full state of the cluster in memory using the state store  The state store is implemented in  agent consul state  and uses  hashicorp go memdb  to maintain indexes of data stored in a set of tables  The main entrypoint to the state store is  NewStateStore     agent consul state   https   github com hashicorp consul tree main agent consul state  hashicorp go memdb   https   github com hashicorp go memdb  NewStateStore   https   github com hashicorp consul blob main agent consul state state store go      Tables  Schemas  and Indexes  The state store is organized as a set of tables  and each table has a set of indexes   newDBSchema  in  schema go  shows the full list of tables  and each schema function shows the full list of indexes    schema go   https   github com hashicorp consul blob main agent consul state schema go  There are two styles for defining table indexes  The original style uses generic indexer implementations from  hashicorp go memdb   ex   StringFieldIndex    These indexes use  reflect  to find values for an index  These generic indexers work well when the index value is a single value available directly from the struct field  and there are no ce enterprise differences   The second style of indexers are custom indexers implemented using only functions and based on the types defined in  indexer go   This style of index works well when the index value is a value derived from one or multiple fields  or when there are ce enterprise differences between the indexes    reflect   https   golang org pkg reflect   indexer go   https   github com hashicorp consul blob main agent consul state indexer go      Snapshot and Restore  Snapshots are the primary mechanism used to backup the data stored by cluster persistence  If all Consul servers fail  a 
snapshot can be used to restore the cluster back to its previous state   Note that there are two different snapshot and restore concepts that exist at different layers  First there is the  Snapshot  and  Restore  methods on the raft  FSM  interface  that Consul must implement  These methods are implemented as mostly passthrough to the state store  These methods may be called internally by raft to perform log compaction  snapshot  or to bootstrap a new follower  restore   Consul implements snapshot and restore using the  Snapshot  and  Restore  types in  agent consul state    Snapshot and restore also exist as actions that a user may perform  There are  CLI  commands   HTTP API  endpoints  and  RPC  endpoints that allow a user to capture an archive which contains a snapshot of the state  and restore that state to a running cluster  The  consul snapshot  package provides some of the logic for creating and reading the snapshot archives for users  See  commands snapshot  for a reference to these user facing operations    CLI      cli  HTTP API      http api  commands snapshot   https   www consul io commands snapshot  consul snapshot   https   github com hashicorp consul tree main snapshot  Finally  there is also a  snapshot agent   enterprise only  that uses the snapshot API endpoints to periodically capture a snapshot  and optionally send it somewhere for storage     snapshot agent   https   www consul io commands snapshot agent     Raft Autopilot   hashicorp raft autopilot  is used by Consul to automate some parts of the upgrade process     hashicorp raft autopilot   https   github com hashicorp raft autopilot     Diagrams     High level life of a write   Overview    write overview png       Deep dive into write through Raft   Deep dive    write deep dive png"}
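The FSM-and-commands arrangement described in the Cluster Persistence record above can be sketched with a toy, stdlib-only Go program. This is an illustrative stand-in, not Consul's code: `toyFSM`, `stateStore`, and the JSON `command` type are invented here to mirror how the raft FSM `Apply` dispatches a set of commands into the in-memory state store, and how `Snapshot`/`Restore` are mostly passthrough to that store.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stateStore is a stand-in for Consul's in-memory state store
// (agent/consul/state); the real store uses go-memdb tables and indexes.
type stateStore struct {
	KV map[string]string
}

// command mirrors the idea that the Consul FSM is implemented as a set of
// commands: each raft log entry names an operation plus its payload.
type command struct {
	Op    string `json:"op"`
	Key   string `json:"key"`
	Value string `json:"value"`
}

// toyFSM plays the role of the raft FSM: Apply mutates the state store,
// while Snapshot/Restore are passthrough to it.
type toyFSM struct{ store *stateStore }

// Apply decodes a log entry and dispatches on the command name.
func (f *toyFSM) Apply(entry []byte) error {
	var c command
	if err := json.Unmarshal(entry, &c); err != nil {
		return err
	}
	switch c.Op {
	case "set":
		f.store.KV[c.Key] = c.Value
	case "delete":
		delete(f.store.KV, c.Key)
	default:
		return fmt.Errorf("unknown command %q", c.Op)
	}
	return nil
}

// Snapshot serializes the full state; raft calls this for log compaction.
func (f *toyFSM) Snapshot() ([]byte, error) { return json.Marshal(f.store) }

// Restore replaces the state from a snapshot, e.g. when bootstrapping a follower.
func (f *toyFSM) Restore(snap []byte) error {
	s := &stateStore{}
	if err := json.Unmarshal(snap, s); err != nil {
		return err
	}
	f.store = s
	return nil
}

func main() {
	fsm := &toyFSM{store: &stateStore{KV: map[string]string{}}}
	fsm.Apply([]byte(`{"op":"set","key":"service/web","value":"healthy"}`))

	// A snapshot from the leader can rebuild a fresh follower's state.
	snap, _ := fsm.Snapshot()
	follower := &toyFSM{store: &stateStore{KV: map[string]string{}}}
	follower.Restore(snap)
	fmt.Println(follower.store.KV["service/web"]) // prints "healthy"
}
```

The real `FSM` interface in hashicorp/raft uses `*raft.Log` and a `SnapshotSink` rather than byte slices; the shape here is simplified to keep the sketch self-contained.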
{"questions":"consul At the time of writing only the service health endpoint uses streaming but more endpoints sends events as they occur and the client maintains a materialized view of the events Event Streaming will be added in the future Event streaming is a new asynchronous RPC mechanism that is being added to Consul Instead of synchronous blocking RPC calls long polling to fetch data when it changes streaming","answers":"\n# Event Streaming\n\nEvent streaming is a new asynchronous RPC mechanism that is being added to Consul. Instead\nof synchronous blocking RPC calls (long polling) to fetch data when it changes, streaming\nsends events as they occur, and the client maintains a materialized view of the events.\n\nAt the time of writing only the service health endpoint uses streaming, but more endpoints\nwill be added in the future.\n\nSee [adding a topic](.\/adding-a-topic.md) for a guide on adding new topics to streaming.\n\n## Overview\n\nThe diagram below shows the components that are used in streaming, and how they fit into\nthe rest of Consul.\n\n![Streaming Overview](.\/overview.svg)\n\n<sup>[source](.\/overview.mmd)<\/sup>\n\nRead requests are received either from the HTTP API or from a DNS request. They use\n[rpcclient\/health.Health]\nto query the cache. The [StreamingHealthServices cache-type] uses a [materialized view]\nto manage subscriptions and store the aggregated events. On the server, the\n[SubscribeEndpoint] subscribes and receives events from [EventPublisher].\n\nWrites will likely enter the system through the client as well, but to make the diagram\nless complicated the write flow starts when it is received by the RPC endpoint. 
The\nendpoint calls raft.Apply, which if successful will save the new data in the state.Store.\nWhen the [state.Store commits] it produces an event which is managed by the [EventPublisher]\nand sent to any active subscriptions.\n\n[rpcclient\/health.Health]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/rpcclient\/health\/health.go\n[StreamingHealthServices cache-type]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/cache-types\/streaming_health_services.go\n[materialized view]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/submatview\/materializer.go\n[SubscribeEndpoint]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/grpc-internal\/services\/subscribe\/subscribe.go\n[EventPublisher]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/consul\/stream\/event_publisher.go\n[state.Store commits]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/consul\/state\/memdb.go\n\n\n## Event Publisher\n\nThe [EventPublisher] is at the core of streaming. It receives published events, and\nsubscription requests, and forwards events to the appropriate subscriptions. The diagram\nbelow illustrates how events are stored by the [EventPublisher].\n\n![Event Publisher layout](.\/event-publisher-layout.svg)\n\n<sup>[source](.\/event-publisher-layout.mmd)<\/sup>\n\nWhen a new subscription is created it will create a snapshot of the events required to\nreflect the current state. This snapshot is cached by the [EventPublisher] so that other\nsubscriptions can re-use the snapshot without having to recreate it.\n\nThe snapshot always points at the first item in the linked list of events. A subscription\nwill initially point at the first item, but the pointer advances each time\n`Subscribe.Next` is called. The topic buffers in the EventPublisher always point at the\nlatest item in the linked list, so that new events can be appended to the buffer.\n\nWhen a snapshot cache TTL expires, the snapshot is removed. 
If there are no other\nsubscriptions holding a reference to those items, the items will be garbage collected by\nthe Go runtime. This setup allows EventPublisher to keep some events around for a short\nperiod of time, without any hard coded limit on the number of events to cache.\n\n\n## Subscription events\n\nA subscription provides a stream of events on a single topic. Most of the events contain\ndata for a change in state, but there are a few special \"framing\" events that are used to\ncommunicate something to the client. The diagram below helps illustrate the logic in\n`EventPublisher.Subscribe` and the [materialized view].\n\n\n![Framing events](.\/framing-events.svg)\n\n<sup>[source](.\/framing-events.mmd)<\/sup>\n\n\nEvents in the `Snapshot` contain the same data as those in the `EventStream`, the only\ndifference is that events in the `Snapshot` indicate the current state not a change in\nstate.\n\n`NewSnapshotToFollow` is a framing event that indicates to the client that their existing\nview is out of date. They must reset their view and prepare to receive a new snapshot.\n\n`EndOfSnapshot` indicates to the client that the snapshot is complete. Any future events\nwill be changes in state.\n\n\n## Event filtering\n\nAs events pass through the system from the `state.Store` to the client they are grouped\nand filtered along the way. 
The diagram below helps illustrate where each of the grouping\nand filtering happens.\n\n\n![event filtering](.\/event-filtering.svg)\n\n<sup>[source](.\/event-filtering.mmd)<\/sup>","site":"consul"}
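The snapshot-then-live-events flow described in the Event Streaming record above (a snapshot of current state, an `EndOfSnapshot` framing event, then changes as they occur) can be sketched with a toy publisher. All names here (`publisher`, `event`) are invented for illustration; the real `EventPublisher` uses topic buffers over a linked list of events and cached snapshots rather than per-subscriber channels.

```go
package main

import "fmt"

// event is a simplified stand-in for the events produced by the state store.
// Framing marks special events like EndOfSnapshot that carry no state change.
type event struct {
	Framing bool
	Name    string
	Payload string
}

var endOfSnapshot = event{Framing: true, Name: "EndOfSnapshot"}

// publisher is a toy version of stream.EventPublisher: it remembers current
// state (to build snapshots) and forwards new events to subscribers.
type publisher struct {
	state map[string]string
	subs  []chan event
}

// Subscribe delivers a snapshot of the current state, then the EndOfSnapshot
// framing event, then any events published afterwards.
func (p *publisher) Subscribe() <-chan event {
	ch := make(chan event, 64)
	for k, v := range p.state {
		ch <- event{Name: k, Payload: v} // snapshot events describe state, not changes
	}
	ch <- endOfSnapshot
	p.subs = append(p.subs, ch)
	return ch
}

// Publish records the change and fans the event out to active subscriptions.
func (p *publisher) Publish(e event) {
	p.state[e.Name] = e.Payload
	for _, ch := range p.subs {
		ch <- e
	}
}

func main() {
	p := &publisher{state: map[string]string{"web": "healthy"}}
	sub := p.Subscribe()
	p.Publish(event{Name: "web", Payload: "critical"})

	for i := 0; i < 3; i++ {
		e := <-sub
		fmt.Printf("framing=%v %s=%s\n", e.Framing, e.Name, e.Payload)
	}
}
```

A subscriber that receives `NewSnapshotToFollow` in the real system would reset its view before consuming the next snapshot; this sketch only models the initial snapshot case.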
{"questions":"consul Certificate Authority Connect CA The code for the Certificate Authority is in the following packages 2 the providers are in agent connect ca 3 the RPC interface is in agent consul connectcaendpoint go 1 most of the core logic is in agent consul leaderconnectca go services and client agents via auto encrypt and auto config The Certificate Authority Subsystem manages a CA trust chain for issuing certificates to","answers":"# Certificate Authority (Connect CA)\n\nThe Certificate Authority Subsystem manages a CA trust chain for issuing certificates to\nservices and client agents (via auto-encrypt and auto-config).\n\nThe code for the Certificate Authority is in the following packages:\n1. most of the core logic is in [agent\/consul\/leader_connect_ca.go]\n2. the providers are in [agent\/connect\/ca]\n3. the RPC interface is in [agent\/consul\/connect_ca_endpoint.go]\n\n\n[agent\/consul\/leader_connect_ca.go]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/consul\/leader_connect_ca.go\n[agent\/connect\/ca]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/connect\/ca\/\n[agent\/consul\/connect_ca_endpoint.go]: https:\/\/github.com\/hashicorp\/consul\/blob\/main\/agent\/consul\/connect_ca_endpoint.go\n\n\n## Architecture\n\n### High level overview\n\nIn Consul the leader is responsible for handling the CA management. \nWhen a leader election happen, and the elected leader do not have any root CA available it will start a process of creating a set of CA certificate.\nThose certificates will be used to authenticate\/encrypt communication between services (service mesh) or between `Consul client agent` (auto-encrypt\/auto-config). 
This process is described in the following diagram:\n\n![CA creation](.\/hl-ca-overview.svg)\n\n<sup>[source](.\/hl-ca-overview.mmd)<\/sup>\n\nThe features that benefit from Consul CA management are:\n- [service Mesh\/Connect](https:\/\/www.consul.io\/docs\/connect)\n- [auto encrypt](https:\/\/www.consul.io\/docs\/agent\/options#auto_encrypt)\n\n\n### CA and Certificate relationship\n\nThis diagram shows the relationship between the CA certificates in Consul primary and\nsecondary.\n\n![CA relationship](.\/cert-relationship.svg)\n\n<sup>[source](.\/cert-relationship.mmd)<\/sup>\n\n\nIn most cases there is an external root CA that provides an intermediate CA that Consul\nuses as the Primary Root CA. The only exception to this is when the Consul CA Provider is\nused without specifying a \`RootCert\`. In this one case Consul will generate the Root CA\nfrom the provided primary key, and it will be used in the primary as the top of the chain\nof trust.\n\nIn the primary datacenter, the Consul and AWS providers use the Primary Root CA to sign\nleaf certificates. The Vault provider uses an intermediate CA to sign leaf certificates.\n\nLeaf certificates are created for two purposes:\n1. the Leaf Cert Service is used by envoy proxies in the mesh to perform mTLS with other\n   services.\n2. the Leaf Cert Client Agent is created by auto-encrypt and auto-config. It is used by\n   client agents for HTTP API TLS, and for mTLS for RPC requests to servers.\n\nAny secondary datacenters receive an intermediate certificate, signed by the Primary Root\nCA, which is used as the CA certificate to sign leaf certificates in the secondary\ndatacenter.\n\n## Operations\n\nWhen trying to learn the CA subsystem it can be helpful to understand the operations that\nit can perform. 
The sections below are the complete set of read, write, and periodic\noperations that provide the full behaviour of the CA subsystem.\n\n### Periodic Operations\n\nPeriodic (or background) operations are started automatically by the Consul leader. They run at some interval (often 1 hour).\n\n- \`CAManager.InitializeCA\` - attempts to initialize the CA when a leader is elected. If the synchronous InitializeCA fails, \`CAManager.backgroundCAInitialization\` runs \`InitializeCA\` periodically in a goroutine until it succeeds.\n- \`CAManager.RenewIntermediate\` - (called by \`CAManager.intermediateCertRenewalWatch\`) runs in the primary if the provider uses a separate signing cert (the Vault provider). The operation always runs in the secondary. Renews the signing cert once half its lifetime has passed.\n- \`CAManager.secondaryCARootWatch\` - runs in secondary only. Performs a blocking query to the primary to retrieve any updates to the CA roots and stores them locally.\n- \`Server.runCARootPruning\` - removes non-active and expired roots from state.CARoots\n\n### Read Operations\n\n- \`RPC.ConnectCA.ConfigurationGet\` - returns the CA provider configuration. Only called by user, not by any internal subsystems.\n- \`RPC.ConnectCA.Roots\` - returns all the roots, the trust domain ID, and the ID of the active root. Each \"root\" also includes the signing key\/cert, and any intermediate certs in the chain. It is used (via the cache) by all the connect proxy types.\n\n### Write Operations\n\n- \`CAManager.UpdateConfiguration\` - (via \`RPC.ConnectCA.ConfigurationSet\`) called by a user when they want to change the provider or provider configuration (ex: rotate root CA).\n- \`CAManager.Provider.SignIntermediate\` - (via \`RPC.ConnectCA.SignIntermediate\`) called from the secondary DC:\n    1. by \`CAManager.RenewIntermediate\` to sign the new intermediate when the old intermediate is about to expire\n    2. 
by `CAMananger.initializeSecondary` when setting up a new secondary, when the provider is changed in the secondary\n   by a user action, or when the primary roots changed and the secondary needs to generate a new intermediate for the new\n   primary roots.\n- `CAMananger.SignCertificate` - is used by:\n    1. (via `RPC.ConnectCA.Sign`) - called by client agents to sign a leaf cert for a connect proxy (via `agent\/cache-types\/connect_ca_leaf.go`)\n    2. (via in-process call to `RPC.ConnectCA.Sign`) - called by auto-encrypt to sign a leaf cert for a client agent\n    3. called by Auto-Config to sign a leaf cert for a client agent\n\n## detailed call flow\n![CA Leader Sequence](.\/ca-leader-sequence.svg)\n\n<sup>[source](.\/ca-leader-sequence.mmd)<\/sup>\n\n####TODO:\n- sequence diagram for leaf signing \n- sequence diagram for CA cert rotation\n\n## CAManager states\n\nThis section is a work in progress\n\nTODO: style the diagram to match the others, and add some narative text to describe the\ndiagram.\n\n![CA Mananger states](.\/state-machine.svg)\n\n","site":"consul","answers_cleaned":"  Certificate Authority  Connect CA   The Certificate Authority Subsystem manages a CA trust chain for issuing certificates to services and client agents  via auto encrypt and auto config    The code for the Certificate Authority is in the following packages  1  most of the core logic is in  agent consul leader connect ca go  2  the providers are in  agent connect ca  3  the RPC interface is in  agent consul connect ca endpoint go     agent consul leader connect ca go   https   github com hashicorp consul blob main agent consul leader connect ca go  agent connect ca   https   github com hashicorp consul blob main agent connect ca   agent consul connect ca endpoint go   https   github com hashicorp consul blob main agent consul connect ca endpoint go      Architecture      High level overview  In Consul the leader is responsible for handling the CA management   When a leader 
election happen  and the elected leader do not have any root CA available it will start a process of creating a set of CA certificate  Those certificates will be used to authenticate encrypt communication between services  service mesh  or between  Consul client agent   auto encrypt auto config   This process is described in the following diagram     CA creation    hl ca overview svg    sup  source    hl ca overview mmd   sup   The features that benefit from Consul CA management are     service Mesh Connect  https   www consul io docs connect     auto encrypt  https   www consul io docs agent options auto encrypt        CA and Certificate relationship  This diagram shows the relationship between the CA certificates in Consul primary and secondary     CA relationship    cert relationship svg    sup  source    cert relationship mmd   sup    In most cases there is an external root CA that provides an intermediate CA that Consul uses as the Primary Root CA  The only except to this is when the Consul CA Provider is used without specifying a  RootCert   In this one case Consul will generate the Root CA from the provided primary key  and it will be used in the primary as the top of the chain of trust   In the primary datacenter  the Consul and AWS providers use the Primary Root CA to sign leaf certificates  The Vault provider uses an intermediate CA to sign leaf certificates   Leaf certificates are created for two purposes  1  the Leaf Cert Service is used by envoy proxies in the mesh to perform mTLS with other    services  2  the Leaf Cert Client Agent is created by auto encrypt and auto config  It is used by    client agents for HTTP API TLS  and for mTLS for RPC requests to servers   Any secondary datacenters receive an intermediate certificate  signed by the Primary Root CA  which is used as the CA certificate to sign leaf certificates in the secondary datacenter      Operations  When trying to learn the CA subsystem it can be helpful to understand the operations that 
it can perform  The sections below are the complete set of read  write  and periodic operations that provide the full behaviour of the CA subsystem       Periodic Operations  Periodic  or background  opeartions are started automatically by the Consul leader  They run at some interval  often 1 hour       CAManager InitializeCA    attempts to initialize the CA when a leader is ellected  If the synchronous InitializeCA fails   CAManager backgroundCAInitialization  runs  InitializeCA  periodically in a goroutine until it succeeds     CAManager RenewIntermediate     called by  CAManager intermediateCertRenewalWatch   runs in the primary if the provider uses a separate signing cert  the Vault provider   The operation always runs in the secondary  Renews the signing cert once half its lifetime has passed     CAManager secondaryCARootWatch    runs in secondary only  Performs a blocking query to the primary to retrieve any updates to the CA roots and stores them locally     Server runCARootPruning    removes non active and expired roots from state CARoots      Read Operations     RPC ConnectCA ConfigurationGet    returns the CA provider configuration  Only called by user  not by any internal subsystems     RPC ConnectCA Roots    returns all the roots  the trust domain ID  and the ID of the active root  Each  root  also includes the signing key cert  and any intermediate certs in the chain  It is used  via the cache  by all the connect proxy types       Write Operations     CAManager UpdateConfiguration     via  RPC ConnectCA ConfigurationSet   called by a user when they want to change the provider or provider configuration  ex  rotate root CA      CAManager Provider SignIntermediate     via  RPC ConnectCA SignIntermediate   called from the secondary DC      1  by  CAManager RenewIntermediate  to sign the new intermediate when the old intermediate is about to expire     2  by  CAMananger initializeSecondary  when setting up a new secondary  when the provider is changed in 
the secondary    by a user action  or when the primary roots changed and the secondary needs to generate a new intermediate for the new    primary roots     CAMananger SignCertificate    is used by      1   via  RPC ConnectCA Sign     called by client agents to sign a leaf cert for a connect proxy  via  agent cache types connect ca leaf go       2   via in process call to  RPC ConnectCA Sign     called by auto encrypt to sign a leaf cert for a client agent     3  called by Auto Config to sign a leaf cert for a client agent     detailed call flow   CA Leader Sequence    ca leader sequence svg    sup  source    ca leader sequence mmd   sup       TODO    sequence diagram for leaf signing    sequence diagram for CA cert rotation     CAManager states  This section is a work in progress  TODO  style the diagram to match the others  and add some narative text to describe the diagram     CA Mananger states    state machine svg   "}
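The chain-of-trust mechanics in the Certificate Authority record above, where a root CA signs leaf certificates that are then verified against the root, can be illustrated with Go's standard `crypto/x509` package. This is a generic sketch, not Consul's provider code; the names `newRootCA`, `signLeaf`, and `verifyLeaf` are invented for the example.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newRootCA creates a self-signed certificate, playing the role of the
// Primary Root CA at the top of the chain of trust.
func newRootCA() (*x509.Certificate, *ecdsa.PrivateKey, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "Toy Root CA"},
		NotBefore:             time.Now().Add(-time.Minute),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	cert, err := x509.ParseCertificate(der)
	return cert, key, err
}

// signLeaf issues a short-lived leaf certificate signed by the CA, loosely
// analogous to signing a Leaf Cert Service used for mTLS between proxies.
func signLeaf(ca *x509.Certificate, caKey *ecdsa.PrivateKey, name string) (*x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: name},
		NotBefore:    time.Now().Add(-time.Minute),
		NotAfter:     time.Now().Add(time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

// verifyLeaf checks that the leaf chains back to the trusted root.
func verifyLeaf(leaf, root *x509.Certificate) error {
	roots := x509.NewCertPool()
	roots.AddCert(root)
	_, err := leaf.Verify(x509.VerifyOptions{
		Roots:     roots,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	})
	return err
}

func main() {
	ca, caKey, _ := newRootCA()
	leaf, _ := signLeaf(ca, caKey, "web.svc.toy")
	fmt.Println("leaf trusted:", verifyLeaf(leaf, ca) == nil) // prints "leaf trusted: true"
}
```

A secondary-datacenter setup would insert an intermediate certificate between the root and the leaf; the same `Verify` call accepts intermediates via `VerifyOptions.Intermediates`.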
{"questions":"consul Controller Testing 1 Unit Tests These should live alongside the controller and utilize mocks and the controller TestController Where possible split out controller functionality so that other functions can be independently tested 3 Container based integration tests These tests live along with our other container based integration tests They utilize a full multi node cluster and sometimes client agents There are 3 types of tests that can be created here 2 Lightweight integration tests These should live in an internal api group api group test package These tests utilize the in memory resource service and the standard controller manager There are two types of tests that should be created Lifecycle Integration Tests These go step by step to modify resources and check what the controller did They are meant to go through the lifecycle of resources and how they are reconciled Verifications are typically intermingled with resource updates For every controller we want to enable 3 types of testing One Shot Integration Tests These tests publish a bunch of resources and then perform all the verifications These mainly are focused on the controller eventually converging given all the resources thrown at it and aren t as concerned with any intermediate states resources go through Lifecycle Integration Tests These are the same as for the lighweight integration tests","answers":"# Controller Testing\n\nFor every controller we want to enable 3 types of testing.\n\n1. Unit Tests - These should live alongside the controller and utilize mocks and the controller.TestController. Where possible split out controller functionality so that other functions can be independently tested.\n2. Lightweight integration tests - These should live in an internal\/<api group>\/<api group>test package. These tests utilize the in-memory resource service and the standard controller manager. 
There are two types of tests that should be created.\n\t* Lifecycle Integration Tests - These go step by step to modify resources and check what the controller did. They are meant to go through the lifecycle of resources and how they are reconciled. Verifications are typically intermingled with resource updates.\n\t* One-Shot Integration Tests - These tests publish a bunch of resources and then perform all the verifications. These are mainly focused on the controller eventually converging given all the resources thrown at it and aren't as concerned with any intermediate states resources go through.\n3. Container based integration tests - These tests live along with our other container based integration tests. They utilize a full multi-node cluster (and sometimes client agents). There are 3 types of tests that can be created here:\n\t* Lifecycle Integration Tests - These are the same as for the lightweight integration tests.\n\t* One-Shot Integration Tests - These are the same as for the lightweight integration tests.\n\t* Upgrade Tests - These are a special form of One-Shot Integration tests where the cluster is brought up with some original version, data is pushed in, an upgrade is done, and then we verify the consistency of the data post-upgrade.\n\nBetween the lightweight and container based integration tests there is a lot of duplication in what is being tested. For this reason these integration test bodies should be defined as exported functions within the apigroups test package. The container based tests can then import those packages and invoke the same functionality with minimal overhead.\n\nFor one-shot integration tests, functions to do the resource publishing should be split from functions to perform the verifications. 
This allows upgrade tests to publish the resources once pre-upgrade and then validate their correctness post-upgrade without rewriting them.\n\nSometimes it may also be a good idea to export functions in the test packages for running a specific controller's integration tests. This is useful when the controller will use a different version of a dependency in Consul Enterprise, as it allows the enterprise implementation's package to invoke the integration tests after setting up the controller with its injected dependency.\n\n## Unit Test Template\n\nThese tests live alongside controller source.\n\n```go\npackage foo\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com\/stretchr\/testify\/mock\"\n\t\"github.com\/stretchr\/testify\/require\"\n\t\"github.com\/stretchr\/testify\/suite\"\n\n\t\"github.com\/hashicorp\/consul\/internal\/controller\"\n\trtest \"github.com\/hashicorp\/consul\/internal\/resource\/resourcetest\"\n\t\"github.com\/hashicorp\/consul\/proto-public\/pbresource\"\n\t\"github.com\/hashicorp\/consul\/sdk\/testutil\"\n\t\/\/ plus svctest and this API group's types package\n)\n\nfunc TestReconcile(t *testing.T) {\n\trtest.RunWithTenancies(func(tenancy *pbresource.Tenancy) {\n\t\tsuite.Run(t, &reconcileSuite{tenancy: tenancy})\n\t})\n}\n\ntype reconcileSuite struct {\n\tsuite.Suite\n\n\ttenancy *pbresource.Tenancy\n\n\tctx context.Context\n\tctl *controller.TestController\n\tclient *rtest.Client\n\n\t\/\/ Mock objects needed for testing\n}\n\nfunc (suite *reconcileSuite) SetupTest() {\n\tsuite.ctx = testutil.TestContext(suite.T())\n\n\t\/\/ Alternatively it is sometimes useful to use a mock resource service. For that\n\t\/\/ you can use github.com\/hashicorp\/consul\/grpcmocks.NewResourceServiceClient\n\t\/\/ to create the client.\n\tclient := svctest.NewResourceServiceBuilder().\n\t\t\/\/ register this API group's types. 
Also register any other\n\t\t\/\/ types this controller depends on.\n\t\tWithRegisterFns(types.Register).\n\t\tWithTenancies(suite.tenancy).\n\t\tRun(suite.T())\n\t\t\n\t\/\/ Build any mock objects or other dependencies of the controller here.\n\n\t\/\/ Build the TestController\n\tsuite.ctl = controller.NewTestController(Controller(), client)\n\tsuite.client = rtest.NewClient(suite.ctl.Runtime().Client)\n}\n\n\/\/ Implement tests on the suite as needed.\nfunc (suite *reconcileSuite) TestSomething() {\n\t\/\/ Setup Mock expectations\n\t\n\t\/\/ Push resources into the resource service as needed.\n\t\n\t\/\/ Issue the Reconcile call\n\tsuite.ctl.Reconcile(suite.ctx, controller.Request{})\n}\n```\n\n## Integration Testing Templates\n\nThese tests should live in internal\/<api group>\/<api group>test. For these examples, assume the API group under test is named `foo` and the latest API group version is v2.\n\n### `run_test.go`\n\nThis file is how `go test` knows to execute the tests. These integration tests should\nbe executed against an in-memory resource service with the standard controller manager.\n\n```go\n\/\/ Copyright (c) HashiCorp, Inc.\n\/\/ SPDX-License-Identifier: BUSL-1.1\n\npackage footest\n\nimport (\n\t\"testing\"\n\n\t\"github.com\/hashicorp\/consul\/internal\/foo\"\n\t\"github.com\/hashicorp\/consul\/internal\/controller\/controllertest\"\n\t\"github.com\/hashicorp\/consul\/internal\/resource\/reaper\"\n\trtest \"github.com\/hashicorp\/consul\/internal\/resource\/resourcetest\"\n\t\"github.com\/hashicorp\/consul\/proto-public\/pbresource\"\n)\n\nvar (\n\t\/\/ This makes the CLI options available to control timing delays of requests. 
The\n\t\/\/ randomized timings help to build confidence that regardless of resource writes\n\t\/\/ occurring in quick succession, the controller under test will eventually converge\n\t\/\/ on its steady state.\n\tclientOpts = rtest.ConfigureTestCLIFlags()\n)\n\nfunc runInMemResourceServiceAndControllers(t *testing.T) pbresource.ResourceServiceClient {\n\tt.Helper()\n\n\treturn controllertest.NewControllerTestBuilder().\n\t\t\/\/ Register your types for the API group and any others that these tests will depend on\n\t\tWithResourceRegisterFns(types.Register).\n\t\tWithControllerRegisterFns(\n\t\t\treaper.RegisterControllers,\n\t\t\tfoo.RegisterControllers,\n\t\t).Run(t)\n}\n\n\/\/ The basic integration test should operate mostly in a one-shot manner where resources\n\/\/ are published and then verifications are performed.\nfunc TestControllers_Integration(t *testing.T) {\n\tclient := runInMemResourceServiceAndControllers(t)\n\tRunFooV2IntegrationTest(t, client, clientOpts.ClientOptions(t)...)\n}\n\n\/\/ The lifecycle integration test is typically more complex and deals with changing\n\/\/ some values over time to cause the controllers to do something differently.\nfunc TestControllers_Lifecycle(t *testing.T) {\n\tclient := runInMemResourceServiceAndControllers(t)\n\tRunFooV2LifecycleTest(t, client, clientOpts.ClientOptions(t)...)\n}\n```\n\n### `test_integration_v2.go`\n\n```go\npackage footest\n\nimport (\n\t\"embed\"\n\t\"testing\"\n\n\trtest \"github.com\/hashicorp\/consul\/internal\/resource\/resourcetest\"\n\t\"github.com\/hashicorp\/consul\/proto-public\/pbresource\"\n)\n\nvar (\n\t\/\/go:embed integration_test_data\n\ttestData embed.FS\n)\n\n\/\/ Execute the full integration test\nfunc RunFooV2IntegrationTest(t *testing.T, client pbresource.ResourceServiceClient, opts ...rtest.ClientOption) {\n\tt.Helper()\n\n\tPublishFooV2IntegrationTestData(t, client, opts...)\n\tVerifyFooV2IntegrationTestResults(t, client)\n}\n\n\/\/ 
PublishFooV2IntegrationTestData publishes all the data that needs to exist in the resource service\n\/\/ for the controllers to converge on the desired state.\nfunc PublishFooV2IntegrationTestData(t *testing.T, client pbresource.ResourceServiceClient, opts ...rtest.ClientOption) {\n\tt.Helper()\n\n\tc := rtest.NewClient(client, opts...)\n\n\t\/\/ Publishing resources manually is an option, but alternatively you can store the resources on disk\n\t\/\/ and use go:embed declarations to embed the whole test data filesystem into the test binary.\n\tresources := rtest.ParseResourcesFromFilesystem(t, testData, \"integration_test_data\/v2\")\n\tc.PublishResources(t, resources)\n}\n\nfunc VerifyFooV2IntegrationTestResults(t *testing.T, client pbresource.ResourceServiceClient) {\n\tt.Helper()\n\n\tc := rtest.NewClient(client)\n\n\t\/\/ Perform verifications here. All verifications should be retryable except in very exceptional circumstances.\n\t\/\/ This could be in a retry.Run block or could be retried by using one of the WaitFor* methods on the rtest.Client.\n\t\/\/ Having them be retryable will prevent flakes, especially when the verifications are run in the context of\n\t\/\/ a multi-server cluster where a raft follower hasn't yet observed some change.\n}\n```\n\n### `test_lifecycle_v2.go`\n\n```go\n\/\/ Copyright (c) HashiCorp, Inc.\n\/\/ SPDX-License-Identifier: BUSL-1.1\n\npackage footest\n\nimport (\n\t\"testing\"\n\n\trtest \"github.com\/hashicorp\/consul\/internal\/resource\/resourcetest\"\n\t\"github.com\/hashicorp\/consul\/proto-public\/pbresource\"\n)\n\nfunc RunFooV2LifecycleTest(t *testing.T, client pbresource.ResourceServiceClient, opts ...rtest.ClientOption) {\n\tt.Helper()\n\n\t\/\/ execute tests.\n}\n```","site":"consul"}
{"questions":"consul Note out the Consul 1 16 introduced a set of generic APIs for managing resources and a controller runtime for building functionality on top of them Overview generic APIs proto public pbresource resource proto Looking for guidance on adding new resources and controllers to Consul Check","answers":"# Overview\n\n> **Note**  \n> Looking for guidance on adding new resources and controllers to Consul? Check\n> out the [developer guide](guide.md).\n\nConsul 1.16 introduced a set of [generic APIs] for managing resources, and a\n[controller runtime] for building functionality on top of them.\n\n[generic APIs]: ..\/..\/..\/proto-public\/pbresource\/resource.proto\n[controller runtime]: ..\/..\/..\/internal\/controller\n\nPreviously, adding features to Consul involved making changes at every layer of\nthe stack, including: HTTP handlers, RPC handlers, MemDB tables, Raft\noperations, and CLI commands.\n\nThis architecture made sense when the product was maintained by a small core\ngroup who could keep the entire system in their heads, but presented significant\ncollaboration, ownership, and onboarding challenges when our contributor base\nexpanded to many engineers, across several teams, and the product grew in\ncomplexity.\n\nIn the new model, teams can work with much greater autonomy by building on top\nof a shared platform and own their resource types and controllers.\n\n## Architecture Overview\n\n![architecture diagram](architecture-overview.png)\n\n<sup>[source](https:\/\/whimsical.com\/state-store-v2-UKE6SaEPXNc4UrZBrZj4Kg)<\/sup>\n\nOur resource-oriented architecture comprises the following components:\n\n#### Resource Service\n\n[Resource Service](..\/..\/..\/proto-public\/pbresource\/resource.proto) is a gRPC\nservice that contains the shared logic for creating, reading, updating,\ndeleting, and watching resources. 
It will be consumed by controllers, our\nKubernetes integration, the CLI, and mapped to an HTTP+JSON API.\n\n#### Type Registry\n\n[Type Registry](..\/..\/..\/internal\/resource\/registry.go) is where teams register\ntheir resource types, along with hooks for performing structural validation,\nauthorization, etc.\n\n#### Storage Backend\n\n[Storage Backend](..\/..\/..\/internal\/storage\/storage.go) is an abstraction over\nlow-level storage primitives. Today, there are two implementations (Raft and\nan in-memory backend for tests), but in the future we envisage that external storage\nsystems such as the Kubernetes API or an RDBMS could be supported, which would\nreduce operational complexity for our customers.\n\n#### Controllers\n\n[Controllers](..\/..\/..\/internal\/controller\/api.go) implement Consul's business\nlogic using asynchronous control loops that respond to changes in resources.\nPlease see the [Controller docs](controllers.md) for more details about controllers.\n\n## Raft Storage Backend\n\nOur [Raft Storage Backend](..\/..\/..\/internal\/storage\/raft\/backend.go) integrates\nwith the existing Raft machinery (e.g. FSM) used by the [old state store]. 
It\nalso transparently forwards writes and strongly consistent reads to the leader\nover gRPC.\n\nThere's quite a lot going on here, so to dig into the details, let's take a look\nat how a write operation is handled.\n\n[old state store]: ..\/persistence\/\n\n![raft storage backend diagram](raft-backend.png)\n\n<sup>[source](https:\/\/whimsical.com\/state-store-v2-UKE6SaEPXNc4UrZBrZj4Kg)<\/sup>\n\n#### Steps 1 & 2\n\nThe user calls the resource service's `Write` endpoint on a Raft follower, which\nin turn calls the storage backend's `WriteCAS` method.\n\n#### Steps 3 & 4\n\nThe storage backend determines that the current server is a Raft follower, and\nforwards the operation to the leader via a gRPC [forwarding service] listening\non the multiplexed RPC port ([`ports.server`]).\n\n[forwarding service]: ..\/..\/..\/proto\/private\/pbstorage\/raft.proto\n[`ports.server`]: https:\/\/developer.hashicorp.com\/consul\/docs\/agent\/config\/config-files#server_rpc_port\n\n#### Step 5\n\nThe leader's storage backend serializes the operation to protobuf and applies it\nto the Raft log. As we need to share the Raft log with the old state store, we go\nthrough the [`consul.raftHandle`](..\/..\/..\/agent\/consul\/raft_handle.go) and\n[`consul.Server`](..\/..\/..\/agent\/consul\/server.go), which apply a msgpack\nenvelope and type byte prefix.\n\n#### Step 6\n\nRaft consensus happens! Once the log has been committed, it is applied to the\n[FSM](..\/..\/..\/agent\/consul\/fsm\/fsm.go), which calls the storage backend's `Apply`\nmethod to apply the protobuf-encoded operation to the [`inmem.Store`].\n\n[`inmem.Store`]: ..\/..\/..\/internal\/storage\/inmem\/store.go\n\n#### Steps 7, 8, 9\n\nAt this point, the operation is complete. 
The forwarding service returns a\nsuccessful response, as does the follower's storage backend, and the user\ngets a successful response too.\n\n#### Steps 10 & 11\n\nAsynchronously, the log is replicated to followers and applied to their storage\nbackends.","site":"consul"}
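The forward-or-apply decision at the heart of the write path above (follower forwards in steps 3 & 4; leader applies in steps 5 & 6) can be sketched as a toy Go model. All names here are hypothetical stand-ins: the real backend lives in `internal/storage/raft/backend.go` and forwards over gRPC rather than through an in-process pointer.

```go
package main

import "fmt"

// op stands in for the protobuf-serialized write operation.
type op struct{ key, value string }

// backend is a toy model of the Raft storage backend on one server.
type backend struct {
	isLeader bool
	leader   *backend          // stand-in for the gRPC forwarding client (steps 3 & 4)
	store    map[string]string // stand-in for the FSM-applied inmem.Store (step 6)
}

// WriteCAS models steps 2-9: a follower forwards the operation to the
// leader; the leader applies the committed operation to its store and the
// result propagates back down the call chain.
func (b *backend) WriteCAS(o op) error {
	if !b.isLeader {
		// Steps 3 & 4: not the leader, so forward the operation.
		return b.leader.WriteCAS(o)
	}
	// Steps 5 & 6: the leader applies the committed log entry.
	b.store[o.key] = o.value
	return nil
}

func main() {
	leader := &backend{isLeader: true, store: map[string]string{}}
	follower := &backend{leader: leader}

	// Steps 1 & 2: a user writes via a follower's resource service.
	if err := follower.WriteCAS(op{key: "foo/bar", value: "baz"}); err != nil {
		panic(err)
	}
	fmt.Println(leader.store["foo/bar"]) // the write landed on the leader's store
}
```

The toy omits Raft consensus and the async replication of steps 10 & 11; it only illustrates why a client can write through any server while state changes are always applied on the leader first.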
{"questions":"consul Resource Schema Consul Adding a new resource type begins with defining the object schema as a protobuf message in the appropriate package under This is a whistle stop tour through adding a new resource type and controller to Resource and Controller Developer Guide","answers":"# Resource and Controller Developer Guide\n\nThis is a whistle-stop tour through adding a new resource type and controller to\nConsul \ud83d\ude82\n\n## Resource Schema\n\nAdding a new resource type begins with defining the object schema as a protobuf\nmessage, in the appropriate package under [`proto-public`](..\/..\/..\/proto-public).\n\n```shell\n$ mkdir -p proto-public\/pbfoo\/v1alpha1\n```\n\n```proto\n\/\/ proto-public\/pbfoo\/v1alpha1\/foo.proto\nsyntax = \"proto3\";\n\nimport \"pbresource\/resource.proto\";\nimport \"pbresource\/annotations.proto\";\n\npackage hashicorp.consul.foo.v1alpha1;\n\nmessage Bar {\n  option (hashicorp.consul.resource.spec) = {scope: SCOPE_NAMESPACE};\n\n  string baz = 1;\n  hashicorp.consul.resource.ID qux = 2;\n}\n```\n\n```shell\n$ make proto\n```\n\nNext, we must add our resource type to the registry. At this point, it's useful\nto add a package (e.g. 
under [`internal`](..\/..\/..\/internal)) to contain the logic\nassociated with this resource type.\n\nThe convention is to have this package export variables for its type identifiers\nalong with a method for registering its types:\n\n```Go\n\/\/ internal\/foo\/types.go\npackage foo\n\nimport (\n\t\"github.com\/hashicorp\/consul\/internal\/resource\"\n\tpbv1alpha1 \"github.com\/hashicorp\/consul\/proto-public\/pbfoo\/v1alpha1\"\n)\n\nfunc RegisterTypes(r resource.Registry) {\n\tr.Register(resource.Registration{\n\t\tType:  pbv1alpha1.BarType,\n\t\tScope: resource.ScopePartition,\n\t\tProto: &pbv1alpha1.Bar{},\n\t})\n}\n```\n\nNote that `Scope` sets the scope of the new resource: `resource.ScopePartition`\nmeans the resource lives at the partition level and has no namespace, while\n`resource.ScopeNamespace` means it has both a namespace and a partition.\n\nUpdate the `NewTypeRegistry` method in [`type_registry.go`] to call your\npackage's type registration method:\n\n[`type_registry.go`]: ..\/..\/..\/agent\/consul\/type_registry.go\n\n```Go\nimport (\n\t\/\/ \u2026\n\t\"github.com\/hashicorp\/consul\/internal\/foo\"\n\t\/\/ \u2026\n)\n\nfunc NewTypeRegistry() resource.Registry {\n\t\/\/ \u2026\n\tfoo.RegisterTypes(registry)\n\t\/\/ \u2026\n}\n```\n\nThat should be all you need to start using your new resource type. 
Test it out\nby starting an agent in dev mode:\n\n```shell\n$ make dev\n$ consul agent -dev\n```\n\nYou can now use [grpcurl](https:\/\/github.com\/fullstorydev\/grpcurl) to interact\nwith the [resource service](..\/..\/..\/proto-public\/pbresource\/resource.proto):\n\n```shell\n$ grpcurl -d @ \\\n  -plaintext \\\n  -protoset pkg\/consul.protoset \\\n  127.0.0.1:8502 \\\n  hashicorp.consul.resource.ResourceService.Write \\\n<<EOF\n  {\n    \"resource\": {\n      \"id\": {\n        \"type\": {\n          \"group\": \"foo\",\n          \"group_version\": \"v1alpha1\",\n          \"kind\": \"bar\"\n        },\n        \"tenancy\": {\n          \"partition\": \"default\",\n          \"namespace\": \"default\"\n        }\n      },\n      \"data\": {\n        \"@type\": \"types.googleapis.com\/hashicorp.consul.foo.v1alpha1.Bar\",\n        \"baz\": \"Hello World\"\n      }\n    }\n  }\nEOF\n```\n\n## Validation\n\nBroadly, there are two kinds of validation you might want to perform against\nyour resources:\n\n- **Structural** validation ensures the user's input is well-formed, for\n  example: checking that a required field is provided, or that a port is within\n  an acceptable range.\n- **Semantic** validation ensures that the resource makes sense in the context\n  of *other* resources, for example: checking that an L7 intention is not\n  targeting an L4 service.\n\nStructural validation should be done up-front, before the resource is admitted,\nusing a validation hook provided in the type registration:\n\n```Go\nfunc RegisterTypes(r resource.Registry) {\n\tr.Register(resource.Registration{\n\t\tType:     pbv1alpha1.BarType,\n\t\tProto:    &pbv1alpha1.Bar{}, \n\t\tScope:    resource.ScopeNamespace,\n\t\tValidate: validateBar,\n\t})\n}\n\nfunc validateBar(res *pbresource.Resource) error {\n\tvar bar pbv1alpha1.Bar\n\tif err := res.Data.UnmarshalTo(&bar); err != nil {\n\t\treturn resource.NewErrDataParse(&bar, err)\n\t}\n\tif bar.Baz == \"\" {\n\t\treturn 
resource.ErrInvalidField{\n\t\t\tName:    \"baz\",\n\t\t\tWrapped: resource.ErrMissing,\n\t\t}\n\t}\n\treturn nil\n}\n```\n\nSemantic validation should be done asynchronously, after the resource is\nwritten, by controllers ([covered below](#controllers)).\n\n## Authorization\n\nYou can control how operations on your resource type are authorized by providing\na set of ACL hooks:\n\n```Go\nfunc RegisterTypes(r resource.Registry) {\n\tr.Register(resource.Registration{\n\t\tType:  pbv1alpha1.BarType,\n\t\tProto: &pbv1alpha1.Bar{},\n\t\tScope: resource.ScopeNamespace,\n\t\tACLs: &resource.ACLHooks{\n\t\t\tRead:  authzReadBar,\n\t\t\tWrite: authzWriteBar,\n\t\t\tList:  authzListBar,\n\t\t},\n\t})\n}\n\nfunc authzReadBar(authz acl.Authorizer, authzContext *acl.AuthorizerContext, id *pbresource.ID, _ *pbresource.Resource) error {\n\treturn authz.ToAllowAuthorizer().\n\t\tBarReadAllowed(id.Name, authzContext)\n}\n\nfunc authzWriteBar(authz acl.Authorizer, authzContext *acl.AuthorizerContext, res *pbresource.Resource) error {\n\treturn authz.ToAllowAuthorizer().\n\t\tBarWriteAllowed(res.Id.Name, authzContext)\n}\n\nfunc authzListBar(authz acl.Authorizer, authzContext *acl.AuthorizerContext) error {\n\treturn authz.ToAllowAuthorizer().\n\t\tBarListAllowed(authzContext)\n}\n```\n\nIf you do not provide ACL hooks, `operator:read` and `operator:write`\npermissions will be required.\n\n## Mutation\n\nSometimes, it's necessary to modify resources before they're persisted. For\nexample, to set sensible default values or normalize user input. 
You can do this\nby providing a mutation hook:\n\n```Go\nfunc RegisterTypes(r resource.Registry) {\n\tr.Register(resource.Registration{\n\t\tType:   pbv1alpha1.BarType,\n\t\tProto:  &pbv1alpha1.Bar{},\n\t\tScope:  resource.ScopeNamespace,\n\t\tMutate: mutateBar,\n\t})\n}\n\nfunc mutateBar(res *pbresource.Resource) error {\n\tvar bar pbv1alpha1.Bar\n\tif err := res.Data.UnmarshalTo(&bar); err != nil {\n\t\treturn resource.NewErrDataParse(&bar, err)\n\t}\n\tbar.Baz = strings.ToLower(bar.Baz)\n\treturn res.Data.MarshalFrom(&bar)\n}\n```\n\n## Controllers\n\nControllers are where the business logic of your resources will live. They're\nasynchronous [reconciliation loops] that \"wake up\" whenever a resource is\nmodified to validate and realize the changes.\n\nYou can create a new controller using the [builder API]. Start by identifying\nthe resource type you want this controller to manage, and provide a reconciler\nthat will be called whenever a resource of that type is changed.\n\n```Go\npackage foo\n\nimport (\n\t\"context\"\n\n\t\"google.golang.org\/grpc\/codes\"\n\t\"google.golang.org\/grpc\/status\"\n\n\t\"github.com\/hashicorp\/consul\/internal\/controller\"\n\tpbv1alpha1 \"github.com\/hashicorp\/consul\/proto-public\/pbfoo\/v1alpha1\"\n\t\"github.com\/hashicorp\/consul\/proto-public\/pbresource\"\n)\n\nfunc barController() controller.Controller {\n\treturn controller.NewController(\"bar\", pbv1alpha1.BarType).\n\t\tWithReconciler(barReconciler{})\n}\n\ntype barReconciler struct{}\n\nfunc (barReconciler) Reconcile(ctx context.Context, rt controller.Runtime, req controller.Request) error {\n\trsp, err := rt.Client.Read(ctx, &pbresource.ReadRequest{Id: req.ID})\n\tswitch {\n\tcase status.Code(err) == codes.NotFound:\n\t\treturn nil\n\tcase err != nil:\n\t\treturn err\n\t}\n\n\tvar bar pbv1alpha1.Bar\n\tif err := rsp.Resource.Data.UnmarshalTo(&bar); err != nil {\n\t\treturn err\n\t}\n\trt.Logger.Debug(\"Hello from bar reconciler!\", \"baz\", bar.Baz)\n\n\treturn nil\n}\n```\n\n[reconciliation loops]: 
https:\/\/www.oreilly.com\/library\/view\/97-things-every\/9781492050896\/ch73.html\n[builder API]: https:\/\/pkg.go.dev\/github.com\/hashicorp\/consul\/internal\/controller#Controller\n\nNext, register your controller with the controller manager. Another common\npattern is to have your package expose a method for registering controllers,\nwhich is called from `registerControllers` in [`server.go`].\n\n[`server.go`]: ..\/..\/..\/agent\/consul\/server.go\n\n```Go\npackage foo\n\nfunc RegisterControllers(mgr *controller.Manager) {\n\tmgr.Register(barController())\n}\n```\n\n```Go\npackage consul\n\nfunc (s *Server) registerControllers() {\n\t\/\/ \u2026\n\tfoo.RegisterControllers(s.controllerManager)\n\t\/\/ \u2026\n}\n```\n\n### Retries\n\nBy default, if your reconciler returns an error, it will be retried with\nexponential backoff. While this is correct in most circumstances, you can\noverride it by returning [`RequeueAfter`] or [`RequeueNow`].\n\n[`RequeueAfter`]: https:\/\/pkg.go.dev\/github.com\/hashicorp\/consul\/internal\/controller#RequeueAfter\n[`RequeueNow`]: https:\/\/pkg.go.dev\/github.com\/hashicorp\/consul\/internal\/controller#RequeueNow\n\n```Go\nfunc (barReconciler) Reconcile(context.Context, controller.Runtime, controller.Request) error {\n\tif time.Now().Hour() < 9 {\n\t\treturn controller.RequeueAfter(1 * time.Hour)\n\t}\n\treturn nil\n}\n```\n\n### Status\n\nControllers can communicate the result of reconciling resource changes (e.g.\nsurfacing semantic validation issues) with users and other controllers by\nupdating the resource's status using the `WriteStatus` method.\n\nEach resource can have multiple statuses, typically one per controller,\nidentified by a string key. 
Statuses are composed of a set of conditions, which\nrepresent discrete observations about the resource in relation to the current\nstate of the system.\n\nThat all sounds a little abstract, so let's take a look at an example.\n\n```Go\nclient.WriteStatus(ctx, &pbresource.WriteStatusRequest{\n\tId:     res.Id,\n\tKey:    \"consul.io\/bar\",\n\tStatus: &pbresource.Status{\n\t\tObservedGeneration: res.Generation,\n\t\tConditions: []*pbresource.Condition{\n\t\t\t{\n\t\t\t\tType:    \"Healthy\",\n\t\t\t\tState:   pbresource.Condition_STATE_TRUE,\n\t\t\t\tReason:  \"OK\",\n\t\t\t\tMessage: \"All checks are passing\",\n\t\t\t},\n\t\t\t{\n\t\t\t\tType:    \"ResolvedRefs\",\n\t\t\t\tState:   pbresource.Condition_STATE_FALSE,\n\t\t\t\tReason:  \"INVALID_REFERENCE\",\n\t\t\t\tMessage: \"Bar contained an invalid reference to qux\",\n\t\t\t\tResource: resource.Reference(bar.Qux, \"\"),\n\t\t\t},\n\t\t},\n\t},\n})\n```\n\nIn the previous example, the controller makes two observations about the\ncurrent state of the resource:\n\n1. That it's \"healthy\" (whatever that means in this hypothetical scenario)\n1. That it contains a reference that couldn't be resolved\n\nThe `Type` and `Reason` should be simple, machine-readable strings, but there\naren't any strict rules about what are acceptable values. Over time, we\nanticipate that common values will emerge that we'll standardize on for\nconsistency.\n\n`Message` should be a human-readable explanation of the condition.\n\n> **Warning**  \n> Writing a status to the resource will cause it to be re-reconciled. To avoid\n> infinite loops, we recommend dirty checking the status before writing it with\n> [`resource.EqualStatus`].\n\n[`resource.EqualStatus`]: https:\/\/pkg.go.dev\/github.com\/hashicorp\/consul\/internal\/resource#EqualStatus\n\n### Watching Other Resources\n\nIn addition to watching their \"managed\" resources, controllers can also watch\nresources of different, related types. 
For example, the service endpoints\ncontroller also watches workloads and services.\n\n```Go\nfunc barController() controller.Controller {\n\treturn controller.NewController(\"bar\", pbv1alpha1.BarType).\n\t\tWithWatch(pbv1alpha1.BazType, dependency.MapOwner).\n\t\tWithReconciler(barReconciler{})\n}\n```\n\nThe second argument to `WithWatch` is a [dependency mapper] function. Whenever a\nresource of the watched type is modified, the dependency mapper will be called\nto determine which of the controller's managed resources need to be reconciled.\n\n[`dependency.MapOwner`] is a convenience function which causes the watched\nresource's [owner](#ownership--cascading-deletion) to be reconciled.\n\n[dependency mapper]: https:\/\/pkg.go.dev\/github.com\/hashicorp\/consul\/internal\/controller#DependencyMapper\n[`dependency.MapOwner`]: https:\/\/pkg.go.dev\/github.com\/hashicorp\/consul\/internal\/controller\/dependency#MapOwner\n\n### Placement\n\nBy default, only a single, leader-elected replica of each controller will run\nwithin a cluster. Sometimes it's necessary to override this, for example when\nyou want to run a copy of the controller on each server (e.g. to apply some\nconfiguration to the server whenever it changes). 
You can do this by changing\nthe controller's placement.\n\n```Go\nfunc barController() controller.Controller {\n\treturn controller.NewController(\"bar\", pbv1alpha1.BarType).\n\t\tWithPlacement(controller.PlacementEachServer).\n\t\tWithReconciler(barReconciler{})\n}\n```\n\n> **Warning**  \n> Controllers placed with [`controller.PlacementEachServer`] generally shouldn't\n> modify resources (as it could lead to race conditions).\n\n[`controller.PlacementEachServer`]: https:\/\/pkg.go.dev\/github.com\/hashicorp\/consul\/internal\/controller#PlacementEachServer\n\n### Initializer\n\nIf your controller needs to execute setup steps when the controller\nfirst starts and before any resources are reconciled, you can add an\nInitializer.\n\nIf the controller has an Initializer, it will not start unless the\n`Initialize` method succeeds. The controller does not have retry\nlogic for the `Initialize` method specifically, but the controller\nis restarted on error. When restarted, the controller will attempt\nto execute the initialization again.\n\nThe example below has the controller creating a default resource as\npart of initialization.\n\n```Go\npackage foo\n\nimport (\n\t\"context\"\n\n\t\"github.com\/hashicorp\/consul\/internal\/controller\"\n\tpbv1alpha1 \"github.com\/hashicorp\/consul\/proto-public\/pbfoo\/v1alpha1\"\n\t\"github.com\/hashicorp\/consul\/proto-public\/pbresource\"\n)\n\nfunc barController() controller.Controller {\n\treturn controller.NewController(\"bar\", pbv1alpha1.BarType).\n\t\tWithReconciler(barReconciler{}).\n\t\tWithInitializer(barInitializer{})\n}\n\ntype barInitializer struct{}\n\nfunc (barInitializer) Initialize(ctx context.Context, rt controller.Runtime) error {\n\t_, err := rt.Client.Write(ctx,\n\t\t&pbresource.WriteRequest{\n\t\t\tResource: &pbresource.Resource{\n\t\t\t\tId: &pbresource.ID{\n\t\t\t\t\tName: \"default\",\n\t\t\t\t\tType: pbv1alpha1.BarType,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n```\n\n### Finalizer\n\nA 
finalizer allows a controller to execute teardown logic before a\nresource is deleted. This can be useful to perform cleanup or block\ndeletion until certain conditions are met.\n\nFinalizers are encoded as keys within a resource's metadata map. It\nis the responsibility of each controller that adds a finalizer to a\nresource to remove the finalizer when it is marked for deletion.\nOnce a resource has no finalizers present, it is deleted by the\nresource service.\n\nWhen the `Delete` endpoint is called on a resource with one or more\nfinalizers, the resource is marked for deletion by adding an immutable\n`deletionTimestamp` key to the resource's metadata map. The resource is\nnow effectively frozen and will only accept subsequent `Write`s\nthat remove finalizers. `WriteStatus` is still allowed.\n\nThe `resource` package API can be used to manage finalizers and\ncheck whether a resource has been marked for deletion. You would\ntypically use this API within the logic of your controller's\n`Reconcile` method to either put a finalizer in place or perform\ncleanup and then remove a finalizer. Don't forget to `Write` your\nchanges once you add or remove finalizers.\n\n```Go\npackage resource\n\n\/\/ IsMarkedForDeletion returns true if a resource has been marked for deletion,\n\/\/ false otherwise.\nfunc IsMarkedForDeletion(res *pbresource.Resource) bool { ... }\n\n\/\/ HasFinalizers returns true if a resource has one or more finalizers, false otherwise.\nfunc HasFinalizers(res *pbresource.Resource) bool { ... }\n\n\/\/ HasFinalizer returns true if a resource has a given finalizer, false otherwise.\nfunc HasFinalizer(res *pbresource.Resource, finalizer string) bool { ... }\n\n\/\/ AddFinalizer adds a finalizer to the given resource.\nfunc AddFinalizer(res *pbresource.Resource, finalizer string) { ... }\n\n\/\/ RemoveFinalizer removes a finalizer from the given resource.\nfunc RemoveFinalizer(res *pbresource.Resource, finalizer string) { ... 
}\n\n\/\/ GetFinalizers returns the set of finalizers for the given resource.\nfunc GetFinalizers(res *pbresource.Resource) mapset.Set[string] { ... }\n```\n\nExample flow in a controller's `Reconcile` method:\n```Go\nconst finalizer = \"consul.io\/bar-finalizer\"\n\nfunc (barReconciler) Reconcile(ctx context.Context, rt controller.Runtime, req controller.Request) error {\n\t...\n\t\/\/ Check if the resource is marked for deletion. If yes, perform cleanup,\n\t\/\/ remove the finalizer, and Write the resource.\n\tif resource.IsMarkedForDeletion(res) {\n\t\t\/\/ Perform some cleanup...\n\t\treturn EnsureFinalizerRemoved(ctx, rt, res, finalizer)\n\t}\n\n\t\/\/ Check if the resource has the finalizer. If not, add it and Write the resource.\n\tif err := EnsureHasFinalizer(ctx, rt, res, finalizer); err != nil {\n\t\treturn err\n\t}\n\n\t\/\/ ... remaining reconciliation logic.\n\treturn nil\n}\n```\n\n## Ownership & Cascading Deletion\n\nThe resource service implements a lightweight `1:N` ownership model where, on\ncreation, you can mark a resource as being \"owned\" by another resource. When the\nowner is deleted, the owned resource will be deleted too.\n\n```Go\nclient.Write(ctx, &pbresource.WriteRequest{\n\tResource: &pbresource.Resource{\n\t\tOwner: ownerID,\n\t\t\/\/ \u2026\n\t},\n})\n```\n\n## Testing\n\nNow that you have created your controller, it's time to test it. 
The types of tests each controller should have and boilerplate for test files are documented [here](.\/testing.md)","site":"consul"}
{"questions":"consul Controllers This page describes how to write controllers in Consul s new controller architecture A controller consists of several parts Note This information is valid as of Consul 1 17 but some portions may change in future releases Controller Basics","answers":"# Controllers\n\nThis page describes how to write controllers in Consul's new controller architecture.\n\n-> **Note**: This information is valid as of Consul 1.17 but some portions may change in future releases.\n\n## Controller Basics\n\nA controller consists of several parts: \n\n1. **The watched type** - This is the main type a controller is watching and reconciling.\n2. **Additional watched types** - These are additional types a controller may care about in addition to the main watched type.\n3. **Additional custom watches** - These are the watches for things that aren't resources in Consul. \n4. **Reconciler** - This is the instance that's responsible for reconciling requests whenever there's an event for the main watched type or for any of the watched types.\n5. **Initializer** - This is responsible for anything that needs to be executed when the controller is started.\n\nA basic controller setup could look like this:\n\n```go\nfunc barController() controller.Controller {\n    return controller.NewController(\"bar\", pbexample.BarType).\n        WithReconciler(barReconciler{})\n}\n```\n\nbarReconciler needs to implement the `Reconcile` method of the `Reconciler` interface. \nIt's important to note that the `Reconcile` method only gets the request with the `ID` of the main\nwatched resource and so it's up to the reconcile implementation to fetch the resource and any relevant information needed\nto perform the reconciliation. 
The most basic reconciler could look as follows:\n\n```go\ntype barReconciler struct {}\n\nfunc (b *barReconciler) Reconcile(ctx context.Context, rt controller.Runtime, req controller.Request) error {\n    ...\n}\n```\n\n## Watching Additional Resources\n\nMost of the time, controllers will need to watch more resources in addition to the main watched type. \nTo set up an additional watch, the main thing we need to figure out is how to map an additional watched resource to the main\nwatched resource. The controller runtime allows us to implement a mapper function that can take the additional watched resource\nas the input and produce reconcile `Requests` for our main watched type.\n\nTo figure out how to map the two resources together, we need to think about the relationship between the two resources.\n\nThere are several common relationship types between resources that are currently in use:\n1. Name-alignment: this relationship means that resources are named the same and live in the same tenancy, but have different data. Examples: `Service` and `ServiceEndpoints`, `Workload` and `ProxyStateTemplate`.\n2. Selector: this relationship happens when one resource selects another by name or name prefix. Examples: `Service` and `Workload`, `ProxyConfiguration` and `Workload`.\n3. Owner: in this relationship, one resource is the owner of another resource. Examples: `Service` and `ServiceEndpoints`, `HealthStatus` and `Workload`.\n4. Arbitrary reference: in this relationship, one resource may reference another by some sort of reference. This reference could be a single string in the resource data or a more composite reference containing name, tenancy, and type. Examples: `Workload` and `WorkloadIdentity`, `HTTPRoute` and `Service`.\n\nNote that it's possible for the two watched resources to have more than one relationship type simultaneously. 
\nFor example, the `FailoverPolicy` type is name-aligned with the service to which it applies; however, it also contains\nreferences to destination services, and for a controller that reconciles `FailoverPolicy` and watches `Service`\nwe need to account for both type 1 and type 4 relationships whenever we get an event for a `Service`. \n\n### Simple Mappers\n\nLet's look at some simple mapping examples. \n\n#### Name-aligned resources\nIf our resources only have a name-aligned relationship, we can map them with a built-in function:\n\n```go\nfunc barController() controller.Controller {\n    return controller.NewController(\"bar\", pbexample.BarType).\n        WithWatch(pbexample.FooType, controller.ReplaceType(pbexample.BarType)). \n        WithReconciler(barReconciler{})\n}\n```\n\nHere, all we need to do is replace the type of the `Foo` resource whenever we get an event for it.\n\n#### Owned resources\n\nLet's say our `Foo` resource owns `Bar` resources, where any `Foo` resource can own multiple `Bar` resources. \nIn this case, whenever we see a new event for `Foo`, all we need to do is get all `Bar` resources that `Foo` currently owns.\nFor this, we can use a mapper function like the following to set up our watch:\n\n```go\nfunc MapOwned(ctx context.Context, rt controller.Runtime, res *pbresource.Resource) ([]controller.Request, error) {\n    resp, err := rt.Client.ListByOwner(ctx, &pbresource.ListByOwnerRequest{Owner: res.Id})\n    if err != nil {\n        return nil, err\n    }\n\n    var result []controller.Request\n    for _, r := range resp.Resources {\n        result = append(result, controller.Request{ID: r.Id})\n    }\n\n    return result, nil\n}\n\nfunc barController() controller.Controller {\n    return controller.NewController(\"bar\", pbexample.BarType).\n        WithWatch(pbexample.FooType, MapOwned). 
\n        WithReconciler(barReconciler{})\n}\n```\n\n### Advanced Mappers and Caches\n\nFor selector or arbitrary reference relationships, the mapping that we choose may need to be more advanced. \n\n#### Naive mapper implementation\n\nLet's first consider what a naive mapping function could look like in this case. Let's say that the `Bar` resource\nreferences a `Foo` resource by name in its data. Now to watch and map `Foo` resources, we need to be able to find all relevant `Bar` resources\nwhenever we get an event for a `Foo` resource.\n\n```go\nfunc MapFoo(ctx context.Context, rt controller.Runtime, res *pbresource.Resource) ([]controller.Request, error) {\n    resp, err := rt.Client.List(ctx, &pbresource.ListRequest{Type: pbexample.BarType, Tenancy: res.Id.Tenancy})\n    if err != nil {\n        return nil, err\n    }\n\n    var result []controller.Request\n    for _, r := range resp.Resources {\n        decodedResource, err := resource.Decode[*pbexample.Bar](r)\n        if err != nil {\n            return nil, err\n        }\n\n        \/\/ Only add Bar resources that match Foo by name.\n        if decodedResource.GetData().GetFooName() == res.Id.Name {\n            result = append(result, controller.Request{ID: r.Id})\n        }\n    }\n\n    return result, nil\n}\n```\n\nThis approach is fine for cases when the number of `Bar` resources in a cluster is relatively small. If it's not,\nthen we'd be doing a large `O(N)` search on each `Foo` event, which could be too expensive. \n\n#### Caching Mappers\n\nFor cases when `N` is too large, we'd want to use a caching layer to help us make lookups more efficient so that they\ndon't require an `O(N)` search of potentially all cluster resources.\n\nThe controller runtime contains a controller cache and the facilities to keep the cache up to date in response to watches. Additionally, dependency mappers are provided for querying the cache. 
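The core idea behind such a caching layer can be illustrated with a small standalone sketch (this is not the Consul cache API; the type and method names here are hypothetical): keep a reverse index from a referenced `Foo` name to the `Bar` resources that reference it, so a mapper can answer "which Bars reference this Foo?" with a single map lookup instead of listing and decoding every `Bar`.

```go
package main

import "fmt"

// fooIndex is a hypothetical reverse index from a referenced Foo name to the
// set of Bar names that reference it. Keeping such an index up to date turns
// the O(N) scan done by the naive mapper into an O(1) lookup.
type fooIndex struct {
	barsByFoo map[string]map[string]struct{}
}

func newFooIndex() *fooIndex {
	return &fooIndex{barsByFoo: map[string]map[string]struct{}{}}
}

// Track records that the Bar named barName references the Foo named fooName.
func (i *fooIndex) Track(barName, fooName string) {
	if i.barsByFoo[fooName] == nil {
		i.barsByFoo[fooName] = map[string]struct{}{}
	}
	i.barsByFoo[fooName][barName] = struct{}{}
}

// Lookup returns the names of the Bars that reference the given Foo.
func (i *fooIndex) Lookup(fooName string) []string {
	var out []string
	for bar := range i.barsByFoo[fooName] {
		out = append(out, bar)
	}
	return out
}

func main() {
	idx := newFooIndex()
	idx.Track("bar-1", "foo-a")
	idx.Track("bar-2", "foo-a")
	fmt.Println(len(idx.Lookup("foo-a"))) // 2
}
```

The builtin cache described below plays this role for real controllers, with the added complexity of keeping the index correct as resources change.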
\n\n_While it is possible to not use the builtin cache and manage state in dependency mappers yourself, this can get quite complex, and reasoning about the correct times to track and untrack relationships is tricky to get right. Usage of the cache is therefore the advised approach._\n\nAt a high level, the controller author provides the indexes to track for each watched type and can then query those indexes in the future. The querying can occur during both dependency mapping and resource reconciliation.\n\nThe following example shows how to configure the \"bar\" controller to re-reconcile a `Bar` resource whenever a `Foo` resource that references it changes:\n\n```go\nfunc fooReferenceFromBar(r *resource.DecodedResource[*pbexample.Bar]) (bool, []byte, error) {\n    idx := index.IndexFromRefOrID(&pbresource.ID{\n        Type: pbexample.FooType,\n        Tenancy: r.Id.Tenancy,\n        Name: r.Data.GetFooName(),\n    })\n\n    return true, idx, nil\n}\n\nfunc barController() controller.Controller {\n    fooIndex := indexers.DecodedSingleIndexer(\n        \"foo\",\n        index.ReferenceOrIDFromArgs,\n        fooReferenceFromBar,\n    )\n\n    return controller.NewController(\"bar\", pbexample.BarType, fooIndex).\n        WithWatch(\n            pbexample.FooType,\n            dependency.CacheListMapper(pbexample.BarType, fooIndex.Name()),\n        ).\n        WithReconciler(barReconciler{})\n}\n```\n\nThe controller will now reconcile `Bar` resources whenever the `Foo` resources they reference are updated. No further tracking is necessary, as changes to all `Bar` resources will automatically update the cache.\n\nOne limitation of the cache is that it only has knowledge about the current state of resources. That specifically means that the previous state is forgotten once the cache observes a write. 
This can be problematic when you want to reconcile a resource to no longer take into account something that previously referenced it. \n\nLet's say there are two types, `Baz` and `ComputedBaz`, and a controller that aggregates all `Baz` resources with some value into a single `ComputedBaz` object. When\na `Baz` resource gets updated to no longer have a value, it should not be represented in the `ComputedBaz` resource. The typical way to work around this is to:\n\n1. Store references to the resources that were used during reconciliation within the computed\/reconciled resource. For types computed by controllers and not expected to be written directly by users, a `bound_references` field should be added to the top-level resource type's message. For other user-manageable types, the references may need to be stored within the Status field.\n\n2. Add a cache index to the watch of the computed type (usually the controller's main managed type). This index can use one of the indexers specified within the [`internal\/controller\/cache\/indexers`](..\/..\/..\/internal\/controller\/cache\/indexers\/) package. That package contains some built-in functionality around reference indexing.\n\n3. Update the dependency mappers to query the cache index *in addition to* looking at the current state of the dependent resource. 
In our example above, the `Baz` dependency mapper could use the [`MultiMapper`] to combine querying the cache for `Baz` types that currently should be associated with a `ComputedBaz` and querying the index added in step 2 for previous references.\n\n#### Footgun: Needing Bound References\n\nWhen an interior (mutable) foreign key pointer on watched data is used to\ndetermine the resource's applicability in a dependency mapper, it is subject\nto the \"orphaned computed resource\" problem.\n\n(An example of this would be a ParentRef on an xRoute, or the Destination field\nof a TrafficPermission.)\n\nWhen you edit the mutable pointer to point elsewhere, the DependencyMapper will\nonly witness the NEW value and will trigger reconciles for things derived from\nthe NEW pointer, but side effects from a prior reconcile using the OLD pointer\nwill be orphaned until some other event triggers that reconcile (if ever).\n\nThis applies equally to all varieties of controller, whether the controller:\n\n- creates computed resources,\n- only updates status conditions on existing resources, or\n- has other external side effects (e.g., the xDS controller writes Envoy config over a stream).\n\nTo solve this we need to collect the list of bound references that were\n\"ingredients\" into a computed resource's output and persist them on the newly\nwritten resource. Then we load them up and index them such that we can use them\nto AUGMENT a mapper event with additional maps using the OLD data as well.\n\nWe have only actively worked to solve this for the computed resource flavor of\ncontroller:\n\n1. The top level of the resource data protobuf needs a\n   `BoundReferences []*pbresource.Reference` field.\n\n2. Use a `*resource.BoundReferenceCollector` to capture any resource during\n   `Reconcile` that directly contributes to the final output resource data\n   payload.\n\n3. Call `brc.List()` on the above and set it to the `BoundReferences` field on\n   the computed resource before persisting.\n\n4. 
Use `indexers.BoundRefsIndex` to index this field on the primary type of the\n   controller.\n\n5. Create `boundRefsMapper := dependency.CacheListMapper(ZZZ, boundRefsIndex.Name())`\n\n6. For each watched type, wrap its DependencyMapper with\n   `dependency.MultiMapper(boundRefsMapper, ZZZ)`\n\n7. That's it.\n\nThis will cause each reconcile to index the prior list of inputs and augment\nthe results of future mapper events with historical references.\n\n### Custom Watches\n\nIn some cases, we may want to trigger reconciles for events that aren't generated from CRUD operations on resources, for example\nwhen an Envoy proxy connects to or disconnects from a server. Controller-runtime allows us to set up watches from\nevents that come from a custom event channel. Please see [xds-controller](https:\/\/github.com\/hashicorp\/consul\/blob\/ecfeb7aac51df8730064d869bb1f2c633a531522\/internal\/mesh\/internal\/controllers\/xds\/controller.go#L40-L41) for examples of custom watches.\n\n## Statuses\n\nIn many cases, controllers need to update statuses on resources to let the user know about the successful or unsuccessful\nstate of a resource.\n\nThese are the guidelines that we recommend for statuses:\n\n* While status conditions are stored as a list, the Condition type should be treated as a key in a map, meaning a resource should not have two status conditions with the same type.\n* Controllers need to update both successful and unsuccessful condition states. This is because we need to make sure that we clear any failed status conditions.\n* Status conditions should be named such that the `True` state is a successful state and the `False` state is a failed state. \n\n## Best Practices\n\nBelow is a list of controller best practices that we've learned so far. Many of them are inspired by [kubebuilder](https:\/\/book.kubebuilder.io\/reference\/good-practices).\n\n* Avoid monolithic controllers as much as possible. 
A single controller should only manage a single resource to avoid complexity and race conditions.\n* If using cached mappers, aim to write (update or delete entries) to mappers in the `Reconcile` method and read from them in the mapper functions used by watches.\n* Fetch all data in the `Reconcile` method and avoid caching it from the mapper functions. This ensures that we get the latest data for each reconciliation.","site":"consul"}
{"questions":"consul Installation CPU Profiles pprof A Profile is a collection of stack traces showing the call sequences that led to instances of a particular event such as allocation It does not track time spent sleeping or waiting for I O Every 10ms the CPU profiler interrupts a thread in the program and records the stack traces of running goroutines","answers":"# pprof\n\n> A Profile is a collection of stack traces showing the call sequences that led to instances of a particular event, such as allocation.\n\n## Installation\n`go install github.com\/google\/pprof@latest`\n\n### CPU Profiles\n* Every 10ms the CPU profiler interrupts a thread in the program and records the stack traces of **running** goroutines.\n\t* It does **not** track time spent sleeping or waiting for I\/O.\n* The duration of a sample is assumed to be 10ms, so the seconds spent in a function are calculated as: `(num_samples * 10ms)\/1000`\n\n### Heap Profiles\n* Aims to record the stack trace for every 512KB allocated, at the point of the allocation.\n\t* Tracks allocations and when profiled allocations are freed. With both of these data points pprof can calculate allocations and memory in use.\n* `alloc_objects` and `alloc_space` refer to allocations since the start of the profiling period.\n\t* Helpful for tracking functions that produce a lot of allocations.\n* `inuse_objects` and `inuse_space` refer to allocations that have not been freed.\n\t* `inuse_objects = allocations - frees`\n\t* Helpful for tracking sources of high memory usage.\n\t* May not line up with OS-reported memory usage! The profiler only tracks heap memory usage, and there are also cases where the Go GC will not release free heap memory to the OS. \n* When allocations are made, the heap sampler makes a decision about whether to sample the allocation or not. 
If it decides to sample it, it records the stack trace, counts the allocation, and counts the number of bytes allocated.\n\t* When an object's memory is freed, the heap profiler checks whether that allocation was sampled, and if it was, it counts the bytes freed. \n\n### Goroutine Profiles\n* Shows the number of times each stack trace was seen across goroutines.\n\n## Common commands\n* Open a profile in the terminal:\n`go tool pprof profile.prof`\n`go tool pprof heap.prof`\n`go tool pprof goroutine.prof`\n\n* Open a profile in a web browser:\n`go tool pprof -http=:8080 profile.prof`\n\n* If the correct source isn't detected automatically, specify the location of the source associated with the profile:\n`go tool pprof -http=:9090 -source_path=\/Users\/freddy\/go\/src\/github.com\/hashicorp\/consul-enterprise profile.prof`\n\nUseful in annotated `Source` view which shows time on a line-by-line basis.\n\n**Important:** Ensure that the source code matches the version of the binary! The source view relies on line numbers for its annotations.\n\n* Compare two profiles:\n`go tool pprof -http=:8080 -base before\/profile.prof after\/profile.prof`\nThis comparison will subtract the `-base` profile from the given profile. In this case, \"before\" is subtracted from \"after\". \n\n* Useful commands when profile is opened in terminal:\n\t* `top` lists the top 10 nodes by value\n\t* `list <function regex>` lists the top matches to the pattern\n\n## Graph view\n* **Node Values:**\n\t* Package\n\t* Function\n\t* Flat value: the value in the function itself.\n\t* Cumulative value: the sum of the flat value and all its descendants.\n\t* Percentage of flat and cumulative values relative to total samples. Total sample time is visible with `top` command in terminal view.\n\nExample:\n```\nfunc foo(){\n    a()                                 \/\/ step 1 takes 1s\n    do something directly.              
\/\/ step 2 takes 3s\n    b()                                 \/\/ step 3 takes 1s\n}\n```\n\n`flat` is the time spent on step 2 (3s),  while `cum` is time spent on steps 1 to 3. \n\n* **Path Value**\n\t* The cumulative value of the following node.\n\n* **Node Color**:\n  * large positive cumulative values are red.\n  * cumulative values close to zero are grey.\n  * large negative cumulative values are green; negative values are most likely to appear during profile comparison.\n\n* **Node Font Size**:\n  * larger font size means larger absolute flat values.\n  * smaller font size means smaller absolute flat values. \n\n* **Edge Weight**:\n  * thicker edges indicate more resources were used along that path.\n  * thinner edges indicate fewer resources were used along that path.\n\n* **Edge Color**:\n  * large positive values are red.\n  * large negative values are green.\n  * values close to zero are grey.\n\n* **Dashed Edges**: some locations between the two connected locations were removed.  \n\n* **Solid Edges**: one location directly calls the other.\n\n* **\"(inline)\" Edge Marker**: the call has been inlined into the caller. More on inlining: [Inlining optimisations in Go | Dave Cheney](https:\/\/dave.cheney.net\/2020\/04\/25\/inlining-optimisations-in-go)\n\nExample graph:\n![Nodes](.\/pprof_cpu_nodes.png)\n\n* 20.11s is spent doing direct work in `(*Store).Services()`\n* 186.88s is spent in this function **and** its descendants\n* `(*Store).Services()` has both large `flat` and `cumulative` values, so the font is large and the box is red.\n* The edges to `mapassign_faststr` and `(radixIterator).Next()` are solid and red because these are direct calls with large positive values. 
\n\n\n## Flame graph view\nA collection of stack traces, where each stack is a column of boxes, and each box represents a function.\n\nFunctions at the top of the flame graph are parents of functions below.\n\nThe width of each box is proportional to the number of times it was observed during sampling. \n\nMousing over a box shows its `cum` value and percentage, while clicking on a box lets you zoom into its stack trace.\n\n### Note\n* The background color is **not** significant.\n* Sibling boxes are not necessarily in chronological order.\n\n\n## References:\n* [Diagnostics - The Go Programming Language](https:\/\/go.dev\/doc\/diagnostics)\n* [Profiling Go programs with pprof](https:\/\/jvns.ca\/blog\/2017\/09\/24\/profiling-go-with-pprof\/)\n* [pprof\/README.md](https:\/\/github.com\/google\/pprof\/blob\/master\/doc\/README.md)\n* [GitHub - DataDog\/go-profiler-notes: felixge's notes on the various go profiling methods that are available.](https:\/\/github.com\/DataDog\/go-profiler-notes)\n* [The Flame Graph - ACM Queue](https:\/\/queue.acm.org\/detail.cfm?id=2927301)\n* [High Performance Go Workshop](https:\/\/dave.cheney.net\/high-performance-go-workshop\/dotgo-paris.html#pprof)\n* [Pprof and golang - how to interpret a results?](https:\/\/stackoverflow.com\/a\/56882137)","site":"consul","answers_cleaned":"  pprof    A Profile is a collection of stack traces showing the call sequences that led to instances of a particular event  such as allocation      Installation  go install github com google pprof latest       CPU Profiles   Every 10ms the CPU profiler interrupts a thread in the program and records the stack traces of   running   goroutines      It does   not   track time spent sleeping or waiting for I O    The duration of a sample is assumed to be 10ms  so the seconds spent in a function is calculated as    num samples   10ms  1000       Heap Profiles    Aims to record the stack trace for every 512KB allocated  at the point of the allocation     Tracks 
allocations and when profiled allocations are freed  With both of these data points pprof can calculate allocations and memory in use     alloc objects  and  alloc space  refer to allocations since the start of the profiling period     Helpful for tracking functions that produce a lot of allocations     inuse objects  and  inuse space  refers to allocations that have not been freed      inuse objects   allocations   frees     Helpful for tracking sources of high memory usage     May not line up with OS reported memory usage  The profiler only tracks heap memory usage  and there are also cases where the Go GC will not release free heap memory to the OS     When allocations are made  the heap sampler makes a decision about whether to sample the allocation or not  If it decides to sample it  it records the stack trace  counts the allocation  and counts the number of bytes allocated     When an object s memory is freed  the heap profiler checks whether that allocation was sampled  and if it was  it counts the bytes freed        Goroutine Profiles   Shows the number of times each stack trace was seen across goroutines      Common commands   Open a profile in the terminal   go tool pprof profile prof   go tool pprof heap prof   go tool pprof goroutine prof     Open a profile in a web browser   go tool pprof  http  8080 profile prof     If the correct source isn t detected automatically  specify the location of the source associated with the profile   go tool pprof  http  9090  source path  Users freddy go src github com hashicorp consul enterprise profile prof   Useful in annotated  Source  view which shows time on a line by line basis     Important    Ensure that the source code matches the version of the binary  The source view relies on line numbers for its annotations     Compare two profiles   go tool pprof  http  8080  base before profile prof after profile prof  This comparison will subtract the   base  profile from the given profile  In this case   before  is 
subtracted from  after       Useful commands when profile is opened in terminal      top  lists the top 10 nodes by value     list  function regex   lists the top matches to the pattern     Graph view     Node Values       Package    Function    Flat value  the value in the function itself     Cumulative value  the sum of the flat value and all its descendants     Percentage of flat and cumulative values relative to total samples  Total sample time is visible with  top  command in terminal view   Example      func foo        a                                      step 1 takes 1s     do something directly                  step 2 takes 3s     b                                      step 3 takes 1s         flat  is the time spent on step 2  3s    while  cum  is time spent on steps 1 to 3        Path Value      The cumulative value of the following node       Node Color        large positive cumulative values are red      cumulative values close to zero are grey      large negative cumulative values are green  negative values are most likely to appear during profile comparison       Node Font Size        larger font size means larger absolute flat values      smaller font size means smaller absolute flat values        Edge Weight        thicker edges indicate more resources were used along that path      thinner edges indicate fewer resources were used along that path       Edge Color        large positive values are red      large negative values are green      values close to zero are grey       Dashed Edges    some locations between the two connected locations were removed         Solid Edges    one location directly calls the other         inline   Edge Marker    the call has been inlined into the caller  More on inlining   Inlining optimisations in Go   Dave Cheney  https   dave cheney net 2020 04 25 inlining optimisations in go   Example graph    Nodes    pprof cpu nodes png     20 11s is spent doing direct work in    Store  Services      186 88s is spent in this 
function   and   its descendants      Store  Services    has both large  flat  and  cumulative  values  so the font is large and the box is red    The edges to  mapassign faststr  and   radixIterator  Next    are solid and red because these are direct calls with large positive values        Flame graph view A collection of stack traces  where each stack is a column of boxes  and each box represents a function   Functions at the top of the flame graph are parents of functions below   The width of each box is proportional to the number of times it was observed during sampling    Mouse over boxes shows  cum  value and percentage  while clicking on boxes lets you zoom into their stack traces       Note   The background color is   not   significant    Sibling boxes are not necessarily in chronological order       References     Diagnostics   The Go Programming Language  https   go dev doc diagnostics     Profiling Go programs with pprof  https   jvns ca blog 2017 09 24 profiling go with pprof      pprof README md  https   github com google pprof blob master doc README md      GitHub   DataDog go profiler notes  felixge s notes on the various go profiling methods that are available   https   github com DataDog go profiler notes     The Flame Graph   ACM Queue  https   queue acm org detail cfm id 2927301     High Performance Go Workshop  https   dave cheney net high performance go workshop dotgo paris html pprof     Pprof and golang   how to interpret a results   https   stackoverflow com a 56882137"}
{"questions":"vault Instead of starting your Vault server manually from the command line you can Run Vault as a service layout docs page title Run Vault as a service Configure and deploy Vault as a service for Linux or Windows","answers":"---\nlayout: docs\npage_title: Run Vault as a service\ndescription: >-\n  Configure and deploy Vault as a service for Linux or Windows.\n---\n\n# Run Vault as a service\n\nInstead of starting your Vault server manually from the command line, you can\nconfigure a service to start Vault automatically.\n\n## Before you start\n\n- **You must install Vault**. You can [use a package manager](\/vault\/install)\n  or [install a binary manually](\/vault\/docs\/install\/install-binary).\n\n\n## Step 1: Create a new service\n\n<Tabs>\n\n<Tab heading=\"Linux shell\" group=\"nix\">\n\n<Highlight title=\"Example tested on Ubuntu 22.04\">\n\n   The following service definition is a simpler version of the `vault.service`\n   example in the Vault GitHub repo: [vault\/.release\/linux\/package\/usr\/lib\/systemd\/system\/vault.service](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/.release\/linux\/package\/usr\/lib\/systemd\/system\/vault.service)\n   \n<\/Highlight>\n\n1. Set the `VAULT_CONFIG` environment variable to your Vault configuration\n   directory. The default configuration directory is `\/etc\/vault.d`:\n\n   ```shell-session\n   $ VAULT_CONFIG=\/etc\/vault.d\n   ```\n\n1. Confirm the path to your Vault binary:\n   ```shell-session\n   $ VAULT_BINARY=$(which vault)\n   ```\n   \n1. 
Create a `systemd` service called `vault.service` that uses the Vault\n   binary:\n\n   ```shell-session\n   $ sudo tee \/lib\/systemd\/system\/vault.service <<EOF\n   [Unit]\n   Description=\"HashiCorp Vault\"\n   Documentation=\"https:\/\/developer.hashicorp.com\/vault\/docs\"\n   ConditionFileNotEmpty=\"${VAULT_CONFIG}\/vault.hcl\"\n\n   [Service]\n   User=vault\n   Group=vault\n   SecureBits=keep-caps\n   AmbientCapabilities=CAP_IPC_LOCK\n   CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK\n   NoNewPrivileges=yes\n   ExecStart=${VAULT_BINARY} server -config=${VAULT_CONFIG}\/vault.hcl\n   ExecReload=\/bin\/kill --signal HUP\n   KillMode=process\n   KillSignal=SIGINT\n\n   [Install]\n   WantedBy=multi-user.target\n   EOF\n   ```\n\n1. Change the permissions on `\/lib\/systemd\/system\/vault.service` to `644`:\n\n   ```shell-session\n   $ sudo chmod 644 \/lib\/systemd\/system\/vault.service\n   ```\n\n<\/Tab>\n\n<Tab heading=\"Powershell\" group=\"ps\">\n\nThe Windows binary for Vault does not support the Windows Service Application\nAPI. To run Vault as a service, you must use a Windows service wrapper. You can\nuse whatever wrapper is appropriate for your environment, but the easiest we\nhave found is `nssm`.\n\n1. Download and install [`nssm`](https:\/\/nssm.cc\/) manually or install the\n   package with [Chocolatey](https:\/\/chocolatey.org\/):\n\n   ```powershell\n   choco install nssm\n   ```\n\n1. Set a `VAULT_HOME` environment variable to your preferred Vault home\n   directory. For example, `c:\\Program Files\\Vault`:\n\n   ```powershell\n   $env:VAULT_HOME = \"${env:ProgramFiles}\\Vault\"\n   ```\n\n1. Use `nssm` to create a new Windows service:\n   ```powershell\n   nssm install MS_VAULT \"${env:VAULT_HOME}\\vault.exe\"\n   ```\n\n1. Set the working directory for your Vault installation:\n   ```powershell\n   nssm set MS_VAULT AppDirectory \"${env:VAULT_HOME}\"\n   ```\n\n1. 
Define the runtime parameters for Vault, including the\n   `-config` flag with the relative path to your Vault configuration file, for\n   example `Config\\vault.hcl`:\n   ```powershell\n   nssm set MS_VAULT AppParameters \"server -config Config\\vault.hcl\"\n   ```\n\n1. Set the display name and description for the \"Services\"\n   management console:\n   ```powershell\n   nssm set MS_VAULT DisplayName \"Vault Service\" ; `\n   nssm set MS_VAULT Description \"Vault server running as a service\"\n   ```\n\n1. Set the startup type for your service. We recommend setting startup to\n   \"Manual\" until you confirm the service is working as expected:\n   ```powershell\n   nssm set MS_VAULT Start SERVICE_DEMAND_START\n   ```\n\n1. Configure the service to pipe information from `stdout` and `stderr` to files\n   under your logging directory, for example `${env:VAULT_HOME}\\Logs`:\n   ```powershell\n   nssm set MS_VAULT AppStdout \"${env:VAULT_HOME}\\Logs\\vault-stdout.log\" ; `\n   nssm set MS_VAULT AppStderr \"${env:VAULT_HOME}\\Logs\\vault-error.log\"\n   ```\n\n1. Optionally, you can use the `AppEnvironmentExtra` parameter to set relevant\n   variables for the service environment. For example, to set the `VAULT_ADDR`\n   environment variable:\n\n   ```powershell\n   nssm set MS_VAULT AppEnvironmentExtra `$env:VAULT_ADDR=https:\/\/localhost:8200\n   ```\n\n1. Confirm your Vault service settings with `nssm`:\n\n   ```powershell\n   nssm dump MS_VAULT | Foreach {$_ -replace '.+nssm\\.exe ',''}\n   ```\n\n<\/Tab>\n\n<\/Tabs>\n\n## Step 2: Start the new service\n\n<Tabs>\n\n<Tab heading=\"Linux shell\" group=\"nix\">\n\n1. Reload the `systemd` configuration:\n\n   ```shell-session\n   $ sudo systemctl daemon-reload\n   ```\n\n1. Start the Vault service:\n\n   ```shell-session\n   $ sudo systemctl start vault.service\n   ```\n\n1. 
Verify the service status:\n\n   ```shell-session\n      $ systemctl status vault.service\n\n      vault.service - \"HashiCorp Vault\"\n         Loaded: loaded (\/lib\/systemd\/system\/vault.service; disabled; vendor preset: enabled)\n         Active: active (running) since Thu 2024-09-05 13:58:45 UTC; 4s ago\n            Docs: https:\/\/developer.hashicorp.com\/vault\/docs\n         Main PID: 3145 (vault)\n            Tasks: 8 (limit: 2241)\n         Memory: 23.6M\n            CPU: 200ms\n         CGroup: \/system.slice\/vault.service\n                  \u2514\u25003145 \/usr\/bin\/vault server -config=\/etc\/vault.d\/vault.hcl\n   ```\n\n<\/Tab>\n\n<Tab heading=\"Powershell\" group=\"ps\">\n\n<Highlight title=\"Use Powershell commands or wrapper commands to manage your service\">\n\n   Once you create the service, you can control it using standard `*-Service`\n   cmdlets **or** the relevant commands for the associated wrapper. For example,\n   to control the service with `nssm` use `nssm start MS_VAULT`.\n\n<\/Highlight>\n\n1. Start the Vault service:\n   ```powershell\n   Start-Service -Name MS_VAULT\n   ```\n\n1. 
Confirm service status:\n\n   ```powershell\n   Get-Service -Name MS_VAULT\n\n   Status   Name               DisplayName\n   ------   ----               -----------\n   Running  MS_VAULT           Vault Service\n   ```\n\n<\/Tab>\n\n<\/Tabs>\n\n## Step 3: Verify the service is running\n\nTo confirm the service is running and your Vault service is available, open the\nVault GUI in a browser at the default address:\n[http:\/\/localhost:8200](http:\/\/localhost:8200)\n\n## Related tutorials\n\nThe following tutorials provide additional guidance for installing Vault and\nproduction cluster deployment:\n\n- [Day One Preparation](\/vault\/tutorials\/day-one-raft)\n- [Recommended Patterns](\/vault\/tutorials\/recommended-patterns)","site":"vault","answers_cleaned":"    layout  docs page title  Run Vault as a service description       Configure and deploy Vault as a service for Linux or Windows         Run Vault as a service  Instead of starting your Vault server manually from the command line  you can configure a service to start Vault automatically      Before you start       You must install Vault    You can  use a package manager   vault install    or  install a binary manually   vault docs install install binary        Step 1  Create a new service   Tabs    Tab heading  Linux shell  group  nix     Highlight title  Example tested on Ubuntu 22 04       The following service definition is a simpler version of the  vault service     example in the Vault GitHub repo   vault  release linux package usr lib systemd system vault service  https   github com hashicorp vault blob main  release linux package usr lib systemd system vault service        Highlight   1  Set the  VAULT CONFIG  environment variable to your Vault configuration    directory  The default configuration directory is   etc vault d          shell session      VAULT CONFIG  etc vault d         1  Confirm the path to your Vault binary              VAULT BINARY   which vault             1  Create a  systemd  
service called  vault service  that uses the Vault    binary         shell session      sudo tee  lib systemd system vault service   EOF     Unit     Description  HashiCorp Vault     Documentation  https   developer hashicorp com vault docs     ConditionFileNotEmpty    VAULT CONFIG  vault hcl       Service     User vault    Group vault    SecureBits keep caps    AmbientCapabilities CAP IPC LOCK    CapabilityBoundingSet CAP SYSLOG CAP IPC LOCK    NoNewPrivileges yes    ExecStart   VAULT BINARY  server  config   VAULT CONFIG  vault hcl    ExecReload  bin kill   signal HUP    KillMode process    KillSignal SIGINT      Install     WantedBy multi user target    EOF         1  Change the permissions on   lib systemd system vault service  to  644          shell session      sudo chmod 644  lib systemd system vault service           Tab    Tab heading  Powershell  group  ps    The Windows binary for Vault does not support the Windows Service Application API  To run Vault as a service  you must use a Windows service wrapper  You can use whatever wrapper is appropriate for your environment  but the easiest we have found is  nssm    1  Download and install   nssm   https   nssm cc   manually or install the    package with  Chocolatey  https   chocolatey org           powershell    choco install nssm         1  Set a  VAULT HOME  environment variable to your preferred Vault home    directory  For example   c  Program Files Vault          powershell     env VAULT HOME      env ProgramFiles  Vault          1  Use  nssm  to create a new Windows service        powershell    nssm install MS VAULT    env VAULT HOME  vault exe          1  Set the working directory for your Vault installation        powershell    nssm set MS VAULT AppDirectory    env VAULT HOME          nssm set MS VAULT AppParameters  server  config Config vault hcl          1  Define the runtime parameters for Vault  including the      config  flag with the relative path to your Vault configuration file  for    
example  Config vault hcl         powershell    nssm set MS VAULT AppDirectory    env VAULT HOME          nssm set MS VAULT AppParameters  server  config Config vault hcl          1  Set the display name and description for the  Services     management console        powershell    nssm set MS VAULT DisplayName  Vault Service         nssm set MS VAULT Description  Vault server running as a service          1  Set the startup type for your service  We recommend setting startup to     Manual  until you confirm the service is working as expected        powershell    nssm set MS VAULT Start SERVICE DEMAND START         1  Configure the service to pipe information from  stdout  and  stderr  to files    under your logging directory  for example    env VAULT HOME  Logs         powershell    nssm set MS VAULT AppStdout    env VAULT HOME  Logs vault stdout log         nssm set MS VAULT AppStderr    env VAULT HOME  Logs vault error log          1  Optionally  you can use the  AppEnvironmentExtra  parameter to set relevant    variables for the service environment  For example  to set the  VAULT ADDR     environment variable         powershell    nssm set MS VAULT AppEnvironmentExtra   env VAULT ADDR https   localhost 8200         1  Confirm your Vault service settings with  nssm          powershell    nssm dump MS VAULT   Foreach      replace    nssm  exe                 Tab     Tabs      Step 2  Start the new service   Tabs    Tab heading  Linux shell  group  nix    1  Reload the  systemd  configuration         shell session      sudo systemctl daemon reload         1  Start the Vault service         shell session      sudo systemctl start vault service         1  Verify the service status         shell session         systemctl status vault service        vault service    HashiCorp Vault           Loaded  loaded   lib systemd system vault service  disabled  vendor preset  enabled           Active  active  running  since Thu 2024 09 05 13 58 45 UTC  4s ago             Docs  
https   developer hashicorp com vault docs          Main PID  3145  vault              Tasks  8  limit  2241           Memory  23 6M             CPU  200ms          CGroup   system slice vault service                     3145  usr bin vault server  config  etc vault d vault hcl           Tab    Tab heading  Powershell  group  ps     Highlight title  Use Powershell commands or wrapper commands to manage your service       Once you create the service  you can control it using standard    Service     cmdlets   or   the relevant commands for the associated wrapper  For example     to control the service with  nssm  use  nssm start MS VAULT      Highlight   1  Start the Vault service         powershell    Start Service  Name MS VAULT         1  Confirm service status         powershell    Get Service  Name MS VAULT     Status   Name               DisplayName                                               Running  MS VAULT           Vault Service           Tab     Tabs      Step 3  Verify the service is running  To confirm the service is running and your Vault service is available  open the Vault GUI in a browser at the default address   http   localhost 8200  http   localhost 8200      Related tutorials  The following tutorials provide additional guidance for installing Vault and production cluster deployment      Day One Preparation   vault tutorials day one raft     Recommended Patterns   vault tutorials recommended patterns "}
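The Linux service unit in the record above requires a configuration file at `${VAULT_CONFIG}/vault.hcl` (via `ConditionFileNotEmpty`) but never shows one. A minimal sketch of such a file follows; the storage path and listener address are assumed values, and TLS is disabled only to keep the local-testing example short:

```hcl
# Minimal vault.hcl sketch (assumed values, not taken from the doc above).
ui = true

storage "file" {
  path = "/opt/vault/data"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = 1   # local testing only; configure TLS in production
}
```

With this file in place, `ConditionFileNotEmpty` passes and `ExecStart` launches Vault against it; production deployments would typically use an HA storage backend and a TLS-enabled listener instead.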
{"questions":"vault Guide to partnership integrations and creating plugins for Vault Vault integration program page title Vault Integration Program layout docs The HashiCorp Vault Integration Program allows for partners to integrate their products to work with HashiCorp Vault Open Source or Enterprise versions or HashiCorp Cloud Platform https cloud hashicorp com HCP Vault Vault covers a relatively large surface area and thereby a large set of possible integrations some of which require the partner to build a Vault plugin or an integration that results in the partner s solution working tightly with Vault","answers":"---\nlayout: docs\npage_title: Vault Integration Program\ndescription: Guide to partnership integrations and creating plugins for Vault.\n---\n\n# Vault integration program\n\nThe HashiCorp Vault Integration Program allows for partners to integrate their products to work with HashiCorp Vault (Open Source or Enterprise versions) or [HashiCorp Cloud Platform](https:\/\/cloud.hashicorp.com\/) (HCP) Vault. Vault covers a relatively large surface area and thereby a large set of possible integrations, some of which require the partner to build a Vault plugin or an integration that results in the partner\u2019s solution working tightly with Vault.\n\nPartners integrating their solutions via the Vault Integration Process provide their customers a verified and seamless user experience.\n\nThis program is intended to be largely a self-service process with links and guidance to information sources, clearly defined steps, and checkpoints.\n\n## Types of Vault integrations\n\nVault is an Identity-based security solution that leverages trusted sources of identity to keep secrets and application data secured with one centralized, audited workflow for tightly controlling access to secrets across applications, systems, and infrastructure while encrypting data both in flight and at rest. 
For a full description of the current features please refer to the Vault [website](\/).\n\nThere are two main types of integrations with Vault. The first is Runtime Integrations which use Vault as part of a workflow. Many partners have integrations that use existing Vault deployments to retrieve various types of secrets for use in a partner\u2019s application or platform. The use cases can range from Vault storing and providing secrets, issuing or managing PKI certificates or acting as an external key management system.\n\nThe second type is where a partner develops a custom plugin. Vault has a secure [plugin](\/vault\/docs\/plugins) architecture. Vault\u2019s plugins are completely separate, standalone applications that Vault executes and communicates with over RPC.\n\nPlugins can be broken into two categories, Secrets Engines and Auth Methods. They can be built-in and bundled with the Vault binary, or be external that has to be manually registered. Built-in plugins are developed by HashiCorp, while external plugins can be developed by HashiCorp, technology partners, or the community. There is a curated collection of all plugins, both built-in and external, located on the [Vault Integrations](\/vault\/integrations) page.\n\nThe diagram below depicts the key Vault integration categories and types.\n\n![Integration Categories](\/img\/integration-program-vaulteco.png)\n\nMain Vault categories for partners to integrate with include:\n\n**Authentication Methods**: Authentication (or Auth) methods are plugin components in Vault that perform authentication and are responsible for assigning identity along with a set of policies to a user. Vault supports multiple auth methods\/identity models and partners can build a plugin that allows Vault to authenticate against the partners\u2019 platform. 
You can find more information about Vault Auth Methods [here](\/vault\/docs\/auth\/).\n\n**Runtime Integrations**: These types of integrations include integrations developed by partners that work with existing deployments of Vault and the partner\u2019s product as part of the customer's identity\/security workflow.\n\nOftentimes these integrations involve modifying a partner\u2019s product to become \u201cVault aware\u201d. There are two main components that need to be considered for this type of integration:\n1. How is the application going to authenticate itself to Vault?\n1. Support of Namespaces\n\nThere are many ways for an application to authenticate itself to Vault (see [Auth Methods](\/vault\/docs\/auth\/)), but we recommend partners use one of the following methods: [AppRole](\/vault\/docs\/auth\/approle), [JWT \/ OIDC](\/vault\/docs\/auth\/jwt), [TLS Certificates](\/vault\/docs\/auth\/cert) or [Username \/ Password](\/vault\/docs\/auth\/userpass). For an integration to be verified as production ready by HashiCorp, there needs to be at least one other Auth method supported besides [Token](\/vault\/docs\/auth\/token). Token is not recommended for use in production since it involves creating a manual long lived token (which is against best practice and poses a security risk). Using one of the above mentioned auth methods automatically creates short lived tokens and eliminates the need to manually generate a new token on a regular basis.\n\nAs the number of customers using Vault Enterprise increases, partners are encouraged to support [Namespaces](\/vault\/tutorials\/enterprise\/namespaces). By supporting Namespaces, there is an additional benefit that an integration should be able to work with HCP Vault Dedicated.\n\nHSM (Hardware Security Module) are specific types of runtime integrations and can be configured to work with new or existing Vault deployments. They provide an added level of security and compliance. 
The HSM communicates with Vault using the PKCS#11 protocol, so the integration work primarily involves verifying that the functionality operates correctly. You can find more information about Vault\u2019s HSM support [here](\/vault\/docs\/enterprise\/hsm). A list of HSMs that have been verified to work with Vault is shown in our [interoperability matrix](\/vault\/docs\/interoperability-matrix).\n\n**Audit\/Monitoring & Compliance**: Audit\/Monitoring and Compliance are components in Vault that keep a detailed log of all requests and responses to Vault. Because every operation with Vault is an API request\/response, the audit log contains every authenticated interaction with Vault, including errors. Vault supports multiple audit devices to support your business use case. You can find more information about Vault Audit Devices [here](\/vault\/docs\/audit\/).\n\n**Secrets Engines**: Secrets engines are plugin components which store, generate, or encrypt data. Secrets engines are provided with some set of data, take some action on that data, and then return a result. Some secrets engines store and read data, like an encrypted in-memory data structure, while other secrets engines connect to other services. Examples of Secrets Engines include identity modules of Cloud providers like AWS, Azure IAM models, Cloud (LDAP), database or certificate management. You can find more information about Vault Secrets Engines [here](\/vault\/docs\/secrets\/).\n\n-> **Note:** Integrations related to Vault\u2019s [storage](\/vault\/docs\/concepts\/storage) backend, [auto auth](\/vault\/docs\/agent-and-proxy\/autoauth), and [auto unseal](\/vault\/docs\/concepts\/seal#auto-unseal) functionality are not encouraged. 
Please reach out to [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com) for any questions related to this.\n\n### HCP Vault Dedicated\n\nHCP Vault Dedicated is a managed version of Vault which is operated by HashiCorp to allow customers to quickly get up and running. HCP Vault Dedicated uses the same binary as self-managed Vault Enterprise, and offers a consistent user experience. You can use the same Vault clients to communicate with HCP Vault Dedicated as you use to communicate with Vault. Most runtime integrations can be verified with HCP Vault Dedicated.\n\nSign up for HCP Vault Dedicated [here](https:\/\/portal.cloud.hashicorp.com\/) and check out [this](\/vault\/tutorials\/cloud) learn guide for quickly getting started.\n\n### Vault integration badges\n\nThere are two types of badges that partners could receive: Vault Enterprise Verified and HCP Vault Verified badges. Partners will be issued the Vault Enterprise badge for integrations that work with Vault Enterprise features such as namespaces, HSM support, or key management. Partners will be issued the HCP Vault Dedicated badge once their integration has been verified to work with HCP Vault Dedicated. The badge(s) would be displayed on their partner page (example: [MongoDB](https:\/\/www.hashicorp.com\/partners\/tech\/mongodb#vault) and can also be used on their own website to help provide better visibility and differentiation to customers. The process for verification of these integrations is detailed below.\n\n<span style=>\n<ImageConfig inline height={200} width={200}>\n\n![Vault Enterprise Badge](\/img\/VaultEnterprise_badge.png)\n\n<\/ImageConfig>\n<ImageConfig inline height={200} width={200}>\n\n![HCP Vault Dedicated](\/img\/HCPV_badge.png)\n\n<\/ImageConfig>\n<\/span>\n\n## Development process\n\nThe Vault integration development process is divided into six steps. 
By following these steps, Vault integrations can be developed alongside HashiCorp to ensure that the integrations are able to be verified and supported in Vault as quickly as possible. A visual representation of the self-guided steps is depicted below.\n\n![Development Process](\/img\/integration-program-devprocess.png)\n\n1. Engage: Initial contact between vendor and HashiCorp\n1. Enable: Information and articles to aid with the development of the integration\n1. Develop and Test: Integration development and testing process\n1. Review: HashiCorp verification of integration (iterative process)\n1. Release: Verified integration made available and listed on the HashiCorp website once the HashiCorp technology partnership agreement has been executed\n1. Support: Ongoing maintenance and support of the integration by the partner.\n\n### 1. engage\n\nPlease begin by providing some basic information about the integration that is being built via a simple [webform](https:\/\/docs.google.com\/forms\/d\/e\/1FAIpQLSfQL1uj-mL59bd2EyCPI31LT9uvVT-xKyoHAb5FKIwWwwJ1qQ\/viewform).\n\nThis information is recorded and used by HashiCorp to track the integration through various stages. The information is also used to notify the integration developer of any overlapping work, perhaps coming from the community so you may better focus resources.\n\nVault has a large and active community and ecosystem of partners that may have already started working on a similar integration. We'll do our best to connect similar parties to avoid duplicate work.\n\n### 2. 
enable\n\nWhile not mandatory, HashiCorp encourages partners to sign an MNDA (Mutual Non-Disclosure Agreement) to allow for open dialog and sharing of ideas during the integration process.\n\nIn an effort to support our self-serve model, we\u2019ve included links to resources, documentation, examples and best practices to guide you through the Vault integration development and testing process.\n\n- [Vault Tutorial and Learn Site](\/vault\/tutorials)\n- Sample development implemented by a [partner](https:\/\/www.hashicorp.com\/integrations\/venafi\/vault\/)\n- Example runtime integrations for reference: [F5](https:\/\/www.hashicorp.com\/integrations\/f5\/vault), [ServiceNow](https:\/\/www.hashicorp.com\/integrations\/servicenow\/vault)\n- [Vault Community Forum](https:\/\/discuss.hashicorp.com\/c\/vault)\n\nWe encourage partners to closely follow the above guidance. Adopting the same structure and coding patterns helps expedite the review and release cycles.\n\n### 3. develop and test\n\nFor our partners who are building runtime integrations with Vault, we encourage them to support multiple [authentication](\/vault\/docs\/auth) methods (e.g. Approle, JWT, K8s) besides tokens. Additionally we encourage them to add as much flexibility when specifying paths for secrets engines. For our partners who want to build a plugin, the only knowledge necessary to write a plugin is basic command-line skills and knowledge of the Go programming language. 
When writing in Go-Language, HashiCorp has found the integration development process to be straightforward and simple when partners pay close attention and follow the resources by adopting the same structure and coding patterns to help expedite the review and release cycles.\n\nPlease remember that all integrations should have the appropriate documentation to assist Vault users in configuring the integrations.\n\n**Auth Methods**\n\n- [Auth Methods documentation](\/vault\/docs\/auth)\n- [Example of how to build, install, and maintain auth method plugins plugin](https:\/\/www.hashicorp.com\/blog\/building-a-vault-secure-plugin)\n- [Sample plugin code](https:\/\/github.com\/hashicorp\/vault-auth-plugin-example)\n\n**Runtime Integration**\n\n- [Vault Tutorial and Learn Site](\/vault\/tutorials)\n- [Auth Methods documentation](\/vault\/docs\/auth)\n- [HSM documentation](\/vault\/docs\/enterprise\/hsm)\n- [HSM Configuration information](\/vault\/docs\/configuration\/seal\/pkcs11)\n\n**Audit, Monitoring & Compliance Integration**\n\n- [Audit devices documentation](\/vault\/docs\/audit)\n\n**Secrets Engine Integration**\n\n- [Secret engine documentation](\/vault\/docs\/secrets)\n- [Custom Secrets Engines | Vault - HashiCorp Learn](\/vault\/tutorials\/custom-secrets-engine)\n\n**HCP Vault Dedicated**\n\nThe process to spin up a testing instance of HCP Vault Dedicated is very [straightforward](\/vault\/tutorials\/cloud\/get-started-vault). HCP has been designed as a turn-key managed service so configuration is minimal. Furthermore, HashiCorp provides all new users an initial credit which lasts for a couple of months when using the [development](https:\/\/cloud.hashicorp.com\/products\/vault\/pricing) cluster. 
Used in conjunction with AWS free tier resources, there should be no cost beyond the time spent by the designated tester.\n\nThere are a couple of items to consider when determining if the integration will work with HCP Vault Dedicated.\n\n- Since HCP Vault Dedicated is running Vault Enterprise, the integration will need to be aware of [Namespaces](\/vault\/tutorials\/enterprise\/namespaces). This is important as the main namespace in HCP Vault Dedicated is called 'admin' which is different from the standard \u2018root\u2019 namespace in a self managed Vault instance. If the integration currently doesn't support namespaces, then an additional benefit of adding Namespace support is that this will also enable it to work with all self managed Vault Enterprise installations.\n- HCP Vault Dedicated is currently only deployed on AWS and so the partner\u2019s application should be able to be deployed or run in AWS. This is vital so that HCP Vault Dedicated is able to communicate with the application using a [private peered](\/hcp\/docs\/hcp\/network\/hvn-aws\/hvn-peering) connection via a [HashiCorp Virtual Network](\/hcp\/docs\/hcp\/network).\n\nAdditional resources:\n\n- [HCP Sign up](\/hcp\/docs\/hcp\/network)\n- [Namespaces - Vault Enterprise](\/vault\/docs\/enterprise\/namespaces)\n- [Create a Vault Cluster on HCP | HashiCorp Learn](\/vault\/tutorials\/cloud\/get-started-vault)\n\n### 4. review\n\nDuring the review process, HashiCorp will provide feedback on the newly developed integration for both Vault and HCP Vault Dedicated. This is an important step to allow HashiCorp to review and verify your Vault integration. 
Please reach out to [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com) for verification.\n\nThe review process can take some time to complete and may require some iterations through the code to address any problems identified by the HashiCorp team.\n\nOnce the integration has been verified, the partner is requested to sign the HashiCorp Technology Partner Agreement to have their integration listed on the HashiCorp website upon release.\n\n### 5. release\n\nAt this stage, it is expected that the integration is fully complete, the necessary documentation has been written, and HashiCorp has reviewed the integration.\n\nFor Auth or Secret Engine plugins specifically, once the plugin has been verified by HashiCorp, it is recommended the plugin be hosted on GitHub so it can more easily be downloaded and installed within Vault. We also encourage partners to list their plugin on the [Vault Integrations](\/vault\/integrations) page. This is in addition to the listing of the plugin on the technology partners\u2019 dedicated HashiCorp partner page. To have the plugin listed on the portal page, please open a pull request via the \u201cedit in GitHub\u201d link on the bottom of the page and add the plugin in the partner section.\n\nFor HCP Vault Dedicated verifications, the partner will be issued an HCP Vault Verified badge and will have this displayed on their partner page.\n\n### 6. support\n\nAt HashiCorp, we view the release step as the beginning of the journey. Getting the integration built is just the first step in enabling users to leverage it against their infrastructure. Once development is completed, ongoing effort is required to support the developed integration and address any issues in a timely manner.\n\nThe expectation from the partner is to create a mechanism to track and resolve all critical issues within 48 hours, and all other issues within 5 business days. 
This is a requirement given the critical nature of Vault to customers\u2019 operations. Partners who choose to not support their integration will not be considered a verified integration and cannot be listed on the website.\n\n## Checklist\n\nBelow is a checklist of steps that should be followed during the Vault integration development process. This reiterates the steps described above.\n\n- Fill out the [Vault Integration webform](https:\/\/docs.google.com\/forms\/d\/e\/1FAIpQLSfQL1uj-mL59bd2EyCPI31LT9uvVT-xKyoHAb5FKIwWwwJ1qQ\/viewform).\n- Develop and test Vault integration along with the documentation, send to [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com), to schedule an initial review.\n- Address review feedback and finalize the development process.\n- Provide HashiCorp with credentials for underlying infrastructure for test purposes.\n- Demo the integration.\n- Execute HashiCorp Partner Agreement Documents, review logo guidelines, partner listing and more.\n- Plan to continue supporting the integration with additional functionality and responding to customer issues\n\n## Contact us\n\nFor any questions or feedback, please contact us at: [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com)","site":"vault","answers_cleaned":"    layout  docs page title  Vault Integration Program description  Guide to partnership integrations and creating plugins for Vault         Vault integration program  The HashiCorp Vault Integration Program allows for partners to integrate their products to work with HashiCorp Vault  Open Source or Enterprise versions  or  HashiCorp Cloud Platform  https   cloud hashicorp com    HCP  Vault  Vault covers a relatively large surface area and thereby a large set of possible integrations  some of which require the partner to build a Vault plugin or an integration that results in the partner s solution working tightly with Vault   Partners integrating their solutions via the Vault Integration 
Process provide their customers a verified and seamless user experience   This program is intended to be largely a self service process with links and guidance to information sources  clearly defined steps  and checkpoints      Types of Vault integrations  Vault is an Identity based security solution that leverages trusted sources of identity to keep secrets and application data secured with one centralized  audited workflow for tightly controlling access to secrets across applications  systems  and infrastructure while encrypting data both in flight and at rest  For a full description of the current features please refer to the Vault  website       There are two main types of integrations with Vault  The first is Runtime Integrations which use Vault as part of a workflow  Many partners have integrations that use existing Vault deployments to retrieve various types of secrets for use in a partner s application or platform  The use cases can range from Vault storing and providing secrets  issuing or managing PKI certificates or acting as an external key management system   The second type is where a partner develops a custom plugin  Vault has a secure  plugin   vault docs plugins  architecture  Vault s plugins are completely separate  standalone applications that Vault executes and communicates with over RPC   Plugins can be broken into two categories  Secrets Engines and Auth Methods  They can be built in and bundled with the Vault binary  or be external that has to be manually registered  Built in plugins are developed by HashiCorp  while external plugins can be developed by HashiCorp  technology partners  or the community  There is a curated collection of all plugins  both built in and external  located on the  Vault Integrations   vault integrations  page   The diagram below depicts the key Vault integration categories and types     Integration Categories   img integration program vaulteco png   Main Vault categories for partners to integrate with include     
Authentication Methods    Authentication  or Auth  methods are plugin components in Vault that perform authentication and are responsible for assigning identity along with a set of policies to a user  Vault supports multiple auth methods identity models and partners can build a plugin that allows Vault to authenticate against the partners  platform  You can find more information about Vault Auth Methods  here   vault docs auth       Runtime Integrations    These types of integrations include integrations developed by partners that work with existing deployments of Vault and the partner s product as part of the customer s identity security workflow   Oftentimes these integrations involve modifying a partner s product to become  Vault aware   There are two main components that need to be considered for this type of integration  1  How is the application going to authenticate itself to Vault  1  Support of Namespaces  There are many ways for an application to authenticate itself to Vault  see  Auth Methods   vault docs auth     but we recommend partners use one of the following methods   AppRole   vault docs auth approle    JWT   OIDC   vault docs auth jwt    TLS Certificates   vault docs auth cert  or  Username   Password   vault docs auth userpass   For an integration to be verified as production ready by HashiCorp  there needs to be at least one other Auth method supported besides  Token   vault docs auth token   Token is not recommended for use in production since it involves creating a manual long lived token  which is against best practice and poses a security risk   Using one of the above mentioned auth methods automatically creates short lived tokens and eliminates the need to manually generate a new token on a regular basis   As the number of customers using Vault Enterprise increases  partners are encouraged to support  Namespaces   vault tutorials enterprise namespaces   By supporting Namespaces  there is an additional benefit that an integration should be 
able to work with HCP Vault Dedicated   HSM  Hardware Security Module  are specific types of runtime integrations and can be configured to work with new or existing Vault deployments  They provide an added level of security and compliance  The HSM communicates with Vault using the PKCS 11 protocol thereby resulting in the integration to primarily involve verification of the operation of the functionality  You can find more information about Vault s HSM support  here   vault docs enterprise hsm   A list of HSMs that have been verified to work with Vault is shown in our  interoperability matrix   vault docs interoperability matrix      Audit Monitoring   Compliance    Audit Monitoring and Compliance are components in Vault that keep a detailed log of all requests and responses to Vault  Because every operation with Vault is an API request response  the audit log contains every authenticated interaction with Vault  including errors  Vault supports multiple audit devices to support your business use case  You can find more information about Vault Audit Devices  here   vault docs audit       Secrets Engines    Secrets engines are plugin components which store  generate  or encrypt data  Secrets engines are provided with some set of data  that take some action on that data  and then return a result  Some secrets engines store and read data  like encrypted in memory data structure  other secrets engines connect to other services  Examples of Secrets Engines include identity modules of Cloud providers like AWS  Azure IAM models  Cloud  LDAP   database or certificate management  You can find more information about Vault Secrets Engines  here   vault docs secrets          Note    Integrations related Vault s  storage   vault docs concepts storage  backend   auto auth   vault docs agent and proxy autoauth   and  auto unseal   vault docs concepts seal auto unseal  functionality are not encouraged  Please reach out to  technologypartners hashicorp com  mailto technologypartners 
hashicorp com  for any questions related to this       HCP Vault Dedicated  HCP Vault Dedicated is a managed version of Vault which is operated by HashiCorp to allow customers to quickly get up and running  HCP Vault Dedicated uses the same binary as self managed Vault Enterprise  and offers a consistent user experience  You can use the same Vault clients to communicate with HCP Vault Dedicated as you use to communicate with Vault  Most runtime integrations can be verified with HCP Vault Dedicated   Sign up for HCP Vault Dedicated  here  https   portal cloud hashicorp com   and check out  this   vault tutorials cloud  learn guide for quickly getting started       Vault integration badges  There are two types of badges that partners could receive  Vault Enterprise Verified and HCP Vault Verified badges  Partners will be issued the Vault Enterprise badge for integrations that work with Vault Enterprise features such as namespaces  HSM support  or key management  Partners will be issued the HCP Vault Dedicated badge once their integration has been verified to work with HCP Vault Dedicated  The badge s  would be displayed on their partner page  example   MongoDB  https   www hashicorp com partners tech mongodb vault  and can also be used on their own website to help provide better visibility and differentiation to customers  The process for verification of these integrations is detailed below    span style    ImageConfig inline height  200  width  200      Vault Enterprise Badge   img VaultEnterprise badge png     ImageConfig   ImageConfig inline height  200  width  200      HCP Vault Dedicated   img HCPV badge png     ImageConfig    span      Development process  The Vault integration development process is divided into six steps  By following these steps  Vault integrations can be developed alongside HashiCorp to ensure that the integrations are able to be verified and supported in Vault as quickly as possible  A visual representation of the self guided steps is 
depicted below     Development Process   img integration program devprocess png   1  Engage  Initial contact between vendor and HashiCorp 1  Enable  Information and articles to aid with the development of the integration 1  Develop and Test  Integration development and testing process 1  Review  HashiCorp verification of integration  iterative process  1  Release  Verified integration made available and listed on the HashiCorp website once the HashiCorp technology partnership agreement has been executed 1  Support  Ongoing maintenance and support of the integration by the partner       1  engage  Please begin by providing some basic information about the integration that is being built via a simple  webform  https   docs google com forms d e 1FAIpQLSfQL1uj mL59bd2EyCPI31LT9uvVT xKyoHAb5FKIwWwwJ1qQ viewform    This information is recorded and used by HashiCorp to track the integration through various stages  The information is also used to notify the integration developer of any overlapping work  perhaps coming from the community so you may better focus resources   Vault has a large and active community and ecosystem of partners that may have already started working on a similar integration  We ll do our best to connect similar parties to avoid duplicate work       2  enable  While not mandatory  HashiCorp encourages partners to sign an MNDA  Mutual Non Disclosure Agreement  to allow for open dialog and sharing of ideas during the integration process   In an effort to support our self serve model  we ve included links to resources  documentation  examples and best practices to guide you through the Vault integration development and testing process      Vault Tutorial and Learn Site   vault tutorials    Sample development implemented by a  partner  https   www hashicorp com integrations venafi vault     Example runtime integrations for reference   F5  https   www hashicorp com integrations f5 vault    ServiceNow  https   www hashicorp com integrations servicenow 
vault     Vault Community Forum  https   discuss hashicorp com c vault   We encourage partners to closely follow the above guidance  Adopting the same structure and coding patterns helps expedite the review and release cycles       3  develop and test  For our partners who are building runtime integrations with Vault  we encourage them to support multiple  authentication   vault docs auth  methods  e g  Approle  JWT  K8s  besides tokens  Additionally we encourage them to add as much flexibility when specifying paths for secrets engines  For our partners who want to build a plugin  the only knowledge necessary to write a plugin is basic command line skills and knowledge of the Go programming language  When writing in Go Language  HashiCorp has found the integration development process to be straightforward and simple when partners pay close attention and follow the resources by adopting the same structure and coding patterns to help expedite the review and release cycles   Please remember that all integrations should have the appropriate documentation to assist Vault users in configuring the integrations     Auth Methods       Auth Methods documentation   vault docs auth     Example of how to build  install  and maintain auth method plugins plugin  https   www hashicorp com blog building a vault secure plugin     Sample plugin code  https   github com hashicorp vault auth plugin example     Runtime Integration       Vault Tutorial and Learn Site   vault tutorials     Auth Methods documentation   vault docs auth     HSM documentation   vault docs enterprise hsm     HSM Configuration information   vault docs configuration seal pkcs11     Audit  Monitoring   Compliance Integration       Audit devices documentation   vault docs audit     Secrets Engine Integration       Secret engine documentation   vault docs secrets     Custom Secrets Engines   Vault   HashiCorp Learn   vault tutorials custom secrets engine     HCP Vault Dedicated    The process to spin up a testing 
instance of HCP Vault Dedicated is very  straightforward   vault tutorials cloud get started vault   HCP has been designed as a turn key managed service so configuration is minimal  Furthermore  HashiCorp provides all new users an initial credit which lasts for a couple of months when using the  development  https   cloud hashicorp com products vault pricing  cluster  Used in conjunction with AWS free tier resources  there should be no cost beyond the time spent by the designated tester   There are a couple of items to consider when determining if the integration will work with HCP Vault Dedicated     Since HCP Vault Dedicated is running Vault Enterprise  the integration will need to be aware of  Namespaces   vault tutorials enterprise namespaces   This is important as the main namespace in HCP Vault Dedicated is called  admin  which is different from the standard  root  namespace in a self managed Vault instance  If the integration currently doesn t support namespaces  then an additional benefit of adding Namespace support iis that this will also enable it to work with all self managed Vault Enterprise installations    HCP Vault Dedicated is currently only deployed on AWS and so the partner s application should be able to be deployed or run in AWS  This is vital so that HCP Vault Dedicated is able to communicate with the application using a  private peered   hcp docs hcp network hvn aws hvn peering  connection via a  HashiCorp Virtual Network   hcp docs hcp network    Additional resources      HCP Sign up   hcp docs hcp network     Namespaces   Vault Enterprise   vault docs enterprise namespaces     Create a Vault Cluster on HCP   HashiCorp Learn   vault tutorials cloud get started vault       4  review  During the review process  HashiCorp will provide feedback on the newly developed integration for both Vault and HCP Vault Dedicated  This is an important step to allow HashiCorp to review and verify your Vault integration  Please reach out to  technologypartners 
hashicorp com  mailto technologypartners hashicorp com  for verification   The review process can take some time to complete and may require some iterations through the code to address any problems identified by the HashiCorp team   Once the integration has been verified  the partner is requested to sign the HashiCorp Technology Partner Agreement to have their integration listed on the HashiCorp website upon release       5  release  At this stage  it is expected that the integration is fully complete  the necessary documentation has been written  and HashiCorp has reviewed the integration   For Auth or Secret Engine plugins specifically  once the plugin has been verified by HashiCorp  it is recommended the plugin be hosted on Github so it can more easily be downloaded and installed within Vault  We also encourage partners to list their plugin on the  Vault Integrations   vault integrations  page  This is in addition to the listing of the plugin on the technology partners  dedicated HashiCorp partner page  To have the plugin listed on the portal page  please do a pull request via the  edit in GitHub  link on the bottom of the page and add the plugin in the partner section   For HCP Vault Dedicated verifications  the partner will be issued an HCP Vault Verified badge and will have this displayed on their partner page       6  support  At HashiCorp  we view the release step as the beginning of the journey  Getting the integration built is just the first step in enabling users to leverage it against their infrastructure  Once development is completed  on going effort is required to support the developed integration and address any issues in a timely manner   The expectation from the partner is to create a mechanism to track and resolve all critical issues within 48 hours  and all other issues within 5 business days  This is a requirement given the critical nature of Vault to customers  operations  Partners who choose to not support their integration will not be 
considered a verified integration and cannot be listed on the website      Checklist  Below is a checklist of steps that should be followed during the Vault integration development process  This reiterates the steps described above     Fill out the  Vault Integration webform  https   docs google com forms d e 1FAIpQLSfQL1uj mL59bd2EyCPI31LT9uvVT xKyoHAb5FKIwWwwJ1qQ viewform     Develop and test Vault integration along with the documentation  send to  technologypartners hashicorp com  mailto technologypartners hashicorp com   to schedule an initial review    Address review feedback and finalize the development process    Provide HashiCorp with credentials for underlying infrastructure for test purposes    Demo the integration    Execute HashiCorp Partner Agreement Documents  review logo guidelines  partner listing and more    Plan to continue supporting the integration with additional functionality and responding to customer issues     Contact us  For any questions or feedback  please contact us at   technologypartners hashicorp com  mailto technologypartners hashicorp com "}
{"questions":"vault page title Glossary of Terms Glossary Vault Glossary sidebar title Glossary layout docs","answers":"---\nlayout: docs\npage_title: Glossary of Terms\nsidebar_title: Glossary\ndescription: |-\n  Vault Glossary.\n---\n\n# Glossary\n\nThis page collects brief definitions of some of the technical terms used in the\ndocumentation for Vault.\n\n- [Audit Device](#audit-device)\n- [Auth Method](#auth-method)\n- [Barrier](#barrier)\n- [Client Token](#client-token)\n- [Plugin](#plugin)\n- [Request](#request)\n- [Secret](#secret)\n- [Secrets Engine](#secrets-engine)\n- [Server](#server)\n- [Storage Backend](#storage-backend)\n\n### Audit device\n\nAn audit device is responsible for managing audit logs.\nEvery request to Vault and response from Vault goes through the configured\naudit devices. This provides a simple way to integrate Vault with multiple\naudit logging destinations of different types.\n\n### Auth method\n\nAn auth method is used to authenticate users or applications\nwhich are connecting to Vault. Once authenticated, the auth method returns the\nlist of applicable policies which should be applied. Vault takes an\nauthenticated user and returns a client token that can be used for future\nrequests. As an example, the `userpass` auth method uses a username and\npassword to authenticate the user. Alternatively, the `github` auth method\nallows users to authenticate via GitHub.\n\n### Barrier\n\nAlmost everything Vault writes to storage is encrypted using the keyring, which is protected by the seal. We refer to this practice as \"the barrier\". There are a few exceptions to the rule, for example, the seal configuration is stored in an unencrypted file since it's needed to unseal the barrier, and the keyring is encrypted using the root key, while the root key is encrypted using the seal.\n\n### Client token\n\nA client token (aka \"Vault Token\") is conceptually\nsimilar to a session cookie on a web site. 
Once a user authenticates, Vault\nreturns a client token which is used for future requests. The token is used by\nVault to verify the identity of the client and to enforce the applicable ACL\npolicies. This token is passed via HTTP headers.\n\n### Plugin\n\nPlugins are a feature of Vault that can be enabled, disabled, and customized to\nsome degree. All Vault [auth methods](\/vault\/docs\/auth) and [secrets engines](\/vault\/docs\/secrets)\nare considered plugins.\n\n#### Built-in plugin\n\nBuilt-in plugins are shipped with Vault, often for commonly used\nimplementations, and require no additional operator intervention to run.\nBuilt-in plugins are just like any other backend code inside Vault.\n\n#### External plugin\n\nExternal plugins are not shipped with Vault and require additional operator\nintervention to run. Vault's external plugins are completely separate,\nstandalone applications that Vault executes and communicates with over RPC.\nEach time a Vault secret engine or auth method is mounted, a new process is\nspawned.\n\n#### External multiplexed plugin\n\nAn external plugin may make use of [plugin multiplexing](\/vault\/docs\/plugins\/plugin-architecture#plugin-multiplexing).\nA multiplexed plugin allows a single plugin process to be used for multiple\nmounts of the same type.\n\n### Request\n\nA request being made to Vault contains all relevant parameters and context in order\nfor Vault to be able to act accordingly. 
Vault represents this request internally\nin a way that understands:\n\n* Mount Point - Used to generate relative paths.\n* Mount Type - The type of mount the request is interacting with.\n* Namespace - The [namespace](\/vault\/docs\/enterprise\/namespaces) the request is taking place within.\n* Operation - See [the operation description](#operation) below for the supported operations.\n* Path - The full path of the request.\n\n<Note title=\"Request's Namespace\">\n  The Namespace a request is targeting may be specified either as part of the path\n  or the Vault Namespace header.\n<\/Note>\n\nPlease see our Enterprise documentation for further information on how\n[Namespaces can be specified](\/vault\/docs\/enterprise\/namespaces#vault-api-and-namespaces)\nas part of a request.\n\n#### Operation\n\nThe request's operation can be one of the following: `alias-lookahead`, `create`, `delete`,\n`header`, `help`, `list`, `patch`, `read`, `renew`, `resolve-role`, `revoke`, `rollback`,\n`update`.\n\n### Secret\n\nA secret is the term for anything returned by Vault which\ncontains confidential or cryptographic material. Not everything returned by\nVault is a secret; for example, system configuration, status information, and\npolicies are not considered secrets. Dynamic secrets always have an associated lease, and static secrets do not.\nThis means clients cannot assume that the dynamic secret contents can be used\nindefinitely. Vault will revoke a dynamic secret at the end of the lease, and an\noperator may intervene to revoke the dynamic secret before the lease is over. This\ncontract between Vault and its clients is critical, as it allows for changes\nin keys and policies without manual intervention.\n\n### Secrets engine\n\nA secrets engine is responsible for managing secrets.\nSimple secrets engines, such as the \"kv\" secrets engine, return the same\nsecret when queried. 
Some secrets engines support using policies to\ndynamically generate a secret each time they are queried. This allows for\nunique secrets to be used which allows Vault to do fine-grained revocation and\npolicy updates. As an example, a MySQL secrets engine could be configured with\na \"web\" policy. When the \"web\" secret is read, a new MySQL user\/password pair\nwill be generated with a limited set of privileges for the web server.\n\n### Server\n\nVault depends on a long-running instance which operates as a\nserver. The Vault server provides an API which clients interact with and\nmanages the interaction between all the secrets engines, ACL enforcement, and\nsecret lease revocation. Having a server based architecture decouples clients\nfrom the security keys and policies, enables centralized audit logging, and\nsimplifies administration for operators.\n\n### Storage backend\n\nA storage backend is responsible for durable storage of\n_encrypted_ data. Backends are not trusted by Vault and are only expected to\nprovide durability. 
The storage backend is configured when starting the Vault\nserver.","site":"vault","answers_cleaned":"    layout  docs page title  Glossary of Terms sidebar title  Glossary description       Vault Glossary         Glossary  This page collects brief definitions of some of the technical terms used in the documentation for Vault      Audit Device   audit device     Auth Method   auth method     Barrier   barrier     Client Token   client token     Plugin   plugin     Request   request     Secret   secret     Secrets Engine   secrets engine     Server   server     Storage Backend   storage backend       Audit device  An audit device is responsible for managing audit logs  Every request to Vault and response from Vault goes through the configured audit devices  This provides a simple way to integrate Vault with multiple audit logging destinations of different types       Auth method  An auth method is used to authenticate users or applications which are connecting to Vault  Once authenticated  the auth method returns the list of applicable policies which should be applied  Vault takes an authenticated user and returns a client token that can be used for future requests  As an example  the  userpass  auth method uses a username and password to authenticate the user  Alternatively  the  github  auth method allows users to authenticate via GitHub       Barrier  Almost everything Vault writes to storage is encrypted using the keyring  which is protected by the seal  We refer to this practice as  the barrier   There are a few exceptions to the rule  for example  the seal configuration is stored in an unencrypted file since it s needed to unseal the barrier  and the keyring is encrypted using the root key  while the root key is encrypted using the seal       Client token  A client token  aka  Vault Token   is conceptually similar to a session cookie on a web site  Once a user authenticates  Vault returns a client token which is used for future requests  The token is used by 
Vault to verify the identity of the client and to enforce the applicable ACL policies. This token is passed via HTTP headers.

### Plugin

Plugins are a feature of Vault that can be enabled, disabled, and customized to some degree. All Vault [auth methods](/vault/docs/auth) and [secrets engines](/vault/docs/secrets) are considered plugins.

### Built-in plugin

Built-in plugins are shipped with Vault, often for commonly used implementations, and require no additional operator intervention to run. Built-in plugins are just like any other backend code inside Vault.

### External plugin

External plugins are not shipped with Vault and require additional operator intervention to run. Vault's external plugins are completely separate, standalone applications that Vault executes and communicates with over RPC. Each time a Vault secret engine or auth method is mounted, a new process is spawned.

### External multiplexed plugin

An external plugin may make use of [plugin multiplexing](/vault/docs/plugins/plugin-architecture#plugin-multiplexing). A multiplexed plugin allows a single plugin process to be used for multiple mounts of the same type.

### Request

A request being made to Vault contains all relevant parameters and context in order for Vault to be able to act accordingly. Vault represents this request internally in a way that understands:

- **Mount Point**: Used to generate relative paths.
- **Mount Type**: The type of mount the request is interacting with.
- **Namespace**: The [namespace](/vault/docs/enterprise/namespaces) the request is taking place within.
- **Operation**: See [the operation description](#operation) below for the supported operations.
- **Path**: The full path of the request.

<Note title="Request's Namespace">

The Namespace a request is targeting may be specified either as part of the path or via the Vault Namespace header. Please see our Enterprise documentation for further information on how [Namespaces can be specified](/vault/docs/enterprise/namespaces#vault-api-and-namespaces) as part of a request.

</Note>

### Operation

The request's operation can be one of the following: `alias-lookahead`, `create`, `delete`, `header`, `help`, `list`, `patch`, `read`, `renew`, `resolve-role`, `revoke`, `rollback`, `update`.

### Secret

A secret is the term for anything returned by Vault which contains confidential or cryptographic material. Not everything returned by Vault is a secret; for example, system configuration, status information, and policies are not considered secrets. Dynamic secrets always have an associated lease, while static secrets do not. This means clients cannot assume that dynamic secret contents can be used indefinitely: Vault will revoke a dynamic secret at the end of the lease, and an operator may intervene to revoke the dynamic secret before the lease is over. This contract between Vault and its clients is critical, as it allows for changes in keys and policies without manual intervention.

### Secrets engine

A secrets engine is responsible for managing secrets. Simple secrets engines, such as the `kv` secrets engine, return the same secret when queried. Some secrets engines support using policies to dynamically generate a secret each time they are queried. This allows for unique secrets to be used, which allows Vault to do fine-grained revocation and policy updates. As an example, a MySQL secrets engine could be configured with a "web" policy. When the "web" secret is read, a new MySQL user/password pair will be generated with a limited set of privileges for the web server.

### Server

Vault depends on a long-running instance which operates as a server. The Vault server provides an API which clients interact with and manages the interaction between all the secrets engines, ACL enforcement, and secret lease revocation. Having a server-based architecture decouples clients from the security keys and policies, enables centralized audit logging, and simplifies administration for operators.

### Storage backend

A storage backend is responsible for durable storage of _encrypted_ data. Backends are not trusted by Vault and are only expected to provide durability. The storage backend is configured when starting the Vault server.
"}
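As a sketch of how the storage backend and server concepts above fit together, a minimal Vault server configuration might look like the following. This is illustrative only: the path, address, and the choice of Raft integrated storage are assumptions, not prescribed values.

```hcl
# Illustrative server config: Integrated Raft storage backend.
# Vault encrypts data before writing it, so the backend only needs
# to provide durability, not confidentiality.
storage "raft" {
  path    = "/opt/vault/data"   # durable storage location (example path)
  node_id = "vault-node-1"
}

# API listener that clients send their token to via HTTP headers.
listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = true            # for local experimentation only
}
```

Starting the server with `vault server -config=<file>` wires in the configured storage backend at startup, matching the glossary note that the backend is chosen when the Vault server starts.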
{"questions":"vault compares to existing software and contains a quick start for using Vault with Vault We cover what Vault is what problems it can solve how it page title Introduction What is Vault layout docs Welcome to the intro guide to Vault This guide is the best place to start","answers":"---\nlayout: docs\npage_title: Introduction\ndescription: >-\n  Welcome to the intro guide to Vault! This guide is the best place to start\n  with Vault. We cover what Vault is, what problems it can solve, how it\n  compares to existing software, and contains a quick start for using Vault.\n---\n\n# What is Vault?\n\nHashiCorp Vault is an identity-based secrets and encryption management system.\nIt provides encryption services that are gated by authentication and authorization\nmethods to ensure secure, auditable and restricted access to _secrets_.\n\nA secret is anything that you want to tightly control access to, such as tokens,\nAPI keys, passwords, encryption keys or certificates. Vault provides a unified\ninterface to any secret, while providing tight access control and recording a\ndetailed audit log.\n\nA modern system requires access to a multitude of secrets: database credentials,\nAPI keys for external services, credentials for service-oriented architecture\ncommunication, and more. It can be difficult to understand who is accessing which\nsecrets, especially since this can be platform-specific. Adding on key rolling,\nsecure storage, and detailed audit logs is almost impossible without a custom\nsolution. This is where Vault steps in.\n\nVault validates and authorizes clients (users, machines, apps) before providing\nthem access to secrets or stored sensitive data.\n\n![How Vault Works](\/img\/how-vault-works.png)\n\n\n## How does Vault work?\n\nVault works primarily with tokens, and a token is associated with the client's policy. Each policy is path-based, and policy rules constrain the actions and accessibility to the paths for each client. 
With Vault, you can create tokens manually and assign them to your clients, or the clients can log in and obtain a token. The illustration below displays Vault's core workflow.\n\n![Vault Workflow](\/img\/vault-workflow-diagram1.png)\n\nThe core Vault workflow consists of four stages:\n\n- **Authenticate:** Authentication in Vault is the process by which a client supplies information that Vault uses to determine if they are who they say they are. Once the client is authenticated against an auth method, a token is generated and associated with a policy.\n- **Validation:** Vault validates the client against third-party trusted sources, such as GitHub, LDAP, AppRole, and more.\n- **Authorize:** A client is matched against the Vault security policy. This policy is a set of rules defining which API endpoints a client has access to with its Vault token. Policies provide a declarative way to grant or forbid access to certain paths and operations in Vault.\n- **Access:** Vault grants access to secrets, keys, and encryption capabilities by issuing a token based on policies associated with the client\u2019s identity. The client can then use their Vault token for future operations.\n\n## Why Vault?\n\nMost enterprises today have credentials sprawled across their organizations. Passwords, API keys, and credentials are stored in plain text, app source code, config files, and other locations. Because these credentials live everywhere, the sprawl can make it difficult and daunting to really know who has access and authorization to what. Having credentials in plain text also increases the potential for malicious attacks, both by internal and external attackers.\n\nVault was designed with these challenges in mind. Vault takes all of these credentials and centralizes them so that they are defined in one location, which reduces unwanted exposure to credentials. 
But Vault takes it a few steps further by making sure users, apps, and systems are authenticated and explicitly authorized to access resources, while also providing an audit trail that captures and preserves a history of clients' actions.\n\nThe key features of Vault are:\n\n- **Secure Secret Storage**: Arbitrary key\/value secrets can be stored\n  in Vault. Vault encrypts these secrets prior to writing them to persistent\n  storage, so gaining access to the raw storage isn't enough to access\n  your secrets. Vault can write to disk, [Consul](https:\/\/www.consul.io\/),\n  and more.\n\n- **Dynamic Secrets**: Vault can generate secrets on-demand for some\n  systems, such as AWS or SQL databases. For example, when an application\n  needs to access an S3 bucket, it asks Vault for credentials, and Vault\n  will generate an AWS keypair with valid permissions on demand. After\n  creating these dynamic secrets, Vault will also automatically revoke them\n  after the lease is up.\n\n- **Data Encryption**: Vault can encrypt and decrypt data without storing\n  it. This allows security teams to define encryption parameters and\n  developers to store encrypted data in a location such as a SQL database\n  without having to design their own encryption methods.\n\n- **Leasing and Renewal**: All secrets in Vault have a _lease_ associated\n  with them. At the end of the lease, Vault will automatically revoke that\n  secret. Clients are able to renew leases via built-in renew APIs.\n\n- **Revocation**: Vault has built-in support for secret revocation. 
Vault\n  can revoke not only single secrets, but a tree of secrets, for example\n  all secrets read by a specific user, or all secrets of a particular type.\n  Revocation assists in key rolling as well as locking down systems in the\n  case of an intrusion.\n\n<Tip title=\"Vault use cases\">\n\nLearn more about Vault [use cases](\/vault\/docs\/use-cases).\n\n<\/Tip>\n\n## What is HCP Vault Dedicated?\n\nHashiCorp Cloud Platform (HCP) Vault Dedicated is a hosted version of Vault, which is operated by HashiCorp to allow organizations to get up and running quickly. HCP Vault Dedicated uses the same binary as self-hosted Vault, which means you will have a consistent user experience. You can use the same Vault clients to communicate with HCP Vault Dedicated as you use to communicate with a self-hosted Vault. Refer to the [HCP Vault Dedicated](\/hcp\/docs\/vault) documentation to learn more.\n\n<Tip title=\"Hands-on\">\n\nTry the [Get started](\/vault\/tutorials\/cloud) tutorials to set up a managed\nVault cluster.\n\n<\/Tip>\n\n## Community\n\nWe welcome questions, suggestions, and contributions from the community.\n\n- Ask questions in [HashiCorp Discuss](https:\/\/discuss.hashicorp.com\/c\/vault\/30).\n- Read our [contributing guide](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/CONTRIBUTING.md).\n- [Submit an issue](https:\/\/github.com\/hashicorp\/vault\/issues\/new\/choose) for bugs and feature requests.","site":"vault"}
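The path-based policy model described in the intro above can be sketched in Vault's policy language. The path and capability choices below are illustrative assumptions, not values taken from the document:

```hcl
# Illustrative Vault policy: a token carrying this policy may read
# and list secrets under secret/data/web/*, and nothing else.
path "secret/data/web/*" {
  capabilities = ["read", "list"]
}

# Anything not explicitly granted by a path rule is denied by default.
```

A client that authenticates (for example with `vault login`) receives a token tied to such a policy, and Vault checks every subsequent request against these path rules, which is the Authenticate/Authorize/Access flow the workflow section describes.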
{"questions":"vault Vault interoperability matrix page title Vault interoperability matrix Reference list of Vault integration partners layout docs To support a variety of use cases Vault verifies protocol implementation and","answers":"---\nlayout: docs\npage_title: Vault interoperability matrix\ndescription: >-\n  Reference list of Vault integration partners\n---\n\n# Vault interoperability matrix\n\nTo support a variety of use cases, Vault verifies protocol implementation and\nintegrations with partner products, appliances, and applications that support\nadvanced data protection features.\n\n<Highlight title=\"Is your integration missing?\">\n\n  Join the [Vault integration program](\/vault\/docs\/partnerships) to get your\n  integration verified and added or reach out to\n  [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com)\n  with questions. \n\n<\/Highlight>\n\n## IPv6 validation and compliance\n\n[Vault Enterprise supports IPv6](https:\/\/www.hashicorp.com\/trust\/compliance\/vault-enterprise)\nin compliance with OMB Mandate M-21-07 and Federal IPv6 policy requirements\nfor the following operating systems and storage backends. 
\n\n**Self-attested testing covers functionality related to HSM, FIPS 140-2, and\nHSM\/FIPS 140-2.**\n\nOperating system | OS version                     | Validation    | Vault version\n---------------- | ------------------------------ | ------------- | -------------\nFreeBSD          | N\/A                            | N\/A           | Untested\nLinux            | Amazon Linux (versions 2023)   | Self-attested | ent-1.18+\nLinux            | openSUSE Leap (version 15.6)   | Self-attested | ent-1.18+\nLinux            | RHEL (versions 8.10, 9.4)      | Self-attested | ent-1.18+\nLinux            | SUSE SLES (version 15.6)       | Self-attested | ent-1.18+\nLinux            | Ubuntu (versions 20.04, 24.04) | Self-attested | ent-1.18+\nMacOS            | N\/A                            | N\/A           | Untested\nNetBSD           | N\/A                            | N\/A           | Untested\nOpenBSD          | N\/A                            | N\/A           | Untested\nWindows          | N\/A                            | N\/A           | Untested\n<span>\n  <em>\n    <b>Last Updated<\/b>:\n    October 14, 2024\n  <\/em>\n<\/span>\n\n<Note title=\"IPv6 limitations for Windows\">\n\n  IPv6 does not work with external plugins (plugins not built into Vault) when\n  running on Windows in server mode because they default to IPv4 and Vault\n  cannot override that behavior.\n\n<\/Note>\n\nBackend storage system  | Validation    | Vault version\n----------------------- | ------------- | -------------\nConsul                  | N\/A           | Untested\nIntegrated Raft storage | Self-attested | ent-1.18+\n<span>\n  <em>\n    <b>Last Updated<\/b>:\n    October 14, 2024\n  <\/em>\n<\/span>\n\n## Auto unsealing and HSM support\n\nHardware Security Module (HSM) support reduces the operational complexity of\nsecuring unseal keys by delegating that responsibility to\ntrusted devices or services (instead of humans). 
At startup, Vault connects to\nthe delegate device or service and provides an encrypted root key for\ndecryption.\n\nVault implements HSM support with the following features:\n\nFeature                                                              | Introduced\n-------------------------------------------------------------------- | ----------\n[Auto unsealing](\/vault\/docs\/concepts\/seal#auto-unseal)              | Vault 0.9\n[Entropy augmentation](\/vault\/docs\/enterprise\/entropy-augmentation)  | Vault 1.3\n[Seal wrapping](\/vault\/docs\/enterprise\/sealwrap)                     | Vault 0.9\n\nThe following table outlines the implementation status of HSM-related features\nfor partner products and the minimum Vault version required for verified\nfunctionality.\n\n| Partner           | Product                                | Auto unseal | Entropy augment | Seal wrap | Managed keys | Vault verified\n| ----------------- | -------------------------------------- | ----------- | --------------- | --------- | ------------ | --------------\n| AliCloud          | AliCloud KMS                           | Yes         | **No**          | Yes       | **No**       | 0.11.2+\n| Atos              | Trustway Proteccio HSM                 | Yes         | Yes             | Yes       | **No**       | 1.9+\n| AWS               | AWS KMS                                | Yes         | Yes             | Yes       | Yes          | 0.9+\n| Crypto4a          | QxEDGE&trade; HSP                      | Yes         | Yes             | Yes       | Yes          | 1.9+\n| Entrust           | nShield HSM                            | Yes         | Yes             | Yes       | Yes          | 1.3+\n| Fortanix          | FX2200 Series                          | Yes         | Yes             | Yes       | **No**       | 0.10+\n| FutureX           | Vectera Plus, KMES Series 3            | Yes         | Yes             | Yes       | Yes          | 1.5+\n| FutureX           | VirtuCrypt cloud HSM                   | Yes         | Yes             | Yes       | Yes          | 1.5+\n| Google            | GCP Cloud KMS                          | Yes         | **No**          | Yes       | Yes          | 0.9+\n| Marvell           | Cavium HSM                             | Yes         | Yes             | Yes       | Yes          | 1.11+\n| Microsoft         | Azure Key Vault                        | Yes         | **No**          | Yes       | Yes          | 0.10.2+\n| Oracle            | OCI KMS                                | Yes         | **No**          | Yes       | **No**       | 1.2.3+\n| PrimeKey          | SignServer Hardware Appliance          | Yes         | Yes             | Yes       | **No**       | 1.6+\n| Private Machines  | ENFORCER Blade                         | Yes         | **No**          | Yes       | **No**       | 1.17.3+\n| Qrypt             | Quantum Entropy Service                | **No**      | Yes             | **No**    | **No**       | 1.11+\n| Quintessence Labs | TSF 400                                | Yes         | Yes             | Yes       | **No**       | 1.4+\n| Securosys SA      | Primus HSM                             | Yes         | Yes             | Yes       | Yes          | 1.7+\n| Thales            | Luna HSM                               | Yes         | Yes             | Yes       | Yes          | 1.4+\n| Thales            | Luna TCT HSM                           | Yes         | Yes             | Yes       | Yes          | 1.4+\n| Thales            | CipherTrust Manager                    | Yes         | Yes             | Yes       | **No**       | 1.7+\n| Utimaco           | HSM                                    | Yes         | Yes             | Yes       | Yes          | 1.4+\n| Yubico            | YubiHSM 2                              | Yes         | Yes             | Yes       | Yes          | 1.17.2+\n<span>\n  <em>\n    <b>Last Updated<\/b>:\n    May 03, 2023\n  <\/em>\n<\/span>\n\n## External key 
management (EKMS)\n\nVault centrally manages and automates encryption keys across environments so\ncustomers can [manage external encryption keys](\/vault\/docs\/secrets\/key-management)\nused in third-party services and products with the following plugins:\n\nAbbreviation | Full plugin name\n------------ | ----------------\nEKMMSSQL     | [Vault EKM provider for SQL server](\/vault\/docs\/platform\/mssql)\nKV           | [Key\/Value secrets engine](\/vault\/docs\/secrets\/kv)\nKMSE         | [Key Management secrets engine](\/vault\/docs\/secrets\/key-management)\nKMIP         | [KMIP secrets engine](\/vault\/docs\/secrets\/kmip)\nPKCS#11      | [PKCS#11 provider](\/vault\/docs\/enterprise\/pkcs11-provider)\nTransit      | [Transit secrets engine](\/vault\/docs\/secrets\/transit)\n\n<Note title=\"Vault verified vs HCP Vault verified\">\n\n  HCP Vault verified integrations work with the current version of HCP Vault\n  Dedicated. Self-managed Vault instances must meet the required minimum version\n  for verification guarantees.\n\n<\/Note>\n\nThe table below indicates the plugin support for partner products, the\nverification status for HCP Vault Dedicated, and the minimum Vault version\nrequired for verified behavior in self-managed Vault instances:\n\n| Partner           | Product                  | Vault plugin | Vault verified | HCP Vault verified\n| ----------------- | ------------------------ | ------------ | -------------- | ------------------\n| AWS               | AWS KMS                  | KMSE         | 1.8+           | Yes\n| Baffle            | Shield                   | KV           | 1.3+           | **No**\n| Bloombase         | StoreSafe                | KMIP         | 1.9+           | N\/A\n| Cloudian          | HyperStore 7.5.1         | KMIP         | 1.12+          | N\/A\n| Cockroach Labs    | Cockroach Cloud DB       | KMSE         | 1.10+          | N\/A\n| Cockroach Labs    | Cockroach DB             | Transit      | 1.10+          | Yes\n| Cohesity          | Cohesity DataPlatform    | KMIP         | 1.13.2+        | N\/A\n| Commvault Systems | CommVault                | KMIP         | 1.9+           | N\/A\n| Cribl             | Cribl Stream             | KV           | 1.8+           | Yes\n| DataStax          | DataStax Enterprise      | KMIP         | 1.11+          | Yes\n| Dell              | PowerMax                 | KMIP         | 1.12.1+        | N\/A\n| Dell              | PowerProtect DDOS 8.0.X  | KMIP         | 1.15.2+        | N\/A\n| EnterpriseDB      | Postgres Advanced Server | KMIP         | 1.12.6+        | N\/A\n| Garantir          | GaraSign                 | Transit      | 1.5+           | Yes\n| Google            | Google KMS               | KMSE         | 1.9+           | N\/A\n| HPE               | Ezmeral Data Fabric      | KMIP         | 1.2+           | N\/A\n| Intel             | Key Broker Service       | KMIP         | 1.11+          | N\/A\n| JumpWire          | JumpWire                 | KV           | 1.12+          | Yes\n| Micro Focus       | Connected Mx             | Transit      | 1.7+           | **No**\n| Microsoft         | Azure Key Vault          | KMSE         | 1.6+           | N\/A\n| Microsoft         | MSSQL                    | EKMMSSQL     | 1.9+           | **No**\n| MinIO             | Key Encryption Service   | KV           | 1.11+          | **No**\n| MongoDB           | Atlas                    | KMSE         | 1.6+           | N\/A\n| MongoDB           | MongoDB Enterprise       | KMIP         | 1.2+           | N\/A\n| MongoDB           | Client Libraries         | KMIP         | 1.9+           | N\/A\n| NetApp            | ONTAP                    | KMIP         | 1.2+           | N\/A\n| NetApp            | StorageGrid              | KMIP         | 1.2+           | N\/A\n| Nutanix           | AHV\/AOS 6.5.1.6          | KMIP         | 1.12+          | N\/A\n| Ondat             | Trousseau                | Transit      | 1.9+           | 
Yes\n| Oracle            | MySQL                    | KMIP         | 1.2+           | N\/A\n| Oracle            | Oracle 19c               | PKCS#11      | 1.11+          | N\/A\n| Percona           | Server 8.0               | KMIP         | 1.9+           | N\/A\n| Percona           | XtraBackup 8.0           | KMIP         | 1.9+           | N\/A\n| Rubrik            | CDM 9.1 (Edge)           | KMIP         | 1.16.2+        | N\/A\n| Scality           | Scality RING             | KMIP         | 1.12+          | N\/A\n| Snowflake         | Snowflake                | KMSE         | 1.6+           | N\/A\n| Veeam             | Kasten K10               | Transit      | 1.9+           | N\/A\n| Veritas           | NetBackup                | KMIP         | 1.13.9+        | N\/A\n| VMware            | vSphere 7.0, 8.0         | KMIP         | 1.2+           | N\/A\n| VMware            | vSAN 7.0, 8.0            | KMIP         | 1.2+           | N\/A\n| Yugabyte          | Yugabyte Platform        | Transit      | 1.9+           | **No**\n<span>\n  <em>\n    <b>Last Updated<\/b>:\n    August 25, 2023\n  <\/em>\n<\/span>","site":"vault","answers_cleaned":"
KMIP           1 9              N A   Cribl               Cribl Stream               KV             1 8              Yes   DataStax            DataStax Enterprise        KMIP           1 11             Yes   Dell                PowerMax                   KMIP           1 12 1           N A   Dell                PowerProtect DDOS 8 0 X    KMIP           1 15 2           N A    EnterpriseDB        Postgres Advanced Server   KMIP           1 12 6           N A   Garantir            GaraSign                   Transit        1 5              Yes   Google              Google KMS                 KMSE           1 9              N A   HPE                 Exmeral Data Fabric        KMIP           1 2              N A   Intel               Key Broker Service         KMIP           1 11             N A   JumpWire            JumpWire                   KV             1 12             Yes   Micro Focus         Connected Mx               Transit        1 7                No     Microsoft           Azure Key Vault            KMSE           1 6              N A   Microsoft           MSSSQL                     EKMMSSQL       1 9                No     MinIO               Key Encryption Service     KV             1 11               No     MongoDB             Atlas                      KMSE           1 6              N A   MongoDB             MongoDB Enterprise         KMIP           1 2              N A   MongoDB             Client Libraries           KMIP           1 9              N A   NetApp              ONTAP                      KMIP           1 2              N A   NetApp              StorageGrid                KMIP           1 2              N A   Nutanix             AHV AOS 6 5 1 6            KMIP           1 12             N A   Ondat               Trousseau                  Transit        1 9              Yes   Oracle              MySQL                      KMIP           1 2              N A   Oracle              Oracle 19c                 PKCS 11        1 11             N 
A   Percona             Server 8 0                 KMIP           1 9              N A   Percona             XtraBackup 8 0             KMIP           1 9              N A   Rubrik              CDM 9 1  Edge              KMIP           1 16 2           N A    Scality             Scality RING               KMIP           1 12             N A   Snowflake           Snowflake                  KMSE           1 6              N A   Veeam               Karsten K10                Transit        1 9              N A   Veritas             NetBackup                  KMIP           1 13 9           N A   VMware              vSphere 7 0  8 0           KMIP           1 2              N A   VMware              vSan 7 0  8 0              KMIP           1 2              N A   Yugabyte            Yugabyte Platform          Transit        1 9                No    span style      em       b Last Updated  b       August 25  2023     em    span "}
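The auto-unseal integrations listed above are enabled through a `seal` stanza in the Vault server configuration. A minimal sketch for AWS KMS, one of the verified partners; the region and key ID are placeholder values, not real resources:

```hcl
# Sketch only: delegate unsealing to AWS KMS via a seal stanza in the
# Vault server config. region and kms_key_id are illustrative placeholders.
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/example-vault-unseal-key"
}
```

With a stanza like this in place, Vault asks the KMS to decrypt its stored root key at startup instead of prompting operators for unseal key shares, matching the delegate-device flow described above.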
{"questions":"vault Manage custom messages in the Vault UI include alerts enterprise only mdx page title Manage custom messages Use custom messages in the Vault UI to share system wide alerts layout docs","answers":"---\nlayout: docs\npage_title: Manage custom messages\ndescription: >-\n  Use custom messages in the Vault UI to share system-wide alerts.\n---\n\n# Manage custom messages in the Vault UI\n\n@include 'alerts\/enterprise-only.mdx'\n\nUse custom banners and modals in the Vault UI to share system-wide alerts for all Vault UI users.\n\n<Tip title=\"Best practices for UI messages\">\n\n1. **Messages are sticky**. Users can only dismiss messages **temporarily**.\n   The message reappears if the user refreshes their browser window or logs out\n   of the current session.\n\n1. **Messages are intrusive**. Limit the number of active messages to minimize\n   the intrusion on users and reduce the chances that they will dismiss the\n   message without reading it.\n\n1. **Messages are inheritable**. Child namespaces inherit all messages created\n   on the parent namespace. Take advantage of inheritance to reach the greatest\n   number of users with the smallest number of messages.\n\n1. **Delete old messages**. Vault supports a maximum of 100 messages per namespace at a\n   time. 
Practice good message hygiene by regularly deleting expired and outdated\n   messages.\n\n<\/Tip>\n\n## Before you start\n\n- **You must have Vault Enterprise 1.16.0 or higher installed.**\n- **You must have the appropriate permissions**:\n  - You must have `list` permission for the `sys\/config\/ui\/custom-messages` endpoint.\n  - To **create messages**, you must have `read` permission for the `sys\/config\/ui\/custom-messages\/:id` endpoint and `create` permission for the `sys\/config\/ui\/custom-messages` endpoint.\n  - To **edit messages**, you must have `read` and `update` permission for the `sys\/config\/ui\/custom-messages\/:id` endpoint.\n  - To **delete messages**, you must have `delete` permission for the `sys\/config\/ui\/custom-messages\/:id` endpoint.\n\n## Add a custom message\n\n1. Navigate to the **Settings** section in the Vault UI sidebar and select **Custom\n   Messages**.\n\n1. On the **Custom messages** page, select whether you want the message\n   to appear on the Vault UI login page or after a user logs in.\n\n1. Select the **+ Create message** button in the toolbar to open the **Create message** form.\n\n1. On the **Create message** form:\n\n   - Select the locations where your message should appear.\n   - Select the message type.\n   - Provide a title and the message text. For important messages, we recommend\n     keeping the text short and including a link to more information rather\n     than writing a longer message that users may not take the time to read.\n   - Set a start time when your message will publish. By default, messages\n     publish at midnight in your local timezone.\n   - Set an optional end time for the message. By default, messages do not\n     expire.\n\n1. Click **Preview** to see how your message will appear to users.\n\n1. 
Click **Create message** to save your new message.\n\n## Create messages for a specific namespace\n\nChild [namespaces](\/vault\/docs\/enterprise\/namespaces) inherit all messages\ncreated on the parent namespace. For example, assume you have a\ncluster with the following namespace hierarchy:\n\n<CodeBlockConfig hideClipboard>\n\n```plaintext\n\u2500 admin\n    \u251c\u2500\u2500 finance\n    \u2514\u2500\u2500 marketing\n        \u251c\u2500\u2500 digital-marketing\n        \u2514\u2500\u2500 events\n```\n\n<\/CodeBlockConfig>\n\nCustom messages created on the `admin` namespace apply to all child namespaces:\n`finance`, `marketing`, `marketing\/digital-marketing` and `marketing\/events`.\n\nTo create a custom message that only targets the marketing team, log into the `marketing` namespace before creating your message.\n\nTo create a message under a specific namespace:\n\n1. On your Vault login page, enter the namespace you want to target in\n   the **Namespace** text field.\n1. Select an appropriate authentication method and log in.\n\n1. Select **Custom Messages**.\n\n1. Select the **+ Create message** button in the toolbar to open and fill out\n   the **Create message** form.\n1. Click **Preview** to see how your message will appear to users.\n\n1. Click **Create message** to save your new message.\n\nYour new message only appears when a user logs into the targeted\nnamespace or one of its child namespaces. You can verify that your\nmessage has the correct behavior by logging into an admin or parent\nnamespace. The message should only appear when you switch to the\ntargeted namespace.\n\n![namespace picker to change namespaces](\/img\/ui-custom-msg.png)\n\n## Edit a custom message\n\nYou can open the edit screen for a custom message in two places from the **Custom messages** page.\n\n**From the additional options menu for the message**:\n\n1. Find the custom message you want to edit.\n1. Click the additional option button (three dots).\n1. 
Select **Edit** from the dropdown menu to bring up the edit page.\n\n**From the message details page**:\n\n1. Find the custom message you want to edit.\n1. Select the message to open the message details page.\n1. Click the **Edit message** button to bring up the edit page.\n   Fill in the information that you would like to edit for the custom message.\n\n## Delete a custom message\n\n<Warning title=\"Message deletion is permanent\">\n  Deleted messages cannot be recovered. If you delete a message by mistake, you\n  will have to recreate it.\n<\/Warning>\n\nYou can delete a custom message in two places from the **Custom messages** page.\n\n**From the additional options menu for the message**:\n\n1. Find the custom message you want to delete.\n1. Click the additional option button (three dots).\n1. Select **Delete** from the dropdown menu to bring up the delete confirmation modal.\n\n**From the message details page**:\n\n1. Find the custom message you want to delete.\n1. Select the message to open the message details page.\n1. 
Click the **Delete message** button to bring up the delete confirmation modal.","site":"vault"}
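The permission list under "Before you start" maps directly onto a Vault ACL policy. A minimal sketch; the glob on the second path is an assumption standing in for the `:id` segment, and the capability grouping is illustrative:

```hcl
# Sketch of an ACL policy granting the custom-message permissions
# described under "Before you start". The wildcard path is an
# assumption covering the sys/config/ui/custom-messages/:id endpoint.
path "sys/config/ui/custom-messages" {
  capabilities = ["list", "create"]
}

path "sys/config/ui/custom-messages/*" {
  capabilities = ["read", "update", "delete"]
}
```

Attach a policy like this to operators who manage UI messages; users who only view messages in the UI need no special grants.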
{"questions":"vault page title Prevent lease explosions Prevent lease explosions As your Vault environment scales to meet deployment needs you run the risk of layout docs Learn how to prevent lease explosions in Vault","answers":"---\nlayout: docs\npage_title: Prevent lease explosions\ndescription: >-\n  Learn how to prevent lease explosions in Vault.\n---\n\n# Prevent lease explosions\n\nAs your Vault environment scales to meet deployment needs, you run the risk of\nlease explosions. Lease explosions can occur when a Vault cluster is\nover-subscribed and clients overwhelm system resources with consistent,\nhigh-volume API requests.\n\nUnchecked lease explosions create a memory drain on the active node, which can\ncascade to other nodes and result in denial-of-service issues for the entire\ncluster.\n\n## Look for early warning signs\n\nCleaning up after a lease explosion is time-consuming and resource-intensive, so\nwe strongly recommend monitoring your Vault instance for signals that your\nVault deployment has matured and requires tuning:\n\nIssue                                                                            | Possible cause\n-------------------------------------------------------------------------------- | --------------\nUnused leases consume storage space for extended periods while waiting to expire | The TTL values for dynamic secret leases or authentication tokens may be too high\nLease revocation fails frequently                                                | Failures in an external service (e.g., for dynamic secrets)\nBuild-up of leases associated with unused credentials                            | Clients are not reusing valid, existing leases\nLease revocation is slow                                                         | Insufficient IOPS for the storage backend\nRapid lease count growth disproportionate to the number of clients               | Misconfiguration or anti-patterns in client usage\n\n## Enforce client best 
practices\n\nHigh lease counts can degrade system performance:\n\n- Use the smallest default time-to-live (TTL) possible for tokens and leases to\n  avoid excessive unexpired lease backlogs and high-volume, simultaneous\n  expirations.\n- Review telemetry for aberrant client behavior that might lead to rapid\n  over-subscription.\n- Limit the number of simultaneous dynamic secret requests and service token\n  authentication requests.\n- Ensure that machine clients adhere to [recommended AppRole patterns](\/vault\/tutorials\/recommended-patterns\/pattern-approle).\n- Review [AppRole best practices](https:\/\/www.hashicorp.com\/blog\/how-and-why-to-use-approle-correctly-in-hashicorp-vault).\n\n## Set reasonable TTL guardrails\n\nChoose appropriate defaults for your situation and use resource quotas as\nguardrails against lease explosion. You can set default and maximum TTLs\nglobally, in the mount configuration for a specific authN or secrets plugin, and\nat the role level (e.g., database credential roles).\n\nVault prioritizes TTL values by granularity:\n\n- Global values act as the default.\n- Plugin TTL values override global values.\n- Role, group, and user level TTL values override plugin and global values.\n\n<Note title=\"TTL changes are not retroactive\">\n\n  Leases and tokens keep the TTL value in effect at the time of their creation. 
When you\n  adjust TTL values, the new limits only apply to leases and tokens issued after\n  you deploy the changes.\n\n<\/Note>\n\n## Monitor key metrics and logs\n\nProactive monitoring is key to finding problematic behavior and usage patterns\nbefore they escalate:\n\n- Review [key Vault metrics](\/well-architected-framework\/reliability\/reliability-vault-monitoring-key-metrics)\n- Understand [metric anti-patterns](\/well-architected-framework\/operational-excellence\/security-vault-anti-patterns#poor-metrics-or-no-telemetry-data)\n- Monitor [Vault audit device logs](\/vault\/tutorials\/monitoring\/monitor-telemetry-audit-splunk) for quota-related failures.\n\n## Control resource usage with quotas\n\nUse API rate limiting quotas and\n[lease count quotas](\/vault\/tutorials\/operations\/resource-quotas#lease-count-quotas)\nto limit the number of leases generated on a per-mount basis and control\nresource consumption for your Vault instance where hard limits make sense.\n\n## Consider batch tokens\n\nIf your environment inherently leads to a large number of lease requests,\nconsider using batch tokens over service tokens.\n\nThe following resources can help you decide if batch tokens are reasonable for\nyour situation:\n\n- [Vault service tokens vs batch tokens](\/vault\/tutorials\/tokens\/batch-tokens#service-tokens-vs-batch-tokens)\n- [Service vs batch token lease handling](\/vault\/docs\/concepts\/tokens#service-vs-batch-token-lease-handling)\n\n## Next steps\n\nProactive monitoring and periodic usage analysis can help you identify potential\nproblems before they escalate.\n\n- Brush up on [general Vault resource quotas](\/vault\/docs\/concepts\/resource-quotas).\n- Learn about [lease count quotas for Vault Enterprise](\/vault\/docs\/enterprise\/lease-count-quotas).\n- Learn how to [query audit device logs](\/vault\/tutorials\/monitoring\/query-audit-device-logs).\n- Review [recommended Vault lease 
limits](\/vault\/docs\/internals\/limits#lease-limits).\n- Review [lease anti-patterns](\/well-architected-framework\/operational-excellence\/security-vault-anti-patterns#not-adjusting-the-default-lease-time) for a clear explanation of the issue and solution.","site":"vault"}
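The global TTL guardrails described above can be set in the Vault server configuration file. A minimal sketch; the values are illustrative, not recommendations:

```hcl
# Sketch: global TTL guardrails in the Vault server configuration.
# These act as the cluster-wide defaults that plugin- and role-level
# TTL settings override. Values shown are placeholders.
default_lease_ttl = "1h"
max_lease_ttl     = "24h"
```

Because TTL changes are not retroactive, leases issued before this change keep the limits that were in effect when they were created.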
{"questions":"vault Create a lease count quota page title Create a lease count quota authentication plugin layout docs Step by step instructions for creating lease count quotas for an","answers":"---\nlayout: docs\npage_title: Create a lease count quota\ndescription: >-\n  Step-by-step instructions for creating lease count quotas for an\n  authentication plugin\n---\n\n# Create a lease count quota\n\nUse lease count quotas to limit the number of leases generated on a per-mount\nbasis and control resource consumption for your Vault instance where hard\nlimits make sense.\n\n## Before you start\n\n- **Confirm you have access to the root or administration namespace for your\n  Vault instance**. Modifying lease count quotas is a restricted activity.\n\n## Step 1: Determine the appropriate granularity\n\nThe granularity of your lease limits can affect the performance of your Vault\ncluster. In particular, if your lease limits cause the number of rejected\nrequests to increase dramatically, the increased audit logging may impact Vault\nperformance.\n\nReview past system behavior to identify whether the quota limits should be\ninheritable or limited to a specific role.\n\n## Step 2: Apply the count quota\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse `vault write` and the `sys\/quotas\/lease-count\/{quota-name}` mount path to\ncreate a new lease count quota:\n\n```shell-session\n$ vault write                           \\\n    sys\/quotas\/lease-count\/<QUOTA_NAME> \\\n    name=\"<QUOTA_NAME>\"                 \\\n    path=\"<PLUGIN_MOUNT_PATH>\"          \\\n    role=\"<OPTIONAL_AUTHN_ROLE>\"        \\\n    max_leases=<LEASE_LIMIT>\n```\n\nFor example, to create a targeted quota limit called **webapp-tokens** on the\n`webapp` role for the `approle` plugin at the default mount path:\n\n```shell-session\n$ vault write                            \\\n    sys\/quotas\/lease-count\/webapp-tokens \\\n    name=\"webapp-tokens\"                 \\\n    path=\"auth\/approle\"                  \\\n    role=\"webapp\"                        \\\n    max_leases=100\n\nSuccess! Data written to: sys\/quotas\/lease-count\/webapp-tokens\n```\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\n1. Create a payload file with your quota settings.\n\n  ```json\n  {\n    \"name\": \"<QUOTA_NAME>\",\n    \"path\": \"<PLUGIN_MOUNT_PATH>\",\n    \"role\": \"<OPTIONAL_AUTHN_ROLE>\",\n    \"max_leases\": <LEASE_LIMIT>\n  }\n  ```\n\n  For example, to create a targeted quota limit called **webapp-tokens** on the\n  `webapp` role for the `approle` plugin at the default mount path:\n\n  ```json\n  {\n    \"name\": \"webapp-tokens\",\n    \"path\": \"auth\/approle\",\n    \"role\": \"webapp\",\n    \"max_leases\": 100\n  }\n  ```\n\n1. Call the `\/sys\/quotas\/lease-count\/{quota-name}` endpoint to apply the lease\n   count quota. For example, to apply the `webapp-tokens` quota:\n\n   ```shell-session\n   $ curl \\\n    --request POST \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\" \\\n    --data @payload.json \\\n    ${VAULT_ADDR}\/v1\/sys\/quotas\/lease-count\/webapp-tokens\n   ```\n\n<Note title=\"Silent endpoint\">\n\n  The `\/sys\/quotas\/lease-count\/{quota-name}` endpoint succeeds silently.\n\n<\/Note>\n\n<\/Tab>\n\n<\/Tabs>\n\n## Step 3: Confirm the quota settings\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse `vault read` and the `sys\/quotas\/lease-count\/{quota-name}` mount path to\ndisplay the lease count quota details:\n\n```shell-session\n$ vault read sys\/quotas\/lease-count\/<QUOTA_NAME>\n```\n\nFor example, to read the **webapp-tokens** quota details:\n\n```shell-session\n$ vault read sys\/quotas\/lease-count\/webapp-tokens\n\nKey            Value\n---            -----\ncounter        0\ninheritable    true\nmax_leases     100\nname           webapp-tokens\npath           auth\/approle\/\nrole           webapp\ntype           lease-count\n```\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\nCall the 
`sys\/quotas\/lease-count\/{quota-name}` endpoint to display the lease\ncount quota details. For example, to read the **webapp-tokens** quota details:\n\n```shell-session\n$ curl                                      \\\n  --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n  --request GET                             \\\n  --silent                                  \\\n  ${VAULT_ADDR}\/v1\/sys\/quotas\/lease-count\/webapp-tokens | jq\n\n{\n  \"request_id\": \"188e22f1-dc1a-251a-a0a1-005e256fe70f\",\n  \"lease_id\": \"\",\n  \"renewable\": false,\n  \"lease_duration\": 0,\n  \"data\": {\n    \"counter\": 0,\n    \"inheritable\": false,\n    \"max_leases\": 100,\n    \"name\": \"webapp-tokens\",\n    \"path\": \"auth\/approle\/\",\n    \"role\": \"webapp\",\n    \"type\": \"lease-count\"\n  },\n  \"wrap_info\": null,\n  \"warnings\": null,\n  \"auth\": null\n}\n```\n\n<\/Tab>\n\n<\/Tabs>\n\n## Next steps\n\nProactive monitoring and periodic usage analysis can help you identify potential\nproblems before they escalate.\n\n- Brush up on [general Vault resource quotas](\/vault\/docs\/concepts\/resource-quotas).\n- Learn about [lease count quotas for Vault Enterprise](\/vault\/docs\/enterprise\/lease-count-quotas).\n- Learn how to [query audit device logs](\/vault\/tutorials\/monitoring\/query-audit-device-logs).\n- Review [key Vault metrics for common health checks](\/well-architected-framework\/reliability\/reliability-vault-monitoring-key-metrics)","site":"vault","answers_cleaned":"    layout  docs page title  Create a lease count quota description       Step by step instructions for creating lease count quotas for an   authentication plugin        Create a lease count quota  Use lease count quotas to limit the number of leases generated on a per mount basis and control resource consumption for your Vault instance  where hard limits makes sense      Before you start       Confirm you have access to the root or administration namespace for your   Vault instance   
 Modifying lease count quotas is a restricted activity       Step 1  Determine the appropriate granularity  The granularity of your lease limits can affect the performance of your Vault cluster  In particular  if your lease limits cause the number of rejected requests to increase dramatically  the increased audit logging may impact Vault performance   Review past system behavior to identify whether the quota limits should be inheritable or limited to a specific role      Step 2  Apply the count quota   Tabs    Tab heading  CLI  group  cli    Use  vault write  and the  sys quotas lease count  quota name   mount path to create a new lease count quota      shell session   vault write                                 sys quotas lease count  QUOTA NAME        name   QUOTA NAME                         path   PLUGIN MOUNT PATH                  role   OPTIONAL AUTHN ROLE                max leases  LEASE LIMIT       For example  to create a targeted quota limit called   webapp tokens   on the  webapp  role for the  approle  plugin at the default mount path      shell session   vault write                                  sys quotas lease count webapp tokens       name  webapp tokens                        path  auth approle                         role  webapp                               max leases 100  Success  Data written to  sys quotas lease count webapp tokens       Tab    Tab heading  API  group  api    1  Create a payload file with your quota settings        json          name     QUOTA NAME         path     PLUGIN MOUNT PATH         role     OPTIONAL AUTHN ROLE         max leases    LEASE LIMIT                For example  to create a targeted quota limit called   webapp tokens   on the    webapp  role for the  approle  plugin at the default mount path        json          name    webapp tokens        path    auth approle        role    webapp        max leases   100             1  Call the   sys quotas lease count  quota name   endpoint to apply the lease    count 
quota  For example  to apply the  webapp tokens  quota         shell session      curl         request POST         header  X Vault Token    VAULT TOKEN           data  payload json         VAULT ADDR  v1 sys quotas lease count webapp tokens          Note title  Silent endpoint      The   sys quotas lease count  quota name   endpoint succeeds silently     Note     Tab     Tabs      Step 3  Confirm the quota settings   Tabs    Tab heading  CLI  group  cli    Use  vault read  and the  sys quotas lease count  quota name   mount path to display the lease count quota details      shell session   vault read sys quotas lease count  QUOTA NAME       For example  to read the   webapp tokens   quota details      shell session   vault read sys quotas lease count webapp tokens  Key            Value                      counter        0 inheritable    true max leases     100 name           webapp tokens path           auth approle  role           webapp type           lease count        Tab    Tab heading  API  group  api    Call the  sys quotas lease count  quota name   endpoint to display the lease count quota details  For example  to read the   webapp tokens   quota details      shell session   curl                                            header  X Vault Token    VAULT TOKEN          request GET                                   silent                                        VAULT ADDR  v1 sys quotas lease count webapp tokens   jq       request id    188e22f1 dc1a 251a a0a1 005e256fe70f      lease id          renewable   false     lease duration   0     data          counter   0       inheritable   false       max leases   100       name    webapp tokens        path    auth approle         role    webapp        type    lease count          wrap info   null     warnings   null     auth   null          Tab     Tabs      Next steps   Proactive monitoring and periodic usage analysis can help you identify potential problems before they escalate     Brush up on  general Vault 
resource quotas   vault docs concepts resource quotas  in general    Learn about  lease count quotas for Vault Enterprise   vault docs enterprise lease count quotas     Learn how to  query audit device logs   vault tutorials monitoring query audit device logs     Review  key Vault metrics for common health checks   well architected framework reliability reliability vault monitoring key metrics "}
{"questions":"vault page title Server Configuration Vault server configuration reference The format of this file is HCL https github com hashicorp hcl or JSON layout docs Outside of development mode Vault servers are configured using a file Vault configuration","answers":"---\nlayout: docs\npage_title: Server Configuration\ndescription: Vault server configuration reference.\n---\n\n# Vault configuration\n\nOutside of development mode, Vault servers are configured using a file.\nThe format of this file is [HCL](https:\/\/github.com\/hashicorp\/hcl) or JSON.\n\n@include 'plugin-file-permissions-check.mdx'\n\nAn example configuration is shown below:\n\n<Note>\n\nFor multi-node clusters, replace the loopback address with a valid, routable IP address for each Vault node in your network.\n\nRefer to the [Vault HA clustering with integrated storage tutorial](\/vault\/tutorials\/raft\/raft-storage) for a complete scenario.\n\n<\/Note>\n\n```hcl\nui            = true\ncluster_addr  = \"https:\/\/127.0.0.1:8201\"\napi_addr      = \"https:\/\/127.0.0.1:8200\"\ndisable_mlock = true\n\nstorage \"raft\" {\n  path = \"\/path\/to\/raft\/data\"\n  node_id = \"raft_node_id\"\n}\n\nlistener \"tcp\" {\n  address       = \"127.0.0.1:8200\"\n  tls_cert_file = \"\/path\/to\/full-chain.pem\"\n  tls_key_file  = \"\/path\/to\/private-key.pem\"\n}\n\ntelemetry {\n  statsite_address = \"127.0.0.1:8125\"\n  disable_hostname = true\n}\n```\n\nAfter the configuration is written, use the `-config` flag with `vault server`\nto specify where the configuration is.\n\n## Parameters\n\n- `storage` `([StorageBackend][storage-backend]: <required>)` \u2013\n  Configures the storage backend where Vault data is stored. Please see the\n  [storage backends documentation][storage-backend] for the full list of\n  available storage backends. Running Vault in HA mode would require\n  coordination semantics to be supported by the backend. 
If the storage backend\n  supports HA coordination, HA backend options can also be specified in this\n  parameter block. If not, a separate `ha_storage` parameter should be\n  configured with a backend that supports HA, along with corresponding HA\n  options.\n\n- `ha_storage` `([StorageBackend][storage-backend]: nil)` \u2013 Configures\n  the storage backend where Vault HA coordination will take place. This must be\n  an HA-supporting backend. If not set, HA will be attempted on the backend\n  given in the `storage` parameter. This parameter is not required if the\n  storage backend supports HA coordination and if HA specific options are\n  already specified with `storage` parameter. (Refer to [Use Integrated Storage\n  for HA\n  Coordination](\/vault\/tutorials\/raft\/raft-ha-storage)\n  for a usage example.)\n\n- `listener` `([Listener][listener]: <required>)` \u2013 Configures how\n  Vault is listening for API requests.\n\n- `user_lockout` `([UserLockout][user-lockout]: nil)` \u2013\n  Configures the user-lockout behaviour for failed logins. For more information, please see the\n  [user lockout configuration documentation](\/vault\/docs\/configuration\/user-lockout).\n\n- `seal` `([Seal][seal]: nil)` \u2013 Configures the seal type to use for\n  auto-unsealing, as well as for\n  [seal wrapping][sealwrap] as an additional layer of data protection.\n\n- `reporting` `([Reporting][reporting]: nil)` -\n  Configures options relating to license reporting in Vault.\n\n- `cluster_name` `(string: <generated>)` \u2013 Specifies a human-readable\n  identifier for the Vault cluster. If omitted, Vault will generate a value.\n  The cluster name is included as a label in some [telemetry metrics](\/vault\/docs\/internals\/telemetry\/metrics\/).\n  The cluster name is safe to update on an existing Vault cluster.\n\n- `cache_size` `(string: \"131072\")` \u2013 Specifies the size of the read cache used\n  by the physical storage subsystem. 
The value is in number of entries, so the\n  total cache size depends on the size of stored entries.\n\n- `disable_cache` `(bool: false)` \u2013 Disables all caches within Vault, including\n  the read cache used by the physical storage subsystem. This will very\n  significantly impact performance.\n\n- `disable_mlock` `(bool: false)` \u2013\u00a0Disables the server from executing the\n  `mlock` syscall. `mlock` prevents memory from being swapped to disk. Disabling\n  `mlock` is not recommended unless using [integrated storage](\/vault\/docs\/internals\/integrated-storage).\n  Follow the additional security precautions outlined below when disabling `mlock`.\n  This can also be provided via the environment variable `VAULT_DISABLE_MLOCK`.\n\n  Disabling `mlock` is not recommended unless the systems running Vault only\n  use encrypted swap or do not use swap at all. Vault only supports memory\n  locking on UNIX-like systems that support the mlock() syscall (Linux, FreeBSD, etc).\n  Non UNIX-like systems (e.g. Windows, NaCL, Android) lack the primitives to keep a\n  process's entire memory address space from spilling to disk, so memory locking is\n  automatically disabled on those platforms.\n\n  Disabling `mlock` is strongly recommended if using [integrated\n  storage](\/vault\/docs\/internals\/integrated-storage) due to\n  the fact that `mlock` does not interact well with memory mapped files such as\n  those created by BoltDB, which is used by Raft to track state. When using\n  `mlock`, memory-mapped files get loaded into resident memory, which causes\n  Vault's entire dataset to be loaded in memory and can cause out-of-memory\n  issues if Vault's data becomes larger than the available RAM. 
In this case,\n  even though the data within BoltDB remains encrypted at rest, swap should be\n  disabled to prevent Vault's other in-memory sensitive data from being dumped\n  to disk.\n\n  On Linux, to give the Vault executable the ability to use the `mlock`\n  syscall without running the process as root, run:\n\n  ```shell\n  sudo setcap cap_ipc_lock=+ep $(readlink -f $(which vault))\n  ```\n\n  <Note>\n\n  Since each plugin runs as a separate process, you need to do the same\n  for each plugin in your [plugins\n  directory](\/vault\/docs\/plugins\/plugin-architecture#plugin-directory).\n\n  <\/Note>\n\n  If you use a Linux distribution with a modern version of systemd, you can add\n  the following directive to the \"[Service]\" configuration section:\n\n  ```ini\n  LimitMEMLOCK=infinity\n  ```\n\n- `plugin_directory` `(string: \"\")` \u2013 A directory from which plugins are\n  allowed to be loaded. Vault must have permission to read files in this\n  directory to successfully load plugins, and the value cannot be a symbolic link.\n\n- `plugin_tmpdir` `(string: \"\")` - A directory that Vault can create temporary\n  files in to support Unix socket communication with containerized plugins. If\n  not set, Vault will use the system's default directory for temporary files.\n  Generally not necessary unless you are using\n  [containerized plugins](\/vault\/docs\/plugins\/containerized-plugins) and Vault\n  does not share a temporary folder with other processes, such as if using\n  systemd's [PrivateTmp](https:\/\/www.freedesktop.org\/software\/systemd\/man\/latest\/systemd.exec.html#PrivateTmp=)\n  setting. This can also be specified via the `VAULT_PLUGIN_TMPDIR` environment\n  variable.\n\n  @include 'plugin-file-permissions-check.mdx'\n\n- `plugin_file_uid` `(integer: 0)` \u2013 Uid of the plugin directories and plugin binaries if they\n  are owned by a user other than the user running Vault. 
This only needs to be set if the\n  file permissions check is enabled via the environment variable `VAULT_ENABLE_FILE_PERMISSIONS_CHECK`.\n\n- `plugin_file_permissions` `(string: \"\")` \u2013 Octal permission string of the plugin\n  directories and plugin binaries if they have write or execute permissions for group or others.\n  This only needs to be set if the file permissions check is enabled via the environment variable\n  `VAULT_ENABLE_FILE_PERMISSIONS_CHECK`.\n\n- `telemetry` `([Telemetry][telemetry]: <none>)` \u2013 Specifies the telemetry\n  reporting system.\n\n- `default_lease_ttl` `(string: \"768h\")` \u2013 Specifies the default lease duration\n  for tokens and secrets. This is specified using a label suffix like `\"30s\"` or\n  `\"1h\"`. This value cannot be larger than `max_lease_ttl`.\n\n- `max_lease_ttl` `(string: \"768h\")` \u2013\u00a0Specifies the maximum possible lease\n  duration for tokens and secrets. This is specified using a label\n  suffix like `\"30s\"` or `\"1h\"`. Individual mounts can override this value\n  by tuning the mount with the `max-lease-ttl` flag of the\n  [auth](\/vault\/docs\/commands\/auth\/tune#max-lease-ttl) or\n  [secret](\/vault\/docs\/commands\/secrets\/tune#max-lease-ttl) commands.\n\n- `default_max_request_duration` `(string: \"90s\")` \u2013\u00a0Specifies the default\n  maximum request duration allowed before Vault cancels the request. This can\n  be overridden per listener via the `max_request_duration` value.\n\n- `detect_deadlocks` `(string: \"\")` - A comma separated string that specifies the internal\nmutex locks that should be monitored for potential deadlocks. Currently supported values\ninclude `statelock`, `quotas` and `expiration` which will cause \"POTENTIAL DEADLOCK:\"\nto be logged when an attempt at a core state lock appears to be deadlocked. 
Enabling this\ncan have a negative effect on performance due to the tracking of each lock attempt.\n\n- `raw_storage_endpoint` `(bool: false)` \u2013 Enables the `sys\/raw` endpoint which\n  allows the decryption\/encryption of raw data into and out of the security\n  barrier. This is a highly privileged endpoint.\n\n- `introspection_endpoint` `(bool: false)` - Enables the `sys\/internal\/inspect` endpoint\n  which allows users with a root token or sudo privileges to inspect certain subsystems inside Vault.\n\n- `ui` `(bool: false)` \u2013 Enables the built-in web UI, which is available on all\n  listeners (address + port) at the `\/ui` path. Browsers accessing the standard\n  Vault API address will automatically redirect there. This can also be provided\n  via the environment variable `VAULT_UI`. For more information, please see the\n  [ui configuration documentation](\/vault\/docs\/configuration\/ui).\n\n- `pid_file` `(string: \"\")` - Path to the file in which the Vault server's\n  Process ID (PID) should be stored.\n\n- `enable_response_header_hostname` `(bool: false)` - Enables the addition of an HTTP header\n  in all of Vault's HTTP responses: `X-Vault-Hostname`. This will contain the\n  host name of the Vault node that serviced the HTTP request. This information\n  is best effort and is not guaranteed to be present. If this configuration\n  option is enabled and the `X-Vault-Hostname` header is not present in a response,\n  it means there was some kind of error retrieving the host name from the\n  operating system.\n\n- `enable_response_header_raft_node_id` `(bool: false)` - Enables the addition of an HTTP header\n  in all of Vault's HTTP responses: `X-Vault-Raft-Node-ID`. If Vault is participating\n  in a Raft cluster (i.e. using integrated storage), this header will contain the\n  Raft node ID of the Vault node that serviced the HTTP request. 
If Vault is not\n  participating in a Raft cluster, this header will be omitted, whether this configuration\n  option is enabled or not.\n\n- `log_level` `(string: \"info\")` - Log verbosity level.\n  Supported values (in order of descending detail) are `trace`, `debug`, `info`, `warn`, and `error`.\n  This can also be specified via the `VAULT_LOG_LEVEL` environment variable.\n\n  <Note>\n\n  On SIGHUP (`sudo kill -s HUP` _pid of vault_), if a valid value is specified, Vault will update the existing log level,\n  overriding (even if specified) both the CLI flag and environment variable.\n\n  <\/Note>\n\n  <Note>\n\n  Not all parts of Vault's logging can have their log level changed dynamically this way; in particular,\n  secrets\/auth plugins are currently not updated dynamically.\n\n  <\/Note>\n\n- `log_format` - Equivalent to the [`-log-format` command-line flag](\/vault\/docs\/commands\/server#_log_format).\n\n- `log_file` - Equivalent to the [`-log-file` command-line flag](\/vault\/docs\/commands\/server#_log_file).\n\n- `log_rotate_duration` - Equivalent to the [`-log-rotate-duration` command-line flag](\/vault\/docs\/commands\/server#_log_rotate_duration).\n\n- `log_rotate_bytes` - Equivalent to the [`-log-rotate-bytes` command-line flag](\/vault\/docs\/commands\/server#_log_rotate_bytes).\n\n- `log_rotate_max_files` - Equivalent to the [`-log-rotate-max-files` command-line flag](\/vault\/docs\/commands\/server#_log_rotate_max_files).\n\n- `experiments` `(string array: [])` - The list of experiments to enable for this node.\n  Experiments should NOT be used in production, and the associated APIs may have backwards\n  incompatible changes between releases. 
Additional experiments can also be specified via\n  the `VAULT_EXPERIMENTS` environment variable as a comma-separated list, or via the\n  [`-experiment`](\/vault\/docs\/commands\/server#experiment) flag.\n\n- `imprecise_lease_role_tracking` `(bool: false)` - Skip lease counting by role if there are no role-based quotas enabled.\n  When `imprecise_lease_role_tracking` is set to true and a new role-based quota is enabled, subsequent lease counts start from 0.\n  `imprecise_lease_role_tracking` affects role-based lease count quotas, but reduces latencies when not using role-based quotas.\n\n- `enable_post_unseal_trace` `(bool: false)` - Enables the server to generate a Go trace during the execution of the\n  `core.postUnseal` function for debug purposes. The resulting trace can be viewed with the `go tool trace` command. The output\n  directory can be specified with the `post_unseal_trace_directory` parameter. This should only be enabled temporarily for\n  debugging purposes as it can have a significant performance impact. This can be updated on a running Vault process with a\n  SIGHUP signal.\n\n- `post_unseal_trace_directory` `(string: \"\")` - Specifies the directory where the trace file will be written, which must exist\n  and be writable by the Vault process. If not specified it will create a subdirectory `vault-traces` under the result from\n  [os.TempDir()](https:\/\/pkg.go.dev\/os#TempDir) (usually `\/tmp` on Unix systems). This can be updated on a running Vault process\n  with a SIGHUP signal.\n\n### High availability parameters\n\nThe following parameters are used on backends that support [high availability][high-availability].\n\n- `api_addr` `(string: \"\")` \u2013 Specifies the address (full URL) to advertise to\n  other Vault servers in the cluster for client redirection. This value is also\n  used for [plugin backends][plugins]. This can also be provided via the\n  environment variable `VAULT_API_ADDR`. 
In general this should be set as a full\n  URL that points to the value of the [`listener`](#listener) address.\n  This can be dynamically defined with a\n  [go-sockaddr template](https:\/\/pkg.go.dev\/github.com\/hashicorp\/go-sockaddr\/template)\n  that is resolved at runtime.\n\n- `cluster_addr` `(string: \"\")` \u2013 Specifies the address to advertise to other\n  Vault servers in the cluster for request forwarding. This can also be provided\n  via the environment variable `VAULT_CLUSTER_ADDR`. This is a full URL, like\n  `api_addr`, but Vault will ignore the scheme (all cluster members always\n  use TLS with a private key\/certificate).\n  This can be dynamically defined with a\n  [go-sockaddr template](https:\/\/pkg.go.dev\/github.com\/hashicorp\/go-sockaddr\/template)\n  that is resolved at runtime.\n\n- `disable_clustering` `(bool: false)` \u2013 Specifies whether clustering features\n  such as request forwarding are enabled. Setting this to true on one Vault node\n  will disable these features _only when that node is the active node_. This\n  parameter cannot be set to `true` if `raft` is the storage type.\n\n### Vault enterprise parameters\n\nThe following parameters are only used with Vault Enterprise\n\n- `disable_sealwrap` `(bool: false)` \u2013\u00a0Disables using [seal wrapping][sealwrap]\n  for any value except the root key. If this value is toggled, the new\n  behavior will happen lazily (as values are read or written).\n\n- `disable_performance_standby` `(bool: false)` \u2013 Specifies whether performance\n  standbys should be disabled on this node. Setting this to true on one Vault\n  node will disable this feature when this node is Active or Standby. It's\n  recommended to sync this setting across all nodes in the cluster.\n\n- `license_path` `(string: \"\")` - Path to license file. 
This can also be\n  provided via the environment variable `VAULT_LICENSE_PATH`, or the license\n  itself can be provided in the environment variable `VAULT_LICENSE`.\n\n- `administrative_namespace_path` `(string: \"\")` - Specifies the absolute path\n  to the Vault namespace to be used as an [Administrative namespace](\/vault\/docs\/enterprise\/namespaces\/create-admin-namespace).\n\n[storage-backend]: \/vault\/docs\/configuration\/storage\n[listener]: \/vault\/docs\/configuration\/listener\n[reporting]: \/vault\/docs\/configuration\/reporting\n[seal]: \/vault\/docs\/configuration\/seal\n[sealwrap]: \/vault\/docs\/enterprise\/sealwrap\n[telemetry]: \/vault\/docs\/configuration\/telemetry\n[sentinel]: \/vault\/docs\/configuration\/sentinel\n[high-availability]: \/vault\/docs\/concepts\/ha\n[plugins]: \/vault\/docs\/plugins","site":"vault","answers_cleaned":"    layout  docs page title  Server Configuration description  Vault server configuration reference         Vault configuration  Outside of development mode  Vault servers are configured using a file  The format of this file is  HCL  https   github com hashicorp hcl  or JSON    include  plugin file permissions check mdx   An example configuration is shown below    Note   For multi node clusters  replace the loopback address with a valid  routable IP address for each Vault node in your network   Refer to the  Vault HA clustering with integrated storage tutorial   vault tutorials raft raft storage  for a complete scenario     Note      hcl ui              true cluster addr     https   127 0 0 1 8201  api addr         https   127 0 0 1 8200  disable mlock   true  storage  raft      path     path to raft data    node id    raft node id     listener  tcp      address          127 0 0 1 8200    tls cert file     path to full chain pem    tls key file      path to private key pem     telemetry     statsite address    127 0 0 1 8125    disable hostname   true        After the configuration is written  use the   config  
flag with  vault server  to specify where the configuration is      Parameters     storage     StorageBackend  storage backend    required        Configures the storage backend where Vault data is stored  Please see the    storage backends documentation  storage backend  for the full list of   available storage backends  Running Vault in HA mode would require   coordination semantics to be supported by the backend  If the storage backend   supports HA coordination  HA backend options can also be specified in this   parameter block  If not  a separate  ha storage  parameter should be   configured with a backend that supports HA  along with corresponding HA   options      ha storage     StorageBackend  storage backend   nil     Configures   the storage backend where Vault HA coordination will take place  This must be   an HA supporting backend  If not set  HA will be attempted on the backend   given in the  storage  parameter  This parameter is not required if the   storage backend supports HA coordination and if HA specific options are   already specified with  storage  parameter   Refer to  Use Integrated Storage   for HA   Coordination   vault tutorials raft raft ha storage    for a usage example       listener     Listener  listener    required      Configures how   Vault is listening for API requests      user lockout     UserLockout  user lockout   nil       Configures the user lockout behaviour for failed logins  For more information  please see the    user lockout configuration documentation   vault docs configuration user lockout       seal     Seal  seal   nil     Configures the seal type to use for   auto unsealing  as well as for    seal wrapping  sealwrap  as an additional layer of data protection      reporting     Reporting  reporting   nil       Configures options relating to license reporting in Vault      cluster name    string   generated      Specifies a human readable   identifier for the Vault cluster  If omitted  Vault will generate a value    
The cluster name is included as a label in some  telemetry metrics   vault docs internals telemetry metrics      The cluster name is safe to update on an existing Vault cluster      cache size    string   131072      Specifies the size of the read cache used   by the physical storage subsystem  The value is in number of entries  so the   total cache size depends on the size of stored entries      disable cache    bool  false     Disables all caches within Vault  including   the read cache used by the physical storage subsystem  This will very   significantly impact performance      disable mlock    bool  false     Disables the server from executing the    mlock  syscall   mlock  prevents memory from being swapped to disk  Disabling    mlock  is not recommended unless using  integrated storage   vault docs internals integrated storage     Follow the additional security precautions outlined below when disabling  mlock     This can also be provided via the environment variable  VAULT DISABLE MLOCK      Disabling  mlock  is not recommended unless the systems running Vault only   use encrypted swap or do not use swap at all  Vault only supports memory   locking on UNIX like systems that support the mlock   syscall  Linux  FreeBSD  etc     Non UNIX like systems  e g  Windows  NaCL  Android  lack the primitives to keep a   process s entire memory address space from spilling to disk and is therefore   automatically disabled on unsupported platforms     Disabling  mlock  is strongly recommended if using  integrated   storage   vault docs internals integrated storage  due to   the fact that  mlock  does not interact well with memory mapped files such as   those created by BoltDB  which is used by Raft to track state  When using    mlock   memory mapped files get loaded into resident memory which causes   Vault s entire dataset to be loaded in memory and cause out of memory   issues if Vault s data becomes larger than the available RAM  In this case    even though the data 
within BoltDB remains encrypted at rest  swap should be   disabled to prevent Vault s other in memory sensitive data from being dumped   into disk     On Linux  to give the Vault executable the ability to use the  mlock    syscall without running the process as root  run        shell   sudo setcap cap ipc lock  ep   readlink  f   which vault             Note     Since each plugin runs as a separate process  you need to do the same   for each plugin in your  plugins   directory   vault docs plugins plugin architecture plugin directory        Note     If you use a Linux distribution with a modern version of systemd  you can add   the following directive to the   Service   configuration section        ini   LimitMEMLOCK infinity           plugin directory    string         A directory from which plugins are   allowed to be loaded  Vault must have permission to read files in this   directory to successfully load plugins  and the value cannot be a symbolic link      plugin tmpdir    string         A directory that Vault can create temporary   files in to support Unix socket communication with containerized plugins  If   not set  Vault will use the system s default directory for temporary files    Generally not necessary unless you are using    containerized plugins   vault docs plugins containerized plugins  and Vault   does not share a temporary folder with other processes  such as if using   systemd s  PrivateTmp  https   www freedesktop org software systemd man latest systemd exec html PrivateTmp     setting  This can also be specified via the  VAULT PLUGIN TMPDIR  environment   variable      include  plugin file permissions check mdx      plugin file uid    integer  0     Uid of the plugin directories and plugin binaries if they   are owned by an user other than the user running Vault  This only needs to be set if the   file permissions check is enabled via the environment variable  VAULT ENABLE FILE PERMISSIONS CHECK       plugin file permissions    string         
Octal permission string of the plugin   directories and plugin binaries if they have write or execute permissions for group or others    This only needs to be set if the file permissions check is enabled via the environment variable    VAULT ENABLE FILE PERMISSIONS CHECK       telemetry     Telemetry  telemetry    none      Specifies the telemetry   reporting system      default lease ttl    string   768h      Specifies the default lease duration   for tokens and secrets  This is specified using a label suffix like   30s   or     1h    This value cannot be larger than  max lease ttl       max lease ttl    string   768h      Specifies the maximum possible lease   duration for tokens and secrets  This is specified using a label   suffix like   30s   or   1h    Individual mounts can override this value   by tuning the mount with the  max lease ttl  flag of the    auth   vault docs commands auth tune max lease ttl  or    secret   vault docs commands secrets tune max lease ttl  commands      default max request duration    string   90s      Specifies the default   maximum request duration allowed before Vault cancels the request  This can   be overridden per listener via the  max request duration  value      detect deadlocks    string         A comma separated string that specifies the internal mutex locks that should be monitored for potential deadlocks  Currently supported values include  statelock    quotas  and  expiration  which will cause  POTENTIAL DEADLOCK   to be logged when an attempt at a core state lock appears to be deadlocked  Enabling this can have a negative effect on performance due to the tracking of each lock attempt      raw storage endpoint    bool  false     Enables the  sys raw  endpoint which   allows the decryption encryption of raw data into and out of the security   barrier  This is a highly privileged endpoint      introspection endpoint    bool  false     Enables the  sys internal inspect  endpoint   which allows users with a root token or 
…sudo privileges to inspect certain subsystems inside Vault.

- `ui` `(bool: false)` – Enables the built-in web UI, which is available on all
  listeners (address + port) at the `/ui` path. Browsers accessing the standard
  Vault API address will automatically redirect there. This can also be provided
  via the environment variable `VAULT_UI`. For more information, please see the
  [ui configuration documentation](/vault/docs/configuration/ui).

- `pid_file` `(string: "")` – Path to the file in which the Vault server's
  Process ID (PID) should be stored.

- `enable_response_header_hostname` `(bool: false)` – Enables the addition of an HTTP header
  in all of Vault's HTTP responses: `X-Vault-Hostname`. This will contain the
  host name of the Vault node that serviced the HTTP request. This information
  is best effort and is not guaranteed to be present. If this configuration
  option is enabled and the `X-Vault-Hostname` header is not present in a response,
  it means there was some kind of error retrieving the host name from the
  operating system.

- `enable_response_header_raft_node_id` `(bool: false)` – Enables the addition of an HTTP header
  in all of Vault's HTTP responses: `X-Vault-Raft-Node-ID`. If Vault is participating
  in a Raft cluster (i.e. using integrated storage), this header will contain the
  Raft node ID of the Vault node that serviced the HTTP request. If Vault is not
  participating in a Raft cluster, this header will be omitted, whether this configuration
  option is enabled or not.

- `log_level` `(string: "info")` – Log verbosity level. Supported values (in
  order of descending detail) are `trace`, `debug`, `info`, `warn`, and `error`.
  This can also be specified via the `VAULT_LOG_LEVEL` environment variable.

  **Note:** On SIGHUP (`sudo kill -s HUP <pid of vault>`), if a valid value is
  specified, Vault will update the existing log level, overriding (even if
  specified) both the CLI flag and environment variable.

  **Note:** Not all parts of Vault's logging can have its log level changed
  dynamically this way; in particular, secrets/auth plugins are currently not
  updated dynamically.

- `log_format` – Equivalent to the [`-log-format` command-line flag](/vault/docs/commands/server#_log_format).

- `log_file` – Equivalent to the [`-log-file` command-line flag](/vault/docs/commands/server#_log_file).

- `log_rotate_duration` – Equivalent to the [`-log-rotate-duration` command-line flag](/vault/docs/commands/server#_log_rotate_duration).

- `log_rotate_bytes` – Equivalent to the [`-log-rotate-bytes` command-line flag](/vault/docs/commands/server#_log_rotate_bytes).

- `log_rotate_max_files` – Equivalent to the [`-log-rotate-max-files` command-line flag](/vault/docs/commands/server#_log_rotate_max_files).

- `experiments` `(string array: [])` – The list of experiments to enable for this node.
  Experiments should NOT be used in production, and the associated APIs may have backwards
  incompatible changes between releases. Additional experiments can also be specified via
  the `VAULT_EXPERIMENTS` environment variable as a comma-separated list, or via the
  [`-experiment`](/vault/docs/commands/server#experiment) flag.

- `imprecise_lease_role_tracking` `(bool: false)` – Skip lease counting by role if there are no role-based quotas enabled.
  When `imprecise_lease_role_tracking` is set to true and a new role-based quota is enabled, subsequent lease counts start from 0.
  `imprecise_lease_role_tracking` affects role-based lease count quotas, but reduces latencies when not using role-based quotas.

- `enable_post_unseal_trace` `(bool: false)` – Enables the server to generate a Go trace during the execution of the
  `core.postUnseal` function for debug purposes. The resulting trace can be viewed with the `go tool trace` command. The output
  directory can be specified with the `post_unseal_trace_directory` parameter. This should only be enabled temporarily for
  debugging purposes as it can have a significant performance impact. This can be updated on a running Vault process with a
  SIGHUP signal.

- `post_unseal_trace_directory` `(string: "")` – Specifies the directory where the trace file will be written, which must exist
  and be writable by the Vault process. If not specified, Vault creates a subdirectory `vault-traces` under the result from
  [os.TempDir()](https://pkg.go.dev/os#TempDir) (usually `/tmp` on Unix systems). This can be updated on a running Vault process
  with a SIGHUP signal.

## High availability parameters

The following parameters are used on backends that support [high availability](/vault/docs/concepts/ha).

- `api_addr` `(string: "")` – Specifies the address (full URL) to advertise to
  other Vault servers in the cluster for client redirection. This value is also
  used for [plugin backends](/vault/docs/plugins). This can also be provided via the
  environment variable `VAULT_API_ADDR`. In general this should be set as a full
  URL that points to the value of the [listener](/vault/docs/configuration/listener) address.
  This can be dynamically defined with a
  [go-sockaddr template](https://pkg.go.dev/github.com/hashicorp/go-sockaddr/template)
  that is resolved at runtime.

- `cluster_addr` `(string: "")` – Specifies the address to advertise to other
  Vault servers in the cluster for request forwarding. This can also be provided
  via the environment variable `VAULT_CLUSTER_ADDR`. This is a full URL, like
  `api_addr`, but Vault will ignore the scheme (all cluster members always
  use TLS with a private key/certificate). This can be dynamically defined with a
  [go-sockaddr template](https://pkg.go.dev/github.com/hashicorp/go-sockaddr/template)
  that is resolved at runtime.

- `disable_clustering` `(bool: false)` – Specifies whether clustering features
  such as request forwarding are enabled. Setting this to true on one Vault node
  will disable these features only when that node is the active node. This
  parameter cannot be set to `true` if `raft` is the storage type.

## Vault enterprise parameters

The following parameters are only used with Vault Enterprise.

- `disable_sealwrap` `(bool: false)` – Disables using [seal wrapping](/vault/docs/enterprise/sealwrap)
  for any value except the root key. If this value is toggled, the new
  behavior will happen lazily (as values are read or written).

- `disable_performance_standby` `(bool: false)` – Specifies whether performance
  standbys should be disabled on this node. Setting this to true on one Vault
  node will disable this feature when this node is Active or Standby. It's
  recommended to sync this setting across all nodes in the cluster.

- `license_path` `(string: "")` – Path to license file. This can also be
  provided via the environment variable `VAULT_LICENSE_PATH`, or the license
  itself can be provided in the environment variable `VAULT_LICENSE`.

- `administrative_namespace_path` `(string: "")` – Specifies the absolute path
  to the Vault namespace to be used as an
  [Administrative namespace](/vault/docs/enterprise/namespaces/create-admin-namespace).

Related configuration pages: [storage backend](/vault/docs/configuration/storage),
[listener](/vault/docs/configuration/listener), [reporting](/vault/docs/configuration/reporting),
[seal](/vault/docs/configuration/seal), [seal wrap](/vault/docs/enterprise/sealwrap),
[telemetry](/vault/docs/configuration/telemetry), [sentinel](/vault/docs/configuration/sentinel),
[high availability](/vault/docs/concepts/ha), [plugins](/vault/docs/plugins).
"}
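The server parameters above sit at the top level of Vault's HCL configuration file. A minimal sketch, assuming example hostnames and paths (the values shown are illustrative, not defaults):

```hcl
# Illustrative sketch of a Vault server configuration file.
# Hostnames, ports, and paths below are assumptions for the example.

ui        = true
log_level = "debug"
pid_file  = "/var/run/vault/vault.pid"

# Addresses advertised to other cluster members for client
# redirection (api_addr) and request forwarding (cluster_addr).
api_addr     = "https://vault-1.example.com:8200"
cluster_addr = "https://vault-1.example.com:8201"

# Surface diagnostic response headers on every HTTP response.
enable_response_header_hostname     = true
enable_response_header_raft_node_id = true
```

Dynamically reloadable settings such as `log_level` can be picked up on a running server with a SIGHUP, for example `sudo kill -s HUP <pid of vault>`.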
{"questions":"vault Step by step instructions for managing Vault resources programmatically with page title Manage Vault resources programmatically layout docs Terraform Manage Vault resources programmatically with Terraform","answers":"---\nlayout: docs\npage_title: Manage Vault resources programmatically\ndescription: >-\n  Step-by-step instructions for managing Vault resources programmatically with\n  Terraform\n---\n\n# Manage Vault resources programmatically with Terraform\n\nUse Terraform to manage policies, namespaces, and plugins in Vault.\n\n## Before you start\n\n- **You must have [Terraform installed](\/terraform\/install)**.\n- **You must have the [Terraform Vault provider](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest) configured**.\n- **You must have sufficient access to run Terraform**. \n- **You must have a [Vault server running](\/vault\/tutorials\/getting-started\/getting-started-dev-server)**.\n\n## Step 1: Create a resource file for namespaces\n\nTerraform Vault provider supports a `vault_namespace` resource type for\nmanaging Vault namespaces:\n\n```hcl\nresource \"vault_namespace\" \"<TERRAFORM_RESOURCE_NAME>\" {\n  path = \"<VAULT_NAMESPACE>\"\n}\n```\n\nTo manage your Vault namespaces in Terraform:\n\n1. Use the `vault namespace list` command to identify any unmanaged namespaces\n   that you need to migrate. For example:\n\n    ```shell-session\n    $ vault namespace list\n\n    Keys\n    ----\n    admin\/\n    ```\n\n1. 
Create a new Terraform Vault Provider resource file called\n   `vault_namespaces.tf` that defines `vault_namespace` resources for each of\n   the new or existing namespace resources you want to manage.\n\n   For example, to migrate the `admin` namespace in the example and create a new\n   `dev` namespace:\n\n    ```hcl\n    resource \"vault_namespace\" \"admin_ns\" {\n      path = \"admin\"\n    }\n\n    resource \"vault_namespace\" \"dev_ns\" {\n      path = \"dev\"\n    }\n    ```\n\n## Step 2: Create a resource file for secret engines\n\nThe Terraform Vault provider supports discrete resource types for the different\n[auth](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest\/docs#vault-authentication-configuration-options),\n[secret](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest\/docs\/resources\/mount),\nand [database](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest\/docs\/resources\/database_secrets_mount)\nplugin types in Vault.\n\nTo migrate a secret engine, use the `vault_mount` resource type:\n\n```hcl\nresource \"vault_mount\" \"<TERRAFORM_RESOURCE_NAME>\" {\n  path = \"<VAULT_MOUNT_PATH>\"\n  type = \"<VAULT_PLUGIN_TYPE>\"\n}\n```\n\nTo manage your Vault secret engines in Terraform:\n\n1. Use the `vault secrets list` command to identify any unmanaged secret engines\n   that you need to migrate. For example:\n\n    ```shell-session\n    $ vault secrets list | grep -vEw '(cubbyhole|identity|sys)'\n\n    Path          Type         Accessor              Description\n    ----          ----         --------              -----------\n    transit\/      transit      transit_8291b949      n\/a\n    ```\n\n1. Use the `-namespace` flag to check for unmanaged secret engines under any\n   namespaces you identified in the previous step.
For example, to check for\n   secret engines under the `admin` namespace:\n\n    ```shell-session\n    $ vault secrets list -namespace=admin | grep -vEw '(cubbyhole|identity|sys)'\n\n    Path           Type            Accessor                 Description\n    ----           ----            --------                 -----------\n    admin_keys\/    kv              kv_87edfc65              n\/a\n    ```\n\n1. Create a new Terraform Vault Provider resource file called `vault_secrets.tf`\n   that defines `vault_mount` resources for each of the new or existing secret\n   engines you want to manage.\n   \n   For example, to  migrate the `transit` and `admin_keys` secret engines in the\n   example and enable a new `kv` engine under the new `dev` namespace called\n   `dev_keys`:\n\n    ```hcl\n    resource \"vault_mount\" \"transit_plugin\" {\n      path = \"transit\"\n      type = \"transit\"\n    }\n\n    resource \"vault_mount\" \"admin_keys_plugin\" {\n      namespace = vault_namespace.admin_ns.path\n      path = \"admin_keys\"\n      type = \"kv\"\n      options = {\n        version = \"2\"\n      }\n    }\n\n    resource \"vault_mount\" \"dev_keys_plugin\" {\n      namespace = vault_namespace.dev_ns.path\n      path = \"dev_keys\"\n      type = \"kv\"\n      options = {\n        version = \"2\"\n      }\n    }\n    ```\n\n## Step 3: Create a resource file for policies\n\nTerraform Vault provider supports a `vault_policy` resource type for\nmanaging Vault policies:\n\n```hcl\nresource \"vault_policy\" \"<TERRAFORM_RESOURCE_NAME>\" {\n  name = \"<VAULT_POLICY_NAME>\"\n  policy = <<EOT\n    <VAULT_POLICY_DEFINITION>\n  EOT\n}\n```\n\nTo manage your Vault policies in Terraform:\n\n1. Use the `vault policy list` command to identify any unmanaged policies that\n   you need to migrate. For example:\n\n    ```shell-session\n    $ vault policy list | grep -vEw 'root'\n\n    default\n    ```\n\n1. 
Create a Terraform Vault Provider resource file called `vault_policies.tf`\n   that defines `vault_policy` resources for each policy resource you want to\n   manage in Terraform. You can use the following `bash` code to write all\n   your existing, non-root policies to the file:\n\n  ```shell-session\n  for vpolicy in $(vault policy list | grep -vw root) ; do\n    echo \"resource \\\"vault_policy\\\" \\\"vault_$vpolicy\\\" {\"\n    echo \"  name = \\\"$vpolicy\\\"\"\n    echo \"  policy = <<EOT\"\n    vault policy read $vpolicy\n    echo \"EOT\"\n    echo \"}\"\n    echo \"\"\n  done > vault_policies.tf\n  ```\n\n1. Update the `vault_policies.tf` file with any new policies you want to add.\n   For example, to create a policy for the example `dev_keys` secret engine:\n\n   ```hcl\n   resource \"vault_policy\" \"dev_team_policy\" {\n     name = \"dev_team\"\n\n     policy = <<EOT\n    path vault_mount.dev_keys_plugin.path {\n      capabilities = [\"create\", \"update\"]\n    }\n    EOT\n   }\n   ```\n\n## Step 4: Update your Terraform configuration\n\n1. Create a `vault` directory wherever you keep your deployment configuration\n   files for Terraform.\n\n1. Save your new resource files to your new Vault configuration directory.\n\n1. Use `terraform fmt` to adjust the formatting (if needed) of your new\n   configuration files:\n\n  ```shell-session\n  $ terraform fmt\n\n  vault_namespaces.tf\n  vault_secrets.tf\n  vault_policies.tf\n  ```\n\n1. Use `terraform validate` to confirm the new configuration is valid:\n\n  ```shell-session\n  $ terraform validate\n\n  Success!
The configuration is valid.\n  ```\n\n## Step 5: Import preexisting root-level resources\n\nUse the `terraform import` command to import the preexisting root-level resources.\n\nFor example, import the `admin` namespace, `default` policy, and `transit`\nplugin from the previous steps:\n\n```shell-session\n$ terraform import vault_namespace.admin_ns admin\n\nvault_namespace.admin_ns: Importing from ID \"admin\"...\nvault_namespace.admin_ns: Import prepared!\n  Prepared vault_namespace for import\nvault_namespace.admin_ns: Refreshing state... [id=admin]\n\nImport successful!\n\nThe resources that were imported are shown above. These resources are now in\nyour Terraform state and will henceforth be managed by Terraform.\n```\n\n```shell-session\n$ terraform import vault_policy.default_policy default\n\nvault_policy.default_policy: Importing from ID \"default\"...\nvault_policy.default_policy: Import prepared!\n  Prepared vault_policy for import\nvault_policy.default_policy: Refreshing state... [id=default]\n\nImport successful!\n\nThe resources that were imported are shown above. These resources are now in\nyour Terraform state and will henceforth be managed by Terraform.\n```\n\n```shell-session\n$ terraform import vault_mount.transit_plugin transit\n\nvault_mount.transit_plugin: Importing from ID \"transit\"...\nvault_mount.transit_plugin: Import prepared!\n  Prepared vault_mount for import\nvault_mount.transit_plugin: Refreshing state... [id=transit]\n\nImport successful!\n\nThe resources that were imported are shown above. These resources are now in\nyour Terraform state and will henceforth be managed by Terraform.\n```\n\n## Step 6: Import preexisting nested resources\n\nTo import resources that belong to a previously unmanaged namespace, you must\nset the `TERRAFORM_VAULT_NAMESPACE_IMPORT` environment variable before importing.\n\nFor example, to import the `admin_keys` secret engine from the `admin` namespace:\n\n1. 
Set `TERRAFORM_VAULT_NAMESPACE_IMPORT` to the `admin` Vault namespace:\n\n  ```shell-session\n  $ export TERRAFORM_VAULT_NAMESPACE_IMPORT=\"admin\"\n  ```\n\n1. Import the `vault_mount` resource `admin_keys`:\n\n  ```shell-session\n  $ terraform import vault_mount.admin_keys_plugin admin_keys\n\n  vault_mount.admin_keys_plugin: Importing from ID \"admin_keys\"...\n  vault_mount.admin_keys_plugin: Import prepared!\n    Prepared vault_mount for import\n  vault_mount.admin_keys_plugin: Refreshing state... [id=admin_keys]\n\n  Import successful!\n\n  The resources that were imported are shown above. These resources are now in\n  your Terraform state and will henceforth be managed by Terraform.\n  ```\n\n1. Unset the `TERRAFORM_VAULT_NAMESPACE_IMPORT` variable when you finish\n   importing child resources:\n\n  ```shell-session\n  $ unset TERRAFORM_VAULT_NAMESPACE_IMPORT\n  ```\n\n## Step 7: Verify the import\n\n1. Use the `terraform state show` command to check your Terraform state file and\n   verify the resources imported successfully. For example, to check the\n   `admin_keys` resource:\n\n  ```shell-session\n  $ terraform state show vault_mount.admin_keys_plugin\n\n  # vault_mount.admin_keys_plugin:\n  resource \"vault_mount\" \"admin_keys_plugin\" {\n      accessor                     = \"kv_87edfc65\"\n      allowed_managed_keys         = []\n      audit_non_hmac_request_keys  = []\n      audit_non_hmac_response_keys = []\n      default_lease_ttl_seconds    = 0\n      description                  = null\n      external_entropy_access      = false\n      id                           = \"admin_keys\"\n      local                        = false\n      max_lease_ttl_seconds        = 0\n      namespace                    = \"admin\"\n      options                      = {\n          \"version\" = \"2\"\n      }\n      path                         = \"admin_keys\"\n      seal_wrap                    = false\n      type                         = \"kv\"\n  }\n  ```\n\n1. For each of the migrated resources, compare the `accessor` value from your\n   Terraform state to the accessor value in Vault. For example, to confirm the\n   accessor for `admin_keys`:\n\n  ```shell-session\n  $ vault secrets list -namespace=\"admin\" | grep -vEw '(cubbyhole|identity|sys)'\n\n  Path           Type            Accessor                 Description\n  ----           ----            --------                 -----------\n  admin_keys\/    kv              kv_87edfc65              n\/a\n  ```\n\n## Step 8: Add new Vault resources\n\n1. Run `terraform plan` to confirm the new resources that Terraform will manage:\n\n    ```shell-session\n    $ terraform plan\n\n    vault_policy.default_policy: Refreshing state... [id=default]\n    vault_namespace.admin_ns: Refreshing state... [id=admin\/]\n    vault_mount.transit_plugin: Refreshing state... [id=transit]\n    vault_mount.admin_keys_plugin: Refreshing state... [id=admin_keys]\n\n    Terraform used the selected providers to generate the following execution plan.\n    Resource actions are indicated with the following symbols:\n      + create\n\n    Terraform will perform the following actions:\n\n      # vault_mount.dev_keys_plugin will be created\n      + resource \"vault_mount\" \"dev_keys_plugin\" {\n          + accessor                     = (known after apply)\n          + audit_non_hmac_request_keys  = (known after apply)\n          + audit_non_hmac_response_keys = (known after apply)\n          + default_lease_ttl_seconds    = (known after apply)\n          + external_entropy_access      = false\n          + id                           = (known after apply)\n          + max_lease_ttl_seconds        = (known after apply)\n          + namespace                    = \"dev\"\n          + options                      = {\n              + \"version\" = \"2\"\n            }\n          + path                         = \"dev_keys\"\n          + seal_wrap                    = (known after apply)\n          + type
                    = \"kv\"\n        }\n\n      # vault_namespace.dev_ns will be created\n      + resource \"vault_namespace\" \"dev_ns\" {\n          + custom_metadata = (known after apply)\n          + id              = (known after apply)\n          + namespace_id    = (known after apply)\n          + path            = \"dev\"\n          + path_fq         = (known after apply)\n        }\n\n      # vault_policy.dev_team_policy will be created\n      + resource \"vault_policy\" \"dev_team_policy\" {\n          + id     = (known after apply)\n          + name   = \"dev_team\"\n          + policy = <<-EOT\n                path vault_mount.dev_keys_plugin.path {\n                      capabilities = [\"create\", \"update\"]\n                    }\n            EOT\n        }\n\n    Plan: 3 to add, 0 to change, 0 to destroy.\n    ```\n\n1. Run `terraform apply` to create the new resources:\n\n  ```shell-session\n  $ terraform apply\n\n  vault_namespace.dev_ns: Creating...\n  vault_namespace.dev_ns: Creation complete after 0s [id=dev\/]\n  vault_mount.dev_keys_plugin: Creating...\n  vault_mount.dev_keys_plugin: Creation complete after 0s [id=dev_keys]\n  vault_policy.dev_team_policy: Creating...\n  vault_policy.dev_team_policy: Creation complete after 0s [id=dev_team]\n\n  Apply complete! Resources: 3 added, 0 changed, 0 destroyed.\n  ```\n\n1. Use the `terraform state show` command to check your Terraform state file and\n   verify the new resources created successfully.
For example, to check the\n   `dev_keys` resource:\n\n  ```shell-session\n  $ terraform state show vault_mount.dev_keys_plugin\n\n  # vault_mount.dev_keys_plugin:\n  resource \"vault_mount\" \"dev_keys_plugin\" {\n      accessor                     = \"kv_b3d2dd6f\"\n      allowed_managed_keys         = []\n      audit_non_hmac_request_keys  = []\n      audit_non_hmac_response_keys = []\n      default_lease_ttl_seconds    = 0\n      description                  = null\n      external_entropy_access      = false\n      id                           = \"dev_keys\"\n      local                        = false\n      max_lease_ttl_seconds        = 0\n      namespace                    = \"dev\"\n      options                      = {\n          \"version\" = \"2\"\n      }\n      path                         = \"dev_keys\"\n      seal_wrap                    = false\n      type                         = \"kv\"\n  }\n  ```\n\n1. Confirm that your Vault instance can use the new resources. For example, to\n   confirm the `dev_keys` resource:\n\n  ```shell-session\n  $ vault secrets list -namespace=\"dev\" | grep -vEw '(cubbyhole|identity|sys)'\n\n  Path          Type            Accessor                 Description\n  ----          ----            --------                 -----------\n  dev_keys\/     kv              kv_b3d2dd6f              n\/a\n  ```\n\n## Next steps\n\n1.
Review the [best practices for programmatic Vault management](\/vault\/docs\/configuration\/programmatic-best-practices).","site":"vault"}
access        false       id                              dev keys        local                          false       max lease ttl seconds          0       namespace                       dev        options                                     version     2                path                            dev keys        seal wrap                      false       type                            kv             1  Confirm that your Vault instance can use the new resources  For example  to    confirm the  dev keys  resources        shell session     vault secrets list  namespace  dev    grep  vEw   cubbyhole identity sys      Path          Type            Accessor                 Description                                                                        dev keys      kv              kv b3d2dd6f              n a            Next steps  1  Review the  best practices for programmatic Vault management   vault docs configuration programmatic best practices  "}
{"questions":"vault metrics to upstream systems telemetry stanza layout docs page title Telemetry Configuration The telemetry stanza specifies various configurations for Vault to publish","answers":"---\nlayout: docs\npage_title: Telemetry - Configuration\ndescription: |-\n  The telemetry stanza specifies various configurations for Vault to publish\n  metrics to upstream systems.\n---\n\n# `telemetry` stanza\n\nThe `telemetry` stanza specifies various configurations for Vault to publish\nmetrics to upstream systems. Available Vault metrics can be found in the\n[Telemetry internals documentation](\/vault\/docs\/internals\/telemetry).\n\n```hcl\ntelemetry {\n  statsite_address = \"statsite.company.local:8125\"\n}\n```\n\n## `telemetry` parameters\n\nDue to the number of configurable parameters to the `telemetry` stanza,\nparameters on this page are grouped by the telemetry provider.\n\n### Common\n\nThe following options are available on all telemetry configurations.\n\n- `usage_gauge_period` `(string: \"10m\")` - Specifies the interval at which high-cardinality\n  usage data is collected, such as token counts, entity counts, and secret counts.\n  A value of \"none\" disables the collection. Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n- `maximum_gauge_cardinality` `(int: 500)` - The maximum cardinality of gauge labels.\n- `disable_hostname` `(bool: false)` - Specifies if gauge values should be\n  prefixed with the local hostname.\n- `enable_hostname_label` `(bool: false)` - Specifies if all metric values should\n  contain the `host` label with the local hostname. It is recommended to enable\n  `disable_hostname` if this option is used.\n- `metrics_prefix` `(string: \"vault\")` - Specifies the prefix used for metric values. By default, metrics are prefixed with \"vault\".\n- `lease_metrics_epsilon` `(string: \"1h\")` - Specifies the size of the bucket used to measure future\n  lease expiration. 
For example, for the default value of 1 hour, the `vault.expire.leases.by_expiration`\n  metric will aggregate the total number of expiring leases for 1 hour buckets, starting from the current time.\n  Note that leases are put into buckets by rounding. For example, if `lease_metrics_epsilon` is set to 1h and\n  lease A expires 25 minutes from now, and lease B expires 35 minutes from now, then lease A will be in the first\n  bucket, which corresponds to 0-30 minutes, and lease B will be in the second bucket, which corresponds to 31-90\n  minutes. Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n- `num_lease_metrics_buckets` `(int: 168)` - The number of expiry buckets for leases. For the default value, for\n  example, 168 value labels for the `vault.expire.leases.by_expiration` metric will be reported, where each\n  bucket is separated in time by the `lease_metrics_epsilon` parameter. For the default 1 hour value of\n  `lease_metrics_epsilon` and the default value of `num_lease_metrics_buckets`, `vault.expire.leases.by_expiration`\n  will report the total number of leases expiring within each hour from the current time to one week from the current time.\n- `add_lease_metrics_namespace_labels` `(bool: false)` - If this value is set to true, then `vault.expire.leases.by_expiration`\n  will break down expiring leases by both time and namespace. This parameter is disabled by default because enabling it can lead\n  to a large-cardinality metric.\n- `add_mount_point_rollback_metrics` `(bool: false)` - If this value is set to true, then `vault.rollback.attempt.{MOUNT_POINT}`\n  and `vault.route.rollback.{MOUNT_POINT}` metrics will be reported for every mount point. If this parameter is false, then\n  `vault.rollback.attempt` and `vault.route.rollback` metrics (which do not have the mount point in the metric name)\n  will be reported instead. 
This parameter is disabled by default starting in Vault 1.15 due to the high cardinality of\n  these metrics.\n- `filter_default` `(bool: true)` - This controls whether to allow metrics that have not been specified by the filter.\n  Defaults to `true`, which will allow all metrics when no filters are provided.\n  When set to `false` with no filters, no metrics will be sent.\n- `prefix_filter` `(string array: [])` - This is a list of filter rules to apply for allowing\/blocking metrics by\n  prefix in the following format:\n  ```json\n  [\"+vault.token\", \"-vault.expire\", \"+vault.expire.num_leases\"]\n  ```\n  A leading \"**+**\" will enable any metrics with the given prefix, and a leading \"**-**\" will block them.\n  If there is overlap between two rules, the more specific rule will take precedence. Blocking will take priority if the same prefix is listed multiple times.\n\n\n### `statsite`\n\nThese `telemetry` parameters apply to\n[statsite](https:\/\/github.com\/armon\/statsite).\n\n- `statsite_address` `(string: \"\")` - Specifies the address of a statsite server\n  to forward metrics data to.\n\n```hcl\ntelemetry {\n  statsite_address = \"statsite.company.local:8125\"\n}\n```\n\n### `statsd`\n\nThese `telemetry` parameters apply to\n[statsd](https:\/\/github.com\/etsy\/statsd).\n\n- `statsd_address` `(string: \"\")` - Specifies the address of a statsd server to\n  forward metrics to.\n\n```hcl\ntelemetry {\n  statsd_address = \"statsd.company.local:8125\"\n}\n```\n\n### `circonus`\n\nThese `telemetry` parameters apply to [Circonus](http:\/\/circonus.com\/).\n\n- `circonus_api_token` `(string: \"\")` - Specifies a valid Circonus API Token\n  used to create\/manage checks. 
If provided, metric management is enabled.\n\n- `circonus_api_app` `(string: \"nomad\")` - Specifies a valid app name associated\n  with the API token.\n\n- `circonus_api_url` `(string: \"https:\/\/api.circonus.com\/v2\")` - Specifies the\n  base URL to use for contacting the Circonus API.\n\n- `circonus_submission_interval` `(string: \"10s\")` - Specifies the interval at\n  which metrics are submitted to Circonus.\n\n- `circonus_submission_url` `(string: \"\")` - Specifies the\n  `check.config.submission_url` field, of a Check API object, from a previously\n  created HTTPTRAP check.\n\n- `circonus_check_id` `(string: \"\")` - Specifies the Check ID (**not check\n  bundle**) from a previously created HTTPTRAP check. The numeric portion of the\n  `check._cid` field in the Check API object.\n\n- `circonus_check_force_metric_activation` `(bool: false)` - Specifies whether to force\n  activation of metrics which already exist and are not currently active. If\n  check management is enabled, the default behavior is to add new metrics as\n  they are encountered. If the metric already exists in the check, it will\n  not be activated. This setting overrides that behavior.\n\n- `circonus_check_instance_id` `(string: \"<hostname>:<application>\")` - Serves\n  to uniquely identify the metrics coming from this _instance_. It can be used\n  to maintain metric continuity with transient or ephemeral instances as they\n  move around within an infrastructure. By default, this is set to\n  hostname:application name (e.g. \"host123:nomad\").\n\n- `circonus_check_search_tag` `(string: <service>:<application>)` - Specifies a\n  special tag which, when coupled with the instance id, helps to narrow down the\n  search results when neither a Submission URL nor Check ID is provided. By\n  default, this is set to service:app (e.g. \"service:nomad\").\n\n- `circonus_check_display_name` `(string: \"\")` - Specifies a name to give a\n  check when it is created. 
This name is displayed in the Circonus UI Checks\n  list.\n\n- `circonus_check_tags` `(string: \"\")` - Comma-separated list of additional\n  tags to add to a check when it is created.\n\n- `circonus_broker_id` `(string: \"\")` - Specifies the ID of a specific Circonus\n  Broker to use when creating a new check. The numeric portion of `broker._cid`\n  field in a Broker API object. If metric management is enabled and neither a\n  Submission URL nor Check ID is provided, an attempt will be made to search for\n  an existing check using Instance ID and Search Tag. If one is not found, a new\n  HTTPTRAP check will be created. By default, a random\n  Enterprise Broker is selected, or the default Circonus Public Broker.\n\n- `circonus_broker_select_tag` `(string: \"\")` - Specifies a special tag which\n  will be used to select a Circonus Broker when a Broker ID is not provided. The\n  best use of this is as a hint for which broker should be used based on\n  _where_ this particular instance is running (e.g. a specific geo location or\n  datacenter, dc:sfo).\n\n### `dogstatsd`\n\nThese `telemetry` parameters apply to\n[DogStatsD](http:\/\/docs.datadoghq.com\/guides\/dogstatsd\/).\n\n- `dogstatsd_addr` `(string: \"\")` - This provides the address of a DogStatsD\n  instance. DogStatsD is a protocol-compatible flavor of statsd, with the added\n  ability to decorate metrics with tags and event information. If provided,\n  Vault will send various telemetry information to that instance for\n  aggregation. This can be used to capture runtime information.\n\n- `dogstatsd_tags` `(string array: [])` - This provides a list of global tags\n  that will be added to all telemetry packets sent to DogStatsD. 
It is a list\n  of strings, where each string looks like \"my_tag_name:my_tag_value\".\n\n### `prometheus`\n\nThese `telemetry` parameters apply to\n[prometheus](https:\/\/prometheus.io).\n\n- `prometheus_retention_time` `(string: \"24h\")` - Specifies the amount of time that\n  Prometheus metrics are retained in memory. Setting this to 0 will disable Prometheus telemetry.\n- `disable_hostname` `(bool: false)` - It is recommended to also enable the option\n  `disable_hostname` to avoid prefixing metrics with the hostname.\n\nThe `\/v1\/sys\/metrics` endpoint is only accessible on active nodes\nand automatically disabled on standby nodes. You can enable the `\/v1\/sys\/metrics`\nendpoint on standby nodes by [enabling unauthenticated metrics access][telemetry-tcp].\nStandby nodes will never forward a request to `\/v1\/sys\/metrics` to the active\nnode. If unauthenticated metrics access is enabled, the standby node will\nrespond with its own metrics. If unauthenticated metrics access is not enabled,\nthen a standby node will attempt to service the request but fail and then\nredirect the request to the active node.\n\nQuerying `\/v1\/sys\/metrics` with one of the following headers:\n\n - `Accept: prometheus\/telemetry`\n - `Accept: application\/openmetrics-text`\n\nwill return Prometheus-formatted results. Most Prometheus servers automatically\nquery scrape targets with these headers by default.\n\nA Vault token is required with `capabilities = [\"read\", \"list\"]` to access\n`\/v1\/sys\/metrics`. 
The Prometheus `bearer_token` or `bearer_token_file` options\nmust be added to the scrape job.\n\nVault does not use the default Prometheus path, so Prometheus must be configured\nto scrape `\/v1\/sys\/metrics` instead of the default scrape path.\n\nAn example `job_name` stanza required in the [Prometheus config](https:\/\/prometheus.io\/docs\/prometheus\/latest\/configuration\/configuration\/#scrape_config) is provided below.\n\n```\n# prometheus.yml\nscrape_configs:\n  - job_name: 'vault'\n    metrics_path: \"\/v1\/sys\/metrics\"\n    scheme: https\n    tls_config:\n      ca_file: your_ca_here.pem\n    bearer_token: \"your_vault_token_here\"\n    static_configs:\n    - targets: ['your_vault_server_here:8200']\n```\n\nAn example telemetry configuration to be added to Vault's configuration file is shown below:\n\n```hcl\ntelemetry {\n  prometheus_retention_time = \"30s\"\n  disable_hostname = true\n}\n```\n\n### `stackdriver`\n\nThese `telemetry` parameters apply to [Stackdriver Monitoring](https:\/\/cloud.google.com\/monitoring\/).\n\nThe Stackdriver telemetry provider uses the official Google Cloud Golang SDK. 
This means\nit supports the common ways of\n[providing credentials to Google Cloud](https:\/\/cloud.google.com\/docs\/authentication\/production#providing_credentials_to_your_application).\n\nTo use this telemetry provider, the service account must have the following\nminimum scope(s):\n\n```text\nhttps:\/\/www.googleapis.com\/auth\/cloud-platform\nhttps:\/\/www.googleapis.com\/auth\/monitoring\nhttps:\/\/www.googleapis.com\/auth\/monitoring.write\n```\n\nAnd the following IAM role(s):\n\n```text\nroles\/monitoring.metricWriter\n```\n\n- `stackdriver_project_id` `(string: \"\")` - The Google Cloud ProjectID to send telemetry data to.\n- `stackdriver_location` `(string: \"\")` - The GCP or AWS region of the monitored resource.\n- `stackdriver_namespace` `(string: \"\")` - A namespace identifier for the telemetry data.\n- `stackdriver_debug_logs` `(bool: false)` - Specifies if Vault writes additional\nStackdriver-related debug logs to standard error output (stderr).\n\nIt is recommended to also enable the option `disable_hostname` to avoid prefixing\nmetrics with the hostname, and to enable `enable_hostname_label` instead.\n\n```hcl\ntelemetry {\n  stackdriver_project_id = \"my-test-project\"\n  stackdriver_location = \"us-east1-a\"\n  stackdriver_namespace = \"vault-cluster-a\"\n  disable_hostname = true\n  enable_hostname_label = true\n}\n```\n\nMetrics from Vault can be found in [Metrics Explorer](https:\/\/cloud.google.com\/monitoring\/charts\/metrics-explorer).\nAll those metrics are shown with a resource type of `generic_task`, and the metric name\nis prefixed with `custom.googleapis.com\/go-metrics\/`.\n\n[telemetry-tcp]: \/vault\/docs\/configuration\/listener\/tcp#telemetry-parameters","site":"vault","answers_cleaned":"    layout  docs page title  Telemetry   Configuration description       The telemetry stanza specifies various configurations for Vault to publish   metrics to upstream systems          telemetry  stanza  The  telemetry  stanza specifies 
various configurations for Vault to publish metrics to upstream systems  Available Vault metrics can be found in the  Telemetry internals documentation   vault docs internals telemetry       hcl telemetry     statsite address    statsite company local 8125             telemetry  parameters  Due to the number of configurable parameters to the  telemetry  stanza  parameters on this page are grouped by the telemetry provider       Common  The following options are available on all telemetry configurations      usage gauge period    string   10m      Specifies the interval at which high cardinality   usage data is collected  such as token counts  entity counts  and secret counts     A value of  none  disables the collection   Uses  duration format strings   vault docs concepts duration format      maximum gauge cardinality    int  500     The maximum cardinality of gauge labels     disable hostname    bool  false     Specifies if gauge values should be   prefixed with the local hostname     enable hostname label    bool  false     Specifies if all metric values should   contain the  host  label with the local hostname  It is recommended to enable    disable hostname  if this option is used     metrics prefix    string   vault      Specifies the prefix used for metric vaules  By default  metrics are prefixed with  vault      lease metrics epsilon    string   1h      Specifies the size of the bucket used to measure future   lease expiration  For example  for the default value of 1 hour  the  vault expire leases by expiration    metric will aggregate the total number of expiring leases for 1 hour buckets  starting from the current time    Note that leases are put into buckets by rounding  For example  if  lease metrics epsilon  is set to 1h and   lease A expires 25 minutes from now  and lease B expires 35 minutes from now  then lease A will be in the first   bucket  which corresponds to 0 30 minutes  and lease B will be in the second bucket  which corresponds to 31 90   
minutes  Uses  duration format strings   vault docs concepts duration format      num lease metrics buckets    int  168     The number of expiry buckets for leases  For the default value  for   example  168 value labels for the  vault expire leases by expiration  metric will be reported  where each value   each bucket is separated in time by the  lease metrics epsilon  parameter  For the default 1 hour value of    lease metrics epsilon  and the default value of  num lease metrics buckets    vault expire leases by expiration    will report the total number of leases expiring within each hour from the current time to one week from the current time     add lease metrics namespace labels    bool  false     If this value is set to true  then  vault expire leases by expiration    will break down expiring leases by both time and namespace  This parameter is disabled by default because enabling it can lead   to a large cardinality metric     add mount point rollback metrics    bool  false     If this value is set to true  then  vault rollback attempt  MOUNT POINT     and  vault route rollback  MOUNT POINT   metrics will be reported for every mount point  If this parameter is false  then    vault rollback attempt  and  vault route rollback  metrics  which do not have the mount point in the metric name    will be reported instead  This parameter is disabled by default starting in Vault 1 15 due to the high cardinality of   these metrics     filter default    bool  true     This controls whether to allow metrics that have not been specified by the filter    Defaults to  true   which will allow all metrics when no filters are provided    When set to  false  with no filters  no metrics will be sent     prefix filter    string array         This is a list of filter rules to apply for allowing blocking metrics by   prefix in the following format       json      vault token     vault expire     vault expire num leases           A leading         will enable any metrics with the 
given prefix  and a leading         will block them    If there is overlap between two rules  the more specific rule will take precedence  Blocking will take priority if the same prefix is listed multiple times         statsite   These  telemetry  parameters apply to  statsite  https   github com armon statsite       statsite address    string         Specifies the address of a statsite server   to forward metrics data to      hcl telemetry     statsite address    statsite company local 8125              statsd   These  telemetry  parameters apply to  statsd  https   github com etsy statsd       statsd address    string         Specifies the address of a statsd server to   forward metrics to      hcl telemetry     statsd address    statsd company local 8125              circonus   These  telemetry  parameters apply to  Circonus  http   circonus com        circonus api token    string         Specifies a valid Circonus API Token   used to create manage check  If provided  metric management is enabled      circonus api app    string   nomad      Specifies a valid app name associated   with the API token      circonus api url    string   https   api circonus com v2      Specifies the   base URL to use for contacting the Circonus API      circonus submission interval    string   10s      Specifies the interval at   which metrics are submitted to Circonus      circonus submission url    string         Specifies the    check config submission url  field  of a Check API object  from a previously   created HTTPTRAP check      circonus check id    string         Specifies the Check ID    not check   bundle    from a previously created HTTPTRAP check  The numeric portion of the    check  cid  field in the Check API object      circonus check force metric activation    bool  false     Specifies if force   activation of metrics which already exist and are not currently active  If   check management is enabled  the default behavior is to add new metrics as   they are 
encountered  If the metric already exists in the check  it will   not be activated  This setting overrides that behavior      circonus check instance id    string    hostname   application       Serves   to uniquely identify the metrics coming from this  instance   It can be used   to maintain metric continuity with transient or ephemeral instances as they   move around within an infrastructure  By default  this is set to   hostname application name  e g   host123 nomad        circonus check search tag    string   service   application      Specifies a   special tag which  when coupled with the instance id  helps to narrow down the   search results when neither a Submission URL or Check ID is provided  By   default  this is set to service app  e g   service nomad        circonus check display name    string         Specifies a name to give a   check when it is created  This name is displayed in the Circonus UI Checks   list      circonus check tags    string         Comma separated list of additional   tags to add to a check when it is created      circonus broker id    string         Specifies the ID of a specific Circonus   Broker to use when creating a new check  The numeric portion of  broker  cid    field in a Broker API object  If metric management is enabled and neither a   Submission URL nor Check ID is provided  an attempt will be made to search for   an existing check using Instance ID and Search Tag  If one is not found  a new   HTTPTRAP check will be created  By default  this is a random   Enterprise Broker is selected  or  the default Circonus Public Broker      circonus broker select tag    string         Specifies a special tag which   will be used to select a Circonus Broker when a Broker ID is not provided  The   best use of this is to as a hint for which broker should be used based on    where  this particular instance is running  e g  a specific geo location or   datacenter  dc sfo         dogstatsd   These  telemetry  parameters apply to  
DogStatsD  http   docs datadoghq com guides dogstatsd        dogstatsd addr    string         This provides the address of a DogStatsD   instance  DogStatsD is a protocol compatible flavor of statsd  with the added   ability to decorate metrics with tags and event information  If provided    Vault will send various telemetry information to that instance for   aggregation  This can be used to capture runtime information      dogstatsd tags    string array         This provides a list of global tags   that will be added to all telemetry packets sent to DogStatsD  It is a list   of strings  where each string looks like  my tag name my tag value         prometheus   These  telemetry  parameters apply to  prometheus  https   prometheus io       prometheus retention time    string   24h      Specifies the amount of time that   Prometheus metrics are retained in memory  Setting this to 0 will disable Prometheus telemetry     disable hostname    bool  false     It is recommended to also enable the option    disable hostname  to avoid having prefixed metrics with hostname   The   v1 sys metrics  endpoint is only accessible on active nodes and automatically disabled on standby nodes  You can enable the   v1 sys metrics  endpoint on standby nodes by  enabling unauthenticated metrics access  telemetry tcp   Standby nodes will never forward a request to   v1 sys metrics  to the active node  If unauthenticated metrics access is enabled  the standby node will respond with its own metrics  If unauthenticated metrics access is not enabled  then a standby node will attempt to service the request but fail and then redirect the request to the active node   Querying   v1 sys metrics  with one of the following headers       Accept  prometheus telemetry      Accept  application openmetrics text   will return Prometheus formatted results  Most Prometheus servers automatically query scrape targets with these headers by default   A Vault token is required with  capabilities     read    list 
   to  v1 sys metrics  The Prometheus  bearer token  or  bearer token file  options must be added to the scrape job   Vault does not use the default Prometheus path  so Prometheus must be configured to scrape  v1 sys metrics  instead of the default scrape path   An example job name stanza required in the  Prometheus config  https   prometheus io docs prometheus latest configuration configuration  scrape config  is provided below         prometheus yml scrape configs      job name   vault      metrics path    v1 sys metrics      scheme  https     tls config        ca file  your ca here pem     bearer token   your vault token here      static configs        targets    your vault server here 8200        An example telemetry configuration to be added to Vault s configuration file is shown below      hcl telemetry     prometheus retention time    30s    disable hostname   true             stackdriver   These  telemetry  parameters apply to  Stackdriver Monitoring  https   cloud google com monitoring     The Stackdriver telemetry provider uses the official Google Cloud Golang SDK  This means it supports the common ways of  providing credentials to Google Cloud  https   cloud google com docs authentication production providing credentials to your application    To use this telemetry provider  the service account must have the following minimum scope s       text https   www googleapis com auth cloud platform https   www googleapis com auth monitoring https   www googleapis com auth monitoring write      And the following IAM role s       text roles monitoring metricWriter         stackdriver project id    string         The Google Cloud ProjectID to send telemetry data to     stackdriver location    string         The GCP or AWS region of the monitored resource     stackdriver namespace    string         A namespace identifier for the telemetry data     stackdriver debug logs    bool   false      Specifies if Vault writes additional stackdriver related debug logs to 
standard error output  stderr    It is recommended to also enable the option  disable hostname  to avoid having prefixed metrics with hostname and enable instead  enable hostname label       hcl telemetry     stackdriver project id    my test project    stackdriver location    us east1 a    stackdriver namespace    vault cluster a    disable hostname   true   enable hostname label   true        Metrics from Vault can be found in  Metrics Explorer  https   cloud google com monitoring charts metrics explorer   All those metrics are shown with a resource type of  generic task   and the metric name is prefixed with  custom googleapis com go metrics      telemetry tcp    vault docs configuration listener tcp telemetry parameters"}
{"questions":"vault Kubernetes service registration Kubernetes Service Registration labels Vault pods with their current status layout docs for use with selectors page title Kubernetes Service Registration Configuration","answers":"---\nlayout: docs\npage_title: Kubernetes - Service Registration - Configuration\ndescription: >-\n  Kubernetes Service Registration labels Vault pods with their current status\n  for use with selectors.\n---\n\n# Kubernetes service registration\n\nKubernetes Service Registration tags Vault pods with their current status for\nuse with selectors. Service registration is only available when Vault is running in\n[High Availability mode](\/vault\/docs\/concepts\/ha).\n\n- **HashiCorp Supported** \u2013 Kubernetes Service Registration is officially supported\n  by HashiCorp.\n\n## Configuration\n\n```hcl\nservice_registration \"kubernetes\" {\n  namespace      = \"my-namespace\"\n  pod_name       = \"my-pod-name\"\n}\n```\n\nAlternatively, the namespace and pod name can be set through the following\nenvironment variables:\n\n- `VAULT_K8S_NAMESPACE`\n- `VAULT_K8S_POD_NAME`\n\nThis allows you to set these parameters using\n[the Downward API](https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/downward-api-volume-expose-pod-information\/).\n\nIf using only environment variables, the service registration stanza declaring\nyou're using Kubernetes must still exist to indicate your intentions:\n\n```\nservice_registration \"kubernetes\" {}\n```\n\nFor service registration to succeed, Vault must be able to apply labels to pods\nin Kubernetes. 
The following RBAC rules are required to allow the service account\nassociated with the Vault pods to update its own pod specification:\n\n```\nkind: Role\napiVersion: rbac.authorization.k8s.io\/v1\nmetadata:\n  namespace: mynamespace\n  name: vault-service-account\nrules:\n- apiGroups: [\"\"]\n  resources: [\"pods\"]\n  verbs: [\"get\", \"update\", \"patch\"]\n```\n\n## Examples\n\nOnce properly configured, enabling service registration will cause Kubernetes pods\nto come up with the following labels:\n\n```\napiVersion: v1\nkind: Pod\nmetadata:\n  name: vault\n  labels:\n    vault-active: \"false\"\n    vault-initialized: \"true\"\n    vault-perf-standby: \"false\"\n    vault-sealed: \"false\"\n    vault-version: 1.18.1\n```\n\nAfter shutdowns, Vault pods will bear the following labels:\n\n```\napiVersion: v1\nkind: Pod\nmetadata:\n  name: vault\n  labels:\n    vault-active: \"false\"\n    vault-initialized: \"false\"\n    vault-perf-standby: \"false\"\n    vault-sealed: \"true\"\n    vault-version: 1.18.1\n```\n\n## Label definitions\n\n- `vault-active` `(string: \"true\"\/\"false\")` \u2013 Vault active is updated dynamically each time Vault's active status changes.\n  True indicates that this Vault pod is currently the leader. False indicates that this Vault pod is currently a standby.\n- `vault-initialized` `(string: \"true\"\/\"false\")` \u2013 Vault initialized is updated dynamically each time Vault's initialization\n  status changes. True indicates that Vault is currently initialized. False indicates that Vault is currently uninitialized.\n- `vault-perf-standby` `(string: \"true\"\/\"false\")` \u2013 Vault performance standby is updated dynamically each\n  time Vault's leader\/standby status changes. **This field is only valuable if the pod is a member of a performance standby cluster**;\n  it will simply be set to \"false\" when it's not applicable. True indicates that this Vault pod is currently a performance standby. 
False indicates\n  that this Vault pod is currently a performance leader.\n- `vault-sealed` `(string: \"true\"\/\"false\")` \u2013 Vault sealed is updated dynamically each\n  time Vault's sealed\/unsealed status changes. True indicates that Vault is currently sealed. False indicates that Vault\n  is currently unsealed.\n- `vault-version` `(string: \"1.18.1\")` \u2013 Vault version is a string that will not change during a pod's lifecycle.\n\n## Working with vault's service discovery labels\n\n### Example service\n\nWith labels applied to the pod, services can be created using selectors to filter pods with specific Vault HA roles,\neffectively allowing direct communication with subsets of Vault pods. Note the `vault-active: \"true\"` line below.\n\n```\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app.kubernetes.io\/instance: vault\n    app.kubernetes.io\/name: vault\n    helm.sh\/chart: vault-0.29.1\n  name: vault-active-us-east\n  namespace: default\nspec:\n  clusterIP: 10.7.254.51\n  ports:\n  - name: http\n    port: 8200\n    protocol: TCP\n    targetPort: 8200\n  - name: internal\n    port: 8201\n    protocol: TCP\n    targetPort: 8201\n  publishNotReadyAddresses: false\n  selector:\n    app.kubernetes.io\/instance: vault\n    app.kubernetes.io\/name: vault\n    component: server\n    vault-active: \"true\"\n  type: ClusterIP\n```\n\nAlso, by setting `publishNotReadyAddresses: false` above, pods that have failed will be removed from the service pool.\n\nWith this active service in place, we now have a dedicated endpoint that will always reach the active node. 
When\nsetting up Vault replication, it can be used as the primary address:\n\n```shell-session\n$ vault write -f sys\/replication\/performance\/primary\/enable \\\n    primary_cluster_addr='https:\/\/vault-active-us-east:8201'\n```\n\n### Example upgrades\n\nIn conjunction with the pod labels and the `OnDelete` upgrade strategy, upgrades are much easier to orchestrate:\n\n```shell-session\n$ helm upgrade vault --set='server.image.tag=1.18.1'\n\n$ kubectl delete pod --selector=vault-active=false,vault-version=1.2.3\n\n$ kubectl delete pod --selector=vault-active=true,vault-version=1.2.3\n```\n\nWhen deleting an instance of a pod, the `StatefulSet` defining the desired state of the cluster will reschedule the\ndeleted pods with the newest image.","site":"vault"}
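The Downward API wiring described above can be sketched as a pod-spec fragment (a sketch only; the surrounding Vault container spec is assumed, and only the `env` section is shown):

```yaml
# Sketch: expose the pod's own namespace and name to the Vault container
# via the Kubernetes Downward API, so the service_registration
# "kubernetes" {} stanza needs no explicit parameters.
env:
  - name: VAULT_K8S_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: VAULT_K8S_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```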
{"questions":"vault default health check page title Consul Service Registration Configuration layout docs Consul Service Registration registers Vault as a service in Consul with a","answers":"---\nlayout: docs\npage_title: Consul - Service Registration - Configuration\ndescription: >-\n  Consul Service Registration registers Vault as a service in Consul with a\n  default\n\n  health check.\n---\n\n# Consul service registration\n\nConsul Service Registration registers Vault as a service in [Consul][consul] with\na default health check. When Consul is configured as the storage backend, the stanza\n`service_registration` is not needed as it will automatically register Vault as a service.\n\n~> **Version information:** The `service_registration` configuration option was introduced in Vault 1.4.0.\n\n@include 'consul-dataplane-compat.mdx'\n\n- **HashiCorp Supported** \u2013 Consul Service Registration is officially supported\n  by HashiCorp.\n\n## Configuration\n\n```hcl\nservice_registration \"consul\" {\n  address      = \"127.0.0.1:8500\"\n}\n```\n\nIf Vault is running in HA mode, include the transfer protocol (`http:\/\/` or\n`https:\/\/`) in the address:\n\n```hcl\nservice_registration \"consul\" {\n  address      = \"http:\/\/127.0.0.1:8500\"\n}\n```\n\nOnce properly configured, an unsealed Vault installation should be available and\naccessible at:\n\n```text\nactive.vault.service.consul\n```\n\nUnsealed Vault instances in standby mode are available at:\n\n```text\nstandby.vault.service.consul\n```\n\nAll unsealed Vault instances are available as healthy at:\n\n```text\nvault.service.consul\n```\n\nSealed Vault instances will mark themselves as unhealthy to avoid being returned\nat Consul's service discovery layer.\n\n## `consul` parameters\n\n- `address` `(string: \"127.0.0.1:8500\")` \u2013 Specifies the address of the Consul\n  agent to communicate with. This can be an IP address, DNS record, or unix\n  socket. 
It is recommended that you communicate with a local Consul agent; do\n  not communicate directly with a server.\n\n- `check_timeout` `(string: \"5s\")` \u2013 Specifies the check interval used to send\n  health check information back to Consul. This is specified using a label\n  suffix like `\"30s\"` or `\"1h\"`.\n\n- `disable_registration` `(string: \"false\")` \u2013 Specifies whether Vault should\n  register itself with Consul.\n\n- `scheme` `(string: \"http\")` \u2013 Specifies the scheme to use when communicating\n  with Consul. This can be set to \"http\" or \"https\". It is highly recommended\n  you communicate with Consul over https over non-local connections. When\n  communicating over a unix socket, this option is ignored.\n\n- `service` `(string: \"vault\")` \u2013 Specifies the name of the service to register\n  in Consul.\n\n- `service_tags` `(string: \"\")` \u2013 Specifies a comma-separated list of case-sensitive\n  tags to attach to the service registration in Consul.\n\n- `service_meta` `(map[string]string: {})` \u2013 Specifies a key-value list of meta tags to\n  attach to the service registration in Consul. See [ServiceMeta](\/consul\/api-docs\/catalog#servicemeta) in the Consul docs for more information.\n\n- `service_address` `(string: nil)` \u2013 Specifies a service-specific address to\n  set on the service registration in Consul. If unset, Vault will use what it\n  knows to be the HA redirect address - which is usually desirable. Setting\n  this parameter to `\"\"` will tell Consul to leverage the configuration of the\n  node the service is registered on dynamically. This could be beneficial if\n  you intend to leverage Consul's\n  [`translate_wan_addrs`][consul-translate-wan-addrs] parameter.\n\n- `token` `(string: \"\")` \u2013 Specifies the [Consul ACL token][consul-acl] with\n  permission to register the Vault service into Consul's service catalog.\n  This is **not** a Vault token. 
See the ACL section below for help.\n\nThe following settings apply when communicating with Consul via an encrypted\nconnection. You can read more about encrypting Consul connections on the\n[Consul encryption page][consul-encryption].\n\n- `tls_ca_file` `(string: \"\")` \u2013 Specifies the path to the CA certificate used\n  for Consul communication. This defaults to system bundle if not specified.\n  This should be set according to the\n  [`ca_file`](\/consul\/docs\/agent\/config\/config-files#ca_file) setting in\n  Consul.\n\n- `tls_cert_file` `(string: \"\")` (optional) \u2013 Specifies the path to the\n  certificate for Consul communication. This should be set according to the\n  [`cert_file`](\/consul\/docs\/agent\/config\/config-files#cert_file) setting\n  in Consul.\n\n- `tls_key_file` `(string: \"\")` \u2013 Specifies the path to the private key for\n  Consul communication. This should be set according to the\n  [`key_file`](\/consul\/docs\/agent\/config\/config-files#key_file) setting\n  in Consul.\n\n- `tls_min_version` `(string: \"tls12\")` \u2013 Specifies the minimum TLS version to\n  use. Accepted values are `\"tls10\"`, `\"tls11\"`, `\"tls12\"` or `\"tls13\"`.\n\n- `tls_skip_verify` `(string: \"false\")` \u2013 Disable verification of TLS certificates.\n  Using this option is highly discouraged.\n\n## ACLs\n\nIf using ACLs in Consul, you'll need appropriate permissions to register the\nVault service. 
The following ACL policy will work for most use-cases, assuming\nthat your service name is `vault`:\n\n```json\n{\n  \"service\": {\n    \"vault\": {\n      \"policy\": \"write\"\n    }\n  }\n}\n```\n\n## `consul` examples\n\n### Local agent\n\nThis example shows a sample configuration which communicates with a local\nConsul agent running on `127.0.0.1:8500`.\n\n```hcl\nservice_registration \"consul\" {}\n```\n\n### Detailed customization\n\nThis example shows communicating with Consul on a custom address with an ACL\ntoken.\n\n```hcl\nservice_registration \"consul\" {\n  address = \"10.5.7.92:8194\"\n  token   = \"abcd1234\"\n}\n```\n\n### Consul via unix socket\n\nThis example shows communicating with Consul over a local unix socket.\n\n```hcl\nservice_registration \"consul\" {\n  address = \"unix:\/\/\/tmp\/.consul.http.sock\"\n}\n```\n\n### Custom TLS\n\nThis example shows using a custom CA, certificate, and key file to securely\ncommunicate with Consul over TLS.\n\n```hcl\nservice_registration \"consul\" {\n  scheme        = \"https\"\n  tls_ca_file   = \"\/etc\/pem\/vault.ca\"\n  tls_cert_file = \"\/etc\/pem\/vault.cert\"\n  tls_key_file  = \"\/etc\/pem\/vault.key\"\n}\n```\n\n[consul]: https:\/\/www.consul.io\/ 'Consul by HashiCorp'\n[consul-acl]: \/consul\/docs\/guides\/acl 'Consul ACLs'\n[consul-encryption]: \/consul\/docs\/agent\/encryption 'Consul Encryption'\n[consul-translate-wan-addrs]: \/consul\/docs\/agent\/options#translate_wan_addrs 'Consul Configuration'","site":"vault"}
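A combined `service_registration "consul"` sketch tying several of the parameters above together (all values hypothetical; the `token` is a Consul ACL token, not a Vault token):

```hcl
service_registration "consul" {
  address      = "127.0.0.1:8500"
  scheme       = "https"
  service      = "vault"
  service_tags = "prod,us-east-1"
  service_meta = {
    "environment" = "prod"
  }
  # Consul ACL token with write permission on the "vault" service.
  token        = "abcd1234"
}
```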
{"questions":"vault port page title TCP Listeners Configuration tcp listener layout docs The TCP listener configures Vault to listen on the specified TCP address and","answers":"---\nlayout: docs\npage_title: TCP - Listeners - Configuration\ndescription: >-\n  The TCP listener configures Vault to listen on the specified TCP address and\n  port.\n---\n\n# `tcp` listener\n\n@include 'alerts\/ipv6-compliance.mdx'\n\nThe TCP listener configures Vault to listen on a TCP address\/port.\n\n```hcl\nlistener \"tcp\" {\n  address = \"127.0.0.1:8200\"\n}\n```\n\nThe `listener` stanza may be specified more than once to make Vault listen on\nmultiple interfaces. If you configure multiple listeners you also need to\nspecify [`api_addr`][api-addr] and [`cluster_addr`][cluster-addr] so Vault will\nadvertise the correct address to other nodes.\n\n## Sensitive data redaction for unauthenticated endpoints\n\nUnauthenticated API endpoints may return the following sensitive information:\n\n* Vault version number\n* Vault binary build date\n* Vault cluster name\n* IP address of nodes in the cluster\n\nVault offers the ability to configure each `tcp` `listener` stanza such that,\nwhen appropriate, these values are redacted from responses.\n\nThe following API endpoints support redaction based on `listener` stanza configuration:\n\n* [`\/sys\/health`](\/vault\/api-docs\/system\/health)\n* [`\/sys\/leader`](\/vault\/api-docs\/system\/leader)\n* [`\/sys\/seal-status`](\/vault\/api-docs\/system\/seal-status)\n\nVault replaces redacted information with an empty string (`\"\"`). Some Vault APIs\nalso omit keys from the response when the corresponding value is empty (`\"\"`).\n\n<Note title=\"Redacting values affects responses to all API clients\">\n  The Vault CLI and UI consume Vault API responses. 
As a result, your redaction\n  settings will apply to CLI and UI output in addition to direct API calls.\n<\/Note>\n\n## Default TLS configuration\n\nBy default, Vault TCP listeners only accept TLS 1.2 or 1.3 connections and will\ndrop connection requests from clients using TLS 1.0 or 1.1.\n\nVault uses the following ciphersuites by default:\n\n- **TLS 1.3** - `TLS_AES_128_GCM_SHA256`, `TLS_AES_256_GCM_SHA384`, or `TLS_CHACHA20_POLY1305_SHA256`.\n- **TLS 1.2** - depends on whether you configure Vault with an RSA or ECDSA certificate.\n\nYou can configure Vault with any cipher supported by the\n[`tls`](https:\/\/pkg.go.dev\/crypto\/tls) and\n[`tlsutil`](https:\/\/github.com\/hashicorp\/go-secure-stdlib\/blob\/main\/tlsutil\/tlsutil.go#L31-L57)\nGo packages. Vault uses the `tlsutil` package to parse ciphersuite configurations.\n\n<Note title=\"Sweet32 and 3DES\">\n\n  The Go team and HashiCorp believe that the set of ciphers supported by `tls`\n  and `tlsutil` is appropriate for modern, secure usage. However, some\n  vulnerability scanners may flag issues with your configuration.\n\n  In particular, Sweet32 (CVE-2016-2183) is an attack against 64-bit block size\n  ciphers including 3DES that may allow an attacker to break the encryption of\n  long lived connections. 
According to the\n  [vulnerability disclosure](https:\/\/sweet32.info\/), Sweet32 took a\n  single HTTPS session with 785 GB of traffic to break the encryption.\n\n  As of May 2024, the Go team does not believe the risk of Sweet32 is sufficient\n  to remove existing client compatibility by deprecating 3DES support, however,\n  the team did [de-prioritize 3DES](https:\/\/github.com\/golang\/go\/issues\/45430)\n  in favor of AES-based ciphers.\n\n<\/Note>\n\nBefore overriding Vault defaults, we recommend reviewing the recommended Go team\n[approach to TLS configuration](https:\/\/go.dev\/blog\/tls-cipher-suites) with\nparticular attention to their ciphersuite selections.\n\n## Listener's custom response headers\n\nAs of version 1.9, Vault supports defining custom HTTP response headers for the root path (`\/`) and also on API endpoints (`\/v1\/*`).\nThe headers are defined based on the returned status code. For example, a user can define a list of\ncustom response headers for the `200` status code, and another list of custom response headers for\nthe `307` status code. There is a `\"\/sys\/config\/ui\"` [API endpoint](\/vault\/api-docs\/system\/config-ui) which allows users\nto set `UI` specific custom headers. If a header is configured in a configuration file, it is not allowed\nto be reconfigured through the `\"\/sys\/config\/ui\"` [API endpoint](\/vault\/api-docs\/system\/config-ui). In cases where a\ncustom header value needs to be modified or the custom header needs to be removed, the Vault's configuration file\nneeds to be modified accordingly, and a `SIGHUP` signal needs to be sent to the Vault process.\n\nIf a header is defined in the configuration file and the same header is used by the internal\nprocesses of Vault, the configured header is not accepted. For example, a custom header which has\nthe `X-Vault-` prefix will not be accepted. 
A message will be logged in the Vault's logs\nupon start up indicating the header with `X-Vault-` prefix is not accepted.\n\n### Order of precedence\n\nIf the same header is configured in both the configuration file and\nin the `\"\/sys\/config\/ui\"` [API endpoint](\/vault\/api-docs\/system\/config-ui), the header in the configuration file takes precedence.\nFor example, the `\"Content-Security-Policy\"` header is defined by default in the\n`\"\/sys\/config\/ui\"` [API endpoint](\/vault\/api-docs\/system\/config-ui). If that header is also defined in the configuration file,\nthe value in the configuration file is set in the response headers instead of the\ndefault value in the `\"\/sys\/config\/ui\"` [API endpoint](\/vault\/api-docs\/system\/config-ui).\n\n## `tcp` listener parameters\n\n- `address` `(string: \"127.0.0.1:8200\")` \u2013 Specifies the address to bind to for\n  listening. This can be dynamically defined with a\n  [go-sockaddr template](https:\/\/pkg.go.dev\/github.com\/hashicorp\/go-sockaddr\/template)\n  that is resolved at runtime.\n\n- `cluster_address` `(string: \"127.0.0.1:8201\")` \u2013 Specifies the address to bind\n  to for cluster server-to-server requests. This defaults to one port higher\n  than the value of `address`. This does not usually need to be set, but can be\n  useful in case Vault servers are isolated from each other in such a way that\n  they need to hop through a TCP load balancer or some other scheme in order to\n  talk. This can be dynamically defined with a\n  [go-sockaddr template](https:\/\/pkg.go.dev\/github.com\/hashicorp\/go-sockaddr\/template)\n  that is resolved at runtime.\n\n- `chroot_namespace` `(string: \"\")` \u2013 Specifies an alternate top-level namespace\n  for the listener. Vault appends namespaces provided in the `X-Vault-Namespace`\n  header or the `-namespace` field in a CLI command to the top-level namespace\n  to determine the full namespace path for the request. 
For example, if\n  `chroot_namespace` is set to `admin` and the `X-Vault-Namespace` header is\n  `ns1`, the full namespace path is `admin\/ns1`. Calls to the listener will fail\n   with a 4XX error if the top-level namespace provided for `chroot_namespace`\n   does not exist.\n\n- `http_idle_timeout` `(string: \"5m\")` - Specifies the maximum amount of time to\n  wait for the next request when keep-alives are enabled. If `http_idle_timeout`\n  is zero, the value of `http_read_timeout` is used. If both are zero, the value\n  of `http_read_header_timeout` is used. This is specified using a label suffix\n  like `\"30s\"` or `\"1h\"`.\n\n- `http_read_header_timeout` `(string: \"10s\")` - Specifies the amount of time\n  allowed to read request headers. This is specified using a label suffix like\n  `\"30s\"` or `\"1h\"`.\n\n- `http_read_timeout` `(string: \"30s\")` - Specifies the maximum duration for\n  reading the entire request, including the body. This is specified using a\n  label suffix like `\"30s\"` or `\"1h\"`.\n\n- `http_write_timeout` `(string: \"0\")` - Specifies the maximum duration before\n  timing out writes of the response and is reset whenever a new request's header\n  is read. The default value of `\"0\"` means infinity. This is specified using a\n  label suffix like `\"30s\"` or `\"1h\"`.\n\n- `max_request_size` `(int: 33554432)` \u2013 Specifies a hard maximum allowed\n  request size, in bytes. Defaults to 32 MB if not set or set to `0`.\n  Specifying a number less than `0` turns off limiting altogether.\n\n- `max_request_duration` `(string: \"90s\")` \u2013 Specifies the maximum\n  request duration allowed before Vault cancels the request. 
This overrides\n  `default_max_request_duration` for this listener.\n\n- `proxy_protocol_behavior` `(string: \"\")` \u2013 When specified, enables a PROXY\n  protocol behavior for the listener (version 1 and 2 are both supported).\n  Accepted Values:\n\n  - _use_always_ - The client's IP address will always be used.\n  - _allow_authorized_ - If the source IP address is in the\n    `proxy_protocol_authorized_addrs` list, the client's IP address will be used.\n    If the source IP is not in the list, the source IP address will be used.\n  - _deny_unauthorized_ - The traffic will be rejected if the source IP\n    address is not in the `proxy_protocol_authorized_addrs` list.\n\n- `proxy_protocol_authorized_addrs` `(string: <required-if-enabled> or array: <required-if-enabled> )` \u2013\n  Specifies the list of allowed source IP addresses to be used with the PROXY protocol.\n  Not required if `proxy_protocol_behavior` is set to `use_always`. Source IPs should\n  be comma-delimited if provided as a string. At least one source IP must be provided,\n  `proxy_protocol_authorized_addrs` cannot be an empty array or string.\n\n- `redact_addresses` `(bool: false)` - Redacts `leader_address` and `cluster_leader_address` values in applicable API responses when `true`.\n\n- `redact_cluster_name` `(bool: false)` - Redacts `cluster_name` values in applicable API responses when `true`.\n\n- `redact_version` `(bool: false)` - Redacts `version` and `build_date` values in applicable API responses when `true`.\n\n- `tls_disable` `(string: \"false\")` \u2013 Specifies if TLS will be disabled. Vault\n  assumes TLS by default, so you must explicitly disable TLS to opt-in to\n  insecure communication. Disabling TLS can **disable** some UI functionality. See\n  the [Browser Support](\/vault\/docs\/browser-support) page for more details.\n\n- `tls_cert_file` `(string: <required-if-enabled>, reloads-on-SIGHUP)` \u2013\n  Specifies the path to the certificate for TLS. 
It requires a PEM-encoded file.\n  To configure the listener to use a CA certificate, concatenate the primary certificate and the CA\n  certificate together. The primary certificate should appear first in the\n  combined file. On `SIGHUP`, the path set here _at Vault startup_ will be used\n  for reloading the certificate; modifying this value while Vault is running\n  will have no effect for `SIGHUP`s.\n\n- `tls_key_file` `(string: <required-if-enabled>, reloads-on-SIGHUP)` \u2013\n  Specifies the path to the private key for the certificate. It requires a PEM-encoded file.\n  If the key file is encrypted, you will be prompted to enter the passphrase on server startup.\n  The passphrase must stay the same between key files when reloading your\n  configuration using `SIGHUP`. On `SIGHUP`, the path set here _at Vault\n  startup_ will be used for reloading the certificate; modifying this value\n  while Vault is running will have no effect for `SIGHUP`s.\n\n- `tls_min_version` `(string: \"tls12\")` \u2013 Specifies the minimum supported\n  version of TLS. Accepted values are \"tls10\", \"tls11\", \"tls12\" or \"tls13\".\n\n  ~> **Warning**: TLS 1.1 and lower (`tls10` and `tls11` values for the\n  `tls_min_version` and `tls_max_version` parameters) are widely considered\n  insecure.\n\n- `tls_max_version` `(string: \"tls13\")` \u2013 Specifies the maximum supported\n  version of TLS. Accepted values are \"tls10\", \"tls11\", \"tls12\" or \"tls13\".\n\n  ~> **Warning**: TLS 1.1 and lower (`tls10` and `tls11` values for the\n  `tls_min_version` and `tls_max_version` parameters) are widely considered\n  insecure.\n\n- `tls_cipher_suites` `(string: \"\")` \u2013 Specifies the list of supported\n  ciphersuites as a comma-separated-list. The list of all available ciphersuites\n  is available in the [Golang TLS documentation][golang-tls].\n\n  ~> **Note**: Go only consults the `tls_cipher_suites` list for TLSv1.2 and\n  earlier; the order of ciphers is not important. 
For this parameter to be
  effective, the `tls_max_version` property must be set to `tls12` to prevent
  negotiation of TLSv1.3, which is not recommended. For more information about
  this and other TLS-related changes, see the [Go TLS blog post][go-tls-blog].

- `tls_prefer_server_cipher_suites` `(string: "false")` – Specifies to prefer the
  server's ciphersuites over the client's ciphersuites.

  ~> **Warning**: The `tls_prefer_server_cipher_suites` parameter is
  deprecated. Setting it has no effect. See the above
  [Go blog post][go-tls-blog] for more information about
  this change.

- `tls_require_and_verify_client_cert` `(string: "false")` – Turns on client
  authentication for this listener; the listener will require a presented
  client cert that successfully validates against system CAs.

- `tls_client_ca_file` `(string: "")` – PEM-encoded Certificate Authority file
  used for checking the authenticity of the client.

- `tls_disable_client_certs` `(string: "false")` – Turns off client
  authentication for this listener. The default behavior (when this is false)
  is for Vault to request client authentication certificates when available.

  ~> **Warning**: The `tls_disable_client_certs` and `tls_require_and_verify_client_cert` fields in the listener stanza of the Vault server configuration are mutually exclusive; ensure they are not both set to true. TLS client verification remains optional with default settings and is not enforced.

- `x_forwarded_for_authorized_addrs` `(string: <required-to-enable>)` –
  Specifies the list of source IP CIDRs for which an X-Forwarded-For header
  will be trusted. Comma-separated list or JSON array. This turns on
  X-Forwarded-For support.
If, for example, Vault receives connections from the
  load balancer's IP of `1.2.3.4`, adding `1.2.3.4` to `x_forwarded_for_authorized_addrs`
  will result in the `remote_address` field in the audit log being populated with the
  connecting client's IP, for example `3.4.5.6`. Note that this requires the load balancer
  to send the connecting client's IP in the `X-Forwarded-For` header.

- `x_forwarded_for_client_cert_header` `(string: "")` –
  Specifies the header that will be used for the client certificate.
  This is required if you use the [TLS Certificates Auth Method](/vault/docs/auth/cert) and your
  Vault server is behind a reverse proxy.

- `x_forwarded_for_client_cert_header_decoders` `(string: "")` –
  Comma-delimited list that specifies the decoders that will be used to decode the client certificate.
  This is required if you use the [TLS Certificates Auth Method](/vault/docs/auth/cert) and your
  Vault server is behind a reverse proxy. The resulting certificate should be in DER format.
  Available Values:

  - BASE64 - Runs Base64 decode
  - DER - Converts a PEM certificate to DER
  - URL - Runs URL decode

  Known Values:

  - Traefik = "BASE64"
  - NGINX   = "URL,DER"

- `x_forwarded_for_hop_skips` `(string: "0")` – The number of addresses that will be
  skipped from the _rear_ of the set of hops.
For instance, for a header value
  of `1.2.3.4, 2.3.4.5, 3.4.5.6, 4.5.6.7`, if this value is set to `"1"`, the address that
  will be used as the originating client IP is `3.4.5.6`.

- `x_forwarded_for_reject_not_authorized` `(string: "true")` – If set to false,
  and there is an X-Forwarded-For header in a connection from an unauthorized
  address, the header will be ignored and the client connection used as-is,
  rather than the client connection being rejected.

- `x_forwarded_for_reject_not_present` `(string: "true")` – If set to false,
  and there is no X-Forwarded-For header or it is empty, the client address will be
  used as-is, rather than the client connection being rejected.

- `disable_replication_status_endpoints` `(bool: false)` - Disables replication
  status endpoints for the configured listener when set to `true`.

### `telemetry` parameters

- `unauthenticated_metrics_access` `(bool: false)` - If set to true, allows
  unauthenticated access to the `/v1/sys/metrics` endpoint.

### `profiling` parameters

- `unauthenticated_pprof_access` `(bool: false)` - If set to true, allows
  unauthenticated access to the `/v1/sys/pprof` endpoint.

### `inflight_requests_logging` parameters

- `unauthenticated_in_flight_requests_access` `(bool: false)` - If set to true, allows
  unauthenticated access to the `/v1/sys/in-flight-req` endpoint.

### `custom_response_headers` parameters

- `default` `(key-value-map: {})` - A map of string header names to an array of
  string values. The default headers are set on all endpoints regardless of
  the status code value. For an example, please refer to the
  "Configuring custom http response headers" section.

- `<specific status code>` `(key-value-map: {})` - A map of string header names
  to an array of string values. These headers are set only when the specific status
  code is returned.
For example, with `"200" = {"Header-A": ["Value1", "Value2"]}`, `"Header-A"`
  is set when the http response status code is `"200"`.

- `<collective status code>` `(key-value-map: {})` - A map of string header names
  to an array of string values. These headers are set only when the response status
  code falls under the collective status code.
  For example, with `"2xx" = {"Header-A": ["Value1", "Value2"]}`, `"Header-A"`
  is set when the http response status code is `"200"`, `"204"`, etc.

## `tcp` listener examples

### Configuring TLS

This example shows enabling a TLS listener.

```hcl
listener "tcp" {
  address = "127.0.0.1:8200"
  tls_cert_file = "/etc/certs/vault.crt"
  tls_key_file  = "/etc/certs/vault.key"
}
```

### Listening on multiple interfaces

This example shows Vault listening on a private interface, as well as localhost.

```hcl
listener "tcp" {
  address = "127.0.0.1:8200"
}

listener "tcp" {
  address = "10.0.0.5:8200"
}

# Advertise the non-loopback interface
api_addr = "https://10.0.0.5:8200"
cluster_addr = "https://10.0.0.5:8201"
```

### Configuring unauthenticated metrics access

This example shows enabling unauthenticated metrics access.

```hcl
listener "tcp" {
  telemetry {
    unauthenticated_metrics_access = true
  }
}
```

### Configuring unauthenticated profiling access

This example shows enabling unauthenticated profiling and in-flight request access.

```hcl
listener "tcp" {
  profiling {
    unauthenticated_pprof_access = true
  }

  inflight_requests_logging {
    unauthenticated_in_flight_requests_access = true
  }
}
```

### Configuring custom http response headers

Note: Requires Vault version 1.9 or newer. This example shows configuring custom http response headers.
Operators can configure the `custom_response_headers` sub-stanza in the listener stanza to set custom http
headers appropriate to their applications.
Examples of such headers are `"Strict-Transport-Security"`
and `"Content-Security-Policy"`, which are well-known HTTP headers and can be configured to harden
the security of an application communicating with the Vault endpoints. Note that vulnerability
scans often examine such security-related HTTP headers. In addition, application-specific
custom headers can also be configured. For example, `"X-Custom-Header"` has been configured
in the example below.

```hcl
listener "tcp" {
  custom_response_headers {
    "default" = {
      "Strict-Transport-Security" = ["max-age=31536000","includeSubDomains"],
      "Content-Security-Policy" = ["connect-src https://clusterA.vault.external/"],
      "X-Custom-Header" = ["Custom Header Default Value"],
    },
    "2xx" = {
      "Content-Security-Policy" = ["connect-src https://clusterB.vault.external/"],
      "X-Custom-Header" = ["Custom Header Value 1", "Custom Header Value 2"],
    },
    "301" = {
      "Strict-Transport-Security" = ["max-age=31536000"],
      "Content-Security-Policy" = ["connect-src https://clusterC.vault.external/"],
    },
  }
}
```

In situations where a header is defined under several status code subsections,
the header matching the most specific response code will be returned.
For example,
with the config example below, a `307` response would return `307 Custom header value`,
while a `306` would return `3xx Custom header value`.

```hcl
listener "tcp" {
  custom_response_headers {
    "default" = {
       "X-Custom-Header" = ["default Custom header value"]
    },
    "3xx" = {
       "X-Custom-Header" = ["3xx Custom header value"]
    },
    "307" = {
       "X-Custom-Header" = ["307 Custom header value"]
    }
  }
}
```

### Listening on all IPv6 & IPv4 interfaces

This example shows Vault listening on all IPv4 & IPv6 interfaces, including localhost.

```hcl
listener "tcp" {
  address         = "[::]:8200"
  cluster_address = "[::]:8201"
}
```

### Listening on a specific IPv6 address

This example shows Vault using only IPv6 and binding to the interface with the IP address `2001:1c04:90d:1c00:a00:27ff:fefa:58ec`.

```hcl
listener "tcp" {
  address         = "[2001:1c04:90d:1c00:a00:27ff:fefa:58ec]:8200"
  cluster_address = "[2001:1c04:90d:1c00:a00:27ff:fefa:58ec]:8201"
}

# Advertise the non-loopback interface
api_addr = "https://[2001:1c04:90d:1c00:a00:27ff:fefa:58ec]:8200"
cluster_addr = "https://[2001:1c04:90d:1c00:a00:27ff:fefa:58ec]:8201"
```

## Redaction examples

See the redaction settings above for details on each redaction setting.

Example configuration for the [`tcp`](/vault/docs/configuration/listener/tcp) listener,
enabling [`redact_addresses`](/vault/docs/configuration/listener/tcp#redact_addresses),
[`redact_cluster_name`](/vault/docs/configuration/listener/tcp#redact_cluster_name) and
[`redact_version`](/vault/docs/configuration/listener/tcp#redact_version).

```hcl
ui            = true
cluster_addr  = "https://127.0.0.1:8201"
api_addr      = "https://127.0.0.1:8200"
disable_mlock = true

storage "raft" {
  path = "/path/to/raft/data"
  node_id = "raft_node_1"
}

listener 
"tcp" {
  address             = "127.0.0.1:8200"
  tls_cert_file       = "/path/to/full-chain.pem"
  tls_key_file        = "/path/to/private-key.pem"
  redact_addresses    = true
  redact_cluster_name = true
  redact_version      = true
}

telemetry {
  statsite_address = "127.0.0.1:8125"
  disable_hostname = true
}
```

### API: `/sys/health`

In the following call to `/sys/health`, notice that `cluster_name` and `version`
are both redacted. The `cluster_name` field is fully omitted from the response
and `version` is the empty string (`""`).

```shell-session
$ curl -s https://127.0.0.1:8200/v1/sys/health | jq

{
  "initialized": true,
  "sealed": false,
  "standby": true,
  "performance_standby": false,
  "replication_performance_mode": "disabled",
  "replication_dr_mode": "disabled",
  "server_time_utc": 1696598650,
  "version": "",
  "cluster_id": "a1a7a078-0ae1-7fb9-41ec-2f4f583c773e"
}
```

### API: `/sys/leader`

In the following call to `/sys/leader`, notice that `leader_address` and `leader_cluster_address`
are both redacted and set to the empty string (`""`).

```shell-session
$ curl -s https://127.0.0.1:8200/v1/sys/leader | jq

{
  "ha_enabled": true,
  "is_self": false,
  "active_time": "0001-01-01T00:00:00Z",
  "leader_address": "",
  "leader_cluster_address": "",
  "performance_standby": false,
  "performance_standby_last_remote_wal": 0,
  "raft_committed_index": 164,
  "raft_applied_index": 164
}
```

### API: `/sys/seal-status`

In the following call to `/sys/seal-status`, notice that `cluster_name`, `build_date`,
and `version` are all redacted.
The `cluster_name` field is fully omitted from
the response while `build_date` and `version` are empty strings (`""`).

```shell-session
$ curl -s https://127.0.0.1:8200/v1/sys/seal-status | jq

{
  "type": "shamir",
  "initialized": true,
  "sealed": false,
  "t": 1,
  "n": 1,
  "progress": 0,
  "nonce": "",
  "version": "",
  "build_date": "",
  "migration": false,
  "cluster_id": "a1a7a078-0ae1-7fb9-41ec-2f4f583c773e",
  "recovery_seal": false,
  "storage_type": "raft"
}
```

### CLI: `vault status`

The CLI command `vault status` uses endpoints that support redacting data, so the
output redacts `Version`, `Build Date`, `HA Cluster`, and `Active Node Address`.
`Version`, `Build Date`, and `HA Cluster` show `n/a` because the underlying endpoint
returned the empty string, and `Active Node Address` shows as `<none>` because
it was omitted from the API response.

```shell-session
$ vault status

Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
Total Shares            5
Threshold               3
Version                 n/a
Build Date              n/a
Storage Type            raft
HA Enabled              true
HA Cluster              n/a
HA Mode                 standby
Active Node Address     <none>
Raft Committed Index    219
Raft Applied Index      219
```

[golang-tls]: https://golang.org/src/crypto/tls/cipher_suites.go
[api-addr]: /vault/docs/configuration#api_addr
[cluster-addr]: /vault/docs/configuration#cluster_addr
[go-tls-blog]: https://go.dev/blog/tls-cipher-suites
Custom header value               307              X Custom Header      307 Custom header value                        Listening on all IPv6   IPv4 interfaces  This example shows Vault listening on all IPv4   IPv6 interfaces including localhost      hcl listener  tcp      address                 8200    cluster address         8201             Listening to specific IPv6 address  This example shows Vault only using IPv6 and binding to the interface with the IP address   2001 1c04 90d 1c00 a00 27ff fefa 58ec      hcl listener  tcp      address             2001 1c04 90d 1c00 a00 27ff fefa 58ec  8200    cluster address     2001 1c04 90d 1c00 a00 27ff fefa 58ec  8201       Advertise the non loopback interface api addr    https    2001 1c04 90d 1c00 a00 27ff fefa 58ec  8200  cluster addr    https    2001 1c04 90d 1c00 a00 27ff fefa 58ec  8201          Redaction examples  Please see redaction settings above  for details on each redaction setting   Example configuration for the   tcp    vault docs configuration listener tcp  listener  enabling   redact addresses    vault docs configuration listener tcp redact addresses     redact cluster name    vault docs configuration listener tcp redact cluster name  and   redact version    vault docs configuration listener tcp redact version       hcl ui              true cluster addr     https   127 0 0 1 8201  api addr         https   127 0 0 1 8200  disable mlock   true  storage  raft      path     path to raft data    node id    raft node 1     listener  tcp      address                127 0 0 1 8200     tls cert file     path to full chain pem    tls key file      path to private key pem    redact addresses       true    redact cluster name    true    redact version         true     telemetry     statsite address    127 0 0 1 8125    disable hostname   true            API    sys health  In the following call to   sys health   notice that  cluster name  and  version  are both redacted  The  cluster name  field is fully omitted from 
the response and  version  is the empty string             shell session   curl  s https   127 0 0 1 8200 v1 sys health   jq         initialized   true     sealed   false     standby   true     performance standby   false     replication performance mode    disabled      replication dr mode    disabled      server time utc   1696598650     version          cluster id    a1a7a078 0ae1 7fb9 41ec 2f4f583c773e              API   sys leader  In the following call to   sys leader   notice that  leader address  and  leader cluster address  are both redacted and set to the empty string             shell session   curl  s https   127 0 0 1 8200 v1 sys leader   jq         ha enabled   true     is self   false     active time    0001 01 01T00 00 00Z      leader address          leader cluster address          performance standby   false     performance standby last remote wal   0     raft committed index   164     raft applied index   164             API   sys seal status   In the following call to   sys seal status   notice that  cluster name    build date   and  version  are all redacted  The  cluster name  field is fully omitted from the response while  build date  and  version  are empty strings             shell session   curl  s https   127 0 0 1 8200 v1 sys seal status   jq         type    shamir      initialized   true     sealed   false     t   1     n   1     progress   0     nonce          version          build date          migration   false     cluster id    a1a7a078 0ae1 7fb9 41ec 2f4f583c773e      recovery seal   false     storage type    raft             CLI   vault status   The CLI command  vault status  uses endpoints that support redacting data  so the output redacts  Version    Build Date    HA Cluster   and  Active Node Address    Version    Build Date    HA Cluster  show  n a  because the underlying endpoint returned the empty string  and   Active Node Address  shows as   none   because it was omitted from the API response      shell session    vault 
status  Key                     Value                               Seal Type               shamir Initialized             true Sealed                  false Total Shares            5 Threshold               3 Version                 n a Build Date              n a Storage Type            raft HA Enabled              true HA Cluster              n a HA Mode                 standby Active Node Address      none  Raft Committed Index    219 Raft Applied Index      219       golang tls   https   golang org src crypto tls cipher suites go  api addr    vault docs configuration api addr  cluster addr    vault docs configuration cluster addr  go tls blog   https   go dev blog tls cipher suites"}
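The custom-response-header precedence described in the record above (an exact status code such as `"307"` beats a status-class entry such as `"3xx"`, which beats `"default"`) can be sketched as follows. This is a hypothetical helper for illustration, not Vault source code; the function name `resolve_headers` and the merge-by-specificity approach are assumptions modeling the documented behavior:

```python
# Sketch (not Vault code): pick which custom_response_headers entries apply
# to a response status code, per the documented precedence:
#   exact code ("307")  >  status class ("3xx")  >  "default".
def resolve_headers(custom_response_headers: dict, status: int) -> dict:
    exact = str(status)                  # e.g. "307"
    status_class = f"{status // 100}xx"  # e.g. "3xx"
    merged: dict = {}
    # Apply least-specific first so more-specific entries overwrite.
    for key in ("default", status_class, exact):
        merged.update(custom_response_headers.get(key, {}))
    return merged

config = {
    "default": {"X-Custom-Header": ["default Custom header value"]},
    "3xx":     {"X-Custom-Header": ["3xx Custom header value"]},
    "307":     {"X-Custom-Header": ["307 Custom header value"]},
}
print(resolve_headers(config, 307))  # {'X-Custom-Header': ['307 Custom header value']}
print(resolve_headers(config, 306))  # {'X-Custom-Header': ['3xx Custom header value']}
```

A `306` response falls back to the `3xx` entry, and any status with no matching exact or class entry (for example `200` here) receives the `default` headers.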
{"questions":"vault Example TCP listener configuration with TLS encryption You can configure your TCP listener to use specific versions of TLS and specific page title Configure TLS for your Vault TCP listener Configure TLS for your Vault TCP listener layout docs","answers":"---\nlayout: docs\npage_title: Configure TLS for your Vault TCP listener\ndescription: >-\n  Example TCP listener configuration with TLS encryption.\n---\n\n# Configure TLS for your Vault TCP listener\n\nYou can configure your TCP listener to use specific versions of TLS and specific\nciphersuites.\n\n## Assumptions\n\n- **Your Vault instance is not currently running**. If your Vault cluster is\n  running, you must\n  [restart the cluster gracefully](https:\/\/support.hashicorp.com\/hc\/en-us\/articles\/17169701076371-A-Step-by-Step-Guide-to-Restarting-a-Vault-Cluster)\n  to apply changes to your TCP listener. SIGHUP will not reload your TLS\n  configuration.\n- **You have a valid TLS certificate file**.\n- **You have a valid TLS key file**.\n- **You have a valid CA file (if required)**.\n\n## Example TLS 1.3 configuration\n\nIf a reasonably modern set of clients are connecting to a Vault instance, you\ncan configure the `tcp` listener stanza to only accept TLS 1.3 with the\n`tls_min_version` parameter:\n\n<CodeBlockConfig hideClipboard highlight=\"5\">\n\n```plaintext\nlistener \"tcp\" {\n  address = \"127.0.0.1:8200\"\n  tls_cert_file = \"cert.pem\"\n  tls_key_file  = \"key.pem\"\n  tls_min_version = \"tls13\"\n}\n```\n\n<\/CodeBlockConfig>\n\nVault does not accept explicit ciphersuite configuration for TLS 1.3 because the\nGo team has already designated a select set of ciphers that align with the\nbroadly-accepted Mozilla Security\/Server Side TLS guidance for [modern TLS\nconfiguration](https:\/\/wiki.mozilla.org\/Security\/Server_Side_TLS#Modern_compatibility).\n\n## Example TLS 1.2 configuration\n\nTo use TLS 1.2 with a non-default set of ciphersuites, you can set 1.2 as the\nminimum and 
maximum allowed TLS version and explicitly define your preferred\nciphersuites with `tls_cipher_suites` and one or more of the ciphersuite\nconstants from the ciphersuite configuration parser. For example:\n\n<CodeBlockConfig hideClipboard highlight=\"5-7\">\n\n```plaintext\nlistener \"tcp\" {\n  address = \"127.0.0.1:8200\"\n  tls_cert_file = \"cert.pem\"\n  tls_key_file  = \"key.pem\"\n  tls_min_version = \"tls12\"\n  tls_max_version = \"tls12\"\n  tls_cipher_suites = \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\"\n}\n```\n\n<\/CodeBlockConfig>\n\nYou must set the minimum and maximum TLS version to disable TLS 1.3, which does\nnot support explicit cipher selection. The priority order of the ciphersuites\nin `tls_cipher_suites` is determined by the `tls` Go package.\n\n<Note>\n\n  The TLS 1.2 configuration example excludes any 3DES ciphers to avoid potential\n  exposure to the Sweet32 attack (CVE-2016-2183). You should customize the\n  ciphersuite list as needed to meet your environment-specific security\n  requirements.\n\n<\/Note>\n\n## Verify your TLS configuration\n\nYou can verify your TLS configuration using an SSL scanner such as\n[`sslscan`](https:\/\/github.com\/rbsec\/sslscan). 
\n\n<Tabs>\n<Tab heading=\"Example scan with ECDSA certificate\">\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ sslscan 127.0.0.1:8200\nVersion: 2.1.3\nOpenSSL 3.2.1 30 Jan 2024\n\nConnected to 127.0.0.1\n\nTesting SSL server 127.0.0.1 on port 8200 using SNI name 127.0.0.1\n\n  SSL\/TLS Protocols:\nSSLv2     disabled\nSSLv3     disabled\nTLSv1.0   disabled\nTLSv1.1   disabled\nTLSv1.2   enabled\nTLSv1.3   enabled\n\n  TLS Fallback SCSV:\nServer supports TLS Fallback SCSV\n\n  TLS renegotiation:\nSession renegotiation not supported\n\n  TLS Compression:\nCompression disabled\n\n  Heartbleed:\nTLSv1.3 not vulnerable to heartbleed\nTLSv1.2 not vulnerable to heartbleed\n\n  Supported Server Cipher(s):\nPreferred TLSv1.3  128 bits  TLS_AES_128_GCM_SHA256        Curve 25519 DHE 253\nAccepted  TLSv1.3  256 bits  TLS_AES_256_GCM_SHA384        Curve 25519 DHE 253\nAccepted  TLSv1.3  256 bits  TLS_CHACHA20_POLY1305_SHA256  Curve 25519 DHE 253\nPreferred TLSv1.2  128 bits  ECDHE-ECDSA-AES128-GCM-SHA256 Curve 25519 DHE 253\nAccepted  TLSv1.2  256 bits  ECDHE-ECDSA-AES256-GCM-SHA384 Curve 25519 DHE 253\nAccepted  TLSv1.2  256 bits  ECDHE-ECDSA-CHACHA20-POLY1305 Curve 25519 DHE 253\nAccepted  TLSv1.2  128 bits  ECDHE-ECDSA-AES128-SHA        Curve 25519 DHE 253\nAccepted  TLSv1.2  256 bits  ECDHE-ECDSA-AES256-SHA        Curve 25519 DHE 253\n\n  Server Key Exchange Group(s):\nTLSv1.3  128 bits  secp256r1 (NIST P-256)\nTLSv1.3  192 bits  secp384r1 (NIST P-384)\nTLSv1.3  260 bits  secp521r1 (NIST P-521)\nTLSv1.3  128 bits  x25519\nTLSv1.2  128 bits  secp256r1 (NIST P-256)\nTLSv1.2  192 bits  secp384r1 (NIST P-384)\nTLSv1.2  260 bits  secp521r1 (NIST P-521)\nTLSv1.2  128 bits  x25519\n\n  SSL Certificate:\nSignature Algorithm: ecdsa-with-SHA256\nECC Curve Name:      prime256v1\nECC Key Strength:    128\n\nSubject:  localhost\nIssuer:   localhost\n\nNot valid before: May 17 17:27:29 2024 GMT\nNot valid after:  Jun 16 17:27:29 2024 
GMT\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n<Tab heading=\"Example scan with RSA certificate\">\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\nsslscan 127.0.0.1:8200\nTesting SSL server 127.0.0.1 on port 8200 using SNI name 127.0.0.1\n\n  SSL\/TLS Protocols:\nSSLv2     disabled\nSSLv3     disabled\nTLSv1.0   disabled\nTLSv1.1   disabled\nTLSv1.2   enabled\nTLSv1.3   enabled\n\n  Supported Server Cipher(s):\nPreferred TLSv1.3  128 bits  TLS_AES_128_GCM_SHA256        Curve 25519 DHE 253\nAccepted  TLSv1.3  256 bits  TLS_AES_256_GCM_SHA384        Curve 25519 DHE 253\nAccepted  TLSv1.3  256 bits  TLS_CHACHA20_POLY1305_SHA256  Curve 25519 DHE 253\nPreferred TLSv1.2  128 bits  ECDHE-RSA-AES128-GCM-SHA256   Curve 25519 DHE 253\nAccepted  TLSv1.2  256 bits  ECDHE-RSA-AES256-GCM-SHA384   Curve 25519 DHE 253\nAccepted  TLSv1.2  256 bits  ECDHE-RSA-CHACHA20-POLY1305   Curve 25519 DHE 253\nAccepted  TLSv1.2  128 bits  ECDHE-RSA-AES128-SHA          Curve 25519 DHE 253\nAccepted  TLSv1.2  256 bits  ECDHE-RSA-AES256-SHA          Curve 25519 DHE 253\nAccepted  TLSv1.2  128 bits  AES128-GCM-SHA256\nAccepted  TLSv1.2  256 bits  AES256-GCM-SHA384\nAccepted  TLSv1.2  128 bits  AES128-SHA\nAccepted  TLSv1.2  256 bits  AES256-SHA\nAccepted  TLSv1.2  112 bits  TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA\nAccepted  TLSv1.2  112 bits  TLS_RSA_WITH_3DES_EDE_CBC_SHA\n\n  Server Key Exchange Group(s):\nTLSv1.3  128 bits  secp256r1 (NIST P-256)\nTLSv1.3  192 bits  secp384r1 (NIST P-384)\nTLSv1.3  260 bits  secp521r1 (NIST P-521)\nTLSv1.3  128 bits  x25519\nTLSv1.2  128 bits  secp256r1 (NIST P-256)\nTLSv1.2  192 bits  secp384r1 (NIST P-384)\nTLSv1.2  260 bits  secp521r1 (NIST P-521)\nTLSv1.2  128 bits  x25519\n\n  SSL Certificate:\nSignature Algorithm: sha256WithRSAEncryption\nRSA Key Strength:    4096\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Configure TLS for your Vault TCP listener description       Example TCP listener 
configuration with TLS encryption         Configure TLS for your Vault TCP listener  You can configure your TCP listener to use specific versions of TLS and specific ciphersuites      Assumptions      Your Vault instance is not currently running    If your Vault cluster is   running  you must    restart the cluster gracefully  https   support hashicorp com hc en us articles 17169701076371 A Step by Step Guide to Restarting a Vault Cluster    to apply changes to your TCP listener  SIGHUP will not reload your TLS   configuration      You have a valid TLS certificate file        You have a valid TLS key file        You have a valid CA file  if required         Example TLS 1 3 configuration  If a reasonably modern set of clients are connecting to a Vault instance  you can configure the  tcp  listener stanza to only accept TLS 1 3 with the  tls min version  parameter    CodeBlockConfig hideClipboard highlight  5       plaintext listener  tcp      address    127 0 0 1 8200    tls cert file    cert pem    tls key file     key pem    tls min version    tls13           CodeBlockConfig   Vault does not accept explicit ciphersuite configuration for TLS 1 3 because the Go team has already designated a select set of ciphers that align with the broadly accepted Mozilla Security Server Side TLS guidance for  modern TLS configuration  https   wiki mozilla org Security Server Side TLS Modern compatibility       Example TLS 1 2 configuration  To use TLS 1 2 with a non default set of ciphersuites  you can set 1 2 as the minimum and maximum allowed TLS version and explicitly define your preferred ciphersuites with  tls ciper suites  and one or more of the ciphersuite constants from the ciphersuite configuration parser  For example    CodeBlockConfig hideClipboard highlight  5 7       plaintext listener  tcp      address    127 0 0 1 8200    tls cert file    cert pem    tls key file     key pem    tls min version    tls12    tls max version    tls12    tls cipher suites    TLS ECDHE 
ECDSA WITH AES 128 GCM SHA256 TLS ECDHE RSA WITH AES 128 GCM SHA256 TLS ECDHE ECDSA WITH AES 256 GCM SHA384 TLS ECDHE RSA WITH AES 256 GCM SHA384 TLS ECDHE ECDSA WITH CHACHA20 POLY1305 SHA256 TLS ECDHE RSA WITH CHACHA20 POLY1305 SHA256           CodeBlockConfig   You must set the minimum and maximum TLS version to disable TLS 1 3  which does not support explicit cipher selection  The priority order of the ciphersuites in  tls cipher suites  is determined by the  tls  Go package    Note     The TLS 1 2 configuration example excludes any 3DES ciphers to avoid potential   exposure to the Sweet32 attack  CVE 2016 2183   You should customize the   ciphersuite list as needed to meet your environment specific security   requirements     Note      Verify your TLS configuration  You can verify your TLS configuration using an SSL scanner such as   sslscan   https   github com rbsec sslscan      Tabs   Tab heading  Example scan with ECDSA certificate     CodeBlockConfig hideClipboard      shell session   sslscan 127 0 0 1 8200 Version  2 1 3 OpenSSL 3 2 1 30 Jan 2024  Connected to 127 0 0 1  Testing SSL server 127 0 0 1 on port 8200 using SNI name 127 0 0 1    SSL TLS Protocols  SSLv2     disabled SSLv3     disabled TLSv1 0   disabled TLSv1 1   disabled TLSv1 2   enabled TLSv1 3   enabled    TLS Fallback SCSV  Server supports TLS Fallback SCSV    TLS renegotiation  Session renegotiation not supported    TLS Compression  Compression disabled    Heartbleed  TLSv1 3 not vulnerable to heartbleed TLSv1 2 not vulnerable to heartbleed    Supported Server Cipher s   Preferred TLSv1 3  128 bits  TLS AES 128 GCM SHA256        Curve 25519 DHE 253 Accepted  TLSv1 3  256 bits  TLS AES 256 GCM SHA384        Curve 25519 DHE 253 Accepted  TLSv1 3  256 bits  TLS CHACHA20 POLY1305 SHA256  Curve 25519 DHE 253 Preferred TLSv1 2  128 bits  ECDHE ECDSA AES128 GCM SHA256 Curve 25519 DHE 253 Accepted  TLSv1 2  256 bits  ECDHE ECDSA AES256 GCM SHA384 Curve 25519 DHE 253 Accepted  TLSv1 2  256 bits  
ECDHE ECDSA CHACHA20 POLY1305 Curve 25519 DHE 253 Accepted  TLSv1 2  128 bits  ECDHE ECDSA AES128 SHA        Curve 25519 DHE 253 Accepted  TLSv1 2  256 bits  ECDHE ECDSA AES256 SHA        Curve 25519 DHE 253    Server Key Exchange Group s   TLSv1 3  128 bits  secp256r1  NIST P 256  TLSv1 3  192 bits  secp384r1  NIST P 384  TLSv1 3  260 bits  secp521r1  NIST P 521  TLSv1 3  128 bits  x25519 TLSv1 2  128 bits  secp256r1  NIST P 256  TLSv1 2  192 bits  secp384r1  NIST P 384  TLSv1 2  260 bits  secp521r1  NIST P 521  TLSv1 2  128 bits  x25519    SSL Certificate  Signature Algorithm  ecdsa with SHA256 ECC Curve Name       prime256v1 ECC Key Strength     128  Subject   localhost Issuer    localhost  Not valid before  May 17 17 27 29 2024 GMT Not valid after   Jun 16 17 27 29 2024 GMT        CodeBlockConfig     Tab   Tab heading  Example scan with RSA certificate     CodeBlockConfig hideClipboard      shell session sslscan 127 0 0 1 8200 Testing SSL server 127 0 0 1 on port 8200 using SNI name 127 0 0 1    SSL TLS Protocols  SSLv2     disabled SSLv3     disabled TLSv1 0   disabled TLSv1 1   disabled TLSv1 2   enabled TLSv1 3   enabled    Supported Server Cipher s   Preferred TLSv1 3  128 bits  TLS AES 128 GCM SHA256        Curve 25519 DHE 253 Accepted  TLSv1 3  256 bits  TLS AES 256 GCM SHA384        Curve 25519 DHE 253 Accepted  TLSv1 3  256 bits  TLS CHACHA20 POLY1305 SHA256  Curve 25519 DHE 253 Preferred TLSv1 2  128 bits  ECDHE RSA AES128 GCM SHA256   Curve 25519 DHE 253 Accepted  TLSv1 2  256 bits  ECDHE RSA AES256 GCM SHA384   Curve 25519 DHE 253 Accepted  TLSv1 2  256 bits  ECDHE RSA CHACHA20 POLY1305   Curve 25519 DHE 253 Accepted  TLSv1 2  128 bits  ECDHE RSA AES128 SHA          Curve 25519 DHE 253 Accepted  TLSv1 2  256 bits  ECDHE RSA AES256 SHA          Curve 25519 DHE 253 Accepted  TLSv1 2  128 bits  AES128 GCM SHA256 Accepted  TLSv1 2  256 bits  AES256 GCM SHA384 Accepted  TLSv1 2  128 bits  AES128 SHA Accepted  TLSv1 2  256 bits  AES256 SHA Accepted  TLSv1 
2  112 bits  TLS ECDHE RSA WITH 3DES EDE CBC SHA Accepted  TLSv1 2  112 bits  TLS RSA WITH 3DES EDE CBC SHA    Server Key Exchange Group s   TLSv1 3  128 bits  secp256r1  NIST P 256  TLSv1 3  192 bits  secp384r1  NIST P 384  TLSv1 3  260 bits  secp521r1  NIST P 521  TLSv1 3  128 bits  x25519 TLSv1 2  128 bits  secp256r1  NIST P 256  TLSv1 2  192 bits  secp384r1  NIST P 384  TLSv1 2  260 bits  secp521r1  NIST P 521  TLSv1 2  128 bits  x25519    SSL Certificate  Signature Algorithm  sha256WithRSAEncryption RSA Key Strength     4096        CodeBlockConfig     Tab    Tabs "}
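A client connecting to a listener pinned this way must negotiate within the same bounds. As a purely illustrative client-side analogue of the record's `tls_min_version` / `tls_max_version` pinning (this is Python's standard `ssl` module, not Vault code, and the cipher string is an assumed example subset), the same TLS 1.2-only policy with explicit ciphersuites might look like:

```python
# Client-side analogue (not Vault code) of pinning TLS to exactly 1.2,
# mirroring tls_min_version = tls_max_version = "tls12" on the listener.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

# With TLS 1.3 excluded, explicit cipher selection applies; TLS 1.3 suites
# are fixed by the library, which is why the doc pins both version bounds.
ctx.set_ciphers("ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256")

print(ctx.minimum_version == ctx.maximum_version == ssl.TLSVersion.TLSv1_2)  # True
```

Wrapping a socket with this context against a listener that only offers TLS 1.3, or only other ciphersuites, would fail the handshake, which makes it a quick sanity check alongside `sslscan`.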
{"questions":"vault wrapping mechanism layout docs page title PKCS11 Seals Configuration The PKCS11 seal configures Vault to use an HSM with PKCS11 as the seal pkcs11 seal","answers":"---\nlayout: docs\npage_title: PKCS11 - Seals - Configuration\ndescription: |-\n  The PKCS11 seal configures Vault to use an HSM with PKCS11 as the seal\n  wrapping mechanism.\n---\n\n# `pkcs11` seal\n\n\n<Note title=\"Auto-unseal and seal wrapping requires Vault Enterprise\">\n\n  Auto-unseal **and** seal wrapping for PKCS11 require Vault Enterprise.\n  \n  Vault Enterprise enables seal wrapping by default, which means the KMS service\n  must be available at runtime and not just during the unseal process. Refer to\n  the [Seal wrap](\/vault\/docs\/enterprise\/sealwrap) overview for more\n  information.\n\n<\/Note>\n\nThe PKCS11 seal configures Vault to use an HSM with PKCS11 as the seal wrapping\nmechanism. Vault Enterprise's HSM PKCS11 support is activated by one of the\nfollowing:\n\n- The presence of a `seal \"pkcs11\"` block in Vault's configuration file\n- The presence of the environment variable `VAULT_HSM_LIB` set to the library's\n  path as well as `VAULT_SEAL_TYPE` set to `pkcs11`. If enabling via environment\n  variable, all other required values (i.e. `VAULT_HSM_SLOT`) must be also\n  supplied.\n\n**IMPORTANT**: Having Vault generate its own key is the easiest way to get up\nand running, but for security, Vault marks the key as non-exportable. If your\nHSM key backup strategy requires the key to be exportable, you should generate\nthe key yourself. The list of creation attributes that Vault uses to generate\nthe key are listed at the end of this document.\n\n## Requirements\n\nThe following software packages are required for Vault Enterprise HSM:\n\n- PKCS#11 compatible HSM integration library. Vault targets version 2.2 or\n  higher of PKCS#11. 
Depending on any given HSM, some functions (such as key\n  generation) may have to be performed manually.\n- The [GNU libltdl\n  library](https:\/\/www.gnu.org\/software\/libtool\/manual\/html_node\/Using-libltdl.html)\n  \u2014 ensure that it is installed for the correct architecture of your servers\n\n## `pkcs11` example\n\nThis example shows configuring HSM PKCS11 seal through the Vault configuration\nfile by providing all the required values:\n\n```hcl\nseal \"pkcs11\" {\n  lib            = \"\/usr\/vault\/lib\/libCryptoki2_64.so\"\n  slot           = \"2305843009213693953\"\n  pin            = \"AAAA-BBBB-CCCC-DDDD\"\n  key_label      = \"vault-hsm-key\"\n  hmac_key_label = \"vault-hsm-hmac-key\"\n}\n```\n\n## `pkcs11` parameters\n\nThese parameters apply to the `seal` stanza in the Vault configuration file:\n\n- `lib` `(string: <required>)`: The path to the PKCS#11 library shared object\n  file. May also be specified by the `VAULT_HSM_LIB` environment variable.\n\n  ~> **Note:** Depending on your HSM, the value of the `lib` parameter may be\n  either a binary or a dynamic library, and its use may require other libraries\n  depending on which system the Vault binary is currently running on (e.g.: a\n  Linux system may require other libraries to interpret Windows .dll files).\n\n- `slot` `(string: <slot or token label required>)`: The slot number to use,\n  specified as a string (e.g. `\"2305843009213693953\"`). May also be specified by\n  the `VAULT_HSM_SLOT` environment variable.\n\n  ~> **Note**: Slots are typically listed as hexadecimal values in the OS setup\n  utility but this configuration uses their decimal equivalent. For example, using the\n  HSM command-line `pkcs11-tool`, a slot listed as `0x2000000000000001` in hex is equal\n  to `2305843009213693953` in decimal; these values may be listed shorter or\n  differently as determined by the HSM in use.\n\n- `token_label` `(string: <slot or token label required>)`: The slot token label to\n  use. 
May also be specified by the `VAULT_HSM_TOKEN_LABEL` environment variable.\n\n- `pin` `(string: <required>)`: The PIN for login. May also be specified by the\n  `VAULT_HSM_PIN` environment variable. _If set via the environment variable,\n  it will need to be re-set if Vault is restarted._\n\n- `key_label` `(string: <required>)`: The label of the key to use. If the key\n  does not exist and generation is enabled, this is the label that will be given\n  to the generated key. May also be specified by the `VAULT_HSM_KEY_LABEL`\n  environment variable.\n\n- `default_key_label` `(string: \"\")`: This is the default key label for decryption\n  operations. Prior to 0.10.1, key labels were not stored with the ciphertext.\n  Seal entries now track the label used in encryption operations. The default value\n  for this field is the `key_label`. If `key_label` is rotated and this value is not\n  set, decryption may fail. May also be specified by the `VAULT_HSM_DEFAULT_KEY_LABEL`\n  environment variable. This value is ignored in new installations.\n\n- `key_id` `(string: \"\")`: The ID of the key to use. The value should be a hexadecimal\n  string (e.g., \"0x33333435363434373537\"). May also be specified by the\n  `VAULT_HSM_KEY_ID` environment variable.\n\n- `hmac_key_label` `(string: <required>)`: The label of the key to use for\n  HMACing. This needs to be a suitable type. If Vault tries to create this it\n  will attempt to use CKK_GENERIC_SECRET_KEY. If the key does not exist and\n  generation is enabled, this is the label that will be given to the generated\n  key. May also be specified by the `VAULT_HSM_HMAC_KEY_LABEL` environment\n  variable.\n\n- `default_hmac_key_label` `(string: \"\")`: This is the default HMAC key label for signing\n  operations. Prior to 0.10.1, HMAC key labels were not stored with the signature.\n  Seal entries now track the label used in signing operations. The default value\n  for this field is the `hmac_key_label`. 
If `hmac_key_label` is rotated and this\n  value is not set, signature verification may fail. May also be specified by the\n  `VAULT_HSM_HMAC_DEFAULT_KEY_LABEL` environment variable. This value is ignored in\n  new installations.\n\n- `hmac_key_id` `(string: \"\")`: The ID of the HMAC key to use. The value should be a\n  hexadecimal string (e.g., \"0x33333435363434373537\"). May also be specified by the\n  `VAULT_HSM_HMAC_KEY_ID` environment variable.\n\n- `mechanism` `(string: <best available>)`: The encryption\/decryption mechanism to use,\n  specified as a decimal or hexadecimal (prefixed by `0x`) string. May also be\n  specified by the `VAULT_HSM_MECHANISM` environment variable.\n  Currently supported mechanisms (in order of precedence):\n\n  - `0x1085` `CKM_AES_CBC_PAD` (HMAC mechanism required)\n  - `0x1082` `CKM_AES_CBC` (HMAC mechanism required)\n  - `0x1087` `CKM_AES_GCM`\n  - `0x0009` `CKM_RSA_PKCS_OAEP`\n  - `0x0001` `CKM_RSA_PKCS`\n\n  ~> **Warning**: CKM_RSA_PKCS specifies the PKCS #1 v1.5 padding scheme, which is\n  considered less secure than OAEP. Where possible, use of CKM_RSA_PKCS_OAEP is\n  recommended over CKM_RSA_PKCS.\n\n- `hmac_mechanism` `(string: \"0x0251\")`: The encryption\/decryption mechanism to\n  use, specified as a decimal or hexadecimal (prefixed by `0x`) string.\n  Currently only `0x0251` (corresponding to `CKM_SHA256_HMAC` from the\n  specification) is supported. May also be specified by the\n  `VAULT_HSM_HMAC_MECHANISM` environment variable. This value is only required\n  for specific mechanisms.\n\n- `generate_key` `(string: \"false\")`: If no existing key with the label\n  specified by `key_label` can be found at Vault initialization time, instructs\n  Vault to generate a key. This is a boolean expressed as a string (e.g.\n  `\"true\"`). May also be specified by the `VAULT_HSM_GENERATE_KEY` environment\n  variable. 
Vault may not be able to successfully generate keys in all\n  circumstances, such as if proprietary vendor extensions are required to\n  create keys of a suitable type.\n\n  ~> **NOTE**: Once the initial key creation has occurred post cluster\n  initialization, it is advisable to disable this flag to prevent any\n  unintended key creation in the future.\n\n- `force_rw_session` `(string: \"false\")`: Force all operations to open up\n  a read-write session to the HSM. This is a boolean expressed as a string (e.g.\n  `\"true\"`). May also be specified by the `VAULT_HSM_FORCE_RW_SESSION` environment\n  variable. This key is mainly to work around a limitation within AWS's CloudHSM v5\n  pkcs11 implementation.\n\n- `max_parallel` `(int: 1)` - The number of concurrent requests that may be\n  in flight to the HSM at any given time.\n\n- `disabled` `(string: \"\")`: Set this to `true` if Vault is migrating from an auto seal configuration. Otherwise, set to `false`.\n\nRefer to the [Seal Migration](\/vault\/docs\/concepts\/seal#seal-migration) documentation for more information about the seal migration process.\n  \n### Mechanism specific flags\n\n- `rsa_encrypt_local` `(string: \"false\")`: For HSMs that do not support encryption\n  for RSA keys, perform encryption locally. Available for mechanisms\n  `CKM_RSA_PKCS_OAEP` and `CKM_RSA_PKCS`. May also be specified by the\n  `VAULT_HSM_RSA_ENCRYPT_LOCAL` environment variable.\n\n- `rsa_oaep_hash` `(string: \"sha256\")`: Specify the hash algorithm to use for RSA\n  with OAEP padding. Valid values are sha1, sha224, sha256, sha384, and sha512.\n  Available for mechanism `CKM_RSA_PKCS_OAEP`. 
May also be specified by the\n  `VAULT_HSM_RSA_OAEP_HASH` environment variable.\n\n~> **Note:** Although the configuration file allows you to pass in\n`VAULT_HSM_PIN` as part of the seal's parameters, it is _strongly_ recommended\nto set this value via environment variables.\n\n## `pkcs11` environment variables\n\nAlternatively, the HSM seal can be activated by providing the following\nenvironment variables:\n\n```text\nVAULT_SEAL_TYPE\nVAULT_HSM_LIB\nVAULT_HSM_SLOT\nVAULT_HSM_TOKEN_LABEL\nVAULT_HSM_PIN\nVAULT_HSM_KEY_LABEL\nVAULT_HSM_DEFAULT_KEY_LABEL\nVAULT_HSM_KEY_ID\nVAULT_HSM_HMAC_KEY_LABEL\nVAULT_HSM_HMAC_DEFAULT_KEY_LABEL\nVAULT_HSM_HMAC_KEY_ID\nVAULT_HSM_MECHANISM\nVAULT_HSM_HMAC_MECHANISM\nVAULT_HSM_GENERATE_KEY\nVAULT_HSM_RSA_ENCRYPT_LOCAL\nVAULT_HSM_RSA_OAEP_HASH\nVAULT_HSM_FORCE_RW_SESSION\n```\n\n## Vault key generation attributes\n\nIf Vault generates the HSM key for you, the following is the list of attributes\nit uses. These identifiers correspond to official PKCS#11 identifiers.\n\n### AES key\n\n- `CKA_CLASS`: `CKO_SECRET_KEY` (It's a secret key)\n- `CKA_KEY_TYPE`: `CKK_AES` (Key type is AES)\n- `CKA_VALUE_LEN`: `32` (Key size is 256 bits)\n- `CKA_LABEL`: Set to the key label set in Vault's configuration\n- `CKA_ID`: Set to a random 32-bit unsigned integer\n- `CKA_PRIVATE`: `true` (Key is private to this slot\/token)\n- `CKA_TOKEN`: `true` (Key persists to the slot\/token rather than being for one\n  session only)\n- `CKA_SENSITIVE`: `true` (Key is a sensitive value)\n- `CKA_ENCRYPT`: `true` (Key can be used for encryption)\n- `CKA_DECRYPT`: `true` (Key can be used for decryption)\n- `CKA_WRAP`: `true` (Key can be used for wrapping)\n- `CKA_UNWRAP`: `true` (Key can be used for unwrapping)\n- `CKA_EXTRACTABLE`: `false` (Key cannot be exported)\n\n### RSA key\n\n_Public Key_\n\n- `CKA_CLASS`: `CKO_PUBLIC_KEY` (It's a public key)\n- `CKA_KEY_TYPE`: `CKK_RSA` (Key type is RSA)\n- `CKA_LABEL`: Set to the key label set in Vault's configuration\n- 
`CKA_ID`: Set to a random 32-bit unsigned integer\n- `CKA_ENCRYPT`: `true` (Key can be used for encryption)\n- `CKA_WRAP`: `true` (Key can be used for wrapping)\n- `CKA_MODULUS_BITS`: `2048` (Key size is 2048 bits)\n- `CKA_PUBLIC_EXPONENT`: `0x10001` (Public exponent of 65537)\n- `CKA_TOKEN`: `true` (Key persists to the slot\/token rather than being for one\n  session only)\n\n_Private Key_\n\n- `CKA_CLASS`: `CKO_PRIVATE_KEY` (It's a private key)\n- `CKA_KEY_TYPE`: `CKK_RSA` (Key type is RSA)\n- `CKA_LABEL`: Set to the key label set in Vault's configuration\n- `CKA_ID`: Set to a random 32-bit unsigned integer\n- `CKA_DECRYPT`: `true` (Key can be used for decryption)\n- `CKA_UNWRAP`: `true` (Key can be used for unwrapping)\n- `CKA_TOKEN`: `true` (Key persists to the slot\/token rather than being for one\n  session only)\n- `CKA_EXTRACTABLE`: `false` (Key cannot be exported)\n\n### HMAC key\n\n- `CKA_CLASS`: `CKO_SECRET_KEY` (It's a secret key)\n- `CKA_KEY_TYPE`: `CKK_GENERIC_SECRET_KEY` (Key type is a generic secret key)\n- `CKA_VALUE_LEN`: `32` (Key size is 256 bits)\n- `CKA_LABEL`: Set to the HMAC key label set in Vault's configuration\n- `CKA_ID`: Set to a random 32-bit unsigned integer\n- `CKA_PRIVATE`: `true` (Key is private to this slot\/token)\n- `CKA_TOKEN`: `true` (Key persists to the slot\/token rather than being for one\n  session only)\n- `CKA_SENSITIVE`: `true` (Key is a sensitive value)\n- `CKA_SIGN`: `true` (Key can be used for signing)\n- `CKA_VERIFY`: `true` (Key can be used for verifying)\n- `CKA_EXTRACTABLE`: `false` (Key cannot be exported)\n\n## Key rotation\n\nThis seal supports rotating keys by using different key labels to track key versions. To rotate\nthe key value, generate a new key in a different key label in the HSM and update Vault's\nconfiguration with the new key label value. Restart your vault instance to pick up the new key\nlabel and all new encryption operations will use the updated key label. 
Old keys must not be disabled\nor deleted; they are still used to decrypt older data. To disable or delete old keys, Vault needs to\nperform a [seal-rewrap](\/vault\/api-docs\/system\/sealwrap-rewrap#start-a-seal-rewrap-process)\nso that data encrypted by the old key is re-encrypted using the new key.\n\n**NOTE**: Prior to version 0.10.1, key information was not tracked with the ciphertext. If\nrotation is desired for data that was seal wrapped prior to this version, you must also set\n`default_key_label` and `hmac_default_key_label` to allow for decryption of older values.\n\n## Tutorial\n\nRefer to the [HSM Integration - Seal Wrap](\/vault\/tutorials\/auto-unseal\/seal-wrap)\ntutorial to learn how to enable the Seal Wrap feature to protect your data.","site":"vault","answers_cleaned":"    layout  docs page title  PKCS11   Seals   Configuration description       The PKCS11 seal configures Vault to use an HSM with PKCS11 as the seal   wrapping mechanism          pkcs11  seal    Note title  Auto unseal and seal wrapping requires Vault Enterprise      Auto unseal   and   seal wrapping for PKCS11 require Vault Enterprise       Vault Enterprise enables seal wrapping by default  which means the KMS service   must be available at runtime and not just during the unseal process  Refer to   the  Seal wrap   vault docs enterprise sealwrap  overview for more   information     Note   The PKCS11 seal configures Vault to use an HSM with PKCS11 as the seal wrapping mechanism  Vault Enterprise s HSM PKCS11 support is activated by one of the following     The presence of a  seal  pkcs11   block in Vault s configuration file   The presence of the environment variable  VAULT HSM LIB  set to the library s   path as well as  VAULT SEAL TYPE  set to  pkcs11   If enabling via environment   variable  all other required values  i e   VAULT HSM SLOT   must be also   supplied     IMPORTANT    Having Vault generate its own key is the easiest way to get up and running  but for security  Vault marks the 
key as non exportable  If your HSM key backup strategy requires the key to be exportable  you should generate the key yourself  The list of creation attributes that Vault uses to generate the key are listed at the end of this document      Requirements  The following software packages are required for Vault Enterprise HSM     PKCS 11 compatible HSM integration library  Vault targets version 2 2 or   higher of PKCS 11  Depending on any given HSM  some functions  such as key   generation  may have to be performed manually    The  GNU libltdl   library  https   www gnu org software libtool manual html node Using libltdl html      ensure that it is installed for the correct architecture of your servers      pkcs11  example  This example shows configuring HSM PKCS11 seal through the Vault configuration file by providing all the required values      hcl seal  pkcs11      lib                usr vault lib libCryptoki2 64 so    slot              2305843009213693953    pin               AAAA BBBB CCCC DDDD    key label         vault hsm key    hmac key label    vault hsm hmac key             pkcs11  parameters  These parameters apply to the  seal  stanza in the Vault configuration file      lib    string   required     The path to the PKCS 11 library shared object   file  May also be specified by the  VAULT HSM LIB  environment variable          Note    Depending on your HSM  the value of the  lib  parameter may be   either a binary or a dynamic library  and its use may require other libraries   depending on which system the Vault binary is currently running on  e g   a   Linux system may require other libraries to interpret Windows  dll files       slot    string   slot or token label required     The slot number to use    specified as a string  e g    2305843009213693953     May also be specified by   the  VAULT HSM SLOT  environment variable          Note    Slots are typically listed as hex decimal values in the OS setup   utility but this configuration uses their 
decimal equivalent  For example  using the   HSM command line  pkcs11 tool   a slot listed as  0x2000000000000001 in hex is equal   to  2305843009213693953  in decimal  these values may be listed shorter or   differently as determined by the HSM in use      token label    string   slot or token label required     The slot token label to   use  May also be specified by the  VAULT HSM TOKEN LABEL  environment variable      pin    string   required     The PIN for login  May also be specified by the    VAULT HSM PIN  environment variable   If set via the environment variable    it will need to be re set if Vault is restarted       key label    string   required     The label of the key to use  If the key   does not exist and generation is enabled  this is the label that will be given   to the generated key  May also be specified by the  VAULT HSM KEY LABEL    environment variable      default key label    string        This is the default key label for decryption   operations  Prior to 0 10 1  key labels were not stored with the ciphertext    Seal entries now track the label used in encryption operations  The default value   for this field is the  key label   If  key label  is rotated and this value is not   set  decryption may fail  May also be specified by the  VAULT HSM DEFAULT KEY LABEL    environment variable  This value is ignored in new installations      key id    string        The ID of the key to use  The value should be a hexadecimal   string  e g    0x33333435363434373537    May also be specified by the    VAULT HSM KEY ID  environment variable      hmac key label    string   required     The label of the key to use for   HMACing  This needs to be a suitable type  If Vault tries to create this it   will attempt to use CKK GENERIC SECRET KEY  If the key does not exist and   generation is enabled  this is the label that will be given to the generated   key  May also be specified by the  VAULT HSM HMAC KEY LABEL  environment   variable      default hmac key 
label    string        This is the default HMAC key label for signing   operations  Prior to 0 10 1  HMAC key labels were not stored with the signature    Seal entries now track the label used in signing operations  The default value   for this field is the  hmac key label   If  hmac key label  is rotated and this   value is not set  signature verification may fail  May also be specified by the    VAULT HSM HMAC DEFAULT KEY LABEL  environment variable  This value is ignored in   new installations      hmac key id    string        The ID of the HMAC key to use  The value should be a   hexadecimal string  e g    0x33333435363434373537    May also be specified by the    VAULT HSM HMAC KEY ID  environment variable      mechanism    string   best available     The encryption decryption mechanism to use    specified as a decimal or hexadecimal  prefixed by  0x   string  May also be   specified by the  VAULT HSM MECHANISM  environment variable    Currently supported mechanisms  in order of precedence         0x1085   CKM AES CBC PAD   HMAC mechanism required       0x1082   CKM AES CBC   HMAC mechanism required       0x1087   CKM AES GCM       0x0009   CKM RSA PKCS OAEP       0x0001   CKM RSA PKCS          Warning    CKM RSA PKCS specifies the PKCS  1 v1 5 padding scheme  which is   in considered less secure than OAEP  Where possible  use of CKM RSA PKCS OAEP is   recommended over CKM RSA PKCS      hmac mechanism    string   0x0251     The encryption decryption mechanism to   use  specified as a decimal or hexadecimal  prefixed by  0x   string    Currently only  0x0251   corresponding to  CKM SHA256 HMAC  from the   specification  is supported  May also be specified by the    VAULT HSM HMAC MECHANISM  environment variable  This value is only required   for specific mechanisms      generate key    string   false     If no existing key with the label   specified by  key label  can be found at Vault initialization time  instructs   Vault to generate a key  This is a boolean 
expressed as a string  e g      true     May also be specified by the  VAULT HSM GENERATE KEY  environment   variable  Vault may not be able to successfully generate keys in all   circumstances  such as if proprietary vendor extensions are required to   create keys of a suitable type          NOTE    Once the initial key creation has occurred post cluster   initialization  it is advisable to disable this flag to prevent any   unintended key creation in the future      force rw session    string   false     Force all operations to open up   a read write session to the HSM  This is a boolean expressed as a string  e g      true     May also be specified by the  VAULT HSM FORCE RW SESSION  environment   variable  This key is mainly to work around a limitation within AWS s CloudHSM v5   pkcs11 implementation      max parallel    int  1     The number of concurrent requests that may be   in flight to the HSM at any given time      disabled    string        Set this to  true  if Vault is migrating from an auto seal configuration  Otherwise  set to  false    Refer to the  Seal Migration   vault docs concepts seal seal migration  documentation for more information about the seal migration process         Mechanism specific flags     rsa encrypt local    string   false     For HSMs that do not support encryption   for RSA keys  perform encryption locally  Available for mechanisms    CKM RSA PKCS OAEP  and  CKM RSA PKCS   May also be specified by the    VAULT HSM RSA ENCRYPT LOCAL  environment variable      rsa oaep hash    string   sha256     Specify the hash algorithm to use for RSA   with OAEP padding  Valid values are sha1  sha224  sha256  sha384  and sha512    Available for mechanism  CKM RSA PKCS OAEP   May also be specified by the    VAULT HSM RSA OAEP HASH  environment variable        Note    Although the configuration file allows you to pass in  VAULT HSM PIN  as part of the seal s parameters  it is  strongly  recommended to set this value via environment variables  
     pkcs11  environment variables  Alternatively  the HSM seal can be activated by providing the following environment variables      text VAULT SEAL TYPE VAULT HSM LIB VAULT HSM SLOT VAULT HSM TOKEN LABEL VAULT HSM PIN VAULT HSM KEY LABEL VAULT HSM DEFAULT KEY LABEL VAULT HSM KEY ID VAULT HSM HMAC KEY LABEL VAULT HSM HMAC DEFAULT KEY LABEL VAULT HSM HMAC KEY ID VAULT HSM MECHANISM VAULT HSM HMAC MECHANISM VAULT HSM GENERATE KEY VAULT HSM RSA ENCRYPT LOCAL VAULT HSM RSA OAEP HASH VAULT HSM FORCE RW SESSION         Vault key generation attributes  If Vault generates the HSM key for you  the following is the list of attributes it uses  These identifiers correspond to official PKCS 11 identifiers       AES key     CKA CLASS    CKO SECRET KEY   It s a secret key     CKA KEY TYPE    CKK AES   Key type is AES     CKA VALUE LEN    32   Key size is 256 bits     CKA LABEL   Set to the key label set in Vault s configuration    CKA ID   Set to a random 32 bit unsigned integer    CKA PRIVATE    true   Key is private to this slot token     CKA TOKEN    true   Key persists to the slot token rather than being for one   session only     CKA SENSITIVE    true   Key is a sensitive value     CKA ENCRYPT    true   Key can be used for encryption     CKA DECRYPT    true   Key can be used for decryption     CKA WRAP    true   Key can be used for wrapping     CKA UNWRAP    true   Key can be used for unwrapping     CKA EXTRACTABLE    false   Key cannot be exported       RSA key   Public Key      CKA CLASS    CKO PUBLIC KEY   It s a public key     CKA KEY TYPE    CKK RSA   Key type is RSA     CKA LABEL   Set to the key label set in Vault s configuration    CKA ID   Set to a random 32 bit unsigned integer    CKA ENCRYPT    true   Key can be used for encryption     CKA WRAP    true   Key can be used for wrapping     CKA MODULUS BITS    2048   Key size is 2048 bits     CKA PUBLIC EXPONENT    0x10001   Public exponent of 65537     CKA TOKEN    true   Key persists to the slot token rather than 
being for one   session only    Private Key      CKA CLASS    CKO PRIVATE KEY   It s a private key     CKA KEY TYPE    CKK RSA   Key type is RSA     CKA LABEL   Set to the key label set in Vault s configuration    CKA ID   Set to a random 32 bit unsigned integer    CKA DECRYPT    true   Key can be used for decryption     CKA UNWRAP    true   Key can be used for unwrapping     CKA TOKEN    true   Key persists to the slot token rather than being for one   session only     CKA EXTRACTABLE    false   Key cannot be exported       HMAC key     CKA CLASS    CKO SECRET KEY   It s a secret key     CKA KEY TYPE    CKK GENERIC SECRET KEY   Key type is a generic secret key     CKA VALUE LEN    32   Key size is 256 bits     CKA LABEL   Set to the HMAC key label set in Vault s configuration    CKA ID   Set to a random 32 bit unsigned integer    CKA PRIVATE    true   Key is private to this slot token     CKA TOKEN    true   Key persists to the slot token rather than being for one   session only     CKA SENSITIVE    true   Key is a sensitive value     CKA SIGN    true   Key can be used for signing     CKA VERIFY    true   Key can be used for verifying     CKA EXTRACTABLE    false   Key cannot be exported      Key rotation  This seal supports rotating keys by using different key labels to track key versions  To rotate the key value  generate a new key in a different key label in the HSM and update Vault s configuration with the new key label value  Restart your vault instance to pick up the new key label and all new encryption operations will use the updated key label  Old keys must not be disabled or deleted and are used to decrypt older data  To disable or delete old keys  Vault needs to  perform  seal rewrap   vault api docs system sealwrap rewrap start a seal rewrap process   so  that data encrypted by the old key can be decrypted using the new key      NOTE    Prior to version 0 10 1  key information was not tracked with the ciphertext  If rotation is desired for data that was 
seal wrapped prior to this version must also set  default key label  and  hmac default key label  to allow for decryption of older values      Tutorial  Refer to the  HSM Integration   Seal Wrap   vault tutorials auto unseal seal wrap  tutorial to learn how to enable the Seal Wrap feature to protect your data "}
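The hex-to-decimal slot conversion described in the pkcs11 `slot` parameter note above can be checked with a one-line shell sketch (the slot value is the illustrative one from the docs; a real HSM reports its own):

```shell
# pkcs11-tool lists slots in hex, but Vault's `slot` parameter expects
# the decimal equivalent; printf(1) performs the conversion.
printf '%d\n' 0x2000000000000001
# -> 2305843009213693953
```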
{"questions":"vault page title GCP Cloud KMS Seals Configuration mechanism wrapping The GCP Cloud KMS seal configures Vault to use GCP Cloud KMS as the seal layout docs","answers":"---\nlayout: docs\npage_title: GCP Cloud KMS - Seals - Configuration\ndescription: >-\n  The GCP Cloud KMS seal configures Vault to use GCP Cloud KMS as the seal\n  wrapping\n\n  mechanism.\n---\n\n# `gcpckms` seal\n\n<Note title=\"Seal wrapping requires Vault Enterprise\">\n\n  All Vault versions support **auto-unseal** for GCP Cloud, but **seal wrapping**\n  requires Vault Enterprise.\n  \n  Vault Enterprise enables seal wrapping by default, which means the KMS service\n  must be available at runtime and not just during the unseal process. Refer to\n  the [Seal wrap](\/vault\/docs\/enterprise\/sealwrap) overview for more\n  information.\n\n<\/Note>\n\nThe GCP Cloud KMS seal configures Vault to use GCP Cloud KMS as the seal\nwrapping mechanism. The GCP Cloud KMS seal is activated by one of the following:\n\n- The presence of a `seal \"gcpckms\"` block in Vault's configuration file.\n- The presence of the environment variable `VAULT_SEAL_TYPE` set to `gcpckms`.\n  If enabling via environment variable, all other required values specific to\n  Cloud KMS (i.e. `VAULT_GCPCKMS_SEAL_KEY_RING`, etc.) must also be supplied, as\n  well as all other GCP-related environment variables that lend to successful\n  authentication (i.e. 
`GOOGLE_PROJECT`, etc.).\n\n## `gcpckms` example\n\nThis example shows configuring GCP Cloud KMS seal through the Vault\nconfiguration file by providing all the required values:\n\n```hcl\nseal \"gcpckms\" {\n  credentials = \"\/usr\/vault\/vault-project-user-creds.json\"\n  project     = \"vault-project\"\n  region      = \"global\"\n  key_ring    = \"vault-keyring\"\n  crypto_key  = \"vault-key\"\n}\n```\n\n## `gcpckms` parameters\n\nThese parameters apply to the `seal` stanza in the Vault configuration file:\n\n- `credentials` `(string: <required>)`: The path to the credentials JSON file\n  to use. May be also specified by the `GOOGLE_CREDENTIALS` or\n  `GOOGLE_APPLICATION_CREDENTIALS` environment variable or set automatically if\n  running under Google App Engine, Google Compute Engine or Google Kubernetes\n  Engine.\n\n- `project` `(string: <required>)`: The GCP project ID to use. May also be\n  specified by the `GOOGLE_PROJECT` environment variable.\n\n- `region` `(string: <required>)`: The GCP region\/location where the key ring\n  lives. May also be specified by the `GOOGLE_REGION` environment variable.\n\n- `key_ring` `(string: <required>)`: The GCP CKMS key ring to use. May also be\n  specified by the `VAULT_GCPCKMS_SEAL_KEY_RING` environment variable.\n\n- `crypto_key` `(string: <required>)`: The GCP CKMS crypto key to use for\n  encryption and decryption. May also be specified by the\n  `VAULT_GCPCKMS_SEAL_CRYPTO_KEY` environment variable.\n\n- `disabled` `(string: \"\")`: Set this to `true` if Vault is migrating from an auto seal configuration. 
Otherwise, set to `false`.\n\nRefer to the [Seal Migration](\/vault\/docs\/concepts\/seal#seal-migration) documentation for more information about the seal migration process.\n\n## Authentication & permissions\n\nAuthentication-related values must be provided, either as environment\nvariables or as configuration parameters.\n\nGCP authentication values:\n\n- `GOOGLE_CREDENTIALS` or `GOOGLE_APPLICATION_CREDENTIALS`\n- `GOOGLE_PROJECT`\n- `GOOGLE_REGION`\n\nNote: The client uses the official Google SDK and will use the specified\ncredentials, environment credentials, or [application default\ncredentials](https:\/\/developers.google.com\/identity\/protocols\/application-default-credentials)\nin that order, if the above GCP-specific values are not provided.\n\nThe service account needs the following minimum permissions on the crypto key:\n\n```text\ncloudkms.cryptoKeyVersions.useToEncrypt\ncloudkms.cryptoKeyVersions.useToDecrypt\ncloudkms.cryptoKeys.get\n```\n\nThese permissions can be described with the following role:\n\n```text\nroles\/cloudkms.cryptoKeyEncrypterDecrypter\ncloudkms.cryptoKeys.get\n```\n\nThe `cloudkms.cryptoKeys.get` permission is used to retrieve key metadata from Cloud KMS during seal initialization.\n\n## `gcpckms` environment variables\n\nAlternatively, the GCP Cloud KMS seal can be activated by providing the following\nenvironment variables:\n\n- `VAULT_SEAL_TYPE`\n- `VAULT_GCPCKMS_SEAL_KEY_RING`\n- `VAULT_GCPCKMS_SEAL_CRYPTO_KEY`\n\n## Key rotation\n\nThis seal supports rotating keys defined in Google Cloud KMS; see the\n[key rotation documentation](https:\/\/cloud.google.com\/kms\/docs\/rotating-keys). Both scheduled rotation and manual\nrotation are supported, since the key version used for encryption is tracked with the ciphertext. Old key versions must not be\ndisabled or deleted; they are still used to decrypt older data. 
Any new or updated data will be\nencrypted with the primary key version.\n\n## Tutorial\n\nRefer to the [Auto-unseal using GCP Cloud KMS](\/vault\/tutorials\/auto-unseal\/autounseal-gcp-kms)\nguide for a step-by-step tutorial.","site":"vault","answers_cleaned":"    layout  docs page title  GCP Cloud KMS   Seals   Configuration description       The GCP Cloud KMS seal configures Vault to use GCP Cloud KMS as the seal   wrapping    mechanism          gcpckms  seal   Note title  Seal wrapping requires Vault Enterprise      All Vault versions support   auto unseal   for GCP Cloud  but   seal wrapping     requires Vault Enterprise       Vault Enterprise enables seal wrapping by default  which means the KMS service   must be available at runtime and not just during the unseal process  Refer to   the  Seal wrap   vault docs enterprise sealwrap  overview for more   information     Note   The GCP Cloud KMS seal configures Vault to use GCP Cloud KMS as the seal wrapping mechanism  The GCP Cloud KMS seal is activated by one of the following     The presence of a  seal  gcpckms   block in Vault s configuration file    The presence of the environment variable  VAULT SEAL TYPE  set to  gcpckms     If enabling via environment variable  all other required values specific to   Cloud KMS  i e   VAULT GCPCKMS SEAL KEY RING   etc   must be also supplied  as   well as all other GCP related environment variables that lends to successful   authentication  i e   GOOGLE PROJECT   etc         gcpckms  example  This example shows configuring GCP Cloud KMS seal through the Vault configuration file by providing all the required values      hcl seal  gcpckms      credentials     usr vault vault project user creds json    project        vault project    region         global    key ring       vault keyring    crypto key     vault key             gcpckms  parameters  These parameters apply to the  seal  stanza in the Vault configuration file      credentials    string   required     The path to 
the credentials JSON file   to use  May be also specified by the  GOOGLE CREDENTIALS  or    GOOGLE APPLICATION CREDENTIALS  environment variable or set automatically if   running under Google App Engine  Google Compute Engine or Google Kubernetes   Engine      project    string   required     The GCP project ID to use  May also be   specified by the  GOOGLE PROJECT  environment variable      region    string   required     The GCP region location where the key ring   lives  May also be specified by the  GOOGLE REGION  environment variable      key ring    string   required     The GCP CKMS key ring to use  May also be   specified by the  VAULT GCPCKMS SEAL KEY RING  environment variable      crypto key    string   required     The GCP CKMS crypto key to use for   encryption and decryption  May also be specified by the    VAULT GCPCKMS SEAL CRYPTO KEY  environment variable      disabled    string        Set this to  true  if Vault is migrating from an auto seal configuration  Otherwise  set to  false    Refer to the  Seal Migration   vault docs concepts seal seal migration  documentation for more information about the seal migration process      Authentication  amp  permissions  Authentication related values must be provided  either as environment variables or as configuration parameters   GCP authentication values      GOOGLE CREDENTIALS  or  GOOGLE APPLICATION CREDENTIALS     GOOGLE PROJECT     GOOGLE REGION   Note  The client uses the official Google SDK and will use the specified credentials  environment credentials  or  application default credentials  https   developers google com identity protocols application default credentials  in that order  if the above GCP specific values are not provided   The service account needs the following minimum permissions on the crypto key      text cloudkms cryptoKeyVersions useToEncrypt cloudkms cryptoKeyVersions useToDecrypt cloudkms cryptoKeys get      These permissions can be described with the following role      text 
roles cloudkms cryptoKeyEncrypterDecrypter cloudkms cryptoKeys get       cloudkms cryptoKeys get  permission is used for retrieving metadata information of keys from CloudKMS within this engine initialization process       gcpckms  environment variables  Alternatively  the GCP Cloud KMS seal can be activated by providing the following environment variables      VAULT SEAL TYPE     VAULT GCPCKMS SEAL KEY RING     VAULT GCPCKMS SEAL CRYPTO KEY      Key rotation  This seal supports rotating keys defined in Google Cloud KMS  doc  https   cloud google com kms docs rotating keys   Both scheduled rotation and manual rotation is supported for CKMS since the key information  Old keys version must not be disabled or deleted and are used to decrypt older data  Any new or updated data will be encrypted with the primary key version      Tutorial  Refer to the  Auto unseal using GCP Cloud KMS   vault tutorials auto unseal autounseal gcp kms  guide for a step by step tutorial "}
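The environment-variable activation listed under `gcpckms` environment variables above can be sketched as a shell snippet; the resource names are the illustrative placeholders from the HCL example, not real GCP resources:

```shell
# Activate the gcpckms seal without a `seal "gcpckms"` stanza in the
# config file. Values mirror the docs' example and are placeholders.
export VAULT_SEAL_TYPE="gcpckms"
export VAULT_GCPCKMS_SEAL_KEY_RING="vault-keyring"
export VAULT_GCPCKMS_SEAL_CRYPTO_KEY="vault-key"

# GCP authentication values (optional if application default
# credentials are available in the runtime environment).
export GOOGLE_APPLICATION_CREDENTIALS="/usr/vault/vault-project-user-creds.json"
export GOOGLE_PROJECT="vault-project"
export GOOGLE_REGION="global"
```

Vault reads these at startup in place of the configuration-file stanza.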
{"questions":"vault transit seal page title Vault Transit Seals Configuration layout docs autoseal mechanism The Transit seal configures Vault to use Vault s Transit Secret Engine as the","answers":"---\nlayout: docs\npage_title: Vault Transit - Seals - Configuration\ndescription: |-\n  The Transit seal configures Vault to use Vault's Transit Secret Engine as the\n  autoseal mechanism.\n---\n\n# `transit` seal\n\n\n<Note title=\"Seal wrap functionality requires Vault Enterprise\">\n\n  All Vault versions support **auto-unseal** for Transit, but **seal wrapping**\n  requires Vault Enterprise.\n  \n  Vault Enterprise enables seal wrapping by default, which means the KMS service\n  must be available at runtime and not just during the unseal process. Refer to\n  the [Seal wrap](\/vault\/docs\/enterprise\/sealwrap) overview for more\n  information.\n\n<\/Note>\n\nThe Transit seal configures Vault to use Vault's Transit Secret Engine as the\nautoseal mechanism.\nThe Transit seal is activated by one of the following:\n\n- The presence of a `seal \"transit\"` block in Vault's configuration file\n- The presence of the environment variable `VAULT_SEAL_TYPE` set to `transit`.\n\n## `transit` example\n\nThis example shows configuring Transit seal through the Vault configuration file\nby providing all the required values:\n\n```hcl\nseal \"transit\" {\n  address            = \"https:\/\/vault:8200\"\n  token              = \"s.Qf1s5zigZ4OX6akYjQXJC1jY\"\n  disable_renewal    = \"false\"\n\n  \/\/ Key configuration\n  key_name           = \"transit_key_name\"\n  mount_path         = \"transit\/\"\n  namespace          = \"ns1\/\"\n\n  \/\/ TLS Configuration\n  tls_ca_cert        = \"\/etc\/vault\/ca_cert.pem\"\n  tls_client_cert    = \"\/etc\/vault\/client_cert.pem\"\n  tls_client_key     = \"\/etc\/vault\/client_key.pem\"\n  tls_server_name    = \"vault\"\n  tls_skip_verify    = \"false\"\n}\n```\n\n## `transit` parameters\n\nThese parameters apply to the `seal` stanza in the 
Vault configuration file:\n\n- `address` `(string: <required>)`: The full address to the Vault cluster.\n  This may also be specified by the `VAULT_ADDR` environment variable.\n\n- `token` `(string: <required>)`: The Vault token to use. This may also be\n  specified by the `VAULT_TOKEN` environment variable.\n\n- `key_name` `(string: <required>)`: The transit key to use for encryption and\n  decryption. This may also be supplied using the `VAULT_TRANSIT_SEAL_KEY_NAME`\n  environment variable.\n\n- `key_id_prefix` `(string: \"\")`: An optional string to add to the key id\n  of values wrapped by this transit seal.  This can help disambiguate between\n  two transit seals.\n\n- `mount_path` `(string: <required>)`: The mount path to the transit secret engine.\n  This may also be supplied using the `VAULT_TRANSIT_SEAL_MOUNT_PATH` environment\n  variable.\n\n- `namespace` `(string: \"\")`: The namespace path to the transit secret engine.\n  This may also be supplied using the `VAULT_NAMESPACE` environment variable.\n\n- `disable_renewal` `(string: \"false\")`: Disables the automatic renewal of the token\n  in case the lifecycle of the token is managed with some other mechanism outside of\n  Vault, such as Vault Agent. This may also be specified using the\n  `VAULT_TRANSIT_SEAL_DISABLE_RENEWAL` environment variable.\n\n- `tls_ca_cert` `(string: \"\")`: Specifies the path to the CA certificate file used\n  for communication with the Vault server. This may also be specified using the\n  `VAULT_CACERT` environment variable.\n\n- `tls_client_cert` `(string: \"\")`: Specifies the path to the client certificate\n  for communication with the Vault server. This may also be specified using the\n  `VAULT_CLIENT_CERT` environment variable.\n\n- `tls_client_key` `(string: \"\")`: Specifies the path to the private key for\n  communication with the Vault server. 
This may also be specified using the\n  `VAULT_CLIENT_KEY` environment variable.\n\n- `tls_server_name` `(string: \"\")`: Name to use as the SNI host when connecting\n  to the Vault server via TLS. This may also be specified via the\n  `VAULT_TLS_SERVER_NAME` environment variable.\n\n- `tls_skip_verify` `(bool: \"false\")`: Disable verification of TLS certificates.\n  Using this option is highly discouraged and decreases the security of data\n  transmissions to and from the Vault server. This may also be specified using the\n  `VAULT_SKIP_VERIFY` environment variable.\n\n- `disabled` `(string: \"\")`: Set this to `true` if Vault is migrating from an auto seal configuration. Otherwise, set to `false`.\n\nRefer to the [Seal Migration](\/vault\/docs\/concepts\/seal#seal-migration) documentation for more information about the seal migration process.\n\n## Authentication\n\nAuthentication-related values must be provided, either as environment\nvariables or as configuration parameters.\n\n~> **Note:** Although the configuration file allows you to pass in\n`VAULT_TOKEN` as part of the seal's parameters, it is _strongly_ recommended\nto set these values via environment variables.\n\nThe Vault token used to authenticate needs the following permissions on the\ntransit key:\n\n```hcl\npath \"<mount path>\/encrypt\/<key name>\" {\n  capabilities = [\"update\"]\n}\n\npath \"<mount path>\/decrypt\/<key name>\" {\n  capabilities = [\"update\"]\n}\n```\n\nOther considerations for the token used:\n* it should probably be an [orphan token](\/vault\/docs\/concepts\/tokens#token-hierarchies-and-orphan-tokens),\notherwise when the parent token expires or gets revoked the seal will break.\n* consider making it a [periodic token](\/vault\/docs\/concepts\/tokens#periodic-tokens)\nand not setting an explicit max TTL, otherwise at some point it will cease to be renewable.\n\n## Key rotation\n\nThis seal supports key rotation using the Transit Secret Engine's key rotation endpoints. 
See the\n[rotate-key documentation](\/vault\/api-docs\/secret\/transit#rotate-key). Old keys must not be disabled or deleted; they are\nstill used to decrypt older data.\n\n## Tutorial\n\nRefer to the [Auto-unseal using Transit Secrets Engine](\/vault\/tutorials\/auto-unseal\/autounseal-transit)\ntutorial to learn how to use the transit secrets engine to automatically unseal Vault.","site":"vault","answers_cleaned":"    layout  docs page title  Vault Transit   Seals   Configuration description       The Transit seal configures Vault to use Vault s Transit Secret Engine as the   autoseal mechanism          transit  seal    Note title  Seal wrap functionality requires Vault Enterprise      All Vault versions support   auto unseal   for Transit  but   seal wrapping     requires Vault Enterprise       Vault Enterprise enables seal wrapping by default  which means the KMS service   must be available at runtime and not just during the unseal process  Refer to   the  Seal wrap   vault docs enterprise sealwrap  overview for more   information     Note   The Transit seal configures Vault to use Vault s Transit Secret Engine as the autoseal mechanism  The Transit seal is activated by one of the following     The presence of a  seal  transit   block in Vault s configuration file   The presence of the environment variable  VAULT SEAL TYPE  set to  transit        transit  example  This example shows configuring Transit seal through the Vault configuration file by providing all the required values      hcl seal  transit      address               https   vault 8200    token                 s Qf1s5zigZ4OX6akYjQXJC1jY    disable renewal       false        Key configuration   key name              transit key name    mount path            transit     namespace             ns1         TLS Configuration   tls ca cert            etc vault ca cert pem    tls client cert        etc vault client cert pem    tls client key         etc vault ca cert pem    tls server name       vault    tls skip verify       false            
 transit  parameters  These parameters apply to the  seal  stanza in the Vault configuration file      address    string   required     The full address to the Vault cluster    This may also be specified by the  VAULT ADDR  environment variable      token    string   required     The Vault token to use  This may also be   specified by the  VAULT TOKEN  environment variable      key name    string   required     The transit key to use for encryption and   decryption  This may also be supplied using the  VAULT TRANSIT SEAL KEY NAME    environment variable      key id prefix    string        An optional string to add to the key id   of values wrapped by this transit seal   This can help disambiguate between   two transit seals      mount path    string   required     The mount path to the transit secret engine    This may also be supplied using the  VAULT TRANSIT SEAL MOUNT PATH  environment   variable      namespace    string        The namespace path to the transit secret engine    This may also be supplied using the  VAULT NAMESPACE  environment variable      disable renewal    string   false     Disables the automatic renewal of the token   in case the lifecycle of the token is managed with some other mechanism outside of   Vault  such as Vault Agent  This may also be specified using the    VAULT TRANSIT SEAL DISABLE RENEWAL  environment variable      tls ca cert    string        Specifies the path to the CA certificate file used   for communication with the Vault server  This may also be specified using the    VAULT CACERT  environment variable      tls client cert    string        Specifies the path to the client certificate   for communication with the Vault server  This may also be specified using the    VAULT CLIENT CERT  environment variable      tls client key    string        Specifies the path to the private key for   communication with the Vault server  This may also be specified using the    VAULT CLIENT KEY  environment variable      tls server name    
string        Name to use as the SNI host when connecting   to the Vault server via TLS  This may also be specified via the    VAULT TLS SERVER NAME  environment variable      tls skip verify    bool   false     Disable verification of TLS certificates    Using this option is highly discouraged and decreases the security of data   transmissions to and from the Vault server  This may also be specified using the    VAULT SKIP VERIFY  environment variable      disabled    string        Set this to  true  if Vault is migrating from an auto seal configuration  Otherwise  set to  false    Refer to the  Seal Migration   vault docs concepts seal seal migration  documentation for more information about the seal migration process      Authentication  Authentication related values must be provided  either as environment variables or as configuration parameters        Note    Although the configuration file allows you to pass in  VAULT TOKEN  as part of the seal s parameters  it is  strongly  recommended to set these values via environment variables   The Vault token used to authenticate needs the following permissions on the transit key      hcl path   mount path  encrypt  key name       capabilities     update      path   mount path  decrypt  key name       capabilities     update          Other considerations for the token used    it should probably be an  orphan token   vault docs concepts tokens token hierarchies and orphan tokens   otherwise when the parent token expires or gets revoked the seal will break    consider making it a  periodic token   vault docs concepts tokens periodic tokens  and not setting an explicit max TTL  otherwise at some point it will cease to be renewable      Key rotation  This seal supports key rotation using the Transit Secret Engine s key rotation endpoints  See  doc   vault api docs secret transit rotate key   Old keys must not be disabled or deleted and are used to decrypt older data      Tutorial  Refer to the  Auto unseal using Transit 
Secrets Engine   vault tutorials auto unseal autounseal transit  tutorial to learn how use the transit secrets engine to automatically unseal Vault "}
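The transit record above notes that every `seal "transit"` parameter may instead be supplied through environment variables. A minimal sketch of that activation path, using only the variable names listed in the record (every value below is a placeholder, not a real endpoint or token):

```shell
# Activate the transit seal without a `seal` block in the config file.
# Variable names come from the docs above; all values are placeholders.
export VAULT_SEAL_TYPE="transit"
export VAULT_ADDR="https://vault.example.com:8200"     # address of the Vault cluster hosting transit
export VAULT_TOKEN="s.example-placeholder-token"       # needs update on the encrypt/decrypt paths
export VAULT_TRANSIT_SEAL_KEY_NAME="transit_key_name"
export VAULT_TRANSIT_SEAL_MOUNT_PATH="transit/"
```

Per the token guidance in the record, this token should be an orphan, periodic token so that parent-token revocation or an explicit max TTL does not break the seal.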
{"questions":"vault The AliCloud KMS seal configures Vault to use AliCloud KMS as the seal mechanism wrapping layout docs page title AliCloud KMS Seals Configuration","answers":"---\nlayout: docs\npage_title: AliCloud KMS - Seals - Configuration\ndescription: >-\n  The AliCloud KMS seal configures Vault to use AliCloud KMS as the seal\n  wrapping\n\n  mechanism.\n---\n\n# `alicloudkms` seal\n\n<Note title=\"Seal wrapping requires Vault Enterprise\">\n\n  All Vault versions support **auto-unseal** for AliCloud, but **seal wrapping**\n  requires Vault Enterprise.\n  \n  Vault Enterprise enables seal wrapping by default, which means the KMS service\n  must be available at runtime and not just during the unseal process. Refer to\n  the [Seal wrap](\/vault\/docs\/enterprise\/sealwrap) overview for more\n  information.\n\n<\/Note>\n\n\nThe AliCloud KMS seal configures Vault to use AliCloud KMS as the seal wrapping mechanism.\nThe AliCloud KMS seal is activated by one of the following:\n\n- The presence of a `seal \"alicloudkms\"` block in Vault's configuration file.\n- The presence of the environment variable `VAULT_SEAL_TYPE` set to `alicloudkms`. If\n  enabling via environment variable, all other required values specific to AliCloud\n  KMS (i.e. 
`VAULT_ALICLOUDKMS_SEAL_KEY_ID`) must also be supplied, as well as all\n  other AliCloud-related environment variables that contribute to successful\n  authentication.\n\n## `alicloudkms` example\n\nThis example shows configuring AliCloud KMS seal through the Vault configuration file\nby providing all the required values:\n\n```hcl\nseal \"alicloudkms\" {\n  region     = \"us-east-1\"\n  access_key = \"0wNEpMMlzy7szvai\"\n  secret_key = \"PupkTg8jdmau1cXxYacgE736PJj4cA\"\n  kms_key_id = \"08c33a6f-4e0a-4a1b-a3fa-7ddfa1d4fb73\"\n}\n```\n\n## `alicloudkms` parameters\n\nThese parameters apply to the `seal` stanza in the Vault configuration file:\n\n- `region` `(string: <required> \"us-east-1\")`: The AliCloud region where the encryption key\n  lives. May also be specified by the `ALICLOUD_REGION`\n  environment variable.\n\n- `domain` `(string: \"kms.us-east-1.aliyuncs.com\")`: If set, overrides the endpoint\n  AliCloud would normally use for KMS for a particular region. May also be specified\n  by the `ALICLOUD_DOMAIN` environment variable.\n\n- `access_key` `(string: <required>)`: The AliCloud access key ID to use. May also be\n  specified by the `ALICLOUD_ACCESS_KEY` environment variable or as part of the\n  AliCloud profile from the AliCloud CLI or instance profile.\n\n- `secret_key` `(string: <required>)`: The AliCloud secret access key to use. May\n  also be specified by the `ALICLOUD_SECRET_KEY` environment variable or as\n  part of the AliCloud profile from the AliCloud CLI or instance profile.\n\n- `kms_key_id` `(string: <required>)`: The AliCloud KMS key ID to use for encryption\n  and decryption. May also be specified by the `VAULT_ALICLOUDKMS_SEAL_KEY_ID`\n  environment variable.\n\n- `disabled` `(string: \"\")`: Set this to `true` if Vault is migrating from an auto seal configuration. 
Otherwise, set to `false`.\n\nRefer to the [Seal Migration](\/vault\/docs\/concepts\/seal#seal-migration) documentation for more information about the seal migration process.\n\n## Authentication\n\nAuthentication-related values must be provided, either as environment\nvariables or as configuration parameters.\n\n~> **Note:** Although the configuration file allows you to pass in\n`ALICLOUD_ACCESS_KEY` and `ALICLOUD_SECRET_KEY` as part of the seal's parameters, it\nis _strongly_ recommended to set these values via environment variables.\n\n```text\nAliCloud authentication values:\n\n* `ALICLOUD_REGION`\n* `ALICLOUD_ACCESS_KEY`\n* `ALICLOUD_SECRET_KEY`\n```\n\nNote: The client uses the official AliCloud SDK and will use environment credentials,\nthe specified credentials, or RAM role credentials in that order.\n\n## `alicloudkms` environment variables\n\nAlternatively, the AliCloud KMS seal can be activated by providing the following\nenvironment variables:\n\n```text\nVault Seal specific values:\n\n* `VAULT_SEAL_TYPE`\n* `VAULT_ALICLOUDKMS_SEAL_KEY_ID`\n```","site":"vault"}
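The AliCloud record above lists the seal-specific and authentication environment variables separately. A minimal sketch combining both sets to activate the seal without a config block (the key ID is the example value from the record; the credentials are placeholders):

```shell
# Activate the AliCloud KMS seal purely through the environment.
# Credential values are placeholders; prefer env vars over the config file.
export VAULT_SEAL_TYPE="alicloudkms"
export VAULT_ALICLOUDKMS_SEAL_KEY_ID="08c33a6f-4e0a-4a1b-a3fa-7ddfa1d4fb73"
export ALICLOUD_REGION="us-east-1"
export ALICLOUD_ACCESS_KEY="placeholder-access-key"
export ALICLOUD_SECRET_KEY="placeholder-secret-key"
```

If the access key and secret key are omitted, the AliCloud SDK falls back to the other credential sources noted above (environment credentials, then RAM role credentials).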
{"questions":"vault ocikms seal mechanism layout docs page title OCI KMS Seals Configuration The OCI KMS seal configures Vault to use OCI KMS as the seal wrapping","answers":"---\nlayout: docs\npage_title: OCI KMS - Seals - Configuration\ndescription: |-\n  The OCI KMS seal configures Vault to use OCI KMS as the seal wrapping\n  mechanism.\n---\n\n# `ocikms` seal\n\n<Note title=\"Seal wrapping requires Vault Enterprise\">\n\n  All Vault versions support **auto-unseal** for OCI KMS, but **seal wrapping**\n  requires Vault Enterprise.\n  \n  Vault Enterprise enables seal wrapping by default, which means the KMS service\n  must be available at runtime and not just during the unseal process. Refer to\n  the [Seal wrap](\/vault\/docs\/enterprise\/sealwrap) overview for more\n  information.\n\n<\/Note>\n\nThe OCI KMS seal configures Vault to use OCI KMS as the seal wrapping mechanism.\nThe OCI KMS seal is activated by one of the following:\n\n- The presence of a `seal \"ocikms\"` block in Vault's configuration file\n- The presence of the environment variable `VAULT_SEAL_TYPE` set to `ocikms`. If\n  enabling via environment variable, all other required values specific to OCI\n  KMS (i.e. 
`VAULT_OCIKMS_SEAL_KEY_ID`, `VAULT_OCIKMS_CRYPTO_ENDPOINT`, `VAULT_OCIKMS_MANAGEMENT_ENDPOINT`) must also be supplied, as well as all\n  other OCI-related [environment variables][oci-sdk] that contribute to successful\n  authentication.\n\n## `ocikms` example\n\nThis example shows configuring the OCI KMS seal through the Vault configuration file\nby providing all the required values:\n\n```hcl\nseal \"ocikms\" {\n    key_id               = \"ocid1.key.oc1.iad.afnxza26aag4s.abzwkljsbapzb2nrha5nt3s7s7p42ctcrcj72vn3kq5qx\"\n    crypto_endpoint      = \"https:\/\/afnxza26aag4s-crypto.kms.us-ashburn-1.oraclecloud.com\"\n    management_endpoint  = \"https:\/\/afnxza26aag4s-management.kms.us-ashburn-1.oraclecloud.com\"\n    auth_type_api_key    = \"true\"\n}\n```\n\n## `ocikms` parameters\n\nThese parameters apply to the `seal` stanza in the Vault configuration file:\n\n- `key_id` `(string: <required>)`: The OCI KMS key ID to use. May also be\n  specified by the `VAULT_OCIKMS_SEAL_KEY_ID` environment variable.\n- `crypto_endpoint` `(string: <required>)`: The OCI KMS cryptographic endpoint (or data plane endpoint)\n  to be used to make OCI KMS encryption\/decryption requests. May also be specified by the `VAULT_OCIKMS_CRYPTO_ENDPOINT` environment\n  variable.\n- `management_endpoint` `(string: <required>)`: The OCI KMS management endpoint (or control plane endpoint)\n  to be used to make OCI KMS key management requests. May also be specified by the `VAULT_OCIKMS_MANAGEMENT_ENDPOINT` environment\n  variable.\n- `auth_type_api_key` `(boolean: false)`: Specifies whether to use an API key to authenticate to the OCI KMS service.\n  If it is `false`, Vault authenticates using the instance principal from the compute instance. See the Authentication section for details. Default is `false`.\n\n- `disabled` `(string: \"\")`: Set this to `true` if Vault is migrating from an auto seal configuration. 
Otherwise, set to `false`.\n\nRefer to the [Seal Migration](\/vault\/docs\/concepts\/seal#seal-migration) documentation for more information about the seal migration process.\n\n## Authentication\n\nAuthentication-related values must be provided, either as environment\nvariables or as configuration parameters.\n\nIf you want to use Instance Principal, add section configuration below and add further configuration settings as detailed in the [configuration docs](\/vault\/docs\/configuration\/).\n\n```hcl\nseal \"ocikms\" {\n    crypto_endpoint     = \"<kms-crypto-endpoint>\"\n    management_endpoint = \"<kms-management-endpoint>\"\n    key_id              = \"<kms-key-id>\"\n}\n# Notes:\n# crypto_endpoint can be replaced by VAULT_OCIKMS_CRYPTO_ENDPOINT environment var\n# management_endpoint can be replaced by VAULT_OCIKMS_MANAGEMENT_ENDPOINT environment var\n# key_id can be replaced by VAULT_OCIKMS_SEAL_KEY_ID environment var\n```\n\nIf you want to use User Principal, the plugin will take the API key you defined for OCI SDK, often under `~\/.oci\/config`.\n\n```hcl\nseal \"ocikms\" {\n    auth_type_api_key   = true\n    crypto_endpoint     = \"<kms-crypto-endpoint>\"\n    management_endpoint = \"<kms-management-endpoint>\"\n    key_id              = \"<kms-key-id>\"\n}\n```\n\nTo grant permission for a compute instance to use OCI KMS service, write policies for KMS access.\n\n- Create a [Dynamic Group][oci-dg] in your OCI tenancy.\n- Create a policy that allows the Dynamic Group to use or manage keys from OCI KMS. There are multiple ways to write these policies. 
The [OCI Identity Policy][oci-id] can be used as a reference or starting point.\n\nThe most common policy allows a dynamic group of tenant A to use KMS's keys in tenant B:\n\n```text\ndefine tenancy tenantB as <tenantB-ocid>\n\nendorse dynamic-group <dynamic-group-name> to use keys in tenancy tenantB\n\n```\n\n```text\ndefine tenancy tenantA as <tenantA-ocid>\n\ndefine dynamic-group <dynamic-group-name> as <dynamic-group-ocid>\n\nadmit dynamic-group <dynamic-group-name> of tenancy tenantA to use keys in compartment <key-compartment>\n\n```\n\n## `ocikms` rotate OCI KMS master key\n\nFor the [OCI KMS key rotation feature][oci-kms-rotation], OCI KMS will create a new version of the key internally. This process is independent of Vault, and Vault still uses the same `key_id` without any interruption.\n\nIf you want to change the `key_id`: migrate to Shamir, change `key_id`, and then migrate to OCI KMS with the new `key_id`.\n\n[oci-sdk]: https:\/\/docs.cloud.oracle.com\/iaas\/Content\/API\/Concepts\/sdkconfig.htm\n[oci-dg]: https:\/\/docs.cloud.oracle.com\/iaas\/Content\/Identity\/Tasks\/managingdynamicgroups.htm\n[oci-id]: https:\/\/docs.cloud.oracle.com\/iaas\/Content\/Identity\/Concepts\/policies.htm\n[oci-kms-rotation]: https:\/\/docs.cloud.oracle.com\/iaas\/Content\/KeyManagement\/Tasks\/managingkeys.htm","site":"vault"}
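The OCI record above notes that each `seal "ocikms"` parameter can be replaced by an environment variable. A minimal sketch of that activation path, using the variable names from the record (the key OCID and endpoints are placeholders, not real OCI resources):

```shell
# Activate the OCI KMS seal via environment variables instead of a `seal` block.
# All values are placeholders; substitute your own key OCID and KMS endpoints.
export VAULT_SEAL_TYPE="ocikms"
export VAULT_OCIKMS_SEAL_KEY_ID="ocid1.key.oc1.iad.placeholder"
export VAULT_OCIKMS_CRYPTO_ENDPOINT="https://placeholder-crypto.kms.us-ashburn-1.oraclecloud.com"
export VAULT_OCIKMS_MANAGEMENT_ENDPOINT="https://placeholder-management.kms.us-ashburn-1.oraclecloud.com"
```

With instance principal authentication (the default, `auth_type_api_key` unset), these four variables are the only seal-specific configuration required.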
{"questions":"vault The recommended pattern and best practices for unsealing a production Vault cluster This documentation explains the concepts options and considerations for unsealing a production Vault cluster It builds on the Reference Architecture vault tutorials raft raft reference architecture and Deployment Guide vault tutorials day one raft raft deployment guide for Vault to deliver a pattern for a common Vault use case layout docs Seal best practices page title Seal best practices","answers":"---\nlayout: docs\npage_title: Seal best practices\ndescription: >-\n  The recommended pattern and best practices for unsealing a production Vault cluster.\n---\n\n# Seal best practices\n\nThis documentation explains the concepts, options, and considerations for unsealing a production Vault cluster. It builds on the [Reference Architecture](\/vault\/tutorials\/raft\/raft-reference-architecture) and [Deployment Guide](\/vault\/tutorials\/day-one-raft\/raft-deployment-guide) for Vault to deliver a pattern for a common Vault use case.\n\n## Vault unseal\n\nOnce Vault is installed and configured according to the [Deployment Guide](\/vault\/tutorials\/day-one-raft\/raft-deployment-guide), the Vault starts in a sealed state. \n\n Because Vault always starts in a sealed state, the first decision point is around your implementation strategy to handle unsealing. Unsealing is the process by which your Vault root key is used to decrypt the data encryption key that Vault uses to encrypt all data. For obvious security reasons, Vault neither keeps nor knows the root key and so this is the function of the unsealing process; to present the root key to Vault.\n\nVault Community Edition supports Shamir and cloud auto-unseal methods for most major cloud providers. 
Vault Enterprise also offers a hardware security module (HSM) unseal.\n\nThere are several considerations when deciding on an unseal strategy.\n\n<Tip>\n\nRefer to the [seal\/unseal](\/vault\/docs\/concepts\/seal) documentation to learn more about the concepts and reasoning behind Vault sealing.\n\n<\/Tip>\n\n\n## Operator overhead\n\nThe default unseal method uses [Shamir's Secret Sharing algorithm](https:\/\/en.wikipedia.org\/wiki\/Shamir%27s_Secret_Sharing) to split the key into shards so that there is never a single root key. This method relies on multiple operators (each with their own key) to be available to unseal Vault, so it may not be ideal in an Enterprise solution.\n\nIf this method is employed, the recommendation is to put additional operational processes in place, such as:\n\n- Quarterly unseal drills to make sure all operators can respond.\n- Key shards should be stored in secure locations and further encrypted using personal encryption. Vault provides for this in the [init](\/vault\/docs\/commands\/operator\/init) command with flags to PGP encrypt the unseal keys and root token.\n- Key holder key access is tied to enterprise user lifecycle management to ensure the process is responsive to staffing changes.\n\n## Cloud provider\n\nIf your Vault implementation is in a public cloud or has access to one, you may have access to a secure Key Management Service (KMS), and Vault can take advantage of this to store the root key and retrieve it from there. 
This option is easy to use but relies on access to a public cloud.\n\nConsiderations for using this method include:\n\n- **Security policy:** Does your security policy allow for secrets to be stored on a public cloud?\n- **Business continuity:** Some enterprises may have policies around vendor reliance for business continuity reasons.\n- You need to put additional security in place for the cloud provider access keys required to read the key store.\n\n## Access to an HSM\n\nIf you have access to an HSM, Vault provides a way to store and retrieve the root key using the `pkcs11` configuration block in the `seal` stanza. This method offers considerable security as all parts of the secrets management infrastructure are within business control, but there are other considerations:\n\n- Security policies must manage access to and security of the HSM.\n- You need to put additional security in place for the HSM PIN that Vault needs to access the HSM.\n\n## Cloud provider auto-unseal\n\nMajor cloud providers offer a cryptographic key management system, which Vault can use to provide the root key for the unseal operation.\n\n- [AWS KMS](\/vault\/docs\/configuration\/seal\/awskms)\n- [Azure Key Vault](\/vault\/docs\/configuration\/seal\/azurekeyvault)\n- [GCP Cloud KMS](\/vault\/docs\/configuration\/seal\/gcpckms)\n- [AliCloud KMS](\/vault\/docs\/configuration\/seal\/alicloudkms)\n- [OCI KMS](\/vault\/docs\/configuration\/seal\/ocikms)\n\nFor all of these cloud-provider methods of auto-unseal, the high-level principles are the same. Rather than having a root key protected by splitting it into shards and then distributing them securely, the root key is generated and stored in the cloud-provider key management platform offering. 
Vault is configured to retrieve this key on startup and unseal automatically.\n\nUsing a cloud provider to auto-unseal has security implications around the trust of the provider.\n\n### Use AWS KMS for auto-unseal\n\nWhen using AWS, the AWS Key Management Service can store and provide the root key to the Vault cluster at startup. There are two steps:\n\n1. Generate an AWS KMS key\n2. Configure Vault to be able to read and decrypt this key\n\nFor the first step, you can generate an AWS KMS key in whatever way you usually provision your AWS infrastructure and services. The result is a key that has a key ID. Best practice would only allow minimal access to administer this key and only allow Vault to access it. You should also enable CloudTrail audit logs for the KMS key and validate that nothing else tries to access this key. With AWS KMS you can generate the key with your own key material or allow AWS to provide this (the default method).\n\nThe second step is to add the key ID and the AWS region of the key into the Vault configuration file inside the `seal` stanza. Vault will look for keys in `us-east-1` by default, but it is good practice to add the region key\/value even if the key is in this region.\n\nAs access to your KMS keys is limited by default, you will need to allow Vault to access it, and you can do this with Instance Profiles on the EC2 instances that run Vault. Best practice is to run Vault on its own instances and not co-host other services. The Instance Profile will need a role with a policy to `kms:Encrypt`, `kms:Decrypt` and `kms:DescribeKey`. 
Tie this policy to the single encryption key `arn` in the Policy JSON's Resources section.\n\n```json\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\"kms:Encrypt\", \"kms:Decrypt\", \"kms:DescribeKey\"],\n      \"Resource\": [\"${kms_arn}\"]\n    }\n  ]\n}\n```\n\n## Use HSM to unseal Vault\n\nIt is also possible to unseal Vault using either a hardware HSM or a cloud KMS. The unseal method is similar to using a KMS, which is documented in the [HSM section](\/vault\/docs\/configuration\/seal\/pkcs11) of the unseal docs. You can see a useful walkthrough video of this process by [SafenetAT](https:\/\/www.youtube.com\/watch?v=3LyWfN9fWFE). You can also watch this [HashiCorp training video](https:\/\/www.hashicorp.com\/resources\/hashicorp-and-aws-integrating-cloudhsm-with-vault-e) on using AWS HSM.\nAWS also provides a high-level guide on [Securing and Managing secrets with HashiCorp Vault](https:\/\/aws.amazon.com\/blogs\/apn\/securing-and-managing-secrets-with-hashicorp-vault-enterprise\/) with its offering of both CloudHSM and KMS.\n\n## Recovery keys\n\nWhen Vault is initialized while using HSM or cloud-provided external keys for sealing, it returns several recovery keys. These are still required for highly privileged actions, such as generating new root keys. 
A section in the HSM docs addresses [recovery keys](\/vault\/docs\/enterprise\/hsm\/behavior#recovery-key).\n\n## Changing seal method\n\nYou can change the seal method using [seal migration](\/vault\/docs\/concepts\/seal#seal-migration).\n\n\n## Reference material\n\n- [Reference Architecture](\/vault\/tutorials\/raft\/raft-reference-architecture) covers the recommended production Vault cluster architecture\n- [Deployment Guide](\/vault\/tutorials\/day-one-raft\/raft-deployment-guide) covers how to install and configure Vault for production use\n- Recommended Pattern - Vault Centralized Secrets Management\n- [K\/V Secrets Engine](\/vault\/docs\/secrets\/kv) is used to store static secrets within the configured physical storage for Vault\n- [Auth Methods](\/vault\/docs\/auth) are used to authenticate users and machines with Vault\n- [Auto unseal tutorials](\/vault\/tutorials\/auto-unseal)\n- [Consul Template](https:\/\/github.com\/hashicorp\/consul-template) is used to access static secrets stored in Vault and provide them to the applications and services that require them.","site":"vault","answers_cleaned":"    layout  docs page title  Seal best practices description       The recommended pattern and best practices for unsealing a production Vault cluster         Seal best practices  This documentation explains the concepts  options  and considerations for unsealing a production Vault cluster  It builds on the  Reference Architecture   vault tutorials raft raft reference architecture  and  Deployment Guide   vault tutorials day one raft raft deployment guide  for Vault to deliver a pattern for a common Vault use case      Vault unseal  Once Vault is installed and configured according to the  Deployment Guide   vault tutorials day one raft raft deployment guide   the Vault starts in a sealed state     Because Vault always starts in a sealed state  the first decision point is around your implementation strategy to handle unsealing  Unsealing is the process by which 
your Vault root key is used to decrypt the data encryption key that Vault uses to encrypt all data  For obvious security reasons  Vault neither keeps nor knows the root key and so this is the function of the unsealing process  to present the root key to Vault   Vault Community Edition supports Shamir and cloud auto unseal methods for most major cloud providers  Vault Enterprise also offers an hardware security module  HSM  unseal   There are several considerations when deciding on an unseal strategy    Tip   Refer to the  seal unseal   vault docs concepts seal  documentation to learn more about the concepts and reasoning behind Vault sealing     Tip       Operator overhead  The default method for unseal uses  Shamir s Secret Sharing algorithm  https   en wikipedia org wiki Shamir 27s Secret Sharing  to split the key into shards so that there is never a single root key  This method relies on multiple operators  each with their own key  to be available to unseal Vault  so it may not be ideal in an Enterprise solution   If this method is employed  the recommendation is to put additional operational processes in place  such as     Quarterly unseal drills to make sure all operators can respond    Key shards should be stored in secure locations and further encrypted using personal encryption  Vault provides for this in the  init   vault docs commands operator init  command with flags to PGP encrypt the unseal keys and root token    Key holder key access is tied to enterprise user lifecycle management to ensure the process is responsive to staffing changes      Cloud provider  If your Vault implementation is in a public cloud or has access to one  you may have access to a secure Key Management Service  KMS   and Vault can take advantage of this to store the root key and retrieve it from there  This option is easy to use but relies on access to a public cloud   Considerations for using this method include       Security policy    Does your security policy allow for secrets 
to be stored on a public cloud      Business continuity    Some enterprises may have policies around vendor reliance for business continuity reasons    You need to put additional security in place for the cloud provider access keys required to read the key store      Access to an HSM  If you have access to an HSM  Vault provides a way to store and retrieve the root key using the  pkcs11  configuration block in the  seal  stanza  This method offers considerable security as all parts of the secrets management infrastructure are within business control  but there are other considerations     Security policies must manage access to and security of the HSM    You need to put additional security in place for the HSM PIN that Vault needs to access the HSM      Cloud provider auto unseal  Major cloud providers offer a cryptographic key management system  which Vault can use to provide the root key for the unseal operation      AWS KMS   vault docs configuration seal awskms     Azure Key Vault   vault docs configuration seal azurekeyvault     GCP Cloud KMS   vault docs configuration seal gcpckms     AliCloud KMS   vault docs configuration seal alicloudkms     OCI KMS   vault docs configuration seal ocikms   For all of these cloud provider methods of auto unseal  the high level principles are the same  Rather than having a root key protected by splitting it into shards and then distributing them securely  the root key is generated and stored in the cloud provider key management platform offering  Vault is configured to retrieve this key on startup and unseal automatically   Using a cloud provider to auto unseal has security implications around the trust of the provider       Use AWS KMS for auto unseal  When using AWS  the AWS Key Management Service can store and provide the root key to the startup s Vault cluster  There are two steps   1  Generate an AWS KMS key 2  Configure Vault to be able to read and un encrypt this key  For the first step  you can generate an AWS KMS 
key in whatever way you usually provision your AWS infrastructure and services  The result is a key that has a key ID  Best practice would only allow minimal access to administer this key and only allow Vault to access it  You should also enable CloudTrail audit logs for the KMS key and validate that nothing else tries to access this key  With AWS KMS you can generate the key with your own key material or allow AWS to provide this  the default method    The second step is to add the key ID and the AWS region of the key into the Vault configuration file inside the  seal  stanza  Vault will look for keys in  us east 1  by default  but it is good practice to add the region key value even if the key is in this region   As access to your KMS keys is limited by default  you will need to allow Vault to access it  and you can do this with Instance Profiles on the EC2 instances that run Vault  Best practice is to run Vault on its own instances and not co host other services  The Instance Profile will need a role with a policy to  kms Encrypt    kms Decrypt  and  kms DescribeKey   Tie this policy to the single encryption key  arn  in the Policy JSON s Resources section      json      Version    2012 10 17      Statement                  Effect    Allow          Action     kms Encrypt    kms Decrypt    kms DescribeKey           Resource       kms arn                        Use HSM to unseal Vault  It is also possible to unseal Vault using either a hardware HSM or a cloud KMS  The unseal method is similar to using a KMS  which is documented in the  HSM section   vault docs configuration seal pkcs11  of the unseal docs  You can see a useful walkthrough video of this process by  SafenetAT  https   www youtube com watch v 3LyWfN9fWFE   You can also watch this  HashiCorp training video  https   www hashicorp com resources hashicorp and aws integrating cloudhsm with vault e  on using AWS HSM  AWS also provides a high level guide on  Securing and Managing secrets with HashiCorp 
Vault  https   aws amazon com blogs apn securing and managing secrets with hashicorp vault enterprise   with its offering of both CloudHSM and KMS      Recovery keys  When Vault is initialized while using HSM or cloud provided external keys for sealing  it returns several recovery keys  These are still required for highly privileged actions  such as generating new root keys  A section in the HSM docs addresses  recovery keys   vault docs enterprise hsm behavior recovery key       Changing seal method  You can change the seal method using  seal migration   vault docs concepts seal seal migration        Reference material     Reference Architecture   vault tutorials raft raft reference architecture  covers the recommended production Vault cluster architecture    Deployment Guide   vault tutorials day one raft raft deployment guide  covers how to install and configure Vault for production use   Recommended Pattern   Vault Centralized Secrets Management    K V Secrets Engine   vault docs secrets kv  is used to store static secrets within the configured physical storage for Vault    Auth Methods   vault docs auth  are used to authenticate users and machines with Vault    Auto unseal tutorials   vault tutorials auto unseal     Consul Template  https   github com hashicorp consul template  is used to access static secrets stored in Vault and provide them to the applications and services that require them "}
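The record above describes Vault's default Shamir unseal, where the root key is split into shards so no single operator ever holds it. As a minimal, illustrative sketch of that idea (not Vault's implementation, which splits the key bytes over GF(2^8)), a threshold split and recombination over a prime field looks like:

```python
# Illustrative Shamir's Secret Sharing sketch: split a secret into n shares
# such that any k of them recover it. Hypothetical toy code, NOT Vault's
# implementation (Vault operates on key bytes over GF(2^8)).
import random

PRIME = 2**127 - 1  # field modulus; must exceed any secret value

def split(secret: int, shares: int, threshold: int):
    """Evaluate a random degree-(threshold-1) polynomial, constant term = secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def eval_poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, eval_poly(x)) for x in range(1, shares + 1)]

def combine(points):
    """Recover the constant term via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 0xC0FFEE
shards = split(key, shares=5, threshold=3)
assert combine(shards[:3]) == key  # any 3 of 5 shards suffice
assert combine(shards[2:]) == key
```

Any `threshold` shards reconstruct the key while fewer reveal nothing about it, which is what makes the shard-custody practices in the record (PGP-encrypted shards, quarterly unseal drills) meaningful.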
{"questions":"vault page title AWS KMS Seals Configuration mechanism awskms seal The AWS KMS seal configures Vault to use AWS KMS as the seal wrapping layout docs","answers":"---\nlayout: docs\npage_title: AWS KMS - Seals - Configuration\ndescription: |-\n  The AWS KMS seal configures Vault to use AWS KMS as the seal wrapping\n  mechanism.\n---\n\n# `awskms` seal\n\n<Note title=\"Seal wrapping requires Vault Enterprise\">\n\n  All Vault versions support **auto-unseal** for AWS, but **seal wrapping**\n  requires Vault Enterprise.\n  \n  Vault Enterprise enables seal wrapping by default, which means the KMS service\n  must be available at runtime and not just during the unseal process. Refer to\n  the [Seal wrap](\/vault\/docs\/enterprise\/sealwrap) overview for more\n  information.\n\n<\/Note>\n\nThe AWS KMS seal configures Vault to use AWS KMS as the seal wrapping mechanism.\nThe AWS KMS seal is activated by one of the following:\n\n- The presence of a `seal \"awskms\"` block in Vault's configuration file\n- The presence of the environment variable `VAULT_SEAL_TYPE` set to `awskms`. If\n  enabling via environment variable, all other required values specific to AWS\n  KMS (i.e. `VAULT_AWSKMS_SEAL_KEY_ID`) must also be supplied, as well as all\n  other AWS-related environment variables that lend to successful\n  authentication (i.e. 
`AWS_ACCESS_KEY_ID`, etc.).\n\n## `awskms` example\n\nThis example shows configuring AWS KMS seal through the Vault configuration file\nby providing all the required values:\n\n```hcl\nseal \"awskms\" {\n  region     = \"us-east-1\"\n  access_key = \"AKIAIOSFODNN7EXAMPLE\"\n  secret_key = \"wJalrXUtnFEMI\/K7MDENG\/bPxRfiCYEXAMPLEKEY\"\n  kms_key_id = \"19ec80b0-dfdd-4d97-8164-c6examplekey\"\n  endpoint   = \"https:\/\/vpce-0e1bb1852241f8cc6-pzi0do8n.kms.us-east-1.vpce.amazonaws.com\"\n}\n```\n\n## `awskms` parameters\n\nThese parameters apply to the `seal` stanza in the Vault configuration file:\n\n- `region` `(string: \"us-east-1\")`: The AWS region where the encryption key\n  lives. If not provided, may be populated from the `AWS_REGION` or\n  `AWS_DEFAULT_REGION` environment variables, from your `~\/.aws\/config` file,\n  or from instance metadata.\n\n- `access_key` `(string: <required>)`: The AWS access key ID to use. May also be\n  specified by the `AWS_ACCESS_KEY_ID` environment variable or as part of the\n  AWS profile from the AWS CLI or instance profile.\n\n- `session_token` `(string: \"\")`: Specifies the AWS session token. This can\n  also be provided via the environment variable `AWS_SESSION_TOKEN`.\n\n- `secret_key` `(string: <required>)`: The AWS secret access key to use. May\n  also be specified by the `AWS_SECRET_ACCESS_KEY` environment variable or as\n  part of the AWS profile from the AWS CLI or instance profile.\n\n- `kms_key_id` `(string: <required>)`: The AWS KMS key ID or ARN to use for encryption\n  and decryption. May also be specified by the `VAULT_AWSKMS_SEAL_KEY_ID`\n  environment variable.  An alias in the format `alias\/key-alias-name` may also be used here.\n\n- `disabled` `(string: \"\")`: Set this to `true` if Vault is migrating from an auto seal configuration. Otherwise, set to `false`.\n\n- `endpoint` `(string: \"\")`: The KMS API endpoint to be used to make AWS KMS\n  requests. 
May also be specified by the `AWS_KMS_ENDPOINT` environment\n  variable. This is useful, for example, when connecting to KMS over a [VPC\n  Endpoint](https:\/\/docs.aws.amazon.com\/kms\/latest\/developerguide\/kms-vpc-endpoint.html).\n  If not set, Vault will use the default API endpoint for your region.\n\nRefer to the [Seal Migration](\/vault\/docs\/concepts\/seal#seal-migration) documentation for more information about the seal migration process.\n\n## Authentication\n\nAuthentication-related values must be provided, either as environment\nvariables or as configuration parameters.\n\n~> **Note:** Although the configuration file allows you to pass in\n`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as part of the seal's parameters, it\nis _strongly_ recommended to set these values via environment variables.\n\nAWS authentication values:\n\n- `AWS_REGION` or `AWS_DEFAULT_REGION`\n- `AWS_ACCESS_KEY_ID`\n- `AWS_SECRET_ACCESS_KEY`\n\nNote: The client uses the official AWS SDK and will use the specified\ncredentials, environment credentials, shared file credentials, or IAM role\/ECS\ntask credentials in that order, if the above AWS specific values are not\nprovided.\n\nVault needs the following permissions on the KMS key:\n\n- `kms:Encrypt`\n- `kms:Decrypt`\n- `kms:DescribeKey`\n\nThese can be granted via IAM permissions on the principal that Vault uses, on\nthe KMS key policy for the KMS key, or via KMS Grants on the key.\n\n## `awskms` environment variables\n\nAlternatively, the AWS KMS seal can be activated by providing the following\nenvironment variables.\n\nVault Seal specific values:\n\n- `VAULT_SEAL_TYPE`\n- `VAULT_AWSKMS_SEAL_KEY_ID`\n\n## Key rotation\n\nThis seal supports rotating the root keys defined in AWS KMS\n[doc](https:\/\/docs.aws.amazon.com\/kms\/latest\/developerguide\/rotate-keys.html). Both automatic\nrotation and manual rotation are supported for KMS since the key information is stored with the\nencrypted data. 
Old keys must not be disabled or deleted and are used to decrypt older data.\nAny new or updated data will be encrypted with the current key defined in the seal configuration\nor set to current under a key alias.\n\n## AWS instance metadata timeout\n\n@include 'aws-imds-timeout.mdx'\n\n## Tutorial\n\nRefer to the [Auto-unseal using AWS KMS](\/vault\/tutorials\/auto-unseal\/autounseal-aws-kms)\ntutorial to learn how to auto-unseal Vault using AWS KMS.","site":"vault","answers_cleaned":"    layout  docs page title  AWS KMS   Seals   Configuration description       The AWS KMS seal configures Vault to use AWS KMS as the seal wrapping   mechanism          awskms  seal   Note title  Seal wrapping requires Vault Enterprise      All Vault versions support   auto unseal   for AWS  but   seal wrapping     requires Vault Enterprise       Vault Enterprise enables seal wrapping by default  which means the KMS service   must be available at runtime and not just during the unseal process  Refer to   the  Seal wrap   vault docs enterprise sealwrap  overview for more   information     Note   The AWS KMS seal configures Vault to use AWS KMS as the seal wrapping mechanism  The AWS KMS seal is activated by one of the following     The presence of a  seal  awskms   block in Vault s configuration file   The presence of the environment variable  VAULT SEAL TYPE  set to  awskms   If   enabling via environment variable  all other required values specific to AWS   KMS  i e   VAULT AWSKMS SEAL KEY ID   must be also supplied  as well as all   other AWS related environment variables that lends to successful   authentication  i e   AWS ACCESS KEY ID   etc         awskms  example  This example shows configuring AWS KMS seal through the Vault configuration file by providing all the required values      hcl seal  awskms      region        us east 1    access key    AKIAIOSFODNN7EXAMPLE    secret key    wJalrXUtnFEMI K7MDENG bPxRfiCYEXAMPLEKEY    kms key id    19ec80b0 dfdd 4d97 8164 c6examplekey  
  endpoint      https   vpce 0e1bb1852241f8cc6 pzi0do8n kms us east 1 vpce amazonaws com             awskms  parameters  These parameters apply to the  seal  stanza in the Vault configuration file      region    string   us east 1     The AWS region where the encryption key   lives  If not provided  may be populated from the  AWS REGION  or    AWS DEFAULT REGION  environment variables  from your     aws config  file    or from instance metadata      access key    string   required     The AWS access key ID to use  May also be   specified by the  AWS ACCESS KEY ID  environment variable or as part of the   AWS profile from the AWS CLI or instance profile      session token    string        Specifies the AWS session token  This can   also be provided via the environment variable  AWS SESSION TOKEN       secret key    string   required     The AWS secret access key to use  May   also be specified by the  AWS SECRET ACCESS KEY  environment variable or as   part of the AWS profile from the AWS CLI or instance profile      kms key id    string   required     The AWS KMS key ID or ARN to use for encryption   and decryption  May also be specified by the  VAULT AWSKMS SEAL KEY ID    environment variable   An alias in the format  alias key alias name  may also be used here      disabled    string        Set this to  true  if Vault is migrating from an auto seal configuration  Otherwise  set to  false       endpoint    string        The KMS API endpoint to be used to make AWS KMS   requests  May also be specified by the  AWS KMS ENDPOINT  environment   variable  This is useful  for example  when connecting to KMS over a  VPC   Endpoint  https   docs aws amazon com kms latest developerguide kms vpc endpoint html     If not set  Vault will use the default API endpoint for your region   Refer to the  Seal Migration   vault docs concepts seal seal migration  documentation for more information about the seal migration process      Authentication  Authentication related values must 
be provided  either as environment variables or as configuration parameters        Note    Although the configuration file allows you to pass in  AWS ACCESS KEY ID  and  AWS SECRET ACCESS KEY  as part of the seal s parameters  it is  strongly  recommended to set these values via environment variables   AWS authentication values      AWS REGION  or  AWS DEFAULT REGION     AWS ACCESS KEY ID     AWS SECRET ACCESS KEY   Note  The client uses the official AWS SDK and will use the specified credentials  environment credentials  shared file credentials  or IAM role ECS task credentials in that order  if the above AWS specific values are not provided   Vault needs the following permissions on the KMS key      kms Encrypt     kms Decrypt     kms DescribeKey   These can be granted via IAM permissions on the principal that Vault uses  on the KMS key policy for the KMS key  or via KMS Grants on the key       awskms  environment variables  Alternatively  the AWS KMS seal can be activated by providing the following environment variables   Vault Seal specific values      VAULT SEAL TYPE     VAULT AWSKMS SEAL KEY ID      Key rotation  This seal supports rotating the root keys defined in AWS KMS  doc  https   docs aws amazon com kms latest developerguide rotate keys html   Both automatic rotation and manual rotation is supported for KMS since the key information is stored with the encrypted data  Old keys must not be disabled or deleted and are used to decrypt older data  Any new or updated data will be encrypted with the current key defined in the seal configuration or set to current under a key alias      AWS instance metadata timeout   include  aws imds timeout mdx      Tutorial  Refer to the  Auto unseal using AWS KMS   vault tutorials auto unseal autounseal aws kms  tutorial to learn how to auto unseal Vault using AWS KMS "}
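The awskms record above lists the three KMS actions Vault needs (`kms:Encrypt`, `kms:Decrypt`, `kms:DescribeKey`), scoped to the single seal key. A small sketch that emits such a minimal IAM policy document (the key ARN and helper name are placeholders, not values from the record):

```python
# Sketch: build the minimal IAM policy granting Vault the three KMS actions
# the awskms seal requires, limited to one key ARN. `vault_kms_policy` is a
# hypothetical helper; the ARN passed below is a placeholder.
import json

def vault_kms_policy(key_arn: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:DescribeKey"],
            "Resource": key_arn,  # restrict to the single seal key
        }],
    }
    return json.dumps(policy, indent=2)

print(vault_kms_policy(
    "arn:aws:kms:us-east-1:111122223333:key/19ec80b0-dfdd-4d97-8164-c6examplekey"))
```

Attaching the resulting policy to the principal Vault runs as (or expressing the same grant in the KMS key policy) is one way to satisfy the permission list in the record.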
{"questions":"vault The Azure Key Vault seal configures Vault to use Azure Key Vault as the seal mechanism wrapping layout docs page title Azure Key Vault Seals Configuration","answers":"---\nlayout: docs\npage_title: Azure Key Vault - Seals - Configuration\ndescription: >-\n  The Azure Key Vault seal configures Vault to use Azure Key Vault as the seal\n  wrapping\n\n  mechanism.\n---\n\n# `azurekeyvault` seal\n\n<Note title=\"Seal wrapping requires Vault Enterprise\">\n\n  All Vault versions support **auto-unseal** for Azure Key Vault, but\n  **seal wrapping** requires Vault Enterprise.\n  \n  Vault Enterprise enables seal wrapping by default, which means the KMS service\n  must be available at runtime and not just during the unseal process. Refer to\n  the [Seal wrap](\/vault\/docs\/enterprise\/sealwrap) overview for more\n  information.\n\n<\/Note>\n\nThe Azure Key Vault seal configures Vault to use Azure Key Vault as the seal\nwrapping mechanism. The Azure Key Vault seal is activated by one of the following:\n\n- The presence of a `seal \"azurekeyvault\"` block in Vault's configuration file.\n- The presence of the environment variable `VAULT_SEAL_TYPE` set to `azurekeyvault`.\n  If enabling via environment variable, all other required values specific to\n  Key Vault (i.e. `VAULT_AZUREKEYVAULT_VAULT_NAME`, etc.) must also be supplied, as\n  well as all other Azure-related environment variables that lend to successful\n  authentication (i.e. 
`AZURE_TENANT_ID`, etc.).\n\n## `azurekeyvault` example\n\nThis example shows configuring Azure Key Vault seal through the Vault\nconfiguration file by providing all the required values:\n\n```hcl\nseal \"azurekeyvault\" {\n  tenant_id      = \"46646709-b63e-4747-be42-516edeaf1e14\"\n  client_id      = \"03dc33fc-16d9-4b77-8152-3ec568f8af6e\"\n  client_secret  = \"DUJDS3...\"\n  vault_name     = \"hc-vault\"\n  key_name       = \"vault_key\"\n}\n```\n\n## `azurekeyvault` parameters\n\nThese parameters apply to the `seal` stanza in the Vault configuration file:\n\n- `tenant_id` `(string: <required>)`: The tenant id for the Azure Active Directory organization. May\n  also be specified by the `AZURE_TENANT_ID` environment variable.\n\n- `client_id` `(string: <required or MSI>)`: The client id for credentials to query the Azure APIs.\n  May also be specified by the `AZURE_CLIENT_ID` environment variable.\n\n- `client_secret` `(string: <required or MSI>)`: The client secret for credentials to query the Azure APIs.\n  May also be specified by the `AZURE_CLIENT_SECRET` environment variable.\n\n- `environment` `(string: \"AZUREPUBLICCLOUD\")`: The Azure Cloud environment API endpoints to use. May also\n  be specified by the `AZURE_ENVIRONMENT` environment variable.\n\n- `vault_name` `(string: <required>)`: The Key Vault vault to use the encryption keys for encryption and\n  decryption. May also be specified by the `VAULT_AZUREKEYVAULT_VAULT_NAME` environment variable.\n\n- `key_name` `(string: <required>)`: The Key Vault key to use for encryption and decryption. 
May also be specified by the\n  `VAULT_AZUREKEYVAULT_KEY_NAME` environment variable.\n\n- `resource` `(string: \"vault.azure.net\")`: The AZ KeyVault resource's DNS Suffix to connect to.\n  May also be specified in the `AZURE_AD_RESOURCE` environment variable.\n  Needs to be changed to connect to Azure's Managed HSM KeyVault instance type.\n\n- `disabled` `(string: \"\")`: Set this to `true` if Vault is migrating from an auto seal configuration. Otherwise, set to `false`.\n\nRefer to the [Seal Migration](\/vault\/docs\/concepts\/seal#seal-migration) documentation for more information about the seal migration process.\n\n\n## Authentication\n\nAuthentication-related values must be provided, either as environment\nvariables or as configuration parameters.\n\nAzure authentication values:\n\n- `AZURE_TENANT_ID`\n- `AZURE_CLIENT_ID`\n- `AZURE_CLIENT_SECRET`\n- `AZURE_ENVIRONMENT`\n- `AZURE_AD_RESOURCE`\n\n~> **Note:** If Vault is hosted on Azure, Vault can use Managed Service\nIdentities (MSI) to access Azure instead of an environment and shared client id\nand secret. MSI must be\n[enabled](https:\/\/docs.microsoft.com\/en-us\/azure\/active-directory\/managed-service-identity\/qs-configure-portal-windows-vm)\non the VMs hosting Vault, and it is the preferred configuration since MSI\nprevents your Azure credentials from being stored as clear text. 
Refer to the\n[Production\nHardening](\/vault\/tutorials\/operations\/production-hardening) tutorial\nfor more best practices.\n\n-> **Note:** If you are using a Managed HSM KeyVault, `AZURE_AD_RESOURCE` or the `resource`\nconfiguration parameter must be specified; usually this should point to `managedhsm.azure.net`,\nbut could point to other suffixes depending on Azure environment.\n\n## `azurekeyvault` environment variables\n\nAlternatively, the Azure Key Vault seal can be activated by providing the following\nenvironment variables:\n\n- `VAULT_AZUREKEYVAULT_VAULT_NAME`\n- `VAULT_AZUREKEYVAULT_KEY_NAME`\n\n## Key rotation\n\nThis seal supports rotating keys defined in Azure Key Vault. Key metadata is\nstored with the encrypted data to ensure the correct key is used during\ndecryption operations. Simply [set up Azure Key Vault with key\nrotation](https:\/\/docs.microsoft.com\/en-us\/azure\/key-vault\/key-vault-key-rotation-log-monitoring)\nusing Azure Automation Account and Vault will recognize newly rotated keys.\n\n## Tutorial\n\nRefer to the [Auto-unseal using Azure Key Vault](\/vault\/tutorials\/auto-unseal\/autounseal-azure-keyvault)\ntutorial to learn how to use the Azure Key Vault to auto-unseal a Vault server.","site":"vault","answers_cleaned":"    layout  docs page title  Azure Key Vault   Seals   Configuration description       The Azure Key Vault seal configures Vault to use Azure Key Vault as the seal   wrapping    mechanism          azurekeyvault  seal   Note title  Seal wrapping requires Vault Enterprise      All Vault versions support   auto unseal   for Azure Key Vault  but     seal wrapping   requires Vault Enterprise       Vault Enterprise enables seal wrapping by default  which means the KMS service   must be available at runtime and not just during the unseal process  Refer to   the  Seal wrap   vault docs enterprise sealwrap  overview for more   information     Note   The Azure Key Vault seal configures Vault to use Azure Key Vault as the seal 
wrapping mechanism  The Azure Key Vault seal is activated by one of the following     The presence of a  seal  azurekeyvault   block in Vault s configuration file    The presence of the environment variable  VAULT SEAL TYPE  set to  azurekeyvault     If enabling via environment variable  all other required values specific to   Key Vault  i e   VAULT AZUREKEYVAULT VAULT NAME   etc   must be also supplied  as   well as all other Azure related environment variables that lends to successful   authentication  i e   AZURE TENANT ID   etc         azurekeyvault  example  This example shows configuring Azure Key Vault seal through the Vault configuration file by providing all the required values      hcl seal  azurekeyvault      tenant id         46646709 b63e 4747 be42 516edeaf1e14    client id         03dc33fc 16d9 4b77 8152 3ec568f8af6e    client secret     DUJDS3       vault name        hc vault    key name          vault key             azurekeyvault  parameters  These parameters apply to the  seal  stanza in the Vault configuration file      tenant id    string   required     The tenant id for the Azure Active Directory organization  May   also be specified by the  AZURE TENANT ID  environment variable      client id    string   required or MSI     The client id for credentials to query the Azure APIs    May also be specified by the  AZURE CLIENT ID  environment variable      client secret    string   required or MSI     The client secret for credentials to query the Azure APIs    May also be specified by the  AZURE CLIENT SECRET  environment variable      environment    string   AZUREPUBLICCLOUD     The Azure Cloud environment API endpoints to use  May also   be specified by the  AZURE ENVIRONMENT  environment variable      vault name    string   required     The Key Vault vault to use the encryption keys for encryption and   decryption  May also be specified by the  VAULT AZUREKEYVAULT VAULT NAME  environment variable      key name    string   required     The Key 
Vault key to use for encryption and decryption  May also be specified by the    VAULT AZUREKEYVAULT KEY NAME  environment variable      resource    string   vault azure net     The AZ KeyVault resource s DNS Suffix to connect to    May also be specified in the  AZURE AD RESOURCE  environment variable    Needs to be changed to connect to Azure s Managed HSM KeyVault instance type      disabled    string        Set this to  true  if Vault is migrating from an auto seal configuration  Otherwise  set to  false    Refer to the  Seal Migration   vault docs concepts seal seal migration  documentation for more information about the seal migration process       Authentication  Authentication related values must be provided  either as environment variables or as configuration parameters   Azure authentication values      AZURE TENANT ID     AZURE CLIENT ID     AZURE CLIENT SECRET     AZURE ENVIRONMENT     AZURE AD RESOURCE        Note    If Vault is hosted on Azure  Vault can use Managed Service Identities  MSI  to access Azure instead of an environment and shared client id and secret  MSI must be  enabled  https   docs microsoft com en us azure active directory managed service identity qs configure portal windows vm  on the VMs hosting Vault  and it is the preferred configuration since MSI prevents your Azure credentials from being stored as clear text  Refer to the  Production Hardening   vault tutorials operations production hardening  tutorial for more best practices        Note    If you are using a Managed HSM KeyVault   AZURE AD RESOURCE  or the  resource  configuration parameter must be specified  usually this should point to  managedhsm azure net   but could point to other suffixes depending on Azure environment       azurekeyvault  environment variables  Alternatively  the Azure Key Vault seal can be activated by providing the following environment variables      VAULT AZUREKEYVAULT VAULT NAME     VAULT AZUREKEYVAULT KEY NAME      Key rotation  This seal supports 
rotating keys defined in Azure Key Vault  Key metadata is stored with the encrypted data to ensure the correct key is used during decryption operations  Simply  set up Azure Key Vault with key rotation  https   docs microsoft com en us azure key vault key vault key rotation log monitoring  using Azure Automation Account and Vault will recognize newly rotated keys      Tutorial  Refer to the  Auto unseal using Azure Key Vault   vault tutorials auto unseal autounseal azure keyvault  tutorial to learn how to use the Azure Key Vault to auto unseal a Vault server "}
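Both seal records above note that each stanza parameter may instead come from a documented environment variable (for example `vault_name` versus `VAULT_AZUREKEYVAULT_VAULT_NAME`). A hypothetical sketch of that config-then-environment fallback (`resolve_param` and its mapping are illustrative, not Vault's code):

```python
# Sketch of the config-or-environment fallback the seal stanzas describe:
# each parameter may come from the stanza or a documented env var.
# resolve_param and ENV_FALLBACKS are hypothetical, not Vault internals.
import os

ENV_FALLBACKS = {
    "vault_name": "VAULT_AZUREKEYVAULT_VAULT_NAME",
    "key_name": "VAULT_AZUREKEYVAULT_KEY_NAME",
    "tenant_id": "AZURE_TENANT_ID",
}

def resolve_param(stanza: dict, key: str) -> str:
    """Prefer the config-file value, then the documented environment variable."""
    if key in stanza:
        return stanza[key]
    env = ENV_FALLBACKS.get(key)
    if env and env in os.environ:
        return os.environ[env]
    raise KeyError(f"required seal parameter {key!r} not set")

os.environ["VAULT_AZUREKEYVAULT_KEY_NAME"] = "vault_key"
stanza = {"vault_name": "hc-vault"}
assert resolve_param(stanza, "vault_name") == "hc-vault"  # from config
assert resolve_param(stanza, "key_name") == "vault_key"   # from env var
```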
{"questions":"vault How to configure multiple Seals for high availability Seal High Availability include alerts enterprise only mdx layout docs page title Seal High Availability Seals Configuration","answers":"---\nlayout: docs\npage_title: Seal High Availability - Seals - Configuration\ndescription: |-\n  How to configure multiple Seals for high availability.\n---\n\n# Seal High Availability\n\n@include 'alerts\/enterprise-only.mdx'\n\n[Seal High Availability](\/vault\/docs\/concepts\/seal#seal-high-availability-enterprise)\nprovides the means to configure at least two auto-seals (and no more than three)\nin order to have resilience against outage of a seal service or mechanism. \nShamir seals cannot be used in a Seal HA setup.\n\nUsing Seal HA involves configuring extra seals in Vault's server configuration file\nand restarting Vault or triggering a reload of its configuration by sending\nit the SIGHUP signal.\n\nBefore using Seal HA, one must have upgraded to Vault 1.16 or higher.  Seal HA is enabled\nby adding the following line to Vault's configuration:\n\n```\nenable_multiseal = true\n```\n\n## Adding and Removing Seals\n\nIn order to use Seal HA, there must be more than one defined [`seal` stanza](\/vault\/docs\/configuration\/seal)\nin Vault's configuration.\n\nSeal HA adds two fields to these stanzas, `name`, and `priority`:\n\n```hcl\nseal [TYPE] {\n  name = \"seal_name\"\n  priority = \"1\"\n  # ...\n}\n```\n\nName is optional, and if not specified is set to the type of the seal.  Names\nmust be unique.  If using two seals of the same type, name must be specified.\nInternally, name is used to disambiguate seal wrapped values in some cases,\nso renaming seals should be avoided if possible.  Many seal types can use\nenvironment variables instead of configuration lines to provide sensitive\nvalues.  Because there may be two seals of the same type, one must\ndisambiguate the environment variables used.  
To do this, in HA setups,\nappend an underscore followed by the seal's configured name (matching its\ncase) to any environment variable names.  For example, in the sample\nconfiguration below, the AWS access key could be provided as `ACCESS_KEY_aws_east`.\nKeep in mind that the seal name must be valid in an environment variable name to use it.\n\nPriority is mandatory if more than one seal is specified.  Priority tells Vault\nthe order in which to try seals during unseal (least priority first),\nin the case more than one seal can unwrap a seal wrapped value, the order\nin which to attempt decryption, and which order to attempt to source entropy\nfor entropy augmentation.  This can be useful if your seals have different\nperformance or cost characteristics.\n\nHere is a hypothetical configuration for an [AWS seal](\/vault\/docs\/configuration\/seal\/awskms)\ncompatible with Seal HA:\n\n```hcl\nseal \"awskms\" {\n  name       = \"aws_east\"\n  priority   = \"1\"\n\n  region     = \"us-east-1\"\n  access_key = \"AKIAIOSFODNN7EXAMPLE\"\n  secret_key = \"wJalrXUtnFEMI\/K7MDENG\/bPxRfiCYEXAMPLEKEY\"\n  kms_key_id = \"19ec80b0-dfdd-4d97-8164-c6examplekey\"\n}\n```\n\nAll configured, healthy seals are used to seal wrap values.  This means that\nfor every write of a seal wrapped value or CSP, an encryption is requested\nfrom every configured seal, and the results are stored in the storage entry.\nWhen seals are unhealthy, Vault keeps track of values that could not be fully\nwrapped and will re-wrap them once seals become healthy again. Note, however,\nthat it is not possible to rotate the data encryption key nor the recovery keys\nwhile seals are unavailable. 
Disabled seals can still be used for decryption\nof wrapped values, but will be avoided when encrypting values.\n\nWhen reading a CSP or seal wrapped value, Vault will try to decrypt with the\nhighest priority available seal, and then try other seals on failure.\n\nTo add an additional seal, simply add another seal stanza, specifying priority\nand optionally name, and restart Vault.\n\nTo remove a seal, remove the corresponding seal stanza and restart.  There must\nbe at least one seal remaining.\n\nIt is highly recommended to take a snapshot of your Vault storage before applying\nany seal configuration change.\n\nOnce Vault unseals with the new seal configuration, it will be available to process\ntraffic even as re-wrapping proceeds.\n\nHere is a partial snippet of a two-seal HA setup, using an AWS KMS seal as\nthe primary (highest priority) seal, and an Azure KMS seal as the secondary:\n\n```hcl\nseal \"awskms\" {\n  name = \"AWS\"\n  priority = \"1\"\n  # ...\n}\n\nseal \"azurekeyvault\" {\n  name = \"Azure\"\n  priority = \"2\"\n  # ...\n}\n```\n\n### Safety checks\n\nVault will reject seal configuration changes in the following circumstances,\nas a safety mechanism:\n\n* The old seal configuration and new seal configuration do not share one seal\nin common.  This is necessary as there would be no seal capable of decrypting\nCSPs or seal wrapped values previously written.\n\n* Seal re-wrapping is in progress.  Vault must be in a clean, fully wrapped state\non the previous configuration before attempting a configuration change.\n\n* More than one seal is being added or removed at a time.\n\nIn rare circumstances it may become impossible to update seal configuration\nwithout triggering the safety checks. 
If this happens, it is possible to bypass\nthe checks by setting the environment variable `VAULT_SEAL_REWRAP_SAFETY` to\n`disable`.\n\n~> **Warning**: Use of the environment variable `VAULT_SEAL_REWRAP_SAFETY`\nshould be considered a last resort.\n\n### Interaction with Shamir Seals\n\nSeal HA is only supported with auto seal mechanisms.  To use Seal HA when\nrunning a Shamir seal, first use the traditional\n[seal migration](\/vault\/docs\/concepts\/seal#seal-migration) mechanism to migrate to\nan auto seal of your choice.  Afterwards you may follow the above\ninstructions to add a second auto seal.\n\nCorrespondingly, to migrate back to a Shamir seal, first use the above\ninstructions to move to a single auto seal, and then use the traditional\nmigration method to migrate back to a Shamir seal.\n\n### Removing Seal HA\n\nMigrating back to a single seal can result in data loss if it is not done\ncarefully. To migrate back to a single seal:\n\n1. Perform a [seal migration](\/vault\/docs\/concepts\/seal#seal-migration) as described.\n2. Monitor [`sys\/sealwrap\/rewrap`](\/vault\/api-docs\/system\/sealwrap-rewrap) until the API returns `fully_wrapped=true`.\n3. Remove `enable_multiseal` from all Vault configuration files in the cluster.\n4. 
Restart Vault.","site":"vault","answers_cleaned":"    layout  docs page title  Seal High Availability   Seals   Configuration description       How to configure multiple Seals for high availability         Seal High Availability   include  alerts enterprise only mdx    Seal High Availability   vault docs concepts seal seal high availability enterprise  provides the means to configure at least two auto seals  and no more than three  in order to have resilience against outage of a seal service or mechanism   Shamir seals cannot be used in a Seal HA setup   Using Seal HA involves configuring extra seals in Vault s server configuration file and restarting Vault or triggering a reload of it s configuration via sending it the SIGHUP signal   Before using Seal HA  one must have upgraded to Vault 1 16 or higher   Seal HA is enabled by adding the following line to Vault s configuration       enable multiseal   true         Adding and Removing Seals  In order to use Seal HA  there must be more than one defined   seal  stanza   vault docs configuration seal  in Vault s configuration   Seal HA adds two fields to these stanzas   name   and  priority       hcl seal  TYPE      name    seal name    priority    1                 Name is optional  and if not specified is set to the type of the seal   Names must be unique   If using two seals of the same type name must be specified  Internally  name is used to disambiguate seal wrapped values in some cases  so renaming seals should be avoided if possible   Many seal types can use environment variables instead of configuration lines to provide sensitive values   Because there may be two seals of the same type  one must disambiguate the environment variables used   To do this  in HA setups  append an underscore followed by the seal s configured name  matching its case  to any environment variable names   For example  in the sample configuration below  the AWS access key could be provided as  ACCESS KEY aws east   Keep in mind that the 
seal name must be valid in an environment variable name to use it   Priority is mandatory if more than one seal is specified   Priority tells Vault the order in which to try seals during unseal  least priority first   in the case more than one seal can unwrap a seal wrapped value  the order in which to attempt decryption  and which order to attempt to source entropy for entropy augmentation   This can be useful if your seals have different performance or cost characteristics   Here is a hypothetical configuration for an  AWS seal   vault docs configuration seal awskms  compatible with Seal HA      hcl seal  awskms      name          aws east    priority      1     region        us east 1    access key    AKIAIOSFODNN7EXAMPLE    secret key    wJalrXUtnFEMI K7MDENG bPxRfiCYEXAMPLEKEY    kms key id    19ec80b0 dfdd 4d97 8164 c6examplekey         All configured  healthy seals are used to seal wrap values   This means that for every write of a seal wrapped value or CSP  an encryption is requested from every configured seal  and the results are stored in the storage entry  When seals are unhealthy  Vault keeps track of values that could not be fully wrapped and will re wrap them once seals become healthy again  Note  however  that it is not possible to rotate the data encryption key nor the recovery keys while seals are unavailable  Disabled seals can still be used for decryption of wrapped values  but will be avoided when encrypting values   When reading a CSP or seal wrapped value  Vault will try to decrypt with the highest priority available seal  and then try other seals on failure   To add an additional seal  simply add another seal stanza  specifying priority and optionally name  and restart Vault   To remove a seal  remove the corresponding seal stanza and restart   There must be at least one seal remaining   It is highly recommended to take a snapshot of your Vault storage before applying any seal configuration change   Once Vault unseals with the new seal 
configuration  it will be available to process traffic even as re wrapping proceeds   Here is a partial snippet of a two seal HA setup  using an AWS KMS seal as the primary  highest priority  seal  and a Azure KMS seal as well       seal  awskms      name    AWS    priority    1             seal  azurekeyvault      name    Azure    priority    2                     Safety checks  Vault will reject seal configuration changes in the following circumstances  as a safety mechanism     The old seal configuration and new seal configuration do not share one seal in common   This is necessary as there would be no seal capable of decrypting CSPs or seal wrapped values previously written     Seal re wrapping is in progress   Vault must be in a clean  fully wrapped state on the previous configuration before attempting a configuration change     More than one seal is being added or removed at a time   In rare circumstances it may become impossible to update seal configuration without triggering the safety checks  If this happens  it is possible to bypass the checks by setting the environment variable  VAULT SEAL REWRAP SAFETY  to  disable         Warning    The use of environment variable  VAULT SEAL REWRAP SAFETY  should be considered as a last resort        Interaction with Shamir Seals  Seal HA is only supported with auto seal mechanisms   To use Seal HA when running a Shamir seal  first use the traditional  seal migration   vault docs concepts seal seal migration  mechanism to migrate to an auto seal of your choice   Afterwards you may follow the above instructions to add a second auto seal   Correspondingly  to migrate back to a shamir seal  first use the above instructions to move to a single auto seal  and use the traditional migration method to migrate back to a Shamir seal       Removing Seal HA  Migrating back to a single seal may result in data loss if an operator does not  use the HA seal feature  To migrate back to a single seal   1  Perform a  seal migration   
vault docs concepts seal seal migration  as described  2  Monitor   sys sealwrap rewrap    vault api docs system sealwrap rewrap  until the API returns  fully wrapped true   3  Remove  enable multiseal  from all Vault configuration files in the cluster  4  Restart Vault "}
{"questions":"vault bucket The S3 storage backend is used to persist Vault s data in an Amazon S3 layout docs page title S3 Storage Backends Configuration S3 storage backend","answers":"---\nlayout: docs\npage_title: S3 - Storage Backends - Configuration\ndescription: |-\n  The S3 storage backend is used to persist Vault's data in an Amazon S3\n  bucket.\n---\n\n# S3 storage backend\n\nThe S3 storage backend is used to persist Vault's data in an [Amazon S3][s3]\nbucket.\n\n- **No High Availability** \u2013 the S3 storage backend does not support high\n  availability.\n\n- **Community Supported** \u2013 the S3 storage backend is supported by the\n  community. While it has undergone review by HashiCorp employees, they may not\n  be as knowledgeable about the technology. If you encounter problems with them,\n  you may be referred to the original author.\n\n```hcl\nstorage \"s3\" {\n  access_key = \"abcd1234\"\n  secret_key = \"defg5678\"\n  bucket     = \"my-bucket\"\n}\n```\n\n## `s3` parameters\n\n- `bucket` `(string: <required>)` \u2013 Specifies the name of the S3 bucket. This\n  can also be provided via the environment variable `AWS_S3_BUCKET`.\n\n- `endpoint` `(string: \"\")` \u2013\u00a0Specifies an alternative, AWS compatible, S3\n  endpoint. This can also be provided via the environment variable\n  `AWS_S3_ENDPOINT`.\n\n- `region` `(string \"us-east-1\")` \u2013 Specifies the AWS region. This can also be\n  provided via the environment variable `AWS_REGION` or `AWS_DEFAULT_REGION`,\n  in that order of preference.\n\nThe following settings are used for authenticating to AWS. If you are\nrunning your Vault server on an EC2 instance, you can also make use of the EC2\ninstance profile service to provide the credentials Vault will use to make\nS3 API calls. Leaving the `access_key` and `secret_key` fields empty will\ncause Vault to attempt to retrieve credentials from the AWS metadata service.\n\n- `access_key` \u2013 Specifies the AWS access key. 
This can also be provided via\n  the environment variable `AWS_ACCESS_KEY_ID`, AWS credential files, or by\n  IAM role.\n\n- `secret_key` \u2013 Specifies the AWS secret key. This can also be provided via\n  the environment variable `AWS_SECRET_ACCESS_KEY`, AWS credential files, or\n  by IAM role.\n\n- `session_token` `(string: \"\")` \u2013 Specifies the AWS session token. This can\n  also be provided via the environment variable `AWS_SESSION_TOKEN`.\n\n- `max_parallel` `(string: \"128\")` \u2013 Specifies the maximum number of concurrent\n  requests to S3.\n\n- `s3_force_path_style` `(string: \"false\")` - Specifies whether to use host\n  bucket style domains with the configured endpoint.\n\n- `disable_ssl` `(string: \"false\")` - Specifies whether to disable SSL for the\n  endpoint connection (keeping SSL enabled is highly recommended for production).\n\n- `kms_key_id` `(string: \"\")` - Specifies the ID or Alias of the KMS key used to\n  encrypt data in the S3 backend. Vault must have `kms:Encrypt`, `kms:Decrypt`\n  and `kms:GenerateDataKey` permissions for this KMS key. 
You can use \n  `alias\/aws\/s3` to specify the default key for the account.\n\n- `path` `(string: \"\")` - Specifies the path in the S3 Bucket where Vault\n  data will be stored.\n\n## `s3` examples\n\n### Default example\n\nThis example shows using Amazon S3 as a storage backend.\n\n```hcl\nstorage \"s3\" {\n  access_key = \"abcd1234\"\n  secret_key = \"defg5678\"\n  bucket     = \"my-bucket\"\n}\n```\n\n### S3 KMS encryption with default key\n\nThis example shows using Amazon S3 as a storage backend using KMS\nencryption with the default S3 KMS key for the account.\n\n```hcl\nstorage \"s3\" {\n  access_key = \"abcd1234\"\n  secret_key = \"defg5678\"\n  bucket     = \"my-bucket\"\n  kms_key_id = \"alias\/aws\/s3\"\n}\n```\n\n### S3 KMS encryption with custom key\n\nThis example shows using Amazon S3 as a storage backend using KMS\nencryption with a customer managed KMS key.\n\n```hcl\nstorage \"s3\" {\n  access_key = \"abcd1234\"\n  secret_key = \"defg5678\"\n  bucket     = \"my-bucket\"\n  kms_key_id = \"001234ac-72d3-9902-a3fc-0123456789ab\"\n}\n```\n\n[s3]: https:\/\/aws.amazon.com\/s3\/\n\n## AWS instance metadata timeouts\n\n@include 'aws-imds-timeout.mdx'","site":"vault","answers_cleaned":"    layout  docs page title  S3   Storage Backends   Configuration description       The S3 storage backend is used to persist Vault s data in an Amazon S3   bucket         S3 storage backend  The S3 storage backend is used to persist Vault s data in an  Amazon S3  s3  bucket       No High Availability     the S3 storage backend does not support high   availability       Community Supported     the S3 storage backend is supported by the   community  While it has undergone review by HashiCorp employees  they may not   be as knowledgeable about the technology  If you encounter problems with them    you may be referred to the original author      hcl storage  s3      access key    abcd1234    secret key    defg5678    bucket        my bucket             s3  parameters     
bucket    string   required      Specifies the name of the S3 bucket  This   can also be provided via the environment variable  AWS S3 BUCKET       endpoint    string         Specifies an alternative  AWS compatible  S3   endpoint  This can also be provided via the environment variable    AWS S3 ENDPOINT       region    string  us east 1      Specifies the AWS region  This can also be   provided via the environment variable  AWS REGION  or  AWS DEFAULT REGION     in that order of preference   The following settings are used for authenticating to AWS  If you are running your Vault server on an EC2 instance  you can also make use of the EC2 instance profile service to provide the credentials Vault will use to make S3 API calls  Leaving the  access key  and  secret key  fields empty will cause Vault to attempt to retrieve credentials from the AWS metadata service      access key    Specifies the AWS access key  This can also be provided via   the environment variable  AWS ACCESS KEY ID   AWS credential files  or by   IAM role      secret key    Specifies the AWS secret key  This can also be provided via   the environment variable  AWS SECRET ACCESS KEY   AWS credential files  or   by IAM role      session token    string         Specifies the AWS session token  This can   also be provided via the environment variable  AWS SESSION TOKEN       max parallel    string   128      Specifies the maximum number of concurrent   requests to S3      s3 force path style    string   false      Specifies whether to use host   bucket style domains with the configured endpoint      disable ssl    string   false      Specifies if SSL should be used for the   endpoint connection  highly recommended not to disable for production       kms key id    string         Specifies the ID or Alias of the KMS key used to   encrypt data in the S3 backend  Vault must have  kms Encrypt    kms Decrypt    and  kms GenerateDataKey  permissions for this KMS key  You can use     alias aws s3  to specify 
the default key for the account      path    string         Specifies the path in the S3 Bucket where Vault   data will be stored       s3  examples      Default example  This example shows using Amazon S3 as a storage backend      hcl storage  s3      access key    abcd1234    secret key    defg5678    bucket        my bucket             S3 KMS encryption with default key  This example shows using Amazon S3 as a storage backend using KMS encryption with the default S3 KMS key for the account      hcl storage  s3      access key    abcd1234    secret key    defg5678    bucket        my bucket    kms key id    alias aws s3             S3 KMS encryption with custom key  This example shows using Amazon S3 as a storage backend using KMS encryption with a customer managed KMS key      hcl storage  s3      access key    abcd1234    secret key    defg5678    bucket        my bucket    kms key id    001234ac 72d3 9902 a3fc 0123456789ab          s3   https   aws amazon com s3      AWS instance metadata timeouts   include  aws imds timeout mdx "}
{"questions":"vault page title FoundationDB Storage Backends Configuration FoundationDB KV store FoundationDB storage backend layout docs The FoundationDB storage backend is used to persist Vault s data in the","answers":"---\nlayout: docs\npage_title: FoundationDB - Storage Backends - Configuration\ndescription: |-\n  The FoundationDB storage backend is used to persist Vault's data in the\n  FoundationDB KV store.\n---\n\n# FoundationDB storage backend\n\nThe FoundationDB storage backend is used to persist Vault's data in\n[FoundationDB][foundationdb].\n\nThe backend needs to be explicitly enabled at build time, and is not available\nin the standard Vault binary distribution. Please refer to the documentation\naccompanying the backend's source in the Vault source tree.\n\n- **High Availability** \u2013 the FoundationDB storage backend supports high\n  availability. The HA implementation relies on the clocks of the Vault\n  nodes inside the cluster being properly synchronized; clock skews are\n  susceptible to cause contention on the locks.\n\n- **Community Supported** \u2013 the FoundationDB storage backend is supported\n  by the community. While it has undergone review by HashiCorp employees,\n  they may not be as knowledgeable about the technology. 
If you encounter\n  problems with them, you may be referred to the original author.\n\n```hcl\nstorage \"foundationdb\" {\n  api_version      = 520\n  cluster_file     = \"\/path\/to\/fdb.cluster\"\n\n  tls_verify_peers = \"I.CN=MyTrustedIssuer,I.O=MyCompany\\, Inc.,I.OU=Certification Authority\"\n  tls_ca_file      = \"\/path\/to\/ca_bundle.pem\"\n  tls_cert_file    = \"\/path\/to\/cert.pem\"\n  tls_key_file     = \"\/path\/to\/key.pem\"\n  tls_password     = \"PrivateKeyPassword\"\n\n  path             = \"vault-top-level-directory\"\n  ha_enabled       = \"true\"\n}\n```\n\n## `foundationdb` parameters\n\n- `api_version` `(int)` - The FoundationDB API version to use; this is a\n  required parameter and doesn't have a default value. The minimum required API\n  version is 520.\n\n- `cluster_file` `(string)` - The path to the cluster file containing the\n  connection data for the target cluster; this is a required parameter and\n  doesn't have a default value.\n\n- `tls_verify_peers` `(string)` - The peer certificate verification criteria;\n  this parameter is mandatory if TLS is enabled. Refer to the [FoundationDB TLS][fdb-tls] documentation.\n\n- `tls_ca_file` `(string)` - The path to the CA certificate bundle file; this\n  parameter is mandatory if TLS is enabled.\n\n- `tls_cert_file` `(string)` - The path to the certificate file; specifying this\n  parameter together with `tls_key_file` will enable TLS support.\n\n- `tls_key_file` `(string)` - The path to the key file; specifying this\n  parameter together with `tls_cert_file` will enable TLS support.\n\n- `tls_password` `(string)` - The password needed to decrypt `tls_key_file`, if\n  it is encrypted; optional. 
This can also be specified via the\n  `FDB_TLS_PASSWORD` environment variable.\n\n- `path` `(string: \"vault\")` - The path of the top-level FoundationDB directory\n  (using the directory layer) under which the Vault data will reside.\n\n- `ha_enabled` `(string: \"false\")` - Whether or not to enable Vault\n  high-availability mode using the FoundationDB backend.\n\n## `foundationdb` tips\n\n### Cluster file\n\nThe FoundationDB client expects to be able to update the cluster file at\nruntime, to keep it current with changes happening to the cluster.\n\nIt does so by first writing a new cluster file alongside the current one,\nthen atomically renaming it into place.\n\nThis means the cluster file and the directory it resides in must be writable\nby the user Vault is running as. You probably want to isolate the cluster\nfile into its own directory.\n\n### Multi-version client\n\nThe FoundationDB client library version is tightly coupled to the server\nversion; during cluster upgrades, multiple server versions will be running\nin the cluster, and the client must cope with that situation.\n\nThis is handled by the (primary) client library having the ability to load\na different, later version of the client library to connect to a particular\nserver; it is referred to as the [multi-version client][multi-ver-client]\nfeature.\n\n#### Client setup with `LD_LIBRARY_PATH`\n\nIf you do not use mlock, you can use `LD_LIBRARY_PATH` to point the linker at\nthe location of the primary client library.\n\n```shell-session\n$ export LD_LIBRARY_PATH=\/dest\/dir\/for\/primary:$LD_LIBRARY_PATH\n$ export FDB_NETWORK_OPTION_EXTERNAL_CLIENT_DIRECTORY=\/dest\/dir\/for\/secondary\n$ \/path\/to\/bin\/vault ...\n```\n\n#### Client setup with `RPATH`\n\nWhen running Vault with mlock, the Vault binary must have capabilities set to\nallow the use of mlock.\n\n```\n# setcap cap_ipc_lock=+ep \/path\/to\/bin\/vault\n$ getcap \/path\/to\/bin\/vault\n\/path\/to\/bin\/vault = 
cap_ipc_lock+ep\n```\n\nThe presence of the capabilities will cause the linker to ignore\n`LD_LIBRARY_PATH`, for security reasons.\n\nIn that case, we have to set an `RPATH` on the Vault binary at build time\nto replace the use of `LD_LIBRARY_PATH`.\n\nWhen building Vault, pass the `-r \/dest\/dir\/for\/primary` option to the Go\nlinker, for instance:\n\n```shell-session\n$ make dev FDB_ENABLED=1 LD_FLAGS=\"-r \/dest\/dir\/for\/primary \"\n```\n\n(Note the trailing space in the variable value above).\n\nYou can verify `RPATH` is set on the Vault binary using `readelf`:\n\n```shell-session\n$ readelf -d \/path\/to\/bin\/vault | grep RPATH\n 0x000000000000000f (RPATH)              Library rpath: [\/dest\/dir\/for\/primary]\n```\n\nWith the client libraries installed:\n\n```shell-session\n$ ldd \/path\/to\/bin\/vault\n...\n    libfdb_c.so => \/dest\/dir\/for\/primary\/libfdb_c.so (0x00007f270ad05000)\n...\n```\n\nNow run Vault:\n\n```shell-session\n$ export FDB_NETWORK_OPTION_EXTERNAL_CLIENT_DIRECTORY=\/dest\/dir\/for\/secondary\n$ \/path\/to\/bin\/vault ...\n```\n\n[foundationdb]: https:\/\/www.foundationdb.org\n[fdb-tls]: https:\/\/apple.github.io\/foundationdb\/tls.html\n[multi-ver-client]: https:\/\/apple.github.io\/foundationdb\/api-general.html#multi-version-client-api","site":"vault","answers_cleaned":"    layout  docs page title  FoundationDB   Storage Backends   Configuration description       The FoundationDB storage backend is used to persist Vault s data in the   FoundationDB KV store         FoundationDB storage backend  The FoundationDB storage backend is used to persist Vault s data in  FoundationDB  foundationdb    The backend needs to be explicitly enabled at build time  and is not available in the standard Vault binary distribution  Please refer to the documentation accompanying the backend s source in the Vault source tree       High Availability     the FoundationDB storage backend supports high   availability  The HA implementation relies on the 
clocks of the Vault   nodes inside the cluster being properly synchronized  clock skews are   susceptible to cause contention on the locks       Community Supported     the FoundationDB storage backend is supported   by the community  While it has undergone review by HashiCorp employees    they may not be as knowledgeable about the technology  If you encounter   problems with them  you may be referred to the original author      hcl storage  foundationdb      api version        520   cluster file         path to fdb cluster     tls verify peers    I CN MyTrustedIssuer I O MyCompany   Inc  I OU Certification Authority    tls ca file          path to ca bundle pem    tls cert file        path to cert pem    tls key file         path to key pem    tls password        PrivateKeyPassword     path                vault top level directory    ha enabled          true             foundationdb  parameters     api version    int     The FoundationDB API version to use  this is a   required parameter and doesn t have a default value  The minimum required API   version is 520      cluster file    string     The path to the cluster file containing the   connection data for the target cluster  this is a required parameter and   doesn t have a default value      tls verify peers    string     The peer certificate verification criteria    this parameter is mandatory if TLS is enabled  Refer to the  FoundationDB TLS  fdb tls  documentation      tls ca file    string     The path to the CA certificate bundle file  this   parameter is mandatory if TLS is enabled      tls cert file    string     The path to the certificate file  specifying this   parameter together with  tls key file  will enable TLS support      tls key file    string     The path to the key file  specifying this   parameter together with  tls cert file  will enable TLS support      tls password    string     The password needed to decrypt  tls key file   if   it is encrypted  optional  This can also be specified via 
the    FDB TLS PASSWORD  environment variable      path    string   vault      The path of the top level FoundationDB directory    using the directory layer  under which the Vault data will reside      ha enabled    string   false      Whether or not to enable Vault   high availability mode using the FoundationDB backend       foundationdb  tips      Cluster file  The FoundationDB client expects to be able to update the cluster file at runtime  to keep it current with changes happening to the cluster   It does so by first writing a new cluster file alongside the current one  then atomically renaming it into place   This means the cluster file and the directory it resides in must be writable by the user Vault is running as  You probably want to isolate the cluster file into its own directory       Multi version client  The FoundationDB client library version is tightly coupled to the server version  during cluster upgrades  multiple server versions will be running in the cluster  and the client must cope with that situation   This is handled by the  primary  client library having the ability to load a different  later version of the client library to connect to a particular server  it is referred to as the  multi version client  multi ver client  feature        Client setup with  LD LIBRARY PATH   If you do not use mlock  you can use  LD LIBRARY PATH  to point the linker at the location of the primary client library      shell session   export LD LIBRARY PATH  dest dir for primary  LD LIBRARY PATH   export FDB NETWORK OPTION EXTERNAL CLIENT DIRECTORY  dest dir for secondary    path to bin vault               Client setup with  RPATH   When running Vault with mlock  the Vault binary must have capabilities set to allow the use of mlock         setcap cap ipc lock  ep  path to bin vault   getcap  path to bin vault  path to bin vault   cap ipc lock ep      The presence of the capabilities will cause the linker to ignore  LD LIBRARY PATH   for security reasons   In that 
case  we have to set an  RPATH  on the Vault binary at build time to replace the use of  LD LIBRARY PATH    When building Vault  pass the   r  dest dir for primary  option to the Go linker  for instance      shell session   make dev FDB ENABLED 1 LD FLAGS   r  dest dir for primary         Note the trailing space in the variable value above    You can verify  RPATH  is set on the Vault binary using  readelf       shell session   readelf  d  path to bin vault   grep RPATH  0x000000000000000f  RPATH               Library rpath    dest dir for primary       With the client libraries installed      shell session   ldd  path to bin vault         libfdb c so     dest dir for primary libfdb c so  0x00007f270ad05000           Now run Vault      shell session   export FDB NETWORK OPTION EXTERNAL CLIENT DIRECTORY  dest dir for secondary    path to bin vault           foundationdb   https   www foundationdb org  fdb tls   https   apple github io foundationdb tls html  multi ver client   https   apple github io foundationdb api general html multi version client api"}
{"questions":"vault on the version of the Etcd cluster Etcd storage backend page title Etcd Storage Backends Configuration layout docs both the v2 and v3 Etcd APIs and the version is automatically detected based The Etcd storage backend is used to persist Vault s data in Etcd It supports","answers":"---\nlayout: docs\npage_title: Etcd - Storage Backends - Configuration\ndescription: |-\n  The Etcd storage backend is used to persist Vault's data in Etcd. It supports\n  both the v2 and v3 Etcd APIs, and the version is automatically detected based\n  on the version of the Etcd cluster.\n---\n\n# Etcd storage backend\n\nThe Etcd storage backend is used to persist Vault's data in [Etcd][etcd]. It\nsupports both the v2 and v3 Etcd APIs, and the version is automatically detected\nbased on the version of the Etcd cluster.\n\n~> The Etcd v2 API has been deprecated with the release of Etcd v3.5, and will\nbe decommissioned by Etcd v3.6. It will be removed from Vault in Vault 1.10.\nUsers of the Etcd storage backend should prepare to\n[migrate](\/vault\/docs\/commands\/operator\/migrate) Vault storage to an Etcd v3 cluster\nprior to upgrading to Vault 1.10. All storage migrations should have\n[backups](\/vault\/docs\/concepts\/storage#backing-up-vault-s-persisted-data) taken prior\nto migration.\n\n- **High Availability** \u2013 the Etcd storage backend supports high availability.\n  The v2 API has known issues with HA support and should not be used in HA\n  scenarios.\n\n- **Community Supported** \u2013 the Etcd storage backend is supported by CoreOS.\n  While it has undergone review by HashiCorp employees, they may not be as\n  knowledgeable about the technology. 
If you encounter problems with them, you\n  may be referred to the original author.\n\n```hcl\nstorage \"etcd\" {\n  address  = \"http:\/\/localhost:2379\"\n  etcd_api = \"v3\"\n}\n```\n\n## `etcd` parameters\n\n- `address` `(string: \"http:\/\/localhost:2379\")` \u2013 Specifies the addresses of the\n  Etcd instances as a comma-separated list. This can also be provided via the\n  environment variable `ETCD_ADDR`.\n\n- `discovery_srv` `(string: \"example.com\")` - Specifies the domain name to\n  query for SRV records describing cluster endpoints. This can also be provided\n  via the environment variable `ETCD_DISCOVERY_SRV`.\n\n- `discovery_srv_name` `(string: \"vault\")` - Specifies the service name to use\n  when querying for SRV records describing cluster endpoints. This can also be\n  provided via the environment variable `ETCD_DISCOVERY_SRV_NAME`.\n\n- `etcd_api` `(string: \"<varies>\")` \u2013 Specifies the version of the API to\n  communicate with. By default, this is derived automatically. If the cluster\n  version is 3.1+ and there has been no data written using the v2 API, the\n  auto-detected default is v3.\n\n- `ha_enabled` `(string: \"false\")` \u2013 Specifies if high availability should be\n  enabled. This can also be provided via the environment variable\n  `ETCD_HA_ENABLED`.\n\n- `path` `(string: \"\/vault\/\")` \u2013 Specifies the path in Etcd where Vault data will\n  be stored.\n\n- `sync` `(string: \"true\")` \u2013 Specifies whether to sync the list of available\n  Etcd services on startup. This is a string that is coerced into a boolean\n  value. You may want to set this to false if your cluster is behind a proxy\n  server and syncing causes Vault to fail.\n\n- `username` `(string: \"\")` \u2013 Specifies the username to use when authenticating\n  with the Etcd server. 
This can also be provided via the environment variable\n  `ETCD_USERNAME`.\n\n- `password` `(string: \"\")` \u2013 Specifies the password to use when authenticating\n  with the Etcd server. This can also be provided via the environment variable\n  `ETCD_PASSWORD`.\n\n- `tls_ca_file` `(string: \"\")` \u2013 Specifies the path to the CA certificate used\n  for Etcd communication. This defaults to the system bundle if not specified.\n\n- `tls_cert_file` `(string: \"\")` \u2013 Specifies the path to the certificate for\n  Etcd communication.\n\n- `tls_key_file` `(string: \"\")` \u2013 Specifies the path to the private key for Etcd\n  communication.\n\n- `request_timeout` `(string: \"5s\")` \u2013 Specifies the timeout for requests\n  to etcd. 5 seconds should be long enough for most cases, even with internal\n  retry.\n\n- `lock_timeout` `(string: \"15s\")` \u2013 Specifies the lock timeout for the master\n  Vault instance. Set a bigger value if you do not need faster recovery.\n\n- `max_receive_size` `(int)` \u2013 Specifies the client-side response receive limit.\n  Make sure that `max_receive_size` >= the server-side send\/recv limit\n  (the `--max-request-bytes` flag to etcd, or `embed.Config.MaxRequestBytes`).\n\n- `max_send_size` `(int)` \u2013 Specifies the client-side request send limit in bytes.\n  Make sure that `max_send_size` < the server-side send\/recv limit\n  (the `--max-request-bytes` flag to etcd, or `embed.Config.MaxRequestBytes`).\n\n## `etcd` Examples\n\n### DNS discovery of cluster members\n\nThis example configures Vault to discover the Etcd cluster members via SRV\nrecords as outlined in the\n[DNS Discovery protocol documentation][dns discovery].\n\n```hcl\nstorage \"etcd\" {\n  discovery_srv = \"example.com\"\n}\n```\n\n### Custom authentication\n\nThis example shows connecting to the Etcd cluster using a username and password.\n\n```hcl\nstorage \"etcd\" {\n  username = \"user1234\"\n  password = \"pass5678\"\n}\n```\n\n### Custom path\n\nThis example shows 
storing data in a custom path.\n\n```hcl\nstorage \"etcd\" {\n  path = \"my-vault-data\/\"\n}\n```\n\n### Enabling high availability\n\nThis example shows enabling high availability for the Etcd storage backend.\n\n```hcl\napi_addr = \"https:\/\/vault-leader.my-company.internal\"\n\nstorage \"etcd\" {\n  ha_enabled    = \"true\"\n  ...\n}\n```\n\n[etcd]: https:\/\/coreos.com\/etcd 'Etcd by CoreOS'\n[dns discovery]: https:\/\/coreos.com\/etcd\/docs\/latest\/op-guide\/clustering.html#dns-discovery 'Etcd cluster DNS Discovery'","site":"vault"}
{"questions":"vault DynamoDB storage backend layout docs page title DynamoDB Storage Backends Configuration The DynamoDB storage backend is used to persist Vault s data in DynamoDB table","answers":"---\nlayout: docs\npage_title: DynamoDB - Storage Backends - Configuration\ndescription: |-\n  The DynamoDB storage backend is used to persist Vault's data in DynamoDB\n  table.\n---\n\n# DynamoDB storage backend\n\nThe DynamoDB storage backend is used to persist Vault's data in\n[DynamoDB][dynamodb] table.\n\n- **High Availability** \u2013 the DynamoDB storage backend supports high\n  availability. Because DynamoDB uses the time on the Vault node to implement\n  the session lifetimes on its locks, significant clock skew across Vault nodes\n  could cause contention issues on the lock.\n\n- **Community Supported** \u2013 the DynamoDB storage backend is supported by the\n  community. While it has undergone review by HashiCorp employees, they may not\n  be as knowledgeable about the technology. If you encounter problems with this\n  storage backend, you could be referred to the original author for support.\n\n```hcl\nstorage \"dynamodb\" {\n  ha_enabled = \"true\"\n  region     = \"us-west-2\"\n  table      = \"vault-data\"\n}\n```\n\nFor more information about the read\/write capacity of DynamoDB tables, please\nsee the [official AWS DynamoDB documentation][dynamodb-rw-capacity].\n\n## DynamoDB parameters\n\n- `endpoint` `(string: \"\")` \u2013 Specifies an alternative, AWS compatible, DynamoDB\n  endpoint. This can also be provided via the environment variable\n  `AWS_DYNAMODB_ENDPOINT`.\n\n- `ha_enabled` `(string: \"false\")` \u2013 Specifies whether this backend should be used\n  to run Vault in high availability mode. Valid values are \"true\" or \"false\". 
This\n  can also be provided via the environment variable `DYNAMODB_HA_ENABLED`.\n\n- `max_parallel` `(string: \"128\")` \u2013 Specifies the maximum number of concurrent\n  requests.\n\n- `region` `(string: \"us-east-1\")` \u2013 Specifies the AWS region. This can also be\n  provided via the environment variable `AWS_DEFAULT_REGION`.\n\n- `read_capacity` `(int: 5)` \u2013 Specifies the maximum number of reads consumed\n  per second on the table, for use if Vault creates the DynamoDB table. This has\n  no effect if the `table` already exists. This can also be provided via the\n  environment variable `AWS_DYNAMODB_READ_CAPACITY`.\n\n- `table` `(string: \"vault-dynamodb-backend\")` \u2013 Specifies the name of the\n  DynamoDB table in which to store Vault data. If the specified table does not\n  yet exist, it will be created during initialization. This can also be\n  provided via the environment variable `AWS_DYNAMODB_TABLE`. See the\n  information on the table schema below.\n\n- `write_capacity` `(int: 5)` \u2013 Specifies the maximum number of writes performed\n  per second on the table, for use if Vault creates the DynamoDB table. This value\n  has no effect if the `table` already exists. This can also be provided via the\n  environment variable `AWS_DYNAMODB_WRITE_CAPACITY`.\n\nThe following settings are used for authenticating to AWS. If you are\nrunning your Vault server on an EC2 instance, you can also make use of the EC2\ninstance profile service to provide the credentials Vault will use to make\nDynamoDB API calls. Leaving the `access_key` and `secret_key` fields empty will\ncause Vault to attempt to retrieve credentials from the AWS metadata service.\n\n- `access_key` `(string: <required>)` \u2013 Specifies the AWS access key. This can\n  also be provided via the environment variable `AWS_ACCESS_KEY_ID`.\n\n- `secret_key` `(string: <required>)` \u2013 Specifies the AWS secret key. 
This can\n  also be provided via the environment variable `AWS_SECRET_ACCESS_KEY`.\n\n- `session_token` `(string: \"\")` \u2013 Specifies the AWS session token. This can\n  also be provided via the environment variable `AWS_SESSION_TOKEN`.\n\n## Required AWS permissions\n\nThe governing policy for the IAM user or EC2 instance profile that Vault uses\nto access DynamoDB must contain the following permissions for Vault to perform\nthe required operations on the DynamoDB table:\n\n```javascript\n  \"Statement\": [\n    {\n      \"Action\": [\n        \"dynamodb:DescribeLimits\",\n        \"dynamodb:DescribeTimeToLive\",\n        \"dynamodb:ListTagsOfResource\",\n        \"dynamodb:DescribeReservedCapacityOfferings\",\n        \"dynamodb:DescribeReservedCapacity\",\n        \"dynamodb:ListTables\",\n        \"dynamodb:BatchGetItem\",\n        \"dynamodb:BatchWriteItem\",\n        \"dynamodb:CreateTable\",\n        \"dynamodb:DeleteItem\",\n        \"dynamodb:GetItem\",\n        \"dynamodb:GetRecords\",\n        \"dynamodb:PutItem\",\n        \"dynamodb:Query\",\n        \"dynamodb:UpdateItem\",\n        \"dynamodb:Scan\",\n        \"dynamodb:DescribeTable\"\n      ],\n      \"Effect\": \"Allow\",\n      \"Resource\": [ \"arn:aws:dynamodb:us-east-1:... 
dynamodb table ARN\" ]\n    },\n```\n\n## Table schema\n\nIf you are going to create the DynamoDB table prior to the execution and\ninitialization of Vault, you will need to create a table with these attributes:\n\n- Primary partition key: \"Path\", a string\n- Primary sort key: \"Key\", a string\n\nYou might create the table via Terraform, with a configuration similar to this:\n\n```\nresource \"aws_dynamodb_table\" \"dynamodb-table\" {\n  name           = \"${var.dynamoTable}\"\n  read_capacity  = 1\n  write_capacity = 1\n  hash_key       = \"Path\"\n  range_key      = \"Key\"\n  \n  attribute {\n    name = \"Path\"\n    type = \"S\"\n  }\n\n  attribute {\n    name = \"Key\"\n    type = \"S\"\n  }\n\n  tags = {\n    Name        = \"vault-dynamodb-table\"\n    Environment = \"prod\"\n  }\n}\n```\n\nIf a table with the configured name already exists, Vault will not modify it -\nand the Vault configuration values of `read_capacity` and `write_capacity` have\nno effect.\n\nIf the table does not already exist, Vault will try to create it, with read and\nwrite capacities set to the values of `read_capacity` and `write_capacity`\nrespectively.\n\n## AWS instance metadata timeout\n\n@include 'aws-imds-timeout.mdx'\n\n## DynamoDB examples of Vault configuration\n\n### Custom table and Read-Write capacity\n\nThis example shows using a custom table name and read\/write capacity.\n\n```hcl\nstorage \"dynamodb\" {\n  table = \"my-vault-data\"\n\n  read_capacity  = 10\n  write_capacity = 15\n}\n```\n\n### Enabling high availability\n\nThis example shows enabling high availability for the DynamoDB storage backend.\n\n```hcl\napi_addr = \"https:\/\/vault-leader.my-company.internal\"\n\nstorage \"dynamodb\" {\n  ha_enabled    = \"true\"\n  ...\n}\n```\n\n[dynamodb]: https:\/\/aws.amazon.com\/dynamodb\/\n[dynamodb-rw-capacity]: https:\/\/docs.aws.amazon.com\/amazondynamodb\/latest\/developerguide\/WorkingWithTables.html#ProvisionedThroughput","site":"vault","answers_cleaned":"    
layout  docs page title  DynamoDB   Storage Backends   Configuration description       The DynamoDB storage backend is used to persist Vault s data in DynamoDB   table         DynamoDB storage backend  The DynamoDB storage backend is used to persist Vault s data in  DynamoDB  dynamodb  table       High Availability     the DynamoDB storage backend supports high   availability  Because DynamoDB uses the time on the Vault node to implement   the session lifetimes on its locks  significant clock skew across Vault nodes   could cause contention issues on the lock       Community Supported     the DynamoDB storage backend is supported by the   community  While it has undergone review by HashiCorp employees  they may not   be as knowledgeable about the technology  If you encounter problems with this   storage backend  you could be referred to the original author for support      hcl storage  dynamodb      ha enabled    true    region        us west 2    table         vault data         For more information about the read write capacity of DynamoDB tables  please see the  official AWS DynamoDB documentation  dynamodb rw capacity       DynamoDB parameters     endpoint    string         Specifies an alternative  AWS compatible  DynamoDB   endpoint  This can also be provided via the environment variable    AWS DYNAMODB ENDPOINT       ha enabled    string   false      Specifies whether this backend should be used   to run Vault in high availability mode  Valid values are  true  or  false   This   can also be provided via the environment variable  DYNAMODB HA ENABLED       max parallel    string   128      Specifies the maximum number of concurrent   requests      region    string  us east 1      Specifies the AWS region  This can also be   provided via the environment variable  AWS DEFAULT REGION       read capacity    int  5     Specifies the maximum number of reads consumed   per second on the table  for use if Vault creates the DynamoDB table  This has   no effect if the  
table  already exists  This can also be provided via the   environment variable  AWS DYNAMODB READ CAPACITY       table    string   vault dynamodb backend      Specifies the name of the   DynamoDB table in which to store Vault data  If the specified table does not   yet exist  it will be created during initialization  This can also be   provided via the environment variable  AWS DYNAMODB TABLE   See the   information on the table schema below      write capacity    int  5     Specifies the maximum number of writes performed   per second on the table  for use if Vault creates the DynamoDB table  This value   has no effect if the  table  already exists  This can also be provided via the   environment variable  AWS DYNAMODB WRITE CAPACITY    The following settings are used for authenticating to AWS  If you are running your Vault server on an EC2 instance  you can also make use of the EC2 instance profile service to provide the credentials Vault will use to make DynamoDB API calls  Leaving the  access key  and  secret key  fields empty will cause Vault to attempt to retrieve credentials from the AWS metadata service      access key    string   required      Specifies the AWS access key  This can   also be provided via the environment variable  AWS ACCESS KEY ID       secret key    string   required      Specifies the AWS secret key  This can   also be provided via the environment variable  AWS SECRET ACCESS KEY       session token    string         Specifies the AWS session token  This can   also be provided via the environment variable  AWS SESSION TOKEN       Required AWS permissions  The governing policy for the IAM user or EC2 instance profile that Vault uses to access DynamoDB must contain the following permissions for Vault to perform the required operations on the DynamoDB table      javascript    Statement                  Action              dynamodb DescribeLimits            dynamodb DescribeTimeToLive            dynamodb ListTagsOfResource            
dynamodb DescribeReservedCapacityOfferings            dynamodb DescribeReservedCapacity            dynamodb ListTables            dynamodb BatchGetItem            dynamodb BatchWriteItem            dynamodb CreateTable            dynamodb DeleteItem            dynamodb GetItem            dynamodb GetRecords            dynamodb PutItem            dynamodb Query            dynamodb UpdateItem            dynamodb Scan            dynamodb DescribeTable                  Effect    Allow          Resource      arn aws dynamodb us east 1     dynamodb table ARN                   Table schema  If you are going to create the DynamoDB table prior to the execution and initialization of Vault  you will need to create a table with these attributes     Primary partition key   Path   a string   Primary sort key   Key   a string  You might create the table via Terraform  with a configuration similar to this       resource  aws dynamodb table   dynamodb table      name                var dynamoTable     read capacity    1   write capacity   1   hash key          Path    range key         Key       attribute       name    Path      type    S         attribute       name    Key      type    S         tags         Name           vault dynamodb table      Environment    prod             If a table with the configured name already exists  Vault will not modify it   and the Vault configuration values of  read capacity  and  write capacity  have no effect   If the table does not already exist  Vault will try to create it  with read and write capacities set to the values of  read capacity  and  write capacity  respectively      AWS instance metadata timeout   include  aws imds timeout mdx      DynamoDB examples of Vault configuration      Custom table and Read Write capacity  This example shows using a custom table name and read write capacity      hcl storage  dynamodb      table    my vault data     read capacity    10   write capacity   15            Enabling high availability  This 
example shows enabling high availability for the DynamoDB storage backend      hcl api addr    https   vault leader my company internal   storage  dynamodb      ha enabled       true                dynamodb   https   aws amazon com dynamodb   dynamodb rw capacity   https   docs aws amazon com amazondynamodb latest developerguide WorkingWithTables html ProvisionedThroughput"}
{"questions":"vault key value store In addition to providing durable storage inclusion of this backend will also register Vault as a service in Consul with a default health The Consul storage backend is used to persist Vault s data in Consul s page title Consul Storage Backends Configuration layout docs check","answers":"---\nlayout: docs\npage_title: Consul - Storage Backends - Configuration\ndescription: |-\n  The Consul storage backend is used to persist Vault's data in Consul's\n  key-value store. In addition to providing durable storage, inclusion of this\n  backend will also register Vault as a service in Consul with a default health\n  check.\n---\n\n# Consul storage backend\n\nThe Consul storage backend is used to persist Vault's data in [Consul's][consul]\nkey-value store. In addition to providing durable storage, inclusion of this\nbackend will also register Vault as a service in Consul with a default health\ncheck.\n\n@include 'consul-dataplane-compat.mdx'\n\n- **High Availability** \u2013 the Consul storage backend supports high availability.\n\n- **HashiCorp Supported** \u2013 the Consul storage backend is officially supported\n  by HashiCorp.\n\n```hcl\nstorage \"consul\" {\n  address = \"127.0.0.1:8500\"\n  path    = \"vault\/\"\n}\n```\n\nOnce properly configured, an unsealed Vault installation should be available and\naccessible at:\n\n```text\nactive.vault.service.consul\n```\n\nUnsealed Vault instances in standby mode are available at:\n\n```text\nstandby.vault.service.consul\n```\n\nAll unsealed Vault instances are available as healthy at:\n\n```text\nvault.service.consul\n```\n\nSealed Vault instances will mark themselves as unhealthy to avoid being returned\nat Consul's service discovery layer.\n\nNote that if you have configured multiple listeners for Vault, you must specify\nwhich one Consul should advertise to the cluster using [`api_addr`][api-addr]\nand [`cluster_addr`][cluster-addr] ([example][listener-example]).\n\n## `consul` 
parameters\n\n- `address` `(string: \"127.0.0.1:8500\")` \u2013 Specifies the address of the Consul\n  agent to communicate with. This can be an IP address, DNS record, or unix\n  socket. It is recommended that you communicate with a local Consul agent; do\n  not communicate directly with a server.\n\n- `check_timeout` `(string: \"5s\")` \u2013 Specifies the check interval used to send\n  health check information back to Consul. This is specified using a label\n  suffix like `\"30s\"` or `\"1h\"`.\n\n- `consistency_mode` `(string: \"default\")` \u2013 Specifies the Consul\n  [consistency mode][consul-consistency]. Possible values are `\"default\"` or\n  `\"strong\"`.\n\n- `disable_registration` `(string: \"false\")` \u2013 Specifies whether Vault should\n  register itself with Consul.\n\n- `max_parallel` `(string: \"128\")` \u2013 Specifies the maximum number of concurrent\n  requests to Consul. Make sure that your Consul agents are configured to\n  support this level of parallelism, see\n  [http_max_conns_per_client](\/consul\/docs\/agent\/config\/config-files#http_max_conns_per_client).\n\n- `path` `(string: \"vault\/\")` \u2013 Specifies the path in Consul's key-value store\n  where Vault data will be stored.\n\n- `scheme` `(string: \"http\")` \u2013 Specifies the scheme to use when communicating\n  with Consul. This can be set to \"http\" or \"https\". It is highly recommended\n  that you communicate with Consul over https for non-local connections. When\n  communicating over a unix socket, this option is ignored.\n\n- `service` `(string: \"vault\")` \u2013 Specifies the name of the service to register\n  in Consul.\n\n- `service_tags` `(string: \"\")` \u2013 Specifies a comma-separated list of tags to\n  attach to the service registration in Consul.\n\n- `service_meta` `(map[string]string: {})` \u2013 Specifies a key-value list of meta tags to\n  attach to the service registration in Consul. 
See [ServiceMeta](\/consul\/api-docs\/catalog#servicemeta) in the Consul docs for more information.\n\n- `service_address` `(string: nil)` \u2013 Specifies a service-specific address to\n  set on the service registration in Consul. If unset, Vault will use what it\n  knows to be the HA redirect address - which is usually desirable. Setting\n  this parameter to `\"\"` will tell Consul to leverage the configuration of the\n  node the service is registered on dynamically. This could be beneficial if\n  you intend to leverage Consul's\n  [`translate_wan_addrs`][consul-translate-wan-addrs] parameter.\n\n- `token` `(string: \"\")` \u2013 Specifies the [Consul ACL token][consul-acl] with\n  permission to read and write from the `path` in Consul's key-value store.\n  This is **not** a Vault token. This can also be provided via the environment\n  variable [`CONSUL_HTTP_TOKEN`][consul-token]. See the ACL section below for help.\n\n- `session_ttl` `(string: \"15s\")` - Specifies the minimum allowed [session\n  TTL][consul-session-ttl]. Consul server has a lower limit of 10s on the\n  session TTL by default. The value of `session_ttl` here cannot be lesser than\n  10s unless the `session_ttl_min` on the consul server's configuration has a\n  lesser value.\n\n- `lock_wait_time` `(string: \"15s\")` - Specifies the wait time before a lock\n  acquisition is made. This affects the minimum time it takes to cancel a\n  lock acquisition.\n\nThe following settings apply when communicating with Consul via an encrypted\nconnection. You can read more about encrypting Consul connections on the\n[Consul encryption page][consul-encryption].\n\n- `tls_ca_file` `(string: \"\")` \u2013 Specifies the path to the CA certificate used\n  for Consul communication. 
This defaults to system bundle if not specified.\n  This should be set according to the\n  [`ca_file`](\/consul\/docs\/agent\/config\/config-files#ca_file) setting in\n  Consul.\n\n- `tls_cert_file` `(string: \"\")` (optional) \u2013 Specifies the path to the\n  certificate for Consul communication. This should be set according to the\n  [`cert_file`](\/consul\/docs\/agent\/config\/config-files#cert_file) setting\n  in Consul.\n\n- `tls_key_file` `(string: \"\")` \u2013 Specifies the path to the private key for\n  Consul communication. This should be set according to the\n  [`key_file`](\/consul\/docs\/agent\/config\/config-files#key_file) setting\n  in Consul.\n\n- `tls_min_version` `(string: \"tls12\")` \u2013 Specifies the minimum TLS version to\n  use. Accepted values are `\"tls10\"`, `\"tls11\"`, `\"tls12\"` or `\"tls13\"`.\n\n- `tls_skip_verify` `(string: \"false\")` \u2013 Disable verification of TLS certificates.\n  Using this option is highly discouraged.\n\n## ACLs\n\nIf using ACLs in Consul, you'll need appropriate permissions. 
For Consul 0.8,\nthe following will work for most use-cases, assuming that your service name is\n`vault` and the prefix being used is `vault\/`:\n\n```json\n{\n  \"key\": {\n    \"vault\/\": {\n      \"policy\": \"write\"\n    }\n  },\n  \"service\": {\n    \"vault\": {\n      \"policy\": \"write\"\n    }\n  },\n  \"agent\": {\n    \"\": {\n      \"policy\": \"read\"\n    }\n  },\n  \"session\": {\n    \"\": {\n      \"policy\": \"write\"\n    }\n  }\n}\n```\n\nFor Consul 1.4+, the following example takes into account the changed ACL\nlanguage:\n\n```json\n{\n  \"key_prefix\": {\n    \"vault\/\": {\n      \"policy\": \"write\"\n    }\n  },\n  \"service\": {\n    \"vault\": {\n      \"policy\": \"write\"\n    }\n  },\n  \"agent_prefix\": {\n    \"\": {\n      \"policy\": \"read\"\n    }\n  },\n  \"session_prefix\": {\n    \"\": {\n      \"policy\": \"write\"\n    }\n  }\n}\n```\n\n## `consul` examples\n\n### Local agent\n\nThis example shows a sample physical backend configuration which communicates\nwith a local Consul agent running on `127.0.0.1:8500`.\n\n```hcl\nstorage \"consul\" {}\n```\n\n### Detailed customization\n\nThis example shows communicating with Consul on a custom address with an ACL\ntoken.\n\n```hcl\nstorage \"consul\" {\n  address = \"10.5.7.92:8194\"\n  token   = \"abcd1234\"\n}\n```\n\n### Custom storage path\n\nThis example shows storing data at a custom path in Consul's key-value store.\nThis path must be readable and writable by the Consul ACL token, if Consul\nconfigured to use ACLs.\n\n```hcl\nstorage \"consul\" {\n  path = \"vault\/\"\n}\n```\n\n### Consul via unix socket\n\nThis example shows communicating with Consul over a local unix socket.\n\n```hcl\nstorage \"consul\" {\n  address = \"unix:\/\/\/tmp\/.consul.http.sock\"\n}\n```\n\n### Custom TLS\n\nThis example shows using a custom CA, certificate, and key file to securely\ncommunicate with Consul over TLS.\n\n```hcl\nstorage \"consul\" {\n  scheme        = \"https\"\n  tls_ca_file   
= \"\/etc\/pem\/vault.ca\"\n  tls_cert_file = \"\/etc\/pem\/vault.cert\"\n  tls_key_file  = \"\/etc\/pem\/vault.key\"\n}\n```\n\n[consul]: https:\/\/www.consul.io\/ 'Consul by HashiCorp'\n[consul-acl]: \/consul\/docs\/guides\/acl 'Consul ACLs'\n[consul-consistency]: \/consul\/api-docs\/features\/consistency 'Consul Consistency Modes'\n[consul-encryption]: \/consul\/docs\/agent\/encryption 'Consul Encryption'\n[consul-translate-wan-addrs]: \/consul\/docs\/agent\/options#translate_wan_addrs 'Consul Configuration'\n[consul-token]: \/consul\/docs\/commands\/acl\/set-agent-token#token-lt-value-gt- 'Consul Token'\n[consul-session-ttl]: \/consul\/docs\/agent\/options#session_ttl_min 'Consul Configuration'\n[api-addr]: \/vault\/docs\/configuration#api_addr\n[cluster-addr]: \/vault\/docs\/configuration#cluster_addr\n[listener-example]: \/vault\/docs\/configuration\/listener\/tcp#listening-on-multiple-interfaces","site":"vault","answers_cleaned":"    layout  docs page title  Consul   Storage Backends   Configuration description       The Consul storage backend is used to persist Vault s data in Consul s   key value store  In addition to providing durable storage  inclusion of this   backend will also register Vault as a service in Consul with a default health   check         Consul storage backend  The Consul storage backend is used to persist Vault s data in  Consul s  consul  key value store  In addition to providing durable storage  inclusion of this backend will also register Vault as a service in Consul with a default health check    include  consul dataplane compat mdx       High Availability     the Consul storage backend supports high availability       HashiCorp Supported     the Consul storage backend is officially supported   by HashiCorp      hcl storage  consul      address    127 0 0 1 8500    path       vault          Once properly configured  an unsealed Vault installation should be available and accessible at      text active vault service consul      
Unsealed Vault instances in standby mode are available at      text standby vault service consul      All unsealed Vault instances are available as healthy at      text vault service consul      Sealed Vault instances will mark themselves as unhealthy to avoid being returned at Consul s service discovery layer   Note that if you have configured multiple listeners for Vault  you must specify which one Consul should advertise to the cluster using   api addr   api addr  and   cluster addr   cluster addr    example  listener example         consul  parameters     address    string   127 0 0 1 8500      Specifies the address of the Consul   agent to communicate with  This can be an IP address  DNS record  or unix   socket  It is recommended that you communicate with a local Consul agent  do   not communicate directly with a server      check timeout    string   5s      Specifies the check interval used to send   health check information back to Consul  This is specified using a label   suffix like   30s   or   1h        consistency mode    string   default      Specifies the Consul    consistency mode  consul consistency   Possible values are   default   or     strong        disable registration    string   false      Specifies whether Vault should   register itself with Consul      max parallel    string   128      Specifies the maximum number of concurrent   requests to Consul  Make sure that your Consul agents are configured to   support this level of parallelism  see    http max conns per client   consul docs agent config config files http max conns per client       path    string   vault       Specifies the path in Consul s key value store   where Vault data will be stored      scheme    string   http      Specifies the scheme to use when communicating   with Consul  This can be set to  http  or  https   It is highly recommended   you communicate with Consul over https over non local connections  When   communicating over a unix socket  this option is ignored      
service    string   vault      Specifies the name of the service to register   in Consul      service tags    string         Specifies a comma separated list of tags to   attach to the service registration in Consul      service meta    map string string         Specifies a key value list of meta tags to   attach to the service registration in Consul  See  ServiceMeta   consul api docs catalog servicemeta  in the Consul docs for more information      service address    string  nil     Specifies a service specific address to   set on the service registration in Consul  If unset  Vault will use what it   knows to be the HA redirect address   which is usually desirable  Setting   this parameter to      will tell Consul to leverage the configuration of the   node the service is registered on dynamically  This could be beneficial if   you intend to leverage Consul s     translate wan addrs   consul translate wan addrs  parameter      token    string         Specifies the  Consul ACL token  consul acl  with   permission to read and write from the  path  in Consul s key value store    This is   not   a Vault token  This can also be provided via the environment   variable   CONSUL HTTP TOKEN   consul token   See the ACL section below for help      session ttl    string   15s      Specifies the minimum allowed  session   TTL  consul session ttl   Consul server has a lower limit of 10s on the   session TTL by default  The value of  session ttl  here cannot be lesser than   10s unless the  session ttl min  on the consul server s configuration has a   lesser value      lock wait time    string   15s      Specifies the wait time before a lock   acquisition is made  This affects the minimum time it takes to cancel a   lock acquisition   The following settings apply when communicating with Consul via an encrypted connection  You can read more about encrypting Consul connections on the  Consul encryption page  consul encryption       tls ca file    string         Specifies the 
path to the CA certificate used   for Consul communication  This defaults to system bundle if not specified    This should be set according to the     ca file    consul docs agent config config files ca file  setting in   Consul      tls cert file    string        optional    Specifies the path to the   certificate for Consul communication  This should be set according to the     cert file    consul docs agent config config files cert file  setting   in Consul      tls key file    string         Specifies the path to the private key for   Consul communication  This should be set according to the     key file    consul docs agent config config files key file  setting   in Consul      tls min version    string   tls12      Specifies the minimum TLS version to   use  Accepted values are   tls10      tls11      tls12   or   tls13        tls skip verify    string   false      Disable verification of TLS certificates    Using this option is highly discouraged      ACLs  If using ACLs in Consul  you ll need appropriate permissions  For Consul 0 8  the following will work for most use cases  assuming that your service name is  vault  and the prefix being used is  vault        json      key          vault             policy    write                service          vault            policy    write                agent                      policy    read                session                      policy    write                   For Consul 1 4   the following example takes into account the changed ACL language      json      key prefix          vault             policy    write                service          vault            policy    write                agent prefix                      policy    read                session prefix                      policy    write                       consul  examples      Local agent  This example shows a sample physical backend configuration which communicates with a local Consul agent running on  127 0 0 1 8500       hcl storage  
consul              Detailed customization  This example shows communicating with Consul on a custom address with an ACL token      hcl storage  consul      address    10 5 7 92 8194    token      abcd1234             Custom storage path  This example shows storing data at a custom path in Consul s key value store  This path must be readable and writable by the Consul ACL token  if Consul configured to use ACLs      hcl storage  consul      path    vault              Consul via unix socket  This example shows communicating with Consul over a local unix socket      hcl storage  consul      address    unix    tmp  consul http sock             Custom TLS  This example shows using a custom CA  certificate  and key file to securely communicate with Consul over TLS      hcl storage  consul      scheme           https    tls ca file       etc pem vault ca    tls cert file     etc pem vault cert    tls key file      etc pem vault key          consul   https   www consul io   Consul by HashiCorp   consul acl    consul docs guides acl  Consul ACLs   consul consistency    consul api docs features consistency  Consul Consistency Modes   consul encryption    consul docs agent encryption  Consul Encryption   consul translate wan addrs    consul docs agent options translate wan addrs  Consul Configuration   consul token    consul docs commands acl set agent token token lt value gt   Consul Token   consul session ttl    consul docs agent options session ttl min  Consul Configuration   api addr    vault docs configuration api addr  cluster addr    vault docs configuration cluster addr  listener example    vault docs configuration listener tcp listening on multiple interfaces"}
{"questions":"vault Consensus Algorithm The Integrated Storage Raft backend is used to persist Vault s data Unlike all the other data Instead all the nodes in a Vault cluster will have a replicated copy of the entire data The data is replicated across the nodes using the Raft page title Integrated Storage Storage Backends Configuration layout docs storage backends this backend does not operate from a single source for the","answers":"---\nlayout: docs\npage_title: Integrated Storage - Storage Backends - Configuration\ndescription: >-\n\n  The Integrated Storage (Raft) backend is used to persist Vault's data. Unlike all the other\n  storage backends, this backend does not operate from a single source for the\n  data. Instead all the nodes in a Vault cluster will have a replicated copy of\n  the entire data. The data is replicated across the nodes using the Raft\n  Consensus Algorithm.\n---\n\n# Integrated storage (Raft) backend\n\nThe Integrated Storage backend is used to persist Vault's data. Unlike other storage\nbackends, Integrated Storage does not operate from a single source of data. 
Instead\nall the nodes in a Vault cluster will have a replicated copy of Vault's data.\nData gets replicated across all the nodes via the [Raft Consensus\nAlgorithm][raft].\n\n- **High Availability** \u2013 the Integrated Storage backend supports high availability.\n\n- **HashiCorp Supported** \u2013 the Integrated Storage backend is officially supported\n  by HashiCorp.\n\n```hcl\nstorage \"raft\" {\n  path = \"\/path\/to\/raft\/data\"\n  node_id = \"raft_node_1\"\n}\ncluster_addr = \"http:\/\/127.0.0.1:8201\"\n```\n\n~> **Note:** When using the Integrated Storage backend, it is required to provide\n[`cluster_addr`](\/vault\/docs\/concepts\/ha#per-node-cluster-address) to indicate the address and port to be used for communication\nbetween the nodes in the Raft cluster.\n\n~> **Note:** When using the Integrated Storage backend, a separate\n[`ha_storage`](\/vault\/docs\/configuration#ha_storage)\nbackend cannot be declared.\n\n~> **Note:** When using the Integrated Storage backend, it is strongly recommended to\nset [`disable_mlock`](\/vault\/docs\/configuration#disable_mlock) to `true`, and to disable memory swapping on the system.\n\n## `raft` parameters\n\n- `path` `(string: \"\")` \u2013 The file system path where all the Vault data gets\n  stored.\n  This value can be overridden by setting the `VAULT_RAFT_PATH` environment variable.\n\n- `node_id` `(string: \"\")` - The identifier for the node in the Raft cluster.\n  You can override `node_id` with the `VAULT_RAFT_NODE_ID` environment\n  variable.  When `VAULT_RAFT_NODE_ID` is unset, Vault assigns a random\n  GUID during initialization and writes the value to `data\/node-id` in the\n  directory specified by the `path` parameter.\n\n- `performance_multiplier` `(integer: 0)` - An integer multiplier used by\n  servers to scale key Raft timing parameters, where each increment translates to approximately 1 \u2013 2 seconds of delay. 
For example, setting the multiplier to \"3\" translates to 3 \u2013 6 seconds of total delay.  Tuning the multiplier affects the time it\n  takes Vault to detect leader failures and to perform leader elections, at the\n  expense of requiring more network and CPU resources for better performance.\n  Omitting this value or setting it to 0 uses default timing described below.\n  Lower values are used to tighten timing and increase sensitivity while higher\n  values relax timings and reduce sensitivity.\n\n\nBy default, Vault uses a balanced timing value of 5, which is suitable for most\nplatforms and scenarios. You should only adjust the timing value when platform\ntelemetry indicates that a change is needed or different timing is required due\nto the overall reliability of your platform (network, etc.).\n\nSetting the timing value to 1 configures Raft to its highest performance (lowest\ndelay) mode. The maximum allowed value is 10.\n\n- `trailing_logs` `(integer: 10000)` - This controls how many log entries are\n  left in the log store on disk after a snapshot is made. This should only be\n  adjusted when followers cannot catch up to the leader due to a very large\n  snapshot size and high write throughput causing log truncation before a\n  snapshot can be fully installed. If you need to use this to recover a cluster,\n  consider reducing write throughput or the amount of data stored on Vault. The\n  default value is 10000 which is suitable for all normal workloads. The\n  `trailing_logs` metric is not the same as `max_trailing_logs`.\n\n- `snapshot_threshold` `(integer: 8192)` - This controls the minimum number of Raft\n  commit entries between snapshots that are saved to disk. This is a low-level\n  parameter that should rarely need to be changed. 
Very busy clusters\n  experiencing excessive disk IO may increase this value to reduce disk IO and\n  minimize the chances of all servers taking snapshots at the same time.\n  Increasing this trades off disk IO for disk space since the log will grow much\n  larger and the space in the `raft.db` file can't be reclaimed till the next\n  snapshot. Servers may take longer to recover from crashes or failover if this\n  is increased significantly as more logs will need to be replayed.\n\n- `snapshot_interval` `(integer: 120 seconds)` - The snapshot interval\n   controls how often Raft checks whether a snapshot operation is\n   required. Raft randomly staggers snapshots between the configured\n   interval and twice the configured interval to keep the entire cluster\n   from performing a snapshot at once. The default snapshot interval is\n   120 seconds.\n\n- `retry_join` `(list: [])` - A set of connection details for another node in the\n  cluster, which is used to help nodes locate a leader in order to join a cluster.\n  There can be one or more [`retry_join`](#retry_join-stanza) stanzas.\n\n  If the connection details for all nodes in the cluster are known in advance, you\n  can include these stanzas to enable nodes to automatically join the Raft cluster.\n  Once one of the nodes is initialized as the leader, the remaining nodes will use\n  their [`retry_join`](#retry_join-stanza) configuration to locate the leader and\n  join the cluster. Note that when using Shamir seal, the joined nodes will still\n  need to be unsealed manually.\n  See [the section below](#retry_join-stanza) for the parameters accepted by the\n  [`retry_join`](#retry_join-stanza) stanza.\n\n- `retry_join_as_non_voter` `(boolean: false)` - <EnterpriseAlert inline \/>\n  Configures this node as a permanent non-voter. The node will not participate\n  in the Raft quorum but will still receive the data replication stream\n  enhancing the read throughput of the cluster. 
This option has the same effect\n  as the [`-non-voter`](\/vault\/docs\/commands\/operator\/raft#non-voter) flag for\n  the `vault operator raft join` command, but only affects voting status when\n  joining via `retry_join` config. You can override the non-voter configuration\n  by setting the `VAULT_RAFT_RETRY_JOIN_AS_NON_VOTER` environment variable to\n  any non-empty value. Configuring a node as a non-voter is only valid if there\n  is at least one `retry_join` stanza.\n\n- `max_entry_size` `(integer: 1048576)` - This configures the maximum number of\n  bytes for a Raft entry. It applies to both Put operations and transactions.\n  Any put or transaction operation exceeding this configuration value will cause\n  the respective operation to fail. Raft has a suggested max size of data in a\n  Raft log entry. This is based on current architecture, default timing, etc.\n  Integrated Storage also uses a chunk size that is the threshold used for\n  breaking a large value into chunks. By default, the chunk size is the same as\n  Raft's max size log entry. The default value for this configuration is 1048576\n  -- two times the chunking size.\n  - **Note:** This option corresponds to [Consul's `kv_max_value_size` parameter](\/consul\/docs\/agent\/config\/config-files#kv_max_value_size) for\n    Vault clusters using a Consul storage backend.  If you are migrating from Consul\n    storage to Raft Integrated Storage,  and have changed this value in Consul from its\n    default to a value larger than the Integrated Storage default of 1MB, then you will\n    need to make the same change in Vault's Integrated Storage config.\n\n- `max_mount_and_namespace_table_entry_size` `(integer)`- <EnterpriseAlert\n  inline \/> Overrides `max_entry_size` to set a different limit for the specific\n  storage entries that contain mount tables, auth tables and namespace\n  configuration data. 
If you are reaching limits on the mount table size, you\n  can use this to increase the number of mounts and namespaces that can be\n  stored without the risk of other storage entries becoming too large. All other\n  notes on [`max_entry_size`](#max-entry-size) apply. Before changing this, read\n  the [Run Vault Enterprise\n  with many namespaces](\/vault\/docs\/enterprise\/namespaces\/namespace-limits) guide regarding important performance considerations.\n\n- `autopilot_reconcile_interval` `(string: \"10s\")` - This is the interval after\n  which autopilot will pick up any state changes. State change could mean multiple\n  things; for example a newly joined voter node, initially added as non-voter to\n  the Raft cluster by autopilot has successfully completed the stabilization\n  period thereby qualifying for being promoted as a voter, a node that has become\n  unhealthy and needs to be shown as such in the state API, a node has been marked\n  as dead needing eviction from Raft configuration, etc.\n\n- `autopilot_update_interval` `(string: \"2s\")` - This is the interval after which\n  autopilot will poll Vault for any updates to the information it cares about. This\n  includes things like the autopilot configuration, current autopilot state, raft\n  configuration, known servers, latest raft index, and stats for all the known servers.\n  The information that autopilot receives will be used to calculate its next state.\n\n- `autopilot_upgrade_version` `(string: \"\")` - <EnterpriseAlert inline \/>\n  Overrides the version used by Autopilot during [automated\n  upgrades](\/vault\/docs\/enterprise\/automated-upgrades). Vault's build version is\n  used by default. 
The string provided must be a valid [Semantic\n  Version](https:\/\/semver.org).\n\n- `autopilot_redundancy_zone` `(string: \"\")` - <EnterpriseAlert inline \/>\n  Specifies a [redundancy zone](\/vault\/docs\/enterprise\/redundancy-zones) which\n  is used by Autopilot to automatically swap out failed servers for enhanced\n  reliability.\n\n<Warning title=\"Experimental\">\n\n- `raft_wal` `(boolean: false)` - Enables the\n  [write-ahead](\/vault\/docs\/internals\/integrated-storage#configurable-raft-log-store)\n  log store instead of the default of BoltDB.\n\n- `raft_log_verifier_enabled` `(boolean: false)` - Enables the raft log verifier.\n  The verifier periodically writes small raft logs and verifies checksums to\n  ensure that data has been written correctly. The verifier works with raft\n  write-ahead **and** BoltDB log stores.\n\n- `raft_log_verification_interval` `(string: \"60s\")` - Sets the interval at\n  which the raft log verifier writes verification logs. The default interval is\n  `60s` and the minimum supported interval is `10s`. The `raft_log_verification_interval`\n  parameter has no effect if `raft_log_verifier_enabled` is `false`.\n\n<\/Warning>\n\n### `retry_join` stanza\n\n- `leader_api_addr` `(string: \"\")` - Address of a possible leader node.\n\n- `auto_join` `(string: \"\")` - Cloud auto-join configuration, using\n  [go-discover](https:\/\/github.com\/hashicorp\/go-discover) syntax.\n\n- `auto_join_scheme` `(string: \"\")` - The optional URI protocol scheme for addresses\n  discovered via auto-join. 
Available values are `http` or `https`.\n\n- `auto_join_port` `(uint: \"\")` - The optional port used for addresses discovered\n  via auto-join.\n\n- `leader_tls_servername` `(string: \"\")` - The TLS server name to use when\n  connecting with HTTPS.\n  Should match one of the names in the [DNS\n  SANs](https:\/\/en.wikipedia.org\/wiki\/Subject_Alternative_Name) of the remote\n  server certificate.\n  See also [Integrated Storage and TLS](\/vault\/docs\/concepts\/integrated-storage#autojoin-with-tls-servername)\n\n- `leader_ca_cert_file` `(string: \"\")` - File path to the CA cert of the\n  possible leader node.\n\n- `leader_client_cert_file` `(string: \"\")` - File path to the client certificate\n  for the follower node to establish client authentication with the possible\n  leader node.\n\n- `leader_client_key_file` `(string: \"\")` - File path to the client key for the\n  follower node to establish client authentication with the possible leader node.\n\n- `leader_ca_cert` `(string: \"\")` - CA cert of the possible leader node.\n\n- `leader_client_cert` `(string: \"\")` - Client certificate for the follower node\n  to establish client authentication with the possible leader node.\n\n- `leader_client_key` `(string: \"\")` - Client key for the follower node to\n  establish client authentication with the possible leader node.\n\nEach [`retry_join`](#retry_join-stanza) block may provide TLS certificates via\nfile paths or as a single-line certificate string value with newlines delimited\nby `\\n`, but not a combination of both. Each [`retry_join`](#retry_join-stanza)\nstanza may contain either a [`leader_api_addr`](#leader_api_addr) value or a\ncloud [`auto_join`](#auto_join) configuration value, but not both. 
When an\n[`auto_join`](#auto_join) value is provided, Vault will automatically attempt to\ndiscover and resolve potential Raft leader addresses using [go-discover](https:\/\/github.com\/hashicorp\/go-discover).\nSee the go-discover\n[README](https:\/\/github.com\/hashicorp\/go-discover\/blob\/master\/README.md)\nfor details on the format of the `auto_join` value.\n\nBy default, Vault will attempt to reach discovered peers using HTTPS and port 8200. Operators may override these through the\n[`auto_join_scheme`](#auto_join_scheme) and [`auto_join_port`](#auto_join_port)\nfields respectively.\n\nExample Configuration:\n\n```hcl\nstorage \"raft\" {\n  path    = \"\/Users\/foo\/raft\/\"\n  node_id = \"node1\"\n\n  retry_join {\n    leader_api_addr = \"http:\/\/127.0.0.2:8200\"\n    leader_ca_cert_file = \"\/path\/to\/ca1\"\n    leader_client_cert_file = \"\/path\/to\/client\/cert1\"\n    leader_client_key_file = \"\/path\/to\/client\/key1\"\n  }\n  retry_join {\n    leader_api_addr = \"http:\/\/127.0.0.3:8200\"\n    leader_ca_cert_file = \"\/path\/to\/ca2\"\n    leader_client_cert_file = \"\/path\/to\/client\/cert2\"\n    leader_client_key_file = \"\/path\/to\/client\/key2\"\n  }\n  retry_join {\n    leader_api_addr = \"http:\/\/127.0.0.4:8200\"\n    leader_ca_cert_file = \"\/path\/to\/ca3\"\n    leader_client_cert_file = \"\/path\/to\/client\/cert3\"\n    leader_client_key_file = \"\/path\/to\/client\/key3\"\n  }\n  retry_join {\n    auto_join = \"provider=aws region=eu-west-1 tag_key=vault tag_value=... access_key_id=... 
secret_access_key=...\"\n  }\n}\n```\n\n## Tutorial\n\nRefer to the [Integrated\nStorage](\/vault\/tutorials\/raft) series of tutorials to learn more about implementing Vault using Integrated Storage.\n\n[raft]: https:\/\/raft.github.io\/ 'The Raft Consensus Algorithm'","site":"vault","answers_cleaned":"    layout  docs page title  Integrated Storage   Storage Backends   Configuration description        The Integrated Storage  Raft  backend is used to persist Vault s data  Unlike all the other   storage backends  this backend does not operate from a single source for the   data  Instead all the nodes in a Vault cluster will have a replicated copy of   the entire data  The data is replicated across the nodes using the Raft   Consensus Algorithm         Integrated storage  Raft  backend  The Integrated Storage backend is used to persist Vault s data  Unlike other storage backends  Integrated Storage does not operate from a single source of data  Instead all the nodes in a Vault cluster will have a replicated copy of Vault s data  Data gets replicated across all the nodes via the  Raft Consensus Algorithm  raft        High Availability     the Integrated Storage backend supports high availability       HashiCorp Supported     the Integrated Storage backend is officially supported   by HashiCorp      hcl storage  raft      path     path to raft data    node id    raft node 1    cluster addr    http   127 0 0 1 8201            Note    When using the Integrated Storage backend  it is required to provide   cluster addr    vault docs concepts ha per node cluster address  to indicate the address and port to be used for communication between the nodes in the Raft cluster        Note    When using the Integrated Storage backend  a separate   ha storage    vault docs configuration ha storage  backend cannot be declared        Note    When using the Integrated Storage backend  it is strongly recommended to set   disable mlock    vault docs configuration disable mlock  to  true  
 and to disable memory swapping on the system       raft  parameters     path    string         The file system path where all the Vault data gets   stored    This value can be overridden by setting the  VAULT RAFT PATH  environment variable      node id    string         The identifier for the node in the Raft cluster    You can override  node id  with the  VAULT RAFT NODE ID  environment   variable   When  VAULT RAFT NODE ID  is unset  Vault assigns a random   GUID during initialization and writes the value to  data node id  in the   directory specified by the  path  parameter      performance multiplier    integer  0     An integer multiplier used by   servers to scale key Raft timing parameters  where each increment translates to approximately 1   2 seconds of delay  For example  setting the multiplier to  3  translates to 3   6 seconds of total delay   Tuning  the multiplier affects the time it   takes Vault to detect leader failures and to perform leader elections  at the   expense of requiring more network and CPU resources for better performance    Omitting this value or setting it to 0 uses default timing described below    Lower values are used to tighten timing and increase sensitivity while higher   values relax timings and reduce sensitivity    By default  Vault uses a balanced timing value of 5  which is suitable for most platforms and scenarios  You should only adjust the timing value when platform telemetry indicators that a change is needed or different timing is required due to the overall reliability your platform  network  etc     Setting the timing value to 1 configures Raft to its highest performance  lowest delay  mode  The maximum allowed value is 10      trailing logs    integer  10000     This controls how many log entries are   left in the log store on disk after a snapshot is made  This should only be   adjusted when followers cannot catch up to the leader due to a very large   snapshot size and high write throughput causing log 
truncation before a   snapshot can be fully installed  If you need to use this to recover a cluster    consider reducing write throughput or the amount of data stored on Vault  The   default value is 10000 which is suitable for all normal workloads  The    trailing logs  metric is not the same as  max trailing logs       snapshot threshold    integer  8192     This controls the minimum number of Raft   commit entries between snapshots that are saved to disk  This is a low level   parameter that should rarely need to be changed  Very busy clusters   experiencing excessive disk IO may increase this value to reduce disk IO and   minimize the chances of all servers taking snapshots at the same time    Increasing this trades off disk IO for disk space since the log will grow much   larger and the space in the  raft db  file can t be reclaimed till the next   snapshot  Servers may take longer to recover from crashes or failover if this   is increased significantly as more logs will need to be replayed      snapshot interval    integer  120 seconds     The snapshot interval    controls how often Raft checks whether a snapshot operation is    required  Raft randomly staggers snapshots between the configured    interval and twice the configured interval to keep the entire cluster    from performing a snapshot at once  The default snapshot interval is    120 seconds      retry join    list         A set of connection details for another node in the   cluster  which is used to help nodes locate a leader in order to join a cluster    There can be one or more   retry join    retry join stanza  stanzas     If the connection details for all nodes in the cluster are known in advance  you   can include these stanzas to enable nodes to automatically join the Raft cluster    Once one of the nodes is initialized as the leader  the remaining nodes will use   their   retry join    retry join stanza  configuration to locate the leader and   join the cluster  Note that when using Shamir 
seal, the joined nodes will still need to be unsealed manually. See the [`retry_join` stanza](#retry_join-stanza) section below for the parameters accepted by the `retry_join` stanza.

- `retry_join_as_non_voter` `(boolean: false)` <EnterpriseAlert inline /> -
  Configures this node as a permanent non-voter. The node will not participate
  in the Raft quorum but will still receive the data replication stream,
  enhancing the read throughput of the cluster. This option has the same effect
  as the [`-non-voter`](/vault/docs/commands/operator/raft#non-voter) flag for
  the `vault operator raft join` command, but only affects voting status when
  joining via `retry_join` config. You can override the non-voter configuration
  by setting the `VAULT_RAFT_RETRY_JOIN_AS_NON_VOTER` environment variable to
  any non-empty value. Configuring a node as a non-voter is only valid if there
  is at least one `retry_join` stanza.

- `max_entry_size` `(integer: 1048576)` - This configures the maximum number of
  bytes for a Raft entry. It applies to both Put operations and transactions.
  Any put or transaction operation exceeding this configuration value will cause
  the respective operation to fail. Raft has a suggested max size of data in a
  Raft log entry. This is based on current architecture, default timing, etc.
  Integrated Storage also uses a chunk size that is the threshold used for
  breaking a large value into chunks. By default, the chunk size is the same as
  Raft's max size log entry. The default value for this configuration is 1048576
  (two times the chunking size).

  ~> **Note:** This option corresponds to Consul's
  [`kv_max_value_size`](/consul/docs/agent/config/config-files#kv_max_value_size)
  parameter for Vault clusters using a Consul storage backend. If you are
  migrating from Consul storage to Raft Integrated Storage, and have changed
  this value in Consul from its default to a value larger than the Integrated
  Storage default of 1MB, then you will need to make the same change in Vault's
  Integrated Storage config.

- `max_mount_and_namespace_table_entry_size` `(integer)` <EnterpriseAlert inline /> -
  Overrides `max_entry_size` to set a different limit for the specific
  storage entries that contain mount tables, auth tables and namespace
  configuration data. If you are reaching limits on the mount table size, you
  can use this to increase the number of mounts and namespaces that can be
  stored without the risk of other storage entries becoming too large. All other
  notes on [`max_entry_size`](#max_entry_size) apply. Before changing this, read
  the [Run Vault Enterprise with many
  namespaces](/vault/docs/enterprise/namespaces/namespace-limits) guide
  regarding important performance considerations.

- `autopilot_reconcile_interval` `(string: "10s")` - This is the interval after
  which autopilot will pick up any state changes. State change could mean multiple
  things; for example, a newly joined voter node, initially added as non-voter to
  the Raft cluster by autopilot, has successfully completed the stabilization
  period, thereby qualifying for being promoted as a voter; a node that has become
  unhealthy and needs to be shown as such in the state API; a node has been marked
  as dead needing eviction from Raft configuration; etc.

- `autopilot_update_interval` `(string: "2s")` - This is the interval after which
  autopilot will poll Vault for any updates to the information it cares about. This
  includes things like the autopilot configuration, current autopilot state, raft
  configuration, known servers, latest raft index, and stats for all the known servers.
  The information that autopilot receives will be used to calculate its next state.

- `autopilot_upgrade_version` `(string: "")` <EnterpriseAlert inline /> -
  Overrides the version used by Autopilot during [automated
  upgrades](/vault/docs/enterprise/automated-upgrades). Vault's build version is
  used by default. The string provided must be a valid [Semantic
  Version](https://semver.org).

- `autopilot_redundancy_zone` `(string: "")` <EnterpriseAlert inline /> -
  Specifies a [redundancy zone](/vault/docs/enterprise/redundancy-zones) which
  is used by Autopilot to automatically swap out failed servers for enhanced
  reliability.

<Warning title="Experimental">

- `raft_wal` `(boolean: false)` - Enables the
  [write-ahead](/vault/docs/internals/integrated-storage#configurable-raft-log-store)
  log store instead of the default of BoltDB.

- `raft_log_verifier_enabled` `(boolean: false)` - Enables the raft log verifier.
  The verifier periodically writes small raft logs and verifies checksums to
  ensure that data has been written correctly. The verifier works with raft
  write-ahead and BoltDB log stores.

- `raft_log_verification_interval` `(string: "60s")` - Sets the interval at
  which the raft log verifier writes verification logs. The default interval is
  `60s`, and the minimum supported interval is `10s`. The
  `raft_log_verification_interval` parameter has no effect if
  `raft_log_verifier_enabled` is `false`.

</Warning>

### `retry_join` stanza

- `leader_api_addr` `(string: "")` - Address of a possible leader node.

- `auto_join` `(string: "")` - Cloud auto-join configuration, using
  [go-discover](https://github.com/hashicorp/go-discover) syntax.

- `auto_join_scheme` `(string: "")` - The optional URI protocol scheme for addresses
  discovered via auto-join. Available values are `http` or `https`.

- `auto_join_port` `(uint: "")` - The optional port used for addresses discovered
  via auto-join.

- `leader_tls_servername` `(string: "")` - The TLS server name to use when
  connecting with HTTPS. Should match one of the names in the [DNS
  SANs](https://en.wikipedia.org/wiki/Subject_Alternative_Name) of the remote
  server certificate. See also [Integrated Storage and
  TLS](/vault/docs/concepts/integrated-storage#autojoin-with-tls-servername).

- `leader_ca_cert_file` `(string: "")` - File path to the CA cert of the
  possible leader node.

- `leader_client_cert_file` `(string: "")` - File path to the client certificate
  for the follower node to establish client authentication with the possible
  leader node.

- `leader_client_key_file` `(string: "")` - File path to the client key for the
  follower node to establish client authentication with the possible leader node.

- `leader_ca_cert` `(string: "")` - CA cert of the possible leader node.

- `leader_client_cert` `(string: "")` - Client certificate for the follower node
  to establish client authentication with the possible leader node.

- `leader_client_key` `(string: "")` - Client key for the follower node to
  establish client authentication with the possible leader node.

Each [`retry_join`](#retry_join-stanza) block may provide TLS certificates via
file paths or as a single-line certificate string value with newlines delimited
by `\n`, but not a combination of both. Each `retry_join` stanza may contain
either a [`leader_api_addr`](#leader_api_addr) value or a cloud
[`auto_join`](#auto_join) configuration value, but not both. When an
`auto_join` value is provided, Vault will automatically attempt to discover and
resolve potential Raft leader addresses using
[go-discover](https://github.com/hashicorp/go-discover). See the go-discover
[README](https://github.com/hashicorp/go-discover/blob/master/README.md) for
details on the format of the `auto_join` value.

By default, Vault will attempt to reach discovered peers using HTTPS and port
8200. Operators may override these through the
[`auto_join_scheme`](#auto_join_scheme) and [`auto_join_port`](#auto_join_port)
fields respectively.

Example Configuration:

```hcl
storage "raft" {
  path    = "/Users/foo/raft"
  node_id = "node1"
  retry_join {
    leader_api_addr         = "http://127.0.0.2:8200"
    leader_ca_cert_file     = "/path/to/ca1"
    leader_client_cert_file = "/path/to/client/cert1"
    leader_client_key_file  = "/path/to/client/key1"
  }
  retry_join {
    leader_api_addr         = "http://127.0.0.3:8200"
    leader_ca_cert_file     = "/path/to/ca2"
    leader_client_cert_file = "/path/to/client/cert2"
    leader_client_key_file  = "/path/to/client/key2"
  }
  retry_join {
    leader_api_addr         = "http://127.0.0.4:8200"
    leader_ca_cert_file     = "/path/to/ca3"
    leader_client_cert_file = "/path/to/client/cert3"
    leader_client_key_file  = "/path/to/client/key3"
  }
  retry_join {
    auto_join = "provider=aws region=eu-west-1 tag_key=vault tag_value=... access_key_id=... secret_access_key=..."
  }
}
```

## Tutorial

Refer to the [Integrated Storage](/vault/tutorials/raft) series of tutorials to
learn more about implementing Vault using Integrated Storage.

[raft]: https://raft.github.io/ "The Raft Consensus Algorithm"
"}
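The `retry_join` rules described above (a stanza needs either `leader_api_addr` or `auto_join`, never both) can be sketched as a small validation pass. This is a hypothetical helper for illustration, not part of Vault; the stanzas are modeled as plain dictionaries:

```python
# Hypothetical validator (not Vault source): enforces the documented rule that
# each retry_join stanza contains either leader_api_addr or a cloud auto_join
# configuration, but not both.
def validate_retry_join(stanzas):
    """Return a list of error strings for the given retry_join stanzas."""
    errors = []
    for i, stanza in enumerate(stanzas):
        has_leader = "leader_api_addr" in stanza
        has_auto = "auto_join" in stanza
        if has_leader and has_auto:
            errors.append(f"retry_join[{i}]: leader_api_addr and auto_join are mutually exclusive")
        elif not (has_leader or has_auto):
            errors.append(f"retry_join[{i}]: one of leader_api_addr or auto_join is required")
    return errors


if __name__ == "__main__":
    stanzas = [
        {"leader_api_addr": "http://127.0.0.2:8200"},
        {"auto_join": "provider=aws region=eu-west-1 tag_key=vault tag_value=..."},
        {"leader_api_addr": "http://127.0.0.3:8200", "auto_join": "provider=aws"},
    ]
    for err in validate_retry_join(stanzas):
        print(err)
```

Here the third stanza is rejected because it mixes both join mechanisms, matching the constraint in the docs.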
{"questions":"vault Spanner a fully managed mission critical relational database service that page title Google Cloud Spanner Storage Backends Configuration Google Cloud spanner storage backend The Google Cloud Spanner storage backend is used to persist Vault s data in layout docs offers transactional consistency at global scale","answers":"---\nlayout: docs\npage_title: Google Cloud Spanner - Storage Backends - Configuration\ndescription: |-\n  The Google Cloud Spanner storage backend is used to persist Vault's data in\n  Spanner, a fully managed, mission-critical, relational database service that\n  offers transactional consistency at global scale.\n---\n\n# Google Cloud spanner storage backend\n\nThe Google Cloud Spanner storage backend is used to persist Vault's data in\n[Spanner][spanner-docs], a fully managed, mission-critical, relational database\nservice that offers transactional consistency at global scale, schemas, SQL, and\nautomatic, synchronous replication for high availability.\n\n- **High Availability** \u2013 the Google Cloud Spanner storage backend supports high\n  availability. Because the Google Cloud Spanner storage backend uses the system\n  time on the Vault node to acquire sessions, clock skew across Vault servers\n  can cause lock contention.\n\n- **Community Supported** \u2013 the Google Cloud Spanner storage backend is\n  supported by the community. While it has undergone review by HashiCorp\n  employees, they may not be as knowledgeable about the technology. If you\n  encounter problems with them, you may be referred to the original author.\n\n```hcl\nstorage \"spanner\" {\n  database = \"projects\/my-project\/instances\/my-instance\/databases\/my-database\"\n}\n```\n\nFor more information on schemas or Google Cloud Spanner, please see the [Google\nCloud Spanner documentation][spanner-docs].\n\n## `spanner` setup\n\nTo use the Google Cloud Spanner Vault storage backend, you must have a Google\nCloud Platform account. 
Either using the API or web interface, create a database\nand the following tables:\n\n-> You can choose \"Edit as text\" and copy-paste the following as the schema.\nThese are the default table names. If you choose to use different table names,\nyou will need to update the configuration accordingly.\n\n```sql\nCREATE TABLE Vault (\n  Key       STRING(MAX) NOT NULL,\n  Value     BYTES(MAX),\n) PRIMARY KEY (Key);\n\nCREATE TABLE VaultHA (\n  Key           STRING(MAX) NOT NULL,\n  Value         STRING(MAX),\n  Identity      STRING(36) NOT NULL,\n  Timestamp     TIMESTAMP NOT NULL,\n) PRIMARY KEY (Key);\n```\n\nThe Google Cloud Spanner storage backend does not support creating the table\nautomatically at this time, but this could be a future enhancement. For more\ninformation on schemas or Google Cloud Spanner, please see the [Google Cloud\nSpanner documentation][spanner-docs].\n\n## `spanner` authentication\n\nThe Google Cloud Spanner Vault storage backend uses the official Google Cloud\nGolang SDK. This means it supports the common ways of [providing credentials to\nGoogle Cloud][cloud-creds].\n\n1. The environment variable `GOOGLE_APPLICATION_CREDENTIALS`. This is specified\n   as the **path** to a Google Cloud credentials file, typically for a service\n   account. If this environment variable is present, the resulting credentials are\n   used. If the credentials are invalid, an error is returned.\n\n1. Default instance credentials. When no environment variable is present, the\n   default service account credentials are used.\n\nFor more information on service accounts, please see the [Google Cloud Service\nAccounts documentation][service-accounts].\n\nTo use this storage backend, the service account must have the following\nminimum scope(s):\n\n```text\nhttps:\/\/www.googleapis.com\/auth\/google-cloud-spanner.data\n```\n\n## `spanner` parameters\n\n- `database` `(string: <required>)` \u2013 Specifies the name of the database. 
Note\n  that this is specified as a \"path\" including the project ID and instance, for\n  example:\n\n  ```text\n  projects\/my-project\/instances\/my-instance\/databases\/my-database\n  ```\n\n- `table` `(string: \"Vault\")` - Specifies the name of the table where\n  data will be stored and retrieved.\n\n- `max_parallel` `(int: 128)` - Specifies the maximum number of parallel\n  operations to take place.\n\n### High availability parameters\n\n- `ha_enabled` `(string: \"false\")` - Specifies if high availability mode is\n  enabled. This is a boolean value, but it is specified as a string like \"true\"\n  or \"false\".\n\n- `ha_table` `(string: \"VaultHA\")` - Specifies the name of the table to use for\n  storing high availability information. By default, this is the name of the\n  `table` suffixed with \"HA\".\n\n## `spanner` examples\n\n### High availability\n\nThis example shows configuring Google Cloud Spanner with high availability\nenabled.\n\n```hcl\napi_addr = \"https:\/\/vault-leader.my-company.internal\"\n\nstorage \"spanner\" {\n  database   = \"projects\/demo\/instances\/abc123\/databases\/vault-data\"\n  ha_enabled = \"true\"\n}\n```\n\n### Custom tables\n\nThis example shows listing custom table names for data and HA with the Google\nCloud Spanner Vault storage backend.\n\n```hcl\nstorage \"spanner\" {\n  database = \"projects\/demo\/instances\/abc123\/databases\/vault-data\"\n  table    = \"VaultData\"\n  ha_table = \"VaultLeader\"\n}\n```\n\n[cloud-creds]: https:\/\/cloud.google.com\/docs\/authentication\/production#providing_credentials_to_your_application\n[service-accounts]: https:\/\/cloud.google.com\/compute\/docs\/access\/service-accounts\n[spanner-docs]: https:\/\/cloud.google.com\/spanner\/docs\/","site":"vault"}
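The Spanner `database` parameter above is a "path" combining project ID, instance, and database name. A minimal sketch of building and checking that path format (hypothetical helpers, not part of the Vault backend or the Spanner SDK):

```python
import re

# Hypothetical helpers (for illustration only): compose and validate the
# path-style database name the `database` parameter expects:
#   projects/<project>/instances/<instance>/databases/<database>
def spanner_database_path(project, instance, database):
    return f"projects/{project}/instances/{instance}/databases/{database}"

DB_PATH_RE = re.compile(r"^projects/[^/]+/instances/[^/]+/databases/[^/]+$")

def is_valid_database_path(path):
    return bool(DB_PATH_RE.match(path))


if __name__ == "__main__":
    path = spanner_database_path("my-project", "my-instance", "my-database")
    print(path)   # projects/my-project/instances/my-instance/databases/my-database
    print(is_valid_database_path(path))        # True
    print(is_valid_database_path("my-database"))  # False
```

A bare database name like `my-database` fails the check, which mirrors the docs' note that the full project/instance path is required.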
{"questions":"vault The MySQL storage backend is used to persist Vault s data in a MySQL server or cluster MySQL storage backend layout docs page title MySQL Storage Backends Configuration","answers":"---\nlayout: docs\npage_title: MySQL - Storage Backends - Configuration\ndescription: |-\n  The MySQL storage backend is used to persist Vault's data in a MySQL server or\n  cluster.\n---\n\n# MySQL storage backend\n\nThe MySQL storage backend is used to persist Vault's data in a [MySQL][mysql]\nserver or cluster.\n\n- **High Availability** \u2013 the MySQL storage backend supports high availability.\n  Note that due to the way mysql locking functions work they are lost if a connection\n  dies. If you would like to not have frequent changes in your elected leader you\n  can increase interactive_timeout and wait_timeout MySQL config to much higher than\n  default which is set at 8 hours.\n\n- **Community Supported** \u2013 the MySQL storage backend is supported by the\n  community. While it has undergone review by HashiCorp employees, they may not\n  be as knowledgeable about the technology. If you encounter problems with them,\n  you may be referred to the original author.\n\n```hcl\nstorage \"mysql\" {\n  username = \"user1234\"\n  password = \"secret123!\"\n  database = \"vault\"\n}\n```\n\n## `mysql` parameters\n\n- `address` `(string: \"127.0.0.1:3306\")` \u2013 Specifies the address of the MySQL\n  host.\n\n- `database` `(string: \"vault\")` \u2013 Specifies the name of the database. If the\n  database does not exist, Vault will attempt to create it.\n\n- `table` `(string: \"vault\")` \u2013 Specifies the name of the table. If the table\n  does not exist, Vault will attempt to create it.\n\n- `tls_ca_file` `(string: \"\")` \u2013 Specifies the path to the CA certificate to\n  connect using TLS.\n\n- `plaintext_credentials_transmission` `(string: \"\")` - Provides authorization\n  to send credentials over plaintext. 
Failure to provide a value AND a failure\n  to provide a TLS CA certificate will warn that the credentials are being sent\n  over plain text. In the future, failure to acknowledge or use TLS will\n  prevent the server from starting. This will be done to ensure credentials\n  are not leaked accidentally.\n\n- `max_parallel` `(string: \"128\")` \u2013 Specifies the maximum number of concurrent\n  requests to MySQL.\n\n- `max_idle_connections` `(string: \"0\")` \u2013 Specifies the maximum number of idle\n  connections to the database. A value of zero defaults to 2 idle connections\n  and a negative value disables idle connections. If larger than\n  `max_parallel` it will be reduced to be equal.\n\n- `max_connection_lifetime` `(string: \"0\")` \u2013 Specifies the maximum amount of\n  time in seconds that a connection may be reused. If <= 0s connections are reused forever.\n\nAdditionally, Vault requires the following authentication information.\n\n- `username` `(string: <required>)` \u2013 Specifies the MySQL username to connect to\n  the database.\n\n- `password` `(string: <required>)` \u2013 Specifies the MySQL password to connect to\n  the database.\n\n### High availability parameters\n\n- `ha_enabled` `(string: \"false\")` - Specifies if high availability mode is\n  enabled. This is a boolean value, but it is specified as a string like \"true\"\n  or \"false\".\n\n- `lock_table` `(string: \"vault_lock\")` \u2013 Specifies the name of the table to\n  use for storing high availability information. By default, this is the name\n  of the `table` suffixed with `_lock`. 
If the table does not exist, Vault will\n  attempt to create it.\n\n## `mysql` examples\n\n### Custom database and table\n\nThis example shows configuring the MySQL backend to use a custom database and\ntable name.\n\n```hcl\nstorage \"mysql\" {\n  database = \"my-vault\"\n  table    = \"vault-data\"\n  username = \"user1234\"\n  password = \"pass5678\"\n}\n```\n\n[mysql]: https:\/\/dev.mysql.com","site":"vault"}
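The `max_idle_connections` semantics described above (zero falls back to 2, a negative value disables idle connections, and any value larger than `max_parallel` is reduced to match it) can be written out as a tiny function. This is an illustrative sketch of the documented rules, not Vault's actual implementation:

```python
# Sketch (not Vault source) of the documented max_idle_connections behavior:
#   - negative  -> idle connections disabled (0)
#   - zero      -> defaults to 2 idle connections
#   - otherwise -> clamped so it never exceeds max_parallel
def effective_max_idle(max_idle_connections, max_parallel=128):
    if max_idle_connections < 0:
        return 0
    idle = 2 if max_idle_connections == 0 else max_idle_connections
    return min(idle, max_parallel)


if __name__ == "__main__":
    print(effective_max_idle(0))    # 2
    print(effective_max_idle(-1))   # 0
    print(effective_max_idle(10))   # 10
    print(effective_max_idle(500))  # 128
```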
{"questions":"vault Cassandra storage backend The Cassandra storage backend is used to persist Vault s data in an Apache layout docs page title Cassandra Storage Backends Configuration Cassandra cluster","answers":"---\nlayout: docs\npage_title: Cassandra - Storage Backends - Configuration\ndescription: |-\n  The Cassandra storage backend is used to persist Vault's data in an Apache\n  Cassandra cluster.\n---\n\n# Cassandra storage backend\n\nThe Cassandra storage backend is used to persist Vault's data in an [Apache\nCassandra][cassandra] cluster.\n\n- **No High Availability** \u2013 the Cassandra storage backend does not support high\n  availability.\n\n- **Community Supported** \u2013 the Cassandra storage backend is supported by the\n  community. While it has undergone review by HashiCorp employees, they may not\n  be as knowledgeable about the technology. If you encounter problems with it,\n  you may be referred to the original author.\n\n```hcl\nstorage \"cassandra\" {\n  hosts            = \"localhost\"\n  consistency      = \"LOCAL_QUORUM\"\n  protocol_version = 3\n}\n```\n\nThe Cassandra storage backend does not automatically create the keyspace and\ntable. 
This sample configuration can be used as a guide, but you will want to\nensure the keyspace [replication options][replication-options]\nare appropriate for your cluster:\n\n```cql\nCREATE KEYSPACE \"vault\" WITH REPLICATION = {\n    'class': 'SimpleStrategy',\n    'replication_factor': 1\n};\n\nCREATE TABLE \"vault\".\"entries\" (\n    bucket text,\n    key text,\n    value blob,\n    PRIMARY KEY (bucket, key)\n) WITH CLUSTERING ORDER BY (key ASC);\n```\n\n## `cassandra` parameters\n\n- `hosts` `(string: \"127.0.0.1\")` \u2013\u00a0Comma-separated list of Cassandra hosts to\n  connect to.\n\n- `keyspace` `(string: \"vault\")` Cassandra keyspace to use.\n\n- `table` `(string: \"entries\")` \u2013\u00a0Table within the `keyspace` in which to store\n  data.\n\n- `consistency` `(string: \"LOCAL_QUORUM\")` Consistency level to use when\n  reading\/writing data. If set, must be one of `\"ANY\"`, `\"ONE\"`, `\"TWO\"`,\n  `\"THREE\"`, `\"QUORUM\"`, `\"ALL\"`, `\"LOCAL_QUORUM\"`, `\"EACH_QUORUM\"`, or\n  `\"LOCAL_ONE\"`.\n\n- `protocol_version` `(int: 2)` Cassandra protocol version to use.\n\n- `username` `(string: \"\")` \u2013 Username to use when authenticating with the\n  Cassandra hosts.\n\n- `password` `(string: \"\")` \u2013 Password to use when authenticating with the\n  Cassandra hosts.\n\n- `disable_initial_host_lookup` `(bool: false)` - If set to true, Vault will not attempt\n  to get host info from the `system.peers` table. It will instead connect to\n  hosts supplied and will not attempt to look up the host information. This will\n  mean that `data_centre`, `rack` and `token` information will not be available and as\n  such host filtering and token aware query routing will not be available.\n\n- `initial_connection_timeout` `(int: 0)` - A timeout in seconds to wait until an initial connection is established\n  with the Cassandra hosts. 
If not set, default value from Cassandra driver(gocql) will be used - 600ms\n\n- `connection_timeout` `(int: 0)` - A timeout in seconds for each query.\n  If not set, default value from Cassandra driver(gocql) will be used - 600ms\n\n- `simple_retry_policy_retries` `(int: 0)` - Useful for Cassandra cluster with several nodes.\n  If current master node is down request will be retried on the next node `simple_retry_policy_retries`\n  times, and the client won't get an error.\n\n- `tls` `(int: 0)` \u2013 If `1`, indicates the connection with the Cassandra hosts\n  should use TLS.\n\n- `pem_bundle_file` `(string: \"\")` - Specifies a file containing a\n  certificate and private key; a certificate, private key, and issuing CA\n  certificate; or just a CA certificate.\n\n- `pem_json_file` `(string: \"\")` - Specifies a JSON file containing a certificate\n  and private key; a certificate, private key, and issuing CA certificate;\n  or just a CA certificate.\n\n- `tls_skip_verify` `(int: 0)` - If `1`, then TLS host verification\n  will be disabled for Cassandra. Defaults to `0`.\n\n- `tls_min_version` `(string: \"tls12\")` - Minimum TLS version to use. Accepted\n  values are `tls10`, `tls11`, `tls12` or `tls13`. 
Defaults to `tls12`.\n\n[cassandra]: http:\/\/cassandra.apache.org\/\n[replication-options]: https:\/\/docs.datastax.com\/en\/cassandra\/2.1\/cassandra\/architecture\/architectureDataDistributeReplication_c.html","site":"vault"}
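The Cassandra `consistency` parameter above only accepts a fixed set of levels. A quick config-check sketch (hypothetical helper, not part of Vault) that rejects anything outside the documented set:

```python
# Hypothetical config check (not Vault source): the `consistency` parameter
# must be one of the levels listed in the Cassandra backend docs.
VALID_CONSISTENCY = {
    "ANY", "ONE", "TWO", "THREE", "QUORUM",
    "ALL", "LOCAL_QUORUM", "EACH_QUORUM", "LOCAL_ONE",
}

def check_consistency(level="LOCAL_QUORUM"):
    """Return the level if it is valid, otherwise raise ValueError."""
    if level not in VALID_CONSISTENCY:
        raise ValueError(f"invalid consistency level: {level!r}")
    return level


if __name__ == "__main__":
    print(check_consistency())           # LOCAL_QUORUM (the documented default)
    print(check_consistency("QUORUM"))   # QUORUM
```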
{"questions":"vault PostgreSQL storage backend The PostgreSQL storage backend is used to persist Vault s data in a PostgreSQL page title PostgreSQL Storage Backends Configuration layout docs server or cluster","answers":"---\nlayout: docs\npage_title: PostgreSQL - Storage Backends - Configuration\ndescription: |-\n  The PostgreSQL storage backend is used to persist Vault's data in a PostgreSQL\n  server or cluster.\n---\n\n# PostgreSQL storage backend\n\nThe PostgreSQL storage backend is used to persist Vault's data in a\n[PostgreSQL][postgresql] server or cluster.\n\n- **High Availability** \u2013 the PostgreSQL storage backend supports\n  high availability. Requires PostgreSQL 9.5 or later.\n\n- **Community Supported** \u2013 the PostgreSQL storage backend is supported by the\n  community. While it has undergone review by HashiCorp employees, they may not\n  be as knowledgeable about the technology. If you encounter problems with them,\n  you may be referred to the original author.\n\n```hcl\nstorage \"postgresql\" {\n  connection_url = \"postgres:\/\/user123:secret123!@localhost:5432\/vault\"\n}\n```\n\n~> **Note:** The PostgreSQL storage backend plugin will attempt to use SSL\nwhen connecting to the database. If SSL is not enabled the `connection_url`\nwill need to be configured to disable SSL. See the documentation below\nto disable SSL.\n\nThe PostgreSQL storage backend does not automatically create the table. 
Here is\nsome sample SQL to create the schema and indexes.\n\n```sql\nCREATE TABLE vault_kv_store (\n  parent_path TEXT COLLATE \"C\" NOT NULL,\n  path        TEXT COLLATE \"C\",\n  key         TEXT COLLATE \"C\",\n  value       BYTEA,\n  CONSTRAINT pkey PRIMARY KEY (path, key)\n);\n\nCREATE INDEX parent_path_idx ON vault_kv_store (parent_path);\n```\n\nStore for HAEnabled backend\n\n```sql\nCREATE TABLE vault_ha_locks (\n  ha_key                                      TEXT COLLATE \"C\" NOT NULL,\n  ha_identity                                 TEXT COLLATE \"C\" NOT NULL,\n  ha_value                                    TEXT COLLATE \"C\",\n  valid_until                                 TIMESTAMP WITH TIME ZONE NOT NULL,\n  CONSTRAINT ha_key PRIMARY KEY (ha_key)\n);\n```\n\nIf you're using a version of PostgreSQL prior to 9.5, create the following function:\n\n```sql\nCREATE FUNCTION vault_kv_put(_parent_path TEXT, _path TEXT, _key TEXT, _value BYTEA) RETURNS VOID AS\n$$\nBEGIN\n    LOOP\n        -- first try to update the key\n        UPDATE vault_kv_store\n          SET (parent_path, path, key, value) = (_parent_path, _path, _key, _value)\n          WHERE _path = path AND key = _key;\n        IF found THEN\n            RETURN;\n        END IF;\n        -- not there, so try to insert the key\n        -- if someone else inserts the same key concurrently,\n        -- we could get a unique-key failure\n        BEGIN\n            INSERT INTO vault_kv_store (parent_path, path, key, value)\n              VALUES (_parent_path, _path, _key, _value);\n            RETURN;\n        EXCEPTION WHEN unique_violation THEN\n            -- Do nothing, and loop to try the UPDATE again.\n        END;\n    END LOOP;\nEND;\n$$\nLANGUAGE plpgsql;\n```\n\n## `postgresql` parameters\n\n- `connection_url` `(string: <required>)` \u2013\u00a0Specifies the connection string to\n  use to authenticate and connect to PostgreSQL. 
The connection URL can also be\n  set using the `VAULT_PG_CONNECTION_URL` environment variable. A full list of supported\n  parameters can be found in the [pgx library][pgxlib] and [PostgreSQL connection string][pg_conn_docs]\n  documentation. For example connection string URLs, see the examples section below.\n\n- `table` `(string: \"vault_kv_store\")` \u2013 Specifies the name of the table in\n  which to write Vault data. This table must already exist (Vault will not\n  attempt to create it).\n\n- `max_idle_connections` `(int)` - Default not set. Sets the maximum number of\n  connections in the idle connection pool. See\n  [golang docs on SetMaxIdleConns][golang_setmaxidleconns] for more information.\n  Requires Vault 1.2 or later.\n\n- `max_parallel` `(string: \"128\")` \u2013 Specifies the maximum number of concurrent\n  requests to PostgreSQL.\n\n- `ha_enabled` `(string: \"true|false\")` \u2013 Default not enabled, requires PostgreSQL 9.5 or later.\n\n- `ha_table` `(string: \"vault_ha_locks\")` \u2013 Specifies the name of the table to use\n  for storing high availability information. 
This table must already exist (Vault\n  will not attempt to create it).\n\n## `postgresql` examples\n\n### Custom SSL verification\n\nThis example shows connecting to a PostgreSQL cluster using full SSL\nverification (recommended).\n\n```hcl\nstorage \"postgresql\" {\n  connection_url = \"postgres:\/\/user:pass@localhost:5432\/database?sslmode=verify-full\"\n}\n```\n\nTo disable SSL verification (not recommended), replace `verify-full` with\n`disable`:\n\n```hcl\nstorage \"postgresql\" {\n  connection_url = \"postgres:\/\/user:pass@localhost:5432\/database?sslmode=disable\"\n}\n```\n\n[golang_setmaxidleconns]: https:\/\/golang.org\/pkg\/database\/sql\/#DB.SetMaxIdleConns\n[postgresql]: https:\/\/www.postgresql.org\/\n[pgxlib]: https:\/\/pkg.go.dev\/github.com\/jackc\/pgx\/stdlib\n[pg_conn_docs]: https:\/\/www.postgresql.org\/docs\/current\/libpq-connect.html#LIBPQ-CONNSTRING","site":"vault"}
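The pre-9.5 `vault_kv_put` function shown for the PostgreSQL backend is a classic update-then-insert upsert loop over a table whose primary key is `(path, key)`. Its effective semantics can be modeled in a few lines of Python with a dict keyed the same way (an illustration of the logic only, not Vault code):

```python
# In-memory model of vault_kv_store: the primary key is (path, key),
# so a put either overwrites an existing row or inserts a new one --
# the same guarantee the plpgsql update-then-insert loop provides.
store: dict[tuple[str, str], tuple[str, bytes]] = {}

def vault_kv_put(parent_path: str, path: str, key: str, value: bytes) -> None:
    """Upsert one row: overwrite if (path, key) exists, insert otherwise."""
    store[(path, key)] = (parent_path, value)

vault_kv_put("sys/", "sys/policy/", "root", b"v1")
vault_kv_put("sys/", "sys/policy/", "root", b"v2")  # same key: acts as UPDATE
```

The SQL loop exists because, without PostgreSQL 9.5's native upsert, a concurrent insert between the `UPDATE` and the `INSERT` can raise `unique_violation`, and retrying the `UPDATE` is the standard recovery.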
{"questions":"vault page title Google Cloud Storage Storage Backends Configuration Google Cloud storage storage backend layout docs Google Cloud Storage The Google Cloud Storage storage backend is used to persist Vault s data in","answers":"---\nlayout: docs\npage_title: Google Cloud Storage - Storage Backends - Configuration\ndescription: |-\n  The Google Cloud Storage storage backend is used to persist Vault's data in\n  Google Cloud Storage.\n---\n\n# Google Cloud storage storage backend\n\nThe Google Cloud Storage storage backend is used to persist Vault's data in\n[Google Cloud Storage][gcs-docs].\n\n- **High Availability** \u2013 the Google Cloud Storage storage backend supports high\n  availability. Because the Google Cloud Storage storage backend uses the system\n  time on the Vault node to acquire sessions, clock skew across Vault servers\n  can cause lock contention.\n\n- **Community Supported** \u2013 the Google Cloud Storage storage backend is\n  supported by the community. While it has undergone review by HashiCorp\n  employees, they may not be as knowledgeable about the technology. If you\n  encounter problems with them, you may be referred to the original author.\n\n```hcl\nstorage \"gcs\" {\n  bucket = \"my-storage-bucket\"\n}\n```\n\nFor more information on schemas or Google Cloud Storage, please see the [Google\nCloud Storage documentation][gcs-docs].\n\n## `gcs` setup\n\nTo use the Google Cloud Storage Vault storage backend, you must have a Google\nCloud Platform account with permissions to create Google Cloud Storage buckets.\nCreate a bucket through the API, the web interface, or the [`gsutil`][cloud-sdk] command. 
Bucket names must be globally unique\nacross all of Google Cloud, so choose a unique name:\n\n```shell-session\n$ gsutil mb gs:\/\/mycompany-vault-data\n```\n\nEven though the data is encrypted in transit and at rest, be sure to set the\nappropriate permissions on the bucket to limit exposure. You may want to create\na service account that limits Vault's interactions with Google Cloud to objects\nin the storage bucket using IAM permissions.\n\nHere is a sample [Google Cloud IAM][iam] policy that grants the proper\npermissions to a [service account][service-accounts]. Be sure to replace the\n`members` value with your own service account.\n\n```json\n{\n  \"bindings\": [\n    {\n      \"role\": \"roles\/storage.objectAdmin\",\n      \"members\": [\"serviceAccount:my-vault@gserviceaccount.com\"]\n    }\n  ]\n}\n```\n\nThen give Vault the service account's credential file as a configuration option.\n\nFor more information on schemas or Google Cloud Storage, please see the [Google\nCloud Storage documentation][gcs-docs].\n\n## `gcs` authentication\n\nThe Google Cloud Storage Vault storage backend uses the official Google Cloud\nGolang SDK. This means it supports the common ways of [providing credentials to\nGoogle Cloud][cloud-creds].\n\n1. The environment variable `GOOGLE_APPLICATION_CREDENTIALS`. This is specified\n   as the **path** to a Google Cloud credentials file, typically for a service\n   account. If this environment variable is present, the resulting credentials are\n   used. If the credentials are invalid, an error is returned.\n\n1. Default instance credentials. 
When no environment variable is present, the\n   default service account credentials are used.\n\nFor more information on service accounts, please see the [Google Cloud Service\nAccounts documentation][service-accounts].\n\nTo use this storage backend, the service account must have the following\nminimum scope(s):\n\n```text\nhttps:\/\/www.googleapis.com\/auth\/devstorage.read_write\n```\n\n## `gcs` parameters\n\n- `bucket` `(string: <required>)` \u2013 Specifies the name of the bucket to use for\n  storage. Alternatively, this parameter can be omitted and the `GOOGLE_STORAGE_BUCKET` \n  environment variable can be used to set the name of the bucket. If both the environment\n  variable and the parameter in the stanza are set, the value of the environment variable\n  will take precedence.\n\n- `chunk_size` `(string: \"8192\")` \u2013 Specifies the maximum size (in kilobytes) to\n  send in a single request. If set to 0, it will attempt to send the whole\n  object at once, but will not retry any failures. If you are not storing large\n  objects in Vault, it is recommended to set this to a low value (minimum is\n  256), since it will reduce the amount of memory Vault uses. Alternatively, this parameter\n  can be omitted and the `GOOGLE_STORAGE_CHUNK_SIZE` environment variable can be used to set \n  the chunk size. If both the environment variable and the parameter in the stanza are set, \n  the value of the environment variable will take precedence.\n\n- `max_parallel` `(int: 128)` - Specifies the maximum number of parallel\n  operations to take place.\n\n### High availability parameters\n\n- `ha_enabled` `(string: \"false\")` - Specifies if high availability mode is\n  enabled. This is a boolean value, but it is specified as a string like \"true\"\n  or \"false\". Alternatively, this parameter can be omitted and the \n  `GOOGLE_STORAGE_HA_ENABLED` environment variable can be used to \n  enable or disable high availability. 
If both the environment variable and \n  the parameter in the stanza are set, the value of the environment variable will \n  take precedence.\n  \n## `gcs` examples\n\n### High availability\n\nThis example shows configuring Google Cloud Storage with high availability\nenabled.\n\n```hcl\napi_addr = \"https:\/\/vault-leader.my-company.internal\"\n\nstorage \"gcs\" {\n  bucket        = \"mycompany-vault-data\"\n  ha_enabled    = \"true\"\n}\n```\n\n### Custom chunk size\n\nThis example shows setting a custom chunk size for uploads. When uploading large\ndata to Vault, setting a lower number can reduce Vault's memory consumption, but\nwill increase the number of outbound requests.\n\n```hcl\nstorage \"gcs\" {\n  bucket     = \"mycompany-vault-data\"\n  chunk_size = \"512\"\n}\n```\n\n[cloud-creds]: https:\/\/cloud.google.com\/docs\/authentication\/production#providing_credentials_to_your_application\n[cloud-sdk]: https:\/\/cloud.google.com\/sdk\/downloads\n[gcs-docs]: https:\/\/cloud.google.com\/storage\/docs\/\n[iam]: https:\/\/cloud.google.com\/iam\/docs\/\n[service-accounts]: https:\/\/cloud.google.com\/compute\/docs\/access\/service-accounts","site":"vault","answers_cleaned":"    layout  docs page title  Google Cloud Storage   Storage Backends   Configuration description       The Google Cloud Storage storage backend is used to persist Vault s data in   Google Cloud Storage         Google Cloud storage storage backend  The Google Cloud Storage storage backend is used to persist Vault s data in  Google Cloud Storage  gcs docs        High Availability     the Google Cloud Storage storage backend supports high   availability  Because the Google Cloud Storage storage backend uses the system   time on the Vault node to acquire sessions  clock skew across Vault servers   can cause lock contention       Community Supported     the Google Cloud Storage storage backend is   supported by the community  While it has undergone review by HashiCorp   employees  they may not be 
as knowledgeable about the technology  If you   encounter problems with them  you may be referred to the original author      hcl storage  gcs      bucket    my storage bucket         For more information on schemas or Google Cloud Storage  please see the  Google Cloud Storage documentation  gcs docs        gcs  setup  To use the Google Cloud Storage Vault storage backend  you must have a Google Cloud Platform account with permissions to create Google Cloud Storage buckets   To use the Google Cloud Storage Vault storage backend  you must have a Google Cloud Platform account  Either using the API or web interface  create a bucket using the   gsutil   cloud sdk  command  Bucket names must be globally unique across all of Google Cloud  so choose a unique name      shell session   gsutil mb gs   mycompany vault data      Even though the data is encrypted in transit and at rest  be sure to set the appropriate permissions on the bucket to limit exposure  You may want to create a service account that limits Vault s interactions with Google Cloud to objects in the storage bucket using IAM permissions   Here is a sample  Google Cloud IAM  iam  policy that grants the proper permissions to a  service account  service accounts   Be sure to replace the value with the value for your service account      json      bindings                  role    roles storage objectAdmin          members     serviceAccount my vault gserviceaccount com                    Then give Vault the service account s credential file as a configuration option   For more information on schemas or Google Cloud Storage  please see the  Google Cloud Storage documentation  gcs docs        gcs  authentication  The Google Cloud Storage Vault storage backend uses the official Google Cloud Golang SDK  This means it supports the common ways of  providing credentials to Google Cloud  cloud creds    1  The environment variable  GOOGLE APPLICATION CREDENTIALS   This is specified    as the   path   to a Google Cloud 
credentials file  typically for a service    account  If this environment variable is present  the resulting credentials are    used  If the credentials are invalid  an error is returned   1  Default instance credentials  When no environment variable is present  the    default service account credentials are used   For more information on service accounts  please see the  Google Cloud Service Accounts documentation  service accounts    To use this storage backend  the service account must have the following minimum scope s       text https   www googleapis com auth devstorage read write          gcs  parameters     bucket    string   required      Specifies the name of the bucket to use for   storage  Alternatively  this parameter can be omitted and the  GOOGLE STORAGE BUCKET     environment variable can be used to set the name of the bucket  If both the environment   variable and the parameter in the stanza are set  the value of the environment variable   will take precedence      chunk size    string   8192      Specifies the maximum size  in kilobytes  to   send in a single request  If set to 0  it will attempt to send the whole   object at once  but will not retry any failures  If you are not storing large   objects in Vault  it is recommended to set this to a low value  minimum is   256   since it will reduce the amount of memory Vault uses  Alternatively  this parameter   can be omitted and the  GOOGLE STORAGE CHUNK SIZE  environment variable can be used to set    the chunk size  If both the environment variable and the parameter in the stanza are set     the value of the environment variable will take precedence      max parallel    int  128     Specifies the maximum number of parallel   operations to take place       High availability parameters     ha enabled    string   false      Specifies if high availability mode is   enabled  This is a boolean value  but it is specified as a string like  true    or  false   Alternatively  this parameter can be omitted 
and the     GOOGLE STORAGE HA ENABLED  environment variable can be used to    enable or disable high availability  If both the environment variable and    the parameter in the stanza are set  the value of the environment variable will    take precedence         gcs  examples      High availability  This example shows configuring Google Cloud Storage with high availability enabled      hcl api addr    https   vault leader my company internal   storage  gcs      bucket           mycompany vault data    ha enabled       true             Custom chunk size  This example shows setting a custom chunk size for uploads  When uploading large data to Vault  setting a lower number can reduce Vault s memory consumption  but will increase the number of outbound requests      hcl storage  gcs      bucket        mycompany vault data    chunk size    512          cloud creds   https   cloud google com docs authentication production providing credentials to your application  cloud sdk   https   cloud google com sdk downloads  gcs docs   https   cloud google com storage docs   iam   https   cloud google com iam docs   service accounts   https   cloud google com compute docs access service accounts"}
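Several `gcs` options (`bucket`, `chunk_size`, `ha_enabled`) share one precedence rule: the documented environment variable, when set, wins over the stanza value. Assuming nothing beyond the documented variable names, that rule can be sketched as:

```python
import os

def resolve_gcs_option(env_name: str, stanza_value):
    """Return the effective option value: the environment variable,
    if set, takes precedence over the value from the storage stanza."""
    env = os.environ.get(env_name)
    return env if env is not None else stanza_value

# Stanza sets the bucket, but GOOGLE_STORAGE_BUCKET overrides it.
os.environ["GOOGLE_STORAGE_BUCKET"] = "override-bucket"
bucket = resolve_gcs_option("GOOGLE_STORAGE_BUCKET", "mycompany-vault-data")
```

This is an illustrative model of the documented precedence, not Vault's actual configuration code.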
{"questions":"vault The Zookeeper storage backend is used to persist Vault s data in Zookeeper Zookeeper storage backend The Zookeeper storage backend is used to persist Vault s data in layout docs page title Zookeeper Storage Backends Configuration Zookeeper zk","answers":"---\nlayout: docs\npage_title: Zookeeper - Storage Backends - Configuration\ndescription: The Zookeeper storage backend is used to persist Vault's data in Zookeeper.\n---\n\n# Zookeeper storage backend\n\nThe Zookeeper storage backend is used to persist Vault's data in\n[Zookeeper][zk].\n\n- **High Availability** \u2013 the Zookeeper storage backend supports high\n  availability.\n\n- **Community Supported** \u2013 the Zookeeper storage backend is supported by the\n  community. While it has undergone review by HashiCorp employees, they may not\n  be as knowledgeable about the technology. If you encounter problems with them,\n  you may be referred to the original author.\n\n```hcl\nstorage \"zookeeper\" {\n  address = \"localhost:2181\"\n  path    = \"vault\/\"\n}\n```\n\n## `zookeeper` parameters\n\n- `address` `(string: \"localhost:2181\")` \u2013 Specifies the addresses of the\n  Zookeeper instances as a comma-separated list.\n\n- `path` `(string: \"vault\/\")` \u2013 Specifies the path in Zookeeper where data will\n  be stored.\n\nThe following optional settings can be used to configure zNode ACLs:\n\n~> **Warning!** If neither `auth_info` nor `znode_owner` are set, the backend\nwill not authenticate with Zookeeper and will set the `OPEN_ACL_UNSAFE` ACL on\nall nodes. In this scenario, anyone connected to Zookeeper could change Vault\u2019s\nznodes and, potentially, take Vault out of service.\n\n- `auth_info` `(string: \"\")` \u2013 Specifies an authentication string in Zookeeper\n  AddAuth format. 
For example, `digest:UserName:Password` could be used to\n  authenticate as user `UserName` using password `Password` with the `digest`\n  mechanism.\n\n- `znode_owner` `(string: \"\")` \u2013 If specified, Vault will always set all\n  permissions (CRWDA) to the ACL identified here via the Schema and User parts\n  of the Zookeeper ACL format. The expected format is `schema:user-ACL-match`,\n  for example:\n\n  ```text\n  # Access for user \"UserName\" with corresponding digest \"HIDfRvTv623G==\"\n  digest:UserName:HIDfRvTv623G==\n  ```\n\n  ```text\n  # Access from localhost only\n  ip:127.0.0.1\n  ```\n\n  ```text\n  # Access from any host on the 70.95.0.0 network (Zookeeper 3.5+)\n  ip:70.95.0.0\/16\n  ```\n\n- `tls_enabled` `(bool: false)` \u2013 Specifies if TLS communication with the Zookeeper\n  backend has to be enabled.\n\n- `tls_ca_file` `(string: \"\")` \u2013 Specifies the path to the CA certificate file used\n  for Zookeeper communication. Multiple CA certificates can be provided in the same file.\n\n- `tls_cert_file` `(string: \"\")` (optional) \u2013 Specifies the path to the\n  client certificate for Zookeeper communication.\n\n- `tls_key_file` `(string: \"\")` \u2013 Specifies the path to the private key for\n  Zookeeper communication.\n\n- `tls_min_version` `(string: \"tls12\")` \u2013 Specifies the minimum TLS version to\n  use. Accepted values are `\"tls10\"`, `\"tls11\"`, `\"tls12\"` or `\"tls13\"`.\n\n- `tls_skip_verify` `(bool: false)` \u2013 Disable verification of TLS certificates.\n  Using this option is highly discouraged.\n\n- `tls_verify_ip` `(bool: false)` - This property comes into play only when\n  'tls_skip_verify' is set to false. 
When 'tls_verify_ip' is set to 'true', the\n  Zookeeper server's IP is verified in the presented certificate's CN\/SAN entry.\n  When set to 'false', the server's DNS name is verified in the certificate's CN\/SAN entry.\n\n## `zookeeper` examples\n\n### Custom address and path\n\nThis example shows configuring Vault to communicate with a Zookeeper\ninstallation running on a custom port and to store data at a custom path.\n\n```hcl\nstorage \"zookeeper\" {\n  address = \"localhost:3253\"\n  path    = \"my-vault-data\/\"\n}\n```\n\n### zNode Vault user only\n\nThis example instructs Vault to set an ACL on all of its zNodes which permits\naccess only to the user \"vaultUser\". As per Zookeeper's ACL model, the digest\nvalue in `znode_owner` must match the user in `znode_owner`.\n\n```hcl\nstorage \"zookeeper\" {\n  znode_owner = \"digest:vaultUser:raxgVAfnDRljZDAcJFxznkZsExs=\"\n  auth_info   = \"digest:vaultUser:abc\"\n}\n```\n\n### zNode localhost only\n\nThis example instructs Vault to only allow access from localhost. As this uses the\n`ip` schema, no `auth_info` is required, since Zookeeper uses the address of the client\nfor the ACL check.\n\n```hcl\nstorage \"zookeeper\" {\n  znode_owner = \"ip:127.0.0.1\"\n}\n```\n\n### zNode connection over TLS\n\nThis example instructs Vault to connect to Zookeeper using the provided TLS configuration. 
The host verification will happen with the presented certificate using the server's IP because 'tls_verify_ip' is set to true.\n\n```hcl\nstorage \"zookeeper\" {\n  address = \"host1.com:5200,host2.com:5200,host3.com:5200\"\n  path = \"vault_path_on_zk\/\"\n  znode_owner = \"digest:vault_user:digestvalueforpassword=\"\n  auth_info = \"digest:vault_user:thisisthepassword\"\n  redirect_addr = \"http:\/\/localhost:8200\"\n  tls_verify_ip = \"true\"\n  tls_enabled = \"true\"\n  tls_min_version = \"tls12\"\n  tls_cert_file = \"\/path\/to\/the\/cert\/file\/zkcert.pem\"\n  tls_key_file = \"\/path\/to\/the\/key\/file\/zkkey.pem\"\n  tls_skip_verify = \"false\"\n  tls_ca_file = \"\/path\/to\/the\/ca\/file\/ca.pem\"\n}\n```\n\n[zk]: https:\/\/zookeeper.apache.org\/","site":"vault"}
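The `znode_owner` format described above is `schema:user-ACL-match`, and the ACL match itself may contain further colons (as in the `digest` examples). Splitting on the first colon only is therefore essential; a small illustrative parser, not part of Vault:

```python
def parse_znode_owner(owner: str) -> tuple[str, str]:
    """Split a Zookeeper ACL spec into (schema, acl_id), splitting on the
    first colon only, since digest ids contain additional colons."""
    schema, _, acl_id = owner.partition(":")
    if not schema or not acl_id:
        raise ValueError(f"expected schema:id, got {owner!r}")
    return schema, acl_id
```

For example, `digest:UserName:HIDfRvTv623G==` parses to schema `digest` with id `UserName:HIDfRvTv623G==`, while `ip:127.0.0.1` parses to schema `ip` with id `127.0.0.1`.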
{"questions":"vault Manually install a Vault binary page title Install Vault manually layout docs Install Vault using a compiled binary","answers":"---\nlayout: docs\npage_title: Install Vault manually\ndescription: >-\n  Manually install a Vault binary.\n---\n\n# Manually install a Vault binary\n\nInstall Vault using a compiled binary.\n\n## Before you start\n\n- **You must have a valid Vault binary**. You can\n  [download and unzip a precompiled binary](\/vault\/install) or\n  [build a local instance of Vault from source code](\/vault\/docs\/install\/build-from-code).\n\n## Step 1: Configure the environment\n\n<Tabs>\n\n<Tab heading=\"Linux shell\" group=\"nix\">\n\n1. Set the `VAULT_DATA` environment variable to your preferred Vault data\n  directory. For example, `\/opt\/vault\/data`:\n\n   ```shell-session\n   export VAULT_DATA=\/opt\/vault\/data\n   ```\n\n1. Set the `VAULT_CONFIG` environment variable  to your preferred Vault\n   configuration directory. For example, `\/etc\/vault.d`:\n\n   ```shell-session\n   export VAULT_CONFIG=\/etc\/vault.d\n   ```\n\n1. Move the Vault binary to `\/usr\/bin`:\n\n  ```shell-session\n  $ sudo mv PATH\/TO\/VAULT\/BINARY \/usr\/bin\/\n  ```\n\n1. Ensure the Vault binary can use `mlock()` to run as a non-root user:\n\n   ```shell-session\n   $ sudo setcap cap_ipc_lock=+ep $(readlink -f $(which vault))\n   ```\n\n  See the support article\n  [Vault and mlock()](https:\/\/support.hashicorp.com\/hc\/en-us\/articles\/115012787688-Vault-and-mlock)\n  for more information.\n\n1. Create your Vault data directory:\n\n  ```shell-session\n   $ sudo mkdir -p ${VAULT_DATA}\n  ```\n\n1. Create your Vault configuration directory:\n\n   ```shell-session\n   $ sudo mkdir -p ${VAULT_CONFIG}\n   ```\n\n<Highlight title=\"Best practice\">\n  We recommend storing Vault data and Vault logs on different volumes than the\n  operating system.\n<\/Highlight>\n\n<\/Tab>\n\n<Tab heading=\"Powershell\" group=\"ps\">\n\n1. 
Run Powershell as Administrator.\n\n1. Set a `VAULT_HOME` environment variable to your preferred Vault home\n   directory. For example, `c:\\Program Files\\Vault`:\n\n   ```powershell\n   $env:VAULT_HOME = \"${env:ProgramFiles}\\Vault\"\n   ```\n\n1. Create the Vault home directory:\n\n  ```powershell\n  New-Item -ItemType Directory -Path \"${env:VAULT_HOME}\"\n  ```\n\n1. Create the Vault data directory. For example, `c:\\Program Files\\Vault\\Data`:\n\n  ```powershell\n  New-Item -ItemType Directory -Path \"${env:VAULT_HOME}\/Data\"\n  ```\n   \n1. Create the Vault configuration directory. For example,\n   `c:\\Program Files\\Vault\\Config`:\n  \n  ```powershell\n  New-Item -ItemType Directory -Path \"${env:VAULT_HOME}\/Config\"\n  ```\n\n1. Create the Vault logs directory. For example, `c:\\Program Files\\Vault\\Logs`:\n\n  ```powershell\n  New-Item -ItemType Directory -Path \"${env:VAULT_HOME}\/Logs\"\n  ```\n\n1. Move the Vault binary to your Vault directory:\n\n  ```powershell\n  Move-Item                      `\n    -Path <PATH\/TO\/VAULT\/BINARY> `\n    -Destination ${env:VAULT_HOME}\\vault.exe\n  ```\n\n1. Add the Vault home directory to the system `Path` variable.\n\n  [![System PATH editor in Windows OS GUI](\/img\/install\/windows-system-path.png)](\/img\/install\/windows-system-path.png)\n\n<\/Tab>\n\n<\/Tabs>\n\n\n## Step 2: Configure user permissions\n\n<Tabs>\n\n<Tab heading=\"Linux shell\" group=\"nix\">\n\n1. Create a system user called `vault` to run Vault, with your Vault data\n   directory as `home` and `nologin` as the shell:\n\n   ```shell-session\n   $ sudo useradd --system --home ${VAULT_DATA} --shell \/sbin\/nologin vault\n   ```\n\n1. Change directory ownership of your data directory to the `vault` user:\n\n      ```shell-session\n      $ sudo chown vault:vault ${VAULT_DATA}\n      ```\n\n1. 
Grant the `vault` user full permission on the data directory, search\n   permission for the group, and deny access to others:\n\n      ```shell-session\n      $ sudo chmod -R 750 ${VAULT_DATA}\n      ```\n\n<\/Tab>\n\n<Tab heading=\"Powershell\" group=\"ps\">\n\n1. Create an access rule to grant the `Local System` user access to the Vault\n   directory and related files:\n\n  ```powershell\n  $SystemAccessRule = \n    New-Object System.Security.AccessControl.FileSystemAccessRule(\n      \"SYSTEM\",\n      \"FullControl\",\n      \"ContainerInherit,Objectinherit\",\n      \"none\",\n      \"Allow\"\n    )\n  ```\n\n1. Create an access rule to grant yourself access to the Vault directory and\n   related files so you can test your Vault installation:\n\n  ```powershell\n  $myUsername = Get-CimInstance -Class Win32_Computersystem |    `\n                Select-Object UserName | foreach {$_.UserName} ; `\n  $AdminAccessRule =\n    New-Object System.Security.AccessControl.FileSystemAccessRule(\n      \"$myUsername\",\n      \"FullControl\",\n      \"ContainerInherit,Objectinherit\",\n      \"none\",\n      \"Allow\"\n    )\n  ```\n\n  <Highlight title=\"Create additional access rules for human users if needed\">\n\n    If you expect other accounts to start and run the Vault server, you must\n    create and apply access rules for those users as well. While users can run\n    the Vault CLI without explicit access, if they try to start the Vault\n    server, the process will fail with a permission denied error.\n\n  <\/Highlight>\n\n1. 
Update permissions on the `${env:VAULT_HOME}` directory:\n\n  ```powershell\n  $ACLObject = Get-ACL ${env:VAULT_HOME} ;       `\n  $ACLObject.AddAccessRule($AdminAccessRule) ;   `\n  $ACLObject.AddAccessRule($SystemAccessRule) ;  `\n  Set-Acl ${env:VAULT_HOME} $ACLObject\n  ```\n\n<\/Tab>\n\n<\/Tabs>\n\n## Step 3: Create a basic configuration file\n\nCreate a basic Vault configuration file for testing and development.\n\n<Warning title=\"Always enable TLS for production\">\n\n  The sample configuration below disables TLS for simplicity and is not\n  appropriate for production use. Refer to the\n  [configuration documentation](\/vault\/docs\/configuration) for a full list of\n  supported parameters.\n\n<\/Warning>\n\n<Tabs>\n\n<Tab heading=\"Linux shell\" group=\"nix\">\n\n1. Create a file called `vault.hcl` under your configuration directory:\n  ```shell-session\n   $ sudo tee ${VAULT_CONFIG}\/vault.hcl <<EOF\n   ui            = true\n   cluster_addr  = \"http:\/\/127.0.0.1:8201\"\n   api_addr      = \"http:\/\/127.0.0.1:8200\"\n   disable_mlock = true\n\n   storage \"raft\" {\n     path    = \"${VAULT_DATA}\"\n     node_id = \"127.0.0.1\"\n   }\n\n   listener \"tcp\" {\n     address       = \"0.0.0.0:8200\"\n     cluster_address = \"0.0.0.0:8201\"\n     tls_disable = 1\n   }\n   EOF\n  ```\n\n1. 
Change ownership and permissions on the Vault configuration file.\n\n   ```shell-session\n   $ sudo chown vault:vault \"${VAULT_CONFIG}\/vault.hcl\" && \\\n     sudo chmod 640 \"${VAULT_CONFIG}\/vault.hcl\"\n   ```\n\n<\/Tab>\n\n<Tab heading=\"Powershell\" group=\"ps\">\n\nCreate a file called `vault.hcl` under your configuration directory:\n\n```powershell\n@\"\nui            = true\ncluster_addr  = \"http:\/\/127.0.0.1:8201\"\napi_addr      = \"http:\/\/127.0.0.1:8200\"\ndisable_mlock = true\n\nstorage \"raft\" {\n  path    = \"$(${env:VAULT_HOME}.Replace('\\','\\\\'))\\\\Data\"\n  node_id = \"127.0.0.1\"\n}\n\nlistener \"tcp\" {\n  address       = \"0.0.0.0:8200\"\n  cluster_address = \"0.0.0.0:8201\"\n  tls_disable = 1\n}\n\"@ | Out-File -FilePath ${env:VAULT_HOME}\/Config\/vault.hcl -Encoding ascii\n```\n\n<Note title=\"The double backslashes (\\\\) are not an error\">\n\n  You **must** escape the Windows path character in your Vault configuration\n  file or the Vault server will fail with an error claiming the file contains\n  invalid characters.\n  \n<\/Note>\n\n<\/Tab>\n\n<\/Tabs>\n\n## Step 4: Verify your installation\n\nTo confirm your Vault installation, use the help option with the Vault CLI to\nconfirm the CLI is accessible and bring up the server in development mode to\nconfirm you can run the binary.\n\n<Tabs>\n\n<Tab heading=\"Linux shell\" group=\"nix\">\n\n1. Bring up the help menu in the Vault CLI:\n  ```shell-session\n  $ vault -h\n  ```\n\n1. Use the Vault CLI to bring up a Vault server in development mode:\n  ```shell-session\n  $ vault server -dev -config ${VAULT_CONFIG}\/vault.hcl\n  ```\n\n<\/Tab>\n\n<Tab heading=\"Powershell\" group=\"ps\">\n\n1. Start a new Powershell session without Administrator permission.\n\n1. Bring up the help menu in the Vault CLI:\n  ```powershell\n  vault -h\n  ```\n\n1. 
Use the Vault CLI to bring up a Vault server in development mode:\n  ```powershell\n  vault server -dev -config ${env:VAULT_HOME}\\Config\\vault.hcl\n  ```\n\n<\/Tab>\n\n<\/Tabs>\n\n\n## Related tutorials\n\nThe following tutorials provide additional guidance for installing Vault and\nproduction cluster deployment:\n\n- [Get started: Install Vault](\/vault\/tutorials\/getting-started\/getting-started-install)\n- [Day One Preparation](\/vault\/tutorials\/day-one-raft)\n- [Recommended Patterns](\/vault\/tutorials\/recommended-patterns)\n- [Start the server in dev mode](\/vault\/tutorials\/getting-started\/getting-started-dev-server)","site":"vault"}
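The double-backslash requirement called out in the Windows configuration note (Step 3) can be reproduced outside PowerShell as well; the `hclWindowsPath` helper below is purely illustrative, not part of Vault:

```go
package main

import (
	"fmt"
	"strings"
)

// hclWindowsPath escapes backslashes in a Windows path so the value can be
// embedded in a Vault HCL configuration file, mirroring the
// .Replace('\','\\') call in the PowerShell heredoc above.
func hclWindowsPath(p string) string {
	return strings.ReplaceAll(p, `\`, `\\`)
}

func main() {
	// Prints: path = "C:\\Program Files\\Vault\\Data"
	fmt.Printf("path = \"%s\"\n", hclWindowsPath(`C:\Program Files\Vault\Data`))
}
```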
{"questions":"vault page title Why use Agent or Proxy Use Vault tools like Agent and Proxy to simplify secret fetching and add Vault to your development environment with minimal client code updates Why use Agent or Proxy layout docs","answers":"---\nlayout: docs\npage_title: Why use Agent or Proxy?\ndescription: >-\n  Use Vault tools like Agent and Proxy to simplify secret fetching and add Vault\n  to your development environment with minimal client code updates.\n---\n\n# Why use Agent or Proxy?\n\nA valid client token must accompany most requests to Vault. This\nincludes all API requests, whether made directly, via the Vault CLI, or\nthrough client libraries.\nTherefore, Vault clients must first authenticate with Vault to acquire a token.\nVault provides several authentication methods to assist in\ndelivering this initial token.\n\n![Client authentication](\/img\/diagram-vault-agent.png)\n\nIf the client can securely acquire the token, all subsequent requests (e.g., request\ndatabase credentials, read key\/value secrets) are processed based on the\ntrust established by a successful authentication.\n\nThis means that the client application must invoke the Vault API to authenticate\nwith Vault and manage the acquired token, in addition to invoking the API to\nrequest secrets from Vault. 
This implies code changes to client applications\nalong with additional testing and maintenance of the application.\n\nThe following code example uses the Vault API to authenticate with Vault\nthrough the [AppRole auth method](\/vault\/docs\/auth\/approle#code-example), and then uses\nthe returned client token to read secrets at `kv-v2\/data\/creds`.\n\n```go\npackage main\n\nimport (\n    ...snip...\n    vault \"github.com\/hashicorp\/vault\/api\"\n)\n\n\/\/ Fetches a key-value secret (kv-v2) after authenticating via AppRole\nfunc getSecretWithAppRole() (string, error) {\n    config := vault.DefaultConfig()\n\n    client, err := vault.NewClient(config)\n    if err != nil {\n        return \"\", fmt.Errorf(\"unable to initialize Vault client: %w\", err)\n    }\n\n    wrappingToken, err := ioutil.ReadFile(\"path\/to\/wrapping-token\")\n    if err != nil {\n        return \"\", fmt.Errorf(\"unable to read wrapping token: %w\", err)\n    }\n\n    unwrappedToken, err := client.Logical().Unwrap(strings.TrimSuffix(string(wrappingToken), \"\\n\"))\n    if err != nil {\n        return \"\", fmt.Errorf(\"unable to unwrap token: %w\", err)\n    }\n\n    secretID := unwrappedToken.Data[\"secret_id\"]\n    roleID := os.Getenv(\"APPROLE_ROLE_ID\")\n\n    params := map[string]interface{}{\n        \"role_id\":   roleID,\n        \"secret_id\": secretID,\n    }\n    resp, err := client.Logical().Write(\"auth\/approle\/login\", params)\n    if err != nil {\n        return \"\", fmt.Errorf(\"unable to log in with AppRole: %w\", err)\n    }\n    client.SetToken(resp.Auth.ClientToken)\n\n    secret, err := client.Logical().Read(\"kv-v2\/data\/creds\")\n    if err != nil {\n        return \"\", fmt.Errorf(\"unable to read secret: %w\", err)\n    }\n\n    data := secret.Data[\"data\"].(map[string]interface{})\n\n    ...snip...\n}\n```\n\nFor some Vault deployments, making (and maintaining) these changes to\napplications may not be a problem, and may actually be preferred. This may be\nthe case in scenarios where you have a small number of applications, or you want\nto keep strict, customized control over how each application interacts with\nVault. However, in other situations where you have a large number of\napplications, as in large enterprises, you may not have the resources or expertise\nto update and maintain the Vault integration code for every application. 
When\nthird-party applications are deployed, it may not even be possible\nto add the Vault integration code.\n\n### Introduce Vault Agent and Vault Proxy to the workflow\n\n[Vault Agent][vaultagent] and [Vault Proxy][vaultproxy] aim to remove this initial hurdle to adopt Vault by providing a\nmore scalable and simpler way for applications to integrate with Vault. Vault Agent can\nobtain secrets and provide them to applications, and Vault Proxy can act as\na proxy between Vault and the application, optionally simplifying the authentication process\nand caching requests.\n\nAs with most other CLI commands for the Vault binary, neither Vault Agent nor Vault Proxy\nrequire a Vault Enterprise license, and are available in all Vault binaries and images. Note\nhowever that some features, such as [static secret caching][static-secret-caching], are only\navailable when connected to a Vault Enterprise server.\n\n| Capability                                                                               |    Vault Agent     | Vault Proxy |\n|------------------------------------------------------------------------------------------|:------------------:|:-----------:|\n| [Auto-Auth][autoauth] to authenticate with Vault                                         |         x          |      x      |\n| Run as a [Windows Service][winsvc]                                                       |         x          |      x      |\n| [Caching][caching] the newly created tokens and leases                                   |         x          |      x      |\n| [Templating][template] to render user-supplied templates                                 |         x          |             |\n| [Process Supervisor][exec] for injecting secrets as environment variables into a process |         x          |             |\n| [API Proxy][apiproxy] to act as a proxy for Vault API                                    | Will be deprecated |      x      |\n| [Static secret 
caching][static-secret-caching] for KV secrets                            |                    |      x      |\n\nTo learn more, refer to the [Vault Agent][vaultagent] or [Vault\nProxy][vaultproxy] documentation page.\n\n\n[autoauth]: \/vault\/docs\/agent-and-proxy\/autoauth\n[caching]: \/vault\/docs\/agent-and-proxy\/proxy\/caching\n[static-secret-caching]: \/vault\/docs\/agent-and-proxy\/proxy\/caching\/static-secret-caching\n[apiproxy]: \/vault\/docs\/agent-and-proxy\/proxy\/apiproxy\n[template]: \/vault\/docs\/agent-and-proxy\/agent\/template\n[exec]: \/vault\/docs\/agent-and-proxy\/agent\/process-supervisor\n[template-config]: \/vault\/docs\/agent-and-proxy\/agent\/template#template-configurations\n[vaultagent]: \/vault\/docs\/agent-and-proxy\/agent\n[vaultproxy]: \/vault\/docs\/agent-and-proxy\/proxy\n[winsvc]: \/vault\/docs\/agent-and-proxy\/agent\/winsvc","site":"vault"}
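To make the Auto-Auth capability concrete, here is a minimal Vault Agent configuration sketch; the server address and file paths are placeholders, and it assumes an AppRole with its `role_id` and `secret_id` already written to disk:

```hcl
# Vault Agent: authenticate via AppRole, then write the resulting
# token to a file sink for other processes to use.
vault {
  address = "https://vault.example.com:8200"
}

auto_auth {
  method "approle" {
    mount_path = "auth/approle"
    config = {
      role_id_file_path   = "/etc/vault/role-id"
      secret_id_file_path = "/etc/vault/secret-id"
    }
  }

  sink "file" {
    config = {
      path = "/var/run/vault/agent-token"
    }
  }
}
```

With this configuration, `vault agent -config=agent.hcl` authenticates on startup and keeps the sink file updated whenever the token changes.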
{"questions":"vault What is Auto authentication Use auto authentication with Vault Agent or Vault Proxy to simplify client layout docs authentication to Vault in a variety of environments page title What is Auto authentication","answers":"---\nlayout: docs\npage_title: What is Auto-authentication?\ndescription: >-\n  Use auto-authentication with Vault Agent or Vault Proxy to simplify client\n  authentication to Vault in a variety of environments.\n---\n\n# What is Auto-authentication?\n\nAuto-authentication simplifies client authentication in a wide variety of\nenvironments. The following Vault tools come with auto-authentication built in:\n\n- Vault Agent\n- Vault Proxy\n\n## Methods and sinks\n\nAuto-auth consists of two parts:\n\n- a **method** - the desired authentication method for the current environment\n- a **sink** - the location where tools save tokens when the token value changes\n\nWhen a supported tool starts with auto-auth enabled, the tool requests a Vault\ntoken using the configured method. If the request fails, the tool retries the\nrequest with an exponential backoff.\n\nOnce the request succeeds, auto-auth renews unwrapped authentication tokens\nautomatically until Vault denies the renewal. 
If the authentication method wraps\ntokens, auto-authentication cannot renew the token automatically.\n\nVault typically denies renewal if the token:\n\n- was revoked.\n- has exceeded the maximum number of uses.\n- is otherwise invalid.\n\nEvery time authentication succeeds, auto-auth writes the token to any\nappropriately configured sink.\n\n## Advanced functionality\n\nSinks support some advanced features, including the ability for the written\nvalues to be encrypted or\n[response-wrapped](\/vault\/docs\/concepts\/response-wrapping).\n\nBoth mechanisms can be used concurrently; in this case, the value will be\nresponse-wrapped, then encrypted.\n\n### Response-Wrapping tokens\n\nThere are two ways that tokens can be response-wrapped:\n\n1. By the auth method. This allows the end client to introspect the\n   `creation_path` of the token, helping prevent Man-In-The-Middle (MITM)\n   attacks. However, because auto-auth cannot then unwrap the token and rewrap\n   it without modifying the `creation_path`, we are not able to renew the\n   token; it is up to the end client to renew the token. Agent and Proxy both\n   stay daemonized in this mode since some auth methods allow for reauthentication\n   on certain events.\n\n2. By any of the token sinks. Because more than one sink can be configured, the\n   token must be wrapped after it is fetched, rather than wrapped by Vault as\n   it's being returned. As a result, the `creation_path` will always be\n   `sys\/wrapping\/wrap`, and validation of this field cannot be used as\n   protection against MITM attacks. 
However, this mode allows auto-auth to keep\n   the token renewed for the end client and automatically reauthenticate when\n   it expires.\n\n### Encrypting tokens\n\n~> Support for encrypted tokens is experimental; if input\/output formats\nchange, we will make every effort to provide backwards compatibility.\n\nTokens can be encrypted using a Diffie-Hellman exchange to generate an\nephemeral key. In this mechanism, the client receiving the token writes a\ngenerated public key to a file. The sink responsible for writing the token to\nthat client looks for this public key and uses it to compute a shared secret\nkey, which is then used to encrypt the token via AES-GCM. The nonce, encrypted\npayload, and the sink's public key are then written to the output file, where\nthe client can compute the shared secret and decrypt the token value.\n\n~> NOTE: Token encryption is not a protection against MITM attacks! The purpose\nof this feature is forward secrecy and coverage against bare token values\nbeing persisted. A MITM that can write to the sink's output and\/or client\npublic-key input files could attack this exchange. Using TLS to protect the\ntransit of tokens is highly recommended.\n\nTo help mitigate MITM attacks, additional authenticated data (AAD) can be\nprovided to Agent and Proxy. This data is written as part of the AES-GCM tag and must\nmatch between Agent or Proxy and the client. This of course means that protecting\nthis AAD becomes important, but it provides another layer for an attacker to\nhave to overcome. For instance, if the attacker has access to the file system\nwhere the token is being written, but not to read configuration or read\nenvironment variables, this AAD can be generated and passed to Agent or Proxy and\nthe client in ways that would be difficult for the attacker to find.\n\nWhen using AAD, it is always a good idea for this to be as fresh as possible;\ngenerate a value and pass it to your client and Agent or Proxy on startup. 
Additionally,\nAgent and Proxy use a Trust On First Use (TOFU) model; after finding a generated public key,\nthey will reuse that public key instead of looking for new values that have been\nwritten.\n\nIf writing a client that uses this feature, it will likely be helpful to look\nat the\n[dhutil](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/helper\/dhutil\/dhutil.go)\nlibrary, which shows the expected formats of the public key input and envelope\noutput.\n\n## Configuration\n\nThe top level `auto_auth` block has the following configuration entries:\n\n- `method` `(object: required)` - Configuration for the method\n\n- `sinks` `(array of objects: optional)` - Configuration for the sinks\n\n- `enable_reauth_on_new_credentials` `(bool: false)` - If enabled, Auto-auth will\n  handle new credential events from supported auth methods (AliCloud\/AWS\/Cert\/JWT\/LDAP\/OCI)\n  and re-authenticate with the new credential.\n\n### Configuration (Method)\n\n~> Auto-auth does not support using tokens with a limited number of uses. Auto-auth\ndoes not track the number of uses remaining, and may allow the token to\nexpire before attempting to renew it. For example, if using AppRole auto-auth,\nyou must use 0 (meaning unlimited) as the value for\n[`token_num_uses`](\/vault\/api-docs\/auth\/approle#token_num_uses).\n\nThese are common configuration values that live within the `method` block:\n\n- `type` `(string: required)` - The type of the method to use, e.g. `aws`,\n  `gcp`, `azure`, etc. _Note_: when using HCL this can be used as the key for\n  the block, e.g. `method \"aws\" {...}`.\n\n- `mount_path` `(string: optional)` - The mount path of the method. 
If not\n  specified, defaults to a value of `auth\/<method type>`.\n\n- `namespace` `(string: optional)` - Namespace in which the mount lives.\n  The order of precedence is: this setting lowest, followed by the\n  environment variable `VAULT_NAMESPACE`, and then the highest precedence\n  command-line option `-namespace`.\n  If none of these are specified, defaults to the root namespace.\n  Note that because sink response wrapping and templating are also based\n  on the client created by auto-auth, they use the same namespace.\n  If specified alongside the `namespace` option in the Vault Stanza of\n  [Vault Agent](\/vault\/docs\/agent-and-proxy\/agent#vault-stanza) or\n  [Vault Proxy](\/vault\/docs\/agent-and-proxy\/proxy#vault-stanza), that\n  configuration will take precedence over everything except auto-auth.\n\n- `wrap_ttl` `(string or integer: optional)` - If specified, the written token\n  will be response-wrapped by auto-auth. This is more secure than wrapping by\n  sinks, but does not allow auto-auth to keep the token renewed or\n  automatically reauthenticate when it expires. Rather than a simple string,\n  the written value will be a JSON-encoded\n  [SecretWrapInfo](https:\/\/godoc.org\/github.com\/hashicorp\/vault\/api#SecretWrapInfo)\n  structure. Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `min_backoff` `(string or integer: \"1s\")` - The minimum backoff time auto-auth\n  will delay before retrying after a failed auth attempt. The backoff will start\n  at the configured value and double (with some randomness) after successive\n  failures, capped by `max_backoff`. If Agent templating is being used, this\n  value is also used as the min backoff time for the templating server.\n  Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `max_backoff` `(string or integer: \"5m\")` - The maximum time Agent will delay\n  before retrying after a failed auth attempt. 
The backoff will start at\n  `min_backoff` and double (with some randomness) after successive failures,\n  capped by `max_backoff`. If Agent templating is being used, this value is also\n  used as the max backoff time for the templating server. `max_backoff` is the\n  duration between retries, and **not** the duration that retries will be\n  performed before giving up. Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `exit_on_err` `(bool: false)` - When set to true, Vault Agent and Vault Proxy\n  will exit if any errors occur during authentication. This setting only affects login\n  attempts for new tokens (either initial or expired tokens) and will not exit for errors on\n  valid token renewals.\n\n- `config` `(object: required)` - Configuration of the method itself. See the\n  sidebar for information about each method.\n\n### Configuration (Sinks)\n\nThese configuration values are common to all sinks:\n\n- `type` `(string: required)` - The type of the sink to use, e.g. `file`.\n  _Note_: when using HCL this can be used as the key for the block, e.g. `sink \"file\" {...}`.\n\n- `wrap_ttl` `(string or integer: optional)` - If specified, the written token\n  will be response-wrapped by the sink. This is less secure than wrapping by\n  the method, but allows auto-auth to keep the token renewed and automatically\n  reauthenticate when it expires. Rather than a simple string, the written\n  value will be a JSON-encoded\n  [SecretWrapInfo](https:\/\/godoc.org\/github.com\/hashicorp\/vault\/api#SecretWrapInfo)\n  structure. Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `dh_type` `(string: optional)` - If specified, the type of Diffie-Hellman exchange to\n  perform, i.e., which ciphers and\/or curves. Currently only `curve25519` is\n  supported.\n\n- `dh_path` `(string: required if dh_type is set)` - The path from which the\n  auto-auth should read the client's initial parameters (e.g. 
curve25519 public\n  key).\n\n- `derive_key` `(bool: false)` - If specified, the final encryption key is\n  calculated by using HKDF-SHA256 to derive a key from the calculated shared\n  secret and the two public keys for enhanced security. This is recommended\n  if backward compatibility isn't a concern.\n\n- `aad` `(string: optional)` - If specified, additional authenticated data to\n  use with the AES-GCM encryption of the token. Can be any string, including\n  serialized data.\n\n- `aad_env_var` `(string: optional)` - If specified, AAD will be read from the\n  given environment variable rather than a value in the configuration file.\n\n- `config` `(object: required)` - Configuration of the sink itself. See the\n  sidebar for information about each sink.\n\n### Auto auth examples\n\nAuto-auth configuration objects take two separate forms when specified in HCL\nand JSON. The following examples are meant to clarify the differences between\nthe two formats.\n\n#### Sinks (HCL format)\n\nThe HCL format may define any number of sink objects with an optional wrapping\n`sinks {...}` object.\n\n~> Note: The [corresponding JSON format](#sinks-json-format) _must_ specify a\n`\"sinks\" : [...]` array to encapsulate all `sink` JSON objects.\n\n```hcl\n\/\/ Other Vault Agent or Vault Proxy configuration blocks\n\/\/ ...\n\nauto_auth {\n  method {\n    type = \"approle\"\n\n    config = {\n      role_id_file_path = \"\/etc\/vault\/roleid\"\n      secret_id_file_path = \"\/etc\/vault\/secretid\"\n    }\n  }\n\n  sinks {\n    sink {\n      type = \"file\"\n\n      config = {\n        path = \"\/tmp\/file-foo\"\n      }\n    }\n  }\n}\n```\n\nThe following valid HCL omits the wrapping `sinks` object while specifying\nmultiple sinks.\n\n```hcl\n\/\/ Other Vault Agent or Vault Proxy configuration blocks\n\/\/ ...\n\nauto_auth {\n  method {\n    type = \"approle\"\n\n    config = {\n      role_id_file_path = \"\/etc\/vault\/roleid\"\n      secret_id_file_path = 
\"\/etc\/vault\/secretid\"\n    }\n  }\n\n  sink {\n    type = \"file\"\n\n    config = {\n      path = \"\/tmp\/file-foo\"\n    }\n  }\n\n  sink {\n    type = \"file\"\n\n    config = {\n      path = \"\/tmp\/file-bar\"\n    }\n  }\n}\n```\n\n#### Sinks (JSON format)\n\nThe following JSON configuration illustrates the need for a `sinks: [...]` array\nwrapping any number of `sink` objects.\n\n```hcl\n{\n  \"auto_auth\" : {\n    \"method\" : [\n      {\n        type = \"approle\"\n\n        config = {\n          role_id_file_path = \"\/etc\/vault\/roleid\"\n          secret_id_file_path = \"\/etc\/vault\/secretid\"\n        }\n      }\n    ],\n    \"sinks\" : [\n      {\n        \"sink\" : {\n          type = \"file\"\n\n          config = {\n            path = \"\/tmp\/file-foo\"\n          }\n        }\n      }\n    ]\n  }\n}\n```\n\nMultiple sinks are defined by appending more `sink` objects within the `sinks`\narray:\n\n```hcl\n{\n  \"auto_auth\" : {\n    \"method\" : [\n      {\n        type = \"approle\"\n\n        config = {\n          role_id_file_path = \"\/etc\/vault\/roleid\"\n          secret_id_file_path = \"\/etc\/vault\/secretid\"\n        }\n      }\n    ],\n    \"sinks\" : [\n      {\n        \"sink\" : {\n          type = \"file\"\n\n          config = {\n            path = \"\/tmp\/file-foo\"\n          }\n        }\n      },\n      {\n        \"sink\" : {\n          type = \"file\"\n\n          config = {\n            path = \"\/tmp\/file-bar\"\n          }\n        }\n      }\n    ]\n  }\n}\n```","site":"vault","answers_cleaned":"    layout  docs page title  What is Auto authentication  description       Use auto authentication with Vault Agent or Vault Proxy to simplify client   authentication to Vault in a variety of environments         What is Auto authentication   Auto authentication simplifies client authentication in a wide variety of environments  The following Vault tools come with auto authentication built in     Vault Agent   Vault 
Proxy     Methods and sinks  Auto auth consists of two parts     a   method     the desired authentication method for the current environment   a   sink     the location where tools save tokens when the token value changes  When a supported tool starts with auto auth enabled  the tool requests a Vault token using the configured method  If the request fails  the tool retries the request with an exponential back off   Once the request succeeds  the auth auth renews unwrapped authentication tokens automatically until Vault denies the renewal  If the authentication method wraps tokens  auto authentication cannot renew the token automatically   Vault typically denies renewal if the token     the token was revoked    the token has exceeded the maximum number of uses    the token is otherwise invalid   Every time authentication succeeds  auto auth writes the token to any appropriately configured sink      Advanced functionality  Sinks support some advanced features  including the ability for the written values to be encrypted or  response wrapped   vault docs concepts response wrapping    Both mechanisms can be used concurrently  in this case  the value will be response wrapped  then encrypted       Response Wrapping tokens  There are two ways that tokens can be response wrapped   1  By the auth method  This allows the end client to introspect the     creation path  of the token  helping prevent Man In The Middle  MITM     attacks  However  because auto auth cannot then unwrap the token and rewrap    it without modifying the  creation path   we are not able to renew the    token  it is up to the end client to renew the token  Agent and Proxy both    stay daemonized in this mode since some auth methods allow for reauthentication    on certain events   2  By any of the token sinks  Because more than one sink can be configured  the    token must be wrapped after it is fetched  rather than wrapped by Vault as    it s being returned  As a result  the  creation path  will 
always be     sys wrapping wrap   and validation of this field cannot be used as    protection against MITM attacks  However  this mode allows auto auth to keep    the token renewed for the end client and automatically reauthenticate when    it expires       Encrypting tokens     Support for encrypted tokens is experimental  if input output formats change  we will make every effort to provide backwards compatibility   Tokens can be encrypted  using a Diffie Hellman exchange to generate an ephemeral key  In this mechanism  the client receiving the token writes a generated public key to a file  The sink responsible for writing the token to that client looks for this public key and uses it to compute a shared secret key  which is then used to encrypt the token via AES GCM  The nonce  encrypted payload  and the sink s public key are then written to the output file  where the client can compute the shared secret and decrypt the token value      NOTE  Token encryption is not a protection against MITM attacks  The purpose of this feature is for forward secrecy and coverage against bare token values being persisted  A MITM that can write to the sink s output and or client public key input files could attack this exchange  Using TLS to protect the transit of tokens is highly recommended   To help mitigate MITM attacks  additional authenticated data  AAD  can be provided to Agent and Proxy  This data is written as part of the AES GCM tag and must match on both Agent and Proxy and the client  This of course means that protecting this AAD becomes important  but it provides another layer for an attacker to have to overcome  For instance  if the attacker has access to the file system where the token is being written  but not to read configuration or read environment variables  this AAD can be generated and passed to Agent or Proxy and the client in ways that would be difficult for the attacker to find   When using AAD  it is always a good idea for this to be as fresh as possible 
 generate a value and pass it to your client and Agent or Proxy on startup  Additionally  Agent and Proxy a Trust On First Use model  after it finds a generated public key  it will reuse that public key instead of looking for new values that have been written   If writing a client that uses this feature  it will likely be helpful to look at the  dhutil  https   github com hashicorp vault blob main helper dhutil dhutil go  library  This shows the expected format of the public key input and envelope output formats      Configuration  The top level  auto auth  block has two configuration entries      method    object  required     Configuration for the method     sinks    array of objects  optional     Configuration for the sinks     enable reauth on new credentials    bool  false     If enabled  Auto auth will   handle new credential events from supported auth methods  AliCloud AWS Cert JWT LDAP OCI    and re authenticate with the new credential       Configuration  Method      Auto auth does not support using tokens with a limited number of uses  Auto auth does not track the number of uses remaining  and may allow the token to expire before attempting to renew it  For example  if using AppRole auto auth  you must use 0  meaning unlimited  as the value for   token num uses    vault api docs auth approle token num uses    These are common configuration values that live within the  method  block      type    string  required     The type of the method to use  e g   aws      gcp    azure   etc   Note   when using HCL this can be used as the key for   the block  e g   method  aws              mount path    string  optional     The mount path of the method  If not   specified  defaults to a value of  auth  method type        namespace    string  optional     Namespace in which the mount lives    The order of precedence is  this setting lowest  followed by the   environment variable  VAULT NAMESPACE   and then the highest precedence   command line option   namespace     If 
none of these are specified  defaults to the root namespace    Note that because sink response wrapping and templating are also based   on the client created by auto auth  they use the same namespace    If specified alongside the  namespace  option in the Vault Stanza of    Vault Agent   vault docs agent and proxy agent vault stanza  or    Vault Proxy   vault docs agent and proxy proxy vault stanza   that   configuration will take precedence on everything except auto auth      wrap ttl    string or integer  optional     If specified  the written token   will be response wrapped by auto auth  This is more secure than wrapping by   sinks  but does not allow the auto auth to keep the token renewed or   automatically reauthenticate when it expires  Rather than a simple string    the written value will be a JSON encoded    SecretWrapInfo  https   godoc org github com hashicorp vault api SecretWrapInfo    structure  Uses  duration format strings   vault docs concepts duration format       min backoff    string or integer   1s      The minimum backoff time auto auth   will delay before retrying after a failed auth attempt  The backoff will start   at the configured value and double  with some randomness  after successive   failures  capped by  max backoff   If Agent templating is being used  this   value is also used as the min backoff time for the templating server    Uses  duration format strings   vault docs concepts duration format       max backoff    string or integer   5m      The maximum time Agent will delay   before retrying after a failed auth attempt  The backoff will start at    min backoff  and double  with some randomness  after successive failures    capped by  max backoff   If Agent templating is being used  this value is also   used as the max backoff time for the templating server   max backoff  is the   duration between retries  and   not   the duration that retries will be   performed before giving up  Uses  duration format strings   vault docs 
concepts duration format       exit on err    bool  false     When set to true  Vault Agent and Vault Proxy   will exit if any errors occur during authentication  This configurable only affects login   attempts for new tokens  either initial or expired tokens  and will not exit for errors on   valid token renewals      config    object  required     Configuration of the method itself  See the   sidebar for information about each method       Configuration  Sinks   These configuration values are common to all Sinks      type    string  required     The type of the method to use  e g   file      Note   when using HCL this can be used as the key for the block  e g   sink  file              wrap ttl    string or integer  optional     If specified  the written token   will be response wrapped by the sink  This is less secure than wrapping by   the method  but allows auto auth to keep the token renewed and automatically   reauthenticate when it expires  Rather than a simple string  the written   value will be a JSON encoded    SecretWrapInfo  https   godoc org github com hashicorp vault api SecretWrapInfo    structure  Uses  duration format strings   vault docs concepts duration format       dh type    string  optional     If specified  the type of Diffie Hellman exchange to   perform  meaning  which ciphers and or curves  Currently only  curve25519  is   supported      dh path    string  required if dh type is set     The path from which the   auto auth should read the client s initial parameters  e g  curve25519 public   key       derive key    bool  false     If specified  the final encryption key is   calculated by using HKDF SHA256 to derive a key from the calculated shared   secret and the two public keys for enhanced security  This is recommended   if backward compatibility isn t a concern      aad    string  optional     If specified  additional authenticated data to   use with the AES GCM encryption of the token  Can be any string  including   serialized data    
  aad env var    string  optional     If specified  AAD will be read from the   given environment variable rather than a value in the configuration file      config    object  required     Configuration of the sink itself  See the   sidebar for information about each sink       Auto auth examples  Auto auth configuration objects take two separate forms when specified in HCL and JSON  The following examples are meant to clarify the differences between the two formats        Sinks  HCL format   The HCL format may define any number of sink objects with an optional wrapping  sinks        object      Note  The  corresponding JSON format   sinks json format   must  specify a   sinks           array to encapsulate all  sink  JSON objects      hcl    Other Vault Agent or Vault Proxy configuration blocks         auto auth     method       type    approle       config           role id file path     etc vault roleid        secret id file path     etc vault secretid               sinks       sink         type    file         config             path     tmp file foo                           The following valid HCL omits the wrapping  sinks  object while specifying multiple sinks      hcl    Other Vault Agent or Vault Proxy configuration blocks         auto auth     method       type    approle       config           role id file path     etc vault roleid        secret id file path     etc vault secretid               sink       type    file       config           path     tmp file foo               sink       type    file       config           path     tmp file bar                        Sinks  JSON format   The following JSON configuration illustrates the need for a  sinks         array wrapping any number of  sink  objects      hcl      auto auth           method                      type    approle           config               role id file path     etc vault roleid            secret id file path     etc vault secretid                                sinks                 
      sink                type    file             config                 path     tmp file foo                                                 Multiple sinks are defined by appending more  sink  objects within the  sinks  array      hcl      auto auth           method                      type    approle           config               role id file path     etc vault roleid            secret id file path     etc vault secretid                                sinks                       sink                type    file             config                 path     tmp file foo                                                  sink                type    file             config                 path     tmp file bar                                               "}
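The sink options documented above (`wrap_ttl`, `dh_type`, `dh_path`, `derive_key`, and `aad_env_var`) can be combined on a single `file` sink. The following is a minimal sketch in the same HCL style as the examples above; the file paths and the `VAULT_SINK_AAD` environment variable name are hypothetical:

```hcl
auto_auth {
  method "approle" {
    config = {
      role_id_file_path   = "/etc/vault/roleid"
      secret_id_file_path = "/etc/vault/secretid"
    }
  }

  sink "file" {
    # Response-wrap in the sink (not the method) so auto-auth can
    # still renew the token on the client's behalf.
    wrap_ttl = "5m"

    # Encrypt the written value: the client writes its curve25519
    # public key to dh_path, and the sink derives a shared AES-GCM key.
    dh_type    = "curve25519"
    dh_path    = "/tmp/client-pubkey.json"
    derive_key = true

    # Additional authenticated data, read from a hypothetical env var.
    aad_env_var = "VAULT_SINK_AAD"

    config = {
      path = "/tmp/agent-token"
    }
  }
}
```

With this configuration the value written to `/tmp/agent-token` is first response-wrapped, then encrypted, matching the ordering described under "Advanced functionality".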
{"questions":"vault Vault Proxy Auto auth method application roles AppRole layout docs page title Auto auth with AppRole Use application roles for auto authentication with Vault Agent or","answers":"---\nlayout: docs\npage_title: Auto-auth with AppRole\ndescription: >-\n  Use application roles for auto-authentication with Vault Agent or\n  Vault Proxy.\n---\n\n# Auto-auth method: application roles (AppRole)\n\nThe `approle` method reads in a role ID and a secret ID from files and sends\nthe values to the [AppRole Auth\nmethod](\/vault\/docs\/auth\/approle).\n\nThe method caches values and it is safe to delete the role ID\/secret ID files\nafter they have been read. In fact, by default, after reading the secret ID,\nthe agent will delete the file. New files or values written at the expected\nlocations will be used on next authentication and the new values will be\ncached.\n\n## Configuration\n\n- `role_id_file_path` `(string: required)` - The path to the file with role ID\n\n- `secret_id_file_path` `(string: optional)` - The path to the file with secret\n  ID.\n  If not set, only the `role-id` will be used.\n  In that case, the AppRole should have `bind_secret_id` set to `false` otherwise\n  Vault Agent wouldn't be able to login.\n\n- `remove_secret_id_file_after_reading` `(bool: optional, defaults to true)` -\n  This can be set to `false` to disable the default behavior of removing the\n  secret ID file after it's been read.\n\n- `secret_id_response_wrapping_path` `(string: optional)` - If set, the value\n  at `secret_id_file_path` will be expected to be a [Response-Wrapping\n  Token](\/vault\/docs\/concepts\/response-wrapping)\n  containing the output of the secret ID retrieval endpoint for the role (e.g.\n  `auth\/approle\/role\/webservers\/secret-id`) and the creation path for the\n  response-wrapping token must match the value set here.\n\n## Example configuration\n\nAn example configuration, using approle to enable 
[auto-auth](\/vault\/docs\/agent-and-proxy\/autoauth)\nand creating both a plaintext token sink and a [response-wrapped token sink file](\/vault\/docs\/agent-and-proxy\/autoauth#wrap_ttl), follows:\n\n```hcl\npid_file = \".\/pidfile\"\n\nvault {\n  address = \"https:\/\/127.0.0.1:8200\"\n}\n\nauto_auth {\n  method {\n    type      = \"approle\"\n\n    config = {\n      role_id_file_path = \"roleid\"\n      secret_id_file_path = \"secretid\"\n      remove_secret_id_file_after_reading = false\n    }\n  }\n\n  sink {\n    type = \"file\"\n    wrap_ttl = \"30m\"\n    config = {\n      path = \"sink_file_wrapped_1.txt\"\n    }\n  }\n\n  sink {\n    type = \"file\"\n    config = {\n      path = \"sink_file_unwrapped_2.txt\"\n    }\n  }\n}\n\n\napi_proxy {\n  use_auto_auth_token = true\n}\n\nlistener \"tcp\" {\n  address = \"127.0.0.1:8100\"\n  tls_disable = true\n}\n\ntemplate {\n  source      = \"\/etc\/vault\/server.key.ctmpl\"\n  destination = \"\/etc\/vault\/server.key\"\n}\n\ntemplate {\n  source      = \"\/etc\/vault\/server.crt.ctmpl\"\n  destination = \"\/etc\/vault\/server.crt\"\n}\n```","site":"vault","answers_cleaned":"    layout  docs page title  Auto auth with AppRole description       Use application roles for auto authentication with Vault Agent or   Vault Proxy         Auto auth method  application roles  AppRole   The  approle  method reads in a role ID and a secret ID from files and sends the values to the  AppRole Auth method   vault docs auth approle    The method caches values and it is safe to delete the role ID secret ID files after they have been read  In fact  by default  after reading the secret ID  the agent will delete the file  New files or values written at the expected locations will be used on next authentication and the new values will be cached      Configuration     role id file path    string  required     The path to the file with role ID     secret id file path    string  optional     The path to the file with secret   ID    If not 
set  only the  role id  will be used    In that case  the AppRole should have  bind secret id  set to  false  otherwise   Vault Agent wouldn t be able to login      remove secret id file after reading    bool  optional  defaults to true       This can be set to  false  to disable the default behavior of removing the   secret ID file after it s been read      secret id response wrapping path    string  optional     If set  the value   at  secret id file path  will be expected to be a  Response Wrapping   Token   vault docs concepts response wrapping    containing the output of the secret ID retrieval endpoint for the role  e g     auth approle role webservers secret id   and the creation path for the   response wrapping token must match the value set here      Example configuration  An example configuration  using approle to enable  auto auth   vault docs agent and proxy autoauth  and creating both a plaintext token sink and a  response wrapped token sink file   vault docs agent and proxy autoauth wrap ttl   follows      hcl pid file      pidfile   vault     address    https   127 0 0 1 8200     auto auth     method       type         approle       config           role id file path    roleid        secret id file path    secretid        remove secret id file after reading   false              sink       type    file      wrap ttl    30m      config           path    sink file wrapped 1 txt               sink       type    file      config           path    sink file unwrapped 2 txt                api proxy     use auto auth token   true    listener  tcp      address    127 0 0 1 8100    tls disable   true    template     source          etc vault server key ctmpl    destination     etc vault server key     template     source          etc vault server crt ctmpl    destination     etc vault server crt       "}
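The `secret_id_response_wrapping_path` option described above can be sketched as follows. This is a minimal, illustrative configuration: the role name `webservers` is taken from the docs' own example endpoint, while the file paths are hypothetical:

```hcl
auto_auth {
  method "approle" {
    config = {
      role_id_file_path = "/etc/vault/roleid"
      # This file is expected to contain a response-wrapping token,
      # not a bare secret ID.
      secret_id_file_path = "/etc/vault/secretid"
      # Auto-auth unwraps the token and rejects it if its creation
      # path differs from this value (a guard against substituted tokens).
      secret_id_response_wrapping_path = "auth/approle/role/webservers/secret-id"
    }
  }

  sink "file" {
    config = {
      path = "/tmp/agent-token"
    }
  }
}
```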
{"questions":"vault page title What is Vault Proxy Vault Proxy is a server side daemon with caching and auto authentication that acts as load balancer and API proxy for Vault layout docs What is Vault Proxy","answers":"---\nlayout: docs\npage_title: What is Vault Proxy?\ndescription: >-\n  Vault Proxy is a server-side daemon with caching and auto-authentication that\n  acts as load-balancer and API proxy for Vault.\n---\n\n# What is Vault Proxy?\n\nVault Proxy aims to remove the initial hurdle to adopt Vault by providing a\nmore scalable and simpler way for applications to integrate with Vault.\nVault Proxy acts as an [API Proxy][apiproxy] for Vault, and can optionally allow\nor force interacting clients to use its [automatically authenticated token][autoauth].\n\nVault Proxy is a client daemon that provides the following features:\n\n- [Auto-Auth][autoauth] - Automatically authenticate to Vault and manage the\ntoken renewal process for locally-retrieved dynamic secrets.\n- [API Proxy][apiproxy] - Acts as a proxy for Vault's API,\noptionally using (or forcing the use of) the Auto-Auth token.\n- [Caching][caching] - Allows client-side caching of responses containing newly\ncreated tokens and responses containing leased secrets generated off of these\nnewly created tokens. The agent also manages the renewals of the cached tokens and leases.\n\n## Auto-Auth\n\nVault Proxy allows easy authentication to Vault in a wide variety of\nenvironments. Please see the [Auto-Auth docs][autoauth]\nfor information.\n\nAuto-Auth functionality takes place within an `auto_auth` configuration stanza.\n\n## API proxy\n\nVault Proxy's primary purpose is to act as an API proxy for Vault, allowing you to talk to Vault's\nAPI via a listener. It can be configured to optionally allow or force the automatic use of\nthe Auto-Auth token for these requests. 
Please see the [API Proxy docs][apiproxy]\nfor more information.\n\nAPI Proxy functionality takes place within a defined `listener`, and its behaviour can be configured with an\n[`api_proxy` stanza](\/vault\/docs\/agent-and-proxy\/proxy\/apiproxy#configuration-api_proxy).\n\n## Caching\n\nVault Proxy allows client-side caching of responses containing newly created tokens\nand responses containing leased secrets generated off of these newly created tokens.\nPlease see the [Caching docs][caching] for information.\n\n## API\n\n### Quit\n\nThis endpoint triggers shutdown of the proxy. By default, it is disabled, and can\nbe enabled per listener using the [`proxy_api`][proxy-api] stanza. It is recommended\nto only enable this on trusted interfaces, as it does not require any authorization to use.\n\n| Method | Path             |\n| :----- | :--------------- |\n| `POST` | `\/proxy\/v1\/quit` |\n\n### Cache\n\nSee the [caching](\/vault\/docs\/agent-and-proxy\/proxy\/caching#api) page for details on the cache API.\n\n## Configuration\n\n### Command options\n\n- `-log-level` ((#\\_log_level)) `(string: \"info\")` - Log verbosity level. Supported values (in\norder of descending detail) are `trace`, `debug`, `info`, `warn`, and `error`. This can\nalso be specified via the `VAULT_LOG_LEVEL` environment variable.\n\n- `-log-format` ((#\\_log_format)) `(string: \"standard\")` - Log format. Supported values\nare `standard` and `json`. This can also be specified via the\n`VAULT_LOG_FORMAT` environment variable.\n\n- `-log-file` ((#\\_log_file)) - the absolute path where Vault Proxy should save\n  log messages. Paths that end with a path separator use the default file name,\n  `proxy.log`. Paths that do not end with a file extension use the default\n  `.log` extension. If the log file rotates, Vault Proxy appends the current\n  timestamp to the file name at the time of rotation. 
For example:\n\n  `log-file` | Full log file | Rotated log file\n  ---------- | ------------- | ----------------\n  `\/var\/log` | `\/var\/log\/proxy.log` | `\/var\/log\/proxy-{timestamp}.log`\n  `\/var\/log\/my-diary` | `\/var\/log\/my-diary.log` | `\/var\/log\/my-diary-{timestamp}.log`\n  `\/var\/log\/my-diary.txt` | `\/var\/log\/my-diary.txt` | `\/var\/log\/my-diary-{timestamp}.txt`\n\n- `-log-rotate-bytes` ((#\\_log_rotate_bytes)) - to specify the number of\nbytes that should be written to a log before it needs to be rotated. Unless specified,\nthere is no limit to the number of bytes that can be written to a log file.\n\n- `-log-rotate-duration` ((#\\_log_rotate_duration)) - to specify the maximum\nduration a log should be written to before it needs to be rotated. Must be a duration\nvalue such as 30s. Defaults to 24h.\n\n- `-log-rotate-max-files` ((#\\_log_rotate_max_files)) - to specify the maximum\nnumber of older log file archives to keep. Defaults to `0` (no files are ever deleted).\nSet to `-1` to discard old log files when a new one is created.\n\n### Configuration file options\n\nThese are the currently-available general configuration options:\n\n- `vault` <code>([vault][vault]: <optional\\>)<\/code> - Specifies the remote Vault server the Proxy connects to.\n\n- `auto_auth` <code>([auto_auth][autoauth]: <optional\\>)<\/code> - Specifies the method and other options used for Auto-Auth functionality.\n\n- `api_proxy` <code>([api_proxy][apiproxy]: <optional\\>)<\/code> - Specifies options used for API Proxy functionality.\n\n- `cache` <code>([cache][caching]: <optional\\>)<\/code> - Specifies options used for Caching functionality.\n\n- `listener` <code>([listener][listener]: <optional\\>)<\/code> - Specifies the addresses and ports on which the Proxy will respond to requests.\n\n~> **Note:** On `SIGHUP` (`kill -SIGHUP $(pidof vault)`), Vault Proxy will attempt to reload listener TLS configuration.\nThis method can be used to refresh certificates used by 
Vault Proxy without having to restart its process.\n\n- `pid_file` `(string: \"\")` - Path to the file in which the Proxy's Process ID\n(PID) should be stored.\n\n- `exit_after_auth` `(bool: false)` - If set to `true`, the proxy will exit\nwith code `0` after a single successful auth, where success means that a\ntoken was retrieved and all sinks successfully wrote it.\n\n- `disable_idle_connections` `(string array: [])` - A list of strings that disables idle connections for various features in Vault Proxy.\nValid values include `auto-auth` and `proxying`. Can also be configured by setting the `VAULT_PROXY_DISABLE_IDLE_CONNECTIONS`\nenvironment variable as a comma-separated string. This environment variable will override any values found in a configuration file.\n\n- `disable_keep_alives` `(string array: [])` - A list of strings that disables keep-alives for various features in Vault Proxy.\nValid values include `auto-auth` and `proxying`. Can also be configured by setting the `VAULT_PROXY_DISABLE_KEEP_ALIVES`\nenvironment variable as a comma-separated string. This environment variable will override any values found in a configuration file.\n\n- `telemetry` <code>([telemetry][telemetry]: <optional\\>)<\/code> \u2013 Specifies the telemetry\nreporting system. 
See the [telemetry Stanza](\/vault\/docs\/agent-and-proxy\/proxy#telemetry-stanza) section below\nfor a list of metrics specific to Proxy.\n\n- `log_level` - Equivalent to the [`-log-level` command-line flag](#_log_level).\n\n~> **Note:** On `SIGHUP` (`kill -SIGHUP $(pidof vault)`), Vault Proxy will update the log level to the value\nspecified by the configuration file (including overriding values set using CLI or environment variable parameters).\n\n- `log_format` - Equivalent to the [`-log-format` command-line flag](#_log_format).\n\n- `log_file` - Equivalent to the [`-log-file` command-line flag](#_log_file).\n\n- `log_rotate_duration` - Equivalent to the [`-log-rotate-duration` command-line flag](#_log_rotate_duration).\n\n- `log_rotate_bytes` - Equivalent to the [`-log-rotate-bytes` command-line flag](#_log_rotate_bytes).\n\n- `log_rotate_max_files` - Equivalent to the [`-log-rotate-max-files` command-line flag](#_log_rotate_max_files).\n\n### vault stanza\n\nThere can be at most one top-level `vault` block, and it has the following\nconfiguration entries:\n\n- `address` `(string: <optional>)` - The address of the Vault server to\nconnect to. This should be a Fully Qualified Domain Name (FQDN) or IP\nsuch as `https:\/\/vault-fqdn:8200` or `https:\/\/172.16.9.8:8200`.\nThis value can be overridden by setting the `VAULT_ADDR` environment variable.\n\n- `ca_cert` `(string: <optional>)` - Path on the local disk to a single PEM-encoded\nCA certificate to verify the Vault server's SSL certificate. 
This value can\nbe overridden by setting the `VAULT_CACERT` environment variable.\n\n- `ca_path` `(string: <optional>)` - Path on the local disk to a directory of\nPEM-encoded CA certificates to verify the Vault server's SSL certificate.\nThis value can be overridden by setting the `VAULT_CAPATH` environment\nvariable.\n\n- `client_cert` `(string: <optional>)` - Path on the local disk to a single\nPEM-encoded client certificate to use for TLS authentication to the Vault server.\nThis value can be overridden by setting the `VAULT_CLIENT_CERT` environment\nvariable.\n\n- `client_key` `(string: <optional>)` - Path on the local disk to a single\nPEM-encoded private key matching the client certificate from `client_cert`.\nThis value can be overridden by setting the `VAULT_CLIENT_KEY` environment\nvariable.\n\n- `tls_skip_verify` `(string: <optional>)` - Disable verification of TLS\ncertificates. Using this option is highly discouraged as it decreases the\nsecurity of data transmissions to and from the Vault server. This value can\nbe overridden by setting the `VAULT_SKIP_VERIFY` environment variable.\n\n- `tls_server_name` `(string: <optional>)` - Name to use as the SNI host when\nconnecting via TLS. This value can be overridden by setting the\n`VAULT_TLS_SERVER_NAME` environment variable.\n\n- `namespace` `(string: <optional>)` - Namespace to use for all of Vault Proxy's\nrequests to Vault. This can also be specified by command line or environment variable.\nThe order of precedence is: this setting lowest, followed by the environment variable\n`VAULT_NAMESPACE`, and then the highest precedence command-line option `-namespace`.\nIf none of these are specified, defaults to the root namespace.\n\n#### retry stanza\n\nThe `vault` stanza may contain a `retry` stanza that controls how failing Vault\nrequests are handled. 
Auto-auth, however, has its own notion of retrying and is not\naffected by this section.\n\nHere are the options for the `retry` stanza:\n\n- `num_retries` `(int: 12)` - Specify how many times a failing request will\nbe retried. A value of `0` translates to the default, i.e. 12 retries.\nA value of `-1` disables retries. The environment variable `VAULT_MAX_RETRIES`\noverrides this setting.\n\nRequests originating\nfrom the proxy cache will only be retried if they resulted in specific HTTP\nresult codes: any 50x code except 501 (\"not implemented\"), as well as 412\n(\"precondition failed\"); 412 is used in Vault Enterprise 1.7+ to indicate a\nstale read due to eventual consistency. Requests coming from the template\nsubsystem are retried regardless of the failure.\n\n### listener stanza\n\nVault Proxy supports one or more [listener][listener_main] stanzas. Listeners\ncan be configured with or without [caching][caching], but will use the cache if it\nhas been configured, and will enable the [API proxy][apiproxy]. In addition to the standard\nlistener configuration, a Proxy's listener configuration also supports the following:\n\n- `require_request_header` `(bool: false)` - Require that all incoming HTTP\nrequests on this listener have an `X-Vault-Request: true` header entry.\nUsing this option offers an additional layer of protection from Server Side\nRequest Forgery attacks. Requests on the listener that do not have the proper\n`X-Vault-Request` header will fail, with an HTTP response status code of `412: Precondition Failed`.\n\n- `role` `(string: default)` - `role` determines which APIs the listener serves.\nIt can be configured to `metrics_only` to serve only metrics, or the default role, `default`,\nwhich serves everything (including metrics). 
The `require_request_header` does not apply\nto `metrics_only` listeners.\n\n- `proxy_api` <code>([proxy_api][proxy-api]: <optional\\>)<\/code> - Manages optional Proxy API endpoints.\n\n#### proxy_api stanza\n\n- `enable_quit` `(bool: false)` - If set to `true`, the Proxy will enable the [quit](\/vault\/docs\/agent-and-proxy\/proxy#quit) API.\n\n### telemetry stanza\n\nVault Proxy supports the [telemetry][telemetry] stanza and collects various\nruntime metrics about its performance, the auto-auth and the cache status:\n\n| Metric                           | Description                                          | Type    |\n| -------------------------------- | ---------------------------------------------------- | ------- |\n| `vault.proxy.auth.failure`       | Number of authentication failures                    | counter |\n| `vault.proxy.auth.success`       | Number of authentication successes                   | counter |\n| `vault.proxy.proxy.success`      | Number of requests successfully proxied              | counter |\n| `vault.proxy.proxy.client_error` | Number of requests for which Vault returned an error | counter |\n| `vault.proxy.proxy.error`        | Number of requests the proxy failed to proxy         | counter |\n| `vault.proxy.cache.hit`          | Number of cache hits                                 | counter |\n| `vault.proxy.cache.miss`         | Number of cache misses                               | counter |\n\n## Start Vault proxy\n\nTo run Vault Proxy:\n\n1. [Download](\/vault\/downloads) the Vault binary where the client application runs\n(virtual machine, Kubernetes pod, etc.)\n\n1. Create a Vault Proxy configuration file. (See the [Example\nConfiguration](#example-configuration) section for an example configuration.)\n\n1. 
Start a Vault Proxy with the configuration file.\n\n**Example:**\n\n```shell-session\n$ vault proxy -config=\/etc\/vault\/proxy-config.hcl\n```\n\nTo get help, run:\n\n```shell-session\n$ vault proxy -h\n```\n\nAs with Vault, the `-config` flag can be used in three different ways:\n\n- Use the flag once to name the path to a single specific configuration file.\n- Use the flag multiple times to name multiple configuration files, which will be composed at runtime.\n- Use the flag to name a directory of configuration files, the contents of which will be composed at runtime.\n\n## Example configuration\n\nAn example configuration, with very contrived values, follows:\n\n```hcl\npid_file = \".\/pidfile\"\n\nvault {\n  address = \"https:\/\/vault-fqdn:8200\"\n  retry {\n    num_retries = 5\n  }\n}\n\nauto_auth {\n  method \"aws\" {\n    mount_path = \"auth\/aws-subaccount\"\n    config = {\n      type = \"iam\"\n      role = \"foobar\"\n    }\n  }\n\n  sink \"file\" {\n    config = {\n      path = \"\/tmp\/file-foo\"\n    }\n  }\n\n  sink \"file\" {\n    wrap_ttl = \"5m\"\n    aad_env_var = \"TEST_AAD_ENV\"\n    dh_type = \"curve25519\"\n    dh_path = \"\/tmp\/file-foo-dhpath2\"\n    config = {\n      path = \"\/tmp\/file-bar\"\n    }\n  }\n}\n\ncache {\n  \/\/ An empty cache stanza still enables caching\n}\n\napi_proxy {\n  use_auto_auth_token = true\n}\n\nlistener \"unix\" {\n  address = \"\/path\/to\/socket\"\n  tls_disable = true\n\n  proxy_api {\n    enable_quit = true\n  }\n}\n\nlistener \"tcp\" {\n  address = \"127.0.0.1:8100\"\n  tls_disable = true\n}\n```\n\n[vault]: \/vault\/docs\/agent-and-proxy\/proxy#vault-stanza\n[autoauth]: \/vault\/docs\/agent-and-proxy\/autoauth\n[caching]: \/vault\/docs\/agent-and-proxy\/proxy\/caching\n[apiproxy]: \/vault\/docs\/agent-and-proxy\/proxy\/apiproxy\n[persistent-cache]: \/vault\/docs\/agent-and-proxy\/proxy\/caching\/persistent-caches\n[proxy-api]: \/vault\/docs\/agent-and-proxy\/proxy#proxy_api-stanza\n[listener]: 
\/vault\/docs\/agent-and-proxy\/proxy#listener-stanza\n[listener_main]: \/vault\/docs\/configuration\/listener\/tcp\n[telemetry]: \/vault\/docs\/configuration\/telemetry","site":"vault"}
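The `-log-file` naming and rotation rules documented above can be sketched as a small helper. This is an illustrative Python sketch of the documented behavior only, not Vault's implementation; the `proxy_log_path` name is hypothetical, and directory detection is simplified to a trailing path separator:

```python
import os.path


def proxy_log_path(log_file, timestamp=None):
    """Resolve a -log-file value following the documented rules (sketch only):
    - a path ending in a separator gets the default file name proxy.log
    - a path without a file extension gets the default .log extension
    - on rotation, the current timestamp is appended to the file name
    """
    if log_file.endswith("/"):
        # Trailing separator stands in for "path is a directory" here
        path = log_file + "proxy.log"
    else:
        _, ext = os.path.splitext(log_file)
        path = log_file if ext else log_file + ".log"
    if timestamp is not None:
        # Rotated file: insert the timestamp before the extension
        base, ext = os.path.splitext(path)
        path = f"{base}-{timestamp}{ext}"
    return path


print(proxy_log_path("/var/log/"))                 # /var/log/proxy.log
print(proxy_log_path("/var/log/my-diary"))         # /var/log/my-diary.log
print(proxy_log_path("/var/log/my-diary.txt"))     # /var/log/my-diary.txt
```

The outputs mirror the rows of the `-log-file` table above.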
{"questions":"vault Vault Proxy s API Proxy functionality allows you to use Vault Proxy s API as a proxy Use Vault Proxy as an API proxy page title Use Vault Proxy as an API proxy layout docs Use auto authentication and configure Vault Proxy as a proxy for the Vault API","answers":"---\nlayout: docs\npage_title: Use Vault Proxy as an API proxy\ndescription: >-\n  Use auto-authentication and configure Vault Proxy as a proxy for the Vault API.\n---\n\n# Use Vault Proxy as an API proxy\n\nVault Proxy's API Proxy functionality allows you to use Vault Proxy's API as a proxy\nfor Vault's API.\n\n## Functionality\n\nThe [`listener` stanza](\/vault\/docs\/agent-and-proxy\/proxy#listener-stanza) for Vault Proxy configures a listener for Vault Proxy. If\nits `role` is not set to `metrics_only`, it will act as a proxy for the Vault server that\nhas been configured in the [`vault` stanza](\/vault\/docs\/agent-and-proxy\/proxy#vault-stanza) of Proxy. This enables access to the Vault\nAPI from the Proxy API, and can be configured to optionally allow or force the automatic use of\nthe Auto-Auth token for these requests, as described below.\n\nIf a `listener` has been configured alongside a `cache` stanza, the API Proxy will\nfirst attempt to utilize the cache subsystem for qualifying requests, before forwarding the\nrequest to Vault. See the [caching docs](\/vault\/docs\/agent-and-proxy\/proxy\/caching) for more information on caching.\n\n## Using Auto-Auth token\n\nVault Proxy allows for easy authentication to Vault in a wide variety of\nenvironments using [Auto-Auth](\/vault\/docs\/agent-and-proxy\/autoauth). By setting the\n`use_auto_auth_token` (see below) configuration, clients will not be required\nto provide a Vault token to the requests made to the Proxy. When this\nconfiguration is set, if the request doesn't already bear a token, then the\nauto-auth token will be used to forward the request to the Vault server. 
This\nconfiguration will be overridden if the request already has a token attached,\nin which case, the token present in the request will be used to forward the\nrequest to the Vault server.\n\n## Forcing Auto-Auth token\n\nVault Proxy can be configured to force the use of the auto-auth token by using\nthe value `force` for the `use_auto_auth_token` option. This configuration\noverrides the default behavior described above in [Using Auto-Auth\nToken](\/vault\/docs\/agent-and-proxy\/proxy\/apiproxy#using-auto-auth-token), and instead ignores any\nexisting Vault token in the request and uses the auto-auth token.\n\n## Configuration (`api_proxy`)\n\nThe top-level `api_proxy` block has the following configuration entries:\n\n- `use_auto_auth_token` `(bool\/string: false)` - If set, the requests made to Proxy\nwithout a Vault token will be forwarded to the Vault server with the\nauto-auth token attached. If the requests already bear a token, this\nconfiguration will be overridden and the token in the request will be used to\nforward the request to the Vault server. If set to `\"force\"`, Proxy will use the\nauto-auth token, overwriting the attached Vault token if set.\n\n~> **Note**: When using the proxy's auto-auth token with the `use_auto_auth_token`\n   configuration, one proxy per application is very strongly recommended, as Vault will\n   be unable to distinguish requests coming from multiple applications through a single proxy.\n   In situations where a single proxy is shared by multiple applications, setting `use_auto_auth_token`\n   to `false` (the default) is recommended.\n\n- `prepend_configured_namespace` `(bool: false)` - If set, when Proxy has a\n  namespace configured, such as through the\n  [Vault stanza](\/vault\/docs\/agent-and-proxy\/proxy#vault-stanza), all requests\n  proxied to Vault will have the configured namespace prepended to the namespace\n  header. 
If Proxy's namespace is set to `ns1` and Proxy is sent a request with the\n  namespace `ns2`, the request will go to the `ns1\/ns2` namespace. Likewise, if Proxy\n  is sent a request without a namespace, the request will go to the `ns1` namespace.\n  In essence, all proxied requests must go to the configured\n  namespace or to its child namespaces.\n\nThe following two `api_proxy` options are only useful when making requests to a Vault\nEnterprise cluster, and are documented as part of its\n[Eventual Consistency](\/vault\/docs\/enterprise\/consistency#vault-proxy-and-consistency-headers)\npage.\n\n- `enforce_consistency` `(string: \"never\")` - Set to one of `\"always\"`\nor `\"never\"`.\n\n- `when_inconsistent` `(string: optional)` - Set to one of `\"fail\"`, `\"retry\"`,\nor `\"forward\"`.\n\n### Example configuration\n\nHere is an example of a `listener` configuration alongside an `api_proxy` configuration to force the use of the auto-auth token\nand enforce consistency for a proxy dedicated to a single application.\n\n```hcl\n# Other Vault Proxy configuration blocks\n# ...\n\napi_proxy {\n  use_auto_auth_token = \"force\"\n  enforce_consistency = \"always\"\n}\n\nlistener \"unix\" {\n  address = \"\/var\/run\/vault-proxy.sock\"\n}\n```","site":"vault","answers_cleaned":"    layout  docs page title  Use Vault Proxy as an API proxy description       Use auto authentication and configure Vault Proxy as a proxy for the Vault API         Use Vault Proxy as an API proxy  Vault Proxy s API Proxy functionality allows you to use Vault Proxy s API as a proxy for Vault s API      Functionality  The   listener  stanza   vault docs agent and proxy proxy listener stanza  for Vault Proxy configures a listener for Vault Proxy  If its  role  is not set to  metrics only   it will act as a proxy for the Vault server that has been configured in the   vault  stanza   vault docs agent and proxy proxy vault stanza  of Proxy  This enables access to the Vault API 
from the Proxy API  and can be configured to optionally allow or force the automatic use of the Auto Auth token for these requests  as described below   If a  listener  has been configured alongside a  cache  stanza  the API Proxy will first attempt to utilize the cache subsystem for qualifying requests  before forwarding the request to Vault  See the  caching docs   vault docs agent and proxy proxy caching  for more information on caching      Using Auto Auth token  Vault Proxy allows for easy authentication to Vault in a wide variety of environments using  Auto Auth   vault docs agent and proxy autoauth   By setting the  use auto auth token   see below  configuration  clients will not be required to provide a Vault token to the requests made to the Proxy  When this configuration is set  if the request doesn t already bear a token  then the auto auth token will be used to forward the request to the Vault server  This configuration will be overridden if the request already has a token attached  in which case  the token present in the request will be used to forward the request to the Vault server      Forcing Auto Auth token  Vault Proxy can be configured to force the use of the auto auth token by using the value  force  for the  use auto auth token  option  This configuration overrides the default behavior described above in  Using Auto Auth Token   vault docs agent and proxy proxy apiproxy using auto auth token   and instead ignores any existing Vault token in the request and instead uses the auto auth token       Configuration   api proxy    The top level  api proxy  block has the following configuration entries      use auto auth token    bool string  false     If set  the requests made to Proxy without a Vault token will be forwarded to the Vault server with the auto auth token attached  If the requests already bear a token  this configuration will be overridden and the token in the request will be used to forward the request to the Vault server  If set to   
force   Proxy will use the auto auth token  overwriting the attached Vault token if set        Note    When using the proxy s auto auth token with the  use auto auth token     configuration  one proxy per application is very strongly recommended  as Vault will    unable to distinguish requests coming from multiple applications through a single proxy     In situations where a single proxy is shared by multiple applications  setting  use auto auth token     to  false   the default  is recommended      prepend configured namespace    bool  false     If set  when Proxy has a   namespace configured  such as through the    Vault stanza   vault docs agent and proxy proxy vault stanza   all requests   proxied to Vault will have the configured namespace prepended to the namespace   header  If Proxy s namespace is set to  ns1  and Proxy is sent a request with the   namespace  ns2   the request will go to the  ns1 ns2  namespace  Likewise  if Proxy   is sent a request without a namespace  the request will go to the  ns1  namespace    In essence  what this means is that all proxied requests must go to the configured   namespace or to its child namespaces   The following two  api proxy  options are only useful when making requests to a Vault Enterprise cluster  and are documented as part of its  Eventual Consistency   vault docs enterprise consistency vault proxy and consistency headers  page      enforce consistency    string   never      Set to one of   always   or   never        when inconsistent    string  optional     Set to one of   fail      retry    or   forward         Example configuration  Here is an example of a  listener  configuration alongside  api proxy  configuration to force the use of the auto auth token and enforce consistency for a proxy dedicated to a single application      hcl   Other Vault Proxy configuration blocks        api proxy     use auto auth token    force    enforce consistency    always     listener  unix        address     var run vault 
proxy sock      "}
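The namespace-prepending behavior described in the record above can be sketched with a minimal, hypothetical Proxy configuration (the Vault address, listener address, and namespace name are illustrative assumptions, not values from this document):

```hcl
# Sketch: Proxy configured with namespace "ns1" and prepend_configured_namespace.
# A request arriving with header "X-Vault-Namespace: ns2" is forwarded to the
# "ns1/ns2" namespace; a request with no namespace header goes to "ns1".
vault {
  address   = "https://vault.example.com:8200" # assumed address
  namespace = "ns1"
}

api_proxy {
  use_auto_auth_token          = true
  prepend_configured_namespace = true
}

listener "tcp" {
  address     = "127.0.0.1:8100"
  tls_disable = true
}
```

With this sketch, clients never need to set a namespace header at all; every proxied request lands in `ns1` or one of its child namespaces.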
{"questions":"vault created tokens or leased secrets generated from a newly created token page title Vault Proxy caching overview Vault Proxy caching overview Use client side caching with Vault Proxy for responses with newly layout docs","answers":"---\nlayout: docs\npage_title: Vault Proxy caching overview\ndescription: >-\n  Use client-side caching with Vault Proxy for responses with newly\n  created tokens or leased secrets generated from a newly created token.\n---\n\n# Vault Proxy caching overview\n\nVault Proxy caching allows client-side caching of responses containing newly\ncreated tokens and responses containing leased secrets generated from these\nnewly created tokens. The renewals of the cached tokens and leases are also\nmanaged by the proxy. Additionally, with `cache_static_secrets` set to `true`,\nVault Proxy [can be configured to cache KVv1 and KVv2 secrets][static-secret-caching].\n\n## Caching and renewals\n\nResponse caching and renewals for dynamic secrets are managed by Proxy only under these\nspecific scenarios.\n\n1. Token creation requests are made through the proxy. This means that any\n   login operations performed using various auth methods and invoking the token\n   creation endpoints of the token auth method via the proxy will result in the\n   response getting cached by the proxy. Responses containing new tokens will\n   be cached by the proxy only if the parent token is already being managed by\n   the proxy or if the new token is an orphan token.\n\n2. Leased secret creation requests are made through the proxy using tokens that\n   are already managed by the proxy. This means that any dynamic credentials\n   that are issued using the tokens managed by the proxy will be cached, and\n   their renewals are handled by the proxy.\n\n## Static secret caching\n\nYou can configure Vault Proxy to cache dynamic secrets and static (KVv1 and KVv2)\nsecrets. When you enable caching for static secrets, 
Proxy keeps a cached entry\nof the secret but only provides the cached response to requests made with tokens\nthat can access the secret. As a result, multiple requests to Vault Proxy for\nthe same KV secret only require a single, initial request to be forwarded to Vault.\n\nStatic secret caching is disabled by default. To enable caching for static secrets you must\nconfigure [auto-auth](\/vault\/docs\/agent-and-proxy\/autoauth) and ensure the\nauto-auth token has permission to subscribe to KV\n[event](\/vault\/docs\/concepts\/events) updates.\n\nOnce configured, Proxy uses the auto-auth token to subscribe to KV events, and\nmonitors the subscription feed to know when to update the secrets in its cache.\n\nFor more information on static secret caching, refer to the\n[Vault Proxy static secret caching][static-secret-caching] overview.\n\n## Persistent cache\n\nVault Proxy can restore secrets, such as tokens, leases, and static secrets, from a persistent\ncache file created by a previous Vault Proxy process.\n\nRefer to the [Vault Proxy Persistent\nCaching](\/vault\/docs\/agent-and-proxy\/proxy\/caching\/persistent-caches) page for more information on\nthis functionality.\n\n## Cache evictions\n\nThe eviction of cache entries pertaining to dynamic secrets will occur when the proxy\ncan no longer renew them. This can happen when the secrets hit their maximum\nTTL or if the renewals result in errors.\n\nVault Proxy does some best-effort cache evictions by observing specific request types\nand response codes. For example, if a token revocation request is made via the\nproxy and if the forwarded request to the Vault server succeeds, then the proxy\nevicts all the cache entries associated with the revoked token. 
Similarly, any\nlease revocation operation will also be intercepted by the proxy and the\nrespective cache entries will be evicted.\n\nNote that while the proxy evicts the cache entries upon secret expirations and upon\nintercepting revocation requests, it is still possible for the proxy to be\ncompletely unaware of the revocations that happen through direct client\ninteractions with the Vault server. This could potentially lead to stale cache\nentries. For managing the stale entries in the cache, an endpoint\n`\/proxy\/v1\/cache-clear` (see below) is made available to manually evict cache\nentries based on some of the query criteria used for indexing the cache entries.\n\n## Request uniqueness\n\nTo detect repeat requests and return cached responses, Proxy needs\nto have a way to uniquely identify the requests. This computation currently\ntakes a simplistic approach (which may change in future) of serializing and\nhashing the HTTP request along with all the headers and the request body. This\nhash value is then used as an index into the cache to check if the response is\nreadily available. The consequence of this approach is that the hash value for\nany request will differ if any data in the request is modified. This has the\nside-effect of resulting in false negatives if, say, the ordering of the request\nparameters is modified. As long as the requests come in without any change,\ncaching behavior should be consistent. Identical requests with differently\nordered request values will result in duplicated cache entries. The caching\nfunctionality is built on the heuristic assumption that clients use consistent\nmechanisms to make requests, thereby producing consistent hash values per request.\n\n## Renewal management\n\nThe tokens and leases are renewed by the proxy using the secret renewer that is\nmade available via the Vault server's [Go\nAPI](https:\/\/godoc.org\/github.com\/hashicorp\/vault\/api#Renewer). 
Proxy performs\nall operations in memory and does not persist anything to storage. This means\nthat when the proxy is shut down, all the renewal operations are immediately\nterminated and there is no way for the proxy to resume renewals after the fact.\nNote that shutting down the proxy does not revoke the secrets;\nit only means that renewal responsibility for all the valid unrevoked\nsecrets is no longer handled by the Vault proxy.\n\n## API\n\n### Cache clear\n\nThis endpoint clears the cache based on given criteria. To use this\nAPI, some information on how the proxy caches values should be known\nbeforehand. Each response that is cached in the proxy will be indexed on some\nfactors depending on the type of request. Those factors can be the `token` that\nbelongs to the cached response, the `token_accessor` of the token\nbelonging to the cached response, the `request_path` that resulted in the\ncached response, the `lease` that is attached to the cached response, the\n`namespace` to which the cached response belongs, and a few more. This API\nexposes some factors through which associated cache entries are fetched and\nevicted. For listeners without caching enabled, this API will still be available,\nbut will do nothing (there is no cache to clear) and will return a `200` response.\n\n| Method | Path                    | Produces               |\n| :----- | :---------------------- | :--------------------- |\n| `POST` | `\/proxy\/v1\/cache-clear` | `200 application\/json` |\n\n#### Parameters\n\n- `type` `(string: required)` - The type of cache entries to evict. Valid\n  values are `request_path`, `lease`, `token`, `token_accessor`, and `all`.\n  If the `type` is set to `all`, the _entire cache_ is cleared.\n\n- `value` `(string: required)` - An exact value or the prefix of the value for\n  the `type` selected. 
This parameter is optional when the `type` is set\n  to `all`.\n\n- `namespace` `(string: optional)` - This is only applicable when the `type` is set to\n  `request_path`. The namespace in which to evict cache entries for\n  the given request path.\n\n### Sample payload\n\n```json\n{\n  \"type\": \"token\",\n  \"value\": \"hvs.rlNjegSKykWcplOkwsjd8bP9\"\n}\n```\n\n### Sample request\n\n```shell-session\n$ curl \\\n    --request POST \\\n    --data @payload.json \\\n    http:\/\/127.0.0.1:1234\/proxy\/v1\/cache-clear\n```\n\n## Configuration (`cache`)\n\nThe presence of the top level `cache` block in any way (including an empty `cache` block) will enable the cache.\nNote that either `cache_static_secrets` must be `true` and\/or `disable_caching_dynamic_secrets` must\nbe `false`, otherwise the cache does nothing. The top level `cache` block has the following configuration entries:\n\n- `persist` `(object: optional)` - Configuration for the persistent cache.\n\n- `cache_static_secrets` `(bool: false)` - Enables static secret caching when\n`true`.\n\n- `disable_caching_dynamic_secrets` `(bool: false)` - Disables dynamic secret caching when\n`true`.\n\n-> **Note:** When the `cache` block is defined, a [listener][proxy-listener] must also be defined\nin the config, otherwise there is no way to utilize the cache.\n\n[proxy-listener]: \/vault\/docs\/agent-and-proxy\/proxy#listener-stanza\n\n### Configuration (Persist)\n\nThese are common configuration values that live within the `persist` block:\n\n- `type` `(string: required)` - The type of the persistent cache to use,\n  e.g. `kubernetes`. _Note_: when using HCL this can be used as the key for\n  the block, e.g. `persist \"kubernetes\" {...}`. 
Currently, only `kubernetes`\n  is supported.\n\n- `path` `(string: required)` - The path on disk where the persistent cache file\n  should be created or restored from.\n\n- `keep_after_import` `(bool: optional)` - When set to true, a restored cache file\n  is not deleted. Defaults to `false`.\n\n- `exit_on_err` `(bool: optional)` - When set to true, if any errors occur during\n  a persistent cache restore, Vault Proxy will exit with an error. Defaults to `true`.\n\n- `service_account_token_file` `(string: optional)` - When `type` is set to `kubernetes`,\nthis configures the path on disk where the Kubernetes service account token can be found.\nDefaults to `\/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token`.\n\n## Configuration (`listener`)\n\n- `listener` `(array of objects: required)` - Configuration for the listeners.\n\nThere can be one or more `listener` blocks at the top level. Adding a listener enables\nthe [API Proxy](\/vault\/docs\/agent-and-proxy\/proxy\/apiproxy) and enables the API proxy to use the cache, if configured.\nThese configuration values are common to both `tcp` and `unix` listener blocks. Blocks of type\n`tcp` support the standard `tcp` [listener](\/vault\/docs\/configuration\/listener\/tcp)\noptions. Additionally, the `role` string option is available as part of the top level\nof the `listener` block, which can be configured to `metrics_only` to serve only metrics,\nor the default role, `default`, which serves everything (including metrics).\n\n- `type` `(string: required)` - The type of the listener to use. Valid values\n  are `tcp` and `unix`.\n  _Note_: when using HCL this can be used as the key for the block, e.g.\n  `listener \"tcp\" {...}`.\n\n- `address` `(string: required)` - The address for the listener to listen to.\n  This can either be a URL path when using `tcp` or a file path when using\n  `unix`. For example, `127.0.0.1:8200` or `\/path\/to\/socket`. 
Defaults to\n  `127.0.0.1:8200`.\n\n- `tls_disable` `(bool: false)` - Specifies if TLS will be disabled.\n\n- `tls_key_file` `(string: optional)` - Specifies the path to the private key\n  for the certificate.\n\n- `tls_cert_file` `(string: optional)` - Specifies the path to the certificate\n  for TLS.\n\n### Example configuration\n\nHere is an example of a cache configuration with the optional `persist` block,\nalongside a regular listener, and a listener that only serves metrics.\n\n```hcl\n# Other Vault Proxy configuration blocks\n# ...\n\ncache {\n\tpersist = {\n\t\ttype = \"kubernetes\"\n\t\tpath = \"\/vault\/proxy-cache\/\"\n\t\tkeep_after_import = true\n\t\texit_on_err = true\n\t\tservice_account_token_file = \"\/tmp\/serviceaccount\/token\"\n\t}\n}\n\nlistener \"tcp\" {\n    address = \"127.0.0.1:8100\"\n    tls_disable = true\n}\n\nlistener \"tcp\" {\n    address = \"127.0.0.1:3000\"\n    tls_disable = true\n    role = \"metrics_only\"\n}\n```\n\n[static-secret-caching]: \/vault\/docs\/agent-and-proxy\/proxy\/caching\/static-secret-caching","site":"vault"}
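As an illustration of the `namespace` parameter on the cache-clear endpoint described in the record above, a hypothetical payload that evicts cached entries by request-path prefix within a specific namespace might look like the following (the path and namespace values are assumptions for illustration, not values from this document):

```json
{
  "type": "request_path",
  "value": "secret/data/my-app",
  "namespace": "ns1/"
}
```

Sent with the same `curl --request POST --data @payload.json` pattern shown in the sample request, this would evict cached responses whose request path matches the given prefix in the `ns1/` namespace.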
{"questions":"vault Use static secret caching with Vault Proxy to cache key value data in Vault Improve Vault traffic resiliency with Vault Proxy page title Improve Vault traffic resiliency layout docs handle updates and reduce direct requests to Vault from clients","answers":"---\nlayout: docs\npage_title: Improve Vault traffic resiliency\ndescription: >-\n  Use static secret caching with Vault Proxy to cache key\/value data in Vault,\n  handle updates, and reduce direct requests to Vault from clients.\n---\n\n# Improve Vault traffic resiliency with Vault Proxy\n\n@include 'alerts\/enterprise-only.mdx'\n\nUse static secret caching with Vault Proxy to cache KVv1 and KVv2 secrets to\nminimize requests made to Vault and provide resilient connections for clients.\n\nVault Proxy utilizes the Enterprise-only [Vault event notification system](\/vault\/docs\/concepts\/events)\nfeature for cache freshness. As a result, static secret caching can only be used\nwith Vault Enterprise installations.\n\nWhen using a Vault cluster with performance standbys, Proxy may receive secret update events\nbefore the secret update has been fully replicated. To make sure that Proxy can get updated\nsecret values after receiving an event notification, Proxy must be configured to point to the\naddress of the active node in its [Vault stanza](\/vault\/docs\/agent-and-proxy\/proxy#vault-stanza),\nor [allow_forwarding_via_header must be set to true](\/vault\/docs\/configuration\/replication#allow_forwarding_via_header)\non the cluster. When `allow_forwarding_via_header` is configured, Proxy will only forward\nrequests to update a secret in its cache after receiving an event indicating that the secret was updated.\nThis approach is recommended when access to Vault is behind, for example, a load balancer.\n\n## Step 1: Subscribe Vault Proxy to KV events\n\nVault Proxy uses Vault events and auto-auth to monitor secret status and make\nappropriate cache updates.\n1. 
Enable [auto-auth](\/vault\/docs\/agent-and-proxy\/autoauth).\n1. Create an auto-auth token with permission to subscribe to KV event updates\nwith the [Vault event notification system](\/vault\/docs\/concepts\/events). For\nexample, to create a policy that grants access to static secret (KVv1 and KVv2)\nevents, you need permission to subscribe to the `events` endpoint, as well as\nthe `list` and `subscribe` permissions on the KV secrets you want to read:\n ```hcl\n  path \"sys\/events\/subscribe\/kv*\" {\n    capabilities = [\"read\"]\n  }\n\n  path \"*\" {\n    capabilities = [\"list\", \"subscribe\"]\n    subscribe_event_types = [\"kv*\"]\n  }\n   ```\n\nSubscribing to KV events means that Proxy receives updates as soon as a secret\nchanges, which reduces staleness in the cache. Vault Proxy only checks for a\nsecret update if an event notification indicates that the related secret was\nupdated.\n\n## Step 2: Ensure tokens have `capabilities-self` access\n\nTokens require `update` access to the\n[`sys\/capabilities-self`](\/vault\/api-docs\/system\/capabilities-self) endpoint to\nrequest cached secrets. Vault tokens receive `update` permissions\n[by default](\/vault\/docs\/concepts\/policies#default-policy). If you have modified\nor removed the default policy, you must explicitly create a policy with the\nappropriate permissions. For example:\n```hcl\n  path \"sys\/capabilities-self\" {\n    capabilities = [\"update\"]\n  }\n```\n\n## Step 3: Configure an appropriate refresh interval\nBy default, Vault Proxy refreshes cached token capabilities every five minutes. You can change the\ndefault behavior and configure Proxy to verify and update cached token\ncapabilities with the `static_secret_token_capability_refresh_interval`\nparameter in the `cache` configuration stanza. 
For example, to set a refresh\ninterval of one minute:\n```hcl\ncache {\n  cache_static_secrets = true\n  static_secret_token_capability_refresh_interval = \"1m\"\n}\n```\n\n## Functionality\n\nWith static secret caching, Vault Proxy caches `GET` requests for KVv1 and KVv2\nendpoints.\n\nWhen a client sends a `GET` request for a new KV secret, Proxy forwards the\nrequest to Vault but caches the response before forwarding it to the client. If\nthat client makes subsequent `GET` requests for the same secret, Vault Proxy\nserves the cached response rather than forwarding the request to Vault.\n\n<Tip title=\"'Offline' Secret Access and CLI KV Get\">\n\n  Vault Proxy does not cache any non-KV API responses. While KV secrets can be retrieved even if\n  Vault is unavailable, other requests cannot be served. As a result, using the `vault kv`\n  CLI command, which sends a request to `\/sys\/internal\/ui\/mounts` before the KV `GET` request,\n  will require a real request to Vault and cannot be served entirely from the cache or\n  when Vault is unavailable (you can use `vault read` instead).\n\n<\/Tip>\n\nSimilarly, when a token requests access to a KV secret, it must complete a\nsuccessful `GET` request. If the request is successful, Proxy caches the fact that\nthe token was successful in addition to the result. Subsequent requests by the\nsame token can then access this secret from the cache instead of Vault.\n\nVault Proxy uses the [event notification system](\/vault\/docs\/concepts\/events) to keep the\ncache up to date. It monitors the KV event feed for events related to any secret\ncurrently stored in the cache, including modification events like updates and\ndeletes. 
When Proxy detects a change in a cached secret, it will update or\nevict the cache entry as appropriate.\n\nVault Proxy also checks and refreshes the access permissions of known tokens\naccording to the window set with `static_secret_token_capability_refresh_interval`.\nBy default, the refresh interval is five minutes.\n\nEvery interval, Proxy calls [`sys\/capabilities-self`](\/vault\/api-docs\/system\/capabilities-self) on\nbehalf of every token in the cache to confirm the token still has permission to\naccess the cached secret. If the result from Vault indicates that permission (or\nthe token itself) was revoked, Proxy updates the cache entry so that the affected\ntoken can no longer access the relevant paths from the cache. The refresh interval\nis essentially the maximum period after which permission to read a KV secret is\nfully revoked for the relevant token.\n\nIf the capabilities have been removed, or Proxy receives a `403` response, the\ncapability is removed from the token, and that token cannot be used to access the\ncache. For other kinds of errors, such as Vault being unreachable or sealed,\nthe `static_secret_token_capability_refresh_behavior` config is consulted.\nIf set to `optimistic` (the default), the capability will not be removed unless Proxy\nreceives a `403` or a valid response without the capability. 
If set to `pessimistic`,\nthe capability will be removed for any error, such as would occur if Vault is sealed.\n\nFor token refresh to work, any token that will access the cache also needs\n`update` permission for [`sys\/capabilities-self`](\/vault\/api-docs\/system\/capabilities-self).\nHaving `update` permission for the token lets Proxy test capabilities for the\ntoken against multiple paths with a single request instead of testing for a `403`\nresponse for each path explicitly.\n\n<Tip title=\"Refresh is per token, not per secret\">\n\n  If Proxy's API proxy is configured to use auto-authentication for tokens, and **all**\n  requests that pass through Vault Proxy use the same token, Proxy only\n  makes a single request to Vault every refresh interval, no matter how many\n  secrets are currently cached.\n\n<\/Tip>\n\nWhen static secret caching is enabled, Proxy returns `HIT` or `MISS` in the `X-Cache`\nresponse header for requests so clients can tell if the response was served from\nthe cache or forwarded from Vault. In the event of a hit, Proxy also sets the\n`Age` header to indicate, in seconds, how old the cache entry is.\n\n<Tip title=\"Old does not mean stale\">\n\n  The fact that a cache entry is old does not necessarily mean that the\n  information is out of date. Vault Proxy continually monitors KV events for\n  updates. A large value for `Age` may simply mean that the secret has not been\n  rotated recently.\n\n<\/Tip>\n\n## Configuration\n\nThe top-level `cache` block has the following configuration entries relating to static secret caching:\n\n- `cache_static_secrets` `(bool: false)` - Enables static secret caching when\nset to `true`. 
When `cache_static_secrets` and `auto_auth` are both enabled,\nVault Proxy serves KV secrets directly from the cache to clients with\nsufficient permission.\n\n- `static_secret_token_capability_refresh_interval` `(duration: \"5m\", optional)` -\nSets the interval as a [duration format string](\/vault\/docs\/concepts\/duration-format)\nat which Vault Proxy rechecks the permissions of tokens used to access cached\nsecrets. The refresh interval is the maximum period after which permission to\nread a cached KV secret is fully revoked. Ignored when `cache_static_secrets`\nis `false`.\n\n- `static_secret_token_capability_refresh_behavior` `(string: \"optimistic\", optional)` -\nSets the capability refresh behavior in the case of an error when attempting to\nrefresh capabilities. In the case of a `403`, capabilities will be removed for the token\nwith either option. In case of other errors, such as Vault being sealed or Vault being\nunavailable, this setting controls the behavior. If set to `optimistic` (the default),\ncapabilities will be removed for only `403` errors. If set to `pessimistic`, capabilities\nwill be removed for any error. This essentially allows configuring a preference between\nfavoring availability (`optimistic`) or access fidelity (`pessimistic`) of cached\nstatic secrets. 
Ignored when `cache_static_secrets` is `false`.\n\n### Example configuration\n\nThe following example Vault Proxy configuration:\n- Defines a TCP listener (`listener`) with TLS disabled.\n- Forces clients using API proxy (`api_proxy`) to identify with an auto-auth token.\n- Configures auto-authentication (`auto_auth`) for `approle`.\n- Enables static secret caching with `cache_static_secrets`.\n- Sets an explicit token capability refresh window of 1 hour with `static_secret_token_capability_refresh_interval`.\n\n```hcl\n# Other Vault Proxy configuration blocks\n# ...\n\ncache {\n  cache_static_secrets = true\n  static_secret_token_capability_refresh_interval = \"1h\"\n}\n\napi_proxy {\n  use_auto_auth_token = \"force\"\n}\n\nlistener \"tcp\" {\n  address = \"127.0.0.1:8100\"\n  tls_disable = true\n}\n\nauto_auth {\n  method {\n    type = \"approle\"\n    config = {\n      role_id_file_path = \"roleid\"\n      secret_id_file_path = \"secretid\"\n      remove_secret_id_file_after_reading = false\n    }\n  }\n}\n```\n[event-system]: \/vault\/docs\/concepts\/events","site":"vault","answers_cleaned":"    layout  docs page title  Improve Vault traffic resiliency description       Use static secret caching with Vault Proxy to cache key value data in Vault    handle updates  and reduce direct requests to Vault from clients         Improve Vault traffic resiliency with Vault Proxy   include  alerts enterprise only mdx   Use static secret caching with Vault Proxy to cache KVv1 and KVv2 secrets to minimize requests made to Vault and provide resilient connections for clients   Vault Proxy utilizes the Enterprise only  Vault event notification system   vault docs concepts events  feature for cache freshness  As a result  static secret caching can only be used with Vault Enterprise installations   When using a Vault cluster with performance standbys  Proxy may receive secret update events before the secret update has been fully replicated  To make sure that Proxy can get
updated secret values after receiving an event notification  Proxy must be configured to point to the address of the active node in its  Vault stanza   vault docs agent and proxy proxy vault stanza   or  allow forwarding via header must be set to true   vault docs configuration replication allow forwarding via header  on the cluster  When  allow forwarding via header  is configured  Proxy will only forward requests to update a secret in its cache after receiving an event indicating that secret got updated  This approach would be recommended if access to Vault was behind  for example  a load balancer      Step 1  Subscribe Vault Proxy to KV events  Vault Proxy uses Vault events and auto auth to monitor secret status and make appropriate cache updates  1  Enable  auto auth   vault docs agent and proxy autoauth   1  Create an auto auth token with permission to subscribe to KV event updates with the  Vault event notification system   vault docs concepts events   For example  to create a policy that grants access to static secret  KVv1 and KVv2  events  you need permission to subscribe to the  events  endpoint  as well as the  list  and  subscribe  permissions on KV secrets you want to get secrets from      hcl   path  sys events subscribe kv         capabilities     read          path           capabilities     list    subscribe       subscribe event types     kv                Subscribing to KV events means that Proxy receives updates as soon as a secret changes  which reduces staleness in the cache  Vault Proxy only checks for a secret update if an event notification indicates that the related secret was updated      Step 2  Ensure tokens have  capabilities self  access  Tokens require  update  access to the   sys capabilies self    vault api docs system capabilities self  endpoint to request cached secrets  Vault tokens receive  update  permissions  by default   vault docs concepts policies default policy   If you have modified or removed the default policy  you 
must explicitly create a policy with the appropriate permissions  For example     hcl   path  sys capabilities self        capabilities     update               Step 3  Configure an appropriate refresh interval By default  Vault Proxy refreshes tokens every five minutes  You can change the default behavior and configure Proxy to verify and update cached token capabilities with the  static secret token capability refresh interval  parameter in the  cache  configuration stanza  For example  to set a refresh interval of one minute     hcl cache     cache static secrets   true   static secret token capability refresh interval    1m            Functionality  With static secret caching  Vault Proxy caches  GET  requests for KVv1 and KVv2 endpoints   When a client sends a  GET  request for a new KV secret  Proxy forwards the request to Vault but caches the response before forwarding it to the client  If that client makes subsequent  GET  requests for the same secret  Vault Proxy serves the cached response rather than forwarding the request to Vault    Tip title   Offline  Secret Access and CLI KV Get      Vault Proxy does not cache any non KV API responses  While KV secrets can be retrieved even if   Vault is unavailable  other requests cannot be served  As a result  using the  vault kv    CLI command  which sends a request to   sys internal ui mounts  before the KV  GET  request    will require a real request to Vault and cannot be served entirely from the cache or   when Vault is unavailable  you can use  vault read  instead      Tip   Similarly  when a token requests access to a KV secret  it must complete a success  GET  request  If the request is successful  Proxy caches the fact that the token was successful in addition to the result  Subsequent requests by the same token can then access this secret from the cache instead of Vault   Vault Proxy uses the  event notification system   vault docs concepts events  to keep the cache up to date  It monitors the KV event 
feed for events related to any secret currently stored in the cache  including modification events like updates and deletes  When Proxy detects a change in a cached secret  it will update or evict the cache entry as appropriate   Vault Proxy also checks and refreshes the access permissions of known tokens according to the window set with  static secret token capability refresh interval   By default  the refresh interval is five minutes   Every interval  Proxy calls   sys capabilies self    vault api docs system capabilities self  on behalf of every token in the cache to confirm the token still has permission to access the cached secret  If the result from Vault indicates that permission  or the token itself  was revoked  Proxy updates the cache entry so that the affected token can no longer access the relevant paths from the cache  The refresh interval is essentially the maximum period after which permission to read a KV secret is fully revoked for the relevant token   If the capabilities have been removed  or Proxy receives a  403  response  the capability is removed from the token  and that token cannot be used to access the cache  For other kinds of errors  such as Vault being unreachable or sealed  the  static secret token capability refresh behavior  config is consulted  If set to  optimistic   the default   the capability will not be removed unless we receive a  403  or valid response without the capability  If set to  pessimistic   the capability will be removed for any error  such as would occur if Vault is sealed   For token refresh to work  any token that will access the cache also needs  update  permission for   sys capabilies self    vault api docs system capabilities self   Having  update  permission for the token lets Proxy test capabilities for the token against multiple paths with a single request instead of testing for a  403  response for each path explicitly    Tip title  Refresh is per token  not per secret      If Proxy s API proxy is 
configured to use auto authentication for tokens  and   all     requests that pass through Vault Proxy use the same token  Proxy only   makes a single request to Vault every refresh interval  no matter how many   secrets are currently cached     Tip   When static secret caching is enabled  Proxy returns  HIT  or  MISS  in the  X Cache  response header for requests so client can tell if the response was served from the cache or forwarded from Vault  In the event of a hit  Proxy also sets the  Age  header to indicate  in seconds  how old the cache entry is    Tip title  Old does not mean stale      The fact that a cache entry is old  does not necessarily mean that the   information is out of date  Vault Proxy continually monitors KV events for   updates  A large value for  Age  may simply mean that the secret has not been   rotated recently     Tip      Configuration  The top level  cache  block has the following configuration entries relating to static secret caching      cache static secrets    bool  false     Enables static secret caching when set to  true   When  cache static secrets  and  auto auth  are both enabled  Vault Proxy serves KV secrets directly from the cache to clients with sufficient permission      static secret token capability refresh interval    duration   5m   optional     Sets the interval as a  duration format string   vault docs concepts duration format  at which Vault Proxy rechecks the permissions of tokens used to access cached secrets  The refresh interval is the maximum period after which permission to read a cached KV secret is fully revoked  Ignored when  cache static secrets  is  false       static secret token capability refresh behavior    string   optimistic   optional     Sets the capability refresh behavior in the case of an error when attempting to refresh capabilities  In the case of a  403   capabilities will be removed for the token with either option  In case of other errors  such as Vault being sealed or Vault being 
unavailable  this setting controls the behavior  If set to  optimistic   the default   capabilities will be removed for only  403  errors  If set to  pessimistic   capabilities will be removed for any error  This essentially allows configuring a preference between favoring availability   optimistic   or access fidelity   pessimistic   of cached static secrets  Ignored when  cache static secrets  is  false        Example configuration  The following example Vault Proxy configuration    Defines a TCP listener   listener   with TLS disabled    Forces clients using API proxy   api proxy   to identify with an auto auth token    Configures auto authentication   auto auth   for  approle     Enables static secret caching with  cache static secrets     Sets an explicit token capability refresh window of 1 hour with  static secret token capability refresh interval       hcl   Other Vault Proxy configuration blocks        cache     cache static secrets   true   static secret token capability refresh interval    1h     api proxy     use auto auth token    force     listener  tcp        address    127 0 0 1 8100      tls disable   true    auto auth     method       type    approle      config           role id file path    roleid        secret id file path    secretid        remove secret id file after reading   false                event system    vault docs concepts events"}
{"questions":"vault Vault Agent can be run as a Windows service In order to do this you need to register Vault Agent with the Windows page title Run Vault Agent as a Windows service Register Vault Agent with sc exe and run Agent as a Windows service layout docs Run Vault Agent as a Windows service","answers":"---\nlayout: docs\npage_title: Run Vault Agent as a Windows service\ndescription: >-\n  Register Vault Agent with sc.exe and run Agent as a Windows service.\n---\n\n# Run Vault Agent as a Windows service\n\nVault Agent can be run as a Windows service. In order to do this, you need to register Vault Agent with the Windows\nService Control Manager. After Vault Agent is registered, it can be started like any other Windows\nservice.\n\nWhile this guide focuses on an example for Vault Agent, this example can be easily adapted to work for\n[Vault Proxy](\/vault\/docs\/agent-and-proxy\/proxy) by changing the config and subcommand\ngiven to `vault.exe` as appropriate.\n\n~> Note: The commands on this page should be run in a PowerShell session with Administrator capabilities.\n\n~> Note: When specifying Windows file paths in config files, they should be formatted like this: `C:\/foo\/bar\/file.txt`\ninstead of using backslashes.\n\n## Register Vault Agent as a Windows service\n\nThere are multiple ways to register Vault Agent as a Windows service. One way is to use\n[`sc.exe`](https:\/\/docs.microsoft.com\/en-us\/windows-server\/administration\/windows-commands\/sc-create). `sc.exe` works\nbest if the path to your Vault binary and its associated agent config file do not contain spaces. `sc.exe` can be\npretty tricky to get working correctly if your path contains spaces, as paths containing spaces must be quoted,\nand escaping quotes correctly in a way that makes `sc.exe` happy is non-trivial. 
If your path contains spaces, or you prefer not to use `sc.exe`, another\nalternative is to use the\n[`New-Service`](https:\/\/docs.microsoft.com\/en-us\/powershell\/module\/microsoft.powershell.management\/new-service?view=powershell-5.1)\ncmdlet. `New-Service` is less picky about the method used to escape quotes, and can sometimes be easier. Examples of\nboth will be shown below.\n\n### Using sc.exe\n\n~> **Important Note:** Ensure the executable path of the service is quoted, especially when it contains spaces, to avoid\npotential privilege escalation risks.\n\nIf you use `sc.exe`, make sure you specify `sc.exe` explicitly, and not just `sc`. The command below shows the creation\nof Vault Agent as a service, using \"Vault Agent\" as the display name, and starting automatically when Windows starts.\nThe `binPath` argument should include the fully qualified path to the Vault executable, as well as any arguments required.\n\n```shell-session\nPS C:\\Windows\\system32> sc.exe create VaultAgent binPath=\"C:\\vault\\vault.exe agent -config=C:\\vault\\agent-config.hcl\" displayName=\"Vault Agent\" start=auto\n[SC] CreateService SUCCESS\n```\n\nNote that the spacing after the `=` in all of the arguments is intentional and required.\n\nIf you receive a success message, your service is registered with the service manager.\n\nIf you get an error, please verify the path to the binary and check the arguments, by running the contents of\n`binPath=` directly in a PowerShell session and observing the results.\n\n### Using New-Service\n\nThe syntax is slightly different for `New-Service`, but the gist is the same. 
The invocation below is equivalent to the\n`sc.exe` one above.\n\n```shell-session\nPS C:\\Windows\\system32> New-Service -Name \"VaultAgent\" -BinaryPathName \"C:\\vault\\vault.exe agent -config=C:\\vault\\agent-config.hcl\" -DisplayName \"Vault Agent\" -StartupType \"Automatic\"\n\nStatus   Name               DisplayName\n------   ----               -----------\nStopped  VaultAgent         Vault Agent\n```\n\nAs mentioned previously, `New-Service` is easier to use if the path to your Vault executable and\/or agent config contains spaces.\nBelow is an example of how to configure Vault Agent as a service using a path with spaces.\n\n```shell-session\nPS C:\\Windows\\system32> New-Service -Name \"VaultAgent\" -BinaryPathName '\"C:\\my dir\\vault.exe\" agent -config=\"C:\\my dir\\agent-config.hcl\"' -DisplayName \"Vault Agent\" -StartupType \"Automatic\"\n\nStatus   Name               DisplayName\n------   ----               -----------\nStopped  VaultAgent         Vault Agent\n```\n\nNote that only the paths themselves are double quoted, and the entire `BinaryPathName` is wrapped in single quotes, in order\nto escape the double quotes used for the paths.\n\nIf anything goes wrong during this process, and you need to manually edit the path later, use the Registry Editor to find\nthe following key: `HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\VaultAgent`. You can edit the `ImagePath` value\nat that key to the correct path.\n\n## Start the Vault Agent service\n\nThere are multiple ways to start the service.\n\n- Using the `sc.exe` command.\n- Using the `Start-Service` cmdlet.\n- Go to the Windows Service Manager, and look for **VaultAgent** in the service name column. 
Click the\n`Start` button to start the service.\n\n### Example starting Vault Agent using `sc.exe`\n\n```shell-session\nPS C:\\Windows\\system32> sc.exe start VaultAgent\n\nSERVICE_NAME: VaultAgent\n     TYPE               : 10  WIN32_OWN_PROCESS\n     STATE              : 4  RUNNING\n                             (STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN)\n     WIN32_EXIT_CODE    : 0  (0x0)\n     SERVICE_EXIT_CODE  : 0  (0x0)\n     CHECKPOINT         : 0x0\n     WAIT_HINT          : 0x0\n     PID                : 6548\n     FLAGS              :\n```\n\n### Example starting Vault Agent using `Start-Service`\n\n```shell-session\nPS C:\\Windows\\system32> Start-Service -Name \"VaultAgent\"\n```\n\nNote that in the case where the service was started successfully, `Start-Service` does not return any output.","site":"vault","answers_cleaned":"    layout  docs page title  Run Vault Agent as a Windows service description       Register Vault Agent with sc exe and run Agent as a Windows service         Run Vault Agent as a Windows service  Vault Agent can be run as a Windows service  In order to do this  you need to register Vault Agent with the Windows Service Control Manager  After Vault Agent is registered  it can be started like any other Windows service   While this guide focuses on an example for Vault Agent  this example can be easily adapted to work for  Vault Proxy   vault docs agent and proxy proxy  by changing the config and subcommand given to  vault exe  as appropriate      Note  The commands on this page should be run in a PowerShell session with Administrator capabilities      Note  When specifying Windows file paths in config files  they should be formatted like this   C  foo bar file txt  instead of using backslashes      Register Vault Agent as a Windows service  There are multiple ways to register Vault Agent as a Windows service  One way is to use   sc exe   https   docs microsoft com en us windows server administration windows commands sc create    sc 
exe  works best if the path to your Vault binary and its associated agent config file do not contain spaces   sc exe  can be pretty tricky to get working correctly if your path contains spaces  as paths containing spaces must be quoted  and escaping quotes correctly in a way that makes  sc exe  happy is non trivial  If your path contains spaces  or you prefer not to use  sc exe   another alternative is to use the   New Service   https   docs microsoft com en us powershell module microsoft powershell management new service view powershell 5 1  cmdlet   New Service  is less picky about the method used to escape quotes  and can sometimes be easier  Examples of both will be shown below       Using sc exe       Important Note    Ensure the executable path of the service is quoted  especially when it contains spaces  to avoid potential privilege escalation risks   If you use  sc exe   make sure you specify  sc exe  explicitly  and not just  sc   The command below shows the creation of Vault Agent as a service  using  Vault Agent  as the display name  and starting automatically when Windows starts  The  binPath  argument should include the fully qualified path to the Vault executable  as well as any arguments required      shell session PS C  Windows system32  sc exe create VaultAgent binPath  C  vault vault exe agent  config C  vault agent config hcl  displayName  Vault Agent  start auto  SC  CreateService SUCCESS      Note that the spacing after the     in all of the arguments is intentional and required   If you receive a success message  your service is registered with the service manager   If you get an error  please verify the path to the binary and check the arguments  by running the contents of  binPath   directly in a PowerShell session and observing the results       Using New Service  The syntax is slightly different for  New Service   but the gist is the same  The invocation below is equivalent to the  sc exe  one above      shell session PS C  Windows 
system32  New Service  Name  VaultAgent   BinaryPathName  C  vault vault exe agent  config C  vault agent config hcl   DisplayName  Vault Agent   StartupType  Automatic   Status   Name               DisplayName                                         Stopped  VaultAgent         Vault Agent      As mentioned previously   New Service  is easier to use if the path to your Vault executable and or agent config contains spaces  Below is an example of how to configure Vault Agent as a service using a path with spaces      shell session PS C  Windows system32  New Service  Name  VaultAgent   BinaryPathName   C  my dir vault exe  agent  config  C  my dir agent config hcl    DisplayName  Vault Agent   StartupType  Automatic   Status   Name               DisplayName                                         Stopped  VaultAgent         Vault Agent      Note that only the paths themselves are double quoted  and the entire  BinaryPathName  is wrapped in single quotes  in order to escape the double quotes used for the paths   If anything goes wrong during this process  and you need to manually edit the path later  use the Registry Editor to find the following key   HKEY LOCAL MACHINE SYSTEM CurrentControlSet Services VaultAgent   You can edit the  ImagePath  value at that key to the correct path      Start the Vault Agent service  There are multiple ways to start the service     Using the  sc exe  command    Using the  Start Service  cmdlet    Go to the Windows Service Manager  and look for   VaultAgent   in the service name column  Click the  Start  button to start the service       Example starting Vault Agent using  sc exe      shell session PS C  Windows system32  sc exe start VaultAgent  SERVICE NAME  VaultAgent      TYPE                 10  WIN32 OWN PROCESS      STATE                4  RUNNING                               STOPPABLE  NOT PAUSABLE  ACCEPTS SHUTDOWN       WIN32 EXIT CODE      0   0x0       SERVICE EXIT CODE    0   0x0       CHECKPOINT           0x0      WAIT 
HINT            0x0      PID                  6548      FLAGS                         Example starting Vault Agent using  Start Service      shell session PS C  Windows system32  Start Service  Name  VaultAgent       Note that in the case where the service was started successfully   New Service  does not return any output "}
{"questions":"vault Vault Agent is a client side daemon that securely extracts secrets from Vault page title What is Vault Agent for clients without the complexity of API calls layout docs What is Vault Agent","answers":"---\nlayout: docs\npage_title: What is Vault Agent?\ndescription: >-\n  Vault Agent is a client-side daemon that securely extracts secrets from Vault\n  for clients without the complexity of API calls.\n---\n\n# What is Vault Agent?\n\nVault Agent aims to remove the initial hurdle to adopt Vault by providing a\nmore scalable and simpler way for applications to integrate with Vault, by\nproviding the ability to render [templates][template] containing the secrets\nrequired by your application, without requiring changes to your application.\n\n![Vault Agent workflow](\/img\/vault-agent-workflow.png)\n\nVault Agent is a client daemon that provides the following features:\n\n- [Auto-Auth][autoauth] - Automatically authenticate to Vault and manage the\n  token renewal process for locally-retrieved dynamic secrets.\n- [API Proxy][apiproxy] - Allows Vault Agent to act as a proxy for Vault's API,\n  optionally using (or forcing the use of) the Auto-Auth token.\n- [Caching][caching] - Allows client-side caching of responses containing newly\n  created tokens and responses containing leased secrets generated off of these\n  newly created tokens. The agent also manages the renewals of the cached tokens and leases.\n- [Windows Service][winsvc] - Allows running the Vault Agent as a Windows\n  service.\n- [Templating][template] - Allows rendering of user-supplied templates by Vault\n  Agent, using the token generated by the Auto-Auth step.\n- [Process Supervisor Mode][process-supervisor] - Runs a child process with Vault\n  secrets injected as environment variables.\n\n## Auto-Auth\n\nVault Agent allows easy authentication to Vault in a wide variety of\nenvironments. 
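\n\nAs an illustrative sketch of what this looks like in practice (the AppRole method and the file paths here are placeholder assumptions), an `auto_auth` stanza pairs an authentication method with one or more token sinks:\n\n```hcl\nauto_auth {\n  # Authenticate with the AppRole method using locally stored credentials.\n  method \"approle\" {\n    config = {\n      role_id_file_path   = \"\/etc\/vault\/role-id\"\n      secret_id_file_path = \"\/etc\/vault\/secret-id\"\n    }\n  }\n\n  # Write the resulting Vault token to a sink file for other processes to use.\n  sink \"file\" {\n    config = {\n      path = \"\/etc\/vault\/.vault-token\"\n    }\n  }\n}\n```\n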
Please see the [Auto-Auth docs][autoauth]\nfor information.\n\nAuto-Auth functionality takes place within an `auto_auth` configuration stanza.\n\n## API proxy\n\nVault Agent can act as an API proxy for Vault, allowing you to talk to Vault's\nAPI via a listener defined for Agent. It can be configured to optionally allow or force the automatic use of\nthe Auto-Auth token for these requests. Please see the [API Proxy docs][apiproxy]\nfor more information.\n\nAPI Proxy functionality takes place within a defined `listener`, and its behaviour can be configured with an\n[`api_proxy` stanza](\/vault\/docs\/agent-and-proxy\/agent\/apiproxy#configuration-api_proxy).\n\n## Caching\n\nVault Agent allows client-side caching of responses containing newly created tokens\nand responses containing leased secrets generated off of these newly created tokens.\nPlease see the [Caching docs][caching] for information.\n\n## API\n\n### Quit\n\nThis endpoint triggers shutdown of the agent. By default, it is disabled, and can\nbe enabled per listener using the [`agent_api`][agent-api] stanza. It is recommended\nto only enable this on trusted interfaces, as it does not require any authorization to use.\n\n| Method | Path             |\n| :----- | :--------------- |\n| `POST` | `\/agent\/v1\/quit` |\n\n### Cache\n\nSee the [caching](\/vault\/docs\/agent-and-proxy\/agent\/caching#api) page for details on the cache API.\n\n## Configuration\n\n### Command options\n\n- `-log-level` ((#\\_log_level)) `(string: \"info\")` - Log verbosity level. Supported values (in\n  order of descending detail) are `trace`, `debug`, `info`, `warn`, and `error`. This can\n  also be specified via the `VAULT_LOG_LEVEL` environment variable.\n\n- `-log-format` ((#\\_log_format)) `(string: \"standard\")` - Log format. Supported values\n  are `standard` and `json`. 
This can also be specified via the\n  `VAULT_LOG_FORMAT` environment variable.\n\n- `-log-file` ((#\\_log_file)) - the absolute path where Vault Agent should save\n  log messages. Paths that end with a path separator use the default file name,\n  `agent.log`. Paths that do not end with a file extension use the default\n  `.log` extension. If the log file rotates, Vault Agent appends the current\n  timestamp to the file name at the time of rotation. For example:\n\n  `log-file` | Full log file | Rotated log file\n  ---------- | ------------- | ----------------\n  `\/var\/log` | `\/var\/log\/agent.log` | `\/var\/log\/agent-{timestamp}.log`\n  `\/var\/log\/my-diary` | `\/var\/log\/my-diary.log` | `\/var\/log\/my-diary-{timestamp}.log`\n  `\/var\/log\/my-diary.txt` | `\/var\/log\/my-diary.txt` | `\/var\/log\/my-diary-{timestamp}.txt`\n\n- `-log-rotate-bytes` ((#\\_log_rotate_bytes)) - to specify the number of\n  bytes that should be written to a log before it needs to be rotated. Unless specified,\n  there is no limit to the number of bytes that can be written to a log file.\n\n- `-log-rotate-duration` ((#\\_log_rotate_duration)) - to specify the maximum\n  duration a log should be written to before it needs to be rotated. Must be a duration\n  value such as 30s. Defaults to 24h.\n\n- `-log-rotate-max-files` ((#\\_log_rotate_max_files)) - to specify the maximum\n  number of older log file archives to keep. 
  Defaults to `0` (no files are ever deleted). Set to `-1` to discard old log
  files when a new one is created.

### Configuration file options

These are the currently-available general configuration options:

- `vault` <code>([vault][vault]: <optional\>)</code> - Specifies the remote Vault server the Agent connects to.

- `auto_auth` <code>([auto_auth][autoauth]: <optional\>)</code> - Specifies the method and other options used for Auto-Auth functionality.

- `api_proxy` <code>([api_proxy][apiproxy]: <optional\>)</code> - Specifies options used for API Proxy functionality.

- `cache` <code>([cache][caching]: <optional\>)</code> - Specifies options used for Caching functionality.

- `listener` <code>([listener][listener]: <optional\>)</code> - Specifies the addresses and ports on which the Agent will respond to requests.

  ~> **Note:** On `SIGHUP` (`kill -SIGHUP $(pidof vault)`), Vault Agent will attempt to reload listener TLS configuration.
  This method can be used to refresh certificates used by Vault Agent without having to restart its process.

- `pid_file` `(string: "")` - Path to the file in which the agent's Process ID
  (PID) should be stored.

- `exit_after_auth` `(bool: false)` - If set to `true`, the agent will exit
  with code `0` after a single successful auth, where success means that a
  token was retrieved and all sinks successfully wrote it. If you have
  `template` stanzas defined in your agent configuration, the agent
  waits for the configured templates to render successfully before
  exiting. If you use environment templates (`env_template`) and set
  `exit_after_auth` to `true`, Vault Agent will not run the child processes
  defined in your `exec` stanza.

- `disable_idle_connections` `(string array: [])` - A list of strings that disables idle connections for various features in Vault Agent.
  Valid values include: `auto-auth`, `caching`, `proxying`, and `templating`.
  `proxying` configures this for the API proxy, which is
  identical in function to `caching` for historical reasons. Can also be configured by setting the `VAULT_AGENT_DISABLE_IDLE_CONNECTIONS`
  environment variable as a comma-separated string. This environment variable will override any values found in a configuration file.

- `disable_keep_alives` `(string array: [])` - A list of strings that disables keep-alives for various features in Vault Agent.
  Valid values include: `auto-auth`, `caching`, `proxying`, and `templating`. `proxying` configures this for the API proxy, which is
  identical in function to `caching` for historical reasons. Can also be configured by setting the `VAULT_AGENT_DISABLE_KEEP_ALIVES`
  environment variable as a comma-separated string. This environment variable will override any values found in a configuration file.

- `template` <code>([template][template]: <optional\>)</code> - Specifies options used for templating Vault secrets to files.

- `template_config` <code>([template_config][template-config]: <optional\>)</code> - Specifies templating engine behavior.

- `exec` <code>([exec][process-supervisor]: <optional\>)</code> - Specifies options for Vault Agent to run a child process
  that injects secrets (via `env_template` stanzas) as environment variables.

- `env_template` <code>([env_template][template]: <optional\>)</code> - Multiple blocks accepted. Each block contains
  the options used for templating Vault secrets as environment variables via the
  [process supervisor mode](/vault/docs/agent-and-proxy/agent/process-supervisor).

- `telemetry` <code>([telemetry][telemetry]: <optional\>)</code> - Specifies the telemetry
  reporting system.
  See the [telemetry stanza](/vault/docs/agent-and-proxy/agent#telemetry-stanza) section below
  for a list of metrics specific to Agent.

- `log_level` - Equivalent to the [`-log-level` command-line flag](#_log_level).

  ~> **Note:** On `SIGHUP` (`kill -SIGHUP $(pidof vault)`), Vault Agent will update the log level to the value
  specified by the configuration file (including overriding values set using CLI or environment variable parameters).

- `log_format` - Equivalent to the [`-log-format` command-line flag](#_log_format).

- `log_file` - Equivalent to the [`-log-file` command-line flag](#_log_file).

- `log_rotate_duration` - Equivalent to the [`-log-rotate-duration` command-line flag](#_log_rotate_duration).

- `log_rotate_bytes` - Equivalent to the [`-log-rotate-bytes` command-line flag](#_log_rotate_bytes).

- `log_rotate_max_files` - Equivalent to the [`-log-rotate-max-files` command-line flag](#_log_rotate_max_files).

### vault stanza

There can be at most one top-level `vault` block, and it has the following
configuration entries:

- `address` `(string: <optional>)` - The address of the Vault server to
  connect to. This should be a Fully Qualified Domain Name (FQDN) or IP
  address, such as `https://vault-fqdn:8200` or `https://172.16.9.8:8200`.
  This value can be overridden by setting the `VAULT_ADDR` environment variable.

- `ca_cert` `(string: <optional>)` - Path on the local disk to a single PEM-encoded
  CA certificate to verify the Vault server's SSL certificate.
  This value can
  be overridden by setting the `VAULT_CACERT` environment variable.

- `ca_path` `(string: <optional>)` - Path on the local disk to a directory of
  PEM-encoded CA certificates to verify the Vault server's SSL certificate.
  This value can be overridden by setting the `VAULT_CAPATH` environment
  variable.

- `client_cert` `(string: <optional>)` - Path on the local disk to a single
  PEM-encoded client certificate to use for TLS authentication to the Vault server.
  This value can be overridden by setting the `VAULT_CLIENT_CERT` environment
  variable.

- `client_key` `(string: <optional>)` - Path on the local disk to a single
  PEM-encoded private key matching the client certificate from `client_cert`.
  This value can be overridden by setting the `VAULT_CLIENT_KEY` environment
  variable.

- `tls_skip_verify` `(string: <optional>)` - Disable verification of TLS
  certificates. Using this option is highly discouraged, as it decreases the
  security of data transmissions to and from the Vault server. This value can
  be overridden by setting the `VAULT_SKIP_VERIFY` environment variable.

- `tls_server_name` `(string: <optional>)` - Name to use as the SNI host when
  connecting via TLS. This value can be overridden by setting the
  `VAULT_TLS_SERVER_NAME` environment variable.

- `namespace` `(string: <optional>)` - Namespace to use for all of Vault Agent's
  requests to Vault.
  This can also be specified by command line or environment variable.
  The order of precedence is: this setting lowest, followed by the environment variable
  `VAULT_NAMESPACE`, and then the highest-precedence command-line option `-namespace`.
  If none of these are specified, it defaults to the root namespace.

#### retry stanza

The `vault` stanza may contain a `retry` stanza that controls how failing Vault
requests are handled, whether these requests are issued in order to render
templates or are proxied requests coming from the API proxy subsystem.
Auto-Auth, however, has its own notion of retrying and is not affected by this
section.

For requests from the templating engine, Vault Agent will reset its retry counter and
perform retries again once all retries are exhausted. This means that templating
will retry on failures indefinitely unless `exit_on_retry_failure` from the
[`template_config`][template-config] stanza is set to `true`.

Here are the options for the `retry` stanza:

- `num_retries` `(int: 12)` - Specify how many times a failing request will
  be retried. A value of `0` translates to the default, i.e. 12 retries.
  A value of `-1` disables retries. The environment variable `VAULT_MAX_RETRIES`
  overrides this setting.

There are a few subtleties to be aware of here. First, requests originating
from the proxy cache will only be retried if they resulted in specific HTTP
result codes: any 50x code except 501 ("not implemented"), as well as 412
("precondition failed"); 412 is used in Vault Enterprise 1.7+ to indicate a
stale read due to eventual consistency. Requests coming from the template
subsystem are retried regardless of the failure.

Second, templating retries may be performed by both the templating engine _and_
the cache proxy if the Vault Agent [persistent
cache][persistent-cache] is enabled.
This is due to the
fact that templating requests go through the cache proxy when persistence is
enabled.

Third, the backoff algorithm used to set the time between retries differs for
the template and cache subsystems. This is a technical limitation we hope
to address in the future.

### listener stanza

Vault Agent supports one or more [listener][listener_main] stanzas. Listeners
can be configured with or without [caching][caching], but will use the cache if it
has been configured, and will enable the [API proxy][apiproxy]. In addition to the standard
listener configuration, an Agent's listener configuration also supports the following:

- `require_request_header` `(bool: false)` - Require that all incoming HTTP
  requests on this listener have an `X-Vault-Request: true` header entry.
  Using this option offers an additional layer of protection from Server Side
  Request Forgery attacks. Requests on the listener that do not have the proper
  `X-Vault-Request` header will fail, with an HTTP response status code of `412: Precondition Failed`.

- `role` `(string: default)` - `role` determines which APIs the listener serves.
  It can be configured to `metrics_only` to serve only metrics, or the default role, `default`,
  which serves everything (including metrics).
  The `require_request_header` option does not apply
  to `metrics_only` listeners.

- `agent_api` <code>([agent_api][agent-api]: <optional\>)</code> - Manages optional Agent API endpoints.

#### agent_api stanza

- `enable_quit` `(bool: false)` - If set to `true`, the agent will enable the [quit](/vault/docs/agent-and-proxy/agent#quit) API.

### telemetry stanza

Vault Agent supports the [telemetry][telemetry] stanza and collects various
runtime metrics about its performance, the auto-auth, and the cache status:

| Metric                           | Description                                                              | Type    |
| -------------------------------- | ------------------------------------------------------------------------ | ------- |
| `vault.agent.authenticated`      | Current authentication status (1 - has valid token, 0 - no valid token)  | gauge   |
| `vault.agent.auth.failure`       | Number of authentication failures                                        | counter |
| `vault.agent.auth.success`       | Number of authentication successes                                       | counter |
| `vault.agent.proxy.success`      | Number of requests successfully proxied                                  | counter |
| `vault.agent.proxy.client_error` | Number of requests for which Vault returned an error                     | counter |
| `vault.agent.proxy.error`        | Number of requests the agent failed to proxy                             | counter |
| `vault.agent.cache.hit`          | Number of cache hits                                                     | counter |
| `vault.agent.cache.miss`         | Number of cache misses                                                   | counter |

### IMPORTANT: `VAULT_ADDR` usage

If you export the `VAULT_ADDR` environment variable on the Vault Agent instance, that value takes precedence over the value in the configuration file.
The Vault Agent uses that to connect to Vault, and this can create an infinite loop where the value of `VAULT_ADDR` is used to make a connection, and the Vault Agent ends up trying to connect to itself instead of the server.

When the connection fails, the Vault Agent increments the port and tries again. The agent repeats these attempts, which leads to port exhaustion.

This problem is a result of the precedence order of the three different ways to configure the Vault address. They are, in increasing order of priority:

1. Configuration files
1. Environment variables
1. CLI flags

## Start Vault Agent

To run Vault Agent:

1. [Download](/vault/downloads) the Vault binary where the client application runs
   (virtual machine, Kubernetes pod, etc.)

1. Create a Vault Agent configuration file. (See the [Example
   Configuration](#example-configuration) section for an example configuration.)

1. Start a Vault Agent with the configuration file.

   **Example:**

   ```shell-session
   $ vault agent -config=/etc/vault/agent-config.hcl
   ```

   To get help, run:

   ```shell-session
   $ vault agent -h
   ```

As with Vault, the `-config` flag can be used in three different ways:

- Use the flag once to name the path to a single specific configuration file.
- Use the flag multiple times to name multiple configuration files, which will be composed at runtime.
- Use the flag to name a directory of configuration files, the contents of which will be composed at runtime.

## Example configuration

An example configuration, with very contrived values, follows:

```hcl
pid_file = "./pidfile"

vault {
  address = "https://vault-fqdn:8200"
  retry {
    num_retries = 5
  }
}

auto_auth {
  method "aws" {
    mount_path = "auth/aws-subaccount"
    config = {
      type = "iam"
      role = "foobar"
    }
  }

  sink "file" {
    config = {
      path = "/tmp/file-foo"
    }
  }

  sink "file" {
    wrap_ttl = "5m"
    aad_env_var = "TEST_AAD_ENV"
    dh_type = "curve25519"
    dh_path = "/tmp/file-foo-dhpath2"
    config = {
      path = "/tmp/file-bar"
    }
  }
}

cache {
  // An empty cache stanza still enables caching
}

api_proxy {
  use_auto_auth_token = true
}

listener "unix" {
  address = "/path/to/socket"
  tls_disable = true

  agent_api {
    enable_quit = true
  }
}

listener "tcp" {
  address = "127.0.0.1:8100"
  tls_disable = true
}

template {
  source = "/etc/vault/server.key.ctmpl"
  destination = "/etc/vault/server.key"
}

template {
  source = "/etc/vault/server.crt.ctmpl"
  destination = "/etc/vault/server.crt"
}
```

[vault]: /vault/docs/agent-and-proxy/agent#vault-stanza
[autoauth]: /vault/docs/agent-and-proxy/autoauth
[caching]: /vault/docs/agent-and-proxy/agent/caching
[apiproxy]: /vault/docs/agent-and-proxy/agent/apiproxy
[persistent-cache]: /vault/docs/agent-and-proxy/agent/caching/persistent-caches
[template]: /vault/docs/agent-and-proxy/agent/template
[process-supervisor]: /vault/docs/agent-and-proxy/agent/process-supervisor
[template-config]: /vault/docs/agent-and-proxy/agent/template#template-configurations
[agent-api]: /vault/docs/agent-and-proxy/agent/#agent_api-stanza
[listener]: /vault/docs/agent-and-proxy/agent#listener-stanza
[listener_main]: /vault/docs/configuration/listener/tcp
[winsvc]: /vault/docs/agent-and-proxy/agent/winsvc
[telemetry]: /vault/docs/configuration/telemetry
applications to integrate with Vault  by providing the ability to render  templates  template  containing the secrets required by your application  without requiring changes to your application     Vault Agent workflow   img vault agent workflow png   Vault Agent is a client daemon that provides the following features      Auto Auth  autoauth    Automatically authenticate to Vault and manage the   token renewal process for locally retrieved dynamic secrets     API Proxy  apiproxy    Allows Vault Agent to act as a proxy for Vault s API    optionally using  or forcing the use of  the Auto Auth token     Caching  caching    Allows client side caching of responses containing newly   created tokens and responses containing leased secrets generated off of these   newly created tokens  The agent also manages the renewals of the cached tokens and leases     Windows Service  winsvc    Allows running the Vault Agent as a Windows   service     Templating  template    Allows rendering of user supplied templates by Vault   Agent  using the token generated by the Auto Auth step     Process Supervisor Mode  process supervisor    Runs a child process with Vault   secrets injected as environment variables      Auto Auth  Vault Agent allows easy authentication to Vault in a wide variety of environments  Please see the  Auto Auth docs  autoauth  for information   Auto Auth functionality takes place within an  auto auth  configuration stanza      API proxy  Vault Agent can act as an API proxy for Vault  allowing you to talk to Vault s API via a listener defined for Agent  It can be configured to optionally allow or force the automatic use of the Auto Auth token for these requests  Please see the  API Proxy docs  apiproxy  for more information   API Proxy functionality takes place within a defined  listener   and its behaviour can be configured with an   api proxy  stanza   vault docs agent and proxy agent apiproxy configuration api proxy       Caching  Vault Agent allows client side 
caching of responses containing newly created tokens and responses containing leased secrets generated off of these newly created tokens  Please see the  Caching docs  caching  for information      API      Quit  This endpoint triggers shutdown of the agent  By default  it is disabled  and can be enabled per listener using the   agent api   agent api  stanza  It is recommended to only enable this on trusted interfaces  as it does not require any authorization to use     Method   Path                                                POST      agent v1 quit         Cache  See the  caching   vault docs agent and proxy agent caching api  page for details on the cache API      Configuration      Command options      log level       log level     string   info      Log verbosity level  Supported values  in   order of descending detail  are  trace    debug    info    warn   and  error   This can   also be specified via the  VAULT LOG LEVEL  environment variable       log format       log format     string   standard      Log format  Supported values   are  standard  and  json   This can also be specified via the    VAULT LOG FORMAT  environment variable       log file       log file     the absolute path where Vault Agent should save   log messages  Paths that end with a path separator use the default file name     agent log   Paths that do not end with a file extension use the default     log  extension  If the log file rotates  Vault Agent appends the current   timestamp to the file name at the time of rotation  For example      log file    Full log file   Rotated log file                                                     var log      var log agent log      var log agent  timestamp  log      var log my diary      var log my diary log      var log my diary  timestamp  log      var log my diary txt      var log my diary txt      var log my diary  timestamp  txt       log rotate bytes       log rotate bytes     to specify the number of   bytes that should be written to a 
log before it needs to be rotated  Unless specified    there is no limit to the number of bytes that can be written to a log file       log rotate duration       log rotate duration     to specify the maximum   duration a log should be written to before it needs to be rotated  Must be a duration   value such as 30s  Defaults to 24h       log rotate max files       log rotate max files     to specify the maximum   number of older log file archives to keep  Defaults to  0   no files are ever deleted     Set to   1  to discard old log files when a new one is created       Configuration file options  These are the currently available general configuration options      vault   code   vault  vault    optional     code    Specifies the remote Vault server the Agent connects to      auto auth   code   auto auth  autoauth    optional     code    Specifies the method and other options used for Auto Auth functionality      api proxy   code   api proxy  apiproxy    optional     code    Specifies options used for API Proxy functionality      cache   code   cache  caching    optional     code    Specifies options used for Caching functionality      listener   code   listener  listener    optional     code    Specifies the addresses and ports on which the Agent will respond to requests          Note    On  SIGHUP    kill  SIGHUP   pidof vault     Vault Agent will attempt to reload listener TLS configuration    This method can be used to refresh certificates used by Vault Agent without having to restart its process      pid file    string         Path to the file in which the agent s Process ID    PID  should be stored     exit after auth    bool  false     If set to  true   the agent will exit   with code  0  after a single successful auth  where success means that a   token was retrieved and all sinks successfully wrote it  If you have    template  stanzas defined in your agent configuration  the agent   waits for the configured templates to render successfully before   exiting  
If you use environment templates   env template    and set    exit after auth  to true  Vault agent will not run the child processes   defined in your  exec  stanza      disable idle connections    string array         A list of strings that disables idle connections for various features in Vault Agent    Valid values include   auto auth    caching    proxying   and  templating    proxying  configures this for the API proxy  which is   identical in function to  caching  for historical reasons  Can also be configured by setting the  VAULT AGENT DISABLE IDLE CONNECTIONS    environment variable as a comma separated string  This environment variable will override any values found in a configuration file      disable keep alives    string array         A list of strings that disables keep alives for various features in Vault Agent    Valid values include   auto auth    caching    proxying   and  templating    proxying  configures this for the API proxy  which is   identical in function to  caching  for historical reasons  Can also be configured by setting the  VAULT AGENT DISABLE KEEP ALIVES    environment variable as a comma separated string  This environment variable will override any values found in a configuration file      template   code   template  template    optional     code    Specifies options used for templating Vault secrets to files      template config   code   template config  template config    optional     code    Specifies templating engine behavior      exec   code   exec  process supervisor    optional     code    Specifies options for vault agent to run a child process   that injects secrets  via  env template  stanzas  as environment variables      env template   code   env template  template    optional     code    Multiple blocks accepted  Each block contains   the options used for templating Vault secrets as environment variables via the    process supervisor mode   vault docs agent and proxy agent process supervisor       telemetry   code   
telemetry  telemetry    optional     code    Specifies the telemetry   reporting system  See the  telemetry Stanza   vault docs agent and proxy agent telemetry stanza  section below   for a list of metrics specific to Agent      log level    Equivalent to the    log level  command line flag    log level           Note    On  SIGHUP    kill  SIGHUP   pidof vault     Vault Agent will update the log level to the value   specified by configuration file  including overriding values set using CLI or environment variable parameters       log format    Equivalent to the    log format  command line flag    log format       log file    Equivalent to the    log file  command line flag    log file       log rotate duration    Equivalent to the    log rotate duration  command line flag    log rotate duration       log rotate bytes    Equivalent to the    log rotate bytes  command line flag    log rotate bytes       log rotate max files    Equivalent to the    log rotate max files  command line flag    log rotate max files        vault stanza  There can at most be one top level  vault  block  and it has the following configuration entries      address    string   optional      The address of the Vault server to   connect to  This should be a Fully Qualified Domain Name  FQDN  or IP   such as  https   vault fqdn 8200  or  https   172 16 9 8 8200     This value can be overridden by setting the  VAULT ADDR  environment variable      ca cert    string   optional      Path on the local disk to a single PEM encoded   CA certificate to verify the Vault server s SSL certificate  This value can   be overridden by setting the  VAULT CACERT  environment variable      ca path    string   optional      Path on the local disk to a directory of   PEM encoded CA certificates to verify the Vault server s SSL certificate    This value can be overridden by setting the  VAULT CAPATH  environment   variable      client cert    string   optional      Path on the local disk to a single   PEM encoded 
CA certificate to use for TLS authentication to the Vault server    This value can be overridden by setting the  VAULT CLIENT CERT  environment   variable      client key    string   optional      Path on the local disk to a single   PEM encoded private key matching the client certificate from  client cert     This value can be overridden by setting the  VAULT CLIENT KEY  environment   variable      tls skip verify    string   optional      Disable verification of TLS   certificates  Using this option is highly discouraged as it decreases the   security of data transmissions to and from the Vault server  This value can   be overridden by setting the  VAULT SKIP VERIFY  environment variable      tls server name    string   optional      Name to use as the SNI host when   connecting via TLS  This value can be overridden by setting the    VAULT TLS SERVER NAME  environment variable      namespace    string   optional      Namespace to use for all of Vault Agent s   requests to Vault  This can also be specified by command line or environment variable    The order of precedence is  this setting lowest  followed by the environment variable    VAULT NAMESPACE   and then the highest precedence command line option   namespace     If none of these are specified  defaults to the root namespace        retry stanza  The  vault  stanza may contain a  retry  stanza that controls how failing Vault requests are handled  whether these requests are issued in order to render templates  or are proxied requests coming from the api proxy subsystem  Auto auth  however  has its own notion of retrying and is not affected by this section   For requests from the templating engine  Vaul Agent will reset its retry counter and perform retries again once all retries are exhausted  This means that templating will retry on failures indefinitely unless  exit on retry failure  from the   template config   template config  stanza is set to  true    Here are the options for the  retry  stanza      num 
retries    int  12     Specify how many times a failing request will   be retried  A value of  0  translates to the default  i e  12 retries    A value of   1  disables retries  The environment variable  VAULT MAX RETRIES    overrides this setting   There are a few subtleties to be aware of here  First  requests originating from the proxy cache will only be retried if they resulted in specific HTTP result codes  any 50x code except 501   not implemented    as well as 412   precondition failed    412 is used in Vault Enterprise 1 7  to indicate a stale read due to eventual consistency  Requests coming from the template subsystem are retried regardless of the failure   Second  templating retries may be performed by both the templating engine  and  the cache proxy if Vault Agent  persistent cache  persistent cache  is enabled  This is due to the fact that templating requests go through the cache proxy when persistence is enabled   Third  the backoff algorithm used to set the time between retries differs for the template and cache subsystems  This is a technical limitation we hope to address in the future       listener stanza  Vault Agent supports one or more  listener  listener main  stanzas  Listeners can be configured with or without  caching  caching   but will use the cache if it has been configured  and will enable the  API proxy  apiproxy   In addition to the standard listener configuration  an Agent s listener configuration also supports the following      require request header    bool  false     Require that all incoming HTTP   requests on this listener must have an  X Vault Request  true  header entry    Using this option offers an additional layer of protection from Server Side   Request Forgery attacks  Requests on the listener that do not have the proper    X Vault Request  header will fail  with a HTTP response status code of  412  Precondition Failed       role    string  default      role  determines which APIs the listener serves    It can be 
configured to  metrics only  to serve only metrics  or the default role   default     which serves everything  including metrics   The  require request header  does not apply   to  metrics only  listeners      agent api   code   agent api  agent api    optional     code    Manages optional Agent API endpoints        agent api stanza     enable quit    bool  false     If set to  true   the agent will enable the  quit   vault docs agent and proxy agent quit  API       telemetry stanza  Vault Agent supports the  telemetry  telemetry  stanza and collects various runtime metrics about its performance  the auto auth and the cache status     Metric                             Description                                            Type                                                                                                               vault agent authenticated         Current authentication status  1   has valid token     gauge                                          0   no valid token                                                  vault agent auth failure          Number of authentication failures                      counter      vault agent auth success          Number of authentication successes                     counter      vault agent proxy success         Number of requests successfully proxied                counter      vault agent proxy client error    Number of requests for which Vault returned an error   counter      vault agent proxy error           Number of requests the agent failed to proxy           counter      vault agent cache hit             Number of cache hits                                   counter      vault agent cache miss            Number of cache misses                                 counter        IMPORTANT   VAULT ADDR  usage  If you export the  VAULT ADDR  environment variable on the Vault Agent instance  that value takes precedence over the value in the configuration file  The Vault Agent uses that to connect to Vault and 
this can create an infinite loop where the value of  VAULT ADDR  is used to make a connection  and the Vault Agent ends up trying to connect to itself instead of the server   When the connection fails  the Vault Agent increments the port and tries again  The agent repeats these attempts  which leads to port exhaustion   This problem is a result of the precedence order of the 3 different ways to configure the Vault address  They are  in increasing order of priority   1  Configuration files 1  Environment variables 1  CLI flags     Start Vault Agent  To run Vault Agent   1   Download   vault downloads  the Vault binary where the client application runs     virtual machine  Kubernetes pod  etc    1  Create a Vault Agent configuration file   See the  Example    Configuration   example configuration  section for an example configuration    1  Start a Vault Agent with the configuration file        Example           shell session      vault agent  config  etc vault agent config hcl            To get help  run         shell session      vault agent  h         As with Vault  the   config  flag can be used in three different ways     Use the flag once to name the path to a single specific configuration file    Use the flag multiple times to name multiple configuration files  which will be composed at runtime    Use the flag to name a directory of configuration files  the contents of which will be composed at runtime      Example configuration  An example configuration  with very contrived values  follows      hcl pid file      pidfile   vault     address    https   vault fqdn 8200    retry       num retries   5        auto auth     method  aws        mount path    auth aws subaccount      config           type    iam        role    foobar               sink  file        config           path     tmp file foo               sink  file        wrap ttl    5m      aad env var    TEST AAD ENV      dh type    curve25519      dh path     tmp file foo dhpath2      config           
path     tmp file bar               cache        An empty cache stanza still enables caching    api proxy     use auto auth token   true    listener  unix      address     path to socket    tls disable   true    agent api       enable quit   true        listener  tcp      address    127 0 0 1 8100    tls disable   true    template     source     etc vault server key ctmpl    destination     etc vault server key     template     source     etc vault server crt ctmpl    destination     etc vault server crt          vault    vault docs agent and proxy agent vault stanza  autoauth    vault docs agent and proxy autoauth  caching    vault docs agent and proxy agent caching  apiproxy    vault docs agent and proxy agent apiproxy  persistent cache    vault docs agent and proxy agent caching persistent caches  template    vault docs agent and proxy agent template  process supervisor    vault docs agent and proxy agent process supervisor  template config    vault docs agent and proxy agent template template configurations  agent api    vault docs agent and proxy agent  agent api stanza  listener    vault docs agent and proxy agent listener stanza  listener main    vault docs configuration listener tcp  winsvc    vault docs agent and proxy agent winsvc  telemetry    vault docs configuration telemetry"}
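The precedence order for the Vault address (configuration file < environment variable < CLI flag) described in the record above can be sketched as a short illustrative snippet. This is not Vault source code; the function name and the fallback default address are assumptions for illustration only.

```python
# Illustrative sketch only (not Vault source): resolve the Vault address
# from the three configuration sources, in increasing order of priority:
# configuration file < environment variable < CLI flag.

def resolve_vault_addr(config_addr=None, env_addr=None, cli_addr=None):
    """Return whichever address wins under the documented precedence."""
    for addr in (cli_addr, env_addr, config_addr):
        if addr:
            return addr
    return "https://127.0.0.1:8200"  # assumed fallback default

# An exported VAULT_ADDR overrides the config file's address, which is how
# pointing VAULT_ADDR at the Agent's own listener causes the self-connection
# loop described above.
print(resolve_vault_addr(config_addr="https://vault-fqdn:8200",
                         env_addr="http://127.0.0.1:8100"))
# → http://127.0.0.1:8100
```

Because the environment variable outranks the configuration file, unsetting `VAULT_ADDR` on the Agent host (or setting it to the real server) is the fix for the loop.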
{"questions":"vault page title Run Vault Agent in process supervisor mode environment variables for use in external processes Run Vault Agent in process supervisor mode Run Vault Agent in process supervisor mode to write Vault secrets to layout docs","answers":"---\nlayout: docs\npage_title: Run Vault Agent in process supervisor mode\ndescription: >-\n  Run Vault Agent in process supervisor mode to write Vault secrets to\n  environment variables for use in external processes.\n\n---\n\n# Run Vault Agent in process supervisor mode\n\nVault Agent's Process Supervisor Mode allows Vault secrets to be injected into\na process via environment variables using\n[Consul Template markup][consul-templating-language].\n\n-> If you are running your applications in a Kubernetes cluster, we recommend\n  evaluating the [Vault Secrets Operator](\/vault\/docs\/platform\/k8s\/vso) and\n  the [Vault Agent Sidecar Injector](\/vault\/docs\/platform\/k8s\/injector)\n  instead.\n\n## Functionality\n\nVault Agent will inject secrets referenced in the `env_template` configuration\nblocks as environment variables into the child process specified in the `exec` block.\n\nWhen you start Vault Agent in process supervisor mode, it will wait until each\nenvironment variable template has rendered at least once before starting the\nprocess. If `restart_on_secret_changes` is set to `always` (default), Agent\nwill restart the process whenever an update to an injected secret is detected.\nThis could be either a static secret update (done on\n[`static_secret_render_interval`](\/vault\/docs\/agent-and-proxy\/agent\/template#static_secret_render_interval))\nor dynamic secret being close to its expiration.\n\nIn many ways, Vault Agent will mirror the child process. Standard intput and\noutput streams (`stdin` \/ `stdout` \/ `stderr`) are all forwarded to the child\nprocess. 
Additionally, Vault Agent will exit when the child process exits on\nits own with the same exit code.\n\n## Configuration\n\n-> Agent's [generate-config](\/vault\/docs\/agent-and-proxy\/agent\/generate-config)\n   tool will help you get started by generating a valid agent configuration\n   file from the given inputs.\n\nThe process supervisor mode requires at least one `env_template` block and\nexactly one top level `exec` block. It is incompatible with regular file\n`template` entries.\n\n### `env_template`\n\nThe `env_template` stanza maps the template specified in the `contents` field or\nreferenced in the `source` field to the environment variable name in the title\nof the stanza. It uses the same\n[templating language](\/vault\/docs\/agent-and-proxy\/agent\/template#templating-language)\nas file templates but permits only a subset of\n[its configuration parameters](\/vault\/docs\/agent-and-proxy\/agent\/template#template_configurations):\n\n- environment variable name `(string: <required>)` - the name of the\n  environment variable to which the contents of the template should map.\n\n- `contents` `(string: \"\")` - This option allows embedding the contents of\n  a template in the configuration file rather than supplying the `source` path to\n  the template file. This is useful for short templates. This option is mutually\n  exclusive with the `source` option.\n\n- `source` `(string: \"\")` - Path on disk to use as the input template. This\n  option is required if not using the `contents` option.\n\n- `error_on_missing_key` `(bool: false)` - Exit with an error when accessing\n  a struct or map field\/key that does not exist. The default behavior will print `<no value>`\n  when accessing a field that does not exist. It is highly recommended you set this\n  to \"true\". 
Also see\n  [`exit_on_retry_failure` in global Vault Agent Template Config](\/vault\/docs\/agent-and-proxy\/agent\/template#interaction-between-exit_on_retry_failure-and-error_on_missing_key).\n\n- `left_delimiter` `(string: \"\")` - Delimiter to use in the template. The\n  default is \"{{\" but for some templates, it may be easier to use a different\n  delimiter that does not conflict with the output file itself.\n\n### `exec`\n\nThe top level `exec` block has the following configuration entries.\n\n- `command` `(string array: required)` - Specify the command for the child\n  process with optional arguments. The executable's path must be either\n  absolute or relative to the current working directory.\n\n- `restart_on_secret_changes` `(string: \"always\")` - Controls whether agent\n  will restart the child process on secret changes. There are two types of\n  secret changes relevant to this configuration: a static secret update (on\n  [`static_secret_render_interval`](\/vault\/docs\/agent-and-proxy\/agent\/template#static_secret_render_interval))\n  and a dynamic secret being close to its expiration. The configuration supports\n  two options: `always` and `never`.\n\n- `restart_stop_signal` `(string: \"SIGTERM\")` - Signal to send to the child\n  process when a secret has been updated and the process needs to be restarted.\n  The process has 30 seconds after this signal is sent until `SIGKILL` is sent\n  to force the child process to stop.\n\n\n## Configuration example\n\nThe following example was generated using\n[`vault agent generate-config`](\/vault\/docs\/agent-and-proxy\/agent\/generate-config),\na configuration helper tool. 
Given this configuration, Vault Agent will run\nthe child process (`.\/my-app arg1 arg2`) with two additional environment\nvariables (`FOO_USER` and `FOO_PASSWORD`) populated with secrets from Vault.\n\n```hcl\nauto_auth {\n\n  method {\n    type = \"token_file\"\n\n    config {\n      token_file_path = \"\/Users\/avean\/.vault-token\"\n    }\n  }\n}\n\ntemplate_config {\n  static_secret_render_interval = \"5m\"\n  exit_on_retry_failure         = true\n  max_connections_per_host      = 10\n}\n\nvault {\n  address = \"http:\/\/localhost:8200\"\n}\n\nenv_template \"FOO_PASSWORD\" {\n  contents             = \"\"\n  error_on_missing_key = true\n}\nenv_template \"FOO_USER\" {\n  contents             = \"\"\n  error_on_missing_key = true\n}\n\nexec {\n  command                   = [\".\/my-app\", \"arg1\", \"arg2\"]\n  restart_on_secret_changes = \"always\"\n  restart_stop_signal       = \"SIGTERM\"\n}\n```\n\n[consul-templating-language]: https:\/\/github.com\/hashicorp\/consul-template\/blob\/v0.28.1\/docs\/templating-language.md\n[template-config]: \/vault\/docs\/agent-and-proxy\/agent\/template#template-configurations\n\n\n## Tutorial\n\nRefer to the [Vault Agent - secrets as environment\nvariables](\/vault\/tutorials\/vault-agent\/agent-env-vars) tutorial for an\nend-to-end example.","site":"vault","answers_cleaned":"    layout  docs page title  Run Vault Agent in process supervisor mode description       Run Vault Agent in process supervisor mode to write Vault secrets to   environment variables for use in external processes          Run Vault Agent in process supervisor mode  Vault Agent s Process Supervisor Mode allows Vault secrets to be injected into a process via environment variables using  Consul Template markup  consul templating language       If you are running your applications in a Kubernetes cluster  we recommend   evaluating the  Vault Secrets Operator   vault docs platform k8s vso  and   the  Vault Agent Sidecar Injector   vault docs platform k8s 
injector    instead      Functionality  Vault Agent will inject secrets referenced in the  env template  configuration blocks as environment variables into the child process specified in the  exec  block   When you start Vault Agent in process supervisor mode  it will wait until each environment variable template has rendered at least once before starting the process  If  restart on secret changes  is set to  always   default   Agent will restart the process whenever an update to an injected secret is detected  This could be either a static secret update  done on   static secret render interval    vault docs agent and proxy agent template static secret render interval   or dynamic secret being close to its expiration   In many ways  Vault Agent will mirror the child process  Standard intput and output streams   stdin     stdout     stderr   are all forwarded to the child process  Additionally  Vault Agent will exit when the child process exits on its own with the same exit code      Configuration     Agent s  generate config   vault docs agent and proxy agent generate config     tool will help you get started by generating a valid agent configuration    file from the given inputs   The process supervisor mode requires at least one  env template  block and exactly one top level  exec  block  It is incompatible with regular file  template  entries        env template    env template  stanza maps the template specified in the  contents  field or referenced in the  source  field to the environment variable name in the title of the stanza  It uses the same  templating language   vault docs agent and proxy agent template templating language  as file templates but permits only a subset of  its configuration parameters   vault docs agent and proxy agent template template configurations      environment variable name   string   required      the name of the   environment variable to which the contents of the template should map      contents    string         This option 
allows embedding the contents of   a template in the configuration file rather then supplying the  source  path to   the template file  This is useful for short templates  This option is mutually   exclusive with the  source  option      source    string         Path on disk to use as the input template  This   option is required if not using the  contents  option      error on missing key    bool  false     Exit with an error when accessing   a struct or map field key that does notexist  The default behavior will print   no value     when accessing a field that does not exist  It is highly recommended you set this   to  true   Also see     exit on retry failure  in global Vault Agent Template Config   vault docs agent and proxy agent template interaction between exit on retry failure and error on missing key       left delimiter    string         Delimiter to use in the template  The   default is      but for some templates  it may be easier to use a different   delimiter that does not conflict with the output file itself        exec   The top level  exec  block has the following configuration entries      command    string array  required     Specify the command for the child   process with optional arguments  The executable s path must be either   absolute or relative to the current working directory      restart on secret changes    string   always      Controls whether agent   will restart the child process on secret changes  There are two types of   secret changes relevant to this configuration  a static secret update  on    static secret render interval    vault docs agent and proxy agent template static secret render interval     and dynamic secret being close to its expiration  The configuration supports   two options   always  and  never       restart stop signal    string   SIGTERM      Signal to send to the child   process when a secret has been updated and the process needs to be restarted    The process has 30 seconds after this signal is sent until  
SIGKILL  is sent   to force the child process to stop       Configuration example  The following example was generated using   vault agent generate config    vault docs agent and proxy agent generate config   a configuration helper tool  Given this configuration  Vault Agent will run the child process     my app arg1 arg2   with two additional environment variables   FOO USER  and  FOO PASSWORD   populated with secrets from Vault      hcl auto auth      method       type    token file       config         token file path     Users avean  vault token               template config     static secret render interval    5m    exit on retry failure           true   max connections per host        10    vault     address    http   localhost 8200     env template  FOO PASSWORD      contents                    error on missing key   true   env template  FOO USER      contents                    error on missing key   true    exec     command                         my app    arg1    arg2     restart on secret changes    always    restart stop signal          SIGTERM          consul templating language   https   github com hashicorp consul template blob v0 28 1 docs templating language md  template config    vault docs agent and proxy agent template template configurations      Tutorial  Refer to the  Vault Agent   secrets as environment variables   vault tutorials vault agent agent env vars  tutorial for an end to end example "}
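The process supervisor behavior documented in the record above (inject rendered `env_template` values as environment variables into the `exec` child, then mirror its exit code) can be modeled with a minimal sketch. This is a simplified, assumed model for illustration, not Agent source code; `run_with_secrets` and the sample variable values are hypothetical.

```python
# Minimal sketch (assumed, simplified model; not Agent source code):
# inject rendered secrets as environment variables into a child process
# and report its exit code, as process supervisor mode does.
import os
import subprocess
import sys

def run_with_secrets(command, rendered_env):
    """Start the child with rendered secrets merged into its environment."""
    env = dict(os.environ)
    env.update(rendered_env)  # e.g. FOO_USER / FOO_PASSWORD from env_template
    proc = subprocess.run(command, env=env)
    return proc.returncode    # Agent exits with the same code as the child

rc = run_with_secrets(
    [sys.executable, "-c", "import os; print(os.environ['FOO_USER'])"],
    {"FOO_USER": "app-user"},  # stands in for a rendered env_template value
)
# the child prints "app-user" and exits 0
```

The real Agent additionally forwards `stdin`/`stdout`/`stderr`, waits for every template to render at least once before the first start, and restarts the child (via `restart_stop_signal`, then `SIGKILL` after 30 seconds) when an injected secret changes.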
{"questions":"vault Use Vault Agent templates page title Use Vault Agent templates Template markup layout docs Use templates with Vault Agent to write Vault secrets files with Consul","answers":"---\nlayout: docs\npage_title: Use Vault Agent templates\ndescription: >-\n  Use templates with Vault Agent to write Vault secrets files with Consul\n  Template markup.\n---\n\n# Use Vault Agent templates\n\nVault Agent's Template functionality allows Vault secrets to be rendered to files\nor environment variables (via the [Process Supervisor Mode](\/vault\/docs\/agent-and-proxy\/agent\/process-supervisor))\nusing [Consul Template markup][consul-templating-language].\n\n## Functionality\n\nThe `template_config` stanza configures overall default behavior for the\ntemplating engine. Note that `template_config` can only be defined once, and is\ndifferent from the `template` stanza. Unlike `template` which focuses on where\nand how a specific secret is rendered, `template_config` contains parameters\naffecting how the templating engine as a whole behaves and its interaction with\nthe rest of Agent. This includes, but is not limited to, program exit behavior.\nOther parameters that apply to the templating engine as a whole may be added\nover time.\n\nThe `template` stanza configures the Vault Agent for rendering secrets to files\nusing Consul Template markup language. Multiple `template` stanzas can be\ndefined to render multiple files.\n\nWhen the Agent is started with templating enabled, it will attempt to acquire a\nVault token using the configured auto-auth Method. On failure, it will back off\nfor a short while (including some randomness to help prevent thundering herd\nscenarios) and retry. 
On success, secrets defined in the templates will be\nretrieved from Vault and rendered locally.\n\n## Templating language\n\nThe template output content can be provided directly as part of the `contents`\noption in a `template` stanza or as a separate `.ctmpl` file and specified in\nthe `source` option of a `template` stanza.\n\nIn order to fetch secrets from Vault, whether those are static secrets, dynamic\ncredentials, or certificates, Vault Agent templates require the use of the\n`secret`\n[function](https:\/\/github.com\/hashicorp\/consul-template\/blob\/master\/docs\/templating-language.md#secret)\nor `pkiCert`\n[function](https:\/\/github.com\/hashicorp\/consul-template\/blob\/main\/docs\/templating-language.md#pkicert)\nfrom Consul Template.\n\nThe `secret` function works for all types of secrets and depending on the type\nof secret that's being rendered by this function, template will have different\nrenewal behavior as detailed in the [Renewals\nsection](#renewals-and-updating-secrets). The `pkiCert` function is intended to\nwork specifically for certificates issued by the [PKI Secrets\nEngine](\/vault\/docs\/secrets\/pki). Refer to the [Certificates](#certificates) section\nfor differences in certificate renewal behavior between `secret` and `pkiCert`.\n\nThe following links contain additional resources for the templating language used by Vault Agent templating.\n\n- [Consul Templating Documentation][consul-templating-language]\n- [Go Templating Language Documentation](https:\/\/pkg.go.dev\/text\/template#pkg-overview)\n\n### Template language example\n\nThe following is an example of a template that retrieves a generic secret from Vault's\nKV store:\n```\n\n\n\n```\n\nThe following is an example of a template that issues a PKI certificate in\nVault's PKI secrets engine. 
The fetching of the certificate or key from a PKI role\nthrough this function will be based on the certificate's expiration.\n\nTo generate a new certificate and create a bundle with the key, certificate, and CA, use:\n```\n\n\n\n\n\n```\n\nTo fetch only the issuing CA for this mount, use:\n\n```\n\n\n\n```\n\nAlternatively, `pki\/cert\/ca_chain` can be used to fetch the full CA chain.\n\n## Global configurations\n\nThe top level `template_config` block has the following configuration entries that affect\nall templates:\n\n- `exit_on_retry_failure` `(bool: false)` - This option configures Vault Agent\nto exit after it has exhausted its number of template retry attempts due to\nfailures.\n\n- `static_secret_render_interval` `(string or integer: 5m)` - If specified, configures\n  how often Vault Agent Template should render non-leased secrets such as KV v2.\n  This setting will not change how often Vault Agent Templating renders leased\n  secrets. Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `max_connections_per_host` `(int: 10)` - Limits the total number of connections\n  that the Vault Agent templating engine can use for a particular Vault host. 
This limit\n  includes connections in the dialing, active, and idle states.\n\n- `lease_renewal_threshold` `(float: 0.9)` - How long Vault Agent's template\n  engine should wait to refresh dynamic, non-renewable leases, measured as\n  a fraction of the lease duration.\n\n### `template_config` stanza example\n\n```hcl\ntemplate_config {\n  exit_on_retry_failure = true\n  static_secret_render_interval = \"10m\"\n  max_connections_per_host = 20\n}\n```\n\nIn another example, `template_config` with the [`error_on_missing_key` parameter in the template stanza](\/vault\/docs\/agent-and-proxy\/agent\/template#error_on_missing_key)\nas well as `exit_on_retry_failure` results in the Agent exiting on missing key\/value\nissues instead of the default retry behavior.\n\n```hcl\ntemplate_config {\n  exit_on_retry_failure = true\n  static_secret_render_interval = \"10m\"\n  max_connections_per_host = 20\n}\n\ntemplate {\n  source      = \"\/tmp\/agent\/template.ctmpl\"\n  destination = \"\/tmp\/agent\/render.txt\"\n  error_on_missing_key = true\n}\n```\n\n### Interaction between `exit_on_retry_failure` and `error_on_missing_key`\n\nThe parameter\n[`error_on_missing_key`](\/vault\/docs\/agent-and-proxy\/agent\/template#error_on_missing_key) can be\nspecified within the `template` stanza which determines if a template should\nerror when a key is missing in the secret. When `error_on_missing_key` is not\nspecified or set to `false` and the key to render is not in the secret's\nresponse, the templating engine will ignore it (or render `\"<no value>\"`) and\ncontinue on with its rendering.\n\nIf the desire is to have Agent fail and exit on a missing key, both\n`template.error_on_missing_key` and `template_config.exit_on_retry_failure` must\nbe set to true. 
Otherwise, the templating engine will error and render to its\ndestination, but Agent will not exit and will retry until the key exists or until\nthe process is terminated.\n\nNote that a missing key from a secret's response is different from a missing or\nnon-existent secret. The templating engine will always error if a secret is\nmissing, but will only error for a missing key if `error_on_missing_key` is set.\nWhether Vault Agent will exit when the templating engine errors depends on the\nvalue of `exit_on_retry_failure`.\n\n## Template configurations\n\nThe top level `template` block has multiple configuration entries. The\nparameters found in the template configuration section in the consul-template\n[documentation\npage](https:\/\/github.com\/hashicorp\/consul-template\/blob\/main\/docs\/configuration.md#templates)\ncan be used here:\n\n<Tip>\n\nThe parameters marked with `\u0394` below are only applicable to file templates and\ncannot be used with `env_template` entries in process supervisor mode.\n\n<\/Tip>\n\n- `source` `(string: \"\")` - Path on disk to use as the input template. This\n  option is required if not using the `contents` option.\n- `destination`\u0394 `(string: required)` - Path on disk where the rendered secrets should\n  be created. If the parent directories do not exist, Vault\n  Agent will attempt to create them, unless `create_dest_dirs` is false.\n- `create_dest_dirs`\u0394 `(bool: true)` - This option tells Vault Agent to create\n  the parent directories of the destination path if they do not exist.\n- `contents` `(string: \"\")` - This option allows embedding the contents of\n  a template in the configuration file rather than supplying the `source` path to\n  the template file. This is useful for short templates. This option is mutually\n  exclusive with the `source` option.\n- `command`\u0394 `(string: \"\")` - This is the optional command to run when the\n  template is rendered. 
The command will only run if the resulting template changes.\n  The command must return within 30s (configurable), and it must have a successful\n  exit code. Vault Agent is not a replacement for a process monitor or init system.\n  This is deprecated in favor of the `exec` option.\n- `command_timeout`\u0394 `(duration: 30s)` - This is the maximum amount of time to\n  wait for the optional command to return. This is deprecated in favor of the\n  `exec` option.\n- `error_on_missing_key` `(bool: false)` - Exit with an error when accessing\n  a struct or map field\/key that does not exist. The default behavior will print `<no value>`\n  when accessing a field that does not exist. It is highly recommended you set this\n  to \"true\". Also see [`exit_on_retry_failure` in global Vault Agent Template Config](\/vault\/docs\/agent-and-proxy\/agent\/template#interaction-between-exit_on_retry_failure-and-error_on_missing_key).\n- `exec`\u0394 `(object: optional)` - The exec block executes a command when the\n  template is rendered and the output has changed. The block parameters are\n  `command` `(string or array: required)` and `timeout` `(string: optional, defaults\n  to 30s)`. `command` can be given as a string or array of strings to execute, such as\n  `\"touch myfile\"` or `[\"touch\", \"myfile\"]`. To protect against command injection, we\n  strongly recommend using an array of strings, and we attempt to parse that way first.\n  Note also that using a comma with the string approach will cause it to be interpreted as an\n  array, which may not be desirable.\n- `perms`\u0394 `(string: \"\")` - This is the permission to render the file. If\n  this option is left unspecified, Vault Agent will attempt to match the permissions\n  of the file that already exists at the destination path. 
If no file exists at that\n  path, the permissions are 0644.\n- `backup`\u0394 `(bool: true)` - This option backs up the previously rendered template\n  at the destination path before writing a new one. It keeps exactly one backup.\n  This option is useful for preventing accidental changes to the data without having\n  a rollback strategy.\n- `left_delimiter` `(string: \"\")` - Delimiter to use in the template. The\n  default is \"{{\" but for some templates, it may be easier to use a different\n  delimiter that does not conflict with the output file itself.\n- `sandbox_path`\u0394 `(string: \"\")` - If a sandbox path is provided, any path\n  provided to the `file` function is checked that it falls within the sandbox path.\n  Relative paths that try to traverse outside the sandbox path will exit with an error.\n- `wait`\u0394 `(object: required)` - This is the `minimum(:maximum)` to wait before rendering\n  a new template to disk and triggering a command, separated by a colon (`:`).\n\n\n### Example `template` stanza\n\n```hcl\ntemplate {\n  source      = \"\/tmp\/agent\/template.ctmpl\"\n  destination = \"\/tmp\/agent\/render.txt\"\n  error_on_missing_key = true\n}\n```\n\nIf you only want to use the Vault Agent to render one or more templates and do\nnot need to sink the acquired credentials, you can omit the `sink` stanza from\nthe `auto_auth` stanza in the Agent configuration.\n\n## Renewals and updating secrets\n\nThe Vault Agent templating automatically renews and fetches secrets\/tokens.\nUnlike [Vault Agent caching](\/vault\/docs\/agent-and-proxy\/agent\/caching), the behavior of how Vault Agent\ntemplating does this depends on the type of secret or token. 
The following is a\nhigh level overview of different behaviors.\n\n### Renewable secrets\n\nIf a secret or token is renewable, Vault Agent will renew the secret after 2\/3\nof the secret's lease duration has elapsed.\n\n### Non-Renewable secrets\n\nIf a secret or token isn't renewable or leased, Vault Agent will fetch the secret every 5 minutes.\nThis can be configured using the `template_config` stanza value [static_secret_render_interval](\/vault\/docs\/agent-and-proxy\/agent\/template#static_secret_render_interval) (requires Vault 1.8+).\nNon-renewable secrets include (but are not limited to) [KV Version 2](\/vault\/docs\/secrets\/kv\/kv-v2).\n\n### Non-Renewable leased secrets\n\nIf a secret or token is non-renewable but leased, Vault Agent will fetch the secret when 90% of the secret's time-to-live (TTL)\nis reached, plus or minus some jitter to ensure that many clients don't hit Vault simultaneously. Leased, non-renewable secrets\ninclude (but are not limited to) dynamic secrets such as [database credentials](\/vault\/docs\/secrets\/databases). The 90% value\nis configurable using the `template_config` stanza value\n[lease_renewal_threshold](\/vault\/docs\/agent-and-proxy\/agent\/template#lease_renewal_threshold). While KVv1 secrets are not leased,\nthis also controls the fraction at which Agent will re-fetch [KV Version 1](\/vault\/docs\/secrets\/kv\/kv-v1) secrets that\nhave a defined `lease_duration`.\n\n\n### Static roles\n\nIf a secret has a `rotation_period`, such as a [database static role](\/vault\/docs\/secrets\/databases#static-roles),\nVault Agent template will fetch the new secret as it changes in Vault. 
It does\nthis by inspecting the secret's time-to-live (TTL).\n\n### Certificates\n\nAs of Vault 1.11, certificates can be rendered using either `pkiCert` or\n`secret` template functions, although it is recommended to use `pkiCert` to\navoid unnecessarily generating certificates whenever Agent restarts or\nre-authenticates.\n\n#### Rendering using the `pkiCert` template function\n\nIf a [certificate](\/vault\/docs\/secrets\/pki) is rendered using the `pkiCert` template\nfunction, Vault Agent template will have the following fetching and re-rendering\nbehaviors on certificates:\n\n- Fetches a new certificate on Agent startup if none has been previously\nrendered or the current rendered one has expired.\n- On Agent's auto-auth re-authentication, due to a token expiry for example,\nskip fetching unless the current rendered one has expired.\n\n#### Rendering using the `secret` template function\n\nIf a [certificate](\/vault\/docs\/secrets\/pki) is rendered using the `secret` template\nfunction, Vault Agent template will have the following fetching and re-rendering\nbehaviors on certificates:\n\n- Fetches a new certificate on Agent startup, even if previously rendered\n  certificates are still valid.\n- If `generate_lease` is unset or set to `false`, it uses the certificate's\n  `validTo` field to determine re-fetch interval.\n- If `generate_lease` is set to `true`, apply the non-renewable, leased secret\n  rules.\n- On Agent's auto-auth re-authentication, due to a token expiry for example, it\n  fetches and re-renders a new certificate even if the existing certificate is\n  valid.\n\n## Templating configuration example\n\nThe following demonstrates Vault Agent Templates configuration blocks.\n\n```hcl\n# Other Vault Agent configuration blocks\n# ...\n\ntemplate_config {\n  static_secret_render_interval = \"10m\"\n  exit_on_retry_failure = true\n  max_connections_per_host = 20\n}\n\ntemplate {\n  source      = \"\/tmp\/agent\/template.ctmpl\"\n  destination = 
\"\/tmp\/agent\/render.txt\"\n}\n\ntemplate {\n  contents     = \"\"\n  destination  = \"\/tmp\/agent\/render-content.txt\"\n}\n```\n\nAnd the following demonstrates how the templates look when using `env_template` with\n[Process Supervisor Mode](\/vault\/docs\/agent-and-proxy\/agent\/process-supervisor)\n\n\n```hcl\n# Other Vault Agent configuration blocks\n# ...\n\ntemplate_config {\n  static_secret_render_interval = \"10m\"\n  exit_on_retry_failure = true\n  max_connections_per_host = 20\n}\n\nenv_template \"MY_ENV_VAR\" {\n  contents = \"\"\n}\n\nenv_template \"ENV_VAR_FROM_FILE\" {\n  source = \"\/tmp\/agent\/template.ctmpl\"\n}\n```\n\n[consul-templating-language]: https:\/\/github.com\/hashicorp\/consul-template\/blob\/v0.28.1\/docs\/templating-language.md\n[process-supervisor]: \/vault\/docs\/agent-and-proxy\/agent\/process-supervisor","site":"vault","answers_cleaned":"    layout  docs page title  Use Vault Agent templates description       Use templates with Vault Agent to write Vault secrets files with Consul   Template markup         Use Vault Agent templates  Vault Agent s Template functionality allows Vault secrets to be rendered to files or environment variables  via the  Process Supervisor Mode   vault docs agent and proxy agent process supervisor   using  Consul Template markup  consul templating language       Functionality  The  template config  stanza configures overall default behavior for the templating engine  Note that  template config  can only be defined once  and is different from the  template  stanza  Unlike  template  which focuses on where and how a specific secret is rendered   template config  contains parameters affecting how the templating engine as a whole behaves and its interaction with the rest of Agent  This includes  but is not limited to  program exit behavior  Other parameters that apply to the templating engine as a whole may be added over time   The  template  stanza configures the Vault Agent for rendering secrets to 
files using Consul Template markup language  Multiple  template  stanzas can be defined to render multiple files   When the Agent is started with templating enabled  it will attempt to acquire a Vault token using the configured auto auth Method  On failure  it will back off for a short while  including some randomness to help prevent thundering herd scenarios  and retry  On success  secrets defined in the templates will be retrieved from Vault and rendered locally      Templating language  The template output content can be provided directly as part of the  contents  option in a  template  stanza or as a separate   ctmpl  file and specified in the  source  option of a  template  stanza   In order to fetch secrets from Vault  whether those are static secrets  dynamic credentials  or certificates  Vault Agent templates require the use of the  secret   function  https   github com hashicorp consul template blob master docs templating language md secret  or  pkiCert   function  https   github com hashicorp consul template blob main docs templating language md pkicert  from Consul Template   The  secret  function works for all types of secrets and depending on the type of secret that s being rendered by this function  template will have different renewal behavior as detailed in the  Renewals section   renewals and updating secrets   The  pkiCert  function is intended to work specifically for certificates issued by the  PKI Secrets Engine   vault docs secrets pki   Refer to the  Certificates   certificates  section for differences in certificate renewal behavior between  secret  and  pkiCert    The following links contain additional resources for the templating language used by Vault Agent templating      Consul Templating Documentation  consul templating language     Go Templating Language Documentation  https   pkg go dev text template pkg overview       Template language example  The following is an example of a template that retrieves a generic secret from Vault s KV 
store              The following is an example of a template that issues a PKI certificate in Vault s PKI secrets engine  The fetching of the certificate or key from a PKI role through this function will be based on the certificate s expiration   To generate a new certificate and create a bundle with the key  certificate  and CA  use                To fetch only the issuing CA for this mount  use               Alternatively   pki cert ca chain  can be used to fetch the full CA chain      Global configurations  The top level  template config  block has the following configuration entries that affect all templates      exit on retry failure    bool  false     This option configures Vault Agent to exit after it has exhausted its number of template retry attempts due to failures      static secret render interval    string or integer  5m     If specified  configures   how often Vault Agent Template should render non leased secrets such as KV v2    This setting will not change how often Vault Agent Templating renders leased   secrets  Uses  duration format strings   vault docs concepts duration format       max connections per host    int  10     Limits the total number of connections   that the Vault Agent templating engine can use for a particular Vault host  This limit   includes connections in the dialing  active  and idle states      lease renewal threshold    float  0 9     How long Vault Agent s template   engine should wait for to refresh dynamic  non renewable leases  measured as   a fraction of the lease duration        template config  stanza example     hcl template config     exit on retry failure   true   static secret render interval    10m    max connections per host   20        In another example  template config  with   error on missing key  parameter in the template stanza   vault docs agent and proxy agent template error on missing key  as well as  exit on retry failure  result in the Agent exiting in case of no key   value issues instead of the 
default retry behavior      hcl template config     exit on retry failure   true   static secret render interval    10m    max connections per host   20    template     source          tmp agent template ctmpl    destination     tmp agent render txt    error on missing key   true            Interaction between  exit on retry failure  and  error on missing key   The parameter   error on missing key    vault docs agent and proxy agent template error on missing key  can be specified within the  template  stanza which determines if a template should error when a key is missing in the secret  When  error on missing key  is not specified or set to  false  and the key to render is not in the secret s response  the templating engine will ignore it  or render    no value     and continue on with its rendering   If the desire is to have Agent fail and exit on a missing key  both  template error on missing key  and  template config exit on retry failure  must be set to true  Otherwise  the templating engine will error and render to its destination  but Agent will not exit and will retry until the key exists or until the process is terminated   Note that a missing key from a secret s response is different from a missing or non existent secret  The templating engine will always error if a secret is missing  but will only error for a missing key if  error on missing key  is set  Whether Vault Agent will exit when the templating engine errors depends on the value of  exit on retry failure       Template configurations  The top level  template  block has multiple configuration entries  The parameters found in the template configuration section in the consul template  documentation page  https   github com hashicorp consul template blob main docs configuration md templates  can be used here    Tip   The parameters marked with     below are only applicable to file templates and cannot be used with  env template  entries in process supervisor mode     Tip      source    string        
 Path on disk to use as the input template  This   option is required if not using the  contents  option     destination     string  required     Path on disk where the rendered secrets should   be created  If the parent directories do not exist  Vault   Agent will attempt to create them  unless  create dest dirs  is false     create dest dirs     bool  true     This option tells Vault Agent to create   the parent directories of the destination path if they do not exist     contents    string         This option allows embedding the contents of   a template in the configuration file rather then supplying the  source  path to   the template file  This is useful for short templates  This option is mutually   exclusive with the  source  option     command     string         This is the optional command to run when the   template is rendered  The command will only run if the resulting template changes    The command must return within 30s  configurable   and it must have a successful   exit code  Vault Agent is not a replacement for a process monitor or init system    This is deprecated in favor of the  exec  option     command timeout     duration  30s     This is the maximum amount of time to   wait for the optional command to return  This is deprecated in favor of the    exec  option     error on missing key    bool  false     Exit with an error when accessing   a struct or map field key that does notexist  The default behavior will print   no value     when accessing a field that does not exist  It is highly recommended you set this   to  true   Also see   exit on retry failure  in global Vault Agent Template Config   vault docs agent and proxy agent template interaction between exit on retry failure and error on missing key      exec     object  optional     The exec block executes a command when the   template is rendered and the output has changed  The block parameters are    command    string or array  required   and  timeout    string  optional  defaults   to 
30s     command  can be given as a string or array of strings to execute  such as     touch myfile   or    touch    myfile     To protect against command injection  we   strongly recommend using an array of strings  and we attempt to parse that way first    Note also that using a comma with the string approach will cause it to be interpreted as an   array  which may not be desirable     perms     string         This is the permission to render the file  If   this option is left unspecified  Vault Agent will attempt to match the permissions   of the file that already exists at the destination path  If no file exists at that   path  the permissions are 0644     backup     bool  true     This option backs up the previously rendered template   at the destination path before writing a new one  It keeps exactly one backup    This option is useful for preventing accidental changes to the data without having   a rollback strategy     left delimiter    string         Delimiter to use in the template  The   default is      but for some templates  it may be easier to use a different   delimiter that does not conflict with the output file itself     sandbox path     string         If a sandbox path is provided  any path   provided to the  file  function is checked that it falls within the sandbox path    Relative paths that try to traverse outside the sandbox path will exit with an error     wait     object  required     This is the  minimum  maximum   to wait before rendering   a new template to disk and triggering a command  separated by a colon              Example  template  stanza     hcl template     source          tmp agent template ctmpl    destination     tmp agent render txt    error on missing key   true        If you only want to use the Vault Agent to render one or more templates and do not need to sink the acquired credentials  you can omit the  sink  stanza from the  auto auth  stanza in the Agent configuration      Renewals and updating secrets  The Vault 
Agent templating automatically renews and fetches secrets tokens  Unlike  Vault Agent caching   vault docs agent and proxy agent caching   the behavior of how Vault Agent templating does this depends on the type of secret or token  The following is a high level overview of different behaviors       Renewable secrets  If a secret or token is renewable  Vault Agent will renew the secret after 2 3 of the secret s lease duration has elapsed       Non Renewable secrets  If a secret or token isn t renewable or leased  Vault Agent will fetch the secret every 5 minutes  This can be configured using the  template config  stanza value  static secret render interval   vault docs agent and proxy agent template static secret render interval   requires Vault 1 8    Non renewable secrets include  but not limited to   KV Version 2   vault docs secrets kv kv v2        Non Renewable leased secrets  If a secret or token is non renewable but leased  Vault Agent will fetch the secret when 90  of the secrets time to live  TTL  is reached  plus or minus some jitter to ensure that many clients don t hit Vault simultaneously  Leased  non renewable secrets include  but are not limited to  dynamic secrets such as  database credentials   vault docs secrets databases   The 90  value is configurable using the  template config  stanza value  lease renewal threshold   vault docs agent and proxy agent template lease renewal threshold   While KVv1 secrets are not leased  this also controls the fraction at which Agent will re fetch  KV Version 1   vault docs secrets kv kv v1  secrets that have a defined  lease duration         Static roles  If a secret has a  rotation period   such as a  database static role   vault docs secrets databases static roles   Vault Agent template will fetch the new secret as it changes in Vault  It does this by inspecting the secret s time to live  TTL        Certificates  As of Vault 1 11  certificates can be rendered using either  pkiCert  or  secret  template functions 
 although it is recommended to use  pkiCert  to avoid unnecessarily generating certificates whenever Agent restarts or re authenticates        Rendering using the  pkiCert  template function  If a  certificate   vault docs secrets pki  is rendered using the  pkiCert  template function  Vault Agent template will have the following fetching and re rendering behaviors on certificates     Fetches a new certificate on Agent startup if none has been previously rendered or the current rendered one has expired    On Agent s auto auth re authentication  due to a token expiry for example  skip fetching unless the current rendered one has expired        Rendering using the  secret  template function  If a  certificate   vault docs secrets pki  is rendered using the  secret  template function  Vault Agent template will have the following fetching and re rendering behaviors on certificates     Fetches a new certificate on Agent startup  even if previously rendered   certificates are still valid    If  generate lease  is unset or set to  false   it uses the certificate s    validTo  field to determine re fetch interval    If  generate lease  is set to  true   apply the non renewable  leased secret   rules    On Agent s auto auth re authentication  due to a token expiry for example  it   fetches and re renders a new certificate even if the existing certificate is   valid      Templating configuration example  The following demonstrates Vault Agent Templates configuration blocks      hcl   Other Vault Agent configuration blocks        template config     static secret render interval    10m    exit on retry failure   true   max connections per host   20    template     source          tmp agent template ctmpl    destination     tmp agent render txt     template     contents            destination      tmp agent render content txt         And the following demonstrates how the templates look when using  env template  with  Process Supervisor Mode   vault docs agent and proxy agent 
process supervisor       hcl   Other Vault Agent configuration blocks        template config     static secret render interval    10m    exit on retry failure   true   max connections per host   20    env template  MY ENV VAR      contents         env template  ENV VAR FROM FILE      source     tmp agent template ctmpl          consul templating language   https   github com hashicorp consul template blob v0 28 1 docs templating language md  process supervisor    vault docs agent and proxy agent process supervisor"}
{"questions":"vault Generate a Vault Agent development configuration file Vault Agent in process supervisor mode page title Generate a development configuration file Use the Vault CLI to create a basic development configuration file to run layout docs","answers":"---\nlayout: docs\npage_title: Generate a development configuration file\ndescription: >-\n  Use the Vault CLI to create a basic development configuration file to run\n  Vault Agent in process supervisor mode.\n---\n\n# Generate a Vault Agent development configuration file\n\nUse the Vault CLI to create a basic development configuration file to run Vault\nAgent in process supervisor mode.\n\nDevelopment configuration files include an `auto_auth` section that reference a\ntoken file based on the Vault token used to authenticate the CLI command. Token\nfiles are convenient for local testing but **are not** appropriate for in\nproduction. **Always use a robust\n[auto-authentication method](\/vault\/docs\/agent-and-proxy\/autoauth\/methods) in\nproduction**.\n\n<Tip title=\"Assumptions\">\n\n- You have [set up a `kv` v2 plugin](\/vault\/docs\/secrets\/kv\/kv-v2\/setup).\n- Your authentication token has `read` permissions for the `kv` v2 plugin.\n\n<\/Tip>\n\nUse [`vault agent generate-config`](\/vault\/docs\/commands\/agent\/generate-config)\nto create a development configuration file with environment variable templates:\n\n```shell-session\n$ vault agent generate-config\n    -type \"env-template\"                                \\\n    -exec \"<path_to_child_process> <list_of_arguments>\" \\\n    -namespace \"<plugin_namespace>\"                     \\\n    -path \"<mount_path_to_kv_plugin_1>\"                 \\\n    -path \"<mount_path_to_kv_plugin_2>\"                 \\\n    ...\n    -path \"<mount_path_to_kv_plugin_N>\"                 \\\n    <config_file_name>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault agent generate-config             \\\n         
-type=\"env-template\"             \\\n         -exec=\".\/payment-app 'wf-test'\"  \\\n         -namespace=\"testing\"             \\\n         -path=\"shared\/dev\/*\"             \\\n         -path=\"private\/ci\/integration\"   \\\n         agent-config.hcl\n\nSuccessfully generated \"agent-config.hcl\" configuration file!\nWarning: the generated file uses 'token_file' authentication method, which is not suitable for production environments.\n```\n\n<\/CodeBlockConfig>\n\nThe configuration file includes `env_template` entries for each key stored at\nthe explicit paths and any key encountered while recursing through paths ending\nwith `\/*`. Template keys have the form `<final_path_segment>_<key_name>`.\n\nFor example:\n\n<CodeBlockConfig highlight=\"7,22,26,30,34,38,42\">\n\n```hcl\nauto_auth {\n\n  method {\n    type = \"token_file\"\n\n    config {\n      token_file_path = \"\/home\/<username>\/.vault-token\"\n    }\n  }\n}\n\ntemplate_config {\n  static_secret_render_interval = \"5m\"\n  exit_on_retry_failure         = true\n  max_connections_per_host      = 10\n}\n\nvault {\n  address = \"http:\/\/192.168.0.1:8200\"\n}\n\nenv_template \"SQUARE_API_PROD\" {\n  contents             = \"\"\n  error_on_missing_key = true\n}\nenv_template \"SQUARE_API_SANDBOX\" {\n  contents             = \"\"\n  error_on_missing_key = true\n}\nenv_template \"SQUARE_API_SMOKE\" {\n  contents             = \"\"\n  error_on_missing_key = true\n}\nenv_template \"SEEDS_SEED1\" {\n  contents             = \"\"\n  error_on_missing_key = true\n}\nenv_template \"SEEDS_SEED2\" {\n  contents             = \"\"\n  error_on_missing_key = true\n}\nenv_template \"DEV_POSTMAN\" {\n  contents             = \"\"\n  error_on_missing_key = true\n}\n\nexec {\n  command                   = [\".\/payment-app\", \"'wf-test'\"]\n  restart_on_secret_changes = \"always\"\n  restart_stop_signal       = \"SIGTERM\"\n}\n```\n\n<\/CodeBlockConfig>","site":"vault","answers_cleaned":"    layout  docs page 
title  Generate a development configuration file description       Use the Vault CLI to create a basic development configuration file to run   Vault Agent in process supervisor mode         Generate a Vault Agent development configuration file  Use the Vault CLI to create a basic development configuration file to run Vault Agent in process supervisor mode   Development configuration files include an  auto auth  section that reference a token file based on the Vault token used to authenticate the CLI command  Token files are convenient for local testing but   are not   appropriate for in production    Always use a robust  auto authentication method   vault docs agent and proxy autoauth methods  in production      Tip title  Assumptions      You have  set up a  kv  v2 plugin   vault docs secrets kv kv v2 setup     Your authentication token has  read  permissions for the  kv  v2 plugin     Tip   Use   vault agent generate config    vault docs commands agent generate config  to create a development configuration file with environment variable templates      shell session   vault agent generate config      type  env template                                        exec   path to child process   list of arguments          namespace   plugin namespace                              path   mount path to kv plugin 1                          path   mount path to kv plugin 2                                  path   mount path to kv plugin N                          config file name       For example    CodeBlockConfig hideClipboard      shell session   vault agent generate config                         type  env template                          exec    payment app  wf test                namespace  testing                          path  shared dev                            path  private ci integration               agent config hcl  Successfully generated  agent config hcl  configuration file  Warning  the generated file uses  token file  authentication method  which is not 
suitable for production environments         CodeBlockConfig   The configuration file includes  env template  entries for each key stored at the explicit paths and any key encountered while recursing through paths ending with       Template keys have the form   final path segment   key name     For example    CodeBlockConfig highlight  7 22 26 30 34 38 42       hcl auto auth      method       type    token file       config         token file path     home  username   vault token               template config     static secret render interval    5m    exit on retry failure           true   max connections per host        10    vault     address    http   192 168 0 1 8200     env template  SQUARE API PROD      contents                    error on missing key   true   env template  SQUARE API SANDBOX      contents                    error on missing key   true   env template  SQUARE API SMOKE      contents                    error on missing key   true   env template  SEEDS SEED1      contents                    error on missing key   true   env template  SEEDS SEED2      contents                    error on missing key   true   env template  DEV POSTMAN      contents                    error on missing key   true    exec     command                         payment app     wf test      restart on secret changes    always    restart stop signal          SIGTERM           CodeBlockConfig "}
{"questions":"vault created tokens or leased secrets generated from a newly created token Vault Agent caching overview Use client side caching with Vault Agent for responses with newly layout docs page title Vault Agent caching overview","answers":"---\nlayout: docs\npage_title: Vault Agent caching overview\ndescription: >-\n  Use client-side caching with Vault Agent for responses with newly\n  created tokens or leased secrets generated from a newly created token.\n---\n\n# Vault Agent caching overview\n\n<Note title=\"Use Vault Proxy for static secret caching\">\n\n  [Static secret caching](\/vault\/docs\/agent-and-proxy\/proxy\/caching\/static-secret-caching)\n  (KVv1 and KVv2) with API proxy minimizes the number of requests forwarded to\n  Vault. Vault Agent does not support static secret caching with API proxy. We\n  recommend using [Vault Proxy](\/vault\/docs\/agent-and-proxy\/proxy) for API Proxy\n  related workflows.\n\n<\/Note>\n\nVault Agent Caching allows client-side caching of responses containing newly\ncreated tokens and responses containing leased secrets generated off of these\nnewly created tokens. The renewals of the cached tokens and leases are also\nmanaged by the agent.\n\n## Caching and renewals\n\nResponse caching and renewals are managed by the agent only under these\nspecific scenarios.\n\n1. Token creation requests are made through the agent. This means that any\n   login operations performed using various auth methods and invoking the token\n   creation endpoints of the token auth method via the agent will result in the\n   response getting cached by the agent. Responses containing new tokens will\n   be cached by the agent only if the parent token is already being managed by\n   the agent or if the new token is an orphan token.\n\n2. Leased secret creation requests are made through the agent using tokens that\n   are already managed by the agent. 
This means that any dynamic credentials\n   that are issued using the tokens managed by the agent, will be cached and\n   its renewals are taken care of.\n\n## Persistent cache\n\nVault Agent can restore tokens and leases from a persistent cache file created\nby a previous Vault Agent process.\n\nRefer to the [Vault Agent Persistent\nCaching](\/vault\/docs\/agent-and-proxy\/agent\/caching\/persistent-caches) page for more information on\nthis functionality.\n\n## Cache evictions\n\nThe eviction of cache entries pertaining to secrets will occur when the agent\ncan no longer renew them. This can happen when the secrets hit their maximum\nTTL or if the renewals result in errors.\n\nAgent does some best-effort cache evictions by observing specific request types\nand response codes. For example, if a token revocation request is made via the\nagent and if the forwarded request to the Vault server succeeds, then agent\nevicts all the cache entries associated with the revoked token. Similarly, any\nlease revocation operation will also be intercepted by the agent and the\nrespective cache entries will be evicted.\n\nNote that while agent evicts the cache entries upon secret expirations and upon\nintercepting revocation requests, it is still possible for the agent to be\ncompletely unaware of the revocations that happen through direct client\ninteractions with the Vault server. This could potentially lead to stale cache\nentries. For managing the stale entries in the cache, an endpoint\n`\/agent\/v1\/cache-clear`(see below) is made available to manually evict cache\nentries based on some of the query criteria used for indexing the cache entries.\n\n## Request uniqueness\n\nIn order to detect repeat requests and return cached responses, Agent needs\nto have a way to uniquely identify the requests. This computation as it stands\ntoday takes a simplistic approach (may change in future) of serializing and\nhashing the HTTP request along with all the headers and the request body. 
This\nhash value is then used as an index into the cache to check if the response is\nreadily available. The consequence of this approach is that the hash value for\nany request will differ if any data in the request is modified. This has the\nside-effect of resulting in false negatives if say, the ordering of the request\nparameters are modified. As long as the requests come in without any change,\ncaching behavior should be consistent. Identical requests with differently\nordered request values will result in duplicated cache entries. A heuristic\nassumption that the clients will use consistent mechanisms to make requests,\nthereby resulting in consistent hash values per request is the idea upon which\nthe caching functionality is built upon.\n\n## Renewal management\n\nThe tokens and leases are renewed by the agent using the secret renewer that is\nmade available via the Vault server's [Go\nAPI](https:\/\/godoc.org\/github.com\/hashicorp\/vault\/api#Renewer). Agent performs\nall operations in memory and does not persist anything to storage. This means\nthat when the agent is shut down, all the renewal operations are immediately\nterminated and there is no way for agent to resume renewals after the fact.\nNote that shutting down the agent does not indicate revocations of the secrets,\ninstead it only means that renewal responsibility for all the valid unrevoked\nsecrets are no longer performed by the Vault agent.\n\n### Agent CLI\n\nAgent's listener address will be picked up by the CLI through the\n`VAULT_AGENT_ADDR` environment variable. This should be a complete URL such as\n`\"http:\/\/127.0.0.1:8200\"`.\n\n## API\n\n### Cache clear\n\nThis endpoint clears the cache based on given criteria. To use this\nAPI, some information on how the agent caches values should be known\nbeforehand. Each response that is cached in the agent will be indexed on some\nfactors depending on the type of request. 
Those factors can be the `token` that\nis belonging to the cached response, the `token_accessor` of the token\nbelonging to the cached response, the `request_path` that resulted in the\ncached response, the `lease` that is attached to the cached response, the\n`namespace` to which the cached response belongs to, and a few more. This API\nexposes some factors through which associated cache entries are fetched and\nevicted. For listeners without caching enabled, this API will still be available,\nbut will do nothing (there is no cache to clear) and will return a `200` response.\n\n| Method | Path                    | Produces               |\n| :----- | :---------------------- | :--------------------- |\n| `POST` | `\/agent\/v1\/cache-clear` | `200 application\/json` |\n\n#### Parameters\n\n- `type` `(strings: required)` - The type of cache entries to evict. Valid\n  values are `request_path`, `lease`, `token`, `token_accessor`, and `all`.\n  If the `type` is set to `all`, the _entire cache_ is cleared.\n\n- `value` `(string: required)` - An exact value or the prefix of the value for\n  the `type` selected. This parameter is optional when the `type` is set\n  to `all`.\n\n- `namespace` `(string: optional)` - This is only applicable when the `type` is set to\n  `request_path`. 
The namespace of which the cache entries to be evicted for\n  the given request path.\n\n### Sample payload\n\n```json\n{\n  \"type\": \"token\",\n  \"value\": \"hvs.rlNjegSKykWcplOkwsjd8bP9\"\n}\n```\n\n### Sample request\n\n```shell-session\n$ curl \\\n    --request POST \\\n    --data @payload.json \\\n    http:\/\/127.0.0.1:1234\/agent\/v1\/cache-clear\n```\n\n## Configuration (`cache`)\n\nThe presence of the top level `cache` block in any way (including an empty `cache` block)  will enable the cache.\nThe top level `cache` block has the following configuration entry:\n\n- `persist` `(object: optional)` - Configuration for the persistent cache.\n\nThe `cache` block also supports the `use_auto_auth_token`, `enforce_consistency`, and\n`when_inconsistent` configuration values of the `api_proxy` block\n[described in the API Proxy documentation](\/vault\/docs\/agent-and-proxy\/agent\/apiproxy#configuration-api_proxy) only to\nmaintain backwards compatibility. This configuration **cannot** be specified alongside `api_proxy` equivalents,\nshould not be preferred over configuring these values in the `api_proxy` block,\nand `api_proxy` should be the preferred place to configure these values.\n\n-> **Note:** When the `cache` block is defined, at least one\n[template][agent-template] or [listener][agent-listener] must also be defined\nin the config, otherwise there is no way to utilize the cache.\n\n[agent-template]: \/vault\/docs\/agent-and-proxy\/agent\/template\n[agent-listener]: \/vault\/docs\/agent-and-proxy\/agent#listener-stanza\n\n### Configuration (Persist)\n\nThese are common configuration values that live within the `persist` block:\n\n- `type` `(string: required)` - The type of the persistent cache to use,\n  e.g. `kubernetes`. _Note_: when using HCL this can be used as the key for\n  the block, e.g. `persist \"kubernetes\" {...}`. 
Currently, only `kubernetes`\n  is supported.\n\n- `path` `(string: required)` - The path on disk where the persistent cache file\n  should be created or restored from.\n\n- `keep_after_import` `(bool: optional)` - When set to true, a restored cache file\n  is not deleted. Defaults to `false`.\n\n- `exit_on_err` `(bool: optional)` - When set to true, if any errors occur during\n  a persistent cache restore, Vault Agent will exit with an error. Defaults to `true`.\n\n- `service_account_token_file` `(string: optional)` - When `type` is set to `kubernetes`,\nthis configures the path on disk where the Kubernetes service account token can be found.\nDefaults to `\/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token`.\n\n## Configuration (`listener`)\n\n- `listener` `(array of objects: required)` - Configuration for the listeners.\n\nThere can be one or more `listener` blocks at the top level. Adding a listener enables\nthe [API Proxy](\/vault\/docs\/agent-and-proxy\/agent\/apiproxy) and enables the API proxy to use the cache, if configured.\nThese configuration values are common to both `tcp` and `unix` listener blocks. Blocks of type\n`tcp` support the standard `tcp` [listener](\/vault\/docs\/configuration\/listener\/tcp)\noptions. Additionally, the `role` string option is available as part of the top level\nof the `listener` block, which can be configured to `metrics_only` to serve only metrics,\nor the default role, `default`, which serves everything (including metrics).\n\n- `type` `(string: required)` - The type of the listener to use. Valid values\n  are `tcp` and `unix`.\n  _Note_: when using HCL this can be used as the key for the block, e.g.\n  `listener \"tcp\" {...}`.\n\n- `address` `(string: required)` - The address for the listener to listen to.\n  This can either be a URL path when using `tcp` or a file path when using\n  `unix`. For example, `127.0.0.1:8200` or `\/path\/to\/socket`. 
Defaults to\n  `127.0.0.1:8200`.\n\n- `tls_disable` `(bool: false)` - Specifies if TLS will be disabled.\n\n- `tls_key_file` `(string: optional)` - Specifies the path to the private key\n  for the certificate.\n\n- `tls_cert_file` `(string: optional)` - Specifies the path to the certificate\n  for TLS.\n\n### Example configuration\n\nHere is an example of a cache configuration with the optional `persist` block,\nalongside a regular listener, and a listener that only serves metrics.\n\n```hcl\n# Other Vault agent configuration blocks\n# ...\n\ncache {\n\tpersist = {\n\t\ttype = \"kubernetes\"\n\t\tpath = \"\/vault\/agent-cache\/\"\n\t\tkeep_after_import = true\n\t\texit_on_err = true\n\t\tservice_account_token_file = \"\/tmp\/serviceaccount\/token\"\n\t}\n}\n\nlistener \"tcp\" {\n    address = \"127.0.0.1:8100\"\n    tls_disable = true\n}\n\nlistener \"tcp\" {\n    address = \"127.0.0.1:3000\"\n    tls_disable = true\n    role = \"metrics_only\"\n}\n```\n\n## Tutorial\n\nRefer to the [Vault Agent\nCaching](\/vault\/tutorials\/vault-agent\/agent-caching)\ntutorial to learn how to use the Vault Agent to increase the availability of tokens and secrets to clients using its Caching function.","site":"vault","answers_cleaned":"    layout  docs page title  Vault Agent caching overview description       Use client side caching with Vault Agent for responses with newly   created tokens or leased secrets generated from a newly created token         Vault Agent caching overview   Note title  Use Vault Proxy for static secret caching       Static secret caching   vault docs agent and proxy proxy caching static secret caching     KVv1 and KVv2  with API proxy minimizes the number of requests forwarded to   Vault  Vault Agent does not support static secret caching with API proxy  We   recommend using  Vault Proxy   vault docs agent and proxy proxy  for API Proxy   related workflows     Note   Vault Agent Caching allows client side caching of responses containing newly created 
tokens and responses containing leased secrets generated off of these newly created tokens  The renewals of the cached tokens and leases are also managed by the agent      Caching and renewals  Response caching and renewals are managed by the agent only under these specific scenarios   1  Token creation requests are made through the agent  This means that any    login operations performed using various auth methods and invoking the token    creation endpoints of the token auth method via the agent will result in the    response getting cached by the agent  Responses containing new tokens will    be cached by the agent only if the parent token is already being managed by    the agent or if the new token is an orphan token   2  Leased secret creation requests are made through the agent using tokens that    are already managed by the agent  This means that any dynamic credentials    that are issued using the tokens managed by the agent  will be cached and    its renewals are taken care of      Persistent cache  Vault Agent can restore tokens and leases from a persistent cache file created by a previous Vault Agent process   Refer to the  Vault Agent Persistent Caching   vault docs agent and proxy agent caching persistent caches  page for more information on this functionality      Cache evictions  The eviction of cache entries pertaining to secrets will occur when the agent can no longer renew them  This can happen when the secrets hit their maximum TTL or if the renewals result in errors   Agent does some best effort cache evictions by observing specific request types and response codes  For example  if a token revocation request is made via the agent and if the forwarded request to the Vault server succeeds  then agent evicts all the cache entries associated with the revoked token  Similarly  any lease revocation operation will also be intercepted by the agent and the respective cache entries will be evicted   Note that while agent evicts the cache entries upon 
secret expirations and upon intercepting revocation requests  it is still possible for the agent to be completely unaware of the revocations that happen through direct client interactions with the Vault server  This could potentially lead to stale cache entries  For managing the stale entries in the cache  an endpoint   agent v1 cache clear  see below  is made available to manually evict cache entries based on some of the query criteria used for indexing the cache entries      Request uniqueness  In order to detect repeat requests and return cached responses  Agent needs to have a way to uniquely identify the requests  This computation as it stands today takes a simplistic approach  may change in future  of serializing and hashing the HTTP request along with all the headers and the request body  This hash value is then used as an index into the cache to check if the response is readily available  The consequence of this approach is that the hash value for any request will differ if any data in the request is modified  This has the side effect of resulting in false negatives if say  the ordering of the request parameters are modified  As long as the requests come in without any change  caching behavior should be consistent  Identical requests with differently ordered request values will result in duplicated cache entries  A heuristic assumption that the clients will use consistent mechanisms to make requests  thereby resulting in consistent hash values per request is the idea upon which the caching functionality is built upon      Renewal management  The tokens and leases are renewed by the agent using the secret renewer that is made available via the Vault server s  Go API  https   godoc org github com hashicorp vault api Renewer   Agent performs all operations in memory and does not persist anything to storage  This means that when the agent is shut down  all the renewal operations are immediately terminated and there is no way for agent to resume renewals after 
the fact  Note that shutting down the agent does not indicate revocations of the secrets  instead it only means that renewal responsibility for all the valid unrevoked secrets are no longer performed by the Vault agent       Agent CLI  Agent s listener address will be picked up by the CLI through the  VAULT AGENT ADDR  environment variable  This should be a complete URL such as   http   127 0 0 1 8200        API      Cache clear  This endpoint clears the cache based on given criteria  To use this API  some information on how the agent caches values should be known beforehand  Each response that is cached in the agent will be indexed on some factors depending on the type of request  Those factors can be the  token  that is belonging to the cached response  the  token accessor  of the token belonging to the cached response  the  request path  that resulted in the cached response  the  lease  that is attached to the cached response  the  namespace  to which the cached response belongs to  and a few more  This API exposes some factors through which associated cache entries are fetched and evicted  For listeners without caching enabled  this API will still be available  but will do nothing  there is no cache to clear  and will return a  200  response     Method   Path                      Produces                                                                                  POST      agent v1 cache clear     200 application json          Parameters     type    strings  required     The type of cache entries to evict  Valid   values are  request path    lease    token    token accessor   and  all     If the  type  is set to  all   the  entire cache  is cleared      value    string  required     An exact value or the prefix of the value for   the  type  selected  This parameter is optional when the  type  is set   to  all       namespace    string  optional     This is only applicable when the  type  is set to    request path   The namespace of which the cache entries 
to be evicted for   the given request path       Sample payload     json      type    token      value    hvs rlNjegSKykWcplOkwsjd8bP9             Sample request     shell session   curl         request POST         data  payload json       http   127 0 0 1 1234 agent v1 cache clear         Configuration   cache    The presence of the top level  cache  block in any way  including an empty  cache  block   will enable the cache  The top level  cache  block has the following configuration entry      persist    object  optional     Configuration for the persistent cache   The  cache  block also supports the  use auto auth token    enforce consistency   and  when inconsistent  configuration values of the  api proxy  block  described in the API Proxy documentation   vault docs agent and proxy agent apiproxy configuration api proxy  only to maintain backwards compatibility  This configuration   cannot   be specified alongside  api proxy  equivalents  should not be preferred over configuring these values in the  api proxy  block  and  api proxy  should be the preferred place to configure these values        Note    When the  cache  block is defined  at least one  template  agent template  or  listener  agent listener  must also be defined in the config  otherwise there is no way to utilize the cache    agent template    vault docs agent and proxy agent template  agent listener    vault docs agent and proxy agent listener stanza      Configuration  Persist   These are common configuration values that live within the  persist  block      type    string  required     The type of the persistent cache to use    e g   kubernetes    Note   when using HCL this can be used as the key for   the block  e g   persist  kubernetes          Currently  only  kubernetes    is supported      path    string  required     The path on disk where the persistent cache file   should be created or restored from      keep after import    bool  optional     When set to true  a restored cache file   
is not deleted  Defaults to  false       exit on err    bool  optional     When set to true  if any errors occur during   a persistent cache restore  Vault Agent will exit with an error  Defaults to  true       service account token file    string  optional     When  type  is set to  kubernetes   this configures the path on disk where the Kubernetes service account token can be found  Defaults to   var run secrets kubernetes io serviceaccount token       Configuration   listener       listener    array of objects  required     Configuration for the listeners   There can be one or more  listener  blocks at the top level  Adding a listener enables the  API Proxy   vault docs agent and proxy agent apiproxy  and enables the API proxy to use the cache  if configured  These configuration values are common to both  tcp  and  unix  listener blocks  Blocks of type  tcp  support the standard  tcp   listener   vault docs configuration listener tcp  options  Additionally  the  role  string option is available as part of the top level of the  listener  block  which can be configured to  metrics only  to serve only metrics  or the default role   default   which serves everything  including metrics       type    string  required     The type of the listener to use  Valid values   are  tcp  and  unix      Note   when using HCL this can be used as the key for the block  e g     listener  tcp              address    string  required     The address for the listener to listen to    This can either be a URL path when using  tcp  or a file path when using    unix   For example   127 0 0 1 8200  or   path to socket   Defaults to    127 0 0 1 8200       tls disable    bool  false     Specifies if TLS will be disabled      tls key file    string  optional     Specifies the path to the private key   for the certificate      tls cert file    string  optional     Specifies the path to the certificate   for TLS       Example configuration  Here is an example of a cache configuration with the 
optional  persist  block  alongside a regular listener  and a listener that only serves metrics      hcl   Other Vault agent configuration blocks        cache    persist       type    kubernetes    path     vault agent cache     keep after import   true   exit on err   true   service account token file     tmp serviceaccount token        listener  tcp        address    127 0 0 1 8100      tls disable   true    listener  tcp        address    127 0 0 1 3000      tls disable   true     role    metrics only            Tutorial  Refer to the  Vault Agent Caching   vault tutorials vault agent agent caching  tutorial to learn how to use the Vault Agent to increase the availability of tokens and secrets to clients using its Caching function "}
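The "Request uniqueness" section above explains that Vault Agent indexes cached responses by hashing the fully serialized request (method, path, headers, and body), so a byte-identical repeat hits the cache while logically identical requests with reordered body fields create duplicate entries. A toy sketch of that behavior (illustrative only; `request_index` is a hypothetical helper, not Vault's actual hashing code):

```python
import hashlib
import json

def request_index(method: str, path: str, headers: dict, body: str) -> str:
    # Hypothetical stand-in for the agent's cache index: hash the whole
    # serialized request, so ANY byte difference yields a different key.
    payload = "\n".join([method, path, json.dumps(headers, sort_keys=True), body])
    return hashlib.sha256(payload.encode()).hexdigest()

same = request_index("PUT", "/v1/auth/approle/login", {}, '{"role_id":"r1","secret_id":"s1"}')
repeat = request_index("PUT", "/v1/auth/approle/login", {}, '{"role_id":"r1","secret_id":"s1"}')
reordered = request_index("PUT", "/v1/auth/approle/login", {}, '{"secret_id":"s1","role_id":"r1"}')

print(same == repeat)     # True: a byte-identical repeat hits the cache
print(same == reordered)  # False: reordered body fields miss and duplicate the entry
```

This is why the docs recommend that clients issue requests through a consistent mechanism: consistent serialization keeps the hash stable per logical request.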
{"questions":"vault page title Regenerate a Vault root token layout docs Your Vault root token is a special token that gives you access to all Vault Regenerate a lost or revoked root token Regenerate a Vault root token","answers":"---\nlayout: docs\npage_title: Regenerate a Vault root token\ndescription: >-\n  Regenerate a lost or revoked root token.\n---\n\n# Regenerate a Vault root token\n\nYour Vault root token is a special token that gives you access to **all** Vault\noperations. Best practice is to enable an appropriate authentication method for\nVault admins once the server is running and revoke the root token.\n\nFor emergency situations where your require a root token, you can use the\n[`operator generate-root`](\/vault\/docs\/commands\/operator\/generate-root) CLI\ncommand and a one-time password (OTP) or Pretty Good Privacy (PGP) to generate\na new root token.\n\n## Before you start\n\n- **You need your Vault keys**. If you use auto-unseal, you need your\n  [recovery](\/vault\/docs\/concepts\/seal#recovery-key) keys, otherwise you need\n  your unseal keys.\n- **Identify current key holders**. You must distribute the token nonce to your\n  unseal\/recovery key holders during root token generation.\n\n## Step 1: Create a root token nonce\n\n1. 
Generate a token nonce for your new root token:\n\n   <Tabs>\n   <Tab heading=\"OTP\" group=\"otp\">\n\n   **You need the returned OTP value to decode the new root token**.\n\n   ```shell-session\n   $ vault operator generate-root -init\n\n   A One-Time-Password has been generated for you and is shown in the OTP field.\n   You will need this value to decode the resulting root token, so keep it safe.\n   Nonce         15565c79-cc9e-5e64-b986-8506e7bd1918\n   Started       true\n   Progress      0\/1\n   Complete      false\n   OTP           5JFQaH76Ky2TIuSt4SPvO1CGkx\n   OTP Length    26\n   ```\n\n   <\/Tab>\n   <Tab heading=\"PGP\" group=\"pgp\">\n\n   Use the `-pgp-key` option to provide a path to your PGP public key or Keybase\n   username to encrypt the new root token. **You will need the returned PGP\n   value to decode the new root token**.\n\n   ```shell-session\n   $ vault operator generate-root -init -pgp-key=keybase:sethvargo\n\n   Nonce              e24dec5e-f1ea-2dfe-ecce-604022006976\n   Started            true\n   Progress           0\/5\n   Complete           false\n   PGP Fingerprint    e2f8e2974623ba2a0e933a59c921994f9c27e0ff\n   ```\n\n   <\/Tab>\n   <\/Tabs>\n\n1. Distribute the nonce to each of your unseal\/recovery key holders.\n\n## Step 2: Establish key quorum with the token nonce\n\n<Highlight title=\"Use TTY to autocomplete the nonce\">\n\n  If you use a TTY, the `operator generate-root` command prompts for your key\n  and automatically completes the nonce value.\n\n<\/Highlight>\n\n1. Have each unseal\/recovery key holder run `operator generate-root` with their\n   key and the distributed nonce value:\n\n   ```shell-session\n   $ echo ${UNSEAL_OR_RECOVERY_KEY} | vault operator generate-root -nonce=${NONCE_VALUE} -\n\n   Root generation operation nonce: f67f4da3-4ae4-68fb-4716-91da6b609c3e\n   Unseal Key (will be hidden):\n   ```\n\n1.
Vault returns the new, encoded root token to the user who triggers quorum:\n\n   <Tabs>\n   <Tab heading=\"OTP\" group=\"otp\">\n\n   ```shell-session\n   Nonce            f67f4da3-4ae4-68fb-4716-91da6b609c3e\n   Started          true\n   Progress         5\/5\n   Complete         true\n   Encoded Token    IxJpyqxn3YafOGhqhvP6cQ==\n   ```\n\n   <\/Tab>\n\n   <Tab heading=\"PGP\" group=\"pgp\">\n\n   ```shell-session\n   Nonce                 e24dec5e-f1ea-2dfe-ecce-604022006976\n   Started               true\n   Progress              1\/1\n   Complete              true\n   PGP Fingerprint       e2f8e2974623ba2a0e933a59c921994f9c27e0ff\n   Encoded Token         wcFMA0RVkFtoqzRlARAAI3Ux8kdSpfgXdF9mg...\n   ```\n\n   <\/Tab>\n   <\/Tabs>\n\n## Step 3: Decode the new root token\n\nDecode the new root token using OTP or PGP.\n\n<Tabs>\n<Tab heading=\"OTP\" group=\"otp\">\n\nUse `operator generate-root` and the OTP value from nonce generation to decode\nthe new root token:\n\n```shell-session\n$ vault operator generate-root \\\n   -decode=${ENCODED_TOKEN}    \\\n   -otp=${NONCE_OTP}\n\nhvs.XXXXXXXXXXXXXXXXXXXXXXXX\n```\n\n<\/Tab>\n\n<Tab heading=\"PGP\" group=\"pgp\">\n\nUse your PGP credentials and `gpg` or `keybase` to decrypt the new root token.\n\n\n**`gpg`**:\n\n```shell-session\n$ echo ${ENCODED_TOKEN} | base64 --decode | gpg --decrypt\n\nhvs.XXXXXXXXXXXXXXXXXXXXXXXX\n```\n\n**`keybase`**:\n\n```shell-session\n$ echo ${ENCODED_TOKEN} | base64 --decode | keybase pgp decrypt\n\nhvs.XXXXXXXXXXXXXXXXXXXXXXXX\n```\n\n<\/Tab>\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Regenerate a Vault root token description       Regenerate a lost or revoked root token         Regenerate a Vault root token  Your Vault root token is a special token that gives you access to   all   Vault operations  Best practice is to enable an appropriate authentication method for Vault admins once the server is running and revoke the root token   For emergency situations 
where your require a root token  you can use the   operator generate root    vault docs commands operator generate root  CLI command and a one time password  OTP  or Pretty Good Privacy  PGP  to generate a new root token      Before you start      You need your Vault keys    If you use auto unseal  you need your    recovery   vault docs concepts seal recovery key  keys  otherwise you need   your unseal keys      Identify current key holders    You must distribute the token nonce to your   unseal recovery key holders during root token generation      Step 1  Create a root token nonce  1  Generate a token nonce for your new root token       Tabs      Tab heading  OTP  group  otp         You need the returned OTP value to decode the new root token           shell session      vault operator generate root  init     A One Time Password has been generated for you and is shown in the OTP field     You will need this value to decode the resulting root token  so keep it safe     Nonce         15565c79 cc9e 5e64 b986 8506e7bd1918    Started       true    Progress      0 1    Complete      false    OTP           5JFQaH76Ky2TIuSt4SPvO1CGkx    OTP Length    26              Tab      Tab heading  PGP  group  pgp       Use the   pgp key  option to provide a path to your PGP public key or Keybase    username to encrypt the new root token    You will need the returned PGP    value to decode the new root token           shell session      vault operator generate root  init  pgp key keybase sethvargo     Nonce              e24dec5e f1ea 2dfe ecce 604022006976    Started            true    Progress           0 5    Complete           false    PGP Fingerprint    e2f8e2974623ba2a0e933a59c921994f9c27e0ff              Tab       Tabs   1  Distribute the nonce to each of your unseal recovery key holders      Step 2  Establish key quorum with the token nonce   Highlight title  Use TTY to autocomplete the nonce      If you use a TTY  the  operator generate root  command prompts for your key   
and automatically completes the nonce value     Highlight   1  Have each unseal recovery key holder run  operator generator root  with their    key and the distributed nonce value         shell session      echo   UNSEAL OR RECOVERY KEY    vault operator generate root  nonce   NONCE VALUE        Root generation operation nonce  f67f4da3 4ae4 68fb 4716 91da6b609c3e    Unseal Key  will be hidden           1  Vault returns the new  encoded root token to the user who triggers quorum       Tabs      Tab heading  OTP  group  otp          shell session    Nonce            f67f4da3 4ae4 68fb 4716 91da6b609c3e    Started          true    Progress         5 5    Complete         true    Encoded Token    IxJpyqxn3YafOGhqhvP6cQ                Tab       Tab heading  PGP  group  pgp          shell session    Nonce                 e24dec5e f1ea 2dfe ecce 604022006976    Started               true    Progress              1 1    Complete              true    PGP Fingerprint       e2f8e2974623ba2a0e933a59c921994f9c27e0ff    Encoded Token         wcFMA0RVkFtoqzRlARAAI3Ux8kdSpfgXdF9mg                 Tab       Tabs      Step 3  Decode the new root token  Decode the new root token using OTP or PGP    Tabs   Tab heading  OTP  group  otp    Use  operator generate root  and the OTP value from nonce generation to decode the new root token      shell session   vault operator generate root       decode   ENCODED TOKEN           otp   NONCE OTP   hvs XXXXXXXXXXXXXXXXXXXXXXXX        Tab    Tab heading  PGP  group  pgp    Use your PGP credentials and  gpg  or  keybase  to decrypt the new root token       gpg         shell session   echo   ENCODED TOKEN    base64   decode   gpg   decrypt  hvs XXXXXXXXXXXXXXXXXXXXXXXX         keybase         shell session   echo   ENCODED TOKEN    base64   decode   keybase pgp decrypt  hvs XXXXXXXXXXXXXXXXXXXXXXXX        Tab    Tabs "}
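Conceptually, the encoded token returned at quorum is the new root token masked against the OTP (one-time-pad style) and base64-encoded, which is what the `-decode` step reverses. A rough round-trip sketch of that idea, assuming a simple XOR mask (`otp_encode`/`otp_decode` are illustrative helpers, not Vault's exact encoding):

```python
import base64
from itertools import cycle

def otp_encode(token: str, otp: str) -> str:
    # XOR token bytes against the OTP bytes (one-time-pad style), then base64.
    masked = bytes(t ^ o for t, o in zip(token.encode(), cycle(otp.encode())))
    return base64.b64encode(masked).decode()

def otp_decode(encoded: str, otp: str) -> str:
    # Reverse: base64-decode, then XOR with the same OTP to recover the token.
    masked = base64.b64decode(encoded)
    return bytes(m ^ o for m, o in zip(masked, cycle(otp.encode()))).decode()

otp = "5JFQaH76Ky2TIuSt4SPvO1CGkx"       # example OTP value from Step 1
token = "hvs.XXXXXXXXXXXXXXXXXXXXXXXX"   # placeholder root token
encoded = otp_encode(token, otp)
print(otp_decode(encoded, otp) == token)  # True: only the OTP holder can decode
```

This is why the OTP (or PGP key) from Step 1 must be kept safe: the encoded token is useless without it.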
{"questions":"vault Understand the behavior of time to live set on leases The benefit of using Vault s dynamic secrets engines and auth methods is the Tune the lease time to live TTL layout docs page title Tune the lease TTL","answers":"---\nlayout: docs\npage_title: Tune the lease TTL\ndescription: >-\n  Understand the behavior of time-to-live set on leases. \n---\n\n# Tune the lease time-to-live (TTL)\n\nThe benefit of using Vault's dynamic secrets engines and auth methods is the\nability to control how long the Vault-managed credentials (leases) remain valid.\nOften times, you generate short-lived credentials or tokens to reduce the risk\nof unauthorized attacks caused by leaked credentials or tokens. If you do not\nexplicitly specify the time-to-live (TTL), Vault generates leases with TTL of 32\ndays by default.\n\nFor example, you enabled AppRole auth method at `approle`, and create a role\nnamed `read-only` with max lease TTL of **120 days**. \n\n```shell-session \n$ vault write auth\/approle\/role\/read-only token_policies=\"read-only\" \\\n    token_ttl=90d token_max_ttl=120d\n```\n\nThe command returns a warning about the TTL exceeding the mount's max TTL value.\n\n<CodeBlockConfig hideClipboard>\n\n```plaintext\nWARNING! The following warnings were returned from Vault:\n\n  * token_max_ttl is greater than the backend mount's maximum TTL value;\n  issued tokens' max TTL value will be truncated\n```\n\n<\/CodeBlockConfig>\n\n\nTherefore, it will return a client token with TTL of 768 hours (32 days) instead\nof 120 days.\n\n<CodeBlockConfig highlight=\"12\" hideClipboard>\n\n```shell-session\n$ vault write auth\/approle\/login role_id=<ROLE_ID> secret_id=<SECRET_ID>\n\nWARNING! 
The following warnings were returned from Vault:\n\n  * TTL of \"2880h\" exceeded the effective max_ttl of \"768h\"; TTL value is\n  capped accordingly\n\nKey                     Value\n---                     -----\ntoken                   hvs.CAESIJeVezY3UObHXTvzpI722q0MmaARB1692fT-MmdzcryvGh4KHGh2cy43czViYXVZS3FnSzltWmdVZ3Q0MmFTdkc\ntoken_accessor          wXTOvz5xxBi2vvUpTBhemUXr\ntoken_duration          768h\ntoken_renewable         true\ntoken_policies          [\"default\" \"read-only\"]\nidentity_policies       []\npolicies                [\"default\" \"read-only\"]\ntoken_meta_role_name    read-only\n```\n\n<\/CodeBlockConfig>\n\n## Max lease TTL on an auth mount\n\nYou cannot set the TTL for a role to go beyond the max lease TTL set on the\nAppRole auth mount (`approle` in this example). The default lease TTL and max\nlease TTL are 32 days (768 hours).\n\n```shell-session\n$ vault read sys\/auth\/approle\/tune\n```\n\n**Output:**\n\n<CodeBlockConfig highlight=\"3,6\" hideClipboard>\n\n```plaintext\nKey                  Value\n---                  -----\ndefault_lease_ttl    768h\ndescription          n\/a\nforce_no_cache       false\nmax_lease_ttl        768h\ntoken_type           default-service\n```\n\n<\/CodeBlockConfig>\n\nIf the desired max lease TTL is 120 days (2880 hours), update the max lease TTL\non the mount. \n\n```shell-session\n$ vault auth tune -max-lease-ttl=120d approle\n```\n\nThe following command lists all available parameters that you can tune.\n\n```shell-session\n$ vault auth tune -h\n```\n\nNow, the AppRole will generate a lease with token duration of 120 days (2880 hours). 
\n\n<CodeBlockConfig highlight=\"7\" hideClipboard>\n\n```shell-session\n$ vault write auth\/approle\/login role_id=<ROLE_ID> secret_id=<SECRET_ID>\n\nKey                     Value\n---                     -----\ntoken                   hvs.CAESIOzTpLX4naKw-epzhcb3DneZ9ZuRTx4tKh5mTT1CajLQGh4KHGh2cy5TUFFhY3QzVzdmSTFwQUduOWlrMVRWaUE\ntoken_accessor          blc2MGA4EmmqEROzqlotFbqr\ntoken_duration          2880h\ntoken_renewable         true\ntoken_policies          [\"default\" \"jenkins\"]\nidentity_policies       []\npolicies                [\"default\" \"jenkins\"]\ntoken_meta_role_name    jenkins\n```\n\n<\/CodeBlockConfig>\n\n\n## Max lease TTL on a secrets mount\n\nSimilar to the AppRole auth method example, you can tune the max lease TTL on\ndynamic secrets.\n\nFor example, you enabled database secrets engine at `mongodb` and create a role\nnamed `tester` with max lease TTL of 120 days (2880 hours). When you request a\ndatabase credential for the `tester` role, it returns a warning, and its lease\nduration is 32 days (768 hours) instead of 120 days.\n\n<CodeBlockConfig hideClipboard highlight=\"11\">\n\n```shell-session\n$ vault read mongodb\/creds\/tester\n\nWARNING! 
The following warnings were returned from Vault:\n\n  * TTL of \"2880h\" exceeded the effective max_ttl of \"768h\"; TTL value is\n  capped accordingly\n\nKey                Value\n---                -----\nlease_id           mongodb\/creds\/tester\/fVPt15506k3UW9n4pq0kIpBH\nlease_duration     768h\nlease_renewable    true\npassword           Eskkx6yRhAN4--H9WL7B\nusername           v-token-tester-6BtY903qOZBpzYa4yQs8-1724715513\n```\n\n<\/CodeBlockConfig>\n\nTo set the desired TTL on the role, tune the max lease TTL on the `mongodb`\nmount.\n\n```shell-session\n$ vault secrets tune -max-lease-ttl=120d mongodb\n```\n\nVerify the configured max lease TTL available on the mount.\n\n<CodeBlockConfig hideClipboard highlight=\"8\">\n\n```shell-session\n$ vault read sys\/mounts\/mongodb\/tune\n\nKey                  Value\n---                  -----\ndefault_lease_ttl    768h\ndescription          n\/a\nforce_no_cache       false\nmax_lease_ttl        2880h\n```\n\n<\/CodeBlockConfig>\n\nThe following command lists all available parameters that you can tune.\n\n```shell-session\n$ vault secrets tune -h\n```\n\nWhen you introduce Vault into your existing system, the existing applications\nmay not be able to handle short-lived leases. You can tune the default TTLs\non each mount. 
\n\nOn the similar note, if the system default of 32 days is too long, you can tune\nthe default TTL to be shorter to comply with your organization's policy.\n\n## API\n\n- [Tune auth method](\/vault\/api-docs\/system\/auth#tune-auth-method)\n- [Tune mount configuration](\/vault\/api-docs\/system\/mounts#tune-mount-configuration)","site":"vault","answers_cleaned":"    layout  docs page title  Tune the lease TTL description       Understand the behavior of time to live set on leases          Tune the lease time to live  TTL   The benefit of using Vault s dynamic secrets engines and auth methods is the ability to control how long the Vault managed credentials  leases  remain valid  Often times  you generate short lived credentials or tokens to reduce the risk of unauthorized attacks caused by leaked credentials or tokens  If you do not explicitly specify the time to live  TTL   Vault generates leases with TTL of 32 days by default   For example  you enabled AppRole auth method at  approle   and create a role named  read only  with max lease TTL of   120 days         shell session    vault write auth approle role read only token policies  read only        token ttl 90d token max ttl 120d      The command returns a warning about the TTL exceeding the mount s max TTL value    CodeBlockConfig hideClipboard      plaintext WARNING  The following warnings were returned from Vault       token max ttl is greater than the backend mount s maximum TTL value    issued tokens  max TTL value will be truncated        CodeBlockConfig    Therefore  it will return a client token with TTL of 768 hours  32 days  instead of 120 days    CodeBlockConfig highlight  12  hideClipboard      shell session   vault write auth approle login role id  ROLE ID  secret id  SECRET ID   WARNING  The following warnings were returned from Vault       TTL of  2880h  exceeded the effective max ttl of  768h   TTL value is   capped accordingly  Key                     Value                               token  
                 hvs CAESIJeVezY3UObHXTvzpI722q0MmaARB1692fT MmdzcryvGh4KHGh2cy43czViYXVZS3FnSzltWmdVZ3Q0MmFTdkc token accessor          wXTOvz5xxBi2vvUpTBhemUXr token duration          768h token renewable         true token policies            default   read only   identity policies          policies                  default   read only   token meta role name    read only        CodeBlockConfig      Max lease TTL on an auth mount  You cannot set the TTL for a role to go beyond the max lease TTL set on the AppRole auth mount   approle  in this example   The default lease TTL and max lease TTL are 32 days  768 hours       shell session   vault read sys auth approle tune        Output      CodeBlockConfig highlight  3 6  hideClipboard      plaintext Key                  Value                            default lease ttl    768h description          n a force no cache       false max lease ttl        768h token type           default service        CodeBlockConfig   If the desired max lease TTL is 120 days  2880 hours   update the max lease TTL on the mount       shell session   vault auth tune  max lease ttl 120d approle      The following command lists all available parameters that you can tune      shell session   vault auth tune  h      Now  the AppRole will generate a lease with token duration of 120 days  2880 hours      CodeBlockConfig highlight  7  hideClipboard      shell session   vault write auth approle login role id  ROLE ID  secret id  SECRET ID   Key                     Value                               token                   hvs CAESIOzTpLX4naKw epzhcb3DneZ9ZuRTx4tKh5mTT1CajLQGh4KHGh2cy5TUFFhY3QzVzdmSTFwQUduOWlrMVRWaUE token accessor          blc2MGA4EmmqEROzqlotFbqr token duration          2880h token renewable         true token policies            default   jenkins   identity policies          policies                  default   jenkins   token meta role name    jenkins        CodeBlockConfig       Max lease TTL on a secrets mount  Similar to 
the AppRole auth method example  you can tune the max lease TTL on dynamic secrets   For example  you enabled database secrets engine at  mongodb  and create a role named  tester  with max lease TTL of 120 days  2880 hours   When you request a database credential for the  tester  role  it returns a warning  and its lease duration is 32 days  768 hours  instead of 120 days    CodeBlockConfig hideClipboard highlight  11       shell session   vault read mongodb creds tester  WARNING  The following warnings were returned from Vault       TTL of  2880h  exceeded the effective max ttl of  768h   TTL value is   capped accordingly  Key                Value                          lease id           mongodb creds tester fVPt15506k3UW9n4pq0kIpBH lease duration     768h lease renewable    true password           Eskkx6yRhAN4  H9WL7B username           v token tester 6BtY903qOZBpzYa4yQs8 1724715513        CodeBlockConfig   To set the desired TTL on the role  tune the max lease TTL on the  mongodb  mount      shell session   vault secrets tune  max lease ttl 120d mongodb      Verify the configured max lease TTL available on the mount    CodeBlockConfig hideClipboard highlight  8       shell session   vault read sys mounts mongodb tune  Key                  Value                            default lease ttl    768h description          n a force no cache       false max lease ttl        2880h        CodeBlockConfig   The following command lists all available parameters that you can tune      shell session   vault secrets tune  h      When you introduce Vault into your existing system  the existing applications may not be able to handle short lived leases  You can tune the default TTLs on each mount    On the similar note  if the system default of 32 days is too long  you can tune the default TTL to be shorter to comply with your organization s policy      API     Tune auth method   vault api docs system auth tune auth method     Tune mount configuration   vault api docs system 
mounts tune mount configuration "}
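The day-to-hour figures used throughout the example above (32 days = 768 hours, 120 days = 2880 hours) and the capping behavior shown in the warnings can be sanity-checked in a few lines. Here `to_hours` and `effective_ttl` are hypothetical helpers that mirror the `max_lease_ttl` cap, not a Vault API:

```python
UNIT_HOURS = {"h": 1, "d": 24}

def to_hours(ttl: str) -> int:
    # Parse a simple Vault-style duration string such as "768h" or "120d".
    return int(ttl[:-1]) * UNIT_HOURS[ttl[-1]]

def effective_ttl(requested: str, mount_max: str) -> int:
    # The issued TTL is capped at the mount's max_lease_ttl.
    return min(to_hours(requested), to_hours(mount_max))

print(effective_ttl("120d", "768h"))   # 768: capped, matching the warning above
print(effective_ttl("120d", "2880h"))  # 2880: full 120 days after tuning the mount
```

The same arithmetic explains the secrets-mount example: until `vault secrets tune -max-lease-ttl=120d mongodb` raises the mount maximum, a 2880h request is truncated to 768h.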
{"questions":"vault Explanations workarounds and solutions for common lease problems in Vault page title Lease problems Troubleshoot lease problems in Vault Troubleshoot lease problems layout docs","answers":"---\nlayout: docs\npage_title: Lease problems\ndescription: >-\n  Troubleshoot lease problems in Vault.\n---\n\n# Troubleshoot lease problems\n\nExplanations, workarounds, and solutions for common lease problems in Vault.\n\n## `429 - Too Many Requests`\n\n### Problem\n\nVault returns a `429 - Too Many Requests` response when users try to\nauthenticate. For example:\n\n<CodeBlockConfig hideClipboard>\n\n```text\nError making API request.\n\nURL: PUT https:\/\/127.0.0.1:61555\/v1\/auth\/userpass\/login\/foo\nCode: 429. Errors:\n\n* 1 error occurred:\n\t* request path \"auth\/userpass\/login\/foo\": lease count quota exceeded\n```\n\n<\/CodeBlockConfig>\n\n### Cause\n\nVault returns a `429 - Too Many Requests` response if a new lease request\nviolates the configured lease quota limit.\n\nTo guard against [lease explosions](\/vault\/docs\/troubleshoot\/lease-explosions),\nVault rejects authentication requests if completing the request would violate\nthe configured lease quota limit.\n\n### Solution\n\n1. Correct any client-side errors that may cause excessive lease creation.\n1. Determine if your resource needs have changed and complete the\n   [Protecting Vault with Resource Quotas](\/vault\/tutorials\/operations\/resource-quotas)\n   tutorial to determine new, appropriate defaults.\n1. Use the [`vault lease`](\/vault\/docs\/commands\/lease) CLI command or\n   [lease count quota endpoint](\/vault\/api-docs\/system\/lease-count-quotas) to\n   tune your lease count quota.\n\n<Highlight title=\"Use proactive tuning to avoid errors\">\n   Consider making short-term changes to your lease quotas when you expect a\n   significant increase in lease creation. 
For example, when you release a new\n   feature or complete a marketing push to increase your user base.\n<\/Highlight>\n\n\n## Lease explosion (degraded performance)\n\n### Problem\n\nYour Vault nodes are out of memory and unresponsive to new lease requests.\n\n### Cause\n\nClients have caused a lease explosion with consistent, high-volume API requests.\n\n<Note title=\"Lease explosions can lead to DoS\">\n\n   Unchecked lease explosions create cascading denial-of-service issues for the\n   active node that can result in denial-of-service issues for the entire\n   cluster.\n\n<\/Note>\n\n### Solution\n\nTo resolve a lease explosion, you need to mitigate the problem to stabilize\nVault and provide space for cluster recovery, then clean up your Vault\nenvironment.\n\n1. Mitigate resource stress by adjusting TTL values for your Vault instance:\n\n   Config level         | Parameter              | Precedence\n   -------------------- | ---------------------- | -----------\n   Database plugin      | `ttl` or `default_ttl` | first\n   Database plugin      | `max_ttl`              | first\n   AuthN\/secrets plugin | `ttl` or `default_ttl` | second\n   AuthN\/secrets plugin | `max_ttl`              | second\n   Vault                | `default_lease_ttl`    | last\n   Vault                | `max_lease_ttl`        | last\n\n   **Granular TTLs on a role, group, or user level always override plugin and\n   system-wide TTL values**.\n\n1. Use firewalls or load balancers to limit API calls to Vault from aberrant\n   clients and reduce load on the struggling cluster.\n\n1. Once the cluster stabilizes, check the active node to determine if you can\n   wait for it to purge leases automatically or if you need to speed up the\n   process by manually revoking leases.\n\n1. If the cluster requires manual intervention, confirm you have a recent, valid\n   snapshot of the cluster.\n\n1. 
Once you confirm a valid snapshot of the cluster exists, use\n   [`vault lease revoke`](\/vault\/docs\/commands\/lease\/revoke) to manually revoke\n   the offending leases.\n\n<Warning title=\"Potentially dangerous operation\">\n\n   Revoking or forcefully revoking leases is potentially a dangerous operation.\n   Do not proceed without a valid snapshot. If you have a valid Vault\n   Enterprise license, consider contacting the\n   [HashiCorp Customer Support team](https:\/\/support.hashicorp.com\/) for help.\n\n<\/Warning>\n\n### Related tutorials\n\n- [Troubleshoot irrevocable leases](\/vault\/tutorials\/monitoring\/troubleshoot-irrevocable-leases)","site":"vault","answers_cleaned":"    layout  docs page title  Lease problems description       Troubleshoot lease problems in Vault         Troubleshoot lease problems  Explanations  workarounds  and solutions for common lease problems in Vault       429   Too Many Requests       Problem  Vault returns a  429   Too Many Requests  response when users try to authenticate  For example    CodeBlockConfig hideClipboard      text Error making API request   URL  PUT https   127 0 0 1 61555 v1 auth userpass login foo Code  429  Errors     1 error occurred     request path  auth userpass login foo   lease count quota exceeded        CodeBlockConfig       Cause  Vault returns a  429   Too Many Requests  response if a new lease request violates the configured lease quota limit   To guard against  lease explosions   vault docs troubleshoot lease explosions   Vault rejects authentication requests if completing the request would violate the configured lease quota limit       Solution  1  Correct any client side errors that may cause excessive lease creation  1  Determine if your resource needs have changed and complete the     Protecting Vault with Resource Quotas   vault tutorials operations resource quotas     tutorial to determine new  appropriate defaults  1  Use the   vault lease    vault docs commands lease  CLI command or     
lease count quota endpoint   vault api docs system lease count quotas  to    tune your lease count quota    Highlight title  Use proactive tuning to avoid errors      Consider making short term changes to your lease quotas when you expect a    significant increase in lease creation  For example  when you release a new    feature or complete a marketing push to increase your user base    Highlight       Lease explosion  degraded performance       Problem  Your Vault nodes are out of memory and unresponsive to new lease requests       Cause  Clients have caused a lease explosion with consistent  high volume API requests    Note title  Lease explosions can lead to DoS       Unchecked lease explosions create cascading denial of service issues for the    active node that can result in denial of service issues for the entire    cluster     Note       Solution  To resolve a lease explosion  you need to mitigate the problem to stabilize  Vault and provide space for cluster recovery then clean up your Vault environment   1  Mitigate resource stress by adjusting TTL values for your Vault instance      Config level           Parameter                Precedence                                                                   Database plugin         ttl  or  default ttl    first    Database plugin         max ttl                 first    AuthN secrets plugin    ttl  or  default ttl    second    AuthN secrets plugin    max ttl                 second    Vault                   default lease ttl       last    Vault                   max lease ttl           last       Granular TTLs on a role  group  or user level always override plugin and    system wide TTL values     1  Use firewalls or load balancers to limit API calls to Vault from aberrant    clients and reduce load on the struggling cluster    1  Once the cluster stabilizes  check the active node to determine if you can    wait for it to purge leases automatically or if you need to speed up the    process by manually 
revoking leases   1  If the cluster requires manual intervention  confirm you have a recent  valid    snapshots of the cluster   1  Once you confirm a valid snapshot of the cluster exists  use      vault lease revoke    vault docs commands lease revoke  to manually revoke    the offending leases    Warning title  Potentially dangerous operation       Revoking or forcefully revoking leases is potentially a dangerous operation     Do not proceed without a valid snapshot  If you have a valid Vault    Enterprise license  consider contacting the     HashiCorp Customer Support team  https   support hashicorp com   for help     Warning       Related tutorials     Troubleshoot irrevocable leases   vault tutorials monitoring troubleshoot irrevocable leases "}
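The TTL precedence table in the record above (role/group/user values override plugin values, which override the system defaults) can be sketched as a small resolver. This is a hypothetical illustration of the precedence order, not Vault source code:

```python
# Toy resolver for the TTL precedence described above: the most specific
# configured value wins (role/group/user > plugin mount > system default).
# Purely illustrative; parameter names are invented for the example.

def resolve_default_ttl(role_ttl=None, mount_ttl=None, system_ttl="768h"):
    """Return the first configured TTL, walking from most to least specific."""
    for ttl in (role_ttl, mount_ttl):
        if ttl is not None:
            return ttl
    return system_ttl

print(resolve_default_ttl(role_ttl="1h", mount_ttl="24h"))  # role wins: 1h
print(resolve_default_ttl(mount_ttl="24h"))                 # mount wins: 24h
print(resolve_default_ttl())                                # system default: 768h
```

This is why, during a lease explosion, lowering only the system-wide `default_lease_ttl` may not help: any granular role-level TTL still takes precedence and must be tuned directly.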
{"questions":"vault page title Vault Enterprise Lease Count Quotas include alerts enterprise only mdx Vault Enterprise features a mechanism to create lease count quotas Lease count quotas layout docs","answers":"---\nlayout: docs\npage_title: Vault Enterprise Lease Count Quotas\ndescription: |-\n  Vault Enterprise features a mechanism to create lease count quotas.\n---\n\n# Lease count quotas\n\n@include 'alerts\/enterprise-only.mdx'\n\nVault features an extension to resource quotas that allows operators to enforce\nlimits on how many leases are created. For a given lease count quota, if the\nnumber of leases in the cluster hits the configured limit, `max_leases`,\nadditional lease creations will be forbidden for all clients until\nan operator modifies the configured limit, or a lease has been revoked or\nexpired.\n\nLease count quotas guard against [lease\nexplosions](\/vault\/docs\/concepts\/lease-explosions).\n\n## Root tokens\n\nIt is important to note that lease count quotas do not apply to root tokens.\nIf the number of leases in the cluster hits the configured limit, `max_leases`,\nan operator can still create a root token and access the cluster to try to\nrecover.\n\n## Batch tokens\n\nBatch token creation is blocked when the lease count quota is exceeded, but\nbatch tokens do not count toward the quota.\n\nAll the nodes in the Vault cluster share the lease quota rules, meaning that the\nlease counters are shared, regardless of which node in the Vault cluster\nreceives lease generation requests. Lease quotas can be imposed across Vault's\nAPI, or scoped down to API pertaining to specific namespaces or specific mounts.\n\n## Lease count quota inheritance\n\nA quota that is defined in the `root` namespace with no specified path is\ninherited by all namespaces. This type of quota is referred to as a `global`\nquota. 
Global quotas apply to the entire Vault API unless a more specific\n(higher precedence) quota has been defined.\n\n## Lease count quota precedence\n\nLease count quota precedence is dictated by highest to lowest level of\nspecificity. The rules are as follows:\n\n1. Global lease count quotas are applied to all mounts and namespaces only if no\nother, more specific quota is defined.\n1. Lease count quotas defined on a namespace take precedence over the global\nquotas.\n1. Lease count quotas defined for a mount will take precedence over global\nand namespace quotas.\n1. Lease count quotas defined for a specific path will take\nprecedence over global, namespace, and mount quotas.\n1. Lease count quotas defined with a login role for a specific auth mount will\ntake precedence over every other quota when applying to login requests using\nthat auth method and the specified role.\n\nThe limits on quotas can either be increased or decreased. If a lower precedence\nquota is very restrictive and you want to relax the limits in one namespace or\non a specific mount, this precedence model lets you do so. Conversely, if a\nlower precedence quota is very liberal and you want to further restrict usage in\na specific namespace or mount, the precedence model supports that as well.\n\n## Default lease count quota\n\nAs of Vault 1.16.0, new installations of Vault Enterprise will include a default\nglobal quota with a `max_leases` value of `300000`. This value is an\nintentionally low limit, intended to prevent runaway leases in the event that no\nother lease count quota is specified.\n\nThis limit will affect all new clusters with no pre-existing configuration. 
As\nwith any other quota, the default can be directly increased, decreased, or\nremoved using the [lease-count-quotas endpoints](\/vault\/api-docs\/system\/lease-count-quotas).\n\nThe default may also be overridden by higher precedence quotas (specified for a\nnamespace, mount, path, or role) as described in the [Lease count quota\nprecedence](#lease-count-quota-precedence) section above.\n\n## Quota inspection\n\nVault also allows the inspection of the state of lease count quotas in a Vault\ncluster through various\n[metrics](\/vault\/docs\/internals\/telemetry\/metrics\/core-system#quota-metrics)\nand through enabling optional audit logging.\n\n## Lease count quota exceeded\n\nVault returns a `429 - Too Many Requests` response if a new lease request\nviolates the quota limit. For more information on this error, refer to [the\nerror document](\/vault\/docs\/concepts\/lease-count-quota-exceeded).\n\n## Tutorial\n\nRefer to [Protecting Vault with Resource\nQuotas](\/vault\/tutorials\/operations\/resource-quotas) for a\nstep-by-step tutorial.\n\n## API\n\nLease count quotas can be managed over the HTTP API. 
Please see\n[Lease Count Quotas API](\/vault\/api-docs\/system\/lease-count-quotas) for more details.","site":"vault","answers_cleaned":"    layout  docs page title  Vault Enterprise Lease Count Quotas description       Vault Enterprise features a mechanism to create lease count quotas         Lease count quotas   include  alerts enterprise only mdx   Vault features an extension to resource quotas that allows operators to enforce limits on how many leases are created  For a given lease count quota  if the number of leases in the cluster hits the configured limit   max leases   additional lease creations will be forbidden for all clients until the an operator modifies the configured limit  or a lease has been revoked or expired   Lease count quotas guard against  lease explosions   vault docs concepts lease explosions       Root tokens  It is important to note that lease count quotas do not apply to the root tokens  If the number of leases in the cluster hits the configured limit   max leases   an operator can still create a root token and access the cluster to try to recover      Batch tokens  Batch token creation is blocked when the lease count quota is exceeded  but batch tokens do not count toward the quota   All the nodes in the Vault cluster share the lease quota rules  meaning that the lease counters are shared  regardless of which node in the Vault cluster receives lease generation requests  Lease quotas can be imposed across Vault s API  or scoped down to API pertaining to specific namespaces or specific mounts      Lease count quota inheritance  A quota that is defined in the  root  namespace with no specified path is inherited by all namespaces  This type of quota is referred to as a  global  quota  Global quotas applie to the entire Vault API unless a more specific quota  higher precedence  quota has been defined      Lease count quota precedence  Lease count quota precedence is dictated by highest to lowest level of specificity  The rules are as follows 
  1  Global lease count quotas are applied to all mounts and namespaces only if no other  more specific namespace is defined  1  Lease count quotas defined on a namespace take precedence over the global quotas  1  Lease count quotas defined for a mount will take precedence over global and namespace quotas  1  Lease count quotas defined for a specific path will take precedence over global  namespace  and mount quotas  1  Lease count quotas defined with a login role for a specific auth mount will take precedence over every other quota when applying to login requests using that auth method and the specified role   The limits on quotas can either be increased or decreased  If a lower precedence quota is very restrictive and if it is desired to relax the limits in one namespace  or on a specific mount  it can be done using this precedence model  On the other hand  if a lower precedence quota is very liberal and if it is desired to further restrict usages in a specific namespace or mount  that can be done using the precedence model too      Default lease count quota  As of Vault 1 16 0  new installations of Vault Enterprise will include a default global quota with a  max leases  value of  300000   This value is an intentionally low limit  intended to prevent runaway leases in the event that no other lease count quota is specified   This limit will affect all new clusters with no pre existing configuration  As with any other quota  the default can be directly increased  decreased  or removed using the  lease count quotas endpoints   vault api docs system lease count quotas    The default may also be overridden by higher precedence quotas  specified for a namespace  mount  path  or role  as described in the  Lease count quota precedence   lease count quota precedence  section above      Quota inspection  Vault also allows the inspection of the state of lease count quotas in a Vault cluster through various  metrics   vault docs internals telemetry metrics core system quota 
metrics  and through enabling optional audit logging      Lease count quota exceeded  Vault returns a  429   Too Many Requests  response if a new lease request violates the quota limit  For more information on this error  refer to  the error document   vault docs concepts lease count quota exceeded       Tutorial  Refer to  Protecting Vault with Resource Quotas   vault tutorials operations resource quotas  for a step by step tutorial      API  Lease count quotas can be managed over the HTTP API  Please see  Lease Count Quotas API   vault api docs system lease count quotas  for more details "}
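The precedence rules in the record above (global < namespace < mount < path < login role, with the most specific match winning) can be sketched as a small lookup. The scope names and data shape are invented for illustration; this is not Vault's quota engine:

```python
# Sketch of lease count quota precedence: given quotas defined at different
# scopes that all match a request, the most specific one applies.
# Scope names and structure are illustrative, not Vault's API.

PRECEDENCE = ["global", "namespace", "mount", "path", "role"]

def applicable_quota(quotas: dict[str, int]) -> tuple[str, int]:
    """quotas maps scope name -> max_leases for scopes matching a request."""
    for scope in reversed(PRECEDENCE):  # walk from most to least specific
        if scope in quotas:
            return scope, quotas[scope]
    raise LookupError("no quota matches the request")

# A permissive mount-level quota relaxes the restrictive global default:
print(applicable_quota({"global": 300000, "mount": 500000}))  # ('mount', 500000)
```

Note that this model works in both directions described in the docs: a more specific quota can either raise or lower the limit relative to a broader one.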
{"questions":"vault include alerts enterprise only mdx Learn the details about long term support for Vault Enterprise layout docs Long term support for Vault page title Long term support for Vault","answers":"---\nlayout: docs\npage_title: Long-term support for Vault\ndescription: >-\n  Learn the details about long-term support for Vault Enterprise.\n---\n\n# Long-term support for Vault\n\n@include 'alerts\/enterprise-only.mdx'\n\nLong-term support (LTS) eases upgrade requirements for installations that cannot\nupgrade frequently, quickly, or easily.\n\n## LTS summary\n\n<table>\n\n  <thead>\n    <tr>\n      <th>Question<\/th>\n      <th>Answer<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n  <tr>\n    <td>\n      <a href=\"#who\">Who should consider long-term support?<\/a>\n    <\/td>\n    <td>\n      Enterprise customers using Vault for sensitive or critical workflows.\n    <\/td>\n  <\/tr>\n  <tr>\n    <td>\n      <a href=\"#what\">What is long-term support?<\/a>\n    <\/td>\n    <td>\n      Extended maintenance for select, major Vault Enterprise versions.\n      By default, HashiCorp maintains Vault Enterprise versions for one year,\n      which includes feature updates and critical patches. LTS extends\n      maintenance for an additional year with critical patches.\n    <\/td>\n  <\/tr>\n  <tr>\n    <td>\n      <a href=\"#where\">Where do I enable long-term support?<\/a>\n    <\/td>\n    <td>\n      You do not need to download a separate binary or set a flag for long-term\n      support. As long as you select an LTS Vault Enterprise version when\n      you <a href=\"\/vault\/install\">install<\/a> or <a href=\"\/vault\/docs\/upgrading\">upgrade<\/a> your\n      Vault instance, LTS is included.\n    <\/td>\n  <\/tr>\n  <tr>\n    <td>\n      <a href=\"#when\">When are LTS versions released?<\/a>\n    <\/td>\n    <td>\n      As of Vault Enterprise 1.16, the first major release of a calendar year includes\n      long-term support.\n    <\/td>\n  <\/tr>\n  <tr>\n    <td>\n      <a href=\"#why\">Why is there a risk to updating to a non-LTS Vault Enterprise version?<\/a>\n    <\/td>\n    <td>\n      If you upgrade to a non-LTS Vault Enterprise version, your Vault instance\n      will stop receiving critical updates when that version leaves the default\n      maintenance window.\n    <\/td>\n  <\/tr>\n  <tr>\n    <td>\n      <a href=\"#how\">How do I update my LTS Vault Enterprise installation?<\/a>\n    <\/td>\n    <td>\n      Follow your existing Vault upgrade process, but allow extra time for the\n      possibility of transitional upgrades across multiple Vault versions.\n    <\/td>\n  <\/tr>\n<\/tbody>\n<\/table>\n\n<a id=\"who\" \/>\n\n## Who should consider long-term support?\n\nVault upgrades are challenging, especially for sensitive or critical workflows,\nextensive integrations, and large-scale deployments. 
Strict upgrade policies\nalso require significant planning, testing, and employee hours to execute\nsuccessfully.\n\nCustomers who need assurances that their current installation will receive\ncritical bug fixes and security patches with minimal service disruptions should\nconsider moving to a Vault Enterprise version with long-term support.\n\n<a id=\"what\" \/>\n\n## What is long-term support?\n\nLong-term support offers extended maintenance through minor releases for select,\nmajor Vault Enterprise versions.\n\nThe standard [support period and end of life policy](https:\/\/go.hashi.co\/vault-support-policy)\ncovers \"N&minus;2\" versions, which means, at any given time, HashiCorp maintains\nthe current version (\"N\") and the two previous versions (\"N&minus;2\").\n\nVault versions typically update 3 times per calendar year (CY), which means that\n**standard maintenance** for a given Vault version lasts approximately 1 year.\nAfter the first year, LTS Vault versions move from standard maintenance to\n**extended maintenance** for three additional major version releases (approximately one additional year)\nwith patches for bugs that may cause outages and critical vulnerabilities and exposures (CVEs).\n\nMaintenance updates               | Standard maintenance | Extended maintenance\n--------------------------------- | -------------------- | --------------------\nPerformance improvements          | YES                  | NO\nBug fixes                         | YES                  | OUTAGE-RISK ONLY\nSecurity patches                  | YES                  | HIGH-RISK ONLY\nCVE patches                       | YES                  | YES\n\n<a id=\"where\" \/>\n\n## Where do I enable long-term support?\n\nYou do not need to download a separate binary or set a flag for long-term\nsupport. 
As long as you select an LTS Vault Enterprise version\n(e.g., 1.16, 1.19) when you [install](\/vault\/install) or [upgrade](\/vault\/docs\/upgrading) your\nVault instance, LTS is included.\n\n<a id=\"when\" \/>\n\n## When are LTS versions released?\n\nAs of Vault Enterprise 1.16, the first release of a calendar year includes\nlong-term support.\n\nLTS versions overlap by one year, with the previous LTS version entering its\nextended maintenance window when the new LTS version begins its standard\nmaintenance window.\n\n<a id=\"why\" \/>\n\n## Why is there a risk to updating to a non-LTS Vault Enterprise version?\n\nLong-term support is intended for Enterprise customers who cannot upgrade\nfrequently enough to stay within the standard maintenance timeline of one year.\nThe goal is to establish a predictable upgrade path with a longer timeline\nrather than extending the lifetime for every Vault version.\n\nLong-term support ensures your Vault Enterprise version continues to receive\ncritical patches for an additional three major version releases (approximately one additional year).\nIf you upgrade to a non-LTS version, you are moving your Vault instance to a version\nthat lacks extended support. Non-LTS versions stop receiving updates once they leave\nthe standard maintenance window.\n\n@include 'assets\/lts-upgrade-path.mdx'\n\nVersion | Expected release | Standard maintenance ends  | Extended maintenance ends\n------- | ---------------- | -------------------------- | ---------------------\n1.19    | CY25 Q1          | CY26 Q1 (1.22 release)     | CY27 Q1 (1.25 release)\n1.18    | CY24 Q3          | CY25 Q3 (1.21 release)     | Not provided\n1.17    | CY24 Q2          | CY25 Q2 (1.20 release)     | Not provided\n1.16    | CY24 Q1          | CY25 Q1 (1.19 release)     | CY26 Q1 (1.22 release)\n\nIf a newer version of Vault Enterprise includes features you want to take\nadvantage of, you have two options:\n\n1. 
Wait for the next available LTS release to maintain long-term support.\n1. Upgrade immediately, then upgrade to an LTS release before the standard\n   maintenance window expires.\n\n<a id=\"how\" \/>\n\n## How do I upgrade my Vault Enterprise LTS installation?\n\nYou should follow your existing upgrade process for major version upgrades but\nallow additional time. Upgrading from version LTS to LTS+1 translates to jumping\n3 major Vault Enterprise versions, which **may** require transitional upgrades\nto move through the intermediate Vault versions.","site":"vault","answers_cleaned":"    layout  docs page title  Long term support for Vault description       Learn the details about long term support for Vault Enterprise         Long term support for Vault   include  alerts enterprise only mdx   Long term support  LTS  eases upgrade requirements for installations that cannot upgrade frequently  quickly  or easily      LTS summary   table      thead       tr         th style  Question  th         th style  Answer  th        tr      thead     tbody     tr       td style          a href   who  Who should consider long term support   a        td       td style         Enterprise customers using Vault for sensitive or critical workflows        td      tr     tr       td style          a href   what  What is long term support   a        td       td style         Extended maintenance for select  major Vault Enterprise versions        By default  HashiCorp maintains Vault Enterprise versions for one year        which includes feature updates and critical patches  LTS extends       maintenance for an additional year with critical patches        td      tr     tr       td style          a href   where  Where do I enable long term support   a        td       td style         You do not need to download a separate binary or set a flag for long term       support  As long as you select an LTS Vault Enterprise version when       you  a href   vault install  install  a  or  a href   
vault docs upgrading  upgrade  a  your       Vault instance  LTS is included        td      tr     tr       td style          a href   when  When are LTS versions released   a        td       td style         As of Vault Enterprise 1 16  the first major release of a calendar year includes       long term support        td      tr     tr       td style          a href   why  Why is there a risk to updating to a non LTS Vault Enterprise version   a        td       td style         If you upgrade to a non LTS Vault Enterprise version  your Vault instance       will stop receiving critical updates when that version leaves the default       maintenance window        td      tr     tr       td style          a href   how  How do I update my LTS Vault Enterprise installation   a        td       td style         Follow your existing Vault upgrade process  but allow extra time for the       possibility of transitional upgrades across multiple Vault versions        td      tr    tbody    table    a id  who         Who should consider long term support   Vault upgrades are challenging  especially for sensitive or critical workflows  extensive integrations  and large scale deployments  Strict upgrade policies also require significant planning  testing  and employee hours to execute successfully   Customers who need assurances that their current installation will receive critical bug fixes and security patches with minimal service disruptions should consider moving to a Vault Enterprise version with long term support    a id  what         What is long term support   Long term support offers extended maintenance through minor releases for select  major Vault Enterprise versions   The standard  support period and end of life policy  https   go hashi co vault support policy  covers  N minus 2  versions  which means  at any given time  HashiCorp maintains the current version   N   and the two previous versions   N minus 2     Vault versions typically update 3 times per calendar 
year  CY   which means that   standard maintenance   for a given Vault version lasts approximately 1 year  After the first year  LTS Vault versions move from standard maintenance to   extended maintenance   for three additional major version releases  approximately one additional year  with patches for bugs that may cause outages and critical vulnerabilities and exposures  CVEs    Maintenance updates                 Standard maintenance   Extended maintenance                                                                                 Performance improvements            YES                    NO Bug fixes                           YES                    OUTAGE RISK ONLY Security patches                    YES                    HIGH RISK ONLY CVE patches                         YES                    YES   a id  where         Where do I enable long term support   You do not need to download a separate binary or set a flag for long term support  As long as you select an LTS Vault Enterprise version  e g   1 16  1 19  when you  install   vault install  or  upgrade   vault docs upgrading  your Vault instance  LTS is included    a id  when         When are LTS versions released   As of Vault Enterprise 1 16  the first release of a calendar year includes long term support   LTS versions overlap by one year with the previous LTS version entering its extended maintenance window when the new LTS version begins its standard maintenance window    a id  why         Why is there a risk to updating to a non LTS Vault Enterprise version   Long term support is intended for Enterprise customers who cannot upgrade frequently enough to stay within the standard maintenance timeline of one year  The goal is to establish a predictable upgrade path with a longer timeline rather than extending the lifetime for every Vault version   Long term support ensures your Vault Enterprise version continues to receive critical patches for an additional three major version releases  approximately 
one additional year   If you upgrade to a non LTS version you are moving your Vault instance to a version that lacks extended support  Non LTS versions stop receiving updates once they leave the standard maintenance window    include  assets lts upgrade path mdx   Version   Expected release   Standard maintenance ends    Extended maintenance ends                                                                                 1 19      CY25 Q1            CY26 Q1  1 22 release        CY27 Q1  1 25 release  1 18      CY24 Q3            CY25 Q3  1 21 release        Not provided 1 17      CY24 Q2            CY25 Q2  1 20 release        Not provided 1 16      CY24 Q1            CY25 Q1  1 19 release        CY26 Q1  1 22 release   If a newer version of Vault Enterprise includes features you want to take advantage of  you have two options   1  Wait for the next available LTS release to maintain long term support  1  Upgrade immediately  then upgrade to an LTS release before the standard    maintenance window expires    a id  how         How do I upgrade my Vault Enterprise LTS installation   You should follow your existing upgrade process for major version upgrades but allow additional time  Upgrading from version LTS to LTS 1 translates to jumping 3 major Vault Enterprise versions  which   may   require transitional upgrades to move through the intermediate Vault versions "}
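The cadence in the LTS record above (an LTS release every three major versions: 1.16, 1.19, ...) determines which version a given installation should target next. A small sketch of that arithmetic, under the stated assumption that the cadence holds steady (this is an illustration of the schedule described, not an official tool):

```python
# Illustration of the LTS cadence described above: an LTS release ships every
# three major versions starting at 1.16 (so 1.16, 1.19, 1.22, ...).
# Versions are modeled as (major, minor) tuples; purely illustrative.

def next_lts(version: tuple[int, int], first_lts=(1, 16), step=3):
    """Return the nearest LTS version at or after `version`."""
    major, minor = version
    offset = max(0, minor - first_lts[1])
    steps = (offset + step - 1) // step  # round up to the next LTS minor
    return (major, first_lts[1] + steps * step)

print(next_lts((1, 16)))  # (1, 16) is itself an LTS release
print(next_lts((1, 17)))  # next LTS is (1, 19)
print(next_lts((1, 18)))  # (1, 19)
```

This matches the table in the record: 1.17 and 1.18 are non-LTS, so an installation on either should plan its next major upgrade to land on 1.19.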
{"questions":"vault page title Vault Enterprise Control Groups layout docs include alerts enterprise and hcp mdx Vault Enterprise has support for Control Group Authorization Vault Enterprise control groups","answers":"---\nlayout: docs\npage_title: Vault Enterprise Control Groups\ndescription: Vault Enterprise has support for Control Group Authorization.\n---\n\n# Vault Enterprise control groups\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nVault Enterprise has support for Control Group Authorization. Control Groups\nadd additional authorization factors that must be satisfied before a request is fulfilled.\n\nWhen a Control Group is required for a request, a limited-duration response\nwrapping token is returned to the user instead of the requested data. The\naccessor of the response wrapping token can be passed to the authorizers\nrequired by the control group policy. Once all authorizations are satisfied,\nthe wrapping token can be used to unwrap and process the original request.\n\n## Control group factors\n\nControl Groups can verify the following factors:\n\n- `Identity Groups` - Require an authorizer to be in a specific set of identity\n  groups.\n\n### Controlled capabilities\n\nControl group factors can be configured to trigger the control group workflow\non specific capabilities. This is done with the `controlled_capabilities` field.\nIf the `controlled_capabilities` field is not specified, the factor is\nchecked for all operations on the specified policy path. The `controlled_capabilities`\nfield can differ per factor, so that different factors can be required for different\noperations.\n\nFinally, the capabilities in the `controlled_capabilities` stanza must be a subset of the\n`capabilities` specified in the policy itself. 
For example, a policy giving only `read` access to\nthe path `secret\/foo` cannot specify a control group factor with `list` as a controlled capability.\n\nPlease see the following section for examples using ACL policies.\n\n## Control groups in ACL policies\n\nControl Group requirements on paths are specified as `control_group` along\nwith other ACL parameters.\n\n### Sample ACL policies\n\n```\npath \"secret\/foo\" {\n    capabilities = [\"read\"]\n    control_group = {\n        factor \"ops_manager\" {\n            identity {\n                group_names = [\"managers\"]\n                approvals = 1\n            }\n        }\n    }\n}\n```\n\nThe above policy grants `read` access to `secret\/foo` only after one member of\nthe \"managers\" group authorizes the request.\n\n```\npath \"secret\/foo\" {\n    capabilities = [\"create\", \"update\"]\n    control_group = {\n        ttl = \"4h\"\n        factor \"tech leads\" {\n            identity {\n                group_names = [\"managers\", \"leads\"]\n                approvals = 2\n            }\n        }\n        factor \"super users\" {\n            identity {\n                group_names = [\"superusers\"]\n                approvals = 1\n            }\n        }\n    }\n}\n```\n\nThe above policy grants `create` and `update` access to `secret\/foo` only after\ntwo (2) members of the \"managers\" or \"leads\" group and one member of the \"superusers\"\ngroup authorize the request. 
If an authorizer is a member of both the\n\"managers\" and \"superusers\" group, one authorization for both factors will be\nsatisfied.\n\n```\npath \"secret\/foo\" {\n    capabilities = [\"update\", \"read\"]\n    control_group = {\n        factor \"admin\" {\n            controlled_capabilities = [\"update\"]\n            identity {\n                group_names = [\"admin\"]\n                approvals = 1\n            }\n        }\n    }\n}\n```\n\nThe above policy grants `read` access to `secret\/foo` for anyone who has a Vault token\nwith this policy. It grants `update` access to `secret\/foo` only after one member of the\n\"admin\" group authorizes the request.\n\n```\npath \"kv\/*\" {\n    capabilities = [\"create\", \"update\",\"delete\",\"list\",\"sudo\"]\n    control_group = {\n        factor \"admin\" {\n            controlled_capabilities = [\"delete\",\"list\",\"sudo\"]\n            identity {\n                group_names = [\"admin\"]\n                approvals = 1\n            }\n        }\n    }\n}\npath \"kv\/*\" {\n    capabilities = [\"create\"]\n    control_group = {\n        factor \"superuser\" {\n            identity {\n                group_names = [\"superuser\"]\n                approvals = 2\n            }\n        }\n    }\n}\n```\n\nBecause the second path stanza has a control group factor with no `controlled_capabilities` field,\nany token with this policy will be required to get two (2) approvals from the \"superuser\" group before executing\nany operation against `kv\/*`. 
In addition, by virtue of the `controlled_capabilities` field in the first\npath stanza, `delete`, `list`, and `sudo` operations will require an additional approval from the \"admin\" group.\n\n```\npath \"kv\/*\" {\n    capabilities = [\"read\", \"list\", \"create\"]\n    control_group = {\n        controlled_capabilities = [\"read\"]\n        factor \"admin\" {\n            identity {\n                group_names = [\"admin\"]\n                approvals = 1\n            }\n        }\n        factor \"superuser\" {\n            controlled_capabilities = [\"create\"]\n            identity {\n                group_names = [\"superuser\"]\n                approvals = 1\n            }\n        }\n    }\n}\n```\n\nIn this case, `read` will require one admin approval and `create` will require\none superuser approval and one admin approval. `list` will require no extra approvals\nfrom any of the control group factors, and a token with this policy will not be required\nto go through the control group workflow in order to execute a `list` operation against `kv\/*`.\n\n## Control groups in Sentinel\n\nControl Groups are also supported in Sentinel policies using the `controlgroup`\nimport. See [Sentinel Documentation](\/vault\/docs\/enterprise\/sentinel) for more\ndetails on available properties.\n\n### Sample Sentinel policy\n\n```\nimport \"time\"\nimport \"controlgroup\"\n\ncontrol_group = func() {\n    numAuthzs = 0\n    for controlgroup.authorizations as authz {\n\t\tif \"managers\" in authz.groups.by_name {\n\t\t\tif time.load(authz.time).unix > time.now.unix - 3600 {\n\t\t\t\tnumAuthzs = numAuthzs + 1\n\t\t\t}\n\t\t}\n    }\n    if numAuthzs >= 2 {\n        return true\n    }\n    return false\n}\n\nmain = rule {\n    control_group()\n}\n```\n\nThe above policy will reject the request unless two members of the \"managers\"\ngroup have authorized the request. 
Additionally it verifies the authorizations\nhappened in the last hour.\n\n## Tutorial\n\nRefer to the [Control Groups](\/vault\/tutorials\/enterprise\/control-groups)\ntutorial to learn how to implement dual controller authorization within your policies.\n\n## API\n\nControl Groups can be managed over the HTTP API. Please see\n[Control Groups API](\/vault\/api-docs\/system\/control-group) for more details.","site":"vault"}
{"questions":"vault Vault Enterprise Consistency Model page title Vault Enterprise Eventual Consistency layout docs include alerts enterprise and hcp mdx Vault eventual consistency","answers":"---\nlayout: docs\npage_title: Vault Enterprise Eventual Consistency\ndescription: Vault Enterprise Consistency Model\n---\n\n# Vault eventual consistency\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nWhen running in a cluster, Vault has an eventual consistency model.\nOnly one node (the leader) can write to Vault's storage.\nUsers generally expect read-after-write consistency: in other\nwords, after writing foo=1, a subsequent read of foo should return 1. Depending\non the Vault configuration this isn't always the case. When using performance\nstandbys with Integrated Storage, or when using performance replication,\nthere are some sequences of operations that don't always yield read-after-write\nconsistency.\n\n## Performance standby nodes\n\nWhen using the Integrated Storage backend without performance standbys, only\na single Vault node (the active node) handles requests. Requests sent to\nregular standbys are handled by forwarding them to the active node. This Vault configuration\ngives Vault the same behavior as the default Consul consistency model.\n\nWhen using the Integrated Storage backend with performance standbys, both the\nactive node and performance standbys can handle requests. If a performance standby\nhandles a login request, or a request that generates a dynamic secret, the\nperformance standby will issue a remote procedure call (RPC) to the active node to store the token\nand\/or lease. If the performance standby handles any other request that\nresults in a storage write, it will forward that request to the active node\nin the same way a regular standby forwards all requests.\n\nWith Integrated Storage, all writes occur on the active node, which then issues\nRPCs to update the local storage on every other node. 
Between when the active\nnode writes the data to its local disk, and when those RPCs are handled on the\nother nodes to write the data to their local disks, those nodes present a stale\nview of the data.\n\nAs a result, even if you're always talking to the same performance standby,\nyou may not get read-after-write semantics. The write gets sent to the active\nnode, and if the subsequent read request occurs before the new data gets sent\nto the node handling the read request, the read request won't be able to take\nthe write into account because the new data isn't present on that node yet.\n\n## Performance replication\n\nA similar phenomenon occurs when using performance replication. One example\nof how this manifests is when using shared mounts. If a KV secrets engine\nis mounted on the primary with `local=false`, it will exist on the secondary\ncluster as well. The secondary cluster can handle requests to that mount,\nthough as with performance standbys, write requests must be forwarded - in\nthis case to the primary active node. Once data is written to the primary cluster,\nit won't be visible on the secondary cluster until the data has been replicated\nfrom the primary. Therefore, on the secondary cluster, it initially appears as if\nthe data write hasn't happened.\n\nIf the secondary cluster is using Integrated Storage, and the read request is\nbeing handled on one of its performance standbys, the problem is exacerbated because it\nhas to be sent first from the primary active node to the secondary active node,\nand then from there to the secondary performance standby, each of which can\nintroduce their own form of lag.\n\nEven without shared secret engines, stale reads can still happen with performance\nreplication. The Identity subsystem aims to provide a view on entities and\ngroups which span across clusters. 
As such, when logging in to a secondary cluster\nusing a shared mount, Vault tries to generate an entity and alias if they don't\nalready exist, and these must be stored on the primary using an RPC. Something\nsimilar happens with groups.\n\n## Clock skew and replication lag\n\nAs seen above, both performance standbys and replication secondaries can lag\nbehind the active node or the primary.  As of Vault 1.17, it's possible to get\nsome insight into that lag using sys\/health, sys\/ha-status, and the replication\nstatus endpoints.\n\nSecondaries and standbys regularly issue an \"echo\" heartbeat RPC to their upstream\nsource.  This heartbeat serves many purposes, one of them being to get a rough\nidea of whether the clocks of the client and server are in sync.  The server\nresponse to the heartbeat RPC includes the server's local clock time, and the\nclient takes the delta in milliseconds between that time and the client's local\nclock time to compute the clock_skew_ms field.  No effort is made to factor into\nthat field the time it took to actually perform the RPC, though that information\nis made available as the last_heartbeat_duration_ms field.  In other words, the\nreported clock skew has an uncertainty of up to last_heartbeat_duration_ms.\n\nVault assumes that clocks are synced across all nodes in a cluster, and if they\naren't, problems may arise, e.g. one node may think that a lease has expired\nwhile another doesn't yet.  Some community-supported storage backends may have further\nproblems relating to HA mode.\n\nThere are fewer problems expected when clock skew exists between a replication primary\nand secondary.  However, one known issue is that the replication lag canary discussed\nnext will produce surprising values if clocks aren't synced between the clusters.\n\nNon-secondary active nodes periodically write a small record to storage containing the\nlocal clock time for that node.  
Replication secondaries read that record and compare\nit to their local clock time, calling the delta the replication_primary_canary_age_ms,\nwhich is exposed in the replication status endpoints.  Performance standbys do the same\ncomputation, exposing replication_primary_canary_age_ms in the sys\/health and\nsys\/ha-status endpoints.  Performance standbys and replication secondaries include\ntheir current replication_primary_canary_age_ms as part of their payload for the\naforementioned \"echo\" heartbeat RPCs they issue, allowing the active node or primary\ncluster to report on the lag seen by their downstream clients.\n\n## Mitigations\n\nThere has long been a partial mitigation for the above problems. When writing\ndata via RPC, e.g. when a performance standby registers tokens and leases on the\nactive node after a login or generating a dynamic secret, part of the response\nincludes a number known as the \"WAL index\", aka Write-Ahead Log index.\n\nA full explanation of this is outside the scope of this document, but the short\nversion is that both performance replication and performance standbys use log\nshipping to stay in sync with the upstream source of writes. The mitigation\nhistorically used by nodes doing writes via RPC is to look at the WAL index in\nthe response and wait up to 2 seconds to see if that WAL index appears in the\nlogs being shipped from upstream. Once the WAL index is seen, the Vault node\nhandling the request that resulted in RPCs can return its own response to the\nclient: it knows that any subsequent reads will be able to see the value that\nwas just written. 
If the WAL index isn't seen within those 2 seconds, the Vault\nnode completes the request anyway, returning a warning in the response.\n\nThis mitigation option still exists in Vault 1.7, though now there is a\nconfiguration option to adjust the wait time:\n[best_effort_wal_wait_duration](\/vault\/docs\/configuration\/replication).\n\n## Vault 1.7 mitigations\n\nThere are now a variety of other mitigations available:\n\n- per-request option to always forward the request to the active node\n- per-request option to conditionally forward the request to the active node\n  if it would otherwise result in a stale read\n- per-request option to fail requests if they might result in a stale read\n- Vault Proxy configuration to do the above for proxied requests\n\nThe remainder of this document describes the tradeoffs of these mitigations and\nhow to use them.\n\nNote that any headers requesting forwarding are disabled by default, and must\nbe enabled using [allow_forwarding_via_header](\/vault\/docs\/configuration\/replication).\n\n### Unconditional forwarding (Performance standbys only)\n\nThe simplest solution to never experience stale reads from a performance standby\nis to provide the following HTTP header in the request:\n\n```\nX-Vault-Forward: active-node\n```\n\nThe drawback here is that if all your requests are forwarded to the active node,\nyou might as well not be using performance standbys. 
So this mitigation only\nmakes sense to use selectively.\n\nThis mitigation will not help with stale reads relating to performance replication.\n\n### Conditional forwarding (Performance standbys only)\n\nAs of Vault Enterprise 1.7, all requests that modify storage now return a new\nHTTP response header:\n\n```\nX-Vault-Index: <base64 value>\n```\n\nTo ensure that the state resulting from that write request is visible to a\nsubsequent request, add these headers to that second request:\n\n```\nX-Vault-Index: <base64 value taken from previous response>\nX-Vault-Inconsistent: forward-active-node\n```\n\nThe effect will be that the node handling the request will look at the state\nit has locally, and if it doesn't contain the state described by the X-Vault-Index\nheader, the node will forward the request to the active node.\n\nThe drawback here is that when requests are forwarded to the active node,\nperformance standbys provide less value. If this happens often enough\nthe active node can become a bottleneck, limiting the horizontal read scalability\nperformance standbys are intended to provide.\n\n### Retry stale requests\n\nAs of Vault Enterprise 1.7, all requests that modify storage now return a new\nHTTP response header:\n\n```\nX-Vault-Index: <base64 value>\n```\n\nTo ensure that the state resulting from that write request is visible to a\nsubsequent request, add this header to that second request:\n\n```\nX-Vault-Index: <base64 value taken from previous response>\n```\n\nWhen the desired state isn't present, Vault will return a failure response with\nHTTP status code 412. This tells the client that it should retry the request.\nThe advantage over the Conditional Forwarding solution above is twofold:\nfirst, there's no additional load on the active node. 
Second, this solution\nis applicable to performance replication as well as performance standbys.\n\nThe Vault Go API will now automatically retry 412s, and provides convenience\nmethods for propagating the X-Vault-Index response header into the request\nheader of subsequent requests. Those not using the Vault Go API will want\nto build equivalent functionality into their client library.\n\n### Vault proxy and consistency headers\n\nWhen configured, the [Vault API Proxy](\/vault\/docs\/agent-and-proxy\/proxy\/apiproxy) will proxy incoming requests to Vault. There is\nProxy configuration available in the `api_proxy` stanza that allows making use\nof some of the above mitigations without modifying clients.\n\nBy setting `enforce_consistency=\"always\"`, Proxy will always provide\nthe `X-Vault-Index` consistency header. The value it uses for the header\nwill be based on the responses that have passed through the Proxy previously.\n\nThe option `when_inconsistent` controls how stale reads are prevented:\n\n- `\"fail\"` means that when a `412` response is seen, it is returned to the client\n- `\"retry\"` means that `412` responses will be retried automatically by Proxy,\n  so the client doesn't have to deal with them\n- `\"forward\"` makes Proxy provide the\n  `X-Vault-Inconsistent: forward-active-node` header as described above under\n  Conditional Forwarding\n\n## Vault 1.10 mitigations\n\nIn Vault 1.10, the token format changed: service tokens now employ server-side consistency.\nThis means that by default, requests made to nodes that cannot support\nread-after-write consistency, because they lack the WAL index necessary to\ncheck Vault tokens locally, will return\na 412 status code. 
The Vault Go API automatically retries when receiving 412s, so\nunless there is a considerable replication delay, users will experience\nread-after-write consistency.\n\nThe replication option [allow_forwarding_via_token](\/vault\/docs\/configuration\/replication)\ncan be used to force requests that would otherwise return a 412 in this way\nto be forwarded to the active node instead.\n\nRefer to the [Server Side Consistent Token FAQ](\/vault\/docs\/faq\/ssct) for details.\n\n## Client API helpers\n\nThere are some new helpers in the `api` package to work with the new headers.\n`WithRequestCallbacks` and `WithResponseCallbacks` create a shallow clone of\nthe client and populate it with the given callbacks. `RecordState` and\n`RequireState` are used to store the response header from one request and\nprovide it in a subsequent request. For example:\n\n```go\nclient, err := api.NewClient(api.DefaultConfig())\nif err != nil {\n    log.Fatal(err)\n}\nvar state string\n_, err = client.WithResponseCallbacks(api.RecordState(&state)).Logical().Write(path, data)\nsecret, err := client.WithRequestCallbacks(api.RequireState(state)).Logical().Read(path)\n```\n\nThis will retry the `Read` until the data stored by the `Write` is present.\nThere are also callbacks to use forwarding: `ForwardInconsistent` and\n`ForwardAlways`.","site":"vault"}
to Vault  There is Proxy configuration available in the  api proxy  stanza that allows making use of some of the above mitigations without modifying clients   By setting  enforce consistency  always    Proxy will always provide the  X Vault Index  consistency header  The value it uses for the header will be based on the responses that have passed through the Proxy previously   The option  when inconsistent  controls how stale reads are prevented       fail   means that when a  412  response is seen  it is returned to the client     retry   means that  412  responses will be retried automatically by Proxy    so the client doesn t have to deal with them     forward   makes Proxy provide the    X Vault Inconsistent  forward active node  header as described above under   Conditional Forwarding     Vault 1 10 mitigations  In Vault 1 10  the token format has changed  where service tokens now employ server side consistency  This means that by default  requests made to nodes which cannot support read after write consistency due to not having the necessary WAL index to check Vault tokens locally will output a 412 status code  The Vault Go API automatically retries when receiving 412s  so unless there is a considerable replication delay  users will experience read after write consistency   The replication option  allow forwarding via token   vault docs configuration replication  can be used to enforce requests that would have returned 412s in the aforementioned way will be forwarded instead to the active node   Refer to the  Server Side Consistent Token FAQ   vault docs faq ssct  for details      Client API helpers  There are some new helpers in the  api  package to work with the new headers   WithRequestCallbacks  and  WithResponseCallbacks  create a shallow clone of the client and populate it with the given callbacks   RecordState  and  RequireState  are used to store the response header from one request and provide it in a subsequent request  For example      go client    
api NewClient api DefaultConfig  var state string    err    client WithResponseCallbacks api RecordState  state   Write path  data  secret  err    client WithRequestCallbacks api RequireState state   Read path       This will retry the  Read  until the data stored by the  Write  is present  There are also callbacks to use forwarding   ForwardInconsistent  and  ForwardAlways  "}
{"questions":"vault encryption for supporting seals Seal wrap layout docs page title Vault Enterprise Seal Wrap Vault Enterprise features a mechanism to wrap values with an extra layer of","answers":"---\nlayout: docs\npage_title: Vault Enterprise Seal Wrap\ndescription: |-\n  Vault Enterprise features a mechanism to wrap values with an extra layer of\n  encryption for supporting seals.\n---\n\n# Seal wrap\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nVault Enterprise features a mechanism to wrap values with an extra layer of\nencryption for supporting [seals](\/vault\/docs\/configuration\/seal). This adds an\nextra layer of protection and is useful in some compliance and regulatory\nenvironments, including FIPS 140-2 environments.\n\nTo use this feature, you must have an active or trial license for Vault\nEnterprise Plus (HSMs). To start a trial, contact [HashiCorp\nsales](mailto:sales@hashicorp.com).\n\n## Seal Wrap benefits\n\nYour Vault deployments can gain the following benefits by enabling seal wrapping:\n\n- Conformance with FIPS 140-2 directives on Key Storage and Key Transport as [certified by Leidos](\/vault\/docs\/enterprise\/sealwrap#fips-140-2-compliance)\n- Supports FIPS level of security equal to HSM\n  - For example, if you use Level 3 hardware encryption on an HSM, Vault will be\n    using FIPS 140-2 Level 3 cryptography\n- Enables Vault deployments in high security [GRC](https:\/\/en.wikipedia.org\/wiki\/Governance,_risk_management,_and_compliance)\n  environments (e.g. PCI-DSS, HIPAA) where FIPS guidelines are important for external audits\n- Pathway to use Vault for managing Department of Defense (DOD) or North\n  Atlantic Treaty Organization (NATO) military secrets\n\n## Enabling\/Disabling\n\nSeal Wrap is enabled by default on supporting seals. This implies that the seal\nmust be available throughout Vault's runtime. 
Most cloud-based seals should be\nquite reliable, but, for instance, if using an HSM in a non-HA setup a\nconnection interruption to the HSM will result in issues with Vault\nfunctionality.\n\n<Tip>\n\nHaving Vault generate its own key is the easiest way to get up and running, but for security, Vault marks the key as non-exportable. If your HSM key backup strategy requires the key to be exportable, you should generate the key yourself. Refer to the [key generation attributes](\/vault\/docs\/configuration\/seal\/pkcs11#vault-key-generation-attributes).\n\n<\/Tip>\n\nTo disable seal wrapping, set `disable_sealwrap = true` in Vault's\n[configuration file][configuration]. This will not affect auto-unsealing functionality; Vault's\nroot key will still be protected by the seal wrapping mechanism. It will\nsimply prevent other storage entries within Vault from being seal wrapped.\n\n_N.B._: This is a lazy downgrade; as keys are accessed or written their seal\nwrapping status will change. Similarly, if the flag is removed, it will be a\nlazy upgrade (which is the case when initially upgrading to a seal\nwrap-supporting version of Vault).\n\n## Activating seal wrapping\n\nFor some values, seal wrapping is always enabled with a supporting seal. This\nincludes the recovery key, any stored key shares, the root key, the keyring,\nand more; essentially, any Critical Security Parameter (CSP) within Vault's\ncore. If upgrading from a version of Vault that did not support seal wrapping,\nthe next time these values are read they will be seal-wrapped and stored.\n\nBackend mounts within Vault can also take advantage of seal wrapping. Seal\nwrapping can be activated at mount time for a given mount by mounting the\nbackend with the `seal_wrap` configuration value set to `true`. (This value\ncannot currently be changed later.)\n\nA given backend's author can specify which values should be seal-wrapped by\nidentifying where CSPs are stored.  
They may also choose to seal wrap all or none\nof their values.\n\nNote that it is often an order of magnitude or two slower to write to and read\nfrom HSMs or remote seals. However, values will be cached in memory\nun-seal-wrapped (but still encrypted by Vault's built-in cryptographic barrier)\nin Vault, which will mitigate this for read-heavy workloads.\n\n## Seal wrap and replication\n\nSeal wrapping takes place below the replication logic. As a result, it is\ntransparent to replication. Replication will convey which values should be\nseal-wrapped, but it is up to the seal on the local cluster to implement it.\nIn practice, this means that seal wrapping can be used without needing to have\nthe replicated keys on both ends of the connection; each cluster can have\ndistinct keys in an HSM or in KMS.\n\nIn addition, it is possible to replicate from a Shamir-protected primary\ncluster to clusters that use HSMs when seal wrapping is required in downstream\ndatacenters but not in the primary.\n\n## Wrapped parameters\n\nEach plugin (whether secret or auth) maintains control over parameters it\nfeels are best to Seal Wrap. 
These are usually just a few core values as\nSeal Wrapping does incur some performance overhead.\n\nSome examples of places where seal wrapping is used include:\n\n- The [LDAP](\/vault\/docs\/auth\/ldap), [RADIUS](\/vault\/docs\/auth\/radius),\n  [Okta](\/vault\/docs\/auth\/okta), and [AWS](\/vault\/docs\/auth\/aws) auth methods,\n  for storing their config.\n- [PKI](\/vault\/docs\/secrets\/pki) for storing the issuers and their keys,\n- [SSH](\/vault\/docs\/secrets\/ssh) for storing the CA's keys,\n- [KMIP](\/vault\/docs\/secrets\/kmip) for storing managed objects (externally-provided\n  keys) and its CA keys.\n- [Transit](\/vault\/docs\/secrets\/transit) for storing keys and their policy.\n\n## FIPS status\n\nSee the [FIPS-specific Seal Wrap documentation](\/vault\/docs\/enterprise\/fips\/sealwrap)\nfor more information about using Seal Wrapping to achieve FIPS 140-2 compliance.\nNote that there are additional [FIPS considerations](\/vault\/docs\/enterprise\/sealwrap#seal-wrap-and-replication)\nregarding Seal Wrap usage and Vault Replication.\n\n[configuration]: \/vault\/docs\/configuration","site":"vault"}
{"questions":"vault include alerts enterprise only mdx Learn what data HashiCorp collects to meter Enterprise license utilization Enable or disable reporting Review sample payloads and logs Automated license utilization reporting layout docs page title Automated license utilization reporting","answers":"---\nlayout: docs\npage_title: Automated license utilization reporting\ndescription: >-\n  Learn what data HashiCorp collects to meter Enterprise license utilization. Enable or disable reporting. Review sample payloads and logs.\n---\n\n# Automated license utilization reporting\n\n@include 'alerts\/enterprise-only.mdx'\n\nAutomated license utilization reporting sends license utilization data to\nHashiCorp without requiring you to manually collect and report them. It also\nlets you review your license usage with the monitoring solution you already use\n(for example Splunk, Datadog, or others) so you can optimize and manage your\ndeployments. Use these reports to understand how much more you can deploy under\nyour current contract, protect against overutilization, and budget for predicted\nconsumption.\n\nAutomated reporting shares the minimum data required to validate license\nutilization as defined in our contracts. They consist of mostly computed metrics\nand will never contain Personal Identifiable Information (PII) or other\nsensitive information. Automated reporting shares the data with HashiCorp using\na secure, unidirectional HTTPS API and makes an auditable record in the product\nlogs each time it submits a report. The reporting process is GDPR compliant and submits\nreports roughly once every 24 hours.\n\n## Enable automated reporting\n\nTo enable automated reporting, you need to make sure that outbound network\ntraffic is configured correctly and upgrade your enterprise product to a version\nthat supports it. If your installation is air-gapped or network settings are not\nin place, automated reporting will not work.\n\n### 1. 
Allow outbound HTTPS traffic on port 443\n\nMake sure that your network allows HTTPS egress on port 443 to\nhttps:\/\/reporting.hashicorp.services by allow-listing the following IP\naddresses:\n\n- 100.20.70.12\n- 35.166.5.222\n- 23.95.85.111\n- 44.215.244.1\n\n### 2. Upgrade\n\nUpgrade to a release that supports license utilization reporting. These\nreleases include:\n\n- [Vault Enterprise 1.14.0](https:\/\/releases.hashicorp.com\/vault\/) and later\n- [Vault Enterprise 1.13.4](https:\/\/releases.hashicorp.com\/vault\/) and later 1.13.x versions\n- [Vault Enterprise 1.12.8](https:\/\/releases.hashicorp.com\/vault\/) and later 1.12.x versions\n- [Vault Enterprise 1.11.12](https:\/\/releases.hashicorp.com\/vault\/)\n\n### 3. Check logs\n\nAutomatic license utilization reporting will start sending data within roughly 24 hours.\nCheck the server logs for records that the data sent successfully.\n\nYou will find log entries similar to the following:\n\n<CodeBlockConfig hideClipboard>\n\n```\n[DEBUG] core.reporting: beginning snapshot export\n[DEBUG] core.reporting: creating payload\n[DEBUG] core.reporting: marshalling payload to json\n[DEBUG] core.reporting: generating authentication headers\n[DEBUG] core.reporting: creating request\n[DEBUG] core.reporting: sending request\n[DEBUG] core.reporting: performing request: method=POST url=https:\/\/reporting.hashicorp.services\n[DEBUG] core.reporting: recording audit record\n[INFO]  core.reporting: Report sent: 
auditRecord=\"{\\\"payload\\\":{\\\"payload_version\\\":\\\"1\\\",\\\"license_id\\\":\\\"97afe7b4-b9c8-bf19-bf35-b89b5cc0efea\\\",\\\"product\\\":\\\"vault\\\",\\\"product_version\\\":\\\"1.14.0-rc1+ent\\\",\\\"export_timestamp\\\":\\\"2023-06-01T09:34:44.215133-04:00\\\",\\\"snapshots\\\":[{\\\"snapshot_version\\\":1,\\\"snapshot_id\\\":\\\"0001J7H7KMEDRXKM5C1QJGBXV3\\\",\\\"process_id\\\":\\\"01H1T45CZK2GN9WR22863W2K32\\\",\\\"timestamp\\\":\\\"2023-06-01T09:34:44.215001-04:00\\\",\\\"schema_version\\\":\\\"1.0.0\\\",\\\"service\\\":\\\"vault\\\",\\\"metrics\\\":{\\\"clientcount.current_month_estimate\\\":{\\\"key\\\":\\\"clientcount.current_month_estimate\\\",\\\"kind\\\":\\\"sum\\\",\\\"mode\\\":\\\"write\\\",\\\"labels\\\":{\\\"type\\\":{\\\"entity\\\":20,\\\"nonentity\\\":11}}},\\\"clientcount.previous_month_complete\\\":{\\\"key\\\":\\\"clientcount.previous_month_complete\\\",\\\"kind\\\":\\\"sum\\\",\\\"mode\\\":\\\"write\\\",\\\"labels\\\":{\\\"type\\\":{\\\"entity\\\":10,\\\"nonentity\\\":11}}}}}],\\\"metadata\\\":{\\\"vault\\\":{\\\"billing_start\\\":\\\"2023-03-01T00:00:00Z\\\",\\\"cluster_id\\\":\\\"a8d95acc-ec0a-6087-d7f6-4f054ab2e7fd\\\"}}}}\"\n[DEBUG] core.reporting: completed recording audit record\n[DEBUG] core.reporting: export finished successfully\n```\n\n<\/CodeBlockConfig>\n\nIf your installation is air-gapped or your network doesn\u2019t allow the correct\negress, logs will show an error.\n\n<CodeBlockConfig hideClipboard>\n\n```\n[DEBUG] core.reporting: beginning snapshot export\n[DEBUG] core.reporting: creating payload\n[DEBUG] core.reporting: marshalling payload to json\n[DEBUG] core.reporting: generating authentication headers\n[DEBUG] core.reporting: creating request\n[DEBUG] core.reporting: sending request\n[DEBUG] core.reporting: performing request: method=POST url=https:\/\/reporting.hashicorp.services\n[DEBUG] core.reporting: error status code received: statusCode=403\n```\n\n<\/CodeBlockConfig>\n\nIn this case, reconfigure your 
network to allow egress and check back in 24\nhours.\n\n## Opt out\n\nIf your installation is air-gapped or you want to manually collect and report on\nthe same license utilization metrics, you can opt out of automated reporting.\n\nManually reporting these metrics can be time consuming. Opting out of automated\nreporting does not mean that you also opt out from sending license utilization\nmetrics. Customers who opt out of automated reporting will still be required to\nmanually collect and send license utilization metrics to HashiCorp.\n\nIf you are considering opting out because you\u2019re worried about the data, we\nstrongly recommend that you review the [example payloads](#example-payloads)\nbefore opting out. If you have concerns with any of the automatically-reported\ndata, please bring them to your account manager.\n\nYou have two options to opt out of automated reporting:\n\n- HCL configuration (recommended)\n- Environment variable (requires restart)\n\n#### HCL configuration\n\nOpting out in your product\u2019s configuration file doesn\u2019t require a system\nrestart, and is the method we recommend. Add the following block to your server\nconfiguration file (e.g. `vault-config.hcl`).\n\n```hcl\nreporting {\n\tlicense {\n\t\tenabled = false\n\t}\n}\n```\n\n<Warning>\n\nWhen you have a cluster, each node must have the reporting stanza in its\nconfiguration to be consistent. In the event of leadership change, nodes will\nuse their server configuration to determine whether or not to opt out of\nautomated reporting. 
Inconsistent configuration between nodes will change the\nreporting status upon active unseal.\n\n<\/Warning>\n\n\nYou will find the following entries in the server log.\n\n<CodeBlockConfig hideClipboard>\n\n```\n[DEBUG] core: reloading automated reporting\n[INFO]  core: opting out of automated reporting\n[DEBUG] activity: there is no reporting agent configured, skipping counts reporting\n```\n\n<\/CodeBlockConfig>\n\n#### Environment variable\n\nIf you need to, you can also opt out using an environment variable, which will\nprovide a startup message confirming that you have disabled automated reporting.\nThis option requires a system restart.\n\n<Note>\n\nIf the reporting stanza exists in the configuration file, the\n`OPTOUT_LICENSE_REPORTING` value overrides the configuration.\n\n<\/Note>\n\nSet the following environment variable.\n\n```shell-session\n$ export OPTOUT_LICENSE_REPORTING=true\n```\n\nNow, restart your [Vault servers](\/vault\/docs\/commands\/server) from the shell\nwhere you set the environment variable.\n\nYou will find the following entries in the server log.\n\n<CodeBlockConfig hideClipboard>\n\n```\n[INFO]  core: automated reporting disabled via environment variable: env=OPTOUT_LICENSE_REPORTING\n[INFO]  core: opting out of automated reporting\n[DEBUG] activity: there is no reporting agent configured, skipping counts reporting\n```\n\n<\/CodeBlockConfig>\n\n\nCheck your product logs roughly 24 hours after opting out to make sure that the system\nisn\u2019t trying to send reports.\n\nIf your configuration file and environment variable differ, the environment\nvariable setting will take precedence.\n\n## Example payloads\n\nHashiCorp collects the following utilization data as JSON payloads:\n\n- `payload_version` - The version of this payload schema\n- `license_id` - The license ID for this product\n- `product` - The product that this contribution is for\n- `product_version` - The product version this contribution is for\n- `export_timestamp`- The 
date and time for this contribution\n- `snapshots` - An array of snapshot details. A snapshot is a structure that\n  represents a single data collection\n   - `snapshot_version` - The version of the snapshot package that produced this\n     snapshot\n   - `snapshot_id` - A unique identifier for this particular snapshot\n   - `process_id` - An identifier for the system that produced this snapshot\n   - `timestamp` - The date and time for this snapshot\n   - `schema_version` - The version of the schema associated with this snapshot\n   - `service` - The service that produced this snapshot (likely to be product\n     name)\n   - `metrics` - A map of representations of snapshot metrics contained within\n     this snapshot\n      - `key` - The key name associated with this metric\n      - `kind` - The kind of metric (feature, counter, sum, or mean)\n      - `mode` - The mode of operation associated with this metric (write or\n        collect)\n      - `labels` - The labels associated with each collected metric\n         - `entity` - The sum of tokens generated for a unique client identifier\n         - `nonentity` - The sum of tokens without an entity attached\n- `metadata` - Optional product-specific metadata\n   - `billing_start` - The billing start date associated with the reporting cluster (license start date if not configured).\n   \n   <Note title=\"Important change to supported versions\">\n      \n      As of 1.16.7, 1.17.3 and later, \n      the <a href=\"\/vault\/docs\/concepts\/billing-start-date\">billing start date<\/a> automatically \n      rolls over to the latest billing year at the end of the last cycle.\n        \n      For more information, refer to the upgrade guide for your Vault version:\n\n      [Vault v1.16.x](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#auto-rolled-billing-start-date),\n      [Vault v1.17.x](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#auto-rolled-billing-start-date)\n\n    <\/Note> \n    \n   - `cluster_id` - The cluster UUID as 
shown by `vault status` on the reporting\n     cluster\n\n<CodeBlockConfig hideClipboard>\n\n```json\n{\n  \"payload_version\": \"1\",\n  \"license_id\": \"97afe7b4-b9c8-bf19-bf35-b89b5cc0efea\",\n  \"product\": \"vault\",\n  \"product_version\": \"1.14.0-rc1+ent\",\n  \"export_timestamp\": \"2023-06-01T11:39:00.76643-04:00\",\n  \"snapshots\": [\n    {\n      \"snapshot_version\": 1,\n      \"snapshot_id\": \"0001J7HEWM1PEHPMF5YZT8EV65\",\n      \"process_id\": \"01H1VSQMNYAP77R566F1Y03GE6\",\n      \"timestamp\": \"2023-06-01T11:39:00.766099-04:00\",\n      \"schema_version\": \"1.0.0\",\n      \"service\": \"vault\",\n      \"metrics\": {\n        \"clientcount.current_month_estimate\": {\n          \"key\": \"clientcount.current_month_estimate\",\n          \"kind\": \"sum\",\n          \"mode\": \"write\",\n          \"labels\": {\n            \"type\": {\n              \"entity\": 20,\n              \"nonentity\": 11\n            }\n          }\n        },\n        \"clientcount.previous_month_complete\": {\n          \"key\": \"clientcount.previous_month_complete\",\n          \"kind\": \"sum\",\n          \"mode\": \"write\",\n          \"labels\": {\n            \"type\": {\n              \"entity\": 10,\n              \"nonentity\": 11\n            }\n          }\n        }\n      }\n    }\n  ],\n  \"metadata\": {\n    \"vault\": {\n      \"billing_start\": \"2023-03-01T00:00:00Z\",\n      \"cluster_id\": \"a8d95acc-ec0a-6087-d7f6-4f054ab2e7fd\"\n    }\n  }\n}\n```\n\n<\/CodeBlockConfig>\n\n## Pre-1.9 counts\n\nWhen upgrading Vault from 1.8 (or earlier) to 1.9 (or later), utilization reporting will only include the [non-entity tokens](\/vault\/docs\/concepts\/client-count#non-entity-tokens) that are used after the upgrade.\n\nStarting in Vault 1.9, the activity log records and de-duplicates non-entity tokens by using the namespace and token's policies to generate a unique identifier. 
Because Vault did not create identifiers for these tokens before 1.9, the activity log cannot know whether this token has been seen pre-1.9. To prevent inaccurate and inflated counts, the activity log will ignore any counts of non-entity tokens that were created before the upgrade and only the non-entity tokens from versions 1.9 and later will be counted.\n\nSee the client count [overview](\/vault\/docs\/concepts\/client-count) and [FAQ](\/vault\/docs\/concepts\/client-count\/faq) for more information.\n","site":"vault","answers_cleaned":"    layout  docs page title  Automated license utilization reporting description       Learn what data HashiCorp collects to meter Enterprise license utilization  Enable or disable reporting  Review sample payloads and logs         Automated license utilization reporting   include  alerts enterprise only mdx   Automated license utilization reporting sends license utilization data to HashiCorp without requiring you to manually collect and report them  It also lets you review your license usage with the monitoring solution you already use  for example Splunk  Datadog  or others  so you can optimize and manage your deployments  Use these reports to understand how much more you can deploy under your current contract  protect against overutilization  and budget for predicted consumption   Automated reporting shares the minimum data required to validate license utilization as defined in our contracts  They consist of mostly computed metrics and will never contain Personal Identifiable Information  PII  or other sensitive information  Automated reporting shares the data with HashiCorp using a secure  unidirectional HTTPS API and makes an auditable record in the product logs each time it submits a report  The reporting process is GDPR compliant and submits reports roughly once every 24 hours      Enable automated reporting  To enable automated reporting  you need to make sure that outbound network traffic is configured correctly and 
upgrade your enterprise product to a version that supports it. If your installation is air-gapped or network settings are not in place, automated reporting will not work.

1. **Allow outbound HTTPS traffic on port 443.** Make sure that your network allows HTTPS egress on port 443 to https://reporting.hashicorp.services by allow-listing the following IP addresses:

   - 100.20.70.12
   - 35.166.5.222
   - 23.95.85.111
   - 44.215.244.1

2. **Upgrade.** Upgrade to a release that supports license utilization reporting. These releases include:

   - [Vault Enterprise 1.14.0](https://releases.hashicorp.com/vault/) and later
   - [Vault Enterprise 1.13.4](https://releases.hashicorp.com/vault/) and later 1.13.x versions
   - [Vault Enterprise 1.12.8](https://releases.hashicorp.com/vault/) and later 1.12.x versions
   - [Vault Enterprise 1.11.12](https://releases.hashicorp.com/vault/)

3. **Check logs.** Automated license utilization reporting will start sending data within roughly 24 hours. Check the server logs for records that the data sent successfully.

   You will find log entries similar to the following:

<CodeBlockConfig hideClipboard>

```
[DEBUG] core.reporting: beginning snapshot export
[DEBUG] core.reporting: creating payload
[DEBUG] core.reporting: marshalling payload to json
[DEBUG] core.reporting: generating authentication headers
[DEBUG] core.reporting: creating request
[DEBUG] core.reporting: sending request
[DEBUG] core.reporting: performing request: method=POST url=https://reporting.hashicorp.services
[DEBUG] core.reporting: recording audit record
[INFO]  core.reporting: Report sent: auditRecord={"payload":{"payload_version":"1","license_id":"97afe7b4-b9c8-bf19-bf35-b89b5cc0efea","product":"vault","product_version":"1.14.0-rc1+ent","export_timestamp":"2023-06-01T09:34:44.215133-04:00","snapshots":[{"snapshot_version":1,"snapshot_id":"0001J7H7KMEDRXKM5C1QJGBXV3","process_id":"01H1T45CZK2GN9WR22863W2K32","timestamp":"2023-06-01T09:34:44.215001-04:00","schema_version":"1.0.0","service":"vault","metrics":{"clientcount.current_month_estimate":{"key":"clientcount.current_month_estimate","kind":"sum","mode":"write","labels":{"type":{"entity":20,"nonentity":11}}},"clientcount.previous_month_complete":{"key":"clientcount.previous_month_complete","kind":"sum","mode":"write","labels":{"type":{"entity":10,"nonentity":11}}}}}],"metadata":{"vault":{"billing_start":"2023-03-01T00:00:00Z","cluster_id":"a8d95acc-ec0a-6087-d7f6-4f054ab2e7fd"}}}}
[DEBUG] core.reporting: completed recording audit record
[DEBUG] core.reporting: export finished successfully
```

</CodeBlockConfig>

If your installation is air-gapped or your network doesn't allow the correct egress, logs will show an error:

<CodeBlockConfig hideClipboard>

```
[DEBUG] core.reporting: beginning snapshot export
[DEBUG] core.reporting: creating payload
[DEBUG] core.reporting: marshalling payload to json
[DEBUG] core.reporting: generating authentication headers
[DEBUG] core.reporting: creating request
[DEBUG] core.reporting: sending request
[DEBUG] core.reporting: performing request: method=POST url=https://reporting.hashicorp.services
[DEBUG] core.reporting: error status code received: statusCode=403
```

</CodeBlockConfig>

In this case, reconfigure your network to allow egress and check back in 24 hours.

## Opt out

If your installation is air-gapped or you want to manually collect and report on the same license utilization metrics, you can opt out of automated reporting.

Manually reporting these metrics can be time consuming. Opting out of automated reporting does not mean that you also opt out from sending license utilization metrics. Customers who opt out of automated reporting will still be required to manually collect and send license utilization metrics to HashiCorp.

If you are considering opting out because you're worried about the data, we strongly recommend that you review the [example payloads](#example-payloads) before opting out. If you have concerns with any of the automatically reported data, please bring them to your account manager.

You have two options to opt out of automated reporting:

- HCL configuration (recommended)
- Environment variable (requires restart)

### HCL configuration

Opting out in your product's configuration file doesn't require a system restart, and is the method we recommend. Add the following block to your server configuration file (e.g. `vault-config.hcl`).

```hcl
reporting {
  license {
    enabled = false
  }
}
```

<Warning>

When you have a cluster, each node must have the reporting stanza in its configuration to be consistent. In the event of a leadership change, each node uses its own server configuration to determine whether or not to opt out of automated reporting. Inconsistent configuration between nodes will change the reporting status upon active unseal.

</Warning>

You will find the following entries in the server log:

<CodeBlockConfig hideClipboard>

```
[DEBUG] core: reloading automated reporting
[INFO]  core: opting out of automated reporting
[DEBUG] activity: there is no reporting agent configured, skipping counts reporting
```

</CodeBlockConfig>

### Environment variable

If you need to, you can also opt out using an environment variable, which will provide a startup message confirming that you have disabled automated reporting. This option requires a system restart.

<Note>

If the reporting stanza exists in the configuration file, the `OPTOUT_LICENSE_REPORTING` value overrides the configuration.

</Note>

Set the following environment variable:

```shell-session
$ export OPTOUT_LICENSE_REPORTING=true
```

Now, restart your [Vault servers](/vault/docs/commands/server) from the shell where you set the environment variable.

You will find the following entries in the server log:

<CodeBlockConfig hideClipboard>

```
[INFO]  core: automated reporting disabled via environment variable: env=OPTOUT_LICENSE_REPORTING
[INFO]  core: opting out of automated reporting
[DEBUG] activity: there is no reporting agent configured, skipping counts reporting
```

</CodeBlockConfig>

Check your product logs roughly 24 hours after opting out to make sure that the system isn't trying to send reports.

If your configuration file and environment variable differ, the environment variable setting will take precedence.

## Example payloads

HashiCorp collects the following utilization data as JSON payloads:

- `payload_version`: The version of this payload schema
- `license_id`: The license ID for this product
- `product`: The product that this contribution is for
- `product_version`: The product version this contribution is for
- `export_timestamp`: The date and time for this contribution
- `snapshots`: An array of snapshot details. A snapshot is a structure that represents a single data collection.
  - `snapshot_version`: The version of the snapshot package that produced this snapshot
  - `snapshot_id`: A unique identifier for this particular snapshot
  - `process_id`: An identifier for the system that produced this snapshot
  - `timestamp`: The date and time for this snapshot
  - `schema_version`: The version of the schema associated with this snapshot
  - `service`: The service that produced this snapshot (likely to be the product name)
  - `metrics`: A map of representations of snapshot metrics contained within this snapshot
    - `key`: The key name associated with this metric
    - `kind`: The kind of metric (feature, counter, sum, or mean)
    - `mode`: The mode of operation associated with this metric (write or collect)
    - `labels`: The labels associated with each collected metric
      - `entity`: The sum of tokens generated for a unique client identifier
      - `nonentity`: The sum of tokens without an entity attached
- `metadata`: Optional product-specific metadata
  - `billing_start`: The billing start date associated with the reporting cluster (license start date if not configured)

    <Note title="Important change to supported versions">

    As of 1.16.7, 1.17.3 and later, the <a href="/vault/docs/concepts/billing-start-date">billing start date</a> automatically rolls over to the latest billing year at the end of the last cycle.

    For more information, refer to the upgrade guide for your Vault version:

    - [Vault v1.16.x](/vault/docs/upgrading/upgrade-to-1.16.x#auto-rolled-billing-start-date)
    - [Vault v1.17.x](/vault/docs/upgrading/upgrade-to-1.17.x#auto-rolled-billing-start-date)

    </Note>

  - `cluster_id`: The cluster UUID as shown by `vault status` on the reporting cluster

<CodeBlockConfig hideClipboard>

```json
{
  "payload_version": "1",
  "license_id": "97afe7b4-b9c8-bf19-bf35-b89b5cc0efea",
  "product": "vault",
  "product_version": "1.14.0-rc1+ent",
  "export_timestamp": "2023-06-01T11:39:00.76643-04:00",
  "snapshots": [
    {
      "snapshot_version": 1,
      "snapshot_id": "0001J7HEWM1PEHPMF5YZT8EV65",
      "process_id": "01H1VSQMNYAP77R566F1Y03GE6",
      "timestamp": "2023-06-01T11:39:00.766099-04:00",
      "schema_version": "1.0.0",
      "service": "vault",
      "metrics": {
        "clientcount.current_month_estimate": {
          "key": "clientcount.current_month_estimate",
          "kind": "sum",
          "mode": "write",
          "labels": {
            "type": {
              "entity": 20,
              "nonentity": 11
            }
          }
        },
        "clientcount.previous_month_complete": {
          "key": "clientcount.previous_month_complete",
          "kind": "sum",
          "mode": "write",
          "labels": {
            "type": {
              "entity": 10,
              "nonentity": 11
            }
          }
        }
      }
    }
  ],
  "metadata": {
    "vault": {
      "billing_start": "2023-03-01T00:00:00Z",
      "cluster_id": "a8d95acc-ec0a-6087-d7f6-4f054ab2e7fd"
    }
  }
}
```

</CodeBlockConfig>

## Pre-1.9 counts

When upgrading Vault from 1.8 (or earlier) to 1.9 (or later), utilization reporting will only include the [non-entity tokens](/vault/docs/concepts/client-count#non-entity-tokens) that are used after the upgrade.

Starting in Vault 1.9, the activity log records and de-duplicates non-entity tokens by using the namespace and the token's policies to generate a unique identifier. Because Vault did not create identifiers for these tokens before 1.9, the activity log cannot know whether a given token has been seen pre-1.9. To prevent inaccurate and inflated counts, the activity log will ignore any counts of non-entity tokens that were created before the upgrade; only the non-entity tokens from versions 1.9 and later will be counted.

See the client count [overview](/vault/docs/concepts/client-count) and [FAQ](/vault/docs/concepts/client-count/faq) for more information.
"}
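To confirm that reports actually went out (or to spot the air-gapped 403 case), the server log can be grepped for the reporting outcome lines shown in the log examples above. This is an illustrative sketch, not a Vault feature: the captured log file `vault.log` and its contents are assumptions standing in for your real server log.

```shell
# Sketch: classify the reporting outcome from a captured server log.
# vault.log is an illustrative stand-in for your real server log.
cat > vault.log <<'EOF'
[DEBUG] core.reporting: performing request: method=POST url=https://reporting.hashicorp.services
[DEBUG] core.reporting: export finished successfully
EOF

if grep -q 'export finished successfully' vault.log; then
  echo "reporting OK"
elif grep -q 'error status code received' vault.log; then
  echo "reporting blocked - check HTTPS egress on port 443"
fi
```

Running the same two `grep` patterns against your real log roughly 24 hours after enabling (or opting out of) reporting gives a quick pass/fail signal without reading the full debug output.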
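The client counts embedded in the `Report sent` audit record can be pulled out with `jq` for a quick sanity check before deciding whether to opt out. A minimal sketch, with assumptions: `payload.json` is a hypothetical file holding the payload JSON, and the metric key and field layout below mirror the example payload rather than a documented contract.

```shell
# Sketch: extract entity/non-entity client counts from a captured reporting payload.
# payload.json is an illustrative stand-in for the JSON object in the audit record.
cat > payload.json <<'EOF'
{"snapshots":[{"metrics":{"clientcount.current_month_estimate":{"labels":{"type":{"entity":20,"nonentity":11}}}}}]}
EOF

# Metric keys contain dots, so index with ["..."] rather than .key syntax.
jq -r '.snapshots[0].metrics["clientcount.current_month_estimate"].labels.type
       | "entity=\(.entity) nonentity=\(.nonentity)"' payload.json
```

The printed counts should line up with what `vault operator usage` style client counting reports for the same month; large discrepancies are worth raising with your account manager.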
{"questions":"vault page title Frequently Asked Questions FAQ License FAQ An overview of license Q How do the license termination changes affect upgrades q how do the license termination changes affect upgrades layout docs This FAQ section is for license changes and updates introduced for Vault Enterprise","answers":"---\nlayout: docs\npage_title: Frequently Asked Questions (FAQ)\ndescription: An overview of license.\n---\n\n# License FAQ\n\nThis FAQ section is for license changes and updates introduced for Vault Enterprise.\n- [Q: How do the license termination changes affect upgrades?](#q-how-do-the-license-termination-changes-affect-upgrades)\n- [Q: What impact on upgrades do the license termination behavior changes pose?](#q-what-impact-on-upgrades-do-the-license-termination-behavior-changes-pose)\n- [Q: Will these license changes impact HCP Vault Dedicated?](#q-will-these-license-changes-impact-hcp-vault)\n- [Q: Do these license changes impact all Vault customers\/licenses?](#q-do-these-license-changes-impact-all-vault-customers-licenses)\n- [Q: What is the product behavior change introduced by the licensing changes?](#q-what-is-the-product-behavior-change-introduced-by-the-licensing-changes)\n- [Q: How will Vault behave at startup when a license expires or terminates?](#q-how-will-vault-behave-at-startup-when-a-license-expires-or-terminates)\n- [Q: What is the impact on evaluation licenses due to this change?](#q-what-is-the-impact-on-evaluation-licenses-due-to-this-change)\n- [Q: Are there any changes to existing methods for manual license loading (API or CLI)?](#q-are-there-any-changes-to-existing-methods-for-manual-license-loading-api-or-cli)\n- [Q: Is there a grace period when evaluation licenses expire?](#q-is-there-a-grace-period-when-evaluation-licenses-expire)\n- [Q: Are the license files locked to a specific cluster?](#q-are-the-license-files-locked-to-a-specific-cluster)\n- [Q: Will a EULA check happen every time a Vault 
restarts?](#q-will-a-eula-check-happen-every-time-a-vault-restarts)\n- [Q: What scenarios should a customer plan for due to these license changes?](#q-what-scenarios-should-a-customer-plan-for-due-to-these-license-changes)\n- [Q: What is the migration path for customers who want to migrate from their existing license-as-applied-via-the-CLI flow to the license on disk flow?](#q-what-is-the-migration-path-for-customers-who-want-to-migrate-from-their-existing-license-as-applied-via-the-cli-flow-to-the-license-on-disk-flow)\n- [Q: What is the path for customers who want to downgrade\/rollback from Vault 1.11 or later (auto-loaded license mandatory) to a pre-Vault 1.11 (auto-loading not mandatory, stored license supported)?](#q-what-is-the-path-for-customers-who-want-to-downgrade-rollback-from-vault-1-11-or-later-auto-loaded-license-mandatory-to-a-pre-vault-1-11-auto-loading-not-mandatory-stored-license-supported)\n- [Q: Is there a limited time for support of licenses that are in storage?](#q-is-there-a-limited-time-for-support-of-licenses-that-are-in-storage)\n- [Q: What are the steps to upgrade from one autoloaded license to another autoloaded license?](#q-what-are-the-steps-to-upgrade-from-one-autoloaded-license-to-another-autoloaded-license)\n- [Q: What are the Vault ADP module licensing changes introduced in 1.8?](#q-what-are-the-vault-adp-module-licensing-changes-introduced-in-1-8)\n- [Q: How can the new ADP modules be purchased and what features are customer entitled to as part of that purchase?](#q-how-can-the-new-adp-modules-be-purchased-and-what-features-are-customer-entitled-to-as-part-of-that-purchase)\n- [Q: What is the impact to customers based on these ADP module licensing changes?](#q-what-is-the-impact-to-customers-based-on-these-adp-module-licensing-changes)\n\n### Q: what is the product behavior change introduced by the licensing changes?\n\nPer the [feature deprecation plans](\/vault\/docs\/deprecation), Vault will no longer support licenses in 
storage. An [auto-loaded license](\/vault\/docs\/enterprise\/license\/autoloading) must be used instead. If you are using stored licenses, you must migrate to auto-loaded licenses prior to upgrading to Vault 1.11.\n\nVault 1.12 will also introduce different termination behavior for evaluation licenses versus non-evaluation licenses. An evaluation license will include a 30-day trial period after which a running Vault server will terminate. Vault servers using a non-evaluation license will not terminate.\n\n### Q: how do the license termination changes affect upgrades?\nVault 1.12 will introduce changes to the license termination behavior. Upgrades when using expired licenses will now be limited.\nVault will not start up if the build date of the binary is _after_ the expiration date of a license. License expiration date and binary build date compatibility can be verified using the [Check for Autoloaded License](\/vault\/docs\/commands\/operator\/diagnose#check-for-autoloaded-license) check performed by the `vault operator diagnose` command.\nThe build date of a binary can also be found using the [vault version](\/vault\/docs\/commands\/version#version) command.\n\nA user can expect the following behavior in each of these scenarios:\n\n**Evaluation or non-evaluation license is valid:**\nVault will start normally\n\n**Evaluation or non-evaluation license is expired, binary build date _before_ license expiry date:**\nVault will start normally\n\n**Evaluation or non-evaluation license is expired, binary build date _after_ license expiry date:**\nVault will not start\n\n**Evaluation license is terminated:**\nVault will not start independent of the binary's build date\n\n**Non-evaluation license is terminated, binary build date _before_ license expiry date:**\nVault will start normally\n\n**Non-evaluation license is terminated, binary build date _after_ license expiry date:**\nVault will not start\n\nThe Vault support team can issue you a temporary evaluation license 
to allow for security upgrades if your license has expired.\n\n### Q: will these license changes impact HCP Vault Dedicated?\n\nNo, these changes will not impact HCP Vault Dedicated.\n\n### Q: do these license changes impact all Vault customers\/licenses?\n\n| Customer\/licenses                                                                                                           | Impacted? |\n| --------------------------------------------------------------------------------------------------------------------------- | --------- |\n| ENT binaries (evaluation or non-evaluation downloaded from [releases.hashicorp.com](https:\/\/releases.hashicorp.com\/vault\/)) | Yes       |\n| Business-Source (BSL)                                                                                                       | No        |\n\n### Q: what is the product behavior change introduced by the licensing changes?\n\nWith Vault 1.11, the use of an [auto-loaded license](\/vault\/docs\/enterprise\/license\/autoloading) is required for Vault to start successfully.\n\n### Q: how will Vault behave at startup when a license expires or terminates?\n\nWhen a license expires, Vault continues to function until the license terminates. This behavior exists today and remains unchanged in Vault 1.11. The grace period, defined as the time between license expiration and license termination, is one day for evaluation licenses (as of 1.8), and ten years for non-evaluation licenses.\nCustomers must provide a valid license before the grace period expires. This license is required to be [auto-loaded](\/vault\/docs\/enterprise\/license\/autoloading). When the license terminates (upon grace period expiry), Vault will seal itself and customers will need a valid license in order to successfully bring up Vault. 
If a valid license was not installed after license expiry, customers will need to provide one, and this license will need to be auto-loaded.\n\nVault 1.12 changes the license expiration and termination behavior. Evaluation licenses include a 30-day trial period after which a running Vault server will terminate. Non-evaluation licenses, however, will no longer terminate. When a non-evaluation license expires, Vault will continue to function but upgrades will be limited. The build date of the upgrade binary must be before the expiration date of the license.\nVault will not start when attempting to use an expired license and binary with a build date _after_ the license expiration date. Attempting to [reload](\/vault\/api-docs\/system\/config-reload#reload-license-file) an expired license will result in an error if the build date of the running Vault server is _after_ the license expiration date.\n\nLicense expiration date and binary build date compatibility can be verified using the [Check for Autoloaded License](\/vault\/docs\/commands\/operator\/diagnose#check-for-autoloaded-license) check performed by the `vault operator diagnose` command. The build date of a binary can also be found using the [vault version](\/vault\/docs\/commands\/version#version) command.\n\n### Q: what is the impact on evaluation licenses due to this change?\n\nAs of Vault 1.8, any Vault cluster deployed must have a valid [auto-loaded](\/vault\/docs\/enterprise\/license\/autoloading) license.\n\nVault 1.12 introduces [expiration and termination behavior changes](#q-how-will-vault-behave-at-startup-when-a-license-expires-or-terminates) for non-evaluation licenses. Evaluation licenses will continue to have a 1-day grace period upon license expiry after which they will terminate. 
Vault will seal itself and shut down once an evaluation license terminates.\n\n### Q: are there any changes to existing methods for manual license loading (API or CLI)?\n\nThe [`\/sys\/license`](\/vault\/api-docs\/v1.10.x\/system\/license#install-license) and [`\/sys\/license\/signed`](\/vault\/api-docs\/v1.10.x\/system\/license#read-signed-license) endpoints have been removed as of Vault 1.11. As a result, it is no longer possible to provide a license via the `\/sys\/license` endpoint. License [auto-loading](\/vault\/docs\/enterprise\/license\/autoloading) must be used instead.\nThe [`\/sys\/config\/reload\/license`](\/vault\/api-docs\/system\/config-reload#reload-license-file) endpoint can be used to reload an auto-loaded license provided as a path via an environment variable or configuration.\n\n### Q: is there a grace period when evaluation licenses expire?\n\nEvaluation licenses have a 1-day grace period. The grace period is the time between license expiration and license termination. Upon termination, Vault will seal and will require a valid license to unseal and function properly.\n\n### Q: are the license files locked to a specific cluster?\n\nThe changes to licenses apply to all nodes in a cluster. The license files are not locked to a cluster, but are independently applied to the appropriate clusters.\n\n### Q: will a EULA check happen every time a Vault restarts?\n\nYes, starting with Vault 1.8, ENT binaries will be subjected to a EULA check. 
The release of Vault 1.8 introduces the EULA check for evaluation licenses (non-evaluation licenses are evaluated with a EULA check during contractual agreement).\nAlthough the agreement to a EULA occurs only once (when the user receives their license), Vault will check for the presence of a valid license every time a node is restarted.\n\nStarting with Vault 1.11, when customers deploy new Vault clusters or upgrade existing Vault clusters, a valid [auto-loaded](\/vault\/docs\/enterprise\/license\/autoloading) license must exist for the upgrade to be successful.\n\n### Q: what scenarios should a customer plan for due to these license changes?\n\n- **New Cluster Deployment**: When a customer deploys new clusters to Vault 1.11 or later, a valid license must exist to successfully deploy. This valid license must be on disk ([auto-loaded](\/vault\/docs\/enterprise\/license\/autoloading)).\n\n- **Eventual Migration**: Vault 1.11 removes support for in-storage licenses. Migrating to an auto-loaded license is required for Vault to start successfully using version 1.11 or greater. Pre-existing license storage entries will be automatically removed from storage upon startup.\n\n### Q: what is the migration path for customers who want to migrate from their existing license-as-applied-via-the-CLI flow to the license on disk flow?\n\nIf a Vault cluster using a stored license is planned to be upgraded to Vault 1.11 or greater, the operator must migrate to using an auto-loaded license. The [`vault license get -signed`](\/vault\/docs\/v1.10.x\/commands\/license\/get) command (or underlying [`\/sys\/license\/signed`](\/vault\/api-docs\/v1.10.x\/system\/license#read-signed-license) endpoint) can be used to retrieve the license from storage in a running cluster.\nIt is not necessary to remove the stored license entry. That will occur automatically upon startup in Vault 1.11 or greater. 
Prior to completing the [recommended upgrade steps](\/vault\/docs\/upgrading), perform the following to ensure your license is properly configured:\n\n1. Use the command `vault license get -signed` to retrieve the license from storage of your running cluster.\n2. Put the license on disk.\n3. Configure license auto-loading by specifying the [`license_path`](\/vault\/docs\/configuration#license_path) config option or setting the [`VAULT_LICENSE`](\/vault\/docs\/commands#vault_license) or [`VAULT_LICENSE_PATH`](\/vault\/docs\/commands#vault_license_path) environment variable.\n\n### Q: what is the path for customers who want to downgrade\/rollback from Vault 1.11 or later (auto-loaded license mandatory) to a pre-Vault 1.11 (auto-loading not mandatory, stored license supported)?\n\nThe downgrade procedure remains the same for Vault customers who are currently on Vault 1.11 or later, have a license installed via auto-loading, and would like to downgrade their cluster to a pre-1.11 version. Please refer to the [upgrade procedures](\/vault\/tutorials\/standard-procedures\/sop-upgrade), which remind customers to take a snapshot before the upgrade. Customers will need to restore their version from the snapshot, apply the pre-1.11 enterprise binary version they wish to roll back to, and bring up the clusters.\n\n### Q: is there a limited time for support of licenses that are in storage?\n\nSupporting licenses installed by alternative means has often made it difficult to provide appropriate support. To provide the support our customers expect, as announced in [Vault feature deprecations and plans](\/vault\/docs\/deprecation), we are removing support for licenses in storage with Vault 1.11. This means licensing endpoints that deal with licenses in storage will be removed, and Vault will no longer check for valid licenses in storage. 
This change requires that all customers have [auto-loaded](\/vault\/docs\/enterprise\/license\/autoloading) licenses to upgrade to 1.11(+) successfully.\n\n### Q: what are the steps to upgrade from one autoloaded license to another autoloaded license?\n\nFollow these steps to migrate from one autoloaded license to another autoloaded license.\n\n1. Use the [`vault license inspect`](\/vault\/docs\/commands\/license\/inspect) command to compare the new license against the output of the [`vault license get`](\/vault\/docs\/commands\/license\/get) command. This is to ensure that you have the correct license.\n1. Back up the old license file in a safe location.\n1. Replace the old license file on each Vault server with the new one.\n1. Reload the license on each individual Vault server, starting with the standby nodes and finishing with the leader. Proceeding in this order reduces possible disruption if a previous step was performed incorrectly. You can either invoke the [reload command](\/vault\/api-docs\/system\/config-reload#reload-license-file) or send a SIGHUP.\n1. 
On each node, ensure that the new license is in use by using the [`vault license get`](\/vault\/docs\/commands\/license\/get) command and\/or checking the logs.\n\n# ADP licensing\n\nThis FAQ section is for the Advanced Data Protection (ADP) license changes introduced in Vault Enterprise 1.8.\n\n### Q: what are the Vault ADP module licensing changes introduced in 1.8?\n\nAs of Vault Enterprise 1.8, the functionality formerly sold as the Vault ADP module is now separated between two new modules:\n\n**ADP-KM** includes:\n\n- [Key Management Secrets Engine (KMSE)](\/vault\/docs\/secrets\/key-management)\n- [Key Management Interoperability (KMIP)](\/vault\/docs\/secrets\/kmip)\n- [MSSQL Transparent Data Encryption (TDE)](https:\/\/www.hashicorp.com\/blog\/enabling-transparent-data-encryption-for-microsoft-sql-with-vault)\n\n**ADP-Transform** includes:\n\n- [Transform Secrets Engine (TSE)](\/vault\/docs\/secrets\/transform)\n\n### Q: how can the new ADP modules be purchased and what features are customer entitled to as part of that purchase?\n\n**ADP-KM includes**:\n\n- This is the first Vault Enterprise module that can be purchased standalone. This means it can be purchased without the purchase of a Vault Enterprise Standard license.\n- ADP-KM still requires a Vault Enterprise binary. The Vault Enterprise Standard license is automatically included with the ADP-KM module, but customers are contractually prohibited from using any features besides those in Vault Community Edition and ADP-KM (KMSE and KMIP).\n\n**ADP-Transform includes**:\n\n- This module cannot be purchased as a standalone. It requires a Vault Enterprise binary, and customers must purchase the base Vault Enterprise Standard license (at least) to use the corresponding Enterprise features.\n- The ADP-Transform SKU can be applied as an add-on. 
This workflow is similar to the consolidated ADP SKU.\n\n### Q: what is the impact to customers based on these ADP module licensing changes?\n\nCustomers need to be aware of the following as a result of these changes:\n\n- **New customers** may choose to purchase either or both of these modules. The old (consolidated) module is not available to them as an option.\n- **Existing customers** may continue with the consolidated Vault ADP module uninterrupted. They will only be converted to one or both new ADP modules the next time they make a change to their licensing details (i.e. contract change).","site":"vault","answers_cleaned":"
vault api docs v1 10 x system license read signed license  endpoints have been removed as of Vault 1 11  With that said  it is no longer possible to provide a license via the   sys license  endpoint  License  auto loading   vault docs enterprise license autoloading  must be used instead  The    sys config reload license    vault api docs system config reload reload license file  endpoint can be used to reload an auto loaded license provided as a path via an environment variable or configuration       Q  is there a grace period when evaluation licenses expire   Evaluation licenses have a 1 day grace period  The grace period is the time until the license expires  Upon expiration  Vault will seal and will require a valid license to unseal and function properly       Q  are the license files locked to a specific cluster   The changes to licenses apply to all nodes in a cluster  The license files are not locked to a cluster  but are independently applied to the appropriate clusters       Q  will a EULA check happen every time a Vault restarts   Yes  starting with Vault 1 8  ENT binaries will be subjected to a EULA check  The release of Vault 1 8 introduces the EULA check for evaluation licenses  non evaluation licenses are evaluated with a EULA check during contractual agreement    Although the agreement to a EULA occurs only once  when the user receives their license   Vault will check for the presence of a valid license every time a node is restarted   Starting Vault 1 11  when customers deploy new Vault clusters  or upgrade existing Vault clusters  a valid  auto loaded   vault docs enterprise license autoloading  license must exist for the upgrade to be successful       Q  what scenarios should a customer plan for due to these license changes       New Cluster Deployment    When a customer deploys new clusters to Vault 1 11 or later  a valid license must exist to successfully deploy  This valid license must be on disk   auto loaded   vault docs enterprise license 
autoloading         Eventual Migration    Vault 1 11 removes support for in storage licenses  Migrating to an auto loaded license is required for Vault to start successfully using version 1 11 or greater  Pre existing license storage entries will be automatically removed from storage upon startup       Q  what is the migration path for customers who want to migrate from their existing license as applied via the CLI flow to the license on disk flow   If a Vault cluster using a stored license is planned to be upgraded to Vault 1 11 or greater  the operator must migrate to using an auto loaded license  The   vault license get  signed    vault docs v1 10 x commands license get  command  or underlying    sys license signed    vault api docs v1 10 x system license read signed license  endpoint  can be used to retrieve the license from storage in a running cluster  It is not necessary to remove the stored license entry  That will occur automatically upon startup in Vault 1 11 or greater  Prior to completing the  recommended upgrade steps   vault docs upgrading   perform the following to ensure your license is properly configured   1  Use the command  vault license get  signed  to retrieve the license from storage of your running cluster  2  Put the license on disk 3  Configure license auto loading by specifying the   license path    vault docs configuration license path  config option or setting the   VAULT LICENSE    vault docs commands vault license  or   VAULT LICENSE PATH    vault docs commands vault license path  environment variable       Q  what is the path for customers who want to downgrade rollback from Vault 1 11 or later  auto loaded license mandatory  to a pre Vault 1 11  auto loading not mandatory  stored license supported    The downgrade procedure remains the same for Vault customers who are currently on Vault 1 11 or later  have a license installed via auto loading  and would like to downgrade their cluster to a pre 1 11 version  Please refer to the  
upgrade procedures   vault tutorials standard procedures sop upgrade  that remind the customers that they must take a snapshot before the upgrade  Customers will need to restore their version from the snapshot  apply the pre 1 11 enterprise binary version they wish to roll back  and bring up the clusters       Q  is there a limited time for support of licenses that are in storage   The support of licenses installed by alternative means often leads to difficulties providing the appropriate support  To provide the support expected by our customers  as we have announced in  Vault feature deprecations and plans   vault docs deprecation  we are removing support for licenses in storage with Vault 1 11  This implies licensing endpoints that deal with licenses in storage will be removed  and Vault will no longer check for valid licenses in storage  This change requires that all customers have  auto loaded   vault docs enterprise license autoloading  licenses to upgrade to 1 11    successfully       Q  what are the steps to upgrade from one autoloaded license to another autoloaded license   Follow these steps to migrate from one autoloaded license to another autoloaded license   1  Use the   vault license inspect    vault docs commands license inspect  command to compare the new license against the output of the   vault license get    vault docs commands license get  command  This is to ensure that you have the correct license  1  Backup the old license file in a safe location  1  Replace the old license file on each Vault server with the new one  1  Invoke the  reload command   vault api docs system config reload reload license file  on each individual Vault server  starting with the standbys and doing the leader last  Invoking in this manner reduces possible disruptions if something was performed incorrectly with the above steps  You can either use the  reload command   vault api docs system config reload reload license file  or send a SIGHUP  1  On each node  ensure that 
the new license is in use by using the   vault license get    vault docs commands license get  command and or checking the logs     ADP licensing  This FAQ section is for the Advanced Data Protection  ADP  license changes introduced in Vault Enterprise 1 8       Q  what are the Vault ADP module licensing changes introduced in 1 8   As of Vault Enterprise 1 8  the functionality formerly sold as the Vault ADP module is now separated between two new modules     ADP KM   includes      Key Management Secrets Engine  KMSE    vault docs secrets key management     Key Management Interoperability  KMIP    vault docs secrets kmip     MSSQL Transparent Data Encryption  TDE   https   www hashicorp com blog enabling transparent data encryption for microsoft sql with vault     ADP Transform   includes      Transform Secrets Engine  TSE    vault docs secrets transform       Q  how can the new ADP modules be purchased and what features are customer entitled to as part of that purchase     ADP KM includes       This is the first Vault Enterprise module that can be purchased standalone  This means it can be purchased without the purchase of a Vault Enterprise Standard license    ADP KM still requires a Vault Enterprise binary  The Vault Enterprise Standard license is automatically included with the ADP KM module  but customers are contractually prohibited from using any features besides those in Vault Community Edition and ADP KM  KMSE and KMIP      ADP Transform includes       This module cannot be purchased as a standalone  It requires a Vault Enterprise binary  and customers must purchase the base Vault Enterprise Standard license  at least  to use the corresponding Enterprise features    The ADP Transform SKU can be applied as an add on  This workflow is similar to the consolidated ADP SKU       Q  what is the impact to customers based on these ADP module licensing changes   Customers need to be aware of the following as a result of these changes       New customers   may choose 
to purchase either or both of these modules  The old  consolidated  module is not available to them as an option      Existing customers   may continue with the consolidated Vault ADP module uninterrupted  They will only be converted to one or both new ADP modules the next time they make a change to their licensing details  i e  contract change  "}
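The stored-license migration described in the FAQ above (retrieve the stored license, put it on disk, configure auto-loading, verify compatibility) can be sketched as a single shell session. This is a hypothetical transcript against a running pre-1.11 Vault Enterprise cluster; the file locations `/opt/vault/license.hclic` and `/etc/vault.d/vault.hcl` are illustrative assumptions, not prescribed paths.

```shell-session
$ # 1. Retrieve the stored license from the running cluster (pre-1.11 binary)
$ vault license get -signed > /opt/vault/license.hclic

$ # 2. Put the license on disk on every server, then configure auto-loading
$ cat >> /etc/vault.d/vault.hcl <<'EOF'
license_path = "/opt/vault/license.hclic"
EOF

$ # Alternatively, point Vault at the file through the environment
$ export VAULT_LICENSE_PATH=/opt/vault/license.hclic

$ # 3. Verify license/binary compatibility before upgrading
$ vault operator diagnose -config=/etc/vault.d/vault.hcl
```

The "Check for Autoloaded License" step in the `vault operator diagnose` output is what confirms that the license expiration date and the binary build date are compatible before you move each node to the 1.11 binary.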
{"questions":"vault page title Product usage reporting include alerts enterprise only mdx Learn what anonymous usage data HashiCorp collects as part of Enterprise utilization reporting Enable or disable collection layout docs Product usage reporting","answers":"---\nlayout: docs\npage_title: Product usage reporting\ndescription: >-\n  Learn what anonymous usage data HashiCorp collects as part of Enterprise utilization reporting. Enable or disable collection.\n---\n\n# Product usage reporting\n\n@include 'alerts\/enterprise-only.mdx'\n\nHashiCorp collects usage data about how Vault clusters are being used. This data is not\nused for billing and is numerical only, and no sensitive information of\nany nature is being collected. The data is GDPR compliant and is collected as part of\nthe [license utilization reporting](\/vault\/docs\/enterprise\/license\/utilization-reporting)\nprocess. If automated reporting is enabled, this data will be collected automatically.\nIf automated reporting is disabled, then this will be collected as part of the manual reports.\n\n## Opt out\n\nWhile none of the collected usage metrics are sensitive in any way, if you are still concerned\nabout these usage metrics being reported, then you can opt out of them being collected.\n\nIf you are considering opting out because you\u2019re worried about the data, we\nstrongly recommend that you review the [usage metrics list](#usage-metrics-list)\nbefore opting out. If you have concerns with any of the automatically-reported\ndata, please bring them to your account manager.\n\nYou have two options to opt out of product usage collection:\n\n- HCL configuration (recommended)\n- Environment variable (requires restart)\n\n\n#### HCL configuration\n\nOpting out in your product's configuration file doesn't require a system\nrestart, and is the method we recommend. Add the following block to your server\nconfiguration file (e.g. 
`vault-config.hcl`).\n\n```hcl\nreporting {\n  disable_product_usage_reporting = true\n}\n```\n\n<Warning>\n\n  When you have a cluster, each node must have the reporting stanza in its\n  configuration for the setting to be consistent. In the event of a leadership change, each node will\n  use its own server configuration to determine whether or not to opt out of\n  product usage collection. Inconsistent configuration between nodes will change the\n  reporting status upon active unseal.\n\n<\/Warning>\n\n\nYou will find the following entries in the server log.\n\n<CodeBlockConfig hideClipboard>\n\n  ```\n  [DEBUG] activity: there is no reporting agent configured, skipping counts reporting\n  ```\n\n<\/CodeBlockConfig>\n\n#### Environment variable\n\nIf you need to, you can also opt out using an environment variable, which will\nprovide a startup message confirming that product usage data collection is disabled.\nThis option requires a system restart.\n\n<Note>\n\n  If the reporting stanza exists in the configuration file, the\n  `OPTOUT_PRODUCT_USAGE_REPORTING` value overrides the configuration.\n\n<\/Note>\n\nSet the following environment variable.\n\n```shell-session\n$ export OPTOUT_PRODUCT_USAGE_REPORTING=true\n```\n\nNow, restart your [Vault servers](\/vault\/docs\/commands\/server) from the shell\nwhere you set the environment variable.\n\nYou will find the following entries in the server log.\n\n<CodeBlockConfig hideClipboard>\n\n  ```\n  [DEBUG] core: product usage reporting disabled\n  ```\n\n<\/CodeBlockConfig>\n\nIf your configuration file and environment variable differ, the environment\nvariable setting will take precedence.\n\n## Usage metrics list\n\nHashiCorp collects the following product usage metrics as part of the `metrics` part of the\n[JSON payload that it collects for license utilization](\/vault\/docs\/enterprise\/license\/utilization-reporting#example-payloads).\nAll of these metrics are numerical, and contain no sensitive values or additional metadata:\n\n| Metric 
Name                                          | Description                                                                        |\n|------------------------------------------------------|------------------------------------------------------------------------------------|\n| `vault.namespaces.count`                             | Total number of namespaces.                                                        |\n| `vault.leases.count`                                 | Total number of leases within Vault.                                               |\n| `vault.quotas.ratelimit.count`                       | Total number of rate limit quotas within Vault.                                    |\n| `vault.quotas.leasecount.count`                      | Total number of lease count quotas within Vault.                                   |\n| `vault.kv.version1.secrets.count`                    | Total number of KVv1 secrets within Vault.                                         |\n| `vault.kv.version2.secrets.count`                    | Total number of KVv2 secrets within Vault.                                         |\n| `vault.kv.version1.secrets.namespace.max`            | The highest number of KVv1 secrets in a namespace in Vault, e.g. `1000`.           |\n| `vault.kv.version2.secrets.namespace.max`            | The highest number of KVv2 secrets in a namespace in Vault, e.g. `1000`.           |\n| `vault.kv.version1.secrets.namespace.min`            | The lowest number of KVv1 secrets in a namespace in Vault, e.g. `2`.               |\n| `vault.kv.version2.secrets.namespace.min`            | The lowest number of KVv2 secrets in a namespace in Vault, e.g. `2`.               |\n| `vault.kv.version1.secrets.namespace.mean`           | The mean number of KVv1 secrets in namespaces in Vault, e.g. `52.8`.               |\n| `vault.kv.version2.secrets.namespace.mean`           | The mean number of KVv2 secrets in namespaces in Vault, e.g. `52.8`.               
|\n| `vault.auth.method.approle.count`                    | The total number of Approle auth mounts in Vault.                                  |\n| `vault.auth.method.alicloud.count`                   | The total number of Alicloud auth mounts in Vault.                                 |\n| `vault.auth.method.aws.count`                        | The total number of AWS auth mounts in Vault.                                      |\n| `vault.auth.method.appid.count`                      | The total number of App ID auth mounts in Vault.                                   |\n| `vault.auth.method.azure.count`                      | The total number of Azure auth mounts in Vault.                                    |\n| `vault.auth.method.cloudfoundry.count`               | The total number of Cloud Foundry auth mounts in Vault.                            |\n| `vault.auth.method.github.count`                     | The total number of GitHub auth mounts in Vault.                                   |\n| `vault.auth.method.gcp.count`                        | The total number of GCP auth mounts in Vault.                                      |\n| `vault.auth.method.jwt.count`                        | The total number of JWT auth mounts in Vault.                                      |\n| `vault.auth.method.kerberos.count`                   | The total number of Kerberos auth mounts in Vault.                                 |\n| `vault.auth.method.kubernetes.count`                 | The total number of Kubernetes auth mounts in Vault.                               |\n| `vault.auth.method.ldap.count`                       | The total number of LDAP auth mounts in Vault.                                     |\n| `vault.auth.method.oci.count`                        | The total number of OCI auth mounts in Vault.                                      |\n| `vault.auth.method.okta.count`                       | The total number of Okta auth mounts in Vault.                                   
  |\n| `vault.auth.method.pcf.count`                        | The total number of PCF auth mounts in Vault.                                      |\n| `vault.auth.method.radius.count`                     | The total number of Radius auth mounts in Vault.                                   |\n| `vault.auth.method.saml.count`                       | The total number of SAML auth mounts in Vault.                                     |\n| `vault.auth.method.cert.count`                       | The total number of Cert auth mounts in Vault.                                     |\n| `vault.auth.method.oidc.count`                       | The total number of OIDC auth mounts in Vault.                                     |\n| `vault.auth.method.token.count`                      | The total number of Token auth mounts in Vault.                                    |\n| `vault.auth.method.userpass.count`                   | The total number of Userpass auth mounts in Vault.                                 |\n| `vault.auth.method.plugin.count`                     | The total number of custom plugin auth mounts in Vault.                            |\n| `vault.secret.engine.activedirectory.count`          | The total number of Active Directory secret engines in Vault.                      |\n| `vault.secret.engine.alicloud.count`                 | The total number of Alicloud secret engines in Vault.                              |\n| `vault.secret.engine.aws.count`                      | The total number of AWS secret engines in Vault.                                   |\n| `vault.secret.engine.azure.count`                    | The total number of Azure secret engines in Vault.                                 |\n| `vault.secret.engine.consul.count`                   | The total number of Consul secret engines in Vault.                                |\n| `vault.secret.engine.gcp.count`                      | The total number of GCP secret engines in Vault.                               
    |\n| `vault.secret.engine.gcpkms.count`                   | The total number of GCPKMS secret engines in Vault.                                |\n| `vault.secret.engine.kubernetes.count`               | The total number of Kubernetes secret engines in Vault.                            |\n| `vault.secret.engine.cassandra.count`                | The total number of Cassandra secret engines in Vault.                             |\n| `vault.secret.engine.keymgmt.count`                  | The total number of Keymgmt secret engines in Vault.                               |\n| `vault.secret.engine.kv.count`                       | The total number of KV secret engines in Vault.                                    |\n| `vault.secret.engine.kmip.count`                     | The total number of KMIP secret engines in Vault.                                  |\n| `vault.secret.engine.mongodb.count`                  | The total number of MongoDB secret engines in Vault.                               |\n| `vault.secret.engine.mongodbatlas.count`             | The total number of MongoDBAtlas secret engines in Vault.                          |\n| `vault.secret.engine.mssql.count`                    | The total number of MSSql secret engines in Vault.                                 |\n| `vault.secret.engine.postgresql.count`               | The total number of Postgresql secret engines in Vault.                            |\n| `vault.secret.engine.nomad.count`                    | The total number of Nomad secret engines in Vault.                                 |\n| `vault.secret.engine.ldap.count`                     | The total number of LDAP secret engines in Vault.                                  |\n| `vault.secret.engine.openldap.count`                 | The total number of OpenLDAP secret engines in Vault.                              |\n| `vault.secret.engine.pki.count`                      | The total number of PKI secret engines in Vault.                             
      |\n| `vault.secret.engine.rabbitmq.count`                 | The total number of RabbitMQ secret engines in Vault.                              |\n| `vault.secret.engine.ssh.count`                      | The total number of SSH secret engines in Vault.                                   |\n| `vault.secret.engine.terraform.count`                | The total number of Terraform secret engines in Vault.                             |\n| `vault.secret.engine.totp.count`                     | The total number of TOTP secret engines in Vault.                                  |\n| `vault.secret.engine.transform.count`                | The total number of Transform secret engines in Vault.                             |\n| `vault.secret.engine.transit.count`                  | The total number of Transit secret engines in Vault.                               |\n| `vault.secret.engine.database.count`                 | The total number of Database secret engines in Vault.                              |\n| `vault.secret.engine.plugin.count`                   | The total number of custom plugin secret engines in Vault.                         |\n| `vault.secretsync.sources.count`                     | The total number of secret sources configured for secret sync.                     |\n| `vault.secretsync.destinations.count`                | The total number of secret destinations configured for secret sync.                |\n| `vault.secretsync.destinations.aws-sm.count`         | The total number of AWS-SM secret destinations configured for secret sync.         |\n| `vault.secretsync.destinations.azure-kv.count`       | The total number of Azure-KV secret destinations configured for secret sync.       |\n| `vault.secretsync.destinations.gh.count`             | The total number of GH secret destinations configured for secret sync.             |\n| `vault.secretsync.destinations.vault.count`          | The total number of Vault secret destinations configured for secret sync.  
        |\n| `vault.secretsync.destinations.vercel-project.count` | The total number of Vercel Project secret destinations configured for secret sync. |\n| `vault.secretsync.destinations.terraform.count`      | The total number of Terraform secret destinations configured for secret sync.      |\n| `vault.secretsync.destinations.gitlab.count`         | The total number of GitLab secret destinations configured for secret sync.         |\n| `vault.secretsync.destinations.inmem.count`          | The total number of InMem secret destinations configured for secret sync.          |\n| `vault.pki.roles.count`                              | The total roles in all PKI mounts across all namespaces.                           |\n| `vault.pki.issuers.count`                            | The total issuers from all PKI mounts across all namespaces.                       |\n\n## Usage metadata list\n\nHashiCorp collects the following product usage metadata as part of the `metadata` part of the\n[JSON payload that it collects for license utilization](\/vault\/docs\/enterprise\/license\/utilization-reporting#example-payloads):\n\n| Metadata Name        | Description                                                          |\n|----------------------|----------------------------------------------------------------------|\n| `replication_status` | Replication status of this cluster, e.g. 
`perf-disabled,dr-disabled` |","site":"vault","answers_cleaned":"    layout  docs page title  Product usage reporting description       Learn what anonymous usage data HashiCorp collects as part of Enterprise utilization reporting  Enable or disable collection         Product usage reporting   include  alerts enterprise only mdx   HashiCorp collects usage data about how Vault clusters are being used  This data is not used for billing or and is numerical only  and no sensitive information of any nature is being collected  The data is GDPR compliant and is collected as part of the  license utilization reporting   vault docs enterprise license utilization reporting  process  If automated reporting is enabled  this data will be collected automatically  If automated reporting is disabled  then this will be collected as part of the manual reports      Opt out  While none of the collected usage metrics are sensitive in any way  if you are still concerned about these usage metrics being reported  then you can opt out of them being collected   If you are considering opting out because you re worried about the data  we strongly recommend that you review the  usage metrics list   usage metrics list  before opting out  If you have concerns with any of the automatically reported data please bring them to your account manager   You have two options to opt out of product usage collection     HCL configuration  recommended    Environment variable  requires restart         HCL configuration  Opting out in your product s configuration file doesn t require a system restart  and is the method we recommend  Add the following block to your server configuration file  e g   vault config hcl        hcl reporting     disable product usage reporting   true         Warning     When you have a cluster  each node must have the reporting stanza in its   configuration to be consistent  In the event of leadership change  nodes will   use its server configuration to determine whether or not to opt 
out the   product usage collection  Inconsistent configuration between nodes will change the   reporting status upon active unseal     Warning    You will find the following entries in the server log    CodeBlockConfig hideClipboard            DEBUG  activity  there is no reporting agent configured  skipping counts reporting          CodeBlockConfig        Environment variable  If you need to  you can also opt out using an environment variable  which will provide a startup message confirming that you have product usage data collection  This option requires a system restart    Note     If the reporting stanza exists in the configuration file  the    OPTOUT PRODUCT USAGE REPORTING  value overrides the configuration     Note   Set the following environment variable      shell session   export OPTOUT PRODUCT USAGE REPORTING true      Now  restart your  Vault servers   vault docs commands server  from the shell where you set the environment variable   You will find the following entries in the server log    CodeBlockConfig hideClipboard            DEBUG  core  product usage reporting disabled          CodeBlockConfig   If your configuration file and environment variable differ  the environment variable setting will take precedence      Usage metrics list  HashiCorp collects the following product usage metrics as part of the  metrics  part of the  JSON payload that it collects for licence utilization   vault docs enterprise license utilization reporting example payloads   All of these metrics are numerical  and contain no sensitive values or additional metadata     Metric Name                                            Description                                                                                                                                                                                                                           vault namespaces count                                Total number of namespaces                                                 
             vault leases count                                    Total number of leases within Vault                                                     vault quotas ratelimit count                          Total number of rate limit quotas within Vault                                          vault quotas leasecount count                         Total number of lease count quotas within Vault                                         vault kv version1 secrets count                       Total number of KVv1 secrets within Vault                                               vault kv version2 secrets count                       Total number of KVv2 secrets within Vault                                               vault kv version1 secrets namespace max               The highest number of KVv1 secrets in a namespace in Vault  e g   1000                  vault kv version2 secrets namespace max               The highest number of KVv2 secrets in a namespace in Vault  e g   1000                  vault kv version1 secrets namespace min               The lowest number of KVv1 secrets in a namespace in Vault  e g   2                      vault kv version2 secrets namespace min               The highest number of KVv2 secrets in a namespace in Vault  e g   1000                  vault kv version1 secrets namespace mean              The mean number of KVv1 secrets in namespaces in Vault  e g   52 8                      vault kv version2 secrets namespace mean              The mean number of KVv2 secrets in namespaces in Vault  e g   52 8                      vault auth method approle count                       The total number of Approle auth mounts in Vault                                        vault auth method alicloud count                      The total number of Alicloud auth mounts in Vault                                       vault auth method aws count                           The total number of AWS auth mounts in Vault                                           
 vault auth method appid count                         The total number of App ID auth mounts in Vault                                         vault auth method azure count                         The total number of Azure auth mounts in Vault                                          vault auth method cloudfoundry count                  The total number of Cloud Foundry auth mounts in Vault                                  vault auth method github count                        The total number of GitHub auth mounts in Vault                                         vault auth method gcp count                           The total number of GCP auth mounts in Vault                                            vault auth method jwt count                           The total number of JWT auth mounts in Vault                                            vault auth method kerberos count                      The total number of Kerberos auth mounts in Vault                                       vault auth method kubernetes count                    The total number of Kubernetes auth mounts in Vault                                     vault auth method ldap count                          The total number of LDAP auth mounts in Vault                                           vault auth method oci count                           The total number of OCI auth mounts in Vault                                            vault auth method okta count                          The total number of Okta auth mounts in Vault                                           vault auth method pcf count                           The total number of PCF auth mounts in Vault                                            vault auth method radius count                        The total number of Radius auth mounts in Vault                                         vault auth method saml count                          The total number of SAML auth mounts in Vault                                           vault auth 
method cert count                          The total number of Cert auth mounts in Vault                                           vault auth method oidc count                          The total number of OIDC auth mounts in Vault                                           vault auth method token count                         The total number of Token auth mounts in Vault                                          vault auth method userpass count                      The total number of Userpass auth mounts in Vault                                       vault auth method plugin count                        The total number of custom plugin auth mounts in Vault                                  vault secret engine activedirectory count             The total number of Active Directory secret engines in Vault                            vault secret engine alicloud count                    The total number of Alicloud secret engines in Vault                                    vault secret engine aws count                         The total number of AWS secret engines in Vault                                         vault secret engine azure count                       The total number of Azure secret engines in Vault                                       vault secret engine consul count                      The total number of Consul secret engines in Vault                                      vault secret engine gcp count                         The total number of GCP secret engines in Vault                                         vault secret engine gcpkms count                      The total number of GCPKMS secret engines in Vault                                      vault secret engine kubernetes count                  The total number of Kubernetes secret engines in Vault                                  vault secret engine cassandra count                   The total number of Cassandra secret engines in Vault                                   vault secret engine 
keymgmt count                     The total number of Keymgmt secret engines in Vault                                     vault secret engine kv count                          The total number of KV secret engines in Vault                                          vault secret engine kmip count                        The total number of KMIP secret engines in Vault                                        vault secret engine mongodb count                     The total number of MongoDB secret engines in Vault                                     vault secret engine mongodbatlas count                The total number of MongoDBAtlas secret engines in Vault                                vault secret engine mssql count                       The total number of MSSql secret engines in Vault                                       vault secret engine postgresql count                  The total number of Postgresql secret engines in Vault                                  vault secret engine nomad count                       The total number of Nomad secret engines in Vault                                       vault secret engine ldap count                        The total number of LDAP secret engines in Vault                                        vault secret engine openldap count                    The total number of OpenLDAP secret engines in Vault                                    vault secret engine pki count                         The total number of PKI secret engines in Vault                                         vault secret engine rabbitmq count                    The total number of RabbitMQ secret engines in Vault                                    vault secret engine ssh count                         The total number of SSH secret engines in Vault                                         vault secret engine terraform count                   The total number of Terraform secret engines in Vault                                   vault secret engine totp count  
                      The total number of TOTP secret engines in Vault                                        vault secret engine transform count                   The total number of Transform secret engines in Vault                                   vault secret engine transit count                     The total number of Transit secret engines in Vault                                     vault secret engine database count                    The total number of Database secret engines in Vault                                    vault secret engine plugin count                      The total number of custom plugin secret engines in Vault                               vault secretsync sources count                        The total number of secret sources configured for secret sync                           vault secretsync destinations count                   The total number of secret destinations configured for secret sync                      vault secretsync destinations aws sm count            The total number of AWS SM secret destinations configured for secret sync               vault secretsync destinations azure kv count          The total number of Azure KV secret destinations configured for secret sync             vault secretsync destinations gh count                The total number of GH secret destinations configured for secret sync                   vault secretsync destinations vault count             The total number of Vault secret destinations configured for secret sync                vault secretsync destinations vercel project count    The total number of Vercel Project secret destinations configured for secret sync       vault secretsync destinations terraform count         The total number of Terraform secret destinations configured for secret sync            vault secretsync destinations gitlab count            The total number of GitLab secret destinations configured for secret sync               vault secretsync destinations inmem count   
          The total number of InMem secret destinations configured for secret sync                vault pki roles count                                 The total number of roles in all PKI mounts across all namespaces                       vault pki issuers count                               The total number of issuers from all PKI mounts across all namespaces                   Usage metadata list  HashiCorp collects the following product usage metadata as part of the  metadata  part of the  JSON payload that it collects for license utilization   vault docs enterprise license utilization reporting example payloads      Metadata Name          Description                                                                                                                                                               replication status    Replication status of this cluster  e g   perf disabled dr disabled   "}
{"questions":"vault include alerts enterprise only mdx page title Manual license utilization reporting Manual license utilization reporting allows you to export review and send license utilization data to HashiCorp through the CLI or HCP Web Portal layout docs Manual license utilization reporting","answers":"---\nlayout: docs\npage_title: Manual license utilization reporting\ndescription: >-\n  Manual license utilization reporting allows you to export, review, and send license utilization data to HashiCorp through the CLI or HCP Web Portal.\n---\n\n# Manual license utilization reporting\n\n@include 'alerts\/enterprise-only.mdx'\n\nManual license utilization reporting allows you to export, review, and send\nlicense utilization data to HashiCorp via the CLI or HCP Web Portal. Use these\nreports to understand how much more you can deploy under your current contract,\nprotect against overutilization, and budget for predicted consumption. Manual\nreporting shares the minimum data required to validate license utilization as\ndefined in our contracts. The reports consist of mostly computed metrics and\nwill never contain Personally Identifiable Information (PII) or other sensitive\ninformation.\n\nManual license utilization reporting shares the same data as automated license\nutilization reporting but is more time-consuming. Unless you are running in an\nair-gapped environment or have another reason to report data manually, we\nstrongly recommend using automated reporting instead. If you have disabled\nautomated license reporting, you can re-enable it by reversing the opt-out\nprocess described in the\n[documentation](\/vault\/docs\/enterprise\/license\/utilization-reporting#opt-out).\n\nIf you are considering manual reporting because you\u2019re worried about your data,\nwe strongly recommend that you review the [example\npayloads](#data-file-content), which are the same for automated and manual\nreporting. 
If you have further concerns with any of the automatically-reported\ndata, please bring them to your account manager before opting out of automated\nreporting in favor of manual reporting.\n\n## How to manually send data reports\n\n### Generate a data bundle\n\nData bundles include collections of JSON snapshots that contain license\nutilization information.\n\n1. Log in to your [cluster node](\/vault\/tutorials\/cloud\/vault-access-cluster).\n1. Run this CLI command to generate a data bundle:\n\n   ```shell-session\n   $ vault operator utilization\n   ```\n\n   By default, the bundle will include all historical snapshots.\n\n   You can provide context about the conditions under which the report was\n   generated and submitted by providing a comment. This optional comment will\n   not be included in the license utilization bundle, but will be included in\n   the Vault server logs.\n\n   **Example:**\n\n   ```shell-session\n   $ vault operator utilization -message=\"Change Control 654987\" \\\n        -output=\"\/utilization\/reports\/latest.json\"\n   ```\n\n   This command will export all the persisted snapshots into a bundle. The\n   message \u201cChange Control 654987\u201d will not be included in the bundle but will\n   be included in Vault server logs. The `-output` flag specifies the output\n   location of the JSON bundle.\n\n   **Available command flags:**\n\n   - `-message` `(string: \"\")` - Provide context about the conditions under\n   which the report was generated and submitted. This message is not included\n   in the license utilization bundle but will be included in the Vault server\n   logs. (optional)\n\n   - `-today-only` `(bool: false)` - Include only today\u2019s snapshot, with no\n   historical snapshots. If no snapshots were persisted in the last 24 hrs, it\n   takes a snapshot and exports it to a bundle. 
(optional)\n\n   - `-output` `(string: \u201c\u201d)` - Specifies the output path for the bundle.\n   Defaults to a time-based generated file name. (optional)\n\n\n### Send the data bundle to HashiCorp\n\n1. Go to https:\/\/portal.cloud.hashicorp.com\/license-utilization\/reports\/create\n1. Click on **Choose files**, or drop your file(s) into the container.\n\n    a. If the upload succeeded, the HCP user interface will change the file\n    status to **Uploaded** in green.\n\n    b. If the upload failed, the file status will say **Failed** in red, and\n    will include error information.\n\nIf the upload fails make sure you haven\u2019t modified the file signature. If the\nerror persists, please contact your account representative.\n\n\n## Enable manual reporting\n\nUpgrade to a release that supports manual license utilization reporting. These\nreleases include:\n\n- Vault Enterprise 1.16.0 and later\n- Vault Enterprise 1.15.6 and later\n- Vault Enterprise 1.14.10 and later\n\n## Configuration\n\nAdministrators can manage disk space for storing snapshots by defining the\nnumber of days snapshots can be retained.\n\n```hcl\nreporting {\n    snapshot_retention_time = \"2400h\"\n}\n```\n\nThe default retention period is 400 days.\n\n## Data file content\n\n<CodeBlockConfig hideClipboard>\n\n```json\n{\n  \"snapshot_version\": 2,\n  \"id\": \"0001JWAY00BRF8TEXC9CVRHBAC\",\n  \"timestamp\": \"2024-02-08T16:55:28.085215-08:00\",\n  \"schema_version\": \"2.0.0\",\n  \"product\": \"vault\",\n  \"process_id\": \"01HP5NJS21HN50FY0CBS0SYGCH\",\n  \"metrics\": {\n    \"clientcount.current_month_estimate.type.acme_client\": {\n      \"key\": \"clientcount.current_month_estimate.type.acme_client\",\n      \"value\": 0,\n      \"mode\": \"write\"\n    },\n    \"clientcount.current_month_estimate.type.entity\": {\n      \"key\": \"clientcount.current_month_estimate.type.entity\",\n      \"value\": 20,\n      \"mode\": \"write\"\n    },\n    
\"clientcount.current_month_estimate.type.nonentity\": {\n      \"key\": \"clientcount.current_month_estimate.type.nonentity\",\n      \"value\": 11,\n      \"mode\": \"write\"\n    },\n    \"clientcount.current_month_estimate.type.secret_sync\": {\n      \"key\": \"clientcount.current_month_estimate.type.secret_sync\",\n      \"value\": 0,\n      \"mode\": \"write\"\n    },\n    \"clientcount.previous_month_complete.type.acme_client\": {\n      \"key\": \"clientcount.previous_month_complete.type.acme_client\",\n      \"value\": 0,\n      \"mode\": \"write\"\n    },\n    \"clientcount.previous_month_complete.type.entity\": {\n      \"key\": \"clientcount.previous_month_complete.type.entity\",\n      \"value\": 0,\n      \"mode\": \"write\"\n    },\n    \"clientcount.previous_month_complete.type.nonentity\": {\n      \"key\": \"clientcount.previous_month_complete.type.nonentity\",\n      \"value\": 0,\n      \"mode\": \"write\"\n    },\n    \"clientcount.previous_month_complete.type.secret_sync\": {\n      \"key\": \"clientcount.previous_month_complete.type.secret_sync\",\n      \"value\": 0,\n      \"mode\": \"write\"\n    }\n  },\n  \"product_version\": \"1.16.0+ent\",\n  \"license_id\": \"7d68b16a-74fe-3b9f-a1a7-08cf461fff1c\",\n  \"checksum\": 6861637915450723051,\n  \"metadata\": {\n    \"billing_start\": \"2023-05-04T00:00:00Z\",\n    \"cluster_id\": \"16d0ff5b-9d40-d7a7-384c-c9b95320c60e\"\n  }\n```\n\n<\/CodeBlockConfig>","site":"vault","answers_cleaned":"    layout  docs page title  Manual license utilization reporting description       Manual license utilization reporting allows you to export  review  and send license utilization data to HashiCorp through the CLI or HCP Web Portal         Manual license utilization reporting   include  alerts enterprise only mdx   Manual license utilization reporting allows you to export  review  and send license utilization data to HashiCorp via the CLI or HCP Web Portal  Use these reports to understand how much more you 
can deploy under your current contract  protect against overutilization  and budget for predicted consumption  Manual reporting shares the minimum data required to validate license utilization as defined in our contracts  The reports consist of mostly computed metrics and will never contain Personal Identifiable Information  PII  or other sensitive information   Manual license utilization shares the same data as automated license utilization but is more time consuming  Unless you are running in an air gapped environment or have another reason to report data manually  we strongly recommend using automated reporting instead  If you have disabled automated license reporting  you can re enable it by reversing the opt out process described in the  documentation   vault docs enterprise license utilization reporting opt out    If you are considering manual reporting because you re worried about your data  we strongly recommend that you review the  example payloads   data file content   which are the same for automated and manual reporting  If you have further concerns with any of the automatically reported data please bring them to your account manager before opting out of automated reporting in favor of manual reporting      How to manually send data reports      Generate a data bundle  Data bundles include collections of JSON snapshots that contain license utilization information   1  Login into your  cluster node   vault tutorials cloud vault access cluster   1  Run this CLI command to generate a data bundle         shell session      vault operator utilization            By default  the bundle will include all historical snapshots      You can provide context about the conditions under which the report was    generated and submitted by providing a comment  This optional comment will    not be included in the license utilization bundle  but will be included in    the Vault server logs        Example           shell session      vault operator utilization  message  
Change Control 654987             output   utilization reports latest json             This command will export all the persisted snapshots into a bundle  The    message  Change Control 654987  will not be included in the bundle but will    be included in Vault server logs  The   output  flags specifies the output    location of the JSON bundle        Available command flags            message    string         Provide context about the conditions under    which the report was generated and submitted  This message is not included    in the license utilization bundle but will be included in the vault server    logs   optional          today only    bool  false     To include only today s snapshot  no    historical snapshots  If no snapshots were persisted in the last 24 hrs  it    takes a snapshot and exports it to a bundle   optional          output    string         Specifies the output path for the bundle     Defaults to a time based generated file name   optional        Send the data bundle to HashiCorp  1  Go to https   portal cloud hashicorp com license utilization reports create 1  Click on   Choose files    or drop your file s  into the container       a  If the upload succeeded  the HCP user interface will change the file     status to   Uploaded   in green       b  If the upload failed  the file status will say   Failed   in red  and     will include error information   If the upload fails make sure you haven t modified the file signature  If the error persists  please contact your account representative       Enable manual reporting  Upgrade to a release that supports manual license utilization reporting  These releases include     Vault Enterprise 1 16 0 and later   Vault Enterprise 1 15 6 and later   Vault Enterprise 1 14 10 and later     Configuration  Administrators can manage disk space for storing snapshots by defining the number of days snapshots can be retained      hcl reporting       snapshot retention time    2400h         The default retention 
period is 400 days      Data file content   CodeBlockConfig hideClipboard      json      snapshot version   2     id    0001JWAY00BRF8TEXC9CVRHBAC      timestamp    2024 02 08T16 55 28 085215 08 00      schema version    2 0 0      product    vault      process id    01HP5NJS21HN50FY0CBS0SYGCH      metrics          clientcount current month estimate type acme client            key    clientcount current month estimate type acme client          value   0         mode    write              clientcount current month estimate type entity            key    clientcount current month estimate type entity          value   20         mode    write              clientcount current month estimate type nonentity            key    clientcount current month estimate type nonentity          value   11         mode    write              clientcount current month estimate type secret sync            key    clientcount current month estimate type secret sync          value   0         mode    write              clientcount previous month complete type acme client            key    clientcount previous month complete type acme client          value   0         mode    write              clientcount previous month complete type entity            key    clientcount previous month complete type entity          value   0         mode    write              clientcount previous month complete type nonentity            key    clientcount previous month complete type nonentity          value   0         mode    write              clientcount previous month complete type secret sync            key    clientcount previous month complete type secret sync          value   0         mode    write                product version    1 16 0 ent      license id    7d68b16a 74fe 3b9f a1a7 08cf461fff1c      checksum   6861637915450723051     metadata          billing start    2023 05 04T00 00 00Z        cluster id    16d0ff5b 9d40 d7a7 384c c9b95320c60e             CodeBlockConfig "}
{"questions":"vault Vault Enterprise supports TOTP MFA type page title TOTP MFA MFA Support Vault Enterprise include alerts enterprise only mdx layout docs TOTP MFA","answers":"---\nlayout: docs\npage_title: TOTP MFA - MFA Support - Vault Enterprise\ndescription: Vault Enterprise supports TOTP MFA type.\n---\n\n# TOTP MFA\n\n@include 'alerts\/enterprise-only.mdx'\n\nThis page demonstrates the TOTP MFA on ACL'd paths of Vault.\n\n## Configuration\n\n1.  Enable the appropriate auth method:\n\n    ```text\n    $ vault auth enable userpass\n    ```\n\n1.  Fetch the mount accessor for the enabled auth method:\n\n    ```text\n    $ vault auth list -detailed\n    ```\n\n    The response will look like:\n\n    ```text\n    Path         Type        Accessor                  Plugin    Default TTL    Max TTL    Replication    Description\n    ----         ----        --------                  ------    -----------    -------    -----------    -----------\n    token\/       token       auth_token_289703e9       n\/a       system         system     replicated     token based credentials\n    userpass\/    userpass    auth_userpass_54b8e339    n\/a       system         system     replicated     n\/a\n    ```\n\n1.  Configure TOTP MFA:\n\n    -> **Note**: Consider the algorithms supported by your authenticator. For example, Google Authenticator for Android supports only SHA1 as the value of `algorithm`.\n\n    ```text\n    $ vault write sys\/mfa\/method\/totp\/my_totp \\\n        issuer=Vault \\\n        period=30 \\\n        key_size=30 \\\n        algorithm=SHA256 \\\n        digits=6\n    ```\n\n1.  Create a policy that gives access to secret through the MFA method created\n    above:\n\n    ```text\n    $ vault policy write totp-policy -<<EOF\n    path \"secret\/foo\" {\n      capabilities = [\"read\"]\n      mfa_methods  = [\"my_totp\"]\n    }\n    EOF\n    ```\n\n1.  Create a user. MFA works only for tokens that have identity information on\n    them. 
Tokens created by logging in using auth methods will have the associated\n    identity information. Create a user in the `userpass` auth method and\n    authenticate against it:\n\n    ```text\n    $ vault write auth\/userpass\/users\/testuser \\\n        password=testpassword \\\n        policies=totp-policy\n    ```\n\n1.  Create a login token:\n\n    ```text\n    $ vault write auth\/userpass\/login\/testuser \\\n        password=testpassword\n\n    Key                    Value\n    ---                    -----\n    token                  70f97438-e174-c03c-40fe-6bcdc1028d6c\n    token_accessor         a91d97f4-1c7d-6af3-e4bf-971f74f9fab9\n    token_duration         768h\n    token_renewable        true\n    token_policies         [default totp-policy]\n    token_meta_username    \"testuser\"\n    ```\n\n    Note that the CLI is not authenticated with the newly created token yet; we\n    did not call `vault login`. Instead, we used the login API to simply return a\n    token.\n\n1.  Fetch the entity ID from the token. The caller identity is represented by the\n    `entity_id` property of the token:\n\n    ```text\n    $ vault token lookup 70f97438-e174-c03c-40fe-6bcdc1028d6c\n\n    Key                 Value\n    ---                 -----\n    accessor            a91d97f4-1c7d-6af3-e4bf-971f74f9fab9\n    creation_time       1502245243\n    creation_ttl        2764800\n    display_name        userpass-testuser\n    entity_id           307d6c16-6f5c-4ae7-46a9-2d153ffcbc63\n    expire_time         2017-09-09T22:20:43.448543132-04:00\n    explicit_max_ttl    0\n    id                  70f97438-e174-c03c-40fe-6bcdc1028d6c\n    issue_time          2017-08-08T22:20:43.448543003-04:00\n    meta                map[username:testuser]\n    num_uses            0\n    orphan              true\n    path                auth\/userpass\/login\/testuser\n    policies            [default totp-policy]\n    renewable           true\n    ttl                 2764623\n    ```\n\n1.  
Generate TOTP method attached to the entity. This should be distributed to\n    the intended user to be able to generate TOTP passcode:\n\n    ```text\n    $ vault write sys\/mfa\/method\/totp\/my_totp\/admin-generate \\\n        entity_id=307d6c16-6f5c-4ae7-46a9-2d153ffcbc63\n\n    Key        Value\n    ---        -----\n    barcode    iVBORw0KGgoAAAANSUhEUgAAAM...\n    url        otpauth:\/\/totp\/Vault:307d6c16-6f5c-4ae7-46a9-2d153ffcbc63?algo...\n    ```\n\n    Either the base64 encoded png barcode or the url should be given to the end\n    user. This barcode\/url can be loaded into Google Authenticator or a similar\n    TOTP tool to generate codes.\n\n1.  Login as the user:\n\n    ```text\n    $ vault login 70f97438-e174-c03c-40fe-6bcdc1028d6c\n    ```\n\n1.  Read the secret, specifying the mfa flag:\n\n    ```text\n    $ vault read -mfa my_totp:146378 secret\/data\/foo\n\n    Key                 Value\n    ---                 -----\n    refresh_interval    768h\n    data                which can only be read after MFA validation\n    ```","site":"vault","answers_cleaned":"    layout  docs page title  TOTP MFA   MFA Support   Vault Enterprise description  Vault Enterprise supports TOTP MFA type         TOTP MFA   include  alerts enterprise only mdx   This page demonstrates the TOTP MFA on ACL d paths of Vault      Configuration  1   Enable the appropriate auth method          text       vault auth enable userpass          1   Fetch the mount accessor for the enabled auth method          text       vault auth list  detailed              The response will look like          text     Path         Type        Accessor                  Plugin    Default TTL    Max TTL    Replication    Description                                                                                                                           token        token       auth token 289703e9       n a       system         system     replicated     token based credentials     userpass     
userpass    auth userpass 54b8e339    n a       system         system     replicated     n a          1   Configure TOTP MFA            Note    Consider the algorithms supported by your authenticator  For example  Google Authenticator for Android supports only SHA1 as the value of  algorithm           text       vault write sys mfa method totp my totp           issuer Vault           period 30           key size 30           algorithm SHA256           digits 6          1   Create a policy that gives access to secret through the MFA method created     above          text       vault policy write totp policy    EOF     path  secret foo          capabilities     read         mfa methods      my totp             EOF          1   Create a user  MFA works only for tokens that have identity information on     them  Tokens created by logging in using auth methods will have the associated     identity information  Create a user in the  userpass  auth method and     authenticate against it          text       vault write auth userpass users testuser           password testpassword           policies totp policy          1   Create a login token          text       vault write auth userpass login testuser           password testpassword      Key                    Value                                      token                  70f97438 e174 c03c 40fe 6bcdc1028d6c     token accessor         a91d97f4 1c7d 6af3 e4bf 971f74f9fab9     token duration         768h     token renewable        true     token policies          default totp policy      token meta username     testuser               Note that the CLI is not authenticated with the newly created token yet  we     did not call  vault login   instead we used the login API to simply return a     token   1   Fetch the entity ID from the token  The caller identity is represented by the      entity id  property of the token          text       vault token lookup 70f97438 e174 c03c 40fe 6bcdc1028d6c      Key                 
Value                                   accessor            a91d97f4 1c7d 6af3 e4bf 971f74f9fab9     creation time       1502245243     creation ttl        2764800     display name        userpass testuser     entity id           307d6c16 6f5c 4ae7 46a9 2d153ffcbc63     expire time         2017 09 09T22 20 43 448543132 04 00     explicit max ttl    0     id                  70f97438 e174 c03c 40fe 6bcdc1028d6c     issue time          2017 08 08T22 20 43 448543003 04 00     meta                map username testuser      num uses            0     orphan              true     path                auth userpass login testuser     policies             default totp policy      renewable           true     ttl                 2764623          1   Generate TOTP method attached to the entity  This should be distributed to     the intended user to be able to generate TOTP passcode          text       vault write sys mfa method totp my totp admin generate           entity id 307d6c16 6f5c 4ae7 46a9 2d153ffcbc63      Key        Value                          barcode    iVBORw0KGgoAAAANSUhEUgAAAM        url        otpauth   totp Vault 307d6c16 6f5c 4ae7 46a9 2d153ffcbc63 algo                 Either the base64 encoded png barcode or the url should be given to the end     user  This barcode url can be loaded into Google Authenticator or a similar     TOTP tool to generate codes   1   Login as the user          text       vault login 70f97438 e174 c03c 40fe 6bcdc1028d6c          1   Read the secret  specifying the mfa flag          text       vault read  mfa my totp 146378 secret data foo      Key                 Value                                   refresh interval    768h     data                which can only be read after MFA validation        "}
{"questions":"vault Vault Enterprise has support for Multi factor Authentication MFA using page title MFA Support Vault Enterprise Vault enterprise MFA support different authentication types layout docs","answers":"---\nlayout: docs\npage_title: MFA Support - Vault Enterprise\ndescription: >-\n  Vault Enterprise has support for Multi-factor Authentication (MFA), using\n  different authentication types.\n---\n\n# Vault enterprise MFA support\n\n@include 'alerts\/enterprise-only.mdx'\n\nVault Enterprise has support for Multi-factor Authentication (MFA), using\ndifferent authentication types. MFA is built on top of the Identity system of\nVault.\n\n## MFA types\n\nMFA in Vault can be of the following types.\n\n- **Time-based One-time Password (TOTP)** - If configured and enabled on a path,\n  this would require a TOTP passcode, along with the Vault token, to be presented\n  while invoking the API request. The passcode will be validated against the\n  TOTP key present in the identity of the caller in Vault.\n\n- **Okta** - If Okta push is configured and enabled on a path, then the enrolled\n  device of the user will get a push notification to approve or deny the access\n  to the API. The Okta username will be derived from the caller identity's\n  alias.\n\n- **Duo** - If Duo push is configured and enabled on a path, then the enrolled\n  device of the user will get a push notification to approve or deny the access\n  to the API. The Duo username will be derived from the caller identity's\n  alias.\n\n- **PingID** - If PingID push is configured and enabled on a path, then the\n  enrolled device of the user will get a push notification to approve or deny\n  the access to the API. 
The PingID username will be derived from the caller\n  identity's alias.\n\n## Configuring MFA methods\n\nMFA methods are globally managed within the `System Backend` using the HTTP API.\nPlease see [MFA API](\/vault\/api-docs\/system\/mfa) for details on how to configure an MFA\nmethod.\n\n## MFA methods in policies\n\nMFA requirements on paths are specified as `mfa_methods` along with other ACL\nparameters.\n\n### Sample policy\n\n```hcl\npath \"secret\/foo\" {\n  capabilities = [\"read\"]\n  mfa_methods  = [\"dev_team_duo\", \"sales_team_totp\"]\n}\n```\n\nThe above policy grants `read` access to `secret\/foo` only after _both_ the MFA\nmethods `dev_team_duo` and `sales_team_totp` are validated.\n\n## Namespaces\n\nAll MFA configurations must be configured in the root namespace. They can be\nreferenced from ACL and Sentinel policies in any namespace via the method name\nand can be tied to a mount accessor in any namespace.\n\nWhen using [Sentinel\nEGPs](\/vault\/docs\/enterprise\/sentinel#endpoint-governing-policies-egps),\nany MFA configuration specified must be satisfied by all requests affected by\nthe policy, which can be difficult if the configured paths are spread across\nnamespaces. One way to address this is to use a policy similar to the\nfollowing, using `or` operators to allow MFA configurations tied to mount\naccessors in the various namespaces:\n\n```python\nimport \"mfa\"\n\nhas_mfa = rule {\n    mfa.methods.duons1.valid\n}\n\nhas_mfa2 = rule {\n    mfa.methods.duons2.valid\n}\n\nmain = rule {\n    has_mfa or has_mfa2\n}\n```\n\nWhen using TOTP, any user with ACL permissions can self-generate credentials.\nAdmins can generate or destroy credentials only if the targeted entity is in\nthe same namespace.\n\n## Supplying MFA credentials\n\nMFA credentials are retrieved from the `X-Vault-MFA` HTTP header. The format of\nthe header is `mfa_method_name[:key[=value]]`. 
The items in the `[]` are\noptional.\n\n### Sample request\n\n```shell-session\n$ curl \\\n    --header \"X-Vault-Token: ...\" \\\n    --header \"X-Vault-MFA:my_totp:695452\" \\\n    http:\/\/127.0.0.1:8200\/v1\/secret\/foo\n```\n\n## API\n\nMFA can be managed entirely over the HTTP API. Please see [MFA API](\/vault\/api-docs\/system\/mfa) for more details.\n\n## Additional resources\n\n- [Duo MFA documentation](\/vault\/docs\/enterprise\/mfa\/mfa-duo)\n- [Okta MFA documentation](\/vault\/docs\/enterprise\/mfa\/mfa-okta)\n- [PingID MFA documentation](\/vault\/docs\/enterprise\/mfa\/mfa-pingid)\n- [TOTP MFA documentation](\/vault\/docs\/enterprise\/mfa\/mfa-totp)","site":"vault","answers_cleaned":"    layout  docs page title  MFA Support   Vault Enterprise description       Vault Enterprise has support for Multi factor Authentication  MFA   using   different authentication types         Vault enterprise MFA support   include  alerts enterprise only mdx   Vault Enterprise has support for Multi factor Authentication  MFA   using different authentication types  MFA is built on top of the Identity system of Vault      MFA types  MFA in Vault can be of the following types       Time based One time Password  TOTP      If configured and enabled on a path    this would require a TOTP passcode along with Vault token  to be presented   while invoking the API request  The passcode will be validated against the   TOTP key present in the identity of the caller in Vault       Okta     If Okta push is configured and enabled on a path  then the enrolled   device of the user will get a push notification to approve or deny the access   to the API  The Okta username will be derived from the caller identity s   alias       Duo     If Duo push is configured and enabled on a path  then the enrolled   device of the user will get a push notification to approve or deny the access   to the API  The Duo username will be derived from the caller identity s   alias       PingID     If PingID 
push is configured and enabled on a path  then the   enrolled device of the user will get a push notification to approve or deny   the access to the API  The PingID username will be derived from the caller   identity s alias      Configuring MFA methods  MFA methods are globally managed within the  System Backend  using the HTTP API  Please see  MFA API   vault api docs system mfa  for details on how to configure an MFA method      MFA methods in policies  MFA requirements on paths are specified as  mfa methods  along with other ACL parameters       Sample policy     hcl path  secret foo      capabilities     read     mfa methods      dev team duo    sales team totp          The above policy grants  read  access to  secret foo  only after  both  the MFA methods  dev team duo  and  sales team totp  are validated      Namespaces  All MFA configurations must be configured in the root namespace  They can be referenced from ACL and Sentinel policies in any namespace via the method name and can be tied to a mount accessor in any namespace   When using  Sentinel EGPs   vault docs enterprise sentinel endpoint governing policies egps   any MFA configuration specified must be satisfied by all requests affected by the policy  which can be difficult if the configured paths spread across namespaces  One way to address this is to use a policy similar to the following  using  or  operators to allow MFA configurations tied to mount accessors in the various namespaces      python import  mfa   has mfa   rule       mfa methods duons1 valid    has mfa2   rule       mfa methods duons2 valid    main   rule       has mfa or has mfa2        When using TOTP  any user with ACL permissions can self generate credentials  Admins can generate or destroy credentials only if the targeted entity is in the same namespace      Supplying MFA credentials  MFA credentials are retrieved from the  X Vault MFA  HTTP header  The format of the header is  mfa method name  key  value     The items in the     
 are optional       Sample request     shell session   curl         header  X Vault Token               header  X Vault MFA my totp 695452        http   127 0 0 1 8200 v1 secret foo         API  MFA can be managed entirely over the HTTP API  Please see  MFA API   vault api docs system mfa  for more details      Additional resources     Duo MFA documentation   vault docs enterprise mfa mfa duo     Okta MFA documentation   vault docs enterprise mfa mfa okta     PingID MFA documentation   vault docs enterprise mfa mfa pingid     TOTP MFA documentation   vault docs enterprise mfa mfa totp "}
{"questions":"vault include alerts enterprise only mdx Vault Enterprise supports Duo MFA type Duo MFA page title Duo MFA MFA Support Vault Enterprise layout docs","answers":"---\nlayout: docs\npage_title: Duo MFA - MFA Support - Vault Enterprise\ndescription: Vault Enterprise supports Duo MFA type.\n---\n\n# Duo MFA\n\n@include 'alerts\/enterprise-only.mdx'\n\nThis page demonstrates the Duo MFA on ACL'd paths of Vault.\n\n## Configuration\n\n1.  Enable the appropriate auth method:\n\n    ```text\n    $ vault auth enable userpass\n    ```\n\n1.  Fetch the mount accessor for the enabled auth method:\n\n    ```text\n    $ vault auth list -detailed\n    ```\n\n    The response will look like:\n\n    ```text\n    Path         Type        Accessor                  Plugin    Default TTL    Max TTL    Replication    Description\n    ----         ----        --------                  ------    -----------    -------    -----------    -----------\n    token\/       token       auth_token_289703e9       n\/a       system         system     replicated     token based credentials\n    userpass\/    userpass    auth_userpass_54b8e339    n\/a       system         system     replicated     n\/a\n    ```\n\n1.  Configure Duo MFA:\n\n    ```text\n    $ vault write sys\/mfa\/method\/duo\/my_duo \\\n        mount_accessor=auth_userpass_54b8e339 \\\n        integration_key=BIACEUEAXI20BNWTEYXT \\\n        secret_key=HIGTHtrIigh2rPZQMbguugt8IUftWhMRCOBzbuyz \\\n        api_hostname=api-2b5c39f5.duosecurity.com\n    ```\n\n1.  Create a policy that gives access to secret through the MFA method created\n    above:\n\n    ```text\n    $ vault policy write duo-policy -<<EOF\n    path \"secret\/foo\" {\n      capabilities = [\"read\"]\n      mfa_methods  = [\"my_duo\"]\n    }\n    EOF\n    ```\n\n1.  Create a user. MFA works only for tokens that have identity information on\n    them. Tokens created by logging in using auth methods will have the associated\n    identity information. 
Create a user in the `userpass` auth method and\n    authenticate against it:\n\n    ```text\n    $ vault write auth\/userpass\/users\/testuser \\\n        password=testpassword \\\n        policies=duo-policy\n    ```\n\n1.  Create a login token:\n\n    ```text\n    $ vault write auth\/userpass\/login\/testuser \\\n        password=testpassword\n\n    Key                    Value\n    ---                    -----\n    token                  70f97438-e174-c03c-40fe-6bcdc1028d6c\n    token_accessor         a91d97f4-1c7d-6af3-e4bf-971f74f9fab9\n    token_duration         768h\n    token_renewable        true\n    token_policies         [default duo-policy]\n    token_meta_username    \"testuser\"\n    ```\n\n    Note that the CLI is not authenticated with the newly created token yet; we\n    did not call `vault login`. Instead, we used the login API to simply return a\n    token.\n\n1.  Fetch the entity ID from the token. The caller identity is represented by the\n    `entity_id` property of the token:\n\n    ```text\n    $ vault token lookup 70f97438-e174-c03c-40fe-6bcdc1028d6c\n\n    Key                 Value\n    ---                 -----\n    accessor            a91d97f4-1c7d-6af3-e4bf-971f74f9fab9\n    creation_time       1502245243\n    creation_ttl        2764800\n    display_name        userpass-testuser\n    entity_id           307d6c16-6f5c-4ae7-46a9-2d153ffcbc63\n    expire_time         2017-09-09T22:20:43.448543132-04:00\n    explicit_max_ttl    0\n    id                  70f97438-e174-c03c-40fe-6bcdc1028d6c\n    issue_time          2017-08-08T22:20:43.448543003-04:00\n    meta                map[username:testuser]\n    num_uses            0\n    orphan              true\n    path                auth\/userpass\/login\/testuser\n    policies            [default duo-policy]\n    renewable           true\n    ttl                 2764623\n    ```\n\n1.  Login as the user:\n\n    ```text\n    $ vault login 70f97438-e174-c03c-40fe-6bcdc1028d6c\n    ```\n\n1.  
Read a secret to trigger a Duo push. This will be a blocking call until\n    the push notification is either approved or declined:\n\n    ```text\n    $ vault read secret\/foo\n\n    Key                 Value\n    ---                 -----\n    refresh_interval    768h\n    data                which can only be read after MFA validation\n    ```","site":"vault","answers_cleaned":"    layout  docs page title  Duo MFA   MFA Support   Vault Enterprise description  Vault Enterprise supports Duo MFA type         Duo MFA   include  alerts enterprise only mdx   This page demonstrates the Duo MFA on ACL d paths of Vault      Configuration  1   Enable the appropriate auth method          text       vault auth enable userpass          1   Fetch the mount accessor for the enabled auth method          text       vault auth list  detailed              The response will look like          text     Path         Type        Accessor                  Plugin    Default TTL    Max TTL    Replication    Description                                                                                                                           token        token       auth token 289703e9       n a       system         system     replicated     token based credentials     userpass     userpass    auth userpass 54b8e339    n a       system         system     replicated     n a          1   Configure Duo MFA          text       vault write sys mfa method duo my duo           mount accessor auth userpass 54b8e339           integration key BIACEUEAXI20BNWTEYXT           secret key HIGTHtrIigh2rPZQMbguugt8IUftWhMRCOBzbuyz           api hostname api 2b5c39f5 duosecurity com          1   Create a policy that gives access to secret through the MFA method created     above          text       vault policy write duo policy    EOF     path  secret foo          capabilities     read         mfa methods      my duo             EOF          1   Create a user  MFA works only for tokens that have identity 
information on     them  Tokens created by logging in using auth methods will have the associated     identity information  Create a user in the  userpass  auth method and     authenticate against it          text       vault write auth userpass users testuser           password testpassword           policies duo policy          1   Create a login token          text       vault write auth userpass login testuser           password testpassword      Key                    Value                                      token                  70f97438 e174 c03c 40fe 6bcdc1028d6c     token accessor         a91d97f4 1c7d 6af3 e4bf 971f74f9fab9     token duration         768h     token renewable        true     token policies          default duo policy      token meta username     testuser               Note that the CLI is not authenticated with the newly created token yet  we     did not call  vault login   instead we used the login API to simply return a     token   1   Fetch the entity ID from the token  The caller identity is represented by the      entity id  property of the token          text       vault token lookup 70f97438 e174 c03c 40fe 6bcdc1028d6c      Key                 Value                                   accessor            a91d97f4 1c7d 6af3 e4bf 971f74f9fab9     creation time       1502245243     creation ttl        2764800     display name        userpass testuser     entity id           307d6c16 6f5c 4ae7 46a9 2d153ffcbc63     expire time         2017 09 09T22 20 43 448543132 04 00     explicit max ttl    0     id                  70f97438 e174 c03c 40fe 6bcdc1028d6c     issue time          2017 08 08T22 20 43 448543003 04 00     meta                map username testuser      num uses            0     orphan              true     path                auth userpass login testuser     policies             default duo policy      renewable           true     ttl                 2764623          1   Login as the user          text       vault login 
70f97438 e174 c03c 40fe 6bcdc1028d6c          1   Read a secret to trigger a Duo push  This will be a blocking call until     the push notification is either approved or declined          text       vault read secret foo      Key                 Value                                   refresh interval    768h     data                which can only be read after MFA validation        "}
{"questions":"vault page title PingID MFA MFA Support Vault Enterprise include alerts enterprise only mdx Vault Enterprise supports PingID MFA type layout docs PingID MFA","answers":"---\nlayout: docs\npage_title: PingID MFA - MFA Support - Vault Enterprise\ndescription: Vault Enterprise supports PingID MFA type.\n---\n\n# PingID MFA\n\n@include 'alerts\/enterprise-only.mdx'\n\nThis page demonstrates PingID MFA on ACL'd paths of Vault.\n\n## Configuration\n\n1. Enable the appropriate auth method:\n\n   ```text\n   $ vault auth enable userpass\n   ```\n\n1. Fetch the mount accessor for the enabled auth method:\n\n   ```text\n   $ vault auth list -detailed\n   ```\n\n   The response will look like:\n\n   ```text\n   Path         Type        Accessor                  Plugin    Default TTL    Max TTL    Replication    Description\n   ----         ----        --------                  ------    -----------    -------    -----------    -----------\n   token\/       token       auth_token_289703e9       n\/a       system         system     replicated     token based credentials\n   userpass\/    userpass    auth_userpass_54b8e339    n\/a       system         system     replicated     n\/a\n   ```\n\n1. Configure PingID MFA:\n\n   ```text\n   $ vault write sys\/mfa\/method\/pingid\/ping \\\n       mount_accessor=auth_userpass_54b8e339 \\\n       settings_file_base64=\"AABDwWaR...\"\n   ```\n\n1. Create a policy that gives access to secret through the MFA method created\n   above:\n\n   ```text\n   $ vault policy write ping-policy -<<EOF\n   path \"secret\/foo\" {\n     capabilities = [\"read\"]\n     mfa_methods  = [\"ping\"]\n   }\n   EOF\n   ```\n\n1. Create a user. MFA works only for tokens that have identity information on\n   them. Tokens created by logging in using auth methods will have the associated\n   identity information. 
Create a user in the `userpass` auth method and\n   authenticate against it:\n\n   ```text\n   $ vault write auth\/userpass\/users\/testuser \\\n       password=testpassword \\\n       policies=ping-policy\n   ```\n\n1. Create a login token:\n\n   ```text\n   $ vault write auth\/userpass\/login\/testuser password=testpassword\n\n   Key                    Value\n   ---                    -----\n   token                  70f97438-e174-c03c-40fe-6bcdc1028d6c\n   token_accessor         a91d97f4-1c7d-6af3-e4bf-971f74f9fab9\n   token_duration         768h0m0s\n   token_renewable        true\n   token_policies         [default ping-policy]\n   token_meta_username    \"testuser\"\n   ```\n\n   Note that the CLI is not authenticated with the newly created token yet; we\n   did not call `vault login`. Instead, we used the login API to simply return a\n   token.\n\n1. Fetch the entity ID from the token. The caller identity is represented by the\n   `entity_id` property of the token:\n\n   ```text\n   $ vault token lookup 70f97438-e174-c03c-40fe-6bcdc1028d6c\n\n   Key                 Value\n   ---                 -----\n   accessor            a91d97f4-1c7d-6af3-e4bf-971f74f9fab9\n   creation_time       1502245243\n   creation_ttl        2764800\n   display_name        userpass-testuser\n   entity_id           307d6c16-6f5c-4ae7-46a9-2d153ffcbc63\n   expire_time         2017-09-09T22:20:43.448543132-04:00\n   explicit_max_ttl    0\n   id                  70f97438-e174-c03c-40fe-6bcdc1028d6c\n   issue_time          2017-08-08T22:20:43.448543003-04:00\n   meta                map[username:testuser]\n   num_uses            0\n   orphan              true\n   path                auth\/userpass\/login\/testuser\n   policies            [default ping-policy]\n   renewable           true\n   ttl                 2764623\n   ```\n\n1. Login as the user:\n\n   ```text\n   $ vault login 70f97438-e174-c03c-40fe-6bcdc1028d6c\n   ```\n\n1. Read a secret to trigger a PingID push. 
This will be a blocking call until\n   the push notification is either approved or declined:\n\n   ```text\n   $ vault read secret\/foo\n\n   Key                 Value\n   ---                 -----\n   refresh_interval    768h\n   data                which can only be read after MFA validation\n   ```","site":"vault","answers_cleaned":"    layout  docs page title  PingID MFA   MFA Support   Vault Enterprise description  Vault Enterprise supports PingID MFA type         PingID MFA   include  alerts enterprise only mdx   This page demonstrates PingID MFA on ACL d paths of Vault      Configuration  1  Enable the appropriate auth method         text      vault auth enable userpass         1  Fetch the mount accessor for the enabled auth method         text      vault auth list  detailed            The response will look like         text    Path         Type        Accessor                  Plugin    Default TTL    Max TTL    Replication    Description                                                                                                                         token        token       auth token 289703e9       n a       system         system     replicated     token based credentials    userpass     userpass    auth userpass 54b8e339    n a       system         system     replicated     n a         1  Configure PingID MFA         text      vault write sys mfa method pingid ping          mount accessor auth userpass 54b8e339          settings file base64  AABDwWaR             1  Create a policy that gives access to secret through the MFA method created    above               vault policy write ping policy    EOF    path  secret foo         capabilities     read        mfa methods      ping           EOF         1  Create a user  MFA works only for tokens that have identity information on    them  Tokens created by logging in using auth methods will have the associated    identity information  Create a user in the  userpass  auth method and    authenticate 
against it         text      vault write auth userpass users testuser          password testpassword          policies ping policy         1  Create a login token         text      vault write auth userpass login testuser password testpassword     Key                    Value                                    token                  70f97438 e174 c03c 40fe 6bcdc1028d6c    token accessor         a91d97f4 1c7d 6af3 e4bf 971f74f9fab9    token duration         768h0m0s    token renewable        true    token policies          default ping policy     token meta username     testuser             Note that the CLI is not authenticated with the newly created token yet  we    did not call  vault login   instead we used the login API to simply return a    token   1  Fetch the entity ID from the token  The caller identity is represented by the     entity id  property of the token         text      vault token lookup 70f97438 e174 c03c 40fe 6bcdc1028d6c     Key                 Value                                 accessor            a91d97f4 1c7d 6af3 e4bf 971f74f9fab9    creation time       1502245243    creation ttl        2764800    display name        userpass testuser    entity id           307d6c16 6f5c 4ae7 46a9 2d153ffcbc63    expire time         2017 09 09T22 20 43 448543132 04 00    explicit max ttl    0    id                  70f97438 e174 c03c 40fe 6bcdc1028d6c    issue time          2017 08 08T22 20 43 448543003 04 00    meta                map username testuser     num uses            0    orphan              true    path                auth userpass login testuser    policies             default ping policy     renewable           true    ttl                 2764623         1  Login as the user         text      vault login 70f97438 e174 c03c 40fe 6bcdc1028d6c         1  Read a secret to trigger a PingID push  This will be a blocking call until    the push notification is either approved or declined         text      vault read secret foo     Key                
 Value                                 refresh interval    768h    data                which can only be read after MFA validation       "}
{"questions":"vault page title Okta MFA MFA Support Vault Enterprise include alerts enterprise only mdx Vault Enterprise supports Okta MFA type layout docs Okta MFA","answers":"---\nlayout: docs\npage_title: Okta MFA - MFA Support - Vault Enterprise\ndescription: Vault Enterprise supports Okta MFA type.\n---\n\n# Okta MFA\n\n@include 'alerts\/enterprise-only.mdx'\n\nThis page demonstrates the Okta MFA on ACL'd paths of Vault.\n\n## Configuration\n\n1.  Enable the appropriate auth method:\n\n    ```text\n    $ vault auth enable userpass\n    ```\n\n1.  Fetch the mount accessor for the enabled auth method:\n\n    ```text\n    $ vault auth list -detailed\n    ```\n\n    The response will look like:\n\n    ```text\n    Path         Type        Accessor                  Plugin    Default TTL    Max TTL    Replication    Description\n    ----         ----        --------                  ------    -----------    -------    -----------    -----------\n    token\/       token       auth_token_289703e9       n\/a       system         system     replicated     token based credentials\n    userpass\/    userpass    auth_userpass_54b8e339    n\/a       system         system     replicated     n\/a\n    ```\n\n1.  Configure Okta MFA:\n\n    ```text\n    $ vault write sys\/mfa\/method\/okta\/my_okta \\\n        mount_accessor=auth_userpass_54b8e339 \\\n        org_name=\"dev-262775\" \\\n        api_token=\"0071u8PrReNkzmATGJAP2oDyIXwwveqx9vIOEyCZDC\"\n    ```\n\n1.  Create a policy that gives access to secret through the MFA method created\n    above:\n\n    ```text\n    $ vault policy write okta-policy -<<EOF\n    path \"secret\/foo\" {\n      capabilities = [\"read\"]\n      mfa_methods  = [\"my_okta\"]\n    }\n    EOF\n    ```\n\n1.  Create a user. MFA works only for tokens that have identity information on\n    them. Tokens created by logging in using auth methods will have the associated\n    identity information. 
Create a user in the `userpass` auth method and\n    authenticate against it:\n\n    ```text\n    $ vault write auth\/userpass\/users\/testuser \\\n        password=testpassword \\\n        policies=okta-policy\n    ```\n\n1.  Create a login token:\n\n    ```text\n    $ vault write auth\/userpass\/login\/testuser password=testpassword\n\n    Key                    Value\n    ---                    -----\n    token                  70f97438-e174-c03c-40fe-6bcdc1028d6c\n    token_accessor         a91d97f4-1c7d-6af3-e4bf-971f74f9fab9\n    token_duration         768h0m0s\n    token_renewable        true\n    token_policies         [default okta-policy]\n    token_meta_username    \"testuser\"\n    ```\n\n    Note that the CLI is not authenticated with the newly created token yet; we\n    did not call `vault login`. Instead, we used the login API to simply return a\n    token.\n\n1.  Fetch the entity ID from the token. The caller identity is represented by the\n    `entity_id` property of the token:\n\n    ```text\n    $ vault token lookup 70f97438-e174-c03c-40fe-6bcdc1028d6c\n\n    Key                     Value\n    ---                     -----\n    accessor                a91d97f4-1c7d-6af3-e4bf-971f74f9fab9\n    creation_time           1502245243\n    creation_ttl            2764800\n    display_name            userpass-testuser\n    entity_id               307d6c16-6f5c-4ae7-46a9-2d153ffcbc63\n    expire_time             2017-09-09T22:20:43.448543132-04:00\n    explicit_max_ttl        0\n    id                      70f97438-e174-c03c-40fe-6bcdc1028d6c\n    issue_time              2017-08-08T22:20:43.448543003-04:00\n    meta                    map[username:testuser]\n    num_uses                0\n    orphan                  true\n    path                    auth\/userpass\/login\/testuser\n    policies                [default okta-policy]\n    renewable               true\n    ttl                     2764623\n    ```\n\n1.  
Login as the user:\n\n    ```text\n    $ vault login 70f97438-e174-c03c-40fe-6bcdc1028d6c\n    ```\n\n1.  Read a secret to trigger an Okta push. This will be a blocking call until\n    the push notification is either approved or declined:\n\n    ```text\n    $ vault read secret\/foo\n\n    Key                     Value\n    ---                     -----\n    refresh_interval        768h0m0s\n    data                    which can only be read after MFA validation\n    ```","site":"vault","answers_cleaned":"    layout  docs page title  Okta MFA   MFA Support   Vault Enterprise description  Vault Enterprise supports Okta MFA type         Okta MFA   include  alerts enterprise only mdx   This page demonstrates the Okta MFA on ACL d paths of Vault      Configuration  1   Enable the appropriate auth method          text       vault auth enable userpass          1   Fetch the mount accessor for the enabled auth method          text       vault auth list  detailed              The response will look like          text     Path         Type        Accessor                  Plugin    Default TTL    Max TTL    Replication    Description                                                                                                                           token        token       auth token 289703e9       n a       system         system     replicated     token based credentials     userpass     userpass    auth userpass 54b8e339    n a       system         system     replicated     n a          1   Configure Okta MFA          text       vault write sys mfa method okta my okta           mount accessor auth userpass 54b8e339           org name  dev 262775            api token  0071u8PrReNkzmATGJAP2oDyIXwwveqx9vIOEyCZDC           1   Create a policy that gives access to secret through the MFA method created     above          text       vault policy write okta policy    EOF     path  secret foo          capabilities     read         mfa methods      my okta             EOF      
1. Create a user. MFA works only for tokens that have identity information on\n   them. Tokens created by logging in using auth methods will have the associated\n   identity information. Create a user in the `userpass` auth method and\n   authenticate against it:\n\n   ```text\n   vault write auth\/userpass\/users\/testuser \\\n       password=testpassword \\\n       policies=okta_policy\n   ```\n\n1. Create a login token:\n\n   ```text\n   vault write auth\/userpass\/login\/testuser password=testpassword\n\n   Key                    Value\n   ---                    -----\n   token                  70f97438-e174-c03c-40fe-6bcdc1028d6c\n   token_accessor         a91d97f4-1c7d-6af3-e4bf-971f74f9fab9\n   token_duration         768h0m0s\n   token_renewable        true\n   token_policies         [default okta_policy]\n   token_meta_username    testuser\n   ```\n\n   Note that the CLI is not authenticated with the newly created token yet; we\n   did not call `vault login`, instead we used the login API to simply return a\n   token.\n\n1. Fetch the entity ID from the token. The caller identity is represented by the\n   `entity_id` property of the token:\n\n   ```text\n   vault token lookup 70f97438-e174-c03c-40fe-6bcdc1028d6c\n\n   Key                 Value\n   ---                 -----\n   accessor            a91d97f4-1c7d-6af3-e4bf-971f74f9fab9\n   creation_time       1502245243\n   creation_ttl        2764800\n   display_name        userpass-testuser\n   entity_id           307d6c16-6f5c-4ae7-46a9-2d153ffcbc63\n   expire_time         2017-09-09T22:20:43.448543132-04:00\n   explicit_max_ttl    0\n   id                  70f97438-e174-c03c-40fe-6bcdc1028d6c\n   issue_time          2017-08-08T22:20:43.448543003-04:00\n   meta                map[username:testuser]\n   num_uses            0\n   orphan              true\n   path                auth\/userpass\/login\/testuser\n   policies            [default okta_policy]\n   renewable           true\n   ttl                 2764623\n   ```\n\n1. Login as the user:\n\n   ```text\n   vault login 70f97438-e174-c03c-40fe-6bcdc1028d6c\n   ```\n\n1. Read a secret to trigger an Okta push. This will be a blocking call until\n   the push notification is either approved or declined:\n\n   ```text\n   vault read secret\/foo\n\n   Key                 Value\n   ---                 -----\n   refresh_interval    768h0m0s\n   data                which can only be read after MFA validation\n   ```\n"}
{"questions":"vault page title Sentinel Examples An overview of how Sentinel interacts with Vault Enterprise layout docs include alerts enterprise and hcp mdx Examples","answers":"---\nlayout: docs\npage_title: Sentinel Examples\ndescription: An overview of how Sentinel interacts with Vault Enterprise.\n---\n\n# Examples\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nFollowing are some examples that help to introduce concepts. If you are\nunfamiliar with writing Sentinel policies in Vault, please read through to\nunderstand some best practices.\n\nAdditional examples can be found [here](https:\/\/github.com\/hashicorp\/vault-guides\/tree\/master\/governance).\n\n## MFA and CIDR check on login\n\nThe following Sentinel policy requires the incoming user to successfully\nvalidate with an Okta MFA push request before authenticating with LDAP.\nAdditionally, it ensures that only users on the 10.20.0.0\/16 subnet are able to\nauthenticate using LDAP.\n\n```python\nimport \"sockaddr\"\nimport \"mfa\"\nimport \"strings\"\n\n# We expect logins to come only from our private IP range\ncidrcheck = rule {\n    sockaddr.is_contained(\"10.20.0.0\/16\", request.connection.remote_addr)\n}\n\n# Require ping MFA validation to succeed\nping_valid = rule {\n    mfa.methods.ping.valid\n}\n\nmain = rule when strings.has_prefix(request.path, \"auth\/ldap\/login\") {\n    ping_valid and cidrcheck\n}\n```\n\nNote the `rule when` construct on the `main` rule. This scopes the policy to\nthe given condition.\n\nVault takes a default-deny approach to security. 
Without such scoping, because\nactive Sentinel policies must all pass successfully, the user would be forced\nto start with a passing status and then define the conditions under which\naccess is denied, breaking the default-deny concept.\n\nBy instead indicating the conditions under which the `main` rule (and thus, in\nthis example, the entire policy) should be evaluated, the policy instead\ndescribes the conditions under which a matching request is successful. This\nkeeps the default-deny feeling of Vault; if the evaluation condition isn't met,\nthe policy is simply a no-op.\n\n## Allow only specific identity entities or groups\n\n```python\nmain = rule {\n    identity.entity.name is \"jeff\" or\n    identity.entity.id is \"fe2a5bfd-c483-9263-b0d4-f9d345efdf9f\" or\n    \"sysops\" in identity.groups.names or\n    \"14c0940a-5c07-4b97-81ec-0d423accb8e0\" in keys(identity.groups.by_id)\n}\n```\n\nThis example shows accessing Identity properties to make decisions, showing\nthat for Identity values IDs or names can be used for reference.\n\nIn general, it is more secure to use IDs. While convenient, entity names and\ngroup names can be switched from one entity to another, because their only\nconstraint is that they must be unique. Using IDs guarantees that only that\nspecific entity or group is sufficient; if the group or entity are deleted and\nrecreated with the same name, the match will fail.\n\n## Instantly disallow all Previously-Generated tokens\n\nImagine a break-glass scenario where it is discovered that there have been\ncompromises of some unknown number of previously-generated tokens.\n\nIn such a situation it would be possible to revoke all previous tokens, but\nthis may take a while for a number of reasons, from requiring revocation of\ngenerated secrets to the simple delay required to remove many entries from\nstorage. 
In addition, it could revoke tokens and generated secrets that later\nforensic analysis shows were not compromised, unnecessarily widening the impact\nof the mass revocation.\n\nIn Vault's ACL system a simple deny could be put into place, but this is a very\ncoarse-grained control and would require forethought to ensure that a policy\nthat can be modified in such a way is attached to every token. It also would\nnot prevent access to login paths or other unauthenticated paths.\n\nSentinel offers much more fine-grained control:\n\n```python\nimport \"time\"\n\nmain = rule when not request.unauthenticated {\n    time.load(token.creation_time).unix >\n      time.load(\"2017-09-17T13:25:29Z\").unix\n}\n```\n\nCreated as an EGP on `*`, this will block all access to any path Sentinel\noperates on with a token created before the given time. Tokens created after\nthis time, since they were not a part of the compromise, will not be subject to\nthis restriction.\n\n## Delegate EGP policy management under a path\n\nThe following policy gives token holders with this policy (via their tokens or\ntheir Identity entities\/groups) the ability to write EGP policies that can only\ntake effect at Vault paths below certain prefixes. 
This effectively delegates\npolicy management to the team for their own key-value spaces.\n\n```python\nimport \"strings\"\n\ndata_match = func() {\n    # Make sure there is request data\n    if length(request.data else 0) is 0 {\n        return false\n    }\n\n    # Make sure request data includes paths\n    if length(request.data.paths else 0) is 0 {\n        return false\n    }\n\n    # For each path, verify that it is in the allowed list\n    for strings.split(request.data.paths, \",\") as path {\n        # Make it easier for users who might be used to starting paths with\n        # slashes\n        sanitizedPath = strings.trim_prefix(path, \"\/\")\n        if not strings.has_prefix(sanitizedPath, \"dev-kv\/teama\/\") and\n           not strings.has_prefix(sanitizedPath, \"prod-kv\/teama\/\") {\n            return false\n        }\n    }\n\n    return true\n}\n\n# Only care about writing; reading can be allowed by normal ACLs\nprecond = rule {\n    request.operation in [\"create\", \"update\"] and\n    strings.has_prefix(request.path, \"sys\/policies\/egp\/\")\n}\n\nmain = rule when precond {\n    strings.has_prefix(request.path, \"sys\/policies\/egp\/teama-\") and data_match()\n}\n```","site":"vault"}
{"questions":"vault Properties page title Sentinel Properties An overview of how Sentinel interacts with Vault Enterprise layout docs include alerts enterprise and hcp mdx","answers":"---\nlayout: docs\npage_title: Sentinel Properties\ndescription: An overview of how Sentinel interacts with Vault Enterprise.\n---\n\n# Properties\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nVault injects a rich set of data into the running Sentinel environment,\nallowing for very fine-grained controls. The set of available properties are\nenumerated on this page.\n\nThe following properties are available for use in Sentinel policies.\n\n## Namespace properties\n\nThe `namespace` (Sentinel) namespace gives access to information about the\nnamespace in which the request is running. (This may or may not match the\nclient's chosen namespace, if a request reaches into a child namespace).\n\n| Name   | Type     | Description                    |\n| :----- | :------- | :----------------------------- |\n| `id`   | `string` | The namespace ID               |\n| `path` | `string` | The root path of the namespace |\n\n## Request properties\n\nThe following properties are available in the `request` namespace.\n\n| Name                     | Type                  | Description                                                                                 |\n| :----------------------- | :-------------------- | :------------------------------------------------------------------------------------------ |\n| `connection.remote_addr` | `string`              | TCP\/IP source address of the client                                                         |\n| `data`                   | `map (string -> any)` | Raw request data                                                                            |\n| `operation`              | `string`              | Operation type, e.g. 
\"read\" or \"update\"                                                     |\n| `path`                   | `string`              | Path, with any leading `\/` trimmed                                                          |\n| `policy_override`        | `bool`                | `true` if a `soft-mandatory` policy override was requested                                  |\n| `unauthenticated`        | `bool`                | `true` if the requested path is an unauthenticated path                                     |\n| `wrapping.ttl`           | `duration`            | The requested response-wrapping TTL in nanoseconds, suitable for use with the `time` import |\n| `wrapping.ttl_seconds`   | `int`                 | The requested response-wrapping TTL in seconds                                              |\n\n### Replication properties\n\nThe following properties exist at the `replication` namespace.\n\n| Name          | Type     | Description                                                                                                    |\n| :------------ | :------- | :------------------------------------------------------------------------------------------------------------- |\n| `dr.mode`          | `string` | The state of DR replication. Valid values are \"disabled\", \"bootstrapping\", \"primary\", and \"secondary\"          |\n| `performance.mode` | `string` | The state of performance replication. Valid values are \"disabled\", \"bootstrapping\", \"primary\", and \"secondary\" |\n\n## Token properties\n\nThe following properties, if available, are in the `token` namespace. The\nnamespace will not exist if there is no token information attached to a\nrequest, e.g. 
when logging in.\n\n| Name                       | Type                     | Description                                                                                                                        |\n| :------------------------- | :----------------------- | :--------------------------------------------------------------------------------------------------------------------------------- |\n| `creation_time`            | `string`                 | The timestamp of the token's creation, in RFC3339 format                                                                           |\n| `creation_time_unix`       | `int`                    | The timestamp of the token's creation, in seconds since Unix epoch UTC                                                             |\n| `creation_ttl`             | `duration`               | The TTL the token was first created with in nanoseconds, suitable for use with the `time` import                                   |\n| `creation_ttl_seconds`     | `int`                    | The TTL the token was first created with in seconds                                                                                |\n| `display_name`             | `string`                 | The display name set on the token, if any                                                                                          |\n| `entity_id`                | `string`                 | The Identity entity ID attached to the token, if any                                                                               |\n| `explicit_max_ttl`         | `duration`               | If the token has an explicit max TTL, the duration of the explicit max TTL in nanoseconds, suitable for use with the `time` import |\n| `explicit_max_ttl_seconds` | `int`                    | If the token has an explicit max TTL, the duration of the explicit max TTL in seconds                                              |\n| `metadata`                 | `map (string -> string)` | 
Metadata set on the token                                                                                                          |\n| `num_uses`                 | `int`                    | The number of uses remaining on a use-count-limited token; 0 if the token has no use-count limit                                   |\n| `path`                     | `string`                 | The request path that resulted in creation of this token                                                                           |\n| `period`                   | `duration`               | If the token has a period, the duration of the period in nanoseconds, suitable for use with the `time` import                      |\n| `period_seconds`           | `int`                    | If the token has a period, the duration of the period in seconds                                                                   |\n| `policies`                 | `list (string)`          | Policies directly attached to the token                                                                                            |\n| `role`                     | `string`                 | If created via a token role, the role that created the token                                                                       |\n| `type`                     | `string`                 | The type of token, currently will be either `batch` or `service`                                                                   |\n\n## Token namespace properties\n\nThe following properties, if available, are in the `token.namespace` namespace.\nThe (Sentinel) namespace will not exist if there is no token information attached to a\nrequest, e.g. 
when logging in.\n\n| Name   | Type     | Description                    |\n| :----- | :------- | :----------------------------- |\n| `id`   | `string` | The namespace ID               |\n| `path` | `string` | The root path of the namespace |\n\n## Identity properties\n\nThe following properties, if available, are in the `identity` namespace. The\nnamespace may not exist if there is no token information attached to the\nrequest; however, at login time the user's request data will be used to attempt\nto find any existing Identity information, or create some information to pass\nto MFA functions.\n\n### Entity properties\n\nThese exist at the `identity.entity` namespace.\n\n| Name                | Type                     | Description                                                   |\n| :------------------ | :----------------------- | :------------------------------------------------------------ |\n| `creation_time`     | `string`                 | The entity's creation time in RFC3339 format                  |\n| `id`                | `string`                 | The entity's ID                                               |\n| `last_update_time`  | `string`                 | The entity's last update (modify) time in RFC3339 format      |\n| `metadata`          | `map (string -> string)` | Metadata associated with the entity                           |\n| `name`              | `string`                 | The entity's name                                             |\n| `merged_entity_ids` | `list (string)`          | A list of IDs of entities that have been merged into this one |\n| `aliases`           | `list (alias)`           | List of aliases associated with this entity                   |\n| `policies`          | `list (string)`          | List of the policies set on this entity                       |\n\n### Alias properties\n\nThese can be retrieved from `identity.entity.aliases`.\n\n| Name                     | Type                     | Description        
|\n| :----------------------- | :----------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------- |\n| `creation_time`          | `string`                 | The alias's creation time in RFC3339 format                                                                                                   |\n| `id`                     | `string`                 | The alias's ID                                                                                                                                |\n| `last_update_time`       | `string`                 | The alias's last update (modify) time in RFC3339 format                                                                                       |\n| `metadata`               | `map (string -> string)` | Metadata associated with the alias                                                                                                            |\n| `custom_metadata`        | `map (string -> string)` | Custom metadata associated with the alias                                                                                                     |\n| `merged_from_entity_ids` | `list (string)`          | If this alias was attached to the current entity via one or more merges, the original entity\/entities will be in this list                    |\n| `mount_accessor`         | `string`                 | The immutable accessor of the mount that created this alias                                                                                   |\n| `mount_path`             | `string`                 | The path of the mount that created this alias; unlike the accessor, there is no guarantee that the current path represents the original mount |\n| `mount_type`             | `string`                 | The type of the mount that created this alias
                                |\n| `name`                   | `string`                 | The alias's name                                                                                                                              |\n\n### Groups properties\n\nThese exist at the `identity.groups` namespace.\n\n| Name      | Type                    | Description                                                                                                                                     |\n| :-------- | :---------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------- |\n| `by_id`   | `map (string -> group)` | A map of group ID to group information                                                                                                          |\n| `by_name` | `map (string -> group)` | A map of group name to group information; unlike the group ID, there is no guarantee that the current name will always represent the same group |\n\n### Group properties\n\nThese can be retrieved from the `identity.groups` maps.\n\n| Name                | Type                     | Description                                                        |\n| :------------------ | :----------------------- | :----------------------------------------------------------------- |\n| `creation_time`     | `string`                 | The group's creation time in RFC3339 format                        |\n| `id`                | `string`                 | The group's ID                                                     |\n| `last_update_time`  | `string`                 | The group's last update (modify) time in RFC3339 format            |\n| `metadata`          | `map (string -> string)` | Metadata associated with the group                                 |\n| `name`              | `string`                 | The group's name                                                   |\n| 
`member_entity_ids` | `list (string)`          | A list of IDs of entities that are directly assigned to this group |\n| `parent_group_ids`  | `list (string)`          | A list of IDs of groups that are parents of this group             |\n| `policies`          | `list (string)`          | List of the policies set on this group                             |\n\n## MFA properties\n\nThese properties exist at the `mfa` namespace.\n\n| Name      | Type                     | Description                               |\n| :-------- | :----------------------- | :---------------------------------------- |\n| `methods` | `map (string -> method)` | A map of method name to method properties |\n\n### MFA method properties\n\nThese properties can be accessed via the `mfa.methods` selector.\n\n| Name    | Type   | Description                                                                                                                                                                                                                                   |\n| :------ | :----- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `valid` | `bool` | Whether the method has successfully been validated; if validation has not been attempted, this will trigger the validation attempt. The result of the validation attempt will be used for this method for all policies for the given request. 
|\n\n## Control group properties\n\nThese properties exist at the `controlgroup` namespace.\n\n| Name                   | Type                   | Description                                 |\n| :--------------------- | :--------------------- | :------------------------------------------ |\n| `time`, `request_time` | `string`               | The original request time in RFC3339 format |\n| `authorizations`       | `list (authorization)` | List of control group authorizations        |\n\n### Control group authorization\n\nThese properties can be accessed via the `controlgroup.authorizations` selector.\n\n| Name     | Type              | Description                                                |\n| :------- | :---------------- | :--------------------------------------------------------- |\n| `time`   | `string`          | The authorization time in RFC3339 format                   |\n| `entity` | `identity.entity` | The identity entity for the authorizer.                    |\n| `groups` | `identity.groups` | The map of identity groups associated with the authorizer. 
|","site":"vault","answers_cleaned":"
the alias    custom metadata            map  string    string     Custom metadata associated with the alias                                                                                                                 merged from entity ids     list  string              If this alias was attached to the current entity via one or more merges  the original entity entities will be in this list                         mount accessor             string                    The immutable accessor of the mount that created this alias                                                                                        mount path                 string                    The path of the mount that created this alias  unlike the accessor  there is no guarantee that the current path represents the original mount      mount type                 string                    The type of the mount that created this alias                                                                                                      name                       string                    The alias s name                                                                                                                                     Groups properties  These exist at the  identity groups  namespace     Name        Type                      Description                                                                                                                                                                                                                                                                                                                                    by id       map  string    group     A map of group ID to group information                                                                                                               by name     map  string    group     A map of group name to group information  unlike the group ID  there is no guarantee that the current name will 
always represent the same group        Group properties  These can be retrieved from the  identity groups  maps     Name                  Type                       Description                                                                                                                                                                                     creation time         string                    The group s creation time in RFC3339 format                             id                    string                    The group s ID                                                          last update time      string                    The group s last update  modify  time in RFC3339 format                 metadata              map  string    string     Metadata associated with the group                                      name                  string                    The group s name                                                        member entity ids     list  string              A list of IDs of entities that are directly assigned to this group      parent group ids      list  string              A list of IDs of groups that are parents of this group                  policies              list  string              List of the policies set on this group                                   MFA properties  These properties exist at the  mfa  namespace     Name        Type                       Description                                                                                                                         methods     map  string    method     A map of method name to method properties        MFA method properties  These properties can be accessed via the  mfa methods  selector     Name      Type     Description                                                                                                                                                                                                                                             
                                                                                                                                                                                                                                                                valid     bool    Whether the method has successfully been validated  if validation has not been attempted  this will trigger the validation attempt  The result of the validation attempt will be used for this method for all policies for the given request        Control group properties  These properties exist at the  controlgroup  namespace     Name                     Type                     Description                                                                                                                                        time    request time     string                  The original request time in RFC3339 format      authorizations           list  authorization     List of control group authorizations               Control group authorization  These properties can be accessed via the  controlgroup authorizations  selector     Name       Type                Description                                                                                                                                                   time       string             The authorization time in RFC3339 format                        entity     identity entity    The identity entity for the authorizer                          groups     identity groups    The map of identity groups associated with the authorizer   "}
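A minimal sketch of how the entity properties tabulated above can be combined in an RGP. The `team` metadata key and the `platform` value are illustrative assumptions, not part of the original document:

```hcl
# Hypothetical RGP sketch (illustrative names): when the requesting
# entity carries a "team" metadata key, require its value to be
# "platform". Entities without the key are not restricted by this rule.
precond = rule {
    "team" in keys(identity.entity.metadata)
}

main = rule when precond {
    identity.entity.metadata.team is "platform"
}
```

As with the deny-all example elsewhere in these docs, the `when` clause makes `main` pass trivially whenever the precondition does not match.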
{"questions":"vault page title Vault Enterprise Sentinel Integration An overview of how Sentinel interacts with Vault Enterprise Vault Enterprise and Sentinel integration layout docs include alerts enterprise and hcp mdx","answers":"---\nlayout: docs\npage_title: Vault Enterprise Sentinel Integration\ndescription: An overview of how Sentinel interacts with Vault Enterprise.\n---\n\n# Vault Enterprise and Sentinel integration\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nVault Enterprise integrates HashiCorp Sentinel to provide a rich set of access\ncontrol functionality. Because Vault is a security-focused product trusted with\nhigh-risk secrets and assets, and because of its default-deny stance,\nintegration with Vault is implemented in a defense-in-depth fashion. This takes\nthe form of multiple types of policies and a fixed evaluation order.\n\n## Policy types\n\nVault's policy system has been expanded to support three types of policies:\n\n- `ACLs` - These are the [traditional Vault\n  policies](\/vault\/docs\/concepts\/policies) and remain unchanged.\n\n- `Role Governing Policies (RGPs)` - RGPs are Sentinel policies that are tied\n  to particular tokens, Identity entities, or Identity groups. They have access\n  to a rich set of controls across various aspects of Vault.\n\n- `Endpoint Governing Policies (EGPs)` - EGPs are Sentinel policies that are\n  tied to particular paths instead of tokens. They have access to as much\n  request information as possible, but they can take effect even on\n  unauthenticated paths, such as login paths.\n\nNot every unauthenticated path supports EGPs. 
For instance, the paths related\nto root token generation cannot support EGPs because root token generation is\nthe mechanism of last resort if, for example, all clients are locked out of\nVault due to misconfigured EGPs.\n\nLike with ACLs, [root tokens](\/vault\/docs\/concepts\/tokens#root-tokens)\nare not subject to Sentinel policy checks.\n\nSentinel execution should be considered significantly slower than normal\nACL policy checking. If high performance is needed, test appropriately when\nintroducing Sentinel policies.\n\n### Policy enforcement levels\n\nSentinel policies have three enforcement levels to choose from.\n\n| Level          | Description                                                                 |\n| -------------- | --------------------------------------------------------------------------- |\n| advisory       | The policy is allowed to fail. Can be used as a tool to educate new users.  |\n| soft-mandatory | The policy must pass unless an [override](#policy-overriding) is specified. |\n| hard-mandatory | The policy must pass.                                                       |\n\n## Policy evaluation\n\nVault evaluates incoming requests against policies of all types that are\napplicable.\n\n![Policy evaluation](\/img\/diagram-policy-evaluation-workflow_light.png#light-theme-only)\n![Policy evaluation](\/img\/diagram-policy-evaluation-workflow_dark.png#dark-theme-only)\n\n1. If the request is unauthenticated, skip to evaluating the EGPs. Otherwise,\n   evaluate the token's ACL policies. These must grant access; as always, a\n   failure to be granted capabilities on a path via ACL policies denies the\n   request.\n2. Evaluate RGPs attached to the client token. All policies must pass according\n   to their enforcement level.\n3. Evaluate EGPs set on the requested path, and any prefix-matching EGPs set on\n   less-specific paths. 
All policies must pass according to their\n   enforcement level.\n\nAny failure at any of these steps results in a denied request.\n\n### RGPs and namespaces\n\nPolicies, auth methods, secrets engines, and tokens are locked into the\n[namespace](\/vault\/docs\/enterprise\/namespaces) they are created in. However,\nidentity groups can pull in entities and groups from other namespaces.\n\n<Tip>\n\nRefer to the [Set up entities and groups section of the Secure Multi-Tenancy\nwith Namespaces\ntutorial](\/vault\/tutorials\/enterprise\/namespaces#set-up-entities-and-groups) for\nstep-by-step instructions.\n\n<\/Tip>\n\n<Warning>\n\n  As of the following versions, Vault only applies RGPs derived from identity\n  group membership to entities in child namespaces:\n\n  - `1.15.0+`\n  - `1.14.4+`\n  - `1.13.8+`\n\n<\/Warning>\n\nThe scenarios below describe the relevant changes in more detail.\n\n#### Versions 1.15.0, 1.14.4, 1.13.8, and later\n\nThe training namespace is a child namespace of the education namespace. The \"Sun\nShine\" entity created in the training namespace is a member of the \"Tester\"\ngroup which is defined in the education namespace. The group members inherit the\ngroup-level policy.\n\n![Relationship](\/img\/diagram-rgp-namespace-post-115_light.png#light-theme-only)\n![Relationship](\/img\/diagram-rgp-namespace-post-115_dark.png#dark-theme-only)\n\n#### Versions 1.15.0-rc1, 1.14.3, 1.13.7, and earlier\n\nThe training namespace is a child namespace of the education namespace. The \"Sun\nShine\" entity created in the education namespace is a member of the \"Tester\"\ngroup which is defined in the training namespace. The group members inherit the\ngroup-level policy.\n\n![Relationship](\/img\/diagram-rgp-namespace-pre-115_light.png#light-theme-only)\n![Relationship](\/img\/diagram-rgp-namespace-pre-115_dark.png#dark-theme-only)\n\nWhile ACL policies and EGPs set rules on a specific path, an RGP does not\nspecify a target path. 
RGPs are tied to tokens, identity entities, or identity\ngroups, so you can write rules without specifying a path.\n\nWhat if the deny-all RGP in the training namespace looked like this:\n\n<CodeBlockConfig filename=\"deny-all.sentinel\">\n\n```hcl\nprecond = rule {\n   identity.entity.metadata.org_id is \"A012345X\"\n}\n\nmain = rule when precond {\n   false\n}\n```\n\n<\/CodeBlockConfig>\n\nVault checks the requesting token's entity metadata. If the `org_id` metadata\nexists and the value is `A012345X`, the request gets denied because the\nenforcement level is hard-mandatory. It does not matter whether the requested\npath starts with `\/education` or `\/education\/training`, or even `\/foo`, because\nthere is no path associated with the deny-all RGP.\n\n## Policy overriding\n\nVault supports normal Sentinel overriding behavior. Requests to override can be\nspecified on the command line via the `policy-override` flag or in HTTP\nrequests by setting the `X-Vault-Policy-Override` header to `true`.\n\nOverride requests are visible in Vault's audit log; in addition, override\nrequests and their eventual status (whether they ended up being required) are\nlogged as warnings in Vault's server logs.\n\n## MFA\n\nSentinel policies support the [Identity-based MFA\nsystem](\/vault\/docs\/enterprise\/mfa) in Vault Enterprise. Within a single\nrequest, multiple checks of any named MFA method will only trigger\nauthentication behavior for that method once, regardless of whether its\nvalidity is checked via ACLs, RGPs, or EGPs.\n\nEGPs can be used to require MFA on otherwise unauthenticated paths, such as\nlogin paths. On such paths, the request data will perform a lookahead to try to\ndiscover the appropriate Identity information to use for MFA. 
It may be\nnecessary to pre-populate Identity entries or supply additional parameters with\nthe request if you require more information to use MFA than the endpoint is\nable to glean from the original request alone.\n\n# Using Sentinel\n\n## Configuration\n\nSentinel policies can be configured via the `sys\/policies\/rgp\/` and\n`sys\/policies\/egp\/` endpoints; see [the\ndocumentation](\/vault\/api-docs\/system\/policies) for more information.\n\nOnce set, RGPs can be assigned to Identity entities and groups or to tokens\njust like ACL policies. As a result, they cannot share names with ACL policies.\n\nWhen setting an EGP, a list of paths must be provided specifying on which paths\nthat EGP should take effect. Endpoints can have multiple distinct EGPs set on\nthem; all are evaluated for each request. Paths can use a glob character (`*`)\nas the last character of the path to perform a prefix match; a path that\nconsists only of a `*` will apply to the root of the API. Since requests are\nsubject to any EGPs exactly matching the requested path and any glob EGPs\nsitting further up the request path, an EGP with a path of `*` will thus take\neffect on all requests.\n\n## Properties and examples\n\nSee the [Examples](\/vault\/docs\/enterprise\/sentinel\/examples) page for examples\nof Sentinel in action, and the\n[Properties](\/vault\/docs\/enterprise\/sentinel\/properties) page for detailed\nproperty documentation.\n\n## Tutorial\n\nRefer to the [Sentinel Policies](\/vault\/tutorials\/policies\/sentinel)\ntutorial to learn how to author Sentinel policies in Vault.","site":"vault","answers_cleaned":"    layout  docs page title  Vault Enterprise Sentinel Integration description  An overview of how Sentinel interacts with Vault Enterprise         Vault Enterprise and Sentinel integration   include  alerts enterprise and hcp mdx   Vault Enterprise integrates HashiCorp Sentinel to provide a rich set of access control functionality  Because Vault is a security focused 
product trusted with high risk secrets and assets  and because of its default deny stance  integration with Vault is implemented in a defense in depth fashion  This takes the form of multiple types of policies and a fixed evaluation order      Policy types  Vault s policy system has been expanded to support three types of policies      ACLs    These are the  traditional Vault   policies   vault docs concepts policies  and remain unchanged      Role Governing Policies  RGPs     RGPs are Sentinel policies that are tied   to particular tokens  Identity entities  or Identity groups  They have access   to a rich set of controls across various aspects of Vault      Endpoint Governing Policies  EGPs     EGPs are Sentinel policies that are   tied to particular paths instead of tokens  They have access to as much   request information as possible  but they can take effect even on   unauthenticated paths  such as login paths   Not every unauthenticated path supports EGPs  For instance  the paths related to root token generation cannot support EGPs because it s already the mechanism of last resort if  for instance  all clients are locked out of Vault due to misconfigured EGPs   Like with ACLs   root tokens   vault docs concepts tokens root tokens  are not subject to Sentinel policy checks   Sentinel execution should be considered to be significantly slower than normal ACL policy checking  If high performance is needed  testing should be performed appropriately when introducing Sentinel policies       Policy enforcement levels  Sentinel policies have three enforcement levels to choose from     Level            Description                                                                                                                                                                      advisory         The policy is allowed to fail  Can be used as a tool to educate new users       soft mandatory   The policy must pass unless an  override   policy overriding  is specified      
hard mandatory   The policy must pass                                                              Policy evaluation  Vault evaluates incoming requests against policies of all types that are applicable     Policy evaluation   img diagram policy evaluation workflow light png light theme only    Policy evaluation   img diagram policy evaluation workflow dark png dark theme only   1  If the request is unauthenticated  skip to evaluating the EGPs  Otherwise     evaluate the token s ACL policies  These must grant access  as always  a    failure to be granted capabilities on a path via ACL policies denies the    request  2  Evaluate RGPs attached to the client token  All policies must pass according    to their enforcement level  3  Evaluate EGPs set on the requested path  and any prefix matching EGPs set on    less specific paths  are evaluated  All policies must pass according to their    enforcement level   Any failure at any of these steps results in a denied request       RGPs and namespaces  Policies  auth methods  secrets engines  and tokens are locked into the  namespace   vault docs enterprise namespaces  they are created in  However  identity groups can pull in entities and groups from other namespaces    Tip   Refer to the  Set up entities and groups section of the Secure Multi Tenancy with Namespaces tutorial   vault tutorials enterprise namespaces set up entities and groups  for a step by step instruction     Tip    Warning     As of the following versions  Vault only applies RPGs derived from identity   group membership to entities in child namespaces        1 15 0        1 14 4        1 13 8      Warning   The scenarios below describe the relevant changes in more detail        Versions 1 15 0  1 14 4  1 13 8  and later  The training namespace is a child namespace of the education namespace  The  Sun Shine  entity created in the training namespace is a member of the  Tester  group which is defined in the education namespace  The group members inherit the 
group level policy     Relationship   img diagram rgp namespace post 115 light png light theme only    Relationship   img diagram rgp namespace post 115 dark png dark theme only        Versions 1 15 0 rc1  1 14 3  1 13 7  and earlier  The training namespace is a child namespace of the education namespace  The  Sun Shine  entity created in the education namespace is a member of the  Tester  group which is defined in the training namespace  The group members inherit the group level policy     Relationship   img diagram rgp namespace pre 115 light png light theme only    Relationship   img diagram rgp namespace pre 115 dark png dark theme only   While ACL policies and EGPs set rules on a specific path  an RGP does not specify a target path  RGPs are tied to tokens  identity entities  or identity groups that you can write rules without specifying a path   What if the deny all RGP in the training namespace looked like    CodeBlockConfig filename  deny all sentinel       hcl precond   rule      identity entity metadata org id is  A012345X     main   rule when precond      false          CodeBlockConfig   Vault checks the requesting token s entity metadata  If the  org id  metadata exists and the value is  A012345X   the request gets denied because the enforcement level is hard mandatory  It does not matter if the request invokes a path starts with   education  or   education training   or even   foo  because there is no path associated with the deny all RGP       Policy overriding  Vault supports normal Sentinel overriding behavior  Requests to override can be specified on the command line via the  policy override  flag or in HTTP requests by setting the  X Vault Policy Override  header to  true    Override requests are visible in Vault s audit log  in addition  override requests and their eventual status  whether they ended up being required  are logged as warnings in Vault s server logs      MFA  Sentinel policies support the  Identity based MFA system   vault docs 
enterprise mfa  in Vault Enterprise  Within a single request  multiple checks of any named MFA method will only trigger authentication behavior for that method once  regardless of whether its validity is checked via ACLs  RGPs  or EGPs   EGPs can be used to require MFA on otherwise unauthenticated paths  such as login paths  On such paths  the request data will perform a lookahead to try to discover the appropriate Identity information to use for MFA  It may be necessary to pre populate Identity entries or supply additional parameters with the request if you require more information to use MFA than the endpoint is able to glean from the original request alone     Using Sentinel     Configuration  Sentinel policies can be configured via the  sys policies rgp   and  sys policies egp   endpoints  see  the documentation   vault api docs system policies  for more information   Once set  RGPs can be assigned to Identity entities and groups or to tokens just like ACL policies  As a result  they cannot share names with ACL policies   When setting an EGP  a list of paths must be provided specifying on which paths that EGP should take effect  Endpoints can have multiple distinct EGPs set on them  all are evaluated for each request  Paths can use a glob character       as the last character of the path to perform a prefix match  a path that consists only of a     will apply to the root of the API  Since requests are subject to an EGPs exactly matching the requested path and any glob EGPs sitting further up the request path  an EGP with a path of     will thus take effect on all requests      Properties and examples  See the  Examples   vault docs enterprise sentinel examples  page for examples of Sentinel in action  and the  Properties   vault docs enterprise sentinel properties  page for detailed property documentation      Tutorial  Refer to the  Sentinel Policies   vault tutorials policies sentinel  tutorial to learn how to author Sentinel policies in Vault "}
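The policy override described in the Sentinel integration content above is requested per-call over the HTTP API by adding the `X-Vault-Policy-Override` header. A minimal sketch using only the Python standard library; the address, token, and secret path are placeholders, not values from the original document:

```python
import json
import urllib.request

def override_request(addr: str, token: str, path: str, payload: dict) -> urllib.request.Request:
    """Build a write request asking Vault to override any soft-mandatory
    Sentinel policies for this call (hard-mandatory policies still apply)."""
    return urllib.request.Request(
        url=f"{addr}/v1/{path}",
        data=json.dumps(payload).encode(),
        method="POST",
        headers={
            "X-Vault-Token": token,
            # Request a policy override for this call only.
            "X-Vault-Policy-Override": "true",
        },
    )

# Example (the request is only built here, not sent anywhere):
req = override_request("http://127.0.0.1:8200", "s.example", "secret/foo", {"value": "bar"})
# urllib normalizes header capitalization when storing it.
print(req.get_header("X-vault-policy-override"))  # → true
```

Sending the request (e.g. via `urllib.request.urlopen`) requires a running Vault server and a valid token; the override will also appear in Vault's audit log, as noted above.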
{"questions":"vault Use namespaces to create isolated environments within Vault Enterprise Guidance for using thousands of namespaces with Vault Enterprise page title Run Vault Enterprise with many namespaces Run Vault Enterprise with many namespaces layout docs","answers":"---\nlayout: docs\npage_title: Run Vault Enterprise with many namespaces\ndescription: >-\n  Guidance for using thousands of namespaces with Vault Enterprise\n---\n\n# Run Vault Enterprise with many namespaces\n\nUse namespaces to create isolated environments within Vault Enterprise.\nBy default, Vault limits the number and depth of namespaces based on your\nstorage configuration. The information below provides guidance on how to modify\nyour namespace limits and what to expect when operating a Vault cluster with\n7000+ namespaces.\n\n## Default namespace limits\n\n@include 'namespace-limits.mdx'\n\n## How to modify your namespace limit\n\n@include 'storage-entry-size.mdx'\n\n## Performance considerations\n\nRunning Vault with thousands of namespaces can have operational impacts on a\ncluster. Below are some performance considerations to take into account before\nusing thousands of namespaces.\n\nIt is **not** recommended to use thousands of namespaces with any version of Vault\nlower than 1.13.9, 1.14.5, or 1.15.0. Improvements were released in those\nversions that increase the reliability of Raft heartbeats when using many\nnamespaces.\n\n<Note title=\"Testing parameters\">\n\n  The aggregated performance data below assumes a 3-node Vault cluster running\n  on N2 standard VMs with Google Kubernetes Engine, default mounts, and\n  integrated storage. The results average metrics from multiple `n2-standard-16`\n  and `n2-standard-32` VMs with a varying number of namespaces.\n\n<\/Note>\n\n### Unseal times\n\nVault sets up and initializes every mount after an unseal event. 
At minimum,\nthe initialization process includes the default mounts for all active namespaces\n(`sys`, `identity`, `cubbyhole`, and `token`).\n\nThe more namespaces and custom mounts in the deployment, the longer the\npost-unseal initialization takes. As a result, **even with auto-unseal**, Vault\nwill be unresponsive during initialization for deployments with many namespaces.\n\nPost-unseal times observed during testing:\n\n| Number of namespaces | Unseal initialization time  |\n|----------------------|-------------------|\n| 10                   | ~5 seconds        |\n| 10000                | ~2-3 minutes      |\n| 20000                | ~12-14 minutes    |\n| 30000                | ~33-36 minutes    |\n\n\n### Cluster leadership transfer times\n\nVault high availability clusters have a leader (also known as an active node)\nwhich is the server that accepts writes to the cluster and replicates the\nwritten data to the follower nodes. If the leader crashes or needs to be removed\nfrom the cluster, one of the follower nodes must take over leadership. This is\nknown as a leadership transfer.\n\nWhenever a leadership transfer happens, the new active node must go through all\nof the mounts in the cluster and set them up before the node can be ready to be\nthe leader. 
Because every namespace has at least 4 mounts (`sys`, `identity`,\n`cubbyhole`, and `token`), the time for a leadership transfer to complete will\nincrease with the number of namespaces.\n\nLeadership transfer times observed for the [`vault operator step-down`](\/vault\/docs\/commands\/operator\/step-down)\ncommand:\n\n| Number of namespaces | Time until a node is elected as leader |\n|----------------------|----------------------------------------|\n| 10                   | ~2 seconds                             |\n| 10000                | ~33-45 seconds                         |\n| 20000                | ~1-2 minutes                           |\n| 30000                | ~4 minutes                             |\n\n## System requirements\n\n### Minimum memory requirements\n\nEach namespace requires at least 435 KB of memory to store information\nabout the paths available within the namespace. Given `N` namespaces, your\nVault deployment must include at least (435 x N) KB memory for namespace support\nto avoid degraded performance.\n\n### Rollback and rotation worker requirements\n\nSometimes, Vault secret and auth engines need to clean up data after a request\nis canceled or a request fails halfway through. Vault issues rollback operations\nevery minute to each mount in order to periodically trigger the clean up\nprocess.\n\nBy default, Vault uses 256 workers to perform rollback operations. Mounts with a\nlarge number of namespaces can become bottlenecks that slow down the overall\nrollback process. The effects of the slowdown vary based on the particular\nmounts. 
At minimum, your Vault deployment will take longer to fully purge stale\ndata and periodic rotations may happen less frequently than intended.\n\nYou can tell whether the number of rollback workers is sufficient by monitoring\nthe following metrics:\n\n| Expected range | Metric                                                                                           |\n|----------------|--------------------------------------------------------------------------------------------------|\n| 0 \u2013 256        | [`vault.rollback.queued`](\/vault\/docs\/internals\/telemetry\/metrics\/core-system#rollback-metrics)  |\n| 0 \u2013 60000      | [`vault.rollback.waiting`](\/vault\/docs\/internals\/telemetry\/metrics\/core-system#rollback-metrics) |\n\n## Identity secret engine warnings\n\nWhen using OIDC with many namespaces, you may see warnings in your Vault logs\nfrom the `identity` secret mount under the `root` namespace. For example:\n\n```text\n2023-10-24T15:47:56.594Z [WARN]  secrets.identity.identity_51eb2411: error expiring OIDC public keys: err=\"context deadline exceeded\"\n2023-10-24T15:47:56.594Z [WARN]  secrets.identity.identity_51eb2411: error rotating OIDC keys: err=\"context deadline exceeded\"\n```\n\nThe `secrets.identity` warnings occur because the root namespace is responsible\nfor rotating the [OIDC keys](\/vault\/docs\/secrets\/identity\/oidc-provider) of all\nother namespaces.\n\n<Warning title=\"Avoid OIDC with many namespaces\">\n\n  Using Vault as an [OIDC provider](\/vault\/docs\/concepts\/oidc-provider) with\n  many namespaces can severely delay the rotation and invalidation of OIDC keys.\n\n<\/Warning>","site":"vault","answers_cleaned":"    layout  docs page title  Run Vault Enterprise with many namespaces description       Guidance for using thousands of namespaces with Vault Enterprise        Run Vault Enterprise with many namespaces  Use namespaces to create isolated environments within Vault Enterprise  By default  Vault limits the number 
and depth of namespaces based on your storage configuration  The information below provides guidance on how to modify your namespace limits and what expect when operating a Vault cluster with 7000  namespaces      Default namespace limits   include  namespace limits mdx      How to modify your namespace limit   include  storage entry size mdx      Performance considerations  Running Vault with thousands of namespaces can have operational impacts on a cluster  Below are some performance considerations to take into account before using thousands of namespaces   It is   not   recommended to use thousands of namespaces with any version of Vault lower than 1 13 9  1 14 5  or 1 15 0  Improvements were released in those versions which can improve the reliability of Raft heartbeats when using many namespaces    Note title  Testing parameters      The aggregated performance data below assumes a 3 node Vault cluster running   on N2 standard VMs with Google Kubernetes Engine  default mounts  and   integrated storage  The results average metrics from multiple  n2 standard 16    and  n2 standard 32  VMs with a varying number of namespaces     Note       Unseal times  Vault sets up and initializes every mount after an unseal event  At minimum  the initialization process includes the default mounts for all active namespaces   sys    identity    cubbyhole   and  token     The more namespaces and custom mounts in the deployment  the longer the post unseal initialization takes  As a result    even with auto unseal    Vault will be unresponsive during initialization for deployments with many namespaces   Post unseal times observed during testing     Number of namespaces   Unseal initialization time                                                   10                      5 seconds            10000                   2 3 minutes          20000                   12 14 minutes        30000                   33 36 minutes            Cluster leadership transfer times  Vault high 
availability clusters have a leader  also known as an active node  which is the server that accepts writes to the cluster and replicates the written data to the follower nodes  If the leader crashes or needs to be removed from the cluster  one of the follower nodes must take over leadership  This is known as a leadership transfer   Whenever a leadership transfer happens  the new active node must go through all of the mounts in the cluster and set them up before the node can be ready to be the leader  Because every namespace has at least 4 mounts   sys    identity    cubbyhole   and  token    the time for a leadership transfer to complete will increase with the number of namespaces   Leadership transfer times observed for the   vault operator step down    vault docs commands operator step down  command     Number of namespaces   Time until a node is elected as leader                                                                       10                      2 seconds                                 10000                   33 45 seconds                             20000                   1 2 minutes                               30000                   4 minutes                                   System requirements      Minimum memory requirements  Each namespace requires at least 435 KB of memory to store information about the paths available within the namespace  Given  N  namespaces  your Vault deployment must include at least  435 x N  KB memory for namespace support to avoid degraded performance       Rollback and rotation worker requirements  Sometimes  Vault secret and auth engines need to clean up data after a request is canceled or a request fails halfway through  Vault issues rollback operations every minute to each mount in order to periodically trigger the clean up process   By default  Vault uses 256 workers to perform rollback operations  Mounts with a large number of namespaces can become bottlenecks that slow down the overall rollback process  The 
effects of the slowdown vary based on the particular mounts  At minimum  your Vault deployment will take longer to fully purge stale data and periodic rotations may happen less frequently than intended   You can tell whether the number of rollback workers is sufficient by monitoring the following metrics     Expected range   Metric                                                                                                                                                                                                                     0   256            vault rollback queued    vault docs internals telemetry metrics core system rollback metrics       0   60000          vault rollback waiting    vault docs internals telemetry metrics core system rollback metrics        Identity secret engine warnings  When using OIDC with many namespaces  you may see warnings in your Vault logs from the  identity  secret mount under the  root  namespace  For example      text 2023 10 24T15 47 56 594Z  WARN   secrets identity identity 51eb2411  error expiring OIDC public keys  err  context deadline exceeded  2023 10 24T15 47 56 594Z  WARN   secrets identity identity 51eb2411  error rotating OIDC keys  err  context deadline exceeded       The  secrets identity  warnings occur because the root namespace is responsible for rotating the  OIDC keys   vault docs secrets identity oidc provider  of all other namespaces    Warning title  Avoid OIDC with many namespaces      Using Vault as an  OIDC provider   vault docs concepts oidc provider  with   many namespaces can severely delay the rotation and invalidation of OIDC keys     Warning "}
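The minimum-memory rule quoted in the record above (at least 435 KB per namespace) lends itself to a quick capacity check. A minimal sketch, assuming only the 435 KB-per-namespace figure from the docs (the helper names are ours, not part of Vault):

```python
# Back-of-the-envelope sizing for namespace path metadata, based on the
# "at least 435 KB per namespace" figure quoted in the Vault docs above.
PER_NAMESPACE_KB = 435

def min_namespace_memory_kb(n_namespaces: int) -> int:
    """Minimum memory (KB) Vault needs just to hold namespace path data."""
    return PER_NAMESPACE_KB * n_namespaces

def min_namespace_memory_gib(n_namespaces: int) -> float:
    """Same figure expressed in GiB (1 GiB = 1024 * 1024 KB)."""
    return min_namespace_memory_kb(n_namespaces) / (1024 * 1024)

if __name__ == "__main__":
    for n in (10, 10_000, 30_000):
        print(f"{n:>6} namespaces -> at least {min_namespace_memory_gib(n):.2f} GiB")
```

This is only the floor for namespace path data; real deployments need additional headroom for mounts, caches, and request handling.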
{"questions":"vault Vault Enterprise has support for Namespaces a feature to enable Secure Vault Enterprise namespaces EnterpriseAlert product vault inline true layout docs Multi tenancy SMT and self management page title Namespaces Vault Enterprise","answers":"---\nlayout: docs\npage_title: Namespaces - Vault Enterprise\ndescription: >-\n  Vault Enterprise has support for Namespaces, a feature to enable Secure\n  Multi-tenancy (SMT) and self-management.\n---\n\n# Vault Enterprise namespaces <EnterpriseAlert product=vault inline=true \/>\n\nMany organizations implement Vault as a service to provide centralized\nmanagement of sensitive data and ensure that the different teams in an\norganization operate within isolated environments known as **tenants**.\n\nMulti-tenant environments have the following implementation challenges:\n\n- **Tenant isolation**. Teams within a Vault as a Service (VaaS)\n  environment require strong isolation for their policies, secrets, and\n  identities. Tenant isolation may also be required due to organizational\n  security and privacy requirements or to address compliance regulations like\n  [GDPR](https:\/\/gdpr.eu).\n- **Long-term management**. Tenants typically have different policies and teams\n  request changes to their tenants at different rates. As a result, managing a\n  multi-tenant environment can become difficult for a single team as the number\n  of tenants within the organization grows.\n\nNamespaces support secure multi-tenancy (**SMT**) within a single Vault\nEnterprise instance with tenant isolation and administration delegation so Vault\nadministrators can empower delegates to manage their own tenant environment.\n\nWhen you create a namespace, you establish an isolated environment with separate\nlogin paths that functions as a mini-Vault instance within your Vault\ninstallation. 
Users can then create and manage their sensitive data within the\nconfines of that namespace, including:\n\n- secret engines\n- authentication methods\n- ACL, EGP, and RGP policies\n- password policies\n- entities\n- identity groups\n- tokens\n\n<Tip>\n\n  Namespaces are isolated environments, but Vault administrators can still share\n  and enforce global policies across namespaces with the\n  [group-policy-application](\/vault\/api-docs\/system\/config-group-policy-application)\n  endpoint of the Vault API.\n\n<\/Tip>\n\n## Namespace naming restrictions\n\nValid Vault namespace names:\n\n- **CANNOT** end with `\/`\n- **CANNOT** contain spaces\n- **CANNOT** be one of the following reserved strings:\n  - `root`\n  - `sys`\n  - `audit`\n  - `auth`\n  - `cubbyhole`\n  - `identity`\n\nRefer to the [Namespace limits section](\/vault\/docs\/internals\/limits#namespace-limits)\nof [Vault limits and maximums](\/vault\/docs\/internals\/limits) for storage limits\nrelated to managing namespaces.\n\n<Tip title=\"Related reading\">\n\n  Read the\n  [Vault namespace and mount structuring](\/vault\/tutorials\/enterprise\/namespace-structure)\n  tutorial for best practices and recommendations for structuring your namespaces.\n\n<\/Tip>\n\n## Child namespaces\n\nA **child namespace** is any namespace that exists entirely within the scope of\nanother namespace. The containing namespace is the **parent namespace**. For \nexample, given the namespace path `A\/B\/C`:\n\n- `A` is the top-most namespace and exists under the root namespace for the\n  Vault instance.\n- `B` is a child namespace of `A` and the parent namespace of `C`.\n- `C` is a child namespace of `B` and the grandchild namespace of `A`.\n\nChildren can inherit elements from their parent namespaces. For example,\npolicies for a child namespace might reference entities or groups from the parent\nnamespace. Parent namespaces can also **assert** policies on identities within\na child namespace. 
\n\nVault administrators can configure the desired inheritance behavior with the\n[group-policy-application](\/vault\/api-docs\/system\/config-group-policy-application)\nendpoint of the Vault API.\n \n## Delegation and administrative namespaces\n\nVault system administrators can assign administration rights to delegate\nadmins to allow teams to self-manage their namespace. In addition to basic\nmanagement, delegate admins can create child namespaces and assign admin rights\nto subordinate delegate admins.\n\nAdditionally,\n[administrative namespaces](\/vault\/docs\/enterprise\/namespaces\/create-admin-namespace)\nlet Vault administrators grant access to a\n[predefined subset of privileged endpoints](#privileged-endpoints) by setting\nthe relevant namespace parameters in their Vault configuration file.\n\n## Vault API and namespaces\n\nUsers can perform API operations under a specific namespace by setting the\n`X-Vault-Namespace` header to the absolute or relative namespace path. Relative\nnamespace paths are assumed to be child namespaces of the calling namespace.\nYou can also provide an absolute namespace path without using the\n`X-Vault-Namespace` header.\n\nVault constructs the fully qualified namespace path based on the calling\nnamespace and the `X-Vault-Namespace` header to route the request to the\nappropriate namespace. For example, the following requests all route to the\n`ns1\/ns2\/secret\/foo` namespace:\n\n1. Path: `ns1\/ns2\/secret\/foo`\n2. Path: `secret\/foo`, Header: `X-Vault-Namespace: ns1\/ns2\/`\n3. Path: `ns2\/secret\/foo`, Header: `X-Vault-Namespace: ns1\/`\n\n<Tip title=\"Vault Enterprise has a namespaces API\">\n\n  Use the [\/sys\/namespaces](\/vault\/api-docs\/system\/namespaces) API or\n  [`namespace`](\/vault\/docs\/commands\/namespace) CLI command to manage\n  your namespaces.\n\n<\/Tip>\n\n## Restricted API paths\n\nThe Vault API includes system backend endpoints, which are mounted under the\n`sys\/` path. 
System endpoints let you interact with the internal features of\nyour Vault instance.\n\nBy default, Vault allows non-root calls to the less-sensitive system backend\nendpoints. But, for security reasons, Vault restricts access to some of the\nsystem backend endpoints to calls from the root namespace or calls that use a\ntoken in the root namespace with elevated permissions.\n\nRather than granting access to the full set of  privileged `sys\/` paths, Vault\nadministrators can also grant access to a predefined subset of the restricted\nendpoints with an administrative namespace.\n\n@include 'api\/restricted-endpoints.mdx'\n\n## Learn more\n\nRefer to the following tutorials to learn more about Vault namespaces:\n\n- [Secure Multi-Tenancy with Namespaces](\/vault\/tutorials\/enterprise\/namespaces)\n- [Secrets Management Across Namespaces without Hierarchical\n  Relationship](\/vault\/tutorials\/enterprise\/namespaces-secrets-sharing)\n- [Vault Namespace and Mount Structuring\n  Guide](\/vault\/tutorials\/enterprise\/namespace-structure)\n- [HCP Vault Dedicated namespace\n  considerations](\/vault\/tutorials\/cloud-ops\/hcp-vault-namespace-considerations)\n- [Using many Namespaces](\/vault\/docs\/enterprise\/namespaces\/namespace-limits)","site":"vault","answers_cleaned":"    layout  docs page title  Namespaces   Vault Enterprise description       Vault Enterprise has support for Namespaces  a feature to enable Secure   Multi tenancy  SMT  and self management         Vault Enterprise namespaces  EnterpriseAlert product vault inline true     Many organizations implement Vault as a service to provide centralized management of sensitive data and ensure that the different teams in an organization operate within isolated environments known as   tenants     Multi tenant environments have the following implementation challenges       Tenant isolation    Teams within a Vault as a Service  VaaS    environment require strong isolation for their policies  secrets  and   
identities  Tenant isolation may also be required due to organizational   security and privacy requirements or to address compliance regulations like    GDPR  https   gdpr eu       Long term management    Tenants typically have different policies and teams   request changes to their tenants at different rates  As a result  managing a   multi tenant environment can become difficult for a single team as the number   of tenants within the organization grows   Namespaces support secure multi tenancy    SMT    within a single Vault Enterprise instance with tenant isolation and administration delegation so Vault administrators can empower delegates to manage their own tenant environment   When you create a namespace  you establish an isolated environment with separate login paths that functions as a mini Vault instance within your Vault installation  Users can then create and manage their sensitive data within the confines of that namespace  including     secret engines   authentication methods   ACL  EGP  and RGP policies   password policies   entities   identity groups   tokens   Tip     Namespaces are isolated environments  but Vault administrators can still share   and enforce global policies across namespaces with the    group policy application   vault api docs system config group policy application    endpoint of the Vault API     Tip      Namespace naming restrictions  Valid Vault namespace names       CANNOT   end with         CANNOT   contain spaces     CANNOT   be one of the following reserved strings       root       sys       audit       auth       cubbyhole       identity   Refer to the  Namespace limits section   vault docs internals limits namespace limits  of  Vault limits and maximums   vault docs internals limits  for storage limits related to managing namespaces    Tip title  Related reading      Read the    Vault namespace and mount structuring   vault tutorials enterprise namespace structure    tutorial for best practices and recommendations for 
structuring your namespaces     Tip      Child namespaces  A   child namespace   is any namespace that exists entirely within the scope of another namespace  The containing namespace is the   parent namespace    For  example  given the namespace path  A B C       A  is the top most namespace and exists under the root namespace for the   Vault instance     B  is a child namespace of  A  and the parent namespace of  C      C  is a child namespace of  B  and the grandchild namespace of  A    Children can inherit elements from their parent namespaces  For example  policies for a child namespace might reference entities or groups from the parent namespace  Parent namespaces can also   assert   policies on identities within a child namespace    Vault administrators can configure the desired inheritance behavior with the  group policy application   vault api docs system config group policy application  endpoint of the Vault API       Delegation and administrative namespaces  Vault system administrators can assign administration rights to delegate admins to allow teams to self manage their namespace  In addition to basic management  delegate admins can create child namespaces and assign admin rights to subordinate delegate admins   Additionally   administrative namespaces   vault docs enterprise namespaces create admin namespace  let Vault administrators grant access to a  predefined subset of privileged endpoints   privileged endpoints  by setting the relevant namespace parameters in their Vault configuration file      Vault API and namespaces  Users can perform API operations under a specific namespace by setting the  X Vault Namespace  header to the absolute or relative namespace path  Relative namespace paths are assumed to be child namespaces of the calling namespace  You can also provide an absolute namespace path without using the  X Vault Namespace  header   Vault constructs the fully qualified namespace path based on the calling namespace and the  X Vault  header 
to route the request to the appropriate namespace  For example  the following requests all route to the  ns1 ns2 secret foo  namespace   1  Path   ns1 ns2 secret foo  2  Path   secret foo   Header   X Vault Namespace  ns1 ns2   3  Path   ns2 secret foo   Header   X Vault Namespace  ns1     Tip title  Vault Enterprise has a namespaces API      Use the   sys namespaces   vault api docs system namespaces  API or     namespace    vault docs commands namespace  CLI command to manage   your namespaces     Tip      Restricted API paths  The Vault API includes system backend endpoints  which are mounted under the  sys   path  System endpoints let you interact with the internal features of your Vault instance   By default  Vault allows non root calls to the less sensitive system backend endpoints  But  for security reasons  Vault restricts access to some of the system backend endpoints to calls from the root namespace or calls that use a token in the root namespace with elevated permissions   Rather than granting access to the full set of  privileged  sys   paths  Vault administrators can also grant access to a predefined subset of the restricted endpoints with an administrative namespace    include  api restricted endpoints mdx      Learn more  Refer to the following tutorials to learn more about Vault namespaces      Secure Multi Tenancy with Namespaces   vault tutorials enterprise namespaces     Secrets Management Across Namespaces without Hierarchical   Relationship   vault tutorials enterprise namespaces secrets sharing     Vault Namespace and Mount Structuring   Guide   vault tutorials enterprise namespace structure     HCP Vault Dedicated namespace   considerations   vault tutorials cloud ops hcp vault namespace considerations     Using many Namespaces   vault docs enterprise namespaces namespace limits "}
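The routing rule in the record above (calling namespace, then the `X-Vault-Namespace` header, then the request path) can be modeled as simple path concatenation. A toy sketch of that resolution, checked against the doc's own three equivalent requests (the function name is ours; real Vault additionally validates that each segment names an existing namespace):

```python
def resolve_namespace_path(path: str, header_ns: str = "", calling_ns: str = "") -> str:
    """Toy model of how Vault builds the fully qualified path:
    calling namespace, then the X-Vault-Namespace header, then the request path.
    Relative header values are treated as children of the calling namespace."""
    parts = [p.strip("/") for p in (calling_ns, header_ns, path)]
    return "/".join(p for p in parts if p)

# The three equivalent requests from the doc, issued from the root namespace:
assert resolve_namespace_path("ns1/ns2/secret/foo") == "ns1/ns2/secret/foo"
assert resolve_namespace_path("secret/foo", header_ns="ns1/ns2/") == "ns1/ns2/secret/foo"
assert resolve_namespace_path("ns2/secret/foo", header_ns="ns1/") == "ns1/ns2/secret/foo"
```

The same concatenation explains why a request issued *from inside* a namespace (non-empty `calling_ns`) only ever reaches that namespace's own subtree.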
{"questions":"vault Explains HashiCorp s recommended approach to structuring the Vault namespaces and how namespaces impact on the endpoint paths page title Namespace and mount structure guide Namespaces are isolated environments that functionally create Vaults within a Namespace and mount structure guide layout docs","answers":"---\nlayout: docs\npage_title: Namespace and mount structure guide\ndescription: >-\n  Explains HashiCorp's recommended approach to structuring the Vault namespaces, and how namespaces impact the endpoint paths.\n---\n\n# Namespace and mount structure guide\n\nNamespaces are isolated environments that functionally create \"Vaults within a\nVault.\" They have separate login paths, and support creating and managing data\nisolated to their namespace. This functionality enables you to provide Vault as\na service to tenants.\n\n![Conceptual diagram for namespace usages](\/img\/diagram_namespaces-for-org.png)\n\nThis guide provides a recommended approach to structuring Vault namespaces and\nmount paths, as well as some guidance on how to make decisions about\nnamespace and path structuring, given your organizational structure and use\ncases.\n\n### Why is this topic important?\n\nEverything in Vault is path-based. Each path corresponds to an operation or\nsecret in Vault, and the Vault API endpoints map to these paths; therefore,\nwriting policies configures the permitted operations on specific secret paths.\nFor example, to grant access to manage tokens in the root namespace, the policy\npath is `auth\/token\/*`. To manage tokens for the _education_ namespace, the\nfully-qualified path functionally becomes `education\/auth\/token\/*`.\n\nThe following diagram demonstrates the API paths based on where the auth method\nand secrets engines are enabled. \n\n![Namespaces and mount paths](\/img\/diagram_namespaces_paths.png)\n\nYou can isolate secrets using namespaces or mounts dedicated to each Vault\nclient. 
For example, you can create a namespace for each isolated tenant and\nmake them responsible for managing the resources under their namespace.\nAlternatively, you can mount a dedicated secrets engine at a path dedicated to\neach team within the organization. \n\n![Namespaces best practices](\/img\/diagram_namespaces_intro.png)\n\nHow you isolate the secrets determines who is responsible for\nmanaging those secrets and, more importantly, the policies related to those secrets.  \n\n\n<Note>\n\nThe creation of namespaces should be performed by a user with a highly\nprivileged token such as `root` to set up isolated environments for each\norganization, team, or application.\n\n<\/Note>\n\n## Deployment considerations\n\nTo plan and design the Vault namespaces, auth method paths and secrets engine\npaths, you need to consider how to best structure Vault's logical objects for\nyour organization. \n\n\n<table>\n  <thead>\n    <tr>\n      <th>Requirements<\/th>\n      <th>What to consider<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n    <tr>\n      <td>Organizational structure<\/td>\n      <td>\n        <ul>\n          <li>What is your organizational structure?<\/li>\n          <li>What is the level of granularity across lines of businesses (LOBs), divisions, teams, services, apps that needs to be reflected in Vault's end-state design?<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>Self-service requirements<\/td>\n      <td>\n        <ul>\n          <li>Given your organizational structure, what is the desired level of <strong>self-service<\/strong> required?<\/li>\n          <li>How are Vault policies to be managed?<\/li>\n          <li>Will teams need to directly manage policies for their own scope of responsibility?<\/li>\n          <li>Or, will they be interacting with Vault via some abstraction layer where policies and patterns will be templatized? 
For example, configuration by code, Git flows, the Terraform Vault provider, custom onboarding layers, or some combination of these.<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>Audit requirements<\/td>\n      <td>\n        <ul>\n          <li>What are the requirements around auditing usage of Vault within your organization?<\/li>\n          <li>Is there a need to regularly certify access to secrets?<\/li>\n          <li>Is there a need to review and\/or decommission stale secrets or auth roles?<\/li>\n          <li>Is there a need to determine chargeback amounts to internal customers?<\/li>\n        <\/ul>\n      <\/td>\n    <\/tr>\n    <tr>\n      <td>Secrets engine requirements<\/td>\n      <td>\n        What types of secrets engines will you use (KV, database, AD, PKI, etc.)? <br \/><br \/>\n        For large organizations, each of these might require different structuring patterns. For example, with KV secrets engine, each team might have their own dedicated KV mount. However, for AD secrets engine, this is inherently a <em>shared<\/em> type of mount so you would manage access at a role level, rather than having multiple mounts that share the same connection configuration.\n      <\/td>\n    <\/tr>\n  <\/tbody>\n<\/table>\n\n\n## Chroot namespace\n\n<Note title=\"Vault version\">\n\nTo use the chroot listener feature, you must run **Vault Enterprise 1.15** or\nlater.\n\n<\/Note>\n\nVault clients (users, applications, etc.) must be aware of which namespace to\nsend requests, and set the target namespace using `-namespace` flag,\n`X-Vault-Namespace` HTTP header, or `VAULT_NAMESPACE` environment variable. If\nthe target namespace is not properly set, the request will fail. This can be\ncumbersome. 
\n\nTo simplify this, Vault operators can add an additional `listener` stanza to the\nconfiguration file that defines `chroot_namespace` to set an alternate\ntop-level namespace.\n\n**Example:** \n\n<CodeBlockConfig filename=\"vault-config.hcl\" highlight=\"17-25\">\n\n```hcl\nui            = true\ncluster_addr  = \"https:\/\/127.0.0.1:8201\"\napi_addr      = \"https:\/\/127.0.0.1:8200\"\ndisable_mlock = true\n\nstorage \"raft\" {\n  path = \"\/path\/to\/raft\/data\"\n  node_id = \"raft_node_1\"\n}\n\nlistener \"tcp\" {\n  address       = \"127.0.0.1:8200\"\n  tls_cert_file = \"\/path\/to\/full-chain.pem\"\n  tls_key_file  = \"\/path\/to\/private-key.pem\"\n}\n\nlistener \"tcp\" {\n   address          = \"127.0.0.1:8300\"\n   chroot_namespace = \"usa-hq\"\n   tls_cert_file = \"\/path\/to\/full-chain.pem\"\n   tls_key_file  = \"\/path\/to\/private-key.pem\"\n   telemetry {\n      unauthenticated_metrics_access = true\n  }\n}\n\ntelemetry {\n  statsite_address = \"127.0.0.1:8125\"\n  disable_hostname = true\n}\n```\n\n<\/CodeBlockConfig>\n\nThe `chroot_namespace` specifies an alternate top-level namespace for the\nlistener, `https:\/\/127.0.0.1:8300`.\n\n**Example request:**\n\n```shell-session\n$ curl --header \"X-Vault-Namespace: team_1\" \\\n   --header \"X-Vault-Token: $VAULT_TOKEN\" \\\n   --request POST \\\n   --data '{\"type\": \"kv-v2\"}' \\\n   https:\/\/127.0.0.1:8300\/v1\/sys\/mounts\/team-secret\n```\n\nThe request operates on the `usa-hq\/team_1` namespace since the top-level\nnamespace is set to `usa-hq` for the listener address, `127.0.0.1:8300`.\n\nThe top-level namespace for `https:\/\/127.0.0.1:8200` is `root`. 
\n\n\n## General guidance\n\nThe following principles should be used to guide an appropriate namespace or\nmount path structure.\n\n- [Use namespaces sparingly](#use-namespaces-sparingly)\n- [Leverage Vault identities](#leverage-vault-identities)\n- [Understand Vault's mount points](#understand-vault-s-mount-points)\n- [Granularity of paths](#granularity-of-paths)\n- [Standardized onboarding process](#standardized-onboarding-process)\n\n### Use namespaces sparingly\n\nThe primary purpose of namespaces is to delineate administrative boundaries.\nThe main determining factor for **encapsulating an organizational unit** into\nits own namespace is the need for that unit to be able to directly manage\npolicies. However, many organizations may find their deployment requirements are\nmore nuanced, especially if they want to enable \"self-service\" for their\nconsumers of Vault.\n\nWhen setting up Vault to be self-service, you should first ask what\n\"self-service\" actually means to your organization.\n\n- Will teams be managing Vault directly?\n- Will there be an onboarding process\/layer that teams interact with?\n\nWhen possible, HashiCorp recommends providing the self-service capability by\nimplementing an onboarding layer rather than directly through Vault. The\nonboarding layer can enforce a standard naming convention, secrets path\nstructure, and templated policies. In this case, the administrative boundary\nis at the onboarding layer and not at the organizational unit level. As such,\nthis use case should not require a separate namespace for the team.\n\n![Namespaces best practices](\/img\/diagram_namespaces_bp.png)\n\nHowever, these teams may roll up to a specific platform team or LOB for which\nthe policy structuring, authentication methods, and secrets use cases are common\nacross all teams within that LOB. 
Here, it makes sense that the higher-level\norganizational unit has its own namespace.\n\nAdditionally, in many cases, most of the desired level of isolation can be\nenforced via ACL policies.\n\nThe entire list of namespaces must fit into a single storage entry in Vault, and\neach namespace creates at least two secrets engines which also require storage\nspace. Namespace planning should include a review of the maximum number of\nnamespaces allowed by the storage entry size.\n\n### Leverage Vault identities\n\nIt's also critical to understand identity in order to make use of [Vault ACL\ntemplates](\/vault\/docs\/concepts\/policies#templated-policies),\nwhich can ease policy management.\n\nVault provides an internal layer of identity management that can be used to map\nentities to multiple auth methods as well as provide grouping capabilities. This\nallows for more robust policy assignment options.\n\n<Tip>\n\nVisit the [identity alias name\ntable](\/vault\/docs\/secrets\/identity#mount-bound-aliases) documentation page to\nlearn about constructing templated ACL policies.\n\n<\/Tip>\n\nThe entity aliases, based on specific information available from the auth\nmethod, map to identity entities that you create. You can use the default names\nand associated metadata that are created for aliases and entities as part of\npolicy templates and when deciding on naming conventions for secrets paths\/roles.\nThis allows you to avoid having hard-coded policies for use cases that follow a\ncertain pattern broadly.\n\nYou can define identity groups to associate entities that should have\npermissions in common, and reference those groups in policy templates just as\nyou can entities and aliases. 
These groups may also be created automatically for\nyou, depending on the auth methods used.\n\n**ACL policy template example:**\n\n<CodeBlockConfig lineNumbers>\n\n```hcl\npath \"kvv1-{{identity.entity.metadata.team_name}}\/*\" {\n   capabilities = [ \"create\", \"read\", \"update\", \"delete\", \"list\" ]\n}\n\npath \"transit\/encrypt\/{{identity.entity.aliases.<approle_mount_accessor>.name}}\" {\n   capabilities = [ \"update\" ]\n}\n```\n\n<\/CodeBlockConfig>\n\nThose templated values get resolved dynamically based on the requester's entity\ntoken metadata. \n\nAt line 1, the `{{identity.entity.metadata.team_name}}` value retrieves the\n`team_name` value set on the entity's metadata. Similarly, the\n`{{identity.entity.aliases.<approle_mount_accessor>.name}}` value at line 5 returns\nthe Role ID of the requesting client, where `<approle_mount_accessor>` is the mount\naccessor of your AppRole auth method. This enables your policies to be less\nstatic.\n\n<Note>\n\n The number of identity entities is how Vault determines the number\nof active clients for reporting and licensing purposes. Refer to the [Client\nCount](\/vault\/docs\/concepts\/client-count) documentation for\nmore detail. \n\n<\/Note>\n\n<Tip>\n\nIf you are not familiar with templated policies, read the [ACL Policy Path\nTemplating](\/vault\/tutorials\/policies\/policy-templating) tutorial.\n\n<\/Tip>\n\n### Understand Vault's mount points\n\nAuth methods and secrets engines can be categorized into two types:\n\n1. **Dedicated:** Auth methods and secrets engines that can be managed and\nmapped directly to a specific organizational unit. For example, the team that\nmanages `app-1` can utilize their own AppRole and\/or KV mounts without the\nability to impact other teams' mounts.\n\n1. **Shared:** Organization-level resources, such as the Kubernetes auth method\nand the Active Directory (AD) secrets engine, that are **shared** and managed at\nthe company level and therefore mounted at the company-level namespace.\n\nIt's important to understand [Vault's sizing\nrestrictions](\/vault\/docs\/internals\/limits) for mounts. All\nsecrets engine and auth method mount points must each fit within a single storage\nentry. For Consul, the storage limit is 512KB. 
For Integrated Storage the limit\nis 1MB. \n\nEach JSON object describing a mount uses ~500 bytes, but in compressed form\nit's ~75 bytes. Since auth mounts, secrets engine mount points, local-only auth\nmethods, and local-only secrets engine mounts are stored separately, the limit\napplies to each independently. \n\nBy default, each namespace is created with a token auth mount (`\/auth\/`), an\nidentity mount (`\/identity\/`), and a system mount (`\/sys\/`). This means that each\nnamespace requires three mounts before you add any custom mounts. Multiplied\nacross thousands of namespaces, your mount tables grow quickly.\n\n### Granularity of paths \n\nWhen thinking about Vault's logical structure, you want to find the right\nbalance of granularity between the various mounts needed and the roles defined\nwithin the mounts. Sharing mounts between teams has benefits and risks, and it\nis up to you to strike that balance. Below are a couple of use cases with their\nbenefits and risks. \n\nYou create a single KV mount with a sub-path for every team within the same\nmount.  \n\n- **Benefit:** reduces the potential of hitting mount table limits.  \n- **Risk:** if the KV mount is accidentally deleted, all users of that secrets\nengine are impacted.\n\nYou create a unique mount per LOB.  \n\n- **Benefit:** can provide sub-paths for different teams and limit the \nblast-radius of an errant change to a single mount.  \n- **Risk:** unique KV mounts per team become inefficient from a mount\nmanagement perspective.\n\n![Compare a single mount vs. multiple mounts](\/img\/diagram_namespaces_kv-paths.png)\n\n### Standardized onboarding process\n\nWhen deploying Vault at scale, considering the consumer experience is critical\nto Vault adoption. Specifically, it's important to reduce the level of\nfriction of consuming Vault. 
While it may be quick to drop Vault into an\nenvironment and interact with it directly, it's important to deliberately map\nout how consumers will onboard to Vault and consume the service.\n\nOne of the pillars behind the Tao of HashiCorp is automation through\ncodification. Many HashiCorp users use Terraform for managing\ninfrastructure on-premises and in the cloud. Terraform can also be used to [codify\nVault](\/vault\/tutorials\/operations\/codify-mgmt-enterprise)\nconfiguration tasks such as the creation of namespaces, policies, and mounts. This\nallows Vault operators to increase their productivity, move more quickly, promote\nrepeatable processes, and reduce human error.\n\n## Tutorials\n\nTo learn more, review the following tutorials:\n\n- [Secure multi-tenancy with namespaces](\/vault\/tutorials\/enterprise\/namespaces)\n- [Vault recommended patterns](\/vault\/tutorials\/recommended-patterns)\n- [Vault standard operating procedures](\/vault\/tutorials\/standard-procedures)","site":"vault"}
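The Terraform-based codification recommended in the standardized onboarding section can be sketched with the Terraform Vault provider. This is a minimal, illustrative sketch rather than part of the original guide: `vault_namespace`, `vault_mount`, and `vault_policy` are real provider resources, while the team name, mount path, and policy body are assumptions.

```hcl
# Hypothetical onboarding sketch: one team namespace, a dedicated
# KV v2 mount inside it, and a read-only policy for that mount.
# Names and paths are illustrative assumptions.

resource "vault_namespace" "team" {
  path = "team-a"
}

resource "vault_mount" "team_kv" {
  namespace = vault_namespace.team.path
  path      = "kv-team-a"
  type      = "kv"
  options   = { version = "2" } # KV version 2
}

resource "vault_policy" "team_read" {
  namespace = vault_namespace.team.path
  name      = "team-a-read-only"
  policy    = <<-EOT
    path "kv-team-a/data/*" {
      capabilities = ["read", "list"]
    }
  EOT
}
```

Applying a module like this per onboarded team keeps namespace, mount, and policy creation repeatable and reviewable instead of relying on ad-hoc CLI changes.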
{"questions":"vault page title Configure an administrative namespace Create an administrative namespace EnterpriseAlert product vault inline true Enterprise layout docs Step by step guide for setting up an administrative namespace with Vault","answers":"---\nlayout: docs\npage_title: Configure an administrative namespace\ndescription: >-\n  Step-by-step guide for setting up an administrative namespace with Vault\n  Enterprise\n---\n\n# Create an administrative namespace <EnterpriseAlert product=vault inline=true \/>\n\nGrant access to a predefined subset of privileged system backend endpoints in\nthe Vault API with an administrative namespace.\n\n<Tip title=\"HCP Vault Dedicated has a built-in administrative namespace\">\n\n  HCP Vault Dedicated clusters include an administrative namespace (`admin`) by default.\n  For more information on managing namespaces with HCP Vault Dedicated, refer to the\n  [HCP Vault Dedicated namespace considerations](\/vault\/tutorials\/cloud-ops\/hcp-vault-namespace-considerations)\n  guide.\n\n<\/Tip>\n\n## Before you start\n\n- **You must have Vault Enterprise 1.15+ installed and running**.\n- **You must have access to your Vault configuration file**.\n- **You must have permission to create and manage namespaces for your Vault instance**.\n\n## Step 1: Create your namespace\n\nUse the `namespace create` CLI command to create a new namespace: \n\n```shell-session\n$ vault namespace create YOUR_NAMESPACE_NAME\n```\n\nFor example, to create a namespace called \"ns_admin\" under the root namespace:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault namespace create ns_admin\n```\n\n<\/CodeBlockConfig>\n\n## Step 2: Give the namespace admin permission\n\nTo create an administrative namespace, set the `administrative_namespace_path`\nparameter in your Vault configuration with the absolute path of your new\nnamespace. We recommend setting the namespace path with the other string\nassignments in your configuration file. 
For example: \n\n<CodeBlockConfig highlight=\"3\">\n\n```hcl \nui = true\napi_addr = \"https:\/\/127.0.0.1:8200\"\nadministrative_namespace_path = \"ns_admin\/\"\n```\n\n<\/CodeBlockConfig>\n\n## Step 3: Verify the new permissions\n\nTo verify permissions for the administrative namespace, compare API responses\nfrom a restricted endpoint from your new namespace and another namespace without\nelevated permissions.\n\n1. If you do not already have a namespace you can use for testing, create a test\n   namespace called \"ns_test\" with the `namespace create` CLI command:\n    ```shell-session\n    $ vault namespace create ns_test\n    ```\n1. Use the `monitor` CLI command to call the `\/sys\/monitor` endpoint from your\n   test namespace:\n    ```shell-session\n    $ env VAULT_NAMESPACE=\"ns_test\" vault monitor --log-level=debug\n    ```\n   You should see an unsupported path error:\n\n    <CodeBlockConfig hideClipboard>\n\n    ```shell-session\n    $ env VAULT_NAMESPACE=\"ns_test\" vault monitor --log-level=debug\n\n    Error starting monitor: Error making API request.\n    Namespace: ns_test\/\n    URL: GET http:\/\/127.0.0.1:8400\/v1\/sys\/monitor?log_format=standard&log_level=debug\n    Code: 404. Errors:\n    * 1 error occurred:\n      * unsupported path\n    ```\n\n    <\/CodeBlockConfig>\n\n1. 
Now use the `monitor` command to call the `sys\/monitor` endpoint from your\n   administrative namespace:\n    ```shell-session\n    $ env VAULT_NAMESPACE=\"ns_admin\" vault monitor --log-level=debug\n    ```\n   You should see log data from your Vault instance streaming to the terminal:\n\n    <CodeBlockConfig hideClipboard>\n\n    ```shell-session\n    $ env VAULT_NAMESPACE=\"ns_admin\" vault monitor --log-level=debug\n\n    2023-08-31T11:54:41.846+0200 [DEBUG] replication.index.perf: saved checkpoint: num_dirty=0\n    2023-08-31T11:54:41.961+0200 [DEBUG] replication.index.local: saved checkpoint: num_dirty=0\n    ```\n\n    <\/CodeBlockConfig>\n\n## Next steps\n\n- Follow the [Secure multi-tenancy with namespaces](\/vault\/tutorials\/enterprise\/namespaces)\n  tutorial to provide additional security and ensure teams can self-manage their\n  own environments.\n- Read more about [managing namespaces in Vault Enterprise](\/vault\/docs\/enterprise\/namespaces)","site":"vault"}
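To show where the `administrative_namespace_path` parameter from Step 2 sits in a fuller server configuration, here is an illustrative configuration-file sketch; the listener and storage values are assumptions borrowed from typical examples, not prescribed by this guide.

```hcl
# Illustrative Vault server configuration fragment (values are assumptions).
ui       = true
api_addr = "https://127.0.0.1:8200"

# Absolute path of the namespace created in Step 1; note the trailing slash.
administrative_namespace_path = "ns_admin/"

listener "tcp" {
  address       = "127.0.0.1:8200"
  tls_cert_file = "/path/to/full-chain.pem"
  tls_key_file  = "/path/to/private-key.pem"
}

storage "raft" {
  path    = "/path/to/raft/data"
  node_id = "raft_node_1"
}
```

Vault reads this parameter from the configuration file, so set it before starting (or restart after editing) the server.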
{"questions":"vault Using the sys config group policy application endpoint you can enable secrets sharing layout docs Set up cross namespace access without hierarchical relationships for Vault Enterprise page title Configure cross namespace access without hierarchical relationships Configure cross namespace access","answers":"---\nlayout: docs\npage_title: Configure cross namespace access without hierarchical relationships\ndescription: >-\n  Set up cross namespace access without hierarchical relationships for Vault Enterprise.\n---\n\n# Configure cross namespace access\n\nUsing the `sys\/config\/group_policy_application` endpoint, you can enable secrets sharing\nacross multiple independent namespaces.\n\nHistorically, any policies attached to an [identity group](\/vault\/docs\/concepts\/identity#identity-groups) would only apply when the\nVault token authorizing a request was created in the same namespace as that\ngroup, or a descendent namespace.\n\nThis endpoint reduces the operational overhead by relaxing this restriction.\nWhen the mode is set to the default, `within_namespace_hierarchy`, the\nhistorical behaviour is maintained. When set to `any`, group policies apply to\nall members of a group, regardless of what namespace the request token came\nfrom.\n\n## Prerequisites\n\n- Vault Enterprise 1.13 or later\n- Authentication method configured\n\n## Enable secrets sharing\n\n1. Verify the current setting.\n\n   ```shell-session\n   $ vault read sys\/config\/group-policy-application\n   Key                              Value\n   ---                              -----\n   group_policy_application_mode    within_namespace_hierarchy\n   ```\n\n   `within_namespace_hierarchy` is the default setting.\n\n1. 
Change the `group_policy_application_mode` setting to `any`.\n\n   ```shell-session\n   $ vault write sys\/config\/group-policy-application \\\n        group_policy_application_mode=\"any\"\n   ```\n\n   <CodeBlockConfig hideClipboard>\n\n   ```plaintext\n   Success! Data written to: sys\/config\/group-policy-application\n   ```\n\n   <\/CodeBlockConfig>\n\n   Policies can now be applied, and secrets shared, across namespaces without a\n   hierarchical relationship.\n\n## Example auth method configuration\n\nCross namespace access can be used with all auth methods for both machine and\nhuman based authentication. Examples of each are provided for reference.\n\n<Tabs>\n<Tab heading=\"Kubernetes\" group=\"kubernetes\">\n\n1. Create and run a script to configure the Kubernetes auth method, and two\n   namespaces.\n\n   ```plaintext\n   # Create new namespaces - they are peers\n   vault namespace create us-west-org\n   vault namespace create us-east-org\n\n   #--------------------------\n   # us-west-org namespace\n   #--------------------------\n   VAULT_NAMESPACE=us-west-org vault auth enable kubernetes\n   VAULT_NAMESPACE=us-west-org vault write auth\/kubernetes\/config out_of=scope\n   VAULT_NAMESPACE=us-west-org vault write auth\/kubernetes\/role\/cross-namespace-demo bound_service_account_names=\"mega-app\" bound_service_account_namespaces=\"client-nicecorp\" alias_name_source=\"serviceaccount_name\"\n\n   # Create an entity\n   VAULT_NAMESPACE=us-west-org vault auth list -format=json | jq -r '.[\"kubernetes\/\"].accessor' > accessor.txt\n   VAULT_NAMESPACE=us-west-org vault write -format=json identity\/entity name=\"entity-for-mega-app\" | jq -r \".data.id\" > entity_id.txt\n   VAULT_NAMESPACE=us-west-org vault write identity\/entity-alias name=\"client-nicecorp\/mega-app\" canonical_id=$(cat entity_id.txt) mount_accessor=$(cat accessor.txt)\n\n   #--------------------------\n   # us-east-org namespace\n   #--------------------------\n   
VAULT_NAMESPACE=us-east-org vault secrets enable -path=\"kv-marketing\" kv-v2\n   VAULT_NAMESPACE=us-east-org vault kv put kv-marketing\/campaign start_date=\"March 1, 2023\" end_date=\"March 31, 2023\" prize=\"Certification voucher\" quantity=\"100\"\n\n   # Create a policy to allow read access to kv-marketing\n   VAULT_NAMESPACE=us-east-org vault policy write marketing-read-only -<<EOF\n   path \"kv-marketing\/data\/campaign\" {\n      capabilities = [\"read\"]\n   }\n   EOF\n\n   # Create a group\n   VAULT_NAMESPACE=us-east-org vault write -format=json identity\/group name=\"campaign-admin\" policies=\"marketing-read-only\" member_entity_ids=$(cat entity_id.txt)\n   ```\n\n1. Authenticate to the `us-west-org` Vault namespace with a valid JWT.\n\n   ```shell-session\n   $ VAULT_NAMESPACE=us-west-org vault write -format=json auth\/kubernetes\/login role=cross-namespace-demo jwt=$(cat jwt.txt) | jq -r .auth.client_token > token.txt\n   ```\n\n1. Read a secret in the `us-east-org` namespace using the Vault token from\n   `us-west-org`.\n\n   ```shell-session\n   $ VAULT_NAMESPACE=us-east-org VAULT_TOKEN=$(cat token.txt) vault kv get kv-marketing\/campaign\n   ```\n\n<\/Tab>\n<Tab heading=\"Userpass\" group=\"userpass\">\n\n1. 
Create and run a script to configure the userpass auth method, and two\n   Vault namespaces.\n\n   ```plaintext\n   # Create new namespaces - they are peers\n   vault namespace create us-west-org\n   vault namespace create us-east-org\n\n   #--------------------------\n   # us-west-org namespace\n   #--------------------------\n   VAULT_NAMESPACE=us-west-org vault secrets enable -path=\"kv-customer-info\" kv-v2\n   VAULT_NAMESPACE=us-west-org vault kv put kv-customer-info\/customer-001 name=\"Example LLC\" contact_email=\"admin@example.com\"\n\n   # Create a policy to allow read access to kv-customer-info\n   VAULT_NAMESPACE=us-west-org vault policy write customer-info-read-only -<<EOF\n   path \"kv-customer-info\/data\/*\" {\n      capabilities = [\"read\"]\n   }\n   EOF\n   VAULT_NAMESPACE=us-west-org vault auth enable userpass\n   VAULT_NAMESPACE=us-west-org vault write auth\/userpass\/users\/tam-user password=\"my-long-password\" policies=customer-info-read-only\n\n   # Create an entity\n   VAULT_NAMESPACE=us-west-org vault auth list -format=json | jq -r '.[\"userpass\/\"].accessor' > accessor.txt\n   VAULT_NAMESPACE=us-west-org vault write -format=json identity\/entity name=\"TAM\" | jq -r \".data.id\" > entity_id.txt\n   VAULT_NAMESPACE=us-west-org vault write identity\/entity-alias name=\"tam-user\" canonical_id=$(cat entity_id.txt) mount_accessor=$(cat accessor.txt)\n\n   #--------------------------\n   # us-east-org namespace\n   #--------------------------\n   VAULT_NAMESPACE=us-east-org vault secrets enable -path=\"kv-marketing\" kv-v2\n   VAULT_NAMESPACE=us-east-org vault kv put kv-marketing\/campaign start_date=\"March 1, 2023\" end_date=\"March 31, 2023\" prize=\"Certification voucher\" quantity=\"100\"\n\n   # Create a policy to allow read access to kv-marketing\n   VAULT_NAMESPACE=us-east-org vault policy write marketing-read-only -<<EOF\n   path \"kv-marketing\/data\/campaign\" {\n      capabilities = [\"read\"]\n   }\n   EOF\n\n   # Create a group\n   
VAULT_NAMESPACE=us-east-org vault write -format=json identity\/group name=\"campaign-admin\" policies=\"marketing-read-only\" member_entity_ids=$(cat entity_id.txt)\n   ```\n\n1. Authenticate to the `us-west-org` Vault namespace with a valid user.\n\n   ```shell-session\n   $ VAULT_NAMESPACE=us-west-org vault login -field=token -method=userpass \\\n   username=tam-user password=\"my-long-password\" > token.txt\n   ```\n\n1. Read a secret in the `us-east-org` namespace using the Vault token from\n   `us-west-org`.\n\n   ```shell-session\n   $ VAULT_NAMESPACE=us-east-org VAULT_TOKEN=$(cat token.txt) \\\n   vault kv get kv-marketing\/campaign\n   ```\n\n<\/Tab>\n<\/Tabs>\n\n## API\n\n- [\/sys\/config\/group-policy-application](\/vault\/api-docs\/system\/config-group-policy-application)\n\n## Tutorial\n\n- [Secrets management across namespaces without hierarchical relationship](\/vault\/tutorials\/enterprise\/namespaces-secrets-sharing","site":"vault","answers_cleaned":"    layout  docs page title  Configure cross namespace access without hierarchical relationships description       Set up cross namespace access without hierarchical relationships for Vault Enterprise         Configure cross namespace access  Using the  sys config group policy application  endpoint  you can enable secrets sharing across multiple independent namespaces   Historically  any policies attached to an  identity group   vault docs concepts identity identity groups  would only apply when the Vault token authorizing a request was created in the same namespace as that group  or a descendent namespace   This endpoint reduces the operational overhead by relaxing this restriction  When the mode is set to the default   within namespace hierarchy   the historical behaviour is maintained  When set to  any   group policies apply to all members of a group  regardless of what namespace the request token came from      Prerequisites    Vault Enterprise 1 13 or later   Authentication method configured     
Enable secrets sharing  1  Verify the current setting         shell session      vault read sys config group policy application    Key                              Value                                              group policy application mode    within namespace hierarchy             within namespace hierarchy  is the default setting   1  Change the  group policy application mode  setting to  any          shell session      vault write sys config group policy application           group policy application mode  any              CodeBlockConfig hideClipboard         plaintext    Success  Data written to  sys config group policy application              CodeBlockConfig      Policies can now be applied  and secrets shared  across namespaces without a    hierarchical relationship      Example auth method configuration  Cross namespace access can be used with all auth methods for both machine and human based authentication  Examples of each are provided for reference    Tabs   Tab heading  Kubernetes  group  kubernetes    1  Create and run a script to configure the Kubernetes auth method  and two    namespaces         plaintext      Create new namespaces   they are peers    vault namespace create us west org    vault namespace create us east org                                      us west org namespace                                   VAULT NAMESPACE us west org vault auth enable kubernetes    VAULT NAMESPACE us west org vault write auth kubernetes config out of scope    VAULT NAMESPACE us west org vault write auth kubernetes role cross namespace demo bound service account names  mega app  bound service account namespaces  client nicecorp  alias name source  serviceaccount name        Create an entity    VAULT NAMESPACE us west org vault auth list  format json   jq  r     kubernetes    accessor    accessor txt    VAULT NAMESPACE us west org vault write  format json identity entity name  entity for mega app    jq  r   data id    entity id txt    VAULT NAMESPACE us 
west org vault write identity entity alias name  client nicecorp mega app  canonical id   cat entity id txt  mount accessor   cat accessor txt                                       us east org namespace                                   VAULT NAMESPACE us east org vault secrets enable  path  kv marketing  kv v2    VAULT NAMESPACE us east org vault kv put kv marketing campaign start date  March 1  2023  end date  March 31  2023  prise  Certification voucher  quantity  100        Create a policy to allow read access to kv marketing    VAULT NAMESPACE us east org vault policy write marketing read only    EOF    path  kv marketing data campaign          capabilities     read           EOF       Create a group    VAULT NAMESPACE us east org vault write  format json identity group name  campaign admin  policies  marketing read only  member entity ids   cat entity id txt          1  Authenticate to the  us west org  Vault namespace with a valid JWT         shell session      VAULT NAMESPACE us west org vault write  format json auth kubernetes login role cross namespace demo jwt   cat jwt txt    jq  r  auth client token   token txt         1  Read a secret in the  us east org  namespace using the Vault token from     us west org          shell session      VAULT NAMESPACE us east org VAULT TOKEN   cat token txt  vault kv get kv marketing campaign           Tab   Tab heading  Userpass  group  userpass    1  Create and run a script to configure the userpass auth method  and two    Vault namespaces         plaintext      Create new namespaces   they are peer    vault namespace create us west org    vault namespace create us east org                                      us west org namespace                                   VAULT NAMESPACE us west org vault secrets enable  path  kv customer info  kv v2    VAULT NAMESPACE us west org vault kv put kv customer info customer 001 name  Example LLC  contact email  admin example com       Create a policy to allow read access to kv 
marketing    VAULT NAMESPACE us west org vault policy write customer info read only    EOF    path  kv customer info data            capabilities     read           EOF    VAULT NAMESPACE us west org vault auth enable userpass    VAULT NAMESPACE us west org vault write auth userpass users tam user password  my long password  policies customer info read only       Create an entity    VAULT NAMESPACE us west org vault auth list  format json   jq  r     userpass    accessor    accessor txt    VAULT NAMESPACE us west org vault write  format json identity entity name  TAM    jq  r   data id    entity id txt    VAULT NAMESPACE us west org vault write identity entity alias name  tam user  canonical id   cat entity id txt  mount accessor   cat accessor txt                                       us east org namespace                                   VAULT NAMESPACE us east org vault secrets enable  path  kv marketing  kv v2    VAULT NAMESPACE us east org vault kv put kv marketing campaign start date  March 1  2023  end date  March 31  2023  prise  Certification voucher  quantity  100        Create a policy to allow read access to kv marketing    VAULT NAMESPACE us east org vault policy write marketing read only    EOF    path  kv marketing data campaign          capabilities     read           EOF       Create a group    VAULT NAMESPACE us east org vault write  format json identity group name  campaign admin  policies  marketing read only  member entity ids   cat entity id txt          1  Authenticate to the  us west org  Vault namespace with a valid user         shell session      VAULT NAMESPACE us west org vault login  field token  method userpass      username tam user password  my long password    token txt         1  Read a secret in the  us east org  namespace using the Vault token from     us west org          shell session      VAULT NAMESPACE us east org VAULT TOKEN   cat token txt       vault kv get kv marketing campaign           Tab    Tabs      API      sys 
config group policy application   vault api docs system config group policy application      Tutorial     Secrets management across namespaces without hierarchical relationship   vault tutorials enterprise namespaces secrets sharing"}
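The record above turns on cross-namespace group policies by setting `group_policy_application_mode` to `any`. A minimal Python sketch of the two modes follows; it is an illustration of the documented behavior, not Vault source code, and the helper name and namespace-matching rule are assumptions.

```python
# Illustration of the sys/config/group-policy-application modes described
# above. Hypothetical helper, not Vault's implementation. Namespaces are
# slash-delimited paths; "" stands for the root namespace.

def group_policies_apply(mode: str, token_ns: str, group_ns: str) -> bool:
    """Decide whether an identity group's policies apply to a request token."""
    if mode == "any":
        # Group policies apply to every member, regardless of which
        # namespace the request token came from.
        return True
    # "within_namespace_hierarchy" (the default, historical behaviour):
    # the token must come from the group's namespace or a descendant.
    if group_ns == "":
        return True  # every namespace descends from root
    return token_ns == group_ns or token_ns.startswith(group_ns + "/")

# Peer namespaces, as in the example: a us-west-org token is only granted
# the us-east-org group's policies when the mode is "any".
print(group_policies_apply("within_namespace_hierarchy", "us-west-org", "us-east-org"))  # False
print(group_policies_apply("any", "us-west-org", "us-east-org"))  # True
```

This is why, in both tabs above, the `kv get` against `us-east-org` with a `us-west-org` token only succeeds after the mode is changed to `any`.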
{"questions":"vault include alerts enterprise only mdx Exclusion syntax for audit results layout docs Learn about the behavior and syntax for excluding audit data in Vault Enterprise page title Exclusion syntax for audit results","answers":"---\nlayout: docs\npage_title: Exclusion syntax for audit results\ndescription: >-\n  Learn about the behavior and syntax for excluding audit data in Vault Enterprise.\n---\n\n# Exclusion syntax for audit results\n\n@include 'alerts\/enterprise-only.mdx'\n\nAs of Vault 1.18.0, you can enable audit devices with an `exclude` option to exclude\nspecific fields in an audit entry that is written to a particular audit log, and fine-tune\nyour auditing process.\n\n<Warning title=\"Proceed with caution\">\n\n  Excluding audit entry fields is an advanced feature. Use of exclusion settings\n  could lead to missing data in your audit logs.\n\n  **Always** test your audit configuration in a non-production environment\n  before deploying exclusions to production. Read the\n  [Vault security model](\/vault\/docs\/internals\/security) and\n  [filtering overview](\/vault\/docs\/concepts\/filtering) to familiarize yourself\n  with Vault auditing and filtering basics before enabling audit devices that use\n  exclusions.\n\n<\/Warning>\n\nOnce you enable an audit device with exclusions, every audit entry Vault sends to\nthat audit device is compared to an (optional) condition in the form of a predicate expression.\nVault checks exclusions before writing to the audit log for a device. Vault modifies\nany audit entries that match the exclusion expression to remove the fields\nspecified for that condition. 
You can specify multiple sets of condition and field\ncombinations for an individual audit device.\n\nWhen you enable audit devices that use exclusion, the behavior of any existing audit\ndevice and the behavior of new audit devices that **do not** use exclusion remains\nunchanged.\n\n## `exclude` option\n\nThe value provided with the `exclude` option must be a parsable JSON array (i.e. JSON or\nan escaped JSON string) of exclusion objects.\n\n### Exclusion object\n\n- `condition` `(string: <optional>)` - predicate expression using\n  [filtering syntax](\/vault\/docs\/concepts\/filtering). When matched, Vault removes\n  the values identified by `fields`.\n- `fields` `(string[] <required>)` - collection of fields in the audit entry to exclude,\nidentified using [JSON pointer](https:\/\/tools.ietf.org\/html\/rfc6901) syntax.\n\n```json\n[\n  {\n    \"condition\": \"\",\n    \"fields\": [ \"\" ]\n  }\n]\n```\n\nVault always compares exclusion conditions against the original, immutable audit\nentry (the 'golden source'). 
As a result, evaluating a given condition does not\naffect the evaluation of subsequent conditions.\n\n### Exclusion examples\n\n#### Exclude response data (when present)\n\nExclude the response `data` field from any audit entry that contains it:\n\n```json\n[\n  {\n    \"fields\": [ \"\/response\/data\" ]\n  }\n]\n```\n\n#### Exclude request data (when present) for transit mounts\n\nExclude the request `data` field for audit entries with a mount type of `transit`:\n\n```json\n[\n  {\n    \"condition\": \"\\\"\/request\/mount_type\\\" == transit\",\n    \"fields\": [ \"\/request\/data\" ]\n  }\n]\n```\n\n#### Multiple exclusions\n\nUse multiple JSON objects to exclude:\n\n* `data` from both the request and response when the mount type is `transit`.\n* `entity_id` from requests where the `\/auth\/client_token` starts with `hmac`\n  followed by at least one other character.\n\n```json\n[\n  {\n    \"condition\": \"\\\"\/request\/mount_type\\\" == transit\",\n    \"fields\": [ \"\/request\/data\", \"\/response\/data\" ]\n  },\n  {\n    \"condition\": \"\\\"\/auth\/client_token\\\" matches \\\"hmac.+\\\"\",\n    \"fields\": [ \"\/auth\/entity_id\" ]\n  }\n]\n```\n\n## Audit entry structure\n\nTo accurately construct `condition` and `fields`, Vault operators need a solid\nunderstanding of their audit entry structures. 
At a high level, there are only\n**request** audit entries and **response** audit entries, but each of these\nentries can contain different objects such as `auth`, `request` and `response`.\n\nWe strongly encourage operators to review existing audit logs from a timeframe\nof at least 2-4 weeks to better identify appropriate exclusion conditions and\nfields.\n\n### Request audit entry\n\n```json\n{\n  \"auth\": <auth>,\n  \"error\": \"\",\n  \"forwarded_from\": \"\",\n  \"request\": <request>,\n  \"time\": \"\",\n  \"type\": \"\"\n}\n```\n\n### Response audit entry\n\n```json\n{\n  \"auth\": <auth>,\n  \"error\": \"\",\n  \"forwarded_from\": \"\",\n  \"request\": <request>,\n  \"response\": <response>,\n  \"time\": \"\",\n  \"type\": \"\"\n}\n```\n\n### Auth object (`<auth>`)\n\nThe following auth object definition includes example data with simple types\n(`string`, `bool`, `int`) and is used in other JSON examples that include an\n`<auth>` object.\n\n```json\n{\n  \"accessor\": \"\",\n  \"client_token\": \"\",\n  \"display_name\": \"\",\n  \"entity_created\": \"\",\n  \"entity_id\": \"\",\n  \"external_namespace_policies\": {\n    \"allowed\": true,\n    \"granting_policies\": [\n      {\n        \"name\": \"\",\n        \"namespace_id\": \"\",\n        \"namespace_path\": \"\",\n        \"type\": \"\"\n      }\n    ]\n  },\n  \"identity_policies\": [\n    \"\"\n  ],\n  \"metadata\": {},\n  \"no_default_policy\": false,\n  \"num_uses\": 10,\n  \"policies\": [\n    \"\"\n  ],\n  \"policy_results\": {\n    \"allowed\": true,\n    \"granting_policies\": [\n      {\n        \"name\": \"\",\n        \"namespace_id\": \"\",\n        \"namespace_path\": \"\",\n        \"type\": \"\"\n      }\n    ]\n  },\n  \"remaining_uses\": 5,\n  \"token_policies\": [\n    \"\"\n  ],\n  \"token_issue_time\": \"\",\n  \"token_ttl\": 3600,\n  \"token_type\": \"\"\n}\n```\n\n### Request object (`<request>`)\n\nThe following request object definition includes example data with simple 
types\n(`string`, `bool`, `int`) and is used in other JSON examples that include a\n`<request>` object.\n\n```json\n{\n  \"client_certificate_serial_number\": \"\",\n  \"client_id\": \"\",\n  \"client_token\": \"\",\n  \"client_token_accessor\": \"\",\n  \"data\": {},\n  \"id\": \"\",\n  \"headers\": {},\n  \"mount_accessor\": \"\",\n  \"mount_class\": \"\",\n  \"mount_point\": \"\",\n  \"mount_type\": \"\",\n  \"mount_running_version\": \"\",\n  \"mount_running_sha256\": \"\",\n  \"mount_is_external_plugin\": \"\",\n  \"namespace\": {\n    \"id\": \"\",\n    \"path\": \"\"\n  },\n  \"operation\": \"\",\n  \"path\": \"\",\n  \"policy_override\": true,\n  \"remote_address\": \"\",\n  \"remote_port\": 1234,\n  \"replication_cluster\": \"\",\n  \"request_uri\": \"\",\n  \"wrap_ttl\": 60\n}\n```\n\n### Response object (`<response>`)\n\nThe following response object definition includes example data with simple types\n(`string`, `bool`, `int`) and is used in other JSON examples that include a\n`<response>` object.\n\n```json\n{\n  \"auth\": <auth>,\n  \"data\": {},\n  \"headers\": {},\n  \"mount_accessor\": \"\",\n  \"mount_class\": \"\",\n  \"mount_is_external_plugin\": false,\n  \"mount_point\": \"\",\n  \"mount_running_sha256\": \"\",\n  \"mount_running_plugin_version\": \"\",\n  \"mount_type\": \"\",\n  \"redirect\": \"\",\n  \"secret\": {\n    \"lease_id\": \"\"\n  },\n  \"wrap_info\": {\n    \"accessor\": \"\",\n    \"creation_path\": \"\",\n    \"creation_time\": \"\",\n    \"token\": \"\",\n    \"ttl\": 60,\n    \"wrapped_accessor\": \"\"\n  },\n  \"warnings\": [\n    \"\"\n  ]\n}\n```\n\n## Request audit entry schema\n\n@include 'audit\/request-entry-json-schema.mdx'\n\n## Response audit entry schema\n\n@include 'audit\/response-entry-json-schema.mdx'","site":"vault","answers_cleaned":"    layout  docs page title  Exclusion 
syntax for audit results   include  alerts enterprise only mdx   As of Vault 1 18 0  you can enable audit devices with an  exclude  option to exclude specific fields in an audit entry that is written to a particular audit log  and fine tune your auditing process    Warning title  Proceed with caution      Excluding audit entry fields is an advanced feature  Use of exclusion settings   could lead to missing data in your audit logs       Always   test your audit configuration in a non production environment   before deploying exclusions to production  Read the    Vault security model   vault docs internals security  and    filtering overview   vault docs concepts filtering  to familiarize yourself   with Vault auditing and filtering basics before enabling audit devices that use   exclusions     Warning   Once you enable an audit device with exclusions  every audit entry Vault sends to that audit device is compared to an  optional  condition in the form of a predicate expression  Vault checks exclusions before writing to the audit log for a device  Vault modifies any audit entries that match the exclusion expression to remove the fields specified for that condition  You can specify multiple sets of condition and field combinations for an individual audit device   When you enable audit devices that use exclusion  the behavior of any existing audit device and the behavior of new audit devices that   do not   use exclusion remains unchanged       exclude  option  The value provided with the  exclude  option must be a parsable JSON array  i e  JSON or an escaped JSON string  of exclusion objects       Exclusion object     condition    string   optional      predicate expression using    filtering syntax   vault docs concepts filtering   When matched  Vault removes   the values identified by  fields      fields    string    required      collection of fields in the audit entry to exclude  identified using  JSON pointer  https   tools ietf org html rfc6901  syntax      json 
           condition            fields                     Vault always compares exclusion conditions against the original  immutable audit entry  the  golden source    As a result  evaluating a given condition does not affect the evaluation of subsequent conditions       Exclusion examples       Exclude response data  when present   Exclude the response  data  field from any audit entry that contains it      json            fields       response data                    Exclude request data  when present  for transit mounts  Exclude the request  data  field for audit entries with a mount type of  transit       json            condition       request mount type      transit        fields       request data                   Multiple exclusions  Use multiple JSON objects to exclude      data  from both the request and response when the mount type is  transit      entity id  from requests where the   auth client token  starts with  hmac    followed by at least one other character      json            condition       request mount type      transit        fields       request data     response data                  condition        auth client token   matches   hmac            fields       auth entity id                  Audit entry structure  To accurately construct  condition  and  fields   Vault operators need a solid understanding of their audit entry structures  At a high level  there are only   request   audit entries and   response   audit entries  but each of these entries can contain different objects such as  auth    request  and  response    We strongly encourage operaters to review existing audit logs from a timeframe of at least 2 4 weeks to better identify appropriate exclusion conditions and fields       Request audit entry     json      auth    auth      error          forwarded from          request    request      time          type                 Response audit entry     json      auth    auth      error          forwarded from          request    
request      response    response      time          type                 Auth object    auth     The following auth object definition includes example data with simple types   string    bool    int   and used in other JSON examples that include an   auth   object      json      accessor          client token          display name          entity created          entity id          external namespace policies          allowed   true       granting policies                      name                namespace id                namespace path                type                            identity policies                    metadata          no default policy   false     num uses   10     policies                    policy results          allowed   true       granting policies                      name                namespace id                namespace path                type                            remaining uses   5     token policies                    token issue time          token ttl   3600     token type                 Request object    request     The following request object definition includes example data with simple types   string    bool    int   and used in other JSON examples that include a   request   object      json      client certificate serial number          client id          client token          client token accessor          data          id          headers          mount accessor          mount class          mount point          mount type          mount running version          mount running sha256          mount is external plugin          namespace          id            path              operation          path          policy override   true     remote address          remote port   1234     replication cluster          request uri          wrap ttl   60            Response object    response     The following response object definition includes example data with simple types   string    bool    int   and used in other JSON 
examples that include a   response   object      json      auth    auth      data          headers          mount accessor          mount class          mount is external plugin   false     mount point          mount running sha256          mount running plugin version          mount type          redirect          secret          lease id              wrap info          accessor            creation path            creation time            token            ttl   60       wrapped accessor              warnings                          Request audit entry schema   include  audit request entry json schema mdx      Response audit entry schema   include  audit request entry json schema mdx "}
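The exclusion mechanics documented in the record above (conditions evaluated against the immutable "golden source" entry, then `fields` removed by JSON pointer) can be sketched in a few lines of Python. This is an illustration only, not Vault's implementation: the condition grammar is reduced here to a single `"<pointer>" == value` comparison, whereas Vault supports a full predicate language.

```python
# Sketch of the audit `exclude` semantics: each exclusion object pairs an
# optional condition with JSON-pointer fields to strip from matching entries.
import copy

def resolve(entry, pointer):
    """Follow an RFC 6901 JSON pointer like /request/mount_type."""
    node = entry
    for part in pointer.lstrip("/").split("/"):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

def matches(entry, condition):
    """Toy predicate: only '"<pointer>" == value'; empty means always match."""
    if not condition:
        return True
    pointer, _, want = condition.partition(" == ")
    return resolve(entry, pointer.strip().strip('"')) == want.strip().strip('"')

def apply_exclusions(entry, exclusions):
    # Conditions are checked against the original entry (the "golden
    # source"); fields are removed from a copy, so one exclusion cannot
    # affect the evaluation of subsequent conditions.
    out = copy.deepcopy(entry)
    for exc in exclusions:
        if matches(entry, exc.get("condition", "")):
            for field in exc["fields"]:
                *parents, leaf = field.lstrip("/").split("/")
                node = out
                for p in parents:
                    node = node.get(p) if isinstance(node, dict) else None
                if isinstance(node, dict):
                    node.pop(leaf, None)
    return out

entry = {"request": {"mount_type": "transit", "data": {"plaintext": "..."}}}
excl = [{"condition": '"/request/mount_type" == transit', "fields": ["/request/data"]}]
print(apply_exclusions(entry, excl))  # request/data stripped for transit mounts
```

The second example from the record ("Exclude request data for transit mounts") is what the final three lines mirror; entries whose mount type is not `transit` pass through unchanged.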
{"questions":"vault Learn about the behavior and syntax for filtering audit data in Vault Enterprise include alerts enterprise only mdx page title Filter syntax for audit results layout docs Filter syntax for audit results","answers":"---\nlayout: docs\npage_title: Filter syntax for audit results\ndescription: >-\n  Learn about the behavior and syntax for filtering audit data in Vault Enterprise.\n---\n\n# Filter syntax for audit results\n\n@include 'alerts\/enterprise-only.mdx'\n\nAs of Vault 1.16.0, you can enable audit devices with a `filter` option to limit\nthe audit entries written to a particular audit log and fine-tune your auditing\nprocess.\n\n<Warning title=\"Proceed with caution\">\n\n  Filtering audit logs is an advanced feature. Exclusively enabling filtered\n  devices without configuring an audit fallback may lead to gaps in your audit\n  logs.\n\n  **Always** test your audit configuration in a non-production environment\n  before deploying filters to production. And make sure to read the\n  [Vault security model](\/vault\/docs\/internals\/security) and\n  [filtering overview](\/vault\/docs\/concepts\/filtering) to familiarize yourself\n  with Vault auditing and filtering basics before enabling filtered audit\n  devices.\n\n<\/Warning>\n\nOnce you enable an audit device with a filter, every audit entry Vault sends to\nthat audit device is compared to the predicate expression in the filter. Only\naudit entries that match the filter are written to the audit log for the device.\n\nWhen you enable filtered audit devices, the behavior of any existing audit\ndevice and the behavior of new audit devices that **do not** use filters remain\nunchanged.\n\n## Fallback auditing devices\n\nFiltering adds flexibility to your auditing workflows, but filtering also adds\ncomplexity that can lead to entries missing from your logs by mistake. 
For\nexample, writing audit entries to one device for `(N < 10)` and another device\nfor `(N > 10)` would exclude audit entries where `(N == 10)`, which may not be\nthe intended behavior.\n\nThe fallback audit device saves all audit entries that would otherwise get\nfiltered out and dropped from the audit record. Enabling an audit device with\nthe `fallback` parameter ensures that Vault continues to adhere to the default\n[security model](\/vault\/docs\/internals\/security) which mandates that all\nrequests and responses must be successfully logged before clients receive secret\nmaterial.\n\nVault installations that use filtered audit devices **exclusively** should\nalways configure a fallback audit device to guarantee a security standard\ncomparable to Vault installations that use only standard, non-filtered audit\ndevices.\n\n<Warning title=\"You can only have 1 fallback device\">\n\n  Choose your fallback audit device carefully. You can only designate one\n  fallback audit device for the entire Vault installation\n  **out of all your active audit devices**.\n\n<\/Warning>\n\n### Fallback telemetry metrics\n\nWhen the fallback device successfully writes an audit entry to the audit log,\nVault emits a\n[fallback 'success' metric](\/vault\/docs\/internals\/telemetry\/metrics\/audit#vault-audit-fallback-success).\n\nIf you enable filtering **without** a fallback device, Vault emits a\n[fallback 'miss' metric](\/vault\/docs\/internals\/telemetry\/metrics\/audit#vault-audit-fallback-miss)\nanytime an audit entry would have been written to the fallback device, so you can\ntrack how many auditable events you have lost.\n\n## Audit device limitations\n\n1. You cannot add filtering to an existing audit device.\n1. You can configure filtering when enabling one of the following supported audit device types:\n    - [file](\/vault\/docs\/audit\/file)\n    - [socket](\/vault\/docs\/audit\/socket)\n    - [syslog](\/vault\/docs\/audit\/syslog)\n1. 
You can only designate one auditing fallback device.\n\n## Filtering and test messages\n\nBy default, Vault sends a test message to the audit device when you enable it.\nDepending on how you configure your filters, the default test message may fail\nthe predicate expression and not write to the new device.\n\nYou can determine whether the test message should appear in the sink for the\nnewly enabled audit device based on the following table of properties,\nwhich are common to all default test messages.\n\nProperty      | Value\n------------- | ----------------\n`mount_point` | empty\n`mount_type`  | empty\n`namespace`   | empty\n`operation`   | `update`\n`path`        | `sys\/audit\/test`\n\n## `filter` properties for audit devices\n\nFilters can only reference the following properties of an audit entry:\n\nProperty      | Example                             | Description\n------------- | ----------------------------------- | --------------------------\n`mount_point` | `mount_point == \\\"auth\/oidc\\\"`          | Log all entries for the `auth\/oidc` mount point\n`mount_type`  | `mount_type == \\\"kv-v2\\\"`           | Log all entries from `kv-v2` plugins\n`namespace`   | `namespace != \\\"admin\/\\\"`               | Log all entries **not** in the admin namespace\n`operation`   | `operation == \\\"read\\\"`                 | Log all read operations\n`path`        | `path == \\\"auth\/approle\/login\\\"`        | Log all activity against the AppRole login path\n\n<Tip title=\"Root namespaces are unnamed\">\n\n  Non-root namespace paths **must** end with a trailing slash (`\/`) to match correctly.\n  But the root namespace does not have a path and only matches to an empty\n  string. To match to the root namespace in your filter use `\\\"\\\"`. 
For example,\n  `namespace != \\\"\\\"` matches any audited request **not** in the root namespace.\n\n<\/Tip>\n\n## A practical example\n\nAssume you already have an audit file called `vault-audit.log` but you want to\nfilter your audit entries and persist all the key\/value (`kv`) type events to a\nspecific audit log file called `kv-audit.log`.\n\nTo filter the events:\n\n1. Enable a `file` audit device with a `mount_type` filter:\n  ```shell-session\n  vault audit enable                \\\n    -path=kv-only                   \\\n    file                            \\\n    filter=\"mount_type == \\\"kv\\\"\"   \\\n    file_path=\/logs\/kv-audit.log\n  ```\n1. Enable a fallback device:\n  ```shell-session\n  vault audit enable                \\\n    -path=my-fallback               \\\n    -description=\"fallback device\"  \\\n    file                            \\\n    fallback=true                   \\\n    file_path=\/tmp\/kv-audit.fallback.log\n  ```\n1. Confirm the audit devices are enabled:\n  ```shell-session\n    vault audit list --detailed\n  ```\n1. Enable a new `kv` secrets engine called `my-kv`:\n  ```shell-session\n    vault secrets enable -path my-kv kv-v2\n  ```\n1. Write secret data to the `kv` engine:\n  ```shell-session\n    vault kv put -mount=my-kv my_secret the_value=always_angry\n  ```\n\nThe `\/logs\/kv-audit.log` now includes four entries in total:\n\n- the command request that enabled `my-kv`\n- the response entry from enabling `my-kv`\n- the command request that wrote a secret to `my-kv`\n- the response entry from writing the secret to `my-kv`.\n\nThe fallback device captured entries for the other commands. 
And the\n original audit file, `vault-audit.log`, continues to capture all audit events.","site":"vault","answers_cleaned":"    layout  docs page title  Filter syntax for audit results description       Learn about the behavior and syntax for filtering audit data in Vault Enterprise         Filter syntax for audit results   include  alerts enterprise only mdx   As of Vault 1 16 0  you can enable audit devices with a  filter  option to limit the audit entries written to a particular audit log and fine tune your auditing process    Warning title  Proceed with caution      Filtering audit logs is an advanced feature  Exclusively enabling filtered   devices without configuring an audit fallback may lead to gaps in your audit   logs       Always   test your audit configuration in a non production environment   before deploying filters to production  And make sure to read the    Vault security model   vault docs internals security  and    filtering overview   vault docs concepts filtering  to familiarize yourself   with Vault auditing and filtering basics before enabling filtered audit   devices     Warning   Once you enable an audit device with a filter  every audit entry Vault sends to that audit device is compared to the predicate expression in the filter  Only audit entries that match the filter are written to the audit log for the device   When you enable filtered audit devices  the behavior of any existing audit device and the behavior of new audit devices that   do not   use filters remain unchanged      Fallback auditing devices  Filtering adds flexibility to your auditing workflows  but filtering also adds complexity that can lead to entries missing from your logs by mistake  For example  writing audit entries to one device for   N   10   and another device for   N   10  would exclude audit entries where   N    10    which may not be the intended behavior   The fallback audit device saves all audit entries that would otherwise get filtered out and dropped from 
the audit record  Enabling an audit device with the  fallback  parameter ensures that Vault continues to adhere to the default  security model   vault docs internals security  which mandates that all requests and responses must be successfully logged before clients receive secret material   Vault installations that use filtered audit devices   exclusively    should always configure a fallback audit device to guarantee a comparable security standard as Vault installations that only use standard  non filtered audit devices    Warning title  You can only have 1 fallback device      Choose your fallback audit device carefully  You can only designate one   fallback audit device for the entire Vault installation     out of all your active audit devices       Warning       Fallback telemetry metrics  When the fallback device successfully writes an audit entry to the audit log  Vault emits a  fallback  success  metric   vault docs internals telemetry metrics audit vault audit fallback success    If you enable filtering   without   a fallback device  Vault emits a  fallback  miss  metric   vault docs internals telemetry metrics audit vault audit fallback miss  metric anytime an audit entry would have been written to the fallback device  so you can track how many auditable events you have lost      Audit device limitations  1  You cannot add filtering to an existing audit device  1  You can configure filtering when enabling one of the following supported audit device types         file   vault docs audit file         socket   vault docs audit socket         syslog   vault docs audit syslog  1  You can only designate one auditing fallback device      Filtering and test messages  By default  Vault sends a test message to the audit device when you enable it  Depending on how you configure your filters  the default test message may fail the predicate expression and not write to the new device   You can determine whether the test message should appear in the sink for the newly 
enabled audit device based on the following property table  which are common to all default test messages   Property        Value                                   mount point    empty  mount type     empty  namespace      empty  operation       update   path            sys audit test       filter  properties for audit devices  Filters can only reference the following properties of an audit entry   Property        Example                               Description                                                                                   mount point     mount point      auth oidc               Log all entries for the  auth oidc  mount point  mount type      mount type      kv v2                Log all entries from  kv v2  plugins  namespace       namespace      admin                     Log all entries   not   in the admin namespace  operation       operation      read                      Log all read operations  path            path      auth approle login             Log all activity against the AppRole login path   Tip title  Root namespaces are unnamed      Non root namespace paths   must   end with a trailing slash       to match correctly    But the root namespace does not have a path and only matches to an empty   string  To match to the root namespace in your filter use         For example     namespace          matches any audited request   not   in the root namespace     Tip      A practical example  Assume you already have an audit file called  vault audit log  but you want to filter your audit entries and persist all the key value   kv   type events to a specific audit log file called  kv audit log    To filter the events   1  Enable a  file  audit device with a  mount type  filter       shell session   vault audit enable                       path kv only                         file                                  filter  mount type      kv            file path  logs kv audit log 1  Enable a fallback device       shell session   vault audit 
enable                       path my fallback                      description  fallback device         file                                  fallback true                         file path  tmp kv audit fallback log 1  Confirm the audit devices are enabled       shell session     vault audit list   detailed       1  Enable a new  kv  secrets engine called  my kv        shell session     vault secrets enable  path my kv kv v2       1  Write secret data to the  kv  engine       shell session     vault kv put  mount my kv my secret the value always angry        The   var kv audit log  now includes four entries in total     the command request that enabled  my kv    the response entry from enabling  my kv    the command request that wrote a secret to  my kv    the response entry from writing the secret to  my kv    The fallback device captured entries for the other commands  And the  original audit file   vault audit log   continues to capture all audit events "}
{"questions":"vault page title Check for Merkle tree corruption Check for Merkle tree corruption include alerts enterprise only mdx Learn how to check your Vault Enterprise cluster data for corruption in the Merkle trees used for replication layout docs","answers":"---\nlayout: docs\npage_title: Check for Merkle tree corruption\ndescription: >-\n  Learn how to check your Vault Enterprise cluster data for corruption in the Merkle trees used for replication.\n---\n\n# Check for Merkle tree corruption\n\n@include 'alerts\/enterprise-only.mdx'\n\nVault Enterprise replication uses Merkle trees to keep the cluster state, and rolls cluster state into a Merkle root hash. When data is updated or removed in a cluster, the Merkle tree is also updated. In certain circumstances detailed later in this document, the Merkle tree can become corrupted.\n\n## Types of corruption\n\nMerkle tree corruption can occur at different points in the tree:\n\n- Composite root corruption\n- Subtree root corruption\n- Page and subpage corruption\n\n## Diagnose Merkle tree corruption\n\nIf you run Vault Enterprise versions 1.15.0+, 1.14.3+ or 1.13.7+, you can use the [\/sys\/replication\/merkle-check API](\/vault\/api-docs\/system\/replication#sys-replication-merkle-check) endpoint to help determine if your cluster is encountering Merkle tree corruption. In the following sections, you'll learn about some of the details of symptoms and corruption causes which the merkle-check endpoint can detect. \n\n<Note>\n\nKeep in mind that the merkle-check endpoint cannot detect every way in which a Merkle tree could be corrupted.\n\n<\/Note>\n\nYou'll also learn how to query the merkle-check endpoint and interpret its output. 
Finally, you'll learn about some Vault CLI commands that can help you diagnose corruption.\n\n## Consecutive Merkle difference and synchronization loop\n\nOne indication of potential Merkle tree corruption occurs when Vault logs display consecutive Merkle difference and synchronization (merkle-diff and merkle-sync) operations without a lasting return to streaming write-ahead logs (WALs).\n\nA known cause of this symptom is a split-brain situation within a High Availability (HA) Vault cluster. In this case, the leader loses leadership while it is still writing data to storage. Meanwhile, a new leader is elected and reads or writes the data that the old leader is mutating. During the leadership transfer, the old leader can write data that becomes lost and results in an inconsistent Merkle tree state.\n\nThe two most common symptoms that can indicate a split-brain issue are detailed in the following sections.\n\n### Merkle difference results in no delta\n\nA merkle-diff operation resulting in no delta indicates conflicting Merkle tree pages. 
Despite the two clusters holding exactly the same data in both trees, their root hashes do not match.\n\nThe following example log shows entries from a performance replication secondary indicating the issue resulting from a corrupted tree, and no writes on the primary since the previous merkle-sync operation.\n\n<CodeBlockConfig hideClipboard>\n\n```plaintext\nvault [INFO]  .perf-sec.core0.core: non-matching guard, exiting\nvault [TRACE] .perf-sec.core0.core: finished client WAL streaming\nvault [INFO]  .perf-sec.core0.replication: no matching WALs available\nvault [DEBUG] .perf-sec.core0.replication: starting merkle diff\nvault [TRACE] .perf-sec.core0.core: wal context done\nvault [TRACE] .perf-sec.core0.core: checking conflicting pages\nvault [TRACE] .perf-pri.core0.core: serving conflicting pages\nvault [DEBUG] .perf-pri.core0.replication.index.perf: creating merkle state snapshot: generation=4\nvault [DEBUG] .perf-pri.core0.replication.index.perf: removing state snapshot from cache: generation=4\nvault [INFO]  .perf-sec.core0.replication: requesting WAL stream: guard=8acf94ac\nvault [TRACE] .perf-sec.core0.core: starting client WAL streaming\nvault [TRACE] .perf-sec.core0.core: receiving WALs\nvault [TRACE] .perf-pri.core0.core: starting serving WALs: clientID=e16930a6-7d24-6924-41fe-aa8beb90b1b2\nvault [TRACE] .perf-pri.core0.core: streaming from log shipper done: clientID=e16930a6-7d24-6924-41fe-aa8beb90b1b2\nvault [TRACE] .perf-pri.core0.core: internal wal stream stop channel fired: clientID=e16930a6-7d24-6924-41fe-aa8beb90b1b2\nvault [TRACE] .perf-pri.core0.core: stopping serving WALs: clientID=e16930a6-7d24-6924-41fe-aa8beb90b1b2\nvault [INFO]  .perf-sec.core0.core: non-matching guard, exiting\nvault [TRACE] .perf-sec.core0.core: finished client WAL streaming\nvault [INFO]  .perf-sec.core0.replication: no matching WALs available\nvault [TRACE] .perf-sec.core0.core: wal context done\nvault [DEBUG] .perf-sec.core0.replication: starting merkle diff\nvault 
[TRACE] .perf-sec.core0.core: checking conflicting pages\nvault [TRACE] .perf-pri.core0.core: serving conflicting pages\n```\n\n<\/CodeBlockConfig>\n\nIn the example log output, the performance secondary cluster's finite state machine (FSM) enters merkle-diff mode, in which it tries to fetch conflicting pages from the primary cluster. The diff result is empty, indicated by an immediate switch to the stream-wals mode and **skipping the merkle-sync operation**.\n\nFurther into the log, the performance secondary cluster immediately goes into merkle-diff mode again, trying to reconcile the discrepancies between its Merkle tree and the primary cluster's. This loop goes on without resolution due to the Merkle tree corruption.\n\n### Non-resolving merkle-sync\n\nWhen a merkle-diff operation reveals conflicting data and the merkle-sync operation fetches it, but Vault still cannot enter a lasting stream-wals mode afterwards, the behavior indicates non-matching Merkle roots.\n\nIn the following server log snippet, merkle-diff returns a non-empty list of page conflicts and merkle-sync fetches those keys. The FSM then transitions to the stream-wals state. 
Immediately after this transition, the FSM transitions to the merkle-diff again, and returns a **non-matching guard** error.\n\n<CodeBlockConfig hideClipboard>\n\n```plaintext\nvault [INFO]  perf-sec.core0.core: non-matching guard, exiting\nvault [TRACE] perf-sec.core0.core: finished client WAL streaming\nvault [INFO]  perf-sec.core0.replication: no matching WALs available\nvault [TRACE] perf-sec.core0.core: wal context done\nvault [DEBUG] perf-sec.core0.replication: transitioning state: state=merkle-diff\nvault [DEBUG] perf-sec.core0.replication: starting merkle diff\nvault [TRACE] perf-sec.core0.core: checking conflicting pages\nvault [TRACE] perf-pri.core0.core: serving conflicting pages\nvault [DEBUG] perf-pri.core0.replication.index.perf: creating merkle state snapshot: generation=3\nvault [TRACE] perf-sec.core0.core: fetching subpage hashes\nvault [TRACE] perf-pri.core0.core: serving subpage hashes\nvault [DEBUG] perf-pri.core0.replication.index.perf: removing state snapshot from cache: generation=3\nvault [DEBUG] perf-sec.core0.replication: transitioning state: state=merkle-sync\nvault [DEBUG] perf-sec.core0.replication: waiting for operations to complete before merkle sync\nvault [DEBUG] perf-sec.core0.replication: starting merkle sync: num_conflict_keys=4\nvault [DEBUG] perf-sec.core0.replication: merkle sync debug info: local_keys=[] remote_keys=[] conflicting_keys=[\"logical\/67bf7b33-734e-f909-86e5-a7e69af0979f\/junk9\", \"logical\/67bf7b33-734e-f909-86e5-a7e69af0979f\/junk7\", \"logical\/67bf7b33-734e-f909-86e5-a7e69af0979f\/junk8\", \"logical\/67bf7b33-734e-f909-86e5-a7e69af0979f\/junk6\"]\nvault [DEBUG] perf-sec.core0.replication: transitioning state: state=stream-wals\nvault [INFO]  perf-sec.core0.replication: requesting WAL stream: guard=0c556858\nvault [TRACE] perf-sec.core0.core: starting client WAL streaming\nvault [TRACE] perf-sec.core0.core: receiving WALs\nvault [TRACE] perf-pri.core0.core: starting serving WALs: 
clientID=6afbce30-67c5-bb15-6eda-001140d33275\nvault [TRACE] perf-pri.core0.core: streaming from log shipper done: clientID=6afbce30-67c5-bb15-6eda-001140d33275\nvault [TRACE] perf-pri.core0.core: internal wal stream stop channel fired: clientID=6afbce30-67c5-bb15-6eda-001140d33275\nvault [TRACE] perf-pri.core0.core: stopping serving WALs: clientID=6afbce30-67c5-bb15-6eda-001140d33275\nvault [INFO]  perf-sec.core0.core: non-matching guard, exiting\nvault [TRACE] perf-sec.core0.core: finished client WAL streaming\nvault [INFO]  perf-sec.core0.replication: no matching WALs available\nvault [DEBUG] perf-sec.core0.replication: transitioning state: state=merkle-diff\nvault [DEBUG] perf-sec.core0.replication: starting merkle diff\nvault [TRACE] perf-sec.core0.core: wal context done\nvault [TRACE] perf-sec.core0.core: checking conflicting pages\nvault [TRACE] perf-pri.core0.core: serving conflicting pages\n```\n\n<\/CodeBlockConfig>\n\n## Use the merkle-check endpoint\n\nThe following examples show how you can use curl to query the merkle-check endpoint.\n\n<Note>\n\nThe merkle-check endpoint is authenticated. 
You need a Vault token with the capabilities detailed in the [endpoint documentation](\/vault\/api-docs\/system\/replication#sys-replication-merkle-check) to query the endpoint.\n\n<\/Note>\n\n### Check the primary cluster\n\n```shell-session\n$ curl $VAULT_ADDR\/v1\/sys\/replication\/merkle-check\n```\n\nExample output:\n\n<CodeBlockConfig hideClipboard>\n\n```json\n{\n  \"request_id\": \"d4b2ad1a-6e5f-7f9e-edfe-558eb89a40e6\",\n  \"lease_id\": \"\",\n  \"lease_duration\": 0,\n  \"renewable\": false,\n  \"data\": {\n    \"merkle_corruption_report\": {\n      \"corrupted_root\": false,\n      \"corrupted_tree_map\": {\n        \"1\": {\n          \"corrupted_index_tuples_map\": {\n            \"5\": {\n              \"corrupted\": false,\n              \"subpages\": [\n                28\n              ]\n            }\n          },\n          \"corrupted_subtree_root\": false,\n          \"root_hash\": \"DyGc6rQTV9XgyNSff3zimhi3FJM=\",\n          \"tree_type\": \"replicated\"\n        },\n        \"2\": {\n          \"corrupted_index_tuples_map\": null,\n          \"corrupted_subtree_root\": false,\n          \"root_hash\": \"EXmRTdfYCZTm5i9wLef9RQqyLCw=\",\n          \"tree_type\": \"local\"\n        }\n      },\n      \"last_corruption_check_epoch\": \"2023-09-11T11:25:59.44956-07:00\"\n    }\n  }\n}\n```\n\n<\/CodeBlockConfig>\n\nThe `merkle_corruption_report` stanza provides information about Merkle tree corruption.\n\nWhen Vault detects corruption in the root hash of the composite tree, it sets the `corrupted_root` field value to **true**; when it detects corruption in the root hash of a subtree, it sets that subtree's `corrupted_subtree_root` field value to **true**.\n\nThe `corrupted_tree_map` field identifies any corruption in the subtrees, including replicated and local subtrees. The replicated tree is indexed by number `1` in the map and the local tree is indexed by number `2`. 
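To interpret a report programmatically, a small helper like the following can walk `corrupted_tree_map` and list each corrupted location. This is a hypothetical sketch (the `summarize` helper is mine, shaped around the example output above), not an official Vault tool:

```python
# Sketch: summarize a merkle_corruption_report dict (shape taken from the
# example output above). Hypothetical helper, not part of Vault.

def summarize(report):
    """Return human-readable findings for each corrupted location."""
    findings = []
    if report["corrupted_root"]:
        findings.append("composite root corrupted")
    for tree in report["corrupted_tree_map"].values():
        kind = tree["tree_type"]  # "replicated" (index "1") or "local" ("2")
        if tree["corrupted_subtree_root"]:
            findings.append(f"{kind} subtree root corrupted")
        # corrupted_index_tuples_map may be null (None) when the tree is clean
        for page, info in (tree["corrupted_index_tuples_map"] or {}).items():
            if info["corrupted"]:
                findings.append(f"{kind} tree: page {page} corrupted")
            for sub in info.get("subpages", []):
                findings.append(f"{kind} tree: page {page}, subpage {sub} corrupted")
    return findings

# The merkle_corruption_report portion of the example output above.
report = {
    "corrupted_root": False,
    "corrupted_tree_map": {
        "1": {
            "corrupted_index_tuples_map": {"5": {"corrupted": False, "subpages": [28]}},
            "corrupted_subtree_root": False,
            "tree_type": "replicated",
        },
        "2": {
            "corrupted_index_tuples_map": None,
            "corrupted_subtree_root": False,
            "tree_type": "local",
        },
    },
}

print(summarize(report))  # ['replicated tree: page 5, subpage 28 corrupted']
```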
The `tree_type` sub-field also shows which tree contains a corrupted page. The replicated subtree stores the information that is replicated to either a disaster recovery or a performance replication secondary cluster. It contains replicated items like Vault configuration, secrets engine and auth method configuration, and KV secrets. The local subtree stores information relevant to the local cluster, such as local mounts, leases, and tokens.\n\nIn the event of corruption within a page or a subpage of a tree, the `corrupted_index_tuples_map` includes the page number along with a list of corrupted subpage numbers. If the page hash is corrupted, Vault sets the `corrupted` field to true; otherwise, Vault sets it to false.\n\nIn the example output, the replicated tree is corrupted on subpage 28 of page 5 only. 
This means that the page hash, the replicated tree hash, and the composite tree hash are all correct.\n\n### Check a secondary cluster\n\nThe merkle-check endpoint prints information about the corruption status of the Merkle tree on a disaster recovery (DR) secondary cluster. You need a [DR operation token](\/vault\/tutorials\/enterprise\/disaster-recovery#dr-operation-token-strategy) to access this endpoint. Here is an example `curl` command that demonstrates querying the endpoint.\n\n```shell-session\n$ curl $VAULT_ADDR\/v1\/sys\/replication\/dr\/secondary\/merkle-check\n```\n\n## CLI commands for diagnosing corruption\n\nYou need to check four layers for potential Merkle tree corruption: composite root, subtree roots, page hash, and subpage hash. You can use the following example Vault CLI commands in combination with the [jq](https:\/\/jqlang.github.io\/jq\/) tool to diagnose corruption.\n\n### Composite root corruption\n\nUse a Vault command like the following to check if the composite tree root is corrupted:\n\n```shell-session\n$ vault write sys\/replication\/merkle-check \\\n    -format=json | jq -r '.data.merkle_corruption_report.corrupted_root'\n```\n\nIf the response is true, the Merkle tree is corrupted at the composite root.\n\n### Subtree root corruption\n\nUse a Vault command like the following to check if a subtree root is corrupted:\n\n```shell-session\n$ vault write sys\/replication\/merkle-check -format=json \\\n    | jq -r '.data.merkle_corruption_report.corrupted_tree_map[] | select(.corrupted_subtree_root==true)'\n```\n\nIf the response is non-empty, a subtree root is corrupted.\n\n### Page or subpage corruption\n\nUse a Vault command like the following to check if a page or subpage is corrupted:\n\n```shell-session\n$ vault write sys\/replication\/merkle-check -format=json \\\n    | jq -r '.data.merkle_corruption_report.corrupted_tree_map[] | select(.corrupted_index_tuples_map!=null)'\n```\n\nIf the response is a non-empty map, then at least one 
page or subpage is corrupted.\n\nThe following is an example of a non-empty map in the command output:\n\n<CodeBlockConfig hideClipboard>\n\n```json\n{\n  \"corrupted_index_tuples_map\": {\n    \"23\": {\n      \"corrupted\": false,\n      \"subpages\": [\n        234\n      ]\n    }\n  },\n  \"corrupted_subtree_root\": false,\n  \"root_hash\": \"A5uW54VXDM4jUryDkxN8Vauk8kE=\",\n  \"tree_type\": \"replicated\"\n}\n```\n\n<\/CodeBlockConfig>\n\nIn this example, Vault returns page number 23. Its `corrupted` field is false, which means the page itself is not corrupted; however, subpage 234 is listed as a corrupted subpage.\n\nTo locate a corrupted subpage, Vault also needs to note its parent page, because each page contains 256 subpages and these indexes repeat for every page. It's possible for only the full page to be corrupted without any corrupted subpages. In that case, the `corrupted` field in the page map is true, and no subpages are listed.\n\n## Caveats and considerations\n\nWe generally recommend that you consult the merkle-check endpoint before reindexing to ensure the process will be useful, because reindexing can be time-consuming and can lead to downtime.","site":"vault","answers_cleaned":"    layout  docs page title  Check for Merkle tree corruption description       Learn how to check your Vault Enterprise cluster data for corruption in the Merkle trees used for replication         Check for Merkle tree corruption   include  alerts enterprise only mdx   Vault Enterprise replication uses Merkle trees to keep the cluster state  and rolls cluster state into a Merkle root hash  When data is updated or removed in a cluster  the Merkle tree is also updated  In certain circumstances detailed later in this document  the Merkle tree can become corrupted      Types of corruption  Merkle tree corruption can occur at different points in the tree     Composite root corruption   Subtree root corruption   Page and subpage corruption     Diagnose Merkle tree 
corruption  If you run Vault Enterprise versions 1 15 0   1 14 3  or 1 13 7   you can use the   sys replication merkle check API   vault api docs system replication sys replication merkle check  endpoint to help determine if your cluster is encountering Merkle tree corruption  In the following sections  you ll learn about some of the details of symptoms and corruption causes which the merkle check endpoint can detect     Note   Keep in mind that the merkle check endpoint cannot detect every way in which a Merkle tree could be corrupted     Note   You ll also learn how to query the merkle check endpoint and interpret its output  Finally  you ll learn about some Vault CLI commands which can help you diagnose corruption      Consecutive Merkle difference and synchronization loop  One indication of potential Merkle tree corruption occurs when Vault logs display consecutive Merkle difference and synchronization  merkle diff and merkle sync  operations without lasting resolution to streaming write ahead logs  WALs     A known cause for this symptom is the occurrence of a split brain situation within a High Availability  HA  Vault cluster  In this case  the leader loses leadership during which it writes data to the storage  Meanwhile  a new leader is elected and reads or writes data that the old leader is mutating  During leadership transfer  an old leader can write data which becomes lost and results in an inconsistent Merkle tree state   The two most common symptoms which can potentially indicate a split brain issue are detailed in the following sections      Merkle difference results in no delta   A merkle diff operation resulting in no delta indicates conflicting Merkle tree pages  Despite the two clusters holding exactly the same data in both trees  their root hashes do not match   The following example log shows entries from a performance replication secondary indicating the issue resulting from a corrupted tree  and no writes on the primary since the previous 
merkle sync operation    CodeBlockConfig hideClipboard      plaintext vault  INFO    perf sec core0 core  non matching guard  exiting vault  TRACE   perf sec core0 core  finished client WAL streaming vault  INFO    perf sec core0 replication  no matching WALs available vault  DEBUG   perf sec core0 replication  starting merkle diff vault  TRACE   perf sec core0 core  wal context done vault  TRACE   perf sec core0 core  checking conflicting pages vault  TRACE   perf pri core0 core  serving conflicting pages vault  DEBUG   perf pri core0 replication index perf  creating merkle state snapshot  generation 4 vault  DEBUG   perf pri core0 replication index perf  removing state snapshot from cache  generation 4 vault  INFO    perf sec core0 replication  requesting WAL stream  guard 8acf94ac vault  TRACE   perf sec core0 core  starting client WAL streaming vault  TRACE   perf sec core0 core  receiving WALs vault  TRACE   perf pri core0 core  starting serving WALs  clientID e16930a6 7d24 6924 41fe aa8beb90b1b2 vault  TRACE   perf pri core0 core  streaming from log shipper done  clientID e16930a6 7d24 6924 41fe aa8beb90b1b2 vault  TRACE   perf pri core0 core  internal wal stream stop channel fired  clientID e16930a6 7d24 6924 41fe aa8beb90b1b2 vault  TRACE   perf pri core0 core  stopping serving WALs  clientID e16930a6 7d24 6924 41fe aa8beb90b1b2 vault  INFO    perf sec core0 core  non matching guard  exiting vault  TRACE   perf sec core0 core  finished client WAL streaming vault  INFO    perf sec core0 replication  no matching WALs available vault  TRACE   perf sec core0 core  wal context done vault  DEBUG   perf sec core0 replication  starting merkle diff vault  TRACE   perf sec core0 core  checking conflicting pages vault  TRACE   perf pri core0 core  serving conflicting pages        CodeBlockConfig   In the example log output  the performance secondary cluster s finite state machine  FSM  is entering merkle diff mode  in which it tries to fetch conflicting pages from the 
primary cluster  The diff result is empty  indicated by an immediate switch to the stream wals mode and   skipping the merkle sync operation      Further into the log  the performance secondary cluster immediately goes into the merkle diff mode again trying to reconcile the discrepancies of its Merkle tree with the primary cluster  This loop goes on without resolution due to the Merkle tree corruption       Non resolving merkle sync  When a diff operation reveals conflicting data and the sync operation fetches them  but Vault still cannot enter lasting streaming WALs mode afterwards  this indicates a non matching Merkle roots condition   In the following server log snippet  merkle diff returns a non empty list of page conflicts and merkle sync fetches those keys  The FSM then transitions to the stream wals state  Immediately after this transition  the FSM transitions to the merkle diff again  and returns a   non matching guard   error    CodeBlockConfig hideClipboard      plaintext vault  INFO   perf sec core0 core  non matching guard  exiting vault  TRACE  perf sec core0 core  finished client WAL streaming vault  INFO   perf sec core0 replication  no matching WALs available vault  TRACE  perf sec core0 core  wal context done vault  DEBUG  perf sec core0 replication  transitioning state  state merkle diff vault  DEBUG  perf sec core0 replication  starting merkle diff vault  TRACE  perf sec core0 core  checking conflicting pages vault  TRACE  perf pri core0 core  serving conflicting pages vault  DEBUG  perf pri core0 replication index perf  creating merkle state snapshot  generation 3 vault  TRACE  perf sec core0 core  fetching subpage hashes vault  TRACE  perf pri core0 core  serving subpage hashes vault  DEBUG  perf pri core0 replication index perf  removing state snapshot from cache  generation 3 vault  DEBUG  perf sec core0 replication  transitioning state  state merkle sync vault  DEBUG  perf sec core0 replication  waiting for operations to complete before 
merkle sync vault  DEBUG  perf sec core0 replication  starting merkle sync  num conflict keys 4 vault  DEBUG  perf sec core0 replication  merkle sync debug info  local keys    remote keys    conflicting keys   logical 67bf7b33 734e f909 86e5 a7e69af0979f junk9    logical 67bf7b33 734e f909 86e5 a7e69af0979f junk7    logical 67bf7b33 734e f909 86e5 a7e69af0979f junk8    logical 67bf7b33 734e f909 86e5 a7e69af0979f junk6   vault  DEBUG  perf sec core0 replication  transitioning state  state stream wals vault  INFO   perf sec core0 replication  requesting WAL stream  guard 0c556858 vault  TRACE  perf sec core0 core  starting client WAL streaming vault  TRACE  perf sec core0 core  receiving WALs vault  TRACE  perf pri core0 core  starting serving WALs  clientID 6afbce30 67c5 bb15 6eda 001140d33275 vault  TRACE  perf pri core0 core  streaming from log shipper done  clientID 6afbce30 67c5 bb15 6eda 001140d33275 vault  TRACE  perf pri core0 core  internal wal stream stop channel fired  clientID 6afbce30 67c5 bb15 6eda 001140d33275 vault  TRACE  perf pri core0 core  stopping serving WALs  clientID 6afbce30 67c5 bb15 6eda 001140d33275 vault  INFO   perf sec core0 core  non matching guard  exiting vault  TRACE  perf sec core0 core  finished client WAL streaming vault  INFO   perf sec core0 replication  no matching WALs available vault  DEBUG  perf sec core0 replication  transitioning state  state merkle diff vault  DEBUG  perf sec core0 replication  starting merkle diff vault  TRACE  perf sec core0 core  wal context done vault  TRACE  perf sec core0 core  checking conflicting pages vault  TRACE  perf pri core0 core  serving conflicting pages        CodeBlockConfig      Use the merkle check endpoint  The following examples show how you can use curl to query the merkle check endpoint    Note   The merkle check endpoint is authenticated  You need a Vault token with the capabilities detailed in the  endpoint documentation   vault api docs system replication sys replication 
merkle check  to query the endpoint     Note       Check the primary cluster     shell session   curl  VAULT ADDR v1 sys replication merkle check      Example output    CodeBlockConfig hideClipboard      json      request id    d4b2ad1a 6e5f 7f9e edfe 558eb89a40e6      lease id          lease duration   0     renewable   false     data          merkle corruption report            corrupted root   false         corrupted tree map              1                corrupted index tuples map                  5                    corrupted   false                 subpages                     28                                                       corrupted subtree root   false             root hash    DyGc6rQTV9XgyNSff3zimhi3FJM               tree type    replicated                      2                corrupted index tuples map   null             corrupted subtree root   false             root hash    EXmRTdfYCZTm5i9wLef9RQqyLCw               tree type    local                            last corruption check epoch    2023 09 11T11 25 59 44956 07 00                      CodeBlockConfig   The  merkle corruption report  stanza provides information about Merkle tree corruption   When the composite tree or subtree root hashes are corrupted  Vault sets the  corrupted root  and  corrupted subtree root  field values to   true    Vault sets the field values to true when it detects corruption in the both root hashes of the composite tree  and the subtree   The  corrupted tree map  field identifies any corruption in the subtrees  including replicated and local subtrees  The replicated tree is indexed by number 1 in the map and the local tree is indexed by number  2   The  tree type  sub field also shows which tree contains a corrupted page  The replicated subtree stores the information that is replicated to either a disaster recovery or a performance replication secondary cluster    It contains replicated items like vault configuration  secrets engine and auth method 
configuration and KV secrets. The local subtree stores information relevant to the local cluster, such as local mounts, leases, and tokens.\n\nIn the event of corruption within a page or a subpage of a tree, the `corrupted_index_tuples_map` includes the page number along with a list of corrupted subpage numbers. If the page hash is corrupted, the `corrupted` field is set to `true`; otherwise, it is set to `false`.\n\n<CodeBlockConfig hideClipboard>\n\n```json\n{\n  \"request_id\": \"d4b2ad1a-6e5f-7f9e-edfe-558eb89a40e6\",\n  \"lease_id\": \"\",\n  \"lease_duration\": 0,\n  \"renewable\": false,\n  \"data\": {\n    \"merkle_corruption_report\": {\n      \"corrupted_root\": false,\n      \"corrupted_tree_map\": {\n        \"1\": {\n          \"corrupted_index_tuples_map\": {\n            \"5\": {\n              \"corrupted\": false,\n              \"subpages\": [\n                28\n              ]\n            }\n          },\n          \"corrupted_subtree_root\": false,\n          \"root_hash\": \"DyGc6rQTV9XgyNSff3zimhi3FJM=\",\n          \"tree_type\": \"replicated\"\n        },\n        \"2\": {\n          \"corrupted_index_tuples_map\": null,\n          \"corrupted_subtree_root\": false,\n          \"root_hash\": \"EXmRTdfYCZTm5i9wLef9RQqyLCw=\",\n          \"tree_type\": \"local\"\n        }\n      },\n      \"last_corruption_check_epoch\": \"2023-09-11T11:25:59.44956-07:00\"\n    }\n  }\n}\n```\n\n</CodeBlockConfig>\n\nIn the example output, the replicated tree is corrupted on subpage 28 of page 5 only. This means that the page hash, the replicated tree hash, and the composite tree hash are all correct.\n\n## Check a secondary cluster\n\nThe Merkle check endpoint prints information about the corruption status of the Merkle tree on a disaster recovery (DR) secondary cluster. You need a [DR operation token](\/vault\/tutorials\/enterprise\/disaster-recovery#dr-operation-token-strategy) to access this endpoint. Here is an example `curl` command that demonstrates querying the endpoint:\n\n```shell-session\n$ curl $VAULT_ADDR\/v1\/sys\/replication\/dr\/secondary\/merkle-check\n```\n\n
## CLI commands for diagnosing corruption\n\nYou need to check four layers for potential Merkle tree corruption: composite root, subtree roots, page hash, and subpage hash. You can use the following example Vault CLI commands in combination with the [`jq`](https:\/\/jqlang.github.io\/jq\/) tool to diagnose corruption.\n\n### Composite root corruption\n\nUse a Vault command like the following to check if the composite tree root is corrupted:\n\n```shell-session\n$ vault write sys\/replication\/merkle-check -format=json \\\n    | jq -r '.data.merkle_corruption_report.corrupted_root'\n```\n\nIf the response is `true`, the Merkle tree is corrupted at the composite root.\n\n### Subtree root corruption\n\nUse a Vault command like the following to check if a subtree root is corrupted:\n\n```shell-session\n$ vault write sys\/replication\/merkle-check -format=json \\\n    | jq -r '.data.merkle_corruption_report.corrupted_tree_map[] | select(.corrupted_subtree_root==true)'\n```\n\nIf the response is non-empty, that subtree is corrupted.\n\n### Page or subpage corruption\n\nUse a Vault command like the following to check if a page or subpage is corrupted:\n\n```shell-session\n$ vault write sys\/replication\/merkle-check -format=json \\\n    | jq -r '.data.merkle_corruption_report.corrupted_tree_map[] | select(.corrupted_index_tuples_map!=null)'\n```\n\nIf the response is a non-empty map, then at least one page or subpage is corrupted. The following is an example of a non-empty map in the command output:\n\n<CodeBlockConfig hideClipboard>\n\n```json\n{\n  \"corrupted_index_tuples_map\": {\n    \"23\": {\n      \"corrupted\": false,\n      \"subpages\": [\n        234\n      ]\n    }\n  },\n  \"corrupted_subtree_root\": false,\n  \"root_hash\": \"A5uW54VXDM4jUryDkxN8Vauk8kE=\",\n  \"tree_type\": \"replicated\"\n}\n```\n\n</CodeBlockConfig>\n\nIn this example, page number 23 is returned; however, the `corrupted` field is `false`, which means the page hash itself is not corrupted, while subpage 234 is listed as a corrupted subpage.\n\n
To locate a corrupted subpage, Vault needs to also note its parent page, as each page contains 256 subpages and these indexes are repeated for every page. It's possible that only the full page is corrupted without having any corrupted subpages. In that case, the `corrupted` field in the page map is `true`, and no subpages are listed.\n\n## Caveats and considerations\n\nWe generally recommend that you consult the Merkle check endpoint before reindexing to ensure the process will be useful, as reindexing can be time-consuming and lead to downtime."}
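The layer-by-layer checks above can also be scripted in one pass. The following is a minimal sketch (a hypothetical helper, not part of Vault) that walks a `merkle_corruption_report` with the structure shown in the example output and summarizes every corrupted layer:

```python
# Hypothetical helper (not part of Vault): walk a merkle_corruption_report
# with the structure shown in the example output above and summarize the four
# layers (composite root, subtree roots, page hashes, subpage hashes).

def summarize_corruption(report: dict) -> list[str]:
    """Return human-readable findings for every corrupted layer."""
    findings = []
    if report.get("corrupted_root"):
        findings.append("composite root is corrupted")
    for tree_id, tree in (report.get("corrupted_tree_map") or {}).items():
        tree_type = tree.get("tree_type", "unknown")
        if tree.get("corrupted_subtree_root"):
            findings.append(f"{tree_type} subtree root ({tree_id}) is corrupted")
        # corrupted_index_tuples_map may be null, as in the example output
        for page, info in (tree.get("corrupted_index_tuples_map") or {}).items():
            if info.get("corrupted"):
                findings.append(f"{tree_type} tree: page {page} hash is corrupted")
            for subpage in info.get("subpages") or []:
                findings.append(
                    f"{tree_type} tree: subpage {subpage} of page {page} is corrupted"
                )
    return findings

# The report from the example output above, reduced to the relevant fields.
report = {
    "corrupted_root": False,
    "corrupted_tree_map": {
        "1": {
            "corrupted_index_tuples_map": {"5": {"corrupted": False, "subpages": [28]}},
            "corrupted_subtree_root": False,
            "tree_type": "replicated",
        },
        "2": {
            "corrupted_index_tuples_map": None,
            "corrupted_subtree_root": False,
            "tree_type": "local",
        },
    },
}
print(summarize_corruption(report))
# → ['replicated tree: subpage 28 of page 5 is corrupted']
```

In practice you would parse the `-format=json` output of `vault write sys/replication/merkle-check` and pass the `.data.merkle_corruption_report` object to such a helper.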
{"questions":"vault replicated across clusters to support horizontally scaling and disaster Vault Enterprise has support for Replication allowing critical data to be recovery workloads page title Replication Vault Enterprise Vault Enterprise replication layout docs","answers":"---\nlayout: docs\npage_title: Replication - Vault Enterprise\ndescription: >-\n  Vault Enterprise has support for Replication, allowing critical data to be\n  replicated across clusters to support horizontally scaling and disaster\n  recovery workloads.\n---\n\n# Vault Enterprise replication\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\n## Overview\n\nMany organizations have infrastructure that spans multiple datacenters. Vault\nprovides the critical services of identity management, secrets storage, and\npolicy management. This functionality is expected to be highly available and\nto scale as the number of clients and their functional needs increase; at the\nsame time, operators would like to ensure that a common set of policies are\nenforced globally, and a consistent set of secrets and keys are exposed to\napplications that need to interoperate.\n\nVault replication addresses both of these needs in providing consistency,\nscalability, and highly-available disaster recovery.\n\n<Note title=\"Storage backend requirement\">\n\nUsing replication requires a storage backend that supports transactional\nupdates, such as [Integrated Storage](\/vault\/docs\/concepts\/integrated-storage)\nor Consul.\n\n<\/Note>\n\n## Architecture\n\nThe core unit of Vault replication is a **cluster**, which is comprised of a\ncollection of Vault nodes (an active and its corresponding HA nodes). Multiple Vault\nclusters communicate in a one-to-many near real-time flow.\n\nReplication operates on a leader\/follower model, wherein a leader cluster (known as a\n**primary**) is linked to a series of follower **secondary** clusters. 
The primary\ncluster acts as the system of record and asynchronously replicates most Vault data.\n\nAll communication between primaries and secondaries is end-to-end encrypted\nwith mutually-authenticated TLS sessions, set up via replication tokens that are\nexchanged during bootstrapping.\n\n## Replicated data\n\nWhat data is replicated between the primary and secondary depends on the type of\nreplication that is configured between the primary and secondary. These types\nof relationships are either **disaster recovery** or\n**performance replication** relationships.\n\n![](\/img\/replication-overview.png)\n\nThe following table shows a capability comparison between Disaster Recovery\nand Performance Replication.\n\n| Capability | Disaster Recovery | Performance Replication |\n| --- | --- | --- |\n| Mirrors the configuration of a primary cluster | Yes | Yes |\n| Mirrors the configuration of a primary cluster\u2019s backends (i.e., auth methods, secrets engines, audit devices, etc.) | Yes | Yes |\n| Mirrors the tokens and leases for applications and users interacting with the primary cluster | Yes | No. Secondaries keep track of their own tokens and leases. When the secondary is promoted, applications must reauthenticate and obtain new leases from the newly-promoted primary. |\n| Allows the secondary cluster to handle client requests | No | Yes |\n\nEverything written to storage is classified into one of three categories:\n* `replicated` (or \"shared\"): all downstream clusters receive it\n* `local`: only downstream disaster recovery clusters receive it\n* `ignored`: not replicated to downstream clusters at all\n\nWhen mounting a secret engine or auth method, you can choose whether to make it a\nlocal mount or a shared mount.\n\nShared mounts (the default) usually replicate all their data to performance secondaries, but\nthey can choose to designate specific storage paths as local. For example, PKI mounts\nstore certificates locally. If you query the roles on a shared PKI mount, you'll see\nthe same result for that mount when you send the query to either the performance primary or\nsecondary, but if you list stored certs, you'll see different values.\n\nLocal mounts replicate their data only to disaster recovery secondaries. A local mount\ncreated on a performance primary isn't visible at all to its performance secondaries.\nLocal mounts can also be created on performance secondaries, in which case they aren't visible\nto the performance primary.\n\nThere exist other storage entries that aren't mount-specific. 
For example, the Integrated\nStorage Autopilot configuration is an `ignored` storage entry, which allows disaster recovery\nsecondaries to have a different configuration than their primary. Tokens and leases are written to\n`local` storage entries.\n\n## Performance replication\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nIn Performance Replication, secondaries keep track of their own tokens and leases\nbut share the underlying configuration, policies, and supporting secrets (KV values,\nencryption keys for `transit`, etc).\n\nIf a user action would modify underlying shared state, the secondary forwards the request\nto the primary to be handled; this is transparent to the client. In practice, most\nhigh-volume workloads (reads in the `kv` backend, encryption\/decryption operations\nin `transit`, etc.) can be satisfied by the local secondary, allowing Vault to scale\nrelatively horizontally with the number of secondaries rather than vertically as\nin the past.\n\n<Note title=\"Performance replication does not include automated integrated storage snapshots\">\n\nVault does not replicate automated integrated storage snapshots as a part of\nperformance replication.\n\nYou must explicitly configure each of the primary and secondary performance\nclusters to create individual automated snapshot replicas.\n\n<\/Note>\n\n### Paths filter\n\nThe primary cluster's mount configuration gets replicated across its secondary\nclusters when you enable Performance Replication. In some cases, you may not\nwant all data to be replicated. For example, your primary cluster is in the EU\nregion, and you have a secondary cluster outside of the EU region. 
[General Data\nProtection Regulation (GDPR)](https:\/\/gdpr.eu\/) requires that personally\nidentifiable data not be physically transferred to locations outside the\nEuropean Union unless the region or country has data protection regulation\nof equal rigor to the EU's.\n\nTo comply with GDPR, use Vault's **paths filter** feature to satisfy data\nmovement and sovereignty regulations while maintaining performant access\nacross geographically distributed regions.\n\nYou can set filters based on the mount path of the secrets engines and\nnamespaces.\n\n![Performance Replication - primary](\/img\/vault-mount-filter-7.png)\n\nIn the above example, the `EU_GDPR_data\/` path and `office_FR` namespace will\nnot be replicated and remain only on the primary cluster.\n\nOn a similar note, if you want to prevent a secondary cluster's data from being\nreplicated, you can mark those secrets engines and\/or auth methods as **local**.\nLocal secrets engines and auth methods are not replicated or removed by\nreplication.\n\n**Example:** When you enable a secrets engine on a secondary cluster, use the\n`-local` flag:\n\n```shell-session\n$ vault secrets enable -local -path=us_west_data kv-v2\n```\n\n<Highlight title=\"Tutorials\">\n\nRefer to the _manage replicated mounts_ section in the [Set up performance\nreplication](\/vault\/tutorials\/enterprise\/performance-replication#manage-replicated-mounts)\ntutorial to learn how to specify the mounts to allow or deny data replication.\n\n<\/Highlight>\n\n## Disaster recovery (DR) replication\n\nIn disaster recovery (or DR) replication, secondaries share the same underlying configuration,\npolicy, and supporting secrets (KV values, encryption keys for `transit`, etc) infrastructure\nas the primary. 
They also share the same token and lease infrastructure as the primary, as\nthey are designed to allow for continuous operations, with applications originally\nconnected to the primary continuing to work after the election of the DR secondary.\n\nDR is designed to be a mechanism to protect against catastrophic failure of entire clusters.\nDR secondaries do not forward service read or write requests until they are elected and become a new primary.\n\n-> **Note**: Unlike with Performance Replication, local secret engines, auth methods, and audit devices are replicated to a DR secondary.\n\nFor more information on the capabilities of performance and disaster recovery replication, see the Vault Replication [API Documentation](\/vault\/api-docs\/system\/replication).\n\n## Primary and secondary cluster compatibility\n\n### Storage engines\n\nThere is no requirement that both clusters use the same storage engine.\n\n### Seals\n\nThere is no requirement that both clusters use the same seal type, but see\n[sealwrap](\/vault\/docs\/enterprise\/sealwrap#seal-wrap-and-replication) for the full\ndetails.\n\nAlso note that enabling replication will modify the secondary seal.\nIf the secondary uses an auto seal, its recovery configuration and keys\nwill be replaced; if it uses Shamir, its seal configuration and unseal\nkeys will be replaced. 
Here seal\/recovery configuration means the number of\nseal\/recovery key fragments and the required threshold of those\nfragments.\n\n| Primary Seal | Secondary Seal (before) | Secondary Seal (after)                 | Secondary Recovery Key (after) | Impact on Secondary of Enabling Replication                                 |\n|--------------|-------------------------|----------------------------------------|--------------------------------|-----------------------------------------------------------------------------|\n| Shamir       | Shamir                  | Primary's shamir config & unseal keys  | N\/A                            | Seal config and unseal keys replaced with primary's                         |\n| Shamir       | Auto                    | Unchanged                              | Receives primary seal          | Seal recovery config and keys replaced with primary's seal config and keys  |\n| Auto         | Auto                    | Unchanged                              | Receives primary recovery      | Seal recovery config and recovery keys replaced with primary's              |\n| Auto         | Shamir                  | Receives primary recovery              | N\/A                            | Seal config and keys replaced with primary's recovery seal config and keys  |\n\n<Note>\n\nVault clusters configured with\n[auto-unseal](\/vault\/docs\/concepts\/seal#auto-unseal) have recovery keys instead\nof unseal keys.\n\n<\/Note>\n\n### Vault versions\n\nVault changes are designed and tested to ensure that the\n[upgrade instructions](\/vault\/docs\/upgrading#replication-installations) are viable, i.e.\nthat a secondary can run a newer Vault version than its primary.\n\nThat said, we do not recommend running replicated Vault clusters with different\nversions any longer than necessary to perform the upgrade.\n\n## Internals\n\nDetails on the internal design of the replication feature can be found in the\n[replication 
internals](\/vault\/docs\/internals\/replication) document.\n\n## Security model\n\nVault is trusted all over the world to keep secrets safe. As such, we have applied\nthe same focus and attention to detail to our replication model.\n\n### Primary\/Secondary communication\n\nWhen a cluster is marked as the primary, it generates a self-signed CA\ncertificate. On request, and given a user-specified identifier, the primary\nuses this CA certificate to generate a private key and certificate and packages\nthese, along with some other information, into a replication bootstrapping\nbundle, a.k.a. a secondary activation token. The certificate is used to perform\nTLS mutual authentication between the primary and that secondary.\n\nThis CA certificate is never shared with secondaries, and no secondary ever has\naccess to any other secondary\u2019s certificate. In practice this means that\nrevoking a secondary\u2019s access to the primary does not allow it to continue\nreplication with any other machine; it also means that if a primary goes down,\nthere is full administrative control over which cluster becomes primary. An\nattacker cannot spoof a secondary into believing that a cluster the attacker\ncontrols is the new primary without also being able to administratively direct\nthe secondary to connect by giving it a new bootstrap package (which is an\nACL-protected call).\n\nVault makes use of Application Layer Protocol Negotiation on its cluster port.\nThis allows the same port to handle both request forwarding and replication,\neven while keeping the certificate root of trust and feature set different.\n\n### Secondary activation tokens\n\nA secondary activation token is an extremely sensitive item and as such is\nprotected via response wrapping. Experienced Vault users will note that the\nwrapping format for replication bootstrap packages is different from normal\nresponse-wrapping tokens: it is a signed JWT. 
This allows the replication token\nto carry the redirect address of the primary cluster as part of the token. In\nmost cases this means that simply providing the token to a new secondary is\nenough to activate replication, although this can also be overridden when the\ntoken is provided to the secondary.\n\nSecondary activation tokens should be treated like Vault root tokens. If\ndisclosed to a bad actor, that actor can gain access to all Vault data. The\ntoken should therefore be treated with utmost sensitivity. Like all\nresponse-wrapping tokens, once the token is used successfully (in this case, to\nactivate a secondary) it is useless, so it is only necessary to safeguard it\nfrom one machine to the next. As with root tokens, HashiCorp recommends that\nwhen a secondary activation token is live, there are multiple eyes on it from\ngeneration until it is used.\n\nOnce a secondary is activated, its cluster information is stored safely behind\nits encrypted barrier.\n\n## Mutual TLS and load balancers\n\nVault generates its own certificates for cluster members. After initial\nbootstrapping, all replication traffic uses the cluster port with these\nVault-generated certificates. Because of this, the cluster traffic can NOT be\nterminated at the cluster port at a load balancer level.\n\n## Tutorial\n\nRefer to the following tutorials for replication setup and best practices:\n\n- [Set up Performance Replication](\/vault\/tutorials\/enterprise\/performance-replication)\n- [Disaster Recovery Replication Setup](\/vault\/tutorials\/enterprise\/disaster-recovery)\n- [Monitoring Vault Replication](\/vault\/tutorials\/monitoring\/monitor-replication)\n\n## API\n\nThe Vault replication component has a full HTTP API. 
Refer to the\n[Vault Replication API](\/vault\/api-docs\/system\/replication) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title  Replication   Vault Enterprise description       Vault Enterprise has support for Replication  allowing critical data to be   replicated across clusters to support horizontally scaling and disaster   recovery workloads         Vault Enterprise replication   include  alerts enterprise and hcp mdx      Overview  Many organizations have infrastructure that spans multiple datacenters  Vault provides the critical services of identity management  secrets storage  and policy management  This functionality is expected to be highly available and to scale as the number of clients and their functional needs increase  at the same time  operators would like to ensure that a common set of policies are enforced globally  and a consistent set of secrets and keys are exposed to applications that need to interoperate   Vault replication addresses both of these needs in providing consistency  scalability  and highly available disaster recovery    Note title  Storage backend requirement    Using replication requires a storage backend that supports transactional updates  such as  Integrated Storage   vault docs concepts integrated storage  or Consul     Note      Architecture  The core unit of Vault replication is a   cluster    which is comprised of a collection of Vault nodes  an active and its corresponding HA nodes   Multiple Vault clusters communicate in a one to many near real time flow   Replication operates on a leader follower model  wherein a leader cluster  known as a   primary    is linked to a series of follower   secondary   clusters  The primary cluster acts as the system of record and asynchronously replicates most Vault data   All communication between primaries and secondaries is end to end encrypted with mutually authenticated TLS sessions  setup via replication tokens which are exchanged during bootstrapping 
     Replicated data  What data is replicated between the primary and secondary depends on the type of replication that is configured between the primary and secondary  These types of relationships are either   disaster recovery   or   performance replication   relationships        img replication overview png   The following table shows a capability comparison between Disaster Recovery and Performance Replication     Capability                                                                                                             Disaster Recovery   Performance Replication                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              Mirrors the configuration of a primary cluster                                                                         Yes                 Yes                                                                                                                                                                                    Mirrors the configuration of a primary cluster s backends  i e   auth methods  secrets engines  audit devices  etc     Yes                 Yes                                                                                                                                                                                    Mirrors the tokens and leases for applications and users interacting with the primary cluster                          Yes                 No  Secondaries keep track of their own tokens and leases  When the secondary is promoted  applications must reauthenticate and obtain new 
leases from the newly promoted primary      Allows the secondary cluster to handle client requests                                                                 No                  Yes                                               Everything written to storage is classified into one of three categories     replicated   or  shared    all downstream clusters receive it    local   only downstream disaster recovery clusters receive it    ignored   not replicated to downstream clusters at all  When mounting a secret engine or auth method  you can choose whether to make it a local mount or a shared mount   Shared mounts  the default  usually replicate all their data to performance secondaries  but they can choose to designate specific storage paths as local   For example  PKI mounts store certificates locally  If you query the roles on a shared PKI mount  you ll see the same result for that mount when you send the query to either the performance primary or secondary  but if you list stored certs  you ll see different values   Local mounts replicate their data only to disaster recovery secondaries   A local mount created on a performance primary isn t visible at all to its performance secondaries  Local mounts can also be created on performance secondaries  in which case they aren t visible to the performance primary   There exist other storage entries that aren t mount specific   For example  the Integrated Storage Autopilot configuration is an  ignored  storage entry  which allows for disaster recovery secondaries to have a different configuration than their primary   Tokens and leases are written to  local  storage entries      Performance replication   include  alerts enterprise and hcp mdx   In Performance Replication  secondaries keep track of their own tokens and leases but share the underlying configuration  policies  and supporting secrets  KV values  encryption keys for  transit   etc    If a user action would modify underlying shared state  the secondary 
forwards the request to the primary to be handled  this is transparent to the client  In practice  most high volume workloads  reads in the  kv  backend  encryption decryption operations in  transit   etc   can be satisfied by the local secondary  allowing Vault to scale relatively horizontally with the number of secondaries rather than vertically as in the past    Note title  Performance replication does not include automated integrated storage snapshots    Vault does not replicate automated integrated storage snapshots as a part of performance replication   You must explicitly configure each of the primary and secondary performance clusters to create individual automated snapshots replicas     Note       Paths filter  The primary cluster s mount configuration gets replicated across its secondary clusters when you enable Performance Replication  In some cases  you may not want all data to be replicated  For example  your primary cluster is in the EU region  and you have a secondary cluster outside of the EU region   General Data Protection Regulation  GDPR   https   gdpr eu   requires that personally identifiable data not be physically transferred to locations outside the European Union unless the region or country has an equal rigor of data protection regulation as the EU   To comply with GDPR  leverage Vault s   paths filter    feature to abide by data movements and sovereignty regulations while ensuring performance access across geographically distributed regions   You can set filters based on the mount path of the secrets engines and namespaces     Performance Replication   primary   img vault mount filter 7 png   In the above example  the  EU GDPR data   path and  office FR  namespace will not be replicated and remain only on the primary cluster   On a similar note  if you want to avoid secondary cluster s data to be replicated  you can mark those secrets engine and or auth methods   local    Local secrets engines and auth methods are not replicated or 
removed by replication     Example    When you enable a secrets engine on a secondary cluster  use the   local  flag      shell session   vault secrets enable  local  path us west data kv v2       Highlight title  Tutorials    Refer to the  manage replicated mounts  section in the  Set up performance replication   vault tutorials enterprise performance replication manage replicated mounts  tutorial to learn how to specify the mounts to allow or deny data replication     Highlight      Disaster recovery  DR  replication  In disaster recovery  or DR  replication  secondaries share the same underlying configuration  policy  and supporting secrets  KV values  encryption keys for  transit   etc  infrastructure as the primary  They also share the same token and lease infrastructure as the primary  as they are designed to allow for continuous operations with applications connecting to the original primary on the election of the DR secondary   DR is designed to be a mechanism to protect against catastrophic failure of entire clusters  They do not forward service read or write requests until they are elected and become a new primary        Note    Unlike with Performance Replication  local secret engines  auth methods and audit devices are replicated to a DR secondary    For more information on the capabilities of performance and disaster recovery replication  see the Vault Replication  API Documentation   vault api docs system replication       Primary and secondary cluster compatibility      Storage engines  There is no requirement that both clusters use the same storage engine       Seals  There is no requirement that both clusters use the same seal type  but see  sealwrap   vault docs enterprise sealwrap seal wrap and replication  for the full details   Also note that enabling replication will modify the secondary seal  If the secondary uses an auto seal  its recovery configuration and keys will be replaced  if it uses shamir  its seal configuration and unseal keys will 
be replaced  Here seal recovery configuration means the number of seal recovery key fragments and the required threshold of those fragments     Primary Seal   Secondary Seal  before    Secondary Seal  after                    Secondary Recovery Key  after    Impact on Secondary of Enabling Replication                                                                                                                                                                                                                                        Shamir         Shamir                    Primary s shamir config   unseal keys    N A                              Seal config and unseal keys replaced with primary s                             Shamir         Auto                      Unchanged                                Receives primary seal            Seal recovery config and keys replaced with primary s seal config and keys      Auto           Auto                      Unchanged                                Receives primary recovery        Seal recovery config and recovery keys replaced with primary s                  Auto           Shamir                    Receives primary recovery                N A                              Seal config and keys replaced with primary s recovery seal config and keys      Note   Vault clusters configured with  auto unseal   vault docs concepts seal auto unseal  have recovery keys instead of unseal keys     Note       Vault versions  Vault changes are designed and tested to ensure that the  upgrade instructions   vault docs upgrading replication installations  are viable  i e  that a secondary can run a newer Vault version than its primary   That said  we do not recommend running replicated Vault clusters with different versions any longer than necessary to perform the upgrade      Internals  Details on the internal design of the replication feature can be found in the  replication internals   vault docs internals replication  document      
## Security model

Vault is trusted all over the world to keep secrets safe. As such, we have put extreme focus to detail to our replication model as well.

### Primary/secondary communication

When a cluster is marked as the primary, it generates a self-signed CA certificate. On request, and given a user-specified identifier, the primary uses this CA certificate to generate a private key and certificate and packages these, along with some other information, into a replication bootstrapping bundle, a.k.a. a secondary activation token. The certificate is used to perform TLS mutual authentication between the primary and that secondary.

This CA certificate is never shared with secondaries, and no secondary ever has access to any other secondary's certificate. In practice this means that revoking a secondary's access to the primary does not allow it to continue replication with any other machine; it also means that if a primary goes down, there is full administrative control over which cluster becomes primary. An attacker cannot spoof a secondary into believing that a cluster the attacker controls is the new primary without also being able to administratively direct the secondary to connect by giving it a new bootstrap package, which is an ACL-protected call.

Vault makes use of Application Layer Protocol Negotiation on its cluster port. This allows the same port to handle both request forwarding and replication, even while keeping the certificate root of trust and feature set different.

### Secondary activation tokens

A secondary activation token is an extremely sensitive item and as such is protected via response wrapping. Experienced Vault users will note that the wrapping format for replication bootstrap packages is different from normal response-wrapping tokens: it is a signed JWT. This allows the replication token to carry the redirect address of the primary cluster as part of the token. In most cases this means that simply providing the token to a new secondary is enough to activate replication, although this can also be overridden when the token is provided to the secondary.

Secondary activation tokens should be treated like Vault root tokens. If disclosed to a bad actor, that actor can gain access to all Vault data; they should therefore be handled with the utmost sensitivity. Like all response-wrapping tokens, once the token is used successfully (in this case, to activate a secondary) it is useless, so it is only necessary to safeguard it from one machine to the next. As with root tokens, HashiCorp recommends that when a secondary activation token is live, there are multiple eyes on it from generation until it is used.

Once a secondary is activated, its cluster information is stored safely behind its encrypted barrier.

### Mutual TLS and load balancers

Vault generates its own certificates for cluster members. After initial bootstrapping, all replication traffic uses the cluster port with these Vault-generated certificates. Because of this, cluster traffic can NOT be TLS-terminated at the cluster port by a load balancer.

## Tutorial

Refer to the following tutorials for replication setup and best practices:

- [Set up Performance Replication](/vault/tutorials/enterprise/performance-replication)
- [Disaster Recovery Replication Setup](/vault/tutorials/enterprise/disaster-recovery)
- [Monitoring Vault Replication](/vault/tutorials/monitoring/monitor-replication)

## API

The Vault replication component has a full HTTP API. Refer to the [Vault Replication API](/vault/api-docs/system/replication) for more details."}
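The security model above notes that a secondary activation token is a signed JWT whose payload carries the redirect address of the primary cluster. The sketch below illustrates, in plain shell, why that design works: a JWT's base64url payload is readable by any holder of the token (so the secondary can learn where to connect), while the signature — not computed or verified here — is what prevents forging or altering the address. The header, payload fields, cluster identifier `dc2`, and the address are all made up for illustration; this is not Vault's actual token format.

```shell
# Illustration only: the fields below are HYPOTHETICAL, not Vault's real
# activation-token layout.

# base64url-encode stdin: base64, strip newlines/padding, URL-safe alphabet.
b64url() { base64 | tr -d '\n=' | tr '+/' '-_'; }

# A JWT is three dot-separated parts: header.payload.signature.
header='{"alg":"ES512","typ":"JWT"}'
payload='{"id":"dc2","addr":"https://primary.example.com:8201"}'
token="$(printf '%s' "$header" | b64url).$(printf '%s' "$payload" | b64url).fake-signature"

# Any holder can READ the embedded redirect address without the signing key:
part="$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')"
while [ $(( ${#part} % 4 )) -ne 0 ]; do part="${part}="; done  # restore padding
decoded="$(printf '%s' "$part" | base64 -d)"
echo "$decoded"
```

In a real deployment the token is produced by the primary's ACL-protected replication API and, like any response-wrapped token, becomes useless after its single successful use by the secondary.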
{"questions":"vault This requires the Enterprise ADP KM license The Vault PKCS 11 Provider allows Vault KMIP Secrets Engine to be used via PKCS 11 calls The provider supports a subset of key generation encryption decryption and key storage operations layout docs page title PKCS 11 Provider Vault Enterprise PKCS 11 provider","answers":"---\nlayout: docs\npage_title: PKCS#11 Provider - Vault Enterprise\ndescription: |-\n  The Vault PKCS#11 Provider allows Vault KMIP Secrets Engine to be used via PKCS#11 calls.\n  The provider supports a subset of key generation, encryption, decryption and key storage operations.\n  This requires the Enterprise ADP-KM license.\n---\n\n# PKCS#11 provider\n\n@include 'alerts\/enterprise-only.mdx'\n\nThe PKCS#11 provider is part of the [KMIP Secrets Engine](\/vault\/docs\/secrets\/kmip), which requires [Vault Enterprise](https:\/\/www.hashicorp.com\/products\/vault\/pricing)\nwith the Advanced Data Protection (ADP) module.\n\n[PKCS#11](http:\/\/docs.oasis-open.org\/pkcs11\/pkcs11-base\/v2.40\/os\/pkcs11-base-v2.40-os.html)\nis an open standard C API that provides a means to access cryptographic capabilities on a device.\nFor example, it is often used to access a Hardware Security Module (HSM) (like a [Yubikey](https:\/\/www.yubico.com\/)) from a local program (such as [GPG](https:\/\/gnupg.org\/)).\n\nVault provides a PKCS#11 library (or provider) so that Vault can be used as an SSM (Software Security Module).\nThis allows a user to treat Vault like any other PKCS#11 device to manage keys, objects, and perform encryption and decryption in Vault using PKCS#11 calls.\nThe PKCS#11 library connects to Vault's [KMIP Secrets Engine](\/vault\/docs\/secrets\/kmip) to provide cryptographic operations and object storage.\n\n## Platform support\n\nThis library works with Vault Enterprise 1.11+ licensed with the advanced data protection module,\nwhich provides the KMIP Secrets Engine.\n\n| Operating System | Architecture | Distribution      | glibc   |\n| 
---------------- | ------------ | ----------------- | ------- |\n| Linux            | x86-64       | RHEL 7 compatible | 2.17    |\n| Linux            | x86-64       | RHEL 8 compatible | 2.28    |\n| Linux            | x86-64       | RHEL 9 compatible | 2.34    |\n| macOS            | x86-64       | &mdash;           | &mdash; |\n| macOS            | arm64        | &mdash;           | &mdash; |\n\n_Note:_ `vault-pkcs11-provider` runs on _any_ glibc-based Linux distribution. The versions above are given in RHEL-compatible GLIBC versions; for your\ndistro's glibc version, choose the `vault-pkcs11-provider` built against the same or older version as what your distro provides.\n\nThe provider comes in the form of a shared C library, `libvault-pkcs11.so` (for Linux) or `libvault-pkcs11.dylib` (for macOS).\nIt can be downloaded from [releases.hashicorp.com](https:\/\/releases.hashicorp.com\/vault-pkcs11-provider).\n\n## Quick start\n\n1. To use the provider, you will need access to a Vault Enterprise instance with the KMIP Secrets Engine.\n\n    For example, you can start one locally (if you have a license in the `VAULT_LICENSE` environment variable) with:\n\n    ```sh\n    docker pull hashicorp\/vault-enterprise &&\n    docker run --name vault \\\n      -p 5696:5696 \\\n      -p 8200:8200 \\\n      --cap-add=IPC_LOCK \\\n      -e VAULT_LICENSE=$(printenv VAULT_LICENSE) \\\n      -e VAULT_ADDR=http:\/\/127.0.0.1:8200 \\\n      -e VAULT_TOKEN=root \\\n      hashicorp\/vault-enterprise \\\n      server -dev -dev-root-token-id root -dev-listen-address 0.0.0.0:8200\n    ```\n\n1. Configure the [KMIP Secrets Engine](\/vault\/docs\/secrets\/kmip) and a KMIP *scope*. 
The scope is used to hold keys and objects.\n\n    ~> **Note**: These commands will output the credentials in plaintext.\n\n    ```sh\n    vault secrets enable kmip\n    vault write kmip\/config listen_addrs=0.0.0.0:5696\n    vault write -f kmip\/scope\/my-service\n    vault write kmip\/scope\/my-service\/role\/admin operation_all=true\n    vault write -f -format=json kmip\/scope\/my-service\/role\/admin\/credential\/generate | tee kmip.json\n    ```\n\n    ~> **Important**: When configuring KMIP in production, you will probably need to set the\n    `server_hostnames` and `server_ips` [configuration parameters](\/vault\/api-docs\/secret\/kmip#parameters),\n    otherwise the TLS connection to the KMIP Secrets Engine will fail due to certificate validation errors.\n\n    This last line will generate a JSON file with the certificate, key, and CA certificate chain to connect\n    to the KMIP server. You'll need to save these to files so that the PKCS#11 provider can use them.\n\n    ```sh\n    jq --raw-output --exit-status '.data.ca_chain[]' kmip.json > ca.pem\n    jq --raw-output --exit-status '.data.certificate' kmip.json > cert.pem\n    ```\n    The certificate file from the KMIP Secrets Engine also contains the key.\n\n1. Create a configuration file called `vault-pkcs11.hcl`:\n\n    ```hcl\n    slot {\n      server = \"127.0.0.1:5696\"\n      tls_cert_path = \"cert.pem\"\n      ca_path = \"ca.pem\"\n      scope = \"my-service\"\n    }\n    ```\n\n    See [below](#configuration) for all available parameters.\n\n1. Copy the certificates from the KMIP credentials into the files specified in the configuration file (e.g., `cert.pem` and `ca.pem`).\n\n1. 
You should now be able to use the `libvault-pkcs11.so` (or `.dylib`) library to access the KMIP Secrets Engine in Vault using any PKCS#11-compatible tool, like OpenSC's `pkcs11-tool`, e.g.:\n\n    ```sh\n    $ VAULT_LOG_FILE=\/dev\/null pkcs11-tool --module .\/libvault-pkcs11.so -L\n    Available slots:\n    Slot 0 (0x0): Vault slot 0\n    token label        : Token 0\n    token manufacturer : HashiCorp\n    token model        : Vault Enterprise\n    token flags        : token initialized, PIN initialized, other flags=0x60\n    hardware version   : 1.12\n    firmware version   : 1.12\n    serial num         : 1234\n    pin min\/max        : 0\/255\n\n    $ VAULT_LOG_FILE=\/dev\/null pkcs11-tool --module .\/libvault-pkcs11.so --keygen -a abc123 --key-type AES:32 \\\n    --extractable --allow-sw 2>\/dev\/null\n    Key generated:\n    Secret Key Object; AES length 32\n    VALUE:\n    label:      abc123\n    Usage:      encrypt, decrypt, wrap, unwrap\n    Access:     none\n    ```\n\n    The `VAULT_LOG_FILE=\/dev\/null` setting is to prevent the Vault PKCS#11 driver logs from appearing in stdout (the default if no file is specified).\n    In production, it's good to set `VAULT_LOG_FILE` to point to somewhere more permanent, like `\/var\/log\/vault.log`.\n\n## Configuration\n\nThe PKCS#11 Provider can be configured through an HCL file and through environment variables.\n\nThe HCL file contains directives to map PKCS#11 device\n[slots](http:\/\/docs.oasis-open.org\/pkcs11\/pkcs11-base\/v2.40\/os\/pkcs11-base-v2.40-os.html#_Toc416959678) (logical devices)\nto Vault instances and KMIP scopes and configures how the library will authenticate to KMIP (with a client TLS certificate).\nThe PKCS#11 library will look for this file in `vault-pkcs11.hcl` and `\/etc\/vault-pkcs11.hcl` by default, or you can override this by setting the `VAULT_KMIP_CONFIG` environment variable.\n\nFor example,\n\n```hcl\nslot {\n    server = \"127.0.0.1:5696\"\n    tls_cert_path = \"cert.pem\"\n    
ca_path = \"ca.pem\"\n    scope = \"my-service\"\n}\n```\n\nThe `slot` block configures the first PKCS#11 slot to point to Vault.\nMost programs will use only one slot.\n\n- `server` (required): the Vault server's IP or DNS name and port number (5696 is the default).\n- `tls_cert_path` (required): the location of the client TLS certificate used to authenticate to the KMIP engine.\n- `tls_key_path` (optional, defaults to the value of `tls_cert_path`): the location of the encrypted or unencrypted TLS key used to authenticate to the KMIP engine.\n- `ca_path` (required): the location of the CA bundle that will be used to verify the server's certificate.\n- `scope` (required): the [KMIP scope](\/vault\/docs\/secrets\/kmip#scopes-and-roles) to authenticate against and where the TDE master keys and associated metadata will be stored.\n- `cache` (optional, default `true`): whether the provider uses a cache to improve the performance of `C_GetAttributeValue` (KMIP: `GetAttributes`) calls.\n- `emulate_hardware` (optional, default `false`): specifies if the provider should report that it is connected to a hardware device.\n\nThe default location the PKCS#11 library will look for the configuration file is the current directory (`vault-pkcs11.hcl`) and `\/etc\/vault-pkcs11.hcl`, but you can override this by setting the `VAULT_KMIP_CONFIG` environment variable to any file.\n\nEnvironment variables can also be used to configure these parameters and more.\n\n- `VAULT_KMIP_CONFIG`: location of the HCL configuration file. 
By default, the provider will check `.\/vault-pkcs11.hcl` and `\/etc\/vault-pkcs11.hcl`.\n- `VAULT_KMIP_CERT_FILE`: location of the TLS certificate used for authentication to the KMIP engine.\n- `VAULT_KMIP_KEY_FILE`: location of the TLS key used for authentication to the KMIP engine.\n- `VAULT_KMIP_KEY_PASSWORD`: password for the TLS key file, if it is encrypted.\n- `VAULT_KMIP_CA_FILE`: location of the TLS CA bundle used to authenticate the connection to the KMIP engine.\n- `VAULT_KMIP_SERVER`: address and port of the KMIP engine to use for encryption and storage.\n- `VAULT_KMIP_SCOPE`: KMIP scope to use for encryption and storage.\n- `VAULT_KMIP_CACHE`: whether or not to cache `C_GetAttributeValue` (KMIP: `GetAttributes`) calls.\n- `VAULT_LOG_LEVEL`: the log level that the provider will use. Defaults to `WARN`. Valid values include `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, and `OFF`.\n- `VAULT_LOG_FILE`: the location of the file the provider will use for logging. Defaults to standard out.\n- `VAULT_EMULATE_HARDWARE`: whether or not the provider will report that it is backed by a hardware device.\n\n## Encrypted TLS key support\n\nThe TLS key returned by the KMIP engine is unencrypted by default.\nHowever, the PKCS#11 provider does support (limited) encryption options for the key using [RFC 1423](https:\/\/www.rfc-editor.org\/rfc\/rfc1423).\nWe would only recommend using AES-256-CBC out of the available algorithms.\n\nThe keys from KMIP should be ECDSA keys, and can be encrypted with a password with OpenSSL, e.g.,:\n\n```sh\nopenssl ec -in cert.key -out encrypted.key -aes-256-cbc\n```\n\nThe PKCS#11 provider will need access to the password to decrypt the TLS key.\nThe password can be supplied to the provider in two ways:\n\n- The `VAULT_KMIP_KEY_PASSWORD` environment variable, or\n- the \"PIN\" parameter to the `C_Login` PKCS#11 function, which will be used to try to decrypt the encrypted TLS key.\n\nNote that only a single password can be supplied 
via the `VAULT_KMIP_KEY_PASSWORD`, so if multiple slots in the HCL file use encrypted TLS keys, they will need to be encrypted with the same password, or use the `C_Login` method to specify the password.\n\n## Error handling\n\nIf an error occurs, the first place to check will be the `VAULT_LOG_FILE` for any relevant error messages.\n\nIf the PKCS#11 provider returns an error code of `0x30` (`CKR_DEVICE_ERROR`), then an additional device error code may\nbe available from the `C_GetSessionInfo` call.\nHere are the known device error codes the provider will return:\n\n| Code | Meaning                                                          |\n| ---- | ---------------------------------------------------------------- |\n| 400  | Invalid input was provided in the configuration or PKCS#11 call. |\n| 401  | Invalid credentials were provided.                               |\n| 404  | The object, attribute, or key was not found.                     |\n| 600  | An unknown I\/O error occurred.                                   |\n| 601  | A KMIP engine error occurred.                                    
|\n\n## Capabilities\n\nThe Vault PKCS#11 provider implements the following PKCS#11 provider profiles:\n\n- [Baseline](http:\/\/docs.oasis-open.org\/pkcs11\/pkcs11-profiles\/v2.40\/os\/pkcs11-profiles-v2.40-os.html#_Toc416960548)\n- [Extended](http:\/\/docs.oasis-open.org\/pkcs11\/pkcs11-profiles\/v2.40\/os\/pkcs11-profiles-v2.40-os.html#_Toc416960554)\n\nThe following key generation mechanisms are currently supported:\n\n| Name               | Mechanism Number |Provider Version|Vault Version|\n| ------------------ | ---------------- |----------------|-------------|\n| RSA-PKCS           | `0x0000`         | 0.2.0          | 1.13        |\n| AES key generation | `0x1080`         | 0.1.0          | 1.12        |\n\n\nThe following encryption mechanisms are currently supported:\n\n| Name               | Mechanism Number |Provider Version|Vault Version|\n| ------------------ | ---------------- |----------------|-------------|\n| RSA-PKCS           | `0x0001`         | 0.2.0          | 1.13        |\n| RSA-PKCS-OAEP      | `0x0009`         | 0.2.0          | 1.13        |\n| AES-ECB            | `0x1081`         | 0.2.0          | 1.13        |\n| AES-CBC            | `0x1082`         | 0.1.0          | 1.12        |\n| AES-CBC Pad        | `0x1085`         | 0.1.0          | 1.12        |\n| AES-CTR            | `0x1086`         | 0.1.0          | 1.12        |\n| AES-GCM            | `0x1087`         | 0.1.0          | 1.12        |\n| AES-OFB            | `0x2104`         | 0.2.0          | 1.13        |\n| AES-CFB128         | `0x2107`         | 0.2.0          | 1.13        |\n\nThe following signing mechanisms are currently supported:\n\n| Name               | Mechanism Number |Provider Version|Vault Version|\n| ------------------ | ---------------- |----------------|-------------|\n| RSA-PKCS           | `0x0001`         | 0.2.0          | 1.13        |\n| SHA256-RSA-PKCS    | `0x0040`         | 0.2.0          | 1.13        |\n| SHA384-RSA-PKCS    | `0x0041`       
  | 0.2.0          | 1.13        |\n| SHA512-RSA-PKCS    | `0x0042`         | 0.2.0          | 1.13        |\n| SHA224-RSA-PKCS    | `0x0046`         | 0.2.0          | 1.13        |\n| SHA512-224-HMAC    | `0x0049`         | 0.2.0          | 1.13        |\n| SHA512-256-HMAC    | `0x004D`         | 0.2.0          | 1.13        |\n| SHA256-HMAC        | `0x0251`         | 0.2.0          | 1.13        |\n| SHA224-HMAC        | `0x0256`         | 0.2.0          | 1.13        |\n| SHA384-HMAC        | `0x0261`         | 0.2.0          | 1.13        |\n| SHA512-HMAC        | `0x0271`         | 0.2.0          | 1.13        |\n\n\n<Tabs>\n<Tab heading=\"Supported PKCS#11 Functions (version 0.2)\">\nHere is the list of supported and unsupported PKCS#11 functions:\n\n- Encryption and decryption\n  - [X]  `C_EncryptInit`\n  - [X]  `C_Encrypt`\n  - [X]  `C_EncryptUpdate`\n  - [X]  `C_EncryptFinal`\n  - [X]  `C_DecryptInit`\n  - [X]  `C_Decrypt`\n  - [X]  `C_DecryptUpdate`\n  - [X]  `C_DecryptFinal`\n- Key management\n  - [X]  `C_GenerateKey`\n  - [X]  `C_GenerateKeyPair`\n  - [ ]  `C_WrapKey`\n  - [ ]  `C_UnwrapKey`\n  - [ ]  `C_DeriveKey`\n- Objects\n  - [X]  `C_CreateObject`\n  - [X]  `C_DestroyObject`\n  - [X]  `C_GetAttributeValue`\n  - [X]  `C_FindObjectsInit`\n  - [X]  `C_FindObjects`\n  - [X]  `C_FindObjectsFinal`\n  - [ ]  `C_SetAttributeValue`\n  - [ ]  `C_CopyObject`\n  - [ ]  `C_GetObjectSize`\n- Management\n  - [X]  `C_Initialize`\n  - [X]  `C_Finalize`\n  - [X]  `C_Login` (PIN is used as a passphrase for the TLS encryption key, if provided)\n  - [X]  `C_Logout`\n  - [X]  `C_GetInfo`\n  - [X]  `C_GetSlotList`\n  - [X]  `C_GetSlotInfo`\n  - [X]  `C_GetTokenInfo`\n  - [X]  `C_GetMechanismList`\n  - [X]  `C_GetMechanismInfo`\n  - [X]  `C_OpenSession`\n  - [X]  `C_CloseSession`\n  - [X]  `C_CloseAllSessions`\n  - [X]  `C_GetSessionInfo`\n  - [ ]  `C_InitToken`\n  - [ ]  `C_InitPIN`\n  - [ ]  `C_SetPIN`\n  - [ ]  `C_GetOperationState`\n  - [ ]  `C_SetOperationState`\n  
- [ ]  `C_GetFunctionStatus`\n  - [ ]  `C_CancelFunction`\n  - [ ]  `C_WaitForSlotEvent`\n- Signing\n  - [X]  `C_SignInit`\n  - [X]  `C_Sign`\n  - [X]  `C_SignUpdate`\n  - [X]  `C_SignFinal`\n  - [ ]  `C_SignRecoverInit`\n  - [ ]  `C_SignRecover`\n  - [X]  `C_VerifyInit`\n  - [X]  `C_Verify`\n  - [X]  `C_VerifyUpdate`\n  - [X]  `C_VerifyFinal`\n  - [ ]  `C_VerifyRecoverInit`\n  - [ ]  `C_VerifyRecover`\n- Digests\n  - [ ]  `C_DigestInit`\n  - [ ]  `C_Digest`\n  - [ ]  `C_DigestUpdate`\n  - [ ]  `C_DigestKey`\n  - [ ]  `C_DigestFinal`\n  - [ ]  `C_DigestEncryptUpdate`\n  - [ ]  `C_DecryptDigestUpdate`\n  - [ ]  `C_SignEncryptUpdate`\n  - [ ]  `C_DecryptVerifyUpdate`\n- Random Number Generation (see note below)\n  - [X]  `C_SeedRandom`\n  - [X]  `C_GenerateRandom`\n\n<\/Tab>\n<Tab heading=\"Supported PKCS#11 Functions (version 0.1)\">\nHere is the list of supported and unsupported PKCS#11 functions:\n\n- Encryption and decryption\n  - [X]  `C_EncryptInit`\n  - [X]  `C_Encrypt`\n  - [ ]  `C_EncryptUpdate`\n  - [ ]  `C_EncryptFinal`\n  - [X]  `C_DecryptInit`\n  - [X]  `C_Decrypt`\n  - [ ]  `C_DecryptUpdate`\n  - [ ]  `C_DecryptFinal`\n- Key management\n  - [X]  `C_GenerateKey`\n  - [ ]  `C_GenerateKeyPair`\n  - [ ]  `C_WrapKey`\n  - [ ]  `C_UnwrapKey`\n  - [ ]  `C_DeriveKey`\n- Objects\n  - [X]  `C_CreateObject`\n  - [X]  `C_DestroyObject`\n  - [X]  `C_GetAttributeValue`\n  - [X]  `C_FindObjectsInit`\n  - [X]  `C_FindObjects`\n  - [X]  `C_FindObjectsFinal`\n  - [ ]  `C_SetAttributeValue`\n  - [ ]  `C_CopyObject`\n  - [ ]  `C_GetObjectSize`\n- Management\n  - [X]  `C_Initialize`\n  - [X]  `C_Finalize`\n  - [X]  `C_Login` (PIN is used as a passphrase for the TLS encryption key, if provided)\n  - [X]  `C_Logout`\n  - [X]  `C_GetInfo`\n  - [X]  `C_GetSlotList`\n  - [X]  `C_GetSlotInfo`\n  - [X]  `C_GetTokenInfo`\n  - [X]  `C_GetMechanismList`\n  - [X]  `C_GetMechanismInfo`\n  - [X]  `C_OpenSession`\n  - [X]  `C_CloseSession`\n  - [X]  `C_CloseAllSessions`\n  - [X]  
`C_GetSessionInfo`\n  - [ ]  `C_InitToken`\n  - [ ]  `C_InitPIN`\n  - [ ]  `C_SetPIN`\n  - [ ]  `C_GetOperationState`\n  - [ ]  `C_SetOperationState`\n  - [ ]  `C_GetFunctionStatus`\n  - [ ]  `C_CancelFunction`\n  - [ ]  `C_WaitForSlotEvent`\n- Signing\n  - [ ]  `C_SignInit`\n  - [ ]  `C_Sign`\n  - [ ]  `C_SignUpdate`\n  - [ ]  `C_SignFinal`\n  - [ ]  `C_SignRecoverInit`\n  - [ ]  `C_SignRecover`\n  - [ ]  `C_VerifyInit`\n  - [ ]  `C_Verify`\n  - [ ]  `C_VerifyUpdate`\n  - [ ]  `C_VerifyFinal`\n  - [ ]  `C_VerifyRecoverInit`\n  - [ ]  `C_VerifyRecover`\n- Digests\n  - [ ]  `C_DigestInit`\n  - [ ]  `C_Digest`\n  - [ ]  `C_DigestUpdate`\n  - [ ]  `C_DigestKey`\n  - [ ]  `C_DigestFinal`\n  - [ ]  `C_DigestEncryptUpdate`\n  - [ ]  `C_DecryptDigestUpdate`\n  - [ ]  `C_SignEncryptUpdate`\n  - [ ]  `C_DecryptVerifyUpdate`\n- Random Number Generation (see note below)\n  - [X]  `C_SeedRandom`\n  - [X]  `C_GenerateRandom`\n\n<\/Tab>\n<\/Tabs>\n\n## Limitations and notes\n\nDue to the nature of Vault, the KMIP Secrets Engine, and PKCS#11, there are some other limitations to be aware of:\n\n- The key and object IDs returned by `C_FindObjects`, etc., are randomized for each session, and cannot be shared between sessions; they have no meaning after a session is closed.\n  This is because KMIP objects, which are used to store the PKCS#11 objects, have long random strings as IDs, but the PKCS#11 object ID is limited to a 32-bit integer. 
Also, the PKCS#11 provider does not have any local storage.\n- The PKCS#11 provider's performance is heavily dependent on the latency to the Vault server and its performance.\n  This is because nearly all PKCS#11 API calls are translated 1-1 to KMIP calls, aside from some object attribute calls (which can be locally cached).\n  Multiple sessions can be safely used simultaneously though, and a single Vault server node has been tested as supporting thousands of ongoing sessions.\n- The object attribute cache is valid only for a single object per session, and will be cleared when another object's attributes are queried.\n- The random number generator function, `C_GenerateRandom`, is currently implemented in software in the library by calling out to Go's [`crypto\/rand`](https:\/\/pkg.go.dev\/crypto\/rand) package,\n  and does **not** call Vault.\n\n## Changelog\n\n### v0.2.1\n\n * Go update to 1.22.7 and Go dependency updates\n * Add license files to artifacts\n\n### v0.2.0\n\n * Introduced support for RSA and HMAC operations\n\n### v0.1.3\n\n * Go update to 1.19.4 and Go dependency updates\n * Added missing checksum for EL9 builds\n\n### v0.1.2\n\n * Added arm64 support on macOS\n * Go update to 1.19.2 and Go dependency updates\n\n### v0.1.1\n\n  * KMIP: Set activation date attribute required by Vault 1.12\n  * KMIP: Revoke a key prior to destroy\n\n### v0.1.0\n\n  * Initial release","site":"vault","answers_cleaned":"    layout  docs page title  PKCS 11 Provider   Vault Enterprise description       The Vault PKCS 11 Provider allows Vault KMIP Secrets Engine to be used via PKCS 11 calls    The provider supports a subset of key generation  encryption  decryption and key storage operations    This requires the Enterprise ADP KM license         PKCS 11 provider   include  alerts enterprise only mdx   PKCS11 provider is part of the  KMIP Secret Engine   vault docs secrets kmip   which requires  Vault Enterprise  https   www hashicorp com products vault pricing  with the 
           0 1 0            1 12            AES CTR               0x1086            0 1 0            1 12            AES GCM               0x1087            0 1 0            1 12            AES OFB               0x2104            0 2 0            1 13            AES CFB128            0x2107            0 2 0            1 13           The following signing mechanisms are currently supported     Name                 Mechanism Number  Provider Version Vault Version                                                                             RSA PKCS              0x0001            0 2 0            1 13            SHA256 RSA PKCS       0x0040            0 2 0            1 13            SHA384 RSA PKCS       0x0041            0 2 0            1 13            SHA512 RSA PKCS       0x0042            0 2 0            1 13            SHA224 RSA PKCS       0x0046            0 2 0            1 13            SHA512 224 HMAC       0x0049            0 2 0            1 13            SHA512 256 HMAC       0x004D            0 2 0            1 13            SHA256 HMAC           0x0251            0 2 0            1 13            SHA224 HMAC           0x0256            0 2 0            1 13            SHA384 HMAC           0x0261            0 2 0            1 13            SHA512 HMAC           0x0271            0 2 0            1 13             Tabs   Tab heading  Supported PKCS 11 Functions  version 0 2    Here is the list of supported and unsupported PKCS 11 functions     Encryption and decryption      X    C EncryptInit       X    C Encrypt       X    C EncryptUpdate       X    C EncryptFinal       X    C DecryptInit       X    C Decrypt       X    C DecryptUpdate       X    C DecryptFinal    Key management      X    C GenerateKey       X    C GenerateKeyPair            C WrapKey            C UnwrapKey            C DeriveKey    Objects      X    C CreateObject       X    C DestroyObject       X    C GetAttributeValue       X    C FindObjectsInit       X    C FindObjects       X    C 
FindObjectsFinal            C SetAttributeValue            C CopyObject            C GetObjectSize    Management      X    C Initialize       X    C Finalize       X    C Login   PIN is used as a passphrase for the TLS encryption key  if provided       X    C Logout       X    C GetInfo       X    C GetSlotList       X    C GetSlotInfo       X    C GetTokenInfo       X    C GetMechanismList       X    C GetMechanismInfo       X    C OpenSession       X    C CloseSession       X    C CloseAllSessions       X    C GetSessionInfo            C InitToken            C InitPIN            C SetPIN            C GetOperationState            C SetOperationState            C GetFunctionStatus            C CancelFunction            C WaitForSlotEvent    Signing      X    C SignInit       X    C Sign       X    C SignUpdate       X    C SignFinal            C SignRecoverInit            C SignRecover       X    C VerifyInit       X    C Verify       X    C VerifyUpdate       X    C VerifyFinal            C VerifyRecoverInit            C VerifyRecover    Digests           C DigestInit            C Digest            C DigestUpdate            C DigestKey            C DigestFinal            C DigestEncryptUpdate            C DecryptDigestUpdate            C SignEncryptUpdate            C DecryptVerifyUpdate    Random Number Generation  see note below       X    C SeedRandom       X    C GenerateRandom     Tab   Tab heading  Supported PKCS 11 Functions  version 0 1    Here is the list of supported and unsupported PKCS 11 functions     Encryption and decryption      X    C EncryptInit       X    C Encrypt            C EncryptUpdate            C EncryptFinal       X    C DecryptInit       X    C Decrypt            C DecryptUpdate            C DecryptFinal    Key management      X    C GenerateKey            C GenerateKeyPair            C WrapKey            C UnwrapKey            C DeriveKey    Objects      X    C CreateObject       X    C DestroyObject       X    C GetAttributeValue     
  X    C FindObjectsInit       X    C FindObjects       X    C FindObjectsFinal            C SetAttributeValue            C CopyObject            C GetObjectSize    Management      X    C Initialize       X    C Finalize       X    C Login   PIN is used as a passphrase for the TLS encryption key  if provided       X    C Logout       X    C GetInfo       X    C GetSlotList       X    C GetSlotInfo       X    C GetTokenInfo       X    C GetMechanismList       X    C GetMechanismInfo       X    C OpenSession       X    C CloseSession       X    C CloseAllSessions       X    C GetSessionInfo            C InitToken            C InitPIN            C SetPIN            C GetOperationState            C SetOperationState            C GetFunctionStatus            C CancelFunction            C WaitForSlotEvent    Signing           C SignInit            C Sign            C SignUpdate            C SignFinal            C SignRecoverInit            C SignRecover            C VerifyInit            C Verify            C VerifyUpdate            C VerifyFinal            C VerifyRecoverInit            C VerifyRecover    Digests           C DigestInit            C Digest            C DigestUpdate            C DigestKey            C DigestFinal            C DigestEncryptUpdate            C DecryptDigestUpdate            C SignEncryptUpdate            C DecryptVerifyUpdate    Random Number Generation  see note below       X    C SeedRandom       X    C GenerateRandom     Tab    Tabs      Limitations and notes  Due to the nature of Vault  the KMIP Secrets Engine  and PKCS 11  there are some other limitations to be aware of     The key and object IDs returned by  C FindObjects   etc   are randomized for each session  and cannot be shared between sessions  they have no meaning after a session is closed    This is because KMIP objects  which are used to store the PKCS 11 objects  have long random strings as IDs  but the PKCS 11 object ID is limited to a 32 bit integer  Also  the PKCS 11 
provider does not have any local storage    The PKCS 11 provider s performance is heavily dependent on the latency to the Vault server and its performance    This is because nearly all PKCS 11 API calls are translated 1 1 to KMIP calls  aside from some object attribute calls  which can be locally cached     Multiple sessions can be safely used simultaneously though  and a single Vault server node has been tested as supporting thousands of ongoing sessions    The object attribute cache is valid only for a single object per session  and will be cleared when another object s attributes are queried    The random number generator function   C GenerateRandom   is currently implemented in software in the library by calling out to Go s   crypto rand   https   pkg go dev crypto rand  package    and does   not   call Vault      Changelog      v0 2 1     Go update to 1 22 7 and Go dependency updates    Add license files to artifacts      v0 2 0     Introduced support for RSA and HMAC operations      v0 1 3     Go update to 1 19 4 and Go dependency updates    Added missing checksum for EL9 builds      v0 1 2     Added arm64 support on macOS    Go update to 1 19 2 and Go dependency updates      v0 1 1      KMIP  Set activation date attribute required by Vault 1 12     KMIP  Revoke a key prior to destroy      v0 1 0      Initial release"}
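The device error codes listed under error handling map cleanly to messages in client code. Here is a minimal Python sketch; the constants mirror the table above, while the `explain_device_error` helper and its fallback message are illustrative assumptions, not part of the provider:

```python
# CKR_DEVICE_ERROR is the generic PKCS#11 return value; the device
# error codes below are the provider-specific ones from the table.
CKR_DEVICE_ERROR = 0x30

DEVICE_ERRORS = {
    400: "Invalid input was provided in the configuration or PKCS#11 call",
    401: "Invalid credentials were provided",
    404: "The object, attribute, or key was not found",
    600: "An unknown I/O error occurred",
    601: "A KMIP engine error occurred",
}

def explain_device_error(rv: int, device_code: int) -> str:
    """Translate a device error code (from C_SessionInfo) into a message."""
    if rv != CKR_DEVICE_ERROR:
        return "not a device error"
    return DEVICE_ERRORS.get(device_code, f"unknown device error {device_code}")

print(explain_device_error(0x30, 601))  # → A KMIP engine error occurred
```

A caller would check the PKCS#11 return value first, then fetch the device code from the session info only when it sees `CKR_DEVICE_ERROR`.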
{"questions":"vault include alerts enterprise only mdx layout docs page title AWS KMS External Key Store XKS PKCS 11 Provider Vault Enterprise Vault with AWS KMS external key store XKS via PKCS 11 and XKS proxy AWS KMS External Key Store can use Vault as a key store via the Vault PKCS 11 Provider","answers":"---\nlayout: docs\npage_title: AWS KMS External Key Store (XKS) - PKCS#11 Provider - Vault Enterprise\ndescription: |-\n  AWS KMS External Key Store can use Vault as a key store via the Vault PKCS#11 Provider.\n---\n\n# Vault with AWS KMS external key store (XKS) via PKCS#11 and XKS proxy\n\n@include 'alerts\/enterprise-only.mdx'\n\n~> **Note**: AWS [`xks-proxy`](https:\/\/github.com\/aws-samples\/aws-kms-xks-proxy) is used in this document as a sample implementation.\n\nVault's KMIP Secrets Engine can be used as an external key store for the AWS KMS [External Key Store (XKS)](https:\/\/aws.amazon.com\/blogs\/aws\/announcing-aws-kms-external-key-store-xks\/) protocol using the AWS [`xks-proxy`](https:\/\/github.com\/aws-samples\/aws-kms-xks-proxy) along\nwith the [Vault PKCS#11 Provider](\/vault\/docs\/enterprise\/pkcs11-provider).\n\n## Overview\n\nThis is tested as working with Vault 1.11.0 Enterprise (and later) with Advanced Data Protection (KMIP support).\n\nPrerequisites:\n\n* A server capable of running XKS Proxy on port 443, which is exposed to the Internet or a VPC endpoint. This can be the same as the Vault server.\n* A valid DNS entry with a valid TLS certificate for XKS Proxy.\n* `libvault-pkcs11.so` downloaded from [releases.hashicorp.com](https:\/\/releases.hashicorp.com\/vault-pkcs11-provider) for your platform and available on the XKS Proxy server.\n* Vault Enterprise with the KMIP Secrets Engine available and with TCP port 5696 accessible to where XKS Proxy will be running.\n\nThere are 3 parts to this setup:\n\n1. Vault KMIP Secrets Engine standard setup. (There is nothing specific to XKS in this setup.)\n1. 
Vault PKCS#11 setup to tell the PKCS#11 provider (`libvault-pkcs11.so`) how to talk to the Vault KMIP Secrets Engine. (There is nothing specific to XKS in this setup.)\n1. XKS Proxy setup.\n\n~> **Important**: XKS has a strict 250 ms latency requirement.\nIn order to serve requests with this latency, we recommend hosting Vault and the XKS proxy as close as possible\nto the desired KMS region.\n\n## Vault setup\n\nOn the Vault server, we need to [setup the KMIP Secrets Engine](\/vault\/docs\/secrets\/kmip):\n\n1. Start the [KMIP Secrets Engine](\/vault\/docs\/secrets\/kmip) and listener:\n\n    ```sh\n    vault secrets enable kmip\n    vault write kmip\/config listen_addrs=0.0.0.0:5696\n    ```\n\n1. Create a KMIP scope to contain the AES keys that will be accessible.\n   The KMIP scope is essentially an isolated namespace.\n   Here is an example creating one called `my-service` (which will be used throughout this document).\n\n    ```sh\n    vault write -f kmip\/scope\/my-service\n    ```\n\n1. Create a KMIP role that has access to the scope:\n\n    ```sh\n    vault write kmip\/scope\/my-service\/role\/admin operation_all=true\n    ```\n\n1. 
Create TLS credentials (a certificate, key, and CA bundle) for the KMIP role:\n\n    ~> **Note**: This command will output the credentials in plaintext.\n\n    ```sh\n    vault write -f -format=json kmip\/scope\/my-service\/role\/admin\/credential\/generate | tee kmip.json\n    ```\n\n    The response from the `credential\/generate` endpoint is JSON.\n    The `.data.certificate` entry contains a bundle of the TLS client key and certificate we will use to connect to KMIP from `xks-proxy`.\n    The `.data.ca_chain[]` entries contain the CA bundle to verify the KMIP server's certificate.\n    Save these to, e.g., `cert.pem` and `ca.pem`:\n\n    ```sh\n    jq --raw-output --exit-status '.data.ca_chain[]' kmip.json > ca.pem\n    jq --raw-output --exit-status '.data.certificate' kmip.json > cert.pem\n    ```\n\n## XKS proxy setup\n\nThe rest of the steps take place on the XKS Proxy server.\n\nFor this example, we will use an HTTPS proxy service like [ngrok](https:\/\/ngrok.com\/) to forward connections\nto the XKS proxy. This helps to quickly set up a valid domain and TLS endpoint for testing.\n\n1. Start `ngrok`:\n\n   ```shell-session\n   $ ngrok http 8000\n   ```\n\n   This will output a domain that can be used to configure KMS later, such as `https:\/\/example.ngrok.io`.\n\n1. Copy the `libvault-pkcs11.so` binary to the server, such as `\/usr\/local\/lib` (should be the same as in the TOML config file below), and `chmod` it so that it is executable.\n\n1. Copy the TLS certificate bundle (e.g., `\/etc\/kmip\/cert.pem`) and CA bundle (e.g., `\/etc\/kmip\/ca.pem`) to the `xks-proxy` server (doesn't matter where, as long as the `xks-proxy` process has access to it) from the Vault setup.\n\n1. 
Create a `configuration\/settings_vault.toml` file for the XKS to Vault PKCS#11 configuration,\n   and set the `XKS_PROXY_SETTINGS_TOML` environment variable to point to the file location.\n\n   The important settings to change:\n\n   * `[[external_key_stores]]`:\n     * change URI path prefix to anything you like\n     * choose random access ID\n     * choose random secret key\n     * set which key labels are accessible to XKS (`xks_key_id_set`)\n   * `[pkcs11]`: set the `PKCS11_HSM_MODULE` to the location of the `libvault-pkcs11.so` (or `.dylib`) file downloaded from [releases.hashicorp.com](https:\/\/releases.hashicorp.com\/vault-pkcs11-provider).\n\n   ```toml\n   [server]\n   ip = \"0.0.0.0\"\n   port = 8000\n   region = \"us-east-2\"\n   service = \"kms-xks-proxy\"\n\n   [server.tcp_keepalive]\n   tcp_keepalive_secs = 60\n   tcp_keepalive_retries = 3\n   tcp_keepalive_interval_secs = 1\n\n   [tracing]\n   is_stdout_writer_enabled = true\n   is_file_writer_enabled = true\n   level = \"DEBUG\"\n   directory = \"\/var\/local\/xks-proxy\/logs\"\n   file_prefix = \"xks-proxy.log\"\n   rotation_kind = \"HOURLY\"\n\n   [security]\n   is_sigv4_auth_enabled = true\n   is_tls_enabled = true\n   is_mtls_enabled = false\n\n   [tls]\n   tls_cert_pem = \"tls\/server_cert.pem\"\n   tls_key_pem = \"tls\/server_key.pem\"\n   mtls_client_ca_pem = \"tls\/client_ca.pem\"\n   mtls_client_dns_name = \"us-east-2.alpha.cks.kms.aws.internal.amazonaws.com\"\n\n   [[external_key_stores]]\n   uri_path_prefix = \"\/xyz\"\n   sigv4_access_key_id = \"AKIA4GBY3I6JCE5M2HPM\"\n   sigv4_secret_access_key = \"1234567890123456789012345678901234567890123=\"\n   xks_key_id_set = [\"abc123\"]\n\n   [pkcs11]\n   session_pool_max_size = 30\n   session_pool_timeout_milli = 0\n   session_eager_close = false\n   user_pin = \"\"\n   PKCS11_HSM_MODULE = \"\/usr\/local\/lib\/libvault-pkcs11.so\"\n   context_read_timeout_milli = 100\n\n   [limits]\n   max_plaintext_in_base64 = 8192\n   max_aad_in_base64 = 
16384\n\n   [hsm_capabilities]\n   can_generate_iv = false\n   is_zero_iv_required = false\n   ```\n\n~> **Note**: `vault-pkcs11-provider` versions of 0.1.0\u20130.1.2 require the last two lines to be changed to `can_generate_iv = true` and `is_zero_iv_required = true`.\n\n1. Create a file, `\/etc\/vault-pkcs11.hcl` with the following contents:\n\n    ```hcl\n    slot {\n      server = \"VAULT_ADDRESS:5696\"\n      tls_cert_path = \"\/etc\/kmip\/cert.pem\"\n      ca_path = \"\/etc\/kmip\/ca.pem\"\n      scope = \"my-service\"\n    }\n    ```\n\n    This file is used by `libvault-pkcs11.so` to know how to find and communicate with the KMIP server.\n    See [the Vault docs](\/vault\/docs\/enterprise\/pkcs11-provider) for all available parameters and their usage.\n\n1. If you want to view the Vault logs (helpful when trying to find error messages), you can specify the `VAULT_LOG_FILE` (default is stdout) and `VAULT_LOG_LEVEL` (default is `INFO`). We'd recommend setting `VAULT_LOG_FILE` to something like `\/tmp\/vault.log` or `\/var\/log\/vault.log`. Other useful log levels are `WARN` (quieter) and `TRACE` (very verbose, could possibly contain sensitive information, like raw network packets).\n\n1. Create an AES-256 key in KMIP, for example, using `pkcs11-tool` (usually installed with the OpenSC package). See the [Vault docs](\/vault\/docs\/enterprise\/pkcs11-provider) for the full setup.\n   ```sh\n   VAULT_LOG_FILE=\/dev\/null pkcs11-tool --module .\/libvault-pkcs11.so --keygen -a abc123 --key-type AES:32 \\\n       --extractable --allow-sw\n   Key generated:\n   Secret Key Object; AES length 32\n   VALUE:\n   label:      abc123\n   Usage:      encrypt, decrypt, wrap, unwrap\n   Access:     none\n   ```\n\n## Enable XKS in the AWS CLI\n\n\n1. 
Create the KMS custom key store with the appropriate parameters to point to your XKS proxy (in this example, through `ngrok`).\n\n   ```shell-session\n   $ aws kms create-custom-key-store \\\n       --custom-key-store-name myVaultKeyStore \\\n       --custom-key-store-type EXTERNAL_KEY_STORE \\\n       --xks-proxy-uri-endpoint https:\/\/example.ngrok.io \\\n       --xks-proxy-uri-path \/xyz\/kms\/xks\/v1 \\\n       --xks-proxy-authentication-credential AccessKeyId=AKIA4GBY3I6JCE5M2HPM,RawSecretAccessKey=1234567890123456789012345678901234567890123= \\\n       --xks-proxy-connectivity PUBLIC_ENDPOINT\n\n   {\n       \"CustomKeyStoreId\": \"cks-d7a55fe93d63191d6\"\n   }\n   ```\n\n1. Tell KMS to connect to the key store.\n\n   ```shell-session\n   $ aws kms connect-custom-key-store --custom-key-store-id cks-d7a55fe93d63191d6\n   ```\n\n1. Wait for the `ConnectionState` of your custom key store to be `CONNECTED`. This can take a few minutes.\n\n   ```shell-session\n   $ aws kms describe-custom-key-stores --custom-key-store-id cks-d7a55fe93d63191d6\n   ```\n\n1. 
Create a KMS key associated with the XKS key ID (`abc123` in this example):\n\n   ```shell-session\n   $ aws kms create-key --custom-key-store-id cks-d7a55fe93d63191d6 \\\n       --xks-key-id abc123 --origin EXTERNAL_KEY_STORE\n   {\n       \"KeyMetadata\": {\n           \"AWSAccountId\": \"111111111111\",\n           \"KeyId\": \"a93f205a-2a37-4338-aa64-92b4a4b0b67d\",\n           \"Arn\": \"arn:aws:kms:us-east-2:111111111111:key\/a93f205a-2a37-4338-aa64-92b4a4b0b67d\",\n           \"CreationDate\": \"2022-12-22T11:03:23.695000-08:00\",\n           \"Enabled\": true,\n           \"Description\": \"\",\n           \"KeyUsage\": \"ENCRYPT_DECRYPT\",\n           \"KeyState\": \"Enabled\",\n           \"Origin\": \"EXTERNAL_KEY_STORE\",\n           \"CustomKeyStoreId\": \"cks-16460f66b34705025\",\n           \"KeyManager\": \"CUSTOMER\",\n           \"CustomerMasterKeySpec\": \"SYMMETRIC_DEFAULT\",\n           \"KeySpec\": \"SYMMETRIC_DEFAULT\",\n           \"EncryptionAlgorithms\": [\n               \"SYMMETRIC_DEFAULT\"\n           ],\n           \"MultiRegion\": false,\n           \"XksKeyConfiguration\": {\n               \"Id\": \"abc123\"\n           }\n       }\n   }\n   ```\n\n1. Encrypt some data with this key:\n\n   ```shell-session\n   $ aws kms encrypt --key-id a93f205a-2a37-4338-aa64-92b4a4b0b67d --plaintext YWJjMTIzCg==\n   {\n       \"CiphertextBlob\": \"somerandomciphertextblob=\",\n       \"KeyId\": \"arn:aws:kms:us-east-2:111111111111:key\/a93f205a-2a37-4338-aa64-92b4a4b0b67d\",\n       \"EncryptionAlgorithm\": \"SYMMETRIC_DEFAULT\"\n   }\n   ```\n\n1. Decrypt the resulting ciphertext:\n\n   ```shell-session\n   $ aws kms decrypt --ciphertext-blob somerandomciphertextblob=\n   {\n       \"KeyId\": \"arn:aws:kms:us-east-2:111111111111:key\/a93f205a-2a37-4338-aa64-92b4a4b0b67d\",\n       \"Plaintext\": \"YWJjMTIzCg==\",\n       \"EncryptionAlgorithm\": \"SYMMETRIC_DEFAULT\"\n   }\n   ```\n\n1. 
Optionally, clean up your key and key store with:\n\n   ```shell-session\n   $ aws kms disable-key --key-id a93f205a-2a37-4338-aa64-92b4a4b0b67d\n   $ aws kms disconnect-custom-key-store --custom-key-store-id cks-16460f66b34705025\n   $ aws kms delete-custom-key-store --custom-key-store-id cks-16460f66b34705025\n   ```\n\n   (The `aws kms delete-custom-key-store` command will not succeed until all keys in the key store have been disabled and deleted.)","site":"vault"}
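A detail worth calling out from the encrypt/decrypt examples above: the AWS CLI passes the `--plaintext` value as base64, so `YWJjMTIzCg==` is simply the encoding of `abc123\n` (hedged: base64 blob parameters are the AWS CLI v2 default; CLI v1 encoded raw bytes automatically). A stdlib sketch for preparing and checking such payloads:

```python
import base64

# Encode the raw payload the way `aws kms encrypt --plaintext` expects it.
payload = b"abc123\n"
encoded = base64.b64encode(payload).decode("ascii")
print(encoded)  # → YWJjMTIzCg==

# Decode the `Plaintext` field returned by `aws kms decrypt`.
decoded = base64.b64decode("YWJjMTIzCg==")
print(decoded)  # → b'abc123\n'
```

The same round trip applies to `CiphertextBlob`, which is also returned base64-encoded.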
{"questions":"vault Oracle TDE include alerts enterprise only mdx page title Oracle TDE PKCS 11 Provider Vault Enterprise layout docs The Vault PKCS 11 Provider can be used to enable Oracle TDE","answers":"---\nlayout: docs\npage_title: Oracle TDE - PKCS#11 Provider - Vault Enterprise\ndescription: |-\n  The Vault PKCS#11 Provider can be used to enable Oracle TDE.\n---\n\n# Oracle TDE\n\n@include 'alerts\/enterprise-only.mdx'\n\n[Oracle Transparent Data Encryption](https:\/\/docs.oracle.com\/en\/database\/oracle\/oracle-database\/19\/asoag\/introduction-to-transparent-data-encryption.html) (TDE)\nis supported with the [Vault PKCS#11 provider](\/vault\/docs\/enterprise\/pkcs11-provider).\nIn this setup, Vault's KMIP engine generates and stores the \"TDE Master Encryption Key\" that the Oracle Database uses to encrypt and decrypt the \"TDE Table Keys\".\nOracle will not have access to the TDE Master Encryption Key itself.\n\n## Requirements\n\nTo set up Oracle TDE backed by Vault, the following are required:\n\n- A database running Oracle 19c Enterprise Edition.\n- A Vault Enterprise 1.11+ server with Advanced Data Protection for KMIP support.\n- TCP port 5696 on the Vault server reachable from the Oracle database.\n- `libvault-pkcs11.so` downloaded from [releases.hashicorp.com](https:\/\/releases.hashicorp.com\/vault-pkcs11-provider) for the operating system running the Oracle database.\n\n## Vault setup\n\nOn the Vault server, we need to [set up the KMIP Secrets Engine](\/vault\/docs\/secrets\/kmip):\n\n1. 
Start the KMIP Secrets Engine and listener:\n\n    ```sh\n    vault secrets enable kmip\n    vault write kmip\/config listen_addrs=0.0.0.0:5696\n    ```\n\n    ~> **Important**: When configuring KMIP for Oracle, you will probably need to set the\n    `server_hostnames` and `server_ips` [configuration parameters](\/vault\/api-docs\/secret\/kmip#parameters);\n    otherwise, the TLS connection to the KMIP Secrets Engine will fail due to certificate validation errors.\n    When configuring Oracle TDE, this error can manifest as the `sqlplus` session silently hanging.\n\n1. Create a KMIP scope to contain the TDE keys and objects.\n   The KMIP scope is essentially an isolated namespace.\n   For example, you can create a scope called `my-service`:\n\n    ```sh\n    vault write -f kmip\/scope\/my-service\n    ```\n\n1. Create a KMIP role that has access to the scope:\n\n    ```sh\n    vault write kmip\/scope\/my-service\/role\/admin operation_all=true\n    ```\n\n1. Create TLS credentials (a certificate, key, and CA bundle) for the KMIP role:\n\n    ~> **Note**: This command will output the credentials in plaintext.\n\n    ```sh\n    vault write -f -format=json kmip\/scope\/my-service\/role\/admin\/credential\/generate | tee kmip.json\n    ```\n\n    The response from the `credential\/generate` endpoint is JSON.\n    The `.data.certificate` entry contains a bundle of the TLS client key and certificate that we will use to connect to KMIP from Oracle.\n    The `.data.ca_chain[]` entries contain the CA bundle to verify the KMIP server's certificate.\n    Save these to, e.g., `cert.pem` and `ca.pem`:\n\n    ```sh\n    jq --raw-output --exit-status '.data.ca_chain[]' kmip.json > ca.pem\n    jq --raw-output --exit-status '.data.certificate' kmip.json > cert.pem\n    ```\n\n## Oracle TDE preparation\n\nThe rest of the steps take place on the Oracle server.\n\nWe need to configure the Vault PKCS#11 provider.\n\n1. 
Copy the `libvault-pkcs11.so` binary into `$ORACLE_BASE\/extapi\/64\/hsm`, and ensure there are no other PKCS#11 libraries in `$ORACLE_BASE\/extapi\/64\/hsm`.\n\n1. Copy the TLS certificate and key bundle (e.g., `\/etc\/cert.pem`) and CA bundle (e.g., `\/etc\/ca.pem`) for the KMIP role (configured as above) to the Oracle server.\n   The exact location does not matter as long as the Oracle process has access to it.\n\n1. Create a configuration file, for example `\/etc\/vault-pkcs11.hcl`, with the following contents:\n\n    ```hcl\n    slot {\n      server = \"VAULT_ADDRESS:5696\"\n      tls_cert_path = \"\/etc\/cert.pem\"\n      ca_path = \"\/etc\/ca.pem\"\n      scope = \"my-service\"\n    }\n    ```\n\n    This file is used by `libvault-pkcs11.so` to know how to find and communicate with the KMIP engine in Vault.\n\n    In particular:\n    - The `slot` block configures the first PKCS#11 slot to point to Vault. Oracle will use this first slot.\n    - `server` should point to the Vault server's IP (or DNS name) and port number (5696 is the default).\n    - `tls_cert_path` should be the location on the Oracle server of the client TLS certificate and key bundle used to connect to the Vault server.\n    - `ca_path` should be the location of the CA bundle on the Oracle server.\n    - `scope` is the KMIP scope to authenticate against and where the TDE master keys and associated metadata will be stored.\n\n    The default locations the PKCS#11 library looks for the configuration file are the current directory (`.\/vault-pkcs11.hcl`) and `\/etc\/vault-pkcs11.hcl`, but you can override this by setting the `VAULT_KMIP_CONFIG` environment variable to any file.\n\n1. If you want to view the Vault logs (helpful when trying to find error messages), you can specify `VAULT_LOG_FILE` (default is stdout) and `VAULT_LOG_LEVEL` (default is `INFO`). We'd recommend setting `VAULT_LOG_FILE` to something like `\/tmp\/vault.log` or `\/var\/log\/vault.log`. 
Other useful log levels are `WARN` (quieter) and `TRACE` (verbose, and could possibly contain sensitive information, like raw network packets).\n\n## Enable TDE\n\nThe only remaining step is to set up Oracle TDE for an external HSM using the shared library `libvault-pkcs11.so`.\nThese steps are not specific to Vault, other than requiring that the shared library, HCL configuration, and certificates be present.\nTDE is complex, but an example way to enable it is:\n\n1. Open a `sqlplus` session into the root container (or switch into it with `ALTER SESSION SET CONTAINER = CDB$ROOT;`).\n\n1. Set the `WALLET_ROOT` and `TDE_CONFIGURATION` parameters on the Oracle database. The wallet root directory is only used to set the TDE configuration parameter. To learn more about the wallet parameters, refer to the [Oracle TDE documentation](https:\/\/docs.oracle.com\/en\/database\/oracle\/oracle-database\/19\/refrn\/TDE_CONFIGURATION.html).\n\n    ```sql\n    SQL> alter system set wallet_root='\/opt\/oracle\/admin\/ORCLCDB\/wallet' scope=spfile;\n    SQL> shutdown immediate;\n    SQL> startup;\n    SQL> alter system set TDE_CONFIGURATION=\"KEYSTORE_CONFIGURATION=HSM\" SCOPE=both;\n    ```\n\n1. Validate the parameters are set by querying `V$PARAMETER`:\n\n    ```sql\n    SQL> SELECT name, value from V$PARAMETER WHERE NAME IN ('wallet_root','tde_configuration');\n\n    NAME                           VALUE\n    ------------------------------ --------------------------------------------------\n    wallet_root                    \/opt\/oracle\/admin\/ORCLCDB\/wallet\n    tde_configuration              KEYSTORE_CONFIGURATION=HSM\n    ```\n\n1. Open the HSM wallet: `ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY \"1234\" CONTAINER = ALL;`.\n   The password `1234` here is used as the password for decrypting the TLS key, if it is stored encrypted on disk.\n   If the TLS key is not encrypted, this password is ignored.\n\n1. 
Create the TDE master key: `ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY USING TAG 'default' IDENTIFIED BY \"1234\" CONTAINER = ALL;`, again specifying the TLS key password if necessary.\n\n1. Finally, use TDE in a PDB, e.g., `CREATE TABLE test_tde (something CHAR(32) ENCRYPT);`.\n\nMore extensive information on the details and procedures for Oracle TDE can be found in [Oracle's documentation](https:\/\/docs.oracle.com\/en\/database\/oracle\/oracle-database\/19\/asoag\/configuring-transparent-data-encryption.html#GUID-753C4808-CC51-4DA1-A5C3-980417FDAB14).","site":"vault"}
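Before copying the extracted PEM files to the Oracle host, it can help to sanity-check them with standard OpenSSL tooling. The sketch below runs against a throwaway self-signed certificate generated locally (since the real `cert.pem` and `ca.pem` come from Vault's `credential/generate` endpoint); the same two inspection commands apply to the real bundle:

```shell
# Stand-in for the Vault-issued bundle: a throwaway self-signed EC certificate.
# With the real files, substitute cert.pem and ca.pem from the steps above.
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout demo-key.pem -out demo-cert.pem -days 1 -subj "/CN=kmip-demo"

# Confirm the certificate parses, and check its subject and expiry.
openssl x509 -in demo-cert.pem -noout -subject -enddate

# Verify the leaf against the CA bundle (self-signed here, so it is its own CA).
openssl verify -CAfile demo-cert.pem demo-cert.pem
```

A failure from `openssl verify` against the real files usually means the wrong `.data` entries were extracted from `kmip.json`, and is worth resolving before debugging TLS errors on the Oracle side.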
{"questions":"vault FIPS 140 2 inside the Vault binary This can directly be used for FIPS compliance page title Vault Enterprise FIPS 140 2 Inside layout docs Vault Enterprise features a special build with FIPS 140 2 support built into","answers":"---\nlayout: docs\npage_title: Vault Enterprise FIPS 140-2 Inside\ndescription: |-\n  Vault Enterprise features a special build with FIPS 140-2 support built into\n  the Vault binary. This can directly be used for FIPS compliance.\n---\n\n# FIPS 140-2 inside\n\n@include 'alerts\/enterprise-only.mdx'\n\nSpecial builds of Vault Enterprise (marked with a `fips1402` feature name)\ninclude built-in support for FIPS 140-2 compliance. Unlike using Seal Wrap\nfor FIPS compliance, this binary has no external dependencies on an HSM.\n\nTo use this feature, you must have an active or trial license for Vault\nEnterprise Plus (HSMs). To start a trial, contact [HashiCorp\nsales](mailto:sales@hashicorp.com).\n\n## Using FIPS 140-2 Vault Enterprise\n\nFIPS 140-2 Inside versions of Vault Enterprise behave like non-FIPS versions\nof Vault. No restrictions are placed on algorithms; it is up to the operator\nto ensure Vault remains in a FIPS-compliant mode of operation. This means\nconfiguring some Secrets Engines to permit a limited set of algorithms (e.g.,\nforbidding ed25519-based CAs with PKI Secrets Engines).\n\nBecause Vault Enterprise may return secrets in plain text, it is important to\nensure the Vault server's `listener` configuration section utilizes TLS. This\nensures secrets are transmitted securely from Server to Client. Additionally,\nnote that TLSv1.3 will not work with FIPS 140-2 Inside, as HKDF is not a\ncertified primitive. 
If TLSv1.3 is desired, it is suggested to front Vault\nServer with a FIPS-certified load balancer.\n\nA non-exhaustive list of potential compliance issues includes:\n\n - Using Ed25519 or ChaCha20+Poly1305 keys with the Transit Secrets Engine,\n - Using Ed25519 keys as CAs in the PKI or SSH Secrets Engines,\n - Using FF3-1\/FPE in the Transform Secrets Engine,\n - Using a Derived Key (using HKDF) for Agent auto-auth or the Transit\n   Secrets Engine, or\n - Using **Entropy Augmentation**: because BoringCrypto uses its internal,\n   FIPS 140-2 approved RNG, it cannot mix entropy from other sources.\n   Attempting to use EA with FIPS 140-2 HSM enabled binaries will result\n   in failures such as `panic: boringcrypto: invalid code execution`.\n\nHashiCorp can only provide general guidance regarding using Vault Enterprise\nin a FIPS-compliant manner. We are not a NIST-certified testing laboratory\nand thus organizations may need to consult an approved auditor for final\ninformation.\n\nThe FIPS 140-2 variant of Vault uses separate binaries; these are available\nfrom the following sources:\n\n - From the [HashiCorp Releases Page](https:\/\/releases.hashicorp.com\/vault),\n   ending with the `+ent.fips1402` and `+ent.hsm.fips1402` suffixes.\n - From the [Docker Hub `hashicorp\/vault-enterprise-fips`](https:\/\/hub.docker.com\/r\/hashicorp\/vault-enterprise-fips)\n   container repository.\n - From the [AWS ECR `hashicorp\/vault-enterprise-fips`](https:\/\/gallery.ecr.aws\/hashicorp\/vault-enterprise-fips)\n   container repository.\n - From the [Red Hat Access `hashicorp\/vault-enterprise-fips`](https:\/\/catalog.redhat.com\/software\/containers\/hashicorp\/vault-enterprise-fips\/628d50e37ff70c66a88517ea)\n   container repository.\n\n~> **Note**: When pulling the FIPS UBI-based images, be aware that they are\n   ultimately designed for OpenShift certification; consider either adding\n   the `--user root --cap-add IPC_LOCK` options, to allow Vault to enable\n   mlock, or use the 
`--env SKIP_SETCAP=1` option, to disable mlock\n   completely, as appropriate for your environment.\n\n### Usage restrictions\n\n#### Migration restrictions\n\nHashiCorp **does not** support in-place migrations from non-FIPS Inside\nversions of Vault to FIPS Inside versions of Vault, regardless of version.\nA fresh cluster installation is required to receive support. We generally\nrecommend avoiding direct upgrades and replicated-migrations for several\nreasons:\n\n - Old entries remain encrypted with the old barrier key until overwritten;\n   this barrier key was likely not created by a FIPS library and thus\n   is not compliant.\n - Many secrets engines internally create keys; things like Transit create\n   and store keys, but don't store any data (inside of Vault) -- these would\n   still need to be accessible and rotated to a new, FIPS-compliant key.\n   Any PKI engines would have also created non-compliant keys, but rotation\n   of, say, a Root CA involves a concerted, non-Vault effort to accomplish\n   and must be done thoughtfully.\n\nAs such, HashiCorp cannot provide support for workloads that are affected\neither technically or via non-compliance that results from converting\nexisting cluster workloads to the FIPS 140-2 Inside binary.\n\nInstead, we suggest leaving the existing cluster in place, and carefully\nconsidering migration of specific workloads to the FIPS-backed cluster.\n\n#### Entropy augmentation restrictions\n\nEntropy Augmentation **does not** work with FIPS 140-2 Inside. The internal\nBoringCrypto RNG is FIPS 140-2 certified and does not accept entropy from\nother sources. On Vault 1.11.0 and later, attempting to use Entropy\nAugmentation will result in a warning (\"Entropy Augmentation is not supported...\")\nand Entropy Augmentation will be disabled.\n\n#### TLS restrictions\n\nVault Enterprise's FIPS modifications include restrictions to supported TLS\ncipher suites and key information. 
Only the following cipher suites are\nallowed:\n\n - `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`,\n - `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`,\n - `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`,\n - `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`,\n - `TLS_RSA_WITH_AES_128_GCM_SHA256`, and\n - `TLS_RSA_WITH_AES_256_GCM_SHA384`.\n\nAdditionally, only the following key types are allowed in TLS chains of trust:\n\n - RSA 2048, 3072, 4096, 7680, and 8192-bit;\n - ECDSA P-256, P-384, and P-521.\n\nFinally, only TLSv1.2 or higher is supported in FIPS mode. These are in line\nwith recent NIST guidance and recommendations.\n\n#### Heterogeneous cluster deployments\n\nHashiCorp does not support mixed deployment scenarios within the same Vault\ncluster, e.g., mixing FIPS and non-FIPS Vault binary versions, or mixing FIPS\nInside with FIPS Seal Wrap clusters. Cluster nodes must be of a single\nbinary\/deployment type across the entire cluster. Usage of Seal Wrap with\nthe FIPS Inside binary is permitted.\n\nRunning a heterogeneous cluster is not permitted by FIPS, as components of\nthe system are not compliant with FIPS.\n\n## Technical details\n\nVault Enterprise's FIPS 140-2 Inside binaries rely on a special version of the\nGo toolchain which includes a FIPS-validated BoringCrypto version. To ensure\nyour version of Vault Enterprise includes FIPS support, after starting the\nserver, make sure you see a line with `Fips: Enabled`, such as:\n\n```\n                    Fips: FIPS 140-2 Enabled, BoringCrypto version 7\n```\n\n~> **Note**: FIPS 140-2 Inside binaries depend on cgo, which requires that a\n   GNU C Library (glibc) Linux distribution be used to run Vault. 
We've\n   additionally opted to certify only on the AMD64 architecture at this time.\n   This means these binaries will not work on Alpine Linux-based containers.\n\n### FIPS 140-2 inside and external plugins\n\nVault Enterprise's built-in plugins are compiled into the Vault binary using\nthe same Go toolchain version that compiled the core Vault; this results in\nthese plugins having FIPS 140-2 compliance status as well. This same guarantee\ndoes not apply to external plugins.\n\n### Validating FIPS 140-2 inside\n\nTo validate that the FIPS 140-2 Inside binary correctly includes BoringCrypto,\nrun `go tool nm` on the binary to get a symbol dump. On non-FIPS builds,\nsearching for `goboringcrypto` in the output will yield no results, but on\nFIPS-enabled builds, you'll see many results like the following:\n\n```\n$ go tool nm vault | grep -i goboringcrypto\n  4014d0 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_cbc_encrypt\n  4014f0 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_ctr128_encrypt\n  401520 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_decrypt\n  401540 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_encrypt\n  401560 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_set_decrypt_key\n...additional lines elided...\n```\n\nAll FIPS cryptographic modules must execute startup tests. BoringCrypto uses\nthe `_goboringcrypto_BORINGSSL_bcm_power_on_self_test` symbol for this. 
To\nensure the Vault Enterprise binary is correctly executing startup tests, use\n[GDB](https:\/\/www.sourceware.org\/gdb\/) to stop execution on this function to\nensure it gets hit.\n\n```\n$ gdb --args vault server -dev\n...GDB startup messages elided...\n(gdb) break _goboringcrypto_BORINGSSL_bcm_power_on_self_test\n...breakpoint location elided...\n(gdb) run\n...additional GDB output elided...\nThread 1 \"vault\" hit Breakpoint 1, 0x0000000000454950 in _goboringcrypto_BORINGSSL_bcm_power_on_self_test ()\n(gdb) backtrace\n#0 0x0000000000454950 in _goboringcrypto_BORINGSSL_bcm_power_on_self_test ()\n#1 0x00000000005da8f0 in runtime.asmcgocall () at \/usr\/local\/hashicorp-fips-go-devel\/src\/runtime\/asm_amd64.s:765\n#2 0x00007fffd07a5a18 in ?? ()\n#3 0x00007fffffffdf28 in ?? ()\n#4 0x000000000057ebce in runtime.persistentalloc.func1 () at \/usr\/local\/hashicorp-fips-go-devel\/src\/runtime\/malloc.go:1371\n#5 0x00000000005d8a49 in runtime.systemstack () at \/usr\/local\/hashicorp-fips-go-devel\/src\/runtime\/asm_amd64.s:383\n#6 0x00000000005dd189 in runtime.newproc (siz=6129989, fn=0x5d88fb <runtime.rt0_go+315>) at <autogenerated>:1\n#7 0x0000000000000000 in ?? ()\n```\n\nExact output may vary.\n\n<div {...{\"className\":\"alert alert-warning g-type-body\"}}>\n\n**Note**: When executing Vault Enterprise within GDB, GDB must rewrite\nparts of the binary to permit stopping on the specified breakpoint. This\nresults in the HMAC of the contained BoringCrypto library changing,\nbreaking the FIPS integrity check. 
If execution were to be continued\nin the example above via the `continue` command, a message like the\nfollowing would be emitted:\n\n```\nContinuing.\nFIPS integrity test failed.\nExpected: 18d35ae031f649825a4269d68d2e62583d060a31d359690f97b9c8bf8120cdf75b405f74be7018094da7eb5261f2f86d0f481cc3b5a9c7c432268d94bf91aad9\nCalculated: 111502a3201de3b23f54b29d79ca6a1a754f94ecfc57a379444aac0d3ada68bf3c06834e6d84e68599bdf763e28e2c994fcdaeac84adabd180b59cad5fc980bb\n\nThread 1 \"vault\" received signal SIGABRT, Aborted.\n```\n\nThis is expected. Rerunning Vault without GDB (or with no breakpoints\nset -- e.g., `delete 1`) will still result in this function executing, but\nwith the FIPS integrity check succeeding.\n\n<\/div>\n\n### BoringCrypto certification\n\nBoringCrypto Version 7 uses the following FIPS 140-2 Certificate and software\nversion:\n\n - NIST CMVP [Certificate #3678](https:\/\/csrc.nist.gov\/Projects\/Cryptographic-Module-Validation-Program\/Certificate\/3678).\n - BoringSSL Release [`ae223d6138807a13006342edfeef32e813246b39`](https:\/\/github.com\/google\/boringssl\/commit\/ae223d6138807a13006342edfeef32e813246b39).\n\nThe following algorithms were certified as part of this release:\n\n - RSA in all key sizes greater than or equal to 2048 bits (tested at 2048\n   and 3072 bits),\n - ECDSA and ECDH with P-224 (not accessible from Vault), P-256, P-384, and\n   P-521,\n - AES symmetric encryption with 128\/192\/256-bit CBC, ECB, and CTR modes and\n   128\/256-bit GCM modes,\n - SHA-1 and SHA-2 (224, 256, 384, and 512-bit variants),\n - HMAC+SHA-2 with 224, 256, 384, and 512-bit variants of SHA-2,\n - TLSv1.0\/TLSv1.1 and TLSv1.2 KDFs,\n - AES-256-based CTR_DRBG CS-PRNG.\n\n### Leidos compliance\n\nSee the updated [Leidos Compliance Letter (V Entr v1.10.0+entrfips) for FIPS Inside](https:\/\/www.datocms-assets.com\/2885\/1653327036-boringcrypto_compliance_letter_signed.pdf) using the BoringCrypto libraries for more details. 
All [past letters](https:\/\/www.hashicorp.com\/vault-compliance) are also available for reference.\n\nWhat is the difference between Seal Wrap FIPS 140 compliance and the new FIPS Inside compliance?\n- Only the storage of sensitive entries (seal wrapped entries) is covered by FIPS-validated crypto when using Seal Wrapping.\n- The TLS connection to Vault by clients is not covered by FIPS-validated crypto when using Seal Wrapping (it is when using FIPS 140-2 Inside, per items 1, 2, 7, and 13 in the updated letter).\n- The generation of key material  wasn't using FIPS-validated crypto in the Seal Wrap version (for example, the PKI certificates: item 8 in the updated FIPS 140-2 Inside letter; or SSH module: item 10 in the updated FIPS 140-2 Inside letter).\n- With Seal Wrapping, some entries were protected with FIPS-validated crypto, but all crypto in Vault wasn't FIPS certified. With FIPS 140-2 Inside, by default (if the algorithm is certified), Vault will be using the certified crypto implementation.","site":"vault","answers_cleaned":"    layout  docs page title  Vault Enterprise FIPS 140 2 Inside description       Vault Enterprise features a special build with FIPS 140 2 support built into   the Vault binary  This can directly be used for FIPS compliance         FIPS 140 2 inside   include  alerts enterprise only mdx   Special builds of Vault Enterprise  marked with a  fips1402  feature name  include built in support for FIPS 140 2 compliance  Unlike using Seal Wrap for FIPS compliance  this binary has no external dependencies on a HSM   To use this feature  you must have an active or trial license for Vault Enterprise Plus  HSMs   To start a trial  contact  HashiCorp sales  mailto sales hashicorp com       Using FIPS 140 2 Vault enterprise  FIPS 140 2 Inside versions of Vault Enterprise behave like non FIPS versions of Vault  No restrictions are placed on algorithms  it is up to the operator to ensure Vault remains in a FIPS compliant mode of operation  This means 
configuring some Secrets Engines to permit a limited set of algorithms (e.g., forbidding ed25519-based CAs with PKI Secrets Engines).\n\nBecause Vault Enterprise may return secrets in plain text, it is important to ensure the Vault server's `listener` configuration section utilizes TLS. This ensures secrets are transmitted securely from Server to Client. Additionally, note that TLSv1.3 will not work with FIPS 140-2 Inside, as HKDF is not a certified primitive. If TLSv1.3 is desired, it is suggested to front Vault Server with a FIPS-certified load balancer.\n\nA non-exhaustive list of potential compliance issues include:\n\n- Using Ed25519 or ChaCha20-Poly1305 keys with the Transit Secrets Engine,\n- Using Ed25519 keys as CAs in the PKI or SSH Secrets Engines,\n- Using FF3-1 FPE in Transform Secrets Engine, or\n- Using a Derived Key (using HKDF) for Agent auto-authing or the Transit\n  Secrets Engine.\n- Using Entropy Augmentation: because BoringCrypto uses its internal\n  FIPS 140-2 approved RNG, it cannot mix entropy from other sources.\n  Attempting to use EA with FIPS 140-2 HSM enabled binaries will result\n  in failures such as `panic: boringcrypto: invalid code execution`.\n\nHashiCorp can only provide general guidance regarding using Vault Enterprise in a FIPS-compliant manner. We are not a NIST certified testing laboratory and thus organizations may need to consult an approved auditor for final information.\n\nThe FIPS 140-2 variant of Vault uses separate binaries; these are available from the following sources:\n\n- From the [HashiCorp Releases Page](https:\/\/releases.hashicorp.com\/vault),\n  ending with the `+ent.fips1402` and `+ent.hsm.fips1402` suffixes.\n- From the [Docker Hub hashicorp\/vault-enterprise-fips](https:\/\/hub.docker.com\/r\/hashicorp\/vault-enterprise-fips)\n  container repository.\n- From the [AWS ECR hashicorp\/vault-enterprise-fips](https:\/\/gallery.ecr.aws\/hashicorp\/vault-enterprise-fips)\n  container repository.\n- From the [Red Hat Access hashicorp\/vault-enterprise-fips](https:\/\/catalog.redhat.com\/software\/containers\/hashicorp\/vault-enterprise-fips\/628d50e37ff70c66a88517ea)\n  container repository.\n\n~> **Note**: When pulling the FIPS UBI-based images, note that they are ultimately designed for OpenShift certification; consider either adding the `--user root --cap-add IPC_LOCK` options (to allow Vault to enable mlock) or use the `--env SKIP_SETCAP=1` option (to disable mlock completely), as appropriate for your environment.\n\n## Usage restrictions\n\n### Migration restrictions\n\nHashiCorp **does not** support in-place migrations from non-FIPS Inside versions of Vault to FIPS Inside versions of Vault, regardless of version. A fresh cluster installation is required to receive support. We generally recommend avoiding direct upgrades and replicated migrations for several reasons:\n\n- Old entries remain encrypted with the old barrier key until overwritten;\n  this barrier key was likely not created by a FIPS library and thus\n  is not compliant.\n- Many secrets engines internally create keys: things like Transit create\n  and store keys (but don't store any data) inside of Vault; these would\n  still need to be accessible and rotated to a new, FIPS-compliant key.\n- Any PKI engines would have also created non-compliant keys, but rotation\n  of, say, a Root CA involves a concerted, non-Vault effort to accomplish\n  and must be done thoughtfully.\n\nAs such, HashiCorp cannot provide support for workloads that are affected either technically or via non-compliance that results from converting existing cluster workloads to the FIPS 140-2 Inside binary. Instead, we suggest leaving the existing cluster in place, and carefully consider migration of specific workloads to the FIPS-backed cluster.\n\n### Entropy augmentation restrictions\n\nEntropy Augmentation **does not** work with FIPS 140-2 Inside. The internal BoringCrypto RNG is FIPS 140-2 certified and does not accept entropy from other sources. On Vault 1.11.0 and later, attempting to use Entropy Augmentation will result in a warning ("Entropy Augmentation is not supported...") and Entropy Augmentation will be disabled.\n\n### TLS restrictions\n\nVault Enterprise's FIPS modifications include restrictions to supported TLS cipher suites and key information. Only the following cipher suites are allowed:\n\n- `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`,\n- `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`,\n- `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`,\n- `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`,\n- `TLS_RSA_WITH_AES_128_GCM_SHA256`, and\n- `TLS_RSA_WITH_AES_256_GCM_SHA384`.\n\nAdditionally, only the following key types are allowed in TLS chains of trust:\n\n- RSA 2048, 3072, 4096, 7680, and 8192-bit;\n- ECDSA P-256, P-384, and P-521.\n\nFinally, only TLSv1.2 or higher is supported in FIPS mode. These are in line with recent NIST guidance and recommendations.\n\n### Heterogeneous cluster deployments\n\nHashiCorp does not support mixed deployment scenarios within the same Vault cluster, e.g., mixing FIPS and non-FIPS Vault binary versions, or mixing FIPS Inside with FIPS Seal Wrap clusters. Cluster nodes must be of a single binary deployment type across the entire cluster. (Usage of Seal Wrap with the FIPS Inside binary is permitted.) Running a heterogeneous cluster is not permitted by FIPS, as components of the system are not compliant with FIPS.\n\n## Technical details\n\nVault Enterprise's FIPS 140-2 Inside binaries rely on a special version of the Go toolchain which includes a FIPS-validated BoringCrypto version. To ensure your version of Vault Enterprise includes FIPS support, after starting the server, make sure you see a line with `Fips: Enabled`, such as:\n\n```\n                    Fips: FIPS 140-2 Enabled, BoringCrypto version 7\n```\n\n~> **Note**: FIPS 140-2 Inside binaries depend on cgo, which requires that a GNU C Library (glibc) Linux distribution be used to run Vault. We've additionally opted to certify only on the AMD64 architecture at this time. This means these binaries will not work on Alpine Linux based containers.\n\n### FIPS 140-2 inside and external plugins\n\nVault Enterprise's built-in plugins are compiled into the Vault binary using the same Go toolchain version that compiled the core Vault; this results in these plugins having FIPS 140-2 compliance status as well. This same guarantee does not apply to external plugins.\n\n### Validating FIPS 140-2 inside\n\nTo validate that the FIPS 140-2 Inside binary correctly includes BoringCrypto, run `go tool nm` on the binary to get a symbol dump. On non-FIPS builds, searching for `goboringcrypto` in the output will yield no results, but on FIPS-enabled builds, you'll see many results with this:\n\n```shell-session\n$ go tool nm vault | grep -i goboringcrypto\n  4014d0 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_cbc_encrypt\n  4014f0 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_ctr128_encrypt\n  401520 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_decrypt\n  401540 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_encrypt\n  401560 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_set_decrypt_key\n  (additional lines elided)\n```\n\nAll FIPS cryptographic modules must execute startup tests. BoringCrypto uses the `_goboringcrypto_BORINGSSL_bcm_power_on_self_test` symbol for this. To ensure the Vault Enterprise binary is correctly executing startup tests, use [GDB](https:\/\/www.sourceware.org\/gdb\/) to stop execution on this function to ensure it gets hit:\n\n```shell-session\n$ gdb --args vault server -dev\n(GDB startup messages elided)\n(gdb) break _goboringcrypto_BORINGSSL_bcm_power_on_self_test\n(breakpoint location elided)\n(gdb) run\n(additional GDB output elided)\nThread 1 "vault" hit Breakpoint 1, 0x0000000000454950 in _goboringcrypto_BORINGSSL_bcm_power_on_self_test ()\n(gdb) backtrace\n#0  0x0000000000454950 in _goboringcrypto_BORINGSSL_bcm_power_on_self_test ()\n#1  0x00000000005da8f0 in runtime.asmcgocall () at \/usr\/local\/hashicorp-fips-go-devel\/src\/runtime\/asm_amd64.s:765\n#2  0x00007fffd07a5a18 in ?? ()\n#3  0x00007fffffffdf28 in ?? ()\n#4  0x000000000057ebce in runtime.persistentalloc.func1 () at \/usr\/local\/hashicorp-fips-go-devel\/src\/runtime\/malloc.go:1371\n#5  0x00000000005d8a49 in runtime.systemstack () at \/usr\/local\/hashicorp-fips-go-devel\/src\/runtime\/asm_amd64.s:383\n#6  0x00000000005dd189 in runtime.newproc (siz=6129989, fn=0x5d88fb <runtime.rt0_go+315>) at <autogenerated>:1\n#7  0x0000000000000000 in ?? ()\n```\n\n(Exact output may vary.)\n\n<div className="alert alert-warning g-type-body">\n\n**Note**: When executing Vault Enterprise within GDB, GDB must rewrite parts of the binary to permit stopping on the specified breakpoint. This results in the HMAC of the contained BoringCrypto library changing, breaking the FIPS integrity check. If execution were to be continued in the example above via the `continue` command, a message like the following would be emitted:\n\n```\nContinuing.\nFIPS integrity test failed.\nExpected: 18d35ae031f649825a4269d68d2e62583d060a31d359690f97b9c8bf8120cdf75b405f74be7018094da7eb5261f2f86d0f481cc3b5a9c7c432268d94bf91aad9\nCalculated: 111502a3201de3b23f54b29d79ca6a1a754f94ecfc57a379444aac0d3ada68bf3c06834e6d84e68599bdf763e28e2c994fcdaeac84adabd180b59cad5fc980bb\n\nThread 1 "vault" received signal SIGABRT, Aborted.\n```\n\nThis is expected. Rerunning Vault without GDB, or with no breakpoints set (e.g., `delete 1`), will still result in this function executing, but with the FIPS integrity check succeeding.\n\n<\/div>\n\n### BoringCrypto certification\n\nBoringCrypto Version 7 uses the following FIPS 140-2 Certificate and software version:\n\n- NIST CMVP [Certificate 3678](https:\/\/csrc.nist.gov\/Projects\/Cryptographic-Module-Validation-Program\/Certificate\/3678)\n- BoringSSL Release [ae223d6138807a13006342edfeef32e813246b39](https:\/\/github.com\/google\/boringssl\/commit\/ae223d6138807a13006342edfeef32e813246b39)\n\nThe following algorithms were certified as part of this release:\n\n- RSA in all key sizes greater than or equal to 2048 bits (tested at 2048\n  and 3072 bits),\n- ECDSA and ECDH with P-224 (not accessible from Vault), P-256, P-384, and\n  P-521,\n- AES symmetric encryption with 128\/192\/256-bit CBC, ECB, and CRT modes and\n  128\/256-bit GCM modes,\n- SHA-1 and SHA-2 (224, 256, 384, and 512-bit variants),\n- HMAC-SHA-2 with 224, 256, 384, and 512-bit variants of SHA-2,\n- TLSv1.0, TLSv1.1, and TLSv1.2 KDFs, and\n- AES-256 based CRT-DRBG CS-PRNG.\n\n### Leidos compliance\n\nSee the updated [Leidos Compliance Letter (V Entr_v1.10.0+entrfips) for FIPS Inside](https:\/\/www.datocms-assets.com\/2885\/1653327036-boringcrypto-compliance-letter-signed.pdf) using the BoringCrypto Libraries for more details. All [past letters](https:\/\/www.hashicorp.com\/vault-compliance) are also available for reference.\n\nWhat is the difference between Seal Wrap FIPS 140 compliance and the new FIPS Inside compliance?\n\n- Only the storage of sensitive entries (seal-wrapped entries) is covered by\n  FIPS-validated crypto when using Seal Wrapping.\n- The TLS connection to Vault by clients is not covered by FIPS-validated\n  crypto when using Seal Wrapping; it is when using FIPS 140-2 Inside (per\n  items 1, 2, 7, and 13 in the updated letter).\n- The generation of key material wasn't using FIPS-validated crypto in the\n  Seal Wrap version, for example, the PKI certificates (item 8 in the updated\n  FIPS 140-2 Inside letter) or SSH module (item 10 in the updated FIPS 140-2\n  Inside letter).\n- With Seal Wrapping, some entries were protected with FIPS-validated crypto,\n  but all crypto in Vault wasn't FIPS certified. With FIPS 140-2 Inside, by\n  default, if the algorithm is certified, Vault will be using the certified\n  crypto implementation."}
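The TLS restrictions above reduce FIPS-mode negotiation to six named cipher suites. As a minimal sketch (not part of Vault; the helper and set names are ours), a configuration check could reject any suite outside that allow-list:

```python
# Hypothetical allow-list check for the six TLS cipher suites the Vault
# Enterprise FIPS 140-2 Inside docs permit. Names follow the IANA registry.
FIPS_ALLOWED_CIPHERS = {
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
    "TLS_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_RSA_WITH_AES_256_GCM_SHA384",
}

def is_fips_allowed(cipher: str) -> bool:
    """Return True if the named suite is permitted in FIPS mode."""
    return cipher in FIPS_ALLOWED_CIPHERS

print(is_fips_allowed("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"))  # True
print(is_fips_allowed("TLS_CHACHA20_POLY1305_SHA256"))           # False
```

The same allow-list approach would extend to the permitted key types (RSA of at least 2048 bits; ECDSA P-256, P-384, P-521) when linting a listener configuration.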
{"questions":"vault Userpass auth method username and password page title Userpass Auth Methods The userpass auth method allows users to authenticate with Vault using a layout docs","answers":"---\nlayout: docs\npage_title: Userpass - Auth Methods\ndescription: >-\n  The \"userpass\" auth method allows users to authenticate with Vault using a\n  username and password.\n---\n\n# Userpass auth method\n\nThe `userpass` auth method allows users to authenticate with Vault using\na username and password combination.\n\nThe username\/password combinations are configured directly to the auth\nmethod using the `users\/` path. This method cannot read usernames and\npasswords from an external source.\n\nThe method lowercases all submitted usernames, e.g. `Mary` and `mary` are the\nsame entry.\n\nThis documentation assumes the Username & Password method is mounted at the default `\/auth\/userpass`\npath in Vault. Since it is possible to enable auth methods at any location,\nplease update your CLI calls accordingly with the `-path` flag.\n\n## Authentication\n\n### Via the CLI\n\n```shell-session\n$ vault login -method=userpass \\\n    username=mitchellh \\\n    password=foo\n```\n\n### Via the API\n\n```shell-session\n$ curl \\\n    --request POST \\\n    --data '{\"password\": \"foo\"}' \\\n    http:\/\/127.0.0.1:8200\/v1\/auth\/userpass\/login\/mitchellh\n```\n\nThe response will contain the token at `auth.client_token`:\n\n```json\n{\n  \"lease_id\": \"\",\n  \"renewable\": false,\n  \"lease_duration\": 0,\n  \"data\": null,\n  \"auth\": {\n    \"client_token\": \"c4f280f6-fdb2-18eb-89d3-589e2e834cdb\",\n    \"policies\": [\"admins\"],\n    \"metadata\": {\n      \"username\": \"mitchellh\"\n    },\n    \"lease_duration\": 0,\n    \"renewable\": false\n  }\n}\n```\n\n## Configuration\n\nAuth methods must be configured in advance before users or machines can\nauthenticate. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. 
Enable the userpass auth method:\n\n   ```shell-session\n   $ vault auth enable userpass\n   ```\n\n  Enable the `userpass` auth method at the default `auth\/userpass` path.\n  You can choose to enable the auth method at a different path with the `-path` flag:\n\n   ```shell-session\n   $ vault auth enable -path=<path> userpass\n   ```\n\n1. Configure it with users that are allowed to authenticate:\n\n   ```shell-session\n   $ vault write auth\/<userpass:path>\/users\/mitchellh \\\n       password=foo \\\n       policies=admins\n   ```\n\n   This creates a new user \"mitchellh\" with the password \"foo\" that will be\n   associated with the \"admins\" policy. This is the only configuration\n   necessary.\n\n## User lockout\n\n@include 'user-lockout.mdx'\n\n## API\n\nThe Userpass auth method has a full HTTP API. Please see the [Userpass auth\nmethod API](\/vault\/api-docs\/auth\/userpass) for more details.","site":"vault"}
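The userpass login response documented above carries the token at `auth.client_token`. A small sketch (ours, not part of Vault) that parses the documented sample payload and pulls that field out:

```python
import json

# Sample response body from the userpass docs; the token value is the
# documented example, not a live credential.
sample = """
{
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": null,
  "auth": {
    "client_token": "c4f280f6-fdb2-18eb-89d3-589e2e834cdb",
    "policies": ["admins"],
    "metadata": {"username": "mitchellh"},
    "lease_duration": 0,
    "renewable": false
  }
}
"""

def extract_token(payload: str) -> str:
    """Pull the client token out of a userpass login response body."""
    return json.loads(payload)["auth"]["client_token"]

print(extract_token(sample))  # c4f280f6-fdb2-18eb-89d3-589e2e834cdb
```

In a real client you would pass this token in the `X-Vault-Token` header on subsequent requests.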
{"questions":"vault OCI Identity credentials page title OCI Auth method layout docs OCI auth method The OCI Auth method for Vault enables authentication and authorization using","answers":"---\nlayout: docs\npage_title: OCI Auth method\ndescription: >-\n  The OCI Auth method for Vault enables authentication and authorization using\n  OCI Identity credentials.\n---\n\n# OCI auth method\n\nThe OCI Auth method for Vault enables authentication and authorization using [OCI Identity](https:\/\/docs.cloud.oracle.com\/iaas\/Content\/Identity\/Concepts\/overview.htm) credentials.\n\nThis plugin is developed in a separate GitHub repository at https:\/\/github.com\/hashicorp\/vault-plugin-auth-oci,\nbut is automatically bundled in Vault releases. Please file all feature requests, bugs, and pull requests\nspecific to the OCI plugin under that repository.\n\n## OCI roles\n\nThe OCI Auth method authorizes using roles, as shown here:\n![Role Based Authorization](\/img\/oci\/oci-role-based-authz.png)\n\nThere is a many-to-many relationship between various items seen above:\n\n- A user can belong to many identity groups.\n- An identity group can contain many users.\n- A compute instance can belong to many dynamic groups.\n- A dynamic group can contain many compute instances.\n- A role defined in Vault can be mapped to many groups and dynamic groups.\n- A single role can be mapped to both groups and dynamic groups.\n- A Vault policy can be mapped from different roles.\n\nThe `ocid_list` field of a role is a list of [Group or Dynamic Group](https:\/\/docs.cloud.oracle.com\/iaas\/Content\/Identity\/Concepts\/overview.htm#one) OCIDs. 
Only members of these Groups or Dynamic Groups are allowed to take this role.\n\n## Configuration\n\n### Configure the OCI tenancy to run Vault\n\nThe OCI Auth method requires [instance principal](https:\/\/blogs.oracle.com\/cloud-infrastructure\/announcing-instance-principals-for-identity-and-access-management) credentials to call OCI Identity APIs, and therefore the Vault server needs to run inside an OCI compute instance.\n\nFollow the steps below to add policies to your tenancy that allow the OCI compute instance in which the Vault server is running to call certain OCI Identity APIs.\n\n1. In your tenancy, [launch the compute instance(s)](https:\/\/docs.cloud.oracle.com\/iaas\/Content\/Compute\/Tasks\/launchinginstance.htm) that will run the Vault server. The [VCN](https:\/\/docs.cloud.oracle.com\/iaas\/Content\/Network\/Tasks\/managingVCNs.htm) in which you launch the Compute Instance should have a [Service Gateway](https:\/\/docs.cloud.oracle.com\/iaas\/Content\/Network\/Tasks\/servicegateway.htm) added to it.\n1. Make a note of the Oracle Cloud Identifier (OCID) of the compute instance(s) running Vault.\n1. In your tenancy, [create a dynamic group](https:\/\/docs.cloud.oracle.com\/iaas\/Content\/Identity\/Tasks\/managingdynamicgroups.htm) with the name VaultDynamicGroup to contain the compute instance(s).\n1. Add the OCID of the compute instance(s) to the dynamic group.\n1. 
Add the following policies to the root compartment of your tenancy that allow the dynamic group to call specific Identity APIs.\n\n   ```plaintext\n   allow dynamic-group VaultDynamicGroup to {AUTHENTICATION_INSPECT} in tenancy\n   allow dynamic-group VaultDynamicGroup to {GROUP_MEMBERSHIP_INSPECT} in tenancy\n   ```\n\n### Configure the OCI auth method\n\nFirst, enable the OCI Auth method.\n\n```shell-session\n$ vault auth enable oci\n```\n\nThen, configure your home tenancy in Vault so that only users or instances from your tenancy will be allowed to log into Vault through the OCI Auth method.\n\n1. Create a file named `hometenancyid.json` with the below content using the\n   tenancy OCID. To find your tenancy OCID, see\n   the [Oracle Cloud IDs documentation](https:\/\/docs.cloud.oracle.com\/iaas\/Content\/General\/Concepts\/identifiers.htm).\n\n   ```json\n   { \"home_tenancy_id\": \"your tenancy ocid here\" }\n   ```\n\n1. Configure the `home_tenancy_id` parameter in Vault.\n\n   ```shell-session\n   $ curl --header \"X-Vault-Token: $roottoken\" --request POST \\\n      --data @hometenancyid.json \\\n      http:\/\/127.0.0.1:8200\/v1\/auth\/oci\/config\n   ```\n\nContinue by creating a Vault administrator role in the OCI Auth method. The `vaultadminrole` allows the administrator of Vault to log into Vault and grants them the permissions allowed in the policy.\n\n1. Create a file named `vaultadminrole.json` with the below contents. 
Replace the `ocid_list` with the\nGroup or Dynamic Group OCIDs in your tenancy that have users or instances that you want to take the Vault admin role.\n\n   - For testing in dev mode, you can add the OCID of the dynamic group previously created.\n   - In production, add only the OCID of groups and dynamic groups that can take the admin role in Vault.\n\n      ```json\n      {\n         \"token_policies\": \"vaultadminpolicy\",\n         \"token_ttl\": \"1800\",\n         \"ocid_list\": \"ocid1.group.oc1..aaaaaaaaiqnblimpvmegkqh3bxilrdvjobr7qd223g275idcqhexamplefq,ocid1.dynamicgroup.oc1..aaaaaaaa5hmfyrdaxvmt52ekju5n7ffamn2pdvxaq6esb2vzzoduexamplea\"\n      }\n      ```\n\n1. Create the Vault admin role:\n\n   ```shell-session\n   $ curl --header \"X-Vault-Token: $roottoken\" --request POST \\\n      --data @vaultadminrole.json \\\n      http:\/\/127.0.0.1:8200\/v1\/auth\/oci\/role\/vaultadminrole\n   ```\n\n### Log in to Vault using OCI auth\n\nBoth methods described below return a response that includes a token with the previously added policy.\n\nYou can use the received token to read or write secrets, and add roles per the instructions in [\/docs\/secrets\/kv\/kv-v1](\/vault\/docs\/secrets\/kv\/kv-v1).\n\nFor both methods to work, the VAULT_ADDR export has to be specified, as shown earlier in this page; when testing in dev mode on the same compute instance that Vault is running on, this is [http:\/\/127.0.0.1:8200](http:\/\/127.0.0.1:8200\/).\n\n#### Log in with instance principals\n\n```shell-session\n$ vault login -method=oci auth_type=instance role=vaultadminrole\n```\n\nThis assumes that the compute instance you are logging in from is part of a dynamic group that was added to the Vault admin role. If logging in from a different compute instance than the one on which Vault is running, that instance must have connectivity to the endpoint specified in VAULT_ADDR. 
\n\n#### Log in with an API key\n\n```shell-session\n$ vault login -method=oci auth_type=apikey role=vaultadminrole\n```\n\nThis assumes you have an OCI API key.\n\nIf you don't have an API key:\n\n1. [Add an API Key](https:\/\/docs.cloud.oracle.com\/iaas\/Content\/API\/Concepts\/apisigningkey.htm) for a user in the console. This user should be part of a group that has previously been added to the Vault admin role.\n1. Create the config file `~\/.oci\/config` using the user's credentials as detailed in [https:\/\/docs.cloud.oracle.com\/iaas\/Content\/API\/Concepts\/sdkconfig.htm](https:\/\/docs.cloud.oracle.com\/iaas\/Content\/API\/Concepts\/sdkconfig.htm).\n1. Ensure that the region in the config matches the region of the compute instance that is running Vault.\n\n### Manage roles in the OCI auth method\n\n1.  Similar to creating the Vault administrator role, create other roles mapped to other policies. Create a file named `devrole.json` with the following contents. Replace `ocid_list` with Groups or Dynamic Groups in your tenancy.\n\n   ```json\n   {\n   \"token_policies\": \"devpolicy\",\n   \"token_ttl\": \"1500\",\n   \"ocid_list\": \"ocid1.group.oc1..aaaaaaaaiqnblimpvmgrouplrdvjobr7qd223g275idcqhexamplefq,ocid1.dynamicgroup.oc1..aaaaaaaa5hmfyrdaxvmdg2u5n7ffamn2pdvxaq6esb2vzzoduexamplea\"\n   }\n   ```\n\n2.  Add the role.\n\n   ```shell-session\n   $ curl --header \"X-Vault-Token: $token\" --request POST \\\n      --data @devrole.json \\\n      http:\/\/127.0.0.1:8200\/v1\/auth\/oci\/role\/devrole\n   ```\n\n3.  
Log in to Vault assuming the devrole.\n\n   ```shell-session\n   $ vault login -method=oci auth_type=instance role=devrole\n   ```\n\n## Authentication\n\nYou can authenticate with the Vault CLI or by communicating with the API directly.\n\n### Via the CLI\n\nWith Compute Instance credentials:\n\n```shell-session\n$ vault login -method=oci auth_type=instance role=devrole\n```\n\nWith User credentials ([SDK configuration](https:\/\/docs.cloud.oracle.com\/iaas\/Content\/API\/Concepts\/sdkconfig.htm)):\n\n```shell-session\n$ vault login -method=oci auth_type=apikey role=devrole\n```\n\n### Via the API\n\n1.  Sign the following request with your OCI credentials and obtain the signing string and the authorization header. Replace the endpoint, scheme (http or https), and role of the URL corresponding to your Vault configuration. For more information on signing, see [signing the request](https:\/\/docs.cloud.oracle.com\/iaas\/Content\/API\/Concepts\/signingrequests.htm).\n\n    http:\/\/127.0.0.1\/v1\/auth\/oci\/login\/devrole\n\n1.  
On signing the above request, you will get the following headers.\n\n   The signing string (line breaks inserted into the (request-target) header for easier reading):\n\n   <CodeBlockConfig hideClipboard>\n\n   ```text\n   date: Fri, 22 Aug 2019 21:02:19 GMT\n   (request-target): get \/v1\/auth\/oci\/login\/devrole\n   host: 127.0.0.1\n   ```\n\n   <\/CodeBlockConfig>\n\n   The Authorization header:\n\n   <CodeBlockConfig hideClipboard>\n\n   ```text\n   Signature version=\"1\",headers=\"date (request-target) host\",keyId=\"ocid1.t\n   enancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq\/\n   ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3ryn\n   jq\/73:61:a2:21:67:e0:df:be:7e:4b:93:1e:15:98:a5:b7\",algorithm=\"rsa-sha256\n   \",signature=\"GBas7grhyrhSKHP6AVIj\/h5\/Vp8bd\/peM79H9Wv8kjoaCivujVXlpbKLjMPe\n   DUhxkFIWtTtLBj3sUzaFj34XE6YZAHc9r2DmE4pMwOAy\/kiITcZxa1oHPOeRheC0jP2dqbTll\n   8fmTZVwKZOKHYPtrLJIJQHJjNvxFWeHQjMaR7M=\"\n   ```\n\n   <\/CodeBlockConfig>\n\n1.  Add the signed headers to the \"request_headers\" field and make the actual request to Vault. 
For example:\n\n   <CodeBlockConfig hideClipboard>\n\n   ```sh\n   POST http:\/\/127.0.0.1\/v1\/auth\/oci\/login\/devrole\n      \"request_headers\": {\n         \"date\": [\"Fri, 22 Aug 2019 21:02:19 GMT\"],\n         \"(request-target)\": [\"get \/v1\/auth\/oci\/login\/devrole\"],\n         \"host\": [\"127.0.0.1\"],\n         \"content-type\": [\"application\/json\"],\n         \"authorization\": [\"Signature algorithm=\\\"rsa-sha256\\\",headers=\\\"date (request-target) host\\\",keyId=\\\"ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq\/ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq\/73:61:a2:21:67:e0:df:be:7e:4b:93:1e:15:98:a5:b7\\\",signature=\\\"GBas7grhyrhSKHP6AVIj\/h5\/Vp8bd\/peM79H9Wv8kjoaCivujVXlpbKLjMPeDUhxkFIWtTtLBj3sUzaFj34XE6YZAHc9r2DmE4pMwOAy\/kiITcZxa1oHPOeRheC0jP2dqbTll8fmTZVwKZOKHYPtrLJIJQHJjNvxFWeHQjMaR7M=\\\",version=\\\"1\\\"\"]\n      }\n   ```\n\n   <\/CodeBlockConfig>\n\n## API\n\nThe OCI Auth method has a full HTTP API. 
Please see the [API docs](\/vault\/api-docs\/auth\/oci) for more details.","site":"vault","answers_cleaned":"
request target      get  v1 auth oci login devrole              host     127 0 0 1              content type     application json              authorization     Signature algorithm   rsa sha256   headers   date  request target  host   keyId   ocid1 tenancy oc1  aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq ocid1 user oc1  aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq 73 61 a2 21 67 e0 df be 7e 4b 93 1e 15 98 a5 b7   signature   GBas7grhyrhSKHP6AVIj h5 Vp8bd peM79H9Wv8kjoaCivujVXlpbKLjMPeDUhxkFIWtTtLBj3sUzaFj34XE6YZAHc9r2DmE4pMwOAy kiITcZxa1oHPOeRheC0jP2dqbTll8fmTZVwKZOKHYPtrLJIJQHJjNvxFWeHQjMaR7M    version   1                          CodeBlockConfig      API  The OCI Auth method has a full HTTP API  Please see the  API docs   vault api docs auth oci  for more details"}
{"questions":"vault Google Cloud auth method page title Google Cloud Auth Methods Google Cloud service accounts layout docs The gcp auth method allows users and machines to authenticate to Vault using","answers":"---\nlayout: docs\npage_title: Google Cloud - Auth Methods\ndescription: |-\n  The \"gcp\" auth method allows users and machines to authenticate to Vault using\n  Google Cloud service accounts.\n---\n\n# Google Cloud auth method\n\nThe `gcp` auth method allows Google Cloud Platform entities to authenticate to\nVault. Vault treats Google Cloud as a trusted third party and verifies\nauthenticating entities against the Google Cloud APIs. This backend allows for\nauthentication of:\n\n- Google Cloud IAM service accounts\n- Google Compute Engine (GCE) instances\n\nThis backend focuses on identities specific to Google _Cloud_ and does not\nsupport authenticating arbitrary Google or Google Workspace users or generic OAuth\nagainst Google.\n\nThis plugin is developed in a separate GitHub repository at\n[hashicorp\/vault-plugin-auth-gcp][repo],\nbut is automatically bundled in Vault releases. 
Please file all feature\nrequests, bugs, and pull requests specific to the GCP plugin under that\nrepository.\n\n## Authentication\n\n### Via the CLI helper\n\nVault includes a CLI helper that obtains a signed JWT locally and sends the\nrequest to Vault.\n\n```shell-session\n# Authentication to vault outside of Google Cloud\n$ vault login -method=gcp \\\n    role=\"my-role\" \\\n    service_account=\"authenticating-account@my-project.iam.gserviceaccount.com\" \\\n    jwt_exp=\"15m\" \\\n    credentials=@path\/to\/signer\/credentials.json\n```\n\n```shell-session\n# Authentication to vault inside of Google Cloud\n$ vault login -method=gcp role=\"my-role\"\n```\n\nFor more usage information, run `vault auth help gcp`.\n\n-> **Note:** The `project` parameter has been removed in Vault 1.5.9+, 1.6.5+, and 1.7.2+.\nIt is no longer needed for configuration and will be ignored if provided.\n\n### Via the CLI\n\n```shell-session\n$ vault write -field=token auth\/gcp\/login \\\n    role=\"my-role\" \\\n    jwt=\"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...\"\n```\n\nSee [Generating JWTs](#generating-jwts) for ways to obtain the JWT token.\n\n### Via the API\n\n```shell-session\n$ curl \\\n    --request POST \\\n    --data '{\"role\":\"my-role\", \"jwt\":\"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...\"}' \\\n    http:\/\/127.0.0.1:8200\/v1\/auth\/gcp\/login\n```\n\nSee [API docs][api-docs] for expected response.\n\n## Configuration\n\nAuth methods must be configured in advance before users or machines can\nauthenticate. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the Google Cloud auth method:\n\n   ```shell-session\n   $ vault auth enable gcp\n   ```\n\n1. 
Configure the auth method credentials if Vault is not running on Google Cloud:\n\n   ```text\n   $ vault write auth\/gcp\/config \\\n       credentials=@\/path\/to\/credentials.json\n   ```\n\n   If you are using instance credentials or want to specify credentials via\n   an environment variable, you can skip this step. To learn more, see the\n   [Google Cloud Credentials](#gcp-credentials) section below.\n\n   -> **Note**: If you're using a [Private Google Access](https:\/\/cloud.google.com\/vpc\/docs\/configure-private-google-access)\n   environment, you will additionally need to configure your environment\u2019s custom endpoints\n   via the [custom_endpoint](\/vault\/api-docs\/auth\/gcp#custom_endpoint) configuration parameter.\n\n   In some cases, you cannot set sensitive IAM security credentials in your\n   Vault configuration. For example, your organization may require that all\n   security credentials are short-lived or explicitly tied to a machine identity.\n\n   To provide IAM security credentials to Vault, we recommend using Vault\n   [plugin workload identity federation](#plugin-workload-identity-federation-wif)\n   (WIF) as shown below.\n\n1. 
Alternatively, configure the audience claim value and the service account email to assume for plugin workload identity federation:\n\n   ```text\n    $ vault write auth\/gcp\/config \\\n        identity_token_audience=\"<TOKEN AUDIENCE>\" \\\n        service_account_email=\"<SERVICE ACCOUNT EMAIL>\"\n   ```\n\n   Vault's identity token provider signs the plugin identity token JWT internally.\n   If a trust relationship exists between Vault and GCP through WIF, the auth\n   method can exchange the Vault identity token for a\n   [federated access token](https:\/\/cloud.google.com\/docs\/authentication\/token-types#access).\n\n   To configure a trusted relationship between Vault and GCP:\n       - You must configure the [identity token issuer backend](\/vault\/api-docs\/secret\/identity\/tokens#configure-the-identity-tokens-backend)\n         for Vault.\n       - GCP must have a\n         [workload identity pool and provider](https:\/\/cloud.google.com\/iam\/docs\/manage-workload-identity-pools-providers)\n         configured with information about the fully qualified and network-reachable\n         issuer URL for the Vault plugin's\n         [identity token provider](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-well-known-configurations).\n\n   Establishing a trusted relationship between Vault and GCP ensures that GCP\n   can fetch JWKS\n   [public keys](\/vault\/api-docs\/secret\/identity\/tokens#read-active-public-keys)\n   and verify the plugin identity token signature.\n\n1. 
Create a named role:\n\n   For an `iam`-type role:\n\n   ```shell-session\n   $ vault write auth\/gcp\/role\/my-iam-role \\\n       type=\"iam\" \\\n       policies=\"dev,prod\" \\\n       bound_service_accounts=\"my-service@my-project.iam.gserviceaccount.com\"\n   ```\n\n   For a `gce`-type role:\n\n   ```shell-session\n   $ vault write auth\/gcp\/role\/my-gce-role \\\n       type=\"gce\" \\\n       policies=\"dev,prod\" \\\n       bound_projects=\"my-project1,my-project2\" \\\n       bound_zones=\"us-east1-b\" \\\n       bound_labels=\"foo:bar,zip:zap\" \\\n       bound_service_accounts=\"my-service@my-project.iam.gserviceaccount.com\"\n   ```\n\n   Note that `bound_service_accounts` is only required for `iam`-type roles.\n\n   For the complete list of configuration options for each type, please see the\n   [API documentation][api-docs].\n\n## GCP credentials\n\nThe Google Cloud Vault auth method uses the official Google Cloud Golang SDK.\nThis means it supports the common ways of [providing credentials to Google\nCloud][cloud-creds].\n\n1. The environment variable `GOOGLE_APPLICATION_CREDENTIALS`. This is specified\n   as the **path** to a Google Cloud credentials file, typically for a service\n   account. If this environment variable is present, the resulting credentials are\n   used. If the credentials are invalid, an error is returned.\n\n1. Default instance credentials. 
When no environment variable is present, the\n   default service account credentials are used.\n\nFor more information on service accounts, please see the [Google Cloud Service\nAccounts documentation][service-accounts].\n\nTo use this auth method, the service account must have the following minimum\nscope:\n\n```text\nhttps:\/\/www.googleapis.com\/auth\/cloud-platform\n```\n\n### Required GCP permissions\n\n#### Enabled GCP APIs\n\nThe GCP project must have the following APIs enabled:\n\n- [iam.googleapis.com](https:\/\/console.cloud.google.com\/flows\/enableapi?apiid=iam.googleapis.com)\n  for `iam` and `gce` type roles.\n- [compute.googleapis.com](https:\/\/console.cloud.google.com\/flows\/enableapi?apiid=compute.googleapis.com)\n  for `gce` type roles.\n- [cloudresourcemanager.googleapis.com](https:\/\/console.cloud.google.com\/flows\/enableapi?apiid=cloudresourcemanager.googleapis.com)\n  for `iam` and `gce` type roles that set [`add_group_aliases`](\/vault\/api-docs\/auth\/gcp#add_group_aliases) to true.\n\n#### Vault server permissions\n\n**For `iam`-type Vault roles**, the service account [`credentials`](\/vault\/api-docs\/auth\/gcp#credentials)\ngiven to Vault can have the following role:\n\n```text\nroles\/iam.serviceAccountKeyAdmin\n```\n\n**For `gce`-type Vault roles**, the service account [`credentials`](\/vault\/api-docs\/auth\/gcp#credentials)\ngiven to Vault can have the following role:\n\n```text\nroles\/compute.viewer\n```\n\nIf you instead wish to create a custom role with only the exact GCP permissions\nrequired, use the following list of permissions:\n\n```text\niam.serviceAccounts.get\niam.serviceAccountKeys.get\ncompute.instances.get\ncompute.instanceGroups.list\n```\n\nThese allow Vault to:\n\n- verify the service account, either directly authenticating or associated with\n  authenticating GCE instance, exists\n- get the corresponding public keys for verifying JWTs signed by service account\n  private keys.\n- verify authenticating GCE 
instances exist\n- compare bound fields for GCE roles (zone\/region, labels, or membership\n  in given instance groups)\n\nIf you are using Group Aliases as described below, you will also need to add the\n`resourcemanager.projects.get` permission.\n\n#### Permissions for authenticating against Vault\n\nIf you are authenticating to Vault from Google Cloud, you can skip the following step as\nVault will generate and present the identity token of the service account configured\non the instance or the pod.\n\nNote that the previously mentioned permissions are given to the _Vault servers_.\nThe IAM service account or GCE instance that is **authenticating against Vault**\nmust have the following role:\n\n```text\nroles\/iam.serviceAccountTokenCreator\n```\n\n!> **WARNING:** Make sure this role is only applied so your service account can\nimpersonate itself. If this role is applied GCP project-wide, this will allow the service\naccount to impersonate any service account in the GCP project where it resides.\nSee [Managing service account impersonation](https:\/\/cloud.google.com\/iam\/docs\/impersonating-service-accounts)\nfor more information.\n\n## Plugin Workload Identity Federation (WIF)\n\n<EnterpriseAlert product=\"vault\" \/>\n\nThe GCP auth method supports the plugin WIF workflow and has a source of identity called\na plugin identity token. 
A plugin identity token is a JWT that is signed internally by the Vault\n[plugin identity token issuer](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).\n\nIf there is a trust relationship configured between Vault and GCP through\n[workload identity federation](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation),\nthe auth method can exchange its identity token for short-lived access tokens needed to\nperform its actions.\n\nExchanging identity tokens for access tokens lets the GCP auth method\noperate without configuring explicit access to sensitive IAM security\ncredentials.\n\nTo configure the auth method to use plugin WIF:\n\n1. Ensure that Vault [openid-configuration](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-openid-configuration)\nand [public JWKS](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-public-jwks)\nAPIs are network-reachable by GCP. We recommend using an API proxy or gateway\nif you need to limit Vault API exposure.\n\n1. Create a\n    [workload identity pool and provider](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers#create-pool-provider)\n    in GCP.\n    1. The provider URL **must** point at your [Vault plugin identity token issuer](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the\n    `\/.well-known\/openid-configuration` suffix removed. For example:\n    `https:\/\/host:port\/v1\/identity\/oidc\/plugins`.\n    1. Uniquely identify the recipient of the plugin identity token as the audience.\n    You can use the [default audience](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers#prepare)\n    for the identity pool or a custom value less than 256 characters.\n\n1. 
[Authenticate a workload](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers#authenticate)\nin GCP by granting the identity pool access to a dedicated service account using service account impersonation.\nFilter requests using the unique `sub` claim issued by plugin identity tokens so the GCP Auth method can\nimpersonate the service account. `sub` claims have the form: `plugin-identity:<NAMESPACE>:auth:<GCP_AUTH_MOUNT_ACCESSOR>`.\n\n1. Configure the GCP auth method with the OIDC audience value and service account\nemail.\n\n   ```shell-session\n   $ vault write auth\/gcp\/config \\\n     identity_token_audience=\"\/\/iam.googleapis.com\/projects\/410449834127\/locations\/global\/workloadIdentityPools\/vault-gcp-auth-43777a63\/providers\/vault-gcp-auth-wif-provider\" \\\n     service_account_email=\"vault-plugin-wif-auth@hc-b712f250b4e04cacbadd258a90b.iam.gserviceaccount.com\"\n   ```\n\nYour auth method can now use plugin WIF for its configuration credentials.\nBy default, WIF [credentials](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation#access_management)\nhave a time-to-live of 1 hour and automatically refresh when they expire.\n\nPlease see the [API documentation](\/vault\/api-docs\/auth\/gcp#configure)\nfor more details on the fields associated with plugin WIF.\n\n## Group aliases\n\nAs of Vault 1.0, roles can specify an `add_group_aliases` boolean parameter\nthat adds [group aliases][identity-group-aliases] to the auth response. These\naliases can aid in building reusable policies since they are available as\ninterpolated values in Vault's policy engine. 
Once enabled, the auth response\nwill include the following aliases:\n\n```json\n[\n  \"project-$PROJECT_ID\",\n  \"folder-$SUBFOLDER_ID\",\n  \"folder-$FOLDER_ID\",\n  \"organization-$ORG_ID\"\n]\n```\n\nIf you are using a custom role for Vault server, you will need to add the\n`resourcemanager.projects.get` permission to your custom role.\n\n## Implementation details\n\nThis section describes the implementation details for how Vault communicates\nwith Google Cloud to authenticate and authorize JWT tokens. This information is\nprovided for those who are curious, but these details are not\nrequired knowledge for using the auth method.\n\n### IAM login\n\nIAM login applies only to roles of type `iam`. The Vault authentication workflow\nfor IAM service accounts looks like this:\n\n[![Vault Google Cloud IAM Login Workflow](\/img\/vault-gcp-iam-auth-workflow.svg)](\/img\/vault-gcp-iam-auth-workflow.svg)\n\n1. The client generates a signed JWT using the Service Account Credentials\n   [`projects.serviceAccounts.signJwt`][signjwt-method] API method. For examples\n   of how to do this, see the [Generating JWTs](#generating-jwts) section.\n\n2. The client sends this signed JWT to Vault along with a role name.\n\n3. Vault extracts the `kid` header value, which contains the ID of the\n   key-pair used to generate the JWT, and the `sub` ID\/email to find the service\n   account key. If the service account does not exist or the key is not linked to\n   the service account, Vault denies authentication.\n\n4. Vault authorizes the confirmed service account against the given role. If\n   that is successful, a Vault token with the proper policies is returned.\n\n### GCE login\n\nGCE login only applies to roles of type `gce` and **must be completed on an\ninfrastructure running on Google Cloud**. 
These steps will not work from your\nlocal laptop or another cloud provider.\n\n[![Vault Google Cloud GCE Login Workflow](\/img\/vault-gcp-gce-auth-workflow.svg)](\/img\/vault-gcp-gce-auth-workflow.svg)\n\n1. The client obtains an [instance identity metadata token][instance-identity]\n   on a GCE instance.\n\n2. The client sends this JWT to Vault along with a role name.\n\n3. Vault extracts the `kid` header value, which contains the ID of the\n   key-pair used to generate the JWT, to find the OAuth2 public cert to verify\n   this JWT.\n\n4. Vault authorizes the confirmed instance against the given role, ensuring\n   the instance matches the bound zones, regions, or instance groups. If that is\n   successful, a Vault token with the proper policies is returned.\n\n## Generating JWTs\n\nThis section details the various methods and examples for obtaining JWT\ntokens.\n\n### Service account credentials API\n\nThis section describes how to use the GCP Service Account Credentials [API method][signjwt-method]\ndirectly to generate the signed JWT with the claims that Vault expects. Note that the CLI\ndoes this process for you and is much easier; there is very little\nreason to do this yourself.\n\n#### curl example\n\nVault requires the following minimum claim set:\n\n```json\n{\n  \"sub\": \"$SERVICE_ACCOUNT_EMAIL_OR_ID\",\n  \"aud\": \"vault\/$ROLE\",\n  \"exp\": \"$EXPIRATION\"\n}\n```\n\nFor the API method, providing the expiration claim `exp` is required. If it is omitted,\nit will not be added automatically and Vault will deny authentication. Expiration must\nbe specified as a [NumericDate](https:\/\/tools.ietf.org\/html\/rfc7519#section-2) value\n(seconds from Epoch). This value must be before the max JWT expiration allowed for a\nrole. 
This defaults to 15 minutes and cannot be more than 1 hour.\n\nIf a user generates a JWT that expires more than 15 minutes out while the gcp role has `max_jwt_exp` set to the default, the login request fails with the error `role requires that service account JWTs expire within 900 seconds`. In this case, the user must create a new signed JWT with a shorter expiration, or set `max_jwt_exp` to a higher value in the gcp role.\n\nOnce you have all this information, the JWT can be signed using curl and\n[oauth2l](https:\/\/github.com\/google\/oauth2l):\n\n```shell-session\nROLE=\"my-role\"\nSERVICE_ACCOUNT=\"service-account@my-project.iam.gserviceaccount.com\"\nOAUTH_TOKEN=\"$(oauth2l header cloud-platform)\"\nEXPIRATION=\"<your_token_expiration>\"\nJWT_CLAIM=\"{\\\\\\\"aud\\\\\\\":\\\\\\\"vault\/${ROLE}\\\\\\\", \\\\\\\"sub\\\\\\\": \\\\\\\"${SERVICE_ACCOUNT}\\\\\\\", \\\\\\\"exp\\\\\\\": ${EXPIRATION}}\"\n\n$ curl \\\n  --header \"${OAUTH_TOKEN}\" \\\n  --header \"Content-Type: application\/json\" \\\n  --request POST \\\n  --data \"{\\\"payload\\\": \\\"${JWT_CLAIM}\\\"}\" \\\n  \"https:\/\/iamcredentials.googleapis.com\/v1\/projects\/-\/serviceAccounts\/${SERVICE_ACCOUNT}:signJwt\"\n```\n\n#### gcloud example\n\nYou can also do this through the (currently beta) gcloud command. 
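The gcloud route below reads the claims from a file. As a minimal sketch (not from the official docs; the file name `claims.json` and the role/service-account values are placeholders), an input containing the required `exp` claim can be generated like so:

```shell-session
# Build the claims input for `gcloud beta iam service-accounts sign-jwt`.
# ROLE and SERVICE_ACCOUNT are placeholder values; adjust for your project.
ROLE="my-role"
SERVICE_ACCOUNT="service-account@my-project.iam.gserviceaccount.com"
EXPIRATION=$(( $(date +%s) + 900 ))  # NumericDate 15 minutes from now

cat > claims.json <<EOF
{
  "sub": "${SERVICE_ACCOUNT}",
  "aud": "vault/${ROLE}",
  "exp": ${EXPIRATION}
}
EOF
```

The resulting `claims.json` is then passed to `sign-jwt` as the claims input.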
Note that you will\nbe required to provide the expiration claim `exp` as part of the JWT input to the\ncommand.\n\n```shell-session\n$ gcloud beta iam service-accounts sign-jwt $INPUT_JWT_CLAIMS $OUTPUT_JWT_FILE \\\n    --iam-account=service-account@my-project.iam.gserviceaccount.com \\\n    --project=my-project\n```\n\n#### Golang example\n\nRead more on the\n[Google Open Source blog](https:\/\/opensource.googleblog.com\/2017\/08\/hashicorp-vault-and-google-cloud-iam.html).\n\n### GCE\n\nYou can autogenerate this token in Vault versions 1.8.2 or higher.\n\nGCE tokens **can only be generated from a GCE instance**.\n\n1.  Vault can automatically discover the identity token on a GCE\/GKE instance. This simplifies\n    authenticating to Vault like so:\n\n    ```shell-session\n    $ vault login \\\n      -method=gcp \\\n      role=\"my-gce-role\"\n    ```\n\n1.  The JWT token can also be obtained from the `\"service-accounts\/default\/identity\"` endpoint of an\n    instance's metadata server.\n\n    #### curl example\n\n    ```shell-session\n    ROLE=\"my-gce-role\"\n\n    $ curl \\\n      --header \"Metadata-Flavor: Google\" \\\n      --get \\\n      --data-urlencode \"audience=http:\/\/vault\/${ROLE}\" \\\n      --data-urlencode \"format=full\" \\\n      \"http:\/\/metadata\/computeMetadata\/v1\/instance\/service-accounts\/default\/identity\"\n    ```\n\n## API\n\nThe GCP Auth Plugin has a full HTTP API. 
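When calling the login endpoint directly (as in the curl example in the Authentication section), the Vault client token comes back in the JSON response under `.auth.client_token`. A small sketch of extracting it; `response.json` here is a mocked stand-in for a real login response, not actual Vault output:

```shell-session
# response.json stands in for the body returned by POST /v1/auth/gcp/login.
cat > response.json <<'EOF'
{"auth": {"client_token": "hvs.EXAMPLETOKEN", "token_policies": ["dev", "prod"]}}
EOF

# Pull out .auth.client_token without extra tooling
# (jq -r '.auth.client_token' is the nicer way if jq is available).
VAULT_TOKEN=$(grep -o '"client_token": *"[^"]*"' response.json | cut -d'"' -f4)
echo "${VAULT_TOKEN}"
```

In real use you would pipe the login response shown earlier into the extraction step instead of a mocked file.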
Please see the\n[API docs][api-docs] for more details.\n\n[jwt]: https:\/\/tools.ietf.org\/html\/rfc7519\n[signjwt-method]: https:\/\/cloud.google.com\/iam\/docs\/reference\/credentials\/rest\/v1\/projects.serviceAccounts\/signJwt\n[cloud-creds]: https:\/\/cloud.google.com\/docs\/authentication\/production#providing_credentials_to_your_application\n[service-accounts]: https:\/\/cloud.google.com\/compute\/docs\/access\/service-accounts\n[api-docs]: \/vault\/api-docs\/auth\/gcp\n[identity-group-aliases]: \/vault\/api-docs\/secret\/identity\/group-alias\n[instance-identity]: https:\/\/cloud.google.com\/compute\/docs\/instances\/verifying-instance-identity\n[repo]: https:\/\/github.com\/hashicorp\/vault-plugin-auth-gcp\n\n## Code example\n\nThe following example demonstrates the Google Cloud auth method to authenticate\nwith Vault.\n\n<CodeTabs>\n\n<CodeBlockConfig>\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\n\tvault \"github.com\/hashicorp\/vault\/api\"\n\tauth \"github.com\/hashicorp\/vault\/api\/auth\/gcp\"\n)\n\n\/\/ Fetches a key-value secret (kv-v2) after authenticating to Vault\n\/\/ via GCP IAM, one of two auth methods used to authenticate with\n\/\/ GCP (the other is GCE auth).\nfunc getSecretWithGCPAuthIAM() (string, error) {\n\tconfig := vault.DefaultConfig() \/\/ modify for more granular configuration\n\n\tclient, err := vault.NewClient(config)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to initialize Vault client: %w\", err)\n\t}\n\n\t\/\/ For IAM-style auth, the environment variable GOOGLE_APPLICATION_CREDENTIALS\n\t\/\/ must be set with the path to a valid credentials JSON file, otherwise\n\t\/\/ Vault will fall back to Google's default instance credentials.\n\t\/\/ Learn about authenticating to GCS with service account credentials at https:\/\/cloud.google.com\/docs\/authentication\/production\n\tif pathToCreds := os.Getenv(\"GOOGLE_APPLICATION_CREDENTIALS\"); pathToCreds == \"\" {\n\t\tfmt.Printf(\"WARNING: 
Environment variable GOOGLE_APPLICATION_CREDENTIALS was not set. IAM client for JWT signing and Vault server IAM client will both fall back to default instance credentials.\\n\")\n\t}\n\n\tsvcAccountEmail := fmt.Sprintf(\"%s@%s.iam.gserviceaccount.com\", os.Getenv(\"GCP_SERVICE_ACCOUNT_NAME\"), os.Getenv(\"GOOGLE_CLOUD_PROJECT\"))\n\n\t\/\/ We pass the auth.WithIAMAuth option to use the IAM-style authentication\n\t\/\/ of the GCP auth backend. Otherwise, we default to using GCE-style\n\t\/\/ authentication, which gets its credentials from the metadata server.\n\tgcpAuth, err := auth.NewGCPAuth(\n\t\t\"dev-role-iam\",\n\t\tauth.WithIAMAuth(svcAccountEmail),\n\t)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to initialize GCP auth method: %w\", err)\n\t}\n\n\tauthInfo, err := client.Auth().Login(context.TODO(), gcpAuth)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to login to GCP auth method: %w\", err)\n\t}\n\tif authInfo == nil {\n\t\treturn \"\", fmt.Errorf(\"login response did not return client token\")\n\t}\n\n\t\/\/ get secret from the default mount path for KV v2 in dev mode, \"secret\"\n\tsecret, err := client.KVv2(\"secret\").Get(context.Background(), \"creds\")\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to read secret: %w\", err)\n\t}\n\n\t\/\/ data map can contain more than one key-value pair,\n\t\/\/ in this case we're just grabbing one of them\n\tvalue, ok := secret.Data[\"password\"].(string)\n\tif !ok {\n\t\treturn \"\", fmt.Errorf(\"value type assertion failed: %T %#v\", secret.Data[\"password\"], secret.Data[\"password\"])\n\t}\n\n\treturn value, nil\n}\n\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig>\n\n```cs\nusing System;\nusing System.Collections.Generic;\nusing System.IO;\nusing System.Threading.Tasks;\nusing Google.Apis.Auth.OAuth2;\nusing Google.Apis.Services;\nusing Google.Apis.Iam.v1;\nusing Newtonsoft.Json;\nusing VaultSharp;\nusing VaultSharp.V1.AuthMethods;\nusing 
VaultSharp.V1.AuthMethods.GoogleCloud;\nusing VaultSharp.V1.Commons;\n\nusing Data = Google.Apis.Iam.v1.Data;\n\nnamespace Examples\n{\n    public class GCPAuthExample\n    {\n        \/\/\/ <summary>\n        \/\/\/ Fetches a key-value secret (kv-v2) after authenticating to Vault via GCP IAM,\n        \/\/\/ one of two auth methods used to authenticate with GCP (the other is GCE auth).\n        \/\/\/ <\/summary>\n        public string GetSecretGcp()\n        {\n            var vaultAddr = Environment.GetEnvironmentVariable(\"VAULT_ADDR\");\n            if(String.IsNullOrEmpty(vaultAddr))\n            {\n                throw new System.ArgumentNullException(\"Vault Address\");\n            }\n\n            var roleName = Environment.GetEnvironmentVariable(\"VAULT_ROLE\");\n            if(String.IsNullOrEmpty(roleName))\n            {\n                throw new System.ArgumentNullException(\"Vault Role Name\");\n            }\n\n            \/\/ Learn about authenticating to GCS with service account credentials at https:\/\/cloud.google.com\/docs\/authentication\/production\n            if(String.IsNullOrEmpty(Environment.GetEnvironmentVariable(\"GOOGLE_APPLICATION_CREDENTIALS\")))\n            {\n                Console.WriteLine(\"WARNING: Environment variable GOOGLE_APPLICATION_CREDENTIALS was not set. 
IAM client for JWT signing will fall back to default instance credentials.\");\n            }\n\n            var jwt = SignJWT();\n\n            IAuthMethodInfo authMethod = new GoogleCloudAuthMethodInfo(roleName, jwt);\n            var vaultClientSettings = new VaultClientSettings(vaultAddr, authMethod);\n\n            IVaultClient vaultClient = new VaultClient(vaultClientSettings);\n\n            \/\/ We can retrieve the secret after creating our VaultClient object\n            Secret<SecretData> kv2Secret = null;\n            kv2Secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(path: \"\/creds\").Result;\n\n            var password = kv2Secret.Data.Data[\"password\"];\n\n            return password.ToString();\n        }\n\n        \/\/\/ <summary>\n        \/\/\/ Generate signed JWT from GCP IAM\n        \/\/\/ <\/summary>\n        private string SignJWT()\n        {\n            var roleName = Environment.GetEnvironmentVariable(\"GCP_ROLE\");\n            var svcAcctName = Environment.GetEnvironmentVariable(\"GCP_SERVICE_ACCOUNT_NAME\");\n            var gcpProjName = Environment.GetEnvironmentVariable(\"GOOGLE_CLOUD_PROJECT\");\n\n            IamService iamService = new IamService(new BaseClientService.Initializer\n            {\n                HttpClientInitializer = GetCredential(),\n                ApplicationName = \"Google-iamSample\/0.1\",\n            });\n\n            string svcEmail = $\"{svcAcctName}@{gcpProjName}.iam.gserviceaccount.com\";\n            string name = $\"projects\/-\/serviceAccounts\/{svcEmail}\";\n\n            TimeSpan currentTime = (DateTime.UtcNow - new DateTime(1970, 1, 1));\n            int expiration = (int)(currentTime.TotalSeconds) + 900;\n\n            Data.SignJwtRequest requestBody = new Data.SignJwtRequest();\n            requestBody.Payload = JsonConvert.SerializeObject(new Dictionary<string, object> ()\n            {\n                { \"aud\", $\"vault\/{roleName}\" } ,\n                { \"sub\", svcEmail 
} ,\n                { \"exp\", expiration }\n            });\n\n            ProjectsResource.ServiceAccountsResource.SignJwtRequest request = iamService.Projects.ServiceAccounts.SignJwt(requestBody, name);\n\n            Data.SignJwtResponse response = request.Execute();\n\n            return JsonConvert.SerializeObject(response.SignedJwt).Replace(\"\\\"\", \"\");\n        }\n\n        public static GoogleCredential GetCredential()\n        {\n            GoogleCredential credential = Task.Run(() => GoogleCredential.GetApplicationDefaultAsync()).Result;\n            if (credential.IsCreateScopedRequired)\n            {\n                credential = credential.CreateScoped(\"https:\/\/www.googleapis.com\/auth\/cloud-platform\");\n            }\n           return credential;\n        }\n    }\n}\n```\n<\/CodeBlockConfig>\n\n<\/CodeTabs>","site":"vault"}
{"questions":"vault page title AWS Auth Methods AWS auth method layout docs include x509 sha1 deprecation mdx The aws auth method allows automated authentication of AWS entities","answers":"---\nlayout: docs\npage_title: AWS - Auth Methods\ndescription: The aws auth method allows automated authentication of AWS entities.\n---\n\n# AWS auth method\n\n@include 'x509-sha1-deprecation.mdx'\n\n@include 'aws-sha1-deprecation.mdx'\n\nThe `aws` auth method provides an automated mechanism to retrieve a Vault token\nfor IAM principals and AWS EC2 instances. Unlike most Vault auth methods, this\nmethod does not require operators to manually deploy or provision\nsecurity-sensitive credentials (tokens, username\/password, client certificates,\netc.) in many circumstances.\n\n## Authentication workflow\n\nThere are two authentication types present in the aws auth method: `iam` and\n`ec2`.\n\nWith the `iam` method, a special AWS request signed with AWS IAM credentials is\nused for authentication. The IAM credentials are automatically supplied to AWS\ninstances in IAM instance profiles, Lambda functions, and others, and it is\nthis information already provided by AWS which Vault can use to authenticate\nclients.\n\nWith the `ec2` method, AWS is treated as a Trusted Third Party and\ncryptographically signed dynamic metadata information that uniquely represents\neach EC2 instance is used for authentication. This metadata information is\nautomatically supplied by AWS to all EC2 instances.\n\nBased on how you attempt to authenticate, Vault will determine if you are\nattempting to use the `iam` or `ec2` type. Each has a different authentication\nworkflow, and each can solve different use cases.\n\nNote: The `ec2` method was implemented before the primitives to implement the\n`iam` method were supported by AWS. The `iam` method is the recommended approach\nas it is more flexible and aligns with best practices to perform access\ncontrol and authentication. 
See the section on comparing the two auth methods\nbelow for more information.\n\n-> **Usage:** See the [Authentication](#authentication) section for Vault CLI\nand API usage examples. The [Code Example](#code-example) section provides a\ncode snippet demonstrating the authentication with Vault using the AWS auth\nmethod.\n\n### IAM auth method\n\nThe AWS STS API includes a method, [`sts:GetCallerIdentity`](http:\/\/docs.aws.amazon.com\/STS\/latest\/APIReference\/API_GetCallerIdentity.html), which allows you to validate the identity of a client. The client signs a `GetCallerIdentity` query using the [AWS Signature v4 algorithm](http:\/\/docs.aws.amazon.com\/general\/latest\/gr\/sigv4_signing.html) and sends it to the Vault server. The credentials used to sign the GetCallerIdentity request can come from the EC2 instance metadata service for an EC2 instance, or from the AWS environment variables in an AWS Lambda function execution, which obviates the need for an operator to manually provision some sort of identity material first. However, the credentials can, in principle, come from anywhere, not just from the locations AWS has provided for you.\n\nThe `GetCallerIdentity` query consists of four pieces of information: the request URL, the request body, the request headers, and the request method, as the AWS signature is computed over those fields. The Vault server reconstructs the query using this information and forwards it on to the AWS STS service. Depending on the response from the STS service, the server authenticates the client.\n\nNotably, clients don't need network-level access themselves to talk\nto the AWS STS API endpoint; they merely need access to the credentials to sign\nthe request. However, it means that the Vault server does need network-level\naccess to send requests to the STS endpoint.\n\nEach signed AWS request includes the current timestamp to mitigate the risk of\nreplay attacks. 
In addition, Vault allows you to require an additional header,\n`X-Vault-AWS-IAM-Server-ID`, to be present to mitigate against different types\nof replay attacks (such as a signed `GetCallerIdentity` request stolen from a\ndev Vault instance and used to authenticate to a prod Vault instance). Vault\nfurther requires that this header be one of the headers included in the AWS\nsignature and relies upon AWS to authenticate that signature.\n\nWhile AWS API endpoints support both signed GET and POST requests, for\nsimplicity, the aws auth method supports only POST requests. It also does not\nsupport `presigned` requests, i.e., requests with `X-Amz-Credential`,\n`X-Amz-Signature`, and `X-Amz-SignedHeaders` GET query parameters containing\nthe authenticating information.\n\nIt's also important to note that Amazon does NOT appear to include any sort of\nauthorization around calls to `GetCallerIdentity`. For example, if you have an\nIAM policy on your credential that requires all access to be MFA authenticated,\nnon-MFA authenticated credentials (i.e., raw credentials, not those retrieved\nby calling `GetSessionToken` and supplying an MFA code) will still be able to\nauthenticate to Vault using this method. It does not appear possible to enforce\nan IAM principal to be MFA authenticated while authenticating to Vault.\n\n### EC2 auth method\n\nAmazon EC2 instances have access to metadata which describes the instance. The\nVault EC2 auth method leverages the components of this metadata to authenticate\nand distribute an initial Vault token to an EC2 instance. The data flow (which\nis also represented in the graphic below) is as follows:\n\n[![Vault AWS EC2 Authentication Flow](\/img\/vault-aws-ec2-auth-flow.png)](\/img\/vault-aws-ec2-auth-flow.png)\n\n1. An AWS EC2 instance fetches its [AWS Instance Identity Document][aws-iid]\n   from the [EC2 Metadata Service][aws-ec2-mds]. 
In addition to data itself, AWS\n   also provides the PKCS#7 signature of the data, and publishes the public keys\n   (by region) which can be used to verify the signature.\n\n1. The AWS EC2 instance makes a request to Vault with the PKCS#7 signature.\n   The PKCS#7 signature contains the Instance Identity Document.\n\n1. Vault verifies the signature on the PKCS#7 document, ensuring the information\n   is certified accurate by AWS. This process validates both the validity and\n   integrity of the document data. As an added security measure, Vault verifies\n   that the instance is currently running using the public EC2 API endpoint.\n\n1. Provided all steps are successful, Vault returns the initial Vault token to\n   the EC2 instance. This token is mapped to any configured policies based on the\n   instance metadata.\n\n[aws-iid]: http:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/instance-identity-documents.html\n[aws-ec2-mds]: http:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/ec2-instance-metadata.html\n\nThere are various modifications to this workflow that provide more or less\nsecurity, as detailed later in this documentation.\n\n## Authorization workflow\n\nThe basic mechanism of operation is per-role. Roles are registered in the\nmethod and associated with a specific authentication type that cannot be\nchanged once the role has been created. Roles can also be associated with\nvarious optional restrictions, such as the set of allowed policies and max TTLs\non the generated tokens. Each role can be specified with the constraints that\nare to be met during the login. Many of these constraints accept lists of\nrequired values. For any constraint which accepts a list of values, that\nconstraint will be considered satisfied if any one of the values is matched\nduring the login process. For example, one such constraint that is\nsupported is to bind against a list of AMI IDs. 
A role which is bound to a
specific list of AMIs can only be used for login by EC2 instances that are
deployed to one of the AMIs that the role is bound to.

The iam auth method allows you to specify bound IAM principal ARNs.
Clients authenticating to Vault must have an ARN that matches one of the ARNs bound to
the role they are attempting to log in to. The bound ARN allows specifying a
wildcard at the end of the bound ARN. For example, if the bound ARN were
`arn:aws:iam::123456789012:*` it would allow any principal in AWS account
123456789012 to log in to it. Similarly, if it were
`arn:aws:iam::123456789012:role/*` it would allow any IAM role in the AWS
account to log in to it. If you wish to specify a wildcard, you must give Vault
`iam:GetUser` and `iam:GetRole` permissions to properly resolve the full user
path.

In general, role bindings that are specific to an EC2 instance are only checked
when the ec2 auth method is used to log in, while bindings specific to IAM
principals are only checked when the iam auth method is used to log in. However,
the iam method includes the ability for you to "infer" an EC2 instance ID from
the authenticated client and apply many of the bindings that would otherwise
only apply specifically to EC2 instances.

In many cases, an organization will use a "seed AMI" that is specialized after
bootup by configuration management or similar processes. For this reason, a
role entry in the method can also be associated with a "role tag" when using
the ec2 auth type. These tags
are generated by the method and are placed as the value of a tag with the
given key on the EC2 instance.
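The trailing-wildcard semantics for bound IAM principal ARNs described above can be pictured with a short sketch. This is an illustrative approximation only, not Vault's implementation; the function name is invented:

```python
# Illustrative sketch of bound-ARN matching: an exact match, or a single
# wildcard honored only at the very end of the bound ARN.

def bound_arn_matches(bound_arn: str, principal_arn: str) -> bool:
    """Return True if principal_arn satisfies bound_arn."""
    if bound_arn.endswith("*"):
        # Trailing wildcard: prefix match on everything before the '*'.
        return principal_arn.startswith(bound_arn[:-1])
    return principal_arn == bound_arn

# Any principal in the account:
assert bound_arn_matches("arn:aws:iam::123456789012:*",
                         "arn:aws:iam::123456789012:user/alice")
# Any IAM role in the account:
assert bound_arn_matches("arn:aws:iam::123456789012:role/*",
                         "arn:aws:iam::123456789012:role/MyRole")
# An IAM user does not satisfy a role/* binding:
assert not bound_arn_matches("arn:aws:iam::123456789012:role/*",
                             "arn:aws:iam::123456789012:user/alice")
```

Because the match is against the principal's full path, Vault may need the `iam:GetUser` and `iam:GetRole` permissions mentioned above to resolve that path before comparing.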
The role tag can be used to further restrict the\nparameters set on the role, but cannot be used to grant additional privileges.\nIf a role with an AMI bind constraint has \"role tag\" enabled on the role, and\nthe EC2 instance performing login does not have an expected tag on it, or if the\ntag on the instance is deleted for some reason, authentication fails.\n\nThe role tags can be generated at will by an operator with appropriate API\naccess. They are HMAC-signed by a per-role key stored within the method, allowing\nthe method to verify the authenticity of a found role tag and ensure that it has\nnot been tampered with. There is also a mechanism to deny list role tags if one\nhas been found to be distributed outside of its intended set of machines.\n\n## IAM authentication inferences\n\nWith the iam auth method, normally Vault will see the IAM principal that\nauthenticated, either the IAM user or role. However, when you have an EC2\ninstance in an IAM instance profile, Vault can actually see the instance ID of\nthe instance and can \"infer\" that it's an EC2 instance. However, there are\nimportant security caveats to be aware of before configuring Vault to make that\ninference.\n\nEach AWS IAM role has a \"trust policy\" which specifies which entities are\ntrusted to call\n[`sts:AssumeRole`](http:\/\/docs.aws.amazon.com\/STS\/latest\/APIReference\/API_AssumeRole.html)\non the role and retrieve credentials that can be used to authenticate with that\nrole. When AssumeRole is called, a parameter called RoleSessionName is passed\nin, which is chosen arbitrarily by the entity which calls AssumeRole. If you\nhave a role with an ARN `arn:aws:iam::123456789012:role\/MyRole`, then the\ncredentials returned by calling AssumeRole on that role will be\n`arn:aws:sts::123456789012:assumed-role\/MyRole\/RoleSessionName` where\nRoleSessionName is the session name in the AssumeRole API call. 
It is this
latter value which Vault actually sees.

When you have an EC2 instance in an instance profile, the corresponding role's
trust policy specifies that the principal `"Service": "ec2.amazonaws.com"` is
trusted to call AssumeRole. When this is configured, EC2 calls AssumeRole on
behalf of your instance, with a RoleSessionName corresponding to the
instance's instance ID. Thus, it is possible for Vault to extract the instance
ID out of the value it sees when an EC2 instance in an instance profile
authenticates to Vault with the iam auth method. This is known as
"inferencing." Vault can be configured, on a role-by-role basis, to infer that a
caller is an EC2 instance and, if so, apply further bindings that apply
specifically to EC2 instances -- most of the bindings available to the ec2
auth method.

However, it is very important to note that if any entity other than an AWS
service is permitted to call AssumeRole on your role, then that entity can
simply pass in your instance's instance ID and spoof your instance to Vault.
This also means that anybody who is able to modify your role's trust policy
(e.g., via
[`iam:UpdateAssumeRolePolicy`](http://docs.aws.amazon.com/IAM/latest/APIReference/API_UpdateAssumeRolePolicy.html))
could also spoof your instances. If this is a concern but you
would like to take advantage of inferencing, then you should tightly restrict
who is able to call AssumeRole on the role, tightly restrict who is able to call
UpdateAssumeRolePolicy on the role, and monitor CloudTrail logs for calls to
AssumeRole and UpdateAssumeRolePolicy.
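To make the inference concrete, here is a hypothetical sketch of pulling the RoleSessionName out of the assumed-role ARN that Vault sees and checking whether it looks like an EC2 instance ID. The function name and regular expressions are illustrative, not Vault's actual code:

```python
import re

# STS assumed-role ARNs have the shape:
#   arn:aws:sts::<account>:assumed-role/<role>/<RoleSessionName>
ASSUMED_ROLE = re.compile(
    r"^arn:aws:sts::(?P<account>\d{12}):assumed-role/"
    r"(?P<role>[^/]+)/(?P<session>.+)$"
)
INSTANCE_ID = re.compile(r"^i-[0-9a-f]{8,17}$")

def infer_instance_id(sts_arn: str):
    """Return the RoleSessionName if it looks like an instance ID, else None."""
    m = ASSUMED_ROLE.match(sts_arn)
    if m and INSTANCE_ID.match(m.group("session")):
        return m.group("session")
    return None

# When EC2 itself called AssumeRole, the session name is the instance ID:
assert infer_instance_id(
    "arn:aws:sts::123456789012:assumed-role/MyRole/i-0123456789abcdef0"
) == "i-0123456789abcdef0"
# An arbitrary session name yields no inference. Note the caveat above: any
# caller allowed to invoke AssumeRole can choose an instance-ID-like name.
assert infer_instance_id(
    "arn:aws:sts::123456789012:assumed-role/MyRole/mysession") is None
```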
All of these caveats apply equally to
using the iam auth method without inferencing; the point is merely
that Vault cannot offer an iron-clad guarantee about the inference and it is up
to operators to determine, based on their own AWS controls and use cases,
whether or not it's appropriate to configure inferencing.

## Mixing authentication types

Vault allows you to configure a role to use either the ec2 auth method or the
iam auth method, but not both. Further, **assumed roles are not supported**
and Vault prevents you from enforcing restrictions that it cannot enforce given
the chosen auth type for a role. Some examples of how this works in practice:

1. You configure a role with the ec2 auth type, with a bound AMI ID. A
   client would not be able to log in using the iam auth type.
2. You configure a role with the iam auth type, with a bound IAM
   principal ARN. A client would not be able to log in with the ec2 auth method.
3. You configure a role with the iam auth type and further configure
   inferencing. You have a bound AMI ID and a bound IAM principal ARN. A client
   must log in using the iam method; the RoleSessionName must be a valid instance
   ID viewable by Vault, and the instance must have come from the bound AMI ID.

## Comparison of the IAM and EC2 methods

The iam and ec2 auth methods provide similar and somewhat overlapping
functionality, in that both authenticate some type of AWS entity to Vault.
Here are some comparisons that illustrate why the `iam` method is preferred over
`ec2`.

- What type of entity is authenticated:
  - The ec2 auth method authenticates only AWS EC2 instances and is specialized
    to handle EC2 instances, such as restricting access to EC2 instances from
    a particular AMI, EC2 instances in a particular instance profile, or EC2
    instances with a specialized tag value (via the role_tag feature).
  - The iam auth method authenticates AWS IAM principals.
This can\n    include IAM users, IAM roles assumed from other accounts, AWS Lambdas that\n    are launched in an IAM role, or even EC2 instances that are launched in an\n    IAM instance profile. However, because it authenticates more generalized IAM\n    principals, this method doesn't offer more granular controls beyond binding\n    to a given IAM principal without the use of inferencing.\n- How the entities are authenticated\n  - The ec2 auth method authenticates instances by making use of the EC2\n    instance identity document, which is a cryptographically signed document\n    containing metadata about the instance. This document changes relatively\n    infrequently, so Vault adds a number of other constructs to mitigate against\n    replay attacks, such as client nonces, role tags, instance migrations, etc.\n    Because the instance identity document is signed by AWS, you have a strong\n    guarantee that it came from an EC2 instance.\n  - The iam auth method authenticates by having clients provide a specially\n    signed AWS API request which the method then passes on to AWS to validate\n    the signature and tell Vault who created it. The actual secret (i.e.,\n    the AWS secret access key) is never transmitted over the wire, and the\n    AWS signature algorithm automatically expires requests after 15 minutes,\n    providing simple and robust protection against replay attacks. The use of\n    inferencing, however, provides a weaker guarantee that the credentials came\n    from an EC2 instance in an IAM instance profile compared to the ec2\n    authentication mechanism.\n  - The instance identity document used in the ec2 auth method is more likely to\n    be stolen given its relatively static nature, but it's harder to spoof. 
On\n    the other hand, the credentials of an EC2 instance in an IAM instance\n    profile are less likely to be stolen given their dynamic and short-lived\n    nature, but it's easier to spoof credentials that might have come from an\n    EC2 instance.\n- Specific use cases\n  - If you have non-EC2 instance entities, such as IAM users, Lambdas in IAM\n    roles, or developer laptops using [AdRoll's\n    Hologram](https:\/\/github.com\/AdRoll\/hologram) then you would need to use the\n    iam auth method.\n  - If you have EC2 instances, then you could use either auth method. If you\n    need more granular filtering beyond just the instance profile of given EC2\n    instances (such as filtering based off the AMI the instance was launched\n    from), then you would need to use the ec2 auth method, change the instance\n    profile associated with your EC2 instances so they have unique IAM roles\n    for each different Vault role you would want them to authenticate\n    to, or make use of inferencing. If you need to make use of role tags, then\n    you will need to use the ec2 auth method.\n\n## Recommended Vault IAM policy\n\nThis specifies the recommended IAM policy needed by the AWS auth method. 
Note\nthat if you are using the same credentials for the AWS auth and secret methods\n(e.g., if you're running Vault on an EC2 instance in an IAM instance profile),\nthen you will need to add additional permissions as required by the AWS secret\nmethod.\n\n```json\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"ec2:DescribeInstances\",\n        \"iam:GetInstanceProfile\",\n        \"iam:GetUser\",\n        \"iam:GetRole\"\n      ],\n      \"Resource\": \"*\"\n    },\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\"sts:AssumeRole\"],\n      \"Resource\": [\"arn:aws:iam::<AccountId>:role\/<VaultRole>\"]\n    },\n    {\n      \"Sid\": \"ManageOwnAccessKeys\",\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"iam:CreateAccessKey\",\n        \"iam:DeleteAccessKey\",\n        \"iam:GetAccessKeyLastUsed\",\n        \"iam:GetUser\",\n        \"iam:ListAccessKeys\",\n        \"iam:UpdateAccessKey\"\n      ],\n      \"Resource\": \"arn:aws:iam::*:user\/${aws:username}\"\n    }\n  ]\n}\n```\n\nHere are some of the scenarios in which Vault would need to use each of these\npermissions. This isn't intended to be an exhaustive list of all the scenarios\nin which Vault might make an AWS API call, but rather illustrative of why these\nare needed.\n\n- `ec2:DescribeInstances` is necessary when you are using the `ec2` auth method\n  or when you are inferring an `ec2_instance` entity type to validate that the\n  EC2 instance meets binding requirements of the role\n- `iam:GetInstanceProfile` is used when you have a `bound_iam_role_arn` in the\n  `ec2` auth method. 
Vault needs to determine which IAM role is attached to the
  instance profile.
- `iam:GetUser` and `iam:GetRole` are used when using the iam auth method and
  binding to an IAM user or role principal to determine the [AWS IAM Unique Identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids)
  or when using a wildcard on the bound ARN to resolve the full ARN of the user
  or role.
- The `sts:AssumeRole` stanza is necessary when you are using [Cross Account
  Access](#cross-account-access). The `Resource` entries specified should be a list of
  all the roles for which you have configured cross-account access, and each of
  those roles should have this IAM policy attached (except for the
  `sts:AssumeRole` statement).
- The `ManageOwnAccessKeys` stanza is necessary when you have configured Vault
  with static credentials, and wish to rotate these credentials with the
  [Rotate Root Credentials](/vault/api-docs/auth/aws#rotate-root-credentials)
  API call.

## Plugin Workload Identity Federation (WIF)

<EnterpriseAlert product="vault" />

The AWS auth engine supports the plugin WIF workflow and has a source of identity called
a plugin identity token. A plugin identity token is a JWT that is signed internally by Vault's
[plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).

If there is a trust relationship configured between Vault and AWS through
[workload identity federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html),
the auth engine can exchange its identity token for short-lived STS credentials needed to
perform its actions.

Exchanging identity tokens for STS credentials lets the AWS auth engine
operate without configuring explicit access to sensitive IAM security
credentials.

To configure the auth engine to use plugin WIF:

1.
Ensure that Vault [openid-configuration](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-openid-configuration)\n  and [public JWKS](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-public-jwks)\n  APIs are network-reachable by AWS. We recommend using an API proxy or gateway\n  if you need to limit Vault API exposure.\n\n1. Create an\n  [IAM OIDC identity provider](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_roles_providers_create_oidc.html)\n  in AWS.\n  1. The provider URL **must** point at your [Vault plugin identity token issuer](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the\n  `\/.well-known\/openid-configuration` suffix removed. For example:\n  `https:\/\/host:port\/v1\/identity\/oidc\/plugins`.\n  1. Uniquely identify the recipient of the plugin identity token as the audience.\n  In AWS, the recipient is the identity provider. We recommend using\n  the `host:port\/v1\/identity\/oidc\/plugins` portion of the provider URL as your\n  recipient since it will be unique for each configured identity provider.\n\n1. Create a [web identity role](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_roles_create_for-idp_oidc.html#idp_oidc_Create)\n  in AWS with the same audience used for your IAM OIDC identity provider.\n\n1. 
Configure the AWS auth engine with the IAM OIDC audience value and web
  identity role ARN.

```shell-session
$ vault write auth/aws/config/client \
    identity_token_audience="vault.example/v1/identity/oidc/plugins" \
    role_arn="arn:aws:iam::123456789123:role/example-web-identity-role"
```

Your auth engine can now use plugin WIF for its configuration credentials.
By default, WIF [credentials](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html)
have a time-to-live of 1 hour and automatically refresh when they expire.

Please see the [API documentation](/vault/api-docs/auth/aws#configure-client)
for more details on the fields associated with plugin WIF.

## Client nonce

Note: this only applies to the ec2 auth method.

If an unintended party gains access to the PKCS#7 signature of the identity
document (which by default is available to every process and user that gains
access to an EC2 instance), it can impersonate that instance and fetch a Vault
token. The method addresses this problem by using a Trust On First Use (TOFU)
mechanism that authenticates the first client to present the PKCS#7 signature
of the document and denies the rest. An important property of
this design is detection of unauthorized access: if an unintended party authenticates,
the intended client will be unable to authenticate and can raise an alert for
investigation.

During the first login, the method stores the instance ID that authenticated
in an `accesslist`. One mode of operation of the method is to disallow any
authentication attempt for an instance ID contained in the access list, using the
`disallow_reauthentication` option on the role, meaning that an instance is
allowed to log in only once. However, this has consequences for token rotation,
as it means that once a token has expired, subsequent authentication attempts
would fail.
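The TOFU access-list behavior just described can be sketched as a tiny model. This is illustrative only (an in-memory dictionary standing in for Vault's persisted accesslist), not Vault's implementation:

```python
# Minimal TOFU sketch: first login records the client's nonce; later logins
# must present the same nonce, unless reauthentication is disallowed entirely.

class Accesslist:
    def __init__(self, disallow_reauthentication: bool = False):
        self.disallow_reauthentication = disallow_reauthentication
        self.entries = {}  # instance_id -> nonce recorded on first login

    def login(self, instance_id: str, nonce: str) -> bool:
        if instance_id not in self.entries:
            self.entries[instance_id] = nonce  # trust on first use
            return True
        if self.disallow_reauthentication:
            return False  # one login per instance, ever
        return self.entries[instance_id] == nonce  # only the original client

acl = Accesslist()
assert acl.login("i-abc123", "nonce-1")      # first login succeeds
assert acl.login("i-abc123", "nonce-1")      # same nonce may reauthenticate
assert not acl.login("i-abc123", "stolen")   # any other nonce is rejected
```

The failed third login is the detection property described above: the party holding the wrong nonce is locked out and the mismatch can be alerted on.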
By default, reauthentication is enabled in this method, and can be
turned off using the `disallow_reauthentication` parameter on the registered role.

In the default mode of operation, the method will return a unique nonce
during the first authentication attempt, as part of auth `metadata`. Clients
should present this `nonce` for subsequent login attempts and it should match
the `nonce` cached in the identity-accesslist entry in the method. Since only
the original client knows the `nonce`, only the original client is allowed to
reauthenticate. (This is the reason that this is an accesslist rather than a
deny list; by default, it's keeping track of clients allowed to reauthenticate,
rather than those that are not.) Clients can choose to provide a `nonce` even
for the first login attempt, in which case the provided `nonce` will be tied to
the cached identity-accesslist entry. It is recommended to use a strong `nonce`
value in this case.

It is up to the client to behave correctly with respect to the nonce; if the
client stores the nonce on disk it can survive reboots, but could also give
access to other users or applications on the instance. It is also up to the
operator to ensure that client nonces are in fact unique; sharing nonces means
that a compromise of the nonce value enables an attacker who gains access to any
EC2 instance to imitate the legitimate client on that instance.
This is why
nonces can be disabled on the method side in favor of only a single
authentication per instance; in some cases, such as when using ASGs, instances
are immutable and single-boot anyway, and in conjunction with a high max TTL,
reauthentication may not be needed (and if it is, the instance can simply be
shut down and the ASG allowed to start a new one).

In both cases, entries can be removed from the accesslist by instance ID,
allowing reauthentication by a client if the nonce is lost (or not used) and an
operator approves the process.

One other point: if supported by the OS/distribution being used with the EC2
instance, it is not a bad idea to firewall access to the signed PKCS#7 metadata
to ensure that it is accessible only to the matching user(s) that require
access.

The client nonce, which is generated by the backend and returned
along with the authentication response, will be audit logged in plaintext. If
this is undesired, clients can supply a custom nonce to the login endpoint
which will not be returned and hence will not be audit logged.

## Advanced options and caveats

### Dynamic management of policies via role tags

Note: This only applies to the ec2 auth method or the iam auth method when
inferencing is used.

If the instance is required to have a customized set of policies based on the
role it plays, the `role_tag` option can be used to provide a tag to set on
instances, for a given role. When this option is set, during login, along with
verification of the PKCS#7 signature and instance health, the method will query
for the value of a specific tag with the configured key that is attached to the
instance. The tag holds information that represents a _subset_ of privileges that
are set on the role and are used to further restrict the set of the role's
privileges for that particular instance.

A `role_tag` can be created using the `auth/aws/role/<role>/tag` endpoint
and is immutable.
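Conceptually, the per-role HMAC signing that protects role tags works like the following sketch. The tag format, field names, and encoding here are purely illustrative; Vault's real tag format differs:

```python
import base64
import hashlib
import hmac

# Sketch: a role tag is a payload plus an HMAC computed with a per-role key
# that never leaves the server, so a tag found on an instance can be
# authenticated and tampering detected.

def make_role_tag(per_role_key: bytes, payload: str) -> str:
    mac = hmac.new(per_role_key, payload.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(f"{payload}|{mac}".encode()).decode()

def verify_role_tag(per_role_key: bytes, tag: str) -> bool:
    payload, _, mac = base64.b64decode(tag).decode().rpartition("|")
    expected = hmac.new(per_role_key, payload.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

key = b"per-role-secret"         # illustrative; held only by the server
tag = make_role_tag(key, "policies=dev&max_ttl=300h")
assert verify_role_tag(key, tag)
# A tag forged to escalate policies fails verification:
forged = base64.b64encode(b"policies=prod|deadbeef").decode()
assert not verify_role_tag(key, forged)
```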
The information present in the tag is SHA256 hashed and HMAC
protected. The per-role key used for the HMAC is maintained only in the method. This prevents
an adversarial operator from modifying the tag when setting it on the EC2 instance
in order to escalate privileges.

When the `role_tag` option is enabled on a role, the instances are required to have a
role tag. If the tag is not found on the EC2 instance, authentication will fail.
This is to ensure that privileges of an instance are never escalated for not
having the tag on it or for getting the tag removed. If the role tag creation does
not specify the policy component, the client will inherit the allowed policies set
on the role. If the role tag creation specifies the policy component but it contains
no policies, the token will contain only the `default` policy; by default, this policy
allows only manipulation (revocation, renewal, lookup) of the existing token, plus
access to its [cubbyhole](/vault/docs/secrets/cubbyhole).
This can be useful to allow instances access to a secure "scratch space" for
storing data (via the token's cubbyhole) but without granting any access to
other resources provided by or resident in Vault.

### Handling lost client nonces

Note: This only applies to the ec2 auth method.

If an EC2 instance loses its client nonce (due to a reboot, a stop/start of the
client, etc.), subsequent login attempts will not succeed. If the client nonce
is lost, normally the only option is to delete the entry corresponding to the
instance ID from the identity `accesslist` in the method. This can be done via
the `auth/aws/identity-accesslist/<instance_id>` endpoint. This allows a new
client nonce to be accepted by the method during the next login request.

Under certain circumstances there is another useful setting.
When the instance\nis placed onto a host upon creation, it is given a `pendingTime` value in the\ninstance identity document (documentation from AWS does not cover this option,\nunfortunately). If an instance is stopped and started, the `pendingTime` value\nis updated (this does not apply to reboots, however).\n\nThe method can take advantage of this via the `allow_instance_migration`\noption, which is set per-role. When this option is enabled, if the client nonce\ndoes not match the saved nonce, the `pendingTime` value in the instance\nidentity document will be checked; if it is newer than the stored `pendingTime`\nvalue, the method assumes that the client was stopped\/started and allows the\nclient to log in successfully, storing the new nonce as the valid nonce for\nthat client. This essentially re-starts the TOFU mechanism any time the\ninstance is stopped and started, so should be used with caution. Just like with\ninitial authentication, the legitimate client should have a way to alert (or an\nalert should trigger based on its logs) if it is denied authentication.\n\nUnfortunately, the `allow_instance_migration` only helps during stop\/start\nactions; the current metadata does not provide for a way to allow this\nautomatic behavior during reboots. The method will be updated if this needed\nmetadata becomes available.\n\nThe `allow_instance_migration` option is set per-role, and can also be\nspecified in a role tag. Since role tags can only restrict behavior, if the\noption is set to `false` on the role, a value of `true` in the role tag takes\neffect; however, if the option is set to `true` on the role, a value set in the\nrole tag has no effect.\n\n### Disabling reauthentication\n\nNote: this only applies to the ec2 auth method.\n\nIf in a given organization's architecture, a client fetches a long-lived Vault\ntoken and has no need to rotate the token, all future logins for that instance\nID can be disabled. 
If the option `disallow_reauthentication` is set, only one
login will be allowed per instance. If the intended client successfully
retrieves a token during login, it can be sure that its token will not be
hijacked by another entity.

When the `disallow_reauthentication` option is enabled, the client can choose not
to supply a nonce during login, although it is not an error to do so (the nonce
is simply ignored). Note that reauthentication is enabled by default. If only
a single login is desired, `disallow_reauthentication` should be set explicitly
on the role or on the role tag.

The `disallow_reauthentication` option is set per-role, and can also be
specified in a role tag. Since role tags can only restrict behavior, if the
option is set to `false` on the role, a value of `true` in the role tag takes
effect; however, if the option is set to `true` on the role, a value set in the
role tag has no effect.

### Deny listing role tags

Note: this only applies to the ec2 auth method or the iam auth method
when inferencing is used.

Role tags are tied to a specific role, but the method has no control over which
instances using that role should have any particular role tag; that is purely up
to the operator. Although role tags are only restrictive (a tag cannot escalate
privileges above what is set on its role), if a role tag is found to have been
used incorrectly, and the administrator wants to ensure that the role tag has no
further effect, the role tag can be placed on a `deny list` via the endpoint
`auth/aws/roletag-denylist/<role_tag>`. Note that this will not invalidate the
tokens that were already issued; this only blocks any further login requests from
those instances that have the deny listed tag attached to them.

### Expiration times and tidying of `denylist` and `accesslist` entries

The expired entries in both the identity `accesslist` and the role tag `denylist` are
deleted automatically.
The entries in both of these lists contain an expiration
time which is dynamically determined by three factors: `max_ttl` set on the role,
`max_ttl` set on the role tag, and the `max_ttl` value of the method mount. The
least of these three dictates the maximum TTL of the issued token, and
correspondingly will be set as the expiration time of these entries.

The endpoints `auth/aws/tidy/identity-accesslist` and `auth/aws/tidy/roletag-denylist` are
provided to clean up the entries present in these lists. These endpoints allow
defining a safety buffer, such that an entry must not only be expired, but be
past expiration by the amount of time dictated by the safety buffer in order
to actually remove the entry.

Automatic deletion of expired entries is performed by the periodic function
of the method. This function does the tidying of both denylist role tags
and accesslist identities. Periodic tidying is activated by default and uses
a safety buffer of 72 hours, meaning only entries that expired more than 72
hours before the tidy operation runs are deleted.
This can be configured via the `config/tidy/roletag-denylist` and `config/tidy/identity-accesslist`
endpoints.

### Varying public certificates

Note: this only applies to the ec2 auth method.

The AWS public certificate, which contains the public key used to verify the
PKCS#7 signature, varies for different AWS regions. The primary AWS public
certificate, which covers most AWS regions, is already included in Vault and
does not need to be added.
Instances whose PKCS#7 signatures cannot be
verified by the default public certificate included in Vault can register a
different public certificate, which can be found [here](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html),
via the `auth/aws/config/certificate/<cert_name>` endpoint.

### Dangling tokens

An EC2 instance, after authenticating itself with the method, gets a Vault token.
After that, if the instance terminates or goes down for any reason, the method
will not be aware of such events. The token issued will still be valid until
it expires. The token will likely expire sooner than its maximum lifetime if the
instance fails to renew it on time.

### Cross account access

To allow Vault to authenticate IAM principals and EC2 instances in other
accounts, Vault supports using AWS STS (Security Token Service) to assume AWS
IAM Roles in other accounts. For each target AWS account ID, you configure the
IAM Role for Vault to assume using the `auth/aws/config/sts/<account_id>` endpoint, and
Vault will use credentials from assuming that role to validate IAM principals
and EC2 instances in the target account.

The account in which Vault is running (i.e. the master account) must be listed as
a trusted entity in the IAM Role being assumed on the remote account.
The Role itself
should allow the permissions specified in the [Recommended Vault IAM
Policy](#recommended-vault-iam-policy), except that it doesn't need any further
`sts:AssumeRole` permissions.

Furthermore, in the master account, Vault must be granted the action `sts:AssumeRole`
for the IAM Role to be assumed.

### AWS instance metadata timeout

@include 'aws-imds-timeout.mdx'

## Authentication

### Via the CLI

#### Enable AWS EC2 authentication in Vault.

```shell-session
$ vault auth enable aws
```

#### Configure the credentials required to make AWS API calls

If not specified, Vault will attempt to use standard environment variables
(`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) or IAM EC2 instance role
credentials if available.

The IAM account or role to which the credentials map must allow the
`ec2:DescribeInstances` action. In addition, if IAM Role binding is used (see
`bound_iam_role_arn` below), `iam:GetInstanceProfile` must also be allowed.

To provide IAM security credentials to Vault, we recommend using Vault
[plugin workload identity federation](#plugin-workload-identity-federation-wif)
(WIF).

```shell-session
$ vault write auth/aws/config/client \
    secret_key=vCtSM8ZUEQ3mOFVlYPBQkf2sO6F/W7a5TVzrl3Oj \
    access_key=VKIAJBRHKH6EVTTNXDHA

$ vault write auth/aws/config/client \
    identity_token_audience="vault.example/v1/identity/oidc/plugins" \
    role_arn="arn:aws:iam::123456789123:role/example-web-identity-role"
```

#### Configure the policies on the role.

```shell-session
$ vault write auth/aws/role/dev-role auth_type=ec2 bound_ami_id=ami-fce3c696 policies=prod,dev max_ttl=500h

$ vault write auth/aws/role/dev-role-iam auth_type=iam \
              bound_iam_principal_arn=arn:aws:iam::123456789012:role/MyRole policies=prod,dev max_ttl=500h
```

#### Configure a required X-Vault-AWS-IAM-Server-ID header (recommended)

```shell-session
$ vault write
auth/aws/config/client iam_server_id_header_value=vault.example.com
```

#### Perform the login operation

For the ec2 auth method, first fetch the PKCS#7 signature on the AWS instance:

```shell-session
$ SIGNATURE=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/rsa2048 | tr -d '\n')
```

then set the signature on the login endpoint:

```shell-session
$ vault write auth/aws/login role=dev-role \
  pkcs7=$SIGNATURE
```

For the iam auth method, generating the signed request is a non-standard
operation. The Vault CLI supports generating this for you:

```shell-session
$ vault login -method=aws header_value=vault.example.com role=dev-role-iam
```

This assumes you have AWS credentials configured in the standard locations AWS
SDKs search for credentials (environment variables, ~/.aws/credentials, IAM
instance profile, or ECS task role, in that order). If you do not have IAM
credentials available at any of these locations, you can explicitly pass them
in on the command line (though this is not recommended), omitting
`aws_security_token` if not applicable.

```shell-session
$ vault login -method=aws header_value=vault.example.com role=dev-role-iam \
        aws_access_key_id=<access_key> \
        aws_secret_access_key=<secret_key> \
        aws_security_token=<security_token>
```

The region used defaults to `us-east-1`, but you can specify a custom region like so:

```shell-session
$ vault login -method=aws region=us-west-2 role=dev-role-iam
```

If the region is specified as `auto`, the Vault CLI will determine the region based
on standard AWS credentials precedence as described earlier.
Whichever method is used,
be sure the designated region corresponds to that of the STS endpoint you're using.

~> **Note:** If you are making use of AWS GovCloud and setting the `sts_endpoint`
and `sts_region` role parameters to `us-gov-west-1` / `us-gov-east-1` then you must include
the `region` argument in your login request with a matching value, i.e. `region=us-gov-west-1`.

An example of how to generate the required request values for the `login` method
can be found in the [vault cli
source code](https://github.com/hashicorp/vault/blob/main/builtin/credential/aws/cli.go).
Using an approach such as this, the request parameters can be generated and
passed to the `login` method:

```shell-session
$ vault write auth/aws/login role=dev-role-iam \
        iam_http_request_method=POST \
        iam_request_url=aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8= \
        iam_request_body=QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNQ== \
        iam_request_headers=eyJDb250ZW50LUxlbmd0aCI6IFsiNDMiXSwgIlVzZXItQWdlbnQiOiBbImF3cy1zZGstZ28vMS40LjEyIChnbzEuNy4xOyBsaW51eDsgYW1kNjQpIl0sICJYLVZhdWx0LUFXU0lBTS1TZXJ2ZXItSWQiOiBbInZhdWx0LmV4YW1wbGUuY29tIl0sICJYLUFtei1EYXRlIjogWyIyMDE2MDkzMFQwNDMxMjFaIl0sICJDb250ZW50LVR5cGUiOiBbImFwcGxpY2F0aW9uL3gtd3d3LWZvcm0tdXJsZW5jb2RlZDsgY2hhcnNldD11dGYtOCJdLCAiQXV0aG9yaXphdGlvbiI6IFsiQVdTNC1ITUFDLVNIQTI1NiBDcmVkZW50aWFsPWZvby8yMDE2MDkzMC91cy1lYXN0LTEvc3RzL2F3czRfcmVxdWVzdCwgU2lnbmVkSGVhZGVycz1jb250ZW50LWxlbmd0aDtjb250ZW50LXR5cGU7aG9zdDt4LWFtei1kYXRlO3gtdmF1bHQtc2VydmVyLCBTaWduYXR1cmU9YTY5ZmQ3NTBhMzQ0NWM0ZTU1M2UxYjNlNzlkM2RhOTBlZWY1NDA0N2YxZWI0ZWZlOGZmYmM5YzQyOGMyNjU1YiJdfQ==
```

### Via the API

#### Enable AWS authentication in Vault.

```
curl -X POST -H "X-Vault-Token:123" "http://127.0.0.1:8200/v1/sys/auth/aws" -d '{"type":"aws"}'
```

#### Configure the credentials required to make AWS API calls.

```
curl -X POST -H "X-Vault-Token:123" "http://127.0.0.1:8200/v1/auth/aws/config/client" -d '{"access_key":"VKIAJBRHKH6EVTTNXDHA", "secret_key":"vCtSM8ZUEQ3mOFVlYPBQkf2sO6F/W7a5TVzrl3Oj"}'
```

#### Configure the policies on the role.

```
curl -X POST -H "X-Vault-Token:123" "http://127.0.0.1:8200/v1/auth/aws/role/dev-role" -d '{"bound_ami_id":"ami-fce3c696","policies":"prod,dev","max_ttl":"500h"}'

curl -X POST -H "X-Vault-Token:123" "http://127.0.0.1:8200/v1/auth/aws/role/dev-role-iam" -d '{"auth_type":"iam","policies":"prod,dev","max_ttl":"500h","bound_iam_principal_arn":"arn:aws:iam::123456789012:role/MyRole"}'
```

#### Perform the login operation

```
curl -X POST "http://127.0.0.1:8200/v1/auth/aws/login" -d '{"role":"dev-role","pkcs7":"'$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/rsa2048 | tr -d '\n')'","nonce":"5defbf9e-a8f9-3063-bdfc-54b7a42a1f95"}'

curl -X POST "http://127.0.0.1:8200/v1/auth/aws/login" -d '{"role":"dev", "iam_http_request_method": "POST", "iam_request_url": "aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8=", "iam_request_body": "QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNQ==", "iam_request_headers": "eyJDb250ZW50LUxlbmd0aCI6IFsiNDMiXSwgIlVzZXItQWdlbnQiOiBbImF3cy1zZGstZ28vMS40LjEyIChnbzEuNy4xOyBsaW51eDsgYW1kNjQpIl0sICJYLVZhdWx0LUFXU0lBTS1TZXJ2ZXItSWQiOiBbInZhdWx0LmV4YW1wbGUuY29tIl0sICJYLUFtei1EYXRlIjogWyIyMDE2MDkzMFQwNDMxMjFaIl0sICJDb250ZW50LVR5cGUiOiBbImFwcGxpY2F0aW9uL3gtd3d3LWZvcm0tdXJsZW5jb2RlZDsgY2hhcnNldD11dGYtOCJdLCAiQXV0aG9yaXphdGlvbiI6IFsiQVdTNC1ITUFDLVNIQTI1NiBDcmVkZW50aWFsPWZvby8yMDE2MDkzMC91cy1lYXN0LTEvc3RzL2F3czRfcmVxdWVzdCwgU2lnbmVkSGVhZGVycz1jb250ZW50LWxlbmd0aDtjb250ZW50LXR5cGU7aG9zdDt4LWFtei1kYXRlO3gtdmF1bHQtc2VydmVyLCBTaWduYXR1cmU9YTY5ZmQ3NTBhMzQ0NWM0ZTU1M2UxYjNlNzlkM2RhOTBlZWY1NDA0N2YxZWI0ZWZlOGZmYmM5YzQyOGMyNjU1YiJdfQ==" }'
```

The response will be in JSON.
For example:

```javascript
{
  "auth": {
    "renewable": true,
    "lease_duration": 72000,
    "metadata": {
      "role_tag_max_ttl": "0s",
      "role": "ami-f083709d",
      "region": "us-east-1",
      "nonce": "5defbf9e-a8f9-3063-bdfc-54b7a42a1f95",
      "instance_id": "i-a832f734",
      "ami_id": "ami-f083709d"
    },
    "policies": [
      "default",
      "dev",
      "prod"
    ],
    "accessor": "5cd96cd1-58b7-2904-5519-75ddf957ec06",
    "client_token": "150fc858-2402-49c9-56a5-f4b57f2c8ff1"
  },
  "warnings": null,
  "wrap_info": null,
  "data": null,
  "lease_duration": 0,
  "renewable": false,
  "lease_id": "",
  "request_id": "d7d50c06-56b8-37f4-606c-ccdc87a1ee4c"
}
```

## API

The AWS auth method has a full HTTP API. Please see the
[AWS Auth API](/vault/api-docs/auth/aws) for more
details.

## Code example

The following example demonstrates the AWS (IAM) auth method to authenticate with Vault.

<CodeTabs>

<CodeBlockConfig>

```go
package main

import (
	"context"
	"fmt"

	vault "github.com/hashicorp/vault/api"
	auth "github.com/hashicorp/vault/api/auth/aws"
)

// Fetches a key-value secret (kv-v2) after authenticating to Vault via AWS IAM,
// one of two auth methods used to authenticate with AWS (the other is EC2 auth).
func getSecretWithAWSAuthIAM() (string, error) {
	config := vault.DefaultConfig() // modify for more granular configuration

	client, err := vault.NewClient(config)
	if err != nil {
		return "", fmt.Errorf("unable to initialize Vault client: %w", err)
	}

	awsAuth, err := auth.NewAWSAuth(
		// if not provided, Vault will fall back on looking for a role with the
		// IAM role name if you're using the iam auth type, or the EC2 instance's
		// AMI id if using the ec2 auth type
		auth.WithRole("dev-role-iam"),
	)
	if err != nil {
		return "", fmt.Errorf("unable to initialize AWS auth method: %w", err)
	}

	authInfo, err := client.Auth().Login(context.Background(), awsAuth)
	if err != nil {
		return "", fmt.Errorf("unable to login to AWS auth method: %w", err)
	}
	if authInfo == nil {
		return "", fmt.Errorf("no auth info was returned after login")
	}

	// get secret from the default mount path for KV v2 in dev mode, "secret"
	secret, err := client.KVv2("secret").Get(context.Background(), "creds")
	if err != nil {
		return "", fmt.Errorf("unable to read secret: %w", err)
	}

	// data map can contain more than one key-value pair,
	// in this case we're just grabbing one of them
	value, ok := secret.Data["password"].(string)
	if !ok {
		return "", fmt.Errorf("value type assertion failed: %T %#v", secret.Data["password"], secret.Data["password"])
	}

	return value, nil
}
```
</CodeBlockConfig>

<CodeBlockConfig>

```cs
using System;
using System.Text;
using Amazon.Runtime;
using Amazon.Runtime.Internal;
using Amazon.Runtime.Internal.Auth;
using Amazon.Runtime.Internal.Util;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;
using Amazon.SecurityToken.Model.Internal.MarshallTransformations;
using Newtonsoft.Json;
using VaultSharp;
using VaultSharp.V1.AuthMethods;
using VaultSharp.V1.AuthMethods.AWS;
using VaultSharp.V1.Commons;
using VaultSharp.V1.SecretsEngines.AWS;

namespace Examples
{
    public class AwsAuthExample
    {
        /// <summary>
        /// Fetches a key-value secret (kv-v2) after authenticating to Vault via AWS IAM,
        /// one of two auth methods used to authenticate with AWS (the other is EC2 auth).
        /// </summary>
        public string GetSecretAWSAuthIAM()
        {
            var vaultAddr = Environment.GetEnvironmentVariable("VAULT_ADDR");
            if(String.IsNullOrEmpty(vaultAddr))
            {
                throw new
System.ArgumentNullException("Vault Address");
            }

            var roleName = Environment.GetEnvironmentVariable("VAULT_ROLE");
            if(String.IsNullOrEmpty(roleName))
            {
                throw new System.ArgumentNullException("Vault Role Name");
            }

            var amazonSecurityTokenServiceConfig = new AmazonSecurityTokenServiceConfig();

            // Initialize BasicAWS Credentials w/ an accessKey and secretKey
            Amazon.Runtime.AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey: Environment.GetEnvironmentVariable("AWS_ACCESS_KEY_ID"),
                                                                secretKey: Environment.GetEnvironmentVariable("AWS_SECRET_ACCESS_KEY"));

            // Construct the IAM Request and add necessary headers
            var iamRequest = GetCallerIdentityRequestMarshaller.Instance.Marshall(new GetCallerIdentityRequest());

            iamRequest.Endpoint = new Uri(amazonSecurityTokenServiceConfig.DetermineServiceURL());
            iamRequest.ResourcePath = "/";

            iamRequest.Headers.Add("User-Agent", "some-agent");
            iamRequest.Headers.Add("X-Amz-Security-Token", awsCredentials.GetCredentials().Token);
            iamRequest.Headers.Add("Content-Type", "application/x-www-form-urlencoded; charset=utf-8");

            new AWS4Signer().Sign(iamRequest, amazonSecurityTokenServiceConfig, new RequestMetrics(), awsCredentials.GetCredentials().AccessKey, awsCredentials.GetCredentials().SecretKey);
            var iamSTSRequestHeaders = iamRequest.Headers;

            // Convert headers to Base64 encoded version
            var base64EncodedIamRequestHeaders = Convert.ToBase64String(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(iamSTSRequestHeaders)));

            IAuthMethodInfo authMethod = new IAMAWSAuthMethodInfo(roleName: roleName, requestHeaders: base64EncodedIamRequestHeaders);

            var vaultClientSettings = new VaultClientSettings(vaultAddr, authMethod);

            IVaultClient vaultClient = new VaultClient(vaultClientSettings);

            // We can retrieve the secret from the VaultClient object
            Secret<SecretData> kv2Secret = null;
            kv2Secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(path: "/creds").Result;

            var password = kv2Secret.Data.Data["password"];

            return password.ToString();
        }
    }
}
```
</CodeBlockConfig>

</CodeTabs>

---
layout: docs
page_title: AWS - Auth Methods
description: The aws auth method allows automated authentication of AWS entities.
---

# AWS auth method

@include 'x509-sha1-deprecation.mdx'

@include 'aws-sha1-deprecation.mdx'

The `aws` auth method provides an automated mechanism to retrieve a Vault token
for IAM principals and AWS EC2 instances. Unlike most Vault auth methods, this
method does not require manual first-deploying, or provisioning
security-sensitive credentials (tokens, username/password, client certificates,
etc.), by operators under many circumstances.

## Authentication workflow

There are two authentication types present in the aws auth method: `iam` and `ec2`.

With the `iam` method, a special AWS request signed with AWS IAM credentials is
used for authentication. The IAM credentials are automatically supplied to AWS
instances in IAM instance profiles, Lambda functions, and others, and it is this
information already provided by AWS which Vault can use to authenticate clients.

With the `ec2` method, AWS is treated as a Trusted Third Party and
cryptographically signed dynamic metadata information that uniquely represents
each EC2 instance is used for authentication. This metadata information is
automatically supplied by AWS to all EC2 instances.

Based on how you attempt to authenticate, Vault will determine if you are
attempting to use the `iam` or `ec2` type.
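As an illustrative sketch (not Vault's actual implementation), the auth type can be thought of as determined by which fields appear in the login payload: an `ec2` login carries a `pkcs7` document, while an `iam` login carries the `iam_request_*` fields. The helper name `detectAuthType` below is hypothetical.

```go
// Illustrative sketch only: Vault determines the auth type from the shape of
// the login request. A pkcs7 field implies the ec2 type; iam_request_* fields
// imply the iam type. detectAuthType is a hypothetical helper, not Vault code.
package main

import "fmt"

func detectAuthType(payload map[string]string) string {
	if _, ok := payload["pkcs7"]; ok {
		return "ec2"
	}
	if _, ok := payload["iam_http_request_method"]; ok {
		return "iam"
	}
	return "unknown"
}

func main() {
	ec2Login := map[string]string{"role": "dev-role", "pkcs7": "<signature>"}
	iamLogin := map[string]string{"role": "dev-role-iam", "iam_http_request_method": "POST"}

	fmt.Println(detectAuthType(ec2Login)) // ec2
	fmt.Println(detectAuthType(iamLogin)) // iam
}
```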
Each has a different authentication workflow, and each can solve different use cases.

~> **Note:** The `ec2` method was implemented before the primitives to implement the
`iam` method were supported by AWS. The `iam` method is the recommended approach as
it is more flexible and aligns with best practices to perform access control and
authentication. See the section on comparing the two auth methods below for more
information.

## Usage

See the [Authentication](#authentication) section for Vault CLI and API usage
examples. The [Code Example](#code-example) section provides a code snippet
demonstrating the authentication with Vault using the AWS auth method.

## IAM auth method

The AWS STS API includes a method,
[`sts:GetCallerIdentity`](http://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html),
which allows you to validate the identity of a client. The client signs a
`GetCallerIdentity` query using the [AWS Signature v4
algorithm](http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html) and
sends it to the Vault server. The credentials used to sign the GetCallerIdentity
request can come from the EC2 instance metadata service for an EC2 instance, or
from the AWS environment variables in an AWS Lambda function execution, which
obviates the need for an operator to manually provision some sort of identity
material first. However, the credentials can, in principle, come from anywhere,
not just from the locations AWS has provided for you.

The `GetCallerIdentity` query consists of four pieces of information: the
request URL, the request body, the request headers, and the request method, as
the AWS signature is computed over those fields. The Vault server reconstructs
the query using this information and forwards it on to the AWS STS service.
Depending on the response from the STS service, the server authenticates the
client.

Notably, clients don't need network-level access themselves to talk to the AWS
STS API endpoint; they merely need
access to the credentials to sign the request. However, it means that the Vault
server does need network-level access to send requests to the STS endpoint.

Each signed AWS request includes the current timestamp to mitigate the risk of
replay attacks. In addition, Vault allows you to require an additional header,
`X-Vault-AWS-IAM-Server-ID`, to be present to mitigate against different types of
replay attacks, such as a signed `GetCallerIdentity` request stolen from a dev
Vault instance and used to authenticate to a prod Vault instance. Vault further
requires that this header be one of the headers included in the AWS signature
and relies upon AWS to authenticate that signature.

While AWS API endpoints support both signed GET and POST requests, for
simplicity, the aws auth method supports only POST requests. It also does not
support `presigned` requests, i.e., requests with `X-Amz-Credential`,
`X-Amz-Signature`, and `X-Amz-SignedHeaders` GET query parameters containing the
authenticating information.

It's also important to note that Amazon does NOT appear to include any sort of
authorization around calls to `GetCallerIdentity`. For example, if you have an
IAM policy on your credential that requires all access to be MFA authenticated,
non-MFA authenticated credentials (i.e., raw credentials, not those retrieved by
calling `GetSessionToken` and supplying an MFA code) will still be able to
authenticate to Vault using this method. It does not appear possible to enforce
an IAM principal to be MFA authenticated while authenticating to Vault.

## EC2 auth method

Amazon EC2 instances have access to metadata which describes the instance. The
Vault EC2 auth method leverages the components of this metadata to authenticate
and distribute an initial Vault token to an EC2 instance. The data flow (which
is also represented in the graphic below) is as follows:

[![Vault AWS EC2 Authentication Flow](/img/vault-aws-ec2-auth-flow.png)](/img/vault-aws-ec2-auth-flow.png)

1. An AWS
EC2 instance fetches its [AWS Instance Identity Document][aws-iid]
   from the [EC2 Metadata Service][aws-ec2-mds]. In addition to data itself, AWS
   also provides the PKCS#7 signature of the data, and publishes the public keys
   (by region) which can be used to verify the signature.
1. The AWS EC2 instance makes a request to Vault with the PKCS#7 signature.
   The PKCS#7 signature contains the Instance Identity Document.
1. Vault verifies the signature on the PKCS#7 document, ensuring the information
   is certified accurate by AWS. This process validates both the validity and
   integrity of the document data. As an added security measure, Vault verifies
   that the instance is currently running using the public EC2 API endpoint.
1. Provided all steps are successful, Vault returns the initial Vault token to
   the EC2 instance. This token is mapped to any configured policies based on the
   instance metadata.

[aws-iid]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html
[aws-ec2-mds]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

There are various modifications to this workflow that provide more or less
security, as detailed later in this documentation.

## Authorization workflow

The basic mechanism of operation is per-role. Roles are registered in the method
and associated with a specific authentication type that cannot be changed once
the role has been created. Roles can also be associated with various optional
restrictions, such as the set of allowed policies and max TTLs on the generated
tokens. Each role can be specified with the constraints that are to be met
during the login. Many of these constraints accept lists of required values. For
any constraint which accepts a list of values, that constraint will be
considered satisfied if any one of the values is matched during the login
process. For example, one such constraint that is supported is to bind against a
list of AMI IDs. A role
which is bound to a specific list of AMIs can only be used for login by EC2
instances that are deployed to one of the AMIs that the role is bound to.

The iam auth method allows you to specify bound IAM principal ARNs. Clients
authenticating to Vault must have an ARN that matches one of the ARNs bound to
the role they are attempting to login to. The bound ARN allows specifying a
wildcard at the end of the bound ARN. For example, if the bound ARN were
`arn:aws:iam::123456789012:*`, it would allow any principal in AWS account
123456789012 to login to it. Similarly, if it were
`arn:aws:iam::123456789012:role/*`, it would allow any IAM role in the AWS
account to login to it. If you wish to specify a wildcard, you must give Vault
`iam:GetUser` and `iam:GetRole` permissions to properly resolve the full user
path.

In general, role bindings that are specific to an EC2 instance are only checked
when the ec2 auth method is used to login, while bindings specific to IAM
principals are only checked when the iam auth method is used to login. However,
the iam method includes the ability for you to "infer" an EC2 instance ID from
the authenticated client and apply many of the bindings that would otherwise
only apply specifically to EC2 instances.

In many cases, an organization will use a "seed AMI" that is specialized after
bootup by configuration management or similar processes. For this reason, a role
entry in the method can also be associated with a "role tag" when using the ec2
auth type. These tags are generated by the method and are placed as the value of
a tag with the given key on the EC2 instance. The role tag can be used to
further restrict the parameters set on the role, but cannot be used to grant
additional privileges. If a role with an AMI bind constraint has "role tag"
enabled on the role, and the EC2 instance performing login does not have an
expected tag on it, or if the tag on the instance is deleted for some reason,
authentication fails.

The role tags can be
generated at will by an operator with appropriate API access. They are
HMAC-signed by a per-role key stored within the method, allowing the method to
verify the authenticity of a found role tag and ensure that it has not been
tampered with. There is also a mechanism to deny list role tags if one has been
found to be distributed outside of its intended set of machines.

## IAM authentication inferences

With the iam auth method, normally Vault will see the IAM principal that
authenticated, either the IAM user or role. However, when you have an EC2
instance in an IAM instance profile, Vault can actually see the instance ID of
the instance and can "infer" that it's an EC2 instance. However, there are
important security caveats to be aware of before configuring Vault to make that
inference.

Each AWS IAM role has a "trust policy" which specifies which entities are
trusted to call
[`sts:AssumeRole`](http://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html)
on the role and retrieve credentials that can be used to authenticate with that
role. When AssumeRole is called, a parameter called RoleSessionName is passed
in, which is chosen arbitrarily by the entity which calls AssumeRole. If you
have a role with an ARN `arn:aws:iam::123456789012:role/MyRole`, then the
credentials returned by calling AssumeRole on that role will be
`arn:aws:sts::123456789012:assumed-role/MyRole/RoleSessionName` where
RoleSessionName is the session name in the AssumeRole API call. It is this
latter value which Vault actually sees.

When you have an EC2 instance in an instance profile, the corresponding role's
trust policy specifies that the principal `"Service": "ec2.amazonaws.com"` is
trusted to call AssumeRole. When this is configured, EC2 calls AssumeRole on
behalf of your instance, with a RoleSessionName corresponding to the instance's
instance ID. Thus, it is possible for Vault to extract the instance ID out of
the value it sees when an EC2 instance in an instance profile
authenticates to Vault with the iam auth method. This is known as "inferencing."
Vault can be configured, on a role-by-role basis, to infer that a caller is an
EC2 instance and, if so, apply further bindings that apply specifically to EC2
instances (most of the bindings available to the ec2 auth method).

However, it is very important to note that if any entity other than an AWS
service is permitted to call AssumeRole on your role, then that entity can
simply pass in your instance's instance ID and spoof your instance to Vault.
This also means that anybody who is able to modify your role's trust policy
(e.g., via
[`iam:UpdateAssumeRolePolicy`](http://docs.aws.amazon.com/IAM/latest/APIReference/API_UpdateAssumeRolePolicy.html)),
then that person could also spoof your instances. If this is a concern but you
would like to take advantage of inferencing, then you should tightly restrict
who is able to call AssumeRole on the role, tightly restrict who is able to call
UpdateAssumeRolePolicy on the role, and monitor CloudTrail logs for calls to
AssumeRole and UpdateAssumeRolePolicy. All of these caveats apply equally to
using the iam auth method without inferencing; the point is merely that Vault
cannot offer an iron-clad guarantee about the inference and it is up to
operators to determine, based on their own AWS controls and use cases, whether
or not it's appropriate to configure inferencing.

## Mixing authentication types

Vault allows you to configure using either the ec2 auth method or the iam auth
method, but not both auth methods. Further, assumed roles are not supported, and
Vault prevents you from enforcing restrictions that it cannot enforce given the
chosen auth type for a role. Some examples of how this works in practice:

1. You configure a role with the ec2 auth type, with a bound AMI ID. A
   client would not be able to login using the iam auth type.
2. You configure a role with the iam auth type, with a bound IAM
   principal ARN. A client would not be
able to login with the ec2 auth method.
3. You configure a role with the iam auth type and further configure
   inferencing. You have a bound AMI ID and a bound IAM principal ARN. A client
   must login using the iam method, the RoleSessionName must be a valid instance
   ID viewable by Vault, and the instance must have come from the bound AMI ID.

## Comparison of the IAM and EC2 methods

The iam and ec2 auth methods serve similar and somewhat overlapping
functionality, in that both authenticate some type of AWS entity to Vault. Here
are some comparisons that illustrate why the `iam` method is preferred over `ec2`:

- What type of entity is authenticated:
  - The ec2 auth method authenticates only AWS EC2 instances and is specialized
    to handle EC2 instances, such as restricting access to EC2 instances from
    a particular AMI, EC2 instances in a particular instance profile, or EC2
    instances with a specialized tag value (via the role_tag feature).
  - The iam auth method authenticates AWS IAM principals. This can
    include IAM users, IAM roles assumed from other accounts, AWS Lambdas that
    are launched in an IAM role, or even EC2 instances that are launched in an
    IAM instance profile. However, because it authenticates more generalized IAM
    principals, this method doesn't offer more granular controls beyond binding
    to a given IAM principal without the use of inferencing.

- How the entities are authenticated:
  - The ec2 auth method authenticates instances by making use of the EC2
    instance identity document, which is a cryptographically signed document
    containing metadata about the instance. This document changes relatively
    infrequently, so Vault adds a number of other constructs to mitigate against
    replay attacks, such as client nonces, role tags, instance migrations, etc.
    Because the instance identity document is signed by AWS, you have a strong
    guarantee that it came from an EC2 instance.
  - The iam auth method
authenticates by having clients provide a specially
    signed AWS API request which the method then passes on to AWS to validate
    the signature and tell Vault who created it. The actual secret (i.e.,
    the AWS secret access key) is never transmitted over the wire, and the
    AWS signature algorithm automatically expires requests after 15 minutes,
    providing simple and robust protection against replay attacks. The use of
    inferencing, however, provides a weaker guarantee that the credentials came
    from an EC2 instance in an IAM instance profile compared to the ec2
    authentication mechanism.
  - The instance identity document used in the ec2 auth method is more likely to
    be stolen given its relatively static nature, but it's harder to spoof. On
    the other hand, the credentials of an EC2 instance in an IAM instance
    profile are less likely to be stolen given their dynamic and short-lived
    nature, but it's easier to spoof credentials that might have come from an
    EC2 instance.

- Specific use cases:
  - If you have non-EC2 instance entities, such as IAM users, Lambdas in IAM
    roles, or developer laptops using [AdRoll's
    Hologram](https://github.com/AdRoll/hologram), then you would need to use the
    iam auth method.
  - If you have EC2 instances, then you could use either auth method. If you
    need more granular filtering beyond just the instance profile of given EC2
    instances (such as filtering based off the AMI the instance was launched
    from), then you would need to use the ec2 auth method, change the instance
    profile associated with your EC2 instances so they have unique IAM roles
    for each different Vault role you would want them to authenticate
    to, or make use of inferencing. If you need to make use of role tags, then
    you will need to use the ec2 auth method.

## Recommended Vault IAM policy

This specifies the recommended IAM policy needed by the AWS auth method. Note
that if you are using the same
credentials for the AWS auth and secret methods (e.g., if you're running Vault
on an EC2 instance in an IAM instance profile), then you will need to add
additional permissions as required by the AWS secret method.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "iam:GetInstanceProfile",
        "iam:GetUser",
        "iam:GetRole"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["sts:AssumeRole"],
      "Resource": ["arn:aws:iam::<AccountId>:role/<VaultRole>"]
    },
    {
      "Sid": "ManageOwnAccessKeys",
      "Effect": "Allow",
      "Action": [
        "iam:CreateAccessKey",
        "iam:DeleteAccessKey",
        "iam:GetAccessKeyLastUsed",
        "iam:GetUser",
        "iam:ListAccessKeys",
        "iam:UpdateAccessKey"
      ],
      "Resource": "arn:aws:iam::*:user/${aws:username}"
    }
  ]
}
```

Here are some of the scenarios in which Vault would need to use each of these
permissions. This isn't intended to be an exhaustive list of all the scenarios
in which Vault might make an AWS API call, but rather illustrative of why these
are needed.

- `ec2:DescribeInstances` is necessary when you are using the `ec2` auth method,
  or when you are inferring an `ec2_instance` entity type to validate that the
  EC2 instance meets binding requirements of the role.
- `iam:GetInstanceProfile` is used when you have a `bound_iam_role_arn` in the
  `ec2` auth method. Vault needs to determine which IAM role is attached to the
  instance profile.
- `iam:GetUser` and `iam:GetRole` are used when using the iam auth method and
  binding to an IAM user or role principal to determine the
  [AWS IAM Unique Identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids)
  or when using a wildcard on the bound ARN to resolve the full ARN of the user
  or role.
- The `sts:AssumeRole` stanza is necessary
when you are using [Cross Account
  Access](#cross-account-access). The `Resource`s specified should be a list of
  all the roles for which you have configured cross-account access, and each of
  those roles should have this IAM policy attached (except for the
  `sts:AssumeRole` statement).
- The `ManageOwnAccessKeys` stanza is necessary when you have configured Vault
  with static credentials, and wish to rotate these credentials with the
  [Rotate Root Credentials](/vault/api-docs/auth/aws#rotate-root-credentials)
  API call.

## Plugin Workload Identity Federation (WIF) <EnterpriseAlert product="vault" inline />

The AWS auth engine supports the plugin WIF workflow and has a source of
identity called a plugin identity token. A plugin identity token is a JWT that
is signed internally by the Vault's [plugin identity token
issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).

If there is a trust relationship configured between Vault and AWS through
[workload identity federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html),
the auth engine can exchange its identity token for short-lived STS credentials
needed to perform its actions.

Exchanging identity tokens for STS credentials lets the AWS auth engine operate
without configuring explicit access to sensitive IAM security credentials.

To configure the auth engine to use plugin WIF:

1. Ensure that Vault
   [openid-configuration](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-openid-configuration)
   and [public JWKS](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-public-jwks)
   APIs are network-reachable by AWS. We recommend using an API proxy or gateway
   if you need to limit Vault API exposure.

1. Create an
   [IAM OIDC identity provider](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html)
   in AWS.

   1. The provider URL **must** point at your
[Vault plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration)
      with the `/.well-known/openid-configuration` suffix removed. For example:
      `https://<host>:<port>/v1/identity/oidc/plugins`

   1. Uniquely identify the recipient of the plugin identity token as the audience.
      In AWS, the recipient is the identity provider. We recommend using
      the `<host>:<port>/v1/identity/oidc/plugins` portion of the provider URL as your
      recipient since it will be unique for each configured identity provider.

1. Create a
   [web identity role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_oidc.html#idp_oidc_Create)
   in AWS with the same audience used for your IAM OIDC identity provider.

1. Configure the AWS auth engine with the IAM OIDC audience value and web
   identity role ARN.

   ```shell-session
   $ vault write auth/aws/config/client \
     identity_token_audience="vault.example/v1/identity/oidc/plugins" \
     role_arn="arn:aws:iam::123456789123:role/example-web-identity-role"
   ```

Your auth engine can now use plugin WIF for its configuration credentials. By
default, WIF
[credentials](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html)
have a time-to-live of 1 hour and automatically refresh when they expire.

Please see the [API documentation](/vault/api-docs/auth/aws#configure-client)
for more details on the fields associated with plugin WIF.

## Client nonce

Note: this only applies to the ec2 auth method.

If an unintended party gains access to the PKCS#7 signature of the identity
document (which, by default, is available to every process and user that gains
access to an EC2 instance), it can impersonate that instance and fetch a Vault
token. The method addresses this problem by using a Trust On First Use (TOFU)
mechanism that allows the first client to present the PKCS#7 signature of the
document to be authenticated and denying the rest. An important
During the first login, the method stores the instance ID that authenticated in
an `accesslist`. One method of operation of the method is to disallow any
authentication attempt for an instance ID contained in the access list, using
the `disallow_reauthentication` option on the role, meaning that an instance is
allowed to login only once. However, this has consequences for token rotation,
as it means that once a token has expired, subsequent authentication attempts
would fail. By default, reauthentication is enabled in this method, and can be
turned off using the `disallow_reauthentication` parameter on the registered
role.

In the default method of operation, the method will return a unique nonce
during the first authentication attempt, as part of auth `metadata`. Clients
should present this `nonce` for subsequent login attempts and it should match
the `nonce` cached at the identity accesslist entry at the method. Since only
the original client knows the `nonce`, only the original client is allowed to
reauthenticate. (This is the reason that this is an accesslist rather than a
deny list: by default, it's keeping track of clients allowed to reauthenticate,
rather than those that are not.)

Clients can choose to provide a `nonce` even for the first login attempt, in
which case the provided `nonce` will be tied to the cached identity accesslist
entry. It is recommended to use a strong `nonce` value in this case.

It is up to the client to behave correctly with respect to the nonce: if the
client stores the nonce on disk it can survive reboots, but could also give
access to other users or applications on the instance. It is also up to the
operator to ensure that client nonces are in fact unique; sharing nonces allows
a compromise of the nonce value to enable an attacker that gains access to any
EC2 instance to imitate the legitimate client on that instance. This is why
nonces can be disabled on the method side in favor of only a single
authentication per instance; in some cases, such as when using ASGs, instances
are immutable and single-boot anyways, and in conjunction with a high max TTL,
reauthentication may not be needed (and if it is, the instance can simply be
shut down and allow the ASG to start a new one).

In both cases, entries can be removed from the accesslist by instance ID,
allowing reauthentication by a client if the nonce is lost (or not used) and an
operator approves the process.

One other point: if available in the OS/distribution being used with the EC2
instance, it is not a bad idea to firewall access to the signed PKCS#7 metadata
to ensure that it is accessible only to the matching user(s) that require
access.

The client nonce, which is generated by the backend and returned along with the
authentication response, will be audit logged in plaintext. If this is
undesired, clients can supply a custom nonce to the login endpoint which will
not be returned and hence will not be audit logged.

## Advanced options and caveats

### Dynamic management of policies via role tags

~> **Note:** This only applies to the ec2 auth method or the iam auth method
when inferencing is used.

If the instance is required to have a customized set of policies based on the
role it plays, the `role_tag` option can be used to provide a tag to set on
instances, for a given role. When this option is set, during login, along with
verification of the PKCS#7 signature and instance health, the method will query
for the value of a specific tag with the configured key that is attached to the
instance. The tag holds information that represents a *subset* of privileges
that are set on the role and are used to further restrict the set of the role's
privileges for that particular instance.

A `role_tag` can be created using the `auth/aws/role/<role>/tag` endpoint and
is immutable.
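As a sketch of the role-tag workflow described above (the role name, policy
subset, and instance ID are illustrative, and the `tag_key`/`tag_value`
placeholders stand in for the values the tag endpoint returns):

```shell-session
# Generate a role tag that restricts an instance to a subset of the role's policies.
$ vault write auth/aws/role/dev-role/tag policies=dev

# The response contains the tag key and value to attach to the EC2 instance,
# for example with the AWS CLI:
$ aws ec2 create-tags --resources i-0123456789abcdef0 \
    --tags Key=<tag_key>,Value=<tag_value>
```

Because the tag value is HMAC-protected with a per-role key held only by the
method, an operator who can set tags on instances still cannot forge one that
grants more than the role allows.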
The information present in the tag is SHA256 hashed and HMAC protected. The
per-role key used for the HMAC is only maintained in the method. This prevents
an adversarial operator from modifying the tag when setting it on the EC2
instance in order to escalate privileges.

When the `role_tag` option is enabled on a role, the instances are required to
have a role tag. If the tag is not found on the EC2 instance, authentication
will fail. This is to ensure that the privileges of an instance are never
escalated for not having the tag on it or for getting the tag removed. If the
role tag creation does not specify the policy component, the client will
inherit the allowed policies set on the role. If the role tag creation
specifies the policy component but it contains no policies, the token will
contain only the `default` policy; by default, this policy allows only
manipulation (revocation, renewal, lookup) of the existing token, plus access
to its [cubbyhole](/vault/docs/secrets/cubbyhole). This can be useful to allow
instances access to a secure "scratch space" for storing data (via the token's
cubbyhole) but without granting any access to other resources provided by or
resident in Vault.

### Handling lost client nonces

~> **Note:** This only applies to the ec2 auth method.

If an EC2 instance loses its client nonce (due to a reboot, a stop/start of the
client, etc.), subsequent login attempts will not succeed. If the client nonce
is lost, normally the only option is to delete the entry corresponding to the
instance ID from the identity `accesslist` in the method. This can be done via
the `auth/aws/identity-accesslist/<instance_id>` endpoint. This allows a new
client nonce to be accepted by the method during the next login request.

Under certain circumstances there is another useful setting. When the instance
is placed onto a host upon creation, it is given a `pendingTime` value in the
instance identity document (documentation from AWS does not cover this option,
unfortunately). If an instance is stopped and started, the `pendingTime` value
is updated (this does not apply to reboots, however).

The method can take advantage of this via the `allow_instance_migration`
option, which is set per-role. When this option is enabled, if the client nonce
does not match the saved nonce, the `pendingTime` value in the instance
identity document will be checked; if it is newer than the stored `pendingTime`
value, the method assumes that the client was stopped/started and allows the
client to log in successfully, storing the new nonce as the valid nonce for
that client. This essentially restarts the TOFU mechanism any time the instance
is stopped and started, so it should be used with caution. Just like with
initial authentication, the legitimate client should have a way to alert (or an
alert should trigger based on its logs) if it is denied authentication.

Unfortunately, `allow_instance_migration` only helps during stop/start actions;
the current metadata does not provide a way to allow this automatic behavior
during reboots. The method will be updated if this needed metadata becomes
available.

The `allow_instance_migration` option is set per-role, and can also be
specified in a role tag. Since role tags can only restrict behavior, if the
option is set to `false` on the role, a value of `true` in the role tag takes
effect; however, if the option is set to `true` on the role, a value set in the
role tag has no effect.

### Disabling reauthentication

~> **Note:** this only applies to the ec2 auth method.

If, in a given organization's architecture, a client fetches a long-lived Vault
token and has no need to rotate the token, all future logins for that instance
ID can be disabled. If the option `disallow_reauthentication` is set, only one
login will be allowed per instance. If the intended client successfully
retrieves a token during login, it can be sure that its token will not be
hijacked by another entity.

When the `disallow_reauthentication` option is enabled, the client can choose
not to supply a nonce during login, although it is not an error to do so (the
nonce is simply ignored).
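A single-login role, together with the operator escape hatch for a lost nonce,
can be sketched as follows (the role name, AMI binding, and instance ID are
illustrative):

```shell-session
# Each instance matching this role may log in exactly once.
$ vault write auth/aws/role/single-login-role auth_type=ec2 \
    bound_ami_id=ami-fce3c696 disallow_reauthentication=true policies=dev

# To permit another login for a specific instance (e.g. after a lost nonce),
# an operator removes its identity accesslist entry:
$ vault delete auth/aws/identity-accesslist/i-0123456789abcdef0
```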
Note that reauthentication is enabled by default. If only a single login is
desired, `disallow_reauthentication` should be set explicitly on the role or on
the role tag.

The `disallow_reauthentication` option is set per-role, and can also be
specified in a role tag. Since role tags can only restrict behavior, if the
option is set to `false` on the role, a value of `true` in the role tag takes
effect; however, if the option is set to `true` on the role, a value set in the
role tag has no effect.

### Deny listing role tags

~> **Note:** this only applies to the ec2 auth method or the iam auth method
when inferencing is used.

Role tags are tied to a specific role, but the method has no control over which
instances using that role should have any particular role tag; that is purely
up to the operator. Although role tags are only restrictive (a tag cannot
escalate privileges above what is set on its role), if a role tag is found to
have been used incorrectly, and the administrator wants to ensure that the role
tag has no further effect, the role tag can be placed on a deny list via the
endpoint `auth/aws/roletag-denylist/<role_tag>`. Note that this will not
invalidate the tokens that were already issued; it only blocks any further
login requests from those instances that have the deny-listed tag attached to
them.

### Expiration times and tidying of `denylist` and `accesslist` entries

The expired entries in both the identity `accesslist` and the role tag
`denylist` are deleted automatically. The entries in both of these lists
contain an expiration time which is dynamically determined by three factors:
`max_ttl` set on the role, `max_ttl` set on the role tag, and the `max_ttl`
value of the method mount. The least of these three dictates the maximum TTL of
the issued token, and correspondingly will be set as the expiration time of
these entries.

The endpoints `auth/aws/tidy/identity-accesslist` and
`auth/aws/tidy/roletag-denylist` are provided to clean up the entries present
in these lists. These endpoints allow defining a safety buffer, such that an
entry must not only be expired, but be past expiration by the amount of time
dictated by the safety buffer in order to actually remove the entry.

Automatic deletion of expired entries is performed by the periodic function of
the method. This function does the tidying of both access list role tags and
access list identities. Periodic tidying is activated by default and has a
safety buffer of 72 hours, meaning only those entries are deleted which were
expired for more than 72 hours at the time the tidy operation is performed.
This can be configured via the `config/tidy/roletag-denylist` and
`config/tidy/identity-accesslist` endpoints.

### Varying public certificates

~> **Note:** this only applies to the ec2 auth method.

The AWS public certificate, which contains the public key used to verify the
PKCS#7 signature, varies for different AWS regions. The primary AWS public
certificate, which covers most AWS regions, is already included in Vault and
does not need to be added. Instances whose PKCS#7 signatures cannot be verified
by the default public certificate included in Vault can register a different
public certificate, which can be found
[here](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html),
via the `auth/aws/config/certificate/<cert_name>` endpoint.

### Dangling tokens

An EC2 instance, after authenticating itself with the method, gets a Vault
token. After that, if the instance terminates or goes down for any reason, the
method will not be aware of such events. The token issued will still be valid
until it expires. The token will likely expire sooner than its lifetime when
the instance fails to renew the token on time.

### Cross account access

To allow Vault to authenticate IAM principals and EC2 instances in other
accounts, Vault supports using AWS STS (Security Token Service) to assume AWS
IAM Roles in other accounts.
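Configuring the role Vault assumes for a given target account can be sketched
as follows (the account ID and role ARN are illustrative):

```shell-session
# Tell Vault which IAM Role to assume when validating IAM principals and
# EC2 instances that live in account 123456789012.
$ vault write auth/aws/config/sts/123456789012 \
    sts_role=arn:aws:iam::123456789012:role/vault-auth-assumed-role
```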
For each target AWS account ID, you configure the IAM Role for Vault to assume
using the `auth/aws/config/sts/<account_id>` endpoint, and Vault will use
credentials from assuming that role to validate IAM principals and EC2
instances in the target account.

The account in which Vault is running (i.e. the master account) must be listed
as a trusted entity in the IAM Role being assumed on the remote account. The
Role itself should allow the permissions specified in the
[Recommended Vault IAM Policy](#recommended-vault-iam-policy), except it
doesn't need any further `sts:AssumeRole` permissions.

Furthermore, in the master account, Vault must be granted the action
`sts:AssumeRole` for the IAM Role to be assumed.

### AWS instance metadata timeout

@include 'aws-imds-timeout.mdx'

## Authentication

### Via the CLI

1. Enable AWS EC2 authentication in Vault:

   ```shell-session
   $ vault auth enable aws
   ```

1. Configure the credentials required to make AWS API calls.

   If not specified, Vault will attempt to use standard environment variables
   (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) or IAM EC2 instance role
   credentials if available.

   The IAM account or role to which the credentials map must allow the
   `ec2:DescribeInstances` action. In addition, if IAM Role binding is used
   (see `bound_iam_role_arn` below), `iam:GetInstanceProfile` must also be
   allowed.

   To provide IAM security credentials to Vault, we recommend using Vault
   [plugin workload identity federation](#plugin-workload-identity-federation-wif)
   (WIF).

   ```shell-session
   $ vault write auth/aws/config/client \
       secret_key=vCtSM8ZUEQ3mOFVlYPBQkf2sO6F/W7a5TVzrl3Oj \
       access_key=VKIAJBRHKH6EVTTNXDHA

   $ vault write auth/aws/config/client \
       identity_token_audience="vault.example/v1/identity/oidc/plugins" \
       role_arn="arn:aws:iam::123456789123:role/example-web-identity-role"
   ```

1. Configure the policies on the role:

   ```shell-session
   $ vault write auth/aws/role/dev-role auth_type=ec2 bound_ami_id=ami-fce3c696 policies=prod,dev max_ttl=500h

   $ vault write auth/aws/role/dev-role-iam auth_type=iam \
       bound_iam_principal_arn=arn:aws:iam::123456789012:role/MyRole policies=prod,dev max_ttl=500h
   ```

1. Configure a required X-Vault-AWS-IAM-Server-ID header (recommended):

   ```shell-session
   $ vault write auth/aws/config/client iam_server_id_header_value=vault.example.com
   ```

1. Perform the login operation. For the ec2 auth method, first fetch the
   PKCS#7 signature on the AWS instance:

   ```shell-session
   $ SIGNATURE=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/rsa2048 | tr -d '\n')
   ```

   then set the signature on the login endpoint:

   ```shell-session
   $ vault write auth/aws/login role=dev-role \
       pkcs7="$SIGNATURE"
   ```

   For the iam auth method, generating the signed request is a non-standard
   operation. The Vault CLI supports generating this for you:

   ```shell-session
   $ vault login -method=aws header_value=vault.example.com role=dev-role-iam
   ```

   This assumes you have AWS credentials configured in the standard locations
   AWS SDKs search for credentials (environment variables,
   `~/.aws/credentials`, IAM instance profile, or ECS task role, in that
   order). If you do not have IAM credentials available at any of these
   locations, you can explicitly pass them in on the command line (though this
   is not recommended), omitting `aws_security_token` if not applicable:

   ```shell-session
   $ vault login -method=aws header_value=vault.example.com role=dev-role-iam \
       aws_access_key_id=<access_key> \
       aws_secret_access_key=<secret_key> \
       aws_security_token=<security_token>
   ```

   The region used defaults to `us-east-1`, but you can specify a custom region
   like so:

   ```shell-session
   $ vault login -method=aws region=us-west-2 role=dev-role-iam
   ```

   If the region is specified as `auto`, the Vault CLI will determine the
   region based on standard AWS credentials precedence as described earlier.
Whichever method is used, be sure the designated region corresponds to that of
the STS endpoint you're using.

~> **Note:** If you are making use of AWS GovCloud and setting the
`sts_endpoint` and `sts_region` role parameters to `us-gov-west-1` /
`us-gov-east-1`, then you must include the `region` argument in your login
request with a matching value, i.e. `region=us-gov-west-1`.

An example of how to generate the required request values for the `login`
method can be found in the
[vault cli source code](https://github.com/hashicorp/vault/blob/main/builtin/credential/aws/cli.go).
Using an approach such as this, the request parameters can be generated and
passed to the `login` method:

```shell-session
$ vault write auth/aws/login role=dev-role-iam \
    iam_http_request_method=POST \
    iam_request_url=aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8= \
    iam_request_body=QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNQ== \
    iam_request_headers=eyJDb250ZW50LUxlbmd0aCI6IFsiNDMiXSwgIlVzZXItQWdlbnQiOiBbImF3cy1zZGstZ28vMS40LjEyIChnbzEuNy4xOyBsaW51eDsgYW1kNjQpIl0sICJYLVZhdWx0LUFXU0lBTS1TZXJ2ZXItSWQiOiBbInZhdWx0LmV4YW1wbGUuY29tIl0sICJYLUFtei1EYXRlIjogWyIyMDE2MDkzMFQwNDMxMjFaIl0sICJDb250ZW50LVR5cGUiOiBbImFwcGxpY2F0aW9uL3gtd3d3LWZvcm0tdXJsZW5jb2RlZDsgY2hhcnNldD11dGYtOCJdLCAiQXV0aG9yaXphdGlvbiI6IFsiQVdTNC1ITUFDLVNIQTI1NiBDcmVkZW50aWFsPWZvby8yMDE2MDkzMC91cy1lYXN0LTEvc3RzL2F3czRfcmVxdWVzdCwgU2lnbmVkSGVhZGVycz1jb250ZW50LWxlbmd0aDtjb250ZW50LXR5cGU7aG9zdDt4LWFtei1kYXRlO3gtdmF1bHQtc2VydmVyLCBTaWduYXR1cmU9YTY5ZmQ3NTBhMzQ0NWM0ZTU1M2UxYjNlNzlkM2RhOTBlZWY1NDA0N2YxZWI0ZWZlOGZmYmM5YzQyOGMyNjU1YiJdfQ
```

### Via the API

1. Enable AWS authentication in Vault:

   ```shell-session
   $ curl -X POST -H "X-Vault-Token:123" \
       http://127.0.0.1:8200/v1/sys/auth/aws -d '{"type":"aws"}'
   ```

1. Configure the credentials required to make AWS API calls:

   ```shell-session
   $ curl -X POST -H "X-Vault-Token:123" \
       http://127.0.0.1:8200/v1/auth/aws/config/client \
       -d '{"access_key":"VKIAJBRHKH6EVTTNXDHA", "secret_key":"vCtSM8ZUEQ3mOFVlYPBQkf2sO6F/W7a5TVzrl3Oj"}'
   ```

1. Configure the policies on the role:

   ```shell-session
   $ curl -X POST -H "X-Vault-Token:123" \
       http://127.0.0.1:8200/v1/auth/aws/role/dev-role \
       -d '{"bound_ami_id":"ami-fce3c696","policies":"prod,dev","max_ttl":"500h"}'

   $ curl -X POST -H "X-Vault-Token:123" \
       http://127.0.0.1:8200/v1/auth/aws/role/dev-role-iam \
       -d '{"auth_type":"iam","policies":"prod,dev","max_ttl":"500h","bound_iam_principal_arn":"arn:aws:iam::123456789012:role/MyRole"}'
   ```

1. Perform the login operation:

   ```shell-session
   $ curl -X POST http://127.0.0.1:8200/v1/auth/aws/login \
       -d '{"role":"dev-role","pkcs7":"'$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/rsa2048 | tr -d '\n')'","nonce":"5defbf9e-a8f9-3063-bdfc-54b7a42a1f95"}'

   $ curl -X POST http://127.0.0.1:8200/v1/auth/aws/login \
       -d '{"role":"dev", "iam_http_request_method": "POST", "iam_request_url": "aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8=", "iam_request_body": "QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNQ==", "iam_request_headers": "eyJDb250ZW50LUxlbmd0aCI6IFsiNDMiXSwgIlVzZXItQWdlbnQiOiBbImF3cy1zZGstZ28vMS40LjEyIChnbzEuNy4xOyBsaW51eDsgYW1kNjQpIl0sICJYLVZhdWx0LUFXU0lBTS1TZXJ2ZXItSWQiOiBbInZhdWx0LmV4YW1wbGUuY29tIl0sICJYLUFtei1EYXRlIjogWyIyMDE2MDkzMFQwNDMxMjFaIl0sICJDb250ZW50LVR5cGUiOiBbImFwcGxpY2F0aW9uL3gtd3d3LWZvcm0tdXJsZW5jb2RlZDsgY2hhcnNldD11dGYtOCJdLCAiQXV0aG9yaXphdGlvbiI6IFsiQVdTNC1ITUFDLVNIQTI1NiBDcmVkZW50aWFsPWZvby8yMDE2MDkzMC91cy1lYXN0LTEvc3RzL2F3czRfcmVxdWVzdCwgU2lnbmVkSGVhZGVycz1jb250ZW50LWxlbmd0aDtjb250ZW50LXR5cGU7aG9zdDt4LWFtei1kYXRlO3gtdmF1bHQtc2VydmVyLCBTaWduYXR1cmU9YTY5ZmQ3NTBhMzQ0NWM0ZTU1M2UxYjNlNzlkM2RhOTBlZWY1NDA0N2YxZWI0ZWZlOGZmYmM5YzQyOGMyNjU1YiJdfQ"}'
   ```

   The response will be in JSON. For example:

   ```javascript
   {
     "auth": {
       "renewable": true,
       "lease_duration": 72000,
       "metadata": {
         "role_tag_max_ttl": "0s",
         "role": "ami-f083709d",
         "region": "us-east-1",
         "nonce": "5defbf9e-a8f9-3063-bdfc-54b7a42a1f95",
         "instance_id": "i-a832f734",
         "ami_id": "ami-f083709d"
       },
       "policies": ["default", "dev", "prod"],
       "accessor": "5cd96cd1-58b7-2904-5519-75ddf957ec06",
       "client_token": "150fc858-2402-49c9-56a5-f4b57f2c8ff1"
     },
     "warnings": null,
     "wrap_info": null,
     "data": null,
     "lease_duration": 0,
     "renewable": false,
     "lease_id": "",
     "request_id": "d7d50c06-56b8-37f4-606c-ccdc87a1ee4c"
   }
   ```
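The `iam_request_url` and `iam_request_body` values in the login examples are
ordinary base64 encodings of the STS `GetCallerIdentity` call and can be
inspected with standard tools:

```shell-session
$ printf '%s' 'aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8=' | base64 -d
https://sts.amazonaws.com/
$ printf '%s' 'QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNQ==' | base64 -d
Action=GetCallerIdentity&Version=2011-06-15
```

Only `iam_request_headers` carries anything request-specific: the SigV4
`Authorization` header computed from the caller's AWS credentials, which Vault
forwards to STS to verify the caller's identity.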
## API

The AWS auth method has a full HTTP API. Please see the
[AWS Auth API](/vault/api-docs/auth/aws) for more details.

## Code example

The following example demonstrates the AWS (IAM) auth method to authenticate
with Vault.

<CodeTabs>

<CodeBlockConfig>

```go
package main

import (
	"context"
	"fmt"

	vault "github.com/hashicorp/vault/api"
	auth "github.com/hashicorp/vault/api/auth/aws"
)

// Fetches a key-value secret (kv-v2) after authenticating to Vault via AWS IAM,
// one of two auth methods used to authenticate with AWS (the other is EC2 auth).
func getSecretWithAWSAuthIAM() (string, error) {
	config := vault.DefaultConfig() // modify for more granular configuration

	client, err := vault.NewClient(config)
	if err != nil {
		return "", fmt.Errorf("unable to initialize Vault client: %w", err)
	}

	awsAuth, err := auth.NewAWSAuth(
		auth.WithRole("dev-role-iam"), // if not provided, Vault will fall back on looking for a role with the IAM role name if you're using the iam auth type, or the EC2 instance's AMI id if using the ec2 auth type
	)
	if err != nil {
		return "", fmt.Errorf("unable to initialize AWS auth method: %w", err)
	}

	authInfo, err := client.Auth().Login(context.Background(), awsAuth)
	if err != nil {
		return "", fmt.Errorf("unable to login to AWS auth method: %w", err)
	}
	if authInfo == nil {
		return "", fmt.Errorf("no auth info was returned after login")
	}

	// get secret from the default mount path for KV v2 in dev mode, "secret"
	secret, err := client.KVv2("secret").Get(context.Background(), "creds")
	if err != nil {
		return "", fmt.Errorf("unable to read secret: %w", err)
	}

	// data map can contain more than one key-value pair,
	// in this case we're just grabbing one of them
	value, ok := secret.Data["password"].(string)
	if !ok {
		return "", fmt.Errorf("value type assertion failed: %T %#v", secret.Data["password"], secret.Data["password"])
	}

	return value, nil
}
```

</CodeBlockConfig>

<CodeBlockConfig>

```cs
using System;
using System.Text;
using Amazon.Runtime;
using Amazon.Runtime.Internal;
using Amazon.Runtime.Internal.Auth;
using Amazon.Runtime.Internal.Util;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;
using Amazon.SecurityToken.Model.Internal.MarshallTransformations;
using Newtonsoft.Json;
using VaultSharp;
using VaultSharp.V1.AuthMethods;
using VaultSharp.V1.AuthMethods.AWS;
using VaultSharp.V1.Commons;
using VaultSharp.V1.SecretsEngines.AWS;

namespace Examples
{
    public class AwsAuthExample
    {
        /// <summary>
        /// Fetches a key-value secret (kv-v2) after authenticating to Vault via AWS IAM,
        /// one of two auth methods used to authenticate with AWS (the other is EC2 auth).
        /// </summary>
        public string GetSecretAWSAuthIAM()
        {
            var vaultAddr = Environment.GetEnvironmentVariable("VAULT_ADDR");
            if (String.IsNullOrEmpty(vaultAddr))
            {
                throw new System.ArgumentNullException("Vault Address");
            }

            var roleName = Environment.GetEnvironmentVariable("VAULT_ROLE");
            if (String.IsNullOrEmpty(roleName))
            {
                throw new System.ArgumentNullException("Vault Role Name");
            }

            var amazonSecurityTokenServiceConfig = new AmazonSecurityTokenServiceConfig();

            // Initialize BasicAWS Credentials w/ an accessKey and secretKey
            Amazon.Runtime.AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey: Environment.GetEnvironmentVariable("AWS_ACCESS_KEY_ID"),
                                                                                   secretKey: Environment.GetEnvironmentVariable("AWS_SECRET_ACCESS_KEY"));

            // Construct the IAM Request and add necessary headers
            var iamRequest = GetCallerIdentityRequestMarshaller.Instance.Marshall(new GetCallerIdentityRequest());

            iamRequest.Endpoint = new Uri(amazonSecurityTokenServiceConfig.DetermineServiceURL());
            iamRequest.ResourcePath = "/";

            iamRequest.Headers.Add("User-Agent", "some-agent");
            iamRequest.Headers.Add("X-Amz-Security-Token", awsCredentials.GetCredentials().Token);
            iamRequest.Headers.Add("Content-Type", "application/x-www-form-urlencoded; charset=utf-8");

            new AWS4Signer().Sign(iamRequest, amazonSecurityTokenServiceConfig, new RequestMetrics(), awsCredentials.GetCredentials().AccessKey, awsCredentials.GetCredentials().SecretKey);

            var iamSTSRequestHeaders = iamRequest.Headers;

            // Convert headers to Base64 encoded version
            var base64EncodedIamRequestHeaders = Convert.ToBase64String(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(iamSTSRequestHeaders)));

            IAuthMethodInfo authMethod = new IAMAWSAuthMethodInfo(roleName: roleName, requestHeaders: base64EncodedIamRequestHeaders);
            var vaultClientSettings = new VaultClientSettings(vaultAddr, authMethod);

            IVaultClient vaultClient = new VaultClient(vaultClientSettings);

            // We can retrieve the secret from the VaultClient object
            Secret<SecretData> kv2Secret = null;
            kv2Secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(path: "creds").Result;

            var password = kv2Secret.Data.Data["password"];

            return password.ToString();
        }
    }
}
```

</CodeBlockConfig>

</CodeTabs>
---
layout: docs
page_title: GitHub - Auth Methods
description: The GitHub auth method allows authentication with Vault using GitHub.
---

# GitHub auth method

The `github` auth method can be used to authenticate with Vault using a GitHub
personal access token. This method of authentication is most useful for humans:
operators or developers using Vault directly via the CLI.

~> **IMPORTANT NOTE:** Vault does not support an OAuth workflow to generate
GitHub tokens, so does not act as a GitHub application. As a result, this method
uses personal access tokens. If the risks below are unacceptable to you, consider
using a different authentication method.

~> Any valid GitHub access token with the `read:org` scope for any user belonging
to the Vault-configured organization can be used for authentication. If such a
token is stolen from a third party service, and the attacker is able to make
network calls to Vault, they will be able to log in as the user that generated
the access token.

~> If the GitHub team is part of an organization with SSO enabled, the user will
need to authorize the personal access token. Failing to do so for SSO users will
result in the personal access token not providing identity information. The token
issued by the auth method will only be assigned the default policy.

## Authentication

### Via the CLI

The default path is `/github`. If this auth method was enabled at a different
path, specify `-path=/my-path` in the CLI.

```shell-session
$ vault login -method=github token="MY_TOKEN"
```

### Via the API

The default endpoint is `auth/github/login`. If this auth method was enabled
at a different path, use that value instead of `github`.

```shell-session
$ curl \
    --request POST \
    --data '{"token": "MY_TOKEN"}' \
    http://127.0.0.1:8200/v1/auth/github/login
```

The response will contain a token at `auth.client_token`:

```json
{
  "auth": {
    "renewable": true,
    "lease_duration": 2764800,
    "metadata": {
      "username": "my-user",
      "org": "my-org"
    },
    "policies": ["default", "dev-policy"],
    "accessor": "f93c4b2d-18b6-2b50-7a32-0fecf88237b8",
    "client_token": "1977fceb-3bfa-6c71-4d1f-b64af98ac018"
  }
}
```

## Configuration

Auth methods must be configured in advance before users or machines can
authenticate. These steps are usually completed by an operator or configuration
management tool.

1. Enable the GitHub auth method:

   ```text
   $ vault auth enable github
   ```

1. Use the `/config` endpoint to configure Vault to talk to GitHub.

   ```text
   $ vault write auth/github/config organization=hashicorp
   ```

   For the complete list of configuration options, please see the API
   documentation.

1. Map the users/teams of that GitHub organization to policies in Vault. Team
   names must be "slugified":

   ```text
   $ vault write auth/github/map/teams/dev value=dev-policy
   ```

   In this example, when members of the team "dev" in the organization
   "hashicorp" authenticate to Vault using a GitHub personal access token, they
   will be given a token with the "dev-policy" policy attached.

   ***

   You can also create mappings for a specific user via the `map/users/<user>`
   endpoint:

   ```text
   $ vault write auth/github/map/users/sethvargo value=sethvargo-policy
   ```

   In this example, a user with the GitHub username `sethvargo` will be
   assigned the `sethvargo-policy` policy **in addition to** any team policies.

## API

The GitHub auth method has a full HTTP API. Please see the
[GitHub Auth API](/vault/api-docs/auth/github) for more
details.
{"questions":"vault page title Kerberos Auth Methods layout docs include x509 sha1 deprecation mdx Kerberos auth method The Kerberos auth method allows automated authentication of Kerberos entities","answers":"---\nlayout: docs\npage_title: Kerberos - Auth Methods\ndescription: The Kerberos auth method allows automated authentication of Kerberos entities.\n---\n\n# Kerberos auth method\n\n@include 'x509-sha1-deprecation.mdx'\n\nThe `kerberos` auth method provides an automated mechanism to retrieve\na Vault token for Kerberos entities.\n\n[Kerberos](https:\/\/web.mit.edu\/kerberos\/) is a network authentication\nprotocol invented by MIT in the 1980s. Its name is inspired by Cerberus,\nthe three-headed hound of Hades from Greek mythology. The three heads\nrefer to Kerberos' three entities - an authentication server, a ticket\ngranting server, and a principals database. Kerberos underlies\nauthentication in Active Directory, and its purpose is to _distribute_\na network's authentication workload.\n\nVault's Kerberos auth method was originally written by the folks at\n[Winton](https:\/\/github.com\/wintoncode), to whom we owe a special thanks\nfor both originally building the plugin, and for collaborating to bring\nit into HashiCorp's maintenance.\n\n## Prerequisites\n\nKerberos is a very hands-on auth method. Other auth methods like\n[LDAP](\/vault\/docs\/auth\/ldap) and\n[Azure](\/vault\/docs\/auth\/azure) only require\na cursory amount of knowledge for configuration and use.\nKerberos, on the other hand, is best used by people already familiar\nwith it. We recommend that you use simpler authentication methods if\nyour use case is achievable through them. 
If not, we recommend that\nbefore approaching Kerberos, you become familiar with its fundamentals.\n\n- [MicroNugget: How Kerberos Works in Windows Active Directory](https:\/\/www.youtube.com\/watch?v=kp5d8Yv3-0c)\n- [MIT's Kerberos Documentation](https:\/\/web.mit.edu\/kerberos\/)\n- [Kerberos: The Definitive Guide](https:\/\/www.amazon.com\/Kerberos-Definitive-Guide-ebook-dp-B004P1J81C\/dp\/B004P1J81C\/ref=mt_kindle?_encoding=UTF8&me=&qid=1573685442)\n\nRegardless of how you gain your knowledge, before using this auth method,\nensure you are comfortable with Kerberos' high-level architecture, and\nensure you've gone through the exercise of:\n\n- Creating a valid `krb5.conf` file\n- Creating a valid `keytab` file\n- Authenticating to your domain server with your `keytab` file using `kinit`\n\nWith that knowledge in hand, and with an environment that's already tested\nand confirmed working, you will be ready to use Kerberos with Vault.\n\n## Configuration\n\n- Enable Kerberos authentication in Vault:\n\n```shell-session\n$ vault auth enable \\\n    -passthrough-request-headers=Authorization \\\n    -allowed-response-headers=www-authenticate \\\n    kerberos\n```\n\n- Create a `keytab` for the Kerberos plugin (this keytab is used by the Vault server itself, another keytab should be generated for login purposes):\n\n```shell-session\n$ ktutil\nktutil:  addent -password -p your_service_account@REALM.COM -e aes256-cts -k 1\nPassword for your_service_account@REALM.COM:\nktutil:  list -e\nslot KVNO Principal\n---- ---- ---------------------------------------------------------------------\n  1    1            your_service_account@REALM.COM (aes256-cts-hmac-sha1-96)\nktutil:  wkt vault.keytab\n```\n\nThe KVNO (`-k 1`) should match the KVNO of the service account. 
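The service account's current key version can be cross-checked from a client with the MIT Kerberos `kvno` utility. A sketch, assuming the MIT client tools are installed and you already hold a valid TGT for the realm:

```shell-session
$ kvno your_service_account@REALM.COM
your_service_account@REALM.COM: kvno = 1
```

If the number reported here differs from the `-k` value given to `addent`, regenerate the keytab with the matching KVNO.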
An error will show in the Vault logs if this is incorrect.\n\nDifferent encryption types can also be added to the `keytab`, for example `-e rc4-hmac` with additional `addent` commands.\n\nThen base64 encode it:\n\n```shell-session\n$ base64 vault.keytab > vault.keytab.base64\n```\n\n- Configure the Kerberos auth method with the `keytab` and\n  entry name that will be used to verify inbound login\n  requests:\n\n```shell-session\n$ vault write auth\/kerberos\/config \\\n    keytab=@vault.keytab.base64 \\\n    service_account=\"vault_svc\"\n```\n\n- Configure the Kerberos auth method to communicate with\n  LDAP using the service account configured above. This is\n  a sample LDAP configuration. Yours will vary. Ensure you've\n  first tested your configuration from the Vault server using\n  a tool like `ldapsearch`.\n\n```shell-session\n$ vault write auth\/kerberos\/config\/ldap \\\n    binddn=vault_svc@MATRIX.LAN \\\n    bindpass=$VAULT_SVC_PASSWORD \\\n    groupattr=sAMAccountName \\\n    groupdn=\"DC=MATRIX,DC=LAN\" \\\n    groupfilter=\"(&(objectClass=group)(member:1.2.840.113556.1.4.1941:=))\" \\\n    userdn=\"CN=Users,DC=MATRIX,DC=LAN\" \\\n    userattr=sAMAccountName \\\n    upndomain=MATRIX.LAN \\\n    url=ldaps:\/\/somewhere.foo\n```\n\nThe LDAP above relies upon the same code as the LDAP auth method.\nSee [its documentation](\/vault\/docs\/auth\/ldap)\nfor further discussion of available parameters.\n\n- Configure the Vault policies that should be granted to those\n  who successfully authenticate based on their LDAP group membership.\n  Since this is identical to the LDAP auth method, see\n  [Group Membership Resolution](\/vault\/docs\/auth\/ldap#group-membership-resolution)\n  and [LDAP Group -> Policy Mapping](\/vault\/docs\/auth\/ldap#ldap-group-policy-mapping)\n  for further discussion.\n\n```shell-session\n$ vault write auth\/kerberos\/groups\/engineering-team \\\n    policies=engineers\n```\n\nThe above group grants the \"engineers\" policy to those who 
authenticate\nvia Kerberos and are found to be members of the \"engineering-team\" LDAP\ngroup.\n\n## Authentication\n\nFrom a client machine with a valid `krb5.conf` and `keytab`, perform a command\nlike the following:\n\n```shell-session\n$ vault login -method=kerberos \\\n    username=grace \\\n    service=HTTP\/my-service \\\n    realm=MATRIX.LAN \\\n    keytab_path=\/etc\/krb5\/krb5.keytab  \\\n    krb5conf_path=\/etc\/krb5.conf \\\n    disable_fast_negotiation=false\n```\n\n- `krb5conf_path` is the path to a valid `krb5.conf` file describing how to\n  communicate with the Kerberos environment.\n- `keytab_path` is the path to the `keytab` in which the entry lives for the\n  entity authenticating to Vault. Keytab files should be protected from other\n  users on a shared server using appropriate file permissions.\n- `username` is the username for the entry _within_ the `keytab` to use for\n  logging into Kerberos. This username must match a service account in LDAP.\n- `service` is the service principal name to use in obtaining a service ticket for\n  gaining a SPNEGO token. This service must exist in LDAP.\n- `realm` is the name of the Kerberos realm. This realm must match the UPNDomain\n  configured on the LDAP connection. This check is case-sensitive.\n- `disable_fast_negotiation` is for disabling the Kerberos auth method's default\n  of using FAST negotiation. FAST is a pre-authentication framework for Kerberos.\n  It includes a mechanism for tunneling pre-authentication exchanges using armoured\n  KDC messages. FAST provides increased resistance to passive password guessing attacks.\n  Some common Kerberos implementations do not support FAST negotiation.\n- `remove_instance_name` removes any instance names from a Kerberos service \n  principal name when parsing the keytab file. 
For example, when this is set to true, \n  if a keytab has the service principal name `foo\/localhost@example.com`, the CLI \n  will strip the service principal name to just be `foo@example.com`.\n\n## Troubleshooting\n\n### Identify the malfunctioning piece\n\nOnce the malfunctioning piece of the journey is identified, you can focus\nyour debugging efforts in the most useful direction.\n\n1. Use `ldapsearch` while logged into your machine hosting Vault to ensure\n   your LDAP configuration is functional.\n2. Authenticate to your domain server using `kinit`, your `keytab`, and your\n   `krb5.conf`. Do this with both Vault's `keytab`, and any client `keytab` being\n   used for logging in. This ensures your Kerberos network is working.\n3. While logged into your client machine, verify you can reach Vault\n   through the following command: `$ curl $VAULT_ADDR\/v1\/sys\/health`.\n\n### Build clear steps to reproduce the problem\n\nIf possible, make it easy for someone outside of your company to reproduce the problem. 
For instance, if you expect that you should\nbe able to login using a command like:\n\n```shell-session\n$ vault login -method=kerberos \\\n    username=my-name \\\n    service=HTTP\/my-service \\\n    realm=EXAMPLE.COM \\\n    keytab_path=\/etc\/krb5\/krb5.keytab  \\\n    krb5conf_path=\/etc\/krb5.conf\n```\n\nThen make sure you're ready to share the error output of that command, the\ncontents of the `krb5.conf` file, and [the entries listed](https:\/\/docs.oracle.com\/cd\/E19683-01\/806-4078\/6jd6cjs1q\/index.html)\nin the `keytab` file.\n\nAfter you've stripped the issue down to its simplest form, if you still\nencounter difficulty resolving it, it will be much easier to gain assistance\nby posting your reproduction to the [Vault Forum](https:\/\/discuss.hashicorp.com\/c\/vault)\nor by providing it to [HashiCorp Support](https:\/\/www.hashicorp.com\/support)\n(if applicable.)\n\n### Additional troubleshooting resources\n\n- [Troubleshooting Vault](\/vault\/tutorials\/monitoring\/troubleshooting-vault)\n- [The plugin's code](https:\/\/github.com\/hashicorp\/vault-plugin-auth-kerberos)\n\nThe Vault Kerberos library has a working integration test environment that\ncan be referenced as an example of a full Kerberos and LDAP environment.\nIt runs through Docker and can be started through either one of the following\ncommands:\n\n```shell-session\n$ make integration\n$ make dev-env\n```\n\nThese commands run variations of [a script](https:\/\/github.com\/hashicorp\/vault-plugin-auth-kerberos\/blob\/master\/scripts\/integration_env.sh)\nthat spins up a full environment, adds users, and executes a login from a\nclient.\n\n## API\n\nThe Kerberos auth method has a full HTTP API. 
Please see the\n[Kerberos auth method API](\/vault\/api-docs\/auth\/kerberos) for more\ndetails.","site":"vault"}
{"questions":"vault Kubernetes auth method The Kubernetes auth method allows automated authentication of Kubernetes layout docs Service Accounts page title Kubernetes Auth Methods","answers":"---\nlayout: docs\npage_title: Kubernetes - Auth Methods\ndescription: |-\n  The Kubernetes auth method allows automated authentication of Kubernetes\n  Service Accounts.\n---\n\n# Kubernetes auth method\n\n@include 'x509-sha1-deprecation.mdx'\n\nThe `kubernetes` auth method can be used to authenticate with Vault using a\nKubernetes Service Account Token. This method of authentication makes it easy to\nintroduce a Vault token into a Kubernetes Pod.\n\nYou can also use a Kubernetes Service Account Token to [log in via JWT auth][k8s-jwt-auth].\nSee the section on [How to work with short-lived Kubernetes tokens][short-lived-tokens]\nfor a summary of why you might want to use JWT auth instead and how it compares to\nKubernetes auth.\n\n-> **Note:** If you are upgrading to Kubernetes v1.21+, ensure the config option\n`disable_iss_validation` is set to true. Assuming the default mount path, you\ncan check with `vault read -field disable_iss_validation auth\/kubernetes\/config`.\nSee [Kubernetes 1.21](#kubernetes-1-21) below for more details.\n\n## Authentication\n\n### Via the CLI\n\nThe default path is `\/kubernetes`. If this auth method was enabled at a\ndifferent path, specify `-path=\/my-path` in the CLI.\n\n```shell-session\n$ vault write auth\/kubernetes\/login role=demo jwt=...\n```\n\n### Via the API\n\nThe default endpoint is `auth\/kubernetes\/login`. 
If this auth method was enabled\nat a different path, use that value instead of `kubernetes`.\n\n```shell-session\n$ curl \\\n    --request POST \\\n    --data '{\"jwt\": \"<your service account jwt>\", \"role\": \"demo\"}' \\\n    http:\/\/127.0.0.1:8200\/v1\/auth\/kubernetes\/login\n```\n\nThe response will contain a token at `auth.client_token`:\n\n```json\n{\n  \"auth\": {\n    \"client_token\": \"38fe9691-e623-7238-f618-c94d4e7bc674\",\n    \"accessor\": \"78e87a38-84ed-2692-538f-ca8b9f400ab3\",\n    \"policies\": [\"default\"],\n    \"metadata\": {\n      \"role\": \"demo\",\n      \"service_account_name\": \"myapp\",\n      \"service_account_namespace\": \"default\",\n      \"service_account_secret_name\": \"myapp-token-pd21c\",\n      \"service_account_uid\": \"aa9aa8ff-98d0-11e7-9bb7-0800276d99bf\"\n    },\n    \"lease_duration\": 2764800,\n    \"renewable\": true\n  }\n}\n```\n\n## Configuration\n\nAuth methods must be configured in advance before users or machines can\nauthenticate. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the Kubernetes auth method:\n\n  ```shell-session\n  $ vault auth enable kubernetes\n  ```\n\n1. Use the `\/config` endpoint to configure Vault to talk to Kubernetes. Use\n  `kubectl cluster-info` to validate the Kubernetes host address and TCP port.\n  For the list of available configuration options, please see the\n  [API documentation](\/vault\/api-docs\/auth\/kubernetes).\n\n  ```shell-session\n  $ vault write auth\/kubernetes\/config \\\n      token_reviewer_jwt=\"<your reviewer service account JWT>\" \\\n      kubernetes_host=https:\/\/192.168.99.100:<your TCP port or blank for 443> \\\n      kubernetes_ca_cert=@ca.crt\n  ```\n\n  !> **Note:** The pattern Vault uses to authenticate Pods depends on sharing\n  the JWT token over the network. 
Given the [security model of\n  Vault](\/vault\/docs\/internals\/security), this is allowable because Vault is\n  part of the trusted compute base. In general, Kubernetes applications should\n  **not** share this JWT with other applications, as it allows API calls to be\n  made on behalf of the Pod and can result in unintended access being granted\n  to 3rd parties.\n\n1. Create a named role:\n\n  ```shell-session\n  $ vault write auth\/kubernetes\/role\/demo \\\n      bound_service_account_names=myapp \\\n      bound_service_account_namespaces=default \\\n      policies=default \\\n      ttl=1h\n  ```\n\n  This role authorizes the \"myapp\" service account in the default\n  namespace and it gives it the default policy.\n\n  For the complete list of configuration options, please see the [API\n  documentation](\/vault\/api-docs\/auth\/kubernetes).\n\n## Kubernetes 1.21\n\nStarting in version [1.21][k8s-1.21-changelog], the Kubernetes\n`BoundServiceAccountTokenVolume` feature defaults to enabled. This changes the\nJWT token mounted into containers by default in two ways that are important for\nKubernetes auth:\n\n* It has an expiry time and is bound to the lifetime of the pod and service account.\n* The value of the JWT's `\"iss\"` claim depends on the cluster's configuration.\n\nThe changes to token lifetime are important when configuring the\n[`token_reviewer_jwt`](\/vault\/api-docs\/auth\/kubernetes#token_reviewer_jwt) option.\nIf a short-lived token is used,\nKubernetes will revoke it as soon as the pod or service account are deleted, or\nif the expiry time passes, and Vault will no longer be able to use the\n`TokenReview` API. See [How to work with short-lived Kubernetes tokens][short-lived-tokens]\nbelow for details on handling this change.\n\nIn response to the issuer changes, Kubernetes auth has been updated in Vault\n1.9.0 to not validate the issuer by default. 
The Kubernetes API does the same\nvalidation when reviewing tokens, so enabling issuer validation on the Vault\nside is duplicated work. Without disabling Vault's issuer validation, it is not\npossible for a single Kubernetes auth configuration to work for default mounted\npod tokens with both Kubernetes 1.20 and 1.21. Note that auth mounts created\nbefore Vault 1.9 will maintain the old default, and you will need to explicitly\nset `disable_iss_validation=true` before upgrading Kubernetes to 1.21. See\n[Discovering the service account `issuer`](#discovering-the-service-account-issuer)\nbelow for guidance if you wish to enable issuer validation in Vault.\n\n[k8s-1.21-changelog]: https:\/\/github.com\/kubernetes\/kubernetes\/blob\/master\/CHANGELOG\/CHANGELOG-1.21.md#api-change-2\n[short-lived-tokens]: #how-to-work-with-short-lived-kubernetes-tokens\n\n### How to work with short-lived Kubernetes tokens\n\nThere are a few different ways to configure auth for Kubernetes pods when\ndefault mounted pod tokens are short-lived, each with their own tradeoffs. 
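You can check whether the tokens mounted into your pods are actually short-lived by decoding the token's `exp` claim. A minimal sketch; the `SAMPLE_JWT` built below is a hypothetical stand-in for the token projected at `/var/run/secrets/kubernetes.io/serviceaccount/token` (real tokens are base64url-encoded without padding, so the payload may need `tr '_-' '/+'` and `=` padding before `base64 -d` accepts it):

```shell
# Build a hypothetical service account JWT (header.payload.signature)
# whose payload carries an "exp" (expiry) claim.
PAYLOAD_JSON='{"iss":"kubernetes/serviceaccount","exp":1700003600}'
SAMPLE_JWT="eyJhbGciOiJSUzI1NiJ9.$(printf '%s' "$PAYLOAD_JSON" | base64 | tr -d '\n').sig"

# A JWT is three dot-separated segments; the second is the claims payload.
# Decode it and inspect "exp" to see when the token expires.
printf '%s' "$SAMPLE_JWT" | cut -d . -f2 | base64 -d
```

An `exp` roughly an hour out (rather than a year) suggests the short-lived `BoundServiceAccountTokenVolume` behavior is active on your cluster.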
This\ntable summarizes the options, each of which is explained in more detail below.\n\n| Option                               | All tokens are short-lived | Can revoke tokens early | Other considerations |\n| ------------------------------------ | -------------------------- | ----------------------- | -------------------- |\n| Use local token as reviewer JWT      | Yes                        | Yes                     | Requires Vault (1.9.3+) to be deployed on the Kubernetes cluster |\n| Use client JWT as reviewer JWT       | Yes                        | Yes                     | Operational overhead |\n| Use long-lived token as reviewer JWT | No                         | Yes                     |                      |\n| Use JWT auth instead                 | Yes                        | No                      |                      |\n\n-> **Note:** By default, Kubernetes currently extends the lifetime of admission\ninjected service account tokens to a year to help smooth the transition to\nshort-lived tokens. If you would like to disable this, set\n[--service-account-extend-token-expiration=false][k8s-extended-tokens] for\n`kube-apiserver` or specify your own `serviceAccountToken` volume mount. See\n[here](\/vault\/docs\/auth\/jwt\/oidc-providers\/kubernetes#specify-ttl-and-audience) for an example.\n\n[k8s-extended-tokens]: https:\/\/kubernetes.io\/docs\/reference\/command-line-tools-reference\/kube-apiserver\/#options\n\n#### Use local service account token as the reviewer JWT\n\nWhen running Vault in a Kubernetes pod the recommended option is to use the pod's local\nservice account token. Vault will periodically re-read the file to support\nshort-lived tokens. 
To use the local token and CA certificate, omit\n`token_reviewer_jwt` and `kubernetes_ca_cert` when configuring the auth method.\nVault will attempt to load them from `token` and `ca.crt` respectively inside\nthe default mount folder `\/var\/run\/secrets\/kubernetes.io\/serviceaccount\/`.\n\n```shell-session\n$ vault write auth\/kubernetes\/config \\\n    kubernetes_host=https:\/\/$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT\n```\n\n!> **Note:** Requires Vault 1.9.3+. In earlier versions, the service account\ntoken and CA certificate are read once and stored in Vault storage.\nWhen the service account token expires or is revoked, Vault will no longer be\nable to use the `TokenReview` API and client authentication will fail.\n\n<Tip title=\"You can use the trust store for CA certificates\">\n  If you leave `kubernetes_ca_cert` unset and set `disable_local_ca_jwt` to\n  `true`, Vault uses the system's trust store for TLS certificate\n  verification.\n<\/Tip>\n\n#### Use the Vault client's JWT as the reviewer JWT\n\nWhen configuring Kubernetes auth, you can omit the `token_reviewer_jwt`, and Vault\nwill use the Vault client's JWT as its own auth token when communicating with\nthe Kubernetes `TokenReview` API. If Vault is running in Kubernetes, you also need\nto set `disable_local_ca_jwt=true`.\n\nThis means Vault does not store any JWTs and allows you to use short-lived tokens\neverywhere, but adds some operational overhead to maintain the cluster role\nbindings on the set of service accounts you want to be able to authenticate with\nVault. 
Each client of Vault would need the `system:auth-delegator` ClusterRole:\n\n```shell-session\n$ kubectl create clusterrolebinding vault-client-auth-delegator \\\n    --clusterrole=system:auth-delegator \\\n    --group=group1 \\\n    --serviceaccount=default:svcaccount1 \\\n    ...\n```\n\n#### Continue using long-lived tokens\n\nYou can create a long-lived secret using the instructions [here][k8s-create-secret]\nand use that as the `token_reviewer_jwt`. In this example, the `vault` service\naccount would need the `system:auth-delegator` ClusterRole:\n\n```shell-session\n$ kubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: vault-k8s-auth-secret\n  annotations:\n    kubernetes.io\/service-account.name: vault\ntype: kubernetes.io\/service-account-token\nEOF\n```\n\nUsing this maintains previous workflows but does not benefit from the improved\nsecurity posture of short-lived tokens.\n\n[k8s-create-secret]: https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/#manually-create-a-service-account-api-token\n\n#### Use JWT auth\n\nKubernetes auth is specialized to use Kubernetes' `TokenReview` API. However, the\nJWT tokens Kubernetes generates can also be verified using Kubernetes as an OIDC\nprovider. The JWT auth method documentation has [instructions][k8s-jwt-auth] for\nsetting up JWT auth with Kubernetes as the OIDC provider.\n\n[k8s-jwt-auth]: \/vault\/docs\/auth\/jwt\/oidc-providers\/kubernetes\n\nThis solution allows you to use short-lived tokens for all clients and removes\nthe need for a reviewer JWT. However, the client tokens cannot be revoked before\ntheir TTL expires, so it is recommended to keep the TTL short with that\nlimitation in mind.\n\n### Discovering the service account `issuer`\n\n-> **Note:** From Vault 1.9.0, `disable_iss_validation` and `issuer` are deprecated\nand the default for `disable_iss_validation` has changed to `true` for new\nKubernetes auth mounts. 
The following section only applies if you have set\n`disable_iss_validation=false` or created your mount before 1.9 with the default\nvalue, but `disable_iss_validation=true` is the new recommended value for all\nversions of Vault.\n\nKubernetes 1.21+ clusters may require setting the service account\n[`issuer`](\/vault\/api-docs\/auth\/kubernetes#issuer) to the same value as\n`kube-apiserver`'s `--service-account-issuer` flag. This is because the service\naccount JWTs for these clusters may have an issuer specific to the cluster\nitself, instead of the old default of `kubernetes\/serviceaccount`. If you are\nunable to check this value directly, you can run the following and look for the\n`\"iss\"` field to find the required value:\n\n```shell-session\n$ echo '{\"apiVersion\": \"authentication.k8s.io\/v1\", \"kind\": \"TokenRequest\"}' \\\n  | kubectl create -f- --raw \/api\/v1\/namespaces\/default\/serviceaccounts\/default\/token \\\n  | jq -r '.status.token' \\\n  | cut -d . -f2 \\\n  | base64 -d\n```\n\nMost clusters will also have that information available at the\n`.well-known\/openid-configuration` endpoint:\n\n```shell-session\n$ kubectl get --raw \/.well-known\/openid-configuration | jq -r .issuer\n```\n\nThis value is then used when configuring Kubernetes auth, e.g.:\n\n```shell-session\n$ vault write auth\/kubernetes\/config \\\n    kubernetes_host=\"https:\/\/$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT\" \\\n    issuer=\"\\\"test-aks-cluster-dns-d6cbb78e.hcp.uksouth.azmk8s.io\\\"\"\n```\n\n## Configuring kubernetes\n\nThis auth method accesses the [Kubernetes TokenReview API][k8s-tokenreview] to\nvalidate the provided JWT is still valid. Kubernetes should be running with\n`--service-account-lookup`. 
This has defaulted to true since Kubernetes 1.7.\nOtherwise, deleted tokens in Kubernetes will not be properly revoked and\nwill be able to authenticate to this auth method.\n\nService Accounts used in this auth method will need to have access to the\nTokenReview API. If Kubernetes is configured to use RBAC roles, the Service\nAccount should be granted permissions to access this API. The following\nexample ClusterRoleBinding could be used to grant these permissions:\n\n```yaml\napiVersion: rbac.authorization.k8s.io\/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: role-tokenreview-binding\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: system:auth-delegator\nsubjects:\n  - kind: ServiceAccount\n    name: vault-auth\n    namespace: default\n```\n\n## API\n\nThe Kubernetes Auth Plugin has a full HTTP API. Please see the\n[API docs](\/vault\/api-docs\/auth\/kubernetes) for more details.\n\n[k8s-tokenreview]: https:\/\/kubernetes.io\/docs\/reference\/kubernetes-api\/authentication-resources\/token-review-v1\/\n\n## Workflows\n\nRefer to the following workflow examples for Kubernetes auth method usage:\n\n### Working with templated policies\n\nSet `use_annotations_as_alias_metadata=true` in your Kubernetes auth\nconfiguration to use Kubernetes Service Account annotations for\n[Vault alias](\/vault\/docs\/concepts\/identity#entities-and-aliases) metadata.\n\nWhen `use_annotations_as_alias_metadata` is true, you can use the\n`identity.entity.aliases.<mount accessor>.metadata.<metadata key>` template\nparameter when you create [templated policies](\/vault\/docs\/concepts\/policies#templated-policies).\n\nTo use annotations as alias metadata, you must give Vault permission to read\nservice accounts from the Kubernetes API.\n\n#### Scenario Introduction\n\nAssume you have the following policy requirement:\n\nApplications can perform read operations on their allocated key\/value secret path:\n`(env-kv\/data\/<env>)`\n\n#### 
Annotate Kubernetes Service Accounts with their dedicated secret paths\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: app\n  namespace: demo\n  annotations:\n    vault.hashicorp.com\/alias-metadata-env: demo\/app\n```\n\nWhen the application, `app`, logs in with JWT for the service account, Vault\nrenders the alias metadata as `env : demo\/app`.\n\n#### Create a templated ACL policy\n\nThe `env-tmpl` policy lets applications read their secrets defined in the KV v2\nsecrets engine. Use the mount accessor value\n(`auth_kubernetes_bcecb1e1`) from the [`sys\/auth`](\/vault\/api-docs\/system\/auth) endpoint or the [`vault auth list`](\/vault\/docs\/commands\/auth\/list) command.\n\n```shell-session\n$ tee env-tmpl.hcl <<EOF\npath \"env-kv\/data\/{{identity.entity.aliases.auth_kubernetes_bcecb1e1.metadata.env}}\" {\n  capabilities = [ \"read\" ]\n}\nEOF\n$ vault policy write env-tmpl env-tmpl.hcl\n```\n\n#### Create a Kubernetes role with the templated ACL policy\n\nThe Kubernetes role lets users log in as the `env-reader` role to read from the\nsecret path described in the `env-tmpl` policy.\n\n```shell-session\n$ vault write auth\/kubernetes\/role\/env-reader \\\n    bound_service_account_names=app \\\n    bound_service_account_namespaces=demo \\\n    policies=default,env-tmpl \\\n    ttl=1h\n```\n\n## Code example\n\nThe following example demonstrates using the Kubernetes auth method to authenticate\nwith Vault.\n\n<CodeTabs>\n\n<CodeBlockConfig>\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\tvault \"github.com\/hashicorp\/vault\/api\"\n\tauth \"github.com\/hashicorp\/vault\/api\/auth\/kubernetes\"\n)\n\n\/\/ Fetches a key-value secret (kv-v2) after authenticating to Vault with a Kubernetes service account.\n\/\/ For a more in-depth setup explanation, please see the relevant readme in the hashicorp\/vault-examples repo.\nfunc getSecretWithKubernetesAuth() (string, error) {\n\t\/\/ If set, the VAULT_ADDR environment variable will be the address that\n\t\/\/ your pod uses to communicate with Vault.\n\tconfig 
:= vault.DefaultConfig() \/\/ modify for more granular configuration\n\n\tclient, err := vault.NewClient(config)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to initialize Vault client: %w\", err)\n\t}\n\n\t\/\/ The service-account token will be read from the path where the token's\n\t\/\/ Kubernetes Secret is mounted. By default, Kubernetes will mount it to\n\t\/\/ \/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token, but an administrator\n\t\/\/ may have configured it to be mounted elsewhere.\n\t\/\/ In that case, we'll use the option WithServiceAccountTokenPath to look\n\t\/\/ for the token there.\n\tk8sAuth, err := auth.NewKubernetesAuth(\n\t\t\"dev-role-k8s\",\n\t\tauth.WithServiceAccountTokenPath(\"path\/to\/service-account-token\"),\n\t)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to initialize Kubernetes auth method: %w\", err)\n\t}\n\n\tauthInfo, err := client.Auth().Login(context.TODO(), k8sAuth)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to log in with Kubernetes auth: %w\", err)\n\t}\n\tif authInfo == nil {\n\t\treturn \"\", fmt.Errorf(\"no auth info was returned after login\")\n\t}\n\n\t\/\/ get secret from Vault, from the default mount path for KV v2 in dev mode, \"secret\"\n\tsecret, err := client.KVv2(\"secret\").Get(context.Background(), \"creds\")\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to read secret: %w\", err)\n\t}\n\n\t\/\/ data map can contain more than one key-value pair,\n\t\/\/ in this case we're just grabbing one of them\n\tvalue, ok := secret.Data[\"password\"].(string)\n\tif !ok {\n\t\treturn \"\", fmt.Errorf(\"value type assertion failed: %T %#v\", secret.Data[\"password\"], secret.Data[\"password\"])\n\t}\n\n\treturn value, nil\n}\n```\n<\/CodeBlockConfig>\n\n<CodeBlockConfig>\n\n```cs\nusing System;\nusing System.IO;\nusing VaultSharp;\nusing VaultSharp.V1.AuthMethods;\nusing VaultSharp.V1.AuthMethods.Kubernetes;\nusing VaultSharp.V1.Commons;\n\nnamespace Examples\n{\n    
public class KubernetesAuthExample\n    {\n        const string DefaultTokenPath = \"path\/to\/service-account-token\";\n\n        \/\/ Fetches a key-value secret (kv-v2) after authenticating to Vault with a Kubernetes service account.\n        \/\/ For a more in-depth setup explanation, please see the relevant readme in the hashicorp\/vault-examples repo.\n        public string GetSecretWithK8s()\n        {\n            var vaultAddr = Environment.GetEnvironmentVariable(\"VAULT_ADDR\");\n            if(String.IsNullOrEmpty(vaultAddr))\n            {\n                throw new System.ArgumentNullException(\"Vault Address\");\n            }\n\n            var roleName = Environment.GetEnvironmentVariable(\"VAULT_ROLE\");\n            if(String.IsNullOrEmpty(roleName))\n            {\n                throw new System.ArgumentNullException(\"Vault Role Name\");\n            }\n\n            \/\/ Get the path to service account token or fall back on default path\n            string pathToToken = String.IsNullOrEmpty(Environment.GetEnvironmentVariable(\"SA_TOKEN_PATH\")) ? 
DefaultTokenPath : Environment.GetEnvironmentVariable(\"SA_TOKEN_PATH\");\n            string jwt = File.ReadAllText(pathToToken);\n\n            IAuthMethodInfo authMethod = new KubernetesAuthMethodInfo(roleName, jwt);\n            var vaultClientSettings = new VaultClientSettings(vaultAddr, authMethod);\n\n            IVaultClient vaultClient = new VaultClient(vaultClientSettings);\n\n            \/\/ We can retrieve the secret after creating our VaultClient object\n            Secret<SecretData> kv2Secret = null;\n            kv2Secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(path: \"\/creds\").Result;\n\n            var password = kv2Secret.Data.Data[\"password\"];\n\n            return password.ToString();\n        }\n    }\n}\n```\n<\/CodeBlockConfig>\n\n<\/CodeTabs>","site":"vault","answers_cleaned":"    layout  docs page title  Kubernetes   Auth Methods description       The Kubernetes auth method allows automated authentication of Kubernetes   Service Accounts         Kubernetes auth method   include  x509 sha1 deprecation mdx   The  kubernetes  auth method can be used to authenticate with Vault using a Kubernetes Service Account Token  This method of authentication makes it easy to introduce a Vault token into a Kubernetes Pod   You can also use a Kubernetes Service Account Token to  log in via JWT auth  k8s jwt auth   See the section on  How to work with short lived Kubernetes tokens  short lived tokens  for a summary of why you might want to use JWT auth instead and how it compares to Kubernetes auth        Note    If you are upgrading to Kubernetes v1 21   ensure the config option  disable iss validation  is set to true  Assuming the default mount path  you can check with  vault read  field disable iss validation auth kubernetes config   See  Kubernetes 1 21   kubernetes 1 21  below for more details      Authentication      Via the CLI  The default path is   kubernetes   If this auth method was enabled at a different path  specify   path  
my path  in the CLI      shell session   vault write auth kubernetes login role demo jwt              Via the API  The default endpoint is  auth kubernetes login   If this auth method was enabled at a different path  use that value instead of  kubernetes       shell session   curl         request POST         data    jwt     your service account jwt     role    demo          http   127 0 0 1 8200 v1 auth kubernetes login      The response will contain a token at  auth client token       json      auth          client token    38fe9691 e623 7238 f618 c94d4e7bc674        accessor    78e87a38 84ed 2692 538f ca8b9f400ab3        policies     default         metadata            role    demo          service account name    myapp          service account namespace    default          service account secret name    myapp token pd21c          service account uid    aa9aa8ff 98d0 11e7 9bb7 0800276d99bf              lease duration   2764800       renewable   true               Configuration  Auth methods must be configured in advance before users or machines can authenticate  These steps are usually completed by an operator or configuration management tool   1  Enable the Kubernetes auth method        shell session     vault auth enable kubernetes        1  Use the   config  endpoint to configure Vault to talk to Kubernetes  Use    kubectl cluster info  to validate the Kubernetes host address and TCP port    For the list of available configuration options  please see the    API documentation   vault api docs auth kubernetes         shell session     vault write auth kubernetes config         token reviewer jwt   your reviewer service account JWT           kubernetes host https   192 168 99 100  your TCP port or blank for 443          kubernetes ca cert  ca crt               Note    The pattern Vault uses to authenticate Pods depends on sharing   the JWT token over the network  Given the  security model of   Vault   vault docs internals security   this is allowable because 
Vault is   part of the trusted compute base  In general  Kubernetes applications should     not   share this JWT with other applications  as it allows API calls to be   made on behalf of the Pod and can result in unintended access being granted   to 3rd parties   1  Create a named role        shell session     vault write auth kubernetes role demo         bound service account names myapp         bound service account namespaces default         policies default         ttl 1h          This role authorizes the  myapp  service account in the default   namespace and it gives it the default policy     For the complete list of configuration options  please see the  API   documentation   vault api docs auth kubernetes       Kubernetes 1 21  Starting in version  1 21  k8s 1 21 changelog   the Kubernetes  BoundServiceAccountTokenVolume  feature defaults to enabled  This changes the JWT token mounted into containers by default in two ways that are important for Kubernetes auth     It has an expiry time and is bound to the lifetime of the pod and service account    The value of the JWT s   iss   claim depends on the cluster s configuration   The changes to token lifetime are important when configuring the   token reviewer jwt    vault api docs auth kubernetes token reviewer jwt  option  If a short lived token is used  Kubernetes will revoke it as soon as the pod or service account are deleted  or if the expiry time passes  and Vault will no longer be able to use the  TokenReview  API  See  How to work with short lived Kubernetes tokens  short lived tokens  below for details on handling this change   In response to the issuer changes  Kubernetes auth has been updated in Vault 1 9 0 to not validate the issuer by default  The Kubernetes API does the same validation when reviewing tokens  so enabling issuer validation on the Vault side is duplicated work  Without disabling Vault s issuer validation  it is not possible for a single Kubernetes auth configuration to work for 
default mounted pod tokens with both Kubernetes 1 20 and 1 21  Note that auth mounts created before Vault 1 9 will maintain the old default  and you will need to explicitly set  disable iss validation true  before upgrading Kubernetes to 1 21  See  Discovering the service account  issuer    discovering the service account issuer  below for guidance if you wish to enable issuer validation in Vault    k8s 1 21 changelog   https   github com kubernetes kubernetes blob master CHANGELOG CHANGELOG 1 21 md api change 2  short lived tokens    how to work with short lived kubernetes tokens      How to work with short lived kubernetes tokens  There are a few different ways to configure auth for Kubernetes pods when default mounted pod tokens are short lived  each with their own tradeoffs  This table summarizes the options  each of which is explained in more detail below     Option                                 All tokens are short lived   Can revoke tokens early   Other considerations                                                                                                                            Use local token as reviewer JWT        Yes                          Yes                       Requires Vault  1 9 3   to be deployed on the Kubernetes cluster     Use client JWT as reviewer JWT         Yes                          Yes                       Operational overhead     Use long lived token as reviewer JWT   No                           Yes                                                Use JWT auth instead                   Yes                          No                                                     Note    By default  Kubernetes currently extends the lifetime of admission injected service account tokens to a year to help smooth the transition to short lived tokens  If you would like to disable this  set    service account extend token expiration false  k8s extended tokens  for  kube apiserver  or specify your own  serviceAccountToken  volume mount  See 
 here   vault docs auth jwt oidc providers kubernetes specify ttl and audience  for an example    k8s extended tokens   https   kubernetes io docs reference command line tools reference kube apiserver  options       Use local service account token as the reviewer JWT  When running Vault in a Kubernetes pod the recommended option is to use the pod s local service account token  Vault will periodically re read the file to support short lived tokens  To use the local token and CA certificate  omit  token reviewer jwt  and  kubernetes ca cert  when configuring the auth method  Vault will attempt to load them from  token  and  ca crt  respectively inside the default mount folder   var run secrets kubernetes io serviceaccount        shell session   vault write auth kubernetes config       kubernetes host https    KUBERNETES SERVICE HOST  KUBERNETES SERVICE PORT           Note    Requires Vault 1 9 3   In earlier versions the service account token and CA certificate is read once and stored in Vault storage  When the service account token expires or is revoked  Vault will no longer be able to use the  TokenReview  API and client authentication will fail    Tip title  You can use the trust store for CA certificates     If you leave  kubernetes ca cert  unset and set  disable local ca jwt  to    true   Vault uses the system s trust store for TLS certificate   verification    Tip        Use the Vault client s JWT as the reviewer JWT  When configuring Kubernetes auth  you can omit the  token reviewer jwt   and Vault will use the Vault client s JWT as its own auth token when communicating with the Kubernetes  TokenReview  API  If Vault is running in Kubernetes  you also need to set  disable local ca jwt true    This means Vault does not store any JWTs and allows you to use short lived tokens everywhere but adds some operational overhead to maintain the cluster role bindings on the set of service accounts you want to be able to authenticate with Vault  Each client of Vault would 
need the  system auth delegator  ClusterRole      shell session   kubectl create clusterrolebinding vault client auth delegator         clusterrole system auth delegator         group group1         serviceaccount default svcaccount1                     Continue using long lived tokens  You can create a long lived secret using the instructions  here  k8s create secret  and use that as the  token reviewer jwt   In this example  the  vault  service account would need the  system auth delegator  ClusterRole      shell session   kubectl apply  f     EOF apiVersion  v1 kind  Secret metadata    name  vault k8s auth secret   annotations      kubernetes io service account name  vault type  kubernetes io service account token EOF      Using this maintains previous workflows but does not benefit from the improved security posture of short lived tokens    k8s create secret   https   kubernetes io docs tasks configure pod container configure service account  manually create a service account api token       Use JWT auth  Kubernetes auth is specialized to use Kubernetes   TokenReview  API  However  the JWT tokens Kubernetes generates can also be verified using Kubernetes as an OIDC provider  The JWT auth method documentation has  instructions  k8s jwt auth  for setting up JWT auth with Kubernetes as the OIDC provider    k8s jwt auth    vault docs auth jwt oidc providers kubernetes  This solution allows you to use short lived tokens for all clients and removes the need for a reviewer JWT  However  the client tokens cannot be revoked before their TTL expires  so it is recommended to keep the TTL short with that limitation in mind       Discovering the service account  issuer        Note    From Vault 1 9 0   disable iss validation  and  issuer  are deprecated and the default for  disable iss validation  has changed to  true  for new Kubernetes auth mounts  The following section only applies if you have set  disable iss validation false  or created your mount before 1 9 with the 
default value  but  disable iss validation true  is the new recommended value for all versions of Vault   Kubernetes 1 21  clusters may require setting the service account   issuer    vault api docs auth kubernetes issuer  to the same value as  kube apiserver  s    service account issuer  flag  This is because the service account JWTs for these clusters may have an issuer specific to the cluster itself  instead of the old default of  kubernetes serviceaccount   If you are unable to check this value directly  you can run the following and look for the   iss   field to find the required value      shell session   echo    apiVersion    authentication k8s io v1    kind    TokenRequest          kubectl create  f    raw  api v1 namespaces default serviceaccounts default token       jq  r   status token        cut  d    f2       base64  d      Most clusters will also have that information available at the   well known openid configuration  endpoint      shell session   kubectl get   raw   well known openid configuration   jq  r  issuer      This value is then used when configuring Kubernetes auth  e g       shell session   vault write auth kubernetes config       kubernetes host  https    KUBERNETES SERVICE HOST  KUBERNETES SERVICE PORT        issuer    test aks cluster dns d6cbb78e hcp uksouth azmk8s io            Configuring kubernetes  This auth method accesses the  Kubernetes TokenReview API  k8s tokenreview  to validate the provided JWT is still valid  Kubernetes should be running with    service account lookup   This is defaulted to true from Kubernetes 1 7  Otherwise deleted tokens in Kubernetes will not be properly revoked and will be able to authenticate to this auth method   Service Accounts used in this auth method will need to have access to the TokenReview API  If Kubernetes is configured to use RBAC roles  the Service Account should be granted permissions to access this API  The following example ClusterRoleBinding could be used to grant these permissions    
  yaml apiVersion  rbac authorization k8s io v1 kind  ClusterRoleBinding metadata    name  role tokenreview binding   namespace  default roleRef    apiGroup  rbac authorization k8s io   kind  ClusterRole   name  system auth delegator subjects      kind  ServiceAccount     name  vault auth     namespace  default         API  The Kubernetes Auth Plugin has a full HTTP API  Please see the  API docs   vault api docs auth kubernetes  for more details    k8s tokenreview   https   kubernetes io docs reference kubernetes api authentication resources token review v1      Workflows  Refer to the following workflow examples for Kubernetes auth method usage       Working with templated policies  Set  use annotations as alias metadata true  in your Kubernetes auth configuration to use Kubernetes Service Account annotations for  Vault alias   vault docs concepts identity entities and aliases  metadata   When  use annotations as alias metadata  is true  you can use the  identity entity aliases  mount accessor  metadata  metadata key   template parameter when you create  templated policies   vault docs concepts policies templated policies    To use annotations as alias metadata  you must give Vault permission to read service accounts from the Kubernetes API        Scenario Introduction  Assume you have the following policy requirement   Applications can perform read operations on their allocated key value secret path    env kv data  env          Annotate Kubernetes Service Accounts with their dedicated secret paths     yaml apiVersion  v1 kind  ServiceAccount metadata    name  app   namespace  demo   annotations      vault hashicorp com alias metadata env  demo app      When the application   app   logs in with JWT for the service account  Vault renders the alias metadata as  env   demo app         Create a templated ACL policy  The  env tmpl  policy lets applications read their secrets defined in KV v2 secret engine  Use the mount accessor value   auth kubernetes bcecb1e1   from 
the   sys auth    vault api docs system auth  endpoint or the   vault auth list    vault docs commands auth list  command      shell session   tee env tmpl hcl   EOF path  env kv data       capabilities      read      EOF   vault policy write env tmpl env tmpl hcl           Create a Kubernetes role with the templated ACL policy  The Kubernetes role lets users login as the  env reader  role to read from the secret path described in the  env tmpl  policy      shell session   vault write auth kubernetes role env reader       bound service account names app       bound service account namespaces demo       policies default env tmpl       ttl 1h         Code example  The following example demonstrates the Kubernetes auth method to authenticate with Vault    CodeTabs    CodeBlockConfig      go package main  import     fmt    os    vault  github com hashicorp vault api   auth  github com hashicorp vault api auth kubernetes        Fetches a key value secret  kv v2  after authenticating to Vault with a Kubernetes service account     For a more in depth setup explanation  please see the relevant readme in the hashicorp vault examples repo  func getSecretWithKubernetesAuth    string  error        If set  the VAULT ADDR environment variable will be the address that     your pod uses to communicate with Vault   config    vault DefaultConfig      modify for more granular configuration   client  err    vault NewClient config   if err    nil     return     fmt Errorf  unable to initialize Vault client   w   err          The service account token will be read from the path where the token s     Kubernetes Secret is mounted  By default  Kubernetes will mount it to      var run secrets kubernetes io serviceaccount token  but an administrator     may have configured it to be mounted elsewhere      In that case  we ll use the option WithServiceAccountTokenPath to look     for the token there   k8sAuth  err    auth NewKubernetesAuth     dev role k8s     auth WithServiceAccountTokenPath  
path to service account token        if err    nil     return     fmt Errorf  unable to initialize Kubernetes auth method   w   err       authInfo  err    client Auth   Login context TODO    k8sAuth   if err    nil     return     fmt Errorf  unable to log in with Kubernetes auth   w   err      if authInfo    nil     return     fmt Errorf  no auth info was returned after login           get secret from Vault  from the default mount path for KV v2 in dev mode   secret   secret  err    client KVv2  secret   Get context Background     creds    if err    nil     return     fmt Errorf  unable to read secret   w   err          data map can contain more than one key value pair      in this case we re just grabbing one of them  value  ok    secret Data  password    string   if  ok     return     fmt Errorf  value type assertion failed   T   v   secret Data  password    secret Data  password         return value  nil         CodeBlockConfig    CodeBlockConfig      cs using System  using System IO  using VaultSharp  using VaultSharp V1 AuthMethods  using VaultSharp V1 AuthMethods Kubernetes  using VaultSharp V1 Commons   namespace Examples       public class KubernetesAuthExample               const string DefaultTokenPath    path to service account token               Fetches a key value secret  kv v2  after authenticating to Vault with a Kubernetes service account             For a more in depth setup explanation  please see the relevant readme in the hashicorp vault examples repo          public string GetSecretWithK8s                         var vaultAddr   Environment GetEnvironmentVariable  VAULT ADDR                if String IsNullOrEmpty vaultAddr                                 throw new System ArgumentNullException  Vault Address                               var roleName   Environment GetEnvironmentVariable  VAULT ROLE                if String IsNullOrEmpty roleName                                 throw new System ArgumentNullException  Vault Role Name              
                    Get the path to service account token or fall back on default path             string pathToToken   String IsNullOrEmpty Environment GetEnvironmentVariable  SA TOKEN PATH      DefaultTokenPath   Environment GetEnvironmentVariable  SA TOKEN PATH                string jwt   File ReadAllText pathToToken                IAuthMethodInfo authMethod   new KubernetesAuthMethodInfo roleName  jwt               var vaultClientSettings   new VaultClientSettings vaultAddr  authMethod                IVaultClient vaultClient   new VaultClient vaultClientSettings                   We can retrieve the secret after creating our VaultClient object             Secret SecretData  kv2Secret   null              kv2Secret   vaultClient V1 Secrets KeyValue V2 ReadSecretAsync path    creds   Result               var password   kv2Secret Data Data  password                 return password ToString                            CodeBlockConfig     CodeTabs "}
{"questions":"vault client certificates The cert auth method allows users to authenticate with Vault using TLS page title TLS Certificates Auth Methods layout docs TLS certificates auth method","answers":"---\nlayout: docs\npage_title: TLS Certificates - Auth Methods\ndescription: >-\n  The \"cert\" auth method allows users to authenticate with Vault using TLS\n  client certificates.\n---\n\n# TLS certificates auth method\n\n@include 'x509-sha1-deprecation.mdx'\n\nThe `cert` auth method allows authentication using SSL\/TLS client certificates\nwhich are either signed by a CA or self-signed. SSL\/TLS client certificates\nare defined as having an `ExtKeyUsage` extension with the usage set to either\n`ClientAuth` or `Any`.\n\nThe trusted certificates and CAs are configured directly to the auth method\nusing the `certs\/` path. This method cannot read trusted certificates from an\nexternal source.\n\nCA certificates are associated with a role; role names and CRL names are normalized to\nlower-case.\n\nPlease note that to use this auth method, `tls_disable` and `tls_disable_client_certs` must be false in the Vault\nconfiguration. This is because the certificates are sent through TLS communication itself. \n\n## Revocation checking\n\nSince Vault 0.4, the method supports revocation checking.\n\nAn authorized user can submit PEM-formatted CRLs identified by a given name;\nthese can be updated or deleted at will. They may also set the URL of a\ntrusted CRL distribution point, and have Vault fetch the CRL as needed.\n\nWhen there are CRLs present, at the time of client authentication:\n\n- If the client presents any chain where no certificate in the chain matches a\n  revoked serial number, authentication is allowed\n\n- If there is no chain presented by the client without a revoked serial number,\n  authentication is denied\n\nThis method provides good security while also allowing for flexibility. 
For\ninstance, if an intermediate CA is going to be retired, a client can be\nconfigured with two certificate chains: one that contains the initial\nintermediate CA in the path, and the other that contains the replacement. When\nthe initial intermediate CA is revoked, the chain containing the replacement\nwill still allow the client to successfully authenticate.\n\n**N.B.**: Matching is performed by _serial number only_. For most CAs,\nincluding Vault's `pki` method, multiple CRLs can successfully be used as\nserial numbers are globally unique. However, since RFCs only specify that\nserial numbers must be unique per-CA, some CAs issue serial numbers in-order,\nwhich may cause clashes if attempting to use CRLs from two such CAs in the same\nmount of the method. The workaround here is to mount multiple copies of the\n`cert` method, configure each with one CA\/CRL, and have clients connect to the\nappropriate mount.\n\nIn addition, if a CRL distribution point is not set the method will not\nfetch the CRLs itself, the CRL's designated time to next update is not\nconsidered. If a CRL is no longer in use, it is up to the administrator to\nremove it from the method.\n\nIn addition to automatic or manual CRL management, OCSP may be enabled for\na configured certificate, in which case Vault will query the OCSP server either\nspecified in the presented certificate or configured in the auth method to\ncheck revocation.\n\n## Authentication\n\n### Via the CLI\n\nThe below authenticates against the `web` cert role by presenting a certificate\n(`cert.pem`) and key (`key.pem`) signed by the CA associated with the `web` cert\nrole. Note that the name `web` ties to the configuration example below writing\nto a path of `auth\/cert\/certs\/web`. 
If a certificate role name is not specified,\nthe auth method will try to authenticate against all trusted certificates.\n\n~> **NOTE** The `-ca-cert` value used here is for the Vault TLS Listener CA\ncertificate, not the CA that issued the client authentication certificate. This\ncan be omitted if the CA used to issue the Vault server certificate is trusted\nby the local system executing this command.\n\n```shell-session\n$ vault login \\\n    -method=cert \\\n    -ca-cert=vault-ca.pem \\\n    -client-cert=cert.pem \\\n    -client-key=key.pem \\\n    name=web\n```\n\n### Via the API\n\nThe endpoint for the login is `\/login`. The client simply connects with their\nTLS certificate and when the login endpoint is hit, the auth method will\ndetermine if there is a matching trusted certificate to authenticate the client.\nOptionally, you may specify a single certificate role to authenticate against.\n\n~> **NOTE** The `--cacert` value used here is for the Vault TLS Listener CA\ncertificate, not the CA that issued the client authentication certificate. This\ncan be omitted if the CA used to issue the Vault server certificate is trusted\nby the local system executing this command.\n\n```shell-session\n$ curl \\\n    --request POST \\\n    --cacert vault-ca.pem \\\n    --cert cert.pem \\\n    --key key.pem \\\n    --data '{\"name\": \"web\"}' \\\n    https:\/\/127.0.0.1:8200\/v1\/auth\/cert\/login\n```\n\n## Configuration\n\nAuth methods must be configured in advance before users or machines can\nauthenticate. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the certificate auth method:\n\n   ```text\n   $ vault auth enable cert\n   ```\n\n1. 
Configure it with trusted certificates that are allowed to authenticate:\n\n   ```text\n   $ vault write auth\/cert\/certs\/web \\\n       display_name=web \\\n       policies=web,prod \\\n       certificate=@web-cert.pem \\\n       ttl=3600\n   ```\n\n   This creates a new trusted certificate \"web\" with the same display name and the\n   \"web\" and \"prod\" policies. The certificate (public key) used to verify\n   clients is given by the \"web-cert.pem\" file. Lastly, an optional `ttl` value\n   can be provided in seconds to limit the lease duration.\n\n### Load Balancing \/ Proxying Consideration\n\nIf the Vault server is fronted by a reverse proxy or load balancer, TLS is\nterminated before it reaches Vault. In that case the proxy must provide the validated\nclient certificate via a header, as [configured in the Vault configuration's\nlistener stanza](\/vault\/docs\/configuration\/listener\/tcp#tcp-listener-parameters).\n\nConfigure the listener with the header name that your load balancer provides.\nIn this mode, the security of authentication depends on the load balancer performing\nfull TLS verification of the client, and on the connection between the load\nbalancer and Vault being secured, ideally with mutual TLS.\n\n## API\n\nThe TLS Certificate auth method has a full HTTP API. 
Please see the\n[TLS Certificate API](\/vault\/api-docs\/auth\/cert) for more details.","site":"vault","answers_cleaned":"    layout  docs page title  TLS Certificates   Auth Methods description       The  cert  auth method allows users to authenticate with Vault using TLS   client certificates         TLS certificates auth method   include  x509 sha1 deprecation mdx   The  cert  auth method allows authentication using SSL TLS client certificates which are either signed by a CA or self signed  SSL TLS client certificates are defined as having an  ExtKeyUsage  extension with the usage set to either  ClientAuth  or  Any    The trusted certificates and CAs are configured directly to the auth method using the  certs   path  This method cannot read trusted certificates from an external source   CA certificates are associated with a role  role names and CRL names are normalized to lower case   Please note that to use this auth method   tls disable  and  tls disable client certs  must be false in the Vault configuration  This is because the certificates are sent through TLS communication itself       Revocation checking  Since Vault 0 4  the method supports revocation checking   An authorized user can submit PEM formatted CRLs identified by a given name  these can be updated or deleted at will  They may also set the URL of a trusted CRL distribution point  and have Vault fetch the CRL as needed   When there are CRLs present  at the time of client authentication     If the client presents any chain where no certificate in the chain matches a   revoked serial number  authentication is allowed    If there is no chain presented by the client without a revoked serial number    authentication is denied  This method provides good security while also allowing for flexibility  For instance  if an intermediate CA is going to be retired  a client can be configured with two certificate chains  one that contains the initial intermediate CA in the path  and the other that contains the 
replacement  When the initial intermediate CA is revoked  the chain containing the replacement will still allow the client to successfully authenticate     N B     Matching is performed by  serial number only   For most CAs  including Vault s  pki  method  multiple CRLs can successfully be used as serial numbers are globally unique  However  since RFCs only specify that serial numbers must be unique per CA  some CAs issue serial numbers in order  which may cause clashes if attempting to use CRLs from two such CAs in the same mount of the method  The workaround here is to mount multiple copies of the  cert  method  configure each with one CA CRL  and have clients connect to the appropriate mount   In addition  if a CRL distribution point is not set the method will not fetch the CRLs itself  the CRL s designated time to next update is not considered  If a CRL is no longer in use  it is up to the administrator to remove it from the method   In addition to automatic or manual CRL management  OCSP may be enabled for a configured certificate  in which case Vault will query the OCSP server either specified in the presented certificate or configured in the auth method to check revocation      Authentication      Via the CLI  The below authenticates against the  web  cert role by presenting a certificate   cert pem   and key   key pem   signed by the CA associated with the  web  cert role  Note that the name  web  ties to the configuration example below writing to a path of  auth cert certs web   If a certificate role name is not specified  the auth method will try to authenticate against all trusted certificates        NOTE   The   ca cert  value used here is for the Vault TLS Listener CA certificate  not the CA that issued the client authentication certificate  This can be omitted if the CA used to issue the Vault server certificate is trusted by the local system executing this command      shell session   vault login        method cert        ca cert vault ca pem        
client cert cert pem        client key key pem       name web          Via the API  The endpoint for the login is   login   The client simply connects with their TLS certificate and when the login endpoint is hit  the auth method will determine if there is a matching trusted certificate to authenticate the client  Optionally  you may specify a single certificate role to authenticate against        NOTE   The    cacert  value used here is for the Vault TLS Listener CA certificate  not the CA that issued the client authentication certificate  This can be omitted if the CA used to issue the Vault server certificate is trusted by the local system executing this command      shell session   curl         request POST         cacert vault ca pem         cert cert pem         key key pem         data    name    web          https   127 0 0 1 8200 v1 auth cert login         Configuration  Auth methods must be configured in advance before users or machines can authenticate  These steps are usually completed by an operator or configuration management tool   1  Enable the certificate auth method         text      vault auth enable cert         1  Configure it with trusted certificates that are allowed to authenticate         text      vault write auth cert certs web          display name web          policies web prod          certificate  web cert pem          ttl 3600            This creates a new trusted certificate  web  with same display name and the     web  and  prod  policies  The certificate  public key  used to verify    clients is given by the  web cert pem  file  Lastly  an optional  ttl  value    can be provided in seconds to limit the lease duration       Load Balancing   Proxying Consideration  If the Vault server is fronted by a reverse proxy or load balancer  TLS is terminated before Vault   In that case the proxy must provide the validated client certificate via header  and  configured in the Vault configuration s listener stanza   vault docs configuration 
listener tcp tcp listener parameters    Configure the listener with the header name that your load balancer provides    In this mode  the security of authentication depends on the load balancer performing  full TLS verification to the client  and that the connection between the load balancer and Vault is secured  ideally with Mutual TLS      API  The TLS Certificate auth method has a full HTTP API  Please see the  TLS Certificate API   vault api docs auth cert  for more details "}
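The cert method above can be exercised end-to-end without a real CA. The sketch below, intended for local testing only, uses OpenSSL to mint a self-signed client certificate carrying the `ClientAuth` ExtKeyUsage the method requires, then registers it as a trusted certificate (file names mirror the configuration example; `-addext` needs OpenSSL 1.1.1 or newer):

```shell-session
$ # Self-signed client certificate with the ClientAuth extended key usage
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=web" \
    -addext "extendedKeyUsage=clientAuth" \
    -keyout web-key.pem -out web-cert.pem

$ # Confirm the extension is present before registering the certificate
$ openssl x509 -in web-cert.pem -noout -ext extendedKeyUsage

$ vault write auth/cert/certs/web certificate=@web-cert.pem policies=web
```

In production the certificate should of course come from a CA you actually trust; the self-signed variant is only useful for trying the method out locally.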
{"questions":"vault a Vault token for AliCloud entities Unlike most Vault auth methods this The AliCloud auth method allows automated authentication of AliCloud entities page title AliCloud Auth Methods The alicloud auth method provides an automated mechanism to retrieve layout docs AliCloud auth method","answers":"---\nlayout: docs\npage_title: AliCloud - Auth Methods\ndescription: The AliCloud auth method allows automated authentication of AliCloud entities.\n---\n\n# AliCloud auth method\n\nThe `alicloud` auth method provides an automated mechanism to retrieve\na Vault token for AliCloud entities. Unlike most Vault auth methods, this\nmethod does not require operators to first deploy or provision\nsecurity-sensitive credentials (tokens, username\/password, client certificates,\netc.). It treats AliCloud as a Trusted Third Party and uses a\nspecial AliCloud request signed with private credentials. A variety of credentials\ncan be used to construct the request, but AliCloud offers\n[instance metadata](https:\/\/www.alibabacloud.com\/help\/faq-detail\/49122.htm)\nthat's ideally suited for the purpose. By launching an instance with a role,\nthe role's STS credentials under instance metadata can be used to securely\nbuild the request.\n\n## Authentication workflow\n\nThe AliCloud STS API includes a method,\n[`sts:GetCallerIdentity`](https:\/\/www.alibabacloud.com\/help\/doc-detail\/43767.htm),\nwhich allows you to validate the identity of a client. The client signs\na `GetCallerIdentity` query using the [AliCloud signature\nalgorithm](https:\/\/www.alibabacloud.com\/help\/doc-detail\/67332.htm). It then\nsubmits two pieces of information to the Vault server to recreate a valid signed\nrequest: the request URL and the request headers. 
The Vault server then\nreconstructs the query, forwards it on to the AliCloud STS service, and validates\nthe result.\n\nImportantly, the credentials used to sign the GetCallerIdentity request can come\nfrom the instance metadata service of an ECS instance, which obviates the\nneed for an operator to manually provision some sort of identity material first.\nHowever, the credentials can, in principle, come from anywhere, not just from\nthe locations AliCloud has provided for you.\n\nEach signed AliCloud request includes the current timestamp and a nonce to mitigate\nthe risk of replay attacks.\n\nIt's also important to note that AliCloud does NOT include any sort\nof authorization around calls to `GetCallerIdentity`. For example, if you have\na RAM policy on your credential that requires all access to be MFA authenticated,\nnon-MFA authenticated credentials will still be able to authenticate to Vault\nusing this method. It does not appear possible to require a RAM principal to be\nMFA authenticated while authenticating to Vault.\n\n## Authorization workflow\n\nThe basic mechanism of operation is per-role.\n\nRoles are associated with a role ARN that has been pre-created in AliCloud.\nAliCloud's console displays each role's ARN. 
A role in Vault has a 1:1 relationship\nwith a role in AliCloud, and must bear the same name.\n\nWhen a client assumes that role and sends its `GetCallerIdentity` request to Vault,\nVault matches the ARN of its assumed role with that of a pre-created role in Vault.\nIt then checks what policies have been associated with the role, and grants a\ntoken accordingly.\n\n## Authentication\n\n### Via the CLI\n\n#### Enable AliCloud authentication in Vault\n\n```shell-session\n$ vault auth enable alicloud\n```\n\n#### Configure the policies on the role\n\n```shell-session\n$ vault write auth\/alicloud\/role\/dev-role arn='acs:ram::5138828231865461:role\/dev-role'\n```\n\n#### Perform the login operation\n\n```shell-session\n$ vault write auth\/alicloud\/login \\\n        role=dev-role \\\n        identity_request_url=$IDENTITY_REQUEST_URL_BASE_64 \\\n        identity_request_headers=$IDENTITY_REQUEST_HEADERS_BASE_64\n```\n\nFor the RAM auth method, generating the signed request is a non-standard\noperation. The Vault CLI supports generating this for you:\n\n```shell-session\n$ vault login -method=alicloud access_key=... secret_key=... security_token=... region=...\n```\n\nThis assumes you have the AliCloud credentials you would find on an ECS instance using the\nfollowing call:\n\n```\ncurl 'http:\/\/100.100.100.200\/latest\/meta-data\/ram\/security-credentials\/$ROLE_NAME'\n```\n\nPlease note the `$ROLE_NAME` above is case-sensitive and must be consistent with how it's reflected\non the instance.\n\nAn example of how to generate the required request values for the `login` method\ncan be found in the\n[Vault CLI source code](https:\/\/github.com\/hashicorp\/vault-plugin-auth-alicloud\/blob\/master\/tools\/tools.go).\n\n## API\n\nThe AliCloud auth method has a full HTTP API. 
Please see the\n[AliCloud Auth API](\/vault\/api-docs\/auth\/alicloud) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title  AliCloud   Auth Methods description  The AliCloud auth method allows automated authentication of AliCloud entities         AliCloud auth method  The  alicloud  auth method provides an automated mechanism to retrieve a Vault token for AliCloud entities  Unlike most Vault auth methods  this method does not require manual first deploying  or provisioning security sensitive credentials  tokens  username password  client certificates  etc   by operators  It treats AliCloud as a Trusted Third Party and uses a special AliCloud request signed with private credentials  A variety of credentials can be used to construct the request  but AliCloud offers  instance metadata  https   www alibabacloud com help faq detail 49122 htm  that s ideally suited for the purpose  By launching an instance with a role  the role s STS credentials under instance metadata can be used to securely build the request      Authentication workflow  The AliCloud STS API includes a method    sts GetCallerIdentity   https   www alibabacloud com help doc detail 43767 htm   which allows you to validate the identity of a client  The client signs a  GetCallerIdentity  query using the  AliCloud signature algorithm  https   www alibabacloud com help doc detail 67332 htm   It then submits 2 pieces of information to the Vault server to recreate a valid signed request  the request URL  and the request headers  The Vault server then reconstructs the query and forwards it on to the AliCloud STS service and validates the result back   Importantly  the credentials used to sign the GetCallerIdentity request can come from the ECS instance metadata service for an ECS instance  which obviates the need for an operator to manually provision some sort of identity material first  However  the credentials can  in principle  come from anywhere  not just from the locations 
AliCloud has provided for you   Each signed AliCloud request includes the current timestamp and a nonce to mitigate the risk of replay attacks   It s also important to note that AliCloud does NOT include any sort of authorization around calls to  GetCallerIdentity   For example  if you have a RAM policy on your credential that requires all access to be MFA authenticated  non MFA authenticated credentials will still be able to authenticate to Vault using this method  It does not appear possible to enforce a RAM principal to be MFA authenticated while authenticating to Vault      Authorization workflow  The basic mechanism of operation is per role   Roles are associated with a role ARN that has been pre created in AliCloud  AliCloud s console displays each role s ARN  A role in Vault has a 1 1 relationship with a role in AliCloud  and must bear the same name   When a client assumes that role and sends its  GetCallerIdentity  request to Vault  Vault matches the arn of its assumed role with that of a pre created role in Vault  It then checks what policies have been associated with the role  and grants a token accordingly      Authentication      Via the CLI       Enable AliCloud authentication in Vault      shell session   vault auth enable alicloud           Configure the policies on the role      shell session   vault write auth alicloud role dev role arn  acs ram  5138828231865461 role dev role            Perform the login operation     shell session   vault write auth alicloud login           role dev role           identity request url  IDENTITY REQUEST URL BASE 64           identity request headers  IDENTITY REQUEST HEADERS BASE 64      For the RAM auth method  generating the signed request is a non standard operation  The Vault CLI supports generating this for you      shell session   vault login  method alicloud access key     secret key     security token     region          This assumes you have the AliCloud credentials you would find on an ECS instance using 
the following call       curl  http   100 100 100 200 latest meta data ram security credentials  ROLE NAME       Please note the   ROLE NAME  above is case sensitive and must be consistent with how it s reflected on the instance   An example of how to generate the required request values for the  login  method can be found found in the  Vault CLI source code  https   github com hashicorp vault plugin auth alicloud blob master tools tools go       API  The AliCloud auth method has a full HTTP API  Please see the  AliCloud Auth API   vault api docs auth alicloud  for more details "}
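The CLI login shown above can also be performed against the HTTP API directly. A sketch, assuming a Vault server at `127.0.0.1:8200` and that the signed `GetCallerIdentity` URL and headers have already been produced (for instance by the tooling linked above); the values must be single-line base64, hence the `tr -d '\n'`:

```shell-session
$ # Base64-encode the signed request URL on a single line (no wrapping)
$ IDENTITY_REQUEST_URL_BASE_64=$(printf '%s' "$SIGNED_STS_URL" | base64 | tr -d '\n')

$ curl \
    --request POST \
    --data '{"role": "dev-role",
             "identity_request_url": "'"$IDENTITY_REQUEST_URL_BASE_64"'",
             "identity_request_headers": "'"$IDENTITY_REQUEST_HEADERS_BASE_64"'"}' \
    http://127.0.0.1:8200/v1/auth/alicloud/login
```

`$SIGNED_STS_URL` is a placeholder for a URL signed with the AliCloud signature algorithm; the headers variable is assumed to have been encoded the same way.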
{"questions":"vault Cloud Foundry CF auth method page title Cloud Foundry Auth Methods The cf auth method allows automated authentication of Cloud Foundry instances layout docs include x509 sha1 deprecation mdx","answers":"---\nlayout: docs\npage_title: Cloud Foundry - Auth Methods\ndescription: The cf auth method allows automated authentication of Cloud Foundry instances.\n---\n\n# Cloud Foundry (CF) auth method\n\n@include 'x509-sha1-deprecation.mdx'\n\nThe `cf` auth method provides an automated mechanism to retrieve a Vault token\nfor CF instances. It leverages CF's [App and Container Identity Assurance](https:\/\/content.pivotal.io\/blog\/new-in-pcf-2-1-app-container-identity-assurance-via-automatic-cert-rotation).\nAt a high level, this works as follows:\n\n1. You construct a request to Vault including your `CF_INSTANCE_CERT`, signed by your `CF_INSTANCE_KEY`.\n2. Vault validates that the signature is no more than 300 seconds old, or 60 seconds in the future.\n3. Vault validates that the cert was issued by the CA certificate you've pre-configured.\n4. Vault validates that the request was signed by the private key for the `CF_INSTANCE_CERT`.\n5. Vault validates that the `CF_INSTANCE_CERT` application ID, space ID, and org ID presently exist.\n6. If all checks pass, Vault issues an appropriately-scoped token.\n\n## Known risks\n\nThis authentication engine uses CF's instance identity service to authenticate users to Vault. Because\nCF makes its CA certificate and private key available to certain users at any time, it's possible for\nsomeone with access to them to self-issue identity certificates that meet the criteria for a Vault role,\nallowing them to gain unintended access to Vault.\n\nFor this reason, we recommend that if you enable this auth method, you carefully guard access to the\nprivate key for your instance identity CA certificate. 
In CredHub, it can be obtained through the\nfollowing call: `$ credhub get -n \/cf\/diego-instance-identity-root-ca`.\n\nTake extra steps to limit access to that path in CredHub, whether it be through use of CredHub's ACL\nsystem, or through carefully limiting the users who can access CredHub.\n\n## Usage\n\n### Preparing to configure the plugin\n\nTo configure this plugin, you'll need to gather the CA certificate that CF uses to issue each `CF_INSTANCE_CERT`,\nand you'll need to configure it to access the CF API.\n\nTo gain your instance identity CA certificate, in the [cf dev](https:\/\/github.com\/cloudfoundry-incubator\/cfdev)\nenvironment it can be found using:\n\n```shell-session\n$ bosh int --path \/diego_instance_identity_ca ~\/.cfdev\/state\/bosh\/creds.yml\n```\n\nIn environments containing Ops Manager, it can be found in CredHub. To gain access to CredHub, first install\n[the PCF command-line utility](https:\/\/docs.pivotal.io\/tiledev\/2-2\/pcf-command.html) and authenticate to it\nusing the `metadata` file it describes. These instructions also use [jq](https:\/\/stedolan.github.io\/jq\/) for\nease of drilling into the particular part of the response you'll need.\n\nOnce those steps are complete, get the credentials you'll use for CredHub:\n\n```shell-session\n$ pcf settings | jq '.products[0].director_credhub_client_credentials'\n```\n\nSSH into your Ops Manager VM:\n\n```shell-session\n$ ssh -i ops_mgr.pem ubuntu@$OPS_MGR_URL\n```\n\nPlease note that the above OPS_MGR_URL shouldn't be prepended with `https:\/\/`.\n\nLog into CredHub with the credentials you obtained earlier:\n\n```shell-session\n$ credhub login --client-name=director_to_credhub --client-secret=some-secret\n```\n\nAnd view the root certificate CF uses to issue instance identity certificates:\n\n```shell-session\n$ credhub get -n \/cf\/diego-instance-identity-root-ca\n```\n\nThe output to that call will include two certificates and one RSA key. 
You will need to copy the certificate\nunder `ca: |` and place it into a file on your local machine that's properly formatted. Here's an example of\na properly formatted CA certificate:\n\n```shell-session\n$ cat ca.crt\n-----BEGIN CERTIFICATE-----\nMIIDNDCCAhygAwIBAgITPqTy1qvfHNEVuxsl9l1glY85OTANBgkqhkiG9w0BAQsF\nADAqMSgwJgYDVQQDEx9EaWVnbyBJbnN0YW5jZSBJZGVudGl0eSBSb290IENBMB4X\nDTE5MDYwNjA5MTIwMVoXDTIyMDYwNTA5MTIwMVowKjEoMCYGA1UEAxMfRGllZ28g\nSW5zdGFuY2UgSWRlbnRpdHkgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEP\nADCCAQoCggEBALa8xGDYT\/q3UzEKAsLDajhuHxPpIPFlCXwp6u8U5Qrf427Xof7n\nrXRKzRu3g7E20U\/OwzgBi3VZs8T29JGNWeA2k0HtX8oQ+Wc8Qngz9M8t1h9SZlx5\nfGfxPt3x7xozaIGJ8p4HKQH1ZlirL7dzun7Y+7m6Ey8cMVsepqUs64r8+KpCbxKJ\nrV04qtTNlr0LG3yOxSHlip+DDvUVL3jSFz\/JDWxwCymiFBAh0QjG1LKp2FisURoX\nGY+HJbf2StpK3i4dYnxQXQlMDpipozK7WFxv3gH4Q6YMZvlmIPidAF8FxfDIsYcq\nTgQ5q0pr9mbu8oKbZ74vyZMqiy+r9vLhbu0CAwEAAaNTMFEwHQYDVR0OBBYEFAHf\npwqBhZ8\/A6ZAvU+p5JPz\/omjMB8GA1UdIwQYMBaAFAHfpwqBhZ8\/A6ZAvU+p5JPz\n\/omjMA8GA1UdEwEB\/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBADuDJev+6bOC\nv7t9SS4Nd\/zeREuF9IKsHDHrYUZBIO1aBQbOO1iDtL4VA3LBEx6fOgN5fbxroUsz\nX9\/6PtxLe+5U8i5MOztK+OxxPrtDfnblXVb6IW4EKhTnWesS7R2WnOWtzqRQXKFU\nvoBn3QckLV1o9eqzYIE\/aob4z0GaVanA9PSzzbVPsX79RCD1B7NmV0cKEQ7IrCrh\nL7ElDV\/GlNrtVdHjY0mwz9iu+0YJvxvcHDTERi106b28KXzJz+P5\/hyg2wqRXzdI\nfaXAjW0kuq5nxyJUALwxD\/8pz77uNt4w6WfJoSDM6XrAIhh15K3tZg9EzBmAZ\/5D\njK0RcmCyaXw=\n-----END CERTIFICATE-----\n```\n\nAn easy way to verify that your CA certificate is properly formatted is using OpenSSL like so:\n\n```shell-session\n$ openssl x509 -in ca.crt -text -noout\nCertificate:\n    Data:\n        Version: 3 (0x2)\n        Serial Number:\n            3e:a4:f2:d6:ab:df:1c:d1:15:bb:1b:25:f6:5d:60:95:8f:39:39\n    Signature Algorithm: sha256WithRSAEncryption\n        Issuer: CN=Diego Instance Identity Root CA\n        Validity\n            Not Before: Jun  6 09:12:01 2019 GMT\n            Not After : Jun  5 09:12:01 2022 GMT\n        Subject: CN=Diego Instance Identity 
Root CA\n        Subject Public Key Info:\n            Public Key Algorithm: rsaEncryption\n                Public-Key: (2048 bit)\n                Modulus:\n                    00:b6:bc:c4:60:d8:4f:fa:b7:53:31:0a:02:c2:c3:\n                    6a:38:6e:1f:13:e9:20:f1:65:09:7c:29:ea:ef:14:\n                    e5:0a:df:e3:6e:d7:a1:fe:e7:ad:74:4a:cd:1b:b7:\n                    83:b1:36:d1:4f:ce:c3:38:01:8b:75:59:b3:c4:f6:\n                    f4:91:8d:59:e0:36:93:41:ed:5f:ca:10:f9:67:3c:\n                    42:78:33:f4:cf:2d:d6:1f:52:66:5c:79:7c:67:f1:\n                    3e:dd:f1:ef:1a:33:68:81:89:f2:9e:07:29:01:f5:\n                    66:58:ab:2f:b7:73:ba:7e:d8:fb:b9:ba:13:2f:1c:\n                    31:5b:1e:a6:a5:2c:eb:8a:fc:f8:aa:42:6f:12:89:\n                    ad:5d:38:aa:d4:cd:96:bd:0b:1b:7c:8e:c5:21:e5:\n                    8a:9f:83:0e:f5:15:2f:78:d2:17:3f:c9:0d:6c:70:\n                    0b:29:a2:14:10:21:d1:08:c6:d4:b2:a9:d8:58:ac:\n                    51:1a:17:19:8f:87:25:b7:f6:4a:da:4a:de:2e:1d:\n                    62:7c:50:5d:09:4c:0e:98:a9:a3:32:bb:58:5c:6f:\n                    de:01:f8:43:a6:0c:66:f9:66:20:f8:9d:00:5f:05:\n                    c5:f0:c8:b1:87:2a:4e:04:39:ab:4a:6b:f6:66:ee:\n                    f2:82:9b:67:be:2f:c9:93:2a:8b:2f:ab:f6:f2:e1:\n                    6e:ed\n                Exponent: 65537 (0x10001)\n        X509v3 extensions:\n            X509v3 Subject Key Identifier:\n                01:DF:A7:0A:81:85:9F:3F:03:A6:40:BD:4F:A9:E4:93:F3:FE:89:A3\n            X509v3 Authority Key Identifier:\n                keyid:01:DF:A7:0A:81:85:9F:3F:03:A6:40:BD:4F:A9:E4:93:F3:FE:89:A3\n\n            X509v3 Basic Constraints: critical\n                CA:TRUE\n    Signature Algorithm: sha256WithRSAEncryption\n         3b:83:25:eb:fe:e9:b3:82:bf:bb:7d:49:2e:0d:77:fc:de:44:\n         4b:85:f4:82:ac:1c:31:eb:61:46:41:20:ed:5a:05:06:ce:3b:\n         58:83:b4:be:15:03:72:c1:13:1e:9f:3a:03:79:7d:bc:6b:a1:\n         
4b:33:5f:df:fa:3e:dc:4b:7b:ee:54:f2:2e:4c:3b:3b:4a:f8:\n         ec:71:3e:bb:43:7e:76:e5:5d:56:fa:21:6e:04:2a:14:e7:59:\n         eb:12:ed:1d:96:9c:e5:ad:ce:a4:50:5c:a1:54:be:80:67:dd:\n         07:24:2d:5d:68:f5:ea:b3:60:81:3f:6a:86:f8:cf:41:9a:55:\n         a9:c0:f4:f4:b3:cd:b5:4f:b1:7e:fd:44:20:f5:07:b3:66:57:\n         47:0a:11:0e:c8:ac:2a:e1:2f:b1:25:0d:5f:c6:94:da:ed:55:\n         d1:e3:63:49:b0:cf:d8:ae:fb:46:09:bf:1b:dc:1c:34:c4:46:\n         2d:74:e9:bd:bc:29:7c:c9:cf:e3:f9:fe:1c:a0:db:0a:91:5f:\n         37:48:7d:a5:c0:8d:6d:24:ba:ae:67:c7:22:54:00:bc:31:0f:\n         ff:29:cf:be:ee:36:de:30:e9:67:c9:a1:20:cc:e9:7a:c0:22:\n         18:75:e4:ad:ed:66:0f:44:cc:19:80:67:fe:43:8c:ad:11:72:\n         60:b2:69:7c\n```\n\nYou will also need to configure access to the CF API. To prepare for this, we will now\nuse the [cf](https:\/\/docs.cloudfoundry.org\/cf-cli\/install-go-cli.html) command-line tool.\n\nFirst, while in the directory containing the `metadata` file you used earlier to authenticate\nto CF, run `$ pcf target`. This points the `cf` tool at the same place as the `pcf` tool. Next,\nrun `$ cf api` to view the API endpoint that Vault will use.\n\nNext, configure a user for Vault to use. 
This plugin was tested with Org Manager level\npermissions, but lower level permissions _may_ be usable.\n\n```shell-session\n$ cf create-user vault pa55w0rd\n$ cf orgs\n$ cf org-users my-example-org\n$ cf set-org-role vault my-example-org OrgManager\n```\n\nSpecifically, the `vault` user created here will need to be able to perform the following API calls:\n\n- Method: \"GET\", endpoint: \"\/v2\/info\"\n- Method: \"POST\", endpoint: \"\/oauth\/token\"\n- Method: \"GET\", endpoint: \"\/v2\/apps\/\\$APP_ID\"\n- Method: \"GET\", endpoint: \"\/v2\/organizations\/\\$ORG_ID\"\n- Method: \"GET\", endpoint: \"\/v2\/spaces\/\\$SPACE_ID\"\n\nNext, PCF often uses a self-signed certificate for TLS, which can be rejected at first\nwith an error like:\n\n<CodeBlockConfig hideClipboard>\n\n```plaintext\nx509: certificate signed by unknown authority\n```\n\n<\/CodeBlockConfig>\n\nIf you encounter this error, you will need to first gain a copy of the certificate that CF\nis using for the API via:\n\n```shell-session\n$ openssl s_client -showcerts -servername domain.com -connect domain.com:443\n```\n\nHere is an example of a real call:\n\n```shell-session\n$ openssl s_client -showcerts -servername api.sys.somewhere.cf-app.com -connect api.sys.somewhere.cf-app.com:443\n```\n\nPart of the response will contain a certificate, which you'll need to copy and paste to\na well-formatted local file. Please see `ca.crt` above for an example of how the certificate\nshould look, and how to verify it can be parsed using `openssl`. 
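Rather than copying the certificate out of the `s_client` output by hand, the PEM block can be captured with `sed`; a sketch against the same assumed hostname (`</dev/null` closes the TLS session immediately so the command returns):

```shell-session
$ openssl s_client -showcerts -servername api.sys.somewhere.cf-app.com \
    -connect api.sys.somewhere.cf-app.com:443 </dev/null 2>/dev/null \
  | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' \
  > cfapi.crt
```

Note that `-showcerts` prints every certificate in the presented chain, so `cfapi.crt` may contain more than one PEM block; verify the result parses with `openssl x509` as described above.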
The walkthrough below presumes\nyou name this file `cfapi.crt`.\n\n### Walkthrough\n\nAfter obtaining the information described above, a Vault operator will configure the CF auth method\nlike so:\n\n```shell-session\n$ vault auth enable cf\n\n$ vault write auth\/cf\/config \\\n      identity_ca_certificates=@ca.crt \\\n      cf_api_addr=https:\/\/api.dev.cfdev.sh \\\n      cf_username=vault \\\n      cf_password=pa55w0rd \\\n      cf_api_trusted_certificates=@cfapi.crt\n\n$ vault write auth\/cf\/roles\/my-role \\\n    bound_application_ids=2d3e834a-3a25-4591-974c-fa5626d5d0a1 \\\n    bound_space_ids=3d2eba6b-ef19-44d5-91dd-1975b0db5cc9 \\\n    bound_organization_ids=34a878d0-c2f9-4521-ba73-a9f664e82c7bf \\\n    policies=my-policy\n```\n\nOnce configured, from a CF instance containing real values for the `CF_INSTANCE_CERT` and\n`CF_INSTANCE_KEY`, login can be performed using:\n\n```shell-session\n$ vault login -method=cf role=test-role\n```\n\nFor CF, we do also offer an agent that, once configured, can be used to obtain a Vault token on\nyour behalf.\n\n### Enabling mutual TLS with the CF API\n\nThe CF API can be configured to require mutual TLS with clients. This plugin supports mutual TLS by setting the\n`cf_api_mutual_tls_certificate` and `cf_api_mutual_tls_key` configuration properties.\n\n<CodeBlockConfig highlight=\"7-8\">\n\n```shell-session\n$ vault write auth\/cf\/config \\\n      identity_ca_certificates=@ca.crt \\\n      cf_api_addr=https:\/\/api.dev.cfdev.sh \\\n      cf_username=vault \\\n      cf_password=pa55w0rd \\\n      cf_api_trusted_certificates=@cfapi.crt \\\n      cf_api_mutual_tls_certificate=@cfmutualtls.crt \\\n      cf_api_mutual_tls_key=@cfmutualtls.key\n```\n\n<\/CodeBlockConfig>\n\nThe provided certificate must be signed by a certificate authority trusted by the CF API. 
Obtaining such a certificate\ndepends on the specifics of your deployment of Cloud Foundry.\n\n### Maintenance\n\nIn testing we found that CF instance identity CA certificates were set to expire in 3 years. Some\nCF docs indicate they expire every 4 years. However long they last, at some point you may need\nto configure two CA certificates: one that's soon to expire, and one that is currently or soon-to-be\nvalid.\n\n```shell-session\n$ CURRENT=$(cat \/path\/to\/current-ca.crt)\n$ FUTURE=$(cat \/path\/to\/future-ca.crt)\n$ vault write auth\/cf\/config identity_ca_certificates=\"$CURRENT\" identity_ca_certificates=\"$FUTURE\"\n```\n\nIf Vault receives a `CF_INSTANCE_CERT` matching _any_ of the `identity_ca_certificates`,\nthe instance cert will be considered valid.\n\nA similar approach can be taken to update the `cf_api_trusted_certificates`.\n\n### Troubleshooting At-A-Glance\n\nIf you receive an error containing `x509: certificate signed by unknown authority`, set\n`cf_api_trusted_certificates` as described above.\n\nIf you're unable to authenticate using the `CF_INSTANCE_CERT`, first obtain a current copy\nof your `CF_INSTANCE_CERT` and copy it to your local environment. Then divide it into two\nfiles, each being a distinct certificate. The first certificate tends to be the actual\n`identity.crt`, and the second one tends to be the `intermediate.crt`. Verify each is\nproperly named and formatted using a command like:\n\n```shell-session\n$ openssl x509 -in ca.crt -text -noout\n```\n\nThen, verify that the certificates are properly chained to the `ca.crt` you've configured:\n\n```shell-session\n$ openssl verify -CAfile ca.crt -untrusted intermediate.crt identity.crt\n```\n\nThis should show a success response. If it doesn't, try to identify the root cause, be it\nan expired certificate, an incorrect `ca.crt`, or a Vault configuration that doesn't\nmatch the certificates you're checking.\n\n## API\n\nThe CF auth method has a full HTTP API. 
Please see the [CF Auth API](\/vault\/api-docs\/auth\/cf)\nfor more details.","site":"vault","answers_cleaned":"    layout  docs page title  Cloud Foundry   Auth Methods description  The cf auth method allows automated authentication of Cloud Foundry instances         Cloud Foundry  CF  auth method   include  x509 sha1 deprecation mdx   The  cf  auth method provides an automated mechanism to retrieve a Vault token for CF instances  It leverages CF s  App and Container Identity Assurance  https   content pivotal io blog new in pcf 2 1 app container identity assurance via automatic cert rotation   At a high level  this works as follows   1  You construct a request to Vault including your  CF INSTANCE CERT   signed by your  CF INSTANCE KEY   2  Vault validates that the signature is no more than 300 seconds old  or 60 seconds in the future  3  Vault validates that the cert was issued by the CA certificate you ve pre configured  4  Vault validates that the request was signed by the private key for the  CF INSTANCE CERT   5  Vault validates that the  CF INSTANCE CERT  application ID  space ID  and org ID presently exist  6  If all checks pass  Vault issues an appropriately scoped token      Known risks  This authentication engine uses CF s instance identity service to authenticate users to Vault  Because CF makes its CA certificate and private key available to certain users at any time  it s possible for someone with access to them to self issue identity certificates that meet the criteria for a Vault role  allowing them to gain unintended access to Vault   For this reason  we recommend that if you enable this auth method  you carefully guard access to the private key for your instance identity CA certificate  In CredHub  it can be obtained through the following call     credhub get  n  cf diego instance identity root ca    Take extra steps to limit access to that path in CredHub  whether it be through use of CredHub s ACL system  or through carefully limiting the users 
who can access CredHub      Usage      Preparing to configure the plugin  To configure this plugin  you ll need to gather the CA certificate that CF uses to issue each  CF INSTANCE CERT   and you ll need to configure it to access the CF API   To gain your instance identity CA certificate  in the  cf dev  https   github com cloudfoundry incubator cfdev  environment it can be found using      shell session   bosh int   path  diego instance identity ca    cfdev state bosh creds yml      In environments containing Ops Manager  it can be found in CredHub  To gain access to CredHub  first install  the PCF command line utility  https   docs pivotal io tiledev 2 2 pcf command html  and authenticate to it using the  metadata  file it describes  These instructions also use  jq  https   stedolan github io jq   for ease of drilling into the particular part of the response you ll need   Once those steps are complete  get the credentials you ll use for CredHub      shell session   pcf settings   jq   products 0  director credhub client credentials       SSH into your Ops Manager VM      shell session   ssh  i ops mgr pem ubuntu  OPS MGR URL      Please note that the above OPS MGR URL shouldn t be prepended with  https       Log into CredHub with the credentials you obtained earlier      shell session   credhub login   client name director to credhub   client secret some secret      And view the root certificate CF uses to issue instance identity certificates      shell session   credhub get  n  cf diego instance identity root ca      The output to that call will include two certificates and one RSA key  You will need to copy the certificate under  ca     and place it into a file on your local machine that s properly formatted  Here s an example of a properly formatted CA certificate      shell session   cat ca crt      BEGIN CERTIFICATE      MIIDNDCCAhygAwIBAgITPqTy1qvfHNEVuxsl9l1glY85OTANBgkqhkiG9w0BAQsF ADAqMSgwJgYDVQQDEx9EaWVnbyBJbnN0YW5jZSBJZGVudGl0eSBSb290IENBMB4X 
DTE5MDYwNjA5MTIwMVoXDTIyMDYwNTA5MTIwMVowKjEoMCYGA1UEAxMfRGllZ28g SW5zdGFuY2UgSWRlbnRpdHkgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEP ADCCAQoCggEBALa8xGDYT q3UzEKAsLDajhuHxPpIPFlCXwp6u8U5Qrf427Xof7n rXRKzRu3g7E20U OwzgBi3VZs8T29JGNWeA2k0HtX8oQ Wc8Qngz9M8t1h9SZlx5 fGfxPt3x7xozaIGJ8p4HKQH1ZlirL7dzun7Y 7m6Ey8cMVsepqUs64r8 KpCbxKJ rV04qtTNlr0LG3yOxSHlip DDvUVL3jSFz JDWxwCymiFBAh0QjG1LKp2FisURoX GY HJbf2StpK3i4dYnxQXQlMDpipozK7WFxv3gH4Q6YMZvlmIPidAF8FxfDIsYcq TgQ5q0pr9mbu8oKbZ74vyZMqiy r9vLhbu0CAwEAAaNTMFEwHQYDVR0OBBYEFAHf pwqBhZ8 A6ZAvU p5JPz omjMB8GA1UdIwQYMBaAFAHfpwqBhZ8 A6ZAvU p5JPz  omjMA8GA1UdEwEB wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBADuDJev 6bOC v7t9SS4Nd zeREuF9IKsHDHrYUZBIO1aBQbOO1iDtL4VA3LBEx6fOgN5fbxroUsz X9 6PtxLe 5U8i5MOztK OxxPrtDfnblXVb6IW4EKhTnWesS7R2WnOWtzqRQXKFU voBn3QckLV1o9eqzYIE aob4z0GaVanA9PSzzbVPsX79RCD1B7NmV0cKEQ7IrCrh L7ElDV GlNrtVdHjY0mwz9iu 0YJvxvcHDTERi106b28KXzJz P5 hyg2wqRXzdI faXAjW0kuq5nxyJUALwxD 8pz77uNt4w6WfJoSDM6XrAIhh15K3tZg9EzBmAZ 5D jK0RcmCyaXw       END CERTIFICATE           An easy way to verify that your CA certificate is properly formatted is using OpenSSL like so      shell session   openssl x509  in ca crt  text  noout Certificate      Data          Version  3  0x2          Serial Number              3e a4 f2 d6 ab df 1c d1 15 bb 1b 25 f6 5d 60 95 8f 39 39     Signature Algorithm  sha256WithRSAEncryption         Issuer  CN Diego Instance Identity Root CA         Validity             Not Before  Jun  6 09 12 01 2019 GMT             Not After   Jun  5 09 12 01 2022 GMT         Subject  CN Diego Instance Identity Root CA         Subject Public Key Info              Public Key Algorithm  rsaEncryption                 Public Key   2048 bit                  Modulus                      00 b6 bc c4 60 d8 4f fa b7 53 31 0a 02 c2 c3                      6a 38 6e 1f 13 e9 20 f1 65 09 7c 29 ea ef 14                      e5 0a df e3 6e d7 a1 fe e7 ad 74 4a cd 1b b7                      83 b1 36 d1 4f ce c3 38 01 8b 75 59 b3 c4 f6               
       f4 91 8d 59 e0 36 93 41 ed 5f ca 10 f9 67 3c                      42 78 33 f4 cf 2d d6 1f 52 66 5c 79 7c 67 f1                      3e dd f1 ef 1a 33 68 81 89 f2 9e 07 29 01 f5                      66 58 ab 2f b7 73 ba 7e d8 fb b9 ba 13 2f 1c                      31 5b 1e a6 a5 2c eb 8a fc f8 aa 42 6f 12 89                      ad 5d 38 aa d4 cd 96 bd 0b 1b 7c 8e c5 21 e5                      8a 9f 83 0e f5 15 2f 78 d2 17 3f c9 0d 6c 70                      0b 29 a2 14 10 21 d1 08 c6 d4 b2 a9 d8 58 ac                      51 1a 17 19 8f 87 25 b7 f6 4a da 4a de 2e 1d                      62 7c 50 5d 09 4c 0e 98 a9 a3 32 bb 58 5c 6f                      de 01 f8 43 a6 0c 66 f9 66 20 f8 9d 00 5f 05                      c5 f0 c8 b1 87 2a 4e 04 39 ab 4a 6b f6 66 ee                      f2 82 9b 67 be 2f c9 93 2a 8b 2f ab f6 f2 e1                      6e ed                 Exponent  65537  0x10001          X509v3 extensions              X509v3 Subject Key Identifier                  01 DF A7 0A 81 85 9F 3F 03 A6 40 BD 4F A9 E4 93 F3 FE 89 A3             X509v3 Authority Key Identifier                  keyid 01 DF A7 0A 81 85 9F 3F 03 A6 40 BD 4F A9 E4 93 F3 FE 89 A3              X509v3 Basic Constraints  critical                 CA TRUE     Signature Algorithm  sha256WithRSAEncryption          3b 83 25 eb fe e9 b3 82 bf bb 7d 49 2e 0d 77 fc de 44           4b 85 f4 82 ac 1c 31 eb 61 46 41 20 ed 5a 05 06 ce 3b           58 83 b4 be 15 03 72 c1 13 1e 9f 3a 03 79 7d bc 6b a1           4b 33 5f df fa 3e dc 4b 7b ee 54 f2 2e 4c 3b 3b 4a f8           ec 71 3e bb 43 7e 76 e5 5d 56 fa 21 6e 04 2a 14 e7 59           eb 12 ed 1d 96 9c e5 ad ce a4 50 5c a1 54 be 80 67 dd           07 24 2d 5d 68 f5 ea b3 60 81 3f 6a 86 f8 cf 41 9a 55           a9 c0 f4 f4 b3 cd b5 4f b1 7e fd 44 20 f5 07 b3 66 57           47 0a 11 0e c8 ac 2a e1 2f b1 25 0d 5f c6 94 da ed 55           d1 e3 63 49 b0 cf d8 ae fb 46 09 bf 1b dc 1c 34 c4 46           2d 74 e9 bd bc 29 7c c9 cf e3 f9 fe 1c a0 
db 0a 91 5f           37 48 7d a5 c0 8d 6d 24 ba ae 67 c7 22 54 00 bc 31 0f           ff 29 cf be ee 36 de 30 e9 67 c9 a1 20 cc e9 7a c0 22           18 75 e4 ad ed 66 0f 44 cc 19 80 67 fe 43 8c ad 11 72           60 b2 69 7c      You will also need to configure access to the CF API  To prepare for this  we will now use the  cf  https   docs cloudfoundry org cf cli install go cli html  command line tool   First  while in the directory containing the  metadata  file you used earlier to authenticate to CF  run    pcf target   This points the  cf  tool at the same place as the  pcf  tool  Next  run    cf api  to view the API endpoint that Vault will use   Next  configure a user for Vault to use  This plugin was tested with Org Manager level permissions  but lower level permissions  may  be usable      shell session   cf create user vault pa55w0rd   cf orgs   cf org users my example org   cf set org role vault my example org OrgManager      Specifically  the  vault  user created here will need to be able to perform the following API calls     Method   GET   endpoint    v2 info    Method   POST   endpoint    oauth token    Method   GET   endpoint    v2 apps   APP ID    Method   GET   endpoint    v2 organizations   ORG ID    Method   GET   endpoint    v2 spaces   SPACE ID   Next  PCF often uses a self signed certificate for TLS  which can be rejected at first with an error like    CodeBlockConfig hideClipboard      plaintext x509  certificate signed by unknown authority        CodeBlockConfig   If you encounter this error  you will need to first gain a copy of the certificate that CF is using for the API via      shell session   openssl s client  showcerts  servername domain com  connect domain com 443      Here is an example of a real call      shell session   openssl s client  showcerts  servername api sys somewhere cf app com  connect api sys somewhere cf app com 443      Part of the response will contain a certificate  which you ll need to copy and paste to a well 
formatted local file  Please see  ca crt  above for an example of how the certificate should look  and how to verify it can be parsed using  openssl   The walkthrough below presumes you name this file  cfapi crt        Walkthrough  After obtaining the information described above  a Vault operator will configure the CF auth method like so      shell session   vault auth enable cf    vault write auth cf config         identity ca certificates  ca crt         cf api addr https   api dev cfdev sh         cf username vault         cf password pa55w0rd         cf api trusted certificates  cfapi crt    vault write auth cf roles my role       bound application ids 2d3e834a 3a25 4591 974c fa5626d5d0a1       bound space ids 3d2eba6b ef19 44d5 91dd 1975b0db5cc9       bound organization ids 34a878d0 c2f9 4521 ba73 a9f664e82c7bf       policies my policy      Once configured  from a CF instance containing real values for the  CF INSTANCE CERT  and  CF INSTANCE KEY   login can be performed using      shell session   vault login  method cf role test role      For CF  we do also offer an agent that  once configured  can be used to obtain a Vault token on your behalf       Enabling mutual TLS with the CF API  The CF API can be configured to require mutual TLS with clients  This plugin supports mutual TLS by setting the  cf api mutual tls certificate  and  cf api mutual tls key  configuration properties    CodeBlockConfig highlight  7 8       shell session   vault write auth cf config         identity ca certificates  ca crt         cf api addr https   api dev cfdev sh         cf username vault         cf password pa55w0rd         cf api trusted certificates  cfapi crt         cf api mutual tls certificate  cfmutualtls crt         cf api mutual tls key  cfmutualtls key        CodeBlockConfig   The provided certificate must be signed by a certificate authority trusted by the CF API  Obtaining such a certificate depends on the specifics of your deployment of Cloud Foundry       
Maintenance  In testing we found that CF instance identity CA certificates were set to expire in 3 years  Some CF docs indicate they expire every 4 years  However long they last  at some point you may need to add another CA certificate   one that s soon to expire  and one that is currently or soon to be valid      shell session   CURRENT   cat  path to current ca crt    FUTURE   cat  path to future ca crt    vault write auth vault plugin auth cf config identity ca certificates   CURRENT  identity ca certificates   FUTURE       If Vault receives a  CF INSTANCE CERT  matching  any  of the  identity ca certificates   the instance cert will be considered valid   A similar approach can be taken to update the  cf api trusted certificates        Troubleshooting At A Glance  If you receive an error containing  x509  certificate signed by unknown authority   set  cf api trusted certificates  as described above   If you re unable to authenticate using the  CF INSTANCE CERT   first obtain a current copy of your  CF INSTANCE CERT  and copy it to your local environment  Then divide it into two files  each being a distinct certificate  The first certificate tends to be the actual  identity crt   and the second one tends to be the  intermediate crt   Verify each are properly named and formatted using a command like      shell session   openssl x509  in ca crt  text  noout      Then  verify that the certificates are properly chained to the  ca crt  you ve configured      shell session   openssl verify  CAfile ca crt  untrusted intermediate crt identity crt      This should show a success response  If it doesn t  try to identify the root cause  be it an expired certificate  an incorrect  ca crt   or a Vault configuration that doesn t match the certificates you re checking      API  The CF auth method has a full HTTP API  Please see the  CF Auth API   vault api docs auth cf  for more details "}
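The CF record above describes Vault's freshness check on the signed login request: the signature timestamp may be at most 300 seconds in the past, or up to 60 seconds in the future to tolerate clock skew. A minimal Python sketch of that window check (the function name and constants are illustrative, not part of the plugin's API):

```python
from datetime import datetime, timedelta, timezone

# Freshness window described in the CF auth docs: a login signature is
# accepted if it was created no more than 300 seconds ago and is not
# dated more than 60 seconds into the future (clock-skew allowance).
MAX_AGE = timedelta(seconds=300)
MAX_FUTURE = timedelta(seconds=60)

def signature_time_valid(signed_at: datetime, now: datetime = None) -> bool:
    """Return True if the signing time falls inside the accepted window."""
    if now is None:
        now = datetime.now(timezone.utc)
    age = now - signed_at  # positive when signed in the past
    return age <= MAX_AGE and -age <= MAX_FUTURE

now = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
print(signature_time_valid(now - timedelta(seconds=299), now))  # True: just inside the 300s age limit
print(signature_time_valid(now - timedelta(seconds=301), now))  # False: too old
print(signature_time_valid(now + timedelta(seconds=61), now))   # False: too far in the future
```

Because both bounds are checked against the same signed timestamp, a replayed request becomes invalid within five minutes even if the certificate chain still verifies.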
{"questions":"vault The azure auth method plugin allows automated authentication of Azure Active Directory layout docs page title Azure Auth Methods Azure auth method","answers":"---\nlayout: docs\npage_title: Azure - Auth Methods\ndescription: |-\n  The azure auth method plugin allows automated authentication of Azure Active\n  Directory.\n---\n\n# Azure auth method\n\nThe `azure` auth method allows authentication against Vault using\nAzure Active Directory credentials. It treats Azure as a Trusted Third Party\nand expects a [JSON Web Token (JWT)](https:\/\/tools.ietf.org\/html\/rfc7519)\nsigned by Azure Active Directory for the configured tenant.\n\nThis method supports authentication for system-assigned and user-assigned\nmanaged identities. See [Managed identities for Azure resources](https:\/\/learn.microsoft.com\/en-us\/azure\/active-directory\/managed-identities-azure-resources\/overview)\nfor more information about these resources.\n\nThis documentation assumes the Azure method is mounted at the `\/auth\/azure`\npath in Vault. Since it is possible to enable auth methods at any location,\nplease update your API calls accordingly.\n\n## Prerequisites:\n\nThe Azure auth method requires client credentials to access Azure APIs. The following\nare required to configure the auth method:\n\n- A configured [Azure AD application](https:\/\/docs.microsoft.com\/en-us\/azure\/active-directory\/develop\/active-directory-integrating-applications)\n  which is used as the resource for generating MSI access tokens.\n- Client credentials (shared secret) with read access to particular Azure Resource Manager\n  resources. 
See [Azure AD Service to Service Client Credentials](https:\/\/docs.microsoft.com\/en-us\/azure\/active-directory\/develop\/active-directory-protocols-oauth-service-to-service).\n\nIf Vault is hosted on Azure, Vault can use MSI to access Azure instead of a shared secret.\nA managed identity must be [enabled](https:\/\/learn.microsoft.com\/en-us\/azure\/active-directory\/managed-identities-azure-resources\/)\non the resource that acquires the access token.\n\nThe following Azure [role assignments](https:\/\/learn.microsoft.com\/en-us\/azure\/role-based-access-control\/overview#role-assignments)\nmust be granted to the Azure AD application in order for the auth method to access Azure\nAPIs during authentication.\n\n### Role assignments\n\n~> **Note:** The role assignments are only required when the\n[`vm_name`](\/vault\/api-docs\/auth\/azure#vm_name), [`vmss_name`](\/vault\/api-docs\/auth\/azure#vmss_name),\nor [`resource_id`](\/vault\/api-docs\/auth\/azure#resource_id) parameters are used on login.\n\n| Azure Environment | Login Parameter | Azure API Permission |\n| ----------- | --------------- | -------------------- |\n| Virtual Machine | [`vm_name`](\/vault\/api-docs\/auth\/azure#vm_name) | `Microsoft.Compute\/virtualMachines\/*\/read` |\n| Virtual Machine Scale Set ([Uniform Orchestration][vmss-uniform]) | [`vmss_name`](\/vault\/api-docs\/auth\/azure#vmss_name) | `Microsoft.Compute\/virtualMachineScaleSets\/*\/read` |\n| Virtual Machine Scale Set ([Flexible Orchestration][vmss-flex]) | [`vmss_name`](\/vault\/api-docs\/auth\/azure#vmss_name) | `Microsoft.Compute\/virtualMachineScaleSets\/*\/read` `Microsoft.ManagedIdentity\/userAssignedIdentities\/*\/read` |\n| Services that [support managed identities][managed-identities] for Azure resources | [`resource_id`](\/vault\/api-docs\/auth\/azure#resource_id) | `read` on the resource used to obtain the JWT |\n\n[vmss-uniform]: 
https:\/\/learn.microsoft.com\/en-us\/azure\/virtual-machine-scale-sets\/virtual-machine-scale-sets-orchestration-modes#scale-sets-with-uniform-orchestration\n[vmss-flex]: https:\/\/learn.microsoft.com\/en-us\/azure\/virtual-machine-scale-sets\/virtual-machine-scale-sets-orchestration-modes#scale-sets-with-flexible-orchestration\n[managed-identities]: https:\/\/learn.microsoft.com\/en-us\/azure\/active-directory\/managed-identities-azure-resources\/managed-identities-status\n\n### API permissions\n\nThe following [API permissions](https:\/\/learn.microsoft.com\/en-us\/azure\/active-directory\/develop\/permissions-consent-overview#types-of-permissions)\nmust be assigned to the service principal provided to Vault for managing the root rotation in Azure:\n\n| Permission Name               | Type        |\n| ----------------------------- | ----------- |\n| Application.ReadWrite.All     | Application |\n\n## Authentication\n\n### Via the CLI\n\nThe default path is `\/auth\/azure`. If this auth method was enabled at a different\npath, specify `auth\/my-path\/login` instead.\n\n```shell-session\n$ vault write auth\/azure\/login \\\n    role=\"dev-role\" \\\n    jwt=\"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...\" \\\n    subscription_id=\"12345-...\" \\\n    resource_group_name=\"test-group\" \\\n    vm_name=\"test-vm\"\n```\n\nThe `role` and `jwt` parameters are required. When using\n`bound_service_principal_ids` and `bound_group_ids` in the token roles, all the\ninformation is required in the JWT (except for `vm_name`, `vmss_name`, `resource_id`). 
When\nusing other `bound_*` parameters, calls to Azure APIs will be made and\n`subscription_id`, `resource_group_name`, and `vm_name`\/`vmss_name` are all required\nand can be obtained through instance metadata.\n\nFor example:\n\n```shell-session\n$ vault write auth\/azure\/login role=\"dev-role\" \\\n     jwt=\"$(curl -s 'http:\/\/169.254.169.254\/metadata\/identity\/oauth2\/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true | jq -r '.access_token')\" \\\n     subscription_id=$(curl -s -H Metadata:true \"http:\/\/169.254.169.254\/metadata\/instance?api-version=2017-08-01\" | jq -r '.compute | .subscriptionId')  \\\n     resource_group_name=$(curl -s -H Metadata:true \"http:\/\/169.254.169.254\/metadata\/instance?api-version=2017-08-01\" | jq -r '.compute | .resourceGroupName') \\\n     vm_name=$(curl -s -H Metadata:true \"http:\/\/169.254.169.254\/metadata\/instance?api-version=2017-08-01\" | jq -r '.compute | .name')\n```\n\n### Via the API\n\nThe default endpoint is `auth\/azure\/login`. If this auth method was enabled\nat a different path, use that value instead of `azure`.\n\n```shell-session\n$ curl \\\n    --request POST \\\n    --data '{\"role\": \"dev-role\", \"jwt\": \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...\"}' \\\n    https:\/\/127.0.0.1:8200\/v1\/auth\/azure\/login\n```\n\nThe response will contain the token at `auth.client_token`:\n\n```json\n{\n  \"auth\": {\n    \"client_token\": \"f33f8c72-924e-11f8-cb43-ac59d697597c\",\n    \"accessor\": \"0e9e354a-520f-df04-6867-ee81cae3d42d\",\n    \"policies\": [\"default\", \"dev\", \"prod\"],\n    \"lease_duration\": 2764800,\n    \"renewable\": true\n  }\n}\n```\n\n## Configuration\n\nAuth methods must be configured in advance before machines can authenticate.\nThese steps are usually completed by an operator or configuration management\ntool.\n\n### Via the CLI\n\n1. 
Enable Azure authentication in Vault:\n\n   ```shell-session\n   $ vault auth enable azure\n   ```\n\n1. Configure the Azure auth method:\n\n   ```shell-session\n   $ vault write auth\/azure\/config \\\n       tenant_id=7cd1f227-ca67-4fc6-a1a4-9888ea7f388c \\\n       resource=https:\/\/management.azure.com\/ \\\n       client_id=dd794de4-4c6c-40b3-a930-d84cd32e9699 \\\n       client_secret=IT3B2XfZvWnfB98s1cie8EMe7zWg483Xy8zY004=\n   ```\n\n   For the complete list of configuration options, please see the API\n   documentation.\n\n   In some cases, you cannot set sensitive account credentials in your\n   Vault configuration. For example, your organization may require that all\n   security credentials are short-lived or explicitly tied to a machine identity.\n\n   To provide managed identity security credentials to Vault, we recommend using Vault\n   [plugin workload identity federation](#plugin-workload-identity-federation-wif)\n   (WIF) as shown below.\n\n1. Alternatively, configure the audience claim value and the client and tenant IDs for plugin workload identity federation:\n\n   ```shell-session\n   $ vault write auth\/azure\/config \\\n       tenant_id=7cd1f227-ca67-4fc6-a1a4-9888ea7f388c \\\n       client_id=dd794de4-4c6c-40b3-a930-d84cd32e9699 \\\n       identity_token_audience=vault.example\/v1\/identity\/oidc\/plugins\n   ```\n\n   The Vault identity token provider signs the plugin identity token JWT internally.\n   If a trust relationship exists between Vault and Azure through WIF, the auth\n   method can exchange the Vault identity token for a federated access token.\n\n   To configure a trusted relationship between Vault and Azure:\n       - You must configure the [identity token issuer backend](\/vault\/api-docs\/secret\/identity\/tokens#configure-the-identity-tokens-backend)\n       for Vault.\n       - Azure must have a\n       [federated identity 
credential](https:\/\/learn.microsoft.com\/en-us\/entra\/workload-id\/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp#configure-a-federated-identity-credential-on-an-app)\n       configured with information about the fully qualified and network-reachable\n       issuer URL for the Vault plugin\n       [identity token provider](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-well-known-configurations).\n\n   Establishing a trusted relationship between Vault and Azure ensures that Azure\n   can fetch JWKS\n   [public keys](\/vault\/api-docs\/secret\/identity\/tokens#read-active-public-keys)\n   and verify the plugin identity token signature.\n\n1. Create a role:\n\n   ```shell-session\n   $ vault write auth\/azure\/role\/dev-role \\\n       policies=\"prod,dev\" \\\n       bound_subscription_ids=6a1d5988-5917-4221-b224-904cd7e24a25 \\\n       bound_resource_groups=vault\n   ```\n\n   Roles are associated with an authentication type\/entity and a set of Vault\n   policies. Roles are configured with constraints specific to the\n   authentication type, as well as overall constraints and configuration for\n   the generated auth tokens.\n\n   For the complete list of role options, please see the [API documentation](\/vault\/api-docs\/auth\/azure).\n\n### Via the API\n\n1. Enable Azure authentication in Vault:\n\n   ```shell-session\n   $ curl \\\n       --header \"X-Vault-Token: ...\" \\\n       --request POST \\\n       --data '{\"type\": \"azure\"}' \\\n       https:\/\/127.0.0.1:8200\/v1\/sys\/auth\/azure\n   ```\n\n1. Configure the Azure auth method:\n\n   ```shell-session\n   $ curl \\\n       --header \"X-Vault-Token: ...\" \\\n       --request POST \\\n       --data '{\"tenant_id\": \"...\", \"resource\": \"...\"}' \\\n       https:\/\/127.0.0.1:8200\/v1\/auth\/azure\/config\n   ```\n\n1. 
Create a role:\n\n   ```shell-session\n   $ curl \\\n       --header \"X-Vault-Token: ...\" \\\n       --request POST \\\n       --data '{\"policies\": [\"dev\", \"prod\"], ...}' \\\n       https:\/\/127.0.0.1:8200\/v1\/auth\/azure\/role\/dev-role\n   ```\n\n## Azure managed identities\n\nThere are two types of [managed identities](https:\/\/learn.microsoft.com\/en-us\/azure\/active-directory\/managed-identities-azure-resources\/overview#managed-identity-types)\nin Azure: System-assigned and User-assigned. System-assigned identities are unique to\nevery virtual machine in Azure. If the resources using Azure auth are recreated\nfrequently, using system-assigned identities could result in many Vault entities being\ncreated. For environments with high ephemeral workloads, user-assigned identities are\nrecommended.\n\n### Limitations\n\nThe TTL of the access token returned by Azure AD for a managed identity is\n24 hours and is not configurable. See [limitations of using managed identities][id-limitations]\nfor more info.\n\n[id-limitations]: https:\/\/learn.microsoft.com\/en-us\/azure\/active-directory\/managed-identities-azure-resources\/managed-identity-best-practice-recommendations#limitation-of-using-managed-identities-for-authorization\n\n## Azure debug logs\n\nThe Azure auth plugin supports debug logging, which includes additional information\nabout requests and responses from the Azure API.\n\nTo enable the Azure debug logs, set the following environment variable on the Vault\nserver:\n\n```shell\nAZURE_SDK_GO_LOGGING=all\n```\n\n## Plugin Workload Identity Federation (WIF)\n\n<EnterpriseAlert product=\"vault\" \/>\n\nThe Azure auth method supports the plugin WIF workflow, and has a source of identity called\na plugin identity token. 
A plugin identity token is a JWT that is signed internally by Vault's\n[plugin identity token issuer](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).\n\nIf there is a trust relationship configured between Vault and Azure through\n[workload identity federation](https:\/\/learn.microsoft.com\/en-us\/entra\/workload-id\/workload-identity-federation),\nthe auth method can exchange its identity token for short-lived access tokens needed to\nperform its actions.\n\nExchanging identity tokens for access tokens lets the Azure auth method\noperate without configuring explicit access to sensitive client credentials.\n\nTo configure the auth method to use plugin WIF:\n\n1. Ensure that Vault [openid-configuration](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-openid-configuration)\n   and [public JWKS](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-public-jwks)\n   APIs are network-reachable by Azure. We recommend using an API proxy or gateway\n   if you need to limit Vault API exposure.\n\n1. Configure a\n   [federated identity credential](https:\/\/learn.microsoft.com\/en-us\/entra\/workload-id\/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp#configure-a-federated-identity-credential-on-an-app)\n   on a dedicated application registration in Azure to establish a trust relationship with Vault.\n   1. The issuer URL **must** point at your [Vault plugin identity token issuer](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the\n   `\/.well-known\/openid-configuration` suffix removed. For example:\n   `https:\/\/host:port\/v1\/identity\/oidc\/plugins`.\n   1. The subject identifier **must** match the unique `sub` claim issued by plugin identity tokens.\n   The subject identifier should have the form `plugin-identity:<NAMESPACE>:auth:<AZURE_MOUNT_ACCESSOR>`.\n   1. 
The audience should be under 600 characters. The default value in Azure is `api:\/\/AzureADTokenExchange`.\n\n1. Configure the Azure auth method with the client and tenant IDs and the OIDC audience value.\n\n   ```shell-session\n   $ vault write auth\/azure\/config \\\n     tenant_id=7cd1f227-ca67-4fc6-a1a4-9888ea7f388c \\\n     client_id=dd794de4-4c6c-40b3-a930-d84cd32e9699 \\\n     identity_token_audience=vault.example\/v1\/identity\/oidc\/plugins\n   ```\n\nYour auth method can now use plugin WIF for its configuration credentials.\nBy default, WIF [credentials](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/access-tokens#token-lifetime)\nhave a time-to-live of 1 hour and automatically refresh when they expire.\n\nPlease see the [API documentation](\/vault\/api-docs\/auth\/azure#configure)\nfor more details on the fields associated with plugin WIF.\n\n## API\n\nThe Azure Auth Plugin has a full HTTP API. Please see the [API documentation](\/vault\/api-docs\/auth\/azure) for more details.\n\n## Code example\n\nThe following example demonstrates using the Azure auth method to authenticate\nwith Vault.\n\n<CodeTabs>\n\n<CodeBlockConfig>\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\tvault \"github.com\/hashicorp\/vault\/api\"\n\tauth \"github.com\/hashicorp\/vault\/api\/auth\/azure\"\n)\n\n\/\/ Fetches a key-value secret (kv-v2) after authenticating to Vault via Azure authentication.\n\/\/ This example assumes you have a configured Azure AD Application.\nfunc getSecretWithAzureAuth() (string, error) {\n\tconfig := vault.DefaultConfig() \/\/ modify for more granular configuration\n\n\tclient, err := vault.NewClient(config)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to initialize Vault client: %w\", err)\n\t}\n\n\tazureAuth, err := auth.NewAzureAuth(\n\t\t\"dev-role-azure\",\n\t)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to initialize Azure auth method: %w\", err)\n\t}\n\n\tauthInfo, err := 
client.Auth().Login(context.Background(), azureAuth)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to login to Azure auth method: %w\", err)\n\t}\n\tif authInfo == nil {\n\t\treturn \"\", fmt.Errorf(\"no auth info was returned after login\")\n\t}\n\n\t\/\/ get secret from the default mount path for KV v2 in dev mode, \"secret\"\n\tsecret, err := client.KVv2(\"secret\").Get(context.Background(), \"creds\")\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to read secret: %w\", err)\n\t}\n\n\t\/\/ data map can contain more than one key-value pair,\n\t\/\/ in this case we're just grabbing one of them\n\tvalue, ok := secret.Data[\"password\"].(string)\n\tif !ok {\n\t\treturn \"\", fmt.Errorf(\"value type assertion failed: %T %#v\", secret.Data[\"password\"], secret.Data[\"password\"])\n\t}\n\n\treturn value, nil\n}\n\n```\n<\/CodeBlockConfig>\n\n<CodeBlockConfig>\n\n```cs\nusing System;\nusing System.Collections.Generic;\nusing System.IO;\nusing System.Net;\nusing System.Net.Http;\nusing System.Text;\nusing Newtonsoft.Json;\nusing VaultSharp;\nusing VaultSharp.V1.AuthMethods;\nusing VaultSharp.V1.AuthMethods.Azure;\nusing VaultSharp.V1.Commons;\n\nnamespace Examples\n{\n    public class AzureAuthExample\n    {\n        public class InstanceMetadata\n        {\n            public string name { get; set; }\n            public string resourceGroupName { get; set; }\n            public string subscriptionId { get; set; }\n        }\n\n        const string MetadataEndPoint = \"http:\/\/169.254.169.254\/metadata\/instance?api-version=2017-08-01\";\n        const string AccessTokenEndPoint = \"http:\/\/169.254.169.254\/metadata\/identity\/oauth2\/token?api-version=2018-02-01&resource=https:\/\/management.azure.com\/\";\n\n        \/\/\/ <summary>\n        \/\/\/ Fetches a key-value secret (kv-v2) after authenticating to Vault via Azure authentication.\n        \/\/\/ This example assumes you have a configured Azure AD Application.\n        \/\/\/ 
<\/summary>\n        public string GetSecretWithAzureAuth()\n        {\n            string vaultAddr = Environment.GetEnvironmentVariable(\"VAULT_ADDR\");\n            if(String.IsNullOrEmpty(vaultAddr))\n            {\n                throw new System.ArgumentNullException(\"Vault Address\");\n            }\n\n            string roleName = Environment.GetEnvironmentVariable(\"VAULT_ROLE\");\n            if(String.IsNullOrEmpty(roleName))\n            {\n                throw new System.ArgumentNullException(\"Vault Role Name\");\n            }\n\n            string jwt = GetJWT();\n            InstanceMetadata metadata = GetMetadata();\n\n            IAuthMethodInfo authMethod = new AzureAuthMethodInfo(roleName: roleName, jwt: jwt, subscriptionId: metadata.subscriptionId, resourceGroupName: metadata.resourceGroupName, virtualMachineName: metadata.name);\n            var vaultClientSettings = new VaultClientSettings(vaultAddr, authMethod);\n\n            IVaultClient vaultClient = new VaultClient(vaultClientSettings);\n\n            \/\/ We can retrieve the secret from the VaultClient object\n            Secret<SecretData> kv2Secret = null;\n            kv2Secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(path: \"\/creds\").Result;\n\n            var password = kv2Secret.Data.Data[\"password\"];\n\n            return password.ToString();\n        }\n\n        \/\/\/ <summary>\n        \/\/\/ Query the Azure Instance Metadata Service for metadata about the Azure instance\n        \/\/\/ <\/summary>\n        private InstanceMetadata GetMetadata()\n        {\n            HttpWebRequest metadataRequest = (HttpWebRequest)WebRequest.Create(MetadataEndPoint);\n            metadataRequest.Headers[\"Metadata\"] = \"true\";\n            metadataRequest.Method = \"GET\";\n\n            HttpWebResponse metadataResponse = (HttpWebResponse)metadataRequest.GetResponse();\n\n            StreamReader streamResponse = new StreamReader(metadataResponse.GetResponseStream());\n            
string stringResponse = streamResponse.ReadToEnd();\n            var resultsDict = JsonConvert.DeserializeObject<Dictionary<string, InstanceMetadata>>(stringResponse);\n\n            return resultsDict[\"compute\"];\n        }\n\n        \/\/\/ <summary>\n        \/\/\/ Query Azure Resource Manager (ARM) for an access token\n        \/\/\/ <\/summary>\n        private string GetJWT()\n        {\n            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(AccessTokenEndPoint);\n            request.Headers[\"Metadata\"] = \"true\";\n            request.Method = \"GET\";\n\n            HttpWebResponse response = (HttpWebResponse)request.GetResponse();\n\n            \/\/ Pipe response Stream to a StreamReader and extract access token\n            StreamReader streamResponse = new StreamReader(response.GetResponseStream());\n            string stringResponse = streamResponse.ReadToEnd();\n            var resultsDict = JsonConvert.DeserializeObject<Dictionary<string, string>>(stringResponse);\n\n            return resultsDict[\"access_token\"];\n        }\n    }\n}\n```\n\n<\/CodeBlockConfig>\n\n<\/CodeTabs>","site":"vault","answers_cleaned":"    layout  docs page title  Azure   Auth Methods description       The azure auth method plugin allows automated authentication of Azure Active   Directory         Azure auth method  The  azure  auth method allows authentication against Vault using Azure Active Directory credentials  It treats Azure as a Trusted Third Party and expects a  JSON Web Token  JWT   https   tools ietf org html rfc7519  signed by Azure Active Directory for the configured tenant   This method supports authentication for system assigned and user assigned managed identities  See  Managed identities for Azure resources  https   learn microsoft com en us azure active directory managed identities azure resources overview  for more information about these resources   This documentation assumes the Azure method is mounted at the   auth azure  path in 
Vault  Since it is possible to enable auth methods at any location  please update your API calls accordingly      Prerequisites   The Azure auth method requires client credentials to access Azure APIs  The following are required to configure the auth method     A configured  Azure AD application  https   docs microsoft com en us azure active directory develop active directory integrating applications    which is used as the resource for generating MSI access tokens    Client credentials  shared secret  with read access to particular Azure Resource Manager   resources  See  Azure AD Service to Service Client Credentials  https   docs microsoft com en us azure active directory develop active directory protocols oauth service to service    If Vault is hosted on Azure  Vault can use MSI to access Azure instead of a shared secret  A managed identity must be  enabled  https   learn microsoft com en us azure active directory managed identities azure resources   on the resource that acquires the access token   The following Azure  role assignments  https   learn microsoft com en us azure role based access control overview role assignments  must be granted to the Azure AD application in order for the auth method to access Azure APIs during authentication       Role assignments       Note    The role assignments are only required when the   vm name    vault api docs auth azure vm name     vmss name    vault api docs auth azure vmss name   or   resource id    vault api docs auth azure resource id  parameters are used on login     Azure Environment   Login Parameter   Azure API Permission                                                              Virtual Machine     vm name    vault api docs auth azure vm name     Microsoft Compute virtualMachines   read      Virtual Machine Scale Set   Uniform Orchestration  vmss uniform       vmss name    vault api docs auth azure vmss name     Microsoft Compute virtualMachineScaleSets   read      Virtual Machine Scale Set   Flexible 
Orchestration  vmss flex       vmss name    vault api docs auth azure vmss name     Microsoft Compute virtualMachineScaleSets   read   Microsoft ManagedIdentity userAssignedIdentities   read      Services that   support managed identities  managed identities   for Azure resources     resource id    vault api docs auth azure resource id     read  on the resource used to obtain the JWT     vmss uniform   https   learn microsoft com en us azure virtual machine scale sets virtual machine scale sets orchestration modes scale sets with uniform orchestration  vmss flex   https   learn microsoft com en us azure virtual machine scale sets virtual machine scale sets orchestration modes scale sets with flexible orchestration  managed identities   https   learn microsoft com en us azure active directory managed identities azure resources managed identities status      API permissions  The following  API permissions  https   learn microsoft com en us azure active directory develop permissions consent overview types of permissions  must be assigned to the service principal provided to Vault for managing the root rotation in Azure     Permission Name                 Type                                                            Application ReadWrite All       Application       Authentication      Via the CLI  The default path is   auth azure   If this auth method was enabled at a different path  specify  auth my path login  instead      shell session   vault write auth azure login       role  dev role        jwt  eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9           subscription id  12345            resource group name  test group        vm name  test vm       The  role  and  jwt  parameters are required  When using  bound service principal ids  and  bound group ids  in the token roles  all the information is required in the JWT  except for  vm name    vmss name    resource id    When using other  bound    parameters  calls to Azure APIs will be made and  subscription id    resource 
group name   and  vm name   vmss name  are all required and can be obtained through instance metadata   For example      shell session   vault write auth azure login role  dev role         jwt    curl  s  http   169 254 169 254 metadata identity oauth2 token api version 2018 02 01 resource https 3A 2F 2Fmanagement azure com 2F   H Metadata true   jq  r   access token           subscription id   curl  s  H Metadata true  http   169 254 169 254 metadata instance api version 2017 08 01    jq  r   compute    subscriptionId           resource group name   curl  s  H Metadata true  http   169 254 169 254 metadata instance api version 2017 08 01    jq  r   compute    resourceGroupName          vm name   curl  s  H Metadata true  http   169 254 169 254 metadata instance api version 2017 08 01    jq  r   compute    name            Via the API  The default endpoint is  auth azure login   If this auth method was enabled at a different path  use that value instead of  azure       shell session   curl         request POST         data    role    dev role    jwt    eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9             https   127 0 0 1 8200 v1 auth azure login      The response will contain the token at  auth client token       json      auth          client token    f33f8c72 924e 11f8 cb43 ac59d697597c        accessor    0e9e354a 520f df04 6867 ee81cae3d42d        policies     default    dev    prod         lease duration   2764800       renewable   true               Configuration  Auth methods must be configured in advance before machines can authenticate  These steps are usually completed by an operator or configuration management tool       Via the CLI  1  Enable Azure authentication in Vault         shell session      vault auth enable azure         1  Configure the Azure auth method         shell session      vault write auth azure config          tenant id 7cd1f227 ca67 4fc6 a1a4 9888ea7f388c          resource https   management azure com           client id dd794de4 4c6c 
40b3 a930 d84cd32e9699          client secret IT3B2XfZvWnfB98s1cie8EMe7zWg483Xy8zY004             For the complete list of configuration options  please see the API    documentation      In some cases  you cannot set sensitive account credentials in your    Vault configuration  For example  your organization may require that all    security credentials are short lived or explicitly tied to a machine identity      To provide managed identity security credentials to Vault  we recommend using Vault     plugin workload identity federation   plugin workload identity federation wif      WIF  as shown below   1   Alternatively  configure the audience claim value and the Client  Tenant IDs for plugin workload identity federation         text      vault write azure config          tenant id 7cd1f227 ca67 4fc6 a1a4 9888ea7f388c          client id dd794de4 4c6c 40b3 a930 d84cd32e9699          identity token audience vault example v1 identity oidc plugins            The Vault identity token provider signs the plugin identity token JWT internally     If a trust relationship exists between Vault and Azure through WIF  the auth    method can exchange the Vault identity token for a federated access token      To configure a trusted relationship between Vault and Azure           You must configure the  identity token issuer backend   vault api docs secret identity tokens configure the identity tokens backend         for Vault           Azure must have a         federated identity credential  https   learn microsoft com en us entra workload id workload identity federation create trust pivots identity wif apps methods azp configure a federated identity credential on an app         configured with information about the fully qualified and network reachable        issuer URL for the Vault plugin         identity token provider   vault api docs secret identity tokens read plugin identity well known configurations       Establishing a trusted relationship between Vault and Azure ensures 
that Azure    can fetch JWKS     public keys   vault api docs secret identity tokens read active public keys     and verify the plugin identity token signature   1  Create a role         shell session      vault write auth azure role dev role          policies  prod dev           bound subscription ids 6a1d5988 5917 4221 b224 904cd7e24a25          bound resource groups vault            Roles are associated with an authentication type entity and a set of Vault    policies  Roles are configured with constraints specific to the    authentication type  as well as overall constraints and configuration for    the generated auth tokens      For the complete list of role options  please see the  API documentation   vault api docs auth azure        Via the API  1  Enable Azure authentication in Vault         shell session      curl            header  X Vault Token                  request POST            data    type    azure             https   127 0 0 1 8200 v1 sys auth azure         1  Configure the Azure auth method         shell session      curl            header  X Vault Token                  request POST            data    tenant id           resource                    https   127 0 0 1 8200 v1 auth azure config         1  Create a role         shell session      curl            header  X Vault Token                  request POST            data    policies     dev    prod                   https   127 0 0 1 8200 v1 auth azure role dev role            Azure managed identities  There are two types of  managed identities  https   learn microsoft com en us azure active directory managed identities azure resources overview managed identity types  in Azure  System assigned and User assigned  System assigned identities are unique to every virtual machine in Azure  If the resources using Azure auth are recreated frequently  using system assigned identities could result in many Vault entities being created  For environments with high ephemeral workloads  user assigned 
identities are recommended        Limitations  The TTL of the access token returned by Azure AD for a managed identity is 24hrs and is not configurable  See   limitations of using managed identities  id limitations   for more info    id limitations   https   learn microsoft com en us azure active directory managed identities azure resources managed identity best practice recommendations limitation of using managed identities for authorization     Azure debug logs  The Azure auth plugin supports debug logging which includes additional information about requests and responses from the Azure API   To enable the Azure debug logs  set the following environment variable on the Vault server      shell AZURE SDK GO LOGGING all         Plugin Workload Identity Federation  WIF    EnterpriseAlert product  vault      The Azure auth method supports the plugin WIF workflow  and has a source of identity called a plugin identity token  A plugin identity token is a JWT that is signed internally by Vault s  plugin identity token issuer   vault api docs secret identity tokens read plugin workload identity issuer s openid configuration    If there is a trust relationship configured between Vault and Azure through  workload identity federation  https   learn microsoft com en us entra workload id workload identity federation   the auth method can exchange its identity token for short lived access tokens needed to perform its actions   Exchanging identity tokens for access tokens lets the Azure auth method operate without configuring explicit access to sensitive client credentials   To configure the auth method to use plugin WIF   1  Ensure that Vault  openid configuration   vault api docs secret identity tokens read plugin identity token issuer s openid configuration     and  public JWKS   vault api docs secret identity tokens read plugin identity token issuer s public jwks     APIs are network reachable by Azure  We recommend using an API proxy or gateway    if you need to limit Vault 
API exposure   1  Configure a     federated identity credential  https   learn microsoft com en us entra workload id workload identity federation create trust pivots identity wif apps methods azp configure a federated identity credential on an app     on a dedicated application registration in Azure to establish a trust relationship with Vault     1  The issuer URL   must   point at your  Vault plugin identity token issuer   vault api docs secret identity tokens read plugin workload identity issuer s openid configuration  with the       well known openid configuration  suffix removed  For example      https   host port v1 identity oidc plugins      1  The subject identifier   must   match the unique  sub  claim issued by plugin identity tokens     The subject identifier should have the form  plugin identity  NAMESPACE  auth  AZURE MOUNT ACCESSOR       1  The audience should be under 600 characters  The default value in Azure is  api   AzureADTokenExchange    1  Configure the Azure auth method with the client and tenant IDs and the OIDC audience value         shell session      vault write azure config        tenant id 7cd1f227 ca67 4fc6 a1a4 9888ea7f388c        client id dd794de4 4c6c 40b3 a930 d84cd32e9699        identity token audience vault example v1 identity oidc plugins         Your auth method can now use plugin WIF for its configuration credentials  By default  WIF  credentials  https   learn microsoft com en us entra identity platform access tokens token lifetime  have a time to live of 1 hour and automatically refresh when they expire   Please see the  API documentation   vault api docs auth azure configure  for more details on the fields associated with plugin WIF      API  The Azure Auth Plugin has a full HTTP API  Please see the  API documentation   vault api docs auth azure  for more details      Code example  The following example demonstrates the Azure auth method to authenticate with Vault    CodeTabs    CodeBlockConfig      go package main  import 
    context    fmt    vault  github com hashicorp vault api   auth  github com hashicorp vault api auth azure        Fetches a key value secret  kv v2  after authenticating to Vault via Azure authentication     This example assumes you have a configured Azure AD Application  func getSecretWithAzureAuth    string  error     config    vault DefaultConfig      modify for more granular configuration   client  err    vault NewClient config   if err    nil     return     fmt Errorf  unable to initialize Vault client   w   err       azureAuth  err    auth NewAzureAuth     dev role azure       if err    nil     return     fmt Errorf  unable to initialize Azure auth method   w   err       authInfo  err    client Auth   Login context Background    azureAuth   if err    nil     return     fmt Errorf  unable to login to Azure auth method   w   err      if authInfo    nil     return     fmt Errorf  no auth info was returned after login           get secret from the default mount path for KV v2 in dev mode   secret   secret  err    client KVv2  secret   Get context Background     creds    if err    nil     return     fmt Errorf  unable to read secret   w   err          data map can contain more than one key value pair      in this case we re just grabbing one of them  value  ok    secret Data  password    string   if  ok     return     fmt Errorf  value type assertion failed   T   v   secret Data  password    secret Data  password         return value  nil          CodeBlockConfig    CodeBlockConfig      cs using System  using System Collections Generic  using System IO  using System Net  using System Net Http  using System Text  using Newtonsoft Json  using VaultSharp  using VaultSharp V1 AuthMethods  using VaultSharp V1 AuthMethods Azure  using VaultSharp V1 Commons   namespace Examples       public class AzureAuthExample               public class InstanceMetadata                       public string name   get  set                public string resourceGroupName   get  set     
           public string subscriptionId   get  set                       const string MetadataEndPoint    http   169 254 169 254 metadata instance api version 2017 08 01           const string AccessTokenEndPoint    http   169 254 169 254 metadata identity oauth2 token api version 2018 02 01 resource https   management azure com                  summary              Fetches a key value secret  kv v2  after authenticating to Vault via Azure authentication              This example assumes you have a configured Azure AD Application                summary          public string GetSecretWithAzureAuth                         string vaultAddr   Environment GetEnvironmentVariable  VAULT ADDR                if String IsNullOrEmpty vaultAddr                                 throw new System ArgumentNullException  Vault Address                               string roleName   Environment GetEnvironmentVariable  VAULT ROLE                if String IsNullOrEmpty roleName                                 throw new System ArgumentNullException  Vault Role Name                               string jwt   GetJWT                InstanceMetadata metadata   GetMetadata                 IAuthMethodInfo authMethod   new AzureAuthMethodInfo roleName  roleName  jwt  jwt  subscriptionId  metadata subscriptionId  resourceGroupName  metadata resourceGroupName  virtualMachineName  metadata name               var vaultClientSettings   new VaultClientSettings vaultAddr  authMethod                IVaultClient vaultClient   new VaultClient vaultClientSettings                   We can retrieve the secret from the VaultClient object             Secret SecretData  kv2Secret   null              kv2Secret   vaultClient V1 Secrets KeyValue V2 ReadSecretAsync path    creds   Result               var password   kv2Secret Data Data  password                 return password ToString                            summary              Query Azure Resource Manage for metadata about the Azure instance               
summary          private InstanceMetadata GetMetadata                         HttpWebRequest metadataRequest    HttpWebRequest WebRequest Create MetadataEndPoint               metadataRequest Headers  Metadata      true               metadataRequest Method    GET                HttpWebResponse metadataResponse    HttpWebResponse metadataRequest GetResponse                 StreamReader streamResponse   new StreamReader metadataResponse GetResponseStream                 string stringResponse   streamResponse ReadToEnd                var resultsDict   JsonConvert DeserializeObject Dictionary string  InstanceMetadata   stringResponse                return resultsDict  compute                            summary              Query Azure Resource Manager  ARM  for an access token               summary          private string GetJWT                         HttpWebRequest request    HttpWebRequest WebRequest Create AccessTokenEndPoint               request Headers  Metadata      true               request Method    GET                HttpWebResponse response    HttpWebResponse request GetResponse                    Pipe response Stream to a StreamReader and extract access token             StreamReader streamResponse   new StreamReader response GetResponseStream                 string stringResponse   streamResponse ReadToEnd                var resultsDict   JsonConvert DeserializeObject Dictionary string  string   stringResponse                return resultsDict  access token                             CodeBlockConfig     CodeTabs "}
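The Azure record above logs in by POSTing a role, a JWT from instance metadata, and the instance's subscription/resource-group/VM identifiers to `auth/azure/login`. As a minimal sketch of that request shape — the helper name and defaults are invented for illustration and the field names simply mirror the `vault write auth/azure/login` parameters documented above — the payload assembly can be expressed in Python:

```python
import json

# Hypothetical helper (illustration only): builds the URL and JSON body for
# Vault's Azure login endpoint, POST /v1/auth/<mount>/login, using the same
# parameters as the `vault write auth/azure/login` example in the record above.
def azure_login_request(vault_addr, role, jwt, subscription_id,
                        resource_group_name, vm_name, mount="azure"):
    url = f"{vault_addr}/v1/auth/{mount}/login"
    body = json.dumps({
        "role": role,                              # required
        "jwt": jwt,                                # required; from the IMDS token endpoint
        "subscription_id": subscription_id,        # from instance metadata
        "resource_group_name": resource_group_name,
        "vm_name": vm_name,
    })
    return url, body

url, body = azure_login_request(
    "http://127.0.0.1:8200", "dev-role", "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
    "12345", "test-group", "test-vm")
```

In practice the `jwt`, `subscription_id`, `resource_group_name`, and `vm_name` values come from the instance metadata endpoints at `169.254.169.254`, exactly as the `curl`/`jq` pipeline in the record shows.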
{"questions":"vault The Okta auth method allows users to authenticate with Vault using Okta credentials Okta auth method layout docs page title Okta Auth Methods","answers":"---\nlayout: docs\npage_title: Okta - Auth Methods\ndescription: |-\n  The Okta auth method allows users to authenticate with Vault using Okta\n  credentials.\n---\n\n# Okta auth method\n\nThe `okta` auth method allows authentication using Okta and user\/password\ncredentials. This allows Vault to be integrated into environments using Okta.\n\nThe mapping of groups in Okta to Vault policies is managed by using the\n[users](\/vault\/api-docs\/auth\/okta#register-user) and [groups](\/vault\/api-docs\/auth\/okta#register-group)\nAPIs.\n\n## Authentication\n\n### Via the CLI\n\nThe default path is `\/okta`. If this auth method was enabled at a different\npath, specify `-path=\/my-path` in the CLI.\n\n```shell-session\n$ vault login -method=okta username=my-username\n```\n\n### Via the API\n\nThe default endpoint is `auth\/okta\/login`. If this auth method was enabled\nat a different path, use that value instead of `okta`.\n\n```shell-session\n$ curl \\\n    --request POST \\\n    --data '{\"password\": \"MY_PASSWORD\"}' \\\n    http:\/\/127.0.0.1:8200\/v1\/auth\/okta\/login\/my-username\n```\n\nThe response will contain a token at `auth.client_token`:\n\n```json\n{\n  \"auth\": {\n    \"client_token\": \"abcd1234-7890...\",\n    \"policies\": [\"admins\"],\n    \"metadata\": {\n      \"username\": \"mitchellh\"\n    }\n  }\n}\n```\n\n### MFA\n\nOkta Verify Push and TOTP MFA methods, and Google TOTP are supported during login. 
For TOTP, the current\npasscode may be provided via the `totp` parameter:\n\n```shell-session\n$ vault login -method=okta username=my-username totp=123456\n```\n\nIf both Okta TOTP and Google TOTP are enabled in your Okta account, make sure to pass in\nthe `provider` name to which the `totp` code belongs.\n\n```shell-session\n$ vault login -method=okta username=my-username totp=123456 provider=GOOGLE\n```\n\nIf `totp` is not set and MFA Push is configured in Okta, a Push will be sent during login.\n\nThe auth method uses the Okta [Authentication API](https:\/\/developer.okta.com\/docs\/reference\/api\/authn\/).\nIt does not manage Okta [sessions](https:\/\/developer.okta.com\/docs\/reference\/api\/sessions\/) for authenticated\nusers. This means that if MFA Push is configured, it will be required during both login and token renewal.\n\nNote that this MFA support is integrated with Okta Auth and is limited strictly to login\noperations. It is not related to [Enterprise MFA](\/vault\/docs\/enterprise\/mfa).\n\n## Configuration\n\nAuth methods must be configured in advance before users or machines can\nauthenticate. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n### Via the CLI\n\n1. Enable the Okta auth method:\n\n  ```shell-session\n  $ vault auth enable okta\n  ```\n\n1. Configure Vault to communicate with your Okta account:\n\n  ```shell-session\n  $ vault write auth\/okta\/config \\\n     base_url=\"okta.com\" \\\n     org_name=\"dev-123456\" \\\n     api_token=\"00abcxyz...\"\n  ```\n\n  -> **Note**: Support for okta auth with no API token is deprecated in Vault 1.4.\n  If no token is supplied, Vault will function, but only locally configured\n  group membership will be available. Without a token, groups will not be\n  queried.\n\n  For the complete list of configuration options, please see the\n  [API documentation](\/vault\/api-docs\/auth\/okta).\n\n1. 
Map an Okta group to a Vault policy:\n\n  ```shell-session\n  $ vault write auth\/okta\/groups\/scientists policies=nuclear-reactor\n  ```\n\n  In this example, anyone who successfully authenticates via Okta who is a\n  member of the \"scientists\" group will receive a Vault token with the\n  \"nuclear-reactor\" policy attached.\n\n1. It is also possible to add users directly:\n\n  ```shell-session\n  $ vault write auth\/okta\/groups\/engineers policies=autopilot\n  $ vault write auth\/okta\/users\/tesla groups=engineers\n  ```\n\n  This adds the Okta user \"tesla\" to the \"engineers\" group, which maps to\n  the \"autopilot\" Vault policy.\n\n  -> **Note**: The user-policy mapping via group membership happens at token _creation\n  time_. Any changes in group membership in Okta will not affect existing\n  tokens that have already been provisioned. To see these changes, users\n  will need to re-authenticate. You can force this by revoking the\n  existing tokens.\n\n### Okta API token permissions\n\nThe `okta` auth method uses the [Authentication](https:\/\/developer.okta.com\/docs\/reference\/api\/authn\/)\nand [User Groups](https:\/\/developer.okta.com\/docs\/reference\/api\/users\/#get-user-s-groups)\nAPIs to authenticate users and obtain their group membership. The [`api_token`](\/vault\/api-docs\/auth\/okta#api_token)\nprovided to the auth method's configuration must have sufficient privileges to exercise\nthese Okta APIs.\n\nIt is recommended to configure the auth method with a minimally permissive API token.\nTo do so, create the API token using an administrator with the standard\n[Read-only Admin](https:\/\/help.okta.com\/en\/prod\/Content\/Topics\/Security\/administrators-read-only-admin.htm)\nrole. Custom roles may also be used to grant minimal permissions to the Okta API token.\n\n## API\n\nThe Okta auth method has a full HTTP API. 
Please see the\n[Okta Auth API](\/vault\/api-docs\/auth\/okta) for more details.","site":"vault"}
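The Okta record above authenticates by POSTing a password (plus optional `totp` and `provider` values) to `auth/okta/login/<username>`. A minimal sketch of that request shape — the helper name is invented for illustration; only the endpoint path and parameter names come from the documentation above:

```python
import json

# Hypothetical helper (illustration only): builds the request for the Okta
# login endpoint documented above: POST /v1/auth/<mount>/login/<username>.
def okta_login_request(vault_addr, username, password, totp=None,
                       provider=None, mount="okta"):
    url = f"{vault_addr}/v1/auth/{mount}/login/{username}"
    payload = {"password": password}
    if totp is not None:
        payload["totp"] = totp          # current TOTP passcode
    if provider is not None:
        payload["provider"] = provider  # e.g. "GOOGLE" when both TOTP types are enabled
    return url, json.dumps(payload)

url, body = okta_login_request("http://127.0.0.1:8200", "my-username",
                               "MY_PASSWORD", totp="123456", provider="GOOGLE")
```

On success, the response JSON carries the Vault token at `auth.client_token`, as shown in the record above.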
{"questions":"vault credentials LDAP auth method layout docs page title LDAP Auth Methods The ldap auth method allows users to authenticate with Vault using LDAP","answers":"---\nlayout: docs\npage_title: LDAP - Auth Methods\ndescription: |-\n  The \"ldap\" auth method allows users to authenticate with Vault using LDAP\n  credentials.\n---\n\n# LDAP auth method\n\n@include 'x509-sha1-deprecation.mdx'\n\nThe `ldap` auth method allows authentication using an existing LDAP\nserver and user\/password credentials. This allows Vault to be integrated\ninto environments using LDAP without duplicating the user\/pass configuration\nin multiple places.\n\nThe mapping of groups and users in LDAP to Vault policies is managed by using\nthe `users\/` and `groups\/` paths.\n\n## A note on escaping\n\n**It is up to the administrator** to provide properly escaped DNs. This\nincludes the user DN, bind DN for search, and so on.\n\nThe only DN escaping performed by this method is on usernames given at login\ntime when they are inserted into the final bind DN, and uses escaping rules\ndefined in RFC 4514.\n\nAdditionally, Active Directory has escaping rules that differ slightly from the\nRFC; in particular it requires escaping of '#' regardless of position in the DN\n(the RFC only requires it to be escaped when it is the first character), and\n'=', which the RFC indicates can be escaped with a backslash, but does not\ncontain in its set of required escapes. 
If you are using Active Directory and\nthese appear in your usernames, please ensure that they are escaped, in\naddition to being properly escaped in your configured DNs.\n\nFor reference, see [RFC 4514](https:\/\/www.ietf.org\/rfc\/rfc4514.txt) and this\n[TechNet post on characters to escape in Active\nDirectory](http:\/\/social.technet.microsoft.com\/wiki\/contents\/articles\/5312.active-directory-characters-to-escape.aspx).\n\n## Authentication\n\n### Via the CLI\n\n```shell-session\n$ vault login -method=ldap username=mitchellh\nPassword (will be hidden):\nSuccessfully authenticated! The policies that are associated\nwith this token are listed below:\n\nadmins\n```\n\n### Via the API\n\n```shell-session\n$ curl \\\n    --request POST \\\n    --data '{\"password\": \"foo\"}' \\\n    http:\/\/127.0.0.1:8200\/v1\/auth\/ldap\/login\/mitchellh\n```\n\nThe response will be in JSON. For example:\n\n```javascript\n{\n  \"lease_id\": \"\",\n  \"renewable\": false,\n  \"lease_duration\": 0,\n  \"data\": null,\n  \"auth\": {\n    \"client_token\": \"c4f280f6-fdb2-18eb-89d3-589e2e834cdb\",\n    \"policies\": [\n      \"admins\"\n    ],\n    \"metadata\": {\n      \"username\": \"mitchellh\"\n    },\n    \"lease_duration\": 0,\n    \"renewable\": false\n  }\n}\n```\n\n## Configuration\n\nAuth methods must be configured in advance before users or machines can\nauthenticate. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the ldap auth method:\n\n   ```text\n   $ vault auth enable ldap\n   ```\n\n1. Configure connection details for your LDAP server, information on how to\n   authenticate users, and instructions on how to query for group membership. The\n   configuration options are categorized and detailed below.\n\n### Connection parameters\n\n- `url` (string, required) - The LDAP server to connect to. Examples: `ldap:\/\/ldap.myorg.com`, `ldaps:\/\/ldap.myorg.com:636`. This can also be a comma-delineated list of URLs, e.g. 
`ldap:\/\/ldap.myorg.com,ldaps:\/\/ldap.myorg.com:636`, in which case the servers will be tried in-order if there are errors during the connection process.\n- `starttls` (bool, optional) - If true, issues a `StartTLS` command after establishing an unencrypted connection.\n- `insecure_tls` - (bool, optional) - If true, skips LDAP server SSL certificate verification - insecure, use with caution!\n- `certificate` - (string, optional) - CA certificate to use when verifying LDAP server certificate, must be x509 PEM encoded.\n- `client_tls_cert` - (string, optional) - Client certificate to provide to the LDAP server, must be x509 PEM encoded.\n- `client_tls_key` - (string, optional) - Client certificate key to provide to the LDAP server, must be x509 PEM encoded.\n\n### Binding parameters\n\nThere are two alternate methods of resolving the user object used to authenticate the end user: _Search_ or _User Principal Name_. When using _Search_, the bind can be either anonymous or authenticated. User Principal Name is a method of specifying users supported by Active Directory. More information on UPN can be found [here](<https:\/\/msdn.microsoft.com\/en-us\/library\/ms677605(v=vs.85).aspx#userPrincipalName>).\n\n`userfilter` works with both authenticated and anonymous _Search_.\nIn order for `userfilter` to apply for authenticated searches, `binddn` and `bindpass` must be set.\nFor anonymous search, `discoverdn` must be set to `true`, and `deny_null_bind` must be set to false.\n\n#### Binding - authenticated search\n\n- `binddn` (string, optional) - Distinguished name of object to bind when performing user and group search. Example: `cn=vault,ou=Users,dc=example,dc=com`\n- `bindpass` (string, optional) - Password to use along with `binddn` when performing user search.\n- `userdn` (string, optional) - Base DN under which to perform user search. 
Example: `ou=Users,dc=example,dc=com`\n- `userattr` (string, optional) - Attribute on user attribute object matching the username passed when authenticating. Examples: `sAMAccountName`, `cn`, `uid`\n- `userfilter` (string, optional) - Go template used to construct an LDAP user search filter. The template can access the following context variables: \\[`UserAttr`, `Username`\\]. The default userfilter is `({{.UserAttr}}={{.Username}})` or `(userPrincipalName={{.Username}}@UPNDomain)` if the `upndomain` parameter is set. The user search filter can be used to restrict which users can attempt to log in. For example, to limit login to users that are not contractors, you could write `(&(objectClass=user)({{.UserAttr}}={{.Username}})(!(employeeType=Contractor)))`.\n\n@include 'ldap-auth-userfilter-warning.mdx'\n\n#### Binding - anonymous search\n\n- `discoverdn` (bool, optional) - If true, use anonymous bind to discover the bind DN of a user.\n- `userdn` (string, optional) - Base DN under which to perform user search. Example: `ou=Users,dc=example,dc=com`\n- `userattr` (string, optional) - Attribute on user attribute object matching the username passed when authenticating. Examples: `sAMAccountName`, `cn`, `uid`\n- `userfilter` (string, optional) - Go template used to construct an LDAP user search filter. The template can access the following context variables: \\[`UserAttr`, `Username`\\]. The default userfilter is `({{.UserAttr}}={{.Username}})` or `(userPrincipalName={{.Username}}@UPNDomain)` if the `upndomain` parameter is set. The user search filter can be used to restrict which users can attempt to log in. For example, to limit login to users that are not contractors, you could write `(&(objectClass=user)({{.UserAttr}}={{.Username}})(!(employeeType=Contractor)))`.\n- `deny_null_bind` (bool, optional) - This option prevents users from bypassing authentication when providing an empty password. The default is `true`.\n- `anonymous_group_search` (bool, optional) - Use anonymous binds when performing LDAP group searches. Defaults to `false`.\n\n@include 'ldap-auth-userfilter-warning.mdx'\n\n#### Alias dereferencing\n\n- `dereference_aliases` (string, optional) - Control how aliases are dereferenced when performing the search. Possible values are: `never`, `finding`, `searching`, and `always`. `finding` will only dereference aliases during name resolution of the base. `searching` will dereference aliases after name resolution.\n\n#### Binding - user principal name (AD)\n\n- `upndomain` (string, optional) - userPrincipalDomain used to construct the UPN string for the authenticating user. The constructed UPN will appear as `[username]@UPNDomain`. Example: `example.com`, which will cause vault to bind as `username@example.com`.\n\n### Group membership resolution\n\nOnce a user has been authenticated, the LDAP auth method must know how to resolve which groups the user is a member of. The configuration for this can vary depending on your LDAP server and your directory schema. There are two main strategies when resolving group membership - the first is searching for the authenticated user object and following an attribute to groups it is a member of. The second is to search for group objects of which the authenticated user is a member. Both methods are supported.\n\n- `groupfilter` (string, optional) - Go template used when constructing the group membership query. The template can access the following context variables: \\[`UserDN`, `Username`\\]. The default is `(|(memberUid={{.Username}})(member={{.UserDN}})(uniqueMember={{.UserDN}}))`, which is compatible with several common directory schemas. To support nested group resolution for Active Directory, instead use the following query: `(&(objectClass=group)(member:1.2.840.113556.1.4.1941:={{.UserDN}}))`.\n- `groupdn` (string, required) - LDAP search base to use for group membership search. This can be the root containing either groups or users. 
Example: `ou=Groups,dc=example,dc=com`\n- `groupattr` (string, optional) - LDAP attribute to follow on objects returned by `groupfilter` in order to enumerate user group membership. Examples: for groupfilter queries returning _group_ objects, use: `cn`. For queries returning _user_ objects, use: `memberOf`. The default is `cn`.\n\n_Note_: When using _Authenticated Search_ for binding parameters (see above) the distinguished name defined for `binddn` is used for the group search. Otherwise, the authenticating user is used to perform the group search.\n\nUse `vault path-help` for more details.\n\n### Other\n\n- `username_as_alias` (bool, optional) - If set to true, forces the auth method to use the username passed by the user as the alias name.\n- `max_page_size` (int, optional) - If set to a value greater than 0, the LDAP backend will use the LDAP server's paged search control to request pages of up to the given size. This can be used to avoid hitting the LDAP server's maximum result size limit. Otherwise, the LDAP backend will not use the paged search control.\n\n## Examples:\n\n### Scenario 1\n\n- LDAP server running on `ldap.example.com`, port 389.\n- Server supports `STARTTLS` command to initiate encryption on the standard port.\n- CA Certificate stored in file named `ldap_ca_cert.pem`\n- Server is Active Directory supporting the userPrincipalName attribute. Users are identified as `username@example.com`.\n- Groups are nested, we will use `LDAP_MATCHING_RULE_IN_CHAIN` to walk the ancestry graph.\n- Group search will start under `ou=Groups,dc=example,dc=com`. 
For all group objects under that path, the `member` attribute will be checked for a match against the authenticated user.\n- Group names are identified using their `cn` attribute.\n\n```shell-session\n$ vault write auth\/ldap\/config \\\n    url=\"ldap:\/\/ldap.example.com\" \\\n    userdn=\"ou=Users,dc=example,dc=com\" \\\n    groupdn=\"ou=Groups,dc=example,dc=com\" \\\n    groupfilter=\"(&(objectClass=group)(member:1.2.840.113556.1.4.1941:={{.UserDN}}))\" \\\n    groupattr=\"cn\" \\\n    upndomain=\"example.com\" \\\n    certificate=@ldap_ca_cert.pem \\\n    insecure_tls=false \\\n    starttls=true\n...\n```\n\n### Scenario 2\n\n- LDAP server running on `ldap.example.com`, port 389.\n- Server supports `STARTTLS` command to initiate encryption on the standard port.\n- CA Certificate stored in file named `ldap_ca_cert.pem`\n- Server does not allow anonymous binds for performing user search.\n- Bind account used for searching is `cn=vault,ou=users,dc=example,dc=com` with password `My$ecrt3tP4ss`.\n- User objects are under the `ou=Users,dc=example,dc=com` organizational unit.\n- Username passed to vault when authenticating maps to the `sAMAccountName` attribute.\n- Group membership will be resolved via the `memberOf` attribute of _user_ objects. 
That search will begin under `ou=Users,dc=example,dc=com`.\n\n```shell-session\n$ vault write auth\/ldap\/config \\\n    url=\"ldap:\/\/ldap.example.com\" \\\n    userattr=sAMAccountName \\\n    userdn=\"ou=Users,dc=example,dc=com\" \\\n    groupdn=\"ou=Users,dc=example,dc=com\" \\\n    groupfilter=\"(&(objectClass=person)(uid={{.Username}}))\" \\\n    groupattr=\"memberOf\" \\\n    binddn=\"cn=vault,ou=users,dc=example,dc=com\" \\\n    bindpass='My$ecrt3tP4ss' \\\n    certificate=@ldap_ca_cert.pem \\\n    insecure_tls=false \\\n    starttls=true\n...\n```\n\n### Scenario 3\n\n- LDAP server running on `ldap.example.com`, port 636 (LDAPS)\n- CA Certificate stored in file named `ldap_ca_cert.pem`\n- User objects are under the `ou=Users,dc=example,dc=com` organizational unit.\n- Username passed to vault when authenticating maps to the `uid` attribute.\n- User bind DN will be auto-discovered using anonymous binding.\n- Group membership will be resolved via any one of `memberUid`, `member`, or `uniqueMember` attributes. That search will begin under `ou=Groups,dc=example,dc=com`.\n- Group names are identified using the `cn` attribute.\n\n```shell-session\n$ vault write auth\/ldap\/config \\\n    url=\"ldaps:\/\/ldap.example.com\" \\\n    userattr=\"uid\" \\\n    userdn=\"ou=Users,dc=example,dc=com\" \\\n    discoverdn=true \\\n    groupdn=\"ou=Groups,dc=example,dc=com\" \\\n    certificate=@ldap_ca_cert.pem \\\n    insecure_tls=false \\\n    starttls=true\n...\n```\n\n## LDAP group -> policy mapping\n\nNext we want to create a mapping from an LDAP group to a Vault policy:\n\n```shell-session\n$ vault write auth\/ldap\/groups\/scientists policies=foo,bar\n```\n\nThis maps the LDAP group \"scientists\" to the \"foo\" and \"bar\" Vault policies.\nWe can also add specific LDAP users to additional (potentially non-LDAP)\ngroups. 
Note that policies can also be specified on LDAP users.\n\n```shell-session\n$ vault write auth\/ldap\/groups\/engineers policies=foobar\n$ vault write auth\/ldap\/users\/tesla groups=engineers policies=zoobar\n```\n\nThis adds the LDAP user \"tesla\" to the \"engineers\" group, which maps to\nthe \"foobar\" Vault policy. User \"tesla\" itself is associated with the\n\"zoobar\" policy.\n\nFinally, we can test this by authenticating:\n\n```shell-session\n$ vault login -method=ldap username=tesla\nPassword (will be hidden):\nSuccessfully authenticated! The policies that are associated\nwith this token are listed below:\n\ndefault, foobar, zoobar\n```\n\n## Note on policy mapping\n\nUser -> policy mapping happens at token creation time. Changes in group membership on the LDAP server will not affect tokens that have already been provisioned. To see these changes, old tokens should be revoked and the user should be asked to reauthenticate.\n\n## User lockout\n\n@include 'user-lockout.mdx'\n\n## API\n\nThe LDAP auth method has a full HTTP API. 
Please see the\n[LDAP auth method API](\/vault\/api-docs\/auth\/ldap) for more\ndetails.","site":"vault"}
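The `userfilter` and `groupfilter` parameters described above are Go templates rendered against the listed context variables (`UserAttr`, `Username`, `UserDN`). A rough Python emulation of that substitution, shown purely to illustrate what the final LDAP search filter looks like (Vault's real rendering uses Go's text/template):

```python
def render_filter(template, **context):
    # Emulate Go template substitution ({{.Key}} -> value) for illustration only.
    for key, value in context.items():
        template = template.replace("{{." + key + "}}", value)
    return template

# Default user filter with userattr=uid and username=tesla:
print(render_filter("({{.UserAttr}}={{.Username}})",
                    UserAttr="uid", Username="tesla"))  # (uid=tesla)

# Default group filter, matching any of memberUid/member/uniqueMember:
print(render_filter(
    "(|(memberUid={{.Username}})(member={{.UserDN}})(uniqueMember={{.UserDN}}))",
    Username="tesla",
    UserDN="cn=tesla,ou=Users,dc=example,dc=com"))
```

Seeing the rendered filter makes it easier to test the same query directly with `ldapsearch` when debugging a configuration.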
{"questions":"vault Use JWT OIDC authentication with Vault to support OIDC and user provided JWTs page title Use JWT OIDC authentication layout docs include x509 sha1 deprecation mdx Use JWT OIDC authentication","answers":"---\nlayout: docs\npage_title: Use JWT\/OIDC authentication\ndescription: >-\n  Use JWT\/OIDC authentication with Vault to support OIDC and user-provided JWTs.\n---\n\n# Use JWT\/OIDC authentication\n\n@include 'x509-sha1-deprecation.mdx'\n\n~> **Note**: Starting in Vault 1.17, if the JWT in the authentication request\ncontains an `aud` claim, the associated `bound_audiences` for the \"jwt\" role\nmust match at least one of the `aud` claims declared for the JWT. For\nadditional details, refer to the [JWT auth method (API)](\/vault\/api-docs\/auth\/jwt)\ndocumentation and [1.17 Upgrade Guide](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#jwt-auth-login-requires-bound-audiences-on-the-role).\n\nThe `jwt` auth method can be used to authenticate with Vault using\n[OIDC](https:\/\/en.wikipedia.org\/wiki\/OpenID_Connect) or by providing a\n[JWT](https:\/\/en.wikipedia.org\/wiki\/JSON_Web_Token).\n\nThe OIDC method allows authentication via a configured OIDC provider using the\nuser's web browser. This method may be initiated from the Vault UI or the\ncommand line. Alternatively, a JWT can be provided directly. The JWT is\ncryptographically verified using locally-provided keys, or, if configured, an\nOIDC Discovery service can be used to fetch the appropriate keys. The choice of\nmethod is configured per role.\n\nBoth methods allow additional processing of the claims data in the JWT. Some of\nthe concepts common to both methods will be covered first, followed by specific\nexamples of OIDC and JWT usage.\n\n## OIDC authentication\n\nThis section covers the setup and use of OIDC roles. If a JWT is to be provided directly,\nrefer to the [JWT Authentication](\/vault\/docs\/auth\/jwt#jwt-authentication) section below. 
Basic\nfamiliarity with [OIDC concepts](https:\/\/developer.okta.com\/blog\/2017\/07\/25\/oidc-primer-part-1)\nis assumed. The Authorization Code flow makes use of the Proof Key for Code\nExchange (PKCE) extension.\n\nVault includes two built-in OIDC login flows: the Vault UI, and the CLI\nusing a `vault login`.\n\n### Redirect URIs\n\nAn important part of OIDC role configuration is properly setting redirect URIs. This must be\ndone both in Vault and with the OIDC provider, and these configurations must align. The\nredirect URIs are specified for a role with the `allowed_redirect_uris` parameter. There are\ndifferent redirect URIs to configure the Vault UI and CLI flows, so one or both will need to\nbe set up depending on the installation.\n\n**CLI**\n\nIf you plan to support authentication via `vault login -method=oidc`, a localhost redirect URI\nmust be set. This can usually be: `http:\/\/localhost:8250\/oidc\/callback`. Logins via the CLI may\nspecify a different host and\/or listening port if needed, and a URI with this host\/port must match one\nof the configured redirected URIs. These same \"localhost\" URIs must be added to the provider as well.\n\n**Vault UI**\n\nLogging in via the Vault UI requires a redirect URI of the form:\n\n`https:\/\/{host:port}\/ui\/vault\/auth\/{path}\/oidc\/callback`\n\nThe \"host:port\" must be correct for the Vault server, and \"path\" must match the path the JWT\nbackend is mounted at (e.g. 
\"oidc\" or \"jwt\").\n\nIf the [oidc_response_mode](\/vault\/api-docs\/auth\/jwt#oidc_response_mode) is set to `form_post`, then\nlogging in via the Vault UI requires a redirect URI of the form:\n\n`https:\/\/{host:port}\/v1\/auth\/{path}\/oidc\/callback`\n\nPrior to Vault 1.6, if [namespaces](\/vault\/docs\/enterprise\/namespaces) are in use,\nthey must be added as query parameters, for example:\n\n`https:\/\/vault.example.com:8200\/ui\/vault\/auth\/oidc\/oidc\/callback?namespace=my_ns`\n\nFor Vault 1.6+, it is no longer necessary to add the namespace as a query\nparameter in the redirect URI, if\n[`namespace_in_state`](\/vault\/api-docs\/auth\/jwt#namespace_in_state) is set to `true`,\nwhich is the default for new configs.\n\n### OIDC login (Vault UI)\n\n1. Select the \"OIDC\" login method.\n1. Enter a role name if necessary.\n1. Press \"Sign In\" and complete the authentication with the configured provider.\n\n### OIDC login (CLI)\n\nThe CLI login defaults to path of `\/oidc`. If this auth method was enabled at a\ndifferent path, specify `-path=\/my-path` in the CLI.\n\n```shell-session\n$ vault login -method=oidc port=8400 role=test\n\nComplete the login via your OIDC provider. Launching browser to:\n\n    https:\/\/myco.auth0.com\/authorize?redirect_uri=http%3A%2F%2Flocalhost%3A8400%2Foidc%2Fcallback&client_id=r3qXc2bix9eF...\n```\n\nThe browser will open to the generated URL to complete the provider's login. The\nURL may be entered manually if the browser cannot be automatically opened.\n\n- `skip_browser` (default: \"false\"). Toggle the automatic launching of the default browser to the login URL.\n\nThe callback listener may be customized with the following optional parameters. These are typically\nnot required to be set:\n\n- `mount` (default: \"oidc\")\n- `listenaddress` (default: \"localhost\")\n- `port` (default: 8250)\n- `callbackhost` (default: \"localhost\")\n- `callbackmethod` (default: \"http\")\n- `callbackport` (default: value set for `port`). 
This value is used in the `redirect_uri`, whereas\n  `port` is the localhost port that the listener is using. These two may be different in advanced setups.\n\n### OIDC provider configuration\n\nThe OIDC authentication flow has been successfully tested with a number of providers. A full\nguide to configuring OAuth\/OIDC applications is beyond the scope of Vault documentation, but a\ncollection of provider configuration steps has been assembled to help get started:\n[OIDC Provider Setup](\/vault\/docs\/auth\/jwt\/oidc-providers)\n\n### OIDC configuration troubleshooting\n\nThe amount of configuration required for OIDC is relatively small, but it can be tricky to debug\nwhy things aren't working. Some tips for setting up OIDC:\n\n- If a role parameter (e.g. `bound_claims`) requires a map value, it can't be set individually using\n  the Vault CLI. In these cases the best approach is to write the entire configuration as a single\n  JSON object:\n\n```text\nvault write auth\/oidc\/role\/demo -<<EOF\n{\n  \"user_claim\": \"sub\",\n  \"bound_audiences\": \"abc123\",\n  \"role_type\": \"oidc\",\n  \"policies\": \"demo\",\n  \"ttl\": \"1h\",\n  \"bound_claims\": { \"groups\": [\"mygroup\/mysubgroup\"] }\n}\nEOF\n```\n\n- Monitor Vault's log output. Important information about OIDC validation failures will be emitted.\n\n- Ensure Redirect URIs are correct in Vault and on the provider. They need to match exactly. Check:\n  http\/https, 127.0.0.1\/localhost, port numbers, whether trailing slashes are present.\n\n- Start simple. The only claim configuration a role requires is `user_claim`. After authentication is\n  known to work, you can add additional claims bindings and metadata copying.\n\n- `bound_audiences` is optional for OIDC roles and typically not required. OIDC providers will use\n  the client_id as the audience and OIDC validation expects this.\n\n- Check your provider for what scopes are required in order to receive all\n  of the information you need. 
The scopes \"profile\" and \"groups\" often need to be\n  requested, and can be added by setting `oidc_scopes=\"profile,groups\"` on the role.\n\n- If you're seeing claim-related errors in logs, review the provider's docs very carefully to see\n  how they're naming and structuring their claims. Depending on the provider, you may be able to\n  construct a simple `curl` implicit grant request to obtain a JWT that you can inspect. An example\n  of how to decode the JWT (in this case located in the \"access_token\" field of a JSON response):\n\n  `cat jwt.json | jq -r .access_token | cut -d. -f2 | base64 -D`\n\n- As of Vault 1.2, the [`verbose_oidc_logging`](\/vault\/api-docs\/auth\/jwt#verbose_oidc_logging) role\n  option is available which will log the received OIDC token to the _server_ logs if debug-level logging is enabled. This can\n  be helpful when debugging provider setup and verifying that the received claims are what you expect.\n  Since claims data is logged verbatim and may contain sensitive information, this option should not be\n  used in production.\n\n- Azure requires some additional configuration when a user is a member of more\n  than 200 groups, described in [Azure-specific handling\n  configuration](\/vault\/docs\/auth\/jwt\/oidc-providers\/azuread#optional-azure-specific-configuration)\n\n## JWT authentication\n\nThe authentication flow for roles of type \"jwt\" is simpler than OIDC since Vault\nonly needs to validate the provided JWT.\n\n### JWT verification\n\nVault verifies JWT signatures against public keys from the issuer. You can\nonly configure one JWT signature verification method per mounted backend from\nthe following options:\n\n- **Static Keys**. A set of public keys is stored directly in the backend configuration. See the\n  [jwt_validation_pubkeys](\/vault\/api-docs\/auth\/jwt#jwt_validation_pubkeys) \n  configuration option.\n\n- **JWKS**. 
A JSON Web Key Set ([JWKS](https:\/\/tools.ietf.org\/html\/rfc7517)) URL and optional\n  certificate chain is configured. Keys will be fetched from this endpoint for authentication.\n  See the [jwks_url](\/vault\/api-docs\/auth\/jwt#jwks_url) and [jwks_ca_pem](\/vault\/api-docs\/auth\/jwt#jwks_ca_pem) \n  configuration options.\n\n- **JWKS Pairs**. A list of JSON Web Key Set ([JWKS](https:\/\/tools.ietf.org\/html\/rfc7517)) URLs and optional\n  certificate chain for each is configured. Keys will be fetched from each endpoint for authentication, \n  stopping at the first set to successfully verify the JWT signature. See the \n  [jwks_pairs](\/vault\/api-docs\/auth\/jwt#jwks_pairs) configuration option.\n\n- **OIDC Discovery**. An OIDC Discovery URL and optional certificate chain is configured. Keys\n  will be fetched from this URL during authentication. When OIDC Discovery is used, OIDC validation\n  criteria (e.g. `iss`, `aud`, etc.) will be applied. See the [oidc_discovery_url](\/vault\/api-docs\/auth\/jwt#oidc_discovery_url)\n  and [oidc_discovery_ca_pem](\/vault\/api-docs\/auth\/jwt#oidc_discovery_ca_pem) configuration \n  options.\n\n\nTo configure additional verification methods, you must mount and configure one\nbackend instance per method at different paths.\n\nAfter verifying the JWT signatures, Vault checks the corresponding `aud` claim.\n\nIf the JWT in the authentication request contains an `aud` claim, the\nassociated `bound_audiences` for the role must match at least one of the `aud`\nclaims declared for the JWT.\n\n### Via the CLI\n\n```shell-session\n$ vault write auth\/<path-to-jwt-backend>\/login role=demo jwt=...\n```\n\nThe default path for the JWT authentication backend is `\/jwt`, so if you're using the default backend, the command would be:\n\n```shell-session\n$ vault write auth\/jwt\/login role=demo jwt=...\n```\n\nIf your JWT auth backend is using a different path, use that path.\n\n### Via the API\n\nThe default endpoint is 
`auth\/jwt\/login`. If this auth method was enabled\nat a different path, use that value instead of `jwt`.\n\n```shell-session\n$ curl \\\n    --request POST \\\n    --data '{\"jwt\": \"your_jwt\", \"role\": \"demo\"}' \\\n    http:\/\/127.0.0.1:8200\/v1\/auth\/jwt\/login\n```\n\nThe response will contain a token at `auth.client_token`:\n\n```json\n{\n  \"auth\": {\n    \"client_token\": \"38fe9691-e623-7238-f618-c94d4e7bc674\",\n    \"accessor\": \"78e87a38-84ed-2692-538f-ca8b9f400ab3\",\n    \"policies\": [\"default\"],\n    \"metadata\": {\n      \"role\": \"demo\"\n    },\n    \"lease_duration\": 2764800,\n    \"renewable\": true\n  }\n}\n```\n\n## Configuration\n\nAuth methods must be configured in advance before users or machines can\nauthenticate. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the JWT auth method. Either the \"jwt\" or \"oidc\" name may be used. The\n   backend will be mounted at the chosen name.\n\n   ```text\n   $ vault auth enable jwt\n     or\n   $ vault auth enable oidc\n   ```\n\n1. Use the `\/config` endpoint to configure Vault. To support JWT roles, either local keys, JWKS URL(s), or an OIDC\n   Discovery URL must be present. For OIDC roles, OIDC Discovery URL, OIDC Client ID and OIDC Client Secret are required. For the\n   list of available configuration options, please see the [API documentation](\/vault\/api-docs\/auth\/jwt).\n\n   ```text\n   $ vault write auth\/jwt\/config \\\n       oidc_discovery_url=\"https:\/\/myco.auth0.com\/\" \\\n       oidc_client_id=\"m5i8bj3iofytj\" \\\n       oidc_client_secret=\"f4ubv72nfiu23hnsj\" \\\n       default_role=\"demo\"\n   ```\n\n   If you need to perform JWT verification with JWT token validation, then leave the `oidc_client_id` and `oidc_client_secret` blank. 
\n\n   ```text\n   $ vault write auth\/jwt\/config \\\n       oidc_discovery_url=\"https:\/\/MYDOMAIN.eu.auth0.com\/\" \\\n       oidc_client_id=\"\" \\\n       oidc_client_secret=\"\"\n   ```\n\n1. Create a named role:\n\n   ```text\n   vault write auth\/jwt\/role\/demo \\\n       allowed_redirect_uris=\"http:\/\/localhost:8250\/oidc\/callback\" \\\n       bound_subject=\"r3qX9DljwFIWhsiqwFiu38209F10atW6@clients\" \\\n       bound_audiences=\"https:\/\/vault.plugin.auth.jwt.test\" \\\n       user_claim=\"https:\/\/vault\/user\" \\\n       groups_claim=\"https:\/\/vault\/groups\" \\\n       policies=webapps \\\n       ttl=1h\n   ```\n\n   This role authorizes JWTs with the given subject and audience claims, gives\n   it the `webapps` policy, and uses the given user\/groups claims to set up\n   Identity aliases.\n\n   For the complete list of configuration options, please see the API\n   documentation.\n\n### Bound claims\n\nOnce a JWT has been validated as being properly signed and not expired, the\nauthorization flow will validate that any configured \"bound\" parameters match.\nIn some cases there are dedicated parameters, for example `bound_subject`,\nthat must match the provided `sub` claim. For roles of type \"jwt\":\n\n1. the `bound_audiences` parameter is required when an `aud` claim is set.\n1. the `bound_audiences` parameter must match at least one of the provided `aud` claims.\n\nYou can also configure roles to check an arbitrary set of claims and required\nvalues with the `bound_claims` map. 
For example, assume `bound_claims` is set to:\n\n```json\n{\n  \"division\": \"Europe\",\n  \"department\": \"Engineering\"\n}\n```\n\nOnly JWTs containing both the \"division\" and \"department\" claims, and\nrespective matching values of \"Europe\" and \"Engineering\", would be authorized.\nIf the expected value is a list, the claim must match one of the items in the list.\nTo limit authorization to a set of email addresses:\n\n```json\n{\n  \"email\": [\"fred@example.com\", \"julie@example.com\"]\n}\n```\n\nBound claims can optionally be configured with globs. See the [API documentation](\/vault\/api-docs\/auth\/jwt#bound_claims_type) for more details.\n\n### Claims as metadata\n\nData from claims can be copied into the resulting auth token and alias metadata by configuring `claim_mappings`. This role\nparameter is a map of items to copy. The map elements are of the form: `\"<JWT claim>\":\"<metadata key>\"`. Assume\n`claim_mappings` is set to:\n\n```json\n{\n  \"division\": \"organization\",\n  \"department\": \"department\"\n}\n```\n\nThis specifies that the value in the JWT claim \"division\" should be copied to the metadata key \"organization\". The JWT\n\"department\" claim value will also be copied into metadata but will retain the key name. If a claim is configured in `claim_mappings`,\nit must exist in the JWT or else the authentication will fail.\n\nNote: the metadata key name \"role\" is reserved and may not be used for claim mappings. Since Vault 1.16 the role name is available\nunder the key `role` in the alias metadata of the entity after a successful login.\n\n### Claim specifications and JSON pointer\n\nSome parameters (e.g. `bound_claims`, `groups_claim`, `claim_mappings`, `user_claim`) are\nused to point to data within the JWT. If the desired key is at the top level of the JWT,\nthe name can be provided directly. 
If it is nested at a lower level, a JSON Pointer may be\nused.\n\nAssume the following JSON data to be referenced:\n\n```json\n{\n  \"division\": \"North America\",\n  \"groups\": {\n    \"primary\": \"Engineering\",\n    \"secondary\": \"Software\"\n  }\n}\n```\n\nA parameter of `\"division\"` will reference \"North America\", as this is a top level key. A parameter\n`\"\/groups\/primary\"` uses JSON Pointer syntax to reference \"Engineering\" at a lower level. Any valid\nJSON Pointer can be used as a selector. Refer to the\n[JSON Pointer RFC](https:\/\/tools.ietf.org\/html\/rfc6901) for a full description of the syntax.\n\n## Tutorial\n\nRefer to the following tutorials for OIDC auth method usage examples:\n\n- [OIDC Auth Method](\/vault\/tutorials\/auth-methods\/oidc-auth)\n- [Azure Active Directory with OIDC Auth Method and External\n  Groups](\/vault\/tutorials\/auth-methods\/oidc-auth-azure)\n- [OIDC Authentication with Okta](\/vault\/tutorials\/auth-methods\/vault-oidc-okta)\n- [OIDC Authentication with Google Workspace](\/vault\/tutorials\/auth-methods\/google-workspace-oauth)\n\n## API\n\nThe JWT Auth Plugin has a full HTTP API. 
Please see the\n[API docs](\/vault\/api-docs\/auth\/jwt) for more details.","site":"vault"}
{"questions":"vault Configure Vault to use Active Directory Federation Services ADFS as an OIDC provider Use ADFS for OIDC authentication layout docs page title Use with ADFS for OIDC","answers":"---\nlayout: docs\npage_title: Use with ADFS for OIDC\ndescription: >-\n  Configure Vault to use Active Directory Federation Services (ADFS)\n  as an OIDC provider.\n---\n\n# Use ADFS for OIDC authentication\n\nConfigure your Vault instance to work with Active Directory Federation Services\n(ADFS) and use ADFS accounts with OIDC for Vault login.\n\n## Before you start\n\n1. **You must have Vault v1.15.0+**.\n1. **You must be running ADFS on Windows Server**.\n1. **You must have an OIDC client secret from your ADFS instance**.\n1. **You must know your Vault admin token**. If you do not have a valid admin\n   token, you can generate a new token in the Vault UI or with the\n   [Vault CLI](\/vault\/docs\/commands\/token\/create).\n\n## Step 1: Enable the OIDC authN method for Vault\n\n<Tabs>\n\n<Tab heading=\"Vault CLI\">\n\n1. Save your Vault instance URL to the `VAULT_ADDR` environment variable:\n   ```shell-session\n   $ export VAULT_ADDR=\"<URL_FOR_YOUR_VAULT_INSTALLATION>\"\n   ```\n   For example:\n\n   <CodeBlockConfig hideClipboard>\n\n   ```shell-session\n   $ export VAULT_ADDR=\"https:\/\/myvault.example.com:8200\"\n   ```\n\n   <\/CodeBlockConfig>\n\n1. Save your Vault access token to the `VAULT_TOKEN` environment variable:\n   ```shell-session\n   $ export VAULT_TOKEN=\"<YOUR_VAULT_ACCESS_TOKEN>\"\n   ```\n   For example:\n\n   <CodeBlockConfig hideClipboard>\n\n   ```shell-session\n   $ export VAULT_TOKEN=\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\"\n   ```\n\n   <\/CodeBlockConfig>\n \n1. 
**If you use Vault Enterprise or Vault HCP**, set the namespace where you\n   have the OIDC plugin mounted to the `VAULT_NAMESPACE` environment variable:\n   ```shell-session\n   $ export VAULT_NAMESPACE=\"<OIDC_NAMESPACE>\"\n   ```\n   For example:\n\n   <CodeBlockConfig hideClipboard>\n\n   ```shell-session\n   $ export VAULT_NAMESPACE=\"oidc-ns\"\n   ```\n\n   <\/CodeBlockConfig>\n\n1. Enable the OIDC authentication plugin:\n  ```shell-session\n  $ vault auth enable -path=<YOUR_OIDC_MOUNT_PATH> oidc\n  ```\n  For example:\n\n  <CodeBlockConfig hideClipboard>\n\n  ```shell-session\n  $ vault auth enable -path=\/adfs oidc\n  ```\n\n  <\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"Vault UI\">\n\n1. Open the web UI for your Vault instance.\n1. Select **Access** from the left-hand menu.\n1. Click **Enable new method** on the Access page.\n1. Select **OIDC**.\n1. Click **Next**.\n1. Set the mount path for the OIDC plugin. For example, `adfs`.\n1. Click **Enable Method**.\n1. Click **Save** to enable the plugin.\n\n<\/Tab>\n\n<\/Tabs>\n\n## Step 2: Create a new application group in ADFS\n\n<Note title=\"Save the client ID\">\n\n  Make note of the 32-character **client identifier** provided by ADFS for your\n  new application group (for example, `d879d6fb-d2de-4596-b39c-191b2f83c03f`).\n  You will need the client ID to configure your OIDC plugin for Vault.\n\n<\/Note>\n\n1. Open your Windows Server UI.\n1. Go to the Server Manager screen and click **Tools**.\n1. Select **AD FS Management**.\n1. Right-click on **Application Groups** and select **Add Application Group...**.\n1. Follow the prompts to create a new application group with the following\n   information:\n    - **Name**: Vault\n    - **Description**: a short description explaining the purpose of the application\n      group. 
For example, \"Enable access to Vault\".\n    - **Application type**: Server application\n    - **Redirect URI**: add the callback URL of your OIDC plugin for web\n      redirects and the local OIDC callback URL for Vault CLI redirects. For\n      example, `https:\/\/myvault.example.com:8200\/ui\/vault\/auth\/<YOUR_OIDC_MOUNT_PATH>\/oidc\/callback`\n      and `http:\/\/localhost:8250\/oidc\/callback`.\n1. Check the **Generate a shared secret** box and save the secret string.\n1. Confirm the application group details are correct before closing.\n\n## Step 3: Configure the webhook in ADFS\n\n1. Open the Vault application group from the ADFS management screen.\n1. Click **Add application...**\n1. Select **Web API**.\n1. Follow the prompts to configure a new webhook with the following information:\n   - Identifier: the client ID of your application group\n   - Access control policy: select an existing policy or `Permit everyone`\n   - Enable `allatclaims`, `email`, `openid`, and `profile`\n1. Select the new webhook (Vault - Web API) from the properties screen of the\n   Vault application group.\n1. Open the **Issuance Transform Rules** tab.\n1. Click **Add Rule...** and follow the prompts to create a new authentication\n   rule with the following information:\n   - Select **Send LDAP Attributes as Claims**\n   - Rule name: `LDAP Group`\n   - Attribute store: `Active Directory`\n   - LDAP attribute: `Token-Groups - Unqualified Names`\n   - Outgoing claim type: `Group`\n      \n[![Screenshot of the Transform Claim Rule Configuration](\/img\/adfs-oidc-ldapgroupoption.png)](\/img\/adfs-oidc-ldapgroupoption.png)\n\n## Step 4: Create a default ADFS role in Vault\n\nUse the `vault write` CLI command to create a default role for users\nauthenticating with ADFS where:\n\n- `ADFS_APPLICATION_GROUP_CLIENT_ID` is the client ID provided by ADFS.\n- `YOUR_OIDC_MOUNT_PATH` is the mount path for the OIDC plugin. 
For example,\n  `adfs`.\n- `ADFS_ROLE` is the name of your role. For example, `adfs-default`.\n\n```shell-session\n$ vault write auth\/<YOUR_OIDC_MOUNT_PATH>\/role\/<ADFS_ROLE> \\\n  bound_audiences=\"<ADFS_APPLICATION_GROUP_CLIENT_ID>\" \\\n  allowed_redirect_uris=\"${VAULT_ADDR}\/ui\/vault\/auth\/<YOUR_OIDC_MOUNT_PATH>\/oidc\/callback\" \\\n  allowed_redirect_uris=\"http:\/\/localhost:8250\/oidc\/callback\" \\\n  user_claim=\"upn\" groups_claim=\"group\" token_policies=\"default\"\n```\n\n<Tip>\n\nUsing the `upn` value for `user_claim` tells Vault to consider the user email\nassociated with the ADFS authentication token as an entity alias.\n\n<\/Tip>\n\n## Step 5: Configure the OIDC plugin\n\nUse the client ID and shared secret for your ADFS application group to finish\nconfiguring the OIDC plugin.\n\n<Tabs>\n\n<Tab heading=\"Vault CLI\">\n\nUse the `vault write` CLI command to save the configuration details for the OIDC\nplugin where:\n\n- `ADFS_URL` is the discovery URL for your ADFS instance. For example,\n  `https:\/\/adfs.example.com\/adfs`\n- `ADFS_APPLICATION_GROUP_CLIENT_ID` is the client ID provided by ADFS.\n- `YOUR_OIDC_MOUNT_PATH` is the mount path for the OIDC plugin. For example,\n  `adfs`.\n- `ADFS_APPLICATION_GROUP_SECRET` is the shared secret for your ADFS application\n  group.\n- `ADFS_ROLE` is the name of your role. For example, `adfs-default`.\n\n```shell-session\n$ vault write auth\/<YOUR_OIDC_MOUNT_PATH>\/config \\\n  oidc_discovery_url=\"<ADFS_URL>\" \\\n  oidc_client_id=\"<ADFS_APPLICATION_GROUP_CLIENT_ID>\" \\\n  oidc_client_secret=\"<ADFS_APPLICATION_GROUP_SECRET>\" \\\n  default_role=\"<ADFS_ROLE>\"\n```\n\n<\/Tab>\n\n<Tab heading=\"Vault UI\">\n\n1. Open the Vault UI.\n1. Select the OIDC plugin from the **Access** screen.\n1. Click **Enable Method** and follow the prompts to configure the OIDC plugin\n   with the following information:\n   - OIDC discovery URL: the discovery URL for your ADFS instance. 
For example,\n     `https:\/\/adfs.example.com\/adfs`.\n   - Default role: the name of your new ADFS role. For example, `adfs-default`.\n1. Click **OIDC Options** and set your OIDC information:\n    - OIDC client ID: the application group client ID provided by ADFS.\n    - OIDC client secret: the shared secret for your ADFS application group.\n1. Save your changes.\n\n<\/Tab>\n\n<\/Tabs>\n\n\n## OPTIONAL: Link Active Directory groups to Vault\n\n1. Enable the KV secret engine in Vault for ADFS:\n   ```shell-session\n   $ vault secrets enable -path=<ADFS_KV_PLUGIN_PATH> kv-v2\n   ```\n   For example:\n\n  <CodeBlockConfig hideClipboard>\n\n  ```shell-session\n   $ vault secrets enable -path=adfs-kv kv-v2\n  ```\n\n  <\/CodeBlockConfig>\n\n1. Create a read-only policy against the KV plugin for ADFS:\n   ```shell-session\n   $ vault policy write <RO_ADFS_POLICY_NAME> - << EOF\n   # Read and list policy for the ADFS KV mount\n   path \"<ADFS_KV_PLUGIN_PATH>\/*\" {\n     capabilities = [\"read\", \"list\"]\n   }\n   EOF\n   ```\n   For example:\n\n  <CodeBlockConfig hideClipboard>\n\n  ```shell-session\n   $ vault policy write read-adfs-test - << EOF\n   # Read and list policy for the ADFS KV mount\n   path \"adfs-kv\/*\" {\n     capabilities = [\"read\", \"list\"]\n   }\n   EOF\n  ```\n\n  <\/CodeBlockConfig>\n\n1. Write a test value to the KV plugin:\n  ```shell-session\n  $ vault kv put <ADFS_KV_PLUGIN_PATH>\/test test_key=\"test value\"\n   ```\n   For example:\n\n  <CodeBlockConfig hideClipboard>\n\n  ```shell-session\n  $ vault kv put adfs-kv\/test test_key=\"test value\"\n  ```\n\n  <\/CodeBlockConfig>\n\nNow you can create a Vault group and link to an AD group:\n\n<Tabs>\n\n<Tab heading=\"Vault CLI\">\n\n1. 
Create an external group in Vault and save the group ID to a file named\n   `group_id.txt`:\n  ```shell-session\n  $ vault write \\\n    -format=json \\\n    identity\/group name=\"<YOUR_NEW_VAULT_GROUP_NAME>\" \\\n    policies=\"<RO_ADFS_POLICY_NAME>\" \\\n    type=\"external\" | jq -r \".data.id\" > group_id.txt\n  ```\n1. Retrieve the mount accessor for the ADFS authentication method and save it to\n   a file named `accessor_adfs.txt`:\n   ```shell-session\n   $ vault auth list -format=json | \\\n     jq -r '.[\"<YOUR_OIDC_MOUNT_PATH>\/\"].accessor' > \\\n     accessor_adfs.txt\n   ```\n1. Create a group alias:\n   ```shell-session\n   $ vault write identity\/group-alias \\\n     name=\"<YOUR_EXISTING_AD_GROUP>\"  \\\n     mount_accessor=$(cat accessor_adfs.txt) \\\n     canonical_id=\"$(cat group_id.txt)\"\n   ```\n1. Log in to Vault as an AD user who is a member of YOUR_EXISTING_AD_GROUP.\n1. Read your test value from the KV plugin:\n  ```shell-session\n  $ vault kv get <ADFS_KV_PLUGIN_PATH>\/test\n  ```\n\n<\/Tab>\n\n<Tab heading=\"Vault UI\">\n\n1. Open the Vault UI.\n1. Select **Access**.\n1. Select **Groups**.\n1. Click **Create group**.\n1. Follow the prompts to create an external group with the following\n   information:\n     - Name: your new Vault group name\n     - Type: `external`\n     - Policies: the read-only ADFS policy you created. For example,\n       `read-adfs-test`.\n1. Click on **Add alias** and follow the prompts to map the Vault group name\n   to an existing group on your AD:\n   - Name: the name of an existing AD group (**must match exactly**).\n   - Auth Backend: `<YOUR_OIDC_MOUNT_PATH>\/ (oidc)`\n1. Log in to Vault as an AD user who is a member of the aliased AD group.\n1. 
Read your test value from the KV plugin.\n\n<\/Tab>\n\n<\/Tabs>\n","site":"vault"}
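When troubleshooting the role above, it can help to confirm that tokens issued by ADFS actually carry the `upn` and `group` claims that `user_claim` and `groups_claim` bind against. A minimal Python sketch that decodes a JWT payload for inspection (the token below is fabricated for illustration, and the signature is not verified):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT to inspect its claims."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricate a sample token carrying the claims the ADFS role relies on:
# `upn` (user_claim), `group` (groups_claim), and `aud` (bound_audiences).
claims = {
    "upn": "alice@example.com",
    "group": ["Engineering"],
    "aud": "d879d6fb-d2de-4596-b39c-191b2f83c03f",
}
seg = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"header.{seg}.signature"

decoded = jwt_claims(token)
print(decoded["upn"])    # alice@example.com
print(decoded["group"])  # ['Engineering']
```

If a real ADFS token is missing `upn` or `group`, revisit the Web API claim rules configured in Step 3.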
{"questions":"vault Use Kubernetes for OIDC authentication Configure Vault to use Kubernetes as an OIDC provider page title Use Kubernetes for OIDC authentication layout docs Kubernetes can function as an OIDC provider such that Vault can validate its","answers":"---\nlayout: docs\npage_title: Use Kubernetes for OIDC authentication\ndescription: >-\n  Configure Vault to use Kubernetes as an OIDC provider.\n---\n\n# Use Kubernetes for OIDC authentication\n\nKubernetes can function as an OIDC provider such that Vault can validate its\nservice account tokens using JWT\/OIDC auth.\n\n-> **Note:** The JWT auth engine does **not** use Kubernetes' `TokenReview` API\nduring authentication, and instead uses public key cryptography to verify the\ncontents of JWTs. This means tokens that have been revoked by Kubernetes will\nstill be considered valid by Vault until their expiry time. To mitigate this\nrisk, use short TTLs for service account tokens or use\n[Kubernetes auth](\/vault\/docs\/auth\/kubernetes) which _does_ use the `TokenReview` API.\n\n## Use service account issuer discovery\n\nWhen using service account issuer discovery, you only need to provide the JWT\nauth mount with an OIDC discovery URL, and sometimes a TLS certificate authority\nto trust. This makes it the most straightforward method to configure if your\nKubernetes cluster meets the requirements.\n\nKubernetes cluster requirements:\n\n* [`ServiceAccountIssuerDiscovery`][k8s-sa-issuer-discovery] feature enabled.\n  * Present from 1.18, defaults to enabled from 1.20.\n* kube-apiserver's `--service-account-issuer` flag is set to a URL that is\n  reachable from Vault. Public by default for most managed Kubernetes solutions.\n* Must use short-lived service account tokens when logging in.\n  * Tokens mounted into pods default to short-lived from 1.21.\n\nConfiguration steps:\n\n1. 
Ensure OIDC discovery URLs do not require authentication, as detailed\n   [here][k8s-sa-issuer-discovery]:\n\n   ```bash\n   kubectl create clusterrolebinding oidc-reviewer  \\\n      --clusterrole=system:service-account-issuer-discovery \\\n      --group=system:unauthenticated\n   ```\n\n1. Find the issuer URL of the cluster.\n\n   ```bash\n   ISSUER=\"$(kubectl get --raw \/.well-known\/openid-configuration | jq -r '.issuer')\"\n   ```\n\n1. Enable and configure JWT auth in Vault.\n\n  1. If Vault is running in Kubernetes:\n\n     ```bash\n     kubectl exec vault-0 -- vault auth enable jwt\n     kubectl exec vault-0 -- vault write auth\/jwt\/config \\\n        oidc_discovery_url=https:\/\/kubernetes.default.svc.cluster.local \\\n        oidc_discovery_ca_pem=@\/var\/run\/secrets\/kubernetes.io\/serviceaccount\/ca.crt\n     ```\n\n  1. Alternatively, if Vault is _not_ running in Kubernetes:\n\n     -> **Note:** When Vault is outside the cluster, the `$ISSUER` endpoint below may\n     or may not be reachable. If not, you can configure JWT auth using\n     [`jwt_validation_pubkeys`](#using-jwt-validation-public-keys) instead.\n\n     ```bash\n     vault auth enable jwt\n     vault write auth\/jwt\/config oidc_discovery_url=\"${ISSUER}\"\n     ```\n\n1. 
Configure a role and log in as detailed [below](#creating-a-role-and-logging-in).\n\n[k8s-sa-issuer-discovery]: https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/#service-account-issuer-discovery\n\n## Use JWT validation public keys\n\nThis method can be useful if Kubernetes' API is not reachable from Vault or if\nyou would like a single JWT auth mount to service multiple Kubernetes clusters\nby chaining their public signing keys.\n\n<Note title=\"Rotation of the JWT Signing Key in Kubernetes\">\n  Should the JWT Signing Key used by Kubernetes be rotated,\n  this process should be repeated with the new key.\n<\/Note>\n\nKubernetes cluster requirements:\n\n* [`ServiceAccountIssuerDiscovery`][k8s-sa-issuer-discovery] feature enabled.\n  * Present from 1.18, defaults to enabled from 1.20.\n  * This requirement can be avoided if you can access the Kubernetes master\n    nodes to read the public signing key directly from disk at\n    `\/etc\/kubernetes\/pki\/sa.pub`. In this case, you can skip the steps to\n    retrieve and then convert the key as it will already be in PEM format.\n* Must use short-lived service account tokens when logging in.\n  * Tokens mounted into pods default to short-lived from 1.21.\n\nConfiguration steps:\n\n1. Fetch the service account signing public key from your cluster's JWKS URI.\n\n   ```bash\n   # Query the jwks_uri specified in \/.well-known\/openid-configuration\n   kubectl get --raw \"$(kubectl get --raw \/.well-known\/openid-configuration | jq -r '.jwks_uri' | sed -r 's\/.*\\.[^\/]+(.*)\/\\1\/')\"\n   ```\n\n1. Convert the keys from JWK format to PEM. You can use a CLI tool or an online\n   converter such as [this one][jwk-to-pem].\n\n1. 
Configure the JWT auth mount with those public keys.\n\n   ```bash\n   vault write auth\/jwt\/config \\\n      jwt_validation_pubkeys=\"-----BEGIN PUBLIC KEY-----\n   MIIBIjANBgkqhkiG9...\n   -----END PUBLIC KEY-----\",\"-----BEGIN PUBLIC KEY-----\n   MIIBIjANBgkqhkiG9...\n   -----END PUBLIC KEY-----\"\n   ```\n\n1. Configure a role and log in as detailed [below](#creating-a-role-and-logging-in).\n\n[jwk-to-pem]: https:\/\/8gwifi.org\/jwkconvertfunctions.jsp\n\n## Create a role and log in\n\nOnce your JWT auth mount is configured, you're ready to configure a role and\nlog in. The following assumes you use the projected service account token\navailable in all pods by default. See [Specifying TTL and audience](#specifying-ttl-and-audience)\nbelow if you'd like to control the audience or TTL.\n\n1. Choose any value from the array of default audiences. In these examples,\n   there is only one audience in the `aud` array,\n   `https:\/\/kubernetes.default.svc.cluster.local`.\n\n   To find the default audiences, either create a fresh token (requires\n   `kubectl` v1.24.0+):\n\n   ```shell-session\n   $ kubectl create token default | cut -f2 -d. | base64 --decode\n   {\"aud\":[\"https:\/\/kubernetes.default.svc.cluster.local\"], ... \"sub\":\"system:serviceaccount:default:default\"}\n   ```\n\n   Or read a token from a running pod's filesystem:\n\n   ```shell-session\n   $ kubectl exec my-pod -- cat \/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token | cut -f2 -d. | base64 --decode\n   {\"aud\":[\"https:\/\/kubernetes.default.svc.cluster.local\"], ... \"sub\":\"system:serviceaccount:default:default\"}\n   ```\n\n1. 
Create a role for JWT auth that the `default` service account from the\n   `default` namespace can use.\n\n   ```bash\n   vault write auth\/jwt\/role\/my-role \\\n      role_type=\"jwt\" \\\n      bound_audiences=\"<AUDIENCE-FROM-PREVIOUS-STEP>\" \\\n      user_claim=\"sub\" \\\n      bound_subject=\"system:serviceaccount:default:default\" \\\n      policies=\"default\" \\\n      ttl=\"1h\"\n   ```\n\n1. Pods or other clients with access to a service account JWT can then log in.\n\n   ```bash\n   vault write auth\/jwt\/login \\\n      role=my-role \\\n      jwt=@\/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token\n   # OR equivalent to:\n   curl \\\n      --fail \\\n      --request POST \\\n      --header \"X-Vault-Request: true\" \\\n      --data '{\"jwt\":\"<JWT-TOKEN-HERE>\",\"role\":\"my-role\"}' \\\n      \"${VAULT_ADDR}\/v1\/auth\/jwt\/login\"\n   ```\n\n## Specify TTL and audience\n\nIf you would like to specify a custom TTL or audience for service account tokens,\nthe following pod spec illustrates a volume mount that overrides the default\nadmission injected token. This is especially relevant if you are unable to\ndisable the [--service-account-extend-token-expiration][k8s-extended-tokens]\nflag for `kube-apiserver` and want to use short TTLs.\n\nWhen using the resulting token, you will need to set `bound_audiences=vault`\nwhen creating roles in Vault's JWT auth mount.\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: nginx\nspec:\n  # automountServiceAccountToken is redundant in this example because the\n  # mountPath used overlaps with the default path. The overlap stops the default\n  # admission injected token from being created. 
You can use this option to\n  # ensure only\u00a0a single token is mounted if you choose a different mount path.\n  automountServiceAccountToken: false\n  containers:\n    - name: nginx\n      image: nginx\n      volumeMounts:\n      - name: custom-token\n        mountPath: \/var\/run\/secrets\/kubernetes.io\/serviceaccount\n  volumes:\n  - name: custom-token\n    projected:\n      defaultMode: 420\n      sources:\n      - serviceAccountToken:\n          path: token\n          expirationSeconds: 600 # 10 minutes is the minimum TTL\n          audience: vault        # Must match your JWT role's `bound_audiences`\n      # The remaining sources are included to mimic the rest of the default\n      # admission injected volume.\n      - configMap:\n          name: kube-root-ca.crt\n          items:\n          - key: ca.crt\n            path: ca.crt\n      - downwardAPI:\n          items:\n          - fieldRef:\n              apiVersion: v1\n              fieldPath: metadata.namespace\n            path: namespace\n```\n\n[k8s-extended-tokens]: https:\/\/kubernetes.io\/docs\/reference\/command-line-tools-reference\/kube-apiserver\/#options","site":"vault","answers_cleaned":"    layout  docs page title  Use Kubernetes for OIDC authentication description       Configure Vault to use Kubernetes as an OIDC provider         Use Kubernetes for OIDC authentication  Kubernetes can function as an OIDC provider such that Vault can validate its service account tokens using JWT OIDC auth        Note    The JWT auth engine does   not   use Kubernetes   TokenReview  API during authentication  and instead uses public key cryptography to verify the contents of JWTs  This means tokens that have been revoked by Kubernetes will still be considered valid by Vault until their expiry time  To mitigate this risk  use short TTLs for service account tokens or use  Kubernetes auth   vault docs auth kubernetes  which  does  use the  TokenReview  API      Use service account issuer discovery  When 
using service account issuer discovery  you only need to provide the JWT auth mount with an OIDC discovery URL  and sometimes a TLS certificate authority to trust  This makes it the most straightforward method to configure if your Kubernetes cluster meets the requirements   Kubernetes cluster requirements       ServiceAccountIssuerDiscovery   k8s sa issuer discovery  feature enabled      Present from 1 18  defaults to enabled from 1 20    kube apiserver s    service account issuer  flag is set to a URL that is   reachable from Vault  Public by default for most managed Kubernetes solutions    Must use short lived service account tokens when logging in      Tokens mounted into pods default to short lived from 1 21   Configuration steps   1  Ensure OIDC discovery URLs do not require authentication  as detailed     here  k8s sa issuer discovery          bash    kubectl create clusterrolebinding oidc reviewer            clusterrole system service account issuer discovery           group system unauthenticated         1  Find the issuer URL of the cluster         bash    ISSUER    kubectl get   raw   well known openid configuration   jq  r   issuer            1  Enable and configure JWT auth in Vault     1  If Vault is running in Kubernetes           bash      kubectl exec vault 0    vault auth enable jwt      kubectl exec vault 0    vault write auth jwt config           oidc discovery url https   kubernetes default svc cluster local           oidc discovery ca pem   var run secrets kubernetes io serviceaccount ca crt             1  Alternatively  if Vault is  not  running in Kubernetes             Note    When Vault is outside the cluster  the   ISSUER  endpoint below may      or may not be reachable  If not  you can configure JWT auth using        jwt validation pubkeys    using jwt validation public keys  instead           bash      vault auth enable jwt      vault write auth jwt config oidc discovery url    ISSUER             1  Configure a role and log in as 
detailed [below](#create-a-role-and-logging-in).\n\n[k8s_sa_issuer_discovery]: https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/#service-account-issuer-discovery\n\n### Use JWT validation public keys\n\nThis method can be useful if the Kubernetes API is not reachable from Vault or if you would like a single JWT auth mount to service multiple Kubernetes clusters by chaining their public signing keys.\n\n<Note title=\"Rotation of the JWT Signing Key in Kubernetes\">\n\nShould the JWT Signing Key used by Kubernetes be rotated, this process should be repeated with the new key.\n\n</Note>\n\nKubernetes cluster requirements:\n\n- [`ServiceAccountIssuerDiscovery`][k8s_sa_issuer_discovery] feature enabled.\n  - Present from 1.18, defaults to enabled from 1.20.\n  - This requirement can be avoided if you can access the Kubernetes master nodes to read the public signing key directly from disk at `\/etc\/kubernetes\/pki\/sa.pub`. In this case, you can skip the steps to retrieve and then convert the key as it will already be in PEM format.\n- Must use short-lived service account tokens when logging in.\n  - Tokens mounted into pods default to short-lived from 1.21.\n\nConfiguration steps:\n\n1. Fetch the service account signing public key from your cluster's JWKS URI.\n\n   ```bash\n   # Query the jwks_uri specified in \/.well-known\/openid-configuration\n   kubectl get --raw \"$(kubectl get --raw \/.well-known\/openid-configuration | jq -r '.jwks_uri' | sed -r 's\/.*\\.[^\/]+(\\\/.*)\/\\1\/')\"\n   ```\n\n1. Convert the keys from JWK format to PEM. You can use a CLI tool or an online converter such as [this one][jwk-to-pem].\n\n1. Configure the JWT auth mount with those public keys:\n\n   ```bash\n   vault write auth\/jwt\/config \\\n      jwt_validation_pubkeys=\"-----BEGIN PUBLIC KEY-----\n   MIIBIjANBgkqhkiG9...\n   -----END PUBLIC KEY-----\",\"-----BEGIN PUBLIC KEY-----\n   MIIBIjANBgkqhkiG9...\n   -----END PUBLIC KEY-----\"\n   ```\n\n1. Configure a role and log in as detailed [below](#create-a-role-and-logging-in).\n\n[jwk-to-pem]: https:\/\/8gwifi.org\/jwkconvertfunctions.jsp\n\n### Create a role and logging in\n\nOnce your JWT auth mount is configured, you're ready to configure a role and log in. The following assumes you use the projected service account token available in all pods by default. See [Specifying TTL and audience](#specify-ttl-and-audience) below if you'd like to control the audience or TTL.\n\n1. Choose any value from the array of default audiences. In these examples, there is only one audience in the `aud` array, `https:\/\/kubernetes.default.svc.cluster.local`.\n\n   To find the default audiences, either create a fresh token (requires kubectl v1.24.0+):\n\n   ```shell-session\n   kubectl create token default | cut -f2 -d. | base64 --decode\n   {\"aud\":[\"https:\/\/kubernetes.default.svc.cluster.local\"], ... \"sub\":\"system:serviceaccount:default:default\"}\n   ```\n\n   Or read a token from a running pod's filesystem:\n\n   ```shell-session\n   kubectl exec my-pod -- cat \/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token | cut -f2 -d. | base64 --decode\n   {\"aud\":[\"https:\/\/kubernetes.default.svc.cluster.local\"], ... \"sub\":\"system:serviceaccount:default:default\"}\n   ```\n\n1. Create a role for JWT auth that the `default` service account from the `default` namespace can use:\n\n   ```bash\n   vault write auth\/jwt\/role\/my-role \\\n      role_type=\"jwt\" \\\n      bound_audiences=\"<AUDIENCE-FROM-PREVIOUS-STEP>\" \\\n      user_claim=\"sub\" \\\n      bound_subject=\"system:serviceaccount:default:default\" \\\n      policies=\"default\" \\\n      ttl=\"1h\"\n   ```\n\n1. Pods or other clients with access to a service account JWT can then log in:\n\n   ```bash\n   vault write auth\/jwt\/login \\\n      role=my-role \\\n      jwt=@\/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token\n\n   # OR equivalent to:\n\n   curl \\\n      --fail \\\n      --request POST \\\n      --header \"X-Vault-Request: true\" \\\n      --data '{\"jwt\":\"<JWT-TOKEN-HERE>\",\"role\":\"my-role\"}' \\\n      \"${VAULT_ADDR}\/v1\/auth\/jwt\/login\"\n   ```\n\n### Specify TTL and audience\n\nIf you would like to specify a custom TTL or audience for service account tokens, the following pod spec illustrates a volume mount that overrides the default admission-injected token. This is especially relevant if you are unable to disable the [--service-account-extend-token-expiration][k8s-extended-tokens] flag for `kube-apiserver` and want to use short TTLs.\n\nWhen using the resulting token, you will need to set `bound_audiences=vault` when creating roles in Vault's JWT auth mount.\n\n```yaml\napiVersion: v1\nkind: Pod\nmetadata:\n  name: nginx\nspec:\n  # automountServiceAccountToken is redundant in this example because the\n  # mountPath used overlaps with the default path. The overlap stops the default\n  # admission-injected token from being created. You can use this option to\n  # ensure only a single token is mounted if you choose a different mount path.\n  automountServiceAccountToken: false\n  containers:\n    - name: nginx\n      image: nginx\n      volumeMounts:\n        - name: custom-token\n          mountPath: \/var\/run\/secrets\/kubernetes.io\/serviceaccount\n  volumes:\n    - name: custom-token\n      projected:\n        defaultMode: 420\n        sources:\n          - serviceAccountToken:\n              path: token\n              expirationSeconds: 600 # 10 minutes is the minimum TTL\n              audience: vault # Must match your JWT role's bound_audiences\n          # The remaining sources are included to mimic the rest of the default\n          # admission-injected volume.\n          - configMap:\n              name: kube-root-ca.crt\n              items:\n                - key: ca.crt\n                  path: ca.crt\n          - downwardAPI:\n              items:\n                - fieldRef:\n                    apiVersion: v1\n                    fieldPath: metadata.namespace\n                  path: namespace\n```\n\n[k8s-extended-tokens]: https:\/\/kubernetes.io\/docs\/reference\/command-line-tools-reference\/kube-apiserver\/#options"}
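The audience-discovery step in the record above pipes the middle segment of a service account JWT through `cut -f2 -d. | base64 --decode`. A minimal Python sketch of that same inspection follows; the token here is a fabricated stand-in built from the example claims, not a real projected service account token, and the claims are read without any signature verification.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload (second dot-separated segment) of a JWT without
    verifying its signature -- the equivalent of `cut -f2 -d. | base64 --decode`.
    For inspection only: never trust unverified claims for auth decisions."""
    payload_b64 = token.split(".")[1]
    # JWTs use unpadded base64url; restore the padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated stand-in token carrying the claims shown in the doc; a real
# projected token would come from the pod filesystem or `kubectl create token`.
claims = {
    "aud": ["https://kubernetes.default.svc.cluster.local"],
    "sub": "system:serviceaccount:default:default",
}
segments = [
    base64.urlsafe_b64encode(json.dumps({"alg": "RS256"}).encode()).rstrip(b"="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"="),
    b"fake-signature",
]
token = b".".join(segments).decode()

decoded = decode_jwt_payload(token)
print(decoded["aud"][0])  # value to use for bound_audiences
print(decoded["sub"])     # value to use for bound_subject
```

The decoded `aud` and `sub` values are what the role-creation step binds with `bound_audiences` and `bound_subject`.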
{"questions":"vault page title Use Google for OIDC Use Google for OIDC authentication Main reference Using OAuth 2 0 to Access Google APIs https developers google com identity protocols OAuth2 Configure Vault to use Google as an OIDC provider layout docs","answers":"---\nlayout: docs\npage_title: Use Google for OIDC\ndescription: >-\n  Configure Vault to use Google as an OIDC provider.\n---\n\n# Use Google for OIDC authentication\n\nMain reference: [Using OAuth 2.0 to Access Google APIs](https:\/\/developers.google.com\/identity\/protocols\/OAuth2)\n\n1. Visit the [Google API Console](https:\/\/console.developers.google.com).\n1. Create or select a project.\n1. Navigate to Menu > APIs & Services.\n1. Create a new credential via Credentials > Create Credentials > OAuth Client ID.\n1. Configure the OAuth Consent Screen. Application Name is required. Save.\n1. Select application type: \"Web Application\".\n1. Configure Authorized [Redirect URIs](\/vault\/docs\/auth\/jwt#redirect-uris).\n1. Save client ID and secret.\n\n### Optional google-specific configuration\n\nGoogle-specific configuration is available when using Google as an identity provider from the\nVault JWT\/OIDC auth method. The configuration allows Vault to obtain Google Workspace group membership and\nuser information during the JWT\/OIDC authentication flow. The group membership obtained from Google Workspace\nmay be used for Identity group alias association. 
The user information obtained from Google Workspace can be\nused to copy claims data into resulting auth token and alias metadata via [claim_mappings](\/vault\/api-docs\/auth\/jwt#claim_mappings).\n\n#### Setup\n\nTo set up the Google-specific handling, you'll need:\n\n- A Google Workspace account with the [super admin role](https:\/\/support.google.com\/a\/answer\/2405986?hl=en)\n  for granting domain-wide delegation API client access, or a service account that has been granted\n  [the necessary](https:\/\/cloud.google.com\/identity\/docs\/how-to\/setup#auth-no-dwd) group admin roles.\n- The ability to create a service account in [Google Cloud Platform](https:\/\/console.developers.google.com\/iam-admin\/serviceaccounts).\n- To enable the [Admin SDK API](https:\/\/console.developers.google.com\/apis\/api\/admin.googleapis.com\/overview).\n- An OAuth 2.0 application with an [internal user type](https:\/\/support.google.com\/cloud\/answer\/10311615#user-type).\n  We **do not** recommend using an external user type since it would allow _any user_ with a\n  Google account to authenticate with Vault.\n\nThe Google-specific handling that's used to fetch Google Workspace groups and user information in Vault uses either\nGoogle Workspace Domain-Wide Delegation of Authority for authentication and authorization, or group admin roles granted to a GCP service account.\n\nLinks to steps for setting up authentication and authorization:\n- [DWDoA](https:\/\/developers.google.com\/workspace\/guides\/create-credentials#service-account)\n- [Without DWDoA](https:\/\/cloud.google.com\/identity\/docs\/how-to\/setup#auth-no-dwd)\n\nIn **step 11** within the section titled\n[Optional: Set up domain-wide delegation for a service account](https:\/\/developers.google.com\/workspace\/guides\/create-credentials#optional_set_up_domain-wide_delegation_for_a_service_account),\nthe only OAuth scopes that should be granted are:\n\n- 
`https:\/\/www.googleapis.com\/auth\/admin.directory.group.readonly`\n- `https:\/\/www.googleapis.com\/auth\/admin.directory.user.readonly`\n\n~> This is an **important security step** in order to give the service account the least set of privileges\nthat enable the feature.\n\n#### Configuration\n\n- `provider` `(string: <required>)` - Name of the provider. Must be set to \"gsuite\".\n- `gsuite_service_account` `(string: <optional>)` - Either the path to or the contents of a Google service\n  account key file in JSON format. If given as a file path, it must refer to a file that's readable on\n  the host that Vault is running on. If given directly as JSON contents, the JSON must be properly escaped.\n  If left empty, Application Default Credentials will be used.\n- `gsuite_admin_impersonate` `(string: <optional>)` - Email address of a Google Workspace admin to impersonate.\n- `fetch_groups` `(bool: false)` - If set to true, groups will be fetched from Google Workspace.\n- `fetch_user_info` `(bool: false)` - If set to true, user info will be fetched from Google Workspace using the configured [user_custom_schemas](#user_custom_schemas).\n- `groups_recurse_max_depth` `(int: <optional>)` - Group membership recursion max depth. Defaults to 0, which means don't recurse.\n- `user_custom_schemas` `(string: <optional>)` - Comma-separated list of Google Workspace [custom schemas](https:\/\/developers.google.com\/admin-sdk\/directory\/v1\/guides\/manage-schemas).\n  Values set for Google Workspace users using custom schema fields will be fetched and made available as claims that can be used with [claim_mappings](\/vault\/api-docs\/auth\/jwt#claim_mappings). 
Required if [fetch_user_info](#fetch_user_info) is set to true.\n- `impersonate_principal` `(string: <optional>)` - Service account email that has been granted domain-wide delegation of authority in Google Workspace.\n  Required if accessing the Google Workspace Directory API through domain-wide delegation of authority, without using a service account key.\n  The service account vault is running under must be granted the `iam.serviceAccounts.signJwt` permission on this service account.\n  If `gsuite_admin_impersonate` is specified, that\tWorkspace user will be impersonated.\n- `domain` `(string: <optional>)` - The domain to get groups from. Set this if your workspace is configured with more than one domain.\n\nExample configuration:\n\n```\nvault write auth\/oidc\/config -<<EOF\n{\n    \"oidc_discovery_url\": \"https:\/\/accounts.google.com\",\n    \"oidc_client_id\": \"your_client_id\",\n    \"oidc_client_secret\": \"your_client_secret\",\n    \"default_role\": \"your_default_role\",\n    \"provider_config\": {\n        \"provider\": \"gsuite\",\n        \"gsuite_service_account\": \"\/path\/to\/service-account.json\",\n        \"gsuite_admin_impersonate\": \"admin@gsuitedomain.com\",\n        \"fetch_groups\": true,\n        \"fetch_user_info\": true,\n        \"groups_recurse_max_depth\": 5,\n        \"user_custom_schemas\": \"Education,Preferences\",\n        \"impersonate_principal\": \"sa@project.iam.gserviceaccount.com\"\n    }\n}\nEOF\n```\n\n#### Role\n\nThe [user_claim](\/vault\/api-docs\/auth\/jwt#user_claim) value of the role must be set to\none of either `sub` or `email` for the Google Workspace group and user information\nqueries to succeed.\n\nExample role:\n\n```\nvault write auth\/oidc\/role\/your_default_role \\\n    allowed_redirect_uris=\"http:\/\/localhost:8200\/ui\/vault\/auth\/oidc\/oidc\/callback,http:\/\/localhost:8250\/oidc\/callback\" \\\n    user_claim=\"sub\" \\\n    groups_claim=\"groups\" \\\n    
claim_mappings=\"\/Education\/graduation_date\"=\"graduation_date\" \\\n    claim_mappings=\"\/Preferences\/shirt_size\"=\"shirt_size\"\n```","site":"vault"}
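The heredoc example in the record above posts a JSON body to `auth/oidc/config`. The sketch below assembles the same payload in Python and checks the documented constraint that `user_custom_schemas` must be set when `fetch_user_info` is true; all client values are placeholders copied from the example, and the HTTP POST itself is left as a comment.

```python
import json

# Same payload as the `vault write auth/oidc/config -<<EOF` example above;
# client IDs, secrets, and paths are placeholders.
config = {
    "oidc_discovery_url": "https://accounts.google.com",
    "oidc_client_id": "your_client_id",
    "oidc_client_secret": "your_client_secret",
    "default_role": "your_default_role",
    "provider_config": {
        "provider": "gsuite",  # must be exactly "gsuite"
        "gsuite_service_account": "/path/to/service-account.json",
        "gsuite_admin_impersonate": "admin@gsuitedomain.com",
        "fetch_groups": True,
        "fetch_user_info": True,
        "groups_recurse_max_depth": 5,
        "user_custom_schemas": "Education,Preferences",
    },
}

def validate(cfg: dict) -> None:
    """Check the constraints documented above before sending the config."""
    pc = cfg["provider_config"]
    if pc["provider"] != "gsuite":
        raise ValueError('provider must be set to "gsuite"')
    if pc.get("fetch_user_info") and not pc.get("user_custom_schemas"):
        raise ValueError("user_custom_schemas is required when fetch_user_info is true")

validate(config)
body = json.dumps(config)
# POST `body` to $VAULT_ADDR/v1/auth/oidc/config with an X-Vault-Token header
# (e.g. via urllib.request) to apply the configuration.
```

Keeping the validation next to the payload catches the `fetch_user_info`/`user_custom_schemas` mismatch locally instead of as a Vault API error.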
{"questions":"vault Configure Vault to use Azure Active Directory AD as an OIDC provider page title Use Azure AD for OIDC Use Azure AD for OIDC authentication layout docs Note Azure Active Directory Applications that have custom signing keys as a result of using","answers":"---\nlayout: docs\npage_title: Use Azure AD for OIDC\ndescription: >-\n  Configure Vault to use Azure Active Directory (AD) as an OIDC provider.\n---\n\n# Use Azure AD for OIDC authentication\n\n~> **Note:** Azure Active Directory Applications that have custom signing keys as a result of using\nthe [claims-mapping](https:\/\/docs.microsoft.com\/en-us\/azure\/active-directory\/develop\/active-directory-claims-mapping)\nfeature are currently not supported for OIDC authentication.\n\nReference: [Azure Active Directory v2.0 and the OpenID Connect protocol](https:\/\/docs.microsoft.com\/en-us\/azure\/active-directory\/develop\/v2-protocols-oidc)\n\n1. Choose your Azure tenant.\n\n1. Go to **Azure Active Directory** and\n   [register an application](https:\/\/docs.microsoft.com\/en-us\/azure\/active-directory\/develop\/quickstart-register-app)\n   for Vault.\n\n1. Add Redirect URIs with the \"Web\" type. You may include two redirect URIs,\n   one for CLI access and another for Vault UI access.\n\n    - `http:\/\/localhost:8250\/oidc\/callback`\n    - `https:\/\/hostname:port_number\/ui\/vault\/auth\/oidc\/oidc\/callback`\n\n1. Record the \"Application (client) ID\" as you will need it as the `oidc_client_id`.\n\n1. Under **Endpoints**, copy the OpenID Connect metadata document URL, omitting the `\/well-known...` portion.\n\n   - The endpoint URL (`oidc_discovery_url`) will look like: https:\/\/login.microsoftonline.com\/tenant-guid-dead-beef-aaaa-aaaa\/v2.0\n\n1.
Under **Certificates & secrets**,\n   [add a client secret](https:\/\/docs.microsoft.com\/en-us\/azure\/active-directory\/develop\/quickstart-register-app#add-a-client-secret).\n   Record the secret's value as you will need it as the `oidc_client_secret` for Vault.\n\n### Connect AD group with Vault external group\n\nReference: [Azure Active Directory with OIDC Auth Method and External Groups](\/vault\/tutorials\/auth-methods\/oidc-auth-azure)\n\nTo connect the AD group with [Vault external groups](\/vault\/docs\/secrets\/identity#external-vs-internal-groups),\nyou will need\n[Azure AD v2.0 endpoints](https:\/\/docs.microsoft.com\/en-gb\/azure\/active-directory\/develop\/azure-ad-endpoint-comparison).\nYou should set up a [Vault policy](\/vault\/tutorials\/policies\/policies) for the Azure AD group to use.\n\n1. Go to **Azure Active Directory** and choose your Vault application.\n\n1. Go to **Token configuration** and **Add groups claim**. Select \"All\" or \"SecurityGroup\" based on\n   [which groups for a user](https:\/\/docs.microsoft.com\/en-us\/azure\/active-directory\/hybrid\/how-to-connect-fed-group-claims)\n   you want returned in the claim.\n\n1. In Vault, enable the OIDC auth method.\n\n1. Configure the OIDC auth method with the `oidc_client_id` (application ID), `oidc_client_secret`\n   (client secret), and `oidc_discovery_url` (endpoint URL) you recorded from Azure.\n   ```shell\n   vault write auth\/oidc\/config \\\n      oidc_client_id=\"your_client_id\" \\\n      oidc_client_secret=\"your_client_secret\" \\\n      default_role=\"your_default_role\" \\\n      oidc_discovery_url=\"https:\/\/login.microsoftonline.com\/tenant_id\/v2.0\"\n   ```\n\n1.
Configure the [OIDC Role](\/vault\/api-docs\/auth\/jwt#create-role) with the following:\n\n   - `user_claim` should be `\"sub\"` or `\"oid\"` following the\n   [recommendation](https:\/\/learn.microsoft.com\/en-us\/azure\/active-directory\/develop\/id-token-claims-reference#use-claims-to-reliably-identify-a-user)\n  from Azure.\n   - `allowed_redirect_uris` should be the two redirect URIs for Vault CLI and UI access.\n   - `groups_claim` should be set to `\"groups\"`.\n   - `oidc_scopes` should be set to `\"https:\/\/graph.microsoft.com\/.default profile\"`.\n   \n   ```shell\n   vault write auth\/oidc\/role\/your_default_role \\\n      user_claim=\"sub\" \\\n      allowed_redirect_uris=\"http:\/\/localhost:8250\/oidc\/callback,https:\/\/online_version_hostname:port_number\/ui\/vault\/auth\/oidc\/oidc\/callback\"  \\\n      groups_claim=\"groups\" \\\n      oidc_scopes=\"https:\/\/graph.microsoft.com\/.default profile\" \\\n      policies=default\n   ```\n\n1. In Vault, create the [external group](\/vault\/api-docs\/secret\/identity\/group).\n   Record the group ID as you will need it for the group alias.\n\n1. From Vault, retrieve the [OIDC accessor ID](\/vault\/api-docs\/system\/auth#list-auth-methods)\n   from the OIDC auth method as you will need it for the group alias's `mount_accessor`.\n\n1. Go to the Azure AD Group you want to attach to Vault's external group. Record the `objectId`\n   as you will need it as the group alias name in Vault.\n\n1. 
In Vault, create a [group alias](\/vault\/api-docs\/secret\/identity\/group-alias)\n   for the external group and set the `objectId` as the group alias name.\n   ```shell\n   vault write identity\/group-alias \\\n      name=\"your_ad_group_object_id\" \\\n      mount_accessor=\"vault_oidc_accessor_id\" \\\n      canonical_id=\"vault_external_group_id\"\n   ```\n\n### Optional azure-specific configuration\n\nIf a user is a member of more than 200 groups (directly or indirectly), Azure will\nsend `_claim_names` and `_claim_sources`. For example, returned claims might look like:\n\n```json\n{\n  \"_claim_names\": {\n    \"groups\": \"src1\"\n  },\n  \"_claim_sources\": {\n    \"src1\": {\n      \"endpoint\": \"https:\/\/graph.windows.net....\"\n    }\n  }\n}\n```\n\nThe OIDC auth method role can be configured to include the user ID in the endpoint URL,\nwhich will be used by Vault to retrieve the groups for the user. Additional API permissions\nmust be added to the Azure app in order to request the additional groups from the Microsoft\nGraph API.\n\nTo set the proper permissions on the Azure app:\n\n1. Locate the application under \"App Registrations\" in Azure\n1. Navigate to the \"API Permissions\" page for the application\n1. Add a permission\n1. Select \"Microsoft Graph\"\n1. Select \"Delegated permissions\"\n1. Add the [User.Read](https:\/\/learn.microsoft.com\/en-us\/graph\/permissions-reference#delegated-permissions-93) permission\n1. Check the \"Grant admin consent for Default Directory\" checkbox\n1. 
Configure the OIDC auth method in Vault by setting `\"provider_config\"` to Azure.\n   ```shell\n   vault write auth\/oidc\/config -<<\"EOH\"\n   {\n     \"oidc_client_id\": \"your_client_id\",\n     \"oidc_client_secret\": \"your_client_secret\",\n     \"default_role\": \"your_default_role\",\n     \"oidc_discovery_url\": \"https:\/\/login.microsoftonline.com\/tenant_id\/v2.0\",\n     \"provider_config\": {\n        \"provider\": \"azure\"\n     }\n   }\n   EOH\n   ```\n\n1. Add `\"profile\"` to `oidc_scopes` so the user's ID comes back on the JWT.\n   ```shell\n   vault write auth\/oidc\/role\/your_default_role \\\n    user_claim=\"sub\" \\\n    allowed_redirect_uris=\"http:\/\/localhost:8250\/oidc\/callback,https:\/\/online_version_hostname:port_number\/ui\/vault\/auth\/oidc\/oidc\/callback\"  \\\n    groups_claim=\"groups\" \\\n    oidc_scopes=\"profile\" \\\n    policies=\"default\"\n   ```","site":"vault"}
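The `_claim_names`/`_claim_sources` shape shown in the Azure record above can be detected programmatically before falling back to a Graph API lookup. Below is a small, hypothetical Python helper (the function name is mine, not a Vault or Azure API) that assumes only the claim shape in the JSON example; the truncated `https://graph.windows.net....` endpoint is kept verbatim as placeholder data.

```python
import json
from typing import Optional

def groups_overage_endpoint(claims: dict) -> Optional[str]:
    """Return the distributed-groups endpoint if Azure omitted the `groups`
    claim because the user belongs to more than 200 groups, else None."""
    source_key = claims.get("_claim_names", {}).get("groups")
    if source_key is None:
        return None
    return claims.get("_claim_sources", {}).get(source_key, {}).get("endpoint")

# The example claims from the doc above:
claims = json.loads(
    '{"_claim_names": {"groups": "src1"},'
    ' "_claim_sources": {"src1": {"endpoint": "https://graph.windows.net...."}}}'
)
print(groups_overage_endpoint(claims))  # endpoint the groups must be fetched from

# A token that carries `groups` directly has no overage indirection:
print(groups_overage_endpoint({"groups": ["guid-1", "guid-2"]}))  # None
```

When the helper returns an endpoint, the token alone is not enough to resolve group membership, which is why the doc adds Microsoft Graph permissions and sets `provider_config` to `azure`.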
{"questions":"vault authenticate to Vault layout docs Use AppRole authentication with Vault to control how machines and services page title Use AppRole authentication Use AppRole authentication","answers":"---\nlayout: docs\npage_title: Use AppRole authentication\ndescription: >-\n  Use AppRole authentication with Vault to control how machines and services\n  authenticate to Vault.\n---\n\n# Use AppRole authentication\n\nThe `approle` auth method allows machines or _apps_ to authenticate with\nVault-defined _roles_. The open design of `AppRole` enables a varied set of\nworkflows and configurations to handle large numbers of apps. This auth method\nis oriented to automated workflows (machines and services), and is less useful\nfor human operators. We recommend using `batch` tokens with the\n `AppRole` auth method.\n\nAn \"AppRole\" represents a set of Vault policies and login constraints that must\nbe met to receive a token with those policies. The scope can be as narrow or\nbroad as desired. An AppRole can be created for a particular machine, or even\na particular user on that machine, or a service spread across machines. The\ncredentials required for successful login depend upon the constraints set on\nthe AppRole associated with the credentials.\n\n## Authentication\n\n### Via the CLI\n\nThe default path is `\/approle`. If this auth method was enabled at a different\npath, specify `auth\/my-path\/login` instead.\n\n```shell-session\n$ vault write auth\/approle\/login \\\n    role_id=db02de05-fa39-4855-059b-67221c5c2f63 \\\n    secret_id=6a174c20-f6de-a53c-74d2-6018fcceff64\n\nKey                Value\n---                -----\ntoken              65b74ffd-842c-fd43-1386-f7d7006e520a\ntoken_accessor     3c29bc22-5c72-11a6-f778-2bc8f48cea0e\ntoken_duration     20m0s\ntoken_renewable    true\ntoken_policies     [default]\n```\n\n### Via the API\n\nThe default endpoint is `auth\/approle\/login`. 
If this auth method was enabled\nat a different path, use that value instead of `approle`.\n\n```shell-session\n$ curl \\\n    --request POST \\\n    --data '{\"role_id\":\"988a9df-...\",\"secret_id\":\"37b74931...\"}' \\\n    http:\/\/127.0.0.1:8200\/v1\/auth\/approle\/login\n```\n\nThe response will contain the token at `auth.client_token`:\n\n```json\n{\n  \"auth\": {\n    \"renewable\": true,\n    \"lease_duration\": 2764800,\n    \"metadata\": {},\n    \"policies\": [\"default\", \"dev-policy\", \"test-policy\"],\n    \"accessor\": \"5d7fb475-07cb-4060-c2de-1ca3fcbf0c56\",\n    \"client_token\": \"98a4c7ab-b1fe-361b-ba0b-e307aacfd587\"\n  }\n}\n```\n\n-> **Application Integration:** See the [Code Example](#code-example) section\nfor a code snippet demonstrating authentication with Vault using the\nAppRole auth method.\n\n## Configuration\n\nAuth methods must be configured in advance before users or machines can\nauthenticate. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n### Via the CLI\n\n1. Enable the AppRole auth method:\n\n   ```shell-session\n   $ vault auth enable approle\n   ```\n\n1. Create a named role:\n\n   ```shell-session\n   $ vault write auth\/approle\/role\/my-role \\\n       token_type=batch \\\n       secret_id_ttl=10m \\\n       token_ttl=20m \\\n       token_max_ttl=30m \\\n       secret_id_num_uses=40\n   ```\n\n~> **Note:** If the token issued by your AppRole needs the ability to create child tokens, you will need to set `token_num_uses` to 0.\n\nFor the complete list of configuration options, please see the API\ndocumentation.\n\n1. Fetch the RoleID of the AppRole:\n\n   ```shell-session\n   $ vault read auth\/approle\/role\/my-role\/role-id\n   role_id     db02de05-fa39-4855-059b-67221c5c2f63\n   ```\n\n1. 
Get a SecretID issued against the AppRole:\n\n   ```shell-session\n   $ vault write -f auth\/approle\/role\/my-role\/secret-id\n   secret_id               6a174c20-f6de-a53c-74d2-6018fcceff64\n   secret_id_accessor      c454f7e5-996e-7230-6074-6ef26b7bcf86\n   secret_id_ttl           10m\n   secret_id_num_uses      40\n   ```\n\n### Via the API\n\n1. Enable the AppRole auth method:\n\n   ```shell-session\n   $ curl \\\n       --header \"X-Vault-Token: ...\" \\\n       --request POST \\\n       --data '{\"type\": \"approle\"}' \\\n       http:\/\/127.0.0.1:8200\/v1\/sys\/auth\/approle\n   ```\n\n1. Create an AppRole with desired set of policies:\n\n   ```shell-session\n   $ curl \\\n       --header \"X-Vault-Token: ...\" \\\n       --request POST \\\n       --data '{\"policies\": \"dev-policy,test-policy\", \"token_type\": \"batch\"}' \\\n       http:\/\/127.0.0.1:8200\/v1\/auth\/approle\/role\/my-role\n   ```\n\n1. Fetch the identifier of the role:\n\n   ```shell-session\n   $ curl \\\n       --header \"X-Vault-Token: ...\" \\\n       http:\/\/127.0.0.1:8200\/v1\/auth\/approle\/role\/my-role\/role-id\n   ```\n\n   The response will look like:\n\n   ```json\n   {\n     \"data\": {\n       \"role_id\": \"988a9dfd-ea69-4a53-6cb6-9d6b86474bba\"\n     }\n   }\n   ```\n\n1. Create a new secret identifier under the role:\n\n   ```shell-session\n   $ curl \\\n       --header \"X-Vault-Token: ...\" \\\n       --request POST \\\n        http:\/\/127.0.0.1:8200\/v1\/auth\/approle\/role\/my-role\/secret-id\n   ```\n\n   The response will look like:\n\n   ```json\n   {\n     \"data\": {\n       \"secret_id_accessor\": \"45946873-1d96-a9d4-678c-9229f74386a5\",\n       \"secret_id\": \"37b74931-c4cd-d49a-9246-ccc62d682a25\",\n       \"secret_id_ttl\": 600,\n       \"secret_id_num_uses\": 40\n     }\n   }\n   ```\n\n## Credentials\/Constraints\n\n### RoleID\n\nRoleID is an identifier that selects the AppRole against which the other\ncredentials are evaluated. 
When authenticating against this auth method's login\nendpoint, the RoleID is a required argument (via `role_id`) at all times. By\ndefault, RoleIDs are unique UUIDs, which allow them to serve as secondary\nsecrets to the other credential information. However, they can be set to\nparticular values to match introspected information by the client (for\ninstance, the client's domain name).\n\n### SecretID\n\nSecretID is a credential that is required by default for any login (via\n`secret_id`) and is intended to always be secret. (For advanced usage,\nrequiring a SecretID can be disabled via an AppRole's `bind_secret_id`\nparameter, allowing machines with only knowledge of the RoleID, or matching\nother set constraints, to fetch a token). SecretIDs can be created against an\nAppRole either via generation of a 128-bit purely random UUID by the role\nitself (`Pull` mode) or via specific, custom values (`Push` mode). Similarly to\ntokens, SecretIDs have properties like usage limits, TTLs, and expirations.\n\n#### Pull and push SecretID modes\n\nIf the SecretID used for login is fetched from an AppRole, this is operating in\nPull mode. If a \"custom\" SecretID is set against an AppRole by the client, it\nis referred to as Push mode. Push mode mimics the behavior of the deprecated\nApp-ID auth method; however, in most cases Pull mode is the better approach. The\nreason is that Push mode requires some other system to have knowledge of the\nfull set of client credentials (RoleID and SecretID) in order to create the\nentry, even if these are then distributed via different paths. 
However, in Pull\nmode, even though the RoleID must be known in order to distribute it to the\nclient, the SecretID can be kept confidential from all parties except for the\nfinal authenticating client by using [Response Wrapping](\/vault\/docs\/concepts\/response-wrapping).\n\nPush mode is available for App-ID workflow compatibility, which in some\nspecific cases is preferable, but in most cases Pull mode is more secure and\nshould be preferred.\n\n### Further constraints\n\n`role_id` is a required credential at the login endpoint. The AppRole pointed to\nby the `role_id` has constraints set on it, which dictate the other credentials\nrequired for login. The `bind_secret_id` constraint requires `secret_id` to\nbe presented at the login endpoint. Going forward, this auth method can support\nadditional constraint parameters to serve a varied set of apps. Some constraints\ndo not require a credential, but still enforce restrictions on login. For\nexample, `secret_id_bound_cidrs` will only allow logins coming from IP addresses\nbelonging to configured CIDR blocks on the AppRole.\n\n## Tutorial\n\nRefer to the following tutorials to learn more:\n\n- [AppRole Pull Authentication](\/vault\/tutorials\/auth-methods\/approle) tutorial\n  to learn how to use the AppRole auth method to generate tokens for machines or\n  apps.\n\n- [AppRole usage best\n  practices](\/vault\/tutorials\/auth-methods\/approle-best-practices) to understand\n  the recommendation for distributing the AppRole credentials to the target\n  Vault clients.\n\n## User lockout\n\n@include 'user-lockout.mdx'\n\n## API\n\nThe AppRole auth method has a full HTTP API. 
Please see the\n[AppRole API](\/vault\/api-docs\/auth\/approle) for more\ndetails.\n\n## Code example\n\nThe following example demonstrates AppRole authentication with response\nwrapping.\n\n<CodeTabs>\n\n<CodeBlockConfig>\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\n\tvault \"github.com\/hashicorp\/vault\/api\"\n\tauth \"github.com\/hashicorp\/vault\/api\/auth\/approle\"\n)\n\n\/\/ Fetches a key-value secret (kv-v2) after authenticating via AppRole.\nfunc getSecretWithAppRole() (string, error) {\n\tconfig := vault.DefaultConfig() \/\/ modify for more granular configuration\n\n\tclient, err := vault.NewClient(config)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to initialize Vault client: %w\", err)\n\t}\n\n\t\/\/ A combination of a Role ID and Secret ID is required to log in to Vault\n\t\/\/ with an AppRole.\n\t\/\/ First, let's get the role ID given to us by our Vault administrator.\n\troleID := os.Getenv(\"APPROLE_ROLE_ID\")\n\tif roleID == \"\" {\n\t\treturn \"\", fmt.Errorf(\"no role ID was provided in APPROLE_ROLE_ID env var\")\n\t}\n\n\t\/\/ The Secret ID is a value that needs to be protected, so instead of the\n\t\/\/ app having knowledge of the secret ID directly, we have a trusted orchestrator (https:\/\/learn.hashicorp.com\/tutorials\/vault\/secure-introduction?in=vault\/app-integration#trusted-orchestrator)\n\t\/\/ give the app access to a short-lived response-wrapping token (https:\/\/developer.hashicorp.com\/vault\/docs\/concepts\/response-wrapping).\n\t\/\/ Read more at: https:\/\/learn.hashicorp.com\/tutorials\/vault\/approle-best-practices?in=vault\/auth-methods#secretid-delivery-best-practices\n\tsecretID := &auth.SecretID{FromFile: \"path\/to\/wrapping-token\"}\n\n\tappRoleAuth, err := auth.NewAppRoleAuth(\n\t\troleID,\n\t\tsecretID,\n\t\tauth.WithWrappingToken(), \/\/ Only required if the secret ID is response-wrapped.\n\t)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to initialize AppRole 
auth method: %w\", err)\n\t}\n\n\tauthInfo, err := client.Auth().Login(context.Background(), appRoleAuth)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to login to AppRole auth method: %w\", err)\n\t}\n\tif authInfo == nil {\n\t\treturn \"\", fmt.Errorf(\"no auth info was returned after login\")\n\t}\n\n\t\/\/ get secret from the default mount path for KV v2 in dev mode, \"secret\"\n\tsecret, err := client.KVv2(\"secret\").Get(context.Background(), \"creds\")\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to read secret: %w\", err)\n\t}\n\n\t\/\/ data map can contain more than one key-value pair,\n\t\/\/ in this case we're just grabbing one of them\n\tvalue, ok := secret.Data[\"password\"].(string)\n\tif !ok {\n\t\treturn \"\", fmt.Errorf(\"value type assertion failed: %T %#v\", secret.Data[\"password\"], secret.Data[\"password\"])\n\t}\n\n\treturn value, nil\n}\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig>\n\n```cs\nusing System;\nusing System.Collections.Generic;\nusing System.IO;\nusing VaultSharp;\nusing VaultSharp.V1.AuthMethods;\nusing VaultSharp.V1.AuthMethods.AppRole;\nusing VaultSharp.V1.AuthMethods.Token;\nusing VaultSharp.V1.Commons;\n\nnamespace Examples\n{\n    public class ApproleAuthExample\n    {\n        const string DefaultTokenPath = \"..\/..\/..\/path\/to\/wrapping-token\";\n\n        \/\/\/ <summary>\n        \/\/\/ Fetches a key-value secret (kv-v2) after authenticating to Vault via AppRole authentication\n        \/\/\/ <\/summary>\n        public string GetSecretWithAppRole()\n        {\n            \/\/ A combination of a Role ID and Secret ID is required to log in to Vault with an AppRole.\n\t        \/\/ The Secret ID is a value that needs to be protected, so instead of the app having knowledge of the secret ID directly,\n\t        \/\/ we have a trusted orchestrator (https:\/\/developer.hashicorp.com\/vault\/tutorials\/app-integration\/secure-introduction?in=vault%2Fapp-integration#trusted-orchestrator)\n\t        
\/\/ give the app access to a short-lived response-wrapping token (https:\/\/developer.hashicorp.com\/vault\/docs\/concepts\/response-wrapping).\n\t        \/\/ Read more at: https:\/\/learn.hashicorp.com\/tutorials\/vault\/approle-best-practices?in=vault\/auth-methods#secretid-delivery-best-practices\n            var vaultAddr = Environment.GetEnvironmentVariable(\"VAULT_ADDR\");\n            if(String.IsNullOrEmpty(vaultAddr))\n            {\n                throw new System.ArgumentNullException(\"Vault Address\");\n            }\n\n            var roleId = Environment.GetEnvironmentVariable(\"APPROLE_ROLE_ID\");\n            if(String.IsNullOrEmpty(roleId))\n            {\n                throw new System.ArgumentNullException(\"AppRole Role Id\");\n            }\n            \/\/ Get the path to wrapping token or fall back on default path\n            string pathToToken = !String.IsNullOrEmpty(Environment.GetEnvironmentVariable(\"WRAPPING_TOKEN_PATH\")) ? Environment.GetEnvironmentVariable(\"WRAPPING_TOKEN_PATH\") : DefaultTokenPath;\n            string wrappingToken = File.ReadAllText(pathToToken); \/\/ placed here by a trusted orchestrator\n\n            \/\/ We need to create two VaultClient objects for authenticating via AppRole. The first is for\n            \/\/ using the unwrap utility. 
We need to initialize the client with the wrapping token.\n            IAuthMethodInfo wrappedTokenAuthMethod = new TokenAuthMethodInfo(wrappingToken);\n            var vaultClientSettingsForUnwrapping = new VaultClientSettings(vaultAddr, wrappedTokenAuthMethod);\n\n            IVaultClient vaultClientForUnwrapping = new VaultClient(vaultClientSettingsForUnwrapping);\n\n            \/\/ We pass null here instead of the wrapping token to avoid depleting its single usage\n            \/\/ given that we already initialized our client with the wrapping token\n            Secret<Dictionary<string, object>> secretIdData =  vaultClientForUnwrapping.V1.System\n                .UnwrapWrappedResponseDataAsync<Dictionary<string, object>>(null).Result;\n\n            var secretId = secretIdData.Data[\"secret_id\"]; \/\/ Grab the secret_id\n\n            \/\/ We create a second VaultClient and initialize it with the AppRole auth method and our new credentials.\n            IAuthMethodInfo authMethod = new AppRoleAuthMethodInfo(roleId, secretId.ToString());\n            var vaultClientSettings = new VaultClientSettings(vaultAddr, authMethod);\n\n            IVaultClient vaultClient = new VaultClient(vaultClientSettings);\n\n            \/\/ We can retrieve the secret from VaultClient\n            Secret<SecretData> kv2Secret = null;\n            kv2Secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(path: \"\/creds\").Result;\n\n            var password = kv2Secret.Data.Data[\"password\"];\n\n            return password.ToString();\n        }\n    }\n}\n```\n\n<\/CodeBlockConfig>\n\n<\/CodeTabs>","site":"vault"}
{"questions":"vault page title Best practices for AppRole authentication Best practices for AppRole authentication Follow best practices for AppRole authentication to secure access and validate application workload identity layout docs","answers":"---\nlayout: docs\npage_title: Best practices for AppRole authentication\ndescription: >-\n  Follow best practices for AppRole authentication to secure access and validate\n  application workload identity.\n---\n\n# Best practices for AppRole authentication\n\nAt the core of Vault's usage is authentication and authorization. Understanding the methods that Vault surfaces these to the client is the key to understanding how to configure and manage Vault.\n\n- Vault provides authentication to a client by the use of [auth methods](\/vault\/docs\/concepts\/auth).\n\n- Vault provides authorization to a client by the use of [policies](\/vault\/docs\/concepts\/policies).\n\nVault provides several internal and external authentication methods. External methods are called _trusted third-party authenticators_ such as AWS, LDAP, GitHub, and so on. A trusted third-party authenticator is not available in some situations, so Vault has an alternate approach which is **AppRole**. If another platform method of authentication is available via a trusted third-party authenticator, the best practice is to use that instead of AppRole.\n\nThis guide relies heavily on two fundamental principles for Vault: limiting both the blast-radius of an identity and the duration of authentication.\n\n### Blast-radius of an identity\n\nVault is an identity-based secrets management solution, where access to a secret is based on the known and verified identity of a client. It is crucial that authenticating identities to Vault are identifiable and only have access to the secrets they are the users of. 
Secrets should never be proxied between Vault and the secret end-user, and a client should never have access to secrets they are not the end-user of.\n\n### Duration of authentication\n\nWhen Vault verifies an entity's identity, Vault then provides that entity with a [token](\/vault\/docs\/concepts\/tokens). The client uses this token for all subsequent interactions with Vault to prove authentication, so this token should be both handled securely and have a limited lifetime. A token should live only as long as access to the secrets it authorizes is needed.\n\n## Glossary of terms\n\n- **Authentication** - The process of confirming identity. Often abbreviated to _AuthN_\n- **Authorization** - The process of verifying what an entity has access to and at what level. Often abbreviated to _AuthZ_\n- **RoleID** - The semi-secret identifier for the role that will authenticate to Vault. Think of this as the _username_ portion of an authentication pair.\n- **SecretID** - The secret identifier for the role that will authenticate to Vault. Think of this as the _password_ portion of an authentication pair.\n- **AppRole role** - The role configured in Vault that contains the authorization and usage parameters for the authentication.\n\n## What is the AppRole auth method?\n\nThe AppRole authentication method is for machine authentication to Vault. Because AppRole is designed to be flexible, it has many ways to be configured. The burden of security is on the configurator rather than a trusted third party, as is the case in other Vault auth methods.\n\nAppRole is not a trusted third-party authenticator, but a _trusted broker_ method. 
The difference is that in AppRole authentication, the onus of trust rests in a securely-managed broker system that brokers authentication between clients and Vault.\n\nThe central tenet of this security is that during the brokering of the authentication to Vault, the **RoleID** and **SecretID** are only ever together on the end-user system that needs to consume the secret.\n\nIn an AppRole authentication, there are three players:\n\n- **Vault** - The Vault service\n- **The broker** - This is the trusted and secured system that brokers the authentication.\n- **The secret consumer** - This is the final consumer of the secret from Vault.\n\n\n## Platform credential delivery method\n\nTo prevent any one system, other than the target client, from obtaining the complete set of credentials (RoleID and SecretID), the recommended implementation is to deliver those values separately through two different channels. This enables you to provide narrowly-scoped tokens to each trusted orchestrator to access either RoleID or SecretID, but never both.\n\n### RoleID delivery best practices\n\nRoleID is an identifier that selects the AppRole against which the other credentials are evaluated. Think of it as a username for an application; therefore, RoleID is not a secret value. It's a static UUID that identifies a specific role configuration. 
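\n\nAs a quick sketch (assuming the AppRole auth method is mounted at the default `approle\/` path and a role named \"jenkins\" already exists), an operator or trusted orchestrator can read the RoleID with:\n\n```shell-session\n$ vault read auth\/approle\/role\/jenkins\/role-id\n```\n\nThe `role_id` field in the response is the value to embed in the image, file, or environment variable.\n\n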
Generally, you create a role per application to ensure that each application will have a unique RoleID.\n\nBecause it is not a secret, you can embed the RoleID value into a machine image or container as a text file or environment variable.\n\nFor example:\n\n- Build an image with [Packer](\/packer\/tutorials\/) with RoleID stored as an environment variable.\n- Use [Terraform](\/terraform\/tutorials\/) to provision a machine embedded with RoleID.\n\nThere are a number of different patterns through which this value can be delivered.\n\nThe application running on the machine or container will read the RoleID from the file or environment variable to authenticate with Vault.\n\n#### Policy requirement\n\nAn appropriate policy is required to read RoleID from Vault. For example, to get the RoleID for a role named \"jenkins\", the policy should look like the following.\n\n```hcl\n# Grant 'read' permission on the 'auth\/approle\/role\/<role_name>\/role-id' path\npath \"auth\/approle\/role\/jenkins\/role-id\" {\n   capabilities = [ \"read\" ]\n}\n```\n\n### SecretID delivery best practices\n\nSecretID is a credential that is required by default for any login and is intended to always be secret. While RoleID is similar to a username, SecretID is equivalent to a password for its corresponding RoleID.\n\nThere are two additional considerations when distributing the SecretID, since it is a secret and should be secured so that only the intended recipient is able to read it.\n\n1. Binding CIDRs\n1. AppRole response wrapping\n\n#### Binding CIDRs\n\nWhen defining an AppRole, you can use the [`secret_id_bound_cidrs`](\/vault\/api-docs\/auth\/approle#secret_id_bound_cidrs) parameter to specify blocks of IP addresses which can perform the login operation for this role. 
You can further limit the IP range per token using [`token_bound_cidrs`](\/vault\/api-docs\/auth\/approle#token_bound_cidrs).\n\n**Example:**\n\n```shell-session\n$ vault write auth\/approle\/role\/jenkins \\\n      secret_id_bound_cidrs=\"0.0.0.0\/0\",\"127.0.0.1\/32\" \\\n      secret_id_ttl=60m \\\n      secret_id_num_uses=5 \\\n      enable_local_secret_ids=false \\\n      token_bound_cidrs=\"0.0.0.0\/0\",\"127.0.0.1\/32\" \\\n      token_num_uses=10 \\\n      token_ttl=1h \\\n      token_max_ttl=3h \\\n      token_type=default \\\n      period=\"\" \\\n      policies=\"default\",\"test\"\n```\n\n<Tip title=\"CIDR consideration\">\n\nWhile there is no hard limit to how many CIDR blocks you can set using the\n`token_bound_cidrs` parameter, there are limiting factors. One is the amount of\ntime it takes for Vault to compare an incoming IP address with the list provided. Another is\nthe maximum HTTP request size when you create or update the list.\n\n<\/Tip>\n\n#### AppRole response wrapping\n\nTo guarantee confidentiality, integrity, and non-repudiation of SecretID, you can use the `-wrap-ttl` flag when generating the SecretID. Instead of providing the SecretID in plaintext, it puts the SecretID into a new token\u2019s Cubbyhole with a token use count of 1. Because the wrapping token can only be used once, an application that successfully unwraps the SecretID can be confident that no one else has read it.\n\n**Example:** The following CLI command retrieves the SecretID for a role named \"jenkins\". 
The generated SecretID is wrapped in a token which is valid for 60 seconds to unwrap.\n\n```shell-session\n$ vault write -wrap-ttl=60s -force auth\/approle\/role\/jenkins\/secret-id\n\nKey                              Value\n---                              -----\nwrapping_token:                  s.yzbznr9NlZNzsgEtz3SI56pX\nwrapping_accessor:               Smi4CO0Sdhn8FJvL8XvOT30y\nwrapping_token_ttl:              1m\nwrapping_token_creation_time:    2021-06-07 20:02:01.019838 -0700 PDT\nwrapping_token_creation_path:    auth\/approle\/role\/jenkins\/secret-id\n```\n\nFinally, you can monitor your audit logs for attempted read access of your SecretID. If Vault throws a use-limit error when an application tries to read the SecretID, you know that someone else has already read the SecretID, and you can alert on that. The audit logs will indicate where the SecretID read attempt originated.\n\n#### Policy requirement\n\nAn appropriate policy is required to read SecretID from Vault. For example, to get the SecretID for a role named \"jenkins\", the policy should look like the following.\n\n```hcl\n# Grant 'update' permission on the 'auth\/approle\/role\/<role_name>\/secret-id' path\npath \"auth\/approle\/role\/jenkins\/secret-id\" {\n   capabilities = [ \"update\" ]\n}\n```\n\n## Token lifetime considerations\n\nTokens must be maintained client-side and renewed before they expire. For short-lived workflows, tokens have traditionally been created with a lifetime matching the average deploy time and then left to expire, with new tokens secured for each deployment.\n\nA long token time-to-live (TTL) can cause out-of-memory issues when Vault tries to purge millions of AppRole leases. To avoid this, we recommend that you reduce TTLs for AppRole tokens and implement token renewal where possible. 
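\n\nAs a brief sketch (assuming the client holds its own renewable token), a client can extend its token's TTL from the CLI before the token expires:\n\n```shell-session\n$ vault token renew -increment=1h\n```\n\n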
You can increase the memory on the Vault server; however, that is not a long-term solution.\n\nIn general, with any auth method, it's preferable for applications to keep using the same Vault token to fetch secrets repeatedly instead of performing a new authentication each time. Authentication is an expensive operation and results in a token that Vault must keep track of. If high authentication throughput (thousands of authentications per second) is expected, we recommend using batch tokens, which are issued from memory and do not consume storage.\n\n### Vault Agent\n\nConsider running [Vault Agent](\/vault\/docs\/agent-and-proxy\/agent) on the client host, and let the agent manage the token's lifecycle. Vault Agent reduces the number of tokens used by the client applications. In addition, it eliminates the need to implement the Vault APIs to authenticate with Vault and renew the token TTL if necessary.\n\nTo learn more about Vault Agent, read the following tutorials:\n\n- [Vault Agent with AWS](\/vault\/tutorials\/vault-agent\/agent-aws)\n- [Vault Agent with Kubernetes](\/vault\/tutorials\/kubernetes\/agent-kubernetes)\n- [Vault Agent Templates](\/vault\/tutorials\/vault-agent\/agent-templates)\n- [Vault Agent Caching](\/vault\/tutorials\/vault-agent\/agent-caching)\n\n## Jenkins CI\/CD\n\nWhen you are using Jenkins as a CI tool, Jenkins itself will need an identity; however, you should never have Jenkins log into Vault and pass a client token to the application via workflow. Jenkins needs to give the application its own identity so that the application gets its own secret. The best practice is to use Vault Agent as much as possible with Jenkins so that the Vault token is not managed by Jenkins. You can deliver a SecretID every morning or before every run for x number of uses. Let Vault Agent authenticate with Vault and get the token for Jenkins. 
Then, Jenkins uses that token for x number of operations against Vault.\n\nA key benefit of AppRole for applications is that it enables you to more easily migrate the application between platforms.\n\nWhen you use an AppRole for the application, the best practice is to obscure the RoleID from Jenkins but allow Jenkins to deliver a wrapped SecretID to the application.\n\n### Usage workflow\n\nJenkins needs to run a job requiring some data classified as secret and stored in Vault. It has a master and a worker node, where the worker node runs jobs on spawned container runners that are short-lived.\n\nThe process would look like:\n\n1. Jenkins worker authenticates to Vault\n2. Vault returns a token\n3. Worker uses token to retrieve a wrapped SecretID for the **role** of the job it will spawn\n4. Wrapped SecretID returned by Vault\n5. Worker spawns job runner and passes wrapped SecretID as a variable to the job\n6. Runner container requests unwrap of SecretID\n7. Vault returns SecretID\n8. Runner uses RoleID and SecretID to authenticate to Vault\n9. Vault returns a token with policies that allow read of the required secrets\n10. Runner uses the token to get secrets from Vault\n\n![AppRole Example](\/img\/approle-best-practices.png)\n\nHere are more details on the more complicated steps of that process.\n\n<Note title=\"Secrets wrapping\">\n\nIf you are unfamiliar with secrets wrapping, refer to the [response wrapping](\/vault\/docs\/concepts\/response-wrapping) documentation.\n\n<\/Note>\n\n#### CI worker authenticates to Vault\n\nThe CI worker will need to authenticate to Vault to retrieve wrapped SecretIDs for the AppRoles of the jobs it will spawn.\n\nIf the worker can use a platform method of authentication, then the worker should use that. Otherwise, the only option is to pre-authenticate the worker to Vault in some other way.\n\n#### Vault returns a token\n\nThe worker's Vault token should be of limited scope and should only retrieve wrapped SecretIDs. 
Because of this, the worker could be pre-seeded with a long-lived Vault token or use a hard-coded RoleID and SecretID, as this would present only a minor risk.\n\nThe policy the worker should have would be:\n\n```hcl\npath \"auth\/approle\/role\/+\/secret*\" {\n  capabilities = [ \"create\", \"read\", \"update\" ]\n  min_wrapping_ttl = \"100s\"\n  max_wrapping_ttl = \"300s\"\n}\n```\n\n#### Worker uses token to retrieve a wrapped SecretID\n\nThe CI worker now needs to be able to retrieve a wrapped SecretID.\nThis command would be something like:\n\n```shell-session\n$ vault write -wrap-ttl=120s -f auth\/approle\/role\/my-role\/secret-id\n```\n\nNotice that the worker only needs to know the **role** for the job it is spawning (`my-role` in the example above), not the RoleID.\n\n#### Worker spawns job runner and passes wrapped SecretID\n\nThis could be achieved by passing the wrapped token as an environment variable. Below is an example of how to do this in Jenkins:\n\n```plaintext\nenvironment {\n   WRAPPED_SID = \"\"\"${sh(\n                    returnStdout: true,\n                    script: '''curl --header \"X-Vault-Token: $VAULT_TOKEN\" \\\n       --header \"X-Vault-Namespace: ${PROJ_NAME}_namespace\" \\\n       --header \"X-Vault-Wrap-Ttl: 300s\" \\\n       --request POST \\\n         $VAULT_ADDR\/v1\/auth\/approle\/role\/$JOB_NAME\/secret-id \\\n         | jq -r .wrap_info.token'''\n                 )}\"\"\"\n  }\n```\n\n#### Runner uses RoleID and SecretID to authenticate to Vault\n\nThe runner would authenticate to Vault, and it would only receive the policy to read the exact secrets it needs. It could not get anything else. 
An example policy would be:\n\n```hcl\npath \"kv\/my-role_secrets\/*\" {\n  capabilities = [ \"read\" ]\n}\n```\n\n#### Implementation specifics\n\nAs additional security measures, create the required role for the app bearing in mind the following:\n\n- [`secret_id_bound_cidrs` (array: [])](\/vault\/api-docs\/auth\/approle#secret_id_bound_cidrs) - Comma-separated string or list of CIDR blocks; if set, specifies blocks of IP addresses which can perform the login operation.\n- [`secret_id_num_uses` (integer: 0)](\/vault\/api-docs\/auth\/approle#secret_id_num_uses) - Number of times any particular SecretID can be used to fetch a token from this [AppRole](#vault-approle-overview), after which the SecretID will expire. A value of zero will allow unlimited uses.\n\n<Note title=\"Recommendation\">\n\n For best security, set `secret_id_num_uses` to `1` use. Also, consider setting `secret_id_bound_cidrs` to restrict the source IP range of the connecting devices.\n\n<\/Note>\n\n## Anti-patterns\n\nConsider avoiding these anti-patterns when using Vault's AppRole auth method.\n\n### CI worker retrieves secrets\n\nThe CI worker could simply authenticate to Vault, retrieve the secrets for the job, and pass these to the runner, but this would break the first of the two best practices listed above.\n\nThe CI worker will likely have to run many different types of jobs, many of which require secrets. If you use this method, the worker would have to have the authorization (policy) to retrieve many secrets, none of which it is the consumer of. Additionally, if a single secret were to become compromised, there would be no way to tie an identity to it and initiate break-glass procedures on that identity, so all secrets would have to be considered compromised.\n\n### CI worker passes RoleID and SecretID to the runner\n\nThe worker could be authorized to Vault to retrieve the RoleID and SecretID and pass both to the runner to use. 
While this prevents the worker from being directly authorized by Vault to retrieve all secrets, the worker effectively has that capability because it holds both the RoleID and SecretID. This is against best practice.\n\n### CI worker passes a Vault token to the runner\n\nThe worker could be authorized to Vault to generate child tokens that have the authorization to retrieve secrets for the pipeline.\n\nAgain, this avoids giving the worker authorization to retrieve secrets directly, but the worker will have access to the child tokens that do have that authorization, so it is against best practices.\n\n## Security considerations\n\nIn any trusted broker situation, the broker (in this case, the Jenkins worker) must be secured and treated as a critical system. This means that users should have minimal access to it, and the access should be closely monitored and audited.\n\nAlso, as the Vault audit logs provide time-stamped events, monitor the whole process with alerts on two events:\n\n- When a wrapped SecretID is requested for an AppRole, and no Jenkins job is running\n- When the Jenkins worker attempts to unwrap the token and Vault refuses because the token has already been used\n\nIn both cases, this shows that the trusted-broker workflow has likely been compromised and the event should be investigated.\n\n## Reference materials\n\n- [How (and Why) to Use AppRole Correctly in HashiCorp Vault](https:\/\/www.hashicorp.com\/blog\/how-and-why-to-use-approle-correctly-in-hashicorp-vault)\n- [Response wrapping concept](\/vault\/docs\/concepts\/response-wrapping)\n- [ACL policies](\/vault\/docs\/concepts\/policies)\n- [Token periods and TTLs](\/vault\/docs\/concepts\/tokens#token-time-to-live-periodic-tokens-and-explicit-max-ttls)","site":"vault"}
{"questions":"vault of user verification to your authentication workflow for Vault Use basic multi factor authentication MFA with Vault to add an extra level Set up login MFA page title Set up login MFA layout docs","answers":"---\nlayout: docs\npage_title: Set up login MFA\ndescription: >-\n  Use basic multi-factor authentication (MFA) with Vault to add an extra level\n  of user verification to your authentication workflow for Vault.\n---\n\n# Set up login MFA\n\nThe underlying identity system in Vault supports multi-factor authentication\n(MFA) for authenticating to an auth method using different authentication types.\n\nMFA implementation                        | Required Vault edition\n----------------------------------------- | -----------------------\nLogin MFA                                 | Vault Community\n[Step-up MFA](\/vault\/docs\/enterprise\/mfa) | Vault Enterprise\n\n\n## Login MFA types\n\nMFA in Vault includes the following login types:\n\n~> **NOTE:** The [Token](\/vault\/docs\/auth\/token) auth method cannot be configured with Vault's built-in Login MFA feature.\n\n- `Time-based One-time Password (TOTP)` - If configured and enabled on a login path,\n  this would require a TOTP passcode along with a Vault token to be presented\n  while invoking the API login request. The passcode will be validated against the\n  TOTP key present in the caller's identity in Vault.\n\n- `Okta` - If Okta push is configured and enabled on a login path, then the enrolled\n  device of the user will receive a push notification to either approve or deny access\n  to the API. The Okta username will be derived from the caller identity's\n  alias.\n\n- `Duo` - If Duo push is configured and enabled on a login path, then the enrolled\n  device of the user will receive a push notification to either approve or deny access\n  to the API. The Duo username will be derived from the caller identity's\n  alias. 
Note that Duo could also be configured to use passcodes for authentication.\n\n- `PingID` - If PingID push is configured and enabled on a login path, the\n  enrolled device of the user will receive a push notification to either approve or deny\n  access to the API. The PingID username will be derived from the caller\n  identity's alias.\n\n## Login MFA procedure\n\n~> **NOTE:** Vault's built-in Login MFA feature does not protect against brute forcing of\nTOTP passcodes by default. We recommend that per-client [rate limits](\/vault\/docs\/concepts\/resource-quotas)\nare applied to the relevant login and\/or mfa paths (e.g. `\/sys\/mfa\/validate`). External MFA\nmethods (`Duo`, `Ping` and `Okta`) may already provide configurable rate limiting. Rate limiting of\nLogin MFA paths is enforced by default in Vault 1.10.1 and above.\n\nLogin MFA can be configured to further secure authentication to an auth method. To enable login\nMFA, an MFA method needs to be configured. Please see [Login MFA API](\/vault\/api-docs\/secret\/identity\/mfa) for details\non how to configure an MFA method. Once an MFA method is configured, an operator can configure an MFA enforcement using the returned unique MFA method ID.\nPlease see [Login MFA Enforcement API](\/vault\/api-docs\/secret\/identity\/mfa\/login-enforcement)\nfor details on how to configure an MFA enforcement config. MFA could be enforced for an entity, a group of\nentities, a specific auth method accessor, or an auth method type. A login request that matches\nany MFA enforcement restrictions is subject to further MFA validation,\nsuch as a one-time passcode, before being authenticated.\n\nThere are two ways to validate a login request that is subject to MFA validation.\n\n### Single-Phase login\n\nIn the Single-phase login, the required MFA information is embedded in a login request using\nthe `X-Vault-MFA` header. 
In this case, the MFA validation is done\nas a part of the login request.\n\nMFA credentials are retrieved from the `X-Vault-MFA` HTTP header. Before Vault 1.13.0, the format of\nthe header is `mfa_method_id[:passcode]` for TOTP, Okta, and PingID. However, for Duo, it is `mfa_method_id[:passcode=<passcode>]`.\nThe item in the `[]` is optional. From Vault 1.13.0, the format is consistent for all supported MFA methods, and one can use either of the above two formats.\nIf there are multiple MFA methods that need to be validated, a user can pass in multiple `X-Vault-MFA` HTTP headers.\n\n#### Sample request\n\n```shell-session\n$ curl \\\n    --header \"X-Vault-Token: ...\" \\\n    --header \"X-Vault-MFA: d16fd3c2-50de-0b9b-eed3-0301dadeca10:695452\" \\\n    http:\/\/127.0.0.1:8200\/v1\/auth\/userpass\/login\/alice\n```\n\nIf an MFA method does not require a passcode, the login request MFA header only contains the method ID.\n\n```shell-session\n $ curl \\\n     --header \"X-Vault-Token: ...\" \\\n     --header \"X-Vault-MFA: d16fd3c2-50de-0b9b-eed3-0301dadeca10\" \\\n     http:\/\/127.0.0.1:8200\/v1\/auth\/userpass\/login\/alice\n```\n\nStarting in Vault 1.13.0, an operator can configure a name for an MFA method.\nThis name should be unique in the namespace in which the MFA method is configured.\nThe MFA method name can be used in the MFA header.\n\n```shell-session\n$ curl \\\n    --header \"X-Vault-Token: ...\" \\\n    --header \"X-Vault-MFA: sample_mfa_method_name:695452\" \\\n    http:\/\/127.0.0.1:8200\/v1\/auth\/userpass\/login\/alice\n```\n\nIn cases where the MFA method is configured in a specific namespace, the MFA method name should be prefixed with the namespace path.\nBelow shows an example where an MFA method is configured in `ns1`.\n\n```shell-session\n$ curl \\\n    --header \"X-Vault-Token: ...\" \\\n    --header \"X-Vault-MFA: ns1\/sample_mfa_method_name:695452\" \\\n    http:\/\/127.0.0.1:8200\/v1\/auth\/userpass\/login\/alice\n```\n\n### Two-Phase 
login\n\nThe more conventional and prevalent MFA method is a two-request mechanism, also referred to as Two-phase Login MFA.\nIn Two-phase login, the `X-Vault-MFA` header is not provided in the request. In this case, after sending a regular login request,\nthe user receives an auth response in which MFA requirements are included. MFA requirements contain an MFA request ID\nwhich identifies the login request that needs validation. In addition, MFA requirements contain MFA constraints\nthat determine which MFA types should be used to validate the request, the corresponding method IDs, and\na boolean value showing whether the MFA method uses passcodes or not. MFA constraints form a nested map in MFA Requirement\nand represent all MFA enforcements that match a login request. While the example below is for the userpass login,\nnote that this can affect the login response on any auth mount protected by MFA validation.\n\n#### Sample Two-Phase login response\n\n```json\n{\n  \"request_id\": \"1044c151-13ea-1cf5-f6ed-000c42efd477\",\n  \"lease_id\": \"\",\n  \"lease_duration\": 0,\n  \"renewable\": false,\n  \"data\": null,\n  \"warnings\": [\n    \"A login request was issued that is subject to MFA validation. 
Please make sure to validate the login by sending another request to mfa\/validate endpoint.\"\n  ],\n  \"auth\": {\n    \"client_token\": \"\",\n    \"accessor\": \"\",\n    \"policies\": null,\n    \"token_policies\": null,\n    \"identity_policies\": null,\n    \"metadata\": null,\n    \"orphan\": false,\n    \"entity_id\": \"\",\n    \"lease_duration\": 0,\n    \"renewable\": false,\n    \"mfa_requirement\": {\n      \"mfa_request_id\": \"d0c9eec7-6921-8cc0-be62-202b289ef163\",\n      \"mfa_constraints\": {\n        \"enforcementConfigUserpass\": {\n          \"any\": [\n            {\n              \"type\": \"totp\",\n              \"id\": \"820997b3-110e-c251-7e8b-ff4aa428a6e1\",\n              \"uses_passcode\": true,\n              \"name\": \"sample_mfa_method_name\"\n            }\n          ]\n        }\n      }\n    }\n  }\n}\n```\n\nNote that the `uses_passcode` boolean value will always show true for TOTP, and false for Okta and PingID.\nFor the Duo method, the value can be configured as part of the method configuration, using the `use_passcode` parameter.\nPlease see [Duo API](\/vault\/api-docs\/secret\/identity\/mfa\/duo) for details\non how to configure the boolean value for Duo.\n\nTo validate the MFA-restricted login request, the user sends a second request to the [validate](\/vault\/api-docs\/system\/mfa\/validate)\nendpoint including the MFA request ID and MFA payload. 
MFA payload contains a map of methodIDs and their associated credentials.\nIf the configured MFA methods, such as PingID, Okta, and Duo, do not require a passcode, the associated\ncredentials will be a list with one empty string.\n\n#### Sample payload\n\n```json\n{\n  \"mfa_request_id\": \"5879c74a-1418-1948-7be9-97b209d693a7\",\n  \"mfa_payload\": {\n    \"d16fd3c2-50de-0b9b-eed3-0301dadeca10\": [\"910201\"]\n  }\n}\n```\n\nIf an MFA method is configured in a namespace, the MFA method name prefixed with the namespace path can be used in the validation payload.\n\n```json\n{\n  \"mfa_request_id\": \"5879c74a-1418-1948-7be9-97b209d693a7\",\n  \"mfa_payload\": {\n    \"ns1\/sample_mfa_method_name\": [\"910201\"]\n  }\n}\n```\n\n#### Sample request\n\n```shell-session\n$ curl \\\n    --header \"X-Vault-Token: ...\" \\\n    --request POST \\\n    --data @payload.json \\\n    http:\/\/127.0.0.1:8200\/v1\/sys\/mfa\/validate\n```\n\n#### Sample CLI request\n\nA user is also able to use the CLI write command to validate the login request.\n\n```shell-session\n$ vault write sys\/mfa\/validate -format=json @payload.json\n```\n\n#### Interactive CLI for login MFA\n\nVault supports an interactive way of authenticating to an auth method using CLI only if the\nlogin request is subject to a single MFA method validation. In this situation, if the MFA method\nis configured to use passcodes, after sending a regular login request, the user is prompted to\ninsert the passcode. 
Upon successful MFA validation, a client token is returned.\nIf the configured MFA methods, such as PingID, Okta, and Duo, do not require a passcode and have out of band\nmechanisms for verifying the extra factor, the user is notified to check their authenticator application.\nThis alleviates a user from sending the second request separately to validate a login request.\nTo disable the interactive login experience, a user needs to pass in the `non-interactive` flag to the login request.\n\n```shell-session\n$ vault write -non-interactive sys\/mfa\/validate -format=json @payload.json\n```\n\nTo get started with Login MFA, refer to the [Login MFA](\/vault\/tutorials\/auth-methods\/multi-factor-authentication) tutorial.\n\n\n### TOTP passcode validation rate limit\n\nRate limiting of Login MFA paths is enforced by default in Vault 1.10.1 and above.\nBy default, Vault allows for 5 consecutive failed TOTP passcode validations.\nThis value can also be configured by adding `max_validation_attempts` to the TOTP configuration.\nIf the number of consecutive failed TOTP passcode validations exceeds the configured value, the user\nneeds to wait until a fresh TOTP passcode is available.","site":"vault"}
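In the two-phase flow, the validate payload can be assembled mechanically from the login response. Below is a minimal offline sketch, assuming `jq` is installed; it reuses the sample request and method IDs from the response above, and the passcode value is an illustrative placeholder that would normally come from the user's authenticator app.

```shell
#!/bin/sh
# Save a trimmed copy of the two-phase login response (shape as in the sample above).
cat > login.json <<'EOF'
{"auth":{"mfa_requirement":{"mfa_request_id":"d0c9eec7-6921-8cc0-be62-202b289ef163","mfa_constraints":{"enforcementConfigUserpass":{"any":[{"type":"totp","id":"820997b3-110e-c251-7e8b-ff4aa428a6e1","uses_passcode":true}]}}}}}
EOF

# Pull out the request ID and the (single) method ID from the constraints map.
REQ_ID=$(jq -r '.auth.mfa_requirement.mfa_request_id' login.json)
METHOD_ID=$(jq -r '.auth.mfa_requirement.mfa_constraints[].any[0].id' login.json)

# Illustrative passcode; in practice this comes from the TOTP authenticator.
PASSCODE=695452

# Build the body expected by sys/mfa/validate: request ID plus a map of
# method ID -> list of credentials.
jq -n --arg req "$REQ_ID" --arg mid "$METHOD_ID" --arg pc "$PASSCODE" \
  '{mfa_request_id: $req, mfa_payload: {($mid): [$pc]}}' > payload.json
cat payload.json
```

The resulting `payload.json` can then be posted to `sys/mfa/validate` exactly as in the sample request above.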
{"questions":"vault Login MFA FAQ Commonly questions about Vault login MFA and multi factor authentication layout docs This FAQ section contains frequently asked questions about the Login MFA feature page title Login MFA FAQ","answers":"---\nlayout: docs\npage_title: Login MFA FAQ\ndescription: >-\n  Common questions about Vault login MFA and multi-factor authentication.\n---\n\n# Login MFA FAQ\n\nThis FAQ section contains frequently asked questions about the Login MFA feature.\n\n- [Q: What MFA features can I access if I upgrade to Vault version 1.10?](#q-what-mfa-features-can-i-access-if-i-upgrade-to-vault-version-1-10)\n- [Q: What are the various MFA workflows that are available to me as a Vault user as of Vault version 1.10, and how are they different?](#q-what-are-the-various-mfa-workflows-that-are-available-to-me-as-a-vault-user-as-of-vault-version-1-10-and-how-are-they-different)\n- [Q: What is the Legacy MFA feature?](#q-what-is-the-legacy-mfa-feature)\n- [Q: Will HCP Vault Dedicated support MFA?](#q-will-hcp-vault-support-mfa)\n- [Q: What is Single-Phase MFA vs. 
Two-Phase MFA?](#q-what-is-single-phase-mfa-vs-two-phase-mfa)\n- [Q: Are there new MFA API endpoints introduced as part of the new Vault version 1.10 MFA for login functionality?](#q-are-there-new-mfa-api-endpoints-introduced-as-part-of-the-new-vault-version-1-10-mfa-for-login-functionality)\n- [Q: How do MFA configurations differ between the Login MFA and Step-up Enterprise MFA?](#q-how-do-mfa-configurations-differ-between-the-login-mfa-and-step-up-enterprise-mfa)\n- [Q: What are the ways to configure the various MFA workflows?](#q-what-are-the-ways-to-configure-the-various-mfa-workflows)\n- [Q: What MFA mechanism is used with the different MFA workflows in Vault version 1.10?](#q-which-mfa-mechanism-is-used-with-the-different-mfa-workflows-in-vault-version-1-10)\n- [Q: Are namespaces supported with the MFA workflows that Vault has as of Vault version 1.10?](#q-are-namespaces-supported-with-the-mfa-workflows-that-vault-has-as-of-vault-version-1-10)\n- [Q: I use the Vault Agent. Does MFA pose any challenges for me?](#q-i-use-the-vault-agent-does-mfa-pose-any-challenges-for-me)\n- [Q: I am a Step-up Enterprise MFA user using MFA for login. Should I migrate to the new Login MFA?](#q-i-am-a-step-up-enterprise-mfa-user-using-mfa-for-login-should-i-migrate-to-the-new-login-mfa)\n- [Q: I am a Step-up Enterprise MFA user using MFA for login. What are the steps to migrate to Login MFA?](#q-i-am-a-step-up-enterprise-mfa-user-using-mfa-for-login-what-are-the-steps-to-migrate-to-login-mfa)\n\n### Q: what MFA features can i access if i upgrade to Vault version 1.10?\n\nVault supports Step-up Enterprise MFA as part of our Enterprise edition. The Step-up Enterprise MFA provides MFA on login, or for step-up access to sensitive resources in Vault using ACL and Sentinel policies, and is configurable through the CLI\/API.\n\nStarting with Vault version 1.10, Vault Community Edition provides [MFA on login](\/vault\/docs\/auth\/login-mfa) only. 
This is also available with Vault Enterprise and configurable through the CLI\/API.\n\nThe Step-up Enterprise MFA will co-exist with the newly introduced Login MFA starting with Vault version 1.10.\n\n### Q: what are the various MFA workflows that are available to me as a Vault user as of Vault version 1.10, and how are they different?\n\n| MFA workflow                                   | What does it do?                                                                                                                                                                                                                                                                                    | Who manages the MFA?              | Community vs. Enterprise Support    |\n| ---------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | ----------------------------- |\n| [Login MFA](\/vault\/docs\/auth\/login-mfa)              | MFA in Vault Community Edition provides MFA on login. CLI, API, and UI-based login are supported.                                                                                                                                                                                                                 | MFA is managed by Vault           | Supported in Vault Community Edition        |\n| [Okta Auth MFA](\/vault\/docs\/auth\/okta#mfa)           | This is MFA as part of [Okta Auth method](\/vault\/docs\/auth\/okta) in Vault Community Edition, where MFA is enforced by Okta on login. MFA must be satisfied for authentication to be successful. This is different from the Okta MFA method used with Login MFA and Step-up Enterprise MFA. 
CLI\/API login are supported. | MFA is managed externally by Okta | Supported in Vault Community Edition        |\n| [Step-up Enterprise MFA](\/vault\/docs\/enterprise\/mfa) | MFA in Vault Enterprise provides MFA for login and for step-up access to sensitive resources in Vault. Supports CLI\/API based login, and ACL\/Sentinel policies.                                                                                                                                     | MFA is managed by Vault           | Supported in Vault Enterprise |\n\n~> **Note**: [The Legacy MFA](\/vault\/docs\/v1.10.x\/auth\/mfa) is a **deprecated** MFA workflow in Vault Community Edition. Refer [here](#q-what-is-the-legacy-mfa-feature) for more details.\n\n### Q: what is the legacy MFA feature?\n\n[Legacy MFA](\/vault\/docs\/v1.10.x\/auth\/mfa) is functionality that was available in Vault Community Edition, prior to introducing MFA in the Enterprise version. This is now a deprecated feature. Please see the [Vault Feature Deprecation Notice and Plans](\/vault\/docs\/deprecation) for detailed product plans around deprecated features. We plan to remove Legacy MFA in 1.11.\n\n### Q: will HCP Vault Dedicated support MFA?\n\nYes, HCP Vault Dedicated will support MFA across all tiers and offerings as part of the April 2022 release.\n\n### Q: what is Single-Phase MFA vs. 
Two-Phase MFA?\n\n- **Single-Phase MFA:** This is a single request mechanism where the required MFA information, such as MFA method ID, is provided via the X-Vault-MFA header in a single MFA request that is used to authenticate into Vault.\n\n~> **Note**: If the configured MFA methods need a passcode, it needs to be provided in the request, such as in the case of TOTP or Duo.\nIf the configured MFA methods, such as PingID, Okta, or Duo, do not require a passcode and have out of band mechanisms for verifying the extra factor, Vault will send an inquiry to the other service's APIs to determine whether the MFA request has yet been verified.\n\n- **Two-Phase MFA:** This is a two-request MFA method that is more conventionally used.\n  - The MFA passcode required for the configured MFA method is not provided in a header of the login request that is MFA-restricted. Instead, the user first authenticates to the auth method, and on successful authentication to the auth method, an MFA requirement is returned to the user. The MFA requirement contains the MFA RequestID and constraints applicable to the MFA as configured by the operator.\n  - The user then must make a second request to the new endpoint `sys\/mfa\/validate`, providing the MFA RequestID in the request, and an MFA payload which includes the MFA methodIDs passcode (if applicable). If MFA validation passes, the new Vault token will be persisted and returned to the user in the response, just like a regular Vault token created using a non-MFA-restricted auth method.\n\n### Q: are there new MFA API endpoints introduced as part of the new Vault version 1.10 MFA for login functionality?\n\nYes, this feature adds the following new MFA configuration endpoints: `identity\/mfa\/method`, `identity\/mfa\/login-enforcement`, and `sys\/mfa\/validate`. 
Refer to the [documentation](\/vault\/api-docs\/secret\/identity\/mfa\/duo) for more details.\n\n### Q: how do MFA configurations differ between the login MFA and step-up enterprise MFA?\n\nAll MFA methods supported with the Step-up Enterprise MFA are supported with the Login MFA, but they use different API endpoints:\n\n- Step-up Enterprise MFA: `sys\/mfa\/method\/:type\/:\/name`\n- Login MFA: `identity\/mfa\/method\/:type`\n\nThere are also two differences in how methods are defined in the two systems.\nThe Step-up Enterprise MFA expects the method creator to specify a name for the method; Login MFA does not, and instead returns an ID when a method is created.\nThe Step-up Enterprise MFA uses the combination of mount accessors plus a `username_format` template string, whereas in Login MFA, these are combined into a single field `username_format`, which uses the same identity [templating format](\/vault\/docs\/concepts\/policies#templated-policies) as used in policies.\n\n### Q: what are the ways to configure the various MFA workflows?\n\n| MFA workflow                                   | Configuration methods                                                                     | Details                                                                                                                                                                                                                                                                                     |\n| ---------------------------------------------- | ----------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| [Login MFA](\/vault\/docs\/auth\/login-mfa)              | CLI\/API. 
The UI does not support the configuration of Login MFA as of Vault version 1.10. | Configured using the `identity\/mfa\/method` endpoints, then passing those method IDs to the `identity\/mfa\/login-enforcement` endpoint. MFA methods supported: TOTP, Okta, Duo, PingID.                                                                                                       |\n| [Okta Auth MFA](\/vault\/docs\/auth\/okta)               | CLI\/API                                                                                   | MFA methods supported: [TOTP](https:\/\/help.okta.com\/en\/prod\/Content\/Topics\/Security\/mfa-totp-seed.htm) , [Okta Verify Push](https:\/\/help.okta.com\/en\/prod\/Content\/Topics\/Mobile\/ov-admin-config.htm). |\n| [Step-up Enterprise MFA](\/vault\/docs\/enterprise\/mfa) | CLI\/API                                                                                   | [Configured](\/vault\/api-docs\/system\/mfa) using the `sys\/mfa\/method` endpoints and by referencing those methods in policies. MFA Methods supported: TOTP, Okta, Duo, PingID                                                                                                                             |\n\n### Q: which MFA mechanism is used with the different MFA workflows in Vault version 1.10?\n\n| MFA workflow                                   | UI        | CLI\/API                                                                                                                                   | Single-Phase                | Two-Phase                   |\n| ---------------------------------------------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------- | --------------------------- | --------------------------- |\n| [Login MFA](\/vault\/docs\/auth\/login-mfa)              | Supported | Supported. You can select single-phase MFA by supplying the X-Vault-MFA header. 
In the absence of this header, the Two-Phase MFA is used | N\/A | Supported |\n| [Okta Auth MFA](\/vault\/docs\/auth\/okta) | N\/A | N\/A | MFA is not managed by Vault | MFA is not managed by Vault |\n| [Step-up Enterprise MFA](\/vault\/docs\/enterprise\/mfa) | N\/A | Supported | Supported | N\/A |\n\n### Q: are namespaces supported with the MFA workflows that Vault has as of Vault version 1.10?\n\nThe Step-up Enterprise MFA configurations can only be configured in the root [namespace](\/vault\/docs\/enterprise\/mfa#namespaces), although they can be referenced in other namespaces via policies.\nThe Login MFA supports namespace awareness. Users need a Vault Enterprise license to use or configure Login MFA with namespaces. MFA method configurations can be defined per namespace with Login MFA, and used in enforcements defined in that namespace and its children. Everything operates in the root namespace in Vault Community Edition. MFA login enforcements can also be defined per namespace, and applied to that namespace and its children.\n\n### Q: i use the Vault agent. does MFA pose any challenges for me?\n\nThe Vault Agent should not use MFA to authenticate to Vault; it should be able to relay requests with MFA-related headers to Vault successfully.\n\n### Q: i am a step-up enterprise MFA user using MFA for login. 
should i migrate to the new login MFA?\n\nIf you are currently using Enterprise MFA, evaluate your MFA-specific use cases to determine whether or not you should migrate to [Login MFA](\/vault\/docs\/auth\/login-mfa).\n\nHere are some considerations:\n\n- If you use the Step-up Enterprise MFA for login (with Sentinel EGP), you may find value in the simpler Login MFA workflow. We recommend that you test this out to evaluate whether it meets all your requirements.\n- If you use the Step-up Enterprise MFA for more than login, be aware that the new MFA workflow only supports the login use case. You will still need to use the Step-up Enterprise MFA for non-login use cases.\n\n### Q: i am a step-up enterprise MFA user using MFA for login. what are the steps to migrate to login MFA?\n\nRefer to the question [Q: I am a Step-up Enterprise MFA user using MFA for login. Should I migrate to the new Login MFA?](#q-i-am-a-step-up-enterprise-mfa-user-using-mfa-for-login-should-i-migrate-to-the-new-login-mfa) to evaluate whether or not you should migrate.\n\nIf you wish to migrate to Login MFA, follow these steps and guidelines to migrate successfully.\n\n1. First, create new MFA methods using the `identity\/mfa\/method` endpoints. These should mostly use the same fields as the MFA methods you defined under `sys\/mfa`, while keeping the following in mind:\n\n   - the new endpoints yield an ID instead of allowing you to define a name\n\n   - the new non-TOTP endpoints have a `username_format` field instead of `username_format` + `mount_accessor` fields; see [Templated Policies](\/vault\/docs\/concepts\/policies#templated-policies) for the `username_format` format.\n\n1. 
Instead of writing Sentinel EGP rules to require that logins use MFA, use the `identity\/mfa\/login-enforcement` endpoint to specify the MFA methods.","site":"vault"}
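The two migration steps above can be sketched as a CLI session. This is a minimal sketch for a TOTP method; the issuer value, enforcement name, and method ID are illustrative placeholders, and `auth_method_types` is one of several ways to scope an enforcement (the Okta, Duo, and PingID method endpoints follow the same flow):

```shell-session
# Create a Login MFA method; note that an ID is returned rather than a name being supplied
$ vault write identity/mfa/method/totp issuer=Vault period=30

Key          Value
---          -----
method_id    b16f1b1e-...

# Reference the method ID in a login enforcement instead of a Sentinel EGP rule
$ vault write identity/mfa/login-enforcement/mfa-on-login \
    mfa_method_ids="b16f1b1e-..." \
    auth_method_types="userpass"
```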
{"questions":"vault Use Active Directory Federation Services for SAML layout docs include alerts enterprise and hcp mdx Use Active Directory Federation Services AD FS as a SAML provider for Vault page title Use Active Directory Federation Services for SAML","answers":"---\nlayout: docs\npage_title: Use Active Directory Federation Services for SAML\ndescription: >-\n  Use Active Directory Federation Services (AD FS) as a SAML provider for Vault.\n---\n\n# Use Active Directory Federation Services for SAML\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nConfigure your Vault instance to work with Active Directory Federation Services\n(AD FS) and use AD FS accounts for SAML authentication.\n\n\n\n## Before you start\n\n- **You must have Vault Enterprise or HCP Vault v1.15.5+**.\n- **You must be running AD FS on Windows Server**.\n- **You must have a [SAML plugin](\/vault\/docs\/auth\/saml) enabled**.\n- **You must have a Vault admin token**. If you do not have a valid admin\n   token, you can generate a new token in the Vault GUI or using\n   [`vault token create`](\/vault\/docs\/commands\/token\/create) with the Vault CLI.\n\n\n\n## Step 1: Enable the SAML authN method for Vault\n\n<Tabs>\n\n<Tab heading=\"Vault CLI\" group=\"cli\">\n\n1. Set the `VAULT_ADDR` environment variable to your Vault instance URL. For\n   example:\n\n   ```shell-session\n   $ export VAULT_ADDR=\"https:\/\/myvault.example.com:8200\"\n   ```\n\n1. Set the `VAULT_TOKEN` environment variable with your admin token:\n\n   ```shell-session\n   $ export VAULT_TOKEN=\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\"\n   ```\n\n1. Enable the SAML plugin. Use the `-namespace` flag to enable the plugin under\n   a specific namespace. For example:\n\n   ```shell-session\n   $ vault -namespace=ns_admin auth enable saml\n   ```\n\n<\/Tab>\n\n<Tab heading=\"Vault GUI\" group=\"gui\">\n\n@include 'gui-instructions\/enable-authn-plugin.mdx'\n\n- Enable the SAML plugin:\n\n    1. Select the **SAML** token.\n    1. 
Set the mount path.\n    1. Click **Enable Method**.\n\n<\/Tab>\n\n<\/Tabs>\n\n\n\n## Step 2: Create a new relying party trust in AD\n\n1. Open your Windows Server UI.\n\n1. Go to the **Server Manager** screen.\n\n1. Click **Tools** and select **AD FS Management**.\n\n1. Right-click **Relying Party Trusts** and select **Add Relying Party Trust...**.\n\n1. Follow the prompts to create a new party trust with the following settings:\n\n    | Option                                                | Setting\n    | ----------------------------------------------------- | -------\n    | Claims aware                                          | checked\n    | Enter data about relying party manually               | checked\n    | Display name                                          | \"Vault\"\n    | Certificates                                          | None\n    | Enable support for the SAML 2.0 WebSSO protocol       | checked\n    | SAML callback URL                                     | Callback endpoint for your SAML plugin\n    | Relying party trust identifier                        | Any meaningful, unique string. For example \"VaultIdentifier\"\n    | Access control policy                                 | Any valid policy or `Permit everyone`\n    | Configure claims issuance policy for this application | checked\n\n<Tip>\n\n  The callback endpoint for your SAML plugin is:\n\n  `https:\/\/${VAULT_ADDRESS}\/v1\/<NAMESPACE>\/auth\/<MOUNT_PATH>\/<PLUGIN_NAME>\/callback`\n  \n  For example, if you mounted the plugin under the `ns_admin` namespace on the\n  path `org\/security`, the callback endpoint URL would be:\n\n  `https:\/\/${VAULT_ADDRESS}\/v1\/ns_admin\/auth\/org\/security\/saml\/callback`\n\n<\/Tip>\n\n\n\n## Step 3: Configure the claim issuance policy in AD\n\n1. Open your Windows Server UI.\n\n1. Go to the **Server Manager** screen.\n\n1. Click **Tools** and select **AD FS Management**.\n\n1. 
Right-click your new **Relying Party Trust** entry and select\n   **Edit Claim Issuance Policy...**.\n\n1. Click **Add Rule...** and follow the prompts to create a new **Transform\n   Claim Rule** with the following settings:\n\n    | Option                          | Setting\n    | ------------------------------- | -------\n    | Send LDAP Attributes as Claims  | selected\n    | Rule name                       | Any meaningful string (e.g., \"Vault SAML Claims\")\n    | Attribute store                 | `Active Directory`.\n\n1. Complete the LDAP attribute array with the following settings:\n\n    | LDAP attribute                     | Outgoing claim type           |\n    |------------------------------------|-------------------------------|\n    | `E-Mail-Addresses`                 | `Name ID`                     |\n    | `E-Mail-Addresses`                 | `E-Mail Address`              |\n    | `Token-Groups - Unqualified Names` | `groups` or `Group`           |\n\n\n\n## Step 4: Update the SAML signature in AD\n\n1. Open a PowerShell terminal on your Windows server.\n\n1. Set the SAML signature for your relying party trust identifier to `false`:\n\n   ```powershell\n   Set-ADFSRelyingPartyTrust `\n    -TargetName \"<RELYING_PARTY_TRUST_IDENTIFIER>\" `\n    -SignedSamlRequestsRequired $false\n   ```\n\n   For example:\n\n  <CodeBlockConfig hideClipboard>\n\n   ```powershell\n   Set-ADFSRelyingPartyTrust `\n    -TargetName \"MyVaultIdentifier\" `\n    -SignedSamlRequestsRequired $false\n   ```\n  <\/CodeBlockConfig>\n\n\n## Step 5: Create a default AD FS role in Vault\n\nUse the Vault CLI to create a default role for users authenticating\nwith AD FS where:\n\n- `SAML_PLUGIN_PATH` is the full path (`<NAMESPACE>\/MOUNT_PATH\/NAME`) to your\n  SAML plugin.\n- `VAULT_ROLE` is the name of your new AD FS role. 
For example, `adfs-default`.\n- `DOMAIN_LIST` is a comma-separated list of target domains in Active Directory.\n  For example: `*@example.com,*@ext.example.com`.\n- `GROUP_ATTRIBUTES_REF` is:\n    - `groups` if your LDAP token group is `groups`\n    - `http:\/\/schemas.xmlsoap.org\/claims\/Group` if your LDAP token group is\n      `Group`\n- `AD_GROUP_LIST` is a comma-separated list of Active Directory groups that\n  will authenticate with SAML. For example: `VaultAdmin,VaultUser`.\n\n```shell-session\n$ vault write <SAML_PLUGIN_PATH>\/role\/<VAULT_ROLE>  \\\n    bound_subjects=\"<DOMAIN_LIST>\"                  \\\n    bound_subjects_type=\"glob\"                      \\\n    groups_attribute=<GROUP_ATTRIBUTES_REF>         \\\n    bound_attributes=groups=\"<AD_GROUP_LIST>\"       \\\n    token_policies=\"default\"                        \\\n    ttl=\"1h\"\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault write auth\/saml\/role\/adfs-default             \\\n    bound_subjects=\"*@example.com,*@ext.example.com\"  \\\n    bound_subjects_type=\"glob\"                        \\\n    groups_attribute=groups                           \\\n    bound_attributes=groups=\"VaultAdmin,VaultUser\"    \\\n    token_policies=\"default\"                          \\\n    ttl=\"1h\"\n```\n\n<\/CodeBlockConfig>\n\n\n\n## Step 6: Configure the SAML plugin in Vault\n\nUse the Vault CLI to finish configuring the SAML plugin where:\n\n- `SAML_PLUGIN_PATH` is the full path to your SAML plugin:\n  `<NAMESPACE>\/auth\/<MOUNT_PATH>\/<PLUGIN_NAME>`.\n- `VAULT_ROLE` is the name of your new AD FS role in Vault.\n- `TRUST_IDENTIFIER` is the ID of your new relying party trust in AD FS.\n- `SAML_CALLBACK_URL` is the callback endpoint for your SAML plugin:\n  `https:\/\/${VAULT_ADDR}\/v1\/<NAMESPACE>\/auth\/<MOUNT_PATH>\/<PLUGIN_NAME>\/callback`.\n- `ADFS_URL` is the discovery URL for your AD FS instance.\n- `METADATA_FILE_PATH` is the path on your AD FS instance to the 
federation\n  metadata file.\n\n```shell-session\n$ vault write <SAML_PLUGIN_PATH>\/config \\\n    default_role=\"<VAULT_ROLE>\"         \\\n    entity_id=\"<TRUST_IDENTIFIER>\"      \\\n    acs_urls=\"<SAML_CALLBACK_URL>\"      \\\n    idp_metadata_url=\"<ADFS_URL>\/<METADATA_FILE_PATH>\"\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault write ns_admin\/auth\/org\/security\/saml\/config                    \\\n  default_role=\"adfs-default\"                                           \\\n  entity_id=\"MyVaultIdentifier\"                                         \\\n  acs_urls=\"${VAULT_ADDR}\/v1\/ns_admin\/auth\/org\/security\/saml\/callback\"  \\\n  idp_metadata_url=\"https:\/\/adfs.example.com\/metadata\/2007-06\/federationmetadata.xml\"\n```\n\n<\/CodeBlockConfig>\n\n\n\n## Next steps\n\n- [Link your Active Directory groups to Vault](\/vault\/docs\/auth\/saml\/link-vault-group-to-ad)\n- [Troubleshoot your SAML + AD FS configuration](\/vault\/docs\/auth\/saml\/troubleshoot-adfs)","site":"vault","answers_cleaned":"
callback endpoint for your SAML plugin     http     VAULT ADDR   NAMESPACE  auth  MOUNT PATH   PLUGIN NAME  callback      ADFS URL  is the discovery URL for your AD FS instance     METADATA FILE PATH  is the path on your AD FS instance to the federation   metadata file      shell session   vault write  SAML PLUGIN PATH  config       default role   VAULT ROLE                 entity id   TRUST IDENTIFIER              acs urls   SAML CALLBACK URL              idp metadata url   AD FS URL   METADATA FILE PATH        For example    CodeBlockConfig hideClipboard      shell session   vault write ns admin auth org security saml config                        default role  adfs default                                                entity id  MyVaultIdentifier                                              acs urls    VAULT ADDR  v1 ns admin auth org security saml callback       idp metadata url  https   adfs example com metadata 2007 06 federationmetadata xml         CodeBlockConfig        Next steps     Link your Active Directory groups to Vault   vault docs auth saml link vault group to ad     Troubleshoot your SAML   AD FS configuration   vault docs auth saml troubleshoot adfs "}
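The flattened record above composes the SAML plugin's callback endpoint from the Vault address, namespace, and mount path. As a minimal shell sketch of that composition (all values illustrative; the pattern follows the namespace examples elsewhere in these docs, e.g. `https://my.vault/v1/org/security/auth/saml/callback`):

```shell
# Illustrative only: compose the assertion consumer service (ACS) callback URL
# for a SAML auth method mounted inside a Vault namespace.
vault_addr="https://my.vault"      # assumed Vault address
namespace_path="org/security/"     # namespace path, with trailing slash
mount_path="saml"                  # mount path of the SAML auth method

callback_url="${vault_addr}/v1/${namespace_path}auth/${mount_path}/callback"
echo "${callback_url}"
```

Registering exactly this URL as the relying party's SAML callback in AD FS keeps the Vault and AD FS sides of the trust consistent.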
{"questions":"vault page title Set up SAML authN Use SAML authentication with Vault to authenticate Vault users with public keys or certificates and a SAML identity provider layout docs Set up SAML authentication","answers":"---\nlayout: docs\npage_title: Set up SAML authN\ndescription: >-\n  Use SAML authentication with Vault to authenticate Vault users with public\n  keys or certificates and a SAML identity provider.\n---\n\n# Set up SAML authentication\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nThe `saml` auth method allows users to authenticate with Vault using their identity\nwithin a [SAML V2.0](https:\/\/saml.xml.org\/saml-specifications) identity provider.\nAuthentication is suited for human users by requiring interaction with a web browser.\n\n## Authentication\n\n<Tabs>\n\n<Tab heading=\"Vault CLI\">\n\nThe CLI login defaults to the `\/saml` path. If this auth method was enabled at a\ndifferent path, specify `-path=\/my-path` in the CLI.\n\n```shell-session\n$ vault login -method=saml role=admin\n\nComplete the login via your SAML provider. Launching browser to:\n\n    https:\/\/company.okta.com\/app\/vault\/abc123eb9xnIfzlaf697\/sso\/saml?SAMLRequest=fJI9b9swEIZ3%2FwqBu0SJ%2FpBDRAZce4iBtDViN0MX40Sda...\n```\n\nThe CLI opens the default browser to the generated URL where users must authenticate\nwith the configured SAML identity provider. The URL may be manually entered into the\nbrowser if it cannot be automatically opened.\n\nThe CLI login behavior may be customized with the following optional parameters:\n\n- `skip_browser` (default: `false`): If set to `true`, automatic launching of the default\n  browser will be skipped. The SAML identity provider URL must be manually entered in a\n  browser to complete the authentication flow.\n- `abort_on_error` (default: `false`): If set to `true`, the CLI returns an error and\n  exits with a non-zero value if it cannot launch the default browser.\n\n<\/Tab>\n\n<Tab heading=\"Vault UI\">\n\n1. 
Select \"SAML\" from the \"Method\" select box.\n1. Enter a role name for the \"Role\" field or leave blank to use\n   the [default role](\/vault\/api-docs\/auth\/saml#default_role).\n1. Press **Sign in with SAML Provider** and complete the authentication with the\n   configured SAML identity provider.\n\n<\/Tab>\n\n<\/Tabs>\n\n## Configuration\n\nAuth methods must be configured in advance before users or machines can\nauthenticate. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the SAML authentication method with the `auth enable` CLI command:\n\n   ```shell-session\n   $ vault auth enable saml\n   ```\n\n1. Use the `\/config` endpoint to save the configuration of your SAML identity provider and\n   set the default role. You can configure the trust relationship with the SAML Identity\n   Provider by either providing a URL for its Metadata document:\n\n   ```shell-session\n   $ vault write auth\/saml\/config \\\n      default_role=\"admin\" \\\n      idp_metadata_url=\"https:\/\/company.okta.com\/app\/abc123eb9xnIfzlaf697\/sso\/saml\/metadata\" \\\n      entity_id=\"https:\/\/my.vault\/v1\/auth\/saml\" \\\n      acs_urls=\"https:\/\/my.vault\/v1\/auth\/saml\/callback\"\n   ```\n\n   or by setting the configuration Metadata manually:\n\n   ```shell-session\n   $ vault write auth\/saml\/config \\\n      default_role=\"admin\" \\\n      idp_sso_url=\"https:\/\/company.okta.com\/app\/abc123eb9xnIfzlaf697\/sso\/saml\" \\\n      idp_entity_id=\"https:\/\/www.okta.com\/abc123eb9xnIfzlaf697\" \\\n      idp_cert=\"@path\/to\/cert.pem\" \\\n      entity_id=\"https:\/\/my.vault\/v1\/auth\/saml\" \\\n      acs_urls=\"https:\/\/my.vault\/v1\/auth\/saml\/callback\"\n   ```\n\n1. 
Create a named role:\n\n   ```shell-session\n   $ vault write auth\/saml\/role\/admin \\\n       bound_subjects=\"*@hashicorp.com\" \\\n       bound_subjects_type=\"glob\" \\\n       token_policies=\"writer\" \\\n       bound_attributes=group=\"admin\" \\\n       ttl=\"1h\"\n   ```\n\n   This role authorizes users that have a subject with an `@hashicorp.com` suffix and\n   are in the `admin` group to authenticate. It also gives the resulting Vault token a\n   time-to-live of 1 hour and the `writer` policy.\n\nRefer to the SAML [API documentation](\/vault\/api-docs\/auth\/saml) for a\ncomplete list of configuration options.\n\n### Assertion consumer service URLs\n\nThe [`acs_urls`](\/vault\/api-docs\/auth\/saml#acs_urls) configuration parameter determines\nwhere the SAML response will be sent after users authenticate with the configured SAML\nidentity provider in their browser.\n\nThe values provided to Vault must:\n\n- Match or be a subset of the configured values for the SAML application within the\n  configured identity provider.\n- Be directed to the auth method's [assertion consumer service\n  callback](\/vault\/api-docs\/auth\/saml#assertion-consumer-service-callback) API.\n\n<Note>\n  It is highly recommended and enforced by some identity providers to TLS-protect the\n  assertion consumer service URLs. A warning will be returned from Vault if any of the\n  configured assertion consumer service URLs are not protected by TLS.\n<\/Note>\n\n#### Configuration for replication\n\nTo support a single auth method mount being used across Vault [replication](\/vault\/docs\/enterprise\/replication)\nclusters, `acs_urls` supports configuration of multiple values. 
For example, to support\nSAML authentication on a primary and secondary Vault cluster, the following `acs_urls`\nconfiguration could be given:\n\n   ```shell-session\n   $ vault write auth\/saml\/config \\\n      acs_urls=\"https:\/\/primary.vault\/v1\/auth\/saml\/callback,https:\/\/secondary.vault\/v1\/auth\/saml\/callback\"\n   ```\n\nThe Vault UI and CLI will automatically request the proper assertion consumer service URL\nfor the cluster they're configured to communicate with. This means that the entirety of the\nauthentication flow will stay within the targeted cluster.\n\n#### Configuration for namespaces\n\nThe SAML auth method can be used within Vault [namespaces](\/vault\/docs\/enterprise\/namespaces).\nThe assertion consumer service URLs configured in both Vault and the identity provider must\ninclude the namespace path segment.\n\nThe following table provides assertion consumer service URLs given different namespace paths:\n\n| Namespace path  | Assertion consumer service URL                        |\n|-----------------|-------------------------------------------------------|\n| `admin\/`        | `https:\/\/my.vault\/v1\/admin\/auth\/saml\/callback`        |\n| `org\/security\/` | `https:\/\/my.vault\/v1\/org\/security\/auth\/saml\/callback` |\n\n### Bound attributes\n\nOnce the user has been authenticated the authorization flow will validate\nthat both the [`bound_subjects`](\/vault\/api-docs\/auth\/saml#bound_subjects) and\n[`bound_attributes`](\/vault\/api-docs\/auth\/saml#bound_attributes) match expected\nvalues configured for the role. 
This can be used to restrict access to Vault for\na subset of users in the SAML identity provider.\n\nFor example, a role with `bound_subjects=*@hashicorp.com` and\n`bound_attributes=groups=support,engineering` will only authorize users whose subject has\nan `@hashicorp.com` suffix and that are in either the `support` or `engineering` group.\n\nWhether it should be an exact match or interpret `*` as a wildcard can be\ncontrolled by the [`bound_subjects_type`](\/vault\/api-docs\/auth\/saml#bound_subjects_type) and\n[`bound_attributes_type`](\/vault\/api-docs\/auth\/saml#bound_attributes_type) parameters.\n\n### Bound attributes for the Microsoft identity platform\n\nThe bound attributes for the Microsoft identity platform require\n`http:\/\/schemas.microsoft.com\/ws\/2008\/06\/identity\/claims\/groups` as the\nattribute name along with your group membership values. For example, a role with\n`bound_attributes=http:\/\/schemas.microsoft.com\/ws\/2008\/06\/identity\/claims\/groups=\"GROUP1_OBJECT_ID,GROUP2_OBJECT_ID\"`\nwill only authorize users that are in either the `GROUP1_OBJECT_ID` or\n`GROUP2_OBJECT_ID` group.\n\nYou can read more at the Microsoft identity platform's\n[SAML token claims reference](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/reference-saml-tokens).\n\n## API\n\nThe SAML authentication plugin has a full HTTP API. 
Refer to the\n[SAML API documentation](\/vault\/api-docs\/auth\/saml) for more details","site":"vault","answers_cleaned":"    layout  docs page title  Set up SAML authN description       Use SAML authentication with Vault to authenticate Vault users with public   keys or certificates and a SAML identity provider         Set up SAML authentication   include  alerts enterprise and hcp mdx   The  saml  auth method allows users to authentication with Vault using their identity within a  SAML V2 0  https   saml xml org saml specifications  identity provider  Authentication is suited for human users by requiring interaction with a web browser      Authentication   Tabs    Tab heading  Vault CLI    The CLI login defaults to the   saml  path  If this auth method was enabled at a different path  specify   path  my path  in the CLI      shell session   vault login  method saml role admin  Complete the login via your SAML provider  Launching browser to       https   company okta com app vault abc123eb9xnIfzlaf697 sso saml SAMLRequest fJI9b9swEIZ3 2FwqBu0SJ 2FpBDRAZce4iBtDViN0MX40Sda         The CLI opens the default browser to the generated URL where users must authenticate with the configured SAML identity provider  The URL may be manually entered into the browser if it cannot be automatically opened   The CLI login behavior may be customized with the following optional parameters      skip browser   default   false    If set to  true   automatic launching of the default   browser will be skipped  The SAML identity provider URL must be manually entered in a   browser to complete the authentication flow     abort on error   default   false    If set to  true   the CLI returns an error and   exits with a non zero value if it cannot launch the default browser     Tab    Tab heading  Vault UI    1  Select  SAML  from the  Method  select box  1  Enter a role name for the  Role  field or leave blank to use    the  default role   vault api docs auth saml default role   1  Press   
Sign in with SAML Provider   and complete the authentication with the    configured SAML identity provider     Tab     Tabs      Configuration  Auth methods must be configured in advance before users or machines can authenticate  These steps are usually completed by an operator or configuration management tool   1  Enable the SAML authentication method with the  auth enable  CLI command         shell session      vault auth enable saml         1  Use the   config  endpoint to save the configuration of your SAML identity provider and    set the default role  You can configure the trust relationship with the SAML Identity    Provider by either providing a URL for its Metadata document         shell session      vault write auth saml config         default role  admin          idp metadata url  https   company okta com app abc123eb9xnIfzlaf697 sso saml metadata          entity id  https   my vault v1 auth saml          acs urls  https   my vault v1 auth saml callback             or by setting the configuration Metadata manually         shell session      vault write auth saml config         default role  admin          idp sso url  https   company okta com app abc123eb9xnIfzlaf697 sso saml          idp entity id  https   www okta com abc123eb9xnIfzlaf697          idp cert   path to cert pem          entity id  https   my vault v1 auth saml          acs urls  https   my vault v1 auth saml callback          1  Create a named role         shell session      vault write auth saml role admin          bound subjects    hashicorp com           bound subjects type  glob           token policies  writer           bound attributes group  admin           ttl  1h             This role authorizes users that have a subject with an   hashicorp com  suffix and    are in the  admin  group to authenticate  It also gives the resulting Vault token a    time to live of 1 hour and the  writer  policy   Refer to the SAML  API documentation   vault api docs auth saml  for a complete list of 
configuration options       Assertion consumer service URLs  The   acs urls    vault api docs auth saml acs urls  configuration parameter determines where the SAML response will be sent after users authenticate with the configured SAML identity provider in their browser   The values provided to Vault must     Match or be a subset of the configured values for the SAML application within the   configured identity provider    Be directed to the auth method s  assertion consumer service   callback   vault api docs auth saml assertion consumer service callback  API    Note    It is highly recommended and enforced by some identity providers to TLS protect the   assertion consumer service URLs  A warning will be returned from Vault if any of the   configured assertion consumer service URLs are not protected by TLS    Note        Configuration for replication  To support a single auth method mount being used across Vault  replication   vault docs enterprise replication  clusters   acs urls  supports configuration of multiple values  For example  to support SAML authentication on a primary and secondary Vault cluster  the following  acs urls  configuration could be given         shell session      vault write auth saml config         acs urls  https   primary vault v1 auth saml callback https   secondary vault v1 auth saml callback          The Vault UI and CLI will automatically request the proper assertion consumer service URL for the cluster they re configured to communicate with  This means that the entirety of the authentication flow will stay within the targeted cluster        Configuration for namespaces  The SAML auth method can be used within Vault  namespaces   vault docs enterprise namespaces   The assertion consumer service URLs configured in both Vault and the identity provider must include the namespace path segment   The following table provides assertion consumer service URLs given different namespace paths     Namespace path    Assertion consumer service 
URL                                                                                                         admin             https   my vault v1 admin auth saml callback              org security      https   my vault v1 org security auth saml callback         Bound attributes  Once the user has been authenticated the authorization flow will validate that both the   bound subjects    vault api docs auth saml bound subjects  and   bound attributes    vault api docs auth saml bound attributes  match expected values configured for the role  This can be used to restrict access to Vault for a subset of users in the SAML identity provider   For example  a role with  bound subjects   hashicorp com  and  bound attributes groups support engineering  will only authorize users whose subject has an   hashicorp com  suffix and that are in either the  support  or  engineering  group   Whether it should be an exact match or interpret     as a wildcard can be controlled by the   bound subjects type    vault api docs auth saml bound subjects type  and   bound attributes type    vault api docs auth saml bound attributes type  parameters       Bound attributes for the Microsoft identity platform  The bound attributes for the Microsoft identity platform requires  http   schemas microsoft com ws 2008 06 identity claims groups  as the attribute name along with your group membership values  For example  a role with  bound attributes http   schemas microsoft com ws 2008 06 identity claims groups  GROUP1 OBJECT ID GROUP2 OBJECT ID   will only authorize users that are in either the  GROUP1 OBJECT ID  or  GROUP2 OBJECT ID  group   You can read more at the Microsoft identity platform s  SAML token claims reference  https   learn microsoft com en us entra identity platform reference saml tokens       API  The SAML authentication plugin has a full HTTP API  Refer to the  SAML API documentation   vault api docs auth saml  for more details"}
{"questions":"vault page title Link Active Directory SAML groups to Vault Federation Services AD FS as a SAML provider Connect Vault policies to Active Directory groups with Active Directory layout docs Link Active Directory SAML groups to Vault","answers":"---\nlayout: docs\npage_title: Link Active Directory SAML groups to Vault\ndescription: >-\n  Connect Vault policies to Active Directory groups with Active Directory\n  Federation Services (AD FS) as a SAML provider.\n---\n\n# Link Active Directory SAML groups to Vault\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nConfigure your Vault instance to link your Active Directory groups to Vault\npolicies with SAML.\n\n\n\n## Before you start\n\n- **You must have Vault Enterprise or HCP Vault v1.15.5+**.\n- **You must be running AD FS on Windows Server**.\n- **You must have a [SAML plugin configured for AD FS](\/vault\/docs\/auth\/saml\/adfs)**.\n- **You must have a Vault admin token**. If you do not have a valid admin\n   token, you can generate a new token in the Vault GUI or using\n   [`vault token create`](\/vault\/docs\/commands\/token\/create) with the Vault CLI.\n\n\n\n## Step 1: Enable a `kv` plugin instance for AD clients\n\n<Tabs>\n\n<Tab heading=\"Vault CLI\" group=\"cli\">\n\nEnable an instance of the KV secret engine for AD FS under a custom path:\n\n```shell-session\n$ vault secrets enable -path=<ADFS_KV_PLUGIN_PATH> kv-v2\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault secrets enable -path=adfs-kv kv-v2\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"Vault GUI\" group=\"gui\">\n\n@include 'gui-instructions\/enable-secrets-plugin.mdx'\n\n- Enable the KV plugin:\n\n    1. Select the **KV** token.\n    1. Set a mount path that reflects the plugin purpose. For example: `adfs-kv`.\n    1. 
Click **Enable engine**.\n\n<\/Tab>\n\n<\/Tabs>\n\n\n## Step 2: Create a read-only policy for the `kv` plugin\n\n<Tabs>\n\n<Tab heading=\"Vault CLI\" group=\"cli\">\n\nUse `vault policy write` to create a read-only policy for AD FS clients that use the\nnew KV plugin:\n\n```shell-session\n$ vault policy write <RO_ADFS_POLICY_NAME> - << EOF\n# Read and list policy for the AD FS KV mount\npath \"<ADFS_KV_PLUGIN_PATH>\/*\" {\n  capabilities = [\"read\", \"list\"]\n}\nEOF\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault policy write ro-saml-adfs - << EOF\n# Read and list policy for the AD FS KV mount\npath \"adfs-kv\/*\" {\n  capabilities = [\"read\", \"list\"]\n}\nEOF\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"Vault GUI\" group=\"gui\">\n\n@include 'gui-instructions\/create-acl-policy.mdx'\n\n- Set the policy details and click **Create policy**:\n\n    - **Name**: \"ro-saml-adfs\"\n    - **Policy**:\n    ```hcl\n    # Read and list policy for the AD FS KV mount\n    path \"<ADFS_KV_PLUGIN_PATH>\/*\" {\n      capabilities = [\"read\", \"list\"]\n    }\n    ```\n\n<\/Tab>\n\n<\/Tabs>\n\n\n\n## Step 3: Create and link a Vault group to AD\n\n<Tabs>\n\n<Tab heading=\"Vault CLI\" group=\"cli\">\n\n1. Create an external group in Vault and save the group ID to a file named\n   `group_id.txt`:\n\n   ```shell-session\n   $ vault write                            \\\n     -format=json                           \\\n     identity\/group name=\"SamlVaultReader\"  \\\n     policies=\"ro-adfs-test\"                \\\n     type=\"external\" | jq -r \".data.id\" > group_id.txt\n   ```\n\n1. Retrieve the mount accessor for the AD FS authentication method and save it\n   to a file named `accessor_adfs.txt`:\n\n   ```shell-session\n   $ vault auth list -format=json |               \\\n     jq -r '.[\"<SAML_PLUGIN_PATH>\/\"].accessor' >  \\\n     accessor_adfs.txt\n   ```\n\n1. 
Create a group alias:\n\n   ```shell-session\n   $ vault write identity\/group-alias         \\\n     name=\"<YOUR_EXISTING_AD_GROUP>\"          \\\n     mount_accessor=$(cat accessor_adfs.txt)  \\\n     canonical_id=\"$(cat group_id.txt)\"\n   ```\n\n\n<\/Tab>\n\n<Tab heading=\"Vault GUI\" group=\"gui\">\n\n@include 'gui-instructions\/create-group.mdx'\n\n- Follow the prompts to create an external group with the following\n  information:\n     - Name: your new Vault group name\n     - Type: `external`\n     - Policies: the read-only AD FS policy you created. For example,\n       `ro-adfs-test`.\n\n- Click **Add alias** and follow the prompts to map the Vault group name to an\n  existing group in Active Directory:\n   - Name: the name of an existing AD group (**must match exactly**).\n   - Auth Backend: `<SAML_PLUGIN_PATH>\/ (saml)`\n\n<\/Tab>\n\n<\/Tabs>\n\n\n## Step 4: Verify the link to Active Directory\n\n1. Use the Vault CLI to login as an Active Directory user who is a member of\n   the linked Active Directory group:\n\n   ```shell-session\n   $ vault login -method saml -path <SAML_PLUGIN_PATH>\n   ```\n\n1. 
Read your test value from the KV plugin:\n\n   ```shell-session\n   $ vault kv get adfs-kv\/test\n   ```","site":"vault","answers_cleaned":"    layout  docs page title  Link Active Directory SAML groups to Vault description       Connect Vault policies to Active Directory groups with Active Directory   Federation Services  AD FS  as a SAML provider         Link Active Directory SAML groups to Vault   include  alerts enterprise and hcp mdx   Configure your Vault instance to link your Active Directory groups to Vault policies with SAML        Before you start      You must have Vault Enterprise or HCP Vault v1 15 5         You must be running AD FS on Windows Server        You must have a  SAML plugin configured for AD FS   vault docs auth saml adfs         You must have a Vault admin token    If you do not have a valid admin    token  you can generate a new token in the Vault GUI or using      vault token create    vault docs commands token create  with the Vault CLI        Step 1  Enable a  kv  plugin instance for AD clients   Tabs    Tab heading  Vault CLI  group  cli    Enable an instance of the KV secret engine for AD FS under a custom path      shell session   vault secrets enable  path  ADFS KV PLUGIN PATH  kv v2      For example    CodeBlockConfig hideClipboard      shell session   vault secrets enable  path adfs kv kv v2        CodeBlockConfig     Tab    Tab heading  Vault GUI  group  gui     include  gui instructions enable secrets plugin mdx     Enable the KV plugin       1  Select the   KV   token      1  Set a mount path that reflects the plugin purpose  For example   dfs kv       1  Click   Enable engine       Tab     Tabs       Step 2  Create a read only policy for the  kv  plugin   Tabs    Tab heading  Vault CLI  group  cli    Use  vault write  to create a read only policy for AD FS clients that use the new KV plugin      shell session   vault policy write  RO ADFS POLICY NAME       EOF   Read and list policy for the AD FS KV mount path   ADFS KV 
PLUGIN PATH         capabilities     read    list     EOF      For example    CodeBlockConfig hideClipboard      shell session   vault policy write ro saml adfs      EOF   Read and list policy for the AD FS KV mount path  adfs kv        capabilities     read    list     EOF        CodeBlockConfig     Tab    Tab heading  Vault GUI  group  gui     include  gui instructions create acl policy mdx     Set the policy details and click   Create policy             Name     ro saml adfs          Policy           hcl       Read and list policy for the AD FS KV mount     path   ADFS KV PLUGIN PATH             capabilities     read    list                    Tab     Tabs        Step 3  Create and link a Vault group to AD   Tabs    Tab heading  Vault CLI  group  cli    1  Create an external group in Vault and save the group ID to a file named     group id txt          shell session      vault write                                    format json                                  identity group name  SamlVaultReader          policies  ro adfs test                        type  external    jq  r   data id    group id txt         1  Retrieve the mount accessor for the AD FS authentication method and save it    to a file named  accessor adfs txt          shell session      vault auth list  format json                        jq  r      SAML PLUGIN PATH     accessor            accessor adfs txt         1  Create a group alias         shell session      vault write identity group alias                name   YOUR EXISTING AD GROUP                   mount accessor   cat accessor adfs txt          canonical id    cat group id txt              Tab    Tab heading  Vault GUI  group  gui     include  gui instructions create group mdx     Follow the prompts to create an external group with the following   information         Name  your new Vault group name        Type   external         Policies  the read only AD FS policy you created  For example          ro adfs test      Click   Add alias   
and follow the prompts to map the Vault group name to an   existing group in Active Directory       Name  the name of an existing AD group    must match exactly          Auth Backend    SAML PLUGIN PATH    saml      Tab     Tabs       Step 4  Verify the link to Active Directory  1  Use the Vault CLI to login as an Active Directory user who is a member of    the linked Active Directory group         shell session      vault login  method saml  path  SAML PLUGIN PATH          1  Read your test value from the KV plugin         shell session      vault kv get adfs kv test       "}
{"questions":"vault policies when using Active Directory Federation Services ADFS as a SAML provider page title Troubleshoot ADFS and SAML automatic group mapping fails layout docs Fix connection problems in Vault due to a bad mapping between groups and Automatic group mapping fails","answers":"---\nlayout: docs\npage_title: \"Troubleshoot ADFS and SAML: automatic group mapping fails\"\ndescription: >-\n  Fix connection problems in Vault due to a bad mapping between groups and\n  policies when using Active Directory Federation Services (ADFS) as a SAML\n  provider.\n---\n\n# Automatic group mapping fails\n\nTroubleshoot problems where the debugging data suggests a bad or nonexistent\nmapping between your Vault role and the AD FS Claim Issuance Policy.\n\n\n\n## Example debugging data\n\n<CodeBlockConfig hideClipboard highlight=\"14,16,21\">\n\n```json\n[DEBUG] auth.saml.auth_saml_1d2227e7: validating user context for role: api=callback role_name=default-saml\nrole=\"{\n  \"token_bound_cidrs\":null,\n  \"token_explicit_max_ttl\":0,\n  \"token_max_ttl\":0,\n  \"token_no_default_policy\":false,\n  \"token_num_uses\":0,\n  \"token_period\":0,\n  \"token_policies\":[\"default\"],\n  \"token_type\":0,\n  \"token_ttl\":0,\n  \"BoundSubjects\":[\"*@example.com\",\"*@ext.example.com\"],\n  \"BoundSubjectsType\":\"glob\",\n  \"BoundAttributes\":{\"http:\/\/schemas.xmlsoap.org\/claims\/Group\":[\"VaultAdmin\",\"VaultUser\"]},\n  \"BoundAttributesType\":\"string\",\n  \"GroupsAttribute\":\"groups\"\n  }\"\nuser context=\"{\n  \"attributes\":\n  {\n    \"http:\/\/schemas.xmlsoap.org\/claims\/Group\":[\"Domain Users\",\"VaultAdmin\"],\n    \"http:\/\/schemas.xmlsoap.org\/ws\/2005\/05\/identity\/claims\/emailaddress\":[\"rs@example.com\"]\n  },\n  \"subject\":\"rs@example.com\"\n}\"\n```\n\n<\/CodeBlockConfig>\n\n\n\n## Analysis\n\nUse `vault read` to review the current role configuration:\n\n<CodeBlockConfig hideClipboard highlight=\"5,9\">\n\n```shell-session\n$ vault read 
auth\/<SAML_PLUGIN_PATH>\/role\/<ADFS_ROLE>\n\nKey                        Value\n---                        -----\nbound_attributes           map[http:\/\/schemas.xmlsoap.org\/claims\/Group:[VaultAdmin VaultUser]]\nbound_attributes_type      string\nbound_subjects             [*@example.com *@ext.example.com]\nbound_subjects_type        glob\ngroups_attribute           groups\ntoken_bound_cidrs          []\ntoken_explicit_max_ttl     0s\ntoken_max_ttl              0s\ntoken_no_default_policy    false\ntoken_num_uses             0\ntoken_period               0s\ntoken_policies             [default]\ntoken_ttl                  0s\ntoken_type                 default\n```\n\n<\/CodeBlockConfig>\n\nThe Vault role uses `groups` for the group attribute, so Vault expects user\ncontext in the SAML response to include a `groups` attribute with the form:\n\n<CodeBlockConfig hideClipboard>\n\n```text\nuser context=\"{\n  \"attributes\":\n  {\n    \"groups\":[<LIST_OF_BOUND_GROUPS>]\",\n    ...\n  }\n}\"\n```\n<\/CodeBlockConfig>\n\nBut the SAML response indicates the Claim Issuance Policy uses `Group` for the\ngroup attribute, so the user context uses `Group` to key the bound groups:\n\n\n<CodeBlockConfig hideClipboard>\n\n```text\nuser context=\"{\n  \"attributes\":\n  {\n    \"http:\/\/schemas.xmlsoap.org\/claims\/Group\":[\"Domain Users\",\"VaultAdmin\"],\n    ...\n  },\n  \"subject\":\"rs@example.com\"\n}\"\n```\n\n<\/CodeBlockConfig>\n\n\n\n## Solution\n\n<Tabs>\n\n<Tab heading=\"Option 1: Use 'Group' in the Vault role\">\n\nThe first option to resolve the problem is to update `groups_attribute` for the\nVault role to use `Group`:\n\n```shell-session\n$ vault write auth\/<SAML_PLUGIN_PATH>\/role\/<ADFS_ROLE> \\\n    groups_attribute=http:\/\/schemas.xmlsoap.org\/claims\/Group\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault write auth\/saml\/role\/adfs-default \\\n    
groups_attribute=http:\/\/schemas.xmlsoap.org\/claims\/Group\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"Option 2: Use 'groups' for AD FS\">\n\nThe second option to resolve the problem is to update your AD FS configuration\nto use `groups` and confirm the bound attributes in Vault match the expected\ngroups:\n\n1. Update your AD FS Claim Issuance Policy to use `groups` for unqualified\n   names:\n\n    | LDAP attribute                     | Outgoing claim type\n    |------------------------------------|--------------------\n    | `Token-Groups - Unqualified Names` | `groups`\n\n1. Verify the bound attributes for your Vault role match the groups listed in the\n   SAML response:\n\n    ```shell-session\n    $ vault write auth\/<SAML_PLUGIN_PATH>\/role\/<ADFS_ROLE> \\\n        bound_attributes=groups=\"<AD_GROUP_LIST>\"\n    ```\n\n    For example:\n\n    <CodeBlockConfig hideClipboard>\n\n    ```shell-session\n    $ vault write auth\/saml\/role\/default-adfs \\\n        bound_attributes=groups=\"VaultAdmin,VaultUser\"\n    ```\n\n    <\/CodeBlockConfig>\n\n<\/Tab>\n<\/Tabs>\n\n\n\n## Additional resources\n\n- [SAML auth method Documentation](\/vault\/docs\/auth\/saml)\n- [SAML API Documentation](\/vault\/api-docs\/auth\/saml)\n- [Set up an AD FS lab environment](https:\/\/learn.microsoft.com\/en-us\/windows-server\/identity\/ad-fs\/operations\/set-up-an-ad-fs-lab-environment)","site":"vault","answers_cleaned":"    layout  docs page title   Troubleshoot ADFS and SAML  automatic group mapping fails  description       Fix connection problems in Vault due to a bad mapping between groups and   policies when using Active Directory Federation Services  ADFS  as an SAML   provider         Automatic group mapping fails  Troubleshoot problems where the debugging data suggests a bad or nonexistent mapping between your Vault role and AD FS the Claim Issuance Policy        Example debugging data   CodeBlockConfig hideClipboard highlight  14 16 21       json  DEBUG 
 auth saml auth saml 1d2227e7  validating user context for role  api callback role name default saml role       token bound cidrs  null     token explicit max ttl  0     token max ttl  0     token no default policy  false     token num uses  0     token period  0     token policies    default       token type  0     token ttl  0     BoundSubjects      example com     ext example com       BoundSubjectsType   glob      BoundAttributes    http   schemas xmlsoap org claims Group    VaultAdmin   VaultUser        BoundAttributesType   string      GroupsAttribute   groups       user context       attributes            http   schemas xmlsoap org claims Group    Domain Users   VaultAdmin         http   schemas xmlsoap org ws 2005 05 identity claims emailaddress    rs example com           subject   rs example com            CodeBlockConfig        Analysis  Use  vault read  to review the current role configuration    CodeBlockConfig hideClipboard highlight  5 9       shell session   vault read auth  SAML PLUGIN PATH  role  ADFS ROLE   Key                        Value                                  bound attributes           map http   schemas xmlsoap org claims Group  VaultAdmin VaultUser   bound attributes type      string bound subjects                example com   ext example com  bound subjects type        glob groups attribute           groups token bound cidrs             token explicit max ttl     0s token max ttl              0s token no default policy    false token num uses             0 token period               0s token policies              default  token ttl                  0s token type                 default        CodeBlockConfig   The Vault role uses  groups  for the group attribute  so Vault expects user context in the SAML response to include a  groups  attribute with the form    CodeBlockConfig hideClipboard      text user context       attributes            groups    LIST OF BOUND GROUPS                          CodeBlockConfig   But the SAML 
response indicates the Claim Issuance Policy uses  Group  for the group attribute  so the user context uses  Group  to key the bound groups     CodeBlockConfig hideClipboard      text user context       attributes            http   schemas xmlsoap org claims Group    Domain Users   VaultAdmin                    subject   rs example com            CodeBlockConfig        Solution   Tabs    Tab heading  Option 1  Use  Group  in the Vault role    The first option to resolve the problem is update  group attribute  for the Vault role to use  Group       shell session   vault write auth  SAML PLUGIN PATH  role  ADFS ROLE        groups attribute http   schemas xmlsoap org claims Group      For example    CodeBlockConfig hideClipboard      shell session   vault write auth saml role adfs default       groups attribute http   schemas xmlsoap org claims Group        CodeBlockConfig     Tab    Tab heading  Option 2  Use  groups  for AD FS    The second option to resolve the problem is to update your AD FS configuration to use  groups  and confirm the bound attributes in Vault match the expected groups   1  Update your AD FS the Claim Issuance Policy to use  groups  for unqualified    names         LDAP attribute                       Outgoing claim type                                                                       Token Groups   Unqualified Names     groups   1  Verify the bound attribute for your Vault role match the groups listed in the    SAML response          shell session       vault write auth  SAML PLUGIN PATH  role  ADFS ROLE            bound attributes groups   AD GROUP LIST                For example        CodeBlockConfig hideClipboard          shell session       vault write auth saml role default adfs           bound attributes groups  VaultAdmin VaultUser                 CodeBlockConfig     Tab    Tabs        Additional resources     SAML auth method Documentation   vault docs auth saml     SAML API Documentation   vault api docs auth saml     Set up an 
AD FS lab environment  https   learn microsoft com en us windows server identity ad fs operations set up an ad fs lab environment "}
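The two role checks visible in the debug log above (glob matching of `bound_subjects`, then intersection of bound groups with the groups in the SAML response) can be sketched outside Vault. This is a minimal Go sketch with hypothetical helper names, not the SAML plugin's actual code:

```go
package main

import (
	"fmt"
	"path"
)

// subjectAllowed mimics bound_subjects_type=glob: the subject must match at
// least one glob pattern. path.Match implements shell-style globbing.
func subjectAllowed(boundSubjects []string, subject string) bool {
	for _, pattern := range boundSubjects {
		if ok, _ := path.Match(pattern, subject); ok {
			return true
		}
	}
	return false
}

// groupAllowed mimics the bound-attribute check: the user needs at least one
// group in common with the role's bound groups.
func groupAllowed(boundGroups, userGroups []string) bool {
	for _, bg := range boundGroups {
		for _, ug := range userGroups {
			if bg == ug {
				return true
			}
		}
	}
	return false
}

func main() {
	// Values taken from the debugging data above.
	fmt.Println(subjectAllowed([]string{"*@example.com", "*@ext.example.com"}, "rs@example.com")) // true
	fmt.Println(groupAllowed([]string{"VaultAdmin", "VaultUser"}, []string{"Domain Users", "VaultAdmin"}))
}
```

With the values from the debug log both checks would succeed in isolation; the login still fails because the role keys its bound groups by `groups` while the Claim Issuance Policy emits the full `http://schemas.xmlsoap.org/claims/Group` URI.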
{"questions":"vault Learn about Vault s plugin architecture page title External Plugin Architecture External plugin architecture layout docs executes and communicates with over RPC This means the plugin process does not Vault s external plugins are completely separate standalone applications that Vault","answers":"---\nlayout: docs\npage_title: External Plugin Architecture\ndescription: Learn about Vault's plugin architecture.\n---\n\n# External plugin architecture\n\nVault's external plugins are completely separate, standalone applications that Vault\nexecutes and communicates with over RPC. This means the plugin process does not\nshare the same memory space as Vault and therefore can only access the\ninterfaces and arguments given to it. This also means a crash in a plugin cannot\ncrash the entirety of Vault.\n\nIt is possible to enable a custom plugin with a name that's identical to a\nbuilt-in plugin. In such a situation, Vault will always choose the custom plugin\nwhen enabling it.\n\n-> **NOTE:** See the [Vault Integrations](\/vault\/integrations) page to find a\ncurated collection of official, partner, and community Vault plugins.\n\n## External plugin lifecycle\n\nVault external plugins are long-running processes that remain running once they are\nspawned by Vault, the parent process. Plugin processes can be started by Vault's\nactive node and performance standby nodes. Additionally, there are cases where\nplugin processes may be terminated by Vault. 
These cases include, but are not\nlimited to:\n\n- Vault active node step-down\n- Vault barrier seal\n- Vault graceful shutdown\n- Disabling a Secrets Engine or Auth method that uses external plugins\n- Database configured connection deletion\n- Database configured connection update\n- Database configured connection reset request\n- Database root credentials rotation\n- WAL Rollback from a previously failed root credentials rotation operation\n\nThe lifecycle of plugin processes is managed automatically by Vault.\nTermination of these processes is typical in certain scenarios, such as the\nones listed above. Vault will start plugin processes when they are enabled. A\nplugin process may be started or terminated through other internal processes\nwithin Vault as well. Since Vault manages and tracks the lifecycle of its\nplugins, these processes should not be terminated by anything other than Vault.\nIf a plugin process is shut down out-of-band, the plugin process will be lazily\nloaded when a request that requires the plugin is received by Vault.\n\n### External plugin scaling characteristics\n\nExternal plugins can leverage [Performance Standbys](\/vault\/docs\/enterprise\/performance-standby)\nwithout any explicit action by a plugin author. The default behavior of Vault\nEnterprise is to attempt to handle all requests, including requests to plugins,\non performance standbys. If the plugin request makes any attempt to modify\nstorage, the request will receive a readonly error, and the request routing\ncode will then forward the full original request transparently to the active\nnode. In other words, plugins can scale horizontally on Vault Enterprise\nwithout any effort on the plugin author's part.\n\n## Plugin communication\n\nVault communicates with external plugins over RPC. To secure this\ncommunication, Vault creates a mutually authenticated TLS connection with the\nplugin's RPC server. 
Plugins make use of the AutoMTLS feature of\n[go-plugin](https:\/\/www.github.com\/hashicorp\/go-plugin) which will\nautomatically negotiate mutual TLS for transport authentication.\n\nThe [`api_addr`](\/vault\/docs\/configuration#api_addr) must be set in order for the\nplugin process to establish communication with the Vault server during mount\ntime. If the storage backend has HA enabled and supports automatic host address\ndetection (e.g. Consul), Vault will automatically attempt to determine the\n`api_addr` as well.\n\n~> Note: Prior to Vault version 1.9.2, reading the original connection's TLS\nconnection state is not supported in plugins.\n\n## Plugin registration\n\nAn important consideration of Vault's plugin system is to ensure the plugin\ninvoked by Vault is authentic and maintains integrity. There are two components\nthat a Vault operator needs to configure before external plugins can be run: the\nplugin directory and the plugin catalog entry.\n\n### Plugin directory\n\nThe plugin directory is a configuration option of Vault and can be specified in\nthe [configuration file](\/vault\/docs\/configuration).\nThis setting specifies a directory in which all plugin binaries must live;\n_this value cannot be a symbolic link_. A plugin\ncannot be added to Vault unless it exists in the plugin directory. There is no\ndefault for this configuration option, and if it is not set, plugins cannot be\nadded to Vault.\n\n@include 'plugin-file-permissions-check.mdx'\n\n### Plugin catalog\n\nThe plugin catalog is Vault's list of approved plugins. The catalog is stored in\nVault's barrier and can only be updated by a Vault user with sudo permissions.\nUpon adding a new plugin, the plugin name, SHA256 sum of the executable, and the\ncommand that should be used to run the plugin must be provided. The catalog will\nensure the executable referenced in the command exists in the plugin\ndirectory. 
When added to the catalog, the plugin is not automatically executed,\nbut becomes visible to backends and can be executed by them. For more\ninformation on the plugin catalog please see the [Plugin Catalog API\ndocs](\/vault\/api-docs\/system\/plugins-catalog).\n\nAn example of plugin registration in current versions of Vault:\n\n```shell-session\n$ vault plugin register -sha256=<SHA256 Hex value of the plugin binary> \\\n    secret \\                  # type\n    myplugin-database-plugin\n\nSuccess! Registered plugin: myplugin-database-plugin\n```\n\nVault versions prior to v0.10.4 lacked the `vault plugin` operator and the\nregistration step for them is:\n\n```shell-session\n$ vault write sys\/plugins\/catalog\/database\/myplugin-database-plugin \\\n     sha256=<SHA256 Hex value of the plugin binary> \\\n     command=\"myplugin\"\n\nSuccess! Data written to: sys\/plugins\/catalog\/database\/myplugin-database-plugin\n```\n\n### Plugin execution\n\nWhen a backend wants to run a plugin, it first looks up the plugin, by name, in\nthe catalog. It then checks the executable's SHA256 sum against the one\nconfigured in the plugin catalog. Finally, Vault runs the command configured in\nthe catalog, sending along the JWT-formatted response wrapping token and mlock\nsettings. Like Vault, plugins support [the use of mlock when available](\/vault\/docs\/configuration#disable_mlock).\n\n~> Note: If Vault is configured with `mlock` enabled, then the Vault executable\nand each plugin executable in your [plugins directory](\/vault\/docs\/plugins\/plugin-architecture#plugin-directory)\nmust be given the ability to use the `mlock` syscall.\n\n### Plugin upgrades\n\nExternal plugins may be updated by registering and reloading them. 
More details\non the upgrade procedure can be found in\n[Upgrading Vault Plugins](\/vault\/docs\/upgrading\/plugins).\n\n## Plugin multiplexing\n\nTo avoid spawning multiple plugin processes for mounts of the same type,\nplugins can implement plugin multiplexing. This allows a single\nplugin process to be used for multiple mounts of a given type. This single\nprocess will be multiplexed across all Vault namespaces for mounts of this\ntype. Multiplexing a plugin does not affect the current behavior of existing\nplugins.\n\nTo enable multiplexing, the plugin must be compiled with the `ServeMultiplex`\nfunction call from Vault's respective `plugin` or `dbplugin` SDK packages. At\nthis time, there is no opt-out capability for plugins that implement\nmultiplexing. To use a non-multiplexed plugin, run an older version of the\nplugin, i.e., the plugin calls the `Serve` function.\n\nMore resources on implementing plugin multiplexing:\n* [Database secrets engines](\/vault\/docs\/secrets\/databases\/custom#serving-a-plugin-with-multiplexing)\n* [Secrets engines and auth methods](\/vault\/docs\/plugins\/plugin-development)\n\n## Troubleshooting\n\n### Unrecognized remote plugin message\n\nIf the following error is encountered when enabling a plugin secret engine or\nauth method:\n\n<CodeBlockConfig hideClipboard>\n\n```sh\nUnrecognized remote plugin message:\n\nThis usually means that the plugin is either invalid or simply\nneeds to be recompiled to support the latest protocol.\n```\n\n<\/CodeBlockConfig>\n\nVerify whether the Vault process has `mlock` enabled, and if so, run the\nfollowing command against the plugin binary:\n\n```shell-session\n$ sudo setcap cap_ipc_lock=+ep <plugin-binary>\n```\n","site":"vault","answers_cleaned":"    layout  docs page title  External Plugin Architecture description  Learn about Vault s plugin architecture         External plugin architecture  Vault s external plugins are completely separate  standalone applications that Vault executes and 
communicates with over RPC  This means the plugin process does not share the same memory space as Vault and therefore can only access the interfaces and arguments given to it  This also means a crash in a plugin cannot crash the entirety of Vault   It is possible to enable a custom plugin with a name that s identical to a built in plugin  In such a situation  Vault will always choose the custom plugin when enabling it        NOTE    See the  Vault Integrations   vault integrations  page to find a curated collection of official  partner  and community Vault plugins      External plugin lifecycle  Vault external plugins are long running processes that remain running once they are spawned by Vault  the parent process  Plugin processes can be started by Vault s active node and performance standby nodes  Additionally  there are cases where plugin processes may be terminated by Vault  These cases include  but are not limited to     Vault active node step down   Vault barrier seal   Vault graceful shutdown   Disabling a Secrets Engine or Auth method that uses external plugins   Database configured connection deletion   Database configured connection update   Database configured connection reset request   Database root credentials rotation   WAL Rollback from a previously failed root credentials rotation operation  The lifecycle of plugin processes are managed automatically by Vault  Termination of these processes are typical in certain scenarios  such as the ones listed above  Vault will start plugin processes when they are enabled  A plugin process may be started or terminated through other internal processes within Vault as well  Since Vault manages and tracks the lifecycle of its plugins  these processes should not be terminated by anything other than Vault  If a plugin process is shutdown out of band  the plugin process will be lazily loaded when a request that requires the plugin is received by Vault       External plugin scaling characteristics  External plugins can 
leverage  Performance Standbys   vault docs enterprise performance standby  without any explicit action by a plugin author  The default behavior of Vault Enterprise is to attempt to handle all requests  including requests to plugins  on performance standbys  If the plugin request makes any attempt to modify storage  the request will receive a readonly error  and the request routing code will then forward the full original request transparently to the active node  In other words  plugins can scale horizontally on Vault Enterprise without any effort on the plugin author s part      Plugin communication  Vault communicates with external plugins over RPC  To secure this communication  Vault creates a mutually authenticated TLS connection with the plugin s RPC server  Plugins make use of the AutoMTLS feature of  go plugin  https   www github com hashicorp go plugin  which will automatically negotiate mutual TLS for transport authentication   The   api addr    vault docs configuration api addr  must be set in order for the plugin process to establish communication with the Vault server during mount time  If the storage backend has HA enabled and supports automatic host address detection  e g  Consul   Vault will automatically attempt to determine the  api addr  as well      Note  Prior to Vault version 1 9 2  reading the original connection s TLS connection state is not supported in plugins      Plugin registration  An important consideration of Vault s plugin system is to ensure the plugin invoked by Vault is authentic and maintains integrity  There are two components that a Vault operator needs to configure before external plugins can be run  the plugin directory and the plugin catalog entry       Plugin directory  The plugin directory is a configuration option of Vault and can be specified in the  configuration file   vault docs configuration   This setting specifies a directory in which all plugin binaries must live   this value cannot be a symbolic link   A plugin 
cannot be added to Vault unless it exists in the plugin directory  There is no default for this configuration option  and if it is not set  plugins cannot be added to Vault    include  plugin file permissions check mdx       Plugin catalog  The plugin catalog is Vault s list of approved plugins  The catalog is stored in Vault s barrier and can only be updated by a Vault user with sudo permissions  Upon adding a new plugin  the plugin name  SHA256 sum of the executable  and the command that should be used to run the plugin must be provided  The catalog will ensure the executable referenced in the command exists in the plugin directory  When added to the catalog  the plugin is not automatically executed  but becomes visible to backends and can be executed by them  For more information on the plugin catalog please see the  Plugin Catalog API docs   vault api docs system plugins catalog    An example of plugin registration in current versions of Vault      shell session   vault plugin register  sha256  SHA256 Hex value of the plugin binary        secret                      type     myplugin database plugin  Success  Registered plugin  myplugin database plugin      Vault versions prior to v0 10 4 lacked the  vault plugin  operator and the registration step for them is      shell session   vault write sys plugins catalog database myplugin database plugin        sha256  SHA256 Hex value of the plugin binary         command  myplugin   Success  Data written to  sys plugins catalog database myplugin database plugin          Plugin execution  When a backend wants to run a plugin  it first looks up the plugin  by name  in the catalog  It then checks the executable s SHA256 sum against the one configured in the plugin catalog  Finally Vault runs the command configured in the catalog  sending along the JWT formatted response wrapping token and mlock settings  Like Vault  plugins support  the use of mlock when available   vault docs configuration disable mlock       Note  If 
Vault is configured with  mlock  enabled  then the Vault executable and each plugin executable in your  plugins directory   vault docs plugins plugin architecture plugin directory  must be given the ability to use the  mlock  syscall       Plugin upgrades  External plugins may be updated by registering and reloading them  More details on the upgrade procedure can be found in  Upgrading Vault Plugins   vault docs upgrading plugins       Plugin multiplexing  To avoid spawning multiple plugin processes for mounts of the same type  plugins can implement plugin multiplexing  This allows a single plugin process to be used for multiple mounts of a given type  This single process will be multiplexed across all Vault namespaces for mounts of this type  Multiplexing a plugin does not affect the current behavior of existing plugins   To enable multiplexing  the plugin must be compiled with the  ServeMultiplex  function call from Vault s respective  plugin  or  dbplugin  SDK packages  At this time  there is no opt out capability for plugins that implement multiplexing  To use a non multiplexed plugin  run an older version of the plugin  i e   the plugin calls the  Serve  function   More resources on implementing plugin multiplexing     Database secrets engines   vault docs secrets databases custom serving a plugin with multiplexing     Secrets engines and auth methods   vault docs plugins plugin development      Troubleshooting      Unrecognized remote plugin message  If the following error is encountered when enabling a plugin secret engine or auth method    CodeBlockConfig hideClipboard      sh Unrecognized remote plugin message   This usually means that the plugin is either invalid or simply needs to be recompiled to support the latest protocol         CodeBlockConfig   Verify whether the Vault process has  mlock  enabled  and if so  run the following command against the plugin binary      shell session   sudo setcap cap ipc lock  ep  plugin binary      "}
{"questions":"vault Plugin management External Plugins are mountable backends that are implemented using Vault s page title Plugin Management plugin system layout docs","answers":"---\nlayout: docs\npage_title: Plugin Management\ndescription: >-\n  External Plugins are mountable backends that are implemented using Vault's\n  plugin system.\n---\n\n# Plugin management\n\nExternal plugins are the components in Vault that can be implemented separately\nfrom Vault's built-in plugins. These plugins can be either authentication or\nsecrets engines.\n\nThe [`api_addr`][api_addr] must be set in order for the plugin process to\nestablish communication with the Vault server during mount time. If the storage\nbackend has HA enabled and supports automatic host address detection (e.g.\nConsul), Vault will automatically attempt to determine the `api_addr` as well.\n\nDetailed information regarding the plugin system can be found in the\n[internals documentation](\/vault\/docs\/plugins).\n\n## Registering external plugins\n\nBefore an external plugin can be mounted, it needs to be\n[registered](\/vault\/docs\/plugins\/plugin-architecture#plugin-registration) in the\nplugin catalog to ensure the plugin invoked by Vault is authentic and maintains\nintegrity:\n\n```shell-session\n$ vault plugin register -sha256=<SHA256 Hex value of the plugin binary> \\\n    secret \\                  # type\n    passthrough-plugin\n\nSuccess! Registered plugin: passthrough-plugin\n```\n\n## Enabling\/Disabling external plugins\n\nAfter the plugin is registered, it can be mounted by specifying the registered\nplugin name:\n\n```shell-session\n$ vault secrets enable -path=my-secrets passthrough-plugin\nSuccess! 
Enabled the passthrough-plugin secrets engine at: my-secrets\/\n```\n\nListing secrets engines will display secrets engines that are mounted as\nplugins:\n\n```shell-session\n$ vault secrets list\nPath         Type       Accessor            Plugin              Default TTL  Max TTL  Force No Cache  Replication Behavior  Description\nmy-secrets\/  plugin     plugin_deb84140     passthrough-plugin  system       system   false           replicated\n```\n\nDisabling an external plugin is identical to disabling a built-in plugin:\n\n```shell-session\n$ vault secrets disable my-secrets\n```\n\n## Upgrading plugins\n\nUpgrade instructions can be found in the [Upgrading Plugins - Guides][upgrading_plugins]\npage.\n\n[api_addr]: \/vault\/docs\/configuration#api_addr\n[upgrading_plugins]: \/vault\/docs\/upgrading\/plugins\n\n## Plugin environment variables\n\nAn advantage of external plugins over builtin plugins is that they can specify\nadditional environment variables because they are run in their own process.\n\n-> Vault 1.16.0 changed the precedence given to plugin-specific environment\nvariables so they take priority over Vault's environment. See full details in\nthe [upgrade notes](\/vault\/docs\/upgrading\/upgrade-to-1.16.x).\n\nUse the `-env` flag once per environment variable that a plugin should be\nstarted with:\n\n```shell-session\n$ vault plugin register -sha256=<SHA256 Hex value of the plugin binary> \\\n    -env REGION=eu \\\n    -env TOKEN_FILE=\/var\/run\/token \\\n    secret \\                  # type\n    passthrough-plugin\n\nSuccess! Registered plugin: passthrough-plugin\n```\n\n### Plugin-specific HTTP proxy settings\n\nMany tools and libraries automatically consume `HTTP_PROXY`, `HTTPS_PROXY`, and\n`NO_PROXY` environment variables to configure HTTP proxy settings, including the\nGo standard library's default HTTP client. 
You can use these environment\nvariables to configure different network proxies for different plugins:\n\n-> You must be using an external plugin to take advantage of custom environment\nvariables. If you are using a builtin plugin, you can still download and register\nan external version of it in order to use this workflow. Check the\n[releases](https:\/\/releases.hashicorp.com\/) page for the latest prebuilt plugin\nbinaries.\n\n```shell-session\n$ vault plugin register -sha256=<SHA256 Hex value of the plugin binary> \\\n    -env HTTP_PROXY=eu.example.com \\\n    auth \\\n    jwt-eu\n\nSuccess! Registered plugin: jwt-eu\n\n$ vault plugin register -sha256=<SHA256 Hex value of the plugin binary> \\\n    -env HTTP_PROXY=us.example.com \\\n    auth \\\n    jwt-us\n\nSuccess! Registered plugin: jwt-us\n```\n\nYou can then enable each plugin on its own path, and configure clients that\nshould be associated with one or the other appropriately:\n\n```shell-session\n$ vault auth enable jwt-eu\nSuccess! Enabled the jwt-eu auth method at: auth\/jwt-eu\/\n\n$ vault auth enable jwt-us\nSuccess! 
Enabled the jwt-us auth method at: auth\/jwt-us\/\n```","site":"vault","answers_cleaned":"    layout  docs page title  Plugin Management description       External Plugins are mountable backends that are implemented using Vault s   plugin system         Plugin management  External plugins are the components in Vault that can be implemented separately from Vault s built in plugins  These plugins can be either authentication or secrets engines   The   api addr   api addr  must be set in order for the plugin process to establish communication with the Vault server during mount time  If the storage backend has HA enabled and supports automatic host address detection  e g  Consul   Vault will automatically attempt to determine the  api addr  as well   Detailed information regarding the plugin system can be found in the  internals documentation   vault docs plugins       Registering external plugins  Before an external plugin can be mounted  it needs to be  registered   vault docs plugins plugin architecture plugin registration  in the plugin catalog to ensure the plugin invoked by Vault is authentic and maintains integrity      shell session   vault plugin register  sha256  SHA256 Hex value of the plugin binary        secret                      type     passthrough plugin  Success  Registered plugin  passthrough plugin         Enabling Disabling external plugins  After the plugin is registered  it can be mounted by specifying the registered plugin name      shell session   vault secrets enable  path my secrets passthrough plugin Success  Enabled the passthrough plugin secrets engine at  my secrets       Listing secrets engines will display secrets engines that are mounted as plugins      shell session   vault secrets list Path         Type       Accessor            Plugin              Default TTL  Max TTL  Force No Cache  Replication Behavior  Description my secrets   plugin     plugin deb84140     passthrough plugin  system       system   false           replicated    
  Disabling an external plugins is identical to disabling a built in plugin      shell session   vault secrets disable my secrets         Upgrading plugins  Upgrade instructions can be found in the  Upgrading Plugins   Guides  upgrading plugins  page    api addr    vault docs configuration api addr  upgrading plugins    vault docs upgrading plugins     Plugin environment variables  An advantage for external plugins over builtin plugins is they can specify additional environment variables because they are run in their own process      Vault 1 16 0 changed the precedence given to plugin specific environment variables so they take priority over Vault s environment  See full details in the  upgrade notes   vault docs upgrading upgrade to 1 16 x    Use the   env  flag once per environment variable that a plugin should be started with      shell session   vault plugin register  sha256  SHA256 Hex value of the plugin binary         env REGION eu        env TOKEN FILE  var run token       secret                      type     passthrough plugin  Success  Registered plugin  passthrough plugin          Plugin specific HTTP proxy settings  Many tools and libraries automatically consume  HTTP PROXY    HTTPS PROXY   and  NO PROXY  environment variables to configure HTTP proxy settings  including the Go standard library s default HTTP client  You can use these environment variables to configure different network proxies for different plugins      You must be using an external plugin to take advantage of custom environment variables  If you are using a builtin plugin  you can still download and register an external version of it in order to use this workflow  Check the  releases  https   releases hashicorp com   page for the latest prebuilt plugin binaries      shell session   vault plugin register  sha256  SHA256 Hex value of the plugin binary         env HTTP PROXY eu example com       auth       jwt eu  Success  Registered plugin  jwt eu    vault plugin register  sha256  SHA256 
Hex value of the plugin binary         env HTTP PROXY us example com       auth       jwt us  Success  Registered plugin  jwt us      You can then enable each plugin on its own path  and configure clients that should be associated with one or the other appropriately      shell session   vault auth enable jwt eu Success  Enabled the jwt eu auth method at  auth jwt eu     vault auth enable jwt us Success  Enabled the jwt us auth method at  auth jwt us     "}
{"questions":"vault Advanced topic Plugin development is a highly advanced topic in Vault and is not required knowledge for day to day usage If you don t plan on writing any Learn about Vault plugin development page title Plugin Development layout docs Plugin development","answers":"---\nlayout: docs\npage_title: Plugin Development\ndescription: Learn about Vault plugin development.\n---\n\n# Plugin development\n\n~> Advanced topic! Plugin development is a highly advanced topic in Vault, and\nis not required knowledge for day-to-day usage. If you don't plan on writing any\nplugins, we recommend not reading this section of the documentation.\n\nBecause Vault communicates with plugins over an RPC interface, you can build and\ndistribute a plugin for Vault without having to rebuild Vault itself. This makes\nit easy for you to build a Vault plugin for your organization's internal use,\nfor a proprietary API that you don't want to open source, or to prototype\nsomething before contributing it back to the main project.\n\nIn theory, because the plugin interface is HTTP, you could even develop a plugin\nusing a completely different programming language! (Disclaimer: you would also\nhave to re-implement the plugin API, which is not a trivial amount of work.)\n\nDeveloping a plugin is simple. The only knowledge necessary to write\na plugin is basic command-line skills and basic knowledge of the\n[Go programming language](http:\/\/golang.org).\n\nYour plugin implementation needs to satisfy the interface for the plugin\ntype you want to build. 
You can find these definitions in the docs for the\nbackend running the plugin.\n\n~> Note: Plugins should be prepared to handle multiple concurrent requests\nfrom Vault.\n\n## Serving a plugin\n\n### Serving a plugin with multiplexing\n\n~> Plugin multiplexing requires `github.com\/hashicorp\/vault\/sdk v0.5.4` or above.\n\nThe following code exhibits an example main package for a Vault plugin using\nthe Vault SDK for a secrets engine or auth method:\n\n```go\npackage main\n\nimport (\n\t\"os\"\n\n\thclog \"github.com\/hashicorp\/go-hclog\"\n\tmyPlugin \"your\/plugin\/import\/path\"\n\t\"github.com\/hashicorp\/vault\/api\"\n\t\"github.com\/hashicorp\/vault\/sdk\/plugin\"\n)\n\nfunc main() {\n\tapiClientMeta := &api.PluginAPIClientMeta{}\n\tflags := apiClientMeta.FlagSet()\n\tflags.Parse(os.Args[1:])\n\n\ttlsConfig := apiClientMeta.GetTLSConfig()\n\ttlsProviderFunc := api.VaultPluginTLSProvider(tlsConfig)\n\n\terr := plugin.ServeMultiplex(&plugin.ServeOpts{\n\t\tBackendFactoryFunc: myPlugin.Factory,\n\t\tTLSProviderFunc:    tlsProviderFunc,\n\t})\n\tif err != nil {\n\t\tlogger := hclog.New(&hclog.LoggerOptions{})\n\n\t\tlogger.Error(\"plugin shutting down\", \"error\", err)\n\t\tos.Exit(1)\n\t}\n}\n```\n\nAnd that's basically it! You would just need to change `myPlugin` to your actual\nplugin.\n\n### Plugin backwards compatibility with Vault\n\nLet's take a closer look at a snippet from the above main package.\n\n```go\n\terr := plugin.ServeMultiplex(&plugin.ServeOpts{\n\t\tBackendFactoryFunc: myPlugin.Factory,\n\t\tTLSProviderFunc:    tlsProviderFunc,\n\t})\n```\n\nThe call to `plugin.ServeMultiplex` ensures that the plugin will use\nVault's [plugin\nmultiplexing](\/vault\/docs\/plugins\/plugin-architecture#plugin-multiplexing) feature.\nHowever, this plugin will not be multiplexed if it is run by a version of Vault\nthat does not support multiplexing. Vault will simply fall back to a plugin\nversion that it can run. 
Additionally, we set the `TLSProviderFunc` to ensure\nthat our plugin is backwards compatible with versions of Vault that do not\nsupport automatic mutual TLS for secure [plugin\ncommunication](\/vault\/docs\/plugins\/plugin-architecture#plugin-communication). If you\nare certain your plugin does not need backwards compatibility, this field can\nbe omitted.\n\n[api_addr]: \/vault\/docs\/configuration#api_addr\n\n## Leveraging plugin versioning\n\n@include 'plugin-versioning.mdx'\n\nAuth and secrets plugins based on `framework.Backend` from the SDK should set the\n[`RunningVersion`](https:\/\/github.com\/hashicorp\/vault\/blob\/sdk\/v0.6.0\/sdk\/framework\/backend.go#L95-L96)\nvariable, and the framework will implement the version interface.\n\nDatabase plugins have a smaller API than `framework.Backend` exposes, and should\ninstead implement the\n[`PluginVersioner`](https:\/\/github.com\/hashicorp\/vault\/blob\/sdk\/v0.6.0\/sdk\/logical\/logical.go#L150-L154)\ninterface directly.\n\n## Plugin logging\n\nAuth and secrets plugins based on `framework.Backend` from the SDK can take\nadvantage of the SDK's [default logger](https:\/\/github.com\/hashicorp\/vault\/blob\/fe55cbbf05586ec4c0cd9bdf865ec6f741a8933c\/sdk\/framework\/backend.go#L437).\nNo additional setup is required. The logger can be used like the following:\n\n```go\nfunc (b *backend) example() {\n    b.Logger().Trace(\"Trace level log\")\n    b.Logger().Debug(\"Debug level log\")\n    b.Logger().Info(\"Info level log\")\n    b.Logger().Warn(\"Warn level log\")\n    b.Logger().Error(\"Error level log\")\n}\n```\n\nSee the source code of [vault-auth-plugin-example](https:\/\/github.com\/hashicorp\/vault-auth-plugin-example)\nfor a more complete example of a plugin using logging.\n\n## Building a plugin from source\n\nTo build a plugin from source, first navigate to the location holding the\ndesired plugin version. Next, run `go build` to obtain a new binary for the\nplugin. 
Finally,\n[register](\/vault\/docs\/plugins\/plugin-architecture#plugin-registration) the\nplugin and enable it.\n\n## Plugin development - resources\n\nFor more information on how to register and enable your plugin, refer to the\n[Building Plugin Backends](\/vault\/tutorials\/app-integration\/plugin-backends)\ntutorial.\n\nOther HashiCorp plugin development resources:\n\n* [vault-auth-plugin-example](https:\/\/github.com\/hashicorp\/vault-auth-plugin-example)\n* [Custom Secrets Engines](\/vault\/tutorials\/custom-secrets-engine)\n\n### Plugin development - resources - community\n\nSee the [Vault Integrations](\/vault\/integrations) page to find Community\nplugin examples\/guides developed by community members. HashiCorp does not\nvalidate these for correctness.","site":"vault"}
{"questions":"vault page title Add a containerized secrets plugin Add a containerized secrets plugin to Vault layout docs Run your external secrets plugins in containers to increase the isolation Add a containerized secrets plugin to your Vault instance","answers":"---\nlayout: docs\npage_title: Add a containerized secrets plugin\ndescription: >-\n  Add a containerized secrets plugin to your Vault instance.\n---\n\n# Add a containerized secrets plugin to Vault\n\nRun your external secrets plugins in containers to increase the isolation\nbetween the plugin and Vault.\n\n## Before you start\n\n- **Your Vault instance must be running on Linux**.\n- **Your Vault instance must have local access to the Docker Engine API**.\n  Vault uses the [Docker SDK](https:\/\/pkg.go.dev\/github.com\/docker\/docker) to\n  manage containerized plugins.\n- **You must have [gVisor](https:\/\/gvisor.dev\/docs\/user_guide\/install\/)\n  installed**. Vault uses `runsc` as the entrypoint to your container runtime.\n- **If you are using a container runtime other than gVisor, you must have a\n  `runsc`-compatible container runtime installed**.\n\n## Step 1: Install your container engine\n\nInstall one of the supported container engines:\n\n  - [Docker](https:\/\/docs.docker.com\/engine\/install\/)\n  - [Rootless Docker](https:\/\/docs.docker.com\/engine\/security\/rootless\/)\n\n## Step 2: Configure your container runtime\n\nUpdate your container engine to use `runsc` for Unix sockets between the host\nand plugin binary.\n\n<Tabs>\n\n<Tab heading=\"Docker\">\n\n  1. 
Add `runsc` to your\n     [Docker daemon configuration](https:\/\/docs.docker.com\/config\/daemon):\n\n      ```shell-session\n      $ sudo tee PATH_TO_DOCKER_DAEMON_CONFIG_FILE <<EOF\n      {\n        \"runtimes\": {\n          \"runsc\": {\n            \"path\": \"PATH_TO_RUNSC_INSTALLATION\",\n            \"runtimeArgs\": [\n              \"--host-uds=all\"\n            ]\n          }\n        }\n      }\n      EOF\n      ```\n\n  1. Restart Docker:\n\n      ```shell-session\n      $ sudo systemctl reload docker\n      ```\n\n<\/Tab>\n\n<Tab heading=\"Rootless Docker\">\n\n  1. Create a configuration directory if it does not exist already:\n\n    ```shell-session\n    $ mkdir -p ~\/.config\/docker\n    ```\n\n  1. Add `runsc` to your Docker configuration:\n\n    ```shell-session\n    $ tee ~\/.config\/docker\/daemon.json <<EOF\n    {\n      \"runtimes\": {\n        \"runsc\": {\n          \"path\": \"PATH_TO_RUNSC_INSTALLATION\",\n          \"runtimeArgs\": [\n            \"--host-uds=all\",\n            \"--ignore-cgroups\"\n          ]\n        }\n      }\n    }\n    EOF\n    ```\n\n  1. 
Restart Docker:\n\n    ```shell-session\n    $ systemctl --user restart docker\n    ```\n\n<\/Tab>\n\n<\/Tabs>\n\n## Step 3: Update the HashiCorp `go-plugin` library\n\nYou must build your plugin locally with v1.5.0+ of the HashiCorp\n[`go-plugin`](https:\/\/github.com\/hashicorp\/go-plugin) library to ensure the\nfinished binary is compatible with containerization.\n\nUse `go get` to pull the latest version of the plugin library from the\n`hashicorp\/go-plugin` repo on GitHub:\n\n```shell-session\n$ go get github.com\/hashicorp\/go-plugin@latest\n```\n\n<Tip title=\"The Vault SDK includes go-plugin\">\n\n  If you build with the Vault SDK, you can update `go-plugin` with `go get`\n  by pulling the latest SDK version from the `hashicorp\/vault` repo:\n\n  `go get github.com\/hashicorp\/vault\/sdk@latest`\n\n<\/Tip>\n\n## Step 4: Build the plugin container\n\nContainerized plugins must run as a binary in the finished container and\nbehave the same whether run in a container or as a standalone application:\n\n1. Build your plugin binary so it runs on Linux.\n1. Create a container file for your plugin with the compiled binary as the\n   entry-point.\n1. Build the image with a unique tag.\n\nFor example, to build a containerized version of the built-in key-value (KV)\nsecrets plugin for Docker:\n\n1. Clone the latest version of the KV secrets plugin from\n   `hashicorp\/vault-plugin-secrets-kv`.\n    ```shell-session\n    $ git clone https:\/\/github.com\/hashicorp\/vault-plugin-secrets-kv.git\n    ```\n1. Build the Go binary for Linux.\n    ```shell-session\n    $ cd vault-plugin-secrets-kv ; CGO_ENABLED=0 GOOS=linux \\\n    go build -o kv cmd\/vault-plugin-secrets-kv\/main.go\n    ```\n1. Create an empty Dockerfile.\n    ```shell-session\n    $ touch Dockerfile\n    ```\n1. 
Update the empty `Dockerfile` with your infrastructure build details and the\n   compiled binary as the entry-point.\n   ```Dockerfile\n   FROM gcr.io\/distroless\/static-debian12\n   COPY kv \/bin\/kv\n   ENTRYPOINT [ \"\/bin\/kv\" ]\n   ```\n1. Build the container image and assign an identifiable tag.\n   ```shell-session\n   $ docker build -t hashicorp\/vault-plugin-secrets-kv:mycontainer .\n   ```\n\n## Step 5: Register the plugin\n\nRegistering a containerized plugin with Vault is similar to registering any\nother external plugin that is available locally to Vault.\n\n1. Store the SHA256 of the plugin image:\n   ```shell-session\n   $ export SHA256=$(docker images \\\n       --no-trunc \\\n       --format=\"{{ .ID }}\" \\\n       YOUR_PLUGIN_IMAGE_TAG | cut -d: -f2)\n   ```\n   For example:\n   \n   <CodeBlockConfig hideClipboard>\n\n   ```shell-session\n   $ export SHA256=$(docker images \\\n       --no-trunc \\\n       --format=\"{{ .ID }}\" \\\n       hashicorp\/vault-plugin-secrets-kv:mycontainer | cut -d: -f2)\n   ```\n   \n   <\/CodeBlockConfig>\n\n1. Register the plugin with `vault plugin register` and specify your plugin\n   image with the `oci_image` flag:\n   ```shell-session\n   $ vault plugin register \\\n       -sha256=\"${SHA256}\" \\\n       -oci_image=YOUR_PLUGIN_IMAGE_TAG \\\n       NEW_PLUGIN_TYPE NEW_PLUGIN_ID\n   ```\n   For example:\n   \n   <CodeBlockConfig hideClipboard>\n   \n   ```shell-session\n   $ vault plugin register \\\n       -sha256=\"${SHA256}\" \\\n       -oci_image=hashicorp\/vault-plugin-secrets-kv:mycontainer \\\n       secret my-kv-container\n   ```\n   \n   <\/CodeBlockConfig>\n\n1. 
Enable the new plugin for your Vault instance with `vault secrets enable` and\n   the new plugin ID:\n   ```shell-session\n   $ vault secrets enable NEW_PLUGIN_ID\n   ```\n   For example:\n   \n   <CodeBlockConfig hideClipboard>\n   \n   ```shell-session\n   $ vault secrets enable my-kv-container\n   ```\n        \n   <\/CodeBlockConfig>\n\n\n<Tip title=\"Customize container behavior with registration flags\">\n\n  You can provide additional information about the image entrypoint, command,\n  and environment with the `-command`, `-args`, and `-env` flags for\n  `vault plugin register`.\n\n<\/Tip>\n\n## Step 6: Test your plugin\n\nNow that the container is registered with Vault, you should be able to interact\nwith it like any other plugin. Try writing then fetching a new secret with your\nnew plugin.\n\n\n1. Use `vault write` to store a secret with your containerized plugin:\n   ```shell-session\n   $ vault write NEW_PLUGIN_ID\/SECRET_PATH SECRET_KEY=SECRET_VALUE\n   ```\n   For example:\n   \n   <CodeBlockConfig hideClipboard>\n   \n   ```shell-session\n   $ vault write my-kv-container\/testing subject=containers\n   Success! Data written to: my-kv-container\/testing\n   ```\n    \n   <\/CodeBlockConfig>\n\n1. Fetch the secret you just wrote:\n   ```shell-session\n   $ vault read NEW_PLUGIN_ID\/SECRET_PATH\n   ```\n   For example:\n   \n   <CodeBlockConfig hideClipboard>\n   \n   ```shell-session\n   $ vault read my-kv-container\/testing\n   ===== Data =====\n   Key        Value\n   ---        -----\n   subject    containers\n   ```\n   \n   <\/CodeBlockConfig>\n\n## Use alternative runtimes ((#alt-runtimes))\n\nYou can force Vault to use alternative runtimes provided the runtime is\ninstalled locally.\n\nTo use an alternative runtime:\n\n1. Register and name the runtime with `vault plugin runtime register`. 
For\n   example, to register the default Docker runtime (`runc`) as `docker-rt`:\n   ```shell-session\n   $ vault plugin runtime register  \\\n      -oci_runtime=runc             \\\n      -type=container docker-rt\n   ```\n\n1. Use the `-runtime` flag during plugin registration to tell Vault what\n   runtime to use:\n   ```shell-session\n   $ vault plugin register  \\\n      -runtime=RUNTIME_NAME \\\n      -sha256=\"${SHA256}\"   \\\n      -oci_image=YOUR_PLUGIN_IMAGE_TAG \\\n      NEW_PLUGIN_TYPE NEW_PLUGIN_ID\n   ```\n   For example:\n   \n   <CodeBlockConfig hideClipboard>\n   \n   ```shell-session\n   $ vault plugin register \\\n      -runtime=docker-rt   \\\n      -sha256=\"${SHA256}\"  \\\n      -oci_image=hashicorp\/vault-plugin-secrets-kv:mycontainer \\\n      secret my-kv-container\n   ```\n\n   <\/CodeBlockConfig>\n\n## Troubleshooting\n\n### Invalid backend version error\n\nIf you run into the following error while registering your plugin:\n\n<CodeBlockConfig hideClipboard>\n\n```plaintext\ninvalid backend version error: 2 errors occurred:\n        * error creating container: Error response from daemon: error while looking up the specified runtime path: exec: \" \/usr\/bin\/runsc\": stat  \/usr\/bin\/runsc: no such file or directory\n        * error creating container: Error response from daemon: error while looking up the specified runtime path: exec: \" \/usr\/bin\/runsc\": stat  \/usr\/bin\/runsc: no such file or directory\n```\n\n<\/CodeBlockConfig>\n\nit means that Vault cannot find the executable for `runsc`. Confirm the\nfollowing is true before trying again:\n\n1. You have gVisor installed locally to Vault.\n1. The path to `runsc` is correct in your Docker configuration.\n1. 
Vault has permission to run the `runsc` executable.\n\nIf you still get errors when registering a plugin, the recommended workaround is\nto use the default Docker runtime (`runc`) as an\n[alternative runtime](#alt-runtimes).","site":"vault"}
{"questions":"vault Learn about running external Vault plugins in containers page title Containerized plugins overview Note title Limited OS support Containerized plugins overview layout docs","answers":"---\nlayout: docs\npage_title: Containerized plugins overview\ndescription: Learn about running external Vault plugins in containers.\n---\n\n# Containerized plugins overview\n\n<Note title=\"Limited OS support\">\n\n  Support for the `container` runtime is currently limited to Linux.\n\n<\/Note>\n\nVault has a wide selection of builtin plugins to support integrating with other\nsystems. For example, you can use plugins to exchange app identity information\nwith an authentication service to receive a Vault token, or manage database\ncredentials. You can also register **external** plugins with your Vault instance\nto extend the capabilities of your Vault server.\n\nBy default, external plugins run as subprocesses that share the user and\nenvironment variables of your Vault instance. Administrators managing Vault\ninstances on Linux can choose to run external plugins in containers. Running\nplugins in containers increases the isolation between individual plugins and\nbetween the plugins and Vault.\n\n## System requirements\n\n- **Your Vault instance must be running on Linux**.\n\n- **Your environment must provide Vault local access to the Docker Engine API**.\n  Vault uses the [Docker SDK](https:\/\/pkg.go.dev\/github.com\/docker\/docker) to\n  manage containerized plugins.\n\n- **You must have a valid container runtime installed**. 
We recommend\n  [installing gVisor](https:\/\/gvisor.dev\/docs\/user_guide\/install\/) for your\n  container runtime as Vault specifies the `runsc` runtime by default.\n\n- **You must have all your plugin container images pulled and available locally**.\n  Vault does not currently support pulling images as part of the plugin\n  registration process.\n\n## Plugin requirements\n\nAll plugins have the following basic requirements to be containerized:\n\n- **Your plugin must be built with at least v1.6.0 of the HashiCorp\n  [`go-plugin`](https:\/\/github.com\/hashicorp\/go-plugin) library**.\n\n- **The image entrypoint should run the plugin binary**.\n\nSome configurations have additional requirements for the container image, listed\nin [supported configurations](#supported-configurations).\n\n## Supported configurations\n\nVault's containerized plugins are compatible with a variety of configurations.\nIn particular, it has been tested with the following:\n\n- Default and [rootless](https:\/\/docs.docker.com\/engine\/security\/rootless\/) Docker.\n- OCI-compatible runtimes `runsc` and `runc`.\n- Plugin container images running as root and non-root users.\n- [Mlock](\/vault\/docs\/configuration#disable_mlock) disabled or enabled.\n\nNot all combinations work and some have additional requirements, listed below.\nIf you use a configuration that matches multiple headings, you should combine\nthe requirements from each matching heading.\n\n### `runsc` runtime\n\n- You must pass an additional `--host-uds=create` flag to the `runsc` runtime.\n\n### Rootless Docker with `runsc` runtime\n\n- You must pass an additional `--ignore-cgroups` flag to the `runsc` runtime.\n  - Cgroup limits are not currently supported for this configuration.\n\n### Rootless Docker with non-root container user\n\n- You must use a container plugin runtime with\n  [`rootless`](\/vault\/docs\/commands\/plugin\/runtime\/register#rootless) enabled.\n- Your filesystem must have Posix 1e ACL support, 
available by default in most\n  modern Linux file systems.\n- Only supported for gVisor's `runsc` runtime.\n\n### Rootless Docker with mlock enabled\n\n- Only supported for gVisor's `runsc` runtime.\n\n### Non-root container user with mlock enabled\n\n- You must set the `IPC_LOCK` capability on the plugin binary.\n\n## Container lifecycle and metadata\n\nLike any other external plugin, Vault will automatically manage the lifecycle\nof plugin containers. If they are killed out of band, Vault will restart them\nbefore servicing any requests that need to be handled by them. Vault will also\n[multiplex](\/vault\/docs\/plugins\/plugin-architecture#plugin-multiplexing) multiple\nmounts to be serviced by the same container if the plugin supports multiplexing.\n\nVault labels each plugin container with a standard set of metadata to help\nidentify the owner of the container, including the cluster ID, Vault's own\nprocess ID, and the plugin's name, type, and version.\n\n## Plugin runtimes\n\nUsers who require more control over plugin containers can use the \"plugin\nruntime\" APIs for finer grained settings. 
See the CLI documentation for\n[`vault plugin runtime`](\/vault\/docs\/commands\/plugin\/runtime) for more details.","site":"vault"}
{"questions":"vault page title Server Side Consistent Token FAQ Server side consistent token FAQ This FAQ section contains frequently asked questions about the Server Side Consistent Token feature layout docs A list of frequently asked questions about server side consistent tokens","answers":"---\nlayout: docs\npage_title: Server Side Consistent Token FAQ\ndescription: A list of frequently asked questions about server side consistent tokens\n---\n\n# Server side consistent token FAQ\n\nThis FAQ section contains frequently asked questions about the Server Side Consistent Token feature.\n\n- [Q: What is the Server Side Consistent Token feature?](#q-what-is-the-server-side-consistent-token-feature)\n- [Q: I have Vault Community Edition. How does this feature impact me?](#q-i-have-vault-community-edition-how-does-this-feature-impact-me)\n- [Q: What token changes does the Server Side Consistent Tokens feature introduce?](#q-what-token-changes-does-the-server-side-consistent-tokens-feature-introduce)\n- [Q: Why are we changing the token?](#q-why-are-we-changing-the-token)\n- [Q: What type of tokens are impacted by this feature?](#q-what-type-of-tokens-are-impacted-by-this-feature)\n- [Q: Is there a new configuration that this feature introduces?](#q-is-there-a-new-configuration-that-this-feature-introduces)\n- [Q: Is there anything else I need to consider to achieve consistency, besides upgrading to Vault 1.10?](#q-is-there-anything-else-i-need-to-consider-to-achieve-consistency-besides-upgrading-to-vault-1-10)\n- [Q: What do I need to be paying attention to if I rely on tokens for some of my workflows?](#q-what-do-i-need-to-be-paying-attention-to-if-i-rely-on-tokens-for-some-of-my-workflows)\n- [Q: What are the main mitigation options that Vault offers to achieve consistency, and what are the differences between them?](#q-what-are-the-main-mitigation-options-that-vault-offers-to-achieve-consistency-and-what-are-the-differences-between-them)\n- [Q: Is this feature
something I need with Consul Storage?](#q-is-this-feature-something-i-need-with-consul-storage)\n\n### Q: What is the server side consistent token feature?\n\n~> **Note**: This feature requires Vault Enterprise.\n\nVault has an [eventual consistency](\/vault\/docs\/enterprise\/consistency) model where only the leader can write to Vault's storage. When using performance standbys with Integrated Storage, there are sequences of operations that don't always yield read-after-write consistency, which may pose a challenge for some use cases.\n\nSeveral client-based mitigations were added in Vault version 1.7, which depended on some modifications to clients (provide the appropriate response header per request) so they can specify state. This may not be possible to do in some environments.\n\nTo help with such cases, we\u2019ve now added support for the Server Side Consistent Tokens feature in Vault version 1.10. See [Replication](\/vault\/docs\/configuration\/replication), [Vault Eventual Consistency](\/vault\/docs\/enterprise\/consistency), and [Upgrade to 1.10](\/vault\/docs\/upgrading\/upgrade-to-1.10.x).\nThis feature provides a way for Service tokens, returned from logins (or token create requests), to have the relevant minimum WAL state information embedded within the token itself. Clients can then use this token to authenticate subsequent requests. Thus, clients can obtain read-after-write consistency for the token without typically having to make changes to their code or architecture.\nIf a performance standby does not have the state required to authenticate the token, it returns a 412 error to allow the client to retry. If client retry is not possible, there is a server config to allow for consistency.\n\n### Q: I have Vault Community Edition.
How does this feature impact me?\n\nFor the sake of standardization between Community and Enterprise, and due to the value of adding token prefixes in Vault for token scanning use cases, the token formats are changed across all Vault versions starting with Vault 1.10. However, since there are no performance standbys or replication in Vault Community Edition, the new Vault token will always show the local index of the WAL as 0 to indicate there is nothing to wait for.\n\n### Q: What token changes does the Server Side Consistent Tokens feature introduce?\n\nServer Side Consistent Tokens introduces the following key changes:\n\n- Token length: Server side consistent tokens are longer, being 95+ characters as opposed to 27+ characters. Since the token can be subject to change (see [Token](\/vault\/docs\/concepts\/tokens)), we recommend that you plan for a maximum length of 255 bytes to future-proof yourself if you have workflows that rely on the token size.\n- By default, Vault 1.10 will use the new token prefixes and new token format.\n- Tokens don't visibly have a \".namespaceID\" suffix.\n- Token prefix: Token prefixes are being changed as follows:\n\n| Token Type      | Old Prefix | New Prefix |\n| --------------- | ---------- | ---------- |\n| Service Tokens  | s.         | hvs.       |\n| Batch Tokens    | b.         | hvb.       |\n| Recovery Tokens | r.         | hvr.       |\n\n### Q: Why are we changing the token?\n\nTo help with use cases that need read-after-write consistency, the Server Side Consistent Tokens feature provides a way for Service tokens, returned from logins (or token create requests), to embed the relevant information for Vault servers using Integrated Storage to know the minimum WAL index that includes the storage write for the token.
This entails changes to the service token format.\n\nThe token prefix is being updated to make it easier for static-analysis code scanning tools to scan for Vault tokens, for example, to identify Vault tokens that are accidentally stored in a version control system.\n\n### Q: What type of tokens are impacted by this feature?\n\nWith the exception of the prefix changes detailed above that apply to all token types, only Service tokens are impacted by the changes that are introduced by this feature. Other token types such as batch tokens, recovery tokens, or root service tokens are not impacted.\n\n### Q: Is there a new configuration that this feature introduces?\n\nThere is a new configuration in the replication section as follows:\n\n```\nreplication {\n  allow_forwarding_via_token = \"new_token\"\n}\n```\n\nThis configuration allows Vault clusters to be configured so that requests made to performance standbys that don\u2019t yet have the most up-to-date WAL index are forwarded to the active node. Please note that there will be extra load on the active node with this type of configuration.\n\n### Q: Is there anything else I need to consider to achieve consistency, besides upgrading to Vault 1.10?\n\nYes, there are several considerations to keep in mind, and possibly things that may require you to take action, depending on your use case.\n\n- As stated earlier, if a performance standby does not have the state required to authenticate the token, it returns a 412 error allowing the client to retry.\n- Ensure that your clients can retry for the best experience.\n- Starting with Go api version [1.1.0](https:\/\/pkg.go.dev\/github.com\/hashicorp\/vault\/api@v1.1.0), the Go client library enables automatic retries for 412 errors. By default, retries=2, or use client method [SetMaxRetries](https:\/\/pkg.go.dev\/github.com\/hashicorp\/vault\/api#Client.SetMaxRetries). 
Or, you can use the Vault environment variable [VAULT_MAX_RETRIES](\/vault\/docs\/commands#vault_max_retries) to achieve the same result.\n- If you use a client library other than Go, you may still need to ensure that your client can handle 412 retries in order to achieve consistency.\n- If your client cannot retry, you can use the Vault server replication configuration `allow_forwarding_via_token` to allow for consistency. As stated earlier, this will incur extra load on the server due to forwarding of requests that don't have the up-to-date WAL state to the server:\n\n```\nreplication {\n  allow_forwarding_via_token = \"new_token\"\n}\n```\n\n~> **Note:** If you are generating root tokens or recovery tokens without using the Vault CLI, you will need to modify the OTP length used. Refer [here](\/vault\/docs\/upgrading\/upgrade-to-1.10.x) for details.\n\n### Q: What do I need to be paying attention to if I rely on tokens for some of my workflows?\n\nOur documentation on [tokens](\/vault\/docs\/concepts\/tokens) clearly identifies that the token body itself is subject to change between versions and should not be relied on.
We strongly recommend that you consider this while architecting your environment.\n\nHowever, if you use scripting and tooling to help in the authentication process for Vault-dependent applications, it is important that you take time to understand the changes (see [Replication](\/vault\/docs\/configuration\/replication), [Vault Eventual Consistency](\/vault\/docs\/enterprise\/consistency), and [Upgrade to 1.10](\/vault\/docs\/upgrading\/upgrade-to-1.10.x)), and test these changes in your specific dev environments before deploying this in production.\n\nIf your workflow used the embedded NamespaceID suffix, you will need to perform a [token lookup](\/vault\/docs\/commands\/token\/lookup) because this is currently absent in the new tokens.\n\n### Q: What are the main mitigation options that Vault offers to achieve consistency, and what are the differences between them?\n\nVault offers the following options to achieve consistency:\n\n- [Client based mitigations](\/vault\/docs\/enterprise\/consistency#vault-1-7-mitigations), which were added in Vault Release 1.7, depend on some modifications to clients to include per-request header options to \u2018always forward the request to the active node\u2019 OR to \u2018conditionally forward the request to the active node\u2019 if it would otherwise result in a stale read OR to \u2018fail requests\u2019 with error 412 if they might result in a stale read.\n- The Vault Agent can also be leveraged for proxied requests to achieve consistency via the above mitigations without client modifications.\n- Server Side Consistent Tokens, added in Vault version 1.10, provide a more implicit way to achieve consistency, but only address consistency for new tokens.\n\nThe following table outlines the main differences:\n\n| Client Controlled Consistency                                                                     | Agent with client controlled consistency settings                                                           | Server Side
Consistent Tokens                                                                                      |\n| ------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |\n| Needs Client side modifications for consistency (per request header options need to be included). | Vault Agent can also be leveraged to achieve consistency without client modifications for proxied requests. | Implicit way for consistency where the relevant minimum WAL state information is embedded within the token itself. |\n| Works across clusters too (Performance standbys and Performance Replication)                      | Single cluster only (Performance Standby)                                                                   | Single cluster only (Performance Standby)                                                                          |\n| Applies to any Vault operation                                                                    | Applies to any Vault operation                                                                              | Applies to login \/ token create requests only                                                                      |\n| May have performance implications via enforcing too much consistency                              | May have performance implications, via enforcing too much consistency for proxied requests                  | May have performance implications if server side configuration to forward requests to active nodes is leveraged.   
|\n\n~> **Note:** Client controlled consistency headers , if configured, will take precedence over the server configuration.\n\nFinally, when speaking of performance implications above, there are two kinds that you should keep in mind while selecting the best option for your use case:\n\n- Using forwarding will impact horizontal scalability by placing additional load on the active node\n- No using forwarding will impact latency of client requests due to retrying until the state is consistency\n\n### Q: Is this feature something I need with Consul Storage?\n\nConsul has a [default consistency model](\/consul\/api-docs\/features\/consistency) and this feature is not relevant with Consul storage.","site":"vault","answers_cleaned":"    layout  docs page title  Server Side Consistent Token FAQ description  An list of frequently asked questions about server side consistent tokens        Server side consistent token FAQ  This FAQ section contains frequently asked questions about the Server Side Consistent Token feature      Q  What is the Server Side Consistent Token feature    q what is the server side consistent token feature     Q  I have Vault Community Edition  How does this feature impact me    q i have vault community edition how does this feature impact me     Q  What token changes does the Server Side Consistent Tokens feature introduce    q what token changes does the server side consistent tokens feature introduce     Q  Why are we changing the token    q why are we changing the token     Q  What type of tokens are impacted by this feature    q what type of tokens are impacted by this feature     Q  Is there a new configuration that this feature introduces    q is there a new configuration that this feature introduces     Q  Is there anything else I need to consider to achieve consistency  besides upgrading to Vault 1 10    q is there anything else i need to consider to achieve consistency besides upgrading to vault 1 10     Q  What do I need to be paying 
attention to if I rely on tokens for some of my workflows    q what do i need to be paying attention to if i rely on tokens for some of my workflows     Q  What are the main mitigation options that Vault offers to achieve consistency  and what are the differences between them    q what are the main mitigation options that vault offers to achieve consistency and what are the differences between them     Q  Is this feature something I need with Consul Storage    q is this feature something i need with consul storage       Q  What is the server side consistent token feature        Note    This features requires Vault Enterprise   Vault has an  eventual consistency   vault docs enterprise consistency  model where only the leader can write to Vault s storage  When using performance standbys with Integrated Storage  there are sequences of operations that don t always yield read after write consistency  which may pose a challenge for some use cases   Several client based mitigations were added in Vault version 1 7  which depended on some modifications to clients  provide the appropriate response header per request  so they can specify state  This may not be possible to do in some environments   To help with such cases  we ve now added support for the Server Side Consistent Tokens feature in Vault version 1 10  See  Replication   vault docs configuration replication    Vault Eventual Consistency   vault docs enterprise consistency   and  Upgrade to 1 10   vault docs upgrading upgrade to 1 10 x   This feature provides a way for Service tokens  returned from logins  or token create requests   to have the relevant minimum WAL state information embedded within the token itself  Clients can then use this token to authenticate subsequent requests  Thus  clients can obtain read after write consistency for the token without typically having to make changes to their code or architecture  If a performance standby does not have the state required to authenticate the token  it returns 
a 412 error to allow the client to retry  If client retry is not possible  there is a server config to allow for the consistency       Q  I have Vault Community Edition  How does this feature impact me   For the sake of standardization between Community and Enterprise  and due to the value of adding token prefixes in Vault for token scanning use cases  the token formats are changed across all Vault versions starting Vault 1 10  However  since there are no performance standbys or replication in Vault Community Edition  the new Vault token will always show the local index of the WAL as 0 to indicate there is nothing to wait for       Q  What token changes does the Server Side Consistent Tokens feature introduce   Server Side Consistent Tokens introduces the following key changes     Token length   Server side consistent tokens are longer  being 95  characters as opposed to 27  characters  Since the token can be subject to change  see  Token   vault docs concepts tokens    we recommend that you plan for a maximum length of 255 bytes to future proof yourself if you have workflows that rely on the token size    By default  Vault 1 10 will use the new token prefixes and new token format    Tokens don t visibly have a   namespaceID  suffix   Token prefix  Token prefixes are being changed as follows     Token Type        Old Prefix   New Prefix                                                   Service Tokens    s            hvs            Batch Tokens      b            hvb            Recovery Tokens   r            hvr               Q  Why are we changing the token   To help with use cases that need read after write consistency  the Server Side Consistent Tokens feature provides a way for Service tokens  returned from logins  or token create requests   to embed the relevant information for Vault servers using Integrated Storage to know the minimum WAL index that includes the storage write for the token  This entails changes to the service token format   The token prefix is 
being updated to make it easier for static analysis code scanning tools to scan for Vault tokens  for example  to identify Vault tokens that are accidentally stored in a version control system       Q  What type of tokens are impacted by this feature   With the exception of the prefix changes detailed above that apply to all token types  only Service tokens are impacted by the changes that are introduced by this feature  Other token types such as batch tokens  recovery tokens  or root service tokens are not impacted       Q  Is there a new configuration that this feature introduces   There is a new configuration in the replication section as follows       replication     allow forwarding via token    new token         This configuration allows Vault clusters to be configured so that requests made to performance standbys that don t yet have the most up to date WAL index are forwarded to the active node  Please note that there will be extra load on the active node with this type of configuration       Q  Is there anything else I need to consider to achieve consistency  besides upgrading to Vault 1 10   Yes  there are several considerations to keep in mind  and possibly things that may require you to take action  depending on your use case     As stated earlier  if a performance standby does not have the state required to authenticate the token  it returns a 412 error allowing the client to retry    Ensure that your clients can retry for the best experience    Starting with Go api version  1 1 0  https   pkg go dev github com hashicorp vault api v1 1 0   the Go client library enables automatic retries for 412 errors  By default  retries 2  or use client method  SetMaxRetries  https   pkg go dev github com hashicorp vault api Client SetMaxRetries   Or  you can use the Vault environment variable  VAULT MAX RETRIES   vault docs commands vault max retries  to achieve the same result    If you use a client library other than Go  you may still need to ensure that your 
client can handle 412 retries in order to achieve consistency    If your client cannot retry  you can use the Vault server replication configuration  allow forwarding via token   to allow for consistency  As stated earlier  this will incur extra load on the server due to forwarding of requests that don t have the up to date WAL state to the server       replication     allow forwarding via token    new token              Note    If you are generating root tokens or recovery tokens without using the Vault CLI  you will need to modify the OTP length used  refer  here   vault docs upgrading upgrade to 1 10 x  for details       Q  What do I need to be paying attention to if I rely on tokens for some of my workflows   Our documentation on  tokens   vault docs concepts tokens  clearly identifies that the token body itself is subject to change between versions and should not be relied on  We strongly recommend that you consider this while architecting your environment   However  if you use scripting and tooling to help in the authentication process for Vault dependent applications  it is important that you take time to understand the changes  see  Replication   vault docs configuration replication    Vault Eventual Consistency   vault docs enterprise consistency   and  Upgrade to 1 10   vault docs upgrading upgrade to 1 10 x    and test these changes in your specific dev environments before deploying this in production   If your workflow used the embedded NamespaceID suffix  you will need to perform a  token lookup   vault docs commands token lookup  because this is currently absent in the new tokens       Q  What are the main mitigation options that Vault offers to achieve consistency  and what are the differences between them   Vault offers the following options to achieve consistency      Client based mitigations   vault docs enterprise consistency vault 1 7 mitigations   which was added in Vault Release 1 7  depend on some modifications to clients to include per 
request header options to **always forward** the request to the active node, OR to **conditionally forward** the request to the active node if it would otherwise result in a stale read, OR to **fail requests** with error 412 if they might result in a stale read.

The Vault Agent can also be leveraged for proxied requests to achieve consistency via the above mitigations without client modifications.

Server Side Consistent Tokens, added in Vault version 1.10, provide a more implicit way to achieve consistency, but only address consistency for new tokens. The following table outlines the main differences:

| Client controlled consistency | Agent with client controlled consistency settings | Server Side Consistent Tokens |
|---|---|---|
| Needs client-side modifications for consistency; per-request header options need to be included | Vault Agent can be leveraged to achieve consistency without client modifications for proxied requests | Implicit way to achieve consistency; the relevant minimum WAL state information is embedded within the token itself |
| Works across clusters too (Performance Standbys and Performance Replication) | Single cluster only (Performance Standby) | Single cluster only (Performance Standby) |
| Applies to any Vault operation | Applies to any Vault operation | Applies to login / token create requests only |
| May have performance implications via enforcing too much consistency | May have performance implications via enforcing too much consistency for proxied requests | May have performance implications if server-side configuration to forward requests to active nodes is leveraged |

**Note**: Client-controlled consistency headers, if configured, take precedence over the server configuration.

Finally, when speaking of performance implications above, there are two kinds to keep in mind while selecting the best option for your use case:

- Using forwarding will impact horizontal scalability by placing additional load on the active node.
- Not using forwarding will impact latency of client requests due to retrying until the state is consistent.

**Q: Is this feature something I need with Consul storage?** Consul has a [default consistency model](/consul/api-docs/features/consistency) and this feature is not relevant with Consul storage."}
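The per-request header options described above can be sketched with `curl`. This is a hedged illustration: the header names `X-Vault-Forward: active-node` (always forward) and `X-Vault-Inconsistent: forward-active-node` (conditionally forward) come from Vault Enterprise's eventual-consistency documentation, while the address and secret path are placeholders. The commands are only assembled and printed here, not sent:

```shell
# Hedged sketch: build (but do not send) requests carrying the
# client-controlled consistency headers. All values are placeholders.
VAULT_ADDR="${VAULT_ADDR:-https://vault.example.com:8200}"

# Always forward this request to the active node:
ALWAYS="curl -H 'X-Vault-Forward: active-node' $VAULT_ADDR/v1/secret/data/my-app"

# Forward only if this standby would otherwise serve a stale read:
CONDITIONAL="curl -H 'X-Vault-Inconsistent: forward-active-node' $VAULT_ADDR/v1/secret/data/my-app"

echo "$ALWAYS"
echo "$CONDITIONAL"
```

The failing variant (error 412) is also configured per request, at the cost of the client handling the retry itself.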
{"questions":"vault The Venafi integrated secrets engine for Vault Venafi secrets engine for HashiCorp Vault layout docs The Venafi Machine Identity Secrets Engine provides applications with the page title Venafi Secrets Engines ability to dynamically generate SSL TLS certificates that serve as machine","answers":"---\nlayout: docs\npage_title: Venafi - Secrets Engines\ndescription: The Venafi integrated secrets engine for Vault.\n---\n\n# Venafi secrets engine for HashiCorp Vault\n\nThe Venafi Machine Identity Secrets Engine provides applications with the\nability to dynamically generate SSL\/TLS certificates that serve as machine\nidentities. Using\n[Venafi Trust Protection Platform](https:\/\/www.venafi.com\/platform\/trust-protection-platform)\nor [Venafi Cloud](https:\/\/www.venafi.com\/venaficloud) assures compliance\nwith enterprise policy and consistency with industry standard trust protection.\nDesigned for high performance with the same interface as the built-in PKI\nsecrets engine, services can get certificates without manually generating a\nprivate key and CSR, submitting to a certificate authority, and waiting for a\nverification and signing process to complete. Venafi's certificate authority\nintegrations and policy controls, combined with Vault's built-in authentication\nand authorization mechanisms, provide the verification functionality.\n\nLike the built-in PKI secrets engine, short-lived certificates for ephemeral\nworkloads are the primary focus of the Venafi secrets engine. As such,\nrevocation is not currently supported.\n\nThe Venafi secrets engine makes use of HashiCorp Vault's\n[plugin system](\/vault\/docs\/plugins)\nand Venafi's [VCert Client SDK](https:\/\/github.com\/Venafi\/vcert). 
If you have\nquestions about the Venafi secrets engine, have an issue to report, or have\ndeveloped improvements that you want to contribute, visit the\n[GitHub](https:\/\/github.com\/Venafi\/vault-pki-backend-venafi) repository.\n\n## Considerations\n\nTo successfully deploy this secrets engine, there are some important\nconsiderations. Review each of the following considerations before using the\nVenafi secrets engine.\n\n### Venafi trust protection platform requirements\n\nYour certificate authority (CA) must be able to issue a certificate in\nunder one minute. Microsoft Active Directory Certificate Services (ADCS) is a\npopular choice. Other CA choices may have slightly different\nrequirements.\n\nWithin Trust Protection Platform, configure these settings. For more\ninformation see the _Venafi Administration Guide_.\n\n- A user account that has an authentication token for the \"Venafi Secrets\n  Engine for HashiCorp Vault\" (ID \"hashicorp-vault-by-venafi\") API Application\n  as of 20.1 (or scope \"certificate:manage\" for 19.2 through 19.4) or has been\n  granted WebSDK Access (deprecated)\n- A Policy folder where the user has the following permissions: View, Read,\n  Write, Create.\n- Enterprise compliant policies applied to the folder including:\n\n  - Subject DN values for Organizational Unit (OU), Organization (O),\n    City\/Locality (L), State\/Province (ST) and Country (C).\n  - CA Template that Trust Protection Platform will use to enroll general\n    certificate requests.\n  - Management Type not locked or locked to 'Enrollment'.\n  - Certificate Signing Request (CSR) Generation unlocked or not locked to\n    'Service Generated CSR'.\n  - Generate Key\/CSR on Application not locked or locked to 'No'.\n  - (Recommended) Disable Automatic Renewal set to 'Yes'.\n  - (Recommended) Key Bit Strength set to 2048 or higher.\n  - (Recommended) Domain Whitelisting policy appropriately assigned.\n\n  **NOTE**: If you are using Microsoft ADCS, the CRL distribution point 
and\n  Authority Information Access (AIA) URIs must start with an HTTP URI\n  (non-default configuration). If an LDAP URI appears first in the X509v3\n  extensions, some applications will fail, such as NGINX ingress controllers.\n  These applications aren't able to retrieve CRL and OCSP information.\n\n#### Trust between Vault and trust protection platform\n\nThe Trust Protection Platform REST API (WebSDK) must be secured with a\ncertificate. Generally, the certificate is issued by a CA that is not publicly\ntrusted so establishing trust is a critical part of your setup.\n\nTwo methods can be used to establish trust. Both require the trust anchor\n(root CA certificate) of the WebSDK certificate. If you have administrative\naccess, you can import the root certificate into the trust store for your\noperating system. If you don't have administrative access, or prefer not to\nmake changes to your system configuration, save the root certificate to a file\nin PEM format (e.g. \/opt\/venafi\/bundle.pem) and reference it using the\n`trust_bundle_file` parameter whenever you create or update a PKI role in your\nVault.\n\n### Venafi Cloud requirements\n\nIf you are using Venafi Cloud, be sure to set up an issuing template, project,\nand any other dependencies that appear in the Venafi Cloud documentation.\n\n- Set up an issuing template to link Venafi Cloud to your CA. To learn more,\n  search for \"Issuing Templates\" in the\n  [Venafi Cloud Help system](https:\/\/docs.venafi.cloud\/help\/Default.htm).\n- Create a project and zone that identifies the template and other information.\n  To learn more, search for \"Projects\" in the\n  [Venafi Cloud Help system](https:\/\/docs.venafi.cloud\/help\/Default.htm).\n\n## Setup\n\n<Tabs>\n<Tab heading=\"Vault\" group=\"vault\">\n\nBefore certificates can be issued, you must complete these steps to configure the\nVenafi secrets engine:\n\n1. 
Create the [directory](\/vault\/docs\/plugins\/plugin-architecture#plugin-directory)\n   where your Vault server will look for plugins (e.g. \/etc\/vault\/vault_plugins).\n   The directory must not be a symbolic link. On macOS, for example, \/etc is a\n   link to \/private\/etc. To avoid errors, choose an alternative directory such\n   as \/private\/etc\/vault\/vault_plugins.\n\n1. Download the latest `vault-pki-backend-venafi`\n   [release package](https:\/\/github.com\/Venafi\/vault-pki-backend-venafi\/releases\/latest)\n   for your operating system. Unzip the binary to the plugin directory. Note\n   that the URL for the zip file, referenced below, changes as new versions of the\n   plugin are released. Replace the version (0.12.0) of the release in the command below to\n   download the desired version.\n\n   ```shell-session\n   $ wget https:\/\/github.com\/Venafi\/vault-pki-backend-venafi\/releases\/download\/v0.12.0\/venafi-pki-backend_v0.12.0_darwin.zip\n   $ unzip venafi-pki-backend_v0.12.0_darwin.zip\n   $ mv venafi-pki-backend \/etc\/vault\/vault_plugins\n   ```\n\n1. Update the Vault [server configuration](\/vault\/docs\/configuration\/)\n   to specify the plugin directory:\n\n   ```shell-session\n   $ plugin_directory = \"\/etc\/vault\/vault_plugins\"\n   ```\n\n1. Start your Vault using the [server command](\/vault\/docs\/commands\/server).\n\n1. Get the SHA-256 checksum of the `venafi-pki-backend` plugin binary:\n\n   ```shell-session\n   $ SHA256=$(sha256sum \/etc\/vault\/vault_plugins\/venafi-pki-backend| cut -d' ' -f1)\n   ```\n\n1. Register the `venafi-pki-backend` plugin in the Vault\n   [system catalog](\/vault\/docs\/plugins\/plugin-architecture#plugin-catalog):\n\n   ```shell-session\n   $ vault write sys\/plugins\/catalog\/secret\/venafi-pki-backend \\\n   sha_256=\"${SHA256}\" command=\"venafi-pki-backend\"\n   ```\n\n1. 
Enable the Venafi secrets engine:\n\n   ```shell-session\n   $ vault secrets enable -path=venafi-pki -plugin-name=venafi-pki-backend plugin\n   ```\n\n1. Configure a Venafi secret that maps a name in Vault to connection and authentication\n   settings for enrolling certificates using Venafi. The zone is a policy folder for Trust\n   Protection Platform or a DevOps project zone for Venafi Cloud.\n\n   Obtain the `access_token` and `refresh_token` for Trust Protection Platform using the\n   [VCert CLI](https:\/\/github.com\/Venafi\/vcert\/blob\/master\/README-CLI-PLATFORM.md#obtaining-an-authorization-token)\n   (`getcred` action with `--client-id \"hashicorp-vault-by-venafi\"` and\n   `--scope \"certificate:manage\"`) or the Platform's Authorize REST API method.\n\n   To see all options available for venafi secrets, use\n   `vault path-help venafi-pki\/venafi\/:name` after creating the secret.\n\n   **Trust Protection Platform**:\n\n   ```shell-session\n   $ vault write venafi-pki\/venafi\/tpp \\\n       url=\"https:\/\/tpp.venafi.example\" \\\n       access_token=\"tn1PwE1QTZorXmvnTowSyA==\" \\\n       refresh_token=\"MGxV7DzNnclQi9CkJMCXCg==\" \\\n       zone=\"DevOps\\\\HashiCorp Vault\" \\\n       trust_bundle_file=\"\/path-to\/bundle.pem\"\n   ```\n\n   **Venafi Cloud**:\n\n   ```shell-session\n   $ vault write venafi-pki\/venafi\/cloud \\\n       apikey=\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" \\\n       zone=\"zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz\"\n   ```\n\n1. Lastly, configure a [role](\/vault\/docs\/secrets\/pki)\n   that maps a name in Vault to a Venafi secret for enrollment. 
To see all\n   options available for roles, including `ttl`, `max_ttl` and `issuer_hint`\n   (for validity), use `vault path-help venafi-pki\/roles\/:name` after\n   creating the role.\n\n   **Trust Protection Platform**:\n\n   ```shell-session\n   $ vault write venafi-pki\/roles\/tpp \\\n       venafi_secret=tpp \\\n       store_by=serial store_pkey=true \\\n       allowed_domains=example.com \\\n       allow_subdomains=true\n   ```\n\n   **Venafi Cloud**:\n\n   ```shell-session\n   $ vault write venafi-pki\/roles\/cloud \\\n       venafi_secret=cloud \\\n       store_by=serial store_pkey=true \\\n       allowed_domains=example.com \\\n       allow_subdomains=true\n   ```\n\n<\/Tab>\n<Tab heading=\"HCP Vault Dedicated\" group=\"hcp\">\n\n~> The Venafi Secrets Engine on HCP Vault Dedicated currently supports Venafi Cloud or Trust Protection Platform instances secured using a certificate from a publicly trusted CA.\nSupport for uploading a certificate signed by a private CA using trust_bundle_file parameter is not available on HCP Vault Dedicated and requires running a self-managed Vault to use.\n\nBefore certificates can be issued, you must complete these steps to configure the Venafi secrets engine:\n\n1. Navigate to your HCP Vault Dedicated cluster's [Integrations](\/hcp\/docs\/vault\/integrations#hashicorp-partner-plugins) page within the HCP portal\nto add the Venafi secrets engine to your cluster.\n\n1. After the Venafi plugin has been successfully added to your cluster, you can use the Vault CLI to configure the Venafi secrets engine \nfor use.\n\n1. Enable the Venafi secrets engine:\n\n   ```shell-session\n   $ vault secrets enable -path=venafi-pki -plugin-name=venafi-pki-backend plugin\n   ```\n\n   Configure a Venafi secret that maps a name in Vault to connection and authentication\n   settings for enrolling certificates using Venafi. 
The zone is a DevOps project zone for Venafi Cloud.\n\n   To see all options available for venafi secrets, use\n   `vault path-help venafi-pki\/venafi\/:name` after creating the secret.\n\n   **Venafi Cloud**:\n\n   ```shell-session\n   $ vault write venafi-pki\/venafi\/cloud \\\n       apikey=\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" \\\n       zone=\"zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz\"\n   ```\n\n1. Lastly, configure a [role](\/vault\/docs\/secrets\/pki)\n   that maps a name in Vault to a Venafi secret for enrollment. To see all\n   options available for roles, including `ttl`, `max_ttl` and `issuer_hint`\n   (for validity), use `vault path-help venafi-pki\/roles\/:name` after\n   creating the role.\n\n   **Venafi Cloud**:\n\n   ```shell-session\n   $ vault write venafi-pki\/roles\/cloud \\\n       venafi_secret=cloud \\\n       store_by=serial store_pkey=true \\\n       allowed_domains=example.com \\\n       allow_subdomains=true\n   ```\n\n<\/Tab>\n<\/Tabs>\n\n## Usage\n\nAfter the Venafi secrets engine is configured and a user\/machine has a Vault\ntoken with the proper permission, it can enroll certificates using Venafi.\nTo see all of the options available when requesting a certificate, including\n`ttl` (for validity), `key_password`, and `custom_fields`, use\n`vault path-help venafi-pki\/issue\/:role-name` and\n`vault path-help venafi-pki\/sign\/:role-name`.\n\n1. 
Generate a certificate by writing to the `\/issue` endpoint with the name of\n   the role:\n\n   **Trust Protection Platform**:\n\n   ```shell-session\n   $ vault write venafi-pki\/issue\/tpp common_name=\"common-name.example.com\" \\\n       alt_names=\"dns-san-1.example.com,dns-san-2.example.com\"\n   ```\n   **Example output:** \n\n   ```text\n   Key                  Value\n   ---                  -----\n   lease_id             venafi-pki\/issue\/tpp\/oLih42SCFzyjntxGc00vqmWH\n   lease_duration       719h49m55s\n   lease_renewable      false\n   certificate          -----BEGIN CERTIFICATE-----\n   certificate_chain    -----BEGIN CERTIFICATE-----\n   common_name          common-name.example.com\n   private_key          -----BEGIN RSA PRIVATE KEY-----\n   serial_number        1d:bc:a8:3c:00:00:00:05:5c:e8\n   ```\n\n   **Venafi Cloud**:\n\n   ```shell-session\n   $ vault write venafi-pki\/issue\/cloud common_name=\"common-name.example.com\" \\\n       alt_names=\"dns-san-1.example.com,dns-san-2.example.com\"\n   ```\n\n   **Example output:** \n\n   ```text\n   Key                  Value\n   ---                  -----\n   lease_id             venafi-pki\/issue\/cloud\/1WCNvXKiwboWfRRfjzlPAwEi\n   lease_duration       167h59m58s\n   lease_renewable      false\n   certificate          -----BEGIN CERTIFICATE-----\n   certificate_chain    -----BEGIN CERTIFICATE-----\n   common_name          common-name.example.com\n   private_key          -----BEGIN RSA PRIVATE KEY-----\n   serial_number        17:47:8b:13:90:b8:3d:87:b0:dc:b6:9e:00:2b:87:02:c9:d3:1e:8a\n   ```\n\n1. 
Or sign a CSR from a file by writing to the `\/sign` endpoint with the name of\n   the role:\n\n   **Trust Protection Platform**:\n\n   ```shell-session\n   $ vault write venafi-pki\/sign\/tpp csr=@example.req\n   ```\n\n   **Example output:** \n\n   ```text\n   Key                  Value\n   ---                  -----\n   lease_id             venafi-pki\/sign\/tpp\/tQq3QNY45e4sJMqTTI9DXEGK\n   lease_duration       719h49m57s\n   lease_renewable      false\n   certificate          -----BEGIN CERTIFICATE-----\n   certificate_chain    -----BEGIN CERTIFICATE-----\n   common_name          common-name.example.com\n   serial_number        1d:c4:07:9a:00:00:00:05:5c:ea\n   ```\n\n   **Venafi Cloud**:\n\n   ```shell-session\n   $ vault write venafi-pki\/sign\/cloud csr=@example.req\n   ```\n\n   **Example output:** \n\n   ```text\n   Key                  Value\n   ---                  -----\n   lease_id             venafi-pki\/sign\/cloud\/fF44FdMAjuCdC29w3Ff81hes\n   lease_duration       167h59m58s\n   lease_renewable      false\n   certificate          -----BEGIN CERTIFICATE-----\n   certificate_chain    -----BEGIN CERTIFICATE-----\n   common_name          common-name.example.com\n   serial_number        76:55:e2:14:de:c8:3f:e1:64:4a:fa:37:d4:6e:f5:ef:5e:4c:16:5b\n   ```\n\n## API\n\nVenafi Machine Identity Secrets Engine uses the same\n[Vault API](\/vault\/api-docs\/secret\/pki)\nas the built-in PKI secrets engine. 
Some methods, such as those for\nmanaging certificate authorities, do not apply.","site":"vault"}
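For the usage steps above, a client's token must carry a policy that grants access to the issue and sign paths. A minimal sketch in Vault's policy language, assuming the `venafi-pki` mount and `tpp` role from the setup steps; the policy name `venafi-issue` is hypothetical:

```shell-session
$ vault policy write venafi-issue - <<EOF
path "venafi-pki/issue/tpp" {
  capabilities = ["create", "update"]
}
path "venafi-pki/sign/tpp" {
  capabilities = ["create", "update"]
}
EOF
```

Tokens issued with this policy can enroll certificates through the role but cannot change the role or the Venafi secret itself.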
{"questions":"vault Google Cloud secrets engine The Google Cloud secrets engine for Vault dynamically generates Google Cloud service account keys and OAuth tokens based on IAM policies layout docs page title Google Cloud Secrets Engines","answers":"---\nlayout: docs\npage_title: Google Cloud - Secrets Engines\ndescription: |-\n  The Google Cloud secrets engine for Vault dynamically generates Google Cloud\n  service account keys and OAuth tokens based on IAM policies.\n---\n\n# Google Cloud secrets engine\n\nThe Google Cloud Vault secrets engine dynamically generates Google Cloud service\naccount keys and OAuth tokens based on IAM policies. This enables users to gain\naccess to Google Cloud resources without needing to create or manage a dedicated\nservice account.\n\nThe benefits of using this secrets engine to manage Google Cloud IAM service accounts are:\n\n- **Automatic cleanup of GCP IAM service account keys** - each Service Account\n  key is associated with a Vault lease. When the lease expires (either during\n  normal revocation or through early revocation), the service account key is\n  automatically revoked.\n\n- **Quick, short-term access** - users do not need to create new GCP Service\n  Accounts for short-term or one-off access (such as batch jobs or quick\n  introspection).\n\n- **Multi-cloud and hybrid cloud applications** - users authenticate to Vault\n  using a central identity service (such as LDAP) and generate GCP credentials\n  without the need to create or manage a new Service Account for that user.\n\n~> **NOTE: Deprecation of `access_token` Leases**: In previous versions of this secrets engine\n(released with Vault <= 0.11.1), a lease was generated with access tokens. If you're using\nan old version of the plugin, please upgrade. Read more in the\n[upgrade guide](#deprecation-of-access-token-leases)\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. 
These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1.  Enable the Google Cloud secrets engine:\n\n    ```shell-session\n    $ vault secrets enable gcp\n    Success! Enabled the gcp secrets engine at: gcp\/\n    ```\n\n    By default, the secrets engine will mount at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Configure the secrets engine with account credentials, or leave blank or unwritten\n    to use Application Default Credentials.\n\n    ```shell-session\n    $ vault write gcp\/config credentials=@my-credentials.json\n    Success! Data written to: gcp\/config\n    ```\n\n    If you are running Vault from inside [Google Compute Engine][gce] or [Google\n    Kubernetes Engine][gke], the instance or pod service account can be used in\n    place of specifying the credentials JSON file.\n    For more information on authentication, see the [authentication section](#authentication) below.\n\n    In some cases, you cannot set sensitive IAM security credentials in your\n    Vault configuration. For example, your organization may require that all\n    security credentials are short-lived or explicitly tied to a machine identity.\n\n    To provide IAM security credentials to Vault, we recommend using Vault\n    [plugin workload identity federation](#plugin-workload-identity-federation-wif)\n    (WIF) as shown below.\n\n\n1.  
Alternatively, configure the audience claim value and the service account email to assume for plugin workload identity federation:\n\n    ```text\n    $ vault write gcp\/config \\\n        identity_token_audience=\"<TOKEN AUDIENCE>\" \\\n        service_account_email=\"<SERVICE ACCOUNT EMAIL>\"\n    ```\n\n    Vault's identity token provider signs the plugin identity token JWT internally.\n    If a trust relationship exists between Vault and GCP through WIF, the secrets\n    engine can exchange the Vault identity token for a\n    [federated access token](https:\/\/cloud.google.com\/docs\/authentication\/token-types#access).\n\n    To configure a trusted relationship between Vault and GCP:\n        - You must configure the [identity token issuer backend](\/vault\/api-docs\/secret\/identity\/tokens#configure-the-identity-tokens-backend)\n          for Vault.\n        - GCP must have a\n          [workload identity pool and provider](https:\/\/cloud.google.com\/iam\/docs\/manage-workload-identity-pools-providers)\n          configured with information about the fully qualified and network-reachable\n          issuer URL for the Vault plugin's\n          [identity token provider](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-well-known-configurations).\n\n    Establishing a trusted relationship between Vault and GCP ensures that GCP\n    can fetch JWKS\n    [public keys](\/vault\/api-docs\/secret\/identity\/tokens#read-active-public-keys)\n    and verify the plugin identity token signature.\n\n1. Configure rolesets or static accounts. See the relevant sections below.\n\n## Rolesets\n\nA roleset consists of a Vault managed GCP Service account along with a set of IAM bindings\ndefined for that service account. The name of the service account is generated based on the time\nof creation or update. 
You should not depend on the name of the service account being\nfixed and should manage all IAM bindings for the service account through the `bindings` parameter\nwhen creating or updating the roleset.\n\nFor more information on the differences between rolesets and static accounts, see the\n[things to note](#things-to-note) section below.\n\n### Roleset policy considerations\n\nStarting with Vault 1.8.0, existing permissive policies containing globs\nfor the GCP Secrets Engine may grant additional privileges due to the introduction\nof `\/gcp\/roleset\/:roleset\/token` and `\/gcp\/roleset\/:roleset\/key` endpoints.\n\nThe following policy grants a user the ability to read all rolesets, but would\nalso allow them to generate tokens and keys. This type of policy is not recommended:\n\n```hcl\n# DO NOT USE\npath \"\/gcp\/roleset\/*\" {\n    capabilities = [\"read\"]\n}\n```\n\nThe following example demonstrates how a wildcard can instead be used in a roleset policy to\nadhere to the principle of least privilege:\n\n```hcl\npath \"\/gcp\/roleset\/+\" {\n    capabilities = [\"read\"]\n}\n```\n\nFor more information on policy syntax, see the\n[policy documentation](\/vault\/docs\/concepts\/policies#policy-syntax).\n\n### Examples\n\nTo configure a roleset that generates OAuth2 access tokens (preferred):\n\n```shell-session\n$ vault write gcp\/roleset\/my-token-roleset \\\n    project=\"my-project-id\" \\\n    secret_type=\"access_token\"  \\\n    token_scopes=\"https:\/\/www.googleapis.com\/auth\/cloud-platform\" \\\n    bindings=-<<EOF\n      resource \"\/\/cloudresourcemanager.googleapis.com\/projects\/my-project-id\" {\n        roles = [\"roles\/viewer\"]\n      }\n    EOF\n```\n\nTo configure a roleset that generates GCP Service Account keys:\n\n```shell-session\n$ vault write gcp\/roleset\/my-key-roleset \\\n    project=\"my-project\" \\\n    secret_type=\"service_account_key\"  \\\n    bindings=-<<EOF\n      resource 
\"\/\/cloudresourcemanager.googleapis.com\/projects\/my-project\" {\n        roles = [\"roles\/viewer\"]\n      }\n    EOF\n```\n\nAlternatively, provide a file for the `bindings` argument like so:\n\n```shell-session\n$ vault write gcp\/roleset\/my-roleset \\\n    bindings=@mybindings.hcl \\\n    ...\n```\n\nFor more information on role bindings and sample role bindings, please see\nthe [bindings](#bindings) section below.\n\nFor more information on the differences between OAuth2 access tokens and\nService Account keys, see the [things to note](#things-to-note) section\nbelow.\n\nFor more information on creating and managing rolesets, see the\n[GCP secrets engine API docs][api].\n\n## Static accounts\n\nStatic accounts are GCP service accounts that are created outside of Vault and then provided to\nVault to generate access tokens or keys. You can also use Vault to optionally manage IAM bindings\nfor the service account.\n\nFor more information on the differences between rolesets and static accounts, see the\n[things to note](#things-to-note) section below.\n\n### Examples\n\nBefore configuring a static account, you need to create a\n[Google Cloud Service Account][service-accounts]. Take note of the email address of the service\naccount you have created. 
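If you do not already have a suitable account, one way to create it is with the `gcloud` CLI. This is a minimal sketch under assumed names (`my-vault-account` and `my-project` are placeholders, not values required by the engine):

```shell
# Placeholder names; substitute your own service account ID and project.
SA_ID="my-vault-account"
PROJECT="my-project"

# Create the service account (skipped here if gcloud is not installed/authenticated).
if command -v gcloud >/dev/null 2>&1; then
  gcloud iam service-accounts create "$SA_ID" --project="$PROJECT"
fi

# The email address Vault needs follows the documented format.
SA_EMAIL="${SA_ID}@${PROJECT}.iam.gserviceaccount.com"
echo "$SA_EMAIL"
```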
Service account emails are of the format\n`<service-account-id>@<project-id>.iam.gserviceaccount.com`.\n\nTo configure a static account that generates OAuth2 access tokens (preferred):\n\n```shell-session\n$ vault write gcp\/static-account\/my-token-account \\\n    service_account_email=\"account@my-project.iam.gserviceaccount.com\" \\\n    secret_type=\"access_token\"  \\\n    token_scopes=\"https:\/\/www.googleapis.com\/auth\/cloud-platform\" \\\n    bindings=-<<EOF\n      resource \"\/\/cloudresourcemanager.googleapis.com\/projects\/my-project\" {\n        roles = [\"roles\/viewer\"]\n      }\n    EOF\n```\n\nTo configure a static account that generates GCP Service Account keys:\n\n```shell-session\n$ vault write gcp\/static-account\/my-key-account \\\n    service_account_email=\"account@my-project.iam.gserviceaccount.com\" \\\n    secret_type=\"service_account_key\"  \\\n    bindings=-<<EOF\n      resource \"\/\/cloudresourcemanager.googleapis.com\/projects\/my-project\" {\n        roles = [\"roles\/viewer\"]\n      }\n    EOF\n```\n\nAlternatively, provide a file for the `bindings` argument like so:\n\n```shell-session\n$ vault write gcp\/static-account\/my-account \\\n    bindings=@mybindings.hcl \\\n    ...\n```\n\nFor more information on role bindings and sample role bindings, please see\nthe [bindings](#bindings) section below.\n\nFor more information on the differences between OAuth2 access tokens and\nService Account keys, see the [things to note](#things-to-note) section\nbelow.\n\nFor more information on creating and managing static accounts, see the\n[GCP secrets engine API docs][api].\n\n## Impersonated accounts\n\nImpersonated accounts are a way to generate an OAuth2 [access token](\/vault\/docs\/secrets\/gcp#access-tokens) that is granted\nthe permissions and accesses of another given service account. These access\ntokens do not have the same 10-key limit as service account keys do, yet they\nretain their short-lived nature. 
By default, their TTL in GCP is 1 hour, but\nthis may be configured to be up to 12 hours as explained in Google's\n[short-lived credentials documentation](https:\/\/cloud.google.com\/iam\/docs\/create-short-lived-credentials-delegated#sa-credentials-oauth).\n\nFor more information regarding service account impersonation in GCP, consider starting\nwith their documentation [available here](https:\/\/cloud.google.com\/iam\/docs\/impersonating-service-accounts).\n\n### Examples\n\nTo configure a Vault role that impersonates the administrator on the Google\nCloud project with the cloud platform and compute scopes:\n\n```shell-session\n$ vault write gcp\/impersonated-account\/my-token-impersonate \\\n    service_account_email=\"projectAdmin@my-project.iam.gserviceaccount.com\" \\\n    token_scopes=\"https:\/\/www.googleapis.com\/auth\/cloud-platform,https:\/\/www.googleapis.com\/auth\/compute\" \\\n    ttl=\"6h\"\n```\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials. Depending on how the Vault role\nwas configured, you can generate OAuth2 tokens or service account keys.\n\n### Access tokens\n\nTo generate OAuth2 [access tokens](https:\/\/cloud.google.com\/docs\/authentication\/token-types#access),\nread from the [`gcp\/...\/token`](\/vault\/api-docs\/secret\/gcp#generate-secret-iam-service-account-creds-oauth2-access-token)\nAPI. If using a roleset or static account, it must have been created with a\n[`secret_type`](\/vault\/api-docs\/secret\/gcp#secret_type) of `access_token`. 
Impersonated accounts will\ngenerate OAuth2 tokens by default.\n\n**Roleset:**\n```shell-session\n$ vault read gcp\/roleset\/my-token-roleset\/token\n\nKey                Value\n---                -----\nexpires_at_seconds    1537402548\ntoken                 ya29.c.ElodBmNPwHUNY5gcBpnXcE4ywG4w1k...\ntoken_ttl             3599\n```\n\n**Static account:**\n```shell-session\n$ vault read gcp\/static-account\/my-token-account\/token\n\nKey                Value\n---                -----\nexpires_at_seconds    1672231587\ntoken                 ya29.c.b0Aa9VdykAdYoW9S1ImtPZykF_oTi9...\ntoken_ttl             3599\n```\n\n**Impersonated account:**\n```shell-session\n$ vault read gcp\/impersonated-account\/my-token-impersonate\/token\n\nKey                Value\n---                -----\nexpires_at_seconds    1671667844\ntoken                 ya29.c.b0AT7lpjBRmO7ghBEyMV18evd016hq...\ntoken_ttl             59m59s\n```\n\nThis endpoint generates a non-renewable, non-revocable static OAuth2 access token\nwith a max lifetime of one hour, where `token_ttl` is given in seconds and the\n`expires_at_seconds` is the expiry time for the token, given as a Unix timestamp.\nThe `token` value can then be used as an HTTP Authorization Bearer token in requests\nto GCP APIs:\n\n```shell-session\n$ curl -H \"Authorization: Bearer ya29.c.ElodBmNPwHUNY5gcBpnXcE4ywG4w1k...\"\n```\n\n### Service account keys\n\nTo generate service account keys, read from `gcp\/...\/key`. Vault returns the service\naccount key data as a base64-encoded string in the `private_key_data` field. This can\nbe read by decoding it using `base64 --decode \"ewogICJ0e...\"` or another base64 tool of\nyour choice. 
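As a concrete sketch of that decode step, the payload below is a stand-in; in practice it would come from `vault read -field=private_key_data gcp/roleset/my-key-roleset/key`, where the roleset name is an assumption:

```shell
# Stand-in for the base64-encoded `private_key_data` field returned by Vault.
printf '{"type": "service_account"}' | base64 > /tmp/key.b64

# Decode it to a JSON credentials file (`--decode` is the GNU coreutils
# spelling; BSD/macOS base64 uses `-D`).
base64 --decode < /tmp/key.b64 > /tmp/vault-gcp-key.json
cat /tmp/vault-gcp-key.json
```

The decoded file can then be handed to GCP tooling that expects a credentials JSON, for example via the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.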
The roleset or static account must have been created as type `service_account_key`:\n\n```shell-session\n$ vault read gcp\/roleset\/my-key-roleset\/key\n\nKey                 Value\n---                 -----\nlease_id            gcp\/key\/my-key-roleset\/ce563a99-5e55-389b...\nlease_duration      30m\nlease_renewable     true\nkey_algorithm       KEY_ALG_RSA_2048\nkey_type            TYPE_GOOGLE_CREDENTIALS_FILE\nprivate_key_data    ewogICJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCIsC...\n```\n\nThis endpoint generates a new [GCP IAM service account key][iam-keys] associated\nwith the role's Service Account. When the lease expires (or is revoked\nearly), the Service Account key will be deleted.\n\n**There is a default limit of 10 keys per Service Account.** For more\ninformation on this limit and recommended mitigation, please see the [things to\nnote](#things-to-note) section below.\n\n## Bindings\n\nRoleset or static account bindings define a list of resources and the associated IAM roles on that\nresource. 
Bindings are used as the `bindings` argument when creating or\nupdating a roleset or static account and are specified in the following format using HCL:\n\n```hcl\nresource NAME {\n  roles = [ROLE, [ROLE...]]\n}\n```\n\nFor example:\n\n```hcl\nresource \"buckets\/my-bucket\" {\n  roles = [\n    \"roles\/storage.objectAdmin\",\n    \"roles\/storage.legacyBucketReader\",\n  ]\n}\n\n# At instance level, using self-link\nresource \"https:\/\/www.googleapis.com\/compute\/v1\/projects\/my-project\/zones\/my-zone\/instances\/my-instance\" {\n  roles = [\n    \"roles\/compute.instanceAdmin.v1\"\n  ]\n}\n\n# At project level\nresource \"\/\/cloudresourcemanager.googleapis.com\/projects\/my-project\" {\n  roles = [\n    \"roles\/compute.instanceAdmin.v1\",\n    \"roles\/iam.serviceAccountUser\",  # required if managing instances that run as service accounts\n  ]\n}\n\n# At folder level\nresource \"\/\/cloudresourcemanager.googleapis.com\/folders\/123456\" {\n  roles = [\n    \"roles\/compute.viewer\",\n    \"roles\/deploymentmanager.viewer\",\n  ]\n}\n```\n\nThe top-level `resource` block defines the resource or resource path for which\nIAM policy information will be bound. The resource path may be specified in a\nfew different formats:\n\n- **Project-level self-link** - a URI with scheme and host, generally\n  corresponding to the `self_link` attribute of a resource in GCP. This must\n  include the resource nested in the parent project.\n\n  ```text\n  # compute alpha zone\n  https:\/\/www.googleapis.com\/compute\/alpha\/projects\/my-project\/zones\/us-central1-c\n  ```\n\n- **Full resource name** - a schema-less URI consisting of a DNS-compatible API\n  service name and resource path. 
See the [full resource name API\n  documentation][resource-name-full] for more information.\n\n  ```text\n  # Compute snapshot\n  \/\/compute.googleapis.com\/projects\/my-project\/snapshots\/my-compute-snapshot\n\n  # Pubsub snapshot\n  \/\/pubsub.googleapis.com\/projects\/my-project\/snapshots\/my-pubsub-snapshot\n\n  # BigQuery dataset\n  \/\/bigquery.googleapis.com\/projects\/my-project\/datasets\/mydataset\n\n  # Resource manager\n  \/\/cloudresourcemanager.googleapis.com\/projects\/my-project\n  ```\n\n- **Relative resource name** - A path-noscheme URI path, usually as accepted by\n  the API. Use this if the version or service are apparent from the resource\n  type. Please see the [relative resource name API\n  documentation][resource-name-relative] for more information.\n\n  ```text\n  # Storage bucket objects\n  buckets\/my-bucket\n  buckets\/my-bucket\/objects\/my-object\n\n  # PubSub topics\n  projects\/my-project\/topics\/my-pubsub-topic\n  ```\n\nThe nested `roles` attribute is an array of string names of [GCP IAM\nroles][iam-roles]. The roles may be specified in the following formats:\n\n- **Global role name** - these are global roles built into Google Cloud. 
For the\n  full list of available roles, please see the [list of predefined GCP\n  roles][predefined-roles].\n\n  ```text\n  roles\/viewer\n  roles\/bigquery.user\n  roles\/billing.admin\n  ```\n\n- **Organization-level custom role** - these are roles that are created at the\n  organization level by organization owners.\n\n  ```text\n  organizations\/my-organization\/roles\/my-custom-role\n  ```\n\n  For more information, please see the documentation on [GCP custom\n  roles][custom-roles].\n\n- **Project-level custom role** - these are roles that are created at a\n  per-project level by project owners.\n\n  ```text\n  projects\/my-project\/roles\/my-custom-role\n  ```\n\n  For more information, please see the documentation on [GCP custom\n  roles][custom-roles].\n\n## Authentication\n\nThe Google Cloud Vault secrets backend uses the official Google Cloud Golang\nSDK. This means it supports the common ways of [providing credentials to Google\nCloud][cloud-creds]. In addition to specifying `credentials` directly via Vault\nconfiguration, you can also get configuration from the following values **on the\nVault server**:\n\n1. The `GOOGLE_APPLICATION_CREDENTIALS` environment variable. This is specified\n   as the **path** to a Google Cloud credentials file, typically for a service\n   account. If this environment variable is present, the resulting credentials are\n   used. If the credentials are invalid, an error is returned.\n\n1. The identity of a Google Cloud [workload][workloads-ids]. When Vault server is running\n   on a Google workload like [Google Compute Engine][gce] or [Google Kubernetes Engine][gke],\n   identity associated with the workload is automatically used. To configure Google Compute\n   Engine with an identity, see [attached service accounts][attached-service-accounts]. 
To\n   configure Google Kubernetes Engine with an identity, see [GKE workload identity][gke-workload-ids].\n\nFor more information on service accounts, please see the [Google Cloud Service\nAccounts documentation][service-accounts].\n\nTo use this secrets engine, the service account must have the following\nminimum scope(s):\n\n```text\nhttps:\/\/www.googleapis.com\/auth\/cloud-platform\n```\n\n### Required permissions\n\nThe credentials given to Vault must have the following permissions when using rolesets at the\nproject level:\n\n```text\n# Service account + key admin\niam.serviceAccounts.create\niam.serviceAccounts.delete\niam.serviceAccounts.get\niam.serviceAccounts.list\niam.serviceAccounts.update\niam.serviceAccountKeys.create\niam.serviceAccountKeys.delete\niam.serviceAccountKeys.get\niam.serviceAccountKeys.list\n```\n\nWhen using static accounts or impersonated accounts, Vault must have the following permissions\nat the service account level:\n\n```text\n# For `access_token` secrets and impersonated accounts\niam.serviceAccounts.getAccessToken\n\n# For `service_account_keys` secrets\niam.serviceAccountKeys.create\niam.serviceAccountKeys.delete\niam.serviceAccountKeys.get\niam.serviceAccountKeys.list\n```\n\nWhen using rolesets or static accounts with bindings, Vault must have the following permissions:\n\n```text\n# IAM policy changes\n<service>.<resource>.getIamPolicy\n<service>.<resource>.setIamPolicy\n```\n\nwhere `<service>` and `<resource>` correspond to permissions which will be\ngranted, for example:\n\n```text\n# Projects\nresourcemanager.projects.getIamPolicy\nresourcemanager.projects.setIamPolicy\n\n# All compute\ncompute.*.getIamPolicy\ncompute.*.setIamPolicy\n\n# BigQuery datasets\nbigquery.datasets.get\nbigquery.datasets.update\n```\n\nYou can either:\n\n- Create a [custom role][custom-roles] using these permissions, and assign this\n  role at a project-level\n\n- Assign the set of roles required to get resource-specific\n  
`getIamPolicy\/setIamPolicy` permissions. At a minimum you will need to assign\n  `roles\/iam.serviceAccountAdmin` and `roles\/iam.serviceAccountKeyAdmin` so\n  Vault can manage service accounts and keys.\n\n- Notice that BigQuery requires different permissions than other resources. This is\n  because BigQuery currently uses legacy ACLs instead of traditional IAM permissions.\n  This means that, to update access on the dataset, Vault must be able to update the dataset's\n  metadata.\n\n## Plugin Workload Identity Federation (WIF)\n\n<EnterpriseAlert product=\"vault\" \/>\n\nThe GCP secrets engine supports the plugin WIF workflow and has a source of identity called\na plugin identity token. The plugin identity token is a JWT that is signed internally by Vault's\n[plugin identity token issuer](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).\n\nIf there is a trust relationship configured between Vault and GCP through\n[workload identity federation](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation),\nthe secrets engine can exchange its identity token for short-lived access tokens needed to\nperform its actions.\n\nExchanging identity tokens for access tokens lets the GCP secrets engine\noperate without configuring explicit access to sensitive IAM security\ncredentials.\n\nTo configure the secrets engine to use plugin WIF:\n\n1. Ensure that Vault [openid-configuration](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-openid-configuration)\nand [public JWKS](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-public-jwks)\nAPIs are network-reachable by GCP. We recommend using an API proxy or gateway\nif you need to limit Vault API exposure.\n\n1. Create a\n    [workload identity pool and provider](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers#create-pool-provider)\n    in GCP.\n    1. 
The provider URL **must** point at your [Vault plugin identity token issuer](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the\n    `\/.well-known\/openid-configuration` suffix removed. For example:\n    `https:\/\/host:port\/v1\/identity\/oidc\/plugins`.\n    1. Uniquely identify the recipient of the plugin identity token as the audience.\n    You can use the [default audience](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers#prepare)\n    for the identity pool or a custom value less than 256 characters.\n\n1. [Authenticate a workload](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation-with-other-providers#authenticate)\nin GCP by granting the identity pool access to a dedicated service account using service account impersonation.\nFilter requests using the unique `sub` claim issued by plugin identity tokens so the GCP secrets engine can\nimpersonate the service account. `sub` claims have the form: `plugin-identity:<NAMESPACE>:secret:<GCP_SECRETS_MOUNT_ACCESSOR>`.\n\n1. 
Configure the GCP secrets engine with the OIDC audience value and service account\nemail.\n\n   ```shell-session\n   $ vault write gcp\/config \\\n     identity_token_audience=\"\/\/iam.googleapis.com\/projects\/410449834127\/locations\/global\/workloadIdentityPools\/vault-gcp-secrets-43777a63\/providers\/vault-gcp-secrets-wif-provider\" \\\n     service_account_email=\"vault-plugin-wif-secrets@hc-b712f250b4e04cacbadd258a90b.iam.gserviceaccount.com\"\n   ```\n\nYour secrets engine can now use plugin WIF for its configuration credentials.\nBy default, WIF [credentials](https:\/\/cloud.google.com\/iam\/docs\/workload-identity-federation#access_management)\nhave a time-to-live of 1 hour and automatically refresh when they expire.\n\nPlease see the [API documentation](\/vault\/api-docs\/secret\/gcp#write-config)\nfor more details on the fields associated with plugin WIF.\n\n### Root credential rotation\n\nIf the mount is configured with credentials directly, the credential's key may be\nrotated to a Vault-generated value that is not accessible by the operator. For more\ndetails on this operation, please see the\n[Root Credential Rotation](\/vault\/api-docs\/secret\/gcp#rotate-root-credentials) API docs.\n\n## Things to note\n\n### Rolesets vs. static accounts\n\nAdvantages of rolesets:\n\n- Service accounts and IAM bindings are fully managed by Vault\n\nDisadvantages of rolesets:\n\n- Cannot easily decouple IAM bindings from the ones managed in Vault\n- Vault requires permissions to manage IAM bindings and service accounts\n\nAdvantages of static accounts:\n\n- Can manage IAM bindings independently from the ones managed in Vault\n- Vault does not require permissions to IAM bindings and service accounts and only permissions\n  related to the keys of the service account\n\nDisadvantages of static accounts:\n\n- Self management of service accounts is necessary.\n\n### Access tokens vs. 
service account keys\n\nAdvantages of `access_tokens`:\n\n- Can generate an unlimited number of tokens per roleset\n\nDisadvantages of `access_tokens`:\n\n- Cannot be used with some client libraries or tools\n- Have a static lifetime of one hour that cannot be modified, revoked, or extended.\n\nAdvantages of `service_account_keys`:\n\n- Controllable lifetime through Vault, allowing for longer access\n- Can be used by all normal GCP tooling\n\nDisadvantages of `service_account_keys`:\n\n- Infinite lifetime in GCP (i.e. if they are not managed properly, leaked keys can live forever)\n- Limited to 10 per roleset\/service account.\n\nWhen generating OAuth access tokens, Vault will still\ngenerate a dedicated service account and key. This private key is stored in Vault\nand is never accessible to other users, and the underlying key can\nbe rotated. See the [GCP API documentation][api] for more information on\nrotation.\n\n### Service accounts are tied to rolesets\n\nService Accounts are created when the roleset is created (or updated) rather\nthan each time a secret is generated. This may be different from how other\nsecrets engines behave, but it is for good reasons:\n\n- IAM Service Account creation and permission propagation can take up to 60\n  seconds to complete. By creating the Service Account in advance, we speed up\n  the timeliness of future operations and reduce the flakiness of automated\n  workflows.\n\n- Each GCP project has a limit on the number of IAM Service Accounts. You can\n  [request additional quota][quotas]. The quota increase is processed by humans,\n  so it is best to request this additional quota in advance. This limit is\n  currently 100, **including system-managed Service Accounts**. 
If Service\n  Accounts were created per secret, this quota limit would reduce the number of\n  secrets that can be generated.\n\n### Service account keys quota limits\n\nGCP IAM has a hard limit (currently 10) on the number of Service Account keys.\nAttempts to generate more keys will result in an error. If you find yourself\nrunning into this limit, consider the following:\n\n- Have shorter TTLs or revoke access earlier. If you are not using past Service\n  Account keys, consider rotating and freeing quota earlier.\n\n- Create additional rolesets which share the same set of permissions. Each new\n  roleset creates a new service account and increases the number of keys you can\n  create.\n\n- Where possible, use OAuth2 access tokens instead of Service Account keys.\n\n### Resources in IAM bindings must exist at roleset or static account creation\n\nBecause the bindings for the Service Account are set during roleset\/static account creation,\nresources that do not exist will fail the `getIamPolicy` API call.\n\n### Roleset creation may partially fail\n\nEvery Service Account creation, key creation, and IAM policy change is a GCP API\ncall per resource. If an API call to one of these resources fails, the roleset\ncreation fails and Vault will attempt to roll back.\n\nThese rollbacks are API calls, so they may also fail. The secrets engine uses a\nWAL to ensure that unused bindings are cleaned up. In the case of quota limits,\nyou may need to clean these up manually.\n\n### Do not modify vault-owned IAM accounts\n\nWhile Vault will initially create and assign permissions to IAM service\naccounts, it is possible that an external user deletes or modifies this service\naccount. 
These changes are difficult to detect, and it is best to prevent this\ntype of modification through IAM permissions.\n\nVault roleset Service Accounts will have emails in the format:\n\n```\nvault<roleset-prefix>-<creation-unix-timestamp>@...\n```\n\nCommunicate with your teams (or use IAM permissions) to ensure these\nresources are not modified.\n\n## Help & support\n\nThe Google Cloud Vault secrets engine is written as an external Vault plugin and\nthus exists outside the main Vault repository. It is automatically bundled with\nVault releases, but the code is managed separately.\n\nPlease report issues, add feature requests, and submit contributions to the\n[vault-plugin-secrets-gcp repo on GitHub][repo].\n\n## API\n\nThe GCP secrets engine has a full HTTP API. Please see the [GCP secrets engine API docs][api]\nfor more details.\n\n[api]: \/vault\/api-docs\/secret\/gcp\n[cloud-creds]: https:\/\/cloud.google.com\/docs\/authentication\/production#providing_credentials_to_your_application\n[custom-roles]: https:\/\/cloud.google.com\/iam\/docs\/creating-custom-roles\n[gce]: https:\/\/cloud.google.com\/compute\/\n[gke]: https:\/\/cloud.google.com\/kubernetes-engine\/\n[iam-keys]: https:\/\/cloud.google.com\/iam\/docs\/service-accounts#service_account_keys\n[iam-roles]: https:\/\/cloud.google.com\/iam\/docs\/understanding-roles\n[predefined-roles]: https:\/\/cloud.google.com\/iam\/docs\/understanding-roles#predefined_roles\n[repo]: https:\/\/github.com\/hashicorp\/vault-plugin-secrets-gcp\n[resource-name-full]: https:\/\/cloud.google.com\/apis\/design\/resource_names#full_resource_name\n[resource-name-relative]: https:\/\/cloud.google.com\/apis\/design\/resource_names#relative_resource_name\n[quotas]: https:\/\/cloud.google.com\/compute\/quotas\n[service-accounts]: https:\/\/cloud.google.com\/compute\/docs\/access\/service-accounts\n[workloads-ids]: https:\/\/cloud.google.com\/iam\/docs\/workload-identities\n[attached-service-accounts]: 
https:\/\/cloud.google.com\/iam\/docs\/workload-identities#attached-service-accounts\n[gke-workload-ids]: https:\/\/cloud.google.com\/iam\/docs\/workload-identities#kubernetes-workload-identity\n\n## Upgrade guides\n\n### Deprecation of access token leases\n\n~> **NOTE**: This deprecation only affects access tokens. There is no change to the `service_account_key` secret type.\n\nPrevious versions of this secrets engine (Vault <= 0.11.1) created a lease for\neach access token secret. We have removed them after discovering that these\ntokens, specifically Google OAuth2 tokens for IAM service accounts, are\nnon-revocable and have a static 60-minute lifetime. To match the current\nlimitations of the GCP APIs, the secrets engine will no longer allow for\nrevocation or manage the token TTL - more specifically, **the access_token\nresponse will no longer include `lease_id` or other lease information**. This\nchange does not reflect any change to the actual underlying OAuth tokens or GCP\nservice accounts.\n\nTo upgrade:\n\n- Remove references to `lease_id`, `lease_duration`, or other `lease_*`\n  attributes when reading responses for the access tokens secrets endpoint (i.e.\n  from `gcp\/token\/$roleset`). See the [documentation for access\n  tokens](#access-tokens) to see the new format for the response.\n\n- Be aware of leftover leases from previous versions. 
While these old leases\n  will still be revocable, they will not actually invalidate their associated\n  access token, and that token will still be usable for up to one hour.","site":"vault"}
{"questions":"vault AWS secrets engine page title AWS Secrets Engines layout docs The AWS secrets engine for Vault generates access keys dynamically based on IAM policies","answers":"---\nlayout: docs\npage_title: AWS - Secrets Engines\ndescription: |-\n  The AWS secrets engine for Vault generates access keys dynamically based on\n  IAM policies.\n---\n\n# AWS secrets engine\n\nThe AWS secrets engine generates AWS access credentials dynamically based on IAM\npolicies. This generally makes working with AWS IAM easier, since it does not\ninvolve clicking in the web UI. Additionally, the process is codified and mapped\nto internal auth methods (such as LDAP). The AWS IAM credentials are time-based\nand are automatically revoked when the Vault lease expires.\n\nVault supports four different types of credentials to retrieve from AWS:\n\n1. `iam_user`: Vault will create an IAM user for each lease, attach the managed\n   and inline IAM policies as specified in the role to the user, and if a\n   [permissions\n   boundary](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/access_policies_boundaries.html)\n   is specified on the role, the permissions boundary will also be attached.\n   Vault will then generate an access key and secret key for the IAM user and\n   return them to the caller. IAM users have no session tokens and so no\n   session token will be returned. Vault will delete the IAM user upon reaching the TTL expiration.\n2. `assumed_role`: Vault will call\n   [sts:AssumeRole](https:\/\/docs.aws.amazon.com\/STS\/latest\/APIReference\/API_AssumeRole.html)\n   and return the access key, secret key, and session token to the caller.\n3. `federation_token`: Vault will call\n   [sts:GetFederationToken](https:\/\/docs.aws.amazon.com\/STS\/latest\/APIReference\/API_GetFederationToken.html)\n   passing in the supplied AWS policy document and return the access key, secret\n   key, and session token to the caller.\n4. 
`session_token`: Vault will call\n   [sts:GetSessionToken](https:\/\/docs.aws.amazon.com\/STS\/latest\/APIReference\/API_GetSessionToken.html)\n   and return the access key, secret key, and session token to the caller.\n\n### Static roles\n\nThe AWS secrets engine supports the concept of \"static roles\", which are\na 1-to-1 mapping of Vault Roles to IAM users. The current access keys\nfor the user are stored and automatically rotated by Vault on a\nconfigurable period of time. This is in contrast to dynamic secrets, where a\nunique set of credentials is generated with each credential request.\nWhen credentials are requested for the Role, Vault returns the current\nAccess Key ID and Secret Access Key for the configured user, allowing anyone with the proper\nVault policies to have access to the IAM credentials.\n\nPlease see the [API documentation](\/vault\/api-docs\/secret\/aws#create-static-role) for details on this feature.\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1.  Enable the AWS secrets engine:\n\n    ```text\n    $ vault secrets enable aws\n    Success! Enabled the aws secrets engine at: aws\/\n    ```\n\n    By default, the secrets engine will mount at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Configure the credentials that Vault uses to communicate with AWS to generate\n    the IAM credentials:\n\n    ```text\n    $ vault write aws\/config\/root \\\n        access_key=AKIAJWVN5Z4FOFT7NLNA \\\n        secret_key=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 \\\n        region=us-east-1\n    ```\n\n    Internally, Vault will connect to AWS using these credentials. As such,\n    these credentials must be a superset of any policies which might be granted\n    on IAM credentials. 
Since Vault uses the official AWS SDK, it will use the\n    specified credentials. You can also specify the credentials via the standard\n    AWS environment credentials, shared file credentials, or IAM role\/ECS task\n    credentials. (Note that you can't authorize Vault with IAM role credentials if you plan\n    on using STS Federation Tokens, since the temporary security credentials\n    associated with the role are not authorized to use GetFederationToken.)\n\n    In some cases, you cannot set sensitive IAM security credentials in your\n    Vault configuration. For example, your organization may require that all\n    security credentials are short-lived or explicitly tied to a machine identity.\n    \n    To provide IAM security credentials to Vault, we recommend using Vault\n    [plugin workload identity federation](#plugin-workload-identity-federation-wif)\n    (WIF).\n\n    ~> **Notice:** Even though the path above is `aws\/config\/root`, do not use\n    your AWS root account credentials. Instead, generate a dedicated user or\n    role.\n\n1.  Alternatively, configure the audience claim value and the role ARN to assume for plugin workload identity federation:\n\n    ```text\n    $ vault write aws\/config\/root \\\n        identity_token_audience=\"<TOKEN AUDIENCE>\" \\\n        role_arn=\"<AWS ROLE ARN>\"\n    ```\n\n    Vault's identity token provider will internally sign the plugin identity token JWT.\n    Given a trust relationship is configured between Vault and AWS via\n    Web Identity Federation, the secrets engine can exchange this identity token to obtain\n    ephemeral STS credentials.\n\n    ~> **Notice:** For this trust relationship to be established, AWS must have\n    an [IAM OIDC identity provider](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_roles_providers_create_oidc.html)\n    configured with information about the fully qualified and network-reachable\n    Issuer URL for Vault's plugin [identity token provider](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-well-known-configurations).\n    This is to ensure that AWS can fetch the JWKS [public keys](\/vault\/api-docs\/secret\/identity\/tokens#read-active-public-keys)\n    and verify the plugin identity token signature. To configure Vault's Issuer,\n    please refer to the Identity Tokens\n    [documentation](\/vault\/api-docs\/secret\/identity\/tokens#configure-the-identity-tokens-backend).\n\n1.  Configure a Vault role that maps to a set of permissions in AWS as well as an\n    AWS credential type. When users generate credentials, they are generated\n    against this role. An example:\n\n    ```text\n    $ vault write aws\/roles\/my-role \\\n        credential_type=iam_user \\\n        policy_document=-<<EOF\n    {\n      \"Version\": \"2012-10-17\",\n      \"Statement\": [\n        {\n          \"Effect\": \"Allow\",\n          \"Action\": \"ec2:*\",\n          \"Resource\": \"*\"\n        }\n      ]\n    }\n    EOF\n    ```\n\n    This creates a role named \"my-role\". 
When users generate credentials against\n    this role, Vault will create an IAM user and attach the specified policy\n    document to the IAM user. Vault will then create an access key and secret\n    key for the IAM user and return these credentials. You can supply a\n    user inline policy, references to existing AWS managed policies by full\n    ARN, and\/or a list of IAM groups:\n\n    ```text\n    $ vault write aws\/roles\/my-other-role \\\n        policy_arns=arn:aws:iam::aws:policy\/AmazonEC2ReadOnlyAccess,arn:aws:iam::aws:policy\/IAMReadOnlyAccess \\\n        iam_groups=group1,group2 \\\n        credential_type=iam_user \\\n        policy_document=-<<EOF\n    {\n      \"Version\": \"2012-10-17\",\n      \"Statement\": [\n        {\n          \"Effect\": \"Allow\",\n          \"Action\": \"ec2:*\",\n          \"Resource\": \"*\"\n        }\n      ]\n    }\n    EOF\n    ```\n\n    For more information on IAM policies, please see the\n    [AWS IAM policy documentation](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/PoliciesOverview.html).\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\n1.  Generate a new credential by reading from the `\/creds` endpoint with the name\n    of the role:\n\n    ```text\n    $ vault read aws\/creds\/my-role\n    Key                Value\n    ---                -----\n    lease_id           aws\/creds\/my-role\/f3e92392-7d9c-09c8-c921-575d62fe80d8\n    lease_duration     768h\n    lease_renewable    true\n    access_key         AKIAIOWQXTLW36DV7IEA\n    secret_key         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\n    session_token      <nil>\n    ```\n\n    Each invocation of the command will generate a new credential.\n\n    Unfortunately, IAM credentials are eventually consistent with respect to\n    other Amazon services. If you are planning on using these credentials in a\n    pipeline, you may need to add a delay of 5-10 seconds (or more) after\n    fetching credentials before they can be used successfully.\n\n    If you want to be able to use credentials without the wait, consider using\n    the STS method of fetching keys. IAM credentials supported by an STS token\n    are available for use as soon as they are generated.\n\n1.  Rotate the credentials that Vault uses to communicate with AWS:\n\n    ```text\n    $ vault write -f aws\/config\/rotate-root\n    Key           Value\n    ---           -----\n    access_key    AKIA3ALIVABCDG5XC8H4\n    ```\n\n    <Note>\n\n      Calls from Vault to AWS may fail immediately after calling `aws\/config\/rotate-root` until\n      AWS becomes consistent again. Refer to\n      the <a href=\"\/vault\/api-docs\/secret\/aws#rotate-root-iam-credentials\">AWS secrets engine API<\/a> reference\n      for additional information on rotating IAM credentials.\n\n    <\/Note>\n\n## IAM permissions policy for Vault\n\nThe `aws\/config\/root` credentials need permission to manage dynamic IAM users.\nHere is an example AWS IAM policy that grants the most commonly required\npermissions Vault needs:\n\n```json\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"iam:AttachUserPolicy\",\n        \"iam:CreateAccessKey\",\n        \"iam:CreateUser\",\n        \"iam:DeleteAccessKey\",\n        \"iam:DeleteUser\",\n        \"iam:DeleteUserPolicy\",\n        \"iam:DetachUserPolicy\",\n        \"iam:GetUser\",\n        \"iam:ListAccessKeys\",\n        \"iam:ListAttachedUserPolicies\",\n        \"iam:ListGroupsForUser\",\n        \"iam:ListUserPolicies\",\n        \"iam:PutUserPolicy\",\n        \"iam:AddUserToGroup\",\n        \"iam:RemoveUserFromGroup\",\n        \"iam:TagUser\"\n      ],\n      \"Resource\": [\"arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user\/vault-*\"]\n    }\n  
]\n}\n```\n\nVault also supports AWS Permissions Boundaries when creating IAM users. If you\nwish to enforce that Vault always attaches a permissions boundary to an IAM\nuser, you can use a policy like:\n\n```json\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"iam:CreateAccessKey\",\n        \"iam:DeleteAccessKey\",\n        \"iam:DeleteUser\",\n        \"iam:GetUser\",\n        \"iam:ListAccessKeys\",\n        \"iam:ListAttachedUserPolicies\",\n        \"iam:ListGroupsForUser\",\n        \"iam:ListUserPolicies\",\n        \"iam:AddUserToGroup\",\n        \"iam:RemoveUserFromGroup\"\n      ],\n      \"Resource\": [\"arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user\/vault-*\"]\n    },\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"iam:AttachUserPolicy\",\n        \"iam:CreateUser\",\n        \"iam:DeleteUserPolicy\",\n        \"iam:DetachUserPolicy\",\n        \"iam:PutUserPolicy\"\n      ],\n      \"Resource\": [\"arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user\/vault-*\"],\n      \"Condition\": {\n        \"StringEquals\": {\n          \"iam:PermissionsBoundary\": [\n            \"arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:policy\/PolicyName\"\n          ]\n        }\n      }\n    }\n  ]\n}\n```\n\nwhere the \"iam:PermissionsBoundary\" condition contains the list of permissions\nboundary policies that you wish to ensure that Vault uses. This policy will\nensure that Vault uses one of the permissions boundaries specified (not all of\nthem).\n\n## Plugin Workload Identity Federation (WIF)\n\n<EnterpriseAlert product=\"vault\" \/>\n\nThe AWS secrets engine supports the Plugin WIF workflow, and has a source of identity called\na plugin identity token. The plugin identity token is a JWT that is internally signed by Vault's \n[plugin identity token issuer](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-workload-identity-issuer-s-openid-configuration). 
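The token exchange works only if AWS can match the token's `iss` and `aud` claims against the configured IAM OIDC identity provider, so when debugging a failed federation it can help to look at the claims directly. A minimal sketch of decoding a JWT payload without signature verification; the token built below is a stand-in with illustrative claim values, not a real Vault plugin identity token:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying its signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Stand-in token carrying the claims AWS checks during Web Identity
# Federation: `iss` must match the IAM OIDC provider URL and `aud` the
# audience configured on it. All values here are illustrative.
claims = {
    "iss": "https://vault.example/v1/identity/oidc/plugins",
    "aud": "vault.example/v1/identity/oidc/plugins",
    "sub": "plugin-identity:root:secret:aws",
}
header = base64.urlsafe_b64encode(b'{"alg":"RS256","typ":"JWT"}').rstrip(b"=")
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=")
token = b".".join([header, payload, b"unsigned-demo"]).decode()

print(jwt_claims(token)["aud"])  # -> vault.example/v1/identity/oidc/plugins
```

Real plugin identity tokens are signed by Vault's issuer and verified by AWS against the published JWKS; this sketch only shows what a mismatched `iss` or `aud` claim would look like from the client side.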
\n\nIf there is a trust relationship configured between Vault and AWS through\n[Web Identity Federation](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_roles_providers_oidc.html),\nthe secrets engine can exchange its identity token for short-lived STS credentials needed to\nperform its actions.\n\nExchanging identity tokens for STS credentials lets the AWS secrets engine\noperate without configuring explicit access to sensitive IAM security\ncredentials.\n\nTo configure the secrets engine to use plugin WIF:\n\n1. Ensure that Vault [openid-configuration](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-openid-configuration)\n   and [public JWKS](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-public-jwks) \n   APIs are network-reachable by AWS. We recommend using an API proxy or gateway\n   if you need to limit Vault API exposure.  \n\n1. Create an\n   [IAM OIDC identity provider](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_roles_providers_create_oidc.html)\n   in AWS.\n   1. The provider URL **must** point at your [Vault plugin identity token issuer](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the\n   `\/.well-known\/openid-configuration` suffix removed. For example: \n   `https:\/\/host:port\/v1\/identity\/oidc\/plugins`.\n   1. The audience should uniquely identify the recipient of the plugin identity\n   token. In AWS, the recipient is the identity provider. We recommend using\n   the `host:port\/v1\/identity\/oidc\/plugins` portion of the provider URL as your\n   recipient since it will be unique for each configured identity provider.\n\n1. Create a [web identity role](https:\/\/docs.aws.amazon.com\/IAM\/latest\/UserGuide\/id_roles_create_for-idp_oidc.html#idp_oidc_Create)\n   in AWS with the same audience used for your IAM OIDC identity provider.\n\n1. 
Configure the AWS secrets engine with the IAM OIDC audience value and web\n   identity role ARN.\n\n```shell-session\n$ vault write aws\/config\/root \\\n    identity_token_audience=\"vault.example\/v1\/identity\/oidc\/plugins\" \\\n    role_arn=\"arn:aws:iam::123456789123:role\/example-web-identity-role\"\n```\n\nYour secrets engine can now use plugin WIF for its configuration credentials. \nBy default, WIF [credentials](https:\/\/docs.aws.amazon.com\/STS\/latest\/APIReference\/API_AssumeRoleWithWebIdentity.html) \nhave a time-to-live of 1 hour and automatically refresh when they expire.\n\nPlease see the [API documentation](\/vault\/api-docs\/secret\/aws#configure-root-credentials) \nfor more details on the fields associated with plugin WIF.\n\n## STS credentials\n\nThe above demonstrated usage with `iam_user` credential types. As mentioned,\nVault also supports `assumed_role`, `federation_token`, and `session_token`\ncredential types.\n\n### STS federation tokens\n\n~> **Notice:** Due to limitations in AWS, in order to use the `federation_token`\ncredential type, Vault **must** be configured with IAM user credentials. AWS\ndoes not allow temporary credentials (such as those from an IAM instance\nprofile) to be used.\n\nAn STS federation token inherits a set of permissions that are the combination\n(intersection) of four sets of permissions:\n\n1. The permissions granted to the `aws\/config\/root` credentials\n2. The user inline policy configured in the Vault role\n3. The managed policy ARNs configured in the Vault role\n4. An implicit deny policy on IAM or STS operations.\n\nRoles with a `credential_type` of `federation_token` can specify one or more of\nthe `policy_document`, `policy_arns`, and `iam_groups` parameters in the Vault\nrole.\n\nThe `aws\/config\/root` credentials require IAM permissions for\n`sts:GetFederationToken` and the permissions to delegate to the STS\nfederation token. 
For example, this policy on the `aws\/config\/root` credentials\nwould allow creation of an STS federated token with delegated `ec2:*`\npermissions (or any subset of `ec2:*` permissions):\n\n```javascript\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": {\n    \"Effect\": \"Allow\",\n    \"Action\": [\n      \"ec2:*\",\n      \"sts:GetFederationToken\"\n    ],\n    \"Resource\": \"*\"\n  }\n}\n```\n\nAn `ec2_admin` role would then assign an inline policy with the same `ec2:*`\npermissions.\n\n```shell-session\n$ vault write aws\/roles\/ec2_admin \\\n    credential_type=federation_token \\\n    policy_document=@policy.json\n```\n\nThe policy.json file would contain an inline policy with similar permissions,\nless the `sts:GetFederationToken` permission. (We could grant\n`sts:GetFederationToken` permissions, but STS attaches an implicit deny\nthat overrides the allow.)\n\n```javascript\n{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": {\n    \"Effect\": \"Allow\",\n    \"Action\": \"ec2:*\",\n    \"Resource\": \"*\"\n  }\n}\n```\n\nTo generate a new set of STS federation token credentials, we simply write to\nthe role using the `aws\/sts` endpoint:\n\n```shell-session\n$ vault write aws\/sts\/ec2_admin ttl=60m\nKey            \tValue\nlease_id       \taws\/sts\/ec2_admin\/31d771a6-fb39-f46b-fdc5-945109106422\nlease_duration \t60m0s\nlease_renewable\tfalse\naccess_key     \tASIAJYYYY2AA5K4WIXXX\nsecret_key     \tHSs0DYYYYYY9W81DXtI0K7X84H+OVZXK5BXXXX\nsession_token \tAQoDYXdzEEwasAKwQyZUtZaCjVNDiXXXXXXXXgUgBBVUUbSyujLjsw6jYzboOQ89vUVIehUw\/9MreAifXFmfdbjTr3g6zc0me9M+dB95DyhetFItX5QThw0lEsVQWSiIeIotGmg7mjT1\/\/e7CJc4LpxbW707loFX1TYD1ilNnblEsIBKGlRNXZ+QJdguY4VkzXxv2urxIH0Sl14xtqsRPboV7eYruSEZlAuP3FLmqFbmA0AFPCT37cLf\/vUHinSbvw49C4c9WQLH7CeFPhDub7\/rub\/QU\/lCjjJ43IqIRo9jYgcEvvdRkQSt70zO8moGCc7pFvmL7XGhISegQpEzudErTE\/PdhjlGpAKGR3d5qKrHpPYK\/k480wk1Ai\/t1dTa\/8\/3jUYTUeIkaJpNBnupQt7qoaXXXXXXXXXX\n```\n\n### STS Session Tokens\n\nThe `session_token` credential type is used to generate short-lived credentials under the root config.\nTo create these with Vault and AWS, you must configure Vault to use IAM user credentials. AWS does not\nallow temporary credentials, like those from an IAM instance profile, to be used when generating session tokens.\n\n<Warning>\n\n  STS session tokens inherit any and all permissions granted to the user configured in `aws\/config\/root`.\n  In this example, the `temp_user` role will obtain a policy with the same `ec2:*` permissions as the\n  root config. For this reason, assigning a role or policy is disallowed for this credential type.\n\n<\/Warning>\n\n```shell-session\n$ vault write aws\/roles\/temp_user \\\n    credential_type=session_token\n```\n\nTo generate a new set of STS session token credentials, read from the `temp_user`\nrole using the `aws\/creds` endpoint:\n\n```shell-session\n$ vault read aws\/creds\/temp_user ttl=60m\nKey            \tValue\nlease_id       \taws\/creds\/temp_user\/w4eKbMaJOi1xLqG3MWk7y8n6\nlease_duration \t60m0s\nlease_renewable\tfalse\naccess_key     \tASIAJYYYY2AA5K4WIXXX\nsecret_key     \tHSs0DYYYYYY9W81DXtI0K7X84H+OVZXK5BXXXX\nsession_token \tAQoDYXdzEEwasAKwQyZUtZaCjVNDiXXXXXXXXgUgBBVUUbSyujLjsw6jYzboOQ89vUVIehUw\/9MreAifXFmfdbjTr3g6zc0me9M+dB95DyhetFItX5QThw0lEsVQWSiIeIotGmg7mjT1\/\/e7CJc4LpxbW707loFX1TYD1ilNnblEsIBKGlRNXZ+QJdguY4VkzXxv2urxIH0Sl14xtqsRPboV7eYruSEZlAuP3FLmqFbmA0AFPCT37cLf\/vUHinSbvw49C4c9WQLH7CeFPhDub7\/rub\/QU\/lCjjJ43IqIRo9jYgcEvvdRkQSt70zO8moGCc7pFvmL7XGhISegQpEzudErTE\/PdhjlGpAKGR3d5qKrHpPYK\/k480wk1Ai\/t1dTa\/8\/3jUYTUeIkaJpNBnupQt7qoaXXXXXXXXXX\n```\n\nSession tokens may also require an MFA-based TOTP to be provided if the IAM user is configured to require it.\nIf so, the Vault role requires the MFA device serial number to be set, and the TOTP may be provided when\nreading credentials from the Vault role.\n\n```shell-session\n$ vault write aws\/roles\/mfa_user \\\n    credential_type=session_token \\\n    mfa_serial_number=\"arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:mfa\/device-name\"\n```\n\n```shell-session\n$ vault read aws\/creds\/mfa_user mfa_code=123456\n```\n\n### STS AssumeRole\n\nThe `assumed_role` credential type is typically used for cross-account\nauthentication or single sign-on (SSO) scenarios. In order to use an\n`assumed_role` credential type, you must configure the following outside of Vault:\n\n1. An IAM role\n2. IAM inline policies and\/or managed policies attached to the IAM role\n3. IAM trust policy attached to the IAM role to grant privileges for Vault to\n   assume the role\n\n`assumed_role` credentials offer a few benefits over `federation_token`:\n\n1. Assumed roles can invoke IAM and STS operations, if granted by the role's\n   IAM policies.\n2. Assumed roles support cross-account authentication.\n3. Temporary credentials (such as those granted by running Vault on an EC2\n   instance in an IAM instance profile) can retrieve `assumed_role` credentials\n   (but cannot retrieve `federation_token` credentials).\n\nThe `aws\/config\/root` credentials must be allowed `sts:AssumeRole` through one of\ntwo methods:\n\n1.  The credentials have an IAM policy attached to them against the target role:\n    ```javascript\n    {\n      \"Version\": \"2012-10-17\",\n      \"Statement\": {\n        \"Effect\": \"Allow\",\n        \"Action\": \"sts:AssumeRole\",\n        \"Resource\": \"arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:role\/RoleNameToAssume\"\n      }\n    }\n    ```\n\n1.  
A trust policy is attached to the target IAM role for the principal:\n    ```javascript\n    {\n      \"Version\": \"2012-10-17\",\n      \"Statement\": [\n        {\n          \"Effect\": \"Allow\",\n          \"Principal\": {\n            \"AWS\": \"arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user\/VAULT-AWS-ROOT-CONFIG-USER-NAME\"\n          },\n          \"Action\": \"sts:AssumeRole\"\n        }\n      ]\n    }\n    ```\n\nWhen specifying a Vault role with a `credential_type` of `assumed_role`, you can\nspecify more than one IAM role ARN. If you do so, Vault clients can select which\nrole ARN they would like to assume when retrieving credentials from that role.\n\nFurther, you can specify both the `policy_document` and `policy_arns` parameters;\nif specified, each acts as a filter on the IAM permissions granted to the\nassumed role. If `iam_groups` is specified, the inline and attached policies for\neach IAM group will be added to the `policy_document` and `policy_arns`\nparameters, respectively, when calling [sts:AssumeRole]. For an action to be\nallowed, it must be permitted by the IAM policy on the AWS role that is\nassumed, by the `policy_document` on the Vault role (if specified), and by\nthe managed policies specified by the `policy_arns` parameter. (The\n`policy_document` parameter is passed in as the `Policy` parameter to the\n[sts:AssumeRole] API call, while the `policy_arns` parameter is passed in as the\n`PolicyArns` parameter to the same call.)\n\nNote: When multiple `role_arns` are specified, clients requesting credentials\ncan specify any of the role ARNs that are defined on the Vault role in order to\nretrieve credentials. 
However, when `policy_document`, `policy_arns`, or\n`iam_groups` are specified, that will apply to ALL role credentials retrieved\nfrom AWS.\n\nLet's create a \"deploy\" role using the ARN of the role to assume:\n\n```shell-session\n$ vault write aws\/roles\/deploy \\\n    role_arns=arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:role\/RoleNameToAssume \\\n    credential_type=assumed_role\n```\n\nTo generate a new set of STS assumed role credentials, we again write to\nthe role using the `aws\/sts` endpoint:\n\n```shell-session\n$ vault write aws\/sts\/deploy ttl=60m\nKey            \tValue\nlease_id       \taws\/sts\/deploy\/31d771a6-fb39-f46b-fdc5-945109106422\nlease_duration \t60m0s\nlease_renewable\tfalse\naccess_key     \tASIAJYYYY2AA5K4WIXXX\nsecret_key     \tHSs0DYYYYYY9W81DXtI0K7X84H+OVZXK5BXXXX\nsession_token \tAQoDYXdzEEwasAKwQyZUtZaCjVNDiXXXXXXXXgUgBBVUUbSyujLjsw6jYzboOQ89vUVIehUw\/9MreAifXFmfdbjTr3g6zc0me9M+dB95DyhetFItX5QThw0lEsVQWSiIeIotGmg7mjT1\/\/e7CJc4LpxbW707loFX1TYD1ilNnblEsIBKGlRNXZ+QJdguY4VkzXxv2urxIH0Sl14xtqsRPboV7eYruSEZlAuP3FLmqFbmA0AFPCT37cLf\/vUHinSbvw49C4c9WQLH7CeFPhDub7\/rub\/QU\/lCjjJ43IqIRo9jYgcEvvdRkQSt70zO8moGCc7pFvmL7XGhISegQpEzudErTE\/PdhjlGpAKGR3d5qKrHpPYK\/k480wk1Ai\/t1dTa\/8\/3jUYTUeIkaJpNBnupQt7qoaXXXXXXXXXX\n```\n\n[sts:assumerole]: https:\/\/docs.aws.amazon.com\/STS\/latest\/APIReference\/API_AssumeRole.html\n\n## Troubleshooting\n\n### Dynamic IAM user errors\n\nIf you get an error message similar to either of the following, the root credentials that you wrote to `aws\/config\/root` have insufficient privilege:\n\n```shell-session\n$ vault read aws\/creds\/deploy\n* Error creating IAM user: User: arn:aws:iam::000000000000:user\/hashicorp is not authorized to perform: iam:CreateUser on resource: arn:aws:iam::000000000000:user\/vault-root-1432735386-4059\n\n$ vault revoke aws\/creds\/deploy\/774cfb27-c22d-6e78-0077-254879d1af3c\nRevoke error: Error making API request.\n\nURL: POST 
http:\/\/127.0.0.1:8200\/v1\/sys\/revoke\/aws\/creds\/deploy\/774cfb27-c22d-6e78-0077-254879d1af3c\nCode: 400. Errors:\n\n* invalid request\n```\n\nIf you get stuck at any time, run `vault path-help aws`, or include a subpath, for\ninteractive help output.\n\n### STS federated token errors\n\nVault generates STS tokens using the IAM credentials passed to `aws\/config`.\n\nThose credentials must have two properties:\n\n- They must have permissions to call `sts:GetFederationToken`.\n- The capabilities of those credentials have to be at least as permissive as those requested\n  by policies attached to the STS creds.\n\nIf either of those conditions is not met, a \"403 not-authorized\" error will be returned.\n\nSee http:\/\/docs.aws.amazon.com\/STS\/latest\/APIReference\/API_GetFederationToken.html for more details.\n\nVault 0.5.1 or later is recommended when using STS tokens to avoid validation\nerrors for exceeding the AWS limit of 32 characters on STS token names.\n\n<Note title=\"AWS character limit includes path\">\n\n  The AWS character limit for token names **includes** the full path to\n   the token. For example, `aws\/sts\/dev005_vault-test_testtest` (34\n   characters) exceeds the limit, but `aws\/roles\/dev005_vaulttest-test` (31\n   characters) does not.\n\n<\/Note>\n\n### AWS instance metadata timeouts\n\n@include 'aws-imds-timeout.mdx'\n\n## API\n\nThe AWS secrets engine has a full HTTP API. 
Please see the\n[AWS secrets engine API](\/vault\/api-docs\/secret\/aws) for more\ndetails.","site":"vault","answers_cleaned":"
       iam ListAttachedUserPolicies            iam ListGroupsForUser            iam ListUserPolicies            iam PutUserPolicy            iam AddUserToGroup            iam RemoveUserFromGroup            iam TagUser                  Resource     arn aws iam  ACCOUNT ID WITHOUT HYPHENS user vault                      Vault also supports AWS Permissions Boundaries when creating IAM users  If you wish to enforce that Vault always attaches a permissions boundary to an IAM user  you can use a policy like      json      Version    2012 10 17      Statement                  Effect    Allow          Action              iam CreateAccessKey            iam DeleteAccessKey            iam DeleteUser            iam GetUser            iam ListAccessKeys            iam ListAttachedUserPolicies            iam ListGroupsForUser            iam ListUserPolicies            iam AddUserToGroup            iam RemoveUserFromGroup                  Resource     arn aws iam  ACCOUNT ID WITHOUT HYPHENS user vault                         Effect    Allow          Action              iam AttachUserPolicy            iam CreateUser            iam DeleteUserPolicy            iam DetachUserPolicy            iam PutUserPolicy                  Resource     arn aws iam  ACCOUNT ID WITHOUT HYPHENS user vault             Condition              StringEquals                iam PermissionsBoundary                  arn aws iam  ACCOUNT ID WITHOUT HYPHENS policy PolicyName                                                 where the  iam PermissionsBoundary  condition contains the list of permissions boundary policies that you wish to ensure that Vault uses  This policy will ensure that Vault uses one of the permissions boundaries specified  not all of them       Plugin Workload Identity Federation  WIF    EnterpriseAlert product  vault      The AWS secrets engine supports the Plugin WIF workflow  and has a source of identity called a plugin identity token  The plugin identity token is a JWT that is internally 
signed by Vault s   plugin identity token issuer   vault api docs secret identity tokens read plugin workload identity issuer s openid configuration     If there is a trust relationship configured between Vault and AWS through  Web Identity Federation  https   docs aws amazon com IAM latest UserGuide id roles providers oidc html   the secrets engine can exchange its identity token for short lived STS credentials needed to perform its actions   Exchanging identity tokens for STS credentials lets the AWS secrets engine operate without configuring explicit access to sensitive IAM security credentials   To configure the secrets engine to use plugin WIF   1  Ensure that Vault  openid configuration   vault api docs secret identity tokens read plugin identity token issuer s openid configuration     and  public JWKS   vault api docs secret identity tokens read plugin identity token issuer s public jwks      APIs are network reachable by AWS  We recommend using an API proxy or gateway    if you need to limit Vault API exposure     1  Create an     IAM OIDC identity provider  https   docs aws amazon com IAM latest UserGuide id roles providers create oidc html     in AWS     1  The provider URL   must   point at your  Vault plugin identity token issuer   vault api docs secret identity tokens read plugin workload identity issuer s openid configuration  with the       well known openid configuration  suffix removed  For example       https   host port v1 identity oidc plugins      1  The audience should uniquely identify the recipient of the plugin identity    token  In AWS  the recipient is the identity provider  We recommend using    the  host port v1 identity oidc plugins  portion of the provider URL as your    recipient since it will be unique for each configured identity provider   1  Create a  web identity role  https   docs aws amazon com IAM latest UserGuide id roles create for idp oidc html idp oidc Create     in AWS with the same audience used for your IAM OIDC 
identity provider   1  Configure the AWS secrets engine with the IAM OIDC audience value and web    identity role ARN      shell session   vault write aws config root       identity token audience  vault example v1 identity oidc plugins        role arn  arn aws iam  123456789123 role example web identity role       Your secrets engine can now use plugin WIF for its configuration credentials   By default  WIF  credentials  https   docs aws amazon com STS latest APIReference API AssumeRoleWithWebIdentity html   have a time to live of 1 hour and automatically refresh when they expire   Please see the  API documentation   vault api docs secret aws configure root credentials   for more details on the fields associated with plugin WIF      STS credentials  The above demonstrated usage with  iam user  credential types  As mentioned  Vault also supports  assumed role    federation token   and  session token  credential types       STS federation tokens       Notice    Due to limitations in AWS  in order to use the  federation token  credential type  Vault   must   be configured with IAM user credentials  AWS does not allow temporary credentials  such as those from an IAM instance profile  to be used   An STS federation token inherits a set of permissions that are the combination  intersection  of four sets of permissions   1  The permissions granted to the  aws config root  credentials 2  The user inline policy configured in the Vault role 3  The managed policy ARNs configured in the Vault role 4  An implicit deny policy on IAM or STS operations   Roles with a  credential type  of  federation token  can specify one or more of the  policy document    policy arns   and  iam groups  parameters in the Vault role   The  aws config root  credentials require IAM permissions for  sts GetFederationToken  and the permissions to delegate to the STS federation token  For example  this policy on the  aws config root  credentials would allow creation of an STS federated token with 
delegated  ec2    permissions  or any subset of  ec2    permissions       javascript      Version    2012 10 17      Statement          Effect    Allow        Action            ec2            sts GetFederationToken              Resource                  An  ec2 admin  role would then assign an inline policy with the same  ec2    permissions      shell session   vault write aws roles ec2 admin       credential type federation token       policy document  policy json      The policy json file would contain an inline policy with similar permissions  less the  sts GetFederationToken  permission   We could grant  sts GetFederationToken  permissions  but STS attaches attach an implicit deny that overrides the allow       javascript      Version    2012 10 17      Statement          Effect    Allow        Action    ec2          Resource                  To generate a new set of STS federation token credentials  we simply write to the role using the aws sts endpoint      shell session   vault write aws sts ec2 admin ttl 60m Key             Value lease id        aws sts ec2 admin 31d771a6 fb39 f46b fdc5 945109106422 lease duration  60m0s lease renewable false access key      ASIAJYYYY2AA5K4WIXXX secret key      HSs0DYYYYYY9W81DXtI0K7X84H OVZXK5BXXXX session token  AQoDYXdzEEwasAKwQyZUtZaCjVNDiXXXXXXXXgUgBBVUUbSyujLjsw6jYzboOQ89vUVIehUw 9MreAifXFmfdbjTr3g6zc0me9M dB95DyhetFItX5QThw0lEsVQWSiIeIotGmg7mjT1  e7CJc4LpxbW707loFX1TYD1ilNnblEsIBKGlRNXZ QJdguY4VkzXxv2urxIH0Sl14xtqsRPboV7eYruSEZlAuP3FLmqFbmA0AFPCT37cLf vUHinSbvw49C4c9WQLH7CeFPhDub7 rub QU lCjjJ43IqIRo9jYgcEvvdRkQSt70zO8moGCc7pFvmL7XGhISegQpEzudErTE PdhjlGpAKGR3d5qKrHpPYK k480wk1Ai t1dTa 8 3jUYTUeIkaJpNBnupQt7qoaXXXXXXXXXX          STS Session Tokens  The  session token  credential type is used to generate short lived credentials under the root config  To create these with Vault and AWS  you must configure Vault to use IAM user credentials  AWS does not allow temporary credentials  like those from an IAM instance 
profile  to be used when generating session tokens    Warning     STS session tokens inherit any and all permissions granted to the user configured in  aws config root     In this expample  the  temp user  role will obtain a policy with the same  ec2    permissions as the   root config  For this reason  assigning a role or policy is disallowed for this credential type     Warning      shell session   vault write aws roles temp user       credential type session token      To generate a new set of STS federation token credentials  write to the  temp user  role using the  aws creds  endpoint      shell session   vault read aws sts temp user ttl 60m Key             Value lease id        aws creds temp user w4eKbMaJOi1xLqG3MWk7y8n6 lease duration  60m0s lease renewable false access key      ASIAJYYYY2AA5K4WIXXX secret key      HSs0DYYYYYY9W81DXtI0K7X84H OVZXK5BXXXX session token  AQoDYXdzEEwasAKwQyZUtZaCjVNDiXXXXXXXXgUgBBVUUbSyujLjsw6jYzboOQ89vUVIehUw 9MreAifXFmfdbjTr3g6zc0me9M dB95DyhetFItX5QThw0lEsVQWSiIeIotGmg7mjT1  e7CJc4LpxbW707loFX1TYD1ilNnblEsIBKGlRNXZ QJdguY4VkzXxv2urxIH0Sl14xtqsRPboV7eYruSEZlAuP3FLmqFbmA0AFPCT37cLf vUHinSbvw49C4c9WQLH7CeFPhDub7 rub QU lCjjJ43IqIRo9jYgcEvvdRkQSt70zO8moGCc7pFvmL7XGhISegQpEzudErTE PdhjlGpAKGR3d5qKrHpPYK k480wk1Ai t1dTa 8 3jUYTUeIkaJpNBnupQt7qoaXXXXXXXXXX      Session tokens may also require an MFA based TOTP to be provided if the IAM user is configured to require it  If so  the Vault role requires the MFA device serial number to be set  and the TOTP may be provided when reading credentials from the Vault role      shell session   vault write aws roles mfa user       credential type session token       mfa serial number  arn aws iam  ACCOUNT ID WITHOUT HYPHENS mfa device name          shell session   vault read aws creds mfa user mfa code 123456          STS AssumeRole  The  assumed role  credential type is typically used for cross account authentication or single sign on  SSO  scenarios  In order to use an  assumed role  
credential type  you must configure outside of Vault   1  An IAM role 2  IAM inline policies and or managed policies attached to the IAM role 3  IAM trust policy attached to the IAM role to grant privileges for Vault to    assume the role   assumed role  credentials offer a few benefits over  federation token    1  Assumed roles can invoke IAM and STS operations  if granted by the role s    IAM policies  2  Assumed roles support cross account authentication 3  Temporary credentials  such as those granted by running Vault on an EC2    instance in an IAM instance profile  can retrieve  assumed role  credentials     but cannot retrieve  federation token  credentials    The  aws config root  credentials must be allowed  sts AssumeRole  through one of two methods   1   The credentials have an IAM policy attached to them against the target role         javascript              Version    2012 10 17          Statement              Effect    Allow            Action    sts AssumeRole            Resource    arn aws iam  ACCOUNT ID WITHOUT HYPHENS role RoleNameToAssume                         1   A trust policy is attached to the target IAM role for the principal         javascript              Version    2012 10 17          Statement                          Effect    Allow              Principal                  AWS    arn aws iam  ACCOUNT ID WITHOUT HYPHENS user VAULT AWS ROOT CONFIG USER NAME                          Action    sts AssumeRole                                   When specifying a Vault role with a  credential type  of  assumed role   you can specify more than one IAM role ARN  If you do so  Vault clients can select which role ARN they would like to assume when retrieving credentials from that role   Further  you can specify both a  policy document  and  policy arns  parameters  if specified  each acts as a filter on the IAM permissions granted to the assumed role  If  iam groups  is specified  the inline and attached policies for each IAM group will be added 
to the  policy document  and  policy arns  parameters  respectively  when calling  sts AssumeRole   For an action to be allowed  it must be permitted by both the IAM policy on the AWS role that is assumed  the  policy document  specified on the Vault role  if specified   and the managed policies specified by the  policy arns  parameter   The  policy document  parameter is passed in as the  Policy  parameter to the  sts AssumeRole  API call  while the  policy arns  parameter is passed in as the  PolicyArns  parameter to the same call    Note  When multiple  role arns  are specified  clients requesting credentials can specify any of the role ARNs that are defined on the Vault role in order to retrieve credentials  However  when  policy document    policy arns   or  iam groups  are specified  that will apply to ALL role credentials retrieved from AWS   Let s create a  deploy  policy using the arn of our role to assume      shell session   vault write aws roles deploy       role arns arn aws iam  ACCOUNT ID WITHOUT HYPHENS role RoleNameToAssume       credential type assumed role      To generate a new set of STS assumed role credentials  we again write to the role using the aws sts endpoint      shell session   vault write aws sts deploy ttl 60m Key             Value lease id        aws sts deploy 31d771a6 fb39 f46b fdc5 945109106422 lease duration  60m0s lease renewable false access key      ASIAJYYYY2AA5K4WIXXX secret key      HSs0DYYYYYY9W81DXtI0K7X84H OVZXK5BXXXX session token  AQoDYXdzEEwasAKwQyZUtZaCjVNDiXXXXXXXXgUgBBVUUbSyujLjsw6jYzboOQ89vUVIehUw 9MreAifXFmfdbjTr3g6zc0me9M dB95DyhetFItX5QThw0lEsVQWSiIeIotGmg7mjT1  e7CJc4LpxbW707loFX1TYD1ilNnblEsIBKGlRNXZ QJdguY4VkzXxv2urxIH0Sl14xtqsRPboV7eYruSEZlAuP3FLmqFbmA0AFPCT37cLf vUHinSbvw49C4c9WQLH7CeFPhDub7 rub QU lCjjJ43IqIRo9jYgcEvvdRkQSt70zO8moGCc7pFvmL7XGhISegQpEzudErTE PdhjlGpAKGR3d5qKrHpPYK k480wk1Ai t1dTa 8 3jUYTUeIkaJpNBnupQt7qoaXXXXXXXXXX       sts assumerole   https   docs aws amazon com STS latest APIReference 
API AssumeRole html     Troubleshooting      Dynamic IAM user errors  If you get an error message similar to either of the following  the root credentials that you wrote to  aws config root  have insufficient privilege      shell session   vault read aws creds deploy   Error creating IAM user  User  arn aws iam  000000000000 user hashicorp is not authorized to perform  iam CreateUser on resource  arn aws iam  000000000000 user vault root 1432735386 4059    vault revoke aws creds deploy 774cfb27 c22d 6e78 0077 254879d1af3c Revoke error  Error making API request   URL  POST http   127 0 0 1 8200 v1 sys revoke aws creds deploy 774cfb27 c22d 6e78 0077 254879d1af3c Code  400  Errors     invalid request      If you get stuck at any time  simply run  vault path help aws  or with a subpath for interactive help output       STS federated token errors  Vault generates STS tokens using the IAM credentials passed to  aws config    Those credentials must have two properties     They must have permissions to call  sts GetFederationToken     The capabilities of those credentials have to be at least as permissive as those requested   by policies attached to the STS creds   If either of those conditions are not met  a  403 not authorized  error will be returned   See http   docs aws amazon com STS latest APIReference API GetFederationToken html for more details   Vault 0 5 1 or later is recommended when using STS tokens to avoid validation errors for exceeding the AWS limit of 32 characters on STS token names    Note title  AWS character limit includes path      The AWS character limit for token names   includes   the full path to    the token  For example   aws sts dev005 vault test testtest   34    characters  exceeds the limit   but  aws roles dev005 vaulttest test   31    characters  does not     Note       AWS instance metadata timeouts   include  aws imds timeout mdx      API  The AWS secrets engine has a full HTTP API  Please see the  AWS secrets engine API   vault api docs 
secret aws  for more details "}
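As a worked illustration of consuming the `/creds` endpoint, the sketch below parses a `vault read -format=json aws/creds/my-role` response and maps the fields onto the environment variables that the AWS SDKs and CLI read. The inlined JSON is a hypothetical sample mirroring the CLI table output shown earlier (the response shape with `lease_id`, `lease_duration`, and `data` follows Vault's standard JSON output); the key values are the documentation's placeholder samples, not real credentials.

```python
import json
import os

# Hypothetical response body from: vault read -format=json aws/creds/my-role
# Field names mirror the CLI table above; values are the doc's sample data.
sample = """
{
  "lease_id": "aws/creds/my-role/f3e92392-7d9c-09c8-c921-575d62fe80d8",
  "lease_duration": 2764800,
  "renewable": true,
  "data": {
    "access_key": "AKIAIOWQXTLW36DV7IEA",
    "secret_key": "iASuXNKcWKFtbO8Ef0vOcgtiL6knR20EJkJTH8WI",
    "session_token": null
  }
}
"""

resp = json.loads(sample)
creds = resp["data"]

# Map Vault's field names onto the environment variables AWS tooling reads.
os.environ["AWS_ACCESS_KEY_ID"] = creds["access_key"]
os.environ["AWS_SECRET_ACCESS_KEY"] = creds["secret_key"]

# session_token is null for iam_user credentials; the STS-backed credential
# types (assumed_role, federation_token, session_token) populate it.
if creds.get("session_token"):
    os.environ["AWS_SESSION_TOKEN"] = creds["session_token"]

# lease_duration is reported in seconds: 2764800 s == 768 h, matching the
# 768h lease shown in the table output above.
print(resp["lease_duration"] // 3600, "hours")
```

Remember the eventual-consistency caveat above: even after exporting `iam_user` credentials this way, a short delay may be needed before AWS accepts them.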
{"questions":"vault KMIP secrets engine The KMIP secrets engine allows Vault to act as a KMIP server provider and handle the lifecycle of its KMIP managed objects layout docs page title KMIP Secrets Engines","answers":"---\nlayout: docs\npage_title: KMIP - Secrets Engines\ndescription: |-\n  The KMIP secrets engine allows Vault to act as a KMIP server provider and\n  handle the lifecycle of its KMIP managed objects.\n---\n\n# KMIP secrets engine\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nKMIP secrets engine requires [Vault Enterprise](https:\/\/www.hashicorp.com\/products\/vault\/pricing)\nwith the Advanced Data Protection (ADP) module.\n\nThe KMIP secrets engine allows Vault to act as a [Key Management\nInteroperability Protocol][kmip-spec] (KMIP) server provider and handle\nthe lifecycle of its KMIP managed objects. KMIP is a standardized protocol that allows\nservices and applications to perform cryptographic operations without having to\nmanage cryptographic material, otherwise known as managed objects, by delegating\nits storage and lifecycle to a key management server.\n\nVault's KMIP secrets engine listens on a separate port from the standard Vault\nlistener. Each Vault server in a Vault cluster configured with a KMIP secrets\nengine uses the same listener configuration. The KMIP listener defaults to port\n5696 and is configurable to alternative ports, for example, if there are\nmultiple KMIP secrets engine mounts configured.  KMIP clients connect and\nauthenticate to this KMIP secrets engine listener port using generated TLS\ncertificates. KMIP clients may connect directly to the Vault active server, or\nany of the Vault performance standby servers, on the configured KMIP port. 
A\nlayer 4 tcp load balancer may be used in front of the Vault server's KMIP ports.\nThe load balancer should support long-lived connections and it may use a round\nrobin routing algorithm as Vault servers will forward to the primary Vault\nserver, if necessary.\n\n## KMIP conformance\n\nVault implements version 1.4 of the following Key Management Interoperability Protocol Profiles:\n\n  * [Baseline Server][baseline-server]\n    * Supports all profile attributes except for *Key Value Location*.\n    * Supports all profile operations except for *Check*.\n    * Operation *Locate* supports all profile attributes except for *Key Value Location*.\n\n  * [Symmetric Key Lifecycle Server][lifecycle-server]\n    * Supports cryptographic algorithm *AES* (*3DES* is not supported).\n    * Only the *Raw* key format type is supported. (*Transparent Symmetric Key* is not supported).\n\n  * [Basic Cryptographic Server][basic-cryptographic-server]\n    * Supports block cipher modes *CBC*, *CFB*, *CTR*, *ECB*, *GCM*, and *OFB*.\n    * On multi-part (streaming) operations, block cipher mode *GCM* is not supported.\n    * The supported padding methods are *None* and *PKCS5*.\n\n  * [Asymmetric Key Lifecycle Server][asymmetric-key-lifecycle-server]\n    * Supports *Public Key* and *Private Key* objects.\n    * Supports *RSA* cryptographic algorithm\n    * Supports *PKCS#1*, *PKCS#8*, *X.509*, *Transparent RSA Public Key* and *Transparent RSA Private Key* key format types.\n\n  * [Advanced Cryptographic Server][advanced-cryptographic-server]\n    * Supports *Encrypt*, *Decrypt*, *Sign*, *Signature Verify*, *MAC*, *MAC Verify*, *RNG Retrieve*, and *RNG Seed* client-to-server operations.\n    * The supported hashing algorithms for Sign and Signature Verify operations are *SHA224*, *SHA256*, *SHA384*, *SHA512*, *RIPEMD160*, *SHA512_224*, *SHA512_256*, *SHA3_224*, *SHA3_256*, *SHA3_384*, and *SHA3_512* for *PSS* padding method, and algorithms *SHA224*, *SHA256*, *SHA384*, *SHA512*, and 
*RIPEMD160* for *PKCS1v15* padding method.\n    * The supported hashing algorithms for MAC and MAC Verify operations are *SHA224*, *SHA256*, *SHA384*, *SHA512*, *RIPEMD160*, *SHA512_224*, *SHA512_256*, *SHA3_224*, *SHA3_256*, *SHA3_384*, and *SHA3_512* (*MD4*, *MD5*, and *SHA1* are not supported).\n\nRefer to [KMIP - Profiles Support](\/vault\/docs\/secrets\/kmip-profiles) page for more details.\n\n[baseline-server]: http:\/\/docs.oasis-open.org\/kmip\/profiles\/v1.4\/os\/kmip-profiles-v1.4-os.html#_Toc491431430\n[lifecycle-server]: http:\/\/docs.oasis-open.org\/kmip\/profiles\/v1.4\/os\/kmip-profiles-v1.4-os.html#_Toc491431487\n[basic-cryptographic-server]: http:\/\/docs.oasis-open.org\/kmip\/profiles\/v1.4\/os\/kmip-profiles-v1.4-os.html#_Toc491431527\n[asymmetric-key-lifecycle-server]: http:\/\/docs.oasis-open.org\/kmip\/profiles\/v1.4\/os\/kmip-profiles-v1.4-os.html#_Toc491431516\n[advanced-cryptographic-server]: http:\/\/docs.oasis-open.org\/kmip\/profiles\/v1.4\/os\/kmip-profiles-v1.4-os.html#_Toc491431528\n\n## Setup\n\nThe KMIP secrets engine must be configured before it can start accepting KMIP\nrequests.\n\n1.  Enable the KMIP secrets engine\n\n    ```text\n    $ vault secrets enable kmip\n    Success! Enabled the kmip secrets engine at: kmip\/\n    ```\n\n1.  Configure the secrets engine with the desired listener addresses to use and\n    TLS parameters, or leave unwritten to use default values\n\n    ```text\n    $ vault write kmip\/config listen_addrs=0.0.0.0:5696\n    ```\n### KMIP Certificate Authority for Client Certificates\n\nWhen the KMIP Secrets Engine is initially configured, Vault generates a KMIP\nCertificate Authority (CA) whose only purpose is to authenticate KMIP client\ncertificates.\n\nVault uses the internal KMIP CA to generate certificates for clients\nauthenticating to Vault with the KMIP protocol. You cannot import external KMIP\nauthorities. 
All KMIP authentication must use the internally-generated KMIP CA.\n\n## Usage\n\n### Scopes and roles\n\nThe KMIP secrets engine uses the concept of scopes to partition KMIP managed\nobject storage into multiple named buckets. Within a scope, roles can be created\nwhich dictate the set of allowed operations that the particular role can perform.\nTLS client certificates can be generated for a role, which services and applications\ncan then use when sending KMIP requests against Vault's KMIP secret engine.\n\nIn order to generate client certificates for KMIP clients to interact with Vault's\nKMIP server, we must first create a scope and role and specify the desired set of\nallowed operations for it.\n\n1.  Create a scope:\n\n    ```text\n    $ vault write -f kmip\/scope\/my-service\n    Success! Data written to: kmip\/scope\/my-service\n    ```\n\n1.  Create a role within the scope, specifying the set of operations to allow or\n    deny.\n\n    ```text\n    $ vault write kmip\/scope\/my-service\/role\/admin operation_all=true\n      Success! Data written to: kmip\/scope\/my-service\/role\/admin\n    ```\n\n### Supported KMIP operations\n\nThe KMIP secrets engine currently supports the following set of operations:\n\n```text\noperation_activate\noperation_add_attribute\noperation_create\noperation_create_keypair\noperation_decrypt\noperation_delete_attribute\noperation_destroy\noperation_discover_versions\noperation_encrypt\noperation_get\noperation_get_attribute_list\noperation_get_attributes\noperation_import\noperation_locate\noperation_mac\noperation_mac_verify\noperation_modify_attribute\noperation_query\noperation_register\noperation_rekey\noperation_rekey_keypair\noperation_revoke\noperation_sign\noperation_signature_verify\noperation_rng_seed\noperation_rng_retrieve\n```\n\nAdditionally, there are two pseudo-operations that can be used to allow or deny\nall operation capabilities to a role. These operations are mutually exclusive to\nall other operations. 
That is, if it's provided during role creation or update,\nno other operations can be provided. Similarly, if an existing role contains a\npseudo-operation, and it is then updated with a set supported operation, it will\nbe overwritten with the newly set of provided operations.\n\nPseudo-operations:\n\n```text\noperation_all\noperation_none\n```\n\n### Client certificate generation\n\nOnce a scope and role has been created, client certificates can be generated for\nthat role. The client certificate can then be provided to applications and\nservices that support KMIP to establish communication with Vault's KMIP server.\nScope and role identifiers are embedded in the certificate,\nwhich will be used when evaluating permissions during a KMIP request.\n\n1.  Generate a client certificate. This returns the CA Chain, the certificate,\n    and the private key.\n\n    ```text\n    $ vault write -f kmip\/scope\/my-service\/role\/admin\/credential\/generate\n      Key              Value\n      ---              -----\n      ca_chain         [-----BEGIN CERTIFICATE-----\n      MIICNTCCAZigAwIBAgIUKqNFb3Zy+8ypIhTDs\/2\/8f\/xEI8wCgYIKoZIzj0EAwIw\n      HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyN1oX\n      DTI5MDYyMTE4MjQ1N1owKjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWlu\n      dGVybWVkaWF0ZTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEAbniGNXHOiPvSb0I\n      fbc1B9QkOmdT2Ecx2WaQPLISplmO0Jm0u0z11CGuf3Igby7unnCNvCuCXrKJFCsQ\n      8JGhwknNAG3eesSZxG4tklA6FMZjE9ETUtYfjH7Z4vuJSw\/fxOeey7fhrqAzhV3P\n      GRkvA9EQUHJOeV4rEpiINP\/fneHNfsn1o2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYD\n      VR0TAQH\/BAgwBgEB\/wIBCTAdBgNVHQ4EFgQUR0o0v4rPiBU9RwQfEUucx3JwbPAw\n      HwYDVR0jBBgwFoAUMhORultSN+ABogxQdkt7KChD0wQwCgYIKoZIzj0EAwIDgYoA\n      MIGGAkF1IvkIaXNkVfe+q0V78CnX0XIJuvmPpgjN8AQzqLci8txikd9gF1zt8fFQ\n      gIKERm2QPrshSV9srHDB0YnThRKuiQJBNcDjCfYOzqKlBHifT4WT4OX1U6nP\/Y2b\n      imGaLJK9VIwfcJOpVCFGp7Xi8QGV6rJIFiQAqzqCy69vcU6nVMsvens=\n      -----END CERTIFICATE----- -----BEGIN 
CERTIFICATE-----\n      MIICKjCCAYugAwIBAgIUerDfApmkq0VYychkhlxEnBlIDUcwCgYIKoZIzj0EAwIw\n      HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyNloX\n      DTI5MDYyMTE4MjQ1NlowHTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MIGb\n      MBAGByqGSM49AgEGBSuBBAAjA4GGAAQBA466Axrrz+HWanNe35gPVvB7OE7TWZcc\n      QZw1QSMQ+QIQMu5NcdfvZfh68exhe1FiJezKB+zeoJWp1Q\/kqhyh7fsAFUuIcJDO\n      okZYPTmjPh3h5IZLPg5r7Pw1j99rLHhc\/EXF9wYVy2UeH\/2IqGJ+cncmVgqczlG8\n      m36g9OXd6hkofhCjZjBkMA4GA1UdDwEB\/wQEAwIBBjASBgNVHRMBAf8ECDAGAQH\/\n      AgEKMB0GA1UdDgQWBBQyE5G6W1I34AGiDFB2S3soKEPTBDAfBgNVHSMEGDAWgBQy\n      E5G6W1I34AGiDFB2S3soKEPTBDAKBggqhkjOPQQDAgOBjAAwgYgCQgGtPVCtgDc1\n      0SrTsVpEtUMYQKbOWnTKNHZ9h5jSna8n9aY+70Ai3U57q3FL95iIhZRW79PRpp65\n      d6tWqY51o2hHpwJCAK+eE7xpdnqh5H8TqAXKVuSoC0WEsovYCD03c8Ih3jWcZn6N\n      kbz2kXPcAk+dE6ncnwhwqNQgsJQGgQzJroH+Zzvb\n      -----END CERTIFICATE-----]\n      certificate      -----BEGIN CERTIFICATE-----\n      MIICOzCCAZygAwIBAgIUN5V7bLAGu8QIUFxlIugg8fBb+eYwCgYIKoZIzj0EAwIw\n      KjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWludGVybWVkaWF0ZTAeFw0x\n      OTA2MjQxODQ3MTdaFw0xOTA2MjUxODQ3NDdaMCAxDjAMBgNVBAsTBWNqVVNJMQ4w\n      DAYDVQQDEwVkdjRZbTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEANVsHV8CHYpW\n      CBKbYVEx\/sLphk67SdWxbII4Sc9Rj1KymApD4gPmS+rw0FDMZGFbn1sAfpqMBqMj\n      ylv72o9izbYSALHnYT+AaE0NFn4eGWZ2G0p56cVmfXm3ZI959E+3gvZK6X5Jnzm4\n      FKXTDKGA4pocYec\/rnYJ5X8sbAJKHvk1OeO+o2cwZTAOBgNVHQ8BAf8EBAMCA6gw\n      EwYDVR0lBAwwCgYIKwYBBQUHAwIwHQYDVR0OBBYEFBEIsBo3HiBIg2l2psaQoYkT\n      D1RNMB8GA1UdIwQYMBaAFEdKNL+Kz4gVPUcEHxFLnMdycGzwMAoGCCqGSM49BAMC\n      A4GMADCBiAJCAc8DV23DJsHV4fdmbmssu0eDIgNH+PrRKdYgqiHptbuVjF2qbILp\n      Z34dJRVN+R9B+RprZXkYiv7gJ\/47KSUKzRZpAkIByMjZqLtcypamJM\/t+\/O1BSst\n      CWcblb45FIxAmO4hE00Q5wnwXNxNnDHXWiuGdSNmIBjpb9nM5wehQlbkx7HzvPk=\n      -----END CERTIFICATE-----\n      private_key      -----BEGIN EC PRIVATE KEY-----\n      MIHcAgEBBEIB9Nn7M28VUVW6g5IlOTS3bHIZYM\/zqVy+PvYQxn2lFbg1YrQzfd7h\n      
sdtCjet0lc7pvtoOwd1dFiATOGg98OVN7MegBwYFK4EEACOhgYkDgYYABADVbB1f\n      Ah2KVggSm2FRMf7C6YZOu0nVsWyCOEnPUY9SspgKQ+ID5kvq8NBQzGRhW59bAH6a\n      jAajI8pb+9qPYs22EgCx52E\/gGhNDRZ+HhlmdhtKeenFZn15t2SPefRPt4L2Sul+\n      SZ85uBSl0wyhgOKaHGHnP652CeV\/LGwCSh75NTnjvg==\n      -----END EC PRIVATE KEY-----\n      serial_number    317328055225536560033788492808123425026102524390\n    ```\n\n### Client certificate signing\n\nAs an alternative to the above section on generating client certificates,\nthe KMIP secrets engine supports signing of Certificate Signing Requests\n(CSRs). Normally the above generation process is simpler, but some KMIP\nclients prefer (or only support) retaining the private key associated\nwith their client certificate.\n\n1. In this workflow the first step is KMIP-client dependent: use the KMIP\n   client's UI or CLI to create a client certificate CSR in PEM format.\n\n2. Sign the client certificate. This returns the CA Chain and the certificate,\n   but not the private key, which never leaves the KMIP client.\n\n   ```text\n   $ vault write kmip\/scope\/my-service\/role\/admin\/credential\/sign csr=\"$(cat my-csr.pem)\"\n     Key              Value\n     ---              -----\n     ca_chain         [-----BEGIN CERTIFICATE-----\n     MIICNTCCAZigAwIBAgIUKqNFb3Zy+8ypIhTDs\/2\/8f\/xEI8wCgYIKoZIzj0EAwIw\n     HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyN1oX\n     DTI5MDYyMTE4MjQ1N1owKjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWlu\n     dGVybWVkaWF0ZTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEAbniGNXHOiPvSb0I\n     fbc1B9QkOmdT2Ecx2WaQPLISplmO0Jm0u0z11CGuf3Igby7unnCNvCuCXrKJFCsQ\n     8JGhwknNAG3eesSZxG4tklA6FMZjE9ETUtYfjH7Z4vuJSw\/fxOeey7fhrqAzhV3P\n     GRkvA9EQUHJOeV4rEpiINP\/fneHNfsn1o2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYD\n     VR0TAQH\/BAgwBgEB\/wIBCTAdBgNVHQ4EFgQUR0o0v4rPiBU9RwQfEUucx3JwbPAw\n     HwYDVR0jBBgwFoAUMhORultSN+ABogxQdkt7KChD0wQwCgYIKoZIzj0EAwIDgYoA\n     MIGGAkF1IvkIaXNkVfe+q0V78CnX0XIJuvmPpgjN8AQzqLci8txikd9gF1zt8fFQ\n     
gIKERm2QPrshSV9srHDB0YnThRKuiQJBNcDjCfYOzqKlBHifT4WT4OX1U6nP\/Y2b\n     imGaLJK9VIwfcJOpVCFGp7Xi8QGV6rJIFiQAqzqCy69vcU6nVMsvens=\n     -----END CERTIFICATE----- -----BEGIN CERTIFICATE-----\n     MIICKjCCAYugAwIBAgIUerDfApmkq0VYychkhlxEnBlIDUcwCgYIKoZIzj0EAwIw\n     HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyNloX\n     DTI5MDYyMTE4MjQ1NlowHTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MIGb\n     MBAGByqGSM49AgEGBSuBBAAjA4GGAAQBA466Axrrz+HWanNe35gPVvB7OE7TWZcc\n     QZw1QSMQ+QIQMu5NcdfvZfh68exhe1FiJezKB+zeoJWp1Q\/kqhyh7fsAFUuIcJDO\n     okZYPTmjPh3h5IZLPg5r7Pw1j99rLHhc\/EXF9wYVy2UeH\/2IqGJ+cncmVgqczlG8\n     m36g9OXd6hkofhCjZjBkMA4GA1UdDwEB\/wQEAwIBBjASBgNVHRMBAf8ECDAGAQH\/\n     AgEKMB0GA1UdDgQWBBQyE5G6W1I34AGiDFB2S3soKEPTBDAfBgNVHSMEGDAWgBQy\n     E5G6W1I34AGiDFB2S3soKEPTBDAKBggqhkjOPQQDAgOBjAAwgYgCQgGtPVCtgDc1\n     0SrTsVpEtUMYQKbOWnTKNHZ9h5jSna8n9aY+70Ai3U57q3FL95iIhZRW79PRpp65\n     d6tWqY51o2hHpwJCAK+eE7xpdnqh5H8TqAXKVuSoC0WEsovYCD03c8Ih3jWcZn6N\n     kbz2kXPcAk+dE6ncnwhwqNQgsJQGgQzJroH+Zzvb\n     -----END CERTIFICATE-----]\n     certificate      -----BEGIN CERTIFICATE-----\n     MIICOzCCAZygAwIBAgIUN5V7bLAGu8QIUFxlIugg8fBb+eYwCgYIKoZIzj0EAwIw\n     KjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWludGVybWVkaWF0ZTAeFw0x\n     OTA2MjQxODQ3MTdaFw0xOTA2MjUxODQ3NDdaMCAxDjAMBgNVBAsTBWNqVVNJMQ4w\n     DAYDVQQDEwVkdjRZbTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEANVsHV8CHYpW\n     CBKbYVEx\/sLphk67SdWxbII4Sc9Rj1KymApD4gPmS+rw0FDMZGFbn1sAfpqMBqMj\n     ylv72o9izbYSALHnYT+AaE0NFn4eGWZ2G0p56cVmfXm3ZI959E+3gvZK6X5Jnzm4\n     FKXTDKGA4pocYec\/rnYJ5X8sbAJKHvk1OeO+o2cwZTAOBgNVHQ8BAf8EBAMCA6gw\n     EwYDVR0lBAwwCgYIKwYBBQUHAwIwHQYDVR0OBBYEFBEIsBo3HiBIg2l2psaQoYkT\n     D1RNMB8GA1UdIwQYMBaAFEdKNL+Kz4gVPUcEHxFLnMdycGzwMAoGCCqGSM49BAMC\n     A4GMADCBiAJCAc8DV23DJsHV4fdmbmssu0eDIgNH+PrRKdYgqiHptbuVjF2qbILp\n     Z34dJRVN+R9B+RprZXkYiv7gJ\/47KSUKzRZpAkIByMjZqLtcypamJM\/t+\/O1BSst\n     CWcblb45FIxAmO4hE00Q5wnwXNxNnDHXWiuGdSNmIBjpb9nM5wehQlbkx7HzvPk=\n     -----END CERTIFICATE-----\n    
 serial_number    317328055225536560033788492808123425026102524390\n   ```\n\n## Tutorial\n\nRefer to the [KMIP Secrets Engine](\/vault\/tutorials\/adp\/kmip-engine)\nguide for a step-by-step tutorial.\n\n[kmip-spec]: http:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/kmip-spec-v1.4.html\n[kmip-ops]: http:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/os\/kmip-spec-v1.4-os.html#_Toc490660840","site":"vault","answers_cleaned":"    layout  docs page title  KMIP   Secrets Engines description       The KMIP secrets engine allows Vault to act as a KMIP server provider and   handle the lifecycle of its KMIP managed objects         KMIP secrets engine   include  alerts enterprise and hcp mdx   KMIP secrets engine requires  Vault Enterprise  https   www hashicorp com products vault pricing  with the Advanced Data Protection  ADP  module   The KMIP secrets engine allows Vault to act as a  Key Management Interoperability Protocol  kmip spec   KMIP  server provider and handle the lifecycle of its KMIP managed objects  KMIP is a standardized protocol that allows services and applications to perform cryptographic operations without having to manage cryptographic material  otherwise known as managed objects  by delegating its storage and lifecycle to a key management server   Vault s KMIP secrets engine listens on a separate port from the standard Vault listener  Each Vault server in a Vault cluster configured with a KMIP secrets engine uses the same listener configuration  The KMIP listener defaults to port 5696 and is configurable to alternative ports  for example  if there are multiple KMIP secrets engine mounts configured   KMIP clients connect and authenticate to this KMIP secrets engine listener port using generated TLS certificates  KMIP clients may connect directly to the Vault active server  or any of the Vault performance standby servers  on the configured KMIP port  A layer 4 tcp load balancer may be used in front of the Vault server s KMIP ports  The load balancer should 
support long lived connections and it may use a round robin routing algorithm as Vault servers will forward to the primary Vault server  if necessary      KMIP conformance  Vault implements version 1 4 of the following Key Management Interoperability Protocol Profiles        Baseline Server  baseline server        Supports all profile attributes except for  Key Value Location         Supports all profile operations except for  Check         Operation  Locate  supports all profile attributes except for  Key Value Location         Symmetric Key Lifecycle Server  lifecycle server        Supports cryptographic algorithm  AES    3DES  is not supported         Only the  Raw  key format type is supported    Transparent Symmetric Key  is not supported         Basic Cryptographic Server  basic cryptographic server        Supports block cipher modes  CBC    CFB    CTR    ECB    GCM   and  OFB         On multi part  streaming  operations  block cipher mode  GCM  is not supported        The supported padding methods are  None  and  PKCS5         Asymmetric Key Lifecycle Server  asymmetric key lifecycle server        Supports  Public Key  and  Private Key  objects        Supports  RSA  cryptographic algorithm       Supports  PKCS 1    PKCS 8    X 509    Transparent RSA Public Key  and  Transparent RSA Private Key  key format types        Advanced Cryptographic Server  advanced cryptographic server        Supports  Encrypt    Decrypt    Sign    Signature Verify    MAC    MAC Verify    RNG Retrieve   and  RNG Seed  client to server operations        The supported hashing algorithms for Sign and Signature Verify operations are  SHA224    SHA256    SHA384    SHA512    RIPEMD160    SHA512 224    SHA512 256    SHA3 224    SHA3 256    SHA3 384   and  SHA3 512  for  PSS  padding method  and algorithms  SHA224    SHA256    SHA384    SHA512   and  RIPEMD160  for  PKCS1v15  padding method        The supported hashing algorithms for MAC and MAC Verify operations are  SHA224    SHA256    
SHA384    SHA512    RIPEMD160    SHA512 224    SHA512 256    SHA3 224    SHA3 256    SHA3 384   and  SHA3 512    MD4    MD5   and  SHA1  are not supported    Refer to  KMIP   Profiles Support   vault docs secrets kmip profiles  page for more details    baseline server   http   docs oasis open org kmip profiles v1 4 os kmip profiles v1 4 os html  Toc491431430  lifecycle server   http   docs oasis open org kmip profiles v1 4 os kmip profiles v1 4 os html  Toc491431487  basic cryptographic server   http   docs oasis open org kmip profiles v1 4 os kmip profiles v1 4 os html  Toc491431527  asymmetric key lifecycle server   http   docs oasis open org kmip profiles v1 4 os kmip profiles v1 4 os html  Toc491431516  advanced cryptographic server   http   docs oasis open org kmip profiles v1 4 os kmip profiles v1 4 os html  Toc491431528     Setup  The KMIP secrets engine must be configured before it can start accepting KMIP requests   1   Enable the KMIP secrets engine         text       vault secrets enable kmip     Success  Enabled the kmip secrets engine at  kmip           1   Configure the secrets engine with the desired listener addresses to use and     TLS parameters  or leave unwritten to use default values         text       vault write kmip config listen addrs 0 0 0 0 5696             KMIP Certificate Authority for Client Certificates  When the KMIP Secrets Engine is initially configured  Vault generates a KMIP Certificate Authority  CA  whose only purpose is to authenticate KMIP client certificates   Vault uses the internal KMIP CA to generate certificates for clients authenticating to Vault with the KMIP protocol  You cannot import external KMIP authorities  All KMIP authentication must use the internally generated KMIP CA      Usage      Scopes and roles  The KMIP secrets engine uses the concept of scopes to partition KMIP managed object storage into multiple named buckets  Within a scope  roles can be created which dictate the set of allowed operations that the 
particular role can perform  TLS client certificates can be generated for a role  which services and applications can then use when sending KMIP requests against Vault s KMIP secret engine   In order to generate client certificates for KMIP clients to interact with Vault s KMIP server  we must first create a scope and role and specify the desired set of allowed operations for it   1   Create a scope          text       vault write  f kmip scope my service     Success  Data written to  kmip scope my service          1   Create a role within the scope  specifying the set of operations to allow or     deny          text       vault write kmip scope my service role admin operation all true       Success  Data written to  kmip scope my service role admin              Supported KMIP operations  The KMIP secrets engine currently supports the following set of operations      text operation activate operation add attribute operation create operation create keypair operation decrypt operation delete attribute operation destroy operation discover versions operation encrypt operation get operation get attribute list operation get attributes operation import operation locate operation mac operation mac verify operation modify attribute operation query operation register operation rekey operation rekey keypair operation revoke operation sign operation signature verify operation rng seed operation rng retrieve      Additionally  there are two pseudo operations that can be used to allow or deny all operation capabilities to a role  These operations are mutually exclusive to all other operations  That is  if it s provided during role creation or update  no other operations can be provided  Similarly  if an existing role contains a pseudo operation  and it is then updated with a set supported operation  it will be overwritten with the newly set of provided operations   Pseudo operations      text operation all operation none          Client certificate generation  Once a scope and 
role has been created  client certificates can be generated for that role  The client certificate can then be provided to applications and services that support KMIP to establish communication with Vault s KMIP server  Scope and role identifiers are embedded in the certificate  which will be used when evaluating permissions during a KMIP request   1   Generate a client certificate  This returns the CA Chain  the certificate      and the private key          text       vault write  f kmip scope my service role admin credential generate       Key              Value                                    ca chain               BEGIN CERTIFICATE            MIICNTCCAZigAwIBAgIUKqNFb3Zy 8ypIhTDs 2 8f xEI8wCgYIKoZIzj0EAwIw       HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyN1oX       DTI5MDYyMTE4MjQ1N1owKjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWlu       dGVybWVkaWF0ZTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEAbniGNXHOiPvSb0I       fbc1B9QkOmdT2Ecx2WaQPLISplmO0Jm0u0z11CGuf3Igby7unnCNvCuCXrKJFCsQ       8JGhwknNAG3eesSZxG4tklA6FMZjE9ETUtYfjH7Z4vuJSw fxOeey7fhrqAzhV3P       GRkvA9EQUHJOeV4rEpiINP fneHNfsn1o2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYD       VR0TAQH BAgwBgEB wIBCTAdBgNVHQ4EFgQUR0o0v4rPiBU9RwQfEUucx3JwbPAw       HwYDVR0jBBgwFoAUMhORultSN ABogxQdkt7KChD0wQwCgYIKoZIzj0EAwIDgYoA       MIGGAkF1IvkIaXNkVfe q0V78CnX0XIJuvmPpgjN8AQzqLci8txikd9gF1zt8fFQ       gIKERm2QPrshSV9srHDB0YnThRKuiQJBNcDjCfYOzqKlBHifT4WT4OX1U6nP Y2b       imGaLJK9VIwfcJOpVCFGp7Xi8QGV6rJIFiQAqzqCy69vcU6nVMsvens             END CERTIFICATE           BEGIN CERTIFICATE            MIICKjCCAYugAwIBAgIUerDfApmkq0VYychkhlxEnBlIDUcwCgYIKoZIzj0EAwIw       HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyNloX       DTI5MDYyMTE4MjQ1NlowHTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MIGb       MBAGByqGSM49AgEGBSuBBAAjA4GGAAQBA466Axrrz HWanNe35gPVvB7OE7TWZcc       QZw1QSMQ QIQMu5NcdfvZfh68exhe1FiJezKB zeoJWp1Q kqhyh7fsAFUuIcJDO       okZYPTmjPh3h5IZLPg5r7Pw1j99rLHhc EXF9wYVy2UeH 2IqGJ cncmVgqczlG8       
m36g9OXd6hkofhCjZjBkMA4GA1UdDwEB wQEAwIBBjASBgNVHRMBAf8ECDAGAQH        AgEKMB0GA1UdDgQWBBQyE5G6W1I34AGiDFB2S3soKEPTBDAfBgNVHSMEGDAWgBQy       E5G6W1I34AGiDFB2S3soKEPTBDAKBggqhkjOPQQDAgOBjAAwgYgCQgGtPVCtgDc1       0SrTsVpEtUMYQKbOWnTKNHZ9h5jSna8n9aY 70Ai3U57q3FL95iIhZRW79PRpp65       d6tWqY51o2hHpwJCAK eE7xpdnqh5H8TqAXKVuSoC0WEsovYCD03c8Ih3jWcZn6N       kbz2kXPcAk dE6ncnwhwqNQgsJQGgQzJroH Zzvb            END CERTIFICATE             certificate           BEGIN CERTIFICATE            MIICOzCCAZygAwIBAgIUN5V7bLAGu8QIUFxlIugg8fBb eYwCgYIKoZIzj0EAwIw       KjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWludGVybWVkaWF0ZTAeFw0x       OTA2MjQxODQ3MTdaFw0xOTA2MjUxODQ3NDdaMCAxDjAMBgNVBAsTBWNqVVNJMQ4w       DAYDVQQDEwVkdjRZbTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEANVsHV8CHYpW       CBKbYVEx sLphk67SdWxbII4Sc9Rj1KymApD4gPmS rw0FDMZGFbn1sAfpqMBqMj       ylv72o9izbYSALHnYT AaE0NFn4eGWZ2G0p56cVmfXm3ZI959E 3gvZK6X5Jnzm4       FKXTDKGA4pocYec rnYJ5X8sbAJKHvk1OeO o2cwZTAOBgNVHQ8BAf8EBAMCA6gw       EwYDVR0lBAwwCgYIKwYBBQUHAwIwHQYDVR0OBBYEFBEIsBo3HiBIg2l2psaQoYkT       D1RNMB8GA1UdIwQYMBaAFEdKNL Kz4gVPUcEHxFLnMdycGzwMAoGCCqGSM49BAMC       A4GMADCBiAJCAc8DV23DJsHV4fdmbmssu0eDIgNH PrRKdYgqiHptbuVjF2qbILp       Z34dJRVN R9B RprZXkYiv7gJ 47KSUKzRZpAkIByMjZqLtcypamJM t  O1BSst       CWcblb45FIxAmO4hE00Q5wnwXNxNnDHXWiuGdSNmIBjpb9nM5wehQlbkx7HzvPk             END CERTIFICATE            private key           BEGIN EC PRIVATE KEY            MIHcAgEBBEIB9Nn7M28VUVW6g5IlOTS3bHIZYM zqVy PvYQxn2lFbg1YrQzfd7h       sdtCjet0lc7pvtoOwd1dFiATOGg98OVN7MegBwYFK4EEACOhgYkDgYYABADVbB1f       Ah2KVggSm2FRMf7C6YZOu0nVsWyCOEnPUY9SspgKQ ID5kvq8NBQzGRhW59bAH6a       jAajI8pb 9qPYs22EgCx52E gGhNDRZ HhlmdhtKeenFZn15t2SPefRPt4L2Sul        SZ85uBSl0wyhgOKaHGHnP652CeV LGwCSh75NTnjvg              END EC PRIVATE KEY            serial number    317328055225536560033788492808123425026102524390              Client certificate signing  As an alternative to the above section on generating client certificates  the KMIP secrets engine 
supports signing of Certificate Signing Requests  CSRs   Normally the above generation process is simpler  but some KMIP clients prefer  or only support  retaining the private key associated with their client certificate   1  In this workflow the first step is KMIP client dependent  use the KMIP    client s UI or CLI to create a client certificate CSR in PEM format   2  Sign the client certificate  This returns the CA Chain and the certificate     but not the private key  which never leaves the KMIP client         text      vault write kmip scope my service role admin credential sign csr    cat my csr pem        Key              Value                                  ca chain               BEGIN CERTIFICATE           MIICNTCCAZigAwIBAgIUKqNFb3Zy 8ypIhTDs 2 8f xEI8wCgYIKoZIzj0EAwIw      HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyN1oX      DTI5MDYyMTE4MjQ1N1owKjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWlu      dGVybWVkaWF0ZTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEAbniGNXHOiPvSb0I      fbc1B9QkOmdT2Ecx2WaQPLISplmO0Jm0u0z11CGuf3Igby7unnCNvCuCXrKJFCsQ      8JGhwknNAG3eesSZxG4tklA6FMZjE9ETUtYfjH7Z4vuJSw fxOeey7fhrqAzhV3P      GRkvA9EQUHJOeV4rEpiINP fneHNfsn1o2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYD      VR0TAQH BAgwBgEB wIBCTAdBgNVHQ4EFgQUR0o0v4rPiBU9RwQfEUucx3JwbPAw      HwYDVR0jBBgwFoAUMhORultSN ABogxQdkt7KChD0wQwCgYIKoZIzj0EAwIDgYoA      MIGGAkF1IvkIaXNkVfe q0V78CnX0XIJuvmPpgjN8AQzqLci8txikd9gF1zt8fFQ      gIKERm2QPrshSV9srHDB0YnThRKuiQJBNcDjCfYOzqKlBHifT4WT4OX1U6nP Y2b      imGaLJK9VIwfcJOpVCFGp7Xi8QGV6rJIFiQAqzqCy69vcU6nVMsvens            END CERTIFICATE           BEGIN CERTIFICATE           MIICKjCCAYugAwIBAgIUerDfApmkq0VYychkhlxEnBlIDUcwCgYIKoZIzj0EAwIw      HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyNloX      DTI5MDYyMTE4MjQ1NlowHTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MIGb      MBAGByqGSM49AgEGBSuBBAAjA4GGAAQBA466Axrrz HWanNe35gPVvB7OE7TWZcc      QZw1QSMQ QIQMu5NcdfvZfh68exhe1FiJezKB zeoJWp1Q kqhyh7fsAFUuIcJDO      
okZYPTmjPh3h5IZLPg5r7Pw1j99rLHhc EXF9wYVy2UeH 2IqGJ cncmVgqczlG8      m36g9OXd6hkofhCjZjBkMA4GA1UdDwEB wQEAwIBBjASBgNVHRMBAf8ECDAGAQH       AgEKMB0GA1UdDgQWBBQyE5G6W1I34AGiDFB2S3soKEPTBDAfBgNVHSMEGDAWgBQy      E5G6W1I34AGiDFB2S3soKEPTBDAKBggqhkjOPQQDAgOBjAAwgYgCQgGtPVCtgDc1      0SrTsVpEtUMYQKbOWnTKNHZ9h5jSna8n9aY 70Ai3U57q3FL95iIhZRW79PRpp65      d6tWqY51o2hHpwJCAK eE7xpdnqh5H8TqAXKVuSoC0WEsovYCD03c8Ih3jWcZn6N      kbz2kXPcAk dE6ncnwhwqNQgsJQGgQzJroH Zzvb           END CERTIFICATE            certificate           BEGIN CERTIFICATE           MIICOzCCAZygAwIBAgIUN5V7bLAGu8QIUFxlIugg8fBb eYwCgYIKoZIzj0EAwIw      KjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWludGVybWVkaWF0ZTAeFw0x      OTA2MjQxODQ3MTdaFw0xOTA2MjUxODQ3NDdaMCAxDjAMBgNVBAsTBWNqVVNJMQ4w      DAYDVQQDEwVkdjRZbTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEANVsHV8CHYpW      CBKbYVEx sLphk67SdWxbII4Sc9Rj1KymApD4gPmS rw0FDMZGFbn1sAfpqMBqMj      ylv72o9izbYSALHnYT AaE0NFn4eGWZ2G0p56cVmfXm3ZI959E 3gvZK6X5Jnzm4      FKXTDKGA4pocYec rnYJ5X8sbAJKHvk1OeO o2cwZTAOBgNVHQ8BAf8EBAMCA6gw      EwYDVR0lBAwwCgYIKwYBBQUHAwIwHQYDVR0OBBYEFBEIsBo3HiBIg2l2psaQoYkT      D1RNMB8GA1UdIwQYMBaAFEdKNL Kz4gVPUcEHxFLnMdycGzwMAoGCCqGSM49BAMC      A4GMADCBiAJCAc8DV23DJsHV4fdmbmssu0eDIgNH PrRKdYgqiHptbuVjF2qbILp      Z34dJRVN R9B RprZXkYiv7gJ 47KSUKzRZpAkIByMjZqLtcypamJM t  O1BSst      CWcblb45FIxAmO4hE00Q5wnwXNxNnDHXWiuGdSNmIBjpb9nM5wehQlbkx7HzvPk            END CERTIFICATE           serial number    317328055225536560033788492808123425026102524390            Tutorial  Refer to the  KMIP Secrets Engine   vault tutorials adp kmip engine  guide for a step by step tutorial    kmip spec   http   docs oasis open org kmip spec v1 4 kmip spec v1 4 html  kmip ops   http   docs oasis open org kmip spec v1 4 os kmip spec v1 4 os html  Toc490660840"}
{"questions":"vault page title MongoDB Atlas Secrets Engines The MongoDB Atlas secrets engine for Vault generates MongoDB Atlas MongoDB atlas secrets engine Programmatic API Keys dynamically layout docs","answers":"---\nlayout: docs\npage_title: MongoDB Atlas - Secrets Engines\ndescription: |-\n  The MongoDB Atlas secrets engine for Vault generates MongoDB Atlas\n  Programmatic API Keys dynamically.\n---\n\n# MongoDB atlas secrets engine\n\nThe MongoDB Atlas secrets engine generates Programmatic API keys. The created MongoDB Atlas secrets are\ntime-based and are automatically revoked when the Vault lease expires, unless renewed.\n\nVault will create a Programmatic API key for each lease that provides appropriate access to the defined MongoDB Atlas\nproject or organization with the appropriate role(s). The MongoDB Atlas Programmatic API Key Public and\nPrivate Keys are returned to the caller. To learn more about Programmatic API Keys visit the\n[Programmatic API Keys Doc](https:\/\/www.mongodb.com\/docs\/atlas\/configure-api-access\/#programmatic-api-keys).\n\n  <Note>\n\n  The information below relates to the **MongoDB Atlas secrets engine**. Refer to the\n  [MongoDB Atlas **database** secrets engine](\/vault\/docs\/secrets\/databases\/mongodbatlas)\n  for information about using the MongoDB Atlas database plugin for the Vault\n  database secrets engine.\n\n  <\/Note>\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their functions. These\nsteps are usually completed by an operator or configuration management tool.\n\n1. Enable the MongoDB Atlas secrets engine:\n\n   ```shell-session\n   $ vault secrets enable mongodbatlas\n   Success! Enabled the mongodbatlas secrets engine at: mongodbatlas\/\n   ```\n\n   By default, the secrets engine will mount at the name of the engine. To\n   enable the secrets engine at a different path, use the `-path` argument.\n\n1. 
Generate and configure a MongoDB Atlas Programmatic API Key that has sufficient permissions on your organization\n   or project to allow Vault to create other Programmatic API Keys.\n\n   In order to grant Vault programmatic access to an organization or project using only the\n   [API](https:\/\/www.mongodb.com\/docs\/atlas\/reference\/api-resources-spec\/v2\/), you need to create a MongoDB Atlas Programmatic API\n   Key with the appropriate roles if you have not already done so. A Programmatic API Key consists\n   of a public and private key, so ensure you have both. Regarding roles, the Organization Owner and\n   Project Owner roles should be sufficient for most needs; however, be sure to check what each role\n   grants in the [MongoDB Atlas Programmatic API Key User Roles documentation](https:\/\/www.mongodb.com\/docs\/atlas\/reference\/user-roles\/).\n   It is recommended to set an IP Network Access list when creating the key.\n\n   For more detailed instructions on how to create a Programmatic API Key in the Atlas UI, including\n   available roles, visit the [Programmatic API Key documentation](https:\/\/www.mongodb.com\/docs\/atlas\/configure-api-access\/#programmatic-api-keys).\n\n1. Once you have a MongoDB Atlas Programmatic Key pair, as created in the previous step, Vault can now\n   be configured to use it with MongoDB Atlas:\n\n   ```shell-session\n   $ vault write mongodbatlas\/config \\\n       public_key=yhltsvan \\\n       private_key=2c130c23-e6b6-4da8-a93f-a8bf33218830\n   ```\n\n   Internally, Vault will connect to MongoDB Atlas using these credentials. 
As such,\n   these credentials must be a superset of any policies which might be granted\n   on API Keys.\n\n  <Note>\n\n  It is highly recommended to _not_ use your MongoDB Atlas root account credentials.\n  Generate a dedicated Programmatic API key with appropriate roles instead.\n\n  <\/Note>\n\n## Programmatic API keys\n\nProgrammatic API Key credential types use a Vault role to generate a Programmatic API Key at\neither the MongoDB Atlas Organization or Project level with the designated role(s) for programmatic access.\n\nProgrammatic API Keys:\n\n- Have two parts, a public key and a private key\n- Cannot be used to log into Atlas through the user interface\n- Must be granted appropriate roles to complete required tasks\n- Must belong to one organization, but may be granted access to any number of\n  projects in that organization.\n- May have an IP Network Access list configured and some capabilities may require a\n  Network Access list to be configured (these are noted in the MongoDB Atlas API\n  documentation).\n\nCreate a Vault role for a MongoDB Atlas Programmatic API Key by mapping appropriate arguments to the\norganization or project designated:\n\n- Organization API Key: Set `organization_id` argument with the appropriate\n  [Organization Level Roles](https:\/\/www.mongodb.com\/docs\/atlas\/reference\/user-roles\/#organization-roles).\n- Project API Key: Set `project_id` with the appropriate [Project Level Roles](https:\/\/www.mongodb.com\/docs\/atlas\/reference\/user-roles\/#project-roles).\n\n  <Note>\n\n  Programmatic API keys can belong to only one Organization but can belong to one or more Projects.\n\n  <\/Note>\n\nExamples:\n\n```shell-session\n$ vault write mongodbatlas\/roles\/test \\\n    organization_id=5b23ff2f96e82130d0aaec13 \\\n    roles=ORG_MEMBER\n```\n\n```shell-session\n$ vault write mongodbatlas\/roles\/test \\\n    project_id=5cf5a45a9ccf6400e60981b6 \\\n    roles=GROUP_DATA_ACCESS_READ_ONLY\n```\n\n## Programmatic API key network 
access list\n\n~> **Note:** MongoDB Atlas has deprecated whitelists, and the API will be disabled in June 2021. It is replaced by a\nsimilar access list API which is live now. If you specify CIDR blocks or IP addresses to allow, you need to run **Vault\n1.6.3 or greater** to avoid interruption. See [MongoDB Atlas documentation](https:\/\/www.mongodb.com\/docs\/atlas\/reference\/api-resources-spec\/v2\/#tag\/Project-IP-Access-List)\nfor further details.\n\nProgrammatic API Key access can and should be limited with an IP Network Access list. In the following example, both a CIDR\nblock and an IP address are added to the IP Network Access list for Keys generated with this Vault role:\n\n```shell-session\n  $ vault write atlas\/roles\/test \\\n      project_id=5cf5a45a9ccf6400e60981b6 \\\n      roles=GROUP_CLUSTER_MANAGER \\\n      cidr_blocks=192.168.1.3\/32 \\\n      ip_addresses=192.168.1.3\n```\n\nVerify that the created Programmatic API Key Vault role has the added CIDR block and IP address by running:\n\n```shell-session\n  $ vault read atlas\/roles\/test\n\n    Key                       Value\n    ---                       -----\n    cidr_blocks               [192.168.1.3\/32]\n    ip_addresses              [192.168.1.3]\n    max_ttl                   1h\n    organization_id           n\/a\n    roles                     [GROUP_CLUSTER_MANAGER]\n    project_id                5cf5a45a9ccf6400e60981b6\n    roles                     n\/a\n    ttl                       30m\n```\n\n## TTL and max TTL\n\nProgrammatic API Keys generated by Vault have a time-to-live (TTL) and maximum time-to-live (Max TTL).\nWhen a credential expires it's automatically revoked. 
You can set the TTL and Max TTL for each role\nor by tuning the secrets engine's configuration.\n\nThe following creates a Vault role \"test\" for a Project level Programmatic API key with a 2 hour time-to-live and a\nmax time-to-live of 5 hours.\n\n```shell-session\n$ vault write mongodbatlas\/roles\/test \\\n    project_id=5cf5a45a9ccf6400e60981b6 \\\n    roles=GROUP_DATA_ACCESS_READ_ONLY \\\n    ttl=2h \\\n    max_ttl=5h\n```\n\nYou can verify the role that you have created with:\n\n```shell-session\n$ vault read mongodbatlas\/roles\/test\n\n    Key                       Value\n    ---                       -----\n    organization_id           5b71ff2f96e82120d0aaec14\n    roles                     [GROUP_DATA_ACCESS_READ_ONLY]\n    project_id                5cf5a45a9ccf6400e60981b6\n    roles                     n\/a\n    ttl                       2h0m0s\n    max_ttl                   5h0m0s\n```\n\n## Generating credentials\n\nAfter a user has authenticated to Vault and has sufficient permissions, a read request to the\n`creds` endpoint for the role will generate and return new Programmatic API Keys:\n\n```shell-session\n$ vault read mongodbatlas\/creds\/test\n\n    Key                Value\n    ---                -----\n    lease_id           mongodbatlas\/creds\/test\/0fLBv1c2YDzPlJB1PwsRRKHR\n    lease_duration     2h\n    lease_renewable    true\n    description        vault-test-1563980947-1318\n    private_key        905ae89e-6ee8-40rd-ab12-613t8e3fe836\n    public_key         klpruxce\n```\n\n## API\n\nThe MongoDB Atlas secrets engine has a full HTTP API. 
Please see the\n[MongoDB Atlas secrets engine API docs](\/vault\/api-docs\/secret\/mongodbatlas) for more details.","site":"vault","answers_cleaned":"    layout  docs page title  MongoDB Atlas   Secrets Engines description       The MongoDB Atlas secrets engine for Vault generates MongoDB Atlas   Programmatic API Keys dynamically         MongoDB atlas secrets engine  The MongoDB Atlas secrets engine generates Programmatic API keys  The created MongoDB Atlas secrets are time based and are automatically revoked when the Vault lease expires  unless renewed   Vault will create a Programmatic API key for each lease that provide appropriate access to the defined MongoDB Atlas project or organization with appropriate role s   The MongoDB Atlas Programmatic API Key Public and Private Keys are returned to the caller  To learn more about Programmatic API Keys visit the  Programmatic API Keys Doc  https   www mongodb com docs atlas configure api access  programmatic api keys       Note     The information below relates to the   MongoDB Altas secrets engine    Refer to the    MongoDB Atlas   database   secrets engine   vault docs secrets databases mongodbatlas    for information about using the MongoDB Atlas database plugin for the Vault   database secrets engine       Note      Setup  Most secrets engines must be configured in advance before they can perform their functions  These steps are usually completed by an operator or configuration management tool   1  Enable the MongoDB Atlas secrets engine         shell session      vault secrets enable mongodbatlas    Success  Enabled the mongodbatlas secrets engine at  mongodbatlas             By default  the secrets engine will mount at the name of the engine  To    enable the secrets engine at a different path  use the   path  argument   1  It s necessary to generate and configure a MongoDB Atlas Programmatic API Key for your organization    or project that has sufficient permissions to allow Vault to create other Programmatic 
API Keys      In order to grant Vault programmatic access to an organization or project using only the     API  https   www mongodb com docs atlas reference api resources spec v2   you need to create a MongoDB Atlas Programmatic API    Key with the appropriate roles if you have not already done so  A Programmatic API Key consists    of a public and private key  so ensure you have both  Regarding roles  the Organization Owner and    Project Owner roles should be sufficient for most needs  however be sure to check what each role    grants in the  MongoDB Atlas Programmatic API Key User Roles documentation  https   www mongodb com docs atlas reference user roles       It is recommended to set an IP Network Access list when creating the key      For more detailed instructions on how to create a Programmatic API Key in the Atlas UI  including    available roles  visit the  Programmatic API Key documentation  https   www mongodb com docs atlas configure api access  programmatic api keys    1  Once you have a MongoDB Atlas Programmatic Key pair  as created in the previous step  Vault can now    be configured to use it with MongoDB Atlas         shell session      vault write mongodbatlas config          public key yhltsvan          private key 2c130c23 e6b6 4da8 a93f a8bf33218830            Internally  Vault will connect to MongoDB Atlas using these credentials  As such     these credentials must be a superset of any policies which might be granted    on API Keys      Note     It is highly recommended to  not  use your MongoDB Atlas root account credentials    Generate a dedicated Programmatic API key with appropriate roles instead       Note      Programmatic API keys  Programmatic API Key credential types use a Vault role to generate a Programmatic API Key at either the MongoDB Atlas Organization or Project level with the designated role s  for programmatic access   Programmatic API Keys     Have two parts  a public key and a private key   Cannot be used to log into 
Atlas through the user interface:

- Must be granted appropriate roles to complete required tasks.
- Must belong to one organization, but may be granted access to any number of projects in that organization.
- May have an IP Network Access list configured, and some capabilities may require a Network Access list to be configured; these are noted in the MongoDB Atlas API documentation.

Create a Vault role for a MongoDB Atlas Programmatic API Key by mapping appropriate arguments to the organization or project designated:

- Organization API Key: Set the `organization_id` argument with the appropriate [Organization Level Roles](https://www.mongodb.com/docs/atlas/reference/user-roles/#organization-roles).
- Project API Key: Set `project_id` with the appropriate [Project Level Roles](https://www.mongodb.com/docs/atlas/reference/user-roles/#project-roles).

~> **Note:** Programmatic API keys can belong to only one Organization but can belong to one or more Projects.

Examples:

```shell-session
$ vault write mongodbatlas/roles/test \
    organization_id=5b23ff2f96e82130d0aaec13 \
    roles=ORG_MEMBER
```

```shell-session
$ vault write mongodbatlas/roles/test \
    project_id=5cf5a45a9ccf6400e60981b6 \
    roles=GROUP_DATA_ACCESS_READ_ONLY
```

### Programmatic API key network access list

~> **Note:** MongoDB Atlas has deprecated whitelists, and the whitelist API will be disabled in June 2021. It is replaced by a similar access list API which is live now. If you specify CIDR blocks or IP addresses to allow, you need to run Vault 1.6.3 or greater to avoid interruption. See the [MongoDB Atlas documentation](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Project-IP-Access-List) for further details.

Programmatic API Key access can and should be limited with an IP Network Access list. In the following example, both a CIDR block and an IP address are added to the IP Network Access list for keys generated with this Vault role:

```shell-session
$ vault write atlas/roles/test \
    project_id=5cf5a45a9ccf6400e60981b6 \
    roles=GROUP_CLUSTER_MANAGER \
    cidr_blocks=192.168.1.3/32 \
    ip_addresses=192.168.1.3
```

Verify that the created Programmatic API Key Vault role has the added CIDR block and IP address by running:

```shell-session
$ vault read atlas/roles/test

Key                Value
---                -----
cidr_blocks        [192.168.1.3/32]
ip_addresses       [192.168.1.3]
max_ttl            1h
organization_id    n/a
project_id         5cf5a45a9ccf6400e60981b6
roles              [GROUP_CLUSTER_MANAGER]
ttl                30m
```

### TTL and max TTL

Programmatic API Keys generated by Vault have a time-to-live (TTL) and a maximum time-to-live (max TTL). When a credential expires, it is automatically revoked. You can set the TTL and max TTL for each role, or by tuning the secrets engine's configuration.

The following creates a Vault role "test" for a Project-level Programmatic API key with a 2-hour time-to-live and a max time-to-live of 5 hours:

```shell-session
$ vault write mongodbatlas/roles/test \
    project_id=5cf5a45a9ccf6400e60981b6 \
    roles=GROUP_DATA_ACCESS_READ_ONLY \
    ttl=2h \
    max_ttl=5h
```

You can verify the role that you have created with:

```shell-session
$ vault read mongodbatlas/roles/test

Key                Value
---                -----
organization_id    5b71ff2f96e82120d0aaec14
project_id         5cf5a45a9ccf6400e60981b6
roles              [GROUP_DATA_ACCESS_READ_ONLY]
ttl                2h0m0s
max_ttl            5h0m0s
```

### Generating credentials

After a user has authenticated to Vault and has sufficient permissions, a read request to the `creds` endpoint for the role will generate and return new Programmatic API Keys:

```shell-session
$ vault read mongodbatlas/creds/test

Key                Value
---                -----
lease_id           mongodbatlas/creds/test/0fLBv1c2YDzPlJB1PwsRRKHR
lease_duration     2h
lease_renewable    true
description        vault-test-1563980947-1318
private_key        905ae89e-6ee8-40rd-ab12-613t8e3fe836
public_key         klpruxce
```

## API

The MongoDB Atlas secrets engine has a full HTTP API. Please see the [MongoDB Atlas secrets engine API docs](/vault/api-docs/secret/mongodbatlas) for more details.
"}
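The `vault read mongodbatlas/creds/test` call above maps onto Vault's standard HTTP API convention of `GET /v1/<mount>/<path>` with an `X-Vault-Token` header. The sketch below only builds (and does not send) that request; the server address and token are hypothetical placeholders, not values from this document:

```python
# Sketch: the HTTP request behind `vault read mongodbatlas/creds/<role>`.
# VAULT_ADDR and VAULT_TOKEN are hypothetical placeholder values.
import urllib.request

VAULT_ADDR = "http://127.0.0.1:8200"   # hypothetical dev server
VAULT_TOKEN = "s.example-token"        # hypothetical token

def creds_request(role: str) -> urllib.request.Request:
    """Build the GET request for the role's creds endpoint."""
    url = f"{VAULT_ADDR}/v1/mongodbatlas/creds/{role}"
    return urllib.request.Request(url, headers={"X-Vault-Token": VAULT_TOKEN})

req = creds_request("test")
# The response body (not fetched here) would carry lease_id, lease_duration,
# and the generated public_key/private_key pair shown in the output above.
print(req.full_url)  # http://127.0.0.1:8200/v1/mongodbatlas/creds/test
```

Because the response carries a `lease_id`, the returned API key participates in Vault's normal lease lifecycle (renewal and revocation), as with other dynamic secrets engines.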
{"questions":"vault Google Cloud KMS secrets engine The Google Cloud KMS secrets engine for Vault interfaces with Google Cloud page title Google Cloud KMS Secrets Engines layout docs KMS for encryption decryption of data and KMS key management through Vault","answers":"---\nlayout: docs\npage_title: Google Cloud KMS - Secrets Engines\ndescription: |-\n  The Google Cloud KMS secrets engine for Vault interfaces with Google Cloud\n  KMS for encryption\/decryption of data and KMS key management through Vault.\n---\n\n# Google Cloud KMS secrets engine\n\nThe Google Cloud KMS Vault secrets engine provides encryption and key management\nvia [Google Cloud KMS][kms]. It supports management of keys, including creation,\nrotation, and revocation, as well as encrypting and decrypting data with managed\nkeys. This enables management of KMS keys through Vault's policies and IAM\nsystem.\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the Google Cloud KMS secrets engine:\n\n   ```text\n   $ vault secrets enable gcpkms\n   Success! Enabled the gcpkms secrets engine at: gcpkms\/\n   ```\n\n   By default, the secrets engine will mount at the name of the engine. To\n   enable the secrets engine at a different path, use the `-path` argument.\n\n1. Configure the secrets engine with account credentials and\/or scopes:\n\n   ```text\n   $ vault write gcpkms\/config \\\n       credentials=@my-credentials.json\n   Success! Data written to: gcpkms\/config\n   ```\n\n   If you are running Vault from inside [Google Compute Engine][gce] or [Google\n   Kubernetes Engine][gke], the instance or pod service account can be used in\n   place of specifying the credentials JSON file. For more information on\n   authentication, see the [authentication section](#authentication) below.\n\n1. 
Create a Google Cloud KMS key:\n\n   ```text\n   $ vault write gcpkms\/keys\/my-key \\\n     key_ring=projects\/my-project\/locations\/my-location\/keyRings\/my-keyring \\\n     rotation_period=72h\n   ```\n\n   The `key_ring` parameter is specified in the following format:\n\n   ```text\n   projects\/<project>\/locations\/<location>\/keyRings\/<keyring>\n   ```\n\n   where:\n\n   - `<project>` - the name of the GCP project (e.g. \"my-project\")\n   - `<location>` - the location of the KMS key ring (e.g. \"us-east1\", \"global\")\n   - `<keyring>` - the name of the KMS key ring (e.g. \"my-keyring\")\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can be used to encrypt, decrypt, and manage keys. The\nfollowing sections describe the different ways in which keys can be managed.\n\n### Symmetric Encryption\/Decryption\n\nThis section describes using a Cloud KMS key for symmetric\nencryption\/decryption. This is one of the most common types of encryption.\nGoogle Cloud manages the key ring which is used to encrypt and decrypt data.\n\n<table>\n  <thead>\n    <tr>\n      <th>Purpose<\/th>\n      <th>Supported Algorithms<\/th>\n    <\/tr>\n    <tr>\n      <td valign=\"top\">\n        <code>encrypt_decrypt<\/code>\n      <\/td>\n      <td valign=\"top\">\n        <code>symmetric_encryption<\/code>\n      <\/td>\n    <\/tr>\n  <\/thead>\n<\/table>\n\n1.  Create or use an existing key eligible for symmetric encryption\/decryption:\n\n    ```text\n    $ vault write gcpkms\/keys\/my-key \\\n        key_ring=projects\/...\/my-keyring \\\n        purpose=encrypt_decrypt \\\n        algorithm=symmetric_encryption\n    ```\n\n1.  
Encrypt plaintext data using the `\/encrypt` endpoint with a named key:\n\n    ```text\n    $ vault write gcpkms\/encrypt\/my-key plaintext=\"hello world\"\n    Key            Value\n    ---            -----\n    ciphertext     CiQAuMv0lTiKjrF43Lgr4...\n    key_version    1\n    ```\n\n    Unlike Vault's transit backend, plaintext data does not need to be base64\n    encoded. The endpoint will automatically convert data.\n\n    Note that Vault is not _storing_ this data. The caller is responsible for\n    storing the resulting ciphertext.\n\n1.  Decrypt ciphertext using the `\/decrypt` endpoint with a named key:\n\n    ```text\n    $ vault write gcpkms\/decrypt\/my-key ciphertext=CiQAuMv0lTiKjrF43Lgr4...\n    Key          Value\n    ---          -----\n    plaintext    hello world\n    ```\n\n    For easier scripting, it is also possible to extract the plaintext directly:\n\n    ```text\n    $ vault write -field=plaintext gcpkms\/decrypt\/my-key ciphertext=CiQAuMv0lTiKjrF43Lgr4...\n    hello world\n    ```\n\n1.  Rotate the underlying encryption key. This will generate a new crypto key\n    version on Google Cloud KMS and set that version as the active key.\n\n    ```text\n    $ vault write -f gcpkms\/keys\/rotate\/my-key\n    WARNING! The following warnings were returned from Vault:\n\n      * The crypto key version was rotated successfully, but it can take up to 2\n      hours for the new crypto key version to become the primary. In practice, it\n      is usually much shorter. Be sure to issue a read operation and verify the\n      key version if you require new data to be encrypted with this key.\n\n    Key            Value\n    ---            -----\n    key_version    2\n    ```\n\n    As the message says, rotation is not immediate. Depending on a number of\n    factors, the propagation of the new key can take quite some time. If you\n    have a need to immediately encrypt data with this new key, query the API to\n    wait for the key to become the primary. 
Alternatively, you can specify the\n    `key_version` parameter to lock to the exact key for use with encryption.\n\n1.  Re-encrypt already-encrypted ciphertext to be encrypted with a new version of\n    the crypto key. Vault will decrypt the value using the appropriate key in the\n    keyring and then encrypt the resulting plaintext with the newest key in the\n    keyring.\n\n    ```text\n    $ vault write gcpkms\/reencrypt\/my-key ciphertext=CiQAuMv0lTiKjrF43Lgr4...\n    Key            Value\n    ---            -----\n    ciphertext     CiQAuMv0lZTTozQA\/ElqM...\n    key_version    2\n    ```\n\n    This process **does not** reveal the plaintext data. As such, a Vault policy\n    could grant an untrusted process the ability to re-encrypt ciphertext data,\n    since the process would not be able to get access to the plaintext data.\n\n1.  Trim old key versions by deleting Cloud KMS crypto key versions that are\n    older than the `min_version` allowed on the key.\n\n    ```text\n    $ vault write gcpkms\/keys\/config\/my-key min_version=10\n    ```\n\n    Then delete all keys older than version 10. This will make it impossible to\n    encrypt, decrypt, or sign values with the older key by conventional means.\n\n    ```text\n    $ vault write -f gcpkms\/keys\/trim\/my-key\n    ```\n\n1.  Delete the key to delete all key versions and Vault's record of the key.\n\n    ```text\n    $ vault delete gcpkms\/keys\/my-key\n    ```\n\n    This will make it impossible to encrypt, decrypt, or sign values by\n    conventional means.\n\n### Asymmetric decryption\n\nThis section describes using a Cloud KMS key for asymmetric decryption. In this\nmodel Google Cloud manages the key ring and exposes the public key via an API\nendpoint. 
The public key is used to encrypt data offline and produce ciphertext.\nWhen the plaintext is desired, the user submits the ciphertext to Cloud KMS\nwhich decrypts the value using the corresponding private key.\n\n<table>\n  <thead>\n    <tr>\n      <th>Purpose<\/th>\n      <th>Supported Algorithms<\/th>\n    <\/tr>\n    <tr>\n      <td valign=\"top\">\n        <code>asymmetric_decrypt<\/code>\n      <\/td>\n      <td valign=\"top\">\n        <code>rsa_decrypt_oaep_2048_sha256<\/code>\n        <br \/>\n        <code>rsa_decrypt_oaep_3072_sha256<\/code>\n        <br \/>\n        <code>rsa_decrypt_oaep_4096_sha256<\/code>\n      <\/td>\n    <\/tr>\n  <\/thead>\n<\/table>\n\n1.  Create or use an existing key eligible for asymmetric decryption:\n\n    ```text\n    $ vault write gcpkms\/keys\/my-key \\\n        key_ring=projects\/...\/my-keyring \\\n        purpose=asymmetric_decrypt \\\n        algorithm=rsa_decrypt_oaep_4096_sha256\n    ```\n\n1.  Retrieve the public key from Cloud KMS:\n\n    ```text\n    $ gcloud kms keys versions get-public-key <crypto-key-version> \\\n        --location <location> \\\n        --keyring <key-ring> \\\n        --key <key> \\\n        --output-file ~\/mykey.pub\n    ```\n\n1.  Encrypt plaintext data with the public key. Note this varies widely between\n    programming languages. The following example uses OpenSSL, but you can use your\n    language's built-ins as well.\n\n    ```text\n    $ openssl pkeyutl -in ~\/my-secret-file \\\n        -encrypt -pubin \\\n        -inkey ~\/mykey.pub \\\n        -pkeyopt rsa_padding_mode:oaep \\\n        -pkeyopt rsa_oaep_md:sha256 \\\n        -pkeyopt rsa_mgf1_md:sha256\n    ```\n\n    Note that this encryption happens offline (meaning outside of Vault), and\n    the encryption is happening with a _public_ key. Only Cloud KMS has the\n    corresponding _private_ key.\n\n1.  
Decrypt ciphertext using the `\/decrypt` endpoint with a named key:\n\n    ```text\n    $ vault write gcpkms\/decrypt\/my-key key_version=1 ciphertext=CiQAuMv0lTiKjrF43Lgr4...\n    Key          Value\n    ---          -----\n    plaintext    hello world\n    ```\n\n### Asymmetric signing\n\nThis section describes using a Cloud KMS key for asymmetric signing. In this\nmodel Google Cloud manages the key ring and exposes the public key via an API\nendpoint. A message or digest is signed with the corresponding private key, and\ncan be verified by anyone with the corresponding public key.\n\n<table>\n  <thead>\n    <tr>\n      <th>Purpose<\/th>\n      <th>Supported Algorithms<\/th>\n    <\/tr>\n    <tr>\n      <td valign=\"top\">\n        <code>asymmetric_sign<\/code>\n      <\/td>\n      <td valign=\"top\">\n        <code>rsa_sign_pss_2048_sha256<\/code>\n        <br \/>\n        <code>rsa_sign_pss_3072_sha256<\/code>\n        <br \/>\n        <code>rsa_sign_pss_4096_sha256<\/code>\n        <br \/>\n        <code>rsa_sign_pkcs1_2048_sha256<\/code>\n        <br \/>\n        <code>rsa_sign_pkcs1_3072_sha256<\/code>\n        <br \/>\n        <code>rsa_sign_pkcs1_4096_sha256<\/code>\n        <br \/>\n        <code>ec_sign_p256_sha256<\/code>\n        <br \/>\n        <code>ec_sign_p384_sha384<\/code>\n      <\/td>\n    <\/tr>\n  <\/thead>\n<\/table>\n\n1.  Create or use an existing key eligible for asymmetric signing:\n\n    ```text\n    $ vault write gcpkms\/keys\/my-key \\\n        key_ring=projects\/...\/my-keyring \\\n        purpose=asymmetric_sign \\\n        algorithm=ec_sign_p384_sha384\n    ```\n\n1.  Calculate the base64-encoded binary digest. 
Use the hashing algorithm that\n    corresponds to the key type:\n\n    ```text\n    $ export DIGEST=$(openssl dgst -sha384 -binary \/my\/file | base64)\n    ```\n\n    Ask Cloud KMS to sign the digest:\n\n    ```text\n    $ vault write gcpkms\/sign\/my-key key_version=1 digest=$DIGEST\n    Key          Value\n    ---          -----\n    signature    MGYCMQDbOS2462SKMsGdh2GQ...\n    ```\n\n1.  Verify the signature of the digest:\n\n    ```text\n    $ vault write gcpkms\/verify\/my-key key_version=1 digest=$DIGEST signature=$SIGNATURE\n    Key      Value\n    ---      -----\n    valid    true\n    ```\n\n    Note: it is also possible to verify this signature without Vault. Download\n    the public key from Cloud KMS, and use a tool like OpenSSL or your\n    programming language primitives to verify the signature.\n\n## Authentication\n\nThe Google Cloud KMS Vault secrets backend uses the official Google Cloud Golang\nSDK. This means it supports the common ways of [providing credentials to Google\nCloud][cloud-creds]. In addition to specifying `credentials` directly via Vault\nconfiguration, you can also get configuration from the following values **on the\nVault server**:\n\n1. The environment variable `GOOGLE_APPLICATION_CREDENTIALS`. This is specified\n   as the **path** to a Google Cloud credentials file, typically for a service\n   account. If this environment variable is present, the resulting credentials are\n   used. If the credentials are invalid, an error is returned.\n\n1. Default instance credentials. When no environment variable is present, the\n   default service account credentials are used. 
This is useful when running Vault\n   on [Google Compute Engine][gce] or [Google Kubernetes Engine][gke]\n\nFor more information on service accounts, please see the [Google Cloud Service\nAccounts documentation][service-accounts].\n\nTo use this secrets engine, the service account must have the following\nminimum scope(s):\n\n```text\nhttps:\/\/www.googleapis.com\/auth\/kms\n```\n\n### Required permissions\n\nThe credentials given to Vault must have the following role:\n\n```text\nroles\/cloudkms.admin\n```\n\nIf Vault will not be creating keys, you can reduce the permissions. For example,\nto create keys out of band and have Vault manage the encryption\/decryption, you\nonly need the following permissions:\n\n```text\nroles\/cloudkms.cryptoKeyEncrypterDecrypter\n```\n\nTo sign and verify, you only need the following permissions:\n\n```text\nroles\/cloudkms.signerVerifier\n```\n\nFor more information, please see the [Google Cloud KMS IAM documentation][kms-iam].\n\n## FAQ\n\n**How is this different than Vault's transit secrets engine?**<br \/>\nVault's [transit][vault-transit] secrets engine uses in-memory keys to\nencrypt\/decrypt keys. In general it will be faster and more performant. However,\nusers who need physical, off-site, or out-of-band key management can use the\n[Google Cloud KMS][kms] secrets engine to get those benefits while leveraging\nVault's policy and identity system.\n\n**Can Vault use an existing KMS key?**<br \/>\nYou can use the `\/register` endpoint to configure Vault to talk to an existing\nGoogle Cloud KMS key. As long as the IAM permissions are correct, Vault will be\nable to encrypt\/decrypt data and rotate the key. See the [api][api] for more\ninformation.\n\n**Can this be used with a hardware key like an HSM?**<br \/>\nYes! You can set `protection_level` to \"hsm\" when creating a key, or use an\nexisting Cloud KMS key that is backed by an HSM.\n\n**How much does this cost?**<br \/>\nThe plugin is free and open source. 
KMS costs vary by key type and the number of\noperations. Please see the [Cloud KMS pricing page][kms-pricing] for more\ndetails.\n\n## Help &amp; support\n\nThe Google Cloud KMS Vault secrets engine is written as an external Vault\nplugin. The code lives outside the main Vault repository. It is automatically\nbundled with Vault releases, but the code is managed separately.\n\nPlease report issues, add feature requests, and submit contributions to the\n[vault-plugin-secrets-gcpkms repo on GitHub][repo].\n\n## API\n\nThe Google Cloud KMS secrets engine has a full HTTP API. Please see the\n[Google Cloud KMS secrets engine API docs][api] for more details.\n\n[api]: \/vault\/api-docs\/secret\/gcpkms\n[cloud-creds]: https:\/\/cloud.google.com\/docs\/authentication\/production#providing_credentials_to_your_application\n[gce]: https:\/\/cloud.google.com\/compute\/\n[gke]: https:\/\/cloud.google.com\/kubernetes-engine\/\n[kms]: https:\/\/cloud.google.com\/kms\n[kms-iam]: https:\/\/cloud.google.com\/kms\/docs\/reference\/permissions-and-roles\n[kms-pricing]: https:\/\/cloud.google.com\/kms\/pricing\n[repo]: https:\/\/github.com\/hashicorp\/vault-plugin-secrets-gcpkms\n[service-accounts]: https:\/\/cloud.google.com\/compute\/docs\/access\/service-accounts\n[vault-transit]: \/vault\/docs\/secrets\/transit","site":"vault"}
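The `gcpkms` operations documented above follow the same HTTP API convention (`POST /v1/gcpkms/encrypt/<key>` behind `vault write gcpkms/encrypt/<key>`). A stdlib-only sketch that builds, but does not send, the encrypt request; the server address and token are hypothetical placeholders:

```python
# Sketch: the HTTP request behind `vault write gcpkms/encrypt/my-key plaintext="hello world"`.
# VAULT_ADDR and VAULT_TOKEN below are hypothetical placeholders.
import json
import urllib.request

VAULT_ADDR = "http://127.0.0.1:8200"
VAULT_TOKEN = "s.example-token"

def encrypt_request(key_name: str, plaintext: str) -> urllib.request.Request:
    # Per the docs above, the endpoint accepts raw plaintext; no base64
    # pre-encoding is required (unlike the transit backend).
    body = json.dumps({"plaintext": plaintext}).encode()
    return urllib.request.Request(
        f"{VAULT_ADDR}/v1/gcpkms/encrypt/{key_name}",
        data=body,
        headers={"X-Vault-Token": VAULT_TOKEN, "Content-Type": "application/json"},
        method="POST",
    )

req = encrypt_request("my-key", "hello world")
print(req.full_url)          # http://127.0.0.1:8200/v1/gcpkms/encrypt/my-key
print(json.loads(req.data))  # {'plaintext': 'hello world'}
```

The response (not fetched here) would carry the `ciphertext` and `key_version` fields shown in the CLI output; the caller remains responsible for storing the ciphertext, since Vault does not persist it.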
{"questions":"vault page title KMIP Profiles Support These profiles define a set of normative constraints for employing KMIP within a particular and authentication methods within specific contexts of KMIP server and client interaction layout docs environment or context of use The KMIP profiles define the use of KMIP objects attributes operations message elements","answers":"---\nlayout: docs\npage_title: KMIP - Profiles Support\ndescription: |-\n  The KMIP profiles define the use of KMIP objects, attributes, operations, message elements\n  and authentication methods within specific contexts of KMIP server and client interaction.\n  These profiles define a set of normative constraints for employing KMIP within a particular\n  environment or context of use.\n---\n\n# KMIP profiles version 1.4\n\nThis document specifies conformance clauses in accordance with the OASIS TC Process ([TC-PROC section 2.18 paragraph 8a][tc-proc-2.18] )\nfor the KMIP Specification ([KMIP-SPEC 12.1 and 12.2][kmip-spec]) for a KMIP server or KMIP client through profiles that define the\nuse of KMIP objects, attributes, operations, message elements and authentication methods within specific contexts of\nKMIP server and client interaction.\n\nVault implements version 1.4 of the following Key Management Interoperability Protocol Profiles:\n\n## [Baseline server][baseline-server]\n  1. 
Supports the following objects:\n\n    | Object                                                                  | Supported |\n    | ----------------------------------------------------------------------- | :-------: |\n    | Attribute [KMIP-SPEC 2.1.1][kmip-spec-2.1.1]                            | \u2705        |\n    | Credential [KMIP-SPEC 2.1.2][kmip-spec-2.1.2]                           | \u2705        |\n    | Key Block [KMIP-SPEC 2.1.3][kmip-spec-2.1.3]                            | \u2705        |\n    | Key Value [KMIP-SPEC 2.1.4][kmip-spec-2.1.4]                            | \u2705        |\n    | Template-Attribute Structure [KMIP-SPEC 2.1.8][kmip-spec-2.1.8]         | \u2705        |\n    | Extension Information [KMIP-SPEC 2.1.9][kmip-spec-2.1.9]                | \u2705        |\n    | Profile Information [KMIP-SPEC 2.1.19][kmip-spec-2.1.19]                | \u2705        |\n    | Validation Information [KMIP-SPEC 2.1.20][kmip-spec-2.1.20]             | \u2705        |\n    | Capability Information [KMIP-SPEC 2.1.21][kmip-spec-2.1.21]             | \u2705        |\n\n  2. 
Supports the following subsets of attributes:\n\n    | Attribute                                                              | Supported | Notes  |\n    | -----------------------------------------------------------------------| :-------: | :----: |\n    | Unique Identifier [KMIP-SPEC 3.1][kmip-spec-3.1]                       | \u2705        |        |\n    | Name [KMIP-SPEC 3.2][kmip-spec-3.2]                                    | \u2705        |        |\n    | Object Type [KMIP-SPEC 3.3][kmip-spec-3.3]                             | \u2705        |        |\n    | Cryptographic Algorithm [KMIP-SPEC 3.4][kmip-spec-3.4]                 | \u2705        |        |\n    | Cryptographic Length [KMIP-SPEC 3.5][kmip-spec-3.5]                    | \u2705        |        |\n    | Cryptographic Parameters [KMIP-SPEC 3.6][kmip-spec-3.6]                | \u2705        |        |\n    | Digest [KMIP-SPEC 3.17][kmip-spec-3.17]                                | \u2705        |        |\n    | Cryptographic Usage Mask [KMIP-SPEC 3.19][kmip-spec-3.19]              | \u2705        |        |\n    | State [KMIP-SPEC 3.22][kmip-spec-3.22]                                 | \u2705        |        |\n    | Initial Date [KMIP-SPEC 3.23][kmip-spec-3.23]                          | \u2705        |        |\n    | Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25]                    | \u2705        | Vault 1.11 |\n    | Protect Stop Date [KMIP-SPEC 3.26][kmip-spec-3.26]                     | \u2705        | Vault 1.11 |\n    | Activation Date [KMIP-SPEC 3.24][kmip-spec-3.24]                       | \u2705        |        |\n    | Deactivation Date [KMIP-SPEC 3.27][kmip-spec-3.27]                     | \u2705        |        |\n    | Compromise Occurrence Date [KMIP-SPEC 3.29][kmip-spec-3.29]            | \u2705        |        |\n    | Compromise Date [KMIP-SPEC 3.30][kmip-spec-3.30]                       | \u2705        |        |\n    | Revocation Reason [KMIP-SPEC 3.31][kmip-spec-3.31]     
                | \u2705        |        |\n    | Object Group [KMIP-SPEC 3.33][kmip-spec-3.33]                          | \u2705        |        |\n    | Fresh [KMIP-SPEC 3.34][kmip-spec-3.34]                                 | \u2705        |        |\n    | Link [KMIP-SPEC 3.35][kmip-spec-3.35]                                  | \u2705        |        |\n    | Last Change Date [KMIP-SPEC 3.38][kmip-spec-3.38]                      | \u2705        |        |\n    | Alternative Name [KMIP-SPEC 3.40][kmip-spec-3.40]                      | \u2705        | Vault 1.12 |\n    | Key Value Present [KMIP-SPEC 3.41][kmip-spec-3.41]                     | \u2705        | Vault 1.12 |\n    | Key Value Location [KMIP-SPEC 3.42][kmip-spec-3.42]                    | \ud83d\udd34        |        |\n    | Original Creation Date [KMIP-SPEC 3.43][kmip-spec-3.43]                | \u2705        |        |\n    | Random Number Generator [KMIP-SPEC 3.44][kmip-spec-3.44]               | \u2705        |        |\n    | Description [KMIP-SPEC 3.46][kmip-spec-3.46]                           | \u2705        |        |\n    | Comment [KMIP-SPEC 3.47][kmip-spec-3.47]                               | \u2705        |        |\n    | Sensitive [KMIP-SPEC 3.48][kmip-spec-3.48]                             | \u2705        |        |\n    | Always Sensitive [KMIP-SPEC 3.49][kmip-spec-3.49]                      | \u2705        |        |\n    | Extractable [KMIP-SPEC 3.50][kmip-spec-3.50]                           | \u2705        |        |\n    | Never Extractable [KMIP-SPEC 3.51][kmip-spec-3.51]                     | \u2705        |        |\n\n  3. 
Supports the following client-to-server operations:\n\n    | Operation                                             | Supported | Notes |\n    | ------------------------------------------------------| :--------:|:-----:|\n    | Locate [KMIP-SPEC 4.9][kmip-spec-4.9]                 | \u2705        | Vault version 1.11 supports attributes Activation Date, Application Specific Information, Cryptographic Algorithm, Cryptographic Length, Name, Object Type, Original Creation Date, and State. <br\/> Vault version 1.12 supports all profile attributes except for Key Value Location.      |\n    | Check [KMIP-SPEC 4.10][kmip-spec-4.10]                | \ud83d\udd34        |        |\n    | Get [KMIP-SPEC 4.11][kmip-spec-4.11]                  | \u2705        |        |\n    | Get Attributes [KMIP-SPEC 4.12][kmip-spec-4.12]       | \u2705        |        |\n    | Get Attribute List [KMIP-SPEC 4.13][kmip-spec-4.13]   | \u2705        |        |\n    | Add Attribute [KMIP-SPEC 4.14][kmip-spec-4.14]        | \u2705        |        |\n    | Modify Attribute [KMIP-SPEC 4.15][kmip-spec-4.15]     | \u2705        | Vault 1.12 |\n    | Delete Attribute [KMIP-SPEC 4.16][kmip-spec-4.16]     | \u2705        | Vault 1.12 |\n    | Activate [KMIP-SPEC 4.19][kmip-spec-4.19]             | \u2705        |        |\n    | Revoke [KMIP-SPEC 4.20][kmip-spec-4.20]               | \u2705        |        |\n    | Destroy [KMIP-SPEC 4.21][kmip-spec-4.21]              | \u2705        |        |\n    | Query [KMIP-SPEC 4.25][kmip-spec-4.25]                | \u2705        | Vault 1.11 |\n    | Discover Versions [KMIP-SPEC 4.26][kmip-spec-4.26]    | \u2705        |        |\n\n  4. Supports the following message contents:\n\n    | Message Content                                                  | Supported |\n    | -----------------------------------------------------------------| :--------:|\n    | Protocol Version [KMIP-SPEC 6.1][kmip-spec-6.1]                  | \u2705        |\n    | Operation [KMIP-SPEC 
6.2][kmip-spec-6.2]                         | \u2705        |\n    | Maximum Response Size [KMIP-SPEC 6.3][kmip-spec-6.3]             | \u2705        |\n    | Unique Batch Item ID [KMIP-SPEC 6.4][kmip-spec-6.4]              | \u2705        |\n    | Time Stamp [KMIP-SPEC 6.5][kmip-spec-6.5]                        | \u2705        |\n    | Asynchronous Indicator [KMIP-SPEC 6.7][kmip-spec-6.7]            | \u2705        |\n    | Result Status [KMIP-SPEC 6.9][kmip-spec-6.9]                     | \u2705        |\n    | Result Reason [KMIP-SPEC 6.10][kmip-spec-6.10]                   | \u2705        |\n    | Batch Order Option [KMIP-SPEC 6.12][kmip-spec-6.12]              | \u2705        |\n    | Batch Error Continuation Option [KMIP-SPEC 6.13][kmip-spec-6.13] | \u2705        |\n    | Batch Count [KMIP-SPEC 6.14][kmip-spec-6.14]                     | \u2705        |\n    | Batch Item [KMIP-SPEC 6.15][kmip-spec-6.15]                      | \u2705        |\n    | Attestation Capable Indicator [KMIP-SPEC 6.17][kmip-spec-6.17]   | \u2705        |\n    | Client Correlation Value [KMIP-SPEC 6.18][kmip-spec-6.18]        | \u2705        |\n    | Server Correlation Value [KMIP-SPEC 6.19][kmip-spec-6.19]        | \u2705        |\n    | Message Extension [KMIP-SPEC 6.16][kmip-spec-6.16]               | \u2705        |\n\n  5. Supports the ID Placeholder [KMIP-SPEC 4][kmip-spec-4]\n  6. Supports Message Format [KMIP-SPEC 7][kmip-spec-7]\n  7. Supports Authentication [KMIP-SPEC 8][kmip-spec-8]\n  8. Supports the TTLV encoding [KMIP-SPEC 9.1][kmip-spec-9.1]\n  9. Supports the transport requirements [KMIP-SPEC 10][kmip-spec-10]\n  10. Supports Error Handling [KMIP-SPEC 11][kmip-spec-11] for any supported object, attribute, or operation\n  11. Optionally supports any clause within [KMIP-SPEC][kmip-spec] that is not listed above\n  12. 
Optionally supports extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements. Vault does not implement any extensions.\n\n## [Symmetric key lifecycle server][lifecycle-server]\n\n  1. SHALL conform to the [Baseline Server][baseline-server]\n  2. Supports the following objects:\n\n    | Object                                                                 | Supported |\n    | -----------------------------------------------------------------------| :--------:|\n    | Symmetric Key [KMIP-SPEC 2.2.2][kmip-spec-2.2.2]                       | \u2705        |\n    | Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3]             | \u2705        |\n\n  3. Supports the following subsets of attributes:\n\n    | Attribute                                                              | Supported | Notes |\n    | -----------------------------------------------------------------------| :-------: | :---: |\n    | Cryptographic Algorithm [KMIP-SPEC 3.4][kmip-spec-3.4]                 | \u2705        |       |\n    | Object Type [KMIP-SPEC 3.3][kmip-spec-3.3]                             | \u2705        |       |\n    | Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25]                    | \u2705        | Vault 1.11 |\n    | Protect Stop Date [KMIP-SPEC 3.26][kmip-spec-3.26]                     | \u2705        | Vault 1.11 |\n\n  4. Supports the following client-to-server operations:\n\n    | Operation                                             | Supported |\n    | ------------------------------------------------------| :--------:|\n    | Create [KMIP-SPEC 4.1][kmip-spec-4.1]                 | \u2705        |\n\n  5. 
Supports the following message encoding:\n\n    | Message Encoding                                                                     | Supported | Notes |\n    | -------------------------------------------------------------------------------------| :--------:|:-----:|\n    | Cryptographic Algorithm [KMIP-SPEC 9.1.3.2.13][kmip-spec-9.1.3.2.13] with values:    |           |       |\n    | i. 3DES                                                                              | \u2705        | Vault 1.12 |\n    | ii. AES                                                                              | \u2705        |        |\n    | Object Type [KMIP-SPEC 9.1.3.2.12][kmip-spec-9.1.3.2.12] with value:                 |           |        |\n    | i. Symmetric Key                                                                     | \u2705        |        |\n    | Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3] with value:               |           |        |\n    | i. Raw                                                                               | \u2705        |        |\n    | ii. Transparent Symmetric Key                                                        | \ud83d\udd34        |        |\n\n  6. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Symmetric Key Lifecycle Server][lifecycle-server]\n  7. MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.\n\n## [Basic cryptographic server][basic-cryptographic-server]\n\n  1. SHALL conform to the [Baseline Server][baseline-server]\n  2. 
Supports the following client-to-server operations:\n\n    | Operation                                             | Supported | Notes   |\n    | ------------------------------------------------------| :--------:| --------|\n    | Encrypt [KMIP-SPEC 4.29][kmip-spec-4.29]              | \u2705        | Vault 1.11 <br\/> Supported for AES, unsupported for 3DES: <br\/><br\/> Supported Block Cipher Modes: <br\/> <ol> <li> GCM <\/li> <li> CBC <\/li> <li> CFB <\/li> <li> CTR <\/li> <li> ECB <\/li> <li> OFB <\/li> <\/ol> <br\/> Stream operations are supported except for GCM block cipher mode. <br\/><br\/> Supported padding methods: <br\/> <ol> <li> None <\/li> <li> PKCS5 <\/li> <\/ol>  |\n    | Decrypt [KMIP-SPEC 4.30][kmip-spec-4.30]              | \u2705        | Vault 1.11 <br\/> Supported for AES, unsupported for 3DES: <br\/><br\/> Supported Block Cipher Modes: <br\/> <ol> <li> GCM <\/li> <li> CBC <\/li> <li> CFB <\/li> <li> CTR <\/li> <li> ECB <\/li> <li> OFB <\/li> <\/ol> <br\/> Stream operations are supported except for GCM block cipher mode. <br\/><br\/> Supported padding methods: <br\/> <ol> <li> None <\/li> <li> PKCS5 <\/li> <\/ol>  |\n\n  3. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Basic Cryptographic Server][basic-cryptographic-server]\n  4. MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.\n\n## [Asymmetric key lifecycle server][asymmetric-key-lifecycle-server]\n\n  1. SHALL conform to the [Baseline Server][baseline-server]\n\n  2. 
Supports the following objects:\n\n    | Object                                                                 | Supported |\n    | -----------------------------------------------------------------------| :--------:|\n    | Symmetric Key [KMIP-SPEC 2.2.2][kmip-spec-2.2.2]                       | \u2705        |\n    | Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3]             | \u2705        |\n\n  3. Supports the following objects:\n\n    | Object                                                              | Supported | Notes |\n    | --------------------------------------------------------------------| :-------: | :---: |\n    | Public Key [KMIP-SPEC 2.2.3][kmip-spec-2.2.3]                       | \u2705        |  Vault 1.13 |\n    | Private Key [KMIP-SPEC 2.2.4][kmip-spec-2.2.4]                      | \u2705        |  Vault 1.13 |\n    | Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25]                 | \u2705        |  Vault 1.11 |\n    | Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3]          | \u2705        |        |\n\n  4. Supports the following attributes:\n\n    | Attribute                                                              | Supported | Notes |\n    | -----------------------------------------------------------------------| :-------: | :---: |\n    | Cryptographic Algorithm [KMIP-SPEC 3.4][kmip-spec-3.4]                 | \u2705        |       |\n    | Object Type [KMIP-SPEC 3.3][kmip-spec-3.3]                             | \u2705        |       |\n    | Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25]                    | \u2705        | Vault 1.11 |\n    | Protect Stop Date [KMIP-SPEC 3.26][kmip-spec-3.26]                     | \u2705        | Vault 1.11 |\n\n  5. 
Supports the following message encoding:\n\n    | Message Encoding                                                                     | Supported | Notes |\n    | -------------------------------------------------------------------------------------| :--------:|:-----:|\n    | Cryptographic Algorithm [KMIP-SPEC 9.1.3.2.13][kmip-spec-9.1.3.2.13] with values:    |           |       |\n    | i. RSA                                                                               | \u2705        | Vault 1.13 |\n    | Object Type [KMIP-SPEC 9.1.3.2.12][kmip-spec-9.1.3.2.12] with value:                 |           |        |\n    | i. Public Key                                                                        | \u2705        | Vault 1.13 |\n    | ii. Private Key                                                                      | \u2705        | Vault 1.13 |\n    | Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3] with value:               |           |        |\n    | i. PKCS#1                                                                            | \u2705        | Vault 1.13 <br\/> Supported for Private and Public Keys|\n    | ii. PKCS#8                                                                           | \u2705        | Vault 1.13 <br\/> Supported for Private Key|\n    | iii. Transparent RSA Public Key                                                      | \u2705        | Vault 1.13 |\n    | iv. Transparent RSA Private Key                                                      | \u2705        | Vault 1.13 |\n    | v. X.509                                                                             | \u2705        | Vault 1.13 <br\/> Supported for Public Key|\n\n  6. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Asymmetric Key Lifecycle Server][asymmetric-key-lifecycle-server]\n  7. 
MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.\n\n## [Advanced cryptographic server][advanced-cryptographic-server]\n  1. SHALL conform to the [Baseline Server][baseline-server]\n  2. Supports the following client-to-server operations:\n\n    | Operation                                             | Supported | Notes   |\n    | ------------------------------------------------------| :--------:| --------|\n    | Encrypt [KMIP-SPEC 4.29][kmip-spec-4.29]              | \u2705        | Vault 1.11 <br\/> [See Basic Cryptographic Server](#basic-cryptographic-server) <br\/><br\/> Vault 1.13 <br\/> Supported for RSA Asymmetric Keys: <br\/><br\/> Supported padding methods: <br\/> <ol> <li> OAEP <\/li> <li> PKCS1v15 <\/li> <\/ol> <br\/> Streaming operations are not supported. |\n    | Decrypt [KMIP-SPEC 4.30][kmip-spec-4.30]              | \u2705        | Vault 1.11 <br\/> [See Basic Cryptographic Server](#basic-cryptographic-server) <br\/><br\/> Vault 1.13 <br\/> Supported for RSA Asymmetric Keys: <br\/><br\/> Supported padding methods: <br\/> <ol> <li> OAEP <\/li> <li> PKCS1v15 <\/li> <\/ol> <br\/> Streaming operations are not supported. 
|\n    | Sign [KMIP-SPEC 4.31][kmip-spec-4.31]                 | \u2705        | Vault 1.13 <br\/> Supported for RSA Asymmetric Keys: <br\/><br\/> Supported padding methods: <br\/> <ol> <li> PSS <\/li> <li> PKCS1v15 <\/li> <\/ol> <br\/><br\/> The supported hashing algorithms with PSS are: <br\/> <ol> <li> SHA224 <\/li> <li> SHA256 <\/li> <li> SHA384 <\/li> <li> SHA512 <\/li> <li> RIPEMD160 <\/li> <li> SHA512_224 <\/li> <li> SHA512_256 <\/li> <li> SHA3_224 <\/li> <li> SHA3_256 <\/li> <li> SHA3_384 <\/li> <li> SHA3_512 <\/li> <\/ol> <br\/> The supported hashing algorithms with PKCS1v15 are: <br\/> <ol> <li> SHA224 <\/li> <li> SHA256 <\/li> <li> SHA384 <\/li> <li> SHA512 <\/li> <li> RIPEMD160 <\/li> <\/ol> <br\/> Streaming operations are supported.|\n    | Signature Verify [KMIP-SPEC 4.32][kmip-spec-4.32]     | \u2705        | Vault 1.13 <br\/> Supported for RSA Asymmetric Keys: <br\/><br\/> Supported padding methods: <br\/> <ol> <li> PSS <\/li> <li> PKCS1v15 <\/li> <\/ol> <br\/><br\/> The supported hashing algorithms with PSS are: <br\/> <ol> <li> SHA224 <\/li> <li> SHA256 <\/li> <li> SHA384 <\/li> <li> SHA512 <\/li> <li> RIPEMD160 <\/li> <li> SHA512_224 <\/li> <li> SHA512_256 <\/li> <li> SHA3_224 <\/li> <li> SHA3_256 <\/li> <li> SHA3_384 <\/li> <li> SHA3_512 <\/li> <\/ol> <br\/> The supported hashing algorithms with PKCS1v15 are: <br\/> <ol> <li> SHA224 <\/li> <li> SHA256 <\/li> <li> SHA384 <\/li> <li> SHA512 <\/li> <li> RIPEMD160 <\/li> <\/ol> <br\/> Streaming operations are supported.|\n    | MAC [KMIP-SPEC 4.33][kmip-spec-4.33]                  | \u2705        | Vault 1.13 <br\/> Supported for RSA Asymmetric Keys: <br\/><br\/> The supported hashing algorithms are: <br\/> <ol> <li> SHA224 <\/li> <li> SHA256 <\/li> <li> SHA384 <\/li> <li> SHA512 <\/li> <li> RIPEMD160 <\/li> <li> SHA512_224 <\/li> <li> SHA512_256 <\/li> <li> SHA3_256 <\/li> <li> SHA3_384 <\/li> <li> SHA3_512 <\/li> <\/ol> <br\/> The following hashing algorithms are not supported: <br\/> <ol> <li> MD4 
<\/li> <li> MD5 <\/li> <li> SHA1 <\/li> <\/ol> <br\/> Streaming operations are supported.|\n    | MAC Verify [KMIP-SPEC 4.34][kmip-spec-4.34]           | \u2705        | Vault 1.13 <br\/> Supported for RSA Asymmetric Keys: <br\/><br\/> The supported hashing algorithms are: <br\/> <ol> <li> SHA224 <\/li> <li> SHA256 <\/li> <li> SHA384 <\/li> <li> SHA512 <\/li> <li> RIPEMD160 <\/li> <li> SHA512_224 <\/li> <li> SHA512_256 <\/li> <li> SHA3_256 <\/li> <li> SHA3_384 <\/li> <li> SHA3_512 <\/li> <\/ol> <br\/> The following hashing algorithms are not supported: <br\/> <ol> <li> MD4 <\/li> <li> MD5 <\/li> <li> SHA1 <\/li> <\/ol> <br\/> Streaming operations are supported.|\n    | RNG Retrieve [KMIP-SPEC 4.35][kmip-spec-4.35]         | \u2705        | Vault 1.13 |\n    | RNG Seed [KMIP-SPEC 4.36][kmip-spec-4.36]             | \u2705        | Vault 1.13 |\n  3. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Advanced Cryptographic Server][advanced-cryptographic-server]\n  4. 
MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.\n\n\n[kmip-spec-2.1.1]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660735\n  [kmip-spec-2.1.2]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660736\n  [kmip-spec-2.1.3]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660737\n  [kmip-spec-2.1.4]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660738\n  [kmip-spec-2.1.8]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660757\n  [kmip-spec-2.1.9]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660758\n  [kmip-spec-2.1.19]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660768\n  [kmip-spec-2.1.20]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660769\n  [kmip-spec-2.1.21]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660770\n  [kmip-spec-2.2.3]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660776\n  [kmip-spec-2.2.4]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660777\n  [kmip-spec-3.1]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660784\n  [kmip-spec-3.2]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660785\n  [kmip-spec-3.3]: 
https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660786\n  [kmip-spec-3.4]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660787\n  [kmip-spec-3.5]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660788\n  [kmip-spec-3.6]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660789\n  [kmip-spec-3.17]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660800\n  [kmip-spec-3.19]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660807\n  [kmip-spec-3.22]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660810\n  [kmip-spec-3.23]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660811\n  [kmip-spec-3.25]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660813\n  [kmip-spec-3.26]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660814\n  [kmip-spec-3.24]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660812\n  [kmip-spec-3.27]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660815\n  [kmip-spec-3.29]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660817\n  [kmip-spec-3.30]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660818\n  [kmip-spec-3.31]: 
https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660819\n  [kmip-spec-3.33]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660821\n  [kmip-spec-3.34]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660822\n  [kmip-spec-3.35]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660823\n  [kmip-spec-3.38]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660826\n  [kmip-spec-3.40]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660828\n  [kmip-spec-3.41]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660829\n  [kmip-spec-3.42]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660830\n  [kmip-spec-3.43]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660831\n  [kmip-spec-3.44]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660832\n  [kmip-spec-3.46]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660834\n  [kmip-spec-3.47]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660835\n  [kmip-spec-3.48]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660836\n  [kmip-spec-3.49]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660837\n  [kmip-spec-3.50]: 
https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660838\n  [kmip-spec-3.51]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660839\n  [kmip-spec-4.9]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660849\n  [kmip-spec-4.10]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660850\n  [kmip-spec-4.11]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660851\n  [kmip-spec-4.12]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660852\n  [kmip-spec-4.13]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660853\n  [kmip-spec-4.14]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660854\n  [kmip-spec-4.15]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660855\n  [kmip-spec-4.16]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660856\n  [kmip-spec-4.19]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660859\n  [kmip-spec-4.20]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660860\n  [kmip-spec-4.21]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660861\n  [kmip-spec-4.25]: https:\/\/docs.oasis-open.org\/kmip\/spec\/v1.4\/errata01\/os\/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660865\n  [kmip-spec-4.26]: 
https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660866
  [kmip-spec-4.29]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660869
  [kmip-spec-4.30]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660870
  [kmip-spec-4.31]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660871
  [kmip-spec-4.32]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660872
  [kmip-spec-4.33]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660873
  [kmip-spec-4.34]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660874
  [kmip-spec-4.35]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660875
  [kmip-spec-4.36]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660876
  [kmip-spec-6.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660887
  [kmip-spec-6.2]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660888
  [kmip-spec-6.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660889
  [kmip-spec-6.4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660890
  [kmip-spec-6.5]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660891
  [kmip-spec-6.7]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660893
  [kmip-spec-6.9]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660895
  [kmip-spec-6.10]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660896
  [kmip-spec-6.12]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660898
  [kmip-spec-6.13]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660899
  [kmip-spec-6.14]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660900
  [kmip-spec-6.15]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660901
  [kmip-spec-6.17]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660903
  [kmip-spec-6.18]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660904
  [kmip-spec-6.19]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660905
  [kmip-spec-6.16]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660902
  [kmip-spec-4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660840
  [kmip-spec-7]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660906
  [kmip-spec-8]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660909
  [kmip-spec-9.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660911
  [kmip-spec-10]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660973
  [kmip-spec-11]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660974
  [kmip-spec]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html
  [kmip-spec-2.2.2]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660775
  [kmip-spec-9.1.3.2.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660923
  [kmip-spec-4.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660841
  [kmip-spec-9.1.3.2.13]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660933
  [kmip-spec-9.1.3.2.12]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660932
  [baseline-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431430
  [lifecycle-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431487
  [basic-cryptographic-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431527
  [asymmetric-key-lifecycle-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431516
  [advanced-cryptographic-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431528
  [tc-proc-2.18]: https://www.oasis-open.org/policies-guidelines/tc-process-2017-05-26/technical-committee-tc-process-27-july-2011/#specQuality

---
layout: docs
page_title: KMIP - Profiles Support
description: |-
  The KMIP profiles define the use of KMIP objects, attributes, operations, message elements,
  and authentication methods within specific contexts of KMIP server and client interaction.
  These profiles define a set of normative constraints for employing KMIP within a particular
  environment or context of use.
---

# KMIP profiles version 1.4

This document specifies conformance clauses in accordance with the OASIS TC Process
([TC-PROC section 2.18 paragraph 8a][tc-proc-2.18]) for the KMIP Specification
([KMIP-SPEC 12.1 and 12.2][kmip-spec]) for a KMIP server or KMIP client through profiles
that define the use of KMIP objects, attributes, operations, message elements, and
authentication methods within specific contexts of KMIP server and client interaction.

Vault implements version 1.4 of the following Key Management Interoperability Protocol Profiles:

## [Baseline server][baseline-server]

1. Supports the following objects:

| Object                                              | Supported |
|-----------------------------------------------------|-----------|
| Attribute [KMIP-SPEC 2.1.1][kmip-spec-2.1.1]        | &check;   |
| Credential [KMIP-SPEC 2.1.2][kmip-spec-2.1.2]       | &check;   |
| Key Block [KMIP-SPEC 2.1.3][kmip-spec-2.1.3]        | &check;   |
| Key Value [KMIP-SPEC 2.1.4][kmip-spec-2.1.4]        | &check;   |
| Template-Attribute Structure [KMIP-SPEC 2.1.8][kmip-spec-2.1.8] | &check; |
| Extension Information [KMIP-SPEC 2.1.9][kmip-spec-2.1.9]        | &check; |
| Profile Information [KMIP-SPEC 2.1.19][kmip-spec-2.1.19]        | &check; |
| Validation Information [KMIP-SPEC 2.1.20][kmip-spec-2.1.20]     | &check; |
| Capability Information [KMIP-SPEC 2.1.21][kmip-spec-2.1.21]     | &check; |

2. Supports the following subsets of attributes:

| Attribute                                           | Supported | Notes      |
|-----------------------------------------------------|-----------|------------|
| Unique Identifier [KMIP-SPEC 3.1][kmip-spec-3.1]    | &check;   |            |
| Name [KMIP-SPEC 3.2][kmip-spec-3.2]                 | &check;   |            |
| Object Type [KMIP-SPEC 3.3][kmip-spec-3.3]          | &check;   |            |
| Cryptographic Algorithm [KMIP-SPEC 3.4][kmip-spec-3.4] | &check; |            |
| Cryptographic Length [KMIP-SPEC 3.5][kmip-spec-3.5] | &check;   |            |
| Cryptographic Parameters [KMIP-SPEC 3.6][kmip-spec-3.6] | &check; |           |
| Digest [KMIP-SPEC 3.17][kmip-spec-3.17]             | &check;   |            |
| Cryptographic Usage Mask [KMIP-SPEC 3.19][kmip-spec-3.19] | &check; |         |
| State [KMIP-SPEC 3.22][kmip-spec-3.22]              | &check;   |            |
| Initial Date [KMIP-SPEC 3.23][kmip-spec-3.23]       | &check;   |            |
| Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25] | &check;   | Vault 1.11 |
| Protect Stop Date [KMIP-SPEC 3.26][kmip-spec-3.26]  | &check;   | Vault 1.11 |
| Activation Date [KMIP-SPEC 3.24][kmip-spec-3.24]    | &check;   |            |
| Deactivation Date [KMIP-SPEC 3.27][kmip-spec-3.27]  | &check;   |            |
| Compromise Occurrence Date [KMIP-SPEC 3.29][kmip-spec-3.29] | &check; |       |
| Compromise Date [KMIP-SPEC 3.30][kmip-spec-3.30]    | &check;   |            |
| Revocation Reason [KMIP-SPEC 3.31][kmip-spec-3.31]  | &check;   |            |
| Object Group [KMIP-SPEC 3.33][kmip-spec-3.33]       | &check;   |            |
| Fresh [KMIP-SPEC 3.34][kmip-spec-3.34]              | &check;   |            |
| Link [KMIP-SPEC 3.35][kmip-spec-3.35]               | &check;   |            |
| Last Change Date [KMIP-SPEC 3.38][kmip-spec-3.38]   | &check;   |            |
| Alternative Name [KMIP-SPEC 3.40][kmip-spec-3.40]   | &check;   | Vault 1.12 |
| Key Value Present [KMIP-SPEC 3.41][kmip-spec-3.41]  | &check;   | Vault 1.12 |
| Key Value Location [KMIP-SPEC 3.42][kmip-spec-3.42] |           |            |
| Original Creation Date [KMIP-SPEC 3.43][kmip-spec-3.43] | &check; |          |
| Random Number Generator [KMIP-SPEC 3.44][kmip-spec-3.44] | &check; |         |
| Description [KMIP-SPEC 3.46][kmip-spec-3.46]        | &check;   |            |
| Comment [KMIP-SPEC 3.47][kmip-spec-3.47]            | &check;   |            |
| Sensitive [KMIP-SPEC 3.48][kmip-spec-3.48]          | &check;   |            |
| Always Sensitive [KMIP-SPEC 3.49][kmip-spec-3.49]   | &check;   |            |
| Extractable [KMIP-SPEC 3.50][kmip-spec-3.50]        | &check;   |            |
| Never Extractable [KMIP-SPEC 3.51][kmip-spec-3.51]  | &check;   |            |

3. Supports the following client-to-server operations:

| Operation                                           | Supported | Notes      |
|-----------------------------------------------------|-----------|------------|
| Locate [KMIP-SPEC 4.9][kmip-spec-4.9]               | &check;   | Vault version 1.11 supports the attributes Activation Date, Application Specific Information, Cryptographic Algorithm, Cryptographic Length, Name, Object Type, Original Creation Date, and State. <br> Vault version 1.12 supports all profile attributes except for Key Value Location. |
| Check [KMIP-SPEC 4.10][kmip-spec-4.10]              | &check;   |            |
| Get [KMIP-SPEC 4.11][kmip-spec-4.11]                | &check;   |            |
| Get Attributes [KMIP-SPEC 4.12][kmip-spec-4.12]     | &check;   |            |
| Get Attribute List [KMIP-SPEC 4.13][kmip-spec-4.13] | &check;   |            |
| Add Attribute [KMIP-SPEC 4.14][kmip-spec-4.14]      | &check;   |            |
| Modify Attribute [KMIP-SPEC 4.15][kmip-spec-4.15]   | &check;   | Vault 1.12 |
| Delete Attribute [KMIP-SPEC 4.16][kmip-spec-4.16]   | &check;   | Vault 1.12 |
| Activate [KMIP-SPEC 4.19][kmip-spec-4.19]           | &check;   |            |
| Revoke [KMIP-SPEC 4.20][kmip-spec-4.20]             | &check;   |            |
| Destroy [KMIP-SPEC 4.21][kmip-spec-4.21]            | &check;   |            |
| Query [KMIP-SPEC 4.25][kmip-spec-4.25]              | &check;   | Vault 1.11 |
| Discover Versions [KMIP-SPEC 4.26][kmip-spec-4.26]  | &check;   |            |

4. Supports the following message contents:

| Message Content                                                 | Supported |
|-----------------------------------------------------------------|-----------|
| Protocol Version [KMIP-SPEC 6.1][kmip-spec-6.1]                 | &check;   |
| Operation [KMIP-SPEC 6.2][kmip-spec-6.2]                        | &check;   |
| Maximum Response Size [KMIP-SPEC 6.3][kmip-spec-6.3]            | &check;   |
| Unique Batch Item ID [KMIP-SPEC 6.4][kmip-spec-6.4]             | &check;   |
| Time Stamp [KMIP-SPEC 6.5][kmip-spec-6.5]                       | &check;   |
| Asynchronous Indicator [KMIP-SPEC 6.7][kmip-spec-6.7]           | &check;   |
| Result Status [KMIP-SPEC 6.9][kmip-spec-6.9]                    | &check;   |
| Result Reason [KMIP-SPEC 6.10][kmip-spec-6.10]                  | &check;   |
| Batch Order Option [KMIP-SPEC 6.12][kmip-spec-6.12]             | &check;   |
| Batch Error Continuation Option [KMIP-SPEC 6.13][kmip-spec-6.13] | &check;  |
| Batch Count [KMIP-SPEC 6.14][kmip-spec-6.14]                    | &check;   |
| Batch Item [KMIP-SPEC 6.15][kmip-spec-6.15]                     | &check;   |
| Attestation Capable Indicator [KMIP-SPEC 6.17][kmip-spec-6.17]  | &check;   |
| Client Correlation Value [KMIP-SPEC 6.18][kmip-spec-6.18]       | &check;   |
| Server Correlation Value [KMIP-SPEC 6.19][kmip-spec-6.19]       | &check;   |
| Message Extension [KMIP-SPEC 6.16][kmip-spec-6.16]              | &check;   |

5. Supports the ID Placeholder [KMIP-SPEC 4][kmip-spec-4].
6. Supports Message Format [KMIP-SPEC 7][kmip-spec-7].
7. Supports Authentication [KMIP-SPEC 8][kmip-spec-8].
8. Supports the TTLV encoding [KMIP-SPEC 9.1][kmip-spec-9.1].
9. Supports the transport requirements [KMIP-SPEC 10][kmip-spec-10].
10. Supports Error Handling [KMIP-SPEC 11][kmip-spec-11] for any supported object, attribute, or operation.
11. Optionally supports any clause within [KMIP-SPEC][kmip-spec] that is not listed above.
12. Optionally supports extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements. We do not have any extensions.

## [Symmetric key lifecycle server][lifecycle-server]

1. SHALL conform to the [Baseline server][baseline-server].
2. Supports the following objects:

| Object                                                       | Supported |
|--------------------------------------------------------------|-----------|
| Symmetric Key [KMIP-SPEC 2.2.2][kmip-spec-2.2.2]             | &check;   |
| Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3]   | &check;   |

3. Supports the following subsets of attributes:

| Attribute                                              | Supported | Notes      |
|--------------------------------------------------------|-----------|------------|
| Cryptographic Algorithm [KMIP-SPEC 3.4][kmip-spec-3.4] | &check;   |            |
| Object Type [KMIP-SPEC 3.3][kmip-spec-3.3]             | &check;   |            |
| Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25]    | &check;   | Vault 1.11 |
| Protect Stop Date [KMIP-SPEC 3.26][kmip-spec-3.26]     | &check;   | Vault 1.11 |

4. Supports the following client-to-server operations:

| Operation                              | Supported |
|----------------------------------------|-----------|
| Create [KMIP-SPEC 4.1][kmip-spec-4.1]  | &check;   |

5. Supports the following message encoding:

| Message Encoding                                                               | Supported | Notes      |
|--------------------------------------------------------------------------------|-----------|------------|
| Cryptographic Algorithm [KMIP-SPEC 9.1.3.2.13][kmip-spec-9.1.3.2.13] with values: |        |            |
| &emsp;i. 3DES                                                                  | &check;   | Vault 1.12 |
| &emsp;ii. AES                                                                  | &check;   |            |
| Object Type [KMIP-SPEC 9.1.3.2.12][kmip-spec-9.1.3.2.12] with value:           |           |            |
| &emsp;i. Symmetric Key                                                         | &check;   |            |
| Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3] with value:         |           |            |
| &emsp;i. Raw                                                                   | &check;   |            |
| &emsp;ii. Transparent Symmetric Key                                            | &check;   |            |

6. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Symmetric Key Lifecycle Server][lifecycle-server].
7. MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.

## [Basic cryptographic server][basic-cryptographic-server]

1. SHALL conform to the [Baseline server][baseline-server].
2. Supports the following client-to-server operations:

| Operation                                 | Supported | Notes |
|-------------------------------------------|-----------|-------|
| Encrypt [KMIP-SPEC 4.29][kmip-spec-4.29]  | &check;   | Vault 1.11 <br> Supported for AES; unsupported for 3DES. <br><br> Supported block cipher modes: <br> <ol><li>GCM</li><li>CBC</li><li>CFB</li><li>CTR</li><li>ECB</li><li>OFB</li></ol> <br> Stream operations are supported except for the GCM block cipher mode. <br><br> Supported padding methods: <br> <ol><li>None</li><li>PKCS5</li></ol> |
| Decrypt [KMIP-SPEC 4.30][kmip-spec-4.30]  | &check;   | Vault 1.11 <br> Supported for AES; unsupported for 3DES. <br><br> Supported block cipher modes: <br> <ol><li>GCM</li><li>CBC</li><li>CFB</li><li>CTR</li><li>ECB</li><li>OFB</li></ol> <br> Stream operations are supported except for the GCM block cipher mode. <br><br> Supported padding methods: <br> <ol><li>None</li><li>PKCS5</li></ol> |

3. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Basic Cryptographic Server][basic-cryptographic-server].
4. MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.
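Every request and response in the profiles above, including Encrypt and Decrypt, is serialized with the TTLV item encoding required by the Baseline server ([KMIP-SPEC 9.1][kmip-spec-9.1]): a 3-byte tag, a 1-byte type, a 4-byte big-endian length, then the value padded to an 8-byte boundary. A minimal sketch of encoding a single Integer item (the tag `0x420020` and value 8 follow the worked example in the specification):

```python
import struct

# TTLV layout (KMIP-SPEC 9.1): 3-byte tag, 1-byte type, 4-byte
# big-endian length, then the value padded out to 8-byte alignment.
TYPE_INTEGER = 0x02  # KMIP "Integer" item type

def encode_ttlv_integer(tag: int, value: int) -> bytes:
    header = tag.to_bytes(3, "big") + bytes([TYPE_INTEGER])
    body = struct.pack(">i", value)        # Integers carry 4 value bytes...
    length = struct.pack(">I", len(body))  # ...and length excludes padding
    padding = b"\x00" * (-len(body) % 8)   # pad value to an 8-byte multiple
    return header + length + body + padding

# An Integer containing 8: 16 bytes on the wire.
print(encode_ttlv_integer(0x420020, 8).hex())
# 42002002000000040000000800000000
```

Structures nest the same way: a Structure item's value is simply the concatenation of its children's TTLV encodings, which is why every item is padded to 8 bytes.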
## [Asymmetric key lifecycle server][asymmetric-key-lifecycle-server]

1. SHALL conform to the [Baseline server][baseline-server].
2. Supports the following objects:

| Object                                                       | Supported |
|--------------------------------------------------------------|-----------|
| Symmetric Key [KMIP-SPEC 2.2.2][kmip-spec-2.2.2]             | &check;   |
| Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3]   | &check;   |

3. Supports the following objects:

| Object                                                       | Supported | Notes      |
|--------------------------------------------------------------|-----------|------------|
| Public Key [KMIP-SPEC 2.2.3][kmip-spec-2.2.3]                | &check;   | Vault 1.13 |
| Private Key [KMIP-SPEC 2.2.4][kmip-spec-2.2.4]               | &check;   | Vault 1.13 |
| Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25]          | &check;   | Vault 1.11 |
| Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3]   | &check;   |            |

4. Supports the following attributes:

| Attribute                                              | Supported | Notes      |
|--------------------------------------------------------|-----------|------------|
| Cryptographic Algorithm [KMIP-SPEC 3.4][kmip-spec-3.4] | &check;   |            |
| Object Type [KMIP-SPEC 3.3][kmip-spec-3.3]             | &check;   |            |
| Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25]    | &check;   | Vault 1.11 |
| Protect Stop Date [KMIP-SPEC 3.26][kmip-spec-3.26]     | &check;   | Vault 1.11 |

5. Supports the following message encoding:

| Message Encoding                                                               | Supported | Notes      |
|--------------------------------------------------------------------------------|-----------|------------|
| Cryptographic Algorithm [KMIP-SPEC 9.1.3.2.13][kmip-spec-9.1.3.2.13] with values: |        |            |
| &emsp;i. RSA                                                                   | &check;   | Vault 1.13 |
| Object Type [KMIP-SPEC 9.1.3.2.12][kmip-spec-9.1.3.2.12] with value:           |           |            |
| &emsp;i. Public Key                                                            | &check;   | Vault 1.13 |
| &emsp;ii. Private Key                                                          | &check;   | Vault 1.13 |
| Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3] with value:         |           |            |
| &emsp;i. PKCS#1                                                                | &check;   | Vault 1.13 <br> Supported for private and public keys. |
| &emsp;ii. PKCS#8                                                               | &check;   | Vault 1.13 <br> Supported for private keys. |
| &emsp;iii. Transparent RSA Public Key                                          | &check;   | Vault 1.13 |
| &emsp;iv. Transparent RSA Private Key                                          | &check;   | Vault 1.13 |
| &emsp;v. X.509                                                                 | &check;   | Vault 1.13 <br> Supported for public keys. |

6. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Asymmetric Key Lifecycle Server][asymmetric-key-lifecycle-server].
7. MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.

## [Advanced cryptographic server][advanced-cryptographic-server]

1. SHALL conform to the [Baseline server][baseline-server].
2. Supports the following client-to-server operations:

| Operation                                                  | Supported | Notes |
|------------------------------------------------------------|-----------|-------|
| Encrypt [KMIP-SPEC 4.29][kmip-spec-4.29]                   | &check;   | Vault 1.11 <br> See [Basic Cryptographic Server][basic-cryptographic-server]. <br><br> Vault 1.13 <br> Supported for RSA asymmetric keys. <br><br> Supported padding methods: <br> <ol><li>OAEP</li><li>PKCS1v15</li></ol> <br> Streaming operations are not supported. |
| Decrypt [KMIP-SPEC 4.30][kmip-spec-4.30]                   | &check;   | Vault 1.11 <br> See [Basic Cryptographic Server][basic-cryptographic-server]. <br><br> Vault 1.13 <br> Supported for RSA asymmetric keys. <br><br> Supported padding methods: <br> <ol><li>OAEP</li><li>PKCS1v15</li></ol> <br> Streaming operations are not supported. |
| Sign [KMIP-SPEC 4.31][kmip-spec-4.31]                      | &check;   | Vault 1.13 <br> Supported for RSA asymmetric keys. <br><br> Supported padding methods: <br> <ol><li>PSS</li><li>PKCS1v15</li></ol> <br> The supported hashing algorithms with PSS are: <br> <ol><li>SHA224</li><li>SHA256</li><li>SHA384</li><li>SHA512</li><li>RIPEMD160</li><li>SHA512_224</li><li>SHA512_256</li><li>SHA3_224</li><li>SHA3_256</li><li>SHA3_384</li><li>SHA3_512</li></ol> <br> The supported hashing algorithms with PKCS1v15 are: <br> <ol><li>SHA224</li><li>SHA256</li><li>SHA384</li><li>SHA512</li><li>RIPEMD160</li></ol> <br> Streaming operations are supported. |
| Signature Verify [KMIP-SPEC 4.32][kmip-spec-4.32]          | &check;   | Vault 1.13 <br> Supported for RSA asymmetric keys. <br><br> Supported padding methods: <br> <ol><li>PSS</li><li>PKCS1v15</li></ol> <br> The supported hashing algorithms with PSS are: <br> <ol><li>SHA224</li><li>SHA256</li><li>SHA384</li><li>SHA512</li><li>RIPEMD160</li><li>SHA512_224</li><li>SHA512_256</li><li>SHA3_224</li><li>SHA3_256</li><li>SHA3_384</li><li>SHA3_512</li></ol> <br> The supported hashing algorithms with PKCS1v15 are: <br> <ol><li>SHA224</li><li>SHA256</li><li>SHA384</li><li>SHA512</li><li>RIPEMD160</li></ol> <br> Streaming operations are supported. |
| MAC [KMIP-SPEC 4.33][kmip-spec-4.33]                       | &check;   | Vault 1.13 <br> Supported for RSA asymmetric keys. <br><br> The supported hashing algorithms are: <br> <ol><li>SHA224</li><li>SHA256</li><li>SHA384</li><li>SHA512</li><li>RIPEMD160</li><li>SHA512_224</li><li>SHA512_256</li><li>SHA3_256</li><li>SHA3_384</li><li>SHA3_512</li></ol> <br> The following hashing algorithms are not supported: <br> <ol><li>MD4</li><li>MD5</li><li>SHA1</li></ol> <br> Streaming operations are supported. |
| MAC Verify [KMIP-SPEC 4.34][kmip-spec-4.34]                | &check;   | Vault 1.13 <br> Supported for RSA asymmetric keys. <br><br> The supported hashing algorithms are: <br> <ol><li>SHA224</li><li>SHA256</li><li>SHA384</li><li>SHA512</li><li>RIPEMD160</li><li>SHA512_224</li><li>SHA512_256</li><li>SHA3_256</li><li>SHA3_384</li><li>SHA3_512</li></ol> <br> The following hashing algorithms are not supported: <br> <ol><li>MD4</li><li>MD5</li><li>SHA1</li></ol> <br> Streaming operations are supported. |
| RNG Retrieve [KMIP-SPEC 4.35][kmip-spec-4.35]              | &check;   | Vault 1.13 |
| RNG Seed [KMIP-SPEC 4.36][kmip-spec-4.36]                  | &check;   | Vault 1.13 |

3. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Advanced Cryptographic Server][advanced-cryptographic-server].
4. MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.
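The MAC and MAC Verify operations above compute and check a keyed MAC over client-supplied data using a managed key and one of the listed hash algorithms. The same construction can be illustrated locally with Python's standard-library `hmac` module (the key and data here are placeholders; which MAC algorithm the server actually applies is determined by the managed key's cryptographic parameters):

```python
import hashlib
import hmac

# Illustrative stand-ins for a Vault-managed key and request payload.
key = b"\x01" * 32
data = b"example payload"

# MAC (KMIP-SPEC 4.33): compute a keyed MAC over the data.
tag = hmac.new(key, data, hashlib.sha256).digest()

# MAC Verify (KMIP-SPEC 4.34): recompute and compare in constant time.
recomputed = hmac.new(key, data, hashlib.sha256).digest()
print(hmac.compare_digest(tag, recomputed))  # True

# A tampered payload fails verification.
forged = hmac.new(key, b"tampered payload", hashlib.sha256).digest()
print(hmac.compare_digest(tag, forged))  # False
```

The constant-time comparison mirrors what a verifying server must do to avoid leaking how many MAC bytes matched.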
  [kmip-spec-2.1.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660735
  [kmip-spec-2.1.2]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660736
  [kmip-spec-2.1.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660737
  [kmip-spec-2.1.4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660738
  [kmip-spec-2.1.8]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660757
  [kmip-spec-2.1.9]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660758
  [kmip-spec-2.1.19]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660768
  [kmip-spec-2.1.20]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660769
  [kmip-spec-2.1.21]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660770
  [kmip-spec-2.2.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660776
  [kmip-spec-2.2.4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660777
  [kmip-spec-3.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660784
  [kmip-spec-3.2]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660785
  [kmip-spec-3.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660786
  [kmip-spec-3.4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660787
  [kmip-spec-3.5]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660788
  [kmip-spec-3.6]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660789
  [kmip-spec-3.17]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660800
  [kmip-spec-3.19]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660807
  [kmip-spec-3.22]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660810
  [kmip-spec-3.23]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660811
  [kmip-spec-3.25]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660813
  [kmip-spec-3.26]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660814
  [kmip-spec-3.24]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660812
  [kmip-spec-3.27]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660815
  [kmip-spec-3.29]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660817
  [kmip-spec-3.30]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660818
  [kmip-spec-3.31]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660819
  [kmip-spec-3.33]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660821
  [kmip-spec-3.34]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660822
  [kmip-spec-3.35]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660823
  [kmip-spec-3.38]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660826
  [kmip-spec-3.40]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660828
  [kmip-spec-3.41]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660829
  [kmip-spec-3.42]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660830
  [kmip-spec-3.43]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660831
  [kmip-spec-3.44]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660832
  [kmip-spec-3.46]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660834
  [kmip-spec-3.47]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660835
  [kmip-spec-3.48]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660836
  [kmip-spec-3.49]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660837
  [kmip-spec-3.50]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660838
  [kmip-spec-3.51]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660839
  [kmip-spec-4.9]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660849
  [kmip-spec-4.10]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660850
  [kmip-spec-4.11]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660851
  [kmip-spec-4.12]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660852
  [kmip-spec-4.13]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660853
  [kmip-spec-4.14]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660854
  [kmip-spec-4.15]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660855
  [kmip-spec-4.16]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660856
  [kmip-spec-4.19]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660859
  [kmip-spec-4.20]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660860
  [kmip-spec-4.21]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660861
  [kmip-spec-4.25]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660865
  [kmip-spec-4.26]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660866
  [kmip-spec-4.29]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660869
  [kmip-spec-4.30]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660870
  [kmip-spec-4.31]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660871
  [kmip-spec-4.32]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660872
  [kmip-spec-4.33]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660873
  [kmip-spec-4.34]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660874
  [kmip-spec-4.35]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660875
  [kmip-spec-4.36]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660876
  [kmip-spec-6.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660887
  [kmip-spec-6.2]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660888
  [kmip-spec-6.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660889
  [kmip-spec-6.4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660890
  [kmip-spec-6.5]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660891
  [kmip-spec-6.7]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660893
  [kmip-spec-6.9]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660895
  [kmip-spec-6.10]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660896
  [kmip-spec-6.12]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660898
  [kmip-spec-6.13]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660899
  [kmip-spec-6.14]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660900
  [kmip-spec-6.15]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660901
  [kmip-spec-6.17]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660903
  [kmip-spec-6.18]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660904
  [kmip-spec-6.19]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660905
  [kmip-spec-6.16]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660902
  [kmip-spec-4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660840
  [kmip-spec-7]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660906
  [kmip-spec-8]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660909
  [kmip-spec-9.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660911
  [kmip-spec-10]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660973
  [kmip-spec-11]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660974
  [kmip-spec]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html
  [kmip-spec-2.2.2]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660775
  [kmip-spec-9.1.3.2.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660923
  [kmip-spec-4.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660841
  [kmip-spec-9.1.3.2.13]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660933
  [kmip-spec-9.1.3.2.12]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660932
  [baseline-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431430
  [lifecycle-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431487
  [basic-cryptographic-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431527
  [asymmetric-key-lifecycle-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431516
  [advanced-cryptographic-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431528
  [tc-proc-2.18]: https://www.oasis-open.org/policies-guidelines/tc-process-2017-05-26/technical-committee-tc-process-27-july-2011/#specQuality
{"questions":"vault Kubernetes secrets engine layout docs tokens service accounts role bindings and roles dynamically The Kubernetes secrets engine for Vault generates Kubernetes service account page title Kubernetes Secrets Engines","answers":"---\nlayout: docs\npage_title: Kubernetes - Secrets Engines\ndescription: >-\n  The Kubernetes secrets engine for Vault generates Kubernetes service account\n  tokens, service accounts, role bindings, and roles dynamically.\n---\n\n# Kubernetes secrets engine\n\n@include 'x509-sha1-deprecation.mdx'\n\nThe Kubernetes Secrets Engine for Vault generates Kubernetes service account tokens, and\noptionally service accounts, role bindings, and roles. The created service account tokens have\na configurable TTL and any objects created are automatically deleted when the Vault lease expires.\n\nFor each lease, Vault will create a service account token attached to the\ndefined service account. The service account token is returned to the caller.\n\nTo learn more about service accounts in Kubernetes, visit the\n[Kubernetes service account](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/)\nand [Kubernetes RBAC](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/rbac\/)\ndocumentation.\n\n~> **Note:** We do not recommend using tokens created by the Kubernetes Secrets Engine to\n   authenticate with the [Vault Kubernetes Auth Method](\/vault\/docs\/auth\/kubernetes). This will\n   generate many unique identities in Vault that will be hard to manage.\n\n## Setup\n\nThe Kubernetes Secrets Engine must be configured in advance before it\ncan perform its functions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. 
By default, Vault will connect to Kubernetes using its own service account.\n   If using the [standard Helm chart](https:\/\/github.com\/hashicorp\/vault-helm), this service account\n   is created automatically by default and named after the Helm release (often `vault`, but this can be\n   configured via the Helm value `server.serviceAccount.name`).\n\n   It's necessary to ensure that the service account Vault uses will have permissions to manage\n   service account tokens, and optionally manage service accounts, roles, and role bindings. These\n   permissions can be managed using a Kubernetes role or cluster role. The role is attached to the\n   Vault service account with a role binding or cluster role binding.\n\n   For example, a minimal cluster role to create service account tokens is:\n   ```yaml\n   apiVersion: rbac.authorization.k8s.io\/v1\n   kind: ClusterRole\n   metadata:\n     name: k8s-minimal-secrets-abilities\n   rules:\n   - apiGroups: [\"\"]\n     resources: [\"serviceaccounts\/token\"]\n     verbs: [\"create\"]\n   ```\n\n   Similarly, you can create a more permissive cluster role with full permissions to manage tokens,\n   service accounts, bindings, and roles.\n\n   ```yaml\n   apiVersion: rbac.authorization.k8s.io\/v1\n   kind: ClusterRole\n   metadata:\n     name: k8s-full-secrets-abilities\n   rules:\n   - apiGroups: [\"\"]\n     resources: [\"serviceaccounts\", \"serviceaccounts\/token\"]\n     verbs: [\"create\", \"update\", \"delete\"]\n   - apiGroups: [\"rbac.authorization.k8s.io\"]\n     resources: [\"rolebindings\", \"clusterrolebindings\"]\n     verbs: [\"create\", \"update\", \"delete\"]\n   - apiGroups: [\"rbac.authorization.k8s.io\"]\n     resources: [\"roles\", \"clusterroles\"]\n     verbs: [\"bind\", \"escalate\", \"create\", \"update\", \"delete\"]\n   ```\n\n   Create this role in Kubernetes (e.g., with `kubectl apply -f`).\n\n   Moreover, if you want to use label selection to configure the namespaces on which a role can act,\n  
 you will need to grant Vault permissions to read namespaces.\n\n   ```yaml\n   apiVersion: rbac.authorization.k8s.io\/v1\n   kind: ClusterRole\n   metadata:\n     name: k8s-full-secrets-abilities-with-labels\n   rules:\n   - apiGroups: [\"\"]\n     resources: [\"namespaces\"]\n     verbs: [\"get\"]\n   - apiGroups: [\"\"]\n     resources: [\"serviceaccounts\", \"serviceaccounts\/token\"]\n     verbs: [\"create\", \"update\", \"delete\"]\n   - apiGroups: [\"rbac.authorization.k8s.io\"]\n     resources: [\"rolebindings\", \"clusterrolebindings\"]\n     verbs: [\"create\", \"update\", \"delete\"]\n   - apiGroups: [\"rbac.authorization.k8s.io\"]\n     resources: [\"roles\", \"clusterroles\"]\n     verbs: [\"bind\", \"escalate\", \"create\", \"update\", \"delete\"]\n   ```\n\n   ~> **Note:** Getting the right permissions for Vault will require some trial and error most\n      likely since Kubernetes has strict protections against privilege escalation. You can read more\n      in the\n      [Kubernetes RBAC documentation](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/rbac\/#privilege-escalation-prevention-and-bootstrapping).\n\n   ~> **Note:** Protect the Vault service account, especially if you use broader permissions for it,\n      as it is essentially a cluster administrator account.\n\n1. 
Create a role binding to bind the role to Vault's service account and grant Vault permission\n   to manage tokens.\n\n   ```yaml\n   apiVersion: rbac.authorization.k8s.io\/v1\n   kind: ClusterRoleBinding\n   metadata:\n     name: vault-token-creator-binding\n   roleRef:\n     apiGroup: rbac.authorization.k8s.io\n     kind: ClusterRole\n     name: k8s-minimal-secrets-abilities\n   subjects:\n   - kind: ServiceAccount\n     name: vault\n     namespace: vault\n   ```\n\n   For more information on Kubernetes roles, service accounts, bindings, and tokens, visit the\n   [Kubernetes RBAC documentation](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/rbac\/).\n\n1. If Vault will not be automatically managing roles or service accounts (see\n   [Automatically Managing Roles and Service Accounts](#automatically-managing-roles-and-service-accounts)),\n   then you will need to set up a service account that Vault will issue tokens for.\n\n   ~> **Note**: It is highly recommended that the service account that Vault issues tokens for\n      is **NOT** the same service account that Vault itself uses.\n\n   The examples we will use will under the namespace `test`, which you can create if it does not\n   already exist.\n\n   ```shell-session\n   $ kubectl create namespace test\n   namespace\/test created\n   ```\n\n   Here is a simple set up of a service account, role, and role binding in the Kubernetes `test`\n   namespace with basic permissions we will use for this document:\n   ```yaml\n   apiVersion: v1\n   kind: ServiceAccount\n   metadata:\n     name: test-service-account-with-generated-token\n     namespace: test\n   ---\n   apiVersion: rbac.authorization.k8s.io\/v1\n   kind: Role\n   metadata:\n     name: test-role-list-pods\n     namespace: test\n   rules:\n   - apiGroups: [\"\"]\n     resources: [\"pods\"]\n     verbs: [\"list\"]\n   ---\n   apiVersion: rbac.authorization.k8s.io\/v1\n   kind: RoleBinding\n   metadata:\n     name: test-role-abilities\n     
namespace: test\n   roleRef:\n     apiGroup: rbac.authorization.k8s.io\n     kind: Role\n     name: test-role-list-pods\n   subjects:\n   - kind: ServiceAccount\n     name: test-service-account-with-generated-token\n     namespace: test\n   ```\n\n   You can create these objects with `kubectl apply -f`.\n\n1. Enable the Kubernetes Secrets Engine:\n\n   ```shell-session\n   $ vault secrets enable kubernetes\n   Success! Enabled the kubernetes Secrets Engine at: kubernetes\/\n   ```\n\n   By default, the secrets engine will mount at the same name as the engine, i.e.,\n   `kubernetes\/` here. This can be changed by passing the `-path` argument when enabling.\n\n1. Configure the mount point. An empty config is allowed.\n\n   ```shell-session\n   $ vault write -f kubernetes\/config\n   ```\n\n   Configuration options are available as specified in the\n   [API docs](\/vault\/api-docs\/secret\/kubernetes).\n\n1. You can now configure Kubernetes Secrets Engine to create a Vault role (**not** the same as a\n   Kubernetes role) that can generate service account tokens for the given service account:\n\n   ```shell-session\n   $ vault write kubernetes\/roles\/my-role \\\n       allowed_kubernetes_namespaces=\"*\" \\\n       service_account_name=\"test-service-account-with-generated-token\" \\\n       token_default_ttl=\"10m\"\n   ```\n\n## Generating credentials\n\nAfter a user has authenticated to Vault and has sufficient permissions, a write to the\n`creds` endpoint for the Vault role will generate and return a new service account token.\n\n```shell-session\n$ vault write kubernetes\/creds\/my-role \\\n    kubernetes_namespace=test\n\nKey                        Value\n\u2013--                        -----\nlease_id                   kubernetes\/creds\/my-role\/31d771a6-...\nlease_duration             10m0s\nlease_renwable             false\nservice_account_name       test-service-account-with-generated-token\nservice_account_namespace  test\nservice_account_token      
eyJHbGci0iJSUzI1NiIsImtpZCI6ImlrUEE...\n```\n\nYou can use the service account token above (`eyJHbG...`) with any Kubernetes API request that\nits service account is authorized for (through role bindings).\n\n```shell-session\n$ curl -sk $(kubectl config view --minify -o 'jsonpath={.clusters[].cluster.server}')\/api\/v1\/namespaces\/test\/pods \\\n    --header \"Authorization: Bearer eyJHbGci0iJSUzI1Ni...\"\n{\n  \"kind\": \"PodList\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {\n    \"resourceVersion\": \"1624\"\n  },\n  \"items\": []\n}\n```\n\nWhen the lease expires, you can verify that the token has been revoked.\n```shell-session\n$ curl -sk $(kubectl config view --minify -o 'jsonpath={.clusters[].cluster.server}')\/api\/v1\/namespaces\/test\/pods \\\n    --header \"Authorization: Bearer eyJHbGci0iJSUzI1Ni...\"\n{\n  \"kind\": \"Status\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {},\n  \"status\": \"Failure\",\n  \"message\": \"Unauthorized\",\n  \"reason\": \"Unauthorized\",\n  \"code\": 401\n}\n```\n\n## TTL\n\nKubernetes service account tokens have a time-to-live (TTL). 
When a token expires it is\nautomatically revoked.\n\nYou can set a default (`token_default_ttl`) and a maximum TTL (`token_max_ttl`) when\ncreating or tuning the Vault role.\n\n```shell-session\n$ vault write kubernetes\/roles\/my-role \\\n    allowed_kubernetes_namespaces=\"*\" \\\n    service_account_name=\"new-service-account-with-generated-token\" \\\n    token_default_ttl=\"10m\" \\\n    token_max_ttl=\"2h\"\n```\n\nYou can also set a TTL (`ttl`) when you generate the token from the credentials endpoint.\nThe TTL of the token will be given the default if not specified (and cannot exceed the\nmaximum TTL of the role, if present).\n\n```shell-session\n$ vault write kubernetes\/creds\/my-role \\\n    kubernetes_namespace=test \\\n    ttl=20m\n\nKey                        Value\n\u2013--                        -----\nlease_id                   kubernetes\/creds\/my-role\/31d771a6-...\nlease_duration             20m0s\nlease_renwable             false\nservice_account_name       new-service-account-with-generated-token\nservice_account_namespace  test\nservice_account_token      eyJHbGci0iJSUzI1NiIsImtpZCI6ImlrUEE...\n```\n\n\nYou can verify the token's TTL by decoding the JWT token and extracting the `iat`\n(issued at) and `exp` (expiration time) claims.\n\n```shell-session\n$ echo 'eyJhbGc...' | cut -d'.' 
-f2 | base64 -d  | jq -r '.iat,.exp|todate'\n2022-05-20T17:14:50Z\n2022-05-20T17:34:50Z\n```\n\n## Audiences\n\nKubernetes service account tokens have audiences.\n\nYou can set default audiences (`token_default_audiences`) when creating or tuning the Vault role.\nThe Kubernetes cluster default audiences for service account tokens will be used if not specified.\n\n```shell-session\n$ vault write kubernetes\/roles\/my-role \\\n    allowed_kubernetes_namespaces=\"*\" \\\n    service_account_name=\"new-service-account-with-generated-token\" \\\n    token_default_audiences=\"custom-audience\"\n```\n\nYou can also set audiences (`audiences`) when you generate the token from the credentials endpoint.\nThe audiences of the token will be given the default audiences if not specified.\n\n```shell-session\n$ vault write kubernetes\/creds\/my-role \\\n    kubernetes_namespace=test \\\n    audiences=\"another-custom-audience\"\n\nKey                        Value\n\u2013--                        -----\nlease_id                   kubernetes\/creds\/my-role\/SriWQf0bPZ...\nlease_duration             768h\nlease_renwable             false\nservice_account_name       new-service-account-with-generated-token\nservice_account_namespace  test\nservice_account_token      eyJHbGci0iJSUzI1NiIsImtpZCI6ImlrUEE...\n```\n\nYou can verify the token's audiences by decoding the JWT.\n\n```shell-session\n$ echo 'eyJhbGc...' | cut -d'.' 
-f2 | base64 -d\n{\"aud\":[\"another-custom-audience\"]...\n```\n\n## Automatically managing roles and service accounts\n\nWhen configuring the Vault role, you can pass in parameters to specify that you want to\nautomatically generate the Kubernetes service account and role binding,\nand optionally generate the Kubernetes role itself.\n\nIf you want to configure the Vault role to use a pre-existing Kubernetes role, but generate\nthe service account and role binding automatically, you can set the `kubernetes_role_name`\nparameter.\n\n```shell-session\n$ vault write kubernetes\/roles\/auto-managed-sa-role \\\n    allowed_kubernetes_namespaces=\"test\" \\\n    kubernetes_role_name=\"test-role-list-pods\"\n```\n\n~> **Note**: Vault's service account will also need access to the resources it is granting\naccess to. This can be done for the examples above with `kubectl -n test create rolebinding --role test-role-list-pods --serviceaccount=vault:vault vault-test-role-abilities`.\nThis is how Kubernetes prevents privilege escalation.\nYou can read more in the\n[Kubernetes RBAC documentation](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/rbac\/#privilege-escalation-prevention-and-bootstrapping).\n\nYou can then get credentials with the automatically generated service account.\n```shell-session\n$ vault write kubernetes\/creds\/auto-managed-sa-role \\\n    kubernetes_namespace=test\nKey                          Value\n---                          -----\nlease_id                     kubernetes\/creds\/auto-managed-sa-role\/cujRLYjKZUMQk6dkHBGGWm67\nlease_duration               768h\nlease_renewable              false\nservice_account_name         v-token-auto-man-1653001548-5z6hrgsxnmzncxejztml4arz\nservice_account_namespace    test\nservice_account_token        eyJHbGci0iJSUzI1Ni...\n```\n\nFurthermore, Vault can also automatically create the role in addition to the service account and\nrole binding by specifying the `generated_role_rules` parameter, which 
accepts a set of JSON or YAML\nrules for the generated role.\n\n```shell-session\n$ vault write kubernetes\/roles\/auto-managed-sa-and-role \\\n    allowed_kubernetes_namespaces=\"test\" \\\n    generated_role_rules='{\"rules\":[{\"apiGroups\":[\"\"],\"resources\":[\"pods\"],\"verbs\":[\"list\"]}]}'\n```\nYou can then get credentials in the same way as before.\n```shell-session\n$ vault write kubernetes\/creds\/auto-managed-sa-and-role \\\n    kubernetes_namespace=test\nKey                          Value\n---                          -----\nlease_id                     kubernetes\/creds\/auto-managed-sa-and-role\/pehLtegoTP8vCkcaQozUqOHf\nlease_duration               768h\nlease_renewable              false\nservice_account_name         v-token-auto-man-1653002096-4imxf3ytjh5hbyro9s1oqdo3\nservice_account_namespace    test\nservice_account_token        eyJHbGci0iJSUzI1Ni...\n```\n\n## API\n\nThe Kubernetes Secrets Engine has a full HTTP API. Please see the\n[Kubernetes Secrets Engine API docs](\/vault\/api-docs\/secret\/kubernetes) for more details.","site":"vault","answers_cleaned":"    layout  docs page title  Kubernetes   Secrets Engines description       The Kubernetes secrets engine for Vault generates Kubernetes service account   tokens  service accounts  role bindings  and roles dynamically         Kubernetes secrets engine   include  x509 sha1 deprecation mdx   The Kubernetes Secrets Engine for Vault generates Kubernetes service account tokens  and optionally service accounts  role bindings  and roles  The created service account tokens have a configurable TTL and any objects created are automatically deleted when the Vault lease expires   For each lease  Vault will create a service account token attached to the defined service account  The service account token is returned to the caller   To learn more about service accounts in Kubernetes  visit the  Kubernetes service account  https   kubernetes io docs tasks configure pod container configure service 
account   and  Kubernetes RBAC  https   kubernetes io docs reference access authn authz rbac   documentation        Note    We do not recommend using tokens created by the Kubernetes Secrets Engine to    authenticate with the  Vault Kubernetes Auth Method   vault docs auth kubernetes   This will    generate many unique identities in Vault that will be hard to manage      Setup  The Kubernetes Secrets Engine must be configured in advance before it can perform its functions  These steps are usually completed by an operator or configuration management tool   1  By default  Vault will connect to Kubernetes using its own service account     If using the  standard Helm chart  https   github com hashicorp vault helm   this service account    is created automatically by default and named after the Helm release  often  vault   but this can be    configured via the Helm value  server serviceAccount name        It s necessary to ensure that the service account Vault uses will have permissions to manage    service account tokens  and optionally manage service accounts  roles  and role bindings  These    permissions can be managed using a Kubernetes role or cluster role  The role is attached to the    Vault service account with a role binding or cluster role binding      For example  a minimal cluster role to create service account tokens is        yaml    apiVersion  rbac authorization k8s io v1    kind  ClusterRole    metadata       name  k8s minimal secrets abilities    rules       apiGroups            resources    serviceaccounts token        verbs    create              Similarly  you can create a more permissive cluster role with full permissions to manage tokens     service accounts  bindings  and roles         yaml    apiVersion  rbac authorization k8s io v1    kind  ClusterRole    metadata       name  k8s full secrets abilities    rules       apiGroups            resources    serviceaccounts    serviceaccounts token        verbs    create    update    delete        
apiGroups    rbac authorization k8s io        resources    rolebindings    clusterrolebindings        verbs    create    update    delete        apiGroups    rbac authorization k8s io        resources    roles    clusterroles        verbs    bind    escalate    create    update    delete              Create this role in Kubernetes  e g   with  kubectl apply  f        Moreover  if you want to use label selection to configure the namespaces on which a role can act     you will need to grant Vault permissions to read namespaces         yaml    apiVersion  rbac authorization k8s io v1    kind  ClusterRole    metadata       name  k8s full secrets abilities with labels    rules       apiGroups            resources    namespaces        verbs    get        apiGroups            resources    serviceaccounts    serviceaccounts token        verbs    create    update    delete        apiGroups    rbac authorization k8s io        resources    rolebindings    clusterrolebindings        verbs    create    update    delete        apiGroups    rbac authorization k8s io        resources    roles    clusterroles        verbs    bind    escalate    create    update    delete                   Note    Getting the right permissions for Vault will require some trial and error most       likely since Kubernetes has strict protections against privilege escalation  You can read more       in the        Kubernetes RBAC documentation  https   kubernetes io docs reference access authn authz rbac  privilege escalation prevention and bootstrapping            Note    Protect the Vault service account  especially if you use broader permissions for it        as it is essentially a cluster administrator account   1  Create a role binding to bind the role to Vault s service account and grant Vault permission    to manage tokens         yaml    apiVersion  rbac authorization k8s io v1    kind  ClusterRoleBinding    metadata       name  vault token creator binding    roleRef       apiGroup  rbac 
authorization k8s io      kind  ClusterRole      name  k8s minimal secrets abilities    subjects       kind  ServiceAccount      name  vault      namespace  vault            For more information on Kubernetes roles  service accounts  bindings  and tokens  visit the     Kubernetes RBAC documentation  https   kubernetes io docs reference access authn authz rbac     1  If Vault will not be automatically managing roles or service accounts  see     Automatically Managing Roles and Service Accounts   automatically managing roles and service accounts       then you will need to set up a service account that Vault will issue tokens for           Note    It is highly recommended that the service account that Vault issues tokens for       is   NOT   the same service account that Vault itself uses      The examples we will use will under the namespace  test   which you can create if it does not    already exist         shell session      kubectl create namespace test    namespace test created            Here is a simple set up of a service account  role  and role binding in the Kubernetes  test     namespace with basic permissions we will use for this document        yaml    apiVersion  v1    kind  ServiceAccount    metadata       name  test service account with generated token      namespace  test           apiVersion  rbac authorization k8s io v1    kind  Role    metadata       name  test role list pods      namespace  test    rules       apiGroups            resources    pods        verbs    list             apiVersion  rbac authorization k8s io v1    kind  RoleBinding    metadata       name  test role abilities      namespace  test    roleRef       apiGroup  rbac authorization k8s io      kind  Role      name  test role list pods    subjects       kind  ServiceAccount      name  test service account with generated token      namespace  test            You can create these objects with  kubectl apply  f    1  Enable the Kubernetes Secrets Engine         shell session      
vault secrets enable kubernetes    Success  Enabled the kubernetes Secrets Engine at  kubernetes             By default  the secrets engine will mount at the same name as the engine  i e       kubernetes   here  This can be changed by passing the   path  argument when enabling   1  Configure the mount point  An empty config is allowed         shell session      vault write  f kubernetes config            Configuration options are available as specified in the     API docs   vault api docs secret kubernetes    1  You can now configure Kubernetes Secrets Engine to create a Vault role    not   the same as a    Kubernetes role  that can generate service account tokens for the given service account         shell session      vault write kubernetes roles my role          allowed kubernetes namespaces              service account name  test service account with generated token           token default ttl  10m             Generating credentials  After a user has authenticated to Vault and has sufficient permissions  a write to the  creds  endpoint for the Vault role will generate and return a new service account token      shell session   vault write kubernetes creds my role       kubernetes namespace test  Key                        Value                                  lease id                   kubernetes creds my role 31d771a6     lease duration             10m0s lease renwable             false service account name       test service account with generated token service account namespace  test service account token      eyJHbGci0iJSUzI1NiIsImtpZCI6ImlrUEE         You can use the service account token above   eyJHbG      with any Kubernetes API request that its service account is authorized for  through role bindings       shell session   curl  sk   kubectl config view   minify  o  jsonpath   clusters   cluster server    api v1 namespaces test pods         header  Authorization  Bearer eyJHbGci0iJSUzI1Ni          kind    PodList      apiVersion    v1      metadata     
     resourceVersion    1624          items             When the lease expires  you can verify that the token has been revoked     shell session   curl  sk   kubectl config view   minify  o  jsonpath   clusters   cluster server    api v1 namespaces test pods         header  Authorization  Bearer eyJHbGci0iJSUzI1Ni          kind    Status      apiVersion    v1      metadata          status    Failure      message    Unauthorized      reason    Unauthorized      code   401           TTL  Kubernetes service account tokens have a time to live  TTL   When a token expires it is automatically revoked   You can set a default   token default ttl   and a maximum TTL   token max ttl   when creating or tuning the Vault role      shell session   vault write kubernetes roles my role       allowed kubernetes namespaces           service account name  new service account with generated token        token default ttl  10m        token max ttl  2h       You can also set a TTL   ttl   when you generate the token from the credentials endpoint  The TTL of the token will be given the default if not specified  and cannot exceed the maximum TTL of the role  if present       shell session   vault write kubernetes creds my role       kubernetes namespace test       ttl 20m  Key                        Value                                  lease id                   kubernetes creds my role 31d771a6     lease duration             20m0s lease renwable             false service account name       new service account with generated token service account namespace  test service account token      eyJHbGci0iJSUzI1NiIsImtpZCI6ImlrUEE          You can verify the token s TTL by decoding the JWT token and extracting the  iat   issued at  and  exp   expiration time  claims      shell session   echo  eyJhbGc       cut  d     f2   base64  d    jq  r   iat  exp todate  2022 05 20T17 14 50Z 2022 05 20T17 34 50Z         Audiences  Kubernetes service account tokens have audiences   You can set default 
audiences   token default audiences   when creating or tuning the Vault role  The Kubernetes cluster default audiences for service account tokens will be used if not specified      shell session   vault write kubernetes roles my role       allowed kubernetes namespaces           service account name  new service account with generated token        token default audiences  custom audience       You can also set audiences   audiences   when you generate the token from the credentials endpoint  The audiences of the token will be given the default audiences if not specified      shell session   vault write kubernetes creds my role       kubernetes namespace test       audiences  another custom audience   Key                        Value                                  lease id                   kubernetes creds my role SriWQf0bPZ    lease duration             768h lease renwable             false service account name       new service account with generated token service account namespace  test service account token      eyJHbGci0iJSUzI1NiIsImtpZCI6ImlrUEE         You can verify the token s audiences by decoding the JWT      shell session   echo  eyJhbGc       cut  d     f2   base64  d   aud    another custom audience              Automatically managing roles and service accounts  When configuring the Vault role  you can pass in parameters to specify that you want to automatically generate the Kubernetes service account and role binding  and optionally generate the Kubernetes role itself   If you want to configure the Vault role to use a pre existing Kubernetes role  but generate the service account and role binding automatically  you can set the  kubernetes role name  parameter      shell session   vault write kubernetes roles auto managed sa role       allowed kubernetes namespaces  test        kubernetes role name  test role list pods            Note    Vault s service account will also need access to the resources it is granting access to  This can be done for the 
examples above with  kubectl  n test create rolebinding   role test role list pods   serviceaccount vault vault vault test role abilities   This is how Kubernetes prevents privilege escalation  You can read more in the  Kubernetes RBAC documentation  https   kubernetes io docs reference access authn authz rbac  privilege escalation prevention and bootstrapping    You can then get credentials with the automatically generated service account     shell session   vault write kubernetes creds auto managed sa role       kubernetes namespace test Key                          Value                                    lease id                     kubernetes creds auto managed sa role cujRLYjKZUMQk6dkHBGGWm67 lease duration               768h lease renewable              false service account name         v token auto man 1653001548 5z6hrgsxnmzncxejztml4arz service account namespace    test service account token        eyJHbGci0iJSUzI1Ni         Furthermore  Vault can also automatically create the role in addition to the service account and role binding by specifying the  generated role rules  parameter  which accepts a set of JSON or YAML rules for the generated role      shell session   vault write kubernetes roles auto managed sa and role       allowed kubernetes namespaces  test        generated role rules    rules     apiGroups        resources    pods    verbs    list           You can then get credentials in the same way as before     shell session   vault write kubernetes creds auto managed sa and role       kubernetes namespace test Key                          Value                                    lease id                     kubernetes creds auto managed sa and role pehLtegoTP8vCkcaQozUqOHf lease duration               768h lease renewable              false service account name         v token auto man 1653002096 4imxf3ytjh5hbyro9s1oqdo3 service account namespace    test service account token        eyJHbGci0iJSUzI1Ni            API  The Kubernetes Secrets 
Engine has a full HTTP API  Please see the  Kubernetes Secrets Engine API docs   vault api docs secret kubernetes  for more details "}
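The audience check above decodes the JWT payload by hand with `cut` and `base64 -d`. The same inspection can be done in a few lines of Python; this is a sketch for looking at claims only (it deliberately skips signature verification, so it proves nothing about the token's authenticity), and the toy token built at the bottom is an illustration, not a real service account token:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying its signature.

    Mirrors the `cut -d. -f2 | base64 -d` shell pipeline above; for
    inspection only -- it says nothing about whether the token is valid.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments are base64url-encoded and may lack '=' padding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy token with a known payload to demonstrate the decode step.
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    b'{"aud":["another-custom-audience"]}'
).rstrip(b"=").decode()
token = f"{header}.{payload}.fake-signature"

print(jwt_payload(token)["aud"])  # ['another-custom-audience']
```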
{"questions":"vault credentials dynamically based on RAM policies or roles page title AliCloud Secrets Engines layout docs The AliCloud secrets engine for Vault generates access tokens or STS","answers":"---\nlayout: docs\npage_title: AliCloud - Secrets Engines\ndescription: >-\n  The AliCloud secrets engine for Vault generates access tokens or STS\n  credentials\n\n  dynamically based on RAM policies or roles.\n---\n\n# AliCloud secrets engine\n\nThe AliCloud secrets engine dynamically generates AliCloud access tokens based on RAM\npolicies, or AliCloud STS credentials based on RAM roles. This generally\nmakes working with AliCloud easier, since it does not involve clicking in the web UI.\nThe AliCloud access tokens are time-based and are automatically revoked when the Vault\nlease expires. STS credentials are short-lived, non-renewable, and expire on their own.\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1.  Enable the AliCloud secrets engine:\n\n    ```text\n    $ vault secrets enable alicloud\n    Success! Enabled the alicloud secrets engine at: alicloud\/\n    ```\n\n    By default, the secrets engine will mount at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  [Create a custom policy](https:\/\/www.alibabacloud.com\/help\/doc-detail\/28640.htm)\n    in AliCloud that will be used for the access key you will give Vault. See \"Example\n    RAM Policy for Vault\".\n\n1.  [Create a user](https:\/\/www.alibabacloud.com\/help\/faq-detail\/28637.htm) in AliCloud\n    with a name like \"hashicorp-vault\", and directly apply the new custom policy to that user\n    in the \"User Authorization Policies\" section.\n\n1.  Create an access key for that user in AliCloud, which is an action available in\n    AliCloud's UI on the user's page.\n\n1.  
Configure that access key as the credentials that Vault will use to communicate with\n    AliCloud to generate credentials:\n\n    ```text\n    $ vault write alicloud\/config \\\n        access_key=0wNEpMMlzy7szvai \\\n        secret_key=PupkTg8jdmau1cXxYacgE736PJj4cA\n    ```\n\n    Alternatively, the AliCloud secrets engine can pick up credentials set as environment variables,\n    or credentials available through instance metadata. Since it checks current credentials on every API call,\n    changes in credentials will be picked up almost immediately without a Vault restart.\n\n    If available, we recommend using instance metadata for these credentials as they are the most\n    secure option. To do so, simply ensure that the instance upon which Vault is running has sufficient\n    privileges, and do not add any config.\n\n1.  Configure a role describing how credentials will be granted.\n\n    To generate access tokens using only policies that have already been created in AliCloud:\n\n    ```text\n    $ vault write alicloud\/role\/policy-based \\\n        remote_policies='name:AliyunOSSReadOnlyAccess,type:System' \\\n        remote_policies='name:AliyunRDSReadOnlyAccess,type:System'\n    ```\n\n    To generate access tokens using only policies that will be dynamically created in AliCloud by\n    Vault:\n\n    ```text\n    $ vault write alicloud\/role\/policy-based \\\n        inline_policies=-<<EOF\n    [\n        {\n          \"Statement\": [\n            {\n              \"Action\": \"rds:Describe*\",\n              \"Effect\": \"Allow\",\n              \"Resource\": \"*\"\n            }\n          ],\n          \"Version\": \"1\"\n        },\n        {...}\n    ]\n    EOF\n    ```\n\n    Both `inline_policies` and `remote_policies` may be used together. 
However, neither may be\n    used when configuring how to generate STS credentials, like so:\n\n    ```text\n    $ vault write alicloud\/role\/role-based \\\n          role_arn='acs:ram::5138828231865461:role\/hastrustedactors'\n    ```\n\n    Any `role_arn` specified must have had \"trusted actors\" added when it was created. These\n    can only be added at role creation time. Trusted actors are entities that can assume the role.\n    Since we will be assuming the role to gain credentials, the `access_key` and `secret_key` in\n    the config must qualify as a trusted actor.\n\n### Helpful links\n\n- [More on roles](https:\/\/www.alibabacloud.com\/help\/doc-detail\/28649.htm)\n- [More on policies](https:\/\/www.alibabacloud.com\/help\/doc-detail\/28652.htm)\n\n### Example RAM policy for Vault\n\nWhile AliCloud credentials can be supplied by environment variables, an explicit\nsetting in the `alicloud\/config`, or through instance metadata, the resulting\ncredentials need sufficient permissions to issue secrets. 
The necessary permissions\nvary based on the ways roles are configured.\n\nThis is an example RAM policy that would allow you to create credentials using\nany type of role:\n\n```json\n{\n  \"Statement\": [\n    {\n      \"Action\": [\n        \"ram:CreateAccessKey\",\n        \"ram:DeleteAccessKey\",\n        \"ram:CreatePolicy\",\n        \"ram:DeletePolicy\",\n        \"ram:AttachPolicyToUser\",\n        \"ram:DetachPolicyFromUser\",\n        \"ram:CreateUser\",\n        \"ram:DeleteUser\",\n        \"sts:AssumeRole\"\n      ],\n      \"Effect\": \"Allow\",\n      \"Resource\": \"*\"\n    }\n  ],\n  \"Version\": \"1\"\n}\n```\n\nHowever, the policy you use should only allow the actions you actually need\nfor how your roles are configured.\n\nIf any roles are using `inline_policies`, you need the following actions:\n\n- `\"ram:CreateAccessKey\"`\n- `\"ram:DeleteAccessKey\"`\n- `\"ram:AttachPolicyToUser\"`\n- `\"ram:DetachPolicyFromUser\"`\n- `\"ram:CreateUser\"`\n- `\"ram:DeleteUser\"`\n\nIf any roles are using `remote_policies`, you need the following actions:\n\n- All listed for `inline_policies`\n- `\"ram:CreatePolicy\"`\n- `\"ram:DeletePolicy\"`\n\nIf any roles are using `role_arn`, you need the following actions:\n\n- `\"sts:AssumeRole\"`\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\n1.  
Generate a new access key by reading from the `\/creds` endpoint with the name\n    of the role:\n\n    ```text\n    $ vault read alicloud\/creds\/policy-based\n    Key                Value\n    ---                -----\n    lease_id           alicloud\/creds\/policy-based\/f3e92392-7d9c-09c8-c921-575d62fe80d8\n    lease_duration     768h\n    lease_renewable    true\n    access_key         0wNEpMMlzy7szvai\n    secret_key         PupkTg8jdmau1cXxYacgE736PJj4cA\n    ```\n\n    The `access_key` and `secret_key` returned are also known as\n    `\"AccessKeyId\"` and `\"AccessKeySecret\"`, respectively, in Alibaba's\n    docs.\n\n    Retrieving creds for a role using a `role_arn` will carry the additional\n    fields of `expiration` and `security_token`, like so:\n\n    ```text\n    $ vault read alicloud\/creds\/role-based\n    Key                Value\n    ---                -----\n    lease_id           alicloud\/creds\/role-based\/f3e92392-7d9c-09c8-c921-575d62fe80d9\n    lease_duration     59m59s\n    lease_renewable    false\n    access_key         STS.L4aBSCSJVMuKg5U1vFDw\n    secret_key         wyLTSmsyPGP1ohvvw8xYgB29dlGI8KMiH2pKCNZ9\n    security_token     CAESrAIIARKAAShQquMnLIlbvEcIxO6wCoqJufs8sWwieUxu45hS9AvKNEte8KRUWiJWJ6Y+YHAPgNwi7yfRecMFydL2uPOgBI7LDio0RkbYLmJfIxHM2nGBPdml7kYEOXmJp2aDhbvvwVYIyt\/8iES\/R6N208wQh0Pk2bu+\/9dvalp6wOHF4gkFGhhTVFMuTDRhQlNDU0pWTXVLZzVVMXZGRHciBTQzMjc0KgVhbGljZTCpnJjwySk6BlJzYU1ENUJuCgExGmkKBUFsbG93Eh8KDEFjdGlvbkVxdWFscxIGQWN0aW9uGgcKBW9zczoqEj8KDlJlc291cmNlRXF1YWxzEghSZXNvdXJjZRojCiFhY3M6b3NzOio6NDMyNzQ6c2FtcGxlYm94L2FsaWNlLyo=\n    expiration         2018-08-15T21:58:00Z\n    ```\n\n## API\n\nThe AliCloud secrets engine has a full HTTP API. 
Please see the\n[AliCloud secrets engine API](\/vault\/api-docs\/secret\/alicloud) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title  AliCloud   Secrets Engines description       The AliCloud secrets engine for Vault generates access tokens or STS   credentials    dynamically based on RAM policies or roles         AliCloud secrets engine  The AliCloud secrets engine dynamically generates AliCloud access tokens based on RAM policies  or AliCloud STS credentials based on RAM roles  This generally makes working with AliCloud easier  since it does not involve clicking in the web UI  The AliCloud access tokens are time based and are automatically revoked when the Vault lease expires  STS credentials are short lived  non renewable  and expire on their own      Setup  Most secrets engines must be configured in advance before they can perform their functions  These steps are usually completed by an operator or configuration management tool   1   Enable the AliCloud secrets engine          text       vault secrets enable alicloud     Success  Enabled the alicloud secrets engine at  alicloud               By default  the secrets engine will mount at the name of the engine  To     enable the secrets engine at a different path  use the   path  argument   1    Create a custom policy  https   www alibabacloud com help doc detail 28640 htm      in AliCloud that will be used for the access key you will give Vault  See  Example     RAM Policy for Vault    1    Create a user  https   www alibabacloud com help faq detail 28637 htm  in AliCloud     with a name like  hashicorp vault   and directly apply the new custom policy to that user     in the  User Authorization Policies  section   1   Create an access key for that user in AliCloud  which is an action available in     AliCloud s UI on the user s page   1   Configure that access key as the credentials that Vault will use to communicate with     AliCloud to generate credentials          text       
vault write alicloud config           access key 0wNEpMMlzy7szvai           secret key PupkTg8jdmau1cXxYacgE736PJj4cA              Alternatively  the AliCloud secrets engine can pick up credentials set as environment variables      or credentials available through instance metadata  Since it checks current credentials on every API call      changes in credentials will be picked up almost immediately without a Vault restart       If available  we recommend using instance metadata for these credentials as they are the most     secure option  To do so  simply ensure that the instance upon which Vault is running has sufficient     privileges  and do not add any config   1   Configure a role describing how credentials will be granted       To generate access tokens using only policies that have already been created in AliCloud          text       vault write alicloud role policy based           remote policies  name AliyunOSSReadOnlyAccess type System            remote policies  name AliyunRDSReadOnlyAccess type System               To generate access tokens using only policies that will be dynamically created in AliCloud by     Vault          text       vault write alicloud role policy based           inline policies    EOF                            Statement                                  Action    rds Describe                   Effect    Allow                  Resource                                             Version    1                                     EOF              Both  inline policies  and  remote policies  may be used together  However  neither may be     used configuring how to generate STS credentials  like so          text       vault write alibaba role role based             role arn  acs ram  5138828231865461 role hastrustedactors               Any  role arn  specified must have added  trusted actors  when it was being created  These     can only be added at role creation time  Trusted actors are entities that can assume the role      Since we 
will be assuming the role to gain credentials  the  access key  and  secret key  in     the config must qualify as a trusted actor       Helpful links     More on roles  https   www alibabacloud com help doc detail 28649 htm     More on policies  https   www alibabacloud com help doc detail 28652 htm       Example RAM policy for Vault  While AliCloud credentials can be supplied by environment variables  an explicit setting in the  alicloud config   or through instance metadata  the resulting credentials need sufficient permissions to issue secrets  The necessary permissions vary based on the ways roles are configured   This is an example RAM policy that would allow you to create credentials using any type of role      json      Statement                  Action              ram CreateAccessKey            ram DeleteAccessKey            ram CreatePolicy            ram DeletePolicy            ram AttachPolicyToUser            ram DetachPolicyFromUser            ram CreateUser            ram DeleteUser            sts AssumeRole                  Effect    Allow          Resource                     Version    1         However  the policy you use should only allow the actions you actually need for how your roles are configured   If any roles are using  inline policies   you need the following actions       ram CreateAccessKey       ram DeleteAccessKey       ram AttachPolicyToUser       ram DetachPolicyFromUser       ram CreateUser       ram DeleteUser    If any roles are using  remote policies   you need the following actions     All listed for  inline policies      ram CreatePolicy       ram DeletePolicy    If any roles are using  role arn   you need the following actions       sts AssumeRole       Usage  After the secrets engine is configured and a user machine has a Vault token with the proper permission  it can generate credentials   1   Generate a new access key by reading from the   creds  endpoint with the name     of the role          text       vault read 
alicloud creds policy based     Key                Value                                  lease id           alicloud creds policy based f3e92392 7d9c 09c8 c921 575d62fe80d8     lease duration     768h     lease renewable    true     access key         0wNEpMMlzy7szvai     secret key         PupkTg8jdmau1cXxYacgE736PJj4cA              The  access key  and  secret key  returned are also known is an       AccessKeyId  and   AccessKeySecret    respectively  in the Alibaba s     docs       Retrieving creds for a role using a  role arn  will carry the additional     fields of  expiration  and  security token   like so          text       vault read alicloud creds role based     Key                Value                                  lease id           alicloud creds role based f3e92392 7d9c 09c8 c921 575d62fe80d9     lease duration     59m59s     lease renewable    false     access key         STS L4aBSCSJVMuKg5U1vFDw     secret key         wyLTSmsyPGP1ohvvw8xYgB29dlGI8KMiH2pKCNZ9     security token     CAESrAIIARKAAShQquMnLIlbvEcIxO6wCoqJufs8sWwieUxu45hS9AvKNEte8KRUWiJWJ6Y YHAPgNwi7yfRecMFydL2uPOgBI7LDio0RkbYLmJfIxHM2nGBPdml7kYEOXmJp2aDhbvvwVYIyt 8iES R6N208wQh0Pk2bu  9dvalp6wOHF4gkFGhhTVFMuTDRhQlNDU0pWTXVLZzVVMXZGRHciBTQzMjc0KgVhbGljZTCpnJjwySk6BlJzYU1ENUJuCgExGmkKBUFsbG93Eh8KDEFjdGlvbkVxdWFscxIGQWN0aW9uGgcKBW9zczoqEj8KDlJlc291cmNlRXF1YWxzEghSZXNvdXJjZRojCiFhY3M6b3NzOio6NDMyNzQ6c2FtcGxlYm94L2FsaWNlLyo      expiration         2018 08 15T21 58 00Z             API  The AliCloud secrets engine has a full HTTP API  Please see the  AliCloud secrets engine API   vault api docs secret alicloud  for more details "}
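The "Example RAM policy for Vault" section above maps each role feature (`inline_policies`, `remote_policies`, `role_arn`) to the RAM actions it requires. As a sketch of that mapping, the hypothetical helper below composes the smallest policy document for a given combination of features; the function name and structure are illustrative, not part of Vault or AliCloud:

```python
import json

# RAM actions per role feature, as listed in the doc above.
INLINE_ACTIONS = [
    "ram:CreateAccessKey", "ram:DeleteAccessKey",
    "ram:AttachPolicyToUser", "ram:DetachPolicyFromUser",
    "ram:CreateUser", "ram:DeleteUser",
]
# remote_policies needs everything inline_policies needs, plus policy CRUD.
REMOTE_ACTIONS = INLINE_ACTIONS + ["ram:CreatePolicy", "ram:DeletePolicy"]
STS_ACTIONS = ["sts:AssumeRole"]

def minimal_vault_policy(inline=False, remote=False, role_arn=False):
    """Compose the smallest RAM policy covering the role features in use."""
    actions = []
    if inline:
        actions += INLINE_ACTIONS
    if remote:
        actions += REMOTE_ACTIONS
    if role_arn:
        actions += STS_ACTIONS
    # De-duplicate and keep a stable order.
    actions = sorted(set(actions))
    return {
        "Statement": [{"Action": actions, "Effect": "Allow", "Resource": "*"}],
        "Version": "1",
    }

print(json.dumps(minimal_vault_policy(remote=True), indent=2))
```

A policy generated for `remote=True` matches the doc's guidance: all six `inline_policies` actions plus `ram:CreatePolicy` and `ram:DeletePolicy`, and nothing else.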
{"questions":"vault page title TOTP Secrets Engines layout docs The TOTP secrets engine for Vault generates time based one time use passwords TOTP secrets engine The TOTP secrets engine generates time based credentials according to the TOTP standard The secrets engine can also be used to generate a new key and validate","answers":"---\nlayout: docs\npage_title: TOTP - Secrets Engines\ndescription: The TOTP secrets engine for Vault generates time-based one-time use passwords.\n---\n\n# TOTP secrets engine\n\nThe TOTP secrets engine generates time-based credentials according to the TOTP\nstandard. The secrets engine can also be used to generate a new key and validate\npasswords generated by that key.\n\nThe TOTP secrets engine can act as both a generator (like Google Authenticator)\nand a provider (like the Google.com sign in service).\n\n## As a generator\n\nThe TOTP secrets engine can act as a TOTP code generator. In this mode, it can\nreplace traditional TOTP generators like Google Authenticator. It provides an\nadded layer of security since the ability to generate codes is guarded by\npolicies and the entire process is audited.\n\n### Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1.  Enable the TOTP secrets engine:\n\n    ```text\n    $ vault secrets enable totp\n    Success! Enabled the totp secrets engine at: totp\/\n    ```\n\n    By default, the secrets engine will mount at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Configure a named key. The name of this key will be a human identifier as to\n    its purpose.\n\n    ```text\n    $ vault write totp\/keys\/my-key \\\n        url=\"otpauth:\/\/totp\/Vault:test@test.com?secret=Y64VEVMBTSXCYIWRSHRNDZW62MPGVU2G&issuer=Vault\"\n    Success! 
Data written to: totp\/keys\/my-key\n    ```\n\n    The `url` corresponds to the secret key or value from the barcode provided\n    by the third-party service.\n\n### Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\n1.  Generate a new time-based OTP by reading from the `\/code` endpoint with the\n    name of the key:\n\n    ```text\n    $ vault read totp\/code\/my-key\n    Key     Value\n    ---     -----\n    code    260610\n    ```\n\n    Using ACLs, it is possible to restrict using the TOTP secrets engine such\n    that trusted operators can manage the key definitions, and both users and\n    applications are restricted in the credentials they are allowed to read.\n\n## As a provider\n\nThe TOTP secrets engine can also act as a TOTP provider. In this mode, it can be\nused to generate new keys and validate passwords generated using those keys.\n\n### Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1.  Enable the TOTP secrets engine:\n\n    ```text\n    $ vault secrets enable totp\n    Success! Enabled the totp secrets engine at: totp\/\n    ```\n\n    By default, the secrets engine will mount at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Create a named key, using the `generate` option. 
This tells Vault to be the\n    provider:\n\n    ```text\n    $ vault write totp\/keys\/my-user \\\n        generate=true \\\n        issuer=Vault \\\n        account_name=user@test.com\n\n    Key        Value\n    ---        -----\n    barcode    iVBORw0KGgoAAAANSUhEUgAAAMgAAADIEAAAAADYoy0BA...\n    url        otpauth:\/\/totp\/Vault:user@test.com?algorithm=SHA1&digits=6&issuer=Vault&period=30&secret=V7MBSK324I7KF6KVW34NDFH2GYHIF6JY\n    ```\n\n    The response includes a base64-encoded barcode and OTP url. Both are\n    equivalent. Give these to the user who is authenticating with TOTP.\n\n### Usage\n\n1. As a user, validate a TOTP code generated by a third-party app:\n\n   ```text\n   $ vault write totp\/code\/my-user code=886531\n   Key      Value\n   ---      -----\n   valid    true\n   ```\n\n## API\n\nThe TOTP secrets engine has a full HTTP API. Please see the\n[TOTP secrets engine API](\/vault\/api-docs\/secret\/totp) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title  TOTP   Secrets Engines description  The TOTP secrets engine for Vault generates time based one time use passwords         TOTP secrets engine  The TOTP secrets engine generates time based credentials according to the TOTP standard  The secrets engine can also be used to generate a new key and validate passwords generated by that key   The TOTP secrets engine can act as both a generator  like Google Authenticator  and a provider  like the Google com sign in service       As a generator  The TOTP secrets engine can act as a TOTP code generator  In this mode  it can replace traditional TOTP generators like Google Authenticator  It provides an added layer of security since the ability to generate codes is guarded by policies and the entire process is audited       Setup  Most secrets engines must be configured in advance before they can perform their functions  These steps are usually completed by an operator or configuration management tool   1   Enable the TOTP 
secrets engine          text       vault secrets enable totp     Success  Enabled the totp secrets engine at  totp               By default  the secrets engine will mount at the name of the engine  To     enable the secrets engine at a different path  use the   path  argument   1   Configure a named key  The name of this key will be a human identifier as to     its purpose          text       vault write totp keys my key           url  otpauth   totp Vault test test com secret Y64VEVMBTSXCYIWRSHRNDZW62MPGVU2G issuer Vault      Success  Data written to  totp keys my key              The  url  corresponds to the secret key or value from the barcode provided     by the third party service       Usage  After the secrets engine is configured and a user machine has a Vault token with the proper permission  it can generate credentials   1   Generate a new time based OTP by reading from the   code  endpoint with the     name of the key          text       vault read totp code my key     Key     Value                       code    260610              Using ACLs  it is possible to restrict using the TOTP secrets engine such     that trusted operators can manage the key definitions  and both users and     applications are restricted in the credentials they are allowed to read      As a provider  The TOTP secrets engine can also act as a TOTP provider  In this mode  it can be used to generate new keys and validate passwords generated using those keys       Setup  Most secrets engines must be configured in advance before they can perform their functions  These steps are usually completed by an operator or configuration management tool   1   Enable the TOTP secrets engine          text       vault secrets enable totp     Success  Enabled the totp secrets engine at  totp               By default  the secrets engine will mount at the name of the engine  To     enable the secrets engine at a different path  use the   path  argument   1   Create a named key  using the  generate  
option  This tells Vault to be the     provider          text       vault write totp keys my user           generate true           issuer Vault           account name user test com      Key        Value                          barcode    iVBORw0KGgoAAAANSUhEUgAAAMgAAADIEAAAAADYoy0BA        url        otpauth   totp Vault user test com algorithm SHA1 digits 6 issuer Vault period 30 secret V7MBSK324I7KF6KVW34NDFH2GYHIF6JY              The response includes a base64 encoded barcode and OTP url  Both are     equivalent  Give these to the user who is authenticating with TOTP       Usage  1  As a user  validate a TOTP code generated by a third party app         text      vault write totp code my user code 886531    Key      Value                      valid    true            API  The TOTP secrets engine has a full HTTP API  Please see the  TOTP secrets engine API   vault api docs secret totp  for more details "}
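Generators like the TOTP secrets engine implement RFC 6238, which derives each code from an HMAC of the current 30-second time step (RFC 4226 HOTP underneath). A minimal stdlib sketch, assuming the common defaults the engine's `url` output shows (SHA-1, 6 digits, 30-second period):

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret_b32, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    key = base64.b32decode(secret_b32, casefold=True)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32, at=None, period=30):
    """RFC 6238 TOTP: HOTP keyed on the current time step."""
    counter = int((time.time() if at is None else at) // period)
    return hotp(secret_b32, counter)

# RFC 4226 test vector: ASCII secret "12345678901234567890", counter 0.
secret = base64.b32encode(b"12345678901234567890").decode()
print(hotp(secret, 0))  # 755224
```

The printed value matches the RFC 4226 Appendix D test vector for counter 0, which is a convenient way to sanity-check any TOTP implementation.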
{"questions":"vault page title Nomad Secrets Engine Nomad secrets engine The Nomad secrets engine for Vault generates tokens for Nomad dynamically layout docs include x509 sha1 deprecation mdx","answers":"---\nlayout: docs\npage_title: Nomad Secrets Engine\ndescription: The Nomad secrets engine for Vault generates tokens for Nomad dynamically.\n---\n\n# Nomad secrets engine\n\n@include 'x509-sha1-deprecation.mdx'\n\nName: `Nomad`\n\nNomad is a simple, flexible scheduler and workload orchestrator. The Nomad secrets engine for Vault generates [Nomad](https:\/\/www.nomadproject.io\/)\nACL tokens dynamically based on pre-existing Nomad ACL policies.\n\nThis page will show a quick start for this secrets engine. For detailed documentation\non every path, use `vault path-help` after mounting the secrets engine.\n\n~> **Version information** ACLs are only available on Nomad 0.7.0 and above.\n\n## Quick start\n\nThe first step to using the Vault secrets engine is to enable it.\n\n```shell-session\n$ vault secrets enable nomad\nSuccessfully mounted 'nomad' at 'nomad'!\n```\n\nOptionally, we can configure the lease settings for credentials generated\nby Vault. This is done by writing to the `config\/lease` key:\n\n```shell-session\n$ vault write nomad\/config\/lease ttl=3600 max_ttl=86400\nSuccess! 
Data written to: nomad\/config\/lease\n```\n\nFor a quick start, you can use the SecretID token provided by the [Nomad ACL bootstrap\nprocess](\/nomad\/tutorials\/access-control#generate-the-initial-token), although this\nis discouraged for production deployments.\n\n```shell-session\n$ nomad acl bootstrap\nAccessor ID  = 95a0ee55-eaa6-2c0a-a900-ed94c156754e\nSecret ID    = c25b6ca0-ea4e-000f-807a-fd03fcab6e3c\nName         = Bootstrap Token\nType         = management\nGlobal       = true\nPolicies     = n\/a\nCreate Time  = 2017-09-20 19:40:36.527512364 +0000 UTC\nCreate Index = 7\nModify Index = 7\n```\n\nThe suggested pattern is to generate a token specifically for Vault, following the\n[Nomad ACL guide](\/nomad\/tutorials\/access-control)\n\nNext, we must configure Vault to know how to contact Nomad.\nThis is done by writing the access information:\n\n```shell-session\n$ vault write nomad\/config\/access \\\n    address=http:\/\/127.0.0.1:4646 \\\n    token=adf4238a-882b-9ddc-4a9d-5b6758e4159e\nSuccess! Data written to: nomad\/config\/access\n```\n\nIn this case, we've configured Vault to connect to Nomad\non the default port with the loopback address. We've also provided\nan ACL token to use with the `token` parameter. Vault must have a management\ntype token so that it can create and revoke ACL tokens.\n\nThe next step is to configure a role. A role is a logical name that maps\nto a set of policy names used to generate those credentials. For example, let's create\na \"monitoring\" role that maps to a \"readonly\" policy:\n\n```shell-session\n$ vault write nomad\/role\/monitoring policies=readonly\nSuccess! 
Data written to: nomad\/role\/monitoring\n```\n\nThe secrets engine expects either a single policy name or a comma-separated list of policy names.\n\nTo generate a new Nomad ACL token, we simply read from that role:\n\n```shell-session\n$ vault read nomad\/creds\/monitoring\nKey              Value\n---              -----\nlease_id         nomad\/creds\/monitoring\/78ec3ef3-c806-1022-4aa8-1dbae39c760c\nlease_duration   768h0m0s\nlease_renewable  true\naccessor_id      a715994d-f5fd-1194-73df-ae9dad616307\nsecret_id        b31fb56c-0936-5428-8c5f-ed010431aba9\n```\n\nHere we can see that Vault has generated a new Nomad ACL token for us.\nWe can test this token by reading it in Nomad (by its accessor):\n\n```shell-session\n$ nomad acl token info a715994d-f5fd-1194-73df-ae9dad616307\nAccessor ID  = a715994d-f5fd-1194-73df-ae9dad616307\nSecret ID    = b31fb56c-0936-5428-8c5f-ed010431aba9\nName         = Vault example root 1505945527022465593\nType         = client\nGlobal       = false\nPolicies     = [readonly]\nCreate Time  = 2017-09-20 22:12:07.023455379 +0000 UTC\nCreate Index = 138\nModify Index = 138\n```\n\n## Tutorial\n\nRefer to [Generate Nomad Tokens with HashiCorp\nVault](\/nomad\/tutorials\/integrate-vault\/vault-nomad-secrets) for a\nstep-by-step tutorial.\n\n## API\n\nThe Nomad secrets engine has a full HTTP API. 
Please see the\n[Nomad Secrets Engine API](\/vault\/api-docs\/secret\/nomad) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title  Nomad Secrets Engine description  The Nomad secrets engine for Vault generates tokens for Nomad dynamically         Nomad secrets engine   include  x509 sha1 deprecation mdx   Name   Nomad   Nomad is a simple  flexible scheduler and workload orchestrator  The Nomad secrets engine for Vault generates  Nomad  https   www nomadproject io   ACL tokens dynamically based on pre existing Nomad ACL policies   This page will show a quick start for this secrets engine  For detailed documentation on every path  use  vault path help  after mounting the secrets engine        Version information   ACLs are only available on Nomad 0 7 0 and above      Quick start  The first step to using the Vault secrets engine is to enable it      shell session   vault secrets enable nomad Successfully mounted  nomad  at  nomad        Optionally  we can configure the lease settings for credentials generated by Vault  This is done by writing to the  config lease  key      shell session   vault write nomad config lease ttl 3600 max ttl 86400 Success  Data written to  nomad config lease      For a quick start  you can use the SecretID token provided by the  Nomad ACL bootstrap process   nomad tutorials access control generate the initial token   although this is discouraged for production deployments      shell session   nomad acl bootstrap Accessor ID    95a0ee55 eaa6 2c0a a900 ed94c156754e Secret ID      c25b6ca0 ea4e 000f 807a fd03fcab6e3c Name           Bootstrap Token Type           management Global         true Policies       n a Create Time    2017 09 20 19 40 36 527512364  0000 UTC Create Index   7 Modify Index   7      The suggested pattern is to generate a token specifically for Vault  following the  Nomad ACL guide   nomad tutorials access control   Next  we must configure Vault to know how to contact Nomad  This is done by 
writing the access information      shell session   vault write nomad config access       address http   127 0 0 1 4646       token adf4238a 882b 9ddc 4a9d 5b6758e4159e Success  Data written to  nomad config access      In this case  we ve configured Vault to connect to Nomad on the default port with the loopback address  We ve also provided an ACL token to use with the  token  parameter  Vault must have a management type token so that it can create and revoke ACL tokens   The next step is to configure a role  A role is a logical name that maps to a set of policy names used to generate those credentials  For example  let s create a  monitoring  role that maps to a  readonly  policy      shell session   vault write nomad role monitoring policies readonly Success  Data written to  nomad role monitoring      The secrets engine expects either a single or a comma separated list of policy names   To generate a new Nomad ACL token  we simply read from that role      shell session   vault read nomad creds monitoring Key              Value                        lease id         nomad creds monitoring 78ec3ef3 c806 1022 4aa8 1dbae39c760c lease duration   768h0m0s lease renewable  true accessor id      a715994d f5fd 1194 73df ae9dad616307 secret id        b31fb56c 0936 5428 8c5f ed010431aba9      Here we can see that Vault has generated a new Nomad ACL token for us  We can test this token out  by reading it in Nomad  by it s accessor       shell session   nomad acl token info a715994d f5fd 1194 73df ae9dad616307 Accessor ID    a715994d f5fd 1194 73df ae9dad616307 Secret ID      b31fb56c 0936 5428 8c5f ed010431aba9 Name           Vault example root 1505945527022465593 Type           client Global         false Policies        readonly  Create Time    2017 09 20 22 12 07 023455379  0000 UTC Create Index   138 Modify Index   138         Tutorial  Refer to  Generate Nomad Tokens with HashiCorp Vault   nomad tutorials integrate vault vault nomad secrets  for a step by step 
tutorial      API  The Nomad secrets engine has a full HTTP API  Please see the  Nomad Secrets Engine API   vault api docs secret nomad  for more details "}
{"questions":"vault page title Consul Secrets Engines The Consul secrets engine for Vault generates tokens for Consul dynamically Consul secrets engine layout docs include x509 sha1 deprecation mdx","answers":"---\nlayout: docs\npage_title: Consul - Secrets Engines\ndescription: The Consul secrets engine for Vault generates tokens for Consul dynamically.\n---\n\n# Consul secrets engine\n\n@include 'x509-sha1-deprecation.mdx'\n\nThe Consul secrets engine generates [Consul](https:\/\/www.consul.io\/) API tokens\ndynamically based on Consul ACL policies.\n\n-> **Note:** See the Consul Agent [config documentation](\/consul\/docs\/agent\/config\/config-files#acl-parameters)\nfor details on how to enable Consul's ACL system.\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. (Optional) If you're only looking to set up a quick test environment, you can start a\n    Consul Agent in dev mode in a separate terminal window.\n\n    ```shell-session\n    $ consul agent -dev -hcl \"acl { enabled = true }\"\n    ```\n\n1.  Enable the Consul secrets engine:\n\n    ```shell-session\n    $ vault secrets enable consul\n    Success! Enabled the consul secrets engine at: consul\/\n    ```\n\n    By default, the secrets engine will mount at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Configure Vault to connect and authenticate to Consul.\n\n    Vault can bootstrap the Consul ACL system automatically if it is enabled and hasn't already\n    been bootstrapped. If you have already bootstrapped the ACL system, then you will need to\n    provide Vault with a management token. This can either be the bootstrap token or another\n    management token you've created yourself.\n\n    1.  
Configuring Vault without previously bootstrapping the Consul ACL system:\n\n        ```shell-session\n        $ vault write consul\/config\/access \\\n            address=\"127.0.0.1:8500\"\n        Success! Data written to: consul\/config\/access\n        ```\n\n        ~> **Note:** Vault will silently store the bootstrap token as the configuration token when\n        it performs the automatic bootstrap; it will not be presented to the user. If you need\n        another management token, you will need to generate one by writing a Vault role with the\n        `global-management` policy and then reading new creds back from it.\n\n    1. Configuring Vault after manually bootstrapping the Consul ACL system:\n\n        1.  For Consul 1.4 and above, use the command line to generate a token with the appropriate policy:\n\n            ```shell-session\n            $ CONSUL_HTTP_TOKEN=\"<bootstrap-token>\" consul acl token create -policy-name=\"global-management\"\n            AccessorID:   865dc5e9-e585-3180-7b49-4ddc0fc45135\n            SecretID:     ef35f0f1-885b-0cab-573c-7c91b65a7a7e\n            Description:\n            Local:        false\n            Create Time:  2018-10-22 17:40:24.128188 -0700 PDT\n            Policies:\n                00000000-0000-0000-0000-000000000001 - global-management\n            ```\n\n            ```shell-session\n            $ vault write consul\/config\/access \\\n                address=\"127.0.0.1:8500\" \\\n                token=\"ef35f0f1-885b-0cab-573c-7c91b65a7a7e\"\n            Success! Data written to: consul\/config\/access\n            ```\n\n        1.  
For Consul versions below 1.4, acquire a [management token][consul-mgmt-token] from Consul, using the\n            `acl_master_token` from your Consul configuration file or another management token:\n\n            ```shell-session\n            $ curl \\\n                --header \"X-Consul-Token: my-management-token\" \\\n                --request PUT \\\n                --data '{\"Name\": \"sample\", \"Type\": \"management\"}' \\\n                https:\/\/consul.rocks\/v1\/acl\/create\n            ```\n\n            Vault must have a management type token so that it can create and revoke ACL\n            tokens. The response will return a new token:\n\n            ```json\n            {\n            \"ID\": \"7652ba4c-0f6e-8e75-5724-5e083d72cfe4\"\n            }\n            ```\n\n1.  Configure a role that maps a name in Vault to a Consul ACL policy. Depending on your Consul version,\n    you will either provide a policy document and a token type, a list of policies or roles, or a set of\n    service or node identities. When users generate credentials, they are generated against this role.\n\n    1.  For Consul versions 1.8 and above, attach [a Consul node identity](\/consul\/commands\/acl\/token\/create#node-identity) to the role.\n\n        ```shell-session\n        $ vault write consul\/roles\/my-role \\\n            node_identities=\"server-1:dc1\" \\\n            node_identities=\"server-2:dc1\"\n        Success! Data written to: consul\/roles\/my-role\n        ```\n\n    1.  For Consul versions 1.5 and above, attach either [a role in Consul](\/consul\/api-docs\/acl\/roles) or [a Consul service identity](\/consul\/commands\/acl\/token\/create#service-identity) to the role:\n\n        ```shell-session\n        $ vault write consul\/roles\/my-role consul_roles=\"api-server\"\n        Success! 
Data written to: consul\/roles\/my-role\n        ```\n\n        ```shell-session\n        $ vault write consul\/roles\/my-role \\\n            service_identities=\"myservice-1:dc1,dc2\" \\\n            service_identities=\"myservice-2:dc1\"\n        Success! Data written to: consul\/roles\/my-role\n        ```\n\n    1.  For Consul versions 1.4 and above, generate [a policy in Consul](\/consul\/tutorials\/security\/access-control-setup-production),\n        and proceed to link it to the role:\n\n        ```shell-session\n        $ vault write consul\/roles\/my-role consul_policies=\"readonly\"\n        Success! Data written to: consul\/roles\/my-role\n        ```\n\n    1.  For Consul versions below 1.4, the policy must be base64-encoded. The policy language is\n        [documented by Consul](\/consul\/docs\/security\/acl\/acl-legacy). Support for this method is\n        deprecated as of Vault 1.11.\n\n        Write a policy and proceed to link it to the role:\n\n        ```shell-session\n        $ vault write consul\/roles\/my-role policy=\"$(echo 'key \"\" { policy = \"read\" }' | base64)\"\n        Success! Data written to: consul\/roles\/my-role\n        ```\n\n        -> **Token lease duration:** If you do not specify a value for `ttl` (or `lease` for Consul versions below 1.4) the\n        tokens created using Vault's Consul secrets engine are created with a Time To Live (TTL) of 30 days. You can change\n        the lease duration by passing `-ttl=<duration>` to the command above where duration is a [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n1.  You may further limit a role's access by adding the optional parameters `consul_namespace` and\n    `partition`. Please refer to Consul's [namespace documentation](\/consul\/docs\/enterprise\/namespaces) and\n    [admin partition documentation](\/consul\/docs\/enterprise\/admin-partitions) for further information about\n    these features.\n\n    1.  
For Consul version 1.11 and above, link an admin partition to a role:\n\n        ```shell-session\n        $ vault write consul\/roles\/my-role consul_roles=\"admin-management\" partition=\"admin1\"\n        Success! Data written to: consul\/roles\/my-role\n        ```\n\n    1.  For Consul versions 1.7 and above, link a Consul namespace to the role:\n\n        ```shell-session\n        $ vault write consul\/roles\/my-role consul_roles=\"namespace-management\" consul_namespace=\"ns1\"\n        Success! Data written to: consul\/roles\/my-role\n        ```\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\nGenerate a new credential by reading from the `\/creds` endpoint with the name\nof the role:\n\n```shell-session\n$ vault read consul\/creds\/my-role\nKey                 Value\n---                 -----\nlease_id            consul\/creds\/my-role\/b2469121-f55f-53c5-89af-a3ba52b1d6d8\nlease_duration      768h\nlease_renewable     true\naccessor            c81b9cf7-2c4f-afc7-1449-4e442b831f65\nconsul_namespace    ns1\nlocal               false\npartition           admin1\ntoken               642783bf-1540-526f-d4de-fe1ac1aed6f0\n```\n\n!> **Expired token rotation:** Once a token's TTL expires, then Consul operations will no longer be allowed with it.\nThis requires you to have an external process to rotate tokens. At this time, the recommended approach for operators\nis to rotate the tokens manually by creating a new token using the `vault read consul\/creds\/my-role` command. Once\nthe token is synchronized with Consul, apply the token to the agents using the Consul API or CLI.\n\n## Tutorial\n\nRefer to [Administer Consul Access Control Tokens with\nVault](\/consul\/tutorials\/vault-secure\/vault-consul-secrets) for a\nstep-by-step tutorial.\n\n## API\n\nThe Consul secrets engine has a full HTTP API. 
Please see the\n[Consul secrets engine API](\/vault\/api-docs\/secret\/consul) for more\ndetails.\n\n[consul-mgmt-token]: \/consul\/api-docs\/acl#acl_create","site":"vault","answers_cleaned":"    layout  docs page title  Consul   Secrets Engines description  The Consul secrets engine for Vault generates tokens for Consul dynamically         Consul secrets engine   include  x509 sha1 deprecation mdx   The Consul secrets engine generates  Consul  https   www consul io   API tokens dynamically based on Consul ACL policies        Note    See the Consul Agent  config documentation   consul docs agent config config files acl parameters  for details on how to enable Consul s ACL system      Setup  Most secrets engines must be configured in advance before they can perform their functions  These steps are usually completed by an operator or configuration management tool   1   Optional  If you re only looking to set up a quick test environment  you can start a     Consul Agent in dev mode in a separate terminal window          shell session       consul agent  dev  hcl  acl   enabled   true             1   Enable the Consul secrets engine          shell session       vault secrets enable consul     Success  Enabled the consul secrets engine at  consul               By default  the secrets engine will mount at the name of the engine  To     enable the secrets engine at a different path  use the   path  argument   1   Configure Vault to connect and authenticate to Consul       Vault can bootstrap the Consul ACL system automatically if it is enabled and hasn t already     been bootstrapped  If you have already bootstrapped the ACL system  then you will need to     provide Vault with a management token  This can either be the bootstrap token or another     management token you ve created yourself       1   Configuring Vault without previously bootstrapping the Consul ACL system              shell session           vault write consul config access               address  127 0 0 1 
8500          Success  Data written to  consul config access                           Note    Vault will silently store the bootstrap token as the configuration token when         it performs the automatic bootstrap  it will not be presented to the user  If you need         another management token  you will need to generate one by writing a Vault role with the          global management  policy and then reading new creds back from it       1  Configuring Vault after manually bootstrapping the Consul ACL system           1   For Consul 1 4 and above  use the command line to generate a token with the appropriate policy                  shell session               CONSUL HTTP TOKEN   bootstrap token   consul acl token create  policy name  global management              AccessorID    865dc5e9 e585 3180 7b49 4ddc0fc45135             SecretID      ef35f0f1 885b 0cab 573c 7c91b65a7a7e             Description              Local         false             Create Time   2018 10 22 17 40 24 128188  0700 PDT             Policies                  00000000 0000 0000 0000 000000000001   global management                                 shell session               vault write consul config access                   address  127 0 0 1 8500                    token  ef35f0f1 885b 0cab 573c 7c91b65a7a7e              Success  Data written to  consul config access                          1   For Consul versions below 1 4  acquire a  management token  consul mgmt token  from Consul  using the              acl master token  from your Consul configuration file or another management token                  shell session               curl                     header  X Consul Token  my management token                      request PUT                     data    Name    sample    Type    management                      https   consul rocks v1 acl create                              Vault must have a management type token so that it can create and revoke ACL             tokens  The response 
will return a new token                  json                            ID    7652ba4c 0f6e 8e75 5724 5e083d72cfe4                                 1   Configure a role that maps a name in Vault to a Consul ACL policy  Depending on your Consul version      you will either provide a policy document and a token type  a list of policies or roles  or a set of     service or node identities  When users generate credentials  they are generated against this role       1   For Consul versions 1 8 and above  attach  a Consul node identity   consul commands acl token create node identity  to the role              shell session           vault write consul roles my role               node identities  server 1 dc1                node identities  server 2 dc1          Success  Data written to  consul roles my role                  1   For Consul versions 1 5 and above  attach either  a role in Consul   consul api docs acl roles  or  a Consul service identity   consul commands acl token create service identity  to the role              shell session           vault write consul roles my role consul roles  api server          Success  Data written to  consul roles my role                         shell session           vault write consul roles my role               service identities  myservice 1 dc1 dc2                service identities  myservice 2 dc1          Success  Data written to  consul roles my role                  1   For Consul versions 1 4 and above  generate  a policy in Consul   consul tutorials security access control setup production           and proceed to link it to the role              shell session           vault write consul roles my role consul policies  readonly          Success  Data written to  consul roles my role                  1   For Consul versions below 1 4  the policy must be base64 encoded  The policy language is          documented by Consul   consul docs security acl acl legacy   Support for this method is         deprecated as of Vault 1 
11           Write a policy and proceed to link it to the role              shell session           vault write consul roles my role policy    echo  key      policy    read       base64           Success  Data written to  consul roles my role                           Token lease duration    If you do not specify a value for  ttl   or  lease  for Consul versions below 1 4  the         tokens created using Vault s Consul secrets engine are created with a Time To Live  TTL  of 30 days  You can change         the lease duration by passing   ttl  duration   to the command above where duration is a  duration format strings   vault docs concepts duration format    1   You may further limit a role s access by adding the optional parameters  consul namespace  and      partition   Please refer to Consul s  namespace documentation   consul docs enterprise namespaces  and      admin partition documentation   consul docs enterprise admin partitions  for further information about     these features       1   For Consul version 1 11 and above  link an admin partition to a role              shell session           vault write consul roles my role consul roles  admin management  partition  admin1          Success  Data written to  consul roles my role                  1   For Consul versions 1 7 and above  link a Consul namespace to the role              shell session           vault write consul roles my role consul roles  namespace management  consul namespace  ns1          Success  Data written to  consul roles my role                 Usage  After the secrets engine is configured and a user machine has a Vault token with the proper permission  it can generate credentials   Generate a new credential by reading from the   creds  endpoint with the name of the role      shell session   vault read consul creds my role Key                 Value                           lease id            consul creds my role b2469121 f55f 53c5 89af a3ba52b1d6d8 lease duration      768h lease 
renewable     true accessor            c81b9cf7 2c4f afc7 1449 4e442b831f65 consul namespace    ns1 local               false partition           admin1 token               642783bf 1540 526f d4de fe1ac1aed6f0           Expired token rotation    Once a token s TTL expires  then Consul operations will no longer be allowed with it  This requires you to have an external process to rotate tokens  At this time  the recommended approach for operators is to rotate the tokens manually by creating a new token using the  vault read consul creds my role  command  Once the token is synchronized with Consul  apply the token to the agents using the Consul API or CLI      Tutorial  Refer to  Administer Consul Access Control Tokens with Vault   consul tutorials vault secure vault consul secrets  for a step by step tutorial      API  The Consul secrets engine has a full HTTP API  Please see the  Consul secrets engine API   vault api docs secret consul  for more details    consul mgmt token    consul api docs acl acl create"}
{"questions":"vault page title Terraform Cloud Secret Backend The Terraform Cloud secret backend for Vault generates tokens for Terraform Cloud dynamically layout docs Name Terraform Cloud Terraform Cloud secret backend","answers":"---\nlayout: docs\npage_title: Terraform Cloud Secret Backend\ndescription: The Terraform Cloud secret backend for Vault generates tokens for Terraform Cloud dynamically.\n---\n\n# Terraform Cloud secret backend\n\nName: `Terraform Cloud`\n\nThe Terraform Cloud secret backend for Vault generates\n[Terraform Cloud](https:\/\/cloud.hashicorp.com\/products\/terraform)\nAPI tokens dynamically for Organizations, Teams, and Users.\n\nThis page will show a quick start for this backend. For detailed documentation\non every path, use `vault path-help` after mounting the backend.\n\n~> **Terraform Enterprise Support:** this secret engine supports both Terraform\nCloud ([app.terraform.io](https:\/\/app.terraform.io\/session)) as well as on-prem\nTerraform Enterprise. Any version requirements will be documented alongside the\nfeatures that require them, if any.\n\n## Quick start\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1.  Enable the Terraform Cloud secrets engine:\n\n    ```shell-session\n    $ vault secrets enable terraform\n    Success! Enabled the terraform cloud secrets engine at: terraform\/\n    ```\n\n    By default, the secrets engine will mount at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n2.  Configure Vault to connect and authenticate to Terraform Cloud:\n\n    ```shell-session\n    $ vault write terraform\/config \\\n        token=Vhz7652ba4c-0f6e-8e75-5724-5e083d72cfe4\n    Success! 
Data written to: terraform\/config\n    ```\n\n    See [Terraform Cloud's documentation on API\n    tokens](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens)\n    to determine the appropriate API token for use with the secret engine. In\n    order to perform all operations, a User API token is recommended.\n\n3.  Configure a role that maps a name in Vault to a Terraform Cloud User. At\n    this time the Terraform Cloud API does not allow dynamic user generation. As\n    a result this secret engine creates dynamic API tokens for an existing user,\n    and manages the lifecycle of that API token. You will need to know the User\n    ID in order to generate User API tokens for that user. You can use the\n    Terraform Cloud [Account\n    API](\/terraform\/cloud-docs\/api-docs\/account) to find the\n    desired User ID.\n\n    ```shell-session\n    $ vault write terraform\/role\/my-role user_id=user-12345abcde\n    Success! Data written to: terraform\/role\/my-role\n    ```\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\nGenerate a new credential by reading from the `\/creds` endpoint with the name\nof the role:\n\n```shell-session\n$ vault read terraform\/creds\/my-role\nKey                Value\n---                -----\nlease_id           terraform\/creds\/my-user\/A_LEASE_ID_PdvmJjACTtKrY2I\nlease_duration     180s\nlease_renewable    true\ntoken              TJFDSIFDSKFEKZX.FKFKA.akjlfdiouajlkdakadfiowe\ntoken_id           at-123acbdfask\n```\n\n## Organization, team, and user roles\n\nTerraform Cloud supports three distinct types of API tokens: Organizations,\nTeams, and Users. Each token type has distinct access levels and generation\nworkflows. 
A given Vault role can manage any one of the three types at a time,\nhowever there are important differences to be aware of.\n\n### Organization and team roles\n\nThe Terraform Cloud API limits both Organization and Team roles to **one active\ntoken at any given time**. Generating a new Organization or Team API token by\nreading the credentials in Vault or otherwise generating them on\n[app.terraform.io](https:\/\/app.terraform.io\/session) will effectively revoke **any**\nexisting API token for that Organization or Team.\n\nDue to this behavior, Organization and Team API tokens created by Vault will be\nstored and returned on future requests, until the credentials get rotated. This\nis to prevent unintentional revocation of tokens that are currently in-use.\n\nBelow is an example of creating a Vault role to manage an Organization\nAPI token and rotating the token:\n\n```shell-session\n$ vault write terraform\/role\/testing organization=\"${TF_ORGANIZATION}\"\nSuccess! Data written to: terraform\/role\/testing\n\n$ vault write -f terraform\/rotate-role\/testing\nSuccess! Data written to: terraform\/rotate-role\/testing\n```\n\nThe API token is retrieved by reading the credentials for the role:\n\n```\n$ vault read terraform\/creds\/testing\n\nKey             Value\n---             -----\norganization    hashicorp-vault-testing\nrole            testing\ntoken           <example token>\ntoken_id        at-fqvtdTQ5kQWcjUfG\n```\n\n### User roles\n\nTraditionally, Vault secret engines create dynamic users and dynamic credentials\nalong with them. At the time of writing, the Terraform Cloud API does not allow\nfor creating dynamic users. Instead, the Terraform Cloud secret engine creates\ndynamic User API tokens by configuring a Vault role to manage an existing\nTerraform Cloud user. 
The lifecycle of these tokens is managed by Vault and\nwill auto expire according to the configured TTL and max TTL of the Vault\nrole.\n\nBelow is an example of creating a Vault role to manage User API tokens:\n\n```shell-session\n$ vault write terraform\/role\/user-testing user_id=\"${TF_USER_ID}\"\nSuccess! Data written to: terraform\/role\/user-testing\n```\n\nThe API token is retrieved by reading the credentials for the role:\n\n```\n$ vault read terraform\/creds\/user-testing\n\nKey             Value\n---             -----\nrole            user-testing\ntoken           <example token>\ntoken_id        at-fqvtdTQ5kQWcjUfG\n```\n\nPlease see the [Terraform Cloud API\nToken documentation for more\ninformation](\/terraform\/cloud-docs\/users-teams-organizations\/api-tokens).\n\n## Tutorial\n\nRefer to [Terraform Cloud Secrets\nEngine](\/vault\/tutorials\/secrets-management\/terraform-secrets-engine)\nfor a step-by-step tutorial.\n\n## API\n\nThe Terraform Cloud secrets engine has a full HTTP API. 
Please see the\n[Terraform Cloud secrets engine API](\/vault\/api-docs\/secret\/terraform) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title  Terraform Cloud Secret Backend description  The Terraform Cloud secret backend for Vault generates tokens for Terraform Cloud dynamically         Terraform Cloud secret backend  Name   Terraform Cloud   The Terraform Cloud secret backend for Vault generates  Terraform Cloud  https   cloud hashicorp com products terraform  API tokens dynamically for Organizations  Teams  and Users   This page will show a quick start for this backend  For detailed documentation on every path  use  vault path help  after mounting the backend        Terraform Enterprise Support    this secret engine supports both Terraform Cloud   app terraform io  https   app terraform io session   as well as on prem Terraform Enterprise  Any version requirements will be documented alongside the features that require them  if any      Quick start  Most secrets engines must be configured in advance before they can perform their functions  These steps are usually completed by an operator or configuration management tool   1   Enable the Terraform Cloud secrets engine          shell session       vault secrets enable terraform     Success  Enabled the terraform cloud secrets engine at  terraform               By default  the secrets engine will mount at the name of the engine  To     enable the secrets engine at a different path  use the   path  argument   2   Configure Vault to connect and authenticate to Terraform Cloud          shell session       vault write terraform config           token Vhz7652ba4c 0f6e 8e75 5724 5e083d72cfe4     Success  Data written to  terraform config              See  Terraform Cloud s documentation on API     tokens   terraform cloud docs users teams organizations api tokens      to determine the appropriate API token for use with the secret engine  In     order to perform all operations  a User API 
token is recommended   3   Configure a role that maps a name in Vault to a Terraform Cloud User  At     this time the Terraform Cloud API does not allow dynamic user generation  As     a result this secret engine creates dynamic API tokens for an existing user      and manages the lifecycle of that API token  You will need to know the User     ID in order to generate User API tokens for that user  You can use the     Terraform Cloud  Account     API   terraform cloud docs api docs account  to find the     desired User ID          shell session       vault write terraform role my role user id user 12345abcde     Success  Data written to  terraform role my role             Usage  After the secrets engine is configured and a user machine has a Vault token with the proper permission  it can generate credentials   Generate a new credential by reading from the   creds  endpoint with the name of the role      shell session   vault read terraform creds my role Key                Value                          lease id           terraform creds my user A LEASE ID PdvmJjACTtKrY2I lease duration     180s lease renewable    true token              TJFDSIFDSKFEKZX FKFKA akjlfdiouajlkdakadfiowe token id           at 123acbdfask         Organization  team  and user roles  Terraform Cloud supports three distinct types of API tokens  Organizations  Teams  and Users  Each token type has distinct access levels and generation workflows  A given Vault role can manage any one of the three types at a time  however there are important differences to be aware of       Organization and team roles  The Terraform Cloud API limits both Organization and Team roles to   one active token at any given time    Generating a new Organization or Team API token by reading the credentials in Vault or otherwise generating them on  app terraform io  https   app terraform io session  will effectively revoke   any   existing API token for that Organization or Team   Due to this behavior  Organization and 
Team API tokens created by Vault will be stored and returned on future requests  until the credentials get rotated  This is to prevent unintentional revocation of tokens that are currently in use   Below is an example of creating a Vault role to manage an Organization API token and rotating the token      shell session   vault write terraform role testing organization    TF ORGANIZATION   Success  Data written to  terraform role testing    vault write  f terraform rotate role testing Success  Data written to  terraform rotate role testing      The API token is retrieved by reading the credentials for the role         vault read terraform creds testing  Key             Value                       organization    hashicorp vault testing role            testing token            example token  token id        at fqvtdTQ5kQWcjUfG          User roles  Traditionally  Vault secret engines create dynamic users and dynamic credentials along with them  At the time of writing  the Terraform Cloud API does not allow for creating dynamic users  Instead  the Terraform Cloud secret engine creates dynamic User API tokens by configuring a Vault role to manage an existing Terraform Cloud user  The lifecycle of these tokens is managed by Vault and will auto expire according to the configured TTL and max TTL of the Vault role   Below is an example of creating a Vault role to manage manage User API tokens      shell session   vault write terraform role user testing user id    TF USER ID   Success  Data written to  terraform role user testing      The API token is retrieved by reading the credentials for the role         vault read terraform creds user testing  Key             Value                       role            user testing token            example token  token id        at fqvtdTQ5kQWcjUfG      Please see the  Terraform Cloud API Token documentation for more information   terraform cloud docs users teams organizations api tokens       Tutorial  Refer to  Terraform Cloud Secrets 
Engine   vault tutorials secrets management terraform secrets engine  for a step by step tutorial      API  The Terraform Cloud secrets engine has a full HTTP API  Please see the  Terraform Cloud secrets engine API   vault api docs secret terraform  for more details "}
{"questions":"vault RabbitMQ secrets engine The RabbitMQ secrets engine for Vault generates user credentials to access RabbitMQ page title RabbitMQ Secrets Engines layout docs","answers":"---\nlayout: docs\npage_title: RabbitMQ - Secrets Engines\ndescription: >-\n  The RabbitMQ secrets engine for Vault generates user credentials to access\n  RabbitMQ.\n---\n\n# RabbitMQ secrets engine\n\nThe RabbitMQ secrets engine generates user credentials dynamically based on\nconfigured permissions and virtual hosts. This means that services that need to\naccess a virtual host no longer need to hardcode credentials.\n\nWith every service accessing the messaging queue with unique credentials,\nauditing is much easier when questionable data access is discovered. Easily\ntrack issues down to a specific instance of a service based on the RabbitMQ\nusername.\n\nVault makes use of both its own internal revocation system and the deletion of\nRabbitMQ users when creating RabbitMQ users to ensure that users become\ninvalid within a reasonable time of the lease expiring.\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1.  Enable the RabbitMQ secrets engine:\n\n    ```text\n    $ vault secrets enable rabbitmq\n    Success! Enabled the rabbitmq secrets engine at: rabbitmq\/\n    ```\n\n    By default, the secrets engine will mount at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Configure the credentials that Vault uses to communicate with RabbitMQ to\n    generate credentials:\n\n    ```text\n    $ vault write rabbitmq\/config\/connection \\\n        connection_uri=\"http:\/\/localhost:15672\" \\\n        username=\"admin\" \\\n        password=\"password\"\n    Success! 
Data written to: rabbitmq\/config\/connection\n    ```\n\n    It is important that the Vault user have the administrator privilege to\n    manage users.\n\n1.  Configure a role that maps a name in Vault to virtual host permissions:\n\n    ```text\n    $ vault write rabbitmq\/roles\/my-role \\\n        vhosts='{\"\/\":{\"write\": \".*\", \"read\": \".*\"}}'\n    Success! Data written to: rabbitmq\/roles\/my-role\n    ```\n\n    By writing to the `roles\/my-role` path we are defining the `my-role` role.\n    This role will be created by evaluating the given `vhosts`, `vhost_topics`\n    and `tags` statements. By default, no tags and no virtual host or topic\n    permissions are assigned to a role. If no topic permissions are defined\n    and the default authorisation backend is used, publishing to a topic\n    exchange or consuming from a topic is always authorised. You can read\n    more about [RabbitMQ management tags][rmq-perms]\n    and [RabbitMQ topic authorization][rmq-topics].\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\n1.  Generate a new credential by reading from the `\/creds` endpoint with the name\n    of the role:\n\n    ```text\n    $ vault read rabbitmq\/creds\/my-role\n    Key                Value\n    ---                -----\n    lease_id           rabbitmq\/creds\/my-role\/I39Hu8XXOombof4wiK5bKMn9\n    lease_duration     768h\n    lease_renewable    true\n    password           3yNDBikgQvrkx2VA2zhq5IdSM7IWk1RyMYJr\n    username           root-39669250-3894-8032-c420-3d58483ebfc4\n    ```\n\n    Using ACLs, it is possible to restrict use of the rabbitmq secrets engine\n    such that trusted operators can manage the role definitions, and both users\n    and applications are restricted in the credentials they are allowed to read.\n\n## API\n\nThe RabbitMQ secrets engine has a full HTTP API. 
Please see the\n[RabbitMQ secrets engine API](\/vault\/api-docs\/secret\/rabbitmq) for more\ndetails.\n\n[rmq-perms]: https:\/\/www.rabbitmq.com\/management.html#permissions\n[rmq-topics]: https:\/\/www.rabbitmq.com\/access-control.html#topic-authorisation","site":"vault"}
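The `vault read rabbitmq/creds/my-role` step above is a plain HTTP call under the hood: `GET /v1/rabbitmq/creds/<role>` with an `X-Vault-Token` header. A minimal stdlib sketch of the same flow; the function names are illustrative (not part of any Vault client library), and the response parsing mirrors the fields shown in the example output:

```python
import json
import os
import urllib.request


def parse_creds(resp: dict) -> tuple:
    # A Vault credential response nests the generated secret under "data",
    # matching the username/password fields shown by `vault read`.
    data = resp["data"]
    return data["username"], data["password"]


def read_rabbitmq_creds(role: str) -> tuple:
    # GET /v1/rabbitmq/creds/<role> -- the documented creds endpoint.
    # Assumes VAULT_ADDR and VAULT_TOKEN are set in the environment.
    addr = os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200")
    req = urllib.request.Request(
        f"{addr}/v1/rabbitmq/creds/{role}",
        headers={"X-Vault-Token": os.environ["VAULT_TOKEN"]},
    )
    with urllib.request.urlopen(req) as r:
        return parse_creds(json.load(r))
```

The returned username/password pair can then be handed to any AMQP client; because each caller gets unique credentials, the RabbitMQ username identifies the service instance in audit trails.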
{"questions":"vault Azure secrets engine page title Azure Secrets Engine The Azure Vault secrets engine dynamically generates Azure layout docs service principals and role assignments","answers":"---\nlayout: docs\npage_title: Azure - Secrets Engine\ndescription: |-\n  The Azure Vault secrets engine dynamically generates Azure\n  service principals and role assignments.\n---\n\n# Azure secrets engine\n\nThe Azure secrets engine dynamically generates Azure service principals along\nwith role and group assignments. Vault roles can be mapped to one or more Azure\nroles, and optionally group assignments, providing a simple, flexible way to\nmanage the permissions granted to generated service principals.\n\nEach service principal is associated with a Vault lease. When the lease expires\n(either during normal revocation or through early revocation), the service\nprincipal is automatically deleted.\n\nIf an existing service principal is specified as part of the role configuration,\na new password will be dynamically generated instead of a new service principal.\nThe password will be deleted when the lease is revoked.\n\n## Setup\n\n<Note>\n\n  You can configure the Azure secrets engine with the Vault API or\n  established environment variables such as `AZURE_CLIENT_ID` or\n  `AZURE_CLIENT_SECRET`. If you use both methods, note that\n  environment variables always take precedence over API values.\n\n<\/Note>\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the Azure secrets engine:\n\n   ```shell\n   $ vault secrets enable azure\n   Success! Enabled the azure secrets engine at: azure\/\n   ```\n\n   By default, the secrets engine will mount at the name of the engine. To\n   enable the secrets engine at a different path, use the `-path` argument.\n\n1. 
Configure the secrets engine with account credentials:\n\n   ```shell\n   $ vault write azure\/config \\\n       subscription_id=$AZURE_SUBSCRIPTION_ID \\\n       tenant_id=$AZURE_TENANT_ID \\\n       client_id=$AZURE_CLIENT_ID \\\n       client_secret=$AZURE_CLIENT_SECRET\n\n   Success! Data written to: azure\/config\n   ```\n\n   If you are running Vault inside an Azure VM with MSI enabled, `client_id` and\n   `client_secret` may be omitted. For more information on authentication, see the [authentication](#authentication) section below.\n\n   In some cases, you cannot set sensitive account credentials in your\n   Vault configuration. For example, your organization may require that all\n   security credentials are short-lived or explicitly tied to a machine identity.\n\n   To provide managed identity security credentials to Vault, we recommend using Vault\n   [plugin workload identity federation](#plugin-workload-identity-federation-wif)\n   (WIF) as shown below.\n\n1.  Alternatively, configure the audience claim value and the Client, Tenant and Subscription IDs for plugin workload identity federation:\n\n   ```text\n   $ vault write azure\/config \\\n       subscription_id=$AZURE_SUBSCRIPTION_ID \\\n       tenant_id=$AZURE_TENANT_ID \\\n       client_id=$AZURE_CLIENT_ID \\\n       identity_token_audience=$TOKEN_AUDIENCE\n   ```\n\n   The Vault identity token provider signs the plugin identity token JWT internally.\n   If a trust relationship exists between Vault and Azure through WIF, the secrets\n   engine can exchange the Vault identity token for a federated access token.\n\n   To configure a trusted relationship between Vault and Azure:\n\n     - You must configure the [identity token issuer backend](\/vault\/api-docs\/secret\/identity\/tokens#configure-the-identity-tokens-backend)\n       for Vault.\n     - Azure must have a\n       [federated identity 
credential](https:\/\/learn.microsoft.com\/en-us\/entra\/workload-id\/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp#configure-a-federated-identity-credential-on-an-app)\n       configured with information about the fully qualified and network-reachable\n       issuer URL for the Vault plugin\n       [identity token provider](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-well-known-configurations).\n\n   Establishing a trusted relationship between Vault and Azure ensures that Azure\n   can fetch JWKS\n   [public keys](\/vault\/api-docs\/secret\/identity\/tokens#read-active-public-keys)\n   and verify the plugin identity token signature.\n\n1. Configure a role. A role may be set up with either an existing service principal, or\n   a set of Azure roles that will be assigned to a dynamically created service principal.\n\nTo configure a role called \"my-role\" with an existing service principal:\n\n```shell-session\n$ vault write azure\/roles\/my-role \\\n    application_object_id=<existing_app_obj_id> \\\n    ttl=1h\n```\n\nAlternatively, to configure the role to create a new service principal with Azure roles:\n\n```shell-session\n$ vault write azure\/roles\/my-role ttl=1h azure_roles=-<<EOF\n    [\n        {\n            \"role_name\": \"Contributor\",\n            \"scope\":  \"\/subscriptions\/<uuid>\/resourceGroups\/Website\"\n        }\n    ]\nEOF\n```\n\nRoles may also have their own TTL configuration that is separate from the mount's\nTTL. For more information on roles see the [roles](#roles) section below.\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permissions, it can generate credentials. 
The usage pattern is the same\nwhether an existing or dynamic service principal is used.\n\nTo generate a credential using the \"my-role\" role:\n\n```shell-session\n$ vault read azure\/creds\/my-role\n\nKey                Value\n---                -----\nlease_id           azure\/creds\/sp_role\/1afd0969-ad23-73e2-f974-962f7ac1c2b4\nlease_duration     60m\nlease_renewable    true\nclient_id          408bf248-dd4e-4be5-919a-7f6207a307ab\nclient_secret      ad06228a-2db9-4e0a-8a5d-e047c7f32594\n```\n\nThis endpoint generates a renewable set of credentials. The application can log in\nusing the `client_id`\/`client_secret` and will have the access provided by the configured service\nprincipal or the Azure roles set in the \"my-role\" configuration.\n\n## Root credential rotation\n\nIf the mount is configured with credentials directly, the credential's key may be\nrotated to a Vault-generated value that is not accessible by the operator.\nThis will ensure that only Vault is able to access the \"root\" user that Vault uses to\nmanipulate dynamic & static credentials.\n\n```shell-session\nvault write -f azure\/rotate-root\n```\n\nFor more details on this operation, please see the\n[Root Credential Rotation](\/vault\/api-docs\/secret\/azure#rotate-root) API docs.\n\n## Roles\n\nVault roles let you configure either an existing service principal or a set of Azure roles, along with\nrole-specific TTL parameters. If an existing service principal is not provided, the configured Azure\nroles will be assigned to a newly created service principal. The Vault role may optionally specify\nrole-specific `ttl` and\/or `max_ttl` values. When the lease is created, the more restrictive of the\nmount or role TTL value will be used.\n\n### Application object IDs\n\nIf an existing service principal is to be used, the Application Object ID must be set on the Vault role.\nThis ID can be found by inspecting the desired Application with the `az` CLI tool, or via the Azure Portal. 
Note\nthat the Application **Object** ID must be provided, not the Application ID.\n\n### Azure roles\n\nIf dynamic service principals are used, Azure roles must be configured on the Vault role.\nAzure roles are provided as a JSON list, with each element describing an Azure role and scope to be assigned.\nAzure roles may be specified using the `role_name` parameter (\"Owner\"), or `role_id`\n(\"\/subscriptions\/...\/roleDefinitions\/...\").\n`role_id` is the definitive ID that's used during Vault operation; `role_name` is a convenience during\nrole management operations. All roles _must exist_ when the configuration is written or the operation will fail. The role lookup priority is:\n\n1. If `role_id` is provided, it is validated and the corresponding `role_name` updated.\n1. If only `role_name` is provided, a case-insensitive search-by-name is made, succeeding\n   only if _exactly one_ matching role is found. The `role_id` field will be updated with the matching role ID.\n\nThe `scope` must be provided for every role assignment.\n\n### Azure groups\n\nIf dynamic service principals are used, a list of Azure groups may be configured on the Vault role.\nWhen the service principal is created, it will be assigned to these groups. 
Similar to the format used\nfor specifying Azure roles, Azure groups may be referenced by either their `group_name` or `object_id`.\nGroup specification by name must yield a single matching group.\n\nExample of role configuration:\n\n```shell-session\n$ vault write azure\/roles\/my-role \\\n    ttl=1h \\\n    max_ttl=24h \\\n    azure_roles=@az_roles.json \\\n    azure_groups=@az_groups.json\n\n$ cat az_roles.json\n[\n  {\n    \"role_name\": \"Contributor\",\n    \"scope\":  \"\/subscriptions\/<uuid>\/resourceGroups\/Website\"\n  },\n  {\n    \"role_id\": \"\/subscriptions\/<uuid>\/providers\/Microsoft.Authorization\/roleDefinitions\/<uuid>\",\n    \"scope\":  \"\/subscriptions\/<uuid>\"\n  },\n  {\n    \"role_name\": \"This won't matter as it will be overwritten\",\n    \"role_id\": \"\/subscriptions\/<uuid>\/providers\/Microsoft.Authorization\/roleDefinitions\/<uuid>\",\n    \"scope\":  \"\/subscriptions\/<uuid>\/resourceGroups\/Database\"\n  }\n]\n\n$ cat az_groups.json\n[\n  {\n    \"group_name\": \"foo\"\n  },\n  {\n    \"group_name\": \"This won't matter as it will be overwritten\",\n    \"object_id\": \"a6a834a6-36c3-4575-8e2b-05095963d603\"\n  }\n]\n```\n\n### Permanently delete Azure objects\n\nIf dynamic service principals are used, the option to permanently delete the applications and service principals created by Vault may be configured on the Vault role.\nWhen this option is enabled and a lease is expired or revoked, the application and service principal associated with the lease will be [permanently deleted](https:\/\/docs.microsoft.com\/en-us\/graph\/api\/directory-deleteditems-delete) from the Azure Active Directory.\nAs a result, these objects will not count toward the [quota](https:\/\/docs.microsoft.com\/en-us\/azure\/azure-resource-manager\/management\/azure-subscription-service-limits#active-directory-limits) of total resources in an Azure tenant. 
When this option is not enabled\nand a lease is expired or revoked, the application and service principal associated with the lease will be deleted, but not permanently. These objects will be available to restore for 30 days from deletion.\n\nExample of role configuration:\n\n```shell-session\n$ vault write azure\/roles\/my-role permanently_delete=true ttl=1h azure_roles=-<<EOF\n    [\n        {\n            \"role_name\": \"Contributor\",\n            \"scope\":  \"\/subscriptions\/<uuid>\/resourceGroups\/Website\"\n        }\n    ]\nEOF\n```\n\n## Authentication\n\nThe Azure secrets backend must have sufficient permissions to read Azure role information and manage\nservice principals. The authentication parameters can be set in the backend configuration or as environment\nvariables. Environment variables will take precedence. The individual parameters are described in the\n[configuration][config] section of the API docs.\n\nIf the client ID or secret are not present and Vault is running on an Azure VM, Vault will attempt to use\n[Managed Service Identity (MSI)](https:\/\/docs.microsoft.com\/en-us\/azure\/active-directory\/managed-service-identity\/overview)\nto access Azure. Note that when MSI is used, tenant and subscription IDs must still be explicitly provided\nin the configuration or environment variables.\n\n### MS Graph API permissions\n\nThe following MS Graph [API permissions](https:\/\/learn.microsoft.com\/en-us\/azure\/active-directory\/develop\/permissions-consent-overview#types-of-permissions)\nmust be assigned to the service principal provided to Vault for managing Azure. 
The permissions\ndiffer depending on whether you're using [dynamic or existing](#choosing-between-dynamic-or-existing-service-principals)\nservice principals.\n\n#### Dynamic Service Principals\n\n| Permission Name               | Type        |\n| ----------------------------- | ----------- |\n| Application.ReadWrite.OwnedBy | Application |\n| GroupMember.ReadWrite.All     | Application |\n\n~> **Note**: If you plan to use the [rotate root](\/vault\/api-docs\/secret\/azure#rotate-root)\ncredentials API, you'll need to change `Application.ReadWrite.OwnedBy` to `Application.ReadWrite.All`.\n\n#### Existing Service Principals\n\n| Permission Name               | Type        |\n| ----------------------------- | ----------- |\n| Application.ReadWrite.All     | Application |\n| GroupMember.ReadWrite.All     | Application |\n\n### Role assignments\n\nThe following Azure [role assignments](https:\/\/learn.microsoft.com\/en-us\/azure\/role-based-access-control\/role-assignments-cli)\nmust be granted in order for the secrets engine to manage role assignments for service\nprincipals it creates.\n\n| Role                                           | Scope        | Security Principal                          |\n|------------------------------------------------| ------------ | ------------------------------------------- |\n| [User Access Administrator][user_access_admin] | Subscription | Service Principal ID given in configuration |\n\n## Plugin Workload Identity Federation (WIF)\n\n<EnterpriseAlert product=\"vault\" \/>\n\nThe Azure secrets engine supports the plugin WIF workflow, and has a source of identity called\na plugin identity token. 
The plugin identity token is a JWT that is signed internally by Vault's\n[plugin identity token issuer](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).\n\nIf there is a trust relationship configured between Vault and Azure through\n[workload identity federation](https:\/\/learn.microsoft.com\/en-us\/entra\/workload-id\/workload-identity-federation),\nthe secrets engine can exchange its identity token for short-lived access tokens needed to\nperform its actions.\n\nExchanging identity tokens for access tokens lets the Azure secrets engine\noperate without configuring explicit access to sensitive client credentials.\n\nTo configure the secrets engine to use plugin WIF:\n\n1. Ensure that Vault [openid-configuration](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-openid-configuration)\n   and [public JWKS](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-identity-token-issuer-s-public-jwks)\n   APIs are network-reachable by Azure. We recommend using an API proxy or gateway\n   if you need to limit Vault API exposure.\n\n1. Configure a\n   [federated identity credential](https:\/\/learn.microsoft.com\/en-us\/entra\/workload-id\/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp#configure-a-federated-identity-credential-on-an-app)\n   on a dedicated application registration in Azure to establish a trust relationship with Vault.\n   1. The issuer URL **must** point at your [Vault plugin identity token issuer](\/vault\/api-docs\/secret\/identity\/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the\n   `\/.well-known\/openid-configuration` suffix removed. For example:\n   `https:\/\/host:port\/v1\/identity\/oidc\/plugins`.\n   1. 
The subject identifier **must** match the unique `sub` claim issued by plugin identity tokens.\n   The subject identifier should have the form `plugin-identity:<NAMESPACE>:secret:<AZURE_MOUNT_ACCESSOR>`.\n   1. The audience should be under 600 characters. The default value in Azure is `api:\/\/AzureADTokenExchange`.\n\n1. Configure the Azure secrets engine with the subscription, client and tenant IDs and the OIDC audience value.\n\n   ```shell-session\n   $ vault write azure\/config \\\n     subscription_id=$AZURE_SUBSCRIPTION_ID \\\n     tenant_id=$AZURE_TENANT_ID \\\n     client_id=$AZURE_CLIENT_ID \\\n     identity_token_audience=\"vault.example\/v1\/identity\/oidc\/plugins\"\n   ```\n\nYour secrets engine can now use plugin WIF for its configuration credentials.\nBy default, WIF [credentials](https:\/\/learn.microsoft.com\/en-us\/entra\/identity-platform\/access-tokens#token-lifetime)\nhave a time-to-live of 1 hour and automatically refresh when they expire.\n\nPlease see the [API documentation](\/vault\/api-docs\/secret\/azure#configure-access)\nfor more details on the fields associated with plugin WIF.\n\n## Choosing between dynamic or existing service principals\n\nDynamic service principals are preferred if the desired Azure resources can be provided\nvia the RBAC system and Azure roles defined in the Vault role. This form of credential is\ncompletely decoupled from any other clients, is not subject to permission changes after\nissuance, and offers the best audit granularity.\n\nAccess to some Azure services cannot be provided with the RBAC system, however. In these\ncases, an existing service principal can be set up with the necessary access, and Vault\ncan create new passwords for this service principal. Any changes to the service principal\npermissions affect all clients. 
Furthermore, Azure does not provide any logging with\nregard to _which_ credential was used for an operation.\n\nAn important limitation when using an existing service principal is that Azure limits the\nnumber of passwords for a single Application. This limit is based on Application object\nsize and isn't firmly specified, but in practice hundreds of passwords can be issued per\nApplication. An error will be returned once this limit is reached. This limit can be\nmanaged by reducing the role TTL, or by creating another Vault role against a different\nAzure service principal configured with the same permissions.\n\n## Additional notes\n\n- **If a referenced Azure role doesn't exist, a credential will not be generated.**\n  Service principals will only be generated if _all_ role assignments are successful.\n  This is important to note if you're using custom Azure role definitions that might be deleted\n  at some point.\n\n- Azure roles are assigned only once, when the service principal is created. If the\n  Vault role changes the list of Azure roles, these changes will not be reflected in\n  any existing service principal, even after token renewal.\n\n- The time required to issue a credential is roughly proportional to the number of\n  Azure roles that must be assigned. This operation may take some time (tens of seconds\n  are common, and over a minute has been seen).\n\n- Service principal credential timeouts are not used. Vault will revoke access by\n  deleting the service principal.\n\n- The Application Name for dynamic service principals will be prefixed with `vault-`. Similarly,\n  the `keyId` of any passwords added to an existing service principal will begin with\n  `ffffff`. 
These may be used to search for Vault-created credentials using the `az` tool\n  or Portal.\n\n## Azure debug logs\n\nThe Azure secrets engine plugin supports debug logging, which includes additional information\nabout requests and responses from the Azure API.\n\nTo enable the Azure debug logs, set the `AZURE_SDK_GO_LOGGING` environment variable to `all` on your Vault\nserver:\n\n```shell\nAZURE_SDK_GO_LOGGING=all\n```\n\n## Help & support\n\nThe Azure secrets engine is written as an external Vault plugin and\nthus exists outside the main Vault repository. It is automatically bundled with\nVault releases, but the code is managed separately.\n\nPlease report issues, add feature requests, and submit contributions to the\n[vault-plugin-secrets-azure repo][repo] on GitHub.\n\n## Tutorial\n\nRefer to the [Azure Secrets\nEngine](\/vault\/tutorials\/secrets-management\/azure-secrets) tutorial\nto learn how to use the Azure secrets engine to dynamically generate Azure credentials.\n\n## API\n\nThe Azure secrets engine has a full HTTP API. 
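The `client_id`/`client_secret` pair returned by `azure/creds/<role>` is an ordinary service principal credential, so an application exchanges it for an access token via the standard Microsoft identity platform client-credentials flow. A hedged stdlib sketch (the helper names are illustrative, not part of any SDK; the ARM scope `https://management.azure.com/.default` is the conventional v2.0 scope for Azure Resource Manager):

```python
import json
import urllib.parse
import urllib.request


def token_request(tenant_id: str, client_id: str, client_secret: str,
                  scope: str = "https://management.azure.com/.default"):
    # Build the OAuth2 client-credentials request for the Microsoft
    # identity platform v2.0 token endpoint.
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode()
    return url, body


def fetch_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    # POST the form body and pull the bearer token out of the JSON reply.
    url, body = token_request(tenant_id, client_id, client_secret)
    with urllib.request.urlopen(urllib.request.Request(url, data=body)) as r:
        return json.load(r)["access_token"]
```

Because the lease deletes the service principal (or its password) on expiry, a token fetched this way simply stops being renewable once Vault revokes the underlying credential.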
Please see the [Azure secrets engine API docs][api]\nfor more details.\n\n[api]: \/vault\/api-docs\/secret\/azure\n[config]: \/vault\/api-docs\/secret\/azure#configure-access\n[repo]: https:\/\/github.com\/hashicorp\/vault-plugin-secrets-azure\n[user_access_admin]: https:\/\/learn.microsoft.com\/en-us\/azure\/role-based-access-control\/built-in-roles#user-access-administrator","site":"vault","answers_cleaned":"    layout  docs page title  Azure   Secrets Engine description       The Azure Vault secrets engine dynamically generates Azure   service principals and role assignments         Azure secrets engine  The Azure secrets engine dynamically generates Azure service principals along with role and group assignments  Vault roles can be mapped to one or more Azure roles  and optionally group assignments  providing a simple  flexible way to manage the permissions granted to generated service principals   Each service principal is associated with a Vault lease  When the lease expires  either during normal revocation or through early revocation   the service principal is automatically deleted   If an existing service principal is specified as part of the role configuration  a new password will be dynamically generated instead of a new service principal  The password will be deleted when the lease is revoked      Setup   Note     You can configure the Azure secrets engine with the Vault API or   established environment variables such as  AZURE CLIENT ID  or    AZURE CLIENT SECRET   If you use both methods  note that   environment variables always take precedence over API values     Note   Most secrets engines must be configured in advance before they can perform their functions  These steps are usually completed by an operator or configuration management tool   1  Enable the Azure secrets engine         shell      vault secrets enable azure    Success  Enabled the azure secrets engine at  azure             By default  the secrets engine will mount at the name of the 
engine. To enable the secrets engine at a different path, use the `-path` argument.

1. Configure the secrets engine with account credentials:

   ```shell-session
   $ vault write azure/config \
       subscription_id=$AZURE_SUBSCRIPTION_ID \
       tenant_id=$AZURE_TENANT_ID \
       client_id=$AZURE_CLIENT_ID \
       client_secret=$AZURE_CLIENT_SECRET

   Success! Data written to: azure/config
   ```

   If you are running Vault inside an Azure VM with MSI enabled, `client_id` and
   `client_secret` may be omitted. For more information on authentication, see the
   [authentication](#authentication) section below.

   In some cases, you cannot set sensitive account credentials in your
   Vault configuration. For example, your organization may require that all
   security credentials are short-lived or explicitly tied to a machine identity.

   To provide managed identity security credentials to Vault, we recommend using Vault
   [plugin workload identity federation](#plugin-workload-identity-federation-wif)
   (WIF) as shown below.

1. Alternatively, configure the audience claim value and the Client, Tenant and
   Subscription IDs for plugin workload identity federation:

   ```text
   $ vault write azure/config \
       subscription_id=$AZURE_SUBSCRIPTION_ID \
       tenant_id=$AZURE_TENANT_ID \
       client_id=$AZURE_CLIENT_ID \
       identity_token_audience=$TOKEN_AUDIENCE
   ```

   The Vault identity token provider signs the plugin identity token JWT internally.
   If a trust relationship exists between Vault and Azure through WIF, the secrets
   engine can exchange the Vault identity token for a federated access token.

   To configure a trusted relationship between Vault and Azure:

   - You must configure the [identity token issuer backend](/vault/api-docs/secret/identity/tokens#configure-the-identity-tokens-backend)
     for Vault.
   - Azure must have a
     [federated identity credential](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp#configure-a-federated-identity-credential-on-an-app)
     configured with information about the fully qualified and network-reachable
     issuer URL for the Vault plugin
     [identity token provider](/vault/api-docs/secret/identity/tokens#read-plugin-identity-well-known-configurations).

   Establishing a trusted relationship between Vault and Azure ensures that Azure
   can fetch JWKS [public keys](/vault/api-docs/secret/identity/tokens#read-active-public-keys)
   and verify the plugin identity token signature.

1. Configure a role. A role may be set up with either an existing service principal, or
   a set of Azure roles that will be assigned to a dynamically created service principal.

   To configure a role called "my-role" with an existing service principal:

   ```shell-session
   $ vault write azure/roles/my-role \
       application_object_id=<existing_app_obj_id> \
       ttl=1h
   ```

   Alternatively, to configure the role to create a new service principal with Azure roles:

   ```shell-session
   $ vault write azure/roles/my-role ttl=1h azure_roles=-<<EOF
       [
         {
           "role_name": "Contributor",
           "scope": "/subscriptions/<uuid>/resourceGroups/Website"
         }
       ]
   EOF
   ```

   Roles may also have their own TTL configuration that is separate from the mount's
   TTL. For more information on roles see the [roles](#roles) section below.

## Usage

After the secrets engine is configured and a user/machine has a Vault token with
the proper permissions, it can generate credentials. The usage pattern is the same
whether an existing or dynamic service principal is used.

To generate a credential using the "my-role" role:

```shell-session
$ vault read azure/creds/my-role

Key                Value
---                -----
lease_id           azure/creds/sp_role/1afd0969-ad23-73e2-f974-962f7ac1c2b4
lease_duration     60m
lease_renewable    true
client_id          408bf248-dd4e-4be5-919a-7f6207a307ab
client_secret      ad06228a-2db9-4e0a-8a5d-e047c7f32594
```

This endpoint generates a renewable set of credentials. The application can login
using the `client_id`/`client_secret` and will have access provided by the configured
service principal or the Azure roles set in the "my-role" configuration.

## Root credential rotation

If the mount is configured with credentials directly, the credential's key may be
rotated to a Vault-generated value that is not accessible by the operator. This will
ensure that only Vault is able to access the "root" user that Vault uses to
manipulate dynamic & static credentials.

```shell-session
$ vault write -f azure/rotate-root
```

For more details on this operation, please see the
[Root Credential Rotation](/vault/api-docs/secret/azure#rotate-root) API docs.

## Roles

Vault roles let you configure either an existing service principal or a set of Azure
roles, along with role-specific TTL parameters. If an existing service principal is
not provided, the configured Azure roles will be assigned to a newly created service
principal. The Vault role may optionally specify role-specific `ttl` and/or `max_ttl`
values. When the lease is created, the more restrictive of the mount or role TTL
value will be used.

### Application object IDs

If an existing service principal is to be used, the Application Object ID must be
set on the Vault role. This ID can be found by inspecting the desired Application
with the `az` CLI tool, or via the Azure Portal. Note that the Application
**Object** ID must be provided, not the Application ID.

### Azure roles

If dynamic service principals are used, Azure roles must be configured on the Vault
role. Azure roles are provided as a JSON list, with each element describing an Azure
role and scope to be assigned. Azure roles may be specified using the `role_name`
parameter ("Owner"), or `role_id` ("/subscriptions/.../roleDefinitions/...").
`role_id` is the definitive ID that's used during Vault operation.
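The TTL selection rule above ("the more restrictive of the mount or role TTL value will be used") can be sketched in Python. `parse_ttl` and `effective_ttl` are illustrative helpers, not part of the Vault plugin:

```python
from typing import Optional

def parse_ttl(s: str) -> int:
    """Parse a simple duration string like '60m', '1h', or '24h' into seconds."""
    units = {"s": 1, "m": 60, "h": 3600}
    return int(s[:-1]) * units[s[-1]]

def effective_ttl(mount_ttl: str, role_ttl: Optional[str]) -> int:
    """Return the effective lease TTL in seconds: the more restrictive
    (smaller) of the mount and role TTLs. A role without its own TTL
    falls back to the mount value."""
    if role_ttl is None:
        return parse_ttl(mount_ttl)
    return min(parse_ttl(mount_ttl), parse_ttl(role_ttl))

print(effective_ttl("768h", "1h"))  # the role's 1h is more restrictive -> 3600
```

The same rule applies to `max_ttl`: a role-level value can only shorten, never extend, what the mount allows.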
`role_name` is a convenience during role management operations. All roles _must
exist_ when the configuration is written or the operation will fail. The role
lookup priority is:

1. If `role_id` is provided, it is validated and the corresponding `role_name` updated.
1. If only `role_name` is provided, a case-insensitive search by name is made, succeeding
   only if _exactly one_ matching role is found. The `role_id` field will be updated
   with the matching role ID.

The `scope` must be provided for every role assignment.

### Azure groups

If dynamic service principals are used, a list of Azure groups may be configured on
the Vault role. When the service principal is created, it will be assigned to these
groups. Similar to the format used for specifying Azure roles, Azure groups may be
referenced by either their `group_name` or `object_id`. Group specification by name
must yield a single matching group.

Example of role configuration:

```shell-session
$ vault write azure/roles/my-role \
    ttl=1h \
    max_ttl=24h \
    azure_roles=@az_roles.json \
    azure_groups=@az_groups.json

$ cat az_roles.json
[
  {
    "role_name": "Contributor",
    "scope": "/subscriptions/<uuid>/resourceGroups/Website"
  },
  {
    "role_id": "/subscriptions/<uuid>/providers/Microsoft.Authorization/roleDefinitions/<uuid>",
    "scope": "/subscriptions/<uuid>"
  },
  {
    "role_name": "This won't matter as it will be overwritten",
    "role_id": "/subscriptions/<uuid>/providers/Microsoft.Authorization/roleDefinitions/<uuid>",
    "scope": "/subscriptions/<uuid>/resourceGroups/Database"
  }
]

$ cat az_groups.json
[
  {
    "group_name": "foo"
  },
  {
    "group_name": "This won't matter as it will be overwritten",
    "object_id": "a6a834a6-36c3-4575-8e2b-05095963d603"
  }
]
```

### Permanently delete Azure objects

If dynamic service principals are used, the option to permanently delete the
applications and service principals created by Vault may be configured on the Vault role.
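The role lookup priority described above can be sketched as follows. `resolve_azure_role` and the sample role map are hypothetical illustrations, not the plugin's actual implementation:

```python
def resolve_azure_role(known_roles: dict, role_id=None, role_name=None):
    """Mimic the documented lookup priority.

    known_roles maps Azure role definition IDs to role names.
    1. If role_id is given, validate it and return the canonical name.
    2. Otherwise search by name case-insensitively; exactly one match must exist.
    """
    if role_id is not None:
        if role_id not in known_roles:
            raise ValueError(f"role_id {role_id!r} does not exist")
        return role_id, known_roles[role_id]
    matches = [(rid, name) for rid, name in known_roles.items()
               if name.lower() == role_name.lower()]
    if len(matches) != 1:
        raise ValueError(
            f"expected exactly one role named {role_name!r}, found {len(matches)}")
    return matches[0]

# Sample role map with illustrative IDs.
roles = {
    "/subscriptions/s1/providers/Microsoft.Authorization/roleDefinitions/abc": "Contributor",
    "/subscriptions/s1/providers/Microsoft.Authorization/roleDefinitions/def": "Reader",
}
print(resolve_azure_role(roles, role_name="contributor"))
```

Note how a duplicate role name (or a misspelled one) raises an error rather than guessing, matching the "exactly one matching role" requirement.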
When this option is enabled and a lease is expired or revoked, the application and
service principal associated with the lease will be
[permanently deleted](https://docs.microsoft.com/en-us/graph/api/directory-deleteditems-delete)
from the Azure Active Directory. As a result, these objects will not count toward the
[quota](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#active-directory-limits)
of total resources in an Azure tenant. When this option is not enabled and a lease
is expired or revoked, the application and service principal associated with the
lease will be deleted, but not permanently. These objects will be available to
restore for 30 days from deletion.

Example of role configuration:

```shell-session
$ vault write azure/roles/my-role permanently_delete=true ttl=1h azure_roles=-<<EOF
    [
      {
        "role_name": "Contributor",
        "scope": "/subscriptions/<uuid>/resourceGroups/Website"
      }
    ]
EOF
```

## Authentication

The Azure secrets backend must have sufficient permissions to read Azure role
information and manage service principals. The authentication parameters can be set
in the backend configuration or as environment variables. Environment variables
will take precedence. The individual parameters are described in the
[configuration][config] section of the API docs.

If the client ID or secret are not present and Vault is running on an Azure VM,
Vault will attempt to use
[Managed Service Identity (MSI)](https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/overview)
to access Azure. Note that when MSI is used, tenant and subscription IDs must still
be explicitly provided in the configuration or environment variables.

### MS Graph API permissions

The following MS Graph
[API permissions](https://learn.microsoft.com/en-us/azure/active-directory/develop/permissions-consent-overview#types-of-permissions)
must be assigned to the service principal provided to Vault for managing Azure. The
permissions differ depending on if you're using
[dynamic or existing](#choosing-between-dynamic-or-existing-service-principals)
service principals.

#### Dynamic service principals

| Permission Name               | Type        |
| ----------------------------- | ----------- |
| Application.ReadWrite.OwnedBy | Application |
| GroupMember.ReadWrite.All     | Application |

~> **Note**: If you plan to use the [rotate root](/vault/api-docs/secret/azure#rotate-root)
credentials API, you'll need to change `Application.ReadWrite.OwnedBy` to
`Application.ReadWrite.All`.

#### Existing service principals

| Permission Name           | Type        |
| ------------------------- | ----------- |
| Application.ReadWrite.All | Application |
| GroupMember.ReadWrite.All | Application |

### Role assignments

The following Azure
[role assignments](https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-cli)
must be granted in order for the secrets engine to manage role assignments for
service principals it creates.

| Role                                           | Scope        | Security Principal                          |
| ---------------------------------------------- | ------------ | ------------------------------------------- |
| [User Access Administrator][user_access_admin] | Subscription | Service Principal ID given in configuration |

## Plugin Workload Identity Federation (WIF)

<EnterpriseAlert product="vault" />

The Azure secrets engine supports the plugin WIF workflow, and has a source of
identity called a plugin identity token. The plugin identity token is a JWT that is
signed internally by Vault's
[plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).

If there is a trust relationship configured between Vault and Azure through
[workload identity federation](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation),
the secrets engine can exchange its identity token for short-lived access tokens
needed to perform its actions.

Exchanging identity tokens for access tokens lets the Azure secrets engine operate
without configuring explicit access to sensitive client credentials.

To configure the secrets engine to use plugin WIF:

1. Ensure that Vault's
   [openid-configuration](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-openid-configuration)
   and [public JWKS](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-public-jwks)
   APIs are network-reachable by Azure. We recommend using an API proxy or gateway
   if you need to limit Vault API exposure.

1. Configure a
   [federated identity credential](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp#configure-a-federated-identity-credential-on-an-app)
   on a dedicated application registration in Azure to establish a trust
   relationship with Vault.

   1. The issuer URL **must** point at your
      [Vault plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration)
      with the `/.well-known/openid-configuration` suffix removed. For example:
      `https://host:port/v1/identity/oidc/plugins`.
   1. The subject identifier **must** match the unique `sub` claim issued by plugin
      identity tokens. The subject identifier should have the form
      `plugin-identity:<NAMESPACE>:secret:<AZURE_MOUNT_ACCESSOR>`.
   1. The audience should be under 600 characters. The default value in Azure is
      `api://AzureADTokenExchange`.

1. Configure the Azure secrets engine with the subscription, client and tenant IDs
   and the OIDC audience value.

   ```shell-session
   $ vault write azure/config \
       subscription_id=$AZURE_SUBSCRIPTION_ID \
       tenant_id=$AZURE_TENANT_ID \
       client_id=$AZURE_CLIENT_ID \
       identity_token_audience=vault.example/v1/identity/oidc/plugins
   ```

Your secrets engine can now use plugin WIF for its configuration credentials. By
default, WIF
[credentials](https://learn.microsoft.com/en-us/entra/identity-platform/access-tokens#token-lifetime)
have a time-to-live of 1 hour and automatically refresh when they expire.

Please see the [API documentation](/vault/api-docs/secret/azure#configure-access)
for more details on the fields associated with plugin WIF.

## Choosing between dynamic or existing service principals

Dynamic service principals are preferred if the desired Azure resources can be
provided via the RBAC system and Azure roles defined in the Vault role. This form
of credential is completely decoupled from any other clients, is not subject to
permission changes after issuance, and offers the best audit granularity.

Access to some Azure services cannot be provided with the RBAC system, however. In
these cases, an existing service principal can be set up with the necessary access,
and Vault can create new passwords for this service principal. Any changes to the
service principal permissions affect all clients. Furthermore, Azure does not
provide any logging with regard to _which_ credential was used for an operation.

An important limitation when using an existing service principal is that Azure
limits the number of passwords for a single Application. This limit is based on
Application object size and isn't firmly specified, but in practice hundreds of
passwords can be issued per Application. An error will be returned if the object
size is reached. This limit can be managed by reducing the role TTL, or by creating
another Vault role against a different Azure service principal configured with the
same permissions.

## Additional notes

- If a referenced Azure role doesn't exist, a credential will not be generated.
  Service principals will only be generated if _all_ role assignments are
  successful. This is important to note if you're using custom Azure role
  definitions that might be deleted at some point.
- Azure roles are assigned only once, when the service principal is created. If the
  Vault role changes the list of Azure roles, these changes will not be reflected
  in any existing service principal, even after token renewal.
- The time required to issue a credential is roughly proportional to the number of
  Azure roles that must be assigned. This operation may take some time (10s of
  seconds are common, and over a minute has been seen).
- Service principal credential timeouts are not used. Vault will revoke access by
  deleting the service principal.
- The Application Name for dynamic service principals will be prefixed with
  `vault-`. Similarly, the `keyId` of any passwords added to an existing service
  principal will begin with `ffffff`. These may be used to search for Vault-created
  credentials using the `az` tool or Portal.

## Azure debug logs

The Azure secret engine plugin supports debug logging which includes additional
information about requests and responses from the Azure API.

To enable the Azure debug logs, set the `AZURE_SDK_GO_LOGGING` environment variable
to `all` on your Vault server:

```shell
AZURE_SDK_GO_LOGGING=all
```

## Help & support

The Azure secrets engine is written as an external Vault plugin and thus exists
outside the main Vault repository. It is automatically bundled with Vault releases,
but the code is managed separately.

Please report issues, add feature requests, and submit contributions to the
[vault-plugin-secrets-azure repo][repo] on GitHub.

## Tutorial

Refer to the [Azure Secrets Engine](/vault/tutorials/secrets-management/azure-secrets)
tutorial to learn how to use the Azure secrets engine to dynamically generate Azure
credentials.

## API

The Azure secrets engine has a full HTTP API. Please see the
[Azure secrets engine API docs][api] for more details.

[api]: /vault/api-docs/secret/azure
[config]: /vault/api-docs/secret/azure#configure-access
[repo]: https://github.com/hashicorp/vault-plugin-secrets-azure
[user_access_admin]: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#user-access-administrator
"}
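As a companion to the note above about the `vault-` application-name prefix and `ffffff` password key IDs, here is a minimal sketch of scanning application JSON (e.g. exported with the `az` tool) for Vault-created objects. The record shape follows Microsoft Graph application objects, and the helper name is hypothetical:

```python
def find_vault_artifacts(apps):
    """Return (app_names, key_ids) for objects that match the documented
    Vault naming conventions: 'vault-' displayName prefix for dynamic
    service principals, 'ffffff' keyId prefix for passwords Vault added
    to an existing service principal."""
    names, key_ids = [], []
    for app in apps:
        if app.get("displayName", "").startswith("vault-"):
            names.append(app["displayName"])
        for cred in app.get("passwordCredentials") or []:
            if cred.get("keyId", "").startswith("ffffff"):
                key_ids.append(cred["keyId"])
    return names, key_ids

# Illustrative records; field names mirror Microsoft Graph application objects.
apps = [
    {"displayName": "vault-my-role-1afd0969", "passwordCredentials": []},
    {"displayName": "payroll-app",
     "passwordCredentials": [{"keyId": "ffffff00-0000-4000-8000-000000000001"}]},
]
print(find_vault_artifacts(apps))
```

The same filtering can of course be done server-side with the `az` CLI or the Azure Portal search box; this sketch just shows what the conventions look like in data.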
{"questions":"vault LDAP secrets engine page title LDAP Secrets Engine The LDAP secret engine manages LDAP entry passwords layout docs include x509 sha1 deprecation mdx","answers":"---\nlayout: docs\npage_title: LDAP - Secrets Engine\ndescription: >-\n  The LDAP secret engine manages LDAP entry passwords.\n---\n\n# LDAP secrets engine\n\n@include 'x509-sha1-deprecation.mdx'\n\nThe LDAP secrets engine provides management of LDAP credentials as well as dynamic\ncreation of credentials. It supports integration with implementations of the LDAP\nv3 protocol, including OpenLDAP, Active Directory, and IBM Resource Access Control\nFacility (RACF).\n\nThe secrets engine has three primary features:\n- [Static Credentials](\/vault\/docs\/secrets\/ldap#static-credentials)\n- [Dynamic Credentials](\/vault\/docs\/secrets\/ldap#dynamic-credentials)\n- [Service Account Check-Out](\/vault\/docs\/secrets\/ldap#service-account-check-out)\n\n## Setup\n\n1. Enable the LDAP secret engine:\n\n   ```sh\n   $ vault secrets enable ldap\n   ```\n\n   By default, the secrets engine will mount at the name of the engine. To\n   enable the secrets engine at a different path, use the `-path` argument.\n\n2. Configure the credentials that Vault uses to communicate with LDAP\n   to generate passwords:\n\n   ```sh\n   $ vault write ldap\/config \\\n       binddn=$USERNAME \\\n       bindpass=$PASSWORD \\\n       url=ldaps:\/\/138.91.247.105\n   ```\n\n   Note: it's recommended a dedicated entry management account be created specifically for Vault.\n\n3. 
Rotate the root password so only Vault knows the credentials:\n\n   ```sh\n   $ vault write -f ldap\/rotate-root\n   ```\n\n   Note: it's not possible to retrieve the generated password once rotated by Vault.\n   It's recommended a dedicated entry management account be created specifically for Vault.\n\n### Schemas\n\nThe LDAP Secret Engine supports three different schemas:\n\n- `openldap` (default)\n- `racf`\n- `ad`\n\n#### OpenLDAP\n\nBy default, the LDAP Secret Engine assumes the entry password is stored in `userPassword`.\nThere are many object classes that provide `userPassword` including for example:\n\n- `organization`\n- `organizationalUnit`\n- `organizationalRole`\n- `inetOrgPerson`\n- `person`\n- `posixAccount`\n\n#### Resource access control facility (RACF)\n\nFor managing IBM's Resource Access Control Facility (RACF) security system, the secret\nengine must be configured to use the schema `racf`.\n\nGenerated passwords must be 8 characters or less to support RACF. The length of the\npassword can be configured using a [password policy](\/vault\/docs\/concepts\/password-policies):\n\n```bash\n$ vault write ldap\/config \\\n\tbinddn=$USERNAME \\\n\tbindpass=$PASSWORD \\\n\turl=ldaps:\/\/138.91.247.105 \\\n\tschema=racf \\\n\tpassword_policy=racf_password_policy\n```\n\n#### Active directory (AD)\n\nFor managing Active Directory instances, the secret engine must be configured to use the\nschema `ad`.\n\n```bash\n$ vault write ldap\/config \\\n\tbinddn=$USERNAME \\\n\tbindpass=$PASSWORD \\\n\turl=ldaps:\/\/138.91.247.105 \\\n\tschema=ad\n```\n\n## Static credentials\n\n### Setup\n\n1. Configure a static role that maps a name in Vault to an entry in LDAP.\n   Password rotation settings will be managed by this role.\n\n   ```sh\n   $ vault write ldap\/static-role\/hashicorp \\\n       dn='uid=hashicorp,ou=users,dc=hashicorp,dc=com' \\\n       username='hashicorp' \\\n       rotation_period=\"24h\"\n   ```\n\n2. 
Request credentials for the \"hashicorp\" role:\n\n   ```sh\n   $ vault read ldap\/static-cred\/hashicorp\n   ```\n\n### Password rotation\n\nPasswords can be managed in two ways:\n\n- automatic time based rotation\n- manual rotation\n\n### Auto password rotation\n\nPasswords will automatically be rotated based on the `rotation_period` configured\nin the static role (minimum of 5 seconds). When requesting credentials for a static\nrole, the response will include the time before the next rotation (`ttl`).\n\nAuto-rotation is currently only supported for static roles. The `binddn` account used\nby Vault should be rotated using the `rotate-root` endpoint to generate a password\nonly Vault will know.\n\n### Manual rotation\n\nStatic roles can be manually rotated using the `rotate-role` endpoint. When manually\nrotated the rotation period will start over.\n\n### Deleting static roles\n\nPasswords are not rotated upon deletion of a static role. The password should be manually\nrotated prior to deleting the role or revoking access to the static role.\n\n## Dynamic credentials\n\n### Setup\n\nDynamic credentials can be configured by calling the `\/role\/:role_name` endpoint:\n\n```bash\n$ vault write ldap\/role\/dynamic-role \\\n  creation_ldif=@\/path\/to\/creation.ldif \\\n  deletion_ldif=@\/path\/to\/deletion.ldif \\\n  rollback_ldif=@\/path\/to\/rollback.ldif \\\n  default_ttl=1h \\\n  max_ttl=24h\n```\n\n-> Note: The `rollback_ldif` argument is optional, but recommended. The statements within `rollback_ldif` will be\nexecuted if the creation fails for any reason. 
This ensures any entities are removed in the event of a failure.\n\nTo generate credentials:\n\n```bash\n$ vault read ldap\/creds\/dynamic-role\nKey                    Value\n---                    -----\nlease_id               ldap\/creds\/dynamic-role\/HFgd6uKaDomVMvJpYbn9q4q5\nlease_duration         1h\nlease_renewable        true\ndistinguished_names    [cn=v_token_dynamic-role_FfH2i1c4dO_1611952635,ou=users,dc=learn,dc=example]\npassword               xWMjkIFMerYttEbzfnBVZvhRQGmhpAA0yeTya8fdmDB3LXDzGrjNEPV2bCPE9CW6\nusername               v_token_testrole_FfH2i1c4dO_1611952635\n```\n\nThe `distinguished_names` field is an array of DNs that are created from the `creation_ldif` statements. If more than\none LDIF entry is included, the DN from each statement will be included in this field. Each entry in this field\ncorresponds to a single LDIF statement. No de-duplication occurs and order is maintained.\n\n### LDIF entries\n\nUser account management is provided through LDIF entries. The LDIF entries may be a base64-encoded version of the\nLDIF string. The string will be parsed and validated to ensure that it adheres to LDIF syntax. A good reference\nfor proper LDIF syntax can be found [here](https:\/\/ldap.com\/ldif-the-ldap-data-interchange-format\/).\n\nSome important things to remember when crafting your LDIF entries:\n\n- There should not be any trailing spaces on any line, including empty lines\n- Each `modify` block needs to be preceded with an empty line\n- Multiple modifications for a `dn` can be defined in a single `modify` block. 
Each modification needs to close\n  with a single dash (`-`)\n\n### Active directory (AD)\n\n<Note> \n\n  Windows Servers hosting Active Directory include a \n  `lifetime period of an old password` configuration setting that lets clients\n  authenticate with old passwords for a specified amount of time.\n\n  For more information, refer to the\n [NTLM network authentication behavior](https:\/\/learn.microsoft.com\/en-us\/troubleshoot\/windows-server\/windows-security\/new-setting-modifies-ntlm-network-authentication)\n  guide by Microsoft.\n\n<\/Note>\n\nFor Active Directory, there are a few additional details that are important to remember:\n\nTo create a user programmatically in AD, you first `add` a user object and then `modify` that user to provide a\npassword and enable the account.\n\n- Passwords in AD are set using the `unicodePwd` field. This must be followed by two (2) colons (`::`).\n- When setting a password programmatically in AD, the following criteria must be met:\n\n  - The password must be enclosed in double quotes (`\" \"`)\n  - The password must be in [`UTF16LE` format](https:\/\/docs.microsoft.com\/en-us\/openspecs\/windows_protocols\/ms-adts\/6e803168-f140-4d23-b2d3-c3a8ab5917d2)\n  - The password must be `base64`-encoded\n  - Additional details can be found [here](https:\/\/docs.microsoft.com\/en-us\/troubleshoot\/windows-server\/identity\/set-user-password-with-ldifde)\n\n- Once a user's password has been set, it can be enabled. AD uses the `userAccountControl` field for this purpose:\n  - To enable the account, set `userAccountControl` to `512`\n  - You will likely also want to disable AD's password expiration for this dynamic user account. 
The\n    `userAccountControl` value for this is: `65536`\n  - `userAccountControl` flags are cumulative, so to set both of the above two flags, add up the two values\n    (`512 + 65536 = 66048`): set `userAccountControl` to `66048`\n  - See [here](https:\/\/docs.microsoft.com\/en-us\/troubleshoot\/windows-server\/identity\/useraccountcontrol-manipulate-account-properties#property-flag-descriptions)\n    for details on `userAccountControl` flags\n\n`sAMAccountName` is a common field when working with AD users. It is used to provide compatibility with legacy\nWindows NT systems and has a limit of 20 characters. Keep this in mind when defining your `username_template`.\nSee [here](https:\/\/docs.microsoft.com\/en-us\/windows\/win32\/adschema\/a-samaccountname) for additional details.\n\nSince the default `username_template`, which follows the template of `v____`, generates names longer than 20 characters, we recommend customising the `username_template` on the role configuration to generate accounts with names less than 20 characters. Please refer to the [username templating document](\/vault\/docs\/concepts\/username-templating) for more information.\n\nWith regard to adding dynamic users to groups, AD doesn't let you directly modify a user's `memberOf` attribute.\nThe `member` attribute of a group and `memberOf` attribute of a user are\n[linked attributes](https:\/\/docs.microsoft.com\/en-us\/windows\/win32\/ad\/linked-attributes). Linked attributes are\nforward link\/back link pairs, with the forward link able to be modified. In the case of AD group membership, the\ngroup's `member` attribute is the forward link. In order to add a newly-created dynamic user to a group, we also\nneed to issue a `modify` request to the desired group and update the group membership with the new user.\n\n#### Active directory LDIF example\n\nThe various `*_ldif` parameters are templates that use the [go template](https:\/\/golang.org\/pkg\/text\/template\/)\nlanguage. 
A complete LDIF example for creating an Active Directory user account is provided here for reference:\n\n```ldif\ndn: CN=,OU=HashiVault,DC=adtesting,DC=lab\nchangetype: add\nobjectClass: top\nobjectClass: person\nobjectClass: organizationalPerson\nobjectClass: user\nuserPrincipalName: @adtesting.lab\nsAMAccountName: \n\ndn: CN=,OU=HashiVault,DC=adtesting,DC=lab\nchangetype: modify\nreplace: unicodePwd\nunicodePwd::\n-\nreplace: userAccountControl\nuserAccountControl: 66048\n-\n\ndn: CN=test-group,OU=HashiVault,DC=adtesting,DC=lab\nchangetype: modify\nadd: member\nmember: CN=,OU=HashiVault,DC=adtesting,DC=lab\n-\n```\n\n## Service account Check-Out\n\nService account check-out provides a library of service accounts that can be checked out\nby a person or by machines. Vault will automatically rotate the password each time a\nservice account is checked in. Service accounts can be voluntarily checked in, or Vault\nwill check them in when their lending period (or, \"ttl\", in Vault's language) ends.\n\nThe service account check-out functionality works with various [schemas](\/vault\/api-docs\/secret\/ldap#schema),\nincluding OpenLDAP, Active Directory, and RACF. In the following usage example, the secrets\nengine is configured to manage a library of service accounts in an Active Directory instance.\n\nFirst we'll need to enable the LDAP secrets engine and tell it how to securely connect\nto an AD server.\n\n```shell-session\n$ vault secrets enable ldap\nSuccess! 
Enabled the ldap secrets engine at: ldap\/\n\n$ vault write ldap\/config \\\n    binddn=$USERNAME \\\n    bindpass=$PASSWORD \\\n    url=ldaps:\/\/138.91.247.105 \\\n    userdn='dc=example,dc=com'\n```\n\nOur next step is to designate a set of service accounts for check-out.\n\n```shell-session\n$ vault write ldap\/library\/accounting-team \\\n    service_account_names=fizz@example.com,buzz@example.com \\\n    ttl=10h \\\n    max_ttl=20h \\\n    disable_check_in_enforcement=false\n```\n\nIn this example, the service account names of `fizz@example.com` and `buzz@example.com` have\nalready been created on the remote AD server. They've been set aside solely for Vault to handle.\nThe `ttl` is how long each check-out will last before Vault checks in a service account,\nrotating its password during check-in. The `max_ttl` is the maximum amount of time it can live\nif it's renewed. These default to `24h`, and both use [duration format strings](\/vault\/docs\/concepts\/duration-format).\nAlso by default, a service account must be checked in by the same Vault entity or client token that\nchecked it out. 
However, if this behavior causes problems, set `disable_check_in_enforcement=true`.\n\nWhen a library of service accounts has been created, view their status at any time to see if they're\navailable or checked out.\n\n```shell-session\n$ vault read ldap\/library\/accounting-team\/status\nKey                 Value\n---                 -----\nbuzz@example.com    map[available:true]\nfizz@example.com    map[available:true]\n```\n\nTo check out any service account that's available, simply execute:\n\n```shell-session\n$ vault write -f ldap\/library\/accounting-team\/check-out\nKey                     Value\n---                     -----\nlease_id                ldap\/library\/accounting-team\/check-out\/EpuS8cX7uEsDzOwW9kkKOyGW\nlease_duration          10h\nlease_renewable         true\npassword                ?@09AZKh03hBORZPJcTDgLfntlHqxLy29tcQjPVThzuwWAx\/Twx4a2ZcRQRqrZ1w\nservice_account_name    fizz@example.com\n```\n\nIf the default `ttl` for the check-out is higher than needed, set the check-out to last\nfor a shorter time by using:\n\n```shell-session\n$ vault write ldap\/library\/accounting-team\/check-out ttl=30m\nKey                     Value\n---                     -----\nlease_id                ldap\/library\/accounting-team\/check-out\/gMonJ2jB6kYs6d3Vw37WFDCY\nlease_duration          30m\nlease_renewable         true\npassword                ?@09AZerLLuJfEMbRqP+3yfQYDSq6laP48TCJRBJaJu\/kDKLsq9WxL9szVAvL\/E1\nservice_account_name    buzz@example.com\n```\n\nThis can be a nice way to say, \"Although I _can_ have a check-out for 24 hours, if I\nhaven't checked it in after 30 minutes, I forgot or I'm a dead instance, so you can just\ncheck it back in.\"\n\nIf no service accounts are available for check-out, Vault will return a 400 Bad Request.\n\n```shell-session\n$ vault write -f ldap\/library\/accounting-team\/check-out\nError writing data to ldap\/library\/accounting-team\/check-out: Error making API request.\n\nURL: POST 
http:\/\/localhost:8200\/v1\/ldap\/library\/accounting-team\/check-out\nCode: 400. Errors:\n\n* No service accounts available for check-out.\n```\n\nTo extend a check-out, renew its lease.\n\n```shell-session\n$ vault lease renew ldap\/library\/accounting-team\/check-out\/0C2wmeaDmsToVFc0zDiX9cMq\nKey                Value\n---                -----\nlease_id           ldap\/library\/accounting-team\/check-out\/0C2wmeaDmsToVFc0zDiX9cMq\nlease_duration     10h\nlease_renewable    true\n```\n\nRenewing a check-out means its current password will live longer, since passwords are rotated\nanytime a password is _checked in_ either by a caller, or by Vault because the check-out `ttl`\nends.\n\nTo check a service account back in for others to use, call:\n\n```shell-session\n$ vault write -f ldap\/library\/accounting-team\/check-in\nKey          Value\n---          -----\ncheck_ins    [fizz@example.com]\n```\n\nMost of the time this will just work, but if multiple service accounts are checked out by the same\ncaller, Vault will need to know which one(s) to check in.\n\n```shell-session\n$ vault write ldap\/library\/accounting-team\/check-in service_account_names=fizz@example.com\nKey          Value\n---          -----\ncheck_ins    [fizz@example.com]\n```\n\nTo perform a check-in, Vault verifies that the caller _should_ be able to check in a given service account.\nTo do this, Vault looks for either the same [entity ID](\/vault\/tutorials\/auth-methods\/identity)\nused to check out the service account, or the same client token.\n\nIf a caller is unable to check in a service account, or simply doesn't try,\nVault will check it back in automatically when the `ttl` expires. 
However, if that is too long,\nservice accounts can be forcibly checked in by a highly privileged user through:\n\n```shell-session\n$ vault write -f ldap\/library\/manage\/accounting-team\/check-in\nKey          Value\n---          -----\ncheck_ins    [fizz@example.com]\n```\n\nOr, alternatively, revoking the secret's lease has the same effect.\n\n```shell-session\n$ vault lease revoke ldap\/library\/accounting-team\/check-out\/PvBVG0m7pEg2940Cb3Jw3KpJ\nAll revocation operations queued successfully!\n```\n\n## Password generation\n\nThis engine previously allowed configuration of the length of the password that is generated\nwhen rotating credentials. This mechanism was deprecated in Vault 1.5 in favor of\n[password policies](\/vault\/docs\/concepts\/password-policies). This means the `length` field should\nno longer be used. The following password policy can be used to mirror the same behavior\nthat the `length` field provides:\n\n```hcl\nlength=<length>\nrule \"charset\" {\n  charset = \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\"\n}\n```\n\n## LDAP password policy\n\nThe LDAP secret engine does not hash or encrypt passwords prior to modifying\nvalues in LDAP. This behavior can cause plaintext passwords to be stored in LDAP.\n\nTo avoid having plaintext passwords stored, the LDAP server should be configured\nwith an LDAP password policy (ppolicy, not to be confused with a Vault password\npolicy). 
A ppolicy can enforce rules such as hashing plaintext passwords by default.\n\nThe following is an example of an LDAP password policy to enforce hashing on the\ndirectory information tree (DIT) `dc=hashicorp,dc=com`:\n\n```\ndn: cn=module{0},cn=config\nchangetype: modify\nadd: olcModuleLoad\nolcModuleLoad: ppolicy\n\ndn: olcOverlay={2}ppolicy,olcDatabase={1}mdb,cn=config\nchangetype: add\nobjectClass: olcPPolicyConfig\nobjectClass: olcOverlayConfig\nolcOverlay: {2}ppolicy\nolcPPolicyDefault: cn=default,ou=pwpolicies,dc=hashicorp,dc=com\nolcPPolicyForwardUpdates: FALSE\nolcPPolicyHashCleartext: TRUE\nolcPPolicyUseLockout: TRUE\n```\n\n## Hierarchical paths\n\nThe LDAP secrets engine lets you define role and set names that contain an\narbitrary number of forward slashes. Names with forward slashes define\nhierarchical path structures.\n\nFor example, you can configure two static roles with the names `org\/secure` and `org\/platform\/dev`:\n\n```shell-session\n$ vault write ldap\/static-role\/org\/secure \\\n    username="user1" \\\n    rotation_period="1h"\nSuccess! Data written to: ldap\/static-role\/org\/secure\n\n$ vault write ldap\/static-role\/org\/platform\/dev \\\n    username="user2" \\\n    rotation_period="1h"\nSuccess! Data written to: ldap\/static-role\/org\/platform\/dev\n```\n\nNames with hierarchical paths let you use the Vault API to query the available\nroles at a specific path with arbitrary depth. Names that end with a forward\nslash indicate that sub-paths reside under that path.\n\nFor example, to list all direct children under the `org\/` path:\n\n```shell-session\n$ vault list ldap\/static-role\/org\/\nKeys\n----\nplatform\/\nsecure\n```\n\nThe `platform\/` key also ends in a forward slash. To list the `platform` sub-keys:\n\n```shell-session\n$ vault list ldap\/static-role\/org\/platform\nKeys\n----\ndev\n```\n\nYou can read and rotate credentials using the same role name and the respective\nAPIs. 
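The listing behavior shown above — trailing slashes marking keys with further sub-paths — can be sketched as a small helper over a flat set of role names. This is illustrative only; the function name is invented and this is not Vault's code.

```python
def list_direct_children(role_names, prefix=""):
    """Direct children of `prefix` among hierarchical role names.

    Mirrors the `vault list` semantics described above: a trailing slash
    in a returned key means sub-paths reside under it. Illustrative only.
    """
    keys = set()
    for name in role_names:
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        head, sep, _ = rest.partition("/")
        # If anything follows the first slash, report "head/" as a sub-path.
        keys.add(head + "/" if sep else head)
    return sorted(keys)

roles = ["org/secure", "org/platform/dev"]
print(list_direct_children(roles, "org/"))          # ['platform/', 'secure']
print(list_direct_children(roles, "org/platform/")) # ['dev']
```

A trailing slash in a returned key signals that `vault list` can descend one level further.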
For example,\n\n```shell-session\n$ vault read ldap\/static-cred\/org\/platform\/dev\nKey                    Value\n---                    -----\ndn                     n\/a\nlast_password          a3sQ6OkmXKt2dtx22kAt36YLkkxLsg4RmhMZCLYCBCbvvv67ILROaOokdCaGPEAE\nlast_vault_rotation    2024-05-03T16:39:27.174164-05:00\npassword               ECf7ZoxfDxGuJEYZrzgzTffSIDI4tx5TojBR9wuEGp8bqUXbl4Kr9eAgPjmizcvg\nrotation_period        5m\nttl                    4m58s\nusername               user2\n```\n\n```shell-session\n$ vault write -f ldap\/rotate-role\/org\/platform\/dev\n```\n\nSince [Vault policies](\/vault\/docs\/concepts\/policies) are also path-based,\nhierarchical names also let you define policies that map 1-1 to LDAP secrets\nengine roles and set paths.\n\nThe following Vault API endpoints support hierarchical path handling:\n\n- [Static roles](\/vault\/api-docs\/secret\/ldap#static-roles)\n- [Static role passwords](\/vault\/api-docs\/secret\/ldap#static-role-passwords)\n- [Manually rotate static role password](\/vault\/api-docs\/secret\/ldap#manually-rotate-static-role-password)\n- [Dynamic roles](\/vault\/api-docs\/secret\/ldap#dynamic-roles)\n- [Dynamic role passwords](\/vault\/api-docs\/secret\/ldap#dynamic-role-passwords)\n- [Library set management](\/vault\/api-docs\/secret\/ldap#library-set-management)\n- [Library set status check](\/vault\/api-docs\/secret\/ldap#library-set-status-check)\n- [Check-Out management](\/vault\/api-docs\/secret\/ldap#check-out-management)\n- [Check-In management](\/vault\/api-docs\/secret\/ldap#check-in-management)\n\n## Tutorial\n\nRefer to the [LDAP Secrets Engine](\/vault\/tutorials\/secrets-management\/openldap)\ntutorial to learn how to configure and use the LDAP secrets engine.\n\n\n## API\n\nThe LDAP secrets engine has a full HTTP API. 
Please see the [LDAP secrets engine API docs](\/vault\/api-docs\/secret\/ldap)\nfor more details.","site":"vault","answers_cleaned":""}
{"questions":"vault plugin generates database credentials dynamically based on configured roles layout docs Oracle is one of the supported plugins for the database secrets engine This page title Oracle database secrets engines Oracle database secrets engine for the Oracle database","answers":"---\nlayout: docs\npage_title: Oracle - database - secrets engines\ndescription: |-\n  Oracle is one of the supported plugins for the database secrets engine. This\n  plugin generates database credentials dynamically based on configured roles\n  for the Oracle database.\n---\n\n# Oracle database secrets engine\n\n-> The Oracle database plugin is now available for use with the database secrets engine for HCP Vault Dedicated on AWS.\n   The plugin configuration (including installation of the Oracle Instant Client library) is managed\n   by HCP. Refer to the HCP Vault Dedicated tab for more information.\n\nThis secrets engine is a part of the database secrets engine. If you have not read the\n[database backend](\/vault\/docs\/secrets\/databases) page, please do so now as it explains how to set up the database backend and\ngives an overview of how the engine functions.\n\nOracle is one of the supported plugins for the database secrets engine. It is capable of dynamically generating\ncredentials based on configured roles for Oracle databases. 
It also supports [static roles](\/vault\/docs\/secrets\/databases#static-roles).\n\n## Capabilities\n\n<Tabs>\n<Tab heading=\"Vault\" group=\"vault\">\n\n~> The Oracle database plugin is not bundled in the core Vault code tree and can be\nfound at its own git repository here:\n[hashicorp\/vault-plugin-database-oracle](https:\/\/github.com\/hashicorp\/vault-plugin-database-oracle)\n\n~> This plugin is not compatible with Alpine Linux out of the box.\n\n| Plugin Name                                                          | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |\n| -------------------------------------------------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |\n| Customizable (see: [Custom Plugins](\/vault\/docs\/secrets\/databases\/custom)) | Yes                      | Yes           | Yes          | Yes (1.7+)             |\n\n<\/Tab>\n<Tab heading=\"HCP Vault Dedicated\" group=\"hcp\">\n\n~> The Oracle Database Plugin is managed by the HCP platform. 
No extra installation steps are required for HCP Vault Dedicated.\n\n| Plugin Name                                                          | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |\n| -------------------------------------------------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |\n| `vault-plugin-database-oracle`                                       | Yes                      | Yes           | Yes          | Yes                    |\n\n<\/Tab>\n<\/Tabs>\n\n## Setup\n\n<Tabs>\n<Tab heading=\"Vault\" group=\"vault\">\n\nThe Oracle database plugin is not bundled in the core Vault code tree and can be\nfound at its own git repository here:\n[hashicorp\/vault-plugin-database-oracle](https:\/\/github.com\/hashicorp\/vault-plugin-database-oracle)\n\nFor linux\/amd64, pre-built binaries can be found at [the releases page](https:\/\/releases.hashicorp.com\/vault-plugin-database-oracle)\n\nBefore running the plugin you will need to have the Oracle Instant Client\nlibrary installed. These can be downloaded from Oracle. The libraries will need to\nbe placed in the default library search path or defined in the ld.so.conf configuration files.\n\nThe following privileges are needed by the plugin for minimum functionality. Additional privileges may be needed \ndepending on the SQL configured on the database roles. \n\n```sql\nGRANT CREATE USER to vault WITH ADMIN OPTION;\nGRANT ALTER USER to vault WITH ADMIN OPTION;\nGRANT DROP USER to vault WITH ADMIN OPTION;\nGRANT CONNECT to vault WITH ADMIN OPTION;\nGRANT CREATE SESSION to vault WITH ADMIN OPTION;\nGRANT SELECT on gv_$session to vault;\nGRANT SELECT on v_$sql to vault;\nGRANT ALTER SYSTEM to vault WITH ADMIN OPTION;\n```\n\n~> Vault needs `ALTER SYSTEM` to terminate user sessions when revoking users. 
This may be \nsubstituted with a stored procedure and granted to the Vault administrator user.\n\nIf you are running Vault with [mlock enabled](\/vault\/docs\/configuration#disable_mlock),\nyou will need to enable ipc_lock capabilities for the plugin binary.\n\n1.  Enable the database secrets engine if it is not already enabled:\n\n    ```shell-session\n    $ vault secrets enable database\n    Success! Enabled the database secrets engine at: database\/\n    ```\n\n    By default, the secrets engine will enable at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Download and register the plugin:\n\n    ```shell-session\n    $ vault write sys\/plugins\/catalog\/database\/oracle-database-plugin \\\n        sha256=\"...\" \\\n        command=vault-plugin-database-oracle\n    ```\n\n1.  Configure Vault with the proper plugin and connection information:\n\n    ```shell-session\n    $ vault write database\/config\/my-oracle-database \\\n        plugin_name=oracle-database-plugin \\\n        connection_url=\"\/@localhost:1521\/OraDoc.localhost\" \\\n        allowed_roles=\"my-role\" \\\n        username=\"VAULT_SUPER_USER\" \\\n        password=\"myreallysecurepassword\"\n    ```\n\n   If Oracle uses SSL, see the [connecting using SSL](\/vault\/docs\/secrets\/databases\/oracle#connect-using-ssl) example.\n\n   If the version of Oracle you are using has a container database, you will need to connect to one of the\n   pluggable databases rather than the container database in the `connection_url` field.\n\n1. 
It is highly recommended that you immediately rotate the \"root\" user's password, see\n   [Rotate Root Credentials](\/vault\/api-docs\/secret\/databases#rotate-root-credentials) for more details.\n   This will ensure that only Vault is able to access the \"root\" user that Vault uses to\n   manipulate dynamic & static credentials.\n\n   !> **Use caution:** the root user's password will not be accessible once rotated so it is highly\n   recommended that you create a user for Vault to use rather than using the actual root user.\n\n1. Configure a role that maps a name in Vault to an SQL statement to execute to\n   create the database credential:\n\n   ```shell-session\n   $ vault write database\/roles\/my-role \\\n       db_name=my-oracle-database \\\n       creation_statements='CREATE USER {{username}} IDENTIFIED BY \"{{password}}\"; GRANT CONNECT TO {{username}}; GRANT CREATE SESSION TO {{username}};' \\\n       default_ttl=\"1h\" \\\n       max_ttl=\"24h\"\n   ```\n\n   Note: The `creation_statements` may be specified in a file and interpreted by the Vault CLI using the `@` symbol:\n\n   ```shell-session\n   $ vault write database\/roles\/my-role \\\n       creation_statements=@creation_statements.sql \\\n       ...\n   ```\n\n   See the [Commands](\/vault\/docs\/commands#files) docs for more details.\n\n### Connect using SSL\n\nIf the Oracle server Vault is trying to connect to uses an SSL listener, the database\nplugin will require extra configuration using the `connection_url` parameter:\n\n```shell-session\nvault write database\/config\/oracle \\\n  plugin_name=vault-plugin-database-oracle \\\n  connection_url='\/@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=<host>)(PORT=<port>))(CONNECT_DATA=(SERVICE_NAME=<service_name>))(SECURITY=(SSL_SERVER_CERT_DN=\"<cert_dn>\")(MY_WALLET_DIRECTORY=<path_to_wallet>)))' \\\n  allowed_roles=\"my-role\" \\\n  username=\"admin\" \\\n  password=\"password\"\n```\n\nFor example, the SSL server certificate distinguished name and path to the Oracle Wallet\nto use for connection 
and verification could be configured using:\n\n```shell-session\nvault write database\/config\/oracle \\\n  plugin_name=vault-plugin-database-oracle \\\n  connection_url='\/@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=hashicorp.com)(PORT=1523))(CONNECT_DATA=(SERVICE_NAME=ORCL))(SECURITY=(SSL_SERVER_CERT_DN=\"CN=hashicorp.com,OU=TestCA,O=HashiCorp=com\")(MY_WALLET_DIRECTORY=\/etc\/oracle\/wallets)))' \\\n  allowed_roles=\"my-role\" \\\n  username=\"admin\" \\\n  password=\"password\"\n```\n\n#### Wallet permissions\n\n~> **Note**: The wallets used when connecting via SSL should be available on every Vault\nserver when using high availability clusters.\n\nThe wallet used by Vault should be in a well known location with the proper filesystem permissions. For example, if Vault is running as the `vault` user,\nthe wallet directory may be setup as follows:\n\n```shell-session\nmkdir -p \/etc\/vault\/wallets\ncp cwallet.sso \/etc\/vault\/wallets\/cwallet.sso\nchown -R vault:vault \/etc\/vault\nchmod 600 \/etc\/vault\/wallets\/cwallet.sso\n```\n\n### Using TNS names\n\n~> **Note**: The `tnsnames.ora` file and environment variable used when connecting via SSL should\nbe available on every Vault server when using high availability clusters.\n\nVault can optionally use TNS names in the connection string when connecting to Oracle databases using a `tnsnames.ora` file. 
An example
of a `tnsnames.ora` file may look like the following:

```shell-session
AWSEAST=
(DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCPS)(HOST = hashicorp.us-east-1.rds.amazonaws.com)(PORT = 1523))
  (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SID = ORCL)
  )
  (SECURITY =
      (SSL_SERVER_CERT_DN = "CN=hashicorp.rds.amazonaws.com/OU=RDS/O=Amazon.com/L=Seattle/ST=Washington/C=US")
      (MY_WALLET_DIRECTORY = /etc/oracle/wallet/east)
  )
)

AWSWEST=
(DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCPS)(HOST = hashicorp.us-west-1.rds.amazonaws.com)(PORT = 1523))
  (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SID = ORCL)
  )
  (SECURITY =
      (SSL_SERVER_CERT_DN = "CN=hashicorp.rds.amazonaws.com/OU=RDS/O=Amazon.com/L=Seattle/ST=Washington/C=US")
      (MY_WALLET_DIRECTORY = /etc/oracle/wallet/west)
  )
)
```

To configure Vault to use TNS names, set the following environment variable on the Vault server:

```shell-session
TNS_ADMIN=/path/to/tnsnames/directory
```

~> **Note**: If Vault returns a "could not open file" error, double check that
the `TNS_ADMIN` environment variable is available to the Vault server.

Use the alias in the `connection_url` parameter on the database configuration:

```
vault write database/config/oracle-east \
    plugin_name=vault-plugin-database-oracle \
    connection_url="/@AWSEAST" \
    allowed_roles="my-role" \
    username="VAULT_SUPER_USER" \
    password="myreallysecurepassword"

vault write database/config/oracle-west \
    plugin_name=vault-plugin-database-oracle \
    connection_url="/@AWSWEST" \
    allowed_roles="my-role" \
    username="VAULT_SUPER_USER" \
    password="myreallysecurepassword"
```

</Tab>
<Tab heading="HCP Vault Dedicated" group="hcp">

1. Enable the database secrets engine if it is not already enabled:

    ```shell-session
    $ vault secrets enable database
    Success! Enabled the database secrets engine at: database/
    ```

    By default, the secrets engine will enable at the name of the engine. To
    enable the secrets engine at a different path, use the `-path` argument.

1. Configure Vault with the proper plugin and connection information. The `plugin_name` must be set to
   `vault-plugin-database-oracle`.

   ~> **Note:** Replace `your-oracle-host` in the `connection_url` parameter with the hostname of your Oracle server.

    ```shell-session
    $ vault write database/config/my-oracle-database \
        plugin_name=vault-plugin-database-oracle \
        connection_url="/@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=your-oracle-host)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))" \
        allowed_roles="my-role" \
        username="VAULT_SUPER_USER" \
        password="myreallysecurepassword"
    ```

   HCP Vault Dedicated currently supports SSL connections for Oracle on Amazon Web Services (AWS) Relational Database Service (RDS).
   If Oracle is deployed on AWS RDS and uses SSL, see the [connecting with HCP Vault Dedicated using SSL](#connect-with-hcp-vault-using-ssl) example.

   If the version of Oracle you are using has a container database, you will need to connect to one of the
   pluggable databases rather than the container database in the `connection_url` field.

1. It is highly recommended that you immediately rotate the "root" user's password, see
   [Rotate Root Credentials](/vault/api-docs/secret/databases#rotate-root-credentials) for more details.
   This will ensure that only Vault is able to access the "root" user that Vault uses to
   manipulate dynamic & static credentials.

   !> **Use caution:** the "root" user's password will not be accessible once rotated so it is highly
   recommended that you create a user for Vault to use rather than the actual `root` user.
1. Configure a role that maps a name in Vault to an SQL statement to execute to
   create the database credential:

   ```shell-session
   $ vault write database/roles/my-role \
       db_name=my-oracle-database \
       creation_statements='CREATE USER {{username}} IDENTIFIED BY "{{password}}"; GRANT CONNECT TO {{username}}; GRANT CREATE SESSION TO {{username}};' \
       default_ttl="1h" \
       max_ttl="24h"
   ```

   Note: The `creation_statements` may be specified in a file and interpreted by the Vault CLI using the `@` symbol:

   ```shell-session
   $ vault write database/roles/my-role \
       creation_statements=@creation_statements.sql \
       ...
   ```

   See the [Commands](/vault/docs/commands#files) docs for more details.

### Connect with HCP Vault Dedicated using SSL

Before using SSL with Oracle RDS, you must configure an option group with SSL and set the following:

- `SQLNET.SSL_VERSION` to `1.2`
- `SQLNET.CIPHER_SUITE` to one of `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384`, `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`, `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256`, `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`, or `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`

If the AWS RDS Oracle instance Vault is trying to connect to uses an SSL listener, the database
plugin will require extra configuration using the `connection_url` parameter:

```shell-session
$ vault write database/config/oracle \
  plugin_name=vault-plugin-database-oracle \
  connection_url='/@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=<host>)(PORT=<port>))(CONNECT_DATA=(SERVICE_NAME=<service_name>))(SECURITY=(SSL_SERVER_CERT_DN="<cert_dn>")(MY_WALLET_DIRECTORY=<path_to_wallet>)))' \
  allowed_roles="my-role" \
  username="VAULT_SUPER_USER" \
  password="myreallysecurepassword"
```

For example, the SSL server certificate distinguished name for AWS RDS and path to the Oracle Wallet
to use for connection and verification could be configured using:

- Wallet location and permissions are managed by the HCP platform. The wallet is available at `/etc/vault.d/plugin/oracle/ssl_wallet`.
- The distinguished name for the current AWS RDS CA is in the format `SECURITY=(SSL_SERVER_CERT_DN="C=US,ST=Washington,L=Seattle,O=Amazon.com,OU=RDS,CN=your-rds-endpoint-url")`.
- A listener on port `2484` is enabled by adding `SSL` to an RDS option group and applying the option group with SSL to your Oracle RDS instance.
- Replace `your-rds-endpoint-url` with the endpoint for your RDS instance in the `HOST` and `DN` parameters.

-> **Note:** For more information on using SSL/TLS with AWS RDS, review the [Using SSL/TLS to encrypt a connection to a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html) AWS documentation.

```shell-session
$ vault write database/config/my-oracle-database \
  plugin_name=vault-plugin-database-oracle \
  connection_url='/@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=your-rds-endpoint-url)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=ORCL))(SECURITY=(SSL_SERVER_CERT_DN="C=US,ST=Washington,L=Seattle,O=Amazon.com,OU=RDS,CN=your-rds-endpoint-url")(MY_WALLET_DIRECTORY=/etc/vault.d/plugin/oracle/ssl_wallet)))' \
  allowed_roles="my-role" \
  username="admin" \
  password="password"
```

~> **Using TNS names:** `tnsnames.ora` configuration is not currently available with HCP Vault Dedicated.

</Tab>
</Tabs>

## Usage

### Dynamic credentials

After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.

1.  Generate a new credential by reading from the `/creds` endpoint with the name
    of the role:

    ```text
    $ vault read database/creds/my-role
    Key                Value
    ---                -----
    lease_id           database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
    lease_duration     1h
    lease_renewable    true
    password           yRUSyd-vPYDg5NkU9kDg
    username           V_VAULTUSE_MY_ROLE_SJJUK3Q8W3BKAYAN8S62_1602543009
    ```

## API

The full list of configurable options can be seen in the [Oracle database plugin
API](/vault/api-docs/secret/databases/oracle) page.

For more information on the database secrets engine's HTTP API please see the
[Database secrets engine API](/vault/api-docs/secret/databases) page.
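Generated credentials are tied to the `lease_id` shown above, so one way to extend or end access early (rather than waiting for `default_ttl` to expire) is the standard `vault lease` subcommands. A minimal sketch against a running Vault, reusing the example lease ID from the output above:

```shell-session
$ vault lease renew database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
$ vault lease revoke database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
```

Renewal is bounded by the role's `max_ttl` (`24h` above); revoking the lease removes the generated Oracle user.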
---
layout: docs
page_title: MongoDB Atlas- Database - Secrets Engines
description: |-
  MongoDB Atlas is one of the supported plugins for the database secrets engine. This
  plugin generates database credentials dynamically based on configured roles
  for MongoDB Atlas databases.
---

# MongoDB Atlas database secrets engine

MongoDB Atlas is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles for
MongoDB Atlas databases. It cannot support rotating the root user's credentials because
it uses a public and private key pair to authenticate.

See the [database secrets engine](/vault/docs/secrets/databases) docs for
more information about setting up the database secrets engine.

<Note>
  The information below relates to the MongoDB Atlas <b>database plugin</b> for the Vault database secrets engine.
  Refer to the <a href="/vault/docs/secrets/mongodbatlas">MongoDB Atlas secrets engine</a> for
  information about using the MongoDB Atlas secrets engine for Vault.
</Note>

## Capabilities

| Plugin Name                    | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization | Credential Types             |
| ------------------------------ | ------------------------ | ------------- | ------------ | ---------------------- | ---------------------------- |
| `mongodbatlas-database-plugin` | No                       | Yes           | Yes          | Yes (1.8+)             | password, client_certificate |

## Setup

1.  Enable the database secrets engine if it is not already enabled:

    ```shell-session
    $ vault secrets enable database
    Success! Enabled the database secrets engine at: database/
    ```

    By default, the secrets engine will enable at the name of the engine. To
    enable the secrets engine at a different path, use the `-path` argument.

1.  Configure Vault with the proper plugin and connection information:

    ```shell-session
    $ vault write database/config/my-mongodbatlas-database \
        plugin_name=mongodbatlas-database-plugin \
        allowed_roles="*" \
        public_key="jmskfortvf" \
        private_key="ea6acbc7-8a30-4a3f-812e-6f869c08bcd1" \
        project_id="4f96cad208574fd14aa8dda3a"
    ```

## Usage

After the secrets engine is configured and a user/machine has a Vault token with
the proper permissions, it can generate credentials.

#### Password credentials

1.  Configure a role that maps a name in Vault to a MongoDB Atlas command that executes and
    creates the database user credential:

    ```shell-session
    $ vault write database/roles/my-password-role \
        db_name=my-mongodbatlas-database \
        creation_statements='{"database_name": "admin","roles": [{"databaseName":"admin","roleName":"atlasAdmin"}]}' \
        default_ttl="1h" \
        max_ttl="24h"
    Success! Data written to: database/roles/my-password-role
    ```

1.  Generate a new credential by reading from the `/creds` endpoint with the name
    of the role:

    ```shell-session
    $ vault read database/creds/my-password-role
        Key                Value
        ---                -----
        lease_id           database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
        lease_duration     1h
        lease_renewable    true
        password           FBYwnnh-fwc0quxtKf11
        username           v-my-password-role-DKbQEg6uRn
    ```

    Each invocation of the command generates a new credential.

    MongoDB Atlas database credentials eventually become consistent when the
    [MongoDB Atlas Admin API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/)
    coordinates with hosted clusters in your Atlas project. You cannot use the
    credentials successfully until the consistency process completes.

    If you plan to use MongoDB Atlas credentials in a pipeline, you may need to add
    a time delay or secondary process to account for the time required to establish consistency.
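The time delay can be implemented as a small polling loop instead of a fixed sleep. A sketch in POSIX sh with a hypothetical `retry` helper; in a real pipeline the probed command would be a database login attempt (for example, `mongosh` with the Vault-issued username and password), which is substituted here by `true`:

```shell
#!/bin/sh
# Hypothetical helper: run a command until it succeeds or the attempt
# budget is exhausted. Returns 0 on the first success, 1 otherwise.
retry() {
  attempts=$1
  shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1   # back off before the next probe
  done
  return 1
}

# Usage sketch: replace `true` with a login probe using the new credential.
retry 5 true && echo "credential is consistent"
```

Polling a real login keeps the pipeline moving as soon as Atlas finishes propagating the user, rather than always paying a worst-case delay.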
Configure a role that maps a name in Vault to a MongoDB Atlas command that executes and\n    creates the X509 type database user credential:\n\n    ```shell-session\n    $ vault write database\/roles\/my-dynamic-certificate-role \\\n      db_name=my-mongodbatlas-database \\\n      creation_statements='{\"database_name\": \"$external\", \"x509Type\": \"CUSTOMER\", \"roles\": [{\"databaseName\":\"<db_name>\",\"roleName\":\"readWrite\"}]}' \\\n      default_ttl=\"1h\" \\\n      max_ttl=\"24h\" \\\n      credential_type=\"client_certificate\" \\\n      credential_config=ca_cert=\"$(cat path\/to\/ca_cert.pem)\" \\\n      credential_config=ca_private_key=\"$(cat path\/to\/private_key.pem)\" \\\n      credential_config=key_type=\"rsa\" \\\n      credential_config=key_bits=2048 \\\n      credential_config=signature_bits=256 \\\n      credential_config=common_name_template=\"__\"\n    Success! Data written to: database\/roles\/my-dynamic-certificate-role\n    ```\n\n1.  Generate a new credential by reading from the `\/creds` endpoint with the name\nof the role:\n\n    ```shell-session\n    $ vault read database\/creds\/my-dynamic-certificate-role\n      Key                 Value\n      ---                 -----\n      request_id          b6556b2d-c379-5a92-465d-6597c506c821\n      lease_id            database\/creds\/my-dynamic-certificate-role\/AZ5tao6NjLJctx7fm1bujKEL\n      lease_duration      1h\n      lease_renewable     true\n      client_certificate  -----BEGIN CERTIFICATE-----\n                          ...\n                          -----END CERTIFICATE-----\n      private_key         -----BEGIN PRIVATE KEY-----\n                          ...\n                          -----END PRIVATE KEY-----\n      private_key_type    rsa\n      username            CN=token_my-dynamic-certificate-role_1677262121\n    ```\n\n## Client certificate authentication\n\nMongoDB Atlas supports [X.509 client certificate based 
authentication](https:\/\/www.mongodb.com\/docs\/manual\/tutorial\/configure-x509-client-authentication\/)\nfor enhanced authentication security as an alternative to username and password authentication.\nThe MongoDB Atlas database plugin can be used to manage client certificate credentials for\nMongoDB Atlas users by using `client_certificate` [credential_type](\/vault\/api-docs\/secret\/databases#credential_type).\n\nSee the [usage](\/vault\/docs\/secrets\/databases\/mongodbatlas#usage) section for examples using dynamic roles.\n\n## API\n\nThe full list of configurable options can be seen in the [MongoDB Atlas Database\nPlugin HTTP API](\/vault\/api-docs\/secret\/databases\/mongodbatlas) page.\n\nFor more information on the database secrets engine's HTTP API please see the\n[Database Secrets Engine API](\/vault\/api-docs\/secret\/databases) page.","site":"vault"}
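The MongoDB Atlas record above warns that freshly issued credentials are only eventually consistent, and suggests adding a time delay or secondary process before using them. A minimal, generic sketch of such a retry wrapper, assuming POSIX `sh` (the `retry` and `probe` names are illustrative and not part of Vault or Atlas; in practice the probe would be a test connection made with the new credentials):

```shell
#!/bin/sh
# Retry a command with a fixed delay until it succeeds or attempts run out.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 1   # give the Atlas consistency process time to complete
  done
  return 1
}

# Demo probe: fails twice, then succeeds (stands in for a test login
# performed with the freshly issued credentials).
count_file=$(mktemp)
echo 0 > "$count_file"
probe() {
  n=$(($(cat "$count_file") + 1))
  echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}

if retry 5 probe; then
  echo "credentials usable"
else
  echo "gave up waiting" >&2
fi
```

The same shape works as a pipeline step: wrap whatever command first exercises the credentials, and bound the total wait with the attempt count.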
{"questions":"vault page title MySQL MariaDB Database Secrets Engines plugin generates database credentials dynamically based on configured roles for the MySQL database layout docs MySQL is one of the supported plugins for the database secrets engine This MySQL MariaDB database secrets engine","answers":"---\nlayout: docs\npage_title: MySQL\/MariaDB - Database - Secrets Engines\ndescription: |-\n  MySQL is one of the supported plugins for the database secrets engine. This\n  plugin generates database credentials dynamically based on configured roles\n  for the MySQL database.\n---\n\n# MySQL\/MariaDB database secrets engine\n\n@include 'x509-sha1-deprecation.mdx'\n\nMySQL is one of the supported plugins for the database secrets engine. This\nplugin generates database credentials dynamically based on configured roles for\nthe MySQL database, and also supports [Static\nRoles](\/vault\/docs\/secrets\/databases#static-roles).\n\nThis plugin has a few different instances built into Vault; each instance is for\na slightly different MySQL driver. The only difference between these plugins is\nthe length of usernames generated by the plugin, as different versions of MySQL\naccept different lengths. The available plugins are:\n\n- mysql-database-plugin\n- mysql-aurora-database-plugin\n- mysql-rds-database-plugin\n- mysql-legacy-database-plugin\n\nSee the [database secrets engine](\/vault\/docs\/secrets\/databases) docs for\nmore information about setting up the database secrets engine.\n\n## Capabilities\n\n| Plugin Name                                                    | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |\n| -------------------------------------------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |\n| Depends (see: [above](#mysql-mariadb-database-secrets-engine)) | Yes                      | Yes           | Yes          | Yes (1.7+)             |\n\n## Setup\n\n1. 
Enable the database secrets engine if it is not already enabled:\n\n   ```text\n   $ vault secrets enable database\n   Success! Enabled the database secrets engine at: database\/\n   ```\n\n   By default, the secrets engine will enable at the name of the engine. To\n   enable the secrets engine at a different path, use the `-path` argument.\n\n1. Configure Vault with the proper plugin and connection information:\n\n   ```text\n   $ vault write database\/config\/my-mysql-database \\\n       plugin_name=mysql-database-plugin \\\n       connection_url=\":@tcp(127.0.0.1:3306)\/\" \\\n       allowed_roles=\"my-role\" \\\n       username=\"vaultuser\" \\\n       password=\"vaultpass\"\n   ```\n\n1. Configure a role that maps a name in Vault to an SQL statement to execute to\n   create the database credential:\n\n   ```text\n   $ vault write database\/roles\/my-role \\\n       db_name=my-mysql-database \\\n       creation_statements=\"CREATE USER ''@'%' IDENTIFIED BY '';GRANT SELECT ON *.* TO ''@'%';\" \\\n       default_ttl=\"1h\" \\\n       max_ttl=\"24h\"\n   Success! Data written to: database\/roles\/my-role\n   ```\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\n1. 
Generate a new credential by reading from the `\/creds` endpoint with the name\n   of the role:\n\n   ```text\n   $ vault read database\/creds\/my-role\n   Key                Value\n   ---                -----\n   lease_id           database\/creds\/my-role\/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6\n   lease_duration     1h\n   lease_renewable    true\n   password           yY-57n3X5UQhxnmFRP3f\n   username           v_vaultuser_my-role_crBWVqVh2Hc1\n   ```\n\n## Client x509 certificate authentication\n\nThis plugin supports using MySQL's [x509 Client-side Certificate Authentication](https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/using-encrypted-connections.html#using-encrypted-connections-client-side-configuration)\n\nTo use this authentication mechanism, configure the plugin:\n\n```shell-session\n$ vault write database\/config\/my-mysql-database \\\n    plugin_name=mysql-database-plugin \\\n    allowed_roles=\"my-role\" \\\n    connection_url=\"user:password@tcp(localhost:3306)\/test\" \\\n    tls_certificate_key=@\/path\/to\/client.pem \\\n    tls_ca=@\/path\/to\/client.ca\n```\n\nNote: `tls_certificate_key` and `tls_ca` map to [`ssl-cert (combined with ssl-key)`](https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/connection-options.html#option_general_ssl-cert)\nand [`ssl-ca`](https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/connection-options.html#option_general_ssl-ca) configuration options\nfrom MySQL with the exception that the Vault parameters are the contents of those files, not filenames. As such,\nthe two options are independent of each other. See the [MySQL Connection Options](https:\/\/dev.mysql.com\/doc\/refman\/8.0\/en\/connection-options.html)\nfor more information.\n\n## Examples\n\n### Using wildcards in grant statements\n\nMySQL supports using wildcards in grant statements. These are sometimes needed\nby applications which expect access to a large number of databases inside MySQL.\nThis can be realized by using a wildcard in the grant statement. 
For example, if\nyou want the user created by Vault to have access to all databases starting with\n`fooapp_`, you could use the following creation statement:\n\n```text\nCREATE USER ''@'%' IDENTIFIED BY ''; GRANT SELECT ON `fooapp\_%`.* TO ''@'%';\n```\n\nMySQL expects the part containing the wildcards to be placed inside backticks.\nIf you want to add this creation statement to Vault via the Vault CLI, you cannot\nsimply paste the above statement on the CLI, because the shell will interpret the\ntext between the backticks as something that must be executed. The easiest way\naround this is to encode the creation statement as Base64 and feed that to Vault.\nFor example:\n\n```shell-session\n$ vault write database\/roles\/my-role \\\n    db_name=mysql \\\n    creation_statements=\"Q1JFQVRFIFVTRVIgJ3t7bmFtZX19J0AnJScgSURFTlRJRklFRCBCWSAne3twYXNzd29yZH19JzsgR1JBTlQgU0VMRUNUIE9OIGBmb29hcHBcXyVgLiogVE8gJ3t7bmFtZX19J0AnJSc7\" \\\n    default_ttl=\"1h\" \\\n    max_ttl=\"24h\"\n```\n\n### Rotating root credentials in MySQL 5.6\n\nThe default root rotation setup for MySQL uses the `ALTER USER` syntax present\nin MySQL 5.7 and up. For MySQL 5.6, the [root rotation\nstatements](\/vault\/api-docs\/secret\/databases#root_rotation_statements)\nmust be configured to use the old `SET PASSWORD` syntax. 
For example:\n\n```shell-session\n$ vault write database\/config\/my-mysql-database \\\n    plugin_name=mysql-database-plugin \\\n    connection_url=\":@tcp(127.0.0.1:3306)\/\" \\\n    root_rotation_statements=\"SET PASSWORD = PASSWORD('')\" \\\n    allowed_roles=\"my-role\" \\\n    username=\"root\" \\\n    password=\"mysql\"\n```\n\nFor a guide to root credential rotation, see [Database Root Credential\nRotation](\/vault\/tutorials\/db-credentials\/database-root-rotation).\n\n## API\n\nThe full list of configurable options can be seen in the [MySQL database plugin\nAPI](\/vault\/api-docs\/secret\/databases\/mysql-maria) page.\n\nFor more information on the database secrets engine's HTTP API, please see the\n[Database secrets engine API](\/vault\/api-docs\/secret\/databases) page.\n\n## Authenticating to Cloud DBs via IAM\n\n### Google Cloud\n\nAside from the IAM roles described in [Google's CloudSQL documentation](https:\/\/cloud.google.com\/sql\/docs\/postgres\/add-manage-iam-users#creating-a-database-user),\nthe following SQL privileges are needed by the service account's DB user for minimum functionality with Vault.\nAdditional privileges may be needed depending on the SQL configured on the database roles.\n\n```sql\n-- Enable service account to create users within DB\nGRANT SELECT, CREATE, CREATE USER ON <database>.<object> TO \"test-user\"@\"%\" WITH GRANT OPTION;\n```\n\n### Setup\n\n1.  Enable the database secrets engine if it is not already enabled:\n\n    ```shell-session\n    $ vault secrets enable database\n    Success! Enabled the database secrets engine at: database\/\n    ```\n\n    By default, the secrets engine will enable at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Configure Vault with the proper plugin and connection information. 
Here you can explicitly enable GCP IAM authentication\n    and use [Application Default Credentials](https:\/\/cloud.google.com\/docs\/authentication\/provide-credentials-adc#how-to) to authenticate.\n\n    ~> **Note**: For Google Cloud IAM, the Protocol is `cloudsql-mysql` instead of `tcp`.\n\n    ```shell-session\n    $ vault write database\/config\/my-mysql-database \\\n        plugin_name=\"mysql-database-plugin\" \\\n        allowed_roles=\"my-role\" \\\n        connection_url=\"user@cloudsql-mysql(project:region:instance)\/mysql\" \\\n        auth_type=\"gcp_iam\"\n    ```\n\n    You can also configure the connection and authenticate by directly passing in the service account credentials\n    as an encoded JSON string:\n\n    ```shell-session\n    $ vault write database\/config\/my-mysql-database \\\n        plugin_name=\"mysql-database-plugin\" \\\n        allowed_roles=\"my-role\" \\\n        connection_url=\"user@cloudsql-mysql(project:region:instance)\/mysql\" \\\n        auth_type=\"gcp_iam\" \\\n        service_account_json=\"@my_credentials.json\"\n    ```\n\n1.  Configure a new role in Vault but override the default revocation statements\n    so Vault will drop the user instead:\n\n    ```shell-session\n       $ vault write database\/roles\/my-role \\\n           db_name=my-mysql-database \\\n           creation_statements=\"CREATE USER ''@'%' IDENTIFIED BY '';GRANT SELECT ON *.* TO ''@'%';\" \\\n           revocation_statements=\"DROP USER ''@'%';\" \\\n           default_ttl=\"1h\" \\\n           max_ttl=\"24h\"\n    ```\n\n1.  
When you finish configuring the new role, generate credentials as before:\n\n    ```shell-session\n       $ vault read database\/creds\/my-role\n       Key                Value\n       ---                -----\n       lease_id           database\/creds\/my-role\/2f6b629f-7ah2-7b19-24b9-ad879a8d4bf2\n       lease_duration     1h\n       lease_renewable    true\n       password           vY-57n3X5UQhxnmGTK7g\n       username           v_vaultuser_my-role_frBYNfYh3Kw3\n    ```","site":"vault"}
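The MySQL record above recommends Base64-encoding creation statements that contain backticks before passing them to `vault write`, so the shell does not interpret the backticks. A small sketch of that encoding step, assuming POSIX `sh` plus coreutils `base64` (the sample statement and the `demo` user are illustrative; real statements would carry the user and password template fields shown in the docs):

```shell
#!/bin/sh
# Base64-encode a creation statement containing backticks so it can be
# passed safely on the command line.
stmt='GRANT SELECT ON `fooapp\_%`.* TO '\''demo'\''@'\''%'\'';'
encoded=$(printf '%s' "$stmt" | base64 | tr -d '\n')

# Round-trip check: decoding must return the original statement.
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$stmt" ] && echo "round-trip OK"

# The encoded value is what would be supplied as the role's
# creation_statements argument, e.g.:
#   vault write database/roles/my-role creation_statements="$encoded" ...
```

Because the Base64 alphabet contains no backticks or quotes, the encoded value survives the shell untouched, and Vault decodes it server-side.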
{"questions":"vault Note page title IBM Db2 Database Credentials Manage credentials for IBM Db2 using Vault s LDAP secrets engine layout docs IBM Db2","answers":"---\nlayout: docs\npage_title: IBM Db2 - Database - Credentials\ndescription: |-\n  Manage credentials for IBM Db2 using Vault's LDAP secrets engine.\n---\n\n# IBM Db2\n\n<Note>\n\nVault supports IBM Db2 credential management using the LDAP secrets engine.\n\n<\/Note>\n\nAccess to Db2 is managed by facilities that reside outside the Db2 database system. By\ndefault, user authentication is completed by a security facility that relies on operating\nsystem based authentication of users and passwords. This means that the lifecycle of user\nidentities in Db2 cannot be managed using SQL statements and Vault's\ndatabase secrets engine.\n\nTo provide flexibility in accommodating authentication needs, Db2 ships with authentication\n[plugin modules](https:\/\/www.ibm.com\/docs\/en\/db2\/11.5?topic=ins-ldap-based-authentication-group-lookup-support)\nfor Lightweight Directory Access Protocol (LDAP). This enables the Db2 database manager to\nauthenticate users and obtain group membership defined in an LDAP directory, removing the\nrequirement that users and groups be defined to the operating system.\n\nVault's [LDAP secrets engine](\/vault\/docs\/secrets\/ldap) can be used to manage the lifecycle\nof credentials for Db2 environments that have been configured to delegate user authentication\nand group membership to an LDAP server. 
You can use either dynamic credentials\nor static credentials with the LDAP secrets engine.\n\n## Before you start\n\nThe architecture for implementing this solution is highly context dependent.\nThe assumptions made in this guide help to provide a practical example of how this _could_\nbe configured.\n\nBe sure to read the [IBM LDAP plugin documentation](https:\/\/www.ibm.com\/docs\/en\/db2\/11.5?topic=ins-ldap-based-authentication-group-lookup-support)\nto understand the tradeoffs and security implications.\n\nThe setup presented in this guide makes the following assumptions:\n\n- **Db2 is configured to authenticate users from an LDAP server using the\n  [server authentication plugin](https:\/\/www.ibm.com\/docs\/en\/db2\/11.5?topic=ins-ldap-based-authentication-group-lookup-support#d83944e187)\n  module.**\n- **Db2 is configured to retrieve group membership from an LDAP server using the\n  [group lookup plugin](https:\/\/www.ibm.com\/docs\/en\/db2\/11.5?topic=ins-ldap-based-authentication-group-lookup-support#d83944e235)\n  module.**\n- **The LDAP directory information tree (DIT) has the following structure:**\n\n   <CodeBlockConfig hideClipboard>\n\n   ```plaintext\n   # Organizational units\n   dn: ou=groups,dc=example,dc=com\n   objectClass: organizationalUnit\n   ou: groups\n\n   dn: ou=users,dc=example,dc=com\n   objectClass: organizationalUnit\n   ou: users\n\n   # Db2 groups\n   #  - https:\/\/www.ibm.com\/docs\/en\/db2\/11.5?topic=unix-db2-users-groups\n   #  - https:\/\/www.ibm.com\/docs\/en\/db2\/11.5?topic=ins-ldap-based-authentication-group-lookup-support\n   dn: cn=db2iadm1,ou=groups,dc=example,dc=com\n   objectClass: groupOfNames\n   cn: db2iadm1\n   member: uid=db2inst1,ou=users,dc=example,dc=com\n   description: DB2 sysadm group\n\n   dn: cn=db2fadm1,ou=groups,dc=example,dc=com\n   objectClass: groupOfNames\n   cn: db2fadm1\n   member: uid=db2fenc1,ou=users,dc=example,dc=com\n   description: DB2 fenced user group\n\n   dn: 
cn=dev,ou=groups,dc=example,dc=com\n   objectClass: groupOfNames\n   cn: dev\n   member: uid=staticuser,ou=users,dc=example,dc=com\n   description: Development group\n\n   # Db2 users\n   #  - https:\/\/www.ibm.com\/docs\/en\/db2\/11.5?topic=unix-db2-users-groups\n   #  - https:\/\/www.ibm.com\/docs\/en\/db2\/11.5?topic=ins-ldap-based-authentication-group-lookup-support\n   dn: uid=db2inst1,ou=users,dc=example,dc=com\n   objectClass: inetOrgPerson\n   cn: db2inst1\n   sn: db2inst1\n   uid: db2inst1\n   userPassword: Db2AdminPassword\n\n   dn: uid=db2fenc1,ou=users,dc=example,dc=com\n   objectClass: inetOrgPerson\n   cn: db2fenc1\n   sn: db2fenc1\n   uid: db2fenc1\n   userPassword: Db2FencedPassword\n\n   # Add user for static role rotation\n   dn: uid=staticuser,ou=users,dc=example,dc=com\n   objectClass: inetOrgPerson\n   cn: staticuser\n   sn: staticuser\n   uid: staticuser\n   userPassword: StaticUserPassword\n   ```\n\n   <\/CodeBlockConfig>\n\n   - **`IBMLDAPSecurity.ini` is updated to match the LDAP server configuration.**\n\n## Setup\n\n<Tabs>\n<Tab heading=\"Dynamic credentials\" group=\"dynamic\">\n\n1. Enable the LDAP secrets engine.\n\n   ```shell-session\n   $ vault secrets enable ldap\n   ```\n\n1. Configure the LDAP secrets engine.\n\n   ```shell-session\n   $ vault write ldap\/config \\\n       binddn=\"cn=admin,dc=example,dc=com\" \\\n       bindpass=\"LDAPAdminPassword\" \\\n       url=\"ldap:\/\/127.0.0.1:389\"\n   ```\n\n1. Write a template file that defines how to create LDAP users.\n\n   ```shell-session\n   $ cat > \/tmp\/creation.ldif <<EOF\n   dn: uid=,ou=users,dc=example,dc=com\n   objectClass: inetOrgPerson\n   uid: \n   cn: \n   sn: \n   userPassword: \n   EOF\n   ```\n\n   This file will be used by Vault to create LDAP users when credentials are requested.\n\n1. 
Write a template file that defines how to delete LDAP users.\n\n   ```shell-session\n   $ cat > \/tmp\/deletion_rollback.ldif <<EOF\n   dn: uid=,ou=users,dc=example,dc=com\n   changetype: delete\n   EOF\n   ```\n\n   This file will be used by Vault to delete LDAP users when the credentials are\n   revoked.\n\n1. Create a Vault role that includes `creation.ldif` and\n   `deletion_rollback.ldif`.\n\n   ```shell-session\n   $ vault write ldap\/role\/dynamic \\\n        creation_ldif=@\/tmp\/creation.ldif \\\n        deletion_ldif=@\/tmp\/deletion_rollback.ldif \\\n        rollback_ldif=@\/tmp\/deletion_rollback.ldif \\\n        default_ttl=1h\n   ```\n\n<\/Tab>\n<Tab heading=\"Static credentials\" group=\"static\">\n\n1. Enable the LDAP secrets engine.\n\n   ```shell-session\n   $ vault secrets enable ldap\n   ```\n\n1. Configure the LDAP secrets engine.\n\n   ```shell-session\n   $ vault write ldap\/config \\\n       binddn=\"cn=admin,dc=example,dc=com\" \\\n       bindpass=\"LDAPAdminPassword\" \\\n       url=\"ldap:\/\/127.0.0.1:389\"\n   ```\n\n1. 
Create a static role that maps a name in Vault to an entry in an LDAP directory.\n\n   ```shell-session\n   $ vault write ldap\/static-role\/static \\\n        username='staticuser' \\\n        dn='uid=staticuser,ou=users,dc=example,dc=com' \\\n        rotation_period=\"1h\"\n   ```\n\n<\/Tab>\n<\/Tabs>\n\n## Usage\n\n<Tabs>\n<Tab heading=\"Dynamic credentials\" group=\"dynamic\">\n\nGenerate dynamic credentials using the Vault `dynamic` role.\n\n```shell-session\n$ vault read ldap\/creds\/dynamic\n```\n\n**Successful output:**\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\nKey                    Value\n---                    -----\nlease_id               ldap\/creds\/dynamic\/doa187ysuFExnvsJwmt8WrNo\nlease_duration         1h\nlease_renewable        true\ndistinguished_names    [uid=v_token_dynamic_joctelE9RB_1647220296,ou=users,dc=example,dc=com]\npassword               3WAOcuHUUt3qMKaUqo14pfTWapiOt8fmcBNoDo7Rx1R9dKxMOMVoMR3MYjCxQvmL\nusername               v_token_dynamic_joctelE9RB_1647220296\n```\n\n<\/CodeBlockConfig>\n\nUse the dynamic credentials to connect to Db2.\n\n<\/Tab>\n<Tab heading=\"Static credentials\" group=\"static\">\n\nRead the rotated password of the LDAP user that was used in the static role.\n\n```shell-session\n$ vault read ldap\/static-cred\/static\n```\n\n**Successful output:**\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\nKey                    Value\n---                    -----\ndn                     uid=staticuser,ou=users,dc=example,dc=com\nlast_vault_rotation    2022-03-14T11:56:15.252772-07:00\npassword               VWpUznJ0IcaYbHbnyqwBuJhsfb9YTe5MzwePR9oTkkrs26GhGKZ7dD5HuULpFfri\nrotation_period        1h\nttl                    59m55s\nusername               staticuser\n```\n\n<\/CodeBlockConfig>\n\nUse the rotated credentials for `staticuser` to connect to Db2.\n\n<\/Tab>\n<\/Tabs>\n\n## Tutorial\n\nRefer to the [LDAP Secrets Engine tutorial](\/vault\/tutorials\/secrets-management\/openldap) to learn how 
to configure and use the LDAP secrets engine.\n\n## API\n\nThe LDAP secrets engine has a full HTTP API. Please see the [LDAP secrets engine API docs](\/vault\/api-docs\/secret\/ldap) for more details","site":"vault","answers_cleaned":"    layout  docs page title  IBM Db2   Database   Credentials description       Manage credentials for IBM Db2 using Vault s LDAP secrets engine         IBM Db2   Note   Vault supports IBM Db2 credential management using the LDAP secrets engine     Note   Access to Db2 is managed by facilities that reside outside the Db2 database system  By default  user authentication is completed by a security facility that relies on operating system based authentication of users and passwords  This means that the lifecycle of user identities in Db2 aren t capable of being managed using SQL statements and Vault s database secrets engine   To provide flexibility in accommodating authentication needs  Db2 ships with authentication  plugin modules  https   www ibm com docs en db2 11 5 topic ins ldap based authentication group lookup support  for Lightweight Directory Access Protocol  LDAP   This enables the Db2 database manager to authenticate users and obtain group membership defined in an LDAP directory  removing the requirement that users and groups be defined to the operating system   Vault s  LDAP secrets engine   vault docs secrets ldap  can be used to manage the lifecycle of credentials for Db2 environments that have been configured to delegate user authentication and group membership to an LDAP server  You can use either dynamic credentials or static credentials with the LDAP secrets engine      Before you start  The architecture for implementing this solution is highly context dependent  The assumptions made in this guide help to provide a practical example of how this  could  be configured   Be sure to read the  IBM LDAP plugin documentation  https   www ibm com docs en db2 11 5 topic ins ldap based authentication group lookup support  to 
understand the tradeoffs and security implications   The setup presented in this guide makes the following assumptions       Db2 is configured to authenticate users from an LDAP server using the    server authentication plugin  https   www ibm com docs en db2 11 5 topic ins ldap based authentication group lookup support d83944e187    module        Db2 is configured to retrieve group membership from an LDAP server using the    group lookup plugin  https   www ibm com docs en db2 11 5 topic ins ldap based authentication group lookup support d83944e235    module        The LDAP directory information tree  DIT  has the following structure         CodeBlockConfig hideClipboard         plaintext      Organizational units    dn  ou groups dc example dc com    objectClass  organizationalUnit    ou  groups     dn  ou users dc example dc com    objectClass  organizationalUnit    ou  users       Db2 groups         https   www ibm com docs en db2 11 5 topic unix db2 users groups         https   www ibm com docs en db2 11 5 topic ins ldap based authentication group lookup support    dn  cn db2iadm1 ou groups dc example dc com    objectClass  groupOfNames    cn  db2iadm1    member  uid db2inst1 ou users dc example dc com    description  DB2 sysadm group     dn  cn db2fadm1 ou groups dc example dc com    objectClass  groupOfNames    cn  db2fadm1    member  uid db2fenc1 ou users dc example dc com    description  DB2 fenced user group     dn  cn dev ou groups dc example dc com    objectClass  groupOfNames    cn  dev    member  uid staticuser ou users dc example dc com    description  Development group       Db2 users         https   www ibm com docs en db2 11 5 topic unix db2 users groups         https   www ibm com docs en db2 11 5 topic ins ldap based authentication group lookup support    dn  uid db2inst1 ou users dc example dc com    objectClass  inetOrgPerson    cn  db2inst1    sn  db2inst1    uid  db2inst1    userPassword  Db2AdminPassword     dn  uid db2fenc1 ou users dc 
example dc com    objectClass  inetOrgPerson    cn  db2fenc1    sn  db2fenc1    uid  db2fenc1    userPassword  Db2FencedPassword       Add user for static role rotation    dn  uid staticuser ou users dc example dc com    objectClass  inetOrgPerson    cn  staticuser    sn  staticuser    uid  staticuser    userPassword  StaticUserPassword              CodeBlockConfig           IBMLDAPSecurity ini  is updated to match the LDAP server configuration        Setup   Tabs   Tab heading  Dynamic credentials  group  dynamic    1  Enable the LDAP secrets engine         shell session      vault secrets enable ldap         1  Configure the LDAP secrets engine         shell session      vault write ldap config          binddn  cn admin dc example dc com           bindpass  LDAPAdminPassword           url  ldap   127 0 0 1 389          1  Write a template file that defines how to create LDAP users         shell session      cat    tmp creation ldif   EOF    dn  uid  ou users dc example dc com    objectClass  inetOrgPerson    uid      cn      sn      userPassword      EOF            This file will be used by Vault to create LDAP users when credentials are requested   1  Write a template file that defines how to delete LDAP users         shell session      cat    tmp deletion rollback ldif   EOF    dn  uid  ou users dc example dc com    changetype  delete    EOF            This file will be used by Vault to delete LDAP users when the credentials are    revoked   1  Create a Vault role that includes  creation ldif  and     deletion rollback ldif         shell session      vault write ldap role dynamic           creation ldif   tmp creation ldif           deletion ldif   tmp deletion rollback ldif           rollback ldif   tmp deletion rollback ldif           default ttl 1h           Tab   Tab heading  Static credentials  group  static    1  Enable the LDAP secrets engine         shell session      vault secrets enable ldap         1  Configure the LDAP secrets engine         shell 
session      vault write ldap config          binddn  cn admin dc example dc com           bindpass  LDAPAdminPassword           url  ldap   127 0 0 1 389          1  Create a static role that maps a name in Vault to an entry in an LDAP directory         shell session      vault write ldap static role static           username  staticuser            dn  uid staticuser ou users dc example dc com            rotation period  1h            Tab    Tabs      Usage   Tabs   Tab heading  Dynamic credentials  group  dynamic    Generate dynamic credentials using the Vault  dynamic  role      shell session   vault read ldap creds dynamic        Successful output      CodeBlockConfig hideClipboard      shell session Key                    Value                              lease id               ldap creds dynamic doa187ysuFExnvsJwmt8WrNo lease duration         1h lease renewable        true distinguished names     uid v token dynamic joctelE9RB 1647220296 ou users dc example dc com  password               3WAOcuHUUt3qMKaUqo14pfTWapiOt8fmcBNoDo7Rx1R9dKxMOMVoMR3MYjCxQvmL username               v token dynamic joctelE9RB 1647220296        CodeBlockConfig   Use the dynamic credentials to connect to Db2     Tab   Tab heading  Static credentials  group  static    Read the rotated password of the LDAP user that was used in the static role      shell session   vault read ldap static cred static        Successful output      CodeBlockConfig hideClipboard      shell session Key                    Value                              dn                     uid staticuser ou users dc example dc com last vault rotation    2022 03 14T11 56 15 252772 07 00 password               VWpUznJ0IcaYbHbnyqwBuJhsfb9YTe5MzwePR9oTkkrs26GhGKZ7dD5HuULpFfri rotation period        1h ttl                    59m55s username               staticuser        CodeBlockConfig   Use the rotated credentials for  staticuser  to connect to Db2     Tab    Tabs      Tutorial  Refer to the  LDAP Secrets Engine tutorial   
vault tutorials secrets management openldap  to learn how to configure and use the LDAP secrets engine      API  The LDAP secrets engine has a full HTTP API  Please see the  LDAP secrets engine API docs   vault api docs secret ldap  for more details"}
{"questions":"vault plugin interface There are a number of built in database types and an exposed framework for running custom database types for extendability on configured roles It works with a number of different databases through a layout docs The database secrets engine generates database credentials dynamically based page title Database Secrets Engines","answers":"---\nlayout: docs\npage_title: Database - Secrets Engines\ndescription: |-\n  The database secrets engine generates database credentials dynamically based\n  on configured roles. It works with a number of different databases through a\n  plugin interface. There are a number of built-in database types and an exposed\n  framework for running custom database types for extendability.\n---\n\n# Databases\n\nThe database secrets engine generates database credentials dynamically based on\nconfigured roles. It works with a number of different databases through a plugin\ninterface. There are a number of built-in database types, and an exposed framework\nfor running custom database types for extendability. This means that services\nthat need to access a database no longer need to hardcode credentials: they can\nrequest them from Vault, and use Vault's [leasing mechanism](\/vault\/docs\/concepts\/lease)\nto more easily roll keys. These are referred to as \"dynamic roles\" or \"dynamic\nsecrets\".\n\nSince every service is accessing the database with unique credentials, it makes\nauditing much easier when questionable data access is discovered. You can track\nit down to the specific instance of a service based on the SQL username.\n\nVault makes use of its own internal revocation system to ensure that users\nbecome invalid within a reasonable time of the lease expiring.\n\n### Static roles\n\nVault also supports **static roles** for all database secrets engines. Static\nroles are a 1-to-1 mapping of Vault roles to usernames in a database. 
With\nstatic roles, Vault stores and automatically rotates passwords for the\nassociated database user based on a configurable period of time or rotation\nschedule.\n\nWhen a client requests credentials for the static role, Vault returns the\ncurrent password for whichever database user is mapped to the requested role.\nWith static roles, anyone with the proper Vault policies can access the\nassociated user account in the database.\n\n<Warning title=\"Do not use static roles for root database credentials\">\n\nDo not manage the same root database credentials that you provide to Vault in\n<tt>config\/<\/tt> with static roles.\n\nVault does not distinguish between standard credentials and root credentials\nwhen rotating passwords. If you assign your root credentials to a static\nrole, any dynamic or static users managed by that database configuration will\nfail after rotation because the password for <tt>config\/<\/tt> is no longer\nvalid.\n\nIf you need to rotate root credentials, use the\n[Rotate root credentials](\/vault\/api-docs\/secret\/databases#rotate-root-credentials)\nAPI endpoint.\n\n<\/Warning>\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the database secrets engine:\n\n   ```shell-session\n   $ vault secrets enable database\n   Success! Enabled the database secrets engine at: database\/\n   ```\n\n   By default, the secrets engine will enable at the name of the engine. To\n   enable the secrets engine at a different path, use the `-path` argument.\n\n1. 
Configure Vault with the proper plugin and connection information:\n\n   ```shell-session\n   $ vault write database\/config\/my-database \\\n       plugin_name=\"...\" \\\n       connection_url=\"...\" \\\n       allowed_roles=\"...\" \\\n       username=\"...\" \\\n       password=\"...\"\n   ```\n\n   ~> It is highly recommended a user within the database is created\n   specifically for Vault to use. This user will be used to manipulate\n   dynamic and static users within the database. This user is called the\n   \"root\" user within the documentation.\n\n   Vault will use the user specified here to create\/update\/revoke database\n   credentials. That user must have the appropriate permissions to perform\n   actions upon other database users (create, update credentials, delete, etc.).\n\n   This secrets engine can configure multiple database connections. For details\n   on the specific configuration options, please see the database-specific\n   documentation.\n\n1. After configuring the root user, it is highly recommended you rotate that user's\n   password such that the vault user is not accessible by any users other than\n   Vault itself:\n\n   ```shell-session\n   $ vault write -force database\/rotate-root\/my-database\n   ```\n\n   !> When this is done, the password for the user specified in the previous step\n   is no longer accessible. Because of this, it is highly recommended that a\n   user is created specifically for Vault to use to manage database\n   users.\n\n1. Configure a role that maps a name in Vault to a set of creation statements to\n   create the database credential:\n\n   ```shell-session\n   $ vault write database\/roles\/my-role \\\n       db_name=my-database \\\n       creation_statements=\"...\" \\\n       default_ttl=\"1h\" \\\n       max_ttl=\"24h\"\n   Success! Data written to: database\/roles\/my-role\n   ```\n\n   The `` and `` fields will be populated by the plugin\n   with dynamically generated values. 
In some plugins the `` field is also supported.\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\n1.  Generate a new credential by reading from the `\/creds` endpoint with the name\n    of the role:\n\n    ```shell-session\n    $ vault read database\/creds\/my-role\n    Key                Value\n    ---                -----\n    lease_id           database\/creds\/my-role\/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6\n    lease_duration     1h\n    lease_renewable    true\n    password           FSREZ1S0kFsZtLat-y94\n    username           v-vaultuser-e2978cd0-ugp7iqI2hdlff5hfjylJ-1602537260\n    ```\n\n## Database capabilities\n\nAs of Vault 1.6, all databases support dynamic roles and static roles. All plugins except MongoDB Atlas support rotating\nthe root user's credentials. MongoDB Atlas cannot support rotating the root user's credentials because it uses a public\nand private key pair to authenticate.\n\n<a id=\"db-capabilities-table\" \/>\n\n\n| Database                                                            | UI support | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization | Credential Types             |\n| ------------------------------------------------------------------- | ---------- | ------------------------ | ------------- | ------------ | ---------------------- | ---------------------------- |\n| [Cassandra](\/vault\/docs\/secrets\/databases\/cassandra)                | No         | Yes                      | Yes           | Yes (1.6+)   | Yes (1.7+)             | password                     |\n| [Couchbase](\/vault\/docs\/secrets\/databases\/couchbase)                | No         | Yes                      | Yes           | Yes          | Yes (1.7+)             | password                     |\n| [Elasticsearch](\/vault\/docs\/secrets\/databases\/elasticdb)            | Yes (1.9+) | Yes                      | Yes           
| Yes (1.6+)   | Yes (1.8+)             | password                     |\n| [HanaDB](\/vault\/docs\/secrets\/databases\/hanadb)                      | No         | Yes (1.6+)               | Yes           | Yes (1.6+)   | Yes (1.12+)            | password                     |\n| [InfluxDB](\/vault\/docs\/secrets\/databases\/influxdb)                  | No         | Yes                      | Yes           | Yes (1.6+)   | Yes (1.8+)             | password                     |\n| [MongoDB](\/vault\/docs\/secrets\/databases\/mongodb)                    | Yes (1.7+) | Yes                      | Yes           | Yes          | Yes (1.7+)             | password                     |\n| [MongoDB Atlas](\/vault\/docs\/secrets\/databases\/mongodbatlas)         | No         | No                       | Yes           | Yes          | Yes (1.8+)             | password, client_certificate |\n| [MSSQL](\/vault\/docs\/secrets\/databases\/mssql)                        | Yes (1.8+) | Yes                      | Yes           | Yes          | Yes (1.7+)             | password                     |\n| [MySQL\/MariaDB](\/vault\/docs\/secrets\/databases\/mysql-maria)          | Yes (1.8+) | Yes                      | Yes           | Yes          | Yes (1.7+)             | password, gcp_iam            |\n| [Oracle](\/vault\/docs\/secrets\/databases\/oracle)                      | Yes (1.9+) | Yes                      | Yes           | Yes          | Yes (1.7+)             | password                     |\n| [PostgreSQL](\/vault\/docs\/secrets\/databases\/postgresql)              | Yes (1.9+) | Yes                      | Yes           | Yes          | Yes (1.7+)             | password, gcp_iam            |\n| [Redis](\/vault\/docs\/secrets\/databases\/redis)                        | No         | Yes                      | Yes           | Yes          | No                     | password                     |\n| [Redis ElastiCache](\/vault\/docs\/secrets\/databases\/rediselasticache) | No 
        | No                       | No            | Yes          | No                     | password                     |\n| [Redshift](\/vault\/docs\/secrets\/databases\/redshift)                  | No         | Yes                      | Yes           | Yes          | Yes (1.8+)             | password                     |\n| [Snowflake](\/vault\/docs\/secrets\/databases\/snowflake)                | No         | Yes                      | Yes           | Yes          | Yes (1.8+)             | password, rsa_private_key    |\n\n## Custom plugins\n\nThis secrets engine allows custom database types to be run through the exposed\nplugin interface. Please see the [custom database plugin](\/vault\/docs\/secrets\/databases\/custom)\nfor more information.\n\n## Credential types\n\nDatabase systems support a variety of authentication methods and credential types.\nThe database secrets engine supports management of credentials alternative to usernames\nand passwords. The [credential_type](\/vault\/api-docs\/secret\/databases#credential_type)\nand [credential_config](\/vault\/api-docs\/secret\/databases#credential_config) parameters\nof dynamic and static roles configure the credential that Vault will generate and\nmake available to database plugins. 
See the documentation of individual database\nplugins for the credential types they support and usage examples.\n\n## Schedule-based static role rotation\n\nThe database secrets engine supports configuring schedule-based automatic\ncredential rotation for static roles with the\n[rotation_schedule](\/vault\/api-docs\/secret\/databases#rotation_schedule) field.\nFor example:\n\n```shell-session\n$ vault write database\/static-roles\/my-role \\\n    db_name=my-database \\\n    username=\"vault\" \\\n    rotation_schedule=\"0 * * * SAT\"\n```\n\nThis configuration will set the role's credential rotation to occur on Saturday\nat 00:00.\n\nAdditionally, this schedule-based approach allows for optionally configuring a\n[rotation_window](\/vault\/api-docs\/secret\/databases#rotation_window) in which\nthe automatic rotation is allowed to occur. For example:\n\n```shell-session\n$ vault write database\/static-roles\/my-role \\\n    db_name=my-database \\\n    username=\"vault\" \\\n    rotation_window=\"1h\" \\\n    rotation_schedule=\"0 * * * SAT\"\n```\n\nThis configuration will set rotations to occur on Saturday at 00:00. The\n1-hour `rotation_window` will prevent the rotation from occurring after 01:00. If\nthe static role's credential is not rotated during this window, due to a failure\nor otherwise, it will not be rotated until the next scheduled rotation.\n\n!> The `rotation_period` and `rotation_schedule` fields are\nmutually exclusive. One of them must be set but not both.\n\n## Password generation\n\nPasswords are generated via [Password Policies](\/vault\/docs\/concepts\/password-policies).\nDatabases can optionally set a password policy for use across all roles or at the\nindividual role level for that database. For example, each time you call\n`vault write database\/config\/my-database` you can specify a password policy for all\nroles using `my-database`. 
Each database has a default password policy defined as:\n20 characters with at least 1 uppercase character, at least 1 lowercase character,\nat least 1 number, and at least 1 dash character.\n\nThe default password generation can be represented as the following password policy:\n\n```hcl\nlength = 20\n\nrule \"charset\" {\n\tcharset = \"abcdefghijklmnopqrstuvwxyz\"\n\tmin-chars = 1\n}\nrule \"charset\" {\n\tcharset = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n\tmin-chars = 1\n}\nrule \"charset\" {\n\tcharset = \"0123456789\"\n\tmin-chars = 1\n}\nrule \"charset\" {\n\tcharset = \"-\"\n\tmin-chars = 1\n}\n```\n\n## Disable character escaping\n\nAs of Vault 1.10, you can now specify the option `disable_escaping` with a value of `true` in\nsome secrets engines to prevent Vault from escaping special characters in the username and password\nfields. This is necessary for some alternate connection string formats, such as ADO with MSSQL or Azure\nSQL. See the [databases secrets engine API docs](\/vault\/api-docs\/secret\/databases#common-fields) and reference\nindividual plugin documentation to determine support for this parameter.\n\nFor example, when the password contains URL-escaped characters like `#` or `%`, they will\nremain as-is instead of becoming `%23` and `%25` respectively.\n\n```shell-session\n$ vault write database\/config\/my-mssql-database \\\nplugin_name=\"mssql-database-plugin\" \\\nconnection_url='server=localhost;port=1433;user id=;password=;database=mydb;' \\\nusername=\"root\" \\\npassword='your#StrongPassword%' \\\ndisable_escaping=\"true\"\n```\n\n## Unsupported databases\n\n### AWS DynamoDB\n\nAmazon Web Services (AWS) DynamoDB is a fully managed, serverless, key-value NoSQL database service. While\nDynamoDB is not supported by the database secrets engine, you can use the [AWS secrets engine](\/vault\/docs\/secrets\/aws)\nto provision dynamic credentials capable of accessing DynamoDB.\n\n1. Verify you have the AWS secrets engine enabled and configured.\n\n1. 
Create a role with the necessary permissions for your users to access DynamoDB. For example:\n\n   ```shell-session\n   $ vault write aws\/roles\/aws-dynamodb-read \\\n           credential_type=iam_user \\\n           policy_document=-<<EOF\n   {\n   \t\"Version\": \"2012-10-17\",\n   \t\"Statement\": [\n   \t\t{\n   \t\t\t\"Effect\": \"Allow\",\n   \t\t\t\"Action\": [\n   \t\t\t\t\"dynamodb:DescribeTable\",\n   \t\t\t\t\"dynamodb:GetItem\",\n   \t\t\t\t\"dynamodb:GetRecords\"\n   \t\t\t],\n   \t\t\t\"Resource\": \"arn:aws:dynamodb:us-east-1:1234567891:table\/example-table\"\n   \t\t},\n   \t\t{\n   \t\t\t\"Effect\": \"Allow\",\n   \t\t\t\"Action\": \"dynamodb:ListTables\",\n   \t\t\t\"Resource\": \"*\"\n   \t\t}\n   \t]\n   }\n   EOF\n   ```\n\n1. Generate dynamic credentials for DynamoDB using the `aws-dynamodb-read` role:\n\n   ```shell-session\n   $ vault read aws\/creds\/aws-dynamodb-read\n   Key                Value\n   ---                -----\n   lease_id           aws\/creds\/my-role\/kbSnl9WSDzOXQerd8GiVh75N.DACNl\n   lease_duration     1h\n   lease_renewable    true\n   access_key         AKALMNOP123456\n   secret_key         xY4XhS3AsM3s+R33tCaybsT2XI6BVL+vF+khbbYD\n   security_token     <nil>\n   ```\n\n1. Use the dynamic credentials generated by Vault to access DynamoDB. For example, to connect with the\n   [AWS CLI](https:\/\/docs.aws.amazon.com\/cli\/latest\/reference\/dynamodb\/).\n\n   ```shell-session\n   $ aws dynamodb list-tables --region us-east-1\n   {\n       \"TableNames\": [\n           \"example-table\"\n       ]\n   }\n   ```\n\n## Tutorial\n\nRefer to the following step-by-step tutorials for more information:\n\n- [Secrets as a Service: Dynamic Secrets](\/vault\/tutorials\/db-credentials\/database-secrets)\n- [Database Root Credential Rotation](\/vault\/tutorials\/db-credentials\/database-root-rotation)\n\n## API\n\nThe database secrets engine has a full HTTP API. 
Please see the [Database secret\nsecrets engine API](\/vault\/api-docs\/secret\/databases) for more details.","site":"vault","answers_cleaned":"    layout  docs page title  Database   Secrets Engines description       The database secrets engine generates database credentials dynamically based   on configured roles  It works with a number of different databases through a   plugin interface  There are a number of built in database types and an exposed   framework for running custom database types for extendability         Databases  The database secrets engine generates database credentials dynamically based on configured roles  It works with a number of different databases through a plugin interface  There are a number of built in database types  and an exposed framework for running custom database types for extendability  This means that services that need to access a database no longer need to hardcode credentials  they can request them from Vault  and use Vault s  leasing mechanism   vault docs concepts lease  to more easily roll keys  These are referred to as  dynamic roles  or  dynamic secrets    Since every service is accessing the database with unique credentials  it makes auditing much easier when questionable data access is discovered  You can track it down to the specific instance of a service based on the SQL username   Vault makes use of its own internal revocation system to ensure that users become invalid within a reasonable time of the lease expiring       Static roles  Vault also supports   static roles   for all database secrets engines  Static roles are a 1 to 1 mapping of Vault roles to usernames in a database  With static roles  Vault stores and automatically rotates passwords for the associated database user based on a configurable period of time or rotation schedule   When a client requests credentials for the static role  Vault returns the current password for whichever database user is mapped to the requested role  With static roles  anyone 
with the proper Vault policies can access the associated user account in the database    Warning title  Do not use static roles for root database credentials    Do not manage the same root database credentials that you provide to Vault in  tt config   tt  with static roles   Vault does not distinguish between standard credentials and root credentials when rotating passwords  If you assign your root credentials to a static role  any dynamic or static users managed by that database configuration will fail after rotation because the password for  tt config   tt  is no longer valid   If you need to rotate root credentials  use the  Rotate root credentials   vault api docs secret databases rotate root credentials  API endpoint     Warning      Setup  Most secrets engines must be configured in advance before they can perform their functions  These steps are usually completed by an operator or configuration management tool   1  Enable the database secrets engine         shell session      vault secrets enable database    Success  Enabled the database secrets engine at  database             By default  the secrets engine will enable at the name of the engine  To    enable the secrets engine at a different path  use the   path  argument   1  Configure Vault with the proper plugin and connection information         shell session      vault write database config my database          plugin name                connection url                allowed roles                username                password                       It is highly recommended a user within the database is created    specifically for Vault to use  This user will be used to manipulate    dynamic and static users within the database  This user is called the     root  user within the documentation      Vault will use the user specified here to create update revoke database    credentials  That user must have the appropriate permissions to perform    actions upon other database users  create  update credentials 
 delete  etc        This secrets engine can configure multiple database connections  For details    on the specific configuration options  please see the database specific    documentation   1  After configuring the root user  it is highly recommended you rotate that user s    password such that the vault user is not accessible by any users other than    Vault itself         shell session      vault write  force database rotate root my database               When this is done  the password for the user specified in the previous step    is no longer accessible  Because of this  it is highly recommended that a    user is created specifically for Vault to use to manage database    users   1  Configure a role that maps a name in Vault to a set of creation statements to    create the database credential         shell session      vault write database roles my role          db name my database          creation statements                default ttl  1h           max ttl  24h     Success  Data written to  database roles my role            The    and    fields will be populated by the plugin    with dynamically generated values  In some plugins the    field is also supported      Usage  After the secrets engine is configured and a user machine has a Vault token with the proper permission  it can generate credentials   1   Generate a new credential by reading from the   creds  endpoint with the name     of the role          shell session       vault read database creds my role     Key                Value                                  lease id           database creds my role 2f6a614c 4aa2 7b19 24b9 ad944a8d4de6     lease duration     1h     lease renewable    true     password           FSREZ1S0kFsZtLat y94     username           v vaultuser e2978cd0 ugp7iqI2hdlff5hfjylJ 1602537260             Database capabilities  As of Vault 1 6  all databases support dynamic roles and static roles  All plugins except MongoDB Atlas support rotating the root user s credentials  
MongoDB Atlas cannot support rotating the root user's credentials because it uses a public and private key pair to authenticate.

<a id="db-capabilities-table" />

| Database                                                          | UI support | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization | Credential Types             |
|-------------------------------------------------------------------|------------|--------------------------|---------------|--------------|------------------------|------------------------------|
| [Cassandra](/vault/docs/secrets/databases/cassandra)              | No         | Yes                      | Yes           | Yes (1.6)    | Yes (1.7)              | password                     |
| [Couchbase](/vault/docs/secrets/databases/couchbase)              | No         | Yes                      | Yes           | Yes          | Yes (1.7)              | password                     |
| [Elasticsearch](/vault/docs/secrets/databases/elasticdb)          | Yes (1.9)  | Yes                      | Yes           | Yes (1.6)    | Yes (1.8)              | password                     |
| [HanaDB](/vault/docs/secrets/databases/hanadb)                    | No         | Yes (1.6)                | Yes           | Yes (1.6)    | Yes (1.12)             | password                     |
| [InfluxDB](/vault/docs/secrets/databases/influxdb)                | No         | Yes                      | Yes           | Yes (1.6)    | Yes (1.8)              | password                     |
| [MongoDB](/vault/docs/secrets/databases/mongodb)                  | Yes (1.7)  | Yes                      | Yes           | Yes          | Yes (1.7)              | password                     |
| [MongoDB Atlas](/vault/docs/secrets/databases/mongodbatlas)       | No         | No                       | Yes           | Yes          | Yes (1.8)              | password, client_certificate |
| [MSSQL](/vault/docs/secrets/databases/mssql)                      | Yes (1.8)  | Yes                      | Yes           | Yes          | Yes (1.7)              | password                     |
| [MySQL/MariaDB](/vault/docs/secrets/databases/mysql-maria)        | Yes (1.8)  | Yes                      | Yes           | Yes          | Yes (1.7)              | password, gcp_iam            |
| [Oracle](/vault/docs/secrets/databases/oracle)                    | Yes (1.9)  | Yes                      | Yes           | Yes          | Yes (1.7)              | password                     |
| [PostgreSQL](/vault/docs/secrets/databases/postgresql)            | Yes (1.9)  | Yes                      | Yes           | Yes          | Yes (1.7)              | password, gcp_iam            |
| [Redis](/vault/docs/secrets/databases/redis)                      | No         | Yes                      | Yes           | Yes          | No                     | password                     |
| [Redis ElastiCache](/vault/docs/secrets/databases/rediselasticache) | No       | No                       | No            | Yes          | No                     | password                     |
| [Redshift](/vault/docs/secrets/databases/redshift)                | No         | Yes                      | Yes           | Yes          | Yes (1.8)              | password                     |
| [Snowflake](/vault/docs/secrets/databases/snowflake)              | No         | Yes                      | Yes           | Yes          | Yes (1.8)              | password, rsa_private_key    |

## Custom plugins

This secrets engine allows custom database types to be run through the exposed plugin interface. Please see the [custom database plugin](/vault/docs/secrets/databases/custom) for more information.

## Credential types

Database systems support a variety of authentication methods and credential types. The database secrets engine supports management of credentials alternative to usernames and passwords.
The [`credential_type`](/vault/api-docs/secret/databases#credential_type) and [`credential_config`](/vault/api-docs/secret/databases#credential_config) parameters of dynamic and static roles configure the credential that Vault will generate and make available to database plugins. See the documentation of individual database plugins for the credential types they support and usage examples.

## Schedule-based static role rotation

The database secrets engine supports configuring schedule-based automatic credential rotation for static roles with the [`rotation_schedule`](/vault/api-docs/secret/databases#rotation_schedule) field. For example:

```shell-session
$ vault write database/static-roles/my-role \
    db_name=my-database \
    username="vault" \
    rotation_schedule="0 * * * SAT"
```

This configuration will set the role's credential rotation to occur on Saturday at 00:00.

Additionally, this schedule-based approach allows for optionally configuring a [`rotation_window`](/vault/api-docs/secret/databases#rotation_window) in which the automatic rotation is allowed to occur. For example:

```shell-session
$ vault write database/static-roles/my-role \
    db_name=my-database \
    username="vault" \
    rotation_window="1h" \
    rotation_schedule="0 * * * SAT"
```

This configuration will set rotations to occur on Saturday at 00:00. The 1 hour `rotation_window` will prevent the rotation from occurring after 01:00. If the static role's credential is not rotated during this window, due to a failure or otherwise, it will not be rotated until the next scheduled rotation.

The `rotation_period` and `rotation_schedule` fields are mutually exclusive. One of them must be set but not both.

## Password generation

Passwords are generated via [Password Policies](/vault/docs/concepts/password-policies). Databases can optionally set a password policy for use across all roles or at the individual role level for that database. For example, each time you call `vault write database/config/my-database` you can specify a password policy for all roles using `my-database`. Each database has a default password policy defined as: 20 characters with at least 1 uppercase character, at least 1 lowercase character, at least 1 number, and at least 1 dash character.

The default password generation can be represented as the following password policy:

```hcl
length = 20

rule "charset" {
  charset = "abcdefghijklmnopqrstuvwxyz"
  min-chars = 1
}
rule "charset" {
  charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
  min-chars = 1
}
rule "charset" {
  charset = "0123456789"
  min-chars = 1
}
rule "charset" {
  charset = "-"
  min-chars = 1
}
```

## Disable character escaping

As of Vault 1.10, you can now specify the option `disable_escaping` with a value of `true` in some secrets engines to prevent Vault from escaping special characters in the username and password fields. This is necessary for some alternate connection string formats, such as ADO with MSSQL or Azure SQL. See the [databases secrets engine API docs](/vault/api-docs/secret/databases#common-fields) and reference individual plugin documentation to determine support for this parameter.

For example, when the password contains URL-escaped characters like `#` or `%` they will remain as so instead of becoming `%23` and `%25` respectively.

```shell-session
$ vault write database/config/my-mssql-database \
    plugin_name="mssql-database-plugin" \
    connection_url='server=localhost;port=1433;user id={{username}};password={{password}};database=mydb;' \
    username="root" \
    password='your#StrongPassword%' \
    disable_escaping="true"
```

## Unsupported databases

### AWS DynamoDB

Amazon Web Services (AWS) DynamoDB is a fully managed, serverless, key-value NoSQL database service. While DynamoDB is not supported by the database secrets engine, you can use the [AWS secrets engine](/vault/docs/secrets/aws) to provision dynamic credentials capable of accessing DynamoDB.

1. Verify you have the AWS secrets engine enabled and configured.

1. Create a role with the necessary permissions for your users to access DynamoDB. For example:

   ```shell-session
   $ vault write aws/roles/aws-dynamodb-read \
       credential_type=iam_user \
       policy_document=-<<EOF
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "dynamodb:DescribeTable",
           "dynamodb:GetItem",
           "dynamodb:GetRecords"
         ],
         "Resource": "arn:aws:dynamodb:us-east-1:1234567891:table/example-table"
       },
       {
         "Effect": "Allow",
         "Action": "dynamodb:ListTables",
         "Resource": "*"
       }
     ]
   }
   EOF
   ```

1. Generate dynamic credentials for DynamoDB using the `aws-dynamodb-read` role:

   ```shell-session
   $ vault read aws/creds/aws-dynamodb-read
   Key                Value
   ---                -----
   lease_id           aws/creds/my-role/kbSnl9WSDzOXQerd8GiVh75N.DACNl
   lease_duration     1h
   lease_renewable    true
   access_key         AKALMNOP123456
   secret_key         xY4XhS3AsM3s/R33tCaybsT2XI6BVL/vF+khbbYD
   security_token     <nil>
   ```

1. Use the dynamic credentials generated by Vault to access DynamoDB. For example, to connect with the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/dynamodb/):

   ```shell-session
   $ aws dynamodb list-tables --region us-east-1
   {
     "TableNames": [
       "example-table"
     ]
   }
   ```

## Tutorial

Refer to the following step-by-step tutorials for more information:

- [Secrets as a Service: Dynamic Secrets](/vault/tutorials/db-credentials/database-secrets)
- [Database Root Credential Rotation](/vault/tutorials/db-credentials/database-root-rotation)

## API

The database secrets engine has a full HTTP API. Please see the [Database secrets engine API](/vault/api-docs/secret/databases) for more details."}
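The default password policy above (20 characters with at least one uppercase, lowercase, digit, and dash) can be approximated outside of Vault. A minimal Python sketch of the same constraints, for illustration only; Vault's actual generator lives behind the password-policies engine:

```python
import secrets
import string

LOWER = string.ascii_lowercase
UPPER = string.ascii_uppercase
DIGITS = string.digits
DASH = "-"

def default_policy_password(length: int = 20) -> str:
    """Generate a password meeting the documented default policy:
    20 chars with at least one lowercase, uppercase, digit, and dash."""
    alphabet = LOWER + UPPER + DIGITS + DASH
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Rejection-sample until every charset rule (min-chars = 1) is met.
        if (any(c in LOWER for c in candidate)
                and any(c in UPPER for c in candidate)
                and any(c in DIGITS for c in candidate)
                and DASH in candidate):
            return candidate

pw = default_policy_password()
```

Rejection sampling keeps the character distribution uniform over the alphabet while still guaranteeing each `rule "charset"` minimum, which is the same shape of constraint the HCL policy expresses.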
{"questions":"vault write your own code to generate credentials in any database you wish It also allows databases that require dynamically linked libraries to be used as plugins while keeping Vault itself statically linked layout docs The database secrets engine allows new functionality to be added through a plugin interface without needing to modify Vault s core code This allows you page title Custom Database Secrets Engines","answers":"---\nlayout: docs\npage_title: Custom - Database - Secrets Engines\ndescription: |-\n  The database secrets engine allows new functionality to be added through a\n  plugin interface without needing to modify Vault's core code. This allows you\n  to write your own code to generate credentials in any database you wish. It also\n  allows databases that require dynamically linked libraries to be used as\n  plugins while keeping Vault itself statically linked.\n---\n\n# Custom database secrets engines\n\n~> The interface for custom database plugins has changed in Vault 1.6. Vault will\ncontinue to recognize the now deprecated version of this interface for some time.\nIf you are using a plugin with the deprecated interface, you should upgrade to the\nnewest version. See [Upgrading database plugins](#upgrading-database-plugins)\nfor more details.\n\n~> **Advanced topic!** Plugin development is a highly advanced topic in Vault,\nand is not required knowledge for day-to-day usage. If you don't plan on writing\nany plugins, feel free to skip this section of the documentation.\n\nThe database secrets engine allows new functionality to be added through a\nplugin interface without needing to modify Vault's core code. This allows you\nto write your own code to generate credentials in any database you wish. 
It also\nallows databases that require dynamically linked libraries to be used as plugins\nwhile keeping Vault itself statically linked.\n\nPlease read the [Plugins internals](\/vault\/docs\/plugins) docs for more\ninformation about the plugin system before getting started building your\nDatabase plugin.\n\nDatabase plugins can be made to implement\n[plugin multiplexing](\/vault\/docs\/plugins\/plugin-architecture#plugin-multiplexing)\nwhich allows a single plugin process to be used for multiple database\nconnections. To enable multiplexing, the plugin must be compiled with the\n`ServeMultiplex` function call from Vault's `dbplugin` package.\n\n\n## Plugin interface\n\nAll plugins for the database secrets engine must implement the same interface. This interface\nis found in `sdk\/database\/dbplugin\/v5\/database.go`\n\n```go\ntype Database interface {\n\t\/\/ Initialize the database plugin. This is the equivalent of a constructor for the\n\t\/\/ database object itself.\n\tInitialize(ctx context.Context, req InitializeRequest) (InitializeResponse, error)\n\n\t\/\/ NewUser creates a new user within the database. This user is temporary in that it\n\t\/\/ will exist until the TTL expires.\n\tNewUser(ctx context.Context, req NewUserRequest) (NewUserResponse, error)\n\n\t\/\/ UpdateUser updates an existing user within the database.\n\tUpdateUser(ctx context.Context, req UpdateUserRequest) (UpdateUserResponse, error)\n\n\t\/\/ DeleteUser from the database. This should not error if the user didn't\n\t\/\/ exist prior to this call.\n\tDeleteUser(ctx context.Context, req DeleteUserRequest) (DeleteUserResponse, error)\n\n\t\/\/ Type returns the Name for the particular database backend implementation.\n\t\/\/ This type name is usually set as a constant within the database backend\n\t\/\/ implementation, e.g. \"mysql\" for the MySQL database backend. This is used\n\t\/\/ for things like metrics and logging. 
No behavior is switched on this.\n\tType() (string, error)\n\n\t\/\/ Close attempts to close the underlying database connection that was\n\t\/\/ established by the backend.\n\tClose() error\n}\n```\n\nEach of the request and response objects can also be found in `sdk\/database\/dbplugin\/v5\/database.go`.\n\nIn each of the requests, you will see at least 1 `Statements` object (in `UpdateUserRequest`\nthey are in sub-fields). This object represents the set of commands to run for that particular\noperation. For the `NewUser` function, this is a set of commands to create the user (and often\nset permissions for that user). These statements are from the following fields in the API:\n\n| API Argument               | Request Object                                     |\n| -------------------------- | -------------------------------------------------- |\n| `creation_statements`      | `NewUserRequest.Statements.Commands`               |\n| `revocation_statements`    | `DeleteUserRequest.Statements.Commands`            |\n| `rollback_statements`      | `NewUserRequest.RollbackStatements.Commands`       |\n| `renew_statements`         | `UpdateUserRequest.Expiration.Statements.Commands` |\n| `rotation_statements`      | `UpdateUserRequest.Password.Statements.Commands`   |\n| `root_rotation_statements` | `UpdateUserRequest.Password.Statements.Commands`   |\n\nIn many of the built-in plugins, they replace `` (or ``), ``,\nand\/or `` with the associated values. It is up to your plugin to perform these\nstring replacements. There is a helper function located in `sdk\/database\/helper\/dbutil`\ncalled `QueryHelper` that assists in doing this string replacement. You are not required to\nuse it, but it will make your plugin's behavior consistent with the built-in plugins.\n\nThe `InitializeRequest` object contains a map of keys to values. This data is what the\nuser specified as the configuration for the plugin. 
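The `{{username}}`/`{{password}}` statement-template replacement described above can be sketched as follows. This is an illustrative Python approximation of what the Go `QueryHelper` in `sdk/database/helper/dbutil` does, not the real helper:

```python
def render_statements(commands, username, password, expiration=None):
    """Substitute the {{username}}, {{password}}, and optional {{expiration}}
    template fields in a list of statement commands, mirroring the string
    replacement built-in plugins perform (illustrative sketch only)."""
    values = {"username": username, "password": password}
    if expiration is not None:
        values["expiration"] = expiration
    rendered = []
    for cmd in commands:
        for key, val in values.items():
            cmd = cmd.replace("{{" + key + "}}", val)
        rendered.append(cmd)
    return rendered

# Hypothetical creation_statements for a SQL-style plugin.
stmts = render_statements(
    ["CREATE USER '{{username}}' IDENTIFIED BY '{{password}}';"],
    username="v-vaultuser-abc123",
    password="FSREZ1S0kFsZtLat-y94",
)
```

A custom plugin is free to use any substitution mechanism, but performing the same `{{...}}` replacement keeps its behavior consistent with the built-in plugins.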
Your plugin should use this\ndata to make connections to the database. The response object contains a similar configuration\nmap. The response object should contain the configuration map that should be saved within Vault.\nThis allows the plugin to manipulate the configuration prior to saving it.\n\nIt is also passed a boolean value (`InitializeRequest.VerifyConnection`) indicating if your\nplugin should initialize a connection to the database during the `Initialize` call. This\nfunction is called when the configuration is written. This allows the user to know whether\nthe configuration is valid and able to connect to the database in question. If this is set to\nfalse, no connection should be made during the `Initialize` call, but subsequent calls to the\nother functions will need to open a connection.\n\n## Serving a plugin\n\n### Serving a plugin with multiplexing\n\n~> Plugin multiplexing requires `github.com\/hashicorp\/vault\/sdk v0.4.0` or above.\n\nThe plugin runs as a separate binary outside of Vault, so the plugin itself\nwill need a `main` function. Use the `ServeMultiplex` function within\n`sdk\/database\/dbplugin\/v5` to serve your multiplexed plugin.\n\nBelow is an example setup:\n\n```go\npackage main\n\nimport (\n\t\"github.com\/hashicorp\/vault\/api\"\n\tdbplugin \"github.com\/hashicorp\/vault\/sdk\/database\/dbplugin\/v5\"\n)\n\nfunc main() {\n\tapiClientMeta := &api.PluginAPIClientMeta{}\n\tflags := apiClientMeta.FlagSet()\n\tflags.Parse(os.Args[1:])\n\n\terr := Run()\n\tif err != nil {\n\t\tlog.Println(err)\n\t\tos.Exit(1)\n\t}\n}\n\nfunc Run() error {\n\tdbplugin.ServeMultiplex(dbType.(dbplugin.New))\n\n\treturn nil\n}\n\nfunc New() (interface{}, error) {\n\tdb, err := newDatabase()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t\/\/ This middleware isn't strictly required, but highly recommended to prevent accidentally exposing\n\t\/\/ values such as passwords in error messages. 
An example of this is included below\n\tdb = dbplugin.NewDatabaseErrorSanitizerMiddleware(db, db.secretValues)\n\treturn db, nil\n}\n\ntype MyDatabase struct {\n\t\/\/ Variables for the database\n\tpassword string\n}\n\nfunc newDatabase() (*MyDatabase, error) {\n\t\/\/ ...\n\tdb := &MyDatabase{\n\t\t\/\/ ...\n\t}\n\treturn db, nil\n}\n\nfunc (db *MyDatabase) secretValues() map[string]string {\n\treturn map[string]string{\n\t\tdb.password: \"[password]\",\n\t}\n}\n```\n\nReplace `MyDatabase` with the actual implementation of your database plugin.\n\n### Serving a plugin without multiplexing\n\nServing a plugin without multiplexing requires calling the `Serve` function\nfrom `sdk\/database\/dbplugin\/v5` to serve your plugin.\n\nThe setup is exactly the same as the multiplexed case above, except for the\n`Run` function:\n\n```go\nfunc Run() error {\n\tdbType, err := New()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tdbplugin.Serve(dbType.(dbplugin.Database))\n\n\treturn nil\n}\n```\n\n## Running your plugin\n\nThe above main package, once built, will supply you with a binary of your\nplugin. If you are planning on distributing your plugin, we also recommend\nbuilding with [gox](https:\/\/github.com\/mitchellh\/gox) for cross-platform builds.\n\nTo use your plugin with the database secrets engine you need to place the binary in the\nplugin directory as specified in the [plugin internals](\/vault\/docs\/plugins) docs.\n\nYou should now be able to register your plugin into the Vault catalog. To do\nthis your token will need sudo permissions.\n\n```shell-session\n$ vault write sys\/plugins\/catalog\/database\/mydatabase-database-plugin \\\n    sha256=\"...\" \\\n    command=\"mydatabase\"\nSuccess! 
Data written to: sys\/plugins\/catalog\/database\/mydatabase-database-plugin\n```\n\nNow you should be able to configure your plugin like any other:\n\n```shell-session\n$ vault write database\/config\/mydatabase \\\n    plugin_name=mydatabase-database-plugin \\\n    allowed_roles=\"readonly\" \\\n    myplugins_connection_details=\"...\"\n```\n\n## Updating database plugins to leverage plugin versioning\n\n@include 'plugin-versioning.mdx'\n\nIn addition to the `Database` interface above, database plugins can then also\nimplement the\n[`PluginVersioner`](https:\/\/github.com\/hashicorp\/vault\/blob\/sdk\/v0.6.0\/sdk\/logical\/logical.go#L150-L154)\ninterface:\n\n```go\n\/\/ PluginVersioner is an optional interface to return version info.\ntype PluginVersioner interface {\n\t\/\/ PluginVersion returns the version for the backend\n\tPluginVersion() PluginVersion\n}\n\ntype PluginVersion struct {\n\tVersion string\n}\n```\n\n## Upgrading database plugins to leverage plugin multiplexing\n\n### Background\n\nScaling many external plugins can become resource intensive. To address\nperformance problems with scaling external plugins, database plugins can be\nmade to implement [plugin multiplexing](\/vault\/docs\/plugins\/plugin-architecture#plugin-multiplexing)\nwhich allows a single plugin process to be used for multiple database\nconnections. To enable multiplexing, the plugin must be compiled with the\n`ServeMultiplex` function call from Vault's `dbplugin` package.\n\n### Upgrading your database plugin to leverage plugin multiplexing\n\nThere is only one step required to upgrade from a non-multiplexed to a\nmultiplexed database plugin: Change the `Serve` function call to `ServeMultiplex`.\n\nThis will run the RPC server for the plugin just as before. 
However, the\n`ServeMultiplex` function takes the factory function directly as its argument.\nThis factory function is a function that returns an object that implements the\n[`dbplugin.Database` interface](\/vault\/docs\/secrets\/databases\/custom#plugin-interface).\n\n### When should plugin multiplexing be avoided?\n\nSome use cases that should avoid plugin multiplexing might include:\n\n* Plugin process level separation is required\n* Avoiding restart across all mounts\/database connections for a plugin type on\n  crashes or plugin reload calls\n\n## Upgrading database plugins to the v5 interface\n\n### Background\n\nIn Vault 1.6, the database interface changed. The new version is referred to as version 5\nand the previous version as version 4. This is due to prior versioning of the interface\nthat was not explicitly exposed.\n\nThe new interface was introduced for several reasons:\n\n1. [Password policies](\/vault\/docs\/concepts\/password-policies) introduced in Vault 1.5 required\n   that Vault be responsible for generating passwords. In the prior version, the database\n   plugin was responsible for generating passwords. This prevented integration with\n   password policies.\n2. Passwords needed to be generated by database plugins. This meant that plugin authors\n   were responsible for generating secure passwords. This should be done with a helper\n   function available within the Vault SDK, however there was nothing preventing an\n   author from generating insecure passwords.\n3. There were a number of inconsistencies within the version 4 interface that made it\n   confusing for authors. For instance: passwords were handled in 3 different ways.\n   `CreateUser` generated a password and returned it, `SetCredentials` receives a password\n   via a configuration struct and returns it, and `RotateRootCredentials` generated a\n   password and was expected to return an updated copy of its entire configuration\n   with the new password.\n4. 
The `SetCredentials` and `RotateRootCredentials` used for static credential rotation,\n   and rotating the root user's credentials respectively were essentially the same operation:\n   change a user's password. The only practical difference was which user it was referring\n   to. This was especially evident when `SetCredentials` was used when rotating root\n   credentials (unless static credential rotation wasn't supported by the plugin in question).\n5. The old interface included both `Init` and `Initialize` adding to the confusion.\n\nThe new interface is roughly modeled after a [gRPC](https:\/\/grpc.io\/) interface. It has improved\nfuture compatibility by not requiring changes to the interface definition to add additional data\nin the requests or responses. It also simplifies the interface by merging several into a single\nfunction call.\n\n### Upgrading your custom database\n\nVault 1.6 supports both version 4 and version 5 database plugins. The support for version 4\nplugins will be removed in a future release. Version 5 database plugins will not function with\nVault prior to version 1.6. If you upgrade your database plugins, ensure that you are only using\nVault 1.6 or later. To determine if a plugin is using version 4 or version 5, the following is a\nlist of changes in no particular order that you can check against your plugin to determine\nthe version:\n\n1. The import path for version 4 is `github.com\/hashicorp\/vault\/sdk\/database\/dbplugin`\n   whereas the import path for version 5 is `github.com\/hashicorp\/vault\/sdk\/database\/dbplugin\/v5`\n2. Version 4 has the following functions: `Initialize`, `Init`, `CreateUser`, `RenewUser`,\n   `RevokeUser`, `SetCredentials`, `RotateRootCredentials`, `Type`, and `Close`. You can see the\n   full function signatures in `sdk\/database\/dbplugin\/plugin.go`.\n3. Version 5 has the following functions: `Initialize`, `NewUser`, `UpdateUser`, `DeleteUser`,\n   `Type`, and `Close`. 
You can see the full function signatures in\n   `sdk\/database\/dbplugin\/v5\/database.go`.\n\nIf you are using a version 4 custom database plugin, the following are basic instructions\nfor upgrading to version 5.\n\n-> In version 4, password generation was the responsibility of the plugin. This is no longer\nthe case with version 5. Vault is responsible for generating passwords and passing them to\nthe plugin via `NewUserRequest.Password` and `UpdateUserRequest.Password.NewPassword`.\n\n1. Change the import path from `github.com\/hashicorp\/vault\/sdk\/database\/dbplugin` to\n   `github.com\/hashicorp\/vault\/sdk\/database\/dbplugin\/v5`. The package name is the same, so any\n   references to `dbplugin` can remain as long as those symbols exist within the new package\n   (such as the `Serve` function).\n2. An easy way to see what functions need to be implemented is to put the following as a\n   global variable within your package: `var _ dbplugin.Database = (*MyDatabase)(nil)`. This\n   will fail to compile if the `MyDatabase` type does not adhere to the `dbplugin.Database` interface.\n3. Replace `Init` and `Initialize` with the new `Initialize` function definition. The fields that\n   `Init` was taking (`config` and `verifyConnection`) are now wrapped into `InitializeRequest`.\n   The returned `map[string]interface{}` object is now wrapped into `InitializeResponse`.\n   Only `Initialize` is needed to adhere to the `Database` interface.\n4. Update `CreateUser` to `NewUser`. The `NewUserRequest` object contains the username and\n   password of the user to be created. It also includes a list of statements for creating the\n   user as well as several other fields that may or may not be applicable. Your custom plugin\n   should use the password provided in the request, not generate one. If you generate a password\n   instead, Vault will not know about it and will give the caller the wrong password.\n5. 
`SetCredentials`, `RotateRootCredentials`, and `RenewUser` are combined into `UpdateUser`.\n   The request object, `UpdateUserRequest` contains three parts: the username to change, a\n   `ChangePassword` and a `ChangeExpiration` object. When one of the objects is not nil, this\n   indicates that particular field (password or expiration) needs to change. For instance, if\n   the `ChangePassword` field is not-nil, the user's password should be changed. This is\n   equivalent to calling `SetCredentials`. If the `ChangeExpiration` field is not-nil, the\n   user's expiration date should be changed. This is equivalent to calling `RenewUser`.\n   Many databases don't need to do anything with the updated expiration.\n6. Update `RevokeUser` to `DeleteUser`. This is the simplest change. The username to be\n   deleted is enclosed in the `DeleteUserRequest` object.","site":"vault","answers_cleaned":"    layout  docs page title  Custom   Database   Secrets Engines description       The database secrets engine allows new functionality to be added through a   plugin interface without needing to modify Vault s core code  This allows you   write your own code to generate credentials in any database you wish  It also   allows databases that require dynamically linked libraries to be used as   plugins while keeping Vault itself statically linked         Custom database secrets engines     The interface for custom database plugins has changed in Vault 1 6  Vault will continue to recognize the now deprecated version of this interface for some time  If you are using a plugin with the deprecated interface  you should upgrade to the newest version  See  Upgrading database plugins   upgrading database plugins  for more details        Advanced topic    Plugin development is a highly advanced topic in Vault  and is not required knowledge for day to day usage  If you don t plan on writing any plugins  feel free to skip this section of the documentation   The database secrets engine allows new 
functionality to be added through a plugin interface without needing to modify Vault's core code. This allows you to write your own code to generate credentials in any database you wish. It also allows databases that require dynamically linked libraries to be used as plugins while keeping Vault itself statically linked.

Please read the [Plugins internals](/vault/docs/plugins) docs for more information about the plugin system before getting started building your Database plugin.

Database plugins can be made to implement [plugin multiplexing](/vault/docs/plugins/plugin-architecture#plugin-multiplexing), which allows a single plugin process to be used for multiple database connections. To enable multiplexing, the plugin must be compiled with the `ServeMultiplex` function call from Vault's `dbplugin` package.

## Plugin interface

All plugins for the database secrets engine must implement the same interface. This interface is found in `sdk/database/dbplugin/v5/database.go`:

```go
type Database interface {
	// Initialize the database plugin. This is the equivalent of a constructor for the
	// database object itself.
	Initialize(ctx context.Context, req InitializeRequest) (InitializeResponse, error)

	// NewUser creates a new user within the database. This user is temporary in that it
	// will exist until the TTL expires.
	NewUser(ctx context.Context, req NewUserRequest) (NewUserResponse, error)

	// UpdateUser updates an existing user within the database.
	UpdateUser(ctx context.Context, req UpdateUserRequest) (UpdateUserResponse, error)

	// DeleteUser from the database. This should not error if the user didn't
	// exist prior to this call.
	DeleteUser(ctx context.Context, req DeleteUserRequest) (DeleteUserResponse, error)

	// Type returns the Name for the particular database backend implementation.
	// This type name is usually set as a constant within the database backend
	// implementation, e.g. "mysql" for the MySQL database backend. This is used
	// for things like metrics and logging. No behavior is switched on this.
	Type() (string, error)

	// Close attempts to close the underlying database connection that was
	// established by the backend.
	Close() error
}
```

Each of the request and response objects can also be found in `sdk/database/dbplugin/v5/database.go`.

In each of the requests, you will see at least one `Statements` object (in `UpdateUserRequest` they are in sub-fields). This object represents the set of commands to run for that particular operation. For the `NewUser` function, this is a set of commands to create the user (and often set permissions for that user). These statements are from the following fields in the API:

| API Argument               | Request Object                                     |
| -------------------------- | -------------------------------------------------- |
| `creation_statements`      | `NewUserRequest.Statements.Commands`               |
| `revocation_statements`    | `DeleteUserRequest.Statements.Commands`            |
| `rollback_statements`      | `NewUserRequest.RollbackStatements.Commands`       |
| `renew_statements`         | `UpdateUserRequest.Expiration.Statements.Commands` |
| `rotation_statements`      | `UpdateUserRequest.Password.Statements.Commands`   |
| `root_rotation_statements` | `UpdateUserRequest.Password.Statements.Commands`   |

Many of the built-in plugins replace templated values such as `{{name}}` and `{{password}}` with the associated values. It is up to your plugin to perform these string replacements. There is a helper function located in `sdk/database/helper/dbutil` called `QueryHelper` that assists in doing this string replacement. You are not required to use it, but it will make your plugin's behavior consistent with the built-in plugins.

The `InitializeRequest` object contains a map of keys to values. This data is what the user specified as the configuration for the plugin. Your plugin should use this data to make connections to the database. The response object contains a similar configuration map: the configuration map that should be saved within Vault. This allows the plugin to manipulate the configuration prior to saving it.

`Initialize` is also passed a boolean value (`InitializeRequest.VerifyConnection`) indicating if your plugin should initialize a connection to the database during the `Initialize` call. This function is called when the configuration is written, which lets the user know whether the configuration is valid and able to connect to the database in question. If this is set to false, no connection should be made during the `Initialize` call, but subsequent calls to the other functions will need to open a connection.

## Serving a plugin

### Serving a plugin with multiplexing

Plugin multiplexing requires `github.com/hashicorp/vault/sdk` v0.4.0 or above.

The plugin runs as a separate binary outside of Vault, so the plugin itself will need a `main` function. Use the `ServeMultiplex` function within `sdk/database/dbplugin/v5` to serve your multiplexed plugin. Below is an example setup:

```go
package main

import (
	"log"
	"os"

	"github.com/hashicorp/vault/api"
	dbplugin "github.com/hashicorp/vault/sdk/database/dbplugin/v5"
)

func main() {
	apiClientMeta := &api.PluginAPIClientMeta{}
	flags := apiClientMeta.FlagSet()
	flags.Parse(os.Args[1:])

	err := Run()
	if err != nil {
		log.Println(err)
		os.Exit(1)
	}
}

func Run() error {
	dbplugin.ServeMultiplex(New)

	return nil
}

func New() (interface{}, error) {
	db, err := newDatabase()
	if err != nil {
		return nil, err
	}

	// This middleware isn't strictly required, but highly recommended to prevent accidentally exposing
	// values such as passwords in error messages. An example of this is included below.
	sanitized := dbplugin.NewDatabaseErrorSanitizerMiddleware(db, db.secretValues)
	return sanitized, nil
}

type MyDatabase struct {
	// Variables for the database
	password string
}

func newDatabase() (*MyDatabase, error) {
	// ...
	db := &MyDatabase{
		// ...
	}
	return db, nil
}

func (db *MyDatabase) secretValues() map[string]string {
	return map[string]string{
		db.password: "[password]",
	}
}
```

Replace `MyDatabase` with the actual implementation of your database plugin.

### Serving a plugin without multiplexing

Serving a plugin without multiplexing requires calling the `Serve` function from `sdk/database/dbplugin/v5` to serve your plugin. The setup is exactly the same as the multiplexed case above, except for the `Run` function:

```go
func Run() error {
	dbType, err := New()
	if err != nil {
		return err
	}

	dbplugin.Serve(dbType.(dbplugin.Database))

	return nil
}
```

## Running your plugin

The above main package, once built, will supply you with a binary of your plugin. We also recommend, if you are planning on distributing your plugin, to build with [gox](https://github.com/mitchellh/gox) for cross platform builds.

To use your plugin with the database secrets engine you need to place the binary in the plugin directory as specified in the [plugin internals](/vault/docs/plugins) docs.

You should now be able to register your plugin into the Vault catalog. To do this your token will need sudo permissions.

```shell-session
$ vault write sys/plugins/catalog/database/mydatabase-database-plugin \
    sha256="..." \
    command="mydatabase"
Success! Data written to: sys/plugins/catalog/database/mydatabase-database-plugin
```

Now you should be able to configure your plugin like any other:

```shell-session
$ vault write database/config/mydatabase \
    plugin_name=mydatabase-database-plugin \
    allowed_roles="readonly" \
    myplugins_connection_details=...
```

## Updating database plugins to leverage plugin versioning

@include 'plugin-versioning.mdx'

In addition to the `Database` interface above, database plugins can then also implement the [`PluginVersioner`](https://github.com/hashicorp/vault/blob/sdk/v0.6.0/sdk/logical/logical.go#L150-L154) interface:

```go
// PluginVersioner is an optional interface to return version info.
type PluginVersioner interface {
	// PluginVersion returns the version for the backend
	PluginVersion() PluginVersion
}

type PluginVersion struct {
	Version string
}
```

## Upgrading database plugins to leverage plugin multiplexing

### Background

Scaling many external plugins can become resource intensive. To address performance problems with scaling external plugins, database plugins can be made to implement [plugin multiplexing](/vault/docs/plugins/plugin-architecture#plugin-multiplexing), which allows a single plugin process to be used for multiple database connections. To enable multiplexing, the plugin must be compiled with the `ServeMultiplex` function call from Vault's `dbplugin` package.

### Upgrading your database plugin to leverage plugin multiplexing

There is only one step required to upgrade from a non-multiplexed to a multiplexed database plugin: change the `Serve` function call to `ServeMultiplex`.

This will run the RPC server for the plugin just as before. However, the `ServeMultiplex` function takes the factory function directly as its argument. This factory function is a function that returns an object that implements the [`dbplugin.Database` interface](/vault/docs/secrets/databases/custom#plugin-interface).

### When should plugin multiplexing be avoided?

Some use cases that should avoid plugin multiplexing might include:

- Plugin process level separation is required
- Avoiding restart across all mounts/database connections for a plugin type on crashes or plugin reload calls

## Upgrading database plugins to the v5 interface

### Background

In Vault 1.6, the database interface changed. The new version is referred to as version 5 and the previous version as version 4. This is due to prior versioning of the interface that was not explicitly exposed.

The new interface was introduced for several reasons:

1. [Password policies](/vault/docs/concepts/password-policies) introduced in Vault 1.5 required that Vault be responsible for generating passwords. In the prior version, the database plugin was responsible for generating passwords. This prevented integration with password policies.
2. Passwords needed to be generated by database plugins. This meant that plugin authors were responsible for generating secure passwords. This should be done with a helper function available within the Vault SDK, however there was nothing preventing an author from generating insecure passwords.
3. There were a number of inconsistencies within the version 4 interface that made it confusing for authors. For instance, passwords were handled in 3 different ways: `CreateUser` generated a password and returned it, `SetCredentials` received a password via a configuration struct and returned it, and `RotateRootCredentials` generated a password and was expected to return an updated copy of its entire configuration with the new password.
4. The `SetCredentials` and `RotateRootCredentials` functions (used for static credential rotation and rotating the root user's credentials, respectively) were essentially the same operation: change a user's password. The only practical difference was which user they referred to. This was especially evident when `SetCredentials` was used when rotating root credentials (unless static credential rotation wasn't supported by the plugin in question).
5. The old interface included both `Init` and `Initialize`, adding to the confusion.

The new interface is roughly modeled after a [gRPC](https://grpc.io/) interface. It has improved future compatibility by not requiring changes to the interface definition to add additional data in the requests or responses. It also simplifies the interface by merging several functions into a single function call.

### Upgrading your custom database

Vault 1.6 supports both version 4 and version 5 database plugins. The support for version 4 plugins will be removed in a future release. Version 5 database plugins will not function with Vault prior to version 1.6. If you upgrade your database plugins, ensure that you are only using Vault 1.6 or later.

To determine if a plugin is using version 4 or version 5, the following is a list of changes, in no particular order, that you can check against your plugin to determine the version:

1. The import path for version 4 is `github.com/hashicorp/vault/sdk/database/dbplugin`, whereas the import path for version 5 is `github.com/hashicorp/vault/sdk/database/dbplugin/v5`.
2. Version 4 has the following functions: `Initialize`, `Init`, `CreateUser`, `RenewUser`, `RevokeUser`, `SetCredentials`, `RotateRootCredentials`, `Type`, and `Close`. You can see the full function signatures in `sdk/database/dbplugin/plugin.go`.
3. Version 5 has the following functions: `Initialize`, `NewUser`, `UpdateUser`, `DeleteUser`, `Type`, and `Close`. You can see the full function signatures in `sdk/database/dbplugin/v5/database.go`.

If you are using a version 4 custom database plugin, the following are basic instructions for upgrading to version 5.

In version 4, password generation was the responsibility of the plugin. This is no longer the case with version 5. Vault is responsible for generating passwords and passing them to the plugin via `NewUserRequest.Password` and `UpdateUserRequest.Password.NewPassword`.

1. Change the import path from `github.com/hashicorp/vault/sdk/database/dbplugin` to `github.com/hashicorp/vault/sdk/database/dbplugin/v5`. The package name is the same, so any references to `dbplugin` can remain as long as those symbols exist within the new package (such as the `Serve` function).
2. An easy way to see what functions need to be implemented is to put the following as a global variable within your package: `var _ dbplugin.Database = (*MyDatabase)(nil)`. This will fail to compile if the `MyDatabase` type does not adhere to the `dbplugin.Database` interface.
3. Replace `Init` and `Initialize` with the new `Initialize` function definition. The fields that `Init` was taking (`config` and `verifyConnection`) are now wrapped into `InitializeRequest`. The returned `map[string]interface{}` object is now wrapped into `InitializeResponse`. Only `Initialize` is needed to adhere to the `Database` interface.
4. Update `CreateUser` to `NewUser`. The `NewUserRequest` object contains the username and password of the user to be created. It also includes a list of statements for creating the user as well as several other fields that may or may not be applicable. Your custom plugin should use the password provided in the request, not generate one. If you generate a password instead, Vault will not know about it and will give the caller the wrong password.
5. `SetCredentials`, `RotateRootCredentials`, and `RenewUser` are combined into `UpdateUser`. The request object, `UpdateUserRequest`, contains three parts: the username to change, a `ChangePassword`, and a `ChangeExpiration` object. When one of the objects is not nil, this indicates that particular field (password or expiration) needs to change. For instance, if the `ChangePassword` field is not nil, the user's password should be changed; this is equivalent to calling `SetCredentials`. If the `ChangeExpiration` field is not nil, the user's expiration date should be changed; this is equivalent to calling `RenewUser`. Many databases don't need to do anything with the updated expiration.
6. Update `RevokeUser` to `DeleteUser`. This is the simplest change. The username to be deleted is enclosed in the `DeleteUserRequest` object.
"}
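The custom database plugin doc above notes that it is up to each plugin to substitute templated values such as `{{name}}` and `{{password}}` into the configured statements, and that `dbutil.QueryHelper` in the Vault SDK assists with this. The following is a minimal, stdlib-only sketch of the kind of substitution that helper performs; the function name `renderStatements` and the sample statement are illustrative, not part of the Vault SDK.

```go
package main

import (
	"fmt"
	"strings"
)

// renderStatements substitutes {{key}} placeholders in each statement with the
// corresponding value, mirroring (in simplified form) the replacement that
// sdk/database/helper/dbutil.QueryHelper performs for the built-in plugins.
func renderStatements(statements []string, data map[string]string) []string {
	pairs := make([]string, 0, len(data)*2)
	for k, v := range data {
		pairs = append(pairs, "{{"+k+"}}", v)
	}
	r := strings.NewReplacer(pairs...)
	out := make([]string, len(statements))
	for i, s := range statements {
		out[i] = r.Replace(s)
	}
	return out
}

func main() {
	// Hypothetical creation_statements as they would arrive in
	// NewUserRequest.Statements.Commands.
	stmts := []string{"CREATE USER {{name}} PASSWORD = '{{password}}';"}
	rendered := renderStatements(stmts, map[string]string{
		"name":     "v_root_readonly_abc123",
		"password": "s3cr3t",
	})
	fmt.Println(rendered[0])
	// → CREATE USER v_root_readonly_abc123 PASSWORD = 's3cr3t';
}
```

A plugin's `NewUser` implementation would run each rendered statement against the database connection it opened in `Initialize`.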
{"questions":"vault page title Snowflake Database Secrets Engines Snowflake database secrets engine layout docs roles for Snowflake hosted databases This plugin generates database credentials dynamically based on configured Snowflake is one of the supported plugins for the database secrets engine","answers":"---\nlayout: docs\npage_title: Snowflake - Database - Secrets Engines\ndescription: |-\n  Snowflake is one of the supported plugins for the database secrets engine.\n  This plugin generates database credentials dynamically based on configured\n  roles for Snowflake hosted databases.\n---\n\n# Snowflake database secrets engine\n\nSnowflake is one of the supported plugins for the database secrets engine. This plugin\ngenerates database credentials dynamically based on configured roles for Snowflake-hosted\ndatabases and supports [Static Roles](\/vault\/docs\/secrets\/databases#static-roles).\n\nSee the [database secrets engine](\/vault\/docs\/secrets\/databases) docs for\nmore information about setting up the database secrets engine.\n\nThe Snowflake database secrets engine uses\n[gosnowflake](https:\/\/pkg.go.dev\/github.com\/snowflakedb\/gosnowflake).\n\n## Capabilities\n\n| Plugin Name                 | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization | Credential Types          |\n| --------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |---------------------------|\n| `snowflake-database-plugin` | Yes                      | Yes           | Yes          | Yes (1.8+)             | password, rsa_private_key |\n\n## Setup\n\n1.  Enable the database secrets engine if it is not already enabled:\n\n    ```shell-session\n    $ vault secrets enable database\n    Success! Enabled the database secrets engine at: database\/\n    ```\n\n    By default, the secrets engine will enable at the name of the engine. 
To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Configure Vault with the proper plugin and connection information:\n\n    ```shell-session\n    $ vault write database\/config\/my-snowflake-database \\\n        plugin_name=snowflake-database-plugin \\\n        allowed_roles=\"my-role\" \\\n        connection_url=\"{{username}}:{{password}}@ecxxxx.west-us-1.azure\/db_name\" \\\n        username=\"vaultuser\" \\\n        password=\"vaultpass\"\n    ```\n\n    A properly formatted data source name (DSN) needs to be provided during configuration of the\n    database. This DSN is typically formatted with the following options:\n\n    ```shell-session\n    {{username}}:{{password}}@account\/db_name\n    ```\n\n    `{{username}}` and `{{password}}` will typically be used as is during configuration. The\n    special formatting is replaced by the username and password options passed to the configuration\n    for initial connection.\n\n    `account` is your Snowflake account identifier. You can find out more about this value by reading\n    the `server` section of\n    [this document](https:\/\/docs.snowflake.com\/en\/user-guide\/odbc-parameters.html#connection-parameters).\n\n    `db_name` is the name of a database in your Snowflake instance.\n\n    ~> **Note:** The user being utilized should have `ACCOUNT_ADMIN` privileges, and should be different\n    from the root user you were provided when making your Snowflake account. This allows you to rotate\n    the root credentials and still be able to access your account.\n\n## Usage\n\nAfter the secrets engine is configured, configure dynamic and static roles to\nenable generating credentials.\n\n### Dynamic roles\n\n#### Password credentials\n\n1.  
Configure a role that creates new Snowflake users with password credentials:\n\n    ```shell-session\n    $ vault write database\/roles\/my-password-role \\\n        db_name=my-snowflake-database \\\n        creation_statements=\"CREATE USER {{name}} PASSWORD = '{{password}}'\n            DAYS_TO_EXPIRY = {{expiration}} DEFAULT_ROLE=myrole;\n            GRANT ROLE myrole TO USER {{name}};\" \\\n        default_ttl=\"1h\" \\\n        max_ttl=\"24h\"\n    Success! Data written to: database\/roles\/my-password-role\n    ```\n\n1.  Generate a new credential by reading from the `\/creds` endpoint with the name\n    of the role:\n\n    ```shell-session\n    $ vault read database\/creds\/my-password-role\n    Key                Value\n    ---                -----\n    lease_id           database\/creds\/my-password-role\/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6\n    lease_duration     1h\n    lease_renewable    true\n    password           SsnoaA-8Tv4t34f41baD\n    username           v_root_my_password_role_fU0jqEy4wMNoAY2h60yd_1610561532\n    ```\n\n#### Key pair credentials\n\n1. Configure a role that creates new Snowflake users with key pair credentials:\n\n    ```shell-session\n    $ vault write database\/roles\/my-keypair-role \\\n        db_name=my-snowflake-database \\\n        creation_statements=\"CREATE USER {{name}} RSA_PUBLIC_KEY='{{public_key}}'\n          DAYS_TO_EXPIRY = {{expiration}} DEFAULT_ROLE=myrole;\n          GRANT ROLE myrole TO USER {{name}};\" \\\n        credential_type=\"rsa_private_key\" \\\n        credential_config=key_bits=2048 \\\n        default_ttl=\"1h\" \\\n        max_ttl=\"24h\"\n    Success! Data written to: database\/roles\/my-keypair-role\n    ```\n\n1.  
Generate a new credential by reading from the `\/creds` endpoint with the name\n    of the role:\n\n    ```shell-session\n    $ vault read database\/creds\/my-keypair-role\n    Key                Value\n    ---                -----\n    lease_id           database\/creds\/my-keypair-role\/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6\n    lease_duration     1h\n    lease_renewable    true\n    rsa_private_key    -----BEGIN PRIVATE KEY-----\n                       ...\n                       -----END PRIVATE KEY-----\n    username           v_token_my_keypair_role_n20WjS9U3LWTlBWn4Wbh_1654718170\n    ```\n\n    You can directly use the PEM-encoded `rsa_private_key` value to establish a connection\n    to Snowflake. See [connection options](https:\/\/docs.snowflake.com\/en\/user-guide\/key-pair-auth.html#step-6-configure-the-snowflake-client-to-use-key-pair-authentication)\n    for a list of clients and instructions for establishing a connection using key pair\n    authentication.\n\n### Static roles\n\n#### Password credentials\n\n1. Configure a static role that rotates the password credential for an existing Snowflake user.\n\n    ```shell-session\n    $ vault write database\/static-roles\/my-password-role \\\n        db_name=my-snowflake-database \\\n        username=\"snowflake_existing_user\" \\\n        rotation_period=\"24h\" \\\n        rotation_statements=\"ALTER USER {{name}} SET PASSWORD = '{{password}}'\"\n    Success! Data written to: database\/static-roles\/my-password-role\n    ```\n\n1.  
Retrieve the current password credential from the `\/static-creds` endpoint:\n\n    ```shell-session\n    $ vault read database\/static-creds\/my-password-role\n    Key                    Value\n    ---                    -----\n    last_vault_rotation    2020-08-07T16:50:48.393354+01:00\n    password               Z4-KH8F-VK5VJc0hSkXQ\n    rotation_period        24h\n    ttl                    23h59m39s\n    username               snowflake_existing_user\n    ```\n\n#### Key pair credentials\n\n1. Configure a static role that rotates the key pair credential for an existing Snowflake user:\n\n    ```shell-session\n    $ vault write database\/static-roles\/my-keypair-role \\\n        db_name=my-snowflake-database \\\n        username=\"snowflake_existing_user\" \\\n        rotation_period=\"24h\" \\\n        rotation_statements=\"ALTER USER {{name}} SET RSA_PUBLIC_KEY='{{public_key}}'\" \\\n        credential_type=\"rsa_private_key\" \\\n        credential_config=key_bits=2048\n    Success! Data written to: database\/static-roles\/my-keypair-role\n    ```\n\n1.  Retrieve the current key pair credential from the `\/static-creds` endpoint:\n\n    ```shell-session\n    $ vault read database\/static-creds\/my-keypair-role\n    Key                    Value\n    ---                    -----\n    last_vault_rotation    2022-06-08T13:13:02.355928-07:00\n    rotation_period        24h\n    rsa_private_key        -----BEGIN PRIVATE KEY-----\n                           ...\n                           -----END PRIVATE KEY-----\n    ttl                    23h59m55s\n    username               snowflake_existing_user\n    ```\n\n    You can directly use the PEM-encoded `rsa_private_key` value to establish a connection\n    to Snowflake. 
See [connection options](https:\/\/docs.snowflake.com\/en\/user-guide\/key-pair-auth.html#step-6-configure-the-snowflake-client-to-use-key-pair-authentication)\n    for a list of clients and instructions for establishing a connection using key pair\n    authentication.\n\n## Key pair authentication\n\nSnowflake supports using [key pair authentication](https:\/\/docs.snowflake.com\/en\/user-guide\/key-pair-auth.html)\nfor enhanced authentication security as an alternative to username and password authentication.\nThe Snowflake database plugin can be used to manage key pair credentials for Snowflake users\nby using the `rsa_private_key` [credential_type](\/vault\/api-docs\/secret\/databases#credential_type).\n\nSee the [usage](\/vault\/docs\/secrets\/databases\/snowflake#usage) section for examples using both\ndynamic and static roles.\n\n## API\n\nThe full list of configurable options can be seen in the [Snowflake database\nplugin API](\/vault\/api-docs\/secret\/databases\/snowflake) page.\n\nFor more information on the database secrets engine's HTTP API please see the\n[Database secrets engine API](\/vault\/api-docs\/secret\/databases) page.","site":"vault"}
{"questions":"vault plugin generates database credentials dynamically based on configured roles Elasticsearch is one of the supported plugins for the database secrets engine for Elasticsearch This layout docs page title Elasticsearch Database Secrets Engines","answers":"---\nlayout: docs\npage_title: Elasticsearch - Database - Secrets Engines\ndescription: >-\n  Elasticsearch is one of the supported plugins for the database secrets engine.\n  This\n\n  plugin generates database credentials dynamically based on configured roles\n\n  for Elasticsearch.\n---\n\n# Elasticsearch database secrets engine\n\n@include 'x509-sha1-deprecation.mdx'\n\nElasticsearch is one of the supported plugins for the database secrets engine. This\nplugin generates database credentials dynamically based on configured roles for\nElasticsearch.\n\nSee the [database secrets engine](\/vault\/docs\/secrets\/databases) docs for\nmore information about setting up the database secrets engine.\n\n## Capabilities\n\n| Plugin Name                     | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |\n| ------------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |\n| `elasticsearch-database-plugin` | Yes                      | Yes           | Yes (1.6+)   | Yes (1.8+)             |\n\n## Getting started\n\nTo take advantage of this plugin, you must first enable Elasticsearch's native realm of security by activating X-Pack. 
These\ninstructions will walk you through doing this using Elasticsearch 7.1.1.\n\n### Enable X-Pack security in elasticsearch\n\nRead [Securing the Elastic Stack](https:\/\/www.elastic.co\/guide\/en\/elastic-stack-overview\/7.1\/elasticsearch-security.html) and\nfollow [its instructions for enabling X-Pack Security](https:\/\/www.elastic.co\/guide\/en\/elasticsearch\/reference\/7.1\/setup-xpack.html).\n\n### Enable encrypted communications\n\nThis plugin communicates with Elasticsearch's security API. ES requires TLS for these communications so they can be\nencrypted.\n\nTo set up TLS in Elasticsearch, first read [encrypted communications](https:\/\/www.elastic.co\/guide\/en\/elastic-stack-overview\/7.1\/encrypting-communications.html)\nand go through its instructions on [encrypting HTTP client communications](https:\/\/www.elastic.co\/guide\/en\/elasticsearch\/reference\/7.1\/configuring-tls.html#tls-http).\n\nAfter enabling TLS on the Elasticsearch side, you'll need to convert the .p12 certificates you generated to other formats so they can be\nused by Vault. [Here is an example using OpenSSL](https:\/\/stackoverflow.com\/questions\/15144046\/converting-pkcs12-certificate-into-pem-using-openssl)\nto convert our .p12 certs to the pem format.\n\nAlso, on the instance running Elasticsearch, we needed to install our newly generated CA certificate that was originally in the .p12 format.\nWe did this by converting the .p12 CA cert to a pem, and then further converting that\n[pem to a crt](https:\/\/stackoverflow.com\/questions\/13732826\/convert-pem-to-crt-and-key), adding that crt to `\/usr\/share\/ca-certificates\/extra`,\nand using `sudo dpkg-reconfigure ca-certificates`.\n\nThe above instructions may vary if you are not using an Ubuntu machine. Please ensure you're using the methods specific to your operating\nenvironment. 
Describing every operating environment is outside the scope of these instructions.\n\n### Set up passwords\n\nWhen done, verify that you've enabled X-Pack by running `$ES_HOME\/bin\/elasticsearch-setup-passwords interactive`. You'll\nknow it's been set up successfully if it takes you through a number of password-inputting steps.\n\n### Create a role for Vault\n\nNext, in Elasticsearch, we recommend that you create a user just for Vault to use in managing secrets.\n\nTo do this, first create a role that will allow Vault the minimum privileges needed to administer users and passwords by performing a\nPOST to Elasticsearch. To do this, we used the `elastic` superuser whose password we created in the\n`$ES_HOME\/bin\/elasticsearch-setup-passwords interactive` step.\n\n```shell-session\n$ curl \\\n    -X POST \\\n    -H \"Content-Type: application\/json\" \\\n    -d '{\"cluster\": [\"manage_security\"]}' \\\n    http:\/\/elastic:$PASSWORD@localhost:9200\/_xpack\/security\/role\/vault\n```\n\nNext, create a user for Vault associated with that role.\n\n```shell-session\n$ curl \\\n    -X POST \\\n    -H \"Content-Type: application\/json\" \\\n    -d @data.json \\\n    http:\/\/elastic:$PASSWORD@localhost:9200\/_xpack\/security\/user\/vault\n```\n\nThe contents of `data.json` in this example are:\n\n```json\n{\n \"password\" : \"myPa55word\",\n \"roles\" : [ \"vault\" ],\n \"full_name\" : \"Hashicorp Vault\",\n \"metadata\" : {\n   \"plugin_name\": \"Vault Plugin Database Elasticsearch\",\n   \"plugin_url\": \"https:\/\/github.com\/hashicorp\/vault-plugin-database-elasticsearch\"\n }\n}\n```\n\nNow, Elasticsearch is configured and ready to be used with Vault.\n\n## Setup\n\n1. Enable the database secrets engine if it is not already enabled:\n\n   ```shell-session\n   $ vault secrets enable database\n   Success! Enabled the database secrets engine at: database\/\n   ```\n\n   By default, the secrets engine will enable at the name of the engine. 
To\n   enable the secrets engine at a different path, use the `-path` argument.\n\n1. Configure Vault with the proper plugin and connection information:\n\n   ```shell-session\n   $ vault write database\/config\/my-elasticsearch-database \\\n       plugin_name=\"elasticsearch-database-plugin\" \\\n       allowed_roles=\"internally-defined-role,externally-defined-role\" \\\n       username=vault \\\n       password=myPa55word \\\n       url=http:\/\/localhost:9200 \\\n       ca_cert=\/usr\/share\/ca-certificates\/extra\/elastic-stack-ca.crt.pem \\\n       client_cert=$ES_HOME\/config\/certs\/elastic-certificates.crt.pem \\\n       client_key=$ES_HOME\/config\/certs\/elastic-certificates.key.pem\n   ```\n\n## Usage\n\nAfter the secrets engine is configured, configure dynamic and static roles to enable generating credentials.\n\n### Dynamic Roles\n\nDynamic roles generate new credentials for every request.\n\n1. Configure a role that maps a name in Vault to a role definition in Elasticsearch.\n   This is considered the most secure type of role because nobody can perform\n   a privilege escalation by editing a role's privileges out-of-band in\n   Elasticsearch:\n\n   ```shell-session\n   $ vault write database\/roles\/internally-defined-role \\\n         db_name=my-elasticsearch-database \\\n         creation_statements='{\"elasticsearch_role_definition\": {\"indices\": [{\"names\":[\"*\"], \"privileges\":[\"read\"]}]}}' \\\n         default_ttl=\"1h\" \\\n         max_ttl=\"24h\"\n   ```\n\n1. Alternatively, configure a role that maps a name in Vault to a pre-existing\n   role definition in Elasticsearch:\n\n   ```shell-session\n   $ vault write database\/roles\/externally-defined-role \\\n         db_name=my-elasticsearch-database \\\n         creation_statements='{\"elasticsearch_roles\": [\"pre-existing-role-in-elasticsearch\"]}' \\\n         default_ttl=\"1h\" \\\n         max_ttl=\"24h\"\n   ```\n\n1. 
Generate a new credential by reading from the `\/creds` endpoint with the name\n   of the role:\n\n   ```shell-session\n   $ vault read database\/creds\/my-role\n   Key                Value\n   ---                -----\n   lease_id           database\/creds\/my-role\/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6\n   lease_duration     1h\n   lease_renewable    true\n   password           0ZsueAP-dqCNGZo35M0n\n   username           v-vaultuser-my-role-AgIViC5TdQHBdeiCxae0-1602541724\n   ```\n\n### Static Roles\n\nStatic roles return the same credentials for every request. The credentials are rotated based on the schedule provided.\n\n1. Configure a static role that maps a name in Vault to a pre-existing user in Elasticsearch:\n\n   ```shell-session\n   $ vault write database\/static-roles\/my-static-role \\\n         db_name=my-elasticsearch-database \\\n         username=my-existing-elasticsearch-username \\\n         rotation_period=\"24h\"\n   ```\n\n1. Retrieve the current username and password from the `\/static-creds` endpoint:\n\n   ```shell-session\n   $ vault read database\/static-creds\/my-static-role\n   Key                    Value\n   ---                    -----\n   last_vault_rotation    2023-09-14T08:24:39.650491913-04:00\n   password               current-password\n   rotation_period        24h\n   ttl                    23h59m59s\n   username               my-existing-elasticsearch-username\n   ```\n\n## API\n\nThe full list of configurable options can be seen in the [Elasticsearch database plugin API](\/vault\/api-docs\/secret\/databases\/elasticdb) page.\n\nFor more information on the database secrets engine's HTTP API please see the\n[Database secrets engine API](\/vault\/api-docs\/secret\/databases) page.","site":"vault"}
{"questions":"vault plugin generates database credentials dynamically based on configured roles MongoDB database secrets engine for the MongoDB database MongoDB is one of the supported plugins for the database secrets engine This layout docs page title MongoDB Database Secrets Engines","answers":"---\nlayout: docs\npage_title: MongoDB - Database - Secrets Engines\ndescription: |-\n  MongoDB is one of the supported plugins for the database secrets engine. This\n  plugin generates database credentials dynamically based on configured roles\n  for the MongoDB database.\n---\n\n# MongoDB database secrets engine\n\n@include 'x509-sha1-deprecation.mdx'\n\nMongoDB is one of the supported plugins for the database secrets engine. This\nplugin generates database credentials dynamically based on configured roles for\nthe MongoDB database and also supports\n[Static Roles](\/vault\/docs\/secrets\/databases#static-roles).\n\nSee the [database secrets engine](\/vault\/docs\/secrets\/databases) docs for\nmore information about setting up the database secrets engine.\n\n## Capabilities\n\n| Plugin Name               | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |\n| ------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |\n| `mongodb-database-plugin` | Yes                      | Yes           | Yes          | Yes (1.7+)             |\n\n## Setup\n\n1.  Enable the database secrets engine if it is not already enabled:\n\n    ```text\n    $ vault secrets enable database\n    Success! Enabled the database secrets engine at: database\/\n    ```\n\n    By default, the secrets engine will enable at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  
Configure Vault with the proper plugin and connection information:\n\n    ```text\n    $ vault write database\/config\/my-mongodb-database \\\n        plugin_name=mongodb-database-plugin \\\n        allowed_roles=\"my-role\" \\\n        connection_url=\"mongodb:\/\/{{username}}:{{password}}@mongodb.acme.com:27017\/admin?tls=true\" \\\n        username=\"vaultuser\" \\\n        password=\"vaultpass!\"\n    ```\n\n1.  Configure a role that maps a name in Vault to a MongoDB command that executes and\n    creates the database credential:\n\n    ```text\n    $ vault write database\/roles\/my-role \\\n        db_name=my-mongodb-database \\\n        creation_statements='{ \"db\": \"admin\", \"roles\": [{ \"role\": \"readWrite\" }, {\"role\": \"read\", \"db\": \"foo\"}] }' \\\n        default_ttl=\"1h\" \\\n        max_ttl=\"24h\"\n    Success! Data written to: database\/roles\/my-role\n    ```\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\n1.  
Generate a new credential by reading from the `\/creds` endpoint with the name\n    of the role:\n\n    ```text\n    $ vault read database\/creds\/my-role\n    Key                Value\n    ---                -----\n    lease_id           database\/creds\/my-role\/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6\n    lease_duration     1h\n    lease_renewable    true\n    password           LEm-lcDJ2k0Hi05FvizN\n    username           v-vaultuser-my-role-ItceCZHlp0YGn90Puy9Z-1602542024\n    ```\n\n## Client x509 certificate authentication\n\nThis plugin supports using MongoDB's [x509 Client-side Certificate Authentication](https:\/\/docs.mongodb.com\/manual\/core\/security-x.509\/)\n\nTo use this authentication mechanism, configure the plugin:\n\n```shell-session\n$ vault write database\/config\/my-mongodb-database \\\n    plugin_name=mongodb-database-plugin \\\n    allowed_roles=\"my-role\" \\\n    connection_url=\"mongodb:\/\/@mongodb.acme.com:27017\/admin\" \\\n    tls_certificate_key=@\/path\/to\/client.pem \\\n    tls_ca=@\/path\/to\/client.ca\n```\n\nNote: `tls_certificate_key` and `tls_ca` map to [`tlsCertificateKeyFile`](https:\/\/docs.mongodb.com\/manual\/reference\/program\/mongo\/#cmdoption-mongo-tlscertificatekeyfile)\nand [`tlsCAFile`](https:\/\/docs.mongodb.com\/manual\/reference\/program\/mongo\/#cmdoption-mongo-tlscafile) configuration options\nfrom MongoDB with the exception that the Vault parameters are the contents of those files, not filenames. As such,\nthe two options are independent of each other. 
See the [MongoDB Configuration Options](https:\/\/docs.mongodb.com\/manual\/reference\/program\/mongo\/)\nfor more information.\n\n## Tutorial\n\nRefer to [Database Secrets Engine tutorial](\/vault\/tutorials\/db-credentials\/database-secrets) for a\nstep-by-step example of using the database secrets engine.\n\n## API\n\nThe full list of configurable options can be seen in the [MongoDB database\nplugin API](\/vault\/api-docs\/secret\/databases\/mongodb) page.\n\nFor more information on the database secrets engine's HTTP API please see the\n[Database secrets engine API](\/vault\/api-docs\/secret\/databases) page.","site":"vault"}
{"questions":"vault Couchbase is one of the supported plugins for the database secrets engine Couchbase database secrets engine layout docs This plugin generates database credentials dynamically based on configured page title Couchbase Database Secrets Engines roles for the Couchbase database","answers":"---\nlayout: docs\npage_title: Couchbase - Database - Secrets Engines\ndescription: |-\n  Couchbase is one of the supported plugins for the database secrets engine.\n  This plugin generates database credentials dynamically based on configured\n  roles for the Couchbase database.\n---\n\n# Couchbase database secrets engine\n\n@include 'x509-sha1-deprecation.mdx'\n\nCouchbase is one of the supported plugins for the database secrets engine. This\nplugin generates database credentials dynamically based on configured roles for\nthe Couchbase database.\n\nSee the [database secrets engine](\/vault\/docs\/secrets\/databases) docs for\nmore information about setting up the database secrets engine.\n\n## Capabilities\n\n| Plugin Name                 | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |\n| --------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |\n| `couchbase-database-plugin` | Yes                      | Yes           | Yes          | Yes (1.7+)             |\n\n## Setup\n\n1.  Enable the database secrets engine if it is not already enabled:\n\n    ```bash\n    $ vault secrets enable database\n    Success! Enabled the database secrets engine at: database\/\n    ```\n\n    By default, the secrets engine will enable at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  
Configure Vault with the proper plugin and connection configuration:\n\n    ```bash\n    $ vault write database\/config\/my-couchbase-database \\\n        plugin_name=\"couchbase-database-plugin\" \\\n        hosts=\"couchbase:\/\/127.0.0.1\" \\\n        tls=true \\\n        base64pem=\"${BASE64PEM}\" \\\n        username=\"vaultuser\" \\\n        password=\"vaultpass\" \\\n        allowed_roles=\"my-*-role\"\n    ```\n\n    Where `${BASE64PEM}` is the server's root certificate authority in PEM\n    format, encoded as a base64 string with no new lines.\n\n    To connect to clusters prior to version 6.5.0, a `bucket_name` must also\n    be configured:\n\n    ```bash\n    $ vault write database\/config\/my-couchbase-database \\\n        plugin_name=\"couchbase-database-plugin\" \\\n        hosts=\"couchbase:\/\/127.0.0.1\" \\\n        tls=true \\\n        base64pem=\"${BASE64PEM}\" \\\n        username=\"vaultuser\" \\\n        password=\"vaultpass\" \\\n        allowed_roles=\"my-*-role\" \\\n        bucket_name=\"travel-sample\"\n    ```\n\n1.  You should consider rotating the admin password. Note that if you do, the\n    new password will never be made available through Vault, so you should\n    create a Vault-specific database admin user for this.\n\n    ```bash\n    vault write -force database\/rotate-root\/my-couchbase-database\n    ```\n\n## Usage\n\nAfter the secrets engine is configured, configure dynamic and static roles\nto enable generating credentials.\n\n### Dynamic roles\n\n1.  Configure a dynamic role that maps a name in Vault to a JSON string\n    specifying a Couchbase RBAC role. 
The default value for\n    `creation_statements` is a read-only admin role:\n    `{\"Roles\": [{\"role\":\"ro_admin\"}]}`.\n\n    ```bash\n    $ vault write database\/roles\/my-dynamic-role \\\n        db_name=\"my-couchbase-database\" \\\n        creation_statements='{\"Roles\": [{\"role\":\"ro_admin\"}]}' \\\n        default_ttl=\"5m\" \\\n        max_ttl=\"1h\"\n    ```\n\n    Note that any groups specified in the creation statement must already exist.\n\n1.  Generate a new credential by reading from the `\/creds` endpoint with the name\n    of the role:\n\n    ```bash\n    $ vault read database\/creds\/my-dynamic-role\n    Key                Value\n    ---                -----\n    lease_id           database\/creds\/my-dynamic-role\/wiLNQjtcvCOT1VnN3qnUJnBz\n    lease_duration     5m\n    lease_renewable    true\n    password           mhyM-Gs7IpmOPnSqXEDe\n    username           v-root-my-dynamic-role-eXnVr4gm55dpM1EVgTYz-1596815027\n    ```\n\n### Static roles\n\n1.  Configure a static role that maps a name in Vault to an existing couchbase\n    user.\n\n    ```bash\n    $ vault write database\/static-roles\/my-static-role \\\n        db_name=\"my-couchbase-database\" \\\n        username=\"my-existing-couchbase-user\" \\\n        rotation_period=5m\n    ```\n\n1.  
Retrieve the credentials from the `\/static-creds` endpoint:\n\n    ```bash\n    $ vault read database\/static-creds\/my-static-role\n    Key                    Value\n    ---                    -----\n    last_vault_rotation    2020-08-07T16:50:48.393354+01:00\n    password               Z4-KH8F-VK5VJc0hSkXQ\n    rotation_period        5m\n    ttl                    4m39s\n    username               my-existing-couchbase-user\n    ```\n\n## API\n\nThe full list of configurable options can be seen in the [Couchbase database plugin API](\/vault\/api-docs\/secret\/databases\/couchbase) page.\n\nFor more information on the database secrets engine's HTTP API please see the [Database secrets engine API](\/vault\/api-docs\/secret\/databases) page.","site":"vault"}
{"questions":"vault plugin generates database credentials dynamically based on configured roles for the MSSQL database layout docs MSSQL is one of the supported plugins for the database secrets engine This page title MSSQL Database Secrets Engines","answers":"---\nlayout: docs\npage_title: MSSQL - Database - Secrets Engines\ndescription: |-\n\n  MSSQL is one of the supported plugins for the database secrets engine. This\n  plugin generates database credentials dynamically based on configured roles\n  for the MSSQL database.\n---\n\n# MSSQL database secrets engine\n\nMSSQL is one of the supported plugins for the database secrets engine. This\nplugin generates database credentials dynamically based on configured roles for\nthe MSSQL database.\n\nSee the [database secrets engine](\/vault\/docs\/secrets\/databases) docs for\nmore information about setting up the database secrets engine.\n\nThe following privileges are needed by the plugin for minimum functionality. Additional privileges may be needed \ndepending on the SQL configured on the database roles. \n\n```sql\n-- Create Login\nCREATE LOGIN vault_login WITH PASSWORD = '<password>';\n\n-- Create User\nCREATE user vault_user for login vault_login;\n\n-- Grant Permissions\nGRANT ALTER ANY LOGIN TO \"vault_user\";\nGRANT ALTER ANY USER TO \"vault_user\";\nGRANT ALTER ANY CONNECTION TO \"vault_login\";\nGRANT CONTROL ON SCHEMA::<schema_name> TO \"vault_user\";\nEXEC sp_addrolemember \"db_accessadmin\", \"vault_user\";\n```\n\n## Capabilities\n\n| Plugin Name             | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |\n| ----------------------- | ------------------------ | ------------- | ------------ | ---------------------- |\n| `mssql-database-plugin` | Yes                      | Yes           | Yes          | Yes (1.7+)             |\n\n## Setup\n\n1.  
Enable the database secrets engine if it is not already enabled:\n\n    ```text\n    $ vault secrets enable database\n    Success! Enabled the database secrets engine at: database\/\n    ```\n\n    By default, the secrets engine will enable at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Configure Vault with the proper plugin and connection information:\n\n    ```text\n    $ vault write database\/config\/my-mssql-database \\\n        plugin_name=mssql-database-plugin \\\n        connection_url='sqlserver:\/\/:@localhost:1433' \\\n        allowed_roles=\"my-role\" \\\n        username=\"vaultuser\" \\\n        password=\"yourStrong(!)Password\"\n    ```\n\n    ~> Note: The example above demonstrates a connection with a SQL Server user named `vaultuser`, although the user `vaultuser` might be a Windows Authentication user that is part of an Active Directory domain, for example: `DOMAIN\\vaultuser`.\n\n    In this case, we've configured Vault with the user \"vaultuser\" and password\n    \"yourStrong(!)Password\", connecting to an instance at \"localhost\" on port 1433. It is\n    not necessary that Vault has the vaultuser login, but the user must have privileges\n    to create logins and manage processes. The fixed server roles\n    `securityadmin` and `processadmin` are examples of built-in roles that grant\n    these permissions. The user also must have privileges to create database\n    users and grant permissions in the databases that Vault manages. The fixed\n    database roles `db_accessadmin` and `db_securityadmin` are examples of\n    built-in roles that grant these permissions.\n\n1.  
Configure a role that maps a name in Vault to an SQL statement to execute to\n    create the database credential:\n\n    ```text\n    $ vault write database\/roles\/my-role \\\n        db_name=my-mssql-database \\\n        creation_statements=\"CREATE LOGIN [] WITH PASSWORD = '';\\\n            CREATE USER [] FOR LOGIN [];\\\n            GRANT SELECT ON SCHEMA::dbo TO [];\" \\\n        default_ttl=\"1h\" \\\n        max_ttl=\"24h\"\n    Success! Data written to: database\/roles\/my-role\n    ```\n\n~> **Be aware!** If no `revocation_statements` value is supplied,\nVault will execute the default revocation procedure.\nIn larger databases, this might cause connection timeouts.\nPlease specify a revocation statement in such a scenario.\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\n1.  Generate a new credential by reading from the `\/creds` endpoint with the name\n    of the role:\n\n    ```text\n    $ vault read database\/creds\/my-role\n    Key                Value\n    ---                -----\n    lease_id           database\/creds\/my-role\/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6\n    lease_duration     1h\n    lease_renewable    true\n    password           wJKpk9kg-T1Ma7qQfS8y\n    username           v-vaultuser-my-role-r7kCtKGGr3eYQP1OGR6G-1602542258\n    ```\n\n## Example for Azure SQL database\n\nHere is a complete example using Azure SQL Database. Note that databases in Azure SQL Database are [contained databases](https:\/\/docs.microsoft.com\/en-us\/sql\/relational-databases\/databases\/contained-databases) and that we do not create a login for the user; instead, we associate the password directly with the user itself. Also note that you will need a separate connection and role for each Azure SQL database for which you want to generate dynamic credentials. 
You can use a single database backend mount for all these databases or use a separate mount for each of them. In this example, we use a custom path for the database backend.\n\n<Note>\n  Azure SQL databases may use different authentication mechanisms that are configured on the SQL server. Vault only supports SQL authentication. Azure AD authentication is not supported.\n<\/Note>\n\nFirst, we mount a database backend at the azuresql path with `vault secrets enable -path=azuresql database`. Then we configure a connection called \"testvault\" to connect to a database called \"test-vault\", using \"azuresql\" at the beginning of our path:\n\n    ~> Note: If you are using a Windows Vault client with cmd.exe, change the single quotes to double quotes in the connection string. Windows cmd.exe does not interpret single quotes as a continuous string.\n    \n```shell-session\n$ vault write azuresql\/config\/testvault \\\n    plugin_name=mssql-database-plugin \\\n    connection_url='server=hashisqlserver.database.windows.net;port=1433;user id=admin;password=pAssw0rd;database=test-vault;app name=vault;' \\\n    allowed_roles=\"test\"\n```\n\nNow we add a role called \"test\" for use with the \"testvault\" connection:\n\n```shell-session\n$ vault write azuresql\/roles\/test \\\n    db_name=testvault \\\n    creation_statements=\"CREATE USER [] WITH PASSWORD = '';\" \\\n    revocation_statements=\"DROP USER IF EXISTS []\" \\\n    default_ttl=\"1h\" \\\n    max_ttl=\"24h\"\n```\n\nWe can now use this role to dynamically generate credentials for the Azure SQL database, test-vault:\n\n```shell-session\n$ vault read azuresql\/creds\/test\nKey            \tValue\n---            \t-----\nlease_id       \tazuresql\/creds\/test\/2e5b1e0b-a081-c7e1-5622-39f58e79a719\nlease_duration \t1h0m0s\nlease_renewable\ttrue\npassword       \tcZ-BJy-SqO5tKwazAuUP\nusername       \tv-token-test-tr2t4x9pxvq1z8878s9s-1513446795\n```\n\nWhen we no longer need the backend, we can unmount it with `vault 
unmount azuresql`. Now, you can use the MSSQL Database Plugin with your Azure SQL databases.\n\n## Amazon RDS\n\nThe MSSQL plugin supports databases running on [Amazon RDS](https:\/\/docs.aws.amazon.com\/AmazonRDS\/latest\/UserGuide\/CHAP_SQLServer.html),\nbut there are differences that need to be accommodated. A key limitation is that Amazon RDS doesn't support\nthe \"sysadmin\" role, which is used by default during Vault's revocation process for MSSQL. The workaround\nis to add custom revocation statements to roles, for example:\n\n```shell\nvault write database\/roles\/my-role revocation_statements=\"\\\n   USE my_database \\\n   IF EXISTS \\\n     (SELECT name \\\n      FROM sys.database_principals \\\n      WHERE name = N'') \\\n   BEGIN \\\n     DROP USER [] \\\n   END \\\n\n   IF EXISTS \\\n     (SELECT name \\\n      FROM master.sys.server_principals \\\n      WHERE name = N'') \\\n   BEGIN \\\n     DROP LOGIN [] \\\n   END\"\n```\n\n## API\n\nThe full list of configurable options can be seen in the [MSSQL database\nplugin API](\/vault\/api-docs\/secret\/databases\/mssql) page.\n\nFor more information on the database secrets engine's HTTP API please see the\n[Database secrets engine API](\/vault\/api-docs\/secret\/databases) page.","site":"vault","answers_cleaned":"    layout  docs page title  MSSQL   Database   Secrets Engines description        MSSQL is one of the supported plugins for the database secrets engine  This   plugin generates database credentials dynamically based on configured roles   for the MSSQL database         MSSQL database secrets engine  MSSQL is one of the supported plugins for the database secrets engine  This plugin generates database credentials dynamically based on configured roles for the MSSQL database   See the  database secrets engine   vault docs secrets databases  docs for more information about setting up the database secrets engine   The following privileges are needed by the plugin for minimum functionality  Additional 
privileges may be needed  depending on the SQL configured on the database roles       sql    Create Login CREATE LOGIN vault login WITH PASSWORD     password        Create User CREATE user vault user for login vault login      Grant Permissions GRANT ALTER ANY LOGIN TO  vault user   GRANT ALTER ANY USER TO  vault user   GRANT ALTER ANY CONNECTION TO  vault login   GRANT CONTROL ON SCHEMA   schema name  TO  vault user   EXEC sp addrolemember  db accessadmin    vault user           Capabilities    Plugin Name               Root Credential Rotation   Dynamic Roles   Static Roles   Username Customization                                                                                                                     mssql database plugin    Yes                        Yes             Yes            Yes  1 7                     Setup  1   Enable the database secrets engine if it is not already enabled          text       vault secrets enable database     Success  Enabled the database secrets engine at  database               By default  the secrets engine will enable at the name of the engine  To     enable the secrets engine at a different path  use the   path  argument   1   Configure Vault with the proper plugin and connection information          text       vault write database config my mssql database           plugin name mssql database plugin           connection url  sqlserver     localhost 1433            allowed roles  my role            username  vaultuser            password  yourStrong   Password                  Note  The example above demonstrates a connection with SQL server user named  vaultuser   although the user  vaultuser  might be Windows Authentication user part of Active Directory domain  for example   DOMAIN vaultuser        In this case  we ve configured Vault with the user  vaultuser  and password      yourStrong   Password   connecting to an instance at  localhost  on port 1433  It is     not necessary that Vault has the vaultuser login  but 
the user must have privileges     to create logins and manage processes  The fixed server roles      securityadmin  and  processadmin  are examples of built in roles that grant     these permissions  The user also must have privileges to create database     users and grant permissions in the databases that Vault manages  The fixed     database roles  db accessadmin  and  db securityadmin  are examples or     built in roles that grant these permissions   1   Configure a role that maps a name in Vault to an SQL statement to execute to     create the database credential          text       vault write database roles my role           db name my mssql database           creation statements  CREATE LOGIN    WITH PASSWORD                    CREATE USER    FOR LOGIN                  GRANT SELECT ON SCHEMA  dbo TO                default ttl  1h            max ttl  24h      Success  Data written to  database roles my role               Be aware    If no  revocation statement  is supplied  vault will execute the default revocation procedure  In larger databases  this might cause connection timeouts  Please specify a revocation statement in such a scenario      Usage  After the secrets engine is configured and a user machine has a Vault token with the proper permission  it can generate credentials   1   Generate a new credential by reading from the   creds  endpoint with the name     of the role          text       vault read database creds my role     Key                Value                                  lease id           database creds my role 2f6a614c 4aa2 7b19 24b9 ad944a8d4de6     lease duration     1h     lease renewable    true     password           wJKpk9kg T1Ma7qQfS8y     username           v vaultuser my role r7kCtKGGr3eYQP1OGR6G 1602542258             Example for Azure SQL database  Here is a complete example using Azure SQL Database  Note that databases in Azure SQL Database are  contained databases  https   docs microsoft com en us sql relational databases 
databases contained databases  and that we do not create a login for the user  instead  we associate the password directly with the user itself  Also note that you will need a separate connection and role for each Azure SQL database for which you want to generate dynamic credentials  You can use a single database backend mount for all these databases or use a separate mount for each of them  In this example  we use a custom path for the database backend    Note    Azure SQL databases may use different authentication mechanism that are configured on the SQL server  Vault only supports SQL authentication  Azure AD authentication is not supported    Note   First  we mount a database backend at the azuresql path with  vault secrets enable  path azuresql database   Then we configure a connection called  testvault  to connect to a database called  test vault   using  azuresql  at the beginning of our path          Note  If you are using a windows vault client with cmd exe  change the single quotes to double quotes in the connection string   Windows cmd exe does not interpret single quotes as a continous string         shell session   vault write azuresql config testvault       plugin name mssql database plugin       connection url  server hashisqlserver database windows net port 1433 user id admin password pAssw0rd database test vault app name vault         allowed roles  test       Now we add a role called  test  for use with the  testvault  connection      shell session   vault write azuresql roles test       db name testvault       creation statements  CREATE USER    WITH PASSWORD              revocation statements  DROP USER IF EXISTS           default ttl  1h        max ttl  24h       We can now use this role to dynamically generate credentials for the Azure SQL database  test vault      shell session   vault read azuresql creds test Key             Value                       lease id        azuresql creds test 2e5b1e0b a081 c7e1 5622 39f58e79a719 lease duration  
1h0m0s lease renewable true password        cZ BJy SqO5tKwazAuUP username        v token test tr2t4x9pxvq1z8878s9s 1513446795      When we no longer need the backend  we can unmount it with  vault unmount azuresql   Now  you can use the MSSQL Database Plugin with your Azure SQL databases      Amazon RDS  The MSSQL plugin supports databases running on  Amazon RDS  https   docs aws amazon com AmazonRDS latest UserGuide CHAP SQLServer html   but there are differences that need to be accommodated  A key limitation is that Amazon RDS doesn t support the  sysadmin  role  which is used by default during Vault s revocation process for MSSQL  The workaround is to add custom revocation statements to roles  for example      shell vault write database roles my role revocation statements       USE my database      IF EXISTS         SELECT name         FROM sys database principals         WHERE name   N         BEGIN        DROP USER         END       IF EXISTS         SELECT name         FROM master sys server principals         WHERE name   N         BEGIN        DROP LOGIN         END          API  The full list of configurable options can be seen in the  MSSQL database plugin API   vault api docs secret databases mssql  page   For more information on the database secrets engine s HTTP API please see the  Database secrets engine API   vault api docs secret databases  page "}
{"questions":"vault Redis ElastiCache database secrets engine Redis ElastiCache is one of the supported plugins for the database secrets engine layout docs page title Redis ElastiCache Database Secrets Engines This plugin generates static credentials for existing managed roles","answers":"---\nlayout: docs\npage_title: Redis ElastiCache - Database - Secrets Engines\ndescription: |-\n  Redis ElastiCache is one of the supported plugins for the database secrets engine.\n  This plugin generates static credentials for existing managed roles.\n---\n\n# Redis ElastiCache database secrets engine\n\nRedis ElastiCache is one of the supported plugins for the database secrets engine.\nThis plugin generates static credentials for existing managed roles.\n\nSee the [database secrets engine](\/vault\/docs\/secrets\/databases) docs for\nmore information about setting up the database secrets engine.\n\n## Capabilities\n\n| Plugin Name                             | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |\n| --------------------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |\n| `redis-elasticache-database-plugin`     | No                       | No            | Yes          | No                     |\n\n## Setup\n\n1.  Enable the database secrets engine if it is not already enabled:\n\n  ```shell-session\n  $ vault secrets enable database\n  Success! Enabled the database secrets engine at: database\/\n  ```\n\n  By default, the secrets engine will enable at the name of the engine. To\n  enable the secrets engine at a different path, use the `-path` argument.\n\n1.  
Configure Vault with the proper plugin and connection configuration:\n\n  ```shell-session\n  $ vault write database\/config\/my-redis-elasticache-cluster \\\n    plugin_name=\"redis-elasticache-database-plugin\" \\\n    url=\"primary-endpoint.my-cluster.xxx.yyy.cache.amazonaws.com:6379\" \\\n    access_key_id=\"AKI***\" \\\n    secret_access_key=\"ktriNYvULAWLzUmTGb***\" \\\n    region=us-east-1 \\\n    allowed_roles=\"*\"\n  ```\n\n~> **Note**: The `access_key_id`, `secret_access_key` and `region` parameters are optional. If omitted, authentication falls back\non the AWS credentials provider chain.\n\n~> **Deprecated**: The `username` & `password` parameters are deprecated but supported for backward compatibility. They are replaced\nby the equivalent `access_key_id` and `secret_access_key` parameters respectively.\n\nThe Redis ElastiCache secrets engine must use AWS credentials that have sufficient permissions to manage ElastiCache users.\nThis IAM policy sample can be used as an example. Note that &lt;region&gt; and &lt;account-id&gt;\nmust correspond to your own environment.\n\n  ```json\n  {\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n      {\n        \"Sid\": \"\",\n        \"Effect\": \"Allow\",\n        \"Action\": [\n          \"elasticache:ModifyUser\",\n          \"elasticache:DescribeUsers\"\n        ],\n        \"Resource\": \"arn:aws:elasticache:<region>:<account-id>:user:*\"\n      }\n    ]\n  }\n  ```\n\n## Usage\n\nAfter the secrets engine is configured, write static roles to enable generating credentials.\n\n### Static roles\n\n1.  Configure a static role that maps a name in Vault to an existing Redis ElastiCache user.\n\n  ```shell-session\n  $ vault write database\/static-roles\/my-static-role \\\n      db_name=\"my-redis-elasticache-cluster\" \\\n      username=\"my-existing-redis-user\" \\\n      rotation_period=5m\n  Success! Data written to: database\/static-roles\/my-static-role\n  ```\n\n1.  
Retrieve the credentials from the `\/static-creds` endpoint:\n\n  ```shell-session\n  $ vault read database\/static-creds\/my-static-role\n  Key                    Value\n  ---                    -----\n  last_vault_rotation    2022-09-14T11:45:57.24715105-04:00\n  password               GKdS6qY-UtVAMpcD9iuu\n  rotation_period        5m\n  ttl                    4m48s\n  username               my-existing-redis-user\n  ```\n\n~> **Note**: New passwords may take up to a couple of minutes before ElastiCache has the chance to complete their configuration.\n  It is recommended to use a retry strategy when establishing new Redis ElastiCache connections. This may prevent errors when\n  trying to use a password that isn't yet live on the targeted ElastiCache cluster.\n\n## API\n\nThe full list of configurable options can be seen in the [Redis ElastiCache Database Plugin API](\/vault\/api-docs\/secret\/databases\/rediselasticache) page.\n\nFor more information on the database secrets engine's HTTP API please see the [Database Secrets Engine API](\/vault\/api-docs\/secret\/databases) page.","site":"vault","answers_cleaned":"    layout  docs page title  Redis ElastiCache   Database   Secrets Engines description       Redis ElastiCache is one of the supported plugins for the database secrets engine    This plugin generates static credentials for existing managed roles         Redis ElastiCache database secrets engine  Redis ElastiCache is one of the supported plugins for the database secrets engine  This plugin generates static credentials for existing managed roles   See the  database secrets engine   vault docs secrets databases  docs for more information about setting up the database secrets engine      Capabilities    Plugin Name                               Root Credential Rotation   Dynamic Roles   Static Roles   Username Customization                                                                                                                                     
redis elasticache database plugin        No                         No              Yes            No                           Setup  1   Enable the database secrets engine if it is not already enabled        shell session     vault secrets enable database   Success  Enabled the database secrets engine at  database           By default  the secrets engine will enable at the name of the engine  To   enable the secrets engine at a different path  use the   path  argument   1   Configure Vault with the proper plugin and connection configuration        shell session     vault write database config my redis elasticache cluster       plugin name  redis elasticache database plugin        url  primary endpoint my cluster xxx yyy cache amazonaws com 6379        access key id  AKI           secret access key  ktriNYvULAWLzUmTGb           region us east 1       allowed roles                 Note    The  access key id    secret access key  and  region  parameters are optional  If omitted  authentication falls back on the AWS credentials provider chain        Deprecated    The  username     password  parameters are deprecated but supported for backward compatibility  They are replaced by the equivalent  access key id  and  secret access key  parameters respectively   The Redis ElastiCache secrets engine must use AWS credentials that have sufficient permissions to manage ElastiCache users  This IAM policy sample can be used as an example  Note that  lt region gt  and  lt account id gt  must correspond to your own environment        json          Version    2012 10 17        Statement                      Sid                Effect    Allow            Action                elasticache ModifyUser              elasticache DescribeUsers                      Resource    arn aws elasticache  region   account id  user                                Usage  After the secrets engine is configured  write static roles to enable generating credentials       Static roles  1   Configure a 
static role that maps a name in Vault to an existing Redis ElastiCache user        shell session     vault write database static roles my static role         db name  my redis elasticache cluster          username  my existing redis user          rotation period 5m   Success  Data written to  database static roles my static role        1   Retrieve the credentials from the   static creds  endpoint        shell session     vault read database static creds my static role   Key                    Value                                  last vault rotation    2022 09 14T11 45 57 24715105 04 00   password               GKdS6qY UtVAMpcD9iuu   rotation period        5m   ttl                    4m48s   username               my existing redis user             Note    New passwords may take up to a couple of minutes before ElastiCache has the chance to complete their configuration    It is recommended to use a retry strategy when establishing new Redis ElastiCache connections  This may prevent errors when   trying to use a password that isn t yet live on the targeted ElastiCache cluster      API  The full list of configurable options can be seen in the  Redis ElastiCache Database Plugin API   vault api docs secret databases rediselasticache  page   For more information on the database secrets engine s HTTP API please see the  Database Secrets Engine API   vault api docs secret databases  page "}
{"questions":"vault page title Redis Database Secrets Engines roles for the Redis database and also supports Static Roles vault docs secrets databases static roles Redis database secrets engine layout docs This plugin generates database credentials dynamically based on configured Redis is one of the supported plugins for the database secrets engine","answers":"---\nlayout: docs\npage_title: Redis - Database - Secrets Engines\ndescription: |-\n  Redis is one of the supported plugins for the database secrets engine.\n  This plugin generates database credentials dynamically based on configured\n  roles for the Redis database, and also supports [Static Roles](\/vault\/docs\/secrets\/databases#static-roles).\n---\n\n# Redis database secrets engine\n\nRedis is one of the supported plugins for the database secrets engine. This\nplugin generates database credentials dynamically based on configured roles for\nthe Redis database.\n\nSee the [database secrets engine](\/vault\/docs\/secrets\/databases) docs for\nmore information about setting up the database secrets engine.\n\n## Capabilities\n\n| Plugin Name                 | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |\n| --------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |\n| `redis-database-plugin`     | Yes                      | Yes           | Yes          | No                     |\n\n## Setup\n\n1.  Enable the database secrets engine if it is not already enabled:\n\n    ```shell-session\n    $ vault secrets enable database\n    Success! Enabled the database secrets engine at: database\/\n    ```\n\n    By default, the secrets engine will enable at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  
Configure Vault with the proper plugin and connection configuration:\n\n    ```shell-session\n    $ vault write database\/config\/my-redis-database \\\n      plugin_name=\"redis-database-plugin\" \\\n      host=\"localhost\" \\\n      port=6379 \\\n      tls=true \\\n      ca_cert=\"$CACERT\" \\\n      username=\"user\" \\\n      password=\"pass\" \\\n      allowed_roles=\"my-*-role\"\n    ```\n\n1.  You should consider rotating the admin password. Note that if you do, the\n    new password will never be made available through Vault, so you should\n    create a Vault-specific database admin user for this.\n\n    ```shell-session\n    vault write -force database\/rotate-root\/my-redis-database\n    ```\n\n## Usage\n\nAfter the secrets engine is configured, write dynamic and static roles\nto Vault to enable generating credentials.\n\n### Dynamic roles\n\n1.  Configure a dynamic role that maps a name in Vault to a JSON string\n    containing the Redis ACL rules, which are documented [here](https:\/\/redis.io\/commands\/acl-cat) and in the output\n    of the `ACL CAT` Redis command.\n\n    ```shell-session\n    $ vault write database\/roles\/my-dynamic-role \\\n        db_name=\"my-redis-database\" \\\n        creation_statements='[\"+@admin\"]' \\\n        default_ttl=\"5m\" \\\n        max_ttl=\"1h\"\n    Success! Data written to: database\/roles\/my-dynamic-role\n    ```\n\n    Note that if `creation_statements` is not provided, the user account will\n    default to a read-only user, `'[\"~*\", \"+@read\"]'`, that can read any key.\n\n1.  
Generate a new set of credentials by reading from the `\/creds` endpoint with the name\n    of the role:\n\n    ```shell-session\n    $ vault read database\/creds\/my-dynamic-role\n    Key                Value\n    ---                -----\n    lease_id           database\/creds\/my-dynamic-role\/OxCTXJcxQ2F4lReWPjbezSnA\n    lease_duration     5m\n    lease_renewable    true\n    password           dACqHsav6-attdv1glGZ\n    username           V_TOKEN_MY-DYNAMIC-ROLE_YASUQUF3GVVD0ZWTEMK4_1608481717\n    ```\n\n### Static roles\n\n1.  Configure a static role that maps a name in Vault to an existing Redis\n    user.\n\n    ```shell-session\n    $ vault write database\/static-roles\/my-static-role \\\n        db_name=\"my-redis-database\" \\\n        username=\"my-existing-redis-user\" \\\n        rotation_period=5m\n    Success! Data written to: database\/static-roles\/my-static-role\n    ```\n\n1.  Retrieve the credentials from the `\/static-creds` endpoint:\n\n    ```shell-session\n    $ vault read database\/static-creds\/my-static-role\n    Key                    Value\n    ---                    -----\n    last_vault_rotation    2020-12-20T10:39:49.647822-06:00\n    password               ylKNgqa3NPVAioBf-0S5\n    rotation_period        5m\n    ttl                    4m39s\n    username               my-existing-redis-user\n    ```\n\n## API\n\nThe full list of configurable options can be seen in the [Redis Database Plugin API](\/vault\/api-docs\/secret\/databases\/redis) page.\n\nFor more information on the database secrets engine's HTTP API please see the [Database Secrets Engine API](\/vault\/api-docs\/secret\/databases) page.","site":"vault"}
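The Redis role's `creation_statements` above is a JSON array of Redis ACL rule strings. A minimal Python sketch of that shape, useful for sanity-checking a role definition before writing it to Vault (the helper name `parse_creation_statements` is hypothetical, not part of the plugin):

```python
import json

def parse_creation_statements(raw: str) -> list:
    """Parse a Redis-plugin creation_statements value: it must be a
    JSON array of ACL rule strings such as "+@admin" or "~*"."""
    rules = json.loads(raw)
    if not isinstance(rules, list) or not all(isinstance(r, str) for r in rules):
        raise ValueError("creation_statements must be a JSON array of strings")
    return rules

# The two forms from the docs above: the admin role, and the
# read-only default applied when no creation_statements is given.
print(parse_creation_statements('["+@admin"]'))       # ['+@admin']
print(parse_creation_statements('["~*", "+@read"]'))  # ['~*', '+@read']
```

The same check mirrors what the plugin does server-side: a value that is not a JSON array of strings is rejected at role-creation time.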
{"questions":"vault PostgreSQL database secrets engine page title PostgreSQL Database Secrets Engines layout docs This plugin generates database credentials dynamically based on configured PostgreSQL is one of the supported plugins for the database secrets engine roles for the PostgreSQL database","answers":"---\nlayout: docs\npage_title: PostgreSQL - Database - Secrets Engines\ndescription: |-\n  PostgreSQL is one of the supported plugins for the database secrets engine.\n  This plugin generates database credentials dynamically based on configured\n  roles for the PostgreSQL database.\n---\n\n# PostgreSQL database secrets engine\n\nPostgreSQL is one of the supported plugins for the database secrets engine. This\nplugin generates database credentials dynamically based on configured roles for\nthe PostgreSQL database, and also supports [Static\nRoles](\/vault\/docs\/secrets\/databases#static-roles).\n\nSee the [database secrets engine](\/vault\/docs\/secrets\/databases) docs for\nmore information about setting up the database secrets engine.\n\nThe PostgreSQL secrets engine uses [pgx][pgxlib], the same database library as the\n[PostgreSQL storage backend](\/vault\/docs\/configuration\/storage\/postgresql). Connection string\noptions, including SSL options, can be found in the [pgx][pgxlib] and\n[PostgreSQL connection string][pg_conn_docs] documentation.\n\n## Capabilities\n\n| Plugin Name                  | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization | Credential Types             |\n| ---------------------------- | ------------------------ | ------------- | ------------ | ---------------------- | ---------------------------- |\n| `postgresql-database-plugin` | Yes                      | Yes           | Yes          | Yes (1.7+)             | password, gcp_iam            |\n\n## Setup\n\n1.  Enable the database secrets engine if it is not already enabled:\n\n    ```shell-session\n    $ vault secrets enable database\n    Success! 
Enabled the database secrets engine at: database\/\n    ```\n\n    By default, the secrets engine will enable at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Configure Vault with the proper plugin and connection information:\n\n    ```shell-session\n    $ vault write database\/config\/my-postgresql-database \\\n        plugin_name=\"postgresql-database-plugin\" \\\n        allowed_roles=\"my-role\" \\\n        connection_url=\"postgresql:\/\/{{username}}:{{password}}@localhost:5432\/database-name\" \\\n        username=\"vaultuser\" \\\n        password=\"vaultpass\" \\\n        password_authentication=\"scram-sha-256\"\n    ```\n\n1.  Configure a role that maps a name in Vault to an SQL statement to execute to\n    create the database credential:\n\n    ```shell-session\n    $ vault write database\/roles\/my-role \\\n        db_name=\"my-postgresql-database\" \\\n        creation_statements=\"CREATE ROLE \\\"{{name}}\\\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \\\n            GRANT SELECT ON ALL TABLES IN SCHEMA public TO \\\"{{name}}\\\";\" \\\n        default_ttl=\"1h\" \\\n        max_ttl=\"24h\"\n    Success! Data written to: database\/roles\/my-role\n    ```\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\n1.  
Generate a new credential by reading from the `\/creds` endpoint with the name\n    of the role:\n\n    ```shell-session\n    $ vault read database\/creds\/my-role\n    Key                Value\n    ---                -----\n    lease_id           database\/creds\/my-role\/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6\n    lease_duration     1h\n    lease_renewable    true\n    password           SsnoaA-8Tv4t34f41baD\n    username           v-vaultuse-my-role-x\n    ```\n\n## Rootless Configuration and Password Rotation for Static Roles\n\n<EnterpriseAlert product=\"vault\" \/>\n\nThe PostgreSQL secrets engine supports using Static Roles and its password rotation mechanisms with a Rootless\nDB connection configuration. In this workflow, a static DB user can be onboarded onto Vault's static role rotation\nmechanism without the need of privileged root accounts to configure the connection. Instead of using a single root\nconnection, multiple dedicated connections to the DB are made for each static role. This workflow does not support\ndynamic roles\/credentials.\n\n~> Note: It is **highly recommended** that the DB users being onboarded as static roles\nhave the minimum set of privileges. Each static role will open a new connection into the DB.\nGranting minimum privileges to the DB users being onboarded ensures that multiple\nhighly-privileged connections to an external system are not being made.\n\n~> Note: Out-of-band password rotations will cause Vault to be out of sync with the state of\nthe DB user, and will require manually updating the user's password in the external PostgreSQL\nDB in order to resolve any errors encountered during rotation.\n\n1.  Enable the database secrets engine if it is not already enabled:\n\n    ```shell-session\n    $ vault secrets enable database\n    Success! Enabled the database secrets engine at: database\/\n    ```\n\n    By default, the secrets engine will enable at the name of the engine. 
To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Configure connection to DB without root credentials and enable the rootless\n    workflow by setting the `self_managed` parameter:\n\n    ```shell-session\n    $ vault write database\/config\/my-postgresql-database \\\n        plugin_name=\"postgresql-database-plugin\" \\\n        allowed_roles=\"my-role\" \\\n        connection_url=\"postgresql:\/\/:@localhost:5432\/database-name\" \\\n        self_managed=true\n    ```\n\n1.  Configure a static role that creates a dedicated connection to a user in the DB with\n    the `self_managed_password` parameter:\n\n    ```shell-session\n    $ vault write database\/static-roles\/my-static-role \\\n      db_name=\"my-postgresql-database\" \\\n      username=\"staticuser\" \\\n      self_managed_password=\"password\" \\\n      rotation_period=\"1h\"\n    ```\n\n1.  Read static credentials:\n\n    ```shell-session\n    $ vault read database\/static-creds\/my-static-role\n    Key                    Value\n    ---                    -----\n    last_vault_rotation    2024-09-11T14:15:13.764783-07:00\n    password               XZY42BVc-UO5bMsbgxrW\n    rotation_period        1h\n    ttl                    59m55s\n    username               staticuser\n    ```\n\n## Client x509 certificate authentication\n\nThis plugin supports using PostgreSQL's [x509 Client-side Certificate Authentication](https:\/\/www.postgresql.org\/docs\/16\/libpq-ssl.html#LIBPQ-SSL-CLIENTCERT).\n\nTo use this authentication mechanism, configure the plugin to consume the\nPEM-encoded TLS data inline from a file on disk by prefixing with the \"@\"\nsymbol. This is useful in environments where you do not have direct access to\nthe machine that is hosting the Vault server. 
For example:\n\n```shell-session\n$ vault write database\/config\/my-postgresql-database \\\n    plugin_name=\"postgresql-database-plugin\" \\\n    allowed_roles=\"my-role\" \\\n    connection_url=\"postgresql:\/\/:@localhost:5432\/database-name?sslmode=verify-full\" \\\n    username=\"vaultuser\" \\\n    private_key=@\/path\/to\/client.key \\\n    tls_certificate=@\/path\/to\/client.pem \\\n    tls_ca=@\/path\/to\/client.ca\n```\n\nNote: `private_key`, `tls_certificate`, and `tls_ca` map to [`sslkey`][sslkey_docs],\n[`sslcert`][sslcert_docs], and [`sslrootcert`][sslrootcert_docs] configuration\noptions from PostgreSQL with the exception that the Vault parameters are the\ncontents of those files, not filenames.\n\n[sslkey_docs]: https:\/\/www.postgresql.org\/docs\/current\/libpq-connect.html#LIBPQ-CONNECT-SSLKEY\n[sslcert_docs]: https:\/\/www.postgresql.org\/docs\/current\/libpq-connect.html#LIBPQ-CONNECT-SSLCERT\n[sslrootcert_docs]: https:\/\/www.postgresql.org\/docs\/current\/libpq-connect.html#LIBPQ-CONNECT-SSLROOTCERT\n\nAlternatively, you can configure certificate authentication in environments\nwhere the TLS certificate data is present on the machine that is running the\nVault server process. 
Set `sslmode` to be any of the applicable values as\noutlined in the PostgreSQL documentation and set the SSL credentials in the\n`sslrootcert`, `sslcert` and `sslkey` connection parameters as paths to files.\nFor example:\n\n```shell-session\n$ export SSL=\"sslmode=verify-full&sslrootcert=\/path\/to\/ca.pem&sslcert=\/path\/to\/client.pem&sslkey=\/path\/to\/client.key\"\n$ vault write database\/config\/my-postgresql-database \\\n    plugin_name=\"postgresql-database-plugin\" \\\n    allowed_roles=\"my-role\" \\\n    connection_url=\"postgresql:\/\/:@localhost:5432\/database-name?${SSL}\" \\\n    username=\"vaultuser\"\n```\n\n## API\n\nThe full list of configurable options can be seen in the [PostgreSQL database\nplugin API](\/vault\/api-docs\/secret\/databases\/postgresql) page.\n\nFor more information on the database secrets engine's HTTP API please see the\n[Database secrets engine API](\/vault\/api-docs\/secret\/databases) page.\n\n[pgxlib]: https:\/\/pkg.go.dev\/github.com\/jackc\/pgx\/stdlib\n[pg_conn_docs]: https:\/\/www.postgresql.org\/docs\/current\/libpq-connect.html#LIBPQ-CONNSTRING\n\n## Authenticating to Cloud DBs via IAM\n\n### Google Cloud\n\nAside from IAM roles denoted by [Google's CloudSQL documentation](https:\/\/cloud.google.com\/sql\/docs\/postgres\/add-manage-iam-users#creating-a-database-user),\nthe following SQL privileges are needed by the service account's DB user for minimum functionality with Vault.\nAdditional privileges may be needed depending on the SQL configured on the database roles.\n\n```sql\n-- Enable service account to create roles within DB\nALTER USER \"<YOUR DB USERNAME>\" WITH CREATEROLE;\n```\n\n### Setup\n\n1.  Enable the database secrets engine if it is not already enabled:\n\n    ```shell-session\n    $ vault secrets enable database\n    Success! Enabled the database secrets engine at: database\/\n    ```\n\n    By default, the secrets engine will enable at the name of the engine. 
To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n1.  Configure Vault with the proper plugin and connection information. Here you can explicitly enable GCP IAM authentication\n    and use [Application Default Credentials](https:\/\/cloud.google.com\/docs\/authentication\/provide-credentials-adc#how-to) to authenticate:\n\n    ```shell-session\n    $ vault write database\/config\/my-postgresql-database \\\n        plugin_name=\"postgresql-database-plugin\" \\\n        allowed_roles=\"my-role\" \\\n        connection_url=\"host=project:us-west1:mydb user=test-user@project.iam dbname=postgres sslmode=disable\" \\\n        use_private_ip=\"false\" \\\n        auth_type=\"gcp_iam\"\n    ```\n\n    You can also configure the connection and authenticate by directly passing in the service account credentials\n    as an encoded JSON string:\n\n    ```shell-session\n    $ vault write database\/config\/my-postgresql-database \\\n        plugin_name=\"postgresql-database-plugin\" \\\n        allowed_roles=\"my-role\" \\\n        connection_url=\"host=project:region:instance user=test-user@project.iam dbname=postgres sslmode=disable\" \\\n        use_private_ip=\"false\" \\\n        auth_type=\"gcp_iam\" \\\n        service_account_json=\"@my_credentials.json\"\n    ```\n\nOnce the connection has been configured and IAM authentication is complete, the steps to set up a role and generate\ncredentials are the same as the ones listed above.","site":"vault"}
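The `${SSL}` fragment in the certificate-authentication example above is an ordinary URL query string appended to the `connection_url`. A small Python sketch that assembles it (the `ssl_query` helper is hypothetical; the paths are the placeholder paths from the example):

```python
from urllib.parse import urlencode

def ssl_query(ca: str, cert: str, key: str, mode: str = "verify-full") -> str:
    """Build the sslmode/sslrootcert/sslcert/sslkey query fragment
    that the docs splice into the PostgreSQL connection_url."""
    params = {
        "sslmode": mode,
        "sslrootcert": ca,
        "sslcert": cert,
        "sslkey": key,
    }
    # safe="/" keeps the file paths readable instead of percent-encoding them
    return urlencode(params, safe="/")

print(ssl_query("/path/to/ca.pem", "/path/to/client.pem", "/path/to/client.key"))
```

The resulting string matches the `export SSL=...` value in the example and can be appended after `?` in the `connection_url`.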
{"questions":"vault page title KV Secrets Engines The kv secrets engine is used to store arbitrary secrets within the KV secrets engine version 1 configured physical storage for Vault layout docs The KV secrets engine can store arbitrary secrets","answers":"---\nlayout: docs\npage_title: KV - Secrets Engines\ndescription: The KV secrets engine can store arbitrary secrets.\n---\n\n# KV secrets engine - version 1\n\nThe `kv` secrets engine is used to store arbitrary secrets within the\nconfigured physical storage for Vault.\n\nWriting to a key in the `kv` backend will replace the old value; sub-fields are\nnot merged together.\n\nKey names must always be strings. If you write non-string values directly via\nthe CLI, they will be converted into strings. However, you can preserve\nnon-string values by writing the key\/value pairs to Vault from a JSON file or\nusing the HTTP API.\n\nThis secrets engine honors the distinction between the `create` and `update`\ncapabilities inside ACL policies.\n\n~> **Note**: Path and key names are _not_ obfuscated or encrypted; only the\nvalues set on keys are. You should not store sensitive information as part of a\nsecret's path.\n\n## Setup\n\nTo enable a version 1 kv store:\n\n```shell-session\n$ vault secrets enable -version=1 kv\n```\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials. The `kv` secrets engine\nallows for writing keys with arbitrary values.\n\n1. Write arbitrary data:\n\n   ```shell-session\n   $ vault kv put kv\/my-secret my-value=s3cr3t\n   Success! Data written to: kv\/my-secret\n   ```\n\n1. Read arbitrary data:\n\n   ```shell-session\n   $ vault kv get kv\/my-secret\n   Key                 Value\n   ---                 -----\n   my-value            s3cr3t\n   ```\n\n1. List the keys:\n\n   ```shell-session\n   $ vault kv list kv\/\n   Keys\n   ----\n   my-secret\n   ```\n\n1. 
Delete a key:\n\n   ```shell-session\n   $ vault kv delete kv\/my-secret\n   Success! Data deleted (if it existed) at: kv\/my-secret\n   ```\n\nYou can also use [Vault's password policy](\/vault\/docs\/concepts\/password-policies) feature to generate arbitrary values.\n\n1. Write a password policy:\n\n   ```shell-session\n   $ vault write sys\/policies\/password\/example policy=-<<EOF\n\n     length=20\n\n     rule \"charset\" {\n       charset = \"abcdefghij0123456789\"\n       min-chars = 1\n     }\n\n     rule \"charset\" {\n       charset = \"!@#$%^&*STUVWXYZ\"\n       min-chars = 1\n     }\n\n   EOF\n   ```\n\n1. Write data using the `example` policy:\n\n   ```shell-session\n   $ vault kv put kv\/my-generated-secret \\\n       password=$(vault read -field password sys\/policies\/password\/example\/generate)\n   ```\n\n1. Read the generated data:\n\n   ```shell-session\n   $ vault kv get kv\/my-generated-secret\n   ====== Data ======\n   Key         Value\n   ---         -----\n   password    ^dajd609Xf8Zhac$dW24\n   ```\n\n## TTLs\n\nUnlike other secrets engines, the KV secrets engine does not enforce TTLs\nfor expiration. Instead, the `lease_duration` is a hint for how often consumers\nshould check back for a new value.\n\nIf provided a key of `ttl`, the KV secrets engine will utilize this value\nas the lease duration:\n\n```shell-session\n$ vault kv put kv\/my-secret ttl=30m my-value=s3cr3t\nSuccess! Data written to: kv\/my-secret\n```\n\nEven with a `ttl` set, the secrets engine _never_ removes data on its own. The\n`ttl` key is merely advisory.\n\nWhen reading a value with a `ttl`, both the `ttl` key _and_ the refresh interval\nwill reflect the value:\n\n```shell-session\n$ vault kv get kv\/my-secret\nKey                 Value\n---                 -----\nmy-value            s3cr3t\nttl                 30m\n```\n\n## API\n\nThe KV secrets engine has a full HTTP API. 
Please see the\n[KV secrets engine API](\/vault\/api-docs\/secret\/kv\/kv-v1) for more\ndetails.","site":"vault"}
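The password-policy flow above (a required length plus per-charset minimums) can be sketched outside Vault. This is a minimal, illustrative Python sketch of the `example` policy's rules — Vault generates passwords server-side, and this is not its actual algorithm:

```python
import secrets

def generate(length, rules):
    """Generate a random string honoring per-charset minimums,
    mirroring the 'example' password policy above (sketch only)."""
    # Draw the required minimum characters from each rule's charset.
    chars = [secrets.choice(charset)
             for charset, min_chars in rules
             for _ in range(min_chars)]
    # Fill the remainder from the union of all charsets.
    pool = "".join(charset for charset, _ in rules)
    chars += [secrets.choice(pool) for _ in range(length - len(chars))]
    # Shuffle so the required characters are not clustered at the front.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

password = generate(20, [
    ("abcdefghij0123456789", 1),  # first "charset" rule from the policy
    ("!@#$%^&*STUVWXYZ", 1),      # second "charset" rule from the policy
])
print(len(password))  # 20
```

Like the `sys/policies/password/example/generate` endpoint, each call yields a fresh value satisfying every rule.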
{"questions":"vault secrets within the configured physical storage for Vault This secrets engine page title KV Secrets Engines KV secrets engine The kv secrets engine is a generic key value store used to store arbitrary layout docs The KV secrets engine can store arbitrary secrets","answers":"---\nlayout: docs\npage_title: KV - Secrets Engines\ndescription: The KV secrets engine can store arbitrary secrets.\n---\n\n# KV secrets engine\n\nThe `kv` secrets engine is a generic key-value store used to store arbitrary\nsecrets within the configured physical storage for Vault. This secrets engine\ncan run in one of two modes: store a single value for a key, or store a number\nof versions for each key and maintain the record of them.\n\n## KV version 1\n\nWhen running the `kv` secrets engine non-versioned, it stores the most recently\nwritten value for a key. Any update will overwrite the original value, which is\nnot recoverable. The benefit of non-versioned `kv` is a reduced storage size for\neach key since no additional metadata or history is stored. Additionally, it\ngives better runtime performance because the requests require fewer storage\ncalls and no locking.\n\nRefer to the [KV version 1 Docs](\/vault\/docs\/secrets\/kv\/kv-v1) for more\ninformation.\n\n## KV version 2\n\nWhen running v2 of the `kv` secrets engine, a key can retain a configurable\nnumber of versions. The default is 10 versions. The older versions' metadata and\ndata can be retrieved. Additionally, it provides check-and-set operations to\nprevent overwriting data unintentionally.\n\nWhen a version is deleted, the underlying data is not removed, rather it is\nmarked as deleted. Deleted versions can be undeleted. To permanently remove a\nversion's data, use the `vault kv destroy` command or the API endpoint. You can\ndelete all versions and metadata for a key by deleting the metadata using the\n`vault kv metadata delete` command or the API endpoint with the DELETE verb.
You can\nrestrict who has permissions to soft delete, undelete, or fully remove data with\n[Vault policies](\/vault\/docs\/concepts\/policies).\n\nRefer to the [KV version 2 Docs](\/vault\/docs\/secrets\/kv\/kv-v2) for more\ninformation.\n\n\n## Version comparison\n\nRegardless of its version, you use the [`vault kv`](\/vault\/docs\/commands\/kv)\ncommand to interact with the KV secrets engine. However, the API endpoints are\ndifferent. You must be aware of those differences to write policies as intended.\n\nThe following table lists the `vault kv` sub-commands and their respective API\nendpoints assuming the KV secrets engine is enabled at `secret\/`.\n\n\n| Command           | KV v1 endpoint    | KV v2 endpoint                 |\n| ----------------- | ----------------- | ------------------------------ |\n| `vault kv get`    | secret\/<key_path> | secret\/**data**\/<key_path>     |\n| `vault kv put`    | secret\/<key_path> | secret\/**data**\/<key_path>     |\n| `vault kv list`   | secret\/<key_path> | secret\/**metadata**\/<key_path> |\n| `vault kv delete` | secret\/<key_path> | secret\/**data**\/<key_path>     |\n\nIn addition, KV v2 has sub-commands to handle versioning of secrets.\n\n| Command             | KV v2 endpoint                 |\n| ------------------- | ------------------------------ |\n| `vault kv patch`    | secret\/**data**\/<key_path>     |\n| `vault kv rollback` | secret\/**data**\/<key_path>     |\n| `vault kv undelete` | secret\/**undelete**\/<key_path> |\n| `vault kv destroy`  | secret\/**destroy**\/<key_path>  |\n| `vault kv metadata` | secret\/**metadata**\/<key_path> |\n\n\nTo reduce confusion, the CLI command outputs the secret path when you are\nworking with KV v2.\n\n**Example:**\n\n<CodeBlockConfig hideClipboard highlight=\"4\">\n\n```shell-session\n$ vault kv put secret\/web-app api-token=\"WEOIRJ13895130WENJWEFN\"\n\n=== Secret Path ===\nsecret\/data\/web-app\n\n======= Metadata =======\nKey                Value\n---                
-----\ncreated_time       2024-07-02T00:34:58.074825Z\ncustom_metadata    <nil>\ndeletion_time      n\/a\ndestroyed          false\nversion            1\n```\n\n<\/CodeBlockConfig>\n\nYou can use the `-mount` flag if omitting `\/data\/` in the CLI command is confusing.\n\n```shell-session\n$ vault kv put -mount=secret web-app api-token=\"WEOIRJ13895130WENJWEFN\"\n```","site":"vault"}
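The KV v2 semantics described above — versioned writes, soft delete, undelete, and destroy — can be modeled in a few lines. This is a toy Python sketch for illustration only, not Vault's implementation (real Vault version numbers are absolute and each version carries richer metadata):

```python
class ToyKVv2:
    """Toy model of KV v2: versioned writes, soft delete/undelete,
    and destroy (sketch of the semantics only)."""

    def __init__(self, max_versions=10):  # 10 versions is the documented default
        self.max_versions = max_versions
        self.store = {}  # path -> list of {"data": ..., "deleted": bool}

    def put(self, path, data):
        versions = self.store.setdefault(path, [])
        versions.append({"data": data, "deleted": False})
        del versions[:-self.max_versions]  # oldest versions fall off
        return len(versions)

    def get(self, path, version=None):
        v = self.store[path][-1 if version is None else version - 1]
        return None if v["deleted"] else v["data"]

    def delete(self, path):
        # Soft delete: the data stays, it is only marked as deleted.
        self.store[path][-1]["deleted"] = True

    def undelete(self, path, version):
        self.store[path][version - 1]["deleted"] = False

    def destroy(self, path, version):
        # Permanently removes the version's data.
        self.store[path][version - 1]["data"] = None

kv = ToyKVv2()
kv.put("web-app", {"api-token": "old"})
kv.put("web-app", {"api-token": "new"})
kv.delete("web-app")  # latest version now reads as deleted, but is recoverable
```

After the soft delete, reading the latest version returns nothing, while `undelete` restores it — only `destroy` discards the underlying data, matching the behavior described above.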
{"questions":"vault page title Save random strings layout docs Use password policies and the key value v2 plugins to generate and store Save random strings to the key value v2 plugin random strings in Vault","answers":"---\nlayout: docs\npage_title: Save random strings\ndescription: >-\n   Use password policies and the key\/value v2 plugins to generate and store\n   random strings in Vault.\n---\n\n# Save random strings to the key\/value v2 plugin\n\nUse [password policies](\/vault\/docs\/concepts\/password-policies) to generate\nrandom strings and save the strings to your key\/value v2 plugin.\n\n## Before you start\n\n- **You must have `read`, `create`, and `update` permission for password policies**.\n- **You must have `create` and `update` permission for your `kv` v2 plugin**.\n\n\n## Step 1: Create a password policy file\n\nCreate an HCL file with a password policy that defines the desired randomization\nand generation rules.\n\nFor example, the following password policy requires a string 20 characters long\nthat includes:\n\n- at least one lowercase character\n- at least one uppercase character\n- at least one number\n- at least two special characters\n\n```hcl\nlength=20\n\nrule \"charset\" {\n  charset = \"abcdefghijklmnopqrstuvwxyz\"\n  min-chars = 1\n}\n\nrule \"charset\" {\n  charset = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n  min-chars = 1\n}\n\nrule \"charset\" {\n  charset = \"0123456789\"\n  min-chars = 1\n}\n\nrule \"charset\" {\n  charset = \"!@#$%^&*STUVWXYZ\"\n  min-chars = 2\n}\n```\n\n\n## Step 2: Save the password policy\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse `vault write` to save policies to the password policies endpoint\n(`sys\/policies\/password\/<policy_name>`):\n\n```shell-session\n$ vault write sys\/policies\/password\/<policy_name> policy=@<policy_file>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault write sys\/policies\/password\/randomize policy=@password-rules.hcl\n\nSuccess! 
Data written to: sys\/policies\/password\/randomize\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\nEscape your password policy file and make a `POST` call to\n[`\/sys\/policies\/password\/{policy_name}`](\/vault\/api-docs\/system\/policies-password#create-update-password-policy)\nwith your password creation rules:\n\n```shell-session\n$ jq -Rs '{ \"policy\": . | gsub(\"[\\\\r\\\\n\\\\t]\"; \"\") }' <path_to_policy_file> |\n  curl                                        \\\n    --request POST                            \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n    --data \"$(<\/dev\/stdin)\"                   \\\n    ${VAULT_ADDR}\/v1\/sys\/policies\/password\/<policy_name>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ jq -Rs '{ \"policy\": . | gsub(\"[\\\\r\\\\n\\\\t]\"; \"\") }' .\/password-rules.hcl |\n  curl                                          \\\n      --request POST                            \\\n      --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n      --data \"$(<\/dev\/stdin)\"                   \\\n      ${VAULT_ADDR}\/v1\/sys\/policies\/password\/randomize | jq\n```\n\n<\/CodeBlockConfig>\n\n`\/sys\/policies\/password\/{policy_name}` does not return data on success.\n\n<\/Tab>\n\n<\/Tabs>\n\n\n\n## Step 3: Save a random string to `kv` v2\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse `vault read` and the `generate` endpoint of the new password policy to\ngenerate a new random string and write it to the `kv` plugin with\n`vault kv put`:\n\n```shell-session\n$ vault kv put                                    \\\n  -mount <mount_path>                             \\\n  <secret_path>                                   \\\n  <key_name>=$(                                   \\\n    vault read -field password                    \\\n    sys\/policies\/password\/<policy_name>\/generate  \\\n  )\n```\n\nFor example:\n\n<CodeBlockConfig 
hideClipboard=\"true\">\n\n```shell-session\n$ vault kv put                                \\\n  -mount shared                               \\\n  \/dev\/seeds                                  \\\n  seed1=$(                                    \\\n    vault read -field password                \\\n    sys\/policies\/password\/randomize\/generate  \\\n  )\n\n==== Secret Path ====\nshared\/data\/dev\/seeds\n\n======= Metadata =======\nKey                Value\n---                -----\ncreated_time       2024-11-15T23:15:31.929717548Z\ncustom_metadata    <nil>\ndeletion_time      n\/a\ndestroyed          false\nversion            1\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\nUse the\n[`\/sys\/policies\/password\/{policy_name}\/generate`](\/vault\/api-docs\/system\/policies-password#generate-password-from-password-policy)\nendpoint of the new password policy to generate a random string and write it to\nthe `kv` plugin with a `POST` call to\n[`\/{plugin_mount_path}\/data\/{secret_path}`](\/vault\/api-docs\/secret\/kv\/kv-v2#create-update-secret):\n\n```shell-session\n$ curl                                        \\\n    --request POST                            \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n    --data                                    \\\n    \"{\n    \\\"data\\\": {\n      \\\"<key_name>\\\": \\\"$(\n        vault read -field password sys\/policies\/password\/<policy_name>\/generate\n      )\\\"\n      }\n    }\"                                        \\\n    ${VAULT_ADDR}\/v1\/<plugin_mount_path>\/data\/<secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                        \\\n    --request POST                            \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n    --data                                    \\\n    \"{\n    \\\"data\\\": {\n      \\\"seed1\\\": \\\"$(\n        vault read -field password 
sys\/policies\/password\/randomize\/generate\n      )\\\"\n      }\n    }\"                                        \\\n    ${VAULT_ADDR}\/v1\/shared\/data\/dev\/seeds | jq\n\n{\n  \"request_id\": \"f9fad221-74e7-72c4-3f5a-9364944c37d9\",\n  \"lease_id\": \"\",\n  \"renewable\": false,\n  \"lease_duration\": 0,\n  \"data\": {\n    \"created_time\": \"2024-11-15T23:33:08.549750507Z\",\n    \"custom_metadata\": null,\n    \"deletion_time\": \"\",\n    \"destroyed\": false,\n    \"version\": 1\n  },\n  \"wrap_info\": null,\n  \"warnings\": null,\n  \"auth\": null,\n  \"mount_type\": \"kv\"\n}\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<\/Tabs>\n\n## Step 4: Verify the data in Vault\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse [`vault kv get`](\/vault\/docs\/commands\/kv\/read) with the `-field` flag to read\nthe randomized string from the relevant secret path:\n\n```shell-session\n$ vault kv get          \\\n   -mount <mount_path>  \\\n   -field <field_name>  \\\n   <secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv get -mount shared -field seed1 dev\/seeds\n\ng0bc0b6W3ii^SXa@*ie5\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\n@include 'gui-instructions\/plugins\/kv\/open-overview.mdx'\n\n- Select the **Secret** tab.\n\n- Click the eye icon to view the desired key value.\n\n![Partial screenshot of the Vault GUI showing the randomized string stored at the path dev\/seeds.](\/img\/gui\/kv\/random-string.png)\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\nCall the [`\/{plugin_mount_path}\/data\/{secret_path}`](\/vault\/api-docs\/secret\/kv\/kv-v2#read-secret-version)\nendpoint to read all the key\/value pairs at the secret path:\n\n```shell-session\n$ curl                                       \\\n   --request GET                             \\\n   --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n   
${VAULT_ADDR}\/v1\/<plugin_mount_path>\/data\/<secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                       \\\n   --request GET                             \\\n   --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n   ${VAULT_ADDR}\/v1\/shared\/data\/dev\/seeds | jq\n\n{\n  \"request_id\": \"c1202e8d-aff9-2d81-0929-4a558a193b4c\",\n  \"lease_id\": \"\",\n  \"renewable\": false,\n  \"lease_duration\": 0,\n  \"data\": {\n    \"data\": {\n      \"seed1\": \"g0bc0b6W3ii^SXa@*ie5\"\n    },\n    \"metadata\": {\n      \"created_time\": \"2024-11-15T23:33:08.549750507Z\",\n      \"custom_metadata\": null,\n      \"deletion_time\": \"\",\n      \"destroyed\": false,\n      \"version\": 1\n    }\n  },\n  \"wrap_info\": null,\n  \"warnings\": null,\n  \"auth\": null,\n  \"mount_type\": \"kv\"\n}\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<\/Tabs>","site":"vault"}
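The API examples above always target `/v1/<mount>/data/<secret_path>` and wrap the secret key/value pairs under a top-level `data` key. This small Python helper sketches that request shape; the function name and the `vault_addr` default are illustrative, only the URL layout and `data` wrapper come from the docs:

```python
import json

def kv2_write_request(mount, secret_path, data,
                      vault_addr="http://127.0.0.1:8200"):
    """Build the URL and JSON body for a KV v2 write, matching the
    curl examples above (sketch of the request shape only)."""
    url = f"{vault_addr}/v1/{mount}/data/{secret_path}"
    body = json.dumps({"data": data})  # secret pairs go under "data"
    return url, body

url, body = kv2_write_request("shared", "dev/seeds",
                              {"seed1": "g0bc0b6W3ii^SXa@*ie5"})
```

Forgetting the `data` wrapper is a common source of KV v2 write errors, since v1 writes take the key/value pairs at the top level.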
{"questions":"vault page title Set up the key value v2 plugin secrets in Vault Enable and configure the key value v2 plugins to store arbitrary static layout docs Set up the key value v2 plugin","answers":"---\nlayout: docs\npage_title: Set up the key\/value v2 plugin\ndescription: >-\n   Enable and configure the key\/value v2 plugins to store arbitrary static\n   secrets in Vault.\n---\n\n# Set up the key\/value v2 plugin\n\nUse `vault secrets enable` to enable an instance of the `kv` plugin. To specify\nversion 2, use the `-version` flag or specify `kv-v2` as the plugin type.\n\nAdditionally, when running a dev-mode server, the v2 `kv` secrets engine is enabled by default at the\npath `demo\/` (for non-dev servers, it is currently v1). It can be disabled, moved, or enabled multiple times at\ndifferent paths. Each instance of the KV secrets engine is isolated and unique.\n\n\n## Before you start\n\n- **You must have permission to update ACL policies**.\n- **You must have permission to enable plugins**.\n\n\n\n## Step 1: Enable the plugin\n\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse `vault secrets enable` to establish a new instance of the `kv` plugin.\n\nTo specify key\/value version 2, use the `-version` flag or use `kv-v2` as the\nplugin type.\n\n\n**Option 1**: Use the `-version` flag:\n\n```shell-session\n$ vault secrets enable -path <mount_path> -version=2 kv\n```\n\n**Option 2**: Use the `kv-v2` plugin type:\n\n```shell-session\n$ vault secrets enable -path <mount_path> kv-v2\n```\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\n@include 'gui-instructions\/enable-secrets-plugin.mdx'\n\n- Select the \"KV\" plugin.\n\n- Enter a unique path for the plugin and provide the relevant configuration\n  data.\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\n1. Create a JSON file with the type and configuration information for your `kv`\nv2 instance. Use the `options` field to set optional flags.\n\n1. 
Make a `POST` call to\n   [`\/sys\/mounts\/{plugin_mount_path}`](\/vault\/api-docs\/system\/mounts#enable-secrets-engine)\n   with the JSON data:\n    ```shell-session\n    $ curl                                      \\\n      --request POST                            \\\n      --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n      --data @data.json                         \\\n      ${VAULT_ADDR}\/v1\/sys\/mounts\/<plugin_mount_path>\n    ```\n\n\nFor example:\n\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```json\n{\n  \"type\": \"kv\",\n  \"options\": {\n    \"version\": \"2\"\n  }\n}\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                        \\\n    --request POST                            \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n    --data @data.json                         \\\n    ${VAULT_ADDR}\/v1\/sys\/mounts\/shared | jq\n```\n\n<\/CodeBlockConfig>\n\n`\/sys\/mounts\/{plugin_mount_path}` does not return data on success.\n\n<\/Tab>\n\n<\/Tabs>\n\n\n\n## Step 2: Create an ACL policy file\n\n<Note>\n\n  ACL policies for `kv` plugins do not support the `allowed_parameters`,\n  `denied_parameters`, and `required_parameters` policy fields.\n\n<\/Note>\n\nCreate a policy definition file based on your needs.\n\nFor example, assume there are API keys stored on the path `\/dev\/square-api` for\na `kv` plugin mounted at `shared\/`. 
The following policy lets clients read and\npatch the latest version of API keys and read metadata for any version of the\nAPI keys:\n\n```hcl\n# Grants permission to read and patch the latest version of API keys\npath \"shared\/data\/dev\/square-api\/*\" {\n  capabilities = [\"read\", \"patch\"]\n}\n\n# Grants permission to read metadata for any version of the API keys\npath \"shared\/metadata\/dev\/square-api\/\" {\n  capabilities = [\"read\"]\n}\n```\n\n<Tabs>\n\n<Tab heading=\"Available path prefixes\">\n\n@include 'policies\/path-prefixes.mdx'\n\n<\/Tab>\n\n<Tab heading=\"Available permissions\">\n\n@include 'policies\/policy-permissions.mdx'\n\n<\/Tab>\n\n<\/Tabs>\n\nIf you are unsure about the required permissions, use the Vault CLI to run a\ncommand with placeholder data and the `-output-policy` flag against an existing\n`kv` plugin to generate a minimal policy.\n\n<CodeBlockConfig highlight=\"2\">\n\n```shell-session\n$ vault kv patch                  \\\n    -output-policy                \\\n    -mount <existing_mount_path>  \\\n    test-path                     \\\n    test=test\n```\n\n<\/CodeBlockConfig>\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv patch      \\\n    -output-policy    \\\n    -mount private    \\\n    test-path         \\\n    test=test\n\npath \"private\/data\/test-path\" {\n  capabilities = [\"patch\"]\n}\n```\n\n<\/CodeBlockConfig>\n\n## Step 3: Save the ACL policy\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse `vault policy write` to create a new ACL policy with the policy definition\nfile:\n\n```shell-session\n$ vault policy write <name> <path_to_policy_file>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault policy write \"KV-access-policy\" .\/kv-policy.hcl\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\n@include 'gui-instructions\/create-acl-policy.mdx'\n\n- Provide a name for the policy and 
upload the policy definition file.\n\n<Tip>\n\nIf you expect to modify policies with the Vault API, avoid spaces and special\ncharacters in the policy name. The policy name becomes part of the API endpoint\npath.\n\n<\/Tip>\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\nEscape your policy file and make a `POST` call to\n[`\/sys\/policy\/{policy_name}`](\/vault\/api-docs\/system\/policy#create-update-policy)\nwith your policy details:\n\n```shell-session\n$ jq -Rs '{ \"policy\": . | gsub(\"[\\\\r\\\\n\\\\t]\"; \"\") }' <path_to_policy_file> |\n  curl                                        \\\n    --request POST                            \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n    --data \"$(<\/dev\/stdin)\"                   \\\n    ${VAULT_ADDR}\/v1\/sys\/policy\/<policy_name>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ jq -Rs '{ \"policy\": . | gsub(\"[\\\\r\\\\n\\\\t]\"; \"\") }' .\/kv-policy.hcl |\n  curl                                          \\\n      --request POST                            \\\n      --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n      --data \"$(<\/dev\/stdin)\"                   \\\n      ${VAULT_ADDR}\/v1\/sys\/policy\/kv-access | jq\n```\n\n<\/CodeBlockConfig>\n\n`\/sys\/policy\/{policy_name}` does not return data on success.\n\n<\/Tab>\n\n<\/Tabs>\n\n\n\n## Next steps \n\n- [Create an authentication mapping for the plugin](\/vault\/docs\/concepts\/policies#associating-policies)","site":"vault","answers_cleaned":"    layout  docs page title  Set up the key value v2 plugin description        Enable and configure the key value v2 plugins to store arbitrary static    secrets in Vault         Set up the key value v2 plugin  Use  vault secrets enable  to enable an instance of the  kv  plugin  To specify version 2  use the   version  flag or specific  kv v2  as the plugin type   Additionally  when running a dev mode server  the v2  kv  secrets engine is enabled by 
default at the path  demo    for non dev servers  it is currently v1   It can be disabled  moved  or enabled multiple times at different paths  Each instance of the KV secrets engine is isolated and unique       Before you start       You must have permission to update ACL policies        You must have permission to enable plugins          Step 1  Enable the plugin    Tabs    Tab heading  CLI  group  cli    Use  vault secrets enable  to establish a new instance of the  kv  plugin   To specify key value version 2  use the   version  flag or use  kv v2  as the plugin type      Option 1    Use the   version  flag      shell session   vault secrets enable  path  mount path   version 2 kv        Option 2    Use the  kv v2  plugin type      shell session   vault secrets enable  path  mount path  kv v2        Tab    Tab heading  GUI  group  gui     include  gui instructions enable secrets plugin mdx     Select the  KV  plugin     Enter a unique path for the plugin and provide the relevant configuration   data     Tab    Tab heading  API  group  api    1  Create a JSON file with the type and configuration information for your  kv  v2 instance  Use the  options  field to set optional flags   1  Make a  POST  call to       sys mounts  plugin mount path     vault api docs system mounts enable secrets engine     with the JSON data         shell session       curl                                                request POST                                      header  X Vault Token    VAULT TOKEN              data  data json                                   VAULT ADDR  v1 sys mounts  plugin mount path            For example     CodeBlockConfig hideClipboard  true       json      type    kv      options          version    2               CodeBlockConfig    CodeBlockConfig hideClipboard  true       shell session   curl                                                request POST                                    header  X Vault Token    VAULT TOKEN            data  data json     
                            VAULT ADDR  v1 sys mounts shared   jq        CodeBlockConfig     sys mounts  plugin mount path   does not return data on success     Tab     Tabs        Step 2  Create an ACL policy file   Note     ACL policies for  kv  plugins do not support the  allowed parameters      denied parameters   and  required parameters  policy fields     Note   Create a policy definition file based on your needs   For example  assume there are API keys stored on the path   dev square api  for a  kv  plugin mounted at  shared    The following policy lets clients read and patch the latest version of API keys and read metadata for any version of the API keys      hcl   Grants permission to read and patch the latest version of API keys path  shared data dev square api         capabilities     read    patch        Grants permission to read metadata for any version of the API keys path  shared metadata dev square api        capabilities     read           Tabs    Tab heading  Available path prefixes     include  policies path prefixes mdx     Tab    Tab heading  Available permissions     include  policies policy permissions mdx     Tab     Tabs   If you are unsure about the required permissions  use the Vault CLI to run a command with placeholder data and the   output policy  flag against an existing  kv  plugin to generate a minimal policy     CodeBlockConfig highlight  2       shell session   vault kv patch                         output policy                       mount  existing mount path         test path                           test test        CodeBlockConfig   For example    CodeBlockConfig hideClipboard  true       shell session   vault kv patch             output policy           mount private          test path               test test  path  private data test path      capabilities     patch            CodeBlockConfig      Step 3  Save the ACL policy   Tabs    Tab heading  CLI  group  cli    Use  vault policy write  to create a new ACL policy with 
the policy definition file      shell session   vault policy write  name   path to policy file       For example    CodeBlockConfig hideClipboard  true       shell session   vault policy write  KV access policy    kv policy hcl        CodeBlockConfig     Tab    Tab heading  GUI  group  gui     include  gui instructions create acl policy mdx     Provide a name for the policy and upload the policy definition file    Tip   If you expect to modify policies with the Vault API  avoid spaces and special characters in the policy name  The policy name becomes part of the API endpoint path     Tip     Tab    Tab heading  API  group  api    Escape your policy file and make a  POST  call to    sys policy  policy name     vault api docs system policy create update policy  with your policy details      shell session   jq  Rs     policy       gsub     r  n  t            path to policy file      curl                                                request POST                                    header  X Vault Token    VAULT TOKEN               dev stdin                                    VAULT ADDR  v1 sys policy  policy name       For example    CodeBlockConfig hideClipboard  true       shell session   jq  Rs     policy       gsub     r  n  t             kv policy hcl     curl                                                    request POST                                      header  X Vault Token    VAULT TOKEN              data      dev stdin                               VAULT ADDR  v1 sys policy kv access   jq        CodeBlockConfig     sys mounts  plugin mount path   does not return data on success     Tab     Tabs        Next steps      Create an authentication mapping for the plugin   vault docs concepts policies associating policies "}
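As an illustrative sketch (not part of the scraped page), the two API calls from the record above can be reproduced with Python's standard library: enabling a `kv` v2 mount via `POST /v1/sys/mounts/{path}`, and escaping a policy file the way the `jq` filter in Step 3 does. The server address, token value, and helper names are assumptions for the example:

```python
import json
import re
import urllib.request

VAULT_ADDR = "http://127.0.0.1:8200"  # assumption: local dev server
VAULT_TOKEN = "hvs.example"           # hypothetical token value

def enable_kv_v2(mount_path: str) -> urllib.request.Request:
    # Same call as the API tab: POST /v1/sys/mounts/{path} with the
    # data.json body {"type": "kv", "options": {"version": "2"}}.
    body = {"type": "kv", "options": {"version": "2"}}
    return urllib.request.Request(
        url=f"{VAULT_ADDR}/v1/sys/mounts/{mount_path}",
        data=json.dumps(body).encode(),
        headers={"X-Vault-Token": VAULT_TOKEN},
        method="POST",
    )

def escape_policy(hcl_text: str) -> dict:
    # Python equivalent of the jq filter used in Step 3:
    # '{ "policy": . | gsub("[\\r\\n\\t]"; "") }'
    return {"policy": re.sub(r"[\r\n\t]", "", hcl_text)}

req = enable_kv_v2("shared")
print(req.full_url)  # http://127.0.0.1:8200/v1/sys/mounts/shared
print(escape_policy('path "shared/data/*" { capabilities = ["read"] }'))
```

Sending the request (`urllib.request.urlopen(req)`) requires a reachable Vault server; the construction above only mirrors the documented payloads.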
{"questions":"vault Upgrade existing v1 key value plugins to leverage kv v2 features page title Upgrade to key value v2 Upgrade kv version 1 plugins You can upgrade existing version 1 key value stores to version 2 to use layout docs","answers":"---\nlayout: docs\npage_title: Upgrade to key\/value v2\ndescription: >-\n   Upgrade existing v1 key\/value plugins to leverage kv v2 features.\n---\n\n# Upgrade `kv` version 1 plugins\n\nYou can upgrade existing version 1 key\/value stores to version 2 to use\nversioning.\n\n<Warning>\n\n   You cannot access v1 plugin mounts during the upgrade, which may take a long\n   time for plugins that contain significant data.\n\n<\/Warning>\n\n## Before you start \n\n- **You must have permission to update ACL policies**.\n- **You must have permission to tune the `kv` v1 plugin**.\n\n\n\n## Step 1: Update ACL rules\n\nThe `kv` v2 plugin uses different API path prefixes than `kv` v1. You must\nupgrade the relevant ACL policies **before** upgrading the plugin by changing\nv1 paths for read, write, or update policies to include the v2 path prefix,\n`data\/`.\n\nFor example, the following `kv` v1 policy:\n\n```hcl\npath \"shared\/dev\/square-api\/*\" {\n  capabilities = [\"create\", \"update\", \"read\"]\n}\n```\n\nbecomes:\n\n```hcl\npath \"shared\/data\/dev\/square-api\/*\" {\n  capabilities = [\"create\", \"update\", \"read\"]\n}\n```\n\n<Tip>\n\n  You can assign different ACL policies to different `kv` v2 paths.\n\n<\/Tip>\n\n\n\n## Step 2: Upgrade the plugin instance\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse the `enable-versioning` subcommand to upgrade from v1 to v2:\n\n```shell-session\n$ vault kv enable-versioning <kv_v1_mount_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv enable-versioning shared\/\nSuccess! 
Tuned the secrets engine at: shared\/\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\nMake a `POST` call to\n[`\/sys\/mounts\/{plugin_mount_path}\/tune`](\/vault\/api-docs\/system\/mounts#tune-mount-configuration)\nwith `options.version` set to `2` to update the plugin version:\n\n```shell-session\n$ curl                                            \\\n    --header \"X-Vault-Token: ...\"                 \\\n    --request POST                                \\\n    --data '{\"options\": {\"version\": \"2\"}}'        \\\n    ${VAULT_ADDR}\/v1\/sys\/mounts\/${KV_MOUNT_PATH}\/tune\n```\n\n<\/Tab>\n\n<\/Tabs>\n\n\n\n## Related resources\n\n- [KV v2 plugin API docs](\/vault\/api-docs\/secret\/kv\/kv-v2)\n- [Tutorial: Versioned Key Value Secrets Engine](\/vault\/tutorials\/secrets-management\/versioned-kv) -\n   Learn how to compare data in the KV v2 secrets engine and protect data from\n   accidental deletion","site":"vault","answers_cleaned":"    layout  docs page title  Upgrade to key value v2 description        Upgrade existing v1 key value plugins to leverage kv v2 features         Upgrade  kv  version 1 plugins  You can upgrade existing version 1 key value stores to version 2 to use versioning    Warning      You cannot access v1 plugin mounts during the upgrade  which may take a long    time for plugins that contain significant data     Warning      Before you start       You must have permission to update ACL policies        You must have permission to tune the  kv  v1 plugin          Step 1  Update ACL rules  The  kv  v2 plugin uses different API path prefixes than  kv  v1  You must upgrade the relevant ACL policies   before   upgrading the plugin by changing v1 paths for read  write  or update policies to include the v2 path prefix   data     For example  the following  kv  v1 policy      hcl path  shared dev square api        capabilities     create    update    read          becomes      hcl path  shared data dev square api    
    capabilities     create    update    read           Tip     You can assign different ACL policies to different  kv  v2 paths     Tip        Step 2  Upgrade the plugin instance   Tabs    Tab heading  CLI  group  cli    Use the  enable versioning  subcommand to upgrade from v1 to v2      shell session   vault kv enable versioning  kv v1 mount path       For example    CodeBlockConfig hideClipboard  true       shell session   vault kv enable versioning shared  Success  Tuned the secrets engine at  shared         CodeBlockConfig     Tab    Tab heading  API  group  api    Make a  POST  call to    sys mounts  plugin mount path     vault api docs system mounts enable secrets engine  with  options version  set to  2  to update the plugin version      shell session   curl                                                    header  X Vault Token                               request POST                                        data     options       version      2             http     VAULT ADDR  v1 sys mounts   KV MOUNT PATH  tune        Tab     Tabs        Related resources     KV v2 plugin API docs   vault api docs secret kv kv v2     Tutorial  Versioned Key Value Secrets Engine   vault tutorials secrets management versioned kv       Learn how to compare data in the KV v2 secrets engine and protect data from    accidental deletion"}
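The upgrade record above has two mechanical steps: rewrite v1 ACL paths to include the `data/` prefix, and tune the mount to version 2. A minimal sketch of both (not from the scraped page; the address, token, and helper names are assumptions):

```python
import json
import urllib.request

VAULT_ADDR = "http://127.0.0.1:8200"  # assumption: local dev server
VAULT_TOKEN = "hvs.example"           # hypothetical token value

def v2_data_path(v1_path: str) -> str:
    # Step 1: insert the kv v2 "data/" prefix after the mount segment,
    # e.g. "shared/dev/square-api/*" -> "shared/data/dev/square-api/*".
    mount, _, rest = v1_path.partition("/")
    return f"{mount}/data/{rest}"

def tune_to_v2(mount_path: str) -> urllib.request.Request:
    # Step 2 (API tab): POST /v1/sys/mounts/{path}/tune with
    # {"options": {"version": "2"}} to upgrade the mount in place.
    body = {"options": {"version": "2"}}
    return urllib.request.Request(
        url=f"{VAULT_ADDR}/v1/sys/mounts/{mount_path}/tune",
        data=json.dumps(body).encode(),
        headers={"X-Vault-Token": VAULT_TOKEN},
        method="POST",
    )

print(v2_data_path("shared/dev/square-api/*"))  # shared/data/dev/square-api/*
print(tune_to_v2("shared").full_url)  # http://127.0.0.1:8200/v1/sys/mounts/shared/tune
```

`v2_data_path` only covers read/write/update rules; list operations move under the `metadata/` prefix instead, which this helper does not handle.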
{"questions":"vault Restore soft deleted key value data page title Restore soft deleted data You can restore data from soft deletes in the kv v2 plugin as long as the Revert soft deletes to restore versioned key value data in the kv v2 plugin layout docs","answers":"---\nlayout: docs\npage_title: Restore soft deleted data\ndescription: >-\n   Revert soft deletes to restore versioned key\/value data in the kv v2 plugin.\n---\n\n# Restore soft deleted key\/value data\n\nYou can restore data from soft deletes in the `kv` v2 plugin as long as the\n`destroyed` metadata field for the targeted version is `false`.\n\n<Tip title=\"Assumptions\">\n\n- You have [set up a `kv` v2 plugin](\/vault\/docs\/secrets\/kv\/kv-v2\/setup). \n- Your authentication token has `create` and `update` permissions for the `kv`\n  v2 plugin.\n\n<\/Tip>\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse [`vault kv undelete`](\/vault\/docs\/command\/kv\/undelete) with the `-versions`\nflag to restore soft deleted versions of key\/value data:\n\n```shell-session\n$ vault kv undelete             \\\n   -mount <mount_path>          \\\n   -versions <target_versions>  \\\n   <secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv undelete -mount shared -versions 1,4 dev\/square-api\n\nSuccess! 
Data written to: shared\/undelete\/dev\/square-api\n```\n\n<\/CodeBlockConfig>\n\nThe `deletion_time` metadata field for versions 1 and 4 is now `n\/a`:\n\n<CodeBlockConfig hideClipboard=\"true\" highlight=\"22,31\">\n\n```shell-session\n$ vault kv metadata get -mount shared dev\/square-api\n\n======== Metadata Path ========\nshared\/metadata\/dev\/square-api\n\n========== Metadata ==========\nKey                     Value\n---                     -----\ncas_required            false\ncreated_time            2024-11-13T21:51:50.898782695Z\ncurrent_version         4\ncustom_metadata         <nil>\ndelete_version_after    0s\nmax_versions            5\noldest_version          0\nupdated_time            2024-11-14T22:32:42.29534643Z\n\n====== Version 1 ======\nKey              Value\n---              -----\ncreated_time     2024-11-13T21:51:50.898782695Z\ndeletion_time    n\/a\ndestroyed        false\n\n...\n\n====== Version 4 ======\nKey              Value\n---              -----\ncreated_time     2024-11-14T22:32:42.29534643Z\ndeletion_time    n\/a\ndestroyed        false\n```\n\n<\/CodeBlockConfig>\n\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\n@include 'gui-instructions\/plugins\/kv\/open-overview.mdx'\n\n- Select the **Secret** tab.\n- Select the appropriate data version from the **Version** dropdown.\n- Click **Undelete**.\n\n![Partial screenshot of the Vault GUI showing the deleted version message](\/img\/gui\/kv\/undelete-data.png)\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\nMake a `POST` call to\n[`\/{plugin_mount_path}\/undelete\/{secret_path}`](\/vault\/api-docs\/secret\/kv\/kv-v2#undelete-secret-versions)\nwith the data versions you want to restore:\n\n```shell-session\n$ curl                                       \\\n   --request POST                            \\\n   --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n   --data '{\"versions\":[<target_versions>]}' \\\n   
${VAULT_ADDR}\/v1\/<plugin_mount_path>\/undelete\/<secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                       \\\n    --request POST                           \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\" \\\n    --data '{\"versions\":[5,8]}'              \\\n    ${VAULT_ADDR}\/v1\/shared\/undelete\/dev\/square-api | jq\n\n```\n\n`\/{plugin_mount_path}\/undelete\/{secret_path}` does not return data on success.\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Restore soft deleted data description        Revert soft deletes to restore versioned key value data in the kv v2 plugin         Restore soft deleted key value data  You can restore data from soft deletes in the  kv  v2 plugin as long as the  destroyed  metadata field for the targeted version is  false     Tip title  Assumptions      You have  set up a  kv  v2 plugin   vault docs secrets kv kv v2 setup      Your authentication token has  create  and  update  permissions for the  kv    v2 plugin     Tip    Tabs    Tab heading  CLI  group  cli    Use   vault kv undelete    vault docs command kv undelete  with the   versions  flag to restore soft deleted version of key value data      shell session   vault kv undelete                   mount  mount path                 versions  target versions         secret path       For example    CodeBlockConfig hideClipboard  true       shell session   vault kv undelete  mount shared  versions 1 4 dev square api  Success  Data deleted  if it existed  at  shared data dev square api        CodeBlockConfig   The  deletion time  metadata field for versions 1 and 4 is now  n a     CodeBlockConfig hideClipboard  true  highlight  22 31       shell session   vault kv metadata get  mount shared dev square api           Metadata Path          shared metadata dev square api             Metadata            Key                     Value     
                          cas required            false created time            2024 11 13T21 51 50 898782695Z current version         4 custom metadata          nil  delete version after    0s max versions            5 oldest version          0 updated time            2024 11 14T22 32 42 29534643Z         Version 1        Key              Value                        created time     2024 11 13T21 51 50 898782695Z deletion time    n a destroyed        false              Version 4        Key              Value                        created time     2024 11 14T22 32 42 29534643Z deletion time    n a destroyed        false        CodeBlockConfig      Tab    Tab heading  GUI  group  gui     include  gui instructions plugins kv open overview mdx     Select the   Secret   tab    Select the appropriate data version from the   Version   dropdown    Click   Undelete       Partial screenshot of the Vault GUI showing the deleted version message   img gui kv undelete data png     Tab    Tab heading  API  group  api    Make a  POST  call to     plugin mount path  undelete  secret path     vault api docs secret kv kv v2 undelete secret versions  with the data versions you want to restore      shell session   curl                                              request POST                                   header  X Vault Token    VAULT TOKEN           data    versions    target versions            VAULT ADDR  v1  plugin mount path  undelete  secret path       For example    CodeBlockConfig hideClipboard  true       shell session   curl                                               request POST                                   header  X Vault Token    VAULT TOKEN           data    versions   5 8                         VAULT ADDR  v1 shared undelete dev square api   jq          plugin mount path  undelete  secret path   does not return data on success     CodeBlockConfig     Tab     Tabs"}
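The undelete call in the record above is a plain JSON POST. As a rough sketch (not from the scraped page; the address, token, and helper names are assumptions), the request and the "is this version still recoverable?" check look like:

```python
import json
import urllib.request

VAULT_ADDR = "http://127.0.0.1:8200"  # assumption: local dev server
VAULT_TOKEN = "hvs.example"           # hypothetical token value

def restorable(version_meta: dict) -> bool:
    # A soft-deleted version can only be undeleted while destroyed is false.
    return version_meta.get("destroyed") is False

def undelete(mount: str, secret_path: str, versions: list) -> urllib.request.Request:
    # API tab: POST /v1/{mount}/undelete/{secret_path} with {"versions": [...]}.
    return urllib.request.Request(
        url=f"{VAULT_ADDR}/v1/{mount}/undelete/{secret_path}",
        data=json.dumps({"versions": versions}).encode(),
        headers={"X-Vault-Token": VAULT_TOKEN},
        method="POST",
    )

meta = {"deletion_time": "2024-11-15T00:45:04Z", "destroyed": False}
print(restorable(meta))  # True
print(undelete("shared", "dev/square-api", [1, 4]).full_url)
```

A successful undelete clears `deletion_time` back to `n/a`, as the metadata listing in the record shows.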
{"questions":"vault page title Read data Read versioned key value data from the kv v2 plugin Read versioned data from an existing data path in the kv v2 plugin layout docs Read versioned key value data","answers":"---\nlayout: docs\npage_title: Read data\ndescription: >-\n   Read versioned key\/value data from the kv v2 plugin\n---\n\n# Read versioned key\/value data\n\nRead versioned data from an existing data path in the `kv` v2 plugin.\n\n<Tip title=\"Assumptions\">\n\n- You have [set up a `kv` v2 plugin](\/vault\/docs\/secrets\/kv\/kv-v2\/setup). \n- Your authentication token has `read` permissions for the `kv` v2 plugin.\n\n<\/Tip>\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse [`vault kv get`](\/vault\/docs\/command\/kv\/read) to read **all** the current\nkey\/value pairs on the given path:\n\n```shell-session\n$ vault kv get             \\\n   -mount <mount_path>     \\\n   <secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv get -mount shared dev\/square-api\n\n======= Secret Path =======\nshared\/data\/dev\/square-api\n\n======= Metadata =======\nKey                Value\n---                -----\ncreated_time       2024-11-13T21:58:32.128442898Z\ncustom_metadata    <nil>\ndeletion_time      n\/a\ndestroyed          false\nversion            3\n\n===== Data =====\nKey        Value\n---        -----\nprod       5678\nsandbox    1234\n```\n\n<\/CodeBlockConfig>\n\nUse the `-field` flag to target specific key value pairs on the given path:\n\n```shell-session\n$ vault kv get          \\\n   -mount <mount_path>  \\\n   -field <field_name>  \\\n   <secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv get -mount shared -field prod dev\/square-api\n\n5678\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\n@include 'gui-instructions\/plugins\/kv\/open-overview.mdx'\n\n- Select the **Secret** tab.\n- Click the eye 
icon to view the desired key value.\n\n![Partial screenshot of the Vault GUI showing two key\/value pairs at the path dev\/square-api. The \"prod\" key is visible](\/img\/gui\/kv\/read-data.png)\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\nCall the [`\/{plugin_mount_path}\/data\/{secret_path}`](\/vault\/api-docs\/secret\/kv\/kv-v2#read-secret-version)\nendpoint to read all the key\/value pairs at the secret path:\n\n```shell-session\n$ curl                                       \\\n   --request GET                             \\\n   --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n   ${VAULT_ADDR}\/v1\/<plugin_mount_path>\/data\/<secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                       \\\n   --request GET                             \\\n   --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n   ${VAULT_ADDR}\/v1\/shared\/data\/dev\/square-api | jq\n\n{\n  \"request_id\": \"992da4a2-f2d1-5786-ea53-1e8ea6440a7c\",\n  \"lease_id\": \"\",\n  \"renewable\": false,\n  \"lease_duration\": 0,\n  \"data\": {\n    \"data\": {\n      \"prod\": \"5679\",\n      \"sandbox\": \"1234\",\n      \"smoke\": \"abcd\"\n    },\n    \"metadata\": {\n      \"created_time\": \"2024-11-15T02:41:02.556301319Z\",\n      \"custom_metadata\": null,\n      \"deletion_time\": \"\",\n      \"destroyed\": false,\n      \"version\": 7\n    }\n  },\n  \"wrap_info\": null,\n  \"warnings\": null,\n  \"auth\": null,\n  \"mount_type\": \"kv\"\n}\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Read data description        Read versioned key value data from the kv v2 plugin        Read versioned key value data  Read versioned data from an existing data path in the  kv  v2 plugin    Tip title  Assumptions      You have  set up a  kv  v2 plugin   vault docs secrets kv kv v2 setup      Your authentication token has  read  permissions for the  kv  v2 
plugin     Tip    Tabs    Tab heading  CLI  group  cli    Use   vault kv get    vault docs command kv read  to read   all   the current key value pairs on the given path      shell session   vault kv get                   mount  mount path            secret path       For example    CodeBlockConfig hideClipboard  true       shell session   vault kv get  mount shared dev square api          Secret Path         shared data dev square api          Metadata         Key                Value                          created time       2024 11 13T21 58 32 128442898Z custom metadata     nil  deletion time      n a destroyed          false version            3        Data       Key        Value                  prod       5678 sandbox    1234        CodeBlockConfig   Use the   field  flag to target specific key value pairs on the given path      shell session   vault kv get                mount  mount path         field  field name         secret path       For example    CodeBlockConfig hideClipboard  true       shell session   vault kv get  mount shared  field prod dev square api  5678        CodeBlockConfig     Tab    Tab heading  GUI  group  gui     include  gui instructions plugins kv open overview mdx     Select the   Secret   tab    Click the eye icon to view the desired key value     Partial screenshot of the Vault GUI showing two key value pairs at the path dev square api  The  prod  key is visible   img gui kv read data png     Tab    Tab heading  API  group  api    Call the     plugin mount path  data  secret path     vault api docs secret kv kv v2 read secret version  endpoint to read all the key value pairs at the secret path      shell session   curl                                              request GET                                    header  X Vault Token    VAULT TOKEN           VAULT ADDR  v1  plugin mount path  data  secret path       For example    CodeBlockConfig hideClipboard  true       shell session   curl                                         
     request GET                                    header  X Vault Token    VAULT TOKEN           VAULT ADDR  v1 shared data dev square api   jq       request id    992da4a2 f2d1 5786 ea53 1e8ea6440a7c      lease id          renewable   false     lease duration   0     data          data            prod    5679          sandbox    1234          smoke    abcd              metadata            created time    2024 11 15T02 41 02 556301319Z          custom metadata   null         deletion time              destroyed   false         version   7               wrap info   null     warnings   null     auth   null     mount type    kv           CodeBlockConfig     Tab     Tabs "}
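The read record above shows that kv v2 wraps secrets in a response envelope: the key/value pairs sit under `data.data` and version info under `data.metadata`. A small sketch of unwrapping that envelope, using a trimmed copy of the documented sample response (helper name is an assumption):

```python
import json

# Trimmed copy of the documented response for GET /v1/{mount}/data/{secret_path}.
sample = json.loads("""
{
  "data": {
    "data": {"prod": "5679", "sandbox": "1234", "smoke": "abcd"},
    "metadata": {"deletion_time": "", "destroyed": false, "version": 7}
  }
}
""")

def unwrap(resp: dict):
    # Return (key/value pairs, version) from a kv v2 read response.
    return resp["data"]["data"], resp["data"]["metadata"]["version"]

secrets, version = unwrap(sample)
print(secrets["prod"], version)  # 5679 7
```

The double `data.data` nesting is the usual stumbling block when moving from kv v1, whose responses put the pairs directly under `data`.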
{"questions":"vault Soft delete key value data Use soft deletes to control the lifecycle of versioned key value data in the page title Soft delete data layout docs kv v2 plugin","answers":"---\nlayout: docs\npage_title: Soft delete data\ndescription: >-\n   Use soft deletes to control the lifecycle of versioned key\/value data in the\n   kv v2 plugin.\n---\n\n# Soft delete key\/value data\n\nUse soft deletes to flag data at a secret path as unavailable while leaving the\ndata recoverable. You can revert soft deletes as long as the `destroyed` field\nis `false` in the metadata.\n\n<Tip title=\"Assumptions\">\n\n- You have [set up a `kv` v2 plugin](\/vault\/docs\/secrets\/kv\/kv-v2\/setup). \n- Your authentication token has `create` and `update` permissions for the `kv`\n  v2 plugin.\n\n<\/Tip>\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse [`vault kv delete`](\/vault\/docs\/command\/kv\/delete) with the `-versions` flag to\nsoft delete one or more versions of key\/value data and set `deletion_time` in the\nmetadata:\n\n```shell-session\n$ vault kv delete               \\\n   -mount <mount_path>          \\\n   -versions <target_versions>  \\\n   <secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv delete -mount shared -versions 1,4 dev\/square-api\n\nSuccess! 
Data deleted (if it existed) at: shared\/data\/dev\/square-api\n```\n\n<\/CodeBlockConfig>\n\nThe `deletion_time` metadata field for versions 1 and 4 now has the timestamp\nof when Vault marked the versions as deleted:\n\n<CodeBlockConfig hideClipboard=\"true\" highlight=\"22,31\">\n\n```shell-session\n$ vault kv metadata get -mount shared dev\/square-api\n\n======== Metadata Path ========\nshared\/metadata\/dev\/square-api\n\n========== Metadata ==========\nKey                     Value\n---                     -----\ncas_required            false\ncreated_time            2024-11-13T21:51:50.898782695Z\ncurrent_version         4\ncustom_metadata         <nil>\ndelete_version_after    0s\nmax_versions            5\noldest_version          0\nupdated_time            2024-11-14T22:32:42.29534643Z\n\n====== Version 1 ======\nKey              Value\n---              -----\ncreated_time     2024-11-13T21:51:50.898782695Z\ndeletion_time    2024-11-15T00:45:04.057772212Z\ndestroyed        false\n\n...\n\n====== Version 4 ======\nKey              Value\n---              -----\ncreated_time     2024-11-14T22:32:42.29534643Z\ndeletion_time    2024-11-15T00:45:04.057772712Z\ndestroyed        false\n```\n\n\n<\/CodeBlockConfig>\n\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\n@include 'gui-instructions\/plugins\/kv\/open-overview.mdx'\n\n- Select the **Secret** tab.\n- Select the appropriate data version from the **Version** dropdown.\n- Click **Delete**.\n- Select **Delete this version** to delete the selected version or\n  **Delete latest version** to delete the most recent data.\n- Click **Confirm**.\n\n![Partial screenshot of the Vault GUI showing the \"Delete version?\" confirmation modal for data at the path dev\/square-api](\/img\/gui\/kv\/delete-version.png)\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\nMake a `POST` call to\n[`\/{plugin_mount_path}\/delete\/{secret_path}`](\/vault\/api-docs\/secret\/kv\/kv-v2#delete-secret-versions)\nwith the data 
versions you want to soft delete:\n\n```shell-session\n$ curl                                       \\\n   --request POST                            \\\n   --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n   --data '{\"versions\":[<target_versions>]}' \\\n   ${VAULT_ADDR}\/v1\/<plugin_mount_path>\/delete\/<secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                       \\\n    --request POST                           \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\" \\\n    --data '{\"versions\":[5,8]}'              \\\n    ${VAULT_ADDR}\/v1\/shared\/delete\/dev\/square-api | jq\n\n```\n\n`\/{plugin_mount_path}\/delete\/{secret_path}` does not return data on success.\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Soft delete data description        Use soft deletes to control the lifecycle of versioned key value data in the    kv v2 plugin         Soft delete key value data  Use soft deletes to flag data at a secret path as unavailable while leaving the data recoverable  You can revert soft deletes as long as the  destroyed  field is  false  in the metadata    Tip title  Assumptions      You have  set up a  kv  v2 plugin   vault docs secrets kv kv v2 setup      Your authentication token has  create  and  update  permissions for the  kv    v2 plugin     Tip    Tabs    Tab heading  CLI  group  cli    Use   vault kv delete    vault docs command kv delete  with the   versions  flag to soft delete one or more version of key value data and set  deletion time  in the metadata      shell session   vault kv delete                     mount  mount path                 versions  target versions         secret path       For example    CodeBlockConfig hideClipboard  true       shell session   vault kv delete  mount shared  versions 1 4 dev square api  Success  Data deleted  if it existed  at  shared data dev square api        
CodeBlockConfig   The  deletion time  metadata field for versions 1 and 4 now has the timestamp of when Vault marked the versions as deleted    CodeBlockConfig hideClipboard  true  highlight  22 31       shell session   vault kv metadata get  mount shared dev square api           Metadata Path          shared metadata dev square api             Metadata            Key                     Value                               cas required            false created time            2024 11 13T21 51 50 898782695Z current version         4 custom metadata          nil  delete version after    0s max versions            5 oldest version          0 updated time            2024 11 14T22 32 42 29534643Z         Version 1        Key              Value                        created time     2024 11 13T21 51 50 898782695Z deletion time    2024 11 15T00 45 04 057772212Z destroyed        false              Version 4        Key              Value                        created time     2024 11 14T22 32 42 29534643Z deletion time    2024 11 15T00 45 04 057772712Z destroyed        false         CodeBlockConfig      Tab    Tab heading  GUI  group  gui     include  gui instructions plugins kv open overview mdx     Select the   Secret   tab    Select the appropriate data version from the   Version   dropdown    Click   Delete      Select   Delete this version   to delete the selected version or     Delete latest version   to delete the most recent data    Click   Confirm       Partial screenshot of the Vault GUI showing the  Delete version   confirmation modal for data at the path dev square api   img gui kv delete version png     Tab    Tab heading  API  group  api    Make a  POST  call to     plugin mount path  delete  secret path     vault api docs secret kv kv v2 delete secret versions  with the data versions you want to soft delete      shell session   curl                                              request POST                                   header  X Vault Token    VAULT 
TOKEN           data    versions    target versions            VAULT ADDR  v1  plugin mount path  delete  secret path       For example    CodeBlockConfig hideClipboard  true       shell session   curl                                               request POST                                   header  X Vault Token    VAULT TOKEN           data    versions   5 8                         VAULT ADDR  v1 shared delete dev square api   jq          plugin mount path  delete  secret path   does not return data on success     CodeBlockConfig     Tab     Tabs"}
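The soft-delete record above notes that deletions are reversible while the `destroyed` field remains `false`. A hedged sketch of that recovery step, reusing the `shared` mount and `dev/square-api` path from the record's examples (`vault kv undelete` is the CLI counterpart of `vault kv delete`; verify flags against your Vault version):

```shell
# Sketch: revert the soft delete performed in the example above.
# `vault kv undelete` restores the listed versions as long as their
# `destroyed` field is still false. Mount and secret path are reused
# from the record's examples and are assumptions about your setup.
vault kv undelete -mount shared -versions 1,4 dev/square-api

# The deletion_time fields for versions 1 and 4 should read n/a again:
vault kv metadata get -mount shared dev/square-api
```

Both commands require a running Vault server and a token with `update` permission on the `undelete` path, so they are shown as a sketch rather than a verified session.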
{"questions":"vault The standard vault kv delete command performs soft deletes Use the CLI or GUI page title Permanently delete data Permanently delete versioned key value data in the kv v2 plugin layout docs Destroy key value data","answers":"---\nlayout: docs\npage_title: Permanently delete data\ndescription: >-\n   Permanently delete versioned key\/value data in the kv v2 plugin.\n---\n\n# Destroy key\/value data\n\nThe standard `vault kv delete` command performs soft deletes. Use the CLI or GUI\nto permanently delete (destroy) data so Vault purges the underlying data and\nsets the `destroyed` metadata field to `true`.\n\n\n<Tip title=\"Assumptions\">\n\n- You have [set up a `kv` v2 plugin](\/vault\/docs\/secrets\/kv\/kv-v2\/setup). \n- Your authentication token has `create` and `update` permissions for the `kv`\n  v2 plugin.\n\n<\/Tip>\n\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse [`vault kv destroy`](\/vault\/docs\/command\/kv\/destroy) with the `-versions` flag to\npermanently delete one or more versions of key\/value data:\n\n```shell-session\n$ vault kv destroy              \\\n   -mount <mount_path>          \\\n   -versions <target_versions>  \\\n   <secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv destroy -mount shared -versions 2,3 dev\/square-api\n\nSuccess! 
Data written to: shared\/destroy\/dev\/square-api\n```\n\n<\/CodeBlockConfig>\n\nThe `destroyed` metadata field for versions 2 and 3 is now `true`:\n\n<CodeBlockConfig hideClipboard=\"true\" highlight=\"25,32\">\n\n```shell-session\n$ vault kv metadata get -mount shared dev\/square-api\n\n======== Metadata Path ========\nshared\/metadata\/dev\/square-api\n\n========== Metadata ==========\nKey                     Value\n---                     -----\ncas_required            false\ncreated_time            2024-11-13T21:51:50.898782695Z\ncurrent_version         4\ncustom_metadata         <nil>\ndelete_version_after    0s\nmax_versions            5\noldest_version          0\nupdated_time            2024-11-14T22:32:42.29534643Z\n\n...\n\n====== Version 2 ======\nKey              Value\n---              -----\ncreated_time     2024-11-13T21:52:10.326204209Z\ndeletion_time    n\/a\ndestroyed        true\n\n====== Version 3 ======\nKey              Value\n---              -----\ncreated_time     2024-11-13T21:58:32.128442898Z\ndeletion_time    n\/a\ndestroyed        true\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\n@include 'gui-instructions\/plugins\/kv\/open-overview.mdx'\n\n- Select the **Secret** tab.\n- Select the appropriate data version from the **Version** dropdown.\n- Click **Destroy**.\n- Click **Confirm**.\n\n![Partial screenshot of the Vault GUI showing the \"Destroy version?\" confirmation modal for data at the path dev\/square-api](\/img\/gui\/kv\/destroy-version.png)\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\nMake a `POST` call to\n[`\/{plugin_mount_path}\/destroy\/{secret_path}`](\/vault\/api-docs\/secret\/kv\/kv-v2#destroy-secret-versions)\nwith the data versions you want to destroy:\n\n```shell-session\n$ curl                                       \\\n   --request POST                            \\\n   --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n   --data '{\"versions\":[<target_versions>]}' \\\n   
${VAULT_ADDR}\/v1\/<plugin_mount_path>\/destroy\/<secret_path>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                       \\\n    --request POST                           \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\" \\\n    --data '{\"versions\":[4,7]}'              \\\n    ${VAULT_ADDR}\/v1\/shared\/destroy\/dev\/square-api | jq\n\n```\n\n`\/{plugin_mount_path}\/destroy\/{secret_path}` does not return data on success.\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Permanently delete data description        Permanently delete versioned key value data in the kv v2 plugin         Destroy key value data  The standard  vault kv delete  command performs soft deletes  Use the CLI or GUI to permanently delete  destroy  data so Vault purges the underlying data and sets the  destroyed  metadata field to  true      Tip title  Assumptions      You have  set up a  kv  v2 plugin   vault docs secrets kv kv v2 setup      Your authentication token has  create  and  update  permissions for the  kv    v2 plugin     Tip     Tabs    Tab heading  CLI  group  cli    Use   vault kv destroy    vault docs command kv destroy  with the   versions  flag to permanently delete one or more version of key value data      shell session   vault kv destroy                    mount  mount path                 versions  target versions         secret path       For example    CodeBlockConfig hideClipboard  true       shell session   vault kv destroy  mount shared  versions 2 3 dev square api  Success  Data written to  shared destroy dev square api        CodeBlockConfig   The  destroyed  metadata field for versions 2 and 3 is now  true    CodeBlockConfig hideClipboard  true  highlight  25 32       shell session   vault kv metadata get  mount shared dev square api           Metadata Path          shared metadata dev square api             Metadata            Key   
                  Value                               cas required            false created time            2024 11 13T21 51 50 898782695Z current version         4 custom metadata          nil  delete version after    0s max versions            5 oldest version          0 updated time            2024 11 14T22 32 42 29534643Z              Version 2        Key              Value                        created time     2024 11 13T21 52 10 326204209Z deletion time    n a destroyed        true         Version 3        Key              Value                        created time     2024 11 13T21 58 32 128442898Z deletion time    n a destroyed        true        CodeBlockConfig     Tab    Tab heading  GUI  group  gui     include  gui instructions plugins kv open overview mdx     Select the   Secret   tab    Select the appropriate data version from the   Version   dropdown    Click   Destroy      Click   Confirm       Partial screenshot of the Vault GUI showing the  Destroy version   confirmation modal for data at the path dev square api   img gui kv destroy version png     Tab    Tab heading  API  group  api    Make a  POST  call to     plugin mount path  destroy  secret path     vault api docs secret kv kv v2 destroy secret versions  with the data versions you want to destroy      shell session   curl                                              request POST                                   header  X Vault Token    VAULT TOKEN           data    versions    target versions            VAULT ADDR  v1  plugin mount path  destroy  secret path       For example    CodeBlockConfig hideClipboard  true       shell session   curl                                               request POST                                   header  X Vault Token    VAULT TOKEN           data    versions   4 7                         VAULT ADDR  v1 shared destroy dev square api   jq          plugin mount path  destroy  secret path   does not return data on success     CodeBlockConfig     Tab     Tabs 
"}
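As a follow-up to the destroy record above, the `destroyed` flag can be checked in a script because `vault kv metadata get` supports JSON output. This is a sketch assuming the `shared` mount and `dev/square-api` path from the record's examples, and an installed `jq`:

```shell
# Sketch: read the metadata as JSON and pull out the destroyed flag for
# version 2. In the JSON response the versions map is keyed by version
# number, so ."2" selects version 2's entry.
vault kv metadata get -format=json -mount shared dev/square-api \
  | jq '.data.versions."2".destroyed'
```

Scripts can branch on the resulting `true`/`false` value instead of scraping the human-readable table output; the command needs a live Vault server, so no expected output is shown here.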
{"questions":"vault Use max versions to automatically destroy older data versions in the kv v2 Set max data versions in key value v2 layout docs page title Set max data versions plugin","answers":"---\nlayout: docs\npage_title: Set max data versions\ndescription: >-\n   Use max-versions to automatically destroy older data versions in the kv v2\n   plugin.\n---\n\n# Set max data versions in key\/value v2\n\nLimit the number of active versions for a `kv` v2 secret path so Vault\npermanently deletes (destroys) older data versions automatically.\n\n<Tip title=\"Assumptions\">\n\n- You have [set up a `kv` v2 plugin](\/vault\/docs\/secrets\/kv\/kv-v2\/setup). \n- Your authentication token has `create` and `update` permissions for the `kv`\n  v2 plugin.\n\n<\/Tip>\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse [`vault kv metadata put`](\/vault\/docs\/command\/kv\/metadata) to change the max\nnumber of versions allowed for a `kv` mount path: \n\n```shell-session\n$ vault kv metadata put          \\\n   -max-versions <max_versions>  \\\n   -mount <mount_path>           \\\n   <secret_path>\n```\n\nFor example: \n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv metadata put \\\n   -max-versions 5      \\\n   -mount shared        \\\n   dev\/square-api\n\nSuccess! 
Data written to: shared\/metadata\/dev\/square-api\n```\n<\/CodeBlockConfig>\n\n\nVault now auto-deletes data when the number of versions exceeds 5:\n\n<CodeBlockConfig hideClipboard=\"true\" highlight=\"14\">\n\n```shell-session\n$ vault kv metadata get -mount shared dev\/square-api\n\n======== Metadata Path ========\nshared\/metadata\/dev\/square-api\n\n========== Metadata ==========\nKey                     Value\n---                     -----\ncas_required            false\ncreated_time            2024-11-13T21:51:50.898782695Z\ncurrent_version         4\ncustom_metadata         <nil>\ndelete_version_after    0s\nmax_versions            5\noldest_version          0\nupdated_time            2024-11-14T22:32:42.29534643Z\n\n====== Version 1 ======\nKey              Value\n---              -----\ncreated_time     2024-11-13T21:51:50.898782695Z\ndeletion_time    n\/a\ndestroyed        false\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\n@include 'gui-instructions\/plugins\/kv\/open-overview.mdx'\n\n- Select the **Metadata** tab.\n- Click **Edit metadata >**.\n- Update the **Maximum number of versions** field.\n- Click **Update**.\n\n![Partial screenshot of the Vault GUI showing the \"Edit Secret Metadata\" screen](\/img\/gui\/kv\/edit-metadata.png)\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\n1. Create a JSON file with the metadata field `max_versions` set to the maximum\n   number of versions you want to allow.\n\n1. 
Make a `POST` call to\n   [`\/{plugin_mount_path}\/metadata\/{secret_path}`](\/vault\/api-docs\/secret\/kv\/kv-v2#create-update-metadata)\n   with the JSON data file:\n    ```shell-session\n    $ curl                                      \\\n      --request POST                            \\\n      --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n      --data @metadata.json                     \\\n      ${VAULT_ADDR}\/v1\/<plugin_mount_path>\/metadata\/<secret_path>\n    ```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```json\n{\n  \"max_versions\": 10\n}\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                       \\\n    --request POST                           \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\" \\\n    --data @metadata.json                    \\\n    ${VAULT_ADDR}\/v1\/shared\/metadata\/dev\/square-api\n```\n\n`\/{plugin_mount_path}\/metadata\/{secret_path}` does not return data on success.\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Set max data versions description        Use max versions to automatically destroy older data versions in the kv v2    plugin         Set max data versions in key value v2  Limit the number of active versions for a  kv  v2 secret path so Vault permanently deletes  destroys  older data versions automatically    Tip title  Assumptions      You have  set up a  kv  v2 plugin   vault docs secrets kv kv v2 setup      Your authentication token has  create  and  update  permissions for the  kv    v2 plugin     Tip    Tabs    Tab heading  CLI  group  cli    Use   vault kv metadata put    vault docs command kv metadata  to change the max number of versions allowed for a  kv  mount path       shell session   vault kv metadata put                max versions  max versions         mount  mount path                  secret path       For example     CodeBlockConfig 
hideClipboard  true       shell session   vault kv metadata put       max versions 5            mount shared             dev square api  Success  Data written to  shared metadata dev square api       CodeBlockConfig    Vault now auto deletes data when the number of versions exceeds 5    CodeBlockConfig hideClipboard  true  highlight  14       shell session   vault kv metadata get  mount shared dev square api           Metadata Path          shared metadata dev square api             Metadata            Key                     Value                               cas required            false created time            2024 11 13T21 51 50 898782695Z current version         4 custom metadata          nil  delete version after    0s max versions            5 oldest version          0 updated time            2024 11 14T22 32 42 29534643Z         Version 1        Key              Value                        created time     2024 11 13T21 51 50 898782695Z deletion time    n a destroyed        false        CodeBlockConfig     Tab    Tab heading  GUI  group  gui     include  gui instructions plugins kv open overview mdx     Select the   Metadata   tab    Click   Edit metadata        Update the   Maximum number of versions   field    Click   Update       Partial screenshot of the Vault GUI showing the  Edit Secret Metadata  screen   img gui kv edit metadata png     Tab    Tab heading  API  group  api    1  Create a JSON file with the metadata field  max versions  set to the maximum    number of versions you want to allow   1  Make a  POST  call to        plugin mount path  metadata  secret path     vault api docs secret kv kv v2 create update metadata     with the JSON data file         shell session       curl                                                request POST                                      header  X Vault Token    VAULT TOKEN              data  metadata json                               VAULT ADDR  v1  plugin mount path  metadata  secret path           For 
example    CodeBlockConfig hideClipboard  true       json      max versions   10          CodeBlockConfig    CodeBlockConfig hideClipboard  true       shell session   curl                                               request POST                                   header  X Vault Token    VAULT TOKEN           data  metadata json                            VAULT ADDR  v1 shared metadata dev square api         plugin mount path  metadata  secret path   does not return data on success     CodeBlockConfig     Tab     Tabs"}
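The auto-destroy behavior in the record above follows simple arithmetic: with `max_versions` set to N, only the N most recent versions stay readable. A small self-contained sketch of that retention window, in plain shell, with values borrowed from the record's example metadata (`max_versions 5`, a hypothetical current version of 9):

```shell
# With max_versions=5, after version 9 is written only versions 5-9 remain;
# Vault destroys anything older, which is what the oldest_version metadata
# field ends up reporting. The two input values here are illustrative.
max_versions=5
current_version=9
oldest_kept=$(( current_version - max_versions + 1 ))
echo "versions ${oldest_kept} through ${current_version} are retained"
# → versions 5 through 9 are retained
```

This is only the bookkeeping; the actual destruction happens inside Vault when a write pushes the version count past the limit.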
{"questions":"vault page title Read subkeys Read the available subkeys on an existing data path in the kv v2 plugin layout docs Read subkeys for a key value data path Read the available subkeys on a given path from the kv v2 plugin","answers":"---\nlayout: docs\npage_title: Read subkeys\ndescription: >-\n   Read the available subkeys on a given path from the kv v2 plugin\n---\n\n# Read subkeys for a key\/value data path\n\nRead the available subkeys on an existing data path in the `kv` v2 plugin.\n\n<Tip title=\"Assumptions\">\n\n- You have [set up a `kv` v2 plugin](\/vault\/docs\/secrets\/kv\/kv-v2\/setup). \n- Your authentication token has `read` permissions for subkeys on the target\n  secret path.\n\n<\/Tip>\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse `vault read` with the `\/subkeys` path to retrieve a list of secret data\nsubkeys at the given path. \n\n```shell-session\n$ vault read <mount_path>\/subkeys\/<secret_path>\n```\n\nVault retrieves secrets at the given path but replaces the underlying values of\nnon-map keys and map keys with no underlying subkeys (leaf keys) with `nil`.\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault read shared\/subkeys\/dev\/square-api\n\nKey         Value\n---         -----\nmetadata    map[created_time:2024-11-20T20:00:13.385182722Z custom_metadata:<nil> deletion_time: destroyed:false version:1]\nsubkeys     map[prod:<nil> sandbox:<nil> smoke:<nil>]\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\n@include 'alerts\/enterprise-only.mdx'\n\n@include 'gui-instructions\/plugins\/kv\/open-overview.mdx'\n\nYou can read a list of available subkeys for the target path in the **Subkeys**\ncard.\n\n![Partial screenshot of the Vault GUI showing subkeys \"prod\" and \"sandbox\" for secret data at path dev\/square-api.](\/img\/gui\/kv\/overview-page.png)\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\nCall the 
[`\/{plugin_mount_path}\/subkeys\/{secret_path}`](\/vault\/api-docs\/secret\/kv\/kv-v2#read-secret-subkeys)\nendpoint to fetch a list of available subkeys on the given path:\n\n```shell-session\n$ curl                                       \\\n   --request GET                             \\\n   --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n   ${VAULT_ADDR}\/v1\/<plugin_mount_path>\/subkeys\/<secret_path>\n```\n\nVault retrieves secrets at the given path but replaces the underlying values of\nnon-map keys and map keys with no underlying subkeys (leaf keys) with `null`.\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                       \\\n   --request GET                             \\\n   --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n   ${VAULT_ADDR}\/v1\/shared\/subkeys\/dev\/square-api | jq\n\n{\n  \"request_id\": \"bfeac3c5-f4dc-37b2-8909-3b15121cfd20\",\n  \"lease_id\": \"\",\n  \"renewable\": false,\n  \"lease_duration\": 0,\n  \"data\": {\n    \"metadata\": {\n      \"created_time\": \"2024-11-20T20:00:13.385182722Z\",\n      \"custom_metadata\": null,\n      \"deletion_time\": \"\",\n      \"destroyed\": false,\n      \"version\": 11\n    },\n    \"subkeys\": {\n      \"prod\": null,\n      \"sandbox\": null,\n      \"smoke\": null\n    }\n  },\n  \"wrap_info\": null,\n  \"warnings\": null,\n  \"auth\": null,\n  \"mount_type\": \"kv\"\n}\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Read subkeys description        Read the available subkeys on a given path from the kv v2 plugin        Read subkeys for a key value data path  Read the available subkeys on an existing data path in the  kv  v2 plugin    Tip title  Assumptions      You have  set up a  kv  v2 plugin   vault docs secrets kv kv v2 setup      Your authentication token has  read  permissions for subkeys on the target   secret path     Tip    Tabs    Tab heading  CLI  
group  cli    Use  vault read  with the   subkeys  path to retrieve a list of secret data subkeys at the given path       shell session   vault read  mount path  subkeys  secret path       Vault retrieves secrets at the given path but replaces the underlying values of non map keys and map keys with no underlying subkeys  leaf keys  with  nil    For example    CodeBlockConfig hideClipboard  true       shell session   vault read shared subkeys dev square api  Key         Value                   metadata    map created time 2024 11 20T20 00 13 385182722Z custom metadata  nil  deletion time  destroyed false version 1  subkeys     map prod  nil  sandbox  nil  smoke  nil          CodeBlockConfig     Tab    Tab heading  GUI  group  gui     include  alerts enterprise only mdx    include  gui instructions plugins kv open overview mdx   You can read a list of available subkeys for the target path in the   Subkeys   card     Partial screenshot of the Vault GUI showing subkeys  prod  and  sandbox  for secret data at path dev square api    img gui kv overview page png     Tab    Tab heading  API  group  api    Call the     plugin mount path  subkeys  secret path     vault api docs secret kv kv v2 read secret subkeys  endpoint to fetch a list of available subkeys on the given path      shell session   curl                                              request GET                                    header  X Vault Token    VAULT TOKEN           VAULT ADDR  v1  plugin mount path  subkeys  secret path       Vault retrieves secrets at the given path but replaces the underlying values of non map keys and map keys with no underlying subkeys  leaf keys  with  null    For example    CodeBlockConfig hideClipboard  true       shell session   curl                                              request GET                                    header  X Vault Token    VAULT TOKEN           VAULT ADDR  v1 shared subkeys dev square api   jq       request id    bfeac3c5 f4dc 37b2 8909 3b15121cfd20   
   lease id          renewable   false     lease duration   0     data          metadata            created time    2024 11 20T20 00 13 385182722Z          custom metadata   null         deletion time              destroyed   false         version   11             subkeys            prod   null         sandbox   null         smoke   null               wrap info   null     warnings   null     auth   null     mount type    kv           CodeBlockConfig     Tab     Tabs "}
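For deeply nested secrets, the subkeys endpoint also accepts a `depth` query parameter that caps how far Vault recurses. A hedged curl sketch against the same example path as the record above (check the kv v2 API reference for the parameter's exact semantics on your Vault version):

```shell
# Sketch: limit the subkey listing for dev/square-api to the top level.
# depth=1 returns only first-level keys; 0 (the default) imposes no limit.
# Mount and path are reused from the record's examples.
curl                                        \
   --request GET                            \
   --header "X-Vault-Token: ${VAULT_TOKEN}" \
   "${VAULT_ADDR}/v1/shared/subkeys/dev/square-api?depth=1"
```

The URL is quoted so the shell does not interpret the `?` in the query string.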
{"questions":"vault Write custom metadata fields to your kv v2 plugin page title Write custom metadata layout docs Write custom metadata in key value v2 Write custom metadata to a kv v2 secret path","answers":"---\nlayout: docs\npage_title: Write custom metadata\ndescription: >-\n   Write custom metadata fields to your kv v2 plugin.\n---\n\n# Write custom metadata in key\/value v2 \n\nWrite custom metadata to a `kv` v2 secret path.\n\n<Tip title=\"Assumptions\">\n\n- You have [set up a `kv` v2 plugin](\/vault\/docs\/secrets\/kv\/kv-v2\/setup). \n- Your authentication token has `create` and `update` permissions for the `kv`\n  v2 plugin.\n\n<\/Tip>\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse [`vault kv metadata put`](\/vault\/docs\/command\/kv\/metadata) to set custom\nmetadata fields for a `kv` mount path. Repeat the `-custom-metadata` flag for\neach key\/value metadata entry:\n\n```shell-session\n$ vault kv metadata put                \\\n   -custom-metadata <key_value_pair>   \\\n   -mount <mount_path>                 \\\n   <secret_path>\n```\n\nFor example: \n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv metadata put                                              \\\n   -custom-metadata \"use=API keys for different dev environments\"   \\\n   -custom-metadata \"renew-date=2026-11-14\"                          \\\n   -mount shared                                                     \\\n   dev\/square-api\n\nSuccess! 
Data written to: shared\/metadata\/dev\/square-api\n```\n<\/CodeBlockConfig>\n\n\nThe `custom_metadata` metadata field now includes a map with the two custom\nfields:\n\n<CodeBlockConfig hideClipboard=\"true\" highlight=\"14\">\n\n```shell-session\n$ vault kv metadata get -mount shared dev\/square-api\n\n======== Metadata Path ========\nshared\/metadata\/dev\/square-api\n\n========== Metadata ==========\nKey                     Value\n---                     -----\ncas_required            false\ncreated_time            2024-11-13T21:51:50.898782695Z\ncurrent_version         9\ncustom_metadata         map[use:API keys for different dev environments renew-date:2026-11-14]\ndelete_version_after    0s\nmax_versions            10\noldest_version          4\nupdated_time            2024-11-15T03:10:26.749233814Z\n\n====== Version 1 ======\nKey              Value\n---              -----\ncreated_time     2024-11-13T21:51:50.898782695Z\ndeletion_time    n\/a\ndestroyed        false\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\n@include 'gui-instructions\/plugins\/kv\/open-overview.mdx'\n\n- Select the **Metadata** tab.\n- Click **Edit metadata >**.\n- Set a new key name and value under **Custom metadata**.\n- Use the **Add** button to set additional key\/value pairs.\n- Click **Update**.\n\n![Partial screenshot of the Vault GUI showing the \"Edit Secret Metadata\" screen](\/img\/gui\/kv\/custom-metadata.png)\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\n1. Create a JSON file with the metadata you want to write to your `kv` v2\n   plugin. Use the `custom_metadata` field to define the custom metadata fields\n   and initial values.\n\n1. 
Make a `POST` call to\n   [`\/{plugin_mount_path}\/metadata\/{secret_path}`](\/vault\/api-docs\/secret\/kv\/kv-v2#create-update-metadata)\n   with the JSON data file:\n    ```shell-session\n    $ curl                                      \\\n      --request POST                            \\\n      --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n      --data @metadata.json                     \\\n      ${VAULT_ADDR}\/v1\/<plugin_mount_path>\/metadata\/<secret_path>\n    ```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```json\n{\n  \"custom_metadata\": {\n    \"use\": \"API keys for different dev environments\",\n    \"renew-date\": \"2026-11-14\"\n  }\n}\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                       \\\n    --request POST                           \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\" \\\n    --data @metadata.json                    \\\n    ${VAULT_ADDR}\/v1\/shared\/metadata\/dev\/square-api\n```\n\n`\/{plugin_mount_path}\/metadata\/{secret_path}` does not return data on success.\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Write custom metadata description        Write custom metadata fields to your kv v2 plugin         Write custom metadata in key value v2   Write custom metadata to a  kv  v2 secret path    Tip title  Assumptions      You have  set up a  kv  v2 plugin   vault docs secrets kv kv v2 setup      Your authentication token has  create  and  update  permissions for the  kv    v2 plugin     Tip    Tabs    Tab heading  CLI  group  cli    Use   vault kv metadata put    vault docs command kv metadata  to set custom metadata fields for a  kv  mount path  Repeat the   custom metadata  flag for each key value metadata entry      shell session   vault kv metadata put                      custom metadata  key value pair          mount  mount path                        secret 
path       For example     CodeBlockConfig hideClipboard  true       shell session   vault kv metadata put                                                    custom metadata  use API keys for different dev environments          custom metadata  renew date 2026 11 14                                 mount shared                                                          dev square api  Success  Data written to  shared metadata dev square api       CodeBlockConfig    The  custom metadata  metadata field now includes a map with the two custom fields    CodeBlockConfig hideClipboard  true  highlight  14       shell session   vault kv metadata get  mount shared dev square api           Metadata Path          shared metadata dev square api             Metadata            Key                     Value                               cas required            false created time            2024 11 13T21 51 50 898782695Z current version         9 custom metadata         map use API keys for different dev environments renew date 2026 11 14  delete version after    0s max versions            10 oldest version          4 updated time            2024 11 15T03 10 26 749233814Z         Version 1        Key              Value                        created time     2024 11 13T21 51 50 898782695Z deletion time    n a destroyed        false        CodeBlockConfig     Tab    Tab heading  GUI  group  gui     include  gui instructions plugins kv open overview mdx     Select the   Metadata   tab    Click   Edit metadata        Set a new key name and value under   Custom metadata      Use the   Add   button to set additional key value pairs    Click   Update       Partial screenshot of the Vault GUI showing the  Edit Secret Metadata  screen   img gui kv custom metadata png     Tab    Tab heading  API  group  api    1  Create a JSON file with the metadata you want to write to the your  kv  v2    plugin  Use the  custom metadata  field to define the custom metadata fields    and initial values   1 
 Make a  POST  call to        plugin mount path  metadata  secret path     vault api docs secret kv kv v2 create update metadata     with the JSON data file         shell session       curl                                                request POST                                      header  X Vault Token    VAULT TOKEN              data  metadata json                               VAULT ADDR  v1  plugin mount path  metadata  secret path           For example    CodeBlockConfig hideClipboard  true       json      custom metadata          use    API keys for different dev environments        renew date    2026 11 14               CodeBlockConfig    CodeBlockConfig hideClipboard  true       shell session   curl                                               request POST                                   header  X Vault Token    VAULT TOKEN           data  metadata json                            VAULT ADDR  v1 shared metadata dev square api         plugin mount path  metadata  secret path   does not return data on success     CodeBlockConfig     Tab     Tabs "}
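The API tab in the record above writes `custom_metadata` from a `metadata.json` file. When a temporary file is unwanted, the same payload can be generated inline and piped to curl with `--data @-`, which makes curl read the request body from stdin. A minimal sketch using the record's example fields:

```shell
# Emit the custom_metadata payload from the example without a temp file.
# Piping this to `curl --data @-` would send it as the request body.
cat <<'EOF'
{
  "custom_metadata": {
    "use": "API keys for different dev environments",
    "renew-date": "2026-11-14"
  }
}
EOF
```

The quoted heredoc delimiter (`'EOF'`) prevents the shell from expanding anything inside the payload, so the JSON is emitted byte-for-byte.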
{"questions":"vault Patch versioned key value data layout docs page title Patch data Use the patch process to update specific values or add new key value pairs to Make partial updates or add new keys to versioned data in the kv v2 plugin","answers":"---\nlayout: docs\npage_title: Patch data\ndescription: >-\n   Make partial updates or add new keys to versioned data in the kv v2 plugin\n---\n\n# Patch versioned key\/value data\n\nUse the patch process to update specific values or add new key\/value pairs to\nan existing data path in the `kv` v2 plugin.\n\n<Tip title=\"Assumptions\">\n\n- You have [set up a `kv` v2 plugin](\/vault\/docs\/secrets\/kv\/kv-v2\/setup). \n- Your authentication token has appropriate permissions for the `kv` v2 plugin:\n   - **`patch`** permission to make direct updates with `PATCH` actions.\n   - **`create`**+**`update`** permission if you want to make indirect\n      updates with the Vault CLI by combining `GET` and `POST` actions.\n- You know the keys or [subkeys](\/vault\/docs\/secrets\/kv\/kv-v2\/cookbook\/read-subkey)\n  you want to patch.\n\n<\/Tip>\n\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\nUse the [`vault kv patch`](\/vault\/docs\/commands\/kv\/patch) command and set the\n`-cas` flag to the expected data version to perform a check-and-set operation\nbefore applying the patch:\n\n```shell-session\n$ vault kv patch                 \\\n   -cas <target_version>         \\\n   -max-versions <max_versions>  \\\n   -mount <mount_path>           \\\n   <secret_path>                 \\\n   <key_name>=<key_value>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv patch  \\\n  -cas 2          \\\n  -mount shared   \\\n  dev\/square-api  \\\n  prod=5678\n\n======= Secret Path =======\nshared\/data\/dev\/square-api\n\n======= Metadata =======\nKey                Value\n---                -----\ncreated_time       2024-11-13T21:52:10.326204209Z\ncustom_metadata    <nil>\ndeletion_time      
n\/a\ndestroyed          false\nversion            2\n```\n\n<\/CodeBlockConfig>\n\nIf the `-cas` version is older than the current version of data at the target\npath, the patch fails:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv patch -cas 1 -mount shared dev\/square-api prod=5678\n\nError writing data to shared\/data\/dev\/square-api: Error making API request.\n\nURL: PATCH http:\/\/192.168.0.1:8200\/v1\/shared\/data\/dev\/square-api\nCode: 400. Errors:\n\n* check-and-set parameter did not match the current version\n```\n\n<\/CodeBlockConfig>\n\nTo **force** a patch, you can exclude the `-cas` flag **or** use the\n`read+write` patch method with the `-method` flag. For example:\n\n```shell-session\n$ vault kv patch -method rw -mount shared dev\/square-api prod=5678\n\n======= Secret Path =======\nshared\/data\/dev\/square-api\n\n======= Metadata =======\nKey                Value\n---                -----\ncreated_time       2024-11-13T21:58:32.128442898Z\ncustom_metadata    <nil>\ndeletion_time      n\/a\ndestroyed          false\nversion            3\n```\n\nInstead of using an HTTP `PATCH` action, the `read+write` method uses a sequence\nof `GET` and `POST` operations to fetch the most recent version of data stored\nat the targeted path, perform an in-memory update to the targeted keys, then\npush the update to the plugin.\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\n@include 'alerts\/enterprise-only.mdx'\n\n@include 'gui-instructions\/plugins\/kv\/open-overview.mdx'\n\n- Select the **Secret** tab.\n- Click **Patch latest version +**.\n- Edit the values you want to update.\n- Click **Save**.\n\n![Partial screenshot of the Vault GUI showing two editable key\/value pairs at the path dev\/square-api](\/img\/gui\/kv\/patch-data.png)\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\n1. Create a JSON file with the key\/value data you want to patch. 
Use the\n`options` field to set optional flags and `data` to define the key\/value pairs\nyou want to update.\n\n1. Make a `PATCH` call to\n   [`\/{plugin_mount_path}\/data\/{secret_path}`](\/vault\/api-docs\/secret\/kv\/kv-v2#patch-secret)\n   with the JSON data file and the `Content-Type` header set to\n   `application\/merge-patch+json`:\n    ```shell-session\n    $ curl                                                  \\\n      --request PATCH                                       \\\n      --header \"X-Vault-Token: ${VAULT_TOKEN}\"              \\\n      --header \"Content-Type: application\/merge-patch+json\" \\\n      --data @data.json                                     \\\n      ${VAULT_ADDR}\/v1\/<plugin_mount_path>\/data\/<secret_path>\n    ```\n\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```json\n{\n  \"options\": {\n    \"cas\": 4\n  },\n  \"data\": {\n    \"smoke\": \"efgh\"\n  }\n}\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                                      \\\n    --request PATCH                                         \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\"                \\\n    --header \"Content-Type: application\/merge-patch+json\"   \\\n    --data @data.json                                       \\\n    ${VAULT_ADDR}\/v1\/shared\/data\/dev\/square-api | jq\n\n{\n  \"request_id\": \"6f3bae46-6444-adeb-372a-7f100b4117f9\",\n  \"lease_id\": \"\",\n  \"renewable\": false,\n  \"lease_duration\": 0,\n  \"data\": {\n    \"created_time\": \"2024-11-15T02:52:24.287700164Z\",\n    \"custom_metadata\": null,\n    \"deletion_time\": \"\",\n    \"destroyed\": false,\n    \"version\": 5\n  },\n  \"wrap_info\": null,\n  \"warnings\": null,\n  \"auth\": null,\n  \"mount_type\": \"kv\"\n}\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Patch data description        Make partial updates or 
add new keys to versioned data in the kv v2 plugin        Patch versioned key value data  Use the patch process to update specific values or add new key value pairs to an existing data path in the  kv  v2 plugin    Tip title  Assumptions      You have  set up a  kv  v2 plugin   vault docs secrets kv kv v2 setup      Your authentication token has appropriate permissions for the  kv  v2 plugin          patch    permission to make direct updates with  PATCH  actions          create       update    permission if you want to make indirect       updates with the Vault CLI by combining  GET  and  POST  actions    You know the keys or  subkeys   vault docs secrets kv kv v2 cookbook read subkey    you want to patch     Tip     Tabs    Tab heading  CLI  group  cli    Use the   vault kv patch    vault docs commands kv patch  command and set the   cas  flag to the expected data version to perform a check and set operation before applying the patch      shell session   vault kv patch                       cas  target version                max versions  max versions         mount  mount path                  secret path                        key name   key value       For example    CodeBlockConfig hideClipboard  true       shell session   vault kv patch       cas 2               mount shared       dev square api      prod 5678          Secret Path         shared data dev square api          Metadata         Key                Value                          created time       2024 11 13T21 52 10 326204209Z custom metadata     nil  deletion time      n a destroyed          false version            2        CodeBlockConfig   If the   cas  version is older than the current version of data at the target path  the patch fails    CodeBlockConfig hideClipboard  true       shell session   vault kv patch  cas 1  mount shared dev square api prod 5678  Error writing data to shared data dev square api  Error making API request   URL  PATCH http   192 168 0 1 8200 v1 shared data dev square 
api Code  400  Errors     check and set parameter did not match the current version        CodeBlockConfig   To   force   a patch  you can exclude the   cas  flag   or   use the  read write  patch method with the   method  flag  For example      shell session   vault kv patch  method rw  mount shared dev square api prod 5678          Secret Path         shared data dev square api          Metadata         Key                Value                          created time       2024 11 13T21 58 32 128442898Z custom metadata     nil  deletion time      n a destroyed          false version            3      Instead of using an HTTP  PATCH  action  the  read write  method uses a sequence of  GET  and  POST  operations to fetch the most recent version of data stored at the targeted path  perform an in memory update to the targeted keys  then push the update to the plugin     Tab    Tab heading  GUI  group  gui     include  alerts enterprise only mdx    include  gui instructions plugins kv open overview mdx     Select the   Secret   tab    Click   Patch latest version        Edit the values you want to update    Click   Save       Partial screenshot of the Vault GUI showing two editable key value pairs at the path dev square api   img gui kv patch data png     Tab    Tab heading  API  group  api    1  Create a JSON file with the key value data you want to patch  Use the  options  field to set optional flags and  data  to define the key value pairs you want to update   1  Make a  PATCH  call to        plugin mount path  data  secret path     vault api docs secret kv kv v2 patch secret     with the JSON data file and the  Content Type  header set to     application merge patch json          shell session       curl                                                            request PATCH                                                 header  X Vault Token    VAULT TOKEN                          header  Content Type  application merge patch json            data  data json       
                                        VAULT ADDR  v1  plugin mount path  data  secret path            For example    CodeBlockConfig hideClipboard  true       json      options          cas   4         data          smoke    efgh               CodeBlockConfig    CodeBlockConfig hideClipboard  true       shell session   curl                                                              request PATCH                                                 header  X Vault Token    VAULT TOKEN                          header  Content Type  application merge patch json            data  data json                                               VAULT ADDR  v1 shared data dev square api   jq       request id    6f3bae46 6444 adeb 372a 7f100b4117f9      lease id          renewable   false     lease duration   0     data          created time    2024 11 15T02 52 24 287700164Z        custom metadata   null       deletion time            destroyed   false       version   5         wrap info   null     warnings   null     auth   null     mount type    kv           CodeBlockConfig     Tab     Tabs "}
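The merge-patch payload and check-and-set guard described in the record above can be sketched in Python. This is an illustrative model only: `build_patch_payload` and `apply_patch` are hypothetical helpers that simulate the kv-v2 plugin's server-side behavior, not part of any Vault client library.

```python
import json

def build_patch_payload(updates, cas=None):
    # Body for a PATCH call sent with Content-Type: application/merge-patch+json
    payload = {"data": updates}
    if cas is not None:
        payload["options"] = {"cas": cas}
    return json.dumps(payload)

def apply_patch(secret, payload):
    # Simulate the plugin's check-and-set: a stale cas value is rejected,
    # which the CLI surfaces as an HTTP 400 error.
    body = json.loads(payload)
    cas = body.get("options", {}).get("cas")
    if cas is not None and cas != secret["version"]:
        raise ValueError("check-and-set parameter did not match the current version")
    secret["data"].update(body["data"])  # merge-patch: only the named keys change
    secret["version"] += 1

secret = {"version": 2, "data": {"sandbox": "1234", "prod": "1234"}}
apply_patch(secret, build_patch_payload({"prod": "5678"}, cas=2))
print(secret["version"], secret["data"])
```

As in the CLI example, patching with `cas=2` against version 2 succeeds and bumps the version to 3 while leaving untouched keys (`sandbox`) intact; repeating the call with a stale `cas` raises the same check-and-set error shown above.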
{"questions":"vault Write new versions of data to a new or existing data path in the kv v2 plugin Write new key value data layout docs Write new versioned data to the kv v2 plugin page title Write new data","answers":"---\nlayout: docs\npage_title: Write new data\ndescription: >-\n   Write new versioned data to the kv v2 plugin\n---\n\n# Write new key\/value data\n\nWrite new versions of data to a new or existing data path in the `kv` v2 plugin.\n\n<Tip title=\"Assumptions\">\n\n- You have [set up a `kv` v2 plugin](\/vault\/docs\/secrets\/kv\/kv-v2\/setup). \n- Your authentication token has `create` and `update` permissions for the `kv`\n  v2 plugin.\n\n<\/Tip>\n\n<Tabs>\n\n<Tab heading=\"CLI\" group=\"cli\">\n\n<Note>\n\nThe Vault CLI forcibly converts `kv` keys and values data to strings before\nwriting data. To preserve non-string data, write your key\/value pairs to Vault\nfrom a JSON file or use the plugin API.\n\n<\/Note>\n\nUse [`vault kv put`](\/vault\/docs\/command\/kv\/put) to save a new version of\nkey\/value data to an new or existing secret path:\n\n```shell-session\n$ vault kv put        \\\n  -mount <mount_path> \\\n  <secret_path>       \\\n  <list_of_kv_values>\n```\n\nFor example:\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ vault kv put    \\\n  -mount shared   \\\n  dev\/square-api  \\\n  sandbox=1234 prod=5679 smoke=abcd\n\n======= Secret Path =======\nshared\/data\/dev\/square-api\n\n======= Metadata =======\nKey                Value\n---                -----\ncreated_time       2024-11-15T01:52:23.434633061Z\ncustom_metadata    <nil>\ndeletion_time      n\/a\ndestroyed          false\nversion            5\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"GUI\" group=\"gui\">\n\nThe Vault GUI forcibly converts non-string keys to strings before writing data.\nTo preserve non-string values, use the JSON toggle to write your key\/value data\nas JSON.\n\n@include 'gui-instructions\/plugins\/kv\/open-overview.mdx'\n\n- 
Click **Create new +** from one of the following tabs:\n    - **Overview** tab: in the \"Current version\" card.\n    - **Secret** tab: in the toolbar.\n- Set a new key name and value.\n- Use the **Add** button to set additional key\/value pairs.\n- Click **Save** to write the new version data.\n\n![Partial screenshot of the Vault GUI showing the \"Create New Version\" screen](\/img\/gui\/kv\/write-data.png)\n\n<\/Tab>\n\n<Tab heading=\"API\" group=\"api\">\n\n1. Create a JSON file with the key\/value data you want to write to Vault. Use\nthe `options` field to set optional flags and `data` to define the key\/value\npairs.\n\n1. Make a `POST` call to\n   [`\/{plugin_mount_path}\/data\/{secret_path}`](\/vault\/api-docs\/secret\/kv\/kv-v2#create-update-secret)\n   with the JSON data:\n    ```shell-session\n    $ curl                                      \\\n      --request POST                            \\\n      --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n      --data @data.json                         \\\n      ${VAULT_ADDR}\/v1\/<plugin_mount_path>\/data\/<secret_path>\n    ```\n\n\nFor example:\n\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```json\n{\n  \"options\": {\n    \"cas\": 4\n  },\n  \"data\": {\n    \"sandbox\": \"1234\",\n    \"prod\": \"5679\",\n    \"smoke\": \"abcd\"\n  }\n}\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig hideClipboard=\"true\">\n\n```shell-session\n$ curl                                        \\\n    --request POST                            \\\n    --header \"X-Vault-Token: ${VAULT_TOKEN}\"  \\\n    --data @data.json                         \\\n    ${VAULT_ADDR}\/v1\/shared\/data\/dev\/square-api | jq\n\n{\n  \"request_id\": \"0c872d86-0def-4261-34d9-b796039ec02f\",\n  \"lease_id\": \"\",\n  \"renewable\": false,\n  \"lease_duration\": 0,\n  \"data\": {\n    \"created_time\": \"2024-11-15T02:41:02.556301319Z\",\n    \"custom_metadata\": null,\n    \"deletion_time\": \"\",\n    \"destroyed\": false,\n    \"version\": 5\n  
},\n  \"wrap_info\": null,\n  \"warnings\": null,\n  \"auth\": null,\n  \"mount_type\": \"kv\"\n}\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Write new data description        Write new versioned data to the kv v2 plugin        Write new key value data  Write new versions of data to a new or existing data path in the  kv  v2 plugin    Tip title  Assumptions      You have  set up a  kv  v2 plugin   vault docs secrets kv kv v2 setup      Your authentication token has  create  and  update  permissions for the  kv    v2 plugin     Tip    Tabs    Tab heading  CLI  group  cli     Note   The Vault CLI forcibly converts  kv  keys and values data to strings before writing data  To preserve non string data  write your key value pairs to Vault from a JSON file or use the plugin API     Note   Use   vault kv put    vault docs command kv put  to save a new version of key value data to an new or existing secret path      shell session   vault kv put             mount  mount path       secret path             list of kv values       For example    CodeBlockConfig hideClipboard  true       shell session   vault kv put         mount shared       dev square api      sandbox 1234 prod 5679 smoke abcd          Secret Path         shared data dev square api          Metadata         Key                Value                          created time       2024 11 15T01 52 23 434633061Z custom metadata     nil  deletion time      n a destroyed          false version            5        CodeBlockConfig     Tab    Tab heading  GUI  group  gui    The Vault GUI forcibly converts non string keys to strings before writing data  To preserve non string values  use the JSON toggle to write your key value data as JSON    include  gui instructions plugins kv open overview mdx     Click   Create new     from one of the following tabs          Overview   tab  in the  Current version  card          Secret   tab  in the toolbar    Set a 
new key name and value    Use the   Add   button to set additional key value pairs    Click   Save   to write the new version data     Partial screenshot of the Vault GUI showing the  Create New Version  screen   img gui kv write data png     Tab    Tab heading  API  group  api    1  Create a JSON file with the key value data you want to write to Vault  Use the  options  field to set optional flags and  data  to define the key value pairs   1  Make a  POST  call to        plugin mount path  data  secret path     vault api docs secret kv kv v2 create update secret     with the JSON data         shell session       curl                                                request POST                                      header  X Vault Token    VAULT TOKEN              data  data json                                   VAULT ADDR  v1  plugin mount path  data  secret path            For example     CodeBlockConfig hideClipboard  true       json      options          cas   4         data          sandbox    1234        prod    5679        smoke    abcd               CodeBlockConfig    CodeBlockConfig hideClipboard  true       shell session   curl                                                request POST                                    header  X Vault Token    VAULT TOKEN            data  data json                                 VAULT ADDR  v1 shared data dev square api   jq       request id    0c872d86 0def 4261 34d9 b796039ec02f      lease id          renewable   false     lease duration   0     data          created time    2024 11 15T02 41 02 556301319Z        custom metadata   null       deletion time            destroyed   false       version   5         wrap info   null     warnings   null     auth   null     mount type    kv           CodeBlockConfig     Tab     Tabs "}
{"questions":"vault Active directory secrets engine include ad secrets deprecation mdx The Active Directory secrets engine allowing Vault to generate dynamic credentials page title Active Directory Secrets Engines layout docs","answers":"---\nlayout: docs\npage_title: Active Directory - Secrets Engines\ndescription: >-\n  The Active Directory secrets engine allowing Vault to generate dynamic credentials.\n---\n\n# Active directory secrets engine\n\n@include 'ad-secrets-deprecation.mdx'\n\n@include 'x509-sha1-deprecation.mdx'\n\nThe Active Directory (AD) secrets engine is a plugin residing [here](https:\/\/github.com\/hashicorp\/vault-plugin-secrets-active-directory).\nIt has two main features.\n\nThe first feature (password rotation) is where the AD secrets engine rotates AD passwords dynamically.\nThis is designed for a high-load environment where many instances may be accessing\na shared password simultaneously. With a simple set up and a simple creds API,\nit doesn't require instances to be manually registered in advance to gain access.\nAs long as access has been granted to the creds path via a method like\n[AppRole](\/vault\/api-docs\/auth\/approle), they're available. Passwords are\nlazily rotated based on preset TTLs and can have a length configured to meet your needs. Additionally,\npasswords can be manually rotated using the [rotate-role](\/vault\/api-docs\/secret\/ad#rotate-role-credentials) endpoint.\n\nThe second feature (service account check-out) is where a library of service accounts can\nbe checked out by a person or by machines. Vault will automatically rotate the password\neach time a service account is checked in. Service accounts can be voluntarily checked in, or Vault\nwill check them in when their lending period (or, \"ttl\", in Vault's language) ends.\n\n## Password rotation\n\n### Customizing password generation\n\nThere are two ways of customizing how passwords are generated in the Active Directory secret engine:\n\n1. 
[Password Policies](\/vault\/docs\/concepts\/password-policies)\n2. `length` and `formatter` fields within the [configuration](\/vault\/api-docs\/secret\/ad#password-parameters)\n\nUtilizing password policies is the recommended path as the `length` and `formatter` fields have\nbeen deprecated in favor of password policies. The `password_policy` field within the configuration\ncannot be specified alongside either `length` or `formatter` to prevent a confusing configuration.\n\n### A note on lazy rotation\n\nTo drive home the point that passwords are rotated \"lazily\", consider this scenario:\n\n- A password is configured with a TTL of 1 hour.\n- All instances of a service using this password are off for 12 hours.\n- Then they wake up and again request the password.\n\nIn this scenario, although the password TTL was set to 1 hour, the password wouldn't be rotated for 12 hours when it\nwas next requested. \"Lazy\" rotation means passwords are rotated when all of the following conditions are true:\n\n- They are over their TTL\n- They are requested\n\nTherefore, the AD TTL can be considered a soft contract. It's fulfilled when the given password is next requested.\n\nTo ensure your passwords are rotated as expected, we'd recommend you configure services to request each password at least\ntwice as often as its TTL.\n\n### A note on escaping\n\n**It is up to the administrator** to provide properly escaped DNs. 
This\nincludes the user DN, bind DN for search, and so on.\n\nThe only DN escaping performed by this method is on usernames given at login\ntime when they are inserted into the final bind DN, and uses escaping rules\ndefined in RFC 4514.\n\nAdditionally, Active Directory has escaping rules that differ slightly from the\nRFC; in particular it requires escaping of '#' regardless of position in the DN\n(the RFC only requires it to be escaped when it is the first character), and\n'=', which the RFC indicates can be escaped with a backslash, but does not\ncontain in its set of required escapes. If you are using Active Directory and\nthese appear in your usernames, please ensure that they are escaped, in\naddition to being properly escaped in your configured DNs.\n\nFor reference, see [RFC 4514](https:\/\/www.ietf.org\/rfc\/rfc4514.txt) and this\n[TechNet post on characters to escape in Active\nDirectory](http:\/\/social.technet.microsoft.com\/wiki\/contents\/articles\/5312.active-directory-characters-to-escape.aspx).\n\n### Quick setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1.  Enable the Active Directory secrets engine:\n\n    ```text\n    $ vault secrets enable ad\n    Success! Enabled the ad secrets engine at: ad\/\n    ```\n\n    By default, the secrets engine will mount at the name of the engine. To\n    enable the secrets engine at a different path, use the `-path` argument.\n\n2.  Configure the credentials that Vault uses to communicate with Active Directory\n    to generate passwords:\n\n    ```text\n    $ vault write ad\/config \\\n        binddn=$USERNAME \\\n        bindpass=$PASSWORD \\\n        url=ldaps:\/\/138.91.247.105 \\\n        userdn='dc=example,dc=com'\n    ```\n\n    The `$USERNAME` and `$PASSWORD` given must have access to modify passwords\n    for the given account. 
It is possible to delegate access to change\n    passwords for these accounts to the one Vault is in control of, and this is\n    usually the highest-security solution.\n\n    If you'd like to do a quick, insecure evaluation, also set `insecure_tls` to true. However, this is NOT RECOMMENDED\n    in a production environment. In production, we recommend `insecure_tls` is false (its default) and is used with a valid\n    `certificate`.\n\n3.  Configure a role that maps a name in Vault to an account in Active Directory.\n    When applications request passwords, password rotation settings will be managed by\n    this role.\n\n    ```text\n    $ vault write ad\/roles\/my-application \\\n        service_account_name=\"my-application@example.com\"\n    ```\n\n4.  Grant \"my-application\" access to its creds at `ad\/creds\/my-application` using an\n    auth method like [AppRole](\/vault\/api-docs\/auth\/approle).\n\n### FAQ\n\n#### What if someone directly rotates an active directory password that Vault is managing?\n\nIf an administrator at your company rotates a password that Vault is managing,\nthe next time an application asks _Vault_ for that password, Vault won't know\nit.\n\nTo maintain that application's up-time, Vault will need to return to a state of\nknowing the password. Vault will generate a new password, update it, and return\nit to the application(s) asking for it. This all occurs automatically, without\nhuman intervention.\n\nThus, we wouldn't recommend that administrators directly rotate the passwords\nfor accounts that Vault is managing. 
This may lead to behavior the\nadministrator wouldn't expect, like finding very quickly afterwards that their\nnew password has already been changed.\n\nThe password `ttl` on a role can be updated at any time to ensure that the\nresponsibility of updating passwords can be left to Vault, rather than\nrequiring manual administrator updates.\n\n#### Why does Vault return the last password in addition to the current one?\n\nActive Directory promises _eventual consistency_, which means that new\npasswords may not be propagated to all instances immediately. To deal with\nthis, Vault returns the current password with the last password if it's known.\nThat way, if a new password isn't fully operational, the last password can also\nbe used.\n\n## Service account Check-Out\n\nVault offers the ability to check service accounts in and out. This is a separate,\ndifferent set of functionality from the password rotation feature above. Let's walk\nthrough how to use it, with explanation at each step.\n\nFirst we'll need to enable the AD secrets engine and tell it how to talk to our AD\nserver just as we did above.\n\n```shell-session\n$ vault secrets enable ad\nSuccess! Enabled the ad secrets engine at: ad\/\n\n$ vault write ad\/config \\\n    binddn=$USERNAME \\\n    bindpass=$PASSWORD \\\n    url=ldaps:\/\/138.91.247.105 \\\n    userdn='dc=example,dc=com'\n```\n\nOur next step is to designate a set of service accounts for check-out.\n\n```shell-session\n$ vault write ad\/library\/accounting-team \\\n    service_account_names=fizz@example.com,buzz@example.com \\\n    ttl=10h \\\n    max_ttl=20h \\\n    disable_check_in_enforcement=false\n```\n\nIn this example, the service account names of `fizz@example.com` and `buzz@example.com` have\nalready been created on the remote AD server. They've been set aside solely for Vault to handle.\nThe `ttl` is how long each check-out will last before Vault checks in a service account,\nrotating its password during check-in. 
The `max_ttl` is the maximum amount of time it can live\nif it's renewed. These default to `24h`, and both use [duration format strings](\/vault\/docs\/concepts\/duration-format).\nAlso by default, a service account must be checked in by the same Vault entity or client token that\nchecked it out. However, if this behavior causes problems, set `disable_check_in_enforcement=true`.\n\nWhen a library of service accounts has been created, view their status at any time to see if they're\navailable or checked out.\n\n```shell-session\n$ vault read ad\/library\/accounting-team\/status\nKey                 Value\n---                 -----\nbuzz@example.com    map[available:true]\nfizz@example.com    map[available:true]\n```\n\nTo check out any service account that's available, simply execute:\n\n```shell-session\n$ vault write -f ad\/library\/accounting-team\/check-out\nKey                     Value\n---                     -----\nlease_id                ad\/library\/accounting-team\/check-out\/EpuS8cX7uEsDzOwW9kkKOyGW\nlease_duration          10h\nlease_renewable         true\npassword                ?@09AZKh03hBORZPJcTDgLfntlHqxLy29tcQjPVThzuwWAx\/Twx4a2ZcRQRqrZ1w\nservice_account_name    fizz@example.com\n```\n\nIf the default `ttl` for the check-out is higher than needed, set the check-out to last\nfor a shorter time by using:\n\n```shell-session\n$ vault write ad\/library\/accounting-team\/check-out ttl=30m\nKey                     Value\n---                     -----\nlease_id                ad\/library\/accounting-team\/check-out\/gMonJ2jB6kYs6d3Vw37WFDCY\nlease_duration          30m\nlease_renewable         true\npassword                ?@09AZerLLuJfEMbRqP+3yfQYDSq6laP48TCJRBJaJu\/kDKLsq9WxL9szVAvL\/E1\nservice_account_name    buzz@example.com\n```\n\nThis can be a nice way to say, \"Although I _can_ have a check-out for 24 hours, if I\nhaven't checked it in after 30 minutes, I forgot or I'm a dead instance, so you can just\ncheck it back in.\"\n\nIf no service 
accounts are available for check-out, Vault will return a 400 Bad Request.\n\n```shell-session\n$ vault write -f ad\/library\/accounting-team\/check-out\nError writing data to ad\/library\/accounting-team\/check-out: Error making API request.\n\nURL: POST http:\/\/localhost:8200\/v1\/ad\/library\/accounting-team\/check-out\nCode: 400. Errors:\n\n* No service accounts available for check-out.\n```\n\nTo extend a check-out, renew its lease.\n\n```shell-session\n$ vault lease renew ad\/library\/accounting-team\/check-out\/0C2wmeaDmsToVFc0zDiX9cMq\nKey                Value\n---                -----\nlease_id           ad\/library\/accounting-team\/check-out\/0C2wmeaDmsToVFc0zDiX9cMq\nlease_duration     10h\nlease_renewable    true\n```\n\nRenewing a check-out means its current password will live longer, since passwords are rotated\nanytime a password is _checked in_ either by a caller, or by Vault because the check-out `ttl`\nends.\n\nTo check a service account back in for others to use, call:\n\n```shell-session\n$ vault write -f ad\/library\/accounting-team\/check-in\nKey          Value\n---          -----\ncheck_ins    [fizz@example.com]\n```\n\nMost of the time this will just work, but if multiple service accounts are checked out by the same\ncaller, Vault will need to know which one(s) to check in.\n\n```shell-session\n$ vault write ad\/library\/accounting-team\/check-in service_account_names=fizz@example.com\nKey          Value\n---          -----\ncheck_ins    [fizz@example.com]\n```\n\nTo perform a check-in, Vault verifies that the caller _should_ be able to check in a given service account.\nTo do this, Vault looks for either the same [entity ID](\/vault\/tutorials\/auth-methods\/identity)\nused to check out the service account, or the same client token.\n\nIf a caller is unable to check in a service account, or simply doesn't try,\nVault will check it back in automatically when the `ttl` expires. 
However, if that is too long,\nservice accounts can be forcibly checked in by a highly privileged user through:\n\n```shell-session\n$ vault write -f ad\/library\/manage\/accounting-team\/check-in\nKey          Value\n---          -----\ncheck_ins    [fizz@example.com]\n```\n\nAlternatively, revoking the secret's lease has the same effect.\n\n```shell-session\n$ vault lease revoke ad\/library\/accounting-team\/check-out\/PvBVG0m7pEg2940Cb3Jw3KpJ\nAll revocation operations queued successfully!\n```\n\n### Troubleshooting\n\n#### Old passwords are still valid for a period of time.\n\nDuring testing, we found that by default, many versions of Active Directory\nperpetuate old passwords for a short while. After we discovered this behavior,\nwe found articles discussing it by searching for \"AD password caching\" and \"OldPasswordAllowedPeriod\". We\nalso found [an article from Microsoft](https:\/\/support.microsoft.com\/en-us\/help\/906305\/new-setting-modifies-ntlm-network-authentication-behavior)\ndiscussing how to configure this behavior. This behavior appears to vary by AD\nversion. We recommend you test the behavior of your particular AD server,\nand edit its settings to gain the desired behavior.\n\n#### I get a lot of 400 bad requests when trying to check out service accounts.\n\nThis will occur when there aren't enough service accounts for those requesting them. Let's\nsuppose our \"accounting-team\" service accounts are the ones being requested. When Vault\nreceives a check-out call but none are available, Vault will log at debug level:\n\"'accounting-team' had no check-outs available\". 
Vault will also increment a metric\ncontaining the strings \"active directory\", \"check-out\", \"unavailable\", and \"accounting-team\".\n\nOnce it's known _which_ library needs more service accounts for checkout, fix this issue\nby merely creating a new service account for it to use in Active Directory, then adding it to\nVault like so:\n\n```shell-session\n$ vault write ad\/library\/accounting-team \\\n    service_account_names=fizz@example.com,buzz@example.com,new@example.com\n```\n\nIn this example, fizz and buzz were pre-existing but were still included in the call\nbecause we'd like them to exist in the resulting set. The new account was appended to\nthe end.\n\n#### Sometimes Vault gives me a password but then AD says it's not valid.\n\nActive Directory is eventually consistent, meaning that it can take some time for word\nof a new password to travel across all AD instances in a cluster. In larger clusters, we\nhave observed the password taking over 10 seconds to propagate fully. The simplest way to\nhandle this is to wait and retry using the new password.\n\n#### When trying to read credentials I get 'LDAP result code 53 \"Unwilling to perform\"'\n\nActive Directory will only support password changes over a secure connection. Ensure that your configuration block is not using an unsecured LDAP connection.\n\n## Tutorial\n\nRefer to the [Active Directory Service Account Check-out](\/vault\/tutorials\/secrets-management\/active-directory) tutorial to learn how to enable a team to share a select set of service accounts.\n\n## API\n\nThe Active Directory secrets engine has a full HTTP API. 
Please see the\n[Active Directory secrets engine API](\/vault\/api-docs\/secret\/ad) for more\ndetails.","site":"vault"}
{"questions":"vault page title Migration Guide Active Directory Secrets Engines The Vault Active Directory secrets engine vault docs secrets ad has been deprecated as Migration guide active directory secrets engine The guide for migrating from the Active Directory secrets engine to the LDAP secrets engine layout docs","answers":"---\nlayout: docs\npage_title: Migration Guide - Active Directory - Secrets Engines\ndescription: >-\n  The guide for migrating from the Active Directory secrets engine to the LDAP secrets engine.\n---\n\n# Migration guide - active directory secrets engine\n\nThe Vault [Active Directory secrets engine](\/vault\/docs\/secrets\/ad) has been deprecated as\nof the Vault 1.13 release. This document provides guidance for migrating from the Active\nDirectory secrets engine to the [LDAP secrets engine](\/vault\/docs\/secrets\/ldap) that was\nintroduced in Vault 1.12.\n\n## Deprecation timeline\n\nBeginning from the Vault 1.13 release, we will continue to support the Active Directory (AD)\nsecrets engine in maintenance mode for six major Vault releases. Maintenance mode means that\nwe will fix bugs and security issues, but no new features will be added. All new feature\ndevelopment efforts will go towards the unified LDAP secrets engine. At Vault 1.18, we will\nmark the AD secrets engine as [pending removal](\/vault\/docs\/deprecation\/faq#pending-removal).\nAt this time, Vault will begin to strongly signal operators that they need to migrate off of\nthe AD secrets engine. At Vault 1.19, we will mark the AD secrets engine as\n[removed](\/vault\/docs\/deprecation\/faq#removed). At this time, the AD secrets engine will be\nremoved from Vault. Vault will not start up with the AD secrets engine mounts enabled.\n\n## Migration steps\n\nThe following sections detail how to migrate the AD secrets engine configuration and\napplications consuming secrets to the new LDAP secrets engine.\n\n### 1. 
enable LDAP secrets engine\n\nThe LDAP secrets engine needs to be enabled in order to have a target for migration of\nexisting AD secrets engine mounts. AD secrets engine mounts should be mapped 1-to-1 with\nnew LDAP secrets engine mounts.\n\nTo enable the LDAP secrets engine:\n\n```shell-session\n$ vault secrets enable ldap\n```\n\nTo enable at a custom path:\n\n```shell-session\n$ vault secrets enable -path=<custom_path> ldap\n```\n\nIf enabled at a custom path, the `\/ldap\/` path segment in API paths must be replaced with\nthe custom path value.\n\n### 2. migrate configuration\n\nThe AD secrets engine [configuration](\/vault\/api-docs\/secret\/ad#configuration)\nwill need to be migrated to an LDAP secrets engine [configuration](\/vault\/api-docs\/secret\/ldap#configuration-management).\nThe API paths and parameters will need to be considered during the migration.\n\n#### API path\n\n| AD Secrets Engine | LDAP Secrets Engine |\n| ----------------- |-------------------- |\n| [\/ad\/config](\/vault\/api-docs\/secret\/ad#configuration) | [\/ldap\/config](\/vault\/api-docs\/secret\/ldap#configuration-management)    |\n\n#### Parameters\n\nThe parameters from existing AD secrets engine configurations can generally be mapped 1-to-1\nto LDAP secrets engine configuration. The following LDAP secrets engine parameters are the\nexception and must be considered during the migration.\n\n| AD Secrets Engine | LDAP Secrets Engine | Details |\n| ----------------- | ------------------- | ------- |\n| N\/A | [schema](\/vault\/api-docs\/secret\/ldap#schema) | Must be set to the `ad` option on the LDAP secrets engine configuration. |\n| [userdn](\/vault\/api-docs\/secret\/ad#userdn) | [userdn](\/vault\/api-docs\/secret\/ldap#userdn) | Required to be set if using the [library sets](#4-migrate-library-sets) check-out feature. It can be optionally set if using the [static roles](#3-migrate-roles) feature without providing a distinguished name ([dn](\/vault\/api-docs\/secret\/ldap#dn)). 
|\n| [ttl](\/vault\/api-docs\/secret\/ad#ttl) | N\/A | Replaced by static role [rotation_period](\/vault\/api-docs\/secret\/ldap#rotation_period). |\n| [max_ttl](\/vault\/api-docs\/secret\/ad#max_ttl) | N\/A | Not supported for [static roles](#3-migrate-roles). Can be set using [max_ttl](\/vault\/api-docs\/secret\/ldap#max_ttl-1) for library sets. |\n| [last_rotation_tolerance](\/vault\/api-docs\/secret\/ad#last_rotation_tolerance) | N\/A | Not supported by the LDAP secrets engine. Passwords will be rotated based on the static role [rotation_period](\/vault\/api-docs\/secret\/ldap#rotation_period). |\n\n### 3. migrate roles\n\nAD secrets engine [roles](\/vault\/api-docs\/secret\/ad#role-management) will need to be migrated\nto LDAP secrets engine [static roles](\/vault\/api-docs\/secret\/ldap#static-roles). The API paths,\nparameters, and rotation periods will need to be considered during the migration.\n\n#### API path\n\n| AD Secrets Engine | LDAP Secrets Engine |\n| ----------------- | ------------------- |\n| [\/ad\/roles\/:role_name](\/vault\/api-docs\/secret\/ad#role-management) | [\/ldap\/static-role\/:role_name](\/vault\/api-docs\/secret\/ldap#static-roles) |\n\n#### Parameters\n\nThe following parameters must be migrated.\n\n| AD Secrets Engine | LDAP Secrets Engine | Details |\n| ----------------- | ------------------- | ------- |\n| [ttl](\/vault\/api-docs\/secret\/ad#ttl-1) | [rotation_period](\/vault\/api-docs\/secret\/ldap#rotation_period) | N\/A |\n| [service_account_name](\/vault\/api-docs\/secret\/ad#service_account_name) | [username](\/vault\/api-docs\/secret\/ldap#username)    | If `username` is set without setting the [dn](\/vault\/api-docs\/secret\/ldap#dn) value, then the configuration [userdn](\/vault\/api-docs\/secret\/ldap#userdn) must also be set. 
|\n\n#### Rotation periods\n\nRotations that occur from AD secrets engine [roles](\/vault\/api-docs\/secret\/ad#role-management)\nmay conflict with rotations performed by LDAP secrets engine [static roles](\/vault\/api-docs\/secret\/ldap#static-roles)\nduring the migration process. This could cause applications consuming passwords to read a\npassword that gets invalidated by a rotation shortly after. To mitigate this, it's recommended\nto set an initial [rotation_period](\/vault\/api-docs\/secret\/ldap#rotation_period) that provides\na large enough window to complete [application migrations](#5-migrate-applications) to minimize\nthe chance of this happening. Additionally, tuning the AD secrets engine [last_rotation_tolerance](\/vault\/api-docs\/secret\/ad#last_rotation_tolerance)\nparameter could help mitigate applications reading stale passwords, since the parameter allows\nrotation of the password if it's been rotated out-of-band within a given duration.\n\n\n<Note title=\"Lazy rotation vs automatic rotation\">\n\n  The AD secrets engine uses **lazy rotation** for passwords. With lazy\n  rotation, passwords rotate whenever the engine receives a request for a role\n  whose rotation period has elapsed.\n\n  The LDAP secrets engine uses **automatic rotation** for passwords. With\n  automatic rotation, passwords are rotated as soon as the rotation period\n  elapses, without waiting for a client request.\n\n  When migrating to the LDAP secrets engine, you may need to account for the\n  rotation changes in your clients. For example, your client may assume the\n  password does not change until its next request to Vault and use the password\n  to authenticate against other services.\n\n<\/Note>\n\n### 4. 
migrate library sets\n\nAD secrets engine [library sets](\/vault\/api-docs\/secret\/ad#library-management) will need to\nbe migrated to LDAP secrets engine [library sets](\/vault\/api-docs\/secret\/ldap#library-set-management).\nThe API paths and parameters will need to be considered during the migration.\n\n#### API path\n\n| AD Secrets Engine | LDAP Secrets Engine |\n| ----------------- | ------------------- |\n| [\/ad\/library\/:set_name](\/vault\/api-docs\/secret\/ad#library-management) | [\/ldap\/library\/:set_name](\/vault\/api-docs\/secret\/ldap#library-set-management) |\n\n#### Parameters\n\nThe parameters from existing AD secrets engine library sets can be exactly mapped 1-to-1\nto LDAP secrets engine library sets. There are no exceptions to consider.\n\n### 5. migrate applications\n\nThe AD secrets engine provides APIs to obtain credentials for AD users and service accounts.\nApplications, or Vault clients, are typically the consumer of these credentials. For applications\nto successfully migrate, they must begin using new API paths and response formats provided\nby the LDAP secrets engine. Additionally, they must obtain a Vault [token](\/vault\/docs\/concepts\/tokens)\nwith an ACL [policy](\/vault\/docs\/concepts\/policies) that authorizes access to the new APIs.\nThe following section details credential-providing APIs and how their response formats differ\nbetween the AD secrets engine and LDAP secrets engine.\n\n#### API paths\n\n| AD Secrets Engine | LDAP Secrets Engine | Details |\n| ----------------- | ------------------- | ------- |\n| [\/ad\/creds\/:role_name](\/vault\/api-docs\/secret\/ad#retrieving-passwords) | [\/ldap\/static-cred\/:role_name](\/vault\/api-docs\/secret\/ldap#static-role-passwords) | Response formats differ. Namely, `current_password` is now `password`.  See [AD response](\/vault\/api-docs\/secret\/ad#sample-get-response) and [LDAP response](\/vault\/api-docs\/secret\/ldap#sample-get-response-1) for the difference. 
|\n| [\/ad\/library\/:set_name\/check-out](\/vault\/api-docs\/secret\/ad#check-a-credential-out) | [\/ldap\/library\/:set_name\/check-out](\/vault\/api-docs\/secret\/ldap#check-out-management) | Response formats do not differ. |\n| [\/ad\/library\/:set_name\/check-in](\/vault\/api-docs\/secret\/ad#check-a-credential-in) | [\/ldap\/library\/:set_name\/check-in](\/vault\/api-docs\/secret\/ldap#check-in-management) | Response formats do not differ. |\n\n### 6. disable AD secrets engines\n\nAD secrets engine mounts can be disabled after successful migration of configuration and\napplications to the LDAP secrets engine. Note that disabling the secrets engine will erase\nits configuration from storage. This cannot be reversed.\n\nTo disable the AD secrets engine:\n\n```shell-session\n$ vault secrets disable ad\n```\n\nTo disable at a custom path:\n\n```shell-session\n$ vault secrets disable <custom_path>\n```","site":"vault","answers_cleaned":"    layout  docs page title  Migration Guide   Active Directory   Secrets Engines description       The guide for migrating from the Active Directory secrets engine to the LDAP secrets engine         Migration guide   active directory secrets engine  The Vault  Active Directory secrets engine   vault docs secrets ad  has been deprecated as of the Vault 1 13 release  This document provides guidance for migrating from the Active Directory secrets engine to the  LDAP secrets engine   vault docs secrets ldap  that was introduced in Vault 1 12      Deprecation timeline  Beginning from the Vault 1 13 release  we will continue to support the Active Directory  AD  secrets engine in maintenance mode for six major Vault releases  Maintenance mode means that we will fix bugs and security issues  but no new features will be added  All new feature development efforts will go towards the unified LDAP secrets engine  At Vault 1 18  we will mark the AD secrets engine as  pending removal   vault docs deprecation faq pending removal   At this time  
Vault will begin to strongly signal operators that they need to migrate off of the AD secrets engine  At Vault 1 19  we will mark the AD secrets engine as  removed   vault docs deprecation faq removed   At this time  the AD secrets engine will be removed from Vault  Vault will not start up with the AD secrets engine mounts enabled      Migration steps  The following sections detail how to migrate the AD secrets engine configuration and applications consuming secrets to the new LDAP secrets engine       1  enable LDAP secrets engine  The LDAP secrets engine needs to be enabled in order to have a target for migration of existing AD secrets engine mounts  AD secrets engine mounts should be mapped 1 to 1 with new LDAP secrets engine mounts   To enable the LDAP secrets engine      shell session   vault secrets enable ldap      To enable at a custom path      shell session   vault secrets enable  path  custom path  ldap      If enabled at a custom path  the   ldap   path segment in API paths must be replaced with the custom path value       2  migrate configuration  The AD secrets engine  configuration   vault api docs secret ad configuration  will need to be migrated to an LDAP secrets engine  configuration   vault api docs secret ldap configuration management   The API paths and parameters will need to be considered during the migration        API path    AD Secrets Engine   LDAP Secrets Engine                                                   ad config   vault api docs secret ad configuration      ldap config   vault api docs secret ad configuration             Parameters  The parameters from existing AD secrets engine configurations can generally be mapped 1 to 1 to LDAP secrets engine configuration  The following LDAP secrets engine parameters are the exception and must be considered during the migration     AD Secrets Engine   LDAP Secrets Engine   Details                                                           N A    schema   vault api docs secret ldap schema    
Must be set to the `ad` option on the LDAP secrets engine configuration |
| [userdn](/vault/api-docs/secret/ad#userdn) | [userdn](/vault/api-docs/secret/ldap#userdn) | Required to be set if using the [library sets](#4-migrate-library-sets) check-out feature. It can be optionally set if using the [static roles](#3-migrate-roles) feature without providing a distinguished name ([dn](/vault/api-docs/secret/ldap#dn)). |
| [ttl](/vault/api-docs/secret/ad#ttl) | N/A | Replaced by the static role [rotation_period](/vault/api-docs/secret/ldap#rotation_period). |
| [max_ttl](/vault/api-docs/secret/ad#max_ttl) | N/A | Not supported for [static roles](#3-migrate-roles). Can be set using [max_ttl](/vault/api-docs/secret/ldap#max_ttl-1) for library sets. |
| [last_rotation_tolerance](/vault/api-docs/secret/ad#last_rotation_tolerance) | N/A | Not supported by the LDAP secrets engine. Passwords will be rotated based on the static role [rotation_period](/vault/api-docs/secret/ldap#rotation_period). |

## 3. Migrate roles

AD secrets engine [roles](/vault/api-docs/secret/ad#role-management) will need to be migrated to LDAP secrets engine [static roles](/vault/api-docs/secret/ldap#static-roles). The API paths, parameters, and rotation periods will need to be considered during the migration.

### API path

| AD Secrets Engine | LDAP Secrets Engine |
|---|---|
| [ad/roles/:role_name](/vault/api-docs/secret/ad#role-management) | [ldap/static-role/:role_name](/vault/api-docs/secret/ldap#static-roles) |

### Parameters

The following parameters must be migrated:

| AD Secrets Engine | LDAP Secrets Engine | Details |
|---|---|---|
| [ttl](/vault/api-docs/secret/ad#ttl-1) | [rotation_period](/vault/api-docs/secret/ldap#rotation_period) | N/A |
| [service_account_name](/vault/api-docs/secret/ad#service_account_name) | [username](/vault/api-docs/secret/ldap#username) | If `username` is set without setting the [dn](/vault/api-docs/secret/ldap#dn) value, then the configuration [userdn](/vault/api-docs/secret/ldap#userdn) must also be set. |

### Rotation periods

Rotations that occur from AD secrets engine [roles](/vault/api-docs/secret/ad#role-management) may conflict with rotations performed by LDAP secrets engine [static roles](/vault/api-docs/secret/ldap#static-roles) during the migration process. This could cause applications consuming passwords to read a password that gets invalidated by a rotation shortly after. To mitigate this, it's recommended to set an initial [rotation_period](/vault/api-docs/secret/ldap#rotation_period) that provides a large enough window to complete [application migrations](#5-migrate-applications), to minimize the chance of this happening. Additionally, tuning the AD secrets engine [last_rotation_tolerance](/vault/api-docs/secret/ad#last_rotation_tolerance) parameter could help mitigate applications reading stale passwords, since the parameter allows rotation of the password if it's been rotated out-of-band within a given duration.

<Note title="Lazy rotation vs automatic rotation">

The AD secrets engine uses _lazy rotation_ for passwords. With lazy rotation, passwords rotate whenever the engine receives a request for a role whose rotation period has elapsed.

The LDAP secrets engine uses _automatic rotation_ for passwords. With automatic rotation, passwords are rotated as soon as the rotation period elapses, without waiting for a client request.

When migrating to the LDAP secrets engine, you may need to account for the rotation changes in your clients. For example, a client may assume the password does not change until its next request to Vault, and use the password to verify against other services in the meantime.

</Note>

## 4. Migrate library sets

AD secrets engine [library sets](/vault/api-docs/secret/ad#library-management) will need to be migrated to LDAP secrets engine [library sets](/vault/api-docs/secret/ldap#library-set-management). The API paths and parameters will need to be considered during the migration.

### API path

| AD Secrets Engine | LDAP Secrets Engine |
|---|---|
| [ad/library/:set_name](/vault/api-docs/secret/ad#library-management) | [ldap/library/:set_name](/vault/api-docs/secret/ldap#library-set-management) |

### Parameters

The parameters from existing AD secrets engine library sets can be mapped exactly 1-to-1 to LDAP secrets engine library sets. There are no exceptions to consider.

## 5. Migrate applications

The AD secrets engine provides APIs to obtain credentials for AD users and service accounts. Applications, or Vault clients, are typically the consumers of these credentials. For applications to successfully migrate, they must begin using the new API paths and response formats provided by the LDAP secrets engine. Additionally, they must obtain a Vault [token](/vault/docs/concepts/tokens) with an ACL [policy](/vault/docs/concepts/policies) that authorizes access to the new APIs. The following section details the credential-providing APIs and how their response formats differ between the AD secrets engine and LDAP secrets engine.

### API paths

| AD Secrets Engine | LDAP Secrets Engine | Details |
|---|---|---|
| [ad/creds/:role_name](/vault/api-docs/secret/ad#retrieving-passwords) | [ldap/static-cred/:role_name](/vault/api-docs/secret/ldap#static-role-passwords) | Response formats differ. Namely, `current_password` is now `password`. See the [AD response](/vault/api-docs/secret/ad#sample-get-response) and [LDAP response](/vault/api-docs/secret/ldap#sample-get-response-1) for the difference. |
| [ad/library/:set_name/check-out](/vault/api-docs/secret/ad#check-a-credential-out) | [ldap/library/:set_name/check-out](/vault/api-docs/secret/ldap#check-out-management) | Response formats do not differ. |
| [ad/library/:set_name/check-in](/vault/api-docs/secret/ad#check-a-credential-in) | [ldap/library/:set_name/check-in](/vault/api-docs/secret/ldap#check-in-management) | Response formats do not differ. |

## 6. Disable AD secrets engines

AD secrets engine mounts can be disabled after successful migration of configurations and applications to the LDAP secrets engine. Note that disabling the secrets engine will erase its configuration from storage. This cannot be reversed.

To disable the AD secrets engine:

```shell-session
$ vault secrets disable ad
```

To disable at a custom path:

```shell-session
$ vault secrets disable <custom path>
```
"}
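The `current_password` → `password` rename called out in the application-migration table is the main response-format change applications must absorb. A minimal sketch in Python of a compatibility shim that works against either engine's `data` payload during the migration window (the function name and example dictionaries are illustrative, not part of the Vault API):

```python
def extract_password(secret_data: dict) -> str:
    """Return the credential from either response shape.

    AD secrets engine (ad/creds/:role_name):          {"current_password": ..., "username": ...}
    LDAP secrets engine (ldap/static-cred/:role_name): {"password": ..., "username": ..., ...}
    """
    # Prefer the LDAP field name, fall back to the legacy AD field name,
    # so the same client code works before and after the migration.
    if "password" in secret_data:
        return secret_data["password"]
    if "current_password" in secret_data:
        return secret_data["current_password"]
    raise KeyError("no password field in secret response")


# Both response shapes are handled (field values here are made up):
ad_style = {"username": "svc-app", "current_password": "old-secret"}
ldap_style = {"username": "svc-app", "password": "new-secret"}
assert extract_password(ad_style) == "old-secret"
assert extract_password(ldap_style) == "new-secret"
```

Once all AD mounts are disabled, the fallback branch can be deleted.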
{"questions":"vault page title OIDC Identity Provider layout docs Vault is an OpenID Connect OIDC https openid net specs openid connect core 1 0 html OIDC identity provider Setup and configuration for Vault as an OpenID Connect OIDC identity provider","answers":"---\nlayout: docs\npage_title: OIDC Identity Provider\ndescription: >-\n  Setup and configuration for Vault as an OpenID Connect (OIDC) identity provider.\n---\n\n# OIDC identity provider\n\nVault is an OpenID Connect ([OIDC](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html))\nidentity provider. This enables client applications that speak the OIDC protocol to leverage\nVault's source of [identity](\/vault\/docs\/concepts\/identity) and wide range of [authentication methods](\/vault\/docs\/auth)\nwhen authenticating end-users. Client applications can configure their authentication logic\nto talk to Vault. Once enabled, Vault will act as the bridge to other identity providers via\nits existing authentication methods. Client applications can also obtain identity information\nfor their end-users by leveraging custom templating of Vault identity information.\n\n\\-> **Note**: For more detailed information on the configuration resources and OIDC endpoints,\nplease visit the [OIDC provider](\/vault\/docs\/concepts\/oidc-provider) concepts page.\n\n## Setup\n\nThe Vault OIDC provider system is built on top of the identity secrets engine.\nThis secrets engine is mounted by default and cannot be disabled or moved.\n\nEach Vault namespace has a default OIDC [provider](\/vault\/docs\/concepts\/oidc-provider#oidc-providers)\nand [key](\/vault\/docs\/concepts\/oidc-provider#keys). This built-in configuration enables client\napplications to begin using Vault as a source of identity with minimal configuration. 
For\ndetails on the built-in configuration and advanced options, see the [OIDC provider](\/vault\/docs\/concepts\/oidc-provider)\nconcepts page.\n\nThe following steps show a minimal configuration that allows a client application to use\nVault as an OIDC provider.\n\n1. Enable a Vault auth method:\n\n   ```text\n   $ vault auth enable userpass\n   Success! Enabled userpass auth method at: userpass\/\n   ```\n\nAny Vault auth method may be used within the OIDC flow. For simplicity, enable the\n`userpass` auth method.\n\n2. Create a user:\n\n   ```text\n   $ vault write auth\/userpass\/users\/end-user password=\"securepassword\"\n   Success! Data written to: auth\/userpass\/users\/end-user\n   ```\n\n   This user will authenticate to Vault through a client application, otherwise known as\n   an OIDC [relying party](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#Terminology).\n\n3. Create a client application:\n\n   ```text\n   $ vault write identity\/oidc\/client\/my-webapp \\\n     redirect_uris=\"https:\/\/localhost:9702\/auth\/oidc-callback\" \\\n     assignments=\"allow_all\"\n   Success! Data written to: identity\/oidc\/client\/my-webapp\n   ```\n\n   This operation creates a client application which can be used to configure an OIDC\n   relying party. See the [client applications](\/vault\/docs\/concepts\/oidc-provider#client-applications)\n   section for details on different client types, including `confidential` and `public` clients.\n\n   The `assignments` parameter limits the Vault entities and groups that are allowed to\n   authenticate through the client application. By default, no Vault entities are allowed.\n   To allow all Vault entities to authenticate, the built-in [allow_all](\/vault\/docs\/concepts\/oidc-provider#assignments)\n   assignment is provided.\n\n4. 
Read client credentials:\n\n   ```text\n   $ vault read identity\/oidc\/client\/my-webapp\n\n   Key                 Value\n   ---                 -----\n   access_token_ttl    24h\n   assignments         [allow_all]\n   client_id           GSDTnn3KaOrLpNlVGlYLS9TVsZgOTweO\n   client_secret       hvo_secret_gBKHcTP58C4aq7FqPWsuqKgpiiegd7ahpifGae9WGkHRCwFEJTZA9KGdNVpzE0r8\n   client_type         confidential\n   id_token_ttl        24h\n   key                 default\n   redirect_uris       [https:\/\/localhost:9702\/auth\/oidc-callback]\n   ```\n\n   The `client_id` and `client_secret` are the client application's credentials. These\n   values are typically required when configuring an OIDC relying party.\n\n5. Read OIDC discovery configuration:\n\n   ```text\n   $ curl -s http:\/\/127.0.0.1:8200\/v1\/identity\/oidc\/provider\/default\/.well-known\/openid-configuration\n   {\n     \"issuer\": \"http:\/\/127.0.0.1:8200\/v1\/identity\/oidc\/provider\/default\",\n     \"jwks_uri\": \"http:\/\/127.0.0.1:8200\/v1\/identity\/oidc\/provider\/default\/.well-known\/keys\",\n     \"authorization_endpoint\": \"http:\/\/127.0.0.1:8200\/ui\/vault\/identity\/oidc\/provider\/default\/authorize\",\n     \"token_endpoint\": \"http:\/\/127.0.0.1:8200\/v1\/identity\/oidc\/provider\/default\/token\",\n     \"userinfo_endpoint\": \"http:\/\/127.0.0.1:8200\/v1\/identity\/oidc\/provider\/default\/userinfo\",\n     \"request_parameter_supported\": false,\n     \"request_uri_parameter_supported\": false,\n     \"id_token_signing_alg_values_supported\": [\n       \"RS256\",\n       \"RS384\",\n       \"RS512\",\n       \"ES256\",\n       \"ES384\",\n       \"ES512\",\n       \"EdDSA\"\n     ],\n     \"response_types_supported\": [\n       \"code\"\n     ],\n     \"scopes_supported\": [\n       \"openid\"\n     ],\n     \"subject_types_supported\": [\n       \"public\"\n     ],\n     \"grant_types_supported\": [\n       \"authorization_code\"\n     ],\n     
\"token_endpoint_auth_methods_supported\": [\n       \"none\",\n       \"client_secret_basic\",\n       \"client_secret_post\"\n     ],\n     \"code_challenge_methods_supported\": [\n       \"plain\",\n       \"S256\"\n     ]\n   }\n   ```\n\n   Each Vault OIDC provider publishes [discovery metadata](https:\/\/openid.net\/specs\/openid-connect-discovery-1_0.html#ProviderMetadata).\n   The `issuer` value is typically required when configuring an OIDC relying party.\n\n## Usage\n\nAfter configuring a Vault auth method and client application, the following details can\nbe used to configure an OIDC relying party to delegate end-user authentication to Vault.\n\n- `client_id` - The ID of the client application\n- `client_secret` - The secret of the client application\n- `issuer` - The issuer of the Vault OIDC provider\n\nA number of HashiCorp products provide OIDC authentication methods. This means that they\ncan leverage Vault as a source of identity using the OIDC protocol. See the following links\nfor details on configuring OIDC authentication for other HashiCorp products:\n\n- [Boundary](\/boundary\/tutorials\/access-management\/oidc-auth)\n- [Consul](\/consul\/docs\/security\/acl\/auth-methods\/oidc)\n- [Waypoint](\/waypoint\/docs\/server\/auth\/oidc)\n- [Nomad](\/nomad\/tutorials\/single-sign-on\/sso-oidc-vault)\n\nOtherwise, refer to the documentation of the specific OIDC relying party for usage details.\n\n## Supported flows\n\nThe Vault OIDC provider feature currently supports the following authentication flow:\n\n- [Authorization Code Flow](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#CodeFlowAuth).\n\n## Tutorial\n\nRefer to the [Vault as an OIDC Identity Provider](\/vault\/tutorials\/auth-methods\/oidc-identity-provider)\ntutorial to learn how to configure a HashiCorp [Boundary](https:\/\/www.boundaryproject.io\/)\nto leverage Vault as a source of identity using the OIDC protocol.\n\n## API\n\nThe Vault OIDC provider feature has a full HTTP API. 
Please see the\n[OIDC identity provider API](\/vault\/api-docs\/secret\/identity\/oidc-provider) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title  OIDC Identity Provider description       Setup and configuration for Vault as an OpenID Connect  OIDC  identity provider         OIDC identity provider  Vault is an OpenID Connect   OIDC  https   openid net specs openid connect core 1 0 html   identity provider  This enables client applications that speak the OIDC protocol to leverage Vault s source of  identity   vault docs concepts identity  and wide range of  authentication methods   vault docs auth  when authenticating end users  Client applications can configure their authentication logic to talk to Vault  Once enabled  Vault will act as the bridge to other identity providers via its existing authentication methods  Client applications can also obtain identity information for their end users by leveraging custom templating of Vault identity information         Note    For more detailed information on the configuration resources and OIDC endpoints  please visit the  OIDC provider   vault docs concepts oidc provider  concepts page      Setup  The Vault OIDC provider system is built on top of the identity secrets engine  This secrets engine is mounted by default and cannot be disabled or moved   Each Vault namespace has a default OIDC  provider   vault docs concepts oidc provider oidc providers  and  key   vault docs concepts oidc provider keys   This built in configuration enables client applications to begin using Vault as a source of identity with minimal configuration  For details on the built in configuration and advanced options  see the  OIDC provider   vault docs concepts oidc provider  concepts page   The following steps show a minimal configuration that allows a client application to use Vault as an OIDC provider   1  Enable a Vault auth method         text      vault auth enable userpass    Success  Enabled userpass auth 
method at  userpass          Any Vault auth method may be used within the OIDC flow  For simplicity  enable the  userpass  auth method   2  Create a user         text      vault write auth userpass users end user password  securepassword     Success  Data written to  auth userpass users end user            This user will authenticate to Vault through a client application  otherwise known as    an OIDC  relying party  https   openid net specs openid connect core 1 0 html Terminology    2  Create a client application         text      vault write identity oidc client my webapp        redirect uris  https   localhost 9702 auth oidc callback         assignments  allow all     Success  Data written to  identity oidc client my webapp            This operation creates a client application which can be used to configure an OIDC    relying party  See the  client applications   vault docs concepts oidc provider client applications     section for details on different client types  including  confidential  and  public  clients      The  assignments  parameter limits the Vault entities and groups that are allowed to    authenticate through the client application  By default  no Vault entities are allowed     To allow all Vault entities to authenticate  the built in  allow all   vault docs concepts oidc provider assignments     assignment is provided   2  Read client credentials         text      vault read identity oidc client my webapp     Key                 Value                                 access token ttl    24h    assignments          allow all     client id           GSDTnn3KaOrLpNlVGlYLS9TVsZgOTweO    client secret       hvo secret gBKHcTP58C4aq7FqPWsuqKgpiiegd7ahpifGae9WGkHRCwFEJTZA9KGdNVpzE0r8    client type         confidential    id token ttl        24h    key                 default    redirect uris        https   localhost 9702 auth oidc callback             The  client id  and  client secret  are the client application s credentials  These    values are 
typically required when configuring an OIDC relying party   2  Read OIDC discovery configuration         text      curl  s http   127 0 0 1 8200 v1 identity oidc provider default  well known openid configuration            issuer    http   127 0 0 1 8200 v1 identity oidc provider default         jwks uri    http   127 0 0 1 8200 v1 identity oidc provider default  well known keys         authorization endpoint    http   127 0 0 1 8200 ui vault identity oidc provider default authorize         token endpoint    http   127 0 0 1 8200 v1 identity oidc provider default token         userinfo endpoint    http   127 0 0 1 8200 v1 identity oidc provider default userinfo         request parameter supported   false        request uri parameter supported   false        id token signing alg values supported             RS256           RS384           RS512           ES256           ES384           ES512           EdDSA                response types supported             code                scopes supported             openid                subject types supported             public                grant types supported             authorization code                token endpoint auth methods supported             none           client secret basic           client secret post                code challenge methods supported             plain           S256                         Each Vault OIDC provider publishes  discovery metadata  https   openid net specs openid connect discovery 1 0 html ProviderMetadata      The  issuer  value is typically required when configuring an OIDC relying party      Usage  After configuring a Vault auth method and client application  the following details can be used to configure an OIDC relying party to delegate end user authentication to Vault      client id    The ID of the client application    client secret    The secret of the client application    issuer    The issuer of the Vault OIDC provider  A number of HashiCorp products provide OIDC 
authentication methods  This means that they can leverage Vault as a source of identity using the OIDC protocol  See the following links for details on configuring OIDC authentication for other HashiCorp products      Boundary   boundary tutorials access management oidc auth     Consul   consul docs security acl auth methods oidc     Waypoint   waypoint docs server auth oidc     Nomad   nomad tutorials single sign on sso oidc vault   Otherwise  refer to the documentation of the specific OIDC relying party for usage details      Supported flows  The Vault OIDC provider feature currently supports the following authentication flow      Authorization Code Flow  https   openid net specs openid connect core 1 0 html CodeFlowAuth       Tutorial  Refer to the  Vault as an OIDC Identity Provider   vault tutorials auth methods oidc identity provider  tutorial to learn how to configure a HashiCorp  Boundary  https   www boundaryproject io   to leverage Vault as a source of identity using the OIDC protocol      API  The Vault OIDC provider feature has a full HTTP API  Please see the  OIDC identity provider API   vault api docs secret identity oidc provider  for more details "}
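The discovery metadata above advertises `code` as the only supported response type, so a relying party begins the Authorization Code Flow by redirecting the end-user to the provider's `authorization_endpoint`. A minimal sketch in Python of assembling that front-channel request (the helper name and the `state`/`nonce` values are illustrative; the endpoint, `client_id`, and `redirect_uri` are taken from the walkthrough above):

```python
from urllib.parse import urlencode


def build_authorize_url(discovery: dict, client_id: str, redirect_uri: str,
                        state: str, nonce: str) -> str:
    """Build the Authorization Code Flow request URL from the provider's
    OIDC discovery metadata."""
    params = {
        "response_type": "code",  # the only entry in response_types_supported
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid",        # the only entry in scopes_supported by default
        "state": state,           # opaque value echoed back on the callback
        "nonce": nonce,           # bound into the ID token to prevent replay
    }
    return discovery["authorization_endpoint"] + "?" + urlencode(params)


# Values from the setup walkthrough; state/nonce are made-up examples.
discovery = {
    "issuer": "http://127.0.0.1:8200/v1/identity/oidc/provider/default",
    "authorization_endpoint": "http://127.0.0.1:8200/ui/vault/identity/oidc/provider/default/authorize",
}
url = build_authorize_url(
    discovery,
    client_id="GSDTnn3KaOrLpNlVGlYLS9TVsZgOTweO",
    redirect_uri="https://localhost:9702/auth/oidc-callback",
    state="af0ifjsldkj",
    nonce="n-0S6_WzA2Mj",
)
assert url.startswith(discovery["authorization_endpoint"] + "?response_type=code")
```

After the end-user authenticates (through `userpass` in this example), Vault redirects back to `redirect_uri` with a `code` that the relying party exchanges at `token_endpoint` for the ID token.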
{"questions":"vault page title Identity Tokens Details and best practices for identity tokens Introduction layout docs Identity tokens","answers":"---\nlayout: docs\npage_title: Identity Tokens\ndescription: Details and best practices for identity tokens.\n---\n\n# Identity tokens\n\n## Introduction\n\nIdentity information is used throughout Vault, but it can also be exported for\nuse by other applications. An authorized user\/application can request a token\nthat encapsulates identity information for their associated entity. These\ntokens are signed JWTs following the [OIDC ID\ntoken](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#IDToken) structure.\nThe public keys used to authenticate the tokens are published by Vault on an\nunauthenticated endpoint following OIDC discovery and JWKS conventions, which\nshould be directly usable by JWT\/OIDC libraries. An introspection endpoint is\nalso provided by Vault for token verification.\n\n### Roles and keys\n\nOIDC-compliant ID tokens are generated against a role which allows configuration\nof token claims via a templating system, token ttl, and a way to specify which\n\"key\" will be used to sign the token. The role template is an optional parameter\nto customize the token contents and is described in the next section. Token TTL\ncontrols the expiration time of the token, after which verification libraries will\nconsider the token invalid. All roles have an associated `client_id` that will be\nadded to the token's `aud` parameter. JWT\/OIDC libraries will usually require this\nvalue. The parameter may be set by the operator to a chosen value, or a\nVault-generated value will be used if left unconfigured.\n\nA role's `key` parameter links a role to an existing named key (multiple roles\nmay refer to the same key). It is not possible to generate an unsigned ID token.\n\nA named key is a public\/private key pair generated by Vault. 
The private key is\nused to sign the identity tokens, and the public key is used by clients to\nverify the signature. Keys are regularly rotated, whereby a new key pair is\ngenerated and the previous _public_ key is retained for a limited time for\nverification purposes.\n\nA named key's configuration specifies a rotation period, a verification ttl,\nsigning algorithm and allowed client IDs. Rotation period specifies the\nfrequency at which a new signing key is generated and the private portion of the\nprevious signing key is deleted. Verification ttl is the time a public key is\nretained for verification after being rotated. By default, keys are rotated\nevery 24 hours, and continue to be available for verification for 24 hours after\ntheir rotation.\n\nA key's list of allowed client IDs limits which roles may reference the key. The\nparameter may be set to `*` to allow all roles. The validity evaluation is made\nwhen a token is requested, not during configuration.\n\n### Token contents and templates\n\nIdentity tokens will always contain, at a minimum, the claims required by OIDC:\n\n- `iss` - Issuer URL\n- `sub` - Requester's entity ID\n- `aud` - `client_id` for the role\n- `iat` - Time of issue\n- `exp` - Expiration time for the token\n\nIn addition, the operator may configure per-role templates that allow a variety\nof other entity information to be added to the token. The templates are\nstructured as JSON with replaceable parameters. 
The parameter syntax is the same\nas that used for [ACL Path Templating](\/vault\/docs\/concepts\/policies).\n\nFor example:\n\n```jsx\n{\n  \"color\": {{identity.entity.metadata.color}},\n  \"userinfo\": {\n     \"username\": {{identity.entity.aliases.usermap_123.name}},\n     \"groups\": {{identity.entity.groups.names}}\n  },\n  \"nbf\": {{time.now}}\n}\n```\n\nWhen a token is requested, the resulting template might be populated as:\n\n```json\n{\n  \"color\": \"green\",\n  \"userinfo\": {\n     \"username\": \"bob\",\n     \"groups\": [\"web\", \"engr\", \"default\"]\n  },\n  \"nbf\": 1561411915\n}\n```\n\nwhich would be merged with the base OIDC claims into the final token:\n\n```json\n{\n  \"iss\": \"https:\/\/10.1.1.45:8200\/v1\/identity\/oidc\",\n  \"sub\": \"a2cd63d3-5364-406f-980e-8d71bb0692f5\",\n  \"aud\": \"SxSouteCYPBoaTFy94hFghmekos\",\n  \"iat\": 1561411915,\n  \"exp\": 1561412215,\n  \"color\": \"green\",\n  \"userinfo\": {\n    \"username\": \"bob\",\n    \"groups\": [\"web\", \"engr\", \"default\"]\n  },\n  \"nbf\": 1561411915\n}\n```\n\nNote how the template is merged, with top level template keys becoming top level\ntoken keys. 
For this reason, templates may not contain top level keys that\noverwrite the standard OIDC claims.\n\nTemplate parameters that are not present for an entity, such as a metadata that\nisn't present, or an alias accessor which doesn't exist, are simply empty\nstrings or objects, depending on the data type.\n\nTemplates are configured on the role and may be optionally encoded as base64.\n\nThe full list of template parameters is shown below:\n\n| Name                                                                             | Description                                                                             |\n| :------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------- |\n| `identity.entity.id`                                                             | The entity's ID                                                                         |\n| `identity.entity.name`                                                           | The entity's name                                                                       |\n| `identity.entity.groups.ids`                                                     | The IDs of the groups the entity is a member of                                         |\n| `identity.entity.groups.names`                                                   | The names of the groups the entity is a member of                                       |\n| `identity.entity.metadata`                                                       | Metadata associated with the entity                                                     |\n| `identity.entity.metadata.<metadata key>`                                        | Metadata associated with the entity for the given key                                   |\n| `identity.entity.aliases.<mount accessor>.id`                                    | Entity alias ID for the given mount                         
                            |\n| `identity.entity.aliases.<mount accessor>.name`                                  | Entity alias name for the given mount                                                   |\n| `identity.entity.aliases.<mount accessor>.metadata`                              | Metadata associated with the alias for the given mount                                  |\n| `identity.entity.aliases.<mount accessor>.metadata.<metadata key>`               | Metadata associated with the alias for the given mount and metadata key                 |\n| `identity.entity.aliases.<mount accessor>.custom_metadata`                       | Custom metadata associated with the alias for the given mount                           |\n| `identity.entity.aliases.<mount accessor>.custom_metadata.<custom_metadata key>` | Custom metadata associated with the alias for the given mount and custom metadata key   |\n| `time.now`                                                                       | Current time as integral seconds since the Epoch                                        |\n| `time.now.plus.<duration>`                                                       | Current time plus a [duration format string](\/vault\/docs\/concepts\/duration-format)                 |\n| `time.now.minus.<duration>`                                                      | Current time minus a [duration format string](\/vault\/docs\/concepts\/duration-format)                |\n\n### Token generation\n\nAn authenticated client may request a token using the [token generation\nendpoint](\/vault\/api-docs\/secret\/identity\/tokens#generate-a-signed-id-token). The token\nwill be generated per the requested role's specifications, for the requester's\nentity. 
It is not possible to generate tokens for an arbitrary entity.\n\n### Verifying authenticity of ID tokens generated by Vault\n\nAn identity token may be verified by the client party using the public keys\npublished by Vault, or via a Vault-provided introspection endpoint.\n\nVault will serve standard \"[.well-known](https:\/\/tools.ietf.org\/html\/rfc5785)\"\nendpoints that allow easy integration with OIDC verification libraries.\nConfiguring the libraries will typically involve providing an issuer URL and\nclient ID. The library will then handle key requests and can validate the\nsignature and claims requirements on tokens. This approach has the advantage of\nonly requiring _access_ to Vault, not _authorization_, as the .well-known\nendpoints are unauthenticated.\n\nAlternatively, the token may be sent to Vault for verification via an\n[introspection endpoint](\/vault\/api-docs\/secret\/identity\/tokens#introspect-a-signed-id-token).\nThe response will indicate whether the token is \"active\" or not, as well as any\nerrors that occurred during validation. Beyond simply allowing the client to\ndelegate verification to Vault, using this endpoint incorporates the additional\ncheck of whether the entity is still active or not, which is something that\ncannot be determined from the token alone. Unlike the .well-known endpoint, accessing the\nintrospection endpoint does require a valid Vault token and sufficient\nauthorization.\n\n### Issuer considerations\n\nThe identity token system has one configurable parameter: issuer. The issuer\n`iss` claim is particularly important for proper validation of the token by\nclients, and special consideration should be given when using Identity Tokens\nwith [performance replication](\/vault\/docs\/enterprise\/replication).\nConsumers of the token will request public keys from Vault using the issuer URL,\nso it must be network reachable. 
Furthermore, the returned set of keys will include\nan issuer that must match the request.\n\nBy default Vault will set the issuer to the Vault instance's\n[`api_addr`](\/vault\/docs\/configuration#api_addr). This means that tokens\nissued in a given cluster should be validated within that same cluster.\nAlternatively, the [`issuer`](\/vault\/api-docs\/secret\/identity\/tokens#issuer) parameter\nmay be configured explicitly. This address must point to the identity\/oidc path\nfor the Vault instance (e.g.\n`https:\/\/vault-1.example.com:8200\/v1\/identity\/oidc`) and should be\nreachable by any client trying to validate identity tokens.\n\n## API\n\nThe Identity secrets engine has a full HTTP API. Please see the\n[Identity secrets engine API](\/vault\/api-docs\/secret\/identity) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title  Identity Tokens description  Details and best practices for identity tokens         Identity tokens     Introduction  Identity information is used throughout Vault  but it can also be exported for use by other applications  An authorized user application can request a token that encapsulates identity information for their associated entity  These tokens are signed JWTs following the  OIDC ID token  https   openid net specs openid connect core 1 0 html IDToken  structure  The public keys used to authenticate the tokens are published by Vault on an unauthenticated endpoint following OIDC discovery and JWKS conventions  which should be directly usable by JWT OIDC libraries  An introspection endpoint is also provided by Vault for token verification       Roles and keys  OIDC compliant ID tokens are generated against a role which allows configuration of token claims via a templating system  token ttl  and a way to specify which  key  will be used to sign the token  The role template is an optional parameter to customize the token contents and is described in the next section  Token TTL controls the expiration time 
of the token  after which verification libraries will consider the token invalid  All roles have an associated  client id  that will be added to the token s  aud  parameter  JWT OIDC libraries will usually require this value  The parameter may be set by the operator to a chosen value  or a Vault generated value will be used if left unconfigured   A role s  key  parameter links a role to an existing named key  multiple roles may refer to the same key   It is not possible to generate an unsigned ID token   A named key is a public private key pair generated by Vault  The private key is used to sign the identity tokens  and the public key is used by clients to verify the signature  Keys are regularly rotated  whereby a new key pair is generated and the previous  public  key is retained for a limited time for verification purposes   A named key s configuration specifies a rotation period  a verification ttl  signing algorithm and allowed client IDs  Rotation period specifies the frequency at which a new signing key is generated and the private portion of the previous signing key is deleted  Verification ttl is the time a public key is retained for verification after being rotated  By default  keys are rotated every 24 hours  and continue to be available for verification for 24 hours after their rotation   A key s list of allowed client IDs limits which roles may reference the key  The parameter may be set to     to allow all roles  The validity evaluation is made when a token is requested  not during configuration       Token contents and templates  Identity tokens will always contain  at a minimum  the claims required by OIDC      iss    Issuer URL    sub    Requester s entity ID    aud     client id  for the role    iat    Time of issue    exp    Expiration time for the token  In addition  the operator may configure per role templates that allow a variety of other entity information to be added to the token  The templates are structured as JSON with replaceable 
parameters. The parameter syntax is the same as that used for [ACL Path Templating](/vault/docs/concepts/policies).

For example:

```json
{
  "color": {{identity.entity.metadata.color}},
  "userinfo": {
    "username": {{identity.entity.aliases.usermap_123.name}},
    "groups": {{identity.entity.groups.names}}
  },
  "nbf": {{time.now}}
}
```

When a token is requested, the resulting template might be populated as:

```json
{
  "color": "green",
  "userinfo": {
    "username": "bob",
    "groups": ["web", "engr", "default"]
  },
  "nbf": 1561411915
}
```

which would be merged with the base OIDC claims into the final token:

```json
{
  "iss": "https://10.1.1.45:8200/v1/identity/oidc",
  "sub": "a2cd63d3-5364-406f-980e-8d71bb0692f5",
  "aud": "SxSouteCYPBoaTFy94hFghmekos",
  "iat": 1561411915,
  "exp": 1561412215,
  "color": "green",
  "userinfo": {
    "username": "bob",
    "groups": ["web", "engr", "default"]
  },
  "nbf": 1561411915
}
```

Note how the template is merged, with top-level template keys becoming top-level token keys. For this reason, templates may not contain top-level keys that overwrite the standard OIDC claims.

Template parameters that are not present for an entity (such as metadata that isn't present, or an alias accessor which doesn't exist) are simply empty strings or objects, depending on the data type.

Templates are configured on the role and may be optionally encoded as base64.

The full list of template parameters is shown below:

| Name                                                                     | Description                                                                           |
| ------------------------------------------------------------------------ | ------------------------------------------------------------------------------------- |
| `identity.entity.id`                                                     | The entity's ID                                                                       |
| `identity.entity.name`                                                   | The entity's name                                                                     |
| `identity.entity.groups.ids`                                             | The IDs of the groups the entity is a member of                                       |
| `identity.entity.groups.names`                                           | The names of the groups the entity is a member of                                     |
| `identity.entity.metadata`                                               | Metadata associated with the entity                                                   |
| `identity.entity.metadata.<metadata key>`                                | Metadata associated with the entity for the given key                                 |
| `identity.entity.aliases.<mount accessor>.id`                            | Entity alias ID for the given mount                                                   |
| `identity.entity.aliases.<mount accessor>.name`                          | Entity alias name for the given mount                                                 |
| `identity.entity.aliases.<mount accessor>.metadata`                      | Metadata associated with the alias for the given mount                                |
| `identity.entity.aliases.<mount accessor>.metadata.<metadata key>`       | Metadata associated with the alias for the given mount and metadata key               |
| `identity.entity.aliases.<mount accessor>.custom_metadata`               | Custom metadata associated with the alias for the given mount                         |
| `identity.entity.aliases.<mount accessor>.custom_metadata.<custom_metadata key>` | Custom metadata associated with the alias for the given mount and custom metadata key |
| `time.now`                                                               | Current time as integral seconds since the Epoch                                      |
| `time.now.plus.<duration>`                                               | Current time plus a [duration format string](/vault/docs/concepts/duration-format)    |
| `time.now.minus.<duration>`                                              | Current time minus a [duration format string](/vault/docs/concepts/duration-format)   |

### Token generation

An authenticated client may request a token using the [token generation endpoint](/vault/api-docs/secret/identity/tokens#generate-a-signed-id-token). The token will be generated per the requested role's specifications, for the requester's entity. It is not possible to generate tokens for an arbitrary entity.

### Verifying authenticity of ID tokens generated by Vault

An identity token may be verified by the client party using the public keys published by Vault, or via a Vault-provided introspection endpoint.

Vault will serve standard [.well-known](https://tools.ietf.org/html/rfc5785) endpoints that allow easy integration with OIDC verification libraries. Configuring the libraries will typically involve providing an issuer URL and client ID. The library will then handle key requests and can validate the signature and claims requirements on tokens. This approach has the advantage of only requiring _access_ to Vault, not _authorization_, as the .well-known endpoints are unauthenticated.

Alternatively, the token may be sent to Vault for verification via an [introspection endpoint](/vault/api-docs/secret/identity/tokens#introspect-a-signed-id-token). The response will indicate whether the token is "active" or not, as well as any errors that occurred during validation. Beyond simply allowing the client to delegate verification to Vault, using this endpoint incorporates the additional check of whether the entity is still active or not, which is something that cannot be determined from the token alone. Unlike the .well-known endpoint, accessing the introspection endpoint does require a valid Vault token and sufficient authorization.

### Issuer considerations

The identity token system has one configurable parameter: issuer. The issuer `iss` claim is particularly important for proper validation of the token by clients, and special consideration should be given when using identity tokens with [performance replication](/vault/docs/enterprise/replication). Consumers of the token will request public keys from Vault using the issuer URL, so it must be network reachable. Furthermore, the returned set of keys will include an issuer that must match the request.

By default, Vault will set the issuer to the Vault instance's [`api_addr`](/vault/docs/configuration#api_addr). This means that tokens issued in a given cluster should be validated within that same cluster. Alternatively, the [`issuer`](/vault/api-docs/secret/identity/tokens#issuer) parameter may be configured explicitly. This address must point to the identity/oidc path for the Vault instance (e.g. `https://vault-1.example.com:8200/v1/identity/oidc`) and should be reachable by any client trying to validate identity tokens.

## API

The Identity secrets engine has a full HTTP API. Please see the [Identity secrets engine API](/vault/api-docs/secret/identity) for more details.
"}
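Because the .well-known endpoints are unauthenticated, most consumers hand validation to an OIDC library. To illustrate what a Vault-issued identity token carries, here is a minimal Python sketch (stdlib only) that decodes a JWT's payload segment for inspection. The token below is a fabricated stand-in built from the merged-claims example, and no signature check is performed; real validation should use an OIDC library pointed at Vault's .well-known endpoints, or Vault's introspection endpoint.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT for inspection.

    This does NOT verify the signature; it only reveals the claims.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments are unpadded base64url; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a stand-in token whose payload matches the merged example above.
claims = {
    "iss": "https://10.1.1.45:8200/v1/identity/oidc",
    "sub": "a2cd63d3-5364-406f-980e-8d71bb0692f5",
    "color": "green",
    "nbf": 1561411915,
}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = "e30." + payload + ".unsigned"

print(decode_jwt_payload(token)["color"])  # prints: green
```

Note how the template-contributed keys (`color`, `nbf`) sit alongside the standard OIDC claims at the top level, which is why templates may not redefine those claims.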
{"questions":"vault Transit secrets engine layout docs page title Transit Secrets Engines doesn t store any secrets The transit secrets engine for Vault encrypts decrypts data in transit It","answers":"---\nlayout: docs\npage_title: Transit - Secrets Engines\ndescription: >-\n  The transit secrets engine for Vault encrypts\/decrypts data in-transit. It\n  doesn't store any secrets.\n---\n\n# Transit secrets engine\n\nThe transit secrets engine handles cryptographic functions on data in-transit.\nVault doesn't store the data sent to the secrets engine. It can also be viewed\nas \"cryptography as a service\" or \"encryption as a service\". The transit secrets\nengine can also sign and verify data; generate hashes and HMACs of data; and act\nas a source of random bytes.\n\nThe primary use case for `transit` is to encrypt data from applications while\nstill storing that encrypted data in some primary data store. This relieves the\nburden of proper encryption\/decryption from application developers and pushes\nthe burden onto the operators of Vault.\n\nKey derivation is supported, which allows the same key to be used for multiple\npurposes by deriving a new key based on a user-supplied context value. In this\nmode, convergent encryption can optionally be supported, which allows the same\ninput values to produce the same ciphertext.\n\nDatakey generation allows processes to request a high-entropy key of a given\nbit length be returned to them, encrypted with the named key. Normally this will\nalso return the key in plaintext to allow for immediate use, but this can be\ndisabled to accommodate auditing requirements.\n\n## Working set management\n\nThe Transit engine supports versioning of keys. Key versions that are earlier\nthan a key's specified `min_decryption_version` gets archived, and the rest of\nthe key versions belong to the working set. 
This is a performance consideration\nto keep key loading fast, as well as a security consideration: by disallowing\ndecryption of old versions of keys, found ciphertext corresponding to obsolete\n(but sensitive) data can not be decrypted by most users, but in an emergency\nthe `min_decryption_version` can be moved back to allow for legitimate\ndecryption.\n\nCurrently this archive is stored in a single storage entry. With some storage\nbackends, notably those using Raft or Paxos for HA capabilities, frequent\nrotation may lead to a storage entry size for the archive that is larger than\nthe storage backend can handle. For frequent rotation needs, using named keys\nthat correspond to time bounds (e.g. five-minute periods floored to the closest\nmultiple of five) may provide a good alternative, allowing for several keys to\nbe live at once and a deterministic way to decide which key to use at any given\ntime.\n\n## NIST rotation guidance\n\nPeriodic rotation of the encryption keys is recommended, even in the absence of\ncompromise. For AES-GCM keys, rotation should occur before approximately 2<sup>32<\/sup>\nencryptions have been performed by a key version, following the guidelines of NIST\npublication 800-38D. It is recommended that operators estimate the\nencryption rate of a key and use that to determine a frequency of rotation\nthat prevents the guidance limits from being reached. 
For example, if one determines\nthat the estimated rate is 40 million operations per day, then rotating a key every\nthree months is sufficient.\n\n## Key types\n\nAs of now, the transit secrets engine supports the following key types (all key\ntypes also generate separate HMAC keys):\n\n- `aes128-gcm96`: AES-GCM with a 128-bit AES key and a 96-bit nonce; supports\n  encryption, decryption, key derivation, and convergent encryption\n- `aes256-gcm96`: AES-GCM with a 256-bit AES key and a 96-bit nonce; supports\n  encryption, decryption, key derivation, and convergent encryption (default)\n- `chacha20-poly1305`: ChaCha20-Poly1305 with a 256-bit key; supports\n  encryption, decryption, key derivation, and convergent encryption\n- `ed25519`: Ed25519; supports signing, signature verification, and key\n  derivation\n- `ecdsa-p256`: ECDSA using curve P-256; supports signing and signature\n  verification\n- `ecdsa-p384`: ECDSA using curve P-384; supports signing and signature\n  verification\n- `ecdsa-p521`: ECDSA using curve P-521; supports signing and signature\n  verification\n- `rsa-2048`: 2048-bit RSA key; supports encryption, decryption, signing, and\n  signature verification\n- `rsa-3072`: 3072-bit RSA key; supports encryption, decryption, signing, and\n  signature verification\n- `rsa-4096`: 4096-bit RSA key; supports encryption, decryption, signing, and\n  signature verification\n- `hmac`: HMAC; supporting HMAC generation and verification.\n- `managed_key`: Managed key; supports a variety of operations depending on the\n  backing key management solution. See [Managed Keys](\/vault\/docs\/enterprise\/managed-keys)\n  for more information. <EnterpriseAlert inline=\"true\" \/>\n- `aes128-cmac`: CMAC with a 128-bit AES key; supporting CMAC generation and verification. <EnterpriseAlert inline=\"true\" \/>\n- `aes256-cmac`: CMAC with a 256-bit AES key; supporting CMAC generation and verification. 
<EnterpriseAlert inline=\"true\" \/>\n\n~> **Note**: In FIPS 140-2 mode, the following algorithms are not certified\nand thus should not be used: `chacha20-poly1305` and `ed25519`.\n\n~> **Note**: All key types support HMAC operations through the use of a second randomly\ngenerated key created at key creation time or rotation. The HMAC key type only\nsupports HMAC, and behaves identically to other algorithms with\nrespect to the HMAC operations but supports key import. By default,\nthe HMAC key type uses a 256-bit key.\n\nRSA operations use one of the following methods:\n\n - OAEP (encrypt, decrypt), with SHA-256 hash function and MGF,\n - PSS (sign, verify), with configurable hash function also used for MGF, and\n - PKCS#1v1.5 (sign, verify), with configurable hash function.\n\n## Convergent encryption\n\nConvergent encryption is a mode where the same set of plaintext+context always\nresults in the same ciphertext. It does this by deriving a key using a key\nderivation function but also by deterministically deriving a nonce. Because\nthese properties differ for any combination of plaintext and ciphertext over a\nkeyspace the size of 2^256, the risk of nonce reuse is near zero.\n\nThis has many practical uses. One common usage mode is to allow values to be stored\nencrypted in a database, but with limited lookup\/query support, so that rows\nwith the same value for a specific field can be returned from a query.\n\nTo accommodate for any needed upgrades to the algorithm, different versions of\nconvergent encryption have historically been supported:\n\n- Version 1 required the client to provide their own nonce, which is highly\n  flexible but if done incorrectly can be dangerous. This was only in Vault\n  0.6.1, and keys using this version cannot be upgraded.\n- Version 2 used an algorithmic approach to deriving the parameters. 
However,\n  the algorithm used was susceptible to offline plaintext-confirmation attacks,\n  which could allow attackers to brute force decryption if the plaintext size\n  was small. Keys using version 2 can be upgraded by simply performing a rotate\n  operation to a new key version; existing values can then be rewrapped against\n  the new key version and will use the version 3 algorithm.\n- Version 3 uses a different algorithm designed to be resistant to offline\n  plaintext-confirmation attacks. It is similar to AES-SIV in that it uses a\n  PRF to generate the nonce from the plaintext.\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the Transit secrets engine:\n\n   ```text\n   $ vault secrets enable transit\n   Success! Enabled the transit secrets engine at: transit\/\n   ```\n\n   By default, the secrets engine will mount at the name of the engine. To\n   enable the secrets engine at a different path, use the `-path` argument.\n\n1. Create a named encryption key:\n\n   ```text\n   $ vault write -f transit\/keys\/my-key\n   Success! Data written to: transit\/keys\/my-key\n   ```\n\n   Usually each application has its own encryption key.\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can use this secrets engine.\n\n1.  Encrypt some plaintext data using the `\/encrypt` endpoint with a named key:\n\n    **NOTE:** All plaintext data **must be base64-encoded**. The reason for this\n    requirement is that Vault does not require that the plaintext is \"text\". It\n    could be a binary file such as a PDF or image. 
The easiest safe transport\n    mechanism for this data as part of a JSON payload is to base64-encode it.\n\n    ```text\n    $ vault write transit\/encrypt\/my-key plaintext=$(echo \"my secret data\" | base64)\n\n    Key           Value\n    ---           -----\n    ciphertext    vault:v1:8SDd3WHDOjf7mq69CyCqYjBXAiQQAVZRkFM13ok481zoCmHnSeDX9vyf7w==\n    ```\n\n    The returned ciphertext starts with `vault:v1:`. The first prefix (`vault`)\n    identifies that it has been wrapped by Vault. The `v1` indicates the key\n    version 1 was used to encrypt the plaintext; therefore, when you rotate\n    keys, Vault knows which version to use for decryption. The rest is a base64\n    concatenation of the initialization vector (IV) and ciphertext.\n\n    Note that Vault does not _store_ any of this data. The caller is responsible\n    for storing the encrypted ciphertext. When the caller wants the plaintext,\n    it must provide the ciphertext back to Vault to decrypt the value.\n\n    !> Vault HTTP API imposes a maximum request size of 32MB to prevent a denial\n    of service attack. This can be tuned per [`listener`\n    block](\/vault\/docs\/configuration\/listener\/tcp) in the Vault server\n    configuration.\n\n1.  Decrypt a piece of data using the `\/decrypt` endpoint with a named key:\n\n    ```text\n    $ vault write transit\/decrypt\/my-key ciphertext=vault:v1:8SDd3WHDOjf7mq69CyCqYjBXAiQQAVZRkFM13ok481zoCmHnSeDX9vyf7w==\n\n    Key          Value\n    ---          -----\n    plaintext    bXkgc2VjcmV0IGRhdGEK\n    ```\n\n    The resulting data is base64-encoded (see the note above for details on\n    why). Decode it to get the raw plaintext:\n\n    ```text\n    $ base64 --decode <<< \"bXkgc2VjcmV0IGRhdGEK\"\n    my secret data\n    ```\n\n    It is also possible to script this decryption using some clever shell\n    scripting in one command:\n\n    ```text\n    $ vault write -field=plaintext transit\/decrypt\/my-key ciphertext=... 
| base64 --decode\n    my secret data\n    ```\n\n    Using ACLs, it is possible to restrict using the transit secrets engine such\n    that trusted operators can manage the named keys, and applications can only\n    encrypt or decrypt using the named keys they need access to.\n\n1.  Rotate the underlying encryption key. This will generate a new encryption key\n    and add it to the keyring for the named key:\n\n    ```text\n    $ vault write -f transit\/keys\/my-key\/rotate\n    Success! Data written to: transit\/keys\/my-key\/rotate\n    ```\n\n    Future encryptions will use this new key. Old data can still be decrypted\n    due to the use of a key ring.\n\n1.  Upgrade already-encrypted data to a new key. Vault will decrypt the value\n    using the appropriate key in the keyring and then encrypt the resulting\n    plaintext with the newest key in the keyring.\n\n    ```text\n    $ vault write transit\/rewrap\/my-key ciphertext=vault:v1:8SDd3WHDOjf7mq69CyCqYjBXAiQQAVZRkFM13ok481zoCmHnSeDX9vyf7w==\n\n    Key           Value\n    ---           -----\n    ciphertext    vault:v2:0VHTTBb2EyyNYHsa3XiXsvXOQSLKulH+NqS4eRZdtc2TwQCxqJ7PUipvqQ==\n    ```\n\n    This process **does not** reveal the plaintext data. As such, a Vault policy\n    could grant almost any untrusted process the ability to \"rewrap\" encrypted\n    data, since the process would not be able to get access to the plaintext\n    data.\n\n## Bring your own key (BYOK)\n\n~> **Note:** Key import functionality supports cases in which there is a need to bring\nin an existing key from an HSM or other outside system. 
It is more secure to\nhave Transit generate and manage a key within Vault.\n\n### Via the Command Line\n\nThe Vault command line tool [includes a helper](\/vault\/docs\/commands\/transit\/) to perform the steps described\nin the Manual process section below.\n\n### Via the API\n\nFirst, the wrapping key needs to be read from transit:\n\n```text\n$ vault read transit\/wrapping_key\n```\n\nThe wrapping key will be a 4096-bit RSA public key.\n\nThen the wrapping key is used to create the ciphertext input for the `import` endpoint,\nas described below. In what follows, the target key refers to the key being imported.\n\n### HSM\n\nIf the key is being imported from an HSM that supports PKCS#11, there are\ntwo possible scenarios:\n\n- If the HSM supports the CKM_RSA_AES_KEY_WRAP mechanism, that can be used to wrap the\n  target key using the wrapping key.\n\n- Otherwise, two mechanisms can be combined to wrap the target key. First, a 256-bit AES key should\n  be generated and then used to wrap the target key using the CKM_AES_KEY_WRAP_KWP mechanism.\n  Then the AES key should be wrapped under the wrapping key using the CKM_RSA_PKCS_OAEP mechanism\n  using MGF1 and either SHA-1, SHA-224, SHA-256, SHA-384, or SHA-512.\n\nThe ciphertext is constructed by appending the wrapped target key to the wrapped AES key.\n\nThe ciphertext bytes should be base64-encoded.\n\n### Manual process\n\nIf the target key is not stored in an HSM or KMS, the following steps can be used to construct\nthe ciphertext for the input of the `import` endpoint:\n\n- Generate an ephemeral 256-bit AES key.\n\n- Wrap the target key using the ephemeral AES key with AES-KWP.\n\n~> Note: When wrapping a symmetric key (such as an AES or ChaCha20 key), wrap\n   the raw bytes of the key. 
For instance, with an AES 128-bit key, this will be\n   a 16-byte array that will directly be wrapped without\n   base64 or other encodings.<br \/><br \/>When wrapping an asymmetric key\n   (such as an RSA or ECDSA key), wrap the **PKCS8** encoded format of this\n   key, in raw DER\/binary form. Do not apply PEM encoding to this blob prior\n   to encryption and do not base64 encode it.\n\n- Wrap the AES key under the Vault wrapping key using RSAES-OAEP with MGF1 and\n  either SHA-1, SHA-224, SHA-256, SHA-384, or SHA-512.\n\n- Delete the ephemeral AES key.\n\n- Append the wrapped target key to the wrapped AES key.\n\n- Base64 encode the result.\n\nFor more details about wrapping the key for import into transit, see the\n[key wrapping guide](\/vault\/docs\/secrets\/transit\/key-wrapping-guide).\n\n## Tutorial\n\nRefer to the [Encryption as a Service: Transit Secrets\nEngine](\/vault\/tutorials\/encryption-as-a-service\/eaas-transit)\ntutorial to learn how to use the transit secrets engine to handle cryptographic functions on data in-transit.\n\n## API\n\nThe Transit secrets engine has a full HTTP API. 
Please see the\n[Transit secrets engine API](\/vault\/api-docs\/secret\/transit) for more\ndetails.","site":"vault","answers_cleaned":"
  key  in raw DER binary form  Do not apply PEM encoding to this blob prior    to encryption and do not base64 encode it     Wrap the AES key under the Vault wrapping key using RSAES OAEP with MGF1 and   either SHA 1  SHA 224  SHA 256  SHA 384  or SHA 512     Delete the ephemeral AES key     Append the wrapped target key to the wrapped AES key     Base64 encode the result   For more details about wrapping the key for import into transit  see the  key wrapping guide   vault docs secrets transit key wrapping guide       Tutorial  Refer to the  Encryption as a Service  Transit Secrets Engine   vault tutorials encryption as a service eaas transit  tutorial to learn how to use the transit secrets engine to handle cryptographic functions on data in transit      API  The Transit secrets engine has a full HTTP API  Please see the  Transit secrets engine API   vault api docs secret transit  for more details "}
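The transit workflow above requires clients to base64-encode plaintext before calling `/encrypt` and to parse the `vault:vN:`-prefixed ciphertext. A minimal Python sketch of this client-side handling (hypothetical helper names; no real Vault calls are made):

```python
import base64

def encode_plaintext(data: bytes) -> str:
    # Transit only accepts base64 in the JSON payload, so arbitrary
    # binary data (PDFs, images) can be transported safely.
    return base64.b64encode(data).decode("ascii")

def decode_plaintext(b64: str) -> bytes:
    # Decrypt responses return base64 as well; decode to recover raw bytes.
    return base64.b64decode(b64)

def ciphertext_key_version(ciphertext: str) -> int:
    # Ciphertexts look like "vault:v1:<base64 of IV || ciphertext>".
    # The version selects the keyring entry used for decryption after rotations.
    prefix, version, _blob = ciphertext.split(":", 2)
    if prefix != "vault":
        raise ValueError("not a transit ciphertext")
    return int(version[1:])

print(encode_plaintext(b"my secret data\n"))  # bXkgc2VjcmV0IGRhdGEK
```

A `vault:v2:...` ciphertext returned by rewrap would yield version `2` from `ciphertext_key_version`, telling you which keyring entry produced it.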
{"questions":"vault page title Key Wrapping for Transit Key Import Transit Secrets Engines layout docs Details about wrapping keys for import into the transit secrets engine The bring your own key BYOK functionality for the transit Key wrapping for transit key import","answers":"---\nlayout: docs\npage_title: Key Wrapping for Transit Key Import - Transit - Secrets Engines\ndescription: |-\n  Details about wrapping keys for import into the transit secrets engine.\n---\n\n# Key wrapping for transit key import\n\nThe \"bring your own key\" (BYOK) functionality for the transit\nsecrets engine allows users to import keys that were generated\noutside of Vault into the transit secrets engine.\n\nThis document describes the process for wrapping an externally-generated\nkey (the target key) for import into Vault. It describes the processes\nfor importing a software-stored key using Golang and for importing a key\nthat is stored in an HSM.\n\n### Mount the secrets engine\n\n```shell-session\n$ vault secrets enable transit\nSuccess! Enabled the transit secrets engine at: transit\/\n```\n\n### Retrieve the transit wrapping key\n\n```shell-session\n$ vault read transit\/wrapping_key\n```\n\nThis returns a 4096-bit RSA key.\n\nThe steps after this depend on whether the key is stored using\na software solution or in an HSM.\n\n### Software example (Go)\n\nThis example assumes that the key is stored in software using the\nvariable name `key`. 
It demonstrates how to wrap the target key using\nGolang crypto libraries.\n\nOnce you have the wrapping key, you can parse it using the \`encoding\/pem\`\nand \`crypto\/x509\` libraries (the example code below assumes that the wrapping\nkey has been written to a variable called \`wrappingKeyString\`):\n\n\`\`\`\nkeyBlock, _ := pem.Decode([]byte(wrappingKeyString))\nparsedKey, err := x509.ParsePKIXPublicKey(keyBlock.Bytes)\nif err != nil {\n    return err\n}\nwrappingKey := parsedKey.(*rsa.PublicKey)\n\`\`\`\n\nThen generate an ephemeral AES key for wrapping the target key.\nThis example uses Golang's \`crypto\/rand\` library for generating the key:\n\n\`\`\`\nephemeralAESKey := make([]byte, 32)\n_, err := rand.Read(ephemeralAESKey)\nif err != nil {\n    return err\n}\n\`\`\`\n\n~> **NOTE**: Be sure to securely delete the ephemeral AES key once it\nhas been used!\n\nGoogle's [tink library](https:\/\/pkg.go.dev\/github.com\/google\/tink\/go@v1.6.1\/kwp\/subtle)\nprovides a function for performing the key wrap operation:\n\n\`\`\`\nwrapKWP, err := subtle.NewKWP(ephemeralAESKey)\nif err != nil {\n    return err\n}\nwrappedTargetKey, err := wrapKWP.Wrap(key)\nif err != nil {\n    return err\n}\n\`\`\`\n\nThen encrypt the ephemeral AES key using the transit wrapping key:\n\n\`\`\`\nwrappedAESKey, err := rsa.EncryptOAEP(\n    sha256.New(),\n    rand.Reader,\n    wrappingKey,\n    ephemeralAESKey,\n    []byte{},\n)\nif err != nil {\n    return err\n}\n\`\`\`\n\nNote that though this example uses SHA256, Vault also supports the use of\nSHA1, SHA384, or SHA512. The hash function that was used at this step will\nneed to be provided as a parameter when importing the key.\n\nFinally, concatenate the wrapped keys into a single byte string.\nThe leftmost 4096 bits of the string should be the wrapped AES key, and\nthe remaining bits should be the wrapped target key.
Then the resulting\nbytes should be base64-encoded.\n\n```\ncombinedCiphertext := append(wrappedAESKey, wrappedTargetKey...)\nbase64Ciphertext := base64.StdEncoding.EncodeToString(combinedCiphertext)\n```\n\nThis is the ciphertext that should be provided to Vault when importing a\nkey into the transit secrets engine.\n\n```shell-session\n$ vault write transit\/keys\/test-key\/import ciphertext=$CIPHERTEXT hash_function=SHA256 type=$KEY_TYPE\n```\n\n\n### AWS CloudHSM example\n\nThis example demonstrates how to import a key into the transit secrets engine from\nan AWS CloudHSM cluster. The process and mechanisms used will apply to importing\na key from an HSM in general, but the details will differ between HSMs.\n\nFor information on creating and communicating with an AWS CloudHSM cluster, see\nthe [Getting Started guide in the AWS CloudHSM documentation](https:\/\/docs.aws.amazon.com\/cloudhsm\/latest\/userguide\/getting-started.html).\n\nCommunication with the HSM uses AWS's `key_mgmt_util` tool. For help setting that\nup, see the [Getting Started page for key_mgmt_util](https:\/\/docs.aws.amazon.com\/cloudhsm\/latest\/userguide\/key_mgmt_util-getting-started.html).\n\nThe first step is writing the transit wrapping key to the HSM. This involves\ncreating a new RSA public key object with the key returned by transit's\n`wrapping_key` endpoint.\n\n```shell-session\n$ importPubKey -f wrapping_key.pem -l \"vault-transit-wrapping-key\"\n```\n\nThis will create the public key in the HSM with all of the necessary permissions.\nIf you're using a different tool, make sure that the usage for the wrapping key\nincludes the attribute `CKA_WRAP`.\n\nThe next step is wrapping the target key using the wrapping key. If the\nID of the target key is `1` and the wrapping key is `2`, the command looks like this:\n\n```shell-session\n$ wrapKey -noheader -k 1 -w 2 -t 3 -m 7 -out ciphertext.key\n```\n\nThe `-m 7` flag specifies the mechanism to use for the key wrapping. 
For AWS CloudHSM,\n7 corresponds to the PKCS11 mechanism \`CKM_AES_RSA_KEY_WRAP\` ([see the AWS documentation for details](https:\/\/docs.aws.amazon.com\/cloudhsm\/latest\/userguide\/key_mgmt_util-wrapKey.html)).\nThe \`-t 3\` flag specifies \`SHA256\` as the hash function. The result is written to a\nfile called \`ciphertext.key\`. The \`noheader\` flag ensures that the ciphertext does\nnot include an AWS-specific header.\n\nThe output from this is a binary file, which needs to be base64-encoded when it\nis provided to Vault.\n\n\`\`\`shell-session\n$ export CIPHERTEXT=$(base64 ciphertext.key)\n$ vault write transit\/keys\/test-key\/import ciphertext=$CIPHERTEXT hash_function=SHA256 type=$KEY_TYPE\n\`\`\`\n\nOnce the key has been imported, it can be used like any other transit key.","site":"vault"}
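The ciphertext construction described in this page (wrapped AES key first, wrapped target key appended, the whole base64-encoded) can be sketched with placeholder bytes. This is a hypothetical illustration of the byte layout only, not real RSA-OAEP or AES-KWP output:

```python
import base64
import os

RSA_WRAPPED_LEN = 512  # RSA-4096 OAEP output is always 512 bytes (4096 bits)

# Placeholder blobs standing in for the real wrapped keys.
wrapped_aes_key = os.urandom(RSA_WRAPPED_LEN)
wrapped_target_key = os.urandom(40)  # e.g. AES-KWP output for a 256-bit key

# Append the wrapped target key to the wrapped AES key, then base64-encode.
ciphertext = base64.b64encode(wrapped_aes_key + wrapped_target_key).decode("ascii")

# The receiver can split the decoded blob at the fixed 512-byte boundary:
decoded = base64.b64decode(ciphertext)
assert decoded[:RSA_WRAPPED_LEN] == wrapped_aes_key
assert decoded[RSA_WRAPPED_LEN:] == wrapped_target_key
```

Because RSA-4096 OAEP output has a fixed length, the ordering lets Vault split the two parts without any delimiter.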
{"questions":"vault management of cryptographic keys in various key management service KMS providers page title Key Management Secrets Engines Key management secrets engine The key management secrets engine provides a consistent workflow for distribution and lifecycle layout docs","answers":"---\nlayout: docs\npage_title: Key Management - Secrets Engines\ndescription: >-\n  The key management secrets engine provides a consistent workflow for distribution and lifecycle\n  management of cryptographic keys in various key management service (KMS) providers.\n---\n\n# Key management secrets engine\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nKey management secrets engine requires [Vault\nEnterprise](https:\/\/www.hashicorp.com\/products\/vault\/pricing) with the Advanced Data\nProtection (ADP) module.\n\nThe key management secrets engine provides a consistent workflow for distribution and lifecycle\nmanagement of cryptographic keys in various key management service (KMS) providers. It allows\norganizations to maintain centralized control of their keys in Vault while still taking advantage\nof cryptographic capabilities native to the KMS providers.\n\nThe secrets engine generates and owns original copies of key material. When an operator decides\nto distribute and manage the lifecycle of a key in one of the [supported KMS providers](#kms-providers),\na copy of the key material is distributed. This provides additional durability and disaster\nrecovery means for the complete lifecycle of the key in the KMS provider.\n\nKey material will always be securely transferred in accordance with the\n[key import specification](#kms-providers) of the supported KMS providers.\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the key management secrets engine:\n\n   ```shell-session\n   $ vault secrets enable keymgmt\n   Success! 
Enabled the keymgmt secrets engine at: keymgmt\/\n   ```\n\n   By default, the secrets engine will mount at the name of the engine. To enable\n   the secrets engine at a different path, use the `-path` argument.\n\n## Usage\n\nAfter the secrets engine is mounted and a user\/machine has a Vault token with\nthe proper permission, it can use this secrets engine to generate, distribute, and\nmanage the lifecycle of cryptographic keys in [supported KMS providers](#kms-providers).\n\n1. Create a named cryptographic key of a specified type:\n\n   ```shell-session\n   $ vault write -f keymgmt\/key\/example-key type=\"rsa-2048\"\n   Success! Data written to: keymgmt\/key\/example-key\n   ```\n\n   Keys created by the secrets engine are considered general-purpose until\n   they're distributed to a KMS provider.\n\n1. Configure a KMS provider:\n\n   ```shell-session\n   $ vault write keymgmt\/kms\/example-kms \\\n       provider=\"azurekeyvault\" \\\n       key_collection=\"keyvault-name\" \\\n       credentials=client_id=\"a0454cd1-e28e-405e-bc50-7477fa8a00b7\" \\\n       credentials=client_secret=\"eR%HizuCVEpAKgeaUEx\" \\\n       credentials=tenant_id=\"cd4bf224-d114-4f96-9bbc-b8f45751c43f\"\n   ```\n\n   Conceptually, a KMS provider resource represents a destination for keys to be distributed to\n   and subsequently managed in. It is configured using a generic set of parameters. The values\n   supplied to the generic set of parameters will differ depending on the specified `provider`.\n\n   This operation creates a KMS provider that represents a named Azure Key Vault instance.\n   This is accomplished by specifying the `azurekeyvault` provider along with other provider-specific\n   parameter values. For details on how to configure each supported KMS provider, see the\n   [KMS Providers](#kms-providers) section.\n\n1. 
Distribute a key to a KMS provider:\n\n   ```shell-session\n   $ vault write keymgmt\/kms\/example-kms\/key\/example-key \\\n       purpose=\"encrypt,decrypt\" \\\n       protection=\"hsm\"\n   ```\n\n   This operation distributes a **copy** of the named key to the KMS provider with a specific\n   `purpose` and `protection`. The `purpose` defines the set of cryptographic capabilities\n   that the key will have in the KMS provider. The `protection` defines where cryptographic\n   operations are performed with the key in the KMS provider. See the API documentation for a list of\n   supported [purpose](\/vault\/api-docs\/secret\/key-management#purpose) and [protection](\/vault\/api-docs\/secret\/key-management#protection)\n   values.\n\n   ~> **Note:** The amount of time it takes to distribute a key to a KMS provider is proportional to the\n   number of versions that the key has. If a timeout occurs when distributing a key to a KMS\n   provider, you may need to increase the [VAULT_CLIENT_TIMEOUT](\/vault\/docs\/commands#vault_client_timeout).\n\n1. Rotate a key:\n\n   ```shell-session\n   $ vault write -f keymgmt\/key\/example-key\/rotate\n   ```\n\n   Rotating a key creates a new key version that contains new key material. The key will be rotated\n   in both Vault and the KMS provider that the key has been distributed to. The new key version\n   will be enabled and set as the current version for cryptographic operations in the KMS provider.\n\n1. Enable or disable key versions:\n\n   ```shell-session\n   $ vault write keymgmt\/key\/example-key min_enabled_version=2\n   ```\n\n   The `min_enabled_version` of a key can be updated in order to enable or disable sequences of\n   key versions. All versions of the key less than the `min_enabled_version` will be disabled for\n   cryptographic operations in the KMS provider that the key has been distributed to. Setting a\n   `min_enabled_version` of `0` means that all key versions will be enabled.\n\n1. 
Remove a key from a KMS provider:\n\n   ```shell-session\n   $ vault delete keymgmt\/kms\/example-kms\/key\/example-key\n   ```\n\n   This operation results in the key being deleted from the KMS provider. The key will still exist\n   in the secrets engine and can be redistributed to a KMS provider at a later time.\n\n   To permanently delete the key from the secrets engine, the [delete key](\/vault\/api-docs\/secret\/key-management#delete-key)\n   API may be invoked.\n\n## Key types\n\nThe key management secrets engine supports generation of the following key types:\n\n- `aes256-gcm96` - AES-GCM with a 256-bit AES key and a 96-bit nonce (symmetric)\n- `rsa-2048` - RSA with bit size of 2048 (asymmetric)\n- `rsa-3072` - RSA with bit size of 3072 (asymmetric)\n- `rsa-4096` - RSA with bit size of 4096 (asymmetric)\n- `ecdsa-p256` - ECDSA using the P-256 elliptic curve (asymmetric)\n- `ecdsa-p384` - ECDSA using the P-384 elliptic curve (asymmetric)\n- `ecdsa-p521` - ECDSA using the P-521 elliptic curve (asymmetric)\n\n## KMS providers\n\nThe key management secrets engine supports lifecycle management of keys in the following\nKMS providers:\n\n- [Azure Key Vault](\/vault\/docs\/secrets\/key-management\/azurekeyvault)\n- [AWS KMS](\/vault\/docs\/secrets\/key-management\/awskms)\n- [GCP Cloud KMS](\/vault\/docs\/secrets\/key-management\/gcpkms)\n\nRefer to the provider-specific documentation for details on how to properly configure each provider.\n\n## Compatibility\n\nThe following table defines which key types are compatible with each KMS provider.\n\n| Key Type       | Azure Key Vault | AWS KMS | GCP Cloud KMS |\n| -------------- | --------------- | ------- | ------------- |\n| `aes256-gcm96` | No              | **Yes** | **Yes**       |\n| `rsa-2048`     | **Yes**         | No      | **Yes**       |\n| `rsa-3072`     | **Yes**         | No      | **Yes**       |\n| `rsa-4096`     | **Yes**         | No      | **Yes**       |\n| `ecdsa-p256`   | No              | No    
  | **Yes**       |\n| \`ecdsa-p384\`   | No              | No      | **Yes**       |\n| \`ecdsa-p521\`   | No              | No      | No            |\n\n## API\n\nThe key management secrets engine has a full HTTP API. Please see the\n[Key Management Secrets Engine API](\/vault\/api-docs\/secret\/key-management) for more\ndetails.","site":"vault"}
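The key-type/provider compatibility table in this page can be captured as a lookup before distributing a key. A small sketch (the table's display names are used as dictionary keys, and `is_compatible` is a hypothetical helper, not part of Vault):

```python
# Transcription of the compatibility table: key type -> provider -> supported.
COMPATIBILITY = {
    "aes256-gcm96": {"Azure Key Vault": False, "AWS KMS": True,  "GCP Cloud KMS": True},
    "rsa-2048":     {"Azure Key Vault": True,  "AWS KMS": False, "GCP Cloud KMS": True},
    "rsa-3072":     {"Azure Key Vault": True,  "AWS KMS": False, "GCP Cloud KMS": True},
    "rsa-4096":     {"Azure Key Vault": True,  "AWS KMS": False, "GCP Cloud KMS": True},
    "ecdsa-p256":   {"Azure Key Vault": False, "AWS KMS": False, "GCP Cloud KMS": True},
    "ecdsa-p384":   {"Azure Key Vault": False, "AWS KMS": False, "GCP Cloud KMS": True},
    "ecdsa-p521":   {"Azure Key Vault": False, "AWS KMS": False, "GCP Cloud KMS": False},
}

def is_compatible(key_type: str, provider: str) -> bool:
    # Check the table before attempting to distribute a key to a provider.
    return COMPATIBILITY[key_type][provider]

print(is_compatible("rsa-2048", "Azure Key Vault"))  # True
```

Validating up front avoids a failed distribution call for combinations the provider rejects, such as `aes256-gcm96` with Azure Key Vault.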
{"questions":"vault key management secrets engine using azurekeyvault provider Configure the key management secrets engine and distribute the Vault managed keys to the target Azure Key Vault instance To manage the lifecycle of the Azure Key Vault keys you need to setup the page title Azure Key Vault setup guide layout docs Setup guide Azure Key Vault","answers":"---\nlayout: docs\npage_title: Azure Key Vault setup guide\ndescription: Configure the key management secrets engine, and distribute the Vault-managed keys to the target Azure Key Vault instance.\n---\n\n# Setup guide - Azure Key Vault\n\nTo manage the lifecycle of the Azure Key Vault keys, you need to set up the\nkey management secrets engine using the \`azurekeyvault\` provider.\n\n## Setup\n\n1. Enable the key management secrets engine.\n\n    \`\`\`shell-session\n    $ vault secrets enable keymgmt\n    Success! Enabled the keymgmt secrets engine at: keymgmt\/\n    \`\`\`\n\n1. Configure a KMS provider resource named \`example-kms\`.\n\n    \`\`\`shell-session\n    $ vault write keymgmt\/kms\/example-kms \\\n        provider=\"azurekeyvault\" \\\n        key_collection=\"keyvault-name\" \\\n        credentials=client_id=\"a0454cd1-e28e-405e-bc50-7477fa8a00b7\" \\\n        credentials=client_secret=\"eR%HizuCVEpAKgeaUEx\" \\\n        credentials=tenant_id=\"cd4bf224-d114-4f96-9bbc-b8f45751c43f\"\n    \`\`\`\n\n    The command specified the following:\n\n    - The full path to this KMS provider instance in Vault\n      (\`keymgmt\/kms\/example-kms\`).\n    - A key collection, which corresponds to the name of the key vault instance\n      in Azure.\n    - The KMS provider is set to \`azurekeyvault\`.\n    - Azure client ID credential, which can also be specified with the\n      \`AZURE_CLIENT_ID\` environment variable.\n    - Azure client secret credential, which can also be specified with the\n      \`AZURE_CLIENT_SECRET\` environment variable.\n    - Azure tenant ID credential, which can also be specified with the\n      \`AZURE_TENANT_ID\` 
environment variable.\n\n<Tip title=\"API documentation\">\n\nRefer to the Azure Key Vault [API\ndocumentation](\/vault\/api-docs\/secret\/key-management\/azurekeyvault) for a\ndetailed description of individual configuration parameters.\n\n<\/Tip>\n\n## Usage \n\n1. Write a pair of RSA-2048 keys to the secrets engine. The following command\n   will write a new key of type **rsa-2048** to the path `keymgmt\/key\/rsa-1`. \n\n    ```shell-session\n    $ vault write keymgmt\/key\/rsa-1 type=\"rsa-2048\"\n    Success! Data written to: keymgmt\/key\/rsa-1\n    ```\n\n    The key management secrets engine currently supports generation of the key\n    types specified in [Key\n    Types](\/vault\/docs\/secrets\/key-management#key-types).\n\n    Based on the\n    [Compatibility](\/vault\/docs\/secrets\/key-management#compatibility) section of\n    the documentation, Azure Key Vault currently supports use of RSA-2048,\n    RSA-3072, and RSA-4096 key types.\n\n1. Read the **rsa-1** key you created. Use JSON as the output and pipe that into\n   `jq`.\n\n    ```shell-session\n    $ vault read -format=json keymgmt\/key\/rsa-1 | jq\n    ```\n\n1. To use the keys you wrote, you must distribute them to the key vault.\n   Distribute the **rsa-1** key to the key vault at the path\n   `keymgmt\/kms\/example-kms\/key\/rsa-1`.\n\n    ```shell-session\n    $ vault write keymgmt\/kms\/example-kms\/key\/rsa-1 \\\n        purpose=\"encrypt,decrypt\" \\\n        protection=\"hsm\"\n    ```\n\n    Here you are instructing Vault to distribute the key and specify that its\n    purpose is only to encrypt and decrypt. The protection type is dependent on\n    the cloud provider and the value is either `hsm` or `software`. In the case\n    of Azure, you specify `hsm` for the protection type. 
The key will be\n    securely delivered to the key vault instance according to the [Azure Bring\n    Your Own\n    Key](https:\/\/docs.microsoft.com\/en-us\/azure\/key-vault\/keys\/byok-specification)\n    (BYOK) specification.\n\n1. You can list the keys that have been distributed to the Azure Key Vault instance.\n\n    ```shell-session\n    $ vault list keymgmt\/kms\/keyvault\/key\/\n    Keys\n    ----\n    rsa-1\n    ```\n\n1. Rotate the rsa-1 key.\n\n    ```shell-session\n    $ vault write -f keymgmt\/key\/rsa-1\/rotate\n    Success! Data written to: keymgmt\/key\/rsa-1\/rotate\n    ```\n\n1. Confirm successful key rotation by reading the key, and getting the value of\n   `.data.latest_version`.\n\n    ```shell-session\n    $  vault read -format=json keymgmt\/key\/rsa-1 | jq '.data.latest_version'\n    2\n    ```\n\n## Azure private link\n\nThe secrets engine can be configured to communicate with Azure Key Vault\ninstances using [Azure Private\nEndpoints](https:\/\/docs.microsoft.com\/en-us\/azure\/private-link\/private-endpoint-overview).\nFollow the guide at [Integrate Key Vault with Azure Private\nLink](https:\/\/docs.microsoft.com\/en-us\/azure\/key-vault\/general\/private-link-service?tabs=portal)\nto set up a Private Endpoint for your target Key Vault instance in Azure. The\nPrivate Endpoint must be network reachable by Vault. This means Vault needs to\nbe running in the same virtual network or a peered virtual network to properly\nresolve the Key Vault domain name to the Private Endpoint IP address.\n\nThe Private Endpoint configuration relies on a correct [Azure Private\nDNS](https:\/\/docs.microsoft.com\/en-us\/azure\/dns\/private-dns-overview)\nintegration. 
From the host that Vault is running on, follow the steps in\n[Validate that the private link connection\nworks](https:\/\/docs.microsoft.com\/en-us\/azure\/key-vault\/general\/private-link-service?tabs=portal#validate-that-the-private-link-connection-works)\nto ensure that the Key Vault domain name resolves to the Private Endpoint IP\naddress you've configured.\n\n```shell-session\n$ nslookup <keyvault-name>.vault.azure.net\n\nNon-authoritative answer:\nName:\nAddress:  10.0.2.5 (private IP address)\nAliases:  <keyvault-name>.vault.azure.net\n          <keyvault-name>.privatelink.vaultcore.azure.net\n```\n\nThe secrets engine doesn't require special configuration to communicate with a\nKey Vault instance over an Azure Private Endpoint. For example, the given [KMS\nconfiguration](\/vault\/docs\/secrets\/key-management\/azurekeyvault#configuration)\nwill result in the secrets engine resolving a Key Vault domain name of\n`keyvault-name.vault.azure.net` to the Private Endpoint IP address. Note that\nit's possible to change the Key Vault DNS suffix using the\n[environment](\/vault\/api-docs\/secret\/key-management\/azurekeyvault#environment)\nconfiguration parameter or `AZURE_ENVIRONMENT` environment variable.\n\n\n## Terraform example \n\nIf you are familiar with [Terraform](\/terraform\/), you can use it to deploy the\nnecessary Azure infrastructure.\n\n```hcl\nprovider \"azuread\" {\n  version = \"=0.11.0\"\n}\n\nprovider \"azurerm\" {\n  features {\n    key_vault {\n      purge_soft_delete_on_destroy = true\n    }\n  }\n}\n\nresource \"random_id\" \"app_rg_name\" {\n  byte_length = 3\n}\n\nresource \"random_id\" \"keyvault_name\" {\n  byte_length = 3\n}\n\ndata \"azurerm_client_config\" \"current\" {}\n\nresource \"azuread_application\" \"key_vault_app\" {\n  name                       = \"app-${random_id.app_rg_name.hex}\"\n  homepage                   = \"http:\/\/homepage${random_id.app_rg_name.b64_url}\"\n  identifier_uris            = 
[\"http:\/\/uri${random_id.app_rg_name.b64_url}\"]\n  reply_urls                 = [\"http:\/\/replyur${random_id.app_rg_name.b64_url}\"]\n  available_to_other_tenants = false\n  oauth2_allow_implicit_flow = true\n}\n\nresource \"azuread_service_principal\" \"key_vault_sp\" {\n  application_id               = azuread_application.key_vault_app.application_id\n  app_role_assignment_required = false\n}\n\nresource \"random_password\" \"password\" {\n  length           = 24\n  special          = true\n  override_special = \"%@\"\n}\n\nresource \"azuread_service_principal_password\" \"key_vault_sp_pwd\" {\n  service_principal_id = azuread_service_principal.key_vault_sp.id\n  value                = random_password.password.result\n  end_date             = \"2099-01-01T01:02:03Z\"\n}\n\nresource \"azurerm_resource_group\" \"key_vault_rg\" {\n  name     = \"learn-rg-${random_id.app_rg_name.hex}\"\n  location = \"West US\"\n}\n\nresource \"azurerm_key_vault\" \"key_vault_kv\" {\n  name                = \"learn-keyvault-${random_id.keyvault_name.hex}\"\n  location            = azurerm_resource_group.key_vault_rg.location\n  resource_group_name = azurerm_resource_group.key_vault_rg.name\n  sku_name            = \"premium\"\n  soft_delete_enabled = true\n  tenant_id           = data.azurerm_client_config.current.tenant_id\n\n  access_policy {\n    tenant_id = data.azurerm_client_config.current.tenant_id\n    object_id = data.azurerm_client_config.current.object_id\n    key_permissions = [\n      \"backup\",\n      \"create\",\n      \"decrypt\",\n      \"delete\",\n      \"encrypt\",\n      \"get\",\n      \"import\",\n      \"list\",\n      \"purge\",\n      \"recover\",\n      \"restore\",\n      \"sign\",\n      \"unwrapKey\",\n      \"update\",\n      \"verify\",\n      \"wrapKey\"\n    ]\n  }\n\n  access_policy {\n    tenant_id = data.azurerm_client_config.current.tenant_id\n    object_id = azuread_service_principal.key_vault_sp.object_id\n    key_permissions = [\n      
\"create\",\n      \"delete\",\n      \"get\",\n      \"import\",\n      \"update\"\n    ]\n  }\n}\n\noutput \"key_vault_1_name\" {\n  value = azurerm_key_vault.key_vault_kv.name\n}\n\noutput \"tenant_id\" {\n  value = data.azurerm_client_config.current.tenant_id\n}\n\noutput \"client_id\" {\n  value = azuread_application.key_vault_app.application_id\n}\n\noutput \"client_secret\" {\n  value = azuread_service_principal_password.key_vault_sp_pwd.value\n}\n```","site":"vault","answers_cleaned":"    layout  docs page title  Azure Key Vault setup guide description  Configure the key management secrets engine  and distribute the Vault managed keys to the target Azure Key Vault instance         Setup guide   Azure Key Vault  To manage the lifecycle of the Azure Key Vault keys  you need to setup the key management secrets engine using  azurekeyvault  provider      Setup  1  Enable the key management secrets engine          shell session       vault secrets enable keymgmt     Success  Enabled the keymgmt secrets engine at  keymgmt           1  Configure a KMS provider resource named   example kms           shell session       vault write keymgmt kms example kms           provider  azurekeyvault            key collection  keyvault name            credentials client id  a0454cd1 e28e 405e bc50 7477fa8a00b7            credentials client secret  eR HizuCVEpAKgeaUEx            credentials tenant id  cd4bf224 d114 4f96 9bbc b8f45751c43f               The command specified the following         The full path to this KMS provider instance in Vault         keymgmt kms example kms          A key collection  which corresponds to the name of the key vault instance       in Azure        The KMS provider is set to  azurekeyvault         Azure client ID credential  that can also be specified with        AZURE CLIENT ID  environment variable        Azure client secret credential  that can also be specified with        AZURE CLIENT SECRET  environment variable        Azure tenant ID 
credential  that can also be specified with        AZURE TENANT ID  environment variable    Tip title  API documentation    Refer to the Azure Key Vault  API documentation   vault api docs secret key management azurekeyvault  for a detailed description of individual configuration parameters     Tip      Usage   1  Write a pair of RSA 2048 keys to the secrets engine  The following command    will write a new key of type   rsa 2048   to the path  keymgmt key rsa 1            shell session       vault write keymgmt key rsa 1 type  rsa 2048      Success  Data written to  keymgmt key rsa 1              The key management secrets engine currently supports generation of the key     types specified in  Key     Types   vault docs secrets key management key types        Based on the      Compatibility   vault docs secrets key management compatibility  section of     the documentation  Azure Key Vault currently supports use of RSA 2048      RSA 3072  and RSA 4096 key types   1  Read the   rsa 1   key you created  Use JSON as the output and pipe that into     jq           shell session       vault read  format json keymgmt key rsa 1   jq          1  To use the keys you wrote  you must distribute them to the key vault     Distribute the   rsa 1   key to the key vault at the path     keymgmt kms example kms key rsa 1           shell session       vault write keymgmt kms example kms key rsa 1           purpose  encrypt decrypt            protection  hsm               Here you are instructing Vault to distribute the key and specify that its     purpose is only to encrypt and decrypt  The protection type is dependent on     the cloud provider and the value is either  hsm  or  software   In the case     of Azure  you specify  hsm  for the protection type  The key will be     securely delivered to the key vault instance according to the  Azure Bring     Your Own     Key  https   docs microsoft com en us azure key vault keys byok specification       BYOK  specification   1  You can 
list the keys that have been distributed to the Azure Key Vault instance          shell session       vault list keymgmt kms keyvault key      Keys              rsa 1          1  Rotate the rsa 1 key          shell session       vault write  f keymgmt key rsa 1 rotate     Success  Data written to  keymgmt key rsa 1 rotate          1  Confirm successful key rotation by reading the key  and getting the value of      data latest version           shell session        vault read  format json keymgmt key rsa 1   jq   data latest version      2             Azure private link  The secrets engine can be configured to communicate with Azure Key Vault instances using  Azure Private Endpoints  https   docs microsoft com en us azure private link private endpoint overview   Follow the guide at  Integrate Key Vault with Azure Private Link  https   docs microsoft com en us azure key vault general private link service tabs portal  to set up a Private Endpoint for your target Key Vault instance in Azure  The Private Endpoint must be network reachable by Vault  This means Vault needs to be running in the same virtual network or a peered virtual network to properly resolve the Key Vault domain name to the Private Endpoint IP address   The Private Endpoint configuration relies on a correct  Azure Private DNS  https   docs microsoft com en us azure dns private dns overview  integration  From the host that Vault is running on  follow the steps in  Validate that the private link connection works  https   docs microsoft com en us azure key vault general private link service tabs portal validate that the private link connection works  to ensure that the Key Vault domain name resolves to the Private Endpoint IP address you ve configured      shell session   nslookup  keyvault name  vault azure net  Non authoritative answer  Name  Address   10 0 2 5  private IP address  Aliases    keyvault name  vault azure net            keyvault name  privatelink vaultcore azure net      The secrets engine 
doesn t require special configuration to communicate with a Key Vault instance over an Azure Private Endpoint  For example  the given  KMS configuration   vault docs secrets key management azurekeyvault configuration  will result in the secrets engine resolving a Key Vault domain name of  keyvault name vault azure net  to the Private Endpoint IP address  Note that it s possible to change the Key Vault DNS suffix using the  environment   vault api docs secret key management azurekeyvault environment  configuration parameter or  AZURE ENVIRONMENT  environment variable       Terraform example   If you are familiar with  Terraform   terraform    you can use it to deploy the necessary Azure infrastructure      hcl provider  azuread      version     0 11 0     provider  azurerm      features       key vault         purge soft delete on destroy   true              resource  random id   app rg name      byte length   3    resource  random id   keyvault name      byte length   3    data  azurerm client config   current      resource  azuread application   key vault app      name                          app   random id app rg name hex     homepage                      http   homepage  random id app rg name b64 url     identifier uris                http   uri  random id app rg name b64 url      reply urls                     http   replyur  random id app rg name b64 url      available to other tenants   false   oauth2 allow implicit flow   true    resource  azuread service principal   key vault sp      application id                 azuread application key vault app application id   app role assignment required   false    resource  random password   password      length             24   special            true   override special           resource  azuread service principal password   key vault sp pwd      service principal id   azuread service principal key vault sp id   value                  random password password result   end date                2099 01 01T01 02 03Z   
  resource  azurerm resource group   key vault rg      name        learn rg   random id app rg name hex     location    West US     resource  azurerm key vault   key vault kv      name                   learn keyvault   random id keyvault name hex     location              azurerm resource group key vault rg location   resource group name   azurerm resource group key vault rg name   sku name               premium    soft delete enabled   true   tenant id             data azurerm client config current tenant id    access policy       tenant id   data azurerm client config current tenant id     object id   data azurerm client config current object id     key permissions            backup          create          decrypt          delete          encrypt          get          import          list          purge          recover          restore          sign          unwrapKey          update          verify          wrapKey               access policy       tenant id   data azurerm client config current tenant id     object id   azuread service principal key vault sp object id     key permissions            create          delete          get          import          update               output  key vault 1 name      value   azurerm key vault key vault kv name    output  tenant id      value   data azurerm client config current tenant id    output  client id      value   azuread application key vault app application id    output  client secret      value   azuread service principal password key vault sp pwd value      "}
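The Azure setup guide above walks through writing a key, rotating it, and confirming that `latest_version` advances from 1 to 2. The sketch below models only those versioning semantics in memory — `ManagedKey` is a hypothetical illustration of the documented behavior, not Vault's implementation or client API.

```python
# Toy in-memory model (an illustration, not Vault code) of the key
# lifecycle shown in the guide: create a key, rotate it, check version.
from dataclasses import dataclass, field

@dataclass
class ManagedKey:
    name: str
    key_type: str
    versions: list = field(default_factory=lambda: [1])

    @property
    def latest_version(self) -> int:
        return self.versions[-1]

    def rotate(self) -> int:
        """Add a version, mirroring `vault write -f keymgmt/key/<name>/rotate`."""
        self.versions.append(self.latest_version + 1)
        return self.latest_version

key = ManagedKey(name="rsa-1", key_type="rsa-2048")
key.rotate()
print(key.latest_version)  # 2, matching the rotation example in the guide
```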
{"questions":"vault To manage the lifecycle of the GCP Cloud KMS key rings you need to setup the Setup guide GCP Cloud KMS Configure the key management secrets engine and distribute the Vault managed keys to the target GCP Cloud KMS page title GCP Cloud KMS Key Management Secrets Engines layout docs key management secrets engine using gcpckms provider","answers":"---\nlayout: docs\npage_title: GCP Cloud KMS - Key Management - Secrets Engines\ndescription: Configure the key management secrets engine, and distribute the Vault-managed keys to the target GCP Cloud KMS.\n---\n\n# Setup guide - GCP Cloud KMS \n\nTo manage the lifecycle of the GCP Cloud KMS key rings, you need to setup the\nkey management secrets engine using `gcpckms` provider.\n\n## Setup\n\n1. Enable the key management secrets engine.\n\n    ```shell-session\n    $ vault secrets enable keymgmt\n    ```\n\n1. Configure the KMS provider resource named, `example-kms`.\n\n    ```shell-session\n    $ vault write keymgmt\/kms\/example-kms \\\n        provider=\"gcpckms\" \\\n        key_collection=\"projects\/<project-id>\/locations\/<location>\/keyRings\/<keyring>\" \\\n        credentials=service_account_file=\"\/path\/to\/service_account\/credentials.json\"\n    ```\n\n    The command specified the following:\n\n    - The full path to this KMS provider instance in Vault\n      (`keymgmt\/kms\/example-kms`).\n    - The KMS provider type is set to `gcpckms`.\n    - A key collection, which refers to the [resource\n      ID](https:\/\/cloud.google.com\/kms\/docs\/resource-hierarchy#retrieve_resource_id)\n      of an existing GCP Cloud KMS key ring; this value cannot be changed after\n      creation.\n    - Credentials file to use for authentication with GCP Cloud KMS. 
Supplying\n      values for this parameter is optional, as credentials may also be\n      specified as the `GOOGLE_CREDENTIALS` environment variable or default\n      application credentials.\n\n\n<Tip title=\"API documentation\">\n\nRefer to the GCP Cloud KMS [API\ndocumentation](\/vault\/api-docs\/secret\/key-management\/gcpkms) for a detailed\ndescription of individual configuration parameters.\n\n<\/Tip>\n\n## Usage \n\n1. Write a new key of type **aes256-gcm96** to the path `keymgmt\/key\/aes256-gcm96`. \n\n    ```shell-session\n    $ vault write keymgmt\/key\/aes256-gcm96 type=\"aes256-gcm96\"\n    ```\n\n1. Read the **aes256-gcm96** key, use JSON as the output, and pipe that into `jq`.\n\n    ```shell-session\n    $ vault read -format=json keymgmt\/key\/aes256-gcm96 | jq\n    ```\n\n    **Example output:**\n\n    <CodeBlockConfig hideClipboard>\n\n    ```json\n    {\n        \"request_id\": \"631f98de-b755-9863-40db-f789ff9ff10a\",\n        \"lease_id\": \"\",\n        \"lease_duration\": 0,\n        \"renewable\": false,\n        \"data\": {\n            \"deletion_allowed\": false,\n            \"latest_version\": 1,\n            \"min_enabled_version\": 1,\n            \"name\": \"aes256-gcm96\",\n            \"type\": \"aes256-gcm96\",\n            \"versions\": {\n            \"1\": {\n                \"creation_time\": \"2021-11-16T13:07:17.878864-05:00\"\n            }\n            }\n        },\n        \"warnings\": null\n    }\n    ```\n\n    <\/CodeBlockConfig>\n\n    Notice the value of `versions`; it is **1** since this is the first version\n    of the key that Vault knows about. This will figure into the example on key\n    rotation later.\n\n1. To use the keys you wrote, you must distribute them to the Cloud KMS. 
Add the\n   **aes256-gcm96** key to the Cloud KMS at the path\n   `keymgmt\/kms\/example-kms\/key\/aes256-gcm96`.\n\n    ```shell-session\n    $ vault write keymgmt\/kms\/example-kms\/key\/aes256-gcm96 \\\n        purpose=\"encrypt,decrypt\" \\\n        protection=\"hsm\"\n    ```\n\n1. List the keys that have been distributed to the Cloud KMS instance.\n\n    ```shell-session\n    $ vault list keymgmt\/kms\/gcpckms\/key\/\n    Keys\n    ----\n    aes256-gcm96\n    ```\n\n1. Rotate the key.\n\n    ```shell-session\n    $ vault write -f keymgmt\/key\/aes256-gcm96\/rotate\n    ```\n\n1. Confirm successful key rotation by reading the key, and get the value of\n   `.data.latest_version`.\n\n    ```shell-session\n    $  vault read -format=json keymgmt\/key\/aes256-gcm96 | jq '.data.latest_version'\n    2\n    ``` \n\n    The key is now at version 2; in the Cloud Console, the key will be expected\n    to have a different version string than the original value under **Primary\n    version** as shown in the Cloud Console UI screenshot.","site":"vault","answers_cleaned":"    layout  docs page title  GCP Cloud KMS   Key Management   Secrets Engines description  Configure the key management secrets engine  and distribute the Vault managed keys to the target GCP Cloud KMS         Setup guide   GCP Cloud KMS   To manage the lifecycle of the GCP Cloud KMS key rings  you need to setup the key management secrets engine using  gcpckms  provider      Setup  1  Enable the key management secrets engine          shell session       vault secrets enable keymgmt          1  Configure the KMS provider resource named   example kms           shell session       vault write keymgmt kms example kms           provider  gcpckms            key collection  projects  project id  locations  location  keyRings  keyring             credentials service account file   path to service account credentials json               The command specified the following         The full path to this KMS provider 
instance in Vault         keymgmt kms example kms          The KMS provider type is set to  gcpckms         A key collection  which refers to the  resource       ID  https   cloud google com kms docs resource hierarchy retrieve resource id        of an existing GCP Cloud KMS key ring  this value cannot be changed after       creation        Credentials file to use for authentication with GCP Cloud KMS  Supplying       values for this parameter is optional  as credentials may also be       specified as the  GOOGLE CREDENTIALS  environment variable or default       application credentials     Tip title  API documentation    Refer to the GCP Cloud KMS  API documentation   vault api docs secret key management gcpkms  for a detailed description of individual configuration parameters     Tip      Usage   1  Write a new key of type   aes256 gcm96   to the path  keymgmt key aes256 gcm96            shell session       vault write keymgmt key aes256 gcm96 type  aes256 gcm96           1  Read the   aes256 gcm96   key  use JSON as the output  and pipe that into  jq           shell session       vault read  format json keymgmt key aes256 gcm96   jq                Example output          CodeBlockConfig hideClipboard          json                request id    631f98de b755 9863 40db f789ff9ff10a            lease id                lease duration   0           renewable   false           data                  deletion allowed   false               latest version   1               min enabled version   1               name    aes256 gcm96                type    aes256 gcm96                versions                  1                      creation time    2021 11 16T13 07 17 878864 05 00                                                  warnings   null                      CodeBlockConfig       Notice the value of  versions   it is   1   since this is the first version     of the key that Vault knows about  This will figure into the example on key     rotation later   1  To use the 
keys you wrote  you must distribute them to the Cloud KMS  Add the      aes256 gcm96   key to the Cloud KMS at the path     keymgmt kms example kms key aes256 gcm96           shell session       vault write keymgmt kms example kms key aes256 gcm96           purpose  encrypt decrypt            protection  hsm           1  List the keys that have been distributed to the Cloud KMS instance          shell session       vault list keymgmt kms gcpckms key      Keys              aes256 gcm96          1  Rotate the key          shell session       vault write  f keymgmt key aes256 gcm96 rotate          1  Confirm successful key rotation by reading the key  and get the value of      data latest version           shell session        vault read  format json keymgmt key aes256 gcm96   jq   data latest version      2               The key is now at version 2  in the Cloud Console  the key will be expected     to have a different version string than the original value under   Primary     version   as shown in the Cloud Console UI screenshot "}
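The GCP guide above pipes `vault read -format=json` into `jq '.data.latest_version'`. The snippet below shows the equivalent extraction in Python against an abbreviated copy of the example output from the guide, to make explicit which field the `jq` filter selects.

```python
import json

# Abbreviated copy of the example `vault read -format=json
# keymgmt/key/aes256-gcm96` output shown in the guide above.
raw = """
{
  "request_id": "631f98de-b755-9863-40db-f789ff9ff10a",
  "data": {
    "deletion_allowed": false,
    "latest_version": 1,
    "min_enabled_version": 1,
    "name": "aes256-gcm96",
    "type": "aes256-gcm96"
  }
}
"""
doc = json.loads(raw)
# Same selection as `jq '.data.latest_version'`.
print(doc["data"]["latest_version"])  # 1
```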
{"questions":"vault page title One Time SSH Passwords OTP SSH Secrets Engines host using a helper command on the remote host to perform verification to issue a One Time Password every time a client wants to SSH into a remote The One Time SSH Password OTP SSH secrets engine type allows a Vault server layout docs One Time SSH passwords","answers":"---\nlayout: docs\npage_title: One-Time SSH Passwords (OTP) - SSH - Secrets Engines\ndescription: |-\n  The One-Time SSH Password (OTP) SSH secrets engine type allows a Vault server\n  to issue a One-Time Password every time a client wants to SSH into a remote\n  host using a helper command on the remote host to perform verification.\n---\n\n# One-Time SSH passwords\n\nThe One-Time SSH Password (OTP) SSH secrets engine type allows a Vault server to\nissue a One-Time Password every time a client wants to SSH into a remote host\nusing a helper command on the remote host to perform verification.\n\nAn authenticated client requests credentials from the Vault server and, if\nauthorized, is issued an OTP. When the client establishes an SSH connection to\nthe desired remote host, the OTP used during SSH authentication is received by\nthe Vault helper, which then validates the OTP with the Vault server. The Vault\nserver then deletes this OTP, ensuring that it is only used once.\n\nSince the Vault server is contacted during SSH connection establishment, every\nlogin attempt and the correlating Vault lease information is logged to the audit\nsecrets engine.\n\nSee [Vault-SSH-Helper](https:\/\/github.com\/hashicorp\/vault-ssh-helper) for\ndetails on the helper.\n\nThis page will show a quick start for this secrets engine. For detailed\ndocumentation on every path, use `vault path-help` after mounting the secrets\nengine.\n\n### Drawbacks\n\nThe main concern with the OTP secrets engine type is the remote host's\nconnection to Vault; if compromised, an attacker could spoof the Vault server\nreturning a successful request. 
This risk can be mitigated by using TLS for the\nconnection to Vault and checking certificate validity; future enhancements to\nthis secrets engine may allow for extra security on top of what TLS provides.\n\n### Mount the secrets engine\n\n```shell-session\n$ vault secrets enable ssh\nSuccessfully mounted 'ssh' at 'ssh'!\n```\n\n### Create a role\n\nCreate a role with the `key_type` parameter set to `otp`. All of the machines\nrepresented by the role's CIDR list should have helper properly installed and\nconfigured.\n\n```shell-session\n$ vault write ssh\/roles\/otp_key_role \\\n    key_type=otp \\\n    default_user=username \\\n    cidr_list=x.x.x.x\/y,m.m.m.m\/n\nSuccess! Data written to: ssh\/roles\/otp_key_role\n```\n\n### Create a credential\n\nCreate an OTP credential for an IP of the remote host that belongs to\n`otp_key_role`.\n\n```shell-session\n$ vault write ssh\/creds\/otp_key_role ip=x.x.x.x\nKey            \tValue\nlease_id       \tssh\/creds\/otp_key_role\/73bbf513-9606-4bec-816c-5a2f009765a5\nlease_duration \t600\nlease_renewable\tfalse\nport           \t22\nusername       \tusername\nip             \tx.x.x.x\nkey            \t2f7e25a2-24c9-4b7b-0d35-27d5e5203a5c\nkey_type       \totp\n```\n\n### Establish an SSH session\n\n```shell-session\n$ ssh username@x.x.x.x\nPassword: <Enter OTP>\nusername@x.x.x.x:~$\n```\n\n### Automate it!\n\nA single CLI command can be used to create a new OTP and invoke SSH with the\ncorrect parameters to connect to the host.\n\n```shell-session\n$ vault ssh -role otp_key_role -mode otp username@x.x.x.x\nOTP for the session is `b4d47e1b-4879-5f4e-ce5c-7988d7986f37`\n[Note: Install `sshpass` to automate typing in OTP]\nPassword: <Enter OTP>\n```\n\nThe OTP will be entered automatically using `sshpass` if it is installed.\n\n```shell-session\n$ vault ssh -role otp_key_role -mode otp -strict-host-key-checking=no username@x.x.x.x\nusername@<IP of remote host>:~$\n```\n\nNote: `sshpass` cannot handle host key checking. 
Host key checking can be\ndisabled by setting `-strict-host-key-checking=no`.\n\n## Tutorial\n\nRefer to the [SSH Secrets Engine: One-Time SSH\nPassword](\/vault\/tutorials\/secrets-management\/ssh-otp) tutorial\nto learn how to use the Vault SSH secrets engine to secure authentication and authorization for access to machines.\n\n## API\n\nThe SSH secrets engine has a full HTTP API. Please see the\n[SSH secrets engine API](\/vault\/api-docs\/secret\/ssh) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title  One Time SSH Passwords  OTP    SSH   Secrets Engines description       The One Time SSH Password  OTP  SSH secrets engine type allows a Vault server   to issue a One Time Password every time a client wants to SSH into a remote   host using a helper command on the remote host to perform verification         One Time SSH passwords  The One Time SSH Password  OTP  SSH secrets engine type allows a Vault server to issue a One Time Password every time a client wants to SSH into a remote host using a helper command on the remote host to perform verification   An authenticated client requests credentials from the Vault server and  if authorized  is issued an OTP  When the client establishes an SSH connection to the desired remote host  the OTP used during SSH authentication is received by the Vault helper  which then validates the OTP with the Vault server  The Vault server then deletes this OTP  ensuring that it is only used once   Since the Vault server is contacted during SSH connection establishment  every login attempt and the correlating Vault lease information is logged to the audit secrets engine   See  Vault SSH Helper  https   github com hashicorp vault ssh helper  for details on the helper   This page will show a quick start for this secrets engine  For detailed documentation on every path  use  vault path help  after mounting the secrets engine       Drawbacks  The main concern with the OTP secrets engine type is the remote host 
s connection to Vault  if compromised  an attacker could spoof the Vault server returning a successful request  This risk can be mitigated by using TLS for the connection to Vault and checking certificate validity  future enhancements to this secrets engine may allow for extra security on top of what TLS provides       Mount the secrets engine     shell session   vault secrets enable ssh Successfully mounted  ssh  at  ssh            Create a role  Create a role with the  key type  parameter set to  otp   All of the machines represented by the role s CIDR list should have helper properly installed and configured      shell session   vault write ssh roles otp key role       key type otp       default user username       cidr list x x x x y m m m m n Success  Data written to  ssh roles otp key role          Create a credential  Create an OTP credential for an IP of the remote host that belongs to  otp key role       shell session   vault write ssh creds otp key role ip x x x x Key             Value lease id        ssh creds otp key role 73bbf513 9606 4bec 816c 5a2f009765a5 lease duration  600 lease renewable false port            22 username        username ip              x x x x key             2f7e25a2 24c9 4b7b 0d35 27d5e5203a5c key type        otp          Establish an SSH session     shell session   ssh username x x x x Password   Enter OTP  username x x x x             Automate it   A single CLI command can be used to create a new OTP and invoke SSH with the correct parameters to connect to the host      shell session   vault ssh  role otp key role  mode otp username x x x x OTP for the session is  b4d47e1b 4879 5f4e ce5c 7988d7986f37   Note  Install  sshpass  to automate typing in OTP  Password   Enter OTP       The OTP will be entered automatically using  sshpass  if it is installed      shell session   vault ssh  role otp key role  mode otp  strict host key checking no username x x x x username  IP of remote host          Note   sshpass  cannot handle host 
key checking  Host key checking can be disabled by setting   strict host key checking no       Tutorial  Refer to the  SSH Secrets Engine  One Time SSH Password   vault tutorials secrets management ssh otp  tutorial to learn how to use the Vault SSH secrets engine to secure authentication and authorization for access to machines      API  The SSH secrets engine has a full HTTP API  Please see the  SSH secrets engine API   vault api docs secret ssh  for more details "}
{"questions":"vault page title Signed SSH Certificates SSH Secrets Engines type an SSH CA signing key is generated or configured at the secrets engine s The signed SSH certificates is the simplest and most powerful in terms of mount layout docs setup complexity and in terms of being platform agnostic When using this","answers":"---\nlayout: docs\npage_title: Signed SSH Certificates - SSH - Secrets Engines\ndescription: >-\n  The signed SSH certificates is the simplest and most powerful in terms of\n\n  setup complexity and in terms of being platform agnostic. When using this\n\n  type, an SSH CA signing key is generated or configured at the secrets engine's\n  mount.\n\n  This key will be used to sign other SSH keys.\n---\n\n# Signed SSH certificates\n\nThe signed SSH certificates is the simplest and most powerful in terms of setup\ncomplexity and in terms of being platform agnostic. By leveraging Vault's\npowerful CA capabilities and functionality built into OpenSSH, clients can SSH\ninto target hosts using their own local SSH keys.\n\nIn this section, the term \"**client**\" refers to the person or machine\nperforming the SSH operation. The \"**host**\" refers to the target machine. If\nthis is confusing, substitute \"client\" with \"user\".\n\nThis page will show a quick start for this secrets engine. For detailed documentation\non every path, use `vault path-help` after mounting the secrets engine.\n\n## Client key signing\n\nBefore a client can request their SSH key be signed, the Vault SSH secrets engine must\nbe configured. Usually a Vault administrator or security team performs these\nsteps. It is also possible to automate these actions using a configuration\nmanagement tool like Chef, Puppet, Ansible, or Salt.\n\n### Signing key &amp; role configuration\n\nThe following steps are performed in advance by a Vault administrator, security\nteam, or configuration management tooling.\n\n1.  Mount the secrets engine. 
Like all secrets engines in Vault, the SSH secrets engine\n    must be mounted before use.\n\n    ```text\n    $ vault secrets enable -path=ssh-client-signer ssh\n    Successfully mounted 'ssh' at 'ssh-client-signer'!\n    ```\n\n    This enables the SSH secrets engine at the path \"ssh-client-signer\". It is\n    possible to mount the same secrets engine multiple times using different\n    `-path` arguments. The name \"ssh-client-signer\" is not special - it can be\n    any name, but this documentation will assume \"ssh-client-signer\".\n\n1.  Configure Vault with a CA for signing client keys using the `\/config\/ca`\n    endpoint. If you do not have an internal CA, Vault can generate a keypair for\n    you.\n\n    ```text\n    $ vault write ssh-client-signer\/config\/ca generate_signing_key=true\n    Key             Value\n    ---             -----\n    public_key      ssh-rsa AAAAB3NzaC1yc2EA...\n    ```\n\n    If you already have a keypair, specify the public and private key parts as\n    part of the payload:\n\n    ```text\n    $ vault write ssh-client-signer\/config\/ca \\\n        private_key=\"...\" \\\n        public_key=\"...\"\n    ```\n\n    Regardless of whether it is generated or uploaded, the client signer public\n    key is accessible via the API at the `\/public_key` endpoint or the CLI (see next step).\n\n1.  Add the public key to all target hosts' SSH configuration. This process can\n    be manual or automated using a configuration management tool. 
The public key is\n    accessible via the API and does not require authentication.\n\n    ```text\n    $ curl -o \/etc\/ssh\/trusted-user-ca-keys.pem http:\/\/127.0.0.1:8200\/v1\/ssh-client-signer\/public_key\n    ```\n\n    ```text\n    $ vault read -field=public_key ssh-client-signer\/config\/ca > \/etc\/ssh\/trusted-user-ca-keys.pem\n    ```\n\n    Add the path where the public key contents are stored to the SSH\n    configuration file as the `TrustedUserCAKeys` option.\n\n    ```text\n    # \/etc\/ssh\/sshd_config\n    # ...\n    TrustedUserCAKeys \/etc\/ssh\/trusted-user-ca-keys.pem\n    ```\n\n    Restart the SSH service to pick up the changes.\n\n1.  Create a named Vault role for signing client keys.\n\n    ~> **IMPORTANT NOTE:** Prior to Vault-1.9, if `\"allowed_extensions\"` is either empty or not specified in the role,\n    Vault will assume permissive defaults: any user assigned to the role may specify any arbitrary\n    extension values as part of the certificate request to the Vault server.\n    This may have significant impact on third-party systems that rely on an `extensions` field for security-critical information.\n    In those cases, consider using a template to specify default extensions, and explicitly setting\n    `\"allowed_extensions\"` to an arbitrary, non-empty string if the field is empty or not set.\n\n    Because of the way some SSH certificate features are implemented, options\n    are passed as a map. 
The following example adds the `permit-pty` extension\n    to the certificate, and allows the user to specify their own values for `permit-pty` and `permit-port-forwarding`\n    when requesting the certificate.\n\n    ```text\n    $ vault write ssh-client-signer\/roles\/my-role -<<\"EOH\"\n    {\n      \"algorithm_signer\": \"rsa-sha2-256\",\n      \"allow_user_certificates\": true,\n      \"allowed_users\": \"*\",\n      \"allowed_extensions\": \"permit-pty,permit-port-forwarding\",\n      \"default_extensions\": {\n        \"permit-pty\": \"\"\n      },\n      \"key_type\": \"ca\",\n      \"default_user\": \"ubuntu\",\n      \"ttl\": \"30m0s\"\n    }\n    EOH\n    ```\n\n### Client SSH authentication\n\nThe following steps are performed by the client (user) that wants to\nauthenticate to machines managed by Vault. These commands are usually run from\nthe client's local workstation.\n\n1.  Locate or generate the SSH public key. Usually this is `~\/.ssh\/id_rsa.pub`.\n    If you do not have an SSH keypair, generate one:\n\n    ```text\n    $ ssh-keygen -t rsa -C \"user@example.com\"\n    ```\n\n1.  Ask Vault to sign your **public key**. This file usually ends in `.pub` and\n    the contents begin with `ssh-rsa ...`.\n\n    ```text\n    $ vault write ssh-client-signer\/sign\/my-role \\\n        public_key=@$HOME\/.ssh\/id_rsa.pub\n\n    Key             Value\n    ---             -----\n    serial_number   c73f26d2340276aa\n    signed_key      ssh-rsa-cert-v01@openssh.com AAAAHHNzaC1...\n    ```\n\n    The result will include the serial and the signed key. 
This signed key is\n    another public key.\n\n    To customize the signing options, use a JSON payload:\n\n    ```text\n    $ vault write ssh-client-signer\/sign\/my-role -<<\"EOH\"\n    {\n      \"public_key\": \"ssh-rsa AAA...\",\n      \"valid_principals\": \"my-user\",\n      \"key_id\": \"custom-prefix\",\n      \"extensions\": {\n        \"permit-pty\": \"\",\n        \"permit-port-forwarding\": \"\"\n      }\n    }\n    EOH\n    ```\n\n1.  Save the resulting signed, public key to disk. Limit permissions as needed.\n\n    ```text\n    $ vault write -field=signed_key ssh-client-signer\/sign\/my-role \\\n        public_key=@$HOME\/.ssh\/id_rsa.pub > signed-cert.pub\n    ```\n\n    If you are saving the certificate directly beside your SSH keypair, suffix\n    the name with `-cert.pub` (`~\/.ssh\/id_rsa-cert.pub`). With this naming\n    scheme, OpenSSH will automatically use it during authentication.\n\n1.  (Optional) View enabled extensions, principals, and metadata of the signed\n    key.\n\n    ```text\n    $ ssh-keygen -Lf ~\/.ssh\/signed-cert.pub\n    ```\n\n1.  SSH into the host machine using the signed key. You must supply both the\n    signed public key from Vault **and** the corresponding private key as\n    authentication to the SSH call.\n\n    ```text\n    $ ssh -i signed-cert.pub -i ~\/.ssh\/id_rsa username@10.0.23.5\n    ```\n\n## Host key signing\n\nFor an added layer of security, we recommend enabling host key signing. This is\nused in conjunction with client key signing to provide an additional integrity\nlayer. When enabled, the SSH agent will verify the target host is valid and\ntrusted before attempting to SSH. This will reduce the probability of a user\naccidentally SSHing into an unmanaged or malicious machine.\n\n### Signing key configuration\n\n1.  Mount the secrets engine. 
For the most security, mount at a different path from the\n    client signer.\n\n    ```text\n    $ vault secrets enable -path=ssh-host-signer ssh\n    Successfully mounted 'ssh' at 'ssh-host-signer'!\n    ```\n\n1.  Configure Vault with a CA for signing host keys using the `\/config\/ca`\n    endpoint. If you do not have an internal CA, Vault can generate a keypair for\n    you.\n\n    ```text\n    $ vault write ssh-host-signer\/config\/ca generate_signing_key=true\n    Key             Value\n    ---             -----\n    public_key      ssh-rsa AAAAB3NzaC1yc2EA...\n    ```\n\n    If you already have a keypair, specify the public and private key parts as\n    part of the payload:\n\n    ```text\n    $ vault write ssh-host-signer\/config\/ca \\\n        private_key=\"...\" \\\n        public_key=\"...\"\n    ```\n\n    Regardless of whether it is generated or uploaded, the host signer public\n    key is accessible via the API at the `\/public_key` endpoint.\n\n1.  Extend host key certificate TTLs.\n\n    ```text\n    $ vault secrets tune -max-lease-ttl=87600h ssh-host-signer\n    ```\n\n1.  Create a role for signing host keys. Be sure to fill in the list of allowed\n    domains, set `allow_bare_domains`, or both.\n\n    ```text\n    $ vault write ssh-host-signer\/roles\/hostrole \\\n        key_type=ca \\\n        algorithm_signer=rsa-sha2-256 \\\n        ttl=87600h \\\n        allow_host_certificates=true \\\n        allowed_domains=\"localdomain,example.com\" \\\n        allow_subdomains=true\n    ```\n\n1.  Sign the host's SSH public key.\n\n    ```text\n    $ vault write ssh-host-signer\/sign\/hostrole \\\n        cert_type=host \\\n        public_key=@\/etc\/ssh\/ssh_host_rsa_key.pub\n    Key             Value\n    ---             -----\n    serial_number   3746eb17371540d9\n    signed_key      ssh-rsa-cert-v01@openssh.com AAAAHHNzaC1y...\n    ```\n\n1.  
Set the resulting signed certificate as `HostCertificate` in the SSH\n    configuration on the host machine.\n\n    ```text\n    $ vault write -field=signed_key ssh-host-signer\/sign\/hostrole \\\n        cert_type=host \\\n        public_key=@\/etc\/ssh\/ssh_host_rsa_key.pub > \/etc\/ssh\/ssh_host_rsa_key-cert.pub\n    ```\n\n    Set permissions on the certificate to be `0640`:\n\n    ```text\n    $ chmod 0640 \/etc\/ssh\/ssh_host_rsa_key-cert.pub\n    ```\n\n    Add host key and host certificate to the SSH configuration file.\n\n    ```text\n    # \/etc\/ssh\/sshd_config\n    # ...\n\n    # For client keys\n    TrustedUserCAKeys \/etc\/ssh\/trusted-user-ca-keys.pem\n\n    # For host keys\n    HostKey \/etc\/ssh\/ssh_host_rsa_key\n    HostCertificate \/etc\/ssh\/ssh_host_rsa_key-cert.pub\n    ```\n\n    Restart the SSH service to pick up the changes.\n\n### Client-side host verification\n\n1.  Retrieve the host signing CA public key to validate the host signature of\n    target machines.\n\n    ```text\n    $ curl http:\/\/127.0.0.1:8200\/v1\/ssh-host-signer\/public_key\n    ```\n\n    ```text\n    $ vault read -field=public_key ssh-host-signer\/config\/ca\n    ```\n\n1.  Add the resulting public key to the `known_hosts` file as a `@cert-authority` entry.\n\n    ```text\n    # ~\/.ssh\/known_hosts\n    @cert-authority *.example.com ssh-rsa AAAAB3NzaC1yc2EAAA...\n    ```\n\n1.  SSH into target machines as usual.\n\n## Troubleshooting\n\nWhen initially configuring this type of key signing, enable `VERBOSE` SSH\nlogging to help diagnose any errors in the log.\n\n```text\n# \/etc\/ssh\/sshd_config\n# ...\nLogLevel VERBOSE\n```\n\nRestart SSH after making these changes.\n\nBy default, SSH logs to `\/var\/log\/auth.log`, but so do many other things. 
To\nextract just the SSH logs, use the following:\n\n```shell-session\n$ tail -f \/var\/log\/auth.log | grep --line-buffered \"sshd\"\n```\n\nIf you are unable to make a connection to the host, the SSH server logs may\nprovide guidance and insights.\n\n### Name is not a listed principal\n\nIf the `auth.log` displays the following messages:\n\n```text\n# \/var\/log\/auth.log\nkey_cert_check_authority: invalid certificate\nCertificate invalid: name is not a listed principal\n```\n\nThe certificate does not permit the username as a listed principal for\nauthenticating to the system. This is most likely due to an OpenSSH bug (see\n[known issues](#known-issues) for more information). This bug does not respect\nthe `allowed_users` option value of \"\\*\". Here are ways to work around this\nissue:\n\n1.  Set `default_user` in the role. If you are always authenticating as the same\n    user, set the `default_user` in the role to the username you use to SSH into the\n    target machine:\n\n    ```text\n    $ vault write ssh\/roles\/my-role -<<\"EOH\"\n    {\n      \"default_user\": \"YOUR_USER\",\n      \/\/ ...\n    }\n    EOH\n    ```\n\n1.  Set `valid_principals` during signing. In situations where multiple users may\n    be authenticating to SSH via Vault, set the list of valid principals during key\n    signing to include the current username:\n\n    ```text\n    $ vault write ssh-client-signer\/sign\/my-role -<<\"EOH\"\n    {\n      \"valid_principals\": \"my-user\"\n      \/\/ ...\n    }\n    EOH\n    ```\n\n### No prompt after login\n\nIf you do not see a prompt after authenticating to the host machine, the signed\ncertificate may not have the `permit-pty` extension. 
There are two ways to add\nthis extension to the signed certificate.\n\n- As part of the role creation\n\n  ```text\n  $ vault write ssh-client-signer\/roles\/my-role -<<\"EOH\"\n  {\n    \"default_extensions\": {\n      \"permit-pty\": \"\"\n    }\n    \/\/ ...\n  }\n  EOH\n  ```\n\n- As part of the signing operation itself:\n\n  ```text\n  $ vault write ssh-client-signer\/sign\/my-role -<<\"EOH\"\n  {\n    \"extensions\": {\n      \"permit-pty\": \"\"\n    }\n    \/\/ ...\n  }\n  EOH\n  ```\n\n### No port forwarding\n\nIf port forwarding from the guest to the host is not working, the signed\ncertificate may not have the `permit-port-forwarding` extension. Add the\nextension as part of the role creation or signing process to enable port\nforwarding. See [no prompt after login](#no-prompt-after-login) for examples.\n\n```json\n{\n  \"default_extensions\": {\n    \"permit-port-forwarding\": \"\"\n  }\n}\n```\n\n### No X11 forwarding\n\nIf X11 forwarding from the guest to the host is not working, the signed\ncertificate may not have the `permit-X11-forwarding` extension. Add the\nextension as part of the role creation or signing process to enable X11\nforwarding. See [no prompt after login](#no-prompt-after-login) for examples.\n\n```json\n{\n  \"default_extensions\": {\n    \"permit-X11-forwarding\": \"\"\n  }\n}\n```\n\n### No agent forwarding\n\nIf agent forwarding from the guest to the host is not working, the signed\ncertificate may not have the `permit-agent-forwarding` extension. Add the\nextension as part of the role creation or signing process to enable agent\nforwarding. See [no prompt after login](#no-prompt-after-login) for examples.\n\n```json\n{\n  \"default_extensions\": {\n    \"permit-agent-forwarding\": \"\"\n  }\n}\n```\n\n### Key comments\n\nAdditional steps are needed to preserve [comment attributes](https:\/\/www.rfc-editor.org\/rfc\/rfc4716#section-3.3.2)\nin keys; consider these steps if the comments must be retained. 
Private and public \nkeys may have comments applied to them, for example when `ssh-keygen` is used \nwith its `-C` parameter:\n\n```shell-session\nssh-keygen -C \"...Comments\" -N \"\" -t rsa -b 4096 -f host-ca\n```\n\nKey values containing comments must be provided via the key-related \nparameters, as shown in the Vault CLI and API steps below. \n\n```shell-session\n# Using CLI:\nvault secrets enable -path=hosts-ca ssh\nKEY_PRI=$(cat ~\/.ssh\/id_rsa | sed -z 's\/\\n\/\\\\n\/g')\nKEY_PUB=$(cat ~\/.ssh\/id_rsa.pub | sed -z 's\/\\n\/\\\\n\/g')\n# Create \/ update keypair in Vault\nvault write hosts-ca\/config\/ca \\\n  generate_signing_key=false \\\n  private_key=\"${KEY_PRI}\" \\\n  public_key=\"${KEY_PUB}\"\n```\n\n```shell-session\n# Using API:\ncurl -X POST -H \"X-Vault-Token: ...\" -d '{\"type\":\"ssh\"}' http:\/\/127.0.0.1:8200\/v1\/sys\/mounts\/hosts-ca\nKEY_PRI=$(cat ~\/.ssh\/id_rsa | sed -z 's\/\\n\/\\\\n\/g')\nKEY_PUB=$(cat ~\/.ssh\/id_rsa.pub | sed -z 's\/\\n\/\\\\n\/g')\ntee payload.json <<EOF\n{\n  \"generate_signing_key\" : false,\n  \"private_key\"          : \"${KEY_PRI}\",\n  \"public_key\"           : \"${KEY_PUB}\"\n}\nEOF\n# Create \/ update keypair in Vault\ncurl -X POST -H \"X-Vault-Token: ...\" -d @payload.json http:\/\/127.0.0.1:8200\/v1\/hosts-ca\/config\/ca\n```\n\n~> **IMPORTANT:** Do NOT add a private key password since Vault can't decrypt it.\nDestroy the keypair and `payload.json` from your hosts immediately after they have been confirmed as successfully uploaded.\n\n### Known issues\n- On SELinux-enforcing systems, you may need to adjust related types so that the\n  SSH daemon is able to read the certificate. For example, adjust the signed host certificate\n  to be an `sshd_key_t` type.\n\n- On some versions of SSH, you may get the following error:\n\n  ```text\n  no separate private key for certificate\n  ```\n\n  This is a bug introduced in OpenSSH version 7.2 and fixed in 7.5. 
See\n  [OpenSSH bug 2617](https:\/\/bugzilla.mindrot.org\/show_bug.cgi?id=2617) for\n  details.\n\n- On some versions of SSH, you may get the following error on the target host:\n\n  ```text\n  userauth_pubkey: certificate signature algorithm ssh-rsa: signature algorithm not supported [preauth]\n  ```\n  The fix is to add the following line to `\/etc\/ssh\/sshd_config`:\n  ```text\n  CASignatureAlgorithms ^ssh-rsa\n  ```\n  The ssh-rsa algorithm is no longer supported by default as of [OpenSSH 8.2](https:\/\/www.openssh.com\/txt\/release-8.2).\n\n## API\n\nThe SSH secrets engine has a full HTTP API. Please see the\n[SSH secrets engine API](\/vault\/api-docs\/secret\/ssh) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title  Signed SSH Certificates   SSH   Secrets Engines description       The signed SSH certificates is the simplest and most powerful in terms of    setup complexity and in terms of being platform agnostic  When using this    type  an SSH CA signing key is generated or configured at the secrets engine s   mount     This key will be used to sign other SSH keys         Signed SSH certificates  The signed SSH certificates is the simplest and most powerful in terms of setup complexity and in terms of being platform agnostic  By leveraging Vault s powerful CA capabilities and functionality built into OpenSSH  clients can SSH into target hosts using their own local SSH keys   In this section  the term    client    refers to the person or machine performing the SSH operation  The    host    refers to the target machine  If this is confusing  substitute  client  with  user    This page will show a quick start for this secrets engine  For detailed documentation on every path  use  vault path help  after mounting the secrets engine      Client key signing  Before a client can request their SSH key be signed  the Vault SSH secrets engine must be configured  Usually a Vault administrator or security team performs these steps  It is also possible to automate these 
actions using a configuration management tool like Chef  Puppet  Ansible  or Salt       Signing key  amp  role configuration  The following steps are performed in advance by a Vault administrator  security team  or configuration management tooling   1   Mount the secrets engine  Like all secrets engines in Vault  the SSH secrets engine     must be mounted before use          text       vault secrets enable  path ssh client signer ssh     Successfully mounted  ssh  at  ssh client signer                This enables the SSH secrets engine at the path  ssh client signer   It is     possible to mount the same secrets engine multiple times using different       path  arguments  The name  ssh client signer  is not special   it can be     any name  but this documentation will assume  ssh client signer    1   Configure Vault with a CA for signing client keys using the   config ca      endpoint  If you do not have an internal CA  Vault can generate a keypair for     you          text       vault write ssh client signer config ca generate signing key true     Key             Value                               public key      ssh rsa AAAAB3NzaC1yc2EA                 If you already have a keypair  specify the public and private key parts as     part of the payload          text       vault write ssh client signer config ca           private key                 public key                    Regardless of whether it is generated or uploaded  the client signer public     key is accessible via the API at the   public key  endpoint or the CLI  see next step    1   Add the public key to all target host s SSH configuration  This process can     be manual or automated using a configuration management tool  The public key is     accessible via the API and does not require authentication          text       curl  o  etc ssh trusted user ca keys pem http   127 0 0 1 8200 v1 ssh client signer public key                 text       vault read  field public key ssh client signer config ca    
etc ssh trusted user ca keys pem              Add the path where the public key contents are stored to the SSH     configuration file as the  TrustedUserCAKeys  option          text        etc ssh sshd config               TrustedUserCAKeys  etc ssh trusted user ca keys pem              Restart the SSH service to pick up the changes   1   Create a named Vault role for signing client keys            IMPORTANT NOTE    Prior to Vault 1 9  if   allowed extensions   is either empty or not specified in the role      Vault will assume permissive defaults  any user assigned to the role may specify any arbitrary     extension values as part of the certificate request to the Vault server      This may have significant impact on third party systems that rely on an  extensions  field for security critical information      In those cases  consider using a template to specify default extensions  and explicitly setting       allowed extensions   to an arbitrary  non empty string if the field is empty or not set       Because of the way some SSH certificate features are implemented  options     are passed as a map  The following example adds the  permit pty  extension     to the certificate  and allows the user to specify their own values for  permit pty  and  permit port forwarding      when requesting the certificate          text       vault write ssh client signer roles my role     EOH               algorithm signer    rsa sha2 256          allow user certificates   true         allowed users               allowed extensions    permit pty permit port forwarding          default extensions              permit pty                      key type    ca          default user    ubuntu          ttl    30m0s            EOH              Client SSH authentication  The following steps are performed by the client  user  that wants to authenticate to machines managed by Vault  These commands are usually run from the client s local workstation   1   Locate or generate the SSH public key  
Usually this is     ssh id rsa pub       If you do not have an SSH keypair  generate one          text       ssh keygen  t rsa  C  user example com           1   Ask Vault to sign your   public key    This file usually ends in   pub  and     the contents begin with  ssh rsa               text       vault write ssh client signer sign my role           public key   HOME  ssh id rsa pub      Key             Value                               serial number   c73f26d2340276aa     signed key      ssh rsa cert v01 openssh com AAAAHHNzaC1                 The result will include the serial and the signed key  This signed key is     another public key       To customize the signing options  use a JSON payload          text       vault write ssh client signer sign my role     EOH               public key    ssh rsa AAA             valid principals    my user          key id    custom prefix          extensions              permit pty                permit port forwarding                        EOH          1   Save the resulting signed  public key to disk  Limit permissions as needed          text       vault write  field signed key ssh client signer sign my role           public key   HOME  ssh id rsa pub   signed cert pub              If you are saving the certificate directly beside your SSH keypair  suffix     the name with   cert pub       ssh id rsa cert pub    With this naming     scheme  OpenSSH will automatically use it during authentication   1    Optional  View enabled extensions  principals  and metadata of the signed     key          text       ssh keygen  Lf    ssh signed cert pub          1   SSH into the host machine using the signed key  You must supply both the     signed public key from Vault   and   the corresponding private key as     authentication to the SSH call          text       ssh  i signed cert pub  i    ssh id rsa username 10 0 23 5             Host key signing  For an added layer of security  we recommend enabling host key signing  This is 
used in conjunction with client key signing to provide an additional integrity layer  When enabled  the SSH agent will verify the target host is valid and trusted before attempting to SSH  This will reduce the probability of a user accidentally SSHing into an unmanaged or malicious machine       Signing key configuration  1   Mount the secrets engine  For the most security  mount at a different path from the     client signer          text       vault secrets enable  path ssh host signer ssh     Successfully mounted  ssh  at  ssh host signer            1   Configure Vault with a CA for signing host keys using the   config ca      endpoint  If you do not have an internal CA  Vault can generate a keypair for     you          text       vault write ssh host signer config ca generate signing key true     Key             Value                               public key      ssh rsa AAAAB3NzaC1yc2EA                 If you already have a keypair  specify the public and private key parts as     part of the payload          text       vault write ssh host signer config ca           private key                 public key                    Regardless of whether it is generated or uploaded  the host signer public     key is accessible via the API at the   public key  endpoint   1   Extend host key certificate TTLs          text       vault secrets tune  max lease ttl 87600h ssh host signer          1   Create a role for signing host keys  Be sure to fill in the list of allowed     domains  set  allow bare domains   or both          text       vault write ssh host signer roles hostrole           key type ca           algorithm signer rsa sha2 256           ttl 87600h           allow host certificates true           allowed domains  localdomain example com            allow subdomains true          1   Sign the host s SSH public key          text       vault write ssh host signer sign hostrole           cert type host           public key   etc ssh ssh host rsa key pub     Key     
        Value                               serial number   3746eb17371540d9     signed key      ssh rsa cert v01 openssh com AAAAHHNzaC1y             1   Set the resulting signed certificate as  HostCertificate  in the SSH     configuration on the host machine          text       vault write  field signed key ssh host signer sign hostrole           cert type host           public key   etc ssh ssh host rsa key pub    etc ssh ssh host rsa key cert pub              Set permissions on the certificate to be  0640           text       chmod 0640  etc ssh ssh host rsa key cert pub              Add host key and host certificate to the SSH configuration file          text        etc ssh sshd config                  For client keys     TrustedUserCAKeys  etc ssh trusted user ca keys pem        For host keys     HostKey  etc ssh ssh host rsa key     HostCertificate  etc ssh ssh host rsa key cert pub              Restart the SSH service to pick up the changes       Client Side host verification  1   Retrieve the host signing CA public key to validate the host signature of     target machines          text       curl http   127 0 0 1 8200 v1 ssh host signer public key                 text       vault read  field public key ssh host signer config ca          1   Add the resulting public key to the  known hosts  file with authority          text          ssh known hosts      cert authority   example com ssh rsa AAAAB3NzaC1yc2EAAA             1   SSH into target machines as usual      Troubleshooting  When initially configuring this type of key signing  enable  VERBOSE  SSH logging to help annotate any errors in the log      text    etc ssh sshd config       LogLevel VERBOSE      Restart SSH after making these changes   By default  SSH logs to   var log auth log   but so do many other things  To extract just the SSH logs  use the following      shell session   tail  f  var log auth log   grep   line buffered  sshd       If you are unable to make a connection to the host  the SSH 
server logs may provide guidance and insights       Name is not a listed principal  If the  auth log  displays the following messages      text    var log auth log key cert check authority  invalid certificate Certificate invalid  name is not a listed principal      The certificate does not permit the username as a listed principal for authenticating to the system  This is most likely due to an OpenSSH bug  see  known issues   known issues  for more information   This bug does not respect the  allowed users  option value of       Here are ways to work around this issue   1   Set  default user  in the role  If you are always authenticating as the same     user  set the  default user  in the role to the username you are SSHing into the     target machine          text       vault write ssh roles my role     EOH               default user    YOUR USER                          EOH          1   Set  valid principals  during signing  In situations where multiple users may     be authenticating to SSH via Vault  set the list of valid principals during key     signing to include the current username          text       vault write ssh client signer sign my role     EOH               valid principals    my user                         EOH              No prompt after login  If you do not see a prompt after authenticating to the host machine  the signed certificate may not have the  permit pty  extension  There are two ways to add this extension to the signed certificate     As part of the role creation       text     vault write ssh client signer roles my role     EOH           default extensions            permit pty                             EOH          As part of the signing operation itself        text     vault write ssh client signer sign my role     EOH           extensions            permit pty                             EOH            No port forwarding  If port forwarding from the guest to the host is not working  the signed certificate may not have the  
permit port forwarding  extension  Add the extension as part of the role creation or signing process to enable port forwarding  See  no prompt after login   no prompt after login  for examples      json      default extensions          permit port forwarding                     No x11 forwarding  If X11 forwarding from the guest to the host is not working  the signed certificate may not have the  permit X11 forwarding  extension  Add the extension as part of the role creation or signing process to enable X11 forwarding  See  no prompt after login   no prompt after login  for examples      json      default extensions          permit X11 forwarding                     No agent forwarding  If agent forwarding from the guest to the host is not working  the signed certificate may not have the  permit agent forwarding  extension  Add the extension as part of the role creation or signing process to enable agent forwarding  See  no prompt after login   no prompt after login  for examples      json      default extensions          permit agent forwarding                     Key comments There are additional steps needed to preserve  comment attributes  https   www rfc editor org rfc rfc4716 section 3 3 2  in keys which ought to be considered if they are required  Private and public  key may have comments applied to them and for example where  ssh keygen  is used  with its   C  parameter   similar to      shell session ssh keygen  C     Comments   N     t rsa  b 4096  f host ca      Adapted key values containing comments must be provided with the key related  parameters as per the Vault CLI and API steps demonstrated below       shell extension   Using CLI  vault secrets enable  path hosts ca ssh KEY PRI   cat    ssh id rsa   sed  z  s  n   n g   KEY PUB   cat    ssh id rsa pub   sed  z  s  n   n g     Create   update keypair in Vault vault write ssh client signer config ca     generate signing key false     private key    KEY PRI       public key    KEY PUB           shell 
extension   Using API  curl  X POST  H  X Vault Token        d    type   ssh    http   127 0 0 1 8200 v1 sys mounts hosts ca KEY PRI   cat    ssh id rsa   sed  z  s  n   n g   KEY PUB   cat    ssh id rsa pub   sed  z  s  n   n g   tee payload json   EOF      generate signing key    false     private key                KEY PRI       public key                 KEY PUB     EOF   Create   update keypair in Vault curl  X POST  H  X Vault Token        d  payload json http   127 0 0 1 8200 v1 hosts ca config ca           IMPORTANT    Do NOT add a private key password since Vault can t decrypt it  Destroy the keypair and  payload json  from your hosts immediately after they have been confirmed as successfully uploaded       Known issues   On SELinux enforcing systems  you may need to adjust related types so that the   SSH daemon is able to read it  For example  adjust the signed host certificate   to be an  sshd key t  type     On some versions of SSH  you may get the following error        text   no separate private key for certificate          This is a bug introduced in OpenSSH version 7 2 and fixed in 7 5  See    OpenSSH bug 2617  https   bugzilla mindrot org show bug cgi id 2617  for   details     On some versions of SSH  you may get the following error on target host        text   userauth pubkey  certificate signature algorithm ssh rsa  signature algorithm not supported  preauth          Fix is to add below line to  etc ssh sshd config      text   CASignatureAlgorithms  ssh rsa         The ssh rsa algorithm is no longer supported in  OpenSSH 8 2  https   www openssh com txt release 8 2      API  The SSH secrets engine has a full HTTP API  Please see the  SSH secrets engine API   vault api docs secret ssh  for more details "}
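The `sed -z 's/\n/\\n/g'` steps in the key-comments section exist only to turn multi-line key material into a valid JSON string value. A minimal Python sketch (the key material below is a hypothetical stand-in, not a real keypair) shows that a JSON serializer performs the same escaping automatically:

```python
import json

# Hypothetical stand-in for ~/.ssh/id_rsa.pub; real key files end in a
# newline (and private keys span many lines), which is why the CLI example
# rewrites "\n" with sed before splicing the value into a heredoc.
public_key = "ssh-rsa AAAAB3Nza... user@host\n"

payload = json.dumps({
    "generate_signing_key": False,
    "public_key": public_key,
})

# json.dumps emits each newline as the two-character escape \n,
# which is exactly what the sed substitution produces by hand.
assert "\\n" in payload
print(payload)
```

If `payload.json` is built this way, the manual escaping step can be dropped entirely.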
{"questions":"vault More information on the Tokenization transform Not to be confused with Vault tokens Tokenization exchanges a page title Transform Secrets Engines Tokenization layout docs Tokenization transform","answers":"---\nlayout: docs\npage_title: Transform - Secrets Engines - Tokenization\ndescription: >-\n  More information on the Tokenization transform.\n---\n\n# Tokenization transform\n\nNot to be confused with Vault tokens, Tokenization exchanges a\nsensitive value for an unrelated value called a _token_. The original sensitive\nvalue cannot be recovered from a token alone; tokens are irreversible. Unlike\nformat-preserving encryption, however, tokenization is stateful. To decode the\noriginal value, the token must be submitted to Vault, where it is\nretrieved from a cryptographic mapping in storage.\n\n## Operation\n\nOn encode, Vault generates a random, signed token and stores a mapping of a\nversion of that token to encrypted versions of the plaintext and metadata, as\nwell as a fingerprint of the original plaintext which facilitates the `tokenized`\nendpoint that lets one query whether a plaintext exists in the system.\n\nDepending on the mapping mode, the plaintext may be decoded only with possession\nof the distributed token, or may be recoverable in the export operation. See\n[Security Considerations](#security-considerations) for more.\nTokenization's cryptosystem uses AES256-GCM96 for encryption of its token\nstore, with keys derived from the token and a tokenization root key.\n\n### Convergence\n\nBy default, tokenization produces a unique token for every encode operation.\nThis makes the resulting token fully independent of its plaintext and expiration.\nSometimes, though, it may be beneficial if the tokenization of a plaintext\/expiration\npair tokenizes consistently to the same value.  
For example, if one wants to\ndo a statistical analysis of the tokens as they relate to some other field\nin a database (without decoding the token), or if one needed to tokenize\nin two different systems but be able to relate the results. In this case,\none can create a tokenization transformation that is *convergent*.\n\nWhen enabled at transformation creation time, Vault alters the calculation so that\nencoding a plaintext and expiration tokenizes to the same value every time, and\nstorage keeps only a single entry of that token. Like the exportable mapping\nmode, convergence should only be enabled if needed. Convergent tokenization\nhas a small performance penalty in external stores and a larger one in the\nbuilt-in store due to the need to avoid duplicate entries and to update\nmetadata when convergently encoding. It is recommended that if one has some\nuse cases that require convergence and some that do not, one should create two\ndifferent tokenization transforms with convergence enabled on only one.\n\n### Token lookup\n\nSome use cases may want to look up the value of a token given its plaintext. Ordinarily\nthis is contrary to the nature of tokenization, where we want to prevent the ability\nof an attacker to determine that a token corresponds to a plaintext value (a known\nplaintext attack). But for use cases that require it, the\n[token lookup](\/vault\/api-docs\/secret\/transform#token-lookup)\noperation is supported, but only in some configurations of the tokenization\ntransformation. Token lookup is supported when convergence is enabled, or\nif the mapping mode is exportable *and* the storage backend is external.\n\n## Performance considerations\n\n### Builtin (Internal) store\n\nAs tokenization is stateful, the encode operation necessarily writes values to\nstorage. By default, that storage is the Vault backend store itself. 
This\ndiffers from some secret engines in that the encode and decode operations require\na storage access per operation. Other engines use storage for configuration\nbut can process operations largely without accessing any storage.\n\nSince these operations involve writes to storage, and must therefore be performed\non primary nodes, the scalability of the encode operation is limited by the\nperformance of the primary and its storage subsystem. All other operations can be\nperformed on secondaries.\n\nFinally, due to replication, writes to the primary may take some time to reach\nsecondaries, so other read operations like decode or metadata may not succeed on\nthe secondaries until this happens. In other words, tokenization is eventually\nconsistent.\n\n### External storage\n\nAll nodes (except DRs) can participate in all operations using external storage,\nbut one must take care to monitor and scale the external storage for the level of\ntraffic experienced. The storage schema is simple, however, and well-known approaches\nshould be effective.\n\n## Security considerations\n\nThe goal of Tokenization is to let end users' devices store the token rather than\ntheir sensitive values (such as credit card numbers) and still participate in\ntransactions where the token is a stand-in for the sensitive value. For this reason,\nthe token Vault generates is completely unrelated (i.e. irreversible) to the\nsensitive value.\n\nFurthermore, the Tokenization transform is designed to resist a number of attacks\non the values produced during encode. In particular, it is designed so that\nattackers cannot recover plaintext even if they steal the tokenization values\nfrom Vault itself. 
In the default mapping mode,\neven stealing the underlying transform key does not allow them to recover\nthe plaintext without also possessing the encoded token. An attacker must have\ngotten access to all values in the construct.\n\nIn the `exportable` mapping mode, however, the plaintext values are encrypted\nin a way that can be decrypted within Vault. If the attacker possesses the\ntransform key and the tokenization mapping values, the plaintext can be\nrecovered. This mode is available for the case where operators prioritize the\nability to export all of the plaintext values in an emergency, via the\n`export-decoded` operation.\n\n### Metadata\n\nSince tokenization isn't format preserving and requires storage, one can associate\narbitrary metadata with a token. Metadata is considered less sensitive than the\noriginal plaintext value. As it has its own retrieval endpoint, operators can\nconfigure policies that may allow access to the metadata of a token but not\nits decoded value, to enable workflows that operate just on the metadata.\n\n## TTLs and tidying\n\nBy default, tokens are long lived, and the storage for them will be maintained\nindefinitely. Where there is a concept of time-to-live, it is strongly encouraged\nthat the tokens be generated with a TTL. 
For example, as credit cards\nhave an expiration date, it is recommended that tokenizing a credit card\nprimary account number (PAN) be done with a TTL that corresponds to the time\nafter which the PAN is invalid.\n\nThis allows such values to be _tidied_ and removed from storage once expired.\nTokens themselves encode the expiration time, so decode and other operations\ncan immediately reject the operation when presented with an expired token.\n\n## Storage\n\n### External SQL stores\n\nCurrently, the PostgreSQL, MySQL, and MSSQL relational databases are supported as\nexternal storage backends for tokenization.\nThe [Schema Endpoint](\/vault\/api-docs\/secret\/transform#create-update-store-schema)\nmay be used to initialize and upgrade the necessary database tables. Vault uses\na schema versioning table to determine if it needs to create or modify the\ntables when using that endpoint. If you make changes to those tables yourself,\nthe automatic schema management may become out of sync and may fail in the future.\n\nExternal stores may often be preferred due to their ability to achieve a much\nhigher scale of performance, especially when used with batch operations.\n\n### Snapshot\/Restore\n\nSnapshot allows one to iteratively retrieve the tokenization state, for\nbackup or migration purposes. The resulting data can be fed to the restore\nendpoint of the same or a different tokenization store. Note that the state\nis only usable by the tokenization transform that created it, as state is\nencrypted via keys in that configured transform.\n\n### Export decoded\n\nFor stores configured with the `exportable` mapping mode, the export decoded\nendpoint allows operators to retrieve the _decoded_ contents of tokenization\nstate, which includes tokens and their decoded, sensitive values. 
The\n`exportable` mode is only recommended if this use case is required, as the default\ncannot be decoded by attackers even if they gain access to Vault's storage and\nkeys.\n\n### Migration\n\nTokenization stores are configured separately from the tokenization transform,\nand the transform can point to multiple stores. The primary use case for this\none-to-many relationship is to facilitate migration between two tokenization\nstores.\n\nWhen multiple stores are configured, Vault writes new tokenization state to all\nconfigured stores, and reads from each store in the order they were configured.\nThus, one can use multiple configured stores along with the snapshot\/restore\nfunctionality to perform a zero-downtime migration to a new store:\n\n1. Configure the new tokenization store in the API.\n1. Modify the existing tokenization transform to use both the existing and new\n   store.\n1. Snapshot the old store.\n1. Restore the snapshot to the new store.\n1. Perform any desired validations.\n1. Modify the tokenization transform to use only the new store.\n\n## Key management\n\nTokenization supports key rotation. Keys are tied to transforms, so key\nnames are the same as the name of the corresponding tokenization transform.\nKeys can be rotated to a new version, with backward compatibility for\ndecoding. Encoding is always performed with the newest key version. Key versions\ncan be tidied as well. Keys may also be rotated automatically on a user-defined\ntime interval, specified by the `auto_rotate_period` field of the key config. 
For more\ninformation, see the [transform api docs](\/vault\/api-docs\/secret\/transform).\n\n## Tutorial\n\nRefer to [Tokenize Data with Transform Secrets\nEngine](\/vault\/tutorials\/adp\/tokenization) for a\nstep-by-step tutorial.","site":"vault"}
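The convergence trade-off described in this record can be illustrated with a small, self-contained Python sketch. This is a conceptual model only, not Vault's actual cryptographic construction: default-mode encodes are modeled as fresh random tokens, while convergent encodes derive the token deterministically from the (plaintext, expiration) pair with an HMAC:

```python
import hashlib
import hmac
import os

# Illustrative stand-in for a per-transform tokenization key.
TRANSFORM_KEY = os.urandom(32)

def encode_default(plaintext: str, expiration: str) -> str:
    """Default mode: every encode yields a fresh random token that is
    fully independent of the plaintext and expiration."""
    return os.urandom(24).hex()

def encode_convergent(plaintext: str, expiration: str) -> str:
    """Convergent mode: the token is a deterministic function of the
    (plaintext, expiration) pair, so repeated encodes collide on purpose
    and storage needs only a single entry per pair."""
    message = f"{plaintext}|{expiration}".encode()
    return hmac.new(TRANSFORM_KEY, message, hashlib.sha256).hexdigest()

pan, expiration = "4111-1111-1111-1111", "2027-01"

# Two default encodes of the same input produce unrelated tokens ...
assert encode_default(pan, expiration) != encode_default(pan, expiration)

# ... while convergent encodes of the same input always agree.
assert encode_convergent(pan, expiration) == encode_convergent(pan, expiration)
```

The sketch also suggests why token lookup is only offered for convergent (or exportable, externally stored) transforms: only a deterministic mapping lets the system recompute a token from a plaintext.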
{"questions":"vault include alerts enterprise and hcp mdx page title Transform Secrets Engines layout docs Transform secrets engine The Transform secrets engine for Vault performs secure data transformation","answers":"---\nlayout: docs\npage_title: Transform - Secrets Engines\ndescription: >-\n  The Transform secrets engine for Vault performs secure data transformation.\n---\n\n# Transform secrets engine\n\n@include 'alerts\/enterprise-and-hcp.mdx'\n\nThe Transform secrets engine requires [Vault\nEnterprise](https:\/\/www.hashicorp.com\/products\/vault\/pricing) with the Advanced Data\nProtection Transform (ADP-Transform) module.\n\nThe Transform secrets engine handles secure data transformation and tokenization\nagainst a provided input value. Transformation methods may encompass NIST-vetted\ncryptographic standards such as [format-preserving encryption\n(FPE)](https:\/\/en.wikipedia.org\/wiki\/Format-preserving_encryption) via\n[FF3-1](https:\/\/csrc.nist.gov\/publications\/detail\/sp\/800-38g\/rev-1\/draft), but\ncan also be pseudonymous transformations of the data through other means, such\nas masking.\n\nThe secret engine currently supports `fpe`, `masking`, and `tokenization` as\ndata transformation types.\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the Transform secrets engine:\n\n   ```text\n   $ vault secrets enable transform\n   Success! Enabled the transform secrets engine at: transform\/\n   ```\n\n   By default, the secrets engine will mount at the name of the engine. To enable\n   the secrets engine at a different path, use the `-path` argument.\n\n1. Create a named role:\n\n   ```text\n   $ vault write transform\/role\/payments transformations=ccn-fpe\n   Success! Data written to: transform\/role\/payments\n   ```\n\n1. 
Create a transformation:\n\n   ```text\n   $ vault write transform\/transformations\/fpe\/ccn-fpe \\\n     template=ccn \\\n     tweak_source=internal \\\n     allowed_roles=payments\n   Success! Data written to: transform\/transformations\/fpe\/ccn-fpe\n   ```\n\n1. Optionally, create a template:\n\n   ```text\n   $ vault write transform\/template\/ccn \\\n     type=regex \\\n     pattern='(\\d{4})[- ](\\d{4})[- ](\\d{4})[- ](\\d{4})' \\\n     encode_format='$1-$2-$3-$4' \\\n     decode_formats=last-four='$4' \\\n     alphabet=numerics\n   Success! Data written to: transform\/template\/ccn\n   ```\n\n1. Optionally, create an alphabet:\n\n   ```text\n   $ vault write transform\/alphabet\/numerics \\\n       alphabet=\"0123456789\"\n   Success! Data written to: transform\/alphabet\/numerics\n   ```\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can use this secrets engine to encode and decode input\nvalues.\n\n1. Encode some input value using the `\/encode` endpoint with a named role:\n\n   ```text\n   $ vault write transform\/encode\/payments value=1111-2222-3333-4444\n   Key              Value\n   ---              -----\n   encoded_value    9300-3376-4943-8903\n   ```\n\n   A transformation must be provided if the role contains more than one\n   transformation. A tweak must be provided if the tweak source for the\n   transformation is \"supplied\".\n\n1. Decode some input value using the `\/decode` endpoint with a named role:\n\n   ```text\n   $ vault write transform\/decode\/payments value=9300-3376-4943-8903\n   Key              Value\n   ---              -----\n   decoded_value    1111-2222-3333-4444\n   ```\n\n   A transformation must be provided if the role contains more than one\n   transformation. A tweak must be provided if the tweak source for the\n   transformation is \"supplied\" or \"generated\".\n\n1. 
Decode some input value using the `\/decode` endpoint with a named role and decode format:\n\n   ```text\n   $ vault write transform\/decode\/payments\/last-four value=9300-3376-4943-8903\n   Key              Value\n   ---              -----\n   decoded_value    4444\n   ```\n\n   A transformation must be provided if the role contains more than one\n   transformation. A tweak must be provided if the tweak source for the\n   transformation is \"supplied\" or \"generated\". A decode format can optionally\n   be provided. If one isn't provided, the decoded output will be formatted to\n   match the template's pattern as in the previous example.\n\n## Roles, transformations, templates, and alphabets\n\nThe Transform secrets engine contains several types of resources that\nencapsulate different aspects of the information required in order to perform\ndata transformation.\n\n- **Roles** are the basic high-level construct that holds the set of\n  transformations that it is allowed to perform. The role name is provided when\n  performing encode and decode operations.\n\n- **Transformations** hold information about a particular transformation. It\n  contains information about the type of transformation that we want to perform,\n  the template that it should use for value detection, and other\n  transformation-specific values such as the tweak source or the masking character\n  to use.\n\n- **Templates** allow us to determine what and how to capture the value that we\n  want to transform.\n\n- **Alphabets** provide the set of valid UTF-8 characters contained within both\n  the input and the transformed value in FPE transformations.\n\n## Transformations\n\n### Format preserving encryption\n\nFormat Preserving Encryption (FPE) performs cryptographically secure\ntransformation via FF3-1 to encode input values while maintaining their data\nformat and length.  
FF3-1 is a construction that uses AES-256 for\nencryption.\n\n#### Tweak and tweak source\n\nFF3-1 uses a non-confidential parameter called the tweak along with the\nciphertext when performing encryption and decryption operations. The tweak\nis precisely a 7-byte value. The secret engine consumes a base64 encoded string\nof this value for its encode and decode operations whenever this input is\nrequired.\n\nIn order to simplify the flow of encoding and decoding operations, transformation\ncreation can take care of generating and associating a tweak value. This allows\napplications to provide a single value without needing to generate or\nstore any other metadata.\n\nIn cases where more granularity is required, a tweak value can be generated by\nVault and returned, or it may be independently generated and provided.\n\nIn summary, there are three ways in which the tweak value may be sourced:\n\n- `supplied`: This is the default behavior for FPE transformations. The tweak\n  value must be generated externally, and supplied on encode and decode\n  operations.\n- `generated`: The secret engine will take care of generating the tweak value\n  on encode operations and return this back as part of the response along\n  with the encoded value. It is up to the application to store this value\n  so that it can be provided back when decoding the encoded value.\n- `internal`: The secret engine will generate an internal tweak value per\n  transformation. This value is not returned on encode or decode operations\n  since it gets re-used for all encode and decode operations for the\n  transformation. Depending on the uniqueness of the dataset, this mode may\n  introduce higher risks, but provides the most convenience since the value does\n  not need to be stored separately. 
This mode should only be used if the values\n  being encoded are sufficiently unique.\n\nYour team and organization should weigh the trade-offs when it comes to\nchoosing the proper tweak source to use. For `supplied` and `internal`\nsourcing, please see [FF3-1 Tweak Usage Details](\/vault\/docs\/secrets\/transform\/ff3-tweak-details)\n\n#### Input limits\n\nFF3-1 specifies both minimum and maximum limits on the length of an input.\nThese limits are driven by the security goals, making sure that for a given\nalphabet the input size does not leave the input guessable by brute force.\n\nGiven an alphabet of length A, an input length L is valid if:\n\n- L >= 2,\n- A<sup>L<\/sup> >= 1,000,000\n- and L <= 2 \\* floor(log<sub>A<\/sub>(2<sup>96<\/sup>)).\n\nAs a concrete example, for handling credit card numbers, A is 10, L is 16, so\nvalid input lengths would be between 6 and 56 characters. This is because\n10<sup>6<\/sup>=1,000,000 (already greater than 2), and 2 \\* floor(log<sub>10<\/sub>(2<sup>96<\/sup>)) = 56.\n\nOf course, in the case of credit card numbers valid input would always be\nbetween 12 and 19 decimal digits.\n\n#### Output limitations\n\nAfter transformation and formatting by the template, the value is an encrypted\nversion of the input with the format preserved. However, the value itself may\nbe _invalid_ with respect to other standards. For example the output credit card\nnumber may not validate (it likely won't create a valid check digit).\n\nSo one must consider when the outputs are stored whether validation in storage\nmay reject them.\n\n### Masking\n\nMasking performs replacement of matched characters on the input value with a\ndesired character. This form of transformation is non-reversible and thus does\nnot support retrieving the original value back using the decode operation.\n\n### Tokenization\n\n[Tokenization](\/vault\/docs\/secrets\/transform\/tokenization) exchanges a\nsensitive value for an unrelated value called a _token_. 
The original sensitive\nvalue cannot be recovered from a token alone; tokens are irreversible.\n\n#### Inputs\n\nTokenization inputs are not processed by templates or alphabets, as they do not\npreserve any of the contents or format of the input.\n\n#### Outputs\n\nTokenization is not format preserving. The token output is a Base58 encoded\nstring value of unrelated length, and is not rendered by a template.\n\nThe decoded value is returned verbatim as it was before encoding.\n\n#### Metadata\n\nAs tokenization isn't format preserving and is stateful, the input values can be\nany length, subject to other limits in Vault's request processing. In addition,\nnon-sensitive _metadata_ can be encoded alongside the value, and retrieved either\nwith or independently of the original value.\n\n#### Operations\n\nIn addition to encode and decode, as tokenization is stateful, it provides three\nadditional operations:\n\n- Retrieve metadata given a token.\n- Check whether an input value has a valid, unexpired token.\n- For some configurations, retrieve a previously encoded token for a plaintext\n  input.\n\n#### Stores\n\nTokenization is stateful. Tokenized state can be stored internally (the\ndefault) or in an external store. Currently only PostgreSQL, MySQL, and MSSQL are supported\nfor external storage.\n\n#### Mapping modes\n\n[Tokenization](\/vault\/docs\/secrets\/transform\/tokenization) stores the results of an encode operation\nin storage using a cryptographic construct that enhances the safety of its values.\nIn the `default` mapping mode, the token itself is transformed via a one-way\nfunction involving the transform key and elements of the token. As Vault does\nnot store the token, the values in Vault storage themselves cannot be used to\nretrieve the original input.\n\nA second mapping mode, `exportable`, is provided for cases where\noperators may need to recover the full set of decoded inputs in an emergency via\nthe export operation. 
It is strongly recommended that one use the `default` mode if\npossible, as it is resistant to more types of attack.\n\n#### Convergent tokenization\n\n~> **Note:** Convergent tokenization is not supported for transformations with\nimported keys.\n\nIn addition, tokenization transformations may be configured as *convergent*, meaning\nthat tokenizing the same plaintext and expiration more than once results in the\nsame token value. Enabling convergence has performance and security\n[considerations](\/vault\/docs\/secrets\/transform\/tokenization#convergence).\n\n## Deletion behavior\n\nDeletion of any resource other than a role is guarded by a check that no other\nrelated resource is currently using it, in order to avoid accidental data\nloss for any encoded value that may depend on these bits of information to\ndecode and reconstruct the original value. Role deletion can be safely done\nsince the information related to the transformation itself is contained within\nthe transformation object and its related resources.\n\nThe following rules apply when it comes to deleting a resource:\n\n- A transformation cannot be deleted if it's in use by a role.\n- A template or store cannot be deleted if it's in use by a transformation.\n- An alphabet cannot be deleted if it's in use by a template.\n\n## Provided builtin resources\n\nThe secret engine provides a set of builtin templates and alphabets that are\nconsidered common. Builtin templates cannot be deleted, and the prefix\n\"builtin\/\" on template and alphabet names is a reserved keyword.\n\n### Templates\n\nThe following builtin templates are available for use in the secret engine:\n\n- builtin\/creditcardnumber\n- builtin\/socialsecuritynumber\n\nNote that these templates only check for the matching pattern(s), and not the\nvalidity of the value itself. 
For instance, the builtin credit card number\ntemplate can determine whether the provided value is in the format of commonly\nissued credit cards, but not whether the credit card is a valid number from a\nparticular issuer.\n\nTemplates currently only accept regular expressions as the matching pattern\ntype. Templates use Go's standard library regexp engine, which supports\n[the RE2 syntax](https:\/\/github.com\/google\/re2\/wiki\/Syntax).\n\n**Note**: The `builtin\/any` template is only valid for, and is the default for, the tokenization\ntransform.\n\n### Alphabets\n\nThe following builtin alphabets are available for use in the secret engine:\n\n- builtin\/numeric\n- builtin\/alphalower\n- builtin\/alphaupper\n- builtin\/alphanumericlower\n- builtin\/alphanumericupper\n- builtin\/alphanumeric\n\nCustom alphabets must contain between 2 and 65536 unique characters.\n\n### Stores\n\nThe following builtin store is available (and is the default) for tokenization\ntransformations:\n\n- builtin\/internal\n\n## Tutorial\n\nRefer to the [Transform Secrets Engine](\/vault\/tutorials\/adp\/transform) tutorial to learn how to use the Transform secrets engine to handle secure data transmission and tokenization against provided secrets.\n\n## Bring your own key (BYOK)\n\n~> **Note:** Key import functionality supports cases where there is a need to bring\nin an existing key from an HSM or another outside system. It is more secure to\nhave Transform generate and manage a key within Vault.\n\n### Via the Command Line\n\nThe Vault command line tool [includes a helper](\/vault\/docs\/commands\/transform\/) to perform the steps described\nin the Manual process section below.\n\n### Via the API\n\nFirst, the wrapping key needs to be read from the transform secrets engine:\n\n```text\n$ vault read transform\/wrapping_key\n```\n\nThe wrapping key will be a 4096-bit RSA public key.\n\nThen, the wrapping key is used to create the ciphertext input for the `import` endpoint,\nas described below. 
The target key refers to the key being imported.\n\n### HSM\n\nIf the key is being imported from an HSM that supports PKCS#11, there are\ntwo possible scenarios:\n\n- If the HSM supports the CKM_RSA_AES_KEY_WRAP mechanism, it can be used to wrap the\ntarget key using the wrapping key.\n\n- Otherwise, two mechanisms can be combined to wrap the target key. First, a 256-bit AES key is\ngenerated and then used to wrap the target key using the CKM_AES_KEY_WRAP_KWP mechanism.\nThen the AES key should be wrapped under the wrapping key using the CKM_RSA_PKCS_OAEP mechanism\nwith MGF1 and either SHA-1, SHA-224, SHA-256, SHA-384, or SHA-512.\n\nThe ciphertext is constructed by appending the wrapped target key to the wrapped AES key.\n\nThe ciphertext bytes should be base64-encoded.\n\n### Manual process\n\nIf the target key is not stored in an HSM or KMS, the following steps can be used to construct\nthe ciphertext for the input of the `import` endpoint:\n\n1. Generate an ephemeral 256-bit AES key.\n\n2. Wrap the target key using the ephemeral AES key with AES-KWP.\n\n3. Wrap the AES key under the Vault wrapping key using RSAES-OAEP with MGF1 and\neither SHA-1, SHA-224, SHA-256, SHA-384, or SHA-512.\n\n4. Delete the ephemeral AES key.\n\n5. Append the wrapped target key to the wrapped AES key.\n\n6. Base64-encode the result.\n\nFor more details on the key wrapping process, see the [key wrapping guide](\/vault\/docs\/secrets\/transit\/key-wrapping-guide)\n(be sure to use the transform wrapping key when wrapping a key for import into the transform secrets engine).\n\n## API\n\nThe Transform secrets engine has a full HTTP API. 
Please see the\n[Transform secrets engine API](\/vault\/api-docs\/secret\/transform) for more\ndetails.","site":"vault"}
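As a quick sanity check on the FF3-1 input-limit rule in the Transform record above (L >= 2, A^L >= 1,000,000, L <= 2 * floor(log_A(2^96))), the bounds can be computed with exact integer arithmetic. The helper name `fpe_length_bounds` is illustrative only, not part of Vault:

```python
def fpe_length_bounds(a: int) -> tuple[int, int]:
    """Valid FF3-1 input length range (min, max) for an alphabet of size a."""
    # Smallest L with L >= 2 and a**L >= 1,000,000 (the brute-force guessability floor).
    min_len = 2
    while a ** min_len < 1_000_000:
        min_len += 1
    # floor(log_a(2**96)) computed exactly: largest k with a**k <= 2**96.
    k = 0
    while a ** (k + 1) <= 2 ** 96:
        k += 1
    return min_len, 2 * k

# For a decimal alphabet (A = 10), as used for credit card numbers:
print(fpe_length_bounds(10))  # (6, 56), so a 16-digit card number is in range
```

For the decimal alphabet this reproduces the documented 6-to-56 range; a binary alphabet (A = 2) gives 20 to 192.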
{"questions":"vault PKI secrets engine quick start root CA setup This document provides a brief overview of setting up a Vault PKI Secrets The PKI secrets engine for Vault generates TLS certificates layout docs Engine with a Root CA certificate page title PKI Secrets Engines Quick Start Root CA Setup","answers":"---\nlayout: docs\npage_title: 'PKI - Secrets Engines: Quick Start: Root CA Setup'\ndescription: The PKI secrets engine for Vault generates TLS certificates.\n---\n\n# PKI secrets engine - quick start - root CA setup\n\nThis document provides a brief overview of setting up a Vault PKI Secrets\nEngine with a Root CA certificate.\n\n#### Mount the backend\n\nThe first step to using the PKI backend is to mount it. Unlike the `kv`\nbackend, the `pki` backend is not mounted by default.\n\n```shell-session\n$ vault secrets enable pki\nSuccessfully mounted 'pki' at 'pki'!\n```\n\n#### Configure a CA certificate\n\nNext, Vault must be configured with a CA certificate and associated private\nkey. We'll take advantage of the backend's self-signed root generation support,\nbut Vault also supports generating an intermediate CA (with a CSR for signing)\nor setting a PEM-encoded certificate and private key bundle directly into the\nbackend.\n\nGenerally you'll want a root certificate to only be used to sign CA\nintermediate certificates, but for this example we'll proceed as if you will\nissue certificates directly from the root. As it's a root, we'll want to set a\nlong maximum life time for the certificate; since it honors the maximum mount\nTTL, first we adjust that:\n\n```shell-session\n$ vault secrets tune -max-lease-ttl=87600h pki\nSuccessfully tuned mount 'pki'!\n```\n\nThat sets the maximum TTL for secrets issued from the mount to 10 years. 
(Note\nthat roles can further restrict the maximum TTL.)\n\nNow, we generate our root certificate:\n\n```shell-session\n$ vault write pki\/root\/generate\/internal common_name=myvault.com ttl=87600h\nKey             Value\n---             -----\ncertificate     -----BEGIN CERTIFICATE-----\nMIIDNTCCAh2gAwIBAgIUJqrw\/9EDZbp4DExaLjh0vSAHyBgwDQYJKoZIhvcNAQEL\nBQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyMzIwWhcNMjcx\nMjA2MTkyMzQ5WjAWMRQwEgYDVQQDEwtteXZhdWx0LmNvbTCCASIwDQYJKoZIhvcN\nAQEBBQADggEPADCCAQoCggEBAKY\/vJ6sRFym+yFYUneoVtDmOCaDKAQiGzQw0IXL\nwgMBBb82iKpYj5aQjXZGIl+VkVnCi+M2AQ\/iYXWZf1kTAdle4A6OC4+VefSIa2b4\neB7R8aiGTce62jB95+s5\/YgrfIqk6igfpCSXYLE8ubNDA2\/+cqvjhku1UzlvKBX2\nhIlgWkKlrsnybHN+B\/3Usw9Km\/87rzoDR3OMxLV55YPHiq6+olIfSSwKAPjH8LZm\nuM1ITLG3WQUl8ARF17Dj+wOKqbUG38PduVwKL5+qPksrvNwlmCP7Kmjncc6xnYp6\n5lfr7V4DC\/UezrJYCIb0g\/SvtxoN1OuqmmvSTKiEE7hVOAcCAwEAAaN7MHkwDgYD\nVR0PAQH\/BAQDAgEGMA8GA1UdEwEB\/wQFMAMBAf8wHQYDVR0OBBYEFECKdYM4gDbM\nkxRZA2wR4f\/yNhQUMB8GA1UdIwQYMBaAFECKdYM4gDbMkxRZA2wR4f\/yNhQUMBYG\nA1UdEQQPMA2CC215dmF1bHQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQCCJKZPcjjn\n7mvD2+sr6lx4DW\/vJwVSW8eTuLtOLNu6\/aFhcgTY\/OOB8q4n6iHuLrEt8\/RV7RJI\nobRx74SfK9BcOLt4+DHGnFXqu2FNVnhDMOKarj41yGyXlJaQRUPYf6WJJLF+ZphN\nnNsZqHJHBfZtpJpE5Vywx3pah08B5yZHk1ItRPEz7EY3uwBI\/CJoBb+P5Ahk6krc\nLZ62kFwstkVuFp43o3K7cRNexCIsZGx2tsyZ0nyqDUFsBr66xwUfn3C+\/1CDc9YL\nzjq+8nI2ooIrj4ZKZCOm2fKd1KeGN\/CZD7Ob6uNhXrd0Tjwv00a7nffvYQkl\/1V5\nBT55jevSPVVu\n-----END CERTIFICATE-----\nexpiration      1828121029\nissuing_ca      -----BEGIN 
CERTIFICATE-----\nMIIDNTCCAh2gAwIBAgIUJqrw\/9EDZbp4DExaLjh0vSAHyBgwDQYJKoZIhvcNAQEL\nBQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyMzIwWhcNMjcx\nMjA2MTkyMzQ5WjAWMRQwEgYDVQQDEwtteXZhdWx0LmNvbTCCASIwDQYJKoZIhvcN\nAQEBBQADggEPADCCAQoCggEBAKY\/vJ6sRFym+yFYUneoVtDmOCaDKAQiGzQw0IXL\nwgMBBb82iKpYj5aQjXZGIl+VkVnCi+M2AQ\/iYXWZf1kTAdle4A6OC4+VefSIa2b4\neB7R8aiGTce62jB95+s5\/YgrfIqk6igfpCSXYLE8ubNDA2\/+cqvjhku1UzlvKBX2\nhIlgWkKlrsnybHN+B\/3Usw9Km\/87rzoDR3OMxLV55YPHiq6+olIfSSwKAPjH8LZm\nuM1ITLG3WQUl8ARF17Dj+wOKqbUG38PduVwKL5+qPksrvNwlmCP7Kmjncc6xnYp6\n5lfr7V4DC\/UezrJYCIb0g\/SvtxoN1OuqmmvSTKiEE7hVOAcCAwEAAaN7MHkwDgYD\nVR0PAQH\/BAQDAgEGMA8GA1UdEwEB\/wQFMAMBAf8wHQYDVR0OBBYEFECKdYM4gDbM\nkxRZA2wR4f\/yNhQUMB8GA1UdIwQYMBaAFECKdYM4gDbMkxRZA2wR4f\/yNhQUMBYG\nA1UdEQQPMA2CC215dmF1bHQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQCCJKZPcjjn\n7mvD2+sr6lx4DW\/vJwVSW8eTuLtOLNu6\/aFhcgTY\/OOB8q4n6iHuLrEt8\/RV7RJI\nobRx74SfK9BcOLt4+DHGnFXqu2FNVnhDMOKarj41yGyXlJaQRUPYf6WJJLF+ZphN\nnNsZqHJHBfZtpJpE5Vywx3pah08B5yZHk1ItRPEz7EY3uwBI\/CJoBb+P5Ahk6krc\nLZ62kFwstkVuFp43o3K7cRNexCIsZGx2tsyZ0nyqDUFsBr66xwUfn3C+\/1CDc9YL\nzjq+8nI2ooIrj4ZKZCOm2fKd1KeGN\/CZD7Ob6uNhXrd0Tjwv00a7nffvYQkl\/1V5\nBT55jevSPVVu\n-----END CERTIFICATE-----\nserial_number   26:aa:f0:ff:d1:03:65:ba:78:0c:4c:5a:2e:38:74:bd:20:07:c8:18\n```\n\nThe returned certificate is purely informational; it and its private key are\nsafely stored in the backend mount.\n\n#### Set URL configuration\n\nGenerated certificates can have the CRL location and the location of the\nissuing certificate encoded. These values must be set manually and typically to FQDN associated to the Vault server, but can be changed at any time.\n\n```shell-session\n$ vault write pki\/config\/urls issuing_certificates=\"http:\/\/vault.example.com:8200\/v1\/pki\/ca\" crl_distribution_points=\"http:\/\/vault.example.com:8200\/v1\/pki\/crl\"\nSuccess! Data written to: pki\/ca\/urls\n```\n\n#### Configure a role\n\nThe next step is to configure a role. 
A role is a logical name that maps to a\npolicy used to generate those credentials. For example, let's create an\n\"example-dot-com\" role:\n\n```shell-session\n$ vault write pki\/roles\/example-dot-com \\\n    allowed_domains=example.com \\\n    allow_subdomains=true max_ttl=72h\nSuccess! Data written to: pki\/roles\/example-dot-com\n```\n\n#### Issue certificates\n\nBy writing to the `roles\/example-dot-com` path we are defining the\n`example-dot-com` role. To generate a new certificate, we simply write\nto the `issue` endpoint with that role name: Vault is now configured to create\nand manage certificates!\n\n```shell-session\n$ vault write pki\/issue\/example-dot-com \\\n    common_name=blah.example.com\nKey                 Value\n---                 -----\ncertificate         -----BEGIN CERTIFICATE-----\nMIIDvzCCAqegAwIBAgIUWQuvpMpA2ym36EoiYyf3Os5UeIowDQYJKoZIhvcNAQEL\nBQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyNDA1WhcNMTcx\nMjExMTkyNDM1WjAbMRkwFwYDVQQDExBibGFoLmV4YW1wbGUuY29tMIIBIjANBgkq\nhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1CU93lVgcLXGPxRGTRT3GM5wqytCo7Z6\ngjfoHyKoPCAqjRdjsYgp1FMvumNQKjUat5KTtr2fypbOnAURDCh4bN\/omcj7eAqt\nldJ8mf8CtKUaaJ1kp3R6RRFY\/u96BnmKUG8G7oDeEDsKlXuEuRcNbGlGF8DaM\/O1\nHFa57cM\/8yFB26Nj5wBoG5Om6ee5+W+14Qee8AB6OJbsf883Z+zvhJTaB0QM4ZUq\nuAMoMVEutWhdI5EFm5OjtMeMu2U+iJl2XqqgQ\/JmLRjRdMn1qd9TzTaVSnjoZ97s\njHK444Px1m45einLqKUJ+Ia2ljXYkkItJj9Ut6ZSAP9fHlAtX84W3QIDAQABo4H\/\nMIH8MA4GA1UdDwEB\/wQEAwIDqDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUH\nAwIwHQYDVR0OBBYEFH\/YdObW6T94U0zuU5hBfTfU5pt1MB8GA1UdIwQYMBaAFECK\ndYM4gDbMkxRZA2wR4f\/yNhQUMDsGCCsGAQUFBwEBBC8wLTArBggrBgEFBQcwAoYf\naHR0cDovLzEyNy4wLjAuMTo4MjAwL3YxL3BraS9jYTAbBgNVHREEFDASghBibGFo\nLmV4YW1wbGUuY29tMDEGA1UdHwQqMCgwJqAkoCKGIGh0dHA6Ly8xMjcuMC4wLjE6\nODIwMC92MS9wa2kvY3JsMA0GCSqGSIb3DQEBCwUAA4IBAQCDXbHV68VayweB2tkb\nKDdCaveaTULjCeJUnm9UT\/6C0YqC\/RxTAjdKFrilK49elOA3rAtEL6dmsDP2yH25\nptqi2iU+y99HhZgu0zkS\/p8elYN3+l+0O7pOxayYXBkFf5t0TlEWSTb7cW+Etz\/c\nMvSqx6vVvspSjB0PsA3eBq0caZnUJv2u\/TEiUe7PPY0
UmrZxp\/R\/P\/kE54yI3nWN\n4Cwto6yUwScOPbVR1d3hE2KU2toiVkEoOk17UyXWTokbG8rG0KLj99zu7my+Fyre\nsjV5nWGDSMZODEsGxHOC+JgNAC1z3n14\/InFNOsHICnA5AnJzQdSQQjvcZHN2NyW\n+t4f\n-----END CERTIFICATE-----\nissuing_ca          -----BEGIN CERTIFICATE-----\nMIIDNTCCAh2gAwIBAgIUJqrw\/9EDZbp4DExaLjh0vSAHyBgwDQYJKoZIhvcNAQEL\nBQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMjA4MTkyMzIwWhcNMjcx\nMjA2MTkyMzQ5WjAWMRQwEgYDVQQDEwtteXZhdWx0LmNvbTCCASIwDQYJKoZIhvcN\nAQEBBQADggEPADCCAQoCggEBAKY\/vJ6sRFym+yFYUneoVtDmOCaDKAQiGzQw0IXL\nwgMBBb82iKpYj5aQjXZGIl+VkVnCi+M2AQ\/iYXWZf1kTAdle4A6OC4+VefSIa2b4\neB7R8aiGTce62jB95+s5\/YgrfIqk6igfpCSXYLE8ubNDA2\/+cqvjhku1UzlvKBX2\nhIlgWkKlrsnybHN+B\/3Usw9Km\/87rzoDR3OMxLV55YPHiq6+olIfSSwKAPjH8LZm\nuM1ITLG3WQUl8ARF17Dj+wOKqbUG38PduVwKL5+qPksrvNwlmCP7Kmjncc6xnYp6\n5lfr7V4DC\/UezrJYCIb0g\/SvtxoN1OuqmmvSTKiEE7hVOAcCAwEAAaN7MHkwDgYD\nVR0PAQH\/BAQDAgEGMA8GA1UdEwEB\/wQFMAMBAf8wHQYDVR0OBBYEFECKdYM4gDbM\nkxRZA2wR4f\/yNhQUMB8GA1UdIwQYMBaAFECKdYM4gDbMkxRZA2wR4f\/yNhQUMBYG\nA1UdEQQPMA2CC215dmF1bHQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQCCJKZPcjjn\n7mvD2+sr6lx4DW\/vJwVSW8eTuLtOLNu6\/aFhcgTY\/OOB8q4n6iHuLrEt8\/RV7RJI\nobRx74SfK9BcOLt4+DHGnFXqu2FNVnhDMOKarj41yGyXlJaQRUPYf6WJJLF+ZphN\nnNsZqHJHBfZtpJpE5Vywx3pah08B5yZHk1ItRPEz7EY3uwBI\/CJoBb+P5Ahk6krc\nLZ62kFwstkVuFp43o3K7cRNexCIsZGx2tsyZ0nyqDUFsBr66xwUfn3C+\/1CDc9YL\nzjq+8nI2ooIrj4ZKZCOm2fKd1KeGN\/CZD7Ob6uNhXrd0Tjwv00a7nffvYQkl\/1V5\nBT55jevSPVVu\n-----END CERTIFICATE-----\nprivate_key         -----BEGIN RSA PRIVATE 
KEY-----\nMIIEpAIBAAKCAQEA1CU93lVgcLXGPxRGTRT3GM5wqytCo7Z6gjfoHyKoPCAqjRdj\nsYgp1FMvumNQKjUat5KTtr2fypbOnAURDCh4bN\/omcj7eAqtldJ8mf8CtKUaaJ1k\np3R6RRFY\/u96BnmKUG8G7oDeEDsKlXuEuRcNbGlGF8DaM\/O1HFa57cM\/8yFB26Nj\n5wBoG5Om6ee5+W+14Qee8AB6OJbsf883Z+zvhJTaB0QM4ZUquAMoMVEutWhdI5EF\nm5OjtMeMu2U+iJl2XqqgQ\/JmLRjRdMn1qd9TzTaVSnjoZ97sjHK444Px1m45einL\nqKUJ+Ia2ljXYkkItJj9Ut6ZSAP9fHlAtX84W3QIDAQABAoIBAQCf5YIANfF+gkNt\n\/+YM6yRi+hZJrU2I\/1zPETxPW1vaFZR8y4hEoxCEDD8JCRm+9k+w1TWoorvxgkEv\nr1HuDALYbNtwLd\/71nCHYCKyH1b2uQpyl07qOAyASlb9r5oVjz4E6eobkd3N9fJA\nQN0EdK+VarN968mLJsD3Hxb8chGdObBCQ+LO+zdqQLaz+JwhfnK98rm6huQtYK3w\nccd0OwoVmtZz2eJl11TJkB9fi4WqJyxl4wST7QC80LstB1deR78oDmN5WUKU12+G\n4Mrgc1hRwUSm18HTTgAhaA4A3rjPyirBohb5Sf+jJxusnnay7tvWeMnIiRI9mqCE\ndr3tLrcxAoGBAPL+jHVUF6sxBqm6RTe8Ewg\/8RrGmd69oB71QlVUrLYyC96E2s56\n19dcyt5U2z+F0u9wlwR1rMb2BJIXbxlNk+i87IHmpOjCMS38SPZYWLHKj02eGfvA\nMjKKqEjNY\/md9eVAVZIWSEy63c4UcBK1qUH3\/5PNlyjk53gCOI\/4OXX\/AoGBAN+A\nAlyd6A\/pyHWq8WMyAlV18LnzX8XktJ07xrNmjbPGD5sEHp+Q9V33NitOZpu3bQL+\ngCNmcrodrbr9LBV83bkAOVJrf82SPaBesV+ATY7ZiWpqvHTmcoS7nglM2XTr+uWR\nY9JGdpCE9U5QwTc6qfcn7Eqj7yNvvHMrT+1SHwsjAoGBALQyQEbhzYuOF7rV\/26N\nci+z+0A39vNO++b5Se+tk0apZlPlgb2NK3LxxR+LHevFed9GRzdvbGk\/F7Se3CyP\ncxgswdazC6fwGjhX1mOYsG1oIU0V6X7f0FnaqWETrwf1M9yGEO78xzDfgozIazP0\ns0fQeR9KXsZcuaotO3TIRxRRAoGAMFIDsLRvDKm1rkL0B0czm\/hwwDMu\/KDyr5\/R\n2M2OS1TB4PjmCgeUFOmyq3A63OWuStxtJboribOK8Qd1dXvWj\/3NZtVY\/z\/j1P1E\nCeq6We0MOZa0Ae4kyi+p\/kbAKPgv+VwSoc6cKailRHZPH7quLoJSIt0IgbfRnXC6\nygtcLNMCgYBwiPw2mTYvXDrAcO17NhK\/r7IL7BEdFdx\/w8vNJQp+Ub4OO3Iw6ARI\nvXxu6A+Qp50jra3UUtnI+hIirMS+XEeWqJghK1js3ZR6wA\/ZkYZw5X1RYuPexb\/4\n6befxmnEuGSbsgvGqYYTf5Z0vgsw4tAHfNS7TqSulYH06CjeG1F8DQ==\n-----END RSA PRIVATE KEY-----\nprivate_key_type    rsa\nserial_number       59:0b:af:a4:ca:40:db:29:b7:e8:4a:22:63:27:f7:3a:ce:54:78:8a\n```\n\nVault has now generated a new set of credentials using the `example-dot-com`\nrole configuration. 
Here we see the dynamically generated private key and\ncertificate.\n\nUsing ACLs, it is possible to restrict using the pki backend such that trusted\noperators can manage the role definitions, and both users and applications are\nrestricted in the credentials they are allowed to read.\n\nIf you get stuck at any time, simply run `vault path-help pki` or with a\nsubpath for interactive help output.\n\n## Tutorial\n\nRefer to the [Build Your Own Certificate Authority (CA)](\/vault\/tutorials\/secrets-management\/pki-engine)\nguide for a step-by-step tutorial.\n\nHave a look at the [PKI Secrets Engine with Managed Keys](\/vault\/tutorials\/enterprise\/managed-key-pki)\nfor more about how to use externally managed keys with PKI.\n\n## API\n\nThe PKI secrets engine has a full HTTP API. Please see the\n[PKI secrets engine API](\/vault\/api-docs\/secret\/pki) for more\ndetails.","site":"vault"}
{"questions":"vault PKI secrets engine quick start intermediate CA setup The PKI secrets engine for Vault generates TLS certificates certificates were issued directly from the root certificate authority layout docs In the first Quick Start guide vault docs secrets pki quick start root ca page title PKI Secrets Engines Quick Start Intermediate CA Setup","answers":"---\nlayout: docs\npage_title: 'PKI - Secrets Engines: Quick Start: Intermediate CA Setup'\ndescription: The PKI secrets engine for Vault generates TLS certificates.\n---\n\n# PKI secrets engine - quick start - intermediate CA setup\n\nIn the [first Quick Start guide](\/vault\/docs\/secrets\/pki\/quick-start-root-ca),\ncertificates were issued directly from the root certificate authority.\nAs described in the example, this is not a recommended practice. This guide\nbuilds on the previous guide's root certificate authority and creates an\nintermediate authority using the root authority to sign the intermediate's\ncertificate.\n\n#### Mount the backend\n\nTo add another certificate authority to our Vault instance, we have to mount it\nat a different path.\n\n```shell-session\n$ vault secrets enable -path=pki_int pki\nSuccessfully mounted 'pki' at 'pki_int'!\n```\n\n#### Configure an intermediate CA\n\n```shell-session\n$ vault secrets tune -max-lease-ttl=43800h pki_int\nSuccessfully tuned mount 'pki_int'!\n```\n\nThat sets the maximum TTL for secrets issued from the mount to 5 years. 
This\nvalue should be less than or equal to the root certificate authority.\n\nNow, we generate our intermediate certificate signing request:\n\n```shell-session\n$ vault write pki_int\/intermediate\/generate\/internal common_name=\"myvault.com Intermediate Authority\" ttl=43800h\nKey Value\ncsr -----BEGIN CERTIFICATE REQUEST-----\nMIICsjCCAZoCAQAwLTErMCkGA1UEAxMibXl2YXVsdC5jb20gSW50ZXJtZWRpYXRl\nIEF1dGhvcml0eTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJU1Qh8l\nBW16WHAu34Fy92FnSy4219WVlKw1xwpKxjd95xH6WcxXozOs6oHFQ9c592bz51F8\nKK3FFJYraUrGONI5Cz9qHbzC1mFCmjnXVXCoeNKIzEBG0Y+ehH7MQ1SvDCyvaJPX\nItFXaGf6zENiGsApw3Y3lFr0MjPzZDBH1p4Nq3aA6L2BaxvO5vczdQl5tE2ud\/zs\nGIdCWnl1ThDEeiX1Ppduos\/dx3gaZa9ly3iCuDMKIL9yK5XTBTgKB6ALPApekLQB\nkcUFbOuMzjrDSBe9ytu65yICYp26iAPPA8aKTj5cUgscgzEvQS66rSAVG\/unrWxb\nwbl8b7eQztCmp60CAwEAAaBAMD4GCSqGSIb3DQEJDjExMC8wLQYDVR0RBCYwJIIi\nbXl2YXVsdC5jb20gSW50ZXJtZWRpYXRlIEF1dGhvcml0eTANBgkqhkiG9w0BAQsF\nAAOCAQEAZA9A1QvTdAd45+Ay55FmKNWnis1zLjbmWNJURUoDei6i6SCJg0YGX1cZ\nWkD0ibxPYihSsKRaIUwC2bE8cxZM57OSs7ISUmyPQAT2IHTHvuGK72qlFRBlFOzg\nSHEG7gfyKdrALphyF8wM3u4gXhcnY3CdltjabL3YakZqd3Ey4870\/0XXeo5c4k7w\n\/+n9M4xED4TnXYCGfLAlu5WWKSeCvu9mHXnJcLo1MiYjX7KGey\/xYYbfxHSPm4ul\ntI6Vf59zDRscfNmq37fERD3TiKP0QZNGTSRvnrxrx2RUQGXFywM8l4doG8nS5BxU\n2jP20cdv0lJFvHr9663\/8B\/+F5L6Yw==\n-----END CERTIFICATE REQUEST-----\n```\n\nTake the signing request from the intermediate authority and sign it using\nanother certificate authority, in this case the root certificate authority\ngenerated in the first example.\n\n```shell-session\n$ vault write pki\/root\/sign-intermediate csr=@pki_int.csr format=pem_bundle ttl=43800h\nKey             Value\ncertificate     -----BEGIN 
CERTIFICATE-----\nMIIDZTCCAk2gAwIBAgIUENxQD7KIJi1zE\/jEiYqAG1VC4NwwDQYJKoZIhvcNAQEL\nBQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMTI4MTcwNzIzWhcNMjIx\nMTI3MTcwNzUzWjAtMSswKQYDVQQDEyJteXZhdWx0LmNvbSBJbnRlcm1lZGlhdGUg\nQXV0aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5seNV4Yd\nuCMX0POUUuSzCBiR3Cyf9b9tGsCX7UfvZmjPs+Fl\/X+Ovq6UtHM9RuTGlyfFrCWy\npflO7mc0H8PBzlvhv1WQet5aRyUOXkG6iYmooG9iobIY8z\/TZCaCF605pgygfOaS\nDIlwOdJkfiXxGpQ00pfIwe\/Y2OK2I5e36u0E2EA6kXvcfexLjQGFPbod+H0R29Ro\n\/GwOJ6MpSHqB77mF025x1y08EtqT1z1kFCiDzFSkzNZEZYWljhDS6ZRY9ctzKufm\n5CkUwmvCVRI2CivDJvmfhXyv0DRoq4IhYdJHo179RSObq3BY9f9LQ0balNLiM0Ft\nO8f0urTqUAbySwIDAQABo4GTMIGQMA4GA1UdDwEB\/wQEAwIBBjAPBgNVHRMBAf8E\nBTADAQH\/MB0GA1UdDgQWBBSQgTfcMrKYzyckP6t\/0iVQkl0ZBDAfBgNVHSMEGDAW\ngBRccsCARqs3wQDjW7JMNXS6pWlFSDAtBgNVHREEJjAkgiJteXZhdWx0LmNvbSBJ\nbnRlcm1lZGlhdGUgQXV0aG9yaXR5MA0GCSqGSIb3DQEBCwUAA4IBAQABNg2HxccY\nDwRpsJ+sxA0BgDyF+tYtOlXViVNv6Z+nOU0nNhQSCjfzjYWmBg25nfKaFhQSC3b7\nfIW+e7it\/FLVrCgaqdysoxljqhR0gXMAy8S\/ubmskPWjJiKauJB5bfB59Uf2GP6j\nzimZDu6WjWvvgkKcJqJEbOOS9DWBvCTdmmml1NMXZtcytpod2Y7mxninqNRx3qpx\nPst4vgAbyM\/3zLSzkyUD+MXIyRXwxktFlyEYBHvMd9OoHzLO6WLxk22FyQQ+w4by\nNfXJY4r5pj6a4lJ6pPuqyfBhidYMTdY3AI7w\/QRGk4qQv1iDmnZspk2AxdbR5Lwe\nYmChIML\/f++S\n-----END CERTIFICATE-----\nexpiration      1669568873\nissuing_ca      -----BEGIN 
CERTIFICATE-----\nMIIDNTCCAh2gAwIBAgIUdR44qhhyh3CZjnCtflGKQlTI8NswDQYJKoZIhvcNAQEL\nBQAwFjEUMBIGA1UEAxMLbXl2YXVsdC5jb20wHhcNMTcxMTI4MTYxODA2WhcNMjcx\nMTI2MTYxODM1WjAWMRQwEgYDVQQDEwtteXZhdWx0LmNvbTCCASIwDQYJKoZIhvcN\nAQEBBQADggEPADCCAQoCggEBANTPnQ2CUkuLrYT4V6\/IIK\/gWFZXFG4lWTmgM5Zh\nPDquMhLEikZCbZKbupouBI8MOr5i8tycENaTnSs9dBwVEOWAHbLkliVgvCKgLi0F\nPfPM87FnBoKVctO2ip8AdmYcAt\/wc096dWBG6eKLVP5xsAe7NcYDtF\/inHgEZ22q\nZjGVEyC6WntIASgULoHGgHakPp1AHLhGm8nL5YbusWY7RgZIlNeGWLVoneG0pxdV\n7W1SPO67dsQyq58mTxMIGVUj5YE1q7\/C6OhCTnAHc+sRm0oUehPfO8kY4NHpCJGv\nnDRdJi6k6ewk94c0KK2tUUM\/TN6ZSRfx6ccgfPH8zNcVPVcCAwEAAaN7MHkwDgYD\nVR0PAQH\/BAQDAgEGMA8GA1UdEwEB\/wQFMAMBAf8wHQYDVR0OBBYEFFxywIBGqzfB\nAONbskw1dLqlaUVIMB8GA1UdIwQYMBaAFFxywIBGqzfBAONbskw1dLqlaUVIMBYG\nA1UdEQQPMA2CC215dmF1bHQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQBgvsgpBuVR\niKVdXXpFyoQLImuoaHZgaj5tuUDqnMoxOA1XWW6SVlZmGfDQ7+u5NBkp2cGSDRGm\nARHJTeURvdZIwdFdGkNqfAZjutRjjQOnXgS65ujZd7AnlZq1v0ZOZqVVk9YEOhOe\nRh2MjnHGNuiLBib1YNQHNuRef1mPwIE2Gm\/Tz\/z3JPHtkKNIKbn60zHrIIM\/OT2Z\nHYjcMUcqXtKGYfNjVspJm3lSDUoyJdaq80Afmy2Ez1Vt9crGG3Dj8mgs59lEhEyo\nMDVhOP116M5HJfQlRPVd29qS8pFrjBvXKjJSnJNG1UFdrWBJRJ3QrBxUQALKrJlR\ng5lvTeymHjS\/\n-----END CERTIFICATE-----\nserial_number   10:dc:50:0f:b2:88:26:2d:73:13:f8:c4:89:8a:80:1b:55:42:e0:dc\n```\n\nNow set the intermediate certificate authorities signing certificate to the\nroot-signed certificate.\n\n```shell-session\n$ vault write pki_int\/intermediate\/set-signed certificate=@signed_certificate.pem\nSuccess! Data written to: pki_int\/intermediate\/set-signed\n```\n\nThe intermediate certificate authority is now configured and ready to issue\ncertificates.\n\n#### Set URL configuration\n\nGenerated certificates can have the CRL location and the location of the\nissuing certificate encoded. 
These values must be set manually, but can be\nchanged at any time.\n\n```shell-session\n$ vault write pki_int\/config\/urls issuing_certificates=\"http:\/\/127.0.0.1:8200\/v1\/pki_int\/ca\" crl_distribution_points=\"http:\/\/127.0.0.1:8200\/v1\/pki_int\/crl\"\nSuccess! Data written to: pki_int\/ca\/urls\n```\n\n#### Configure a role\n\nThe next step is to configure a role. A role is a logical name that maps to a\npolicy used to generate those credentials. For example, let's create an\n\"example-dot-com\" role:\n\n```shell-session\n$ vault write pki_int\/roles\/example-dot-com \\\n    allowed_domains=example.com \\\n    allow_subdomains=true max_ttl=72h\nSuccess! Data written to: pki_int\/roles\/example-dot-com\n```\n\n#### Issue certificates\n\nBy writing to the `roles\/example-dot-com` path we are defining the\n`example-dot-com` role. To generate a new certificate, we simply write\nto the `issue` endpoint with that role name: Vault is now configured to create\nand manage certificates!\n\n```shell-session\n$ vault write pki_int\/issue\/example-dot-com \\\n    common_name=blah.example.com\nKey                 Value\n---                 -----\ncertificate         -----BEGIN 
CERTIFICATE-----\nMIIDbDCCAlSgAwIBAgIUPiAyxq+nIE6xlWf7hrzLkPQxtvMwDQYJKoZIhvcNAQEL\nBQAwMzExMC8GA1UEAxMoVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgU3ViIEF1\ndGhvcml0eTAeFw0xNjA5MjcwMDA5MTNaFw0xNjA5MjcwMTA5NDNaMBsxGTAXBgNV\nBAMTEGJsYWguZXhhbXBsZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK\nAoIBAQDJAYB04IVdmSC\/TimaA6BbXlvgBTZHL5wBUTmO4iHhenL0eDEXVe2Fd7Yq\n75LiBJmcC96hKbqh5rwS8KwN9ElZI52\/mSMC+IvoNlYHAf7shwfsjrVx3q7\/bTFg\nlz6wECn1ugysxynmMvgQD\/pliRkxTQ7RMh4Qlh75YG3R9BHy9ZddklZp0aNaitts\n0uufHnN1UER\/wxBCZdWTUu34KDL9I6yE7Br0slKKHPdEsGlFcMkbZhvjslZ7DGvO\n974S0qtOdKiawJZbpNPg0foGZ3AxesDUlkHmmgzUNes\/sjknDYTHEfeXM6Uap0j6\nXvyhCxqdeahb\/Vtibg0z9I0IusJbAgMBAAGjgY8wgYwwDgYDVR0PAQH\/BAQDAgOo\nMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAdBgNVHQ4EFgQU\/5oy0rL7\nTT0wX7KZK7qcXqgayNwwHwYDVR0jBBgwFoAUgM37P8oXmA972ztLfw+b1eIY5now\nGwYDVR0RBBQwEoIQYmxhaC5leGFtcGxlLmNvbTANBgkqhkiG9w0BAQsFAAOCAQEA\nCT2vI6\/taeLTw6ZulUhLXEXYXWZu1gF8n2COjZzbZXmHxQAoZ3GtnSNwacPHAyIj\nf3cA9Moo552y39LUtWk+wgFtQokWGK7LXglLaveNUBowOHq\/xk0waiIinJcgTG53\nZ\/qnbJnTjAOG7JwVJplWUIiS1avCksrHt7heE2EGRGJALqyLZ119+PW6ogtCLUv1\nX8RCTw\/UkIF\/LT+sLF0bXWy4Hn38Gjwj1MVv1l76cEGOVSHyrYkN+6AMnAP58L5+\nIWE9tN3oac4x7jhbuNpfxazIJ8Q6l\/Up5U5Evfbh6N1DI0\/gFCP20fMBkHwkuLfZ\n2ekZoSeCgFRDlHGkr7Vv9w==\n-----END CERTIFICATE-----\nissuing_ca          -----BEGIN 
CERTIFICATE-----\nMIIDijCCAnKgAwIBAgIUB28DoGwgGFKL7fbOu9S4FalHLn0wDQYJKoZIhvcNAQEL\nBQAwLzEtMCsGA1UEAxMkVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgQXV0aG9y\naXR5MB4XDTE2MDkyNzAwMDgyMVoXDTI2MDkxNjE2MDg1MVowMzExMC8GA1UEAxMo\nVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgU3ViIEF1dGhvcml0eTCCASIwDQYJ\nKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOSCiSij4wy1wiMwvZt+rtU3IaO6ZTn9\nLfIPuGsR5\/QSJk37pCZQco1LgoE\/rTl+\/xu3bDovyHDmgObghC6rzVOX2Tpi7kD+\nDOZpqxOsaS8ebYgxB\/XJTSxyEJuSAcpSNLqqAiZivuQXdaD0N7H3Or0awwmKE9mD\nI0g8CF4fPDmuuOG0ASn9fMqXVVt5tXtEqZ9yJYfNOXx3FOPjRVOZf+kvSc31wCKe\ni\/KmR0AQOmToKMzq988nLqFPTi9KZB8sEU20cGFeTQFol+m3FTcIru94EPD+nLUn\nxtlLELVspYb\/PP3VpvRj9b+DY8FGJ5nfSJl7Rkje+CD4VxJpSadin3kCAwEAAaOB\nmTCBljAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH\/BAUwAwEB\/zAdBgNVHQ4EFgQU\ngM37P8oXmA972ztLfw+b1eIY5nowHwYDVR0jBBgwFoAUj4YAIxRwrBy0QMRKLnD0\nkVidIuYwMwYDVR0RBCwwKoIoVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgU3Vi\nIEF1dGhvcml0eTANBgkqhkiG9w0BAQsFAAOCAQEAA4buJuPNJvA1kiATLw1dVU2J\nHPubk2Kp26Mg+GwLn7Vz45Ub133JCYfF3\/zXLFZZ5Yub9gWTtjScrvNfQTAbNGdQ\nBdnUlMmIRmfB7bfckhryR2R9byumeHATgNKZF7h8liNHI7X8tTzZGs6wPdXOLlzR\nTlM3m1RNK8pbSPOkfPb06w9cBRlD8OAbNtJmuypXA6tYyiiMYBhP0QLAO3i4m1ns\naAjAgEjtkB1rQxW5DxoTArZ0asiIdmIcIGmsVxfDQIjFlRxAkafMs74v+5U5gbBX\nwsOledU0fLl8KLq8W3OXqJwhGLK65fscrP0\/omPAcFgzXf+L4VUADM4XhW6Xyg==\n-----END CERTIFICATE-----\nca_chain            [-----BEGIN 
CERTIFICATE-----\nMIIDijCCAnKgAwIBAgIUB28DoGwgGFKL7fbOu9S4FalHLn0wDQYJKoZIhvcNAQEL\nBQAwLzEtMCsGA1UEAxMkVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgQXV0aG9y\naXR5MB4XDTE2MDkyNzAwMDgyMVoXDTI2MDkxNjE2MDg1MVowMzExMC8GA1UEAxMo\nVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgU3ViIEF1dGhvcml0eTCCASIwDQYJ\nKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOSCiSij4wy1wiMwvZt+rtU3IaO6ZTn9\nLfIPuGsR5\/QSJk37pCZQco1LgoE\/rTl+\/xu3bDovyHDmgObghC6rzVOX2Tpi7kD+\nDOZpqxOsaS8ebYgxB\/XJTSxyEJuSAcpSNLqqAiZivuQXdaD0N7H3Or0awwmKE9mD\nI0g8CF4fPDmuuOG0ASn9fMqXVVt5tXtEqZ9yJYfNOXx3FOPjRVOZf+kvSc31wCKe\ni\/KmR0AQOmToKMzq988nLqFPTi9KZB8sEU20cGFeTQFol+m3FTcIru94EPD+nLUn\nxtlLELVspYb\/PP3VpvRj9b+DY8FGJ5nfSJl7Rkje+CD4VxJpSadin3kCAwEAAaOB\nmTCBljAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH\/BAUwAwEB\/zAdBgNVHQ4EFgQU\ngM37P8oXmA972ztLfw+b1eIY5nowHwYDVR0jBBgwFoAUj4YAIxRwrBy0QMRKLnD0\nkVidIuYwMwYDVR0RBCwwKoIoVmF1bHQgVGVzdGluZyBJbnRlcm1lZGlhdGUgU3Vi\nIEF1dGhvcml0eTANBgkqhkiG9w0BAQsFAAOCAQEAA4buJuPNJvA1kiATLw1dVU2J\nHPubk2Kp26Mg+GwLn7Vz45Ub133JCYfF3\/zXLFZZ5Yub9gWTtjScrvNfQTAbNGdQ\nBdnUlMmIRmfB7bfckhryR2R9byumeHATgNKZF7h8liNHI7X8tTzZGs6wPdXOLlzR\nTlM3m1RNK8pbSPOkfPb06w9cBRlD8OAbNtJmuypXA6tYyiiMYBhP0QLAO3i4m1ns\naAjAgEjtkB1rQxW5DxoTArZ0asiIdmIcIGmsVxfDQIjFlRxAkafMs74v+5U5gbBX\nwsOledU0fLl8KLq8W3OXqJwhGLK65fscrP0\/omPAcFgzXf+L4VUADM4XhW6Xyg==\n-----END CERTIFICATE-----]\nprivate_key         -----BEGIN RSA PRIVATE 
KEY-----\nMIIEpgIBAAKCAQEAyQGAdOCFXZkgv04pmgOgW15b4AU2Ry+cAVE5juIh4Xpy9Hgx\nF1XthXe2Ku+S4gSZnAveoSm6oea8EvCsDfRJWSOdv5kjAviL6DZWBwH+7IcH7I61\ncd6u\/20xYJc+sBAp9boMrMcp5jL4EA\/6ZYkZMU0O0TIeEJYe+WBt0fQR8vWXXZJW\nadGjWorbbNLrnx5zdVBEf8MQQmXVk1Lt+Cgy\/SOshOwa9LJSihz3RLBpRXDJG2Yb\n47JWewxrzve+EtKrTnSomsCWW6TT4NH6BmdwMXrA1JZB5poM1DXrP7I5Jw2ExxH3\nlzOlGqdI+l78oQsanXmoW\/1bYm4NM\/SNCLrCWwIDAQABAoIBAQCCbHMJY1Wl8eIJ\nv5HG2WuHXaaHqVoavo2fXTDXwWryfx1v+zz\/Q0YnQBH3shPAi\/OQCTOfpw\/uVWTb\ndUZul3+wUyfcVmUdXGCLgBY53dWna8Z8e+zHwhISsqtDXV\/TpelUBDCNO324XIIR\nCg0TLO4nyzQ+ESLo6D+Y2DTp8lBjMEkmKTd8CLXR2ycEoVykN98qPZm8keiLGO91\nI8K7aRd8uOyQ6HUfJRlzFHSuwaLReErxGTEPI4t\/wVqh2nP2gGBsn3apiJ0ul6Jz\nNlYO5PqiwpeDk4ibhQBpicnm1jnEcynH\/WtGuKgMNB0M4SBRBsEguO7WoKx3o+qZ\niVIaPWDhAoGBAO05UBvyJpAcz\/ZNQlaF0EAOhoxNQ3h6+6ZYUE52PgZ\/DHftyJPI\nY+JJNclY91wn91Yk3ROrDi8gqhzA+2Lelxo1kuZDu+m+bpzhVUdJia7tZDNzRIhI\n24eP2GdochooOZ0qjvrik4kuX43amBhQ4RHsBjmX5CnUlL5ZULs8v2xnAoGBANjq\nVLAwiIIqJZEC6BuBvVYKaRWkBCAXvQ3j\/OqxHRYu3P68PZ58Q7HrhrCuyQHTph2v\nfzfmEMPbSCrFIrrMRmjUG8wopL7GjZjFl8HOBHFwzFiz+CT5DEC+IJIRkp4HM8F\/\nPAzjB2wCdRdSjLTD5ph0\/xQIg5xfln7D+wqU0QHtAoGBAKkLF0\/ivaIiNftw0J3x\nWxXag\/yErlizYpIGCqvuzII6lLr9YdoViT\/eJYrmb9Zm0HS9biCu2zuwDijRSBIL\nRieyF40opUaKoi3+0JMtDwTtO2MCd8qaCH3QfkgqAG0tTuj1Q8\/6F2JA\/myKYamq\nMMhhpYny9+7rAlemM8ZJIqtvAoGBAKOI3zpKDNCdd98A4v7B7H2usZUIJ7gOTZDo\nXqiNyRENWb2PK6GNq\/e6SrxvuclvyKA+zFnXULJoYtsj7tAH69lieGaOCc5uoRgZ\neBU7\/euMj\/McE6vEO3GgJawaJYCQi3uJMjvA+bp7i81+hehOfU5ZfmmbFaZSBoMh\nu+U5Vu3tAoGBANnBIbHfD3E7rqnqdpH1oRRHLA1VdghzEKgyUTPHNDzPJG87RY3c\nrRqeXepblud3qFjD60xS9BzcBijOvZ4+KHk6VIMpkyqoeNVFCJbBVCw+JGMp88+v\ne9t+2iwryh5+rnq+pg6anmgwHldptJc1XEFZA2UUQ89RP7kOGQF6IkIS\n-----END RSA PRIVATE KEY-----\nprivate_key_type    rsa\nserial_number       3e:20:32:c6:af:a7:20:4e:b1:95:67:fb:86:bc:cb:90:f4:31:b6:f3\n```\n\nVault has now generated a new set of credentials using the `example-dot-com`\nrole configuration. Here we see the dynamically generated private key and\ncertificate. 
The issuing CA certificate and CA trust chain are returned as well.\nThe CA Chain returns all the intermediate authorities in the trust chain. The root\nauthority is not included since that will usually be trusted by the underlying\nOS.\n\n## Tutorial\n\nRefer to the [Build Your Own Certificate Authority (CA)](\/vault\/tutorials\/secrets-management\/pki-engine)\nguide for a step-by-step tutorial.\n\nHave a look at the [PKI Secrets Engine with Managed Keys](\/vault\/tutorials\/enterprise\/managed-key-pki)\nfor more about how to use externally managed keys with PKI.\n\n## API\n\nThe PKI secrets engine has a full HTTP API. Please see the\n[PKI secrets engine API](\/vault\/api-docs\/secret\/pki) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title   PKI   Secrets Engines  Quick Start  Intermediate CA Setup  description  The PKI secrets engine for Vault generates TLS certificates         PKI secrets engine   quick start   intermediate CA setup  In the  first Quick Start guide   vault docs secrets pki quick start root ca   certificates were issued directly from the root certificate authority  As described in the example  this is not a recommended practice  This guide builds on the previous guide s root certificate authority and creates an intermediate authority using the root authority to sign the intermediate s certificate        Mount the backend  To add another certificate authority to our Vault instance  we have to mount it at a different path      shell session   vault secrets enable  path pki int pki Successfully mounted  pki  at  pki int             Configure an intermediate CA     shell session   vault secrets tune  max lease ttl 43800h pki int Successfully tuned mount  pki int        That sets the maximum TTL for secrets issued from the mount to 5 years  This value should be less than or equal to the root certificate authority   Now  we generate our intermediate certificate signing request      shell session   vault write pki int 
intermediate/generate/internal \
    common_name="myvault.com Intermediate Authority" ttl=43800h

Key    Value
---    -----
csr    -----BEGIN CERTIFICATE REQUEST-----
       ...
       -----END CERTIFICATE REQUEST-----
```

Take the signing request from the intermediate authority and sign it using
another certificate authority, in this case the root certificate authority
generated in the first example.

```shell-session
$ vault write pki/root/sign-intermediate csr=@pki_int.csr \
    format=pem_bundle ttl=43800h

Key              Value
---              -----
certificate      -----BEGIN CERTIFICATE-----
                 ...
                 -----END CERTIFICATE-----
expiration       1669568873
issuing_ca       -----BEGIN CERTIFICATE-----
                 ...
                 -----END CERTIFICATE-----
serial_number    10:dc:50:0f:b2:88:26:2d:73:13:f8:c4:89:8a:80:1b:55:42:e0:dc
```

Now set the intermediate certificate authority's signing certificate to the
root-signed certificate.

```shell-session
$ vault write pki_int/intermediate/set-signed certificate=@signed_certificate.pem

Success! Data written to: pki_int/intermediate/set-signed
```

The intermediate certificate authority is now configured and ready to issue
certificates.

### Set URL configuration

Generated certificates can have the CRL location and the location of the
issuing certificate encoded. These values must be set manually, but can be
changed at any time.

```shell-session
$ vault write pki_int/config/urls \
    issuing_certificates="http://127.0.0.1:8200/v1/pki_int/ca" \
    crl_distribution_points="http://127.0.0.1:8200/v1/pki_int/crl"

Success! Data written to: pki_int/config/urls
```

### Configure a role

The next step is to configure a role. A role is a logical name that maps to a
policy used to generate those credentials. For example, let's create an
"example-dot-com" role:

```shell-session
$ vault write pki_int/roles/example-dot-com \
    allowed_domains=example.com \
    allow_subdomains=true max_ttl=72h

Success! Data written to: pki_int/roles/example-dot-com
```

### Issue certificates

By writing to the `roles/example-dot-com` path we are defining the
`example-dot-com` role. To generate a new certificate, we simply write to the
`issue` endpoint with that role name. Vault is now configured to create and
manage certificates.

```shell-session
$ vault write pki_int/issue/example-dot-com \
    common_name=blah.example.com

Key                 Value
---                 -----
certificate         -----BEGIN CERTIFICATE-----
                    ...
                    -----END CERTIFICATE-----
issuing_ca          -----BEGIN CERTIFICATE-----
                    ...
                    -----END CERTIFICATE-----
ca_chain            [-----BEGIN CERTIFICATE-----
                    ...
                    -----END CERTIFICATE-----]
private_key         -----BEGIN RSA PRIVATE KEY-----
                    ...
                    -----END RSA PRIVATE KEY-----
private_key_type    rsa
serial_number       3e:20:32:c6:af:a7:20:4e:b1:95:67:fb:86:bc:cb:90:f4:31:b6:f3
```

Vault has now generated a new set of credentials using the `example-dot-com`
role configuration. Here we see the dynamically generated private key and
certificate. The issuing CA certificate and CA trust chain are returned as
well. The CA chain returns all the intermediate authorities in the trust
chain. The root authority is not included since that will usually be trusted
by the underlying OS.

## Tutorial

Refer to the [Build Your Own Certificate Authority (CA)](/vault/tutorials/secrets-management/pki-engine)
guide for a step-by-step tutorial. Have a look at the
[PKI Secrets Engine with Managed Keys](/vault/tutorials/enterprise/managed-key-pki)
tutorial for more about how to use externally managed keys with PKI.

## API

The PKI secrets engine has a full HTTP API. Please see the
[PKI secrets engine API](/vault/api-docs/secret/pki) for more details.
"}
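The `vault write pki_int/issue/...` CLI call above is a thin wrapper over an HTTP POST to `/v1/pki_int/issue/<role>`. As a minimal sketch of that API shape (the helper names `issue_request_body` and `extract_credentials` are hypothetical, not part of any Vault client library; the mount and role names are just the examples from this guide):

```python
import json

# Hypothetical helper: build the JSON body for POST /v1/pki_int/issue/<role>.
# common_name, alt_names, and ttl are documented request parameters of the
# issue endpoint.
def issue_request_body(common_name, alt_names=None, ttl=None):
    body = {"common_name": common_name}
    if alt_names:
        body["alt_names"] = ",".join(alt_names)  # Vault expects a CSV string
    if ttl:
        body["ttl"] = ttl
    return json.dumps(body)

# Hypothetical helper: pick the credential fields out of a response like the
# one shown above (certificate, private_key, ca_chain, serial_number all live
# under the top-level "data" key).
def extract_credentials(response_json):
    data = json.loads(response_json)["data"]
    return {
        "certificate": data["certificate"],
        "private_key": data["private_key"],
        "ca_chain": data.get("ca_chain", []),
        "serial_number": data["serial_number"],
    }
```

An actual call would POST this body to `http://127.0.0.1:8200/v1/pki_int/issue/example-dot-com` with an `X-Vault-Token` header carrying a token authorized for that path.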
{"questions":"vault page title Certificate Issuance External Policy CIEPS PKI Secrets Engines Vault PKI Secrets Engine when communicating with the Certificate Issuance PKI secrets engine Certificate Issuance External Policy Service CIEPS EnterpriseAlert inline true An overview of the Certificate Issuance External Policy CIEPS protocol layout docs This document covers high level architecture and service APIs used by the","answers":"---\nlayout: docs\npage_title: Certificate Issuance External Policy (CIEPS) | PKI - Secrets Engines\ndescription: An overview of the Certificate Issuance External Policy (CIEPS) protocol\n---\n\n# PKI secrets engine - Certificate Issuance External Policy Service (CIEPS) <EnterpriseAlert inline=\"true\" \/>\n\nThis document covers high-level architecture and service APIs used by the\nVault PKI Secrets Engine when communicating with the Certificate Issuance\nExternal Policy Service (CIEPS) <EnterpriseAlert inline=\"true\" \/>.\n\n## What is Certificate Issuance External Policy Service (CIEPS)?\n\nHashicorp Vault's PKI Secrets Engine has a mechanism for issuing leaf\ncertificates with arbitrary structure: [`\/pki\/sign-verbatim`](\/vault\/api-docs\/secret\/pki#sign-verbatim).\nThis requires an organization to run an application\/user-accessible service\nfor authenticating, authorizing, and validating certificate issuance requests\n(potentially handling key pair generation as well), before asking PKI to sign\nthe resulting CSR and leaf certificate with its own highly-privileged Vault\ntoken. If any attribute is missing from the original requester's CSR, the\noriginal service must reject the request as `sign-verbatim` does not give the\ncontrolling service the ability to modify the request.\n\nThe Certificate Issuance External Policy Service (CIEPS) <EnterpriseAlert inline=\"true\" \/>\nprotocol solves this by placing the validation and certificate templating\ngated behind the PKI, solving:\n\n 1. 
**Auditing**, so the original requester is still identified and both the\n    original request and subsequent response are tracked.\n 2. **Central access**, so applications only need to use a new URL for\n    requesting certificates.\n 3. **Certificate modification**, so customization of the requester's\n    submission can be exposed to this external service.\n 4. **External validation**, when compared to the Role-based system, as the\n    CIEPS implementation can reach out to customer-defined external systems\n    for validation.\n\nEither of these two mechanisms allows an organization to leverage the Vault\nPKI Secrets Engine to build their own flexible issuance control architecture,\nleveraging Vault as a PKI-as-a-Service platform. However, CIEPS grants far\ngreater control to the organization than the `sign-verbatim` approach.\n\n### Custom policy with `sign-verbatim`\n\nWith `sign-verbatim`, the policy validation service must sit in front of\nVault, processing requests from the user (which cannot use Vault\nauthentication and needs to authenticate themselves separately to this\nservice). This RA service then handles its own authentication to Vault,\nwhich provides the signing capabilities via the PKI plugin.\n\nWhen the application retains control over its own key material by providing\na CSR, the policy service cannot modify the requested CSR and thus cannot\nmodify the resulting certificate. It can only approve or deny requests\nwithout allowing operators to hide implementation details from calling\napplications. This is because PKI's `sign-verbatim` endpoint lacks the\nability for the Vault API caller (in this case, the fronting policy service)\nto modify the certificate independent of the provided CSR.\n\nIf, however, the policy service can control key material (and this is an\nacceptable risk to the organization), the policy service could modify requests\non behalf of the calling application. 
However, this still requires the\nexternal application to know how to authenticate to this external policy\nservice.\n\nAdditionally, to ensure compatibility with Vault, this policy service (and its\ndevelopers) would need to add support for the ACME protocol. For any new\nprotocols Vault supports in the future, this service would also need to\nimplement support to retain compatibility.\n\n### Custom policy with Certificate Issuance External Policy Service (CIEPS)\n\nWith CIEPS, users still authenticate to Vault and use the normal request\nworkflow to sign and issue certificates, including via ACME. However,\nVault's PKI Engine reaches out to the configured CIEPS implementation to\nvalidate and template the requested certificate, transparently from the\ncalling application.\n\nNotably, the application can opt to either retain full control over its key\nmaterial or delegate key creation to the trusted Vault service, with no impact\non the functionality CIEPS can provide. The CIEPS service can be scoped to\nrespond to requests from either a single PKI mount or multiple, getting\ninformation about the requesting user and the Vault PKI instance from the\nCIEPS messages.\n\nBecause the CIEPS service only needs knowledge of validating requests and\ntemplating the final certificate structure, its developers need only be\nconcerned with the business policy logic and not broader PKI concerns (such\nas generating key material or re-implementing support for other issuance\nprotocols).\n\n## Certificate Issuance External Policy Service (CIEPS) webhook format\n\nThe CIEPS protocol is a REST-based, optionally mTLS protected webhook. The\nexternal service configuration specifies the single URL that Vault will POST\nthe formatted CIEPS request to. 
When the CIEPS service is unavailable (either\ndue to misconfiguration or outage), Vault will reject the request and it is\nup to the client to retry the request at a later time.\n\nFor convenience, Go versions of these structs are available [from the Vault\nSDK](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/sdk\/helper\/certutil\/cieps.go).\n\n### Vault to CIEPS request format\n\nThis document outlines CIEPS request\/response version 1.\n\nUsing the `application\/json` content type, Vault will post the following\nrequest body as a JSON object:\n\n - `request_version` `(int: 1)` - The version of the CIEPS request sent by\n   Vault; a compatible response format is expected.\n\n - `request_uuid` `(string)` - A random UUID which serves to identify this\n   request. This value must be sent in the response.\n\n - `synchronous` `(bool: true)` - A boolean indicating whether the request is\n   synchronous or not. Presently set to true; no asynchronous response is\n   understood.\n\n - `user_request_key_values` `(map[string]interface{})` - The unvalidated\n   request parameters sent by the user. It is up to the CIEPS service to\n   validate these prior to using them. The following fields may be present,\n   including any other fields submitted by the user:\n\n   - `csr` `(string)` - A PEM format CSR submitted either by the client (\n     in the case of `\/sign` or ACME requests) or on the client's behalf\n     (in the case of `\/issue` requests, where key material is generated\n     by Vault).\n\n - `identity_request_key_values` `(map[string]interface{})` - Values related\n   to the user's identity. When the request type is ACME, this value is not\n   populated. 
These are:\n\n   - `entity_id` `(string)` - The entity identifier from the request after\n     authentication.\n\n   - `entity` `(map[string]interface{})` - The entire resolved `logical.Entity`\n     of the user after authentication; subject to change by the\n     `entity_jmespath` parameter in the configuration.\n\n   - `groups` `([]map[string]interface{})` - The set of resolved\n     `logical.Groups` of the user after authentication; subject to change by\n     the `group_jmespath` parameter in the configuration.\n\n   ~> **Note**: in the event that the direct token backend or a root token is\n      used, entity information may not exist. In either case,\n      `identity_request_key_values` will be omitted.\n\n - `acme_request_key_values` `(map[string]interface{})` - Values related to\n   ACME authorization challenges attached to the finished order. Only present\n   when the request type is ACME. These are:\n\n   - `authorizations` `(map[string]interface{})` - Authorizations and\n     challenges solved by the client to move this order to the finalization\n     state.\n\n   - `account` `(map[string]interface{})` - Information related to the ACME\n     account issuing the request. These are:\n\n     - `id` `(string)` - The UUID of the ACME account.\n\n     - `directory` `(string)` - The path to the ACME directory requested by\n       this account.\n\n     - `contact` `([]string)` - Unverified contact information submitted by\n       the requesting ACME account on creation.\n\n     - `created_date` `(string: RFC 3339 format)` - Timestamp when the account\n       was created.\n\n     - `eab` `(map[string]interface{}, optional)` - When present, the details\n       of the EAB used to authorize this account via Vault authentication. 
If\n       not present, this ACME account was created without EAB bindings.\n\n       - `key_id` `(string)` - Identifier of the EAB binding used by this\n         account.\n\n       - `key_type` `(string)` - Key type of the EAB binding used by this\n         account.\n\n       - `created_date` `(string: RFC 3339 format)` - Timestamp when the\n         account was created.\n\n - `vault_request_values` `(map[string]interface{})` - Request values validated\n   or created by Vault. These have higher trust than the unvalidated\n   `user_request_key_values`. These are:\n\n   - `policy_name` `(string: \"\")` - The optional policy name specified by the\n     requester. When the issuance mode is not ACME (or if it was ACME and EAB\n     was enforced), this has been validated by Vault's ACL system.\n\n   - `mount` `(string)` - The request's mount path as known by the PKI plugin.\n\n   - `namespace` `(string)` - The request's namespace the mount path exists\n     within as known by the PKI plugin.\n\n   - `vault_is_performance_standby` `(bool)` - Asserted when this requesting\n     node is a standby node. When the service indicates storage is required in\n     its response, Vault will forward the user's HTTP request up to an active\n     node, requiring it to re-submit the CIEPS request. 
In this case, if the\n     service knows it must always store certificates and sees a request from\n     a standby node, it can skip policy and template evaluation or cache the\n     results for a second pass.\n\n   - `vault_is_performance_secondary` `(bool)` - Asserted when this requesting\n      node is from a performance secondary versus the primary cluster.\n\n   - `issuance_mode` `(string: \"sign\", \"issue\", \"ica\", or \"acme\")` - The type\n     of the request: whether a REST call to `\/external-policy\/sign(\/:policy)`,\n     to `\/external-policy\/issue(\/:policy)`, `\/external-policy\/sign-intermediate(\/:policy)`,\n     or an ACME request, respectively.\n\n   - `vault_generated_private_key` `(bool)` - Whether or not Vault generated\n     the key material behind this request. Set to true when\n     `issuance_mode=\"issue\"` only presently.\n\n   - `requested_issuer_name` `(string)` - Name of the user's requested issuer;\n     can be changed by modifying the response `issuer_ref` value.\n\n   - `requested_issuer_id` `(string)` - UUID of the user's requested issuer;\n     can be changed by modifying the response `issuer_ref` value.\n\n   - `requested_issuer_cert` `(string)` - PEM format certificate of the user's\n     requested issuer; can be changed by modifying the response `issuer_ref`\n     value.\n\n   - `requested_issuance_config` `(map[string]interface{})` - Configuration\n     used for leaf certificate issuance. These are:\n\n     - `aia_values` `(map[string]interface{})` - AIA values (CA, CRL, and\n       OCSP) for the suggested issuer. 
These may differ from the actual values\n       used for issuance of this request if `issuer_ref` is set on the response.\n\n     - `leaf_not_after_behavior` `(string: \"err\", \"truncate\", or \"permit\")` - leaf\n       validity period behavior for the suggested issuer.\n\n     - `mount_default_ttl` `(string)` - Suggested default TTL set on mount tuning.\n\n     - `mount_max_ttl` `(string)` - Suggested maximum TTL set on the mount tuning.\n\n### CIEPS to Vault response format\n\nThe CIEPS engine must reply to this POST request with a `200 OK` status,\nregardless of whether a certificate should be issued or not. Redirects will\nnot be followed by Vault; any proxy or load balancing functionality should be\nstrictly transparent to the caller. Any verbatim message returned by a non-200\nstatus code will not be returned, either in Vault server logs or to the user.\n\nIn the response to the above request, only one of the `certificate` or `error`\nfields should be specified. In the event both `certificate` and `error` are\npresent, the `error` will be appended to the returned `warnings` and the\n`certificate` will be issued.\n\nUsing the `application\/json` content type, the server should reply with the\nfollowing JSON object:\n\n - `request_uuid` `(string)` - The random UUID which the server used to\n   identify this request.\n\n - `error` `(string, optional)` - The error message to be returned to the\n   user about why their request failed. Only one of the `error` or\n   `certificate` response parameters should be specified.\n\n - `warnings` `([]string, optional)` - Optional warnings to be returned to the user\n   about minor issues with their request.\n\n - `certificate` `(string, optional)` - A PEM format certificate to be signed\n   by the Vault service. 
Only one of the `error` or `certificate` response\n   parameters should be specified.\n\n - `issuer_ref` `(string)` - The issuer reference to use to sign this request.\n   If the user's issuer choice (in `requested_issuer_id`) is OK, this must\n   be set in this field.\n\n - `store_certificate` `(bool: false)` - Whether or not to store the signed\n   certificate.\n\n - `generate_lease` `(bool: false)` - Whether or not Vault should generate an\n   associated lease for the certificate. Note that to generate a lease,\n   `store_certificate` also needs to be set to `true`, otherwise no lease\n   will be generated.\n\nThe certificate's signature will be ignored and replaced by a signature created\nby the specified issuer. If a signature algorithm compatible with this issuer\nis specified on the certificate, it will be preserved; otherwise, the default\nsignature algorithm for this issuer's key type will be used.\n\nThe certificate's AIA information will be replaced by the information from the\nspecified issuer, if present, else the global AIA URLs will be set, replacing\nthe AIA URIs and CRL distribution point extensions. 
Additionally, the\nAuthority Key Identifier extension will be replaced by the issuer's Subject\nKey Identifier extension value as mandated by RFC 5280.\n\n## Tutorial\n\nRefer to the following tutorials for PKI secrets engine usage examples:\n\n- [Build Your Own Certificate Authority (CA)](\/vault\/tutorials\/secrets-management\/pki-engine)\n- [Build Certificate Authority (CA) in Vault with an offline Root](\/vault\/tutorials\/secrets-management\/pki-engine-external-ca)\n- [Enable ACME with PKI secrets engine](\/vault\/tutorials\/secrets-management\/pki-acme-caddy)\n- [PKI Secrets Engine with Managed Keys](\/vault\/tutorials\/enterprise\/managed-key-pki)\n- [PKI Unified CRL and OCSP With Cross Cluster\n  Revocation](\/vault\/tutorials\/secrets-management\/pki-unified-crl-ocsp-cross-cluster)\n- [Configure Vault as a Certificate Manager in Kubernetes with\n  Helm](\/vault\/tutorials\/kubernetes\/kubernetes-cert-manager)\n- [Generate mTLS Certificates for Nomad using\n  Vault](\/vault\/tutorials\/secrets-management\/vault-pki-nomad)\n\n\n## API\n\nThe PKI secrets engine has a full HTTP API. 
Please see the\n[PKI secrets engine API](\/vault\/api-docs\/secret\/pki) for more\ndetails.","site":"vault"}
can skip policy and template evaluation or cache the      results for a second pass         vault is performance secondary    bool     Asserted when this requesting       node is from a performance secondary versus the primary cluster         issuance mode    string   sign    issue    ica   or  acme      The type      of the request  whether a REST call to   external policy sign   policy         to   external policy issue   policy      external policy sign intermediate   policy         or an ACME request  respectively         vault generated private key    bool     Whether or not Vault generated      the key material behind this request  Set to true when       issuance mode  issue   only presently         requested issuer name    string     Name of the user s requested issuer       can be changed by modifying the response  issuer ref  value         requested issuer id    string     UUID of the user s requested issuer       can be changed by modifying the response  issuer ref  value         requested issuer cert    string     PEM format certificate of the user s      requested issuer  can be changed by modifying the response  issuer ref       value         requested issuance config    map string interface       Configuration      used for leaf certificate issuance  These are           aia values    map string interface       AIA values  CA  CRL  and        OCSP  for the suggested issuer  These may differ from the actual values        used for issuance of this request if  issuer ref  is set on the response           leaf not after behavior    string   err    truncate   or  permit      leaf        validity period behavior for the suggested issuer           mount default ttl    string     Suggested default TTL set on mount tuning           mount max ttl    string     Suggested maximum TTL set on the mount tuning       CIEPS to Vault response format  The CIEPS engine must reply to this POST response with a  200 OK  status  regardless of whether a certificate should be 
issued or not  Redirects will not be followed by Vault  any proxy or load balancing functionality should be strictly transparent to the caller  Any verbatim message returned by a non 200 status code will not be returned  either in Vault server logs or to the user   In the response to the above request  only one of the  certificate  or  error  fields should be specified  In the event both  certificate  and  error  are present  the  error  will be appended to the returned  warnings  and the  certificate  will be issued   Using the  application json  content type  the server should reply with the following JSON object       request uuid    string     The random UUID which the server used to    identify this request       error    string  optional     The error message to be returned to the    user about why their request failed  Only one of the  error  or     certificate  response parameters should be specified       warnings      string  optional     Optional warnings to be returned to the user    about minor issues with their request       certificate    string  optional     A PEM format certificate to be signed    by the Vault service  Only one of the  error  or  certificate  response    parameters should be specified       issuer ref    string     The issuer reference to use to sign this request     If the user s issuer choice  in  requested issuer id   is OK  this must    be set in this field       store certificate    bool  false     Whether or not to store the signed    certificate       generate lease    bool  false     Whether or not Vault should generate an    associated lease for the certificate  Note that to generate a lease      store certificate  also needs to be set to  true   otherwise no lease    will be generated   The certificate s signature will be ignored and replaced by a signature created by the specified issuer  If a signature algorithm compatible with this issuer is specified on the certificate  it will be preserved  otherwise  the default 
signature algorithm for this issuer s key type will be used   The certificate s AIA information will be replaced by the information from the specified issuer  if present  else the global AIA URLs will be set  replacing the AIA URIs and CRL distribution point extensions  Additionally  the Authority Key Identifier extension will be replaced by the issuer s Subject Key Identifier extension value as mandated by RFC 5280      Tutorial  Refer to the following tutorials for PKI secrets engine usage examples      Build Your Own Certificate Authority  CA    vault tutorials secrets management pki engine     Build Certificate Authority  CA  in Vault with an offline Root   vault tutorials secrets management pki engine external ca     Enable ACME with PKI secrets engine   vault tutorials secrets management pki acme caddy     PKI Secrets Engine with Managed Keys   vault tutorials enterprise managed key pki     PKI Unified CRL and OCSP With Cross Cluster   Revocation   vault tutorials secrets management pki unified crl ocsp cross cluster     Configure Vault as a Certificate Manager in Kubernetes with   Helm   vault tutorials kubernetes kubernetes cert manager     Generate mTLS Certificates for Nomad using   Vault   vault tutorials secrets management vault pki nomad       API  The PKI secrets engine has a full HTTP API  Please see the  PKI secrets engine API   vault api docs secret pki  for more details "}
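The CIEPS request/response flow described above can be sketched as a minimal policy webhook. This is an illustrative sketch only, not a supported implementation: the field names follow the CIEPS v1 format documented above, but the policy check and the placeholder certificate body are hypothetical — a real service would parse the CSR, apply business policy, and return a properly templated PEM certificate for Vault to re-sign (the Go structs in the Vault SDK are the canonical types).

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PEM_CSR_HEADER = "-----BEGIN CERTIFICATE REQUEST-----"


def build_cieps_response(request: dict) -> dict:
    """Evaluate a CIEPS v1 request body and build the reply Vault expects."""
    response = {
        # The request UUID must be echoed back so Vault can correlate.
        "request_uuid": request["request_uuid"],
        # Accept the issuer Vault suggested; a policy service may
        # substitute a different issuer reference here instead.
        "issuer_ref": request["vault_request_values"]["requested_issuer_id"],
        "store_certificate": False,
        "generate_lease": False,
    }
    if request.get("request_version") != 1:
        response["error"] = "unsupported CIEPS request version"
        return response
    # user_request_key_values is unvalidated input: the CIEPS service
    # must check it before use. Only one of `error`/`certificate`
    # should be set in the response.
    csr = request.get("user_request_key_values", {}).get("csr", "")
    if not csr.startswith(PEM_CSR_HEADER):
        response["error"] = "missing or malformed CSR"
        return response
    # Placeholder: stands in for parsing the CSR, applying policy, and
    # emitting a templated certificate whose signature Vault replaces.
    response["certificate"] = (
        "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
    )
    return response


class CIEPSHandler(BaseHTTPRequestHandler):
    """Vault POSTs the request to the single configured URL and expects
    200 OK whether issuance is approved or denied; denial is expressed
    via the `error` field, never via the HTTP status code."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        reply = json.dumps(build_cieps_response(json.loads(body))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)


# To serve (behind mTLS in practice):
#   HTTPServer(("127.0.0.1", 8443), CIEPSHandler).serve_forever()
```

Note that the sketch denies by setting `error` while still returning `200 OK`, mirroring the protocol requirement that non-200 responses are discarded by Vault.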
{"questions":"vault To successfully deploy this secrets engine there are a number of important The PKI secrets engine for Vault generates TLS certificates page title PKI Secrets Engines Considerations considerations to be aware of as well as some preparatory steps that should be layout docs PKI secrets engine considerations","answers":"---\nlayout: docs\npage_title: 'PKI - Secrets Engines: Considerations'\ndescription: The PKI secrets engine for Vault generates TLS certificates.\n---\n\n# PKI secrets engine - considerations\n\nTo successfully deploy this secrets engine, there are a number of important\nconsiderations to be aware of, as well as some preparatory steps that should be\nundertaken. You should read all of these _before_ using this secrets engine or\ngenerating the CA to use with this secrets engine.\n\n## Table of contents\n\n - [Be Careful with Root CAs](#be-careful-with-root-cas)\n   - [Managed Keys](#managed-keys)\n - [One CA Certificate, One Secrets Engine](#one-ca-certificate-one-secrets-engine)\n   - [Always Configure a Default Issuer](#always-configure-a-default-issuer)\n - [Key Types Matter](#key-types-matter)\n   - [Cluster Performance and Key Types](#cluster-performance-and-key-types)\n - [Use a CA Hierarchy](#use-a-ca-hierarchy)\n   - [Cross-Signed Intermediates](#cross-signed-intermediates)\n - [Cluster URLs are Important](#cluster-urls-are-important)\n - [Automate Rotation with ACME](#automate-rotation-with-acme)\n   - [ACME Stores Certificates](#acme-stores-certificates)\n   - [ACME Role Restrictions Require EAB](#acme-role-restrictions-require-eab)\n   - [ACME and the Public Internet](#acme-and-the-public-internet)\n   - [ACME Errors are in Server Logs](#acme-errors-are-in-server-logs)\n   - [ACME Security Considerations](#acme-security-considerations)\n   - [ACME and Client Counting](#acme-and-client-counting)\n - [Keep Certificate Lifetimes Short, For CRL's Sake](#keep-certificate-lifetimes-short-for-crls-sake)\n   - [NotAfter Behavior 
on Leaf Certificates](#notafter-behavior-on-leaf-certificates)\n   - [Cluster Performance and Quantity of Leaf Certificates](#cluster-performance-and-quantity-of-leaf-certificates)\n - [You must configure issuing\/CRL\/OCSP information _in advance_](#you-must-configure-issuingcrlocsp-information-_in-advance_)\n - [Distribution of CRLs and OCSP](#distribution-of-crls-and-ocsp)\n - [Automate CRL Building and Tidying](#automate-crl-building-and-tidying)\n - [Spectrum of Revocation Support](#spectrum-of-revocation-support)\n   - [What Are Cross-Cluster CRLs?](#what-are-cross-cluster-crls)\n - [Issuer Subjects and CRLs](#issuer-subjects-and-crls)\n - [Automate Leaf Certificate Renewal](#automate-leaf-certificate-renewal)\n - [Safe Minimums](#safe-minimums)\n - [Token Lifetimes and Revocation](#token-lifetimes-and-revocation)\n - [Safe Usage of Roles](#safe-usage-of-roles)\n - [Telemetry](#telemetry)\n - [Auditing](#auditing)\n - [Role-Based Access](#role-based-access)\n - [Replicated DataSets](#replicated-datasets)\n - [Cluster Scalability](#cluster-scalability)\n - [PSS Support](#pss-support)\n - [Issuer Storage Migration Issues](#issuer-storage-migration-issues)\n - [Issuer Constraints Enforcement](#issuer-constraints-enforcement)\n - [Tutorial](#tutorial)\n - [API](#api)\n\n## Be careful with root CAs\n\nVault storage is secure, but not as secure as a piece of paper in a bank vault.\nIt is, after all, networked software. If your root CA is hosted outside of\nVault, don't put it in Vault as well; instead, issue a shorter-lived\nintermediate CA certificate and put this into Vault. This aligns with industry\nbest practices.\n\nSince 0.4, the secrets engine supports generating self-signed root CAs and\ncreating and signing CSRs for intermediate CAs. 
In each instance, for security\nreasons, the private key can _only_ be exported at generation time, and the\nability to do so is part of the command path (so it can be put into ACL\npolicies).\n\nIf you plan on using intermediate CAs with Vault, it is suggested that you let\nVault create CSRs and do not export the private key, then sign those with your\nroot CA (which may be a second mount of the `pki` secrets engine).\n\n### Managed keys\n\nSince 1.10, Vault Enterprise can access private key material in a\n[_managed key_](\/vault\/docs\/enterprise\/managed-keys).  In this case, Vault never sees the\nprivate key, and the external KMS or HSM performs certificate signing operations.\nManaged keys are configured by selecting the `kms` type when generating a root\nor intermediate.\n\n## One CA certificate, one secrets engine\n\nSince Vault 1.11.0, the PKI Secrets Engine supports multiple issuers in a single\nmount. However, in order to simplify the configuration, it is _strongly_\nrecommended that operators limit a mount to a single issuer. If you want to issue\ncertificates from multiple disparate CAs, mount the PKI secrets engine at multiple\nmount points with separate CA certificates in each.\n\nThe rationale for separating mounts is to simplify permissions management:\nvery few individuals need access to perform operations with the root, but\nmany need access to create leaves. The operations on a root should generally\nbe limited to issuing and revoking intermediate CAs, which is a highly\nprivileged operation; it becomes much easier to audit these operations when\nthey're in a separate mount than if they're mixed in with day-to-day leaf\nissuance.\n\nA common pattern is to have one mount act as your root CA and to use this CA\nonly to sign intermediate CA CSRs from other PKI secrets engines.\n\nTo keep old CAs active, there's two approaches to achieving rotation:\n\n 1. Use multiple secrets engines. 
This allows a fresh start, preserving the\n    old issuer and CRL. Vault ACL policy can be updated to deny new issuance\n    under the old mount point and roles can be re-evaluated before being\n    imported into the new mount point.\n 2. Use multiple issuers in the same mount point. The usage of the old issuer\n    can be restricted to CRL signing, and existing roles and ACL policy can be\n    kept as-is. This allows cross-signing within the same mount, and consumers\n    of the mount won't have to update their configuration. Once the transitional\n    period for this rotation has completed and all past-issued certificates have\n    expired, it is encouraged to fully remove the old issuer and any unnecessary\n    cross-signed issuers from the mount point.\n\nAnother suggested use case for multiple issuers in the same mount is splitting\nissuance by TTL lifetime. For short-lived certificates, an intermediate\nstored in Vault will often out-perform an HSM-backed intermediate. For\nlonger-lived certificates, however, it is often important to have the\nintermediate key material secured throughout the lifetime of the end-entity\ncertificate. This means that two intermediates in the same mount -- one backed\nby the HSM and one backed by Vault -- can satisfy both use cases. Operators\ncan make roles setting maximum TTLs for each issuer and consumers of the\nmount can decide which to use.\n\n### Always configure a default issuer\n\nFor backwards compatibility, [the default issuer](\/vault\/api-docs\/secret\/pki#read-issuers-configuration)\nis used to service PKI endpoints without an explicit issuer (either via path\nselection or role-based selection). When certificates are revoked and their\nissuer is no longer part of this PKI mount, Vault places them on the default\nissuer's CRL. 
This means maintaining a default issuer is important for both\nbackwards compatibility for issuing certificates and for ensuring revoked\ncertificates land on a CRL.\n\n## Key types matter\n\nCertain key types have impacts on performance. Signing certificates from an RSA\nkey will be slower than issuing from an ECDSA or Ed25519 key. Key generation\n(using `\/issue\/:role` endpoints) using RSA keys will also be slow: RSA key\ngeneration involves finding suitable random primes, whereas Ed25519 keys can\nbe random data. As the number of bits goes up (RSA 2048 -> 4096 or ECDSA\nP-256 -> P-521), signature times also increase.\n\nThis matters in both directions: not only is issuance more expensive,\nbut validation of the corresponding signature (in say, TLS handshakes) will\nalso be more expensive. Careful consideration of both issuer and issued key\ntypes can have meaningful impacts on performance of not only Vault, but\nsystems using these certificates.\n\n### Cluster performance and key types\n\nThe [benchmark-vault](https:\/\/github.com\/hashicorp\/vault-benchmark) project\ncan be used to measure the performance of a Vault PKI instance. In general,\nsome considerations to be aware of:\n\n - RSA key generation is much slower and more variable than EC key\n   generation. If performance and throughput are a necessity, consider using\n   EC keys (including NIST P-curves and Ed25519) instead of RSA.\n\n - Key signing requests (via `\/pki\/sign`) will be faster than issuance\n   (`\/pki\/issue`), especially for RSA keys: this removes the necessity for\n   Vault to generate key material, as it can sign the key material provided\n   by the client. 
This\n   signing step is common between both endpoints, so key generation is pure\n   overhead if the client has a sufficiently secure source of entropy.\n\n - The CA's key type matters as well: using a RSA CA will result in a RSA\n   signature and takes longer than a ECDSA or Ed25519 CA.\n\n - Storage is an important factor: with [BYOC Revocation](\/vault\/api-docs\/secret\/pki#revoke-certificate),\n   using `no_store=true` still gives you the ability to revoke certificates\n   and audit logs can be used to track issuance. Clusters using a remote\n   storage (like Consul) over a slow network and using `no_store=false`\n   or `no_store_cert_metadata=false` along with specifying metadata on issuance, will\n   result in additional latency on issuance. Adding leases for every issued\n   certificate compounds the problem.\n\n    - Storing too many certificates results in longer `LIST \/pki\/certs` time,\n      including the time to tidy the instance. As such, for large scale\n      deployments (>= 250k active certificates) it is recommended to use audit\n      logs to track certificates outside of Vault.\n\nAs a general comparison on unspecified hardware, using `benchmark-vault` for\n`30s` on a local, single node, raft-backed Vault instance:\n\n  - Vault can issue 300k certificates using EC P-256 for CA & leaf keys and\n    without storage.\n\n    - But switching to storing these leaves drops that number to 65k, and only\n      20k with leases.\n\n - Using large, expensive RSA-4096 bit keys, Vault can only issue 160 leaves,\n   regardless of whether or not storage or leases were used. 
The 95% key\n   generation time is above 10s.\n\n    - In comparison, using P-521 keys, Vault can issue closer to 30k leaves\n      without leases and 18k with leases.\n\nThese numbers are for example only, to represent the impact different key types\ncan have on PKI cluster performance.\n\nThe use of ACME adds additional latency into these numbers, both because\ncertificates need to be stored and because challenge validation needs to\nbe performed.\n\n## Use a CA hierarchy\n\nIt is generally recommended to use a hierarchical CA setup, with a root\ncertificate which issues one or more intermediates (based on usage), which\nin turn issue the leaf certificates.\n\nThis allows stronger storage or policy guarantees around [protection of the\nroot CA](#be-careful-with-root-cas), while letting Vault manage the\nintermediate CAs and issuance of leaves. Different intermediates might be\nissued for different usage, such as VPN signing, Email signing, or testing\nversus production TLS services. This helps to keep CRLs limited to specific\npurposes: for example, VPN services don't care about the revoked set of\nemail signing certificates if they're using separate certificates and\ndifferent intermediates, and thus don't need both CRL contents. Additionally,\nthis allows higher risk intermediates (such as those issuing longer-lived\nemail signing certificates) to have HSM-backing without impacting the\nperformance of easier-to-rotate intermediates and certificates (such as\nTLS intermediates).\n\nVault supports the use of both the [`allowed_domains` parameter on\nRoles](\/vault\/api-docs\/secret\/pki#allowed_domains) and the [`permitted_dns_domains`\nparameter to set the Name Constraints extension](\/vault\/api-docs\/secret\/pki#permitted_dns_domains)\non root and intermediate generation. 
This allows for several layers of\nseparation of concerns between TLS-based services.\n\n### Cross-Signed intermediates\n\nWhen cross-signing intermediates from two separate roots, two separate\nintermediate issuers will exist within the Vault PKI mount. In order to\ncorrectly serve the cross-signed chain on issuance requests, the\n`manual_chain` override is required on either or both intermediates. This\ncan be constructed in the following order:\n\n - this issuer (`self`)\n - this root\n - the other copy of this intermediate\n - the other root\n\nAll requests to this issuer for signing will now present the full cross-signed\nchain.\n\n## Cluster URLs are important\n\nIn Vault 1.13, support for [templated AIA\nURLs](\/vault\/api-docs\/secret\/pki#enable_aia_url_templating-1)\nwas added. With the [per-cluster URL\nconfiguration](\/vault\/api-docs\/secret\/pki#set-cluster-configuration) pointing\nto this Performance Replication cluster, AIA information will point to the\ncluster that issued this certificate automatically.\n\nIn Vault 1.14, with ACME support, the same configuration is used for allowing\nACME clients to discover the URL of this cluster.\n\n~> **Warning**: It is important to ensure that this configuration is\n   up to date and maintained correctly, always pointing to the node's\n   PR cluster address (which may be a load-balanced or a DNS round-robin\n   address). If this configuration is not set on every Performance Replication\n   cluster, certificate issuance (via REST and\/or via ACME) will fail.\n\n## Automate rotation with ACME\n\nIn Vault 1.14, support for the [Automatic Certificate Management Environment\n(ACME)](https:\/\/datatracker.ietf.org\/doc\/html\/rfc8555) protocol has been\nadded to the PKI Engine. 
This is a standardized way to handle validation,\nissuance, rotation, and revocation of server certificates.\n\nMany ecosystems, from web servers like Caddy, Nginx, and Apache, to\norchestration environments like Kubernetes (via cert-manager) natively\nsupport issuance via the ACME protocol. For deployments without native\nsupport, stand-alone tools like certbot support fetching and renewing\ncertificates on behalf of consumers. Vault's PKI Engine only includes server\nsupport for ACME; no client functionality has been included.\n\n~> Note: Vault's PKI ACME server caps the certificate's validity at 90 days\n   maximum by default, overridable using the ACME config max_ttl parameter.\n   Shorter validity durations can be set via limiting the role's TTL to\n   be under the global ACME configured limit.\n   Aligning with Let's Encrypt, we do not support the optional `NotBefore`\n   and `NotAfter` order request parameters.\n\n### ACME stores certificates\n\nBecause ACME requires stored certificates in order to function, the notes\n[below about automating tidy](#automate-crl-building-and-tidying) are\nespecially important for the long-term health of the PKI cluster. ACME also\nintroduces additional resource types (accounts, orders, authorizations, and\nchallenges) that must be tidied via [the `tidy_acme=true`\noption](\/vault\/api-docs\/secret\/pki#tidy). 
Orders, authorizations, and\nchallenges are [cleaned up based on the\n`safety_buffer`](\/vault\/api-docs\/secret\/pki#safety_buffer)\nparameter, but accounts can live longer past their last issued certificate\nby controlling the [`acme_account_safety_buffer`\nparameter](\/vault\/api-docs\/secret\/pki#acme_account_safety_buffer).\n\nAs a consequence of the above, and like the discussions in the [Cluster\nScalability](#cluster-scalability) section, because these roles have\n`no_store=false` set, ACME can only issue certificates on the active nodes\nof PR clusters; standby nodes, if contacted, will transparently forward\nall requests to the active node.\n\n### ACME role restrictions require EAB\n\nBecause ACME by default has no external authorization engine and is\nunauthenticated from a Vault perspective, the use of roles with ACME\nin the default configuration is of limited value, as any ACME client\ncan request certificates under any role by proving possession of the\nrequested certificate identifiers.\n\nTo solve this issue, there are two possible approaches:\n\n 1. Use a restrictive [`allowed_roles`, `allowed_issuers`, and\n    `default_directory_policy` ACME\n    configuration](\/vault\/api-docs\/secret\/pki#set-acme-configuration)\n    to let only a single role and issuer be used. This prevents user\n    choice, allowing some global restrictions to be placed on issuance\n    and avoids requiring ACME clients to have (at initial setup) access\n    to a Vault token or other mechanism for acquiring a Vault EAB ACME token.\n 2. 
Use a more permissive [configuration with\n    `eab_policy=always-required`](\/vault\/api-docs\/secret\/pki#eab_policy)\n    to allow more roles and users to select the roles, but bind ACME clients\n    to a Vault token which can be suitably ACL'd to particular sets of\n    approved ACME directories.\n\nThe choice of approach depends on the policies of the organization wishing\nto use ACME.\n\nAnother consequence of the unauthenticated nature of ACME requests in Vault\nis that role templating, based on entity information, cannot be used, as\nthere is no token and thus no entity associated with the request, even when\nEAB binding is used.\n\n### ACME and the public internet\n\nUsing ACME is possible over the public internet; public CAs like Let's Encrypt\noffer this as a service. Similarly, organizations running internal PKI\ninfrastructure might wish to issue server certificates to pieces of\ninfrastructure outside of their internal network boundaries, from a publicly\naccessible Vault instance. By default, without enforcing a restrictive\n`eab_policy`, this results in a complicated threat model: _any_ external\nclient which can prove possession of a domain can issue a certificate under\nthis CA, which might be considered more trusted by this organization.\n\nAs such, we strongly recommend publicly facing Vault instances (such as HCP\nVault) enforce that PKI mount operators have required a [restrictive\n`eab_policy=always-required` configuration](\/vault\/api-docs\/secret\/pki#eab_policy).\nSystem administrators of Vault instances can enforce this by [setting the\n`VAULT_DISABLE_PUBLIC_ACME=true` environment\nvariable](\/vault\/api-docs\/secret\/pki#acme-external-account-bindings).\n\n### ACME errors are in server logs\n\nBecause the ACME client is not necessarily trusted (as account registration\nmay not be tied to a valid Vault token when EAB is not used), many error\nmessages end up in the Vault server logs out of security necessity. 
When\ntroubleshooting issues with clients requesting certificates, first check\nthe client's logs, if any, (e.g., certbot will state the log location on\nerrors), and then correlate with Vault server logs to identify the failure\nreason.\n\n### ACME security considerations\n\nACME allows any client to use Vault to make some sort of external call;\nwhile the design of ACME attempts to minimize this scope and will prohibit\nissuance if incorrect servers are contacted, it cannot account for all\npossible remote server implementations. Vault's ACME server makes three\ntypes of requests:\n\n 1. DNS requests for `_acme-challenge.<domain>`, which should be least\n    invasive and most safe.\n 2. TLS ALPN requests for the `acme-tls\/1` protocol, which should be\n    safely handled by the TLS before any application code is invoked.\n 3. HTTP requests to `http:\/\/<domain>\/.well-known\/acme-challenge\/<token>`,\n    which could be problematic based on server design; if all requests,\n    regardless of path, are treated the same and assumed to be trusted,\n    this could result in Vault being used to make (invalid) requests.\n    Ideally, any such server implementations should be updated to ignore\n    such ACME validation requests or to block access originating from Vault\n    to this service.\n\nIn all cases, no information about the response presented by the remote\nserver is returned to the ACME client.\n\nWhen running Vault on multiple networks, note that Vault's ACME server\nplaces no restrictions on requesting client\/destination identifier\nvalidations paths; a client could use a HTTP challenge to force Vault to\nreach out to a server on a network it could otherwise not access.\n\n### ACME and client counting\n\nIn Vault 1.14, ACME contributes differently to usage metrics than other\ninteractions with the PKI Secrets Engine. 
Due to its use of unauthenticated\nrequests (which do not generate Vault tokens), it would not be counted in\nthe traditional [activity log APIs](\/vault\/api-docs\/system\/internal-counters#activity-export).\nInstead, certificates issued via ACME will be counted via their unique\ncertificate identifiers (the combination of CN, DNS SANs, and IP SANs).\nThese will create a stable identifier that will be consistent across\nrenewals, other ACME clients, mounts, and namespaces, contributing to\nthe activity log presently as a non-entity token attributed to the first\nmount which created that request.\n\n## Keep certificate lifetimes short, for CRL's sake\n\nThis secrets engine aligns with Vault's philosophy of short-lived secrets. As\nsuch it is not expected that CRLs will grow large; the only place a private key\nis ever returned is to the requesting client (this secrets engine does _not_\nstore generated private keys, except for CA certificates). In most cases, if the\nkey is lost, the certificate can simply be ignored, as it will expire shortly.\n\nIf a certificate must truly be revoked, the normal Vault revocation function can\nbe used, and any revocation action will cause the CRL to be regenerated. When\nthe CRL is regenerated, any expired certificates are removed from the CRL (and\nany revoked, expired certificate are removed from secrets engine storage). This\nis an expensive operation! Due to the structure of the CRL standard, Vault must\nread **all** revoked certificates into memory in order to rebuild the CRL and\nclients must fetch the regenerated CRL.\n\nThis secrets engine does not support multiple CRL endpoints with sliding date\nwindows; often such mechanisms will have the transition point a few days apart,\nbut this gets into the expected realm of the actual certificate validity periods\nissued from this secrets engine. 
A good rule of thumb for this secrets engine\nwould be to simply not issue certificates with a validity period greater than\nyour maximum comfortable CRL lifetime. Alternately, you can control CRL caching\nbehavior on the client to ensure that checks happen more often.\n\nOften multiple endpoints are used in case a single CRL endpoint is down so that\nclients don't have to figure out what to do with a lack of response. Run Vault\nin HA mode, and the CRL endpoint should be available even if a particular node\nis down.\n\n~> Note: Since Vault 1.11.0, with multiple issuers in the same mount point,\n   different issuers may have different CRLs (depending on subject and key\n   material). This means that Vault may need to regenerate multiple CRLs.\n   This is again a rationale for keeping TTLs short and avoiding revocation\n   if possible.\n\n~> Note: Since Vault 1.12.0, we support two complementary revocation\n   mechanisms: Delta CRLs, which allow for rebuilds of smaller, incremental\n   additions to the last complete CRL, and OCSP, which allows responding to\n   revocation status requests for individual certificates. When coupled with\n   the new CRL auto-rebuild functionality, this means that the revoking step\n   isn't as costly (as the CRL isn't always rebuilt on each revocation),\n   outside of storage considerations. 
However, while the rebuild operation\n   still can be expensive with lots of certificates, it will be done on a\n   schedule rather than on demand.\n\n### NotAfter behavior on leaf certificates\n\nIn Vault 1.11.0, the PKI Secrets Engine has introduced a new\n`leaf_not_after_behavior` [parameter on\nissuers](\/vault\/api-docs\/secret\/pki#leaf_not_after_behavior).\nThis allows modification of the issuance behavior: should Vault `err`,\npreventing issuance of a longer-lived leaf cert than issuer, silently\n`truncate` to that of the issuer's `NotAfter` value, or `permit` longer\nexpirations.\n\nIt is strongly suggested to use `err` or `truncate` for intermediates;\n`permit` is only useful for root certificates, as intermediate's NotAfter\nexpiration are checked when validating presented chains.\n\nIn combination with a cascading expiration with longer lived roots (perhaps\non the range of 2-10 years), shorter lived intermediates (perhaps on the\nrange of 6 months to 2 years), and short-lived leaf certificates (on the\nrange of 30 to 90 days), and the [rotation strategies discussed in other\nsections](\/vault\/docs\/secrets\/pki\/rotation-primitives), this should keep the\nCRLs adequately small.\n\n### Cluster performance and quantity of leaf certificates\n\nAs mentioned above, keeping TTLs short (or using `no_store=true` and\n`no_store_cert_metadata=true`) and avoiding\nleases is important for a healthy cluster. 
However, it is important to note that this is a scale problem: 10-1000 long-lived, stored certificates are probably fine, but 50k-100k become a problem and 500k+ stored, unexpired certificates can negatively impact even large Vault clusters--even with short TTLs!\n\nOnce these certificates have expired, however, a [tidy operation](\/vault\/api-docs\/secret\/pki#tidy) will clean up CRLs and Vault cluster storage.\n\nNote that organizational risk assessments for certificate compromise might mean certain certificate types should always be issued with `no_store=false`; even short-lived broad wildcard certificates (say, `*.example.com`) might be important enough to have precise control over revocation. However, an internal service with a well-scoped certificate (say, `service.example.com`) might be of low enough risk to issue with a 90-day TTL and `no_store=true`, preventing the need for revocation in the unlikely case of compromise.\n\nHaving a shorter TTL decreases the likelihood of needing to revoke a cert (but cannot prevent it entirely) and decreases the impact of any such compromise.\n\n~> Note: As of Vault 1.12, the PKI Secret Engine's [Bring-Your-Own-Cert (BYOC)](\/vault\/api-docs\/secret\/pki#revoke-certificate) functionality allows revocation of certificates not previously stored (e.g., issued via a role with `no_store=true`). This means that setting `no_store=true` _is now_ safe to be used globally, regardless of the importance of issued certificates (and their likelihood for revocation).\n\n## You must configure issuing\/CRL\/OCSP information _in advance_\n\nThis secrets engine serves CRLs from a predictable location, but it is not possible for the secrets engine to know where it is running. Therefore, you must configure desired URLs for the issuing certificate, CRL distribution points, and OCSP servers manually using the `config\/urls` endpoint.\n
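\n\nFor example (hostnames are illustrative; adjust the mount path to match your own):\n\n```shell\n# Set the AIA information served in certificates issued by a mount at pki\/.\nvault write pki\/config\/urls \\\n    issuing_certificates=\"https:\/\/vault.example.com\/v1\/pki\/ca\" \\\n    crl_distribution_points=\"https:\/\/vault.example.com\/v1\/pki\/crl\" \\\n    ocsp_servers=\"https:\/\/vault.example.com\/v1\/pki\/ocsp\"\n```\n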
Multiple URLs of each type are supported; pass them as a comma-separated string parameter.\n\n~> Note: When using Vault Enterprise's Performance Replication features with a PKI Secrets Engine mount, each cluster will have its own CRL; this means each cluster's unique CRL address should be included separately in the [AIA information](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-5.2.7) field, or the CRLs should be consolidated and served outside of Vault.\n\n~> Note: When using multiple issuers in the same mount, it is suggested to use the per-issuer AIA fields rather than the global (`\/config\/urls`) variant. This is for correctness: these fields are used for chain building and automatic CRL detection in certain applications. If they point to the wrong issuer's information, these applications may break.\n\n## Distribution of CRLs and OCSP\n\nBoth CRLs and OCSP allow interrogating the revocation status of certificates. Both methods include internal security and authenticity (both CRLs and OCSP responses are signed by the issuing CA within Vault). This means both are fine to distribute over non-secure and non-authenticated channels, such as HTTP.\n\n~> Note: The OCSP implementation for GET requests can lead to intermittent 400 errors when an encoded OCSP request contains consecutive '\/' characters. Until this is resolved, it is recommended to use POST-based OCSP requests.\n\n## Automate CRL building and tidying\n\nSince Vault 1.12, the PKI Secrets Engine supports automated CRL rebuilding (including optional Delta CRLs, which can be built more frequently than complete CRLs) via the `\/config\/crl` endpoint. Additionally, tidying of revoked and expired certificates can be configured automatically via the `\/config\/auto-tidy` endpoint.\n
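\n\nA minimal sketch of enabling both (the interval and safety buffer shown are illustrative, not recommendations):\n\n```shell\n# Rebuild the CRL on a schedule rather than on every revocation.\nvault write pki\/config\/crl auto_rebuild=true enable_delta=true\n\n# Periodically tidy expired certificates and stale revocation data.\nvault write pki\/config\/auto-tidy \\\n    enabled=true \\\n    tidy_cert_store=true tidy_revoked_certs=true \\\n    interval_duration=12h safety_buffer=72h\n```\n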
Both of these should be enabled to ensure\ncompatibility with the wider PKIX ecosystem and performance of the cluster.\n\n## Spectrum of revocation support\n\nStarting with Vault 1.13, the PKI secrets engine has the ability to support a\nspectrum of cluster sizes and certificate revocation quantities.\n\nFor users with few revocations or who want a unified view and have the\ninter-cluster bandwidth to support it, we recommend turning on auto\nrebuilding of CRLs, cross-cluster revocation queues, and cross-cluster CRLs.\nThis allows all consumers of the CRLs to have the most accurate picture of\nrevocations, regardless of which cluster they talk to.\n\nIf the unified CRL becomes too big for the underlying storage mechanism or\nfor a single host to build, we recommend relying on OCSP instead of CRLs.\nThese have much smaller storage entries, and the CRL `disabled` flag is\nindependent of `unified_crls`, allowing unified OCSP to remain.\n\nHowever, when cross-cluster traffic becomes too high (or if CRLs are still\nnecessary in addition to OCSP), we recommend sharding the CRL between\ndifferent clusters. This has been the default behavior of Vault, but with\nthe introduction of per-cluster, templated AIA information, the leaf\ncertificate's Authority Information Access (AIA) info will point directly\nto the cluster which issued it, allowing the correct CRL for this cert to\nbe identified by the application. This more correctly mimics the behavior\nof [Let's Encrypt's CRL sharding](https:\/\/letsencrypt.org\/2022\/09\/07\/new-life-for-crls.html).\n\nThis sharding behavior can also be used for OCSP, if the cross-cluster\ntraffic for revocation entries becomes too high.\n\nFor users who wish to manage revocation manually, using the audit logs to\ntrack certificate issuance would allow an external system to identify which\ncertificates were issued. 
These can be manually tracked for revocation, and a [custom CRL can be built](\/vault\/api-docs\/secret\/pki#combine-crls-from-the-same-issuer) using externally tracked revocations. This would allow usage of roles set to `no_store=true`, so Vault is strictly used as an issuing authority and isn't storing any certificates, issued or revoked. For the highest revocation volumes, this could be the best option.\n\nNotably, this last approach can be used to create either externally stored unified CRLs or sharded CRLs. If a single external unified CRL becomes unreasonably large, each cluster's certificates could have AIA info point to an externally stored and maintained, sharded CRL. However, Vault has no mechanism to sign OCSP requests at this time.\n\n### What are Cross-Cluster CRLs?\n\nVault Enterprise supports a clustering mode called [Performance Replication](\/vault\/docs\/enterprise\/replication#performance-replication). In a replicated PKI Secrets Engine mount, issuer and role information is synced between the Performance Primary and all Performance Secondary clusters. However, each Performance Secondary cluster has its own local storage of issued certificates and revocations, which is not synced. In Vault versions before 1.13, this meant that each of these clusters had its own CRL and OCSP data, and any revocation request needed to be processed on the cluster that issued the certificate (or BYOC used).\n\nStarting with Vault 1.13, we've added [two features](\/vault\/api-docs\/secret\/pki#read-crl-configuration) to Vault Enterprise to help manage this setup more correctly and easily: revocation request queues (`cross_cluster_revocation=true` in `config\/crl`) and unified revocation entries (`unified_crl=true` in `config\/crl`).\n\nThe former allows operators (revoking by serial number) to request that a certificate be revoked regardless of which cluster it was issued on.\n
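\n\nAs a sketch, both features are enabled via the mount's `config\/crl` endpoint (Vault Enterprise; mount path illustrative):\n\n```shell\n# Enable cross-cluster revocation queues and unified revocation entries.\nvault write pki\/config\/crl \\\n    cross_cluster_revocation=true \\\n    unified_crl=true\n```\n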
For\nexample, if a request goes into the Performance Primary, but it didn't\nissue the certificate, it'll write a cross-cluster revocation request,\nand mark the results as pending. If another cluster already has this\ncertificate in storage, it will revoke it and confirm the revocation back\nto the main cluster. An operator can [list pending\nrevocations](\/vault\/api-docs\/secret\/pki#list-revocation-requests) to see\nthe status of these requests. To clean up invalid requests (e.g., if the\ncluster which had that certificate disappeared, if that certificate was\nissued with `no_store=true` on the role, or if it was an invalid serial\nnumber), an operator can [use tidy](\/vault\/api-docs\/secret\/pki#tidy) with\n`tidy_revocation_queue=true`, optionally shortening\n`revocation_queue_safety_buffer` to remove them quicker.\n\nThe latter allows all clusters to have a unified view of revocations,\nthat is, to have access to a list of revocations performed by other clusters.\nWhile the configuration parameter includes `crl` in the description, this\napplies to [both CRLs](\/vault\/api-docs\/secret\/pki#read-issuer-crl) and the\n[OCSP responder](\/vault\/api-docs\/secret\/pki#ocsp-request). When this\nrevocation replication occurs, if any cluster considers a cert revoked when\nanother doesn't (e.g., via BYOC revocation of a `no_store=false` certificate),\nall clusters will now consider it revoked assuming it hasn't expired. Notably,\nthe active node of the primary cluster will be used to rebuild the CRL; as\nthis can grow large if many clusters have lots of revoked certs, an operator\nmight need to disable CRL building (`disabled=true` in `config\/crl`) or\nincrease the [storage size](\/vault\/docs\/configuration\/storage\/raft#max_entry_size).\n\nAs an aside, all new cross-cluster writes (from Performance Secondary up to\nthe Performance Primary) are performed synchronously. 
This gives the caller confidence that the request actually went through, at the expense of slightly higher overhead for revoking certificates. When a node loses its gRPC connection (e.g., during a leadership election or when otherwise unable to contact the active primary), errors will occur, though the local portion of the write (if any) will still succeed. For cross-cluster revocation requests, since there is no local write, the operation will need to be retried; but in the event of an issue writing a cross-cluster revocation entry when the cert existed locally, the revocation will eventually be synced across clusters when the connection comes back.\n\n## Issuer subjects and CRLs\n\nAs noted on several [GitHub issues](https:\/\/github.com\/hashicorp\/vault\/issues\/10176), Go's x509 library has an opinionated parsing and structuring mechanism for certificate Subjects. Issuers created within Vault are fine, but externally created CA certificates may not be parsed correctly throughout all parts of the PKI. In particular, CRLs embed a (modified) copy of the issuer name. This can be avoided by using OCSP to track revocation, but note that performance characteristics differ between OCSP and CRLs.\n\n~> Note: As of Go 1.20 and Vault 1.13, Go correctly formats the CRL's issuer name and this notice [does not apply](https:\/\/github.com\/golang\/go\/commit\/a367981b4c8e3ae955eca9cc597d9622201155f3).\n\n## Automate leaf certificate renewal\n\nTo manage certificates for services at scale, it is best to automate certificate renewal as much as possible. Vault Agent [has support for automatically renewing requested certificates](\/vault\/docs\/agent-and-proxy\/agent\/template#certificates) based on the `validTo` field.\n
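\n\nFor reference, the underlying issuance call that such automation repeats is simply (role name and common name are illustrative):\n\n```shell\n# Request a fresh short-lived certificate; automation re-runs this before expiry.\nvault write pki\/issue\/short-lived \\\n    common_name=service.example.com ttl=72h\n```\n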
Other solutions might involve using [cert-manager](https:\/\/cert-manager.io\/) in Kubernetes or OpenShift, backed by the Vault CA.\n\n## Safe minimums\n\nSince its inception, this secrets engine has enforced SHA256 for signature hashes rather than SHA1. As of 0.5.1, a minimum of 2048 bits for RSA keys is also enforced. Software that can handle SHA256 signatures should also be able to handle 2048-bit keys, and 1024-bit keys are considered unsafe and are disallowed in the Internet PKI.\n\n## Token lifetimes and revocation\n\nWhen a token expires, it revokes all leases associated with it. This means that long-lived CA certs need correspondingly long-lived tokens, something that is easy to forget. Starting with 0.6, root and intermediate CA certs no longer have associated leases, to prevent unintended revocation when not using a token with a long enough lifetime. To revoke these certificates, use the `pki\/revoke` endpoint.\n\n## Safe usage of roles\n\nThe Vault PKI Secrets Engine supports many options to limit issuance via [Roles](\/vault\/api-docs\/secret\/pki#create-update-role). Careful consideration during role construction is necessary to ensure that no more permissions are granted than necessary. Additionally, roles should generally do _one_ thing; multiple narrow roles are preferable to overly permissive roles that allow arbitrary issuance (e.g., `allow_any_name` should generally be used sparingly, if at all).\n\n - `allow_any_name` should generally be set to `false`; this is the default.\n - `allow_localhost` should generally be set to `false` for production services, unless listening on `localhost` is expected.\n - Unless necessary, `allow_wildcard_certificates` should generally be set to `false`.\n
This is **not** the default due to backwards compatibility concerns.\n   - This is especially necessary when `allow_subdomains` or `allow_glob_domains` are enabled.\n - `enforce_hostnames` should generally be enabled for TLS services; this is the default.\n - `allow_ip_sans` should generally be set to `false` (but defaults to `true`), unless IP address certificates are explicitly required.\n - When using short TTLs (< 30 days) or with high issuance volume, it is generally recommended to set `no_store` to `true` (defaults to `false`). This prevents serial-number-based revocation, but allows higher throughput as Vault no longer needs to store every issued certificate. This is discussed more in the [Replicated Datasets](#replicated-datasets) section below.\n - Do not use roles with root certificates (`issuer_ref`). Root certificates should generally only issue intermediates (see the section on [CA hierarchy above](#use-a-ca-hierarchy)), which doesn't rely on roles.\n - Limit `key_usage` and `ext_key_usage`; don't attempt to allow all usages for all purposes. Generally, the default values are useful for client and server TLS authentication.\n\n## Telemetry\n\nBeyond Vault's default telemetry around request processing, PKI exposes count and duration metrics for the issue, sign, sign-verbatim, and revoke calls. The metrics keys take the form `mount-path,operation,[failure]` with labels for namespace and role name.\n\nNote that these metrics are per-node and thus need to be aggregated across nodes and clusters.\n\n## Auditing\n\nBecause Vault HMACs audit string keys by default, it is necessary to tune PKI secrets mounts to get an accurate view of the issuance that is occurring under this mount.\n\n~> Note: Depending on usage of Vault, CRLs (and rarely, CA chains) can grow to be rather large.\n
We don't recommend un-HMACing the `crl` field for this reason, but note that the recommendations below suggest un-HMACing the `certificate` response parameter, which the CRL can be served in via the `\/pki\/cert\/crl` API endpoint. Additionally, the `http_raw_body` field can be used to return the CRL in both PEM and raw binary DER form, so it is suggested not to un-HMAC that field, to avoid corrupting the log format.<br \/><br \/>\n   If this is done with only a [syslog](\/vault\/docs\/audit\/syslog) audit device, Vault can deny requests (with an opaque `500 Internal Error` message) after the action has been performed on the server, because it was unable to log the message.<br \/><br \/>\n   The suggested workaround is to either leave the `certificate` and `crl` response fields HMACed and\/or to also enable the [`file`](\/vault\/docs\/audit\/file) audit log type.\n\nSome suggested keys to un-HMAC for requests are as follows:\n\n - `csr` - the requested CSR to sign,\n - `certificate` - the requested self-signed certificate to re-sign or when importing issuers,\n - Various issuance-related overriding parameters, such as:\n   - `issuer_ref` - the issuer requested to sign this certificate,\n   - `common_name` - the requested common name,\n   - `alt_names` - alternative requested DNS-type SANs for this certificate,\n   - `other_sans` - other (non-DNS, non-Email, non-IP, non-URI) requested SANs for this certificate,\n   - `ip_sans` - requested IP-type SANs for this certificate,\n   - `uri_sans` - requested URI-type SANs for this certificate,\n   - `ttl` - requested expiration date of this certificate,\n   - `not_after` - requested expiration date of this certificate,\n   - `serial_number` - the subject's requested serial number,\n   - `key_type` - the requested key type,\n   - `private_key_format` - the requested key format, which is also used for the public certificate format,\n - Various role- or issuer-related generation\n
parameters, such as:\n   - `managed_key_name` - when creating an issuer, the requested managed key name,\n   - `managed_key_id` - when creating an issuer, the requested managed key identifier,\n   - `ou` - the subject's organizational unit,\n   - `organization` - the subject's organization,\n   - `country` - the subject's country code,\n   - `locality` - the subject's locality,\n   - `province` - the subject's province,\n   - `street_address` - the subject's street address,\n   - `postal_code` - the subject's postal code,\n   - `permitted_dns_domains` - permitted DNS domains,\n   - `policy_identifiers` - the requested policy identifiers when creating a role, and\n   - `ext_key_usage_oids` - the extended key usage OIDs for the requested certificate.\n\nSome suggested keys to un-HMAC for responses are as follows:\n\n - `certificate` - the certificate that was issued,\n - `issuing_ca` - the certificate of the CA which issued the requested certificate,\n - `serial_number` - the serial number of the certificate that was issued,\n - `error` - to show errors associated with the request, and\n - `ca_chain` - optional due to noise; the full CA chain of the issuer of the requested certificate.\n\n~> Note: This list of parameters to un-HMAC is provided as a suggestion and may not be exhaustive.\n\nThe following keys are suggested **NOT** to be un-HMACed, due to their sensitive nature:\n\n - `private_key` - this response parameter contains the private keys generated by Vault during issuance, and\n - `pem_bundle` - this request parameter is only used on the issuer-import paths and may contain sensitive private key material.\n\n## Role-Based access\n\nVault supports [path-based ACL Policies](\/vault\/tutorials\/getting-started\/getting-started-policies) for limiting access to various paths within Vault.\n\nThe following is a condensed example reference for ACLing the PKI Secrets Engine.\n
These are just a suggestion; other personas and policy approaches\nmay also be valid.\n\nWe suggest the following personas:\n\n - *Operator*;  a privileged user who manages the health of the PKI\n   subsystem; manages issuers and key material.\n - *Agent*; a semi-privileged user that manages roles and handles\n   revocation on behalf of an operator; may also handle delegated\n   issuance. This may also be called an *administrator* or *role\n   manager*.\n - *Advanced*; potentially a power-user or service that has access to\n   additional issuance APIs.\n - *Requester*; a low-level user or service that simply requests certificates.\n - *Unauthed*; any arbitrary user or service that lacks a Vault token.\n\nFor these personas, we suggest the following ACLs, in condensed, tabular form:\n\n| Path | Operations | Operator | Agent | Advanced | Requester | Unauthed |\n| :--- | :--------- | :------- | :---- | :------- | :-------- | :------- |\n| `\/ca(\/pem)?` | Read | Yes | Yes | Yes | Yes | Yes |\n| `\/ca_chain` | Read | Yes | Yes | Yes | Yes | Yes |\n| `\/crl(\/pem)?` | Read | Yes | Yes | Yes | Yes | Yes |\n| `\/crl\/delta(\/pem)?` | Read | Yes | Yes | Yes | Yes | Yes |\n| `\/cert\/:serial(\/raw(\/pem)?)?` | Read | Yes | Yes | Yes | Yes | Yes |\n| `\/issuers` | List | Yes | Yes | Yes | Yes | Yes |\n| `\/issuer\/:issuer_ref\/(json\u00a6der\u00a6pem)` | Read | Yes | Yes | Yes | Yes | Yes |\n| `\/issuer\/:issuer_ref\/crl(\/der\u00a6\/pem)?` | Read | Yes | Yes | Yes | Yes | Yes |\n| `\/issuer\/:issuer_ref\/crl\/delta(\/der\u00a6\/pem)?` | Read | Yes | Yes | Yes | Yes | Yes |\n| `\/ocsp\/<request>` | Read | Yes | Yes | Yes | Yes | Yes |\n| `\/ocsp` | Write | Yes | Yes | Yes | Yes | Yes |\n| `\/certs` | List | Yes | Yes | Yes | Yes | |\n| `\/revoke-with-key` | Write | Yes | Yes | Yes | Yes | |\n| `\/roles` | List | Yes | Yes | Yes | Yes | |\n| `\/roles\/:role` | Read | Yes | Yes | Yes | Yes | |\n| `\/(issue\u00a6sign)\/:role` | Write | Yes | Yes | Yes | Yes | |\n| 
`\/issuer\/:issuer_ref\/(issue\u00a6sign)\/:role` | Write | Yes | Yes | Yes | | |\n| `\/config\/auto-tidy` | Read | Yes | Yes |  |  |  |\n| `\/config\/ca` | Read | Yes | Yes |  |  |  |\n| `\/config\/crl` | Read | Yes | Yes |  |  |  |\n| `\/config\/issuers` | Read | Yes | Yes |  |  |  |\n| `\/crl\/rotate` | Read | Yes | Yes | | | |\n| `\/crl\/rotate-delta` | Read | Yes | Yes | | | |\n| `\/roles\/:role` | Write | Yes | Yes | | | |\n| `\/issuer\/:issuer_ref` | Read | Yes | Yes | | | |\n| `\/sign-verbatim(\/:role)?` | Write | Yes | Yes | | | |\n| `\/issuer\/:issuer_ref\/sign-verbatim(\/:role)?` | Write | Yes | Yes | | | |\n| `\/revoke` | Write | Yes | Yes | | | |\n| `\/tidy` | Write | Yes | Yes | | | |\n| `\/tidy-cancel` | Write | Yes | Yes | | | |\n| `\/tidy-status` | Read | Yes | Yes | | | |\n| `\/config\/auto-tidy` | Write | Yes | |  |  |  |\n| `\/config\/ca` | Write | Yes | |  |  |  |\n| `\/config\/crl` | Write | Yes | |  |  |  |\n| `\/config\/issuers` | Write | Yes | |  |  |  |\n| `\/config\/keys` | Read, Write | Yes | |  |  |  |\n| `\/config\/urls` | Read, Write | Yes | |  |  |  |\n| `\/issuer\/:issuer_ref` | Write | Yes | | | | |\n| `\/issuer\/:issuer_ref\/revoke` | Write | Yes | | | | |\n| `\/issuer\/:issuer_ref\/sign-intermediate` | Write | Yes | | | | |\n| `\/issuer\/:issuer_ref\/sign-self-issued` | Write | Yes | | | | |\n| `\/issuers\/generate\/+\/+` | Write | Yes | | | | |\n| `\/issuers\/import\/+` | Write | Yes | | | | |\n| `\/intermediate\/generate\/+` | Write | Yes | | | | |\n| `\/intermediate\/cross-sign` | Write | Yes | | | | |\n| `\/intermediate\/set-signed` | Write | Yes | | | | |\n| `\/keys` | List | Yes | | | | |\n| `\/key\/:key_ref` | Read, Write | Yes | | | | |\n| `\/keys\/generate\/+` | Write | Yes | | | | |\n| `\/keys\/import` | Write | Yes | | | | |\n| `\/root\/generate\/+` | Write | Yes | | | | |\n| `\/root\/sign-intermediate` | Write | Yes | | | | |\n| `\/root\/sign-self-issued` | Write | Yes | | | | |\n| `\/root\/rotate\/+` | Write | Yes | | | | |\n| `\/root\/replace` | Write | Yes | | | | |\n\n~> Note: With managed keys, operators might need access to [read the mount point's tunable data](\/vault\/api-docs\/system\/mounts) (Read on `\/sys\/mounts`) and may need access [to use or manage managed keys](\/vault\/api-docs\/system\/managed-keys).\n\n## Replicated DataSets\n\nWhen operating with [Performance Secondary](\/vault\/docs\/enterprise\/replication#architecture) clusters, certain data sets are maintained across all clusters, while others, for performance and scalability reasons, are kept within a given cluster.\n\nThe following table breaks down, by data type, which data sets cross cluster boundaries. For data types that do not cross a cluster boundary, read requests for that data must be sent to the cluster on which the data was generated.\n\n| Data Set                 | Replicated Across Clusters |\n|--------------------------|----------------------------|\n| Issuers & Keys           | Yes                        |\n| Roles                    | Yes                        |\n| CRL Config               | Yes                        |\n| URL Config               | Yes                        |\n| Issuer Config            | Yes                        |\n| Key Config               | Yes                        |\n| CRL                      | No                         |\n| Revoked Certificates     | No                         |\n| Leaf\/Issued Certificates | No                         |\n| Certificate Metadata     | No                         |\n\nThe main effect is that, within the PKI secrets engine, leaf certificates issued with `no_store` set to `false` are stored locally on the cluster that issued them. This allows both the primary and [Performance Secondary](\/vault\/docs\/enterprise\/replication#architecture) clusters' active nodes to issue certificates for greater scalability.\n
As a\nresult, these certificates, metadata and any revocations are visible only on the issuing\ncluster. This additionally means each cluster has its own set of CRLs, distinct\nfrom other clusters. These CRLs should either be unified into a single CRL for\ndistribution from a single URI, or server operators should know to fetch all\nCRLs from all clusters.\n\n## Cluster scalability\n\nMost non-introspection operations in the PKI secrets engine require a write to\nstorage, and so are forwarded to the cluster's active node for execution.\nThis table outlines which operations can be executed on performance standby nodes\nand thus scale horizontally across all nodes within a cluster.\n\n| Path                          | Operations           |\n|-------------------------------|----------------------|\n| ca[\/pem]                      | Read                 |\n| cert\/<em>serial-number<\/em>   | Read                 |\n| cert\/ca_chain                 | Read                 |\n| config\/crl                    | Read                 |\n| certs                         | List                 |\n| ca_chain                      | Read                 |\n| crl[\/pem]                     | Read                 |\n| issue                         | Update <sup>\\*<\/sup> |\n| revoke\/<em>serial-number<\/em> | Read                 |\n| sign                          | Update <sup>\\*<\/sup> |\n| sign-verbatim                 | Update <sup>\\*<\/sup> |\n\n\\* Only if the corresponding role has `no_store` set to true, `generate_lease`\nset to false and no metadata is being written. 
If `generate_lease` is true, the lease creation will be forwarded to the active node; if `no_store` is false, the entire request will be forwarded to the active node. If `no_store_cert_metadata=false` and the `metadata` argument is provided, the entire request will be forwarded to the active node.\n\n## PSS support\n\nGo lacks support for PSS certificates, keys, and CSRs using the `rsaPSS` OID (`1.2.840.113549.1.1.10`). It requires all RSA certificates, keys, and CSRs to use the alternative `rsaEncryption` OID (`1.2.840.113549.1.1.1`).\n\nWhen using OpenSSL to generate CAs or CSRs from PKCS#8-encoded PSS keys, the resulting CAs and CSRs will have the `rsaPSS` OID, and Go and Vault will reject them. Instead, use OpenSSL to generate or convert to a PKCS#1v1.5 private key file and use this to generate the CSR. Vault will, depending on the role and the signing mechanism, still use a PSS signature despite the `rsaEncryption` OID on the request, as the SubjectPublicKeyInfo and SignatureAlgorithm fields are orthogonal. When creating an external CA and importing it into Vault, ensure that the `rsaEncryption` OID is present in the SubjectPublicKeyInfo field even if the SignatureAlgorithm is PSS-based.\n\nThese certificates generated by Go (with the `rsaEncryption` OID but PSS-based signatures) are otherwise compatible with fully PSS-based certificates. OpenSSL and NSS support parsing and verifying chains using this type of certificate. Note that some TLS implementations may not support these types of certificates if they do not support `rsa_pss_rsae_*` signature schemes. Additionally, some implementations allow rsaPSS OID certificates to contain restrictions on the signature parameters allowed by the certificate, but Go and Vault do not support adding such restrictions.\n\nAt this time, Go lacks support for signing CSRs with the PSS signature algorithm.\n
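\n\nThe OpenSSL conversion described above might look like the following sketch (file names are illustrative; depending on your OpenSSL version, `openssl rsa` may be needed instead of `openssl pkey`):\n\n```shell\n# Convert a PKCS#8 RSA-PSS private key to a traditional PKCS#1 key file,\n# then create a CSR carrying the rsaEncryption OID that Go and Vault accept.\nopenssl pkey -in pss-key.pem -traditional -out pkcs1-key.pem\nopenssl req -new -key pkcs1-key.pem \\\n    -subj \"\/CN=Example Intermediate\" -out intermediate.csr\n```\n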
If using a managed key that requires an RSA PSS algorithm (such as GCP or a PKCS#11 HSM) as the backing for an intermediate CA key, attempting to generate a CSR (via `pki\/intermediate\/generate\/kms`) will fail signature verification. In this case, the CSR will need to be generated outside of Vault, and the signed final certificate can be imported into the mount.\n\nGo additionally lacks support for creating OCSP responses with the PSS signature algorithm. Vault will automatically downgrade issuers with PSS-based revocation signature algorithms to PKCS#1v1.5, but note that certain KMS devices (like HSMs and GCP) may not support this with the same key. As a result, the OCSP responder may fail to sign responses, returning an internal error.\n\n## Issuer storage migration issues\n\nWhen Vault migrates to the new multi-issuer storage layout on releases prior to 1.11.6, 1.12.2, and 1.13, and storage write errors occur during the mount initialization and storage migration process, the default issuer _may_ not have the correct `ca_chain` value and may only have the self-reference. These write errors most commonly manifest in logs as a message like `failed to persist issuer ... chain to disk: <cause>` and indicate that Vault was not stable at the time of migration.\n
Note that this only occurs when more\nthan one issuer exists within the mount (such as an intermediate with root).\n\nTo fix this manually (until a new version of Vault automatically rebuilds the\nissuer chain), a rebuild of the chains can be performed:\n\n```\ncurl -X PATCH -H \"Content-Type: application\/merge-patch+json\" -H \"X-Vault-Request: true\" -H \"X-Vault-Token: $(vault print token)\" -d '{\"manual_chain\":\"self\"}' https:\/\/...\/issuer\/default\ncurl -X PATCH -H \"Content-Type: application\/merge-patch+json\" -H \"X-Vault-Request: true\" -H \"X-Vault-Token: $(vault print token)\" -d '{\"manual_chain\":\"\"}' https:\/\/...\/issuer\/default\n```\n\nThis temporarily sets the manual chain on the default issuer to a self-chain\nonly, before reverting it back to automatic chain building. This triggers a\nrefresh of the `ca_chain` field on the issuer, and can be verified with:\n\n```\nvault read pki\/issuer\/default\n```\n\n## Issuer Constraints Enforcement\n\nStarting with versions 1.18.3, 1.18.3+ent, 1.17.10+ent and 1.16.14+ent, Vault\nperforms additional verifications when creating or signing leaf certificates for\nissuers that have constraints extensions. This verification includes validating\nextended key usage, name constraints, and correct copying of the issuer name\nonto the certificate. 
Certificates issued without this verification might not be\naccepted by end user applications.\n\nProblems with issuance arising from this validation should be fixed by changing\nthe issuer certificate itself, to avoid more problems down the line.\n\nIt is possible to completely disable verification by setting environment\nvariable `VAULT_DISABLE_PKI_CONSTRAINTS_VERIFICATION` to `true`.\n\n~> **Warning**: The use of environment variable `VAULT_DISABLE_PKI_CONSTRAINTS_VERIFICATION`\nshould be considered as a last resort.\n\n\n## Tutorial\n\nRefer to the [Build Your Own Certificate Authority (CA)](\/vault\/tutorials\/secrets-management\/pki-engine)\nguide for a step-by-step tutorial.\n\nHave a look at the [PKI Secrets Engine with Managed Keys](\/vault\/tutorials\/enterprise\/managed-key-pki)\nfor more about how to use externally managed keys with PKI.\n\n## API\n\nThe PKI secrets engine has a full HTTP API. Please see the\n[PKI secrets engine API](\/vault\/api-docs\/secret\/pki) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title   PKI   Secrets Engines  Considerations  description  The PKI secrets engine for Vault generates TLS certificates         PKI secrets engine   considerations  To successfully deploy this secrets engine  there are a number of important considerations to be aware of  as well as some preparatory steps that should be undertaken  You should read all of these  before  using this secrets engine or generating the CA to use with this secrets engine      Table of contents      Be Careful with Root CAs   be careful with root cas        Managed Keys   managed keys      One CA Certificate  One Secrets Engine   one ca certificate one secrets engine        Always Configure a Default Issuer   always configure a default issuer      Key Types Matter   key types matter        Cluster Performance and Key Types   cluster performance and key types      Use a CA Hierarchy   use a ca hierarchy        Cross Signed Intermediates   cross 
signed intermediates      Cluster URLs are Important   cluster urls are important      Automate Rotation with ACME   automate rotation with acme        ACME Stores Certificates   acme stores certificates        ACME Role Restrictions Require EAB   acme role restrictions require eab        ACME and the Public Internet   acme and the public internet        ACME Errors are in Server Logs   acme errors are in server logs        ACME Security Considerations   acme security considerations        ACME and Client Counting   acme and client counting      Keep Certificate Lifetimes Short  For CRL s Sake   keep certificate lifetimes short for crls sake        NotAfter Behavior on Leaf Certificates   notafter behavior on leaf certificates        Cluster Performance and Quantity of Leaf Certificates   cluster performance and quantity of leaf certificates      You must configure issuing CRL OCSP information  in advance    you must configure issuingcrlocsp information  in advance       Distribution of CRLs and OCSP   distribution of crls and ocsp      Automate CRL Building and Tidying   automate crl building and tidying      Spectrum of Revocation Support   spectrum of revocation support        What Are Cross Cluster CRLs    what are cross cluster crls      Issuer Subjects and CRLs   issuer subjects and crls      Automate Leaf Certificate Renewal   automate leaf certificate renewal      Safe Minimums   safe minimums      Token Lifetimes and Revocation   token lifetimes and revocation      Safe Usage of Roles   safe usage of roles      Telemetry   telemetry      Auditing   auditing      Role Based Access   role based access      Replicated DataSets   replicated datasets      Cluster Scalability   cluster scalability      PSS Support   pss support      Issuer Storage Migration Issues   issuer storage migration issues      Issuer Constraints Enforcement   issuer constraints enforcement      Tutorial   tutorial      API   api      Be careful with root CAs  Vault storage is secure  
Vault storage is secure, but not as secure as a piece of paper in a bank vault. It is, after all, networked software. If your root CA is hosted outside of Vault, don't put it in Vault as well; instead, issue a shorter-lived intermediate CA certificate and put this into Vault. This aligns with industry best practices.

Since 0.4, the secrets engine supports generating self-signed root CAs and creating and signing CSRs for intermediate CAs. In each instance, for security reasons, the private key can *only* be exported at generation time, and the ability to do so is part of the command path (so it can be put into ACL policies).

If you plan on using intermediate CAs with Vault, it is suggested that you let Vault create CSRs and do not export the private key, then sign those with your root CA (which may be a second mount of the `pki` secrets engine).

### Managed keys

Since 1.10, Vault Enterprise can access private key material in a [managed key](/vault/docs/enterprise/managed-keys). In this case, Vault never sees the private key, and the external KMS or HSM performs certificate signing operations. Managed keys are configured by selecting the `kms` type when generating a root or intermediate.

## One CA certificate, one secrets engine

Since Vault 1.11.0, the PKI Secrets Engine supports multiple issuers in a single mount. However, in order to simplify the configuration, it is *strongly* recommended that operators limit a mount to a single issuer. If you want to issue certificates from multiple disparate CAs, mount the PKI secrets engine at multiple mount points with separate CA certificates in each.

The rationale for separating mounts is to simplify permissions management: very few individuals need access to perform operations with the root, but many need access to create leaves. The operations on a root should generally be limited to issuing and revoking intermediate CAs, which is a highly privileged operation; it becomes much easier to audit these operations when they're in a separate mount than if they're mixed in with day-to-day leaf issuance.
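The pattern above (one mount per CA, intermediate CSR created in Vault, private key never exported) can be sketched with the Vault CLI; the mount paths, common name, TTL, and file names here are illustrative, not prescribed by the docs:

```shell
# Two PKI mounts: one acting as root, one as intermediate (example paths).
vault secrets enable -path=pki_root pki
vault secrets enable -path=pki_int pki

# Generate an intermediate CSR; the "internal" type keeps the key in Vault.
vault write -field=csr pki_int/intermediate/generate/internal \
    common_name="example.com Intermediate CA" > intermediate.csr

# Sign the CSR from the root mount (or your offline root CA) ...
vault write -field=certificate pki_root/root/sign-intermediate \
    csr=@intermediate.csr format=pem_bundle ttl=43800h > signed.pem

# ... and install the signed certificate back into the intermediate mount.
vault write pki_int/intermediate/set-signed certificate=@signed.pem
```

If the root is kept outside Vault entirely, the same CSR can be signed by that external CA and installed with `set-signed` in the same way.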
A common pattern is to have one mount act as your root CA and to use this CA only to sign intermediate CA CSRs from other PKI secrets engines.

To keep old CAs active, there are two approaches to achieving rotation:

1. Use multiple secrets engines. This allows a fresh start, preserving the old issuer and CRL. Vault ACL policy can be updated to deny new issuance under the old mount point, and roles can be re-evaluated before being imported into the new mount point.
2. Use multiple issuers in the same mount point. The usage of the old issuer can be restricted to CRL signing, and existing roles and ACL policy can be kept as-is. This allows cross-signing within the same mount, and consumers of the mount won't have to update their configuration. Once the transitional period for this rotation has completed and all past-issued certificates have expired, it is encouraged to fully remove the old issuer and any unnecessary cross-signed issuers from the mount point.

Another suggested use case for multiple issuers in the same mount is splitting issuance by TTL lifetime. For short-lived certificates, an intermediate stored in Vault will often out-perform an HSM-backed intermediate. For longer-lived certificates, however, it is often important to have the intermediate key material secured throughout the lifetime of the end-entity certificate. This means that two intermediates in the same mount (one backed by the HSM and one backed by Vault) can satisfy both use cases. Operators can make roles setting maximum TTLs for each issuer, and consumers of the mount can decide which to use.

### Always configure a default issuer

For backwards compatibility, [the default issuer](/vault/api-docs/secret/pki#read-issuers-configuration) is used to service PKI endpoints without an explicit issuer, either via path selection or role-based selection.
When certificates are revoked and their issuer is no longer part of this PKI mount, Vault places them on the default issuer's CRL. This means maintaining a default issuer is important for both backwards compatibility for issuing certificates and for ensuring revoked certificates land on a CRL.

## Key types matter

Certain key types have impacts on performance. Signing certificates from an RSA key will be slower than issuing from an ECDSA or Ed25519 key. Key generation (using the `/issue/:role` endpoints) with RSA keys will also be slow: RSA key generation involves finding suitable random primes, whereas Ed25519 keys can be random data. As the number of bits goes up (RSA 2048 -> 4096 or ECDSA P-256 -> P-521), signature times also increase.

This matters in both directions: not only is issuance more expensive, but validation of the corresponding signature (in, say, TLS handshakes) will also be more expensive. Careful consideration of both issuer and issued key types can have meaningful impacts on performance of not only Vault, but systems using these certificates.

### Cluster performance and key types

The [benchmark-vault](https://github.com/hashicorp/vault-benchmark) project can be used to measure the performance of a Vault PKI instance. In general, some considerations to be aware of:

- RSA key generation is much slower and more highly variable than EC key generation. If performance and throughput are a necessity, consider using EC keys (including NIST P-curves and Ed25519) instead of RSA.
- Key signing requests (via `/pki/sign`) will be faster than `/pki/issue`, especially for RSA keys: this removes the necessity for Vault to generate key material, and it can sign the key material provided by the client. This signing step is common between both endpoints, so key generation is pure overhead if the client has a sufficiently secure source of entropy.
- The CA's key type matters as well: using an RSA CA will result in an RSA signature and takes longer than an ECDSA or Ed25519 CA.
- Storage is an important factor: with [BYOC Revocation](/vault/api-docs/secret/pki#revoke-certificate), using `no_store=true` still gives you the ability to revoke certificates, and audit logs can be used to track issuance. Clusters using a remote storage (like Consul) over a slow network and using `no_store=false`, or `no_store_cert_metadata=false` along with specifying metadata on issuance, will result in additional latency on issuance. Adding leases for every issued certificate compounds the problem.
  - Storing too many certificates results in longer `LIST /pki/certs` time, including the time to tidy the instance. As such, for large-scale deployments (>250k active certificates), it is recommended to use audit logs to track certificates outside of Vault.

As a general comparison on unspecified hardware, using benchmark-vault for 30s on a local, single-node, raft-backed Vault instance:

- Vault can issue 300k certificates using EC P-256 for CA and leaf keys, without storage.
  - But switching to storing these leaves drops that number to 65k, and only 20k with leases.
- Using large, expensive RSA 4096-bit keys, Vault can only issue 160 leaves, regardless of whether or not storage or leases were used; the 95% key generation time is above 10s.
  - In comparison, using P-521 keys, Vault can issue closer to 30k leaves without leases and 18k with leases.

These numbers are for example only, to represent the impact different key types can have on PKI cluster performance.

The use of ACME adds additional latency into these numbers, both because certificates need to be stored and because challenge validation needs to be performed.
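The key-type and sign-versus-issue guidance above can be encoded in a role; the mount path, role name, domain, and TTLs below are illustrative assumptions:

```shell
# A role issuing short-lived EC P-256 leaves (fast generation and signing).
vault write pki_int/roles/ec-leaf \
    allowed_domains=example.com allow_subdomains=true \
    key_type=ec key_bits=256 max_ttl=720h

# Prefer sign/ over issue/ when the client can bring its own key:
# generate the key and CSR locally, then have Vault only sign it.
openssl ecparam -genkey -name prime256v1 -out leaf.key
openssl req -new -key leaf.key -subj "/CN=app.example.com" -out leaf.csr
vault write pki_int/sign/ec-leaf csr=@leaf.csr common_name=app.example.com
```

With `sign/`, Vault never generates or sees the leaf private key, which both removes key-generation load from the cluster and keeps the key with the client.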
## Use a CA hierarchy

It is generally recommended to use a hierarchical CA setup, with a root certificate which issues one or more intermediates (based on usage), which in turn issue the leaf certificates.

This allows stronger storage or policy guarantees around [protection of the root CA](#be-careful-with-root-cas), while letting Vault manage the intermediate CAs and issuance of leaves. Different intermediates might be issued for different usage, such as VPN signing, Email signing, or testing versus production TLS services. This helps to keep CRLs limited to specific purposes: for example, VPN services don't care about the revoked set of email signing certificates if they're using separate certificates and different intermediates, and thus don't need both CRLs' contents. Additionally, this allows higher-risk intermediates (such as those issuing longer-lived email signing certificates) to have HSM backing without impacting the performance of easier-to-rotate intermediates and certificates (such as TLS intermediates).

Vault supports the use of both the [`allowed_domains` parameter on Roles](/vault/api-docs/secret/pki#allowed_domains) and the [`permitted_dns_domains` parameter to set the Name Constraints extension](/vault/api-docs/secret/pki#permitted_dns_domains) on root and intermediate generation. This allows for several layers of separation of concerns between TLS-based services.

### Cross-signed intermediates

When cross-signing intermediates from two separate roots, two separate intermediate issuers will exist within the Vault PKI mount. In order to correctly serve the cross-signed chain on issuance requests, the `manual_chain` override is required on either or both intermediates. This can be constructed in the following order:

1. this issuer (`self`)
2. this issuer's root
3. the other copy of this intermediate
4. the other root

All requests to this issuer for signing will now present the full cross-signed chain.
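Assuming issuer names `int-a` (signed by root `root-a`) and `int-a-xsigned` (the copy of the same intermediate signed by `root-b`), the override ordering above might be applied like this on a Vault version supporting issuer `PATCH` updates; all names are hypothetical:

```shell
# Present self, our root, the cross-signed copy, and the other root,
# in that order, on every issuance from this issuer.
vault patch pki_int/issuer/int-a \
    manual_chain="self,root-a,int-a-xsigned,root-b"
```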
## Cluster URLs are important

In Vault 1.13, support for [templated AIA URLs](/vault/api-docs/secret/pki#enable_aia_url_templating-1) was added. With the [per-cluster URL configuration](/vault/api-docs/secret/pki#set-cluster-configuration) pointing to this Performance Replication cluster, AIA information will point to the cluster that issued this certificate automatically.

In Vault 1.14, with ACME support, the same configuration is used for allowing ACME clients to discover the URL of this cluster.

~> **Warning**: It is important to ensure that this configuration is up to date and maintained correctly, always pointing to the node's PR cluster address (which may be a Load Balanced or a DNS Round Robin address). If this configuration is not set on every Performance Replication cluster, certificate issuance (via REST and/or via ACME) will fail.

## Automate rotation with ACME

In Vault 1.14, support for the [Automatic Certificate Management Environment (ACME)](https://datatracker.ietf.org/doc/html/rfc8555) protocol was added to the PKI Engine. This is a standardized way to handle validation, issuance, rotation, and revocation of server certificates.

Many ecosystems, from web servers like Caddy, Nginx, and Apache, to orchestration environments like Kubernetes (via cert-manager), natively support issuance via the ACME protocol. For deployments without native support, stand-alone tools like certbot support fetching and renewing certificates on behalf of consumers. Vault's PKI Engine only includes server support for ACME; no client functionality has been included.

~> Note: Vault's PKI ACME server caps the certificate's validity at 90 days maximum by default, overridable using the ACME config `max_ttl` parameter. Shorter validity durations can be set by limiting the role's TTL to be under the global ACME-configured limit. Aligning with Let's Encrypt, we do not support the optional `NotBefore` and `NotAfter` order request parameters.
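Setting the per-cluster URLs and turning on the ACME server, as described above, might look like the following; the mount path and addresses are illustrative:

```shell
# Point AIA templating and ACME directory discovery at this cluster's address.
vault write pki/config/cluster \
    path="https://vault-primary.example.com:8200/v1/pki" \
    aia_path="https://vault-primary.example.com:8200/v1/pki"

# Enable the ACME server on this mount.
vault write pki/config/acme enabled=true
```

On Performance Replication setups, the `config/cluster` step must be repeated on each cluster with that cluster's own address.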
### ACME stores certificates

Because ACME requires stored certificates in order to function, the notes below about [automating tidy](#automate-crl-building-and-tidying) are especially important for the long-term health of the PKI cluster. ACME also introduces additional resource types (accounts, orders, authorizations, and challenges) that must be tidied via the [`tidy_acme=true` option](/vault/api-docs/secret/pki#tidy). Orders, authorizations, and challenges are cleaned up based on the [`safety_buffer`](/vault/api-docs/secret/pki#safety_buffer) parameter, but accounts can live longer past their last issued certificate by controlling the [`acme_account_safety_buffer` parameter](/vault/api-docs/secret/pki#acme_account_safety_buffer).

As a consequence of the above, and like the discussions in the [Cluster Scalability](#cluster-scalability) section (because these roles have `no_store=false` set), ACME can only issue certificates on the active nodes of PR clusters; standby nodes, if contacted, will transparently forward all requests to the active node.

### ACME role restrictions require EAB

Because ACME by default has no external authorization engine and is unauthenticated from a Vault perspective, the use of roles with ACME in the default configuration is of limited value, as any ACME client can request certificates under any role by proving possession of the requested certificate identifiers.

To solve this issue, there are two possible approaches:

1. Use a restrictive [`allowed_roles`, `allowed_issuers`, and `default_directory_policy` ACME configuration](/vault/api-docs/secret/pki#set-acme-configuration) to let only a single role and issuer be used. This prevents user choice, allowing some global restrictions to be placed on issuance, and avoids requiring ACME clients to have (at initial setup) access to a Vault token or other mechanism for acquiring a Vault EAB ACME token.
2. Use a more permissive configuration with [`eab_policy=always-required`](/vault/api-docs/secret/pki#eab_policy) to allow more roles and let users select the roles, but bind ACME clients to a Vault token which can be suitably ACL'd to particular sets of approved ACME directories.

The choice of approach depends on the policies of the organization wishing to use ACME.
Another consequence of the unauthenticated nature of ACME requests is that role templating (based on entity information) cannot be used, as there is no token and thus no entity associated with the request, even when EAB binding is used.

### ACME and the public internet

Using ACME is possible over the public internet; public CAs like Let's Encrypt offer this as a service. Similarly, organizations running internal PKI infrastructure might wish to issue server certificates to pieces of infrastructure outside of their internal network boundaries, from a publicly accessible Vault instance.

By default, without enforcing a restrictive `eab_policy`, this results in a complicated threat model: *any* external client which can prove possession of a domain can issue a certificate under this CA, which might be considered more trusted by this organization.

As such, we strongly recommend publicly facing Vault instances (such as HCP Vault) enforce that PKI mount operators have required a restrictive [`eab_policy=always-required` configuration](/vault/api-docs/secret/pki#acme-external-account-bindings). System administrators of Vault instances can enforce this by setting the `VAULT_DISABLE_PUBLIC_ACME=true` environment variable.

### ACME errors are in server logs

Because the ACME client is not necessarily trusted (as account registration may not be tied to a valid Vault token when EAB is not used), many error messages end up in the Vault server logs out of security necessity. When troubleshooting issues with clients requesting certificates, first check the client's logs, if any (e.g., certbot will state the log location on errors), and then correlate with Vault server logs to identify the failure reason.
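Requiring EAB for every ACME account, as recommended above, is a small configuration change; the mount path is an example:

```shell
# Refuse ACME account registration without an External Account Binding.
vault write pki/config/acme enabled=true eab_policy=always-required

# Each ACME client then needs an EAB key ID and HMAC key, which an
# operator (or the client, with a suitably scoped token) mints from Vault.
vault write -f pki/acme/new-eab
```

The returned `id` and `key` are then passed to the ACME client (e.g., certbot's `--eab-kid` and `--eab-hmac-key` flags) at registration time.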
### ACME security considerations

ACME allows any client to use Vault to make some sort of external call; while the design of ACME attempts to minimize this scope (and will prohibit issuance if incorrect servers are contacted), it cannot account for all possible remote server implementations. Vault's ACME server makes three types of requests:

1. DNS requests for `_acme-challenge.<domain>`, which should be least invasive and most safe.
2. TLS-ALPN requests for the `acme-tls/1` protocol, which should be safely handled by the TLS stack before any application code is invoked.
3. HTTP requests to `http://<domain>/.well-known/acme-challenge/<token>`, which could be problematic based on server design: if all requests, regardless of path, are treated the same and assumed to be trusted, this could result in Vault being used to make (invalid) requests. Ideally, any such server implementations should be updated to ignore such ACME validation requests or to block access originating from Vault to this service.

In all cases, no information about the response presented by the remote server is returned to the ACME client.

When running Vault on multiple networks, note that Vault's ACME server places no restrictions on requesting clients' destination identifier validation paths: a client could use an HTTP challenge to force Vault to reach out to a server on a network it could otherwise not access.

### ACME and client counting

In Vault 1.14, ACME contributes differently to usage metrics than other interactions with the PKI Secrets Engine. Due to its use of unauthenticated requests (which do not generate Vault tokens), it would not be counted in the traditional [activity log APIs](/vault/api-docs/system/internal-counters#activity-export). Instead, certificates issued via ACME will be counted via their unique certificate identifiers (the combination of CN, DNS SANs, and IP SANs). These will create a stable identifier that will be consistent across renewals, other ACME clients, mounts, and namespaces, contributing to the activity log presently as a non-entity token attributed to the first mount which created that request.
## Keep certificate lifetimes short, for CRL's sake

This secrets engine aligns with Vault's philosophy of short-lived secrets. As such it is not expected that CRLs will grow large; the only place a private key is ever returned is to the requesting client (this secrets engine does *not* store generated private keys, except for CA certificates). In most cases, if the key is lost, the certificate can simply be ignored, as it will expire shortly.

If a certificate must truly be revoked, the normal Vault revocation function can be used, and any revocation action will cause the CRL to be regenerated. When the CRL is regenerated, any expired certificates are removed from the CRL, and any revoked, expired certificates are removed from secrets engine storage.

This is an expensive operation. Due to the structure of the CRL standard, Vault must read *all* revoked certificates into memory in order to rebuild the CRL, and clients must fetch the regenerated CRL.

This secrets engine does not support multiple CRL endpoints with sliding date windows; often such mechanisms will have the transition point a few days apart, but this gets into the expected realm of the actual certificate validity periods issued from this secrets engine. A good rule of thumb for this secrets engine would be to simply not issue certificates with a validity period greater than your maximum comfortable CRL lifetime. Alternately, you can control CRL caching behavior on the client to ensure that checks happen more often.

Often multiple endpoints are used in case a single CRL endpoint is down, so that clients don't have to figure out what to do with a lack of response. Run Vault in HA mode, and the CRL endpoint should be available even if a particular node is down.

~> Note: Since Vault 1.11.0, with multiple issuers in the same mount point, different issuers may have different CRLs (depending on subject and key material). This means that Vault may need to regenerate multiple CRLs. This is again a rationale for keeping TTLs short and avoiding revocation if possible.
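When revocation is unavoidable, the revoke-then-rebuild flow described above looks like the following; the mount path is an example and `<serial>` stands for a serial number returned at issuance time:

```shell
# Revoke a certificate by its serial number.
vault write pki/revoke serial_number="<serial>"

# Revocation already triggers a rebuild; this forces one explicitly,
# then the (unauthenticated) CRL endpoint serves the fresh CRL.
vault read pki/crl/rotate
curl -s "$VAULT_ADDR/v1/pki/crl/pem"
```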
~> Note: Since Vault 1.12.0, we support two complementary revocation mechanisms: Delta CRLs, which allow for rebuilds of smaller, incremental additions to the last complete CRL, and OCSP, which allows responding to revocation status requests for individual certificates. When coupled with the new CRL auto-rebuild functionality, this means that the revoking step isn't as costly (as the CRL isn't always rebuilt on each revocation), outside of storage considerations. However, while the rebuild operation can still be expensive with lots of certificates, it will be done on a schedule rather than on demand.

### NotAfter behavior on leaf certificates

In Vault 1.11.0, the PKI Secrets Engine introduced a new [`leaf_not_after_behavior` parameter on issuers](/vault/api-docs/secret/pki#leaf_not_after_behavior). This allows modification of the issuance behavior: should Vault `err`, preventing issuance of a longer-lived leaf cert than issuer, silently `truncate` to that of the issuer's `NotAfter` value, or `permit` longer expirations?

It is strongly suggested to use `err` or `truncate` for intermediates; `permit` is only useful for root certificates, as intermediates' NotAfter expirations are checked when validating presented chains.

In combination with a cascading expiration with longer-lived roots (perhaps on the range of 2-10 years), shorter-lived intermediates (perhaps on the range of 6 months to 2 years), and short-lived leaf certificates (on the range of 30 to 90 days), and the [rotation strategies discussed in other sections](/vault/docs/secrets/pki/rotation-primitives), this should keep the CRLs adequately small.
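Applying the suggested `truncate` behavior to an intermediate's issuer entry might look like this, on a Vault version supporting issuer `PATCH` updates (`default` refers to the mount's default issuer; the mount path is an example):

```shell
# Truncate a leaf's NotAfter to the issuer's own expiry instead of erroring.
vault patch pki_int/issuer/default leaf_not_after_behavior=truncate
```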
### Cluster performance and quantity of leaf certificates

As mentioned above, keeping TTLs short (or using `no_store=true` and `no_store_cert_metadata=true`) and avoiding leases is important for a healthy cluster. However, it is important to note this is a scale problem: 10-1000 long-lived, stored certificates are probably fine, but 50k-100k become a problem, and 500k+ stored, unexpired certificates can negatively impact even large Vault clusters, even with short TTLs. Once these certificates are expired, however, a [tidy operation](/vault/api-docs/secret/pki#tidy) will clean up CRLs and Vault cluster storage.

Note that organizational risk assessments for certificate compromise might mean certain certificate types should always be issued with `no_store=false`; even short-lived broad wildcard certificates (say, `*.example.com`) might be important enough to have precise control over revocation. However, an internal service with a well-scoped certificate (say, `service.example.com`) might be of low enough risk to issue with a 90-day TTL and `no_store=true`, preventing the need for revocation in the unlikely case of compromise.

Having a shorter TTL decreases the likelihood of needing to revoke a cert (but cannot prevent it entirely) and decreases the impact of any such compromise.

~> Note: As of Vault 1.12, the PKI Secret Engine's [Bring Your Own Cert (BYOC)](/vault/api-docs/secret/pki#revoke-certificate) functionality allows revocation of certificates not previously stored (e.g., issued via a role with `no_store=true`). This means that setting `no_store=true` is now *safe* to be used globally, regardless of the importance of issued certificates and their likelihood for revocation.

## You must configure issuing/CRL/OCSP information *in advance*

This secrets engine serves CRLs from a predictable location, but it is not possible for the secrets engine to know where it is running. Therefore, you must configure desired URLs for the issuing certificate, CRL distribution points, and OCSP servers manually using the `config/urls` endpoint. It is supported to have more than one of each of these, by passing in the multiple URLs as a comma-separated string parameter.
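A minimal `config/urls` setup, as described above, might look like the following; the mount path and hostname are illustrative:

```shell
# URLs embedded in issued certificates; multiple values may be
# passed as a comma-separated string.
vault write pki/config/urls \
    issuing_certificates="https://vault.example.com:8200/v1/pki/ca" \
    crl_distribution_points="https://vault.example.com:8200/v1/pki/crl" \
    ocsp_servers="https://vault.example.com:8200/v1/pki/ocsp"
```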
~> Note: When using Vault Enterprise's Performance Replication features with a PKI Secrets Engine mount, each cluster will have its own CRL; this means each cluster's unique CRL address should be included in the [AIA information](https://datatracker.ietf.org/doc/html/rfc5280#section-5.2.7) field separately, or the CRLs should be consolidated and served outside of Vault.

~> Note: When using multiple issuers in the same mount, it is suggested to use the per-issuer AIA fields rather than the global (`config/urls`) variant. This is for correctness: these fields are used for chain building and automatic CRL detection in certain applications. If they point to the wrong issuer's information, these applications may break.

### Distribution of CRLs and OCSP

Both CRLs and OCSP allow interrogating the revocation status of certificates. Both of these methods include internal security and authenticity: both CRLs and OCSP responses are signed by the issuing CA within Vault. This means both are fine to distribute over non-secure and non-authenticated channels, such as HTTP.

~> Note: The OCSP implementation for GET requests can lead to intermittent 400 errors when an encoded OCSP request contains consecutive `/` characters. Until this is resolved, it is recommended to use POST-based OCSP requests.

## Automate CRL building and tidying

Since Vault 1.12, the PKI Secrets Engine supports automated CRL rebuilding (including optional Delta CRLs, which can be built more frequently than complete CRLs) via the `config/crl` endpoint. Additionally, tidying of revoked and expired certificates can be configured automatically via the `config/auto-tidy` endpoint. Both of these should be enabled to ensure compatibility with the wider PKIX ecosystem and performance of the cluster.
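Enabling both of the automations above might look like this; the mount path and buffer values are illustrative, not recommendations from the docs:

```shell
# Rebuild the CRL on a schedule (with incremental Delta CRLs in between)
# instead of on every revocation.
vault write pki/config/crl auto_rebuild=true enable_delta=true

# Periodically tidy expired certificates and revoked entries.
vault write pki/config/auto-tidy \
    enabled=true tidy_cert_store=true tidy_revoked_certs=true \
    safety_buffer=72h
```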
## Spectrum of revocation support

Starting with Vault 1.13, the PKI secrets engine has the ability to support a spectrum of cluster sizes and certificate revocation quantities.

For users with few revocations, or who want a unified view and have the inter-cluster bandwidth to support it, we recommend turning on auto-rebuilding of CRLs, cross-cluster revocation queues, and cross-cluster CRLs. This allows all consumers of the CRLs to have the most accurate picture of revocations, regardless of which cluster they talk to.

If the unified CRL becomes too big for the underlying storage mechanism, or for a single host to build, we recommend relying on OCSP instead of CRLs. These have much smaller storage entries, and the CRL `disabled` flag is independent of `unified_crl`, allowing unified OCSP to remain.

However, when cross-cluster traffic becomes too high (or if CRLs are still necessary in addition to OCSP), we recommend sharding the CRL between different clusters. This has been the default behavior of Vault, but with the introduction of per-cluster, templated AIA information, the leaf certificate's Authority Information Access (AIA) info will point directly to the cluster which issued it, allowing the correct CRL for this cert to be identified by the application. This more correctly mimics the behavior of [Let's Encrypt's CRL sharding](https://letsencrypt.org/2022/09/07/new-life-for-crls.html). This sharding behavior can also be used for OCSP, if the cross-cluster traffic for revocation entries becomes too high.

For users who wish to manage revocation manually, using the audit logs to track certificate issuance would allow an external system to identify which certificates were issued. These can be manually tracked for revocation, and a [custom CRL can be built](/vault/api-docs/secret/pki#combine-crls-from-the-same-issuer) using externally tracked revocations. This would allow usage of roles set to `no_store=true`, so Vault is strictly used as an issuing authority and isn't storing any certificates, issued or revoked. For the highest of revocation volumes, this could be the best option.

Notably, this last approach can be used for the creation of either externally stored unified or sharded CRLs.
If a single external unified CRL becomes unreasonably large, each cluster's certificates could have AIA info point to an externally stored and maintained, sharded CRL. However, Vault has no mechanism to sign OCSP requests at this time.

### What are cross-cluster CRLs?

Vault Enterprise supports a clustering mode called [Performance Replication](/vault/docs/enterprise/replication#performance-replication). In a replicated PKI Secrets Engine mount, issuer and role information is synced between the Performance Primary and all Performance Secondary clusters. However, each Performance Secondary cluster has its own local storage of issued certificates and revocations, which is not synced. In Vault versions before 1.13, this meant that each of these clusters had its own CRL and OCSP data, and any revocation requests needed to be processed on the cluster that issued it (or BYOC used).

Starting with Vault 1.13, we've added [two features](/vault/api-docs/secret/pki#read-crl-configuration) to Vault Enterprise to help manage this setup more correctly and easily: revocation request queues (`cross_cluster_revocation=true` in `config/crl`) and unified revocation entries (`unified_crl=true` in `config/crl`).

The former allows operators, revoking by serial number, to request a certificate be revoked regardless of which cluster it was issued on. For example, if a request goes into the Performance Primary but it didn't issue the certificate, it'll write a cross-cluster revocation request and mark the results as pending. If another cluster already has this certificate in storage, it will revoke it and confirm the revocation back to the main cluster. An operator can [list pending revocations](/vault/api-docs/secret/pki#list-revocation-requests) to see the status of these requests.
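On Vault Enterprise, the two features described above are flipped on in the same `config/crl` endpoint; the mount path is an example:

```shell
# Enable cross-cluster revocation queues and unified revocation entries
# (Vault Enterprise 1.13+; applies across Performance Replication clusters).
vault write pki/config/crl \
    cross_cluster_revocation=true unified_crl=true
```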
To clean up invalid requests (e.g., if the cluster which had that certificate disappeared, if that certificate was issued with `no_store=true` on the role, or if it was an invalid serial number), an operator can [use tidy](/vault/api-docs/secret/pki#tidy) with `tidy_revocation_queue=true`, optionally shortening `revocation_queue_safety_buffer` to remove them quicker.

The latter allows all clusters to have a unified view of revocations; that is, to have access to a list of revocations performed by other clusters. While the configuration parameter includes `crl` in the description, this applies to [both CRLs](/vault/api-docs/secret/pki#read-issuer-crl) and the [OCSP responder](/vault/api-docs/secret/pki#ocsp-request). When this revocation replication occurs, if any cluster considers a cert revoked when another doesn't (e.g., via BYOC revocation of a `no_store=false` certificate), all clusters will now consider it revoked, assuming it hasn't expired. Notably, the active node of the primary cluster will be used to rebuild the CRL; as this can grow large if many clusters have lots of revoked certs, an operator might need to disable CRL building (`disabled=true` in `config/crl`) or increase the [storage size](/vault/docs/configuration/storage/raft#max_entry_size).

As an aside, all new cross-cluster writes (from Performance Secondary up to the Performance Primary) are performed synchronously. This gives the caller confidence that the request actually went through, at the expense of incurring a bit higher overhead for revoking certificates. When a node loses its gRPC connection (e.g., during leadership election, or when otherwise unable to contact the active primary), errors will occur, though the local portion of the write (if any) will still succeed. For cross-cluster revocation requests, due to there being no local write, this means that the operation will need to be retried; but in the event of an issue writing a cross-cluster revocation entry when the cert existed locally, the revocation will eventually be synced across clusters when the connection comes back.
and structuring mechanism for certificates' Subjects. Issuers created within Vault are fine, but when using externally created CA certificates, these may not be parsed correctly throughout all parts of the PKI. In particular, CRLs embed a _modified_ copy of the issuer name. This can be avoided by using OCSP to track revocation, but note that performance characteristics are different between OCSP and CRLs.

~> **Note**: As of Go 1.20 and Vault 1.13, Go correctly formats the CRL's issuer name and this notice [does not apply](https://github.com/golang/go/commit/a367981b4c8e3ae955eca9cc597d9622201155f3).

### Automate leaf certificate renewal

To manage certificates for services at scale, it is best to automate certificate renewal as much as possible. Vault Agent [has support for automatically renewing requested certificates](/vault/docs/agent-and-proxy/agent/template#certificates) based on the `validTo` field. Other solutions might involve using [cert-manager](https://cert-manager.io/) in Kubernetes or OpenShift, backed by the Vault CA.

### Safe minimums

Since its inception, this secrets engine has enforced SHA256 for signature hashes rather than SHA1. As of 0.5.1, a minimum of 2048 bits for RSA keys is also enforced. Software that can handle SHA256 signatures should also be able to handle 2048-bit keys, and 1024-bit keys are considered unsafe and are disallowed in the Internet PKI.

### Token lifetimes and revocation

When a token expires, it revokes all leases associated with it. This means that long-lived CA certs need correspondingly long-lived tokens, something that is easy to forget. Starting with 0.6, root and intermediate CA certs no longer have associated leases, to prevent unintended revocation when not using a token with a long enough lifetime. To revoke these certificates, use the `pki/revoke` endpoint.

### Safe usage of roles

The Vault PKI Secrets Engine supports many options to limit issuance via [Roles](/vault/api-docs/secret/pki#create-update-role).
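As a concrete sketch of a narrowly-scoped role following the guidance in this section (the mount path `pki`, the role name `web-server`, and the allowed domain are hypothetical placeholders, not values from this document), such a role might be created as:

```shell
# Hypothetical example: a narrowly-scoped TLS server role. The mount path
# "pki", role name "web-server", and allowed domain are placeholders.
vault write pki/roles/web-server \
    allowed_domains="example.com" \
    allow_subdomains=true \
    allow_any_name=false \
    allow_localhost=false \
    allow_wildcard_certificates=false \
    allow_ip_sans=false \
    enforce_hostnames=true \
    server_flag=true \
    key_usage="DigitalSignature,KeyEncipherment" \
    no_store=true \
    max_ttl="720h"
```

Each service can then get its own role of this shape; several narrow roles are preferable to one permissive role.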
Careful consideration of construction is necessary to ensure that more permissions are not given than necessary. Additionally, roles should generally do _one_ thing; multiple roles are preferable over overly permissive roles that allow arbitrary issuance (e.g., `allow_any_name` should generally be used sparingly, if at all).

- `allow_any_name` should generally be set to `false`; this is the default.
- `allow_localhost` should generally be set to `false` for production services, unless listening on `localhost` is expected.
- Unless necessary, `allow_wildcard_certificates` should generally be set to `false`. This is **not** the default, due to backwards compatibility concerns.
   - This is especially necessary when `allow_subdomains` or `allow_glob_domains` are enabled.
- `enforce_hostnames` should generally be enabled for TLS services; this is the default.
- `allow_ip_sans` should generally be set to `false` (but defaults to `true`), unless IP address certificates are explicitly required.
- When using short TTLs (<30 days) or with high issuance volume, it is generally recommended to set `no_store` to `true` (defaults to `false`). This prevents serial-number-based revocation, but allows higher throughput as Vault no longer needs to store every issued certificate. This is discussed more in the [Replicated Datasets](#replicated-datasets) section below.
- Do not use roles with root certificates (`issuer_ref`). Root certificates should generally only issue intermediates (see the section on [CA hierarchy](#use-a-ca-hierarchy) above), which doesn't rely on roles.
- Limit `key_usage` and `ext_key_usage`; don't attempt to allow all usages for all purposes. Generally the default values are useful for client and server TLS authentication.

### Telemetry

Beyond Vault's default telemetry around request processing, PKI exposes count and duration metrics for the issue, sign, sign-verbatim, and revoke calls. The metrics
keys take the form `mount,path,operation,(failure)`, with labels for namespace and role name. Note that these metrics are per node and thus would need to be aggregated across nodes and clusters.

### Auditing

Because Vault HMACs audit string keys by default, it is necessary to tune PKI secrets mounts to get an accurate view of issuance that is occurring under this mount.

~> **Note**: Depending on usage of Vault, CRLs (and, rarely, CA chains) can grow to be rather large. We don't recommend un-HMACing the `crl` field for this reason, but note that the recommendations below suggest to un-HMAC the `certificate` response parameter, which the CRL can be served in via the `/pki/cert/crl` API endpoint. Additionally, the `http_raw_body` can be used to return the CRL in both PEM and raw binary DER form, so it is suggested not to un-HMAC that field to not corrupt the log format. <br /><br /> If this is done with only a [`syslog`](/vault/docs/audit/syslog) audit device, Vault can deny requests (with an opaque `500 Internal Error` message) after the action has been performed on the server, because it was unable to log the message. <br /><br /> The suggested workaround is to either leave the `certificate` and `crl` response fields HMACed and/or to also enable the [`file`](/vault/docs/audit/file) audit log type.

Some suggested keys to un-HMAC for requests are as follows:

- `csr` - the requested CSR to sign,
- `certificate` - the requested self-signed certificate to re-sign or when importing issuers,
- Various issuance-related overriding parameters, such as:
   - `issuer_ref` - the issuer requested to sign this certificate,
   - `common_name` - the requested common name,
   - `alt_names` - alternative requested DNS-type SANs for this certificate,
   - `other_sans` - other (non-DNS, non-Email, non-IP, non-URI) requested SANs for this certificate,
   - `ip_sans` - requested IP-type SANs for this certificate,
   - `uri_sans` - requested URI-type
SANs for this certificate,
   - `ttl` - requested expiration date of this certificate,
   - `not_after` - requested expiration date of this certificate,
   - `serial_number` - the subject's requested serial number,
   - `key_type` - the requested key type,
   - `private_key_format` - the requested key format, which is also used for the public certificate format as well,
- Various role- or issuer-related generation parameters, such as:
   - `managed_key_name` - when creating an issuer, the requested managed key name,
   - `managed_key_id` - when creating an issuer, the requested managed key identifier,
   - `ou` - the subject's organizational unit,
   - `organization` - the subject's organization,
   - `country` - the subject's country code,
   - `locality` - the subject's locality,
   - `province` - the subject's province,
   - `street_address` - the subject's street address,
   - `postal_code` - the subject's postal code,
   - `permitted_dns_domains` - permitted DNS domains,
   - `policy_identifiers` - the requested policy identifiers when creating a role, and
   - `ext_key_usage_oids` - the extended key usage OIDs for the requested certificate.

Some suggested keys to un-HMAC for responses are as follows:

- `certificate` - the certificate that was issued,
- `issuing_ca` - the certificate of the CA which issued the requested certificate,
- `serial_number` - the serial number of the certificate that was issued,
- `error` - to show errors associated with the request, and
- `ca_chain` - (optional, due to noise) the full CA chain of the issuer of the requested certificate.

~> **Note**: This list of parameters to un-HMAC is provided as a suggestion and may not be exhaustive.

The following keys are suggested **NOT** to un-HMAC, due to their sensitive nature:

- `private_key` - this response parameter contains the private keys generated by Vault during issuance, and
- `pem_bundle` - this request parameter is only used on the issuer-import
paths and may contain sensitive private key material.

### Role-Based access

Vault supports [path-based ACL Policies](/vault/tutorials/getting-started/getting-started-policies) for limiting access to various paths within Vault.

The following is a condensed example reference of ACLing the PKI Secrets Engine. These are just a suggestion; other personas and policy approaches may also be valid.

We suggest the following personas:

- _Operator_: a privileged user who manages the health of the PKI subsystem; manages issuers and key material.
- _Agent_: a semi-privileged user that manages roles and handles revocation on behalf of an operator; may also handle delegated issuance. This may also be called an _administrator_ or _role manager_.
- _Advanced_: potentially a power user or service that has access to additional issuance APIs.
- _Requester_: a low-level user or service that simply requests certificates.
- _Unauthed_: any arbitrary user or service that lacks a Vault token.

For these personas, we suggest the following ACLs, in condensed, tabular form:

| Path | Operations | Operator | Agent | Advanced | Requester | Unauthed |
| ---- | ---------- | -------- | ----- | -------- | --------- | -------- |
| `ca(/pem)` | Read | Yes | Yes | Yes | Yes | Yes |
| `ca_chain` | Read | Yes | Yes | Yes | Yes | Yes |
| `crl(/pem)` | Read | Yes | Yes | Yes | Yes | Yes |
| `crl/delta(/pem)` | Read | Yes | Yes | Yes | Yes | Yes |
| `cert/:serial(/raw(/pem))` | Read | Yes | Yes | Yes | Yes | Yes |
| `issuers` | List | Yes | Yes | Yes | Yes | Yes |
| `issuer/:issuer_ref(/json\|/der\|/pem)` | Read | Yes | Yes | Yes | Yes | Yes |
| `issuer/:issuer_ref/crl(/der\|/pem)` | Read | Yes | Yes | Yes | Yes | Yes |
| `issuer/:issuer_ref/crl/delta(/der\|/pem)` | Read | Yes | Yes | Yes | Yes | Yes |
| `ocsp/:request` | Read | Yes | Yes | Yes | Yes | Yes |
| `ocsp` | Write | Yes | Yes | Yes | Yes | Yes |
| `certs` | List | Yes | Yes | Yes | Yes | |
| `revoke-with-key` | Write | Yes | Yes | Yes | Yes | |
| `roles` | List | Yes | Yes | Yes | Yes | |
| `roles/:role` | Read | Yes | Yes | Yes | Yes | |
| `(issue\|sign)/:role` | Write | Yes | Yes | Yes | Yes | |
| `issuer/:issuer_ref/(issue\|sign)/:role` | Write | Yes | Yes | Yes | | |
| `config/auto-tidy` | Read | Yes | Yes | | | |
| `config/ca` | Read | Yes | Yes | | | |
| `config/crl` | Read | Yes | Yes | | | |
| `config/issuers` | Read | Yes | Yes | | | |
| `crl/rotate` | Read | Yes | Yes | | | |
| `crl/rotate-delta` | Read | Yes | Yes | | | |
| `roles/:role` | Write | Yes | Yes | | | |
| `issuer/:issuer_ref` | Read | Yes | Yes | | | |
| `sign-verbatim(/:role)` | Write | Yes | Yes | | | |
| `issuer/:issuer_ref/sign-verbatim(/:role)` | Write | Yes | Yes | | | |
| `revoke` | Write | Yes | Yes | | | |
| `tidy` | Write | Yes | Yes | | | |
| `tidy-cancel` | Write | Yes | Yes | | | |
| `tidy-status` | Read | Yes | Yes | | | |
| `config/auto-tidy` | Write | Yes | | | | |
| `config/ca` | Write | Yes | | | | |
| `config/crl` | Write | Yes | | | | |
| `config/issuers` | Write | Yes | | | | |
| `config/keys` | Read, Write | Yes | | | | |
| `config/urls` | Read, Write | Yes | | | | |
| `issuer/:issuer_ref` | Write | Yes | | | | |
| `issuer/:issuer_ref/revoke` | Write | Yes | | | | |
| `issuer/:issuer_ref/sign-intermediate` | Write | Yes | | | | |
| `issuer/:issuer_ref/sign-self-issued` | Write | Yes | | | | |
| `issuers/generate/+` | Write | Yes | | | | |
| `issuers/import/+` | Write | Yes | | | | |
| `intermediate/generate/+` | Write | Yes | | | | |
| `intermediate/cross-sign` | Write | Yes | | | | |
| `intermediate/set-signed` | Write | Yes | | | | |
| `keys` | List | Yes | | | | |
| `key/:key_ref` | Read, Write | Yes | | | | |
| `keys/generate/+` | Write | Yes | | | | |
| `keys/import` | Write | Yes | | | | |
| `root/generate/+` | Write | Yes | | | | |
| `root/sign-intermediate` | Write | Yes | | | | |
| `root/sign-self-issued` | Write | Yes | | | | |
| `root/rotate/+` | Write | Yes | | | | |
| `root/replace` | Write | Yes | | | | |

~> **Note**: With managed keys, operators might need access to [read the mount point's tunable data](/vault/api-docs/system/mounts) (Read on `/sys/mounts`) and may need access [to use or manage managed keys](/vault/api-docs/system/managed-keys).

### Replicated DataSets

When operating with [Performance Secondary](/vault/docs/enterprise/replication#architecture) clusters, certain data sets are maintained across all clusters, while others, for performance and scalability reasons, are kept within a given cluster.

The following table breaks down, by data type, which data sets cross the cluster boundaries. For data types that do not cross a cluster boundary, read requests for that data will need to be sent to the appropriate cluster that the data was generated on.

| Data Set | Replicated Across Clusters |
| -------- | -------------------------- |
| Issuers & Keys | Yes |
| Roles | Yes |
| CRL Config | Yes |
| URL Config | Yes |
| Issuer Config | Yes |
| Key Config | Yes |
| CRL | No |
| Revoked Certificates | No |
| Leaf/Issued Certificates | No |
| Certificate Metadata | No |

The main effect is that, within the PKI secrets engine, leaf certificates issued with `no_store` set to `false` are stored local to the cluster that issued them. This allows for both primary and [Performance Secondary](/vault/docs/enterprise/replication#architecture) clusters' active nodes to issue certificates for greater scalability. As a result, these certificates, metadata, and any revocations are
visible only on the issuing cluster. This additionally means each cluster has its own set of CRLs, distinct from other clusters. These CRLs should either be unified into a single CRL for distribution from a single URI, or server operators should know to fetch all CRLs from all clusters.

### Cluster scalability

Most non-introspection operations in the PKI secrets engine require a write to storage, and so are forwarded to the cluster's active node for execution. This table outlines which operations can be executed on performance standby nodes and thus scale horizontally across all nodes within a cluster.

| Path | Operations |
| ---- | ---------- |
| `ca(/pem)` | Read |
| `cert/<em>serial-number</em>` | Read |
| `cert/ca_chain` | Read |
| `config/crl` | Read |
| `certs` | List |
| `ca_chain` | Read |
| `crl(/pem)` | Read |
| `issue` | Update <sup>1</sup> |
| `revoke/<em>serial-number</em>` | Read |
| `sign` | Update <sup>1</sup> |
| `sign-verbatim` | Update <sup>1</sup> |

<sup>1</sup> Only if the corresponding role has `no_store` set to `true`, `generate_lease` set to `false`, and no metadata is being written. If `generate_lease` is `true`, the lease creation will be forwarded to the active node; if `no_store` is `false`, the entire request will be forwarded to the active node. If `no_store_cert_metadata=false` and the `metadata` argument is provided, the entire request will be forwarded to the active node.

### PSS support

Go lacks support for PSS certificates, keys, and CSRs using the `rsaPSS` OID (`1.2.840.113549.1.1.10`). It requires all RSA certificates, keys, and CSRs to use the alternative `rsaEncryption` OID (`1.2.840.113549.1.1.1`).

When using OpenSSL to generate CAs or CSRs from PKCS8-encoded PSS keys, the resulting CAs and CSRs will have the `rsaPSS` OID; Go and Vault will reject them. Instead, use OpenSSL to generate or convert to a PKCS#1 v1.5 private key file and use this to generate the CSR. Vault will, depending on the role and the signing mechanism, still use a PSS signature despite the `rsaEncryption` OID on the request, as the SubjectPublicKeyInfo and SignatureAlgorithm fields are orthogonal. When creating an external CA and importing it into Vault, ensure that the `rsaEncryption` OID is present on the SubjectPublicKeyInfo field, even if the SignatureAlgorithm is PSS-based.

These certificates generated by Go (with the `rsaEncryption` OID but PSS-based signatures) are otherwise compatible with fully PSS-based certificates: OpenSSL and NSS support parsing and verifying chains using this type of certificate. Note that some TLS implementations may not support these types of certificates if they do not support `rsa_pss_rsae_*` signature schemes. Additionally, some implementations allow rsaPSS OID certificates to contain restrictions on signature parameters allowed by this certificate, but Go and Vault do not support adding such restrictions.

At this time, Go lacks support for signing CSRs with the PSS signature algorithm. If using a managed key that requires an RSA-PSS algorithm (such as GCP or a PKCS#11 HSM) as a backing for an intermediate CA key, attempting to generate a CSR (via `pki/intermediate/generate/kms`) will fail signature verification. In this case, the CSR will need to be generated outside of Vault, and the signed final certificate can be imported into the mount.

Go additionally lacks support for creating OCSP responses with the PSS signature algorithm. Vault will automatically downgrade issuers with PSS-based revocation signature algorithms to PKCS#1 v1.5, but note that certain KMS devices (like HSMs and GCP) may not support this with the same key. As a result,
the OCSP responder may fail to sign responses, returning an internal error.

### Issuer storage migration issues

When Vault migrates to the new multi-issuer storage layout on releases prior to 1.11.6, 1.12.2, and 1.13, and storage write errors occur during the mount initialization and storage migration process, the default issuer _may_ not have the correct `ca_chain` value and may only have the self-reference. These write errors most commonly manifest in logs as a message like `failed to persist issuer ... chain to disk: <cause>`, and indicate that Vault was not stable at the time of migration. Note that this only occurs when more than one issuer exists within the mount (such as an intermediate with root).

To fix this manually (until a new version of Vault automatically rebuilds the issuer chain), a rebuild of the chains can be performed:

```shell-session
$ curl -X PATCH -H "Content-Type: application/merge-patch+json" -H "X-Vault-Request: true" -H "X-Vault-Token: $(vault print token)" -d '{"manual_chain": "self"}' https://.../issuer/default
$ curl -X PATCH -H "Content-Type: application/merge-patch+json" -H "X-Vault-Request: true" -H "X-Vault-Token: $(vault print token)" -d '{"manual_chain": ""}' https://.../issuer/default
```

This temporarily sets the manual chain on the default issuer to a self-chain only, before reverting it back to automatic chain building. This triggers a refresh of the `ca_chain` field on the issuer, and can be verified with:

```shell-session
$ vault read pki/issuer/default
```

### Issuer Constraints Enforcement

Starting with versions 1.18.3, 1.18.3+ent, 1.17.10+ent, and 1.16.14+ent, Vault performs additional verifications when creating or signing leaf certificates for issuers that have constraints extensions. This verification includes validating extended key usage, name constraints, and correct copying of the issuer name onto the certificate. Certificates issued without this verification might not be accepted by end-user applications.

Problems with issuance arising from
this validation should be fixed by changing the issuer certificate itself, to avoid more problems down the line. It is possible to completely disable verification by setting the environment variable `VAULT_DISABLE_PKI_CONSTRAINTS_VERIFICATION` to `true`.

~> **Warning**: The use of the environment variable `VAULT_DISABLE_PKI_CONSTRAINTS_VERIFICATION` should be considered a last resort.

## Tutorial

Refer to the [Build Your Own Certificate Authority (CA)](/vault/tutorials/secrets-management/pki-engine) guide for a step-by-step tutorial.

Have a look at the [PKI Secrets Engine with Managed Keys](/vault/tutorials/enterprise/managed-key-pki) for more about how to use externally managed keys with PKI.

## API

The PKI secrets engine has a full HTTP API. Please see the [PKI secrets engine API](/vault/api-docs/secret/pki) for more details.
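As a quick illustration of that HTTP API (the mount path `pki`, the role name `example-dot-com`, and the server address are assumptions for this sketch, not values from this document), a certificate can be requested with a single call:

```shell
# Hypothetical values: mount "pki", role "example-dot-com"; VAULT_ADDR and
# VAULT_TOKEN point at an existing, authenticated Vault server.
curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST \
    --data '{"common_name": "test.example.com", "ttl": "24h"}' \
    "$VAULT_ADDR/v1/pki/issue/example-dot-com"
```

The JSON response carries the issued `certificate`, the `issuing_ca`, and, when Vault generates the key pair, the `private_key`.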
{"questions":"vault PKI secrets engine Certificate Management Protocol v2 CMPv2 EnterpriseAlert inline true page title Certificate Management Protocol v2 CMPv2 within Vault PKI Secrets Engines This document summarizes Vault s PKI Secrets Engine layout docs implementation of the CMPv2 protocol https datatracker ietf org doc html rfc4210 EnterpriseAlert inline true An overview of the Certificate Management Protocol v2 implementation within Vault","answers":"---\nlayout: docs\npage_title: Certificate Management Protocol v2 (CMPv2) within Vault | PKI - Secrets Engines\ndescription: An overview of the Certificate Management Protocol (v2) implementation within Vault.\n---\n\n# PKI secrets engine - Certificate Management Protocol v2 (CMPv2) <EnterpriseAlert inline=\"true\" \/>\n\nThis document summarizes Vault's PKI Secrets Engine\nimplementation of the [CMPv2 protocol](https:\/\/datatracker.ietf.org\/doc\/html\/rfc4210) <EnterpriseAlert inline=\"true\" \/>,\nits configuration, and limitations.\n\n## What is Certificate Management Protocol v2 (CMPv2)?\n\nThe CMP protocol is an IETF standardized protocol, [RFC 4210](https:\/\/datatracker.ietf.org\/doc\/html\/rfc4210),\nthat allows clients to acquire client certificates and their associated Certificate\nAuthority (CA) certificates.\n\n## Enabling CMPv2 support on a Vault PKI mount\n\nTo configure an existing mount to serve CMPv2 clients, the following steps are\nrequired:\n\n 1. [Configuring an Issuer](#configuring-an-issuer)\n 1. [Authentication mechanisms](#configuring-cmpv2-authentication)\n 1. [Updating PKI tunable parameters](#updating-the-pki-mount-tunable-parameters)\n 1. [PKI CMPv2 configuration](#enabling-cmpv2)\n\n### Configuring an Issuer\n\nCMPv2 is a bit unique, in that it uses the Issuer CA certificate to sign the\nCMP messages.  
This means your issuer must have the `DigitalSignature` key\nusage.\nExisting CA issuers likely do not have this, so you will need to generate a new\nissuer (likely an intermediate) that has this property.  If you are configuring PKI\nfor the first time or creating a new issuer, ensure you set `key_usage` to,\nas an example, `CRL,CASign,DigitalSignature`.\n\nSee [Generate intermediate CSR](\/vault\/api-docs\/secret\/pki#generate-intermediate-csr)\n\n### Configuring CMPv2 Authentication\n\nAt this time, Vault's implementation of CMPv2 supports only\n[Certificate TLS authentication](\/vault\/docs\/auth\/cert), where a client's proof\nof possession of a TLS client certificate authenticates it to Vault.\n\nAuthentication leverages a separate Vault authentication\nmount, within the same namespace, to validate the client-provided credentials\nand to determine the client's ACL policy to enforce.\n\nFor proper accounting, a mount supporting CMPv2 authentication should be\ndedicated to this purpose, not shared with other workflows.  In other words,\ncreate a new certificate auth mount for CMPv2 even if you already have\nanother in use for other purposes.\n\nWhen setting up the authentication mount for CMPv2 clients, the token type must\nbe configured to return [batch tokens](\/vault\/docs\/concepts\/tokens#batch-tokens).\nBatch tokens are required to avoid an excessive number of leases being generated\nand persisted, as every incoming CMPv2 request needs to be authenticated.\n\nThe path within an ACL policy must match the `cmp` path underneath the\nPKI mount.  
The path to use can be the default `cmp` path or a role-based one.\n\nIf using `sign-verbatim` as a path policy, the following\nACL policy will allow an authenticated client to access the required PKI CMP path.\n```\npath \"pki\/cmp\" {\n  capabilities = [\"update\", \"create\"]\n}\n```\n\nFor a role-based path policy, this sample policy can be used:\n```\npath \"pki\/roles\/my-role-name\/cmp\" {\n  capabilities = [\"update\", \"create\"]\n}\n```\n\n#### Updating the PKI mount tunable parameters\n\nOnce the authentication mount has been created and configured, the authentication mount's accessor\nwill need to be captured and added within the PKI mount's [delegated auth accessors](\/vault\/api-docs\/system\/mounts#delegated_auth_accessors).\n\nTo get an authentication mount's accessor field, the following command can be used.\n\n```shell-session\n$ vault read -field=accessor sys\/auth\/auth\/cert\n```\n\nFor CMP to work within certain clients, a few response headers need to be explicitly\nallowed, trailing slashes must be trimmed, and the list of accessors the mount can delegate authentication towards\nmust be configured. 
The following will grant the required response headers, you will need to replace the values for \nthe `delegated-auth-accessors` to match your values.\n\n```shell-session\n$ vault secrets tune \\\n  -allowed-response-headers=\"Content-Transfer-Encoding\" \\\n  -allowed-response-headers=\"Content-Length\" \\\n  -allowed-response-headers=\"WWW-Authenticate\" \\\n  -delegated-auth-accessors=\"auth_cert_4088ac2d\" \\\n  -trim-request-trailing-slashes=\"true\" \\\n  pki\n```\n\n#### Enabling CMPv2\n\nEnabling CMP is a matter of writing to the `config\/cmp` endpoint, to set it\nenabled and configure default path policy and authentication.\n\n```shell-session\nvault write pki\/config\/cmp -<<EOC\n{\n  \"enabled\": true,\n  \"default_path_policy\": \"role:example-role\",\n  \"authenticators\": {\n    \"cert\": {\n      \"accessor\": \"auth_cert_4088ac2d\"\n    }\n  },\n  \"audit_fields\": [\"common_name\", \"alt_names\", \"ip_sans\", \"uri_sans\"]\n}\nEOC\n```\n\nOf course, substituting your own role and accessor values.  
After this, the\nCMP endpoints will be able to handle client requests, authenticated with the \npreviously configured Cert Auth method.\n\n## Limitations\n\nThe initial release of CMPv2 support is intentionally limited to a subset of the\nprotocol, covering Initialization, Certification, and Key Update, over HTTP.\nIn particular, the following are not yet supported:\n\n * Basic authentication scheme using PasswordBasedMac\n * Revocation\n * CRL fetching via CMP itself.\n * CA creation\/update operations.\n\nNote that CMPv2 is not integrated with these existing Vault PKI features:\n \n * Certificate Metadata - CMPv2 has no means of providing metadata.\n * Certificate Issuance External Policy Service [(CIEPS)](\/vault\/docs\/secrets\/pki\/cieps)\n\n","site":"vault","answers_cleaned":"    layout  docs page title  Certificate Management Protocol v2  CMPv2  within Vault   PKI   Secrets Engines description  An overview of the Certificate Management Protocol  v2  implementation within Vault         PKI secrets engine   Certificate Management Protocol v2  CMPv2   EnterpriseAlert inline  true      This document summarizes Vault s PKI Secrets Engine implementation of the  CMPv2 protocol  https   datatracker ietf org doc html rfc4210   EnterpriseAlert inline  true      its configuration  and limitations      What is Certificate Management Protocol v2  CMPv2    The CMP protocol is an IETF standardized protocol   RFC 4210  https   datatracker ietf org doc html rfc4210   that allows clients to acquire client certificates and their associated Certificate Authority  CA  certficates      Enabling CMPv2 support on a Vault PKI mount  To configure an existing mount to serve CMPv2 clients  the following steps are required  which are broken down into three main categories    1   Configuring an Issuer   configuring an issuer   1   Authentication mechanisms   configuring cmpv2 authentication   1   Updating PKI tunable parameters   updating the pki mount tunable parameters   1   PKI CMPv2 
configuration   enabling cmpv2       Configuring an Issuer  CMPv2 is a bit unique  in that it uses the Issuer CA certificate to sign the CMP messages   This means your issuer must have the  DigitalSignature  key usage  Existing CA issuers likely do not have this  so you will need to generate a new  issuer  likely an intermediate  that has this property   If you are configuring PKI  for the first time or creating a new issuer  ensure you set  key usage  to  as an example   CRL CASign DigitalSignature    See  Generate intermediate CSR   vault api docs secret pki generate intermediate csr       Configuring CMPv2 Authentication  At this time  Vault s implementation of CMPv2 supports only   Certificate TLS authentication   vault docs auth cert   where clients proof of posession of a TLS client certificate authenticates them to Vault   Authentication leverages a separate Vault authentication mount  within the same namespace  to validate the client provided credentials along with the client s ACL policy to enforce    For proper accounting  a mount supporting CMPv2 authentication should be dedicated to this purpose  not shared with other workflows   In other words  create a new certificate auth mount for CMPv2 even if you already have one  another in use for other purposes   When setting up the authentication mount for CMPv2 clients  the token type must be configured to return  batch tokens   vault docs concepts tokens batch tokens   Batch tokens are required to avoid an excessive amount of leases being generated and persisted as every CMPv2 incoming request needs to be authenticated   The path within an ACL policy must match the  cmp  path underneath the  PKI mount   The path to use can be the default  cmp  path or a role based one   If using the  sign verbatim  as a path policy  the following ACL policy will allow an authenticated client access the required PKI CMP path      path  pki cmp      capabilities   update    create          For a role base path policy  this 
sample policy can be used     path  pki roles my role name cmp      capabilities   update    create               Updating the PKI mount tunable parameters  Once the authentication mount has been created and configured  the authentication mount s accessor will need to be captured and added within the PKI mount s  delegated auth accessors   vault api docs system mounts delegated auth accessors    To get an authentication mount s accessor field  the following command can be used      shell session   vault read  field accessor sys auth auth cert      For CMP to work within certain clients  a few response headers need to be explicitly allowed  trailing slashes must be trimmed  and the list of accessors the mount can delegate authentication towards must be configured  The following will grant the required response headers  you will need to replace the values for  the  delegated auth accessors  to match your values      shell session   vault secrets tune      allowed response headers  Content Transfer Encoding       allowed response headers  Content Length       allowed response headers  WWW Authenticate       delegated auth accessors  auth cert 4088ac2d       trim request trailing slashes  true      pki           Enabling CMPv2  Enabling CMP is a matter of writing to the  config cmp  endpoint  to set it enabled and configure default path policy and authentication      shell session vault write pki config cmp    EOC      enabled   true     default path policy    role example role      authenticators          cert            accessor    auth cert 4088ac2d                audit fields     common name    alt names    ip sans    uri sans     EOC      Of course  substituting your own role and accessor values   After this  the CMP endpoints will be able to handle client requests  authenticated with the  previously configured Cert Auth method      Limitations  The initial release of CMPv2 support is intentionally limited to a subset of the protocol  covering Initialization  
Certification  and Key Update  over HTTP  In particular  the following are not yet supported      Basic authentication scheme using PasswordBasedMac    Revocation    CRL fetching via CMP itself     CA creation update operations   Note that CMPv2 is not integrated with these existing Vault PKI features       Certificate Metadata   CMPv2 has no means of providing metadata     Certificate Issuance External Policy Service   CIEPS    vault docs secrets pki cieps   "}
{"questions":"vault The PKI secrets engine for Vault generates TLS certificates layout docs Since Vault 1 11 0 Vault s PKI Secrets Engine supports multiple issuers in a single mount point By using the certificate types below rotation can be page title PKI Secrets Engine Rotation Primitives PKI secrets engine rotation primitives","answers":"---\nlayout: docs\npage_title: 'PKI - Secrets Engine: Rotation Primitives'\ndescription: The PKI secrets engine for Vault generates TLS certificates.\n---\n\n# PKI secrets engine - rotation primitives\n\nSince Vault 1.11.0, Vault's PKI Secrets Engine supports multiple issuers in a\nsingle mount point. By using the certificate types below, rotation can be\naccomplished in various situations involving both root and intermediate CAs\nmanaged by Vault.\n\n## X.509 certificate fields\n\nX.509 is a complex specification; modern implementations tend to refer to\n[RFC 5280](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280) for specific\ndetails. For validation of certificates, both RFC 5280 and the TLS\nvalidation [RFC 6125](https:\/\/datatracker.ietf.org\/doc\/html\/rfc6125) are\nimportant for understanding how to achieve rotation.\n\nThe following is a simplification of these standards for the purpose of\nthis document.\n\nEvery X.509 certificate begins with an asymmetric key pair, using an algorithm\nlike RSA or ECDSA. This key pair is used to create a Certificate Signing\nRequest (CSR), which contains a set of fields the requester would like in the\nfinal certificate (but, it is up to the Certificate Authority (CA) to decide what\nfields to take from the CSR and which to override). The CSR also contains the\npublic key of the pair, which is signed by the private key of the key pair to\nprove possession. Usually, the requester would ask for attributes in the\nSubject field of the CSR or in the Subject Alternative Name extension of the\nCSR to be respected in the final certificate. It is up to the CA if these values are\ntrusted or not. 
When approved by the issuing authority (which may be backed by\nthis asymmetric key itself in the case of a root self-signed certificate), the\nauthority attaches the Subject of _its_ certificate to the issued certificate in\nthe Issuer field, assigns a unique serial number to the issued certificate, and\nsigns the set of fields with its private key, thus creating the certificate.\n\nThere are some important restrictions here:\n\n - One certificate can only have one Issuer, but this issuer is identified by\n   the Subject on the issuing certificate and its public key.\n - One key pair can be used for multiple certificates, but one certificate can\n   only have one backing key material.\n\nThe following fields on the final certificate are relevant to rotation:\n\n - The backing [public](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-4.1.2.7)\n   and private key material (Subject Public Key Info).\n   - Note that the private key is not included in the certificate but is\n     uniquely determined by the public key material.\n - The [Subject](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-4.1.2.6) of the certificate.\n   - This identifies the entity to which the certificate was issued. While the\n     SAN values (in the [Subject Alternative Name](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-4.2.1.6)\n     extension) are useful when validating TLS Server certificates against the\n     negotiated hostname and URI, they aren't generally relevant for the purposes\n     of validating intermediate certificate chains or in rotation.\n - The [Validity](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-4.1.2.5)\n   period of this certificate.\n   - Notably, RFC 5280 does not place any requirements around the issued\n     certificate's validity period relative to the validity period of the\n     issuing certificate. 
However, it [does state](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-4.1.2.5)\n     that certificates ought to be revoked if their status cannot be maintained\n     up to their notAfter date. This is why Vault 1.11's `\/pki\/issuer\/:issuer_ref`\n     configuration endpoint maintains the `leaf_not_after_behavior` per-issuer\n     rather than per-role.\n   - Additionally, some browsers will place ultimate trust in the certificates\n     in their trust stores, even when these certificates are expired.\n     - Note that this only applies to certificates in the trust store; validity\n       periods will still be enforced for certificates not in the store (such\n       as intermediates).\n - The [Issuer](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-4.1.2.4) and\n   [signatureValue](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-4.1.1.3)\n   of this certificate.\n   - In the issued certificate's Issuer field, the issuing certificate places\n     its own Subject value. This allows the issuer to be identified later\n     (without having to try signature validation against every known local\n     certificate), when validating the presented certificate and chain.\n   - The signature over the entire certificate (by the issuer's private key)\n     is then placed in the signatureValue field.\n - The optional [Authority Key Identifier](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-4.2.1.1)\n   field.\n   - This field can contain either (or both) of two values:\n     - The hash of the issuer's public key. This extension is set and this\n       value is filled in by Vault.\n     - The Issuer's Subject and Serial Number. This value is not set by Vault.\n   - The latter is a dangerous restriction for the purposes of rotation: it\n     prevents cross-signing and reissuance as the new issuing certificates\n     (while having the same backing key material) will have different serial\n     numbers. 
See the [Limitations of Primitives](#limitations-of-primitives)\n     section below for more information on this restriction.\n - The [Serial Number](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-4.1.2.2)\n   of this certificate.\n   - This field is unique to a specific issuer; when a certificate is\n     reissued by its parent authority, it will always have a different serial\n     number field.\n - The [CRL distribution](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-4.2.1.13)\n   point field.\n   - This is a field detailing where a CRL is expected to exist for this\n     certificate and under which CRL issuers (defaulting to the issuing\n     certificate itself) the CRL is expected to be signed by. This is mostly\n     informational; for server software like nginx, Vault's Cert Auth method,\n     and Apache, CRLs are provided to the server, rather than having the\n     server fetch CRLs for certificates automatically.\n   - Note that root certificates (in browsers' trust stores) are generally not\n     considered revocable. However, if an intermediate is revoked by serial,\n     it will appear on its parent's CRL, and may prevent rotation from\n     happening.\n\n## X.509 rotation primitives\n\nRotation (from an organizational standpoint) can only safely happen with\ncertain intermediate X.509 certificates being issued. To distinguish the two\ntypes of certificates used to achieve rotation, this document notates them\nas _primitives_.\n\nRotation of an end-entity certificate is trivial from an X.509 trust chain\nperspective; this process happens every day and should only depend on what is\nin the trust store and not the end-entity certificate itself. In Vault, the\nrequester would hit the various issuance endpoints (`\/pki\/issue\/:name` or\n`\/pki\/sign\/:name` -- or use the unsafe `\/pki\/sign-verbatim`) and swap out the\nold certificate with the new certificate and reload the configuration or\nrestart the service. 
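\n\nAs a sketch of such a renewal (the mount path, role name, and hostname here are placeholders):\n\n```\n$ vault write pki\/issue\/example-role common_name=service.example.com ttl=72h\n```\n\nThe response contains the new `certificate`, `private_key`, and `ca_chain` fields to swap into the service's configuration.\n\n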
Other parts of the organization might use\n[ACME](https:\/\/datatracker.ietf.org\/doc\/html\/rfc8555) for certificate issuance\nand rotation, especially if the service is public-facing (and thus needs to\nbe issued by a Public CA). Given it was signed by a trusted root, any devices\nconnecting to the service would not know the difference.\n\nRotation of intermediate certificates is almost as easy. Assuming a decent\noperational setup (wherein during end-entity issuance, the full certificate\nchain is updated in the service's configuration), this should be as easy as\ncreating a new intermediate CA, signing it against the root CA, and then\nbeginning issuance against the new intermediate certificate. In Vault, if\nthe intermediate is generated in an existing mount path (or is moved into\nsuch), the requesting entity shouldn't care much. Under ACME, Let's Encrypt\nhas successfully rotated intermediates to present a cross-signed chain\n([for older Android devices](https:\/\/letsencrypt.org\/2020\/12\/21\/extending-android-compatibility.html)).\nAssuming the old intermediate's parent(s) are still valid and trusted,\ncertificates issued under old intermediates should continue to validate.\n\nThe hard part of rotation--calling for the use of these primitives--is\nrotating root certificates. 
These live in every device's trust store and\nare hard to update from an organization-wide operational perspective.\nUnless the organization can swap out roots almost instantaneously and\nsimultaneously (e.g., via an agent) with no missed devices, this process\nwill likely span months.\n\nTo make this process lower risk, there are various primitive certificate\ntypes that use the [above certificate fields](#x-509-certificate-fields).\nKey to their success is the following note:\n\n~> Note: While certificates are added to the trust store, it is ultimately\n   the associated key material that determines trust: two issuer certificates\n   with the same subject but different public keys cannot validate the same\n   leaf certificate; only if the keys are the same can this occur.\n\n### Cross-Signed primitive\n\nThis is the most common type of rotation primitive. A common CSR is signed by\ntwo CAs, resulting in two certificates. These certificates must have the same\nSubject (but may have different Issuers and will have different Serial Numbers)\nand the same backing key material, to allow certificates they sign to be\ntrusted by either variant.\n\nNote that, due to restrictions in how end-entity certificates are used and\nvalidated (services and validation libraries expect only one), cross-signing\nmost typically only applies to intermediates.\n\n#### A note on Cross-Signed roots\n\nTechnically, cross-signing can occur between two roots, allowing trust bundles\nwith either root to validate certs issued through the other. 
However, this\nprocess creates a certificate that is effectively an intermediate (as it is\nno longer self-signed) and usually must be served alongside the trust chain.\nGiven this restriction, it's preferable to instead cross-sign the top-level\nintermediates under the root unless strictly necessary when the old root\ncertificate has been used to directly issue leaf certificates.\n\nSo, the rest of this process flow assumes an intermediate is being\ncross-signed as this is more common.\n\n##### Process flow\n\n```\n        -------------------\n       | generate key pair | -------------> ...\n        -------------------                 ...\n           |            |                   ...\n --------------        --------------       ...\n| generate CSR |      | generate CSR |      ...\n --------------        --------------       ...\n         |                   |              ...\n    -----------         -----------         ...\n   | signed by |       | signed by |        ...\n   | root A    |       | root B    |        ...\n    -----------         -----------         ...\n```\n\nHere, a key pair was generated at some point in time. Two CSRs are created and\nsent to two different root authorities (Root A and Root B). These result in two\nseparate certificates (potentially with different validity periods) with the\nsame Subject and same backing key material.\n\nNote that this cross-signing need not happen simultaneously; there could be a\ngap of several years between the first and second certificate. 
Additionally,\nthere's no limit on the number of cross-signed \"duplicate\" (used loosely--with\nthe same subject and key material) certificates: this could be cross-signed\nby many different root certificates if necessary and desired.\n\n##### Certificate hierarchy\n\n```\n --------                                            --------\n| root A |                                          | root B |\n --------                                            --------\n   |                                                      |\n ----------------                            ----------------\n| intermediate C |  <- same key material -> | intermediate D |\n ----------------              |             ----------------\n                               |\n                      -------------------\n                     | leaf certificates |\n                      -------------------\n```\n\nThe above process results in two trust paths: either of root A or root B (or\nboth) could exist in the client's trust stores and the leaf certificate would\nvalidate correctly. 
Because the same key material is used for both intermediate\ncertificates (C and D), the issued leaf certificate's signature field would\nbe the same regardless of which intermediate was contacted.\n\nCross-signing is thus a unifying primitive; two separate trust paths now join\ninto a single one, by having the leaf certificate's Issuer field point to two\nseparate paths (via duplication of the certificate in the chain), with the chain\nconditionally validated based on which root is present in the trust store.\n\nThis construct is documented and used in several places:\n\n - https:\/\/letsencrypt.org\/certificates\/\n - https:\/\/scotthelme.co.uk\/cross-signing-alternate-trust-paths-how-they-work\/\n - https:\/\/security.stackexchange.com\/questions\/14043\/what-is-the-use-of-cross-signing-certificates-in-x-509\n\n#### Execution in Vault\n\nTo create a cross-signed certificate in Vault, use the [`\/intermediate\/cross-sign`\nendpoint](\/vault\/api-docs\/secret\/pki#generate-intermediate-csr). Here, when creating\na cross-signature to allow `cert B` to be validated by `cert A`, provide the values\n(`key_ref`, all Subject parts, &c) for `cert B` during intermediate generation.\nThen sign this CSR (using the [`\/issuer\/:issuer_ref\/sign-intermediate`\nendpoint](\/vault\/api-docs\/secret\/pki#sign-intermediate)) with `cert A`'s reference\nand provide necessary values from `cert B` (e.g., Subject parts). `cert A` may\nlive outside Vault. 
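\n\nA sketch of those two steps, using hypothetical names (`rootA` referencing `cert A`, `intB-key` referencing `cert B`'s key, and a placeholder common name):\n\n```\n$ vault write -field=csr pki\/intermediate\/cross-sign \\\n    key_ref=intB-key common_name=example-intermediate-ca > cross-signed.csr\n$ vault write pki\/issuer\/rootA\/sign-intermediate \\\n    csr=@cross-signed.csr common_name=example-intermediate-ca\n```\n\n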
Finally, import the cross-signed certificate into Vault\n[using the `\/issuers\/import\/cert` endpoint](\/vault\/api-docs\/secret\/pki#import-ca-certificates-and-keys).\n\nIf this process succeeded, and both `cert A` and `cert B` and their key\nmaterial live in Vault, the newly imported cross-signed certificate\nwill have a `ca_chain` response field [during read](\/vault\/api-docs\/secret\/pki#read-issuer)\ncontaining `cert A`, and `cert B`'s `ca_chain` will contain the cross-signed\ncert and its `ca_chain` value.\n\n~> Note: Regardless of issuer type, it is important to provide all relevant\n   parameters as they were originally; Vault does not infer e.g., the Subject\n   name parameters from the existing issuer; it merely reuses the same key\n   material.\n\n##### Notes on `manual_chain`\n\nIf an intermediate is cross-signed and imported into the same mount as its\npair, Vault will not detect the cross-signed pairs during automatic chain\nbuilding. As a result, leaf issuance will have a chain that only includes\none of these pairs of chains. This is because the leaf issuance's `ca_chain`\nparameter copies the value from the signing issuer directly, rather than computing\nits own copy of the chain.\n\nTo fix this, update the `manual_chain` field on the [issuers](\/vault\/api-docs\/secret\/pki#update-issuer)\nto include the chains of both pairs. For instance, given `intA` signed by\n`rootA` and `intB` signed by `rootB` as its cross-signed version, one\ncould do the following:\n\n```\n$ vault patch pki\/issuer\/intA manual_chain=self,rootA,intB,rootB\n$ vault patch pki\/issuer\/intB manual_chain=self,rootB,intA,rootA\n```\n\nThis will ensure that issuance with either copy of the intermediate reports\nthe full cross-signed chain when signing leaf certs.\n\n### Reissuance primitive\n\nThe second most common type of rotation primitive. 
In this scheme, the existing\nkey material is used to generate a new certificate, usually at a much later\npoint in time from the existing issuance.\n\nWhile similar to the cross-signed primitive, this one differs in that usually\nthe reissuance happens after the original certificate expires or is close to\nexpiration and is reissued by the original root CA. In the event of a\nself-signed certificate (e.g., a root certificate), this parent certificate\nwould be itself. In both cases, this changes the contents of the certificate\n(due to the new serial number) but allows all existing leaf signatures to\nstill validate.\n\nUnlike the cross-signed primitive, this primitive type can be used on all\ntypes of certificates (including leaves, intermediates, and roots).\n\n#### Process flow\n\n```\n          -------------------\n         | generate key pair | ---------------> ...\n          -------------------                   ...\n           |              |                     ...\n --------------           --------------        ...\n| generate CSR |   <->   | generate CSR |       ...\n --------------           --------------        ...\n         |                    |                 ...\n ------------------      ------------------     ...\n| signed by issuer | -> | signed by issuer | -> ...\n ------------------      ------------------     ...\n```\n\nIn this process flow, a single key pair is generated at some point in time\nand stored. The CSR (with same requested fields) is generated from this\ncommon key material and signed by the same issuer at multiple points in\ntime, preserving all critical fields (Subject, Issuer, &c). 
While there is\nstrictly no limit on the number of times a key can be reissued, at some point\nsafety would dictate the key material should be rotated instead of being\ncontinually reissued.\n\n#### Certificate hierarchy\n\n```\n                          ------\n              -----------| root |-------------\n             \/            ------              \\\n             |                                |\n ---------------                           ---------------\n| original cert | <- same key material -> | reissued cert |\n ---------------              |            ---------------\n                              |\n                      -------------------\n                     | leaf certificates |\n                      -------------------\n```\n\nNote that while this again results in two trust paths, depending on which\nintermediate certificate is presented and is still valid, only a root need be\ntrusted. When a reissued certificate is a root certificate, the issuance link is\nsimply a self-loop. But, in this case, note that both certificates are\n(technically) valid issuers of each other. This means it should be possible to\nprovide a reissued root certificate in the TLS certificate chain and have it\nchain back to an existing root certificate in a trust store.\n\nThis primitive type is thus an incrementing primitive; the life cycle of an\nexisting key is extended into the future by issuing a new certificate with the\nsame key material from the existing authority.\n\n#### Execution in Vault\n\nTo create a reissued root certificate in Vault, use the [`\/issuers\/generate\/root\/existing`\nendpoint](\/vault\/api-docs\/secret\/pki#generate-root). 
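\n\nFor instance, assuming an existing key referenced by the hypothetical name `my-root-key`:\n\n```\n$ vault write pki\/issuers\/generate\/root\/existing \\\n    key_ref=my-root-key common_name=example-root-ca\n```\n\n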
This allows the generation of a new\nroot certificate with the existing key material (via the `key_ref` request parameter).\nIf this process succeeded, when [reading the issuer](\/vault\/api-docs\/secret\/pki#read-issuer)\n(via `GET \/issuer\/:issuer_ref`), both issuers (old and reissued) will appear in\neach others' `ca_chain` response field (unless prevented by a `manual_chain`\nvalue).\n\nTo create a reissued intermediate certificate in Vault, this is a three-step\nprocess:\n\n 1. Use the [`\/issuers\/generate\/intermediate\/existing`\n    endpoint](\/vault\/api-docs\/secret\/pki#generate-intermediate-csr)\n    to generate a new CSR with the existing key material via the `key_ref`\n    request parameter.\n 2. Sign this CSR via the same signing process under the same issuer. This\n    step is specific to the parent CA, which may or may not be Vault.\n 3. Finally, use the [`\/intermediate\/set-signed` endpoint](\/vault\/api-docs\/secret\/pki#import-ca-certificates-and-keys)\n    to import the signed certificate from step 2.\n\nIf the process to reissue an intermediate certificate succeeded, when\n[reading the issuer](\/vault\/api-docs\/secret\/pki#read-issuer) (via\n`GET \/issuer\/:issuer_ref`), both issuers (old and reissued) will have\nthe same `ca_chain` response field, except for the first entry (unless\nprevented by a `manual_chain` value).\n\n~> Note: Regardless of issuer type, it is important to provide all relevant\n   parameters as they were originally; Vault does not infer e.g., the Subject\n   name parameters from the existing issuer; it merely reuses the same key\n   material.\n\n### Temporal primitives\n\nWe can use the above primitive types to rotate roots and intermediates to new\nkeys and extend their lifetimes. 
This time-based rotation is what ultimately\nallows us to rotate root certificates.\n\nThere are two main variants of this: a **forward** primitive, wherein an old\ncertificate is used to bless new key material, and a **backwards** primitive,\nwherein a new certificate is used to bless old key material. Both of these\nprimitives are independently used by Let's Encrypt in the aforementioned\nchain of trust document:\n\n - The link from DST Root CA X3 to ISRG Root X1 is an example of a forward\n   primitive.\n - The link from ISRG Root X1 to R3 (which was originally signed by DST Root\n   CA X3) is an example of a backwards primitive.\n\nFor most organizations with a hierarchically structured CA setup, cross-signing\nall intermediates with both the new and old root CAs is sufficient for root\nrotation.\n\nHowever, for organizations which have directly issued leaf certificates from a\nroot, the old root will need to be reissued under the new root (with shorter\nduration) to allow these certificates to continue to validate. This combines\nboth of the above primitives (cross-signing and reissuance) into a single\nbackwards primitive step. In the future, these organizations should probably\nmove to a more standard, hierarchical setup.\n\n### Limitations of primitives\n\nThe certificate's [Authority Key Identifier](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5280#section-4.2.1.1)\nextension field may contain either (or both) of the issuer's keyIdentifier\n(a hash of the public key) and the issuer's Subject and Serial Number\nfields. 
Generating certificates with the latter enabled (luckily not possible\nin Vault, especially so since Vault uses strictly random serial numbers)\nprevents building a proper cross-signed chain without re-issuing the same\nserial number, which will not work with most browsers' trust stores and\nvalidation engines, due to [caching of\ncertificates](https:\/\/support.mozilla.org\/en-US\/kb\/Certificate-contains-the-same-serial-number-as-another-certificate)\nused in successful validations. In the strictest sense, when using a\ncross-signing primitive (from a different CA), the intermediate could be reissued\nwith the same serial number, assuming no previous certificate was issued by that\nCA with that serial. This does not work when using a reissuance primitive as these\nare technically the same authority and thus this authority must issue\ncertificates with unique serial numbers.\n\n## Suggested root rotation procedure\n\nThe following is a suggested process for achieving root rotation easily and\nwithout (outage) impact to the broader organization, assuming [best\npractices](\/vault\/docs\/secrets\/pki\/considerations#use-a-ca-hierarchy) are\nbeing followed. Some adaptation will be necessary.\n\nNote that this process takes time. How much time is dependent on the\nautomation level and operational awareness of the organization.\n\n1. [Generate](\/vault\/api-docs\/secret\/pki#generate-root) the new root\n   certificate. For clarity, it is suggested to use a new common name\n   to distinguish it from the old root certificate. Key material need\n   not be the same.\n\n2. [Cross-sign](#cross-signed-primitive) all existing intermediates.\n   It is important to update the manual chain on the issuers as discussed\n   in that section, as we assume servers are configured to combine the\n   `certificate` field with the `ca_chain` field on renewal and issuance,\n   thus getting the cross-signed intermediates.\n\n3. Encourage rotation to pick up the new cross-signed intermediates. 
With\n   short-lived certificates, this should [happen\n   automatically](\/vault\/docs\/secrets\/pki\/considerations#automate-leaf-certificate-renewal).\n   However, for some long-lived certs, it is suggested to rotate them\n   manually and proactively. This step takes time, and depends on the\n   types of certificates issued (e.g., server certs, code signing, or client\n   auth).\n\n4. Once _all_ chains have been updated, new systems can be brought online\n   with only the new root certificate, and connect to all existing systems.\n\n5. Existing systems can now be migrated with a one-shot root switch: the\n   new root can be added and the old root can be removed at the same time.\n   Assuming the above step 3 can be achieved in a reasonable amount of time,\n   this decreases the time it takes to move the majority of systems over to\n   fully using the new root and no longer trusting the old root. This step\n   also takes time, depending on how quickly the organization can migrate\n   roots and ensure all such systems are migrated. If some systems are\n   offline and only infrequently online (or, if they have hard-coded\n   certificate stores and need to reach obsolescence first), the organization\n   might not be ready to move on to future steps.\n\n6. At this point, since all systems now use the new root, it is safe to remove\n   or archive the old root and intermediates, updating the manual chain to\n   point strictly to the new intermediate+root.\n\nAt this point, rotation is fully completed.\n\n## Tutorial\n\nRefer to the [Build Your Own Certificate Authority (CA)](\/vault\/tutorials\/secrets-management\/pki-engine)\nguide for a step-by-step tutorial.\n\nHave a look at the [PKI Secrets Engine with Managed Keys](\/vault\/tutorials\/enterprise\/managed-key-pki)\nfor more about how to use externally managed keys with PKI.\n\n## API\n\nThe PKI secrets engine has a full HTTP API. 
Please see the\n[PKI secrets engine API](\/vault\/api-docs\/secret\/pki) for more\ndetails.","site":"vault","answers_cleaned":"    layout  docs page title   PKI   Secrets Engine  Rotation Primitives  description  The PKI secrets engine for Vault generates TLS certificates         PKI secrets engine   rotation primitives  Since Vault 1 11 0  Vault s PKI Secrets Engine supports multiple issuers in a single mount point  By using the certificate types below  rotation can be accomplished in various situations involving both root and intermediate CAs managed by Vault      X 509 certificate fields  X 509 is a complex specification  modern implementations tend to refer to  RFC 5280  https   datatracker ietf org doc html rfc5280  for specific details  For validation of certificates  both RFC 5280 and the TLS validation  RFC 6125  https   datatracker ietf org doc html rfc6125  are important for understanding how to achieve rotation   The following is a simplification of these standards for the purpose of this document   Every X 509 certificate begins with an asymmetric key pair  using an algorithm like RSA or ECDSA  This key pair is used to create a Certificate Signing Request  CSR   which contains a set of fields the requester would like in the final certificate  but  it is up to the Certificate Authority  CA  to decide what fields to take from the CSR and which to override   The CSR also contains the public key of the pair  which is signed by the private key of the key pair to prove possession  Usually  the requester would ask for attributes in the Subject field of the CSR or in the Subject Alternative Name extension CSR to be respected in the final certificate  It is up to the CA if these values are trusted or not  When approved by the issuing authority  which may be backed by this asymmetric key itself in the case of a root self signed certificate   the authority attaches the Subject of  its  certificate to the issued certificate in the Issuer field  assigns a unique 
serial number to the issued certificate  and signs the set of fields with its private key  thus creating the certificate   There are some important restrictions here      One certificate can only have one Issuer  but this issuer is identified by    the Subject on the issuing certificate and its public key     One key pair can be used for multiple certificates  but one certificate can    only have one backing key material   The following fields on the final certificate are relevant to rotation      The backing  public  https   datatracker ietf org doc html rfc5280 section 4 1 2 7     and private key material  Subject Public Key Info        Note that the private key is not included in the certificate but is      uniquely determined by the public key material     The  Subject  https   datatracker ietf org doc html rfc5280 section 4 1 2 6  of the certificate       This identifies the entity to which the certificate was issued  While the      SAN values  in the  Subject Alternative Name  https   datatracker ietf org doc html rfc5280 section 4 2 1 6       extension  is useful when validating TLS Server certificates against the      negotiated hostname and URI  it isn t generally relevant for the purposes      of validating intermediate certificate chains or in rotation     The  Validity  https   datatracker ietf org doc html rfc5280 section 4 1 2 5     period of this certificate       Notably  RFC 5280 does not place any requirements around the issued      certificate s validity period relative to the validity period of the      issuing certificate  However  it  does state  https   datatracker ietf org doc html rfc5280 section 4 1 2 5       that certificates ought to be revoked if their status cannot be maintained      up to their notAfter date  This is why Vault 1 11 s   pki issuer  issuer ref       configuration endpoint maintains the  leaf not after behavior  per issuer      rather than per role       Additionally  some browsers will place ultimate trust in the 
certificates      in their trust stores  even when these certificates are expired         Note that this only applies to certificates in the trust store  validity        periods will still be enforced for certificates not in the store  such        as intermediates      The  Issuer  https   datatracker ietf org doc html rfc5280 section 4 1 2 4  and     signatureValue  https   datatracker ietf org doc html rfc5280 section 4 1 1 3     of this certificate       In the issued certificate s Issuer field  the issuing certificate places      its own Subject value  This allows the issuer to be identified later       without having to try signature validation against every known local      certificate   when validating the presented certificate and chain       The signature over the entire certificate  by the issuer s private key       is then placed in the signatureValue field     The optional  Authority Key Identifier  https   datatracker ietf org doc html rfc5280 section 4 2 1 1     field       This field can contain either  or both  of two values         The hash of the issuer s public key  This extension is set and this        value is filled in by Vault         The Issuer s Subject and Serial Number  This value is not set by Vault       The latter is a dangerous restriction for the purposes of rotation  it      prevents cross signing and reissuance as the new issuing certificates       while having the same backing key material  will have different serial      numbers  See the  Limitations of Primitives   limitations of primitives       section below for more information on this restriction     The  Serial Number  https   datatracker ietf org doc html rfc5280 section 4 1 2 2     of this certificate       This field is unique to a specific issuer  when a certificate is      reissued by its parent authority  it will always have a different serial      number field     The  CRL distribution  https   datatracker ietf org doc html rfc5280 section 4 2 1 13     point field    
  This is a field detailing where a CRL is expected to exist for this certificate, and under which CRL issuers (defaulting to the issuing certificate itself) the CRL is expected to be signed by. This is mostly informational: for server software like nginx, Vault's Cert Auth method, and Apache, CRLs are provided to the server, rather than having the server fetch CRLs for certificates automatically.

  Note that root certificates (in browsers' trust stores) are generally not considered revocable. However, if an intermediate is revoked by serial, it will appear on its parent's CRL, and may prevent rotation from happening.

## X.509 rotation primitives

Rotation, from an organizational standpoint, can only safely happen with certain intermediate X.509 certificates being issued. To distinguish the two types of certificates used to achieve rotation, this document notates them as "primitives".

Rotation of an end-entity certificate is trivial from an X.509 trust chain perspective: this process happens every day and should only depend on what is in the trust store, and not the end-entity certificate itself. In Vault, the requester would hit the various issuance endpoints (`pki/issue/:name` or `pki/sign/:name`, or use the unsafe `pki/sign-verbatim`) and swap out the old certificate with the new certificate, then reload the configuration or restart the service. Other parts of the organization might use [ACME](https://datatracker.ietf.org/doc/html/rfc8555) for certificate issuance and rotation, especially if the service is public facing (and thus needs to be issued by a Public CA). Given it was signed by a trusted root, any devices connecting to the service would not know the difference.

Rotation of intermediate certificates is almost as easy. Assuming a decent operational setup (wherein, during end-entity issuance, the full certificate chain is updated in the service's configuration), this should be as easy as creating a new intermediate CA, signing it against the root CA, and then beginning issuance against the new intermediate certificate. In Vault, if the intermediate is generated in an existing mount path (or is moved into such), the requesting entity shouldn't care much. Under ACME, Let's Encrypt has successfully rotated intermediates to present a [cross-signed chain for older Android devices](https://letsencrypt.org/2020/12/21/extending-android-compatibility.html). Assuming the old intermediate's parent(s) are still valid and trusted, certificates issued under old intermediates should continue to validate.

The hard part of rotation, calling for the use of these primitives, is rotating root certificates. These live in every device's trust store and are hard to update from an organization-wide operational perspective. Unless the organization can swap out roots almost instantaneously and simultaneously (e.g., via an agent) with no missed devices, this process will likely span months.

To make this process lower risk, there are various primitive certificate types that use the [above certificate fields](#x-509-certificate-fields). Key to their success is the following note:

> **Note**: While certificates are added to the trust store, it is ultimately the associated key material that determines trust: two issuer certificates with the same subject but different public keys cannot validate the same leaf certificate. Only if the keys are the same can this occur.

### Cross-Signed primitive

This is the most common type of rotation primitive. A common CSR is signed by two CAs, resulting in two certificates. These certificates must have the same Subject (but may have different Issuers and will have different Serial Numbers) and the same backing key material, to allow certificates they sign to be trusted by either variant.

Note that, due to restrictions in how end-entity certificates are used and validated, services and validation libraries expect only one; cross-signing most
typically only applies to intermediates.

> **A note on Cross-Signed roots**: Technically, cross-signing can occur between two roots, allowing trust bundles with either root to validate certs issued through the other. However, this process creates a certificate that is effectively an intermediate (as it is no longer self-signed) and usually must be served alongside the trust chain. Given this restriction, it's preferable to instead cross-sign the top-level intermediates under the root, unless strictly necessary (when the old root certificate has been used to directly issue leaf certificates). So, the rest of this process flow assumes an intermediate is being cross-signed, as this is more common.

#### Process flow

_Diagram: a single generated key pair produces two CSRs; one CSR is signed by root A and the other by root B._

Here, a key pair was generated at some point in time. Two CSRs are created and sent to two different root authorities (Root A and Root B). These result in two separate certificates (potentially with different validity periods) with the same Subject and same backing key material. Note that this cross-signing need not happen simultaneously: there could be a gap of several years between the first and second certificate. Additionally, there's no limit on the number of cross-signed "duplicate" (used loosely, with the same subject and key material) certificates: this could be cross-signed by many different root certificates if necessary and desired.

#### Certificate hierarchy

_Diagram: root A signs intermediate C and root B signs intermediate D; C and D share the same key material, and leaf certificates are issued below them._

The above process results in two trust paths: either of root A or root B (or both) could exist in the client's trust stores and the leaf certificate would validate correctly. Because the same key material is used for both intermediate certificates (C and D), the issued leaf certificate's signature field would be the same regardless of which intermediate was contacted.

Cross-signing is thus a unifying primitive: two separate trust paths now join into a single one, by having the leaf certificate's issuer field point to two separate paths (via duplication of the certificate in the chain), conditionally validated based on which root is present in the trust store.

This construct is documented and used in several places:

- https://letsencrypt.org/certificates/
- https://scotthelme.co.uk/cross-signing-alternate-trust-paths-how-they-work/
- https://security.stackexchange.com/questions/14043/what-is-the-use-of-cross-signing-certificates-in-x-509

#### Execution in Vault

To create a cross-signed certificate in Vault, use the [`/intermediate/cross-sign` endpoint](/vault/api-docs/secret/pki#generate-intermediate-csr). Here, when creating a cross-signature to allow `cert B` to be validated by `cert A`, provide the values (`key_ref`, all Subject parts, &c) for `cert B` during intermediate generation. Then sign this CSR (using the [`/issuer/:issuer_ref/sign-intermediate` endpoint](/vault/api-docs/secret/pki#sign-intermediate)) with `cert A`'s reference and provide necessary values from `cert B` (e.g., Subject parts); `cert A` may live outside Vault. Finally, import the cross-signed certificate into Vault (using the [`/issuers/import/cert` endpoint](/vault/api-docs/secret/pki#import-ca-certificates-and-keys)).

If this process succeeded, and both `cert A` and `cert B` and their key material live in Vault, the newly imported cross-signed certificate will have a `ca_chain` response field (during [read](/vault/api-docs/secret/pki#read-issuer)) containing `cert A`, and `cert B`'s `ca_chain` will contain the cross-signed cert and its `ca_chain` value.

> **Note**: Regardless of issuer type, it is important to provide all relevant parameters as they were originally: Vault does not infer e.g. the Subject name parameters from the existing issuer; it merely reuses the same key material.

#### Notes on `manual_chain`

If an intermediate is cross-signed and imported into the same mount as its pair, Vault will not detect the cross-signed pairs during automatic chain building. As a result, leaf issuance will have a chain that only includes one of these pairs of chains. This is because the leaf issuance's `ca_chain` parameter copies the value from the signing issuer directly, rather than computing its own copy of the chain.

To fix this, update the `manual_chain` field on the [issuers](/vault/api-docs/secret/pki#update-issuer) to include the chains of both pairs. For instance, given `intA` signed by `rootA`, and `intB` signed by `rootB` as its cross-signed version, one could do the following:

```shell-session
$ vault patch pki/issuer/intA manual_chain=self,rootA,intB,rootB
$ vault patch pki/issuer/intB manual_chain=self,rootB,intA,rootA
```

This will ensure that issuance with either copy of the intermediate reports the full cross-signed chain when signing leaf certs.
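As an aside, the cross-signed primitive itself can be reproduced outside Vault with plain OpenSSL. The sketch below is illustrative only (file names and subjects are made up, and it assumes the `openssl` CLI with its default configuration): one intermediate CSR is signed by two independent roots, and a single leaf then validates under either trust path.

```shell
# Two independent self-signed roots (A and B), each with its own key.
openssl req -x509 -newkey rsa:2048 -nodes -keyout rootA.key -out rootA.pem \
  -subj "/CN=Root A" -days 3650
openssl req -x509 -newkey rsa:2048 -nodes -keyout rootB.key -out rootB.pem \
  -subj "/CN=Root B" -days 3650

# One intermediate key pair and a single common CSR.
openssl req -new -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=Example Intermediate"
printf "basicConstraints=critical,CA:TRUE\n" > ca.ext

# Cross-sign: the same CSR is signed by both roots, yielding two certificates
# with the same Subject and key but different Issuers and serial numbers.
openssl x509 -req -in int.csr -CA rootA.pem -CAkey rootA.key -CAcreateserial \
  -extfile ca.ext -out intA.pem -days 1825
openssl x509 -req -in int.csr -CA rootB.pem -CAkey rootB.key -CAcreateserial \
  -extfile ca.ext -out intB.pem -days 1825

# Issue one leaf with the shared intermediate key (via the root-A copy).
openssl req -new -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=leaf.example.com"
openssl x509 -req -in leaf.csr -CA intA.pem -CAkey int.key -CAcreateserial \
  -out leaf.pem -days 90

# The same leaf validates under either root, via the matching cross-signed copy.
openssl verify -CAfile rootA.pem -untrusted intA.pem leaf.pem
openssl verify -CAfile rootB.pem -untrusted intB.pem leaf.pem
```

Both `verify` invocations succeed because the leaf's signature was made by the shared intermediate key, and each root's copy of the intermediate carries the same Subject, which is exactly the property the cross-signed primitive relies on.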
### Reissuance primitive

The second most common type of rotation primitive. In this scheme, the existing key material is used to generate a new certificate, usually at a much later point in time from the existing issuance. While similar to the cross-signed primitive, this one differs in that usually the reissuance happens after the original certificate expires (or is close to expiration) and is reissued by the original root CA. In the event of a self-signed certificate (e.g., a root certificate), this parent certificate would be itself. In both cases, this changes the contents of the certificate (due to the new serial number), but allows all existing leaf signatures to still validate.

Unlike the cross-signed primitive, this primitive type can be used on all types of certificates, including leaves, intermediates, and roots.

#### Process flow

_Diagram: a single generated key pair produces the same CSR twice; the same issuer signs it at two different points in time._

In this process flow, a single key pair is generated at some point in time and stored. The CSR (with the same requested fields) is generated from this common key material and signed by the same issuer at multiple points in time, preserving all critical fields (Subject, Issuer, &c). While there is strictly no limit on the number of times a key can be reissued, at some point safety would dictate the key material should be rotated instead of being continually reissued.

#### Certificate hierarchy

_Diagram: one root signs both the original cert and the reissued cert (same key material); leaf certificates are issued below._

Note that while this again results in two trust paths (depending on which intermediate certificate is presented and is still valid), only a root need be trusted. When a reissued certificate is a root certificate, the issuance link is simply a self-loop. But, in this case, note that both certificates are (technically) valid issuers of each other. This means it should be possible to provide a reissued root certificate in the TLS certificate chain and have it chain back to an existing root certificate in a trust store.

This primitive type is thus an incrementing primitive: the life cycle of an existing key is extended into the future by issuing a new certificate with the same key material from the existing authority.

#### Execution in Vault

To create a reissued root certificate in Vault, use the [`/issuers/generate/root/existing` endpoint](/vault/api-docs/secret/pki#generate-root). This allows the generation of a new root certificate with the existing key material (via the `key_ref` request parameter).

If this process succeeded, when [reading the issuer](/vault/api-docs/secret/pki#read-issuer) (via `GET /issuer/:issuer_ref`), both issuers (old and reissued) will appear in each other's `ca_chain` response field (unless prevented so by a `manual_chain` value).

To create a reissued intermediate certificate in Vault, this is a three-step process:

1. Use the [`/issuers/generate/intermediate/existing` endpoint](/vault/api-docs/secret/pki#generate-intermediate-csr) to generate a new CSR with the existing key material, via the `key_ref` request parameter.
2. Sign this CSR via the same signing process under the same issuer. This step is specific to the parent
CA (which may or may not be Vault).
3. Finally, use the [`/intermediate/set-signed` endpoint](/vault/api-docs/secret/pki#import-ca-certificates-and-keys) to import the signed certificate from step 2.

If the process to reissue an intermediate certificate succeeded, when [reading the issuer](/vault/api-docs/secret/pki#read-issuer) (via `GET /issuer/:issuer_ref`), both issuers (old and reissued) will have the same `ca_chain` response field, except for the first entry (unless prevented so by a `manual_chain` value).

> **Note**: Regardless of issuer type, it is important to provide all relevant parameters as they were originally: Vault does not infer e.g. the Subject name parameters from the existing issuer; it merely reuses the same key material.

### Temporal primitives

We can use the above primitive types to rotate roots and intermediates to new keys and extend their lifetimes. This time-based rotation is what ultimately allows us to rotate root certificates.

There are two main variants of this: a "forward" primitive, wherein an old certificate is used to bless new key material, and a "backwards" primitive, wherein a new certificate is used to bless old key material. Both of these primitives are independently used by Let's Encrypt in the aforementioned chain of trust document:

- The link from DST Root CA X3 to ISRG Root X1 is an example of a forward primitive.
- The link from ISRG Root X1 to R3 (which was originally signed by DST Root CA X3) is an example of a backwards primitive.

For most organizations with a hierarchically structured CA setup, cross-signing all intermediates with both the new and old root CAs is sufficient for root rotation.

However, for organizations which have directly issued leaf certificates from a root, the old root will need to be reissued under the new root (with shorter duration) to allow these certificates to continue to validate. This combines both of the above primitives, cross-signing and reissuance, into a single backwards primitive step. In the future, these organizations should probably move to a more standard, hierarchical setup.

### Limitations of primitives

The certificate's [Authority Key Identifier](https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.1) extension field may contain either or both of the issuer's keyIdentifier (a hash of the public key) or the issuer's Subject and Serial Number fields. Generating certificates with the latter enabled (luckily not possible in Vault, especially so since Vault uses strictly random serial numbers) prevents building a proper cross-signed chain without reissuing the same serial number, which will not work with most browsers' trust stores and validation engines, due to [caching of certificates](https://support.mozilla.org/en-US/kb/Certificate-contains-the-same-serial-number-as-another-certificate) used in successful validations.

In the strictest sense, when using a cross-signing primitive (from a different CA), the intermediate could be reissued with the same serial number (assuming no previous certificate was issued by that CA with that serial). This does not work when using a reissuance primitive, as these are technically the same authority, and thus this authority must issue certificates with unique serial numbers.

## Suggested root rotation procedure

The following is a suggested process for achieving root rotation easily and without (outage) impact to the broader organization, assuming [best practices](/vault/docs/secrets/pki/considerations#use-a-ca-hierarchy) are being followed. Some adaptation will be necessary.

Note that this process takes time. How much time is dependent on the automation level and operational awareness of the organization.

1. [Generate](/vault/api-docs/secret/pki#generate-root) the new root certificate. For clarity, it is suggested to use a new common name to distinguish it from the old root certificate. Key material need not be the same.
2. Cross-sign ([cross-signed primitive](#cross-signed-primitive)) all existing intermediates. It is important to update the manual chain on the issuers as discussed in that section, as we assume servers are configured to combine the `certificate` field with the `ca_chain` field on renewal and issuance, thus getting the cross-signed intermediates.
3. Encourage rotation to pick up the new cross-signed intermediates. With short-lived certificates, this should [happen automatically](/vault/docs/secrets/pki/considerations#automate-leaf-certificate-renewal). However, for some long-lived certs, it is suggested to rotate them manually and proactively. This step takes time, and depends on the types of certificates issued (e.g., server certs, code signing, or client auth).
4. Once _all_ chains have been updated, new systems can be brought online with only the new root certificate, and connect to all existing systems.
5. Existing systems can now be migrated with a one-shot root switch: the new root can be added and the old root can be removed at the same time. (Assuming the above step 3 can be achieved in a reasonable amount of time, this decreases the time it takes to move the majority of systems over to fully using the new root and no longer trusting the old root.) This step also takes time, depending on how quickly the organization can migrate roots and ensure all such systems are migrated. If some systems are offline and only infrequently online (or, if they have hard-coded certificate stores and need to reach obsolescence first), the organization might not be ready to move on to future steps.
6. At this point, since all systems now use the new root, it is safe to remove or archive the old root and intermediates, updating the manual chain to point strictly to the new intermediate and root.

At this point, rotation is fully completed.
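The reissuance primitive used above can also be observed outside Vault with plain OpenSSL. The sketch below is illustrative only (file names and subjects are invented, and it assumes the `openssl` CLI with its default configuration): the same intermediate CSR is signed twice by the same root, producing certificates with identical Subject and key but distinct serial numbers, and a leaf issued under the original copy still validates through the reissued one.

```shell
# One root, one intermediate key pair, one CSR.
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem \
  -subj "/CN=Example Root" -days 3650
openssl req -new -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=Example Intermediate"
printf "basicConstraints=critical,CA:TRUE\n" > ca.ext

# Original issuance, then a later reissuance of the same CSR by the same root.
# Same Subject and key material; different serial numbers.
openssl x509 -req -in int.csr -CA root.pem -CAkey root.key -CAcreateserial \
  -extfile ca.ext -out int-orig.pem -days 365
openssl x509 -req -in int.csr -CA root.pem -CAkey root.key -CAcreateserial \
  -extfile ca.ext -out int-reissued.pem -days 730
openssl x509 -noout -serial -in int-orig.pem
openssl x509 -noout -serial -in int-reissued.pem

# A leaf issued under the original intermediate still validates when only the
# reissued copy is presented, because the key material is unchanged.
openssl req -new -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=leaf.example.com"
openssl x509 -req -in leaf.csr -CA int-orig.pem -CAkey int.key -CAcreateserial \
  -out leaf.pem -days 90
openssl verify -CAfile root.pem -untrusted int-reissued.pem leaf.pem
```

This is the property that makes step 2 safe for already-issued certificates: existing leaf signatures continue to verify against any certificate carrying the original key material.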
## Tutorial

Refer to the [Build Your Own Certificate Authority (CA)](/vault/tutorials/secrets-management/pki-engine) guide for a step-by-step tutorial. Have a look at the [PKI Secrets Engine with Managed Keys](/vault/tutorials/enterprise/managed-key-pki) for more about how to use externally managed keys with PKI.

## API

The PKI secrets engine has a full HTTP API. Please see the [PKI secrets engine API](/vault/api-docs/secret/pki) for more details."}
{"questions":"vault page title Enrollment over Secure Transport EST within Vault PKI Secrets Engines This document covers configuration and limitations of Vault s PKI Secrets Engine An overview of the Enrollment over Secure Transport protocol implementation within Vault implementation of the EST protocol https datatracker ietf org doc html rfc7030 EnterpriseAlert inline true layout docs PKI secrets engine Enrollment over Secure Transport EST EnterpriseAlert inline true","answers":"---\nlayout: docs\npage_title: Enrollment over Secure Transport (EST) within Vault | PKI - Secrets Engines\ndescription: An overview of the Enrollment over Secure Transport protocol implementation within Vault.\n---\n\n# PKI secrets engine - Enrollment over Secure Transport (EST) <EnterpriseAlert inline=\"true\" \/>\n\nThis document covers configuration and limitations of Vault's PKI Secrets Engine\nimplementation of the [EST protocol](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7030) <EnterpriseAlert inline=\"true\" \/>.\n\n## What is Enrollment over Secure Transport (EST)?\n\nThe EST protocol is an IETF standardized protocol, [RFC 7030](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7030),\nthat allows clients to acquire client certificates and associated\nCertificate Authority (CA) certificates.\n\n## Enabling EST support on a Vault PKI mount\n\nThe following is a list of steps required to configure an existing PKI\nmount to serve EST clients, each of which falls within three main\ncategories.\n\n 1. [Authentication mechanisms](#configuring-est-authentication)\n 1. [Updating PKI tunable parameters](#updating-the-pki-mount-tunable-parameters)\n 1. [PKI EST configuration](#pki-est-configuration)\n\n### Configuring EST Authentication\n\nThe EST protocol specifies a few different authentication mechanisms, of which\nVault supports two.\n\n 1. [HTTP-Based Client authentication](\/vault\/docs\/auth\/userpass)\n 1. 
[Certificate TLS authentication](\/vault\/docs\/auth\/cert)\n\nBoth of these authentication mechanisms leverage a separate Vault authentication\nmount, within the same namespace, to validate the client-provided credentials\nalong with the client's ACL policy to enforce. While both authentication\nschemes can be enabled at once, only a single mount will be used to authenticate\na client based on the way credentials were provided through EST. If an EST client sends\nHTTP-Based authentication credentials, they will be preferred over TLS client\ncertificates.\n\nFor proper accounting, mounts supporting EST authentication should be\ndedicated to this purpose, not shared with other workflows.  In other words,\ncreate a new auth mount for EST even if you already have one of the same\ntype for other purposes.\n\nWhen setting up the authentication mount for EST clients, the token type must\nbe configured to return [batch tokens](\/vault\/docs\/concepts\/tokens#batch-tokens).\nBatch tokens are required to avoid an excessive number of leases being generated\nand persisted, as every incoming EST request needs to be authenticated.\n\nThe path within an ACL policy must match the internal redirected path, including\nthe mount, and not the `.well-known\/est\/` URI the client is initially using.\nThe path to use within the plugin depends on the path policy that is configured\nfor the EST label being used by the client.\n\nIf using `sign-verbatim` as a path policy, the following\nACL policy will allow an authenticated client access to the required PKI EST paths.\n```\npath \"pki\/est\/simpleenroll\" {\n  capabilities=[\"update\", \"create\"]\n}\npath \"pki\/est\/simplereenroll\" {\n  capabilities=[\"update\", \"create\"]\n}\n```\n\nFor a role-based path policy, this sample policy can be used:\n```\npath \"pki\/roles\/my-role-name\/est\/simpleenroll\" {\n  capabilities=[\"update\", \"create\"]\n}\npath \"pki\/roles\/my-role-name\/est\/simplereenroll\" {\n  capabilities=[\"update\", \"create\"]\n}\n```\n\n#### Updating the PKI mount tunable parameters\n\nOnce the authentication mount has been created and configured, the authentication mount's accessor\nwill need to be captured and added within the PKI mount's [delegated auth accessors](\/vault\/api-docs\/system\/mounts#delegated_auth_accessors).\n\nTo get an authentication mount's accessor field, the following command can be used.\n\n```shell-session\n$ vault read -field=accessor sys\/auth\/userpass\n```\n\nFor EST to work within certain clients, a few response headers need to be explicitly allowed,\nalong with configuring the list of accessors the mount can delegate authentication towards.\nThe following command grants the required response headers; you will need to replace the values for the `delegated-auth-accessors`\nto match your values.\n\n```shell-session\n$ vault secrets tune \\\n  -allowed-response-headers=\"Content-Transfer-Encoding\" \\\n  -allowed-response-headers=\"Content-Length\" \\\n  -allowed-response-headers=\"WWW-Authenticate\" \\\n  -delegated-auth-accessors=\"auth_userpass_e2f4f6d5\" \\\n  -delegated-auth-accessors=\"auth_cert_4088ac2d\" \\\n  pki\n```\n\n#### PKI EST configuration\n\nThe EST protocol specifies that an EST server must support a URI path-prefix of\n`.well-known\/est\/` as defined in [RFC-5785](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5785).\nEST clients normally don't provide any sort of configuration for different path-prefixes, and will\ndefault to hitting the host on the path `https:\/\/<hostname>:<port>\/.well-known\/est\/`.\n\nSome clients allow a single label, sometimes referred to as `additional path segment`,\nto accommodate different issuers. 
This label will be added to the path after\nthe est path such as `https:\/\/<hostname>:<port>\/.well-known\/est\/<label>\/`.\n\nTo provide different restrictions around usage (defaults, an issuer, or a role)\nfor EST protocol endpoints, a path policy is associated with the EST\nlabel.\n\n@include 'pki-est-default-policy.mdx'\n\nWithin the Vault [EST configuration API](\/vault\/api-docs\/secret\/pki\/issuance#set-est-configuration), a PKI\nmount can be specified as the default mount by setting [default_mount](\/vault\/api-docs\/secret\/pki\/issuance#default_mount)\nto true, or by providing a mapping of a label within [label_to_path_policy](\/vault\/api-docs\/secret\/pki\/issuance#label_to_path_policy).\n\nThe following is an example of a complete EST configuration that would enable the pki mount\nto register the .well-known\/est default label, along with two additional labels\nof test-label and sign-all.\n\nThe test-label would use the existing est-clients PKI role for restrictions and defaults,\nleveraging the issuer specified within the role. 
The other two labels, default and sign-all, will\nleverage a sign-verbatim type role, allowing any identifier to be issued using the default\nissuer.\n\n```shell-session\n$ vault write pki\/config\/est -<<EOC\n{\n  \"enabled\": true,\n  \"default_mount\": true,\n  \"default_path_policy\": \"sign-verbatim\",\n  \"label_to_path_policy\": {\n    \"test-label\": \"role:est-clients\",\n    \"sign-all\": \"sign-verbatim\"\n  },\n  \"authenticators\": {\n    \"cert\": {\n      \"accessor\": \"auth_cert_4088ac2d\"\n    },\n    \"userpass\": {\n      \"accessor\": \"auth_userpass_e2f4f6d5\"\n    }\n  }\n}\nEOC\n```\n\n## Limitations\n\n### EST API Support\n\nThe initial implementation covers solely the required API endpoints of the EST protocol.\nThe following optional features from the specification are not currently supported.\n\n - [Full CMC](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7030#section-4.3)\n - [Server-side key generation](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7030#section-4.4)\n - [CSR attribute endpoints](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7030#section-4.5)\n\n### Well Known redirections\n\nThe EST configuration parameters `default_mount` and\/or `label_to_path_policy` can be used to register\npaths within the .well-known path space. The following limitations apply:\n\n - Only a single PKI mount, across all namespaces, can be enabled as the `default_mount`.\n - Labels within `label_to_path_policy` must also be unique across all PKI mounts regardless of namespace.\n - Care must be taken if enabling EST on a [local](\/vault\/docs\/commands\/secrets\/enable#local) PKI mount on\n   performance secondary clusters. Vault cannot guarantee the configured EST labels do\n   not conflict across different PKI mounts in this use-case. This can lead to\n   different issuers being used across clusters for the same EST labels.\n\n## API\n\nThe PKI secrets engine has a full HTTP API. 
Please see the\n[PKI secrets engine API](\/vault\/api-docs\/secret\/pki) for more details.","site":"vault","answers_cleaned":"---\nlayout: docs\npage_title: 'Enrollment over Secure Transport (EST) within Vault | PKI - Secrets Engines'\ndescription: An overview of the Enrollment over Secure Transport protocol implementation within Vault.\n---\n\n# PKI secrets engine - Enrollment over Secure Transport (EST) <EnterpriseAlert inline=\"true\" \/>\n\nThis document covers configuration and limitations of Vault's PKI Secrets Engine implementation of the [EST protocol](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7030). <EnterpriseAlert inline=\"true\" \/>\n\n## What is Enrollment over Secure Transport (EST)?\n\nThe EST protocol is an IETF standardized protocol ([RFC 7030](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7030)) that allows clients to acquire client certificates and associated Certificate Authority (CA) certificates.\n\n## Enabling EST support on a Vault PKI mount\n\nThe following is a list of steps required to configure an existing PKI mount to serve EST clients, each of which falls within three main categories:\n\n1. [Authentication mechanisms](#configuring-est-authentication)\n1. [Updating PKI tunable parameters](#updating-the-pki-mount-tunable-parameters)\n1. [PKI EST configuration](#pki-est-configuration)\n\n### Configuring EST authentication\n\nThe EST protocol specifies a few different authentication mechanisms, of which Vault supports two:\n\n1. [HTTP-Based Client authentication](\/vault\/docs\/auth\/userpass)\n1. [Certificate TLS authentication](\/vault\/docs\/auth\/cert)\n\nBoth of these authentication mechanisms leverage a separate Vault authentication mount, within the same namespace, to validate the client-provided credentials along with the client's ACL policy to enforce. While both authentication schemes can be enabled at once, only a single mount will be used to authenticate a client, based on the way credentials were provided through EST. If an EST client sends HTTP-Based 
authentication credentials, they will be preferred over TLS client certificates.\n\nFor proper accounting, mounts supporting EST authentication should be dedicated to this purpose, not shared with other workflows. In other words, create a new auth mount for EST even if you already have one of the same type for other purposes.\n\nWhen setting up the authentication mount for EST clients, the token type must be configured to return [batch tokens](\/vault\/docs\/concepts\/tokens#batch-tokens). Batch tokens are required to avoid an excessive amount of leases being generated and persisted, as every incoming EST request needs to be authenticated.\n\nThe path within an ACL policy must match the internal redirected path, including the mount, and not the `.well-known\/est\/` URI the client initially uses. The path to use within the policy depends on the path policy that is configured for the EST label being used by the client.\n\nIf using `sign-verbatim` as a path policy, the following ACL policy will allow an authenticated client access to the required PKI EST paths:\n\n```hcl\npath \"pki\/est\/simpleenroll\" {\n  capabilities = [\"update\", \"create\"]\n}\npath \"pki\/est\/simplereenroll\" {\n  capabilities = [\"update\", \"create\"]\n}\n```\n\nFor a role-based path policy, this sample policy can be used:\n\n```hcl\npath \"pki\/roles\/my-role-name\/est\/simpleenroll\" {\n  capabilities = [\"update\", \"create\"]\n}\npath \"pki\/roles\/my-role-name\/est\/simplereenroll\" {\n  capabilities = [\"update\", \"create\"]\n}\n```\n\n### Updating the PKI mount tunable parameters\n\nOnce the authentication mount has been created and configured, the authentication mount's accessor will need to be captured and added within the PKI mount's [delegated auth accessors](\/vault\/api-docs\/system\/mounts#delegated_auth_accessors).\n\nTo get an authentication mount's accessor field, the following command can be used:\n\n```shell-session\n$ vault read -field=accessor sys\/auth\/userpass\n```\n\nFor EST to work within certain clients, a few response headers need to be explicitly 
allowed, along with configuring the list of accessors the mount can delegate authentication towards. The following will grant the required response headers; you will need to replace the values for the `delegated-auth-accessors` to match your values:\n\n```shell-session\n$ vault secrets tune \\\n  -allowed-response-headers=\"Content-Transfer-Encoding\" \\\n  -allowed-response-headers=\"Content-Length\" \\\n  -allowed-response-headers=\"WWW-Authenticate\" \\\n  -delegated-auth-accessors=\"auth_userpass_e2f4f6d5\" \\\n  -delegated-auth-accessors=\"auth_cert_4088ac2d\" \\\n  pki\n```\n\n### PKI EST configuration\n\nThe EST protocol specifies that an EST server must support a URI path-prefix of `.well-known\/est\/` as defined in [RFC 5785](https:\/\/datatracker.ietf.org\/doc\/html\/rfc5785). EST clients normally don't provide any sort of configuration for different path prefixes, and will default to hitting the host on the path `https:\/\/<hostname>:<port>\/.well-known\/est\/`. Some clients allow a single label, sometimes referred to as an \"additional path segment\", to accommodate different issuers. This label will be added to the path after the est path, such as `https:\/\/<hostname>:<port>\/.well-known\/est\/<label>\/`.\n\nTo provide different restrictions around usage (defaults, an issuer or role) for EST protocol endpoints, a path policy is associated with each EST label.\n\n@include 'pki-est-default-policy.mdx'\n\nWithin the Vault [EST configuration API](\/vault\/api-docs\/secret\/pki\/issuance#set-est-configuration), a PKI mount can be specified as the default mount by setting [default_mount](\/vault\/api-docs\/secret\/pki\/issuance#default_mount) to true, or provide a mapping of a label within [label_to_path_policy](\/vault\/api-docs\/secret\/pki\/issuance#label_to_path_policy).\n\nThe following is an example of a complete EST configuration that enables the pki mount to register the `.well-known\/est\/` default label along with two additional labels, test-label and sign-all. The test-label would use the existing est-clients PKI role for 
restrictions and defaults, leveraging the issuer specified within the role. The other two labels, default and sign-all, will leverage a sign-verbatim type policy, allowing any identifier to be issued using the default issuer.\n\n```shell-session\n$ vault write pki\/config\/est - <<EOC\n{\n  \"enabled\": true,\n  \"default_mount\": true,\n  \"default_path_policy\": \"sign-verbatim\",\n  \"label_to_path_policy\": {\n    \"test-label\": \"role:est-clients\",\n    \"sign-all\": \"sign-verbatim\"\n  },\n  \"authenticators\": {\n    \"cert\": {\n      \"accessor\": \"auth_cert_4088ac2d\"\n    },\n    \"userpass\": {\n      \"accessor\": \"auth_userpass_e2f4f6d5\"\n    }\n  }\n}\nEOC\n```\n\n## Limitations\n\n### EST API support\n\nThe initial implementation covers solely the required API endpoints of the EST protocol. The following optional features from the specification are not currently supported:\n\n- [Full CMC](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7030#section-4.3)\n- [Server-side key generation](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7030#section-4.4)\n- [CSR attribute endpoints](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7030#section-4.5)\n\n### Well-known redirections\n\nThe EST configuration parameters `default_mount` and\/or `label_to_path_policy` can be used to register paths within the `.well-known\/est\/` path space. The following limitations apply:\n\n- Only a single PKI mount, across all namespaces, can be enabled as the `default_mount`.\n- Labels within `label_to_path_policy` must be unique across all PKI mounts regardless of namespace.\n- Care must be taken if enabling EST on a [local](\/vault\/docs\/commands\/secrets\/enable#local) PKI mount on performance secondary clusters. Vault cannot guarantee the configured EST labels do not conflict across different PKI mounts in this use case. This can lead to different issuers being used across clusters for the same EST labels.\n\n## API\n\nThe PKI secrets engine has a full HTTP API. Please see the [PKI secrets engine API](\/vault\/api-docs\/secret\/pki) for more details."}
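The batch-token requirement described above is satisfied when the dedicated auth mount is created and tuned. A minimal sketch, assuming a hypothetical mount path `userpass-est`, user `est-client`, and policy name `est-clients-policy` (all illustrative, not from the document):

```shell-session
# Enable a userpass mount dedicated to EST clients
$ vault auth enable -path=userpass-est userpass

# Require batch tokens so EST requests do not generate leases
$ vault auth tune -token-type=batch userpass-est

# Create an EST client user bound to the EST ACL policy
$ vault write auth/userpass-est/users/est-client \
    password="example-password" \
    token_policies="est-clients-policy"
```

The accessor for this mount (readable via `vault read -field=accessor sys/auth/userpass-est`) is the value to add to the PKI mount's `delegated-auth-accessors` tunable.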
{"questions":"vault Troubleshoot PKI Secrets Engine and ACME Secrets Engine s ACME server Solve common problems related to ACME client integration with Vault PKI page title PKI Secrets Engine Troubleshooting ACME layout docs Troubleshoot problems with ACME clients and Vault PKI Secrets Engine s ACME server","answers":"---\nlayout: docs\npage_title: 'PKI - Secrets Engine: Troubleshooting ACME'\ndescription: Troubleshoot problems with ACME clients and Vault PKI Secrets Engine's ACME server.\n---\n\n# Troubleshoot PKI Secrets Engine and ACME\n\nSolve common problems related to ACME client integration with Vault PKI\nSecrets Engine's ACME server.\n\n## Error: ACME feature requires local cluster 'path' field configuration to be set\n\nIf ACME works on some nodes of a Vault Enterprise cluster but not on\nothers, it likely means that the cluster address has not been set.\n\n### Symptoms\n\nWhen a Vault client reads the ACME config (`\/config\/acme`) on a\nPerformance Secondary node, or when an ACME client attempts to connect to a\ndirectory on this node, it will error with:\n\n> ACME feature requires local cluster 'path' field configuration to be set\n\n### Cause\n\nIn most cases, cluster path errors mean that the required cluster address is\nnot set in the cluster configuration.\n\n### Resolution\n\nFor each Performance Replication cluster, read the value of `\/config\/cluster`\nand ensure the `path` field is set. When it is missing, update the URL to\npoint to this mount's path on a TLS-enabled address for this PR cluster; this\ndomain may be a load-balanced or a DNS round-robin address. 
For example:\n\n```\n$ vault write pki\/config\/cluster path=https:\/\/cluster-b.vault.example.com\/v1\/pki\n```\n\nOnce this is done, re-read the ACME configuration and make sure no warnings\nappear:\n\n```\n$ vault read pki\/config\/acme\n```\n\n## Error: Unable to register an account with the ACME server\n\n### Symptoms\n\nWhen registering a new account without an [External Account Binding\n(EAB)](\/vault\/api-docs\/secret\/pki#acme-external-account-bindings), the\nVault server rejects the request with a response like:\n\n> Unable to register an account with ACME server\n\nwith further information provided in the debug logs (in the case of\n`certbot`):\n\n> Server requires external account binding.\n\nor, if the client incorrectly contacted the server, an error like:\n\n> The request must include a value for the 'externalAccountBinding' field\n\nIn either case, a new account needs to be created with an EAB token created\nby Vault.\n\n### Cause\n\nIf a server has been updated to require `eab_policy=always-required` in the\n[ACME configuration](\/vault\/api-docs\/secret\/pki#set-acme-configuration),\nnew account registration (and reuse of existing accounts) will fail.\n\n### Resolution\n\nUsing a Vault token, [fetch a new external account\nbinding](\/vault\/api-docs\/secret\/pki#get-acme-eab-binding-token) for\nthe [desired directory](\/vault\/api-docs\/v1.14.x\/secret\/pki#acme-directories):\n\n```\n$ vault write -f pki\/roles\/my-role-name\/acme\/new-eab\n...\ndirectory roles\/my-role-name\/acme\/directory\nid        bc8088d9-3816-5177-ae8e-d8393265f7dd\nkey       MHcCAQE... additional data elided ...\n...\n```\n\nThen pass this new EAB token into the ACME client. For example, with\n`certbot`:\n\n```\n$ certbot [... additional parameters ...] \\\n    --server https:\/\/cluster-b.vault.example.com\/v1\/pki\/roles\/my-role-name\/acme\/directory \\\n    --eab-kid bc8088d9-3816-5177-ae8e-d8393265f7dd \\\n    --eab-hmac-key MHcCAQE... 
additional data elided ...\n```\n\nEnsure that the ACME directory passed to the ACME client matches that\nfetched from the Vault.\n\n## Error: Failed to verify eab\n\n### Symptoms\n\nWhen initializing a new account against this Vault server, the ACME client\nmight error with a message like:\n\n> The client lacks sufficient authorization :: failed to verify eab\n\nThis is caused by requesting an EAB from a directory not matching the\none the client used.\n\n### Cause\n\nIf an EAB account token is incorrectly used with the wrong directory, the\nACME server will reject the request with an error about insufficient\npermissions.\n\n### Resolution\n\nEnsure the requested EAB token matches the directory. For a given directory\nat `\/some\/path\/acme\/directory`, fetch EAB tokens from\n`\/some\/path\/acme\/new-eab`. The remaining resolution steps are the same as\nfor [debugging account registration\nfailures](#debugging-account-registration-failures).\n\n## Error: ACME validation failed for `{challenge_id}`\n\n### Symptoms\n\nWhen viewing the Vault server logs or attempting to fetch a certificate via\nan ACME client, an error like:\n\n> ACME validation failed for a465a798-4400-6c17-6735-e1b38c23de38-tls-alpn-01: ...\n\nindicates that the server was unable to validate this challenge accepted\nby the client.\n\n### Cause\n\nVault can not verify the server's identity through the client's requested\n[challenge type](\/vault\/api-docs\/secret\/pki#acme-challenge-types) (`dns-01`,\n`http-01`, or `tls-alpn-01`). 
Vault will not issue the certificate requested\nby the client.\n\n### Resolution\n\nEnsure that DNS is configured correctly from the Vault server's perspective,\nincluding setting [any custom DNS resolver](\/vault\/api-docs\/secret\/pki#dns_resolver).\n\nEnsure that any firewalls are set up to allow Vault to talk to the relevant\nsystems (the DNS server in the case of `dns-01`, port 80 on the target\nmachine for `http-01`, or port 443 on the target machine for `tls-alpn-01`\nchallenges).\n\n## Error: The client lacks sufficient authorization: account in status: revoked\n\n### Symptoms\n\nWhen attempting to renew a certificate, the ACME client reports an error:\n\n> The client lacks sufficient authorization: account in status: revoked\n\n### Cause\n\nIf you run a [manual tidy](\/vault\/api-docs\/secret\/pki#tidy_acme) or have\n[auto-tidy](\/vault\/api-docs\/secret\/pki#configure-automatic-tidy) enabled\nwith `tidy_acme=true`, Vault will periodically remove stale ACME accounts.\n\nConnections from clients using removed accounts will be rejected.\n\n### Resolution\n\nRefer to the ACME client's documentation for removing cached local\nconfiguration and set up a new account, specifying any EABs as required.\n\n## Get help\n\nPlease provide the following information when contacting HashiCorp Support\nor filing a GitHub issue to help with our investigation and reproducibility:\n\n - ACME client name and version\n - ACME client logs and\/or output\n - Vault server **DEBUG** level logs\n\n## Tutorial\n\nRefer to the [Build Your Own Certificate Authority (CA)](\/vault\/tutorials\/secrets-management\/pki-engine)\nguide for a step-by-step tutorial.\n\nHave a look at the [PKI Secrets Engine with Managed Keys](\/vault\/tutorials\/enterprise\/managed-key-pki)\nfor more about how to use externally managed keys with PKI.\n\n## API\n\nThe PKI secrets engine has a full HTTP API. 
Please see the\n[PKI secrets engine API](\/vault\/api-docs\/secret\/pki) for more\ndetails.","site":"vault"}
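When debugging the EAB-related registration errors above, it can help to confirm the server-side policy before changing client behavior. A sketch, assuming a PKI mount at `pki` (the mount path is illustrative):

```shell-session
# Inspect the current ACME configuration, including the active eab_policy
$ vault read pki/config/acme

# Enforce EAB for all accounts (the setting the troubleshooting entry describes)
$ vault write pki/config/acme enabled=true eab_policy=always-required
```

If the read shows `eab_policy` set to `always-required`, every client must present an EAB token fetched from the matching `new-eab` endpoint, as described in the resolution steps.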
{"questions":"vault The PKI secrets engine for Vault generates TLS certificates Secrets Engine layout docs PKI secrets engine setup and usage This document provides a brief overview of the setup and usage of the PKI page title PKI Secrets Engines Setup and Usage","answers":"---\nlayout: docs\npage_title: 'PKI - Secrets Engines: Setup and Usage'\ndescription: The PKI secrets engine for Vault generates TLS certificates.\n---\n\n# PKI secrets engine - setup and usage\n\nThis document provides a brief overview of the setup and usage of the PKI\nSecrets Engine.\n\n## Setup\n\nMost secrets engines must be configured in advance before they can perform their\nfunctions. These steps are usually completed by an operator or configuration\nmanagement tool.\n\n1. Enable the PKI secrets engine:\n\n   ```text\n   $ vault secrets enable pki\n   Success! Enabled the pki secrets engine at: pki\/\n   ```\n\n   By default, the secrets engine will mount at the name of the engine. To\n   enable the secrets engine at a different path, use the `-path` argument.\n\n1. Increase the TTL by tuning the secrets engine. The default value of 30 days may be too short, so increase it to 1 year:\n\n   ```text\n   $ vault secrets tune -max-lease-ttl=8760h pki\n   Success! Tuned the secrets engine at: pki\/\n   ```\n\n   Note that individual roles can restrict this value to be shorter on a\n   per-certificate basis. This just configures the global maximum for this\n   secrets engine.\n\n1. Configure a CA certificate and private key. Vault can accept an existing key\n   pair, or it can generate its own self-signed root. 
In general, we recommend\n   maintaining your root CA outside of Vault and providing Vault a signed\n   intermediate CA.\n\n   ```text\n   $ vault write pki\/root\/generate\/internal \\\n       common_name=my-website.com \\\n       ttl=8760h\n\n   Key              Value\n   ---              -----\n   certificate      -----BEGIN CERTIFICATE-----...\n   expiration       1536807433\n   issuing_ca       -----BEGIN CERTIFICATE-----...\n   serial_number    7c:f1:fb:2c:6e:4d:99:0e:82:1b:08:0a:81:ed:61:3e:1d:fa:f5:29\n   ```\n\n   The returned certificate is purely informative. The private key is safely\n   stored internally in Vault.\n\n1. Update the CRL location and issuing certificates. These values can be updated\n   in the future.\n\n   ```text\n   $ vault write pki\/config\/urls \\\n       issuing_certificates=\"http:\/\/127.0.0.1:8200\/v1\/pki\/ca\" \\\n       crl_distribution_points=\"http:\/\/127.0.0.1:8200\/v1\/pki\/crl\"\n   Success! Data written to: pki\/config\/urls\n   ```\n\n1. Configure a role that maps a name in Vault to a procedure for generating a\n   certificate. When users or machines generate credentials, they are generated\n   against this role:\n\n   ```text\n   $ vault write pki\/roles\/example-dot-com \\\n       allowed_domains=my-website.com \\\n       allow_subdomains=true \\\n       max_ttl=72h\n   Success! Data written to: pki\/roles\/example-dot-com\n   ```\n\n## Usage\n\nAfter the secrets engine is configured and a user\/machine has a Vault token with\nthe proper permission, it can generate credentials.\n\n1.  
Generate a new credential by writing to the `\/issue` endpoint with the name\n    of the role:\n\n    ```text\n    $ vault write pki\/issue\/example-dot-com \\\n        common_name=www.my-website.com\n\n    Key                 Value\n    ---                 -----\n    certificate         -----BEGIN CERTIFICATE-----...\n    issuing_ca          -----BEGIN CERTIFICATE-----...\n    private_key         -----BEGIN RSA PRIVATE KEY-----...\n    private_key_type    rsa\n    serial_number       1d:2e:c6:06:45:18:60:0e:23:d6:c5:17:43:c0:fe:46:ed:d1:50:be\n    ```\n\n    The output will include a dynamically generated private key and certificate\n    which corresponds to the given role and expires in 72h (as dictated by our\n    role definition). The issuing CA and trust chain is also returned for\n    automation simplicity.\n\n## Tutorial\n\nRefer to the [Build Your Own Certificate Authority (CA)](\/vault\/tutorials\/secrets-management\/pki-engine)\nguide for a step-by-step tutorial.\n\nHave a look at the [PKI Secrets Engine with Managed Keys](\/vault\/tutorials\/enterprise\/managed-key-pki)\nfor more about how to use externally managed keys with PKI.\n\n## API\n\nThe PKI secrets engine has a full HTTP API. 
Please see the\n[PKI secrets engine API](\/vault\/api-docs\/secret\/pki) for more\ndetails.","site":"vault"}
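After issuance, the returned certificate can be inspected to confirm the role's constraints (common name, 72h TTL) took effect. A sketch using `openssl`; the output filename `www.crt` is illustrative:

```shell-session
# Capture only the certificate field from the issue response
$ vault write -field=certificate pki/issue/example-dot-com \
    common_name=www.my-website.com > www.crt

# Check the subject and expiry against the role definition
$ openssl x509 -in www.crt -noout -subject -enddate
```

The `-enddate` output should fall roughly 72 hours from issuance, matching the role's `max_ttl`.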
{"questions":"vault Migrate Consul to Raft storage page title Migrate Consul to Raft storage This procedure assumes you have a Vault cluster deployed in a Kubernetes environment configured with Consul storage The storage migration can occur while leaving the Consul cluster intact A single change to the Consul cluster is a lock file written by Vault during the migration Guide to migration of Consul storage to Raft layout docs","answers":"---\nlayout: docs\npage_title: Migrate Consul to Raft storage\ndescription: >-\n  Guide to migration of Consul storage to Raft.\n---\n\n# Migrate Consul to Raft storage\n\nThis procedure assumes you have a Vault cluster deployed in a Kubernetes environment configured with Consul storage.  The storage migration can occur while leaving the Consul cluster intact.  The only change to the Consul cluster is a lock file written by Vault during the migration.\n\nThis guide uses basic examples and default Vault configurations.  It is for illustrative purposes, and adaptation to the specific configurations of your environment is still required.\n\n<Warning title=\"Back up data\">\n\nAlways back up your data before attempting migration! Although this is an offline operation and the risk is low, it is advisable to take a recent snapshot from your Consul cluster before proceeding.\n\n<\/Warning>\n\n## Overview\n\nThis guide uses an intermediate Helm configuration to introduce an init container that will perform the storage migration, and then start a single Vault server using the Raft storage backend to verify the results.  
Then update the Helm configuration to remove the init container and start Vault replicas.\n\n### Vault and Kubernetes setup\n\nConsider the following `vault status` output and Helm Chart values for Vault:\n\n<CodeBlockConfig hideClipboard>\n\n```plaintext\nKey             Value\n---             -----\nSeal Type       shamir\nInitialized     true\nSealed          false\nTotal Shares    1\nThreshold       1\nVersion         1.14.8+ent\nBuild Date      2023-12-05T01:49:39Z\nStorage Type    consul\nCluster Name    vault-cluster-68870bf8\nCluster ID      cd18c692-f2e3-77a5-fba3-28f06f41f375\nHA Enabled      true\nHA Cluster      https:\/\/vault-0.vault-internal:8201\nHA Mode         active\nActive Since    2024-04-10T02:45:33.367042122Z\nLast WAL        52\n```\n\n<\/CodeBlockConfig>\n\nHelm chart values:\n\n<CodeBlockConfig hideClipboard>\n\n```plaintext\nglobal:\n  enabled: false\n\nserver:\n  enabled: true\n  image:\n    repository: hashicorp\/vault-enterprise\n    tag: 1.14.8-ent\n  enterpriseLicense:\n    secretName: vault-license\n    secretKey: vault.hclic\n  ha:\n    enabled: true\n    replicas: 3\n    config: |\n      ui = true\n      service_registration \"kubernetes\" {}\n\n      listener \"tcp\" {\n        address = \":8200\"\n        cluster_address = \":8201\"\n        tls_disable = 1\n      }\n\n      storage \"consul\" {\n        path = \"vault\"\n        address = \"http:\/\/HOST_IP:8500\"\n      }\n```\n\n<\/CodeBlockConfig>\n\n### Migration procedure\n\n1. Uninstall Vault via Helm.\n\n  ```shell-session\n  $ helm uninstall vault\n  ```\n\n  Deployed `StatefulSets` cannot have certain attributes modified after their initial deployment.  Therefore, the `StatefulSet` deployment must be entirely replaced.\n\n Vault servers using Consul storage are by default stateless. Unless explicitly configured, the Vault server `StatefulSet` does not create any Persistent Volume Claims (PVC) or other artifacts.  
Vault's index holds its state, which is entirely stored in the Consul server `StatefulSet`'s persistent volumes.\n\n  <Warning title='Caution'>\n\n  It is strongly advised to review your Vault deployment configurations and take appropriate backups for any stateful information managed via Helm or other orchestration platforms.\n\n  <\/Warning>\n\n1. Create a `ConfigMap` containing the Storage Migration configuration.\n\n  ```shell-session\n  $ cat > vault-storage-migration-configmap.yml <<EOF\n  apiVersion: v1\n  kind: ConfigMap\n  metadata:\n    labels:\n      app.kubernetes.io\/instance: vault\n      app.kubernetes.io\/name: vault\n    name: storage-migration\n    namespace: default\n  data:\n    migrate.hcl: |-\n      storage_source \"consul\" {\n        address = \"http:\/\/consul-server.default.svc.cluster.local:8500\"\n        path = \"vault\/\"\n      }\n\n      storage_destination \"raft\" {\n        path = \"\/vault\/data\"\n      }\n\n      cluster_addr = \"https:\/\/vault-0.vault-internal:8201\" \n  EOF\n  ```\n  \n  Often your Vault server should communicate to Consul via a Consul client agent.  This example uses the service endpoint for a Consul server deployed in Kubernetes, although it can work for a Consul server cluster deployed outside of Kubernetes as well.\n\n1. Apply the `ConfigMap`.\n\n  ```shell-session\n  $ kubectl create -f vault-storage-migration-configmap.yml\n  ```\n\n1. 
Install Vault via Helm deployment with Raft Migration storage configuration.\n\n  ```shell-session\n  $ cat > vault-migration-values.yml <<EOF\n  global:\n    enabled: false\n\n  server:\n    enabled: true\n    image:\n      repository: hashicorp\/vault-enterprise\n      tag: 1.14.8-ent\n    enterpriseLicense:\n      secretName: vault-license\n      secretKey: vault.hclic\n    extraInitContainers:\n      - name: vault-storage-migration\n        image: hashicorp\/vault-enterprise:1.14.8-ent\n        command:\n          - \"\/bin\/sh\"\n          - \"-ec\"\n        args:\n          - \"\/bin\/vault operator migrate -config \/vault\/storage-migration\/migrate.hcl\"\n        volumeMounts:\n          - name: storage-migration\n            mountPath: \"\/vault\/storage-migration\"\n          - name: data\n            mountPath: \"\/vault\/data\"\n    volumeMounts:\n      - name: storage-migration\n        mountPath: \"\/vault\/storage-migration\"\n    volumes:\n      - name: storage-migration\n        configMap:\n          name: storage-migration\n    dataStorage:\n      enabled: true\n      size: \"1Gi\"\n    ha:\n      enabled: true\n      replicas: 1\n      raft:\n        enabled: true\n        config: |\n          ui = true\n          service_registration \"kubernetes\" {}\n\n          listener \"tcp\" {\n            address = \":8200\"\n            cluster_address = \":8201\"\n            tls_disable = 1\n          }\n\n          storage \"raft\" {\n            path = \"\/vault\/data\"\n            retry_join {\n              auto_join_scheme = \"http\"\n              auto_join = \"provider=k8s\"\n            }\n          }\n  EOF\n  ```\n\n  **Configuration notes**\n    - `storage \u201craft\u201d` configuration to specify the path for the Raft DB (`\/vault\/data` by default), and any `retry_join` parameters in your original configuration.\n      - This example uses `auto_join` to automatically find Raft peers via the Kubernetes API.  
See [`retry_join`](\/vault\/docs\/configuration\/storage\/raft#retry_join-stanza) for more information.\n    - The `dataStorage` configuration in the Helm override values specifies the parameters of the PVCs the Vault `StatefulSet` will create.\n    - `extraInitContainers` will start an init container mounting the storage migration ConfigMap and `data` volume, which it will then use to execute the storage migration.\n    - `replicas: 1`\n      - This setting is temporary for the purposes of the migration.  The new Vault `StatefulSet` starts with a single replica so you can confirm that the init container completed the migration, then unseal Vault using the new storage backend.\n\n1. Apply this configuration.\n\n  ```shell-session\n  $ helm install vault hashicorp\/vault -f vault-migration-values.yml\n  ```\n  \n1. Review the migration logs.\n  \n  ```shell-session\n  $ kubectl logs vault-0 -c vault-storage-migration\n  ```\n  \n1. Unseal Vault.\n\n  ```shell-session\n  $ kubectl exec -it vault-0 -- vault operator unseal\n  Key                     Value\n  ---                     -----\n  Seal Type               shamir\n  Initialized             true\n  Sealed                  false\n  Total Shares            1\n  Threshold               1\n  Version                 1.14.8+ent\n  Build Date              2023-12-05T01:49:39Z\n  Storage Type            raft\n  Cluster Name            vault-cluster-68870bf8\n  Cluster ID              cd18c692-f2e3-77a5-fba3-28f06f41f375\n  HA Enabled              true\n  HA Cluster              https:\/\/vault-0.vault-internal:8201\n  HA Mode                 active\n  Active Since            2024-04-10T04:20:23.707098402Z\n  Raft Committed Index    157\n  Raft Applied Index      157\n  Last WAL                55\n  ```\n\n1. 
Update Vault Helm deployment with Raft storage configuration.\n\n  ```shell-session\n  $ cat > vault-raft-values.yml <<EOF\n  global:\n    enabled: false\n  server:\n    enabled: true\n    image:\n      repository: hashicorp\/vault-enterprise\n      tag: 1.14.8-ent\n    enterpriseLicense:\n      secretName: vault-license\n      secretKey: vault.hclic\n    dataStorage:\n      enabled: true\n      size: \"1Gi\"\n    ha:\n      enabled: true\n      replicas: 5\n      raft:\n        enabled: true\n        config: |\n          ui = true\n          service_registration \"kubernetes\" {}\n\n          listener \"tcp\" {\n            address = \":8200\"\n            cluster_address = \":8201\"\n            tls_disable = 1\n          }\n\n          storage \"raft\" {\n            path = \"\/vault\/data\"\n            retry_join {\n              auto_join_scheme = \"http\"\n              auto_join = \"provider=k8s\"\n            }\n          }\n  EOF\n  ```\n  **Configuration notes**\n    - `replicas: 5`\n      - Upgrade the Helm deployment in place using the final Raft storage configuration, removing the `extraInitContainers` and storage migration `ConfigMap`, and increasing the number of replicas.  The `retry_join` parameters are used by the new Vault server replicas to automatically join the cluster.\n\n1. Apply the configuration.\n\n  ```shell-session\n  $ helm upgrade vault hashicorp\/vault -f vault-raft-values.yml\n  ```\n  \n1. Unseal the remaining Vault replicas.\n\n  ```shell-session\n  $ for i in {1..4} ; do kubectl exec -it vault-$i -- vault operator unseal ; done\n  ```\n\n1. 
Confirm the Raft peers have formed a quorum.\n\n  ```shell-session\n  $ kubectl exec -it vault-0 -- vault operator raft list-peers\n\n  Node                                    Address                        State       Voter\n  ----                                    -------                        -----       -----\n  24c166d8-a8bb-3ac7-f8a0-12bd066a34bb    vault-0.vault-internal:8201    leader      true\n  626434d1-170b-575a-2a04-af4f2e90820b    vault-1.vault-internal:8201    follower    true\n  1dfbba31-9b5b-2d16-18ce-bfa7b6c0ead6    vault-2.vault-internal:8201    follower    true\n  3f333082-1a64-7559-0142-e4f1658a28f3    vault-3.vault-internal:8201    follower    true\n  9ca5a15e-3ddc-d132-0b46-5b895f3828dc    vault-4.vault-internal:8201    follower    true\n  ```\n  \n## Rollback procedure\n\nTo revert to the original configuration, delete the Helm deployment and re-deploy it using the override values that specify your Consul storage configuration.\n\nNote that the Vault Helm Chart's default configuration using Raft storage will retain any PVCs created.  Vault does not use these while configured with Consul storage.  You will need to remove the PVCs before re-attempting the migration.\n\n1. Uninstall Vault via Helm.\n\n  ```shell-session\n  $ helm uninstall vault\n  ```\n\n1. Install Vault via Helm with the old Consul storage configuration.\n\n  ```shell-session\n  $ helm install vault hashicorp\/vault -f vault-consul-values.yml\n  ```\n  \n1. 
Unseal Vault and confirm the storage has reverted to Consul.\n\n  <CodeBlockConfig highlight=\"11\">\n\n  ```shell-session\n  $ kubectl exec -it vault-0 -- vault status\n  Key             Value\n  ---             -----\n  Seal Type       shamir\n  Initialized     true\n  Sealed          false\n  Total Shares    1\n  Threshold       1\n  Version         1.14.8+ent\n  Build Date      2023-12-05T01:49:39Z\n  Storage Type    consul\n  Cluster Name    vault-cluster-68870bf8\n  Cluster ID      cd18c692-f2e3-77a5-fba3-28f06f41f375\n  HA Enabled      true\n  HA Cluster      https:\/\/vault-0.vault-internal:8201\n  HA Mode         active\n  Active Since    2024-04-10T04:44:12.516016652Z\n  Last WAL        54\n  ```\n\n  <\/CodeBlockConfig>\n\n## References\n\n- [Vault operator migrate command](\/vault\/docs\/commands\/operator\/migrate)\n- [Helm Chart configuration](\/vault\/docs\/platform\/k8s\/helm\/configuration)\n- [Vault on Kubernetes deployment guide](\/vault\/tutorials\/kubernetes\/kubernetes-raft-deployment-guide)\n- [Vault Helm Chart configuration](https:\/\/github.com\/hashicorp\/vault-helm)\n- [kubectl commands](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands)\n- [Kubernetes storage volumes](https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes\/)\n- [Create a Pod that has an Init Container](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-pod-initialization\/#create-a-pod-that-has-an-init-container)\n- [Helm docs](https:\/\/helm.sh\/docs\/)","site":"vault"}
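When scripting checks like the `vault status` steps in this guide, the `Storage Type` row can be parsed out of the command's output. A minimal sketch using `awk` on a captured sample; in a live cluster the output would come from `kubectl exec vault-0 -- vault status` instead of the hard-coded text:

```shell
# Sample output in the same key/value format as the 'vault status' listings above.
status_output='Key             Value
---             -----
Seal Type       shamir
Storage Type    raft
HA Enabled      true'

# Print the third whitespace-separated field of the "Storage Type" row.
printf '%s\n' "$status_output" | awk '/^Storage Type/ {print $3}'
```

After a successful migration this prints `raft`; after the rollback procedure it would print `consul`.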
{"questions":"vault This document explores two different methods for integrating HashiCorp Vault with Kubernetes The information provided is intended for DevOps practitioners who understand secret management concepts and are familiar with HashiCorp Vault and Kubernetes This document also offers practical guidance to help you understand and choose the best method for your use case Agent injector vs Vault CSI provider This section compares Sidecar Injector and Vault CSI Provider for Kubernetes and Vault integration layout docs page title Agent Injector vs Vault CSI Provider","answers":"---\nlayout: docs\npage_title: Agent Injector vs. Vault CSI Provider\ndescription: This section compares Sidecar Injector and Vault CSI Provider for Kubernetes and Vault integration.\n---\n\n# Agent injector vs. Vault CSI provider\n\nThis document explores two different methods for integrating HashiCorp Vault with Kubernetes. The information provided is intended for DevOps practitioners who understand secret management concepts and are familiar with HashiCorp Vault and Kubernetes. This document also offers practical guidance to help you understand and choose the best method for your use case.\n\nInformation contained within this document details the contrast between the Agent Injector, also referred to as _Vault Sidecar_ or _Sidecar_ in this document, and the Vault Container Storage Interface (CSI) provider used to integrate Vault and Kubernetes.\n\n## Vault sidecar agent injector\n\nThe [Vault Sidecar Agent Injector](\/vault\/docs\/platform\/k8s\/injector) leverages the [sidecar pattern](https:\/\/docs.microsoft.com\/en-us\/azure\/architecture\/patterns\/sidecar) to alter pod specifications to include a Vault Agent container that renders Vault secrets to a shared memory volume. By rendering secrets to a shared volume, containers within the pod can consume Vault secrets without being Vault-aware. The injector is a Kubernetes mutating webhook controller. 
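As a concrete illustration, a pod typically opts in to injection through annotations. The annotation keys below come from the vault-k8s project; the role name and secret path are placeholder values in this hedged sketch, which simply prints the snippet via a heredoc:

```shell
# Print an example pod metadata snippet showing common injector annotations.
# Annotation keys are from the vault-k8s project; "my-app" and the secret
# path are placeholders, not values from this document.
annotations=$(cat <<'EOF'
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "my-app"
    vault.hashicorp.com/agent-inject-secret-config: "secret/data/my-app/config"
EOF
)
printf '%s\n' "$annotations"
```

With these annotations present, the mutating webhook adds the Vault Agent container to the pod spec at admission time.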
The controller intercepts pod events and applies mutations to the pod if annotations exist within the request. This functionality is provided by the [vault-k8s](https:\/\/github.com\/hashicorp\/vault-k8s) project and can be automatically installed and configured using the Vault Helm chart.\n\n![Vault Sidecar Injection Workflow](\/img\/vault-sidecar-inject-workflow.png)\n\n## Vault CSI provider\n\nThe [Vault CSI provider](\/vault\/docs\/platform\/k8s\/csi) allows pods to consume Vault secrets by using ephemeral [CSI Secrets Store](https:\/\/github.com\/kubernetes-sigs\/secrets-store-csi-driver) volumes. At a high level, the CSI Secrets Store driver enables users to create `SecretProviderClass` objects. These objects define which secret provider to use and what secrets to retrieve. When pods requesting CSI volumes are created, the CSI Secrets Store driver sends the request to the Vault CSI provider if the provider is `vault`. The Vault CSI provider then uses the specified `SecretProviderClass` and the pod\u2019s service account to retrieve the secrets from Vault and mount them into the pod\u2019s CSI volume. Note that the secret is retrieved from Vault and populated to the CSI secrets store volume during the `ContainerCreation` phase. Therefore, pods are blocked from starting until the secrets are read from Vault and written to the volume.\n\n![Vault CSI Workflow](\/img\/vault-csi-workflow.png)\n\n~> **Note**: Secrets are fetched earlier in the pod lifecycle; therefore, they have fewer compatibility issues with Sidecars, such as Istio.\n\nBefore we get into some of the similarities and differences between the two solutions, let's look at several common design considerations.\n\n- **Secret projections:** Every application requires secrets to be explicitly presented. Typically, applications expect secrets to be either exported as environment variables or written to a file that the application can read on startup. 
Keep that in mind as you\u2019re deciding on a suitable method to use.\n\n- **Secret scope:** Some applications are deployed across multiple Kubernetes environments (e.g., dev, qa, prod) across your data centers, the edge, or public clouds. Some services run outside of Kubernetes on VMs, serverless, or other cloud-managed services. You may face scenarios where these applications need to share sets of secrets across these heterogeneous environments. Scoping the secrets correctly to be either local to the Kubernetes environment or global across different environments helps ensure that each application can easily and securely access its own set of secrets within the environment it is deployed in.\n\n- **Secret types:** Secrets can be text files, binary files, tokens, or certs, or they can be statically or dynamically generated. They can also be valid permanently or time-scoped, and can vary in size. You need to consider the secret types your application requires and how they\u2019re projected into the application.\n\n- **Secret definition:** You also need to consider how each secret is defined, created, updated, and removed, as well as the tooling associated with that process.\n\n- **Encryption:** Encrypting secrets both at rest and in transit is a critical requirement for many enterprise organizations.\n\n- **Governance:** Applications and secrets can have a many-to-many relationship that requires careful consideration when granting applications access to retrieve their respective secrets. 
As the number of applications and secrets scale, so does the challenge of managing their access policies.\n\n- **Secrets updates and rotation:** Secrets can be leased, time-scoped, or automatically rotated, and each scenario needs to be a programmatic process to ensure the new secret is propagated to the application pods properly.\n\n- **Secret caching:** In certain Kubernetes environments (e.g., edge or retail), there is a potential need for secret caching in the case of communication or network failures between the environment and the secret storage.\n\n- **Auditability:** Keeping a secret access audit log detailing all secret access information is critical to ensure traceability of secret-access events.\n\nNow that you're familiar with some of the design considerations, we'll explore the similarities and differences between the two solutions to help you determine the best solution to use as you design and implement your secrets management strategy in a Kubernetes environment.\n\n## Similarities\n\nBoth Agent Injection and Vault CSI solutions have the following similarities:\n\n- They simplify retrieving different types of secrets stored in Vault and expose them to the target pod running on Kubernetes without knowing the not-so-trivial Vault processes. It\u2019s important to note that there is no need to change the application logic or code to use these solutions, therefore, making it easier to migrate brownfield applications into Kubernetes. Developers working on greenfield applications can leverage the Vault SDKs to integrate with Vault directly.\n\n- They support all types of Vault [secrets engines](\/vault\/docs\/secrets). 
This support allows you to leverage an extensive set of secret types, ranging from static key-value secrets to dynamically generated database credentials and TLS certs with customized TTL.\n\n- They leverage the application\u2019s Kubernetes pod service account token as [Secret Zero](https:\/\/www.hashicorp.com\/resources\/secret-zero-mitigating-the-risk-of-secret-introduction-with-vault) to authenticate with Vault via the Kubernetes auth method. With this method, there is no need to manage yet another separate identity to identify the application pods when authenticating to Vault.\n\n- Secret lifetime is tied to the lifetime of the pod for both methods. While this holds true for file contents inside the pod, this also holds true for Kubernetes secrets that CSI creates. Secrets are automatically created and deleted as the pod is created and deleted.\n\n![Vault's Kubernetes auth workflow](\/img\/k8s-auth-workflow.png)\n\n- They require the desired secrets to exist within Vault before deploying the application.\n\n- They require the pod\u2019s service account to bind to a Vault role with a policy enabling access to desired secrets (that is, Kubernetes RBAC isn\u2019t used to authorize access to secrets).\n\n- They can both be deployed via Helm.\n\n- They require successfully retrieving secrets from Vault before the pods are started.\n\n- They rely on user-defined pod annotations to retrieve the required secrets from Vault.\n\n## Differences\n\nNow that you understand the similarities, consider the following differences between the two solutions:\n\n- The Sidecar Agent Injector solution is composed of two elements:\n\n  - The Sidecar Service Injector, which is deployed as a cluster service and is responsible for intercepting Kubernetes apiserver pod events and mutating pod specs to add required sidecar containers\n  - The Vault Sidecar Container, which is deployed alongside each application pod and is responsible for authenticating into Vault, retrieving 
secrets from Vault, and rendering secrets for the application to consume.\n\n- In contrast, the Vault CSI Driver is deployed as a daemonset on every node in the Kubernetes cluster and uses the Secret Provider Class specified and the pod\u2019s service account to retrieve the secrets from Vault and mount them into the pod\u2019s CSI volume.\n\n- The Sidecar Agent Injector supports [all](\/vault\/docs\/platform\/k8s\/injector\/annotations#vault-hashicorp-com-auth-path) Vault [auto-auth](\/vault\/docs\/agent-and-proxy\/autoauth\/methods) methods. The Vault CSI driver supports only Vault\u2019s [Kubernetes auth method](\/vault\/docs\/platform\/k8s\/csi\/configurations#vaultkubernetesmountpath).\n\n- The Sidecar container launched with every application pod uses [Vault Agent](https:\/\/www.hashicorp.com\/blog\/why-use-the-vault-agent-for-secrets-management), which provides a powerful set of capabilities such as auto-auth, templating, and caching. The CSI driver does not use the Vault Agent and therefore lacks these functionalities.\n\n- The Vault CSI driver supports rendering Vault secrets into Kubernetes secrets and environment variables. Sidecar Injector Service does not support rendering secrets into Kubernetes secrets; however, [agent templating](\/vault\/docs\/platform\/k8s\/injector\/examples#environment-variable-example) can be used to render secrets into environment variables.\n\n- The CSI driver uses `hostPath` to mount ephemeral volumes into the pods, which some container platforms (e.g., OpenShift) disable by default. 
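For reference, the `SecretProviderClass` driving the CSI flow is a small Kubernetes object. A minimal hedged sketch follows; the role name, Vault address, and secret path are placeholder values, printed via a heredoc rather than applied to a cluster:

```shell
# Print a minimal SecretProviderClass example for the Vault provider.
# roleName, vaultAddress, and the secret path below are placeholders.
spc=$(cat <<'EOF'
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-example
spec:
  provider: vault
  parameters:
    roleName: "my-app"
    vaultAddress: "http://vault.default:8200"
    objects: |
      - objectName: "config"
        secretPath: "secret/data/my-app/config"
        secretKey: "config"
EOF
)
printf '%s\n' "$spc"
```

A pod then mounts a CSI volume referencing `vault-example`, and the driver calls the Vault provider with these parameters during container creation.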
On the other hand, Sidecar Agent Service uses in-memory _tmpfs_ volumes.\n\n- Sidecar Injector Service [automatically](\/vault\/docs\/agent-and-proxy\/agent\/template#renewals-and-updating-secrets) renews, rotates, and fetches secrets\/tokens while the CSI Driver does not support that.\n\n## Comparison chart\n\nThe below chart provides a high-level comparison between the two solutions.\n\n~> **Note:** Shared Memory Volume Environment Variable can be achieved through [Agent templating](\/vault\/docs\/platform\/k8s\/injector\/examples#environment-variable-example).\n\n![Comparison Chart](\/img\/comparison-table.png)\n\n## Going beyond the native kubernetes secrets\n\nOn the surface, Kubernetes native secrets might seem similar to the two approaches presented above, but there are significant differences between them:\n\n- Kubernetes is not a secrets management solution. It does have native support for secrets, but that is quite different from an enterprise secrets management solution. Kubernetes secrets are scoped to the cluster only, and many applications will have some services running outside Kubernetes or in other Kubernetes clusters. Having these applications use Kubernetes secrets from outside a Kubernetes environment will be cumbersome and introduce authentication and authorization challenges. Therefore, considering the secret scope as part of the design process is critical.\n\n- Kubernetes secrets are static in nature. You can define secrets by using kubectl or the Kubernetes API, but once they are defined, they are stored in etcd and presented to pods only during pod creation. Defining secrets in this manner may create scenarios where secrets get stale, outdated, or expired, requiring additional workflows to update and rotate the secrets, and then re-deploy the application to use the new version, which can add complexity and become quite time-consuming. 
Ensure consideration is given to all requirements for secret freshness, updates, and rotation as part of your design process.\n\n- The secret access management security model is tied to the Kubernetes RBAC model. This model can be challenging for users who are not familiar with Kubernetes. Adopting a platform-agnostic security governance model can enable you to adapt workflows for applications regardless of how and where they are running.\n\n## Summary\n\nDesigning secrets management in Kubernetes is an intricate task. There are multiple approaches, each with its own set of attributes. We recommend exploring the options presented in this document to increase your understanding of the internals and decide on the best option for your use case.\n\n## Additional resources\n\n- [HashiCorp Vault: Delivering Secrets with Kubernetes](https:\/\/medium.com\/hashicorp-engineering\/hashicorp-vault-delivering-secrets-with-kubernetes-1b358c03b2a3)\n\n- [Retrieve HashiCorp Vault Secrets with Kubernetes CSI](https:\/\/www.hashicorp.com\/blog\/retrieve-hashicorp-vault-secrets-with-kubernetes-csi)\n\n- [Mount Vault Secrets Through Container Storage Interface (CSI) Volume](\/vault\/tutorials\/kubernetes\/kubernetes-secret-store-driver)\n\n- [Injecting Secrets into Kubernetes Pods via Vault Agent Containers](\/vault\/tutorials\/kubernetes\/kubernetes-sidecar)\n\n- [Vault Sidecar Injector Configurations and Examples](\/vault\/docs\/platform\/k8s\/injector\/annotations)\n\n- [Vault CSI Driver Configurations and Examples](\/vault\/docs\/platform\/k8s\/csi\/configurations)","site":"vault","answers_cleaned":"    layout  docs page title  Agent Injector vs  Vault CSI Provider description  This section compares Sidecar Injector and Vault CSI Provider for Kubernetes and Vault integration         Agent injector vs  Vault CSI provider  This document explores two different methods for integrating HashiCorp Vault with Kubernetes  The information provided is intended for DevOps practitioners who 
understand secret management concepts and are familiar with HashiCorp Vault and Kubernetes  This document also offers practical guidance to help you understand and choose the best method for your use case   Information contained within this document details the contrast between the Agent Injector  also referred to as  Vault Sidecar  or  Sidecar  in this document  and the Vault Container Storage Interface  CSI  provider used to integrate Vault and Kubernetes      Vault sidecar agent injector  The  Vault Sidecar Agent Injector   vault docs platform k8s injector  leverages the  sidecar pattern  https   docs microsoft com en us azure architecture patterns sidecar  to alter pod specifications to include a Vault Agent container that renders Vault secrets to a shared memory volume  By rendering secrets to a shared volume  containers within the pod can consume Vault secrets without being Vault aware  The injector is a Kubernetes mutating webhook controller  The controller intercepts pod events and applies mutations to the pod if annotations exist within the request  This functionality is provided by the  vault k8s  https   github com hashicorp vault k8s  project and can be automatically installed and configured using the Vault Helm chart     Vault Sidecar Injection Workflow   img vault sidecar inject workflow png      Vault CSI provider  The  Vault CSI provider   vault docs platform k8s csi  allows pods to consume Vault secrets by using ephemeral  CSI Secrets Store  https   github com kubernetes sigs secrets store csi driver  volumes  At a high level  the CSI Secrets Store driver enables users to create  SecretProviderClass  objects  These objects define which secret provider to use and what secrets to retrieve  When pods requesting CSI volumes are created  the CSI Secrets Store driver sends the request to the Vault CSI provider if the provider is  vault   The Vault CSI provider then uses the specified  SecretProviderClass  and the pod s service account to retrieve the secrets 
from Vault and mount them into the pod s CSI volume  Note that the secret is retrieved from Vault and populated to the CSI secrets store volume during the  ContainerCreation  phase  Therefore  pods are blocked from starting until the secrets are read from Vault and written to the volume     Vault Sidecar Injection Workflow   img vault csi workflow png        Note    Secrets are fetched earlier in the pod lifecycle  therefore  they have fewer compatibility issues with Sidecars  such as Istio   Before we get into some of the similarities and differences between the two solutions  let s look at several common design considerations       Secret projections    Every application requires secrets to be explicitly presented  Typically  applications expect secrets to be either exported as environment variables or written to a file that the application can read on startup  Keep that in mind as you re deciding on a suitable method to use       Secret scope    Some applications are deployed across multiple Kubernetes environments  e g   dev  qa  prod  across your data centers  the edge  or public clouds  Some services run outside of Kubernetes on VMs  serverless  or other cloud managed services  You may face scenarios where these applications need to share sets of secrets across these heterogeneous environments  Scoping the secrets correctly to be either local to the Kubernetes environment or global across different environments helps ensure that each application can easily and securely access its own set of secrets within the environment it is deployed in       Secret types    Secrets can be text files  binary files  tokens  or certs  or they can be statically or dynamically generated  They can also be valid permanently or time scoped  and can vary in size  You need to consider the secret types your application requires and how they re projected into the application       Secret definition    You also need to consider how each secret is defined  created  updated  and removed  as 
well as the tooling associated with that process       Encryption    Encrypting secrets both at rest and in transit is a critical requirement for many enterprise organizations       Governance    Applications and secrets can have a many to many relationship that requires careful considerations when granting access for applications to retrieve their respective secrets  As the number of applications and secrets scale  so does the challenge of managing their access policies       Secrets updates and rotation    Secrets can be leased  time scoped  or automatically rotated  and each scenario needs to be a programmatic process to ensure the new secret is propagated to the application pods properly       Secret caching    In certain Kubernetes environments  e g   edge or retail   there is a potential need for secret caching in the case of communication or network failures between the environment and the secret storage       Auditability    Keeping a secret access audit log detailing all secret access information is critical to ensure traceability of secret access events   Now that you re familiar with some of the design considerations  we ll explore the similarities and differences between the two solutions to help you determine the best solution to use as you design and implement your secrets management strategy in a Kubernetes environment      Similarities  Both Agent Injection and Vault CSI solutions have the following similarities     They simplify retrieving different types of secrets stored in Vault and expose them to the target pod running on Kubernetes without knowing the not so trivial Vault processes  It s important to note that there is no need to change the application logic or code to use these solutions  therefore  making it easier to migrate brownfield applications into Kubernetes  Developers working on greenfield applications can leverage the Vault SDKs to integrate with Vault directly     They support all types of Vault  secrets engines   vault docs 
secrets   This support allows you to leverage an extensive set of secret types  ranging from static key value secrets to dynamically generated database credentials and TLS certs with customized TTL     They leverage the application s Kubernetes pod service account token as  Secret Zero  https   www hashicorp com resources secret zero mitigating the risk of secret introduction with vault  to authenticate with Vault via the Kubernetes auth method  With this method  there is no need to manage yet another separate identity to identify the application pods when authenticating to Vault     Secret lifetime is tied to the lifetime of the pod for both methods  While this holds true for file contents inside the pod  this also holds true for Kubernetes secrets that CSI creates  Secrets are automatically created and deleted as the pod is created and deleted     Vault s Kubernetes auth workflow   img k8s auth workflow png     They require the desired secrets to exist within Vault before deploying the application     They require the pod s service account to bind to a Vault role with a policy enabling access to desired secrets  that is  Kubernetes RBAC isn t used to authorize access to secrets      They can both be deployed via Helm     They require successfully retrieving secrets from Vault before the pods are started     They rely on user defined pod annotations to retrieve the required secrets from Vault      Differences  Now that you understand the similarities  there are differences between these two solutions to consider     The Sidecar Agent Injector solution is composed of two elements       The Sidecar Service Injector  which is deployed as a cluster service and is responsible for intercepting Kubernetes apiserver pod events and mutating pod specs to add required sidecar containers     The Vault Sidecar Container  which is deployed alongside each application pod and is responsible for authenticating into Vault  retrieving secrets from Vault  and rendering secrets 
for the application to consume     In contrast  the Vault CSI Driver is deployed as a daemonset on every node in the Kubernetes cluster and uses the Secret Provider Class specified and the pod s service account to retrieve the secrets from Vault and mount them into the pod s CSI volume     The Sidecar Agent Injector supports  all   vault docs platform k8s injector annotations vault hashicorp com auth path  Vault  auto auth   vault docs agent and proxy autoauth methods  methods  The Vault CSI driver supports only Vault s  Kubernetes auth method   vault docs platform k8s csi configurations vaultkubernetesmountpath      The Sidecar container launched with every application pod uses  Vault Agent  https   www hashicorp com blog why use the vault agent for secrets management   which provides a powerful set of capabilities such as auto auth  templating  and caching  The CSI driver does not use the Vault Agent and therefore lacks these functionalities     The Vault CSI driver supports rendering Vault secrets into Kubernetes secrets and environment variables  Sidecar Injector Service does not support rendering secrets into Kubernetes secrets  however  there are ways to use  agent templating   vault docs platform k8s injector examples environment variable example  to render secrets into environment variables     The CSI driver uses  hostPath  to mount ephemeral volumes into the pods  which some container platforms  e g   OpenShift  disable by default  On the other hand  Sidecar Agent Service uses in memory  tmpfs  volumes     Sidecar Injector Service  automatically   vault docs agent and proxy agent template renewals and updating secrets  renews  rotates  and fetches secrets tokens while the CSI Driver does not support that      Comparison chart  The below chart provides a high level comparison between the two solutions        Note    Shared Memory Volume Environment Variable can be achieved through  Agent templating   vault docs platform k8s injector examples environment 
variable example      Comparison Chart   img comparison table png      Going beyond the native kubernetes secrets  On the surface  Kubernetes native secrets might seem similar to the two approaches presented above  but there are significant differences between them     Kubernetes is not a secrets management solution  It does have native support for secrets  but that is quite different from an enterprise secrets management solution  Kubernetes secrets are scoped to the cluster only  and many applications will have some services running outside Kubernetes or in other Kubernetes clusters  Having these applications use Kubernetes secrets from outside a Kubernetes environment will be cumbersome and introduce authentication and authorization challenges  Therefore  considering the secret scope as part of the design process is critical     Kubernetes secrets are static in nature  You can define secrets by using kubectl or the Kubernetes API  but once they are defined  they are stored in etcd and presented to pods only during pod creation  Defining secrets in this manner may create scenarios where secrets get stale  outdated  or expired  requiring additional workflows to update and rotate the secrets  and then re deploy the application to use the new version  which can add complexity and become quite time consuming  Ensure consideration is given to all requirements for secret freshness  updates  and rotation as part of your design process     The secret access management security model is tied to the Kubernetes RBAC model  This model can be challenging for users who are not familiar with Kubernetes  Adopting a platform agnostic security governance model can enable you to adapt workflows for applications regardless of how and where they are running      Summary  Designing secrets management in Kubernetes is an intricate task  There are multiple approaches  each with its own set of attributes  We recommend exploring the options presented in this document to increase your 
understanding of the internals and decide on the best option for your use case      Additional resources     HashiCorp Vault  Delivering Secrets with Kubernetes  https   medium com hashicorp engineering hashicorp vault delivering secrets with kubernetes 1b358c03b2a3      Retrieve HashiCorp Vault Secrets with Kubernetes CSI  https   www hashicorp com blog retrieve hashicorp vault secrets with kubernetes csi      Mount Vault Secrets Through Container Storage Interface  CSI  Volume   vault tutorials kubernetes kubernetes secret store driver      Injecting Secrets into Kubernetes Pods via Vault Agent Containers   vault tutorials kubernetes kubernetes sidecar      Vault Sidecar Injector Configurations and Examples   vault docs platform k8s injector annotations      Vault CSI Driver Configurations and Examples   vault docs platform k8s csi configurations "}
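The sidecar workflow described above is driven entirely by pod annotations. As a minimal sketch (the annotation names are the documented `vault.hashicorp.com/*` set from /vault/docs/platform/k8s/injector/annotations; the role `app` and the secret path `database/creds/db-app` are illustrative placeholders, not values from this document), a deployment that asks the injector for a secret might annotate its pod template like this:

```yaml
# Sketch only: annotation names per /vault/docs/platform/k8s/injector/annotations;
# the role name and secret path below are illustrative.
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "app"
        # Rendered to /vault/secrets/db-creds on the shared in-memory volume
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/db-app"
```

The mutating webhook intercepts the pod event, injects the Vault Agent container, and the application simply reads the rendered file; no Vault awareness is required in the application image.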
{"questions":"vault The following examples demonstrate how the Vault CSI Provider can be used page title Vault CSI Provider Examples This section documents examples of using the Vault CSI Provider layout docs Vault CSI provider examples","answers":"---\nlayout: docs\npage_title: Vault CSI Provider Examples\ndescription: This section documents examples of using the Vault CSI Provider.\n---\n\n# Vault CSI provider examples\n\nThe following examples demonstrate how the Vault CSI Provider can be used.\n\n~> A common mistake is to not install the CSI Secret Store Driver before using the Vault CSI Provider.\n\n## File based dynamic database credentials\n\nThe following Secret Provider Class retrieves dynamic database credentials from Vault and\nextracts the generated username and password. The secrets are then mounted as files in the\nconfigured mount location.\n\n```yaml\n---\napiVersion: secrets-store.csi.x-k8s.io\/v1alpha1\nkind: SecretProviderClass\nmetadata:\n  name: vault-db-creds\nspec:\n  provider: vault\n  parameters:\n    roleName: 'app'\n    objects: |\n      - objectName: \"dbUsername\"\n        secretPath: \"database\/creds\/db-app\"\n        secretKey: \"username\"\n      - objectName: \"dbPassword\"\n        secretPath: \"database\/creds\/db-app\"\n        secretKey: \"password\"\n```\n\nNext, a pod can be created to use this Secret Provider Class to populate the secrets in the pod:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: app\n  labels:\n    app: demo\nspec:\n  selector:\n    matchLabels:\n      app: demo\n  replicas: 1\n  template:\n    metadata:\n      annotations:\n      labels:\n        app: demo\n    spec:\n      serviceAccountName: app\n      containers:\n        - name: app\n          image: my-app:1.0.0\n          volumeMounts:\n            - name: 'vault-db-creds'\n              mountPath: '\/mnt\/secrets-store'\n              readOnly: true\n      volumes:\n        - name: vault-db-creds\n          csi:\n            
driver: 'secrets-store.csi.k8s.io'\n            readOnly: true\n            volumeAttributes:\n              secretProviderClass: 'vault-db-creds'\n```\n\nThe pod mounts a CSI volume and specifies the Secret Provider Class (`vault-db-creds`) created above.\nThe secrets created from that provider class are mounted to `\/mnt\/secrets-store`. When this pod is\ncreated the containers will find two files containing secrets:\n\n- `\/mnt\/secrets-store\/dbUsername`\n- `\/mnt\/secrets-store\/dbPassword`\n\n## Environment variable dynamic database credentials\n\nThe following Secret Provider Class retrieves dynamic database credentials from Vault and\nextracts the generated username and password. The secrets are then synced to Kubernetes secrets\nso that they can be mounted as environment variables in the containers.\n\n```yaml\n---\napiVersion: secrets-store.csi.x-k8s.io\/v1alpha1\nkind: SecretProviderClass\nmetadata:\n  name: vault-db-creds\nspec:\n  provider: vault\n  secretObjects:\n    - secretName: vault-db-creds-secret\n      type: Opaque\n      data:\n        - objectName: dbUsername # References dbUsername below\n          key: username # Key within k8s secret for this value\n        - objectName: dbPassword\n          key: password\n  parameters:\n    roleName: 'app'\n    objects: |\n      - objectName: \"dbUsername\"\n        secretPath: \"database\/creds\/db-app\"\n        secretKey: \"username\"\n      - objectName: \"dbPassword\"\n        secretPath: \"database\/creds\/db-app\"\n        secretKey: \"password\"\n```\n\nNext, a pod can be created which uses this Secret Provider Class to populate the secrets in the pod's environment:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: app\n  labels:\n    app: demo\nspec:\n  selector:\n    matchLabels:\n      app: demo\n  replicas: 1\n  template:\n    metadata:\n      annotations:\n      labels:\n        app: demo\n    spec:\n      serviceAccountName: app\n      containers:\n        - name: 
app\n          image: my-app:1.0.0\n          env:\n            - name: DB_USERNAME\n              valueFrom:\n                secretKeyRef:\n                  name: vault-db-creds-secret\n                  key: username\n            - name: DB_PASSWORD\n              valueFrom:\n                secretKeyRef:\n                  name: vault-db-creds-secret\n                  key: password\n          volumeMounts:\n            - name: 'vault-db-creds'\n              mountPath: '\/mnt\/secrets-store'\n              readOnly: true\n      volumes:\n        - name: vault-db-creds\n          csi:\n            driver: 'secrets-store.csi.k8s.io'\n            readOnly: true\n            volumeAttributes:\n              secretProviderClass: 'vault-db-creds'\n```\n\nThe pod mounts a CSI volume and specifies the Secret Provider Class (`vault-db-creds`) created above.\nThe secrets created from that provider class are mounted to `\/mnt\/secrets-store`, additionally a Kubernetes\nsecret called `vault-db-creds-secret` is created and referenced in two environment variables.","site":"vault","answers_cleaned":"    layout  docs page title  Vault CSI Provider Examples description  This section documents examples of using the Vault CSI Provider         Vault CSI provider examples  The following examples demonstrate how the Vault CSI Provider can be used      A common mistake is to not install the CSI Secret Store Driver before using the Vault CSI Provider      File based dynamic database credentials  The following Secret Provider Class retrieves dynamic database credentials from Vault and extracts the generated username and password  The secrets are then mounted as files in the configured mount location      yaml     apiVersion  secrets store csi x k8s io v1alpha1 kind  SecretProviderClass metadata    name  vault db creds spec    provider  vault   parameters      roleName   app      objects            objectName   dbUsername          secretPath   database creds db app          secretKey   
username          objectName   dbPassword          secretPath   database creds db app          secretKey   password       Next  a pod can be created to use this Secret Provider Class to populate the secrets in the pod      yaml apiVersion  apps v1 kind  Deployment metadata    name  app   labels      app  demo spec    selector      matchLabels        app  demo   replicas  1   template      metadata        annotations        labels          app  demo     spec        serviceAccountName  app       containers            name  app           image  my app 1 0 0           volumeMounts                name   vault db creds                mountPath    mnt secrets store                readOnly  true       volumes            name  vault db creds           csi              driver   secrets store csi k8s io              readOnly  true             volumeAttributes                secretProviderClass   vault db creds       The pod mounts a CSI volume and specifies the Secret Provider Class   vault db creds   created above  The secrets created from that provider class are mounted to   mnt secrets store   When this pod is created the containers will find two files containing secrets       mnt secrets store dbUsername      mnt secrets store dbPassword      Environment variable dynamic database credentials  The following Secret Provider Class retrieves dynamic database credentials from Vault and extracts the generated username and password  The secrets are then synced to Kubernetes secrets so that they can be mounted as environment variables in the containers      yaml     apiVersion  secrets store csi x k8s io v1alpha1 kind  SecretProviderClass metadata    name  vault db creds spec    provider  vault   secretObjects        secretName  vault db creds secret       type  Opaque       data            objectName  dbUsername   References dbUsername below           key  username   Key within k8s secret for this value           objectName  dbPassword           key  password   parameters      
roleName   app      objects            objectName   dbUsername          secretPath   database creds db app          secretKey   username          objectName   dbPassword          secretPath   database creds db app          secretKey   password       Next  a pod can be created which uses this Secret Provider Class to populate the secrets in the pod s environment      yaml apiVersion  apps v1 kind  Deployment metadata    name  app   labels      app  demo spec    selector      matchLabels        app  demo   replicas  1   template      metadata        annotations        labels          app  demo     spec        serviceAccountName  app       containers            name  app           image  my app 1 0 0           env                name  DB USERNAME               valueFrom                  secretKeyRef                    name  vault db creds secret                   key  username               name  DB PASSWORD               valueFrom                  secretKeyRef                    name  vault db creds secret                   key  password           volumeMounts                name   vault db creds                mountPath    mnt secrets store                readOnly  true       volumes            name  vault db creds           csi              driver   secrets store csi k8s io              readOnly  true             volumeAttributes                secretProviderClass   vault db creds       The pod mounts a CSI volume and specifies the Secret Provider Class   vault db creds   created above  The secrets created from that provider class are mounted to   mnt secrets store   additionally a Kubernetes secret called  vault db creds  is created and referenced in two environment variables "}
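On the consumption side of the file-based example, the application just reads ordinary files from the mount path, one file per `objectName`. A hedged Python sketch (the `load_db_creds` helper and the temporary directory standing in for `/mnt/secrets-store` are illustrative, not part of the Vault docs):

```python
import tempfile
from pathlib import Path

def load_db_creds(secrets_dir: str) -> dict:
    """Read the files the CSI driver mounts (one file per objectName)."""
    base = Path(secrets_dir)
    return {
        "username": (base / "dbUsername").read_text(),
        "password": (base / "dbPassword").read_text(),
    }

# Stand-in for /mnt/secrets-store inside a real pod:
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "dbUsername").write_text("v-kubernetes-app-123")
    (Path(d) / "dbPassword").write_text("A1a-hunter2")
    creds = load_db_creds(d)
    print(creds["username"])
```

Because dynamic database credentials are leased, an application reading the files this way should re-read them rather than cache them for the life of the process.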
{"questions":"vault The Vault CSI Provider allows pods to consume Vault secrets using Vault CSI provider layout docs The Vault CSI Provider allows pods to consume Vault secrets using CSI volumes page title Vault CSI Provider","answers":"---\nlayout: docs\npage_title: Vault CSI Provider\ndescription: >-\n  The Vault CSI Provider allows pods to consume Vault secrets using CSI volumes.\n---\n\n# Vault CSI provider\n\nThe Vault CSI Provider allows pods to consume Vault secrets using\n[CSI Secrets Store](https:\/\/github.com\/kubernetes-sigs\/secrets-store-csi-driver) volumes.\n\n~> The Vault CSI Provider requires the [CSI Secret Store](https:\/\/github.com\/kubernetes-sigs\/secrets-store-csi-driver)\nDriver to be installed.\n\n## Overview\n\nAt a high level, the CSI Secrets Store driver allows users to create `SecretProviderClass` objects.\nThis object defines which secret provider to use and what secrets to retrieve. When pods requesting CSI volumes\nare created, the CSI Secrets Store driver will send the request to the Vault CSI Provider if the provider\nis `vault`. 
The Vault CSI Provider will then use the Secret Provider Class specified and the pod's service account to retrieve\nthe secrets from Vault, and mount them into the pod's CSI volume.\n\nThe secret is retrieved from Vault and populated to the CSI secrets store volume during the `ContainerCreation` phase.\nThis means that pods will be blocked from starting until the secrets have been read from Vault and written to the volume.\n\n### Features\n\nThe following features are supported by the Vault CSI Provider:\n\n- All Vault secret engines supported.\n- Authentication using the requesting pod's service account.\n- TLS\/mTLS communications with Vault.\n- Rendering Vault secrets to files.\n- Dynamic lease caching and renewal performed by Agent.\n- Syncing secrets to Kubernetes secrets to be used as environment variables.\n- Installation via [Vault Helm](\/vault\/docs\/platform\/k8s\/helm)\n\n@include 'kubernetes-supported-versions.mdx'\n\n## Authenticating with Vault\n\nThe Vault CSI Provider will authenticate with Vault as the service account of the\npod that mounts the CSI volume. [Kubernetes](\/vault\/docs\/auth\/kubernetes) and\n[JWT](\/vault\/docs\/auth\/jwt) auth methods are supported. 
The pod's service account\nmust be bound to a Vault role and a policy granting access to the secrets desired.\n\nIt is highly recommended to run pods with dedicated Kubernetes service accounts to\nensure applications cannot access more secrets than they require.\n\n## Secret provider class example\n\nThe following is an example of a Secret Provider Class using the `vault` provider:\n\n```yaml\n---\napiVersion: secrets-store.csi.x-k8s.io\/v1alpha1\nkind: SecretProviderClass\nmetadata:\n  name: vault-db-creds\nspec:\n  # Vault CSI Provider\n  provider: vault\n  parameters:\n    # Vault role name to use during login\n    roleName: 'app'\n    # Vault address and TLS connection config is normally best configured by the\n    # helm chart, but can be overridden per SecretProviderClass:\n    # Vault's hostname\n    #vaultAddress: 'https:\/\/vault:8200'\n    # TLS CA certificate for validation\n    #vaultCACertPath: '\/vault\/tls\/ca.crt'\n    objects: |\n      - objectName: \"dbUsername\"\n        secretPath: \"database\/creds\/db-app\"\n        secretKey: \"username\"\n      - objectName: \"dbPassword\"\n        secretPath: \"database\/creds\/db-app\"\n        secretKey: \"password\"\n    # \"objectName\" is an alias used within the SecretProviderClass to reference\n    # that specific secret. 
This will also be the filename containing the secret.\n    # \"secretPath\" is the path in Vault where the secret should be retrieved.\n    # \"secretKey\" is the key within the Vault secret response to extract a value from.\n```\n\n~> Secret Provider Class is a namespaced object in Kubernetes.\n\n## Using secret provider classes\n\nAn application pod uses the example Secret Provider Class above by mounting it as a CSI volume:\n\n```yaml\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: app\n  labels:\n    app: demo\nspec:\n  selector:\n    matchLabels:\n      app: demo\n  replicas: 1\n  template:\n    spec:\n      serviceAccountName: app\n      containers:\n        - name: app\n          image: my-app:1.0.0\n          volumeMounts:\n            - name: 'vault-db-creds'\n              mountPath: '\/mnt\/secrets-store'\n              readOnly: true\n      volumes:\n        - name: vault-db-creds\n          csi:\n            driver: 'secrets-store.csi.k8s.io'\n            readOnly: true\n            volumeAttributes:\n              secretProviderClass: 'vault-db-creds'\n```\n\nIn this example `volumes.csi` is created on the application deployment and references\nthe Secret Provider Class named `vault-db-creds`.\n\n## Tutorial\n\nRefer to the [Vault CSI Provider](\/vault\/tutorials\/kubernetes\/kubernetes-secret-store-driver)\ntutorial to learn how to set up Vault and its dependencies with a Helm chart.","site":"vault","answers_cleaned":"    layout  docs page title  Vault CSI Provider description       The Vault CSI Provider allows pods to consume Vault secrets using CSI volumes         Vault CSI provider  The Vault CSI Provider allows pods to consume Vault secrets using  CSI Secrets Store  https   github com kubernetes sigs secrets store csi driver  volumes      The Vault CSI Provider requires the  CSI Secret Store  https   github com kubernetes sigs secrets store csi driver  Driver to be installed      Overview  At a high level  the CSI Secrets Store 
driver allows users to create  SecretProviderClass  objects  This object defines which secret provider to use and what secrets to retrieve  When pods requesting CSI volumes are created  the CSI Secrets Store driver will send the request to the Vault CSI Provider if the provider is  vault   The Vault CSI Provider will then use Secret Provider Class specified and the pod s service account to retrieve the secrets from Vault  and mount them into the pod s CSI volume   The secret is retrieved from Vault and populated to the CSI secrets store volume during the  ContainerCreation  phase  This means that pods will be blocked from starting until the secrets have been read from Vault and written to the volume       Features  The following features are supported by the Vault CSI Provider     All Vault secret engines supported    Authentication using the requesting pod s service account    TLS mTLS communications with Vault    Rendering Vault secrets to files    Dynamic lease caching and renewal performed by Agent    Syncing secrets to Kubernetes secrets to be used as environment variables    Installation via  Vault Helm   vault docs platform k8s helm    include  kubernetes supported versions mdx      Authenticating with Vault  The Vault CSI Provider will authenticate with Vault as the service account of the pod that mounts the CSI volume   Kubernetes   vault docs auth kubernetes  and  JWT   vault docs auth jwt  auth methods are supported  The pod s service account must be bound to a Vault role and a policy granting access to the secrets desired   It is highly recommended to run pods with dedicated Kubernetes service accounts to ensure applications cannot access more secrets than they require      Secret provider class example  The following is an example of a Secret Provider Class using the  vault  provider      yaml     apiVersion  secrets store csi x k8s io v1alpha1 kind  SecretProviderClass metadata    name  vault db creds spec      Vault CSI Provider   provider  vault   
parameters        Vault role name to use during login     roleName   app        Vault address and TLS connection config is normally best configured by the       helm chart  but can be overridden per SecretProviderClass        Vault s hostname      vaultAddress   https   vault 8200        TLS CA certification for validation      vaultCACertPath    vault tls ca crt      objects            objectName   dbUsername          secretPath   database creds db app          secretKey   username          objectName   dbPassword          secretPath   database creds db app          secretKey   password         objectName  is an alias used within the SecretProviderClass to reference       that specific secret  This will also be the filename containing the secret         secretPath  is the path in Vault where the secret should be retrieved         secretKey  is the key within the Vault secret response to extract a value from          Secret Provider Class is a namespaced object in Kubernetes      Using secret provider classes  An application pod uses the example Secret Provider Class above by mounting it as a CSI volume      yaml     apiVersion  apps v1 kind  Deployment metadata    name  app   labels      app  demo spec    selector      matchLabels        app  demo   replicas  1   template      spec        serviceAccountName  app       containers            name  app           image  my app 1 0 0           volumeMounts                name   vault db creds                mountPath    mnt secrets store                readOnly  true       volumes            name  vault db creds           csi              driver   secrets store csi k8s io              readOnly  true             volumeAttributes                secretProviderClass   vault db creds       In this example  volumes csi  is created on the application deployment and references the Secret Provider Class named  vault db creds       Tutorial  Refer to the  Vault CSI Provider   vault tutorials kubernetes kubernetes secret store 
driver  tutorial to learn how to set up Vault and its dependencies with a Helm chart "}
{"questions":"vault This section documents the configurables for the Vault CSI Provider page title Vault CSI Provider Configurations Most settings support being set by in ascending order of precedence The following command line arguments are supported by the Vault CSI provider layout docs Command line arguments","answers":"---\nlayout: docs\npage_title: Vault CSI Provider Configurations\ndescription: This section documents the configurables for the Vault CSI Provider.\n---\n\n# Command line arguments\n\nThe following command line arguments are supported by the Vault CSI provider.\nMost settings support being set by, in ascending order of precedence:\n\n- Environment variables\n- Command line arguments\n- Secret Provider Class parameters\n\nIf installing via the helm chart, they can be set using e.g.\n`--set \"csi.extraArgs={-debug=true}\"`.\n\n- `-cache-size` `(int: 1000)` - Set the maximum number of Vault tokens that will\n  be cached in-memory. One Vault token will be stored for each pod on the same\n  node that mounts secrets. Setting to 0 will disable the cache and force each\n  volume mount request to reauthenticate to Vault.\n\n- `-debug` `(bool: false)` - Set to true to enable debug level logging.\n\n- `-endpoint` `(string: \"\/tmp\/vault.sock\")` - Path to unix socket on which the\n  provider will listen for gRPC calls from the driver.\n\n- `-health-addr` `(string: \":8080\")` - The address of the HTTP listener\n  for reporting health.\n\n- `-hmac-secret-name` `(string: \"vault-csi-provider-hmac-key\")` - Configure the\n  Kubernetes secret name that the provider creates to store an HMAC key for\n  generating secret version hashes.\n\n- `-vault-addr` `(string: \"https:\/\/127.0.0.1:8200\")` - Default address\n  for connecting to Vault. Can also be specified via the `VAULT_ADDR` environment\n  variable. **Note:** It is highly recommended to only set the Vault address when\n  installing the helm chart. 
The helm chart will install Vault Agent as a sidecar\n  to the Vault CSI Provider for caching and renewals, but setting `-vault-addr`\n  here will cause the Vault CSI Provider to bypass the Agent's cache.\n\n- `-vault-mount` `(string: \"kubernetes\")` - Default Vault mount path\n  for Kubernetes authentication. Can be overridden per Secret Provider Class\n  object.\n\n- `-vault-namespace` `(string: \"\")` - (v1.1.0+) Default Vault namespace for Vault\n  requests. Can also be specified via the `VAULT_NAMESPACE` environment variable.\n\n- `-vault-tls-ca-cert` `(string: \"\")` - (v1.1.0+) Path on disk to a single\n  PEM-encoded CA certificate to trust for Vault. Takes precedence over\n  `-vault-tls-ca-directory`. Can also be specified via the `VAULT_CACERT`\n  environment variable.\n\n- `-vault-tls-ca-directory` `(string: \"\")` - (v1.1.0+) Path on disk to a\n  directory of PEM-encoded CA certificates to trust for Vault. Can also be\n  specified via the `VAULT_CAPATH` environment variable.\n\n- `-vault-tls-server-name` `(string: \"\")` - (v1.1.0+) Name to use as the SNI\n  host when connecting to Vault via TLS. Can also be specified via the\n  `VAULT_TLS_SERVER_NAME` environment variable.\n\n- `-vault-tls-client-cert` `(string: \"\")` - (v1.1.0+) Path on disk to a\n  PEM-encoded client certificate for mTLS communication with Vault. If set,\n  also requires `-vault-tls-client-key`. Can also be specified via the\n  `VAULT_CLIENT_CERT` environment variable.\n\n- `-vault-tls-client-key` `(string: \"\")` - (v1.1.0+) Path on disk to a\n  PEM-encoded client key for mTLS communication with Vault. If set, also\n  requires `-vault-tls-client-cert`. Can also be specified via the\n  `VAULT_CLIENT_KEY` environment variable.\n\n- `-vault-tls-skip-verify` `(bool: false)` - (v1.1.0+) Disable verification of\n  TLS certificates. 
Can also be specified via the `VAULT_SKIP_VERIFY` environment\n  variable.\n\n- `-version` `(bool: false)` - print version information and exit.\n\n\n# Secret provider class parameters\n\nThe following parameters are supported by the Vault provider. Each parameter is\nan entry under `spec.parameters` in a SecretProviderClass object. The full\nstructure is illustrated in the [examples](\/vault\/docs\/platform\/k8s\/csi\/examples).\n\n- `roleName` `(string: \"\")` - Name of the role to be used during login with Vault.\n\n- `vaultAddress` `(string: \"\")` - The address of the Vault server. **Note:** It is\n  highly recommended to only set the Vault address when installing the helm chart.\n  The helm chart will install Vault Agent as a sidecar to the Vault CSI Provider\n  for caching and renewals, but setting `vaultAddress` here will cause the Vault\n  CSI Provider to bypass the Agent's cache.\n\n- `vaultNamespace` `(string: \"\")` - The Vault [namespace](\/vault\/docs\/enterprise\/namespaces) to use.\n\n- `vaultSkipTLSVerify` `(string: \"false\")` - When set to true, skips verification of the Vault server\n  certificate. 
Setting this to true is not recommended for production.\n\n- `vaultCACertPath` `(string: \"\")` - The path on disk where the Vault CA certificate can be found\n  when verifying the Vault server certificate.\n\n- `vaultCADirectory` `(string: \"\")` - The directory on disk where the Vault CA certificate can be found\n  when verifying the Vault server certificate.\n\n- `vaultTLSClientCertPath` `(string: \"\")` - The path on disk where the client certificate can be found\n  for mTLS communications with Vault.\n\n- `vaultTLSClientKeyPath` `(string: \"\")` - The path on disk where the client key can be found\n  for mTLS communications with Vault.\n\n- `vaultTLSServerName` `(string: \"\")` - The name to use as the SNI host when connecting via TLS.\n\n- `vaultAuthMountPath` `(string: \"kubernetes\")` - The name of the auth mount used for login.\n  Can be a Kubernetes or JWT auth mount. Mutually exclusive with `vaultKubernetesMountPath`.\n\n- `vaultKubernetesMountPath` `(string: \"kubernetes\")` - The name of the auth mount used for login.\n  Can be a Kubernetes or JWT auth mount. Mutually exclusive with `vaultAuthMountPath`.\n\n- `audience` `(string: \"\")` - Specifies a custom audience for the requesting pod's service account token,\n  generated using the\n  [TokenRequest API](https:\/\/kubernetes.io\/docs\/reference\/kubernetes-api\/authentication-resources\/token-request-v1\/#TokenRequestSpec).\n  The resulting token is used to authenticate to Vault, so if you specify an\n  [audience](\/vault\/api-docs\/auth\/kubernetes#audience) for your Kubernetes auth\n  role, it must match the audience specified here. 
If not set, the token audiences will default to\n  the Kubernetes cluster's default API audiences.\n\n- `objects` `(array)` - An array of secrets to retrieve from Vault.\n\n  - `objectName` `(string: \"\")` - The alias of the object which can be referenced within the secret provider class and\n  the name of the secret file.\n\n  - `method` `(string: \"GET\")` - The type of HTTP request. Supported values include \"GET\" and \"PUT\".\n\n  - `secretPath` `(string: \"\")` - The path in Vault where the secret is located.\n    For secrets that are retrieved via HTTP GET method, the `secretPath` can include optional URI parameters,\n    for example, the [version of the KV2 secret](\/vault\/api-docs\/secret\/kv\/kv-v2#read-secret-version):\n\n    ```yaml\n    objects: |\n      - objectName: \"app-secret\"\n        secretPath: \"secret\/data\/test?version=1\"\n        secretKey: \"password\"\n    ```\n\n  - `secretKey` `(string: \"\")` - The key in the Vault secret to extract. If omitted, the whole response from Vault will be written as JSON.\n\n  - `filePermission` `(integer: 0o644)` - The file permissions to set for this secret's file.\n\n  - `encoding` `(string: \"utf-8\")` - The encoding of the secret value. Supports decoding `utf-8` (default), `hex`, and `base64` values.\n\n  - `secretArgs` `(map: {})` - Additional arguments to be sent to Vault for a specific secret. Arguments can vary\n    for different secret engines. For example:\n\n    ```yaml\n    secretArgs:\n      common_name: 'test.example.com'\n      ttl: '24h'\n    ```\n\n    ~> `secretArgs` are sent as part of the HTTP request body. 
Therefore, they are only effective for HTTP PUT\/POST requests, for instance,\n        the [request used to generate a new certificate](\/vault\/api-docs\/secret\/pki#generate-certificate).\n        To supply additional parameters for secrets retrieved via HTTP GET, include optional URI parameters in [`secretPath`](#secretpath).","site":"vault","answers_cleaned":"    layout  docs page title  Vault CSI Provider Configurations description  This section documents the configurables for the Vault CSI Provider         Command line arguments  The following command line arguments are supported by the Vault CSI provider  Most settings support being set by  in ascending order of precedence     Environment variables   Command line arguments   Secret Provider Class parameters  If installing via the helm chart  they can be set using e g     set  csi extraArgs   debug true          cache size    int  1000     Set the maximum number of Vault tokens that will   be cached in memory  One Vault token will be stored for each pod on the same   node that mounts secrets  Setting to 0 will disable the cache and force each   volume mount request to reauthenticate to Vault       debug    bool  false     Set to true to enable debug level logging       endpoint    string    tmp vault sock      Path to unix socket on which the   provider will listen for gRPC calls from the driver       health addr    string    8080      The address of the HTTP listener   for reporting health       hmac secret name    string   vault csi provider hmac key      Configure the   Kubernetes secret name that the provider creates to store an HMAC key for   generating secret version hashes       vault addr    string   https   127 0 0 1 8200      Default address   for connecting to Vault  Can also be specified via the  VAULT ADDR  environment   variable    Note    It is highly recommended to only set the Vault address when   installing the helm chart  The helm chart will install Vault Agent as a sidecar   to the Vault 
CSI Provider for caching and renewals  but setting   vault addr    here will cause the Vault CSI Provider to bypass the Agent s cache       vault mount    string   kubernetes      Default Vault mount path   for Kubernetes authentication  Can be overridden per Secret Provider Class   object       vault namespace    string          v1 1 0   Default Vault namespace for Vault   requests  Can also be specified via the  VAULT NAMESPACE  environment variable       vault tls ca cert    string          v1 1 0   Path on disk to a single   PEM encoded CA certificate to trust for Vault  Takes precedence over     vault tls ca directory   Can also be specified via the  VAULT CACERT    environment variable       vault tls ca directory    string          v1 1 0   Path on disk to a   directory of PEM encoded CA certificates to trust for Vault  Can also be   specified via the  VAULT CAPATH  environment variable       vault tls server name    string          v1 1 0   Name to use as the SNI   host when connecting to Vault via TLS  Can also be specified via the    VAULT TLS SERVER NAME  environment variable       vault tls client cert    string          v1 1 0   Path on disk to a   PEM encoded client certificate for mTLS communication with Vault  If set    also requires   vault tls client key   Can also be specified via the    VAULT CLIENT CERT  environment variable       vault tls client key    string          v1 1 0   Path on disk to a   PEM encoded client key for mTLS communication with Vault  If set  also   requires   vault tls client cert   Can also be specified via the    VAULT CLIENT KEY  environment variable       vault tls skip verify    bool  false      v1 1 0   Disable verification of   TLS certificates  Can also be specified via the  VAULT SKIP VERIFY  environment   variable       version    bool  false     print version information and exit      Secret provider class parameters  The following parameters are supported by the Vault provider  Each parameter is an entry under  
spec parameters  in a SecretProviderClass object  The full structure is illustrated in the  examples   vault docs platform k8s csi examples       roleName    string         Name of the role to be used during login with Vault      vaultAddress    string         The address of the Vault server    Note    It is   highly recommended to only set the Vault address when installing the helm chart    The helm chart will install Vault Agent as a sidecar to the Vault CSI Provider   for caching and renewals  but setting  vaultAddress  here will cause the Vault   CSI Provider to bypass the Agent s cache      vaultNamespace    string         The Vault  namespace   vault docs enterprise namespaces  to use      vaultSkipTLSVerify    string   false      When set to true  skips verification of the Vault server   certificate  Setting this to true is not recommended for production      vaultCACertPath    string         The path on disk where the Vault CA certificate can be found   when verifying the Vault server certificate      vaultCADirectory    string         The directory on disk where the Vault CA certificate can be found   when verifying the Vault server certificate      vaultTLSClientCertPath    string         The path on disk where the client certificate can be found   for mTLS communications with Vault      vaultTLSClientKeyPath    string         The path on disk where the client key can be found   for mTLS communications with Vault      vaultTLSServerName    string         The name to use as the SNI host when connecting via TLS      vaultAuthMountPath    string   kubernetes      The name of the auth mount used for login    Can be a Kubernetes or JWT auth mount  Mutually exclusive with  vaultKubernetesMountPath       vaultKubernetesMountPath    string   kubernetes      The name of the auth mount used for login    Can be a Kubernetes or JWT auth mount  Mutually exclusive with  vaultAuthMountPath       audience    string         Specifies a custom audience for the requesting 
pod s service account token    generated using the    TokenRequest API  https   kubernetes io docs reference kubernetes api authentication resources token request v1  TokenRequestSpec     The resulting token is used to authenticate to Vault  so if you specify an    audience   vault api docs auth kubernetes audience  for your Kubernetes auth   role  it must match the audience specified here  If not set  the token audiences will default to   the Kubernetes cluster s default API audiences      objects    array     An array of secrets to retrieve from Vault        objectName    string         The alias of the object which can be referenced within the secret provider class and   the name of the secret file        method    string   GET      The type of HTTP request  Supported values include  GET  and  PUT         secretPath    string         The path in Vault where the secret is located      For secrets that are retrieved via HTTP GET method  the  secretPath  can include optional URI parameters      for example  the  version of the KV2 secret   vault api docs secret kv kv v2 read secret version           yaml     objects            objectName   app secret          secretPath   secret data test version 1          secretKey   password                secretKey    string         The key in the Vault secret to extract  If omitted  the whole response from Vault will be written as JSON        filePermission    integer  0o644     The file permissions to set for this secret s file        encoding    string   utf 8      The encoding of the secret value  Supports decoding  utf 8   default    hex   and  base64  values        secretArgs    map         Additional arguments to be sent to Vault for a specific secret  Arguments can vary     for different secret engines  For example          yaml     secretArgs        common name   test example com        ttl   24h                   secretArgs  are sent as part of the HTTP request body  Therefore  they are only effective for HTTP PUT 
POST requests  for instance          the  request used to generate a new certificate   vault api docs secret pki generate certificate           To supply additional parameters for secrets retrieved via HTTP GET  include optional URI parameters in   secretPath    secretpath  "}
{"questions":"vault The Vault CSI Provider can be installed using Vault Helm Installing the Vault CSI provider layout docs Prerequisites page title Vault CSI Provider Installation","answers":"---\nlayout: docs\npage_title: Vault CSI Provider Installation\ndescription: The Vault CSI Provider can be installed using Vault Helm.\n---\n\n# Installing the Vault CSI provider\n\n## Prerequisites\n\n- Kubernetes 1.16+ for both the master and worker nodes (Linux-only)\n- [Secrets store CSI driver](https:\/\/secrets-store-csi-driver.sigs.k8s.io\/getting-started\/installation.html) installed\n- `TokenRequest` endpoint available, which requires setting the flags\n  `--service-account-signing-key-file` and `--service-account-issuer` for\n  `kube-apiserver`. Set by default from Kubernetes 1.20+, and in earlier versions on most managed services.\n\n## Installation using helm\n\nThe [Vault Helm chart](\/vault\/docs\/platform\/k8s\/helm) is the recommended way to\ninstall and configure the Vault CSI Provider in Kubernetes.\n\nTo install a new instance of Vault and the Vault CSI Provider, first add the\nHashiCorp helm repository and ensure you have access to the chart:\n\n~> **Note:** Vault CSI Provider Helm installation requires Vault Helm 0.10.0+.\n\n@include 'helm\/repo.mdx'\n\nThen install the chart and enable the CSI feature by setting the\n`csi.enabled` value to `true`:\n\n~> **Note:** this will also install the Vault server and Agent Injector.\n\n```shell-session\n$ helm install vault hashicorp\/vault --set=\"csi.enabled=true\"\n```\n\nUpgrades may be performed with `helm upgrade` on an existing installation. Please\nalways run Helm with `--dry-run` before any install or upgrade to verify\nchanges.\n\nYou can see all the available values settings by running `helm inspect values hashicorp\/vault` or by reading the [Vault Helm Configuration\nDocs](\/vault\/docs\/platform\/k8s\/helm\/configuration). 
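For example, a value override can be previewed before it is applied by combining `helm upgrade` with `--dry-run`; this sketch assumes the `vault` release name from the install example above, and the `csi.extraArgs` override is purely illustrative:\n\n```shell-session\n$ helm upgrade vault hashicorp\/vault \\\n    --set=\"csi.enabled=true\" \\\n    --set=\"csi.extraArgs={-debug=true}\" \\\n    --dry-run\n```\n\n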
Commonly used values in the Helm\nchart include limiting the namespaces the Vault CSI Provider runs in, TLS options and\nmore.\n\n## Installation on OpenShift\n\nWe recommend using the [Vault agent injector on Openshift](\/vault\/docs\/platform\/k8s\/helm\/openshift)\ninstead of the Secrets Store CSI driver. OpenShift\n[does not recommend](https:\/\/docs.openshift.com\/container-platform\/4.9\/storage\/persistent_storage\/persistent-storage-hostpath.html)\nusing `hostPath` mounting in production or\n[certify Helm charts](https:\/\/github.com\/redhat-certification\/chart-verifier\/blob\/dbf89bff2d09142e4709d689a9f4037a739c2244\/docs\/helm-chart-checks.md#table-2-helm-chart-default-checks)\nusing CSI objects because pods must run as privileged. Pods will have elevated access to\nother pods on the same node, which OpenShift does not recommend.\n\nYou can run the Secrets Store CSI driver with additional\nsecurity configurations on a OpenShift development\nor testing cluster.\n\nDeploy the Secrets Store CSI driver and Vault Helm chart\nto your OpenShift cluster.\n\nThen, patch the `DaemonSet` for the Vault CSI provider to\nrun with a privileged security context.\n\n```shell-session\n$ kubectl patch daemonset vault-csi-provider \\\n  --type='json' \\\n  --patch='[{\"op\": \"add\", \"path\": \"\/spec\/template\/spec\/containers\/0\/securityContext\", \"value\": {\"privileged\": true} }]'\n```\n\nThe Secrets Store CSI driver and Vault CSI provider need `hostPath` mount access.\nAdd the service account for the Secrets Store CSI driver to the `privileged`\n[security context constraint](https:\/\/cloud.redhat.com\/blog\/managing-sccs-in-openshift).\n\n```shell-session\n$ oc adm policy add-scc-to-user privileged system:serviceaccount:${KUBERNETES_VAULT_NAMESPACE}:secrets-store-csi-driver\n```\n\nAdd the service account for the Vault CSI provider to the `privileged`\nsecurity context constraint.\n\n```shell-session\n$ oc adm policy add-scc-to-user privileged 
system:serviceaccount:${KUBERNETES_VAULT_NAMESPACE}:vault-csi-provider\n```\n\nYou need to give additional access to the application retrieving secrets with the Vault CSI provider.\nCreate a `SecurityContextConstraint` to `allowHostDirVolumePlugin`, `allowHostNetwork`, and\n`allowHostPorts` for the application's service account.\nYou can adjust the other attributes based on your application's runtime configuration.\n\n```shell-session\n$ cat > application-scc.yaml << EOF\napiVersion: security.openshift.io\/v1\nkind: SecurityContextConstraints\nmetadata:\n  name: vault-csi-provider\nallowPrivilegedContainer: false\nallowHostDirVolumePlugin: true\nallowHostNetwork: true\nallowHostPorts: true\nallowHostIPC: false\nallowHostPID: false\nreadOnlyRootFilesystem: false\ndefaultAddCapabilities:\n- SYS_ADMIN\nrunAsUser:\n  type: RunAsAny\nseLinuxContext:\n  type: RunAsAny\nfsGroup:\n  type: RunAsAny\nusers:\n- system:serviceaccount:${KUBERNETES_APPLICATION_NAMESPACE}:${APPLICATION_SERVICE_ACCOUNT}\nEOF\n```\n\nAdd the security context constraint for the application.\n\n```shell-session\n$ kubectl apply -f application-scc.yaml\n```","site":"vault"}
{"questions":"vault commit SHA 08a6e5071ffa4faa486bd4b2c53b27585da4680c Configuration for the Vault Secrets Operator Helm chart layout docs DO NOT EDIT page title Vault Secrets Operator Helm Chart Configuration Generated from chart values yaml in the vault secrets operator repo","answers":"---\nlayout: docs\npage_title: Vault Secrets Operator Helm Chart Configuration\ndescription: >-\n  Configuration for the Vault Secrets Operator Helm chart.\n---\n<!-- DO NOT EDIT.\nGenerated from chart\/values.yaml in the vault-secrets-operator repo.\ncommit SHA=08a6e5071ffa4faa486bd4b2c53b27585da4680c\n\nTo update run 'make gen-helm-docs' from the vault-secrets-operator repo.\n-->\n\n# Vault Secrets Operator helm chart\n\nThe chart is customizable using\n[Helm configuration values](https:\/\/helm.sh\/docs\/intro\/using_helm\/#customizing-the-chart-before-installing).\n\n<!-- codegen: start -->\n\n## Top-Level Stanzas\n\nUse these links to navigate to a particular top-level stanza.\n\n- [`controller`](#h-controller)\n- [`metricsService`](#h-metricsservice)\n- [`defaultVaultConnection`](#h-defaultvaultconnection)\n- [`defaultAuthMethod`](#h-defaultauthmethod)\n- [`telemetry`](#h-telemetry)\n- [`hooks`](#h-hooks)\n- [`tests`](#h-tests)\n\n## All Values\n\n### controller ((#h-controller))\n\n- `controller` ((#v-controller)) - Top level configuration for the vault secrets operator deployment.\n  This consists of a controller and a kube rbac proxy container.\n\n  - `replicas` ((#v-controller-replicas)) (`integer: 1`) - Set the number of replicas for the operator.\n\n  - `strategy` ((#v-controller-strategy)) (`object: \"\"`) - Configure update strategy for multi-replica deployments.\n    Kubernetes supports types Recreate, and RollingUpdate\n    ref: https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/#strategy\n    Example:\n    strategy: {}\n      rollingUpdate:\n        maxSurge: 1\n        maxUnavailable: 0\n      type: RollingUpdate\n\n  - `hostAliases` 
((#v-controller-hostaliases)) (`array<map>`) - Host Aliases settings for vault-secrets-operator pod.\n    The value is an array of PodSpec HostAlias maps.\n    ref: https:\/\/kubernetes.io\/docs\/tasks\/network\/customize-hosts-file-for-pods\/\n    Example:\n    hostAliases:\n      - ip: 192.168.1.100\n        hostnames:\n        - vault.example.com\n\n  - `nodeSelector` ((#v-controller-nodeselector)) (`map`) - nodeSelector labels for vault-secrets-operator pod assignment.\n    ref: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/assign-pod-node\/#nodeselector\n    Example:\n    nodeSelector:\n      beta.kubernetes.io\/arch: amd64\n\n  - `tolerations` ((#v-controller-tolerations)) (`array<map>`) - Toleration Settings for vault-secrets-operator pod.\n    The value is an array of PodSpec Toleration maps.\n    ref: https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/taint-and-toleration\/\n    Example:\n    tolerations:\n     - key: \"key1\"\n       operator: \"Equal\"\n       value: \"value1\"\n       effect: \"NoSchedule\"\n\n  - `affinity` ((#v-controller-affinity)) - Affinity settings for vault-secrets-operator pod.\n    The value is a map of PodSpec Affinity maps.\n    ref: https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#affinity-and-anti-affinity\n    Example:\n    affinity:\n      nodeAffinity:\n        requiredDuringSchedulingIgnoredDuringExecution:\n          nodeSelectorTerms:\n          - matchExpressions:\n            - key: topology.kubernetes.io\/zone\n              operator: In\n              values:\n              - antarctica-east1\n              - antarctica-west1\n\n  - `rbac` ((#v-controller-rbac))\n\n    - `clusterRoleAggregation` ((#v-controller-rbac-clusterroleaggregation)) - clusterRoleAggregation defines the roles included in the aggregated ClusterRole.\n\n      - `viewerRoles` ((#v-controller-rbac-clusterroleaggregation-viewerroles)) (`array<string>: []`) - viewerRoles is a list of roles that 
will be aggregated into the viewer ClusterRole.\n        The role name must be that of any VSO resource type. E.g. \"VaultAuth\", \"HCPAuth\".\n        All values are case-insensitive.\n        Specifying '*' as the first element will include all roles in the aggregation.\n\n        The ClusterRole name takes the form of `<chart-fullname>`-aggregate-role-viewer.\n\n        Example usages:\n        all roles:\n        - '*'\n        individually specified roles:\n        - \"VaultAuth\"\n        - \"HCPAuth\"\n\n      - `editorRoles` ((#v-controller-rbac-clusterroleaggregation-editorroles)) (`array<string>: []`) - editorRoles is a list of roles that will be aggregated into the editor ClusterRole.\n        The role name must be that of any VSO resource type. E.g. \"VaultAuth\", \"HCPAuth\".\n        All values are case-insensitive.\n        Specifying '*' as the first element will include all roles in the aggregation.\n\n        The ClusterRole name takes the form of `<chart-fullname>`-aggregate-role-editor.\n\n        Example usages:\n        all roles:\n        - '*'\n        individually specified roles:\n        - \"VaultAuth\"\n        - \"HCPAuth\"\n\n      - `userFacingRoles` ((#v-controller-rbac-clusterroleaggregation-userfacingroles)) (`object: \"\"`) - userFacingRoles is a map of roles that will be aggregated into the viewer and editor ClusterRoles.\n        See https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/rbac\/#user-facing-roles for more information.\n\n        - `view` ((#v-controller-rbac-clusterroleaggregation-userfacingroles-view)) (`boolean: false`) - view controls whether the aggregated viewer ClusterRole will be made available to the user-facing\n          'view' ClusterRole. 
Requires the viewerRoles to be set.\n\n        - `edit` ((#v-controller-rbac-clusterroleaggregation-userfacingroles-edit)) (`boolean: false`) - edit controls whether the aggregated editor ClusterRole will be made available to the user-facing\n          'edit' ClusterRole. Requires the editorRoles to be set.\n\n  - `kubeRbacProxy` ((#v-controller-kuberbacproxy)) - Settings related to the kubeRbacProxy container. This container is an HTTP proxy for the\n    controller manager which performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.\n\n    - `image` ((#v-controller-kuberbacproxy-image)) - Image sets the repo and tag of the kube-rbac-proxy image to use for the controller.\n\n      - `pullPolicy` ((#v-controller-kuberbacproxy-image-pullpolicy)) (`string: IfNotPresent`)\n\n      - `repository` ((#v-controller-kuberbacproxy-image-repository)) (`string: quay.io\/brancz\/kube-rbac-proxy`)\n\n      - `tag` ((#v-controller-kuberbacproxy-image-tag)) (`string: v0.18.1`)\n\n    - `resources` ((#v-controller-kuberbacproxy-resources)) (`map`) - Configures the default resources for the kube rbac proxy container.\n      For more information on configuring resources, see the K8s documentation:\n      https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n      - `limits` ((#v-controller-kuberbacproxy-resources-limits))\n\n        - `cpu` ((#v-controller-kuberbacproxy-resources-limits-cpu)) (`string: 500m`)\n\n        - `memory` ((#v-controller-kuberbacproxy-resources-limits-memory)) (`string: 128Mi`)\n\n      - `requests` ((#v-controller-kuberbacproxy-resources-requests))\n\n        - `cpu` ((#v-controller-kuberbacproxy-resources-requests-cpu)) (`string: 5m`)\n\n        - `memory` ((#v-controller-kuberbacproxy-resources-requests-memory)) (`string: 64Mi`)\n\n  - `imagePullSecrets` ((#v-controller-imagepullsecrets)) (`array<map>`) - Image pull secret to use for private container registry authentication which will be 
applied to the controller's\n    service account. Alternatively, the value may be specified as an array of strings.\n    Example:\n    ```yaml\n    imagePullSecrets:\n      - name: pull-secret-name-1\n      - name: pull-secret-name-2\n    ```\n    Refer to https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\/#using-a-private-registry.\n\n  - `extraLabels` ((#v-controller-extralabels)) - Extra labels to attach to the deployment. This should be formatted as a YAML object (map)\n\n  - `annotations` ((#v-controller-annotations)) - This value defines additional annotations for the deployment. This should be formatted as a YAML object (map)\n\n  - `manager` ((#v-controller-manager)) - Settings related to the vault-secrets-operator container.\n\n    - `image` ((#v-controller-manager-image)) - Image sets the repo and tag of the vault-secrets-operator image to use for the controller.\n\n      - `pullPolicy` ((#v-controller-manager-image-pullpolicy)) (`string: IfNotPresent`)\n\n      - `repository` ((#v-controller-manager-image-repository)) (`string: hashicorp\/vault-secrets-operator`)\n\n      - `tag` ((#v-controller-manager-image-tag)) (`string: 0.9.0`)\n\n    - `logging` ((#v-controller-manager-logging)) - Logging configuration for the operator.\n\n      - `level` ((#v-controller-manager-logging-level)) (`string: info`) - Sets the log level for the operator.\n        Built-in levels are: info, error, debug, debug-extended, trace\n        Default: info\n\n      - `timeEncoding` ((#v-controller-manager-logging-timeencoding)) (`string: rfc3339`) - Sets the time encoding for the operator.\n        Options are: epoch, millis, nano, iso8601, rfc3339, rfc3339nano\n        Default: rfc3339\n\n      - `stacktraceLevel` ((#v-controller-manager-logging-stacktracelevel)) (`string: panic`) - Sets the stacktrace level for the operator.\n        Options are: info, error, panic\n        Default: panic\n\n    - `globalTransformationOptions` ((#v-controller-manager-globaltransformationoptions)) - Global secret 
transformation options. In addition to the boolean options\n      below, these options may be set via the\n      `VSO_GLOBAL_TRANSFORMATION_OPTIONS` environment variable as a\n      comma-separated list. Valid values are: `exclude-raw`\n\n      - `excludeRaw` ((#v-controller-manager-globaltransformationoptions-excluderaw)) (`boolean: false`) - excludeRaw directs the operator to prevent `_raw` secret data from being stored\n        in the destination K8s Secret.\n\n    - `globalVaultAuthOptions` ((#v-controller-manager-globalvaultauthoptions)) - Global Vault auth options. In addition to the boolean options\n      below, these options may be set via the\n      `VSO_GLOBAL_VAULT_AUTH_OPTIONS` environment variable as a\n      comma-separated list. Valid values are: `allow-default-globals`\n\n      - `allowDefaultGlobals` ((#v-controller-manager-globalvaultauthoptions-allowdefaultglobals)) (`boolean: true`) - allowDefaultGlobals directs the operator to search for a \"default\"\n        VaultAuthGlobal if none is specified on the referring VaultAuth CR.\n        Default: true\n\n    - `backoffOnSecretSourceError` ((#v-controller-manager-backoffonsecretsourceerror)) (`object: \"\"`) - Backoff settings for the controller manager. 
These settings control the backoff behavior\n      when the controller encounters an error while fetching secrets from the SecretSource.\n      For example, given the following settings:\n        initialInterval: 5s\n        maxInterval: 60s\n        randomizationFactor: 0.5\n        multiplier: 1.5\n\n      The backoff retry sequence might be something like:\n       5.5s, 7.5s, 11.25s, 16.87s, 25.3125s, 37.96s, 56.95s, 60.95s...\n\n      - `initialInterval` ((#v-controller-manager-backoffonsecretsourceerror-initialinterval)) (`duration: 5s`) - Initial interval between retries.\n\n      - `maxInterval` ((#v-controller-manager-backoffonsecretsourceerror-maxinterval)) (`duration: 60s`) - Maximum interval between retries.\n\n      - `maxElapsedTime` ((#v-controller-manager-backoffonsecretsourceerror-maxelapsedtime)) (`duration: 0s`) - Maximum elapsed time without a successful sync from the secret's source.\n        It's important to note that setting this option to anything other than\n        its default will result in the secret sync no longer being retried after\n        reaching the max elapsed time.\n\n      - `randomizationFactor` ((#v-controller-manager-backoffonsecretsourceerror-randomizationfactor)) (`float: 0.5`) - Randomization factor randomizes the backoff interval between retries.\n        This helps to spread out the retries to avoid a thundering herd.\n        If the value is 0, then the backoff interval will not be randomized.\n        It is recommended to set this to a value that is greater than 0.\n\n      - `multiplier` ((#v-controller-manager-backoffonsecretsourceerror-multiplier)) (`float: 1.5`) - Sets the multiplier that is used to increase the backoff interval between retries.\n        This value must always be greater than zero.\n\n    - `clientCache` ((#v-controller-manager-clientcache)) - Configures the client cache which is used by the controller to cache (and potentially persist) 
vault tokens that\n      are the result of using the VaultAuthMethod. This enables re-use of Vault Tokens\n      throughout their TTLs as well as the ability to renew.\n      Persistence is only useful in the context of Dynamic Secrets, so \"none\" is an okay default.\n\n      - `persistenceModel` ((#v-controller-manager-clientcache-persistencemodel)) (`string: \"\"`) - Defines the `-client-cache-persistence-model` which caches+persists vault tokens.\n        May also be set via the `VSO_CLIENT_CACHE_PERSISTENCE_MODEL` environment variable.\n        Valid values are:\n        \"none\" - in-memory client cache is used, no tokens are persisted.\n        \"direct-unencrypted\" - in-memory client cache is persisted, unencrypted. This is NOT recommended for any production workload.\n        \"direct-encrypted\" - in-memory client cache is persisted encrypted using the Vault Transit engine.\n        Note: It is strongly encouraged to not use the setting of \"direct-unencrypted\" in\n        production due to the potential of vault tokens being leaked as they would then be stored\n        in clear text.\n\n        default: \"none\"\n\n      - `cacheSize` ((#v-controller-manager-clientcache-cachesize)) (`integer: \"\"`) - Defines the size of the in-memory LRU cache *in entries*, used by the client cache controller.\n        May also be set via the `VSO_CLIENT_CACHE_SIZE` environment variable.\n        Larger numbers will increase memory usage by the controller, lower numbers will cause more frequent evictions\n        of the client cache which can result in additional Vault client counts.\n\n        default: 10000\n\n      - `storageEncryption` ((#v-controller-manager-clientcache-storageencryption)) - StorageEncryption provides the necessary configuration to encrypt the client storage\n        cache within Kubernetes objects using the (required) Vault Transit Engine.\n        This should only be configured when client cache persistence with encryption is enabled and\n  
      will deploy an additional VaultAuthMethod to be used by the Vault Transit Engine.\n        E.g. when `controller.manager.clientCache.persistenceModel=direct-encrypted`\n        Supported Vault authentication methods for the Transit Auth method are: jwt, appRole,\n        aws, and kubernetes.\n        Typically, there should only ever be one VaultAuth configured with\n        StorageEncryption in the cluster.\n\n        - `enabled` ((#v-controller-manager-clientcache-storageencryption-enabled)) (`boolean: false`) - toggles the deployment of the Transit VaultAuthMethod CR.\n\n        - `vaultConnectionRef` ((#v-controller-manager-clientcache-storageencryption-vaultconnectionref)) (`string: default`) - Vault Connection Ref to be used by the Transit VaultAuthMethod.\n          Default setting will use the default VaultConnectionRef, which must also be configured.\n\n        - `keyName` ((#v-controller-manager-clientcache-storageencryption-keyname)) (`string: \"\"`) - KeyName to use for encrypt\/decrypt operations via Vault Transit.\n\n        - `transitMount` ((#v-controller-manager-clientcache-storageencryption-transitmount)) (`string: \"\"`) - Mount path of the Transit engine in Vault.\n\n        - `namespace` ((#v-controller-manager-clientcache-storageencryption-namespace)) (`string: \"\"`) - Vault namespace for the Transit VaultAuthMethod CR.\n\n        - `method` ((#v-controller-manager-clientcache-storageencryption-method)) (`string: kubernetes`) - Vault Auth method to be used with the Transit VaultAuthMethod CR.\n\n        - `mount` ((#v-controller-manager-clientcache-storageencryption-mount)) (`string: kubernetes`) - Mount path for the Transit VaultAuthMethod.\n\n        - `kubernetes` ((#v-controller-manager-clientcache-storageencryption-kubernetes)) - Vault Kubernetes auth method specific configuration\n\n          - `role` ((#v-controller-manager-clientcache-storageencryption-kubernetes-role)) (`string: \"\"`) - Vault Auth Role to use\n            
This is a required field and must be set up in Vault prior to deploying the helm chart\n            if `controller.manager.clientCache.storageEncryption.enabled=true`\n\n          - `serviceAccount` ((#v-controller-manager-clientcache-storageencryption-kubernetes-serviceaccount)) (`string: \"\"`) - Kubernetes ServiceAccount associated with the Transit Vault Auth Role\n            Defaults to using the Operator's service-account.\n\n          - `tokenAudiences` ((#v-controller-manager-clientcache-storageencryption-kubernetes-tokenaudiences)) (`array<string>: []`) - Token Audience should match the audience of the vault kubernetes auth role.\n\n        - `jwt` ((#v-controller-manager-clientcache-storageencryption-jwt)) - Vault JWT auth method specific configuration\n\n          - `role` ((#v-controller-manager-clientcache-storageencryption-jwt-role)) (`string: \"\"`) - Vault Auth Role to use\n            This is a required field and must be set up in Vault prior to deploying the helm chart\n            if using JWT for the Transit VaultAuthMethod.\n\n          - `secretRef` ((#v-controller-manager-clientcache-storageencryption-jwt-secretref)) (`string: \"\"`) - One of the following is required prior to deploying the helm chart\n            - K8s secret that contains the JWT\n            - K8s service account if a service account JWT is used as a Vault JWT auth token and\n            needs to be generated by VSO.\n\n            Name of Kubernetes Secret that has the Vault JWT auth token.\n            The Kubernetes Secret must contain a key named `jwt` which references the JWT token, and\n            must exist in the namespace of any consuming VaultSecret CR. 
This is a required field if\n            a JWT token is provided.\n\n          - `serviceAccount` ((#v-controller-manager-clientcache-storageencryption-jwt-serviceaccount)) (`string: default`) - Kubernetes ServiceAccount to generate a service account JWT\n\n          - `tokenAudiences` ((#v-controller-manager-clientcache-storageencryption-jwt-tokenaudiences)) (`array<string>: []`) - Token Audience should match the `bound_audiences`, or the `aud` list in `bound_claims` if\n            applicable, of the Vault JWT auth role.\n\n        - `appRole` ((#v-controller-manager-clientcache-storageencryption-approle)) - AppRole auth method specific configuration\n\n          - `roleId` ((#v-controller-manager-clientcache-storageencryption-approle-roleid)) (`string: \"\"`) - AppRole Role's RoleID to use for authenticating to Vault.\n            This is a required field when using appRole and must be set up in Vault prior to deploying\n            the helm chart.\n\n          - `secretRef` ((#v-controller-manager-clientcache-storageencryption-approle-secretref)) (`string: \"\"`) - Name of Kubernetes Secret that has the AppRole Role's SecretID used to authenticate with\n            Vault. 
The Kubernetes Secret must contain a key named `id` which references the AppRole\n            Role's SecretID, and must exist in the namespace of any consuming VaultSecret CR.\n            This is a required field when using appRole and must be set up in Vault prior to\n            deploying the helm chart.\n\n        - `aws` ((#v-controller-manager-clientcache-storageencryption-aws)) - AWS auth method specific configuration\n\n          - `role` ((#v-controller-manager-clientcache-storageencryption-aws-role)) (`string: \"\"`) - Vault Auth Role to use\n            This is a required field and must be set up in Vault prior to deploying the helm chart\n            if using AWS for the Transit auth method.\n\n          - `region` ((#v-controller-manager-clientcache-storageencryption-aws-region)) (`string: \"\"`) - AWS region to use for signing the authentication request\n            Optional, but most commonly will be the EKS cluster region.\n\n          - `headerValue` ((#v-controller-manager-clientcache-storageencryption-aws-headervalue)) (`string: \"\"`) - Vault header value to include in the STS signing request\n\n          - `sessionName` ((#v-controller-manager-clientcache-storageencryption-aws-sessionname)) (`string: \"\"`) - The role session name to use when creating a WebIdentity provider\n\n          - `stsEndpoint` ((#v-controller-manager-clientcache-storageencryption-aws-stsendpoint)) (`string: \"\"`) - The STS endpoint to use; if not set will use the default\n\n          - `iamEndpoint` ((#v-controller-manager-clientcache-storageencryption-aws-iamendpoint)) (`string: \"\"`) - The IAM endpoint to use; if not set will use the default\n\n          - `secretRef` ((#v-controller-manager-clientcache-storageencryption-aws-secretref)) (`string: \"\"`) - The name of a Kubernetes Secret which holds credentials for AWS. 
Supported keys\n            include `access_key_id`, `secret_access_key`, `session_token`\n\n          - `irsaServiceAccount` ((#v-controller-manager-clientcache-storageencryption-aws-irsaserviceaccount)) (`string: \"\"`) - Name of a Kubernetes service account that is configured with IAM Roles\n            for Service Accounts (IRSA). Should be annotated with \"eks.amazonaws.com\/role-arn\".\n\n        - `gcp` ((#v-controller-manager-clientcache-storageencryption-gcp))\n\n          - `role` ((#v-controller-manager-clientcache-storageencryption-gcp-role)) (`string: \"\"`) - Vault Auth Role to use\n            This is a required field and must be setup in Vault prior to deploying the helm chart\n            if using GCP for the Transit auth method.\n\n          - `workloadIdentityServiceAccount` ((#v-controller-manager-clientcache-storageencryption-gcp-workloadidentityserviceaccount)) (`string: \"\"`) - Name of a Kubernetes service account that is configured for workload\n            identity in GKE.\n\n          - `region` ((#v-controller-manager-clientcache-storageencryption-gcp-region)) (`string: \"\"`) - GCP Region of the GKE cluster's identity provider. Defaults to the\n            region returned from the operator pod's local metadata server if\n            unspecified.\n\n          - `clusterName` ((#v-controller-manager-clientcache-storageencryption-gcp-clustername)) (`string: \"\"`) - GKE cluster name. Defaults to the cluster-name returned from the\n            operator pod's local metadata server if unspecified.\n\n          - `projectID` ((#v-controller-manager-clientcache-storageencryption-gcp-projectid)) (`string: \"\"`) - GCP project id. 
Defaults to the project-id returned from the\n            operator pod's local metadata server if unspecified.\n\n        - `params` ((#v-controller-manager-clientcache-storageencryption-params)) (`map`) - Params to use when authenticating to Vault using this auth method.\n          params:\n            param-something1: \"foo\"\n\n        - `headers` ((#v-controller-manager-clientcache-storageencryption-headers)) (`map: \"\"`) - Headers to be included in all Vault requests.\n          headers:\n            X-vault-something1: \"foo\"\n\n    - `maxConcurrentReconciles` ((#v-controller-manager-maxconcurrentreconciles)) (`integer: \"\"`) - Defines the maximum number of concurrent reconciles for each controller.\n      May also be set via the `VSO_MAX_CONCURRENT_RECONCILES` environment variable.\n\n      default: 100\n\n    - `extraEnv` ((#v-controller-manager-extraenv)) (`array<map>`) - Defines additional environment variables to be added to the\n      vault-secrets-operator manager container.\n      Example:\n\n      ```yaml\n      extraEnv:\n        - name: HTTP_PROXY\n          value: http:\/\/proxy.example.com\n        - name: VSO_OUTPUT_FORMAT\n          value: json\n        - name: VSO_CLIENT_CACHE_SIZE\n          value: \"20000\"\n        - name: VSO_CLIENT_CACHE_PERSISTENCE_MODEL\n          value: \"direct-encrypted\"\n        - name: VSO_MAX_CONCURRENT_RECONCILES\n          value: \"30\"\n      ```\n\n    - `extraArgs` ((#v-controller-manager-extraargs)) (`array: []`) - Defines additional command-line arguments to be passed to the\n      vault-secrets-operator manager container.\n\n    - `resources` ((#v-controller-manager-resources)) (`map`) - Configures the default resources for the vault-secrets-operator container.\n      For more information on configuring resources, see the K8s documentation:\n      https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\n\n      - `limits` ((#v-controller-manager-resources-limits))\n\n     
   - `cpu` ((#v-controller-manager-resources-limits-cpu)) (`string: 500m`)\n\n        - `memory` ((#v-controller-manager-resources-limits-memory)) (`string: 128Mi`)\n\n      - `requests` ((#v-controller-manager-resources-requests))\n\n        - `cpu` ((#v-controller-manager-resources-requests-cpu)) (`string: 10m`)\n\n        - `memory` ((#v-controller-manager-resources-requests-memory)) (`string: 64Mi`)\n\n  - `podSecurityContext` ((#v-controller-podsecuritycontext)) - Configures the Pod Security Context\n    https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\n\n    - `runAsNonRoot` ((#v-controller-podsecuritycontext-runasnonroot)) (`boolean: true`)\n\n  - `securityContext` ((#v-controller-securitycontext)) - Configures the Container Security Context\n    https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\n\n    - `allowPrivilegeEscalation` ((#v-controller-securitycontext-allowprivilegeescalation)) (`boolean: false`)\n\n  - `controllerConfigMapYaml` ((#v-controller-controllerconfigmapyaml)) (`map`) - Sets the configuration settings used by the controller. 
Any custom changes will be reflected in the\n    data field of the configmap.\n    For more information on configuring resources, see the K8s documentation:\n    https:\/\/kubernetes.io\/docs\/concepts\/configuration\/configmap\/\n\n    - `health` ((#v-controller-controllerconfigmapyaml-health))\n\n      - `healthProbeBindAddress` ((#v-controller-controllerconfigmapyaml-health-healthprobebindaddress)) (`string: :8081`)\n\n    - `leaderElection` ((#v-controller-controllerconfigmapyaml-leaderelection))\n\n      - `leaderElect` ((#v-controller-controllerconfigmapyaml-leaderelection-leaderelect)) (`boolean: true`)\n\n      - `resourceName` ((#v-controller-controllerconfigmapyaml-leaderelection-resourcename)) (`string: b0d477c0.hashicorp.com`)\n\n    - `metrics` ((#v-controller-controllerconfigmapyaml-metrics))\n\n      - `bindAddress` ((#v-controller-controllerconfigmapyaml-metrics-bindaddress)) (`string: 127.0.0.1:8080`)\n\n    - `webhook` ((#v-controller-controllerconfigmapyaml-webhook))\n\n      - `port` ((#v-controller-controllerconfigmapyaml-webhook-port)) (`integer: 9443`)\n\n  - `kubernetesClusterDomain` ((#v-controller-kubernetesclusterdomain)) (`string: cluster.local`) - Configures the environment variable KUBERNETES_CLUSTER_DOMAIN used by KubeDNS.\n\n  - `terminationGracePeriodSeconds` ((#v-controller-terminationgraceperiodseconds)) (`integer: 120`) - Duration in seconds the pod needs to terminate gracefully.\n    See: https:\/\/kubernetes.io\/docs\/concepts\/containers\/container-lifecycle-hooks\/\n\n  - `preDeleteHookTimeoutSeconds` ((#v-controller-predeletehooktimeoutseconds)) (`integer: 120`) - Timeout in seconds for the pre-delete hook\n\n### metricsService ((#h-metricsservice))\n\n- `metricsService` ((#v-metricsservice)) (`map`) - Configure the ports used by the metrics service.\n\n  - `ports` ((#v-metricsservice-ports)) (`map`) - Set the port settings for the metrics service.\n    For 
more information on configuring resources, see the K8s documentation:\n    https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/\n\n    - `name` ((#v-metricsservice-ports-name)) (`string: https`)\n\n    - `port` ((#v-metricsservice-ports-port)) (`integer: 8443`)\n\n    - `protocol` ((#v-metricsservice-ports-protocol)) (`string: TCP`)\n\n    - `targetPort` ((#v-metricsservice-ports-targetport)) (`string: https`)\n\n  - `type` ((#v-metricsservice-type)) (`string: ClusterIP`)\n\n### defaultVaultConnection ((#h-defaultvaultconnection))\n\n- `defaultVaultConnection` ((#v-defaultvaultconnection)) - Configures the default VaultConnection CR which will be used by resources\n  if they do not specify a VaultConnection reference. The name is 'default' and will\n  always be installed in the same namespace as the operator.\n  NOTE:\n  * It is strongly recommended to deploy the vault secrets operator in a secure Vault environment\n    which includes a configuration utilizing TLS and installing Vault into its own restricted namespace.\n\n  - `enabled` ((#v-defaultvaultconnection-enabled)) (`boolean: false`) - toggles the deployment of the VaultConnection CR\n\n  - `address` ((#v-defaultvaultconnection-address)) (`string: \"\"`) - Address of the Vault Server\n    Example: http:\/\/vault.default.svc.cluster.local:8200\n\n  - `caCertSecret` ((#v-defaultvaultconnection-cacertsecret)) (`string: \"\"`) - CACertSecret is the name of a Kubernetes secret containing the trusted PEM encoded CA certificate chain as `ca.crt`.\n    Note: This secret must exist prior to deploying the CR.\n\n  - `tlsServerName` ((#v-defaultvaultconnection-tlsservername)) (`string: \"\"`) - TLSServerName to use as the SNI host for TLS connections.\n\n  - `skipTLSVerify` ((#v-defaultvaultconnection-skiptlsverify)) (`boolean: false`) - SkipTLSVerify for TLS connections.\n\n  - `headers` ((#v-defaultvaultconnection-headers)) (`map`) - Headers to be included in all Vault requests.\n    headers:\n      
X-vault-something: \"foo\"\n\n### defaultAuthMethod ((#h-defaultauthmethod))\n\n- `defaultAuthMethod` ((#v-defaultauthmethod)) - Configures and deploys the default VaultAuthMethod CR which will be used by resources\n  if they do not specify a VaultAuthMethod reference. The name is 'default' and will\n  always be installed in the same namespace as the operator.\n  NOTE:\n  * It is strongly recommended to deploy the vault secrets operator in a secure Vault environment\n    which includes a configuration utilizing TLS and installing Vault into its own restricted namespace.\n\n  - `enabled` ((#v-defaultauthmethod-enabled)) (`boolean: false`) - toggles the deployment of the VaultAuthMethod CR\n\n  - `namespace` ((#v-defaultauthmethod-namespace)) (`string: \"\"`) - Vault namespace for the VaultAuthMethod CR\n\n  - `allowedNamespaces` ((#v-defaultauthmethod-allowednamespaces)) (`array<string>: []`) - Kubernetes namespace glob patterns which are allow-listed for use with the default AuthMethod.\n\n  - `method` ((#v-defaultauthmethod-method)) (`string: kubernetes`) - Vault Auth method to be used with the VaultAuthMethod CR\n\n  - `mount` ((#v-defaultauthmethod-mount)) (`string: kubernetes`) - Mount path for the Vault Auth Method.\n\n  - `kubernetes` ((#v-defaultauthmethod-kubernetes)) - Vault Kubernetes auth method specific configuration\n\n    - `role` ((#v-defaultauthmethod-kubernetes-role)) (`string: \"\"`) - Vault Auth Role to use\n      This is a required field and must be set up in Vault prior to deploying the helm chart\n      if `defaultAuthMethod.enabled=true`\n\n    - `serviceAccount` ((#v-defaultauthmethod-kubernetes-serviceaccount)) (`string: default`) - Kubernetes ServiceAccount associated with the default Vault Auth Role\n\n    - `tokenAudiences` ((#v-defaultauthmethod-kubernetes-tokenaudiences)) (`array<string>: []`) - Token Audience should match the audience of the vault kubernetes auth role.\n\n  - `jwt` ((#v-defaultauthmethod-jwt)) - Vault JWT auth method 
specific configuration\n\n    - `role` ((#v-defaultauthmethod-jwt-role)) (`string: \"\"`) - Vault Auth Role to use\n      This is a required field and must be set up in Vault prior to deploying the helm chart\n      if using JWT for the default auth method.\n\n    - `secretRef` ((#v-defaultauthmethod-jwt-secretref)) (`string: \"\"`) - One of the following is required prior to deploying the helm chart\n      - K8s secret that contains the JWT\n      - K8s service account if a service account JWT is used as a Vault JWT auth token and needs to be generated by VSO\n\n      Name of Kubernetes Secret that has the Vault JWT auth token.\n      The Kubernetes Secret must contain a key named `jwt` which references the JWT token, and must exist in the namespace\n      of any consuming VaultSecret CR. This is a required field if a JWT token is provided.\n\n    - `serviceAccount` ((#v-defaultauthmethod-jwt-serviceaccount)) (`string: default`) - Kubernetes ServiceAccount to generate a service account JWT\n\n    - `tokenAudiences` ((#v-defaultauthmethod-jwt-tokenaudiences)) (`array<string>: []`) - Token Audience should match the `bound_audiences`, or the `aud` list in `bound_claims` if applicable,\n      of the Vault JWT auth role.\n\n  - `appRole` ((#v-defaultauthmethod-approle)) - AppRole auth method specific configuration\n\n    - `roleId` ((#v-defaultauthmethod-approle-roleid)) (`string: \"\"`) - AppRole Role's RoleID to use for authenticating to Vault.\n      This is a required field when using appRole and must be set up in Vault prior to deploying the\n      helm chart.\n\n    - `secretRef` ((#v-defaultauthmethod-approle-secretref)) (`string: \"\"`) - Name of Kubernetes Secret that has the AppRole Role's SecretID used to authenticate with Vault.\n      The Kubernetes Secret must contain a key named `id` which references the AppRole Role's\n      SecretID, and must exist in the namespace of any consuming VaultSecret CR.\n      This is a required field when using appRole and must be 
set up in Vault prior to deploying the\n      helm chart.\n\n  - `aws` ((#v-defaultauthmethod-aws)) - AWS auth method specific configuration\n\n    - `role` ((#v-defaultauthmethod-aws-role)) (`string: \"\"`) - Vault Auth Role to use\n      This is a required field and must be set up in Vault prior to deploying the helm chart\n      if using AWS for the default auth method.\n\n    - `region` ((#v-defaultauthmethod-aws-region)) (`string: \"\"`) - AWS region to use for signing the authentication request\n      Optional, but most commonly will be the region where the EKS cluster is running\n\n    - `headerValue` ((#v-defaultauthmethod-aws-headervalue)) (`string: \"\"`) - Vault header value to include in the STS signing request\n\n    - `sessionName` ((#v-defaultauthmethod-aws-sessionname)) (`string: \"\"`) - The role session name to use when creating a WebIdentity provider\n\n    - `stsEndpoint` ((#v-defaultauthmethod-aws-stsendpoint)) (`string: \"\"`) - The STS endpoint to use; if not set will use the default\n\n    - `iamEndpoint` ((#v-defaultauthmethod-aws-iamendpoint)) (`string: \"\"`) - The IAM endpoint to use; if not set will use the default\n\n    - `secretRef` ((#v-defaultauthmethod-aws-secretref)) (`string: \"\"`) - The name of a Kubernetes Secret which holds credentials for AWS. Supported keys include\n      `access_key_id`, `secret_access_key`, `session_token`\n\n    - `irsaServiceAccount` ((#v-defaultauthmethod-aws-irsaserviceaccount)) (`string: \"\"`) - Name of a Kubernetes service account that is configured with IAM Roles\n      for Service Accounts (IRSA). 
Should be annotated with \"eks.amazonaws.com\/role-arn\".\n\n  - `gcp` ((#v-defaultauthmethod-gcp))\n\n    - `role` ((#v-defaultauthmethod-gcp-role)) (`string: \"\"`) - Vault Auth Role to use\n      This is a required field and must be set up in Vault prior to deploying the helm chart\n      if using GCP for the default auth method.\n\n    - `workloadIdentityServiceAccount` ((#v-defaultauthmethod-gcp-workloadidentityserviceaccount)) (`string: \"\"`) - Name of a Kubernetes service account that is configured for workload\n      identity in GKE.\n\n    - `region` ((#v-defaultauthmethod-gcp-region)) (`string: \"\"`) - GCP Region of the GKE cluster's identity provider. Defaults to the\n      region returned from the operator pod's local metadata server if\n      unspecified.\n\n    - `clusterName` ((#v-defaultauthmethod-gcp-clustername)) (`string: \"\"`) - GKE cluster name. Defaults to the cluster-name returned from the\n      operator pod's local metadata server if unspecified.\n\n    - `projectID` ((#v-defaultauthmethod-gcp-projectid)) (`string: \"\"`) - GCP project id. 
Defaults to the project-id returned from the\n      operator pod's local metadata server if unspecified.\n\n  - `params` ((#v-defaultauthmethod-params)) (`map`) - Params to use when authenticating to Vault\n    params:\n      param-something1: \"foo\"\n\n  - `headers` ((#v-defaultauthmethod-headers)) (`map`) - Headers to be included in all Vault requests.\n    headers:\n      X-vault-something1: \"foo\"\n\n  - `vaultAuthGlobalRef` ((#v-defaultauthmethod-vaultauthglobalref)) - VaultAuthGlobalRef\n\n    - `enabled` ((#v-defaultauthmethod-vaultauthglobalref-enabled)) (`boolean: false`) - toggles the inclusion of the VaultAuthGlobal configuration in the\n      default VaultAuth CR.\n\n    - `name` ((#v-defaultauthmethod-vaultauthglobalref-name)) (`string: \"\"`) - Name of the VaultAuthGlobal CR to reference.\n\n    - `namespace` ((#v-defaultauthmethod-vaultauthglobalref-namespace)) (`string: \"\"`) - Namespace of the VaultAuthGlobal CR to reference.\n\n    - `allowDefault` ((#v-defaultauthmethod-vaultauthglobalref-allowdefault)) (`boolean: \"\"`) - allow default globals\n\n    - `mergeStrategy` ((#v-defaultauthmethod-vaultauthglobalref-mergestrategy))\n\n      - `headers` ((#v-defaultauthmethod-vaultauthglobalref-mergestrategy-headers)) (`string: none`) - merge strategy for headers\n        Valid values are: \"replace\", \"merge\", \"none\"\n        Default: \"replace\"\n\n      - `params` ((#v-defaultauthmethod-vaultauthglobalref-mergestrategy-params)) (`string: none`) - merge strategy for params\n        Valid values are: \"replace\", \"merge\", \"none\"\n        Default: \"replace\"\n\n### telemetry ((#h-telemetry))\n\n- `telemetry` ((#v-telemetry)) - Configures a Prometheus ServiceMonitor\n\n  - `serviceMonitor` ((#v-telemetry-servicemonitor))\n\n    - `enabled` ((#v-telemetry-servicemonitor-enabled)) (`boolean: false`) - The Prometheus operator *must* be installed before enabling this feature;\n      otherwise the chart will fail to install due to missing 
CustomResourceDefinitions\n      provided by the operator.\n\n      Instructions on how to install the Helm chart can be found here:\n       https:\/\/github.com\/prometheus-community\/helm-charts\/tree\/main\/charts\/kube-prometheus-stack\n      More information can be found here:\n       https:\/\/github.com\/prometheus-operator\/prometheus-operator\n       https:\/\/github.com\/prometheus-operator\/kube-prometheus\n\n      Enable deployment of the Vault Secrets Operator ServiceMonitor CustomResource.\n\n    - `selectors` ((#v-telemetry-servicemonitor-selectors)) (`string: \"\"`) - Selector labels to add to the ServiceMonitor.\n      When empty, defaults to:\n       release: prometheus\n\n    - `scheme` ((#v-telemetry-servicemonitor-scheme)) (`string: https`) - Scheme of the service Prometheus scrapes metrics from. This must match the scheme of the metrics service of VSO\n\n    - `port` ((#v-telemetry-servicemonitor-port)) (`string: https`) - Port at which Prometheus scrapes metrics. This must match the port of the metrics service of VSO\n\n    - `path` ((#v-telemetry-servicemonitor-path)) (`string: \/metrics`) - Path at which Prometheus scrapes metrics\n\n    - `bearerTokenFile` ((#v-telemetry-servicemonitor-bearertokenfile)) (`string: \/var\/run\/secrets\/kubernetes.io\/serviceaccount\/token`) - File Prometheus reads bearer token from for scraping metrics\n\n    - `interval` ((#v-telemetry-servicemonitor-interval)) (`string: 30s`) - Interval at which Prometheus scrapes metrics\n\n    - `scrapeTimeout` ((#v-telemetry-servicemonitor-scrapetimeout)) (`string: 10s`) - Timeout for Prometheus scrapes\n\n### hooks ((#h-hooks))\n\n- `hooks` ((#v-hooks)) - Configure the behaviour of Helm hooks.\n\n  - `resources` ((#v-hooks-resources)) - Resources common to all hooks.\n\n    - `limits` ((#v-hooks-resources-limits))\n\n      - `cpu` ((#v-hooks-resources-limits-cpu)) (`string: 500m`)\n\n      - `memory` ((#v-hooks-resources-limits-memory)) (`string: 128Mi`)\n\n    - 
`requests` ((#v-hooks-resources-requests))\n\n      - `cpu` ((#v-hooks-resources-requests-cpu)) (`string: 10m`)\n\n      - `memory` ((#v-hooks-resources-requests-memory)) (`string: 64Mi`)\n\n  - `upgradeCRDs` ((#v-hooks-upgradecrds)) - Configure the Helm pre-upgrade hook that handles custom resource definition (CRD) upgrades.\n\n    - `enabled` ((#v-hooks-upgradecrds-enabled)) (`boolean: true`) - Set to true to automatically upgrade the CRDs.\n      Disabling this will require manual intervention to upgrade the CRDs, so it is recommended to\n      always leave it enabled.\n\n    - `backoffLimit` ((#v-hooks-upgradecrds-backofflimit)) (`integer: 5`) - Limit the number of retries for the CRD upgrade.\n\n    - `executionTimeout` ((#v-hooks-upgradecrds-executiontimeout)) (`string: 30s`) - Set the timeout for the CRD upgrade. The operation should typically take less than 5s\n      to complete.\n\n### tests ((#h-tests))\n\n- `tests` ((#v-tests)) - # Used by unit tests, and will not be rendered except when using `helm template`, this can be safely ignored.\n\n  - `enabled` ((#v-tests-enabled)) (`boolean: true`)\n\n  <!-- codegen: end -->\n\n## Helm chart examples\n\nThe below `config.yaml` results in a single replica installation of the Vault Secrets Operator\nwith a default vault connection and auth method custom resource deployed.\nIt expects a local Vault installation within the kubernetes cluster\naccessible via `http:\/\/vault.default.svc.cluster.local:8200` with TLS disabled,\nand a [Vault Auth Method](\/vault\/docs\/auth\/kubernetes) to be setup against the `default` ServiceAccount.\n\n\n```yaml\n# config.yaml\n\ndefaultVaultConnection:\n  enabled: true\ndefaultAuthMethod:\n  enabled: true\n\n```\n\n## Customizing the helm chart\n\nIf you need to extend the Helm chart with additional options, we recommend using a third-party tool,\nsuch as [kustomize](https:\/\/github.com\/kubernetes-sigs\/kustomize) using the project repo `config\/` path\nin the 
[vault-secrets-operator](https://github.com/hashicorp/vault-secrets-operator) project.
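As a sketch of the kustomize approach, assuming the chart output has been rendered to a file beforehand and that the operator Deployment is named `vault-secrets-operator-controller-manager` (both are assumptions; adjust for your release name), a `kustomization.yaml` might look like:

```yaml
# kustomization.yaml -- illustrative sketch; file and resource names are assumptions
resources:
  # Pre-rendered chart output, e.g. produced with `helm template`
  - rendered-chart.yaml

patches:
  # Example patch: pin the operator Deployment to 2 replicas
  - target:
      kind: Deployment
      name: vault-secrets-operator-controller-manager
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 2
```

Running `kubectl kustomize .` then produces the patched manifests without modifying the chart itself.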
     Optional  but most commonly will be the region where the EKS cluster is running         headerValue     v defaultauthmethod aws headervalue     string         Vault header value to include in the STS signing request         sessionName     v defaultauthmethod aws sessionname     string         The role session name to use when creating a WebIdentity provider         stsEndpoint     v defaultauthmethod aws stsendpoint     string         The STS endpoint to use  if not set will use the default         iamEndpoint     v defaultauthmethod aws iamendpoint     string         The IAM endpoint to use  if not set will use the default         secretRef     v defaultauthmethod aws secretref     string         The name of a Kubernetes Secret which holds credentials for AWS  Supported keys include        access key id    secret access key    session token          irsaServiceAccount     v defaultauthmethod aws irsaserviceaccount     string         Name of a Kubernetes service account that is configured with IAM Roles       for Service Accounts  IRSA   Should be annotated with  eks amazonaws com role arn         gcp     v defaultauthmethod gcp           role     v defaultauthmethod gcp role     string         Vault Auth Role to use       This is a required field and must be setup in Vault prior to deploying the helm chart       if using GCP for the Transit auth method          workloadIdentityServiceAccount     v defaultauthmethod gcp workloadidentityserviceaccount     string         Name of a Kubernetes service account that is configured for workload       identity in GKE          region     v defaultauthmethod gcp region     string         GCP Region of the GKE cluster s identity provider  Defaults to the       region returned from the operator pod s local metadata server if       unspecified          clusterName     v defaultauthmethod gcp clustername     string         GKE cluster name  Defaults to the cluster name returned from the       operator pod s local metadata 
server if unspecified          projectID     v defaultauthmethod gcp projectid     string         GCP project id  Defaults to the project id returned from the       operator pod s local metadata server if unspecified        params     v defaultauthmethod params     map     Params to use when authenticating to Vault     params        param something1   foo        headers     v defaultauthmethod headers     map     Headers to be included in all Vault requests      headers        X vault something1   foo        vaultAuthGlobalRef     v defaultauthmethod vaultauthglobalref     VaultAuthGlobalRef         enabled     v defaultauthmethod vaultauthglobalref enabled     boolean  false      toggles the inclusion of the VaultAuthGlobal configuration in the       default VaultAuth CR          name     v defaultauthmethod vaultauthglobalref name     string         Name of the VaultAuthGlobal CR to reference          namespace     v defaultauthmethod vaultauthglobalref namespace     string         Namespace of the VaultAuthGlobal CR to reference          allowDefault     v defaultauthmethod vaultauthglobalref allowdefault     boolean         allow default globals         mergeStrategy     v defaultauthmethod vaultauthglobalref mergestrategy             headers     v defaultauthmethod vaultauthglobalref mergestrategy headers     string  none     merge strategy for headers         Valid values are   replace    merge    none          Default   replace            params     v defaultauthmethod vaultauthglobalref mergestrategy params     string  none     merge strategy for params         Valid values are   replace    merge    none          Default   replace       telemetry    h telemetry       telemetry     v telemetry     Configures a Prometheus ServiceMonitor       serviceMonitor     v telemetry servicemonitor           enabled     v telemetry servicemonitor enabled     boolean  false     The Prometheus operator  must  be installed before enabling this feature        if not the 
chart will fail to install due to missing CustomResourceDefinitions       provided by the operator         Instructions on how to install the Helm chart can be found here         https   github com prometheus community helm charts tree main charts kube prometheus stack       More information can be found here         https   github com prometheus operator prometheus operator        https   github com prometheus operator kube prometheus        Enable deployment of the Vault Secrets Operator ServiceMonitor CustomResource          selectors     v telemetry servicemonitor selectors     string         Selector labels to add to the ServiceMonitor        When empty  defaults to         release  prometheus         scheme     v telemetry servicemonitor scheme     string  https     Scheme of the service Prometheus scrapes metrics from  This must match the scheme of the metrics service of VSO         port     v telemetry servicemonitor port     string  https     Port at which Prometheus scrapes metrics  This must match the port of the metrics service of VSO         path     v telemetry servicemonitor path     string   metrics     Path at which Prometheus scrapes metrics         bearerTokenFile     v telemetry servicemonitor bearertokenfile     string   var run secrets kubernetes io serviceaccount token     File Prometheus reads bearer token from for scraping metrics         interval     v telemetry servicemonitor interval     string  30s     Interval at which Prometheus scrapes metrics         scrapeTimeout     v telemetry servicemonitor scrapetimeout     string  10s     Timeout for Prometheus scrapes      hooks    h hooks       hooks     v hooks     Configure the behaviour of Helm hooks        resources     v hooks resources     Resources common to all hooks          limits     v hooks resources limits             cpu     v hooks resources limits cpu     string  500m             memory     v hooks resources limits memory     string  128Mi           requests     v hooks 
resources requests             cpu     v hooks resources requests cpu     string  10m             memory     v hooks resources requests memory     string  64Mi         upgradeCRDs     v hooks upgradecrds     Configure the Helm pre upgrade hook that handles custom resource definition  CRD  upgrades          enabled     v hooks upgradecrds enabled     boolean  true     Set to true to automatically upgrade the CRDs        Disabling this will require manual intervention to upgrade the CRDs  so it is recommended to       always leave it enabled          backoffLimit     v hooks upgradecrds backofflimit     integer  5     Limit the number of retries for the CRD upgrade          executionTimeout     v hooks upgradecrds executiontimeout     string  30s     Set the timeout for the CRD upgrade  The operation should typically take less than 5s       to complete       tests    h tests       tests     v tests       Used by unit tests  and will not be rendered except when using  helm template   this can be safely ignored        enabled     v tests enabled     boolean  true           codegen  end         Helm chart examples  The below  config yaml  results in a single replica installation of the Vault Secrets Operator with a default vault connection and auth method custom resource deployed  It expects a local Vault installation within the kubernetes cluster accessible via  http   vault default svc cluster local 8200  with TLS disabled  and a  Vault Auth Method   vault docs auth kubernetes  to be setup against the  default  ServiceAccount       yaml   config yaml  defaultVaultConnection    enabled  true defaultAuthMethod    enabled  true          Customizing the helm chart  If you need to extend the Helm chart with additional options  we recommend using a third party tool  such as  kustomize  https   github com kubernetes sigs kustomize  using the project repo  config   path in the  vault secrets operator  https   github com hashicorp vault secrets operator  project "}
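Beyond the minimal two-line example, the same values can be expanded to pin down the connection and auth method explicitly. The following is a non-authoritative sketch of a fuller `config.yaml`; the Vault address is the local-cluster default described above, while the auth role name `vso-role` is purely illustrative and must match a Kubernetes auth role that already exists in Vault:

```yaml
# config.yaml -- illustrative values; "vso-role" is an assumed role name
# that must be pre-configured in Vault before installing the chart.
defaultVaultConnection:
  enabled: true
  address: http://vault.default.svc.cluster.local:8200  # local Vault, TLS disabled
  skipTLSVerify: false
defaultAuthMethod:
  enabled: true
  method: kubernetes
  mount: kubernetes          # mount path of the Kubernetes auth method
  kubernetes:
    role: vso-role           # hypothetical Vault auth role
    serviceAccount: default
telemetry:
  serviceMonitor:
    enabled: false           # requires the Prometheus operator when true
```

Such a file would typically be passed to the chart with `helm install`'s `-f config.yaml` flag; every key above corresponds to an entry in the values reference in this section.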
{"questions":"vault copied from docs api api reference md in the vault secrets operator repo The Vault Secrets Operator allows Pods to consume Vault secrets natively from Kubernetes Secrets commit SHA 08a6e5071ffa4faa486bd4b2c53b27585da4680c page title Vault Secrets Operator API Reference layout docs","answers":"---\nlayout: docs\npage_title: Vault Secrets Operator API Reference\ndescription: >-\n  The Vault Secrets Operator allows Pods to consume Vault secrets natively from Kubernetes Secrets.\n---\n\n<!--\n  copied from docs\/api\/api-reference.md in the vault-secrets-operator repo.\n  commit SHA=08a6e5071ffa4faa486bd4b2c53b27585da4680c\n-->\n# API Reference\n\n## Packages\n- [secrets.hashicorp.com\/v1beta1](#secretshashicorpcomv1beta1)\n\n\n## secrets.hashicorp.com\/v1beta1\n\nPackage v1beta1 contains API Schema definitions for the secrets v1beta1 API group\n\n### Resource Types\n- [HCPAuth](#hcpauth)\n- [HCPAuthList](#hcpauthlist)\n- [HCPVaultSecretsApp](#hcpvaultsecretsapp)\n- [HCPVaultSecretsAppList](#hcpvaultsecretsapplist)\n- [SecretTransformation](#secrettransformation)\n- [SecretTransformationList](#secrettransformationlist)\n- [VaultAuth](#vaultauth)\n- [VaultAuthGlobal](#vaultauthglobal)\n- [VaultAuthGlobalList](#vaultauthgloballist)\n- [VaultAuthList](#vaultauthlist)\n- [VaultConnection](#vaultconnection)\n- [VaultConnectionList](#vaultconnectionlist)\n- [VaultDynamicSecret](#vaultdynamicsecret)\n- [VaultDynamicSecretList](#vaultdynamicsecretlist)\n- [VaultPKISecret](#vaultpkisecret)\n- [VaultPKISecretList](#vaultpkisecretlist)\n- [VaultStaticSecret](#vaultstaticsecret)\n- [VaultStaticSecretList](#vaultstaticsecretlist)\n\n\n\n#### Destination\n\n\n\nDestination provides the configuration that will be applied to the\ndestination Kubernetes Secret during a Vault Secret -> K8s Secret sync.\n\n\n\n_Appears in:_\n- [HCPVaultSecretsAppSpec](#hcpvaultsecretsappspec)\n- [VaultDynamicSecretSpec](#vaultdynamicsecretspec)\n- 
[VaultPKISecretSpec](#vaultpkisecretspec)\n- [VaultStaticSecretSpec](#vaultstaticsecretspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `name` _string_ | Name of the Secret |  |  |\n| `create` _boolean_ | Create the destination Secret.<br \/>If the Secret already exists this should be set to false. | false |  |\n| `overwrite` _boolean_ | Overwrite the destination Secret if it exists and Create is true. This is<br \/>useful when migrating to VSO from a previous secret deployment strategy. | false |  |\n| `labels` _object (keys:string, values:string)_ | Labels to apply to the Secret. Requires Create to be set to true. |  |  |\n| `annotations` _object (keys:string, values:string)_ | Annotations to apply to the Secret. Requires Create to be set to true. |  |  |\n| `type` _[SecretType](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#secrettype-v1-core)_ | Type of Kubernetes Secret. Requires Create to be set to true.<br \/>Defaults to Opaque. |  |  |\n| `transformation` _[Transformation](#transformation)_ | Transformation provides configuration for transforming the secret data before<br \/>it is stored in the Destination. |  |  |\n\n\n#### HCPAuth\n\n\n\nHCPAuth is the Schema for the hcpauths API\n\n\n\n_Appears in:_\n- [HCPAuthList](#hcpauthlist)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `HCPAuth` | | |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. 
|  |  |\n| `spec` _[HCPAuthSpec](#hcpauthspec)_ |  |  |  |\n\n\n#### HCPAuthList\n\n\n\nHCPAuthList contains a list of HCPAuth\n\n\n\n\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `HCPAuthList` | | |\n| `metadata` _[ListMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |  |  |\n| `items` _[HCPAuth](#hcpauth) array_ |  |  |  |\n\n\n#### HCPAuthServicePrincipal\n\n\n\nHCPAuthServicePrincipal provides HCPAuth configuration options needed for\nauthenticating to HCP using a service principal configured in SecretRef.\n\n\n\n_Appears in:_\n- [HCPAuthSpec](#hcpauthspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `secretRef` _string_ | SecretRef is the name of a Kubernetes secret in the consumer's<br \/>(VDS\/VSS\/PKI\/HCP) namespace which provides the HCP ServicePrincipal clientID,<br \/>and clientSecret.<br \/>The secret data must have the following structure {<br \/>  \"clientID\": \"clientID\",<br \/>  \"clientSecret\": \"clientSecret\",<br \/>} |  |  |\n\n\n#### HCPAuthSpec\n\n\n\nHCPAuthSpec defines the desired state of HCPAuth\n\n\n\n_Appears in:_\n- [HCPAuth](#hcpauth)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `organizationID` _string_ | OrganizationID of the HCP organization. |  |  |\n| `projectID` _string_ | ProjectID of the HCP project. |  |  |\n| `allowedNamespaces` _string array_ | AllowedNamespaces Kubernetes Namespaces which are allow-listed for use with this AuthMethod.<br \/>This field allows administrators to customize which Kubernetes namespaces are authorized to<br \/>use with this AuthMethod. 
While Vault will still enforce its own rules, this has the added<br \/>configurability of restricting which HCPAuthMethods can be used by which namespaces.<br \/>Accepted values:<br \/>[]{\"*\"} - wildcard, all namespaces.<br \/>[]{\"a\", \"b\"} - list of namespaces.<br \/>unset - disallow all namespaces except the Operator's the HCPAuthMethod's namespace, this<br \/>is the default behavior. |  |  |\n| `method` _string_ | Method to use when authenticating to Vault. | servicePrincipal | Enum: [servicePrincipal] <br \/> |\n| `servicePrincipal` _[HCPAuthServicePrincipal](#hcpauthserviceprincipal)_ | ServicePrincipal provides the necessary configuration for authenticating to<br \/>HCP using a service principal. For security reasons, only project-level<br \/>service principals should ever be used. |  |  |\n\n\n\n\n#### HCPVaultSecretsApp\n\n\n\nHCPVaultSecretsApp is the Schema for the hcpvaultsecretsapps API\n\n\n\n_Appears in:_\n- [HCPVaultSecretsAppList](#hcpvaultsecretsapplist)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `HCPVaultSecretsApp` | | |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |  |  |\n| `spec` _[HCPVaultSecretsAppSpec](#hcpvaultsecretsappspec)_ |  |  |  |\n\n\n#### HCPVaultSecretsAppList\n\n\n\nHCPVaultSecretsAppList contains a list of HCPVaultSecretsApp\n\n\n\n\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `HCPVaultSecretsAppList` | | |\n| `metadata` _[ListMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. 
|  |  |\n| `items` _[HCPVaultSecretsApp](#hcpvaultsecretsapp) array_ |  |  |  |\n\n\n#### HCPVaultSecretsAppSpec\n\n\n\nHCPVaultSecretsAppSpec defines the desired state of HCPVaultSecretsApp\n\n\n\n_Appears in:_\n- [HCPVaultSecretsApp](#hcpvaultsecretsapp)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `appName` _string_ | AppName of the Vault Secrets Application that is to be synced. |  |  |\n| `hcpAuthRef` _string_ | HCPAuthRef to the HCPAuth resource, can be prefixed with a namespace, eg:<br \/>`namespaceA\/vaultAuthRefB`. If no namespace prefix is provided it will default<br \/>to the namespace of the HCPAuth CR. If no value is specified for HCPAuthRef the<br \/>Operator will default to the `default` HCPAuth, configured in the operator's<br \/>namespace. |  |  |\n| `refreshAfter` _string_ | RefreshAfter a period of time, in duration notation e.g. 30s, 1m, 24h | 600s | Pattern: `^([0-9]+(\\.[0-9]+)?(s|m|h))$` <br \/>Type: string <br \/> |\n| `rolloutRestartTargets` _[RolloutRestartTarget](#rolloutrestarttarget) array_ | RolloutRestartTargets should be configured whenever the application(s)<br \/>consuming the HCP Vault Secrets App does not support dynamically reloading a<br \/>rotated secret. In that case one, or more RolloutRestartTarget(s) can be<br \/>configured here. The Operator will trigger a \"rollout-restart\" for each target<br \/>whenever the Vault secret changes between reconciliation events. See<br \/>RolloutRestartTarget for more details. |  |  |\n| `destination` _[Destination](#destination)_ | Destination provides configuration necessary for syncing the HCP Vault<br \/>Application secrets to Kubernetes. 
|  |  |\n| `syncConfig` _[HVSSyncConfig](#hvssyncconfig)_ | SyncConfig configures sync behavior from HVS to VSO |  |  |\n\n\n\n\n#### HVSDynamicStatus\n\n\n\nHVSDynamicStatus defines the observed state of a dynamic secret within an HCP\nVault Secrets App\n\n\n\n_Appears in:_\n- [HCPVaultSecretsAppStatus](#hcpvaultsecretsappstatus)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `name` _string_ | Name of the dynamic secret |  |  |\n| `createdAt` _string_ | CreatedAt is the timestamp string of when the dynamic secret was created |  |  |\n| `expiresAt` _string_ | ExpiresAt is the timestamp string of when the dynamic secret will expire |  |  |\n| `ttl` _string_ | TTL is the time-to-live of the dynamic secret in seconds |  |  |\n\n\n#### HVSDynamicSyncConfig\n\n\n\nHVSDynamicSyncConfig configures sync behavior for HVS dynamic secrets.\n\n\n\n_Appears in:_\n- [HVSSyncConfig](#hvssyncconfig)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `renewalPercent` _integer_ | RenewalPercent is the percent out of 100 of a dynamic secret's TTL when<br \/>new secrets are generated. Defaults to 67 percent plus up to 10% jitter. | 67 | Maximum: 90 <br \/>Minimum: 0 <br \/> |\n\n\n#### HVSSyncConfig\n\n\n\nHVSSyncConfig configures sync behavior from HVS to VSO\n\n\n\n_Appears in:_\n- [HCPVaultSecretsAppSpec](#hcpvaultsecretsappspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `dynamic` _[HVSDynamicSyncConfig](#hvsdynamicsyncconfig)_ | Dynamic configures sync behavior for dynamic secrets. 
|  |  |\n\n\n#### MergeStrategy\n\n\n\nMergeStrategy provides the configuration for merging HTTP headers and\nparameters from the referring VaultAuth resource and its VaultAuthGlobal\nresource.\n\n\n\n_Appears in:_\n- [VaultAuthGlobalRef](#vaultauthglobalref)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `headers` _string_ | Headers configures the merge strategy for HTTP headers that are included in<br \/>all Vault requests. Choices are `union`, `replace`, or `none`.<br \/><br \/>If `union` is set, the headers from the VaultAuthGlobal and VaultAuth<br \/>resources are merged. The headers from the VaultAuth always take precedence.<br \/><br \/>If `replace` is set, the first set of non-empty headers taken in order from:<br \/>VaultAuth, VaultAuthGlobal auth method, VaultGlobal default headers.<br \/><br \/>If `none` is set, the headers from the<br \/>VaultAuthGlobal resource are ignored and only the headers from the VaultAuth<br \/>resource are used. The default is `none`. |  | Enum: [union replace none] <br \/> |\n| `params` _string_ | Params configures the merge strategy for HTTP parameters that are included in<br \/>all Vault requests. Choices are `union`, `replace`, or `none`.<br \/><br \/>If `union` is set, the parameters from the VaultAuthGlobal and VaultAuth<br \/>resources are merged. The parameters from the VaultAuth always take<br \/>precedence.<br \/><br \/>If `replace` is set, the first set of non-empty parameters taken in order from:<br \/>VaultAuth, VaultAuthGlobal auth method, VaultGlobal default parameters.<br \/><br \/>If `none` is set, the parameters from the VaultAuthGlobal resource are ignored<br \/>and only the parameters from the VaultAuth resource are used. The default is<br \/>`none`. 
|  | Enum: [union replace none] <br \/> |\n\n\n#### RolloutRestartTarget\n\n\n\nRolloutRestartTarget provides the configuration required to perform a\nrollout-restart of the supported resources upon Vault Secret rotation.\nThe rollout-restart is triggered by patching the target resource's\n'spec.template.metadata.annotations' to include 'vso.secrets.hashicorp.com\/restartedAt'\nwith a timestamp value of when the trigger was executed.\nE.g. vso.secrets.hashicorp.com\/restartedAt: \"2023-03-23T13:39:31Z\"\n\n\nSupported resources: Deployment, DaemonSet, StatefulSet, argo.Rollout\n\n\n\n_Appears in:_\n- [HCPVaultSecretsAppSpec](#hcpvaultsecretsappspec)\n- [VaultDynamicSecretSpec](#vaultdynamicsecretspec)\n- [VaultPKISecretSpec](#vaultpkisecretspec)\n- [VaultStaticSecretSpec](#vaultstaticsecretspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `kind` _string_ | Kind of the resource |  | Enum: [Deployment DaemonSet StatefulSet argo.Rollout] <br \/> |\n| `name` _string_ | Name of the resource |  |  |\n\n\n#### SecretTransformation\n\n\n\nSecretTransformation is the Schema for the secrettransformations API\n\n\n\n_Appears in:_\n- [SecretTransformationList](#secrettransformationlist)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `SecretTransformation` | | |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. 
|  |  |\n| `spec` _[SecretTransformationSpec](#secrettransformationspec)_ |  |  |  |\n\n\n#### SecretTransformationList\n\n\n\nSecretTransformationList contains a list of SecretTransformation\n\n\n\n\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `SecretTransformationList` | | |\n| `metadata` _[ListMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |  |  |\n| `items` _[SecretTransformation](#secrettransformation) array_ |  |  |  |\n\n\n#### SecretTransformationSpec\n\n\n\nSecretTransformationSpec defines the desired state of SecretTransformation\n\n\n\n_Appears in:_\n- [SecretTransformation](#secrettransformation)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `templates` _object (keys:string, values:[Template](#template))_ | Templates maps a template name to its Template. Templates are always included<br \/>in the rendered K8s Secret with the specified key. |  |  |\n| `sourceTemplates` _[SourceTemplate](#sourcetemplate) array_ | SourceTemplates are never included in the rendered K8s Secret, they can be<br \/>used to provide common template definitions, etc. |  |  |\n| `includes` _string array_ | Includes contains regex patterns used to filter top-level source secret data<br \/>fields for inclusion in the final K8s Secret data. These pattern filters are<br \/>never applied to templated fields as defined in Templates. They are always<br \/>applied last. |  |  |\n| `excludes` _string array_ | Excludes contains regex patterns used to filter top-level source secret data<br \/>fields for exclusion from the final K8s Secret data. These pattern filters are<br \/>never applied to templated fields as defined in Templates. They are always<br \/>applied before any inclusion patterns. 
To exclude all source secret data<br \/>fields, you can configure the single pattern \".*\". |  |  |\n\n\n\n\n#### SourceTemplate\n\n\n\nSourceTemplate provides source templating configuration.\n\n\n\n_Appears in:_\n- [SecretTransformationSpec](#secrettransformationspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `name` _string_ |  |  |  |\n| `text` _string_ | Text contains the Go text template format. The template<br \/>references attributes from the data structure of the source secret.<br \/>Refer to https:\/\/pkg.go.dev\/text\/template for more information. |  |  |\n\n\n#### StorageEncryption\n\n\n\nStorageEncryption provides the necessary configuration need to encrypt the storage cache\nentries using Vault's Transit engine.\n\n\n\n_Appears in:_\n- [VaultAuthSpec](#vaultauthspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `mount` _string_ | Mount path of the Transit engine in Vault. |  |  |\n| `keyName` _string_ | KeyName to use for encrypt\/decrypt operations via Vault Transit. |  |  |\n\n\n#### SyncConfig\n\n\n\nSyncConfig configures sync behavior from Vault to VSO\n\n\n\n_Appears in:_\n- [VaultStaticSecretSpec](#vaultstaticsecretspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `instantUpdates` _boolean_ | InstantUpdates is a flag to indicate that event-driven updates are<br \/>enabled for this VaultStaticSecret |  |  |\n\n\n#### Template\n\n\n\nTemplate provides templating configuration.\n\n\n\n_Appears in:_\n- [SecretTransformationSpec](#secrettransformationspec)\n- [Transformation](#transformation)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `name` _string_ | Name of the Template |  |  |\n| `text` _string_ | Text contains the Go text template format. The template<br \/>references attributes from the data structure of the source secret.<br \/>Refer to https:\/\/pkg.go.dev\/text\/template for more information. 
|  |  |\n\n\n#### TemplateRef\n\n\n\nTemplateRef points to templating text that is stored in a\nSecretTransformation custom resource.\n\n\n\n_Appears in:_\n- [TransformationRef](#transformationref)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `name` _string_ | Name of the Template in SecretTransformationSpec.Templates.<br \/>the rendered secret data. |  |  |\n| `keyOverride` _string_ | KeyOverride to the rendered template in the Destination secret. If Key is<br \/>empty, then the Key from reference spec will be used. Set this to override the<br \/>Key set from the reference spec. |  |  |\n\n\n#### Transformation\n\n\n\n\n\n\n\n_Appears in:_\n- [Destination](#destination)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `templates` _object (keys:string, values:[Template](#template))_ | Templates maps a template name to its Template. Templates are always included<br \/>in the rendered K8s Secret, and take precedence over templates defined in a<br \/>SecretTransformation. |  |  |\n| `transformationRefs` _[TransformationRef](#transformationref) array_ | TransformationRefs contain references to template configuration from<br \/>SecretTransformation. |  |  |\n| `includes` _string array_ | Includes contains regex patterns used to filter top-level source secret data<br \/>fields for inclusion in the final K8s Secret data. These pattern filters are<br \/>never applied to templated fields as defined in Templates. They are always<br \/>applied last. |  |  |\n| `excludes` _string array_ | Excludes contains regex patterns used to filter top-level source secret data<br \/>fields for exclusion from the final K8s Secret data. These pattern filters are<br \/>never applied to templated fields as defined in Templates. They are always<br \/>applied before any inclusion patterns. To exclude all source secret data<br \/>fields, you can configure the single pattern \".*\". 
|  |  |\n| `excludeRaw` _boolean_ | ExcludeRaw data from the destination Secret. Exclusion policy can be set<br \/>globally by including 'exclude-raw` in the '--global-transformation-options'<br \/>command line flag. If set, the command line flag always takes precedence over<br \/>this configuration. |  |  |\n\n\n#### TransformationRef\n\n\n\nTransformationRef contains the configuration for accessing templates from an\nSecretTransformation resource. TransformationRefs can be shared across all\nsyncable secret custom resources.\n\n\n\n_Appears in:_\n- [Transformation](#transformation)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `namespace` _string_ | Namespace of the SecretTransformation resource. |  |  |\n| `name` _string_ | Name of the SecretTransformation resource. |  |  |\n| `templateRefs` _[TemplateRef](#templateref) array_ | TemplateRefs map to a Template found in this TransformationRef. If empty, then<br \/>all templates from the SecretTransformation will be rendered to the K8s Secret. |  |  |\n| `ignoreIncludes` _boolean_ | IgnoreIncludes controls whether to use the SecretTransformation's Includes<br \/>data key filters. |  |  |\n| `ignoreExcludes` _boolean_ | IgnoreExcludes controls whether to use the SecretTransformation's Excludes<br \/>data key filters. |  |  |\n\n\n#### VaultAuth\n\n\n\nVaultAuth is the Schema for the vaultauths API\n\n\n\n_Appears in:_\n- [VaultAuthList](#vaultauthlist)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `VaultAuth` | | |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. 
|  |  |\n| `spec` _[VaultAuthSpec](#vaultauthspec)_ |  |  |  |\n\n\n#### VaultAuthConfigAWS\n\n\n\nVaultAuthConfigAWS provides VaultAuth configuration options needed for\nauthenticating to Vault via an AWS AuthMethod. Will use creds from\n`SecretRef` or `IRSAServiceAccount` if provided, in that order. If neither\nare provided, the underlying node role or instance profile will be used to\nauthenticate to Vault.\n\n\n\n_Appears in:_\n- [VaultAuthGlobalConfigAWS](#vaultauthglobalconfigaws)\n- [VaultAuthSpec](#vaultauthspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `role` _string_ | Vault role to use for authenticating |  |  |\n| `region` _string_ | AWS Region to use for signing the authentication request |  |  |\n| `headerValue` _string_ | The Vault header value to include in the STS signing request |  |  |\n| `sessionName` _string_ | The role session name to use when creating a webidentity provider |  |  |\n| `stsEndpoint` _string_ | The STS endpoint to use; if not set will use the default |  |  |\n| `iamEndpoint` _string_ | The IAM endpoint to use; if not set will use the default |  |  |\n| `secretRef` _string_ | SecretRef is the name of a Kubernetes Secret in the consumer's (VDS\/VSS\/PKI) namespace<br \/>which holds credentials for AWS. Expected keys include `access_key_id`, `secret_access_key`,<br \/>`session_token` |  |  |\n| `irsaServiceAccount` _string_ | IRSAServiceAccount name to use with IAM Roles for Service Accounts<br \/>(IRSA), and should be annotated with \"eks.amazonaws.com\/role-arn\". 
This<br \/>ServiceAccount will be checked for other EKS annotations:<br \/>eks.amazonaws.com\/audience and eks.amazonaws.com\/token-expiration |  |  |\n\n\n#### VaultAuthConfigAppRole\n\n\n\nVaultAuthConfigAppRole provides VaultAuth configuration options needed for authenticating to\nVault via an AppRole AuthMethod.\n\n\n\n_Appears in:_\n- [VaultAuthGlobalConfigAppRole](#vaultauthglobalconfigapprole)\n- [VaultAuthSpec](#vaultauthspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `roleId` _string_ | RoleID of the AppRole Role to use for authenticating to Vault. |  |  |\n| `secretRef` _string_ | SecretRef is the name of a Kubernetes secret in the consumer's (VDS\/VSS\/PKI) namespace which<br \/>provides the AppRole Role's SecretID. The secret must have a key named `id` which holds the<br \/>AppRole Role's secretID. |  |  |\n\n\n#### VaultAuthConfigGCP\n\n\n\nVaultAuthConfigGCP provides VaultAuth configuration options needed for\nauthenticating to Vault via a GCP AuthMethod, using workload identity\n\n\n\n_Appears in:_\n- [VaultAuthGlobalConfigGCP](#vaultauthglobalconfiggcp)\n- [VaultAuthSpec](#vaultauthspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `role` _string_ | Vault role to use for authenticating |  |  |\n| `workloadIdentityServiceAccount` _string_ | WorkloadIdentityServiceAccount is the name of a Kubernetes service<br \/>account (in the same Kubernetes namespace as the Vault*Secret referencing<br \/>this resource) which has been configured for workload identity in GKE.<br \/>Should be annotated with \"iam.gke.io\/gcp-service-account\". |  |  |\n| `region` _string_ | GCP Region of the GKE cluster's identity provider. Defaults to the region<br \/>returned from the operator pod's local metadata server. |  |  |\n| `clusterName` _string_ | GKE cluster name. Defaults to the cluster-name returned from the operator<br \/>pod's local metadata server. 
|  |  |\n| `projectID` _string_ | GCP project ID. Defaults to the project-id returned from the operator<br \/>pod's local metadata server. |  |  |\n\n\n#### VaultAuthConfigJWT\n\n\n\nVaultAuthConfigJWT provides VaultAuth configuration options needed for authenticating to Vault.\n\n\n\n_Appears in:_\n- [VaultAuthGlobalConfigJWT](#vaultauthglobalconfigjwt)\n- [VaultAuthSpec](#vaultauthspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `role` _string_ | Role to use for authenticating to Vault. |  |  |\n| `secretRef` _string_ | SecretRef is the name of a Kubernetes secret in the consumer's (VDS\/VSS\/PKI) namespace which<br \/>provides the JWT token to authenticate to Vault's JWT authentication backend. The secret must<br \/>have a key named `jwt` which holds the JWT token. |  |  |\n| `serviceAccount` _string_ | ServiceAccount to use when creating a ServiceAccount token to authenticate to Vault's<br \/>JWT authentication backend. |  |  |\n| `audiences` _string array_ | TokenAudiences to include in the ServiceAccount token. |  |  |\n| `tokenExpirationSeconds` _integer_ | TokenExpirationSeconds to set the ServiceAccount token. | 600 | Minimum: 600 <br \/> |\n\n\n#### VaultAuthConfigKubernetes\n\n\n\nVaultAuthConfigKubernetes provides VaultAuth configuration options needed for authenticating to Vault.\n\n\n\n_Appears in:_\n- [VaultAuthGlobalConfigKubernetes](#vaultauthglobalconfigkubernetes)\n- [VaultAuthSpec](#vaultauthspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `role` _string_ | Role to use for authenticating to Vault. |  |  |\n| `serviceAccount` _string_ | ServiceAccount to use when authenticating to Vault's<br \/>authentication backend. This must reside in the consuming secret's (VDS\/VSS\/PKI) namespace. |  |  |\n| `audiences` _string array_ | TokenAudiences to include in the ServiceAccount token. 
|  |  |\n| `tokenExpirationSeconds` _integer_ | TokenExpirationSeconds to set the ServiceAccount token. | 600 | Minimum: 600 <br \/> |\n\n\n#### VaultAuthGlobal\n\n\n\nVaultAuthGlobal is the Schema for the vaultauthglobals API\n\n\n\n_Appears in:_\n- [VaultAuthGlobalList](#vaultauthgloballist)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `VaultAuthGlobal` | | |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |  |  |\n| `spec` _[VaultAuthGlobalSpec](#vaultauthglobalspec)_ |  |  |  |\n\n\n#### VaultAuthGlobalConfigAWS\n\n\n\n\n\n\n\n_Appears in:_\n- [VaultAuthGlobalSpec](#vaultauthglobalspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `role` _string_ | Vault role to use for authenticating |  |  |\n| `region` _string_ | AWS Region to use for signing the authentication request |  |  |\n| `headerValue` _string_ | The Vault header value to include in the STS signing request |  |  |\n| `sessionName` _string_ | The role session name to use when creating a webidentity provider |  |  |\n| `stsEndpoint` _string_ | The STS endpoint to use; if not set will use the default |  |  |\n| `iamEndpoint` _string_ | The IAM endpoint to use; if not set will use the default |  |  |\n| `secretRef` _string_ | SecretRef is the name of a Kubernetes Secret in the consumer's (VDS\/VSS\/PKI) namespace<br \/>which holds credentials for AWS. Expected keys include `access_key_id`, `secret_access_key`,<br \/>`session_token` |  |  |\n| `irsaServiceAccount` _string_ | IRSAServiceAccount name to use with IAM Roles for Service Accounts<br \/>(IRSA), and should be annotated with \"eks.amazonaws.com\/role-arn\". 
This<br \/>ServiceAccount will be checked for other EKS annotations:<br \/>eks.amazonaws.com\/audience and eks.amazonaws.com\/token-expiration |  |  |\n| `namespace` _string_ | Namespace to auth to in Vault |  |  |\n| `mount` _string_ | Mount to use when authenticating to auth method. |  |  |\n| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault |  |  |\n| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. |  |  |\n\n\n#### VaultAuthGlobalConfigAppRole\n\n\n\n\n\n\n\n_Appears in:_\n- [VaultAuthGlobalSpec](#vaultauthglobalspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `roleId` _string_ | RoleID of the AppRole Role to use for authenticating to Vault. |  |  |\n| `secretRef` _string_ | SecretRef is the name of a Kubernetes secret in the consumer's (VDS\/VSS\/PKI) namespace which<br \/>provides the AppRole Role's SecretID. The secret must have a key named `id` which holds the<br \/>AppRole Role's secretID. |  |  |\n| `namespace` _string_ | Namespace to auth to in Vault |  |  |\n| `mount` _string_ | Mount to use when authenticating to auth method. |  |  |\n| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault |  |  |\n| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. |  |  |\n\n\n#### VaultAuthGlobalConfigGCP\n\n\n\n\n\n\n\n_Appears in:_\n- [VaultAuthGlobalSpec](#vaultauthglobalspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `role` _string_ | Vault role to use for authenticating |  |  |\n| `workloadIdentityServiceAccount` _string_ | WorkloadIdentityServiceAccount is the name of a Kubernetes service<br \/>account (in the same Kubernetes namespace as the Vault*Secret referencing<br \/>this resource) which has been configured for workload identity in GKE.<br \/>Should be annotated with \"iam.gke.io\/gcp-service-account\". 
|  |  |\n| `region` _string_ | GCP Region of the GKE cluster's identity provider. Defaults to the region<br \/>returned from the operator pod's local metadata server. |  |  |\n| `clusterName` _string_ | GKE cluster name. Defaults to the cluster-name returned from the operator<br \/>pod's local metadata server. |  |  |\n| `projectID` _string_ | GCP project ID. Defaults to the project-id returned from the operator<br \/>pod's local metadata server. |  |  |\n| `namespace` _string_ | Namespace to auth to in Vault |  |  |\n| `mount` _string_ | Mount to use when authenticating to auth method. |  |  |\n| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault |  |  |\n| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. |  |  |\n\n\n#### VaultAuthGlobalConfigJWT\n\n\n\n\n\n\n\n_Appears in:_\n- [VaultAuthGlobalSpec](#vaultauthglobalspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `role` _string_ | Role to use for authenticating to Vault. |  |  |\n| `secretRef` _string_ | SecretRef is the name of a Kubernetes secret in the consumer's (VDS\/VSS\/PKI) namespace which<br \/>provides the JWT token to authenticate to Vault's JWT authentication backend. The secret must<br \/>have a key named `jwt` which holds the JWT token. |  |  |\n| `serviceAccount` _string_ | ServiceAccount to use when creating a ServiceAccount token to authenticate to Vault's<br \/>JWT authentication backend. |  |  |\n| `audiences` _string array_ | TokenAudiences to include in the ServiceAccount token. |  |  |\n| `tokenExpirationSeconds` _integer_ | TokenExpirationSeconds to set the ServiceAccount token. | 600 | Minimum: 600 <br \/> |\n| `namespace` _string_ | Namespace to auth to in Vault |  |  |\n| `mount` _string_ | Mount to use when authenticating to auth method. 
|  |  |\n| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault |  |  |\n| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. |  |  |\n\n\n#### VaultAuthGlobalConfigKubernetes\n\n\n\n\n\n\n\n_Appears in:_\n- [VaultAuthGlobalSpec](#vaultauthglobalspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `role` _string_ | Role to use for authenticating to Vault. |  |  |\n| `serviceAccount` _string_ | ServiceAccount to use when authenticating to Vault's<br \/>authentication backend. This must reside in the consuming secret's (VDS\/VSS\/PKI) namespace. |  |  |\n| `audiences` _string array_ | TokenAudiences to include in the ServiceAccount token. |  |  |\n| `tokenExpirationSeconds` _integer_ | TokenExpirationSeconds to set the ServiceAccount token. | 600 | Minimum: 600 <br \/> |\n| `namespace` _string_ | Namespace to auth to in Vault |  |  |\n| `mount` _string_ | Mount to use when authenticating to auth method. |  |  |\n| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault |  |  |\n| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. |  |  |\n\n\n#### VaultAuthGlobalList\n\n\n\nVaultAuthGlobalList contains a list of VaultAuthGlobal\n\n\n\n\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `VaultAuthGlobalList` | | |\n| `metadata` _[ListMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |  |  |\n| `items` _[VaultAuthGlobal](#vaultauthglobal) array_ |  |  |  |\n\n\n#### VaultAuthGlobalRef\n\n\n\nVaultAuthGlobalRef is a reference to a VaultAuthGlobal resource. 
A referring\nVaultAuth resource can use the VaultAuthGlobal resource to share common\nconfiguration across multiple VaultAuth resources.\n\n\n\n_Appears in:_\n- [VaultAuthSpec](#vaultauthspec)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `name` _string_ | Name of the VaultAuthGlobal resource. |  | Pattern: `^([a-z0-9.-]{1,253})$` <br \/> |\n| `namespace` _string_ | Namespace of the VaultAuthGlobal resource. If not provided, the namespace of<br \/>the referring VaultAuth resource is used. |  | Pattern: `^([a-z0-9.-]{1,253})$` <br \/> |\n| `mergeStrategy` _[MergeStrategy](#mergestrategy)_ | MergeStrategy configures the merge strategy for HTTP headers and parameters<br \/>that are included in all Vault authentication requests. |  |  |\n| `allowDefault` _boolean_ | AllowDefault when set to true will use the default VaultAuthGlobal resource<br \/>as the default if Name is not set. The `allow-default-globals` option must be<br \/>set on the operator's `-global-vault-auth-options` flag.<br \/><br \/>The default VaultAuthGlobal search is conditional.<br \/>When a ref Namespace is set, the search for the default<br \/>VaultAuthGlobal resource is constrained to that namespace.<br \/>Otherwise, the search order is:<br \/>1. The default VaultAuthGlobal resource in the referring VaultAuth resource's<br \/>namespace.<br \/>2. The default VaultAuthGlobal resource in the Operator's namespace. |  |  |\n\n\n#### VaultAuthGlobalSpec\n\n\n\nVaultAuthGlobalSpec defines the desired state of VaultAuthGlobal\n\n\n\n_Appears in:_\n- [VaultAuthGlobal](#vaultauthglobal)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `allowedNamespaces` _string array_ | AllowedNamespaces Kubernetes Namespaces which are allow-listed for use with<br \/>this VaultAuthGlobal.
This field allows administrators to customize which<br \/>Kubernetes namespaces are authorized to reference this resource. While Vault<br \/>will still enforce its own rules, this has the added configurability of<br \/>restricting which VaultAuthMethods can be used by which namespaces. Accepted<br \/>values: []{\"*\"} - wildcard, all namespaces. []{\"a\", \"b\"} - list of namespaces.<br \/>unset - disallow all namespaces except the Operator's and the referring<br \/>VaultAuthMethod's namespace; this is the default behavior. |  |  |\n| `vaultConnectionRef` _string_ | VaultConnectionRef to the VaultConnection resource, can be prefixed with a namespace,<br \/>eg: `namespaceA\/vaultConnectionRefB`. If no namespace prefix is provided it will default to<br \/>the namespace of the VaultConnection CR. If no value is specified for VaultConnectionRef the<br \/>Operator will default to the `default` VaultConnection, configured in the operator's namespace. |  |  |\n| `defaultVaultNamespace` _string_ | DefaultVaultNamespace to auth to in Vault; if not specified, the namespace of the auth<br \/>method will be used. This can be used as a default Vault namespace for all<br \/>auth methods. |  |  |\n| `defaultAuthMethod` _string_ | DefaultAuthMethod to use when authenticating to Vault. |  | Enum: [kubernetes jwt appRole aws gcp] <br \/> |\n| `defaultMount` _string_ | DefaultMount to use when authenticating to auth method. If not specified, the mount of<br \/>the auth method configured in Vault will be used. |  |  |\n| `params` _object (keys:string, values:string)_ | DefaultParams to use when authenticating to Vault |  |  |\n| `headers` _object (keys:string, values:string)_ | DefaultHeaders to be included in all Vault requests. |  |  |\n| `kubernetes` _[VaultAuthGlobalConfigKubernetes](#vaultauthglobalconfigkubernetes)_ | Kubernetes specific auth configuration, requires that the Method be set to `kubernetes`.
|  |  |\n| `appRole` _[VaultAuthGlobalConfigAppRole](#vaultauthglobalconfigapprole)_ | AppRole specific auth configuration, requires that the Method be set to `appRole`. |  |  |\n| `jwt` _[VaultAuthGlobalConfigJWT](#vaultauthglobalconfigjwt)_ | JWT specific auth configuration, requires that the Method be set to `jwt`. |  |  |\n| `aws` _[VaultAuthGlobalConfigAWS](#vaultauthglobalconfigaws)_ | AWS specific auth configuration, requires that Method be set to `aws`. |  |  |\n| `gcp` _[VaultAuthGlobalConfigGCP](#vaultauthglobalconfiggcp)_ | GCP specific auth configuration, requires that Method be set to `gcp`. |  |  |\n\n\n\n\n#### VaultAuthList\n\n\n\nVaultAuthList contains a list of VaultAuth\n\n\n\n\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `VaultAuthList` | | |\n| `metadata` _[ListMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |  |  |\n| `items` _[VaultAuth](#vaultauth) array_ |  |  |  |\n\n\n#### VaultAuthSpec\n\n\n\nVaultAuthSpec defines the desired state of VaultAuth\n\n\n\n_Appears in:_\n- [VaultAuth](#vaultauth)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `vaultConnectionRef` _string_ | VaultConnectionRef to the VaultConnection resource, can be prefixed with a namespace,<br \/>eg: `namespaceA\/vaultConnectionRefB`. If no namespace prefix is provided it will default to<br \/>the namespace of the VaultConnection CR. If no value is specified for VaultConnectionRef the<br \/>Operator will default to the `default` VaultConnection, configured in the operator's namespace. |  |  |\n| `vaultAuthGlobalRef` _[VaultAuthGlobalRef](#vaultauthglobalref)_ | VaultAuthGlobalRef. 
|  |  |\n| `namespace` _string_ | Namespace to auth to in Vault |  |  |\n| `allowedNamespaces` _string array_ | AllowedNamespaces Kubernetes Namespaces which are allow-listed for use with this AuthMethod.<br \/>This field allows administrators to customize which Kubernetes namespaces are authorized to<br \/>use this AuthMethod. While Vault will still enforce its own rules, this has the added<br \/>configurability of restricting which VaultAuthMethods can be used by which namespaces.<br \/>Accepted values:<br \/>[]{\"*\"} - wildcard, all namespaces.<br \/>[]{\"a\", \"b\"} - list of namespaces.<br \/>unset - disallow all namespaces except the Operator's and the VaultAuthMethod's namespace; this<br \/>is the default behavior. |  |  |\n| `method` _string_ | Method to use when authenticating to Vault. |  | Enum: [kubernetes jwt appRole aws gcp] <br \/> |\n| `mount` _string_ | Mount to use when authenticating to auth method. |  |  |\n| `params` _object (keys:string, values:string)_ | Params to use when authenticating to Vault |  |  |\n| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. |  |  |\n| `kubernetes` _[VaultAuthConfigKubernetes](#vaultauthconfigkubernetes)_ | Kubernetes specific auth configuration, requires that the Method be set to `kubernetes`. |  |  |\n| `appRole` _[VaultAuthConfigAppRole](#vaultauthconfigapprole)_ | AppRole specific auth configuration, requires that the Method be set to `appRole`. |  |  |\n| `jwt` _[VaultAuthConfigJWT](#vaultauthconfigjwt)_ | JWT specific auth configuration, requires that the Method be set to `jwt`. |  |  |\n| `aws` _[VaultAuthConfigAWS](#vaultauthconfigaws)_ | AWS specific auth configuration, requires that Method be set to `aws`. |  |  |\n| `gcp` _[VaultAuthConfigGCP](#vaultauthconfiggcp)_ | GCP specific auth configuration, requires that Method be set to `gcp`.
|  |  |\n| `storageEncryption` _[StorageEncryption](#storageencryption)_ | StorageEncryption provides the necessary configuration to encrypt the client storage cache.<br \/>This should only be configured when client cache persistence with encryption is enabled.<br \/>This is done by setting the manager's command-line argument<br \/>`--client-cache-persistence-model=direct-encrypted`. Typically, there should only ever<br \/>be one VaultAuth configured with StorageEncryption in the cluster, and it should have<br \/>the label `cacheStorageEncryption=true`. |  |  |\n\n\n\n\n#### VaultClientMeta\n\n\n\nVaultClientMeta defines the observed state of the last Vault Client used to\nsync the secret. This status is used during resource reconciliation.\n\n\n\n_Appears in:_\n- [VaultDynamicSecretStatus](#vaultdynamicsecretstatus)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `cacheKey` _string_ | CacheKey is the unique key used to identify the client cache. |  |  |\n| `id` _string_ | ID is the Vault ID of the authenticated client. The ID should never contain<br \/>any sensitive information. |  |  |\n\n\n#### VaultConnection\n\n\n\nVaultConnection is the Schema for the vaultconnections API\n\n\n\n_Appears in:_\n- [VaultConnectionList](#vaultconnectionlist)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `VaultConnection` | | |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`.
|  |  |\n| `spec` _[VaultConnectionSpec](#vaultconnectionspec)_ |  |  |  |\n\n\n#### VaultConnectionList\n\n\n\nVaultConnectionList contains a list of VaultConnection\n\n\n\n\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `VaultConnectionList` | | |\n| `metadata` _[ListMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |  |  |\n| `items` _[VaultConnection](#vaultconnection) array_ |  |  |  |\n\n\n#### VaultConnectionSpec\n\n\n\nVaultConnectionSpec defines the desired state of VaultConnection\n\n\n\n_Appears in:_\n- [VaultConnection](#vaultconnection)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `address` _string_ | Address of the Vault server |  |  |\n| `headers` _object (keys:string, values:string)_ | Headers to be included in all Vault requests. |  |  |\n| `tlsServerName` _string_ | TLSServerName to use as the SNI host for TLS connections. |  |  |\n| `caCertSecretRef` _string_ | CACertSecretRef is the name of a Kubernetes secret containing the trusted PEM encoded CA certificate chain as `ca.crt`. |  |  |\n| `skipTLSVerify` _boolean_ | SkipTLSVerify for TLS connections. | false |  |\n| `timeout` _string_ | Timeout applied to all Vault requests for this connection. If not set, the<br \/>default timeout from the Vault API client config is used. 
|  | Pattern: `^([0-9]+(\\.[0-9]+)?(s|m|h))$` <br \/>Type: string <br \/> |\n\n\n\n\n#### VaultDynamicSecret\n\n\n\nVaultDynamicSecret is the Schema for the vaultdynamicsecrets API\n\n\n\n_Appears in:_\n- [VaultDynamicSecretList](#vaultdynamicsecretlist)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `VaultDynamicSecret` | | |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |  |  |\n| `spec` _[VaultDynamicSecretSpec](#vaultdynamicsecretspec)_ |  |  |  |\n\n\n#### VaultDynamicSecretList\n\n\n\nVaultDynamicSecretList contains a list of VaultDynamicSecret\n\n\n\n\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `VaultDynamicSecretList` | | |\n| `metadata` _[ListMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |  |  |\n| `items` _[VaultDynamicSecret](#vaultdynamicsecret) array_ |  |  |  |\n\n\n#### VaultDynamicSecretSpec\n\n\n\nVaultDynamicSecretSpec defines the desired state of VaultDynamicSecret\n\n\n\n_Appears in:_\n- [VaultDynamicSecret](#vaultdynamicsecret)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `vaultAuthRef` _string_ | VaultAuthRef to the VaultAuth resource, can be prefixed with a namespace,<br \/>eg: `namespaceA\/vaultAuthRefB`. If no namespace prefix is provided it will default to<br \/>the namespace of the VaultAuth CR. If no value is specified for VaultAuthRef the Operator<br \/>will default to the `default` VaultAuth, configured in the operator's namespace. 
|  |  |\n| `namespace` _string_ | Namespace of the secrets engine mount in Vault. If not set, the namespace that's<br \/>part of the VaultAuth resource will be inferred. |  |  |\n| `mount` _string_ | Mount path of the secrets engine in Vault. |  |  |\n| `requestHTTPMethod` _string_ | RequestHTTPMethod to use when syncing Secrets from Vault.<br \/>Setting a value here is not typically required.<br \/>If left unset the Operator will make requests using the GET method.<br \/>In the case where Params are specified the Operator will use the PUT method.<br \/>Please consult [secrets](\/vault\/docs\/secrets) if you are<br \/>uncertain about what method to use.<br \/>Of note, the Vault client treats PUT and POST as being equivalent.<br \/>The underlying Vault client implementation will always use the PUT method. |  | Enum: [GET POST PUT] <br \/> |\n| `path` _string_ | Path in Vault to get the credentials for; it is relative to Mount.<br \/>Please consult [secrets](\/vault\/docs\/secrets) if you are<br \/>uncertain about what `path` should be set to. |  |  |\n| `params` _object (keys:string, values:string)_ | Params that can be passed when requesting credentials\/secrets.<br \/>When Params is set the configured RequestHTTPMethod will be<br \/>ignored. See RequestHTTPMethod for more details.<br \/>Please consult [secrets](\/vault\/docs\/secrets) if you are<br \/>uncertain about what `params` should\/can be set to. |  |  |\n| `renewalPercent` _integer_ | RenewalPercent is the percentage of the lease duration at which the<br \/>lease is renewed. Defaults to 67 percent plus jitter. | 67 | Maximum: 90 <br \/>Minimum: 0 <br \/> |\n| `revoke` _boolean_ | Revoke the existing lease on VDS resource deletion. |  |  |\n| `allowStaticCreds` _boolean_ | AllowStaticCreds should be set when syncing credentials that are periodically<br \/>rotated by the Vault server, rather than created upon request.
These secrets<br \/>are sometimes referred to as \"static roles\" or \"static credentials\", with a<br \/>request path that contains \"static-creds\". |  |  |\n| `rolloutRestartTargets` _[RolloutRestartTarget](#rolloutrestarttarget) array_ | RolloutRestartTargets should be configured whenever the application(s) consuming the Vault secret does<br \/>not support dynamically reloading a rotated secret.<br \/>In that case, one or more RolloutRestartTarget(s) can be configured here. The Operator will<br \/>trigger a \"rollout-restart\" for each target whenever the Vault secret changes between reconciliation events.<br \/>See RolloutRestartTarget for more details. |  |  |\n| `destination` _[Destination](#destination)_ | Destination provides configuration necessary for syncing the Vault secret to Kubernetes. |  |  |\n| `refreshAfter` _string_ | RefreshAfter a period of time for VSO to sync the source secret data, in<br \/>duration notation e.g. 30s, 1m, 24h. This value only needs to be set when<br \/>syncing from a secrets engine that does not provide a lease TTL in its<br \/>response. The value should be within the secrets engine's configured `ttl` or<br \/>`max_ttl`. The source secret's lease duration takes precedence over this<br \/>configuration when it is greater than 0. |  | Pattern: `^([0-9]+(\\.[0-9]+)?(s|m|h))$` <br \/>Type: string <br \/> |\n\n\n\n\n#### VaultPKISecret\n\n\n\nVaultPKISecret is the Schema for the vaultpkisecrets API\n\n\n\n_Appears in:_\n- [VaultPKISecretList](#vaultpkisecretlist)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `VaultPKISecret` | | |\n| `metadata` _[ObjectMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`.
|  |  |\n| `spec` _[VaultPKISecretSpec](#vaultpkisecretspec)_ |  |  |  |\n\n\n#### VaultPKISecretList\n\n\n\nVaultPKISecretList contains a list of VaultPKISecret\n\n\n\n\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `apiVersion` _string_ | `secrets.hashicorp.com\/v1beta1` | | |\n| `kind` _string_ | `VaultPKISecretList` | | |\n| `metadata` _[ListMeta](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |  |  |\n| `items` _[VaultPKISecret](#vaultpkisecret) array_ |  |  |  |\n\n\n#### VaultPKISecretSpec\n\n\n\nVaultPKISecretSpec defines the desired state of VaultPKISecret\n\n\n\n_Appears in:_\n- [VaultPKISecret](#vaultpkisecret)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `vaultAuthRef` _string_ | VaultAuthRef to the VaultAuth resource, can be prefixed with a namespace,<br \/>eg: `namespaceA\/vaultAuthRefB`. If no namespace prefix is provided it will default to<br \/>the namespace of the VaultAuth CR. If no value is specified for VaultAuthRef the Operator<br \/>will default to the `default` VaultAuth, configured in the operator's namespace. |  |  |\n| `namespace` _string_ | Namespace of the secrets engine mount in Vault. If not set, the namespace that's<br \/>part of the VaultAuth resource will be inferred. |  |  |\n| `mount` _string_ | Mount for the secret in Vault |  |  |\n| `role` _string_ | Role in Vault to use when issuing TLS certificates. |  |  |\n| `revoke` _boolean_ | Revoke the certificate when the resource is deleted. |  |  |\n| `clear` _boolean_ | Clear the Kubernetes secret when the resource is deleted. |  |  |\n| `expiryOffset` _string_ | ExpiryOffset to use for computing when the certificate should be renewed.<br \/>The rotation time will be the difference between the expiration and the offset.<br \/>Should be in duration notation e.g. 30s, 120s, etc.
|  | Pattern: `^([0-9]+(\\.[0-9]+)?(s|m|h))$` <br \/>Type: string <br \/> |\n| `issuerRef` _string_ | IssuerRef reference to an existing PKI issuer, either by Vault-generated<br \/>identifier, the literal string `default` to refer to the currently<br \/>configured default issuer, or the name assigned to an issuer.<br \/>This parameter is part of the request URL. |  |  |\n| `rolloutRestartTargets` _[RolloutRestartTarget](#rolloutrestarttarget) array_ | RolloutRestartTargets should be configured whenever the application(s) consuming the Vault secret does<br \/>not support dynamically reloading a rotated secret.<br \/>In that case, one or more RolloutRestartTarget(s) can be configured here. The Operator will<br \/>trigger a \"rollout-restart\" for each target whenever the Vault secret changes between reconciliation events.<br \/>See RolloutRestartTarget for more details. |  |  |\n| `destination` _[Destination](#destination)_ | Destination provides configuration necessary for syncing the Vault secret<br \/>to Kubernetes. If the type is set to \"kubernetes.io\/tls\", \"tls.key\" will<br \/>be set to the \"private_key\" response from Vault, and \"tls.crt\" will be<br \/>set to \"certificate\" + \"ca_chain\" from the Vault response (\"issuing_ca\"<br \/>is used when \"ca_chain\" is empty). The \"remove_roots_from_chain=true\"<br \/>option is used with Vault to exclude the root CA from the Vault response. |  |  |\n| `commonName` _string_ | CommonName to include in the request. |  |  |\n| `altNames` _string array_ | AltNames to include in the request.<br \/>May contain both DNS names and email addresses. |  |  |\n| `ipSans` _string array_ | IPSans to include in the request. |  |  |\n| `uriSans` _string array_ | The requested URI SANs. |  |  |\n| `otherSans` _string array_ | Requested other SANs, in an array with the format<br \/>`oid;type:value` for each entry.
|  |  |\n| `userIDs` _string array_ | User ID (OID 0.9.2342.19200300.100.1.1) Subject values to be placed on the<br \/>signed certificate. |  |  |\n| `ttl` _string_ | TTL for the certificate; sets the expiration date.<br \/>If not specified the Vault role's default,<br \/>backend default, or system default TTL is used, in that order.<br \/>Cannot be larger than the mount's max TTL.<br \/>Note: this only has an effect when generating a CA cert or signing a CA cert,<br \/>not when generating a CSR for an intermediate CA.<br \/>Should be in duration notation e.g. 120s, 2h, etc. |  | Pattern: `^([0-9]+(\\.[0-9]+)?(s|m|h))$` <br \/>Type: string <br \/> |\n| `format` _string_ | Format for the certificate. Choices: \"pem\", \"der\", \"pem_bundle\".<br \/>If \"pem_bundle\",<br \/>any private key and issuing cert will be appended to the certificate pem.<br \/>If \"der\", the value will be base64 encoded.<br \/>Default: pem |  |  |\n| `privateKeyFormat` _string_ | PrivateKeyFormat, generally the default will be controlled by the Format<br \/>parameter as either base64-encoded DER or PEM-encoded DER.<br \/>However, this can be set to \"pkcs8\" to have the returned<br \/>private key contain base64-encoded pkcs8 or PEM-encoded<br \/>pkcs8 instead.<br \/>Default: der |  |  |\n| `notAfter` _string_ | NotAfter field of the certificate with specified date value.<br \/>The value format should be given in UTC format YYYY-MM-ddTHH:MM:SSZ |  |  |\n| `excludeCNFromSans` _boolean_ | ExcludeCNFromSans from DNS or Email Subject Alternate Names.<br \/>Default: false |  |  |\n\n\n\n\n#### VaultSecretLease\n\n\n\n\n\n\n\n_Appears in:_\n- [VaultDynamicSecretStatus](#vaultdynamicsecretstatus)\n\n| Field | Description | Default | Validation |\n| --- | --- | --- | --- |\n| `id` _string_ | ID of the Vault secret. |  |  |\n| `duration` _integer_ | LeaseDuration of the Vault secret. 
|  |  |
| `renewable` _boolean_ | Renewable Vault secret lease. |  |  |
| `requestID` _string_ | RequestID of the Vault secret request. |  |  |


#### VaultStaticCredsMetaData

_Appears in:_
- [VaultDynamicSecretStatus](#vaultdynamicsecretstatus)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `lastVaultRotation` _integer_ | LastVaultRotation represents the last time Vault rotated the password. |  |  |
| `rotationPeriod` _integer_ | RotationPeriod is the number of seconds between each rotation, effectively a<br />"time to live". This value is compared to the LastVaultRotation to<br />determine if a password needs to be rotated. |  |  |
| `rotationSchedule` _string_ | RotationSchedule is a "cron style" string representing the allowed<br />schedule for each rotation.<br />e.g. "1 0 * * *" would rotate at one minute past midnight (00:01) every<br />day. |  |  |
| `ttl` _integer_ | TTL is the seconds remaining before the next rotation. |  |  |


#### VaultStaticSecret

VaultStaticSecret is the Schema for the vaultstaticsecrets API.

_Appears in:_
- [VaultStaticSecretList](#vaultstaticsecretlist)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultStaticSecret` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`.
|  |  |
| `spec` _[VaultStaticSecretSpec](#vaultstaticsecretspec)_ |  |  |  |


#### VaultStaticSecretList

VaultStaticSecretList contains a list of VaultStaticSecret.

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultStaticSecretList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. |  |  |
| `items` _[VaultStaticSecret](#vaultstaticsecret) array_ |  |  |  |


#### VaultStaticSecretSpec

VaultStaticSecretSpec defines the desired state of VaultStaticSecret.

_Appears in:_
- [VaultStaticSecret](#vaultstaticsecret)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `vaultAuthRef` _string_ | VaultAuthRef to the VaultAuth resource; can be prefixed with a namespace,<br />e.g. `namespaceA/vaultAuthRefB`. If no namespace prefix is provided it will default to the<br />namespace of the VaultAuth CR. If no value is specified for VaultAuthRef the Operator will<br />default to the `default` VaultAuth, configured in the operator's namespace. |  |  |
| `namespace` _string_ | Namespace of the secrets engine mount in Vault. If not set, the namespace that's<br />part of the VaultAuth resource will be inferred. |  |  |
| `mount` _string_ | Mount for the secret in Vault. |  |  |
| `path` _string_ | Path of the secret in Vault; corresponds to the `path` parameter for<br />[kv-v1](/vault/api-docs/secret/kv/kv-v1#read-secret) and [kv-v2](/vault/api-docs/secret/kv/kv-v2#read-secret-version). |  |  |
| `version` _integer_ | Version of the secret to fetch. Only valid for type kv-v2.
Corresponds to the version query parameter:<br />[version](/vault/api-docs/secret/kv/kv-v2#version) |  | Minimum: 0 <br /> |
| `type` _string_ | Type of the Vault static secret. |  | Enum: [kv-v1 kv-v2] <br /> |
| `refreshAfter` _string_ | RefreshAfter a period of time, in duration notation, e.g. 30s, 1m, 24h. |  | Pattern: `^([0-9]+(\.[0-9]+)?(s|m|h))$` <br />Type: string <br /> |
| `hmacSecretData` _boolean_ | HMACSecretData determines whether the Operator computes the<br />HMAC of the Secret's data. The MAC value will be stored in<br />the resource's Status.SecretMac field, and will be used for drift detection<br />and during incoming Vault secret comparison.<br />Enabling this feature is recommended to ensure that the Secret's data stays consistent with Vault. | true |  |
| `rolloutRestartTargets` _[RolloutRestartTarget](#rolloutrestarttarget) array_ | RolloutRestartTargets should be configured whenever the application(s) consuming the Vault secret do<br />not support dynamically reloading a rotated secret.<br />In that case, one or more RolloutRestartTarget(s) can be configured here. The Operator will<br />trigger a "rollout-restart" for each target whenever the Vault secret changes between reconciliation events.<br />All configured targets will be ignored if HMACSecretData is set to false.<br />See RolloutRestartTarget for more details. |  |  |
| `destination` _[Destination](#destination)_ | Destination provides configuration necessary for syncing the Vault secret to Kubernetes.
|  |  |
| `syncConfig` _[SyncConfig](#syncconfig)_ | SyncConfig configures sync behavior from Vault to VSO. |  |  |
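The VaultStaticSecretSpec fields above map directly onto a manifest. Below is an illustrative sketch, not a canonical example: the resource names, namespace, mount, and path (`example-app`, `vso-example`, `secret`, `app/config`) are all hypothetical placeholders.

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: example-app        # hypothetical name
  namespace: vso-example   # hypothetical namespace
spec:
  # Optional; if omitted, the Operator falls back to the `default`
  # VaultAuth configured in the operator's namespace.
  vaultAuthRef: vso-example/example-auth
  mount: secret            # secrets engine mount in Vault
  path: app/config         # path under the mount
  type: kv-v2              # Enum: [kv-v1 kv-v2]
  version: 2               # only valid for type kv-v2
  refreshAfter: 30s        # must match ^([0-9]+(\.[0-9]+)?(s|m|h))$
  hmacSecretData: true     # default; required for drift detection
  rolloutRestartTargets:   # for apps that can't reload a rotated secret
    - kind: Deployment
      name: example-app
  destination:
    name: example-app-secret  # Kubernetes Secret to sync into
    create: true              # set false if the Secret already exists
```

Note that `rolloutRestartTargets` is ignored when `hmacSecretData` is set to `false`, per the field description above.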
  Params to use when authenticating to Vault            headers   object  keys string  values string     Headers to be included in all Vault requests                 VaultAuthGlobalConfigKubernetes         Appears in      VaultAuthGlobalSpec   vaultauthglobalspec     Field   Description   Default   Validation                                role   string    Role to use for authenticating to Vault             serviceAccount   string    ServiceAccount to use when authenticating to Vault s br   authentication backend  This must reside in the consuming secret s  VDS VSS PKI  namespace             audiences   string array    TokenAudiences to include in the ServiceAccount token             tokenExpirationSeconds   integer    TokenExpirationSeconds to set the ServiceAccount token    600   Minimum  600  br         namespace   string    Namespace to auth to in Vault            mount   string    Mount to use when authenticating to auth method             params   object  keys string  values string     Params to use when authenticating to Vault            headers   object  keys string  values string     Headers to be included in all Vault requests                 VaultAuthGlobalList    VaultAuthGlobalList contains a list of VaultAuthGlobal        Field   Description   Default   Validation                                apiVersion   string     secrets hashicorp com v1beta1           kind   string     VaultAuthGlobalList           metadata    ListMeta  https   kubernetes io docs reference generated kubernetes api v1 24  listmeta v1 meta     Refer to Kubernetes API documentation for fields of  metadata              items    VaultAuthGlobal   vaultauthglobal  array                    VaultAuthGlobalRef    VaultAuthGlobalRef is a reference to a VaultAuthGlobal resource  A referring VaultAuth resource can use the VaultAuthGlobal resource to share common configuration across multiple VaultAuth resources  The VaultAuthGlobal resource is used to store global configuration for 
VaultAuth resources.

_Appears in:_
- [VaultAuthSpec](#vaultauthspec)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `name` _string_ | Name of the VaultAuthGlobal resource. | | Pattern: `^[a-z0-9.-]{1,253}$` |
| `namespace` _string_ | Namespace of the VaultAuthGlobal resource. If not provided, the namespace of the referring VaultAuth resource is used. | | Pattern: `^[a-z0-9.-]{1,253}$` |
| `mergeStrategy` _[MergeStrategy](#mergestrategy)_ | MergeStrategy configures the merge strategy for HTTP headers and parameters that are included in all Vault authentication requests. | | |
| `allowDefault` _boolean_ | AllowDefault, when set to true, will use the default VaultAuthGlobal resource as the default if Name is not set. The `allow-default-globals` option must be set on the operator's `--global-vault-auth-options` flag. The default VaultAuthGlobal search is conditional: when a ref Namespace is set, the search for the default VaultAuthGlobal resource is constrained to that namespace. Otherwise, the search order is: 1. the default VaultAuthGlobal resource in the referring VaultAuth resource's namespace; 2. the default VaultAuthGlobal resource in the Operator's namespace. | | |

### VaultAuthGlobalSpec

VaultAuthGlobalSpec defines the desired state of VaultAuthGlobal.

_Appears in:_
- [VaultAuthGlobal](#vaultauthglobal)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `allowedNamespaces` _string array_ | AllowedNamespaces are Kubernetes Namespaces which are allow-listed for use with this VaultAuthGlobal. This field allows administrators to customize which Kubernetes namespaces are authorized to reference this resource. While Vault will still enforce its own rules, this has the added configurability of restricting which VaultAuthMethods can be used by which namespaces. Accepted values: `*` (wildcard, all namespaces); `[a, b]` (list of namespaces); unset (disallow all namespaces except the Operator's and the referring VaultAuthMethod's namespace; this is the default behavior). | | |
| `vaultConnectionRef` _string_ | VaultConnectionRef to the VaultConnection resource, can be prefixed with a namespace, e.g. `namespaceA/vaultConnectionRefB`. If no namespace prefix is provided it will default to the namespace of the VaultConnection CR. If no value is specified for VaultConnectionRef, the Operator will default to the `default` VaultConnection, configured in the operator's namespace. | | |
| `defaultVaultNamespace` _string_ | DefaultVaultNamespace to auth to in Vault; if not specified, the namespace of the auth method will be used. This can be used as a default Vault namespace for all auth methods. | | |
| `defaultAuthMethod` _string_ | DefaultAuthMethod to use when authenticating to Vault. | | Enum: [kubernetes jwt appRole aws gcp] |
| `defaultMount` _string_ | DefaultMount to use when authenticating to auth method. If not specified, the mount of the auth method configured in Vault will be used. | | |
| `params` _object (keys: string, values: string)_ | DefaultParams to use when authenticating to Vault. | | |
| `headers` _object (keys: string, values: string)_ | DefaultHeaders to be included in all Vault requests. | | |
| `kubernetes` _[VaultAuthGlobalConfigKubernetes](#vaultauthglobalconfigkubernetes)_ | Kubernetes specific auth configuration; requires that the Method be set to `kubernetes`. | | |
| `appRole` _[VaultAuthGlobalConfigAppRole](#vaultauthglobalconfigapprole)_ | AppRole specific auth configuration; requires that the Method be set to `appRole`. | | |
| `jwt` _[VaultAuthGlobalConfigJWT](#vaultauthglobalconfigjwt)_ | JWT specific auth configuration; requires that the Method be set to `jwt`. | | |
| `aws` _[VaultAuthGlobalConfigAWS](#vaultauthglobalconfigaws)_ | AWS specific auth configuration; requires that Method be set to `aws`. | | |
| `gcp` _[VaultAuthGlobalConfigGCP](#vaultauthglobalconfiggcp)_ | GCP specific auth configuration; requires that Method be set to `gcp`. | | |

### VaultAuthList

VaultAuthList contains a list of VaultAuth.

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultAuthList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VaultAuth](#vaultauth) array_ | | | |

### VaultAuthSpec

VaultAuthSpec defines the desired state of VaultAuth.

_Appears in:_
- [VaultAuth](#vaultauth)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `vaultConnectionRef` _string_ | VaultConnectionRef to the VaultConnection resource, can be prefixed with a namespace, e.g. `namespaceA/vaultConnectionRefB`. If no namespace prefix is provided it will default to the namespace of the VaultConnection CR. If no value is specified for VaultConnectionRef, the Operator will default to the `default` VaultConnection, configured in the operator's namespace. | | |
| `vaultAuthGlobalRef` _[VaultAuthGlobalRef](#vaultauthglobalref)_ | VaultAuthGlobalRef. | | |
| `namespace` _string_ | Namespace to auth to in Vault. | | |
| `allowedNamespaces` _string array_ | AllowedNamespaces are Kubernetes Namespaces which are allow-listed for use with this AuthMethod. This field allows administrators to customize which Kubernetes namespaces are authorized to use with this AuthMethod. While Vault will still enforce its own rules, this has the added configurability of restricting which VaultAuthMethods can be used by which namespaces. Accepted values: `*` (wildcard, all namespaces); `[a, b]` (list of namespaces); unset (disallow all namespaces except the Operator's and the VaultAuthMethod's namespace; this is the default behavior). | | |
| `method` _string_ | Method to use when authenticating to Vault. | | Enum: [kubernetes jwt appRole aws gcp] |
| `mount` _string_ | Mount to use when authenticating to auth method. | | |
| `params` _object (keys: string, values: string)_ | Params to use when authenticating to Vault. | | |
| `headers` _object (keys: string, values: string)_ | Headers to be included in all Vault requests. | | |
| `kubernetes` _[VaultAuthConfigKubernetes](#vaultauthconfigkubernetes)_ | Kubernetes specific auth configuration; requires that the Method be set to `kubernetes`. | | |
| `appRole` _[VaultAuthConfigAppRole](#vaultauthconfigapprole)_ | AppRole specific auth configuration; requires that the Method be set to `appRole`. | | |
| `jwt` _[VaultAuthConfigJWT](#vaultauthconfigjwt)_ | JWT specific auth configuration; requires that the Method be set to `jwt`. | | |
| `aws` _[VaultAuthConfigAWS](#vaultauthconfigaws)_ | AWS specific auth configuration; requires that Method be set to `aws`. | | |
| `gcp` _[VaultAuthConfigGCP](#vaultauthconfiggcp)_ | GCP specific auth configuration; requires that Method be set to `gcp`. | | |
| `storageEncryption` _[StorageEncryption](#storageencryption)_ | StorageEncryption provides the necessary configuration to encrypt the client storage cache. This should only be configured when client cache persistence with encryption is enabled, which is done by setting the manager's command-line argument `--client-cache-persistence-model=direct-encrypted`. Typically, there should only ever be one VaultAuth configured with StorageEncryption in the Cluster, and it should have the label `cacheStorageEncryption=true`. | | |

### VaultClientMeta

VaultClientMeta defines the observed state of the last Vault Client used to sync the secret. This status is used during resource reconciliation.

_Appears in:_
- [VaultDynamicSecretStatus](#vaultdynamicsecretstatus)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `cacheKey` _string_ | CacheKey is the unique key used to identify the client cache. | | |
| `id` _string_ | ID is the Vault ID of the authenticated client. The ID should never contain any sensitive information. | | |

### VaultConnection

VaultConnection is the Schema for the vaultconnections API.

_Appears in:_
- [VaultConnectionList](#vaultconnectionlist)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultConnection` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[VaultConnectionSpec](#vaultconnectionspec)_ | | | |

### VaultConnectionList

VaultConnectionList contains a list of VaultConnection.

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultConnectionList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VaultConnection](#vaultconnection) array_ | | | |

### VaultConnectionSpec

VaultConnectionSpec defines the desired state of VaultConnection.

_Appears in:_
- [VaultConnection](#vaultconnection)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `address` _string_ | Address of the Vault server. | | |
| `headers` _object (keys: string, values: string)_ | Headers to be included in all Vault requests. | | |
| `tlsServerName` _string_ | TLSServerName to use as the SNI host for TLS connections. | | |
| `caCertSecretRef` _string_ | CACertSecretRef is the name of a Kubernetes secret containing the trusted PEM encoded CA certificate chain as `ca.crt`. | | |
| `skipTLSVerify` _boolean_ | SkipTLSVerify for TLS connections. | false | |
| `timeout` _string_ | Timeout applied to all Vault requests for this connection. If not set, the default timeout from the Vault API client config is used. | | Pattern: `^([0-9]+(\.[0-9]+)?(s\|m\|h))$` Type: string |

### VaultDynamicSecret

VaultDynamicSecret is the Schema for the vaultdynamicsecrets API.

_Appears in:_
- [VaultDynamicSecretList](#vaultdynamicsecretlist)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultDynamicSecret` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[VaultDynamicSecretSpec](#vaultdynamicsecretspec)_ | | | |

### VaultDynamicSecretList

VaultDynamicSecretList contains a list of VaultDynamicSecret.

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultDynamicSecretList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VaultDynamicSecret](#vaultdynamicsecret) array_ | | | |

### VaultDynamicSecretSpec

VaultDynamicSecretSpec defines the desired state of VaultDynamicSecret.

_Appears in:_
- [VaultDynamicSecret](#vaultdynamicsecret)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `vaultAuthRef` _string_ | VaultAuthRef to the VaultAuth resource, can be prefixed with a namespace, e.g. `namespaceA/vaultAuthRefB`. If no namespace prefix is provided it will default to the namespace of the VaultAuth CR. If no value is specified for VaultAuthRef, the Operator will default to the `default` VaultAuth, configured in the operator's namespace. | | |
| `namespace` _string_ | Namespace of the secrets engine mount in Vault. If not set, the namespace that's part of the VaultAuth resource will be inferred. | | |
| `mount` _string_ | Mount path of the secrets engine in Vault. | | |
| `requestHTTPMethod` _string_ | RequestHTTPMethod to use when syncing Secrets from Vault. Setting a value here is not typically required. If left unset, the Operator will make requests using the GET method; in the case where Params are specified, the Operator will use the PUT method. Please consult [secrets](/vault/docs/secrets) if you are uncertain about what method to use. Of note: the Vault client treats PUT and POST as being equivalent; the underlying Vault client implementation will always use the PUT method. | | Enum: [GET POST PUT] |
| `path` _string_ | Path in Vault to get the credentials for, relative to Mount. Please consult [secrets](/vault/docs/secrets) if you are uncertain about what `path` should be set to. | | |
| `params` _object (keys: string, values: string)_ | Params that can be passed when requesting credentials/secrets. When Params is set, the configured RequestHTTPMethod will be ignored; see RequestHTTPMethod for more details. Please consult [secrets](/vault/docs/secrets) if you are uncertain about what `params` should be set to. | | |
| `renewalPercent` _integer_ | RenewalPercent is the percent out of 100 of the lease duration at which the lease is renewed. Defaults to 67 percent plus jitter. | 67 | Maximum: 90 Minimum: 0 |
| `revoke` _boolean_ | Revoke the existing lease on VDS resource deletion. | | |
| `allowStaticCreds` _boolean_ | AllowStaticCreds should be set when syncing credentials that are periodically rotated by the Vault server, rather than created upon request. These secrets are sometimes referred to as "static roles" or "static credentials", with a request path that contains "static-creds". | | |
| `rolloutRestartTargets` _[RolloutRestartTarget](#rolloutrestarttarget) array_ | RolloutRestartTargets should be configured whenever the application(s) consuming the Vault secret does not support dynamically reloading a rotated secret. In that case one, or more, RolloutRestartTarget(s) can be configured here. The Operator will trigger a "rollout-restart" for each target whenever the Vault secret changes between reconciliation events. See RolloutRestartTarget for more details. | | |
| `destination` _[Destination](#destination)_ | Destination provides configuration necessary for syncing the Vault secret to Kubernetes. | | |
| `refreshAfter` _string_ | RefreshAfter a period of time for VSO to sync the source secret data, in duration notation, e.g. 30s, 1m, 24h. This value only needs to be set when syncing from a secrets engine that does not provide a lease TTL in its response. The value should be within the secret engine's configured ttl or max_ttl. The source secret's lease duration takes precedence over this configuration when it is greater than 0. | | Pattern: `^([0-9]+(\.[0-9]+)?(s\|m\|h))$` Type: string |

### VaultPKISecret

VaultPKISecret is the Schema for the vaultpkisecrets API.

_Appears in:_
- [VaultPKISecretList](#vaultpkisecretlist)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultPKISecret` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[VaultPKISecretSpec](#vaultpkisecretspec)_ | | | |
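The `renewalPercent` field described above, together with a lease's duration, determines when VSO renews a dynamic secret's lease. A minimal sketch of that arithmetic, where the uniform jitter term is an illustrative assumption rather than VSO's exact implementation:

```python
import random

def renewal_seconds(lease_duration: int, renewal_percent: int = 67) -> float:
    """Return the number of seconds after which a lease should be renewed.

    The lease is renewed at renewal_percent of its duration, plus jitter so
    that many leases do not all renew at the same instant (the uniform
    jitter model below is an assumption for illustration).
    """
    if not 0 <= renewal_percent <= 90:
        # Mirrors the CRD validation: Minimum 0, Maximum 90.
        raise ValueError("renewalPercent must be between 0 and 90")
    base = lease_duration * renewal_percent / 100
    jitter = random.uniform(0, 0.1 * lease_duration)
    return base + jitter

# A 1h lease with the default renewalPercent of 67 renews after roughly 40min.
t = renewal_seconds(3600)
assert 2412 <= t <= 2772
```

Capping `renewalPercent` at 90 leaves headroom so a renewal request can complete before the lease actually expires.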
### VaultPKISecretList

VaultPKISecretList contains a list of VaultPKISecret.

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultPKISecretList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VaultPKISecret](#vaultpkisecret) array_ | | | |

### VaultPKISecretSpec

VaultPKISecretSpec defines the desired state of VaultPKISecret.

_Appears in:_
- [VaultPKISecret](#vaultpkisecret)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `vaultAuthRef` _string_ | VaultAuthRef to the VaultAuth resource, can be prefixed with a namespace, e.g. `namespaceA/vaultAuthRefB`. If no namespace prefix is provided it will default to the namespace of the VaultAuth CR. If no value is specified for VaultAuthRef, the Operator will default to the `default` VaultAuth, configured in the operator's namespace. | | |
| `namespace` _string_ | Namespace of the secrets engine mount in Vault. If not set, the namespace that's part of the VaultAuth resource will be inferred. | | |
| `mount` _string_ | Mount for the secret in Vault. | | |
| `role` _string_ | Role in Vault to use when issuing TLS certificates. | | |
| `revoke` _boolean_ | Revoke the certificate when the resource is deleted. | | |
| `clear` _boolean_ | Clear the Kubernetes secret when the resource is deleted. | | |
| `expiryOffset` _string_ | ExpiryOffset to use for computing when the certificate should be renewed. The rotation time will be the difference between the expiration and the offset. Should be in duration notation, e.g. 30s, 120s, etc. | | Pattern: `^([0-9]+(\.[0-9]+)?(s\|m\|h))$` Type: string |
| `issuerRef` _string_ | IssuerRef reference to an existing PKI issuer, either by Vault-generated identifier, the literal string `default` to refer to the currently configured default issuer, or the name assigned to an issuer. This parameter is part of the request URL. | | |
| `rolloutRestartTargets` _[RolloutRestartTarget](#rolloutrestarttarget) array_ | RolloutRestartTargets should be configured whenever the application(s) consuming the Vault secret does not support dynamically reloading a rotated secret. In that case one, or more, RolloutRestartTarget(s) can be configured here. The Operator will trigger a "rollout-restart" for each target whenever the Vault secret changes between reconciliation events. See RolloutRestartTarget for more details. | | |
| `destination` _[Destination](#destination)_ | Destination provides configuration necessary for syncing the Vault secret to Kubernetes. If the type is set to `kubernetes.io/tls`, `tls.key` will be set to the `private_key` response from Vault, and `tls.crt` will be set to `certificate` + `ca_chain` from the Vault response (`issuing_ca` is used when `ca_chain` is empty). The `remove_roots_from_chain=true` option is used with Vault to exclude the root CA from the Vault response. | | |
| `commonName` _string_ | CommonName to include in the request. | | |
| `altNames` _string array_ | AltNames to include in the request. May contain both DNS names and email addresses. | | |
| `ipSans` _string array_ | IPSans to include in the request. | | |
| `uriSans` _string array_ | The requested URI SANs. | | |
| `otherSans` _string array_ | Requested other SANs, in an array with the format `oid;type:value` for each entry. | | |
| `userIDs` _string array_ | User ID (OID 0.9.2342.19200300.100.1.1) Subject values to be placed on the signed certificate. | | |
| `ttl` _string_ | TTL for the certificate; sets the expiration date. If not specified, the Vault role's default, backend default, or system default TTL is used, in that order. Cannot be larger than the mount's max TTL. Note: this only has an effect when generating a CA cert or signing a CA cert, not when generating a CSR for an intermediate CA. Should be in duration notation, e.g. 120s, 2h, etc. | | Pattern: `^([0-9]+(\.[0-9]+)?(s\|m\|h))$` Type: string |
| `format` _string_ | Format for the certificate. Choices: `pem`, `der`, `pem_bundle`. If `pem_bundle`, any private key and issuing cert will be appended to the certificate pem. If `der`, the value will be base64 encoded. Default: pem | | |
| `privateKeyFormat` _string_ | PrivateKeyFormat, generally the default, will be controlled by the Format parameter as either base64-encoded DER or PEM-encoded DER. However, this can be set to `pkcs8` to have the returned private key contain base64-encoded pkcs8 or PEM-encoded pkcs8 instead. Default: der | | |
| `notAfter` _string_ | NotAfter field of the certificate with specified date value. The value format should be given in UTC format YYYY-MM-ddTHH:MM:SSZ. | | |
| `excludeCNFromSans` _boolean_ | ExcludeCNFromSans from DNS or Email Subject Alternate Names. Default: false | | |

### VaultSecretLease

_Appears in:_
- [VaultDynamicSecretStatus](#vaultdynamicsecretstatus)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `id` _string_ | ID of the Vault secret. | | |
| `duration` _integer_ | LeaseDuration of the Vault secret. | | |
| `renewable` _boolean_ | Renewable Vault secret lease. | | |
| `requestID` _string_ | RequestID of the Vault secret request. | | |

### VaultStaticCredsMetaData

_Appears in:_
- [VaultDynamicSecretStatus](#vaultdynamicsecretstatus)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `lastVaultRotation` _integer_ | LastVaultRotation represents the last time Vault rotated the password. | | |
| `rotationPeriod` _integer_ | RotationPeriod is the number of seconds between each rotation, effectively a "time to live". This value is compared to the LastVaultRotation to determine if a password needs to be rotated. | | |
| `rotationSchedule` _string_ | RotationSchedule is a "cron style" string representing the allowed schedule for each rotation, e.g. `1 0 * * *` would rotate at one minute past midnight (00:01) every day. | | |
| `ttl` _integer_ | TTL is the seconds remaining before the next rotation. | | |

### VaultStaticSecret

VaultStaticSecret is the Schema for the vaultstaticsecrets API.

_Appears in:_
- [VaultStaticSecretList](#vaultstaticsecretlist)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultStaticSecret` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[VaultStaticSecretSpec](#vaultstaticsecretspec)_ | | | |

### VaultStaticSecretList

VaultStaticSecretList contains a list of VaultStaticSecret.

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `secrets.hashicorp.com/v1beta1` | | |
| `kind` _string_ | `VaultStaticSecretList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VaultStaticSecret](#vaultstaticsecret) array_ | | | |

### VaultStaticSecretSpec

VaultStaticSecretSpec defines the desired state of VaultStaticSecret.

_Appears in:_
- [VaultStaticSecret](#vaultstaticsecret)

| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `vaultAuthRef` _string_ | VaultAuthRef to the VaultAuth resource, can be prefixed with a namespace, e.g. `namespaceA/vaultAuthRefB`. If no namespace prefix is provided it will default to the namespace of the VaultAuth CR. If no value is specified for VaultAuthRef, the Operator will default to the `default` VaultAuth, configured in the operator's namespace. | | |
| `namespace` _string_ | Namespace of the secrets engine mount in Vault. If not set, the namespace that's part of the VaultAuth resource will be inferred. | | |
| `mount` _string_ | Mount for the secret in Vault. | | |
| `path` _string_ | Path of the secret in Vault; corresponds to the `path` parameter for [kv-v1](/vault/api-docs/secret/kv/kv-v1#read-secret) and [kv-v2](/vault/api-docs/secret/kv/kv-v2#read-secret-version). | | |
| `version` _integer_ | Version of the secret to fetch. Only valid for type kv-v2. Corresponds to the [version](/vault/api-docs/secret/kv/kv-v2#version) query parameter. | | Minimum: 0 |
| `type` _string_ | Type of the Vault static secret. | | Enum: [kv-v1 kv-v2] |
| `refreshAfter` _string_ | RefreshAfter a period of time, in duration notation, e.g. 30s, 1m, 24h. | | Pattern: `^([0-9]+(\.[0-9]+)?(s\|m\|h))$` Type: string |
| `hmacSecretData` _boolean_ | HMACSecretData determines whether the Operator computes the HMAC of the Secret's data. The MAC value will be stored in the resource's Status.SecretMac field, and will be used for drift detection and during incoming Vault secret comparison. Enabling this feature is recommended to ensure that the Secret's data stays consistent with Vault. | true | |
| `rolloutRestartTargets` _[RolloutRestartTarget](#rolloutrestarttarget) array_ | RolloutRestartTargets should be configured whenever the application(s) consuming the Vault secret does not support dynamically reloading a rotated secret. In that case one, or more, RolloutRestartTarget(s) can be configured here. The Operator will trigger a "rollout-restart" for each target whenever the Vault secret changes between reconciliation events. All configured targets will be ignored if HMACSecretData is set to false. See RolloutRestartTarget for more details. | | |
| `destination` _[Destination](#destination)_ | Destination provides configuration necessary for syncing the Vault secret to Kubernetes. | | |
| `syncConfig` _[SyncConfig](#syncconfig)_ | SyncConfig configures sync behavior from Vault to VSO. | | |
"}
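The `VaultStaticCredsMetaData` fields above (`lastVaultRotation`, `rotationPeriod`, `ttl`) can be combined to decide when a static-creds password is next due for rotation. A hedged sketch of that arithmetic, not VSO's actual reconciliation code:

```python
import time

def seconds_until_rotation(last_vault_rotation, rotation_period, now=None):
    """Seconds remaining before Vault next rotates a static-creds password.

    last_vault_rotation is a Unix timestamp and rotation_period is the
    period in seconds, mirroring the VaultStaticCredsMetaData fields.
    A non-positive result means a rotation is already overdue.
    """
    if now is None:
        now = time.time()
    return (last_vault_rotation + rotation_period) - now

# With a 1h period and a rotation that happened 45 minutes ago,
# 15 minutes (900 seconds) remain before the next rotation.
remaining = seconds_until_rotation(1_000_000, 3600, now=1_000_000 + 2700)
assert remaining == 900
```

This mirrors the documented relationship: the `ttl` status field is effectively this remaining time as reported by Vault.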
{"questions":"vault Utilizing advanced templating and data filters the Vault Secrets Operator for Kubernetes VSO can page title Vault Secrets Operator Secret Transformation layout docs Learn how to transform Secret data with the Vault Secrets Operator Secret data transformation","answers":"---\nlayout: docs\npage_title: Vault Secrets Operator Secret Transformation\ndescription: >-\n  Learn how to transform Secret data with the Vault Secrets Operator.\n---\n\n# Secret data transformation\n\nUtilizing advanced templating and data filters, the Vault Secrets Operator for Kubernetes (VSO) can\ntransform source secret data, secret metadata, resource labels and annotations into a format that is\ncompatible with your application. All secret data sources are supported. Secret transformations can\nbe specified directly within a secret custom resource (CR), or by references to one or more\n[SecretTransformation](\/vault\/docs\/platform\/k8s\/vso\/api-reference#secrettransformation) custom resource\ninstances, or both.\n\n## Templating\n\nVSO utilizes the data-driven [templates for Golang](https:\/\/pkg.go.dev\/text\/template) to generate\nsecret data output. The template data input holds the secret data, secret metadata, resource labels\nand annotations.\n\nTemplates are configured in a secret custom resource's\n[spec.Destination.Transformation.Templates](\/vault\/docs\/platform\/k8s\/vso\/api-reference#transformation),\nor in a SecretTransformation resource's [spec.templates](\/vault\/docs\/platform\/k8s\/vso\/api-reference#secrettransformationspec).\n\nVSO provides access to a large library of template functions, some of which are documented\n[below](#template-functions).\n\n### Secret data input\n\nSecret data is accessed through the `.Secrets` input member. 
It contains a map of secret\nkey-value pairs, which are assumed to be sensitive information fetched from a\n[secret source](\/vault\/docs\/platform\/k8s\/vso\/sources).\n\nFor example, to include a password in your application's secret, you might specify\na template like:\n```go\n\n```\n\n### Secret metadata input\n\nSecret metadata is accessed through the `.Metadata` input member. It contains a map of metadata key to\nits value. The data should not contain any confidential information.\n\nFor example, to include a secret metadata value in your application's secret, you might specify\na template like:\n```go\n\n```\n\n### Resource annotations input\n\nResource annotations are accessed through the `.Annotations` input member. The annotations consist\nof all `metadata.annotations` configured on the secret custom resource.\n\nFor example, to include a value from the resource's annotations in your application's secret, you\nmight specify a template like:\n```go\n\n```\n\n### Resource labels input\n\nResource labels are accessed through the `.Labels` input member. The labels consist\nof all `metadata.labels` configured on the secret custom resource.\n\nFor example, to include a value from the resource's labels in your application's secret, you\nmight specify a template like:\n```go\n\n```\n\n## Filters\n\nFilters are used to control which source secret data fields are included in the destination secret's\ndata. 
They are specified as a set of exclude\/include RE2 accepted [regular expressions](https:\/\/golang.org\/s\/re2syntax).\n\nFilters are configured in the `excludes` and `includes` fields of a secret custom resource's\n[spec.Destination.Transformation](\/vault\/docs\/platform\/k8s\/vso\/api-reference#transformation),\nor in a SecretTransformation resource's [spec](\/vault\/docs\/platform\/k8s\/vso\/api-reference#secrettransformationspec).\n\nAll exclude patterns take precedence over any include patterns, and are never applied to templated keys.\n\n## Examples\n\n### Local transformation\n\nA VaultDynamicSecret configured to sync Postgres database credentials from Vault to the Kubernetes\nsecret named `example-vds`.\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultDynamicSecret\nmetadata:\n  namespace: example-ns\n  name: example-vds\n  annotations:\n    myapp.config\/postgres-host: postgres-postgresql.postgres.svc.cluster.local:5432\nspec:\n  destination:\n    create: true\n    name: app-secret\n    transformation:\n      excludes:\n       - .*\n      templates:\n        url:\n          text: |\n            \n            \n  path: creds\/dev-postgres\n```\n\nThe resulting Kubernetes secret includes a single key named `url`, with a valid Postgres connection\nURL as its value.\n\n```yaml\nurl: postgresql:\/\/v-postgres-user:XUpah-password@postgres-postgresql.postgres.svc.cluster.local:5432\/postgres?sslmode=disable\n```\n\n### Shared transformation\n\nThe following manifest contains shared transformation templates and filters. All `templates` it provides\nwill be included in the destination k8s secret. 
It also provides `sourceTemplates` that can be included\nin any template text configured in a secret CR or within the same resource instance.\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: SecretTransformation\nmetadata:\n  name: vso-templates\n  namespace: example-vds\nspec:\n  excludes:\n    - password|username\n  templates:\n    url:\n      text: ''\n  sourceTemplates:\n    - name: helpers\n      text: |\n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n        \n```\n\nThe following `VaultDynamicSecret` manifest references the `SecretTransformation` above.\nAll templates and filters from the reference object will be applied to the destination secret data.\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultDynamicSecret\nmetadata:\n  namespace: example-ns\n  name: example-vds\n  annotations:\n    myapp.config\/postgres-host: postgres-postgresql.postgres.svc.cluster.local:5432\nspec:\n  destination:\n    create: true\n    name: app-secret\n    transformation:\n      transformationRefs:\n        - name: vso-templates\n  path: creds\/dev-postgres\n```\n\nThe resulting Kubernetes secret includes a single key named `url`, with a valid Postgres connection\nURL as its value\n\n```yaml\nurl: postgresql:\/\/v-postgres-user:XUpah-password@postgres-postgresql.postgres.svc.cluster.local:5432\/postgres?sslmode=disable\n```\n\n### Template functions\n\nAll template functions are provided by the [sprig](http:\/\/masterminds.github.io\/sprig) library. 
Some common functions are mentioned below.\nFor the complete list of functions see [allowedSprigFuncs](https:\/\/github.com\/hashicorp\/vault-secrets-operator\/blob\/main\/template\/funcs.go#L26)\n\n### String functions\n\n`trim` removes any leading or trailing whitespaces from the input:\n```\ntrim \" host \" -> `host`\n```\n\n### Encoding functions\n\n`b64enc` base64 encodes an input value\n```\nb64enc \"host\" -> `aG9zdAo=`\n```\n\n`b64dec` base64 decodes an input value\n```\nb64dec \"aG9zdAo=\" -> `host`\n```\n\n### Map functions\n\n`get` retrieves a value from a `map` input:\n```\nget .Secrets \"baz\" -> `qux`\n```\n\nGiven a nested `map` input:\n\n```json\n{\n  \"foo\": {\n    \"bar\": \"baz\",\n    \"quz\": \"quux\"\n  }\n}\n```\n\n`get` can retrieve a specific value:\n```\nget (get .Secrets \"foo\") \"bar\" -> `baz`\n```\n\n`dig` can also retrieve a specific value, or return a default if any of the keys\nare not found:\n```\ndig \"foo\" \"quz\" \"<not found>\" .Secrets -> `quux`\n\ndig \"foo\" \"nux\" \"<not found>\" .Secrets -> `<not found>`\n```\n\n## Related API references\n\n- [Transformation](\/vault\/docs\/platform\/k8s\/vso\/api-reference#transformation)\n- [HCPVaultSecretsApp](\/vault\/docs\/platform\/k8s\/vso\/api-reference#hcpvaultsecretsapp)\n- [VaultDynamicSecret](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultdynamicsecret)\n- [VaultPKISecret](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultpkisecret)\n- [VaultStaticSecret](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultstaticsecret)\n- [SecretTransformation](\/vault\/docs\/platform\/k8s\/vso\/api-reference#secrettransformation)","site":"vault","answers_cleaned":"    layout  docs page title  Vault Secrets Operator Secret Transformation description       Learn how to transform Secret data with the Vault Secrets Operator         Secret data transformation  Utilizing advanced templating and data filters  the Vault Secrets Operator for Kubernetes  VSO  can transform source 
secret data  secret metadata  resource labels and annotations into a format that is compatible with your application  All secret data sources are supported  Secret transformations can be specified directly within a secret custom resource  CR   or by references to one or more  SecretTransformation   vault docs platform k8s vso api reference secrettransformation  custom resource instances  or both      Templating  VSO utilizes the data driven  templates for Golang  https   pkg go dev text template  to generate secret data output  The template data input holds the secret data  secret metadata  resource labels and annotations   Templates are configured in a secret custom resource s  spec Destination Transformation Templates   vault docs platform k8s vso api reference transformation   or in a SecretTransformation resource s  spec templates   vault docs platform k8s vso api reference secrettransformationspec    VSO provides access to a large library of template functions  some of which are documented  below   template functions        Secret data input  Secret data is accessed through the   Secrets  input member  It contains a map of secret key value pairs  which are assumed to be sensitive information fetched from a  secret source   vault docs platform k8s vso sources    For example  to include a password in your application s secret  you might specify a template like     go           Secret metadata input  Secret metadata is accessed through the   Metadata  input member  It contains a map of metadata key to its value  The data should not contain any confidential information   For example  to include a secret metadata value in your application s secret  you might specify a template like     go           Resource annotations input  Resource annotations are accessed through the   Annotations  input member  The annotations consist of all  metadata annotations  configured on the secret custom resource   For example  to include a value from the resource s annotations in your 
application s secret  you might specify a template like     go           Resource labels input  Resource labels are accessed through the   Labels  input member  The labels consist of all  metadata labels  configured on the secret custom resource   For example  to include a value from the resource s labels in your application s secret  you might specify a template like     go          Filters  Filters are used to control which source secret data fields are included in the destination secret s data  They are specified as a set of exclude include RE2 accepted  regular expressions  https   golang org s re2syntax    Filters are configured in the  excludes  and  includes  fields of a secret custom resource s  spec Destination Transformation   vault docs platform k8s vso api reference transformation   or in a SecretTransformation resource s  spec   vault docs platform k8s vso api reference secrettransformationspec    All exclude patterns take precedence over any include patterns  and are never applied to templated keys      Examples      Local transformation  A VaultDynamicSecret configured to sync Postgres database credentials from Vault to the Kubernetes secret named  example vds      yaml     apiVersion  secrets hashicorp com v1beta1 kind  VaultDynamicSecret metadata    namespace  example ns   name  example vds   annotations      myapp config postgres host  postgres postgresql postgres svc cluster local 5432 spec    destination      create  true     name  app secret     transformation        excludes                    templates          url            text                                path  creds dev postgres      The resulting Kubernetes secret includes a single key named  url   with a valid Postgres connection URL as its value      yaml url  postgresql   v postgres user XUpah password postgres postgresql postgres svc cluster local 5432 postgres sslmode disable          Shared transformation  The following manifest contains shared transformation templates and 
filters  All  templates  it provides will be included in the destination k8s secret  It also provides  sourceTemplates  that can be included in any template text configured in a secret CR or within the same resource instance      yaml     apiVersion  secrets hashicorp com v1beta1 kind  SecretTransformation metadata    name  vso templates   namespace  example vds spec    excludes        password username   templates      url        text       sourceTemplates        name  helpers       text                                                                                                                                                                                                                                 The following  VaultDynamicSecret  manifest references the  SecretTransformation  above  All templates and filters from the reference object will be applied to the destination secret data     yaml     apiVersion  secrets hashicorp com v1beta1 kind  VaultDynamicSecret metadata    namespace  example ns   name  example vds   annotations      myapp config postgres host  postgres postgresql postgres svc cluster local 5432 spec    destination      create  true     name  app secret     transformation        transformationRefs            name  vso templates   path  creds dev postgres      The resulting Kubernetes secret includes a single key named  url   with a valid Postgres connection URL as its value     yaml url  postgresql   v postgres user XUpah password postgres postgresql postgres svc cluster local 5432 postgres sslmode disable          Template functions  All template functions are provided by the  sprig  http   masterminds github io sprig  library  Some common functions are mentioned below  For the complete list of functions see  allowedSprigFuncs  https   github com hashicorp vault secrets operator blob main template funcs go L26       String functions   trim  removes any leading or trailing whitespaces from the input      trim   host       host           
Encoding functions   b64enc  base64 encodes an input value     b64enc  host      aG9zdAo         b64dec  base64 decodes an input value     b64dec  aG9zdAo       host           Map functions   get  retrieves a value from a  map  input      get  Secrets  baz      qux       Given a nested  map  input      json      foo          bar    baz        quz    quux              get  can retrieve a specific value      get  get  Secrets  foo    bar      baz        dig  can also retrieve a specific value  or return a default if any of the keys are not found      dig  foo   quz    not found    Secrets     quux   dig  foo   nux    not found    Secrets      not found           Related API references     Transformation   vault docs platform k8s vso api reference transformation     HCPVaultSecretsApp   vault docs platform k8s vso api reference hcpvaultsecretsapp     VaultDynamicSecret   vault docs platform k8s vso api reference vaultdynamicsecret     VaultPKISecret   vault docs platform k8s vso api reference vaultpkisecret     VaultStaticSecret   vault docs platform k8s vso api reference vaultstaticsecret     SecretTransformation   vault docs platform k8s vso api reference secrettransformation "}
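The filter semantics described in the record above — RE2 exclude/include patterns, with excludes taking precedence over includes and applied only to non-templated keys — can be sketched in Go. The `filterKeys` helper and its exact matching behavior (unanchored substring matches, includes defaulting to "keep all" when empty) are assumptions for illustration, not VSO's implementation.

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
)

// filterKeys keeps a key when it matches any include pattern (or no include
// patterns are configured) and matches no exclude pattern. Exclude patterns
// always win, mirroring the documented precedence.
func filterKeys(keys, includes, excludes []string) []string {
	matchAny := func(patterns []string, k string) bool {
		for _, p := range patterns {
			if regexp.MustCompile(p).MatchString(k) {
				return true
			}
		}
		return false
	}
	var out []string
	for _, k := range keys {
		if matchAny(excludes, k) {
			continue // excludes take precedence over includes
		}
		if len(includes) == 0 || matchAny(includes, k) {
			out = append(out, k)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	keys := []string{"username", "password", "url"}
	// Same pattern as the shared-transformation example's excludes list.
	fmt.Println(filterKeys(keys, nil, []string{"password|username"})) // prints "[url]"
}
```

With the `password|username` exclude from the shared-transformation example, only the templated `url` key survives into the destination Secret, which matches the documented result.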
{"questions":"vault The Vault Secrets Operator can be installed using Helm Installing and upgrading the Vault Secrets Operator include vso common links mdx page title Vault Secrets Operator Installation layout docs","answers":"---\nlayout: docs\npage_title: Vault Secrets Operator Installation\ndescription: >-\n  The Vault Secrets Operator can be installed using Helm.\n---\n@include 'vso\/common-links.mdx'\n\n# Installing and upgrading the Vault Secrets Operator\n\n## Prerequisites\n\n- A Kubernetes cluster running 1.23+\n- Helm 3.7+\n- [Optional] Kustomize 4.5.7+\n\n## Installation using Helm\n\n[Install Helm](https:\/\/helm.sh\/docs\/intro\/install) before beginning.\n\nThe [Helm chart][helm] is the recommended way of\ninstalling and configuring the Vault Secrets Operator.\n\nTo install a new instance of the Vault Secrets Operator, first add the\nHashiCorp Helm repository and ensure you have access to the chart:\n\n```shell-session\n$ helm repo add hashicorp https:\/\/helm.releases.hashicorp.com\n\"hashicorp\" has been added to your repositories\n```\n\n```shell-session\n$ helm search repo hashicorp\/vault-secrets-operator\nNAME           \tCHART VERSION\tAPP VERSION\tDESCRIPTION\nhashicorp\/vault-secrets-operator\t0.9.0       \t0.9.0     \tOfficial HashiCorp Vault Secrets Operator Chart\n```\n\nThen install the Operator:\n\n```shell-session\n$ helm install --version 0.9.0 --create-namespace --namespace vault-secrets-operator vault-secrets-operator hashicorp\/vault-secrets-operator\n```\n\n## Upgrading using Helm\n\nYou can upgrade an existing installation with the `helm upgrade` command.\nPlease always run Helm with the `--dry-run` option before any install or upgrade to verify\nchanges.\n\nUpdate the `hashicorp` Helm repo:\n```shell-session\n$ helm repo update hashicorp\nHang tight while we grab the latest from your chart repositories...\n...Successfully got an update from the \"hashicorp\" chart repository\nUpdate Complete. 
\u2388Happy Helming!\u2388\n```\n\n## Updating CRDs when using Helm\n\n<Note title=\"Important\">\n\n  As of VSO 0.8.0, VSO will automatically update its CRDs.\n  The manual upgrade step [Updating CRDs](#updating-crds-when-using-helm-prior-to-vso-0-8-0) below is no longer required when\n  upgrading to VSO 0.8.0+.\n\n<\/Note>\n\nThe VSO Helm chart will automatically upgrade the CRDs to match the VSO version being deployed.\nThere should be no need to manually update the CRDs prior to upgrading VSO using Helm.\n\n## Chart values\n\nRefer to the [Helm chart][helm] overview for a full list of supported chart values.\n\n## Installation using Kustomize\n\nYou can install and update your installation using `kustomize` which allows you to extend the `config\/` path of the VSO repository using Kustomize primitives.\n\nTo install using Kustomize, download and untar\/unzip the latest release from the [Releases Page](https:\/\/github.com\/hashicorp\/vault-secrets-operator\/releases).\n```shell-session\n$ wget -q https:\/\/github.com\/hashicorp\/vault-secrets-operator\/archive\/refs\/tags\/v0.9.0.tar.gz\n$ tar -zxf v0.9.0.tar.gz\n$ cd vault-secrets-operator-0.9.0\/\n```\n\nNext install using `kustomize build`:\n```shell-session\n$ kustomize build config\/default | kubectl apply -f -\nnamespace\/vault-secrets-operator-system created\ncustomresourcedefinition.apiextensions.k8s.io\/hcpauths.secrets.hashicorp.com created\ncustomresourcedefinition.apiextensions.k8s.io\/hcpvaultsecretsapps.secrets.hashicorp.com created\ncustomresourcedefinition.apiextensions.k8s.io\/vaultauths.secrets.hashicorp.com created\ncustomresourcedefinition.apiextensions.k8s.io\/vaultconnections.secrets.hashicorp.com created\ncustomresourcedefinition.apiextensions.k8s.io\/vaultdynamicsecrets.secrets.hashicorp.com created\ncustomresourcedefinition.apiextensions.k8s.io\/vaultpkisecrets.secrets.hashicorp.com created\ncustomresourcedefinition.apiextensions.k8s.io\/vaultstaticsecrets.secrets.hashicorp.com 
created\nserviceaccount\/vault-secrets-operator-controller-manager created\nrole.rbac.authorization.k8s.io\/vault-secrets-operator-leader-election-role created\nclusterrole.rbac.authorization.k8s.io\/vault-secrets-operator-manager-role created\nclusterrole.rbac.authorization.k8s.io\/vault-secrets-operator-metrics-reader created\nclusterrole.rbac.authorization.k8s.io\/vault-secrets-operator-proxy-role created\nrolebinding.rbac.authorization.k8s.io\/vault-secrets-operator-leader-election-rolebinding created\nclusterrolebinding.rbac.authorization.k8s.io\/vault-secrets-operator-manager-rolebinding created\nclusterrolebinding.rbac.authorization.k8s.io\/vault-secrets-operator-proxy-rolebinding created\nconfigmap\/vault-secrets-operator-manager-config created\nservice\/vault-secrets-operator-controller-manager-metrics-service created\ndeployment.apps\/vault-secrets-operator-controller-manager created\n```\n\nConfirm the operator has been installed by examining the pods:\n```shell-session\n$ kubectl get pods -n vault-secrets-operator-system\nNAMESPACE                       NAME                                                         READY   STATUS    RESTARTS   AGE\nvault-secrets-operator-system   vault-secrets-operator-controller-manager-56754d5496-cq69s   2\/2     Running   0          1m17s\n```\n\n<Note title=\"Kustomize does not support all features of the Helm chart\">\n\n  Notably it will not deploy default VaultAuthMethod, VaultConnection or Transit related resources.\n  Kustomize also does not support pre-delete hooks that the Helm chart uses to cleanup resources\n  and remove finalizers on the uninstall path. 
Please see [`config\/samples`](https:\/\/github.com\/hashicorp\/vault-secrets-operator\/tree\/main\/config\/samples)\n  or `config\/samples` in the downloaded release artifacts for additional resources.\n\n<\/Note>\n\n## Upgrade using Kustomize\n\nUpgrading using Kustomize is similar to installation: simply download the new release from github and follow\nthe same steps as outlined in [Installation using Kustomize](#installation-using-kustomize).\nNo additional steps are required to update the CRDs.\n\n## Legacy notes\n\nThe following notes provide guidance for installing\/upgrading older versions of VSO.\n\n### Updating CRDs when using Helm prior to VSO 0.8.0\n\nThis step can be skipped if you are upgrading to VSO 0.8.0 or later.\n\n<Note title=\"Helm does not automatically update CRDs\">\n  You must update all CRDs manually before upgrading VSO to a version prior to 0.8.0.\n<\/Note>\n\nYou must update the CRDs for VSO manually **before** you upgrade the\noperator when the operator is managed by Helm.\n\n**Any `kubectl` warnings related to `last-applied-configuration` should be safe to ignore.**\n\nTo update the VSO CRDs, replace `<TARGET_VSO_VERSION>` with the VSO version you are upgrading to:\n```shell-session\n$ helm show crds --version <TARGET_VSO_VERSION> hashicorp\/vault-secrets-operator | kubectl apply -f -\n```\n\nFor example, if you are upgrading to VSO 0.7.1:\n```shell-session\n$ helm show crds --version 0.7.1 hashicorp\/vault-secrets-operator | kubectl apply -f -\n\ncustomresourcedefinition.apiextensions.k8s.io\/hcpauths.secrets.hashicorp.com created\ncustomresourcedefinition.apiextensions.k8s.io\/hcpvaultsecretsapps.secrets.hashicorp.com created\nWarning: resource customresourcedefinitions\/vaultauths.secrets.hashicorp.com is missing the kubectl.kubernetes.io\/last-applied-configuration annotation which is required by kubectl apply. 
kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.\ncustomresourcedefinition.apiextensions.k8s.io\/vaultauths.secrets.hashicorp.com configured\nWarning: resource customresourcedefinitions\/vaultconnections.secrets.hashicorp.com is missing the kubectl.kubernetes.io\/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.\ncustomresourcedefinition.apiextensions.k8s.io\/vaultconnections.secrets.hashicorp.com configured\nWarning: resource customresourcedefinitions\/vaultdynamicsecrets.secrets.hashicorp.com is missing the kubectl.kubernetes.io\/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.\ncustomresourcedefinition.apiextensions.k8s.io\/vaultdynamicsecrets.secrets.hashicorp.com configured\nWarning: resource customresourcedefinitions\/vaultpkisecrets.secrets.hashicorp.com is missing the kubectl.kubernetes.io\/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.\ncustomresourcedefinition.apiextensions.k8s.io\/vaultpkisecrets.secrets.hashicorp.com configured\nWarning: resource customresourcedefinitions\/vaultstaticsecrets.secrets.hashicorp.com is missing the kubectl.kubernetes.io\/last-applied-configuration annotation which is required by kubectl apply. 
kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.\ncustomresourcedefinition.apiextensions.k8s.io\/vaultstaticsecrets.secrets.hashicorp.com configured\n```","site":"vault","answers_cleaned":"    layout  docs page title  Vault Secrets Operator Installation description       The Vault Secrets Operator can be installed using Helm       include  vso common links mdx     Installing and upgrading the Vault Secrets Operator     Prerequisites    A Kubernetes cluster running 1 23    Helm 3 7     Optional  Kustomize 4 5 7      Installation using Helm   Install Helm  https   helm sh docs intro install  before beginning   The  Helm chart  helm  is the recommended way of installing and configuring the Vault Secrets Operator   To install a new instance of the Vault Secrets Operator  first add the HashiCorp Helm repository and ensure you have access to the chart      shell session   helm repo add hashicorp https   helm releases hashicorp com  hashicorp  has been added to your repositories         shell session   helm search repo hashicorp vault secrets operator NAME            CHART VERSION APP VERSION DESCRIPTION hashicorp vault secrets operator 0 9 0        0 9 0      Official HashiCorp Vault Secrets Operator Chart      Then install the Operator      shell session   helm install   version 0 9 0   create namespace   namespace vault secrets operator vault secrets operator hashicorp vault secrets operator         Upgrading using Helm  You can upgrade an existing installation with the  helm upgrade  command  Please always run Helm with the    dry run  option before any install or upgrade to verify changes   Update the  hashicorp  Helm repo     shell session   helm repo update hashicorp Hang tight while we grab the latest from your chart repositories       Successfully got an update from the  hashicorp  chart repository Update Complete   Happy Helming      
     Updating CRDs when using Helm   Note title  Important      As of VSO 0 8 0  VSO will automatically update its CRDs    The manual upgrade step  Updating CRDs   updating crds when using helm prior to vso 0 8 0  below is no longer required when   upgrading to VSO 0 8 0      Note   The VSO Helm chart will automatically upgrade the CRDs to match the VSO version being deployed  There should be no need to manually update the CRDs prior to upgrading VSO using Helm      Chart values  Refer to the  Helm chart  helm  overview for a full list of supported chart values      Installation using Kustomize  You can install and update your installation using  kustomize  which allows you to extend the  config   path of the VSO repository using Kustomize primitives   To install using Kustomize  download and untar unzip the latest release from the  Releases Page  https   github com hashicorp vault secrets operator releases      shell session   wget  q https   github com hashicorp vault secrets operator archive refs tags v0 9 0 tar gz   tar  zxf v0 9 0 tar gz   cd vault secrets operator 0 9 0       Next install using  kustomize build      shell session   kustomize build config default   kubectl apply  f   namespace vault secrets operator system created customresourcedefinition apiextensions k8s io hcpauths secrets hashicorp com created customresourcedefinition apiextensions k8s io hcpvaultsecretsapps secrets hashicorp com created customresourcedefinition apiextensions k8s io vaultauths secrets hashicorp com created customresourcedefinition apiextensions k8s io vaultconnections secrets hashicorp com created customresourcedefinition apiextensions k8s io vaultdynamicsecrets secrets hashicorp com created customresourcedefinition apiextensions k8s io vaultpkisecrets secrets hashicorp com created customresourcedefinition apiextensions k8s io vaultstaticsecrets secrets hashicorp com created serviceaccount vault secrets operator controller manager created role rbac authorization k8s io 
```shell-session
vault-secrets-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/vault-secrets-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/vault-secrets-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/vault-secrets-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/vault-secrets-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/vault-secrets-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/vault-secrets-operator-proxy-rolebinding created
configmap/vault-secrets-operator-manager-config created
service/vault-secrets-operator-controller-manager-metrics-service created
deployment.apps/vault-secrets-operator-controller-manager created
```

Confirm the operator has been installed by examining the pods:

```shell-session
$ kubectl get pods -n vault-secrets-operator-system
NAMESPACE                       NAME                                                         READY   STATUS    RESTARTS   AGE
vault-secrets-operator-system   vault-secrets-operator-controller-manager-56754d5496-cq69s   2/2     Running   0          1m17s
```

<Note title="Kustomize does not support all features of the Helm chart">

Notably, it will not deploy default VaultAuthMethod, VaultConnection, or Transit related resources.
Kustomize also does not support the pre-delete hooks that the Helm chart uses to clean up resources
and remove finalizers on the uninstall path. Please see
[config/samples](https://github.com/hashicorp/vault-secrets-operator/tree/main/config/samples) or
`config/samples` in the downloaded release artifacts for additional resources.

</Note>

## Upgrade using Kustomize

Upgrading using Kustomize is similar to installation: simply download the new release from GitHub
and follow the same steps as outlined in [Installation using Kustomize](#installation-using-kustomize).
No additional steps are required to update the CRDs.

## Legacy notes

The following notes provide guidance for installing/upgrading older versions of VSO.

### Updating CRDs when using Helm prior to VSO 0.8.0

This step can be skipped if you are upgrading to VSO 0.8.0 or later.

<Note title="Helm does not automatically update CRDs">

You must update all CRDs manually before upgrading VSO to a version prior to 0.8.0.

</Note>

You must update the CRDs for VSO manually **before** you upgrade the operator when the operator is
managed by Helm. Any `kubectl` warnings related to `last-applied-configuration` should be safe to
ignore.

To update the VSO CRDs, replace `<TARGET_VSO_VERSION>` with the VSO version you are upgrading to:

```shell-session
$ helm show crds --version <TARGET_VSO_VERSION> hashicorp/vault-secrets-operator | kubectl apply -f -
```

For example, if you are upgrading to VSO 0.7.1:

```shell-session
$ helm show crds --version 0.7.1 hashicorp/vault-secrets-operator | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/hcpauths.secrets.hashicorp.com created
customresourcedefinition.apiextensions.k8s.io/hcpvaultsecretsapps.secrets.hashicorp.com created
Warning: resource customresourcedefinitions/vaultauths.secrets.hashicorp.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/vaultauths.secrets.hashicorp.com configured
Warning: resource customresourcedefinitions/vaultconnections.secrets.hashicorp.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/vaultconnections.secrets.hashicorp.com configured
Warning: resource customresourcedefinitions/vaultdynamicsecrets.secrets.hashicorp.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/vaultdynamicsecrets.secrets.hashicorp.com configured
Warning: resource customresourcedefinitions/vaultpkisecrets.secrets.hashicorp.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/vaultpkisecrets.secrets.hashicorp.com configured
Warning: resource customresourcedefinitions/vaultstaticsecrets.secrets.hashicorp.com is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/vaultstaticsecrets.secrets.hashicorp.com configured
```
"}
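After a manual CRD update, a quick sanity check of my own (not a step from the official guide) is to list the installed VSO CRDs and confirm each one in the `secrets.hashicorp.com` group is present:

```shell-session
$ kubectl get crds | grep secrets.hashicorp.com
```

Each CRD named in the `helm show crds` output above should appear once the apply completes.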
{"questions":"vault HCP Vault Secrets source The Vault Secrets Operator allows Pods to consume HCP Vault Secrets natively from Kubernetes Secrets layout docs page title Vault Secrets Operator with HCP Vault Secrets Overview","answers":"---\nlayout: docs\npage_title: Vault Secrets Operator with HCP Vault Secrets\ndescription: >-\n  The Vault Secrets Operator allows Pods to consume HCP Vault Secrets natively from Kubernetes Secrets.\n---\n\n# HCP Vault Secrets source\n\n## Overview\n\nThe Vault Secrets Operator (VSO) syncs your [HCP Vault Secrets app](\/hcp\/docs\/vault-secrets) (HVSA) to\na Kubernetes Secret. VSO syncs each `HCPVaultSecretsApp` custom resource periodically to ensure that\nchanges to the secret source are properly reflected in the Kubernetes secret.\n\n\n## Features\n\n- Periodic synchronization of the HCP Vault Secrets app to a *destination* Kubernetes Secret.\n- Automatic drift detection and remediation when the destination Kubernetes Secret\n  is modified or deleted.\n- Supports all VSO features, including rollout-restarts on secret rotation or\n  during drift remediation.\n- Supports authentication to HCP using [HCP service principals](\/hcp\/docs\/hcp\/admin\/iam\/service-principals).\n- Supports [static](#static-secrets) and [auto-rotating and dynamic secrets](#auto-rotating-and-dynamic-secrets)\n  within an HCP Vault Secrets app.\n\n\n### Supported HCP authentication methods\n\n| Backend                                                              | Description                                            |\n|----------------------------------------------------------------------|--------------------------------------------------------|\n| [HCP Service Principals](\/hcp\/docs\/hcp\/admin\/iam\/service-principals) | Relies on static credentials for authenticating to HCP |\n\n\n### HCP Vault Secrets sync example\n\nThe following Kubernetes configuration syncs the HCP Vault Secrets app, `vso-example`,\nto the Kubernetes Secret, `vso-app-secret`, in the `vso-example-ns` Kubernetes namespace. It assumes\nthat you have already created the HCP Vault Secrets app and configured the\n[service principal Kubernetes secret](\/vault\/docs\/platform\/k8s\/vso\/api-reference#hcpauthserviceprincipal).\n\nRefer to the [Kubernetes VSO installation guide](\/vault\/docs\/platform\/k8s\/vso\/installation)\nbefore applying any of the example configurations below.\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: HCPAuth\nmetadata:\n  name: hcp-auth\n  namespace: vso-example-ns\nspec:\n  organizationID: xxxxxxxx-76e9-4e17-b5e9-xxxxxxxx4c33\n  projectID: xxxxxxxx-bd16-443f-a266-xxxxxxxxcb52\n  servicePrincipal:\n    secretRef: vso-app-sp\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: HCPVaultSecretsApp\nmetadata:\n  name: vso-app\n  namespace: vso-example-ns\nspec:\n  appName: vso-example\n  hcpAuthRef: hcp-auth\n  destination:\n    create: true\n    name: vso-app-secret\n```\n\n### Static Secrets\n\nVSO supports syncing [static secrets](\/hcp\/docs\/vault-secrets\/static-secrets\/create-static-secret)\nfrom an HCP Vault Secrets app to a Kubernetes Secret. 
VSO syncs the secrets to\nKubernetes on the [refreshAfter](\/vault\/docs\/platform\/k8s\/vso\/api-reference#hcpvaultsecretsappspec)\ninterval set in the HCPVaultSecretsApp spec.\n\n### Auto-rotating and Dynamic Secrets\n\n<Tip title=\"Feature availability\">\n\n  VSO v0.9.0\n\n<\/Tip>\n\nVSO also supports syncing [auto-rotating](\/hcp\/docs\/vault-secrets\/auto-rotation)\nand [dynamic](\/hcp\/docs\/vault-secrets\/dynamic-secrets) secrets from an HCP Vault\nSecrets app to a Kubernetes Secret.\n\nVSO syncs auto-rotating secrets along with static secrets on the\n[refreshAfter](\/vault\/docs\/platform\/k8s\/vso\/api-reference#hcpvaultsecretsappspec)\ninterval, and rotation is handled by HCP. VSO syncs dynamic secrets when the\n[specified percentage](\/vault\/docs\/platform\/k8s\/vso\/api-reference#hvsdynamicsyncconfig)\nof their TTL has elapsed. Each sync of a dynamic secret generates a new set of\ncredentials.\n\nAn auto-rotating or dynamic secret can have multiple key-value pairs, which\nare rendered in the destination Kubernetes Secret as both a nested map and\nflattened key-value pairs. 
For example:\n\n```yaml\napiVersion: v1\nkind: Secret\ndata:\n  secret_name: {\"key_one\": \"value_one\", \"key_two\": \"value_two\"}\n  secret_name_key_one: \"value_one\"\n  secret_name_key_two: \"value_two\"\n...\n```\n\nTransformation [template commands like `get` and `dig`](\/vault\/docs\/platform\/k8s\/vso\/secret-transformation#map-functions)\nin the HCPVaultSecretsApp Destination can be used to extract values from the\nnested map format:\n\n```yaml\n    transformation:\n      templates:\n        secret_one:\n          text: ''\n        secret_two:\n          text: ''\n```\n\n@include 'vso\/blurb-api-reference.mdx'\n\n## Tutorial\n\nRefer to the [HCP Vault Secrets with Vault Secrets Operator for\nKubernetes](\/vault\/tutorials\/hcp-vault-secrets-get-started\/kubernetes-vso) tutorial to\nlearn the end-to-end workflow using the Vault Secrets Operator.","site":"vault"}
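The transformation template bodies in the example above are empty placeholders. As a sketch of what they might contain — assuming the `.Secrets` template input and the `get` map function described in the linked secret-transformation docs — the nested map values could be extracted like this:

```yaml
    transformation:
      templates:
        secret_one:
          # hypothetical: pull "key_one" out of the nested map "secret_name"
          text: '{{ get .Secrets.secret_name "key_one" }}'
        secret_two:
          text: '{{ get .Secrets.secret_name "key_two" }}'
```

With templates like these, the destination Secret would carry `secret_one` and `secret_two` keys holding the extracted values instead of (or alongside) the flattened defaults.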
{"questions":"vault The Vault Secrets Operator allows Pods to consume Vault secrets natively from Kubernetes Secrets page title Vault Secrets Operator include vso common links mdx layout docs Vault Secrets Operator","answers":"---\nlayout: docs\npage_title: Vault Secrets Operator\ndescription: >-\n  The Vault Secrets Operator allows Pods to consume Vault secrets natively from Kubernetes Secrets.\n---\n@include 'vso\/common-links.mdx'\n\n# Vault Secrets Operator\n\nThe Vault Secrets Operator (VSO) supports Vault as a secret source, which\nlets you seamlessly integrate VSO with a Vault instance running on any\nplatform.\n\n## Supported Vault platform and version\n\n| Platform                                  | Version |\n|-------------------------------------------|---------|\n| [Vault Enterprise\/Community](\/vault\/docs) | 1.11+   |\n| [HCP Vault Dedicated](\/hcp\/docs\/vault)    | 1.11+   |\n\n## Features\n\nVault Secrets Operator supports the following Vault features:\n\n- Sync from multiple instances of Vault.\n- All Vault [secret engines](\/vault\/docs\/secrets) are supported.\n- TLS\/mTLS communications with Vault.\n- Support for all VSO features, including performing a rollout-restart upon secret rotation or\nduring drift remediation.\n- Cross Vault namespace authentication for Vault Enterprise 1.13+.\n- [Encrypted Vault client cache storage](\/vault\/docs\/platform\/k8s\/vso\/sources\/vault#vault-client-cache) for improved performance and security.\n- [Instant updates](\/vault\/docs\/platform\/k8s\/vso\/sources\/vault#instant-updates)\n  for VaultStaticSecrets with Vault Enterprise 1.16.3+.\n\n### Supported Vault authentication methods\n\n| Backend                                                                            | Description                                                                                                 |\n|------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|\n| [Kubernetes](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultauthconfigkubernetes) | Relies on short-lived Kubernetes ServiceAccount tokens for Vault authentication                             |\n| [JWT](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultauthconfigjwt)               | Relies on either static JWT tokens or short-lived Kubernetes ServiceAccount tokens for Vault authentication |\n| [AppRole](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultauthconfigapprole)       | Relies on static AppRole credentials for Vault authentication                                               |\n| [AWS](\/vault\/docs\/platform\/k8s\/vso\/sources\/vault\/auth\/aws)                         | Relies on AWS credentials for Vault authentication                                                          |\n| [GCP](\/vault\/docs\/platform\/k8s\/vso\/sources\/vault\/auth\/gcp)                         | Relies on GCP credentials for Vault authentication                                                          |\n\n## Vault access and custom resource definitions\n\n`VaultConnection` and `VaultAuth` CRDs provide Vault connection and authentication configuration\ninformation for the operator. 
Consider `VaultConnection` and `VaultAuth` as foundational resources\nused by all secret replication type resources.\n\n### VaultConnection custom resource\n\nProvides the required configuration details for connecting to a single Vault server instance.\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultConnection\nmetadata:\n  namespace: vso-example\n  name: vault-connection\nspec:\n  # required configuration\n  # address to the Vault server.\n  address: http:\/\/vault.vault.svc.cluster.local:8200\n\n  # optional configuration\n  # HTTP headers to be included in all Vault requests.\n  # headers: []\n  # TLS server name to use as the SNI host for TLS connections.\n  # tlsServerName: \"\"\n  # skip TLS verification for TLS connections to Vault.\n  # skipTLSVerify: false\n  # the trusted PEM encoded CA certificate chain stored in a Kubernetes Secret\n  # caCertSecretRef: \"\"\n```\n\n### VaultAuth custom resource\n\nProvide the configuration necessary for the Operator to authenticate to a single Vault server instance as\nspecified in a `VaultConnection` custom resource.\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultAuth\nmetadata:\n  namespace: vso-example\n  name: vault-auth\nspec:\n  # required configuration\n  # VaultConnectionRef of the corresponding VaultConnection CustomResource.\n  # If no value is specified the Operator will default to the `default` VaultConnection,\n  # configured in its own Kubernetes namespace.\n  vaultConnectionRef: vault-connection\n  # Method to use when authenticating to Vault.\n  method: kubernetes\n  # Mount to use when authenticating to auth method.\n  mount: kubernetes\n  # Kubernetes specific auth configuration, requires that the Method be set to kubernetes.\n  kubernetes:\n    # role to use when authenticating to Vault\n    role: example\n    # ServiceAccount to use when authenticating to Vault\n    # it is recommended to always provide a unique serviceAccount per Pod\/application\n   
 serviceAccount: default\n\n  # optional configuration\n  # Vault namespace where the auth backend is mounted (requires Vault Enterprise)\n  # namespace: \"\"\n  # Params to use when authenticating to Vault\n  # params: []\n  # HTTP headers to be included in all Vault authentication requests.\n  # headers: []\n```\n\n### VaultAuthGlobal custom resource\n\n<Tip title=\"Feature availability\">\n\nVSO v0.8.0\n\n<\/Tip>\n\nNamespaced resource that provides shared Vault authentication configuration that can be inherited by multiple\n`VaultAuth` custom resources. It supports multiple authentication methods and allows you to define a default\nauthentication method that can be overridden by individual VaultAuth custom resources. See `vaultAuthGlobalRef` in\nthe [VaultAuth spec][va-spec] for more details. The `VaultAuthGlobal` custom resource is optional and can be used to\nsimplify the configuration of multiple VaultAuth custom resources by reducing config duplication. Like other\nnamespaced VSO custom resources, there can be many VaultAuthGlobal resources configured in a single Kubernetes cluster.\n\nFor more details on how to integrate VaultAuthGlobals into your workflow, see the detailed [Authentication][auth]\ndocs.\n\n<Tip>\n\n  The VaultAuthGlobal resource shares many of the same fields as the VaultAuth custom resource, but cannot be used\n  for authentication directly. 
It is only used to define shared Vault authentication configuration within a Kubernetes\n  cluster.\n\n<\/Tip>\n\nThe example below demonstrates how to define a VaultAuthGlobal custom resource with a default authentication method of\n`kubernetes`, along with a VaultAuth custom resource that inherits its global configuration.\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultAuthGlobal\nmetadata:\n  namespace: vso-example\n  name: vault-auth-global\nspec:\n  defaultAuthMethod: kubernetes\n  kubernetes:\n    audiences:\n    - vault\n    mount: kubernetes\n    namespace: example-ns\n    role: auth-role\n    serviceAccount: default\n    tokenExpirationSeconds: 600\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultAuth\nmetadata:\n  namespace: vso-example\n  name: vault-auth\nspec:\n  vaultAuthGlobalRef:\n    name: vault-auth-global\n  kubernetes:\n    role: local-role\n```\n\n#### Explanation\n\n- The VaultAuthGlobal custom resource defines a default authentication method of kubernetes with the `defaultAuthMethod`\n  field.\n- The VaultAuth custom resource inherits the global configuration by referencing the VaultAuthGlobal custom\n  resource with the `vaultAuthGlobalRef` field.\n- The `kubernetes.role` field in the VaultAuth custom resource spec overrides the value of the corresponding field in\n  the VaultAuthGlobal custom resource. 
All other fields are inherited from the VaultAuthGlobal custom resource\n  `spec.kubernetes` field, e.g., `audiences`, `mount`, `serviceAccount`, `namespace`, etc.\n\n\n## Vault secret custom resource definitions\n\nProvide the configuration necessary for the Operator to replicate a single Vault Secret to a single Kubernetes Secret.\nEach supported CRD is specialized to a *class* of Vault secret, documented below.\n\n### VaultStaticSecret custom resource\n\nProvides the configuration necessary for the Operator to synchronize a single Vault *static* Secret to a single Kubernetes Secret.<br \/>\nSupported secrets engines: [kv-v2](\/vault\/docs\/secrets\/kv\/kv-v2), [kv-v1](\/vault\/docs\/secrets\/kv\/kv-v1)\n\n##### KV version 1 secret example\n\nThe KV secrets engine's `kvv1` mount path is specified under `spec.mount` of `VaultStaticSecret` custom resource. Please consult [KV Secrets Engine - Version 1 - Setup](\/vault\/docs\/secrets\/kv\/kv-v1#setup) for configuring KV secrets engine version 1. The following results in a request to `http:\/\/127.0.0.1:8200\/v1\/kvv1\/eng\/apikey\/google` to retrieve the secret.\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultStaticSecret\nmetadata:\n  namespace: vso-example\n  name: vault-static-secret-v1\nspec:\n  vaultAuthRef: vault-auth\n  mount: kvv1\n  type: kv-v1\n  path: eng\/apikey\/google\n  refreshAfter: 60s\n  destination:\n    create: true\n    name: static-secret1\n```\n\n##### KV version 2 secret example\n\nSet the KV secrets engine (`kvv2`) mount path with the `spec.mount` parameter of\nyour `VaultStaticSecret` custom resource. 
For more advanced KV secrets engine\nversion 2 configuration options, consult the\n[KV Secrets Engine - Version 2 - Setup](\/vault\/docs\/secrets\/kv\/kv-v2#setup)\nguide.\n\nFor example, to send requests to `http:\/\/127.0.0.1:8200\/v1\/kvv2\/eng\/apikey\/google`\nto retrieve secrets:\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultStaticSecret\nmetadata:\n  namespace: vso-example\n  name: vault-static-secret-v2\nspec:\n  vaultAuthRef: vault-auth\n  mount: kvv2\n  type: kv-v2\n  path: eng\/apikey\/google\n  version: 2\n  refreshAfter: 60s\n  destination:\n    create: true\n    name: static-secret2\n```\n\n### VaultPKISecret custom resource\nProvides the configuration necessary for the Operator to synchronize a single Vault *PKI* Secret to a single Kubernetes Secret.<br \/>\nSupported secrets engines: [pki](\/vault\/docs\/secrets\/pki)\n\nThe PKI secrets engine's mount path is specified under `spec.mount` of `VaultPKISecret` custom resource. Please consult [PKI Secrets Engine - Setup and Usage](\/vault\/docs\/secrets\/pki\/setup) for configuring PKI secrets engine. 
The following results in a request to `http:\/\/127.0.0.1:8200\/v1\/pki\/issue\/default` to generate TLS certificates.\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultPKISecret\nmetadata:\n  namespace: vso-example\n  name: vault-pki\nspec:\n  vaultAuthRef: vault-auth\n  mount: pki\n  role: default\n  commonName: example.com\n  format: pem\n  expiryOffset: 1s\n  ttl: 60s\n  namespace: tenant-1\n  destination:\n    create: true\n    name: pki1\n```\n\n### VaultDynamicSecret custom resource\n\nProvides the configuration necessary for the Operator to synchronize a single Vault *dynamic* Secret to a single Kubernetes Secret.<br \/>\nSupported secrets engines *non-exhaustive*: [databases](\/vault\/docs\/secrets\/databases), [aws](\/vault\/docs\/secrets\/aws),\n[azure](\/vault\/docs\/secrets\/azure), [gcp](\/vault\/docs\/secrets\/gcp), ...\n\n##### Database secret example\n\nSet the database secret engine mount path (`db`) with the `spec.mount` of your\n`VaultDynamicSecret` custom resource. For more advanced database secrets engine\nconfiguration options, consult the\n[Database Secrets Engine - Setup](\/vault\/docs\/secrets\/databases#setup) guide.\n\nFor example, to send requests to\n`http:\/\/127.0.0.1:8200\/v1\/db\/creds\/my-postgresql-role` to generate a new\ncredential:\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultDynamicSecret\nmetadata:\n  namespace: vso-example\n  name: vault-dynamic-secret-db\nspec:\n  vaultAuthRef: vault-auth\n  mount: db\n  path: creds\/my-postgresql-role\n  destination:\n    create: true\n    name: dynamic-db\n```\n\n##### AWS secret example\n\nSet the AWS secrets engine mount path (`aws`) with the `spec.mount` parameter of\nyour `VaultDynamicSecret` custom resource. 
For more advanced AWS secrets engine\nconfiguration options, consult the\n[AWS Secrets Engine - Setup](\/vault\/docs\/secrets\/aws#setup) guide.\n\nFor example, to send requests to `http:\/\/127.0.0.1:8200\/v1\/aws\/creds\/my-iam-role`\nto generate a new IAM credential:\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultDynamicSecret\nmetadata:\n  namespace: vso-example\n  name: vault-dynamic-secret-aws-iam\nspec:\n  vaultAuthRef: vault-auth\n  mount: aws\n  path: creds\/my-iam-role\n  destination:\n    create: true\n    name: dynamic-aws-iam\n```\n\nTo send requests to `http:\/\/127.0.0.1:8200\/v1\/aws\/sts\/my-sts-role` to generate a new STS credential:\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultDynamicSecret\nmetadata:\n  namespace: vso-example\n  name: vault-dynamic-secret-aws-sts\nspec:\n  vaultAuthRef: vault-auth\n  mount: aws\n  path: sts\/my-sts-role\n  destination:\n    create: true\n    name: dynamic-aws-sts\n```\n\n@include 'vso\/blurb-api-reference.mdx'\n\n## Vault client cache\n\nThe Vault Secrets Operator can optionally cache Vault client information such as Vault tokens and leases in Kubernetes Secrets within its own namespace. The client cache enables seamless upgrades because Vault tokens and dynamic secret leases can continue to be tracked and renewed through leadership changes. Client cache persistence and encryption are not enabled by default because they require extra configuration and Vault Server setup. 
VSO supports encrypting the client cache using Vault Server's [transit secrets engine](\/vault\/docs\/secrets\/transit).\n\nThe [Encrypted client cache](\/vault\/docs\/platform\/k8s\/vso\/sources\/vault\/client-cache) guide will walk you through the steps to enable and configure client cache encryption.\n\n## Instant updates <EnterpriseAlert inline=\"true\" \/>\n<Tip title=\"Feature availability\">\n\n  VSO v0.8.0\n\n<\/Tip>\n\nThe Vault Secrets Operator can instantly update Kubernetes Secrets when changes\nare made in Vault, by subscribing to [Vault Events][vault-events] for change\nnotification. Setting a refresh interval (e.g. [refreshAfter][vss-spec]) is\nstill recommended since event message delivery is not guaranteed.\n\n**Supported secret types:**\n- [VaultStaticSecret](#vaultstaticsecret-custom-resource) ([kv-v1](\/vault\/docs\/secrets\/kv\/kv-v1),\n  [kv-v2](\/vault\/docs\/secrets\/kv\/kv-v2))\n\n<Note title=\"Requires Vault Enterprise 1.16.3+\">\n\nThe instant updates option requires [Vault Enterprise](\/vault\/docs\/enterprise)\n1.16.3+ due to the use of [Vault Event Notifications][vault-events].\n\n<\/Note>\n\nThe [Instant updates](\/vault\/docs\/platform\/k8s\/vso\/sources\/vault\/instant-updates) guide\nwill walk you through the steps to enable instant updates for a VaultStaticSecret.\n\n[vss-spec]: \/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultstaticsecretspec\n[vault-events]: \/vault\/docs\/concepts\/events\n\n## Tutorial\n\nRefer to the [Vault Secrets Operator on\nKubernetes](\/vault\/tutorials\/kubernetes\/vault-secrets-operator) tutorial to\nlearn the end-to-end workflow using the Vault Secrets Operator.","site":"vault","answers_cleaned":"    layout  docs page title  Vault Secrets Operator description       The Vault Secrets Operator allows Pods to consume Vault secrets natively from Kubernetes Secrets       include  vso common links mdx     Vault Secrets Operator  The Vault Secrets Operator  VSO  supports Vault as a secret source  which 
lets you seamlessly integrate VSO with a Vault instance running on any platform.

## Supported Vault platform and version

| Platform                                  | Version |
|-------------------------------------------|---------|
| [Vault Enterprise/Community](/vault/docs) | 1.11+   |
| [HCP Vault Dedicated](/hcp/docs/vault)    | 1.11+   |

## Features

Vault Secrets Operator supports the following Vault features:

- Sync from multiple instances of Vault.
- All Vault [secret engines](/vault/docs/secrets) supported.
- TLS/mTLS communications with Vault.
- Support for all VSO features, including performing a rollout-restart upon secret rotation or during drift remediation.
- Cross Vault namespace authentication for Vault Enterprise 1.13+.
- [Encrypted Vault client cache storage](/vault/docs/platform/k8s/vso/sources/vault#vault-client-cache) for improved performance and security.
- [Instant updates](/vault/docs/platform/k8s/vso/sources/vault#instant-updates) for VaultStaticSecrets with Vault Enterprise 1.16.3+.

## Supported Vault authentication methods

| Backend | Description |
|---------|-------------|
| [Kubernetes](/vault/docs/platform/k8s/vso/api-reference#vaultauthconfigkubernetes) | Relies on short-lived Kubernetes ServiceAccount tokens for Vault authentication. |
| [JWT](/vault/docs/platform/k8s/vso/api-reference#vaultauthconfigjwt) | Relies on either static JWT tokens or short-lived Kubernetes ServiceAccount tokens for Vault authentication. |
| [AppRole](/vault/docs/platform/k8s/vso/api-reference#vaultauthconfigapprole) | Relies on static AppRole credentials for Vault authentication. |
| [AWS](/vault/docs/platform/k8s/vso/sources/vault/auth/aws) | Relies on AWS credentials for Vault authentication. |
| [GCP](/vault/docs/platform/k8s/vso/sources/vault/auth/gcp) | Relies on GCP credentials for Vault authentication. |

## Vault access and custom resource definitions

`VaultConnection` and `VaultAuth` CRDs provide Vault connection and authentication configuration information for the operator. Consider `VaultConnection` and `VaultAuth` as foundational resources used by all secret replication type resources.

### VaultConnection custom resource

Provides the required configuration details for connecting to a single Vault server instance.

```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultConnection
metadata:
  namespace: vso-example
  name: vault-connection
spec:
  # required configuration
  # address to the Vault server.
  address: http://vault.vault.svc.cluster.local:8200

  # optional configuration
  # HTTP headers to be included in all Vault requests.
  # headers: []
  # TLS server name to use as the SNI host for TLS connections.
  # tlsServerName: ""
  # skip TLS verification for TLS connections to Vault.
  skipTLSVerify: false
  # the trusted PEM encoded CA certificate chain stored in a Kubernetes Secret
  # caCertSecretRef: ""
```

### VaultAuth custom resource

Provides the configuration necessary for the Operator to authenticate to a single Vault server instance as specified in a `VaultConnection` custom resource.

```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  namespace: vso-example
  name: vault-auth
spec:
  # required configuration
  # VaultConnectionRef of the corresponding VaultConnection CustomResource.
  # If no value is specified the Operator will default to the `default` VaultConnection,
  # configured in its own Kubernetes namespace.
  vaultConnectionRef: vault-connection
  # Method to use when authenticating to Vault.
  method: kubernetes
  # Mount to use when authenticating to auth method.
  mount: kubernetes
  # Kubernetes specific auth configuration, requires that the Method be set to kubernetes.
  kubernetes:
    # role to use when authenticating to Vault.
    role: example
    # ServiceAccount to use when authenticating to Vault
    # it is recommended to always provide a unique serviceAccount per Pod/application
    serviceAccount: default

  # optional configuration
  # Vault namespace where the auth backend is mounted (requires Vault Enterprise)
  # namespace: ""
  # Params to use when authenticating to Vault
  # params: []
  # HTTP headers to be included in all Vault authentication requests.
  # headers: []
```

### VaultAuthGlobal custom resource

<Tip title="Feature availability">

VSO v0.8.0+

</Tip>

Namespaced resource that provides shared Vault authentication configuration that can be inherited by multiple `VaultAuth` custom resources. It supports multiple authentication methods and allows you to define a default authentication method that can be overridden by individual VaultAuth custom resources. See `vaultAuthGlobalRef` in the [VaultAuth spec][va-spec] for more details.

The `VaultAuthGlobal` custom resource is optional and can be used to simplify the configuration of multiple VaultAuth custom resources by reducing config duplication. Like other namespaced VSO custom resources, there can be many VaultAuthGlobal resources configured in a single Kubernetes cluster. For more details on how to integrate VaultAuthGlobals into your workflow, see the detailed [Authentication][auth] docs.

<Tip>

The VaultAuthGlobal resource shares many of the same fields as the VaultAuth custom resource, but cannot be used for authentication directly. It is only used to define shared Vault authentication configuration within a Kubernetes cluster.

</Tip>

The example below demonstrates how to define a VaultAuthGlobal custom resource with a default authentication method of `kubernetes`, along with a VaultAuth custom resource that inherits its global configuration.

```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuthGlobal
metadata:
  namespace: vso-example
  name: vault-auth-global
spec:
  defaultAuthMethod: kubernetes
  kubernetes:
    audiences:
      - vault
    mount: kubernetes
    namespace: example-ns
    role: auth-role
    serviceAccount: default
    tokenExpirationSeconds: 600
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  namespace: vso-example
  name: vault-auth
spec:
  vaultAuthGlobalRef:
    name: vault-auth-global
  kubernetes:
    role: local-role
```

Explanation:

- The VaultAuthGlobal custom resource defines a default authentication method of kubernetes with the `defaultAuthMethod` field.
- The VaultAuth custom resource inherits the global configuration by referencing the VaultAuthGlobal custom resource with the `vaultAuthGlobalRef` field.
- The `kubernetes.role` field in the VaultAuth custom resource spec overrides the value of the corresponding field in the VaultAuthGlobal custom resource. All other fields are inherited from the VaultAuthGlobal custom resource's `spec.kubernetes` field (e.g. `audiences`, `mount`, `serviceAccount`, `namespace`, etc.).

## Vault secret custom resource definitions

Provide the configuration necessary for the Operator to replicate a single Vault Secret to a single Kubernetes Secret. Each supported CRD is specialized to a "class" of Vault secret, documented below.

### VaultStaticSecret custom resource

Provides the configuration necessary for the Operator to synchronize a single Vault "static" Secret to a single Kubernetes Secret.<br />
Supported secrets engines: [kv-v2](/vault/docs/secrets/kv/kv-v2), [kv-v1](/vault/docs/secrets/kv/kv-v1)

#### KV version 1 secret example

The KV secrets engine's `kvv1` mount path is specified under `spec.mount` of the `VaultStaticSecret` custom resource. Please consult [KV Secrets Engine - Version 1 - Setup](/vault/docs/secrets/kv/kv-v1#setup) for configuring KV secrets engine version 1. The following results in a request to `http://127.0.0.1:8200/v1/kvv1/eng/apikey/google` to retrieve the secret:

```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  namespace: vso-example
  name: vault-static-secret-v1
spec:
  vaultAuthRef: vault-auth
  mount: kvv1
  type: kv-v1
  path: eng/apikey/google
  refreshAfter: 60s
  destination:
    create: true
    name: static-secret1
```

#### KV version 2 secret example

Set the KV secrets engine (`kvv2`) mount path with the `spec.mount` parameter of your `VaultStaticSecret` custom resource. For more advanced KV secrets engine version 2 configuration options, consult the [KV Secrets Engine - Version 2 - Setup](/vault/docs/secrets/kv/kv-v2#setup) guide.

For example, to send requests to `http://127.0.0.1:8200/v1/kvv2/eng/apikey/google` to retrieve secrets:

```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  namespace: vso-example
  name: vault-static-secret-v2
spec:
  vaultAuthRef: vault-auth
  mount: kvv2
  type: kv-v2
  path: eng/apikey/google
  version: 2
  refreshAfter: 60s
  destination:
    create: true
    name: static-secret2
```

### VaultPKISecret custom resource

Provides the configuration necessary for the Operator to synchronize a single Vault "PKI" Secret to a single Kubernetes Secret.<br />
Supported secrets engines: [pki](/vault/docs/secrets/pki)

The PKI secrets engine's mount path is specified under `spec.mount` of the `VaultPKISecret` custom resource. Please consult [PKI Secrets Engine - Setup and Usage](/vault/docs/secrets/pki/setup) for configuring the PKI secrets engine. The following results in a request to `http://127.0.0.1:8200/v1/pki/issue/default` to generate TLS certificates:

```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultPKISecret
metadata:
  namespace: vso-example
  name: vault-pki
spec:
  vaultAuthRef: vault-auth
  mount: pki
  role: default
  commonName: example.com
  format: pem
  expiryOffset: 1s
  ttl: 60s
  namespace: tenant-1
  destination:
    create: true
    name: pki1
```

### VaultDynamicSecret custom resource

Provides the configuration necessary for the Operator to synchronize a single Vault "dynamic" Secret to a single Kubernetes Secret.<br />
Supported secrets engines (non-exhaustive): [databases](/vault/docs/secrets/databases), [aws](/vault/docs/secrets/aws), [azure](/vault/docs/secrets/azure), [gcp](/vault/docs/secrets/gcp)

#### Database secret example

Set the database secrets engine mount path (`db`) with the `spec.mount` of your `VaultDynamicSecret` custom resource. For more advanced database secrets engine configuration options, consult the [Database Secrets Engine - Setup](/vault/docs/secrets/databases#setup) guide.

For example, to send requests to `http://127.0.0.1:8200/v1/db/creds/my-postgresql-role` to generate a new credential:

```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  namespace: vso-example
  name: vault-dynamic-secret-db
spec:
  vaultAuthRef: vault-auth
  mount: db
  path: creds/my-postgresql-role
  destination:
    create: true
    name: dynamic-db
```

#### AWS secret example

Set the AWS secrets engine mount path (`aws`) with the `spec.mount` parameter of your `VaultDynamicSecret` custom resource. For more advanced AWS secrets engine configuration options, consult the [AWS Secrets Engine - Setup](/vault/docs/secrets/aws#setup) guide.

For example, to send requests to `http://127.0.0.1:8200/v1/aws/creds/my-iam-role` to generate a new IAM credential:

```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  namespace: vso-example
  name: vault-dynamic-secret-aws-iam
spec:
  vaultAuthRef: vault-auth
  mount: aws
  path: creds/my-iam-role
  destination:
    create: true
    name: dynamic-aws-iam
```

To send requests to `http://127.0.0.1:8200/v1/aws/sts/my-sts-role` to generate a new STS credential:

```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  namespace: vso-example
  name: vault-dynamic-secret-aws-sts
spec:
  vaultAuthRef: vault-auth
  mount: aws
  path: sts/my-sts-role
  destination:
    create: true
    name: dynamic-aws-sts
```

#### HCP Vault Secrets example

```yaml
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  namespace: vso-example
  name: vault-dynamic-secret-aws-iam-role
spec:
  vaultAuthRef: vault-auth
  mount: aws
  path: creds/my-iam-role
  destination:
    create: true
    name: dynamic-aws-iam-role
```

@include 'vso/blurb-api-reference.mdx'

## Vault client cache

The Vault Secrets Operator can optionally cache Vault client information such as Vault tokens and leases in Kubernetes Secrets within its own namespace. The client cache enables seamless upgrades because Vault tokens and dynamic secret leases can continue to be tracked and renewed through leadership changes.

Client cache persistence and encryption is not enabled by default because it requires extra configuration and Vault Server setup. VSO supports encrypting the client cache using the Vault Server's [transit secrets engine](/vault/docs/secrets/transit). The [Encrypted client cache](/vault/docs/platform/k8s/vso/sources/vault/client-cache) guide will walk you through the steps to enable and configure client cache encryption.

## Instant updates <EnterpriseAlert inline="true" />

<Tip title="Feature availability">

VSO v0.8.0+

</Tip>

The Vault Secrets Operator can instantly update Kubernetes Secrets when changes are made in Vault, by subscribing to [Vault Events][vault-events] for change notification. Setting a refresh interval (e.g. [`refreshAfter`][vss-spec]) is still recommended since event message delivery is not guaranteed.

Supported secret types:

- [VaultStaticSecret](#vaultstaticsecret-custom-resource): [kv-v1](/vault/docs/secrets/kv/kv-v1), [kv-v2](/vault/docs/secrets/kv/kv-v2)

<Note title="Requires Vault Enterprise 1.16.3+">

The instant updates option requires [Vault Enterprise](/vault/docs/enterprise) 1.16.3+, due to the use of [Vault Event Notifications][vault-events].

</Note>

The [Instant updates](/vault/docs/platform/k8s/vso/sources/vault/instant-updates) guide will walk you through the steps to enable instant updates for a VaultStaticSecret.

[vss-spec]: /vault/docs/platform/k8s/vso/api-reference#vaultstaticsecretspec
[vault-events]: /vault/docs/concepts/events

## Tutorial

Refer to the [Vault Secrets Operator on Kubernetes](/vault/tutorials/kubernetes/vault-secrets-operator) tutorial to learn the end-to-end workflow using the Vault Secrets Operator.
"}
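Once VSO syncs a Vault secret, the resulting Kubernetes Secret is consumed like any other. A minimal sketch of a workload consuming the `static-secret1` Secret produced by the kv-v1 example above; the Pod name and container image are illustrative assumptions, not part of VSO:

```yaml
---
# Illustrative consumer of a VSO-synced Kubernetes Secret (Pod name and
# image are assumptions; only the Secret name comes from the example above).
apiVersion: v1
kind: Pod
metadata:
  namespace: vso-example
  name: app-using-static-secret
spec:
  containers:
    - name: app
      image: nginx:stable
      # Expose every key of the synced Secret as environment variables.
      envFrom:
        - secretRef:
            name: static-secret1
```

Because VSO performs a rollout-restart on rotation only for Deployment-style targets configured with `rolloutRestartTargets`, bare Pods like this sketch see updated values only on restart.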
{"questions":"vault Instant updates for a VaultStaticSecret page title Instant updates with Vault Secrets Operator Vault Secrets Operator VSO supports instant updates for layout docs Enable instant updates with Vault Secrets Operator","answers":"---\nlayout: docs\npage_title: Instant updates with Vault Secrets Operator\ndescription: >-\n  Enable instant updates with Vault Secrets Operator.\n---\n\n# Instant updates for a VaultStaticSecret\n\nVault Secrets Operator (VSO) supports instant updates for\n[VaultStaticSecrets][vss-spec] by subscribing to event notifications from Vault.\n\n## Before you start\n\n- **You must have [Vault Secrets Operator](\/vault\/docs\/platform\/k8s\/vso\/sources\/vault) installed**.\n- **You must use [Vault Enterprise](\/vault\/docs\/enterprise) version 1.16.3 or later**.\n\n## Step 1: Set event permissions\n\nGrant these permissions in the policy associated with the VaultAuth role:\n\n  ```hcl\n  path \"<kv mount>\/<kv secret path>\" {\n    capabilities = [\"read\", \"list\", \"subscribe\"]\n    subscribe_event_types = [\"*\"]\n  }\n\n  path \"sys\/events\/subscribe\/kv*\" {\n    capabilities = [\"read\"]\n  }\n  ```\n\n<Tip>\n\nSee [Event Notifications Policies][events-policies] for more information on\nVault event notification permissions.\n\n<\/Tip>\n\n## Step 2: Enable instant updates on the VaultStaticSecret\n\nSet `syncConfig.instantUpdates=true` in the [VaultStaticSecret spec][vss-spec]:\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultStaticSecret\nmetadata:\n  namespace: vso-example\n  name: vault-static-secret-v2\nspec:\n  vaultAuthRef: vault-auth\n  mount: <kv mount>\n  type: kv-v2\n  path: <kv secret path>\n  version: 2\n  refreshAfter: 1h\n  destination:\n    create: true\n    name: static-secret2\n  syncConfig:\n    instantUpdates: true\n```\n\n## Debugging\n\nCheck Kubernetes events on the VaultStaticSecret resource to see if VSO\nsubscribed to Vault event notifications.\n\n### Example: VSO is 
subscribed to Vault event notifications for the secret\n\n```shell-session\n$ kubectl describe vaultstaticsecret vault-static-secret-v2 -n vso-example\n...\nEvents:\n  Type    Reason               Age              From               Message\n  ----    ------               ----             ----               -------\n  Normal  SecretSynced         2s               VaultStaticSecret  Secret synced\n  Normal  EventWatcherStarted  2s (x2 over 2s)  VaultStaticSecret  Started watching events\n  Normal  SecretRotated        2s               VaultStaticSecret  Secret synced\n```\n\n### Example: The VaultAuth role policy lacks the required event permissions\n\n```shell-session\n$ kubectl describe vaultstaticsecret vault-static-secret-v2 -n vso-example\n...\nEvents:\n  Type     Reason             Age   From               Message\n  ----     ------             ----  ----               -------\n  Normal   SecretSynced       2s    VaultStaticSecret  Secret synced\n  Warning  EventWatcherError  2s    VaultStaticSecret  Error while watching events: \n   failed to connect to vault websocket: error returned when opening event stream\n   web socket to wss:\/\/vault.vault.svc.cluster.local:8200\/v1\/sys\/events\/subscribe\/kv%2A?json=true, \n   ensure VaultAuth role has correct permissions and Vault is Enterprise version \n   1.16 or above: {\"errors\":[\"1 error occurred:\\n\\t* permission denied\\n\\n\"]}\n  Normal   SecretRotated      2s    VaultStaticSecret  Secret synced\n```\n\n[vss-spec]: \/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultstaticsecretspec\n[vault-events]: \/vault\/docs\/concepts\/events\n[events-policies]: \/vault\/docs\/concepts\/events#policies","site":"vault","answers_cleaned":"    layout  docs page title  Instant updates with Vault Secrets Operator description       Enable instant updates with Vault Secrets Operator         Instant updates for a VaultStaticSecret  Vault Secrets Operator  VSO  supports instant updates for  VaultStaticSecrets  vss spec  
by subscribing to event notifications from Vault"}
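The `vaultAuthRef: vault-auth` field in the VaultStaticSecret example above assumes a VaultAuth resource exists in the same namespace and is bound to a Vault role carrying the event-subscription policy from Step 1. A minimal sketch; the auth mount path and role name are placeholder assumptions:

```yaml
---
# Illustrative VaultAuth backing `vaultAuthRef: vault-auth` (mount and role
# names are placeholders; use the values configured in your Vault server).
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  namespace: vso-example
  name: vault-auth
spec:
  method: kubernetes
  mount: kubernetes
  kubernetes:
    # Role whose policy grants `subscribe` on the KV path and `read` on
    # sys/events/subscribe/kv*, per Step 1.
    role: vso-events-role
    serviceAccount: default
```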
{"questions":"vault Persist and encrypt the Vault client cache Enable and encrypt the Vault client cache for dynamic secrets with Vault layout docs Secrets Operator page title Persist and encrypt the Vault client cache","answers":"---\nlayout: docs\npage_title: Persist and encrypt the Vault client cache\ndescription: >-\n  Enable and encrypt the Vault client cache for dynamic secrets with Vault\n  Secrets Operator.\n---\n\n# Persist and encrypt the Vault client cache\n\nBy default, the [Vault client cache](\/vault\/docs\/platform\/k8s\/vso\/sources\/vault#vault-client-cache) does not persist. You can use the\n[transit secrets engine](\/vault\/docs\/secrets\/transit) with Vault Secrets Operator (VSO)\nto store and encrypt the client cache in your Vault server.\n\n<Highlight title=\"Dynamic secrets best practice\">\n\n  We strongly recommend persisting and encrypting the client cache if you use\n  [Vault dynamic secrets](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultdynamicsecret),\n  so that dynamic secret leases are maintained through restarts and upgrades.\n\n<\/Highlight>\n\n## Before you start\n\n- **You must have [Vault Secrets Operator](\/vault\/docs\/platform\/k8s\/vso\/sources\/vault) installed**.\n- **You must have the [`transit` secrets engine](\/vault\/docs\/secrets\/transit) enabled**.\n- **You must have the [`kubernetes` authentication engine](\/vault\/docs\/auth\/kubernetes) enabled**.\n\n## Step 1: Configure a key and policy for VSO\n\nUse the Vault CLI or Terraform to add a key to `transit` and define policies\nfor encrypting and decrypting cache information:\n\n<CodeTabs>\n<CodeBlockConfig>\n\n```shell-session\nexport VAULT_NAMESPACE=<VAULT_NAMESPACE>\nexport VAULT_TRANSIT_PATH=<VAULT_TRANSIT_PATH>\n\nvault write -f ${VAULT_TRANSIT_PATH}\/keys\/vso-client-cache\n\nvault policy write operator - <<EOH\npath \"${VAULT_TRANSIT_PATH}\/encrypt\/vso-client-cache\" {\n  capabilities = [\"create\", \"update\"]\n}\npath 
\"${VAULT_TRANSIT_PATH}\/decrypt\/vso-client-cache\" {\n  capabilities = [\"create\", \"update\"]\n}\nEOH\n```\n\n<\/CodeBlockConfig>\n<CodeBlockConfig>\n\n```hcl\nlocals {\n  transit_path      = \"<VAULT_TRANSIT_PATH>\"\n  transit_namespace = \"<VAULT_NAMESPACE>\"\n}\n\nresource \"vault_transit_secret_cache_config\" \"cache\" {\n  namespace = local.transit_namespace\n  backend   = local.transit_path\n  size      = 500\n}\n\nresource \"vault_transit_secret_backend_key\" \"cache\" {\n  namespace        = local.transit_namespace\n  backend          = local.transit_path\n  name             = \"vso-client-cache\"\n  deletion_allowed = true\n}\n\ndata \"vault_policy_document\" \"operator_transit\" {\n  rule {\n    path         = \"${local.transit_path}\/encrypt\/${vault_transit_secret_backend_key.cache.name}\"\n    capabilities = [\"create\", \"update\"]\n    description  = \"encrypt\"\n  }\n  rule {\n    path         = \"${local.transit_path}\/decrypt\/${vault_transit_secret_backend_key.cache.name}\"\n    capabilities = [\"create\", \"update\"]\n    description  = \"decrypt\"\n  }\n}\n\nresource \"vault_policy\" \"operator\" {\n  namespace = vault_transit_secret_backend_key.cache.namespace\n  name      = \"operator\"\n  policy    = data.vault_policy_document.operator_transit.hcl\n}\n```\n\n<\/CodeBlockConfig>\n<\/CodeTabs>\n\n## Step 2: Create a kubernetes authentication role\n\nUse the Vault CLI or Terraform to create a Kubernetes authentication role for\nVault Secrets Operator.\n\n<CodeTabs>\n<CodeBlockConfig>\n\n```shell-session\nexport VAULT_NAMESPACE=<VAULT_NAMESPACE>\n\nvault write auth\/<VAULT_KUBERNETES_PATH>\/role\/operator \\\n  bound_service_account_names=vault-secrets-operator-controller-manager \\\n  bound_service_account_namespaces=vault-secrets-operator \\\n  token_period=\"24h\" \\\n  token_policies=operator \\\n  audience=\"vault\"\n```\n\n<\/CodeBlockConfig>\n<CodeBlockConfig>\n\n```hcl\ndata \"vault_auth_backend\" \"kubernetes\" {\n  namespace = 
\"<VAULT_NAMESPACE>\"\n  path      = \"<VAULT_KUBERNETES_PATH>\"\n}\n\nresource \"vault_kubernetes_auth_backend_config\" \"local\" {\n  namespace       = data.vault_auth_backend.kubernetes.namespace\n  backend         = data.vault_auth_backend.kubernetes.path\n  kubernetes_host = \"https:\/\/kubernetes.default.svc\"\n}\n\nresource \"vault_kubernetes_auth_backend_role\" \"operator\" {\n  namespace                        = data.vault_auth_backend.kubernetes.namespace\n  backend                          = vault_kubernetes_auth_backend_config.local.backend\n  role_name                        = \"operator\"\n  bound_service_account_names      = [\"vault-secrets-operator-controller-manager\"]\n  bound_service_account_namespaces = [\"vault-secrets-operator\"]\n  token_period                     = 120\n  token_policies = [\n    vault_policy.operator.name,\n  ]\n  audience = \"vault\"\n}\n```\n\n<\/CodeBlockConfig>\n<\/CodeTabs>\n\n## Step 3: Configure a Vault connection for VSO\n\nUse the Vault Secrets Operator API to add a\n[VaultConnection](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultconnection)\nbetween VSO and your Vault server.\n\n<Note>If you already have a connection for VSO, continue to the next step<\/Note>\n\n```yaml\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultConnection\nmetadata:\n  name: local-vault-server\n  namespace: vault-secrets-operator\nspec:\n  address: 'https:\/\/vault.vault.svc.cluster.local:8200'\n```\n\n## Step 4: Enable encrypted client cache storage\n\n<Tabs>\n\n<Tab heading=\"Helm\">\n\nFor [Helm installs](\/vault\/docs\/platform\/k8s\/vso\/installation#installation-using-helm),\ninstall (or upgrade) your [`server.clientCache`](\/vault\/docs\/platform\/k8s\/vso\/helm#v-controller-manager-clientcache)\nconfiguration:\n\n```yaml\ncontroller:\n  manager:\n    clientCache:\n      persistenceModel: direct-encrypted\n      storageEncryption:\n        enabled: true\n        vaultConnectionRef: local-vault-server\n        keyName: 
vso-client-cache\n        transitMount: <VAULT_TRANSIT_PATH>\n        namespace: <VAULT_NAMESPACE>\n        method: kubernetes\n        mount: <VAULT_KUBERNETES_PATH>\n        kubernetes:\n          role: operator\n          serviceAccount: vault-secrets-operator-controller-manager\n          tokenAudiences: [\"vault\"]\n```\n\n<\/Tab>\n\n<Tab heading=\"OLM\/OperatorHub\">\n\nFor [OpenShift OperatorHub](\/vault\/docs\/platform\/k8s\/vso\/openshift#operatorhub)\ninstalls:\n\n1. Add a [VaultAuth](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultauth)\n   entry to your cluster for storage:\n    - set `cacheStorageEncryption` to `true`\n    - add a [spec.storageEncryption](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultauthspec)\n      configuration.\n\n    ```yaml\n    apiVersion: secrets.hashicorp.com\/v1beta1\n    kind: VaultAuth\n    metadata:\n      name: operator\n      namespace: vault-secrets-operator\n      labels:\n        cacheStorageEncryption: 'true'\n    spec:\n      kubernetes:\n        role: operator\n        serviceAccount: vault-secrets-operator-controller-manager\n        tokenExpirationSeconds: 600\n        audiences: [\"vault\"]\n      method: kubernetes\n      mount: <VAULT_KUBERNETES_PATH>\n      namespace: <VAULT_NAMESPACE>\n      storageEncryption:\n        keyName: vso-client-cache\n        mount: <VAULT_TRANSIT_PATH>\n      vaultConnectionRef: local-vault-server\n    ```\n\n1. 
Set the `VSO_CLIENT_CACHE_PERSISTENCE_MODEL` environment variable in VSO's\n   subscription:\n\n    <CodeBlockConfig highlight=\"6-7\">\n\n    ```yaml\n    spec:\n      name: vault-secrets-operator\n      channel: stable\n      config:\n        env:\n          - name: VSO_CLIENT_CACHE_PERSISTENCE_MODEL\n            value: direct-encrypted\n    ```\n\n    <\/CodeBlockConfig>\n\n   With the operator installed through OperatorHub, edit your subscription and\n   set the `VSO_CLIENT_CACHE_PERSISTENCE_MODEL` environment variable.\n\n<Tabs>\n<Tab heading=\"Web Console\">\n\n  <ol>\n    <li>Navigate to the <b>Operators<\/b> menu<\/li>\n    <li>Select <b>Installed Operators<\/b><\/li>\n    <li>Select \"Vault Secrets Operator\"<\/li>\n    <li>Click \"Edit Subscription\" in the top right action menu<\/li>\n  <\/ol>\n\n<\/Tab>\n<Tab heading=\"CLI\">\n\n  ```shell-session\n  oc edit subscription      \\\n    vault-secrets-operator  \\\n    -n vault-secrets-operator\n  ```\n\n<\/Tab>\n<\/Tabs>\n\nThe pod in the operator deployment restarts once the\n`VSO_CLIENT_CACHE_PERSISTENCE_MODEL` environment variable persists.\n\n<\/Tab>\n\n<\/Tabs>\n\n## Optional: Verify client cache storage and encryption\n\n1. Confirm the Vault Secrets Operator logs the following information on startup:\n\n    <CodeBlockConfig hideClipboard>\n\n    ```json\n    Starting manager    {\"clientCachePersistenceModel\": \"direct-encrypted\",\n        \"clientCacheSize\": 10000}\n    ```\n\n    <\/CodeBlockConfig>\n\n1. 
Confirm the Vault Secrets Operator logs a \"Setting up Vault Client for\n   storage encryption\" message when authenticating to Vault on behalf of a user:\n\n    <CodeBlockConfig hideClipboard>\n\n    ```json\n    {\"level\":\"info\",\"ts\":\"2024-02-22T00:41:46Z\",\"logger\":\"clientCacheFactory\",\n    \"msg\":\"Setting up Vault Client for storage encryption\",\"persist\":true,\n    \"enforceEncryption\":true,\"cacheKey\":\"kubernetes-59ebf88ccb963a22226bad\"}\n    ```\n\n    <\/CodeBlockConfig>\n\n1. Verify the encrypted cache is stored as Kubernetes secrets under the correct\n   namespace with the prefix `vso-cc-<auth method>`. For example:\n\n    <CodeBlockConfig hideClipboard>\n\n    ```shell-session\n    $ kubectl get secrets -n vault-secrets-operator\n    ...\n    NAME                                       TYPE     DATA   AGE\n    vso-cc-kubernetes-0147431c618992b6adfed1   Opaque   2      73s\n    ...\n    ```\n\n    <\/CodeBlockConfig>","site":"vault","answers_cleaned":"    layout  docs page title  Persist and encrypt the Vault client cache description       Enable and encrypt the Vault client cache for dynamic secrets with Vault   Secrets Operator         Persist and encrypt the Vault client cache  By default  the  Vault client cache   vault docs platform k8s vso sources vault vault client cache  does not persist  You can use the  transit secrets engine   vault docs secrets transit  with Vault Secrets Operator  VSO  to store and encrypt the client cache in your Vault server    Highlight title  Dynamic secrets best practice      We strongly recommend persisting and encrypting the client cache if you use    Vault dynamic secrets   vault docs platform k8s vso api reference vaultdynamicsecret     so that dynamic secret leases are maintained through restarts and upgrades     Highlight      Before you start      You must have  Vault Secrets Operator   vault docs platform k8s vso sources vault  installed        You must have the   transit  secrets engine   vault 
docs secrets transit  enabled        You must have the   kubernetes  authentication engine   vault docs auth kubernetes  enabled        Step 1  Configure a key and policy for VSO  Use the Vault CLI or Terraform to add a key to  transit  and define policies for encrypting and decrypting cache information    CodeTabs   CodeBlockConfig      shell session export VAULT NAMESPACE  VAULT NAMESPACE  export VAULT TRANSIT PATH  VAULT TRANSIT PATH   vault write  f   VAULT TRANSIT PATH  keys vso client cache  vault policy write operator     EOH path    VAULT TRANSIT PATH  encrypt vso client cache      capabilities     create    update     path    VAULT TRANSIT PATH  decrypt vso client cache      capabilities     create    update     EOH        CodeBlockConfig   CodeBlockConfig      hcl locals     transit path          VAULT TRANSIT PATH     transit namespace     VAULT NAMESPACE      resource  vault transit secret cache config   cache      namespace   local transit namespace   backend     local transit path   size        500    resource  vault transit secret backend key   cache      namespace          local transit namespace   backend            local transit path   name                vso client cache    deletion allowed   true    data  vault policy document   operator transit      rule       path              local transit path  encrypt   vault transit secret backend key cache name       capabilities     create    update       description     encrypt        rule       path              local transit path  decrypt   vault transit secret backend key cache name       capabilities     create    update       description     decrypt         resource  vault policy   operator      namespace   vault transit secret backend key cache namespace   name         operator    policy      data vault policy document operator transit hcl          CodeBlockConfig    CodeTabs      Step 2  Create a kubernetes authentication role  Use the Vault CLI or Terraform to create a Kubernetes authentication 
role for Vault Secrets Operator:\n\n<CodeTabs>\n<CodeBlockConfig>\n\n```shell-session\n$ export VAULT_NAMESPACE=<VAULT_NAMESPACE>\n$ vault write auth\/<VAULT_KUBERNETES_PATH>\/role\/operator \\\n    bound_service_account_names=vault-secrets-operator-controller-manager \\\n    bound_service_account_namespaces=vault-secrets-operator \\\n    token_period=\"24h\" \\\n    token_policies=operator \\\n    audience=vault\n```\n\n<\/CodeBlockConfig>\n<CodeBlockConfig>\n\n```hcl\ndata \"vault_auth_backend\" \"kubernetes\" {\n  namespace = \"<VAULT_NAMESPACE>\"\n  path      = \"<VAULT_KUBERNETES_PATH>\"\n}\n\nresource \"vault_kubernetes_auth_backend_config\" \"local\" {\n  namespace       = data.vault_auth_backend.kubernetes.namespace\n  backend         = data.vault_auth_backend.kubernetes.path\n  kubernetes_host = \"https:\/\/kubernetes.default.svc\"\n}\n\nresource \"vault_kubernetes_auth_backend_role\" \"operator\" {\n  namespace                        = data.vault_auth_backend.kubernetes.namespace\n  backend                          = vault_kubernetes_auth_backend_config.local.backend\n  role_name                        = \"operator\"\n  bound_service_account_names      = [\"vault-secrets-operator-controller-manager\"]\n  bound_service_account_namespaces = [\"vault-secrets-operator\"]\n  token_period                     = 120\n  token_policies                   = [vault_policy.operator.name]\n  audience                         = \"vault\"\n}\n```\n\n<\/CodeBlockConfig>\n<\/CodeTabs>\n\n### Step 3: Configure a Vault connection for VSO\n\nUse the Vault Secrets Operator API to add a [VaultConnection](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultconnection) between VSO and your Vault server.\n\n<Note>If you already have a connection for VSO, continue to the next step.<\/Note>\n\n```yaml\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultConnection\nmetadata:\n  name: local-vault-server\n  namespace: vault-secrets-operator\nspec:\n  address: https:\/\/vault.vault.svc.cluster.local:8200\n```\n\n### Step 4: Enable encrypted client cache storage\n\n<Tabs>\n<Tab heading=\"Helm\">\n\nFor [Helm installs](\/vault\/docs\/platform\/k8s\/vso\/installation#installation-using-helm), install or upgrade your server with a [clientCache](\/vault\/docs\/platform\/k8s\/vso\/helm#v-controller-manager-clientcache) configuration:\n\n```yaml\ncontroller:\n  manager:\n    clientCache:\n      persistenceModel: direct-encrypted\n      storageEncryption:\n        enabled: true\n        vaultConnectionRef: local-vault-server\n        keyName: vso-client-cache\n        transitMount: <VAULT_TRANSIT_PATH>\n        namespace: <VAULT_NAMESPACE>\n        method: kubernetes\n        mount: <VAULT_KUBERNETES_PATH>\n        kubernetes:\n          role: operator\n          serviceAccount: vault-secrets-operator-controller-manager\n          tokenAudiences: [\"vault\"]\n```\n\n<\/Tab>\n<Tab heading=\"OLM\/OperatorHub\">\n\nFor [OpenShift OperatorHub](\/vault\/docs\/platform\/k8s\/vso\/openshift#operatorhub) installs:\n\n1. Add a [VaultAuth](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultauth) entry to your cluster for storage encryption: set the `cacheStorageEncryption` label to `true` and add a [`spec.storageEncryption`](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultauthspec) configuration:\n\n    ```yaml\n    apiVersion: secrets.hashicorp.com\/v1beta1\n    kind: VaultAuth\n    metadata:\n      name: operator\n      namespace: vault-secrets-operator\n      labels:\n        cacheStorageEncryption: 'true'\n    spec:\n      kubernetes:\n        role: operator\n        serviceAccount: vault-secrets-operator-controller-manager\n        tokenExpirationSeconds: 600\n        audiences: [vault]\n      method: kubernetes\n      mount: <VAULT_KUBERNETES_PATH>\n      namespace: <VAULT_NAMESPACE>\n      storageEncryption:\n        keyName: vso-client-cache\n        mount: <VAULT_TRANSIT_PATH>\n      vaultConnectionRef: local-vault-server\n    ```\n\n1. Set the `VSO_CLIENT_CACHE_PERSISTENCE_MODEL` environment variable in VSO's subscription:\n\n    <CodeBlockConfig highlight=\"6-7\">\n\n    ```yaml\n    spec:\n      name: vault-secrets-operator\n      channel: stable\n      config:\n        env:\n          - name: VSO_CLIENT_CACHE_PERSISTENCE_MODEL\n            value: direct-encrypted\n    ```\n\n    <\/CodeBlockConfig>\n\n    With the operator installed through OperatorHub, edit your subscription and set the `VSO_CLIENT_CACHE_PERSISTENCE_MODEL` environment variable:\n\n    <Tabs>\n    <Tab heading=\"Web Console\">\n\n    <ol>\n    <li>Navigate to the <b>Operators<\/b> menu<\/li>\n    <li>Select <b>Installed Operators<\/b><\/li>\n    <li>Select Vault Secrets Operator<\/li>\n    <li>Click Edit Subscription in the top right action menu<\/li>\n    <\/ol>\n\n    <\/Tab>\n    <Tab heading=\"CLI\">\n\n    ```shell-session\n    $ oc edit subscription \\\n        vault-secrets-operator \\\n        -n vault-secrets-operator\n    ```\n\n    <\/Tab>\n    <\/Tabs>\n\n    The pod in the operator deployment restarts once the `VSO_CLIENT_CACHE_PERSISTENCE_MODEL` environment variable persists.\n\n<\/Tab>\n<\/Tabs>\n\n### Optional: Verify client cache storage and encryption\n\n1. Confirm the Vault Secrets Operator logs the following information on startup:\n\n    <CodeBlockConfig hideClipboard>\n\n    ```json\n    Starting manager ... {\"clientCachePersistenceModel\": \"direct-encrypted\", ..., \"clientCacheSize\": 10000}\n    ```\n\n    <\/CodeBlockConfig>\n\n1. Confirm the Vault Secrets Operator logs a \"Setting up Vault Client for storage encryption\" message when authenticating to Vault on behalf of a user:\n\n    <CodeBlockConfig hideClipboard>\n\n    ```json\n    {\"level\": \"info\", \"ts\": \"2024-02-22T00:41:46Z\", \"logger\": \"clientCacheFactory\", \"msg\": \"Setting up Vault Client for storage encryption\", \"persist\": true, \"enforceEncryption\": true, \"cacheKey\": \"kubernetes-59ebf88ccb963a22226bad\"}\n    ```\n\n    <\/CodeBlockConfig>\n\n1. Verify the encrypted cache is stored as Kubernetes secrets under the correct namespace with the prefix `vso-cc-<auth method>`. For example:\n\n    <CodeBlockConfig hideClipboard>\n\n    ```shell-session\n    $ kubectl get secrets -n vault-secrets-operator\n    NAME                                       TYPE     DATA   AGE\n    vso-cc-kubernetes-0147431c618992b6adfed1   Opaque   2      73s\n    ```\n\n    <\/CodeBlockConfig>\n"}
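The client cache configuration above points at a transit mount (`<VAULT_TRANSIT_PATH>`), a key named `vso-client-cache`, and an `operator` policy, but does not show how those are provisioned. A minimal Terraform sketch under those assumptions (the mount path, key name, and policy rules here are illustrative, not taken verbatim from the record):

```hcl
# Transit mount used to encrypt the VSO client cache.
resource "vault_mount" "transit" {
  path = "<VAULT_TRANSIT_PATH>"
  type = "transit"
}

# Key referenced by keyName: vso-client-cache in the VSO configuration.
resource "vault_transit_secret_backend_key" "vso_client_cache" {
  backend = vault_mount.transit.path
  name    = "vso-client-cache"
}

# Policy attached to the operator auth role; grants encrypt/decrypt on
# the cache key only, the minimum Transit access cache encryption needs.
resource "vault_policy" "operator" {
  name   = "operator"
  policy = <<EOT
path "<VAULT_TRANSIT_PATH>/encrypt/vso-client-cache" {
  capabilities = ["create", "update"]
}
path "<VAULT_TRANSIT_PATH>/decrypt/vso-client-cache" {
  capabilities = ["create", "update"]
}
EOT
}
```

Scoping the policy to the single key keeps a compromised operator token from using the transit mount as a general encryption oracle.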
{"questions":"vault page title AWS auth support for Vault Secrets Operator The Vault Secrets Operator VSO supports layout docs AWS auth support for Vault Secrets Operator Learn how AWS authentication works for Vault Secrets Operator","answers":"---\nlayout: docs\npage_title: AWS auth support for Vault Secrets Operator\ndescription: >-\n  Learn how AWS authentication works for Vault Secrets Operator\n---\n\n# AWS auth support for Vault Secrets Operator\n\nThe Vault Secrets Operator (VSO) supports\n [AWS authentication](\/vault\/docs\/auth\/aws) when accessing Vault. VSO\n can retrieve AWS credentials:\n\n- from an [IRSA-enabled Kubernetes service account][aws-irsa].\n- by inferring credentials from the underlying EKS node role.\n- by inferring credentials from the EC2 instance profile of the instance\n   where the operator pod is running.\n- from an explicitly provided static access key id and secret key.\n\nThe following examples illustrate how to configure a Vault role and the corresponding VaultAuth profile in VSO for different AWS authentication scenarios.\n\n## IRSA\n\n1. Follow the Amazon documentation for [IAM roles for service accounts][aws-irsa]\n   to add an OIDC provider so your Kubernetes service account can assume an IAM\n   role.\n\n1. 
Create an appropriate authentication role in your Vault instance:\n\n  <CodeTabs>\n  <CodeBlockConfig>\n\n  ```shell-session\n  $ vault write auth\/aws\/role\/<VAULT_AWS_IRSA_ROLE> \\\n      auth_type=\"iam\" \\\n      policies=\"default\" \\\n      bound_iam_principal_arn=\"arn:aws:iam::<ACCOUNT_ID>:role\/<IAM_IRSA_ROLE>\"\n  ```\n\n  <\/CodeBlockConfig>\n  <CodeBlockConfig>\n\n  ```hcl\n  resource \"vault_aws_auth_backend_role\" \"aws_irsa_role\" {\n    backend                  = \"auth\/aws\"\n    role                     = <VAULT_AWS_IRSA_ROLE>\n    auth_type                = \"iam\"\n    token_policies           = [\"default\"]\n    bound_iam_principal_arns = [\n      \"arn:aws:iam::<ACCOUNT_ID>:role\/<IAM_IRSA_ROLE>\",\n    ]\n  }\n  ```\n\n  <\/CodeBlockConfig>\n  <\/CodeTabs>\n\n1. Create the corresponding authentication entry in VSO:\n\n  ```yaml\n  apiVersion: secrets.hashicorp.com\/v1beta1\n  kind: VaultAuth\n  metadata:\n    name: vaultauth-aws-irsa-example\n    namespace: <K8S_NAMESPACE>\n  spec:\n    vaultConnectionRef: <VAULT_CONNECTION_NAME>\n    mount: aws\n    method: aws\n    aws:\n      role: <VAULT_AWS_IRSA_ROLE>\n      region: <AWS_REGION>\n      irsaServiceAccount: <SERVICE_ACCOUNT>\n  ```\n\n<Tip title=\"Terraform has IRSA support\">\n\n  If you use Terraform to manage your Elastic Kubernetes Service (EKS) cluster, the\n  [AWS EKS module](https:\/\/registry.terraform.io\/modules\/terraform-aws-modules\/eks\/aws\/latest)\n  includes IRSA support through the\n  [IRSA submodule](https:\/\/registry.terraform.io\/modules\/terraform-aws-modules\/iam\/aws\/latest\/submodules\/iam-role-for-service-accounts-eks).\n\n<\/Tip>\n\n## Node role\n\n1. 
Create an appropriate authentication role in your Vault instance:\n\n  <CodeTabs>\n  <CodeBlockConfig>\n\n  ```shell-session\n  $ vault write auth\/aws\/role\/<VAULT_AWS_NODE_ROLE> \\\n      auth_type=\"iam\" \\\n      policies=\"default\" \\\n      bound_iam_principal_arn=\"arn:aws:iam::<ACCOUNT_ID>:role\/eks-nodes-<EKS_CLUSTER_NAME>\"\n  ```\n\n  <\/CodeBlockConfig>\n  <CodeBlockConfig>\n\n  ```hcl\n  resource \"vault_aws_auth_backend_role\" \"aws_node_role\" {\n    backend                  = \"auth\/aws\"\n    role                     = <VAULT_AWS_NODE_ROLE>\n    auth_type                = \"iam\"\n    token_policies           = [\"default\"]\n    bound_iam_principal_arns = [\n      \"arn:aws:iam::<ACCOUNT_ID>:role\/eks-nodes-<EKS_CLUSTER_NAME>\",\n    ]\n  }\n  ```\n\n  <\/CodeBlockConfig>\n  <\/CodeTabs>\n\n1. Create the corresponding authentication entry in VSO:\n\n  ```yaml\n  apiVersion: secrets.hashicorp.com\/v1beta1\n  kind: VaultAuth\n  metadata:\n    name: vaultauth-aws-node-example\n    namespace: <K8S_NAMESPACE>\n  spec:\n    vaultConnectionRef: <VAULT_CONNECTION_NAME>\n    mount: aws\n    method: aws\n    aws:\n      role: <VAULT_AWS_NODE_ROLE>\n      region: <AWS_REGION>\n  ```\n\n## Instance profile\n\n1. 
Create an appropriate authentication role in your Vault instance:\n\n  <CodeTabs>\n  <CodeBlockConfig>\n\n  ```shell-session\n  $ vault write auth\/aws\/role\/<VAULT_AWS_INSTANCE_ROLE> \\\n      auth_type=\"iam\" \\\n      policies=\"default\" \\\n      inferred_entity_type=\"ec2_instance\" \\\n      inferred_aws_region=\"<AWS_REGION>\" \\\n      bound_account_id=\"<ACCOUNT_ID>\" \\\n      bound_iam_principal_arn=\"arn:aws:iam::<ACCOUNT_ID>:instance-profile\/eks-<INSTANCE_PROFILE_UUID>\"\n  ```\n\n  <\/CodeBlockConfig>\n  <CodeBlockConfig>\n\n  ```hcl\n  resource \"vault_aws_auth_backend_role\" \"aws_instance_role\" {\n    backend                  = \"auth\/aws\"\n    role                     = <VAULT_AWS_INSTANCE_ROLE>\n    auth_type                = \"iam\"\n    token_policies           = [\"default\"]\n    inferred_entity_type     = \"ec2_instance\"\n    inferred_aws_region      = \"<AWS_REGION>\"\n    bound_account_ids        = [\"<ACCOUNT_ID>\"]\n    bound_iam_principal_arns = [\n      \"arn:aws:iam::<ACCOUNT_ID>:instance-profile\/eks-<INSTANCE_PROFILE_UUID>\",\n    ]\n  }\n  ```\n\n  <\/CodeBlockConfig>\n  <\/CodeTabs>\n\n1. Create the corresponding authentication entry in VSO:\n\n  ```yaml\n  apiVersion: secrets.hashicorp.com\/v1beta1\n  kind: VaultAuth\n  metadata:\n    name: vaultauth-aws-instance-example\n    namespace: <K8S_NAMESPACE>\n  spec:\n    vaultConnectionRef: <VAULT_CONNECTION_NAME>\n    mount: aws\n    method: aws\n    aws:\n      role: <VAULT_AWS_INSTANCE_ROLE>\n      region: <AWS_REGION>\n  ```\n\n## Static credentials\n\n1. 
Create an appropriate authentication role in your Vault instance:\n\n  <CodeTabs>\n  <CodeBlockConfig>\n\n  ```shell-session\n  $ vault write auth\/aws\/role\/<VAULT_AWS_STATIC_ROLE> \\\n      auth_type=\"iam\" \\\n      policies=\"default\" \\\n      bound_iam_principal_arn=\"arn:aws:iam::<ACCOUNT_ID>:role\/<IAM_ROLE>\"\n  ```\n\n  <\/CodeBlockConfig>\n  <CodeBlockConfig>\n\n  ```hcl\n  resource \"vault_aws_auth_backend_role\" \"aws_static_role\" {\n    backend                  = \"auth\/aws\"\n    role                     = <VAULT_AWS_STATIC_ROLE>\n    auth_type                = \"iam\"\n    token_policies           = [\"default\"]\n    bound_iam_principal_arns = [\n      \"arn:aws:iam::<ACCOUNT_ID>:role\/<IAM_ROLE>\",\n    ]\n  }\n  ```\n\n  <\/CodeBlockConfig>\n  <\/CodeTabs>\n\n1. Create the corresponding authentication entry in VSO:\n\n  ```yaml\n  apiVersion: v1\n  kind: Secret\n  metadata:\n    name: aws-static-creds\n    namespace: <K8S_NAMESPACE>\n  data:\n    access_key_id: <AWS_ACCESS_KEY_ID>\n    secret_access_key: <AWS_SECRET_ACCESS_KEY>\n    session_token: <AWS_SESSION_TOKEN>  # session_token is optional\n  ---\n  apiVersion: secrets.hashicorp.com\/v1beta1\n  kind: VaultAuth\n  metadata:\n    name: vaultauth-aws-static-example\n    namespace: <K8S_NAMESPACE>\n  spec:\n    vaultConnectionRef: <VAULT_CONNECTION_NAME>\n    mount: aws\n    method: aws\n    aws:\n      role: <VAULT_AWS_STATIC_ROLE>\n      region: <AWS_REGION>\n      secretRef: aws-static-creds\n  ```\n\n# API\n\nSee the full list of AWS VaultAuth options on the [VSO API page](\/vault\/docs\/platform\/k8s\/vso\/api-reference#vaultauthconfigaws).\n\n[aws-irsa]: https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/iam-roles-for-service-accounts.html","site":"vault"}
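Whichever AWS scenario is used, a syncing resource then references the VaultAuth by name. A minimal sketch using the IRSA example's VaultAuth and assuming a KV v2 mount named `secret` with a secret at path `app` (all names here are illustrative, not from the record above):

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: app-secret
  namespace: <K8S_NAMESPACE>
spec:
  # Authenticate via the AWS-backed VaultAuth defined earlier
  vaultAuthRef: vaultauth-aws-irsa-example
  mount: secret
  type: kv-v2
  path: app
  destination:
    create: true
    name: app-secret
```

The VaultStaticSecret and VaultAuth must live in the same namespace for the reference to resolve without a cross-namespace grant.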
{"questions":"vault Authenticate to Vault with the Vault Secrets Operator Vault authentication in detail page title Vault Secrets Operator Vault authentication details include vso common links mdx layout docs","answers":"---\nlayout: docs\npage_title: 'Vault Secrets Operator: Vault authentication details'\ndescription: >-\n  Authenticate to Vault with the Vault Secrets Operator.\n---\n\n@include 'vso\/common-links.mdx'\n\n# Vault authentication in detail\n\n## Auth configuration\n\nThe Vault Secrets Operator (VSO) relies on `VaultAuth` resources to authenticate with Vault. It uses credential\nproviders to generate the credentials necessary for authentication. For example, when VSO authenticates to a Kubernetes\nauth backend, it generates a token using the Kubernetes service account configured in the VaultAuth resource's\nKubernetes auth method. The service account must be configured in the Kubernetes namespace of the requesting resource:\nif a resource like a `VaultStaticSecret` is created in the `apps` namespace, the service account must be in\nthe `apps` namespace. The rationale behind this approach is to ensure that cross-namespace access is not possible.\n\n## Vault authentication globals\n\nThe `VaultAuthGlobal` resource is a global configuration that allows you to share a single authentication configuration\nacross a set of VaultAuth resources. This is useful when you have multiple VaultAuth resources that share the\nsame base configuration. For example, if you have multiple VaultAuth resources that all authenticate to Vault\nusing the same auth backend, you can create a single VaultAuthGlobal resource that defines the configuration\ncommon to all VaultAuth instances. Options like `mount`, `method`, `namespace`, and method-specific configuration\ncan all be inherited from the VaultAuthGlobal resource. Any field in the VaultAuth resource can be inherited\nfrom a VaultAuthGlobal instance. 
Typically, most fields are inherited from the VaultAuthGlobal, while fields like `role` and\ncredential-provider-specific fields like `serviceAccount` are usually set on the referring\nVaultAuth instance, since they are more specific to the application that requires the VaultAuth resource.\n\n*See [VaultAuthGlobal spec][vag-spec] and [VaultAuth spec][va-spec] for the complete list of available fields.*\n\n\n## VaultAuthGlobal configuration inheritance\n\n- The configuration in the VaultAuth resource takes precedence over the configuration in the VaultAuthGlobal resource.\n- The VaultAuthGlobal can reside in any namespace, but must allow the namespace of the VaultAuth resource to reference it.\n- Default VaultAuthGlobal resources are denoted by the name `default` and are automatically referenced by all VaultAuth resources\n  when `spec.vaultAuthGlobalRef.allowDefault` is set to `true` and VSO is running with the `allow-default-globals`\n  option set in the `-global-vault-auth-options` flag (the default).\n- When a `spec.vaultAuthGlobalRef.namespace` is set, the search for the default VaultAuthGlobal resource is\n  constrained to that namespace. Otherwise, the search order is:\n  1. The default VaultAuthGlobal resource in the referring VaultAuth resource's namespace.\n  2. The default VaultAuthGlobal resource in the Operator's namespace.\n\n\n## Sample use cases and configurations\n\nThe following sections provide some sample use cases and configurations for the VaultAuthGlobal resource. These\nexamples demonstrate how to use the VaultAuthGlobal resource to share a common authentication configuration across a\nset of VaultAuth resources. Like other namespaced VSO custom resource definitions, there can be many VaultAuthGlobal\nresources configured in a single Kubernetes cluster.\n\n### Multiple applications with shared authentication backend\n\nA Vault admin has configured a Kubernetes auth backend in Vault mounted at `kubernetes`. 
The admin expects to have\ntwo applications authenticate using their own roles, and service accounts. The admin creates the necessary roles in\nVault bound to the service accounts and namespaces of the applications.\n\nThe admin creates a default VaultAuthGlobal with the following configuration:\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultAuthGlobal\nmetadata:\n  name: default\n  namespace: admin\nspec:\n  allowedNamespaces:\n    - apps\n  defaultAuthMethod: kubernetes\n  kubernetes:\n    audiences:\n    - vault\n    mount: kubernetes\n    role: default\n    serviceAccount: default\n    tokenExpirationSeconds: 600\n```\n\nA developer creates a `VaultAuth` and VaultStaticSecret resource in their application's namespace with the following\nconfigurations:\n\nApplication 1 would have a configuration like this:\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultAuth\nmetadata:\n  name: app1\n  namespace: apps\nspec:\n  kubernetes:\n    role: app1\n    serviceAccount: app1\n  vaultAuthGlobalRef:\n    allowDefault: true\n    namespace: admin\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultStaticSecret\nmetadata:\n  name: app1-secret\n  namespace: apps\nspec:\n  destination:\n    create: true\n    name: app1-secret\n  hmacSecretData: true\n  mount: apps\n  path: app1\n  type: kv-v2\n  vaultAuthRef: app1\n```\n\nApplication 2 would have a similar configuration:\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultAuth\nmetadata:\n  name: app2\n  namespace: apps\nspec:\n  kubernetes:\n    role: app2\n    serviceAccount: app2\n  vaultAuthGlobalRef:\n    allowDefault: true\n    namespace: admin\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultStaticSecret\nmetadata:\n  name: app2-secret\n  namespace: apps\nspec:\n  destination:\n    create: true\n    name: app2-secret\n  hmacSecretData: true\n  mount: apps\n  path: app2\n  type: kv-v2\n  vaultAuthRef: app2\n```\n\n#### Explanation\n\n- 
The default VaultAuthGlobal resource is created in the `admin` namespace. This resource defines the\n  common configuration for all VaultAuth resources that reference it. The `allowedNamespaces` field restricts the\n  VaultAuth resources that can reference this VaultAuthGlobal resource. In this case, only resources in the `apps`\n  namespace can reference this VaultAuthGlobal resource.\n- The VaultAuth resources in the `apps` namespace reference the VaultAuthGlobal resource. This allows the VaultAuth\n  resources to inherit the configuration from the VaultAuthGlobal resource. The `role` and `serviceAccount` fields are\n  specific to the application and are not inherited from the VaultAuthGlobal resource. Since the\n  `.spec.vaultAuthGlobalRef.allowDefault` field is set to `true`, the VaultAuth resources will automatically reference the\n  `default` VaultAuthGlobal in the defined namespace.\n- The VaultStaticSecret resources in the `apps` namespace reference the VaultAuth resources. This allows the\n  VaultStaticSecret resources to authenticate to Vault in order to sync the KV secrets to the destination Kubernetes\n  Secret.\n\n### Multiple applications with shared authentication backend and role\n\nA Vault admin has configured a Kubernetes auth backend in Vault mounted at `kubernetes`. The admin expects to have\ntwo applications authenticate using a single role and service account. 
The admin creates the necessary role in\nVault bound to the same service account and namespace of the applications.\n\nThe admin or developer creates a default VaultAuthGlobal in the application's namespace with the following\nconfiguration:\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultAuthGlobal\nmetadata:\n  name: default\n  namespace: apps\nspec:\n  defaultAuthMethod: kubernetes\n  kubernetes:\n    audiences:\n    - vault\n    mount: kubernetes\n    role: apps\n    serviceAccount: apps\n    tokenExpirationSeconds: 600\n```\n\nA developer creates a single VaultAuth and the necessary VaultStaticSecret resources in their application's namespace with the\nfollowing:\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultAuth\nmetadata:\n  name: apps\n  namespace: apps\nspec:\n  vaultAuthGlobalRef:\n    allowDefault: true\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultStaticSecret\nmetadata:\n  name: app1-secret\n  namespace: apps\nspec:\n  destination:\n    create: true\n    name: app1-secret\n  hmacSecretData: true\n  mount: apps\n  path: app1\n  type: kv-v2\n  vaultAuthRef: apps\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultStaticSecret\nmetadata:\n  name: app2-secret\n  namespace: apps\nspec:\n  destination:\n    create: true\n    name: app2-secret\n  hmacSecretData: true\n  mount: apps\n  path: app2\n  type: kv-v2\n  vaultAuthRef: apps\n```\n\n#### Explanation\n\n- The default VaultAuthGlobal resource is created in the `apps` namespace. It provides all the necessary configuration\n  for the VaultAuth resources that reference it.\n- A single VaultAuth resource is created in the `apps` namespace. This resource references the VaultAuthGlobal resource\n  and inherits the configuration from it.\n- The VaultStaticSecret resources in the `apps` namespace reference the VaultAuth resource. 
This allows the VaultStaticSecret\n  resources to authenticate to Vault in order to sync the KV secrets to the destination Kubernetes Secret.\n\n### Multiple applications with multiple authentication backends and roles\n\nA Vault admin has configured a Kubernetes auth backend in Vault mounted at `kubernetes`. In addition, the Vault\nadmin has configured a JWT auth backend mounted at `jwt`. The admin creates the necessary roles in Vault for each\nauth method. The admin expects to have two applications authenticate, one using `kubernetes` auth and the other using `jwt` auth.\n\nThe admin or developer creates a default VaultAuthGlobal in the application's namespace with the following\nconfiguration:\n\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultAuthGlobal\nmetadata:\n  name: default\n  namespace: apps\nspec:\n  defaultAuthMethod: kubernetes\n  kubernetes:\n    audiences:\n    - vault\n    mount: kubernetes\n    role: apps\n    serviceAccount: apps-k8s\n    tokenExpirationSeconds: 600\n  jwt:\n    audiences:\n    - vault\n    mount: jwt\n    role: apps\n    serviceAccount: apps-jwt\n```\n\nA developer creates a VaultAuth and VaultStaticSecret resource in their application's namespace with the following\nconfigurations:\n\nApplication 1 would have a configuration like this, using the kubernetes auth method:\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultAuth\nmetadata:\n  name: apps-default\n  namespace: apps\nspec:\n  # uses the default kubernetes auth method as defined in\n  # the VaultAuthGlobal .spec.defaultAuthMethod\n  vaultAuthGlobalRef:\n    allowDefault: true\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultStaticSecret\nmetadata:\n  name: app1-secret\n  namespace: apps\nspec:\n  destination:\n    create: true\n    name: app1-secret\n  hmacSecretData: true\n  mount: apps\n  path: app1\n  type: kv-v2\n  vaultAuthRef: apps-default\n```\n\nApplication 2 would have a similar configuration, 
except it will be using the JWT auth method:\n```yaml\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultAuth\nmetadata:\n  name: apps-jwt\n  namespace: apps\nspec:\n  method: jwt\n  vaultAuthGlobalRef:\n    allowDefault: true\n---\napiVersion: secrets.hashicorp.com\/v1beta1\nkind: VaultStaticSecret\nmetadata:\n  name: app2-secret\n  namespace: apps\nspec:\n  destination:\n    create: true\n    name: app2-secret\n  hmacSecretData: true\n  mount: apps\n  path: app2\n  type: kv-v2\n  vaultAuthRef: apps-jwt\n```\n\n#### Explanation\n\n- The default VaultAuthGlobal resource is created in the `apps` namespace. It provides all the necessary configuration\n  for the VaultAuth resources that reference it. The `defaultAuthMethod` field defines the default auth method to use\n  when authenticating to Vault. The `kubernetes` and `jwt` fields define the configuration for the respective auth\n  method.\n- Application 1 uses the default kubernetes auth method defined in the VaultAuthGlobal resource. The VaultAuth resource\n  references the VaultAuthGlobal resource and inherits the kubernetes auth configuration from it.\n- Application 2 uses the JWT auth method defined in the VaultAuthGlobal resource. The VaultAuth resource references the\n  VaultAuthGlobal resource and inherits the JWT auth configuration from it.\n- Neither VaultAuth resource has a `role` or `serviceAccount` field set. 
This is because the `role` and `serviceAccount`\n  fields are defined in the VaultAuthGlobal resource and are inherited by the VaultAuth resources.\n\n## VaultAuthGlobal common errors and troubleshooting\n\nThere are a few sources for tracking down issues with VaultAuthGlobal resources:\n- Vault Secrets Operator logs\n- Kubernetes events\n- Resource status\n\nBelow are examples of errors from each source and how to resolve them:\n\n  Sample output of sync failures from the Vault Secrets Operator logs:\n  ```json\n  {\n    \"level\": \"error\",\n    \"ts\": \"2024-07-16T17:35:20Z\",\n    \"logger\": \"cachingClientFactory\",\n    \"msg\": \"Failed to get cacheKey from obj\",\n    \"controller\": \"vaultstaticsecret\",\n    \"controllerGroup\": \"secrets.hashicorp.com\",\n    \"controllerKind\": \"VaultStaticSecret\",\n    \"VaultStaticSecret\": {\n      \"name\": \"app1\",\n      \"namespace\": \"apps\"\n    },\n    \"namespace\": \"apps\",\n    \"name\": \"app1\",\n    \"reconcileID\": \"5201f597-6c5d-4d07-ae8f-30a39c80dc54\",\n    \"error\": \"failed getting admin\/default, err=VaultAuthGlobal.secrets.hashicorp.com \\\"default\\\" not found\"\n  }\n  ```\n\n  Check for related Kubernetes events:\n\n  ```shell\n  $ kubectl events --types=Warning -n admin --for vaultauths.secrets.hashicorp.com\/default -o json\n  ```\n\n  Sample output from the Kubernetes event for the VaultAuth resource:\n\n  ```json\n  {\n    \"kind\": \"Event\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n      \"name\": \"default.17e2c0da7b0e36b5\",\n      \"namespace\": \"admin\",\n      \"uid\": \"3ca6088e-7391-4b76-9443-a790ccae02c0\",\n      \"resourceVersion\": \"634396\",\n      \"creationTimestamp\": \"2024-07-16T17:14:12Z\"\n    },\n    \"involvedObject\": {\n      \"kind\": \"VaultAuth\",\n      \"namespace\": \"admin\",\n      \"name\": \"default\",\n      \"uid\": \"1dabe3a5-5479-4f5d-ac48-5db7eff7f822\",\n      \"apiVersion\": \"secrets.hashicorp.com\/v1beta1\",\n      
\"resourceVersion\": \"631994\"\n    },\n    \"reason\": \"Accepted\",\n    \"message\": \"Failed to handle VaultAuth resource request: err=failed getting admin\/default, err=VaultAuthGlobal.secrets.hashicorp.com \\\"default\\\" not found\",\n    \"source\": {\n      \"component\": \"VaultAuth\"\n    },\n    \"firstTimestamp\": \"2024-07-16T17:14:12Z\",\n    \"lastTimestamp\": \"2024-07-16T17:15:53Z\",\n    \"count\": 25,\n    \"type\": \"Warning\",\n    \"eventTime\": null,\n    \"reportingComponent\": \"VaultAuth\",\n    \"reportingInstance\": \"\"\n  }\n  ```\n\nCheck the conditions on the VaultAuth resource:\n\n  ```shell\n  $ kubectl get vaultauths.secrets.hashicorp.com -n admin default -o jsonpath='{.status}'\n  ```\n\nSample output of the VaultAuth's status (prettified). The `valid` field will be `false` for the condition reason\n`VaultAuthGlobalRef`:\n  ```json\n  {\n    \"conditions\": [\n      {\n        \"lastTransitionTime\": \"2024-07-16T15:35:43Z\",\n        \"message\": \"failed getting admin\/default, err=VaultAuthGlobal.secrets.hashicorp.com \\\"default\\\" not found\",\n        \"observedGeneration\": 3,\n        \"reason\": \"VaultAuthGlobalRef\",\n        \"status\": \"False\",\n        \"type\": \"Available\"\n      }\n    ],\n    \"specHash\": \"e264f241cb4ad776802924b6ad2aa272b11cffd570382605d1c2ddbdfd661ad3\",\n    \"valid\": false\n  }\n  ```\n- **Situation**: The VaultAuthGlobal resource is not found or is invalid for some reason, denoted by error messages like\n`not found...`.\n\n  **Resolution**: Ensure that the VaultAuthGlobal resource exists in the referring VaultAuth's namespace or a default\n  VaultAuthGlobal resource exists per [VaultAuthGlobal configuration inheritance]\n  (#vaultauthglobal-configuration-inheritance)\n\n- **Situation**: The VaultAuthGlobal is not allowed to be referenced by the VaultAuth resource, denoted by error\n  messages like `target namespace \"apps\" is not allowed...`.\n\n  **Resolution**: Ensure that the 
VaultAuthGlobal resource's `spec.allowedNamespaces` field includes the namespace of the\n  VaultAuth resource.\n\n- **Situation**: The VaultAuth resource is not valid due to missing required fields, denoted by error messages like\n  `invalid merge: empty role`.\n\n  **Resolution**: Ensure all required fields are set either on the VaultAuth resource or on the inherited\n  VaultAuthGlobal.\n\n  A successfully merged VaultAuth resource will have the `valid` field set to `true` and the `conditions` will look\n  something like:\n\n  ```json\n  {\n    \"conditions\": [\n      {\n        \"lastTransitionTime\": \"2024-07-17T13:46:43Z\",\n        \"message\": \"VaultAuthGlobal successfully merged, key=admin\/default, uid=6aeb3559-8f42-48bf-b16a-2305bc9a9bed, generation=7\",\n        \"observedGeneration\": 1,\n        \"reason\": \"VaultAuthGlobalRef\",\n        \"status\": \"True\",\n        \"type\": \"Available\"\n      }\n    ],\n    \"specHash\": \"5cbe5544d0557926e00002514871b95c49903a9d4496ef9b794c84f1e54db1a0\",\n    \"valid\": true\n  }\n  ```\n\n<Tip>\n\n  The value for the key in the message field is the namespace\/name of the VaultAuthGlobal object that was successfully merged.\n  This is useful if you want to know which VaultAuthGlobal object was used to merge the VaultAuth object.\n\n<\/Tip>\n\n\n## Some authentication engines in detail\n\n- [AWS](\/vault\/docs\/auth\/aws)\n\n- [GCP](\/vault\/docs\/auth\/gcp)","site":"vault"}
{"questions":"vault page title Configuration This section documents configuration options for the Vault Helm chart Configuration layout docs include helm version mdx","answers":"---\nlayout: docs\npage_title: Configuration\ndescription: This section documents configuration options for the Vault Helm chart\n---\n\n# Configuration\n\n@include 'helm\/version.mdx'\n\nThe chart is highly customizable using\n[Helm configuration values](https:\/\/helm.sh\/docs\/intro\/using_helm\/#customizing-the-chart-before-installing).\nEach value has a default tuned for an optimal getting started experience\nwith Vault. Before going into production, please review the parameters below\nand consider if they're appropriate for your deployment.\n\n- `global` - These global values affect multiple components of the chart.\n\n  - `enabled` (`boolean: true`) - The master enabled\/disabled configuration. If this is true, most components will be installed by default. If this is false, no components will be installed by default and manually opting-in is required, such as by setting `server.enabled` to true.\n\n  - `namespace` (`string: \"\"`) - The namespace to deploy to. Defaults to the `helm` installation namespace.\n\n  - `imagePullSecrets` (`array: []`) - References secrets to be used when pulling images from private registries. See [Pull an Image from a Private Registry](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/pull-image-private-registry\/) for more details. 
May be specified as an array of name map entries or just as an array of names:\n\n    ```yaml\n    imagePullSecrets:\n      - name: image-pull-secret\n    # or\n    imagePullSecrets:\n      - image-pull-secret\n    ```\n\n  - `tlsDisable` (`boolean: true`) - When set to `true`, changes URLs from `https` to `http` (such as the `VAULT_ADDR=http:\/\/127.0.0.1:8200` environment variable set on the Vault pods).\n\n  - `externalVaultAddr` (`string: \"\"`) - External vault server address for the injector and CSI provider to use. Setting this will disable deployment of a vault server. A service account with token review permissions is automatically created if `server.serviceAccount.create=true` is set for the external Vault server to use.\n\n  - `openshift` (`boolean: false`) - If `true`, enables configuration specific to OpenShift such as NetworkPolicy, SecurityContext, and Route.\n\n  - `psp` - Values that configure Pod Security Policy.\n\n    - `enable` (`boolean: false`) - When set to `true`, enables Pod Security Policies for Vault and Vault Agent Injector.\n\n    - `annotations` (`dictionary: {}`) - This value defines additional annotations to\n      add to the Pod Security Policies. 
This can either be YAML or a YAML-formatted\n      multi-line templated string.\n\n      ```yaml\n      annotations:\n        seccomp.security.alpha.kubernetes.io\/allowedProfileNames: docker\/default,runtime\/default\n        apparmor.security.beta.kubernetes.io\/allowedProfileNames: runtime\/default\n        seccomp.security.alpha.kubernetes.io\/defaultProfileName:  runtime\/default\n        apparmor.security.beta.kubernetes.io\/defaultProfileName:  runtime\/default\n      # or\n      annotations: |\n        seccomp.security.alpha.kubernetes.io\/allowedProfileNames: docker\/default,runtime\/default\n        apparmor.security.beta.kubernetes.io\/allowedProfileNames: runtime\/default\n        seccomp.security.alpha.kubernetes.io\/defaultProfileName:  runtime\/default\n        apparmor.security.beta.kubernetes.io\/defaultProfileName:  runtime\/default\n      ```\n\n  - `serverTelemetry` - Values that configure metrics and telemetry.\n\n    - `prometheusOperator` (`boolean: false`) - When set to `true`, enables integration with the\n      Prometheus Operator. See the top-level [`serverTelemetry`](\/vault\/docs\/platform\/k8s\/helm\/configuration#servertelemetry-1) section\n      for the required configuration values.\n\n- `injector` - Values that configure running a Vault Agent Injector Admission Webhook Controller within Kubernetes.\n\n  - `enabled` (`boolean or string: \"-\"`) - When set to `true`, the Vault Agent Injector Admission Webhook controller will be created. When set to `\"-\"`, defaults to the value of `global.enabled`.\n\n  - `externalVaultAddr` (`string: \"\"`) - Deprecated: Please use [global.externalVaultAddr](\/vault\/docs\/platform\/k8s\/helm\/configuration#externalvaultaddr) instead.\n\n  - `replicas` (`int: 1`) - The number of pods to deploy to create a highly available cluster of Vault Agent Injectors. 
Requires Vault K8s 0.7.0 to have more than 1 replica.\n\n  - `leaderElector` - Values that configure the Vault Agent Injector leader election for HA deployments.\n\n    - `enabled` (`boolean: true`) - When set to `true`, enables leader election for Vault Agent Injector. This is required when using auto-tls and more than 1 replica.\n\n  - `image` - Values that configure the Vault Agent Injector Docker image.\n\n    - `repository` (`string: \"hashicorp\/vault-k8s\"`) - The name of the Docker image for Vault Agent Injector.\n\n    - `tag` (`string: \"1.5.0\"`) - The tag of the Docker image for the Vault Agent Injector. **This should be pinned to a specific version when running in production.** Otherwise, other changes to the chart may inadvertently upgrade your admission controller.\n\n    - `pullPolicy` (`string: \"IfNotPresent\"`) - The pull policy for container images. The default pull policy is `IfNotPresent` which causes the Kubelet to skip pulling an image if it already exists.\n\n  - `agentImage` - Values that configure the Vault Agent sidecar image.\n\n    - `repository` (`string: \"hashicorp\/vault\"`) - The name of the Docker image for the Vault Agent sidecar. This should be set to the official Vault Docker image.\n\n    - `tag` (`string: \"1.18.1\"`) - The tag of the Vault Docker image to use for the Vault Agent Sidecar. 
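\n\n      Pinning both images explicitly in a values file might look like this (a sketch; the tags shown are simply the chart defaults quoted above):\n\n      ```yaml\n      # Sketch: pin the injector and Vault Agent images in values.yaml\n      injector:\n        image:\n          repository: hashicorp\/vault-k8s\n          tag: \"1.5.0\"\n        agentImage:\n          repository: hashicorp\/vault\n          tag: \"1.18.1\"\n      ```\n\n      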
**Vault 1.3.1+ is required by the admission controller**.\n\n  - `agentDefaults` - Values that configure the injected Vault Agent containers default values.\n\n    - `cpuLimit` (`string: \"500m\"`) - The default CPU limit for injected Vault Agent containers.\n\n    - `cpuRequest` (`string: \"250m\"`) - The default CPU request for injected Vault Agent containers.\n\n    - `memLimit` (`string: \"128Mi\"`) - The default memory limit for injected Vault Agent containers.\n\n    - `memRequest` (`string: \"64Mi\"`) - The default memory request for injected Vault Agent containers.\n\n    - `ephemeralLimit` (`string: \"\"`) - The default ephemeral storage limit for injected Vault Agent containers.\n\n    - `ephemeralRequest` (`string: \"\"`) - The default ephemeral storage request for injected Vault Agent containers.\n\n    - `template` (`string: \"map\"`) - The default template type for rendered secrets if no custom templates are defined.\n      Possible values include `map` and `json`.\n\n    - `templateConfig` - Default values within Agent's [`template_config` stanza](\/vault\/docs\/agent-and-proxy\/agent\/template).\n\n      - `exitOnRetryFailure` (`boolean: true`) - Controls whether Vault Agent exits after it has exhausted its number of template retry attempts due to failures.\n\n      - `staticSecretRenderInterval` (`string: \"\"`) - Configures how often Vault Agent Template should render non-leased secrets such as KV v2. See the [Vault Agent Templates documentation](\/vault\/docs\/agent-and-proxy\/agent\/template#non-renewable-secrets) for more details.\n\n  - `metrics` - Values that configure the Vault Agent Injector metric exporter.\n\n    - `enabled` (`boolean: false`) - When set to `true`, the Vault Agent Injector exports Prometheus metrics at the `\/metrics` path.\n\n  - `authPath` (`string: \"auth\/kubernetes\"`) - Mount path of the Vault Kubernetes Auth Method.\n\n  - `logLevel` (`string: \"info\"`) - Configures the log verbosity of the injector. 
Supported log levels: trace, debug, error, warn, info.\n\n  - `logFormat` (`string: \"standard\"`) - Configures the log format of the injector. Supported log formats: \"standard\", \"json\".\n\n  - `revokeOnShutdown` (`boolean: false`) - Configures all Vault Agent sidecars to revoke their token when shutting down.\n\n  - `securityContext` - Security context for the pod template and the injector container\n\n    - `pod` (`dictionary: {}`) - Defines the securityContext for the injector Pod, as YAML or a YAML-formatted multi-line templated string. Default if not specified:\n\n      ```yaml\n      runAsNonRoot: true\n      runAsGroup: \n      runAsUser: \n      fsGroup: \n      ```\n\n    - `container` (`dictionary: {}`) - Defines the securityContext for the injector container, as YAML or a YAML-formatted multi-line templated string. Default if not specified:\n\n      ```yaml\n      allowPrivilegeEscalation: false\n      capabilities:\n        drop:\n          - ALL\n      ```\n\n  - `resources` (`dictionary: {}`) - The resource requests and limits (CPU, memory, etc.) for each container of the injector. This should be a YAML dictionary of a Kubernetes [resource](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/) object. If this isn't specified, then the pods won't request any specific amount of resources, which limits the ability for Kubernetes to make efficient use of compute resources.<br \/> **Setting this is highly recommended.**\n\n    ```yaml\n    resources:\n      requests:\n        memory: '256Mi'\n        cpu: '250m'\n      limits:\n        memory: '256Mi'\n        cpu: '250m'\n    ```\n\n  - `webhook` - Values that control the Mutating Webhook Configuration.\n\n    - `failurePolicy` (`string: \"Ignore\"`) - Configures failurePolicy of the webhook. To block pod creation while the webhook is unavailable, set the policy to `\"Fail\"`. 
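\n\n      A values override that makes the webhook fail closed might look like this (a sketch; the key path follows the chart options described on this page):\n\n      ```yaml\n      # Sketch of a values.yaml override: block pod creation when the\n      # injector webhook is unavailable by failing closed\n      injector:\n        webhook:\n          failurePolicy: Fail\n      ```\n\n      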
See [Failure Policy](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/extensible-admission-controllers\/#failure-policy).\n\n    - `matchPolicy` (`string: \"Exact\"`) - Specifies the approach to accepting changes based on the rules of the MutatingWebhookConfiguration. See [Match Policy](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/extensible-admission-controllers\/#matching-requests-matchpolicy).\n\n    - `timeoutSeconds` (`int: 30`) - Specifies the number of seconds after which the webhook request is ignored or fails. Whether it is ignored or fails depends on the `failurePolicy`. See [timeouts](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/extensible-admission-controllers\/#timeouts).\n\n    - `namespaceSelector` (`object: {}`) - The selector used by the admission webhook controller to limit the namespaces where injection can happen. If unset, all non-system namespaces are eligible for injection. See [Matching requests: namespace selector](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/extensible-admission-controllers\/#matching-requests-namespaceselector).\n\n      ```yaml\n      namespaceSelector:\n        matchLabels:\n          sidecar-injector: enabled\n      ```\n\n    - `objectSelector` (`object: {}`) - The selector used by the admission webhook controller to limit which objects can be affected by mutation. See [Matching requests: object selector](https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/extensible-admission-controllers\/#matching-requests-objectselector).\n\n      ```yaml\n      objectSelector:\n        matchLabels:\n          sidecar-injector: enabled\n      ```\n\n    - `annotations` (`string or object: {}`) - Defines additional annotations to attach to the webhook. 
This can either be YAML or a YAML-formatted multi-line templated string.\n\n  - `namespaceSelector` (`dictionary: {}`) - Deprecated: please use [`webhook.namespaceSelector`](\/vault\/docs\/platform\/k8s\/helm\/configuration#namespaceselector) instead.\n\n  - `objectSelector` (`dictionary: {}`) - Deprecated: please use [`webhook.objectSelector`](\/vault\/docs\/platform\/k8s\/helm\/configuration#objectselector) instead.\n\n  - `extraLabels` (`dictionary: {}`) - This value defines additional labels for Vault Agent Injector pods.\n\n    ```yaml\n    extraLabels:\n      'sample\/label1': 'foo'\n      'sample\/label2': 'bar'\n    ```\n\n  - `certs` - The certs section configures how the webhook TLS certs are configured. These are the TLS certs for the Kube apiserver communicating to the webhook. By default, the injector will generate and manage its own certs, but this requires the ability for the injector to update its own `MutatingWebhookConfiguration`. In a production environment, custom certs should probably be used. Configure the values below to enable this.\n\n    - `secretName` (`string: \"\"`) - secretName is the name of the Kubernetes secret that has the TLS certificate and private key to serve the injector webhook. If this is null, then the injector will default to its automatic management mode.\n\n    - `caBundle` (`string: \"\"`) - The PEM-encoded CA public certificate bundle for the TLS certificate served by the injector. This must be specified as a string and can't come from a secret because it must be statically configured on the Kubernetes `MutatingAdmissionWebhook` resource. 
This only needs to be specified if `secretName` is not null.\n\n    - `certName` (`string: \"tls.crt\"`) - The name of the certificate file within the `secretName` secret.\n\n    - `keyName` (`string: \"tls.key\"`) - The name of the key file within the `secretName` secret.\n\n  - `extraEnvironmentVars` (`dictionary: {}`) - Extra environment variables to set in the injector deployment.\n\n    ```yaml\n    # Example setting injector TLS options in a deployment:\n    extraEnvironmentVars:\n      AGENT_INJECT_TLS_MIN_VERSION: tls13\n      AGENT_INJECT_TLS_CIPHER_SUITES: ...\n    ```\n\n  - `affinity` - This value defines the [affinity](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/assign-pod-node\/#affinity-and-anti-affinity) for Vault Agent Injector pods. This can either be multi-line string or YAML matching the PodSpec's affinity field. It defaults to allowing only a single pod on each node, which minimizes risk of the cluster becoming unusable if a node is lost. If you need to run more pods per node (for example, testing on Minikube), set this value to `null`.\n\n    ```yaml\n    # Recommended default server affinity:\n    affinity: |\n      podAntiAffinity:\n        requiredDuringSchedulingIgnoredDuringExecution:\n        - labelSelector:\n            matchLabels:\n              app.kubernetes.io\/name: -agent-injector\n              app.kubernetes.io\/instance: \"\"\n              component: webhook\n          topologyKey: kubernetes.io\/hostname\n    ```\n\n  - `topologySpreadConstraints` (`array: []`) - [Topology settings](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-topology-spread-constraints\/)\n    for injector pods. This can either be YAML or a YAML-formatted multi-line templated string.\n\n  - `tolerations` (`array: []`) - Toleration Settings for injector pods. 
This should be either a multi-line string or YAML matching the Toleration array.\n\n  - `nodeSelector` (`dictionary: {}`) - nodeSelector labels for injector pod assignment, formatted as a multi-line string or YAML map.\n\n  - `priorityClassName` (`string: \"\"`) - Priority class for injector pods\n\n  - `annotations` (`dictionary: {}`) - This value defines additional annotations for injector pods. This can either be YAML or a YAML-formatted multi-line templated string.\n\n    ```yaml\n    annotations:\n      \"sample\/annotation1\": \"foo\"\n      \"sample\/annotation2\": \"bar\"\n    # or\n    annotations: |\n      \"sample\/annotation1\": \"foo\"\n      \"sample\/annotation2\": \"bar\"\n    ```\n\n  - `failurePolicy` (`string: \"Ignore\"`) - Deprecated: please use [`webhook.failurePolicy`](\/vault\/docs\/platform\/k8s\/helm\/configuration#failurepolicy) instead.\n\n  - `webhookAnnotations` (`dictionary: {}`) - Deprecated: please use [`webhook.annotations`](\/vault\/docs\/platform\/k8s\/helm\/configuration#annotations-1) instead.\n\n  - `service` - The service section configures the Kubernetes service for the Vault Agent Injector.\n\n    - `annotations` (`dictionary: {}`) - This value defines additional annotations to\n      add to the Vault Agent Injector service. This can either be YAML or a YAML-formatted\n      multi-line templated string.\n\n      ```yaml\n      annotations:\n        \"sample\/annotation1\": \"foo\"\n        \"sample\/annotation2\": \"bar\"\n      # or\n      annotations: |\n        \"sample\/annotation1\": \"foo\"\n        \"sample\/annotation2\": \"bar\"\n      ```\n\n  - `serviceAccount` - Injector serviceAccount specific config\n\n    - `annotations` (`dictionary: {}`) - Extra annotations to attach to the injector serviceAccount. This can either be YAML or a YAML-formatted multi-line templated string.\n\n  - `hostNetwork` (`boolean: false`) - When set to true, configures the Vault Agent Injector to run on the host network. 
This is useful\n    when alternative cluster networking is used.\n\n  - `port` (`int: 8080`) - Configures the port the Vault Agent Injector listens on.\n\n  - `podDisruptionBudget` (`dictionary: {}`) - A disruption budget limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions.\n\n    ```yaml\n    podDisruptionBudget:\n      maxUnavailable: 1\n    ```\n\n  - `strategy` (`dictionary: {}`) - Strategy for updating the deployment. This can be a multi-line string or a YAML map.\n\n    ```yaml\n    strategy:\n      rollingUpdate:\n        maxSurge: 25%\n        maxUnavailable: 25%\n      type: RollingUpdate\n    # or\n    strategy: |\n      rollingUpdate:\n        maxSurge: 25%\n        maxUnavailable: 25%\n      type: RollingUpdate\n    ```\n\n  - `livenessProbe` - Values that configure the liveness probe for the injector.\n\n    - `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.\n\n    - `initialDelaySeconds` (`int: 60`) - Sets the initial delay of the liveness probe when the container starts.\n\n    - `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.\n\n    - `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.\n\n    - `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.\n\n  - `readinessProbe` - Values that configure the readiness probe for the injector.\n\n    - `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.\n\n    - `initialDelaySeconds` (`int: 60`) - Sets the initial delay of the readiness probe when the container starts.\n\n    - `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.\n\n    - 
`successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.\n\n    - `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.\n\n  - `startupProbe` - Values that configure the startup probe for the injector.\n\n    - `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.\n\n    - `initialDelaySeconds` (`int: 60`) - Sets the initial delay of the startup probe when the container starts.\n\n    - `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.\n\n    - `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.\n\n    - `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.\n\n- `server` - Values that configure running a Vault server within Kubernetes.\n\n  - `enabled` (`boolean or string: \"-\"`) - When set to `true`, the Vault server will be created. When set to `\"-\"`, defaults to the value of `global.enabled`.\n\n  - `enterpriseLicense` - This value refers to a Kubernetes secret that you have created that contains your enterprise license. If you are not using an enterprise image or if you plan to introduce the license key via another route, then leave secretName blank (\"\") or set it to null. Requires Vault Enterprise 1.8 or later.\n\n    - `secretName` (`string: \"\"`) - The name of the Kubernetes secret that holds the enterprise license. 
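For example, assuming you have already created a secret named `vault-license` (an illustrative name) that holds the license under the default key, a sketch of the values fragment would be:\n\n      ```yaml\n      server:\n        enterpriseLicense:\n          secretName: vault-license\n          secretKey: license\n      ```\n\n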
The secret must be in the same namespace that Vault is installed into.\n\n    - `secretKey` (`string: \"license\"`) - The key within the Kubernetes secret that holds the enterprise license.\n\n  - `image` - Values that configure the Vault Docker image.\n\n    - `repository` (`string: \"hashicorp\/vault\"`) - The name of the Docker image for the containers running Vault.\n\n    - `tag` (`string: \"1.18.1\"`) - The tag of the Docker image for the containers running Vault. **This should be pinned to a specific version when running in production.** Otherwise, other changes to the chart may inadvertently upgrade the Vault version.\n\n    - `pullPolicy` (`string: \"IfNotPresent\"`) - The pull policy for container images. The default pull policy is `IfNotPresent`, which causes the Kubelet to skip pulling an image if it already exists.\n\n  - `updateStrategyType` (`string: \"OnDelete\"`) - Configure the [Update Strategy Type](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/statefulset\/#update-strategies) for the StatefulSet.\n\n  - `logLevel` (`string: \"\"`) - Configures the Vault server logging verbosity. If set, this will override values defined in the Vault configuration file.\n    Supported log levels include: `trace`, `debug`, `info`, `warn`, `error`.\n\n  - `logFormat` (`string: \"\"`) - Configures the Vault server logging format. If set, this will override values defined in the Vault configuration file.\n    Supported log formats include: `standard`, `json`.\n\n  - `resources` (`dictionary: {}`) - The resource requests and limits (CPU, memory, etc.) for each container of the server. This should be a YAML dictionary of a Kubernetes [resource](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/) object. If this isn't specified, then the pods won't request any specific amount of resources, which limits the ability for Kubernetes to make efficient use of compute resources. 
**Setting this is highly recommended.**\n\n    ```yaml\n    resources:\n      requests:\n        memory: '10Gi'\n      limits:\n        memory: '10Gi'\n    ```\n\n  - `ingress` - Values that configure Ingress services for Vault.\n\n    ~> If deploying on OpenShift, these ingress settings are ignored. Use the [`route`](#route) configuration to expose Vault on OpenShift. <br\/> <br\/>\n    If [`ha`](#ha) is enabled the Ingress will point to the active vault server via the `active` Service. This requires vault 1.4+ and [service_registration](\/vault\/docs\/configuration\/service-registration\/kubernetes) to be set in the vault config.\n\n    - `enabled` (`boolean: false`) - When set to `true`, an [Ingress](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/) service will be created.\n\n    - `labels` (`dictionary: {}`) - Labels for the ingress service.\n\n    - `annotations` (`dictionary: {}`) - This value defines additional annotations to\n      add to the Ingress service. 
This can either be YAML or a YAML-formatted\n      multi-line templated string.\n\n      ```yaml\n      annotations:\n        kubernetes.io\/ingress.class: nginx\n        kubernetes.io\/tls-acme: \"true\"\n      # or\n      annotations: |\n        kubernetes.io\/ingress.class: nginx\n        kubernetes.io\/tls-acme: \"true\"\n      ```\n\n    - `ingressClassName` (`string: \"\"`) - Specify the [IngressClass](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/#ingress-class) that should be used to implement the Ingress\n\n    - `activeService` (`boolean: true`) - When HA mode is enabled and K8s service registration is being used, configure the ingress to point to the Vault active service.\n\n    - `extraPaths` (`array: []`) - Configures extra paths to prepend to the host configuration.\n      This is useful when working with annotation based services.\n\n      ```yaml\n      extraPaths:\n        - path: \/*\n          backend:\n            service:\n              name: ssl-redirect\n              port:\n                number: use-annotation\n      ```\n\n    - `tls` (`array: []`) - Configures the TLS portion of the [Ingress spec](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/#tls), where `hosts` is a list of the hosts defined in the Common Name of the TLS certificate, and `secretName` is the name of the Secret containing the required TLS files such as certificates and keys.\n\n      ```yaml\n      tls:\n        - hosts:\n            - sslexample.foo.com\n            - sslexample.bar.com\n          secretName: testsecret-tls\n      ```\n\n    - `hosts` - Values that configure the Ingress host rules.\n\n      - `host` (`string: \"chart-example.local\"`): Name of the host to use for Ingress.\n\n      - `paths` (`array: []`): Deprecated: `server.ingress.extraPaths` should be used instead. A list of paths that will be directed to the Vault service. 
At least one path is required.\n\n      ```yaml\n      paths:\n        - \/\n        - \/vault\n      ```\n\n  - `hostAliases` (`array: []`) - A list of aliases to be added to `\/etc\/hosts`. Specified as a YAML list following the [hostAlias format](https:\/\/kubernetes.io\/docs\/tasks\/network\/customize-hosts-file-for-pods\/)\n\n  - `route` - Values that configure Route services for Vault in OpenShift\n\n    ~> If [`ha`](#ha) is enabled the Route will point to the active vault server via the `active` Service (requires vault 1.4+ and [service_registration](\/vault\/docs\/configuration\/service-registration\/kubernetes) to be set in the vault config).\n\n    - `enabled` (`boolean: false`) - When set to `true`, a Route for Vault will be created.\n\n    - `activeService` (`boolean: true`) - When HA mode is enabled and K8s service registration is being used, configure the route to point to the Vault active service.\n\n    - `labels` (`dictionary: {}`) - Labels for the Route\n\n    - `annotations` (`dictionary: {}`) - Annotations to add to the Route. This can either be YAML or a YAML-formatted multi-line templated string.\n\n    - `host` (`string: \"chart-example.local\"`) - Sets the hostname for the Route.\n\n    - `tls` (`dictionary: {termination: passthrough}`) - TLS config that will be passed directly to the route's TLS config, which can be used to configure other termination methods that terminate TLS at the router.\n\n  - `authDelegator` - Values that configure the Cluster Role Binding attached to the Vault service account.\n\n    - `enabled` (`boolean: true`) - When set to `true`, a Cluster Role Binding will be bound to the Vault service account. 
This Cluster Role Binding has the necessary privileges for Vault to use the [Kubernetes Auth Method](\/vault\/docs\/auth\/kubernetes).\n\n  - `readinessProbe` - Values that configure the readiness probe for the Vault pods.\n\n    - `enabled` (`boolean: true`) - When set to `true`, a readiness probe will be applied to the Vault pods.\n\n    - `path` (`string: \"\"`) - When set to a value, enables HTTP\/HTTPS probes instead of using the default `exec` probe. The http\/https scheme is controlled by the `tlsDisable` value.\n\n    - `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.\n\n    - `initialDelaySeconds` (`int: 5`) - When set to a value, configures the number of seconds after the container has started before the probe initiates.\n\n    - `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.\n\n    - `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.\n\n    - `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.\n\n    - `port` (`int: 8200`) - When set to a value, overrides the default port used for the server readiness probe.\n\n    ```yaml\n    readinessProbe:\n      enabled: true\n      path: \/v1\/sys\/health?standbyok=true\n      failureThreshold: 2\n      initialDelaySeconds: 5\n      periodSeconds: 5\n      successThreshold: 1\n      timeoutSeconds: 3\n      port: 8200\n    ```\n\n  - `livenessProbe` - Values that configure the liveness probe for the Vault pods.\n\n    - `enabled` (`boolean: false`) - When set to `true`, a liveness probe will be applied to the Vault pods.\n\n    - `execCommand` (`array: []`) - Used to define a liveness exec command. 
If provided, exec is preferred to httpGet (path) as the livenessProbe handler.\n\n    ```yaml\n    execCommand:\n      - \/bin\/sh\n      - -c\n      - \/vault\/userconfig\/mylivenessscript\/run.sh\n    ```\n\n    - `path` (`string: \"\/v1\/sys\/health?standbyok=true\"`) - Path for the livenessProbe to use httpGet as the livenessProbe handler. The http\/https scheme is controlled by the `tlsDisable` value.\n\n    - `initialDelaySeconds` (`int: 60`) - Sets the initial delay of the liveness probe when the container starts.\n\n    - `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.\n\n    - `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.\n\n    - `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.\n\n    - `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.\n\n    - `port` (`int: 8200`) - Port number on which livenessProbe will be checked if httpGet is used as the livenessProbe handler.\n\n    ```yaml\n    livenessProbe:\n      enabled: true\n      path: \/v1\/sys\/health?standbyok=true\n      initialDelaySeconds: 60\n      failureThreshold: 2\n      periodSeconds: 5\n      successThreshold: 1\n      timeoutSeconds: 3\n      port: 8200\n    ```\n\n  - `terminationGracePeriodSeconds` (`int: 10`) - Optional duration in seconds the pod needs to terminate gracefully. See: https:\/\/kubernetes.io\/docs\/concepts\/containers\/container-lifecycle-hooks\/\n\n  - `preStopSleepSeconds` (`int: 5`) - Used to set the sleep time during the preStop step.\n\n  - `postStart` (`array: []`) - Used to define commands to run after the pod is ready. 
This can be used to automate processes such as initialization or bootstrapping auth methods.\n\n  ```yaml\n  postStart:\n    - \/bin\/sh\n    - -c\n    - \/vault\/userconfig\/myscript\/run.sh\n  ```\n\n  - `extraInitContainers` (`array: null`) - extraInitContainers is a list of init containers. Specified as a YAML list. This is useful if you need to run a script to provision TLS certificates or write out configuration files in a dynamic way.\n\n  - `extraContainers` (`array: null`) - The extra containers to be applied to the Vault server pods.\n\n  ```yaml\n  extraContainers:\n    - name: mycontainer\n      image: 'app:0.0.0'\n      env: ...\n  ```\n\n  - `extraEnvironmentVars` (`dictionary: {}`) - The extra environment variables to be applied to the Vault server.\n\n  ```yaml\n  # Extra Environment Variables are defined as key\/value strings.\n  extraEnvironmentVars:\n    GOOGLE_REGION: global\n    GOOGLE_PROJECT: myproject\n    GOOGLE_APPLICATION_CREDENTIALS: \/vault\/userconfig\/myproject\/myproject-creds.json\n  ```\n\n  - `shareProcessNamespace` (`boolean: false`) - Enables process namespace sharing between Vault and the extraContainers. This is useful if Vault must be signaled, e.g. 
to send a SIGHUP for log rotation.\n\n  - `extraArgs` (`string: null`) - The extra arguments to be applied to the Vault server startup command.\n\n    ```yaml\n    extraArgs: '-config=\/path\/to\/extra\/config.hcl -log-format=json'\n    ```\n\n  - `extraPorts` (`array: []`) - Additional ports to add to the server StatefulSet.\n\n  ```yaml\n  extraPorts:\n    - containerPort: 8300\n      name: http-monitoring\n  ```\n\n  - `extraSecretEnvironmentVars` (`array: []`) - The extra environment variables populated from a secret to be applied to the Vault server.\n\n    - `envName` (`string: required`) -\n      Name of the environment variable to be populated in the Vault container.\n\n    - `secretName` (`string: required`) -\n      Name of Kubernetes secret used to populate the environment variable defined by `envName`.\n\n    - `secretKey` (`string: required`) -\n      Name of the key where the requested secret value is located in the Kubernetes secret.\n\n    ```yaml\n    # Extra Environment Variables populated from a secret.\n    extraSecretEnvironmentVars:\n      - envName: AWS_SECRET_ACCESS_KEY\n        secretName: vault\n        secretKey: AWS_SECRET_ACCESS_KEY\n    ```\n\n  - `extraVolumes` (`array: []`) - Deprecated: please use `volumes` instead. A list of extra volumes to mount to Vault servers. This is useful for bringing in extra data that can be referenced by other configurations at a well known path, such as TLS certificates. The value of this should be a list of objects. Each object supports the following keys:\n\n    - `type` (`string: required`) -\n      Type of the volume, must be one of \"configMap\" or \"secret\". Case sensitive.\n\n    - `name` (`string: required`) -\n      Name of the configMap or secret to be mounted. This also controls the path\n      that it is mounted to. 
The volume will be mounted to `\/vault\/userconfig\/<name>` by default\n      unless `path` is configured.\n\n    - `path` (`string: \/vault\/userconfigs`) -\n      Name of the path where a configMap or secret is mounted. If not specified,\n      the volume will be mounted to `\/vault\/userconfig\/<name of volume>`.\n\n    - `defaultMode` (`string: \"420\"`) -\n      Default mode of the mounted files.\n\n    ```yaml\n    extraVolumes:\n      - type: 'secret'\n        name: 'vault-certs'\n        path: '\/etc\/pki'\n    ```\n\n  - `volumes` (`array: null`) - A list of volumes made available to all containers. This takes\n    standard Kubernetes volume definitions.\n\n    ```yaml\n    volumes:\n      - name: plugins\n        emptyDir: {}\n    ```\n\n  - `volumeMounts` (`array: null`) - A list of volume mounts made available to all containers. This takes\n    standard Kubernetes volume definitions.\n\n    ```yaml\n    volumeMounts:\n      - mountPath: \/usr\/local\/libexec\/vault\n        name: plugins\n        readOnly: true\n    ```\n\n  - `affinity` - This value defines the [affinity](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/assign-pod-node\/#affinity-and-anti-affinity) for server pods. This should be either a multi-line string or YAML matching the PodSpec's affinity field. It defaults to allowing only a single pod on each node, which minimizes risk of the cluster becoming unusable if a node is lost. 
If you need to run more pods per node (for example, testing on Minikube), set this value to `null`.\n\n  ```yaml\n  # Recommended default server affinity:\n  affinity: |\n    podAntiAffinity:\n      requiredDuringSchedulingIgnoredDuringExecution:\n      - labelSelector:\n        matchLabels:\n          app.kubernetes.io\/name: \n          app.kubernetes.io\/instance: \"\"\n          component: server\n        topologyKey: kubernetes.io\/hostname\n  ```\n\n  - `topologySpreadConstraints` (`array: []`) - [Topology settings](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-topology-spread-constraints\/)\n    for server pods. This can either be YAML or a YAML-formatted multi-line templated string.\n\n  - `tolerations` (`array: []`) - This value defines the [tolerations](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/taint-and-toleration\/) that are acceptable when being scheduled. This should be either a multi-line string or YAML matching the Toleration array in a PodSpec.\n\n  ```yaml\n  tolerations: |\n    - key: 'node.kubernetes.io\/unreachable'\n      operator: 'Exists'\n      effect: 'NoExecute'\n      tolerationSeconds: 6000\n  ```\n\n  - `nodeSelector` (`dictionary: {}`) - This value defines additional node selection criteria for more control over where the Vault servers are deployed. 
This should be formatted as a multi-line string or YAML map.\n\n  ```yaml\n  nodeSelector: |\n    disktype: ssd\n  ```\n\n  - `networkPolicy` - Values that configure the Vault Network Policy.\n\n    - `enabled` (`boolean: false`) - When set to `true`, enables a Network Policy for the Vault cluster.\n\n    - `egress` (`array: []`) - This value configures the [egress](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/network-policies\/) network policy rules.\n\n    ```yaml\n    egress:\n      - to:\n          - ipBlock:\n              cidr: 10.0.0.0\/24\n        ports:\n          - protocol: TCP\n            port: 8200\n    ```\n\n    - `ingress` (`array: []`) - This value configures the [ingress](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/network-policies\/) network policy rules. The default is below:\n\n    ```yaml\n    ingress:\n      - from:\n        - namespaceSelector: {}\n        ports:\n        - port: 8200\n          protocol: TCP\n        - port: 8201\n          protocol: TCP\n    ```\n\n  - `priorityClassName` (`string: \"\"`) - Priority class for server pods\n\n  - `extraLabels` (`dictionary: {}`) - This value defines additional labels for server pods.\n\n  ```yaml\n  extraLabels:\n    'sample\/label1': 'foo'\n    'sample\/label2': 'bar'\n  ```\n\n  - `annotations` (`dictionary: {}`) - This value defines additional annotations for server pods. This can either be YAML or a YAML-formatted multi-line templated string.\n\n  ```yaml\n  annotations:\n    \"sample\/annotation1\": \"foo\"\n    \"sample\/annotation2\": \"bar\"\n  # or\n  annotations: |\n    \"sample\/annotation1\": \"foo\"\n    \"sample\/annotation2\": \"bar\"\n  ```\n\n  - `includeConfigAnnotation` (`boolean: false`) - Add an annotation to the server configmap and the statefulset pods, `vaultproject.io\/config-checksum`, that is a hash of the Vault configuration. 
This can be used together with an OnDelete deployment strategy to help identify which pods still need to be deleted during a deployment to pick up any configuration changes.\n\n  - `service` - Values that configure the Kubernetes service created for Vault. These options are also used for the `active` and `standby` services when [`ha`](#ha) is enabled.\n\n    - `enabled` (`boolean: true`) - When set to `true`, a Kubernetes service will be created for Vault.\n\n    - `active` - Values that apply only to the vault-active service.\n\n      - `enabled` (`boolean: true`) - When set to `true`, the vault-active Kubernetes service will be created for Vault, selecting pods which label themselves as the cluster leader with `vault-active: \"true\"`.\n\n      - `annotations` (`dictionary: {}`) -  Extra annotations for the active service definition. This can either be YAML or a YAML-formatted multi-line templated string.\n\n    - `standby` - Values that apply only to the vault-standby service.\n\n      - `enabled` (`boolean: true`) - When set to `true`, the vault-standby Kubernetes service will be created for Vault, selecting pods which label themselves as a cluster follower with `vault-active: \"false\"`.\n\n      - `annotations` (`dictionary: {}`) -  Extra annotations for the standby service definition. This can either be YAML or a YAML-formatted multi-line templated string.\n\n    - `clusterIP` (`string`) - ClusterIP controls whether an IP address (cluster IP) is attached to the Vault service within Kubernetes. By default the Vault service will be given a Cluster IP address, set to `None` to disable. When disabled Kubernetes will create a \"headless\" service. 
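For example, a headless service can be requested with a values fragment like:\n\n      ```yaml\n      server:\n        service:\n          clusterIP: None\n      ```\n\n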
Headless services can be used to communicate with pods directly through DNS instead of a round robin load balancer.\n\n    - `type` (`string: \"ClusterIP\"`) - Sets the type of service to create, such as `NodePort`.\n\n    - `externalTrafficPolicy` (`string: \"Cluster\"`) - The [externalTrafficPolicy](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#external-traffic-policy) can be set to either Cluster or Local and is only valid for LoadBalancer and NodePort service types.\n\n    - `port` (`int: 8200`) - Port on which Vault server is listening inside the pod.\n\n    - `targetPort` (`int: 8200`) - Port on which the service is listening.\n\n    - `nodePort` (`int:`) - When type is set to `NodePort`, the bound node port can be configured using this value. A random port will be assigned if this is left blank.\n\n    - `activeNodePort` (`int:`) - (When HA mode is enabled) If type is set to \"NodePort\", a specific nodePort value can be configured for the `active` service, and will be random if left blank.\n\n    - `standbyNodePort` (`int:`) - (When HA mode is enabled) If type is set to \"NodePort\", a specific nodePort value can be configured for the `standby` service, and will be random if left blank.\n\n    - `publishNotReadyAddresses` (`boolean: true`) - If true, do not wait for pods to be ready before including them in the services' targets. Does not apply to the headless service, which is used for cluster-internal communication.\n\n    - `instanceSelector`\n\n      - `enabled` (`boolean: true`) - When set to false, the service selector used for the vault, vault-active, and vault-standby services will not filter on `app.kubernetes.io\/instance`. This means they may select pods from outside this deployment of the Helm chart. Does not affect the headless vault-internal service with `ClusterIP: None`.\n\n    - `annotations` (`dictionary: {}`) - This value defines additional annotations for the service. 
This can either be YAML or a YAML-formatted multi-line templated string.\n\n    ```yaml\n    annotations:\n      \"sample\/annotation1\": \"foo\"\n      \"sample\/annotation2\": \"bar\"\n    # or\n    annotations: |\n      \"sample\/annotation1\": \"foo\"\n      \"sample\/annotation2\": \"bar\"\n    ```\n\n    - `ipFamilyPolicy` (`string: \"\"`) - The `ipFamilyPolicy` and `serviceIPFamilies` options control the service's behaviour in a dual-stack environment. Omitting these values lets the service fall back to whatever defaults the CNI dictates. These options are only supported on Kubernetes versions >= 1.23. The service's [supported IP family policy](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/dual-stack\/#services) can be either `SingleStack`, `PreferDualStack`, or `RequireDualStack`.\n\n    - `serviceIPFamilies` (`array: []`) - Sets the IP families the service should support and the order in which they are applied, including to the ClusterIP. Can be `IPv4` and\/or `IPv6`.\n\n  - `serviceAccount` - Values that configure the Kubernetes service account created for Vault.\n\n    - `create` (`boolean: true`): If set to true, creates a service account used by Vault.\n\n    - `name` (`string: \"\"`): Name of the service account to use. If not set and create is true, a name is generated using the name of the installation (default is \"vault\").\n\n    - `createSecret` (`boolean: false`): Create a Kubernetes Secret object to store a non-expiring token for the service account. Prior to Kubernetes 1.24.0, Kubernetes used to generate this secret for each service account by default. Kubernetes recommends using short-lived tokens from the TokenRequest API or projected volumes instead if possible. For more details, see https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/#service-account-token-secrets. 
`server.serviceAccount.create` must be equal to 'true' in order to use this feature.\n\n    - `annotations` (`dictionary: {}`) - This value defines additional annotations for the service account. This can either be YAML or a YAML-formatted multi-line templated string.\n\n    ```yaml\n    annotations:\n      \"sample\/annotation1\": \"foo\"\n      \"sample\/annotation2\": \"bar\"\n    # or\n    annotations: |\n      \"sample\/annotation1\": \"foo\"\n      \"sample\/annotation2\": \"bar\"\n    ```\n\n    - `extraLabels` (`dictionary: {}`) - This value defines additional labels for the Vault Server service account.\n\n    ```yaml\n    extraLabels:\n      'sample\/label1': 'foo'\n      'sample\/label2': 'bar'\n    ```\n\n    - `serviceDiscovery` - Values that configure permissions required for Vault Server to automatically discover and join a Vault cluster using pod metadata.\n\n      - `enabled` (`boolean: true`) - Enable or disable a service account role binding with the permissions required for Vault's Kubernetes [`service_registration`](\/vault\/docs\/configuration\/service-registration\/kubernetes) config option.\n\n  - `dataStorage` - This configures the volume used for storing Vault data when not using external storage such as Consul.\n\n    - `enabled` (`boolean: true`) -\n      Enables a persistent volume to be created for storing Vault data when not using an external storage service.\n\n    - `size` (`string: 10Gi`) -\n      Size of the volume to be created for Vault's data storage when not using an external storage service.\n\n    - `storageClass` (`string: null`) -\n      Name of the storage class to use when creating the data storage volume.\n\n    - `mountPath` (`string: \/vault\/data`) -\n      Configures the path in the Vault pod where the data storage will be mounted.\n\n    - `accessMode` (`string: ReadWriteOnce`) -\n      Type of access mode of the storage device. 
See the [official Kubernetes documentation](https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes\/#access-modes) for more information.\n\n    - `annotations` (`dictionary: {}`) - This value defines additional annotations to\n      add to the data PVCs. This can either be YAML or a YAML-formatted\n      multi-line templated string.\n\n    ```yaml\n    annotations:\n      kubernetes.io\/my-pvc: foobar\n    # or\n    annotations: |\n      kubernetes.io\/my-pvc: foobar\n    ```\n\n    - `labels` (`dictionary: {}`) - This value defines additional labels to add to the\n      data PVCs. This can either be YAML or a YAML-formatted multi-line templated\n      string.\n\n  - `persistentVolumeClaimRetentionPolicy` (`dictionary: {}`) - Specifies the Persistent Volume Claim (PVC) [retention policy](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/statefulset\/#persistentvolumeclaim-retention).\n\n  ```yaml\n  persistentVolumeClaimRetentionPolicy:\n    whenDeleted: Retain\n    whenScaled: Retain\n  ```\n\n  - `auditStorage` - This configures the volume used for storing Vault's audit logs. See the [Vault documentation](\/vault\/docs\/audit) for more information.\n\n    - `enabled` (`boolean: false`) -\n      Enables a persistent volume to be created for storing Vault's audit logs.\n\n    - `size` (`string: 10Gi`) -\n      Size of the volume to be created for Vault's audit logs.\n\n    - `storageClass` (`string: null`) -\n      Name of the storage class to use when creating the audit storage volume.\n\n    - `mountPath` (`string: \/vault\/audit`) -\n      Configures the path in the Vault pod where the audit storage will be mounted.\n\n    - `accessMode` (`string: ReadWriteOnce`) -\n      Type of access mode of the storage device.\n\n    - `annotations` (`dictionary: {}`) - This value defines additional annotations to\n      add to the audit PVCs.
This can either be YAML or a YAML-formatted\n      multi-line templated string.\n\n    ```yaml\n    annotations:\n      kubernetes.io\/my-pvc: foobar\n    # or\n    annotations: |\n      kubernetes.io\/my-pvc: foobar\n    ```\n\n    - `labels` (`dictionary: {}`) - This value defines additional labels to add to the\n      audit PVCs. This can either be YAML or a YAML-formatted multi-line templated\n      string.\n\n  - `dev` - This configures `dev` mode for the Vault server.\n\n    - `enabled` (`boolean: false`) -\n      Enables `dev` mode for the Vault server. This mode is useful for experimenting with Vault without needing to unseal.\n\n    - `devRootToken` (`string: \"root\"`) - Configures the root token for the Vault development server.\n\n    ~> **Security Warning:** Never, ever, ever run a \"dev\" mode server in production. It is insecure and will lose data on every restart (since it stores data in-memory). It is only made for development or experimentation.\n\n  - `standalone` - This configures `standalone` mode for the Vault server.\n\n    - `enabled` (`boolean: true`) -\n      Enables `standalone` mode for the Vault server.
This mode uses the `file` storage backend and requires a volume for persistence (`dataStorage`).\n\n    - `config` (`string or object: \"{}\"`) -\n      A raw string of extra HCL or JSON [configuration](\/vault\/docs\/configuration) for Vault servers.\n      This will be saved as-is into a ConfigMap that is read by the Vault servers.\n      This can be used to add additional configuration that isn't directly exposed by the chart.\n      If an object is provided, it will be written as JSON.\n\n    ```yaml\n    # ExtraConfig values are formatted as a multi-line string:\n    config: |\n      api_addr = \"http:\/\/POD_IP:8200\"\n\n      listener \"tcp\" {\n        tls_disable = 1\n        address     = \"0.0.0.0:8200\"\n      }\n\n      storage \"file\" {\n        path = \"\/vault\/data\"\n      }\n    ```\n\n    This can also be set using Helm's `--set` flag (vault-helm v0.1.0 and later), using the following syntax:\n\n    ```shell\n    --set server.standalone.config='listener \"tcp\" { address = \"0.0.0.0:8200\" }'\n    ```\n\n  - `ha` - This configures `ha` mode for the Vault server.\n\n    - `enabled` (`boolean: false`) -\n      Enables `ha` mode for the Vault server. This mode uses a highly available backend storage (such as Consul) to store Vault's data. By default this is configured to use [Consul Helm](https:\/\/github.com\/hashicorp\/consul-k8s). For a complete list of storage backends, see the [Vault documentation](\/vault\/docs\/configuration).\n\n    - `apiAddr` (`string: \"{}\"`) -\n      Set the API address configuration for a Vault cluster.
If set to an empty string, the pod IP address is used.\n\n    - `clusterAddr` (`string: null`) - Set the [`cluster_addr`](\/vault\/docs\/configuration#cluster_addr) configuration for Vault HA.\n      If null, defaults to `https:\/\/$(HOSTNAME).{{ template \"vault.fullname\" . }}-internal:8201`.\n\n    - `raft` - This configures `raft` integrated storage mode for the Vault server.\n\n      - `enabled` (`boolean: false`) -\n        Enables `raft` integrated storage mode for the Vault server. This mode uses persistent volumes for storage.\n\n      - `setNodeId` (`boolean: false`) - Set the Node Raft ID to the name of the pod.\n\n      - `config` (`string or object: \"{}\"`) -\n        A raw string of extra HCL or JSON [configuration](\/vault\/docs\/configuration) for Vault servers.\n        This will be saved as-is into a ConfigMap that is read by the Vault servers.\n        This can be used to add additional configuration that isn't directly exposed by the chart.\n        If an object is provided, it will be written as JSON.\n\n    - `replicas` (`int: 3`) -\n      The number of pods to deploy to create a highly available cluster of Vault servers.\n\n    - `updatePartition` (`int: 0`) -\n      If an updatePartition is specified, all Pods with an ordinal that is greater than or equal to the partition will be updated when the StatefulSet's `.spec.template` is updated. If set to `0`, this disables partition updates.
For more information, see the [official Kubernetes documentation](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/statefulset\/#rolling-updates).\n\n    - `config` (`string or object: \"{}\"`) -\n      A raw string of extra HCL or JSON [configuration](\/vault\/docs\/configuration) for Vault servers.\n      This will be saved as-is into a ConfigMap that is read by the Vault servers.\n      This can be used to add additional configuration that isn't directly exposed by the chart.\n      If an object is provided, it will be written as JSON.\n\n    ```yaml\n    # ExtraConfig values are formatted as a multi-line string:\n    config: |\n      ui = true\n      api_addr = \"http:\/\/POD_IP:8200\"\n      listener \"tcp\" {\n          tls_disable = 1\n          address     = \"0.0.0.0:8200\"\n      }\n\n      storage \"consul\" {\n          path = \"vault\/\"\n          address = \"HOST_IP:8500\"\n      }\n    ```\n\n    This can also be set using Helm's `--set` flag (vault-helm v0.1.0 and later), using the following syntax:\n\n    ```shell\n    --set server.ha.config='listener \"tcp\" { address = \"0.0.0.0:8200\" }'\n    ```\n\n  - `disruptionBudget` - Values that configure the disruption budget policy. See the [official Kubernetes documentation](https:\/\/kubernetes.io\/docs\/tasks\/run-application\/configure-pdb\/) for more information.\n\n    - `enabled` (`boolean: true`) -\n      Enables disruption budget policy to limit the number of pods that are down simultaneously from voluntary disruptions.\n\n    - `maxUnavailable` (`int: null`) -\n      The maximum number of unavailable pods. By default, this will be automatically\n      computed based on the `server.replicas` value to be `(n\/2)-1`.
If you need to set\n      this to `0`, you will need to add a `--set 'server.disruptionBudget.maxUnavailable=0'`\n      flag to the helm chart installation command because of a limitation in the Helm\n      templating language.\n\n  - `statefulSet` - This configures settings for the Vault Statefulset.\n\n    - `annotations` (`dictionary: {}`) - This value defines additional annotations to\n      add to the Vault statefulset. This can either be YAML or a YAML-formatted\n      multi-line templated string.\n\n      ```yaml\n      annotations:\n        kubernetes.io\/my-statefulset: foobar\n      # or\n      annotations: |\n        kubernetes.io\/my-statefulset: foobar\n      ```\n\n  - `securityContext` - Set the Pod and container security contexts.\n\n    - `pod` (`dictionary: {}`) - Defines the securityContext for the server Pods, as YAML or a YAML-formatted multi-line templated string.\n\n      Default if not specified and `global.openshift=false`:\n\n      ```yaml\n      runAsNonRoot: true\n      runAsGroup: {{ .Values.server.gid | default 1000 }}\n      runAsUser: {{ .Values.server.uid | default 100 }}\n      fsGroup: {{ .Values.server.gid | default 1000 }}\n      ```\n\n      Defaults to empty if not specified and `global.openshift=true`.\n\n    - `container` (`dictionary: {}`) - Defines the securityContext for the server containers, as YAML or a YAML-formatted multi-line templated string.\n\n      Default if not specified and `global.openshift=false`:\n\n      ```yaml\n      allowPrivilegeEscalation: false\n      ```\n\n      Defaults to empty if not specified and `global.openshift=true`.\n\n- `ui` - Values that configure the Vault UI.\n\n  - `enabled` (`boolean: false`) - If true, the UI will be enabled. The UI will only be enabled on Vault servers. If `server.enabled` is false, then this setting has no effect. To expose the UI in some way, you must configure `ui.service`.\n\n  - `serviceType` (`string: ClusterIP`) - The service type to register.
This defaults to `ClusterIP`.\n    The available service types are documented on\n    [the Kubernetes website](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#publishing-services-service-types).\n\n  - `publishNotReadyAddresses` (`boolean: true`) - If set to true, the UI service will route traffic to Vault pods that aren't ready (if they're sealed or uninitialized).\n\n  - `activeVaultPodOnly` (`boolean: false`) - If set to true, the UI service will only route to the active pod in a Vault HA cluster.\n\n  - `serviceNodePort` (`int: null`) - Sets the Node Port value when using `serviceType: NodePort` on the Vault UI service.\n\n  - `externalPort` (`int: 8200`) - Sets the external port value of the service.\n\n  - `targetPort` (`int: 8200`) - Sets the target port value of the service.\n\n  - `serviceIPFamilyPolicy` (`string: \"\"`) - The IP family and IP families options are to set the behaviour in a dual-stack environment. Omitting these values will let the service fall back to whatever the CNI dictates the defaults should be. These are only supported for Kubernetes versions >=1.23. The service's [supported IP family policy](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/dual-stack\/#services) can be either `SingleStack`, `PreferDualStack`, or `RequireDualStack`.\n\n  - `serviceIPFamilies` (`array: []`) - Sets the families that should be supported and the order in which they should be applied to ClusterIP as well.
Can be IPv4 and\/or IPv6.\n\n  - `externalTrafficPolicy` (`string: \"Cluster\"`) - The [externalTrafficPolicy](https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#external-traffic-policy) can be set to either Cluster or Local and is only valid for LoadBalancer and NodePort service types.\n\n  - `loadBalancerSourceRanges` (`array`) - This value defines additional source CIDRs when using `serviceType: LoadBalancer`.\n\n  ```yaml\n  loadBalancerSourceRanges:\n    - 10.0.0.0\/16\n    - 120.78.23.3\/32\n  ```\n\n  - `loadBalancerIP` (`string`) - This value defines the IP address of the load balancer when using `serviceType: LoadBalancer`.\n\n  - `annotations` (`dictionary: {}`) - This value defines additional annotations for the UI service. This can either be YAML or a YAML-formatted multi-line templated string.\n\n  ```yaml\n  annotations:\n    \"sample\/annotation1\": \"foo\"\n    \"sample\/annotation2\": \"bar\"\n  # or\n  annotations: |\n    \"sample\/annotation1\": \"foo\"\n    \"sample\/annotation2\": \"bar\"\n  ```\n\n- `csi` - Values that configure running the Vault CSI Provider.\n\n  - `enabled` (`boolean: false`) - When set to `true`, the Vault CSI Provider daemonset will be created.\n\n  - `image` - Values that configure the Vault CSI Provider Docker image.\n\n    - `repository` (`string: \"hashicorp\/vault-csi-provider\"`) - The name of the Docker image for the Vault CSI Provider.\n\n    - `tag` (`string: \"1.5.0\"`) - The tag of the Docker image for the Vault CSI Provider. **This should be pinned to a specific version when running in production.** Otherwise, other changes to the chart may inadvertently upgrade your CSI provider.\n\n    - `pullPolicy` (`string: \"IfNotPresent\"`) - The pull policy for container images. The default pull policy is `IfNotPresent` which causes the Kubelet to skip pulling an image if it already exists locally.\n\n  - `volumes` (`array: null`) - A list of volumes made available to all containers.
This takes\n    standard Kubernetes volume definitions.\n\n    ```yaml\n    volumes:\n      - name: plugins\n        emptyDir: {}\n    ```\n\n  - `volumeMounts` (`array: null`) - A list of volume mounts made available to all containers. This takes\n    standard Kubernetes volume mount definitions.\n\n    ```yaml\n    volumeMounts:\n      - mountPath: \/usr\/local\/libexec\/vault\n        name: plugins\n        readOnly: true\n    ```\n\n  - `resources` (`dictionary: {}`) - The resource requests and limits (CPU, memory, etc.) for each of the CSI containers. This should be a YAML dictionary of a Kubernetes [resource](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/) object. If this isn't specified, then the pods won't request any specific amount of resources, which limits the ability for Kubernetes to make efficient use of compute resources.<br \/> **Setting this is highly recommended.**\n\n    ```yaml\n    resources:\n      requests:\n        memory: '10Gi'\n      limits:\n        memory: '10Gi'\n    ```\n\n  - `hmacSecretName` (`string: \"\"`) - Override the default secret name for the CSI Provider's HMAC key used for generating secret versions.\n\n  - `hostNetwork` (`bool: false`) - Set the `hostNetwork` parameter on the CSI Provider pods to\n    avoid the need for a dedicated pod IP.\n\n  - `daemonSet` - Values that configure the Vault CSI Provider daemonSet.\n\n    - `updateStrategy` - Values that configure the Vault CSI Provider update strategy.\n\n      - `type` (`string: \"RollingUpdate\"`) - The [type of update strategy](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/statefulset\/#update-strategies) to be used when the daemonset is updated using Helm upgrades.\n\n      - `maxUnavailable` (`int: null`) - The maximum number of unavailable pods during an upgrade.\n\n    - `annotations` (`dictionary: {}`) - This value defines additional annotations to\n      add to the Vault CSI Provider daemonset.
This can either be YAML or a YAML-formatted\n      multi-line templated string.\n\n      ```yaml\n      annotations:\n        foo: bar\n      # or\n      annotations: |\n        foo: bar\n      ```\n\n    - `extraLabels` (`dictionary: {}`) - This value defines additional labels for the CSI provider daemonset.\n\n    - `providersDir` (`string: \"\/etc\/kubernetes\/secrets-store-csi-providers\"`) - Provider host path (must match the CSI provider's path)\n\n    - `kubeletRootDir` (`string: \"\/var\/lib\/kubelet\"`) - Kubelet host path\n\n    - `securityContext` - Security context for the pod template and container in the csi provider daemonSet\n\n      - `pod` (`dictionary: {}`) - Pod-level securityContext. May be specified as YAML or a YAML-formatted multi-line templated string.\n\n      - `container` (`dictionary: {}`) - Container-level securityContext. May be specified as YAML or a YAML-formatted multi-line templated string.\n\n  - `pod` - Values that configure the Vault CSI Provider pod.\n\n    - `annotations` (`dictionary: {}`) - This value defines additional annotations to\n      add to the Vault CSI Provider pods. This can either be YAML or a YAML-formatted\n      multi-line templated string.\n\n      ```yaml\n      annotations:\n        foo: bar\n      # or\n      annotations: |\n        foo: bar\n      ```\n\n    - `extraLabels` (`dictionary: {}`) - This value defines additional labels for CSI provider pods.\n\n    - `nodeSelector` (`dictionary: {}`) - [nodeSelector](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/assign-pod-node\/#nodeselector) labels for csi pod assignment, formatted as a multi-line string or YAML map.\n\n    ```yaml\n    nodeSelector:\n      beta.kubernetes.io\/arch: amd64\n    ```\n\n    - `affinity` (`dictionary: {}`) - This should be either a multi-line string or YAML matching the PodSpec's affinity field.\n\n    - `tolerations` (`array: []`) - Toleration Settings for CSI pods. 
This should be a multi-line string or YAML matching the Toleration array in a PodSpec.\n\n  - `priorityClassName` (`string: \"\"`) - Priority class for CSI Provider pods.\n\n  - `serviceAccount` - Values that configure the Vault CSI Provider's service account.\n\n    - `annotations` (`dictionary: {}`) - This value defines additional\n      annotations for the serviceAccount definition. This can either be YAML or\n      a YAML-formatted multi-line templated string.\n\n      ```yaml\n      annotations:\n        foo: bar\n      # or\n      annotations: |\n        foo: bar\n      ```\n\n    - `extraLabels` (`dictionary: {}`) - This value defines additional labels for the CSI provider service account.\n\n  - `readinessProbe` - Values that configure the readiness probe for the Vault CSI Provider pods.\n\n    - `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.\n\n    - `initialDelaySeconds` (`int: 5`) - When set to a value, configures the number of seconds after the container has started before the probe initiates.\n\n    - `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.\n\n    - `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.\n\n    - `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.\n\n  - `livenessProbe` - Values that configure the liveness probe for the Vault CSI Provider pods.\n\n    - `initialDelaySeconds` (`int: 5`) - Sets the initial delay of the liveness probe when the container starts.\n\n    - `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.\n\n    - `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.\n\n    - `successThreshold` (`int: 1`)
- When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.\n\n    - `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.\n\n  - `logLevel` (`string: \"info\"`) - Configures the log level for the Vault CSI provider. Supported\n    log levels include: `trace`, `debug`, `info`, `warn`, `error`, and `off`.\n\n  - `debug` (`bool: false`) - Deprecated: set `logLevel` to `debug` instead. When set to true,\n    enables debug logging on the Vault CSI Provider daemonset.\n\n  - `extraArgs` (`array: []`) - The extra arguments to be applied to the CSI pod startup command. See [here](\/vault\/docs\/platform\/k8s\/csi\/configurations#command-line-arguments) for available flags.\n\n  - `agent` - Configures the Vault Agent sidecar for the CSI Provider.\n    - `enabled` (`bool: true`) - Whether to enable the agent sidecar for the CSI provider.\n    - `extraArgs` (`array: []`) - The extra arguments to be applied to the agent startup command.\n\n    - `image` - Values that configure the Vault Agent sidecar image for the CSI Provider.\n      - `pullPolicy` (`string: \"IfNotPresent\"`) - The pull policy for agent image. The default pull policy is `IfNotPresent` which causes the Kubelet to skip pulling an image if it already exists.\n\n      - `repository` (`string: \"hashicorp\/vault\"`) - The name of the Docker image for the Vault Agent sidecar. This should be set to the official Vault Docker image.\n\n      - `tag` (`string: \"1.18.1\"`) - The tag of the Vault Docker image to use for the Vault Agent Sidecar.\n\n    - `logFormat` (`string: \"standard\"`) - Configures the log format of the agent sidecar. Supported log formats: `standard`, `json`.\n    - `logLevel` (`string: \"info\"`) - Configures the log level of the agent sidecar. Supported log levels: `trace`, `debug`, `info`, `warn`, `error`.\n    - `resources` (`dictionary: {}`) - The resource requests and limits (CPU, memory, etc.) for the agent.
This should be a YAML dictionary of a Kubernetes [resource](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/) object.\n    ```yaml\n    resources:\n      requests:\n        memory: '256Mi'\n        cpu: '250m'\n      limits:\n        memory: '256Mi'\n        cpu: '250m'\n    ```\n\n- `serverTelemetry` - Values that configure metrics and telemetry. Enabling these features requires setting\n  the `telemetry {}` stanza in the Vault configuration. See the [telemetry](\/vault\/docs\/configuration\/telemetry)\n  [docs](\/vault\/docs\/internals\/telemetry) for more on the Vault configuration.\n\n  If authorization is not set for authenticating to Vault's metrics endpoint,\n  the following Vault server `telemetry{}` config must be included in the\n  `listener \"tcp\"{}` stanza of the Vault configuration:\n\n  ```hcl\n  listener \"tcp\" {\n    tls_disable = 1\n    address     = \"0.0.0.0:8200\"\n\n    telemetry {\n      unauthenticated_metrics_access = \"true\"\n    }\n  }\n  ```\n\n  In addition, a top-level `telemetry {}` stanza must also be included in the Vault configuration, such as:\n\n  ```hcl\n  telemetry {\n    prometheus_retention_time = \"30s\"\n    disable_hostname = true\n  }\n  ```\n\n  - `serviceMonitor` - Values that configure monitoring the Vault server.\n\n    - `enabled` (`boolean: false`) - When set to `true`, enable deployment of the Vault Server\n      ServiceMonitor CustomResource. The Prometheus operator *must* be installed before enabling this\n      feature.
If not, the chart will fail to install due to missing CustomResourceDefinitions provided by\n      the operator.\n\n      Instructions on how to install the Helm chart can be found [here](https:\/\/github.com\/prometheus-community\/helm-charts\/tree\/main\/charts\/kube-prometheus-stack).\n\n      More information can be found in [these](https:\/\/github.com\/prometheus-operator\/prometheus-operator)\n      [repositories](https:\/\/github.com\/prometheus-operator\/kube-prometheus).\n\n    - `selectors` (`dictionary: {}`) - Selector labels to add to the ServiceMonitor.\n\n    - `interval` (`string: \"30s\"`) - Interval at which Prometheus scrapes metrics.\n\n    - `scrapeTimeout` (`string: \"10s\"`) - Timeout for Prometheus scrapes.\n\n    - `tlsConfig` (`dictionary: {}`) - tlsConfig used for scraping the Vault metrics API. See the\n      Prometheus [API\n      reference](https:\/\/prometheus-operator.dev\/docs\/api-reference\/api\/#monitoring.coreos.com\/v1.TLSConfig)\n      for more details.\n\n      ```yaml\n      tlsConfig:\n        ca:\n          secret:\n            name: vault-metrics-client\n            key: ca.crt\n      ```\n\n    - `authorization` (`dictionary: {}`) - Authorization used for scraping the Vault metrics API.\n      See the Prometheus [API\n      reference](https:\/\/prometheus-operator.dev\/docs\/api-reference\/api\/#monitoring.coreos.com\/v1.SafeAuthorization)\n      for more details.\n\n      ```yaml\n      authorization:\n        credentials:\n          name: vault-metrics-client\n          key: token\n      ```\n\n  - `prometheusRules` - Values that configure Prometheus rules.\n\n    - `enabled` (`boolean: false`) - Deploy the PrometheusRule custom resource for AlertManager-based\n      alerts.
Requires that AlertManager is properly deployed.\n\n    - `selectors` (`dictionary: {}`) - Selector labels to add to the Prometheus rules.\n\n    - `rules` (`array: []`) - Prometheus rules to create.\n\n      For example:\n      ```yaml\n      rules:\n        - alert: vault-HighResponseTime\n          annotations:\n            message: The response time of Vault is over 500ms on average over the last 5 minutes.\n          expr: vault_core_handle_request{quantile=\"0.5\", namespace=\"mynamespace\"} > 500\n          for: 5m\n          labels:\n            severity: warning\n        - alert: vault-HighResponseTime\n          annotations:\n            message: The response time of Vault is over 1s on average over the last 5 minutes.\n          expr: vault_core_handle_request{quantile=\"0.5\", namespace=\"mynamespace\"} > 1000\n          for: 5m\n          labels:\n            severity: critical\n      ```
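Putting the telemetry options above together, a minimal values override for enabling the Prometheus Operator integration, the ServiceMonitor, and a sample alerting rule might look like the sketch below. The `release: prometheus` selector label is a hypothetical example and must match the labels your Prometheus installation actually selects on:\n\n```yaml\n# values-telemetry.yaml -- illustrative sketch, not a drop-in configuration.\nglobal:\n  serverTelemetry:\n    # Enables the chart's Prometheus Operator integration.\n    prometheusOperator: true\n\nserverTelemetry:\n  serviceMonitor:\n    enabled: true\n    interval: 30s\n    selectors:\n      release: prometheus   # hypothetical label; adjust to your Prometheus release\n  prometheusRules:\n    enabled: true\n    rules:\n      - alert: vault-HighResponseTime\n        expr: vault_core_handle_request{quantile=\"0.5\"} > 500\n        for: 5m\n        labels:\n          severity: warning\n```\n\nNote that the Vault server itself still needs the top-level `telemetry {}` stanza and the `unauthenticated_metrics_access` listener setting shown earlier, supplied through `server.standalone.config` or `server.ha.config`.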
References secrets to be used when pulling images from private registries  See  Pull an Image from a Private Registry  https   kubernetes io docs tasks configure pod container pull image private registry   for more details  May be specified as an array of name map entries or just as an array of names          yaml     imagePullSecrets          name  image pull secret       or     imagePullSecrets          image pull secret               tlsDisable    boolean  true     When set to  true   changes URLs from  https  to  http   such as the  VAULT ADDR http   127 0 0 1 8200  environment variable set on the Vault pods         externalVaultAddr    string         External vault server address for the injector and CSI provider to use  Setting this will disable deployment of a vault server  A service account with token review permissions is automatically created if  server serviceAccount create true  is set for the external Vault server to use        openshift    boolean  false     If  true   enables configuration specific to OpenShift such as NetworkPolicy  SecurityContext  and Route        psp    Values that configure Pod Security Policy          enable    boolean  false     When set to  true   enables Pod Security Policies for Vault and Vault Agent Injector          annotations    dictionary         This value defines additional annotations to       add to the Pod Security Policies  This can either be YAML or a YAML formatted       multi line templated string            yaml       annotations          seccomp security alpha kubernetes io allowedProfileNames  docker default runtime default         apparmor security beta kubernetes io allowedProfileNames  runtime default         seccomp security alpha kubernetes io defaultProfileName   runtime default         apparmor security beta kubernetes io defaultProfileName   runtime default         or       annotations            seccomp security alpha kubernetes io allowedProfileNames  docker default runtime default         
apparmor security beta kubernetes io allowedProfileNames  runtime default         seccomp security alpha kubernetes io defaultProfileName   runtime default         apparmor security beta kubernetes io defaultProfileName   runtime default                serverTelemetry    Values that configure metrics and telemetry         prometheusOperator    boolean  false     When set to  true   enables integration with the       Prometheus Operator  Be sure to configure the top level   serverTelemetry    vault docs platform k8s helm configuration servertelemetry 1  section for more details       and required configuration values      injector    Values that configure running a Vault Agent Injector Admission Webhook Controller within Kubernetes        enabled    boolean or string          When set to  true   the Vault Agent Injector Admission Webhook controller will be created  When set to        defaults to the value of  global enabled         externalVaultAddr    string         Deprecated  Please use  global externalVaultAddr   vault docs platform k8s helm configuration externalvaultaddr  instead        replicas    int  1     The number of pods to deploy to create a highly available cluster of Vault Agent Injectors  Requires Vault K8s 0 7 0 to have more than 1 replica        leaderElector    Values that configure the Vault Agent Injector leader election for HA deployments          enabled    boolean  true     When set to  true   enables leader election for Vault Agent Injector  This is required when using auto tls and more than 1 replica        image    Values that configure the Vault Agent Injector Docker image          repository    string   hashicorp vault k8s      The name of the Docker image for Vault Agent Injector          tag    string   1 5 0      The tag of the Docker image for the Vault Agent Injector    This should be pinned to a specific version when running in production    Otherwise  other changes to the chart may inadvertently upgrade your admission controller  
        pullPolicy    string   IfNotPresent      The pull policy for container images  The default pull policy is  IfNotPresent  which causes the Kubelet to skip pulling an image if it already exists        agentImage    Values that configure the Vault Agent sidecar image          repository    string   hashicorp vault      The name of the Docker image for the Vault Agent sidecar  This should be set to the official Vault Docker image          tag    string   1 18 1      The tag of the Vault Docker image to use for the Vault Agent Sidecar    Vault 1 3 1  is required by the admission controller          agentDefaults    Values that configure the injected Vault Agent containers default values          cpuLimit    string   500m      The default CPU limit for injected Vault Agent containers          cpuRequest    string   250m      The default CPU request for injected Vault Agent containers          memLimit    string   128Mi      The default memory limit for injected Vault Agent containers          memRequest    string   64Mi      The default memory request for injected Vault Agent containers          ephemeralLimit    string         The default ephemeral storage limit for injected Vault Agent containers          ephemeralRequest    string         The default ephemeral storage request for injected Vault Agent containers          template    string   map      The default template type for rendered secrets if no custom templates are defined        Possible values include  map  and  json           templateConfig    Default values within Agent s   template config  stanza   vault docs agent and proxy agent template             exitOnRetryFailure    boolean  true     Controls whether Vault Agent exits after it has exhausted its number of template retry attempts due to failures            staticSecretRenderInterval    string         Configures how often Vault Agent Template should render non leased secrets such as KV v2  See the  Vault Agent Templates documentation   vault 
docs agent and proxy agent template non renewable secrets  for more details        metrics    Values that configure the Vault Agent Injector metric exporter          enabled    boolean  false     When set to  true   the Vault Agent Injector exports Prometheus metrics at the   metrics  path        authPath    string   auth kubernetes      Mount path of the Vault Kubernetes Auth Method        logLevel    string   info      Configures the log verbosity of the injector  Supported log levels  trace  debug  error  warn  info        logFormat    string   standard      Configures the log format of the injector  Supported log formats   standard    json         revokeOnShutdown    boolean  false     Configures all Vault Agent sidecars to revoke their token when shutting down        securityContext    Security context for the pod template and the injector container         pod    dictionary         Defines the securityContext for the injector Pod  as YAML or a YAML formatted multi line templated string  Default if not specified            yaml       runAsNonRoot  true       runAsGroup         runAsUser         fsGroup                     container    dictionary         Defines the securityContext for the injector container  as YAML or a YAML formatted multi line templated string  Default if not specified            yaml       allowPrivilegeEscalation  false       capabilities          drop              ALL                 resources    dictionary         The resource requests and limits  CPU  memory  etc   for each container of the injector  This should be a YAML dictionary of a Kubernetes  resource  https   kubernetes io docs concepts configuration manage resources containers   object  If this isn t specified  then the pods won t request any specific amount of resources  which limits the ability for Kubernetes to make efficient use of compute resources  br      Setting this is highly recommended            yaml     resources        requests          memory   256Mi          
cpu   250m        limits          memory   256Mi          cpu   250m                webhook    Values that control the Mutating Webhook Configuration          failurePolicy    string   Ignore      Configures failurePolicy of the webhook  To block pod creation while the webhook is unavailable  set the policy to   Fail    See  Failure Policy  https   kubernetes io docs reference access authn authz extensible admission controllers  failure policy           matchPolicy    string   Exact      Specifies the approach to accepting changes based on the rules of the MutatingWebhookConfiguration  See  Match Policy  https   kubernetes io docs reference access authn authz extensible admission controllers  matching requests matchpolicy           timeoutSeconds    int  30     Specifies the number of seconds before the webhook request will be ignored or fails  If it is ignored or fails depends on the  failurePolicy   See  timeouts  https   kubernetes io docs reference access authn authz extensible admission controllers  timeouts           namespaceSelector    object         The selector used by the admission webhook controller to limit what namespaces where injection can happen  If unset  all non system namespaces are eligible for injection  See  Matching requests  namespace selector  https   kubernetes io docs reference access authn authz extensible admission controllers  matching requests namespaceselector             yaml       namespaceSelector          matchLabels            sidecar injector  enabled                   objectSelector    object         The selector used by the admission webhook controller to limit what objects can be affected by mutation  See  Matching requests  object selector  https   kubernetes io docs reference access authn authz extensible admission controllers  matching requests objectselector             yaml       objectSelector          matchLabels            sidecar injector  enabled                   annotations    string or object         Defines 
additional annotations to attach to the webhook  This can either be YAML or a YAML formatted multi line templated string        namespaceSelector    dictionary         Deprecated  please use   webhook namespaceSelector    vault docs platform k8s helm configuration namespaceselector  instead        objectSelector    dictionary         Deprecated  please use   webhook objectSelector    vault docs platform k8s helm configuration objectselector  instead        extraLabels    dictionary         This value defines additional labels for Vault Agent Injector pods          yaml     extraLabels         sample label1    foo         sample label2    bar                certs    The certs section configures how the webhook TLS certs are configured  These are the TLS certs for the Kube apiserver communicating to the webhook  By default  the injector will generate and manage its own certs  but this requires the ability for the injector to update its own  MutatingWebhookConfiguration   In a production environment  custom certs should probably be used  Configure the values below to enable this          secretName    string         secretName is the name of the Kubernetes secret that has the TLS certificate and private key to serve the injector webhook  If this is null  then the injector will default to its automatic management mode          caBundle    string         The PEM encoded CA public certificate bundle for the TLS certificate served by the injector  This must be specified as a string and can t come from a secret because it must be statically configured on the Kubernetes  MutatingAdmissionWebhook  resource  This only needs to be specified if  secretName  is not null          certName    string   tls crt      The name of the certificate file within the  secretName  secret          keyName    string   tls key      The name of the key file within the  secretName  secret        extraEnvironmentVars    dictionary         Extra environment variables to set in the injector 
deployment          yaml       Example setting injector TLS options in a deployment      extraEnvironmentVars        AGENT INJECT TLS MIN VERSION  tls13       AGENT INJECT TLS CIPHER SUITES                    affinity    This value defines the  affinity  https   kubernetes io docs concepts configuration assign pod node  affinity and anti affinity  for Vault Agent Injector pods  This can either be multi line string or YAML matching the PodSpec s affinity field  It defaults to allowing only a single pod on each node  which minimizes risk of the cluster becoming unusable if a node is lost  If you need to run more pods per node  for example  testing on Minikube   set this value to  null           yaml       Recommended default server affinity      affinity          podAntiAffinity          requiredDuringSchedulingIgnoredDuringExecution            labelSelector              matchLabels                app kubernetes io name   agent injector               app kubernetes io instance                   component  webhook           topologyKey  kubernetes io hostname               topologySpreadConstraints    array          Topology settings  https   kubernetes io docs concepts workloads pods pod topology spread constraints       for injector pods  This can either be YAML or a YAML formatted multi line templated string        tolerations    array         Toleration Settings for injector pods  This should be either a multi line string or YAML matching the Toleration array        nodeSelector    dictionary         nodeSelector labels for injector pod assignment  formatted as a muli line string or YAML map        priorityClassName    string         Priority class for injector pods       annotations    dictionary         This value defines additional annotations for injector pods  This can either be YAML or a YAML formatted multi line templated string          yaml     annotations         sample annotation1    foo         sample annotation2    bar        or     annotations        
   sample annotation1    foo         sample annotation2    bar                failurePolicy    string   Ignore      Deprecated  please use   webhook failurePolicy    vault docs platform k8s helm configuration failurepolicy  instead        webhookAnnotations    dictionary         Deprecated  please use   webhook annotations    vault docs platform k8s helm configuration annotations 1  instead        service    The service section configures the Kubernetes service for the Vault Agent Injector          annotations    dictionary         This value defines additional annotations to       add to the Vault Agent Injector service  This can either be YAML or a YAML formatted       multi line templated string            yaml       annotations           sample annotation1    foo           sample annotation2    bar          or       annotations             sample annotation1    foo           sample annotation2    bar                  serviceAccount    Injector serviceAccount specific config         annotations    dictionary         Extra annotations to attach to the injector serviceAccount  This can either be YAML or a YAML formatted multi line templated string        hostNetwork    boolean  false     When set to true  configures the Vault Agent Injector to run on the host network  This is useful     when alternative cluster networking is used        port    int  8080     Configures the port the Vault Agent Injector listens on        podDisruptionBudget    dictionary         A disruption budget limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions          yaml     podDisruptionBudget        maxUnavailable  1               strategy    dictionary         Strategy for updating the deployment  This can be a multi line string or a YAML map          yaml     strategy        rollingUpdate          maxSurge  25          maxUnavailable  25        type  RollingUpdate       or     strategy          rollingUpdate          maxSurge  
25          maxUnavailable  25        type  RollingUpdate               livenessProbe    Values that configure the liveness probe for the injector          failureThreshold    int  2     When set to a value  configures how many probe failures will be tolerated by Kubernetes          initialDelaySeconds    int  60     Sets the initial delay of the liveness probe when the container starts          periodSeconds    int  5     When set to a value  configures how often  in seconds  to perform the probe          successThreshold    int  1     When set to a value  configures the minimum consecutive successes for the probe to be considered successful after having failed          timeoutSeconds    int  3     When set to a value  configures the number of seconds after which the probe times out        readinessProbe    Values that configure the readiness probe for the injector          failureThreshold    int  2     When set to a value  configures how many probe failures will be tolerated by Kubernetes          initialDelaySeconds    int  60     Sets the initial delay of the readiness probe when the container starts          periodSeconds    int  5     When set to a value  configures how often  in seconds  to perform the probe          successThreshold    int  1     When set to a value  configures the minimum consecutive successes for the probe to be considered successful after having failed          timeoutSeconds    int  3     When set to a value  configures the number of seconds after which the probe times out        startupProbe    Values that configure the startup probe for the injector          failureThreshold    int  2     When set to a value  configures how many probe failures will be tolerated by Kubernetes          initialDelaySeconds    int  60     Sets the initial delay of the startup probe when the container starts          periodSeconds    int  5     When set to a value  configures how often  in seconds  to perform the probe          successThreshold    int  1  
   When set to a value  configures the minimum consecutive successes for the probe to be considered successful after having failed          timeoutSeconds    int  3     When set to a value  configures the number of seconds after which the probe times out      server    Values that configure running a Vault server within Kubernetes        enabled    boolean or string          When set to  true   the Vault server will be created  When set to        defaults to the value of  global enabled         enterpriseLicense    This value refers to a Kubernetes secret that you have created that contains your enterprise license  If you are not using an enterprise image or if you plan to introduce the license key via another route  then leave secretName blank      or set it to null  Requires Vault Enterprise 1 8 or later          secretName    string         The name of the Kubernetes secret that holds the enterprise license  The secret must be in the same namespace that Vault is installed into          secretKey    string   license      The key within the Kubernetes secret that holds the enterprise license        image    Values that configure the Vault Docker image          repository    string   hashicorp vault      The name of the Docker image for the containers running Vault          tag    string   1 18 1      The tag of the Docker image for the containers running Vault    This should be pinned to a specific version when running in production    Otherwise  other changes to the chart may inadvertently upgrade your admission controller          pullPolicy    string   IfNotPresent      The pull policy for container images  The default pull policy is  IfNotPresent  which causes the Kubelet to skip pulling an image if it already exists        updateStrategyType    string   OnDelete      Configure the  Update Strategy Type  https   kubernetes io docs concepts workloads controllers statefulset  update strategies  for the StatefulSet        logLevel    string         Configures the 
Vault server logging verbosity  If set this will override values defined in the Vault configuration file      Supported log levels include   trace    debug    info    warn    error         logFormat    string         Configures the Vault server logging format  If set this will override values defined in the Vault configuration file      Supported log formats include   standard    json         resources    dictionary         The resource requests and limits  CPU  memory  etc   for each container of the server  This should be a YAML dictionary of a Kubernetes  resource  https   kubernetes io docs concepts configuration manage resources containers   object  If this isn t specified  then the pods won t request any specific amount of resources  which limits the ability for Kubernetes to make efficient use of compute resources    Setting this is highly recommended            yaml     resources        requests          memory   10Gi        limits          memory   10Gi                ingress    Values that configure Ingress services for Vault          If deploying on OpenShift  these ingress settings are ignored  Use the   route    route  configuration to expose Vault on OpenShift   br    br       If   ha    ha  is enabled the Ingress will point to the active vault server via the  active  Service  This requires vault 1 4  and  service registration   vault docs configuration service registration kubernetes  to be set in the vault config          enabled    boolean  false     When set to  true   an  Ingress  https   kubernetes io docs concepts services networking ingress   service will be created          labels    dictionary         Labels for the ingress service          annotations    dictionary         This value defines additional annotations to       add to the Ingress service  This can either be YAML or a YAML formatted       multi line templated string            yaml       annotations          kubernetes io ingress class  nginx         kubernetes io tls acme   true 
         or       annotations            kubernetes io ingress class  nginx         kubernetes io tls acme   true                    ingressClassName    string         Specify the  IngressClass  https   kubernetes io docs concepts services networking ingress  ingress class  that should be used to implement the Ingress         activeService    boolean  true     When HA mode is enabled and K8s service registration is being used  configure the ingress to point to the Vault active service          extraPaths    array         Configures extra paths to prepend to the host configuration        This is useful when working with annotation based services            yaml       extraPaths            path               backend              service                name  ssl redirect               port                  number  use annotation                   tls    array         Configures the TLS portion of the  Ingress spec  https   kubernetes io docs concepts services networking ingress  tls   where  hosts  is a list of the hosts defined in the Common Name of the TLS certificate  and  secretName  is the name of the Secret containing the required TLS files such as certificates and keys            yaml       tls            hosts                sslexample foo com               sslexample bar com           secretName  testsecret tls                   hosts    Values that configure the Ingress host rules            host    string   chart example local     Name of the host to use for Ingress            paths    array        Deprecated   server ingress extraPaths  should be used instead  A list of paths that will be directed to the Vault service  At least one path is required            yaml       paths                         vault                 hostAliases    array         A list of aliases to be added to   etc hosts   Specified as a YAML list following the  hostAlias format  https   kubernetes io docs tasks network customize hosts file for pods         route    Values that 
configure Route services for Vault in OpenShift         If   ha    ha  is enabled the Route will point to the active vault server via the  active  Service  requires vault 1 4  and  service registration   vault docs configuration service registration kubernetes  to be set in the vault config           enabled    boolean  false     When set to  true   a Route for Vault will be created          activeService    boolean  true     When HA mode is enabled and K8s service registration is being used  configure the route to point to the Vault active service          labels    dictionary         Labels for the Route         annotations    dictionary         Annotations to add to the Route  This can either be YAML or a YAML formatted multi line templated string          host    string   chart example local      Sets the hostname for the Route          tls    dictionary   termination  passthrough      TLS config that will be passed directly to the route s TLS config  which can be used to configure other termination methods that terminate TLS at the router        authDelegator    Values that configure the Cluster Role Binding attached to the Vault service account          enabled    boolean  true     When set to  true   a Cluster Role Binding will be bound to the Vault service account  This Cluster Role Binding has the necessary privileges for Vault to use the  Kubernetes Auth Method   vault docs auth kubernetes         readinessProbe    Values that configure the readiness probe for the Vault pods          enabled    boolean  true     When set to  true   a readiness probe will be applied to the Vault pods          path    string         When set to a value  enables HTTP HTTPS probes instead of using the default  exec  probe  The http https scheme is controlled by the  tlsDisable  value          failureThreshold    int  2     When set to a value  configures how many probe failures will be tolerated by Kubernetes          initialDelaySeconds    int  5     When set to a value  
configures the number of seconds after the container has started before probe initiates          periodSeconds    int  5     When set to a value  configures how often  in seconds  to perform the probe          successThreshold    int  1     When set to a value  configures the minimum consecutive successes for the probe to be considered successful after having failed          timeoutSeconds    int  3     When set to a value  configures the number of seconds after which the probe times out          port    int  8200     When set to a value  overrides the default port used for the server readiness probe          yaml     readinessProbe        enabled  true       path   v1 sys health standbyok true       failureThreshold  2       initialDelaySeconds  5       periodSeconds  5       successThreshold  1       timeoutSeconds  3       port  8200               livenessProbe    Values that configure the liveness probe for the Vault pods          enabled    boolean  false     When set to  true   a liveness probe will be applied to the Vault pods          execCommand    array         Used to define a liveness exec command  If provided  exec is preferred to httpGet  path  as the livenessProbe handler          yaml     execCommand           bin sh          c          vault userconfig mylivenessscript run sh                 path    string    v1 sys health standbyok true      Path for the livenessProbe to use httpGet as the livenessProbe handler  The http https scheme is controlled by the  tlsDisable  value          initialDelaySeconds    int  60     Sets the initial delay of the liveness probe when the container starts          failureThreshold    int  2     When set to a value  configures how many probe failures will be tolerated by Kubernetes          periodSeconds    int  5     When set to a value  configures how often  in seconds  to perform the probe          successThreshold    int  1     When set to a value  configures the minimum consecutive successes for the probe to be 
considered successful after having failed          timeoutSeconds    int  3     When set to a value  configures the number of seconds after which the probe times out          port    int  8200     Port number on which livenessProbe will be checked if httpGet is used as the livenessProbe handler          yaml     livenessProbe        enabled  true       path   v1 sys health standbyok true       initialDelaySeconds  60       failureThreshold  2       periodSeconds  5       successThreshold  1       timeoutSeconds  3       port  8200               terminationGracePeriodSeconds    int  10     Optional duration in seconds the pod needs to terminate gracefully  See  https   kubernetes io docs concepts containers container lifecycle hooks        preStopSleepSeconds    int  5     Used to set the sleep time during the preStop step        postStart    array         Used to define commands to run after the pod is ready  This can be used to automate processes such as initialization or bootstrapping auth methods        yaml   postStart         bin sh        c        vault userconfig myscript run sh             extraInitContainers    array  null     extraInitContainers is a list of init containers  Specified as a YAML list  This is useful if you need to run a script to provision TLS certificates or write out configuration files in a dynamic way        extraContainers    array  null     The extra containers to be applied to the Vault server pods        yaml   extraContainers        name  mycontainer       image   app 0 0 0        env                  extraEnvironmentVars    dictionary         The extra environment variables to be applied to the Vault server        yaml     Extra Environment Variables are defined as key value strings    extraEnvironmentVars      GOOGLE REGION  global     GOOGLE PROJECT  myproject     GOOGLE APPLICATION CREDENTIALS   vault userconfig myproject myproject creds json             shareProcessNamespace    boolean  false     Enables process namespace 
sharing between Vault and the extraContainers  This is useful if Vault must be signaled  e g  to send a SIGHUP for log rotation        extraArgs    string  null     The extra arguments to be applied to the Vault server startup command          yaml     extraArgs    config  path to extra config hcl  log format json                extraPorts    array         additional ports to add to the server statefulset       yaml   extraPorts        containerPort  8300       name  http monitoring             extraSecretEnvironmentVars    array         The extra environment variables populated from a secret to be applied to the Vault server          envName    string  required           Name of the environment variable to be populated in the Vault container          secretName    string  required           Name of Kubernetes secret used to populate the environment variable defined by  envName           secretKey    string  required           Name of the key where the requested secret value is located in the Kubernetes secret          yaml       Extra Environment Variables populated from a secret      extraSecretEnvironmentVars          envName  AWS SECRET ACCESS KEY         secretName  vault         secretKey  AWS SECRET ACCESS KEY               extraVolumes    array         Deprecated  please use  volumes  instead  A list of extra volumes to mount to Vault servers  This is useful for bringing in extra data that can be referenced by other configurations at a well known path  such as TLS certificates  The value of this should be a list of objects  Each object supports the following keys          type    string  required           Type of the volume  must be one of  configMap  or  secret   Case sensitive          name    string  required           Name of the configMap or secret to be mounted  This also controls the path       that it is mounted to  The volume will be mounted to   vault userconfig  name   by default       unless  path  is configured          path    string   vault 
userconfigs           Name of the path where a configMap or secret is mounted  If not specified       the volume will be mounted to   vault userconfig  name of volume            defaultMode    string   420            Default mode of the mounted files          yaml     extraVolumes          type   secret          name   vault certs          path    etc pki                volumes    array  null     A list of volumes made available to all containers  This takes     standard Kubernetes volume definitions          yaml     volumes          name  plugins         emptyDir                   volumeMounts    array  null     A list of volumes mounts made available to all containers  This takes     standard Kubernetes volume definitions          yaml     volumeMounts          mountPath   usr local libexec vault         name  plugins         readOnly  true               affinity    This value defines the  affinity  https   kubernetes io docs concepts configuration assign pod node  affinity and anti affinity  for server pods  This should be either a multi line string or YAML matching the PodSpec s affinity field  It defaults to allowing only a single pod on each node  which minimizes risk of the cluster becoming unusable if a node is lost  If you need to run more pods per node  for example  testing on Minikube   set this value to  null         yaml     Recommended default server affinity    affinity        podAntiAffinity        requiredDuringSchedulingIgnoredDuringExecution          labelSelector          matchLabels            app kubernetes io name             app kubernetes io instance               component  server         topologyKey  kubernetes io hostname             topologySpreadConstraints    array          Topology settings  https   kubernetes io docs concepts workloads pods pod topology spread constraints       for server pods  This can either be YAML or a YAML formatted multi line templated string        tolerations    array         This value defines the  
tolerations  https   kubernetes io docs concepts configuration taint and toleration   that are acceptable when being scheduled  This should be either a multi line string or YAML matching the Toleration array in a PodSpec        yaml   tolerations          key   node kubernetes io unreachable        operator   Exists        effect   NoExecute        tolerationSeconds  6000             nodeSelector    dictionary         This value defines additional node selection criteria for more control over where the Vault servers are deployed  This should be formatted as a multi line string or YAML map        yaml   nodeSelector        disktype  ssd             networkPolicy    Values that configure the Vault Network Policy          enabled    boolean  false     When set to  true   enables a Network Policy for the Vault cluster          egress    array         This value configures the  egress  https   kubernetes io docs concepts services networking network policies   network policy rules          yaml     egress          to              ipBlock                cidr  10 0 0 0 24         ports              protocol  TCP             port  8200                 ingress    array         This value configures the  ingress  https   kubernetes io docs concepts services networking network policies   network policy rules  The default is below          yaml     ingress          from            namespaceSelector             ports            port  8200           protocol  TCP           port  8201           protocol  TCP               priorityClassName    string         Priority class for server pods       extraLabels    dictionary         This value defines additional labels for server pods        yaml   extraLabels       sample label1    foo       sample label2    bar              annotations    dictionary         This value defines additional annotations for server pods  This can either be YAML or a YAML formatted multi line templated string        yaml   annotations       sample 
annotation1    foo       sample annotation2    bar      or   annotations         sample annotation1    foo       sample annotation2    bar              includeConfigAnnotation    boolean  false     Add an annotation to the server configmap and the statefulset pods   vaultproject io config checksum   that is a hash of the Vault configuration  This can be used together with an OnDelete deployment strategy to help identify which pods still need to be deleted during a deployment to pick up any configuration changes        service    Values that configure the Kubernetes service created for Vault  These options are also used for the  active  and  standby  services when   ha    ha  is enabled          enabled    boolean  true     When set to  true   a Kubernetes service will be created for Vault          active    Values that apply only to the vault active service            enabled    boolean  true     When set to  true   the vault active Kubernetes service will be created for Vault  selecting pods which label themselves as the cluster leader with  vault active   true              annotations    dictionary          Extra annotations for the active service definition  This can either be YAML or a YAML formatted multi line templated string          standby    Values that apply only to the vault standby service            enabled    boolean  true     When set to  true   the vault standby Kubernetes service will be created for Vault  selecting pods which label themselves as a cluster follower with  vault active   false              annotations    dictionary          Extra annotations for the standby service definition  This can either be YAML or a YAML formatted multi line templated string          clusterIP    string     ClusterIP controls whether an IP address  cluster IP  is attached to the Vault service within Kubernetes  By default the Vault service will be given a Cluster IP address  set to  None  to disable  When disabled Kubernetes will create a  headless  service  
Headless services can be used to communicate with pods directly through DNS instead of a round-robin load balancer.

  - `type` (`string: "ClusterIP"`) - Sets the type of service to create, such as `NodePort`.

  - `externalTrafficPolicy` (`string: "Cluster"`) - The [`externalTrafficPolicy`](https://kubernetes.io/docs/concepts/services-networking/service/#external-traffic-policy) can be set to either `Cluster` or `Local` and is only valid for LoadBalancer and NodePort service types.

  - `port` (`int: 8200`) - Port on which the Vault server is listening inside the pod.

  - `targetPort` (`int: 8200`) - Port on which the service is listening.

  - `nodePort` (`int:`) - When type is set to `NodePort`, the bound node port can be configured using this value. A random port will be assigned if this is left blank.

  - `activeNodePort` (`int:`) - (When HA mode is enabled) If type is set to `NodePort`, a specific `nodePort` value can be configured for the `active` service, and will be random if left blank.

  - `standbyNodePort` (`int:`) - (When HA mode is enabled) If type is set to `NodePort`, a specific `nodePort` value can be configured for the `standby` service, and will be random if left blank.

  - `publishNotReadyAddresses` (`boolean: true`) - If true, do not wait for pods to be ready before including them in the services' targets. Does not apply to the headless service, which is used for cluster-internal communication.

  - `instanceSelector`

    - `enabled` (`boolean: true`) - When set to false, the service selector used for the vault, vault-active, and vault-standby services will not filter on `app.kubernetes.io/instance`. This means they may select pods from outside this deployment of the Helm chart. Does not affect the headless vault-internal service with `ClusterIP: None`.

  - `annotations` (`dictionary: {}`) - This value defines additional annotations for the service. This can either be YAML or a YAML-formatted multi-line templated string.

    ```yaml
    annotations:
      "sample/annotation1": "foo"
      "sample/annotation2": "bar"
    ```

    or

    ```yaml
    annotations: |
      "sample/annotation1": "foo"
      "sample/annotation2": "bar"
    ```

  - `ipFamilyPolicy` (`string: ""`) - The IP family and IP families options are used to set the behaviour in a dual-stack environment. Omitting these values will let the service fall back to whatever the CNI dictates the defaults should be. These are only supported for Kubernetes versions >= 1.23. The service's [supported IP family policy](https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services) can be either `SingleStack`, `PreferDualStack`, or `RequireDualStack`.

  - `serviceIPFamilies` (`array: []`) - Sets the families that should be supported and the order in which they should be applied to ClusterIP as well. Can be IPv4 and/or IPv6.

- `serviceAccount` - Values that configure the Kubernetes service account created for Vault.

  - `create` (`boolean: true`) - If set to true, creates a service account used by Vault.

  - `name` (`string: ""`) - Name of the service account to use. If not set and create is true, a name is generated using the name of the installation (default is `vault`).

  - `createSecret` (`boolean: false`) - Create a Kubernetes Secret object to store a non-expiring token for the service account. Prior to Kubernetes 1.24.0, Kubernetes used to generate this secret for each service account by default. Kubernetes recommends using short-lived tokens from the TokenRequest API or projected volumes instead if possible. For more details, see [service account token secrets](https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets). `server.serviceAccount.create` must be equal to `true` in order to use this feature.

  - `annotations` (`dictionary: {}`) - This value defines additional annotations for the service account. This can either be YAML or a YAML-formatted multi-line templated string.

    ```yaml
    annotations:
      "sample/annotation1": "foo"
      "sample/annotation2": "bar"
    ```

    or

    ```yaml
    annotations: |
      "sample/annotation1": "foo"
      "sample/annotation2": "bar"
    ```

  - `extraLabels` (`dictionary: {}`) - This value defines additional labels for the Vault Server service account.

    ```yaml
    extraLabels:
      "sample/label1": "foo"
      "sample/label2": "bar"
    ```

- `serviceDiscovery` - Values that configure permissions required for Vault Server to automatically discover and join a Vault cluster using pod metadata.

  - `enabled` (`boolean: true`) - Enable or disable a service account role binding with the permissions required for Vault's Kubernetes [`service_registration`](/vault/docs/configuration/service-registration/kubernetes) config option.

- `dataStorage` - This configures the volume used for storing Vault data when not using external storage such as Consul.

  - `enabled` (`boolean: true`) - Enables a persistent volume to be created for storing Vault data when not using an external storage service.

  - `size` (`string: 10Gi`) - Size of the volume to be created for Vault's data storage when not using an external storage service.

  - `storageClass` (`string: null`) - Name of the storage class to use when creating the data storage volume.

  - `mountPath` (`string: /vault/data`) - Configures the path in the Vault pod where the data storage will be mounted.

  - `accessMode` (`string: ReadWriteOnce`) - Type of access mode of the storage device. See the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) for more information.

  - `annotations` (`dictionary: {}`) - This value defines additional annotations to add to the data PVCs. This can either be YAML or a YAML-formatted multi-line templated string.

    ```yaml
    annotations:
      kubernetes.io/my-pvc: foobar
    ```

    or

    ```yaml
    annotations: |
      kubernetes.io/my-pvc: foobar
    ```
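As a hedged illustration of the `dataStorage` options above, a values-file override might look like the sketch below. The storage class name `fast-ssd` and the `20Gi` size are placeholders, not chart defaults; the class must exist in the target cluster.

```yaml
# Illustrative values.yaml fragment -- names are examples only.
server:
  dataStorage:
    enabled: true
    size: 20Gi              # overrides the 10Gi default
    storageClass: fast-ssd  # assumed storage class name
    annotations:
      kubernetes.io/my-pvc: foobar
```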
  - `labels` (`dictionary: {}`) - This value defines additional labels to add to the data PVCs. This can either be YAML or a YAML-formatted multi-line templated string.

  - `persistentVolumeClaimRetentionPolicy` (`dictionary: {}`) - Specifies the Persistent Volume Claim (PVC) [retention policy](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention).

    ```yaml
    persistentVolumeClaimRetentionPolicy:
      whenDeleted: Retain
      whenScaled: Retain
    ```

- `auditStorage` - This configures the volume used for storing Vault's audit logs. See the [Vault documentation](/vault/docs/audit) for more information.

  - `enabled` (`boolean: false`) - Enables a persistent volume to be created for storing Vault's audit logs.

  - `size` (`string: 10Gi`) - Size of the volume to be created for Vault's audit logs.

  - `storageClass` (`string: null`) - Name of the storage class to use when creating the audit storage volume.

  - `mountPath` (`string: /vault/audit`) - Configures the path in the Vault pod where the audit storage will be mounted.

  - `accessMode` (`string: ReadWriteOnce`) - Type of access mode of the storage device.

  - `annotations` (`dictionary: {}`) - This value defines additional annotations to add to the audit PVCs. This can either be YAML or a YAML-formatted multi-line templated string.

    ```yaml
    annotations:
      kubernetes.io/my-pvc: foobar
    ```

    or

    ```yaml
    annotations: |
      kubernetes.io/my-pvc: foobar
    ```

  - `labels` (`dictionary: {}`) - This value defines additional labels to add to the audit PVCs. This can either be YAML or a YAML-formatted multi-line templated string.

- `dev` - This configures `dev` mode for the Vault server.

  - `enabled` (`boolean: false`) - Enables `dev` mode for the Vault server. This mode is useful for experimenting with Vault without needing to unseal.

  - `devRootToken` (`string: "root"`) - Configures the root token for the Vault development server.

  ~> **Security Warning:** Never, ever, ever run a "dev" mode server in production. It is insecure and will lose data on every restart (since it stores data in memory). It is only made for development or experimentation.

- `standalone` - This configures `standalone` mode for the Vault server.

  - `enabled` (`boolean: true`) - Enables `standalone` mode for the Vault server. This mode uses the `file` storage backend and requires a volume for persistence (`dataStorage`).

  - `config` (`string or object`) - A raw string of extra HCL or JSON [configuration](/vault/docs/configuration) for Vault servers. This will be saved as-is into a ConfigMap that is read by the Vault servers. This can be used to add additional configuration that isn't directly exposed by the chart. If an object is provided, it will be written as JSON.

    ```yaml
    # ExtraConfig values are formatted as a multi-line string:
    config: |
      api_addr = "http://POD_IP:8200"

      listener "tcp" {
        tls_disable = 1
        address     = "0.0.0.0:8200"
      }

      storage "file" {
        path = "/vault/data"
      }
    ```

    This can also be set using Helm's `--set` flag (vault-helm v0.1.0 and later), using the following syntax:

    ```shell
    --set server.standalone.config='listener "tcp" { address = "0.0.0.0:8200" }'
    ```

- `ha` - This configures `ha` mode for the Vault server.

  - `enabled` (`boolean: false`) - Enables `ha` mode for the Vault server. This mode uses a highly available backend storage (such as Consul) to store Vault's data. By default this is configured to use [Consul Helm](https://github.com/hashicorp/consul-k8s). For a complete list of storage backends, see the [Vault documentation](/vault/docs/configuration).

  - `apiAddr` (`string`) - Set the API address configuration for a Vault cluster. If set to an empty string, the pod IP address is used.

  - `clusterAddr` (`string: null`) - Set the [`cluster_addr`](/vault/docs/configuration#cluster_addr) configuration for Vault HA. If null, defaults to `https://$(HOSTNAME).…-internal:8201`.

  - `raft` - This configures `raft` integrated storage mode for the Vault server.

    - `enabled` (`boolean: false`) - Enables `raft` integrated storage mode for the Vault server. This mode uses persistent volumes for storage.

    - `setNodeId` (`boolean: false`) - Set the Node Raft ID to the name of the pod.

    - `config` (`string or object`) - A raw string of extra HCL or JSON [configuration](/vault/docs/configuration) for Vault servers. This will be saved as-is into a ConfigMap that is read by the Vault servers. This can be used to add additional configuration that isn't directly exposed by the chart. If an object is provided, it will be written as JSON.

  - `replicas` (`int: 3`) - The number of pods to deploy to create a highly available cluster of Vault servers.

  - `updatePartition` (`int: 0`) - If an updatePartition is specified, all Pods with an ordinal that is greater than or equal to the partition will be updated when the StatefulSet's `.spec.template` is updated. If set to `0`, this disables partition updates. For more information see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#rolling-updates).

  - `config` (`string or object`) - A raw string of extra HCL or JSON [configuration](/vault/docs/configuration) for Vault servers. This will be saved as-is into a ConfigMap that is read by the Vault servers. This can be used to add additional configuration that isn't directly exposed by the chart. If an object is provided, it will be written as JSON.

    ```yaml
    # ExtraConfig values are formatted as a multi-line string:
    config: |
      ui = true

      api_addr = "http://POD_IP:8200"

      listener "tcp" {
        tls_disable = 1
        address     = "0.0.0.0:8200"
      }

      storage "consul" {
        path    = "vault"
        address = "HOST_IP:8500"
      }
    ```

    This can also be set using Helm's `--set` flag (vault-helm v0.1.0 and later), using the following syntax:

    ```shell
    --set server.ha.config='listener "tcp" { address = "0.0.0.0:8200" }'
    ```

  - `disruptionBudget` - Values that configure the disruption budget policy. See the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) for more information.

    - `enabled` (`boolean: true`) - Enables a disruption budget policy to limit the number of pods that are down simultaneously from voluntary disruptions.

    - `maxUnavailable` (`int: null`) - The maximum number of unavailable pods. By default, this will be automatically computed based on the `server.replicas` value to be `(n/2)-1`. If you need to set this to `0`, you will need to add a `--set 'server.disruptionBudget.maxUnavailable=0'` flag to the Helm chart installation command because of a limitation in the Helm templating language.

- `statefulSet` - This configures settings for the Vault StatefulSet.

  - `annotations` (`dictionary: {}`) - This value defines additional annotations to add to the Vault StatefulSet. This can either be YAML or a YAML-formatted multi-line templated string.

    ```yaml
    annotations:
      kubernetes.io/my-statefulset: foobar
    ```

    or

    ```yaml
    annotations: |
      kubernetes.io/my-statefulset: foobar
    ```

- `securityContext` - Set the Pod and container security contexts.

  - `pod` (`dictionary: {}`) - Defines the securityContext for the server Pods, as YAML or a YAML-formatted multi-line templated string. Default if not specified and `global.openshift: false`:

    ```yaml
    runAsNonRoot: true
    runAsGroup: …
    runAsUser: …
    fsGroup: …
    ```

    Defaults to empty if not specified and `global.openshift: true`.

  - `container` (`dictionary: {}`) - Defines the securityContext for the server containers, as YAML or a YAML-formatted multi-line templated string. Default if not specified and `global.openshift: false`:

    ```yaml
    allowPrivilegeEscalation: false
    ```

    Defaults to empty if not specified and `global.openshift: true`.

- `ui` - Values that configure the Vault UI.

  - `enabled` (`boolean: false`) - If true, the UI will be enabled. The UI will only be enabled on Vault servers. If `server.enabled` is false, then this setting has no effect. To expose the UI in some way, you must configure `ui.service`.

  - `serviceType` (`string: ClusterIP`) - The service type to register. This defaults to `ClusterIP`. The available service types are documented on [the Kubernetes website](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types).

  - `publishNotReadyAddresses` (`boolean: true`) - If set to true, will route traffic to Vault pods that aren't ready (if they're sealed or uninitialized).

  - `activeVaultPodOnly` (`boolean: false`) - If set to true, the UI service will only route to the active pod in a Vault HA cluster.

  - `serviceNodePort` (`int: null`) - Sets the Node Port value when using `serviceType: NodePort` on the Vault UI service.

  - `externalPort` (`int: 8200`) - Sets the external port value of the service.

  - `targetPort` (`int: 8200`) - Sets the target port value of the service.

  - `serviceIPFamilyPolicy` (`string: ""`) - The IP family and IP families options are used to set the behaviour in a dual-stack environment. Omitting these values will let the service fall back to whatever the CNI dictates the defaults should be. These are only supported for Kubernetes versions >= 1.23. The service's [supported IP family policy](https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services) can be either `SingleStack`, `PreferDualStack`, or `RequireDualStack`.

  - `serviceIPFamilies` (`array: []`) - Sets the families that should be supported and the order in which they should be applied to ClusterIP as well. Can be IPv4 and/or IPv6.

  - `externalTrafficPolicy` (`string: "Cluster"`) - The [`externalTrafficPolicy`](https://kubernetes.io/docs/concepts/services-networking/service/#external-traffic-policy) can be set to either `Cluster` or `Local` and is only valid for LoadBalancer and NodePort service types.

  - `loadBalancerSourceRanges` (`array: []`) - This value defines additional source CIDRs when using `serviceType: LoadBalancer`.

    ```yaml
    loadBalancerSourceRanges:
      - 10.0.0.0/16
      - 120.78.23.3/32
    ```

  - `loadBalancerIP` (`string: ""`) - This value defines the IP address of the load balancer when using `serviceType: LoadBalancer`.

  - `annotations` (`dictionary: {}`) - This value defines additional annotations for the UI service. This can either be YAML or a YAML-formatted multi-line templated string.

    ```yaml
    annotations:
      "sample/annotation1": "foo"
      "sample/annotation2": "bar"
    ```

    or

    ```yaml
    annotations: |
      "sample/annotation1": "foo"
      "sample/annotation2": "bar"
    ```

- `csi` - Values that configure running the Vault CSI Provider.

  - `enabled` (`boolean: false`) - When set to `true`, the Vault CSI Provider daemonset will be created.

  - `image` - Values that configure the Vault CSI Provider Docker image.

    - `repository` (`string: "hashicorp/vault-csi-provider"`) - The name of the Docker image for the Vault CSI Provider.

    - `tag` (`string: "1.5.0"`) - The tag of the Docker image for the Vault CSI Provider. This should be pinned to a specific version when running in production. Otherwise, other changes to the chart may inadvertently upgrade your CSI provider.

    - `pullPolicy` (`string: "IfNotPresent"`) - The pull policy for container images. The default pull policy is `IfNotPresent`, which causes the Kubelet to skip pulling an image if it already exists locally.

  - `volumes` (`array: null`) - A list of volumes made available to all containers. This takes standard Kubernetes volume definitions.

    ```yaml
    volumes:
      - name: plugins
        emptyDir: {}
    ```

  - `volumeMounts` (`array: null`) - A list of volume mounts made available to all containers. This takes standard Kubernetes volume mount definitions.

    ```yaml
    volumeMounts:
      - mountPath: /usr/local/libexec/vault
        name: plugins
        readOnly: true
    ```

  - `resources` (`dictionary: {}`) - The resource requests and limits (CPU, memory, etc.) for each of the CSI containers. This should be a YAML dictionary of a Kubernetes [resource](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) object. If this isn't specified, then the pods won't request any specific amount of resources, which limits the ability for Kubernetes to make efficient use of compute resources. Setting this is highly recommended.

    ```yaml
    resources:
      requests:
        memory: 10Gi
      limits:
        memory: 10Gi
    ```

  - `hmacSecretName` (`string: ""`) - Override the default secret name for the CSI Provider's HMAC key used for generating secret versions.

  - `hostNetwork` (`bool: false`) - Set the `hostNetwork` parameter on the CSI Provider pods to avoid the need of a dedicated pod IP.

  - `daemonSet` - Values that configure the Vault CSI Provider daemonSet.

    - `updateStrategy` - Values that configure the Vault CSI Provider update strategy.

      - `type` (`string: "RollingUpdate"`) - The [type of update strategy](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies) to be used when the daemonset is updated using Helm upgrades.

      - `maxUnavailable` (`int: null`) - The maximum number of unavailable pods during an upgrade.

    - `annotations` (`dictionary: {}`) - This value defines additional annotations to add to the Vault CSI Provider daemonset. This can either be YAML or a YAML-formatted multi-line templated string.

      ```yaml
      annotations:
        foo: bar
      ```

      or

      ```yaml
      annotations: |
        foo: bar
      ```

    - `extraLabels` (`dictionary: {}`) - This value defines additional labels for the CSI provider daemonset.

    - `providersDir` (`string: "/etc/kubernetes/secrets-store-csi-providers"`) - Provider host path (must match the CSI provider's path).

    - `kubeletRootDir` (`string: "/var/lib/kubelet"`) - Kubelet host path.

    - `securityContext` - Security context for the pod template and container in the CSI provider daemonSet.

      - `pod` (`dictionary: {}`) - Pod-level securityContext. May be specified as YAML or a YAML-formatted multi-line templated string.

      - `container` (`dictionary: {}`) - Container-level securityContext. May be specified as YAML or a YAML-formatted multi-line templated string.

  - `pod` - Values that configure the Vault CSI Provider pod.

    - `annotations` (`dictionary: {}`) - This value defines additional annotations to add to the Vault CSI Provider pods. This can either be YAML or a YAML-formatted multi-line templated string.

      ```yaml
      annotations:
        foo: bar
      ```

      or

      ```yaml
      annotations: |
        foo: bar
      ```

    - `extraLabels` (`dictionary: {}`) - This value defines additional labels for CSI provider pods.

    - `nodeSelector` (`dictionary: {}`) - [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) labels for CSI pod assignment, formatted as a multi-line string or YAML map.

      ```yaml
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      ```

    - `affinity` (`dictionary: {}`) - This should be either a multi-line string or YAML matching the PodSpec's affinity field.

    - `tolerations` (`array: []`) - Toleration Settings for CSI pods. This should be a multi-line string or YAML matching the Toleration array in a PodSpec.

    - `priorityClassName` (`string: ""`) - Priority class for CSI Provider pods.

  - `serviceAccount` - Values that configure the Vault CSI Provider's serviceaccount.

    - `annotations` (`dictionary: {}`) - This value defines additional annotations for the serviceAccount definition. This can either be YAML or a YAML-formatted multi-line templated string.

      ```yaml
      annotations:
        foo: bar
      ```

      or

      ```yaml
      annotations: |
        foo: bar
      ```

    - `extraLabels` (`dictionary: {}`) - This value defines additional labels for the CSI provider service account.

  - `readinessProbe` - Values that configure the readiness probe for the Vault CSI Provider pods.

    - `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.

    - `initialDelaySeconds` (`int: 5`) - When set to a value, configures the number of seconds after the container has started before the probe initiates.

    - `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.

    - `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.

    - `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.

  - `livenessProbe` - Values that configure the liveness probe for the Vault CSI Provider pods.

    - `initialDelaySeconds` (`int: 5`) - Sets the initial delay of the liveness probe when the container starts.

    - `failureThreshold` (`int: 2`) - When set to a value, configures how many probe failures will be tolerated by Kubernetes.

    - `periodSeconds` (`int: 5`) - When set to a value, configures how often (in seconds) to perform the probe.

    - `successThreshold` (`int: 1`) - When set to a value, configures the minimum consecutive successes for the probe to be considered successful after having failed.

    - `timeoutSeconds` (`int: 3`) - When set to a value, configures the number of seconds after which the probe times out.

  - `logLevel` (`string: "info"`) - Configures the log level for the Vault CSI provider. Supported log levels include: `trace`, `debug`, `info`, `warn`, `error`, and `off`.

  - `debug` (`bool: false`) - Deprecated: set `logLevel` to `debug` instead. When set to true, enables debug logging on the Vault CSI Provider daemonset.

  - `extraArgs` (`array: []`) - The extra arguments to be applied to the CSI pod startup command. See [here](/vault/docs/platform/k8s/csi/configurations#command-line-arguments) for available flags.

  - `agent` - Configures the Vault Agent sidecar for the CSI Provider.

    - `enabled` (`bool: true`) - Whether to enable the agent sidecar for the CSI provider.

    - `extraArgs` (`array: []`) - The extra arguments to be applied to the agent startup command.

    - `image` - Values that configure the Vault Agent sidecar image for the CSI Provider.

      - `pullPolicy` (`string: "IfNotPresent"`) - The pull policy for the agent image. The default pull policy is `IfNotPresent`, which causes the Kubelet to skip pulling an image if it already exists.

      - `repository` (`string: "hashicorp/vault"`) - The name of the Docker image for the Vault Agent sidecar. This should be set to the official Vault Docker image.

      - `tag` (`string: "1.18.1"`) - The tag of the Vault Docker image to use for the Vault Agent sidecar.

    - `logFormat` (`string: "standard"`)

    - `logLevel` (`string: "info"`)

    - `resources` (`dictionary: {}`) - The resource requests and limits (CPU, memory, etc.) for the agent. This should be a YAML dictionary of a Kubernetes [resource](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) object.

      ```yaml
      resources:
        requests:
          memory: 256Mi
          cpu: 250m
        limits:
          memory: 256Mi
          cpu: 250m
      ```

- `serverTelemetry` - Values that configure metrics and telemetry. Enabling these features requires setting the `telemetry {}` stanza in the Vault configuration. See the [telemetry](/vault/docs/configuration/telemetry) [docs](/vault/docs/internals/telemetry) for more on the Vault configuration.

  If authorization is not set for authenticating to Vault's metrics endpoint, the following Vault server `telemetry {}` config must be included in the `listener "tcp" {}` stanza of the Vault configuration:

  ```yaml
  listener "tcp" {
    tls_disable = 1
    address = "0.0.0.0:8200"
    telemetry {
      unauthenticated_metrics_access = "true"
    }
  }
  ```

  In addition, a top-level `telemetry {}` stanza must also be included in the Vault configuration, such as:

  ```yaml
  telemetry {
    prometheus_retention_time = "30s"
    disable_hostname = true
  }
  ```

  - `serviceMonitor` - Values that configure monitoring the Vault server.

    - `enabled` (`boolean: false`) - When set to `true`, enables deployment of the Vault Server ServiceMonitor CustomResource. The Prometheus operator *must* be installed before enabling this feature. If not, the chart will fail to install due to missing CustomResourceDefinitions provided by the operator. Instructions on how to install the Helm chart can be found [here](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack). More information can be found in [these](https://github.com/prometheus-operator/prometheus-operator) [repositories](https://github.com/prometheus-operator/kube-prometheus).

    - `selectors` (`dictionary: {}`) - Selector labels to add to the ServiceMonitor.

    - `interval` (`string: "30s"`) - Interval at which Prometheus scrapes metrics.

    - `scrapeTimeout` (`string: "10s"`) - Timeout for Prometheus scrapes.

    - `tlsConfig` (`dictionary: {}`) - `tlsConfig` used for scraping the Vault metrics API. See the Prometheus [API reference](https://prometheus-operator.dev/docs/api-reference/api/#monitoring.coreos.com/v1.TLSConfig) for more details.

      ```yaml
      tlsConfig:
        ca:
          secret:
            name: vault-metrics-client
            key: ca.crt
      ```

    - `authorization` (`dictionary: {}`) - Authorization used for scraping the Vault metrics API. See the Prometheus [API reference](https://prometheus-operator.dev/docs/api-reference/api/#monitoring.coreos.com/v1.SafeAuthorization) for more details.

      ```yaml
      authorization:
        credentials:
          name: vault-metrics-client
          key: token
      ```

  - `prometheusRules` - Values that configure Prometheus rules.

    - `enabled` (`boolean: false`) - Deploy the PrometheusRule custom resource for AlertManager-based alerts. Requires that AlertManager is properly deployed.

    - `selectors` (`dictionary: {}`) - Selector labels to add to the Prometheus rules.

    - `rules` (`array: []`) - Prometheus rules to create. For example:

      ```yaml
      rules:
        - alert: vault-HighResponseTime
          annotations:
            message: The response time of Vault is over 500ms on average over the last 5 minutes.
          expr: vault_core_handle_request{quantile="0.5", namespace="mynamespace"} > 500
          for: 5m
          labels:
            severity: warning
        - alert: vault-HighResponseTime
          annotations:
            message: The response time of Vault is over 1s on average over the last 5 minutes.
          expr: vault_core_handle_request{quantile="0.5", namespace="mynamespace"} > 1000
          for: 5m
          labels:
            severity: critical
      ```
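Pulling the telemetry options above together, the sketch below shows one way the chart overrides could look. It assumes the Prometheus operator is already installed; the alert name `VaultSealed` and its expression are illustrative, not chart defaults.

```yaml
# Hypothetical overrides -- rule name, metric, and threshold are examples only.
serverTelemetry:
  serviceMonitor:
    enabled: true        # requires the Prometheus operator CRDs
    interval: 30s
    scrapeTimeout: 10s
  prometheusRules:
    enabled: true
    rules:
      - alert: VaultSealed                # hypothetical rule
        expr: vault_core_unsealed{namespace="mynamespace"} != 1
        for: 5m
        labels:
          severity: critical
```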
{"questions":"vault Vault enterprise license management You can use this Helm chart to deploy Vault Enterprise by following a few extra steps around licensing layout docs Vault Helm supports deploying Vault Enterprise including license autoloading page title Vault Enterprise License Management Kubernetes","answers":"---\nlayout: docs\npage_title: Vault Enterprise License Management - Kubernetes\ndescription: >-\n  Vault Helm supports deploying Vault Enterprise, including license autoloading.\n---\n\n# Vault enterprise license management\n\nYou can use this Helm chart to deploy Vault Enterprise by following a few extra steps around licensing.\n\n~> **Note:** As of Vault Enterprise 1.8, the license must be specified via HCL configuration or environment variables on startup, unless the Vault cluster was created with an older Vault version and the license was stored. More information is available in the [Vault Enterprise License docs](\/vault\/docs\/enterprise\/license).\n\n@include 'helm\/version.mdx'\n\n## Vault enterprise 1.8+\n\n### License install\n\nFirst create a Kubernetes secret using the contents of your license file. For example, the following commands create a secret with the name `vault-ent-license` and key `license`:\n\n```bash\nsecret=$(cat 1931d1f4-bdfd-6881-f3f5-19349374841f.hclic)\nkubectl create secret generic vault-ent-license --from-literal=\"license=${secret}\"\n```\n\n-> **Note:** If you cannot find your `.hclic` file, please contact your sales team or Technical Account Manager.\n\nIn your chart overrides, set the values of [`server.image`](\/vault\/docs\/platform\/k8s\/helm\/configuration#image-2) to one of the enterprise [release tags](https:\/\/hub.docker.com\/r\/hashicorp\/vault-enterprise\/tags). 
Also set the name of the secret you just created in [`server.enterpriseLicense`](\/vault\/docs\/platform\/k8s\/helm\/configuration#enterpriselicense).\n\n```yaml\n# config.yaml\nserver:\n  image:\n    repository: hashicorp\/vault-enterprise\n    tag: 1.18.1-ent\n  enterpriseLicense:\n    secretName: vault-ent-license\n```\n\nNow run `helm install`:\n\n```shell-session\n$ helm install hashicorp hashicorp\/vault -f config.yaml\n```\n\nOnce the cluster is [initialized and unsealed](\/vault\/docs\/platform\/k8s\/helm\/run), you may check the license status using the `vault license get` command:\n\n```shell\nkubectl exec -ti vault-0 -- vault license get\n```\n\n### License update\n\nTo update the autoloaded license in Vault, you may do the following:\n\n- Update your license secret with the new license data\n\n```shell\nnew_secret=$(base64 < .\/new-license.hclic | tr -d '\\n')\n\ncat > patch-license.yaml <<EOF\ndata:\n  license: ${new_secret}\nEOF\n\nkubectl patch secret vault-ent-license --patch \"$(cat patch-license.yaml)\"\n```\n\n- Wait until [`vault license inspect`](\/vault\/docs\/commands\/license\/inspect) shows the updated license\n\n  Since the `inspect` command is reading the license file from the mounted secret, this tells you when the updated secret has been propagated to the mount on the Vault pod.\n\n```shell\nkubectl exec vault-0 -- vault license inspect\n```\n\n- Reload Vault's license config\n\n  You may use the [`sys\/config\/reload\/license` API endpoint](\/vault\/api-docs\/system\/config-reload#reload-license-file):\n\n```shell\nkubectl exec vault-0 -- vault write -f sys\/config\/reload\/license\n```\n\nOr you may issue an HUP signal directly to Vault:\n\n```shell\nkubectl exec vault-0 -- pkill -HUP vault\n```\n\n- Verify that [`vault license get`](\/vault\/docs\/commands\/license\/get) shows the updated license\n\n```shell\nkubectl exec vault-0 -- vault license get\n```\n\n## Vault enterprise prior to 1.8\n\nIn your chart overrides, set the values 
of `server.image` to one of the enterprise [release tags](https:\/\/hub.docker.com\/r\/hashicorp\/vault-enterprise\/tags). Install the chart, and initialize and unseal vault as described in [Running Vault](\/vault\/docs\/platform\/k8s\/helm\/run).\n\nAfter Vault has been initialized and unsealed, set up a port-forward tunnel to the Vault Enterprise cluster:\n\n```shell\nkubectl port-forward vault-0 8200:8200\n```\n\nNext, in a separate terminal, create a `payload.json` file that contains the license key like this example:\n\n```json\n{\n  \"text\": \"01ABCDEFG...\"\n}\n```\n\nFinally, using curl, apply the license key to the Vault API:\n\n```bash\ncurl \\\n  --header \"X-Vault-Token: VAULT_LOGIN_TOKEN_HERE\" \\\n  --request POST \\\n  --data @payload.json \\\n  http:\/\/127.0.0.1:8200\/v1\/sys\/license\n```\n\nTo verify that the license installation worked correctly, using `curl`, run the following:\n\n```shell\ncurl \\\n  --header \"X-Vault-Token: VAULT_LOGIN_TOKEN_HERE\" \\\n  http:\/\/127.0.0.1:8200\/v1\/sys\/license\n```","site":"vault"}
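Before patching the secret in the license-update flow above, the encode step can be sanity-checked locally. This is a minimal sketch (the license string is illustrative, not a real `.hclic` payload) of the base64 round-trip that the Secret's `data` field relies on:

```shell
# Illustrative license string; a real one comes from your .hclic file.
license='01ABCDEFG...'

# Kubernetes Secret data must be base64-encoded with no embedded
# newlines, matching `base64 < ./new-license.hclic | tr -d '\n'`.
encoded=$(printf '%s' "$license" | base64 | tr -d '\n')

# Round-trip: decoding should recover the original license text.
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$license" ] && echo "round-trip OK"
```

If the round-trip check fails, the secret patch would load a corrupted license, so it is worth running before `kubectl patch`.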
{"questions":"vault sidebar current docs platform k8s terraform page title Configure Vault Helm using Terraform Configuring Vault helm with terraform layout docs Describes how to configure the Vault Helm chart using Terraform","answers":"---\nlayout: 'docs'\npage_title: 'Configure Vault Helm using Terraform'\nsidebar_current: 'docs-platform-k8s-terraform'\ndescription: |-\n  Describes how to configure the Vault Helm chart using Terraform\n---\n\n# Configuring Vault helm with terraform\n\nTerraform may also be used to configure and deploy the Vault Helm chart, by using the [Helm provider](https:\/\/registry.terraform.io\/providers\/hashicorp\/helm\/latest\/docs).\n\nFor example, to configure the chart to deploy [HA Vault with integrated storage (raft)](\/vault\/docs\/platform\/k8s\/helm\/examples\/ha-with-raft), the values overrides can be set on the command-line, in a values yaml file, or with a Terraform configuration:\n\n<CodeTabs>\n<CodeBlockConfig>\n\n```shell-session\n$ helm install vault hashicorp\/vault \\\n  --set='server.ha.enabled=true' \\\n  --set='server.ha.raft.enabled=true'\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig>\n\n```yaml\nserver:\n  ha:\n    enabled: true\n    raft:\n      enabled: true\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig>\n\n```hcl\nprovider \"helm\" {\n  kubernetes {\n    config_path = \"~\/.kube\/config\"\n  }\n}\n\nresource \"helm_release\" \"vault\" {\n  name       = \"vault\"\n  repository = \"https:\/\/helm.releases.hashicorp.com\"\n  chart      = \"vault\"\n\n  set {\n    name  = \"server.ha.enabled\"\n    value = \"true\"\n  }\n  set {\n    name  = \"server.ha.raft.enabled\"\n    value = \"true\"\n  }\n}\n```\n\n<\/CodeBlockConfig>\n<\/CodeTabs>\n\nThe values file can also be used directly in the Terraform configuration with the [`values` directive](https:\/\/registry.terraform.io\/providers\/hashicorp\/helm\/latest\/docs\/resources\/release#values).\n\n## Further examples\n\n### Vault config as a multi-line 
string\n\n<CodeTabs>\n<CodeBlockConfig>\n\n```yaml\nserver:\n  ha:\n    enabled: true\n    raft:\n      enabled: true\n      setNodeId: true\n      config: |\n        ui = false\n\n        listener \"tcp\" {\n          tls_disable = 1\n          address = \"[::]:8200\"\n          cluster_address = \"[::]:8201\"\n        }\n\n        storage \"raft\" {\n          path    = \"\/vault\/data\"\n        }\n\n        service_registration \"kubernetes\" {}\n\n        seal \"awskms\" {\n          region     = \"us-west-2\"\n          kms_key_id = \"alias\/my-kms-key\"\n        }\n```\n\n<\/CodeBlockConfig>\n<CodeBlockConfig>\n\n```hcl\nresource \"helm_release\" \"vault\" {\n  name       = \"vault\"\n  repository = \"https:\/\/helm.releases.hashicorp.com\"\n  chart      = \"vault\"\n\n  set {\n    name  = \"server.ha.enabled\"\n    value = \"true\"\n  }\n  set {\n    name  = \"server.ha.raft.enabled\"\n    value = \"true\"\n  }\n  set {\n    name  = \"server.ha.raft.setNodeId\"\n    value = \"true\"\n  }\n  set {\n    name  = \"server.ha.raft.config\"\n    value = <<EOT\nui = false\n\nlistener \"tcp\" {\n  tls_disable = 1\n  address = \"[::]:8200\"\n  cluster_address = \"[::]:8201\"\n}\n\nstorage \"raft\" {\n  path    = \"\/vault\/data\"\n}\n\nservice_registration \"kubernetes\" {}\n\nseal \"awskms\" {\n  region     = \"us-west-2\"\n  kms_key_id = \"alias\/my-kms-key\"\n}\nEOT\n  }\n}\n```\n\n<\/CodeBlockConfig>\n<\/CodeTabs>\n\n### Lists of volumes and volumeMounts\n\n<CodeTabs>\n<CodeBlockConfig>\n\n```yaml\nserver:\n  volumes:\n    - name: userconfig-my-gcp-iam\n      secret:\n        defaultMode: 420\n        secretName: my-gcp-iam\n\n  volumeMounts:\n    - mountPath: \/vault\/userconfig\/my-gcp-iam\n      name: userconfig-my-gcp-iam\n      readOnly: true\n```\n\n<\/CodeBlockConfig>\n\n<CodeBlockConfig>\n\n```hcl\nresource \"helm_release\" \"vault\" {\n  name       = \"vault\"\n  repository = \"https:\/\/helm.releases.hashicorp.com\"\n  chart      = \"vault\"\n\n  set 
{\n    name  = \"server.volumes[0].name\"\n    value = \"userconfig-my-gcp-iam\"\n  }\n  set {\n    name  = \"server.volumes[0].secret.defaultMode\"\n    value = \"420\"\n  }\n  set {\n    name  = \"server.volumes[0].secret.secretName\"\n    value = \"my-gcp-iam\"\n  }\n\n  set {\n    name  = \"server.volumeMounts[0].mountPath\"\n    value = \"\/vault\/userconfig\/my-gcp-iam\"\n  }\n  set {\n    name  = \"server.volumeMounts[0].name\"\n    value = \"userconfig-my-gcp-iam\"\n  }\n  set {\n    name  = \"server.volumeMounts[0].readOnly\"\n    value = \"true\"\n  }\n}\n```\n\n<\/CodeBlockConfig>\n<\/CodeTabs>\n\n### Annotations\n\nAnnotations can be set as a YAML map:\n\n<CodeTabs>\n\n<CodeBlockConfig>\n\n```yaml\nserver:\n  ingress:\n    annotations:\n      service.beta.kubernetes.io\/azure-load-balancer-internal: true\n      service.beta.kubernetes.io\/azure-load-balancer-internal-subnet: apps-subnet\n```\n<\/CodeBlockConfig>\n\n<CodeBlockConfig>\n\n```hcl\n  set {\n    name = \"server.ingress.annotations.service\\\\.beta\\\\.kubernetes\\\\.io\/azure-load-balancer-internal\"\n    value = \"true\"\n  }\n\n  set {\n    name = \"server.ingress.annotations.service\\\\.beta\\\\.kubernetes\\\\.io\/azure-load-balancer-internal-subnet\"\n    value = \"apps-subnet\"\n  }\n```\n\n<\/CodeBlockConfig>\n<\/CodeTabs>\n\nor as a multi-line string:\n\n<CodeTabs>\n<CodeBlockConfig>\n\n```yaml\nserver:\n  ingress:\n    annotations: |\n      service.beta.kubernetes.io\/azure-load-balancer-internal: true\n      service.beta.kubernetes.io\/azure-load-balancer-internal-subnet: apps-subnet\n```\n\n<\/CodeBlockConfig>\n<CodeBlockConfig>\n\n```hcl\n  set {\n    name = \"server.ingress.annotations\"\n    value = yamlencode({\n      \"service.beta.kubernetes.io\/azure-load-balancer-internal\": \"true\"\n      \"service.beta.kubernetes.io\/azure-load-balancer-internal-subnet\": \"apps-subnet\"\n    })\n    type = \"auto\"\n  
}\n```\n\n<\/CodeBlockConfig>\n<\/CodeTabs>","site":"vault"}
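The backslash-escaping in the annotation `set` names above is easy to get wrong. As a sketch, the following produces the single-backslash form that Helm ultimately parses; in HCL source each backslash is then written doubled, which is why the example shows `\\.`:

```shell
# Dots inside a map key must be escaped for Helm --set (and for the
# Terraform provider's `set` name), or they are parsed as separators.
key='service.beta.kubernetes.io/azure-load-balancer-internal'
escaped=$(printf '%s' "$key" | sed 's/\./\\./g')
echo "$escaped"
# service\.beta\.kubernetes\.io/azure-load-balancer-internal
```

The `yamlencode` variant shown above avoids this escaping entirely, which is why it is often the less error-prone choice for annotation maps.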
{"questions":"vault page title Running Vault OpenShift Run Vault on OpenShift layout docs pure OpenShift workloads this enables Vault to also exist purely within Vault can run directly on OpenShift in various configurations For Kubernetes","answers":"---\nlayout: docs\npage_title: Running Vault - OpenShift\ndescription: >-\n  Vault can run directly on OpenShift in various configurations.  For\n  pure-OpenShift workloads, this enables Vault to also exist purely within\n  Kubernetes.\n---\n\n# Run Vault on OpenShift\n\n@include 'helm\/version.mdx'\n\nThe following documentation describes installing, running, and using\nVault and **Vault Agent Injector** on OpenShift.\n\n~> **Note:** We recommend using the Vault agent injector on OpenShift\ninstead of the Secrets Store CSI driver. OpenShift\n[does not recommend](https:\/\/docs.openshift.com\/container-platform\/4.9\/storage\/persistent_storage\/persistent-storage-hostpath.html)\nusing `hostPath` mounting in production or\n[certify Helm charts](https:\/\/github.com\/redhat-certification\/chart-verifier\/blob\/dbf89bff2d09142e4709d689a9f4037a739c2244\/docs\/helm-chart-checks.md#table-2-helm-chart-default-checks)\nusing CSI objects because pods must run as privileged. If you would like to run the Secrets Store\nCSI driver on a development or testing cluster, refer to\n[installation instructions for the Vault CSI provider](\/vault\/docs\/platform\/k8s\/csi\/installation).\n\n## Requirements\n\nThe following are required to install Vault and Vault Agent Injector\non OpenShift:\n\n- Cluster Admin privileges to bind the `auth-delegator` role to Vault's service account\n- Helm v3.6+\n- OpenShift 4.3+\n- Vault Helm v0.6.0+\n- Vault K8s v0.4.0+\n\n~> **Note:** Support for Consul on OpenShift is available since [Consul 1.9](https:\/\/www.hashicorp.com\/blog\/introducing-openshift-support-for-consul-on-kubernetes). 
However, for highly available\ndeployments, Raft integrated storage is recommended.\n\n## Additional resources\n\nThe documentation, configuration and examples for Vault Helm and Vault K8s Agent Injector\nare applicable to OpenShift installations. For more examples see the existing documentation:\n\n- [Vault Helm documentation](\/vault\/docs\/platform\/k8s\/helm)\n- [Vault K8s documentation](\/vault\/docs\/platform\/k8s\/injector)\n\n## Helm chart\n\nThe [Vault Helm chart](https:\/\/github.com\/hashicorp\/vault-helm)\nis the recommended way to install and configure Vault on OpenShift.\nIn addition to running Vault itself, the Helm chart is the primary\nmethod for installing and configuring Vault Agent Injection Mutating\nWebhook.\n\nWhile the Helm chart automatically sets up complex resources and exposes the\nconfiguration to meet your requirements, it **does not automatically operate\nVault.** You are still responsible for learning how to monitor, backup, upgrade,\netc. the Vault cluster.\n\n~> **Security Warning:** By default, the chart runs in standalone mode. This\nmode uses a single Vault server with a file storage backend. This is a less\nsecure and less resilient installation that is **NOT** appropriate for a\nproduction setup. 
It is highly recommended to use a [properly secured Kubernetes\ncluster](https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/securing-a-cluster\/),\n[learn the available configuration\noptions](\/vault\/docs\/platform\/k8s\/helm\/configuration), and read the [production deployment\nchecklist](\/vault\/docs\/platform\/k8s\/helm\/run#architecture).\n\n## How-To\n\n### Install Vault\n\nTo use the Helm chart, add the Hashicorp helm repository and check that you have\naccess to the chart:\n\n@include 'helm\/repo.mdx'\n\n-> **Important:** The Helm chart is new and under significant development.\nPlease always run Helm with `--dry-run` before any install or upgrade to verify\nchanges.\n\nUse `helm install` to install the latest release of the Vault Helm chart.\n\n```shell-session\n$ helm install vault hashicorp\/vault\n```\n\nOr install a specific version of the chart.\n\n@include 'helm\/install.mdx'\n\nThe `helm install` command accepts parameters to override default configuration\nvalues inline or defined in a file. For all OpenShift deployments, `global.openshift`\nshould be set to `true`.\n\nOverride the `server.dev.enabled` configuration value:\n\n```shell-session\n$ helm install vault hashicorp\/vault \\\n    --set \"global.openshift=true\" \\\n    --set \"server.dev.enabled=true\"\n```\n\nOverride all the configuration found in a file:\n\n```shell-session\n$ cat override-values.yml\nglobal:\n  openshift: true\n\nserver:\n  ha:\n    enabled: true\n    replicas: 5\n##\n$ helm install vault hashicorp\/vault \\\n    --values override-values.yml\n```\n\n#### Dev mode\n\nThe Helm chart may run a Vault server in development. 
This installs a single\nVault server with a memory storage backend.\n\n-> **Dev mode:** This is ideal for learning and demonstration environments but\nNOT recommended for a production environment.\n\nInstall the latest Vault Helm chart in development mode.\n\n```shell-session\n$ helm install vault hashicorp\/vault \\\n    --set \"global.openshift=true\" \\\n    --set \"server.dev.enabled=true\"\n```\n\n#### Highly available raft mode\n\nThe following creates a Vault cluster using the Raft integrated storage backend.\n\nInstall the latest Vault Helm chart in HA Raft mode:\n\n```shell-session\n$ helm install vault hashicorp\/vault \\\n  --set='global.openshift=true' \\\n  --set='server.ha.enabled=true' \\\n  --set='server.ha.raft.enabled=true'\n```\n\nNext, initialize and unseal `vault-0` pod:\n\n```shell-session\n$ oc exec -ti vault-0 -- vault operator init\n$ oc exec -ti vault-0 -- vault operator unseal\n```\n\nFinally, join the remaining pods to the Raft cluster and unseal them. The pods\nwill need to communicate directly so we'll configure the pods to use the internal\nservice provided by the Helm chart:\n\n```shell-session\n$ oc exec -ti vault-1 -- vault operator raft join http:\/\/vault-0.vault-internal:8200\n$ oc exec -ti vault-1 -- vault operator unseal\n\n$ oc exec -ti vault-2 -- vault operator raft join http:\/\/vault-0.vault-internal:8200\n$ oc exec -ti vault-2 -- vault operator unseal\n```\n\nTo verify if the Raft cluster has successfully been initialized, run the following.\n\nFirst, login using the `root` token on the `vault-0` pod:\n\n```shell-session\n$ oc exec -ti vault-0 -- vault login\n```\n\nNext, list all the raft peers:\n\n```shell-session\n$ oc exec -ti vault-0 -- vault operator raft list-peers\n\nNode                                    Address                        State       Voter\n----                                    -------                        -----       -----\na1799962-8711-7f28-23f0-cea05c8a527d    vault-0.vault-internal:8201    
leader      true\ne6876c97-aaaa-a92e-b99a-0aafab105745    vault-1.vault-internal:8201    follower    true\n4b5d7383-ff31-44df-e008-6a606828823b    vault-2.vault-internal:8201    follower    true\n```\n\nVault with integrated storage (Raft) is now ready to use!\n\n#### External mode\n\nThe Helm chart may be run in external mode. This installs no Vault server and\nrelies on a network addressable Vault server to exist.\n\nInstall the latest Vault Helm chart in external mode.\n\n```shell-session\n$ helm install vault hashicorp\/vault \\\n    --set \"global.openshift=true\" \\\n    --set \"injector.externalVaultAddr=http:\/\/external-vault:8200\"\n```\n\n## Tutorial\n\nRefer to the [Integrate a Kubernetes Cluster with an\nExternal Vault](\/vault\/tutorials\/kubernetes\/kubernetes-external-vault)\ntutorial to learn how to use an external Vault within a Kubernetes cluster.","site":"vault"}
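The raft join/unseal sequence above follows the same pattern for every follower pod. As a sketch, this generalizes it to N replicas by printing the `oc` commands rather than running them (the replica count is an assumption; nothing here touches a cluster):

```shell
# Generate the raft join + unseal commands for vault-1 .. vault-(N-1),
# all joining via the first pod's internal service address.
replicas=3
i=1
while [ "$i" -lt "$replicas" ]; do
  echo "oc exec -ti vault-$i -- vault operator raft join http://vault-0.vault-internal:8200"
  echo "oc exec -ti vault-$i -- vault operator unseal"
  i=$((i + 1))
done
```

Piping the output through `sh` (or copy-pasting it) reproduces the two-pod example from the docs; note `vault operator unseal` still prompts for an unseal key on each pod.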
{"questions":"vault Run Vault on kubernetes pure Kubernetes workloads this enables Vault to also exist purely within Vault can run directly on Kubernetes in various configurations For page title Running Vault Kubernetes layout docs Kubernetes","answers":"---\nlayout: docs\npage_title: Running Vault - Kubernetes\ndescription: >-\n  Vault can run directly on Kubernetes in various configurations.  For\n  pure-Kubernetes workloads, this enables Vault to also exist purely within\n  Kubernetes.\n---\n\n# Run Vault on kubernetes\n\nVault works with Kubernetes in various modes: `dev`, `standalone`, `ha`,\nand `external`.\n\n@include 'helm\/version.mdx'\n\n## Helm chart\n\nThe [Vault Helm chart](https:\/\/github.com\/hashicorp\/vault-helm)\nis the recommended way to install and configure Vault on Kubernetes.\nIn addition to running Vault itself, the Helm chart is the primary\nmethod for installing and configuring Vault to integrate with other\nservices such as Consul for High Availability (HA) deployments.\n\nWhile the Helm chart automatically sets up complex resources and exposes the\nconfiguration to meet your requirements, it **does not automatically operate\nVault.** You are still responsible for learning how to monitor, backup, upgrade,\netc. the Vault cluster.\n\n~> **Security Warning:** By default, the chart runs in standalone mode. This\nmode uses a single Vault server with a file storage backend. This is a less\nsecure and less resilient installation that is **NOT** appropriate for a\nproduction setup. It is highly recommended to use a [properly secured Kubernetes\ncluster](https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/securing-a-cluster\/),\n[learn the available configuration\noptions](\/vault\/docs\/platform\/k8s\/helm\/configuration), and read the [production deployment\nchecklist](\/vault\/docs\/platform\/k8s\/helm\/run#architecture).\n\n## How-To\n\n### Install Vault\n\nHelm must be installed and configured on your machine. 
Please refer to the [Helm\ndocumentation](https:\/\/helm.sh\/) or the [Vault Installation to Minikube via\nHelm](\/vault\/tutorials\/kubernetes\/kubernetes-minikube-consul) tutorial.\n\nTo use the Helm chart, add the Hashicorp helm repository and check that you have\naccess to the chart:\n\n@include 'helm\/repo.mdx'\n\n-> **Important:** The Helm chart is new and under significant development.\nPlease always run Helm with `--dry-run` before any install or upgrade to verify\nchanges.\n\nUse `helm install` to install the latest release of the Vault Helm chart.\n\n```shell-session\n$ helm install vault hashicorp\/vault\n```\n\nOr install a specific version of the chart.\n\n@include 'helm\/install.mdx'\n\nThe `helm install` command accepts parameters to override default configuration\nvalues inline or defined in a file.\n\nOverride the `server.dev.enabled` configuration value:\n\n```shell-session\n$ helm install vault hashicorp\/vault \\\n    --set \"server.dev.enabled=true\"\n```\n\nOverride all the configuration found in a file:\n\n```shell-session\n$ cat override-values.yml\nserver:\n  ha:\n    enabled: true\n    replicas: 5\n\n$ helm install vault hashicorp\/vault \\\n    --values override-values.yml\n```\n\n#### Dev mode\n\nThe Helm chart may run a Vault server in development mode. This installs a single\nVault server with a memory storage backend.\n\n-> **Dev mode:** This is ideal for learning and demonstration environments but\nNOT recommended for a production environment.\n\nInstall the latest Vault Helm chart in development mode.\n\n```shell-session\n$ helm install vault hashicorp\/vault \\\n    --set \"server.dev.enabled=true\"\n```\n\n#### Standalone mode\n\nThe Helm chart defaults to run in `standalone` mode. 
This installs a single\nVault server with a file storage backend.\n\nInstall the latest Vault Helm chart in standalone mode.\n\n```shell-session\n$ helm install vault hashicorp\/vault\n```\n\n#### HA mode\n\nThe Helm chart may be run in high availability (HA) mode. This installs three\nVault servers with an existing Consul storage backend. It is suggested that\nConsul is installed via the [Consul Helm\nchart](https:\/\/github.com\/hashicorp\/consul-k8s).\n\nInstall the latest Vault Helm chart in HA mode.\n\n```shell-session\n$ helm install vault hashicorp\/vault \\\n    --set \"server.ha.enabled=true\"\n```\n\nRefer to the [Vault Installation to Minikube via\nHelm](\/vault\/tutorials\/kubernetes\/kubernetes-minikube-consul) tutorial\nto learn how to set up Consul and Vault in HA mode.\n\n#### External mode\n\nThe Helm chart may be run in external mode. This installs no Vault server and\nrelies on a network-addressable Vault server to exist.\n\nInstall the latest Vault Helm chart in external mode.\n\n```shell-session\n$ helm install vault hashicorp\/vault \\\n    --set \"injector.externalVaultAddr=http:\/\/external-vault:8200\"\n```\n\nRefer to the [Integrate a Kubernetes Cluster with an\nExternal Vault](\/vault\/tutorials\/kubernetes\/kubernetes-external-vault)\ntutorial to learn how to use an external Vault within a Kubernetes cluster.\n\n### View the Vault UI\n\nThe Vault UI is enabled but NOT exposed as a service for security reasons. 
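For a non-production lab, the chart can also expose the UI as a Kubernetes Service through its `ui` stanza; a hedged values-file sketch (`ui.enabled` and `ui.serviceType` are standard chart values; `LoadBalancer` is one possible service type, not a recommendation):

```yaml
# Sketch for non-production use only: expose the UI via a Service.
ui:
  enabled: true
  serviceType: LoadBalancer
```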
The\nVault UI can also be exposed via port-forwarding or through a [`ui`\nconfiguration value](\/vault\/docs\/platform\/k8s\/helm\/configuration\/#ui).\n\nExpose the Vault UI with port-forwarding:\n\n```shell-session\n$ kubectl port-forward vault-0 8200:8200\nForwarding from 127.0.0.1:8200 -> 8200\nForwarding from [::1]:8200 -> 8200\n##...\n```\n\n### Initialize and unseal Vault\n\nAfter the Vault Helm chart is installed in `standalone` or `ha` mode, one of the\nVault servers needs to be\n[initialized](\/vault\/docs\/commands\/operator\/init). The\ninitialization generates the credentials necessary to\n[unseal](\/vault\/docs\/concepts\/seal#why) all the Vault\nservers.\n\n#### CLI initialize and unseal\n\nView all the Vault pods in the current namespace:\n\n```shell-session\n$ kubectl get pods -l app.kubernetes.io\/name=vault\nNAME                                    READY   STATUS    RESTARTS   AGE\nvault-0                                 0\/1     Running   0          1m49s\nvault-1                                 0\/1     Running   0          1m49s\nvault-2                                 0\/1     Running   0          1m49s\n```\n\nInitialize one Vault server with the default number of key shares and default\nkey threshold:\n\n```shell-session\n$ kubectl exec -ti vault-0 -- vault operator init\nUnseal Key 1: MBFSDepD9E6whREc6Dj+k3pMaKJ6cCnCUWcySJQymObb\nUnseal Key 2: zQj4v22k9ixegS+94HJwmIaWLBL3nZHe1i+b\/wHz25fr\nUnseal Key 3: 7dbPPeeGGW3SmeBFFo04peCKkXFuuyKc8b2DuntA4VU5\nUnseal Key 4: tLt+ME7Z7hYUATfWnuQdfCEgnKA2L173dptAwfmenCdf\nUnseal Key 5: vYt9bxLr0+OzJ8m7c7cNMFj7nvdLljj0xWRbpLezFAI9\n\nInitial Root Token: s.zJNwZlRrqISjyBHFMiEca6GF\n##...\n```\n\nThe output displays the key shares and initial root key generated.\n\nUnseal the Vault server with the key shares until the key threshold is met:\n\n```sh\n## Unseal the first vault server until it reaches the key threshold\n$ kubectl exec -ti vault-0 -- vault operator unseal # ... 
Unseal Key 1\n$ kubectl exec -ti vault-0 -- vault operator unseal # ... Unseal Key 2\n$ kubectl exec -ti vault-0 -- vault operator unseal # ... Unseal Key 3\n```\n\nRepeat the unseal process for all Vault server pods. When all Vault server pods\nare unsealed they report READY `1\/1`.\n\n```shell-session\n$ kubectl get pods -l app.kubernetes.io\/name=vault\nNAME                                    READY   STATUS    RESTARTS   AGE\nvault-0                                 1\/1     Running   0          1m49s\nvault-1                                 1\/1     Running   0          1m49s\nvault-2                                 1\/1     Running   0          1m49s\n```\n\n#### Google KMS auto unseal\n\nThe Helm chart may be run with [Google KMS for Auto\nUnseal](\/vault\/docs\/configuration\/seal\/gcpckms). This enables Vault server pods to\nauto unseal if they are rescheduled.\n\nVault Helm requires the Google Cloud KMS credentials stored in\n`credentials.json` and mounted as a secret in each Vault server pod.\n\n##### Create the secret\n\nFirst, create the secret in Kubernetes:\n\n```bash\nkubectl create secret generic kms-creds --from-file=credentials.json\n```\n\nVault Helm mounts this to `\/vault\/userconfig\/kms-creds\/credentials.json`.\n\n##### Config example\n\nThis is a Vault Helm configuration that uses Google KMS:\n\n```yaml\nglobal:\n  enabled: true\n\nserver:\n  extraEnvironmentVars:\n    GOOGLE_REGION: global\n    GOOGLE_PROJECT: <PROJECT NAME>\n    GOOGLE_APPLICATION_CREDENTIALS: \/vault\/userconfig\/kms-creds\/credentials.json\n\n  volumes:\n    - name: userconfig-kms-creds\n      secret:\n        defaultMode: 420\n        secretName: kms-creds\n\n  volumeMounts:\n    - mountPath: \/vault\/userconfig\/kms-creds\n      name: userconfig-kms-creds\n      readOnly: true\n\n  ha:\n    enabled: true\n    replicas: 3\n\n    config: |\n      ui = true\n\n      listener \"tcp\" {\n        tls_disable = 1\n        address = \"[::]:8200\"\n        cluster_address = 
\"[::]:8201\"\n      }\n\n      seal \"gcpckms\" {\n        project     = \"<NAME OF PROJECT>\"\n        region      = \"global\"\n        key_ring    = \"<NAME OF KEYRING>\"\n        crypto_key  = \"<NAME OF KEY>\"\n      }\n\n      storage \"consul\" {\n        path = \"vault\"\n        address = \"HOST_IP:8500\"\n      }\n```\n\n#### Amazon KMS auto unseal\n\nThe Helm chart may be run with [AWS KMS for Auto\nUnseal](\/vault\/docs\/configuration\/seal\/awskms). This enables Vault server pods to auto\nunseal if they are rescheduled.\n\nVault Helm requires the AWS credentials stored as environment variables that\nare defined in each Vault server pod.\n\n##### Create the secret\n\nFirst, create a secret with your KMS access key\/secret:\n\n```shell-session\n$ kubectl create secret generic kms-creds \\\n    --from-literal=AWS_ACCESS_KEY_ID=\"${AWS_ACCESS_KEY_ID?}\" \\\n    --from-literal=AWS_SECRET_ACCESS_KEY=\"${AWS_SECRET_ACCESS_KEY?}\"\n```\n\n##### Config example\n\nThis is a Vault Helm configuration that uses AWS KMS:\n\n```yaml\nglobal:\n  enabled: true\n\nserver:\n  extraSecretEnvironmentVars:\n    - envName: AWS_ACCESS_KEY_ID\n      secretName: kms-creds\n      secretKey: AWS_ACCESS_KEY_ID\n    - envName: AWS_SECRET_ACCESS_KEY\n      secretName: kms-creds\n      secretKey: AWS_SECRET_ACCESS_KEY\n\n  ha:\n    enabled: true\n    config: |\n      ui = true\n\n      listener \"tcp\" {\n        tls_disable = 1\n        address = \"[::]:8200\"\n        cluster_address = \"[::]:8201\"\n      }\n\n      seal \"awskms\" {\n        region     = \"KMS_REGION_HERE\"\n        kms_key_id = \"KMS_KEY_ID_HERE\"\n      }\n\n      storage \"consul\" {\n        address = \"HOST_IP:8500\"\n        path = \"vault\/\"\n      }\n```\n\n### Probes\n\nProbes are essential for detecting failures, rescheduling and using pods in\nKubernetes. 
The Helm chart offers configurable readiness and liveness probes\nwhich can be customized for a variety of use cases.\n\nVault's [`\/sys\/health`](\/vault\/api-docs\/system\/health) endpoint can be customized to\nchange the behavior of the health check. For example, we can change the Vault\nreadiness probe to show the Vault pods are ready even if they're still uninitialized\nand sealed using the following probe:\n\n```yaml\nserver:\n  readinessProbe:\n    enabled: true\n    path: '\/v1\/sys\/health?standbyok=true&sealedcode=204&uninitcode=204'\n```\n\nUsing this customized probe, a `postStart` script could automatically run once the\npod is ready for additional setup.\n\n### Upgrading Vault on Kubernetes\n\nTo upgrade Vault on Kubernetes, we follow the same pattern as\n[generally upgrading Vault](\/vault\/docs\/upgrading), except we can use\nthe Helm chart to update the Vault server StatefulSet. It is important to understand\nhow to [generally upgrade Vault](\/vault\/docs\/upgrading) before reading this\nsection.\n\nThe Vault StatefulSet uses the `OnDelete` update strategy. It is critical to use `OnDelete` instead\nof `RollingUpdate` because standbys must be updated before the active primary. A\nfailover to an older version of Vault must always be avoided.\n\n!> **IMPORTANT NOTE:** Always back up your data before upgrading! Vault does not\nmake backward-compatibility guarantees for its data store. Simply replacing the\nnewly-installed Vault binary with the previous version may not cleanly\ndowngrade Vault, as upgrades may perform changes to the underlying data\nstructure that make the data incompatible with a downgrade. 
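For clusters running on Integrated Storage (Raft), one way to take that backup is a Raft snapshot; this is a sketch, not part of the chart docs, and it assumes the pod is named `vault-0` and holds a token authorized for `sys\/storage\/raft` (Consul-backed clusters would use Consul's own snapshot tooling instead):

```shell
# Hedged pre-upgrade backup sketch for Raft-backed clusters.
# Assumptions: pod name vault-0, an authenticated Vault CLI inside the pod.
snap="vault-preupgrade-$(date +%Y%m%d%H%M%S).snap"
kubectl exec vault-0 -- vault operator raft snapshot save "/tmp/$snap"
kubectl cp "vault-0:/tmp/$snap" "./$snap"
echo "snapshot written to ./$snap"
```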
If you need to roll\nback to a previous version of Vault, you should roll back your data store as\nwell.\n\n#### Upgrading Vault servers\n\n!> **IMPORTANT NOTE:** Helm will install the latest chart found in a repo by default.\nIt's recommended to specify the chart version when upgrading.\n\nTo initiate the upgrade, set the `server.image` values to the desired Vault\nversion, either in a values yaml file or on the command line. For illustrative\npurposes, the example below uses `vault:123.456`.\n\n```yaml\nserver:\n  image:\n    repository: 'vault'\n    tag: '123.456'\n```\n\nNext, list the Helm versions and choose the desired version to install.\n\n```bash\n$ helm search repo hashicorp\/vault\nNAME           \tCHART VERSION\tAPP VERSION\tDESCRIPTION\nhashicorp\/vault\t0.29.1       \t1.18.1     \tOfficial HashiCorp Vault Chart\n```\n\nNext, test the upgrade with `--dry-run` first to verify the changes sent to the\nKubernetes cluster.\n\n```shell-session\n$ helm upgrade vault hashicorp\/vault --version=0.29.1 \\\n    --set='server.image.repository=vault' \\\n    --set='server.image.tag=123.456' \\\n    --dry-run\n```\n\nThis should cause no changes (although the resources are updated). If\neverything is stable, `helm upgrade` can be run.\n\nThe `helm upgrade` command should have updated the StatefulSet template for\nthe Vault servers, however, no pods have been deleted. The pods must be manually\ndeleted to upgrade. Deleting the pods does not delete any persisted data.\n\nIf Vault is not deployed using `ha` mode, the single Vault server may be deleted by\nrunning:\n\n```shell-session\n$ kubectl delete pod <name of Vault pod>\n```\n\n\nIf you deployed Vault in high availability (`ha`) mode, you must upgrade your\nstandby pods before upgrading the active pod:\n\n1. Before deleting the standby pod, remove the associated node from the raft\n   with `vault operator raft remove-peer <server_id>`.\n1. 
Confirm Vault removed the node successfully from Raft with\n   `vault operator raft list-peers`.\n1. Once you confirm the removal, delete the pod.\n\n<Warning title=\"Delete nodes to avoid unnecessary leader elections\">\n\nRemoving a pod without first deleting the node from its cluster means that\nRaft will not be aware of the correct number of nodes in the cluster. Not knowing\nthe correct number of nodes can trigger a leader election, which can potentially\ncause unneeded downtime.\n\n<\/Warning>\n\nVault has K8s service discovery built in (when enabled in the server configuration) and\nwill automatically change the labels of the pod with its current leader status. These labels\ncan be used to filter the pods.\n\nFor example, select all pods that are Vault standbys:\n\n```shell-session\n$ kubectl get pods -l vault-active=false\n```\n\nSelect the active Vault pod:\n\n```shell-session\n$ kubectl get pods -l vault-active=true\n```\n\nNext, sequentially delete every pod that is not the active primary, ensuring the quorum is maintained at all times:\n\n```shell-session\n$ kubectl delete pod <name of Vault pod>\n```\n\nIf auto-unseal is not being used, the newly scheduled Vault standby pods need\nto be unsealed:\n\n```shell-session\n$ kubectl exec -ti <name of pod> -- vault operator unseal\n```\n\nFinally, once the standby nodes have been updated and unsealed, delete the active\nprimary:\n\n```shell-session\n$ kubectl delete pod <name of Vault primary>\n```\n\nSimilar to the standby nodes, the former primary also needs to be unsealed:\n\n```shell-session\n$ kubectl exec -ti <name of pod> -- vault operator unseal\n```\n\nAfter a few moments, the Vault cluster should elect a new active primary. The Vault\ncluster is now upgraded!\n\n### Protecting sensitive Vault configurations\n\nVault Helm renders a Vault configuration file during installation and stores the\nfile in a Kubernetes configmap. 
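To see exactly what is stored there, the rendered configmap can be inspected directly; a small sketch (the configmap name is an assumption following the chart's `<release>-config` naming for a release named `vault`):

```shell
# Inspect the rendered Vault config at rest (name assumed: <release>-config).
release="vault"
kubectl get configmap "${release}-config" -o yaml
```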
Some configurations require sensitive data to be\nincluded in the configuration file and would not be encrypted at rest once created\nin Kubernetes.\n\nThe following example shows how to add extra configuration files to Vault Helm\nto protect sensitive configurations from being in plaintext at rest using Kubernetes\nsecrets.\n\nFirst, create a partial Vault configuration with the sensitive settings Vault\nloads during startup:\n\n```shell-session\n$ cat <<EOF > config.hcl\nstorage \"mysql\" {\n  username = \"user1234\"\n  password = \"secret123!\"\n  database = \"vault\"\n}\nEOF\n```\n\nNext, create a Kubernetes secret containing this partial configuration:\n\n```shell-session\n$ kubectl create secret generic vault-storage-config \\\n    --from-file=config.hcl\n```\n\nFinally, mount this secret as an extra volume and add an additional `-config` flag\nto the Vault startup command:\n\n```shell-session\n$ helm install vault hashicorp\/vault \\\n  --set='server.volumes[0].name=userconfig-vault-storage-config' \\\n  --set='server.volumes[0].secret.defaultMode=420' \\\n  --set='server.volumes[0].secret.secretName=vault-storage-config' \\\n  --set='server.volumeMounts[0].mountPath=\/vault\/userconfig\/vault-storage-config' \\\n  --set='server.volumeMounts[0].name=userconfig-vault-storage-config' \\\n  --set='server.volumeMounts[0].readOnly=true' \\\n  --set='server.extraArgs=-config=\/vault\/userconfig\/vault-storage-config\/config.hcl'\n```\n\n## Architecture\n\nWe recommend running Vault on Kubernetes with the same\n[general architecture](\/vault\/docs\/internals\/architecture)\nas running it anywhere else. There are some benefits Kubernetes can provide\nthat ease operating a Vault cluster and we document those below. 
The standard\n[production deployment](\/vault\/tutorials\/operations\/production-hardening) tutorial is still an\nimportant read even if running Vault within Kubernetes.\n\n### Production deployment checklist\n\n_End-to-End TLS._ Vault should always be used with TLS in production. If\nintermediate load balancers or reverse proxies are used to front Vault,\nthey should not terminate TLS. This way traffic is always encrypted in transit\nto Vault and minimizes risks introduced by intermediate layers. See the\n[official documentation](\/vault\/docs\/platform\/k8s\/helm\/examples\/standalone-tls\/)\nfor an example of configuring Vault Helm to use TLS.\n\n_Single Tenancy._ Vault should be the only main process running on a machine.\nThis reduces the risk that another process running on the same machine is\ncompromised and can interact with Vault. This can be accomplished by using Vault\nHelm's `affinity` configurable. See the\n[official documentation](\/vault\/docs\/platform\/k8s\/helm\/examples\/ha-with-consul\/)\nfor an example of configuring Vault Helm to use affinity rules.\n\n_Enable Auditing._ Vault supports several auditing backends. Enabling auditing\nprovides a history of all operations performed by Vault and provides a forensics\ntrail in the case of misuse or compromise. Audit logs securely hash any sensitive\ndata, but access should still be restricted to prevent any unintended disclosures.\nVault Helm includes a configurable `auditStorage` option that provisions a persistent\nvolume to store audit logs. See the\n[official documentation](\/vault\/docs\/platform\/k8s\/helm\/examples\/standalone-audit\/)\nfor an example of configuring Vault Helm to use auditing.\n\n_Immutable Upgrades._ Vault relies on an external storage backend for persistence,\nand this decoupling allows the servers running Vault to be managed immutably.\nWhen upgrading to new versions, new servers with the upgraded version of Vault\nare brought online. 
They are attached to the same shared storage backend and\nunsealed. Then the old servers are destroyed. This reduces the need for remote\naccess and upgrade orchestration which may introduce security gaps. See the\n[upgrade section](#how-to) for instructions\non upgrading Vault on Kubernetes.\n\n_Upgrade Frequently._ Vault is actively developed, and updating frequently is\nimportant to incorporate security fixes and any changes in default settings such\nas key lengths or cipher suites. Subscribe to the Vault mailing list and\nGitHub CHANGELOG for updates.\n\n_Restrict Storage Access._ Vault encrypts all data at rest, regardless of which\nstorage backend is used. Although the data is encrypted, an attacker with arbitrary\ncontrol can cause data corruption or loss by modifying or deleting keys. Access\nto the storage backend should be restricted to only Vault to avoid unauthorized\naccess or operations.","site":"vault"}
internals architecture  as running it anywhere else  There are some benefits Kubernetes can provide that eases operating a Vault cluster and we document those below  The standard  production deployment   vault tutorials operations production hardening  tutorial is still an important read even if running Vault within Kubernetes       Production deployment checklist   End to End TLS   Vault should always be used with TLS in production  If intermediate load balancers or reverse proxies are used to front Vault  they should not terminate TLS  This way traffic is always encrypted in transit to Vault and minimizes risks introduced by intermediate layers  See the  official documentation   vault docs platform k8s helm examples standalone tls   for example on configuring Vault Helm to use TLS    Single Tenancy   Vault should be the only main process running on a machine  This reduces the risk that another process running on the same machine is compromised and can interact with Vault  This can be accomplished by using Vault Helm s  affinity  configurable  See the  official documentation   vault docs platform k8s helm examples ha with consul   for example on configuring Vault Helm to use affinity rules    Enable Auditing   Vault supports several auditing backends  Enabling auditing provides a history of all operations performed by Vault and provides a forensics trail in the case of misuse or compromise  Audit logs securely hash any sensitive data  but access should still be restricted to prevent any unintended disclosures  Vault Helm includes a configurable  auditStorage  option that provisions a persistent volume to store audit logs  See the  official documentation   vault docs platform k8s helm examples standalone audit   for an example on configuring Vault Helm to use auditing    Immutable Upgrades   Vault relies on an external storage backend for persistence  and this decoupling allows the servers running Vault to be managed immutably  When upgrading to new versions  new 
servers with the upgraded version of Vault are brought online  They are attached to the same shared storage backend and unsealed  Then the old servers are destroyed  This reduces the need for remote access and upgrade orchestration which may introduce security gaps  See the  upgrade section   how to  for instructions on upgrading Vault on Kubernetes    Upgrade Frequently   Vault is actively developed  and updating frequently is important to incorporate security fixes and any changes in default settings such as key lengths or cipher suites  Subscribe to the Vault mailing list and GitHub CHANGELOG for updates    Restrict Storage Access   Vault encrypts all data at rest  regardless of which storage backend is used  Although the data is encrypted  an attacker with arbitrary control can cause data corruption or loss by modifying or deleting keys  Access to the storage backend should be restricted to only Vault to avoid unauthorized access or operations "}
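The HA upgrade steps above are strictly ordered: every standby is recycled and unsealed before the active primary is touched. The sketch below dry-runs that ordering locally; `kubectl` is stubbed out as a print-only function, and the pod names (`vault-0`, `vault-1`, `vault-2`) are illustrative assumptions, not chart output.

```shell
# Dry-run sketch of the HA upgrade order: standbys first, active primary last.
# NOTE: kubectl is stubbed to a print-only function so the sequence can be
# inspected without a cluster; remove the stub to run against a real one.
kubectl() { echo "kubectl $*"; }

STANDBYS="vault-1 vault-2"   # in practice: kubectl get pods -l vault-active=false
ACTIVE="vault-0"             # in practice: kubectl get pods -l vault-active=true

ORDER=""
for pod in $STANDBYS; do
  kubectl delete pod "$pod"
  kubectl exec -ti "$pod" -- vault operator unseal
  ORDER="$ORDER $pod"
done

# Only after every standby is upgraded and unsealed, recycle the primary.
kubectl delete pod "$ACTIVE"
kubectl exec -ti "$ACTIVE" -- vault operator unseal
ORDER="$ORDER $ACTIVE"
echo "upgrade order:$ORDER"
```

If the loop is ever reordered so the active pod goes first, a failover to an older Vault version becomes possible, which is exactly what the `OnDelete` strategy is meant to prevent.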
{"questions":"vault Highly available Vault enterprise performance clusters with integrated storage Raft page title Highly Available Vault Enterprise Performance Clusters with Raft Describes how to set up Performance clusters with Integrated Storage Raft layout docs sidebar current docs platform k8s examples enterprise perf with raft","answers":"---\nlayout: 'docs'\npage_title: 'Highly Available Vault Enterprise Performance Clusters with Raft'\nsidebar_current: 'docs-platform-k8s-examples-enterprise-perf-with-raft'\ndescription: |-\n  Describes how to set up Performance clusters with Integrated Storage (Raft)\n---\n\n# Highly available Vault enterprise performance clusters with integrated storage (Raft)\n\n@include 'helm\/version.mdx'\n\nThe following is an example of creating a performance cluster using Vault Helm.\n\nFor more information on Disaster Recovery, [see the official documentation](\/vault\/docs\/enterprise\/replication\/).\n\n-> For license configuration refer to [Running Vault Enterprise](\/vault\/docs\/platform\/k8s\/helm\/enterprise).\n\n## Primary cluster\n\nFirst, create the primary cluster:\n\n```shell\nhelm install vault-primary hashicorp\/vault \\\n  --set='server.image.repository=hashicorp\/vault-enterprise' \\\n  --set='server.image.tag=1.18.1-ent' \\\n  --set='server.ha.enabled=true' \\\n  --set='server.ha.raft.enabled=true'\n```\n\nNext, initialize and unseal `vault-primary-0` pod:\n\n```shell\nkubectl exec -ti vault-primary-0 -- vault operator init\nkubectl exec -ti vault-primary-0 -- vault operator unseal\n```\n\nFinally, join the remaining pods to the Raft cluster and unseal them. 
The pods\nwill need to communicate directly so we'll configure the pods to use the internal\nservice provided by the Helm chart:\n\n```shell\nkubectl exec -ti vault-primary-1 -- vault operator raft join http:\/\/vault-primary-0.vault-primary-internal:8200\nkubectl exec -ti vault-primary-1 -- vault operator unseal\n\nkubectl exec -ti vault-primary-2 -- vault operator raft join http:\/\/vault-primary-0.vault-primary-internal:8200\nkubectl exec -ti vault-primary-2 -- vault operator unseal\n```\n\nTo verify if the Raft cluster has successfully been initialized, run the following.\n\nFirst, login using the `root` token on the `vault-primary-0` pod:\n\n```shell\nkubectl exec -ti vault-primary-0 -- vault login\n```\n\nNext, list all the raft peers:\n\n```shell\n$ kubectl exec -ti vault-primary-0 -- vault operator raft list-peers\n\nNode                                    Address                        State       Voter\n----                                    -------                        -----       -----\na1799962-8711-7f28-23f0-cea05c8a527d    vault-primary-0.vault-primary-internal:8201    leader      true\ne6876c97-aaaa-a92e-b99a-0aafab105745    vault-primary-1.vault-primary-internal:8201    follower    true\n4b5d7383-ff31-44df-e008-6a606828823b    vault-primary-2.vault-primary-internal:8201    follower    true\n```\n\n## Secondary cluster\n\nWith the primary cluster created, next create a secondary cluster.\n\n```shell\nhelm install vault-secondary hashicorp\/vault \\\n  --set='server.image.repository=hashicorp\/vault-enterprise' \\\n  --set='server.image.tag=1.18.1-ent' \\\n  --set='server.ha.enabled=true' \\\n  --set='server.ha.raft.enabled=true'\n```\n\nNext, initialize and unseal `vault-secondary-0` pod:\n\n```shell\nkubectl exec -ti vault-secondary-0 -- vault operator init\nkubectl exec -ti vault-secondary-0 -- vault operator unseal\n```\n\nFinally, join the remaining pods to the Raft cluster and unseal them. 
The pods\nwill need to communicate directly so we'll configure the pods to use the internal\nservice provided by the Helm chart:\n\n```shell\nkubectl exec -ti vault-secondary-1 -- vault operator raft join http:\/\/vault-secondary-0.vault-secondary-internal:8200\nkubectl exec -ti vault-secondary-1 -- vault operator unseal\n\nkubectl exec -ti vault-secondary-2 -- vault operator raft join http:\/\/vault-secondary-0.vault-secondary-internal:8200\nkubectl exec -ti vault-secondary-2 -- vault operator unseal\n```\n\nTo verify if the Raft cluster has successfully been initialized, run the following.\n\nFirst, login using the `root` token on the `vault-secondary-0` pod:\n\n```shell\nkubectl exec -ti vault-secondary-0 -- vault login\n```\n\nNext, list all the raft peers:\n\n```shell\n$ kubectl exec -ti vault-secondary-0 -- vault operator raft list-peers\n\nNode                                    Address                        State       Voter\n----                                    -------                        -----       -----\na1799962-8711-7f28-23f0-cea05c8a527d    vault-secondary-0.vault-secondary-internal:8201    leader      true\ne6876c97-aaaa-a92e-b99a-0aafab105745    vault-secondary-1.vault-secondary-internal:8201    follower    true\n4b5d7383-ff31-44df-e008-6a606828823b    vault-secondary-2.vault-secondary-internal:8201    follower    true\n```\n\n## Enable performance replication on primary\n\nWith the initial clusters setup, we can now configure them for Performance Replication.\n\nFirst, on the primary cluster, enable replication:\n\n```shell\nkubectl exec -ti vault-primary-0 -- vault write -f sys\/replication\/performance\/primary\/enable primary_cluster_addr=https:\/\/vault-primary-active:8201\n```\n\nNext, create a token the secondary cluster will use to configure replication:\n\n```shell\nkubectl exec -ti vault-primary-0 -- vault write sys\/replication\/performance\/primary\/secondary-token id=secondary\n```\n\nThe token in the output will be used when 
configuring the secondary cluster.\n\n## Enable performance replication on secondary\n\nUsing the token created in the last step, enable Performance Replication on the secondary:\n\n```shell\nkubectl exec -ti vault-secondary-0 -- vault write sys\/replication\/performance\/secondary\/enable token=<TOKEN FROM PRIMARY>\n```\n\nLast, delete the remainder secondary pods and unseal them using the primary unseal token\nafter Kubernetes reschedules them:\n\n```shell\nkubectl delete pod vault-secondary-1\nkubectl exec -ti vault-secondary-1 -- vault operator unseal <PRIMARY UNSEAL TOKEN>\n\nkubectl delete pod vault-secondary-2\nkubectl exec -ti vault-secondary-2 -- vault operator unseal <PRIMARY UNSEAL TOKEN>\n```","site":"vault"}
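The join-and-unseal step in the cluster walkthrough above is identical for every non-leader pod, so the commands can be generated rather than typed per pod. A minimal sketch, assuming the release name `vault-primary` and the three replicas from the example (for the secondary cluster only `RELEASE` changes):

```shell
# Emit the raft-join and unseal commands for pods 1..REPLICAS-1, all pointing
# at pod 0 through the internal service the Helm chart creates.
RELEASE="vault-primary"
REPLICAS=3

CMDS=""
i=1
while [ "$i" -lt "$REPLICAS" ]; do
  CMDS="${CMDS}kubectl exec -ti ${RELEASE}-${i} -- vault operator raft join http://${RELEASE}-0.${RELEASE}-internal:8200
kubectl exec -ti ${RELEASE}-${i} -- vault operator unseal
"
  i=$((i + 1))
done
printf '%s' "$CMDS"
```

Piping the output to `sh` would execute the joins in order; printing them first makes it easy to review the addresses before touching the cluster.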
{"questions":"vault sidebar current docs platform k8s examples ha tls Describes how to set up a Raft HA Vault cluster with TLS certificate HA Cluster with Raft and TLS layout docs page title HA Cluster with Raft and TLS","answers":"---\nlayout: 'docs'\npage_title: 'HA Cluster with Raft and TLS'\nsidebar_current: 'docs-platform-k8s-examples-ha-tls'\ndescription: |-\n  Describes how to set up a Raft HA Vault cluster with TLS certificate\n---\n\n# HA Cluster with Raft and TLS\n\nThe overview for [Integrated Storage and\nTLS](\/vault\/docs\/concepts\/integrated-storage#integrated-storage-and-tls) covers\nthe various options for mitigating TLS verification warnings and bootstrapping\nyour Raft cluster.\n\nWithout proper configuration, you will see the following warning before cluster\ninitialization:\n```shell\ncore: join attempt failed: error=\"error during raft bootstrap init call: Put \"https:\/\/vault-${N}.${SERVICE}:8200\/v1\/sys\/storage\/raft\/bootstrap\/challenge\": x509: certificate is valid for ${SERVICE}, ${SERVICE}.${NAMESPACE}, ${SERVICE}.${NAMESPACE}.svc, ${SERVICE}.${NAMESPACE}.svc.cluster.local, not vault-${N}.${SERVICE}\"\n```\n\nThe examples below demonstrate two specific solutions. Both solutions ensure\nthat the common name (CN) used for the `leader_api_addr` in the Raft stanza\nmatches the name(s) listed in the TLS certificate.\n\n## Before you start\n\n1. Follow the steps from the example [HA Vault Cluster with Integrated\nStorage](\/vault\/docs\/platform\/k8s\/helm\/examples\/ha-with-raft) to build the cluster.\n\n2. 
Follow the examples and instructions in [Standalone Server with\nTLS](\/vault\/docs\/platform\/k8s\/helm\/examples\/standalone-tls) to create a TLS\ncertificate.\n\n## Solution 1: Use auto-join and set the TLS server in your Raft configuration\n\nThe join warning disappears if you use auto-join and set the expected TLS\nserver name (`${CN}`) with\n[`leader_tls_servername`](\/vault\/docs\/configuration\/storage\/raft#leader_tls_servername)\nin the Raft stanza for your Vault configuration.\n\nFor example:\n<CodeBlockConfig highlight=\"6,14,22\">\n\n```hcl\nstorage \"raft\" {\n  path = \"\/vault\/data\"\n\n  retry_join {\n    leader_api_addr = \"https:\/\/vault-0.${SERVICE}:8200\"\n    leader_tls_servername = \"${CN}\"\n    leader_client_cert_file = \"\/vault\/tls\/vault.crt\"\n    leader_client_key_file = \"\/vault\/tls\/vault.key\"\n    leader_ca_cert_file = \"\/vault\/tls\/vault.ca\"\n  }\n\n  retry_join {\n    leader_api_addr = \"https:\/\/vault-1.${SERVICE}:8200\"\n    leader_tls_servername = \"${CN}\"\n    leader_client_cert_file = \"\/vault\/tls\/vault.crt\"\n    leader_client_key_file = \"\/vault\/tls\/vault.key\"\n    leader_ca_cert_file = \"\/vault\/tls\/vault.ca\"\n  }\n\n  retry_join {\n    leader_api_addr = \"https:\/\/vault-2.${SERVICE}:8200\"\n    leader_tls_servername = \"${CN}\"\n    leader_client_cert_file = \"\/vault\/tls\/vault.crt\"\n    leader_client_key_file = \"\/vault\/tls\/vault.key\"\n    leader_ca_cert_file = \"\/vault\/tls\/vault.ca\"\n  }\n}\n```\n\n<\/CodeBlockConfig>\n\n## Solution 2:  Add a load balancer to your Raft configuration\n\nIf you have a load balancer for your Vault cluster, you can add a single\n`retry_join` stanza to your Raft configuration and use the load balancer\naddress for `leader_api_addr`.\n\nFor example:\n<CodeBlockConfig highlight=\"5\">\n\n```hcl\nstorage \"raft\" {\n  path = \"\/vault\/data\"\n\n  retry_join {\n    leader_api_addr = \"https:\/\/vault-active:8200\"\n    leader_client_cert_file = 
\"\/vault\/tls\/vault.crt\"\n    leader_client_key_file = \"\/vault\/tls\/vault.key\"\n    leader_ca_cert_file = \"\/vault\/tls\/vault.ca\"\n  }\n}\n```\n\n<\/CodeBlockConfig>\n","site":"vault"}
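Solution 1 above repeats the same `retry_join` stanza once per server, varying only the pod index, so the Raft block can be rendered from a loop instead of maintained by hand. A sketch where `SERVICE` and `CN` stand in for the same placeholders used in the example (the concrete values here are illustrative):

```shell
# Render one retry_join stanza per pod; only the pod index changes.
SERVICE="vault-internal"   # illustrative value for ${SERVICE}
CN="vault.example.com"     # illustrative value for ${CN}

STANZAS=""
for n in 0 1 2; do
  STANZAS="${STANZAS}  retry_join {
    leader_api_addr = \"https://vault-${n}.${SERVICE}:8200\"
    leader_tls_servername = \"${CN}\"
    leader_client_cert_file = \"/vault/tls/vault.crt\"
    leader_client_key_file = \"/vault/tls/vault.key\"
    leader_ca_cert_file = \"/vault/tls/vault.ca\"
  }

"
done
RAFT_CONFIG=$(printf 'storage "raft" {\n  path = "/vault/data"\n\n%s}' "$STANZAS")
printf '%s\n' "$RAFT_CONFIG"
```

Generating the stanzas keeps `leader_tls_servername` identical in every block, which is the property that makes the x509 join warning disappear.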
{"questions":"vault Describes how to set up a standalone Vault with TLS certificate layout docs page title Standalone Server with TLS Standalone server with TLS sidebar current docs platform k8s examples standalone tls","answers":"---\nlayout: 'docs'\npage_title: 'Standalone Server with TLS'\nsidebar_current: 'docs-platform-k8s-examples-standalone-tls'\ndescription: |-\n  Describes how to set up a standalone Vault with TLS certificate\n---\n\n# Standalone server with TLS\n\n@include 'helm\/version.mdx'\n\nThis example can be used to set up a single server Vault cluster using TLS.\n\n1. Create key & certificate using Kubernetes CA\n2. Store key & cert into [Kubernetes secrets store](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/)\n3. Configure helm chart to use Kubernetes secret from step 2\n\n## 1. create key & certificate using kubernetes CA\n\nThere are four variables that will be used in this example.\n\n```bash\n# SERVICE is the name of the Vault service in kubernetes.\n# It does not have to match the actual running service, though it may help for consistency.\nexport SERVICE=vault-server-tls\n\n# NAMESPACE where the Vault service is running.\nexport NAMESPACE=vault-namespace\n\n# SECRET_NAME to create in the kubernetes secrets store.\nexport SECRET_NAME=vault-server-tls\n\n# TMPDIR is a temporary working directory.\nexport TMPDIR=\/tmp\n\n# CSR_NAME will be the name of our certificate signing request as seen by kubernetes.\nexport CSR_NAME=vault-csr\n```\n\n1. Create a key for Kubernetes to sign.\n\n   ```shell-session\n   $ openssl genrsa -out ${TMPDIR}\/vault.key 2048\n   Generating RSA private key, 2048 bit long modulus\n...................................................................................................+++\n...............+++\ne is 65537 (0x10001)\n   ```\n\n2. Create a Certificate Signing Request (CSR).\n\n   1. 
Create a file `${TMPDIR}\/csr.conf` with the following contents:\n\n      ```bash\n      cat <<EOF >${TMPDIR}\/csr.conf\n      [req]\n      req_extensions = v3_req\n      distinguished_name = req_distinguished_name\n      [req_distinguished_name]\n      [ v3_req ]\n      basicConstraints = CA:FALSE\n      keyUsage = nonRepudiation, digitalSignature, keyEncipherment\n      extendedKeyUsage = serverAuth\n      subjectAltName = @alt_names\n      [alt_names]\n      DNS.1 = *.${SERVICE}\n      DNS.2 = *.${SERVICE}.${NAMESPACE}\n      DNS.3 = *.${SERVICE}.${NAMESPACE}.svc\n      DNS.4 = *.${SERVICE}.${NAMESPACE}.svc.cluster.local\n      IP.1 = 127.0.0.1\n      EOF\n      ```\n\n   2. Create a CSR.\n\n      ```bash\n      openssl req -new \\\n                  -key ${TMPDIR}\/vault.key \\\n                  -subj \"\/CN=system:node:${SERVICE}.${NAMESPACE}.svc\/O=system:nodes\" \\\n                  -out ${TMPDIR}\/server.csr \\\n                  -config ${TMPDIR}\/csr.conf\n      ```\n\n3. Create the certificate\n\n   ~> **Important Note:** If you are using EKS, certificate signing requirements have changed. As per the AWS [certificate signing](https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/cert-signing.html) documentation, EKS version `1.22` and later now requires the `signerName` to be `beta.eks.amazonaws.com\/app-serving`, otherwise, the CSR will be approved but the certificate will not be issued.\n\n   1. 
Create a file `${TMPDIR}\/csr.yaml` with the following contents:\n\n      ```bash\n      cat <<EOF >${TMPDIR}\/csr.yaml\n      apiVersion: certificates.k8s.io\/v1\n      kind: CertificateSigningRequest\n      metadata:\n        name: ${CSR_NAME}\n      spec:\n        signerName: kubernetes.io\/kubelet-serving\n        groups:\n        - system:authenticated\n        request: $(base64 ${TMPDIR}\/server.csr | tr -d '\\n')\n        usages:\n        - digital signature\n        - key encipherment\n        - server auth\n      EOF\n      ```\n\n   2. Send the CSR to Kubernetes.\n\n      ```shell-session\n      $ kubectl create -f ${TMPDIR}\/csr.yaml\n      certificatesigningrequest.certificates.k8s.io\/vault-csr created\n      ```\n\n      -> If this process is automated, you may need to wait to ensure the CSR has been received and stored:\n      `kubectl get csr ${CSR_NAME}`\n\n   3. Approve the CSR in Kubernetes.\n\n      ```shell-session\n      $ kubectl certificate approve ${CSR_NAME}\n      certificatesigningrequest.certificates.k8s.io\/vault-csr approved\n      ```\n\n   4. Verify that the certificate was approved and issued.\n      ```shell-session\n      $ kubectl get csr ${CSR_NAME}\n      NAME        AGE     SIGNERNAME                                    REQUESTOR                        CONDITION\n      vault-csr   1m13s   kubernetes.io\/kubelet-serving                 kubernetes-admin                 Approved,Issued\n      ```\n\n## 2. store key, cert, and kubernetes CA into kubernetes secrets store\n\n1. Retrieve the certificate.\n\n   ```shell-session\n   $ serverCert=$(kubectl get csr ${CSR_NAME} -o jsonpath='{.status.certificate}')\n   ```\n\n   -> If this process is automated, you may need to wait to ensure the certificate has been created.\n   If it hasn't, this will return an empty string.\n\n2. 
Write the certificate out to a file.\n\n   ```shell-session\n   $ echo \"${serverCert}\" | openssl base64 -d -A -out ${TMPDIR}\/vault.crt\n   ```\n\n3. Retrieve Kubernetes CA.\n\n   ```bash\n   kubectl get secret \\\n     -o jsonpath=\"{.items[?(@.type==\\\"kubernetes.io\/service-account-token\\\")].data['ca\\.crt']}\" \\\n     | base64 --decode > ${TMPDIR}\/vault.ca\n   ```\n\n4. Create the namespace.\n\n    ```shell-session\n    $ kubectl create namespace ${NAMESPACE}\n    namespace\/vault-namespace created\n    ```\n\n5. Store the key, cert, and Kubernetes CA into Kubernetes secrets.\n\n   ```shell-session\n   $ kubectl create secret generic ${SECRET_NAME} \\\n       --namespace ${NAMESPACE} \\\n       --from-file=vault.key=${TMPDIR}\/vault.key \\\n       --from-file=vault.crt=${TMPDIR}\/vault.crt \\\n       --from-file=vault.ca=${TMPDIR}\/vault.ca\n\n   # secret\/vault-server-tls created\n   ```\n\n## 3. helm configuration\n\nThe below `custom-values.yaml` can be used to set up a single server Vault cluster using TLS.\nThis assumes that a Kubernetes `secret` exists with the server certificate, key and\ncertificate authority:\n\n```yaml\nglobal:\n  enabled: true\n  tlsDisable: false\n\nserver:\n  extraEnvironmentVars:\n    VAULT_CACERT: \/vault\/userconfig\/vault-server-tls\/vault.ca\n\n  volumes:\n    - name: userconfig-vault-server-tls\n      secret:\n        defaultMode: 420\n        secretName: vault-server-tls # Matches the ${SECRET_NAME} from above\n\n  volumeMounts:\n    - mountPath: \/vault\/userconfig\/vault-server-tls\n      name: userconfig-vault-server-tls\n      readOnly: true\n\n  standalone:\n    enabled: true\n    config: |\n      listener \"tcp\" {\n        address = \"[::]:8200\"\n        cluster_address = \"[::]:8201\"\n        tls_cert_file = \"\/vault\/userconfig\/vault-server-tls\/vault.crt\"\n        tls_key_file  = \"\/vault\/userconfig\/vault-server-tls\/vault.key\"\n        tls_client_ca_file = 
\"\/vault\/userconfig\/vault-server-tls\/vault.ca\"\n      }\n\n      storage \"file\" {\n        path = \"\/vault\/data\"\n      }\n```","site":"vault","answers_cleaned":"    layout   docs  page title   Standalone Server with TLS  sidebar current   docs platform k8s examples standalone tls  description       Describes how to set up a standalone Vault with TLS certificate        Standalone server with TLS   include  helm version mdx   This example can be used to set up a single server Vault cluster using TLS   1  Create key   certificate using Kubernetes CA 2  Store key   cert into  Kubernetes secrets store  https   kubernetes io docs concepts configuration secret   3  Configure helm chart to use Kubernetes secret from step 2     1  create key   certificate using kubernetes CA  There are four variables that will be used in this example      bash   SERVICE is the name of the Vault service in kubernetes    It does not have to match the actual running service  though it may help for consistency  export SERVICE vault server tls    NAMESPACE where the Vault service is running  export NAMESPACE vault namespace    SECRET NAME to create in the kubernetes secrets store  export SECRET NAME vault server tls    TMPDIR is a temporary working directory  export TMPDIR  tmp    CSR NAME will be the name of our certificate signing request as seen by kubernetes  export CSR NAME vault csr      1  Create a key for Kubernetes to sign         shell session      openssl genrsa  out   TMPDIR  vault key 2048    Generating RSA private key  2048 bit long modulus                                                                                                                           e is 65537  0x10001          2  Create a Certificate Signing Request  CSR       1  Create a file    TMPDIR  csr conf  with the following contents            bash       cat   EOF    TMPDIR  csr conf        req        req extensions   v3 req       distinguished name   req distinguished name        req distinguished 
name          v3 req         basicConstraints   CA FALSE       keyUsage   nonRepudiation  digitalSignature  keyEncipherment       extendedKeyUsage   serverAuth       subjectAltName    alt names        alt names        DNS 1       SERVICE        DNS 2       SERVICE    NAMESPACE        DNS 3       SERVICE    NAMESPACE  svc       DNS 4       SERVICE    NAMESPACE  svc cluster local       IP 1   127 0 0 1       EOF               2  Create a CSR            bash       openssl req  new                      key   TMPDIR  vault key                      subj   CN system node   SERVICE    NAMESPACE  svc  O system nodes                       out   TMPDIR  server csr                      config   TMPDIR  csr conf            3  Create the certificate          Important Note    If you are using EKS  certificate signing requirements have changed   As per the AWS  certificate signing  https   docs aws amazon com eks latest userguide cert signing html  documentation  EKS version  1 22  and later now requires the  signerName  to be  beta eks amazonaws com app serving   otherwise  the CSR will be approved but the certificate will not be issued      1  Create a file    TMPDIR  csr yaml  with the following contents            bash       cat   EOF    TMPDIR  csr yaml       apiVersion  certificates k8s io v1       kind  CertificateSigningRequest       metadata          name    CSR NAME        spec          signerName  kubernetes io kubelet serving         groups            system authenticated         request    base64   TMPDIR  server csr   tr  d   n           signerName  kubernetes io kubelet serving         usages            digital signature           key encipherment           server auth       EOF               2  Send the CSR to Kubernetes            shell session         kubectl create  f   TMPDIR  csr yaml       certificatesigningrequest certificates k8s io vault csr created                     If this process is automated  you may need to wait to ensure the CSR has been received 
and stored         kubectl get csr   CSR NAME       3  Approve the CSR in Kubernetes            shell session         kubectl certificate approve   CSR NAME        certificatesigningrequest certificates k8s io vault csr approved               4  Verify that the certificate was approved and issued           shell session         kubectl get csr   CSR NAME        NAME        AGE     SIGNERNAME                                    REQUESTOR                        CONDITION       vault csr   1m13s   kubernetes io kubelet serving                 kubernetes admin                 Approved Issued               2  store key  cert  and kubernetes CA into kubernetes secrets store  1  Retrieve the certificate         shell session      serverCert   kubectl get csr   CSR NAME   o jsonpath    status certificate                  If this process is automated  you may need to wait to ensure the certificate has been created     If it hasn t  this will return an empty string   2  Write the certificate out to a file         shell session      echo    serverCert     openssl base64  d  A  out   TMPDIR  vault crt         3  Retrieve Kubernetes CA         bash    kubectl get secret         o jsonpath    items     type    kubernetes io service account token     data  ca  crt              base64   decode     TMPDIR  vault ca         4  Create the namespace          shell session       kubectl create namespace   NAMESPACE      namespace vault namespace created          5  Store the key  cert  and Kubernetes CA into Kubernetes secrets         shell session      kubectl create secret generic   SECRET NAME             namespace   NAMESPACE             from file vault key   TMPDIR  vault key            from file vault crt   TMPDIR  vault crt            from file vault ca   TMPDIR  vault ca       secret vault server tls created            3  helm configuration  The below  custom values yaml  can be used to set up a single server Vault cluster using TLS  This assumes that a Kubernetes  secret  
exists with the server certificate  key and certificate authority      yaml global    enabled  true   tlsDisable  false  server    extraEnvironmentVars      VAULT CACERT   vault userconfig vault server tls vault ca    volumes        name  userconfig vault server tls       secret          defaultMode  420         secretName  vault server tls   Matches the   SECRET NAME  from above    volumeMounts        mountPath   vault userconfig vault server tls       name  userconfig vault server tls       readOnly  true    standalone      enabled  true     config          listener  tcp            address         8200          cluster address         8201          tls cert file     vault userconfig vault server tls vault crt          tls key file      vault userconfig vault server tls vault key          tls client ca file     vault userconfig vault server tls vault ca                 storage  file            path     vault data             "}
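The key and CSR steps from the record above can be rehearsed locally before anything touches a cluster. A minimal sketch, assuming only `openssl` is available: the `SERVICE`/`NAMESPACE` values are the example defaults from this page, and the final SAN check is our addition (not part of the original procedure).

```shell
# Local rehearsal of step 1 (key + CSR) with an added check that the SAN
# entries actually made it into the CSR before it is sent to Kubernetes.
set -e
SERVICE=vault-server-tls
NAMESPACE=vault-namespace
TMPDIR=$(mktemp -d)

# Create a key for Kubernetes to sign.
openssl genrsa -out "${TMPDIR}/vault.key" 2048

# csr.conf with the SANs used throughout this example.
cat <<EOF >"${TMPDIR}/csr.conf"
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = ${SERVICE}
DNS.2 = ${SERVICE}.${NAMESPACE}
DNS.3 = ${SERVICE}.${NAMESPACE}.svc
DNS.4 = ${SERVICE}.${NAMESPACE}.svc.cluster.local
IP.1 = 127.0.0.1
EOF

# Create the CSR.
openssl req -new \
  -key "${TMPDIR}/vault.key" \
  -subj "/CN=system:node:${SERVICE}.${NAMESPACE}.svc/O=system:nodes" \
  -out "${TMPDIR}/server.csr" \
  -config "${TMPDIR}/csr.conf"

# Confirm the SANs are present before building csr.yaml around this CSR.
openssl req -in "${TMPDIR}/server.csr" -noout -text |
  grep -q "DNS:${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "CSR contains expected SANs"
```

Running the SAN check up front catches a mistyped `SERVICE`/`NAMESPACE` before the CSR is approved, which is cheaper than debugging a certificate the listener rejects.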
{"questions":"vault Describes how to set up Disaster Recovery clusters with Integrated Storage Raft page title Highly Available Vault Enterprise Disaster Recovery Clusters with Raft layout docs sidebar current docs platform k8s examples enterprise dr with raft Highly available Vault enterprise disaster recovery clusters with integrated storage Raft","answers":"---\nlayout: 'docs'\npage_title: 'Highly Available Vault Enterprise Disaster Recovery Clusters with Raft'\nsidebar_current: 'docs-platform-k8s-examples-enterprise-dr-with-raft'\ndescription: |-\n  Describes how to set up Disaster Recovery clusters with Integrated Storage (Raft)\n---\n\n# Highly available Vault enterprise disaster recovery clusters with integrated storage (Raft)\n\n@include 'helm\/version.mdx'\n\nThe following is an example of creating a disaster recovery cluster using Vault Helm.\n\nFor more information on Disaster Recovery, [see the official documentation](\/vault\/docs\/enterprise\/replication\/).\n\n-> For license configuration refer to [Running Vault Enterprise](\/vault\/docs\/platform\/k8s\/helm\/enterprise).\n\n## Primary cluster\n\nFirst, create the primary cluster:\n\n```shell\nhelm install vault-primary hashicorp\/vault \\\n  --set='server.image.repository=hashicorp\/vault-enterprise' \\\n  --set='server.image.tag=1.18.1-ent' \\\n  --set='server.ha.enabled=true' \\\n  --set='server.ha.raft.enabled=true'\n```\n\nNext, initialize and unseal `vault-primary-0` pod:\n\n```shell\nkubectl exec -ti vault-primary-0 -- vault operator init\nkubectl exec -ti vault-primary-0 -- vault operator unseal\n```\n\nFinally, join the remaining pods to the Raft cluster and unseal them. 
The pods\nwill need to communicate directly so we'll configure the pods to use the internal\nservice provided by the Helm chart:\n\n```shell\nkubectl exec -ti vault-primary-1 -- vault operator raft join http:\/\/vault-primary-0.vault-primary-internal:8200\nkubectl exec -ti vault-primary-1 -- vault operator unseal\n\nkubectl exec -ti vault-primary-2 -- vault operator raft join http:\/\/vault-primary-0.vault-primary-internal:8200\nkubectl exec -ti vault-primary-2 -- vault operator unseal\n```\n\nTo verify if the Raft cluster has successfully been initialized, run the following.\n\nFirst, login using the `root` token on the `vault-primary-0` pod:\n\n```shell\nkubectl exec -ti vault-primary-0 -- vault login\n```\n\nNext, list all the raft peers:\n\n```shell\n$ kubectl exec -ti vault-primary-0 -- vault operator raft list-peers\n\nNode                                    Address                        State       Voter\n----                                    -------                        -----       -----\na1799962-8711-7f28-23f0-cea05c8a527d    vault-primary-0.vault-primary-internal:8201    leader      true\ne6876c97-aaaa-a92e-b99a-0aafab105745    vault-primary-1.vault-primary-internal:8201    follower    true\n4b5d7383-ff31-44df-e008-6a606828823b    vault-primary-2.vault-primary-internal:8201    follower    true\n```\n\n## Secondary cluster\n\nWith the primary cluster created, next create a secondary cluster and enable\ndisaster recovery replication.\n\n```shell\nhelm install vault-secondary hashicorp\/vault \\\n  --set='server.image.repository=hashicorp\/vault-enterprise' \\\n  --set='server.image.tag=1.18.1-ent' \\\n  --set='server.ha.enabled=true' \\\n  --set='server.ha.raft.enabled=true'\n```\n\nNext, initialize and unseal `vault-secondary-0` pod:\n\n```shell\nkubectl exec -ti vault-secondary-0 -- vault operator init\nkubectl exec -ti vault-secondary-0 -- vault operator unseal\n```\n\nFinally, join the remaining pods to the Raft cluster and unseal them. 
The pods\nwill need to communicate directly so we'll configure the pods to use the internal\nservice provided by the Helm chart:\n\n```shell\nkubectl exec -ti vault-secondary-1 -- vault operator raft join http:\/\/vault-secondary-0.vault-secondary-internal:8200\nkubectl exec -ti vault-secondary-1 -- vault operator unseal\n\nkubectl exec -ti vault-secondary-2 -- vault operator raft join http:\/\/vault-secondary-0.vault-secondary-internal:8200\nkubectl exec -ti vault-secondary-2 -- vault operator unseal\n```\n\nTo verify if the Raft cluster has successfully been initialized, run the following.\n\nFirst, login using the `root` token on the `vault-secondary-0` pod:\n\n```shell\nkubectl exec -ti vault-secondary-0 -- vault login\n```\n\nNext, list all the raft peers:\n\n```shell\n$ kubectl exec -ti vault-secondary-0 -- vault operator raft list-peers\n\nNode                                    Address                        State       Voter\n----                                    -------                        -----       -----\na1799962-8711-7f28-23f0-cea05c8a527d    vault-secondary-0.vault-secondary-internal:8201    leader      true\ne6876c97-aaaa-a92e-b99a-0aafab105745    vault-secondary-1.vault-secondary-internal:8201    follower    true\n4b5d7383-ff31-44df-e008-6a606828823b    vault-secondary-2.vault-secondary-internal:8201    follower    true\n```\n\n## Enable disaster recovery replication on primary\n\nWith the initial clusters setup, we can now configure them for disaster recovery replication.\n\nFirst, on the primary cluster, enable replication:\n\n```shell\nkubectl exec -ti vault-primary-0 -- vault write -f sys\/replication\/dr\/primary\/enable primary_cluster_addr=https:\/\/vault-primary-active:8201\n```\n\nNext, create a token the secondary cluster will use to configure replication:\n\n```shell\nkubectl exec -ti vault-primary-0 -- vault write sys\/replication\/dr\/primary\/secondary-token id=secondary\n```\n\nThe token in the output will be used when 
configuring the secondary cluster.\n\n## Enable disaster recovery replication on secondary\n\nUsing the token created in the last step, enable disaster recovery replication on the secondary:\n\n```shell\nkubectl exec -ti vault-secondary-0 -- vault write sys\/replication\/dr\/secondary\/enable token=<TOKEN FROM PRIMARY>\n```\n\nLast, delete the remaining secondary pods and unseal them using the primary unseal token\nafter Kubernetes reschedules them:\n\n```shell\nkubectl delete pod vault-secondary-1\nkubectl exec -ti vault-secondary-1 -- vault operator unseal <PRIMARY UNSEAL TOKEN>\n\nkubectl delete pod vault-secondary-2\nkubectl exec -ti vault-secondary-2 -- vault operator unseal <PRIMARY UNSEAL TOKEN>\n```","site":"vault","answers_cleaned":"    layout   docs  page title   Highly Available Vault Enterprise Disaster Recovery Clusters with Raft  sidebar current   docs platform k8s examples enterprise dr with raft  description       Describes how to set up Disaster Recovery clusters with Integrated Storage  Raft         Highly available Vault enterprise disaster recovery clusters with integrated storage  Raft    include  helm version mdx   The following is an example of creating a disaster recovery cluster using Vault Helm   For more information on Disaster Recovery   see the official documentation   vault docs enterprise replication        For license configuration refer to  Running Vault Enterprise   vault docs platform k8s helm enterprise       Primary cluster  First  create the primary cluster      shell helm install vault primary hashicorp vault       set  server image repository hashicorp vault enterprise        set  server image tag 1 18 1 ent        set  server ha enabled true        set  server ha raft enabled true       Next  initialize and unseal  vault primary 0  pod      shell kubectl exec  ti vault primary 0    vault operator init kubectl exec  ti vault primary 0    vault operator unseal      Finally  join the remaining pods to the Raft cluster and unseal 
them  The pods will need to communicate directly so we ll configure the pods to use the internal service provided by the Helm chart      shell kubectl exec  ti vault primary 1    vault operator raft join http   vault primary 0 vault primary internal 8200 kubectl exec  ti vault primary 1    vault operator unseal  kubectl exec  ti vault primary 2    vault operator raft join http   vault primary 0 vault primary internal 8200 kubectl exec  ti vault primary 2    vault operator unseal      To verify if the Raft cluster has successfully been initialized  run the following   First  login using the  root  token on the  vault primary 0  pod      shell kubectl exec  ti vault primary 0    vault login      Next  list all the raft peers      shell   kubectl exec  ti vault primary 0    vault operator raft list peers  Node                                    Address                        State       Voter                                                                                          a1799962 8711 7f28 23f0 cea05c8a527d    vault primary 0 vault primary internal 8201    leader      true e6876c97 aaaa a92e b99a 0aafab105745    vault primary 1 vault primary internal 8201    follower    true 4b5d7383 ff31 44df e008 6a606828823b    vault primary 2 vault primary internal 8201    follower    true         Secondary cluster  With the primary cluster created  next create a secondary cluster and enable disaster recovery replication      shell helm install vault secondary hashicorp vault       set  server image repository hashicorp vault enterprise        set  server image tag 1 18 1 ent        set  server ha enabled true        set  server ha raft enabled true       Next  initialize and unseal  vault secondary 0  pod      shell kubectl exec  ti vault secondary 0    vault operator init kubectl exec  ti vault secondary 0    vault operator unseal      Finally  join the remaining pods to the Raft cluster and unseal them  The pods will need to communicate directly so we ll configure the 
pods to use the internal service provided by the Helm chart      shell kubectl exec  ti vault secondary 1    vault operator raft join http   vault secondary 0 vault secondary internal 8200 kubectl exec  ti vault secondary 1    vault operator unseal  kubectl exec  ti vault secondary 2    vault operator raft join http   vault secondary 0 vault secondary internal 8200 kubectl exec  ti vault secondary 2    vault operator unseal      To verify if the Raft cluster has successfully been initialized  run the following   First  login using the  root  token on the  vault secondary 0  pod      shell kubectl exec  ti vault secondary 0    vault login      Next  list all the raft peers      shell   kubectl exec  ti vault secondary 0    vault operator raft list peers  Node                                    Address                        State       Voter                                                                                          a1799962 8711 7f28 23f0 cea05c8a527d    vault secondary 0 vault secondary internal 8201    leader      true e6876c97 aaaa a92e b99a 0aafab105745    vault secondary 1 vault secondary internal 8201    follower    true 4b5d7383 ff31 44df e008 6a606828823b    vault secondary 2 vault secondary internal 8201    follower    true         Enable disaster recovery replication on primary  With the initial clusters setup  we can now configure them for disaster recovery replication   First  on the primary cluster  enable replication      shell kubectl exec  ti vault primary 0    vault write  f sys replication dr primary enable primary cluster addr https   vault primary active 8201      Next  create a token the secondary cluster will use to configure replication      shell kubectl exec  ti vault primary 0    vault write sys replication dr primary secondary token id secondary      The token in the output will be used when configuring the secondary cluster      Enable disaster recovery replication on secondary  Using the token created in the last step  
enable disaster recovery replication on the secondary      shell kubectl exec  ti vault secondary 0    vault write sys replication dr secondary enable token  TOKEN FROM PRIMARY       Last  delete the remaining secondary pods and unseal them using the primary unseal token after Kubernetes reschedules them      shell kubectl delete pod vault secondary 1 kubectl exec  ti vault secondary 1    vault operator unseal  PRIMARY UNSEAL TOKEN   kubectl delete pod vault secondary 2 kubectl exec  ti vault secondary 2    vault operator unseal  PRIMARY UNSEAL TOKEN     "}
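Both clusters in the record above are verified by reading `vault operator raft list-peers` output by eye. That check can be scripted. A minimal sketch: the peer table below is hard-coded from this record's sample output (an assumption for local runnability); against a live cluster you would instead capture it with `kubectl exec -ti vault-primary-0 -- vault operator raft list-peers`.

```shell
# Sanity-check raft list-peers output: exactly one leader, all three
# nodes voting. Columns are Node / Address / State / Voter.
peers='Node    Address    State       Voter
----    -------    -----       -----
a1799962    vault-primary-0.vault-primary-internal:8201    leader      true
e6876c97    vault-primary-1.vault-primary-internal:8201    follower    true
4b5d7383    vault-primary-2.vault-primary-internal:8201    follower    true'

# Count rows whose State column is "leader" and whose Voter column is "true".
leaders=$(echo "$peers" | awk '$3 == "leader"' | wc -l)
voters=$(echo "$peers" | awk '$4 == "true"' | wc -l)

[ "$leaders" -eq 1 ] || { echo "expected exactly one leader, got $leaders"; exit 1; }
[ "$voters" -eq 3 ] || { echo "expected three voters, got $voters"; exit 1; }
echo "raft quorum looks healthy: $leaders leader, $voters voters"
```

The same check works for the secondary cluster by swapping the pod name in the `kubectl exec` capture.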
{"questions":"vault Describes how to set up the Vault Agent Injector with certificates and keys generated by cert manager Vault agent injector TLS with Cert Manager layout docs sidebar current docs platform k8s examples injector tls cert manager page title Vault Agent Injector TLS with Cert Manager","answers":"---\nlayout: 'docs'\npage_title: 'Vault Agent Injector TLS with Cert-Manager'\nsidebar_current: 'docs-platform-k8s-examples-injector-tls-cert-manager'\ndescription: |-\n  Describes how to set up the Vault Agent Injector with certificates and keys generated by cert-manager.\n---\n\n# Vault agent injector TLS with Cert-Manager\n\nThe following instructions demonstrate how to configure the Vault Agent Injector to use certificates generated by [cert-manager](https:\/\/cert-manager.io\/). This allows you to run multiple replicas of the Vault Agent Injector in a Kubernetes cluster.\n\n## Prerequisites\n\nInstall cert-manager if not already installed (see the [cert-manager documentation](https:\/\/cert-manager.io\/docs\/installation\/)). For example, with helm:\n\n```shell\n$ helm repo add jetstack https:\/\/charts.jetstack.io\n$ helm repo update\n$ helm install cert-manager jetstack\/cert-manager \\\n  --namespace cert-manager \\\n  --create-namespace \\\n  --set installCRDs=true\n```\n\n## Create a certificate authority (CA)\n\nFor this example we will bootstrap a self-signed certificate authority (CA) [Issuer](https:\/\/cert-manager.io\/docs\/configuration\/). 
If you already have a [ClusterIssuer](https:\/\/cert-manager.io\/docs\/concepts\/issuer\/) configured for your cluster, you may skip this step.\n\n```yaml\napiVersion: cert-manager.io\/v1\nkind: Issuer\nmetadata:\n  name: selfsigned\nspec:\n  selfSigned: {}\n---\napiVersion: cert-manager.io\/v1\nkind: Certificate\nmetadata:\n  name: injector-selfsigned-ca\nspec:\n  isCA: true\n  commonName: Agent Inject CA\n  secretName: injector-ca-secret\n  duration: 87660h  # 10 years\n  privateKey:\n    algorithm: ECDSA\n    size: 256\n  issuerRef:\n    name: selfsigned\n    kind: Issuer\n    group: cert-manager.io\n---\napiVersion: cert-manager.io\/v1\nkind: Issuer\nmetadata:\n  name: injector-ca-issuer\nspec:\n  ca:\n    secretName: injector-ca-secret\n```\n\nSave that to a file named `ca-issuer.yaml`, and apply to your Kubernetes cluster:\n\n```console\n$ kubectl apply -n vault -f ca-issuer.yaml\nissuer.cert-manager.io\/selfsigned created\ncertificate.cert-manager.io\/injector-selfsigned-ca created\nissuer.cert-manager.io\/injector-ca-issuer created\n\n$ kubectl -n vault get issuers -o wide\nNAME                 READY   STATUS                AGE\ninjector-ca-issuer   True    Signing CA verified   7s\nselfsigned           True                          7s\n\n$ kubectl -n vault get certificates injector-selfsigned-ca -o wide\nNAME                     READY   SECRET               ISSUER       STATUS                                          AGE\ninjector-selfsigned-ca   True    injector-ca-secret   selfsigned   Certificate is up to date and has not expired   32s\n```\n\n## Create the Vault agent injector certificate\n\nNext we can create a request for cert-manager to generate a certificate and key\nsigned by the certificate authority above. This certificate and key will be used\nby the Vault Agent Injector for TLS communications with the Kubernetes API. 
\n\nThe Certificate request object references the CA issuer created above, and specifies the name of the Secret where the CA, Certificate, and Key will be stored by cert-manager.\n\n```yaml\napiVersion: cert-manager.io\/v1\nkind: Certificate\nmetadata:\n  name: injector-certificate\nspec:\n  secretName: injector-tls\n  duration: 24h\n  renewBefore: 144m  # roughly 10% of 24h\n  dnsNames:\n  - vault-agent-injector-svc\n  - vault-agent-injector-svc.vault\n  - vault-agent-injector-svc.vault.svc\n  issuerRef:\n    name: injector-ca-issuer\n  commonName: Agent Inject Cert\n```\n\n~> **Important Note:** The dnsNames for the certificate must be configured to use the name\nof the Vault Agent Injector Kubernetes service and namespace where it is deployed.\n\nIn this example the Vault Agent Injector service name is `vault-agent-injector-svc` in the `vault` namespace.\nThis uses the pattern `<k8s service name>.<k8s namespace>.svc`.\n\nSave the Certificate yaml to a file and apply to your cluster:\n\n```shell\n$ kubectl -n vault apply -f injector-certificate.yaml\ncertificate.cert-manager.io\/injector-certificate created\n\n$ kubectl -n vault get certificates injector-certificate -o wide\nNAME                   READY   SECRET         ISSUER               STATUS                                          AGE\ninjector-certificate   True    injector-tls   injector-ca-issuer   Certificate is up to date and has not expired   41s\n\n$ kubectl -n vault get secret injector-tls\nNAME           TYPE                DATA   AGE\ninjector-tls   kubernetes.io\/tls   3      6m59s\n```\n\n## Configuration\n\nNow that a certificate authority and a signed certificate have been created, we can now configure\nHelm and the Vault Agent Injector to use them.\n\nInstall the Vault Agent Injector with the following custom values:\n\n```shell\n$ helm install vault hashicorp\/vault \\\n  --namespace=vault \\\n  --set injector.replicas=2 \\\n  --set injector.leaderElector.enabled=false \\\n  --set 
injector.certs.secretName=injector-tls \\\n  --set injector.webhook.annotations=\"cert-manager.io\/inject-ca-from: vault\/injector-certificate\"\n```","site":"vault","answers_cleaned":"    layout   docs  page title   Vault Agent Injector TLS with Cert Manager  sidebar current   docs platform k8s examples injector tls cert manager  description       Describes how to set up the Vault Agent Injector with certificates and keys generated by cert manager         Vault agent injector TLS with Cert Manager  The following instructions demonstrate how to configure the Vault Agent Injector to use certificates generated by  cert manager  https   cert manager io    This allows you to run multiple replicas of the Vault Agent Injector in a Kubernetes cluster      Prerequisites  Install cert manager if not already installed  see the  cert manager documentation  https   cert manager io docs installation     For example  with helm      shell   helm repo add jetstack https   charts jetstack io   helm repo update   helm install cert manager jetstack cert manager       namespace cert manager       create namespace       set installCRDs true         Create a certificate authority  CA   For this example we will bootstrap a self signed certificate authority  CA   Issuer  https   cert manager io docs configuration    If you already have a  ClusterIssuer  https   cert manager io docs concepts issuer   configured for your cluster  you may skip this step      yaml apiVersion  cert manager io v1 kind  Issuer metadata    name  selfsigned spec    selfSigned         apiVersion  cert manager io v1 kind  Certificate metadata    name  injector selfsigned ca spec    isCA  true   commonName  Agent Inject CA   secretName  injector ca secret   duration  87660h    10 years   privateKey      algorithm  ECDSA     size  256   issuerRef      name  selfsigned     kind  Issuer     group  cert manager io     apiVersion  cert manager io v1 kind  Issuer metadata    name  injector ca issuer spec    ca      secretName  
injector ca secret      Save that to a file named  ca issuer yaml   and apply to your Kubernetes cluster      console   kubectl apply  n vault  f ca issuer yaml issuer cert manager io selfsigned created certificate cert manager io injector selfsigned ca created issuer cert manager io injector ca issuer created    kubectl  n vault get issuers  o wide NAME                 READY   STATUS                AGE injector ca issuer   True    Signing CA verified   7s selfsigned           True                          7s    kubectl  n vault get certificates injector selfsigned ca  o wide NAME                     READY   SECRET               ISSUER       STATUS                                          AGE injector selfsigned ca   True    injector ca secret   selfsigned   Certificate is up to date and has not expired   32s         Create the Vault agent injector certificate  Next we can create a request for cert manager to generate a certificate and key signed by the certificate authority above  This certificate and key will be used by the Vault Agent Injector for TLS communications with the Kubernetes API    The Certificate request object references the CA issuer created above  and specifies the name of the Secret where the CA  Certificate  and Key will be stored by cert manager      yaml apiVersion  cert manager io v1 kind  Certificate metadata    name  injector certificate spec    secretName  injector tls   duration  24h   renewBefore  144m    roughly 10  of 24h   dnsNames      vault agent injector svc     vault agent injector svc vault     vault agent injector svc vault svc   issuerRef      name  injector ca issuer   commonName  Agent Inject Cert           Important Note    The dnsNames for the certificate must be configured to use the name of the Vault Agent Injector Kubernetes service and namespace where it is deployed   In this example the Vault Agent Injector service name is  vault agent injector svc  in the  vault  namespace  This uses the pattern   k8s service name   
k8s namespace  svc    Save the Certificate yaml to a file and apply to your cluster      shell   kubectl  n vault apply  f injector certificate yaml certificate cert manager io injector certificate created    kubectl  n vault get certificates injector certificate  o wide NAME                   READY   SECRET         ISSUER               STATUS                                          AGE injector certificate   True    injector tls   injector ca issuer   Certificate is up to date and has not expired   41s    kubectl  n vault get secret injector tls NAME           TYPE                DATA   AGE injector tls   kubernetes io tls   3      6m59s         Configuration  Now that a certificate authority and a signed certificate have been created  we can now configure Helm and the Vault Agent Injector to use them   Install the Vault Agent Injector with the following custom values      shell   helm install vault hashicorp vault       namespace vault       set injector replicas 2       set injector leaderElector enabled false       set injector certs secretName injector tls       set injector webhook annotations  cert manager io inject ca from   injector certificate     "}
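The record above insists that the Certificate's `dnsNames` follow the `<k8s service name>.<k8s namespace>.svc` pattern and notes that `renewBefore: 144m` is roughly 10% of the 24h duration. A hedged sketch making both mechanical; the variable names (`dns_block`, `renew_m`) are ours, and the service/namespace values match this page's example.

```shell
# Derive the Certificate dnsNames from the injector service name and
# namespace, and compute renewBefore as 10% of the certificate duration.
SERVICE=vault-agent-injector-svc
NAMESPACE=vault

dns_block="  dnsNames:
  - ${SERVICE}
  - ${SERVICE}.${NAMESPACE}
  - ${SERVICE}.${NAMESPACE}.svc"

duration_h=24
renew_m=$(( duration_h * 60 / 10 ))   # 10% of 24h in minutes

echo "$dns_block"
echo "  duration: ${duration_h}h"
echo "  renewBefore: ${renew_m}m"    # prints 144m, matching the example
```

Generating the list this way keeps the dnsNames in lockstep with the service name and namespace the chart actually deploys into, which is the failure mode the Important Note warns about.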
{"questions":"vault page title Vault Agent Injector TLS Configuration Describes how to set up the Vault Agent Injector with manually generated certificates and keys layout docs sidebar current docs platform k8s examples injector tls Vault agent injector TLS configuration","answers":"---\nlayout: 'docs'\npage_title: 'Vault Agent Injector TLS Configuration'\nsidebar_current: 'docs-platform-k8s-examples-injector-tls'\ndescription: |-\n  Describes how to set up the Vault Agent Injector with manually generated certificates and keys.\n---\n\n# Vault agent injector TLS configuration\n\n@include 'helm\/version.mdx'\n\nThe following instructions demonstrate how to manually configure the Vault Agent Injector\nwith self-signed certificates.\n\n## Create a certificate authority (CA)\n\nFirst, create a private key to be used by our custom Certificate Authority (CA):\n\n```shell\n$ openssl genrsa -out injector-ca.key 2048\n```\n\nNext, create a certificate authority certificate:\n\n~> **Important Note:** Values such as days (how long the certificate is valid for) should be configured for your environment.\n\n```shell\n$ openssl req \\\n   -x509 \\\n   -new \\\n   -nodes \\\n   -key injector-ca.key \\\n   -sha256 \\\n   -days 1825 \\\n   -out injector-ca.crt \\\n   -subj \"\/C=US\/ST=CA\/L=San Francisco\/O=HashiCorp\/CN=vault-agent-injector-svc\"\n```\n\n## Create Vault agent injector certificate\n\nNext we can create a certificate and key signed by the certificate authority generated above. 
This\ncertificate and key will be used by the Vault Agent Injector for TLS communications with the Kubernetes\nAPI.\n\nFirst, create a private key for the certificate:\n\n```shell\n$ openssl genrsa -out tls.key 2048\n```\n\nNext, create a certificate signing request (CSR) to be used when signing the certificate:\n\n```shell\n$ openssl req \\\n   -new \\\n   -key tls.key \\\n   -out tls.csr \\\n   -subj \"\/C=US\/ST=CA\/L=San Francisco\/O=HashiCorp\/CN=vault-agent-injector-svc\"\n```\n\nAfter creating the CSR, create an extension file to configure additional parameters for signing\nthe certificate.\n\n~> **Important Note:** The alternative names for the certificate must be configured to use the name\nof the Vault Agent Injector Kubernetes service and namespace where it's created.\n\nIn this example the Vault Agent Injector service name is `vault-agent-injector-svc` in the `vault` namespace.\nThis uses the pattern `<k8s service name>.<k8s namespace>.svc.cluster.local`.\n\n```shell\n$ cat <<EOF >csr.conf\nauthorityKeyIdentifier=keyid,issuer\nbasicConstraints=CA:FALSE\nkeyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment\nsubjectAltName = @alt_names\n\n[alt_names]\nDNS.1 = vault-agent-injector-svc\nDNS.2 = vault-agent-injector-svc.vault\nDNS.3 = vault-agent-injector-svc.vault.svc\nDNS.4 = vault-agent-injector-svc.vault.svc.cluster.local\nEOF\n```\n\nFinally, sign the certificate:\n\n~> **Important Note:** Values such as days (how long the certificate is valid for) should be configured for your environment.\n\n```shell\n$ openssl x509 \\\n  -req \\\n  -in tls.csr \\\n  -CA injector-ca.crt \\\n  -CAkey injector-ca.key \\\n  -CAcreateserial \\\n  -out tls.crt \\\n  -days 1825 \\\n  -sha256 \\\n  -extfile csr.conf\n```\n\n## Configuration\n\nNow that a certificate authority and a signed certificate have been created, we can now configure\nHelm and the Vault Agent Injector to use them.\n\nFirst, create a Kubernetes secret containing the certificate 
and key created above:\n\n~> **Important Note:** This example assumes the Vault Agent Injector is running in the `vault` namespace.\n\n```shell\n$ kubectl create secret generic injector-tls \\\n    --from-file tls.crt \\\n    --from-file tls.key \\\n    --namespace=vault\n```\n\nNext, base64 encode the certificate authority so Kubernetes can verify the authenticity of the certificate:\n\n```shell\n$ export CA_BUNDLE=$(cat injector-ca.crt | base64)\n```\n\nFinally, install the Vault Agent Injector with the following custom values:\n\n```shell\n$ helm install vault hashicorp\/vault \\\n  --namespace=vault \\\n  --set=\"injector.certs.secretName=injector-tls\" \\\n  --set=\"injector.certs.caBundle=${CA_BUNDLE?}\"\n```","site":"vault","answers_cleaned":"    layout   docs  page title   Vault Agent Injector TLS Configuration  sidebar current   docs platform k8s examples injector tls  description       Describes how to set up the Vault Agent Injector with manually generated certificates and keys         Vault agent injector TLS configuration   include  helm version mdx   The following instructions demonstrate how to manually configure the Vault Agent Injector with self signed certificates      Create a certificate authority  CA   First  create a private key to be used by our custom Certificate Authority  CA       shell   openssl genrsa  out injector ca key 2048      Next  create a certificate authority certificate        Important Note    Values such as days  how long the certificate is valid for  should be configured for your environment      shell   openssl req       x509       new       nodes       key injector ca key       sha256       days 1825       out injector ca crt       subj   C US ST CA L San Francisco O HashiCorp CN vault agent injector svc          Create Vault agent injector certificate  Next we can create a certificate and key signed by the certificate authority generated above  This certificate and key will be used by the Vault Agent Injector for TLS 
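As an optional sanity check of the steps above, the following sketch builds a throwaway CA and certificate the same way in a temporary directory, then confirms the certificate chains to the CA and carries the expected SANs. The file names follow the guide; the short lifetimes and reduced SAN list are assumptions for illustration only.

```shell
# Throwaway work dir so no real key material is touched.
cd "$(mktemp -d)"

# Stand-in CA, built the same way as the guide's injector-ca pair.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout injector-ca.key -out injector-ca.crt \
  -days 1 -subj "/CN=injector-ca" 2>/dev/null

# Leaf key, CSR, and SAN config following the steps above.
openssl genrsa -out tls.key 2048 2>/dev/null
openssl req -new -key tls.key -out tls.csr -subj "/CN=vault-agent-injector-svc"
cat <<EOF >csr.conf
basicConstraints=CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = vault-agent-injector-svc.vault.svc
DNS.2 = vault-agent-injector-svc.vault.svc.cluster.local
EOF
openssl x509 -req -in tls.csr -CA injector-ca.crt -CAkey injector-ca.key \
  -CAcreateserial -out tls.crt -days 1 -sha256 -extfile csr.conf 2>/dev/null

# The signed cert should chain to the CA and carry the service SANs.
openssl verify -CAfile injector-ca.crt tls.crt
openssl x509 -in tls.crt -noout -text | grep DNS
```

If `openssl verify` does not print `tls.crt: OK`, or the SANs are missing, the webhook TLS handshake with the Kubernetes API will fail later, so it is cheaper to catch it here.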
---
layout: docs
page_title: Vault Agent Sidecar Injector Examples
description: This section documents examples of using the Vault Agent Injector.
---

# Vault agent injector examples

The following are different configuration examples to support a variety of deployment models.

~> A common mistake is to set the annotations on the Deployment or other resource. Ensure that the injector annotations are specified on the pod specification when using higher level constructs such as deployments, jobs or statefulsets.

## Before using the Vault agent injector

Before applying Vault Agent injection annotations to pods, the following requirements should be satisfied.

### Connectivity

- the Kubernetes API can connect to the Vault Agent injector service on port `443`, and the injector can connect to the Kubernetes API,
- Vault can connect to the Kubernetes API,
- Pods in the Kubernetes cluster can connect to Vault.

~> Note: The Kubernetes API typically runs on the master nodes, and the Vault Agent injector on a worker node in a Kubernetes cluster. <br/><br/>
On Kubernetes clusters that have aggregator routing enabled (e.g.
[GKE private clusters](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules)),
the Kubernetes API will connect directly to the injector service endpoint, which is on port `8080`.

### Kubernetes and Vault configuration

- the Kubernetes auth method should be configured and enabled in Vault,
- the pod should have a service account,
- the desired secrets exist within Vault,
- the service account should be bound to a Vault role with a policy enabling access to the desired secrets.

For more information on configuring the Vault Kubernetes auth method, [see the official documentation](/vault/docs/auth/kubernetes#configuration).

## Debugging

If an error occurs with a mutation request, Kubernetes will attach the error to the owner of the pod. Check the following for errors:

- If the pod was created by a deployment or statefulset, check for errors in the `replicaset` that owns the pod.
- If the pod was created by a job, check the `job` for errors.

## Patching existing pods

To patch existing pods, a Kubernetes patch can be applied to add the required annotations to pods.
When applying a patch, the pods will be rescheduled.

First, create the patch:

```bash
cat <<EOF >> ./patch.yaml
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-status: "update"
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/db-app"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/db-app" -}}
          postgres://{{ .Data.username }}:{{ .Data.password }}@postgres:5432/appdb?sslmode=disable
          {{- end -}}
        vault.hashicorp.com/role: "db-app"
        vault.hashicorp.com/ca-cert: "/vault/tls/ca.crt"
        vault.hashicorp.com/client-cert: "/vault/tls/client.crt"
        vault.hashicorp.com/client-key: "/vault/tls/client.key"
        vault.hashicorp.com/tls-secret: "vault-tls-client"
EOF
```

Next, apply the patch:

```bash
kubectl patch deployment <MY DEPLOYMENT> --patch "$(cat patch.yaml)"
```

The pod should now be rescheduled with additional containers. The pod can be inspected using the `kubectl describe` command:

```bash
kubectl describe pod <name of pod>
```

## Deployments, StatefulSets, etc.

The annotations for configuring Vault Agent injection must be on the pod specification.
Since higher level resources such as Deployments wrap pod specification templates, Vault Agent Injector can be used with all of these higher level constructs, too.

An example Deployment below shows how to enable Vault Agent injection:

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-example
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-example
  template:
    metadata:
      labels:
        app: app-example
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-inject-secret-db-creds: 'database/creds/db-app'
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/db-app" -}}
          postgres://{{ .Data.username }}:{{ .Data.password }}@postgres:5432/appdb?sslmode=disable
          {{- end -}}
        vault.hashicorp.com/role: 'db-app'
        vault.hashicorp.com/ca-cert: '/vault/tls/ca.crt'
        vault.hashicorp.com/client-cert: '/vault/tls/client.crt'
        vault.hashicorp.com/client-key: '/vault/tls/client.key'
        vault.hashicorp.com/tls-secret: 'vault-tls-client'
    spec:
      containers:
        - name: app
          image: 'app:1.0.0'
      serviceAccountName: app-example
```

## ConfigMap example

The following example creates a deployment that mounts a Kubernetes ConfigMap containing Vault Agent configuration files.
For a complete list of the Vault Agent configuration settings, [see the Agent documentation](/vault/docs/agent-and-proxy/agent/template#vault-agent-templates).

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-example
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-example
  template:
    metadata:
      labels:
        app: app-example
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-configmap: 'my-configmap'
        vault.hashicorp.com/tls-secret: 'vault-tls-client'
    spec:
      containers:
        - name: app
          image: 'app:1.0.0'
      serviceAccountName: app-example
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  config.hcl: |
    "auto_auth" = {
      "method" = {
        "config" = {
          "role" = "db-app"
        }
        "type" = "kubernetes"
      }

      "sink" = {
        "config" = {
          "path" = "/home/vault/.token"
        }

        "type" = "file"
      }
    }

    "exit_after_auth" = false
    "pid_file" = "/home/vault/.pid"

    "template" = {
      "contents" = "{{ with secret \"database/creds/db-app\" }}postgres://{{ .Data.username }}:{{ .Data.password }}@postgres:5432/mydb?sslmode=disable{{ end }}"
      "destination" = "/vault/secrets/db-creds"
    }

    "vault" = {
      "address" = "https://vault.demo.svc.cluster.local:8200"
      "ca_cert" = "/vault/tls/ca.crt"
      "client_cert" = "/vault/tls/client.crt"
      "client_key" = "/vault/tls/client.key"
    }
  config-init.hcl: |
    "auto_auth" = {
      "method" = {
        "config" = {
          "role" = "db-app"
        }
        "type" = "kubernetes"
      }

      "sink" = {
        "config" = {
          "path" = "/home/vault/.token"
        }

        "type" = "file"
      }
    }

    "exit_after_auth" = true
    "pid_file" = "/home/vault/.pid"

    "template" = {
      "contents" = "{{ with secret \"database/creds/db-app\" }}postgres://{{ .Data.username }}:{{ .Data.password }}@postgres:5432/mydb?sslmode=disable{{ end }}"
      "destination" = "/vault/secrets/db-creds"
    }

    "vault" = {
      "address" = "https://vault.demo.svc.cluster.local:8200"
      "ca_cert" = "/vault/tls/ca.crt"
      "client_cert" = "/vault/tls/client.crt"
      "client_key" = "/vault/tls/client.key"
    }
```

## Environment variable example

The following example demonstrates how templates can be used to create environment variables. A template should be created that exports a Vault secret as an environment variable, and the application container should source those files during startup.

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/role: 'web'
        vault.hashicorp.com/agent-inject-secret-config: 'secret/data/web'
        # Environment variable export template
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/data/web" -}}
            export api_key="{{ .Data.data.api_key }}"
          {{- end }}
    spec:
      serviceAccountName: web
      containers:
        - name: web
          image: alpine:latest
          command: ['sh', '-c']
          args: ['source /vault/secrets/config && <entrypoint script>']
          ports:
            - containerPort: 9090
```

## AppRole authentication

The following example demonstrates how the AppRole authentication method can be used by Vault Agent for retrieving secrets.
A Kubernetes secret containing the AppRole secret ID and role ID should be created first.

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-extra-secret: 'approle-example'
        vault.hashicorp.com/auth-type: 'approle'
        vault.hashicorp.com/auth-path: 'auth/approle'
        vault.hashicorp.com/auth-config-role-id-file-path: '/vault/custom/role-id'
        vault.hashicorp.com/auth-config-secret-id-file-path: '/vault/custom/secret-id'
        vault.hashicorp.com/agent-inject-secret-db-creds: 'database/creds/db-app'
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/db-app" -}}
          postgres://{{ .Data.username }}:{{ .Data.password }}@postgres.postgres.svc:5432/wizard?sslmode=disable
          {{- end -}}
        vault.hashicorp.com/role: 'my-role'
        vault.hashicorp.com/tls-secret: 'vault-tls'
        vault.hashicorp.com/ca-cert: '/vault/tls/ca.crt'
    spec:
      serviceAccountName: web
      containers:
        - name: web
          image: alpine:latest
          args: ['sh', '-c', 'source /vault/secrets/config && <entrypoint script>']
          ports:
            - containerPort: 9090
```

## PKI cert example

The following example demonstrates how to use the [`pkiCert` function][pkiCert] and [`writeToFile` function][writeToFile] from consul-template to create two files from a template: one for the certificate and CA (`cert.pem`) and one for the key (`cert.key`) generated by [Vault's PKI Secrets Engine](/vault/docs/secrets/pki).

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/role: 'web'
        vault.hashicorp.com/agent-inject-secret-certs: 'pki/issue/cert'
        vault.hashicorp.com/agent-inject-template-certs: |
          {{- with pkiCert "pki/issue/cert" "common_name=test.example.com" -}}
          {{ .Cert }}{{ .CA }}{{ .Key }}
          {{ .Key | writeToFile "/vault/secrets/certs/cert.key" "vault" "vault" "0400" }}
          {{ .CA | writeToFile "/vault/secrets/certs/cert.pem" "vault" "vault" "0400" }}
          {{ .Cert | writeToFile "/vault/secrets/certs/cert.pem" "vault" "vault" "0400" "append" }}
          {{- end -}}
    spec:
      serviceAccountName: web
      containers:
        - name: web
          image: nginx
```

[pkiCert]: https://github.com/hashicorp/consul-template/blob/main/docs/templating-language.md#pkicert
[writeToFile]: https://github.com/hashicorp/consul-template/blob/main/docs/templating-language.md#writeToFile

## Cross namespace secret sharing ((#cross-namespace))

1. [Configure Vault for secret sharing across namespaces][cross-namespace].
1. Use the following Pod annotations to authenticate to the Kubernetes method in the `us-west-org` namespace and render secrets from the `us-east-org` namespace into the file `/vault/secrets/marketing`:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: cross-namespace
  namespace: client-nicecorp
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "cross-namespace-demo"
    vault.hashicorp.com/auth-path: "us-west-org/auth/kubernetes"
    vault.hashicorp.com/agent-inject-template-marketing: |
      
      : 
      
spec:
  serviceAccountName: mega-app
  containers:
    - name: campaign
      image: nginx
```

[cross-namespace]: https://support.hashicorp.com/hc/en-us/articles/27093291534995-How-to-configure-cross-namespace-access-in-Vault-Enterprise
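The AppRole example above assumes the Kubernetes secret `approle-example` already exists. A sketch of that secret follows; the key names are chosen to match the `auth-config-*-file-path` annotations (the `agent-extra-secret` volume is mounted at `/vault/custom`), and the ID values are placeholders to be replaced with the role ID and secret ID read from Vault.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: approle-example
type: Opaque
stringData:
  # Mounted by the injector at /vault/custom/role-id and /vault/custom/secret-id.
  role-id: 'PLACEHOLDER-ROLE-ID'
  secret-id: 'PLACEHOLDER-SECRET-ID'
```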
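The environment variable example above relies on the app container sourcing the rendered file before starting. The pattern can be sketched without a cluster; the path and secret value below are fabricated stand-ins for what the agent would render at `/vault/secrets/config`.

```shell
# Fake the file the agent template would render at /vault/secrets/config.
dir=$(mktemp -d)
cat <<'EOF' > "$dir/config"
export api_key="example-api-key"
EOF

# The container's command sources the file, then runs the real entrypoint,
# so the secret lives only in that process tree's environment.
sh -c ". '$dir/config' && echo \"api_key=\${api_key}\""
```

Because the export happens inside the container's own shell, the secret never appears in the pod spec or in `kubectl describe` output, unlike values set via `env:`.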
---
layout: docs
page_title: Agent Sidecar Injector Overview
description: >-
  The Vault Agent Sidecar Injector is a Kubernetes admission webhook that adds
  Vault Agent containers to pods for consuming Vault secrets.
---

# Agent sidecar injector

The Vault Agent Injector alters pod specifications to include Vault Agent containers that render Vault secrets to a shared memory volume using [Vault Agent Templates](/vault/docs/agent-and-proxy/agent/template). By rendering secrets to a shared volume, containers within the pod can consume Vault secrets without being Vault aware.

The injector is a [Kubernetes Mutation Webhook Controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/). The controller intercepts pod events and applies mutations to the pod if annotations exist within the request. This functionality is provided by the [vault-k8s](https://github.com/hashicorp/vault-k8s) project and can be automatically installed and configured using the [Vault Helm](https://github.com/hashicorp/vault-helm) chart.

@include 'kubernetes-supported-versions.mdx'

## Overview

The Vault Agent Injector works by intercepting pod `CREATE` and `UPDATE` events in Kubernetes. The controller parses the event and looks for the metadata annotation `vault.hashicorp.com/agent-inject: true`. If found, the controller will alter the pod specification based on other annotations present.

### Mutations

At a minimum, every container in the pod will be configured to mount a shared memory volume.
This volume is mounted to `/vault/secrets` and will be used by the Vault Agent containers for sharing secrets with the other containers in the pod.

Next, two types of Vault Agent containers can be injected: init and sidecar. The init container will prepopulate the shared memory volume with the requested secrets prior to the other containers starting. The sidecar container will continue to authenticate and render secrets to the same location for as long as the pod runs. Using annotations, the init and sidecar containers may be disabled.

Last, two additional types of volumes can be optionally mounted to the Vault Agent containers. The first is a secret volume containing TLS requirements such as client and CA (certificate authority) certificates and keys. This volume is useful when communicating with and verifying the Vault server's authenticity using TLS. The second is a configuration map containing Vault Agent configuration files. This volume is useful to customize Vault Agent beyond what the provided annotations offer.

### Authenticating with Vault

The primary method of authentication with Vault when using the Vault Agent Injector is the service account attached to the pod. Other authentication methods can be configured using annotations.

For Kubernetes authentication, the service account must be bound to a Vault role and a policy granting access to the desired secrets.

A service account must be present to use the Vault Agent Injector with the Kubernetes authentication method.
It is _not_ recommended to bind Vault roles to the default service\naccount provided to pods if no service account is defined.\n\n### Requesting secrets\n\nThere are two methods of configuring the Vault Agent containers to render secrets:\n\n- the `vault.hashicorp.com\/agent-inject-secret` annotation, or\n- a configuration map containing Vault Agent configuration files.\n\nOnly one of these methods may be used at any time.\n\n#### Secrets via annotations\n\nTo configure secret injection using annotations, the user must supply:\n\n- one or more _secret_ annotations, and\n- the Vault role used to access those secrets.\n\nThe annotation must have the format:\n\n```yaml\nvault.hashicorp.com\/agent-inject-secret-<unique-name>: \/path\/to\/secret\n```\n\nThe unique name will be the filename of the rendered secret and must be unique if\nmultiple secrets are defined by the user. For example, consider the following\nsecret annotations:\n\n```yaml\nvault.hashicorp.com\/agent-inject-secret-foo: database\/roles\/app\nvault.hashicorp.com\/agent-inject-secret-bar: consul\/creds\/app\nvault.hashicorp.com\/role: 'app'\n```\n\nThe first annotation will be rendered to `\/vault\/secrets\/foo` and the second\nannotation will be rendered to `\/vault\/secrets\/bar`.\n\nIt's possible to set the file format of the rendered secret using the annotation. For example the\nfollowing secret will be rendered to `\/vault\/secrets\/foo.txt`:\n\n```yaml\nvault.hashicorp.com\/agent-inject-secret-foo.txt: database\/roles\/app\nvault.hashicorp.com\/role: 'app'\n```\n\nThe secret unique name must consist of alphanumeric characters, `.`, `_` or `-`.\n\n##### Secret templates\n\n~> Vault Agent uses the Consul Template project to render secrets. For more information\non writing templates, see the [Consul Template documentation](https:\/\/github.com\/hashicorp\/consul-template).\n\nHow the secret is rendered to the file is also configurable. 
To configure the template\nused, the user must supply a _template_ annotation using the same unique name of\nthe secret. The annotation must have the following format:\n\n```yaml\nvault.hashicorp.com\/agent-inject-template-<unique-name>: |\n  <\n    TEMPLATE\n    HERE\n  >\n```\n\nFor example, consider the following:\n\n```yaml\nvault.hashicorp.com\/agent-inject-secret-foo: 'database\/creds\/db-app'\nvault.hashicorp.com\/agent-inject-template-foo: |\n  {{- with secret \"database\/creds\/db-app\" -}}\n  postgres:\/\/{{ .Data.username }}:{{ .Data.password }}@postgres:5432\/mydb?sslmode=disable\n  {{- end -}}\nvault.hashicorp.com\/role: 'app'\n```\n\nThe rendered secret would look like this within the container:\n\n```shell-session\n$ cat \/vault\/secrets\/foo\npostgres:\/\/v-kubernet-pg-app-q0Z7WPfVN:A1a-BUEuQR52oAqPrP1J@postgres:5432\/mydb?sslmode=disable\n```\n\n~> The default left and right template delimiters are `{{` and `}}`.\n\nIf no template is provided the following generic template is used:\n\n```\n{{ with secret \"\/path\/of\/secret\" }}\n    {{ range $k, $v := .Data }}\n        {{ $k }}: {{ $v }}\n    {{ end }}\n{{ end }}\n```\n\nFor example, the following annotation will use the default template to render\nPostgreSQL secrets found at the configured path:\n\n```yaml\nvault.hashicorp.com\/agent-inject-secret-foo: 'database\/roles\/pg-app'\nvault.hashicorp.com\/role: 'app'\n```\n\nThe rendered secret would look like this within the container:\n\n```shell-session\n$ cat \/vault\/secrets\/foo\npassword: A1a-BUEuQR52oAqPrP1J\nusername: v-kubernet-pg-app-q0Z7WPfVNqqTJuoDqCTY-1576529094\n```\n\n~> Some secrets such as KV are stored in maps. Their data can be accessed using `.Data.data.<NAME>`\n\n### Renewals and updating secrets\n\nFor more information on when Vault Agent fetches and renews secrets, see the\n[Agent documentation](\/vault\/docs\/agent-and-proxy\/agent\/template#renewals-and-updating-secrets).\n\n### Vault agent configuration map\n\nFor advanced use cases, it may be required to define Vault Agent configuration\nfiles to mount instead of using secret and template annotations. 
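\n\nAs a rough sketch (the role name `app` and the template contents are hypothetical; see the Vault Agent documentation for the full set of options), such a sidecar configuration file might look like:\n\n```hcl\nexit_after_auth = false\n\nauto_auth {\n  method \"kubernetes\" {\n    mount_path = \"auth\/kubernetes\"\n    config = {\n      role = \"app\"\n    }\n  }\n}\n\ntemplate {\n  destination = \"\/vault\/secrets\/foo\"\n  contents    = \"{{ with secret \\\"database\/creds\/db-app\\\" }}{{ .Data.username }}{{ end }}\"\n}\n```\n\n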
The Vault Agent\nInjector supports mounting ConfigMaps by specifying the name using the `vault.hashicorp.com\/agent-configmap`\nannotation. The configuration files will be mounted to `\/vault\/configs`.\n\nThe configuration map must contain either one or both of the following files:\n\n- **config-init.hcl** used by the init container. This must have `exit_after_auth` set to `true`.\n- **config.hcl** used by the sidecar container. This must have `exit_after_auth` set to `false`.\n\nAn example of mounting a Vault Agent configmap [can be found here](\/vault\/docs\/platform\/k8s\/injector\/examples#configmap-example).\n\n## Tutorial\n\nRefer to the [Injecting Secrets into Kubernetes Pods via Vault Helm\nSidecar](\/vault\/tutorials\/kubernetes\/kubernetes-sidecar) guide\nfor a step-by-step tutorial.","site":"vault"}
{"questions":"vault Annotations page title Agent Sidecar Injector Annotations are organized into two sections agent and vault All of the annotations below layout docs This section documents the configurable annotations for the Vault Agent Injector The following are the available annotations for the injector These annotations","answers":"---\nlayout: docs\npage_title: Agent Sidecar Injector Annotations\ndescription: This section documents the configurable annotations for the Vault Agent Injector.\n---\n\n# Annotations\n\nThe following are the available annotations for the injector. These annotations\nare organized into two sections: agent and vault. All of the annotations below\nchange the configurations of the Vault Agent containers injected into the pod.\n\n## Agent annotations\n\nAgent annotations change the Vault Agent containers templating configuration. For\nexample, agent annotations allow users to define what secrets they want, how to render\nthem, optional commands to run, etc.\n\n- `vault.hashicorp.com\/agent-inject` - configures whether injection is explicitly\n  enabled or disabled for a pod. This should be set to a `true` or `false` value.\n  Defaults to `false`.\n\n- `vault.hashicorp.com\/agent-inject-status` - blocks further mutations\n  by adding the value `injected` to the pod after a successful mutation.\n\n- `vault.hashicorp.com\/agent-configmap` - name of the configuration map where Vault\n  Agent configuration file and templates can be found.\n\n- `vault.hashicorp.com\/agent-image` - name of the Vault docker image to use. This\n  value overrides the default image configured in the injector and is usually\n  not needed. Defaults to `hashicorp\/vault:1.18.1`.\n\n- `vault.hashicorp.com\/agent-init-first` - configures the pod to run the Vault Agent\n  init container first if `true` (last if `false`). This is useful when other init\n  containers need pre-populated secrets. This should be set to a `true` or `false`\n  value. 
Defaults to `false`.\n\n- `vault.hashicorp.com\/agent-inject-command` - configures Vault Agent\n  to run a command after the template has been rendered. To map a command to a specific\n  secret, use the same unique secret name: `vault.hashicorp.com\/agent-inject-command-SECRET-NAME`.\n  For example, if a secret annotation `vault.hashicorp.com\/agent-inject-secret-foobar`\n  is configured, `vault.hashicorp.com\/agent-inject-command-foobar` would map a command\n  to that secret.\n\n- `vault.hashicorp.com\/agent-inject-secret` - configures Vault Agent\n  to retrieve the secrets from Vault required by the container. The name of the\n  secret is any unique string after `vault.hashicorp.com\/agent-inject-secret-`,\n  such as `vault.hashicorp.com\/agent-inject-secret-foobar`. The value is the path\n  in Vault where the secret is located.\n\n- `vault.hashicorp.com\/agent-inject-template` - configures the template Vault Agent\n  should use for rendering a secret. The name of the template is any\n  unique string after `vault.hashicorp.com\/agent-inject-template-`, such as\n  `vault.hashicorp.com\/agent-inject-template-foobar`. This should map to the same\n  unique value provided in `vault.hashicorp.com\/agent-inject-secret-`. If not provided,\n  a default generic template is used.\n\n- `vault.hashicorp.com\/agent-template-left-delim` - configures the left delimiter for Vault Agent to\n  use when rendering a secret template. The name of the template is any unique string after\n  `vault.hashicorp.com\/agent-template-left-delim-`, such as\n  `vault.hashicorp.com\/agent-template-left-delim-foobar`. This should map to the same unique value\n  provided in `vault.hashicorp.com\/agent-inject-template-`. 
If not provided, a default left\n  delimiter is used as defined by [Vault Agent Template Config](\/vault\/docs\/agent-and-proxy\/agent\/template#left_delimiter).\n\n- `vault.hashicorp.com\/agent-template-right-delim` - configures the right delimiter for Vault Agent\n  to use when rendering a secret template. The name of the template is any unique string after\n  `vault.hashicorp.com\/agent-template-right-delim-`, such as\n  `vault.hashicorp.com\/agent-template-right-delim-foobar`. This should map to the same unique value\n  provided in `vault.hashicorp.com\/agent-inject-template-`. If not provided, a default right\n  delimiter is used as defined by [Vault Agent Template Config](\/vault\/docs\/agent-and-proxy\/agent\/template#right_delimiter).\n\n- `vault.hashicorp.com\/error-on-missing-key` - configures whether Vault Agent\n  should exit with an error when accessing a struct or map field\/key that does\n  not exist. The name of the secret is the string after\n  `vault.hashicorp.com\/error-on-missing-key-`, and should map to the same unique\n  value provided in `vault.hashicorp.com\/agent-inject-secret-`. Defaults to\n  `false`. See [Vault Agent Template Config](\/vault\/docs\/agent-and-proxy\/agent\/template#template-configurations)\n  for more details.\n\n- `vault.hashicorp.com\/agent-inject-containers` - comma-separated list that specifies in\n  which containers the secrets volume should be mounted. If not provided, the secrets\n  volume will be mounted in all containers in the pod.\n\n- `vault.hashicorp.com\/secret-volume-path` - configures where on the filesystem a secret\n  will be rendered. To map a path to a specific secret, use the same unique secret name:\n  `vault.hashicorp.com\/secret-volume-path-SECRET-NAME`. For example, if a secret annotation\n  `vault.hashicorp.com\/agent-inject-secret-foobar` is configured,\n  `vault.hashicorp.com\/secret-volume-path-foobar` would configure where that secret\n  is rendered. 
If no secret name is provided, this sets the default for all rendered\n  secrets in the pod.\n\n- `vault.hashicorp.com\/agent-inject-file` - configures the filename and path\n  in the secrets volume where a Vault secret will be written. This should be used\n  with `vault.hashicorp.com\/secret-volume-path`, which mounts a memory volume to\n  the specified path. If `secret-volume-path` is used, the path can be omitted from\n  this value. To map a filename to a specific secret, use the same unique secret name:\n  `vault.hashicorp.com\/agent-inject-file-SECRET-NAME`. For example, if a secret annotation\n  `vault.hashicorp.com\/agent-inject-secret-foobar` is configured,\n  `vault.hashicorp.com\/agent-inject-file-foobar` would configure the filename.\n\n- `vault.hashicorp.com\/agent-inject-perms` - configures the permissions of the\n  file to create in the secrets volume. The name of the secret is the string\n  after \"vault.hashicorp.com\/agent-inject-perms-\", and should map to the same\n  unique value provided in \"vault.hashicorp.com\/agent-inject-secret-\". The value\n  is the octal permission, for example: `0644`.\n\n- `vault.hashicorp.com\/agent-inject-template-file` - configures the path and filename of the\n  custom template to use. This should be used with `vault.hashicorp.com\/extra-secret`,\n  which mounts a Kubernetes secret to `\/vault\/custom`. To map a template file to a specific secret,\n  use the same unique secret name: `vault.hashicorp.com\/agent-inject-template-file-SECRET-NAME`.\n  For example, if a secret annotation `vault.hashicorp.com\/agent-inject-secret-foobar` is configured,\n  `vault.hashicorp.com\/agent-inject-template-file-foobar` would configure the template file.\n\n- `vault.hashicorp.com\/agent-inject-default-template` - configures the default template type for rendering\n  secrets if no custom template is defined. Possible values include `map` and `json`. 
Defaults to `map`.\n\n- `vault.hashicorp.com\/template-config-exit-on-retry-failure` - controls whether\n  Vault Agent exits after it has exhausted its number of template retry attempts\n  due to failures. Defaults to `true`. See [Vault Agent Template\n  Config](\/vault\/docs\/agent-and-proxy\/agent\/template#global-configurations) for more details.\n\n- `vault.hashicorp.com\/template-static-secret-render-interval` - If specified,\n  configures how often Vault Agent Template should render non-leased secrets such as KV v2.\n  See [Vault Agent Template Config](\/vault\/docs\/agent-and-proxy\/agent\/template#global-configurations) for more details.\n\n- `vault.hashicorp.com\/template-max-connections-per-host` - If specified, limits\n  the total number of connections that the Vault Agent templating engine can use\n  for a particular Vault host. The connection limit includes all connections in the dialing,\n  active, and idle states. See [Vault Agent Template Config](\/vault\/docs\/agent-and-proxy\/agent\/template#global-configurations)\n  for more details.\n\n- `vault.hashicorp.com\/agent-extra-secret` - mounts Kubernetes secret as a volume at\n  `\/vault\/custom` in the sidecar\/init containers. Useful for custom Agent configs with\n  auto-auth methods such as approle that require paths to secrets be present.\n\n- `vault.hashicorp.com\/agent-inject-token` - configures Vault Agent to share the Vault\n  token with other containers in the pod, in a file named `token` in the root of the\n  secrets volume (i.e. `\/vault\/secrets\/token`). This is helpful when other containers\n  communicate directly with Vault but require auto-authentication provided by Vault\n  Agent. This should be set to a `true` or `false` value. Defaults to `false`.\n\n- `vault.hashicorp.com\/agent-limits-cpu` - configures the CPU limits on the Vault\n  Agent containers. Defaults to `500m`. 
Setting this to an empty string disables\n  CPU limits.\n\n- `vault.hashicorp.com\/agent-limits-mem` - configures the memory limits on the Vault\n  Agent containers. Defaults to `128Mi`. Setting this to an empty string disables\n  memory limits.\n\n- `vault.hashicorp.com\/agent-limits-ephemeral` - configures the ephemeral\n  storage limit on the Vault Agent containers. Defaults to unset, which\n  disables ephemeral storage limits. Also available as a command-line option\n  (`-ephemeral-storage-limit`) or environment variable (`AGENT_INJECT_EPHEMERAL_LIMIT`)\n  to set the default for all injected Agent containers. **Note:** Pod limits are\n  equal to the sum of all container limits. Setting this limit without setting it\n  for other containers will also affect the limits of other containers in the pod.\n  See [Kubernetes resources documentation][k8s-resources] for more details.\n\n- `vault.hashicorp.com\/agent-requests-cpu` - configures the CPU requests on the\n  Vault Agent containers. Defaults to `250m`. Setting this to an empty string disables\n  CPU requests.\n\n- `vault.hashicorp.com\/agent-requests-mem` - configures the memory requests on the\n  Vault Agent containers. Defaults to `64Mi`. Setting this to an empty string disables\n  memory requests.\n\n- `vault.hashicorp.com\/agent-requests-ephemeral` - configures the ephemeral\n  storage requests on the Vault Agent Containers. Defaults to unset, which\n  disables ephemeral storage requests (and will default to the ephemeral limit\n  if set). Also available as a command-line option (`-ephemeral-storage-request`)\n  or environment variable (`AGENT_INJECT_EPHEMERAL_REQUEST`) to set the default\n  for all injected Agent containers. **Note:** Pod requests are equal to the sum\n  of all container requests. Setting this limit without setting it for other\n  containers will also affect the requests of other containers in the pod. 
See\n  [Kubernetes resources documentation][k8s-resources] for more details.\n\n- `vault.hashicorp.com\/agent-revoke-on-shutdown` - configures whether the sidecar\n  will revoke its own token before shutting down. This setting will only be applied\n  to the Vault Agent sidecar container. This should be set to a `true` or `false`\n  value. Defaults to `false`.\n\n- `vault.hashicorp.com\/agent-revoke-grace` - configures the grace period, in seconds,\n  for revoking its own token before shutting down. This setting will only be applied\n  to the Vault Agent sidecar container. Defaults to `5s`.\n\n- `vault.hashicorp.com\/agent-pre-populate` - configures whether an init container\n  is included to pre-populate the shared memory volume with secrets prior to the\n  containers starting. This should be set to a `true` or `false` value. Defaults\n  to `true`.\n\n- `vault.hashicorp.com\/agent-pre-populate-only` - configures whether an init container\n  is the only injected container. If true, no sidecar container will be injected\n  at runtime of the pod. Enabling this option is recommended for workloads of\n  type `CronJob` or `Job` to ensure a clean pod termination.\n\n- `vault.hashicorp.com\/preserve-secret-case` - configures Vault Agent to preserve\n  the secret name case when creating the secret files. This should be set to a `true`\n  or `false` value. Defaults to `false`.\n\n- `vault.hashicorp.com\/agent-run-as-user` - sets the user (uid) to run Vault\n  agent as. Also available as a command-line option (`-run-as-user`) or\n  environment variable (`AGENT_INJECT_RUN_AS_USER`) for the injector. Defaults\n  to 100.\n\n- `vault.hashicorp.com\/agent-run-as-group` - sets the group (gid) to run Vault\n  agent as. Also available as a command-line option (`-run-as-group`) or\n  environment variable (`AGENT_INJECT_RUN_AS_GROUP`) for the injector. 
Defaults\n  to 1000.\n\n- `vault.hashicorp.com\/agent-set-security-context` - controls whether\n  `SecurityContext` is set in injected containers. Also available as a\n  command-line option (`-set-security-context`) or environment variable\n  (`AGENT_INJECT_SET_SECURITY_CONTEXT`). Defaults to `true`.\n\n- `vault.hashicorp.com\/agent-run-as-same-user` - run the injected Vault agent\n  containers as the User (uid) of the first application container in the pod.\n  Requires `Spec.Containers[0].SecurityContext.RunAsUser` to be set in the pod\n  spec. Also available as a command-line option (`-run-as-same-user`) or\n  environment variable (`AGENT_INJECT_RUN_AS_SAME_USER`). Defaults to `false`.\n\n  ~> **Note**: If the first application container in the pod is running as root\n  (uid 0), the `run-as-same-user` annotation will fail injection with an error.\n\n- `vault.hashicorp.com\/agent-share-process-namespace` - sets\n  [shareProcessNamespace] in the Pod spec where Vault Agent is injected.\n  Defaults to `false`.\n\n- `vault.hashicorp.com\/agent-cache-enable` - configures Vault Agent to enable\n  [caching](\/vault\/docs\/agent-and-proxy\/agent\/caching). In Vault 1.7+ this annotation will also enable\n  a Vault Agent persistent cache. This persistent cache will be shared between the init\n  and sidecar container to reuse tokens and leases retrieved by the init container.\n  Defaults to `false`.\n\n- `vault.hashicorp.com\/agent-cache-use-auto-auth-token` - configures Vault Agent cache\n  to authenticate on behalf of the requester. Set to `force` to enable. Disabled\n  by default.\n\n- `vault.hashicorp.com\/agent-cache-listener-port` - configures Vault Agent cache\n  listening port. Defaults to `8200`.\n\n- `vault.hashicorp.com\/agent-copy-volume-mounts` - copies the mounts from the specified\n  container and mounts them to the Vault Agent containers. 
The service account volume is\n  ignored.\n\n- `vault.hashicorp.com\/agent-service-account-token-volume-name` - the optional name of a projected volume containing a service account token for use with auto-auth against Vault's Kubernetes auth method. If the volume is mounted to another container in the deployment, the token volume will be mounted to the same location in the vault-agent containers. Otherwise it will be mounted at the default location of `\/var\/run\/secrets\/vault.hashicorp.com\/serviceaccount\/`.\n\n- `vault.hashicorp.com\/agent-enable-quit` - enable the [`\/agent\/v1\/quit` endpoint](\/vault\/docs\/agent-and-proxy\/agent#quit) on an injected agent. This option defaults to false, and if true will be set on the existing cache listener, or a new localhost listener with a basic cache stanza configured. The [agent-cache-listener-port annotation](\/vault\/docs\/platform\/k8s\/injector\/annotations#vault-hashicorp-com-agent-cache-listener-port) can be used to change the port.\n\n- `vault.hashicorp.com\/agent-telemetry` - specifies the [telemetry](\/vault\/docs\/configuration\/telemetry) configuration for the\n  Vault Agent sidecar. 
The name of the config is any unique string after\n  `vault.hashicorp.com\/agent-telemetry-`, such as `vault.hashicorp.com\/agent-telemetry-prometheus_retention_time`.\n  This annotation can be reused multiple times to configure multiple settings for the agent telemetry.\n\n- `vault.hashicorp.com\/go-max-procs` - set the `GOMAXPROCS` environment variable for injected agents\n\n- `vault.hashicorp.com\/agent-json-patch` - change the injected agent sidecar container using a [JSON patch](https:\/\/jsonpatch.com\/) before it is created.\n  This can be used to add, remove, or modify any attribute of the container.\n  For example, setting this to `[{\"op\": \"replace\", \"path\": \"\/name\", \"value\": \"different-name\"}]` will update the agent container's name to be `different-name`\n  instead of the default `vault-agent`.\n\n- `vault.hashicorp.com\/agent-init-json-patch` - same as `vault.hashicorp.com\/agent-json-patch`, except that the JSON patch will be applied to the\n  injected init container instead.\n\n## Vault annotations\n\nVault annotations change how the Vault Agent containers communicate with Vault. For\nexample, Vault's address, TLS certificates to use, client parameters such as timeouts,\netc.\n\n- `vault.hashicorp.com\/auth-config` - configures additional parameters for the configured\n  authentication method. The name of the config is any unique string after\n  `vault.hashicorp.com\/auth-config-`, such as `vault.hashicorp.com\/auth-config-role-id-file-path`.\n  This annotation can be reused multiple times to configure multiple settings for the authentication\n  method. Some authentication methods may require additional secrets and should be mounted via the\n  `vault.hashicorp.com\/agent-extra-secret` annotation. 
For a list of valid authentication configurations,\n  see the Vault Agent [auto-auth documentation](\/vault\/docs\/agent-and-proxy\/autoauth\/methods).\n\n- `vault.hashicorp.com\/auth-path` - configures the authentication path for the Kubernetes\n  auth method. Defaults to `auth\/kubernetes`.\n\n- `vault.hashicorp.com\/auth-type` - configures the authentication type for Vault Agent.\n  Defaults to `kubernetes`. For a list of valid authentication methods, see the Vault Agent\n  [auto-auth documentation](\/vault\/docs\/agent-and-proxy\/autoauth\/methods).\n\n- `vault.hashicorp.com\/auth-min-backoff` - set the [min_backoff](\/vault\/docs\/agent-and-proxy\/autoauth#min_backoff) option in the auto-auth config. Requires Vault 1.11+.\n\n- `vault.hashicorp.com\/auth-max-backoff` - set the [max_backoff](\/vault\/docs\/agent-and-proxy\/autoauth#max_backoff) option in the auto-auth config\n\n- `vault.hashicorp.com\/agent-auto-auth-exit-on-err` - set the [exit_on_err](\/vault\/docs\/agent-and-proxy\/autoauth#exit_on_err) option in the auto-auth config\n\n- `vault.hashicorp.com\/ca-cert` - path of the CA certificate used to verify Vault's\n  TLS. This can also be set as the default for all injected Agents via the\n  `AGENT_INJECT_VAULT_CACERT_BYTES` environment variable which takes a PEM-encoded\n  certificate or bundle.\n\n- `vault.hashicorp.com\/ca-key` - path of the CA public key used to verify Vault's\n  TLS.\n\n- `vault.hashicorp.com\/client-cert` - path of the client certificate used when\n  communicating with Vault via mTLS.\n\n- `vault.hashicorp.com\/client-key` - path of the client public key used when communicating\n  with Vault via mTLS.\n\n- `vault.hashicorp.com\/client-max-retries` - configures number of Vault Agent retry\n  attempts when certain errors are encountered. Defaults to 2, for 3 total attempts.\n  Set this to `0` or less to disable retrying. 
Error codes that are retried are 412\n  (client consistency requirement not satisfied) and all 5xx except for 501 (not implemented).\n\n- `vault.hashicorp.com\/client-timeout` - configures the request timeout threshold of the Vault Agent\n  when communicating with Vault. Defaults to `60s` and accepts a bare number of\n  seconds (`60`) or a duration string (`60s`, `1m`).\n\n- `vault.hashicorp.com\/log-level` - configures the verbosity of the Vault Agent\n  log level. Default is `info`.\n\n- `vault.hashicorp.com\/log-format` - configures the log type for Vault Agent. Possible\n  values are `standard` and `json`. Default is `standard`.\n\n- `vault.hashicorp.com\/namespace` - configures the Vault Enterprise namespace to\n  be used when requesting secrets from Vault. Also available as a command-line\n  option (`-vault-namespace`) or environment variable\n  (`AGENT_INJECT_VAULT_NAMESPACE`) to set the default namespace for all injected\n  Agents.\n\n- `vault.hashicorp.com\/proxy-address` - configures the HTTP proxy to use when connecting\n  to a Vault server.\n\n- `vault.hashicorp.com\/role` - configures the Vault role used by the Vault Agent\n  auto-auth method. Required when `vault.hashicorp.com\/agent-configmap` is not set.\n\n- `vault.hashicorp.com\/service` - configures the Vault address for the injected\n  Vault Agent to use. This value overrides the default Vault address configured\n  in the injector, and may either be the address of a Vault service within the\n  same Kubernetes cluster as the injector, or an external Vault URL.\n\n- `vault.hashicorp.com\/tls-secret` - name of the Kubernetes secret containing TLS\n  client and CA certificates and keys. This is mounted to `\/vault\/tls`.\n\n- `vault.hashicorp.com\/tls-server-name` - name of the Vault server to verify the\n  authenticity of the server when communicating with Vault over TLS.\n\n- `vault.hashicorp.com\/tls-skip-verify` - if true, configures the Vault Agent to\n  skip verification of Vault's TLS certificate. 
It's not recommended to set this\n  value to true in a production environment.\n\n- `vault.hashicorp.com\/agent-disable-idle-connections` - Comma-separated [list\n  of Vault Agent features](\/vault\/docs\/agent-and-proxy\/agent#disable_idle_connections) where idle\n  connections should be disabled. Also available as a command-line option\n  (`-disable-idle-connections`) or environment variable\n  (`AGENT_INJECT_DISABLE_IDLE_CONNECTIONS`) to set the default for all injected\n  Agents.\n\n- `vault.hashicorp.com\/agent-disable-keep-alives` - Comma-separated [list of\n  Vault Agent features](\/vault\/docs\/agent-and-proxy\/agent#disable_keep_alives) where keep-alives\n  should be disabled. Also available as a command-line option\n  (`-disable-keep-alives`) or environment variable\n  (`AGENT_INJECT_DISABLE_KEEP_ALIVES`) to set the default for all injected\n  Agents.\n\n[k8s-resources]: https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/#resource-requests-and-limits-of-pod-and-container\n[shareProcessNamespace]: https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/share-process-namespace\/","site":"vault"}
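Taken together, the annotations documented above are plain Kubernetes pod annotations. The following is a minimal sketch of a pod that enables injection, authenticates via the Kubernetes auth method, points the agent at mounted TLS material, and renames the sidecar with a JSON patch; the role name, TLS secret name, and container details are hypothetical, not values from the documentation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments
  annotations:
    # Enable injection and authenticate with the Kubernetes auth method.
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/auth-type: "kubernetes"
    vault.hashicorp.com/auth-path: "auth/kubernetes"
    vault.hashicorp.com/role: "payments-role"      # hypothetical Vault role
    # TLS material from a Kubernetes secret, mounted at /vault/tls.
    vault.hashicorp.com/tls-secret: "vault-tls"    # hypothetical secret name
    vault.hashicorp.com/ca-cert: "/vault/tls/ca.crt"
    # Rename the injected sidecar via a JSON patch, as in the example above.
    vault.hashicorp.com/agent-json-patch: '[{"op": "replace", "path": "/name", "value": "different-name"}]'
spec:
  containers:
    - name: payments
      image: payments-app:latest  # hypothetical application image
```

This is a configuration sketch only; annotation values must be strings, which is why booleans like `"true"` are quoted.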
{"questions":"vault Installing the agent injector The Vault Helm chart vault docs platform k8s helm is the recommended way to layout docs install and configure the Agent Injector in Kubernetes The Vault Agent Sidecar Injector can be installed using Vault Helm page title Agent Sidecar Injector Installation","answers":"---\nlayout: docs\npage_title: Agent Sidecar Injector Installation\ndescription: The Vault Agent Sidecar Injector can be installed using Vault Helm.\n---\n\n# Installing the agent injector\n\nThe [Vault Helm chart](\/vault\/docs\/platform\/k8s\/helm) is the recommended way to\ninstall and configure the Agent Injector in Kubernetes.\n\n~> The Vault Agent Injector requires Vault 1.3.1 or greater.\n\nTo install a new instance of Vault and the Vault Agent Injector, first add the\nHashicorp helm repository and ensure you have access to the chart:\n\n@include 'helm\/repo.mdx'\n\nThen install the chart and enable the injection feature by setting the\n`injector.enabled` value to `true`:\n\n```bash\nhelm install vault hashicorp\/vault --set=\"injector.enabled=true\"\n```\n\nUpgrades may be performed with `helm upgrade` on an existing install. Please\nalways run Helm with `--dry-run` before any install or upgrade to verify\nchanges.\n\nYou can see all the available values settings by running `helm inspect values hashicorp\/vault` or by reading the [Vault Helm Configuration\nDocs](\/vault\/docs\/platform\/k8s\/helm\/configuration). Commonly used values in the Helm\nchart include limiting the namespaces the injector runs in, TLS options and\nmore.\n\n## TLS options\n\nAdmission webhook controllers require TLS to run within Kubernetes. The Injector\ndefaults to supporting TLS 1.2 and above, and supports configuring the minimum\nsupported TLS version and list of enabled cipher suites. 
These can be set via\nthe following environment variables:\n\n| Environment variable             | Description                                                                                                                    |\n| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ |\n| `AGENT_INJECT_TLS_MIN_VERSION`   | Minimum supported version of TLS. Defaults to **tls12**. Accepted values are `tls10`, `tls11`, `tls12`, or `tls13`.            |\n| `AGENT_INJECT_TLS_CIPHER_SUITES` | Comma-separated list of enabled [cipher suites][tls-suites] for TLS 1.0-1.2. (Cipher suites are not configurable for TLS 1.3.) |\n\n~> **Warning**: TLS 1.1 and lower are generally considered insecure.\n\nThese may be set in a Helm chart deployment via the\n[injector.extraEnvironmentVars](\/vault\/docs\/platform\/k8s\/helm\/configuration#extraenvironmentvars)\noption:\n\n```bash\nhelm install vault hashicorp\/vault \\\n  --set=\"injector.extraEnvironmentVars.AGENT_INJECT_TLS_MIN_VERSION=tls13\" \\\n  --set=\"injector.extraEnvironmentVars.AGENT_INJECT_TLS_CIPHER_SUITES=...\"\n```\n\nThe Vault Agent Injector also supports two TLS management options:\n\n- Auto TLS generation (default)\n- Manual TLS\n\n### Auto TLS\n\nBy default, the Vault Agent Injector will bootstrap TLS by generating a certificate\nauthority and creating a certificate\/key to be used by the controller. 
If using\nVault Helm, the chart will automatically create the necessary DNS entries for the\ncontroller's service used to verify the certificate.\n\n### Manual TLS\n\nIf desired, users can supply their own TLS certificates, key and certificate authority.\nThe following is required to configure TLS manually:\n\n- Server certificate\/key\n- Base64 PEM encoded Certificate Authority bundle\n\nFor more information on configuring manual TLS, see the [Vault Helm cert values](\/vault\/docs\/platform\/k8s\/helm\/configuration#certs).\n\nThis option may also be used in conjunction with [cert-manager for certificate management](\/vault\/docs\/platform\/k8s\/helm\/examples\/injector-tls-cert-manager).\n\n## Multiple replicas and TLS\n\nThe Vault Agent Injector can be run with multiple replicas if using [Manual\nTLS](#manual-tls) or [cert-manager](\/vault\/docs\/platform\/k8s\/helm\/examples\/injector-tls-cert-manager), and as of v0.7.0 multiple replicas are also supported with\n[Auto TLS](#auto-tls). The number of replicas is controlled in the Vault Helm\nchart by the [injector.replicas\nvalue](\/vault\/docs\/platform\/k8s\/helm\/configuration#replicas).\n\nWith Auto TLS and multiple replicas, a leader replica is determined by ownership\nof a ConfigMap named `vault-k8s-leader`. Another replica can become the leader\nonce the current leader replica stops running, and the Kubernetes garbage\ncollector deletes the ConfigMap. The leader replica is in charge of generating\nthe CA and patching the webhook caBundle in Kubernetes, and also generating and\ndistributing the certificate and key to the \"followers\". 
The followers read the\ncertificate and key needed for the webhook service listener from a Kubernetes\nSecret, which is updated by the leader when a certificate is near expiration.\n\nWith Manual TLS and multiple replicas,\n[injector.leaderElector.enabled](\/vault\/docs\/platform\/k8s\/helm\/configuration#enabled-2)\ncan be set to `false` since leader determination is not necessary in this case.\n\n## Namespace selector\n\nBy default, the Vault Agent Injector will process all namespaces in Kubernetes except\nthe system namespaces `kube-system` and `kube-public`. To limit which namespaces\nthe injector can work in, a namespace selector can be defined to match labels attached\nto namespaces.\n\nFor more information on configuring namespace selection, see the [Vault Helm namespaceSelector value](\/vault\/docs\/platform\/k8s\/helm\/configuration#namespaceselector).\n\n[tls-suites]: https:\/\/golang.org\/src\/crypto\/tls\/cipher_suites.go","site":"vault"}
{"questions":"vault Vault lambda extension AWS Lambda lets you run code without provisioning and managing servers page title Vault Lambda Extension layout docs The Vault Lambda Extension allows a Lambda function to read secrets from a Vault deployment","answers":"---\nlayout: docs\npage_title: Vault Lambda Extension\ndescription: >-\n  The Vault Lambda Extension allows a Lambda function to read secrets from a Vault deployment.\n---\n\n# Vault lambda extension\n\nAWS Lambda lets you run code without provisioning and managing servers.\nThe [Vault Lambda Extension](https:\/\/github.com\/hashicorp\/vault-lambda-extension) utilizes the AWS Lambda Extensions API to help your Lambda function read secrets from your Vault deployment.\nYou can use the [quick-start](https:\/\/github.com\/hashicorp\/vault-lambda-extension\/tree\/main\/quick-start) directory which has an end-to-end example if you would like to try out the extension from scratch.\n\n~> **Note**: If you decide to create one from scratch, be aware that this will create real infrastructure with an associated cost as per AWS' pricing.\n\n## Usage\n\nTo use the extension, include one of the following ARNs as a layer in your\nLambda function, depending on your desired architecture.\n\namd64 (x86_64):\n```text\narn:aws:lambda:<your-region>:634166935893:layer:vault-lambda-extension:18\n```\n\narm64:\n```text\narn:aws:lambda:<your-region>:634166935893:layer:vault-lambda-extension-arm64:6\n```\n\nWhere region may be any of `af-south-1`, `ap-east-1`, `ap-northeast-1`,\n`ap-northeast-2`, `ap-northeast-3`, `ap-south-1`, `ap-south-2`, `ap-southeast-1`,\n`ap-southeast-2`, `ca-central-1`, `eu-central-1`, `eu-north-1`, `eu-south-1`,\n`eu-west-1`, `eu-west-2`, `eu-west-3`, `me-south-1`, `sa-east-1`, `us-east-1`,\n`us-east-2`, `us-west-1`, `us-west-2`.\n\nThe extension authenticates with Vault using [AWS IAM auth](\/vault\/docs\/auth\/aws),\nand all configuration is supplied via environment variables. 
There are two methods\nto read secrets, which can both be used side-by-side:\n\n- **Recommended**: Make unauthenticated requests to the extension's local proxy\n  server at `http:\/\/127.0.0.1:8200`, which will add an authentication header and\n  proxy to the configured `VAULT_ADDR`. Responses from Vault are returned without\n  modification.\n- Configure environment variables such as `VAULT_SECRET_PATH` for the extension\n  to read a secret and write it to disk.\n\n### Adding the extension to your existing lambda and Vault infrastructure\n\n#### Requirements\n\n- ARN of the role your Lambda runs as\n- An instance of Vault accessible from AWS Lambda\n- An authenticated `vault` client\n- A secret in Vault that you want your Lambda to access, and a policy giving read access to it\n- Your Lambda function must use one of the [supported runtimes](https:\/\/docs.aws.amazon.com\/lambda\/latest\/dg\/runtimes-extensions-api.html) for extensions\n\n#### Step 1. configure Vault\n\nEnable the aws auth method.\n\n```shell-session\n$ vault auth enable aws\n```\n\nConfigure the AWS client to use the default options.\n\n```shell-session\n$ vault write -force auth\/aws\/config\/client\n```\n\nCreate a role prefixed with the AWS environment name.\n\n```shell-session\n$ vault write auth\/aws\/role\/vault-lambda-role \\\n    auth_type=iam \\\n    bound_iam_principal_arn=\"${YOUR_ARN}\" \\\n    policies=\"${YOUR_POLICY}\" \\\n    ttl=1h\n```\n\n#### Step 2. option a) install the extension for lambda functions packaged in zip archives\n\nIf you deploy your Lambda function as a zip file, you can add the extension\nto your Lambda layers using the console or [cli](https:\/\/docs.aws.amazon.com\/lambda\/latest\/dg\/configuration-layers.html#configuration-layers-using):\n\n```text\narn:aws:lambda:<your-region>:634166935893:layer:vault-lambda-extension:11\n```\n\n#### Step 2. 
option b) install the extension for lambda functions packaged in container images\n\nAlternatively, if you deploy your Lambda function as a container image, simply\nplace the built binary in the `\/opt\/extensions` directory of your image.\n\nFetch the binary from\n[releases.hashicorp.com](https:\/\/releases.hashicorp.com\/vault-lambda-extension\/).\nThe following command requires cURL.\n\n```shell-session\n$ curl --silent https:\/\/releases.hashicorp.com\/vault-lambda-extension\/0.5.0\/vault-lambda-extension_0.5.0_linux_amd64.zip \\\n  --output vault-lambda-extension.zip\n```\n\nUnzip the downloaded archive.\n\n```shell-session\n$ unzip vault-lambda-extension.zip\n```\n\nOptionally, you can verify the integrity of the downloaded zip using the release\narchive checksum verification instructions\n[here](https:\/\/www.hashicorp.com\/security).\n\nOr build the binary from source. This requires Go to be installed; run the following from the root of the vault-lambda-extension repository.\n\n```shell-session\n$ GOOS=linux GOARCH=amd64 go build -o vault-lambda-extension main.go\n```\n\n#### Step 3. configure vault-lambda-extension\n\nConfigure the extension using [Lambda environment\nvariables](https:\/\/docs.aws.amazon.com\/lambda\/latest\/dg\/configuration-envvars.html):\n\nSet the Vault API address.\n\n```shell-session\n$ VAULT_ADDR=http:\/\/vault.example.com:8200\n```\n\nSet the AWS IAM auth mount point (i.e. the path segment after `auth\/` from above).\n\n```shell-session\n$ VAULT_AUTH_PROVIDER=aws\n```\n\nSet the Vault role to authenticate as. Must be configured for the ARN of your\nLambda's role.\n\n```shell-session\n$ VAULT_AUTH_ROLE=vault-lambda-role\n```\n\nSet the path to a secret in Vault. It can be static or dynamic. 
Unless\nVAULT_SECRET_FILE is specified, JSON response will be written to\n`\/tmp\/vault\/secret.json`.\n\n```shell-session\n$ VAULT_SECRET_PATH=secret\/lambda-app\/token\n```\n\nIf everything is correctly set up, your Lambda function can then read secret\nmaterial from `\/tmp\/vault\/secret.json`. The exact contents of the JSON object\nwill depend on the secret read, but its schema is the [Secret struct](https:\/\/github.com\/hashicorp\/vault\/blob\/api\/v1.0.4\/api\/secret.go#L15)\nfrom the Vault API module.\n\nAlternatively, you can send normal Vault API requests over HTTP to the local\nproxy at `http:\/\/127.0.0.1:8200`, and the extension will add authentication\nbefore forwarding the request. Vault responses will be returned unmodified.\nAlthough local communication is over plain HTTP, the proxy server will use TLS\nto communicate with Vault if configured to do so as detailed below.\n\n## Configuration\n\nThe extension is configured via [Lambda environment variables](https:\/\/docs.aws.amazon.com\/lambda\/latest\/dg\/configuration-envvars.html).\nMost of the [Vault CLI client's environment variables](\/vault\/docs\/commands#environment-variables) are available,\nas well as some additional variables to configure auth, which secret(s) to read and\nwhere to write secrets.\n\n| Environment variable              | Description                                                                                                                                                                                                                                                                                                           | Required | Example value               
|\n|-----------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------|\n| `VLE_VAULT_ADDR`                  | Vault address to connect to. Takes precedence over `VAULT_ADDR` so that clients of the proxy server can be configured using the standard `VAULT_ADDR`                                                                                                                                                                 | No       | `https:\/\/x.x.x.x:8200`      |\n| `VAULT_ADDR`                      | Vault address to connect to if `VLE_VAULT_ADDR` is not set. Required if `VLE_VAULT_ADDR` is not set                                                                                                                                                                                                                   | No       | `https:\/\/x.x.x.x:8200`      |\n| `VAULT_AUTH_PROVIDER`             | Name of the configured AWS IAM auth route on Vault                                                                                                                                                                                                                                                                    | Yes      | `aws`                       |\n| `VAULT_AUTH_ROLE`                 | Vault role to authenticate as                                                                                                                                                                                                                                                                                         | Yes      | `lambda-app`                |\n| `VAULT_IAM_SERVER_ID`          
   | Value to pass to the Vault server via the [`X-Vault-AWS-IAM-Server-ID` HTTP Header for AWS Authentication](\/vault\/api-docs\/auth\/aws#iam_server_id_header_value)                                                                                                                                                       | No       | `vault.example.com`         |\n| `VAULT_SECRET_PATH`               | Secret path to read, written to `\/tmp\/vault\/secret.json` unless `VAULT_SECRET_FILE` is specified                                                                                                                                                                                                                      | No       | `database\/creds\/lambda-app` |\n| `VAULT_SECRET_FILE`               | Path to write the JSON response for `VAULT_SECRET_PATH`                                                                                                                                                                                                                                                               | No       | `\/tmp\/db.json`              |\n| `VAULT_SECRET_PATH_FOO`           | Additional secret path to read, where FOO can be any name, as long as a matching `VAULT_SECRET_FILE_FOO` is specified                                                                                                                                                                                                 | No       | `secret\/lambda-app\/token`   |\n| `VAULT_SECRET_FILE_FOO`           | Must exist for any correspondingly named `VAULT_SECRET_PATH_FOO`. 
Name has no further effect beyond matching to the correct path variable                                                                                                                                                                             | No       | `\/tmp\/token`                |\n| `VAULT_RUN_MODE`                  | Available options are `default`, `proxy`, and `file`. Proxy mode makes requests to the extension's local proxy server. File mode configures the extension to read and write secrets to disk. Default mode uses both file and proxy mode. The default is `default`.                                                    | No       | `default`                   |\n| `VAULT_TOKEN_EXPIRY_GRACE_PERIOD` | Period at the end of the proxy server's auth token TTL where it will consider the token expired and attempt to re-authenticate to Vault. Must have a unit and be parseable by `time.Duration`. Defaults to 10s.                                                                                                       | No       | `1m`                        |\n| `VAULT_STS_ENDPOINT_REGION`       | The region of the STS regional endpoint to authenticate with. If the AWS IAM auth mount specified uses a regional STS endpoint, then this needs to match the region of that endpoint. Defaults to using the global endpoint, or the region the Lambda resides in if `AWS_STS_REGIONAL_ENDPOINTS` is set to `regional` | No       | `eu-west-1`                 |\n\nThe remaining environment variables are not required, and function exactly as\ndescribed in the [Vault Commands (CLI)](\/vault\/docs\/commands#environment-variables) documentation. 
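With the variables above set, the function itself consumes secrets through plain file reads or local HTTP calls. A hypothetical Python handler sketch (the file path is the documented default; the secret path and key names are illustrative, not part of this page):

```python
import json
import urllib.request

# Default path the extension writes to when VAULT_SECRET_FILE is unset
SECRET_FILE = "/tmp/vault/secret.json"


def read_secret_from_file(path=SECRET_FILE):
    """Read the pre-fetched Vault API response the extension wrote to disk."""
    with open(path) as f:
        secret = json.load(f)
    # The file holds a full Vault API "Secret" response; the payload is under "data"
    return secret["data"]


def read_secret_via_proxy(secret_path):
    """Fetch a secret through the extension's local proxy, which adds the auth header."""
    url = f"http://127.0.0.1:8200/v1/{secret_path}"  # illustrative path
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["data"]


def handler(event, context):
    data = read_secret_from_file()
    return {"keys": sorted(data)}
```

Preferring `read_secret_via_proxy` keeps secrets off disk, in line with the recommendation later in this page.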
However,\nnote that `VAULT_CLIENT_TIMEOUT` cannot extend the timeout beyond the 10s\ninitialization timeout imposed by the Extensions API when writing files to disk.\n\n| Environment variable          | Description                                                                                                                                                                     | Required | Example value       |\n| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------------------- |\n| `VAULT_CACERT`                | Path to a PEM-encoded CA certificate _file_ on the local disk                                                                                                                   | No       | `\/tmp\/ca.crt`       |\n| `VAULT_CAPATH`                | Path to a _directory_ of PEM-encoded CA certificate files on the local disk                                                                                                     | No       | `\/tmp\/certs`        |\n| `VAULT_CLIENT_CERT`           | Path to a PEM-encoded client certificate on the local disk                                                                                                                      | No       | `\/tmp\/client.crt`   |\n| `VAULT_CLIENT_KEY`            | Path to an unencrypted, PEM-encoded private key on disk which corresponds to the matching client certificate                                                                    | No       | `\/tmp\/client.key`   |\n| `VAULT_CLIENT_TIMEOUT`        | Timeout for Vault requests. Default value is 60s. Ignored by proxy server. **Any value over 10s will exceed the Extensions API timeout and therefore have no effect**           | No       | `5s`                |\n| `VAULT_MAX_RETRIES`           | Maximum number of retries on `5xx` error codes. Defaults to 2. 
Ignored by proxy server                                                                                          | No       | `2`                 |\n| `VAULT_SKIP_VERIFY`           | Do not verify Vault's presented certificate before communicating with it. Setting this variable is not recommended and voids Vault's [security model](\/vault\/docs\/internals\/security) | No       | `true`              |\n| `VAULT_TLS_SERVER_NAME`       | Name to use as the SNI host when connecting via TLS                                                                                                                             | No       | `vault.example.com` |\n| `VAULT_RATE_LIMIT`            | Only applies to a single invocation of the extension. See [Vault Commands (CLI)](\/vault\/docs\/commands#environment-variables) documentation for details. Ignored by proxy server       | No       | `10`                |\n| `VAULT_NAMESPACE`             | The namespace to use for pre-configured secrets. Ignored by proxy server                                                                                                        | No       | `education`         |\n| `VAULT_DEFAULT_CACHE_TTL`     | The time to live configuration (aka, TTL) of the cache used by proxy server. Must have a unit and be parsable as a time.Duration. Required for caching to be enabled.           | No       | `15m`               |\n| `VAULT_DEFAULT_CACHE_ENABLED` | Enable caching for all requests, without needing to set the X-Vault-Cache-Control header for each request. Must be set to a boolean value.                                      | No       | `true`              |\n| `VAULT_ASSUMED_ROLE_ARN`      | Valid ARN of an IAM role that can be assumed by the execution role assigned to your Lambda function.                                                                            
| No       | `arn:aws:iam::123456789012:role\/xaccounts3access` |\n| `VAULT_LOG_LEVEL`             | Log verbosity level, one of TRACE, DEBUG, INFO, WARN, ERROR, OFF. Defaults to INFO.                                                                                             | No       | `DEBUG`             |\n\n### AWS STS client configuration\n\nIn addition to Vault configuration, you can configure certain aspects of the STS\nclient the extension uses through the usual AWS environment variables. For example,\nif your Vault instance's IAM auth is configured to use regional STS endpoints:\n\n```shell-session\n$ vault write auth\/aws\/config\/client \\\n     sts_endpoint=\"https:\/\/sts.eu-west-1.amazonaws.com\" \\\n     sts_region=\"eu-west-1\"\n```\n\nThen you may need to configure the extension's STS client to also use the regional\nSTS endpoint by setting `AWS_STS_REGIONAL_ENDPOINTS=regional`, because both the AWS Golang\nSDK and the Vault IAM auth method default to using the global endpoint in many regions.\nSee the documentation on [`sts_regional_endpoints`](https:\/\/docs.aws.amazon.com\/credref\/latest\/refdocs\/setting-global-sts_regional_endpoints.html) for more information.\n\n### Caching\n\nCaching can be configured for the extension's local proxy server so that it does\nnot forward every HTTP request to Vault. The main consideration behind the caching\ndesign is to make caching an explicit opt-in at the request level, so that it is\nonly enabled in scenarios where caching makes sense, without negatively impacting\nothers. To turn on caching, set the environment variable\n`VAULT_DEFAULT_CACHE_TTL` to a valid value that is parsable as a time.Duration\nin Go, for example, \"15m\", \"1h\", \"2m3s\" or \"1h2m3s\", depending on application\nneeds. 
An invalid or negative value will be treated the same as a missing value,\nin which case caching will not be set up or enabled.\n\nRequests with the HTTP method \"GET\" and the HTTP header\n`X-Vault-Cache-Control: cache` will then be served directly from the cache if\nthere's a cache hit. On a cache miss the request will be forwarded to Vault and\nthe response returned and cached. If the header is set to\n`X-Vault-Cache-Control: recache`, the cache lookup will be skipped, and the\nrequest will be forwarded to Vault and the response returned and cached.\nCurrently, the cache key is a hash of the request URL path, headers, body, and\ntoken.\n\n<Warning title=\"Nonstandard distributed tracing headers may negate the cache\">\n\n  The Vault Lambda Extension cache key includes headers from proxy requests, but\n  excludes the standard distributed tracing headers `traceparent` and\n  `tracestate` because trace IDs are unique per request and would lead to unique\n  hashes for repeated requests.\n\n  Some distributed tracing tools may add nonstandard tracing headers, which can\n  also lead to individualized hashes that make repeated requests unique and\n  cause cache misses.\n\n<\/Warning>\n\nCaching may also be enabled for all requests by setting the environment variable\n`VAULT_DEFAULT_CACHE_ENABLED` to `true`. Then all requests will be fetched and\/or\ncached as though the header `X-Vault-Cache-Control: cache` was present. Setting\nthe header to `nocache` on a request will opt out of caching entirely in this\nconfiguration. 
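The cache-key behavior described above, including the tracing-header exclusion from the warning, can be illustrated with a small sketch. This mirrors the documented description only; it is not the extension's actual Go implementation:

```python
import hashlib

# Standard distributed-tracing headers excluded from the key, per the warning above
EXCLUDED_HEADERS = {"traceparent", "tracestate"}


def cache_key(path, headers, body, token):
    """Hash the request URL path, headers, body, and token into a cache key."""
    kept = sorted(
        (name.lower(), value)
        for name, value in headers.items()
        if name.lower() not in EXCLUDED_HEADERS
    )
    material = repr((path, kept, body, token))
    return hashlib.sha256(material.encode()).hexdigest()
```

Two requests differing only in `traceparent` hash identically, so repeated requests can hit the cache; nonstandard tracing headers added by other tools would still vary the key and force misses.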
Setting the header to `recache` will skip the cache lookup and\nreturn and cache the response from Vault as described previously.\n\n~> **Warning!** The Vault Lambda Extension's cache is in-memory only\nand will not be persisted when the Lambda execution environment shuts down.\nIn other words, the cache TTL is capped to the duration of the Lambda execution environment.\n\n## Limitations\n\nSecrets written to disk or returned from the proxy server will not be automatically\nrefreshed when they expire. This is particularly important if you configure the\nextension to write secrets to disk, because the extension will only write to disk\nonce per execution environment, rather than once per function invocation. If you\nuse [provisioned concurrency](https:\/\/docs.aws.amazon.com\/lambda\/latest\/dg\/configuration-concurrency.html#configuration-concurrency-provisioned) or if your Lambda\nis invoked often enough that execution contexts live beyond the lifetime of the\nsecret, then secrets on disk are likely to become invalid.\n\nIn line with [Lambda best practices](https:\/\/docs.aws.amazon.com\/lambda\/latest\/dg\/best-practices.html), we recommend avoiding\nwriting secrets to disk where possible, and exclusively consuming secrets via\nthe proxy server. However, the proxy server will still not perform any additional\nprocessing of returned secrets, such as automatic lease renewal. The proxy server's\nown Vault auth token is the only thing that gets automatically refreshed. It will\nsynchronously refresh its own token before proxying requests if the token is\nexpired (including a grace window), and it will attempt to renew its token if the\ntoken is nearly expired but renewable. 
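The grace-window rule for the proxy's own token (see `VAULT_TOKEN_EXPIRY_GRACE_PERIOD`, default 10s) can be stated compactly. A hypothetical sketch of the check, not the extension's code:

```python
from datetime import datetime, timedelta

# Default for VAULT_TOKEN_EXPIRY_GRACE_PERIOD, per the configuration table above
DEFAULT_GRACE = timedelta(seconds=10)


def needs_reauth(expiry, now=None, grace=DEFAULT_GRACE):
    """Treat the token as expired once inside the grace window before its TTL ends."""
    now = now or datetime.utcnow()
    return now >= expiry - grace
```

A token with 5 seconds of TTL left is already inside the 10-second window and triggers re-authentication before the request is proxied.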
The proxy will also immediately refresh its token\nif the incoming request header `X-Vault-Token-Options: revoke` is present.\n\n<Note title=\"Not SnapStart compatible\">\n\n  The Vault Lambda extension does not currently work with\n  [AWS SnapStart](https:\/\/docs.aws.amazon.com\/lambda\/latest\/dg\/snapstart.html).\n\n<\/Note>\n\n## Performance impact\n\nAWS Lambda pricing is based on [number of invocations, time of execution and memory\nused](https:\/\/aws.amazon.com\/lambda\/pricing\/). The following table details some approximate performance-related\nstatistics to help assess the cost impact of this extension. Note that AWS\nLambda allocates [CPU power in proportion to memory](https:\/\/docs.aws.amazon.com\/lambda\/latest\/dg\/configuration-memory.html), so results\nwill vary widely. These benchmarks were run with the minimum 128MB of memory allocated,\nso they aim to give an approximate baseline.\n\n| Metric         | Value                                                                              | Description                                                                                                                      | Derivation                                                                         |\n| -------------- | ---------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- |\n| Layer size     | 8.5MB                                                                              | The size of the unpacked extension binary                                                                                        | `ls -la`                                                                           |\n| Init latency   | 8.5ms (standard deviation 2.4ms) + one network round trip to authenticate to Vault | Extension 
initialization time in a new execution environment. Authentication round trip time will be highly deployment-dependent | Instrumented in code                                                               |\n| Invoke latency | <1ms                                                                               | The base processing time for each function invocation, assuming no calls to the proxy server                                     | Instrumented in code                                                               |\n| Memory impact  | 12MB                                                                               | The marginal impact on \"Max Memory Used\" when running the extension                                                              | As reported by Lambda when running Hello World function with and without extension |\n\n## Uploading to your own AWS account and region\n\nIf you would like to upload the extension as a Lambda layer in your own AWS\naccount and region, you can do the following:\n\n```shell-session\n$ curl --silent https:\/\/releases.hashicorp.com\/vault-lambda-extension\/0.5.0\/vault-lambda-extension_0.5.0_linux_amd64.zip \\\n  --output vault-lambda-extension.zip\n```\n\nSet your target AWS region.\n\n```shell-session\n$ export REGION=\"YOUR REGION HERE\"\n```\n\nUpload the extension as a Lambda layer.\n\n```shell-session\n$ aws lambda publish-layer-version \\\n     --layer-name vault-lambda-extension \\\n     --zip-file  \"fileb:\/\/vault-lambda-extension.zip\" \\\n     --region \"${REGION}\"\n```\n\n## Tutorial\n\nFor step-by-step instructions, refer to the [Vault AWS Lambda Extension](\/vault\/tutorials\/app-integration\/aws-lambda) tutorial for details on how to create an AWS Lambda function and use the Vault Lambda Extension to authenticate with Vault.","site":"vault","answers_cleaned":"    layout  docs page title  Vault Lambda Extension description       The Vault Lambda Extension allows a Lambda function to read secrets from 
                                                                                                                  Yes         aws                             VAULT AUTH ROLE                    Vault role to authenticate as                                                                                                                                                                                                                                                                                           Yes         lambda app                      VAULT IAM SERVER ID                Value to pass to the Vault server via the   X Vault AWS IAM Server ID  HTTP Header for AWS Authentication   vault api docs auth aws iam server id header value                                                                                                                                                          No          vault example com               VAULT SECRET PATH                  Secret path to read  written to   tmp vault secret json  unless  VAULT SECRET FILE  is specified                                                                                                                                                                                                                        No          database creds lambda app       VAULT SECRET FILE                  Path to write the JSON response for  VAULT SECRET PATH                                                                                                                                                                                                                                                                  No           tmp db json                    VAULT SECRET PATH FOO              Additional secret path to read  where FOO can be any name  as long as a matching  VAULT SECRET FILE FOO  is specified                                                                                                                              
                                                                     No          secret lambda app token         VAULT SECRET FILE FOO              Must exist for any correspondingly named  VAULT SECRET PATH FOO   Name has no further effect beyond matching to the correct path variable                                                                                                                                                                               No           tmp token                      VAULT RUN MODE                     Available options are  default    proxy   and  file   Proxy mode makes requests to the extension s local proxy server  File mode configures the extension to read and write secrets to disk  Default mode uses both file and proxy mode  The default is  default                                                        No          default                         VAULT TOKEN EXPIRY GRACE PERIOD    Period at the end of the proxy server s auth token TTL where it will consider the token expired and attempt to re authenticate to Vault  Must have a unit and be parseable by  time Duration   Defaults to 10s                                                                                                          No          1m                              VAULT STS ENDPOINT REGION          The region of the STS regional endpoint to authenticate with  If the AWS IAM auth mount specified uses a regional STS endpoint  then this needs to match the region of that endpoint  Defaults to using the global endpoint  or the region the Lambda resides in if  AWS STS REGIONAL ENDPOINTS  is set to  regional    No          eu west 1                     The remaining environment variables are not required  and function exactly as described in the  Vault Commands  CLI    vault docs commands environment variables  documentation  However  note that  VAULT CLIENT TIMEOUT  cannot extend the timeout beyond the 10s initialization timeout imposed by the Extensions API when 
writing files to disk     Environment variable            Description                                                                                                                                                                       Required   Example value                                                                                                                                                                                                                                                                 VAULT CACERT                   Path to a PEM encoded CA certificate  file  on the local disk                                                                                                                     No           tmp ca crt             VAULT CAPATH                   Path to a  directory  of PEM encoded CA certificate files on the local disk                                                                                                       No           tmp certs              VAULT CLIENT CERT              Path to a PEM encoded client certificate on the local disk                                                                                                                        No           tmp client crt         VAULT CLIENT KEY               Path to an unencrypted  PEM encoded private key on disk which corresponds to the matching client certificate                                                                      No           tmp client key         VAULT CLIENT TIMEOUT           Timeout for Vault requests  Default value is 60s  Ignored by proxy server    Any value over 10s will exceed the Extensions API timeout and therefore have no effect               No          5s                      VAULT MAX RETRIES              Maximum number of retries on  5xx  error codes  Defaults to 2  Ignored by proxy server                                                                                            No          2                       VAULT SKIP 
VERIFY              Do not verify Vault s presented certificate before communicating with it  Setting this variable is not recommended and voids Vault s  security model   vault docs internals security    No          true                    VAULT TLS SERVER NAME          Name to use as the SNI host when connecting via TLS                                                                                                                               No          vault example com       VAULT RATE LIMIT               Only applies to a single invocation of the extension  See  Vault Commands  CLI    vault docs commands environment variables  documentation for details  Ignored by proxy server         No          10                      VAULT NAMESPACE                The namespace to use for pre configured secrets  Ignored by proxy server                                                                                                          No          education               VAULT DEFAULT CACHE TTL        The time to live configuration  aka  TTL  of the cache used by proxy server  Must have a unit and be parsable as a time Duration  Required for caching to be enabled              No          15m                     VAULT DEFAULT CACHE ENABLED    Enable caching for all requests  without needing to set the X Vault Cache Control header for each request  Must be set to a boolean value                                         No          true                    VAULT ASSUMED ROLE ARN         Valid ARN of an IAM role that can be assumed by the execution role assigned to your Lambda function                                                                               No          arn aws iam  123456789012 role xaccounts3access     VAULT LOG LEVEL                Log verbosity level  one of TRACE  DEBUG  INFO  WARN  ERROR  OFF  Defaults to INFO                                                                                                No          DEBUG       AWS STS client 
configuration  In addition to Vault configuration  you can configure certain aspects of the STS client the extension uses through the usual AWS environment variables  For example  if your Vault instance s IAM auth is configured to use regional STS endpoints      shell session   vault write auth aws config client        sts endpoint  https   sts eu west 1 amazonaws com         sts region  eu west 1       Then you may need to configure the extension s STS client to also use the regional STS endpoint by setting  AWS STS REGIONAL ENDPOINTS regional   because both the AWS Golang SDK and Vault IAM auth method default to using the global endpoint in many regions  See documentation on   sts regional endpoints   https   docs aws amazon com credref latest refdocs setting global sts regional endpoints html  for more information       Caching  Caching can be configured for the extension s local proxy server so that it does not forward every HTTP request to Vault  The main consideration behind caching design is to make caching an explicit opt in at the request level  so that it is only enabled for scenarios where caching makes sense without negative impact in others  To turn on caching  set the environment variable  VAULT DEFAULT CACHE TTL  to a valid value that is parsable as a time Duration in Go  for example   15m    1h    2m3s  or  1h2m3s   depending on application needs  An invalid or negative value will be treated the same as a missing value  in which case  caching will not be set up and enabled   Then requests with HTTP method of  GET   and the HTTP header  X Vault Cache Control  cache  will be returned directly from the cache if there s a cache hit  On a cache miss the request will be forwarded to Vault and the response returned and cached  If the header is set to  X Vault Cache Control  recache   the cache lookup will be skipped  and the request will be forwarded to Vault and the response returned and cached  Currently  the cache key is a hash of the request URL path  
headers  body  and token    Warning title  Nonstandard distributed tracing headers may negate the cache      The Vault Lambda Extension cache key includes headers from proxy requests  but    excludes the standard distributed tracing headers  traceparent  and    tracestate  because trace IDs are unique per request and would lead to unique   hashes for repeated requests       Some distributed tracing tools may add nonstandard tracing headers  which can    also lead to individualized hashes that make repeated requests unique and    cause cache misses     Warning   Caching may also be enabled for all requests by setting the environment variable  VAULT DEFAULT CACHE ENABLED  to  true   Then all requests will be fetched and or cached as though the header  X Vault Cache Control  cache  was present  Setting the header to  nocache  on a request will opt out of caching entirely in this configuration  Setting the header to  recache  will skip the cache lookup and return and cache the response from Vault as described previously        Warning    The Vault Lambda Extension s cache is only in memory and will not be persisted when the Lambda execution environment shuts down  In order words  the cache TTL is capped to the duration of the Lambda execution environment      Limitations  Secrets written to disk or returned from the proxy server will not be automatically refreshed when they expire  This is particularly important if you configure the extension to write secrets to disk  because the extension will only write to disk once per execution environment  rather than once per function invocation  If you use  provisioned concurrency  https   docs aws amazon com lambda latest dg configuration concurrency html configuration concurrency provisioned  or if your Lambda is invoked often enough that execution contexts live beyond the lifetime of the secret  then secrets on disk are likely to become invalid   In line with  Lambda best practices  https   docs aws amazon com lambda latest 
dg best practices html   we recommend avoiding writing secrets to disk where possible  and exclusively consuming secrets via the proxy server  However  the proxy server will still not perform any additional processing with returned secrets such as automatic lease renewal  The proxy server s own Vault auth token is the only thing that gets automatically refreshed  It will synchronously refresh its own token before proxying requests if the token is expired  including a grace window   and it will attempt to renew its token if the token is nearly expired but renewable  The proxy will also immediately refresh its token if the incoming request header  X Vault Token Options  revoke  is present    Note title  Not SnapStart compatible      The Vault Lambda extension does not currently  work with    AWS SnapStart  https   docs aws amazon com lambda latest dg snapstart html      Note      Performance impact  AWS Lambda pricing is based on  number of invocations  time of execution and memory used  https   aws amazon com lambda pricing    The following table details some approximate performance related statistics to help assess the cost impact of this extension  Note that AWS Lambda allocates  CPU power in proportion to memory  https   docs aws amazon com lambda latest dg configuration memory html  so results will vary widely  These benchmarks were run with the minimum 128MB of memory allocated so aim to give an approximate baseline     Metric           Value                                                                                Description                                                                                                                        Derivation                                                                                                                                                                                                                                                                                                                     
                                                                                        Layer size       8 5MB                                                                                The size of the unpacked extension binary                                                                                           ls  la                                                                                Init latency     8 5ms  standard deviation 2 4ms    one network round trip to authenticate to Vault   Extension initialization time in a new execution environment  Authentication round trip time will be highly deployment dependent   Instrumented in code                                                                   Invoke latency    1ms                                                                                 The base processing time for each function invocation  assuming no calls to the proxy server                                       Instrumented in code                                                                   Memory impact    12MB                                                                                 The marginal impact on  Max Memory Used  when running the extension                                                                As reported by Lambda when running Hello World function with and without extension       Uploading to your own AWS account and region  If you would like to upload the extension as a Lambda layer in your own AWS account and region  you can do the following      shell session   curl   silent https   releases hashicorp com vault lambda extension 0 5 0 vault lambda extension 0 5 0 linux amd64 zip       output vault lambda extension zip      Set your target AWS region      shell session   export REGION  YOUR REGION HERE       Upload the extension as a Lambda layer      shell session   aws lambda publish layer version          layer name vault lambda extension          zip file   fileb   vault lambda extension zip  
         region    REGION           Tutorial  For step by step instructions  refer to the  Vault AWS Lambda Extension   vault tutorials app integration aws lambda  tutorial for details on how to create an AWS Lambda function and use the Vault Lambda Extension to authenticate with Vault "}
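From the consuming side, the file-mode flow described above amounts to parsing the JSON the extension writes. The sketch below is illustrative only: it assumes the default output path `/tmp/vault/secret.json` and the Vault API `Secret` schema (payload under a top-level `data` field, nested one level deeper for KV v2); the `handler` entrypoint name is hypothetical.

```python
import json

# Default path the extension writes to when VAULT_SECRET_FILE is unset.
SECRET_FILE = "/tmp/vault/secret.json"

def read_vault_secret(path=SECRET_FILE):
    """Parse the JSON written by the extension (Vault API Secret schema)."""
    with open(path) as f:
        secret = json.load(f)
    # KV v1 and dynamic secrets put the payload under "data";
    # KV v2 nests it one level deeper, under data.data.
    data = secret.get("data", {})
    return data.get("data", data)

def handler(event, context):  # hypothetical Lambda entrypoint
    creds = read_vault_secret()
    # Use creds["username"] / creds["password"] etc. here; never log them.
    return {"keys": sorted(creds)}
```

Because the extension writes the file once per execution environment, code like this should treat the values as potentially stale, per the Limitations section.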
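In proxy mode, application code simply issues plain HTTP requests to `http://127.0.0.1:8200` and lets the extension inject authentication. A minimal helper that builds such a request, including the opt-in `X-Vault-Cache-Control` header from the Caching section, might look like the following; the secret path is a made-up example, and nothing here contacts a real Vault.

```python
from urllib.request import Request

PROXY_ADDR = "http://127.0.0.1:8200"  # the extension's local proxy server

def build_proxy_request(secret_path, cache="cache"):
    """Build a GET request for the local proxy.

    cache: "cache" (serve from cache on a hit), "recache" (skip the
    lookup but refresh the cached entry), or "nocache" (bypass caching).
    No Vault token is set; the extension adds authentication itself.
    """
    url = f"{PROXY_ADDR}/v1/{secret_path.lstrip('/')}"
    return Request(url, headers={"X-Vault-Cache-Control": cache}, method="GET")

# Sending this with urllib would hit the proxy, which forwards to VAULT_ADDR.
req = build_proxy_request("secret/data/myapp/config")
```

Only `GET` requests carrying the `cache` header participate in caching, so helpers like this make the opt-in explicit at each call site.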
{"questions":"vault page title Configure Vault ServiceNow Credential Resolver MID server properties Configuring the Vault credential resolver layout docs This section documents the configurables for the Vault ServiceNow Credential Resolver","answers":"---\nlayout: docs\npage_title: Configure Vault ServiceNow Credential Resolver\ndescription: This section documents the configurables for the Vault ServiceNow Credential Resolver.\n---\n\n# Configuring the Vault credential resolver\n\n## MID server properties\n\nThe following [properties] are supported by the Vault Credential Resolver:\n\n- `mid.external_credentials.vault.address` `(string: \"\")` - Address of Vault Agent as resolveable by the MID server.\n  For example, if Vault Agent is on the same server as the MID server it could be `https:\/\/127.0.0.1:8200`.\n\n- `mid.external_credentials.vault.ca` `(string: \"\")` - The CA certificate to trust for TLS in PEM format. If unset,\n  the system's trusted CAs will be used.\n\n- `mid.external_credentials.vault.tls_skip_verify` `(string: \"\")` - When set to true, skips verification of the Vault server\n  TLS certificiate. 
Setting this to true is not recommended for production.\n\n[properties]: https:\/\/docs.servicenow.com\/bundle\/quebec-servicenow-platform\/page\/product\/mid-server\/reference\/r_MIDServerProperties.html#t_SetMIDServerProperties\n\n## Configuring discovery credentials\n\nTo consume Vault credentials from your MID server, you will need to:\n\n* Create a secret in Vault\n* Configure the resolver to use that secret\n\n### Creating a secret in Vault\n\nThe credential resolver supports reading credentials from the following secret engines:\n\n* [Active Directory](\/vault\/docs\/secrets\/ad)\n* [AD\/OpenLDAP](\/vault\/docs\/secrets\/ldap)\n* [AWS](\/vault\/docs\/secrets\/aws)\n* [KV v1](\/vault\/docs\/secrets\/kv\/kv-v1)\n* [KV v2](\/vault\/docs\/secrets\/kv\/kv-v2)\n\nWhen creating KV secrets, you must use the following keys for each component\nto ensure it is correctly mapped to ServiceNow's credential fields:\n\nKey         | Description                            | Supported aliases\n------------|----------------------------------------|------------------\nusername    | The username                           | access_key\npassword    | The password                           | secret_key, current_password\nprivate_key | The private SSH key                    |\npassphrase  | The passphrase for the private SSH key |\n\nMost ServiceNow credential types will expect at least a username and either\na password or a private key. 
To help surface errors early, the credential\nresolver validates that a username and password are present for:\n\n* aws\n* basic\n* jdbc\n* jms\n* ssh_password\n* vmware\n* windows\n\nThe credential resolver expects the following types to specify at least\na username and a private key:\n\n* api_key\n* cfg_chef_credentials\n* infoblox\n* sn_cfg_ansible\n* sn_disco_certmgmt_certificate_ca\n* ssh_private_key\n\nFor SNMPv3 credentials, the credential resolver can accept up to five values:\n\n* username\n* auth-protocol\n* auth-key\n* privacy-protocol\n* privacy-key\n\nDepending on the configuration of the SNMP endpoint, the username at least will always be required. See below for different SNMP endpoint configurations:\n\nLevel         | Authentication | Encryption | What Happens\n--------------|----------------|------------|------------------------\nnoAuthNoPriv  | Username       | None       | Username match for auth\nauthNoPriv    | MD5 or SHA     | None       | Auth based on HMAC-MD5 or HMAC-SHA algorithms\nauthPriv      | MD5 or SHA     | DES        | Auth based on HMAC-MD5 or HMAC-SHA algorithms; provides DES 56-bit encryption based on (CBC)-DES (DES-56) \n\n### Configuring the resolver to use a secret\n\n<ImageConfig hideBorder caption=\"Vault credential resolver\">\n\n![Partial screenshot of the ServiceNow UI showing the search dialog for adding a Vault configuration by name](\/img\/service-now\/vault-credential-resolver-fqcn.png)\n\n<\/ImageConfig>\n\nIn the ServiceNow UI:\n\n1. Navigate to \"Discovery - Credentials &rarr; New\".\n1. Choose a type from the list.\n1. Select \"External credential store\".\n1. Provide a fully qualified collection name (FQCN):\n    - **Xanadu (Q4-2024) or newer**:  use `com.snc.discovery.CredentialResolver`\n    - **Versions prior to Xanadu (Q4-2024)**: leave blank or use \"None\"\n1. Provide a meaningful name for the resolver.\n1. 
Set \"Credential ID\" to the\n   [ReadSecretVersion endpoint](\/vault\/api-docs\/secret\/kv\/kv-v2#read-secret-version)\n   of your secrets plugin and credential. For example, the endpoint\n   for a secret stored on the path `ssh` under a KV v2 secret engine mounted at\n   `secret` is `\/secret\/data\/ssh`.\n1. Click \"Test credential\" then select a MID server and target to test your\n   configuration.","site":"vault","answers_cleaned":"    layout  docs page title  Configure Vault ServiceNow Credential Resolver description  This section documents the configurables for the Vault ServiceNow Credential Resolver         Configuring the Vault credential resolver     MID server properties  The following  properties  are supported by the Vault Credential Resolver      mid external credentials vault address    string         Address of Vault Agent as resolveable by the MID server    For example  if Vault Agent is on the same server as the MID server it could be  https   127 0 0 1 8200       mid external credentials vault ca    string         The CA certificate to trust for TLS in PEM format  If unset    the system s trusted CAs will be used      mid external credentials vault tls skip verify    string         When set to true  skips verification of the Vault server   TLS certificiate  Setting this to true is not recommended for production    properties   https   docs servicenow com bundle quebec servicenow platform page product mid server reference r MIDServerProperties html t SetMIDServerProperties     Configuring discovery credentials  To consume Vault credentials from your MID server  you will need to     Create a secret in Vault   Configure the resolver to use that secret      Creating a secret in Vault  The credential resolver supports reading credentials from the following secret engines      Active Directory   vault docs secrets ad     AD OpenLDAP   vault docs secrets ldap     AWS   vault docs secrets aws     KV v1   vault docs secrets kv kv v1     KV v2   vault 
docs secrets kv kv v2   When creating KV secrets  you must use the following keys for each component to ensure it is correctly mapped to ServiceNow s credential fields   Key           Description                              Supported aliases                                                                          username      The username                             access key password      The password                             secret key  current password private key   The private SSH key                      passphrase    The passphrase for the private SSH key    Most ServiceNow credential types will expect at least a username and either a password or a private key  To help surface errors early  the credential resolver validates that a username and password are present for     aws   basic   jdbc   jms   ssh password   vmware   windows  The credential resolver expects the following types to specify at least a username and a private key     api key   cfg chef credentials   infoblox   sn cfg ansible   sn disco certmgmt certificate ca   ssh private key  For SNMPv3 credentials  the credential resolver can accept up to five values     username   auth protocol   auth key   privacy protocol   privacy key  Depending on the configuration of the SNMP endpoint  the username at least will always be required  See below for different SNMP endpoint configurations   Level           Authentication   Encryption   What Happens                                                                       noAuthNoPriv    Username         None         Username match for auth authNoPriv      MD5 or SHA       None         Auth based on HMAC MD5 or HMAC SHA algorithms authPriv        MD5 or SHA       DES          Auth based on HMAC MD5 or HMAC SHA algorithms  provides DES 56 bit encryption based on  CBC  DES  DES 56        Configuring the resolver to use a secret   ImageConfig hideBorder caption  Vault credential resolver      Partial screenshot of the ServiceNow UI showing the search dialog 
for adding a Vault configuration by name   img service now vault credential resolver fqcn png     ImageConfig   In the ServiceNow UI   1  Navigate to  Discovery   Credentials  rarr  New   1  Choose a type from the list  1  Select  External credential store   1  Provide a fully qualified collection name  FQCN           Xanadu  Q4 2024  or newer     use  com snc discovery CredentialResolver          Versions prior to Xanadu  Q4 2024     leave blank or use  None  1  Provide a meaningful name for the resolver  1  Set  Credential ID  to the     ReadSecretVersion endpoint   vault api docs secret kv kv v2 read secret version     of your secrets plugin and credential  For example  the endpoint    for a secret stored on the path  ssh  under a KV v2 secret engine mounted at     secret  is   secret data ssh   1  Click  Test credential  then select a MID server and target to test your    configuration "}
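The key-to-field mapping in the table above (with its `access_key`, `secret_key`, and `current_password` aliases) can be sketched as a small normalization step. This is an illustrative reimplementation of the mapping rules only, not the resolver's actual code:

```python
# Aliases accepted for each canonical credential field, per the table above.
ALIASES = {
    "username": ["access_key"],
    "password": ["secret_key", "current_password"],
    "private_key": [],
    "passphrase": [],
}

def normalize_secret(data):
    """Map a Vault secret's keys onto ServiceNow's credential fields.

    The canonical key wins if present; otherwise the first matching
    alias is used. Unrecognized keys are dropped.
    """
    out = {}
    for field, aliases in ALIASES.items():
        for key in [field] + aliases:
            if key in data:
                out[field] = data[key]
                break
    return out
```

For example, an AWS secret exposing `access_key`/`secret_key` maps onto the `username`/`password` fields that most ServiceNow credential types expect.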
{"questions":"vault Installation steps for the Vault ServiceNow Credential Resolver page title Install Vault ServiceNow Credential Resolver Installing the Vault credential resolver layout docs Prerequisites","answers":"---\nlayout: docs\npage_title: Install Vault ServiceNow Credential Resolver\ndescription: Installation steps for the Vault ServiceNow Credential Resolver.\n---\n\n# Installing the Vault credential resolver\n\n## Prerequisites\n\n* ServiceNow version Quebec+ (untested on previous versions)\n* MID server version Quebec+ (untested on previous versions)\n* Discovery and external credential plugins activated on ServiceNow\n* Working Vault deployment accessible from the MID server\n\n## Installing Vault agent\n\n* Select your desired auth method from Agent's [supported auth methods](\/vault\/docs\/agent-and-proxy\/autoauth\/methods)\n  and set it up in Vault\n  * For example, to set up AppRole auth and a role called `role1` with the `demo` policy attached:\n\n    ```bash\n    vault auth enable approle\n    vault policy write demo - <<EOF\n    path \"secret\/*\" {\n      capabilities = [\"read\"]\n    }\n    EOF\n    vault write auth\/approle\/role\/role1 bind_secret_id=true token_policies=demo\n    ```\n\n  * To get the files required for the example Agent config below, you can then\n    run:\n\n    ```bash\n    echo -n $(vault read -format json auth\/approle\/role\/role1\/role-id | jq -r '.data.role_id') > \/path\/to\/roleID\n    echo -n $(vault write -format json -f auth\/approle\/role\/role1\/secret-id | jq -r '.data.secret_id') > \/path\/to\/secretID\n    ```\n\n* Create an `agent.hcl` config file. 
Your exact configuration may vary, but you\n  must set `cache.use_auto_auth_token = true`, and the `listener`, `vault` and\n  `auto_auth` blocks are also required to set up a working Agent, e.g.:\n\n  ```hcl\n  listener \"tcp\" {\n    address = \"127.0.0.1:8200\"\n    tls_disable = false\n    tls_cert_file = \"\/path\/to\/cert.pem\"\n    tls_key_file = \"\/path\/to\/key.pem\"\n  }\n\n  cache {\n    use_auto_auth_token = true\n  }\n\n  vault {\n    address = \"http:\/\/vault.example.com:8200\"\n  }\n\n  auto_auth {\n      method {\n          type = \"approle\"\n          config = {\n              role_id_file_path = \"\/path\/to\/roleID\"\n              secret_id_file_path = \"\/path\/to\/secretID\"\n              remove_secret_id_file_after_reading = false\n          }\n      }\n  }\n  ```\n\n* Install Vault Agent as a service running `vault agent -config=\/path\/to\/agent.hcl`\n  * Documentation for Windows service installation [here](\/vault\/docs\/agent-and-proxy\/agent\/winsvc)\n\n## Uploading JAR file to MID server\n\n<Warning heading=\"Use the ServiceNow app store to install Vault Credential Resolver\">\n  The steps documented below are for **pre ServiceNow UTAH versions**.\n\n  As of ServiceNow version UTAH, use the \"HashiCorp Vault Credential Resolver\" App \n  from the ServiceNow App store to install the Vault Credential Resolver and verify\n  the jar file installed is `vault-servicenow-credential-resolver`. If you wish to\n  use a custom name, you must manually rename the deployed jar. 
\n<\/Warning>\n\n* Download the latest version of the Vault Credential Resolver JAR file from\n  [releases.hashicorp.com](https:\/\/releases.hashicorp.com\/vault-servicenow-credential-resolver\/)\n* In ServiceNow, navigate to \"MID server - JAR files\" -> New\n  * Manage Attachments -> upload Vault Credential Resolver JAR\n  * Fill in name, version etc as desired\n  * Click Submit\n* Navigate to \"MID server - Properties\" -> New\n  * Set Name: `mid.external_credentials.vault.address`, Value: Address of Vault\n    Agent listener from previous step, e.g. `http:\/\/127.0.0.1:8200`\n  * **Optional:** Set the property `mid.external_credentials.vault.ca` to the\n    trusted CA in PEM format if using TLS between the MID server and Vault\n    Agent with a self-signed certificate.\n\n## Next steps\n\nSee [configuration](\/vault\/docs\/platform\/servicenow\/configuration) for details on\nconfiguring the resolver and using credentials for discovery.","site":"vault","answers_cleaned":"    layout  docs page title  Install Vault ServiceNow Credential Resolver description  Installation steps for the Vault ServiceNow Credential Resolver         Installing the Vault credential resolver     Prerequisites    ServiceNow version Quebec   untested on previous versions    MID server version Quebec   untested on previous versions    Discovery and external credential plugins activated on ServiceNow   Working Vault deployment accessible from the MID server     Installing Vault agent    Select your desired auth method from Agent s  supported auth methods   vault docs agent and proxy autoauth methods    and set it up in Vault     For example  to set up AppRole auth and a role called  role1  with the  demo  policy attached          bash     vault auth enable approle     vault policy write demo     EOF     path  secret            capabilities     read             EOF     vault write auth approle role role1 bind secret id true token policies demo              To get the files required for the 
example Agent config below  you can then     run          bash     echo  n   vault read  format json auth approle role role1 role id   jq  r   data role id      path to roleID     echo  n   vault write  format json  f auth approle role role1 secret id   jq  r   data secret id      path to secretID            Create an  agent hcl  config file  Your exact configuration may vary  but you   must set  cache use auto auth token   true   and the  listener    vault  and    auto auth  blocks are also required to set up a working Agent  e g         hcl   listener  tcp        address    127 0 0 1 8200      tls disable   false     tls cert file     path to cert pem      tls key file     path to key pem         cache       use auto auth token   true        vault       address    http   vault example com 8200         auto auth         method             type    approle            config                   role id file path     path to roleID                secret id file path     path to secretID                remove secret id file after reading   false                                  Install Vault Agent as a service running  vault agent  config  path to agent hcl      Documentation for Windows service installation  here   vault docs agent and proxy agent winsvc      Uploading JAR file to MID server   Warning heading  Use the ServiceNow app store to install Vault Credential Resolver     The steps documented below are for   pre ServiceNow UTAH versions       As of ServiceNow version UTAH  use the  HashiCorp Vault Credential Resolver  App    from the ServiceNow App store to install the Vault Credential Resolver and verify   the jar file installed is  vault servicenow credential resolver   If you wish to   use a custom name  you must manually rename the deployed jar     Warning     Download the latest version of the Vault Credential Resolver JAR file from    releases hashicorp com  https   releases hashicorp com vault servicenow credential resolver     In ServiceNow  navigate to  
MID server   JAR files     New     Manage Attachments    upload Vault Credential Resolver JAR     Fill in name  version etc as desired     Click Submit   Navigate to  MID server   Properties     New     Set Name   mid external credentials vault address   Value  Address of Vault     Agent listener from previous step  e g   http   127 0 0 1 8200        Optional    Set the property  mid external credentials vault ca  to the     trusted CA in PEM format if using TLS between the MID server and Vault     Agent with a self signed certificate      Next steps  See  configuration   vault docs platform servicenow configuration  for details on configuring the resolver and using credentials for discovery "}
{"questions":"vault This guide assumes you are installing the Vault EKM Provider for the first time page title Install the Vault EKM Provider layout docs For upgrade instructions see upgrading vault docs platform mssql upgrading Installation steps for the Vault EKM Provider for Microsoft SQL Server Installing the Vault EKM provider","answers":"---\nlayout: docs\npage_title: Install the Vault EKM Provider\ndescription: Installation steps for the Vault EKM Provider for Microsoft SQL Server.\n---\n\n# Installing the Vault EKM provider\n\nThis guide assumes you are installing the Vault EKM Provider for the first time.\nFor upgrade instructions, see [upgrading](\/vault\/docs\/platform\/mssql\/upgrading).\n\n## Prerequisites\n\n* Vault Enterprise server 1.9+ with a license for the Advanced Data Protection Key Management module\n* Microsoft Windows Server operating system\n* Microsoft SQL Server 2012 or newer for Windows (Windows SQL Server Express and SQL Server for Linux [do not support EKM][linux-ekm])\n* An authenticated Vault client\n\nTo check your Vault version and license, you can run:\n\n```bash\nvault status\nvault license get -format=json\n```\n\nThe list of features should include \"Key Management Transparent Data Encryption\".\n\n[linux-ekm]: https:\/\/docs.microsoft.com\/en-us\/sql\/linux\/sql-server-linux-editions-and-components-2019?view=sql-server-ver15#Unsupported\n\n## Installing the Vault EKM provider\n\n## Configuring Vault\n\nThe EKM provider requires AppRole auth and the Transit secret engine to be set up\non the Vault server. The steps below configure Vault so that the\nEKM provider can use it.\n\n-> **Note:** rsa-2048 is currently the only supported key type.\n\n1. 
Set up AppRole auth:\n\n    ```bash\n    vault auth enable approle\n    vault write auth\/approle\/role\/ekm-encryption-key-role \\\n        token_ttl=20m \\\n        max_token_ttl=30m \\\n        token_policies=tde-policy\n    ```\n\n-> **Note:** After authenticating to Vault with the AppRole, the EKM provider\nwill re-use the token it receives until it expires, at which point it will\nauthenticate using the AppRole credentials again; it will not attempt to renew\nits token. The example AppRole configuration here will work for this, but keep\nthat in mind if you choose to use a different AppRole configuration.\n\n1. Retrieve the AppRole ID and secret ID for use later when configuring SQL Server:\n\n    ```bash\n    vault read auth\/approle\/role\/ekm-encryption-key-role\/role-id\n    vault write -f auth\/approle\/role\/ekm-encryption-key-role\/secret-id\n    ```\n\n1. Enable the transit secret engine and create a key:\n\n    ```bash\n    vault secrets enable transit\n    vault write -f transit\/keys\/ekm-encryption-key type=\"rsa-2048\"\n    ```\n\n1. Create a policy for the Vault EKM provider to use. The following policy has\n    the minimum required permissions:\n\n    ```bash\n    vault policy write tde-policy -<<EOF\n    path \"transit\/keys\/ekm-encryption-key\" {\n        capabilities = [\"create\", \"read\", \"update\", \"delete\"]\n    }\n\n    path \"transit\/keys\" {\n        capabilities = [\"list\"]\n    }\n\n    path \"transit\/encrypt\/ekm-encryption-key\" {\n        capabilities = [\"update\"]\n    }\n\n    path \"transit\/decrypt\/ekm-encryption-key\" {\n        capabilities = [\"update\"]\n    }\n\n    path \"sys\/license\/status\" {\n        capabilities = [\"read\"]\n    }\n    EOF\n    ```\n\n## Configuring SQL server\n\nThe remaining steps are all run on the database server.\n\n### Install the EKM provider on the server\n\n1. 
Download and run the latest Vault EKM provider installer from\n  [releases.hashicorp.com](https:\/\/releases.hashicorp.com\/vault-mssql-ekm-provider\/)\n1. Enter your Vault server's address when prompted and complete the installer\n1. If you need to configure non-default namespace or mount paths for your AppRole and\n   Transit engines, see [configuration](\/vault\/docs\/platform\/mssql\/configuration).\n\n### Configure the EKM provider using SQL\n\nOpen Microsoft SQL Server Management Studio, and run the queries below to complete\ninstallation.\n\n1. Enable the EKM feature and create a cryptographic provider using the folder\n   you just installed the EKM provider into.\n\n    ```sql\n    -- Enable advanced options\n    USE master;\n    GO\n\n    EXEC sp_configure 'show advanced options', 1;\n    GO\n\n    RECONFIGURE;\n    GO\n\n    -- Enable EKM provider\n    EXEC sp_configure 'EKM provider enabled', 1;\n    GO\n\n    RECONFIGURE;\n    GO\n\n    CREATE CRYPTOGRAPHIC PROVIDER TransitVaultProvider\n    FROM FILE = 'C:\\Program Files\\HashiCorp\\Transit Vault EKM Provider\\TransitVaultEKM.dll'\n    GO\n    ```\n\n1. Next, create credentials for an admin to use EKM with your AppRole role and\n    secret ID from above:\n\n    ```sql\n    -- Replace <approle-role-id> and <approle-secret-id> with the values from\n    -- the earlier vault commands:\n    -- vault read auth\/approle\/role\/ekm-encryption-key\/role-id\n    -- vault write -f auth\/approle\/role\/ekm-encryption-key\/secret-id\n    CREATE CREDENTIAL TransitVaultCredentials\n        WITH IDENTITY = '<approle-role-id>',\n        SECRET = '<approle-secret-id>'\n    FOR CRYPTOGRAPHIC PROVIDER TransitVaultProvider;\n    GO\n\n    -- Replace <domain>\\<login> with the SQL Server administrator's login\n    ALTER LOGIN \"<domain>\\<login>\" ADD CREDENTIAL TransitVaultCredentials;\n    ```\n\n1. 
You can now create an asymmetric key using the transit key set up earlier:\n\n    ```sql\n    CREATE ASYMMETRIC KEY TransitVaultAsymmetric\n    FROM PROVIDER TransitVaultProvider\n    WITH\n    CREATION_DISPOSITION = OPEN_EXISTING,\n    PROVIDER_KEY_NAME = 'ekm-encryption-key';\n    ```\n\n    -> **Note:** This is the first step at which the EKM provider will communicate with Vault. If\n    Vault is misconfigured, this step is likely to fail. See\n    [troubleshooting](\/vault\/docs\/platform\/mssql\/troubleshooting) for tips on specific error codes.\n\n1. Create another login from the new asymmetric key:\n\n    ```sql\n     -- Replace <approle-role-id> and <approle-secret-id> with the values from\n    -- the earlier vault commands again\n    CREATE CREDENTIAL TransitVaultTDECredentials\n        WITH IDENTITY = '<approle-role-id>',\n        SECRET = '<approle-secret-id>'\n    FOR CRYPTOGRAPHIC PROVIDER TransitVaultProvider;\n    GO\n\n    CREATE LOGIN TransitVaultTDELogin\n    FROM ASYMMETRIC KEY TransitVaultAsymmetric;\n    GO\n\n    ALTER LOGIN TransitVaultTDELogin\n    ADD CREDENTIAL TransitVaultTDECredentials;\n    GO\n    ```\n\n1. Finally, you can enable TDE and protect the database encryption key with\n   the asymmetric key managed by Vault's Transit secret engine:\n\n    ```sql\n    CREATE DATABASE TestTDE\n    GO\n\n    USE TestTDE;\n    GO\n\n    CREATE DATABASE ENCRYPTION KEY\n    WITH ALGORITHM = AES_256\n    ENCRYPTION BY SERVER ASYMMETRIC KEY TransitVaultAsymmetric;\n    GO\n\n    ALTER DATABASE TestTDE\n    SET ENCRYPTION ON;\n    GO\n    ```\n\n1. 
Check the status of database encryption using the following queries:\n    ```sql\n    SELECT * FROM sys.dm_database_encryption_keys;\n\n    SELECT (SELECT name FROM sys.databases WHERE database_id = k.database_id) as name,\n        encryption_state, key_algorithm, key_length,\n        encryptor_type, encryption_state_desc, encryption_scan_state_desc FROM sys.dm_database_encryption_keys k;\n\n    ```\n\n## Key rotation\n\nSee [key rotation](\/vault\/docs\/platform\/mssql\/rotation) for guidance on rotating\nthe encryption keys.","site":"vault","answers_cleaned":"    layout  docs page title  Install the Vault EKM Provider description  Installation steps for the Vault EKM Provider for Microsoft SQL Server         Installing the Vault EKM provider  This guide assumes you are installing the Vault EKM Provider for the first time  For upgrade instructions  see  upgrading   vault docs platform mssql upgrading       Prerequisites    Vault Enterprise server 1 9  with a license for the Advanced Data Protection Key Management module   Microsoft Windows Server operating system   Microsoft SQL Server 2012 or newer for Windows  Windows SQL Server Express and SQL Server for Linux  does not support EKM  linux ekm     An authenticated Vault client  To check your Vault version and license  you can run      bash vault status vault license get  format json      The list of features should include  Key Management Transparent Data Encryption     linux ekm   https   docs microsoft com en us sql linux sql server linux editions and components 2019 view sql server ver15 Unsupported     Installing the Vault EKM provider     Configuring Vault  The EKM provider requires AppRole auth and the Transit secret engine to be setup on the Vault server  The steps below can be used to configure Vault ready for the EKM provider to use it        Note    rsa 2048 is currently the only supported key type   1  Set up AppRole auth          bash     vault auth enable approle     vault write auth approle role 
ekm encryption key role           token ttl 20m           max token ttl 30m           token policies tde policy               Note    After authenticating to Vault with the AppRole  the EKM provider will re use the token it receives until it expires  at which point it will authenticate using the AppRole credentials again  it will not attempt to renew its token  The example AppRole configuraiton here will work for this  but keep that in mind if you choose to use a different AppRole configuration   1  Retrieve the AppRole ID and secret ID for use later when configuring SQL Server          bash     vault read auth approle role ekm encryption key role role id     vault write  f auth approle role ekm encryption key role secret id          1  Enable the transit secret engine and create a key          bash     vault secrets enable transit     vault write  f transit keys ekm encryption key type  rsa 2048           1  Create a policy for the Vault EKM provider to use  The following policy has     the minimum required permissions          bash     vault policy write tde policy    EOF     path  transit keys ekm encryption key            capabilities     create    read    update    delete              path  transit keys            capabilities     list              path  transit encrypt ekm encryption key            capabilities     update              path  transit decrypt ekm encryption key            capabilities     update              path  sys license status            capabilities     read             EOF             Configuring SQL server  The remaining steps are all run on the database server       Install the EKM provider on the server  1  Download and run the latest Vault EKM provider installer from    releases hashicorp com  https   releases hashicorp com vault mssql ekm provider   1  Enter your Vault server s address when prompted and complete the installer 1  If you need to configure non default namespace or mount paths for your AppRole and    Transit engines  
see  configuration   vault docs platform mssql configuration        Configure the EKM provider using SQL  Open Microsoft SQL Server Management Studio  and run the queries below to complete installation   1  Enable the EKM feature and create a cryptographic provider using the folder    you just installed the EKM provider into          sql        Enable advanced options     USE master      GO      EXEC sp configure  show advanced options   1      GO      RECONFIGURE      GO         Enable EKM provider     EXEC sp configure  EKM provider enabled   1      GO      RECONFIGURE      GO      CREATE CRYPTOGRAPHIC PROVIDER TransitVaultProvider     FROM FILE    C  Program Files HashiCorp Transit Vault EKM Provider TransitVaultEKM dll      GO          1  Next  create credentials for an admin to use EKM with your AppRole role and     secret ID from above          sql        Replace  approle role id  and  approle secret id  with the values from        the earlier vault commands         vault read auth approle role ekm encryption key role id        vault write  f auth approle role ekm encryption key secret id     CREATE CREDENTIAL TransitVaultCredentials         WITH IDENTITY     approle role id            SECRET     approle secret id       FOR CRYPTOGRAPHIC PROVIDER TransitVaultProvider      GO         Replace  domain   login  with the SQL Server administrator s login     ALTER LOGIN   domain   login   ADD CREDENTIAL TransitVaultCredentials           1  You can now create an asymmetric key using the transit key set up earlier          sql     CREATE ASYMMETRIC KEY TransitVaultAsymmetric     FROM PROVIDER TransitVaultProvider     WITH     CREATION DISPOSITION   OPEN EXISTING      PROVIDER KEY NAME    ekm encryption key                     Note    This is the first step at which the EKM provider will communicate with Vault  If     Vault is misconfigured  this step is likely to fail  See      troubleshooting   vault docs platform mssql troubleshooting  for tips on specific error 
codes   1  Create another login from the new asymmetric key          sql         Replace  approle role id  and  approle secret id  with the values from        the earlier vault commands again     CREATE CREDENTIAL TransitVaultTDECredentials         WITH IDENTITY     approle role id            SECRET     approle secret id       FOR CRYPTOGRAPHIC PROVIDER TransitVaultProvider      GO      CREATE LOGIN TransitVaultTDELogin     FROM ASYMMETRIC KEY TransitVaultAsymmetric      GO      ALTER LOGIN TransitVaultTDELogin     ADD CREDENTIAL TransitVaultTDECredentials      GO          1  Finally  you can enable TDE and protect the database encryption key with    the asymmetric key managed by Vault s Transit secret engine          sql     CREATE DATABASE TestTDE     GO      USE TestTDE      GO      CREATE DATABASE ENCRYPTION KEY     WITH ALGORITHM   AES 256     ENCRYPTION BY SERVER ASYMMETRIC KEY TransitVaultAsymmetric      GO      ALTER DATABASE TestTDE     SET ENCRYPTION ON      GO          1  Check the status of database encryption using the following queries         sql     SELECT   FROM sys dm database encryption keys       SELECT  SELECT name FROM sys databases WHERE database id   k database id  as name          encryption state  key algorithm  key length          encryptor type  encryption state desc  encryption scan state desc FROM sys dm database encryption keys k               Key rotation  See  key rotation   vault docs platform mssql rotation  for guidance on rotating the encryption keys "}
{"questions":"vault Software Release date Mar 23 2022 Vault 1 10 0 release notes layout docs This page contains release notes for Vault 1 10 0 page title 1 10 0","answers":"---\nlayout: docs\npage_title: 1.10.0\ndescription: |-\n  This page contains release notes for Vault 1.10.0\n---\n\n# Vault 1.10.0 release notes\n\n**Software Release date:** Mar 23, 2022\n\n**Summary:** Vault version 1.10.0 offers features and enhancements that improve the user experience while closing the loop on key issues previously encountered by our customers. We are providing a summary of these improvements in these release notes.\n\nWe encourage you to upgrade to the latest release to take advantage of the new benefits that we are providing. Additionally, with this latest release, we offer solutions to critical feature gaps that have been identified previously. For further information on product improvements, including a comprehensive list of bug fixes, please refer to the [Changelog](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/CHANGELOG.md) within the Vault 1.10.0 release.\n\nSome of these enhancements and changes in this release include:\n\n- Ability to view client counts per auth method and changes to clients over months, providing more granular visibility into clients.\n- Extended the `sys\/remount` API endpoint to support moving secrets engines and auth method mounts from one location to another, within a namespace or across namespaces.\n- Improved security posture that includes MFA on login for Vault Community Edition customers.\n- Ability to implicitly achieve consistency via tokens.\n- Support of PKCE on Vault\u2019s OIDC auth method with Telemetry support for the Vault Agent.\n- Improvement of key areas and parity to support using Terraform Provider with Vault.\n\n## New features\n\nThis section describes the new features introduced as part of Vault 1.10.0.\n\n### Multi-Factor authentication (MFA) for Vault Community Edition\n\nVault has had support for the [Step-up 
Enterprise MFA](\/vault\/docs\/enterprise\/mfa) as part of its Enterprise edition. The Step-up Enterprise MFA allows requiring MFA on login, or for step-up access to sensitive resources in Vault.\n\nWith Vault 1.10.0, MFA as part of [login](\/vault\/docs\/auth\/login-mfa) is now supported for Vault Community Edition. This demonstrates HashiCorp\u2019s thought leadership in security and its continued endeavor to enable all Vault users to employ strong security policies with Vault.\n\n~> **Note:** The Legacy MFA in Vault Community Edition is a [deprecated](\/vault\/docs\/deprecation) feature and will be removed in Vault 1.11.\n\nRefer to the [Login MFA FAQ](\/vault\/docs\/auth\/login-mfa\/faq) to understand the various MFA workflows that are supported in Vault 1.10.0.\n\n### Vault OIDC provider with PKCE support\n\nVault\u2019s support to act as an OIDC provider is now generally available. Furthermore, Vault\u2019s OIDC provider functionality can now support PKCE for authorization code flow as well. Thanks to all the excellent community feedback received, we have simplified the user experience around configuration of OIDC provider functionality.\n\n### Caching support for Vault lambda extension\n\nWith version 0.6.0, the Vault Lambda Extension supports [caching](https:\/\/github.com\/hashicorp\/vault-lambda-extension#caching) in its local proxy server so that not every request is proxied to Vault; the cache expiry time can be configured and the cache invalidated as needed.\n\n### Terraform provider for Vault\n\nWe have introduced three new resources to enable configuration of the [KMIP secrets engine](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest\/docs\/resources\/kmip_secret_backend) using the Terraform Provider for Vault. In addition, frequent releases on the Terraform Provider for Vault have been incorporating the ability to configure newer resources and data sources. 
Please read the [documentation](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest\/docs) for more details.\n\n### KV secrets engine v2 patch operations\n\nWe now support an additional method for managing [KV v2 secrets](\/vault\/api-docs\/secret\/kv\/kv-v2) to maintain least privilege security in certain types of automated environments. This feature creates a new PATCH capability that enables partial updates to KV v2 secrets without requiring the READ privilege to the entire endpoint for an entity.\n\n### DB2 dynamic secrets support\n\nVault operators can leverage the openldap secrets engine to manage credentials for IBM DB2 and the LDAP security plugin for Db2. This allows Db2 to offload authentication and authorization to the LDAP security plugin and allows Vault to manage static credentials or even generate dynamic users. For more details, refer to the [IBM Db2 Credentials Management](\/vault\/tutorials\/secrets-management\/ibm-db2-openldap) tutorial.\n\n### Temporal transit key rotation\n\nProper key management includes occasionally rotating encryption keys to reduce the risks of a nonce reuse and opportunities for keys to be compromised. Previously, there was no automated way to rotate keys that was native to Vault. Now, we have provided a new configuration element on transit keys and tokenization transform configurations where a time interval triggers the keys to automatically rotate after the interval has lapsed.\n\n### PKI HSM forwarding\n\nTo address security and compliance needs, customers may require that keys be either created or stored within Hardware Security Modules (HSMs). Vault 1.10.0 introduces an accommodation for this requirement with regards to the PKI Secrets Engine. We now support offloading selected PKI operations to HSMs, in particular allowing customers to both generate new PKI key pairs and sign\/verify some certificate workflows. 
All of these operations are conducted in a way that never allows the private key material to leave the secure confines of the HSM itself.\n\n### AWS and AKV KMS forwarding\n\nThe work done above to support HSM-backed PKI operations inspired us to consider what other key possession paradigms we could support. This led us to extend the implementation to support Cloud Key Management Systems in addition to HSMs. In Vault 1.10.0, users may generate new PKI key pairs and perform sign\/verify certificate workflows, all with those keys never leaving the cloud KMS itself. Vault 1.10.0 provides support for AWS Key Management Service and Azure Key Vault Key Management Service.\n\n### Server side consistent tokens\n\nVault\u2019s [eventual consistency](\/vault\/docs\/enterprise\/consistency) model precludes read-after-write guarantees when clients interact with performance standbys or performance replication clusters. The [Client Controlled Consistency](\/vault\/docs\/enterprise\/consistency#vault-1-7-mitigations) mitigations supported with Vault 1.7 provide ways to achieve consistency through client modifications or by using the agent for proxied requests, which is not possible in all cases. The Server Side Consistent Tokens feature provides an implicit way to achieve consistency by embedding the minimum Write-Ahead-Log state information in the Service tokens returned from logins or token-create requests. This feature introduces changes in the token format and the new tokens will be the default tokens starting in Vault 1.10.0. 
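As an illustration of the format change (a hedged sketch; every token value below is fabricated), new-format service tokens carry an `hvs.` prefix, while legacy service tokens begin with `s.`:

```shell
#!/bin/sh
# Classify a Vault token by its prefix. The sample values are made up;
# real tokens come from login or token-create responses.
classify_token() {
  case "$1" in
    hvs.*) echo "1.10-format service token" ;;
    s.*)   echo "legacy service token" ;;
    hvb.*) echo "1.10-format batch token" ;;
    b.*)   echo "legacy batch token" ;;
    *)     echo "unknown token format" ;;
  esac
}

classify_token "hvs.CAESIFabricatedExampleValue"   # prints: 1.10-format service token
classify_token "s.fabricatedExample"               # prints: legacy service token
```

This is only a recognition aid for operators reading audit logs or debugging during the upgrade window; it does not validate tokens.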
Vault 1.10.0 is backwards compatible with old tokens.\n\nSee [Replication](\/vault\/docs\/configuration\/replication), [Vault Eventual Consistency](\/vault\/docs\/enterprise\/consistency), [Upgrade to 1.10.0](\/vault\/docs\/upgrading\/upgrade-to-1.10.x) and [Server Side Consistent Token FAQ](\/vault\/docs\/faq\/ssct) to understand the various consistency options available with Vault 1.10.0 and the considerations to be aware of prior to selecting an option for your use case.\n\n## Vault agent features\n\n### Support for telemetry\n\nStarting with Vault 1.10.0, the Vault Agent supports a new metrics endpoint and [Telemetry](\/vault\/docs\/agent#telemetry-stanza) metrics around run time, authentication success, authentication failures, cache hits, cache misses, proxy success, and proxy client errors. This Vault Agent Telemetry should greatly help with the retrieval of key operational insights for Vault Agent deployments.\n\n### User-assigned managed identities for auto auth in Azure\n\nWith this [enhancement](\/vault\/docs\/agent\/autoauth\/methods\/azure), users can specify user-assigned managed identities via the `object_id` and `client_id` when configuring Vault agent auto-auth for Azure. This enables users that have more than one user-assigned managed identity associated with their VM to specify which one they'd like to use when authenticating via Vault's Azure auth method. Note that providing these parameters is an \"exclusive or\" operation.\n\n### Quit API endpoint with config\n\nPreviously, for instances where the Agent is a sidecar in a Kubernetes job and the job hangs, you had to either use `shareProcessNamespace: true` for the container so that the process kill signals can be sent, or avoid the sidecar container entirely and solely rely on an init container. 
With this [enhancement](\/vault\/docs\/agent#quit), we have added support for a Quit API endpoint to automatically shut down the Vault Agent, therefore eliminating the need to perform the workarounds.\n\n## Other features and enhancements\n\nThis section describes other features and enhancements introduced as part of the Vault 1.10.0 release.\n\n### Client count improvements\n\nWe have introduced auth mount-based attribution of clients to help better understand where clients are being used within a cluster. This is available via UI and API. This is an enhancement on top of the namespace attribution capability we introduced in Vault 1.9.\n\nWe have also introduced the ability to view changes to clients month over month via the client count API, and made other UI enhancements. Refer to [What is a Client?](\/vault\/docs\/concepts\/client-count) and [Client Count FAQ](\/vault\/docs\/concepts\/client-count\/faq) for more details.\n\n### Mount migration\n\nWe have made improvements to the `sys\/remount` API endpoint to simplify the complexities of moving data, such as secret engine and authentication method configuration from one mount to another, within a namespace or across namespaces. This can help with restructuring namespaces and mounts for various reasons, including migrating mounts from root to other namespaces when transitioning to using namespaces for the first time. For step-by-step instructions, refer to the [Mount Move](\/vault\/tutorials\/enterprise\/mount-move) tutorial.\n\n### Scaling external database plugins\n\nDatabase plugins can now implement [plugin multiplexing](\/vault\/docs\/plugins\/plugin-architecture#plugin-multiplexing) which allows a single plugin process to be used for multiple database connections. Database plugin multiplexing will be enabled on the Oracle Database plugin starting in v0.6.0. 
We will extend this functionality to additional database plugins in subsequent releases.\n\nAny external database plugins that want to adopt multiplexing support will have to update their main.go call from [dbplugin.Serve()](https:\/\/github.com\/hashicorp\/vault\/blob\/sdk\/v0.4.1\/sdk\/database\/dbplugin\/v5\/plugin_server.go#L13) to [dbplugin.ServeMultiplex()](https:\/\/github.com\/hashicorp\/vault\/blob\/sdk\/v0.4.1\/sdk\/database\/dbplugin\/v5\/plugin_server.go#L42). Multiplexable database plugins are compatible with older versions of Vault down to Vault 1.6. Refer to this [Oracle Database PR](https:\/\/github.com\/hashicorp\/vault-plugin-database-oracle\/pull\/74) as an example of the upgrade process.\n\n### Consul secrets engine enhancements\n\nConsul has supported [namespace](\/consul\/docs\/enterprise\/namespaces), [admin partitions](\/consul\/docs\/enterprise\/admin-partitions) and [ACL roles](\/consul\/commands\/acl\/role) for some time now. In this release we have added enhancements to the Consul Secrets engine to support namespace awareness and add admin partition and role support for Consul ACL tokens. This significantly simplifies the integrations for customers who want to achieve a zero trust security posture with both Vault and Consul.\n\n### Using sessionStorage instead of localStorage for the Vault UI\n\nPrior to Vault 1.10.0, the Vault UI used localStorage to store authentication information. The data in localStorage was persisted in browsers and removed only on demand. Now, we have switched the Vault UI to use sessionStorage instead, which ensures that the authentication information is stored in the current browser tab alone, thereby improving security.\n\n### Advanced I\/O handling for transform FPE\n\nThe Transform Secrets Engine allows users to securely encrypt data while providing control over the output format. 
In Vault 1.9, we introduced [additional format fields](\/vault\/docs\/release-notes\/1.9.0#advanced-i-o-handling-for-tranform-fpe-adp-transform) on the templates used for this workflow. In Vault 1.10.0, we have now added those two new fields, `encode_format` and `decode_format`, to the Create Template page on the UI under Advanced Templating.\n\n## Breaking changes\n\nThe following section details breaking changes introduced in Vault 1.10.0.\n\n### LDAP auth method entity alias mapping\n\nIn Vault 1.9, we added support to provide custom user filters through the [userfilter](\/vault\/api-docs\/auth\/ldap#userfilter) parameter. This support changed the way that an entity alias is mapped to an entity. Prior to Vault 1.9, alias names were always based on the [login username](\/vault\/api-docs\/auth\/ldap#username-3) (which in turn is based on the value of the [userattr](\/vault\/api-docs\/auth\/ldap#userattr)). In Vault 1.9, alias names no longer mapped to the login username. Instead, the mapping depends on other config values as well, such as [upndomain](\/vault\/api-docs\/auth\/ldap#upndomain), [binddn](\/vault\/api-docs\/auth\/ldap#binddn), [discoverdn](\/vault\/api-docs\/auth\/ldap#discoverdn), and [userattr](\/vault\/api-docs\/auth\/ldap#userattr).\n\nWith Vault 1.10.0, we re-introduced the option to force the alias name to map to the login username with the optional parameter `username_as_alias`. Users who had the LDAP auth method enabled prior to Vault 1.9 may want to consider setting this to `true` to revert to the old behavior. Otherwise, depending on the other aforementioned config values, logins may generate a new and different entity for an existing user with a previous entity associated in Vault. This in turn affects client counts, since there may be more than one entity tied to this user. 
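Reverting to the pre-1.9 mapping is a one-line configuration change. A minimal sketch, assuming the auth method is mounted at the default `ldap\/` path (other existing LDAP connection parameters may need to be re-supplied when writing the config):

```bash
# Force entity alias names to map to the login username (pre-1.9 behavior).
vault write auth\/ldap\/config username_as_alias=true
```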
The `username_as_alias` flag was also made available in subsequent Vault 1.8.x and Vault 1.9.x releases to allow for this to be set prior to a Vault 1.10.0 upgrade.\n\n## Known issues\n\n### Single Vault follower restart causes election even with established quorum\n\nWe now support Server Side Consistent Tokens (see [Replication](\/vault\/docs\/configuration\/replication), [Vault Eventual Consistency](\/vault\/docs\/enterprise\/consistency), and [Upgrade to 1.10.0](\/vault\/docs\/upgrading\/upgrade-to-1.10.x)), which introduces a new token format that can only be used on nodes running version 1.10 or higher. This new format is enabled by default upon upgrading to the new version. Old format tokens can be read by Vault 1.10.0, but the new format Vault 1.10 tokens cannot be read by older Vault versions.\n\nFor more details, see the [Server Side Consistent Tokens FAQ](\/vault\/docs\/faq\/ssct).\n\nSince service tokens are always created on the leader, as long as the leader is not upgraded before performance standbys, service tokens will be of the old format and still be usable during the upgrade process. However, the usual upgrade process we recommend can't be relied upon to always upgrade the leader last. Due to this known [issue](https:\/\/github.com\/hashicorp\/vault\/issues\/14153), a Vault cluster using Integrated Storage may result in a leader not being upgraded last, and this can trigger a re-election. This re-election can cause the upgraded node to become the leader, resulting in newly created tokens on the leader being unusable on nodes that have not yet been upgraded. Note that this issue does not impact Vault Community Edition users.\n\nWe will have a fix for this issue in Vault 1.10.1. Until this issue is fixed, you may be at risk of having performance standbys unable to service requests until all nodes are upgraded. 
We recommend that you plan for a maintenance window to upgrade.\n\n### Limited policy shows unhelpful message in UI after mounting a secret engine\n\nWhen a user has a policy that allows creating a secret engine but not reading it, after successful creation, the user sees a message `n is undefined` instead of a permissions error. We will have a fix for this issue in an upcoming minor release.\n\n### Adding\/Modifying Duo MFA method for enterprise MFA triggers a panic error\n\nWhen adding or modifying a Duo MFA method for step-up Enterprise MFA using the `sys\/mfa\/method\/duo` endpoint, a panic gets triggered due to a missing schema field. We will have a fix for this in Vault 1.10.1. Until this issue is fixed, avoid making any changes to your Duo configuration if you are upgrading Vault to v1.10.0.\n\n### Sign in to UI using OIDC auth method results in an error\n\nSigning in to the Vault UI using an OIDC auth mount listed in the \"tabs\" of the form will result\nin the following error: \"Authentication failed: role with oidc role_type is not allowed\".\nThe auth mounts listed in the \"tabs\" of the form are those that have [listing_visibility](\/vault\/api-docs\/system\/auth#listing_visibility-1)\nset to `unauth`.\n\nThere is a workaround for this error that will allow you to sign in to Vault using the OIDC\nauth method. Select the \"Other\" tab instead of selecting the specific OIDC auth mount tab.\nFrom there, select \"OIDC\" from the \"Method\" select box and proceed to sign in to Vault.\n\n### Error initializing raft storage type with Windows\n\nWhen trying to start Vault server 1.10.0 on Windows with less than 100GB of free disk space, there is an initialization error with the raft DB related to insufficient space on the disk. See this [issue](https:\/\/github.com\/hashicorp\/vault\/issues\/14895) for details. 
Windows users should wait until 1.10.1 to upgrade.\n\n## Feature deprecations and EOL\n\nPlease refer to the [Deprecation Plans and Notice](\/vault\/docs\/deprecation) page for up-to-date information on feature deprecations and plans. A [Feature Deprecation FAQ](\/vault\/docs\/deprecation\/faq) page is also available to address questions concerning decisions made about Vault feature deprecations.","site":"vault"}
{"questions":"vault Software Release date Oct 12 2022 Vault 1 12 0 release notes layout docs page title 1 12 0 This page contains release notes for Vault 1 12 0","answers":"---\nlayout: docs\npage_title: 1.12.0\ndescription: |-\n  This page contains release notes for Vault 1.12.0\n---\n\n# Vault 1.12.0 release notes\n\n**Software Release date:** Oct. 12, 2022\n\n**Summary:** Vault Release 1.12.0 offers features and enhancements that improve the user experience while solving critical issues previously encountered by our customers. We are providing an overview of improvements in this set of release notes.\n\nWe encourage you to upgrade to the latest release of Vault to take advantage of the new benefits provided. With this latest release, we offer solutions to critical feature gaps that were identified previously. Please refer to the [Changelog](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/CHANGELOG.md) within the Vault release for further information on product improvements, including a comprehensive list of bug fixes.\n\nSome of these enhancements and changes in this release include the following:\n\n- Vault Enterprise now supports **PKCS#11** provider plugin (client library) functionality.\n- Vault Enterprise can manage keys for **Oracle TDE**. 
This requires the Advanced Data Protection license.\n- **PKI Key revocation** improvements are made to Vault\u2019s PKI engine, introducing a new OCSP responder and automatic CRL rebuilding (with an up-to-date Delta CRL), which together offer significant performance and data transfer improvements to revocation workflows.\n- **BYOK in Transform engines** now allows users to import their keys generated elsewhere.\n- **KMIP Server Profile** adds support for additional operations, allowing Vault to claim support for the baseline server profile.\n- **Transform secrets engine** supports time-based auto-key rotation for tokenization.\n- **Path and Role-based Quotas** extend the existing Vault Quota support by allowing quotas to be applied to API path suffixes and auth mount roles.\n- **Licensing** termination behavior has changed: non-evaluation licenses (production licenses) will no longer have a termination date.\n- **Redis Database Secrets Engine** is now available to manage static roles or generate dynamic credentials, as well as rotate the root credential, on a stand-alone Redis server.\n- **AWS ElastiCache Database Secrets Engine** is introduced to manage static credentials for AWS ElastiCache instances.\n\n~> **Vault Enterprise:** Use [Integrated Storage](\/vault\/docs\/configuration\/storage\/raft) or [Consul](\/vault\/docs\/configuration\/storage\/consul) as your Vault's storage backend. Vault Enterprise will no longer start up if configured to use a storage backend other than Integrated Storage or Consul. 
(See the [Upgrade Guide](\/vault\/docs\/upgrading\/upgrade-to-1.12.x).)\n\n## New features\n\nThis section describes the new features introduced in Vault 1.12.0.\n\n### Transform secrets engine enhancements\n\n-> **NOTE:** These features need the Vault Enterprise ADP License.\n\n#### Bring your own key (BYOK) for transform\n\nIn release 1.11, we introduced BYOK support to Vault, enabling customers to import existing keys into the Vault Transit Secrets Engine and supporting secure and flexible Vault deployments.\nWe are extending that support to the Vault Transform Secrets Engine in this release.\n\n#### MSSQL support\n\nAn MSSQL store is now available to be used as an external storage engine with the tokenization Transform Secrets Engine. Refer to the [Transform Secrets Engine (API)](\/vault\/api-docs\/secret\/transform), [Transform Secrets Engine](\/vault\/docs\/secrets\/transform), and [Tokenization Transform](\/vault\/docs\/secrets\/transform\/tokenization) documents for more information.\n\n#### Key auto rotation\n\nPeriodic rotation of encryption keys is a recommended key management practice for a good security posture. In Vault release 1.10, we added support for auto key rotation in the Transit Secrets Engine. 
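For reference, the Transit auto-rotation added in 1.10 is driven by a single key-configuration parameter; a minimal sketch, assuming a Transit mount at the default `transit\/` path and a hypothetical key named `my-key`:

```bash
# Rotate the key automatically once the period elapses (here, every 30 days).
vault write transit\/keys\/my-key\/config auto_rotate_period=720h
```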
In Vault 1.12, the Transform secrets engine is enhanced to let users set a time-interval rotation policy during key creation, causing Vault to rotate the Transform keys automatically when the interval elapses.\n\nRefer to the [Tokenization Transform](\/vault\/docs\/secrets\/transform\/tokenization) and [Transform Secrets Engine (API)](\/vault\/api-docs\/secret\/transform#rotate-tokenization-key) documentation for more information.\n\n### PKI secrets engine improvements\n\n#### PKI secrets engine revocation enhancements\n\nWe are improving Vault PKI Engine\u2019s revocation capabilities by adding support for the Online Certificate Status Protocol (OCSP) and a delta Certificate Revocation List (CRL) to track changes to the main CRL. These enhancements significantly streamline the customer experience with the PKI engine, making the certificate revocation semantics easier to understand and manage. Additionally, support for automatic CRL rotation and periodic tidy operations helps reduce operator burden, alleviate the demand on cluster resources during periods of high revocation, and ensure clients are always served valid CRLs. Finally, support for Bring-Your-Own-Cert (BYOC) allows revocation of `no_store=true` certificates, and support for Proof-of-Possession (PoP) allows end-users to safely revoke their own certificates (with the corresponding private key) without operator intervention.\n\n#### PKI and managed key support for RSA-PSS signatures\n\nSince its initial release, Vault's PKI secrets engine only supported RSA-PKCS#1v1.5 (Public Key Cryptographic Standards) signatures for issuers and leaves. To conform with NIST's guidance around key transport and for compatibility with newer HSM firmware, we have included support for RSA-PSS signatures (Probabilistic Signature Scheme). 
See the section on [PSS Support in the PKI documentation](\/vault\/docs\/secrets\/pki\/considerations#pss-support) for limitations of this feature.\n\n#### PKI telemetry improvements\n\nIn this release, we are adding additional telemetry to Vault\u2019s PKI secrets engine, enabling customers to gather better insights into certificate usage via the count of stored and revoked certificates. Additionally, the Vault `tidy` function is enhanced with additional metrics that reflect the remaining stored and revoked certificates.\n\n#### Auto-fetch CRL in the certificate auth method\n\nOperators will now be able to specify one or more CRL URLs that Vault will automatically fetch and keep up-to-date, rather than having to push the CRLs to the cert auth method. This should make certificate management easier for users who have large cert auth deployments.\n\n#### GCP Cloud key manager support\n\nManaged Keys let Vault secrets engines (currently PKI) use keys stored in Cloud KMS systems for cryptographic operations like certificate signing. Vault 1.12 adds support for GCP Cloud KMS to the Managed Key system, where previously AWS, Azure, and PKCS#11 Hardware Security Modules were supported.\n\n### KMIP server profile\n\nThe [Baseline Server Profile](https:\/\/docs.oasis-open.org\/kmip\/kmip-profiles\/v2.1\/os\/kmip-profiles-v2.1-os.html) specifies the basic functionality expected of a KMIP server. In Vault 1.12, we offer support for the operations and attributes in the Baseline server profile. With this release, Vault Enterprise now supports the Symmetric Key lifecycle server profile, Baseline server profile, and the Basic Cryptographic server profile (as of Release 1.11), enabling more effective support of KMIP integrations with various clients. 
This requires the Vault Enterprise ADP license.\n\n### SSH secrets engine support for generating keys\n\nPreviously, Vault's SSH Secrets Engine, when used as an SSH CA, required requesters to provide their own public key for signing. In Vault 1.12, Vault can now generate credential key pairs dynamically, returning them to the requester.\n\nThis was a community-contributed enhancement.\n\n### Path and Role-Based resource quotas\n\nIn this release, the existing resource quota functionality has been enhanced. In addition to applying the API rate limiting and lease quotas at the namespace or mount level, you can now apply quotas to the [API path suffixes and auth mount roles](\/vault\/docs\/enterprise\/lease-count-quotas). This enhancement provides users with finer-grained control over resource consumption.\n\n### Client count improvements\n\nThe billing period for the client count API can now be specified with the [current month](\/vault\/docs\/concepts\/client-count) for the end date parameter. When this is done, the \"new_clients\" field will contain an approximate value, computed via HyperLogLog, indicating the number of new clients that came in during the current month. Note that for previous months, the number will be an exact value.\n\n### Redis database secrets engine\n\nWith the support of the Redis database secrets engine, users can use Vault to manage static and dynamic credentials for Redis OSS. The engine works similarly to other database secrets engines. Refer to the [Redis](\/vault\/docs\/secrets\/databases\/redis) documentation for more information. Huge thanks to [Francis Hitchens](https:\/\/github.com\/fhitchen), who contributed their repository to HashiCorp.\n\n### AWS ElastiCache database secrets engine\n\nWith the support of the AWS ElastiCache database secrets engine, users may use Vault to manage static credentials for AWS ElastiCache instances. The engine will work similarly to other database secrets engines. 
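For both engines above, configuration follows the standard database secrets engine pattern; a hedged sketch using the Redis plugin, where the connection name and all connection parameters are hypothetical placeholders:

```bash
# Enable the database secrets engine and register a Redis connection.
vault secrets enable database
vault write database\/config\/my-redis \
    plugin_name=redis-database-plugin \
    host=127.0.0.1 port=6379 \
    username=default password=example-password \
    allowed_roles=\"my-*-role\"
```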
Refer to the [ElastiCache](\/vault\/docs\/secrets\/databases\/rediselasticache) documentation for more information.\n\n### LDAP secrets engine\n\nVault 1.12 introduces a new LDAP secrets engine that unifies the user experience between the Active Directory (AD) secrets engine and the OpenLDAP secrets engine. This new engine simplifies the user experience when Vault is used to manage credentials for directory services. It supports all implementations from both of the engines mentioned above (AD, LDAP, and RACF) and brings dynamic credential capabilities for users relying on Active Directory.\n\n~> **Note:** This engine does _not_ replace the current Active Directory secrets engine. We will continue to maintain the engine and provide bug fixes, but encourage all new users to use the unified LDAP engine. We will communicate the schedule to deprecate the Active Directory secrets engine well in advance, providing time for users to migrate over.\n\n### Terraform Vault provider: Vault version detection\n\nVault Terraform provider v3.9.0 can now query Vault to detect the server\u2019s version and then perform a semantic version comparison against a provided minimum threshold version to determine whether a selected feature is available for use. This allows the Vault provider to deterministically anticipate Vault\u2019s behavior.\n\n### Plugin versioning\n\nIn prior versions of Vault, plugins were not \u201cversion-aware,\u201d creating a suboptimal user experience during plugin installation and upgrades. 
In Vault 1.12, we are introducing the concept of versions to plugins, making plugins \u201cversion-aware,\u201d standardizing release processes, and offering a better user experience when installing and upgrading plugins.\n\n### PKCS#11 client support\n\nSoftware solutions often require cryptographic objects like keys and X.509 certificates, and operations such as certificate or key generation, hashing, encryption, decryption, and signing. Hardware Security Modules (HSMs) are traditionally used as a secure option, but are expensive and challenging to operationalize.\n\nVault Enterprise 1.12 is a PKCS#11 2.40 compliant provider, extended profile. PKCS#11 is the standard protocol supported for integrating with HSMs. Support for this protocol is the first step to enabling customers to consolidate HSMs, while offering the operational flexibility and advantages of software for key generation, encryption, and object storage operations. The PKCS#11 support in Vault 1.12 covers a subset of key generation, encryption, decryption, and key storage operations. This requires the Enterprise ADP-KM license.\n\n~> **Note:** With this feature, Vault does not become an HSM. HSMs are needed where customer use cases require FIPS 140-2 L2+ compliance support.\n\n### Oracle TDE\n\nWith Vault 1.12, Vault Enterprise (ADP-KM) can now act as an external key manager for Oracle instances when Transparent Data Encryption is enabled. TDE allows users to configure Vault to protect their Data Encryption Keys by wrapping them with a Key Encryption Key. Reading and writing of data are handled securely and transparently by Oracle database instances without user intervention. This requires the Enterprise ADP license.\n\n### UI support for Okta number challenge\n\nIn Vault 1.11, we added support for Okta\u2019s Number Challenge feature in the CLI and API. 
In Vault 1.12, we\u2019ve extended this support to the Vault UI, allowing users to complete the Okta Number Challenge from a web browser, the command line, or the HTTP API.\n\n### OIDC provider support in the UI\nVault can now act as an OIDC provider for applications that wish to delegate authentication to Vault and leverage its identity system. As an OIDC provider, Vault supports PKCE for the authorization code flow, preventing attacks such as SSRF. After OIDC provider functionality went GA, our design and user research team gathered feedback from community members, and we simplified the setup experience. With a few CLI commands or UI clicks, users can now have a default OIDC provider configured and ready for applications to utilize.\n\n\n## Other features and enhancements\n\n### License termination behavior\n\nLicense termination behavior has changed: non-evaluation licenses (production licenses) no longer have a termination date, making Vault more robust for Vault Enterprise customers. Refer to the updated [licensing FAQ](\/vault\/docs\/enterprise\/license\/faq) for more information.\n\n### Namespace custom metadata\nCustomers can now specify [custom metadata](\/vault\/api-docs\/system\/namespaces) on namespaces. The new `vault namespace patch` [command](\/vault\/docs\/commands\/namespace) can be used to update existing namespaces with custom metadata as well. This makes it possible to tag namespaces with additional fields describing them (for example: owner, region, department).\n\n### Vault agent improvements\n\nVault Agent introduced new configuration parameters that significantly improve the use of Vault Agent. 
These include:\n\n- Added `disable_idle_connections` configuration to disable leaving idle connections open in auto-auth, caching, and templating.\n- Added `disable_keep_alives` configuration to disable keep-alives in auto-auth, caching, and templating.\n- JWT auto-auth now supports a `remove_jwt_after_reading` configuration option, which defaults to true.\n\n## Known issues\n\nThere are no known issues documented for this release.\n\n## Feature deprecations and EOL\n\nPlease refer to the [Deprecation Plans and Notice](\/vault\/docs\/deprecation) page for up-to-date information on feature deprecations and plans. A [Feature Deprecation FAQ](\/vault\/docs\/deprecation\/faq) page addresses questions about decisions made about Vault feature deprecations.","site":"vault"}
{"questions":"vault Key updates for Vault 1 16 1 Vault 1 16 1 release notes GA date 2024 04 04 layout docs page title 1 16 1 release notes","answers":"---\nlayout: docs\npage_title: \"1.16.1 release notes\"\ndescription: |-\n  Key updates for  Vault 1.16.1\n---\n\n# Vault 1.16.1 release notes\n\n**GA date:** 2024-04-04\n\n@include 'release-notes\/intro.mdx'\n\n## Important changes\n\n| Version                     | Change                                                                                                                                                                                       |\n|-----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| 1.16.0+                     | [Existing clusters do not show the current Vault version in UI by default](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#default-policy-changes)                                                   |\n| 1.16.0+                     | [Default LCQ enabled when upgrading pre-1.9](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#default-lcq-pre-1.9-upgrade)                                                                            |\n| 1.16.0+                     | [External plugin environment variables take precedence over server variables](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#external-plugin-variables)                                             |\n| 1.16.0+                     | [LDAP auth entity alias names no longer include upndomain](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#ldap-auth-entity-alias-names-no-longer-include-upndomain)                                 |\n| 1.16.0+                     | [Secrets Sync now requires a one-time flag to operate](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#secrets-sync-now-requires-setting-a-one-time-flag-before-use)                                 |\n| 1.16.0+                
| [Azure secrets engine role creation failing](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#azure-secrets-engine-role-creation-failing) |\n| 1.16.1 - 1.16.3 | [New nodes added by autopilot upgrades provisioned with the wrong version](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#new-nodes-added-by-autopilot-upgrades-provisioned-with-the-wrong-version) |\n| 1.15.8+ | [Autopilot upgrade for Vault Enterprise fails](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#autopilot) |\n| 1.16.5 | [Listener stops listening on untrusted upstream connection with particular config settings](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#listener-proxy-protocol-config) |\n| 1.16.3 - 1.16.6 | [Vault standby nodes not deleting removed entity-aliases from in-memory database](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#dangling-entity-alias-in-memory) |\n| 0.7.0+ | [Duplicate identity groups created](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#duplicate-identity-groups-created-when-concurrent-requests-sent-to-the-primary-and-pr-secondary-cluster) |\n| Known Issue (0.7.0+) | [Manual entity merges fail](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#manual-entity-merges-sent-to-a-pr-secondary-cluster-are-not-persisted-to-storage) |\n| Known Issue (1.16.7-1.16.8) | [Some values in the audit logs not hmac'd properly](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#client-tokens-and-token-accessors-audited-in-plaintext) |\n| New default (1.16.13) | [Vault product usage metrics 
reporting](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#product-usage-reporting) |\n| Deprecation (1.16.13) | [`default_report_months` is deprecated for the `sys\/internal\/counters` API](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#activity-log-changes) |\n\n## Vault companion updates\n\nCompanion updates are Vault updates that live outside the main Vault binary.\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td style=>\n      Vault Secrets Operator (v0.5)\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Use templating to format, transform, and decode secrets before syncing to\n      Kubernetes secrets.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/platform\/k8s\/vso\/secret-transformation\">Secret data transformation<\/a>\n    <\/td>\n  <\/tr>\n  <\/tbody>\n<\/table>\n\n## Core updates\n\nFollow the learn more links for more information, or browse the list of\n[Vault tutorials updated to highlight changes for the most recent GA release](\/vault\/tutorials\/new-release).\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td style=>\n      Endpoint hardening\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Minimize network exposure by selectively redacting fields such as IP\n      addresses, cluster names, and Vault version from the HTTP responses of\n      your Vault server.\n      <br \/><br \/>\n      Learn more:&nbsp;\n      <a href=\"\/vault\/docs\/configuration\/listener\/tcp#redact_addresses\"><tt>redact_addresses<\/tt> parameter<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      External 
plugins\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Run external plugins in their own container with native container platform\n      controls.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/plugins\/containerized-plugins\">Containerize Vault plugins<\/a>\n    <\/td>\n  <\/tr>\n\n  <\/tbody>\n<\/table>\n\n## Enterprise updates\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td style=>\n      Long-term support\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Reduce risk and operational overhead with Vault Enterprise Long-Term\n      Support (LTS) releases.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/enterprise\/lts\">LTS overview<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Vault GUI\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Configure custom messages and display those messages to targeted users in\n      the Vault GUI.\n        <br \/><br \/>\n        Learn more: <a href=\"\/vault\/docs\/ui\/custom-messages\">Custom UI messages<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Audit logging\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Filter audit logs to write data to different destinations based on the content.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/enterprise\/audit\/filtering\">Filter syntax for audit results<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Static secret caching\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Use Vault Proxy to cache static secrets for a set period of time and receive\n      event notifications when secrets change.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/agent-and-proxy\/proxy\/caching\/static-secret-caching\">Vault Proxy static secret caching<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td 
style=>\n      Event notifications\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Subscribe to notifications for various events in Vault. Includes support\n      for filtering, permissions, and cluster configurations with K-V secrets.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/concepts\/events\">Events<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Public Key Infrastructure (PKI)\n    <\/td>\n    <td style=>BETA<\/td>\n    <td style=>\n      Automate certificate lifecycle management for IoT\/EST enabled devices with\n      native EST protocol support\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/pki\/est\">Enrollment over Secure Transport (EST)<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Default lease count quotas\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      New server deployments automatically create a lease count quota in the\n      root namespace with a 300K limit.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/enterprise\/lease-count-quotas\">Lease count quotas<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      License utilization reporting\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Use the Vault CLI to bundle and report usage data to HashiCorp for\n      clusters that do not report license utilization data automatically.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/enterprise\/license\/manual-reporting\">Manual license utilization reporting<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Secrets sync\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Sync Key Value (KV) v2 data between Vault and secrets managers from AWS,\n      Azure, Google Cloud Platform (GCP), GitHub, and Vercel.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/sync\">Secrets Sync<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      AWS plugin\n    <\/td>\n   
 <td style=>GA<\/td>\n    <td style=>\n      Use automatic identity tokes for workload identity federation\n      authentication flows with the AWS secret engine without explicitly\n      configuring sensitive security credentials.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/aws\">AWS secrets engine<\/a>\n    <\/td>\n  <\/tr>\n\n  <\/tbody>\n<\/table>\n\n## Feature deprecations and EOL\n\nDeprecated in 1.16 | Retired in 1.16\n------------------ | ---------------\nNone | None\n\n@include 'release-notes\/deprecation-note.mdx'","site":"vault","answers_cleaned":"    layout  docs page title   1 16 1 release notes  description       Key updates for  Vault 1 16 1        Vault 1 16 1 release notes    GA date    2024 04 04   include  release notes intro mdx      Important changes    Version                       Change                                                                                                                                                                                                                                                                                                                                                                                                                          1 16 0                         Existing clusters do not show the current Vault version in UI by default   vault docs upgrading upgrade to 1 16 x default policy changes                                                        1 16 0                         Default LCQ enabled when upgrading pre 1 9   vault docs upgrading upgrade to 1 16 x default lcq pre 1 9 upgrade                                                                                 1 16 0                         External plugin environment variables take precedence over server variables   vault docs upgrading upgrade to 1 16 x external plugin variables                                                  1 16 0                         LDAP auth entity alias names no 
longer include upndomain   vault docs upgrading upgrade to 1 16 x ldap auth entity alias names no longer include upndomain                                      1 16 0                         Secrets Sync now requires a one time flag to operate   vault docs upgrading upgrade to 1 16 x secrets sync now requires setting a one time flag before use                                      1 16 0                         Azure secrets engine role creation failing   vault docs upgrading upgrade to 1 16 x azure secrets engine role creation failing                                                                  1 16 1   1 16 3                New nodes added by autopilot upgrades provisioned with the wrong version   vault docs upgrading upgrade to 1 15 x new nodes added by autopilot upgrades provisioned with the wrong version      1 15 8                         Autopilot upgrade for Vault Enterprise fails   vault docs upgrading upgrade to 1 15 x autopilot                                                                                                 1 16 5                         Listener stops listening on untrusted upstream connection with particular config settings   vault docs upgrading upgrade to 1 16 x listener proxy protocol config                               1 16 3   1 16 6                Vault standby nodes not deleting removed entity aliases from in memory database   vault docs upgrading upgrade to 1 16 x dangling entity alias in memory                                        0 7 0                          Duplicate identity groups created   vault docs upgrading upgrade to 1 16 x duplicate identity groups created when concurrent requests sent to the primary and pr secondary cluster                                                                                                      Known Issue  0 7 0             Manual entity merges fail   vault docs upgrading upgrade to 1 16 x manual entity merges sent to a pr secondary cluster are not persisted to storage             
                                Known Issue  1 16 7 1 16 8     Some values in the audit logs not hmac d properly   vault docs upgrading upgrade to 1 16 x client tokens and token accessors audited in plaintext                                               New default  1 16 13           Vault product usage metrics reporting   vault docs upgrading upgrade to 1 6 x product usage reporting                                                                                           Deprecation  1 16 13            default report months  is deprecated for the  sys internal counters  API   vault docs upgrading upgrade to 1 16 x activity log changes                                                            Vault companion updates  Companion updates are Vault updates that live outside the main Vault binary    table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td style         Vault Secrets Operator  v0 5        td       td style  ENHANCED  td       td style         Use templating to format  transform  and decode secrets before syncing to       Kubernetes secret         br    br          Learn more   a href   vault docs platform k8s vso secret transformation  Secret data transformation  a        td      tr      tbody    table      Core updates  Follow the learn more links for more information  or browse the list of  Vault tutorials updated to highlight changes for the most recent GA release   vault tutorials new release     table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td style         Endpoint hardening       td       td style  ENHANCED  td       td style         Minimize network exposure by selectively redacting select fields like IP       addresses  cluster names  and Vault version from the HTTP responses of       your Vault server         br    br    
      Learn more  nbsp         a href   vault docs configuration listener tcp redact addresses   tt redact addresses  tt  parameter  a        td      tr      tr       td style         External plugins       td       td style  GA  td       td style         Run external plugins in their own container with native container platform       controls         br    br          Learn more   a href   vault docs plugins containerized plugins  Containerize Vault plugins  a        td      tr       tbody    table      Enterprise updates   table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td style         Long term support       td       td style  GA  td       td style         Reduce risk and operational overhead with Vault Enterprise Long Term       Support  LTS  releases         br    br          Learn more   a href   vault docs enterprise lts  LTS overview  a        td      tr      tr       td style         Vault GUI       td       td style  GA  td       td style         Configure custom messages and display those messages to targeted users in       the Vault GUI           br    br            Learn more   a href   vault docs ui custom messages  Custom UI messages  a        td      tr      tr       td style         Audit logging       td       td style  GA  td       td style         Filter audit logs to write data to different destinations based on the content         br    br          Learn more   a href   vault docs enterprise audit filtering  Filter syntax for audit results  a        td      tr      tr       td style         Static secret caching       td       td style  GA  td       td style         Use Vault Proxy to cache static secrets for a set period of time and receive       event notifications when secrets change         br    br          Learn more   a href   vault docs agent and proxy proxy caching static secret caching  Vault Proxy static secret caching 
 a        td      tr      tr       td style         Event notifications       td       td style  GA  td       td style         Subscribe to notifications for various events in Vault  Includes support       for filtering  permissions  and cluster configurations with K V secrets         br    br          Learn more   a href   vault docs concepts events  Events  a        td      tr      tr       td style         Public Key Infrastructure  PKI        td       td style  BETA  td       td style         Automate certificate lifecycle management for IoT EST enabled devices with       native EST protocol support        br    br          Learn more   a href   vault docs secrets pki est  Enrollment over Secure Transport  EST   a        td      tr      tr       td style         Default lease count quotas       td       td style  GA  td       td style         New server deployments automatically create a lease count quota in the       root namespace with a 300K limit         br    br          Learn more   a href   vault docs enterprise lease count quotas  Lease count quotas  a        td      tr      tr       td style         License utilization reporting       td       td style  ENHANCED  td       td style         Use the Vault CLI to bundle and report usage data to HashiCorp for       clusters that do not report license utilization data automatically         br    br          Learn more   a href   vault docs enterprise license manual reporting  Manual license utilization reporting  a        td      tr      tr       td style         Secrets sync       td       td style  GA  td       td style         Sync Key Value  KV  v2 data between Vault and secrets managers from AWS        Azure  Google Cloud Platform  GCP   GitHub  and Vercel         br    br          Learn more   a href   vault docs sync  Secrets Sync  a        td      tr      tr       td style         AWS plugin       td       td style  GA  td       td style         Use automatic identity tokes for workload identity 
federation       authentication flows with the AWS secret engine without explicitly       configuring sensitive security credentials         br    br          Learn more   a href   vault docs secrets aws  AWS secrets engine  a        td      tr       tbody    table      Feature deprecations and EOL  Deprecated in 1 16   Retired in 1 16                                      None   None   include  release notes deprecation note mdx "}
{"questions":"vault GA date June 21 2023 Vault 1 14 0 release notes layout docs page title 1 14 0 release notes Key updates for Vault 1 14 0","answers":"---\nlayout: docs\npage_title: \"1.14.0 release notes\"\ndescription: |-\n  Key updates for  Vault 1.14.0\n---\n\n# Vault 1.14.0 release notes\n\n**GA date:** June 21, 2023\n\n@include 'release-notes\/intro.mdx'\n\n## Known issues and breaking changes\n\nVersion | Issue\n------- | ------------------------------------------------------------\n1.14.0+ | [Users limited by control groups can only access issuer detail from PKI overview page](\/vault\/docs\/upgrading\/upgrade-to-1.14.x#ui-pki-control-groups)\nAll     | [API calls to update-primary may lead to data loss](\/vault\/docs\/upgrading\/upgrade-to-1.14.x#update-primary-data-loss)\n1.14.0+ | [AWS static roles ignore changes to rotation period](\/vault\/docs\/upgrading\/upgrade-to-1.14.x#aws-static-role-rotation)\n1.14.0+ | [UI Collapsed navbar does not allow certain click events](\/vault\/docs\/upgrading\/upgrade-to-1.14.x#ui-collapsed-navbar)\n1.14.3 - 1.14.5 | [Vault storing references to ephemeral sub-loggers leading to unbounded memory consumption](\/vault\/docs\/upgrading\/upgrade-to-1.14.x#vault-is-storing-references-to-ephemeral-sub-loggers-leading-to-unbounded-memory-consumption)\n1.14.4 - 1.14.5 | [Internal error when vault policy in namespace does not exist](\/vault\/docs\/upgrading\/upgrade-to-1.14.x#internal-error-when-vault-policy-in-namespace-does-not-exist)\n1.14.0+ | [Sublogger levels not adjusted on reload](\/vault\/docs\/upgrading\/upgrade-to-1.14.x#sublogger-levels-unchanged-on-reload)\n1.14.5  | [Fatal error during expiration metrics gathering causing Vault crash](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#fatal-error-during-expiration-metrics-gathering-causing-vault-crash)\n1.14.5 | [User lockout potential double logging](\/vault\/docs\/upgrading\/upgrade-to-1.14.x#user-lockout-logging)\n1.14.5 - 1.14.9 | [Deadlock can occur on performance 
secondary clusters with many mounts](\/vault\/docs\/upgrading\/upgrade-to-1.14.x#deadlock-can-occur-on-performance-secondary-clusters-with-many-mounts)\n\n## Vault companion updates\n\nCompanion updates are Vault updates that live outside the main Vault binary.\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td style=>\n      Vault Secrets Operator for Kubernetes\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Directly connect Vault secrets into Pods as native Kubernetes Secrets\n      without modifying your application code.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/platform\/k8s\/vso\">Vault Secrets Operator<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td rowspan={2} style=>\n      Terraform\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Use LDAP authentication from the unified LDAP engine to Terraform Vault\n      Provider.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/ldap\">LDAP Secrets Engine<\/a>\n    <\/td>\n  <\/tr>\n  <tr>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Support for additional PKI issuers and keys endpoints.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/pki\">PKI Secrets Engine<\/a>\n    <\/td>\n  <\/tr>\n  <\/tbody>\n<\/table>\n\n## Core updates\n\nFollow the learn more links for more information, or browse the list of\n[Vault tutorials updated to highlight changes for the most recent GA release](\/vault\/tutorials\/new-release).\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td rowspan={2} style=>\n      Public Key Infrastructure (PKI)\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Use ACME to automate certificate lifecycle management for private 
PKI\n      needs with standard ACME clients like Certbot and k8s cert-manager.\n      Request certificates from a Vault server without needing to know Vault\n      APIs or authentication mechanisms.\n      <br \/><br \/>\n      Learn more:&nbsp;\n      <a href=\"\/vault\/api-docs\/secret\/pki#acme-certificate-issuance\">PKI Secrets Engine API: ACME<\/a>\n    <\/td>\n  <\/tr>\n  <tr>\n    <td style=>GA<\/td>\n    <td style=>\n      Use the improved PKI web UI to manage your PKI instance with intuitive\n      configuration and reasonable defaults for workflows, metadata, issuer\n      info, mount and tidy configuration, cross signing, multi-issuers, etc.\n      <br \/><br \/>\n      Learn more:&nbsp;\n      <a href=\"\/vault\/api-docs\/secret\/pki#acme-certificate-issuance\">PKI Secrets Engine<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Security patches\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Various security improvements to remediate low severity and informational\n      findings from a 3rd party security audit.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/internals\/security\">Vault security model<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td rowspan={2} style=>\n      Vault Agent\n    <\/td>\n    <td style=>BETA<\/td>\n    <td style=>\n      Fetch secrets directly into your application as environment variables.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/agent-and-proxy\/agent\/process-supervisor\">Process Supervisor Mode<\/a>\n    <\/td>\n  <\/tr>\n  <tr>\n    <td style=>GA<\/td>\n    <td style=>\n      Use a new subcommand and daemon, Vault Proxy, to access the proxy\n      functionality of Vault Agent. 
Vault Proxy will handle Vault Agent proxy\n      functionality going forward to simplify use case decisions for users.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/agent-and-proxy\/proxy\">Vault Proxy<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td rowspan={3} style=>\n      Plugin support\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Capture plugin metadata in the Vault audit log.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/audit\/syslog\">Syslog audit device<\/a>\n    <\/td>\n  <\/tr>\n  <tr>\n    <td style=>GA<\/td>\n    <td style=>\n      Use X509 Authentication and Terraform Vault Provider in the MongoDB Atlas\n      Database Secrets Engine.\n      <br \/><br \/>\n      Learn more:&nbsp;\n      <a href=\"\/vault\/docs\/secrets\/databases\/mongodbatlas\">MongoDB Atlas Database Secrets Engine<\/a>\n    <\/td>\n  <\/tr>\n  <tr>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Dependency updates and more robust multiplexing for secrets and\n      authentication plugins.\n      <br \/><br \/>\n      Learn more:&nbsp;\n      <a href=\"\/vault\/docs\/plugins\/plugin-development#serving-a-plugin-with-multiplexing\">\n        Serving a plugin with multiplexing (Plugin Development)\n      <\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td rowspan={2} style=>\n      AWS support\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Monitoring and performance enhancements for the Vault Lambda extension.\n      <br \/><br \/>\n      Learn more:&nbsp;\n      <a href=\"\/vault\/docs\/platform\/aws\/lambda-extension\">Vault Lambda Extension guide<\/a>\n    <\/td>\n  <\/tr>\n  <tr>\n    <td style=>GA<\/td>\n    <td style=>\n      Use static roles for IAM users in the AWS Secrets Engine.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/aws\">AWS Secrets Engine<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Vault GUI\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td 
style=>\n      Streamlined and aligned navigation with HCP Vault UI.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/configuration\/ui\">Vault UI<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Transit\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      <b>Contributed by the Vault community<\/b>. Support for public-key only Transit\n      keys and BYOK-secured export of key material.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/api-docs\/secret\/transit\">Transit Secrets Engine<\/a>\n    <\/td>\n  <\/tr>\n\n  <\/tbody>\n<\/table>\n\n## Enterprise updates\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td style=>\n      Vault replication\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Stability improvements based on customer feedback for Vault 1.13. See the\n      <a href=\"https:\/\/raw.githubusercontent.com\/hashicorp\/vault\/main\/CHANGELOG.md\">\n        Vault changelog\n      <\/a>\n      for a full list of bug fixes.\n      <br \/><br \/>\n      Learn more:&nbsp;\n      <a href=\"\/vault\/docs\/internals\/replication\">Replication overview<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      License utilization reporting\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Enables automatic license utilization reporting for you and HashiCorp to\n      ensure transparent, accurate billing.\n      <br \/><br \/>\n      Learn more:&nbsp;\n      <a href=\"\/vault\/docs\/enterprise\/license\/utilization-reporting\">Automated License utilization reporting<\/a>\n    <\/td>\n  <\/tr>\n\n  <\/tbody>\n<\/table>\n\n## Feature deprecations and EOL\n\nDeprecated in 1.14 | Retired in 1.14\n------------------ | ---------------\nVault Agent API proxy support | [Duplicative Docker Images](https:\/\/hub.docker.com\/_\/vault)\n\n@include 
'release-notes\/deprecation-note.mdx'","site":"vault","answers_cleaned":"    layout  docs page title   1 14 0 release notes  description       Key updates for  Vault 1 14 0        Vault 1 14 0 release notes    GA date    June 21  2023   include  release notes intro mdx      Known issues and breaking changes  Version   Issue                                                                        1 14 0     Users limited by control groups can only access issuer detail from PKI overview page   vault docs upgrading upgrade to 1 14 x ui pki control groups  All        API calls to update primary may lead to data loss   vault docs upgrading upgrade to 1 14 x update primary data loss  1 14 0     AWS static roles ignore changes to rotation period   vault docs upgrading upgrade to 1 14 x aws static role rotation  1 14 0     UI Collapsed navbar does not allow certain click events   vault docs upgrading upgrade to 1 14 x ui collapsed navbar  1 14 3   1 14 5    Vault storing references to ephemeral sub loggers leading to unbounded memory consumption   vault docs upgrading upgrade to 1 14 x vault is storing references to ephemeral sub loggers leading to unbounded memory consumption  1 14 4   1 14 5    Internal error when vault policy in namespace does not exist   vault docs upgrading upgrade to 1 14 x internal error when vault policy in namespace does not exist  1 14 0     Sublogger levels not adjusted on reload   vault docs upgrading upgrade to 1 14 x sublogger levels unchanged on reload  1 14 5     Fatal error during expiration metrics gathering causing Vault crash   vault docs upgrading upgrade to 1 15 x fatal error during expiration metrics gathering causing vault crash  1 14 5    User lockout potential double logging   vault docs upgrading upgrade to 1 14 x user lockout logging  1 14 5   1 14 9    Deadlock can occur on performance secondary clusters with many mounts   vault docs upgrading upgrade to 1 14 x deadlock can occur on performance secondary clusters with many mounts 
     Vault companion updates  Companion updates are Vault updates that live outside the main Vault binary    table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td style         Vault Secrets Operator for Kubernetes       td       td style  GA  td       td style         Directly connect Vault secrets into Pods as native Kubernetes Secrets       without modifying your application code         br    br          Learn more   a href   vault docs platform k8s vso  Vault Secrets Operator  a        td      tr      tr       td rowspan  2  style         Terraform       td       td style  GA  td       td style         Use LDAP authentication from the unified LDAP engine to Terraform Vault       Provider         br    br          Learn more   a href   vault docs secrets ldap  LDAP Secrets Engine  a        td      tr     tr       td style  ENHANCED  td       td style         Support for additional PKI issuers and keys endpoints         br    br          Learn more   a href   vault docs secrets pki  PKI Secrets Engine  a        td      tr      tbody    table      Core updates  Follow the learn more links for more information  or browse the list of  Vault tutorials updated to highlight changes for the most recent GA release   vault tutorials new release     table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td rowspan  2  style         Public Key Infrastructure  PKI        td       td style  GA  td       td style         Use ACME to automate certificate lifecycle management for private PKI       needs with standard ACME clients like Certbot and k8s cert manager        Request certificates from a Vault server without needing to know Vault       APIs or authentication mechanisms         br    br          Learn more  nbsp         a href   vault api docs secret 
pki acme certificate issuance  PKI Secrets Engine API  ACME  a        td      tr     tr       td style  GA  td       td style         Use the improved PKI web UI to manage your PKI instance with intuitive       configuration and reasonable defaults for workflows  metadata  issuer       info  mount and tidy configuration  cross signing  multi issuers etc and       includes         br    br          Learn more  nbsp         a href   vault api docs secret pki acme certificate issuance  PKI Secrets Engine  a        td      tr      tr       td style         Security patches       td       td style  ENHANCED  td       td style         Various security improvements to remediate low severity and informational       findings from a 3rd party security audit         br    br          Learn more   a href   vault docs internals security  Vault security model  a        td      tr      tr       td rowspan  2  style         Vault Agent       td       td style  BETA  td       td style         Fetch secrets directly into your application as environment variables         br    br          Learn more   a href   vault docs agent and proxy agent process supervisor  Process Supervisor Mode  a        td      tr     tr       td style  GA  td       td style         Use a new subcommand and daemon  Vault Proxy  to access the proxy       functionality of Vault Agent  Vault Proxy will handle Vault Agent proxy       functionality going forward to simplify use case decisions for users         br    br          Learn more   a href   vault docs agent and proxy proxy  Vault Proxy  a        td      tr      tr       td rowspan  3  style         Plugin support       td       td style  GA  td       td style         Capture plugin metadata in the Vault audit log         br    br          Learn more   a href   vault docs audit syslog  Syslog audit device  a        td      tr     tr       td style  GA  td       td style         Use X509 Authentication and Terraform Vault Provider in the MongoDB Atlas      
 Database Secrets Engine         br    br          Learn more  nbsp         a href   vault docs secrets databases mongodbatlas  MongoDB Atlas Database Secrets Engine  a        td      tr     tr       td style  ENHANCED  td       td style         Dependency updates and more robust multiplexing for secrets and       authentication plugins         br    br          Learn more  nbsp         a href   vault docs plugins plugin development serving a plugin with multiplexing           Serving a plugin with multiplexing  Plugin Development          a        td      tr      tr       td rowspan  2  style         AWS support       td       td style  ENHANCED  td       td style         Monitoring and performance enhancements for the Vault Lambda extension         br    br          Learn more  nbsp         a href   vault docs platform aws lambda extension  Vault Lambda Extension guide  a        td      tr     tr       td style  GA  td       td style         Use static roles for IAM users in the AWS Secrets Engine         br    br          Learn more   a href   vault docs secrets aws  AWS Secrets Engine  a        td      tr      tr       td style         Vault GUI       td       td style  ENHANCED  td       td style         Streamlined and aligned navigation with HCP Vault UI         br    br          Learn more   a href   vault docs configuration ui  Vault UI  a        td      tr      tr       td style         Transit       td       td style  ENHANCED  td       td style          b Contributed by the Vault community  b   Support for public key only Transit       keys and BYOK secured export of key material         br    br          Learn more   a href   vault api docs secret transit  Transit Secrets Engine  a        td      tr       tbody    table      Enterprise updates   table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td style         Vault replication       td     
  td style  ENHANCED  td       td style         Stability improvements based on customer feedback for Vault 1 13  See the        a href  https   raw githubusercontent com hashicorp vault main CHANGELOG md           Vault changelog         a        for a full list of bug fixes         br    br          Learn more  nbsp         a href   vault docs internals replication  Replication overview  a        td      tr      tr       td style         License utilization reporting       td       td style  GA  td       td style         Enables automatic license utilization reporting for you and HashiCorp to       ensure transparent  accurate billing         br    br          Learn more  nbsp         a href   vault docs enterprise license utilization reporting  Automated License utilization reporting  a        td      tr       tbody    table      Feature deprecations and EOL  Deprecated in 1 14   Retired in 1 14                                      Vault Agent API proxy support    Duplicative Docker Images  https   hub docker com   vault    include  release notes deprecation note mdx "}
{"questions":"vault Software Release date March 1 2023 Vault 1 13 0 release notes page title 1 13 0 layout docs This page contains release notes for Vault 1 13 0","answers":"---\nlayout: docs\npage_title: 1.13.0\ndescription: |-\n  This page contains release notes for Vault 1.13.0\n---\n\n# Vault 1.13.0 release notes\n\n**Software Release date:** March 1, 2023\n\n**Summary:** Vault Release 1.13.0 offers features and enhancements that improve\nthe user experience while solving critical  issues previously encountered by our\ncustomers. We are providing an overview  of improvements in this set of  release\nnotes.\n\nWe encourage you to [upgrade](\/vault\/docs\/upgrading) to the latest release of\nVault to take advantage of the new benefits provided. With this latest release,\nwe offer solutions to critical feature gaps that were identified previously.\nPlease refer to the\n[Changelog](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/CHANGELOG.md#1130-rc1)\nwithin the Vault release for further information on product improvements,\nincluding a comprehensive list of bug fixes.\n\nSome of these enhancements and changes in this release include the following:\n\n- **PKI improvements:**\n   - **Cross Cluster PKI Certificate Revocation:** Introducing a new unified\n     OCSP responder and CRL builder that enables a certificate revocations and\n     CRL view across clusters for a given PKI mount.\n   - **PKI UI Beta:** New UI introducing cross-signing flow, overview page,\n     roles and keys view.\n   - **Health Checks:** Provide a health overview of PKI mounts for proactive\n     actions and troubleshooting.\n   - **Command Line:** Simplified CLI to discover, rotate issuers and related\n     commands for PKI mounts\n\n- **Azure Auth Improvements:**\n   - **Rotate-root support:** Add the ability to rotate the root account's\n     client secret defined in the auth method's configuration via the new\n     `rotate-root` endpoint.\n   - **Managed Identities authentication:** 
The auth method now allows any Azure\n     resource that supports managed identities to authenticate with Vault.\n   - **VMSS Flex authentication:** Add support for Virtual Machine Scale Set\n     (VMSS) Flex authentication.\n\n- **GCP Secrets Impersonated Account Support:** Add support for GCP service\n  account impersonation, allowing callers to generate a GCP access token without\n  requiring Vault to store or retrieve a GCP service account key for each role.\n- **Managed Keys in Transit Engine:** Support for offloading Transit Key\n  operations to HSMs\/external KMS.\n- **KMIP Secret Engine Enhancements:** Implemented Asymmetric Key Lifecycle\n  Server and Advanced Cryptographic Server profiles. Added support for RSA keys\n  and operations such as:  MAC, MAC Verify, Sign, Sign Verify, RNG Seed and RNG\n  Retrieve.\n- **Vault as a SSM:** Support is planned for an upcoming Vault PKCS#11 Provider\n  version to include mechanisms for encryption, decryption, signing and\n  signature verification for AES and RSA keys.\n- **Replication (enterprise):** We fixed a bug that could cause a cluster to\n  wind up in a permanent merkle-diff\/merkle-sync loop and never enter\n  stream-wals, particularly in cases of high write loads on the primary cluster.\n- **Share Secrets in Independent Namespaces (enterprise):** You can now add\n  users from namespaces outside a namespace hierarchy to a group in a given\n  namespace hierarchy. For Vault Agent, you can now grant it access to secrets\n  outside the namespace where it authenticated, and reduce the number of Agents\n  you need to run.\n- **User Lockout:** Vault now supports configuration to lock out users when they\n  have consecutive failed login attempts. This feature is **enabled by default**\n  in 1.13 for the userpass, ldap, and approle auth methods.\n- **Event System (Alpha):** Vault has a new experimental event system. 
Events\n  are currently only generated on writes to the KV secrets engine, but external\n  plugins can also be updated to start generating events.\n- **Kubernetes authentication plugin bug fix:** Ensures a consistent TLS\n  configuration for all k8s API requests. This fixes a bug where it was possible\n  for the http.Client's Transport to be missing the necessary root CAs to ensure\n  that all TLS connections between the auth engine and the Kubernetes API were\n  validated against the configured set of CA certificates.\n- **Kubernetes Secrets Engine on Vault UI:** Introducing Kubernetes secret\n  engine support on the UI\n- **Client Count UI improvements:** Combining current month and previous history\n  into one dashboard\n- **OCSP Support in the TLS Certificate Auth Method:** The auth method now can\n  check for revoked certificates using the OCSP protocol.\n- **UI Wizard removal:** The UI Wizard has been removed from the UI since the\n  information was occasionally out-of-date and did not align with the latest\n  changes. A new and enhanced UI experience is planned in a future release.\n\n- **Vault Agent improvements:**\n   - Auto-auth introduced `token_file` method which reads an existing token from\n     a file. The token file method is designed for development and testing. It\n     is not suitable for production deployment.\n   - Listeners for the Vault Agent can define a role set to `metrics_only` so\n     that a service can be configured to listen on a particular port to collect\n     metrics.\n   - Vault Agent can read configurations from multiple files.\n   - Users can specify the log file path using the `-log-file` command flag or\n     `VAULT_LOG_FILE` environment variable. 
This is particularly useful when\n     Vault Agent is running as a Windows service.\n\n- **OpenAPI-based Go & .NET Client Libraries (Public Beta):** Use the new Go &\n  .NET client libraries to interact with the Vault API from your applications.\n   - [OpenAPI-based Go client library](https:\/\/github.com\/hashicorp\/vault-client-go\/)\n   - [OpenAPI-based .NET client library](https:\/\/github.com\/hashicorp\/vault-client-dotnet\/)\n\n## Known issues\n\nWhen Vault is configured without a TLS certificate on the TCP listener, the Vault UI may throw an error that blocks you from performing operational tasks.\n\nThe error message: `Q.randomUUID is not a function`\n\n<Note>\n\nRefer to this [Knowledge Base article](https:\/\/support.hashicorp.com\/hc\/en-us\/articles\/14512496697875) for more details and a workaround.\n\n<\/Note>\n\nThe fix for this UI issue is coming in the Vault 1.13.1 release.\n\n@include 'perf-standby-token-create-forwarding-failure.mdx'\n\n@include 'known-issues\/update-primary-data-loss.mdx'\n\n@include 'known-issues\/internal-error-namespace-missing-policy.mdx'\n\n@include 'known-issues\/ephemeral-loggers-memory-leak.mdx'\n\n@include 'known-issues\/sublogger-levels-unchanged-on-reload.mdx'\n\n@include 'known-issues\/expiration-metrics-fatal-error.mdx'\n\n@include 'known-issues\/perf-secondary-many-mounts-deadlock.mdx'\n\n@include 'known-issues\/1_13-reload-census-panic-standby.mdx'\n\n\n## Feature deprecations and EOL\n\nPlease refer to the [Deprecation Plans and Notice](\/vault\/docs\/deprecation) page\nfor up-to-date information on feature deprecations and plans. 
A [Feature\nDeprecation FAQ](\/vault\/docs\/deprecation\/faq) page addresses questions about\ndecisions made about Vault feature deprecations.","site":"vault","answers_cleaned":"    layout  docs page title  1 13 0 description       This page contains release notes for Vault 1 13 0        Vault 1 13 0 release notes    Software Release date    March 1  2023    Summary    Vault Release 1 13 0 offers features and enhancements that improve the user experience while solving critical  issues previously encountered by our customers  We are providing an overview  of improvements in this set of  release notes   We encourage you to  upgrade   vault docs upgrading  to the latest release of Vault to take advantage of the new benefits provided  With this latest release  we offer solutions to critical feature gaps that were identified previously  Please refer to the  Changelog  https   github com hashicorp vault blob main CHANGELOG md 1130 rc1  within the Vault release for further information on product improvements  including a comprehensive list of bug fixes   Some of these enhancements and changes in this release include the following       PKI improvements           Cross Cluster PKI Certificate Revocation    Introducing a new unified      OCSP responder and CRL builder that enables a certificate revocations and      CRL view across clusters for a given PKI mount         PKI UI Beta    New UI introducing cross signing flow  overview page       roles and keys view         Health Checks    Provide a health overview of PKI mounts for proactive      actions and troubleshooting         Command Line    Simplified CLI to discover  rotate issuers and related      commands for PKI mounts      Azure Auth Improvements           Rotate root support    Add the ability to rotate the root account s      client secret defined in the auth method s configuration via the new       rotate root  endpoint         Managed Identities authentication    The auth method now allows any Azure      
resource that supports managed identities to authenticate with Vault         VMSS Flex authentication    Add support for Virtual Machine Scale Set       VMSS  Flex authentication       GCP Secrets Impersonated Account Support    Add support for GCP service   account impersonation  allowing callers to generate a GCP access token without   requiring Vault to store or retrieve a GCP service account key for each role      Managed Keys in Transit Engine    Support for offloading Transit Key   operations to HSMs external KMS      KMIP Secret Engine Enhancements    Implemented Asymmetric Key Lifecycle   Server and Advanced Cryptographic Server profiles  Added support for RSA keys   and operations such as   MAC  MAC Verify  Sign  Sign Verify  RNG Seed and RNG   Retrieve      Vault as a SSM    Support is planned for an upcoming Vault PKCS 11 Provider   version to include mechanisms for encryption  decryption  signing and   signature verification for AES and RSA keys      Replication  enterprise     We fixed a bug that could cause a cluster to   wind up in a permanent merkle diff merkle sync loop and never enter   stream wals  particularly in cases of high write loads on the primary cluster      Share Secrets in Independent Namespaces  enterprise     You can now add   users from namespaces outside a namespace hierarchy to a group in a given   namespace hierarchy  For Vault Agent  you can now grant it access to secrets   outside the namespace where it authenticated  and reduce the number of Agents   you need to run      User Lockout    Vault now supports configuration to lock out users when they   have consecutive failed login attempts  This feature is   enabled by default     in 1 13 for the userpass  ldap  and approle auth methods      Event System  Alpha     Vault has a new experimental event system  Events   are currently only generated on writes to the KV secrets engine  but external   plugins can also be updated to start generating events      Kubernetes authentication 
plugin bug fix    Ensures a consistent TLS   configuration for all k8s API requests  This fixes a bug where it was possible   for the http Client s Transport to be missing the necessary root CAs to ensure   that all TLS connections between the auth engine and the Kubernetes API were   validated against the configured set of CA certificates      Kubernetes Secretes Engine on Vault UI    Introducing Kubernetes secret   engine support on the UI     Client Count UI improvements    Combining current month and previous history   into one dashboard     OCSP Support in the TLS Certificate Auth Method    The auth method now can   check for revoked certificates using the OCSP protocol      UI Wizard removal    The UI Wizard has been removed from the UI since the   information was occasionally out of date and did not align with the latest   changes  A new and enhanced UI experience is planned in a future release       Vault Agent improvements         Auto auth introduced  token file  method which reads an existing token from      a file  The token file method is designed for development and testing  It      is not suitable for production deployment       Listeners for the Vault Agent can define a role set to  metrics only  so      that a service can be configured to listen on a particular port to collect      metrics       Vault Agent can read configurations from multiple files       Users can specify the log file path using the   log file  command flag or       VAULT LOG FILE  environment variable  This is particularly useful when      Vault Agent is running as a Windows service       OpenAPI based Go    NET Client Libraries  Public Beta     Use the new Go      NET client libraries to interact with the Vault API from your applications        OpenAPI based Go client library  https   github com hashicorp vault client go         OpenAPI based  NET client library  https   github com hashicorp vault client dotnet       Known issues  When Vault is configured without a TLS 
certificate on the TCP listener  the Vault UI may throw an error that blocks you from performing operational tasks   The error message   Q randomUUID is not a function    Note   Refer to this  Knowledge Base article  https   support hashicorp com hc en us articles 14512496697875  for more details and a workaround     Note   The fix for this UI issue is coming in the Vault 1 13 1 release    include  perf standby token create forwarding failure mdx    include  known issues update primary data loss mdx    include  known issues internal error namespace missing policy mdx    include  known issues ephemeral loggers memory leak mdx    include  known issues sublogger levels unchanged on reload mdx    include  known issues expiration metrics fatal error mdx    include  known issues perf secondary many mounts deadlock mdx    include  known issues 1 13 reload census panic standby mdx       Feature deprecations and EOL  Please refer to the  Deprecation Plans and Notice   vault docs deprecation  page for up to date information on feature deprecations and plans  A  Feature Deprecation FAQ   vault docs deprecation faq  page addresses questions about decisions made about Vault feature deprecations "}
{"questions":"vault Software Release Date November 19 2021 layout docs Vault 1 9 0 release notes page title 1 9 0 This page contains release notes for Vault 1 9 0","answers":"---\nlayout: docs\npage_title: 1.9.0\ndescription: |-\n  This page contains release notes for Vault 1.9.0.\n---\n\n# Vault 1.9.0 release notes\n\n**Software Release Date**: November 19, 2021\n\n**Summary**: This document captures major updates as part of Vault release 1.9.0, including new features, breaking changes, enhancements, deprecation, and EOL plans. Refer to the [Changelog](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/CHANGELOG.md) for additional changes made within the Vault 1.9 release.\n\n## New features\n\nThis section describes the new features introduced as part of Vault 1.9.0.\n\n### Client count improvements\n\nSeveral improvements to client count were made to help customers better track and identify client attribution and reduce overcomputing.\n\n#### Improved computation of client counts and usability within the usage metrics UI\n\nThe improvements made include the following:\n\n* New logic enables de-duplication of non-entity tokens, thereby reducing their contribution towards the client count\n* New logic allows entities to be created for local auth mounts, thereby eliminating non-entity tokens being issued by the local auth mounts and reducing the overall client count\n* Eliminates root tokens from the client count aggregate\n* Displays client counts per namespace (top ten, descending order by attribution) in the usage metrics UI with the ability to export data for all namespaces\n* Displays clients earlier than a month in the usage metrics UI (within ten minutes since initiation of computation)\n\n### Advanced data protection module (ADP) enhancements\n\nThe following section provides details about the ADP module features added in this release.\n\n#### Advanced I\/O handling for transform FPE (ADP-Transform)\n\nUsers of the Format Preserving Encryption (FPE) 
feature of ADP Transform will now benefit from increased flexibility with regard to formatting the input and output of their data. [Transformation templates](\/vault\/tutorials\/adp\/transform#advanced-handling) are receiving two new fields, **encode_format** and **decode_formats**, that allow users to specify and format individual [capturing groups](https:\/\/www.regular-expressions.info\/refcapture.html) within the regular expressions that define their formats.\n\n#### MS SQL TDE (ADP-KM)\n\nWe added support to Vault Enterprise for customers who want Vault to manage encryption keys for Transparent Data Encryption on MSSQL servers.\n\n#### Key Management Secrets (KMS) engine - GCP (ADP-KM)\n\nThe [KMS Engine for GCP](\/vault\/docs\/secrets\/gcpkms) provides key management via the Google Cloud KMS to assist with automating many GCP key management functions.\n\n## Other features and enhancements\n\nThis section describes other features and enhancements introduced as part of the Vault 1.9 release.\n\n### Vault agent improvements\n\nImprovements were made to the Vault Agent Cache to ensure that [consul-template is always routed through the Vault Agent cache](\/vault\/docs\/agent\/template), eliminating the need for listeners to be defined in the Vault Agent for just templating.\n\n### Customized username generation for database dynamic credentials\n\nThis feature enables customization of usernames for database dynamic credentials, which helps customers better manage and correlate usernames for actions such as troubleshooting. 
Vault 1.9 supports Postgres, MSSQL, MySQL, Oracle, and MongoDB.\n\n### Customizable HTTP headers for Vault\n\nThis feature allows security operators to configure [custom response headers](\/vault\/docs\/configuration\/listener\/tcp) for the HTTP root path (`\/`) and API endpoints (`\/v1\/*`), in addition to the previously supported UI paths, through the server HCL configuration file.\n\n### Support for IBM s390X CPU architecture\n\nThis feature adds support for Vault to run on the IBM s390x architecture via the [equivalent binary](https:\/\/releases.hashicorp.com\/vault\/1.9.0+ent\/).\n\n### Namespace API lock\n\nThis [feature](\/vault\/docs\/concepts\/namespace-api-lock) allows namespace administrators to flexibly control operations such as locking APIs from child namespaces to which they have access. This enables them to restrict access to their domain in a multi-tenant environment and perform break-glass procedures in times of emergency to protect a cluster from within their child namespace.\n\n### Vault terraform provider v3\n\nWe have upgraded the [Vault Terraform Provider](https:\/\/registry.terraform.io\/providers\/hashicorp\/vault\/latest\/docs) to the latest version of the [Terraform Plugin SDKv2](\/terraform\/plugin\/sdkv2) to leverage new features.\n\n### Azure secrets engine\n\nThe following enhancements are included:\n\n* Added a `use_microsoft_graph_api` configuration parameter for use with the Microsoft Graph API. We are targeting removal of the Azure Active Directory API by [June 30, 2022](https:\/\/docs.microsoft.com\/en-us\/graph\/migrate-azure-ad-graph-overview).\n* The rotate root API is now available to rotate the `client_secret` immediately after configuration.\n\n### Customized metadata for KV\n\nThis [enhancement](\/vault\/api-docs\/secret\/kv\/kv-v2) provides the ability to set version-agnostic custom key metadata for Vault KVv2 secrets via a metadata endpoint. 
This custom metadata is also visible in the UI.\n\n## UI enhancements\n\n### Expanding the UI for more DB secrets engines\n\nWe have been adding support for DB secrets engines in the UI over the past few releases. In the Vault 1.9 release, we have added support for the [Oracle](\/vault\/docs\/secrets\/databases\/oracle), [Elasticsearch](\/vault\/docs\/secrets\/databases\/elasticdb), and [PostgreSQL](\/vault\/docs\/secrets\/databases\/postgresql) database secrets engines in the UI.\n\n### PKI certificate metadata\n\nThe [PKI Secrets Engine](\/vault\/docs\/secrets\/pki) now displays additional PKI certificate metadata in the UI, such as date issued, date of expiry, serial number, and subject\/name.\n\n## Tech preview features\n\n### KV secrets engine v2 patch operations\n\nThis feature provides a more streamlined method for managing [KV v2 secrets](\/vault\/api-docs\/secret\/kv\/kv-v2), enabling customers to better maintain least-privilege security in automated environments. It allows partial updates to KV v2 secrets without having to read the full KV secret's key\/value pairs.\n\n### Vault as an OIDC provider\n\nVault can now act as an OIDC Provider so applications can leverage pre-existing [Vault identities](\/vault\/api-docs\/secret\/identity) to authenticate into applications.\n\n## Breaking changes\n\nThe following section details breaking changes introduced in Vault 1.9.\n\n### Removal of HTTP request counters\n\nIn Vault 1.9, the [internal HTTP Request count API](\/vault\/api-docs\/system\/internal-counters#http-requests) was removed from the product. Calls to the endpoint will result in a **404 error** with a message stating that functionality on this path has been removed.\nPlease refer to the [upgrade guide](\/vault\/docs\/upgrading\/upgrade-to-1.9.x) for more information.\n\nAs called out in the documentation, Vault does not make backwards-compatible guarantees on internal APIs (those prefaced with `sys\/internal`). 
They are subject to change and may disappear without notice.\n\n## Feature deprecations and EOL\n\nPlease refer to the [Deprecation Plans and Notice](\/vault\/docs\/deprecation) page for up-to-date information on feature deprecations and plans. An [FAQ](\/vault\/docs\/deprecation\/faq) page is also available to address questions concerning decisions made about Vault feature deprecations.","site":"vault","answers_cleaned":"    layout  docs page title  1 9 0 description       This page contains release notes for Vault 1 9 0         Vault 1 9 0 release notes    Software Release Date    November 19  2021    Summary    This document captures major updates as part of Vault release 1 9 0  including new features  breaking changes  enhancements  deprecation  and EOL plans  Refer to the  Changelog  https   github com hashicorp vault blob main CHANGELOG md  for additional changes made within the Vault 1 9 release      New features  This section describes the new features introduced as part of Vault 1 9 0       Client count improvements  Several improvements to client count were made to help customers better track and identify client attribution and reduce overcomputing        Improved computation of client counts and usability within the usage metrics UI  The improvements made include the following     New logic enables de duplication of non entity tokens  thereby reducing their contribution towards the client count   New logic allows entities to be created for local auth mounts  thereby eliminating  non entity tokens being  issued by the local auth mounts and reducing the overall client count   Eliminates root tokens from the client count aggregate   Displays client counts per namespace  top ten  descending order by attribution  in the usage metrics UI with the ability to export data for all namespaces   Displays clients earlier than a month in the usage metrics UI  within ten minutes since initiation of computation       Advanced data protection module  ADP  enhancements  
The following section provides details about the ADP module features added in this release        Advanced I O handling for tranform FPE  ADP Transform   Users of the Format Preserving Encryption  FPE  feature of ADP Transform will now benefit from increased flexibility with regards to formatting the input and output of their data   Transformation templates   vault tutorials adp transform advanced handling  are receiving two new fields    encode format   and   decode formats    that allow users to specify and format individual  capturing groups  https   www regular expressions info refcapture html  within the regular expressions that define their formats        MS SQL TDE  ADP KM   We added support to Vault Enterprise for customers who want Vault to manage encryption keys for Transparent Data Encryption on MSSQL servers        Key management Secrets KMS  engine   GCP  ADP KM   The  KMS Engine for GCP   vault docs secrets gcpkms  provides key management via the Google Cloud KMS to assist with automating many GCP key management functions      Other features and enhancements  This section describes other features and enhancements introduced as part of the Vault 1 9 release       Vault agent improvements  Improvements were made to the Vault Agent Cache to ensure that  consul template is always routed through the Vault Agent cache   vault docs agent template   therefore  eliminating the need for listeners to be defined in the Vault Agent for just templating       Customized username generation for database dynamic credentials  This feature enables customization of username for database dynamic credentials  This feature helps customers better manage and correlate usernames for various actions such as troubleshooting  etc  Vault 1 9 supports Postgres  MSSQL  MySQL  Oracle  MongoDB       Customizable HTTP headers for Vault  This feature allows security operators to configure  custom response headers   vault docs configuration listener tcp  to HTTP root path       and API 
endpoints    v1      in addition to the previously supported UI paths through the server HCL configuration file       Support for IBM s390X CPU architecture  This feature adds support for Vault to run on the IBM s390x architecture via the  equivalent binary  https   releases hashicorp com vault 1 9 0 ent         Namespace API lock  This  feature   vault docs concepts namespace api lock  allows namespace administrators to flexibly control operations such as locking APIs from child namespaces to which they have access  This enables them to restrict access to their domain in a multi tenant environment and perform break glass procedures in times of emergency to protect a cluster from within their child namespace       Vault terraform provider v3  We have upgraded the  Vault Terraform Provider  https   registry terraform io providers hashicorp vault latest docs  to the latest version of the  Terraform Plugin SDKv2   terraform plugin sdkv2  to leverage new features       Azure secrets engine  The following enhancement are included     Added  use microsoft graph api  configuration parameter is added to use with Microsoft Graph API  We are targeting to remove Azure Active Directory API by  June 30  2022  https   docs microsoft com en us graph migrate azure ad graph overview     Rotate root API is now available to rotate client secret immediately after configuration       Customized metadata for KV  This  enhancement   vault api docs secret kv kv v2  provides the ability to set version agnostic custom key metadata for Vault KVv2 secrets via a metadata endpoint  This custom metadata is also visible in the UI      UI enhancements     Expanding the UI for more DB secrets engines  We have been adding support for DB secrets engines in the UI over the past few releases  In the Vault 1 9 release  we have added support for  Oracle   vault docs secrets databases oracle  and  ElasticSearch   vault docs secrets databases elasticdb  and  PostgresSQL   vault docs secrets databases 
postgresql  database secrets engines in the UI       PKI certificate metadata  The  PKI Secrets Engine   vault docs secrets pki  now displays additional PKI certificate metadata in the UI  such as date issued  date of expiry  serial number  and subject name      Tech preview features      KV secrets engine v2 patch operations  This feature provides a more streamlined method for managing  KV v2 secrets   vault api docs secret kv kv v2   enabling customers to better maintain least privilege security in automated environments  This feature allows performing partial updates to KV v2 secrets without requiring to read the full KV secret s key value pairs       Vault as an OIDC provider  Vault can now act as an OIDC Provider so applications can leverage the pre existing  Vault identities   vault api docs secret identity  to authenticate into applications      Breaking changes  The following section details breaking changes introduced in Vault 1 9       Removal of HTTP request counters  In Vault 1 9  the  internal HTTP Request count API   vault api docs system internal counters http requests  was removed from the product  Calls to the endpoint will result in a   404 error   with a message stating that functionality on this path has been removed  Please refer to the  upgrade guide   vault docs upgrading upgrade to 1 9 x  for more information   As called out in the documentation  Vault does not make backwards compatible guarantees on internal APIs  those prefaced with  sys internal    They are subject to change and may disappear without notice      Feature deprecations and EOL  Please refer to the  Deprecation Plans and Notice   vault docs deprecation  page for up to date information on feature deprecations and plans  An  FAQ   vault docs deprecation faq  page is also available to address questions concerning decisions made about Vault feature deprecations "}
{"questions":"vault This page contains release notes for Vault 1 11 0 Vault 1 11 0 release notes Software Release date June 21 2022 page title 1 11 0 layout docs","answers":"---\nlayout: docs\npage_title: 1.11.0\ndescription: |-\n  This page contains release notes for Vault 1.11.0.\n---\n\n# Vault 1.11.0 release notes\n\n**Software Release date:** June 21, 2022\n\n**Summary:** Vault Release 1.11.0 offers features and enhancements that improve the user experience while closing the loop on key issues previously encountered by our customers. We are providing a summary of these improvements in these release notes.\n\nWe encourage you to upgrade to the latest release to take advantage of the new benefits that we are providing. With this latest release, we offer solutions to critical feature gaps that have been identified previously. For further information on product improvements, including a comprehensive list of bug fixes, please refer to the [Changelog](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/CHANGELOG.md) within the Vault release.\n\nSome of these enhancements and changes in this release include:\n\n- Vault Consul secrets engine provides a templating policy to allow node and service identities to be set on Consul token creation\n- Snowflake secrets engine added key\/pair-based authentication\n- Vault adds a Kubernetes secrets engine to allow creating dynamic k8s service accounts\n- ADP-Transform extends its functionality by adding a convergent tokenization mode and a tokenization lookup\n- ADP-KM adds four new operations\n- Client count tooling improvements to help understand the attribution of clients better\n- Integrated storage autopilot improvements include auto upgrade and redundancy zones\n- Plugin Multiplexing support is extended to secret and auth plugins, allowing them to be managed more efficiently with a single process\n\n## New features\n\nThis section describes the new features introduced as part of Vault 1.11.0.\n\n### Configure 
GCP auth to target non-public Google API addresses\n\nThe GCP auth method only allows for public API endpoints to be configured for authentication purposes. Workloads running in GCP that do not have external internet access need the ability to authenticate using [Private Google Access](https:\/\/cloud.google.com\/vpc\/docs\/private-google-access#pga). In Vault 1.11.0, we allow for customization of certain service endpoints. For more information, refer to the [GCP auth method](\/vault\/api-docs\/auth\/gcp#custom_endpoint) documentation.\n\n### Support for key\/pair based authentication for Snowflake\n\nIn Vault 1.11.0, the Snowflake Database Engine supports an additional credential type that can be generated. For users not wanting to rely on standard user\/pass authentication to Snowflake, Vault can now dynamically generate RSA key pairs that allow users to authenticate into Snowflake. For more information, refer to the [Snowflake Database Secrets Engine](\/vault\/docs\/secrets\/databases\/snowflake) and [Database Secrets Engine (API)](\/vault\/api-docs\/secret\/databases) documentation.\n\n### Dynamic Kubernetes service account secrets\n\nKubernetes service accounts must be manually generated and passed to a Kubernetes configuration file or the command line using a CLI tool such as kubectl to interact with Kubernetes clusters. With this method, service account credentials, which contain static secrets, can be exposed and would require periodic manual rotation. To address this issue, we now support generating short-lived dynamic service accounts and associating role bindings with specific Kubernetes namespaces. 
For more information, refer to the [Kubernetes Auth Method](\/vault\/docs\/auth\/kubernetes) and [Kubernetes Auth Method (API)](\/vault\/api-docs\/auth\/kubernetes) documentation.\n\n### New KV secrets engine (v2) utilities\n\nThe KV version 2 secrets engine now includes a set of utilities and enhancements for easier retrieval of key-value secrets and metadata. This includes:\n\n* New optional Vault CLI mount flag (`vault kv get -mount=secret foo`).\n* New flag to output a sample policy in HCL (`-output-policy`) for any Vault CLI command.\n* New KV convenience\/helper methods (GET and PUT) added to the Go client library.\n\nFor more details, refer to the [Version Key Value Secrets Engine](\/vault\/tutorials\/secrets-management\/versioned-kv) tutorial.\n\n### Support for node identity and service identity for Vault consul secrets engine\n\nWithin the Consul secrets engine, practitioners writing a Vault role can specify node-identity or service-identity. You can also specify multiples of each identity on a Vault role. For more information, refer to the [Consul Secrets Engine](\/vault\/docs\/secrets\/consul) and [Consul Secrets Engine (API)](\/vault\/api-docs\/secret\/consul) documentation.\n\n### Autopilot (Vault enterprise)\n\nVault release 1.7 introduced the Autopilot feature to Integrated Storage. In this release, new Autopilot features are added to Vault Enterprise to perform seamless automatic upgrades and support redundancy zones for improved cluster resiliency. 
Refer to the [autopilot endpoint](\/vault\/api-docs\/system\/storage\/raftautopilot#sys-storage-raft-autopilot), [operator raft](\/vault\/docs\/commands\/operator\/raft), [Autopilot](\/vault\/docs\/concepts\/integrated-storage\/autopilot),  [Automated Upgrades](\/vault\/docs\/enterprise\/automated-upgrades), and [Redundancy Zones](\/vault\/docs\/enterprise\/redundancy-zones) documentation for more information.\n\n## Other features and enhancements\n\nThis section describes other features and enhancements introduced as part of the Vault 1.11.0 release.\n\n### Import externally-generated keys into transit secrets engine\n\nHistorically, Vault has only allowed the Transit secrets engine to utilize keys that were created by Vault itself. In this release, we have introduced an import feature for the Transit secrets engine that enables individuals to bring externally-generated encryption keys into a Transit keyring. These keys can then be used identically to internally-generated Transit keys.\n\n### Improved CA rotation\n\nPKI secrets engine users have sought a way to rotate root or intermediate CAs without causing service interruptions to any entities referencing them. Vault can now create the newly rotated PKI key pairs for servicing new certificates at the same path as the pre-existing keypair. This allows operators to gradually transition entities over to the new root certificate while the old is still active.\n\n### Client count tooling improvements\n\nWe have made the following improvements to the Client Count tooling:\n\n* Provide the ability to export the unique clients that contribute to the client count aggregate for the selected billing period via a new [activity export API endpoint](\/vault\/api-docs\/system\/internal-counters#activity-export). 
This feature is available in tech preview mode.\n* Provide the ability to view changes to client counts month over month in the UI.\n\n### MFA enhancements\n\nVault 1.10 introduced [Login MFA](\/vault\/docs\/auth\/login-mfa) support for Vault Community Edition. In this release, we included additional enhancements to the Login MFA feature by introducing the ability to configure Login MFA via the UI and providing an enhanced TOTP configuration experience via the QR code scan.\n\n### Vault agent: support for using an existing valid certificate upon re-authentication\n\nEnhancements have been made to the Vault Agent to support the parsing of a certificate that's been fetched. A new certificate will only be fetched upon a re-authentication if the certificate's lifetime has expired. This enhancement drastically reduces the resource overhead that Vault Agent users often experience due to over-fetching certificates.\n\n### Namespace enhancements for Vault Terraform\n\nWith Terraform Vault provider v3.7.0, we have made enhancements where it\u2019s now possible to specify the namespace directly within the resource or data source. All resource or data source-specific namespaces are relative to their provider\u2019s configured namespace. This enhancement encourages a better workflow for namespaces, reduces execution time when handling failures of a Terraform plan, and eases the burden on system resources such as memory and CPU.\n\n### ADP-Transform enhancements\n\nTwo new enhancements were made to the Transform secrets engine. The first is Convergent Tokenization, which allows tokenization transformations to be configured as _convergent_. When enabled, this guarantees that tokenizing a given plaintext and expiration more than once always results in the same token value being produced. Please refer to the [Convergent Tokenization](\/vault\/docs\/secrets\/transform\/tokenization#convergence) document for more information. 
Token Lookup allows you to look up the value of a token given its plaintext. While this is not typically encouraged from a security perspective, it may be necessary in circumstances that require this operation. Note that token lookup is only supported when convergence is enabled. For more information on the endpoint, refer to the [Lookup Token](\/vault\/api-docs\/secret\/transform#lookup-token) documentation.\n\n### KMIP support for import, query, encryption and decryption\n\nPreviously, KMIP did not support certain operations such as import, decrypt, encrypt, and query. These operations are now supported. For a complete list of supported KMIP operations, please refer to the [Supported KMIP Operations](\/vault\/docs\/secrets\/kmip) documentation.\n\n@include 'pgx-params.mdx'\n\n## Known issues\n\nWhen you use Vault 1.11.0+ as Consul's Connect CA, you may encounter an issue generating leaf certificates ([GH-15525](https:\/\/github.com\/hashicorp\/consul\/pull\/15525)). Upgrade to a [Consul version that includes the fix](https:\/\/support.hashicorp.com\/hc\/en-us\/articles\/11308460105491#01GMC24E6PPGXMRX8DMT4HZYTW) to avoid running into this problem.\n\n-> Refer to this [Knowledge Base article](https:\/\/support.hashicorp.com\/hc\/en-us\/articles\/11308460105491) for more details.\n\n## Feature deprecations and EOL\n\nPlease refer to the [Deprecation Plans and Notice](\/vault\/docs\/deprecation) page for up-to-date information on feature deprecations and plans. 
An [Feature Deprecation FAQ](\/vault\/docs\/deprecation\/faq) page is also available to address questions concerning decisions made about Vault feature deprecations.","site":"vault","answers_cleaned":"    layout  docs page title  1 11 0 description       This page contains release notes for Vault 1 11 0        Vault 1 11 0 release notes    Software Release date    June 21  2022    Summary    Vault Release 1 11 0 offers features and enhancements that improve the user experience while closing the loop on key issues previously encountered by our customers  We are providing a summary of these improvements in these release notes   We encourage you to upgrade to the latest release to take advantage of the new benefits that we are providing  With this latest release  we offer solutions to critical feature gaps that have been identified previously  For further information on product improvements  including a comprehensive list of bug fixes  please refer to the  Changelog  https   github com hashicorp vault blob main CHANGELOG md  within the Vault  release    Some of these enhancements and changes in this release include     Vault Consul secrets engine provides a templating policy to allow node and service identities to be set on the Consul token creation   Snowflake secrets engine added a key pair based authentication   Vault adds a Kubernetes secrets engine to allow creating dynamic k8s service accounts   ADP Transform extends its functionality by adding a convergent tokenization mode and a tokenization lookup   ADP KM adds four new operations   Client count tooling improvements to help understand the attribution of clients better   Integration storage autopilot improvements include auto upgrade and redundancy zones   Plugin Multiplexing support is extended to secret and auth plugins  allowing them to be managed more efficiently with a single process      New features  This section describes the new features introduced as part of Vault 1 11 0       Configure GCP auth to 
target non public good API addresses  The GCP auth method only allows for public API endpoints to be configured for authentication purposes  Workloads running in GCP that do not have external internet access need the ability to authenticate using  Private Google Access  https   cloud google com vpc docs private google access pga   In Vault 1 11 0  we allow for customization of certain service endpoints  For more information  refer to the  GCP auth method   vault api docs auth gcp custom endpoint  documentation        Support for key pair based authentication for snowflake  In Vault 1 11 0  the Snowflake Database Engine supports an additional credential type that can be generated  For users not wanting to rely on the standard user pass authentication to Snowflake  Vault can now dynamically generate RSA key pairs that allow users to authenticate into Snowflake  For more information  refer to the  Snowflake Database Secrets Engine   vault docs secrets databases snowflake  and  Database Secrets Engine  API    vault api docs secret databases  documentation       Dynamic kubernetes service account secrets  Kubernetes service accounts must be manually generated and passed to a Kubernetes configuration file or the command line using a CLI tool such as kubectl to interact with Kubernetes clusters  With this method  service account credentials  which contain static secrets  can be exposed and would require a periodic manual rotation  To address this issue  we now support generating short lived dynamic service accounts and associate role bindings to specific Kubernetes namespaces  For more information  refer to the  Kubernetes Auth Method   vault docs auth kubernetes  and  Kubernetes Auth Method  API    vault api docs auth kubernetes  documentation       New KV secrets engine  v2  utilities  The KV version 2 secrets engine now includes a set of utilities and enhancements for easier retrieval of key value secrets and metadata  This includes     New optional Vault CLI mount 
flag   vault kv get  mount secret foo      New flag to output a sample policy in HCL    output policy   for any Vault CLI command    New KV convenience helper methods  GET and PUT  added to the Go client library   For more details  refer to the  Version Key Value Secrets Engine   vault tutorials secrets management versioned kv  tutorial       Support for node identity and service identity for Vault consul secrets engine  Within the Consul secrets engine  practitioners writing a Vault role can specify node identity or service identity  You can also specify multiples of each identity on a Vault role  For more information  refer to the  Consul Secrets Engine   vault docs secrets consul  and  Consul Secrets Engine  API    vault api docs secret consul  documentation       Autopilot  Vault enterprise   Vault release 1 7 introduced the Autopilot feature to Integrated Storage  In this release  new Autopilot features are added to Vault Enterprise to perform seamless automatic upgrades and support redundancy zones for improved cluster resiliency  Refer to the  autopilot endpoint   vault api docs system storage raftautopilot sys storage raft autopilot    operator raft   vault docs commands operator raft    Autopilot   vault docs concepts integrated storage autopilot     Automated Upgrades   vault docs enterprise automated upgrades   and  Redundancy Zones   vault docs enterprise redundancy zones  documentation for more information      Other features and enhancements  This section describes other features and enhancements introduced as part of the Vault 1 11 0 release       Import externally generated keys into transit secrets engine  Historically  Vault has only allowed the Transit secrets engine to utilize keys that were created by Vault itself  In this release  we have introduced an import feature for the Transit secrets engine that enables individuals to bring externally generated encryption keys into a Transit keyring  These keys can then be used identically to internally 
generated Transit keys       Improved CA rotation  PKI secrets engine users have sought a way to rotate root or intermediate CAs without causing service interruptions to any entities referencing them  Vault can now create the newly rotated PKI key pairs for servicing new certificates at the same path as the pre existing keypair  This allows operators to gradually transition entities over to the new root certificate while the old is still active       Client count tooling improvements  We have made the following improvements to the Client Count tooling     Provide the ability to export the unique clients that contribute to the client count aggregate for the selected billing period via a new  activity export API endpoint   vault api docs system internal counters activity export   This feature is available in tech preview mode    Provide the ability to view changes to client counts month over month in the UI       MFA enhancements  Vault 1 10 introduced  Login MFA   vault docs auth login mfa  support for Vault Community Edition  In this release  we included additional enhancements to the Login MFA feature by introducing the ability to configure Login MFA via the UI and providing an enhanced TOTP configuration experience via the QR code scan       Vault agent  support for using an existing valid certificate upon re authentication  Enhancements have been made to the Vault Agent to support the parsing of a certificate that s been fetched  A new certificate will only be fetched upon a re authentication if the certificate s lifetime has expired  This enhancement drastically reduces the resource overhead that Vault Agent users often experience due to over fetching certificates       Namespace enhancements for Vault terraform  With Terraform Vault provider v3 7 0  we have made enhancements where it s now possible to specify the namespace directly within the resource or data source  All resource or data source specific namespaces are relative to their provider s configured 
namespace  This enhancement encourages a better workflow for namespaces  reduces execution time when handling failures of a Terraform plan  and eases the burden on system resources such as memory  CPU  etc       ADP Tranform enhancements  Two new enhancements were made to the Transform secrets engine  The first is Convergent Tokenization  which allows tokenization transformations to be configured as  convergent   When enabled  this guarantees that tokenizing a given plaintext and expiration more than once always results in the same token value being produced  Please refer to the  Convergent Tokenization   vault docs secrets transform tokenization convergence  document for more information  Token Lookup allows you to look up the value of a token given its plaintext  While this is not typically encouraged from a security perspective  it may be necessary for particular circumstances that require this operation  Note that token lookup is only supported when convergence is enabled  For more information on the endpoint  refer to the  Lookup Token   vault api docs secret transform lookup token  documentation       KMIP support for import  query  encryption and decryption  Previously  KMIP did not support certain operations such as import  decrypt  encrypt  and query  These operations are now supported  For a complete list of supported KMIP operations  please refer to the  Supported KMIP Operations   vault docs secrets kmip  documentation    include  pgx params mdx      Known issues  When you use Vault 1 11 0  as a Consul s Connect CA  you may encounter an issue generating the leaf certificates   GH 15525  https   github com hashicorp consul pull 15525    Upgrade your  Consul version that includes the fix  https   support hashicorp com hc en us articles 11308460105491 01GMC24E6PPGXMRX8DMT4HZYTW  to avoid running into this problem       Refer to this  Knowledge Base article  https   support hashicorp com hc en us articles 11308460105491  for more details      Feature 
deprecations and EOL  Please refer to the  Deprecation Plans and Notice   vault docs deprecation  page for up to date information on feature deprecations and plans  An  Feature Deprecation FAQ   vault docs deprecation faq  page is also available to address questions concerning decisions made about Vault feature deprecations "}
{"questions":"vault page title 1 15 0 release notes Vault 1 15 0 release notes layout docs GA date 2023 09 27 Key updates for Vault 1 15 0","answers":"---\nlayout: docs\npage_title: \"1.15.0 release notes\"\ndescription: |-\n  Key updates for  Vault 1.15.0\n---\n\n# Vault 1.15.0 release notes\n\n**GA date:** 2023-09-27\n\n@include 'release-notes\/intro.mdx'\n\n## Known issues and breaking changes\n\n| Version          | Issue                                                                                                                                                                                                                             |\n|------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| 1.15.0+          | [Vault no longer reports rollback metrics by mountpoint](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#rollback-metrics)                                                                                                                |\n| 1.15.0           | [Panic in AWS auth method during IAM-based login](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#panic-in-aws-auth-method-during-iam-based-login)                                                                                        |\n| 1.15.0+          | [UI Collapsed navbar does not allow certain click events](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#ui-collapsed-navbar)                                                                                                            |\n| 1.15             | [Vault file audit devices do not honor SIGHUP signal to reload](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#file-audit-devices-do-not-honor-sighup-signal-to-reload)                                                                  |\n| 1.15.0 - 1.15.1  | [Vault storing references to ephemeral sub-loggers leading 
to unbounded memory consumption](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#vault-is-storing-references-to-ephemeral-sub-loggers-leading-to-unbounded-memory-consumption) |\n| 1.15.0 - 1.15.1  | [Internal error when vault policy in namespace does not exist](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#internal-error-when-vault-policy-in-namespace-does-not-exist)                                                              |\n| 1.15.0+          | [Sublogger levels not adjusted on reload](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#sublogger-levels-unchanged-on-reload)                                                                                                           |\n| 1.15.0+          | [URL change for KV v2 plugin](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#kv2-url-change)                                                                                                                                             |\n| 1.15.1           | [Fatal error during expiration metrics gathering causing Vault crash](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#fatal-error-during-expiration-metrics-gathering-causing-vault-crash)                                                |\n| 1.15.0 - 1.15.4  | [Audit devices could log raw data despite configuration](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#audit-devices-could-log-raw-data-despite-configuration)                                                                          |\n| 1.15.5           | [Unable to rotate LDAP credentials](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#unable-to-rotate-ldap-credentials)                                                                                                                    |\n| 1.15.0 - 1.15.5  | [Deadlock can occur on performance secondary clusters with many mounts](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#deadlock-can-occur-on-performance-secondary-clusters-with-many-mounts)                                            |\n| 1.15.0 - 1.15.5  | [Audit fails to recover 
from panics when formatting audit entries](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#audit-fails-to-recover-from-panics-when-formatting-audit-entries)                                                      |\n| 1.15.0 - 1.15.7  | [Vault Enterprise performance standby nodes audit all request headers regardless of settings](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#vault-enterprise-performance-standby-nodes-audit-all-request-headers)                       |\n| 1.15.3 - 1.15.9  | [New nodes added by autopilot upgrades provisioned with the wrong version](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#new-nodes-added-by-autopilot-upgrades-provisioned-with-the-wrong-version)                                      |\n| 1.15.8 - 1.15.9  | [Autopilot upgrade for Vault Enterprise fails](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#autopilot)                                                                                                                                 |\n| 1.15.0 - 1.15.11 | [Listener stops listening on untrusted upstream connection with particular config settings](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#listener-proxy-protocol-config)                                                               |\n| 0.7.0+           | [Duplicate identity groups created](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#duplicate-identity-groups-created-when-concurrent-requests-sent-to-the-primary-and-pr-secondary-cluster)                                              |                                                                                                  |\n| Known Issue (0.7.0+) | [Manual entity merges fail](\/vault\/docs\/upgrading\/upgrade-to-1.15.x#manual-entity-merges-sent-to-a-pr-secondary-cluster-are-not-persisted-to-storage)\n\n\n## Vault companion updates\n\nCompanion updates are Vault updates that live outside the main Vault binary.\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th 
style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td style=>\n      Vault Secrets Operator\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Run the Vault Secrets Operator (v0.3.0) on Red Hat OpenShift.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/platform\/k8s\/vso\/openshift\">Vault Secrets Operator<\/a>\n    <\/td>\n  <\/tr>\n  <\/tbody>\n<\/table>\n\n## Core updates\n\nFollow the learn more links for more information, or browse the list of\n[Vault tutorials updated to highlight changes for the most recent GA release](\/vault\/tutorials\/new-release).\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td rowSpan={2} style=>\n      Vault Agent\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Updated to use the latest Azure SDK version and Workload Identity\n      Federation (WIF).\n      <br \/><br \/>\n      Learn more:&nbsp;\n      <a href=\"\/vault\/docs\/agent-and-proxy\/agent\">What is Vault Agent?<\/a>\n    <\/td>\n  <\/tr>\n  <tr>\n    <td style=>GA<\/td>\n    <td style=>\n      Fetch secrets directly into your application as environment variables.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/agent-and-proxy\/agent\/process-supervisor\">Process Supervisor Mode<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      External plugins\n    <\/td>\n    <td style=>BETA<\/td>\n    <td style=>\n      Run external plugins in their own container with native container platform\n      controls.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/plugins\/containerized-plugins\">Containerize Vault plugins<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Eventing\n    <\/td>\n    <td style=>BETA<\/td>\n    <td style=>\n      Subscribe to notifications for various events in Vault. 
Includes support\n      for filtering, permissions, and cluster configurations with K-V secrets.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/concepts\/events\">Events<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td rowSpan={2} style=>\n    Vault GUI\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      New LDAP secrets engine GUI.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/configuration\/ui\">Vault UI guide<\/a>\n    <\/td>\n  <\/tr>\n  <tr>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      &bull; New landing page dashboard.<br \/>\n      &bull; View secrets you have read access to under your directory.<br \/>\n      &bull; View diffs between previous and new secret versions.<br \/>\n      &bull; Copy and paste secret paths from the GUI to the Vault CLI or API.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/configuration\/ui\">Vault UI guide<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td rowSpan={2} style=>\n      Secrets management\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Connect to Google Cloud Platform (GCP) Cloud SQL instances using native\n      IAM credentials.\n      <br \/><br \/>\n      Learn more:&nbsp;\n      <a href=\"\/vault\/docs\/sync\/gcpsm\">Google Cloud Platform Secret Manager<\/a>\n    <\/td>\n  <\/tr>\n  <tr>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Improved TTL management for database credentials with configurable\n      credential rotation.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/api-docs\/secret\">Secrets engines<\/a>\n    <\/td>\n  <\/tr>\n\n  <\/tbody>\n<\/table>\n\n## Enterprise updates\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td style=>\n      Secrets syncing\n    <\/td>\n    <td style=>BETA<\/td>\n    <td style=>\n      Sync Key Value (KV) v2 data between Vault and secrets 
managers from AWS,\n      Azure, Google Cloud Platform (GCP), GitHub, and Vercel.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/sync\">Secrets Sync<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Public Key Infrastructure (PKI)\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Control Vault PKI issued certificates with the Certificate Issuance\n      External Policy Service (CIEPS) to ensure consistency and compliance to\n      enterprise standards.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/pki\/cieps\">Certificate Issuance External Policy Service (CIEPS)<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Replication\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Holistic improvements to cluster replication including problem detection\n      and remediation.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/enterprise\/replication\">Vault Enterprise replication<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Seal High Availability\n    <\/td>\n    <td style=>BETA<\/td>\n    <td style=>\n      Enables Vault administrators to configure multiple KMS for seal keys to\n      ensure Vault availability in the event a single KMS becomes unavailable.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/configuration\/seal\/seal-ha\">Seal wrap<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Authentication\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Authenticate to Vault with your SAML identity provider.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/auth\/saml\">SAML auth method<\/a>\n    <\/td>\n  <\/tr>\n\n  <\/tbody>\n<\/table>\n\n## Feature deprecations and EOL\n\nDeprecated in 1.15 | Retired in 1.15\n------------------ | ---------------\nNone | None\n\n@include 'release-notes\/deprecation-note.mdx'","site":"vault","answers_cleaned":"    layout  docs page title   1 15 0 
release notes  description       Key updates for  Vault 1 15 0        Vault 1 15 0 release notes    GA date    2023 09 27   include  release notes intro mdx      Known issues and breaking changes    Version            Issue                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          1 15 0              Vault no longer reports rollback metrics by mountpoint   vault docs upgrading upgrade to 1 15 x rollback metrics                                                                                                                     1 15 0              Panic in AWS auth method during IAM based login   vault docs upgrading upgrade to 1 15 x panic in aws auth method during iam based login                                                                                             1 15 0              UI Collapsed navbar does not allow certain click events   vault docs upgrading upgrade to 1 15 x ui collapsed navbar                                                                                                                 1 15                Vault file audit devices do not honor SIGHUP signal to reload   vault docs upgrading upgrade to 1 15 x file audit devices do not honor sighup signal to reload                                                                       1 15 0   1 15 1     Vault storing references to ephemeral sub loggers leading to unbounded memory consumption   vault docs upgrading upgrade to 1 15 x vault is storing references to ephemeral sub loggers leading to unbounded memory consumption      1 15 0   1 15 1     Internal error when vault policy in 
namespace does not exist   vault docs upgrading upgrade to 1 15 x internal error when vault policy in namespace does not exist                                                                   1 15 0              Sublogger levels not adjusted on reload   vault docs upgrading upgrade to 1 15 x sublogger levels unchanged on reload                                                                                                                1 15 0              URL change for KV v2 plugin   vault docs upgrading upgrade to 1 15 x kv2 url change                                                                                                                                                  1 15 1              Fatal error during expiration metrics gathering causing Vault crash   vault docs upgrading upgrade to 1 15 x fatal error during expiration metrics gathering causing vault crash                                                     1 15 0   1 15 4     Audit devices could log raw data despite configuration   vault docs upgrading upgrade to 1 15 x audit devices could log raw data despite configuration                                                                               1 15 5              Unable to rotate LDAP credentials   vault docs upgrading upgrade to 1 15 x unable to rotate ldap credentials                                                                                                                         1 15 0   1 15 5     Deadlock can occur on performance secondary clusters with many mounts   vault docs upgrading upgrade to 1 15 x deadlock can occur on performance secondary clusters with many mounts                                                 1 15 0   1 15 5     Audit fails to recover from panics when formatting audit entries   vault docs upgrading upgrade to 1 15 x audit fails to recover from panics when formatting audit entries                                                           1 15 0   1 15 7     Vault Enterprise performance standby nodes 
audit all request headers regardless of settings   vault docs upgrading upgrade to 1 15 x vault enterprise performance standby nodes audit all request headers                            1 15 3   1 15 9     New nodes added by autopilot upgrades provisioned with the wrong version   vault docs upgrading upgrade to 1 15 x new nodes added by autopilot upgrades provisioned with the wrong version                                           1 15 8   1 15 9     Autopilot upgrade for Vault Enterprise fails   vault docs upgrading upgrade to 1 15 x autopilot                                                                                                                                      1 15 0   1 15 11    Listener stops listening on untrusted upstream connection with particular config settings   vault docs upgrading upgrade to 1 15 x listener proxy protocol config                                                                    0 7 0               Duplicate identity groups created   vault docs upgrading upgrade to 1 15 x duplicate identity groups created when concurrent requests sent to the primary and pr secondary cluster                                                                                                                                                      Known Issue  0 7 0      Manual entity merges fail   vault docs upgrading upgrade to 1 15 x manual entity merges sent to a pr secondary cluster are not persisted to storage       Vault companion updates  Companion updates are Vault updates that live outside the main Vault binary    table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td style         Vault Secrets Operator       td       td style  GA  td       td style         Run the Vault Secrets Operator  v0 3 0  on Red Hat OpenShift         br    br          Learn more   a href   vault docs platform k8s vso openshift  Vault Secrets Operator  a      
  td      tr      tbody    table      Core updates  Follow the learn more links for more information  or browse the list of  Vault tutorials updated to highlight changes for the most recent GA release   vault tutorials new release     table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td rowSpan  2  style         Vault Agent       td       td style  ENHANCED  td       td style         Updated to use the latest Azure SDK version and Workload Identity       Federation  WIF          br    br          Learn more  nbsp         a href   vault docs agent and proxy agent  What is Vault Agent   a        td      tr     tr       td style  GA  td       td style         Fetch secrets directly into your application as environment variables         br    br          Learn more   a href   vault docs agent and proxy agent process supervisor  Process Supervisor Mode  a        td      tr      tr       td style         External plugins       td       td style  BETA  td       td style         Run external plugins in their own container with native container platform       controls         br    br          Learn more   a href   vault docs plugins containerized plugins  Containerize Vault plugins  a        td      tr      tr       td style         Eventing       td       td style  BETA  td       td style         Subscribe to notifications for various events in Vault  Includes support       for filtering  permissions  and cluster configurations with K V secrets         br    br          Learn more   a href   vault docs concepts events  Events  a        td      tr      tr       td rowSpan  2  style       Vault GUI       td       td style  GA  td       td style         New LDAP secrets engine GUI         br    br          Learn more   a href   vault docs configuration ui  Vault UI guide  a        td      tr     tr       td style  ENHANCED  td       td style          bull  New 
landing page dashboard  br           bull  View secrets you have read access to under your directory  br           bull  View diffs between previous and new secret versions  br           bull  Copy and paste secret paths from the GUI to the Vault CLI or API         br    br          Learn more   a href   vault docs configuration ui  Vault UI guide  a        td      tr      tr       td rowSpan  2  style         Secrets management       td       td style  GA  td       td style         Connect to Google Cloud Platform  GCP  Cloud SQL instances using native       IAM credentials         br    br          Learn more  nbsp         a href   vault docs sync gcpsm  Google Cloud Platform Secret Manager  a        td      tr     tr       td style  ENHANCED  td       td style         Improved TTL management for database credentials with configurable       credential rotation         br    br          Learn more   a href   vault api docs secret  Secrets engines  a        td      tr       tbody    table      Enterprise updates   table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td style         Secrets syncing       td       td style  BETA  td       td style         Sync Key Value  KV  v2 data between Vault and secrets managers from AWS        Azure  Google Cloud Platform  GCP   GitHub  and Vercel         br    br          Learn more   a href   vault docs sync  Secrets Sync  a        td      tr      tr       td style         Public Key Infrastructure  PKI        td       td style  GA  td       td style         Control Vault PKI issued certificates with the Certificate Issuance       External Policy Service  CIEPS  to ensure consistency and compliance to       enterprise standards         br    br          Learn more   a href   vault docs secrets pki cieps  Certificate Issuance External Policy Service  CIEPS   a        td      tr      tr       td style         
Replication       td       td style  ENHANCED  td       td style         Holistic improvements to cluster replication including problem detection       and remediation         br    br          Learn more   a href   vault docs enterprise replication  Vault Enterprise replication  a        td      tr      tr       td style         Seal High Availability       td       td style  BETA  td       td style         Enables Vault administrators to configure multiple KMS for seal keys to       ensure Vault availability in the event a single KMS becomes unavailable         br    br          Learn more   a href   vault docs configuration seal seal ha  Seal wrap  a        td      tr      tr       td style         Authentication       td       td style  GA  td       td style         Authenticate to Vault with your SAML identity provider         br    br          Learn more   a href   vault docs auth saml  SAML auth method  a        td      tr       tbody    table      Feature deprecations and EOL  Deprecated in 1 15   Retired in 1 15                                      None   None   include  release notes deprecation note mdx "}
{"questions":"vault page title 1 17 0 release notes Key updates for Vault 1 17 0 layout docs Vault 1 17 0 release notes GA date 2024 06 12","answers":"---\nlayout: docs\npage_title: \"1.17.0 release notes\"\ndescription: |-\n  Key updates for  Vault 1.17.0\n---\n\n# Vault 1.17.0 release notes\n\n**GA date:** 2024-06-12\n\n@include 'release-notes\/intro.mdx'\n\n## Important changes\n\n| Change                                         | Description                                                                                                                                                                                      |\n|------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| New default (1.17)                             | [Allowed audit headers now have unremovable defaults](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#audit-headers)                                                                                     |\n| Opt out feature (1.17)                         | [PKI sign-intermediate now truncates `notAfter` field to signing issuer](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#pki-truncate)                                                                   |\n| Beta feature deprecated (1.17)                 | [Request limiter deprecated](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#request-limiter)                                                                                                            |\n| Known issue (1.17.0+)                          | [PKI OCSP GET requests can return HTTP redirect responses](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#pki-ocsp)                                                                                     |\n| Known issue (1.17.0)                           | [Vault Agent and Vault Proxy consume excessive amounts of 
CPU](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#agent-proxy-cpu-1-17)                                                                     |\n| Known issue (1.15.8 - 1.15.9, 1.16.0 - 1.16.3) | [Autopilot upgrade for Vault Enterprise fails](\/vault\/docs\/upgrading\/upgrade-to-1.16.x#new-nodes-added-by-autopilot-upgrades-provisioned-with-the-wrong-version)                                 |\n| Known issue (1.17.0 - 1.17.2)                  | [Vault standby nodes not deleting removed entity-aliases from in-memory database](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#dangling-entity-alias-in-memory)                                       |\n| Known issue (1.17.0 - 1.17.3)                  | [AWS Auth AssumeRole requires an external ID even if none is set](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#aws-auth-role-configuration-requires-an-external_id)                                   |\n| Known Issue (0.7.0+)                           | [Duplicate identity groups created](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#duplicate-identity-groups-created-when-concurrent-requests-sent-to-the-primary-and-pr-secondary-cluster)             |\n| Known Issue (0.7.0+)                           | [Manual entity merges fail](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#manual-entity-merges-sent-to-a-pr-secondary-cluster-are-not-persisted-to-storage)                                            |\n| Known Issue (1.17.3-1.17.4)                    | [Some values in the audit logs not hmac'd properly](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#client-tokens-and-token-accessors-audited-in-plaintext)                                              |\n| Known Issue (1.17.0-1.17.5)                    | [Cached activation flags for secrets sync on follower nodes are not updated](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#cached-activation-flags-for-secrets-sync-on-follower-nodes-are-not-updated) |\n| New default (1.17.9)                           | [Vault product usage metrics 
reporting](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#product-usage-reporting)                                                                                         |\n| Deprecation (1.17.9)                           | [`default_report_months` is deprecated for the `sys\/internal\/counters` API](\/vault\/docs\/upgrading\/upgrade-to-1.17.x#activity-log-changes)                                                        |\n\n## Vault companion updates\n\nCompanion updates are Vault updates that live outside the main Vault binary.\n\n**None**.\n\n## Core updates\n\nFollow the learn more links for more information, or browse the list of\n[Vault tutorials updated to highlight changes for the most recent GA release](\/vault\/tutorials\/new-release).\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td style=>\n      Security patches\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Various security improvements to remediate varying severity and\n      informational findings from a 3rd party security audit.\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Vault Agent and Vault Proxy self-healing tokens\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Auto-authentication avoids agent\/proxy restarts and config changes by\n      automatically re-authenticating authN tokens to Vault.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/agent-and-proxy\/autoauth\">Vault Agent and Vault Proxy auto-auth<\/a>\n    <\/td>\n  <\/tr>\n\n  <\/tbody>\n<\/table>\n\n## Enterprise updates\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td style=>\n      Adaptive overload protection\n    <\/td>\n    <td style=>BETA<\/td>\n    <td style=>\n      Prevent client requests 
from overwhelming a variety of server resources\n      that could lead to poor server availability.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/concepts\/adaptive-overload-protection\">Adaptive overload protection overview<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      ACME Client Count\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      To improve clarity around client counts, Vault now separates ACME clients\n      from non-entity clients.\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td rowSpan={2} style=>\n      Public Key Infrastructure (PKI)\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Automate certificate lifecycle management for IoT\/EST enabled devices with\n      native EST protocol support.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/pki\/est\">Enrollment over Secure Transport (EST)<\/a> overview\n    <\/td>\n  <\/tr>\n\n   <tr>\n    <td style=>GA<\/td>\n    <td style=>\n      Submit custom metadata with certificate requests and store the additional\n      information in Vault for further analysis.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/api-docs\/secret\/pki#metadata\">PKI secrets engine API<\/a>\n    <\/td>\n  <\/tr>\n  <tr>\n    <td rowSpan={3} style=>\n      Resource management\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Vault now supports a greater number of namespaces and mounts for\n      large-scale Vault installations.\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>GA<\/td>\n    <td style=>\n      Use hierarchical mount paths to organize, manage, and control access to\n      secret engine objects.\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>GA<\/td>\n    <td style=>\n      Safely override the max entry size to set different limits for specific\n      storage entries that contain mount tables, auth tables and namespace\n      configuration data.\n      <br \/><br \/>\n      Learn more: <a 
href=\"\/vault\/docs\/configuration\/storage\/raft#max_mount_and_namespace_table_entry_size\"><code>max_mount_and_namespace_table_entry_size<\/code> parameter<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Transit\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Use cipher-based message authentication code (CMAC) with AES symmetric\n      keys in the Vault Transit plugin.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/transit#aes256-cmac\">CMAC support<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Plugin identity tokens\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Enable AWS, Azure, and GCP authentication flows with workload identity\n      federation (WIF) tokens from the associated secrets plugins without\n      explicitly configuring sensitive security credentials.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/aws#plugin-workload-identity-federation-wif\">Plugin WIF overview<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      LDAP Secrets Engine\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Use hierarchical paths with roles and set names to define policies that\n      map 1-1 to LDAP secrets engine roles.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/ldap#hierarchical-paths\">Hierarchical paths<\/a> overview\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Clock skew and lag detection\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Use the <code>sys\/health<\/code> and <code>sys\/ha-status<\/code> endpoints\n      to display lags in performance secondaries and performance standby nodes.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/enterprise\/consistency#clock-skew-and-replication-lag\">Clock skew and replication lag<\/a> overview\n    <\/td>\n  <\/tr>\n\n  <\/tbody>\n<\/table>\n\n## Feature deprecations and EOL\n\nDeprecated in 1.17 | Retired in 
1.17\n------------------ | ---------------\nNone               | Centrify Auth plugin\n\n@include 'release-notes\/deprecation-note.mdx'","site":"vault","answers_cleaned":"    layout  docs page title   1 17 0 release notes  description       Key updates for  Vault 1 17 0        Vault 1 17 0 release notes    GA date    2024 06 12   include  release notes intro mdx      Important changes    Change                                           Description                                                                                                                                                                                                                                                                                                                                                                                                                                                New default  1 17                                 Allowed audit headers now have unremovable defaults   vault docs upgrading upgrade to 1 17 x audit headers                                                                                          Opt out feature  1 17                             PKI sign intermediate now truncates  notAfter  field to signing issuer   vault docs upgrading upgrade to 1 17 x pki truncate                                                                        Beta feature deprecated  1 17                     Request limiter deprecated   vault docs upgrading upgrade to 1 17 x request limiter                                                                                                                 Known issue  1 17 0                               PKI OCSP GET requests can return HTTP redirect responses   vault docs upgrading upgrade to 1 17 x pki ocsp                                                                                          Known issue  1 17 0                               Vault Agent and Vault Proxy consume excessive amounts of CPU   vault docs 
upgrading upgrade to 1 17 x agent proxy cpu 1 17                                                                          Known issue  1 15 8   1 15 9  1 16 0   1 16 3     Autopilot upgrade for Vault Enterprise fails   vault docs upgrading upgrade to 1 16 x new nodes added by autopilot upgrades provisioned with the wrong version                                      Known issue  1 17 0   1 17 2                      Vault standby nodes not deleting removed entity aliases from in memory database   vault docs upgrading upgrade to 1 17 x dangling entity alias in memory                                            Known issue  1 17 0   1 17 3                      AWS Auth AssumeRole requires an external ID even if none is set   vault docs upgrading upgrade to 1 17 x aws auth role configuration requires an external id                                        Known Issue  0 7 0                                Duplicate identity groups created   vault docs upgrading upgrade to 1 17 x duplicate identity groups created when concurrent requests sent to the primary and pr secondary cluster                  Known Issue  0 7 0                                Manual entity merges fail   vault docs upgrading upgrade to 1 17 x manual entity merges sent to a pr secondary cluster are not persisted to storage                                                 Known Issue  1 17 3 1 17 4                        Some values in the audit logs not hmac d properly   vault docs upgrading upgrade to 1 17 x client tokens and token accessors audited in plaintext                                                   Known Issue  1 17 0 1 17 5                        Cached activation flags for secrets sync on follower nodes are not updated   vault docs upgrading upgrade to 1 17 x cached activation flags for secrets sync on follower nodes are not updated      New default  1 17 9                               Vault product usage metrics reporting   vault docs upgrading upgrade to 1 17 x product usage reporting    
                                                                                          Deprecation  1 17 9                                default report months  is deprecated for the  sys internal counters  API   vault docs upgrading upgrade to 1 17 x activity log changes                                                               Vault companion updates  Companion updates are Vault updates that live outside the main Vault binary     None        Core updates  Follow the learn more links for more information  or browse the list of  Vault tutorials updated to highlight changes for the most recent GA release   vault tutorials new release     table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td style         Security patches       td       td style  ENHANCED  td       td style         Various security improvements to remediate varying severity and       informational findings from a 3rd party security audit        td      tr      tr       td style         Vault Agent and Vault Proxy self healing tokens       td       td style  ENHANCED  td       td style         Auto authentication avoids agent proxy restarts and config changes by       automatically re authenticating authN tokens to Vault         br    br          Learn more   a href   vault docs agent and proxy autoauth  Vault Agent and Vault Proxy auto auth  a        td      tr       tbody    table      Enterprise updates   table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td style         Adaptive overload protection       td       td style  BETA  td       td style         Prevent client requests from overwhelming a variety of server resources       that could lead to poor server availability         br    br          Learn more   a href   vault docs concepts adaptive overload protection  
Adaptive overload protection overview  a        td      tr      tr       td style         ACME Client Count       td       td style  ENHANCED  td       td style         To improve clarity around client counts  Vault now separates ACME clients       from non entity clients        td      tr      tr       td rowSpan  2  style         Public Key Infrastructure  PKI        td       td style  GA  td       td style         Automate certificate lifecycle management for IoT EST enabled devices with       native EST protocol support         br    br          Learn more   a href   vault docs secrets pki est  Enrollment over Secure Transport  EST   a  overview       td      tr       tr       td style  GA  td       td style         Submit custom metadata with certificate requests and store the additional       information in Vault for further analysis         br    br          Learn more   a href   vault api docs secret pki metadata  PKI secrets engine API  a        td      tr     tr       td rowSpan  3  style         Resource management       td       td style  ENHANCED  td       td style         Vault now supports a greater number of namespaces and mounts for       large scale Vault installations        td      tr      tr       td style  GA  td       td style         Use hierarchical mount paths to organize  manage  and control access to       secret engine objects        td      tr      tr       td style  GA  td       td style         Safely override the max entry size to set different limits for specific       storage entries that contain mount tables  auth tables and namespace       configuration data         br    br          Learn more   a href   vault docs configuration storage raft max mount and namespace table entry size   code max mount and namespace table entry size  code  parameter  a        td      tr      tr       td style         Transit       td       td style  GA  td       td style         Use cipher based message authentication code  CMAC  with AES symmetric 
      keys in the Vault Transit plugin         br    br          Learn more   a href   docs secrets transit aes256 cmac  CMAC support  a        td      tr      tr       td style         Plugin identity tokens       td       td style  GA  td       td style         Enable AWS  Azure  and GCP authentication flows with workload identity       federation  WIF  tokens from the associated secrets plugins without       explicitly configuring sensitive security credentials         br    br          Learn more   a href   vault docs secrets aws plugin workload identity federation wif  Plugin WIF overview  a        td      tr      tr       td style         LDAP Secrets Engine       td       td style  GA  td       td style         Use hierarchical paths with roles and set names to define policies that       map 1 1 to LDAP secrets engine roles         br    br          Learn more   a href   vault docs secrets ldap hierarchical paths  Hierarchical paths  a  overview       td      tr      tr       td style         Clock skew and lag detection       td       td style  GA  td       td style         Use the  code sys health  code  and  code sys ha status  code  endpoints       to display lags in performance secondaries and performance standby nodes         br    br          Learn more   a href   vault docs enterprise consistency clock skew and replication lag  Clock skew and replication lag  a  overview       td      tr       tbody    table      Feature deprecations and EOL  Deprecated in 1 17   Retired in 1 17                                      None                 Centrify Auth plugin   include  release notes deprecation note mdx "}
{"questions":"vault GA date 2024 10 09 page title 1 18 0 release notes Key updates for Vault 1 18 0 layout docs Vault 1 18 0 release notes","answers":"---\nlayout: docs\npage_title: \"1.18.0 release notes\"\ndescription: |-\n  Key updates for  Vault 1.18.0\n---\n\n# Vault 1.18.0 release notes\n\n**GA date:** 2024-10-09\n\n@include 'release-notes\/intro.mdx'\n\n## Important changes\n\n| Change                      | Description                                                                                                          |\n|-----------------------------|----------------------------------------------------------------------------------------------------------------------|\n| New default (1.18.0)        | [Default activity log querying period](\/vault\/docs\/upgrading\/upgrade-to-1.18.x#default-activity-log-querying-period) |\n| New default (1.18.0)        | [Docker image no longer contains curl](\/vault\/docs\/upgrading\/upgrade-to-1.18.x#docker-image-no-longer-contains-curl) |\n| Beta feature removed (1.18) | [Request limiter removed](\/vault\/docs\/upgrading\/upgrade-to-1.18.x#request-limiter-configuration-removal)             |\n| New default (1.18.2)        | [Vault product usage metrics reporting](\/vault\/docs\/upgrading\/upgrade-to-1.18.x#product-usage-reporting)             |\n\n## Vault companion updates\n\nCompanion updates are Vault updates that live outside the main Vault binary.\n\n**None**.\n\n## Community updates\n\nFollow the learn more links for more information, or browse the list of\n[Vault tutorials updated to highlight changes for the most recent GA release](\/vault\/tutorials\/new-release).\n\n**None**.\n\n## Enterprise updates\n\n<table>\n  <thead>\n    <tr>\n      <th style=>Release<\/th>\n      <th style=>Update<\/th>\n      <th style=>Description<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n\n  <tr>\n    <td style=>\n      Adaptive overload protection\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Prevent client 
requests from overwhelming a variety of server resources\n      that could lead to poor server availability.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/concepts\/adaptive-overload-protection\">Adaptive overload protection overview<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Autopilot\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Overall stability improvements.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/concepts\/integrated-storage\/autopilot\">Autopilot overview<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Client count\n    <\/td>\n    <td style=>ENHANCED<\/td>\n    <td style=>\n      Improved clarity around metering and billing attribution.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/concepts\/client-count\/counting\">Client count calculations<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      PKI CMPv2\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Enable PKI support for automated certificate enrollment with CMPv2\n      protocols for 5G networks per 3GPP standards.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/pki\/cmpv2\">CMPv2 in the Vault PKI plugin<\/a>\n    <\/td>\n  <\/tr>\n\n   <tr>\n    <td style=>\n      Vault UI\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Use the Vault UI to configure AWS WIF plugins.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/aws#plugin-workload-identity-federation-wif\">AWS WIF<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      PostgreSQL plugin\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Use rootless rotation for PostgreSQL static roles so individual database\n      accounts can rotate their own passwords.\n      <br \/><br \/>\n      Learn more: <a href=\"\/vault\/docs\/secrets\/databases\/postgresql\">PostgreSQL plugin overview<\/a>\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n  
    KV Patch and Subkey support in Vault\u2019s GUI\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Configure GUI access to key names in the KV plugin for users without\n      granting read access to the values.\n    <\/td>\n  <\/tr>\n\n  <tr>\n    <td style=>\n      Vault Enterprise with HSM for ARM architecture\n    <\/td>\n    <td style=>GA<\/td>\n    <td style=>\n      Run Vault Enterprise on ARM machines with Hardware Security Modules.\n      <br \/><br \/>\n      Vault releases: <a href=\"https:\/\/releases.hashicorp.com\/vault\/\">releases.hashicorp.com\/vault<\/a>\n    <\/td>\n  <\/tr>\n\n  <\/tbody>\n<\/table>\n\n## Feature deprecations and EOL\n\nDeprecated in 1.18.x | Retired in 1.18.x\n-------------------- | ---------------\nNone                 | None\n\n@include 'release-notes\/deprecation-note.mdx'","site":"vault","answers_cleaned":"    layout  docs page title   1 18 0 release notes  description       Key updates for  Vault 1 18 0        Vault 1 18 0 release notes    GA date    2024 10 09   include  release notes intro mdx      Important changes    Change                        Description                                                                                                                                                                                                                                                                     New default  1 18 0            Default activity log querying period   vault docs upgrading upgrade to 1 18 x default activity log querying period      New default  1 18 0            Docker image no longer contains curl   vault docs upgrading upgrade to 1 18 x docker image no longer contains curl      Beta feature removed  1 18     Request limiter removed   vault docs upgrading upgrade to 1 18 x request limiter configuration removal                  New default  1 18 2            Vault product usage metrics reporting   vault docs upgrading upgrade to 1 18 x product usage reporting                   
 Vault companion updates  Companion updates are Vault updates that live outside the main Vault binary     None        Community updates  Follow the learn more links for more information  or browse the list of  Vault tutorials updated to highlight changes for the most recent GA release   vault tutorials new release      None        Enterprise updates   table     thead       tr         th style  Release  th         th style  Update  th         th style  Description  th        tr      thead     tbody      tr       td style         Adaptive overload protection       td       td style  GA  td       td style         Prevent client requests from overwhelming a variety of server resources       that could lead to poor server availability         br    br          Learn more   a href   vault docs concepts adaptive overload protection  Adaptive overload protection overview  a        td      tr      tr       td style         Autopilot       td       td style  ENHANCED  td       td style         Overall stability improvements         br    br          Learn more   a href   vault docs concepts integrated storage autopilot  Autopilot overview  a        td      tr      tr       td style         Client count       td       td style  ENHANCED  td       td style         Improved clarity around metering and billing attribution         br    br          Learn more   a href   vault docs concepts client count counting  Client count calculations  a        td      tr      tr       td style         PKI CMPv2       td       td style  GA  td       td style         Enable PKI support for automated certificate enrollment with CMPv2       protocols for 5G networks per 3G PP standards         br    br          Learn more   a href   vault docs secrets pki cmpv2  CMPv2 in the Vault PKI plugin  a        td      tr       tr       td style         Vault UI       td       td style  GA  td       td style         Use the Vault UI to configure AWS WIF plugins         br    br          Learn more   a href 
  vault docs secrets aws plugin workload identity federation wif  AWS WIF  a        td      tr      tr       td style         PostgreSQL plugin       td       td style  GA  td       td style         Use rootless rotation for PostgreSQL static roles so individual database       accounts can rotate their own passwords         br    br          Learn more   a href   vault docs secrets databases postgresql  PostgreSQL plugin overview  a        td      tr      tr       td style         KV Patch and Subkey support in Vault s GUI       td       td style  GA  td       td style         Configure GUI access to key names in the KV plugin for users without       granting read access to the values        td      tr      tr       td style         Vault Enterprise with HSM for ARM architecture       td       td style  GA  td       td style         Run Vault Enterprise on ARM machines with Hardware Security Modules         br    br          Vault releases   a href  https   releases hashicorp com vault   releases hashicorp com vault  a        td      tr       tbody    table      Feature deprecations and EOL  Deprecated in 1 18 x   Retired in 1 18 x                                        None                   None   include  release notes deprecation note mdx "}
{"questions":"vault Key rotation Vault stores different encryption keys for different purposes Vault uses key layout docs page title Key Rotation rotation to periodically change the keys according to a configured limit or in Learn about the details of key rotation within Vault","answers":"---\nlayout: docs\npage_title: Key Rotation\ndescription: Learn about the details of key rotation within Vault.\n---\n\n# Key rotation\n\nVault stores different encryption keys for different purposes. Vault uses key\nrotation to periodically change the keys according to a configured limit or in\nresponse to a potential leak or compromised service.\n\n## Relevant key definitions\n\nThere are four keys involved in key rotation:\n\n- **internal encryption key** - Encrypts and protects data written to the\n  storage backend.\n- **root key** - \"Master\" key that seals Vault and protects the internal\n  encryption key.\n- **unseal key** - A portion (share) of the root key used to reconstruct the\n  root key. By default, Vault uses the\n  [Shamir's secret sharing algorithm](https:\/\/en.wikipedia.org\/wiki\/Shamir's_Secret_Sharing)\n  to split the root key into 5 shares.\n- **upgrade key** - A short-lived copy of the internal encryption key created\n  during key rotation in high-availability deployments. Vault encrypts upgrade\n  keys using the previous internal encryption key.\n\n## How key rotation works\n\nVault supports online **rekey** and **rotate** operations to update the root\nkey, unseal keys, and backend encryption key even for high-availability\ndeployments. In replicated deployments, the active node performs the operations\nand standby nodes use an upgrade key to update their keys without requiring a\nmanual unseal operation.\n\n1. Rekeying begins with a configured split and threshold for unseal keys:\n    1. Vault receives the configured threshold of unseal keys.\n    1. Vault generates and splits the new root key.\n    1. 
Vault re-encrypts the internal encryption key with the new root key.\n    1. Vault returns the new unseal keys.\n1. Rotation begins:\n    1. Vault generates a new internal encryption key.\n    1. Vault adds the new encryption key to an internal keyring.\n    1. Vault creates a temporary **upgrade key** (if needed).\n\n![Key Rotate](\/img\/vault-key-rotate.png)\n\nOnce the rotation completes, Vault can encrypt new writes to the storage backend\nusing the new key, but still decrypt entries written under the previous key.\n\n<Tip title=\"Related API endpoints\">\n\n  ConfigureKeyRotation - [`POST:\/sys\/rotate\/config`](\/vault\/api-docs\/system\/rotate-config)\n\n<\/Tip>\n\n\n## NIST rotation guidance\n\nThe National Institute of Standards and Technology (NIST) recommends\nperiodically rotating encryption keys, even without a leak or compromise event.\n\nDue to the nature of AES-256-GCM encryption,\n[NIST publication 800-38D](https:\/\/csrc.nist.gov\/pubs\/sp\/800\/38\/d\/final)\nrecommends rotating keys **before** performing ~2<sup>32<\/sup> encryptions. By\ndefault, Vault monitors the `vault.barrier.estimated_encryptions` metric and\nautomatically rotates the backend encryption key before reaching 2<sup>32<\/sup>\nencryption operations.\n\nYou can approximate the `vault.barrier.estimated_encryptions` metric with the\nfollowing sum:\n\n<CodeBlockConfig hideClipboard>\n\n```text\nESTIMATED_OPS = PUT_EVENTS + CREATION_EVENTS + MERKLE_FLUSH_EVENTS + WAL_INDEX\n```\n\n<\/CodeBlockConfig>\n\nwhere:\n\n- **`PUT_EVENTS`** is the `vault.barrier.put` telemetry metric.\n- **`CREATION_EVENTS`** is the `vault.token.creation` metric where `token_type`\n  is `batch`.\n- **`MERKLE_FLUSH_EVENTS`** is the `merkle.flushDirty.num_pages` telemetry metric.\n- **`WAL_INDEX`** is the current write-ahead-log index.\n\n<Tip>\n\n  Vault periodically persists the number of encryptions to support rotation. 
The\n  save operation has a 1 second timeout to limit performance impact when Vault is\n  under heavy load. If you use seal wrap, persisting encryptions involves the seal\n  backend, which means that some seals, like HSMs, may routinely take longer than\n  1 second to respond. You can override the save timeout by setting the \n  `VAULT_ENCRYPTION_COUNT_PERSIST_TIMEOUT` environment variable on your Vault\n  server to a larger value, such as \"5s\".\n\n<\/Tip>","site":"vault","answers_cleaned":"    layout  docs page title  Key Rotation description  Learn about the details of key rotation within Vault         Key rotation  Vault stores different encryption keys for different purposes  Vault uses key rotation to periodically change the keys according to a configured limit or in response to a potential leak or compromised service      Relevant key definitions  There are four keys involved in key rotation       internal encryption key     Encrypts and protects data written to the   storage backend      root key      Master  key that seals Vault and protects the internal   encryption key      unseal key     A portion  share  of the root key used to reconstruct the   root key  By default  Vault uses the    Shamir s secret sharing algorithm  https   en wikipedia org wiki Shamir s Secret Sharing    to split the root key into 5 shares      upgrade key     A short lived copy of the internal encryption key created   during key rotation in high availability deployments  Vault encrypts upgrade   keys using the previous internal encryption key      How key rotation works  Vault supports online   rekey   and   rotate   operations to update the root key  unseal keys  and backend encryption key even for high availability deployments  In replicated deployments  the active node performs the operations and standby nodes use an upgrade key to update their keys without requiring a manual unseal operation   1  Rekeying begins with a configured split and threshold for unseal keys      1  
Vault receives the configured threshold of unseal keys      1  Vault generates and splits the new root key      1  Vault re encrypts the internal encryption key with the new root key      1  Vault returns the new unseal keys  1  Rotation begins      1  Vault generates a new internal encryption key      1  Vault adds the new encryption key to an internal keyring      1  Vault creates a temporary   upgrade key    if needed      Key Rotate   img vault key rotate png   Once the rotation completes  Vault can encrypt new writes to the storage backend using the new key  but still decrypt entries written under the previous key    Tip title  Related API endpoints      ConfigureKeyRotation     POST  sys rotate config    vault api docs system rotate config     Tip       NIST rotation guidance  The National Institute of Standards and Technology  NIST  recommends periodically rotating encryption keys  even without a leak or compromise event   Due to the nature of AES 256 GCM encryption   NIST publication 800 38D  https   csrc nist gov pubs sp 800 38 d final  recommends rotating keys   before   performing  2 sup 32  sup  encryptions  By default  Vault monitors the  vault barrier estimated encryptions  metric and automatically rotates the backend encryption key before reaching 2 sup 32  sup  encryption operations   You can approximate the  vault barrier estimated encryptions  metric with the following sum    CodeBlockConfig hideClipboard      text ESTIMATED OPS   PUT EVENTS   CREATE EVENTS   MERKLE FLUSH EVENTS   WAL INDEX        CodeBlockConfig   where        PUT EVENTS    is the  vault barrier put  telemetry metric       CREATION EVENTS    is the  vault token creation  metric where  token type    is  batch        MERKLE FLUSH EVENTS    is the  merkle flushDirty num pages  telemetry metric       WAL INDEX    is the current write ahead log index    Tip     Vault periodically persists the number of encryptions to support rotation  The   save operation has a 1 second timeout to 
limit performance impact when Vault is   under heavy load  If you use seal wrap  persisting encryptions involves the seal   backend  which means that some seals  like HSMs  may routinely take longer than   1 second to respond  You can override the save timeout by setting the     VAULT ENCRYPTION COUNT PERSIST TIMEOUT  environment variable on your Vault   server to a larger value  such as  5s      Tip "}
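The `ESTIMATED_OPS` approximation described in the key-rotation record above can be sketched in a few lines of Python. This is an illustrative sketch only: the function names and sample metric values are hypothetical, and a real deployment would read the four values from Vault telemetry.

```python
# Illustrative sketch of the vault.barrier.estimated_encryptions
# approximation: ESTIMATED_OPS = PUT_EVENTS + CREATE_EVENTS +
# MERKLE_FLUSH_EVENTS + WAL_INDEX. Names and sample values are hypothetical.

NIST_THRESHOLD = 2**32  # NIST SP 800-38D guidance for AES-256-GCM


def estimated_encryptions(put_events: int, create_events: int,
                          merkle_flush_events: int, wal_index: int) -> int:
    """Approximate vault.barrier.estimated_encryptions as the sum above."""
    return put_events + create_events + merkle_flush_events + wal_index


def should_rotate(estimated_ops: int, threshold: int = NIST_THRESHOLD) -> bool:
    """Rotate the backend encryption key before ~2^32 encryption operations."""
    return estimated_ops >= threshold


# Hypothetical telemetry readings (vault.barrier.put, batch token
# creations, merkle.flushDirty.num_pages, current WAL index):
ops = estimated_encryptions(3_000_000_000, 500_000_000,
                            700_000_000, 100_000_000)
print(ops, should_rotate(ops))  # 4_300_000_000 exceeds 2^32, so rotate
```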
{"questions":"vault Learn about integrated Raft storage in Vault Vault supports several options for durable information storage Each backend Integrated Raft storage page title Raft integrated storage layout docs offers pros cons advantages and trade offs For example some backends","answers":"---\nlayout: docs\npage_title: Raft integrated storage\ndescription: Learn about integrated Raft storage in Vault.\n---\n\n# Integrated Raft storage\n\nVault supports several options for durable information storage. Each backend\noffers pros, cons, advantages, and trade-offs. For example, some backends\nsupport high availability while others provide a more robust backup and\nrestoration process. Integrated storage is a \"built-in\" storage option that\nsupports backup\/restore workflows, high availability, and Enterprise replication\nfeatures without relying on third-party systems.\n\n## Raft protocol overview\n\n<Highlight>\n\n  [The Secret Lives of Data] has a nice visual explanation of Raft storage.\n\n<\/Highlight>\n\nRaft storage uses a [consensus protocol] based on [Paxos] and the work in\n[\"Raft: In search of an Understandable Consensus Algorithm\"] to provide CAP\n[consistency].\n\nRaft performance is bound by disk I\/O and network latency, and\ncomparable to Paxos. With stable leadership, committing a log entry requires a\nsingle round trip to half of the peer set.\n\nCompared to Paxos, Raft is designed to have fewer states and a simpler, more\nunderstandable algorithm that depends on the following elements:\n\n- **Log** - An ordered sequence of entries (replicated log) that tracks cluster\n  changes. For example, writing data is a new event, which creates a\n  corresponding log entry.\n\n- **Peer set** - The set of all members participating in log replication. All\n  server nodes are in the peer set of the local cluster.\n\n- **Leader** - At any given time, the peer set elects a single node to be the\n  leader. 
Leaders ingest new log entries, replicate the log to followers, and\n  manage when an entry should be committed. Leaders manage log replication, and\n  inconsistencies within replicated log entries may indicate an issue with the\n  leader.\n\n- **Quorum** - A majority of members from a peer set. For a peer set of size `N`,\n  quorum requires at least `ceil( (N + 1) \/ 2 )` members. For example, quorum in\n  a peer set of 5 members requires 3 nodes. If a cluster cannot achieve quorum,\n  **the cluster becomes unavailable** and cannot commit new logs.\n\n- **Committed entry** - A log entry that is replicated to a quorum of nodes. Log\n  entries are only applied once they are committed.\n\n- **Deterministic finite-state machine ([DFSM])** - A collection of known states\n  with predictable transitions between the states. In Raft, the DFSM transitions\n  between states whenever new logs are applied. By DFSM rules, multiple\n  applications of the same sequence of logs must always result in the same final\n  state.\n\n### Node states\n\nRaft nodes are always in one of the following states:\n\n- **follower** - All nodes start as a follower. Followers accept log entries\n  from a leader and cast votes for leader selection.\n- **candidate** - A node self-promotes to the candidate state whenever it goes\n  without receiving log entries for a given period of time. During\n  self-promotion, candidates request votes from the rest of their peer set.\n- **leader** - Nodes become leaders once they receive a quorum of votes as a\n  candidate.\n\n### Writing logs\n\nWith Raft, a log entry is an opaque binary blob. Once the peer set elects a\nleader, the peer set can accept new log entries. When clients ask the set to\nappend a new log entry, the leader writes the entry to durable storage and tries\nto replicate the data to a quorum of followers. 
Once the log entry is\n**committed**, the leader **applies** the log entry to a deterministic finite\nstate machine to maintain the cluster state.\n\n<Note title=\"Raft in Vault\">\n\n  Vault uses [BoltDB](https:\/\/github.com\/etcd-io\/bbolt) or WAL Raft as the\n  deterministic finite state machine and blocks writes until they are both\n  committed **and** applied.\n\n<\/Note>\n\n### Compacting logs\n\nTo avoid unbounded growth in the replicated logs, Raft saves the current state\nto snapshots then compacts the associated logs. Because the finite-state machine\nis deterministic, restoring a snapshot of the DFSM always results in the same\nstate as replaying the sequence of logs associated with the snapshot. Taking\nsnapshots lets Raft capture the DFSM state at any point in time and then remove\nthe logs used to reach that state, thereby compacting the log data.\n\n<Note title=\"Raft in Vault\">\n\n  Vault compacts logs automatically to prevent unbounded disk usage while also\n  minimizing the time spent replaying logs. Using BoltDB as the DFSM also keeps\n  the Vault snapshots lightweight because the Vault data is already persisted to\n  disk in BoltDB, the snapshot process just needs to truncate the Raft logs.\n\n<\/Note>\n\n### Quorum\n\nRaft consensus is fault-tolerant when a peer set has quorum. However, when a\nquorum of nodes is **not** available, the peer set cannot process log entries,\nelect leaders, or manage peer membership.\n\nFor example, suppose there are only 2 peers: A and B. To have quorum, both nodes\nmust participate, so the quorum size is 2. As a result, both nodes must agree\nbefore they can commit a log entry. If one of the nodes fails, the remaining\nnode cannot reach quorum; the peer set can no longer add or remove nodes or\ncommit additional log entries. When the peer set can no longer take action, it\nbecomes **unavailable**. 
Once a peer set becomes unavailable, it can only be\nrecovered manually by removing the failing node and restarting the remaining\nnode in bootstrap mode so it self-elects as leader.\n\n## Raft leadership in Vault\n\nWhen a single Vault server (node)\n[initializes](\/vault\/docs\/commands\/operator\/init\/#operator-init), it establishes\na cluster (peer set) of size 1 and elects itself as leader. Once the\ncluster has a leader, additional servers can join the cluster using an\nencrypted challenge\/answer workflow. For the join process to work, all nodes\nin a single Raft cluster must share the same seal configuration. If the cluster\nis configured to use auto-unseal, the join process automatically decrypts the\nchallenge and responds with the answer using the configured seal. For other seal\noptions, like a Shamir seal, nodes must have access to the unseal keys before\njoining so they can decrypt the challenge and respond with the decrypted answer.\n\nIn a [high availability](\/vault\/docs\/internals\/high-availability#design-overview)\nconfiguration, the active Vault node is the leader node and all standby nodes\nare followers.\n\n## Leadership elections\n\nNodes become the Raft leader through Raft leadership elections.\n\nAll nodes in a Raft cluster start as **followers**. Followers monitor leader\nhealth through a **leader heartbeat**. If a follower does not receive a heartbeat\nwithin the configured **heartbeat timeout**, the node becomes a **candidate**.\nCandidates watch for election notices from other nodes in the cluster. If the\n**election timeout** period expires, the candidate starts an election for\nleader. 
If the candidate gets responses from a quorum of other nodes in the\ncluster, the candidate becomes the new leader node.\n\nRaft leaders may step down voluntarily if the node cannot connect to a quorum\nof nodes within the **leader lease timeout** period.\n\nThe relevant timeout periods (heartbeat timeout, election timeout, leader lease\ntimeout) scale according to the [`performance_multiplier`](\/vault\/docs\/configuration\/storage\/raft#performance-multiplier) setting in your Vault configuration. By default,\nthe `performance_multiplier` is 5, which translates to the following timeout\nvalues:\n\nTimeout              | Default duration\n-------------------- | ----------------\nHeartbeat timeout    | 5 seconds\nElection timeout     | 5 seconds\nLeader lease timeout | 2.5 seconds\n\nWe recommend using the default multiplier unless one of the following is true:\n\n- Platform telemetry strongly indicates the default behavior is insufficient.\n- The reliability of your platform or network requires different behavior.\n\n## BoltDB Raft logs\n\nBoltDB is a single file database, which means BoltDB cannot shrink the file on\ndisk to recover space when you delete data. Instead, BoltDB notes the places\nwhere the deleted data was stored on a \"freelist\". On subsequent writes, BoltDB\nconsults the freelist to reuse old pages before allocating new space to persist\nthe data.\n\n<Warning title=\"BoltDB requires careful tuning\">\n\n1. On Vault clusters with high churn, the BoltDB freelist can become quite large\n   and the database file can become highly fragmented. Large freelists and\n   fragmented database files can slow BoltDB transactions and directly impact the\n   performance of your Vault cluster.\n1. On busy Vault clusters, where new followers struggle to sync Raft snapshots\n   before receiving subsequent snapshots from the leader, the BoltDB file is\n   susceptible to sudden bursts of writes. 
Not only will new followers potentially\n   fail to join quorum, Vault installations that do not provide for spiky file\n   growth or over-allocate and waste disk space will likely see poor performance.\n\n<\/Warning>\n\n## Write-ahead Raft logs\n\n@include 'alerts\/experimental.mdx'\n\nBy default, Vault uses the `raft-boltdb` library for BoltDB to store Raft logs,\nbut you can also configure Vault to use the\n[`raft-wal`](https:\/\/github.com\/hashicorp\/raft-wal) library for write-ahead Raft\nlogs.\n\nLibrary       | Filename(s)                                                | Storage directory\n------------- |------------------------------------------------------------| ----------------\n`raft-boltdb` | `raft.db`                                                  | `raft`\n`raft-wal`    | `wal-meta.db`, `XXXXXXXXXXXXXXXXXXXX-XXXXXXXXXXXXXXXX.wal` | `raft\/wal`\n\nThe `raft-wal` library is designed specifically for storing Raft logs. Rather\nthan using a freelist like `raft-boltdb`, `raft-wal` maintains a directory of\nfiles as its data store and compacts data over time to free up space when a\ngiven file is no longer needed.\n\nStoring data as files in a directory also means that the `raft-wal` library can\neasily increase or decrease the number of logs retained by leaders before\ntruncating and compacting without risking poor performance from spiky writes.\n\n## Quorum management in Vault\n\n### With autopilot\n\nWith the [autopilot](\/vault\/docs\/concepts\/integrated-storage\/autopilot) feature,\nVault uses a configurable set of parameters to confirm a node is healthy before\nconsidering it an eligible voter in the quorum list.\n\nAutopilot is enabled by default and includes stabilization logic for nodes\njoining the cluster:\n\n- A node joins the cluster as a non-voter.\n- The joined node syncs with the current Raft index.\n- Once the configured stability threshold is met, the node becomes a full voting\n  member of the cluster.\n\n<Warning title=\"Verify your 
stability threshold is appropriate\">\n\n  Setting the stability threshold too low can lead to cluster instability because\n  nodes may begin voting before they are fully in sync with the Raft index.\n\n<\/Warning>\n\nAutopilot also includes a dead server cleanup feature. When you enable dead\nserver cleanup with the\n[Autopilot API](\/vault\/api-docs\/system\/storage\/raftautopilot), Vault\nautomatically removes unhealthy nodes from the Raft cluster without manual\noperator intervention.\n\n### Without autopilot\n\nWithout autopilot, when a node joins a Raft cluster, the node tries to catch up\nwith the peer set just by replicating data received from the leader. While the\nnode is in the initial synchronization state, it cannot vote, but **is** counted for\nthe purposes of quorum. If multiple nodes join the cluster simultaneously (or\nwithin a small enough window) the cluster may exceed the expected failure\ntolerance, quorum may be lost, and the cluster can fail.\n\nFor example, consider a 3-node cluster with a large amount of data and a failure\ntolerance of 1. If 3 nodes join the cluster at the same time, the cluster size\nbecomes 6 with an expected failure tolerance of 2. But 3 of the nodes are still\nsynchronizing and cannot vote, which means the cluster loses quorum.\n\nIf you are not using autopilot, we strongly recommend that you ensure all new\nnodes have Raft indexes that are in sync (or very close to in sync) with the\nleader before adding additional nodes. 
You can check the status of current Raft\nindexes with the `vault status` CLI command.\n\n## Quorum size and failure tolerance\n\nThe table below compares quorum size and failure tolerance for various\ncluster sizes.\n\nServers | Quorum size | Failure tolerance\n:-----: | :---------: | :---------------:\n1       | 1           | 0\n2       | 2           | 0\n3       | 2           | 1\n4       | 3           | 1\n**5**   | **3**       | **2**\n6       | 4           | 2\n7       | 4           | 3\n\n<Highlight title=\"Best practice\">\n\n  For best performance, we recommend at least 5 servers for a standard\n  production deployment to maintain a minimum failure tolerance of 2. We also\n  recommend maintaining a cluster with an odd number of nodes to avoid voting\n  stalemates.\n\n  We **strongly discourage** single server deployments for production use due to\n  the high risk of data loss during failure scenarios.\n\n<\/Highlight>\n\nTo maintain failure tolerance during maintenance and other changes, we recommend\nsequentially scaling and reverting your cluster, 2 nodes at a time.\n\nFor example, if you start with a 5-node cluster:\n\n1. Scale the cluster to 7 nodes.\n1. Confirm the new nodes are joined and in sync with the rest of the peer set.\n1. Stop or destroy 2 of the older nodes.\n1. 
Repeat this process 2 more times to cycle out the rest of the pre-existing nodes.\n\nYou should always maintain quorum to limit the impact on failure tolerance when\nchanging or scaling your Vault instance.\n\n### Redundancy Zones\n\nIf you are using autopilot with [redundancy zones](\/vault\/docs\/enterprise\/redundancy-zones),\nthe total number of servers will differ from the table above and depends\non how many redundancy zones and servers per redundancy zone you choose.\n\n@include 'autopilot\/redundancy-zones.mdx'\n\n<Highlight title=\"Best practice\">\n\n  If you choose to use redundancy zones, we **strongly recommend** using at least 3\n  zones to ensure failure tolerance.\n\n<\/Highlight>\n\nRedundancy zones | Servers per zone | Quorum size | Failure tolerance | Optimistic failure tolerance\n:--------------: | :--------------: | :---------: | :---------------: | :--------------------------:\n2                | 2                | 2           | 0                 | 2\n3                | 2                | 2           | 1                 | 3\n3                | 3                | 2           | 1                 | 5\n5                | 2                | 3           | 2                 | 6\n\n[consensus protocol]: https:\/\/en.wikipedia.org\/wiki\/Consensus_(computer_science)\n[consistency]: https:\/\/en.wikipedia.org\/wiki\/CAP_theorem\n[\"Raft: In search of an Understandable Consensus Algorithm\"]: https:\/\/raft.github.io\/raft.pdf\n[paxos]: https:\/\/en.wikipedia.org\/wiki\/Paxos_%28computer_science%29\n[The Secret Lives of Data]: http:\/\/thesecretlivesofdata.com\/raft\n[DFSM]: https:\/\/en.wikipedia.org\/wiki\/Finite-state_machine","site":"vault","answers_cleaned":"    layout  docs page title  Raft integrated storage description  Learn about integrated Raft storage in Vault         Integrated Raft storage  Vault supports several options for durable information storage  Each backend offers pros  cons  advantages  and trade offs  For example  some 
backends support high availability while others provide a more robust backup and restoration process  Integrated storage is a  built in  storage option that supports backup restore workflows  high availability  and Enterprise replication features without relying on third party systems      Raft protocol overview   Highlight      The Secret Lives of Data  has a nice visual explanation of Raft storage     Highlight   Raft storage uses a  consensus protocol  based on  Paxos  and the work in   Raft  In search of an Understandable Consensus Algorithm   to provide CAP  consistency    Raft performance is bound by disk I O and network latency  and comparable to Paxos  With stable leadership  committing a log entry requires a single round trip to half of the peer set   Compared to Paxos  Raft is designed to have fewer states and a simpler  more understandable algorithm that depends on the following elements       Log     An ordered sequence of entries  replicated log  that tracks cluster   changes  For example  writing data is a new event  which creates a   corresponding log entry       Peer set     The set of all members participating in log replication  All   server nodes are in the peer set of the local cluster       Leader     At any given time  the peer set elects a single node to be the   leader  Leaders ingest new log entries  replicate the log to followers  and   manage when an entry should be committed  Leaders manage log replication and   inconsistencies within replicated log entries may indicate an issue with the   leader       Quorum     A majority of members from a peer set  For a peer set of size  N     quorum requires at least  ceil   N   1    2    members  For example  quorum in   a peer set of 5 members requires 3 nodes  If a cluster cannot achieve quorum      the cluster becomes unavailable   and cannot commit new logs       Committed entry     A log entry that is replicated to a quorum of nodes  Log   entries are only applied once they are committed       
Deterministic finite state machine   DFSM       A collection of known states   with predictable transitions between the states  In Raft  the DFSM transitions   between states whenever new logs are applied  By DFSM rules  multiple   applications of the same sequence of logs must always result in the same final   state       Node states  Raft nodes are always in one of following states       follower     All nodes start as a follower  Followers accept log entries   from a leader and cast votes for leader selection      candidate     A node self promotes to the candidate state whenever it goes   without receiving log entries for a given period of time  During   self promotion  candidates request votes from the rest of their peer set      leader     Nodes become leaders once they receive a quorum of votes as a   candidate       Writing logs  With Raft  a log entry is an opaque binary blob  Once the peer set elects a leader  the peer set can accept new log entries  When clients ask the set to append a new log entry  the leader writes the entry to durable storage and tries to replicate the data to a quorum of followers  Once the log entry is   committed    the leader   applies   the log entry to a deterministic finite state machine to maintain the cluster state    Note title  Raft in Vault      Vault uses  BoltDB  https   github com etcd io bbolt  or WAL Raft as the   deterministic finite state machine and blocks writes until they are both   committed   and   applied     Note       Compacting logs  To avoid unbounded growth in the replicated logs  Raft saves the current state to snapshots then compacts the associated logs  Because the finite state machine is deterministic  restoring a snapshot of the DFSM always results in the same state as replaying the sequence of logs associated with the snapshot  Taking snapshots lets Raft capture the DFSM state at any point in time and then remove the logs used to reach that state  thereby compacting the log data    Note title  Raft 
in Vault      Vault compacts logs automatically to prevent unbounded disk usage while also   minimizing the time spent replaying logs  Using BoltDB as the DFSM also keeps   the Vault snapshots lightweight because the Vault data is already persisted to   disk in BoltDB  the snapshot process just needs to truncate the Raft logs     Note       Quorum  Raft consensus is fault tolerant when a peer set has quorum  However  when a quorum of nodes is   not   available  the peer set cannot process log entries  elect leaders  or manage peer membership   For example  suppose there are only 2 peers  A and B  To have quorum  both nodes must participate  so the quorum size is 2  As a result  both nodes must agree before they can commit a log entry  If one of the nodes fails  the remaining node cannot reach quorum  the peer set can no longer add or remove nodes or commit additional log entries  When the peer set can no longer take action  it becomes   unavailable    Once a peer set becomes unavailable  it can only be recovered manually by removing the failing node and restarting the remaining node in bootstrap mode so it self elects as leader      Raft leadership in Vault  When a single Vault server  node   initializes   vault docs commands operator init  operator init   it establishes a cluster  peer set  of size 1 and self elects itself as leader  Once the cluster has a leader  additional servers can join the cluster using an encrypted challenge answer workflow  For the join process to work  all nodes in a single Raft cluster must share the same seal configuration  If the cluster is configured to use auto unseal  the join process automatically decrypts the challenge and responds with the answer using the configured seal  For other seal options  like a Shamir seal  nodes must have access to the unseal keys before joining so they can decrypt the challenge and respond with the decrypted answer   In a  high availability   vault docs internals high availability design overview  
configuration  the active Vault node is the leader node and all standby nodes are followers      Leadership elections  Nodes become the Raft leader through Raft leadership elections   All nodes in a Raft cluster start as   followers    Followers monitor leader health through a   leader heartbeat    If a follower does not receive a heartbeat within the  configured   heartbeat timeout    the node becomes a   candidate    Candidates watch for election notices from other nodes in the cluster  If the   election timeout   period expires  the candidate starts an election for leader  If the candidate gets responses from a quorum of other nodes in the cluster  the candidate becomes the new leader node   Raft leaders may step down voluntarily if the node cannot connect to a quorum of nodes with the   leader lease timeout   period   The relevant timeout periods  heartbeat timeout  election timeout  leader lease timeout  scale according to the   performance multiplier    vault docs configuration storage raft performance multiplier  setting in your Vault configuration  By default  the  performance multiplier  is 5  which translates to the following timeout values   Timeout                Default duration                                         Heartbeat timeout      5 seconds Election timeout       5 seconds Leader lease timeout   2 5 seconds  We recommend using the default multiplier unless one of the following is true     Platform telemetry strongly indicates the default behavior is insufficient    The reliability of your platform or network requires different behavior      BoltDB Raft logs  BoltDB is a single file database  which means BoltDB cannot shrink the file on disk to recover space when you delete data  Instead  BoltDB notes the places where the deleted data was stored on a  freelist   On subsequent writes  BoltDB consults the freelist to reuse old pages before allocating new space to persist the data    Warning title  BoltDB requires careful tuning    1  On Vault 
clusters with high churn  the BoltDB freelist can become quite large    and the database file can become highly fragmented  Large freelists and    fragmented database files can slow BoltDB transaction and directly impact the    performance of your Vault cluster  1  On busy Vault clusters  where new followers struggle to sync Raft snapshots    before receiving subsequent snapshots from the leader  the BoltDB file is    susceptible to sudden bursts of writes  Not only will new followers potentially    fail to join quorum  Vault installations that do not provide for spiky file    growth or over allocate and waste disk space will likely see poor performance     Warning      Write ahead Raft logs   include  alerts experimental mdx   By default  Vault uses the  raft boltdb  library for BoltDB to store Raft logs  but you can also configure Vault to use the   raft wal   https   github com hashicorp raft wal  library for write ahead Raft logs   Library         Filename s                                                   Storage directory                                                                                                raft boltdb     raft db                                                      raft   raft wal        wal meta db    XXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXX wal     raft wal   The  raft wal  library is designed specifically for storing Raft logs  Rather than using a freelist like  raft boltdb    raft wal  maintains a directory of files as its data store and compacts data over time to free up space when a given file is no longer needed   Storing data as files in a directory also means that the  raft wal  library can easily increase or decrease the number of logs retained by leaders before truncating and compacting without risking poor performance from spiky writes      Quorum management in Vault      With autopilot  With the  autopilot   vault docs concepts integrated storage autopilot  feature  Vault uses a configurable set of parameters to confirm a 
node is healthy before considering it an eligible voter in the quorum list   Autopilot is enabled by default and includes stabilization logic for nodes joining the cluster     A node joins the cluster as a non voter    The joined node syncs with the current Raft index    Once the configured stability threshold is met  the node becomes a full voting   member of the cluster    Warning title  Verify your stability threshold is appropriate      Setting the stability threshold too low can lead to cluster instability because   nodes may begin voting before they are fully in sync with the Raft index     Warning   Autopilot also includes a dead server cleanup feature  When you enable dead server cleanup with the  Autopilot API   vault api docs system storage raftautopilot   Vault automatically removes unhealthy nodes from the Raft cluster without manual operator intervention       Without autopilot  Without autopilot  when a node joins a Raft cluster  the node tries to catch up with the peer set just by replicating data received from the leader  While the node is in the initial synchronization state  it cannot vote  but   is   counted for the purposes of quorum  If multiple nodes join the cluster simultaneously  or within a small enough window  the cluster may exceed the expected failure tolerance  quorum may be lost  and the cluster can fail   For example  consider a 3 node cluster with a large amount of data and a failure tolerance of 1  If 3 nodes join the cluster at the same time  the cluster size becomes 6 with an expected failure tolerance of 2  But 3 of the nodes are still synchronizing and cannot vote  which means the cluster loses quorum   If you are not using autopilot  we strongly recommend that you ensure all new nodes have Raft indexes that are in sync  or very close to in sync  with the leader before adding additional nodes  You can check the status of current Raft indexes with the  vault status  CLI command      Quorum size and failure tolerance  The table 
below compares quorum size and failure tolerance for various cluster sizes   Servers   Quorum size   Failure tolerance                                           1         1             0 2         2             0 3         2             1 4         3             1   5         3             2   6         4             2 7         4             3   Highlight title  Best practice      For best performance  we recommend at least 5 servers for a standard   production deployment to maintain a minimum failure tolerance of 2  We also   recommend maintaining a cluster with an odd number of nodes to avoid voting   stalemates     We   strongly discourage   single server deployments for production use due to   the high risk of data loss during failure scenarios     Highlight   To maintain failure tolerance during maintenance and other changes  we recommend sequentially scaling and reverting your cluster  2 nodes at a time   For example  if you start with a 5 node cluster   1  Scale the cluster to 7 nodes  1  Confirm the new nodes are joined and in sync with the rest of the peer set  1  Stop or destroy 2 of the older nodes  1  Repeat this process 2 more times to cycle out the rest of the pre existing nodes   You should always maintain quorum to limit the impact on failure tolerance when changing or scaling your Vault instance       Redundancy Zones  If you are using autopilot with  redundancy zones   vault docs enterprise redundancy zones   the total number of servers will be different from the above  and is dependent on how many redundancy zones and servers per redundancy zone that you choose    include  autopilot redundancy zones mdx    Highlight title  Best practice      If you choose to use redundancy zones  we   strongly recommend   using at least 3   zones to ensure failure tolerance     Highlight   Redundancy zones   Servers per zone   Quorum size   Failure tolerance   Optimistic failure tolerance   
                              2                  2                  2             0                   2 3                  2                  2             1                   3 3                  3                  2             1                   5 5                  2                  3             2                   6   consensus protocol   https   en wikipedia org wiki Consensus  computer science   consistency   https   en wikipedia org wiki CAP theorem   Raft  In search of an Understandable Consensus Algorithm    https   raft github io raft pdf  paxos   https   en wikipedia org wiki Paxos  28computer science 29  The Secret Lives of Data   http   thesecretlivesofdata com raft  FSM   https   en wikipedia org wiki Finite state machine"}
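The quorum-size and failure-tolerance figures in the tables above follow directly from Raft majority arithmetic: a cluster of `n` voting servers needs `floor(n/2) + 1` votes to commit, and tolerates losing the rest. A minimal illustrative sketch (not part of Vault) that reproduces the table:

```python
# Raft majority math behind the quorum/failure-tolerance table:
# a cluster of n voting servers commits with a majority of floor(n/2) + 1
# votes, and keeps working while at least that many servers survive.

def quorum_size(servers: int) -> int:
    """Votes required to commit (a strict majority)."""
    return servers // 2 + 1

def failure_tolerance(servers: int) -> int:
    """Servers that can fail while a majority remains."""
    return servers - quorum_size(servers)

if __name__ == "__main__":
    for n in range(1, 8):
        print(f"{n} servers: quorum {quorum_size(n)}, "
              f"tolerates {failure_tolerance(n)} failures")
```

This also shows why odd cluster sizes are preferred: growing from 5 to 6 servers raises the quorum from 3 to 4 without improving failure tolerance.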
{"questions":"vault Due to the nature of Vault and the confidentiality of data it manages page title Security Model Security model the Vault security model is very critical The overall goal of Vault s security layout docs Learn about the security model of Vault","answers":"---\nlayout: docs\npage_title: Security Model\ndescription: Learn about the security model of Vault.\n---\n\n# Security model\n\nDue to the nature of Vault and the confidentiality of data it manages,\nthe Vault security model is very critical. The overall goal of Vault's security\nmodel is to provide [confidentiality, integrity, availability, accountability,\nauthentication](https:\/\/en.wikipedia.org\/wiki\/Information_security).\n\nThis means that data at rest and in transit must be secure from eavesdropping\nor tampering. Clients must be appropriately authenticated and authorized\nto access data or modify policies. All interactions must be auditable and traced\nuniquely back to the origin entity, and the system must be robust against intentional\nattempts to bypass any of its access controls.\n\n# Threat model\n\nThe following are the various parts of the Vault threat model:\n\n- Eavesdropping on any Vault communication. Client communication with Vault\n  should be secure from eavesdropping as well as communication from Vault to\n  its storage backend or between Vault cluster nodes.\n\n- Tampering with data at rest or in transit. Any tampering should be detectable\n  and cause Vault to abort processing of the transaction.\n\n- Access to data or controls without authentication or authorization. All requests\n  must be checked against the applicable security policies.\n\n- Access to data or controls without accountability. If audit logging\n  is enabled, requests and responses must be logged before the client receives\n  any secret material.\n\n- Confidentiality of stored secrets. Any data that leaves Vault to rest in the\n  storage backend must be safe from eavesdropping. 
In practice, this means all\n  data at rest must be encrypted.\n\n- Availability of secret material in the face of failure. Vault supports\n  running in a highly available configuration to avoid loss of availability.\n\nThe following are not considered part of the Vault threat model:\n\n- Protecting against arbitrary control of the storage backend. An attacker\n  that can perform arbitrary operations against the storage backend can\n  undermine security in any number of ways that are difficult or impossible to protect\n  against. As an example, an attacker could delete or corrupt all the contents\n  of the storage backend causing total data loss for Vault. The ability to control\n  reads would allow an attacker to snapshot in a well-known state and rollback state\n  changes if that would be beneficial to them.\n\n- Protecting against the leakage of the existence of secret material. An attacker\n  that can read from the storage backend may observe that secret material exists\n  and is stored, even if it is kept confidential.\n\n- Protecting against memory analysis of a running Vault. If an attacker is able\n  to inspect the memory state of a running Vault instance, then the confidentiality\n  of data may be compromised.\n\n- Protecting against flaws in external systems or services used by Vault.\n  Some authentication methods or secrets engines delegate sensitive operations to\n  systems external to Vault. If an attacker can compromise credentials or otherwise\n  exploit a vulnerability in these external systems, then the confidentiality or\n  integrity of data may be compromised.\n\n- Protecting against malicious plugins or code execution on the underlying host.\n  If an attacker can gain code execution or write privileges to the underlying host,\n  then the confidentiality or the integrity of data may be compromised.\n\n- Protecting against flaws in clients or systems that access Vault. 
If an attacker\n  can compromise a Vault client (e.g., system, browser) and obtain this client\u2019s Vault\n  credentials, they can access Vault with the level of privilege associated with this\n  client.\n\n- Protecting against Vault administrators supplying vulnerable or malicious configuration\n  data. Any data provided as configuration values to Vault's administrative endpoints\n  (e.g. [secret engines](\/vault\/docs\/secrets) configurations), or Vault's\n  configuration files should be validated. If an attacker can write to Vault's\n  configuration, then the confidentiality or integrity of data can be compromised.\n\n# External threat overview\n\nThe Vault architecture comprises three distinct systems:\n\n- Client: Speaks to Vault over an API.\n- Server: Provides an API and serves requests.\n- Storage backend: Utilized by the server to read and write data.\n\n\nThere is no mutual trust between the Vault client and server. Clients use\n[TLS](https:\/\/en.wikipedia.org\/wiki\/Transport_Layer_Security) to verify the\nidentity of the server and to establish a secure communication channel. Servers\nrequire that a client provides a client token for every request which is used\nto identify the client. A client that does not provide their token is only\npermitted to make login requests.\n\nAll server-to-server traffic between Vault instances within a cluster (i.e.,\nhigh availability, enterprise replication or integrated storage) uses\nmutually-authenticated TLS to ensure the confidentiality and integrity of data\nin transit. Nodes are authenticated prior to joining the cluster by an\n[unseal challenge](\/vault\/docs\/concepts\/integrated-storage#vault-networking-recap) or\na [one-time-use activation token](\/vault\/docs\/enterprise\/replication#security-model).\n\nThe storage backends used by Vault are also untrusted by design. Vault uses a\nsecurity barrier for all requests made to the backend. 
The security barrier\nautomatically encrypts all data leaving Vault using a 256-bit [Advanced\nEncryption Standard\n(AES)](https:\/\/en.wikipedia.org\/wiki\/Advanced_Encryption_Standard) cipher in\nthe [Galois Counter Mode\n(GCM)](https:\/\/en.wikipedia.org\/wiki\/Galois\/Counter_Mode) with 96-bit nonces.\nThe nonce is randomly generated for every encrypted object. When data is read\nfrom the security barrier, the GCM authentication tag is verified during the\ndecryption process to detect any tampering.\n\nDepending on the backend used, Vault may communicate with the backend over TLS\nto provide an added layer of security. In some cases, such as a file backend,\nthis is not applicable. Because storage backends are untrusted, an eavesdropper\nwould only gain access to encrypted data even if communication with the backend\nwas intercepted.\n\n# Internal threat overview\n\nWithin the Vault system, a critical security concern is an attacker attempting\nto gain access to secret material they are not authorized to. This is an internal\nthreat if the attacker is already permitted to some level of access to Vault, and is\nable to authenticate.\n\nWhen a client first authenticates with Vault, an auth method is used to verify\nthe identity of the client and to return a list of associated ACL policies.\nThis association is configured by operators of Vault ahead of time. For\nexample, GitHub users in the \"engineering\" team may be mapped to the\n\"engineering\" and \"ops\" Vault policies. Vault then generates a client token\nwhich is a randomly generated, serialized value and maps it to the policy list.\nThis client token is then returned to the client.\n\nOn each request, a client provides this token. Vault then uses it to check that\nthe token is valid and has not been revoked or expired, and generates an ACL\nbased on the associated policies. Vault uses a strict default deny\nenforcement strategy. 
This means unless an associated policy allows for a given action,\nit will be denied. Each policy specifies a level of access granted to a path in\nVault. When the policies are merged (if multiple policies are associated with a\nclient), the highest access level permitted is used. For example, if the\n\"engineering\" policy permits read\/update access to the \"eng\/\" path, and the\n\"ops\" policy permits read access to the \"ops\/\" path, then the user gets the\nunion of those. Policy is matched using the most specific defined policy, which\nmay be an exact match or the longest-prefix match glob pattern. See\n[Policy Syntax](\/vault\/docs\/concepts\/policies#policy-syntax) for more details.\n\nCertain operations are only permitted by \"root\" users, which is a distinguished\npolicy built into Vault. This is similar to the concept of a root user on a\nUnix system or an administrator on Windows. In cases where clients are provided\nwith root tokens or associated with the root policy, Vault supports the\nnotion of \"sudo\" privilege. As part of a policy, users may be granted \"sudo\"\nprivileges to certain paths, so that they can still perform security sensitive\noperations without being granted global root access to Vault.\n\nLastly, Vault supports using a [Two-person\nrule](https:\/\/en.wikipedia.org\/wiki\/Two-person_rule) for unsealing using [Shamir's\nSecret Sharing\ntechnique](https:\/\/en.wikipedia.org\/wiki\/Shamir's_Secret_Sharing). When Vault\nis started, it starts in a _sealed_ state. This means that the encryption key\nneeded to read and write from the storage backend is not yet known. The process\nof unsealing requires providing the root key so that the encryption key can\nbe retrieved. The risk of distributing the root key is that a single\nmalicious attacker with access to it can decrypt the entire Vault. Instead,\nShamir's technique allows us to split the root key into multiple shares or\nparts. 
The number of shares and the threshold needed is configurable, but by\ndefault Vault generates 5 shares, any 3 of which must be provided to\nreconstruct the root key.\n\nBy using a secret sharing technique, we avoid the need to place absolute trust\nin the holder of the root key, and avoid storing the root key at all. The\nroot key is only retrievable by reconstructing the shares. The shares are not\nuseful for making any requests to Vault, and can only be used for unsealing.\nOnce unsealed the standard ACL mechanisms are used for all requests.\n\nTo make an analogy, a bank puts security deposit boxes inside of a vault. Each\nsecurity deposit box has a key, while the vault door has both a combination and\na key. The vault is encased in steel and concrete so that the door is the only\npractical entrance. The analogy to Vault is that the cryptosystem is the\nsteel and concrete protecting the data. While you could tunnel through the\nconcrete or brute force the encryption keys, it would be prohibitively time\nconsuming. Opening the bank vault requires two-factors: the key and the\ncombination. Similarly, Vault requires multiple shares be provided to\nreconstruct the root key. 
Once unsealed, each security deposit box still\nrequires that the owner provide a key, and similarly the Vault ACL system protects\nall the secrets stored.","site":"vault"}
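The unseal scheme described in the security model (a root key split into 5 shares, any 3 of which reconstruct it) can be illustrated with a toy Shamir's Secret Sharing implementation. This is a simplified sketch over a prime field for illustration only, not Vault's actual implementation:

```python
# Toy Shamir's Secret Sharing: split a secret into 5 shares with a
# threshold of 3, mirroring Vault's default unseal configuration.
# Illustration only -- real implementations need constant-time field
# arithmetic and operate on the raw key bytes.
import random

PRIME = 2**127 - 1  # field modulus; the secret must be smaller than this

def split(secret, shares=5, threshold=3):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, shares + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any 3 of the 5 shares recover the secret, while fewer than 3 yield no information about it, which is why no single share holder can decrypt the Vault.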
{"questions":"vault using this feature it is useful to understand the intended use cases design page title Replication layout docs Vault Enterprise 0 7 adds support for multi datacenter replication Before Replication Vault enterprise Learn about the details of multi datacenter replication within Vault","answers":"---\nlayout: docs\npage_title: Replication\ndescription: Learn about the details of multi-datacenter replication within Vault.\n---\n\n# Replication (Vault enterprise)\n\nVault Enterprise 0.7 adds support for multi-datacenter replication. Before\nusing this feature, it is useful to understand the intended use cases, design\ngoals, and high level architecture.\n\nReplication is based on a primary\/secondary (1:N) model with asynchronous\nreplication, focusing on high availability for global deployments. The\ntrade-offs made in the design and implementation of replication reflect these\nhigh level goals.\n\n# Use cases\n\nVault replication is based on a number of common use cases:\n\n- **Multi-Datacenter Deployments**: A common challenge is providing Vault to\n  applications across many datacenters in a highly-available manner. Running a\n  single Vault cluster imposes high latency of access for remote clients,\n  availability loss or outages during connectivity failures, and limits\n  scalability.\n\n- **Backup Sites**: Implementing a robust business continuity plan around the\n  loss of a primary datacenter requires the ability to quickly and easily fail\n  over to a hot backup site.\n\n- **Scaling Throughput**: Applications that use Vault for\n  Encryption-as-a-Service or cryptographic offload may generate a very high\n  volume of requests for Vault. 
Replicating keys between multiple clusters\n  allows load to be distributed across additional servers to scale request\n  throughput.\n\n# Design goals\n\nBased on the use cases for Vault Replication, we had a number of design goals\nfor the implementation:\n\n- **Availability**: Global deployments of Vault require high levels of\n  availability, and can tolerate reduced consistency. During full connectivity,\n  replication is nearly real-time between the primary and secondary clusters.\n  Degraded connectivity between a primary and secondary does not impact the\n  primary's ability to service requests, and the secondary will continue to\n  service reads on last-known data.\n\n- **Conflict Free**: Certain replication techniques allow for potential write\n  conflicts to take place. Particularly, any active\/active configuration where\n  writes are allowed to multiple sites requires a conflict resolution strategy.\n  This varies from techniques that allow for data loss like last-write-wins, to\n  techniques that require manual operator resolution like allowing multiple\n  values per key. We avoid the possibility of conflicts to ensure there is no\n  data loss or manual intervention required.\n\n- **Transparent to Clients**: Vault replication should be transparent to\n  clients of Vault, so that existing thin clients work unmodified. 
The Vault\n  servers handle the logic of request forwarding to the primary when necessary,\n  and multi-hop routing is performed internally to ensure requests are\n  processed.\n\n- **Simple to Operate**: Operating a replicated cluster should be simple to\n  avoid administrative overhead and potentially introducing security gaps.\n  Setup of replication is very simple, and secondaries can handle being\n  arbitrarily behind the primary, avoiding the need for operator intervention\n  to copy data or snapshot the primary.\n\n# Architecture\n\nThe architecture of Vault replication is based on the design goals, focusing on\nthe intended use cases. When replication is enabled, a cluster is set as either\na _primary_ or _secondary_. The primary cluster is authoritative, and is the\nonly cluster allowed to perform actions that write to the underlying data\nstorage, such as modifying policies or secrets. Secondary clusters can service\nall other operations, such as reading secrets or sending data through\n`transit`, and forward any writes to the primary cluster. Disallowing multiple\nprimaries ensures the cluster is conflict free and has an authoritative state.\n\nThe primary cluster uses log shipping to replicate changes to all of the\nsecondaries. This ensures writes are visible globally in near real-time when\nthere is full network connectivity. If a secondary is down or unable to\ncommunicate with the primary, writes are not blocked on the primary and reads\nare still serviced on the secondary. This ensures the availability of Vault.\nWhen the secondary is initialized or recovers from degraded connectivity it\nwill automatically reconcile with the primary.\n\nLastly, clients can speak to any Vault server without a thick client. If a\nclient is communicating with a standby instance, the request is automatically\nforwarded to an active instance. Secondary clusters will service reads locally\nand forward any write requests to the primary cluster. 
The primary cluster is\nable to service all request types.\n\nAn important optimization Vault makes is to avoid replication of tokens or\nleases between clusters. Policies and secrets are the minority of data managed\nby Vault and tend to be relatively stable. Tokens and leases are much more\ndynamic, as they are created and expire rapidly. Keeping tokens and leases\nlocally reduces the amount of data that needs to be replicated, and distributes\nthe work of TTL management between the clusters. The caveat is that clients\nwill need to re-authenticate if they switch the Vault cluster they are\ncommunicating with.\n\n# Implementation details\n\nIt is important to understand the high-level architecture of replication to\nensure the trade-offs are appropriate for your use case. The implementation\ndetails may be useful for those who are curious or want to understand more\nabout the performance characteristics or failure scenarios.\n\nUsing replication requires a storage backend that supports transactional\nupdates, such as Consul. This allows multiple key\/value updates to be\nperformed atomically. Replication uses this to maintain a\n[Write-Ahead-Log][wal] (WAL) of all updates, so that the key update happens\natomically with the WAL entry creation. The WALs are then used to perform log\nshipping between the Vault clusters. When a secondary is closely synchronized\nwith a primary, Vault directly streams new WALs to be applied, providing near\nreal-time replication. A bounded set of WALs are maintained for the\nsecondaries, and older WALs are garbage collected automatically.\n\nWhen a secondary is initialized or is too far behind the primary there may not\nbe enough WALs to synchronize. To handle this scenario, Vault maintains a\n[merkle index][merkle] of the encrypted keys. Any time a key is updated or\ndeleted, the merkle index is updated to reflect the change. 
When a secondary\nneeds to reconcile with a primary, they compare their merkle indexes to\ndetermine which keys are out of sync. The structure of the index allows this to\nbe done very efficiently, usually requiring only two round trips and a small\namount of data. The secondary uses this information to reconcile and then\nswitches back into WAL streaming mode.\n\nPerformance is an important concern for Vault, so WAL entries are batched and\nthe merkle index is not flushed to disk with every operation. Instead, the\nindex is updated in memory for every operation and asynchronously flushed to\ndisk. As a result, a crash or power loss may cause the merkle index to become\nout of sync with the underlying keys. Vault uses the [ARIES][aries] recovery\nalgorithm to ensure the consistency of the index under those failure\nconditions.\n\nLog shipping traditionally requires the WAL stream to be synchronized, which\ncan introduce additional complexity when a new primary cluster is promoted.\nVault uses the merkle index as the source of truth, allowing the WAL streams to\nbe completely distinct and unsynchronized. This simplifies administration of\nVault Replication for operators.\n\n## Addressing\n\n### Cluster addresses on the primary\n\nWhen a cluster is enabled as replication primary, it persists a cluster\ndefinition to storage, under `core\/cluster\/replicated\/info` or\n`core\/cluster\/replicated-dr\/info`.  An optional field of the cluster definition\nis `primary_cluster_addr`, which may be provided in the enable request.\n\nPerformance standbys regularly issue heartbeat RPC requests to the active node, and\none of the arguments to the RPC is the local node's `cluster_addr`.  
The primary active node retains these cluster addresses received from its peers in an in-memory cache named `clusterPeerClusterAddrsCache` with a 15s expiry time.

### Cluster addresses on the secondary

When a secondary is enabled, its replication activation token (obtained from the primary) includes a `primary_cluster_addr` field. This is taken from the persisted cluster definition created when the primary was enabled; if no `primary_cluster_addr` was provided at that time, the token instead contains the `cluster_addr` of the primary's current active node at the time the activation token is created.

The secondary persists its own version of the cluster definition to storage, again under `core/cluster/replicated/info` or `core/cluster/replicated-dr/info`. Here the `primary_cluster_addr` field is the one obtained from the activation token.

The secondary active node regularly issues heartbeat RPC requests to the primary active node. In response, the primary returns a `ClusterAddrs` field comprising the contents of its `clusterPeerClusterAddrsCache` plus the current active node's `cluster_addr`. The secondary uses the response both to update its in-memory record of `known_primary_cluster_addrs` and to persist those addresses to storage under `core/primary-addrs/dr` or `core/primary-addrs/perf`. When this happens it logs the line `"replication: successful heartbeat"`, which includes the `ClusterAddrs` value obtained in the response.

### Secondary RPC address resolution

gRPC is given a list of target addresses to use when performing RPC requests. gRPC will discover that performance standbys can't service most RPCs, and will quickly weed out all but the active node's cluster address.
If the primary active node changes, gRPC will learn that its address is no longer viable and will automatically fail over to the new active node, assuming it's one of the known target addresses.

The secondary runs a background resolver goroutine that, every few seconds, builds the gRPC target list of addresses for the primary. Its output is logged at Trace level as `"loaded addresses"`.

To build the primary cluster address list, the resolver goroutine normally simply concatenates `known_primary_cluster_addrs` with the `primary_cluster_addr` in the cluster definition in storage.

There are two exceptions to that normal behaviour: the first time the goroutine goes through its loop, and when gRPC asks for a forced ResolveNow, which happens when it's unable to perform RPCs on any of its target addresses. In both cases, the resolver goroutine issues a special RemoteResolve RPC to the primary. This RPC is special because, unlike all the other replication RPCs, it can be serviced by performance standbys as well as the active node. In either case the node will return the `primary_cluster_addr` stored in the primary's cluster definition, if any, or failing that the current active node's cluster address. The result of the RemoteResolve call gets included in the list of target addresses the resolver gives to gRPC for regular RPCs to the primary.

# Caveats

~> **Mismatched Cluster Versions**: It is not safe to replicate from a newer version of Vault to an older version. When upgrading replicated clusters, ensure that upstream clusters are always on an older version of Vault than downstream clusters.
See [Upgrading Vault](/vault/docs/upgrading#replication-installations) for an example.

- **Read-After-Write Consistency**: All write requests are forwarded from secondaries to the primary cluster in order to avoid potential conflicts. While replication is near real-time, it is not instantaneous, meaning there is a potential for a client to write to a secondary and have a subsequent read return an old value. Secondaries attempt to mask this from an individual client making subsequent requests by stalling write requests until the write is replicated or a timeout (2 seconds) is reached. If the timeout is reached, the client will receive a warning. Clients can also take steps to protect against this; see [Consistency](/vault/docs/enterprise/consistency#mitigations).

- **Stale Reads**: Secondary clusters service reads based on their locally-replicated data. During normal operation, updates from a primary are received in near real-time by secondaries. However, during an outage or network service disruption, replication may stall and secondaries may have stale data.
  The cluster will automatically recover and reconcile any stale data once the outage is resolved, but reads in the intervening period may receive stale data.

[wal]: https://en.wikipedia.org/wiki/Write-ahead_logging
[merkle]: https://en.wikipedia.org/wiki/Merkle_tree
[aries]: https://en.wikipedia.org/wiki/Algorithms_for_Recovery_and_Isolation_Exploiting_Semantics
---
layout: docs
page_title: Limits and Maximums
description: Learn about the maximum number of objects within Vault.
---

# Vault limits and maximums

Vault imposes fixed upper limits on the size of certain fields and objects, and configurable limits on others. Vault also has upper bounds that are a consequence of its underlying storage. This page attempts to collect these limits, to assist in planning Vault deployments.

In some cases, the system will show performance problems in advance of the absolute limits being reached.

## Storage-Related limits

### Storage entry size

@include 'storage-entry-size.mdx'

Many of the other limits within Vault derive from the maximum size of a storage entry, as described in the next sections. It is possible to recover from an error where a storage entry has reached its maximum size by reconfiguring Vault or Consul to a larger maximum storage entry size.

### Mount point limits

All secret engine mount points, and all auth mount points, must each fit within a single storage entry. Each JSON object describing a mount takes about 500 bytes, but is stored in compressed form at a typical cost of about 75 bytes.
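As a back-of-envelope check (simple arithmetic on the figures above, not an official formula), the ~75-byte compressed cost per mount is what yields the mount-count ceilings quoted in the table below:

```python
# Rough ceiling: storage entry size divided by ~75 B compressed cost per mount.
ENTRY_SIZE = {"Consul default": 512 * 1024, "Integrated storage default": 1024 * 1024}
BYTES_PER_MOUNT = 75

for backend, size in ENTRY_SIZE.items():
    print(f"{backend}: ~{size // BYTES_PER_MOUNT} mounts")
# Consul default: ~6990 mounts  (the docs round this to ~7000)
# Integrated storage default: ~13981 mounts  (rounded to ~14000)
```

Per-mount options or long mount paths raise the per-mount byte cost and lower these ceilings accordingly.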
Each of (1) auth mounts, (2) secret engine mount points, (3) local-only auth methods, and (4) local-only secret engine mounts are stored separately, so the limit applies to each independently.

|                                              | Consul default (512 KiB) | Integrated storage default (1 MiB) |
| -------------------------------------------- | ------------------------ | ---------------------------------- |
| Maximum number of secret engine mount points | ~7000                    | ~14000                             |
| Maximum number of enabled auth methods       | ~7000                    | ~14000                             |
| Maximum mount point length                   | no enforced limit        | no enforced limit                  |

Specifying distinct per-mount options, or using long mount point paths, can increase the space required per mount.

The number of mount points can be monitored by reading the [`sys/auth`](/vault/api-docs/system/auth) and [`sys/mounts`](/vault/api-docs/system/mounts) endpoints from the root namespace, and the corresponding sub-paths for namespaces, like `namespace1/sys/auth`, `namespace1/sys/mounts`, etc.

Alternatively, use the [`vault.core.mount_table.num_entries`](/vault/docs/internals/telemetry/metrics/core-system#vault-core-mount_table-num_entries) and [`vault.core.mount_table.size`](/vault/docs/internals/telemetry/metrics/core-system#vault-core-mount_table-size) telemetry metrics to monitor the number of mount points and the size of each mount table.

### Namespace limits

@include 'namespace-limits.mdx'

### Entity and group limits

The metadata that may be attached to an identity entity or an entity group has the following constraints:

|                                       | Limit     |
| ------------------------------------- | --------- |
| Number of key-value pairs in metadata | 64        |
| Metadata key size                     | 128 bytes |
| Metadata value size                   | 512 bytes |

Vault shards the entities across 256 storage entries. This creates a hard limit of 128 MiB of storage space used for entities on Consul, or 256 MiB on integrated storage with its default settings. Entity aliases are stored inline in the Entity objects and so consume the same pool of storage. Entity definitions are compressed within each storage entry, and the pre-compression size varies with the number of entity aliases and the amount of metadata. Minimally-populated entities take about 200 bytes after compression.

Group definitions are stored separately, in their own pool of 256 storage entries. The size of each group object depends on the number of members and the amount of metadata. Group aliases and group membership information are stored inline in each Group object. A group with no metadata, holding 10 entities, will use about 500 bytes per group. A group holding 100 entities would instead consume about 4,000 bytes.

The following table shows a best-case estimate and a more conservative estimate for entities and groups. The number is slightly less than the amount that fits in one shard, to reflect the fact that the first shard to fill up will start inducing failures.
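The raw per-shard arithmetic behind these estimates (before the headroom discount just described) can be checked directly:

```python
SHARDS = 256  # entities are sharded across 256 storage entries
ENTRY_SIZE = {"Consul default": 512 * 1024, "Integrated storage default": 1024 * 1024}
BYTES_PER_ENTITY = {"best case": 200, "conservative": 500}

for backend, entry_size in ENTRY_SIZE.items():
    for case, cost in BYTES_PER_ENTITY.items():
        raw_ceiling = SHARDS * entry_size // cost
        print(f"{backend}, {case}: ~{raw_ceiling:,} entities (raw)")
# e.g. the Consul best case gives ~671,088 raw; the table lists ~610,000 because
# the first shard to fill up induces failures before the average is reached.
```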
This maximum will decrease if each entity has a large amount of metadata, or if each group has a large number of members.

|                                                                                          | Consul default (512 KiB) | Integrated storage default (1 MiB) |
| ---------------------------------------------------------------------------------------- | ------------------------ | ---------------------------------- |
| Maximum number of identity entities (best case, 200 bytes per entity)                    | ~610,000                 | ~1,250,000                         |
| Maximum number of identity entities (conservative case, 500 bytes per entity)            | ~250,000                 | ~480,000                           |
| Maximum number of identity entities (maximum permitted metadata, 41160 bytes per entity) | 670                      | 2,400                              |
| Maximum number of groups (10 entities per group)                                         | ~250,000                 | ~480,000                           |
| Maximum number of groups (100 entities per group)                                        | ~22,000                  | ~50,000                            |
| Maximum number of members in a group                                                     | ~11,500                  | ~23,000                            |

The number of entities can be monitored using Vault's [telemetry](/vault/docs/internals/telemetry#token-identity-and-lease-metrics); see `vault.identity.num_entities` (total) or `vault.identity.entities.count` (by namespace).

The cost of entity and group updates grows as the number of objects in each shard increases.
This cost can be monitored via the `vault.identity.upsert_entity_txn` and `vault.identity.upsert_group_txn` metrics.

Very large internal groups (more than 1,000 members) should be avoided, because the membership list of a group must reside in a single storage entry. Instead, consider using [external groups](/vault/docs/concepts/identity#external-vs-internal-groups) or splitting the group into multiple sub-groups.

### Token limits

One storage entry is used per token; there is thus no upper bound on the number of active tokens. There are no restrictions on the token metadata field, other than that the entire token must fit into one storage entry:

|                                       | Limit    |
| ------------------------------------- | -------- |
| Number of key-value pairs in metadata | no limit |
| Metadata key size                     | no limit |
| Metadata value size                   | no limit |
| Total size of token metadata          | 512 KiB  |

### Policy limits

The maximum size of a policy is limited by the storage entry size.
Policy lists that appear in tokens or entities must fit within a single storage entry.

|                                                | Consul default (512 KiB) | Integrated storage default (1 MiB) |
| ---------------------------------------------- | ------------------------ | ---------------------------------- |
| Maximum policy size                            | 512 KiB                  | 1 MiB                              |
| Maximum number of policies per namespace       | no limit                 | no limit                           |
| Maximum number of policies per token           | ~14,000                  | ~28,000                            |
| Maximum number of policies per entity or group | ~14,000                  | ~28,000                            |

Each time a token is used, Vault must assemble the collection of policies attached to that token, to the entity, to any groups that the entity belongs to, and recursively to any groups that contain those groups. Very large numbers of policies are possible, but can cause Vault's response time to increase.
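The ~14,000 and ~28,000 per-token ceilings work out to roughly 37 bytes per policy-list entry, which you can sanity-check against the storage entry sizes (an informal derivation from the table's numbers, not a documented constant):

```python
ENTRY_SIZE = {"Consul default": 512 * 1024, "Integrated storage default": 1024 * 1024}
MAX_POLICIES = {"Consul default": 14_000, "Integrated storage default": 28_000}

for backend in ENTRY_SIZE:
    per_policy = ENTRY_SIZE[backend] / MAX_POLICIES[backend]
    print(f"{backend}: ~{per_policy:.1f} bytes per policy entry")
# Both backends work out to ~37.4 bytes, i.e. roughly one short policy name
# per slot; longer policy names would lower the effective ceiling.
```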
You can monitor the [`vault.core.fetch_acl_and_token`](/vault/docs/internals/telemetry#core-metrics) metric to determine whether the time required to assemble an access control list is becoming excessive.

### Versioned key-value store (kv-v2 secret engine)

|                                                          | Limit                                                      |
| -------------------------------------------------------- | ---------------------------------------------------------- |
| Number of secrets                                        | no limit, up to available storage capacity                 |
| Maximum size of one version of a secret                  | slightly less than one storage entry (512 KiB or 1024 KiB) |
| Number of versions of a secret                           | default 10; configurable per-secret or per-mount           |
| Maximum number of versions (not checked when configured) | at least 24,000                                            |

Each version of a secret must fit in a single storage entry; the key-value pairs are converted to JSON before storage.

Version metadata consumes 21 bytes per version and must fit in a single storage entry, separate from the stored data.

Each secret also has version-agnostic metadata. This data can contain a `custom_metadata` field of user-provided key-value pairs.
Vault imposes the following custom metadata limits:

|                                           | Limit     |
| ----------------------------------------- | --------- |
| Number of custom metadata key-value pairs | 64        |
| Custom metadata key size                  | 128 bytes |
| Custom metadata value size                | 512 bytes |

### Transit secret engine

The maximum size of a Transit ciphertext or plaintext is limited by Vault's maximum request size, as described [below](#request-size).

All archived versions of a single key must fit in a single storage entry. This limit depends on the key size.

| Key length           | Consul default (512 KiB) | Integrated storage default (1 MiB) |
| -------------------- | ------------------------ | ---------------------------------- |
| aes128-gcm96 keys    | 2008                     | 4017                               |
| aes256-gcm96 keys    | 1865                     | 3731                               |
| chacha-poly1305 keys | 1865                     | 3731                               |
| ed25519 keys         | 1420                     | 2841                               |
| ecdsa-p256 keys      | 817                      | 1635                               |
| ecdsa-p384 keys      | 659                      | 1318                               |
| ecdsa-p521 keys      | 539                      | 1078                               |
| 1024-bit RSA keys    | 169                      | 333                                |
| 2048-bit RSA keys    | 116                      | 233                                |
| 4096-bit RSA keys    | 89                       | 178                                |

## Other limits

### Request size

The maximum size of an HTTP request sent to Vault is limited by the `max_request_size` option in the [listener stanza](/vault/docs/configuration/listener/tcp). It defaults to 32 MiB.
This value, minus the overhead of the HTTP request itself, places an upper bound on any Transit operation, and on the maximum size of any key-value secret.

### Request duration

The maximum duration of a Vault operation is [`max_request_duration`](/vault/docs/configuration/listener/tcp), which defaults to 90 seconds. If a particular secret engine takes longer than this to perform an operation on a remote service, the Vault client will see a failure.

The environment variable [`VAULT_CLIENT_TIMEOUT`](/vault/docs/commands#vault_client_timeout) sets a client-side maximum duration as well, which is 60 seconds by default.

### Cluster and replication limits

There are no implementation limits on the maximum size of a cluster, or the maximum number of replicas associated with a primary. However, each replica or performance standby adds considerable overhead to the active node, as each write must be duplicated to all standbys. The overhead of resyncing multiple replicas at once is also high.

Monitor the active Vault node's CPU and network utilization, as well as the lag between the primary's last WAL and each replica's WAL, to determine whether the practical maximum number of replicas has been exceeded.

|                                        | Limit                                  |
| -------------------------------------- | -------------------------------------- |
| Maximum cluster size                   | no limit, up to active node capability |
| Maximum number of DR replicas          | no limit, up to active node capability |
| Maximum number of performance replicas | no limit, up to active node capability |

### Lease limits

A systemwide [maximum TTL](/vault/docs/configuration#max_lease_ttl), and a [maximum TTL per mount point](/vault/api-docs/system/mounts#max_lease_ttl-1), can be configured.

Although no technical maximum exists, high lease counts can cause degradation in system performance.
We recommend short default time-to-live values on tokens and leases to avoid a large backlog of unexpired leases, or a large number of simultaneous expirations.

|                                    | Limit                     |
| ---------------------------------- | ------------------------- |
| Maximum number of leases           | advisory limit at 256,000 |
| Maximum duration of lease or token | 768 hours by default      |

The current number of unexpired leases can be monitored via the [`vault.expire.num_leases`](/vault/docs/internals/telemetry#token-identity-and-lease-metrics) metric.

### Transform limits

The Transform secret engine obeys the [FF3-1 minimum and maximum sizes on the length of an input](/vault/docs/secrets/transform#input-limits), which are a function of the alphabet size.

### External plugin limits

The [plugin system](/vault/docs/plugins) launches a separate process initiated by Vault that communicates over RPC. For each secret engine and auth method that's enabled as an external plugin, Vault will spawn a process on the host system. For the Database Secrets Engines, external database plugins will spawn a process for every configured connection.

Regardless of plugin type, each of these processes will incur resource overhead on the system, including but not limited to resources such as CPU, memory, networking, and file descriptors. There's no specific limit on the number of secrets engines, auth methods, or configured database connections that can be enabled. This ultimately depends on the particular plugin's resource utilization, the extent to which that plugin is being called, and the available resources on the system. For plugins of the same type, each additional process will incur a roughly linear increase in resource utilization.
This assumes the usage of each\nplugin of the same type is similar.","site":"vault"}
{"questions":"vault Recommended patterns Help keep your Vault environments operating effectively by implementing the following best practice so you avoid common anti patterns page title Recommended patterns layout docs Follow these recommended patterns to effectively operate Vault","answers":"---\nlayout: docs\npage_title: Recommended patterns\ndescription: Follow these recommended patterns to effectively operate Vault.\n---\n\n# Recommended patterns\n\nHelp keep your Vault environments operating effectively by implementing the following best practices so you avoid common anti-patterns.\n\n| Description  \t| Applicable Vault edition \t|\n|---\t|---\t|\n| [Adjust the default lease time](#adjust-the-default-lease-time) \t|  All \t|\n| [Use identity entities for accurate client count](#use-identity-entities-for-accurate-client-count)  \t| Enterprise, HCP\t|\n| [Increase IOPS](#increase-iops)  \t| Enterprise, Community \t|\n| [Enable disaster recovery](#enable-disaster-recovery)  \t| Enterprise \t|\n| [Test disaster recovery](#test-disaster-recovery)  \t| Enterprise \t|\n| [Improve upgrade cadence](#improve-upgrade-cadence)  \t| Enterprise, Community |\n| [Test before upgrades](#test-before-upgrades)  \t| Enterprise, Community \t|\n| [Rotate audit device logs](#rotate-audit-device-logs)  \t| Enterprise, Community\t|\n| [Monitor metrics](#monitor-metrics)   \t| Enterprise, Community \t|\n| [Establish usage baseline](#establish-usage-baseline)  \t| Enterprise, Community\t|\n| [Minimize root token use](#minimize-root-token-use)  \t| All \t|\n| [Rekey when necessary](#rekey-when-necessary)  \t| All \t|\n\n## Adjust the default lease time\n\nThe default lease time in Vault is 32 days or 768 hours. This time allows for some operations, such as re-authentication or renewal.\nSee [lease](\/vault\/docs\/concepts\/lease) documentation for more information.\n\n**Recommended pattern:**\n\nYou should tune the lease TTL value for your needs. 
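For example, conservative server-wide defaults can be set with the top-level `default_lease_ttl` and `max_lease_ttl` options in the Vault server configuration (the values below are illustrative, not recommendations); per-mount values applied with `vault secrets tune` override them:

```hcl
# Illustrative server-wide lease defaults; tune per mount as needed.
default_lease_ttl = "1h"
max_lease_ttl     = "24h"
```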
Vault holds leases in memory until the lease expires. \nWe recommend keeping TTLs as short as the use case will allow. \n- [Auth tune](\/vault\/docs\/commands\/auth\/tune)\n- [Secrets tune](\/vault\/docs\/commands\/secrets\/tune)\n\n<Note>\nTuning or adjusting TTLs does not retroactively affect tokens that were issued. New tokens must be issued after tuning TTLs. \n<\/Note>\n\n**Anti-pattern issue:**\n\nIf you create leases without changing the default time-to-live (TTL), leases will live in Vault until the default lease time is up. \nDepending on your infrastructure and available system memory, using the default or long TTL may cause performance issues as Vault stores\nleases in memory. \n\n## Use identity entities for accurate client count\n\nEach Vault client may have multiple accounts with the auth methods enabled on the Vault server.\n\n![Entity](\/img\/vault-entity-waf1.png)\n\n**Recommended pattern:**\n\nSince each token adds to the client count, and each unique authentication issues a token, you should use identity entities to create aliases that connect each login to a single identity. \n\n  - [Client count](\/vault\/docs\/concepts\/client-count)\n  - [Vault identity concepts](\/vault\/docs\/concepts\/identity)\n  - [Vault Identity secrets engine](\/vault\/docs\/secrets\/identity)\n  - [Identity: Entities and groups tutorial](\/vault\/tutorials\/auth-methods\/identity)\n\n**Anti-pattern issue:**\n\nWhen you do not use identity entities, each new client is counted as a separate identity when using another auth method not linked to the user's entity. \n\n## Increase IOPS\n\nIOPS (input\/output operations per second) measures performance for Vault cluster members. Vault is bound by the IO limits of the storage backend rather than the compute requirements. 
\n\n**Recommended pattern:**\n\nUse the HashiCorp reference guidelines for Vault servers' hardware sizing and network considerations.\n\n- [Vault with Integrated storage reference architecture](\/vault\/tutorials\/day-one-raft\/raft-reference-architecture#system-requirements)\n- [Performance tuning](\/vault\/tutorials\/operations\/performance-tuning)\n- [Transform secrets engine](\/vault\/docs\/concepts\/transform)\n\n<Note>\n\nDepending on the client count, the Transform (Enterprise) and Transit secret engines can be resource-intensive.\n\n<\/Note>\n\n**Anti-pattern issue:**\n\nLimited IOPS can significantly degrade Vault\u2019s performance.\n\n## Enable disaster recovery\n\nHashiCorp Vault's highly available (HA) [Integrated storage (Raft)](\/vault\/docs\/concepts\/integrated-storage) \nbackend provides intra-cluster data replication across cluster members. Integrated Storage provides Vault with \nhorizontal scalability and failure tolerance, but it does not provide backup for the entire cluster. Not utilizing \ndisaster recovery for your production environment will negatively impact your organization's Recovery Point \nObjective (RPO) and Recovery Time Objective (RTO). \n\n**Recommended pattern:** \n\nFor cluster-wide issues (e.g., network connectivity), Vault Enterprise Disaster Recovery (DR) replication \nprovides a warm standby cluster containing all primary cluster data. 
The DR cluster does not service reads \nor writes, but you can promote it to replace the primary cluster when needed.\n\n- [Disaster recovery replication setup](\/vault\/tutorials\/day-one-raft\/disaster-recovery)\n- [Disaster recovery (DR) replication](\/vault\/docs\/enterprise\/replication#disaster-recovery-dr-replication)\n- [DR replication API documentation](\/vault\/api-docs\/system\/replication\/replication-dr)\n\nWe also recommend periodically creating data snapshots to protect against data corruption.\n\n- [Vault data backup standard procedure](\/vault\/tutorials\/standard-procedures\/sop-backup)\n- [Automated integrated storage snapshots](\/vault\/docs\/enterprise\/automated-integrated-storage-snapshots)\n- [\/sys\/storage\/raft\/snapshot-auto](\/vault\/api-docs\/system\/storage\/raftautosnapshots)\n\n**Anti-pattern issue:**\n\nIf you do not enable disaster recovery and catastrophic failure occurs, your use cases will encounter longer downtime duration and costs associated with not serving Vault clients in your environment.\n\n## Test disaster recovery\n\nYour disaster recovery (DR) solution is a key part of your overall disaster recovery plan.\n\nDesigning and configuring your Vault disaster recovery solution is only the first step. You also need to validate the DR solution, as not doing so can negatively impact your organization's Recovery Point Objective (RPO) and Recovery Time Objective (RTO).\n\n**Recommended pattern:**\n\nVault's Disaster Recovery (DR) replication mode provides a warm standby for \nfailover if the primary cluster experiences catastrophic failure. You should \nperiodically test the disaster recovery replication cluster by completing the \nfailover and failback procedure. 
\n\n- [Vault disaster recovery replication failover and failback tutorial](\/vault\/tutorials\/enterprise\/disaster-recovery-replication-failover)\n- [Vault Enterprise replication](\/vault\/docs\/enterprise\/replication)\n- [Monitoring Vault replication](\/vault\/tutorials\/monitoring\/monitor-replication)\n\nYou should establish standard operating procedures for restoring a Vault cluster from a snapshot. Restoring from a snapshot addresses data corruption or sabotage, which disaster recovery replication might be unable to protect against.\n\n- [Standard procedure for restoring a Vault cluster](\/vault\/tutorials\/standard-procedures\/sop-restore)\n\n**Anti-pattern issue:**\n\nIf you don't test your disaster recovery solution, your key stakeholders will not feel confident they can effectively execute the disaster recovery plan. Testing the DR solution also helps your team to remove uncertainty around recovering the system during an outage.\n\n## Improve upgrade cadence\n\nWhile it might be tempting to upgrade Vault only when you have spare capacity, not having a frequent upgrade cadence can impact your Vault performance and security.\n\n**Recommended pattern:** \n\nWe recommend upgrading to the latest version of Vault. 
Subscribing to releases in [Vault's GitHub repository](https:\/\/github.com\/hashicorp\/vault) and to notifications from [HashiCorp Vault discuss](https:\/\/discuss.hashicorp.com\/c\/release-notifications\/57) will inform you when we release a new Vault version.\n\n- [Vault upgrade guides](\/vault\/docs\/upgrading)\n- [Vault feature deprecation notice and plans](\/vault\/docs\/deprecation)\n\n**Anti-pattern issue:**\n\nWhen you do not keep a regular upgrade cadence, your Vault environment could be missing key features or improvements.\n\n- Missing patches for bugs or vulnerabilities as documented in the [CHANGELOG](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/CHANGELOG.md).\n- New features to improve workflow.\n- Having to use version-specific rather than the latest documentation.\n- Some educational resources require a specific minimum Vault version.\n- Updates may require a stepped approach that uses an intermediate version before installing the latest binary.\n\n## Test before upgrades\n\nWe recommend testing Vault in a sandbox environment before deploying to production.\n\nAlthough it might be faster to upgrade immediately in production, testing will help identify any compatibility issues.\n\nBe aware of the [CHANGELOG](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/CHANGELOG.md) and account for any new features, improvements, known issues and bug fixes in your testing.\n\n**Recommended pattern:** \n\nTest new Vault versions in sandbox environments before upgrading in production and follow our upgrading documentation.\n\nWe recommend adding a testing phase to your standard upgrade procedure.\n\n- [Vault upgrade standard procedure](\/vault\/tutorials\/standard-procedures\/sop-upgrade)\n- [Upgrading Vault](\/vault\/docs\/upgrading)\n\n**Anti-pattern issue:**\n\nWithout adequate testing before upgrading in production, you risk compatibility and performance issues. 
\n\n<Warning>\n\nThis could lead to downtime or degradation in your production Vault environment.\n\n<\/Warning>\n\n## Rotate audit device logs\n\nAudit devices in Vault maintain a detailed log of every client request and server response.\n\nIf you allow the logs for audit devices to run perpetually without rotating, you may face a blocked audit device if the filesystem storage becomes exhausted.\n\n**Recommended pattern:**\n\nInspect and rotate audit logs periodically.\n\n- [Blocked audit devices tutorial](\/vault\/tutorials\/monitoring\/blocked-audit-devices)\n- [blocked audit devices](\/vault\/docs\/audit#blocked-audit-devices)\n\n**Anti-pattern issue:**\n\nVault will not respond to requests if no enabled audit device can record them.\n\nAn audit device can exhaust local storage if its log is not maintained and rotated over time.\n\n## Monitor metrics\n\nRelying solely on Vault operational logs and data in the Vault UI will give you only a partial picture of the cluster's performance.\n\n**Recommended pattern:**\n\nContinuous monitoring will allow organizations to detect minor problems and promptly resolve them. \nMigrating from reactive to proactive monitoring will help to prevent system failures. Vault has multiple outputs \nthat help monitor the cluster's activity: audit logs, operational logs, and telemetry data. 
This data can work \nwith a SIEM (security information and event management) tool for aggregation, inspection, and alerting capabilities.\n\n- [Telemetry](\/vault\/docs\/internals\/telemetry#secrets-engines-metric)\n- [Telemetry metrics reference](\/vault\/tutorials\/monitoring\/telemetry-metrics-reference)\n\nAdding a monitoring solution:\n- [Audit device logs and incident response with elasticsearch](\/vault\/tutorials\/monitoring\/audit-elastic-incident-response)\n- [Monitor telemetry & audit device log data](\/vault\/tutorials\/monitoring\/monitor-telemetry-audit-splunk)\n- [Monitor telemetry with Prometheus & Grafana](\/vault\/tutorials\/monitoring\/monitor-telemetry-grafana-prometheus)\n\n<Note>\n\n  Vault logs to standard output and standard error by default, which the systemd journal captures automatically. You can also instruct Vault to redirect operational log writes to a file. \n\n<\/Note>\n\n**Anti-pattern issue:** \n\nHaving partial insight into cluster activity can leave the business in a reactive state.\n\n## Establish usage baseline\n\nA baseline provides insight into current utilization and thresholds. Telemetry metrics are valuable, especially when monitored over time. You can use telemetry metrics to gather a baseline of cluster activity, while alerts inform you of abnormal activity. \n\n**Recommended pattern:**\n\nTelemetry information can also be streamed directly from Vault to a range of metrics aggregation solutions and \nsaved for aggregation and inspection. 
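As one illustration, the server's `telemetry` stanza can expose metrics for a Prometheus-compatible scraper (the values here are illustrative, not recommendations):

```hcl
# Illustrative telemetry stanza: retain Prometheus-format metrics and
# drop the hostname prefix so series aggregate cleanly across nodes.
telemetry {
  prometheus_retention_time = "30s"
  disable_hostname          = true
}
```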
\n\n- [Vault usage metrics](\/vault\/tutorials\/monitoring\/usage-metrics)\n- [Diagnose server issues](\/vault\/tutorials\/monitoring\/diagnose-startup-issues)\n\n**Anti-pattern issue:**\n\nThis issue closely relates to the [monitor metrics](#monitor-metrics) recommended pattern.\nTelemetry data is only held in memory for a short period.\n\n## Minimize root token use\n\nInitializing a Vault server emits an initial root token that gives root-level access across all Vault features.\n\n**Recommended pattern:**\n\nWe recommend that you revoke the root token after initializing Vault within your environment. If users require elevated access, create access control list policies that grant proper capabilities on the necessary paths in Vault. If your operations require the root token, keep it for the shortest possible time before revoking it.\n\n- [Generate root tokens tutorial](\/vault\/tutorials\/operations\/generate-root)\n- [Root tokens](\/vault\/docs\/concepts\/tokens#root-tokens)\n- [Vault policies](\/vault\/docs\/concepts\/policies)\n\n**Anti-pattern issue:**\n\nA root token can perform all actions within Vault and never expire. Unrestricted access can give users higher privileges than necessary to all Vault operations and paths. Sharing and providing access to root tokens poses a security risk.\n\n## Rekey when necessary\n\nVault distributes unseal keys to stakeholders. A quorum of key shares is needed to unseal Vault, based on your initialization settings. \n\n**Recommended pattern:** \n\nVault supports rekeying, and you should establish a workflow for rekeying when necessary. 
\n\n- [Rekeying & rotating Vault](\/vault\/tutorials\/operations\/rekeying-and-rotating)\n- [Operator rekey](\/vault\/docs\/commands\/operator\/rekey)\n\n**Anti-pattern issue:**\n\nIf several stakeholders leave the organization, you risk not having the required key shares to meet the unseal quorum, which could result in the loss of the ability to unseal Vault.","site":"vault","answers_cleaned":"    layout  docs page title  Recommended patterns description  Follow these recommended patterns to effectively operate Vault         Recommended patterns  Help keep your Vault environments operating effectively by implementing the following best practice so you avoid common anti patterns     Description     Applicable Vault edition                   Adjust the default lease time   adjust the default lease time      All       Use identity entities for accurate client count   use identity entities for accurate client count      Enterprise  HCP      Increase IOPS   increase iops      Enterprise  Community       Enable disaster recovery   enable disaster recovery      Enterprise       Test disaster recovery   test disaster recovery      Enterprise       Improve upgrade cadence   improve upgrade cadence      Enterprise  Community      Test before upgrades   test before upgrades      Enterprise  Community       Rotate audit device logs   rotate audit device logs      Enterprise  Community      Monitor metrics   monitor metrics       Enterprise  Community       Establish usage baseline   establish usage baseline      Enterprise  Community      Minimize root token use   minimize root token use      All       Rekey when necessary   rekey when necessary      All        Adjust the default lease time  The default lease time in Vault is 32 days or 768 hours  This time allows for some operations  such as re authentication or renewal  See  lease   vault docs concepts lease  documentation for more information     Recommended pattern     You should tune the lease TTL value for your needs  
Vault holds leases in memory until the lease expires   We recommend keeping TTLs as short as the use case will allow      Auth tune   vault docs commands auth tune     Secrets tune   vault docs commands secrets tune    Note  Tuning or adjusting TTLs does not retroactively affect tokens that were issued  New tokens must be issued after tuning TTLs     Note     Anti pattern issue     If you create leases without changing the default time to live  TTL   leases will live in Vault until the default lease time is up   Depending on your infrastructure and available system memory  using the default or long TTL may cause performance issues as Vault stores leases in memory       Use identity entities for accurate client count  Each Vault client may have multiple accounts with the auth methods enabled on the Vault server     Entity   img vault entity waf1 png     Recommended pattern     Since each token adds to the client count  and each unique authentication issues a token  you should use identity entities to create aliases that connect each login to a single identity         Client count   vault docs concepts client count       Vault identity concepts   vault docs concepts identity       Vault Identity secrets engine   vault docs secrets identity       Identity  Entities and groups tutorial   vault tutorials auth methods identity     Anti pattern issue     When you do not use identity entities  each new client is counted as a separate identity when using another auth method not linked to the user s entity       Increase IOPS  IOPS  input output operations per second  measures performance for Vault cluster members  Vault is bound by the IO limits of the storage backend rather than the compute requirements      Recommended pattern     Use the HashiCorp reference guidelines for Vault servers  hardware sizing and network considerations      Vault with Integrated storage reference architecture   vault tutorials day one raft raft reference architecture system requirements     
Performance tuning   vault tutorials operations performance tuning     Transform secrets engine   vault docs concepts transform    Note   Depending on the client count  the Transform  Enterprise  and Transit secret engines can be resource intensive     Note     Anti pattern issue     Limited IOPS can significantly degrade Vault s performance      Enable disaster recovery  HashiCorp Vault s  HA  highly available  Integrated storage  Raft    vault docs concepts integrated storage   backend provides intra cluster data replication across cluster members  Integrated Storage provides Vault with  horizontal scalability and failure tolerance  but it does not provide backup for the entire cluster  Not utilizing  disaster recovery for your production environment will negatively impact your organization s Recovery Point  Objective  RPO  and Recovery Time Objective  RTO       Recommended pattern      For cluster wide issues  i e   network connectivity   Vault Enterprise Disaster Recovery  DR  replication  provides a warm standby cluster containing all primary cluster data  The DR cluster does not service reads  or writes but you can promote it to replace the primary cluster when needed      Disaster recovery replication setup   vault tutorials day one raft disaster recovery     Disaster recovery  DR  replication   vault docs enterprise replication disaster recovery dr replication     DR replication API documentation   vault api docs system replication replication dr   We also recommend periodically creating data snapshots to protect against data corruption      Vault data backup standard procedure   vault tutorials standard procedures sop backup     Automated integrated storage snapshots   vault docs enterprise automated integrated storage snapshots      sys storage raft snapshot auto   vault api docs system storage raftautosnapshots     Anti pattern issue     If you do not enable disaster recovery and catastrophic failure occurs  your use cases will encounter longer downtime 
duration and costs associated with not serving Vault clients in your environment      Test disaster recovery  Your disaster recovery  DR  solution is a key part of your overall disaster recovery plan   Designing and configuring your Vault disaster recovery solution is only the first step  You also need to validate the DR solution  as not doing so can negatively impact your organization s Recovery Point Objective  RPO  and Recovery Time Objective  RTO      Recommended pattern     Vault s Disaster Recovery  DR  replication mode provides a warm standby for  failover if the primary cluster experiences catastrophic failure  You should  periodically test the disaster recovery replication cluster by completing the  failover and failback procedure       Vault disaster recovery replication failover and failback tutorial   vault tutorials enterprise disaster recovery replication failover     Vault Enterprise replication   vault docs enterprise replication     Monitoring Vault replication   vault tutorials monitoring monitor replication   You should establish standard operating procedures for restoring a Vault cluster from a snapshot  The restoration methods following a DR situation would be in response to data corruption or sabotage  which Disaster Recovery Replication might be unable to protect against      Standard procedure for restoring a Vault cluster   vault tutorials standard procedures sop restore     Anti pattern issue     If you don t test your disaster recovery solution  your key stakeholders will not feel confident they can effectively perform the disaster recovery plan  Testing the DR solution also helps your team to remove uncertainty around recovering the system during an outage      Improve upgrade cadence  While it might be easy to upgrade Vault whenever you have capacity  not having a frequent upgrade cadence can impact your Vault performance and security     Recommended pattern      We recommend upgrading to our latest version of Vault  Subscribe to the 
releases in  Vault s GitHub repository  https   github com hashicorp vault   and notifications from  HashiCorp Vault discuss  https   discuss hashicorp com c release notifications 57   will inform you when we release a new Vault version      Vault upgrade guides   vault docs upgrading     Vault feature deprecation notice and plans   vault docs deprecation     Anti pattern issue     When you do not keep a regular upgrade cadence  your Vault environment could be missing key features or improvements     Missing patches for bugs or vulnerabilities as documented in the  CHANGELOG  https   github com hashicorp vault blob main CHANGELOG md     New features to improve workflow    Must use version specific rather than the latest documentation    Some educational resourcesrequire a specific minimum Vault version    Updates may require a stepped approach that uses an intermediate version before installing the latest binary      Test before upgrades  We recommend testing Vault in a sandbox environment before deploying to production   Although it might be faster to upgrade immediately in production  testing will help identify any compatibility issues   Be aware of the  CHANGELOG  https   github com hashicorp vault blob main CHANGELOG md  and account for any new features  improvements  known issues and bug fixes in your testing     Recommended pattern      Test new Vault versions in sandbox environments before upgrading in production and follow our upgrading documentation   We recommend adding a testing phase to your standard upgrade procedure      Vault upgrade standard procedure   vault tutorials standard procedures sop upgrade     Upgrading Vault   vault docs upgrading     Anti pattern issue     Without adequate testing before upgrading in production  you risk compatibility and performance issues     Warning   This could lead to downtime or degradation in your production Vault environment     Warning      Rotate audit device logs  Audit devices in Vault maintain a detailed 
log of every client request and server response   If you allow the logs for audit devices to run perpetually without rotating you may face a blocked audit device if the filesystem storage becomes exhausted     Recommended pattern     Inspect and rotate audit logs periodically      Blocked audit devices tutorial   vault tutorials monitoring blocked audit devices     blocked audit devices   vault docs audit blocked audit devices     Anti pattern issue     Vault will not respond to requests when audit devices are not enabled to record them   The audit device can exhaust the local storage if the audit device log is not maintained and rotated over time      Monitor metrics  Relying solely on Vault operational logs and data in Vault UI will give you a partial picture of the cluster s performance      Recommended pattern     Continuous monitoring will allow organizations to detect minor problems and promptly resolve them   Migrating from reactive to proactive monitoring will help to prevent system failures  Vault has multiple outputs  that help monitor the cluster s activity  audit logs  operational logs  and telemetry data  This data can work  with a SIEM  security information and event management  tool for aggregation  inspection  and alerting capabilities      Telemetry   vault docs internals telemetry secrets engines metric     Telemetry metrics reference   vault tutorials monitoring telemetry metrics reference   Adding a monitoring solution     Audit device logs and incident response with elasticsearch   vault tutorials monitoring audit elastic incident response     Monitor telemetry   audit device log data   vault tutorials monitoring monitor telemetry audit splunk     Monitor telemetry with Prometheus   Grafana   vault tutorials monitoring monitor telemetry grafana prometheus     Note     Vault logs to standard output and standard error by default  automatically captured by the systemd journal  You can also instruct Vault to redirect operational log writes to a 
file      Note     Anti pattern issue      Having partial insight into cluster activity can leave the business in a reactive state      Establish usage baseline  A baseline provides insight into current utilization and thresholds  Telemetry metrics are valuable  especially when monitored over time  You can use telemetry metrics to gather a baseline of cluster activity  while alerts inform you of abnormal activity      Recommended pattern     Telemetry information can also be streamed directly from Vault to a range of metrics aggregation solutions and  saved for aggregation and inspection       Vault usage metrics   vault tutorials monitoring usage metrics     Diagnose server issues   vault tutorials monitoring diagnose startup issues     Anti pattern issue     This issue closely relates to the recommended pattern for  monitor metrics   monitor metrics    Telemetry data is  only held in memory for a short period      Minimize root token use  Initializing a Vault server emits an initial root token that gives root level access across all Vault features     Recommended pattern     We recommend that you revoke the root token after initializing Vault within your environment  If users require elevated access  create access control list policies that grant proper capabilities on the necessary paths in Vault  If your operations require the root token  keep it for the shortest possible time before revoking it      Generate root tokens tutorial   vault tutorials operations generate root     Root tokens   vault docs concepts tokens root tokens     Vault policies   vault docs concepts policies     Anti pattern issue     A root token can perform all actions within Vault and never expire  Unrestricted access can give users higher privileges than necessary to all Vault operations and paths  Sharing and providing access to root tokens poses a security risk      Rekey when necessary  Vault distributes unsealed keys to stakeholders  A quorum of keys is needed to unlock Vault based on 
your initialization settings      Recommended pattern      Vault supports rekeying  and you should establish a workflow for rekeying when necessary       Rekeying   rotating Vault   vault tutorials operations rekeying and rotating     Operator rekey   vault docs commands operator rekey     Anti pattern issue     If several stakeholders leave the organization  you risk not having the required key shares to meet the unseal quorum  which could result in the loss of the ability to unseal Vault "}
{"questions":"vault Learn about the key Vault metrics you should monitor with health checks Key metrics for common health checks layout docs This document covers common Vault monitoring patterns It is important to have operational and usage insight into a running Vault cluster to understand performance assist with proactive incident response and understand business workloads and use cases page title Key metrics for common health checks","answers":"---\nlayout: docs\npage_title: Key metrics for common health checks\ndescription: >-\n  Learn about the key Vault metrics you should monitor with health checks.\n---\n\n# Key metrics for common health checks\n\nThis document covers common Vault monitoring patterns. It is important to have operational and usage insight into a running Vault cluster to understand performance, assist with proactive incident response, and understand business workloads and use cases.\n\nThis document consists of six metrics sections: core, usage, storage backend, audit, resource, and replication. Core metrics are fundamental internal metrics which you should monitor to ensure the health of your Vault cluster. The usage metrics section covers metrics which help count active and historical clients of Vault. The storage backend section highlights the metrics to monitor so that you understand the storage infrastructure that your Vault cluster uses, allowing you to ensure your storage is functioning as intended. Audit metrics allow you to set up monitoring that helps you meet your compliance requirements. Resource metrics allow you to monitor metrics such as CPU, networking, and other resources Vault uses on its host. 
Replication covers metrics you can use to help ensure that Vault is replicating data as intended.\n\n## Core metrics\n\n### Servers assume the leader role\n\n#### Metrics:\n\n`vault.core.leadership_lost`\n\n`vault.core.leadership_setup_failed`\n\n#### Background:\n\n![Recommended architecture](\/img\/vault-integrated-storage-reference-architecture.svg)\n\nThe diagram illustrates a highly available Vault cluster with five nodes distributed between three availability zones. Vault's Integrated Storage uses a consensus protocol to provide consistency across the cluster nodes. The leader (active) node is responsible for ingesting new log entries, replicating them to the follower (standby) nodes, and managing when to commit an entry. Because the cluster relies on this consensus protocol, if the leader is lost, the voting nodes will elect a new leader. Refer to the [Integrated storage](\/vault\/docs\/internals\/integrated-storage) documentation for more details.\n\n<Tip>\n\nWhen you operate Vault with Integrated Storage, it automatically provides [additional metrics for Raft leadership changes](\/vault\/docs\/internals\/integrated-storage#consensus-protocol).\n\n<\/Tip>\n\n#### Alerting:\n\nThe metric `vault.core.leadership_lost` measures the duration a server held the leader position before losing it. A consistently low value for this metric suggests a high leadership turnover, indicating potential instability within the cluster.\n\nOn the other hand, spikes in the `vault.core.leadership_setup_failed` metric indicate that standby servers failed to assume the leader role when required. Investigate these failures promptly, and check for any issues related to acquiring the leader election lock. These metrics are important alerts and can signify security and reliability risks. For example, there might be a communication problem between Vault and its storage backend or a broader outage causing multiple Vault servers to fail. 
Monitoring and analyzing these metrics can help identify and address any underlying issues, ensuring the stability and security of your Vault cluster.\n\n### Higher latency in your application\n\n#### Metrics:\n\n`vault.core.handle_login_request`\n\n`vault.core.handle_request`\n\n#### Background:\n\nVault can use trusted sources like Kubernetes, Active Directory, and Okta to verify the identity of clients (users or services) before granting them access to secrets. Clients must authenticate themselves by making a login request through the `vault login` command or the API. When the authentication is successful, Vault provides the client with a token, which is stored locally on the client's machine and is used to authorize future requests. As long as the client presents a valid token that has not expired, Vault recognizes the client as authenticated.\n\n#### Alerting:\n\nThe metric `vault.core.handle_login_request`, when averaged, measures how fast Vault responds to client login requests. If you notice a significant increase in this metric but no significant increase in the number of tokens issued (`vault.token.creation`) it's crucial to investigate the cause of this issue immediately.\n\nWhen a client sends a request to Vault, it typically needs to go through an authentication process to verify its identity and obtain the necessary permissions. This authentication process involves validating the client's credentials, such as username and password or API token, and ensuring the client has the appropriate access rights.\n\nIf the authentication process in Vault is slow, it takes longer for Vault to verify the client's credentials and authorize the request. This delay in authentication directly impacts the response time of Vault to the client's request.\n\nYou should also monitor the `vault.core.handle_request` metric, which measures server workload. This metric helps determine whether you need to scale up your cluster to accommodate increased traffic. 
On the other hand, a sudden drop in throughput may indicate connectivity problems between Vault and its clients, which you should investigate further.\n\n### Difficulties with setting up auditing or problems with mounting a custom plugin backend\n\n#### Metrics:\n\n`vault.core.post_unseal`\n\n#### Background:\n\nVault servers can be in one of two states: sealed or unsealed. To ensure security, Vault does not trust the storage backends and stores data in an encrypted form. After Vault is started, it must undergo an unsealing process to obtain the plaintext root key necessary to read the decryption key to decrypt the data. After unsealing, Vault performs various post-unseal operations to set up the server correctly before it can start responding to requests.\n\n#### Alerting:\n\nIf you notice sudden increases in the `vault.core.post_unseal` metric, issues might affect your server's availability during the post-unseal process, such as errors with auditing or mounting a custom plugin backend. To diagnose the issues, refer to your Vault server's logs.\n\n## Usage metrics\n\n### Excessive token creations affecting Vault performance\n\n#### Metrics:\n\n`vault.token.creation`\n\n#### Background:\n\nAll authenticated Vault requests require a valid client token. Tokens are linked to policies determining which capabilities a client (user or system) has for a given path. Vault issues three types of tokens: service tokens, batch tokens, and recovery tokens.\n\nService tokens are what users will generally think of as \"normal\" Vault tokens. They support all features, such as renewal, revocation, creating child tokens, and more. They are correspondingly heavyweight to create and track.\n\nBatch tokens are encrypted binary large objects (blobs) with just enough information about the client. While Vault does not persist the batch tokens, it persists the service tokens. The amount of space required to store the service token depends on the authentication method. 
Therefore, a large number of service tokens could contribute to an out-of-memory issue.\n\nRecovery tokens are used exclusively for operating Vault in [recovery mode](\/vault\/docs\/concepts\/recovery-mode).\n\n#### Alerting\n\nBy monitoring the number of tokens created (`vault.token.creation`) and the frequency of login requests (`vault.core.handle_login_request` counted as a total), you can gain insights into the overall workload of your system. If your scenario involves running numerous short-lived processes, such as serverless workloads, you may experience simultaneous creation and request of secrets from hundreds or thousands of functions. In such cases, you will observe correlated spikes in both metrics.\n\nWhen dealing with transient workloads, you should utilize batch tokens to enhance the performance of your cluster. Vault creates a batch token, which encrypts all the client's information and provides it to the client. When the client employs this token, Vault decrypts the stored metadata and fulfills the request. Unlike service tokens, batch tokens do not retain client information or get replicated across clusters. This characteristic alleviates the burden on the storage backend and leads to improved cluster performance.\n\nTo learn more about batch tokens, refer to the [batch tokens](\/vault\/tutorials\/tokens\/batch-tokens) tutorial.\n\n### Lease lifecycle introducing unexpected traffic spikes in Vault\n\n#### Metrics:\n\n`vault.expire.num_leases`\n\n#### Background:\n\nVault creates a lease when it generates a dynamic secret or service token. This lease contains essential information like the secret or token\u2019s time to live (TTL) value, and whether it can be extended or renewed. Vault stores the lease in the storage backend. 
If Vault doesn't renew the lease before reaching its TTL, it will expire and be invalidated, causing the associated secret or token to be revoked.\n\n#### Alerting\n\nMonitoring the number of active leases in your Vault server (`vault.expire.num_leases`) can provide valuable insights into the server's activity level. An increase in leases suggests a higher volume of traffic to your application. At the same time, a sudden decrease could indicate issues with accessing dynamic secrets quickly enough to serve incoming traffic.\n\nWe recommend setting the shortest possible TTL for leases to improve security and performance. There are two main reasons for this. Firstly, a shorter TTL reduces the impact of potential attacks. Secondly, it prevents leases from accumulating indefinitely and consuming excessive space in the storage backend. If you don't specify a TTL explicitly, leases default to 32 days. However, if there is a sudden surge in load and numerous leases are generated with this long default TTL, the storage backend can quickly reach its maximum capacity and crash, resulting in unavailability.\n\nDepending on your specific use case, you may only require a token or secret for a few minutes or hours, rather than the full 32 days. By setting an appropriate TTL, you can free up storage space for storing new secrets and tokens. In the case of Vault Enterprise, you can set a [lease count quota](\/vault\/docs\/enterprise\/lease-count-quotas) to limit the number of leases generated below a certain threshold. When the threshold is reached, Vault will restrict the creation of new leases until an existing lease expires or is revoked. 
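As a minimal sketch of the lease count quota described above (Vault Enterprise only; the quota name `db-leases`, the `database/` path, and the 1000-lease threshold are illustrative placeholders, and the commands assume a reachable Vault server with `VAULT_ADDR` and `VAULT_TOKEN` set):

```shell
# Cap the number of leases that can exist under the database/ mount.
# The quota name and threshold below are placeholder values.
vault write sys/quotas/lease-count/db-leases \
    path=database/ \
    max_leases=1000

# Inspect the quota to confirm the configured threshold.
vault read sys/quotas/lease-count/db-leases
```

Once the threshold is reached, requests that would create new leases under `database/` fail until an existing lease expires or is revoked.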
This helps manage the overall number of leases and prevents excessive resource usage.\n\nRead the [Protecting Vault with resource quotas](\/vault\/tutorials\/operations\/resource-quotas) tutorial to learn how to set the lease count quotas.\n\nAlternatively, you can leverage [Vault Agent caching](\/vault\/docs\/agent-and-proxy\/agent\/caching) to delegate the lease lifecycle management to Vault Agent.\n\n<Note>\n\nThe lifecycle of leases is managed by the expiration manager, which handles the revocation of a lease when the time to live value associated with the lease is reached. Refer to the [Troubleshoot irrevocable leases](\/vault\/tutorials\/monitoring\/troubleshoot-irrevocable-leases) tutorial when you encounter irrevocable leases monitored by the `vault.expire.num_irrevocable_leases` metric.\n\n<\/Note>\n\n### Know the average time it takes to renew or revoke client tokens\n\n#### Metrics:\n\n`vault.expire.renew-token`\n\n`vault.expire.revoke`\n\n#### Background:\n\nVault automatically revokes access to secrets granted by a token when its time to live (TTL) expires. You can manually revoke a token if there are signs of a possible security breach. When a token is no longer valid (either expired or revoked), the client will lose access to the secrets managed by Vault. Therefore, the client must either renew the token before it expires, or request a new one.\n\n#### Alerting\n\nMonitoring the timely completion of revocation (`vault.expire.revoke`) and renewal (`vault.expire.renew-token`) operations is crucial for ensuring the validity and accessibility of secrets. Some long-running applications may require a token to be renewed instead of getting a new one. In such a case, the time it takes to renew a token can delay the application's access to secrets. 
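To make the renew and revoke operations concrete, here is a hedged CLI sketch (the token value is a placeholder, and both commands assume a reachable Vault server with `VAULT_ADDR` and `VAULT_TOKEN` set):

```shell
# Renew the calling client's own token, requesting a 1h extension.
# The increment is a suggestion; Vault may cap it at the token's max TTL.
vault token renew -increment=1h

# Revoke a token explicitly, e.g. on suspicion of compromise.
# "s.placeholder" stands in for a real token value.
vault token revoke s.placeholder
```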
Also, tracking the time it takes to complete the revoke operation helps detect unauthorized access to secrets, as attackers who gain access can infiltrate your system and cause harm. If you notice significant delays in the revocation process, you should investigate your server logs for any backend issues that might have hindered the revocation process.\n\n## Storage backend metrics\n\n### Detect any performance issues with your Vault's storage backend\n\n#### Metrics:\n\n`vault.<STORAGE>.get`\n\n`vault.<STORAGE>.put`\n\n`vault.<STORAGE>.list`\n\n`vault.<STORAGE>.delete`\n\n#### Background:\n\nThe performance of the storage backend affects the overall performance of Vault; therefore, it is critical to monitor the performance of your storage backend so that you can detect and react to any anomaly. Backend monitoring allows you to ensure that your storage infrastructure is functioning optimally. Tracking performance lets you gather detailed information and insights about the backend's operations and identify areas requiring improvement or optimization.\n\n#### Alerting\n\nIf Vault takes longer to access the backend for operations like retrieving (`vault.<STORAGE>.get`), storing (`vault.<STORAGE>.put`), listing (`vault.<STORAGE>.list`), or deleting (`vault.<STORAGE>.delete`) items, the Vault clients may be experiencing delays caused by storage limitations. To address this issue, you can set up alerts that will notify your team automatically when Vault's access to the storage backend slows down. 
This will allow you to take action, such as upgrading to disks with better input\/output (I\/O) performance, before the increased latency negatively impacts your application users' experience.\n\nIf you are using Integrated Storage, the following resources provide additional guidance:\n- [Inspect Data in Integrated Storage](\/vault\/tutorials\/monitoring\/inspect-data-integrated-storage)\n- [Inspect Data in BoltDB](\/vault\/tutorials\/monitoring\/inspect-data-boltdb)\n\n## Audit metrics\n\n###  Blocked audit devices\n\n#### Metrics:\n\n`vault.audit.log_request_failure`\n\n`vault.audit.log_response_failure`\n\n#### Background:\n\nAudit devices play a crucial role in meeting compliance requirements by recording a comprehensive audit log of requests and responses from Vault. For a production deployment, your Vault cluster should have at least one audit device enabled so that you can trace all incoming requests and outgoing responses associated with your cluster. If you rely on only one audit device and encounter problems (e.g., network connection loss or permission issues), Vault can become unresponsive and cease to handle requests. Enabling at least one  additional audit device is essential to ensure uninterrupted functionality and responsiveness from Vault.\n\n#### Alerting\n\nTo ensure smooth operation, monitoring any unusual increases in audit log request failures (`vault.audit.log_request_failure`) and response failures (`vault.audit.log_response_failure`) is important. These failures could indicate a device blockage. 
If such issues arise, examining the audit logs can help identify the problematic device and provide additional clues about the underlying problem.\n\nIf Vault is unable to write audit logs to the syslog, the server will generate error logs similar to the following example:\n\n```plaintext\n2020-10-20T12:34:56.290Z [ERROR] audit: backend failed to log response: backend=syslog\/ error=\"write unixgram @->\/test\/log: write: message too long\"\n2020-10-20T12:34:56.291Z [ERROR] core: failed to audit response: request_path=sys\/mounts\n error=\"1 error occurred:\n        * no audit backend succeeded in logging the response\n```\n\n\nYou should expect to encounter a pair of errors from the audit and core modules for each failed log response. If you receive an error message containing \"write: message too long,\" it suggests that the entries that Vault is trying to write to the syslog audit device exceed the size of the syslog host's socket send buffer. In such cases, it's necessary to investigate what is causing the generation of large log entries, such as an extensive list of Active Directory or LDAP groups.\n\nRefer to the [Blocked audit devices](\/vault\/tutorials\/monitoring\/blocked-audit-devices) tutorial for additional guidance.\n\n## Resource metrics\n\n### Vault memory issues indicated by garbage collection\n\n#### Metrics:\n\n`vault.runtime.sys_bytes`\n\n`vault.runtime.gc_pause_ns`\n\n#### Background:\n\nGarbage collection in the Go runtime temporarily pauses all operations. These pauses are usually brief, but garbage collection happens more often if memory usage is high. 
This increased frequency of garbage collection can cause delays in Vault's performance.\n\n#### Alerting\n\nAnalyzing the relationship between Vault's memory usage (represented as a percentage of total available memory on the host) and garbage collection pause time (measured by `vault.runtime.gc_pause_ns`) can provide valuable insights into resource limitations and assist in effectively allocating compute resources.\n\nTo illustrate, when the `vault.runtime.sys_bytes` exceeds 90 percent of the available memory on the host, it is advisable to add more memory to prevent performance degradation. Additionally, you should set  up an alert that triggers if the GC pause time exceeds 5 seconds per minute. This alert will promptly notify you, enabling swift action to address the issue.\n\n### CPU I\/O wait time\n\n#### Background:\n\nVault scales horizontally by adding more instances or nodes, but there are still practical limits to scalability. Excessive CPU wait time for I\/O operations can indicate that the system is reaching its scalability limits or overusing specific resources. By tracking these metrics, administrators can assess the system's scalability and take appropriate actions, such as optimizing I\/O operations or adding additional resources, to maintain performance as the system grows.\n\n#### Alerting\n\nWe recommend keeping the I\/O wait time below 10 percent to ensure optimal performance. If you notice excessively long wait times, it indicates that clients are experiencing delays while waiting for Vault to respond to their requests. This delay can negatively impact the performance of applications that rely on Vault. In such situations, evaluating if your resources are properly configured to handle your workload and if the requests are evenly distributed across all CPUs is necessary. 
These steps will help address potential performance issues and ensure the smooth operation of Vault and its dependent applications.\n\n###  Keep your network throughput within the threshold\n\n#### Background:\n\nMonitoring the network throughput of your Vault clusters allows you to gauge their workload. A sudden decrease in traffic going in or out might indicate communication problems between Vault and its clients or dependencies. Conversely, if you observe an unexpected surge in network activity, it could be a sign of a denial of service (DoS) attack. Knowing these network patterns can provide valuable insights and help you identify potential issues or security threats.\n\n#### Alerting\n\nStarting from Vault 1.5, you can set rate limit quotas to ensure Vault's overall stability and health. When a server reaches this threshold, Vault will reject any new client requests and respond with an HTTP 429 error, specifically \"Too Many Requests.\" These rejected requests will be recorded in your audit logs and display a message like this example: \"error: request path kv\/app\/test: rate limit quota exceeded.\" Choosing an appropriate limit for the rate quota is important so that it doesn't block legitimate requests and cause slowdowns in your applications. 
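As an illustrative example of setting such a quota (the quota name `global-rate` and the 500 requests-per-second value are placeholders, and the commands assume a reachable Vault 1.5+ server with `VAULT_ADDR` and `VAULT_TOKEN` set):

```shell
# Create or update a global rate limit quota of 500 requests per second.
# Requests beyond this rate receive HTTP 429 "Too Many Requests".
vault write sys/quotas/rate-limit/global-rate rate=500

# Read the quota back to verify the configured rate.
vault read sys/quotas/rate-limit/global-rate
```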
To monitor the frequency of these breaches and adjust your limit accordingly, you can keep an eye on the `quota.rate_limit.violation` metric, which increments with each violation of the rate limit quota.\n\nRefer to the [Protecting Vault with resource quotas](\/vault\/tutorials\/operations\/resource-quotas) tutorial to learn how to set the rate limit quotas for your Vault.\n\n## Replication metrics\n\n### Free memory in the storage backend by monitoring Write-Ahead logs\n\n#### Metrics:\n\n`vault.wal_flushready`\n\n`vault.wal.persistWALs`\n\n#### Background:\n\nTo maintain high performance, Vault utilizes a garbage collector that periodically removes old Write-Ahead Logs (WALs) to free up memory on the storage backend. However, when there are unexpected surges in traffic, the accumulation of WALs can occur rapidly, leading to increased strain on the storage backend. These surges can negatively affect other processes in Vault that rely on the same storage backend. Therefore, it is important to assess the impact of replication on the performance of your storage backend. By doing so, you can better understand how the replication process influences your system's overall performance.\n\n#### Alerting\n\nWe recommend you keep track of two metrics: `vault.wal_flushready` and `vault.wal.persistWALs`. The first metric measures the time it takes to flush a ready Write-Ahead Log (WAL) to the persist queue, while the second metric measures the time it takes to persist a WAL to the storage backend.\n\nTo ensure efficient performance, we advise you to set up alerts that will notify you when the `vault.wal_flushready` metric exceeds 500 milliseconds or when the `vault.wal.persistWALs` metric surpasses 1,000 milliseconds. These alerts serve as indicators that backpressure is slowing down your storage backend.\n\nIf either of these alerts is triggered, consider scaling your storage backend to accommodate the increased workload. 
Scaling can help alleviate the strain and maintain optimal performance.\n\n###  Vault Enterprise Replication health check\n\n#### Metrics:\n\n`vault.replication.wal.last_wal`\n\n#### Background:\n\nVault's Write-Ahead Log (WAL) is a durable data storage and recovery mechanism. WAL is a log file that records all changes made to the Vault data store before Vault persists them to the underlying storage backend. The WAL provides an extra layer of reliability and ensures data integrity in case of system failures or crashes.\n\n#### Alerting\n\nWhen you have Vault Enterprise deployments with Performance Replication and\/or Disaster Recovery Replication configured, you want to monitor that the data gets replicated from the primary to the secondary clusters. To detect if your primary and secondary clusters are losing synchronization, you can compare the last Write-Ahead Log (WAL) index on both clusters. It's important to detect discrepancies between them because if the secondary clusters are significantly behind the primary and the primary cluster becomes unavailable, any requests made to Vault will yield outdated data. Therefore, if you notice missing values in the WAL, it's essential to investigate potential causes, which may include:\n\n- Network issues between the primary and secondary clusters: Problems with the network connection can hinder the proper replication of data between the clusters.\n- Resource limitations on the primary or secondary systems: If the primary or secondary clusters are experiencing resource constraints, it can affect their ability to replicate data effectively.\n- Issues with specific keys: Sometimes, the problem may relate to specific keys within the Vault. 
To identify such issues, examine the Vault's operational and storage logs, which will provide detailed information about the problematic keys causing the synchronization gaps.\n\nRefer to the [Monitoring Vault replication](\/vault\/tutorials\/monitoring\/monitor-replication) tutorial to learn more.\n\n## Additional references\n\n- [Monitor telemetry with Prometheus & Grafana](\/vault\/tutorials\/monitoring\/monitor-telemetry-grafana-prometheus)\n- [Monitor telemetry & Audit Device Log Data](\/vault\/tutorials\/monitoring\/monitor-telemetry-audit-splunk)\n- [Vault usage metrics](\/vault\/tutorials\/monitoring\/usage-metrics)","site":"vault","answers_cleaned":"    layout  docs page title  Key metrics for common health checks description       Learn about the key Vault metrics you should monitor with health checks         Key metrics for common health checks  This document covers common Vault monitoring patterns  It is important to have operational and usage insight into a running Vault cluster to understand performance  assist with proactive incident response  and understand business workloads and use cases   This document consists of five metrics sections  core  usage  storage backend  audit  and resource  Core metrics are fundamental internal metrics which you should monitor to ensure the health of your Vault cluster  The usage metrics section covers metrics which help count active and historical clients Vault  The storage backend section highlights the metrics to monitor so that you understand the storage infrastructure that your Vault cluster uses  allowing you to ensure your storage is functioning as intended  Audit metrics allow you to set up monitoring that helps you meet your compliance requirements  Resource metrics allow you to monitor metrics such as CPU  networking  and other resources Vault uses on its host  Replication covers metrics you can use to help ensure that Vault is replicating data as intended      Core metrics      Servers assume the leader 
role       Metrics    Vault core leadership lost    vault core leadership setup failed        Background     Recommended architecture   img vault integrated storage reference architecture svg   The diagram illustrates a highly available Vault cluster with five nodes distributed between three availability zones  Vault s Integrated Storage uses a consensus protocol to provide consistency across the cluster nodes  The leader  active  node is responsible for ingesting new log entries  replicating them to the follower  standby  nodes  and managing when to commit an entry  Integrated Storage uses consensus protocol to provide consistency  therefore  if the leader is lost  the voting nodes will elect a new leader  Refer to the  Integrated storage   vault docs internals integrated storage  documentation for more details    Tip   When you operate Vault with Integrated Storage  it automatically provides  additional metrics for Raft leadership changes   vault docs internals integrated storage consensus protocol      Tip        Alerting   The metric  vault core leadership lost  measures the duration a server held the leader position before losing it  A consistently low value for this metric suggests a high leadership turnover  indicating potential instability within the cluster   On the other hand  spikes in the  vault core leadership setup failed  metric indicate failures that standby servers cannot successfully assume the leader role when required  Investigate these failures promptly  and check for any issues related to acquiring the leader election lock  These metrics are important alerts and can signify security and reliability risks  For example  there might be a communication problem between Vault and its storage backend or a broader outage causing multiple Vault servers to fail  Monitoring and analyzing these metrics can help identify and address any underlying issues  ensuring the stability and security of your Vault cluster       Higher latency in your application     
  Metrics    vault core handle login request    vault core handle request        Background   Vault can use trusted sources like Kubernetes  Active Directory  and Okta to verify the identity of clients  users or services  before granting them access to secrets  Clients must authenticate themselves by making a login request through the  vault login  command or the API  When the authentication is successful  Vault provides the client with a token  which is stored locally on the client s machine and is used to authorize future requests  As long as the client presents a valid token that has not expired  Vault recognizes the client as authenticated        Alerting   The metric  vault core handle login request   when averaged  measures how fast Vault responds to client login requests  If you notice a significant increase in this metric but no significant increase in the number of tokens issued   vault token creation   it s crucial to investigate the cause of this issue immediately   When a client sends a request to Vault  it typically needs to go through an authentication process to verify its identity and obtain the necessary permissions  This authentication process involves validating the client s credentials  such as username and password or API token  and ensuring the client has the appropriate access rights   If the authentication process in Vault is slow  it takes longer for Vault to verify the client s credentials and authorize the request  This delay in authentication directly impacts the response time of Vault to the client s request   You should also monitor the  vault core handle request  metric  which measures server workload  This metric helps determine whether you need to scale up your cluster to accommodate increased traffic  On the other hand  a sudden drop in throughput may indicate connectivity problems between Vault and its clients  which you should investigate further       Difficulties with setting up auditing or problems with mounting a custom 
plugin backend       Metrics    vault core post unseal        Background   Vault servers can be in one of two states  sealed or unsealed  To ensure security  Vault does not trust the storage backends and stores data in an encrypted form  After Vault is started  it must undergo an unsealing process to obtain the plaintext root key necessary to read the decryption key to decrypt the data  After unsealing  Vault performs various post unseal operations to set up the server correctly before it can start responding to requests        Alerting   If you notice sudden increases in the  vault core post unseal  metric  issues might affect your server s availability during the post unseal process  such as errors with auditing or mounting a custom plugin backend  To diagnose the issues  refer to your Vault server s logs      Usage metrics      Excessive token creations affecting Vault performance       Metrics    vault token creation         Background   All authenticated Vault requests require a valid client token  Tokens are linked to policies determining which capabilities  a client  user or system  has for a given path  Vault issues three types of tokens  service tokens  batch tokens  and recovery tokens   Service tokens are what users will generally think of as  normal  Vault tokens  They support all features  such as renewal  revocation  creating child tokens  and more  They are correspondingly heavyweight to create and track   Batch tokens are encrypted binary large objects  blobs  with just enough information about the client  While Vault does not persist the batch tokens  it persists the service tokens  The amount of space required to store the service token depends on the authentication method  Therefore  a large number of service tokens could contribute to an out of memory issue   Recovery tokens are used exclusively for operating Vault in  recovery mode   vault docs concepts recovery mode         Alerting  By monitoring the number of tokens created   vault token 
creation   and the frequency of login requests   vault core handle login request  counted as a total   you can gain insights into the overall workload of your system  If your scenario involves running numerous short lived processes  such as serverless workloads  you may experience simultaneous creation and request of secrets from hundreds or thousands of functions  In such cases  you will observe correlated spikes in both metrics   When dealing with transient workloads  you should utilize batch tokens to enhance the performance of your cluster  Vault creates a batch token  which encrypts all the client s information and provides it to the client  When the client employs this token  Vault decrypts the stored metadata and fulfills the request  Unlike service tokens  batch tokens do not retain client information or get replicated across clusters  This characteristic alleviates the burden on the storage backend and leads to improved cluster performance   To learn more about batch tokens  refer to the  batch tokens   vault tutorials tokens batch tokens  tutorial       Lease lifecycle introducing unexpected traffic spikes in Vault       Metrics    vault expire num leases        Background   Vault creates a lease when it generates a dynamic secret or service token  This lease contains essential information like the secret or token s time to live  TTL  value  and whether it can be extended or renewed  Vault stores the lease in the storage backend  If Vault doesn t renew the lease before reaching its TTL  it will expire and be invalidated  causing the associated secret or token to be revoked        Alerting  Monitoring the number of active leases in your Vault server   vault expire num leases   can provide valuable insights into the server s activity level  An increase in leases suggests a higher volume of traffic to your application  At the same time  a sudden decrease could indicate issues with accessing dynamic secrets quickly enough to serve incoming traffic   We 
recommend setting the shortest possible TTL for leases to improve security and performance  There are two main reasons for this  Firstly  a shorter TTL reduces the impact of potential attacks  Secondly  it prevents leases from accumulating indefinitely and consuming excessive space in the storage backend  If you don t specify a TTL explicitly  leases default to 32 days  However  if there is a sudden surge in load and numerous leases are generated with this long default TTL  the storage backend can quickly reach its maximum capacity and crash  resulting in unavailability   Depending on your specific use case  you may only require a token or secret for a few minutes or hours  rather than the full 32 days  By setting an appropriate TTL  you can free up storage space for storing new secrets and tokens  In the case of Vault Enterprise  you can set a  lease count quota   vault docs enterprise lease count quotas  to limit the number of leases generated below a certain threshold  When the threshold is reached  Vault will restrict the creation of new leases until an existing lease expires or is revoked  This helps manage the overall number of leases and prevents excessive resource usage   Read the  Protecting Vault with resource quotas   vault tutorials operations resource quotas  tutorial to learn how to set the lease count quotas   Alternatively  you can leverage the  Vault agent caching   vault docs agent and proxy agent caching  to delegate the lease lifecycle management to Vault Agent    Note   The lifecycle of the leases are managed by the expiration manager  which handles the revocation of a lease when the time to live value associated with the lease is reached  Refer to the  Troubleshoot irrevocable leases   vault tutorials monitoring troubleshoot irrevocable leases  tutorial when you encounter irrevocable leases monitored by the  vault expire num irrevocable leases  metric     Note       Know the average time it takes to renew or revoke client tokens       Metrics  
  vault expire renew token    vault expire revoke        Background   Vault automatically revokes access to secrets granted by a token when its time to live  TTL  expires  You can manually revoke a token if there are signs of a possible security breach  When a token is no longer valid  either expired or revoked   the client will lose access to the secrets managed by Vault  Therefore  the client must either renew the token before it expires  or request a new one        Alerting  Monitoring the timely completion of revocation   vault expire revoke   and renewal   vault expire renew token   operations is crucial for ensuring the validity and accessibility of secrets  Some long running applications may require a token to be renewed instead of getting a new one  In such a case  the time it takes to renew a token can potentially affect the application from accessing secrets  Also  it is important to track the time it takes to complete the revoke operation helps to detect unauthorized access to secrets  as attackers who gain access can potentially infiltrate your system and cause harm  If you notice significant delays in the revocation process  you should investigate your server logs for any backend issues that might have hindered the revocation process      Storage backend metrics      Detect any performance issues with your Vault s storage backend       Metrics    vault  STORAGE  get    vault  STORAGE  put    vault  STORAGE  list    vault  STORAGE  delete        Background   The performance of the storage backend affects the overall performance of the Vault  therefore  it is critical to  monitor the performance of your storage backend so that you can detect and react to any anomaly  Backend monitoring allows you to ensure that your storage infrastructure is functioning optimally  Tracking performance lets you gather detailed information and insights about the backend s operations and identify areas requiring improvement or optimization        Alerting  If Vault takes 
longer to access the backend for operations like retrieving   vault  STORAGE  get    storing   vault  STORAGE  put    listing   vault  STORAGE  list    or deleting   vault  STORAGE  delete   items  the Vault clients may be experiencing delays caused by storage limitations  To address this issue  you can set up alerts that will notify your team automatically when Vault s access to the storage backend slows down  This will allow you to take action  such as upgrading to disks with better input output  I O  performance  before the increased latency negatively impacts your application users  experience   If you are using Integrated Storage  the following resources provide additional guidance     Inspect Data in Integrated Storage   vault tutorials monitoring inspect data integrated storage     Inspect Data in BoltDB   vault tutorials monitoring inspect data boltdb      Audit metrics       Blocked audit devices       Metrics    vault audit log request failure    vault audit log response failure        Background   Audit devices play a crucial role in meeting compliance requirements by recording a comprehensive audit log of requests and responses from Vault  For a production deployment  your Vault cluster should have at least one audit device enabled so that you can trace all incoming requests and outgoing responses associated with your cluster  If you rely on only one audit device and encounter problems  e g   network connection loss or permission issues   Vault can become unresponsive and cease to handle requests  Enabling at least one  additional audit device is essential to ensure uninterrupted functionality and responsiveness from Vault        Alerting  To ensure smooth operation  monitoring any unusual increases in audit log request failures   vault audit log request failure   and response failures   vault audit log response failure   is important  These failures could indicate a device blockage  If such issues arise  examining the audit logs can help identify the 
problematic device and provide additional clues about the underlying problem   If Vault is unable to write audit logs to the syslog  the server will generate error logs similar to the following example      plaintext 2020 10 20T12 34 56 290Z  ERROR  audit  backend failed to log response  backend syslog  error  write unixgram     test log  write  message too long  2020 10 20T12 34 56 291Z  ERROR  core  failed to audit response  request path sys mounts  error  1 error occurred            no audit backend succeeded in logging the response       You should expect to encounter a pair of errors from the audit and core modules for each failed log response  If you receive an error message containing  write  message too long   it suggests that the entries that Vault is trying to write to the syslog audit device exceed the size of the syslog host s socket send buffer  In such cases  it s necessary to investigate what is causing the generation of large log entries  such as an extensive list of Active Directory or LDAP groups   Refer to the  Blocked audit devices   vault tutorials monitoring blocked audit devices  tutorial for additional guidance      Resource metrics      Vault memory issues indicated by garbage collection       Metrics    vault runtime sys bytes    vault runtime gc pause ns        Background   Garbage collection in the Go runtime temporarily pauses all operations  These pauses are usually brief  but garbage collection happens more often if memory usage is high  This increased frequency of garbage collection can cause delays in Vault s performance        Alerting  Analyzing the relationship between Vault s memory usage  represented as a percentage of total available memory on the host  and garbage collection pause time  measured by  vault runtime gc pause ns   can provide valuable insights into resource limitations and assist in effectively allocating compute resources   To illustrate  when the  vault runtime sys bytes  exceeds 90 percent of the available 
memory on the host  it is advisable to add more memory to prevent performance degradation  Additionally  you should set  up an alert that triggers if the GC pause time exceeds 5 seconds per minute  This alert will promptly notify you  enabling swift action to address the issue       CPU I O wait time       Background   Vault scales horizontally by adding more instances or nodes  but there are still practical limits to scalability  Excessive CPU wait time for I O operations can indicate that the system is reaching its scalability limits or overusing specific resources  By tracking these metrics  administrators can assess the system s scalability and take appropriate actions  such as optimizing I O operations or adding additional resources  to maintain performance as the system grows        Alerting  We recommend keeping the I O wait time below 10 percent to ensure optimal performance  If you notice excessively long wait times  it indicates that clients are experiencing delays while waiting for Vault to respond to their requests  This delay can negatively impact the performance of applications that rely on Vault  In such situations  evaluating if your resources are properly configured to handle your workload and if the requests are evenly distributed across all CPUs is necessary  These steps will help address potential performance issues and ensure the smooth operation of Vault and its dependent applications        Keep your network throughput within the threshold       Background   Monitoring the network throughput of your Vault clusters allows you to gauge their workload  A sudden decrease in traffic going in or out might indicate communication problems between Vault and its clients or dependencies  Conversely  if you observe an unexpected surge in network activity  it could be a sign of a denial of service  DoS  attack  Knowing these network patterns can provide valuable insights and help you identify potential issues or security threats        Alerting  Starting 
from Vault 1 5  you can set rate limit quotas to ensure Vault s overall stability and health  When a server reaches this threshold  Vault will reject any new client requests and respond with an HTTP 429 error  specifically  Too Many Requests   These rejected requests will be recorded in your audit logs and display a message like this example   error  request path kv app test  rate limit quota exceeded   Choosing an appropriate limit for the rate quota is important so that it doesn t block legitimate requests and cause slowdowns in your applications  To monitor the frequency of these breaches and adjust your limit accordingly  you can keep an eye on the metric called quota rate limit violation  which increments with each violation of the rate limit quota   Refer to the  Protecting Vault with resource quotas   vault tutorials operations resource quotas  tutorial to learn how to set the rate limit quotas for your Vault      Replication metrics      Free memory in the storage backend by monitoring Write Ahead logs       Metrics    vault wal flushready    vault wal persistWALs        Background   To maintain high performance  Vault utilizes a garbage collector that periodically removes old Write Ahead Logs  WALs  to free up memory on the storage backend  However  when there are unexpected surges in traffic  the accumulation of WALs can occur rapidly  leading to increased strain on the storage backend  These surges can negatively affect other processes in Vault that rely on the same storage backend  Therefore  it is important to assess the impact of replication on the performance of your storage backend  By doing so  you can better understand how the replication process influences your system s overall performance        Alerting  We recommend you keep track of two metrics   vault wal flushready  and  vault wal persistWALs   The first metric measures the time it takes to flush a ready Write Ahead Log  WAL  to the persist queue  while the second metric measures the time 
it takes to persist a WAL to the storage backend   To ensure efficient performance  we advise you to set up alerts that will notify you when the  vault wal flushready  metric exceeds 500 milliseconds or when the  vault wal persistWALs  metric surpasses 1 000 milliseconds  These alerts serve as indicators that backpressure is slowing down your storage backend   If either of these alerts is triggered  consider scaling your storage backend to accommodate the increased workload  Scaling can help alleviate the strain and maintain optimal performance        Vault Enterprise Replication health check       Metrics    vault replication wal last wal        Background   Vault s Write Ahead Log  WAL  is a durable data storage and recovery mechanism  WAL is a log file that records all changes made to the Vault data store before Vault persists them to the underlying storage backend  The WAL provides an extra layer of reliability and ensures data integrity in case of system failures or crashes        Alerting  When you have Vault Enterprise deployments with Performance Replication and or Disaster Recovery Replication configured  you want to monitor that the data gets replicated from the primary to the secondary clusters  To detect if your primary and secondary clusters are losing synchronization  you can compare the last Write Ahead Log  WAL  index on both clusters  It s important to detect discrepancies between them because if the secondary clusters are significantly behind the primary and the primary cluster becomes unavailable  any requests made to Vault will yield outdated data  Therefore  if you notice missing values in the WAL  it s essential to investigate potential causes  which may include  Network issues between the primary and secondary clusters  Problems with the network connection can hinder the proper replication of data between the clusters  Resource limitations on the primary or secondary systems  If the primary or secondary clusters are experiencing resource 
constraints  it can affect their ability to replicate data effectively   Issues with specific keys  Sometimes  the problem may relate to specific keys within the Vault  To identify such issues  examine the Vault s operational and storage logs  which will provide detailed information about the problematic keys causing the synchronization gaps   Refer to the  Monitoring Vault replication   vault tutorials monitoring monitor replication  tutorial to learn more      Additional references     Monitor telemetry with Prometheus   Grafana   vault tutorials monitoring monitor telemetry grafana prometheus     Monitor telemetry   Audit Device Log Data   vault tutorials monitoring monitor telemetry audit splunk     Vault usage metrics   vault tutorials monitoring usage metrics "}
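The alert thresholds given in the record above for `vault.wal_flushready` (500 milliseconds) and `vault.wal.persistWALs` (1,000 milliseconds) translate directly into an alert check. A minimal sketch, assuming latency samples in milliseconds have already been collected from your telemetry pipeline; the `wal_alerts` helper and the sample values are hypothetical, while the metric names and thresholds come from the guidance above:

```python
# Thresholds from the guidance above (milliseconds).
WAL_THRESHOLDS_MS = {
    "vault.wal_flushready": 500,
    "vault.wal.persistWALs": 1000,
}

def wal_alerts(samples: dict[str, float]) -> list[str]:
    """Return an alert line for each WAL metric exceeding its threshold."""
    alerts = []
    for metric, limit in WAL_THRESHOLDS_MS.items():
        value = samples.get(metric)
        if value is not None and value > limit:
            alerts.append(
                f"{metric}={value}ms exceeds {limit}ms: backpressure on the "
                "storage backend; consider scaling"
            )
    return alerts

# Hypothetical observations: flushing is healthy, persisting is slow.
print(wal_alerts({"vault.wal_flushready": 120, "vault.wal.persistWALs": 1500}))
```

A check like this would fire only for the `persistWALs` sample above, matching the advice to scale the storage backend when either threshold is breached.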
{"questions":"vault page title Enable Vault telemetry Collect telemetry data from your Vault installation layout docs Step by step guide to enabling telemetry gathering with Vault Enable Vault telemetry gathering","answers":"---\nlayout: docs\npage_title: Enable Vault telemetry\ndescription: >-\n  Step-by-step guide to enabling telemetry gathering with Vault\n---\n\n# Enable Vault telemetry gathering\n\nCollect telemetry data from your Vault installation.\n\n## Before you start\n\n- **You must have Vault 1.14 or later installed and running**.\n- **You must have access to your [Vault configuration](\/vault\/docs\/configuration) file**.\n\n\n## Step 1: Choose an aggregation agent\n\n@include 'telemetry\/supported-aggregation-agents.mdx'\n\n## Step 2: Enable at least one audit device\n\nTo include audit-related metrics, you must enable auditing on at least one device\nwith the `vault audit enable` command. For example, to enable auditing for the\n`file` device and save the logs to `\/var\/log\/vault_audit.log`:\n\n```shell-session\n$ vault audit enable file file_path=\/var\/log\/vault_audit.log\n```\n\nBy default, Enterprise installations replicate audit devices to the secondary\nperformance nodes in a cluster. 
To limit performance replication for an audit\ndevice, use the `-local` flag to mark the device as local to the current node:\n\n```shell-session\n$ vault audit enable -local file file_path=\/var\/log\/vault_audit.log\n```\n\n\n## Step 3: Configure telemetry collection\n\nTo configure telemetry collection, update the telemetry stanza in your Vault\nconfiguration with your collection preferences and aggregation agent details.\n\nFor example, the following `telemetry` stanza configures Vault with the standard\ntelemetry defaults and connects it to a Statsite agent running on the default\nport within a company intranet at `mycompany.statsite`:\n\n```hcl\ntelemetry {\n  usage_gauge_period = \"10m\"\n  maximum_gauge_cardinality = 500\n  disable_hostname = false\n  enable_hostname_label = false\n  lease_metrics_epsilon = \"1h\"\n  num_lease_metrics_buckets = 168\n  add_lease_metrics_namespace_labels = false\n  filter_default = true\n\n  statsite_address = \"mycompany.statsite:8125\"\n}\n```\n\nMany metrics solutions charge by the metric. You can set `filter_default` to\nfalse and use the `prefix_filter` parameter to include and exclude specific\nvalues based on metric name to avoid paying for irrelevant information.\n\nFor example, to limit your telemetry to the core token metrics plus the number\nof leases set to expire:\n\n```hcl\ntelemetry {\n  filter_default = false\n  prefix_filter = [\"+vault.token\", \"-vault.expire\", \"+vault.expire.num_leases\"]\n}\n```\n\n## Step 4: Choose a reporting solution\n\nYou need to save or forward your telemetry data to a separate storage solution\nfor reporting, analysis, and alerting. 
Which solution you need depends on the\nfeature set provided by your aggregation agent and the protocol support of your\nreporting platform.\n\nPopular reporting solutions compatible with Vault:\n\n- [Grafana](https:\/\/grafana.com\/grafana)\n- [Graphite](https:\/\/www.hostedgraphite.com)\n- [InfluxData: Telegraf](https:\/\/www.influxdata.com\/time-series-platform\/telegraf)\n- [InfluxData: InfluxDB](https:\/\/www.influxdata.com\/products\/influxdb-overview)\n- [InfluxData: Chronograf](https:\/\/www.influxdata.com\/time-series-platform\/chronograf)\n- [InfluxData: Kapacitor](https:\/\/www.influxdata.com\/time-series-platform\/kapacitor)\n- [Splunk](https:\/\/www.splunk.com)\n\n\n## Next steps\n\n- Review the\n  [Key metrics for common health checks](\/well-architected-framework\/reliability\/reliability-vault-monitoring-key-metrics)\n  guide to identify metrics you may want to start monitoring immediately.\n- Review the full list of available\n  [telemetry parameters](\/vault\/docs\/configuration\/telemetry#telemetry-parameters).\n- Review the [Monitor telemetry and audit device log data](\/vault\/tutorials\/monitoring\/monitor-telemetry-audit-splunk)\n  tutorial for general monitoring guidance and steps to configure your\n  Vault telemetry for Splunk using Telegraf and Fluentd.\n- Review the\n  [Monitor telemetry with Prometheus and Grafana](\/vault\/tutorials\/monitoring\/monitor-telemetry-grafana-prometheus)\n  tutorial to configure your Vault telemetry for Prometheus and Grafana.","site":"vault","answers_cleaned":"    layout  docs page title  Enable Vault telemetry description       Step by step guide to enabling telemetry gathering with Vault        Enable Vault telemetry gathering  Collect telemetry data from your Vault installation      Before you start      You must have Vault 1 14 or later installed and running        You must have access to your  Vault configuration   vault docs configuration  file         Step 1  Choose an aggregation agent   include  
telemetry\/supported-aggregation-agents.mdx'\n\n## Step 2: Enable at least one audit device\n\nTo include audit-related metrics, you must enable auditing on at least one device with the `vault audit enable` command. For example, to enable auditing for the `file` device and save the logs to `\/var\/log\/vault\/audit.log`:\n\n```shell-session\n$ vault audit enable file file_path=\/var\/log\/vault\/audit.log\n```\n\nBy default, Enterprise installations replicate audit devices to the secondary performance nodes in a cluster. To limit performance replication for an audit device, use the `-local` flag to mark the device as local to the current node:\n\n```shell-session\n$ vault audit enable file -local file_path=\/var\/log\/vault\/audit.log\n```\n\n## Step 3: Configure telemetry collection\n\nTo configure telemetry collection, update the telemetry stanza in your Vault configuration with your collection preferences and aggregation agent details.\n\nFor example, the following `telemetry` stanza configures Vault with the standard telemetry defaults and connects it to a Statsite agent running on the default port within a company intranet at `mycompany.statsite`:\n\n```hcl\ntelemetry {\n  usage_gauge_period = \"10m\"\n  maximum_gauge_cardinality = 500\n  disable_hostname = false\n  enable_hostname_label = false\n  lease_metrics_epsilon = \"1h\"\n  num_lease_metrics_buckets = 168\n  add_lease_metrics_namespace_labels = false\n  filter_default = true\n\n  statsite_address = \"mycompany.statsite:8125\"\n}\n```\n\nMany metrics solutions charge by the metric. You can set `filter_default` to false and use the `prefix_filter` parameter to include and exclude specific values based on metric name to avoid paying for irrelevant information.\n\nFor example, to limit your telemetry to the core token metrics plus the number of leases set to expire:\n\n```hcl\ntelemetry {\n  filter_default = false\n  prefix_filter = [\"+vault.token\", \"-vault.expire\", \"+vault.expire.num_leases\"]\n}\n```\n\n## Step 4: Choose a reporting solution\n\nYou need to save or forward your telemetry data to a separate storage solution for reporting, analysis, and alerting. Which solution you need depends on the feature set provided by your aggregation agent and the protocol support of your reporting platform.\n\nPopular reporting solutions compatible with Vault:\n\n- [Grafana](https:\/\/grafana.com\/grafana)\n- [Graphite](https:\/\/www.hostedgraphite.com)\n- [InfluxData: Telegraf](https:\/\/www.influxdata.com\/time-series-platform\/telegraf)\n- [InfluxData: InfluxDB](https:\/\/www.influxdata.com\/products\/influxdb-overview)\n- [InfluxData: Chronograf](https:\/\/www.influxdata.com\/time-series-platform\/chronograf)\n- [InfluxData: Kapacitor](https:\/\/www.influxdata.com\/time-series-platform\/kapacitor)\n- [Splunk](https:\/\/www.splunk.com)\n\n## Next steps\n\n- Review the [Key metrics for common health checks](\/well-architected-framework\/reliability\/reliability-vault-monitoring-key-metrics) guide to identify metrics you may want to start monitoring immediately.\n- Review the full list of available [telemetry parameters](\/vault\/docs\/configuration\/telemetry#telemetry-parameters).\n- Review the [Monitor telemetry and audit device log data](\/vault\/tutorials\/monitoring\/monitor-telemetry-audit-splunk) tutorial for general monitoring guidance and steps to configure your Vault telemetry for Splunk using Telegraf and Fluentd.\n- Review the [Monitor telemetry with Prometheus and Grafana](\/vault\/tutorials\/monitoring\/monitor-telemetry-grafana-prometheus) tutorial to configure your Vault telemetry for Prometheus and Grafana.\n"}
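The `filter_default` and `prefix_filter` interaction described above can be sketched in Python. This is an illustration only, not Vault's implementation: Vault's underlying go-metrics library stores the rules in a radix tree and resolves overlapping rules by longest matching prefix, which the loop below approximates; the `allow_metric` helper name is hypothetical.

```python
# Sketch of metric prefix filtering as described in the telemetry stanza:
# each prefix_filter entry starts with '+' (include) or '-' (exclude);
# the longest matching prefix wins, falling back to filter_default.
# This mimics go-metrics' longest-prefix semantics; names are illustrative.

def allow_metric(name: str, prefix_filter: list[str], filter_default: bool) -> bool:
    best_len = -1
    allowed = filter_default
    for rule in prefix_filter:
        sign, prefix = rule[0], rule[1:]
        if name.startswith(prefix) and len(prefix) > best_len:
            best_len = len(prefix)
            allowed = (sign == "+")
    return allowed

# The example rules from Step 3: keep token metrics and the lease count,
# drop everything else under vault.expire.
rules = ["+vault.token", "-vault.expire", "+vault.expire.num_leases"]

print(allow_metric("vault.token.create", rules, False))       # True
print(allow_metric("vault.expire.revoke", rules, False))      # False
print(allow_metric("vault.expire.num_leases", rules, False))  # True (longer prefix wins)
print(allow_metric("vault.barrier.get", rules, False))        # False (filter_default)
```

Note how `+vault.expire.num_leases` overrides the broader `-vault.expire` exclusion because it is the longer match, which is what lets the stanza drop lease churn metrics while keeping the lease count gauge.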
{"questions":"vault Database telemetry page title Telemetry reference Database metrics Database telemetry provides general information about configured secrets engines layout docs Technical reference for database telemetry values","answers":"---\nlayout: docs\npage_title: \"Telemetry reference: Database metrics\"\ndescription: >-\n  Technical reference for database telemetry values.\n---\n\n# Database telemetry\n\nDatabase telemetry provides general information about configured secrets engines\nand databases.\n\n## Secrets database metrics\n\n@include 'telemetry-metrics\/secretsdb-intro.mdx'\n\n@include 'telemetry-metrics\/database\/close.mdx'\n\n@include 'telemetry-metrics\/database\/close\/error.mdx'\n\n@include 'telemetry-metrics\/database\/createuser.mdx'\n\n@include 'telemetry-metrics\/database\/createuser\/error.mdx'\n\n@include 'telemetry-metrics\/database\/initialize.mdx'\n\n@include 'telemetry-metrics\/database\/initialize\/error.mdx'\n\n@include 'telemetry-metrics\/database\/name\/close.mdx'\n\n@include 'telemetry-metrics\/database\/name\/close\/error.mdx'\n\n@include 'telemetry-metrics\/database\/name\/createuser.mdx'\n\n@include 'telemetry-metrics\/database\/name\/createuser\/error.mdx'\n\n@include 'telemetry-metrics\/database\/name\/initialize.mdx'\n\n@include 'telemetry-metrics\/database\/name\/initialize\/error.mdx'\n\n@include 'telemetry-metrics\/database\/name\/renewuser.mdx'\n\n@include 'telemetry-metrics\/database\/name\/renewuser\/error.mdx'\n\n@include 'telemetry-metrics\/database\/name\/revokeuser.mdx'\n\n@include 'telemetry-metrics\/database\/name\/revokeuser\/error.mdx'\n\n@include 'telemetry-metrics\/database\/renewuser.mdx'\n\n@include 'telemetry-metrics\/database\/renewuser\/error.mdx'\n\n@include 'telemetry-metrics\/database\/revokeuser.mdx'\n\n@include 'telemetry-metrics\/database\/revokeuser\/error.mdx'\n\n## Cockroach database\n\nMetrics related to your Cockroach database **storage backend**.\n\n@include 
'telemetry-metrics\/vault\/cockroachdb\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/cockroachdb\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/cockroachdb\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/cockroachdb\/put.mdx'\n\n## Couch database\n\nMetrics related to your Couch database **storage backend**.\n\n@include 'telemetry-metrics\/vault\/couchdb\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/couchdb\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/couchdb\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/couchdb\/put.mdx'\n\n## Dynamo database\n\nMetrics related to your Dynamo database **storage backend**.\n\n@include 'telemetry-metrics\/vault\/dynamodb\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/dynamodb\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/dynamodb\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/dynamodb\/put.mdx'\n\n## Google Cloud - Spanner\n\nMetrics related to your Spanner **storage backend**.\n\n@include 'telemetry-metrics\/vault\/spanner\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/lock\/lock.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/lock\/unlock.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/lock\/value.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/put.mdx'\n\n## Microsoft SQL Server (MSSQL)\n\nMetrics related to your SQL Server **storage backend**.\n\n@include 'telemetry-metrics\/vault\/mssql\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/mssql\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/mssql\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/mssql\/put.mdx'\n\n## MySQL\n\nMetrics related to your MySQL **storage backend**.\n\n@include 'telemetry-metrics\/vault\/mysql\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/mysql\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/mysql\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/mysql\/put.mdx'\n\n## 
PostgreSQL\n\nMetrics related to your PostgreSQL **storage backend**.\n\n@include 'telemetry-metrics\/vault\/postgres\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/postgres\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/postgres\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/postgres\/put.mdx","site":"vault","answers_cleaned":"    layout  docs page title   Telemetry reference  Database metrics  description       Technical reference for database telemetry values         Database telemetry  Database telemetry provides general information about configured secrets engines and databases      Secrets database metrics   include  telemetry metrics secretsdb intro mdx    include  telemetry metrics database close mdx    include  telemetry metrics database close error mdx    include  telemetry metrics database createuser mdx    include  telemetry metrics database createuser error mdx    include  telemetry metrics database initialize mdx    include  telemetry metrics database initialize error mdx    include  telemetry metrics database name close mdx    include  telemetry metrics database name close error mdx    include  telemetry metrics database name createuser mdx    include  telemetry metrics database name createuser error mdx    include  telemetry metrics database name initialize mdx    include  telemetry metrics database name initialize error mdx    include  telemetry metrics database name renewuser mdx    include  telemetry metrics database name renewuser error mdx    include  telemetry metrics database name revokeuser mdx    include  telemetry metrics database name revokeuser error mdx    include  telemetry metrics database renewuser mdx    include  telemetry metrics database renewuser error mdx    include  telemetry metrics database revokeuser mdx    include  telemetry metrics database revokeuser error mdx      Cockroach database  Metrics related to your Cockroach database   storage backend      include  telemetry metrics vault cockroachdb delete mdx    
include  telemetry metrics vault cockroachdb get mdx    include  telemetry metrics vault cockroachdb list mdx    include  telemetry metrics vault cockroachdb put mdx      Couch database  Metrics related to your Couch database   storage backend      include  telemetry metrics vault couchdb delete mdx    include  telemetry metrics vault couchdb get mdx    include  telemetry metrics vault couchdb list mdx    include  telemetry metrics vault couchdb put mdx      Dynamo database  Metrics related to your Dynamo database   storage backend      include  telemetry metrics vault dynamodb delete mdx    include  telemetry metrics vault dynamodb get mdx    include  telemetry metrics vault dynamodb list mdx    include  telemetry metrics vault dynamodb put mdx      Google Cloud   Spanner  Metrics related to your Spanner   storage backend      include  telemetry metrics vault spanner delete mdx    include  telemetry metrics vault spanner get mdx    include  telemetry metrics vault spanner list mdx    include  telemetry metrics vault spanner lock lock mdx    include  telemetry metrics vault spanner lock unlock mdx    include  telemetry metrics vault spanner lock value mdx    include  telemetry metrics vault spanner put mdx      Microsoft SQL Server  MSSQL   Metrics related to your SQL Server   storage backend      include  telemetry metrics vault mssql delete mdx    include  telemetry metrics vault mssql get mdx    include  telemetry metrics vault mssql list mdx    include  telemetry metrics vault mssql put mdx      MySQL  Metrics related to your MySQL   storage backend      include  telemetry metrics vault mysql delete mdx    include  telemetry metrics vault mysql get mdx    include  telemetry metrics vault mysql list mdx    include  telemetry metrics vault mysql put mdx      PostgreSQL  Metrics related to your PostgreSQL   storage backend      include  telemetry metrics vault postgres delete mdx    include  telemetry metrics vault postgres get mdx    include  telemetry metrics 
vault postgres list mdx    include  telemetry metrics vault postgres put mdx"}
{"questions":"vault page title Telemetry reference All metrics Full list of all telemetry values provided by Vault For completeness we provide a full list of available metrics below in All Vault telemetry metrics layout docs","answers":"---\nlayout: docs\npage_title: \"Telemetry reference: All metrics\"\ndescription: >-\n  Full list of all telemetry values provided by Vault.\n---\n\n# All Vault telemetry metrics\n\nFor completeness, we provide a full list of available metrics below in\nalphabetic order by name.\n\n## Full metric list\n\n@include 'telemetry-metrics\/database\/close.mdx'\n\n@include 'telemetry-metrics\/database\/close\/error.mdx'\n\n@include 'telemetry-metrics\/database\/createuser.mdx'\n\n@include 'telemetry-metrics\/database\/createuser\/error.mdx'\n\n@include 'telemetry-metrics\/database\/initialize.mdx'\n\n@include 'telemetry-metrics\/database\/initialize\/error.mdx'\n\n@include 'telemetry-metrics\/database\/name\/close.mdx'\n\n@include 'telemetry-metrics\/database\/name\/close\/error.mdx'\n\n@include 'telemetry-metrics\/database\/name\/createuser.mdx'\n\n@include 'telemetry-metrics\/database\/name\/createuser\/error.mdx'\n\n@include 'telemetry-metrics\/database\/name\/initialize.mdx'\n\n@include 'telemetry-metrics\/database\/name\/initialize\/error.mdx'\n\n@include 'telemetry-metrics\/database\/name\/renewuser.mdx'\n\n@include 'telemetry-metrics\/database\/name\/renewuser\/error.mdx'\n\n@include 'telemetry-metrics\/database\/name\/revokeuser.mdx'\n\n@include 'telemetry-metrics\/database\/name\/revokeuser\/error.mdx'\n\n@include 'telemetry-metrics\/database\/renewuser.mdx'\n\n@include 'telemetry-metrics\/database\/renewuser\/error.mdx'\n\n@include 'telemetry-metrics\/database\/revokeuser.mdx'\n\n@include 'telemetry-metrics\/database\/revokeuser\/error.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/cert_store_current_entry.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/cert_store_deleted_count.mdx'\n\n@include 
'telemetry-metrics\/secrets\/pki\/tidy\/cert_store_total_entries_remaining.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/cert_store_total_entries.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/duration.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/failure.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/revoked_cert_current_entry.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/revoked_cert_deleted_count.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/revoked_cert_total_entries_fixed_issuers.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/revoked_cert_total_entries_incorrect_issuers.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/revoked_cert_total_entries_remaining.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/revoked_cert_total_entries.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/start_time_epoch.mdx'\n\n@include 'telemetry-metrics\/secrets\/pki\/tidy\/success.mdx'\n\n@include 'telemetry-metrics\/vault\/audit\/device\/log_request.mdx'\n\n@include 'telemetry-metrics\/vault\/audit\/device\/log_response.mdx'\n\n@include 'telemetry-metrics\/vault\/audit\/log_request_failure.mdx'\n\n@include 'telemetry-metrics\/vault\/audit\/log_request.mdx'\n\n@include 'telemetry-metrics\/vault\/audit\/log_response_failure.mdx'\n\n@include 'telemetry-metrics\/vault\/audit\/log_response.mdx'\n\n@include 'telemetry-metrics\/vault\/audit\/sink_success.mdx'\n\n@include 'telemetry-metrics\/vault\/audit\/sink_failure.mdx'\n\n@include 'telemetry-metrics\/vault\/audit\/fallback_success.mdx'\n\n@include 'telemetry-metrics\/vault\/audit\/fallback_miss.mdx'\n\n@include 'telemetry-metrics\/vault\/autopilot\/failure_tolerance.mdx'\n\n@include 'telemetry-metrics\/vault\/autopilot\/healthy.mdx'\n\n@include 'telemetry-metrics\/vault\/autopilot\/node\/healthy.mdx'\n\n@include 'telemetry-metrics\/vault\/autosnapshots\/last\/success\/time.mdx'\n\n@include 
'telemetry-metrics\/vault\/autosnapshots\/percent\/maxspace\/used.mdx'\n\n@include 'telemetry-metrics\/vault\/autosnapshots\/rotate\/duration.mdx'\n\n@include 'telemetry-metrics\/vault\/autosnapshots\/save\/duration.mdx'\n\n@include 'telemetry-metrics\/vault\/autosnapshots\/save\/errors.mdx'\n\n@include 'telemetry-metrics\/vault\/autosnapshots\/snapshot\/size.mdx'\n\n@include 'telemetry-metrics\/vault\/autosnapshots\/total\/snapshot\/size.mdx'\n\n@include 'telemetry-metrics\/vault\/azure\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/azure\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/azure\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/azure\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/estimated_encryptions.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/cache\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/cache\/hit.mdx'\n\n@include 'telemetry-metrics\/vault\/cache\/miss.mdx'\n\n@include 'telemetry-metrics\/vault\/cache\/write.mdx'\n\n@include 'telemetry-metrics\/vault\/cassandra\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/cassandra\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/cassandra\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/cassandra\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/cockroachdb\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/cockroachdb\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/cockroachdb\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/cockroachdb\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/consul\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/consul\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/consul\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/consul\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/consul\/transaction.mdx'\n\n@include 
'telemetry-metrics\/vault\/core\/active.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/activity\/fragment_size.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/activity\/segment_write.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/check_token.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/fetch_acl_and_token.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/handle_login_request.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/handle_request.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/in_flight_requests.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/leadership_lost.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/leadership_setup_failed.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/license\/expiration_time_epoch.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/locked_users.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/mount_table\/num_entries.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/mount_table\/size.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/performance_standby.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/post_unseal.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/pre_seal.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/dr\/primary.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/dr\/secondary.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/performance\/primary.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/performance\/secondary.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/write_undo_logs.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/build_progress.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/build_total.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/reindex_stage.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/seal_internal.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/seal_with_request.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/step_down.mdx'\n\n@include 
'telemetry-metrics\/vault\/core\/unseal.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/unsealed.mdx'\n\n@include 'telemetry-metrics\/vault\/couchdb\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/couchdb\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/couchdb\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/couchdb\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/dynamodb\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/dynamodb\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/dynamodb\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/dynamodb\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/etcd\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/etcd\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/etcd\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/etcd\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/fetch_lease_times_by_token.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/fetch_lease_times.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/job_manager\/queue_length.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/job_manager\/total_jobs.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/lease_expiration.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/lease_expiration\/error.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/lease_expiration\/time_in_queue.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/leases\/by_expiration.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/num_irrevocable_leases.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/num_leases.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/register_auth.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/register.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/renew_token.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/renew.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/revoke_by_token.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/revoke_force.mdx'\n\n@include 'telemetry-metrics\/vault\/expire\/revoke_prefix.mdx'\n\n@include 
'telemetry-metrics\/vault\/expire\/revoke.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/lock\/lock.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/lock\/unlock.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/lock\/value.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/ha\/rpc\/client\/echo.mdx'\n\n@include 'telemetry-metrics\/vault\/ha\/rpc\/client\/echo\/errors.mdx'\n\n@include 'telemetry-metrics\/vault\/ha\/rpc\/client\/forward.mdx'\n\n@include 'telemetry-metrics\/vault\/ha\/rpc\/client\/forward\/errors.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/entity\/active\/monthly.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/entity\/active\/partial_month.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/entity\/active\/reporting_period.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/entity\/alias\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/entity\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/entity\/creation.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/num_entities.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/pki_acme\/monthly.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/pki_acme\/reporting_period.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/secret_sync\/monthly.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/secret_sync\/reporting_period.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/upsert_entity_txn.mdx'\n\n@include 'telemetry-metrics\/vault\/identity\/upsert_group_txn.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/buffer\/length.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/buffer\/max_length.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/buffer\/max_size.mdx'\n\n@include 
'telemetry-metrics\/vault\/logshipper\/buffer\/size.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/streamwals\/guard_found.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/streamwals\/missing_guard.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/streamwals\/scanned_entries.mdx'\n\n@include 'telemetry-metrics\/vault\/merkle\/flushdirty.mdx'\n\n@include 'telemetry-metrics\/vault\/merkle\/flushdirty\/num_pages.mdx'\n\n@include 'telemetry-metrics\/vault\/merkle\/flushdirty\/outstanding_pages.mdx'\n\n@include 'telemetry-metrics\/vault\/merkle\/savecheckpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/merkle\/savecheckpoint\/num_dirty.mdx'\n\n@include 'telemetry-metrics\/vault\/metrics\/collection.mdx'\n\n@include 'telemetry-metrics\/vault\/metrics\/collection\/error.mdx'\n\n@include 'telemetry-metrics\/vault\/metrics\/collection\/interval.mdx'\n\n@include 'telemetry-metrics\/vault\/mssql\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/mssql\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/mssql\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/mssql\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/mysql\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/mysql\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/mysql\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/mysql\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/policy\/delete_policy.mdx'\n\n@include 'telemetry-metrics\/vault\/policy\/get_policy.mdx'\n\n@include 'telemetry-metrics\/vault\/policy\/list_policies.mdx'\n\n@include 'telemetry-metrics\/vault\/policy\/set_policy.mdx'\n\n@include 'telemetry-metrics\/vault\/postgres\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/postgres\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/postgres\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/postgres\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/quota\/lease_count\/counter.mdx'\n\n@include 'telemetry-metrics\/vault\/quota\/lease_count\/max.mdx'\n\n@include 
'telemetry-metrics\/vault\/quota\/lease_count\/violation.mdx'\n\n@include 'telemetry-metrics\/vault\/quota\/rate_limit\/violation.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/cursor\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/freelist\/allocated_bytes.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/freelist\/free_pages.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/freelist\/pending_pages.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/freelist\/used_bytes.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/node\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/node\/dereferences.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/page\/bytes_allocated.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/page\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/rebalance\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/rebalance\/time.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/spill\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/spill\/time.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/split\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/transaction\/currently_open_read_transactions.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/transaction\/started_read_transactions.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/write\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/write\/time.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/follower\/applied_index_delta.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/follower\/last_heartbeat_ms.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/stats\/applied_index.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/stats\/commit_index.mdx'\n\n@include 
'telemetry-metrics\/vault\/raft_storage\/stats\/fsm_pending.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-storage\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-storage\/entry_size.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-storage\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-storage\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-storage\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-storage\/transaction.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/head-truncations.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/tail-truncations.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/log-entries-read.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/log-entries-written.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/log-entry-bytes-read.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/log-entry-bytes-written.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/stable-gets.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/stable-sets.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/log-appends.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/segment-rotations.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/last-segment-age-seconds.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/apply.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/barrier.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/candidate\/electself.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/commitnumlogs.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/committime.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/compactlogs.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/apply.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/applybatch.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/applybatchnum.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/enqueue.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/restore.mdx'\n\n@include 
'telemetry-metrics\/vault\/raft\/fsm\/snapshot.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/store_config.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/leader\/dispatchlog.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/leader\/dispatchnumlogs.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/leader\/lastcontact.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/peers.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/replication\/appendentries\/log.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/replication\/appendentries\/rpc.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/replication\/heartbeat.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/replication\/installsnapshot.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/restore.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/restoreusersnapshot.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/rpc\/appendentries.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/rpc\/appendentries\/processlogs.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/rpc\/appendentries\/storelogs.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/rpc\/installsnapshot.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/rpc\/processheartbeat.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/rpc\/requestvote.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/snapshot\/create.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/snapshot\/persist.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/snapshot\/takesnapshot.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/state\/candidate.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/state\/follower.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/state\/leader.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/transition\/heartbeat_timeout.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/transition\/leader_lease_timeout.mdx'\n\n@include 
'telemetry-metrics\/vault\/raft\/verify_leader.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/fetchremotekeys.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/fsm\/last_remote_wal.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/fsm\/last_upstream_remote_wal.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/merkle\/commit_index.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/merklediff.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/merklesync.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/conflicting_pages.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/create_token_register_auth_lease.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/fetch_keys.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/forward.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/guard_hash.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/persist_alias.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/register_auth.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/register_lease.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/save_mfa_response_auth.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/stream_wals.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/sub_page_hashes.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/sync_counter.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/upsert_group.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/wrap_in_cubbyhole.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/dr\/server\/echo.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/dr\/server\/fetch_keys_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/auth_request.mdx'\n\n@include 
'telemetry-metrics\/vault\/replication\/rpc\/server\/bootstrap_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/conflicting_pages_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/echo.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/last_heartbeat.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/forwarding_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/guard_hash_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/persist_alias_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/persist_persona_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/save_mfa_response_auth.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/stream_wals_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/sub_page_hashes_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/sync_counter_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/upsert_group_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/standby\/server\/create_token_register_auth_lease_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/standby\/server\/echo.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/standby\/server\/register_auth_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/standby\/server\/register_lease_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/standby\/server\/wrap_token_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/wal\/gc.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/wal\/last_dr_wal.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/wal\/last_performance_wal.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/wal\/last_wal.mdx'\n\n@include 
'telemetry-metrics\/vault\/rollback\/attempt\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/rollback\/attempt.mdx'\n\n@include 'telemetry-metrics\/vault\/rollback\/inflight.mdx'\n\n@include 'telemetry-metrics\/vault\/rollback\/queued.mdx'\n\n@include 'telemetry-metrics\/vault\/rollback\/waiting.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/create\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/delete\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/list\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/read\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/rollback\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/rollback.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/alloc_bytes.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/free_count.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/gc_pause_ns.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/heap_objects.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/malloc_count.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/num_goroutines.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/sys_bytes.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/total_gc_pause_ns.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/total_gc_runs.mdx'\n\n@include 'telemetry-metrics\/vault\/s3\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/s3\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/s3\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/s3\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/secret\/kv\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/secret\/lease\/creation.mdx'\n\n@include 'telemetry-metrics\/vault\/secrets-sync\/destinations.mdx'\n\n@include 'telemetry-metrics\/vault\/secrets-sync\/associations.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/list.mdx'\n\n@include 
'telemetry-metrics\/vault\/spanner\/lock\/lock.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/lock\/unlock.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/lock\/value.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/swift\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/swift\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/swift\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/swift\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/count\/by_auth.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/count\/by_policy.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/count\/by_ttl.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/create_root.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/create.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/createaccessor.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/creation.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/lookup.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/revoke_tree.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/revoke.mdx'\n\n@include 'telemetry-metrics\/vault\/token\/store.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/deletewals.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/flushready.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/flushready\/queue_len.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/gc\/deleted.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/gc\/total.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/loadwal.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/persistwals.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/write_controller\/d.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/write_controller\/i.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/write_controller\/p.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/write_controller\/reject_fraction.mdx'\n\n@include 'telemetry-metrics\/vault\/zookeeper\/delete.mdx'\n\n@include 
'telemetry-metrics\/vault\/zookeeper\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/zookeeper\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/zookeeper\/put.mdx'","site":"vault"}
{"questions":"vault Availability telemetry provides information about standby and active nodes in Technical reference for availability related telemetry values Availability telemetry layout docs page title Telemetry reference Availability","answers":"---\nlayout: docs\npage_title: \"Telemetry reference: Availability\"\ndescription: >-\n  Technical reference for availability related telemetry values.\n---\n\n# Availability telemetry\n\nAvailability telemetry provides information about standby and active nodes in\nyour Vault instance. Enterprise installations also include\n[replication](\/vault\/docs\/enterprise\/replication) metrics.\n\n## Default metrics\n\n@include 'telemetry-metrics\/vault\/ha\/rpc\/client\/echo.mdx'\n\n@include 'telemetry-metrics\/vault\/ha\/rpc\/client\/echo\/errors.mdx'\n\n@include 'telemetry-metrics\/vault\/ha\/rpc\/client\/forward.mdx'\n\n@include 'telemetry-metrics\/vault\/ha\/rpc\/client\/forward\/errors.mdx'\n\n## Merkle tree metrics\n\n@include 'telemetry-metrics\/vault\/merkle\/flushdirty.mdx'\n\n@include 'telemetry-metrics\/vault\/merkle\/flushdirty\/num_pages.mdx'\n\n@include 'telemetry-metrics\/vault\/merkle\/flushdirty\/outstanding_pages.mdx'\n\n@include 'telemetry-metrics\/vault\/merkle\/savecheckpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/merkle\/savecheckpoint\/num_dirty.mdx'\n\n## Write-ahead log (WAL) telemetry\n\n@include 'telemetry-metrics\/vault\/wal\/deletewals.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/flushready.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/flushready\/queue_len.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/gc\/deleted.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/gc\/total.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/loadwal.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/persistwals.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/write_controller\/d.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/write_controller\/i.mdx'\n\n@include 
'telemetry-metrics\/vault\/wal\/write_controller\/p.mdx'\n\n@include 'telemetry-metrics\/vault\/wal\/write_controller\/reject_fraction.mdx'\n\n## Log shipping metrics\n\n@include 'telemetry-metrics\/vault\/logshipper\/buffer\/length.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/buffer\/max_length.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/buffer\/max_size.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/buffer\/size.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/streamwals\/guard_found.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/streamwals\/missing_guard.mdx'\n\n@include 'telemetry-metrics\/vault\/logshipper\/streamwals\/scanned_entries.mdx'\n\n## Replication metrics <EnterpriseAlert product=\"vault\" inline \/>\n\n@include 'telemetry-metrics\/replication-note.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/fetchremotekeys.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/fsm\/last_remote_wal.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/fsm\/last_upstream_remote_wal.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/merkle\/commit_index.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/merklediff.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/merklesync.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/conflicting_pages.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/create_token_register_auth_lease.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/fetch_keys.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/forward.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/guard_hash.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/persist_alias.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/register_auth.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/register_lease.mdx'\n\n@include 
'telemetry-metrics\/vault\/replication\/rpc\/client\/save_mfa_response_auth.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/stream_wals.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/sub_page_hashes.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/sync_counter.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/upsert_group.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/client\/wrap_in_cubbyhole.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/dr\/server\/echo.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/dr\/server\/fetch_keys_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/auth_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/bootstrap_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/conflicting_pages_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/echo.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/last_heartbeat.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/forwarding_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/guard_hash_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/persist_alias_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/persist_persona_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/save_mfa_response_auth.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/stream_wals_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/sub_page_hashes_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/sync_counter_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/server\/upsert_group_request.mdx'\n\n@include 
'telemetry-metrics\/vault\/replication\/rpc\/standby\/server\/create_token_register_auth_lease_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/standby\/server\/echo.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/standby\/server\/register_auth_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/standby\/server\/register_lease_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/rpc\/standby\/server\/wrap_token_request.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/wal\/gc.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/wal\/last_dr_wal.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/wal\/last_performance_wal.mdx'\n\n@include 'telemetry-metrics\/vault\/replication\/wal\/last_wal.mdx'","site":"vault"}
{"questions":"vault page title Telemetry reference Core system metrics Core system telemetry Core system telemetry provides information about the operational health of your layout docs Technical reference for core system telemetry values","answers":"---\nlayout: docs\npage_title: \"Telemetry reference: Core system metrics\"\ndescription: >-\n  Technical reference for core system telemetry values.\n---\n\n# Core system telemetry\n\nCore system telemetry provides information about the operational health of your\nVault instance.\n\n## Default metrics\n\n@include 'telemetry-metrics\/vault\/core\/active.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/activity\/fragment_size.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/activity\/segment_write.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/check_token.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/fetch_acl_and_token.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/handle_login_request.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/handle_request.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/in_flight_requests.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/leadership_lost.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/leadership_setup_failed.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/license\/expiration_time_epoch.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/locked_users.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/mount_table\/num_entries.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/mount_table\/size.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/performance_standby.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/dr\/primary.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/dr\/secondary.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/performance\/primary.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/replication\/performance\/secondary.mdx'\n\n@include 
'telemetry-metrics\/vault\/core\/replication\/write_undo_logs.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/step_down.mdx'\n\n## Barrier metrics\n\n@include 'telemetry-metrics\/vault\/barrier\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/estimated_encryptions.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/put.mdx'\n\n## Caching metrics\n\n@include 'telemetry-metrics\/vault\/cache\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/cache\/hit.mdx'\n\n@include 'telemetry-metrics\/vault\/cache\/miss.mdx'\n\n@include 'telemetry-metrics\/vault\/cache\/write.mdx'\n\n## Metric collection metrics\n\n@include 'telemetry-metrics\/vault\/metrics\/collection.mdx'\n\n@include 'telemetry-metrics\/vault\/metrics\/collection\/error.mdx'\n\n@include 'telemetry-metrics\/vault\/metrics\/collection\/interval.mdx'\n\n## Quota metrics\n\n@include 'telemetry-metrics\/quota-intro.mdx'\n\n@include 'telemetry-metrics\/vault\/quota\/lease_count\/counter.mdx'\n\n@include 'telemetry-metrics\/vault\/quota\/lease_count\/max.mdx'\n\n@include 'telemetry-metrics\/vault\/quota\/lease_count\/violation.mdx'\n\n@include 'telemetry-metrics\/vault\/quota\/rate_limit\/violation.mdx'\n\n## Request limiter metrics\n\n@include 'telemetry-metrics\/request-limiter-intro.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/request-limiter\/write.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/request-limiter\/special_path.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/request-limiter\/service_unavailable.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/request-limiter\/success.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/request-limiter\/dropped.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/request-limiter\/ignored.mdx'\n\n## Rollback metrics\n\n@include 'telemetry-metrics\/rollback-intro.mdx'\n\n@include 
'telemetry-metrics\/vault\/rollback\/attempt\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/rollback\/attempt.mdx'\n\n@include 'telemetry-metrics\/vault\/rollback\/inflight.mdx'\n\n@include 'telemetry-metrics\/vault\/rollback\/queued.mdx'\n\n@include 'telemetry-metrics\/vault\/rollback\/waiting.mdx'\n\n## Route metrics\n\n@include 'telemetry-metrics\/route-intro.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/create\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/delete\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/list\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/read\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/rollback\/mountpoint.mdx'\n\n@include 'telemetry-metrics\/vault\/route\/rollback.mdx'\n\n## Runtime metrics\n\n@include 'telemetry-metrics\/runtime-note.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/alloc_bytes.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/free_count.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/gc_pause_ns.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/heap_objects.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/malloc_count.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/num_goroutines.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/sys_bytes.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/total_gc_pause_ns.mdx'\n\n@include 'telemetry-metrics\/vault\/runtime\/total_gc_runs.mdx'\n\n## Seal metrics\n\n@include 'telemetry-metrics\/vault\/core\/post_unseal.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/pre_seal.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/seal_encrypt.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/seal_decrypt.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/seal_internal.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/seal_unreachable.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/seal_with_request.mdx'\n\n@include 'telemetry-metrics\/vault\/core\/unseal.mdx'\n\n@include 
'telemetry-metrics\/vault\/core\/unsealed.mdx'","site":"vault"}
{"questions":"vault page title Telemetry reference Raft metrics Technical reference for integrated storage telemetry values Raft telemetry layout docs Raft telemetry provides information on","answers":"---\nlayout: docs\npage_title: \"Telemetry reference: Raft metrics\"\ndescription: >-\n  Technical reference for integrated storage telemetry values.\n---\n\n# Raft telemetry\n\nRaft telemetry provides information on\nVault [integrated storage](\/vault\/docs\/configuration\/storage\/raft).\n\n## Default metrics\n\n@include 'telemetry-metrics\/vault\/raft\/apply.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/barrier.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/candidate\/electself.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/commitnumlogs.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/committime.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/compactlogs.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/apply.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/applybatch.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/applybatchnum.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/enqueue.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/restore.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/snapshot.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/fsm\/store_config.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/peers.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/restore.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/restoreusersnapshot.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/rpc\/appendentries.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/rpc\/appendentries\/processlogs.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/rpc\/appendentries\/storelogs.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/rpc\/installsnapshot.mdx'\n\n@include 
'telemetry-metrics\/vault\/raft\/rpc\/processheartbeat.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/rpc\/requestvote.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/snapshot\/create.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/snapshot\/persist.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/snapshot\/takesnapshot.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/state\/candidate.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/state\/follower.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/state\/leader.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/transition\/heartbeat_timeout.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/transition\/leader_lease_timeout.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/verify_leader.mdx'\n\n## Autopilot metrics\n\n@include 'telemetry-metrics\/raft-autopilot-note.mdx'\n\n@include 'telemetry-metrics\/vault\/autopilot\/failure_tolerance.mdx'\n\n@include 'telemetry-metrics\/vault\/autopilot\/healthy.mdx'\n\n@include 'telemetry-metrics\/vault\/autopilot\/node\/healthy.mdx'\n\n## Leadership change metrics\n\n@include 'telemetry-metrics\/raft-leadership-intro.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/leader\/dispatchlog.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/leader\/dispatchnumlogs.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/leader\/lastcontact.mdx'\n\n## Raft replication metrics\n\n@include 'telemetry-metrics\/vault\/raft\/replication\/appendentries\/log.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/replication\/appendentries\/rpc.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/replication\/heartbeat.mdx'\n\n@include 'telemetry-metrics\/vault\/raft\/replication\/installsnapshot.mdx'\n\n## Storage metrics\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/cursor\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/freelist\/allocated_bytes.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/freelist\/free_pages.mdx'\n\n@include 
'telemetry-metrics\/vault\/raft_storage\/bolt\/freelist\/pending_pages.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/freelist\/used_bytes.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/node\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/node\/dereferences.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/page\/bytes_allocated.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/page\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/rebalance\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/rebalance\/time.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/spill\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/spill\/time.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/split\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/transaction\/currently_open_read_transactions.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/transaction\/started_read_transactions.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/write\/count.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/bolt\/write\/time.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/follower\/applied_index_delta.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/follower\/last_heartbeat_ms.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/stats\/applied_index.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/stats\/commit_index.mdx'\n\n@include 'telemetry-metrics\/vault\/raft_storage\/stats\/fsm_pending.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-storage\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-storage\/entry_size.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-storage\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-storage\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-storage\/put.mdx'\n\n@include 
'telemetry-metrics\/vault\/raft-storage\/transaction.mdx'\n\n## Write-ahead logging (WAL) metrics\n\n@include 'telemetry-metrics\/vault\/raft-wal\/head-truncations.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/tail-truncations.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/log-entries-read.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/log-entries-written.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/log-entry-bytes-read.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/log-entry-bytes-written.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/stable-gets.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/stable-sets.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/log-appends.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/segment-rotations.mdx'\n\n@include 'telemetry-metrics\/vault\/raft-wal\/last-segment-age-seconds.mdx'","site":"vault"}
{"questions":"vault Technical reference for individual storage plugin telemetry values Storage plugin telemetry Storage telemetry provides information on the health of Vault storage and your page title Telemetry reference Storage plugin metrics layout docs","answers":"---\nlayout: docs\npage_title: \"Telemetry reference: Storage plugin metrics\"\ndescription: >-\n  Technical reference for individual storage plugin telemetry values.\n---\n\n# Storage plugin telemetry\n\nStorage telemetry provides information on the health of Vault storage and your\nconfigured storage backends. For integrated storage metrics, refer to the\n[Raft telemetry](\/vault\/docs\/internals\/metrics\/raft) metric list.\n\n## Barrier metrics\n\n@include 'telemetry-metrics\/vault\/barrier\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/estimated_encryptions.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/barrier\/put.mdx'\n\n## Caching metrics\n\n@include 'telemetry-metrics\/vault\/cache\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/cache\/hit.mdx'\n\n@include 'telemetry-metrics\/vault\/cache\/miss.mdx'\n\n@include 'telemetry-metrics\/vault\/cache\/write.mdx'\n\n## Amazon S3 metrics\n\n@include 'telemetry-metrics\/vault\/s3\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/s3\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/s3\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/s3\/put.mdx'\n\n## Azure metrics\n\n@include 'telemetry-metrics\/vault\/azure\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/azure\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/azure\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/azure\/put.mdx'\n\n## Cassandra metrics\n\n@include 'telemetry-metrics\/vault\/cassandra\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/cassandra\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/cassandra\/list.mdx'\n\n@include 
'telemetry-metrics\/vault\/cassandra\/put.mdx'\n\n## Cockroach database metrics\n\n@include 'telemetry-metrics\/vault\/cockroachdb\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/cockroachdb\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/cockroachdb\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/cockroachdb\/put.mdx'\n\n## Consul metrics\n\n@include 'telemetry-metrics\/vault\/consul\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/consul\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/consul\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/consul\/put.mdx'\n\n@include 'telemetry-metrics\/vault\/consul\/transaction.mdx'\n\n## Couch database metrics\n\n@include 'telemetry-metrics\/vault\/couchdb\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/couchdb\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/couchdb\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/couchdb\/put.mdx'\n\n## Dynamo database metrics\n\n@include 'telemetry-metrics\/vault\/dynamodb\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/dynamodb\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/dynamodb\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/dynamodb\/put.mdx'\n\n## Etcd metrics\n\n@include 'telemetry-metrics\/vault\/etcd\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/etcd\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/etcd\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/etcd\/put.mdx'\n\n## Google Cloud metrics\n\n@include 'telemetry-metrics\/vault\/gcs\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/lock\/lock.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/lock\/unlock.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/lock\/value.mdx'\n\n@include 'telemetry-metrics\/vault\/gcs\/put.mdx'\n\n## Google Cloud - Spanner metrics\n\n@include 'telemetry-metrics\/vault\/spanner\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/get.mdx'\n\n@include 
'telemetry-metrics\/vault\/spanner\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/lock\/lock.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/lock\/unlock.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/lock\/value.mdx'\n\n@include 'telemetry-metrics\/vault\/spanner\/put.mdx'\n\n## Microsoft SQL Server (MSSQL) metrics\n\n@include 'telemetry-metrics\/vault\/mssql\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/mssql\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/mssql\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/mssql\/put.mdx'\n\n## MySQL metrics\n\n@include 'telemetry-metrics\/vault\/mysql\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/mysql\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/mysql\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/mysql\/put.mdx'\n\n## PostgreSQL metrics\n\n@include 'telemetry-metrics\/vault\/postgres\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/postgres\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/postgres\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/postgres\/put.mdx'\n\n## Swift metrics\n\n@include 'telemetry-metrics\/vault\/swift\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/swift\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/swift\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/swift\/put.mdx'\n\n## ZooKeeper metrics\n\n@include 'telemetry-metrics\/vault\/zookeeper\/delete.mdx'\n\n@include 'telemetry-metrics\/vault\/zookeeper\/get.mdx'\n\n@include 'telemetry-metrics\/vault\/zookeeper\/list.mdx'\n\n@include 'telemetry-metrics\/vault\/zookeeper\/put.mdx","site":"vault","answers_cleaned":"    layout  docs page title   Telemetry reference  Storage plugin metrics  description       Technical reference for individual storage plugin telemetry values         Storage plugin telemetry  Storage telemetry provides information on the health of Vault storage and your configured storage backends  For integrated storage metrics  refer to the  Raft telemetry   vault docs internals 
metrics raft  metric list      Barrier metrics   include  telemetry metrics vault barrier delete mdx    include  telemetry metrics vault barrier estimated encryptions mdx    include  telemetry metrics vault barrier get mdx    include  telemetry metrics vault barrier list mdx    include  telemetry metrics vault barrier put mdx      Caching metrics   include  telemetry metrics vault cache delete mdx    include  telemetry metrics vault cache hit mdx    include  telemetry metrics vault cache miss mdx    include  telemetry metrics vault cache write mdx      Amazon S3 metrics   include  telemetry metrics vault s3 delete mdx    include  telemetry metrics vault s3 get mdx    include  telemetry metrics vault s3 list mdx    include  telemetry metrics vault s3 put mdx      Azure metrics   include  telemetry metrics vault azure delete mdx    include  telemetry metrics vault azure get mdx    include  telemetry metrics vault azure list mdx    include  telemetry metrics vault azure put mdx      Cassandra metrics   include  telemetry metrics vault cassandra delete mdx    include  telemetry metrics vault cassandra get mdx    include  telemetry metrics vault cassandra list mdx    include  telemetry metrics vault cassandra put mdx      Cockroach database metrics   include  telemetry metrics vault cockroachdb delete mdx    include  telemetry metrics vault cockroachdb get mdx    include  telemetry metrics vault cockroachdb list mdx    include  telemetry metrics vault cockroachdb put mdx      Consul metrics   include  telemetry metrics vault consul delete mdx    include  telemetry metrics vault consul get mdx    include  telemetry metrics vault consul list mdx    include  telemetry metrics vault consul put mdx    include  telemetry metrics vault consul transaction mdx      Couch database metrics   include  telemetry metrics vault couchdb delete mdx    include  telemetry metrics vault couchdb get mdx    include  telemetry metrics vault couchdb list mdx    include  telemetry metrics vault 
couchdb put mdx      Dynamo database metrics   include  telemetry metrics vault dynamodb delete mdx    include  telemetry metrics vault dynamodb get mdx    include  telemetry metrics vault dynamodb list mdx    include  telemetry metrics vault dynamodb put mdx      Etcd metrics   include  telemetry metrics vault etcd delete mdx    include  telemetry metrics vault etcd get mdx    include  telemetry metrics vault etcd list mdx    include  telemetry metrics vault etcd put mdx      Google Cloud metrics   include  telemetry metrics vault gcs delete mdx    include  telemetry metrics vault gcs get mdx    include  telemetry metrics vault gcs list mdx    include  telemetry metrics vault gcs lock lock mdx    include  telemetry metrics vault gcs lock unlock mdx    include  telemetry metrics vault gcs lock value mdx    include  telemetry metrics vault gcs put mdx      Google Cloud   Spanner metrics   include  telemetry metrics vault spanner delete mdx    include  telemetry metrics vault spanner get mdx    include  telemetry metrics vault spanner list mdx    include  telemetry metrics vault spanner lock lock mdx    include  telemetry metrics vault spanner lock unlock mdx    include  telemetry metrics vault spanner lock value mdx    include  telemetry metrics vault spanner put mdx      Microsoft SQL Server  MSSQL  metrics   include  telemetry metrics vault mssql delete mdx    include  telemetry metrics vault mssql get mdx    include  telemetry metrics vault mssql list mdx    include  telemetry metrics vault mssql put mdx      MySQL metrics   include  telemetry metrics vault mysql delete mdx    include  telemetry metrics vault mysql get mdx    include  telemetry metrics vault mysql list mdx    include  telemetry metrics vault mysql put mdx      PostgreSQL metrics   include  telemetry metrics vault postgres delete mdx    include  telemetry metrics vault postgres get mdx    include  telemetry metrics vault postgres list mdx    include  telemetry metrics vault postgres put mdx      
Swift metrics   include  telemetry metrics vault swift delete mdx    include  telemetry metrics vault swift get mdx    include  telemetry metrics vault swift list mdx    include  telemetry metrics vault swift put mdx      ZooKeeper metrics   include  telemetry metrics vault zookeeper delete mdx    include  telemetry metrics vault zookeeper get mdx    include  telemetry metrics vault zookeeper list mdx    include  telemetry metrics vault zookeeper put mdx"}
{"questions":"vault page title Audit Devices Audit devices are mountable devices that log requests and responses in Vault Audit devices are the components in Vault that collectively keep a detailed log of all layout docs requests to Vault and their responses Because every operation with Vault is an API Audit devices","answers":"---\nlayout: docs\npage_title: Audit Devices\ndescription: Audit devices are mountable devices that log requests and responses in Vault.\n---\n\n# Audit devices\n\nAudit devices are the components in Vault that collectively keep a detailed log of all\nrequests to Vault, and their responses. Because every operation with Vault is an API\nrequest\/response, when using a single audit device, the audit log contains _every_ interaction with\nthe Vault API, including errors - except for a few paths which do not go via the audit system.\n\nThe non-audited paths are:\n\n    sys\/init\n    sys\/seal-status\n    sys\/seal\n    sys\/step-down\n    sys\/unseal\n    sys\/leader\n    sys\/health\n    sys\/rekey\/init\n    sys\/rekey\/update\n    sys\/rekey\/verify\n    sys\/rekey-recovery-key\/init\n    sys\/rekey-recovery-key\/update\n    sys\/rekey-recovery-key\/verify\n    sys\/storage\/raft\/bootstrap\n    sys\/storage\/raft\/join\n    sys\/internal\/ui\/feature-flags\n\nand also, if the relevant listener configuration settings allow unauthenticated access:\n\n    sys\/metrics\n    sys\/pprof\/*\n    sys\/in-flight-req\n\n## Enabling multiple devices\n\nWhen multiple audit devices are enabled, Vault will attempt to send the audit\nlogs to all of them. This allows you to not only have redundant copies, but also\na way to check for data tampering in the logs themselves.\n\nVault considers a request to be successful if it can log to *at least* one\nconfigured audit device (see: [Blocked Audit\nDevices](\/vault\/docs\/audit#blocked-audit-devices) section below).  
Therefore, in order\nto build a complete picture of all audited actions, use the aggregate\/union of\nthe logs from each audit device.\n\n~> Note: It is **highly recommended** that you configure Vault to use multiple audit\ndevices. Audit failures can prevent Vault from servicing requests, so it is\nimportant to provide at least one other device.\n\n\n## Format\n\nEach line in the audit log is a JSON object. The `type` field specifies what\ntype of object it is. Currently, only two types exist: `request` and `response`.\nThe line contains all of the information for any given request and response. By\ndefault, all the sensitive information is first hashed before logging in the\naudit logs.\n\n## Sensitive information\n\nThe audit logs contain the full request and response objects for every\ninteraction with Vault. The request and response can be matched utilizing a\nunique identifier assigned to each request.\n\nMost strings contained within requests and responses are hashed with a salt using HMAC-SHA256.\nThe purpose of the hash is so that secrets aren't in plaintext within your audit logs.\nHowever, you're still able to check the value of secrets by generating HMACs yourself;\nthis can be done with the audit device's hash function and salt by using the `\/sys\/audit-hash`\nAPI endpoint (see the documentation for more details).\n\n~> Currently, only strings that come from JSON or are returned in JSON are\nHMAC'd. Other data types, like integers, booleans, and so on, are passed\nthrough in plaintext. 
We recommend that all sensitive data be provided as string values\ninside all JSON sent to Vault (i.e., that integer values are provided in quotes).\n\nWhile most strings are hashed, Vault can be configured to make some exceptions.\nFor example, in auth methods and secrets engines, users can enable additional exceptions\nusing the [secrets enable](\/vault\/docs\/commands\/secrets\/enable) command, and then\n[tune](\/vault\/docs\/commands\/secrets\/tune) it afterward.\n\n**see also**:\n\n[auth enable](\/vault\/docs\/commands\/auth\/enable)\n\n[auth tune](\/vault\/docs\/commands\/auth\/tune)\n\n## Audit request headers\n\nUse the [Vault API](\/vault\/api-docs\/system\/config-auditing) to configure request\nheaders to monitor and audit headers in incoming client requests.\n\nBy default, Vault **does not** encrypt request header values with HMAC if you\n[create](\/vault\/api-docs\/system\/config-auditing#create-update-audit-request-header)\nan exception to allow request headers in the audit log. 
To encrypt the header\nvalues, you must [configure](\/vault\/api-docs\/system\/config-auditing#hmac) the\nrelevant headers individually.\n\n### Default headers\n\n\nTo help correlate requests across distributed systems, Vault automatically\nrecords the following headers in the audit log:\n\n- `correlation-id`\n- `x-correlation-id`\n\n\nTo ensure Vault uses HMAC on the header values during logging, set the `hmac` value to true for the `config\/auditing\/request-headers` API call.\n\nFor example, to enable HMAC for `correlation-id`\n\n```shell\ncurl \\\n    --header \"X-Vault-Token: ...\" \\\n    http:\/\/127.0.0.1:8200\/v1\/sys\/config\/auditing\/request-headers\/correlation-id \\\n    --data '{ \"hmac\": true }'\n```\n\nAnother way to identify the source of a request is through the User-Agent request header.\nVault will automatically record this value as `user-agent` within the `headers` of a\nrequest entry within the audit log.\n\n\n## Enabling\/Disabling audit devices\n\nWhen a Vault server is first initialized, no auditing is enabled. Audit\ndevices must be enabled by a root user using `vault audit enable`.\n\nWhen enabling an audit device, options can be passed to it to configure it.\nFor example, the command below enables the file audit device:\n\n```shell-session\n$ vault audit enable file file_path=\/var\/log\/vault_audit.log\n```\n\nIn the command above, we passed the \"file_path\" parameter to specify the path\nwhere the audit log will be written to. Each audit device has its own\nset of parameters. 
See the documentation to the left for more details.\n\n~> Note: Audit device configuration is replicated to all nodes within a\ncluster by default, and to performance\/DR secondaries for Vault Enterprise clusters.\nBefore enabling an audit device, ensure that all nodes within the cluster(s)\nwill be able to successfully log to the audit device to avoid Vault being\nblocked from serving requests.\nAn audit device can be limited to only within the node's cluster with the [`local`](\/vault\/api-docs\/system\/audit#local) parameter.\n\nWhen an audit device is disabled, it will stop receiving logs immediately.\nThe existing logs that it did store are untouched.\n\n~> Note: Once an audit device is disabled, you will no longer be able to HMAC values\nfor comparison with entries in the audit logs. This is true even if you re-enable\nthe audit device at the same path, as a new salt will be created for hashing.\n\n## Blocked audit devices\n\nAudit device logs are critically important and ignoring auditing failures opens an avenue for attack. Vault will not respond to requests when no enabled audit devices can record them.\n\nVault can distinguish between two types of audit device failures.\n\n- A blocking failure is one where an attempt to write to the audit device never completes. This is unlikely with a local disk device, but could occur with a network-based audit device.\n\n- When multiple audit devices are enabled, if any of them fail in a non-blocking fashion, Vault requests can still complete successfully provided at least one audit device successfully writes the audit record. 
If any of the audit devices fail in a blocking fashion, however, Vault requests will hang until the blocking is resolved.\n\nIn other words, Vault will not complete any requests until the blocked audit device can write.\n\n## Tutorial\n\nRefer to [Blocked Audit Devices](\/vault\/tutorials\/monitoring\/blocked-audit-devices) for a step-by-step tutorial.\n\n## API\n\nAudit devices also have a full HTTP API. Please see the [Audit device API\ndocs](\/vault\/api-docs\/system\/audit) for more details.\n\n## Common configuration options\n\n@include 'audit-options-common.mdx'\n\n## Eliding list response bodies\n\nSome Vault responses can be very large. Primarily, this affects list operations -\nas Vault lacks pagination in its APIs, listing a very large collection can result\nin a response that is tens of megabytes long. Some audit backends are unable to\nprocess individual audit records of larger sizes.\n\nThe contents of the response for a list operation are often not very interesting;\nmost contain only a \"keys\" field, containing a list of IDs. Select API endpoints\nadditionally return a \"key_info\" field, a map from ID to some additional\ninformation about the list entry - `identity\/entity\/id\/` is an example of this.\nEven in this case, the response to a list operation is usually less-confidential\nor public information, for which having the full response in the audit logs is of\nlesser importance.\n\nThe `elide_list_responses` audit option provides the flexibility to omit the\nfull list response data from the audit log, to mitigate the creation of very long\nindividual audit records.\n\nWhen enabled, it affects only audit records of `type=response` and\n`request.operation=list`. 
The values of `response.data.keys` and\n`response.data.key_info` will be replaced with a simple integer, recording how\nmany entries were contained in the list (`keys`) or map (`key_info`) - therefore\neven with this feature enabled, it is still possible to see how many items were\nreturned by a list operation.\n\nThis extra processing only affects the response data fields `keys` and `key_info`,\nand only when they have the expected data types - in the event a list response\ncontains data outside of the usual conventions that apply to Vault list responses,\nit will be left as is by this feature.\n\nHere is an example of an audit record that has been processed by this feature\n(formatted with extra whitespace, and with fields not relevant to the example\nomitted):\n```json\n{\n  \"type\": \"response\",\n  \"request\": {\n    \"operation\": \"list\"\n  },\n  \"response\": {\n    \"data\": {\n      \"key_info\": 4,\n      \"keys\": 4\n    }\n  }\n}\n```","site":"vault","answers_cleaned":"    layout  docs page title  Audit Devices description  Audit devices are mountable devices that log requests and responses in Vault         Audit devices  Audit devices are the components in Vault that collectively keep a detailed log of all requests to Vault  and their responses  Because every operation with Vault is an API request response  when using a single audit device  the audit log contains  every  interaction with the Vault API  including errors   except for a few paths which do not go via the audit system   The non audited paths are       sys init     sys seal status     sys seal     sys step down     sys unseal     sys leader     sys health     sys rekey init     sys rekey update     sys rekey verify     sys rekey recovery key init     sys rekey recovery key update     sys rekey recovery key verify     sys storage raft bootstrap     sys storage raft join     sys internal ui feature flags  and also  if the relevant listener configuration settings allow unauthenticated 
access       sys metrics     sys pprof       sys in flight req     Enabling multiple devices  When multiple audit devices are enabled  Vault will attempt to send the audit logs to all of them  This allows you to not only have redundant copies  but also a way to check for data tampering in the logs themselves   Vault considers a request to be successful if it can log to  at least  one configured audit device  see   Blocked Audit Devices   vault docs audit blocked audit devices  section below    Therefore in order to build a complete picture of all audited actions  use the aggregate union of the logs from each audit device      Note  It is   highly recommended   that you configure Vault to use multiple audit devices  Audit failures can prevent Vault from servicing requests  so it is important to provide at least one other device       Format  Each line in the audit log is a JSON object  The  type  field specifies what type of object it is  Currently  only two types exist   request  and  response   The line contains all of the information for any given request and response  By default  all the sensitive information is first hashed before logging in the audit logs      Sensitive information  The audit logs contain the full request and response objects for every interaction with Vault  The request and response can be matched utilizing a unique identifier assigned to each request   Most strings contained within requests and responses are hashed with a salt using HMAC SHA256  The purpose of the hash is so that secrets aren t in plaintext within your audit logs  However  you re still able to check the value of secrets by generating HMACs yourself  this can be done with the audit device s hash function and salt by using the   sys audit hash  API endpoint  see the documentation for more details       Currently  only strings that come from JSON or returned in JSON are HMAC d  Other data types  like integers  booleans  and so on  are passed through in plaintext  We recommend 
that all sensitive data be provided as string values inside all JSON sent to Vault  i e   that integer values are provided in quotes    While most strings are hashed  Vault can be configured to make some exceptions  For example in auth methods and secrets engines  users can enable additional exceptions using the  secrets enable   vault docs commands secrets enable  command  and then  tune   vault docs commands secrets tune  it afterward     see also      auth enable   vault docs commands auth enable    auth tune   vault docs commands auth tune      Audit request headers  Use the  Vault API   vault api docs system config auditing  to configure request headers to monitor and audit headers in incoming client request   By default  Vault   does not   encrypt request header values with HMAC if you  create   vault api docs system config auditing create update audit request header  an exception to allow request headers in the audit log  To encrypt the header values  you must  configure   vault api docs system config auditing hmac  the relevant headers individually       Default headers   To help correlate requests across distributed systems  Vault automatically records the following headers in the audit log      correlation id     x correlation id    To ensure Vault uses HMAC on the header values during logging  set the  hmac  value to true for the  config auditing request headers  API call   For example  to enable HMAC for  correlation id      shell curl         header  X Vault Token             http   127 0 0 1 8200 v1 sys config auditing request headers correlation id         data     hmac   true         Another way to identify the source of a request is through the User Agent request header  Vault will automatically record this value as  user agent  within the  headers  of a request entry within the audit log       Enabling Disabling audit devices  When a Vault server is first initialized  no auditing is enabled  Audit devices must be enabled by a root user using  
vault audit enable    When enabling an audit device  options can be passed to it to configure it  For example  the command below enables the file audit device      shell session   vault audit enable file file path  var log vault audit log      In the command above  we passed the  file path  parameter to specify the path where the audit log will be written to  Each audit device has its own set of parameters  See the documentation to the left for more details      Note  Audit device configuration is replicated to all nodes within a cluster by default  and to performance DR secondaries for Vault Enterprise clusters  Before enabling an audit device  ensure that all nodes within the cluster s  will be able to successfully log to the audit device to avoid Vault being blocked from serving requests  An audit device can be limited to only within the node s cluster with the   local    vault api docs system audit local  parameter   When an audit device is disabled  it will stop receiving logs immediately  The existing logs that it did store are untouched      Note  Once an audit device is disabled  you will no longer be able to HMAC values for comparison with entries in the audit logs  This is true even if you re enable the audit device at the same path  as a new salt will be created for hashing      Blocked audit devices  Audit device logs are critically important and ignoring auditing failures opens an avenue for attack  Vault will not respond to requests when no enabled audit devices can record them   Vault can distinguish between two types of audit device failures     A blocking failure is one where an attempt to write to the audit device never completes  This is unlikely with a local disk device  but could occure with a network based audit device     When multiple audit devices are enabled  if any of them fail in a non blocking fashion  Vault requests can still complete successfully provided at least one audit device successfully writes the audit record  If any of the 
audit devices fail in a blocking fashion however  Vault requests will hang until the blocking is resolved   In other words  Vault will not complete any requests until the blocked audit device can write      Tutorial  Refer to  Blocked Audit Devices   vault tutorials monitoring blocked audit devices  for a step by step tutorial      API  Audit devices also have a full HTTP API  Please see the  Audit device API docs   vault api docs system audit  for more details      Common configuration options   include  audit options common mdx      Eliding list response bodies  Some Vault responses can be very large  Primarily  this affects list operations   as Vault lacks pagination in its APIs  listing a very large collection can result in a response that is tens of megabytes long  Some audit backends are unable to process individual audit records of larger sizes   The contents of the response for a list operation is often not very interesting  most contain only a  keys  field  containing a list of IDs  Select API endpoints additionally return a  key info  field  a map from ID to some additional information about the list entry    identity entity id   is an example of this  Even in this case  the response to a list operation is usually less confidential or public information  for which having the full response in the audit logs is of lesser importance   The  elide list responses  audit option provides the flexibility to not write the full list response data from the audit log  to mitigate the creation of very long individual audit records   When enabled  it affects only audit records of  type response  and  request operation list   The values of  response data keys  and  response data key info  will be replaced with a simple integer  recording how many entries were contained in the list   keys   or map   key info     therefore even with this feature enabled  it is still possible to see how many items were returned by a list operation   This extra processing only affects the 
response data fields  keys  and  key info   and only when they have the expected data types   in the event a list response contains data outside of the usual conventions that apply to Vault list responses  it will be left as is by this feature   Here is an example of an audit record that has been processed by this feature  formatted with extra whitespace  and with fields not relevant to the example omitted      json      type    response      request          operation    list          response          data            key info   4         keys   4                "}
{"questions":"vault Password policies are used in some secret engines to allow users to define how passwords are generated Password policies for dynamic static users within those engines layout docs page title Password Policies","answers":"---\nlayout: docs\npage_title: Password Policies\ndescription: >-\n  Password policies are used in some secret engines to allow users to define how passwords are generated\n  for dynamic & static users within those engines.\n---\n\n# Password policies\n\nA password policy is a set of instructions on how to generate a password, similar to other password\ngenerators. These password policies are used in a subset of secret engines to allow you to configure\nhow a password is generated for that engine. Not all secret engines utilize password policies, so check\nthe documentation for the engine you are using for compatibility.\n\n**Note:** Password policies are unrelated to [Policies](\/vault\/docs\/concepts\/policies) other than sharing similar names.\n\nPassword policies are available in Vault version 1.5+. [API docs can be found here](\/vault\/api-docs\/system\/policies-password).\n\n!> Password policies are an advanced usage of Vault. This generates credentials for external systems\n(databases, LDAP, AWS, etc.) and should be used with caution.\n\n## Design\n\nPassword policies fundamentally have two parts: a length, and a set of rules that a password must\nadhere to. Passwords are randomly generated from the de-duplicated union of charsets found in all rules\nand then checked against each of the rules to determine if the candidate password is valid according\nto the policy. See [Candidate Password Generation](#candidate-password-generation) for details on how\npasswords are generated prior to being checked against the rule set.\n\nA rule is an assertion upon a candidate password string that indicates whether or not\nthe password is acceptable. 
For example: a \"charset\" rule states that a password must have at least one\nlowercase letter in it. This rule will reject any passwords that do not have any lowercase letters in it.\n\nMultiple rules may be specified within a policy to create more complex rules, such as requiring at least\none lowercase letter, at least one uppercase letter, and at least one number.\n\nThe flow looks like:\n\n[![Vault Password Policy Flow](\/img\/vault-password-policy-flow.svg)](\/img\/vault-password-policy-flow.svg)\n\n## Candidate password generation\n\nHow a candidate password is generated is extremely important. Great care must be placed to ensure that\npasswords aren't created in a way that can be exploited by threat actors. This section describes how we\ngenerate passwords within password policies to ensure that passwords are generated as securely as possible.\n\nTo generate a candidate password, three things are needed:\n\n1. A [cryptographically secure random number generator](https:\/\/golang.org\/pkg\/crypto\/rand\/) (RNG).\n2. A character set (charset) to select characters from.\n3. The length of the password.\n\nAt a high level, we use our RNG to generate N numbers that correspond to indices into the charset\narray where N is the length of the password we wish to create. Each value returned from the RNG is then\nused to extract a character from the charset into the password.\n\nFor example, let's generate a password of length 8 from the charset `abcdefghij`:\n\nThe RNG is used to generate 8 random values. For our example let's say those values are:\n\n`[3, 2, 0, 8, 7, 3, 5, 1]`\n\nEach of these values is an index into the charset array:\n\n`[3, 2, 0, 8, 7, 3, 5, 1]` => `[d, c, a, i, h, d, f, b]`\n\nThis gives us our candidate password: `dcaihdfb` which can then be run through the rules of the policy.\n\nIn a real world scenario, the values in the random array will be between `[0-255]` as that is the range of\nvalues that a single byte can be. 
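The index-to-character mapping in this worked example can be reproduced in a few lines of Python (purely illustrative; Vault itself is written in Go):

```python
# Toy reproduction of the worked example: each RNG value indexes the charset.
charset = "abcdefghij"
indices = [3, 2, 0, 8, 7, 3, 5, 1]  # the example RNG values from the text
password = "".join(charset[i] for i in indices)
print(password)  # dcaihdfb
```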
The value is restricted to the size of the charset array by using the\n[modulo operation](https:\/\/en.wikipedia.org\/wiki\/Modulo_operation) to prevent referencing a character\noutside the bounds of the charset. However this can introduce a problem with bias.\n\n### Preventing bias\n\nWhen using the [modulo operation](https:\/\/en.wikipedia.org\/wiki\/Modulo_operation) to generate a password,\nyou must be very careful to prevent the introduction of bias. When generating a random number between\n[0-255] for a charset that has a length that isn't evenly divisible into 256, some of the first characters\nin the charset may be selected more frequently than the remaining characters.\n\nTo demonstrate this, let's simplify the math. Assume that we have a charset of length 10: `abcdefghij`.\nLet's also assume that our RNG generates values `[0-25]`. The first 10 values `0-9` correspond to each\ncharacter in our charset. The next 10 values `10-19` also correspond to each character in our charset.\nHowever, the next 6 values `20-25` correspond to only the first 6 characters in the charset. This means\nthat those 6 characters `abcdef` can be selected more often than the last 4 characters `ghij`.\n\nIn order to prevent this from happening, we calculate the maximum value that we can allow an index to be.\nThis is based on the length of the charset we are selecting from. In the example above, the maximum index\nvalue we should allow is 19 as that represents the largest integer multiple of the length of the charset\narray that is less than the maximum value that our RNG can generate. When our RNG generates any values\nlarger than our maximum allowed value, that number is ignored and we continue to the next number. 
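The rejection step just described can be sketched in Python; this is a minimal model of the approach under the assumptions in this section, not Vault's actual implementation (Python's `secrets` module stands in for the CSPRNG):

```python
import secrets

def generate_candidate(charset: str, length: int) -> str:
    """Draw characters uniformly from charset without modulo bias."""
    # Largest multiple of len(charset) within a byte's range [0-255]; any
    # random byte at or above this bound is discarded rather than wrapped.
    bound = 256 - (256 % len(charset))
    chars = []
    while len(chars) < length:
        b = secrets.randbelow(256)  # one random byte
        if b >= bound:
            continue  # rejected: keeping it would favor the first characters
        chars.append(charset[b % len(charset)])
    return "".join(chars)
```

Because rejected bytes are simply skipped, the loop still fills the password to the full requested length.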
Passwords\ndo not lose any length because we continue generating numbers until the password is fully filled in to the\nlength requested.\n\n## Performance characteristics\n\nCharacterizing password generation performance with this model is heavily dependent on the policy\nconfiguration. In short, the more restrictive the policy, the longer it will take to generate a password.\nThis generalization isn't always true, but is a general guideline. The performance curve can be generalized:\n\n`(time to generate a candidate password) * (number of candidate passwords generated)`\n\nWhere the number of times a candidate password needs to be generated is a function of how likely it is that a given\ncandidate password fails at least one of the rules.\n\nHere are some example policy configurations with their performance characteristics below. Each of these\npolicies has the same charset that candidate passwords are generated from (94 characters). The only\ndifference is the minimum number of characters for various character subsets.\n\n<details>\n<summary>No Minimum Characters<\/summary>\n\n```hcl\nrule \"charset\" {\n  charset = \"abcdefghijklmnopqrstuvwxyz\"\n}\nrule \"charset\" {\n  charset = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n}\nrule \"charset\" {\n  charset = \"0123456789\"\n}\nrule \"charset\" {\n  charset = \"!\\\"#$%&'()*+,-.\/:;<=>?@[\\\\]^_`{|}~\"\n}\n```\n\n<\/details>\n\n<details>\n<summary>1 uppercase, 1 lowercase, 1 numeric<\/summary>\n\n```hcl\nrule \"charset\" {\n  charset = \"abcdefghijklmnopqrstuvwxyz\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"0123456789\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"!\\\"#$%&'()*+,-.\/:;<=>?@[\\\\]^_`{|}~\"\n}\n```\n\n<\/details>\n\n<details>\n<summary>1 uppercase, 1 lowercase, 1 numeric, 1 from all ASCII characters<\/summary>\n\n```hcl\nrule \"charset\" {\n  charset = \"abcdefghijklmnopqrstuvwxyz\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"0123456789\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"!\\\"#$%&'()*+,-.\/:;<=>?@[\\\\]^_`{|}~\"\n  min-chars = 1\n}\n```\n\n<\/details>\n\n<details>\n<summary>1 uppercase, 1 lowercase, 1 numeric, 1 from <code>!@#$<\/code><\/summary>\n\n```hcl\nrule \"charset\" {\n  charset = \"abcdefghijklmnopqrstuvwxyz\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"0123456789\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"!@#$\"\n  min-chars = 1\n}\n# Fleshes out the rest of the symbols but doesn't add any required characters\nrule \"charset\" {\n  charset = \"!\\\"#$%&'()*+,-.\/:;<=>?@[\\\\]^_`{|}~\"\n}\n```\n\n<\/details>\n\n[![Password Policy Performance](\/img\/vault-password-policy-performance.svg)](\/img\/vault-password-policy-performance.svg)\n\nAs more characters are generated, the amount of time increases (as seen in `No Minimum Characters`).\nThis upward trend can be dwarfed by restricting charsets. When a password is short, the chances of a character\nbeing selected from a subset are smaller. For instance, if you have a 1 character password from the charset\n`abcde` the chances of selecting `c` from it are 1\/5. However, if you have a 2 character password, the chances\nof selecting `c` at least once are greater than 1\/5 because you have a second chance to select `c` from\nthe charset.\n\nIn these examples, as the length of the password increases, the amount of time to generate a password trends\ndown, levels off, and then slowly increases. This is a combination of the two effects listed above: increasing\ntime to generate more characters vs the chances of the character subsets being selected. 
When a single subset is\nvery small (such as `!@#$`) the chances of it being selected are much smaller (4\/94) than if the subset is larger\n(26\/94 for lowercase characters). This can result in a dramatic loss of performance.\n\n<details>\n<summary><b>Click here for more details on password generation probabilities<\/b><\/summary>\n\nIn the examples above, the charset being used to generate candidate passwords is 94 characters long.\nRandomly choosing a given character from the 94 character charset has a 1\/94 chance. Choosing a single\ncharacter from it after N tries (where N is the length of the password) is `1-(1-1\/94)^N`.\n\nIf we expand this to look at a subset of characters (such as lowercase characters) the chances of selecting\na character from that subset is `1-(1-L\/94)^N` where `L` is the length of the subset. For lowercase\ncharacters, we get a probability of `1-(1-26\/94)^N`.\n\nIf we do this for uppercase letters as well as numbers, then we get a combined probability curve:\n\n`p = (1-(1-26\/94)^N) * (1-(1-26\/94)^N) * (1-(1-10\/94)^N)`\n\n[![Chance of Generating a Good Password - 1](\/img\/vault-password-policy-chance.svg)](\/img\/vault-password-policy-chance.svg)\n\nIt should be noted that this probability curve only applies to this specific policy. 
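The combined probability curve above is easy to evaluate numerically. A short sketch (using the 26/94, 26/94, and 10/94 factors for the lowercase, uppercase, and numeric subsets of the 94-character charset) shows how the chance of a valid candidate grows with password length:

```python
def chance_valid(n: int) -> float:
    # p = (1-(1-26/94)^N) * (1-(1-26/94)^N) * (1-(1-10/94)^N)
    p_lower = 1 - (1 - 26 / 94) ** n
    p_upper = 1 - (1 - 26 / 94) ** n
    p_digit = 1 - (1 - 10 / 94) ** n
    return p_lower * p_upper * p_digit

for n in (4, 8, 20):
    print(n, round(chance_valid(n), 3))
```

The expected number of candidates generated per accepted password is roughly `1 / p`, which is why short passwords under restrictive policies take longer to produce.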
To understand the\nperformance characteristics of a given policy, you should run your policy with the\n[`generate`](\/vault\/api-docs\/system\/policies-password) endpoint to see how much time the policy takes to\nproduce passwords.\n\n<\/details>\n\n## Password policy syntax\n\nPassword policies are defined in [HCL](https:\/\/github.com\/hashicorp\/hcl) or JSON, which defines\nthe length of the password and a set of rules a password must adhere to.\n\nSee the [API docs](\/vault\/api-docs\/system\/policies-password) for examples of the commands to save\/read\/etc.\npassword policies.\n\nHere is a very simple policy which generates 20 character passwords from lowercase characters:\n\n```hcl\nlength = 20\nrule \"charset\" {\n  charset = \"abcdefghijklmnopqrstuvwxyz\"\n}\n```\n\nMultiple rules may be specified, including multiple rules of the same type. For instance, the following\npolicy will generate a 20 character password with at least one lowercase letter, at least one uppercase\nletter, at least one number, and at least one symbol from the set `!@#$%^&*`:\n\n```hcl\nlength = 20\nrule \"charset\" {\n  charset = \"abcdefghijklmnopqrstuvwxyz\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"0123456789\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"!@#$%^&*\"\n  min-chars = 1\n}\n```\n\nAt least one charset must be specified for a policy to be valid. In order to generate a password, a charset\nmust be available to select characters from, and password policies do not have a default charset.\nThe following policy is **NOT** valid and will be rejected:\n\n```hcl\nlength = 20\n```\n\n## Configuration & available rules\n\n### `length` parameter\n\n- `length` `(int: <required>)` - Specifies how long the generated password will be. Must be >= 4.\n\nLength is **not** a rule. 
It is the only part of the configuration that does not adhere to the guess-\nand-check approach of rules.\n\n### Rule `charset`\n\nAllows you to specify a minimum number of characters from a given charset. For instance: a password must\nhave at least one lowercase letter. This rule also helps construct the charset that the password generation\nutilizes. In order to generate a password, a charset must be specified.\n\nIf multiple charsets are specified, all of the charsets will be combined and de-duplicated prior to\ngenerating any candidate passwords. Each individual `charset` rule will still need to be adhered to in\norder to successfully generate passwords.\n\n~> After combining and de-duplicating charsets, the length of the charset that candidate passwords\nare generated from must be no longer than 256 characters.\n\n#### Parameters\n\n- `charset` `(string: <required>)` \u2013\u00a0A string representation of the character set that this rule observes.\n  Accepts UTF-8 compatible strings. All characters within the string must be printable.\n  Please note that the JSON output returned may be escaped for the special and control characters such as <,>,& etc as per the JSON specification.\n- `min-chars` `(int: 0)` - Specifies a minimum number of characters required from the charset specified in\n  this rule. For example: if `min-chars = 2`, the password must have at least 2 characters from `charset`.\n\n#### Example\n\n```hcl\nlength = 20\nrule \"charset\" {\n  charset = \"abcde\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"01234\"\n  min-chars = 1\n}\n```\n\nThis policy will generate passwords from the charset `abcde01234`. However, the password must have at\nleast one character that is from `abcde` and at least one character from `01234`. 
If charsets overlap\nbetween rules, the charsets will be de-duplicated to prevent bias towards the overlapping set.\nFor instance: if you have two charset rules: `abcde` & `cdefg`, the charset `abcdefg` will be used to\ngenerate candidate passwords, but at least one character from each `abcde` & `cdefg` must still appear\nin the password.\n\nIf `min-chars` is not specified (or set to `0`) then this charset will not have a minimum required number\nof characters, but it will be used to select characters from. Example:\n\n```hcl\nlength = 8\nrule \"charset\" {\n  charset = \"abcde\"\n}\nrule \"charset\" {\n  charset = \"01234\"\n  min-chars = 1\n}\n```\n\nThis policy generates 8 character passwords from the charset `abcde01234` and requires at least one\ncharacter from `01234` to be in it, but does not require any characters from `abcde`. The password\n`04031945` may result from this policy, even though no alphabetical characters are in it.\n\n## Default password policy\n\nVault ships with a default password policy that applies to any password \ngenerated by Vault without an explicit policy assignment. 
The default\npolicy requires passwords include:\n\n- 20 characters total\n- 1 uppercase character\n- 1 lowercase character\n- 1 number\n- 1 special character\n\n\n```hcl\nlength = 20\n\nrule \"charset\" {\n  charset = \"abcdefghijklmnopqrstuvwxyz\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"0123456789\"\n  min-chars = 1\n}\nrule \"charset\" {\n  charset = \"-\"\n  min-chars = 1\n}\n```\n\n\n\n## Tutorial\n\nRefer to [User Configurable Password Generation for Secret\nEngines](\/vault\/tutorials\/policies\/password-policies)\nfor a step-by-step tutorial.","site":"vault","answers_cleaned":"    layout  docs page title  Password Policies description       Password policies are used in some secret engines to allow users to define how passwords are generated   for dynamic   static users within those engines         Password policies  A password policy is a set of instructions on how to generate a password  similar to other password generators  These password policies are used in a subset of secret engines to allow you to configure how a password is generated for that engine  Not all secret engines utilize password policies  so check the documentation for the engine you are using for compatibility     Note    Password policies are unrelated to  Policies   vault docs concepts policies  other than sharing similar names   Password policies are available in Vault version 1 5    API docs can be found here   vault api docs system policies password       Password policies are an advanced usage of Vault  This generates credentials for external systems  databases  LDAP  AWS  etc   and should be used with caution      Design  Password policies fundamentally have two parts  a length  and a set of rules that a password must adhere to  Passwords are randomly generated from the de duplicated union of charsets found in all rules and then checked against each of the rules to determine if the 
candidate password is valid according to the policy  See  Candidate Password Generation   candidate password generation  for details on how passwords are generated prior to being checked against the rule set   A rule is an assertion upon a candidate password string that indicates whether or not the password is acceptable  For example  a  charset  rule states that a password must have at least one lowercase letter in it  This rule will reject any passwords that do not have any lowercase letters in it   Multiple rules may be specified within a policy to create more complex rules  such as requiring at least one lowercase letter  at least one uppercase letter  and at least one number   The flow looks like      Vault Password Policy Flow   img vault password policy flow svg    img vault password policy flow svg      Candidate password generation  How a candidate password is generated is extremely important  Great care must be placed to ensure that passwords aren t created in a way that can be exploited by threat actors  This section describes how we generate passwords within password policies to ensure that passwords are generated as securely as possible   To generate a candidate password  three things are needed   1  A  cryptographically secure random number generator  https   golang org pkg crypto rand    RNG   2  A character set  charset  to select characters from  3  The length of the password   At a high level  we use our RNG to generate N numbers that correspond to indices into the charset array where N is the length of the password we wish to create  Each value returned from the RNG is then used to extract a character from the charset into the password   For example  let s generate a password of length 8 from the charset  abcdefghij    The RNG is used to generate 8 random values  For our example let s say those values are     3  2  0  8  7  3  5  1    Each of these values is an index into the charset array     3  2  0  8  7  3  5  1        d  c  a  i  h  d  f  b  
  This gives us our candidate password   dcaihdfb  which can then be run through the rules of the policy   In a real world scenario  the values in the random array will be between   0 255   as that is the range of values that a single byte can be  The value is restricted to the size of the charset array by using the  modulo operation  https   en wikipedia org wiki Modulo operation  to prevent referencing a character outside the bounds of the charset  However this can introduce a problem with bias       Preventing bias  When using the  modulo operation  https   en wikipedia org wiki Modulo operation  to generate a password  you must be very careful to prevent the introduction of bias  When generating a random number between  0 255  for a charset that has a length that isn t evenly divisible into 256  some of the first characters in the charset may be selected more frequently than the remaining characters   To demonstrate this  let s simplify the math  Assume that we have a charset of length 10   abcdefghij   Let s also assume that our RNG generates values   0 25    The first 10 values  0 9  correspond to each character in our charset  The next 10 values  10 19  also correspond to each character in our charset  However  the next 6 values  20 25  correspond to only the first 6 characters in the charset  This means that those 6 characters  abcdef  can be selected more often than the last 4 characters  ghij    In order to prevent this from happening  we calculate the maximum value that we can allow an index to be  This is based on the length of the charset we are selecting from  In the example above  the maximum index value we should allow is 19 as that represents the largest integer multiple of the length of the charset array that is less than the maximum value that our RNG can generate  When our RNG generates any values larger than our maximum allowed value  that number is ignored and we continue to the next number  Passwords do not lose any length because we continue 
generating numbers until the password is fully filled in to the length requested      Performance characteristics  Characterizing password generation performance with this model is heavily dependent on the policy configuration  In short  the more restrictive the policy  the longer it will take to generate a password  This generalization isn t always true  but is a general guideline  The performance curve can be generalized     time to generate a candidate password     number of candidate passwords generated    Where the number of times a candidate password needs to be generated is a function of how likely a given candidate password does not pass all of the rules   Here are some example policy configurations with their performance characteristics below  Each of these policies have the same charset that candidate passwords are generated from  94 characters   The only difference is the minimum number of characters for various character subsets    details   summary No Minimum Characters  summary      hcl rule  charset      charset    abcdefghijklmnopqrstuvwxyz    rule  charset      charset    ABCDEFGHIJKLMNOPQRSTUVWXYZ    rule  charset      charset    0123456789    rule  charset      charset                                                 details    details   summary 1 uppercase  1 lowercase  1 numeric  summary      hcl rule  charset      charset    abcdefghijklmnopqrstuvwxyz    min chars   1   rule  charset      charset    ABCDEFGHIJKLMNOPQRSTUVWXYZ    min chars   1   rule  charset      charset    0123456789    min chars   1   rule  charset      charset                                                 details    details   summary 1 uppercase  1 lowercase  1 numeric  1 from all ASCII characters  summary      hcl rule  charset      charset    abcdefghijklmnopqrstuvwxyz    min chars   1   rule  charset      charset    ABCDEFGHIJKLMNOPQRSTUVWXYZ    min chars   1   rule  charset      charset    0123456789    min chars   1   rule  charset      charset                         
                 min chars   1          details    details   summary 1 uppercase  1 lowercase  1 numeric  1 from  code       code   summary      hcl rule  charset      charset    abcdefghijklmnopqrstuvwxyz    min chars   1   rule  charset      charset    ABCDEFGHIJKLMNOPQRSTUVWXYZ    min chars   1   rule  charset      charset    0123456789    min chars   1   rule  charset      charset            min chars   1     Fleshes out the rest of the symbols but doesn t add any required characters rule  charset      charset                                                 details      Password Policy Performance   img vault password policy performance svg    img vault password policy performance svg   As more characters are generated  the amount of time increases  as seen in  No Minimum Characters    This upward trend can be dwarfed by restricting charsets  When a password is short  the chances of a character being selected from a subset is smaller  For instance  if you have a 1 character password from the charset  abcde  the chances of selecting  c  from it is 1 5  However if you have a 2 character password  the chances of selecting  c  at least once are greater than 1 5 because you have a second chance to select  c  from the charset   In these examples  as the length of the password increases  the amount of time to generate a password trends down  levels off  and then slowly increases  This is a combination of the two effects listed above  increasing time to generate more characters vs the chances of the character subsets being selected  When a single subset is very small  such as         the chances of it being selected are much smaller  4 94  than if the subset is larger  26 94 for lowercase characters   This can result in a dramatic loss of performance    details   summary  b Click here for more details on password generation probabilities  b   summary   In the examples above  the charset being used to generate candidate passwords is 94 characters long  Randomly choosing 
a given character from the 94 character charset has a 1 94 chance  Choosing a single character from it after N tries  where N is the length of the password  is  1  1 1 94  N    If we expand this to look at a subset of characters  such as lowercase characters  the chances of selecting a character from that subset is  1  1 L 94  N  where  L  is the length of the subset  For lowercase characters  we get a probability of  1  1 26 94  N    If we do this for uppercase letters as well as numbers  then we get a combined probability curve    p    1  1 26 94  N     1  1 26 94  N     1  1 10 94  N       Chance of Generating a Good Password   1   img vault password policy chance svg    img vault password policy chance svg   It should be noted that this probability curve only applies to this specific policy  To understand the performance characteristics of a given policy  you should run your policy with the   generate    vault api docs system policies password  endpoint to see how much time the policy takes to produce passwords     details      Password policy syntax  Password policies are defined in  HCL  https   github com hashicorp hcl  or JSON which defines the length of the password and a set of rules a password must adhere to   See the  API docs   vault api docs system policies password  for examples of the commands to save read etc  password policies  Here is a very simple policy which generates 20 character passwords from lowercase characters      hcl length   20 rule  charset      charset    abcdefghijklmnopqrstuvwxyz         Multiple rules may be specified  including multiple rules of the same type  For instance  the following policy will generate a 20 character password with at least one lowercase letter  at least one uppercase letter  at least one number  and at least one symbol from the set                 hcl length   20 rule  charset      charset    abcdefghijklmnopqrstuvwxyz    min chars   1   rule  charset      charset    ABCDEFGHIJKLMNOPQRSTUVWXYZ    min chars 
  1   rule  charset      charset    0123456789    min chars   1   rule  charset      charset                min chars   1        At least one charset must be specified for a policy to be valid  In order to generate a password  a charset must be available to select characters from and password policies do not have a default charset  The following policy is   NOT   valid and will be rejected      hcl length   20         Configuration   available rules       length  parameter     length    int   required      Specifies how long the generated password will be  Must be    4   Length is   not   a rule  It is the only part of the configuration that does not adhere to the guess  and check approach of rules       Rule  charset   Allows you to specify a minimum number of characters from a given charset  For instance  a password must have at least one lowercase letter  This rule also helps construct the charset that the password generation utilizes  In order to generate a password  a charset must be specified   If multiple charsets are specified  all of the charsets will be combined and de duplicated prior to generating any candidate passwords  Each individual  charset  rule will still need to be adhered to in order to successfully generate passwords      After combining and de duplicating charsets  the length of the charset that candidate passwords are generated from must be no longer than 256 characters        Parameters     charset    string   required      A string representation of the character set that this rule observes    Accepts UTF 8 compatible strings  All characters within the string must be printable    Please note that the JSON output returned may be escaped for the special and control characters such as       etc as per the JSON specification     min chars    int  0     Specifies a minimum number of characters required from the charset specified in   this rule  For example  if  min chars   2   the password must have at least 2 characters from  charset         
Example     hcl length   20 rule  charset      charset    abcde    min chars   1   rule  charset      charset    01234    min chars   1        This policy will generate passwords from the charset  abcde01234   However  the password must have at least one character that is from  abcde  and at least one character from  01234   If charsets overlap between rules  the charsets will be de duplicated to prevent bias towards the overlapping set  For instance  if you have two charset rules   abcde     cdefg   the charset  abcdefg  will be used to generate candidate passwords  but a least one character from each  abcde     cdefg  must still appear in the password   If  min chars  is not specified  or set to  0   then this charset will not have a minimum required number of characters  but it will be used to select characters from  Example      hcl length   8 rule  charset      charset    abcde    rule  charset      charset    01234    min chars   1        This policy generates 8 character passwords from the charset  abcde01234  and requires at least one character from  01234  to be in it  but does not require any characters from  abcde   The password  04031945  may result from this policy  even though no alphabetical characters are in it      Default password policy  Vault ships with a default password policy that applies to any password  generated by Vault without an explicit policy assignment  The default policy requires passwords include     20 characters total   1 uppercase character   1 lowercase character   1 number   1 special character      hcl length   20  rule  charset      charset    abcdefghijklmnopqrstuvwxyz    min chars   1   rule  charset      charset    ABCDEFGHIJKLMNOPQRSTUVWXYZ    min chars   1   rule  charset      charset    0123456789    min chars   1   rule  charset      charset         min chars   1             Tutorial  Refer to  User Configurable Password Generation for Secret Engines   vault tutorials policies password policies  for a step by step 
tutorial "}
{"questions":"vault Event notifications allow Vault and plugins to exchange arbitrary activity Event Notifications page title Event Notifications layout docs data within Vault and with external subscribers via WebSockets","answers":"---\nlayout: docs\npage_title: Event Notifications\ndescription: >-\n  Event notifications allow Vault and plugins to exchange arbitrary activity\n  data within Vault and with external subscribers via WebSockets.\n---\n\n# Event Notifications\n\n@include 'alerts\/enterprise-only.mdx'\n\nEvent notifications are arbitrary, **non-secret** data that can be exchanged between producers (Vault and plugins)\nand subscribers (Vault components and external users via the API).\n\n## Event types\n\n<!-- This information will probably be migrated to the plugin pages eventually -->\n\n<Note title=\"Note\">\n\nEvent types without the `data_path` metadata field require a root token in order to be consumed from the `\/v1\/sys\/events\/subscribe\/{eventType}` API endpoint.\n\n<\/Note>\n\nInternal components of Vault as well as external plugins can generate event notifications.\nThese are published to \"event types\", sometimes called \"topics\" in other event systems.\nAll event notifications of a specific event type will have the same format for their\nadditional `metadata` field.\n\nThe following event types are currently generated by Vault and its builtin plugins automatically:\n\n| Plugin   | Event Type                          | Metadata                                       | Vault version |\n|----------|-------------------------------------|------------------------------------------------|---------------|\n| database | `database\/config-delete`            | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/config-write`             | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/creds-create`             | `modified`, `operation`, `path`, `name`        | 1.16          
|\n| database | `database\/reload`                   | `modified`, `operation`, `path`, `plugin_name` | 1.16          |\n| database | `database\/reset`                    | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/role-create`              | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/role-delete`              | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/role-update`              | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/root-rotate-fail`         | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/root-rotate`              | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/rotate-fail`              | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/rotate`                   | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/static-creds-create-fail` | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/static-creds-create`      | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/static-role-create`       | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/static-role-delete`       | `modified`, `operation`, `path`, `name`        | 1.16          |\n| database | `database\/static-role-update`       | `modified`, `operation`, `path`, `name`        | 1.16          |\n| kv       | `kv-v1\/delete`                      | `modified`, `operation`, `path`                | 1.13          |\n| kv       | `kv-v1\/write`                       | `data_path`, `modified`, `operation`, `path`   | 1.13          |\n| kv       | `kv-v2\/config-write`                | `data_path`, `modified`, `operation`, `path`   | 1.13    
      |\n| kv       | `kv-v2\/data-delete`                 | `modified`, `operation`, `path`                | 1.13          |\n| kv       | `kv-v2\/data-patch`                  | `data_path`, `modified`, `operation`, `path`   | 1.13          |\n| kv       | `kv-v2\/data-write`                  | `data_path`, `modified`, `operation`, `path`   | 1.13          |\n| kv       | `kv-v2\/delete`                      | `modified`, `operation`, `path`                | 1.13          |\n| kv       | `kv-v2\/destroy`                     | `modified`, `operation`, `path`                | 1.13          |\n| kv       | `kv-v2\/metadata-delete`             | `modified`, `operation`, `path`                | 1.13          |\n| kv       | `kv-v2\/metadata-patch`              | `data_path`, `modified`, `operation`, `path`   | 1.13          |\n| kv       | `kv-v2\/metadata-write`              | `data_path`, `modified`, `operation`, `path`   | 1.13          |\n| kv       | `kv-v2\/undelete`                    | `data_path`, `modified`, `operation`, `path`   | 1.13          |\n\n\n## Event notifications format\n\nEvent notifications may be formatted in protobuf binary format or as JSON.\nSee `EventReceived` in [`sdk\/logical\/event.proto`](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/sdk\/logical\/event.proto) in the relevant Vault version for the protobuf schema.\n\nWhen formatted as JSON, the event notification conforms to the [CloudEvents](https:\/\/cloudevents.io\/) specification.\n\n- `id` `(string)` - CloudEvents unique identifier for the event notification. 
The `id` is unique for each event notification, and event notifications with the same `id` represent the same event notification.\n\n- `source` `(string)` - CloudEvents source, which is set to `vault:\/\/` followed by the Raft node ID or the hostname of the host that generated the event notification.\n\n- `specversion` `(string)` - The CloudEvents specification version this conforms to.\n\n- `type` `(string)` - CloudEvents type this event notification corresponds to, which is currently always `*`.\n\n- `datacontenttype` `(string)` - CloudEvents content type of the event notification, which is currently always `application\/json`.\n\n- `time` `(string)` - ISO 8601-formatted timestamp for when the event notification was generated.\n\n- `data` `(object)` - Vault-specific data.\n\n  - `event` `(Event)` - contains the event notification that happened.\n\n    - `id` `(string)` - (repeat of the `id` parameter)\n\n    - `metadata` `(object)` - arbitrary extra data customized for the event type.\n\n  - `event_type` `(string)` - the event type that was published.\n\n  - `plugin_info` `(PluginInfo)` - information about the plugin that generated the event, if applicable.\n\n    - `mount_class` `(string)` - the class of plugin, e.g., `secret`, `auth`.\n\n    - `mount_accessor` `(string)` - the unique ID of the mounted plugin.\n\n    - `mount_path` `(string)` - the path that the plugin is mounted at.\n\n    - `plugin` `(string)` - the name of the plugin, e.g., `kv`.\n\nHere is an example event notification in JSON format:\n\n```json\n{\n  \"id\": \"a3be9fb1-b514-519f-5b25-b6f144a8c1ce\",\n  \"source\": \"vault:\/\/mycluster\",\n  \"specversion\": \"1.0\",\n  \"type\": \"*\",\n  \"data\": {\n    \"event\": {\n      \"id\": \"a3be9fb1-b514-519f-5b25-b6f144a8c1ce\",\n      \"metadata\": {\n        \"current_version\": \"1\",\n        \"data_path\": \"secret\/data\/foo\",\n        \"modified\": \"true\",\n        \"oldest_version\": \"0\",\n        \"operation\": 
\"data-write\",\n        \"path\": \"secret\/data\/foo\"\n      }\n    },\n    \"event_type\": \"kv-v2\/data-write\",\n    \"plugin_info\": {\n      \"mount_class\": \"secret\",\n      \"mount_accessor\": \"kv_5dc4d18e\",\n      \"mount_path\": \"secret\/\",\n      \"plugin\": \"kv\"\n    }\n  },\n  \"datacontentype\": \"application\/cloudevents\",\n  \"time\": \"2023-09-12T15:19:49.394915-07:00\"\n}\n```\n\n## Subscribing to event notifications\n\n<Note title=\"Note\">\n\nFor multi-node Vault deployments, Vault only accepts subscriptions on the active node. If a client attempts to subscribe to events on a standby node,\nVault will respond with a redirect to the active node. Vault uses the [`api_addr`](\/vault\/docs\/configuration#api_addr) of the active node's configuration to route the redirect.\n\nVault deployments with performance replication must subscribe to events on the\nprimary performance cluster. Vault ignores subscriptions made from secondary\nclusters.\n\n<\/Note>\n\nVault has an API endpoint, `\/v1\/sys\/events\/subscribe\/{eventType}`, that allows users to subscribe to event notifications via a\nWebSocket stream.\nThis endpoint supports the standard authentication and authorization workflows used by other Vault endpoints.\nThe `{eventType}` parameter is a non-empty string naming the event type to subscribe to; it may contain wildcards (`*`)\nto subscribe to multiple event types, e.g., `kv-v2\/data-*`.\n\nBy default, the event notifications are delivered in protobuf binary format.\nThe endpoint can also format the data as JSON if the `json` query parameter is set to `true`:\n\n```shell-session\n$ wscat -H \"X-Vault-Token: $(vault print token)\" --connect 
'ws:\/\/127.0.0.1:8200\/v1\/sys\/events\/subscribe\/kv-v2\/data-write?json=true'\n{\"id\":\"a3be9fb1-b514-519f-5b25-b6f144a8c1ce\",\"source\":\"vault:\/\/mycluster\",\"specversion\":\"1.0\",\"type\":\"*\",\"data\":{\"event\":{\"id\":\"a3be9fb1-b514-519f-5b25-b6f144a8c1ce\",\"metadata\":{\"current_version\":\"1\",\"data_path\":\"secret\/data\/foo\",\"modified\":\"true\",\"oldest_version\":\"0\",\"operation\":\"data-write\",\"path\":\"secret\/data\/foo\"}},\"event_type\":\"kv-v2\/data-write\",\"plugin_info\":{\"mount_class\":\"secret\",\"mount_accessor\":\"kv_5dc4d18e\",\"mount_path\":\"secret\/\",\"plugin\":\"kv\"}},\"datacontentype\":\"application\/cloudevents\",\"time\":\"2023-09-12T15:19:49.394915-07:00\"}\n...\n```\n\nThe Vault CLI supports this endpoint via the `events subscribe` command, which outputs a stream of\nJSON for the requested event notifications (one line per event notification):\n\n```shell-session\n$ vault events subscribe kv-v2\/data-write\n{\"id\":\"a3be9fb1-b514-519f-5b25-b6f144a8c1ce\",\"source\":\"vault:\/\/mycluster\",\"specversion\":\"1.0\",\"type\":\"*\",\"data\":{\"event\":{\"id\":\"a3be9fb1-b514-519f-5b25-b6f144a8c1ce\",\"metadata\":{\"current_version\":\"1\",\"data_path\":\"secret\/data\/foo\",\"modified\":\"true\",\"oldest_version\":\"0\",\"operation\":\"data-write\",\"path\":\"secret\/data\/foo\"}},\"event_type\":\"kv-v2\/data-write\",\"plugin_info\":{\"mount_class\":\"secret\",\"mount_accessor\":\"kv_5dc4d18e\",\"mount_path\":\"secret\/\",\"plugin\":\"kv\"}},\"datacontentype\":\"application\/cloudevents\",\"time\":\"2023-09-12T15:19:49.394915-07:00\"}\n...\n```\n\n## Policies\n\nTo subscribe to an event notification, you must have the following policy grants:\n\n1. `read` capability on `\/v1\/sys\/events\/subscribe\/{eventType}`, where `{eventType}` is the event type that will be\n   subscribed to. 
The path may contain wildcards.\n\n   An example blanket policy is:\n   ```hcl\n   path \"sys\/events\/subscribe\/*\" {\n       capabilities = [\"read\"]\n   }\n   ```\n\n2. `list` and `subscribe` capabilities on the *path of the secret* for events\n    related to secrets. The policy must also provide a `subscribe_event_types`\n    entry with the specific event notifications subscribers are allowed to use. For example,\n    to receive event notifications related to the KV secrets engine path,\n     `secret\/my-data`, a valid policy would be:\n\n   ```hcl\n   path \"secret\/my-data\" {\n     capabilities = [\"list\", \"subscribe\"]\n     subscribe_event_types = [\"*\"]\n   }\n   ```\n\nVault continuously evaluates policies for WebSocket subscriptions and\ncaches the results for a short period of time to improve performance.\nAs a result, event notifications **may** still be sent for a few minutes after a token is\nrevoked or a policy is deleted.\n\n## Supported versions\n\n| Version | Support                                     |\n|---------|---------------------------------------------|\n| <= 1.12 | Not supported                               |\n| 1.13    | Supported; **disabled** by default          |\n| 1.14    | Supported; **disabled** by default          |\n| 1.15    | Supported (beta); **enabled** by default    |\n| 1.16+   | Generally available; **enabled** by default |\n\nFor versions where event notifications are disabled by default, you can enable the\nfunctionality with the `events.alpha1`\n[experiment option](\/vault\/docs\/configuration#experiments) in your Vault\nconfiguration or from the command line with the `-experiment` flag. 
For example:\n\n```shell-session\n$ vault server -experiment events.alpha1\n```","site":"vault"}
{"questions":"vault Secure external data with Vault transit tokenization and transforms Secure external data with Vault page title Secure external data with Vault layout docs Not all personally identifiable information PII lives in Vault For example","answers":"---\nlayout: docs\npage_title: Secure external data with Vault\ndescription: >-\n  Secure external data with Vault transit, tokenization, and transforms.\n---\n\n# Secure external data with Vault\n\nNot all personally identifiable information (PII) lives in Vault. For example,\nyou may need to store credit card numbers, patient IDs, or social security\nnumbers in external databases.\n\nIt can be difficult to secure sensitive data outside Vault and balance the\nneed for applications to access the data efficiently while adhering to\nstringent security standards and protocols. Vault helps you secure external data\nwith **transit encryption**, **tokenization**, and **transforms**.\n\nTransform supports three modes, called _transformations_: Format Preserving\nEncryption (**FPE**), which encrypts and decrypts values while retaining their\nformats; **Masking**, which replaces sensitive information with masking\ncharacters; and **Tokenization**, which replaces sensitive information with\nmathematically unrelated tokens.\n\n![Transit vs Transform](\/img\/transit-or-transform.png)\n\n\n\n\n## Encrypt sensitive data with Vault transit\n\nEncryption is an obvious solution for protecting sensitive data.\nBut independently implementing robust data encryption can be complex and\nexpensive. The Vault transit plugin encrypts data, returns the resulting\nciphertext, and manages the associated encryption keys. 
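The interaction pattern can be sketched in a few lines of toy Python (this is an illustrative stand-in, not the real Vault client API; `transit_encrypt`, the random payload, and the `vault:v1:` output shape are assumptions for the sketch):

```python
import base64
import os

def transit_encrypt(plaintext: bytes) -> str:
    # Stand-in for a call to the transit engine: the application never sees
    # the key, only an opaque ciphertext string. Random bytes simulate the
    # encrypted payload a real server would return.
    fake_ciphertext = os.urandom(len(plaintext) + 28)
    return "vault:v1:" + base64.b64encode(fake_ciphertext).decode()

card_number = b"4111111111111111"
stored = transit_encrypt(card_number)

# The ciphertext is all the application ever persists; note it is longer than
# the input and no longer a digits-only value.
print(stored)
```

The sketch also previews the tradeoff discussed below: the stored ciphertext is longer than the original value and contains characters (`:`, `+`, `=`) that a digits-only column or validator may reject.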
Your application only\never persists the ciphertext and never deals directly with the encryption key.\n\nFor example, the following diagram shows a credit card number going into the\ntransit system and returning to persistent storage as encrypted data.\n\n<ImageConfig hideBorder>\n\n![Transit encryption overview](\/img\/vault-transit-secrets-engine-1.png)\n\n<\/ImageConfig>\n\nThe tradeoff with encryption is that the ciphertext is often much longer than\nthe original data. Additionally, the ciphertext can contain characters that\nare not allowed in the original data, which may cause your system to reject the\nencrypted data. As a result, you may modify your database schema or adjust your\ndata validation system to avoid rejecting the encrypted data.\n\n## Tokenize sensitive data\n\nTokenization safeguards sensitive information by randomly generating a token\nwith a one-way cryptographic hash function. Rather than storing the data\ndirectly, the system saves the token to persistent storage and maintains a\nmapping between the original data and the token.\n\n<ImageConfig hideBorder>\n\n![Tokenization overview](\/img\/traditional_tokenization.png)\n\n<\/ImageConfig>\n\nCombining effective tokenization and a robust random number generator ensures\nprotected data security regardless of where the data lives. However, even if\ntokenization hides the format of the data, it can be a leaky abstraction if the\nfinal token ends up the same length as the original data. Additionally, when the\ntoken length matches the original length, format checkers may not realize the\ndata is tokenized and could reject the tokenized value as \"invalid data\".\n\nTokenization also presents scalability challenges because the hash table expands\nany time the system stores a new tokenized value. 
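The mapping-table mechanics described above can be sketched as follows (a toy illustration, not a production tokenizer; `ToyTokenizer` is a hypothetical name):

```python
import secrets

class ToyTokenizer:
    """Toy sketch of traditional tokenization: each value is replaced by a
    random token that is mathematically unrelated to it, and a mapping table
    records the pairing so the token can be resolved back to the value."""

    def __init__(self):
        self._mapping = {}  # token -> original value

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(8)  # random; reveals nothing about value
        self._mapping[token] = value  # the table grows with every new value
        return token

    def detokenize(self, token: str) -> str:
        return self._mapping[token]

t = ToyTokenizer()
token = t.tokenize("4111 1111 1111 1111")
assert t.detokenize(token) == "4111 1111 1111 1111"
assert len(t._mapping) == 1  # one stored value, one table entry
```

Because every new tokenized value adds an entry to `_mapping`, the table grows without bound, which is exactly the scalability pressure described here.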
A rapidly expanding hash table\naffects storage and performance as the speed of hash searches declines.\n\nMaintaining cryptographic data security with tokenization is a complex task that\nrequires protection of the tokenized data **and** the hash table to ensure data\nintegrity and security.\n\n## Obscure sensitive data with Vault transform\n\n<EnterpriseAlert>\n\n  The Vault transform plugin requires a Vault Enterprise license with Advanced\n  Data Protection (ADP).\n\n<\/EnterpriseAlert>\n\nWe recommend using the Vault transform plugin for securing external, sensitive\ndata with Vault. The transform plugin supports three transformations:\n\n- format preserving encryption\n- data masking\n- tokenization\n\nIn addition to providing the option of data masking, Vault transform simplifies\nsome of the complexities with stand-alone encryption and tokenization.\n\n### Format preserving encryption\n\nFormat preserving encryption (FPE) is a two-way transformation that encrypts\nexternal data while maintaining the original format and length. For example,\ntransforming a credit card number into an encoded ciphertext made of 16 digits.\n\n<ImageConfig hideBorder>\n\n![Format preserving encryption](\/img\/vault-encoded-text.jpg)\n\n<\/ImageConfig>\n\nUnlike stand-alone encryption, FPE maintains the original length and data\nstructure for encoded data so the transformed data works with your existing\ndatabase schema and validation systems. And, unlike tokenization, FPE preserves\nthe original structure without the risk of leaky abstraction.\n\n<Note title=\"FPE is secure\">\n\n  Vault uses\n  the [FF3-1](https:\/\/csrc.nist.gov\/publications\/detail\/sp\/800-38g\/rev-1\/draft) algorithm\n  to ensure the security of the encoded ciphertexts. 
The National Institute of\n  Standards and Technology (NIST) vets, updates, and tests the FF3-1 algorithm\n  to protect against specific types of attacks and potential future threats from\n  supercomputers.\n\n<\/Note>\n\nIn addition to providing built-in transformation templates for common data like\ncredit card numbers and US social security numbers, format preserving encryption\nsupports custom transformation templates. You can use regular expressions to\nspecify the values you want to transform and enforce a schema on the encoded\nvalue.\n\nFPE transformation is also stateless, which means that Vault does not store the\nprotected secret. Instead, Vault protects the encryption key needed to decrypt\nthe ciphertext. By only storing the information needed to decrypt the\nciphertext, Vault provides maximum performance for encoding and decoding\noperations while also minimizing exposure risk for the data.\n\n\n### Data masking\n\nData masking is a one-way transformation that replaces characters on the input\nvalue with a predefined translation character.\n\n<Warning title=\"Use with caution\">\n\n  Masking is non-reversible. We do not recommend masking for situations where\n  you need to retrieve and decode the original value.\n\n<\/Warning>\n\nData masking is a good solution when you need to show or print sensitive data\nwithout full readability. For example, masking a bank account number in an\nonline banking portal to prevent potential security breaches from bad actors who\nmight be observing the screen.\n\n### Tokenization\n\nUnlike standalone tokenization, tokenization with Vault transform is a two-way,\nrandom encoding that satisfies the PCI-DSS requirement for data irreversibility\nwhile still allowing you to decode tokens back to their original plaintext.\n\nTo support token decoding, Vault secures a cryptographic mapping of tokens and\nplaintext values in internal storage. 
Even if an attacker steals the underlying\ntransformation key and mapping values from Vault, tokenization of the data\nprevents the attacker from recovering the original plaintext.\n\n<ImageConfig hideBorder>\n\n![Tokenization](\/img\/vault-tokenization-transformation-1.png)\n\n<\/ImageConfig>\n\n\n\nVault transform creates a new key for each tokenization transformation, which\nhelps ensure a strong cryptographic distinction between different tokenization\nuse cases. For example, a credit card processor may want to distinguish between\nthe same credit card number used by different merchants without having to decode\nthe token.\n\nTokenization transform also supports automatic key rotation based on a\nconfigurable time interval and minimum key version. Each configured tokenization\ntransformation keeps a set of versioned keys. When a key rotates, older key\nversions, within the configured age limit, are still available for decoding\ntokens generated in the past. Vault cannot decode tokens generated with keys\nbelow the minimum key version.\n\n<Highlight title=\"Convergent tokenization\">\n\n  By default, tokenization produces a unique token for every encode operation.\n  Vault transform supports **convergent tokenization**, which lets you use the\n  same encoded value for a given input.\n\n  Convergent tokenization lets you perform statistical analysis of the tokens\n  in your system **without decoding the token**. For example, counting the\n  entries for a given token, querying relationships between a token and other\n  fields in your database, and relating information that is tokenized across\n  multiple systems.\n\n<\/Highlight>\n\n### Performance considerations with Vault transform\n\nTokenization transformation is stateful, which means the encode operation must\nperform writes to storage on the primary node of your Vault cluster. 
As a\nresult, any storage performance limits on the primary node also limit\nscalability of the encode operation.\n  \nIn comparison, neither Vault transit encryption nor FPE transformation writes to\nstorage, and both can be horizontally scaled using performance standby nodes.\n\nFor high-performance use cases, we recommend that you configure Vault to store\nthe token mapping in an external database. External stores can achieve a much\nhigher performance scale and reduce the load on the internal storage for your\nVault installation.\n\nVault currently supports the following external storage systems:\n\n- PostgreSQL\n- MySQL\n- MSSQL\n\nFor more information on external storage, review the\n[Tokenization transform storage](\/vault\/docs\/secrets\/transform\/tokenization#storage)\ndocumentation.\n\n### Learn more about Vault transform\n\n- [Transform secrets engine tutorial](\/vault\/tutorials\/adp\/transform)\n- [Transform tokenization overview](\/vault\/docs\/secrets\/transform\/tokenization)\n- [Encrypt data with Transform tutorial](\/vault\/tutorials\/adp\/transform-code-example)\n- [Tokenize data with Transform tutorial](\/vault\/tutorials\/adp\/tokenization)\n- [Transform secrets engine API](\/vault\/api-docs\/secret\/transform)","site":"vault","answers_cleaned":"    layout  docs page title  Secure external data with Vault description       Secure external data with Vault transit  tokenization  and transforms         Secure external data with Vault  Not all personally identifiable information  PII  lives in Vault  For example  you may need to store credit card numbers  patient IDs  or social security numbers in an external databases   It can be difficult to secure sensitive data outside Vault and balance the need for applications to access the data efficiently while adhering to stringent security standards and protocols  Vault helps you secure external data with   transit encryption      tokenization    and   transforms     Transform consists of three modes  
called *transformations*: **Format Preserving Encryption** (**FPE**) for encrypting and decrypting values while retaining their formats, **Masking** for replacing sensitive information with masking characters, and **Tokenization**, which replaces sensitive information with mathematically unrelated tokens.

![Transit vs Transform](\/img\/transit-or-transform.png)

## Encrypt sensitive data with Vault transit

Encrypting sensitive data is an obvious solution for protecting it, but independently implementing robust data encryption can be complex and expensive. The Vault transit plugin encrypts data, returns the resulting ciphertext, and manages the associated encryption keys. Your application only ever persists the ciphertext and never deals directly with the encryption key.

For example, the following diagram shows a credit card number going into the transit system and returning to persistent storage as encrypted data.

<ImageConfig hideBorder>

![Tokenization overview](\/img\/vault-transit-secrets-engine-1.png)

</ImageConfig>

The tradeoff with encryption is that the ciphertext is often much longer than the original data. Additionally, the ciphertext can contain characters that are not allowed in the original data, which may cause your system to reject the encrypted data. As a result, you may need to modify your database schema or adjust your data validation system to avoid rejecting the encrypted data.

## Tokenize sensitive data

Tokenization safeguards sensitive information by randomly generating a token with a one-way cryptographic hash function. Rather than storing the data directly, the system saves the token to persistent storage and maintains a mapping between the original data and the token.

<ImageConfig hideBorder>

![Tokenization overview](\/img\/traditional-tokenization.png)

</ImageConfig>

Combining effective tokenization and a robust random number generator ensures protected data security regardless of where the data lives. However, even if tokenization hides the format of the data, it can be a leaky abstraction if the final token ends up the same length as the original data. Additionally, when the token length matches the original length, format checkers may not realize the data is tokenized and could reject the tokenized value as invalid data.

Tokenization also presents scalability challenges because the hash table expands any time the system stores a new tokenized value. A rapidly expanding hash table affects storage and performance as the speed of hash searches declines. Maintaining cryptographic data security with tokenization is a complex task that requires protection of the tokenized data *and* the hash table to ensure data integrity and security.

## Obscure sensitive data with Vault transform

<EnterpriseAlert>

The Vault transform plugin requires a Vault Enterprise license with Advanced Data Protection (ADP).

</EnterpriseAlert>

We recommend using the Vault transform plugin for securing external, sensitive data with Vault. The transform plugin supports three transformations:

- format preserving encryption
- data masking
- tokenization

In addition to providing the option of data masking, Vault transform simplifies some of the complexities with stand-alone encryption and tokenization.

### Format preserving encryption

Format preserving encryption (FPE) is a two-way transformation that encrypts external data while maintaining the original format and length. For example, transforming a credit card number to an encoded ciphertext made of 16 numbers.

<ImageConfig hideBorder>

![Format preserving encryption](\/img\/vault-encoded-text.jpg)

</ImageConfig>

Unlike stand-alone encryption, FPE maintains the original length and data structure for encoded data, so the transformed data works with your existing database schema and validation systems. And, unlike tokenization, FPE preserves the original structure without the risk of leaky abstraction.

<Note title=\"FPE is secure\">

Vault uses the [FF3-1](https:\/\/csrc.nist.gov\/publications\/detail\/sp\/800-38g\/rev-1\/draft) algorithm to ensure the security of the encoded ciphertexts. The National Institute of Standards and Technology (NIST) vets, updates, and tests the FF3-1 algorithm to protect against specific types of attacks and potential future threats from supercomputers.

</Note>

In addition to providing built-in transformation templates for common data like credit card numbers and US social security numbers, format preserving encryption supports custom transformation templates. You can use regular expressions to specify the values you want to transform and enforce a schema on the encoded value.

FPE transformation is also stateless, which means that Vault does not store the protected secret. Instead, Vault protects the encryption key needed to decrypt the ciphertext. By only storing the information needed to decrypt the ciphertext, Vault provides maximum performance for encoding and decoding operations while also minimizing exposure risk for the data.

### Data masking

Data masking is a one-way transformation that replaces characters on the input value with a predefined translation character.

<Warning title=\"Use with caution\">

Masking is non-reversible. We do not recommend masking for situations where you need to retrieve and decode the original value.

</Warning>

Data masking is a good solution when you need to show or print sensitive data without full readability. For example, masking a bank account number in an online banking portal to prevent potential security breaches from bad actors who might be observing the screen.

### Tokenization

Unlike standalone tokenization, tokenization with Vault transform is a two-way, random encoding that satisfies the PCI DSS requirement for data irreversibility while still allowing you to decode tokens back to their original plaintext. To support token decoding, Vault secures a cryptographic mapping of tokens and plaintext values in internal storage. Even if an attacker steals the underlying transformation key and mapping values from Vault, tokenization of the data prevents the attacker from recovering the original plaintext.

<ImageConfig hideBorder>

![Tokenization](\/img\/vault-tokenization-transformation-1.png)

</ImageConfig>

Vault transform creates a new key for each tokenization transformation, which helps ensure a strong cryptographic distinction between different tokenization use cases. For example, a credit card processor may want to distinguish between the same credit card number used by different merchants without having to decode the token.

Tokenization transform also supports automatic key rotation based on a configurable time interval and minimum key version. Each configured tokenization transformation keeps a set of versioned keys. When a key rotates, older key versions (within the configured age limit) are still available for decoding tokens generated in the past. Vault cannot decode generated tokens with keys below the minimum key version.

<Highlight title=\"Convergent tokenization\">

By default, tokenization produces a unique token for every encode operation. Vault transform supports **convergent tokenization**, which lets you use the same encoded value for a given input.

Convergent tokenization lets you perform statistical analysis of the tokens in your system *without decoding the token*. For example, counting the entries for a given token, querying relationships between a token and other fields in your database, and relating information that is tokenized across multiple systems.

</Highlight>

## Performance considerations with Vault transform

Tokenization transformation is stateful, which means the encode operation must perform writes to storage on the primary node of your Vault cluster. As a result, any storage performance limits on the primary node also limit scalability of the encode operation. In comparison, neither Vault transit encryption nor FPE transformation write to storage, and both can be horizontally scaled using performance standby nodes.

For high-performance use cases, we recommend that you configure Vault to store the token mapping in an external database. External stores can achieve a much higher performance scale and reduce the load on the internal storage for your Vault installation. Vault currently supports the following external storage systems:

- PostgreSQL
- MySQL
- MSSQL

For more information on external storage, review the [Tokenization transform storage](\/vault\/docs\/secrets\/transform\/tokenization#storage) documentation.

## Learn more about Vault transform

- [Transform secrets engine tutorial](\/vault\/tutorials\/adp\/transform)
- [Transform tokenization overview](\/vault\/docs\/secrets\/transform\/tokenization)
- [Encrypt data with Transform tutorial](\/vault\/tutorials\/adp\/transform-code-example)
- [Tokenize data with Transform tutorial](\/vault\/tutorials\/adp\/tokenization)
- [Transform secrets engine API](\/vault\/api-docs\/secret\/transform)"}
{"questions":"vault page title Username Templating Username templating Username templating are used in some secret engines to allow operators to define layout docs how dynamic usernames are generated","answers":"---\nlayout: docs\npage_title: Username Templating\ndescription: >-\n  Username templating are used in some secret engines to allow operators to define\n  how dynamic usernames are generated.\n---\n\n# Username templating\n\nSome of the secrets engines that generate dynamic users for external systems provide the ability for Vault operators\nto customize how usernames are generated for said external systems. This customization feature uses the\n[Go template language](https:\/\/golang.org\/pkg\/text\/template\/). This page describes the basics of using these templates\nfor username generation but does not go into great depth of using the templating language for more advanced usages.\nSee the API documentation for the given secret engine to determine if it supports username templating and for more\ndetails on using it with that engine.\n\n~> When customizing how usernames are generated, take care to ensure you have enough randomness to ensure uniqueness\notherwise multiple calls to create the credentials may interfere with each other.\n\nIn addition to the functionality built into the Go template language, a number of additional functions are available:\n\n## Available functions\n\n### String\/Character manipulation\n\n`lowercase` - Lowercases the input value.<br\/>\n**Example**: ``\n\n`replace` - Find\/replace on the input value.<br\/>\n**Example**: ``\n\n`truncate` - truncates the input value to the specified number of characters.<br\/>\n**Example**: ``\n\n`truncate_sha256` - Truncates the input value to the specified number of characters. The last 8 characters of the\nnew value will be replace by the first 8 characters of the SHA256 hash of the truncated characters.<br\/>\n**Example**: ``. 
If `FieldName` is `abcdefghijklmnopqrstuvwxyz`, all characters after\nthe 12th (`l`) are removed and SHA256 hashed (`872808ffbf...1886ca6f20`).\nThe first 8 characters of the hash (`872808ff`) are then appended to the end of the first 12 characters from the\noriginal value: `abcdefghijkl872808ff`.\n\n`uppercase` - Uppercases the input value.<br\/>\n**Example**: ``\n\n### Generating values\n\n`random` - Generates a random string from lowercase letters, uppercase letters, and numbers. Must include a\nnumber indicating how many characters to generate.<br\/>\n**Example**: `` generates 20 random characters\n\n`timestamp` - The current time. Must provide a formatting string based on Go\u2019s [time package](https:\/\/golang.org\/pkg\/time\/).<br\/>\n**Example**: ``\n\n`unix_time` - The current unix timestamp (number of seconds since Jan 1 1970).<br\/>\n**Example**: ``\n\n`unix_time_millis` - The current unix timestamp in milliseconds.<br\/>\n**Example**: ``\n\n`uuid` - Generates a random UUID.<br\/>\n**Example**: ``\n\n### Hashing\n\n`base64` - Base64 encodes the input value.<br\/>\n**Example**: ``\n\n`sha256` - SHA256 hashes the input value.<br\/>\n**Example**: ``\n\n## Examples\n\nEach secret engine provides a different set of data to the template. Please see the associated secret engine's\ndocumentation for details on what values are provided to the template. The examples below are modeled after the\n[Database engine's](\/vault\/docs\/secrets\/databases) data, however the specific fields that are provided from a given engine\nmay differ from these examples. 
Additionally, the time is assumed to be 2009-02-13 11:31:30PM GMT\n(unix timestamp: 1234567890) and random characters are the ordered English alphabet: `abcdefghijklmnopqrstuvwxyz`.\n\n-> Note: The space between `{{` and `}}` and the values\/functions is optional.\nFor instance: `{{random 20}}` is equivalent to `{{ random 20 }}`\n\n| Field name    | Value                     |\n| ------------- | ------------------------- |\n| `DisplayName` | `token-with-display-name` |\n| `RoleName`    | `my_custom_database_role` |\n\nTo reference either of these fields, a `.` must be put in front of the field name: `{{.DisplayName}}`. Custom functions\ndo not include a `.` in front of them: `{{random 20}}`.\n\n### Basic example\n\n**Template**:\n\n```\n{{.DisplayName}}_{{.RoleName}}\n```\n\n**Username**:\n\n```\ntoken-with-display-name_my_custom_database_role\n```\n\nThis is a basic example that references the two fields that are provided to the template. In simplest terms, this is\na simple string substitution.\n\n~> This example does not have any randomness and should not be used when generating dynamic usernames. The purpose is to\ndemonstrate referencing data within the Go template language.\n\n### Custom functions\n\n**Template**:\n\n```\nFOO___\n```\n\n**Username**:\n\n```\nFOO_TOKEN_WITH_DISPLAY_NAME_MY_CUSTOM_DATABASE_ROLE_2009_02_13T11_31_30Z_0700\n```\n\n`{{.DisplayName | replace \"-\" \"_\" | uppercase}}` - Replaces all dashes with underscores and then uppercases the display name.<br\/>\n`{{.RoleName | replace \"-\" \"_\" | uppercase}}` - Replaces all dashes with underscores and then uppercases the role name.<br\/>\n`` - Generates the current timestamp using the provided format and\nreplaces all dashes with underscores.\n\n### Truncating to maximum length\n\n**Template**:\n\n```\n{{ printf \"v_%s_%s_%s_%s\" (.DisplayName | truncate 8) (.RoleName | truncate 8) (random 20) (unix_time) | truncate 45 }}\n```\n\n**Username**:\n\n```\nv_token-wi_my_custo_abcdefghijklmnopqrst_1234\n```\n\n`.DisplayName | truncate 8` truncates the display name to 8 characters (`token-wi`).<br\/>\n`.RoleName | truncate 8` truncates the role name to 8 characters (`my_custo`).<br\/>\n`random 20` generates 20 random characters `abcdefghijklmnopqrst`.<br\/>\n`unix_time` generates the current timestamp as the number of seconds since January 1, 1970 (`1234567890`).<br\/>\n\nEach of these values is passed to `printf \"v_%s_%s_%s_%s\"`, which prepends them with `v_` and puts an underscore between\neach field. This results in `v_token-wi_my_custo_abcdefghijklmnopqrst_1234567890`. 
This value is then passed to\n`truncate 45` where the last 6 characters are removed which results in `v_token-wi_my_custo_abcdefghijklmnopqrst_1234`.\n\n## Tutorial\n\nRefer to the [Database secrets\nengine](\/vault\/tutorials\/db-credentials\/database-secrets#define-a-username-template) for step-by-step instructions.","site":"vault"}
{"questions":"vault page title Tokens Warning heading Internal token structure is volatile Tokens are a core auth method in Vault Concepts and important features Tokens are opaque values so their structure is undocumented and subject to change layout docs Tokens","answers":"---\nlayout: docs\npage_title: Tokens\ndescription: Tokens are a core auth method in Vault. Concepts and important features.\n---\n\n# Tokens\n\n<Warning heading=\"Internal token structure is volatile\">\n  Tokens are opaque values so their structure is undocumented and subject to change.\n  Scripts and automations that rely on the internal structure of a token in scripts will break.\n<\/Warning>\n\nTokens are the core method for _authentication_ within Vault. Tokens\ncan be used directly or [auth methods](\/vault\/docs\/concepts\/auth)\ncan be used to dynamically generate tokens based on external identities.\n\nIf you've gone through the getting started guide, you probably noticed that\n`vault server -dev` (or `vault operator init` for a non-dev server) outputs an\ninitial \"root token.\" This is the first method of authentication for Vault.\nIt is also the only auth method that cannot be disabled.\n\nAs stated in the [authentication concepts](\/vault\/docs\/concepts\/auth),\nall external authentication mechanisms, such as GitHub, map down to dynamically\ncreated tokens. These tokens have all the same properties as a normal manually\ncreated token.\n\nWithin Vault, tokens map to information. The most important information mapped\nto a token is a set of one or more attached\n[policies](\/vault\/docs\/concepts\/policies). These policies control what the token\nholder is allowed to do within Vault. Other mapped information includes\nmetadata that can be viewed and is added to the audit log, such as creation\ntime, last renewal time, and more.\n\nRead on for a deeper dive into token concepts.  
See the\n[tokens tutorial](\/vault\/tutorials\/tokens\/tokens)\nfor details on how these concepts play out in practice.\n\n## Token types\n\nThere are three types of tokens. On this page `service` tokens and `batch` tokens are outlined,\nwhile `recovery` tokens are covered separately in their [own page](\/vault\/docs\/concepts\/recovery-mode#recovery-tokens).\nA section near the bottom of this page contains detailed information about their differences,\nbut it is useful to understand other token concepts first. The features in the following\nsections all apply to service tokens, and their applicability to batch tokens is discussed\nlater.\n\n### Token prefixes\n\nTokens have a specific prefix that indicates their type. As of Vault 1.10, this token\nformat was updated. The following table lists the prefix differences. This format\npattern and its change also apply to recovery tokens. After the prefix, a string of\n24 or more randomly-generated characters is appended.\n\n| Token Type      | Vault 1.9.x or earlier | Vault 1.10 and later |\n|-----------------|------------------------|----------------------|\n| Service tokens  | `s.<random>`           | `hvs.<random>`       |\n| Batch tokens    | `b.<random>`           | `hvb.<random>`       |\n| Recovery tokens | `r.<random>`           | `hvr.<random>`       |\n\nFor example, a service token may look like `hvs.CvmS4c0DPTvHv5eJgXWMJg9r`.\n\n## The token store\n\nOften in documentation or in help channels, the \"token store\" is referenced.\nThis is the same as the [`token` authentication\nbackend](\/vault\/docs\/auth\/token). This is a special\nbackend in that it is responsible for creating and storing tokens, and cannot\nbe disabled. It is also the only auth method that has no login\ncapability -- all actions require existing authenticated tokens.\n\n## Root tokens\n\nRoot tokens are tokens that have the `root` policy attached to them. Root\ntokens can do anything in Vault. _Anything_. 
In addition, they are the only\ntype of token within Vault that can be set to never expire without any renewal\nneeded. As a result, it is purposefully hard to create root tokens; in fact\nthere are only three ways to create root tokens:\n\n1. The initial root token generated at `vault operator init` time -- this token has no\n   expiration\n1. By using another root token; a root token with an expiration cannot create a\n   root token that never expires\n1. By using `vault operator generate-root` ([example](\/vault\/tutorials\/operations\/generate-root))\n   with the permission of a quorum of unseal key holders\n\nRoot tokens are useful in development but should be extremely carefully guarded\nin production. In fact, the Vault team recommends that root tokens are only\nused for just enough initial setup (usually, setting up auth methods\nand policies necessary to allow administrators to acquire more limited tokens)\nor in emergencies, and are revoked immediately after they are no longer needed.\nIf a new root token is needed, the `operator generate-root` command and associated\n[API endpoint](\/vault\/api-docs\/system\/generate-root) can be used to generate one on-the-fly.\n\nIt is also good security practice for there to be multiple eyes on a terminal\nwhenever a root token is live. This way multiple people can verify the\ntasks performed with the root token, and that the token was revoked immediately\nafter these tasks were completed.\n\n## Token hierarchies and orphan tokens\n\nNormally, when a token holder creates new tokens, these tokens will be created\nas children of the original token; tokens they create will be children of them;\nand so on. When a parent token is revoked, all of its child tokens -- and all\nof their leases -- are revoked as well. 
This ensures that a user cannot escape\nrevocation by simply generating a never-ending tree of child tokens.\n\nOften this behavior is not desired, so users with appropriate access can create\n`orphan` tokens. These tokens have no parent -- they are the root of their own\ntoken tree. These orphan tokens can be created:\n\n1. Via `write` access to the `auth\/token\/create-orphan` endpoint\n1. By having `sudo` or `root` access to the `auth\/token\/create` endpoint\n   and setting the `no_parent` parameter to `true`\n1. Via token store roles\n1. By logging in with any other (non-`token`) auth method\n\nUsers with appropriate permissions can also use the `auth\/token\/revoke-orphan`\nendpoint, which revokes the given token but, rather than revoking the rest of the\ntree, sets the token's immediate children to be orphans. Use with\ncaution!\n\n## Token accessors\n\nWhen tokens are created, a token accessor is also created and returned. This\naccessor is a value that acts as a reference to a token and can only be used to\nperform limited actions:\n\n1. Look up a token's properties (not including the actual token ID)\n1. Look up a token's capabilities on a path\n1. Renew the token\n1. Revoke the token\n\nThe token _making the call_, _not_ the token associated with the accessor, must\nhave appropriate permissions for these functions.\n\nThere are many useful workflows around token accessors. As an example, a\nservice that creates tokens on behalf of another service (such as the\n[Nomad](https:\/\/www.nomadproject.io\/) scheduler) can store the accessor\ncorrelated with a particular job ID. When the job is complete, the accessor can\nbe used to instantly revoke the token given to the job and all of its leased\ncredentials, limiting the chance that a bad actor will discover and use them.\n\nAudit devices can optionally be set to not obfuscate token accessors in audit\nlogs. 
This provides a way to quickly revoke tokens in case of an emergency.\nHowever, it also means that the audit logs can be used to perform a larger-scale\ndenial of service attack.\n\nFinally, the only way to \"list tokens\" is via the `auth\/token\/accessors`\ncommand, which actually gives a list of token accessors. While this is still a\ndangerous endpoint (since listing all of the accessors means that they can then\nbe used to revoke all tokens), it also provides a way to audit and revoke the\ncurrently-active set of tokens.\n\n## Token Time-To-Live, periodic tokens, and explicit max TTLs\n\nEvery non-root token has a time-to-live (TTL) associated with it, which is a\ncurrent period of validity since either the token's creation time or last\nrenewal time, whichever is more recent. (Root tokens may have a TTL associated,\nbut the TTL may also be 0, indicating a token that never expires). After the\ncurrent TTL is up, the token will no longer function -- it, and its associated\nleases, are revoked.\n\nIf the token is renewable, Vault can be asked to extend the token validity\nperiod using `vault token renew` or the appropriate renewal endpoint. At this\ntime, various factors come into play. What happens depends upon whether the\ntoken is a periodic token (available for creation by `root`\/`sudo` users, token\nstore roles, or some auth methods), has an explicit maximum TTL\nattached, or neither.\n\n### The general case\n\nIn the general case, where there is neither a period nor explicit maximum TTL\nvalue set on the token, the token's lifetime since it was created will be\ncompared to the maximum TTL. This maximum TTL value is dynamically generated\nand can change from renewal to renewal, so the value cannot be displayed when a\ntoken's information is looked up. It is based on a combination of factors:\n\n1. The system max TTL, which is 32 days but can be changed in Vault's\n   configuration file.\n1. 
The max TTL set on a mount using [mount\n   tuning](\/vault\/api-docs\/system\/mounts). This value\n   is allowed to override the system max TTL -- it can be longer or shorter,\n   and if set this value will be respected.\n1. A value suggested by the auth method that issued the token. This\n   might be configured on a per-role, per-group, or per-user basis. This value\n   is allowed to be less than the mount max TTL (or, if not set, the system max\n   TTL), but it is not allowed to be longer.\n\nNote that the values in (2) and (3) may change at any given time, which is why\na final determination about the current allowed max TTL is made at renewal time\nusing the current values. It is also why it is important to always ensure that\nthe TTL returned from a renewal operation is within an allowed range; if this\nvalue is not extending, likely the TTL of the token cannot be extended past its\ncurrent value and the client may want to reauthenticate and acquire a new\ntoken. However, outside of direct operator interaction, Vault will never revoke\na token before the returned TTL has expired.\n\n### Explicit max TTLs\n\nTokens can have an explicit max TTL set on them. This value becomes a hard\nlimit on the token's lifetime -- no matter what the values in (1), (2), and (3)\nfrom the general case are, the token cannot live past this explicitly-set\nvalue. This has an effect even when using periodic tokens to escape the normal\nTTL mechanism.\n\n### Periodic tokens\n\nIn some cases, having a token be revoked would be problematic -- for instance,\nif a long-running service needs to maintain its SQL connection pool over a long\nperiod of time. In this scenario, a periodic token can be used. Periodic tokens\ncan be created in a few ways:\n\n1. By having `sudo` capability or a `root` token with the `auth\/token\/create`\n   endpoint\n1. By using token store roles\n1. 
By using an auth method that supports issuing these, such as\n   AppRole\n\nAt issue time, the TTL of a periodic token will be equal to the configured\nperiod. At every renewal time, the TTL will be reset back to this configured\nperiod, and as long as the token is successfully renewed within each of these\nperiods of time, it will never expire. Outside of `root` tokens, it is\ncurrently the only way for a token in Vault to have an unlimited lifetime.\n\nThe idea behind periodic tokens is that it is easy for systems and services to\nperform an action relatively frequently -- for instance, every two hours, or\neven every five minutes. Therefore, as long as a system is actively renewing\nthis token -- in other words, as long as the system is alive -- the system is\nallowed to keep using the token and any associated leases. However, if the\nsystem stops renewing within this period (for instance, if it was shut down),\nthe token will expire relatively quickly. It is good practice to keep this\nperiod as short as possible, and generally speaking it is not useful for humans\nto be given periodic tokens.\n\nThere are a few important things to know when using periodic tokens:\n\n- When a periodic token is created via a token store role, the _current_ value\n  of the role's period setting will be used at renewal time\n- A token with both a period and an explicit max TTL will act like a periodic\n  token but will be revoked when the explicit max TTL is reached\n\n## CIDR-Bound tokens\n\nSome tokens are able to be bound to CIDR(s) that restrict the range of client\nIPs allowed to use them. These affect all tokens except for non-expiring root\ntokens (those with a TTL of zero). 
If a root token has an expiration, it also\nis affected by CIDR-binding.\n\n## Token types in detail\n\nThere are currently two types of tokens.\n\n### Service tokens\n\nService tokens are what users will generally think of as \"normal\" Vault tokens.\nThey support all features, such as renewal, revocation, creating child tokens,\nand more. They are correspondingly heavyweight to create and track.\n\n### Batch tokens\n\nBatch tokens are encrypted blobs that carry enough information for them to\nbe used for Vault actions, but they require no storage on disk to track them.\nAs a result they are extremely lightweight and scalable, but lack most of the\nflexibility and features of service tokens.\n\n### Token type comparison\n\nThis reference chart describes the difference in behavior between service and\nbatch tokens.\n\n|                                                     |                                          Service Tokens |                                    Batch Tokens |\n| --------------------------------------------------- | ------------------------------------------------------: | ----------------------------------------------: |\n| Can Be Root Tokens                                  |                                                     Yes |                                              No |\n| Can Create Child Tokens                             |                                                     Yes |                                              No |\n| Can be Renewable                                    |                                                     Yes |                                              No |\n| Manually Revocable                                  |                                                     Yes |                                              No |\n| Can be Periodic                                     |                                                     Yes |                                              No |\n| Can have 
Explicit Max TTL                           |                                                     Yes |                    No (always uses a fixed TTL) |\n| Has Accessors                                       |                                                     Yes |                                              No |\n| Has Cubbyhole                                       |                                                     Yes |                                              No |\n| Revoked with Parent (if not orphan)                 |                                                     Yes |                                   Stops Working |\n| Dynamic Secrets Lease Assignment                    |                                                    Self |                          Parent (if not orphan) |\n| Can be Used Across Performance Replication Clusters |                                                      No |                                 Yes (if orphan) |\n| Creation Scales with Performance Standby Node Count |                                                      No |                                             Yes |\n| Cost                                                | Heavyweight; multiple storage writes per token creation | Lightweight; no storage cost for token creation |\n\n### Service vs. batch token lease handling\n\n#### Service tokens\n\nLeases created by service tokens (including child tokens' leases) are tracked\nalong with the service token and revoked when the token expires.\n\n#### Batch tokens\n\nLeases created by batch tokens are constrained to the remaining TTL of the\nbatch tokens and, if the batch token is not an orphan, are tracked by the\nparent. 
They are revoked when the batch token's TTL expires, or when the batch\ntoken's parent is revoked (at which point the batch token is also denied access\nto Vault).\n\nAs a corollary, batch tokens can be used across performance replication\nclusters, but only if they are orphan, since non-orphan tokens will not be able\nto ensure the validity of the parent token.\n\n## Error Responses\n\nWhen using a token that has been revoked, exceeded its TTL, or is an otherwise invalid value, Vault will respond\nwith a `403` response code error containing the following error messages: `invalid token` and `permission denied`.\n\nWhen using a token with incorrect policy access, Vault will respond with a `403` response code error containing the error message\n`permission denied`.\n\n","site":"vault","answers_cleaned":"    layout  docs page title  Tokens description  Tokens are a core auth method in Vault  Concepts and important features         Tokens   Warning heading  Internal token structure is volatile     Tokens are opaque values so their structure is undocumented and subject to change    Scripts and automations that rely on the internal structure of a token in scripts will break    Warning   Tokens are the core method for  authentication  within Vault  Tokens can be used directly or  auth methods   vault docs concepts auth  can be used to dynamically generate tokens based on external identities   If you ve gone through the getting started guide  you probably noticed that  vault server  dev   or  vault operator init  for a non dev server  outputs an initial  root token   This is the first method of authentication for Vault  It is also the only auth method that cannot be disabled   As stated in the  authentication concepts   vault docs concepts auth   all external authentication mechanisms  such as GitHub  map down to dynamically created tokens  These tokens have all the same properties as a normal manually created token   Within Vault  tokens map to information  The most 
important information mapped to a token is a set of one or more attached  policies   vault docs concepts policies   These policies control what the token holder is allowed to do within Vault  Other mapped information includes metadata that can be viewed and is added to the audit log  such as creation time  last renewal time  and more   Read on for a deeper dive into token concepts   See the  tokens tutorial   vault tutorials tokens tokens  for details on how these concepts play out in practice      Token types  There are three types of tokens  On this page  service  tokens and  batch  tokens are outlined  while  recovery  tokens are covered separately in their  own page   vault docs concepts recovery mode recovery tokens   A section near the bottom of this page contains detailed information about their differences  but it is useful to understand other token concepts first  The features in the following sections all apply to service tokens  and their applicability to batch tokens is discussed later       Token prefixes  Tokens have a specific prefix that indicates their type  As of Vault 1 10  this token format was updated  The following table lists the prefix differences  This format pattern and its change also apply for recovery tokens  After the prefix  a string of 24 or more randomly generated characters is appended     Token Type        Vault 1 9 x or earlier   Vault 1 10 and later                                                                         Service tokens     s  random                hvs  random             Batch tokens       b  random                hvb  random             Recovery tokens    r  random                hvr  random            For example  a service token may look like  hvs CvmS4c0DPTvHv5eJgXWMJg9r       The token store  Often in documentation or in help channels  the  token store  is referenced  This is the same as the   token  authentication backend   vault docs auth token   This is a special backend in that it is responsible for 
creating and storing tokens  and cannot be disabled  It is also the only auth method that has no login capability    all actions require existing authenticated tokens      Root tokens  Root tokens are tokens that have the  root  policy attached to them  Root tokens can do anything in Vault   Anything   In addition  they are the only type of token within Vault that can be set to never expire without any renewal needed  As a result  it is purposefully hard to create root tokens  in fact there are only three ways to create root tokens   1  The initial root token generated at  vault operator init  time    this token has no    expiration 1  By using another root token  a root token with an expiration cannot create a    root token that never expires 1  By using  vault operator generate root    example   vault tutorials operations generate root      with the permission of a quorum of unseal key holders  Root tokens are useful in development but should be extremely carefully guarded in production  In fact  the Vault team recommends that root tokens are only used for just enough initial setup  usually  setting up auth methods and policies necessary to allow administrators to acquire more limited tokens  or in emergencies  and are revoked immediately after they are no longer needed  If a new root token is needed  the  operator generate root  command and associated  API endpoint   vault api docs system generate root  can be used to generate one on the fly   It is also good security practice for there to be multiple eyes on a terminal whenever a root token is live  This way multiple people can verify as to the tasks performed with the root token  and that the token was revoked immediately after these tasks were completed      Token hierarchies and orphan tokens  Normally  when a token holder creates new tokens  these tokens will be created as children of the original token  tokens they create will be children of them  and so on  When a parent token is revoked  all of its child 
tokens    and all of their leases    are revoked as well  This ensures that a user cannot escape revocation by simply generating a never ending tree of child tokens   Often this behavior is not desired  so users with appropriate access can create  orphan  tokens  These tokens have no parent    they are the root of their own token tree  These orphan tokens can be created   1  Via  write  access to the  auth token create orphan  endpoint 1  By having  sudo  or  root  access to the  auth token create     and setting the  no parent  parameter to  true  1  Via token store roles 1  By logging in with any other  non  token   auth method  Users with appropriate permissions can also use the  auth token revoke orphan  endpoint  which revokes the given token but rather than revoke the rest of the tree  it instead sets the tokens  immediate children to be orphans  Use with caution      Token accessors  When tokens are created  a token accessor is also created and returned  This accessor is a value that acts as a reference to a token and can only be used to perform limited actions   1  Look up a token s properties  not including the actual token ID  1  Look up a token s capabilities on a path 1  Renew the token 1  Revoke the token  The token  making the call    not  the token associated with the accessor  must have appropriate permissions for these functions   There are many useful workflows around token accessors  As an example  a service that creates tokens on behalf of another service  such as the  Nomad  https   www nomadproject io   scheduler  can store the accessor correlated with a particular job ID  When the job is complete  the accessor can be used to instantly revoke the token given to the job and all of its leased credentials  limiting the chance that a bad actor will discover and use them   Audit devices can optionally be set to not obfuscate token accessors in audit logs  This provides a way to quickly revoke tokens in case of an emergency  However  it also means 
that the audit logs can be used to perform a larger scale denial of service attack   Finally  the only way to  list tokens  is via the  auth token accessors  command  which actually gives a list of token accessors  While this is still a dangerous endpoint  since listing all of the accessors means that they can then be used to revoke all tokens   it also provides a way to audit and revoke the currently active set of tokens      Token Time To Live  periodic tokens  and explicit max TTLs  Every non root token has a time to live  TTL  associated with it  which is a current period of validity since either the token s creation time or last renewal time  whichever is more recent   Root tokens may have a TTL associated  but the TTL may also be 0  indicating a token that never expires   After the current TTL is up  the token will no longer function    it  and its associated leases  are revoked   If the token is renewable  Vault can be asked to extend the token validity period using  vault token renew  or the appropriate renewal endpoint  At this time  various factors come into play  What happens depends upon whether the token is a periodic token  available for creation by  root   sudo  users  token store roles  or some auth methods   has an explicit maximum TTL attached  or neither       The general case  In the general case  where there is neither a period nor explicit maximum TTL value set on the token  the token s lifetime since it was created will be compared to the maximum TTL  This maximum TTL value is dynamically generated and can change from renewal to renewal  so the value cannot be displayed when a token s information is looked up  It is based on a combination of factors   1  The system max TTL  which is 32 days but can be changed in Vault s    configuration file  1  The max TTL set on a mount using  mount    tuning   vault api docs system mounts   This value    is allowed to override the system max TTL    it can be longer or shorter     and if set this value will 
be respected  1  A value suggested by the auth method that issued the token  This    might be configured on a per role  per group  or per user basis  This value    is allowed to be less than the mount max TTL  or  if not set  the system max    TTL   but it is not allowed to be longer   Note that the values in  2  and  3  may change at any given time  which is why a final determination about the current allowed max TTL is made at renewal time using the current values  It is also why it is important to always ensure that the TTL returned from a renewal operation is within an allowed range  if this value is not extending  likely the TTL of the token cannot be extended past its current value and the client may want to reauthenticate and acquire a new token  However  outside of direct operator interaction  Vault will never revoke a token before the returned TTL has expired       Explicit max TTLs  Tokens can have an explicit max TTL set on them  This value becomes a hard limit on the token s lifetime    no matter what the values in  1    2   and  3  from the general case are  the token cannot live past this explicitly set value  This has an effect even when using periodic tokens to escape the normal TTL mechanism       Periodic tokens  In some cases  having a token be revoked would be problematic    for instance  if a long running service needs to maintain its SQL connection pool over a long period of time  In this scenario  a periodic token can be used  Periodic tokens can be created in a few ways   1  By having  sudo  capability or a  root  token with the  auth token create     endpoint 1  By using token store roles 1  By using an auth method that supports issuing these  such as    AppRole  At issue time  the TTL of a periodic token will be equal to the configured period  At every renewal time  the TTL will be reset back to this configured period  and as long as the token is successfully renewed within each of these periods of time  it will never expire  Outside of  
root  tokens  it is currently the only way for a token in Vault to have an unlimited lifetime   The idea behind periodic tokens is that it is easy for systems and services to perform an action relatively frequently    for instance  every two hours  or even every five minutes  Therefore  as long as a system is actively renewing this token    in other words  as long as the system is alive    the system is allowed to keep using the token and any associated leases  However  if the system stops renewing within this period  for instance  if it was shut down   the token will expire relatively quickly  It is good practice to keep this period as short as possible  and generally speaking it is not useful for humans to be given periodic tokens   There are a few important things to know when using periodic tokens     When a periodic token is created via a token store role  the  current  value   of the role s period setting will be used at renewal time   A token with both a period and an explicit max TTL will act like a periodic   token but will be revoked when the explicit max TTL is reached     CIDR Bound tokens  Some tokens are able to be bound to CIDR s  that restrict the range of client IPs allowed to use them  These affect all tokens except for non expiring root tokens  those with a TTL of zero   If a root token has an expiration  it also is affected by CIDR binding      Token types in detail  There are currently two types of tokens       Service tokens  Service tokens are what users will generally think of as  normal  Vault tokens  They support all features  such as renewal  revocation  creating child tokens  and more  They are correspondingly heavyweight to create and track       Batch tokens  Batch tokens are encrypted blobs that carry enough information for them to be used for Vault actions  but they require no storage on disk to track them  As a result they are extremely lightweight and scalable  but lack most of the flexibility and features of service tokens       
Token type comparison  This reference chart describes the difference in behavior between service and batch tokens                                                                                                    Service Tokens                                      Batch Tokens                                                                                                                                                                         Can Be Root Tokens                                                                                        Yes                                                No     Can Create Child Tokens                                                                                   Yes                                                No     Can be Renewable                                                                                          Yes                                                No     Manually Revocable                                                                                        Yes                                                No     Can be Periodic                                                                                           Yes                                                No     Can have Explicit Max TTL                                                                                 Yes                      No  always uses a fixed TTL      Has Accessors                                                                                             Yes                                                No     Has Cubbyhole                                                                                             Yes                                                No     Revoked with Parent  if not orphan                                                                        Yes                                     Stops Working     Dynamic Secrets Lease Assignment                                              
                           Self                            Parent  if not orphan      Can be Used Across Performance Replication Clusters                                                        No                                   Yes  if orphan      Creation Scales with Performance Standby Node Count                                                        No                                               Yes     Cost                                                  Heavyweight  multiple storage writes per token creation   Lightweight  no storage cost for token creation        Service vs  batch token lease handling       Service tokens  Leases created by service tokens  including child tokens  leases  are tracked along with the service token and revoked when the token expires        Batch tokens  Leases created by batch tokens are constrained to the remaining TTL of the batch tokens and  if the batch token is not an orphan  are tracked by the parent  They are revoked when the batch token s TTL expires  or when the batch token s parent is revoked  at which point the batch token is also denied access to Vault    As a corollary  batch tokens can be used across performance replication clusters  but only if they are orphan  since non orphan tokens will not be able to ensure the validity of the parent token      Error Responses  When using a token that has been revoked  exceeded its TTL  or is an otherwise invalid value  Vault will respond with a  403  response code error containing the following error messages   invalid token  and  permission denied    When using a token with incorrect policy access  Vault will respond with a  403  response code error containing the error message  permission denied    "}
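The token-prefix scheme described in the record above (service `hvs.`, batch `hvb.`, recovery `hvr.` since Vault 1.10; `s.`, `b.`, `r.` in 1.9.x and earlier) can be sketched as a small classifier. This is an illustrative helper only; `classifyToken` is a hypothetical function, not part of the Vault API or CLI:

```go
package main

import (
	"fmt"
	"strings"
)

// classifyToken reports the token type implied by a Vault token's prefix,
// following the prefix table in the tokens documentation: Vault 1.10+
// uses hvs./hvb./hvr., while 1.9.x and earlier used s./b./r.
// Hypothetical helper for illustration; prefixes alone prove nothing
// about a token's validity.
func classifyToken(token string) string {
	switch {
	case strings.HasPrefix(token, "hvs.") || strings.HasPrefix(token, "s."):
		return "service"
	case strings.HasPrefix(token, "hvb.") || strings.HasPrefix(token, "b."):
		return "batch"
	case strings.HasPrefix(token, "hvr.") || strings.HasPrefix(token, "r."):
		return "recovery"
	default:
		return "unknown"
	}
}

func main() {
	// Example service token format from the docs.
	fmt.Println(classifyToken("hvs.CvmS4c0DPTvHv5eJgXWMJg9r")) // service
	fmt.Println(classifyToken("hvb.AAAAAQ"))                   // batch
}
```

A classifier like this is only a readability aid (e.g. for log triage); actual token properties must be looked up via the token store, as the accessor section above describes.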
{"questions":"vault Before performing any operation with Vault the connecting client must be Authentication authenticated layout docs page title Authentication","answers":"---\nlayout: docs\npage_title: Authentication\ndescription: >-\n  Before performing any operation with Vault, the connecting client must be\n  authenticated.\n---\n\n# Authentication\n\nAuthentication in Vault is the process by which user or machine supplied\ninformation is verified against an internal or external system. Vault supports\nmultiple [auth methods](\/vault\/docs\/auth) including GitHub,\nLDAP, AppRole, and more. Each auth method has a specific use case.\n\nBefore a client can interact with Vault, it must _authenticate_ against an\nauth method. Upon authentication, a token is generated. This token is\nconceptually similar to a session ID on a website. The token may have attached\npolicy, which is mapped at authentication time. This process is described in\ndetail in the [policies concepts](\/vault\/docs\/concepts\/policies) documentation.\n\n## auth methods\n\nVault supports a number of auth methods. Some backends are targeted\ntoward users while others are targeted toward machines. Most authentication\nbackends must be enabled before use. To enable an auth method:\n\n```shell-session\n$ vault auth enable userpass -path=my-auth\n```\n\nThis enables the \"userpass\" auth method at the path \"my-auth\". This\nauthentication will be accessible at the path \"my-auth\". Often you will see\nauthentications at the same path as their name, but this is not a requirement.\n\nTo learn more about this authentication, use the built-in `path-help` command:\n\n```shell-session\n$ vault path-help auth\/my-auth\n# ...\n```\n\nVault supports multiple auth methods simultaneously, and you can even\nmount the same type of auth method at different paths. 
Only one\nauthentication is required to gain access to Vault, and it is not currently\npossible to force a user through multiple auth methods to gain\naccess, although some backends do support MFA.\n\n## Tokens\n\nThere is an [entire page dedicated to tokens](\/vault\/docs\/concepts\/tokens),\nbut it is important to understand that authentication works by verifying\nyour identity and then generating a token to associate with that identity.\n\nFor example, even though you may authenticate using something like GitHub,\nVault generates a unique access token for you to use for future requests.\nThe CLI automatically attaches this token to requests, but if you're using\nthe API you'll have to do this manually.\n\nThis token given for authentication with any backend can also be used\nwith the full set of token commands, such as creating new sub-tokens,\nrevoking tokens, and renewing tokens. This is all covered on the\n[token concepts page](\/vault\/docs\/concepts\/tokens).\n\n## Authenticating\n\n### Via the CLI\n\nTo authenticate with the CLI, `vault login` is used. This supports many\nof the built-in auth methods. For example, with GitHub:\n\n```shell-session\n$ vault login -method=github token=<token>\n...\n```\n\nAfter authenticating, you will be logged in. The CLI command will also\noutput your raw token. This token is used for revocation and renewal.\nAs the user logging in, the primary use case of the token is renewal,\ncovered below in the \"Auth Leases\" section.\n\nTo determine what variables are needed for an auth method,\nsupply the `-method` flag without any additional arguments and help\nwill be shown.\n\nIf you're using a method that isn't supported via the CLI, then the API\nmust be used.\n\n### Via the API\n\nAPI authentication is generally used for machine authentication. Each\nauth method implements its own login endpoint. 
Use the `vault path-help`\nmechanism to find the proper endpoint.\n\nFor example, the GitHub login endpoint is located at `auth\/github\/login`.\nAnd to determine the arguments needed, `vault path-help auth\/github\/login` can\nbe used.\n\n## Auth leases\n\nJust like secrets, identities have\n[leases](\/vault\/docs\/concepts\/lease) associated with them. This means that\nyou must reauthenticate after the given lease period to continue accessing\nVault.\n\nTo set the lease associated with an identity, reference the help for\nthe specific auth method in use. It is specific to each backend\nhow leasing is implemented.\n\nAnd just like secrets, identities can be renewed without having to\ncompletely reauthenticate. Just use `vault token renew <token>` with the\nleased token associated with your identity to renew it.\n\n## Code example\n\nThe following code snippet demonstrates how to renew auth tokens.\n\n<CodeTabs heading=\"token renewal example\">\n\n<CodeBlockConfig lineNumbers>\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\n\tvault \"github.com\/hashicorp\/vault\/api\"\n\tauth \"github.com\/hashicorp\/vault\/api\/auth\/userpass\"\n)\n\n\/\/ Once you've set the token for your Vault client, you will need to\n\/\/ periodically renew its lease.\n\/\/\n\/\/ A function like this should be run as a goroutine to avoid blocking.\n\/\/\n\/\/ Production applications may also wish to be more tolerant of failures and\n\/\/ retry rather than exiting.\n\/\/\n\/\/ Additionally, enterprise Vault users should be aware that due to eventual\n\/\/ consistency, the API may return unexpected errors when running Vault with\n\/\/ performance standbys or performance replication, despite the client having\n\/\/ a freshly renewed token. 
See https:\/\/developer.hashicorp.com\/vault\/docs\/enterprise\/consistency#vault-1-7-mitigations\n\/\/ for several ways to mitigate this which are outside the scope of this code sample.\nfunc renewToken(client *vault.Client) {\n\tfor {\n\t\tvaultLoginResp, err := login(client)\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"unable to authenticate to Vault: %v\", err)\n\t\t}\n\t\ttokenErr := manageTokenLifecycle(client, vaultLoginResp)\n\t\tif tokenErr != nil {\n\t\t\tlog.Fatalf(\"unable to start managing token lifecycle: %v\", tokenErr)\n\t\t}\n\t}\n}\n\n\/\/ Starts token lifecycle management. Returns only fatal errors as errors,\n\/\/ otherwise returns nil so we can attempt login again.\nfunc manageTokenLifecycle(client *vault.Client, token *vault.Secret) error {\n\trenew := token.Auth.Renewable \/\/ You may notice a different top-level field called Renewable. That one is used for dynamic secrets renewal, not token renewal.\n\tif !renew {\n\t\tlog.Printf(\"Token is not configured to be renewable. Re-attempting login.\")\n\t\treturn nil\n\t}\n\n\twatcher, err := client.NewLifetimeWatcher(&vault.LifetimeWatcherInput{\n\t\tSecret:    token,\n\t\tIncrement: 3600, \/\/ Learn more about this optional value in https:\/\/developer.hashicorp.com\/vault\/docs\/concepts\/lease#lease-durations-and-renewal\n\t})\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to initialize new lifetime watcher for renewing auth token: %w\", err)\n\t}\n\n\tgo watcher.Start()\n\tdefer watcher.Stop()\n\n\tfor {\n\t\tselect {\n\t\t\/\/ `DoneCh` will return if renewal fails, or if the remaining lease\n\t\t\/\/ duration is under a built-in threshold and either renewing is not\n\t\t\/\/ extending it or renewing is disabled. In any case, the caller\n\t\t\/\/ needs to attempt to log in again.\n\t\tcase err := <-watcher.DoneCh():\n\t\t\tif err != nil {\n\t\t\t\tlog.Printf(\"Failed to renew token: %v. 
Re-attempting login.\", err)\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\t\/\/ This occurs once the token has reached max TTL.\n\t\t\tlog.Printf(\"Token can no longer be renewed. Re-attempting login.\")\n\t\t\treturn nil\n\n\t\t\/\/ Successfully completed renewal\n\t\tcase renewal := <-watcher.RenewCh():\n\t\t\tlog.Printf(\"Successfully renewed: %#v\", renewal)\n\t\t}\n\t}\n}\n\nfunc login(client *vault.Client) (*vault.Secret, error) {\n\t\/\/ WARNING: A plaintext password like this is obviously insecure.\n\t\/\/ See the hashicorp\/vault-examples repo for full examples of how to securely\n\t\/\/ log in to Vault using various auth methods. This function is just\n\t\/\/ demonstrating the basic idea that a *vault.Secret is returned by\n\t\/\/ the login call.\n\tuserpassAuth, err := auth.NewUserpassAuth(\"my-user\", &auth.Password{FromString: \"my-password\"})\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to initialize userpass auth method: %w\", err)\n\t}\n\n\tauthInfo, err := client.Auth().Login(context.TODO(), userpassAuth)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to login to userpass auth method: %w\", err)\n\t}\n\tif authInfo == nil {\n\t\treturn nil, fmt.Errorf(\"no auth info was returned after login\")\n\t}\n\n\treturn authInfo, nil\n}\n```\n\n<\/CodeBlockConfig>\n\n<\/CodeTabs>","site":"vault","answers_cleaned":"    layout  docs page title  Authentication description       Before performing any operation with Vault  the connecting client must be   authenticated         Authentication  Authentication in Vault is the process by which user or machine supplied information is verified against an internal or external system  Vault supports multiple  auth methods   vault docs auth  including GitHub  LDAP  AppRole  and more  Each auth method has a specific use case   Before a client can interact with Vault  it must  authenticate  against an auth method  Upon authentication  a token is generated  This token is conceptually similar to a session ID on a 
website  The token may have attached policy  which is mapped at authentication time  This process is described in detail in the  policies concepts   vault docs concepts policies  documentation      auth methods  Vault supports a number of auth methods  Some backends are targeted toward users while others are targeted toward machines  Most authentication backends must be enabled before use  To enable an auth method      shell session   vault auth enable userpass  path my auth      This enables the  userpass  auth method at the path  my auth   This authentication will be accessible at the path  my auth   Often you will see authentications at the same path as their name  but this is not a requirement   To learn more about this authentication  use the built in  path help  command      shell session   vault path help auth my auth            Vault supports multiple auth methods simultaneously  and you can even mount the same type of auth method at different paths  Only one authentication is required to gain access to Vault  and it is not currently possible to force a user through multiple auth methods to gain access  although some backends do support MFA      Tokens  There is an  entire page dedicated to tokens   vault docs concepts tokens   but it is important to understand that authentication works by verifying your identity and then generating a token to associate with that identity   For example  even though you may authenticate using something like GitHub  Vault generates a unique access token for you to use for future requests  The CLI automatically attaches this token to requests  but if you re using the API you ll have to do this manually   This token given for authentication with any backend can also be used with the full set of token commands  such as creating new sub tokens  revoking tokens  and renewing tokens  This is all covered on the  token concepts page   vault docs concepts tokens       Authenticating      Via the CLI  To authenticate with the CLI   
`vault login` is used. This supports many of the built-in auth methods. For example, with GitHub:\n\n```shell-session\n$ vault login -method=github token=<token>\n```\n\nAfter authenticating, you will be logged in. The CLI command will also output your raw token. This token is used for revocation and renewal. As the user logging in, the primary use case of the token is renewal, covered below in the \"Auth leases\" section.\n\nTo determine what variables are needed for an auth method, supply the `-method` flag without any additional arguments and help will be shown.\n\nIf you're using a method that isn't supported via the CLI, then the API must be used.\n\n### Via the API\n\nAPI authentication is generally used for machine authentication. Each auth method implements its own login endpoint. Use the `vault path-help` mechanism to find the proper endpoint.\n\nFor example, the GitHub login endpoint is located at `auth\/github\/login`. And to determine the arguments needed, `vault path-help auth\/github\/login` can be used.\n\n## Auth leases\n\nJust like secrets, identities have [leases](\/vault\/docs\/concepts\/lease) associated with them. This means that you must reauthenticate after the given lease period to continue accessing Vault.\n\nTo set the lease associated with an identity, reference the help for the specific auth method in use. It is specific to each backend how leasing is implemented.\n\nAnd just like secrets, identities can be renewed without having to completely reauthenticate. Just use `vault token renew <token>` with the leased token associated with your identity to renew it.\n\n### Code example\n\nThe following code snippet demonstrates how to renew auth tokens.\n\n<CodeTabs heading=\"token renewal example\">\n\n<CodeBlockConfig lineNumbers>\n\n```go\npackage main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"log\"\n\n\tvault \"github.com\/hashicorp\/vault\/api\"\n\tauth \"github.com\/hashicorp\/vault\/api\/auth\/userpass\"\n)\n\n\/\/ Once you've set the token for your Vault client, you will need to\n\/\/ periodically renew its lease.\n\/\/\n\/\/ A function like this should be run as a goroutine to avoid blocking.\n\/\/\n\/\/ Production applications may also wish to be more tolerant of failures and\n\/\/ retry rather than exiting.\n\/\/\n\/\/ Additionally, enterprise Vault users should be aware that due to eventual\n\/\/ consistency, the API may return unexpected errors when running Vault with\n\/\/ performance standbys or performance replication, despite the client having\n\/\/ a freshly renewed token. See https:\/\/developer.hashicorp.com\/vault\/docs\/enterprise\/consistency#vault-1-7-mitigations\n\/\/ for several ways to mitigate this which are outside the scope of this code sample.\nfunc renewToken(client *vault.Client) {\n\tfor {\n\t\tvaultLoginResp, err := login(client)\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"unable to authenticate to Vault: %v\", err)\n\t\t}\n\n\t\ttokenErr := manageTokenLifecycle(client, vaultLoginResp)\n\t\tif tokenErr != nil {\n\t\t\tlog.Fatalf(\"unable to start managing token lifecycle: %v\", tokenErr)\n\t\t}\n\t}\n}\n\n\/\/ Starts token lifecycle management. Returns only fatal errors as errors,\n\/\/ otherwise returns nil so we can attempt login again.\nfunc manageTokenLifecycle(client *vault.Client, token *vault.Secret) error {\n\trenew := token.Auth.Renewable \/\/ You may notice a different top-level field called Renewable. That one is used for dynamic secrets renewal, not token renewal.\n\tif !renew {\n\t\tlog.Printf(\"Token is not configured to be renewable. Re-attempting login.\")\n\t\treturn nil\n\t}\n\n\twatcher, err := client.NewLifetimeWatcher(&vault.LifetimeWatcherInput{\n\t\tSecret:    token,\n\t\tIncrement: 3600, \/\/ Learn more about this optional value in https:\/\/developer.hashicorp.com\/vault\/docs\/concepts\/lease#lease-durations-and-renewal\n\t})\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to initialize new lifetime watcher for renewing auth token: %w\", err)\n\t}\n\n\tgo watcher.Start()\n\tdefer watcher.Stop()\n\n\tfor {\n\t\tselect {\n\t\t\/\/ `DoneCh` will return if renewal fails, or if the remaining lease\n\t\t\/\/ duration is under a built-in threshold and either renewing is not\n\t\t\/\/ extending it or renewing is disabled. In any case, the caller\n\t\t\/\/ needs to attempt to log in again.\n\t\tcase err := <-watcher.DoneCh():\n\t\t\tif err != nil {\n\t\t\t\tlog.Printf(\"Failed to renew token: %v. Re-attempting login.\", err)\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\t\/\/ This occurs once the token has reached max TTL.\n\t\t\tlog.Printf(\"Token can no longer be renewed. Re-attempting login.\")\n\t\t\treturn nil\n\n\t\t\/\/ Successfully completed renewal\n\t\tcase renewal := <-watcher.RenewCh():\n\t\t\tlog.Printf(\"Successfully renewed: %#v\", renewal)\n\t\t}\n\t}\n}\n\nfunc login(client *vault.Client) (*vault.Secret, error) {\n\t\/\/ WARNING: A plaintext password like this is obviously insecure.\n\t\/\/ See the hashicorp\/vault-examples repo for full examples of how to securely\n\t\/\/ log in to Vault using various auth methods. This function is just\n\t\/\/ demonstrating the basic idea that a *vault.Secret is returned by\n\t\/\/ the login call.\n\tuserpassAuth, err := auth.NewUserpassAuth(\"my-user\", &auth.Password{FromString: \"my-password\"})\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to initialize userpass auth method: %w\", err)\n\t}\n\n\tauthInfo, err := client.Auth().Login(context.TODO(), userpassAuth)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to login to userpass auth method: %w\", err)\n\t}\n\tif authInfo == nil {\n\t\treturn nil, fmt.Errorf(\"no auth info was returned after login\")\n\t}\n\n\treturn authInfo, nil\n}\n```\n\n</CodeBlockConfig>\n\n</CodeTabs>"}
{"questions":"vault which parts of Vault a user can access Policies are how authorization is done in Vault allowing you to restrict layout docs Policies page title Policies","answers":"---\nlayout: docs\npage_title: Policies\ndescription: >-\n  Policies are how authorization is done in Vault, allowing you to restrict\n  which parts of Vault a user can access.\n---\n\n# Policies\n\nEverything in Vault is path-based, and policies are no exception. Policies\nprovide a declarative way to grant or forbid access to certain paths and\noperations in Vault. This section discusses policy workflows and syntaxes.\n\nPolicies are **deny by default**, so an empty policy grants no permission in the\nsystem.\n\n## Policy-authorization workflow\n\nBefore a human or machine can gain access, an administrator must configure Vault\nwith an [auth method](\/vault\/docs\/concepts\/auth). Authentication is\nthe process by which human or machine-supplied information is verified against\nan internal or external system.\n\nConsider the following diagram, which illustrates the steps a security team\nwould take to configure Vault to authenticate using a corporate LDAP or\nActiveDirectory installation. Even though this example uses LDAP, the concept\napplies to all auth methods.\n\n[![Vault Auth Workflow](\/img\/vault-policy-workflow.svg)](\/img\/vault-policy-workflow.svg)\n\n1. The security team configures Vault to connect to an auth method.\n   This configuration varies by auth method. In the case of LDAP, Vault\n   needs to know the address of the LDAP server and whether to connect using TLS.\n   It is important to note that Vault does not store a copy of the LDAP database -\n   Vault will delegate the authentication to the auth method.\n\n1. The security team authors a policy (or uses an existing policy) which grants\n   access to paths in Vault. Policies are written in HCL in your editor of\n   preference and saved to disk.\n\n1. 
The policy's contents are uploaded and stored in Vault and referenced by name.\n   You can think of the policy's name as a pointer or symlink to its set of rules.\n\n1. Most importantly, the security team maps data in the auth method to a policy.\n   For example, the security team might create mappings like:\n\n   > Members of the OU group \"dev\" map to the Vault policy named \"readonly-dev\".\n\n   or\n\n   > Members of the OU group \"ops\" map to the Vault policies \"admin\" and \"auditor\".\n\nNow Vault has an internal mapping between a backend authentication system and\ninternal policy. When a user authenticates to Vault, the actual authentication\nis delegated to the auth method. As a user, the flow looks like:\n\n[![Vault Auth Workflow](\/img\/vault-auth-workflow.svg)](\/img\/vault-auth-workflow.svg)\n\n1. A user attempts to authenticate to Vault using their LDAP credentials,\n   providing Vault with their LDAP username and password.\n\n1. Vault establishes a connection to LDAP and asks the LDAP server to verify the\n   given credentials. Assuming this is successful, the LDAP server returns the\n   information about the user, including the OU groups.\n\n1. Vault maps the result from the LDAP server to policies inside Vault using the\n   mapping configured by the security team in the previous section. Vault then\n   generates a token and attaches the matching policies.\n\n1. Vault returns the token to the user. This token has the correct policies\n   assigned, as dictated by the mapping configuration that was set up by the\n   security team in advance.\n\nThe user then uses this Vault token for future operations. If the user performs\nthe authentication steps again, they will get a _new_ token. The token will have\nthe same permissions, but the actual token will be different. 
Authenticating a\nsecond time does not invalidate the original token.\n\n## Policy syntax\n\nPolicies are written in [HCL][hcl] or JSON and describe which paths in Vault a\nuser or machine is allowed to access.\n\n[hcl]: https:\/\/github.com\/hashicorp\/hcl\n\nHere is a very simple policy which grants read capabilities to the [KVv1](\/vault\/api-docs\/secret\/kv\/kv-v1) path\n`\"secret\/foo\"`:\n\n```hcl\npath \"secret\/foo\" {\n  capabilities = [\"read\"]\n}\n```\n\nWhen this policy is assigned to a token, the token can read from `\"secret\/foo\"`.\nHowever, the token cannot update or delete `\"secret\/foo\"`, since the\ncapabilities do not allow it. Because policies are **deny by default**, the\ntoken would have no other access in Vault.\n\nHere is a more detailed policy, and it is documented inline:\n\n```hcl\n# This section grants all access on \"secret\/*\". Further restrictions can be\n# applied to this broad policy, as shown below.\npath \"secret\/*\" {\n  capabilities = [\"create\", \"read\", \"update\", \"patch\", \"delete\", \"list\"]\n}\n\n# Even though we allowed secret\/*, this line explicitly denies\n# secret\/super-secret. This takes precedence.\npath \"secret\/super-secret\" {\n  capabilities = [\"deny\"]\n}\n\n# Policies can also specify allowed, disallowed, and required parameters. Here\n# the key \"secret\/restricted\" can only contain \"foo\" (any value) and \"bar\" (one\n# of \"zip\" or \"zap\").\npath \"secret\/restricted\" {\n  capabilities = [\"create\"]\n  allowed_parameters = {\n    \"foo\" = []\n    \"bar\" = [\"zip\", \"zap\"]\n  }\n}\n```\n\nPolicies use path-based matching to test the set of capabilities against a\nrequest. A policy `path` may specify an exact path to match, or it could specify\na glob pattern which instructs Vault to use a prefix match:\n\n```hcl\n# Permit reading only \"secret\/foo\". 
An attached token cannot read \"secret\/food\"\n# or \"secret\/foo\/bar\".\npath \"secret\/foo\" {\n  capabilities = [\"read\"]\n}\n\n# Permit reading everything under \"secret\/bar\". An attached token could read\n# \"secret\/bar\/zip\", \"secret\/bar\/zip\/zap\", but not \"secret\/bars\/zip\".\npath \"secret\/bar\/*\" {\n  capabilities = [\"read\"]\n}\n\n# Permit reading everything prefixed with \"zip-\". An attached token could read\n# \"secret\/zip-zap\" or \"secret\/zip-zap\/zong\", but not \"secret\/zip\/zap\"\npath \"secret\/zip-*\" {\n  capabilities = [\"read\"]\n}\n```\n\nIn addition, a `+` can be used to denote any number of characters bounded\nwithin a single path segment (this appeared in Vault 1.1):\n\n```hcl\n# Permit reading the \"teamb\" path under any top-level path under secret\/\npath \"secret\/+\/teamb\" {\n  capabilities = [\"read\"]\n}\n\n# Permit reading secret\/foo\/bar\/teamb, secret\/bar\/foo\/teamb, etc.\npath \"secret\/+\/+\/teamb\" {\n  capabilities = [\"read\"]\n}\n```\n\nVault's architecture is similar to a filesystem. Every action in Vault has a\ncorresponding path and capability - even Vault's internal core configuration\nendpoints live under the `\"sys\/\"` path. Policies define access to these paths and\ncapabilities, which controls a token's access to credentials in Vault.\n\n## Priority matching\n\n~> **Note:** The policy rules that Vault applies are determined by the most-specific match\navailable, using the priority rules described below. This may be an exact match\nor the longest-prefix match of a glob. If the same pattern appears in multiple\npolicies, we take the union of the capabilities. If different patterns appear in\nthe applicable policies, we take only the highest-priority match from those\npolicies.\n\nThis means if you define a policy for `\"secret\/foo*\"`, the policy would\nalso match `\"secret\/foobar\"`. 
Specifically, when there are potentially multiple\nmatching policy paths, `P1` and `P2`, the following matching criteria are applied:\n\n1. If the first wildcard (`+`) or glob (`*`) occurs earlier in `P1`, `P1` is lower priority\n1. If `P1` ends in `*` and `P2` doesn't, `P1` is lower priority\n1. If `P1` has more `+` (wildcard) segments, `P1` is lower priority\n1. If `P1` is shorter, it is lower priority\n1. If `P1` is smaller lexicographically, it is lower priority\n\nFor example, given the two paths, `\"secret\/*\"` and `\"secret\/+\/+\/foo\/*\"`, the first\nwildcard appears in the same place, both end in `*`, and the latter has two wildcard\nsegments while the former has zero. So we end at rule (3), and give `\"secret\/+\/+\/foo\/*\"`\n_lower_ priority.\n\nAnother example utilizes Vault [namespaces](\/vault\/docs\/enterprise\/namespaces): given [nested](\/vault\/tutorials\/enterprise\/namespace-structure) namespaces `ns1\/ns2\/ns3` and two paths,\n`\"secret\/*\"` and `\"ns1\/ns2\/ns3\/secret\/apps\/*\"`, where `secret` is a mountpoint in namespace `ns3`. The first path is\ndefined in a policy inside\/relative to namespace `ns3`, while the second path is defined in a policy in the `root` namespace.\nBoth paths end in `*`, but the first is shorter. So we end at rule (4), and give `\"secret\/*\"` _lower_ priority.\n\n!> **Informational:** The glob character referred to in this documentation is the asterisk (`*`).\nIt _is not a regular expression_ and is only supported **as the last character of the path**!\n\nWhen providing `list` capability, it is important to note that since listing\nalways operates on a prefix, policies must operate on a prefix, because Vault\nwill sanitize request paths to be prefixes.\n\n### Capabilities\n\nEach path must define one or more capabilities which provide fine-grained\ncontrol over permitted (or denied) operations. 
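The five priority rules above can also be read as a straightforward comparison function. The following is an illustrative sketch of the documented ordering only, not Vault's actual implementation; the function name `lowerPriority` and its helper are our own:

```go
package main

import (
	"fmt"
	"strings"
)

// lowerPriority reports whether policy path p1 is lower priority than p2
// under the five ordering rules described above. Illustrative sketch only;
// Vault's real matcher is not implemented this way.
func lowerPriority(p1, p2 string) bool {
	// Position of the first wildcard (+) or glob (*); paths without either
	// are exact and sort as if the wildcard never occurs.
	firstWild := func(p string) int {
		if i := strings.IndexAny(p, "+*"); i >= 0 {
			return i
		}
		return 1 << 30
	}
	if firstWild(p1) != firstWild(p2) {
		// Rule 1: an earlier first wildcard/glob means lower priority.
		return firstWild(p1) < firstWild(p2)
	}
	g1, g2 := strings.HasSuffix(p1, "*"), strings.HasSuffix(p2, "*")
	if g1 != g2 {
		// Rule 2: ending in "*" means lower priority.
		return g1
	}
	if c1, c2 := strings.Count(p1, "+"), strings.Count(p2, "+"); c1 != c2 {
		// Rule 3: more "+" segments means lower priority.
		return c1 > c2
	}
	if len(p1) != len(p2) {
		// Rule 4: a shorter path is lower priority.
		return len(p1) < len(p2)
	}
	// Rule 5: the lexicographically smaller path is lower priority.
	return p1 < p2
}

func main() {
	// The worked example from the text: rules 1 and 2 tie, and rule 3 gives
	// "secret/+/+/foo/*" lower priority than "secret/*".
	fmt.Println(lowerPriority("secret/+/+/foo/*", "secret/*")) // prints "true"
}
```

Running the sketch on the worked example above agrees with the text: `secret/+/+/foo/*` loses to `secret/*` at rule (3).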
As shown in the examples above,\ncapabilities are always specified as a list of strings, even if there is only\none capability.\n\nTo determine the capabilities needed to perform a specific operation, the `-output-policy` flag can be added to the CLI subcommand. For an example, refer to the [Print Policy Requirements](\/vault\/docs\/commands#print-policy-requirements) document section.\n\nThe list of capabilities includes the following:\n\n- `create` (`POST\/PUT`) - Allows creating data at the given path. Very few\n  parts of Vault distinguish between `create` and `update`, so most operations\n  require both `create` and `update` capabilities. Parts of Vault that\n  provide such a distinction are noted in documentation.\n\n- `read` (`GET`) - Allows reading the data at the given path.\n\n- `update` (`POST\/PUT`) - Allows changing the data at the given path. In most\n  parts of Vault, this implicitly includes the ability to create the initial\n  value at the path.\n\n- `patch` (`PATCH`) - Allows partial updates to the data at a given path.\n\n- `delete` (`DELETE`) - Allows deleting the data at the given path.\n\n- `list` (`LIST`) - Allows listing values at the given path. Note that the\n  keys returned by a `list` operation are _not_ filtered by policies. Do not\n  encode sensitive information in key names. Not all backends support listing.\n\nIn the list above, the associated HTTP verbs are shown in parentheses next to\nthe capability. When authoring policy, it is usually helpful to look at the HTTP\nAPI documentation for the paths and HTTP verbs and map them back onto\ncapabilities. While the mapping is not strictly 1:1, they are often very\nsimilar.\n\nIn addition to the standard set, there are some capabilities that do not map to\nHTTP verbs.\n\n- `sudo` - Allows access to paths that are _root-protected_. 
Tokens are not\n  permitted to interact with these paths unless they have the `sudo`\n  capability (in addition to the other necessary capabilities for performing\n  an operation against that path, such as `read` or `delete`).\n\n  For example, modifying the audit log backends requires a token with `sudo`\n  privileges.\n\n- `deny` - Disallows access. This always takes precedence regardless of any\n  other defined capabilities, including `sudo`.\n\n- `subscribe` - Allows subscribing to [events](\/vault\/docs\/concepts\/events)\n  for the given path.\n\n~> **Note:** Capabilities usually map to the HTTP verb, and not the underlying\naction taken. This can be a common source of confusion. Generating database\ncredentials _creates_ database credentials, but the HTTP request is a GET which\ncorresponds to a `read` capability. Thus, to grant access to generate database\ncredentials, the policy would grant `read` access on the appropriate path.\n\n## Templated policies\n\nThe policy syntax allows for doing variable replacement in some policy strings\nwith values available to the token. 
Currently `identity` information can be\ninjected, and currently the `path` keys in policies allow injection.\n\n### Parameters\n\n| Name                                                                             | Description                                                                           |\n| :------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------ |\n| `identity.entity.id`                                                             | The entity's ID                                                                       |\n| `identity.entity.name`                                                           | The entity's name                                                                     |\n| `identity.entity.metadata.<metadata key>`                                        | Metadata associated with the entity for the given key                                 |\n| `identity.entity.aliases.<mount accessor>.id`                                    | Entity alias ID for the given mount                                                   |\n| `identity.entity.aliases.<mount accessor>.name`                                  | Entity alias name for the given mount                                                 |\n| `identity.entity.aliases.<mount accessor>.metadata.<metadata key>`               | Metadata associated with the alias for the given mount and metadata key               |\n| `identity.entity.aliases.<mount accessor>.custom_metadata.<custom_metadata key>` | Custom metadata associated with the alias for the given mount and custom metadata key |\n| `identity.groups.ids.<group id>.name`                                            | The group name for the given group ID                                                 |\n| `identity.groups.names.<group name>.id`                                          | The group ID for the given group name       
                                           |\n| `identity.groups.ids.<group id>.metadata.<metadata key>`                         | Metadata associated with the group for the given key                                  |\n| `identity.groups.names.<group name>.metadata.<metadata key>`                     | Metadata associated with the group for the given key                                  |\n\n### Examples\n\nThe following policy dedicates a section of the KVv2 secret engine to a specific user:\n\n```hcl\npath \"secret\/data\/{{identity.entity.id}}\/*\" {\n  capabilities = [\"create\", \"update\", \"patch\", \"read\", \"delete\"]\n}\n\npath \"secret\/metadata\/{{identity.entity.id}}\/*\" {\n  capabilities = [\"list\"]\n}\n```\n\nIf you want to create a shared section of KV that is associated with entities that are in a\ngroup:\n\n```hcl\n# In the example below, the group ID maps a group and the path\npath \"secret\/data\/groups\/{{identity.groups.ids.<group id>.name}}\/*\" {\n  capabilities = [\"create\", \"update\", \"patch\", \"read\", \"delete\"]\n}\n\npath \"secret\/metadata\/groups\/{{identity.groups.ids.<group id>.name}}\/*\" {\n  capabilities = [\"list\"]\n}\n```\n\n~> **Note:** When developing templated policies, use IDs wherever possible. Each ID is\nunique to the user, whereas names can change over time and can be reused. 
This\nensures that if a given user or group name is changed, the policy will be\nmapped to the intended entity or group.\n\nIf you want to use the metadata associated with an authentication plugin in your\ntemplates, you will need to get its _mount accessor_ and access it via the\n`aliases` key.\n\nYou can get the mount accessor value using the following command:\n\n```shell-session\n$ vault auth list\nPath           Type          Accessor                    Description\n----           ----          --------                    -----------\nkubernetes\/    kubernetes    auth_kubernetes_xxxx        n\/a\ntoken\/         token         auth_token_yyyy             token based credentials\n```\n\nThe following templated policy allows reading the path associated with the\nKubernetes service account namespace of the identity:\n\n```hcl\npath \"secret\/data\/{{identity.entity.aliases.auth_kubernetes_xxxx.metadata.service_account_namespace}}\/*\" {\n  capabilities = [\"read\"]\n}\n```\n\n## Fine-grained control\n\nIn addition to the standard set of capabilities, Vault offers finer-grained\ncontrol over permissions at a given path. 
The capabilities associated with a\npath take precedence over permissions on parameters.\n\n### Parameter constraints\n\n!> **Note:** The use of [globs](\/vault\/docs\/concepts\/policies#policy-syntax) (`*`) may result in [surprising or unexpected behavior](#parameter-constraints-limitations).\n\n~> **Note:** The `allowed_parameters`, `denied_parameters`, and `required_parameters` fields are not supported for policies used with the [version 2 kv secrets engine](\/vault\/docs\/secrets\/kv\/kv-v2).\n\nSee the [API Specification](\/vault\/api-docs\/secret\/kv\/kv-v2) for more information.\n\nPolicies can take into account HTTP request parameters to further\nconstrain requests, using the following options:\n\n- `required_parameters` - A list of parameters that must be specified.\n\n  ```hcl\n  # This requires the user to create \"secret\/profile\" with a parameter\/key named\n  # \"name\" and \"id\" where kv v1 is enabled at \"secret\/\".\n  path \"secret\/profile\" {\n    capabilities = [\"create\"]\n    required_parameters = [\"name\", \"id\"]\n  }\n  ```\n\n- `allowed_parameters` - A list of keys and values that are\n  permitted on the given path.\n\n  - Setting a parameter with a value of the empty list allows the parameter to\n    contain any value.\n\n    ```hcl\n    # This allows the user to update the password parameter value set on any\n    # users configured for userpass auth method. The password value can be\n    # anything. 
However, the user cannot update other parameter values such as\n    # token_ttl.\n    path \"auth\/userpass\/users\/*\" {\n      capabilities = [\"update\"]\n      allowed_parameters = {\n        \"password\" = []\n      }\n    }\n    ```\n\n    -> **Usage example:** The [ACL Policy Path\n    Templating](\/vault\/tutorials\/policies\/policy-templating)\n    tutorial demonstrates the use of `allowed_parameters` to permit a user to\n    update the user's password when using the [userpass auth\n    method](\/vault\/docs\/auth\/userpass) to log in with Vault.\n\n  - Setting a parameter with a value of a populated list allows the parameter\n    to contain only those values.\n\n    ```hcl\n    # This allows the user to create or update an encryption key for transit\n    # secrets engine enabled at \"transit\/\". When you do, you can set the\n    # \"auto_rotate_period\" parameter value so that the key gets rotated.\n    # However, the rotation period must be \"8h\", \"24h\", or \"5d\". Any other value\n    # will result in an error.\n    path \"transit\/keys\/*\" {\n      capabilities = [\"create\", \"update\"]\n      allowed_parameters = {\n        \"auto_rotate_period\" = [\"8h\", \"24h\", \"5d\"]\n      }\n    }\n    ```\n\n  - If any keys are specified, all non-specified parameters will be denied\n    unless the parameter `\"*\"` is set to an empty array, which will\n    allow all other parameters to be modified. Parameters with specific values\n    will still be restricted to those values.\n\n    ```hcl\n    # When kv v1 secrets engine is enabled at \"secret\/\", this allows the user to\n    # create \"secret\/foo\" with a parameter named \"bar\". 
The parameter \"bar\" can\n    # only contain the values \"zip\" or \"zap\", but any other parameters may be\n    # created with any value.\n    path \"secret\/foo\" {\n      capabilities = [\"create\"]\n      allowed_parameters = {\n        \"bar\" = [\"zip\", \"zap\"]\n        \"*\"   = []\n      }\n    }\n    ```\n\n- `denied_parameters` - A list of keys and values that are not permitted on the given\n  path. Any values specified here take precedence over `allowed_parameters`.\n\n  - Setting a parameter with a value of the empty list denies any changes to\n    that parameter.\n\n    ```hcl\n    # This allows the user to update the userpass auth method's user\n    # configurations (e.g., \"password\") but cannot update the \"token_policies\"\n    # and \"policies\" parameter values.\n    path \"auth\/userpass\/users\/*\" {\n      capabilities = [\"update\"]\n      denied_parameters = {\n        \"token_policies\" = []\n        \"policies\" = []\n      }\n    }\n    ```\n\n  - Setting a parameter with a value of a populated list denies any parameter\n    containing those values.\n\n    ```hcl\n    # This allows the user to create or update token roles. However, the\n    # \"allowed_policies\" parameter value cannot be \"admin\", but the user can\n    # assign any other policies to the parameter.\n    path \"auth\/token\/roles\/*\" {\n      capabilities = [\"create\", \"update\"]\n      denied_parameters = {\n        \"allowed_policies\" = [\"admin\"]\n      }\n    }\n    ```\n\n  - Setting to `\"*\"` will deny any parameter.\n\n    ```hcl\n    # This allows the user to create or update an encryption key for transit\n    # secrets engine enabled at \"transit\/\". However, the user cannot set any of\n    # the configuration parameters. 
As a result, the created key will have all\n    # parameters set to default values.\n    path \"transit\/keys\/*\" {\n      capabilities = [\"create\", \"update\"]\n      denied_parameters = {\n        \"*\" = []\n      }\n    }\n    ```\n\n  - If any parameters are specified, all non-specified parameters are allowed,\n    unless `allowed_parameters` is also set, in which case normal rules apply.\n\nParameter values also support prefix\/suffix globbing. Globbing is enabled by\nprepending or appending a splat (`*`) to the value:\n\n```hcl\n# Only allow a parameter named \"bar\" with a value starting with \"foo-*\".\npath \"secret\/foo\" {\n  capabilities = [\"create\"]\n  allowed_parameters = {\n    \"bar\" = [\"foo-*\"]\n  }\n}\n```\n\n~> **Note:** The only value that can be used with the `*` parameter is `[]`.\n\n#### Parameter constraints limitations\n\n##### Default values\n\nEvaluation of policies with `allowed_parameters`, `denied_parameters`, and `required_parameters` happens\nwithout consideration of parameters' default values.\n\nGiven the following policy:\n\n```hcl\n# The \"no_store\" parameter cannot be false\npath \"secret\/foo\" {\n  capabilities = [\"create\"]\n  denied_parameters = {\n    \"no_store\" = [false, \"false\"]\n  }\n}\n```\n\nThe following operation will error, because \"no_store\" is set to false:\n\n```shell-session\n$ vault write secret\/foo no_store=false value=bar\n```\n\nWhereas the following operation will succeed, even though the \"no_store\"\nparameter must be a boolean and defaults to false:\n\n```shell-session\n# Succeeds because \"no_store=false\" isn't present in the parameters\n$ vault write secret\/foo value=bar\n```\n\nThis is because the policy evaluator does not know what the default value is for\nthe \"no_store\" parameter. 
All it sees is that the denied parameter isn't present\nin the command.\n\nThis can be resolved by requiring the \"no_store\" parameter in your policy:\n\n```hcl\npath \"secret\/foo\" {\n  capabilities = [\"create\"]\n  required_parameters = [\"no_store\"]\n  denied_parameters = {\n    \"no_store\" = [false, \"false\"]\n  }\n}\n```\n\nThe following command, which previously succeeded, will now fail under the new policy\nbecause there is no \"no_store\" parameter:\n\n```shell-session\n$ vault write secret\/foo value=bar\n```\n\n##### Globbing\n\nIt's also important to note that the use of globbing may result in surprising\nor unexpected behavior:\n\n```hcl\n# This allows the user to create, update, or patch \"secret\/foo\" with a parameter\n# named \"bar\". The values passed to parameter \"bar\" must start with \"baz\/\",\n# so values like \"baz\/quux\" are fine. However, values like\n# \"baz\/quux,wibble,wobble,wubble\" would also be accepted. The API that\n# underlies \"secret\/foo\" might allow comma-delimited values for the \"bar\"\n# parameter, and if it did, specifying a value like\n# \"baz\/quux,wibble,wobble,wubble\" would result in 4 different values getting\n# passed along. Seeing values like \"wibble\" or \"wobble\" getting passed to\n# \"secret\/foo\" might surprise someone who expected the allowed_parameters\n# constraint to only allow values starting with \"baz\/\".\npath \"secret\/foo\" {\n  capabilities = [\"create\", \"update\", \"patch\"]\n  allowed_parameters = {\n    \"bar\" = [\"baz\/*\"]\n  }\n}\n```\n\n### Required response wrapping TTLs\n\nThese parameters can be used to set minimums\/maximums on TTLs set by clients\nwhen requesting that a response be\n[wrapped](\/vault\/docs\/concepts\/response-wrapping), with a granularity of a\nsecond. 
These use [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `min_wrapping_ttl` - The minimum allowed TTL that clients can specify for a\n  wrapped response. In practice, setting a minimum TTL of one second\n  effectively makes response wrapping mandatory for a particular path. It can\n  also be used to ensure that the TTL is not too low, leading to end targets\n  being unable to unwrap before the token expires.\n\n- `max_wrapping_ttl` - The maximum allowed TTL that clients can specify for a\n  wrapped response.\n\n```hcl\n# This effectively makes response wrapping mandatory for this path by setting min_wrapping_ttl to 1 second.\n# This also sets this path's wrapped response maximum allowed TTL to 90 seconds.\npath \"auth\/approle\/role\/my-role\/secret-id\" {\n    capabilities = [\"create\", \"update\"]\n    min_wrapping_ttl = \"1s\"\n    max_wrapping_ttl = \"90s\"\n}\n```\n\nIf both are specified, the minimum value must be less than the maximum. In\naddition, if paths are merged from different stanzas, the lowest value\nspecified for each is the value that will result, in line with the idea of\nkeeping token lifetimes as short as possible.\n\n## Built-in policies\n\nVault has two built-in policies: `default` and `root`. This section describes\nthe two built-in policies.\n\n### Default policy\n\nThe `default` policy is a built-in Vault policy that cannot be removed. By\ndefault, it is attached to all tokens, but may be explicitly excluded at token\ncreation time by supporting authentication methods.\n\nThe policy contains basic functionality such as the ability for the token to\nlook up data about itself and to use its cubbyhole data. However, Vault is not\nprescriptive about its contents. It can be modified to suit your needs; Vault\nwill never overwrite your modifications. 
If you want to stay up-to-date with\nthe latest upstream version of the `default` policy, simply read the contents\nof the policy from an up-to-date `dev` server, and write those contents into\nyour Vault's `default` policy.\n\nTo view all permissions granted by the default policy on your Vault\ninstallation, run:\n\n```shell-session\n$ vault read sys\/policy\/default\n```\n\nTo disable attachment of the default policy:\n\n```shell-session\n$ vault token create -no-default-policy\n```\n\nor via the API:\n\n```shell-session\n$ curl \\\n  --request POST \\\n  --header \"X-Vault-Token: ...\" \\\n  --data '{\"no_default_policy\": \"true\"}' \\\n  https:\/\/vault.hashicorp.rocks\/v1\/auth\/token\/create\n```\n\n### Root policy\n\nThe `root` policy is a built-in Vault policy that cannot be modified or removed.\nAny user associated with this policy becomes a root user. A root user can do\n_anything_ within Vault. As such, it is **highly recommended** that you revoke\nany root tokens before running Vault in production.\n\nWhen a Vault server is first initialized, there always exists one root user.\nThis user is used to do the initial configuration and setup of Vault. Once\nconfigured, the initial root token should be revoked and more strictly\ncontrolled users and authentication should be used.\n\nTo revoke a root token, run:\n\n```shell-session\n$ vault token revoke \"<token>\"\n```\n\nor via the API:\n\n```shell-session\n$ curl \\\n  --request POST \\\n  --header \"X-Vault-Token: ...\" \\\n  --data '{\"token\": \"<token>\"}' \\\n  https:\/\/vault.hashicorp.rocks\/v1\/auth\/token\/revoke\n```\n\nFor more information, please read:\n\n- [Production Hardening](\/vault\/tutorials\/operations\/production-hardening)\n- [Generating a Root Token](\/vault\/tutorials\/operations\/generate-root)\n\n## Managing policies\n\nPolicies are authored (written) in your editor of choice. They can be authored\nin HCL or JSON, and the syntax is described in detail above. 
Once saved,\npolicies must be uploaded to Vault before they can be used.\n\n### Listing policies\n\nTo list all registered policies in Vault:\n\n```shell-session\n$ vault read sys\/policy\n```\n\nor via the API:\n\n```shell-session\n$ curl \\\n  --header \"X-Vault-Token: ...\" \\\n  https:\/\/vault.hashicorp.rocks\/v1\/sys\/policy\n```\n\n~> **Note:** You may also see the CLI command `vault policies`. This is a convenience\nwrapper around reading the sys endpoint directly. It provides the same\nfunctionality but formats the output in a special manner.\n\n### Creating policies\n\nPolicies may be created (uploaded) via the CLI or via the API. To create a new\npolicy in Vault:\n\n```shell-session\n$ vault policy write policy-name policy-file.hcl\n```\n\nor via the API:\n\n```shell-session\n$ curl \\\n  --request POST \\\n  --header \"X-Vault-Token: ...\" \\\n  --data '{\"policy\":\"path \\\"...\\\" {...} \"}' \\\n  https:\/\/vault.hashicorp.rocks\/v1\/sys\/policy\/policy-name\n```\n\nIn both examples, the name of the policy is \"policy-name\". You can think of this\nname as a pointer or symlink to the policy ACLs. Policies are attached to tokens\nby name, and each name is then mapped to the set of rules corresponding to it.\n\n### Updating policies\n\nExisting policies may be updated to change permissions via the CLI or via the\nAPI. To update an existing policy in Vault, follow the same steps as creating a\npolicy, but use an existing policy name:\n\n```shell-session\n$ vault write sys\/policy\/my-existing-policy policy=@updated-policy.json\n```\n\nor via the API:\n\n```shell-session\n$ curl \\\n  --request POST \\\n  --header \"X-Vault-Token: ...\" \\\n  --data '{\"policy\":\"path \\\"...\\\" {...} \"}' \\\n  https:\/\/vault.hashicorp.rocks\/v1\/sys\/policy\/my-existing-policy\n```\n\n### Deleting policies\n\nExisting policies may be deleted via the CLI or API. 
To delete a policy:\n\n```shell-session\n$ vault delete sys\/policy\/policy-name\n```\n\nor via the API:\n\n```shell-session\n$ curl \\\n  --request DELETE \\\n  --header \"X-Vault-Token: ...\" \\\n  https:\/\/vault.hashicorp.rocks\/v1\/sys\/policy\/policy-name\n```\n\nThis is an idempotent operation. Vault will not return an error when deleting a\npolicy that does not exist.\n\n## Associating policies\n\nVault can automatically associate a set of policies with a token based on an\nauthorization. This configuration varies significantly between authentication\nbackends. For simplicity, this example will use Vault's built-in userpass\nauth method.\n\nA Vault administrator or someone from the security team would create the user in\nVault with a list of associated policies:\n\n```shell-session\n$ vault write auth\/userpass\/users\/sethvargo \\\n    password=\"s3cr3t!\" \\\n    policies=\"dev-readonly,logs\"\n```\n\nThis creates an authentication mapping to the policy such that, when the user\nauthenticates successfully to Vault, they will be given a token which has the list\nof policies attached.\n\nThe user wishing to authenticate would run:\n\n```shell-session\n$ vault login -method=\"userpass\" username=\"sethvargo\"\nPassword (will be hidden): ...\n```\n\nIf the provided information is correct, Vault will generate a token, assign the\nlist of configured policies to the token, and return that token to the\nauthenticated user.\n\n## Root protected API endpoints\n\n~> **Note:** Vault treats the HTTP POST and PUT verbs as equivalent, so for each mention\nof POST in the table below, PUT may also be used. 
Vault uses the non-standard LIST HTTP\nverb, but also allows list requests to be made using the GET verb along with `?list=true`\nas a query parameter, so for each mention of LIST in the table below, GET with `?list=true`\nmay also be used.\n\nThe following paths require a root token or `sudo` capability in the policy:\n\n| Path                                                                                                                                                   | HTTP verb         | Description                                                                                                         |\n| ------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------- | ------------------------------------------------------------------------------------------------------------------- |\n| [auth\/token\/accessors](\/vault\/api-docs\/auth\/token#list-accessors)                                                                                      | LIST              | List token accessors for all current Vault service tokens                                                           |\n| [auth\/token\/create](\/vault\/api-docs\/auth\/token#create-token)                                                                                           | POST              | Create a periodic token or an orphan token (using the `period` or `no_parent` option)                               |\n| [auth\/token\/revoke-orphan](\/vault\/api-docs\/auth\/token#revoke-token-and-orphan-children)                                                                | POST              | Revoke a token but not its child tokens, which will be orphaned                                                     |\n| [pki\/root](\/vault\/api-docs\/secret\/pki#delete-all-issuers-and-keys)                                                                                     | DELETE            | Delete 
the current CA key ([pki secrets engine](\/vault\/docs\/secrets\/pki))                                           |\n| [pki\/root\/sign-self-issued](\/vault\/api-docs\/secret\/pki#sign-self-issued)                                                                               | POST              | Use the configured CA certificate to sign a self-issued certificate ([pki secrets engine](\/vault\/docs\/secrets\/pki)) |\n| [sys\/audit](\/vault\/api-docs\/system\/audit)                                                                                                              | GET               | List enabled audit devices                                                                                          |\n| [sys\/audit\/:path](\/vault\/api-docs\/system\/audit)                                                                                                        | POST, DELETE      | Enable or remove an audit device                                                                                    |\n| [sys\/auth\/:path](\/vault\/api-docs\/system\/auth)                                                                                                          | GET, POST, DELETE | Manage the auth methods (enable, read, and delete)                                                                  |\n| [sys\/auth\/:path\/tune](\/vault\/api-docs\/system\/auth#tune-auth-method)                                                                                    | GET, POST         | Manage the auth methods (enable, read, delete, and tune)                                                            |\n| [sys\/config\/auditing\/request-headers](\/vault\/api-docs\/system\/config-auditing)                                                                          | GET               | List the request headers that are configured to be audited                                                          |\n| 
[sys\/config\/auditing\/request-headers\/:name](\/vault\/api-docs\/system\/config-auditing)                                                                    | GET, POST, DELETE | Manage the auditing headers (create, update, read and delete)                                                       |\n| [sys\/config\/cors](\/vault\/api-docs\/system\/config-cors)                                                                                                  | GET, POST, DELETE | Configure CORS setting                                                                                              |\n| [sys\/config\/ui\/headers](\/vault\/api-docs\/system\/config-ui)                                                                                              | GET, LIST         | Configure the UI settings                                                                                           |\n| [sys\/config\/ui\/headers\/:name](\/vault\/api-docs\/system\/config-ui#name)                                                                                   | POST, DELETE      | Configure custom HTTP headers to be served with the UI                                                              |\n| [sys\/internal\/inspect\/router\/:tag](\/vault\/api-docs\/system\/inspect\/router)                                                                              | GET               | Inspect the internal components of Vault's router. 
`tag` must be one of root, uuid, accessor, or storage            |\n| [sys\/leases\/lookup\/:prefix](\/vault\/api-docs\/system\/leases#list-leases)                                                                                 | LIST              | List lease IDs                                                                                                      |\n| [sys\/leases\/revoke-force\/:prefix](\/vault\/api-docs\/system\/leases#revoke-force)                                                                          | POST              | Revoke all secrets or tokens ignoring backend errors                                                                |\n| [sys\/leases\/revoke-prefix\/:prefix](\/vault\/api-docs\/system\/leases#revoke-prefix)                                                                        | POST              | Revoke all secrets generated under a given prefix                                                                   |\n| [sys\/plugins\/catalog\/:type\/:name](\/vault\/api-docs\/system\/plugins-catalog#register-plugin)                                                              | GET, POST, DELETE | Register a new plugin, or read\/remove an existing plugin                                                            |\n| [sys\/raw:path](\/vault\/api-docs\/system\/raw)                                                                                                             | GET, POST, DELETE | Used to access the raw underlying store in Vault                                                                    |\n| [sys\/raw:prefix](\/vault\/api-docs\/system\/raw#list-raw)                                                                                                  | GET, LIST         | Returns a list of keys for a given path prefix                                                                      |\n| [sys\/remount](\/vault\/api-docs\/system\/remount)                                                                            
                              | POST              | Moves an already-mounted backend to a new mount point                                                               |\n| [sys\/replication\/reindex](\/vault\/api-docs\/system\/replication#reindex-replication)                                                                      | POST              | Reindex the local data storage                                                                                      |\n| [sys\/replication\/performance\/primary\/secondary-token](\/vault\/api-docs\/system\/replication\/replication-performance#generate-performance-secondary-token) | POST              | Generate a performance secondary activation token                                                                   |\n| [sys\/replication\/dr\/primary\/secondary-token](\/vault\/api-docs\/system\/replication\/replication-dr#generate-dr-secondary-token)                            | POST              | Generate a DR secondary activation token                                                                            |\n| [sys\/rotate](\/vault\/api-docs\/system\/rotate)                                                                                                            | POST              | Trigger a rotation of the backend encryption key                                                                    |\n| [sys\/seal](\/vault\/api-docs\/system\/seal)                                                                                                                | POST              | Seals the Vault                                                                                                     |\n| [sys\/step-down](\/vault\/api-docs\/system\/step-down)                                                                                                      | POST              | Forces a node to give up active status                                                                              |\n| 
[sys\/storage\/raft\/snapshot-auto\/config](\/vault\/api-docs\/system\/storage\/raftautosnapshots#list-automated-snapshots-configs)                             | LIST              | Lists named configurations                                                                                          |\n| [sys\/storage\/raft\/snapshot-auto\/config\/:name](\/vault\/api-docs\/system\/storage\/raftautosnapshots)                                                        | GET, POST, DELETE | Create, read, update, or delete a named configuration                                                               |\n\n### Tokens\n\nTokens have two sets of policies: identity policies, which are computed\nbased on the entity and its groups, and token policies, which are either defined\nbased on the login method or, in the case of tokens created explicitly via the\nAPI, are an input to the token creation. What follows concerns token policies\nexclusively: a token's identity policies cannot be controlled except by modifying\nthe underlying entities, groups, and group memberships.\n\nTokens are associated with their policies at creation time. For example:\n\n```shell-session\n$ vault token create -policy=dev-readonly -policy=logs\n```\n\nNormally the only policies that may be specified are those which are present\nin the current token's (i.e. the new token's parent's) token policies.\nHowever, root users can assign any policies.\n\nThere is no way to modify the policies associated with a token once the token\nhas been issued. 
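The delegation rule above can be sketched as follows. This is an illustrative sketch, not Vault's actual implementation, and `may_assign` is a hypothetical helper name:

```python
# Sketch of the token-policy subset rule: a non-root caller may only
# assign policies drawn from its own token policies, while a root
# caller may assign anything. (`may_assign` is hypothetical.)

def may_assign(parent_policies, requested_policies, parent_is_root=False):
    """Return True if a child token may be created with `requested_policies`."""
    if parent_is_root:
        # Root users can assign any policies.
        return True
    # Otherwise the requested policies must be a subset of the
    # parent token's policies.
    return set(requested_policies) <= set(parent_policies)

# A token holding dev-readonly and logs can delegate those policies...
print(may_assign(["dev-readonly", "logs"], ["logs"]))   # True
# ...but cannot grant a policy it does not itself hold.
print(may_assign(["dev-readonly", "logs"], ["admin"]))  # False
```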
The token must be revoked and a new one acquired to receive a\nnew set of policies.\n\nHowever, the _contents_ of policies are parsed in real-time whenever the token is used.\nAs a result, if a policy is modified, the modified rules will be in force the\nnext time a token with that policy attached is used to make a call to Vault.\n\n## UI policy requirements\n\n@include 'ui\/policy-requirements.mdx'\n\n\n## Tutorial\n\nRefer to the following tutorials for further learning:\n\n- [Vault Policies](\/vault\/tutorials\/policies\/policies)\n- [ACL Policy Path Templating](\/vault\/tutorials\/policies\/policy-templating)
whether to connect using TLS     It is important to note that Vault does not store a copy of the LDAP database      Vault will delegate the authentication to the auth method   1  The security team authors a policy  or uses an existing policy  which grants    access to paths in Vault  Policies are written in HCL in your editor of    preference and saved to disk   1  The policy s contents are uploaded and stored in Vault and referenced by name     You can think of the policy s name as a pointer or symlink to its set of rules   1  Most importantly  the security team maps data in the auth method to a policy     For example  the security team might create mappings like        Members of the OU group  dev  map to the Vault policy named  readonly dev       or       Members of the OU group  ops  map to the Vault policies  admin  and  auditor    Now Vault has an internal mapping between a backend authentication system and internal policy  When a user authenticates to Vault  the actual authentication is delegated to the auth method  As a user  the flow looks like      Vault Auth Workflow   img vault auth workflow svg    img vault auth workflow svg   1  A user attempts to authenticate to Vault using their LDAP credentials     providing Vault with their LDAP username and password   1  Vault establishes a connection to LDAP and asks the LDAP server to verify the    given credentials  Assuming this is successful  the LDAP server returns the    information about the user  including the OU groups   1  Vault maps the result from the LDAP server to policies inside Vault using the    mapping configured by the security team in the previous section  Vault then    generates a token and attaches the matching policies   1  Vault returns the token to the user  This token has the correct policies    assigned  as dictated by the mapping configuration that was setup by the    security team in advance   The user then uses this Vault token for future operations  If the user performs the 
authentication steps again  they will get a  new  token  The token will have the same permissions  but the actual token will be different  Authenticating a second time does not invalidate the original token      Policy syntax  Policies are written in  HCL  hcl  or JSON and describe which paths in Vault a user or machine is allowed to access    hcl   https   github com hashicorp hcl  Here is a very simple policy which grants read capabilities to the  KVv1   vault api docs secret kv kv v1  path   secret foo        hcl path  secret foo      capabilities     read          When this policy is assigned to a token  the token can read from   secret foo    However  the token cannot update or delete   secret foo    since the capabilities do not allow it  Because policies are   deny by default    the token would have no other access in Vault   Here is a more detailed policy  and it is documented inline      hcl   This section grants all access on  secret     further restrictions can be   applied to this broad policy  as shown below  path  secret        capabilities     create    read    update    patch    delete    list        Even though we allowed secret    this line explicitly denies   secret super secret  this takes precedence  path  secret super secret      capabilities     deny        Policies can also specify allowed  disallowed  and required parameters  here   the key  secret restricted  can only contain  foo   any value  and  bar   one   of  zip  or  zap    path  secret restricted      capabilities     create     allowed parameters          foo            bar      zip    zap              Policies use path based matching to test the set of capabilities against a request  A policy  path  may specify an exact path to match  or it could specify a glob pattern which instructs Vault to use a prefix match      hcl   Permit reading only  secret foo   an attached token cannot read  secret food    or  secret foo bar   path  secret foo      capabilities     read        Permit 
reading everything under  secret bar   an attached token could read    secret bar zip    secret bar zip zap   but not  secret bars zip   path  secret bar        capabilities     read        Permit reading everything prefixed with  zip    an attached token could read    secret zip zap  or  secret zip zap zong   but not  secret zip zap path  secret zip        capabilities     read          In addition  a     can be used to denote any number of characters bounded within a single path segment  this appeared in Vault 1 1       hcl   Permit reading the  teamb  path under any top level path under secret  path  secret   teamb      capabilities     read        Permit reading secret foo bar teamb  secret bar foo teamb  etc  path  secret     teamb      capabilities     read          Vault s architecture is similar to a filesystem  Every action in Vault has a corresponding path and capability   even Vault s internal core configuration endpoints live under the   sys    path  Policies define access to these paths and capabilities  which controls a token s access to credentials in Vault      Priority matching       Note    The policy rules that Vault applies are determined by the most specific match available  using the priority rules described below  This may be an exact match or the longest prefix match of a glob  If the same pattern appears in multiple policies  we take the union of the capabilities  If different patterns appear in the applicable policies  we take only the highest priority match from those policies   This means if you define a policy for   secret foo     the policy would also match   secret foobar    Specifically  when there are potentially multiple matching policy paths   P1  and  P2   the following matching criteria is applied   1  If the first wildcard       or glob       occurs earlier in  P1    P1  is lower priority 1  If  P1  ends in     and  P2  doesn t   P1  is lower priority 1  If  P1  has more      wildcard  segments   P1  is lower priority 1  If  P1 
 is shorter  it is lower priority 1  If  P1  is smaller lexicographically  it is lower priority  For example  given the two paths    secret     and   secret     foo      the first wildcard appears in the same place  both end in     and the latter has two wildcard segments while the former has zero  So we end at rule  3   and give   secret     foo      lower  priority   Another example utilizing Vault  namespaces   vault docs enterprise namespaces   given  nested   vault tutorials enterprise namespace structure  namespaces  ns1 ns2 ns3  and two paths     secret     and   ns1 ns2 ns3 secret apps     where  secret  is a mountpoint in namespace  ns3   The first path is defined in a policy inside relative to namespace  ns3  while the second path is defined in a policy in the  root  namespace   Both paths end in     but the first is shorter  So we end at rule  4   and give   secret      lower  priority        Informational   The glob character referred to in this documentation is the asterisk        It  is not a regular expression  and is only supported   as the last character of the path     When providing  list  capability  it is important to note that since listing always operates on a prefix  policies must operate on a prefix because Vault will sanitize request paths to be prefixes       Capabilities  Each path must define one or more capabilities which provide fine grained control over permitted  or denied  operations  As shown in the examples above  capabilities are always specified as a list of strings  even if there is only one capability   To determine the capabilities needed to perform a specific operation  the   output policy  flag can be added to the CLI subcommand  For an example  refer to the  Print Policy Requirements   vault docs commands print policy requirements  document section   The list of capabilities include the following      create    POST PUT     Allows creating data at the given path  Very few   parts of Vault distinguish between  create  and  
update   so most operations   require both  create  and  update  capabilities  Parts of Vault that   provide such a distinction are noted in documentation      read    GET     Allows reading the data at the given path      update    POST PUT     Allows changing the data at the given path  In most   parts of Vault  this implicitly includes the ability to create the initial   value at the path      patch    PATCH     Allows partial updates to the data at a given path      delete    DELETE     Allows deleting the data at the given path      list    LIST     Allows listing values at the given path  Note that the   keys returned by a  list  operation are  not  filtered by policies  Do not   encode sensitive information in key names  Not all backends support listing   In the list above  the associated HTTP verbs are shown in parenthesis next to the capability  When authoring policy  it is usually helpful to look at the HTTP API documentation for the paths and HTTP verbs and map them back onto capabilities  While the mapping is not strictly 1 1  they are often very similarly matched   In addition to the standard set  there are some capabilities that do not map to HTTP verbs      sudo    Allows access to paths that are  root protected   Tokens are not   permitted to interact with these paths unless they have the  sudo    capability  in addition to the other necessary capabilities for performing   an operation against that path  such as  read  or  delete       For example  modifying the audit log backends requires a token with  sudo    privileges      deny    Disallows access  This always takes precedence regardless of any   other defined capabilities  including  sudo       subscribe    Allows subscribing to  events   vault docs concepts events    for the given path        Note    Capabilities usually map to the HTTP verb  and not the underlying action taken  This can be a common source of confusion  Generating database credentials  creates  database credentials  but the 
HTTP request is a GET which corresponds to a  read  capability  Thus  to grant access to generate database credentials  the policy would grant  read  access on the appropriate path      Templated policies  The policy syntax allows for doing variable replacement in some policy strings with values available to the token  Currently  identity  information can be injected  and currently the  path  keys in policies allow injection       Parameters    Name                                                                               Description                                                                                                                                                                                                                                                             identity entity id                                                                The entity s ID                                                                            identity entity name                                                              The entity s name                                                                          identity entity metadata  metadata key                                            Metadata associated with the entity for the given key                                      identity entity aliases  mount accessor  id                                       Entity alias ID for the given mount                                                        identity entity aliases  mount accessor  name                                     Entity alias name for the given mount                                                      identity entity aliases  mount accessor  metadata  metadata key                   Metadata associated with the alias for the given mount and metadata key                    identity entity aliases  mount accessor  custom metadata  custom metadata key     Custom metadata associated with the alias for the given mount and custom metadata 
key      identity groups ids  group id  name                                               The group name for the given group ID                                                      identity groups names  group name  id                                             The group ID for the given group name                                                      identity groups ids  group id  metadata  metadata key                             Metadata associated with the group for the given key                                       identity groups names  group name  metadata  metadata key                         Metadata associated with the group for the given key                                         Examples  The following policy creates a section of the KVv2 Secret Engine to a specific user     hcl path  secret data         capabilities     create    update    patch    read    delete      path  secret metadata         capabilities     list          If you wanted to create a shared section of KV that is associated with entities that are in a group      hcl   In the example below  the group ID maps a group and the path path  secret data groups         capabilities     create    update    patch    read    delete      path  secret metadata groups         capabilities     list               Note    When developing templated policies  use IDs wherever possible  Each ID is unique to the user  whereas names can change over time and can be reused  This ensures that if a given user or group name is changed  the policy will be mapped to the intended entity or group   If you want to use the metadata associated with an authentication plugin in your templates  you will need to get its  mount accessor  and access it via the  aliases  key   You can get the mount accessor value using the following command      shellsession    vault auth list Path           Type          Accessor                    Description                                                                      
kubernetes     kubernetes    auth kubernetes xxxx        n a token          token         auth token yyyy             token based credentials      The following templated policy allow to read the path associated with the Kubernetes service account namespace of the identity      hcl path  secret data         capabilities     read             Fine grained control  In addition to the standard set of capabilities  Vault offers finer grained control over permissions at a given path  The capabilities associated with a path take precedence over permissions on parameters       Parameter constraints       Note    The use of  globs   vault docs concepts policies policy syntax        may result in  surprising or unexpected behavior   parameter constraints limitations         Note    The  allowed parameters    denied parameters   and  required parameters  fields are not supported for policies used with the  version 2 kv secrets engine   vault docs secrets kv kv v2    See the  API Specification   vault api docs secret kv kv v2  for more information   Policies can take into account HTTP request parameters to further constrain requests  using the following options      required parameters    A list of parameters that must be specified        hcl     This requires the user to create  secret profile  with a parameter key named      name  and  id  where kv v1 is enabled at  secret      path  secret profile        capabilities     create       required parameters     name    id                 allowed parameters    A list of keys and values that are   permitted on the given path       Setting a parameter with a value of the empty list allows the parameter to     contain any value          hcl       This allows the user to update the password parameter value set on any       users configured for userpass auth method  The password value can be       anything  However  the user cannot update other parameter values such as       token ttl      path  auth userpass users            
    The [ACL Policy Path Templating](/vault/tutorials/policies/policy-templating) tutorial demonstrates the use of `allowed_parameters` to permit a user to update their password when using the [userpass auth method](/vault/docs/auth/userpass) to log in with Vault.

  - Setting a parameter with a value of a populated list allows the parameter to contain only those values.

    ```hcl
    # This allows the user to create or update an encryption key for transit
    # secrets engine enabled at "transit/". When you do, you can set the
    # "auto_rotate_period" parameter value so that the key gets rotated.
    # However, the rotation period must be "8h", "24h", or "5d". Any other
    # value will result in an error.
    path "transit/keys/*" {
      capabilities = ["create", "update"]
      allowed_parameters = {
        "auto_rotate_period" = ["8h", "24h", "5d"]
      }
    }
    ```

  - If any keys are specified, all non-specified parameters will be denied unless the parameter `"*"` is set to an empty array, which will allow all other parameters to be modified. Parameters with specific values will still be restricted to those values.

    ```hcl
    # When kv-v1 secrets engine is enabled at "secret/", this allows the user
    # to create "secret/foo" with a parameter named "bar". The parameter "bar"
    # can only contain the values "zip" or "zap", but any other parameters may
    # be created with any value.
    path "secret/foo" {
      capabilities = ["create"]
      allowed_parameters = {
        "bar" = ["zip", "zap"]
        "*"   = []
      }
    }
    ```

- `denied_parameters` - A list of keys and values that are not permitted on the given path. Any values specified here take precedence over `allowed_parameters`.

  - Setting a parameter with a value of the empty list denies any changes to that parameter.
    ```hcl
    # This allows the user to update the userpass auth method's user
    # configurations (e.g. "password"), but not the "token_policies" and
    # "policies" parameter values.
    path "auth/userpass/users/*" {
      capabilities = ["update"]
      denied_parameters = {
        "token_policies" = []
        "policies"       = []
      }
    }
    ```

  - Setting a parameter with a value of a populated list denies any parameter containing those values.

    ```hcl
    # This allows the user to create or update token roles. However, the
    # "allowed_policies" parameter value cannot be "admin", but the user can
    # assign any other policies to the parameter.
    path "auth/token/roles/*" {
      capabilities = ["create", "update"]
      denied_parameters = {
        "allowed_policies" = ["admin"]
      }
    }
    ```

  - Setting to `"*"` will deny any parameter.

    ```hcl
    # This allows the user to create or update an encryption key for transit
    # secrets engine enabled at "transit/". However, the user cannot set any of
    # the configuration parameters. As a result, the created key will have all
    # parameters set to default values.
    path "transit/keys/*" {
      capabilities = ["create", "update"]
      denied_parameters = {
        "*" = []
      }
    }
    ```

  - If any parameters are specified, all non-specified parameters are allowed, unless `allowed_parameters` is also set, in which case normal rules apply.

Parameter values also support prefix/suffix globbing. Globbing is enabled by prepending or appending a splat (`*`) to the value:

```hcl
# Only allow a parameter named "bar" with a value starting with "foo-*".
path "secret/foo" {
  capabilities = ["create"]
  allowed_parameters = {
    "bar" = ["foo-*"]
  }
}
```

<Note>

The only value that can be used with the `*` parameter is `[]`.

</Note>
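The three constraint fields can appear together in a single path stanza. The following sketch combines them; the `secret/app-config` path and its `env` and `ttl` parameters are hypothetical names invented for this illustration, not from the examples above:

```hcl
# Hypothetical illustration: "secret/app-config" and its parameters are made up
# for this sketch; the constraint semantics follow the rules described above.
path "secret/app-config" {
  capabilities = ["create", "update"]

  # "env" must always be present in the request
  required_parameters = ["env"]

  # "env" may only be "dev" or "staging"; any other parameter is permitted
  # because "*" is set to the empty list
  allowed_parameters = {
    "env" = ["dev", "staging"]
    "*"   = []
  }

  # "ttl" may never be set, regardless of allowed_parameters
  denied_parameters = {
    "ttl" = []
  }
}
```

Because `denied_parameters` takes precedence, a request that sets `ttl` fails even though the `"*" = []` entry in `allowed_parameters` would otherwise allow it.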
### Parameter constraints limitations

#### Default values

Evaluation of policies with `allowed_parameters`, `denied_parameters`, and `required_parameters` happens without consideration of parameters' default values. Given the following policy:

```hcl
# The "no_store" parameter cannot be false
path "secret/foo" {
  capabilities = ["create"]
  denied_parameters = {
    "no_store" = [false, "false"]
  }
}
```

The following operation will error, because `no_store` is set to false:

```shell-session
$ vault write secret/foo no_store=false value=bar
```

Whereas the following operation will succeed, even though the `no_store` parameter must be a boolean and it defaults to false:

```shell-session
$ # Succeeds because "no_store=false" isn't present in the parameters
$ vault write secret/foo value=bar
```

This is because the policy evaluator does not know what the default value of the `no_store` parameter is. All it sees is that the denied parameter isn't present in the command.

This can be resolved by requiring the `no_store` parameter in your policy:

```hcl
path "secret/foo" {
  capabilities        = ["create"]
  required_parameters = ["no_store"]
  denied_parameters = {
    "no_store" = [false, "false"]
  }
}
```

The following command, which previously succeeded, will now fail under the new policy because there is no `no_store` parameter:

```shell-session
$ vault write secret/foo value=bar
```

#### Globbing

It's also important to note that the use of globbing may result in surprising or unexpected behavior:

```hcl
# This allows the user to create, update, or patch "secret/foo" with a
# parameter named "bar". The values passed to parameter "bar" must start with
# "baz", so values like "baz-quux" are fine. However, a value like
# "baz-quux,wibble,wobble,wubble" would also be accepted. The API that
# underlies "secret/foo" might allow comma-delimited values for the "bar"
# parameter, and if it did, specifying a value like
# "baz-quux,wibble,wobble,wubble" would result in 4 different values getting
# passed along. Seeing values like "wibble" or "wobble" getting passed to
# "secret/foo" might surprise someone who expected the allowed_parameters
# constraint to only allow values starting with "baz".
path "secret/foo" {
  capabilities = ["create", "update", "patch"]
  allowed_parameters = {
    "bar" = ["baz*"]
  }
}
```
### Required response wrapping TTLs

These parameters can be used to set minimums/maximums on TTLs set by clients when requesting that a response be [wrapped](/vault/docs/concepts/response-wrapping), with a granularity of a second. These use [duration format strings](/vault/docs/concepts/duration-format).

- `min_wrapping_ttl` - The minimum allowed TTL that clients can specify for a wrapped response. In practice, setting a minimum TTL of one second effectively makes response wrapping mandatory for a particular path. It can also be used to ensure that the TTL is not too low, leading to end targets being unable to unwrap before the token expires.

- `max_wrapping_ttl` - The maximum allowed TTL that clients can specify for a wrapped response.

```hcl
# This effectively makes response wrapping mandatory for this path by setting
# min_wrapping_ttl to 1 second. It also sets this path's wrapped response
# maximum allowed TTL to 90 seconds.
path "auth/approle/role/my-role/secret-id" {
  capabilities     = ["create", "update"]
  min_wrapping_ttl = "1s"
  max_wrapping_ttl = "90s"
}
```

If both are specified, the minimum value must be less than the maximum. In addition, if paths are merged from different stanzas, the lowest value specified for each is the value that will result, in line with the idea of keeping token lifetimes as short as possible.
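The merging rule above can be illustrated with a sketch. The `deploy` role name and the two policy files are hypothetical, invented for this example:

```hcl
# Sketch: "deploy" is a hypothetical AppRole role. If one token holds both
# policies below, the stanzas for the shared path are merged and the lowest
# value specified for each field wins, yielding:
#   min_wrapping_ttl = "1s"
#   max_wrapping_ttl = "60s"

# policy-a.hcl
path "auth/approle/role/deploy/secret-id" {
  capabilities     = ["update"]
  min_wrapping_ttl = "1s"
  max_wrapping_ttl = "90s"
}

# policy-b.hcl
path "auth/approle/role/deploy/secret-id" {
  capabilities     = ["update"]
  min_wrapping_ttl = "5s"
  max_wrapping_ttl = "60s"
}
```

Note that taking the lowest specified maximum keeps the effective wrapped-response lifetime as short as possible, consistent with the rule described above.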
## Built-in policies

Vault has two built-in policies: `default` and `root`. This section describes both.

### Default policy

The `default` policy is a built-in Vault policy that cannot be removed. By default, it is attached to all tokens, but may be explicitly excluded at token creation time by supporting authentication methods.

The policy contains basic functionality, such as the ability for the token to look up data about itself and to use its cubbyhole data. However, Vault is not prescriptive about its contents. It can be modified to suit your needs; Vault will never overwrite your modifications. If you want to stay up to date with the latest upstream version of the `default` policy, simply read the contents of the policy from an up-to-date dev server, and write those contents into your Vault's `default` policy.

To view all permissions granted by the default policy on your Vault installation, run:

```shell-session
$ vault read sys/policy/default
```

To disable attachment of the default policy:

```shell-session
$ vault token create -no-default-policy
```

or via the API:

```shell-session
$ curl \
  --request POST \
  --header "X-Vault-Token: ..." \
  --data '{"no_default_policy": "true"}' \
  https://vault.hashicorp.rocks/v1/auth/token/create
```

### Root policy

The `root` policy is a built-in Vault policy that cannot be modified or removed. Any user associated with this policy becomes a root user. A root user can do _anything_ within Vault. As such, it is **highly recommended** that you revoke any root tokens before running Vault in production.

When a Vault server is first initialized, there always exists one root user. This user is used to do the initial configuration and setup of Vault. Once configured, the initial root token should be revoked and more strictly controlled users and authentication should be used.

To revoke a root token, run:

```shell-session
$ vault token revoke "<token>"
```

or via the API:

```shell-session
$ curl \
  --request POST \
  --header "X-Vault-Token: ..." \
  --data '{"token": "<token>"}' \
  https://vault.hashicorp.rocks/v1/auth/token/revoke
```

For more information, please read:

- [Production Hardening](/vault/tutorials/operations/production-hardening)
- [Generating a Root Token](/vault/tutorials/operations/generate-root)
## Managing policies

Policies are authored (written) in your editor of choice. They can be authored in HCL or JSON, and the syntax is described in detail above. Once saved, policies must be uploaded to Vault before they can be used.

### Listing policies

To list all registered policies in Vault:

```shell-session
$ vault read sys/policy
```

or via the API:

```shell-session
$ curl \
  --header "X-Vault-Token: ..." \
  https://vault.hashicorp.rocks/v1/sys/policy
```

<Note>

You may also see the CLI command `vault policies`. This is a convenience wrapper around reading the sys endpoint directly. It provides the same functionality but formats the output in a special manner.

</Note>

### Creating policies

Policies may be created (uploaded) via the CLI or via the API. To create a new policy in Vault:

```shell-session
$ vault policy write policy-name policy-file.hcl
```

or via the API:

```shell-session
$ curl \
  --request POST \
  --header "X-Vault-Token: ..." \
  --data '{"policy": "path \"...\" ..."}' \
  https://vault.hashicorp.rocks/v1/sys/policy/policy-name
```

In both examples, the name of the policy is "policy-name". You can think of this name as a pointer or symlink to the policy ACLs. Tokens are attached to policies by name, which are then mapped to the set of rules corresponding to that name.

### Updating policies

Existing policies may be updated to change permissions via the CLI or via the API. To update an existing policy in Vault, follow the same steps as creating a policy, but use an existing policy name:

```shell-session
$ vault write sys/policy/my-existing-policy policy=@updated-policy.json
```

or via the API:

```shell-session
$ curl \
  --request POST \
  --header "X-Vault-Token: ..." \
  --data '{"policy": "path \"...\" ..."}' \
  https://vault.hashicorp.rocks/v1/sys/policy/my-existing-policy
```

### Deleting policies

Existing policies may be deleted via the CLI or API. To delete a policy:

```shell-session
$ vault delete sys/policy/policy-name
```

or via the API:

```shell-session
$ curl \
  --request DELETE \
  --header "X-Vault-Token: ..." \
  https://vault.hashicorp.rocks/v1/sys/policy/policy-name
```

This is an idempotent operation. Vault will not return an error when deleting a policy that does not exist.
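The `dev-readonly` policy referenced in the association examples is only ever named, never shown. As a hedged sketch under the assumption that it grants read-only access to a development KV path (the path and capabilities here are hypothetical, invented for this illustration), such a policy file might look like:

```hcl
# dev-readonly.hcl -- hypothetical contents; only the policy *name* appears in
# this document, so the path and capabilities below are illustrative.
path "secret/dev/*" {
  capabilities = ["read", "list"]
}
```

It would be uploaded by name with `vault policy write dev-readonly dev-readonly.hcl`, after which the name can be attached to users or tokens.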
## Associating policies

Vault can automatically associate a set of policies to a token based on an authorization. This configuration varies significantly between authentication backends. For simplicity, this example will use Vault's built-in userpass auth method.

A Vault administrator or someone from the security team would create the user in Vault with a list of associated policies:

```shell-session
$ vault write auth/userpass/users/sethvargo \
    password="s3cr3t" \
    policies="dev-readonly,logs"
```

This creates an authentication mapping to the policies such that, when the user authenticates successfully to Vault, they will be given a token which has the list of policies attached.

The user wishing to authenticate would run:

```shell-session
$ vault login -method="userpass" username="sethvargo"
Password (will be hidden): ...
```

If the provided information is correct, Vault will generate a token, assign the list of configured policies to the token, and return that token to the authenticated user.

## Root-protected API endpoints

<Note>

Vault treats the HTTP POST and PUT verbs as equivalent, so for each mention of POST in the table below, PUT may also be used. Vault uses the non-standard LIST HTTP verb, but also allows list requests to be made using the GET verb along with `?list=true` as a query parameter, so for each mention of LIST in the table below, GET with `?list=true` may also be used.

</Note>

The following paths require a root token or `sudo` capability in the policy:
| Path | HTTP verb | Description |
|------|-----------|-------------|
| [auth/token/accessors](/vault/api-docs/auth/token#list-accessors) | LIST | List token accessors for all current Vault service tokens |
| [auth/token/create](/vault/api-docs/auth/token#create-token) | POST | Create a periodic or an orphan token (`period` or `no_parent` option) |
| [auth/token/revoke-orphan](/vault/api-docs/auth/token#revoke-token-and-orphan-children) | POST | Revoke a token but not its child tokens, which will be orphaned |
| [pki/root](/vault/api-docs/secret/pki#delete-all-issuers-and-keys) | DELETE | Delete the current CA key ([pki secrets engine](/vault/docs/secrets/pki)) |
| [pki/root/sign-self-issued](/vault/api-docs/secret/pki#sign-self-issued) | POST | Use the configured CA certificate to sign a self-issued certificate ([pki secrets engine](/vault/docs/secrets/pki)) |
| [sys/audit](/vault/api-docs/system/audit) | GET | List enabled audit devices |
| [sys/audit/:path](/vault/api-docs/system/audit) | POST, DELETE | Enable or remove an audit device |
| [sys/auth/:path](/vault/api-docs/system/auth) | GET, POST, DELETE | Manage the auth methods (enable, read, and delete) |
| [sys/auth/:path/tune](/vault/api-docs/system/auth#tune-auth-method) | GET, POST | Manage the auth methods (enable, read, delete, and tune) |
| [sys/config/auditing/request-headers](/vault/api-docs/system/config-auditing) | GET | List the request headers that are configured to be audited |
| [sys/config/auditing/request-headers/:name](/vault/api-docs/system/config-auditing) | GET, POST, DELETE | Manage the auditing headers (create, update, read, and delete) |
| [sys/config/cors](/vault/api-docs/system/config-cors) | GET, POST, DELETE | Configure CORS settings |
| [sys/config/ui/headers](/vault/api-docs/system/config-ui) | GET, LIST | Configure the UI settings |
| [sys/config/ui/headers/:name](/vault/api-docs/system/config-ui#name) | POST, DELETE | Configure custom HTTP headers to be served with the UI |
| [sys/internal/inspect/router/:tag](/vault/api-docs/system/inspect/router) | GET | Inspect the internal components of Vault's router. `tag` must be one of root, uuid, accessor, or storage |
| [sys/leases/lookup/:prefix](/vault/api-docs/system/leases#list-leases) | LIST | List lease IDs |
| [sys/leases/revoke-force/:prefix](/vault/api-docs/system/leases#revoke-force) | POST | Revoke all secrets or tokens, ignoring backend errors |
| [sys/leases/revoke-prefix/:prefix](/vault/api-docs/system/leases#revoke-prefix) | POST | Revoke all secrets generated under a given prefix |
| [sys/plugins/catalog/:type/:name](/vault/api-docs/system/plugins-catalog#register-plugin) | GET, POST, DELETE | Register a new plugin, or read/remove an existing plugin |
| [sys/raw/:path](/vault/api-docs/system/raw) | GET, POST, DELETE | Used to access the raw underlying store in Vault |
| [sys/raw/:prefix](/vault/api-docs/system/raw#list-raw) | GET, LIST | Returns a list of keys for a given path prefix |
| [sys/remount](/vault/api-docs/system/remount) | POST | Moves an already-mounted backend to a new mount point |
| [sys/replication/reindex](/vault/api-docs/system/replication#reindex-replication) | POST | Reindex the local data storage |
| [sys/replication/performance/primary/secondary-token](/vault/api-docs/system/replication/replication-performance#generate-performance-secondary-token) | POST | Generate a performance secondary activation token |
| [sys/replication/dr/primary/secondary-token](/vault/api-docs/system/replication/replication-dr#generate-dr-secondary-token) | POST | Generate a DR secondary activation token |
| [sys/rotate](/vault/api-docs/system/rotate) | POST | Trigger a rotation of the backend encryption key |
| [sys/seal](/vault/api-docs/system/seal) | POST | Seals the Vault |
| [sys/step-down](/vault/api-docs/system/step-down) | POST | Forces a node to give up active status |
| [sys/storage/raft/snapshot-auto/config](/vault/api-docs/system/storage/raftautosnapshots#list-automated-snapshots-configs) | LIST | Lists named configurations |
| [sys/storage/raft/snapshot-auto/config/:name](/vault/api-docs/system/storage/raftautosnapshots) | GET, POST, DELETE | Creates or updates a named configuration |

## Tokens

Tokens have two sets of policies: identity policies, which are computed based on the entity and its groups, and token policies, which are either defined based on the login method or, in the case of explicit token creation via the API, are an input to the token creation. What follows concerns token policies exclusively: a token's identity policies cannot be controlled except by modifying the underlying entities, groups, and group memberships.

Tokens are associated with their policies at creation time. For example:

```shell-session
$ vault token create -policy=dev-readonly -policy=logs
```

Normally, the only policies that may be specified are those which are present in the current token's (i.e. the new token's parent's) token policies. However, root users can assign any policies.
There is no way to modify the policies associated with a token once the token has been issued. The token must be revoked and a new one acquired to receive a new set of policies.

However, the _contents_ of policies are parsed in real time whenever the token is used. As a result, if a policy is modified, the modified rules will be in force the next time a token with that policy attached is used to make a call to Vault.

## UI policy requirements

@include 'ui/policy-requirements.mdx'

## Tutorial

Refer to the following tutorials for further learning:

- [Vault Policies](/vault/tutorials/policies/policies)
- [ACL Policy Path Templating](/vault/tutorials/policies/policy-templating)
{"questions":"vault include alerts enterprise only mdx page title Filtering Filter expressions in Vault layout docs An introduction to the filtering syntax used in Vault","answers":"---\nlayout: docs\npage_title: Filtering\ndescription: >-\n  An introduction to the filtering syntax used in Vault.\n---\n\n# Filter expressions in Vault\n\n@include 'alerts\/enterprise-only.mdx'\n\nFilter expressions use matching operators and selector values to parse\nout important or relevant information. In some situations, you can use filter\nexpressions to control how Vault processes results.\n\n## Filter expression syntax\n\nBasic filter expressions are always written in plain text with a\n**matching operator**, a **selector**, and a **selector value**.\n\n- the **matching operator** tells Vault how to compare the selector and selector\n  value.\n- the **selector** is a [JSON pointer](https:\/\/tools.ietf.org\/html\/rfc6901) that\n  indicates which field or parameter in a JSON object to consider.\n- the **selector value** is a JSON pointer, number, or string that defines a\n  pattern Vault can filter against.\n\nFor example, in the filter expression:\n\n```text\nproduct\/name == \"Vault\"\n```\n\n- Equality (`==`) is the matching operator.\n- The JSON pointer `product\/name` is the selector.\n- The string \"Vault\" is the selector value.\n\nComplex filter expressions also allow Boolean logic and parenthesis. For example:\n\n```text\n(product\/name == \"Vault\") and (timestamp < \"2024-02-01\")\n```\n\nWhen parsing filter expressions, Vault ignores whitespace unless the whitespace\nis part of a literal string.\n\nFilter expression\n`product\/name==\"Vault\"` and `product\/name == \"Vault\"` generate the same results\nwhile `product\/name == \" Vault \"` and `product\/name == \"Vault\"` generate\ndifferent results.\n\n<Note title=\"Selectors are not universal\">\n\n  Filtering-enabled endpoints can support different selectors. 
Make sure to\n  consult the API documentation for a given endpoint when constructing your\n  filter expressions.\n\n<\/Note>\n<Tabs>\n\n<Tab heading=\"Matching operators\">\n\n\n\n```text\n\/\/ Equality & Inequality checks\n<Selector> == \"<Value>\"\n<Selector> != \"<Value>\"\n\n\/\/ Emptiness checks\n<Selector> is empty\n<Selector> is not empty\n\n\/\/ Contains checks or Substring Matching\n\"<Value>\" in <Selector>\n\"<Value>\" not in <Selector>\n<Selector> contains \"<Value>\"\n<Selector> not contains \"<Value>\"\n\n\/\/ Regular Expression Matching\n<Selector> matches \"<Value>\"\n<Selector> not matches \"<Value>\"\n```\n\n<\/Tab>\n\n<Tab heading=\"Selectors\">\n\n\nSelectors must be valid JSON pointers enclosed in quotes with a leading slash (`\/`).\n\n\nJSON pointers use forward slashes to define paths through a JSON block. For\nexample, to target the product name in:\n\n```json\n{ \"product\":\n  {\n    \"name\": \"Vault\",\n    \"version\": \"1.16.0\"\n  },\n  {\n    \"name\": \"Boundary\",\n    \"version\": \"0.15.0\"\n  }\n}\n\n```\n\nThe selector would be `\/product\/name`.\n\n\n<\/Tab>\n\n<Tab heading=\"Selector values\">\n\n\nSelector values can be any valid selector, integer, floating point number, or\nstring. 
Numbers and strings should be quoted in double quotes or backticks.\n\nStrings quoted in backticks are treated as literal values and escape sequences\nlike `\\n` are not expanded.\n\n| Value             | Type    | Expanded value    |\n|-------------------|---------|-------------------|\n| \"Vault\\tBoundary\" | string  | \"Vault\tBoundary\" |\n| `Vault\\tBoundary` | string  | \"Vault\\tBoundary\" |\n| \"10\"              | integer | \"10\"              |\n| `10`              | integer | \"10\"              |\n| \"0.75\"            | float   | \"0.75\"            |\n\n\n<\/Tab>\n\n<\/Tabs>\n\n## Complex expressions\n\nComplex expressions combine basic expressions with logical operators, grouping, and matching expressions.\n\n```text\n\/\/ Logical Or - evaluates to true if either sub-expression does\n<Expression 1> or <Expression 2>\n\n\/\/ Logical And - evaluates to true if both sub-expressions do\n<Expression 1 > and <Expression 2>\n\n\/\/ Logical Not - evaluates to true if the sub-expression does not\nnot <Expression 1>\n\n\/\/ Grouping - Overrides normal precedence rules\n( <Expression 1> )\n\n\/\/ Inspects data to check for a match\n<Matching Expression 1>\n```\n\nVault uses standard operator precedence when resolving complex\nexpressions. 
For example, the expression\n`<Expression 1> and not <Expression 2> or <Expression 3>` resolves\nthe same as\n`( <Expression 1> and (not <Expression 2> )) or <Expression 3>`.\n\n\n## Performance\n\nFilters consume a portion of CPU time on the Vault node where they run.\n\n<Note title=\"Regular expressions\">\n  Using multiple\/complex expressions including regular expressions\n  (regex) will have a larger impact on performance than fewer\/simpler filters.\n<\/Note>\n\nAlways test your filters in pre-production environments to ensure correctness.\n\nIdeally you should [codify your management of Vault](\/vault\/tutorials\/operations\/codify-mgmt-vault-terraform)\nusing tools such as [Terraform](https:\/\/www.terraform.io\/), to prevent accidentally enabling an audit device\nin a production environment with untested\/incorrect settings.\n\nFinally, always ensure you profile production-like workloads within your pre-production\nenvironments in order to accurately assess the performance of Vault.","site":"vault"}
of Vault "}
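The JSON-pointer selectors and matching operators described above can be sketched in a few lines. This is a minimal illustration, not Vault's implementation: `resolve_pointer` is a hypothetical helper assuming standard RFC 6901 pointer semantics, used to mirror the equality, contains, and regex checks from the operator table.

```python
import json
import re

def resolve_pointer(doc, pointer):
    """Resolve an RFC 6901 JSON pointer like '/product/name' against parsed JSON."""
    node = doc
    for token in pointer.lstrip("/").split("/"):
        # Unescape pointer tokens: "~1" -> "/", then "~0" -> "~"
        token = token.replace("~1", "/").replace("~0", "~")
        node = node[int(token)] if isinstance(node, list) else node[token]
    return node

doc = json.loads('{"product": {"name": "Vault", "version": "1.16.0"}}')

# Equality check: "/product/name" == "Vault"
assert resolve_pointer(doc, "/product/name") == "Vault"

# Contains / substring check: "Vau" in "/product/name"
assert "Vau" in resolve_pointer(doc, "/product/name")

# Regular-expression check: "/product/name" matches "Vau.*"
assert re.match("Vau.*", resolve_pointer(doc, "/product/name"))
```

As the note above says, selectors are not universal; which operators an endpoint accepts still has to be checked against that endpoint's API documentation.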
{"questions":"vault Production hardening Harden your Vault deployments for production operations page title Production hardening layout docs You can use the best practices in this document to harden Vault when planning","answers":"---\nlayout: docs\npage_title: Production hardening\ndescription: >-\n  Harden your Vault deployments for production operations.\n---\n\n# Production hardening\n\nYou can use the best practices in this document to harden Vault when planning\nyour production deployment. These recommendations follow the\n[Vault security model](\/vault\/docs\/internals\/security), and focus on defense\nin depth.\n\nYou should follow the **baseline recommendations** if at all possible for any\nproduction Vault deployment. The **extended recommendations** detail extra\nlayers of security which may require more administrative overhead, and might\nnot be suitable for every deployment.\n\n## Baseline recommendations\n\n- **Do not run as root**. Use a dedicated, unprivileged service account to run\n  Vault, rather than running as the root or Administrator account. Vault is\n  designed to run as an unprivileged user, and doing so adds significant\n  defense against various privilege-escalation attacks.\n\n- **Allow minimal write privileges**. The unprivileged Vault service account\n  should not have access to overwrite its executable binary or any Vault\n  configuration files. Limit what is writable by the Vault user to just\n  directories and files for local Vault storage (for example, Integrated\n  Storage) or file audit device logs.\n\n- **Use end-to-end TLS**. You should always use Vault with TLS in production.\n  If you use intermediate load balancers or reverse proxies to front Vault,\n  you should enable TLS for all network connections between every part of the\n  system (including external storage) to ensure encryption of all traffic in\n  transit to and from Vault. 
When possible, you should set the HTTP Strict\n  Transport Security (HSTS) header using Vault's [custom response headers](\/vault\/docs\/configuration\/listener\/tcp#configuring-custom-http-response-headers) feature.\n\n- **Disable swap**. Vault encrypts data in transit and at rest, however it must\n  still have sensitive data in memory to function. Risk of exposure should be\n  minimized by disabling swap to prevent the operating system from paging\n  sensitive data to disk. Disabling swap is even more critical when your\n  Vault deployment uses Integrated Storage.\n\n- **Disable core dumps**. A user or administrator that can force a core dump\n  and has access to the resulting file can potentially access Vault encryption\n  keys. Preventing core dumps is a platform-specific process; on Linux setting\n  the resource limit `RLIMIT_CORE` to `0` disables core dumps. In the systemd\n  service unit file, setting `LimitCORE=0` will enforce this setting for the\n  Vault service.\n\n- **Use single tenancy**. Vault should be the sole user process running on a\n  machine. This reduces the risk that another process running on the same\n  machine gets compromised and gains the ability to interact with the Vault\n  process. Similarly, you should prefer running Vault on bare metal instead\n  of a virtual machine, and you should prefer running in a virtual machine instead\n  of running in a containerized environment.\n\n- **Firewall traffic**. Use a local firewall or network security features of\n  your cloud provider to restrict incoming and outgoing traffic to Vault and\n  essential system services like NTP. This includes restricting incoming\n  traffic to permitted sub-networks and outgoing traffic to services Vault\n  needs to connect to, such as databases.\n\n- **Avoid root tokens**. When you initialize Vault, it emits an initial\n  root token. You should use this token just to perform initial setup,\n  such as enabling auth methods so that users can authenticate. 
You should\n  treat Vault [configuration as\n  code](https:\/\/www.hashicorp.com\/blog\/codifying-vault-policies-and-configuration\/),\n  and use version control to manage policies. Once you complete initial Vault\n  setup, you should revoke the initial root token to reduce risk of exposure. Root tokens can be\n  [generated when needed](\/vault\/docs\/commands\/operator\/generate-root), and should be\n  revoked when no longer needed.\n\n- **Configure user lockout**. Vault provides a [user lockout](\/vault\/docs\/concepts\/user-lockout) function\n  for the [approle](\/vault\/docs\/auth\/approle), [ldap](\/vault\/docs\/auth\/ldap) and [userpass](\/vault\/docs\/auth\/userpass)\n  auth methods. **Vault enables user lockout by default**. Verify that the lockout threshold and lockout duration match your organization's security policies.\n\n- **Enable audit device logs**. Vault supports several [audit\n  devices](\/vault\/docs\/audit). When you enable audit device logs, you gain\n  a detailed history of all operations performed by Vault, and a forensics\n  trail in the case of misuse or compromise. Audit logs [securely\n  hash](\/vault\/docs\/audit#sensitive-information)\n  sensitive data, but you should still restrict access to prevent any\n  unintended information disclosure.\n\n- **Disable shell command history**. You may want the `vault` command itself to\n  not appear in history at all.\n\n- **Keep a frequent upgrade cadence**. Vault is actively developed, and you\n  should upgrade Vault often to incorporate security fixes and any changes in\n  default settings such as key lengths or cipher suites. Subscribe to the\n  [HashiCorp Announcement mailing list](https:\/\/groups.google.com\/g\/hashicorp-announce)\n  to receive announcements of new releases and visit the [Vault\n  CHANGELOG](https:\/\/github.com\/hashicorp\/vault\/blob\/main\/CHANGELOG.md) for\n  details on the changes made in each release.\n\n- **Synchronize clocks**. 
Use NTP or whatever mechanism is appropriate for your\n  environment to ensure that all the Vault nodes agree about what time it is.\n  Vault uses the clock for things like enforcing TTLs and setting dates in PKI\n  certificates, and if the nodes have significant clock skew, a failover can wreak havoc.\n\n- **Restrict storage access**. Vault encrypts all data at rest, regardless of\n  which storage type it uses. Although Vault encrypts the data, an [attacker\n  with arbitrary\n  control](\/vault\/docs\/internals\/security) can cause\n  data corruption or loss by modifying or deleting keys. You should restrict\n  storage access outside of Vault to avoid unauthorized access or operations.\n\n- **Do not use clear text credentials**. The Vault configuration [`seal`\n  stanza](\/vault\/docs\/configuration\/seal) configures the seal type to use for\n  extra data protection such as using HSM or Cloud KMS solutions to encrypt and\n  decrypt the root key. **DO NOT** store your cloud credentials or HSM pin in\n  clear text within the `seal` stanza. If you host the Vault server on the same\n  cloud platform as the KMS service, use the platform-specific identity\n  solutions. For example:\n\n  - [Resource Access Management (RAM) on AliCloud](\/vault\/docs\/configuration\/seal\/alicloudkms#authentication)\n  - [Instance Profiles on AWS](\/vault\/docs\/configuration\/seal\/awskms#authentication)\n  - [Managed Service Identities (MSI) on Azure](\/vault\/docs\/configuration\/seal\/azurekeyvault#authentication)\n  - [Service Account on Google Cloud Platform](\/vault\/docs\/configuration\/seal\/gcpckms#authentication-permissions)\n\n  When using platform-specific identity solutions, you should be mindful of auth\n  method and secret engine configuration within namespaces. 
You can share\n  platform identity across Vault namespaces, as these provider features \n  generally offer host-based identity solutions.\n\n  If that is not applicable, set the credentials as environment variables\n  (for example, `VAULT_HSM_PIN`).\n\n- **Use the safest algorithms available**. [Vault's TLS listener](\/vault\/docs\/configuration\/listener\/tcp#tls_cipher_suites)\n  supports a variety of legacy algorithms for backwards compatibility. While\n  these algorithms are available, they are not recommended for use when\n  a stronger alternative is available. If possible, use TLS 1.3 to ensure\n  that modern encryption algorithms encrypt data in transit and offer\n  forward secrecy.\n\n- **Follow best practices for plugins**. While HashiCorp-developed plugins\n  generally default to a safe configuration, you should be mindful of\n  misconfigured or malicious Vault plugins. These plugin issues can harm the\n  security posture of your Vault deployment.\n\n- **Be aware of non-deterministic configuration file merging**. Vault's\n  configuration file merging is non-deterministic, and inconsistencies in\n  settings between files can lead to inconsistencies in Vault settings.\n  Ensure settings are consistent across all configuration files, including any files merged together via `-config` flags.\n\n- **Use correct filesystem permissions**. Always ensure appropriate permissions\n  get applied to files before starting Vault. This is even more critical for files which contain sensitive information.\n\n- **Use standard input for vault secrets**. [Vault login](\/vault\/docs\/commands\/login)\n  and [Vault unseal](\/vault\/api-docs\/system\/unseal#key) allow operators to\n  provide secret values from either standard input or command-line arguments.\n  Command-line arguments can persist in shell history, and are readable by other unprivileged users on the same host.\n\n- **Develop an off-boarding process**. 
Removing accounts in Vault or associated \n  identity providers may not immediately revoke [token-based access](\/vault\/docs\/concepts\/tokens#user-management-considerations).\n  Depending on how you manage access to Vault, operators should consider:\n\n  - Removing the entity from groups granting access to resources.\n  - [Revoking](\/vault\/docs\/concepts\/lease#prefix-based-revocation) the active leases for a given user account.\n  - Deleting the canonical entity of the user after removing accounts in Vault or associated identity providers.\n    Deleting the canonical entity alone is insufficient as one is automatically created on successful login if it does not exist.\n  - [Disabling](\/vault\/docs\/commands\/auth\/disable) auth methods instead of deleting them, which revokes all\n    tokens generated by this auth method.\n\n- **Use short TTLs**. When possible, credentials issued from Vault (for example\n  tokens, X.509 certificates) should be short-lived, to guard against their potential compromise and reduce the need to use revocation methods.\n\n## Extended recommendations\n\n- **Disable SSH \/ remote desktop**. When running Vault as a single-tenant\n  application, users should never access the machine directly. Instead, they\n  should access Vault through its API over the network. Use a centralized\n  logging and telemetry solution for debugging. Be sure to restrict access to\n  logs on a need-to-know basis.\n\n- **Use systemd security features**. Systemd provides a number of features\n  that you can use to lock down access to the filesystem and to\n  administrative capabilities. 
The service unit file provided with the\n  official Vault Linux packages sets a number of these by default, including:\n\n  ```plaintext\n  ProtectSystem=full\n  PrivateTmp=yes\n  CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK\n  AmbientCapabilities=CAP_IPC_LOCK\n  ProtectHome=read-only\n  PrivateDevices=yes\n  NoNewPrivileges=yes\n  ```\n\n  See the [systemd.exec manual page](https:\/\/www.freedesktop.org\/software\/systemd\/man\/systemd.exec.html) for more details.\n\n- **Perform immutable upgrades**. Vault relies on external storage for\n  persistence, and this decoupling allows the servers running Vault to be\n  immutably managed. When you upgrade to a new version, you can bring new\n  servers with the upgraded version of Vault online. You can attach the new\n  servers to the same shared storage and unseal them. Then you can destroy the\n  older version servers. This reduces the need for remote access and upgrade orchestration which may introduce security gaps.\n\n- **Configure SELinux \/ AppArmor**. Using mechanisms like\n  [SELinux](https:\/\/github.com\/hashicorp\/vault-selinux-policies)\n  and AppArmor can help you gain layers of security when using Vault.\n  While Vault can run on several popular operating systems, Linux is\n  recommended due to the various security primitives mentioned here.\n\n- **Adjust user limits**. It is possible that your Linux distribution enforces\n  strict process user limits (`ulimits`). Review `ulimits` for the maximum number of open files, connections, etc. before going into production. You might need to increase the default values to avoid errors about too\n  many open files.\n\n- **Be aware of special container considerations**. 
To use memory locking\n(mlock) inside a Vault container, you need to use the `overlayfs2` or another\nsupporting driver.","site":"vault"}
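The core-dump recommendation above (`RLIMIT_CORE` set to `0`, or `LimitCORE=0` in a systemd unit) can also be applied from within a process itself. A minimal sketch on Linux, using Python's standard `resource` module; this only illustrates the limit, it is not how Vault configures itself:

```python
import resource

# Disable core dumps for the current process, mirroring LimitCORE=0 in a
# systemd unit: set both the soft and hard RLIMIT_CORE limits to 0.
# Lowering limits never requires elevated privileges.
resource.setrlimit(resource.RLIMIT_CORE, (0, 0))

soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
assert (soft, hard) == (0, 0)  # core files can no longer be written
```

Because the hard limit is also 0, a child process (or an attacker who gains code execution as the service user) cannot raise the limit back up without additional privileges.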
{"questions":"vault page title Seal Unseal Seal Unseal sealed to lock it down layout docs A Vault must be unsealed before it can access its data Likewise it can be","answers":"---\nlayout: docs\npage_title: Seal\/Unseal\ndescription: >-\n  A Vault must be unsealed before it can access its data. Likewise, it can be\n  sealed to lock it down.\n---\n\n# Seal\/Unseal\n\nWhen a Vault server is started, it starts in a _sealed_ state. In this\nstate, Vault is configured to know where and how to access the physical\nstorage, but doesn't know how to decrypt any of it.\n\n_Unsealing_ is the process of obtaining the plaintext root key necessary to\nread the decryption key to decrypt the data, allowing access to the Vault.\n\nPrior to unsealing, almost no operations are possible with Vault. For\nexample, authentication, managing the mount tables, etc. are all not possible.\nThe only possible operations are to unseal the Vault and check the status\nof the seal.\n\n## Why?\n\nThe data stored by Vault is encrypted. Vault needs the _encryption key_ in order\nto decrypt the data. The encryption key is also stored with the data\n(in the _keyring_), but encrypted with another encryption key known as the _root key_.\n\nTherefore, to decrypt the data, Vault must decrypt the encryption key\nwhich requires the root key. Unsealing is the process of getting access to\nthis root key. The root key is stored alongside all other Vault data,\nbut is encrypted by yet another mechanism: the unseal key.\n\nTo recap: most Vault data is encrypted using the encryption key in the keyring;\nthe keyring is encrypted by the root key; and the root key is encrypted by\nthe unseal key.\n\n## Shamir seals\n\n![Shamir seals](\/img\/vault-shamir-seal.png)\n\nThe default Vault config uses a Shamir seal. 
Instead of distributing the unseal\nkey as a single key to an operator, Vault uses an algorithm known as\n[Shamir's Secret Sharing](https:\/\/en.wikipedia.org\/wiki\/Shamir%27s_Secret_Sharing)\nto split the key into shares. A certain threshold of shares is required to\nreconstruct the unseal key, which is then used to decrypt the root key.\n\nThis is the _unseal_ process: the shares are added one at a time (in any\norder) until enough shares are present to reconstruct the key and\ndecrypt the root key.\n\n## Unsealing\n\nThe unseal process is done by running `vault operator unseal` or via the API.\nThis process is stateful: each key can be entered via multiple mechanisms from\nmultiple client machines and it will work. This allows each share of the root\nkey to be on a distinct client machine for better security.\n\nNote that when using the Shamir seal with multiple nodes, each node must be\nunsealed with the required threshold of shares. Partial unsealing of each node\nis not distributed across the cluster.\n\nOnce a Vault node is unsealed, it remains unsealed until one of these things happens:\n\n1. It is resealed via the API (see below).\n\n2. The server is restarted.\n\n3. Vault's storage layer encounters an unrecoverable error.\n\n-> **Note:** Unsealing makes the process of automating a Vault install\ndifficult. Automated tools can easily install, configure, and start Vault,\nbut unsealing it using Shamir is a very manual process. For most users\nAuto Unseal will provide a better experience.\n\n## Sealing\n\nThere is also an API to seal the Vault. This will throw away the root\nkey in memory and require another unseal process to restore it. Sealing\nonly requires a single operator with root privileges.\n\nThis way, if there is a detected intrusion, the Vault data can be locked\nquickly to try to minimize damages. 
It can't be accessed again without\naccess to the root key shares.\n\n## Auto unseal\n\nAuto unseal was developed to aid in reducing the operational complexity of\nkeeping the unseal key secure. This feature delegates the responsibility of\nsecuring the unseal key from users to a trusted device or service. At startup\nVault will connect to the device or service implementing the seal and ask it\nto decrypt the root key Vault read from storage.\n\n![Auto Unseal](\/img\/vault-auto-unseal.png)\n\nThere are certain operations in Vault besides unsealing that\nrequire a quorum of users to perform, e.g. generating a root token. When\nusing a Shamir seal the unseal keys must be provided to authorize these\noperations. When using Auto Unseal these operations require _recovery\nkeys_ instead.\n\nJust as the initialization process with a Shamir seal yields unseal keys,\ninitializing with an Auto Unseal yields recovery keys.\n\nIt is still possible to seal a Vault node using the API. In this case Vault\nwill remain sealed until restarted, or the unseal API is used, which with Auto\nUnseal requires the recovery key fragments instead of the unseal key fragments\nthat would be provided with Shamir. The process remains the same.\n\nFor a list of examples and supported providers, please see the\n[seal documentation](\/vault\/docs\/configuration\/seal).\n\nWhen DR replication is enabled in Vault Enterprise, [Performance Standby](\/vault\/docs\/enterprise\/performance-standby) nodes on the DR cluster will seal themselves, so they must be restarted to be unsealed.\n\n<Warning title=\"Recovery keys cannot decrypt the root key\">\n\nRecovery keys cannot decrypt the root key and thus are not sufficient to unseal\nVault if the auto unseal mechanism isn't working. They are purely an authorization mechanism.\nUsing auto unseal creates a strict Vault lifecycle dependency on the underlying seal mechanism. 
\nThis means that if the seal mechanism (such as the Cloud KMS key) becomes unavailable \nor is deleted before the seal is migrated, then there is no ability to recover \naccess to the Vault cluster until the mechanism is available again. **If the seal \nmechanism or its keys are permanently deleted, then the Vault cluster cannot be recovered, even\nfrom backups.**\nTo mitigate this risk, we recommend careful controls around management of the seal\nmechanism, for example using\n[AWS Service Control Policies](https:\/\/docs.aws.amazon.com\/organizations\/latest\/userguide\/orgs_manage_policies_scps.html)\nor similar.\nWith Vault Enterprise, secondary clusters (disaster or performance) can have a\nseal configured independently of the primary; when properly configured, this guards\nagainst *some* of this risk.  Unreplicated items such as local mounts could still\nbe lost.\n\n<\/Warning>\n\n## Recovery key\n\nWhen Vault is initialized while using an HSM or KMS, rather than unseal keys\nbeing returned to the operator, recovery keys are returned. These are generated\nfrom an internal recovery key that is split via Shamir's Secret Sharing, similar\nto Vault's treatment of unseal keys when running without an HSM or KMS.\n\nDetails about initialization and rekeying follow. When performing an operation\nthat uses recovery keys, such as `generate-root`, selection of the recovery\nkeys for this purpose, rather than the barrier unseal keys, is automatic.\n\n### Initialization\n\nWhen initializing, the split is performed according to the following CLI flags\nand their API equivalents in the [\/sys\/init](\/vault\/api-docs\/system\/init) endpoint:\n\n- `recovery-shares`: The number of shares into which to split the recovery\n  key. This value is equivalent to the `recovery_shares` value in the API\n  endpoint.\n- `recovery-threshold`: The threshold of shares required to reconstruct the\n  recovery key. 
This value is equivalent to the `recovery_threshold` value in\n  the API endpoint.\n- `recovery-pgp-keys`: The PGP keys to use to encrypt the returned recovery\n  key shares. This value is equivalent to the `recovery_pgp_keys` value in the\n  API endpoint, although as with `pgp_keys` the object in the API endpoint is\n  an array, not a string.\n\nAdditionally, Vault will refuse to initialize if the option has not been set to\ngenerate a key, and no key is found. See\n[Configuration](\/vault\/docs\/configuration\/seal\/pkcs11) for more details.\n\n### Rekeying\n\n#### Unseal key\n\nVault's unseal key can be rekeyed using a normal `vault operator rekey`\noperation from the CLI or the matching API calls. The rekey operation is\nauthorized by meeting the threshold of recovery keys. After rekeying, the new\nbarrier key is wrapped by the HSM or KMS and stored like the previous key; it is not\nreturned to the users that submitted their recovery keys.\n\n<EnterpriseAlert product=\"vault\">\n  Seal wrapping requires Vault Enterprise\n<\/EnterpriseAlert>\n\n#### Recovery key\n\nThe recovery key can be rekeyed to change the number of shares\/threshold or to\ntarget different key holders via different PGP keys. When using the Vault CLI,\nthis is performed by using the `-target=recovery` flag to `vault operator rekey`.\n\nVia the API, the rekey operation is performed with the same parameters as the\n[normal `\/sys\/rekey`\nendpoint](\/vault\/api-docs\/system\/rekey); however, the\nAPI prefix for this operation is at `\/sys\/rekey-recovery-key` rather than\n`\/sys\/rekey`.\n\n## Seal migration\n\nThe seal migration process cannot be performed without downtime, and due to the\ntechnical underpinnings of the seal implementations, the process requires that\nyou briefly take the whole cluster down. 
While experiencing some downtime may\nbe unavoidable, we believe that switching seals is a rare event and that the\ninconvenience of the downtime is an acceptable trade-off.\n\n~> **NOTE**: A backup should be taken before starting seal migration in case\nsomething goes wrong.\n\n~> **NOTE**: The seal migration operation requires both the old and new seals to be\navailable during the migration. For example, migration from an auto unseal to the Shamir\nseal will require that the service backing the auto unseal is accessible during\nthe migration.\n\n~> **NOTE**: Seal migration from auto unseal to auto unseal of the same type is\nsupported since Vault 1.6.0. However, there is a current limitation that\nprevents migrating from AWSKMS to AWSKMS; all other seal migrations of the same\ntype are supported. Seal migration from one auto unseal type (e.g., AWS KMS) to a\ndifferent auto unseal type (HSM, Azure KMS, etc.) is also supported on older\nversions.\n\n### Migration post Vault 1.16.0 via Seal HA for Auto Seals (Enterprise)\n\nWith Seal HA, migration between auto-unseal types (not including any Shamir\nseals) can be done fully online using Seal High Availability (Seal HA) without\nany downtime.\n\n1. Edit the Vault configuration, and add the new, target seal configuration.\n1. Send the Vault process the SIGHUP signal, triggering a configuration reload.\n1. Monitor the [`sys\/sealwrap\/rewrap`](\/vault\/api-docs\/system\/sealwrap-rewrap) endpoint\nto see that rewrap is running, and\/or the [`sys\/seal-backend-status`](\/vault\/api-docs\/system\/seal-backend-status)\nendpoint, waiting for `fully_wrapped` to be true, indicating all seal wrapped values are now\nwrapped by the new seal. The logs also contain information about the rewrap progress.\n1. Edit the Vault configuration, removing the old seal configuration.\n1. 
Send the Vault process the SIGHUP signal, again allowing re-wrapping to complete.\n\n### Migration post Vault 1.5.1\n\nThese steps are common for seal migrations between any supported kinds and for\nany storage backend.\n\n1. Take a standby node down and update the [seal\n   configuration](\/vault\/docs\/configuration\/seal).\n\n   - If the migration is from Shamir seal to Auto seal, add the desired new Auto\n     seal block to the configuration.\n   - If the migration is from Auto seal to Shamir seal, add `disabled = \"true\"`\n     to the old seal block.\n   - If the migration is from Auto seal to another Auto seal, add `disabled =\n     \"true\"` to the old seal block and add the desired new Auto seal block.\n\n   Now, bring the standby node back up and run the unseal command for each key,\n   supplying the `-migrate` flag.\n\n   - Supply Shamir unseal keys if the old seal was Shamir; they will be migrated\n     as the recovery keys for the Auto seal.\n   - Supply recovery keys if the old seal is one of the Auto seals; they will be\n     migrated as the recovery keys of the new Auto seal, or as Shamir unseal\n     keys if the new seal is Shamir.\n\n1. Perform step 1 for all the standby nodes, one at a time. It is necessary to\n   bring back the downed standby node before moving on to the other standby nodes,\n   particularly when Integrated Storage is in use, as this helps retain\n   quorum.\n\n1. [Step down](\/vault\/docs\/commands\/operator\/step-down) the\n   active node. One of the standby nodes will become the new active node.\n   When using Integrated Storage, ensure that quorum is reached and a leader is\n   elected.\n\n1. The new active node will perform the migration. Monitor the server log on\n   the active node to confirm completion of the seal migration process.\n   Allow some time for the migration information to replicate to all the\n   nodes when using Integrated Storage. 
In Vault Enterprise, switching an Auto seal\n   implies that the seal wrapped storage entries get re-wrapped. Monitor the log\n   and wait until this process is complete (look for `seal re-wrap completed`).\n\n<Warning heading=\"Seal configuration changes will invoke rewrap\">\n\n   Any change to the `seal` stanza in your Vault configuration invokes a seal-rewrap,\n   even \"migrations\" from the same auto-unseal type, such as `pkcs11` to `pkcs11`.\n\n<\/Warning>\n\n1. Seal migration is now completed. Take down the old active node, update its\n   configuration to use only the new seal blocks (no longer referencing the old\n   seal type), and bring it back up. It will be auto-unsealed if the new seal is one of the\n   auto seals, or will require unseal keys if the new seal is Shamir.\n\n1. At this point, the configuration files of all the nodes can be updated to only have the\n   new seal information. Standby nodes can be restarted right away, and the active\n   node can be restarted upon a leadership change.\n\n### Migration pre 1.5.1\n\n#### Migration from shamir to auto unseal\n\nTo migrate from Shamir keys to Auto Unseal, take your server cluster offline and\nupdate the [seal configuration](\/vault\/docs\/configuration\/seal) with the appropriate\nseal configuration. Bring your server back up and leave the rest of the nodes\noffline if using multi-server mode, then run the unseal process with the\n`-migrate` flag and bring the rest of the cluster online.\n\nAll unseal commands must specify the `-migrate` flag. Once the required\nthreshold of unseal keys has been entered, the unseal keys will be migrated to recovery\nkeys.\n\n`$ vault operator unseal -migrate`\n\n#### Migration from auto unseal to shamir\n\nTo migrate from auto unseal to Shamir keys, take your server cluster offline\nand update the [seal configuration](\/vault\/docs\/configuration\/seal), adding `disabled\n= \"true\"` to the seal block. 
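As a sketch of that change, a disabled seal stanza might look like the following (the provider, region, and key ID are illustrative placeholders, not values from this guide):

```hcl
# Illustrative only: the existing auto seal, marked disabled so that the
# -migrate unseal can still use it to decrypt the root key.
seal "awskms" {
  disabled   = "true"
  region     = "us-east-1"              # placeholder
  kms_key_id = "alias/vault-unseal-key" # placeholder
}
```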
This allows the migration to use this information\nto decrypt the key but will not unseal Vault. When you bring your server back\nup, run the unseal process with the `-migrate` flag and use the recovery keys\nto perform the migration. All unseal commands must specify the `-migrate` flag.\nOnce the required threshold of recovery keys has been entered, the recovery keys\nwill be migrated to be used as unseal keys.\n\n#### Migration from auto unseal to auto unseal\n\n~> **NOTE**: Migration between the same Auto Unseal types is supported in Vault\n1.6.0 and higher. For these pre-1.5.1 steps, it is only possible to migrate from\none type of auto unseal to a different type (e.g., Transit -> AWSKMS).\n\nTo migrate from auto unseal to a different auto unseal configuration, take your\nserver cluster offline, update the existing [seal\nconfiguration](\/vault\/docs\/configuration\/seal), and add `disabled = \"true\"` to the seal\nblock. Then add another seal block to describe the new seal.\n\nWhen you bring your server back up, run the unseal process with the `-migrate`\nflag and use the recovery keys to perform the migration. All unseal commands\nmust specify the `-migrate` flag. Once the required threshold of recovery keys\nhas been entered, the recovery keys will be kept and used as recovery keys in the new\nseal.\n\n#### Migration with integrated storage\n\nIntegrated Storage uses the Raft protocol underneath, which requires a quorum of\nservers to be online before the cluster is functional. Therefore, bringing the\ncluster back up one node at a time with the seal configuration updated will not\nwork in this case. Follow the same steps for each kind of migration described\nabove, with the exception that after the cluster is taken offline, you update the\nseal configurations of all the nodes appropriately and bring them all back up.\nWhen the quorum of nodes is back up, Raft will elect a leader, and the leader\nnode will perform the migration. 
The migrated information will be replicated to\nall other cluster peers, and when a peer eventually becomes the leader,\nmigration will not happen again on that node.\n\n## Seal high availability <EnterpriseAlert inline=\"true\" \/>\n\nSeal high availability (Seal HA) allows the configuration of more than one auto\nseal mechanism such that Vault can tolerate the temporary loss of a seal service\nor device for a time.  With Seal HA configured with at least two and no more than\nthree auto seals, Vault can also start up and unseal if one of the\nconfigured seals is still available (though Vault will remain in a degraded mode in\nthis case). While seals are unavailable, seal wrapping and entropy augmentation can\nstill occur using the remaining seals, and values produced while a seal is down will\nbe re-wrapped with all the seals when all seals become healthy again.\n\nAn operator should choose two seals that are unlikely to become unavailable at the\nsame time.  For example, they may choose KMS keys in two cloud regions, from\ntwo different providers; or a mix of HSM, KMS, or Transit seals.\n\nWhen an operator configures an additional seal or removes a seal (one at a time)\nand restarts Vault, Vault will automatically detect that it needs to re-wrap\nCSPs and seal wrapped values, and will start the process.  Seal re-wrapping can\nbe monitored via the logs or via the `sys\/seal-status` endpoint.  While a\nre-wrap is in progress (or could not complete successfully), changes to the\nseal configuration are not allowed.\n\nIn addition to high availability, Seal HA can be used to migrate between two\nauto seals in a [simplified manner](#migration-post-vault-1-16-0-via-seal-ha-for-auto-seals-enterprise).\n\nNote that Shamir seals are not auto seals and cannot be included in a Seal\nHA setup.  
This is because auto seals support seal wrap while Shamir seals\ndo not, so the loss of the auto seal does not necessarily leave Vault in a\nfully available state. \n\n### Use and Configuration\n\nRefer to the [configuration](\/vault\/docs\/configuration\/seal\/seal-ha) section\nfor details on configuring Seal HA.\n\n### Seal Re-Wrapping\n\nWhenever seal configuration changes, Vault must re-wrap all CSPs and seal\nwrapped values, to ensure each value has an entry encrypted by all configured\nseals.  Vault detects these configuration changes automatically, and triggers\na re-wrap.  Re-wraps can take some time, depending on the number of\nseal wrapped values.  While re-wrapping is in progress, no configuration changes\nto the seals can be made.\n\nProgress of the re-wrap can be monitored using\nthe [`sys\/sealwrap\/rewrap`](\/vault\/api-docs\/system\/sealwrap-rewrap) endpoint.\n\n### Limitations and Known Issues\n\nIn order to limit complexity and increase safety, there are some limitations\nto the use and configuration of Seal HA:\n\n* Vault must be configured for a single seal at the time of initialization.\nExtra seals can then be added.\n* Seals must be added or removed one at a time.\n* Only auto seals can be used in HA configurations.  Shamir and auto cannot\nbe mixed.\n* A maximum of three seals can be configured.\n* As seal wrapped values must be wrapped by all configured seals, it is possible\nthat large values may fail to persist as the size of the entry is multiplied by\nthe number of seals causing it to exceed the storage entry size limit.   
An example\nwould be storing a large document in KVv2 with seal wrapping enabled.\n* It is not possible to rotate the data encryption key or the recovery keys unless\nall seals are healthy.
key into shares  A certain threshold of shares is required to reconstruct the unseal key  which is then used to decrypt the root key   This is the  unseal  process  the shares are added one at a time  in any order  until enough shares are present to reconstruct the key and decrypt the root key      Unsealing  The unseal process is done by running  vault operator unseal  or via the API  This process is stateful  each key can be entered via multiple mechanisms from multiple client machines and it will work  This allows each share of the root key to be on a distinct client machine for better security   Note that when using the Shamir seal with multiple nodes  each node must be unsealed with the required threshold of shares  Partial unsealing of each node is not distributed across the cluster   Once a Vault node is unsealed  it remains unsealed until one of these things happens   1  It is resealed via the API  see below    2  The server is restarted   3  Vault s storage layer encounters an unrecoverable error        Note    Unsealing makes the process of automating a Vault install difficult  Automated tools can easily install  configure  and start Vault  but unsealing it using Shamir is a very manual process  For most users Auto Unseal will provide a better experience      Sealing  There is also an API to seal the Vault  This will throw away the root key in memory and require another unseal process to restore it  Sealing only requires a single operator with root privileges   This way  if there is a detected intrusion  the Vault data can be locked quickly to try to minimize damages  It can t be accessed again without access to the root key shares      Auto unseal  Auto unseal was developed to aid in reducing the operational complexity of keeping the unseal key secure  This feature delegates the responsibility of securing the unseal key from users to a trusted device or service  At startup Vault will connect to the device or service implementing the seal and ask it to 
decrypt the root key Vault read from storage     Auto Unseal   img vault auto unseal png   There are certain operations in Vault besides unsealing that require a quorum of users to perform  e g  generating a root token  When using a Shamir seal the unseal keys must be provided to authorize these operations  When using Auto Unseal these operations require  recovery keys  instead   Just as the initialization process with a Shamir seal yields unseal keys  initializing with an Auto Unseal yields recovery keys   It is still possible to seal a Vault node using the API  In this case Vault will remain sealed until restarted  or the unseal API is used  which with Auto Unseal requires the recovery key fragments instead of the unseal key fragments that would be provided with Shamir  The process remains the same   For a list of examples and supported providers  please see the  seal documentation   vault docs configuration seal    When DR replication is enabled in Vault Enterprise   Performance Standby   vault docs enterprise performance standby  nodes on the DR cluster will seal themselves  so they must be restarted to be unsealed    Warning title  Recovery keys cannot decrypt the root key    Recovery keys cannot decrypt the root key and thus are not sufficient to unseal Vault if the auto unseal mechanism isn t working  They are purely an authorization mechanism  Using auto unseal creates a strict Vault lifecycle dependency on the underlying seal mechanism   This means that if the seal mechanism  such as the Cloud KMS key  becomes unavailable   or deleted before the seal is migrated  then there is no ability to recover  access to the Vault cluster until the mechanism is available again    If the seal  mechanism or its keys are permanently deleted  then the Vault cluster cannot be recovered  even from backups    To mitigate this risk  we recommend careful controls around management of the seal mechanism  for example using  AWS Service Control Policies  https   docs aws amazon 
com organizations latest userguide orgs manage policies scps html  or similar  With Vault Enterprise secondary clusters  disaster or performance  can have a seal configured independently of the primary  and when properly configured guards against  some  of this risk   Unreplicated items such as local mounts could still be lost     Warning      Recovery key  When Vault is initialized while using an HSM or KMS  rather than unseal keys being returned to the operator  recovery keys are returned  These are generated from an internal recovery key that is split via Shamir s Secret Sharing  similar to Vault s treatment of unseal keys when running without an HSM or KMS   Details about initialization and rekeying follow  When performing an operation that uses recovery keys  such as  generate root   selection of the recovery keys for this purpose  rather than the barrier unseal keys  is automatic       Initialization  When initializing  the split is performed according to the following CLI flags and their API equivalents in the   sys init   vault api docs system init  endpoint      recovery shares   The number of shares into which to split the recovery   key  This value is equivalent to the  recovery shares  value in the API   endpoint     recovery threshold   The threshold of shares required to reconstruct the   recovery key  This value is equivalent to the  recovery threshold  value in   the API endpoint     recovery pgp keys   The PGP keys to use to encrypt the returned recovery   key shares  This value is equivalent to the  recovery pgp keys  value in the   API endpoint  although as with  pgp keys  the object in the API endpoint is   an array  not a string   Additionally  Vault will refuse to initialize if the option has not been set to generate a key  and no key is found  See  Configuration   vault docs configuration seal pkcs11  for more details       Rekeying       Unseal key  Vault s unseal key can be rekeyed using a normal  vault operator rekey  operation from the 
CLI or the matching API calls  The rekey operation is authorized by meeting the threshold of recovery keys  After rekeying  the new barrier key is wrapped by the HSM or KMS and stored like the previous key  it is not returned to the users that submitted their recovery keys    EnterpriseAlert product  vault     Seal wrapping requires Vault Enterprise   EnterpriseAlert        Recovery key  The recovery key can be rekeyed to change the number of shares threshold or to target different key holders via different PGP keys  When using the Vault CLI  this is performed by using the   target recovery  flag to  vault operator rekey    Via the API  the rekey operation is performed with the same parameters as the  normal   sys rekey  endpoint   vault api docs system rekey   however  the API prefix for this operation is at   sys rekey recovery key  rather than   sys rekey       Seal migration  The seal migration process cannot be performed without downtime  and due to the technical underpinnings of the seal implementations  the process requires that you briefly take the whole cluster down  While experiencing some downtime may be unavoidable  we believe that switching seals is a rare event and that the inconvenience of the downtime is an acceptable trade off        NOTE    A backup should be taken before starting seal migration in case something goes wrong        NOTE    Seal migration operation will require both old and new seals to be available during the migration  For example  migration from auto unseal to Shamir seal will require that the service backing the auto unseal is accessible during the migration        NOTE    Seal migration from auto unseal to auto unseal of the same type is supported since Vault 1 6 0  However  there is a current limitation that prevents migrating from AWSKMS to AWSKMS  all other seal migrations of the same type are supported  Seal migration from one auto unseal type  AWS KMS  to different auto unseal type  HSM  Azure KMS  etc   is also supported 
on older versions as well       Migration post Vault 1 16 0 via Seal HA for Auto Seals  Enterprise   With Seal HA  migration between auto unseal types  not including any Shamir seals  can be done fully online using Seal High Availability  Seal HA  without any downtime   1  Edit the Vault configuration  and add the new  target seal configuration  1  Send the Vault process the SIGHUP signal  triggering a configuration reload  1  Monitor the   sys sealwrap rewrap    vault api docs system sealwrap rewrap  endpoints  to see that rewrap is running  and or   sys seal backend status    vault api docs system seal backend status   endpoints  waiting for  fully wrapped  to be true  indicating all seal wrapped values are now wrapped by the new seal   The logs also contain information about the rewrap progress  1  Edit the Vault configuration  removing the old seal configuration  1  Send the Vault process the SIGHUP signal  again allowing re wrapping to complete       Migration post Vault 1 5 1  These steps are common for seal migrations between any supported kinds and for any storage backend   1  Take a standby node down and update the  seal    configuration   vault docs configuration seal         If the migration is from Shamir seal to Auto seal  add the desired new Auto      seal block to the configuration       If the migration is from Auto seal to Shamir seal  add  disabled    true        to the old seal block       If the migration is from Auto seal to another Auto seal  add  disabled         true   to the old seal block and add the desired new Auto seal block      Now  bring the standby node back up and run the unseal command on each key  by    supplying the   migrate  flag        Supply Shamir unseal keys if the old seal was Shamir  which will be migrated      as the recovery keys for the Auto seal       Supply recovery keys if the old seal is one of Auto seals  which will be      migrated as the recovery keys of the new Auto seal  or as Shamir unseal      keys if the 
new seal is Shamir   1  Perform step 1 for all the standby nodes  one at a time  It is necessary to    bring back the downed standby node before moving on to the other standby nodes     specifically when Integrated Storage is in use for it helps to retain the    quorum   1   Step down   vault docs commands operator step down  the    active node  One of the standby nodes will become the new active node     When using Integrated Storage  ensure that quorum is reached and a leader is    elected   1  The new active node will perform the migration  Monitor the server log in    the active node to witness the completion of the seal migration process     Wait for a little while for the migration information to replicate to all the    nodes in case of Integrated Storage  In enterprise Vault  switching an Auto seal    implies that the seal wrapped storage entries get re wrapped  Monitor the log    and wait until this process is complete  look for  seal re wrap completed       Warning heading  Seal configuration changes will invoke rewrap       Any change to the  seal  stanza in your Vault configuration invokes seal rewrap     even  migrations  from the same auto unseal type like  pkcs11  to  pkcs11      Warning   1  Seal migration is now completed  Take down the old active node  update its    configuration to use the new seal blocks  completely unaware of the old seal type      and bring it back up  It will be auto unsealed if the new seal is one of the    auto seals  or will require unseal keys if the new seal is Shamir   1  At this point  configuration files of all the nodes can be updated to only have the    new seal information  Standby nodes can be restarted right away and the active    node can be restarted upon a leadership change       Migration pre 1 5 1       Migration from shamir to auto unseal  To migrate from Shamir keys to Auto Unseal  take your server cluster offline and update the  seal configuration   vault docs configuration seal  with the appropriate seal 
configuration  Bring your server back up and leave the rest of the nodes offline if using multi server mode  then run the unseal process with the   migrate  flag and bring the rest of the cluster online   All unseal commands must specify the   migrate  flag  Once the required threshold of unseal keys are entered  unseal keys will be migrated to recovery keys      vault operator unseal  migrate        Migration from auto unseal to shamir  To migrate from auto unseal to Shamir keys  take your server cluster offline and update the  seal configuration   vault docs configuration seal  and add  disabled    true   to the seal block  This allows the migration to use this information to decrypt the key but will not unseal Vault  When you bring your server back up  run the unseal process with the   migrate  flag and use the Recovery Keys to perform the migration  All unseal commands must specify the   migrate  flag  Once the required threshold of recovery keys are entered  the recovery keys will be migrated to be used as unseal keys        Migration from auto unseal to auto unseal       NOTE    Migration between same Auto Unseal types is supported in Vault 1 6 0 and higher  For these pre 1 5 1 steps  it is only possible to migrate from one type of auto unseal to a different type  ie Transit    AWSKMS    To migrate from auto unseal to a different auto unseal configuration  take your server cluster offline and update the existing  seal configuration   vault docs configuration seal  and add  disabled    true   to the seal block  Then add another seal block to describe the new seal   When you bring your server back up  run the unseal process with the   migrate  flag and use the Recovery Keys to perform the migration  All unseal commands must specify the   migrate  flag  Once the required threshold of recovery keys are entered  the recovery keys will be kept and used as recovery keys in the new seal        Migration with integrated storage  Integrated Storage uses the Raft 
protocol underneath, which requires a quorum of servers to be online before the cluster is functional. Therefore, bringing the cluster back up one node at a time with the seal configuration updated will not work in this case. Follow the same steps for each kind of migration described above, with the exception that after the cluster is taken offline, update the seal configurations of all the nodes appropriately and bring them all back up. When the quorum of nodes are back up, Raft will elect a leader, and the leader node will perform the migration. The migrated information will be replicated to all other cluster peers, and when the peers eventually become the leader, migration will not happen again on the peer nodes.\n\n## Seal high availability (Enterprise)\n\nSeal high availability (Seal HA) allows the configuration of more than one auto seal mechanism such that Vault can tolerate the temporary loss of a seal service or device. With Seal HA configured with at least two and no more than three auto seals, Vault can also start up and unseal if one of the configured seals is still available, though Vault will remain in a degraded mode in this case. While seals are unavailable, seal wrapping and entropy augmentation can still occur using the remaining seals, and values produced while a seal is down will be re-wrapped with all the seals when all seals become healthy again.\n\nAn operator should choose two seals that are unlikely to become unavailable at the same time. For example, they may choose KMS keys in two cloud regions, from two different providers, or a mix of HSM, KMS, or Transit seals.\n\nWhen an operator configures an additional seal or removes a seal (one at a time) and restarts Vault, Vault will automatically detect that it needs to re-wrap CSPs and seal wrapped values, and will start the process. Seal re-wrapping can be monitored via the logs or via the `sys\/seal-status` endpoint. While a re-wrap is in progress, or could not complete successfully, changes to the seal configuration are not allowed.\n\nIn addition to high availability, Seal HA can be used to migrate between two auto seals in a simplified manner. To migrate in this way, follow the steps in the migration post-Vault 1.16.0 via Seal HA for auto seals (Enterprise) section.\n\nNote that Shamir seals are not auto seals and cannot be included in a Seal HA setup. This is because auto seals support seal wrap while Shamir seals do not, so the loss of the auto seal does not necessarily leave Vault in a fully available state.\n\n### Use and configuration\n\nRefer to the [configuration](\/vault\/docs\/configuration\/seal\/seal-ha) section for details on configuring Seal HA.\n\n### Seal re-wrapping\n\nWhenever the seal configuration changes, Vault must re-wrap all CSPs and seal wrapped values to ensure each value has an entry encrypted by all configured seals. Vault detects these configuration changes automatically and triggers a re-wrap. Re-wraps can take some time, depending on the number of seal wrapped values. While re-wrapping is in progress, no configuration changes to the seals can be made. Progress of the re-wrap can be monitored using the [`sys\/sealwrap\/rewrap`](\/vault\/api-docs\/system\/sealwrap-rewrap) endpoint.\n\n### Limitations and known issues\n\nIn order to limit complexity and increase safety, there are some limitations to the use and configuration of Seal HA:\n\n* Vault must be configured for a single seal at the time of initialization. Extra seals can then be added.\n* Seals must be added or removed one at a time.\n* Only auto seals can be used in HA configurations; Shamir and auto seals cannot be mixed.\n* A maximum of three seals can be configured.\n* As seal wrapped values must be wrapped by all configured seals, it is possible that large values may fail to persist, as the size of the entry is multiplied by the number of seals, causing it to exceed the storage entry size limit. An example would be storing a large document in KVv2 with seal wrapping enabled.\n* It is not possible to rotate the data encryption key nor the recovery keys unless all seals are healthy."}
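The re-wrap invariant described above (every CSP and seal-wrapped value must carry an entry encrypted by every configured seal) can be illustrated with a toy model. The seal names and the fake `wrap` function below are illustrative assumptions, not Vault internals:

```python
def wrap(seal: str, plaintext: str) -> str:
    """Stand-in for encrypting plaintext with a seal; not real cryptography."""
    return f"{seal}({plaintext})"

# One stored value, currently wrapped by the two originally configured seals.
store = {"kv/doc": {s: wrap(s, "secret") for s in ("kms-east", "kms-west")}}

seals = ["kms-east", "kms-west", "hsm-1"]  # operator added a third seal

def rewrap(store: dict, seals: list) -> None:
    # Ensure every stored value has one wrapped entry per configured seal.
    for entries in store.values():
        plaintext = "secret"  # in Vault this is recovered via any healthy seal
        for s in seals:
            entries.setdefault(s, wrap(s, plaintext))

rewrap(store, seals)
# Every value is now wrapped by all three configured seals.
```

Note how entry size grows linearly with the number of seals, which is why large seal-wrapped values can exceed the storage entry size limit.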
{"questions":"vault Describes how Vault can be an OIDC identity provider OIDC provider page title OIDC Provider This document provides conceptual information about the Vault OpenID Connect OIDC identity layout docs","answers":"---\nlayout: docs\npage_title: OIDC Provider\ndescription: >-\n  Describes how Vault can be an OIDC identity provider.\n---\n\n# OIDC provider\n\nThis document provides conceptual information about the Vault **OpenID Connect (OIDC) identity\nprovider** feature. This feature enables client applications that speak the OIDC protocol to\nleverage Vault's source of [identity](\/vault\/docs\/concepts\/identity) and wide range of [authentication methods](\/vault\/docs\/auth)\nwhen authenticating end-users. For more information about the usage of Vault's OIDC provider,\nrefer to the [OIDC identity provider](\/vault\/docs\/secrets\/identity\/oidc-provider) documentation.\n\n## Configuration options\n\nThe next few sections of the document provide implementation details for each resource that permits Vault configuration as an OIDC identity provider.\n\n### OIDC providers\n\nEach Vault namespace will contain a built-in provider resource named `default`. The `default`\nprovider will allow all client applications within the namespace to use it for OIDC flows.\nThe `default` provider can be modified but not deleted.\n\nAdditionally, a Vault namespace may contain several provider resources. Each configured provider will publish the APIs listed within the [OIDC flow](\/vault\/docs\/concepts\/oidc-provider#oidc-flow) section. 
The APIs will be served via backend path-based routing on Vault's listen [address](\/vault\/docs\/configuration\/listener\/tcp#address).\n\nA provider has the following configuration parameters:\n\n* **Issuer URL**: used in the `iss` claim of ID tokens\n* **Allowed client IDs**: limits which clients can access the provider\n* **Scopes supported**: limits what identity information is available as claims\n\nThe issuer URL parameter is necessary for the validation of ID tokens by clients. If an issuer URL is not provided explicitly, it will default to a URL with Vault's [api_addr](\/vault\/docs\/configuration#api_addr) as the `scheme:\/\/host:port` component and `\/v1\/:namespace\/identity\/oidc\/provider\/:name` as the path component. This means tokens issued by a provider in a specified Vault cluster must be validated within that same cluster. If the issuer URL is provided explicitly, it must point to a Vault instance that is network-reachable by clients for ID token validation.\n\nThe allowed client IDs parameter utilizes the list of client IDs that have been generated by Vault as a part of client registration. By default, all clients will be *disallowed*. Providing `*` as the parameter value will allow all clients to use the provider.\n\nThe scopes parameter employs a list of references to named scope resources. The values provided are discoverable via the `scopes_supported` key in the OIDC discovery document of the provider. By default, a provider will have the `openid` scope available. 
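As a concrete illustration of the default issuer URL rule above, a minimal sketch; the `api_addr` value, namespace, and provider name are assumed placeholders:

```python
# Assemble the default issuer URL from Vault's api_addr plus the
# /v1/:namespace/identity/oidc/provider/:name path component.
api_addr = "https://vault.example.com:8200"  # assumed api_addr (scheme://host:port)
namespace = "ns1"                            # assumed namespace containing the provider
provider = "default"                         # built-in provider resource name

issuer = f"{api_addr}/v1/{namespace}/identity/oidc/provider/{provider}"
# ID tokens issued by this provider carry this value in their `iss` claim.
```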
See the scopes section below for more details on the `openid` scope.\n\n### Scopes\n\nProviders may reference scope resources via the `scopes_supported` parameter to make specific identity information available as claims.\n\nA scope has the following configuration parameters:\n\n* **Description**: identity information captured by the scope\n* **Template**: maps individual claims to Vault identity information\n\nThe template parameter takes advantage of the [JSON-based templating](\/vault\/api-docs\/secret\/identity\/tokens#template) used by identity tokens for claims mapping. This means the parameter will take a JSON string of arbitrary structure where the values may be replaced with specific identity information. Template parameters that are not present for a Vault identity are omitted from the resulting claims without an error.\n\nExample of a JSON template for a scope, using parameters from the table below:\n\n```\n{\n    \"username\": {{identity.entity.name}},\n    \"contact\": {\n        \"email\": {{identity.entity.metadata.email}},\n        \"phone_number\": {{identity.entity.metadata.phone_number}}\n    },\n    \"groups\": {{identity.entity.groups.names}}\n}\n```\n\nThe full list of template parameters is included in the following table:\n\n| Name                                                                             | Description                                                                             |\n| :------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------- |\n| `identity.entity.id`                                                             | The entity's ID                                                                         |\n| `identity.entity.name`                                                           | The entity's name                                                                       |\n| `identity.entity.groups.ids`                                                     | The IDs of the groups the entity is a member of                                         |\n| 
`identity.entity.groups.names`                                                   | The names of the groups the entity is a member of                                       |\n| `identity.entity.metadata`                                                       | Metadata associated with the entity                                                     |\n| `identity.entity.metadata.<metadata key>`                                        | Metadata associated with the entity for the given key                                   |\n| `identity.entity.aliases.<mount accessor>.id`                                    | Entity alias ID for the given mount                                                     |\n| `identity.entity.aliases.<mount accessor>.name`                                  | Entity alias name for the given mount                                                   |\n| `identity.entity.aliases.<mount accessor>.metadata`                              | Metadata associated with the alias for the given mount                                  |\n| `identity.entity.aliases.<mount accessor>.metadata.<metadata key>`               | Metadata associated with the alias for the given mount and metadata key                 |\n| `identity.entity.aliases.<mount accessor>.custom_metadata`                       | Custom metadata associated with the alias for the given mount                           |\n| `identity.entity.aliases.<mount accessor>.custom_metadata.<custom_metadata key>` | Custom metadata associated with the alias for the given mount and custom metadata key   |\n| `time.now`                                                                       | Current time as integral seconds since the Epoch                                        |\n| `time.now.plus.<duration>`                                                       | Current time plus a [duration format string](\/vault\/docs\/concepts\/duration-format)                 |\n| `time.now.minus.<duration>`                      
                                | Current time minus a [duration format string](\/vault\/docs\/concepts\/duration-format)                |\n\nSeveral named scopes can be made available on an individual provider. Note that the top-level keys in a JSON template may conflict with those in another scope. When scopes are made available on a provider, their templates are checked for top-level conflicts. A warning will be issued to the Vault operator if any conflicts are found. This may result in an error if the scopes are requested in an OIDC Authentication Request.\n\nThe `openid` scope is a special case that may not be modified or deleted. The scope exists in Vault and is supported by each provider by default. The scope represents the minimum set of claims required by the OIDC specification for inclusion in ID tokens. As such, templates may not contain top-level keys that overwrite the claims populated by the `openid` scope.\n\nThe following defines the claims key and value mapping for the `openid` scope:\n\n* `iss` - configured issuer of the provider\n* `sub` - unique entity ID of the Vault user\n* `aud` - ID of the client\n* `iat` - time of token issue\n* `exp` - time of token issue + ID token TTL\n\n### Client applications\n\nA client resource represents an application that wants to delegate end-user authentication\nto Vault using the OIDC protocol. 
The information provided by a client resource can be used\nto configure an OIDC [relying party](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#Terminology).\n\nA client has the following configuration parameters:\n\n* **Redirect URIs**: limits the valid redirect URIs in an authentication request\n* **Assignments**: determine who can authenticate with the client\n* **Key**: used to sign the ID tokens\n* **ID token TTL**: specifies the time-to-live for ID tokens\n* **Access token TTL**: specifies the time-to-live for access tokens\n* **Client type**: determines the client's ability to maintain confidentiality of credentials\n\nThe `key` parameter is optional. The key will be used to sign ID tokens for the client.\nIt cannot be modified after creation. If not supplied, the key defaults to the built-in\n[default key](\/vault\/docs\/concepts\/oidc-provider#keys).\n\nA `client_id` is generated and returned after a successful client registration. The\n`client_id` uniquely identifies the client. Its value will be a string with 32 random\ncharacters from the base62 character set.\n\n~> **Note**: At least one of the redirect URIs of a client must exactly match the `redirect_uri` parameter used in an authentication request initiated by the client.\n\n#### Client types\n\nA client resource has a `client_type` parameter which specifies the OAuth 2.0\n[client type](https:\/\/datatracker.ietf.org\/doc\/html\/rfc6749#section-2.1) based on\nits ability to maintain confidentiality of credentials. The following sections detail\nthe differences between confidential and public clients in Vault.\n\n##### Confidential\n\nConfidential clients are capable of maintaining the confidentiality of their credentials.\nConfidential clients have a `client_secret`. 
The `client_secret` will have a prefix of\n`hvo_secret` followed by 64 random characters in the base62 character set.\n\nConfidential clients may use Proof Key for Code Exchange ([PKCE](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7636))\nduring the authorization code flow.\n\nConfidential clients must authenticate to the token endpoint using the\n`client_secret_basic` or `client_secret_post` [client authentication method](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#ClientAuthentication).\n\n##### Public\n\nPublic clients are not capable of maintaining the confidentiality of their credentials.\nAs such, public clients do not have a `client_secret`.\n\nPublic clients must use Proof Key for Code Exchange ([PKCE](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7636))\nduring the authorization code flow.\n\nPublic clients use the `none` [client authentication method](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#ClientAuthentication).\n\n### Assignments\n\nAssignment resources are referenced by clients via the `assignments` parameter. This parameter limits the set of Vault users allowed to authenticate. The assignments of an associated client are validated during the authentication request, ensuring that the Vault identity associated with the request is a member of the assignment's entities or groups.\n\nEach Vault namespace will contain a built-in assignment resource named `allow_all`. The\n`allow_all` assignment allows all Vault entities to authenticate through a client. The\n`allow_all` assignment cannot be modified or deleted.\n\n### Keys\n\nKey resources are referenced by clients via the `key` parameter. This parameter specifies\nthe key that will be used to sign ID tokens for the client. See existing\n[documentation](\/vault\/api-docs\/secret\/identity\/tokens#create-a-named-key) for details on keyring\nmanagement, supported signing algorithms, rotation periods, and verification TTLs. 
Currently,\na key referenced by a client cannot be changed.\n\nEach Vault namespace will contain a built-in key resource named `default`. The `default`\nkey can be modified but not deleted. Clients that don't specify the `key` parameter at\ncreation time will use the `default` key.\n\nThe `default` key will have the following configuration:\n\n- `algorithm` - `RS256`\n- `allowed_client_ids` - `*`\n- `rotation_period` - `24h`\n- `verification_ttl` - `24h`\n\n## OIDC flow\n\n~> **Note**: The Vault OIDC Provider feature currently only supports the [authorization code flow](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#CodeFlowAuth).\n\nThe following sections provide implementation details for the OIDC compliant APIs provided by Vault OIDC providers.\n\nVault OIDC providers enable registered clients to authenticate and obtain identity information (or \"claims\") for their end-users. They do this by providing the APIs and behavior required to satisfy the OIDC specification for the [authorization code flow](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#CodeFlowAuth). All clients are treated as first-party. This means that end-users will not be required to provide consent to the provider as detailed in section [3.1.2.4](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#Consent) of the OIDC specification. The provider will release information to clients as long as the end-user has ACL access to the provider and their identity has been authorized via an assignment.\n\nVault OIDC providers implement Proof Key for Code Exchange ([PKCE](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7636))\nto mitigate authorization code interception attacks. PKCE is required for `public` client types\nand optional for `confidential` client types.\n\n### OpenID configuration\n\nEach provider offers an unauthenticated endpoint that facilitates OIDC Discovery. 
All required metadata listed in [OpenID Provider Metadata](https:\/\/openid.net\/specs\/openid-connect-discovery-1_0.html#ProviderMetadata) is included in the discovery document. Additionally, the recommended `userinfo_endpoint` and `scopes_supported` metadata are included.\n\n### Keys\n\nEach provider offers an unauthenticated endpoint that provides the public portion of keys used to sign ID tokens. The keys are published in a JSON Web Key Set [(JWKS)](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7517) format. The keyset for an individual provider contains the keys referenced by all clients via the `allowed_client_ids` configuration parameter. A `Cache-Control` header is set on responses, allowing clients to refresh their keys upon rotation. The `max-age` of the header is set based on the earliest rotation time of any of the keys in the keyset.\n\n### Authorization endpoint\n\nEach provider offers an authenticated [authorization endpoint](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#AuthorizationEndpoint). The authorization endpoint for each provider is added to Vault's [default policy](\/vault\/docs\/concepts\/policies#default-policy) using the `identity\/oidc\/provider\/+\/authorize` path. The endpoint incorporates all required [authentication request](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#AuthRequest) parameters as input.\n\nThe endpoint [validates](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#AuthRequestValidation) client requests and ensures that all required parameters are present and valid. The `redirect_uri` of the request is validated against the client's `redirect_uris`. The requesting Vault entity will be validated against the client's `assignments`. An appropriate [error code](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#AuthError) is returned for invalid requests.\n\nAn authorization code is generated upon successful validation of the request. 
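On the client side, assembling such an authentication request, including the PKCE pairing described earlier, can be sketched with only the standard library. This is an illustrative client snippet, not Vault code; the client ID, redirect URI, and scope names are placeholder assumptions:

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

# PKCE (RFC 7636, S256 method): the client keeps code_verifier private and
# sends only the derived code_challenge in the authentication request.
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
code_challenge = base64.urlsafe_b64encode(
    hashlib.sha256(code_verifier.encode("ascii")).digest()
).rstrip(b"=").decode()

# Authentication request parameters; values here are placeholders.
query = urlencode({
    "response_type": "code",                       # authorization code flow
    "client_id": "<client_id from registration>",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid groups",                      # openid plus any named scopes
    "state": secrets.token_urlsafe(16),            # echoed back on the redirect
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
})
# GET <vault-addr>/v1/identity/oidc/provider/default/authorize?<query>
# (authenticated with a Vault token; on success Vault redirects to
# redirect_uri with code and state as query parameters)
```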
The authorization code is single-use and cached with a lifetime of approximately 5 minutes, which mitigates the risk of leaks. A response including the original `state` presented by the client and `code` will be returned to the Vault UI which initiated the request. Vault will issue an HTTP 302 redirect to the `redirect_uri` of the request, which includes the `code` and `state` as query parameters.\n\n### Token endpoint\n\nEach provider will offer a [token endpoint](\/vault\/api-docs\/secret\/identity\/oidc-provider#token-endpoint). The endpoint may be unauthenticated in Vault but is authenticated by requiring a `client_secret` as described in [client authentication](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#ClientAuthentication). The endpoint ingests all required [token request](\/vault\/api-docs\/secret\/identity\/oidc-provider#parameters-15) parameters as input. The endpoint [validates](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#TokenRequestValidation) the client requests and exchanges an authorization code for the ID token and access token. The cache of authorization codes will be verified against the code presented in the exchange. The appropriate [error codes](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#TokenErrorResponse) are returned for all invalid requests.\n\nThe ID token is generated and returned upon successful client authentication and request validation. The ID token will contain a combination of required and configurable claims. The required claims are enumerated in the scopes section above for the `openid` scope. The configurable claims are populated by templates associated with the scopes provided in the authentication request that generated the authorization code.\n\nAn access token is also generated and returned upon successful client authentication and request validation. 
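As an illustration of the `client_secret_basic` method at the token endpoint, a hedged sketch; the credentials, code, and URLs below are placeholder assumptions, not real Vault values:

```python
import base64
from urllib.parse import urlencode

# client_secret_basic: client_id and client_secret are joined with a colon
# and base64-encoded into the Authorization header. Placeholder values only.
client_id = "2gAsJyXwqpJlEkzyjjbddJittKdUxwqH"  # illustrative 32-char base62 ID
client_secret = "hvo_secret_example"            # illustrative secret

basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
headers = {
    "Authorization": f"Basic {basic}",
    "Content-Type": "application/x-www-form-urlencoded",
}
body = urlencode({
    "grant_type": "authorization_code",
    "code": "<code from the authorize redirect>",
    "redirect_uri": "https://app.example.com/callback",
    "code_verifier": "<verifier kept from the PKCE step, if used>",
})
# POST body to <vault-addr>/v1/identity/oidc/provider/default/token;
# a successful exchange returns the ID token and access token.
```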
The access token is a Vault [batch token](\/vault\/docs\/concepts\/tokens#batch-tokens) with a policy that only provides read access to the issuing provider's [userinfo endpoint](\/vault\/api-docs\/secret\/identity\/oidc-provider#userinfo-endpoint). The access token is also a TTL as defined by the `access_token_ttl` of the requesting client.\n\n### UserInfo endpoint\n\nEach provider provides an authenticated [userinfo endpoint](\/vault\/api-docs\/secret\/identity\/oidc-provider#userinfo-endpoint). The endpoint accepts the access token obtained from the token endpoint as a [bearer token](\/vault\/api-docs#authentication). The userinfo response is a JSON object with the `application\/json` content type. The JSON object contains claims for the Vault entity associated with the access token. The claims returned are determined by the scopes requested in the authentication request that produced the access token. The `sub` claim is always returned as the entity ID in the userinfo response.","site":"vault","answers_cleaned":"    layout  docs page title  OIDC Provider description       Describes how Vault can be an OIDC identity provider         OIDC provider  This document provides conceptual information about the Vault   OpenID Connect  OIDC  identity provider   feature  This feature enables client applications that speak the OIDC protocol to leverage Vault s source of  identity   vault docs concepts identity  and wide range of  authentication methods   vault docs auth  when authenticating end users  For more information about the usage of Vault s OIDC provider  refer to the  OIDC identity provider   vault docs secrets identity oidc provider  documentation      Configuration options  The next few sections of the document provide implementation details for each resource that permits Vault configuration as an OIDC identity provider       OIDC providers  Each Vault namespace will contain a built in provider resource named  default   The  default  provider will allow all 
client applications within the namespace to use it for OIDC flows  The  default  provider can be modified but not deleted   Additionally  a Vault namespace may contain several provider resources  Each configured provider will publish the APIs listed within the  OIDC flow   vault docs concepts oidc provider oidc flow  section  The APIs will be served via backend path based routing on Vault s listen  address   vault docs configuration listener tcp address    A provider has the following configuration parameters       Issuer URL    used in the  iss  claim of ID tokens     Allowed client IDs    limits which clients can access the provider     Scopes supported    limits what identity information is available as claims  The issuer URL parameter is necessary for the validation of ID tokens by clients  If an URL parameter is not provided explicitly  it will default to a URL with Vault s  api addr   vault docs configuration api addr  as the  scheme   host port  component and   v1  namespace identity oidc provider  name  as the path component  This means tokens issued by a provider in a specified Vault cluster must be validated within that same cluster  If the issuer URL is provided explicitly  it must point to a Vault instance that is network reachable by clients for ID token validation   The allowed client IDs parameter utilizes the list of client IDs that have been generated by Vault as a part of client registration  By default  all clients will be  disallowed   Providing     as the parameter value will allow all clients to use the provider   The scopes parameter employs a list of references to named scope resources  The values provided are discoverable by the  scopes supported  key in the OIDC discovery document of the provider  By default  a provider will have the  openid  scope available  See the scopes section below for more details on the  openid  scope       Scopes  Providers may reference scope resources via the  scopes supported  parameter to make specific 
identity information available as claims   A scope will have the following configuration parameters       Description    identity information captured by the scope     Template    maps individual claims to Vault identity information   The template parameter takes advantage of the  JSON based templating   vault api docs secret identity tokens template  used by identity tokens for claims mapping  This means the parameter will take a JSON string of arbitrary structure where the values may be replaced with specific identity information  Template parameters that are not present for a Vault identity are omitted from the resulting claims without an error    Example of a JSON template for a scope              username          contact              email              phone number                groups           The full list of template parameters are included in the following table     Name                                                                               Description                                                                                                                                                                                                                                                                 identity entity id                                                                The entity s ID                                                                              identity entity name                                                              The entity s name                                                                            identity entity groups ids                                                        The IDs of the groups the entity is a member of                                              identity entity groups names                                                      The names of the groups the entity is a member of                                            identity entity metadata                                  
                        Metadata associated with the entity                                                          identity entity metadata  metadata key                                            Metadata associated with the entity for the given key                                        identity entity aliases  mount accessor  id                                       Entity alias ID for the given mount                                                          identity entity aliases  mount accessor  name                                     Entity alias name for the given mount                                                        identity entity aliases  mount accessor  metadata                                 Metadata associated with the alias for the given mount                                       identity entity aliases  mount accessor  metadata  metadata key                   Metadata associated with the alias for the given mount and metadata key                      identity entity aliases  mount accessor  custom metadata                          Custom metadata associated with the alias for the given mount                                identity entity aliases  mount accessor  custom metadata  custom metadata key     Custom metadata associated with the alias for the given mount and custom metadata key        time now                                                                          Current time as integral seconds since the Epoch                                             time now plus  duration                                                           Current time plus a  duration format string   vault docs concepts duration format                       time now minus  duration                                                          Current time minus a  duration format string   vault docs concepts duration format                     Several named scopes can be made available on an individual provider  Note that the top level keys in a JSON 
template may conflict with those in another scope  When scopes are made available on a provider  their templates are checked for top level conflicts  A warning will be issued to the Vault operator if any conflicts are found  This may result in an error if the scopes are requested in an OIDC Authentication Request   The  openid  scope is a unique case scope that may not be modified or deleted  The scope will exist in Vault and supported by each provider by default  The scope represents the minimum set of claims required by the OIDC specification for inclusion in ID tokens  As such  templates may not contain top level keys that overwrite the claims populated by the openid scope   The following defines the claims key and value mapping for the  openid  scope      iss   configured issuer of the provider    sub   unique entity ID of the Vault user    aud   ID of the client    iat   time of token issue    exp   time of token issue   ID token TTL      Client applications  A client resource represents an application that wants to delegate end user authentication to Vault using the OIDC protocol  The information provided by a client resource can be used to configure an OIDC  relying party  https   openid net specs openid connect core 1 0 html Terminology    A client has the following configuration parameters       Redirect URIs    limits the valid redirect URIs in an authentication request     Assignments    determine who can authenticate with the client     Key    used to sign the ID tokens     ID token TTL    specifies the time to live for ID tokens     Access token TTL    specifies the time to live for access tokens     Client type    determines the client s ability to maintain confidentiality of credentials  The  key  parameter is optional  The key will be used to sign ID tokens for the client  It cannot be modified after creation  If not supplied  defaults to the built in  default key   vault docs concepts oidc provider keys    A  client id  is generated and returned 
after a successful client registration  The  client id  uniquely identifies the client  Its value will be a string with 32 random characters from the base62 character set        Note    At least one of the redirect URIs of a client must exactly match the  redirect uri  parameter used in an authentication request initiated by the client        Client types  A client resource has a  client type  parameter which specifies the OAuth 2 0  client type  https   datatracker ietf org doc html rfc6749 section 2 1  based on its ability to maintain confidentiality of credentials  The following sections detail the differences between confidential and public clients in Vault         Confidential  Confidential clients are capable of maintaining the confidentiality of their credentials  Confidential clients have a  client secret   The  client secret  will have a prefix of  hvo secret  followed by 64 random characters in the base62 character set   Confidential clients may use Proof Key for Code Exchange   PKCE  https   datatracker ietf org doc html rfc7636   during the authorization code flow   Confidential clients must authenticate to the token endpoint using the  client secret basic  or  client secret post   client authentication method  https   openid net specs openid connect core 1 0 html ClientAuthentication          Public  Public clients are not capable of maintaining the confidentiality of their credentials  As such  public clients do not have a  client secret    Public clients must use Proof Key for Code Exchange   PKCE  https   datatracker ietf org doc html rfc7636   during the authorization code flow   Public clients use the  none   client authentication method  https   openid net specs openid connect core 1 0 html ClientAuthentication        Assignments  Assignment resources are referenced by clients via the  assignments  parameter  This parameter limits the set of Vault users allowed to authenticate  The assignments of an associated client are validated during the 
authentication request, ensuring that the Vault identity associated with the request is a member of the assignment's entities or groups.\n\nEach Vault namespace will contain a built-in assignment resource named `allow_all`. The `allow_all` assignment allows all Vault entities to authenticate through a client. The `allow_all` assignment cannot be modified or deleted.\n\n### Keys\n\nKey resources are referenced by clients via the `key` parameter. This parameter specifies the key that will be used to sign ID tokens for the client. See the existing [documentation](\/vault\/api-docs\/secret\/identity\/tokens#create-a-named-key) for details on keyring management, supported signing algorithms, rotation periods, and verification TTLs. Currently, a key referenced by a client cannot be changed.\n\nEach Vault namespace will contain a built-in key resource named `default`. The `default` key can be modified but not deleted. Clients that don't specify the `key` parameter at creation time will use the `default` key. The `default` key will have the following configuration:\n\n- `algorithm`: `RS256`\n- `allowed_client_ids`: `*`\n- `rotation_period`: `24h`\n- `verification_ttl`: `24h`\n\n## OIDC flow\n\n<Note>\n\nThe Vault OIDC Provider feature currently only supports the [authorization code flow](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#CodeFlowAuth).\n\n<\/Note>\n\nThe following sections provide implementation details for the OIDC-compliant APIs provided by Vault OIDC providers.\n\nVault OIDC providers enable registered clients to authenticate and obtain identity information (or \"claims\") for their end users. They do this by providing the APIs and behavior required to satisfy the OIDC specification for the [authorization code flow](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#CodeFlowAuth). All clients are treated as first-party. This means that end users will not be required to provide consent to the provider as detailed in section [3.1.2.4](https:\/\/openid.net\/specs\/openid-connect-core-1_0.
html#Consent) of the OIDC specification. The provider will release information to clients as long as the end user has ACL access to the provider and their identity has been authorized via an assignment.\n\nVault OIDC providers implement Proof Key for Code Exchange ([PKCE](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7636)) to mitigate authorization code interception attacks. PKCE is required for `public` client types and optional for `confidential` client types.\n\n### OpenID configuration\n\nEach provider offers an unauthenticated endpoint that facilitates OIDC Discovery. All required metadata listed in [OpenID Provider Metadata](https:\/\/openid.net\/specs\/openid-connect-discovery-1_0.html#ProviderMetadata) is included in the discovery document. Additionally, the recommended `userinfo_endpoint` and `scopes_supported` metadata are included.\n\n### Keys\n\nEach provider offers an unauthenticated endpoint that provides the public portion of keys used to sign ID tokens. The keys are published in [JSON Web Key Set (JWKS)](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7517) format. The keyset for an individual provider contains the keys referenced by all clients via the `allowed_client_ids` configuration parameter. A `Cache-Control` header is set on responses, allowing clients to refresh their keys upon rotation. The `max-age` of the header is set based on the earliest rotation time of any of the keys in the keyset.\n\n### Authorization endpoint\n\nEach provider offers an authenticated [authorization endpoint](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#AuthorizationEndpoint). The authorization endpoint for each provider is added to Vault's [default policy](\/vault\/docs\/concepts\/policies#default-policy) using the `identity\/oidc\/provider\/+\/authorize` path. The endpoint incorporates all required [authentication request](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#AuthRequest) parameters as input.\n\nThe endpoint [validates](https:\/\/openid.net\/specs\/openid-connect-core-1_0.
html#AuthRequestValidation) client requests and ensures that all required parameters are present and valid. The `redirect_uri` of the request is validated against the client's `redirect_uris`. The requesting Vault entity will be validated against the client's `assignments`. An appropriate [error code](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#AuthError) is returned for invalid requests.\n\nAn authorization code is generated upon successful validation of the request. The authorization code is single-use and cached with a lifetime of approximately 5 minutes, which mitigates the risk of leaks. A response including the original `state` presented by the client and the `code` will be returned to the Vault UI which initiated the request. Vault will issue an HTTP 302 redirect to the `redirect_uri` of the request, which includes the `code` and `state` as query parameters.\n\n### Token endpoint\n\nEach provider will offer a [token endpoint](\/vault\/api-docs\/secret\/identity\/oidc-provider#token-endpoint). The endpoint may be unauthenticated in Vault but is authenticated by requiring a `client_secret` as described in [client authentication](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#ClientAuthentication). The endpoint ingests all required [token request](\/vault\/api-docs\/secret\/identity\/oidc-provider#parameters-15) parameters as input. The endpoint [validates](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#TokenRequestValidation) client requests and exchanges an authorization code for the ID token and access token. The cache of authorization codes is verified against the code presented in the exchange. Appropriate [error codes](https:\/\/openid.net\/specs\/openid-connect-core-1_0.html#TokenErrorResponse) are returned for all invalid requests.\n\nThe ID token is generated and returned upon successful client authentication and request validation. The ID token will contain a combination of required and configurable claims. The required claims are
enumerated in the scopes section above for the `openid` scope. The configurable claims are populated by templates associated with the scopes provided in the authentication request that generated the authorization code.\n\nAn access token is also generated and returned upon successful client authentication and request validation. The access token is a Vault [batch token](\/vault\/docs\/concepts\/tokens#batch-tokens) with a policy that only provides read access to the issuing provider's [userinfo endpoint](\/vault\/api-docs\/secret\/identity\/oidc-provider#userinfo-endpoint). The access token also has a TTL as defined by the `access_token_ttl` of the requesting client.\n\n### UserInfo endpoint\n\nEach provider offers an authenticated [userinfo endpoint](\/vault\/api-docs\/secret\/identity\/oidc-provider#userinfo-endpoint). The endpoint accepts the access token obtained from the token endpoint as a [bearer token](\/vault\/api-docs#authentication). The userinfo response is a JSON object with the `application\/json` content type. The JSON object contains claims for the Vault entity associated with the access token. The claims returned are determined by the scopes requested in the authentication request that produced the access token. The `sub` claim is always returned as the entity ID in the userinfo response."}
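The PKCE requirement above (mandatory for `public` clients, optional for `confidential` clients) can be sketched concretely. The helper below is illustrative stdlib Python; `make_pkce_pair` is not part of Vault or any OIDC SDK. It derives the S256 `code_challenge` a client would send to the authorization endpoint from a freshly generated `code_verifier`:

```python
import base64
import hashlib
import secrets


def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes encode to a 43-character base64url verifier (padding stripped).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # challenge = BASE64URL(SHA256(ASCII(verifier))), per RFC 7636 section 4.2.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The client includes `code_challenge` and `code_challenge_method=S256` in the authorization request, then proves possession of the verifier by sending the raw `code_verifier` in the subsequent token exchange.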
{"questions":"vault Vault has the ability to integrate with OpenPGP compatible programs like GnuPG and services like Keybase io to provide an additional layer of security layout docs page title Using PGP GnuPG and Keybase when performing certain operations This page details the various PGP integrations their use and operation","answers":"---\nlayout: docs\npage_title: 'Using PGP, GnuPG, and Keybase'\ndescription: |-\n  Vault has the ability to integrate with OpenPGP-compatible programs like\n  GnuPG and services like Keybase.io to provide an additional layer of security\n  when performing certain operations.  This page details the various PGP\n  integrations, their use, and operation.\n---\n\n# Using PGP, GnuPG, and keybase\n\nVault has the ability to integrate with OpenPGP-compatible programs like GnuPG\nand services like Keybase.io to provide an additional layer of security when\nperforming certain operations. This page details the various PGP integrations,\ntheir use, and operation.\n\nKeybase.io support is available only in the command-line tool and not via the\nVault HTTP API; tools that help with initialization should use the Keybase.io\nAPI to obtain the PGP keys needed for a secure initialization if you\nwant them to use Keybase for keys.\n\nOnce Vault has been initialized, it is possible to use Keybase to decrypt\nthe shards and unseal normally.\n\n## Initializing with PGP\n\nOne of the early fundamental problems when bootstrapping and initializing Vault\nwas that the first user (the initializer) received a plain-text copy of all of\nthe unseal keys. This defeats the promises of Vault's security model, and it\nalso makes the distribution of those keys more difficult. Since Vault 0.3,\nVault can optionally be initialized using PGP keys. In this mode, Vault will\ngenerate the unseal keys and then immediately encrypt them using the given\nusers' public PGP keys. 
Only the owner of the corresponding private key is then\nable to decrypt the value, revealing the plain-text unseal key.\n\nFirst, you must create, acquire, or import the appropriate key(s) onto the\nlocal machine from which you are initializing Vault. This guide will not\nattempt to cover all aspects of PGP keys but give examples using two popular\nprograms: Keybase and GnuPG.\n\nFor beginners, we suggest using [Keybase.io](https:\/\/keybase.io\/) (\"Keybase\")\nas it is both simpler to use and has a number of useful behaviors and properties\naround key management, such as verification of users' identities using a number\nof public online sources. It also exposes the ability for users to have PGP\nkeys generated, stored, and managed securely on their servers. Using Vault with\nKeybase will be discussed first as it is simpler.\n\n## Initializing with keybase\n\nTo generate unseal keys for Keybase users, Vault accepts the `keybase:` prefix\nto the `-pgp-keys` argument:\n\n```shell-session\n$ vault operator init -key-shares=3 -key-threshold=2 \\\n    -pgp-keys=\"keybase:jefferai,keybase:vishalnayak,keybase:sethvargo\"\n```\n\nThis requires far fewer steps than traditional PGP (e.g. with `gpg`) because\nKeybase handles a few of the tedious steps. The output will be similar to\nthe following:\n\n```\nKey 1: wcBMA37rwGt6FS1VAQgAk1q8XQh6yc...\nKey 2: wcBMA0wwnMXgRzYYAQgAavqbTCxZGD...\nKey 3: wcFMA2DjqDb4YhTAARAAeTFyYxPmUd...\n...\n```\n\nThe output should be rather long in comparison to a regular unseal key. These\nkeys are encrypted, and only the user holding the corresponding private key can\ndecrypt the value. The keys are encrypted in the order in which they were specified in\nthe `-pgp-keys` attribute. As such, the keys belong to the respective Keybase\naccounts of `jefferai`, `vishalnayak`, and `sethvargo`. These keys can be\ndistributed over almost any medium, although common sense and judgement are\nbest advised. 
The encrypted keys are base64 encoded before returning.\n\n### Unsealing with keybase\n\nAs a user, the easiest way to decrypt your unseal key is with the Keybase CLI\ntool. You can download it from the [Keybase.io download\npage](https:\/\/keybase.io\/download). After you have downloaded and configured\nthe Keybase CLI, you are now tasked with entering your unseal key. To get the\nplain-text unseal key, you must decrypt the value given to you by the\ninitializer. To get the plain-text value, run the following command:\n\n```shell-session\n$ echo \"wcBMA37...\" | base64 --decode | keybase pgp decrypt\n```\n\nAnd replace `wcBMA37...` with the encrypted key.\n\nYou will be prompted to enter your Keybase passphrase. The output will be the\nplain-text unseal key.\n\n```\n6ecb46277133e04b29bd0b1b05e60722dab7cdc684a0d3ee2de50ce4c38a357101\n```\n\nThis is your unseal key in plain-text and should be guarded the same way you\nguard a password. Now you can enter your key to the `unseal` command:\n\n```shell-session\n$ vault operator unseal\nKey (will be hidden): ...\n```\n\n---\n\n## Initializing with GnuPG\n\nGnuPG is an open-source implementation of the OpenPGP standard and is available\non nearly every platform. For more information, please see the [GnuPG\nmanual](https:\/\/gnupg.org\/gph\/en\/manual.html).\n\n<Note>\n\nTo use ECDH keys with Vault you must use GnuPG 2.2.21 or newer.\nRefer to the [GnuPG\/NEWS](https:\/\/dev.gnupg.org\/source\/gnupg\/browse\/master\/NEWS) for further details.\n\n<\/Note>\n\nTo create a new PGP key, run the following command and follow the prompts:\n\n```shell-session\n$ gpg --gen-key\n```\n\nTo import an existing key, download the public key onto disk and run:\n\n```shell-session\n$ gpg --import key.asc\n```\n\nOnce you have imported the users' public keys, you need to save their values\nto disk as either base64 or binary key files. 
For example:\n\n```shell-session\n$ gpg --export 348FFC4C | base64 > seth.asc\n```\n\nThese key files must exist on disk in base64 (the \"standard\" base64 character set,\nwithout ASCII armoring) or binary. Once saved to disk, the path to these files\ncan be specified as an argument to the `-pgp-keys` flag.\n\n```shell-session\n$ vault operator init -key-shares=3 -key-threshold=2 \\\n    -pgp-keys=\"jeff.asc,vishal.asc,seth.asc\"\n```\n\nThe result should look something like this:\n\n```\nKey 1: wcBMA37rwGt6FS1VAQgAk1q8XQh6yc...\nKey 2: wcBMA0wwnMXgRzYYAQgAavqbTCxZGD...\nKey 3: wcFMA2DjqDb4YhTAARAAeTFyYxPmUd...\n...\n```\n\nThe output should be rather long in comparison to a regular unseal key. These\nkeys are encrypted, and only the user holding the corresponding private key\ncan decrypt the value. The keys are encrypted in the order in which they were specified\nin the `-pgp-keys` attribute. As such, the first key belongs to Jeff, the second\nto Vishal, and the third to Seth. These keys can be distributed over almost any\nmedium, although common sense and judgement are best advised. The encrypted\nkeys are base64 encoded before returning.\n\n### Unsealing with GnuPG\n\nAssuming you have been given an unseal key that was encrypted using your public\nPGP key, you are now tasked with entering your unseal key. To get the\nplain-text unseal key, you must decrypt the value given to you by the\ninitializer. To get the plain-text value, run the following command:\n\n```shell-session\n$ echo \"wcBMA37...\" | base64 --decode | gpg -dq\n```\n\nAnd replace `wcBMA37...` with the encrypted key.\n\nIf you encrypted your private PGP key with a passphrase, you may be prompted to\nenter it. After you enter your password, the output will be the plain-text\nkey:\n\n```\n6ecb46277133e04b29bd0b1b05e60722dab7cdc684a0d3ee2de50ce4c38a357101\n```\n\nThis is your unseal key in plain-text and should be guarded the same way you\nguard a password. 
Now you can enter your key to the `unseal` command:\n\n```shell-session\n$ vault operator unseal\nKey (will be hidden): ...\n```","site":"vault"}
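Since the `-pgp-keys` files above must hold either binary key material or standard base64 without ASCII armoring, a quick pre-flight check before running `vault operator init` can catch an accidentally armored export. A minimal sketch in stdlib Python (the `keyfile_format` helper is illustrative, not a Vault or GnuPG tool):

```python
import base64
import binascii


def keyfile_format(path: str) -> str:
    """Classify an exported PGP public key file as 'armored', 'base64', or 'binary'."""
    with open(path, "rb") as f:
        data = f.read()
    if data.lstrip().startswith(b"-----BEGIN"):
        # Output of `gpg --armor`; re-export without --armor before using -pgp-keys.
        return "armored"
    # The base64 CLI wraps lines, so strip all whitespace before validating.
    compact = b"".join(data.split())
    try:
        base64.b64decode(compact, validate=True)
        return "base64"
    except (binascii.Error, ValueError):
        return "binary"
```

An export such as `gpg --export 348FFC4C | base64 > seth.asc` should classify as `base64`, and a raw `gpg --export` as `binary`; both forms are accepted by `-pgp-keys`, so only an `armored` file needs re-exporting.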
{"questions":"vault Cloud access management Vault and Boundary can be used together to provide a modern solution to remote access management in the cloud sidebar title Cloud access management layout docs page title Cloud access management","answers":"---\nlayout: docs\npage_title: Cloud access management\nsidebar_title: Cloud access management\ndescription: >-\n  Vault and Boundary can be used together to provide a modern solution to remote access management in the cloud.\n---\n\n# Cloud access management\n\nModern access management must be as dynamic as the infrastructure, people, and systems it serves. Traditionally, the IP address has been the unit of security control: it serves as a stand-in for identity, and access, including traditional privileged access management (PAM), is managed around it. Identity, by contrast, remains a stable control point even as the infrastructure underneath stays dynamic; this shift toward identity simplifies network topology and modernizes access management. As the new perimeter, [identity](https:\/\/www.hashicorp.com\/resources\/why-should-we-use-identity-based-security-as-we-ado) is the fundamental change agent in access management to infrastructure and resources.\nThis document outlines the security threats and challenges organizations encounter using traditional PAM solutions in the cloud era. 
It also explains why the consumption of [secrets](https:\/\/www.vaultproject.io\/use-cases\/secrets-management) should be independent of privileged access\/session management, and why programmatic access to systems must also interact with secret management outside the traditional PAM process.\n\nHashiCorp Vault and Boundary are security platform building blocks that can address these challenges for large, global enterprises \u2014 especially in regulated industries \u2014 creating a viable path to address modern privileged access challenges at scale.\n\n## The traditional PAM framework\n\nThe traditional PAM framework was conceived for an era of mainframes and monolithic, on-premises infrastructure, believing that any traffic allowed inside an organization's datacenter network was safe and should be allowed broad access to resources in that network. Traditional PAM's main goal was to control elevated (\"privileged\") access and permissions for users, accounts, processes, and systems across an IT environment.\n\nTraditionally, a few highly technical administrators manage PAM by accessing privileged accounts inside the datacenter. It typically takes administrators multiple days to manually onboard credentials mapping back to compute and systems across an IT environment.\n\n<ImageConfig hideBorder caption=\"The traditional PAM framework.\">\n\n![Diagram describing traditional PAM framework](\/img\/diagram-cloud-access-traditional.png)\n\n<\/ImageConfig>\n\nThe incumbent PAM process is often ticket-based (ITIL), requiring multi-person approval. After that, there is typically a manual follow-up process to rotate the credentials exposed to humans since long-lived credentials are a security and regulatory compliance risk. 
\nIn the world of multi- and hybrid-cloud, this traditional PAM framework is ineffective, leading to an exponential increase of human toil and increased risks.\n\n## Where traditional PAM fails\n\nTraditional PAM falls short of modern software delivery needs and security threats in two key areas.\n\n### Dynamic and ephemeral workloads\n\nIn the era of dynamic and ephemeral workloads, a PAM process requiring significant manual intervention introduces risk and does not scale. Infrastructure as code (IaC) has become the standard for automating repeatable IT administrative tasks by building a [platform](https:\/\/www.hashicorp.com\/resources\/what-is-a-platform-team-and-why-do-we-need-them) where developers can go for self-service provisioning, security, networking, and deployment tasks with guardrails. Automating these processes drives cost savings through tool consolidation, time savings, and legacy system deprecation.\n\nTraditional PAM solutions built in the era before the cloud do not fit into this new standard [cloud operating model](https:\/\/www.hashicorp.com\/cloud-operating-model). The manual processes are too slow, the frequency of human intervention invites too many potential errors, and the controls are not granular or modular enough to meet modern security needs. They can negatively impact developer processes and workflows.\n\n### Identity-based access management and zero trust\n\nThe need for organizations to quickly move away from the perimeter-defense-only approach (sometimes called the \"castle-and-moat\" defense) is becoming more urgent. The direction for many leading IT departments is to adopt an identity-based security model, where human and application access is gated using identity through trusted identity providers rather than outmoded identifiers like IP addresses (the traditional approach). 
The National Institute for Standards and Technology (NIST) recommends shifting to identity-based segmentation instead of network segmentation, as workloads, users, data, and credentials change often.\n\nSimilarly, modern best practices encourage the adoption of a [zero-trust architecture](https:\/\/www.hashicorp.com\/solutions\/zero-trust-security). According to NIST:\n\n> \"Zero trust architecture is an end-to-end approach to enterprise resource and data security encompassing **identity** (person and **nonperson entities**), credentials, access management, operations, endpoints, hosting environments, and the interconnecting infrastructure.\" - [NIST 800-207](https:\/\/nvlpubs.nist.gov\/nistpubs\/SpecialPublications\/NIST.SP.800-207.pdf).\n\nNIST's position on how non-human entities authenticate themselves in an enterprise implementing a zero-trust architecture is an open issue.\n\n> \"The associated risk is that an attacker will be able to **induce** or coerce an NPE (non-person entities) to perform some task that the attacker is not privileged to perform. There is also a risk that an attacker could access a software agent's **credentials** and **impersonate the agent** when performing tasks.\" - [NIST 800-207](https:\/\/nvlpubs.nist.gov\/nistpubs\/SpecialPublications\/NIST.SP.800-207.pdf).\n\nNIST's concerns are likely based on poor implementation of non-human authentication. At organizations trying to move away from traditional PAM, a common challenge is automating their credential rotation process and making it less cumbersome to rotate frequently. 
\n\nSolving this issue is essential because long-lived secrets in any environment can lead to [credential stuffing](https:\/\/owasp.org\/www-community\/attacks\/Credential_stuffing); the automated injection of stolen credentials to fraudulently gain access to user accounts costs large organizations more than [$2 million](https:\/\/money.cnn.com\/2018\/03\/18\/technology\/biometrics-workplace\/index.html) annually in remediating actions. It can take 10.5 months to detect and identify credential-stuffing activities.\n\n## Solutions to the traditional PAM challenges\n\nBased on the challenges and security risks of traditional PAM, a modern replacement must meet several requirements:\n\n- Automation and versioned \"as code\" configuration for access and secrets management controls\n- Multi-cloud compatibility\n- Identity-based access controls facilitated by an identity broker with secrets or workload identity\n- Automated secrets rotation, or in some cases, single-use, just-in-time-generated credentials\n\nLet's explore the last two requirements in more detail.\n\n### Workload identity for identity-based access\n\nA [workload identity](https:\/\/learn.microsoft.com\/en-us\/entra\/workload-id\/workload-identities-overview) is an identity you assign to a software workload to authenticate and access other services and resources; it's something you need for your software entity to authenticate with some system.\n\nAccording to a [Microsoft blog post](https:\/\/blog.identitydigest.com\/azuread-federate-k8s\/): \"workload identity is a new capability that allows you to get rid of secrets in several scenarios.\" \n\nWhile using secrets for workload and machine identity is fine as organizations modernize their PAM, they can be compromised in credential stuffing attacks, as mentioned by NIST. 
This is why more solutions are using major cloud providers and their platforms as identity providers to generate workload identities (sometimes called \"machine identities\") as an alternative to using secrets for identity.\n\nWorkload identity sits on a framework where you configure trust relationships between two platforms, establishing a hardened, verifiable identity per workload. Workload and machine identity attestation at the platform level removes the risk of impersonation for non-person entities.\nMany enterprises leverage an identity broker such as HashiCorp Vault to authenticate applications against a trusted source of identity and then leverage that identity to control access to data, systems, shared services, and secrets. An identity broker creates an opportunity to aggregate multiple sources of identity and present them as a single entity to target platforms; applying policy to that entity is vastly simplified.\n\n### Just-in-time credentials\n\nOne of the basic principles of data security is the principle of least privilege, which reduces risk by allowing only specific privileges for specific purposes. However, standing privileges easily violate this principle \u2014 account privileges are always available, even when not needed \u2014 providing a perpetually available attack surface. Standing accounts increase the threat of data exposure, and managing privileged access with many accounts, many of which belong to machines rather than human users, becomes more challenging.\n\nZero standing privilege means no long-lived credentials are statically stored anywhere. Temporary credentials are provided in flight (ideally in memory) and just in time; this is a crucial strength of dynamic secrets because it generates ephemeral, extremely short-lived credentials in flight when invoking a request for a secret.\n\nShort-lived credentials created just in time avoid credential reuse and potential leaks. 
Boundary integrates with Vault to leverage its dynamic secrets support to enable that pattern, where short-lived credentials are created upon access and destroyed after the session is complete. Applying fine-grained role-based access control with this technique enables a least-privileged approach. Dynamic generation lets organizations attribute each credential to a single interactive or non-interactive session, making auditing more straightforward and robust.\n\nWhen managing machine access to secrets, the dynamic nature of HashiCorp Vault comes to the forefront. Vault gives each service access to secrets based on its identity and associated policy.\n\nHashiCorp Vault natively supports several secret engines, including:\n\n- Google Cloud secrets engine\n- Azure secrets engine\n- AWS secrets engine\n- Kubernetes secrets engine\n- SSH secrets engine\n- Databases secrets engine (MySQL, Postgres, SQL Server, MongoDB, etc.)\n- PKI secrets engine\n\nCombining multiple authentication sources and secret engines can provide controlled access within various implementations.\n\nWith HashiCorp Vault, whether a user is looking to create and distribute organizational secrets and access, or applications are looking to retrieve new database credentials every 15 minutes, centrally managing this access based on trusted identities is critical.\n\nHashiCorp Vault has successfully altered the market's perception of managing secrets across multiple platforms and identity providers. Security in a dynamic world requires a dramatic shift from the approaches common in the static world. Instead of wrapping security around static servers and applications, it must be dynamically woven among the different components and tightly coupled with trusted identities and policies. 
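Each of the secret engines listed above is enabled as an independent mount on a running Vault server; a minimal sketch (mount paths shown are the illustrative defaults):

```shell
# Enable a few of the secret engines listed above. Each mount is
# independent and can be governed by its own policies and leases.
vault secrets enable aws
vault secrets enable database
vault secrets enable -path=pki pki

# Confirm the active mounts.
vault secrets list
```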
With Vault, organizations can leverage any trusted source of identity to enforce access to systems and secrets.\n\n<ImageConfig hideBorder caption=\"HashiCorp Boundary leveraging Vault authentication and secrets.\">\n\n![Diagram showing HashiCorp Boundary leveraging Vault authentication and secrets](\/img\/diagram-cloud-access-boundary-vault.png)\n\n<\/ImageConfig>\n\n## Modern PAM for the cloud\n\n[AWS recommends](https:\/\/aws.amazon.com\/blogs\/security\/temporary-elevated-access-management-with-iam-identity-center\/) using automation where possible to keep people away from systems \u2014 yet not every action can be automated in practice, and some operations might require access by human users. So the need for a PAM (or simply access management) process continues to be justified. In addition, governance mandates session recording of all privileged interactions. Today, most privileged sessions can be conducted programmatically, limiting interactive privileged sessions to emergency P1 incidents.\n\nAdopting a modern access management solution purpose-built for the cloud is essential in today's cloud-centric landscape. Automated onboarding of services is a critical component of a modern PAM solution, especially in highly dynamic and multi-cloud environments. Such a solution empowers organizations to streamline access management, enhance operational efficiency, ensure higher identity assurance, and strengthen security and compliance measures.\n\n[HashiCorp Boundary](https:\/\/www.hashicorp.com\/products\/boundary) is part of the HashiCorp suite of tools for managing identity-based access for modern, dynamic infrastructure. Boundary allows a single workflow to facilitate interactive human sessions for privileged and non-privileged accounts while providing a local development experience. 
It leverages Vault's identity brokering and dynamic credentials capability to underpin the modern PAM paradigm.\n\nHashiCorp's approach focuses on five core principles to enable modern PAM, centered on identity-based controls in cloud-driven environments:\n\n1. Authentication and authorization\n1. Time-bound, least-privileged access\n1. Automation and flexible deployment\n1. Streamlined DevOps workflow\n1. Auditing and logging\n\nBoundary's workflow layers security controls and integrations on multiple levels, monitoring and managing user access through activities aligned with these five core principles:\n\n- Tightly scoped identity-based permissions\n- \"Just-in-time\" network and credential access for sessions via HashiCorp Vault\n- Single sign-on to target services and applications via external identity providers\n- Automated discovery of target systems\n- Session monitoring and management\n- SSH session recording\n\n<ImageConfig hideBorder caption=\"HashiCorp Boundary going full circle, leveraging the ecosystem, including Vault.\">\n\n![Diagram showing HashiCorp Boundary going full circle, leveraging the ecosystem, including Vault](\/img\/diagram-cloud-access-full-circle.png)\n\n<\/ImageConfig>\n\nHashiCorp Boundary includes automated controls to facilitate the onboarding of services via HashiCorp Terraform for preconfigured security policies, or via dynamic host catalogs, which automatically discover and onboard new or changed infrastructure resources and their connection information, such as Amazon EC2 hosts and Microsoft Azure virtual machines. 
Automated onboarding of applications and infrastructure leveraging IaC significantly reduces administrative overhead and operational toil, accelerating the integration of secure access to infrastructure and services.\n\nHashiCorp has been [named](https:\/\/www.hashicorp.com\/blog\/hashicorp-enters-gartner-pam-mq) for the first time in the 2023 Gartner\u00ae Magic Quadrant\u2122 for Privileged Access Management (PAM). We believe HashiCorp's approach, built on what we see as the five essential principles for modern PAM, influenced this inclusion.\n\nGartner noted HashiCorp's solution combining HashiCorp Boundary and HashiCorp Vault. Together, these two products address the new PAM challenges that the cloud introduces; this capability was born from developing world-class capabilities around a specific set of modern core use cases focused on **[workflows, not technologies](https:\/\/www.hashicorp.com\/tao-of-hashicorp)**.\n\n## Conclusion\n\nThe cloud-native era demands a revolutionary shift to a dynamic PAM solution, unencumbered by legacy tooling. HashiCorp Vault and Boundary Enterprise are well-situated to address this paradigm shift for large, global enterprises in regulated industries, creating a viable path to manage an organization's privileged access challenges at scale.\n\nGiven the pace of change in the industry, now is the time for enterprises to begin evaluating by experimentation, steered by the goal of streamlining dynamic access management. 
It is an opportunity to collaborate across the organization and discover consumption patterns conducive to streamlined developer workflows and a modern shared responsibility security model.","site":"vault"}
{"questions":"vault Identity page title Identity This document contains conceptual information about Identity along with an Vault provides an identity management solution to maintain clients who are recognized by Vault layout docs","answers":"---\nlayout: docs\npage_title: 'Identity'\ndescription: >-\n  Vault provides an identity management solution to maintain clients who are recognized by Vault.\n---\n\n# Identity\n\nThis document contains conceptual information about **Identity** along with an\noverview of the various terminologies and their concepts. The idea of Identity\nis to maintain the clients who are recognized by Vault. As such, Vault provides\nan identity management solution through the **Identity secrets engine**. For\nmore information about the Identity secrets engine and how it is used, refer to\nthe [Identity Secrets Engine](\/vault\/docs\/secrets\/identity) documentation.\n\n## Entities and aliases\n\nEach user may have multiple accounts with various identity providers, and Vault\nsupports many of those providers to authenticate with Vault. Vault Identity can\ntie authentications from various auth methods to a single representation. This representation of a consolidated identity is called an **Entity** and their\ncorresponding accounts with authentication providers can be mapped as\n**Aliases**. In essence, each entity is made up of zero or more aliases. An entity cannot have more than one alias for\na particular authentication backend.\n\nFor example, a user with accounts in both GitHub and LDAP can be mapped to a\nsingle entity in Vault with two aliases, one of type GitHub and one of type\nLDAP.\n\n![Entity  overview](\/img\/vault-identity-doc-1.png)\n\nHowever, if both aliases are created on the same auth mount, such as\na Github mount, both aliases cannot be mapped to the same entity. The aliases can\nhave the same auth type, as long as the auth mounts are different, and\nstill be associated to the same entity. 
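The GitHub-plus-LDAP example above can also be set up explicitly through the identity store. A sketch against a running Vault server, assuming both auth methods are already mounted (entity name, alias names, and the `<...>` placeholders are illustrative):

```shell
# Create the consolidated entity and note its canonical ID.
vault write -format=json identity/entity name="bob-smith" policies="base"

# Look up each auth mount's accessor (e.g. auth_github_..., auth_ldap_...).
vault auth list -format=json

# Attach one alias per auth mount to the same entity.
vault write identity/entity-alias name="bob" \
  canonical_id=<entity_id> mount_accessor=<github_mount_accessor>
vault write identity/entity-alias name="bsmith" \
  canonical_id=<entity_id> mount_accessor=<ldap_mount_accessor>
```

Because each alias carries a different mount accessor, both can map to the same entity even though a single mount may contribute at most one alias.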
The diagrams below illustrate both valid\nand invalid scenarios.\n\n![Valid Alias Mapping](\/img\/vault-identity-doc-4.png)\n![Invalid Alias Mapping](\/img\/vault-identity-doc-5.png)\n\nWhen a client authenticates via any credential backend (except the Token\nbackend), Vault creates a new entity and attaches a new alias to it, if a\ncorresponding entity does not already exist. The entity identifier will be tied\nto the authenticated token. When such tokens are used, their entity identifiers\nare audit logged, leaving a trail of actions performed by specific users.\n\n~> Vault Entity is used to count the number of Vault clients. To learn more\nabout client count, refer to the [Client Count](\/vault\/docs\/concepts\/client-count)\ndocumentation.\n\n## Entity management\n\nEntities in Vault **do not** automatically pull identity information from\nanywhere; identity information must be explicitly managed by operators. This\ngives operators administrative control over how many entities are synced\nagainst Vault. In some sense, Vault serves as a _cache_ of identities and not\nas a _source_ of identities.\n\n## Entity policies\n\nVault policies can be assigned to entities, which grants _additional_\npermissions to the token on top of the existing policies on the token. If the\ntoken presented on the API request contains an identifier for the entity and if\nthat entity has a set of policies on it, then the token will be capable of\nperforming actions allowed by the policies on the entity as well.\n\n![Entity policies](\/img\/vault-identity-doc-2.png)\n\nThis is a paradigm shift in terms of _when_ the policies of the token get\nevaluated. Before identity, the policy names on the token were immutable (though\nnot the contents of those policies). With entity policies, along with the\nimmutable set of policy names on the token, the evaluation of policies\napplicable to the token through its identity happens at request time. 
This\nalso adds enormous flexibility to control the behavior of already issued\ntokens.\n\nIt is important to note that the policies on the entity are only a means to grant\n_additional_ capabilities and not a replacement for the policies on the token.\nTo know the full set of capabilities of the token with an associated entity\nidentifier, the policies on the token should be taken into account.\n\n~> **NOTE:** Be careful in granting permissions to non-readonly identity endpoints.\nIf a user can modify an entity, they can grant it additional privileges through\npolicies. If a user can modify an alias they can login with, they can bind it to\nan entity with higher privileges. If a user can modify group membership, they\ncan add their entity to a group with higher privileges.\n\n## Mount bound aliases\n\nVault supports multiple authentication backends and also allows enabling the\nsame type of authentication backend on different mount paths. The alias name of\nthe user will be unique within the backend's mount. But identity store needs to\nuniquely distinguish between conflicting alias names across different mounts of\nthese identity providers. Hence, the alias name in combination with the\nauthentication backend mount's accessor, serve as the unique identifier of an\nalias.\n\nThe table below shows what information each of the supported auth methods uses\nto form the alias name. This is the identifying information that is used to match or create\nan entity. 
If no entities are explicitly created or merged, then one [entity will be implicitly created](#implicit-entities)\nfor each object on the right-hand side of the table, when it is used to authenticate on\na particular auth mount point.\n\n| Auth method         | Name reported by auth method                                                                        |\n| ------------------- | --------------------------------------------------------------------------------------------------- |\n| AliCloud            | Principal ID                                                                                        |\n| AppRole             | Role ID                                                                                             |\n| AWS IAM             | Configurable via `iam_alias` to one of: Role ID (default), IAM unique ID, Canonical ARN, Full ARN   |\n| AWS EC2             | Configurable via `ec2_alias` to one of: Role ID (default), EC2 instance ID, AMI ID                  |\n| Azure               | Subject (from JWT claim)                                                                            |\n| Cloud Foundry       | App ID                                                                                              |\n| GitHub              | User login name associated with token                                                               |\n| Google Cloud        | Configurable via `iam_alias` to one of: Role ID (default), Service account unique ID                |\n| JWT\/OIDC            | Configurable via `user_claim` to one of the presented claims (no default value)                     |\n| Kerberos            | Username                                                                                            |\n| Kubernetes          | Configurable via `alias_name_source` to one of: Service account UID (default), Service account name |\n| LDAP                | Username                                                                              
              |\n| OCI                 | Role name                                                                                           |\n| Okta                | Username                                                                                            |\n| RADIUS              | Username                                                                                            |\n| TLS Certificate     | Subject CommonName                                                                                  |\n| Token               | `entity_alias`, if provided                                                                         |\n| Username (userpass) | Username                                                                                            |\n\n## Local auth methods\n\n**Vault Enterprise:** All the auth methods will generate an entity by default\nwhen a token is being issued, with the exception of token store. This is applicable\nfor both mounts that are shared between clusters and cluster local auth mounts (using `local=true`)\nwhen Vault replication is in use.\nIf the goal of marking an auth method as `local` was to comply to GDPR guidelines,\nthen care must be taken to not set the data pertaining to local auth mount or local auth\nmount aliases in the metadata of the associated entity.\n\n## Implicit entities\n\nOperators can create entities for all the users of an auth mount beforehand and\nassign policies to them, so that when users login, the desired capabilities to\nthe tokens via entities are already assigned. But if that's not done, upon a\nsuccessful user login from any of the authentication backends, Vault will\ncreate a new entity and assign an alias against the login that was successful.\n\nNote that the tokens created using the token authentication backend will not\nnormally have any associated identity information. 
An existing or new implicit\nentity can be assigned by using the `entity_alias` parameter, when creating a\ntoken using a token role with a configured list of `allowed_entity_aliases`.\n\n## Identity auditing\n\nIf the token used to make API calls has an associated entity identifier, it\nwill be audit logged as well. This leaves a trail of actions performed by\nspecific users.\n\n## Identity groups\n\nVault identity has support for **groups**. A group can contain multiple entities\nas its members. A group can also have subgroups. Policies set on the group are\ngranted to all members of the group. During request time, when the token's\nentity ID is being evaluated for the policies that it has access to, policies\nthat are inherited due to group memberships are granted along with the policies\non the entity itself.\n\n![Identity overview](\/img\/vault-identity-doc-3.png)\n\n## Group hierarchical permissions\n\nEntities can be direct members of groups, in which case they inherit the\npolicies of the groups they belong to. Entities can also be indirect members of\ngroups. For example, if GroupA has GroupB as a subgroup, then members of GroupB\nare indirect members of GroupA. Hence, the members of GroupB will have access\nto policies on both GroupA and GroupB.\n\n## External vs internal groups\n\nBy default, the groups created in the identity store are called **internal groups**.\nThe membership management of these groups should be carried out\nmanually.\n\nA group can also be created as an **external group**. In this case, the\nentity membership in the group is managed semi-automatically. An external group\nserves as a mapping to a group that is outside of the identity store. External\ngroups can have one (and only one) alias. 
This alias should map to a notion of\na group that is outside of the identity store.\n\nFor example, groups in LDAP and teams in GitHub.\nA username in LDAP belonging to a group in LDAP can get its\nentity ID added as a member of a group in Vault automatically during _logins_\nand _token renewals_. This works only if the group in Vault is an external\ngroup and has an alias that maps to the group in LDAP.\n\n~> **NOTE:** If the user is removed from the group in LDAP, the user will\nnot immediately be removed from the external group in Vault. The group\nmembership change will be reflected in Vault only upon the\nsubsequent **login** or **renewal** operation.\n\nFor information about the Identity Secrets Engine, refer to [Identity Secrets Engine](\/vault\/docs\/secrets\/identity).\n\n## Tutorial\n\nRefer to the [Identity: Entities and\nGroups](\/vault\/tutorials\/auth-methods\/identity) tutorial to learn how Vault supports multiple authentication methods and enables the same authentication method to be used with different mount paths.","site":"vault","answers_cleaned":"    layout  docs page title   Identity  description       Vault provides an identity management solution to maintain clients who are recognized by Vault         Identity  This document contains conceptual information about   Identity   along with an overview of the various terminologies and their concepts  The idea of Identity is to maintain the clients who are recognized by Vault  As such  Vault provides an identity management solution through the   Identity secrets engine    For more information about the Identity secrets engine and how it is used  refer to the  Identity Secrets Engine   vault docs secrets identity  documentation      Entities and aliases  Each user may have multiple accounts with various identity providers  and Vault supports many of those providers to authenticate with Vault  Vault Identity can tie authentications from various auth methods to a single representation  This 
representation of a consolidated identity is called an   Entity   and their corresponding accounts with authentication providers can be mapped as   Aliases    In essence  each entity is made up of zero or more aliases  An entity cannot have more than one alias for a particular authentication backend   For example  a user with accounts in both GitHub and LDAP can be mapped to a single entity in Vault with two aliases  one of type GitHub and one of type LDAP     Entity  overview   img vault identity doc 1 png   However  if both aliases are created on the same auth mount  such as a Github mount  both aliases cannot be mapped to the same entity  The aliases can have the same auth type  as long as the auth mounts are different  and still be associated to the same entity  The diagrams below illustrate both valid and invalid scenarios     Valid Alias Mapping   img vault identity doc 4 png    Invalid Alias Mapping   img vault identity doc 5 png   When a client authenticates via any credential backend  except the Token backend   Vault creates a new entity  It attaches a new alias to it if a corresponding entity does not already exist  The entity identifier will be tied to the authenticated token  When such tokens are used  their entity identifiers are audit logged  marking a trail of actions performed by specific users      Vault Entity is used to count the number of Vault clients  To learn more about client count  refer to the  Client Count   vault docs concepts client count  documentation      Entity management  Entities in Vault   do not   automatically pull identity information from anywhere  It needs to be explicitly managed by operators  This way  it is flexible in terms of administratively controlling the number of entities to be synced against Vault  In some sense  Vault will serve as a  cache  of identities and not as a  source  of identities      Entity policies  Vault policies can be assigned to entities which will grant  additional  permissions to the token on 
top of the existing policies on the token  If the token presented on the API request contains an identifier for the entity and if that entity has a set of policies on it  then the token will be capable of performing actions allowed by the policies on the entity as well     Entity policies   img vault identity doc 2 png   This is a paradigm shift in terms of  when  the policies of the token get evaluated  Before identity  the policy names on the token were immutable  not the contents of those policies though   But with entity policies  along with the immutable set of policy names on the token  the evaluation of policies applicable to the token through its identity will happen at request time  This also adds enormous flexibility to control the behavior of already issued tokens   It is important to note that the policies on the entity are only a means to grant  additional  capabilities and not a replacement for the policies on the token  To know the full set of capabilities of the token with an associated entity identifier  the policies on the token should be taken into account        NOTE    Be careful in granting permissions to non readonly identity endpoints  If a user can modify an entity  they can grant it additional privileges through policies  If a user can modify an alias they can login with  they can bind it to an entity with higher privileges  If a user can modify group membership  they can add their entity to a group with higher privileges      Mount bound aliases  Vault supports multiple authentication backends and also allows enabling the same type of authentication backend on different mount paths  The alias name of the user will be unique within the backend s mount  But identity store needs to uniquely distinguish between conflicting alias names across different mounts of these identity providers  Hence  the alias name in combination with the authentication backend mount s accessor  serve as the unique identifier of an alias   The table below shows what 
information each of the supported auth methods uses to form the alias name  This is the identifying information that is used to match or create an entity  If no entities are explicitly created or merged  then one  entity will be implicitly created   implicit entities  for each object on the right hand side of the table  when it is used to authenticate on a particular auth mount point     Auth method           Name reported by auth method                                                                                                                                                                                                          AliCloud              Principal ID                                                                                            AppRole               Role ID                                                                                                 AWS IAM               Configurable via  iam alias  to one of  Role ID  default   IAM unique ID  Canonical ARN  Full ARN       AWS EC2               Configurable via  ec2 alias  to one of  Role ID  default   EC2 instance ID  AMI ID                      Azure                 Subject  from JWT claim                                                                                 Cloud Foundry         App ID                                                                                                  GitHub                User login name associated with token                                                                   Google Cloud          Configurable via  iam alias  to one of  Role ID  default   Service account unique ID                    JWT OIDC              Configurable via  user claim  to one of the presented claims  no default value                          Kerberos              Username                                                                                                Kubernetes            Configurable via  alias name source  to one of  Service account UID  
default   Service account name     LDAP                  Username                                                                                                OCI                   Role name                                                                                               Okta                  Username                                                                                                RADIUS                Username                                                                                                TLS Certificate       Subject CommonName                                                                                      Token                  entity alias   if provided                                                                             Username  userpass    Username                                                                                                  Local auth methods    Vault Enterprise    All the auth methods will generate an entity by default when a token is being issued  with the exception of token store  This is applicable for both mounts that are shared between clusters and cluster local auth mounts  using  local true   when Vault replication is in use  If the goal of marking an auth method as  local  was to comply to GDPR guidelines  then care must be taken to not set the data pertaining to local auth mount or local auth mount aliases in the metadata of the associated entity      Implicit entities  Operators can create entities for all the users of an auth mount beforehand and assign policies to them  so that when users login  the desired capabilities to the tokens via entities are already assigned  But if that s not done  upon a successful user login from any of the authentication backends  Vault will create a new entity and assign an alias against the login that was successful   Note that the tokens created using the token authentication backend will not normally have any associated identity 
information  An existing or new implicit entity can be assigned by using the  entity alias  parameter  when creating a token using a token role with a configured list of  allowed entity aliases       Identity auditing  If the token used to make API calls has an associated entity identifier  it will be audit logged as well  This leaves a trail of actions performed by specific users      Identity groups  Vault identity has support for   groups    A group can contain multiple entities as its members  A group can also have subgroups  Policies set on the group are granted to all members of the group  During request time  when the token s entity ID is being evaluated for the policies that it has access to  policies that are inherited due to group memberships are granted along with the policies on the entity itself     Identity overview   img vault identity doc 3 png      Group hierarchical permissions  Entities can be direct members of groups  in which case they inherit the policies of the groups they belong to  Entities can also be indirect members of groups  For example  if a GroupA has GroupB as subgroup  then members of GroupB are indirect members of GroupA  Hence  the members of GroupB will have access to policies on both GroupA and GroupB      External vs internal groups  By default  the groups created in identity store are called   internal groups    The membership management of these groups should be carried out manually   A group can also be created as an   external group    In this case  the entity membership in the group is managed semi automatically  An external group serves as a mapping to a group that is outside of the identity store  External groups can have one  and only one  alias  This alias should map to a notion of a group that is outside of the identity store   For example  groups in LDAP and teams in GitHub  A username in LDAP belonging to a group in LDAP can get its entity ID added as a member of a group in Vault automatically during  logins  and  
token renewals   This works only if the group in Vault is an external group and has an alias that maps to the group in LDAP        NOTE    If the user is removed from the group in LDAP  the user will not immediately be removed from the external group in Vault  The group membership change will be reflected in Vault only upon the subsequent   login   or   renewal   operation   For information about Identity Secrets Engine  refer to  Identity Secrets Engine   vault docs secrets identity       Tutorial  Refer to the  Identity  Entities and Groups   vault tutorials auth methods identity  tutorial to learn how Vault supports multiple authentication methods and enables the same authentication method to be used with different mount paths "}
{"questions":"vault sidebar title Storage layout docs Storage Vault relies on external storage to save its durable information page title Storage","answers":"---\nlayout: docs\npage_title: Storage\nsidebar_title: Storage\ndescription: >-\n  Vault relies on external storage to save its durable information.\n---\n\n# Storage\n\nAs described on our [Architecture](\/vault\/docs\/internals\/architecture) page, Vault's\nstorage backend is untrusted storage used purely to keep encrypted information.\n\n## Supported storage backends\n\n@include 'ent-supported-storage.mdx'\n\nMany other options for storage are available with community support for Vault - see our\n[Storage Configuration](\/vault\/docs\/configuration\/storage) section for more\ninformation.\n\n-> **Choosing a storage backend:** Refer to the [integrated storage vs. external\nstorage](\/vault\/docs\/configuration\/storage#integrated-storage-vs-external-storage)\nsection of the storage configuration page to help make a decision about which\nstorage backend to use.\n\n## Backups\n\nDue to the highly flexible nature of Vault's potential storage configurations,\nproviding exact guidance on backing up Vault is challenging.\n\nWhen backing up Vault, there are two pieces to consider:\n\n1. Vault's encrypted data in the storage backend\n2. Configuration files and management scripts for running the Vault server\n\nThere's also a big question - what is the error case you're trying to guard\nagainst by saving a backup?\n\n### The big question - why take backups?\n\nIt's important to consider the question of \"why take a backup\" while developing\nyour ongoing backup and disaster recovery strategy.\n\nTaking a backup is recommended prior to upgrades, as downgrading Vault storage\nis not always possible. 
Generally, a backup is recommended any time a major\nchange is planned for a cluster.\n\nMore specifically, we recommend taking backups **before**, but not during, write\noperations to the `\/sys` API (excluding the `\/sys\/leases`, `\/sys\/namespaces`,\n`\/sys\/tools`, `\/sys\/wrapping`, `\/sys\/policies`, and `\/sys\/pprof` endpoints).\nSome examples of workflows that write to the `\/sys` API are upgrades and rekeys.\nIn the future, this guidance may change for the Integrated Storage backend.\n\nBackups _can_ also help with accidental data deletions or modifications. In\nthis case, the story can get a little tricky. If you simply recover a backup\nfrom 5AM with the correct data, but the current time is 10AM, you will lose data\nwritten between 5 and 10AM. Lucy Davinhart gave a HashiConf talk that serves as\nan interesting [case\nstudy](https:\/\/www.hashicorp.com\/resources\/oh-no-i-deleted-my-vault-secret).\n\nWe do not recommend backups as protection against the failure of an individual\nmachine. Vault servers can run in clusters, so to protect against server\nfailure, we recommend running Vault in [HA\nmode](\/vault\/docs\/internals\/high-availability). With community features, a\nVault cluster can extend across multiple availability zones within a region.\n\nVault Enterprise supports replicated clusters and disaster recovery for data\ncenter failure. When using Vault Community Edition in [HA\nMode](\/vault\/docs\/internals\/high-availability), a backup can help guard against the\nfailure of a data center.\n\nUltimately, backups are not a replacement for running in HA, or for using\nreplication with Vault Enterprise. As you develop a plan for recovering from or\nguarding against failure, you should consider both backups and HA as critical\ncomponents of that plan.\n\n### Backing up vault's persisted data\n\nBackups and restores are ideally performed while Vault is offline. 
If offline\nbackups are not feasible, we recommend using a storage backend that supports\natomic snapshots (such as\n[Consul](\/consul\/commands\/snapshot) or [Integrated\nStorage](\/vault\/docs\/commands\/operator\/raft#snapshot)).\n\n~> If your storage backend does not support atomic snapshots, we recommend only\ntaking offline backups.\n\nTo perform a backup or restore of Vault's encrypted data when using a\nHashiCorp-supported storage backend, see the instructions linked below. For\nother storage backends, follow the documentation of that backend for taking and\nrestoring backups.\n\n- Integrated Storage [snapshots](\/vault\/docs\/commands\/operator\/raft#snapshot)\n- Consul [snapshots](\/consul\/commands\/snapshot)\n\n#### Backing up multiple clusters\n\nIf you are using Vault Enterprise [Performance\nReplication](\/vault\/docs\/enterprise\/replication#performance-replication-and-disaster-recovery-dr-replication),\nyou should plan to take backups of the active node on each of your clusters.\n\n### Configuration\n\nIn addition to backing up Vault's encrypted data via the storage backend, you\nmay also wish to save the server configuration files, any scripts for managing\nthe Vault service, and ensure you can reinstall any user-installed plugins. 
The\nlocation of these files will be specific to your installation of Vault.\n\n~> **NOTE**: Although a backup or snapshot of Vault's data from the storage\nbackend is encrypted, some of your configuration may be sensitive (a Vault token\nfor Transit Autounseal or a TLS private key in your configuration, for example).\nThe presence of this information in your backups will mean that they may need\nto be carefully protected.","site":"vault","answers_cleaned":"    layout  docs page title  Storage sidebar title  Storage description       Vault relies on external storage to save its durable information         Storage  As described on our  Architecture   vault docs internals architecture  page  Vault s storage backend is untrusted storage used purely to keep encrypted information      Supported storage backends   include  ent supported storage mdx   Many other options for storage are available with community support for Vault   see our  Storage Configuration   vault docs configuration storage  section for more information        Choosing a storage backend    Refer to the  integrated storage vs  external storage   vault docs configuration storage integrated storage vs external storage  section of the storage configuration page to help make a decision about which storage backend to use      Backups  Due to the highly flexible nature of Vault s potential storage configurations  providing exact guidance on backing up Vault is challenging   When backing up Vault  there are two pieces to consider   1  Vault s encrypted data in the storage backend 2  Configuration files and management scripts for running the Vault server  There s also a big question   what is the error case you re trying to guard against by saving a backup       The big question   why take backups   It s important to consider the question of  why take a backup  while developing your ongoing backup and disaster recovery strategy   Taking a backup is recommended prior to upgrades  as downgrading Vault storage is 
not always possible  Generally  a backup is recommended any time a major change is planned for a cluster   More specifically  we recommend taking backups   before    but not during  write operations to the   sys  API  excluding the   sys leases     sys namespaces     sys tools     sys wrapping     sys policies   and   sys pprof  endpoints   Some examples of workflows that write to the   sys  API are upgrades and rekeys  In the future  this guidance may change for the Integrated Storage backend   Backups  can  also help with accidental data deletions or modifications  In this case  the story can get a little tricky  If you simply recover a backup from 5AM with the correct data  but the current time is 10AM  you will lose data written between 5 and 10AM  Lucy Davinhart gave a HashiConf talk that serves as an interesting  case study  https   www hashicorp com resources oh no i deleted my vault secret    We do not recommend backups as protection against the failure of an individual machine  Vault servers can run in clusters  so to protect against server failure  we recommend running Vault in  HA mode   vault docs internals high availability   With community features  a Vault cluster can extend across multiple availability zones within a region   Vault Enterprise supports replicated clusters and disaster recovery for data center failure  When using Vault Community Edition in  HA Mode   vault docs internals high availability   a backup can help guard against the failure of a data center   Ultimately  backups are not a replacement for running in HA  or for using replication with Vault Enterprise  As you develop a plan for recovering from or guarding against failure  you should consider both backups and HA as critical components of that plan       Backing up vault s persisted data  Backups and restores are ideally performed while Vault is offline  If offline backups are not feasible  we recommend using a storage backend that supports atomic snapshots  such as  Consul   
consul commands snapshot  or  Integrated Storage   vault docs commands operator raft snapshot        If your storage backend does not support atomic snapshots  we recommend only taking offline backups   To perform a backup or restore of Vault s encrypted data when using a HashiCorp supported storage backend  see the instructions linked below  For other storage backends  follow the documentation of that backend for taking and restoring backups     Integrated Storage  snapshots   vault docs commands operator raft snapshot    Consul  snapshots   consul commands snapshot        Backing up multiple clusters  If you are using Vault Enterprise  Performance Replication   vault docs enterprise replication performance replication and disaster recovery dr replication   you should plan to take backups of the active node on each of your clusters       Configuration  In addition to backing up Vault s encrypted data via the storage backend  you may also wish to save the server configuration files  any scripts for managing the Vault service  and ensure you can reinstall any user installed plugins  The location of these files will be specific to your installation of Vault        NOTE    Although a backup or snapshot of Vault s data from the storage backend is encrypted  some of your configuration may be sensitive  a Vault token for Transit Autounseal or a TLS private key in your configuration  for example   The presence of this information in your backups will mean that they may need to be carefully protected "}
{"questions":"vault High availability mode HA against outages layout docs page title High Availability Vault can be highly available allowing you to run multiple Vaults to protect","answers":"---\nlayout: docs\npage_title: High Availability\ndescription: >-\n  Vault can be highly available, allowing you to run multiple Vaults to protect\n  against outages.\n---\n\n# High availability mode (HA)\n\nVault supports a multi-server mode for high availability. This mode protects\nagainst outages by running multiple Vault servers. High availability mode\nis automatically enabled when using a data store that supports it.\n\nYou can tell if a data store supports high availability mode (\"HA\") by starting\nthe server and seeing if \"(HA available)\" is output next to the data store\ninformation. If it is, then Vault will automatically use HA mode. This\ninformation is also available on the\n[Configuration](\/vault\/docs\/configuration) page.\n\nTo be highly available, one of the Vault server nodes grabs a lock within the\ndata store. The successful server node then becomes the active node; all other\nnodes become standby nodes. At this point, if the standby nodes receive a\nrequest, they will either [forward the request](#request-forwarding) or\n[redirect the client](#client-redirection) depending on the current\nconfiguration and state of the cluster -- see the sections below for details.\nDue to this architecture, HA does not enable increased scalability. In general,\nthe bottleneck of Vault is the data store itself, not Vault core. For example:\nto increase the scalability of Vault with Consul, you would generally scale\nConsul instead of Vault.\n\nCertain storage backends can support high availability mode, which enables them\nto store both Vault's information and the HA lock. However, Vault\nalso supports split data\/HA mode, whereby the lock value and the rest of the\ndata live separately. 
This can be done by specifying both the\n[`storage`](\/vault\/docs\/configuration#storage) and\n[`ha_storage`](\/vault\/docs\/configuration#ha_storage) stanzas in the configuration file\nwith different backends. For instance, a Vault cluster can be set up to use\nConsul as the [`ha_storage`](\/vault\/docs\/configuration#ha_storage) to manage the lock,\nand use Amazon S3 as the [`storage`](\/vault\/docs\/configuration#storage) for all other\npersisted data.\n\nThe sections below explain the server communication patterns and each type of\nrequest handling in more detail. At a minimum, the requirements for redirection\nmode must be met for an HA cluster to work successfully.\n\n## Server-to-Server communication\n\nBoth methods of request handling rely on the active node advertising\ninformation about itself to the other nodes. Rather than over the network, this\ncommunication takes place within Vault's encrypted storage; the active node\nwrites this information and unsealed standby Vault nodes can read it.\n\nFor the client redirection method, this is the extent of server-to-server\ncommunication -- no direct communication occurs; only encrypted entries in the\ndata store are used to transfer state.\n\nFor the request forwarding method, the servers need direct communication with\neach other. In order to perform this securely, the active node also advertises,\nvia the encrypted data store entry, a newly-generated private key (ECDSA-P521)\nand a newly-generated self-signed certificate designated for client and server\nauthentication. Each standby uses the private key and certificate to open a\nmutually-authenticated TLS 1.2 connection to the active node via the advertised\ncluster address. When client requests come in, the requests are serialized,\nsent over this TLS-protected communication channel, and acted upon by the\nactive node. 
The active node then returns a response to the standby, which\nsends the response back to the requesting client.\n\n## Request forwarding\n\nIf request forwarding is enabled (turned on by default in 0.6.2), clients can\nstill force the older\/fallback redirection behavior (see below) if desired by\nsetting the `X-Vault-No-Request-Forwarding` header to any non-empty value.\n\nSuccessful cluster setup requires a few configuration parameters, although some\ncan be automatically determined.\n\n## Client redirection\n\nIf `X-Vault-No-Request-Forwarding` header in the request is set to a non-empty\nvalue, the standby nodes will redirect the client using a `307` status code to\nthe _active node's_ redirect address.\n\nThis is also the fallback method used when request forwarding is turned off or\nthere is an error performing the forwarding. As such, a redirect address is\nalways required for all HA setups.\n\nSome HA data store drivers can autodetect the redirect address, but it is often\nnecessary to configure it manually via a top-level value in the configuration\nfile. The key for this value is [`api_addr`](\/vault\/docs\/configuration#api_addr) and\nthe value can also be specified by the `VAULT_API_ADDR` environment variable,\nwhich takes precedence.\n\nWhat the [`api_addr`](\/vault\/docs\/configuration#api_addr) value should be set to\ndepends on how Vault is set up. There are two common scenarios: Vault servers\naccessed directly by clients, and Vault servers accessed via a load balancer.\n\nIn both cases, the [`api_addr`](\/vault\/docs\/configuration#api_addr) should be a full\nURL including scheme (`http`\/`https`), not simply an IP address and port.\n\n### Direct access\n\nWhen clients are able to access Vault directly, the\n[`api_addr`](\/vault\/docs\/configuration#api_addr) for each node should be that node's\naddress. 
For instance, if there are two Vault nodes:\n\n- `A`, accessed via `https:\/\/a.vault.mycompany.com:8200`\n- `B`, accessed via `https:\/\/b.vault.mycompany.com:8200`\n\nThen node `A` would set its\n[`api_addr`](\/vault\/docs\/configuration#api_addr) to\n`https:\/\/a.vault.mycompany.com:8200` and node `B` would set its\n[`api_addr`](\/vault\/docs\/configuration#api_addr) to\n`https:\/\/b.vault.mycompany.com:8200`.\n\nThis way, when `A` is the active node, any requests received by node `B` will\ncause it to redirect the client to node `A`'s\n[`api_addr`](\/vault\/docs\/configuration#api_addr) at `https:\/\/a.vault.mycompany.com`,\nand vice-versa.\n\n### Behind load balancers\n\nSometimes clients use load balancers as an initial method to access one of the\nVault servers, but actually have direct access to each Vault node. In this\ncase, the Vault servers should actually be set up as described in the above\nsection, since for redirection purposes the clients have direct access.\n\nHowever, if the only access to the Vault servers is via the load balancer, the\n[`api_addr`](\/vault\/docs\/configuration#api_addr) on each node should be the same: the\naddress of the load balancer. Clients that reach a standby node will be\nredirected back to the load balancer; at that point hopefully the load\nbalancer's configuration will have been updated to know the address of the\ncurrent leader. This can cause a redirect loop and as such is not a recommended\nsetup when it can be avoided.\n\n### Per-Node cluster listener addresses\n\nEach [`listener`](\/vault\/docs\/configuration\/listener) block in Vault's configuration\nfile contains an [`address`](\/vault\/docs\/configuration\/listener\/tcp#address) value on\nwhich Vault listens for requests. Similarly, each\n[`listener`](\/vault\/docs\/configuration\/listener) block can contain a\n[`cluster_address`](\/vault\/docs\/configuration\/listener\/tcp#cluster_address) on which\nVault listens for server-to-server cluster requests. 
If this value is not set,
its IP address will be automatically set to the same as the
[`address`](/vault/docs/configuration/listener/tcp#address) value, and its port will
be automatically set to the same as the
[`address`](/vault/docs/configuration/listener/tcp#address) value plus one (so by
default, port `8201`).

Note that _only_ active nodes have active listeners. When a node becomes active
it will start cluster listeners, and when it becomes standby it will stop them.

### Per-Node cluster address

Similar to the [`api_addr`](/vault/docs/configuration#api_addr),
[`cluster_addr`](/vault/docs/configuration#cluster_addr) is the value that each node,
if active, should advertise to the standbys to use for server-to-server
communications, and it lives as a top-level value in the configuration file. On
each node, this should be set to a host name or IP address that a standby can
use to reach one of that node's
[`cluster_address`](/vault/docs/configuration#cluster_address) values set in the
[`listener`](/vault/docs/configuration/listener) blocks, including port. (Note that
this will always be forced to `https` since only TLS connections are used
between servers.)

This value can also be specified by the `VAULT_CLUSTER_ADDR` environment
variable, which takes precedence.

## Storage support

Currently there are several storage backends that support high availability
mode, including [Consul](/vault/docs/configuration/storage/consul),
[ZooKeeper](/vault/docs/configuration/storage/zookeeper) and [etcd](/vault/docs/configuration/storage/etcd). These may
change over time, and the [configuration page](/vault/docs/configuration) should be
referenced.

HashiCorp recommends [Vault Integrated Storage](/vault/docs/configuration/storage/raft) as the default HA backend for new deployments of Vault.
[Consul Storage Backend](/vault/docs/configuration/storage/consul) is also a supported option and used by many production deployments. See the [comparison chart](/vault/docs/configuration/storage#integrated-storage-vs-consul-as-vault-storage) for help deciding which option is best for you.

If you're interested in implementing another backend or adding HA support to
another backend, we'd love your contributions. Adding HA support requires
implementing the
[`physical.HABackend`](https://pkg.go.dev/github.com/hashicorp/vault/sdk/physical#HABackend)
interface for the storage backend.
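As a concrete sketch of how the addressing values above fit together, a single node's configuration file might look like the following. The hostnames, file paths, and the choice of Raft storage are illustrative assumptions, not requirements:

```hcl
# Illustrative HA config for node A; hostnames and paths are assumptions.
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "node-a"
}

listener "tcp" {
  address         = "0.0.0.0:8200"
  # If omitted, cluster_address defaults to `address` with the port plus one.
  cluster_address = "0.0.0.0:8201"
  tls_cert_file   = "/etc/vault/tls/vault.crt"
  tls_key_file    = "/etc/vault/tls/vault.key"
}

# Full URL including scheme; VAULT_API_ADDR takes precedence if set.
api_addr = "https://a.vault.mycompany.com:8200"

# Always https, since only TLS connections are used between servers;
# VAULT_CLUSTER_ADDR takes precedence if set.
cluster_addr = "https://a.vault.mycompany.com:8201"
```

Node `B` would use the same shape with its own hostname in `api_addr` and `cluster_addr`.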
---
layout: docs
page_title: Response Wrapping
description: Wrapping responses in cubbyholes for secure distribution.
---

# Response wrapping

_Note_: Some of this information relies on features of response-wrapping tokens
introduced in Vault 0.8 and may not be available in earlier releases.

## Overview

In many Vault deployments, clients can access Vault directly and consume
returned secrets. In other situations, it may make sense, or be desirable, to
separate privileges such that one trusted entity is responsible for interacting
with most of the Vault API and passing secrets to the end consumer.

However, the more relays a secret travels through, the more possibilities for
accidental disclosure, especially if the secret is being transmitted in
plaintext. For instance, you may wish to get a TLS private key to a machine
that has been cold-booted, but since you do not want to store a decryption key
in persistent storage, you cannot encrypt this key in transit.

To help address this problem, Vault includes a feature called _response
wrapping_. When requested, Vault can take the response it would have sent to an
HTTP client and instead insert it into the
[`cubbyhole`](/vault/docs/secrets/cubbyhole) of a single-use token,
returning that single-use token instead.

Logically speaking, the response is
wrapped by the token, and retrieving it requires an unwrap operation against
this token. Functionally speaking, the token provides authorization to use
an encryption key from Vault's keyring to decrypt the data.

This provides a powerful mechanism for information sharing in many
environments.
In the types of scenarios described above, often the best
practical option is to provide _cover_ for the secret information, be able to
_detect malfeasance_ (interception, tampering), and limit the _lifetime_ of the
secret's exposure. Response wrapping performs all three of these duties:

- It provides _cover_ by ensuring that the value being transmitted across the
  wire is not the actual secret but a reference to such a secret, namely the
  response-wrapping token. Information stored in logs or captured along the
  way does not directly expose the sensitive information.
- It provides _malfeasance detection_ by ensuring that only a single party can
  ever unwrap the token and see what's inside. A client receiving a token that
  cannot be unwrapped can trigger an immediate security incident. In addition,
  a client can inspect a given token before unwrapping to ensure that its
  origin is from the expected location in Vault.
- It _limits the lifetime_ of secret exposure because the response-wrapping
  token has a lifetime that is separate from the wrapped secret (and often can
  be much shorter), so if a client fails to come up and unwrap the token, the
  token can expire very quickly.

## Response-Wrapping tokens

When a response is wrapped, the normal API response from Vault does not contain
the original secret, but rather contains a set of information related to the
response-wrapping token:

- TTL: The TTL of the response-wrapping token itself
- Token: The actual token value
- Creation Time: The time that the response-wrapping token was created
- Creation Path: The API path that was called in the original request
- Wrapped Accessor: If the wrapped response is an authentication response
  containing a Vault token, this is the value of the wrapped token's accessor.
  This is useful for orchestration systems (such as Nomad) to be able to control
  the lifetime of secrets based on their knowledge of the lifetime of jobs,
  without having to
actually unwrap the response-wrapping token or gain
  knowledge of the token ID inside.

Vault currently does not provide signed response-wrapping tokens, as signing
would provide little extra protection. If you are being pointed to the correct Vault
server, token validation is performed by interacting with the server itself; a
signed token does not remove the need to validate the token with the server,
since the token is not carrying data but is merely an access mechanism, and the
server will not release data without validating it. If you are being attacked
and pointed to the wrong Vault server, the same attacker could trivially give
you the wrong signing public key that corresponds to the wrong Vault server.
You could cache a previously valid key, but could also cache a previously valid
address (and in most cases the Vault address will not change or will be set via
a service discovery mechanism). As such, we rely on the fact that the token
itself is not carrying authoritative data, and do not sign it.

## Response-Wrapping token operations

Via the `sys/wrapping` path, several operations can be run against wrapping
tokens:

- Lookup (`sys/wrapping/lookup`): This allows fetching the response-wrapping
  token's creation time, creation path, and TTL. This path is unauthenticated
  and available to response-wrapping tokens themselves. In other words, a
  response-wrapping token holder wishing to perform validation is always
  allowed to look up the properties of the token.
- Unwrap (`sys/wrapping/unwrap`): Unwrap the token, returning the response
  inside. The response that is returned will be the original wire-format
  response; it can be used directly with API clients.
- Rewrap (`sys/wrapping/rewrap`): Allows migrating the wrapped data to a new
  response-wrapping token. This can be useful for long-lived secrets.
For
  example, an organization may wish (or be required in a compliance scenario)
  to have the `pki` backend's root CA key be returned in a long-lived
  response-wrapping token to ensure that nobody has seen the key (easily
  verified by performing lookups on the response-wrapping token) but available
  for signing CRLs in case they ever accidentally change or lose the `pki`
  mount. Often, compliance schemes require periodic rotation of secrets, so
  this helps achieve that compliance goal without actually exposing what's
  inside.
- Wrap (`sys/wrapping/wrap`): A helper endpoint that echoes back the data sent
  to it in a response-wrapping token. Note that blocking access to this
  endpoint does not remove the ability for arbitrary data to be wrapped, as it
  can be done elsewhere in Vault.

## Response-Wrapping token creation

Response wrapping is per-request and is triggered by providing to Vault the
desired TTL for a response-wrapping token for that request. This is set by the
client using the `X-Vault-Wrap-TTL` header and can be either an integer number
of seconds or a string duration of seconds (`15s`), minutes (`20m`), or hours
(`25h`). When using the Vault CLI, you can set this via the `-wrap-ttl`
parameter. When using the Go API, wrapping is triggered by [setting a helper
function](https://godoc.org/github.com/hashicorp/vault/api#Client.SetWrappingLookupFunc)
that tells the API the conditions under which to request wrapping, by mapping
an operation and path to a desired TTL.

If a client requests wrapping:

1. The original HTTP response is serialized
2. A new single-use token is generated with the TTL supplied by the client
3. Internally, the original serialized response is stored in the single-use
   token's cubbyhole
4. A new response is generated, with the token ID, TTL, and path stored in the
   new response's wrap information object
5. 
The new response is returned to the caller

Note that policies can control minimum/maximum wrapping TTLs; see the [policies
concepts page](/vault/docs/concepts/policies) for
more information.

## Response-Wrapping token validation

Proper validation of response-wrapping tokens is essential to ensure that any
malfeasance is detected. It's also pretty straightforward.

Validation is best performed by the following steps:

1. If a client has been expecting delivery of a response-wrapping token and
   none arrives, this may be due to an attacker intercepting the token and then
   preventing it from traveling further. This should cause an alert to trigger
   an immediate investigation.
2. Perform a lookup on the response-wrapping token. This immediately tells you
   if the token has already been unwrapped or is expired (or otherwise
   revoked). If the lookup indicates that a token is invalid, it does not
   necessarily mean that the data was intercepted (for instance, perhaps the
   client took a long time to start up and the TTL expired) but should trigger
   an alert for immediate investigation, likely with the assistance of Vault's
   audit logs to see if the token really was unwrapped.
3. With the token information in hand, validate that the creation path matches
   expectations. If you expect to find a TLS key/certificate inside, chances
   are the path should be something like `pki/issue/...`. If the path is not
   what you expect, it is possible that the data contained inside was read and
   then put into a new response-wrapping token. (This is especially likely if
   the path starts with `cubbyhole` or `sys/wrapping/wrap`.) Particular care
   should be taken with the `kv` secrets engine: exact matches on the path are best
   there.
For example, if you expect a secret to come from `secret/foo` and
   the interceptor provides a token with `secret/bar` as the path, simply
   checking for a prefix of `secret/` is not enough.
4. After path validation, unwrap the token. If the unwrap fails, treat it the
   same way as a failed initial lookup: trigger an alert for immediate
   investigation.

Following those steps provides very strong assurance that the data contained
within the response-wrapping token has never been seen by anyone other than the
intended client and that any interception or tampering has resulted in a
security alert.
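The exact-match path check from step 3 can be sketched in shell; the values here are hypothetical stand-ins for fields a client would read from the `sys/wrapping/lookup` response before unwrapping:

```shell
# Hypothetical values; in practice, fetch creation_path from
# sys/wrapping/lookup on the response-wrapping token.
expected_path="secret/foo"
creation_path="secret/bar"

# Exact match, not a prefix check: `secret/bar` must not pass for `secret/foo`.
if [ "$creation_path" = "$expected_path" ]; then
  echo "creation path ok"
else
  echo "ALERT: unexpected creation path: $creation_path"
fi
```

A prefix test (e.g. matching anything under `secret/`) would wrongly accept `secret/bar` here, which is exactly the trap step 3 warns about.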
{"questions":"vault page title Integrated Storage Learn about the integrated raft storage in Vault information As of Vault 1 4 an Integrated Storage option is offered This Integrated storage Vault supports a number of storage options for the durable storage of Vault s layout docs","answers":"---\nlayout: docs\npage_title: Integrated Storage\ndescription: Learn about the integrated raft storage in Vault.\n---\n\n# Integrated storage\n\nVault supports a number of storage options for the durable storage of Vault's\ninformation. As of Vault 1.4 an Integrated Storage option is offered. This\nstorage backend does not rely on any third party systems, implements high\navailability semantics, supports Enterprise Replication features, and provides\nbackup\/restore workflows.\n\nThe option stores Vault's data on the server's filesystem and\nuses a consensus protocol to replicate data to each server in the cluster. More\ninformation on the internals of Integrated Storage can be found in the\n[Integrated Storage internals\ndocumentation](\/vault\/docs\/internals\/integrated-storage\/). Additionally, the\n[Configuration](\/vault\/docs\/configuration\/storage\/raft\/) docs can help in configuring\nVault to use Integrated Storage.\n\nThe sections below go into various details on how to operate Vault with\nIntegrated Storage.\n\n## Server-to-Server communication\n\nOnce nodes are joined to one another they begin to communicate using mTLS over\nVault's cluster port. The cluster port defaults to `8201`. The TLS information\nis exchanged at join time and is rotated on a cadence.\n\nA requirement for Integrated Storage is that the\n[`cluster_addr`](\/vault\/docs\/concepts\/ha#per-node-cluster-address) configuration option\nis set. 
This allows Vault to assign an address to the node ID at join time.\n\n## Cluster membership\n\nThis section will outline how to bootstrap and manage a cluster of Vault nodes\nrunning Integrated Storage.\n\nIntegrated Storage is bootstrapped during the [initialization\nprocess](\/vault\/tutorials\/getting-started\/getting-started-deploy#initializing-the-vault),\nand results in a cluster of size 1. Depending on the [desired deployment\nsize](\/vault\/docs\/internals\/integrated-storage\/#deployment-table), nodes can be joined\nto the active Vault node.\n\n### Joining nodes\n\nJoining is the process of taking an uninitialized Vault node and making it a\nmember of an existing cluster. In order to authenticate the new node to the\ncluster it must use the same seal mechanism. If using Auto Unseal, the node\nmust be configured to use the same KMS provider and key as the cluster it's\nattempting to join. If using a Shamir seal, the unseal keys must be provided to\nthe new node before the join process can complete. Once a node has successfully\njoined, data from the active node can begin to replicate to it. Once a node has\nbeen joined it cannot be re-joined to a different cluster.\n\nYou can either join the node automatically via the config file or manually through the\nAPI (both methods are described below). When joining a node, the API address of the leader node must be used. We\nrecommend setting the [`api_addr`](\/vault\/docs\/concepts\/ha#direct-access) configuration\noption on all nodes to make joining simpler.\n\nAlways join nodes to a cluster one at a time and wait for the node to become\nhealthy and (if applicable) a voter before continuing to add more nodes. 
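\n\nAs a sketch, the [`api_addr`](\/vault\/docs\/concepts\/ha#direct-access) and [`cluster_addr`](\/vault\/docs\/concepts\/ha#per-node-cluster-address) options mentioned above might appear in a node's server configuration as follows (the hostname is a placeholder matching the examples below; the cluster port defaults to `8201`):\n\n```hcl\n# Address other nodes use to reach this node's API port when joining\napi_addr     = \"https:\/\/node1.vault.local:8200\"\n\n# Address advertised for server-to-server (cluster port) traffic\ncluster_addr = \"https:\/\/node1.vault.local:8201\"\n```\n\n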
The\nstatus of a node can be verified by performing a [`list-peers`](\/vault\/docs\/commands\/operator\/raft#list-peers)\ncommand or by checking the [`autopilot state`](\/vault\/docs\/commands\/operator\/raft#autopilot-state).\n\n#### `retry_join` configuration\n\nThis method enables setting one or more target leader nodes in the config file.\nWhen an uninitialized Vault server starts up, it will attempt to join each potential\nleader that is defined, retrying until successful. When one of the specified\nleaders becomes active, this node will successfully join. When using a Shamir seal,\nthe joined nodes will still need to be unsealed manually. When using Auto Unseal,\nthe node will be able to join and unseal automatically.\n\nAn example [`retry_join`](\/vault\/docs\/configuration\/storage\/raft#retry_join-stanza)\nconfig can be seen below:\n\n```hcl\nstorage \"raft\" {\n  path    = \"\/var\/raft\/\"\n  node_id = \"node3\"\n\n  retry_join {\n    leader_api_addr = \"https:\/\/node1.vault.local:8200\"\n  }\n  retry_join {\n    leader_api_addr = \"https:\/\/node2.vault.local:8200\"\n  }\n}\n```\n\nNote that in each [`retry_join`](\/vault\/docs\/configuration\/storage\/raft#retry_join-stanza)\nstanza, you may provide a single\n[`leader_api_addr`](\/vault\/docs\/configuration\/storage\/raft#leader_api_addr) or\n[`auto_join`](\/vault\/docs\/configuration\/storage\/raft#auto_join) value. 
When a cloud\n[`auto_join`](\/vault\/docs\/configuration\/storage\/raft#auto_join) configuration value is\nprovided, Vault will use [go-discover](https:\/\/github.com\/hashicorp\/go-discover)\nto automatically attempt to discover and resolve potential Raft leader\naddresses.\n\nCheck the go-discover\n[README](https:\/\/github.com\/hashicorp\/go-discover\/blob\/master\/README.md) for\ndetails on the format of the [`auto_join`](\/vault\/docs\/configuration\/storage\/raft#auto_join)\nvalue per cloud provider.\n\n```hcl\nstorage \"raft\" {\n  path    = \"\/var\/raft\/\"\n  node_id = \"node3\"\n\n  retry_join {\n    auto_join = \"provider=aws region=eu-west-1 tag_key=vault tag_value=... access_key_id=... secret_access_key=...\"\n  }\n}\n```\n\nBy default, Vault will attempt to reach discovered peers using HTTPS and port 8200. Operators may override these through the\n[`auto_join_scheme`](\/vault\/docs\/configuration\/storage\/raft#auto_join_scheme) and\n[`auto_join_port`](\/vault\/docs\/configuration\/storage\/raft#auto_join_port) fields\nrespectively.\n\n```hcl\nstorage \"raft\" {\n  path    = \"\/var\/raft\/\"\n  node_id = \"node3\"\n\n  retry_join {\n    auto_join = \"provider=aws region=eu-west-1 tag_key=vault tag_value=... access_key_id=... secret_access_key=...\"\n    auto_join_scheme = \"http\"\n    auto_join_port = 8201\n  }\n}\n```\n\n#### Join from the CLI\n\nAlternatively you can use the [`join` CLI\ncommand](\/vault\/docs\/commands\/operator\/raft\/#join) or the API to join a node. The\nactive node's API address will need to be specified:\n\n```shell-session\n$ vault operator raft join https:\/\/node1.vault.local:8200\n```\n\n#### Non-Voting nodes (Enterprise only)\n\nNodes that are joined to a cluster can be specified as non-voters. A non-voting\nnode has all of Vault's data replicated to it, but does not contribute to the\nquorum count. 
This can be used in conjunction with [Performance\nStandby](\/vault\/docs\/enterprise\/performance-standby\/) nodes to add read scalability to\na cluster in cases where a high volume of reads is needed.\n\n```shell-session\n$ vault operator raft join -non-voter https:\/\/node1.vault.local:8200\n```\n\n### Removing peers\n\nRemoving a peer node is a necessary step when you no longer want the node in the\ncluster. This could happen if the node is rotated for a new one, the hostname\npermanently changes and can no longer be accessed, you're attempting to shrink\nthe size of the cluster, or for many other reasons. Removing the peer will\nensure the cluster stays at the desired size, and that quorum is maintained.\n\nTo remove the peer, you can issue a\n[`remove-peer`](\/vault\/docs\/commands\/operator\/raft#remove-peer) command and provide the\nnode ID you wish to remove:\n\n```shell-session\n$ vault operator raft remove-peer node1\nPeer removed successfully!\n```\n\n#### Re-joining after removal\n\nIf you have used `remove-peer` to remove a node from the Raft cluster, but you\nlater want to have this same node re-join the cluster, you will need to delete\nany existing Raft data on the removed node before adding it back to the cluster.\nThis will involve stopping the Vault process, deleting the data directory containing\nRaft data, and then restarting the Vault process.\n\n### Listing peers\n\nTo see the current peer set for the cluster, you can issue a\n[`list-peers`](\/vault\/docs\/commands\/operator\/raft#list-peers) command. 
All the voting\nnodes that are listed here contribute to the quorum and a majority must be alive\nfor Integrated Storage to continue to operate.\n\n```shell-session\n$ vault operator raft list-peers\nNode     Address                   State       Voter\n----     -------                   -----       -----\nnode1    node1.vault.local:8201    follower    true\nnode2    node2.vault.local:8201    follower    true\nnode3    node3.vault.local:8201    leader      true\n```\n\n## Integrated storage and TLS\n\nWe've glossed over some details in the above sections on bootstrapping clusters.\nThe instructions are sufficient for most cases, but some users have run into\nproblems when using auto-join and TLS in conjunction with things like auto-scaling.\nThe issue is that [go-discover](https:\/\/github.com\/hashicorp\/go-discover) on\nmost platforms returns IPs (not hostnames), and because the IPs aren't knowable\nin advance, the TLS certificates used to secure the Vault API port don't contain\nthese IPs in their IP SANs.\n\n### Vault networking recap\n\nBefore we explore solutions to this problem, let's recapitulate how Vault nodes\nspeak to one another.\n\nVault exposes two TCP ports: [the API port](\/vault\/docs\/configuration#api_addr) and\n[the cluster port](\/vault\/docs\/configuration#cluster_addr).\n\nThe API port is where clients send their Vault HTTP requests.\n\nFor a single-node Vault cluster you don't worry about a cluster port as it won't be used.\n\nWhen you have multiple nodes, you also need a cluster port. This is used by Vault\nnodes to issue RPCs to one another, e.g. to forward requests from a standby node\nto the active node, or when Raft is in use, to handle leader election and\nreplication of stored data.\n\nThe cluster port is secured using a TLS certificate that the Vault active node\ngenerates internally. 
It's clear how this can work when not using integrated\nstorage: every node has at least read access to storage, so once the active\nnode has persisted the certificate, the standby nodes can fetch it, and all\nagree on how cluster traffic should be encrypted.\n\nIt's less clear how this works with Integrated Storage, as there is a chicken\nand egg problem. Nodes don't have a shared view of storage until the raft\ncluster has been formed, but we're trying to form the raft cluster! To solve\nthis problem, a Vault node must speak to another Vault node using the API port\ninstead of the cluster port. This is currently the only situation in which\nVault Community Edition does this (Vault Enterprise also does something similar when setting\nup replication.)\n\n- `node2` wants to join the cluster, so issues challenge API request to existing member `node1`\n- `node1` replies to challenge request with (1) an encrypted random UUID and (2) seal config\n- `node2` must decrypt UUID using seal; if using auto-unseal can do it directly, if using Shamir must wait for user to provide enough unseal keys to perform decryption\n- `node2` sends decrypted UUID back to `node1` using answer API\n- `node1` sees `node2` can be trusted (since it has seal access) and replies with a bootstrap package which includes the cluster TLS certificate and private key\n- `node2` gets sent a raft snapshot over the cluster port\n\nAfter this procedure the new node will never again send traffic to the API port.\nAll subsequent inter-node communication will use the cluster port.\n\n![Raft Join Process](\/img\/raft-join-detailed.png)\n\n### Assisted raft join techniques\n\nThe simplest option is to do it by hand: issue [`raft join`](\/vault\/docs\/commands\/operator\/raft#join) commands specifying the explicit names\nor IPs of the nodes to join to. 
In this section we look at other TLS-compatible\noptions that lend themselves more to automation.\n\n#### Autojoin with TLS servername\n\nAs of Vault 1.6.2, the simplest option might be to specify a\n[`leader_tls_servername`](\/vault\/docs\/configuration\/storage\/raft#leader_tls_servername)\nin the [`retry_join`](\/vault\/docs\/configuration\/storage\/raft#retry_join-stanza) stanza\nwhich matches a [DNS\nSAN](https:\/\/en.wikipedia.org\/wiki\/Subject_Alternative_Name) in the certificate.\n\nNote that names in a certificate's DNS SAN don't actually have to be registered\nin a DNS server. Your nodes may have no names found in DNS, while still\nusing certificate(s) that contain this shared `servername` in their DNS SANs.\n\n#### Autojoin but constrain CIDR, list all possible IPs in certificate\n\nIf all the vault node IPs are assigned from a small subnet, e.g. a `\/28`, it\nbecomes practical to put all the IPs that exist in that subnet into the IP SANs\nof the TLS certificate the nodes will share.\n\nThe drawback here is that the cluster may someday outgrow the CIDR and changing\nit may be a pain. For similar reasons this solution may be impractical when\nusing non-voting nodes and dynamically scaling clusters.\n\n#### Load balancer instead of autojoin\n\nMost Vault instances are going to have a load balancer (LB) between clients and\nthe Vault nodes. 
In that case, the LB knows how to route traffic to working\nVault nodes, and there's no need for auto-join: we can just use\n[`retry_join`](\/vault\/docs\/configuration\/storage\/raft#retry_join-stanza) with the LB\naddress as the target.\n\nOne potential issue here: some users want a public facing LB for clients to\nconnect to Vault, but aren't comfortable with Vault internal traffic\negressing from the internal network it normally runs on.\n\n## Outage recovery\n\n### Quorum maintained\n\nThis section outlines the steps to take when a single server or multiple servers\nare in a failed state but quorum is still maintained. This means the remaining\nalive servers are still operational, can elect a leader, and are able to process\nwrite requests.\n\nIf the failed server is recoverable, the best option is to bring it back online\nand have it reconnect to the cluster with the same host address. This will return\nthe cluster to a fully healthy state.\n\nIf this is impractical, you need to remove the failed server. Usually, you can\nissue a [`remove-peer`](\/vault\/docs\/commands\/operator\/raft#remove-peer) command to\nremove the failed server if it's still a member of the cluster.\n\nIf the [`remove-peer`](\/vault\/docs\/commands\/operator\/raft#remove-peer) command isn't\npossible or you'd rather manually re-write the cluster membership a\n[`raft\/peers.json`](#manual-recovery-using-peers-json) file can be written to\nthe configured data directory.\n\n### Quorum lost\n\nIn the event that multiple servers are lost, causing a loss of quorum and a\ncomplete outage, partial recovery is still possible.\n\nIf the failed servers are recoverable, the best option is to bring them back\nonline and have them reconnect to the cluster using the same host addresses.\nThis will return the cluster to a fully healthy state.\n\nIf the failed servers are not recoverable, partial recovery is possible using\ndata on the remaining servers in the cluster. 
There may be data loss in this\nsituation because multiple servers were lost, so information about what's\ncommitted could be incomplete. The recovery process implicitly commits all\noutstanding Raft log entries, so it's also possible to commit data that was\nuncommitted before the failure.\n\nSee the section below on manual recovery using\n[`peers.json`](#manual-recovery-using-peers-json) for details of the recovery\nprocedure. You include only the remaining servers in the\n[`peers.json`](#manual-recovery-using-peers-json) recovery file. The\ncluster should be able to elect a leader once the remaining servers are all\nrestarted with an identical\n[`peers.json`](#manual-recovery-using-peers-json) configuration.\n\nAny servers you introduce later can be fresh with totally clean data\ndirectories and joined using Vault's join command.\n\nIn extreme cases, it should be possible to recover with just a single remaining\nserver by starting that single server with itself as the only peer in the\n[`peers.json`](#manual-recovery-using-peers-json) recovery file.\n\n### Manual recovery using peers.json\n\nUsing `raft\/peers.json` for recovery can cause uncommitted Raft log entries to be\nimplicitly committed, so this should only be used after an outage where no other\noption is available to recover a lost server. Make sure you don't have any\nautomated processes that will put the peers file in place on a periodic basis.\n\nTo begin, stop all remaining servers.\n\nThe next step is to go to the [configured data\npath](\/vault\/docs\/configuration\/storage\/raft\/#path) of each Vault server. Inside that\ndirectory, there will be a `raft\/` sub-directory. We need to create a\n`raft\/peers.json` file. 
The file should be formatted as a JSON array containing\nthe node ID, `address:port`, and suffrage information of each Vault server you\nwish to be in the cluster:\n\n```json\n[\n  {\n    \"id\": \"node1\",\n    \"address\": \"node1.vault.local:8201\",\n    \"non_voter\": false\n  },\n  {\n    \"id\": \"node2\",\n    \"address\": \"node2.vault.local:8201\",\n    \"non_voter\": false\n  },\n  {\n    \"id\": \"node3\",\n    \"address\": \"node3.vault.local:8201\",\n    \"non_voter\": false\n  }\n]\n```\n\n- `id` `(string: <required>)` - Specifies the node ID of the server. This can be\n  found in the config file, or inside the `node-id` file in the server's data\n  directory if it was auto-generated.\n- `address` `(string: <required>)` - Specifies the host and port of the server. The\n  port is the server's cluster port.\n- `non_voter` `(bool: <false>)` - This controls whether the server is a non-voter.\n  If omitted, it will default to false, which is typical for most clusters. This\n  is an enterprise only feature.\n\nCreate entries for all servers. You must confirm that servers you do not\ninclude here have indeed failed and will not later rejoin the cluster. Ensure\nthat this file is the same across all remaining server nodes.\n\nAt this point, you can restart all the remaining servers. The cluster should be\nin an operable state again. 
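\n\nTo confirm the recovery file really is identical on every node, one quick check is to compare checksums on each server (the path assumes the `\/var\/raft` data directory used in the earlier examples):\n\n```shell-session\n$ sha256sum \/var\/raft\/raft\/peers.json\n```\n\n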
One of the nodes should claim leadership and become\nactive.\n\n### Other recovery methods\n\nFor other, non-quorum related recovery [Vault's\nrecovery](\/vault\/docs\/concepts\/recovery-mode\/) mode can be used.","site":"vault","answers_cleaned":"    layout  docs page title  Integrated Storage description  Learn about the integrated raft storage in Vault         Integrated storage  Vault supports a number of storage options for the durable storage of Vault s information  As of Vault 1 4 an Integrated Storage option is offered  This storage backend does not rely on any third party systems  implements high availability semantics  supports Enterprise Replication features  and provides backup restore workflows   The option stores Vault s data on the server s filesystem and uses a consensus protocol to replicate data to each server in the cluster  More information on the internals of Integrated Storage can be found in the  Integrated Storage internals documentation   vault docs internals integrated storage    Additionally  the  Configuration   vault docs configuration storage raft   docs can help in configuring Vault to use Integrated Storage   The sections below go into various details on how to operate Vault with Integrated Storage      Server to Server communication  Once nodes are joined to one another they begin to communicate using mTLS over Vault s cluster port  The cluster port defaults to  8201   The TLS information is exchanged at join time and is rotated on a cadence   A requirement for Integrated Storage is that the   cluster addr    vault docs concepts ha per node cluster address  configuration option is set  This allows Vault to assign an address to the node ID at join time      Cluster membership  This section will outline how to bootstrap and manage a cluster of Vault nodes running Integrated Storage   Integrated Storage is bootstrapped during the  initialization process   vault tutorials getting started getting started deploy initializing the vault   
and results in a cluster of size 1  Depending on the  desired deployment size   vault docs internals integrated storage  deployment table   nodes can be joined to the active Vault node       Joining nodes  Joining is the process of taking an uninitialized Vault node and making it a member of an existing cluster  In order to authenticate the new node to the cluster it must use the same seal mechanism  If using a Auto Unseal the node must be configured to use the same KMS provider and Key as the cluster it s attempting to join  If using a Shamir seal the unseal keys must be provided to the new node before the join process can complete  Once a node has successfully joined  data from the active node can begin to replicate to it  Once a node has been joined it cannot be re joined to a different cluster   You can either join the node automatically via the config file or manually through the API  both methods described below   When joining a node  the API address of the leader node must be used  We recommend setting the   api addr    vault docs concepts ha direct access  configuration option on all nodes to make joining simpler   Always join nodes to a cluster one at a time and wait for the node to become healthy and  if applicable  a voter before continuing to add more nodes  The status of a node can be verified by performing a   list peers    vault docs commands operator raft list peers  command or by checking the   autopilot state    vault docs commands operator raft autopilot state          retry join  configuration  This method enables setting one  or more  target leader nodes in the config file  When an uninitialized Vault server starts up it will attempt to join each potential leader that is defined  retrying until successful  When one of the specified leaders become active this node will successfully join  When using Shamir seal  the joined nodes will still need to be unsealed manually  When using Auto Unseal the node will be able to join and unseal automatically  
 An example   retry join    vault docs configuration storage raft retry join stanza  config can be seen below      hcl storage  raft      path        var raft     node id    node3     retry join       leader api addr    https   node1 vault local 8200        retry join       leader api addr    https   node2 vault local 8200             Note  in each   retry join    vault docs configuration storage raft retry join stanza  stanza  you may provide a single   leader api addr    vault docs configuration storage raft leader api addr  or   auto join    vault docs configuration storage raft auto join  value  When a cloud   auto join    vault docs configuration storage raft auto join  configuration value is provided  Vault will use  go discover  https   github com hashicorp go discover  to automatically attempt to discover and resolve potential Raft leader addresses   Check the go discover  README  https   github com hashicorp go discover blob master README md  for details on the format of the   auto join    vault docs configuration storage raft auto join  value per cloud provider      hcl storage  raft      path        var raft     node id    node3     retry join       auto join    provider aws region eu west 1 tag key vault tag value     access key id     secret access key                 By default  Vault will attempt to reach discovered peers using HTTPS and port 8200  Operators may override these through the   auto join scheme    vault docs configuration storage raft auto join scheme  and   auto join port    vault docs configuration storage raft auto join port  fields respectively      hcl storage  raft      path        var raft     node id    node3     retry join       auto join    provider aws region eu west 1 tag key vault tag value     access key id     secret access key          auto join scheme    http      auto join port   8201                 Join from the CLI  Alternatively you can use the   join  CLI command   vault docs commands operator raft  join  or the 
API to join a node  The active node s API address will need to be specified      shell session   vault operator raft join https   node1 vault local 8200           Non Voting nodes  Enterprise only   Nodes that are joined to a cluster can be specified as non voters  A non voting node has all of Vault s data replicated to it  but does not contribute to the quorum count  This can be used in conjunction with  Performance Standby   vault docs enterprise performance standby   nodes to add read scalability to a cluster in cases where a high volume of reads to servers are needed      shell session   vault operator raft join  non voter https   node1 vault local 8200          Removing peers  Removing a peer node is a necessary step when you no longer want the node in the cluster  This could happen if the node is rotated for a new one  the hostname permanently changes and can no longer be accessed  you re attempting to shrink the size of the cluster  or for many other reasons  Removing the peer will ensure the cluster stays at the desired size  and that quorum is maintained   To remove the peer you can issue a   remove peer    vault docs commands operator raft remove peer  command and provide the node ID you wish to remove      shell session   vault operator raft remove peer node1 Peer removed successfully            Re joining after removal  If you have used  remove peer  to remove a node from the Raft cluster  but you later want to have this same node re join the cluster  you will need to delete any existing Raft data on the removed node before adding it back to the cluster  This will involve stopping the Vault process  deleting the data directory containing Raft data  and then restarting the Vault process       Listing peers  To see the current peer set for the cluster you can issue a   list peers    vault docs commands operator raft list peers  command  All the voting nodes that are listed here contribute to the quorum and a majority must be alive for Integrated Storage 
to continue to operate      shell session   vault operator raft list peers Node     Address                   State       Voter                                                      node1    node1 vault local 8201    follower    true node2    node2 vault local 8201    follower    true node3    node3 vault local 8201    leader      true         Integrated storage and TLS  We ve glossed over some details in the above sections on bootstrapping clusters  The instructions are sufficient for most cases  but some users have run into problems when using auto join and TLS in conjunction with things like auto scaling  The issue is that  go discover  https   github com hashicorp go discover  on most platforms returns IPs  not hostnames   and because the IPs aren t knowable in advance  the TLS certificates used to secure the Vault API port don t contain these IPs in their IP SANs       Vault networking recap  Before we explore solutions to this problem  let s recapitulate how Vault nodes speak to one another   Vault exposes two TCP ports   the API port   vault docs configuration api addr  and  the cluster port   vault docs configuration cluster addr    The API port is where clients send their Vault HTTP requests   For a single node Vault cluster you don t worry about a cluster port as it won t be used   When you have multiple nodes  you also need a cluster port  This is used by Vault nodes to issue RPCs to one another  e g  to forward requests from a standby node to the active node  or when Raft is in use  to handle leader election and replication of stored data   The cluster port is secured using a TLS certificate that the Vault active node generates internally  It s clear how this can work when not using integrated storage  every node has at least read access to storage  so once the active node has persisted the certificate  the standby nodes can fetch it  and all agree on how cluster traffic should be encrypted   It s less clear how this works with Integrated Storage  as 
there is a chicken and egg problem  Nodes don t have a shared view of storage until the raft cluster has been formed  but we re trying to form the raft cluster  To solve this problem  a Vault node must speak to another Vault node using the API port instead of the cluster port  This is currently the only situation in which Vault Community Edition does this  Vault Enterprise also does something similar when setting up replication       node2  wants to join the cluster  so issues challenge API request to existing member  node1     node1  replies to challenge request with  1  an encrypted random UUID and  2  seal config    node2  must decrypt UUID using seal  if using auto unseal can do it directly  if using Shamir must wait for user to provide enough unseal keys to perform decryption    node2  sends decrypted UUID back to  node1  using answer API    node1  sees  node2  can be trusted  since it has seal access  and replies with a bootstrap package which includes the cluster TLS certificate and private key    node2  gets sent a raft snapshot over the cluster port  After this procedure the new node will never again send traffic to the API port  All subsequent inter node communication will use the cluster port     Raft Join Process   img raft join detailed png       Assisted raft join techniques  The simplest option is to do it by hand  issue   raft join    vault docs commands operator raft join  commands specifying the explicit names or IPs of the nodes to join to  In this section we look at other TLS compatible options that lend themselves more to automation        Autojoin with TLS servername  As of Vault 1 6 2  the simplest option might be to specify a   leader tls servername    vault docs configuration storage raft leader tls servername  in the   retry join    vault docs configuration storage raft retry join stanza  stanza which matches a  DNS SAN  https   en wikipedia org wiki Subject Alternative Name  in the certificate   Note that names in a certificate s DNS SAN 
don't actually have to be registered in a DNS server. Your nodes may have no names
found in DNS, while still using certificate(s) that contain this shared "servername"
in their DNS SANs.

### Autojoin but constrain CIDR, list all possible IPs in certificate

If all the vault node IPs are assigned from a small subnet, e.g. a `/28`, it becomes
practical to put all the IPs that exist in that subnet into the IP SANs of the TLS
certificate the nodes will share.

The drawback here is that the cluster may someday outgrow the CIDR, and changing it
may be a pain. For similar reasons, this solution may be impractical when using
non-voting nodes and dynamically scaling clusters.

### Load balancer instead of autojoin

Most Vault instances are going to have a load balancer (LB) between clients and the
Vault nodes. In that case, the LB knows how to route traffic to working Vault nodes,
and there's no need for auto-join: we can just use the
[`retry_join` stanza](/vault/docs/configuration/storage/raft#retry_join-stanza) with
the LB address as the target.

One potential issue here: some users want a public-facing LB for clients to connect
to Vault, but aren't comfortable with Vault internal traffic egressing from the
internal network it normally runs on.

## Outage recovery

### Quorum maintained

This section outlines the steps to take when a single server or multiple servers are
in a failed state but quorum is still maintained. This means the remaining alive
servers are still operational, can elect a leader, and are able to process write
requests.

If the failed server is recoverable, the best option is to bring it back online and
have it reconnect to the cluster with the same host address. This will return the
cluster to a fully healthy state.

If this is impractical, you need to remove the failed server. Usually, you can issue
a [`remove-peer`](/vault/docs/commands/operator/raft#remove-peer) command to remove
the failed server if it's still a member of the cluster.

If the `remove-peer` command isn't possible, or you'd rather manually re-write the
cluster membership, a [`raft/peers.json`](#manual-recovery-using-peers-json) file can
be written to the configured data directory.

### Quorum lost

In the event that multiple servers are lost, causing a loss of quorum and a complete
outage, partial recovery is still possible.

If the failed servers are recoverable, the best option is to bring them back online
and have them reconnect to the cluster using the same host addresses. This will
return the cluster to a fully healthy state.

If the failed servers are not recoverable, partial recovery is possible using data on
the remaining servers in the cluster. There may be data loss in this situation
because multiple servers were lost, so information about what's committed could be
incomplete. The recovery process implicitly commits all outstanding Raft log entries,
so it's also possible to commit data that was uncommitted before the failure.

See the section below on
[manual recovery using peers.json](#manual-recovery-using-peers-json) for details of
the recovery procedure. You include only the remaining servers in the `peers.json`
recovery file. The cluster should be able to elect a leader once the remaining
servers are all restarted with an identical `peers.json` configuration.

Any servers you introduce later can be fresh with totally clean data directories and
joined using Vault's join command.

In extreme cases, it should be possible to recover with just a single remaining
server by starting that single server with itself as the only peer in the
`peers.json` recovery file.

### Manual recovery using peers.json

Using `raft/peers.json` for recovery can cause uncommitted Raft log entries to be
implicitly committed, so this should only be used after an outage where no other
option is available to recover a lost server. Make sure you don't have any automated
processes that will put the peers file in place on a periodic basis.

To begin, stop all remaining servers.

The next step is to go to the
[configured data path](/vault/docs/configuration/storage/raft#path) of each Vault
server. Inside that directory, there will be a `raft/` sub-directory. We need to
create a `raft/peers.json` file. The file should be formatted as a JSON array
containing the node ID, `address:port`, and suffrage information of each Vault server
you wish to be in the cluster:

```json
[
  {
    "id": "node1",
    "address": "node1.vault.local:8201",
    "non_voter": false
  },
  {
    "id": "node2",
    "address": "node2.vault.local:8201",
    "non_voter": false
  },
  {
    "id": "node3",
    "address": "node3.vault.local:8201",
    "non_voter": false
  }
]
```

- `id` `(string: <required>)` - Specifies the node ID of the server. This can be
  found in the config file, or inside the `node-id` file in the server's data
  directory if it was auto-generated.

- `address` `(string: <required>)` - Specifies the host and port of the server. The
  port is the server's cluster port.

- `non_voter` `(bool: <false>)` - This controls whether the server is a non-voter.
  If omitted, it will default to false, which is typical for most clusters. This is
  an Enterprise-only feature.

Create entries for all servers. You must confirm that servers you do not include
here have indeed failed and will not later rejoin the cluster. Ensure that this file
is the same across all remaining server nodes.

At this point, you can restart all the remaining servers. The cluster should be in
an operable state again. One of the nodes should claim leadership and become active.

### Other recovery methods

For other, non-quorum related recovery, Vault's
[recovery mode](/vault/docs/concepts/recovery-mode) can be used."}
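The `peers.json` recovery file described above must be byte-identical on every remaining node, and a malformed entry only surfaces when the servers restart. A minimal Python sketch (node names, addresses, and the `render_peers` helper are hypothetical, not part of Vault) that validates the three fields Vault reads and renders the file:

```python
import json

# Hypothetical surviving nodes after an outage; each "id" must match the
# node-id of a real server, and "address" must use its cluster port
# (8201 by default).
surviving = [
    {"id": "node1", "address": "node1.vault.local:8201", "non_voter": False},
    {"id": "node2", "address": "node2.vault.local:8201", "non_voter": False},
]

def render_peers(nodes):
    """Check id / address / non_voter for each entry, then render peers.json."""
    for n in nodes:
        assert isinstance(n["id"], str) and n["id"], "id must be a non-empty string"
        host, sep, port = n["address"].rpartition(":")
        assert sep and host and port.isdigit(), "address must be host:cluster-port"
        assert isinstance(n["non_voter"], bool), "non_voter must be a boolean"
    return json.dumps(nodes, indent=2)

# Write the identical output to <data-path>/raft/peers.json on every
# remaining node before restarting them.
print(render_peers(surviving))
```

Generating the file from one source of truth is an easy way to satisfy the "ensure this file is the same across all remaining server nodes" requirement.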
{"questions":"vault page title Migration checklist Migration checklist Tip title This is a decision making checklist layout docs Use this checklist for decision making related to migrating your Vault deployment to Integrated Storage","answers":"---\nlayout: docs\npage_title: Migration checklist\ndescription: Use this checklist for decision making related to migrating your Vault deployment to Integrated Storage.\n---\n\n# Migration checklist\n\n<Tip title=\"This is a decision-making checklist\">\n\nThe purpose of this checklist is not to walk you through the storage\nmigration steps. This content provides a quick self-check on whether it is in your\nbest interest to migrate your Vault storage from an external system to\nIntegrated Storage.\n\n<\/Tip>\n\n## Who should use this checklist?\n\nIntegrated Storage is a recommended storage option, made available in\nVault 1.4. Vault continues to support other storage solutions\nlike Consul.\n\nYou should use this checklist if you are operating a Vault deployment backed\nby external storage like Consul, and you are considering migration to\nIntegrated Storage.\n\n## Understand architectural differences\n\nIt is important that you understand the differences between operating Vault\nwith external storage and operating with Integrated Storage. 
The following\nsections detail key differences in architecture between Vault with Consul\nstorage, and Vault with Integrated Storage to help inform your decision.\n\n### Reference architecture with Consul\n\nThe recommended number of Vault instances is **3** in a cluster, which connects\nto a Consul cluster that may have **5** or more nodes, as shown in the diagram.\n\nA total of 8 virtual machines host this highly available Vault architecture.\n\n<ImageConfig hideBorder>\n\n![Reference Diagram](\/img\/diagram-vault-ra-3-az.png)\n\n<\/ImageConfig>\n\nThe processing requirements depend on the encryption and messaging workloads.\nMemory requirements are dependent on the total size of secrets stored in\nmemory. The Vault server itself has minimal storage requirements, but\nthe Consul nodes should have a high-performance physical storage system.\n\n### Reference architecture with Integrated Storage\n\nThe recommended number of Vault instances is **5** in a cluster. In a single HA\ncluster, all Vault nodes share the data while an active node holds the lock;\ntherefore, only the active node has write access. To achieve n-2 redundancy\n(meaning that the cluster can still function after losing 2 nodes),\nan ideal size for a Vault HA cluster is 5 nodes.\n\n<Tip title=\"More deployment details in the documentation\">\n\nRefer to the [Integrated\nStorage](\/vault\/docs\/internals\/integrated-storage#deployment-table)\ndocumentation for more deployment details.\n\n<\/Tip>\n\n<ImageConfig hideBorder>\n\n![Reference Diagram Details](\/img\/diagram-vault-integrated-ra-3_az.png)\n\n<\/ImageConfig>\n\nBecause the data gets persisted on the same host, the Vault server should be\nhosted on a relatively high-performance hard disk system.\n\n## Consul vs. Integrated Storage\n\nIntegrated Storage eliminates the need for external storage; therefore,\nVault is the only software you need to stand up a cluster. 
This means that\nthe host machine must have disk capacity equal to or\ngreater than that of the existing external storage backend.\n\n### System requirements comparison\n\nThe fundamental difference between Vault's Integrated Storage and Consul is\nthat the Integrated Storage stores everything on disk while [Consul\nKV](\/consul\/docs\/dynamic-app-config\/kv) stores everything in memory,\nwhich impacts the host's RAM.\n\n#### Machine sizes for Vault - Consul as its storage backend\n\nIt is recommended to avoid hosting Consul on an instance with burstable CPU.\n\n| Size  | CPU      | Memory       | Disk  | Typical Cloud Instance Types              |\n| ----- | -------- | ------------ | ----- | ----------------------------------------- |\n| Small | 2 core   | 4-8 GB RAM   | 25 GB | **AWS:** m5.large                         |\n|       |          |              |       | **Azure:** Standard_D2_v3                 |\n|       |          |              |       | **GCE:** n1-standard-2, n1-standard-4     |\n| Large | 4-8 core | 16-32 GB RAM | 50 GB | **AWS:** m5.xlarge, m5.2xlarge            |\n|       |          |              |       | **Azure:** Standard_D4_v3, Standard_D8_v3 |\n|       |          |              |       | **GCE:** n1-standard-8, n1-standard-16    |\n\n#### Machine sizes for Vault with Integrated Storage\n\n| Size  | CPU      | Memory       | Disk   | Typical Cloud Instance Types               |\n| ----- | -------- | ------------ | ------ | ------------------------------------------ |\n| Small | 2 core   | 8-16 GB RAM  | 100 GB | **AWS:** m5.large, m5.xlarge               |\n|       |          |              |        | **Azure:** Standard_D2_v3, Standard_D4_v3  |\n|       |          |              |        | **GCE:** n2-standard-2, n2-standard-4      |\n| Large | 4-8 core | 32-64 GB RAM | 200 GB | **AWS:** m5.2xlarge, m5.4xlarge            |\n|       |          |              |        | **Azure:** Standard_D8_v3, Standard_D16_v3 |\n|       
|          |              |        | **GCE:** n2-standard-8, n2-standard-16     |\n\nIf many secrets are being generated or rotated frequently, this information will\nneed to be flushed to the disk often. Therefore, the infrastructure should have\na relatively high-performance hard disk system when using the integrated\nstorage.\n\n<Note title=\"A note about the importance of IOPS\">\n\n Vault's Integrated Storage is disk-bound; therefore, care should be taken when planning storage volume size and performance. For cloud providers, IOPS can be dependent on volume size and\/or provisioned IOPS. It is recommended to provision IOPS and avoid burstable IOPS. Monitoring of IOPS performance should be implemented in order to tune the storage volume to the IOPS load.\n\n<\/Note>\n\n### Performance considerations\n\nBecause Consul KV is memory-bound, it is necessary to take a snapshot frequently.\nHowever, Vault's Integrated Storage persists everything on the disk which eliminates\nthe need for such frequent snapshot operations. Take snapshots to back up the data\nso that you can restore them in case of data loss. 
Less frequent snapshotting reduces the performance cost\nthat snapshot operations introduce.\n\nWhen considering disk performance, note that Vault writes data changes to disk immediately\nrather than in batched snapshots as Consul does, so it is important to monitor IOPS as well\nas disk queues to limit storage bottlenecks.\n\n### Inspect Vault data\n\nInspection of Vault data differs considerably from the `consul kv` commands used\nto inspect Consul's KV store.\nConsult the [Inspect Data in Integrated Storage](\/vault\/tutorials\/monitoring\/inspect-data-integrated-storage)\ntutorial to learn more about querying Integrated Storage data.\n\n### Summary\n\nThe table below highlights the differences between Consul and Integrated\nStorage.\n\n| Consideration       | Consul as storage backend                                                  | Vault Integrated Storage                                                                     |\n| ------------------- | -------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- |\n| System requirement  | Memory optimized machine                                                   | Storage optimized high IOPS machine                                                          |\n| Data snapshot       | Frequent snapshots                                                         | Normal data backup strategy                                                                  |\n| Snapshot automation | Snapshot agent (**Consul Enterprise only**)                                | Automatic snapshot (**Vault Enterprise v1.6.0 and later**)                                   |\n| Data inspection     | [Online, use `consul kv` command](\/vault\/tutorials\/monitoring\/inspecting-data-consul) | [Offline, requires using recovery mode](\/vault\/tutorials\/monitoring\/inspect-data-integrated-storage) |\n| Autopilot           | Supported                   
                                                | Supported (**Vault 1.7.0 and later**)                                  |\n\n## Self-check questions\n\n- [ ] Where is the product expertise?\n  - [ ] Do you already have Consul expertise?\n  - [ ] Are you concerned about lack of Consul knowledge?\n- [ ] Do you experience any technical issues with Consul?\n- [ ] What motivates the data migration from the current storage to Integrated Storage?\n  - [ ] Reduce the operational overhead?\n  - [ ] Reduce the number of machines to run?\n  - [ ] Reduce the cloud infrastructure cost?\n- [ ] Do you have a staging environment where you can run production loads and verify that everything works as you expect?\n- [ ] Have you thought through the storage backup process or workflow after migrating to the Integrated Storage?\n- [ ] Do you currently rely heavily on using Consul to inspect Vault data?\n\n## Tutorials\n\nIf you are ready to migrate the current storage backend to Integrated Storage,\nrefer to the [Storage Migration Tutorial - Consul to Integrated Storage](\/vault\/tutorials\/raft\/raft-migration).\n\nTo deploy a new cluster with Integrated Storage, refer to the [Vault HA Cluster\nwith Integrated Storage](\/vault\/tutorials\/raft\/raft-storage) tutorial.","site":"vault"}
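The n-2 redundancy sizing discussed above (5 Vault nodes tolerating 2 failures) follows from Raft quorum arithmetic: a majority of voting nodes must stay alive. A small illustrative Python sketch (not from the Vault docs; the function names are my own):

```python
def quorum(n: int) -> int:
    """Raft needs a majority of the n voting nodes to elect a leader
    and commit writes."""
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    """Number of node failures the cluster can absorb while keeping quorum."""
    return n - quorum(n)

# Odd cluster sizes maximize tolerated failures per node added:
for n in (3, 5, 7):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```

With 5 nodes, quorum is 3, so the cluster keeps operating after losing any 2 nodes, which is the n-2 redundancy the checklist recommends.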
{"questions":"vault Client count calculation Technical overview of client count calculations in Vault Vault provides usage telemetry for the number of clients based on the number of layout docs page title Client count calculation","answers":"---\nlayout: docs\npage_title: Client count calculation\ndescription: |-\n  Technical overview of client count calculations in Vault\n---\n\n# Client count calculation\n\nVault provides usage telemetry for the number of clients based on the number of\nunique entity assignments within a Vault cluster over a given billing period:\n\n- Standard entity assignments based on authentication method for active entities.\n- Constructed entity assignments for active non-entity tokens, including batch\n  tokens created by performance standby nodes.\n- Certificate entity assignments for ACME connections.\n- Secrets being synced to at least one sync destination.\n\n```markdown\nCLIENT_COUNT_PER_CLUSTER = UNIQUE_STANDARD_ENTITIES +\n                           UNIQUE_CONSTRUCTED_ENTITIES +\n                           UNIQUE_CERTIFICATE_ENTITIES +\n                           UNIQUE_SYNCED_SECRETS\n```\n\nVault does not aggregate or de-duplicate clients across clusters, but all logs\nand precomputed reports are included in DR replication.\n\n## How Vault tracks clients\n\nEach time a client authenticates, Vault checks whether the corresponding entity\nID has already been recorded in the client log as active for the current month:\n\n- **If no record exists**, Vault adds an entry for the entity ID.\n- If a record exists but the entity was last active **prior to the current month**,\n  Vault adds a new entry to the client record for the entity ID.\n- If a record exists and the entity was last active **within the current month**,\n  Vault does not add a new entry to the client record for the entity ID.\n\nFor example:\n\n- Two non-entity tokens under the same namespace, with the same alias name and\n  policy assignment receive the same entity 
assignment and are only counted\n  **once**.\n- Two authentication requests from a single ACME client for the same certificate\n  identifiers from different mounts receive the same entity assignments and\n  are counted **once**.\n- An application authenticating with AppRole receives the same entity assignment\n  every time and is only counted **once**.\n\nAt the **end of each month**, Vault pre-computes reports for each cluster on the\nnumber of active entities, per namespace, for each time period within the\nconfigured retention period. By de-duplicating records from the current month\nagainst records for the previous month, Vault ensures entities that remain\nactive within every calendar month are only counted once for the year.\n\nThe deduplication process has two additional consequences:\n\n1. Detailed reporting lags by 1 month at the start of the billing period.\n1. Billing period reports that include the current month must use an\n   approximation for the number of new clients in the current month.\n\n## How Vault approximates current-month client count\n\nVault approximates client count for the current month using a\n[hyperloglog algorithm](https:\/\/en.wikipedia.org\/wiki\/HyperLogLog) that looks\nat the difference between the cardinalities of:\n\n- the number of clients across the **entire** billing period, and\n- the number of clients across the billing period **excluding** clients from the current month.\n\nThe approximation algorithm uses the\n[axiomhq](https:\/\/github.com\/axiomhq\/hyperloglog) library with fourteen\nregisters and sparse representations (when applicable). 
The multiset for the\ncalculation is the total number of clients within a billing period, and the\naccuracy estimate for the approximation decreases as the difference between the\nnumber of clients in the current month and the number of clients in the billing\nperiod increases.\n\n### Testing verification for client count approximations\n\nGiven `CM` as the number of clients for the current month and `BP` as the number\nof clients in the billing period, we found that the approximation becomes\nincreasingly imprecise as:\n\n- the difference between `BP` and `CM` increases,\n- the value of `CM` approaches zero, and\n- the number of months in the billing period increases.\n\nThe maximum observed error rate\n(`ER = (FOUND_NEW_CLIENTS \/ EXPECTED_NEW_CLIENTS)`) was 30% for 10,000 clients\nor less, with an error rate of 5 &ndash; 10% in the average case.\n\nFor the purposes of predictive analysis, the following tables list a random\nsample of the values we found during testing for `CM`, `BP`, and `ER`.\n\n<Tabs>\n\n<Tab heading=\"Single-month tests\">\n\n| Current month (`CM`) | Billing period (`BP`) | Error rate (`ER`) |\n| :-----------------: | :------------------: | :---------------: |\n| 7                   | 10                   | 0%                |\n| 20                  | 600                  | 0%                |\n| 20                  | 1000                 | 0%                |\n| 20                  | 6000                 | 10%               |\n| 20                  | 10000                | 10%               |\n| 200                 | 600                  | 0%                |\n| 200                 | 10000                | 7%                |\n| 400                 | 6000                 | 5%                |\n| 2000                | 10000                | 4%                |\n\n<\/Tab>\n\n<Tab heading=\"Multi-month \/ multi-segment tests\">\n\n| Current month (`CM`) | Billing period (`BP`) | Error rate (`ER`) |\n| :-----------------: | :------------------: | 
:---------------: |\n| 20                  | 15                   | 0%                |\n| 20                  | 100                  | 0%                |\n| 20                  | 1000                 | 0%                |\n| 20                  | 10000                | 30%               |\n| 200                 | 10000                | 6%                |\n| 2000                | 10000                | 2%                |\n\n<\/Tab>\n\n<\/Tabs>\n\n## Resource costs for client computation\n\nIn addition to the storage used for the pre-computed reports, each\nactive entity in the client log consumes a few bytes of storage. As a safety\nmeasure against runaway storage growth, Vault limits the number of entity\nrecords to 656,000 per month, but typical storage costs are much less.\n\nOn average, 1000 monthly active entities require 3.0 MiB of storage capacity\nover the default 48-month retention period.\n\n@include \"content-footer-title.mdx\"\n\n<Tabs>\n\n<Tab heading=\"Related concepts\">\n<ul>\n  <li>\n    <a href=\"\/vault\/docs\/concepts\/client-count\/\">Clients and entities<\/a>\n  <\/li>\n  <li>\n    <a href=\"\/vault\/docs\/concepts\/client-count\/faq\">Client count FAQ<\/a>\n  <\/li>\n<\/ul>\n<\/Tab>\n\n<Tab heading=\"Related API docs\">\n<ul>\n  <li>\n    <a href=\"\/vault\/api-docs\/system\/internal-counters#client-count\">Client Count API<\/a>\n  <\/li>\n  <li>\n    <a href=\"\/vault\/api-docs\/system\/internal-counters\">Internal counters API<\/a>\n  <\/li>\n<\/ul>\n<\/Tab>\n\n<Tab heading=\"Related tutorials\">\n<ul>\n  <li>\n    <a href=\"\/vault\/tutorials\/monitoring\/usage-metrics\">\n      Vault Usage Metrics in Vault UI\n    <\/a>\n  <\/li>\n  <li>\n    <a href=\"\/vault\/tutorials\/monitoring\/usage-metrics\">KMIP Client metrics<\/a>\n  <\/li>\n<\/ul>\n<\/Tab>\n\n<Tab heading=\"Other resources\">\n<ul>\n  <li>\n    <a href=\"https:\/\/github.com\/axiomhq\/hyperloglog#readme\">Accuracy estimates for the axiomhq hyperloglog library<\/a>\n
 <\/li>\n  <li>\n    Blog post: <a href=\"https:\/\/www.hashicorp.com\/blog\/onboarding-applications-to-vault-using-terraform-a-practical-guide\">\n      Onboarding Applications to Vault Using Terraform: A Practical Guide\n    <\/a>\n  <\/li>\n<\/ul>\n<\/Tab>\n\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Client count calculation description       Technical overview of client count calculations in Vault        Client count calculation  Vault provides usage telemetry for the number of clients based on the number of unique entity assignments within a Vault cluster over a given billing period     Standard entity assignments based on authentication method for active entities    Constructed entity assignments for active non entity tokens  including batch   tokens created by performance standby nodes    Certificate entity assignments for ACME connections    Secrets being synced to at least one sync destination      markdown CLIENT COUNT PER CLUSTER   UNIQUE STANDARD ENTITIES                              UNIQUE CONSTRUCTED ENTITIES                              UNIQUE CERTIFICATE ENTITIES                              UNIQUE SYNCED SECRETS      Vault does not aggregate or de duplicate clients across clusters  but all logs and precomputed reports are included in DR replication      How Vault tracks clients  Each time a client authenticates  Vault checks whether the corresponding entity ID has already been recorded in the client log as active for the current month       If no record exists    Vault adds an entry for the entity ID    If a record exists but the entity was last active   prior to the current month      Vault adds a new entry to the client record for the entity ID    If a record exists and the entity was last active   within the current month      Vault does not add a new entry to the client record for the entity ID   For example     Two non entity tokens under the same namespace  with the same alias name and   policy assignment 
receive the same entity assignment and are only counted     once      Two authentication requests from a single ACME client for the same certificate   identifiers from different mounts receive the same entity assignments and   are counted   once      An application authenticating with AppRole receive the same entity assignment   every time and only counted   once     At the   end of each month    Vault pre computes reports for each cluster on the number of active entities  per namespace  for each time period within the configured retention period  By de duplicating records from the current month against records for the previous month  Vault ensures entities that remain active within every calendar month are only counted once for the year   The deduplication process has two additional consequences   1  Detailed reporting lags by 1 month at the start of the billing period  1  Billing period reports that include the current month must use an    approximation for the number of new clients in the current month      How Vault approximates current month client count  Vault approximates client count for the current month using a  hyperloglog algorithm  https   en wikipedia org wiki HyperLogLog  that looks at the difference between the cardinalities of     the number of clients across the   entire   billing period  and   the number of clients across the billing period   excluding   clients from the current month   The approximation algorithm uses the  axiomhq  https   github com axiomhq hyperloglog  library with fourteen registers and sparse representations  when applicable   The multiset for the calculation is the total number of clients within a billing period  and the accuracy estimate for the approximation decreases as the difference between the number of clients in the current month and the number of clients in the billing period increases       Testing verification for client count approximations  Given  CM  as the number of clients for the current month and  BP  as 
the number of clients in the billing period  we found that the approximation becomes increasingly imprecise as     the difference between  BC  and  CM  increases   the value of  CM  approaches zero    the number of months in the billing period increase   The maximum observed error rate   ER    FOUND NEW CLIENTS   EXPECTED NEW CLIENTS    was 30  for 10 000 clients or less  with an error rate of 5  ndash  10  in the average case   For the purposes of predictive analysis  the following tables list a random sample the values we found during testing for  CM    BP   and  ER     Tabs    Tab heading  Single month tests      Current month   CM     Billing period   BP     Error rate   ER                                                                          7                     10                     0                     20                    600                    0                     20                    1000                   0                     20                    6000                   10                    20                    10000                  10                    200                   600                    0                     200                   10000                  7                     400                   6000                   5                     2000                  10000                  4                      Tab    Tab heading  Multi month   multi segment tests      Current month   CM     Billing period   BP     Error rate   ER                                                                          20                    15                          0                20                    100                         0                20                    1000                        0                20                    10000                       30               200                   10000                       6                2000                  10000                       2                 Tab     Tabs      Resource costs for 
client computation  In addition to the storage used for storing the pre computed reports  each active entity in the client log consumes a few bytes of storage  As a safety measure against runaway storage growth  Vault limits the number of entity records to 656 000 per month  but typical storage costs are much less   On average  1000 monthly active entities requires 3 0 MiB of storage capacity over the default 48 month retention period    include  content footer title mdx    Tabs    Tab heading  Related concepts    ul     li       a href   vault docs concepts client count   Clients and entities  a      li     li       a href   vault docs concepts client count faq  Client count FAQ  a      li    ul    Tab    Tab heading  Related API docs    ul     li       a href   vault api docs system internal counters client count  Client Count API  a      li     li       a href   vault api docs system internal counters  Internal counters API  a      li    ul    Tab    Tab heading  Related tutorials    ul     li       a href   vault tutorials monitoring usage metrics         Vault Usage Metrics in Vault UI       a      li     li       a href   vault tutorials monitoring usage metrics  KMIP Client metrics  a      li    ul    Tab    Tab heading  Other resources    ul     li       a href  https   github com axiomhq hyperloglog readme  Accuracy estimates for the axiomhq hyperloglog library  a      li     li      Blog post   a href  https   www hashicorp com blog onboarding applications to vault using terraform a practical guide         Onboarding Applications to Vault Using Terraform  A Practical Guide       a      li    ul    Tab     Tabs "}
{"questions":"vault in Vault page title Clients and entities Clients and entities layout docs Technical overview covering the concept of clients entities and entity IDs","answers":"---\nlayout: docs\npage_title: Clients and entities\ndescription: |-\n  Technical overview covering the concept of clients, entities, and entity IDs\n  in Vault\n---\n\n# Clients and entities\n\nAnything that connects and authenticates to Vault to accomplish a task is a\n**client**. For example, a user logging into a cluster to manage policies or a\nmachine-based system (application or cloud service) requesting a database token\nare both considered clients.\n\n![Vault Client Workflows](https:\/\/www.datocms-assets.com\/2885\/1617325020-valult-client-workflows.png)\n\nWhile there are many different potential clients, the most common are:\n\n1. **Human users** interacting directly with Vault.\n1. **Applications and microservices**.\n1. **Servers and platforms** like VMs, Docker containers, or Kubernetes pods.\n1. **Orchestrators** like Nomad, Terraform, Ansible, ACME, and other continuous\n   integration \/ continuous delivery (CI\/CD) pipelines.\n1. 
**Vault agents and proxies** that act on behalf of an application or\n   microservice.\n\n## Identity and entity assignment\n\nAuthorized clients can connect to Vault with a variety of authentication methods.\n\nAuthorization source       | AuthN method\n-------------------------- | ---------------------------------\nExternally managed or SSO  | Active Directory, LDAP, OIDC, JWT, GitHub, username+password\nPlatform- or server-based  | Kubernetes, AWS, GCP, Azure, Cert, Cloud Foundry\nSelf                       | AppRole, tokens with no associated authN path or role\n\n![Vault client types](https:\/\/www.datocms-assets.com\/2885\/1617325030-vault-clients.png)\n\nWhen a client authenticates, Vault assigns a unique identifier\n(**client entity**) in the [Vault identity system](\/vault\/docs\/secrets\/identity)\nbased on the authentication method used or a previously assigned alias.\n**Entity aliases** let clients authenticate with multiple methods but still be\nassociated with a single policy, share resources, and count as the same entity,\nregardless of the authentication method used for a particular session.\n\n## Standard entity assignments\n\n@include \"authn-names.mdx\"\n\nEach authentication method has a unique ID string that corresponds to a client\nentity used for telemetry. For example, a microservice authenticating with\nAppRole takes the associated role ID as the entity. If you are running at scale\nand have multiple copies of the microservices using the same role id, the full\nset of instances will share the same identifier.\n\nAs a result, it is critical that you configure different clients\n(microservices, humans, applications, services, platforms, servers, or pipelines)\nin a way that results in distinct clients having unique identifiers. 
For example,\nthe role IDs should be different **between** two microservices, MicroServiceA and\nMicroServiceB, even if the **specific instances** of MicroServiceA and\nMicroServiceB share a common role ID.\n\n## Entity assignment with ACME\n\nVault treats all ACME connections that authenticate under the same certificate\nidentifier (domain) as the same **certificate entity** for client count\ncalculations.\n\nFor example:\n\n- ACME client requests (from the same server or separate servers) for the same\n  certificate identifier (a unique combination of CN, DNS, SANS and IP SANS)\n  are treated as the same entity.\n- If an ACME client makes a request for `a.test.com`, and subsequently makes a new\n  request for `b.test.com` and `*.test.com`, then two distinct entities will be created,\n  one for `a.test.com` and another for the combination of `b.test.com` and `*.test.com`.\n- Overlap of certificate identifiers from different ACME clients will be treated\n  as the same entity. For example, if client 1 requests `a.test.com` and client 2 requests\n  `a.test.com`, a single entity is created for both requests.\n\n## Secret sync clients\n\nVault can automatically update secrets in external destinations with [secret sync](\/vault\/docs\/sync).\nA secret that gets synced to one or more destinations is considered a **secret\nsync client** for client count calculations.\n\nNote that:\n\n- Each synced secret is counted distinctly based on the path and namespace of\n  the secret. If you have secrets at paths `kv1\/secret` and `kv2\/secret`,\n  which are both synced, then two distinct secret syncs will be counted.\n- A secret can be synced to multiple different destinations, and it will still\n  only be counted as one secret sync. If `kv\/secret` is synced to both Azure Key\n  Vault and AWS Secret Manager, this will be counted as only one secret sync\n  client.\n- Secret sync clients are only created after you create an association between a\n  secret and a store. 
If you create `kv\/secret` and do not associate this secret\n  with any destinations, it will not be counted as a secret sync client.\n- Secret sync clients are registered in Vault's client counting system so long\n  as the sync is active. If you create `kv\/secret` and associate it with a\n  destination in January, update the secret in May, and then delete the secret\n  in September, Vault will consider this client as having been seen throughout\n  the entire period of January through September.\n\n## Entity assignment with namespaces\n\nA namespace represents an isolated, logical space within a single Vault\ncluster and is typically used for administrative purposes.\n\nWhen a client authenticates **within a given namespace**, Vault assigns the same\nclient entity to activities within any child namespaces because the namespaces\nexist within the same larger scope.\n\nWhen a client authenticates **across namespace boundaries**, Vault treats the\nsingle client as two distinct entities because the client is operating\nacross different scopes with different policy assignments and resources.\n\nFor example:\n\n- Different requests under parent and child namespaces from a single client\n  authenticated under the **parent** namespace are assigned **the same entity\n  ID**. All the client activities occur **within** the boundaries of the\n  namespace referenced in the original authentication request.\n- Different requests under parent and child namespaces from a single client\n  authenticated under the **child** namespace are assigned **different entity\n  IDs**. Some of the client activities occur **outside** the boundaries of the\n  namespace referenced in the original authentication request.\n- Requests by the same client to two different namespaces, NAMESPACE<sub>A<\/sub>\n  and NAMESPACE<sub>B<\/sub>, are assigned **different entity IDs**.\n\n## Entity assignment with non-entity tokens\n\nVault uses tokens as the core method for authentication. 
You can use tokens to\nauthenticate directly, or use token [auth methods](\/vault\/docs\/concepts\/auth)\nto dynamically generate tokens based on external identities.\n\nWhen clients authenticate with the [token auth method](\/vault\/docs\/auth\/token)\n**without** a client identity, the result is a **non-entity token**. For example,\na service might use the token authentication method to create a token for a user\nwhose explicit identity is unknown.\n\nUltimately, non-entity tokens trace back to a particular client or purpose so\nVault assigns unique entity IDs to non-entity tokens based on a combination of\nthe:\n\n- assigned entity alias name (if present),\n- associated policies, and\n- namespace under which the token was created.\n\nIn **rare** cases, tokens may be created outside of the Vault identity system\n**without** an associated entity or identity. Vault treats every unaffiliated\ntoken as a unique client for production usage. We strongly discourage the use of\nunaffiliated tokens and recommend that you always associate a token with an\nentity alias and token role.\n\n<Note title=\"Behavior change in Vault 1.9+\">\n  As of Vault 1.9, all non-entity tokens with the same namespace and policy\n  assignments are treated as the same client entity. 
Prior to Vault 1.9, every\n  non-entity token was treated as a unique client entity, which drastically\n  inflated telemetry around client count.\n\n  If you are using Vault 1.8 or earlier, and need to address client count\n  inflation without upgrading, we recommend creating a\n  [token role](\/vault\/api-docs\/auth\/token#create-update-token-role) with\n  allowable entity aliases and assigning all tokens to an appropriate\n  [role and entity alias name](\/vault\/api-docs\/auth\/token#create-token) before\n  using them.\n<\/Note>\n\n@include \"content-footer-title.mdx\"\n\n<Tabs>\n\n<Tab heading=\"Related concepts\">\n<ul>\n  <li>\n    <a href=\"\/vault\/docs\/concepts\/client-count\/counting\">Client count calculation<\/a>\n  <\/li>\n  <li>\n    <a href=\"\/vault\/docs\/concepts\/client-count\/faq\">Client count FAQ<\/a>\n  <\/li>\n<\/ul>\n<\/Tab>\n\n<Tab heading=\"Related tutorials\">\n<ul>\n  <li>\n    <a href=\"\/vault\/tutorials\/auth-methods\/identity\">Identity: Entities and Groups<\/a>\n  <\/li>\n  <li>\n    <a href=\"\/vault\/tutorials\/enterprise\/namespaces\">Secure Multi-Tenancy with Namespaces<\/a>\n  <\/li>\n<\/ul>\n<\/Tab>\n\n<Tab heading=\"Other resources\">\n<ul>\n  <li>\n    Article: <a href=\"https:\/\/www.hashicorp.com\/identity-based-security-and-low-trust-networks\">\n      Identity-based Security and Low-trust Networks\n    <\/a>\n  <\/li>\n<\/ul>\n<\/Tab>\n\n<\/Tabs>","site":"vault","answers_cleaned":"    layout  docs page title  Clients and entities description       Technical overview covering the concept of clients  entities  and entity IDs   in Vault        Clients and entities  Anything that connects and authenticates to Vault to accomplish a task is a   client    For example  a user logging into a cluster to manage policies or a machine based system  application or cloud service  requesting a database token are both considered clients     Vault Client Workflows  https   www datocms assets com 2885 1617325020 valult client workflows 
png   While there are many different potential clients  the most common are   1    Human users   interacting directly with Vault  1    Applications and microservices    1    Servers and platforms   like VMs  Docker containers  or Kubernetes pods  1    Orchestrators   like Nomad  Terraform  Ansible  ACME  and other continuous    integration   continuous delivery  CI CD  pipelines  1    Vault agents and proxies   that act on behalf of an application or    microservice      Identity and entity assignment  Authorized clients can connect to Vault with a variety of authentication methods   Authorization source         AuthN method                                                                Externally managed or SSO    Active Directory  LDAP  OIDC  JWT  GitHub  username password Platform  or server based    Kubernetes  AWS  GCP  Azure  Cert  Cloud Foundry Self                         AppRole  tokens with no associated authN path or role    Vault client types  https   www datocms assets com 2885 1617325030 vault clients png   When a client authenticates  Vault assigns a unique identifier    client entity    in the  Vault identity system   vault docs secrets identity  based on the authentication method used or a previously assigned alias    Entity aliases   let clients authenticate with multiple methods but still be associated with a single policy  share resources  and count as the same entity  regardless of the authentication method used for a particular session      Standard entity assignments   include  authn names mdx   Each authentication method has a unique ID string that corresponds to a client entity used for telemetry  For example  a microservice authenticating with AppRole takes the associated role ID as the entity  If you are running at scale and have multiple copies of the microservices using the same role id  the full set of instances will share the same identifier   As a result  it is critical that you configure different clients  microservices  humans  
applications  services  platforms  servers  or pipelines  in a way that results in distinct clients having unique identifiers  For example  the role IDs should be different   between   two microservices  MicroserviceA and MicroServiceB  even if the   specific instances   of MicroServiceA and MicroServiceB share a common role ID      Entity assignment with ACME  Vault treats all ACME connections that authenticate under the same certificate identifier  domain  as the same   certificate entity   for client count calculations   For example     ACME client requests  from the same server or separate servers  for the same   certificate identifier  a unique combination of CN  DNS  SANS and IP SANS    are treated as the same entity    If an ACME client makes a request for  a test com   and subsequently makes a new   request for  b test com  and    test com  then two distinct entities will be created    one for  a test com  and another for the combination of  b test com  and    test com     Overlap of certificate identifiers from different ACME clients will be treated   as the same entity e g  if client 1 requests  a test com  and client 2 requests    a test com  a single entity is created for both requests      Secret sync clients Vault can automatically update secrets in external destinations with  secret sync   vault docs sync   A secret that gets synced to one or more destinations is considered a   secret sync client   for client count calculations   Note that     Each synced secret is counted distinctly based on the path and namespace of   the secret  If you have secrets at path  kv1 secret  and  kv2 secret    which are both synced  then two distinct secret syncs will be counted    A secret can be synced to multiple different destinations  and it will still   only be counted as one secret sync  If  kv secret  is synced to both Azure Key   Vault and AWS Secret Manager  this will be counted as only one secret sync   client    Secret sync clients are only created after you 
create an association between a   secret and a store  If you create  kv secret  and do not associate this secret   with any destinations  it will not be counted as a secret sync client    Secret syncs clients are registered in Vault s client counting system so long   as the sync is active  If you create  kv secret  and associate it with a   destination in January  update the secret in May  and then delete the secret   in September  Vault will consider this client as having been seen throughout   the entire period of January through September      Entity assignment with namespaces  A namespace represents a isolated  logical space within a single Vault cluster and is typically used for administrative purposes   When a client authenticates   within a given namespace    Vault assigns the same client entity to activities within any child namespaces because the namespaces exist within the same larger scope   When a client authenticates   across namespace boundaries    Vault treats the single client as two distinct entities because the client is operating across different scopes with different policy assignments and resources   For example     Different requests under parent and child namespaces from a single client   authenticated under the   parent   namespace are assigned   the same entity   ID    All the client activities occur   within   the boundaries of the   namespace referenced in the original authentication request    Different requests under parent and child namespaces from a single client   authenticated under the   child   namespace are assigned   different entity   IDs    Some of the client activities occur   outside   the boundaries of the   namespace referenced in the original authentication request    Requests by the same client to two different namespaces  NAMESPACE sub A  sub    and NAMESPACE sub B  sub  are assigned   different entity IDs        Entity assignment with non entity tokens  Vault uses tokens as the core method for authentication  You can 
use tokens to authenticate directly  or use token  auth methods   vault docs concepts auth  to dynamically generate tokens based on external identities   When clients authenticate with the  token auth method   vault docs auth token    without   a client identity  the result is a   non entity token    For example  a service might use the token authentication method to create a token for a user whose explicit identity is unknown   Ultimately  non entity tokens trace back to a particular client or purpose so Vault assigns unique entity IDs to non entity tokens based on a combination of the     assigned entity alias name  if present     associated policies  and   namespace under which the token was created   In   rare   cases  tokens may be created outside of the Vault identity system   without   an associated entity or identity  Vault treats every unaffiliated token as a unique client for production usage  We strongly discourage the use of unaffiliated tokens and recommend that you always associate a token with an entity alias and token role    Note title  Behavior change in Vault 1 9      As of Vault 1 9  all non entity tokens with the same namespace and policy   assignments are treated as the same client entity  Prior to Vault 1 9  every   non entity token was treated as a unique client entity  which drastically   inflated telemetry around client count     If you are using Vault 1 8 or earlier  and need to address client count   inflation without upgrading  we recommend creating a    token role   vault api docs auth token create update token role  with   allowable entity aliases and assigning all tokens to an appropriate    role and entity alias name   vault api docs auth token create token  before   using them    Note    include  content footer title mdx    Tabs    Tab heading  Related concepts    ul     li       a href   vault docs concepts client count counting  Client count calculation  a      li     li       a href   vault docs concepts client count faq  Client 
count FAQ  a      li    ul    Tab    Tab heading  Related tutorials    ul     li       a href   vault tutorials auth methods identity  Identity  Entities and Groups  a      li     li       a href   vault tutorials enterprise namespaces  Secure Multi Tenancy with Namespaces  a      li    ul    Tab    Tab heading  Other resources    ul     li      Article   a href  https   www hashicorp com identity based security and low trust networks         Identity based Security and Low trust Networks       a      li    ul    Tab     Tabs "}
{"questions":"vault Client calculation and sizing can be complex to compute when you have multiple Vault usage metrics page title Vault usage metrics Learn how to discover the number of Vault clients for each namespace in Vault layout docs","answers":"---\nlayout: docs\npage_title: Vault usage metrics\ndescription: |-\n  Learn how to discover the number of Vault clients for each namespace in Vault.\n---\n\n# Vault usage metrics\n\nClient calculation and sizing can be complex to compute when you have multiple\nnamespaces and auth mounts. The **Vault Usage Metrics** dashboard on Vault UI\nprovides the information where you can filter the data by namespace and\/or auth\nmounts. You can also use Vault CLI or API to query the usage metrics.\n\n## Enable usage metrics\n\nUsage metrics are a feature that is enabled by default for Vault Enterprise and\nHCP Vault Dedicated. However, if you are running Vault Community Edition, you\nneed to enable usage metrics since it is disabled by default.\n\n<Tabs>\n<Tab heading=\"Web UI\" group=\"ui\">\n\n1. Open a web browser to access the Vault UI, and sign in.\n\n1. Select **Client Count** from the left navigation menu.\n\n1. Select **Configuration**.\n\n1. Select **Edit configuration**.\n\n   ![Edit configuration](\/img\/ui-usage-metrics-config.png)\n\n1. Select the toggle for **Usage data collection** so that the text reads **Data\n   collection  is on**.\n\n   <Tip title=\"Retention period\">\n\n    The retention period sets the number of months for which Vault will maintain\n    activity logs to track active clients. (Default: 48 months)\n\n    <\/Tip>\n\n1. Click **Save** to apply the changes.\n\n1. 
Click **Continue** in the confirmation dialog to enable usage metrics tracking.\n\n<\/Tab>\n<Tab heading=\"CLI command\" group=\"cli\">\n\n```shell-session\n$ vault write sys\/internal\/counters\/config enabled=enable\n```\n\nValid values for `enabled` parameter are: `default`, `enable`, and `disable`.\n\n<Tip title=\"Retention period\">\n\n By default, Vault maintains activity logs to track\nactive clients for 24 months. If you wish to change the retention period, use\nthe `retention_months` parameter.\n\n<\/Tip>\n\n**Example:**\n\n```shell-session\n$ vault write sys\/internal\/counters\/config \\\n    enabled=enable \\\n    retention_months=12\n```\n\n<\/Tab>\n<Tab heading=\"API call using cURL\" group=\"api\">\n\n```shell-session\n$ curl --header \"X-Vault-Token: <TOKEN>\" \\\n    --request POST \\\n    --data '{\"enabled\": \"enable\"}' \\\n    $VAULT_ADDR\/v1\/sys\/internal\/counters\/config\n```\n\nValid values for `enabled` parameter are: `default`, `enable`, and `disable`.\n\n<Tip title=\"Retention period\">\n\n By default, Vault maintains activity logs to track\nactive clients for 24 months. If you wish to change the retention period, use\nthe `retention_months` parameter.\n\n<\/Tip>\n\n**Example:**\n\n```shell-session\n$ curl --header \"X-Vault-Token: <TOKEN>\" \\\n    --request POST \\\n    --data '{\"enabled\": \"enable\", \"retention_months\": 12}' \\\n    $VAULT_ADDR\/v1\/sys\/internal\/counters\/config\n```\n\n<\/Tab>\n<\/Tabs>\n\n## Usage metrics dashboard\n\n1. Sign into Vault UI. The **Client count** section displays the total number of\n   clients for the current billing period.\n\n1. Select **Details**.\n   ![Vault UI default dashboard example](\/img\/ui-client-count.png)\n\n1. 
Examine the **Vault Usage Metrics** dashboard to learn your Vault usage.\n   ![Example Vault Usage Metrics dashboard view](\/img\/ui-usage-metrics-1.png)\n\n#### Usage metrics data categories\n\n- **Running client total** is the primary metric on which pricing is based.\n  It is the sum of entity clients (or distinct entities) and non-entity clients.\n\n- **Entity clients** (distinct entities) are representations of a particular\n  user, client, or application that belongs to a defined Vault entity. If you\n  are unfamiliar with Vault entities, refer to the [Identity: Entities and\n  Groups](\/vault\/tutorials\/auth-methods\/identity) tutorial.\n\n- **Non-entity clients** are clients without an entity attached.\n  This is because some customers or workflows might avoid using entity-creating\n  authentication methods and instead depend on token creation through the Vault\n  API. Refer to [understanding non-entity\n  tokens](\/vault\/docs\/concepts\/client-count#understanding-non-entity-tokens)\n  to learn more.\n\n  <Note>\n\n  The non-entity client count excludes `root` tokens.\n\n  <\/Note>\n\n- **Secrets sync clients** are the number of external destinations that Vault\n  connects to for syncing secrets. Refer to the\n  [documentation](\/vault\/docs\/concepts\/client-count#secret-sync-clients) for\n  more details.\n\n- **ACME clients** are ACME connections. All connections that authenticate under\n  the same certificate identifier (domain) count as the same certificate entity\n  for client count calculations. Refer to the\n  [documentation](\/vault\/docs\/concepts\/client-count#entity-assignment-with-acme)\n  for more details.\n\n    ![ACME clients example](\/img\/ui-usage-metrics-acme.png)\n\n\n## Select a data range\n\nUnder the **Client counting period**, select **Edit** to query the data for\na different billing period.\n\n![Query](\/img\/ui-usage-metrics-period.png)\n\nKeep in mind that Vault begins collecting data when the feature is enabled. 
For\nexample, if you enabled the usage metrics in March of 2023, you cannot query\ndata in January of 2023.\n\nVault will return metrics from March of 2023 through the most recent full month.\n\n## Filter by namespaces\n\nIf you have [namespaces](\/vault\/docs\/enterprise\/namespaces), the dashboard\ndisplays the top ten namespaces by total clients.\n\n![Namespace attribution example](\/img\/ui-usage-metrics-namespace.png)\n\nUse the **Filters** to view the metrics data of a specific namespace.\n\n![Filter by namespace](\/img\/ui-usage-metrics-filter.png)\n\n## Mount attribution\n\nClients are also shown as graphs per auth mount. The **Mount attribution**\nsection displays the auth mounts with the highest client usage. This allows you\nto detect which auth mount had the highest number of total clients in the given\nbilling period. You can filter for auth mounts within a namespace, or view auth\nmounts across namespaces. The mount attribution is available even if you are not\nusing namespaces. 
\n\n![Usage metrics by mount attribution](\/img\/ui-usage-metrics-mounts.png)\n\n\n## Query usage metrics via CLI\n\nRetrieve the usage metrics for the current billing period.\n\n```shell-session\n$ vault operator usage\n```\n\n**Example output:**\n\n<CodeBlockConfig hideClipboard>\n\n```plaintext\nPeriod start: 2024-03-01T00:00:00Z\nPeriod end: 2024-10-31T23:59:59Z\n\nNamespace path                                        Entity Clients   Non-Entity clients   Secret syncs   ACME clients   Active clients\n--------------                                        --------------   ------------------   ------------   ------------   --------------\n[root]                                                86               114                  0              0              200\neducation\/                                            31               31                   0              0              62\neducation\/certification\/                              18               25                   0              0              43\neducation\/training\/                                   192              197                  0              0              389\nfinance\/                                              18               26                   0              0              44\nmarketing\/                                            28               17                   0              0              45\ntest-ns-1-with-namespace-length-over-18-characters\/   84               75                   0              0              159\ntest-ns-1\/                                            59               66                   0              0              125\ntest-ns-2-with-namespace-length-over-18-characters\/   58               46                   0              0              104\ntest-ns-2\/                                            56               47                   0              0              103\n\nTotal                                                 630              644                  0              0              1274\n```\n\n<\/CodeBlockConfig>\n\nThe output shows client usage metrics for each namespace.\n\n### Filter by namespace\n\nYou can narrow the scope to the `education` namespace and its child namespaces.\n\n```shell-session\n$ vault operator usage -namespace education\n\nPeriod start: 2024-03-01T00:00:00Z\nPeriod end: 2024-10-31T23:59:59Z\n\nNamespace path             Entity Clients   Non-Entity clients   Secret syncs   ACME clients   Active clients\n--------------             --------------   ------------------   ------------   ------------   --------------\neducation\/                 31               31                   0              0              62\neducation\/certification\/   18               25                   0              0              43\neducation\/training\/        192              197                  0              0              389\n\nTotal                      241              253                  0              0              494\n```\n\n### Query with a time frame\n\nThe following example queries the client usage metrics for the month of June, 2024.\n
The start\ntime is June 1, 2024 (`2024-06-01T00:00:00Z`) and the end time is June\n30, 2024 (`2024-06-30T23:59:59Z`).\n\nThe `start_time` and `end_time` should be an RFC3339 timestamp or Unix epoch\ntime.\n\n```shell-session\n$ vault operator usage \\\n     -start-time=2024-06-01T00:00:00Z \\\n     -end-time=2024-06-30T23:59:59Z \n```\n\n**Example output:**\n\n<CodeBlockConfig hideClipboard>\n\n```plaintext\nPeriod start: 2024-06-01T00:00:00Z\nPeriod end: 2024-06-30T23:59:59Z\n\nNamespace path                                        Entity Clients   Non-Entity clients   Secret syncs   ACME clients   Active clients\n--------------                                        --------------   ------------------   ------------   ------------   --------------\n[root]                                                10               16                   0              0              26\neducation\/                                            7                1                    0              0              8\neducation\/certification\/                              2                4                    0              0              6\neducation\/training\/                                   37               30                   0              0              67\nfinance\/                                              3                6                    0              0              9\nmarketing\/                                            2                2                    0              0              4\ntest-ns-1-with-namespace-length-over-18-characters\/   6                9                    0              0              15\ntest-ns-1\/                                            9                12                   0              0              21\ntest-ns-2-with-namespace-length-over-18-characters\/   5                5                    0              0              10\ntest-ns-2\/                                            9                7                    0      
        0              16\n\nTotal                                                 90               92                   0              0              182\n```\n\n<\/CodeBlockConfig>\n\n\n## Export the metrics data\n\nYou can export the metrics data by clicking on the **Export attribution data**\nbutton.\n\n![Metrics UI](\/img\/ui-usage-metrics-export.png)\n\nThis downloads the usage metrics data on your local drive in comma separated\nvalues format (`.csv`) or JSON.\n\n\n## API \n\n- Refer to the\n  [`sys\/internal\/counters`](\/vault\/api-docs\/system\/internal-counters#client-count)\n  page to retrieve client count using API.\n- [Activity export API](\/vault\/api-docs\/system\/internal-counters#activity-export) to\n  export activity log.","site":"vault","answers_cleaned":"    layout  docs page title  Vault usage metrics description       Learn how to discover the number of Vault clients for each namespace in Vault         Vault usage metrics  Client calculation and sizing can be complex to compute when you have multiple namespaces and auth mounts  The   Vault Usage Metrics   dashboard on Vault UI provides the information where you can filter the data by namespace and or auth mounts  You can also use Vault CLI or API to query the usage metrics      Enable usage metrics  Usage metrics are a feature that is enabled by default for Vault Enterprise and HCP Vault Dedicated  However  if you are running Vault Community Edition  you need to enable usage metrics since it is disabled by default    Tabs   Tab heading  Web UI  group  ui    1  Open a web browser to access the Vault UI  and sign in   1  Select   Client Count   from the left navigation menu   1  Select   Configuration     1  Select   Edit configuration          Edit configuration   img ui usage metrics config png   1  Select the toggle for   Usage data collection   so that the text reads   Data    collection  is on         Tip title  Retention period        The retention period sets the number of months for 
which Vault will maintain     activity logs to track active clients   Default  48 months         Tip   1  Click   Save   to apply the changes   1  Click   Continue   in the confirmation dialog to enable usage metrics tracking     Tab   Tab heading  CLI command  group  cli       shell session   vault write sys internal counters config enabled enable      Valid values for  enabled  parameter are   default    enable   and  disable     Tip title  Retention period     By default  Vault maintains activity logs to track active clients for 24 months  If you wish to change the retention period  use the  retention months  parameter     Tip     Example        shell session   vault write sys internal counters config       enabled enable       retention months 12        Tab   Tab heading  API call using cURL  group  api       shell session   curl   header  X Vault Token   TOKEN           request POST         data    enabled    enable           VAULT ADDR v1 sys internal counters config      Valid values for  enabled  parameter are   default    enable   and  disable     Tip title  Retention period     By default  Vault maintains activity logs to track active clients for 24 months  If you wish to change the retention period  use the  retention months  parameter     Tip     Example        shell session   curl   header  X Vault Token   TOKEN           request POST         data    enabled    enable    retention months   12          VAULT ADDR v1 sys internal counters config        Tab    Tabs      Usage metrics dashboard  1  Sign into Vault UI  The   Client count   section displays the total number of    clients for the current billing period   1  Select   Details         Vault UI default dashboard example   img ui client count png   1  Examine the   Vault Usage Metrics   dashboard to learn your Vault usage       Example Vault Usage Metrics dashboard view   img ui usage metrics 1 png        Usage metrics data categories      Running client total   are the primary metric on which 
pricing is based    It is the sum of entity clients  or distinct entities  and non entity clients       Entity clients    distinct entities  are representations of a particular   user  client  or application that belongs to a defined Vault entity  If you   are unfamiliar with Vault entities  refer to the  Identity  Entities and   Groups   vault tutorials auth methods identity  tutorial       Non entity clients   are clients without an entity attached    This is because some customers or workflows might avoid using entity creating   authentication methods and instead depend on token creation through the Vault   API  Refer to  understanding non entity   tokens   vault docs concepts client count understanding non entity tokens    to learn more      Note     The non entity client count excludes  root  tokens       Note       Secrets sync clients   are the number of external destinations Vault   connects to sync the secrets  Refer to the    documentation   vault docs concepts client count secret sync clients  for   more details       ACME clients   are the ACME connections that authenticate under the same   certificate identifier  domain  as the same certificate entity for client   count calculations  Refer to the    documentation   vault docs concepts client count entity assignment with acme    for more details         ACME clients example   img ui usage metrics acme png       Select a data range  Under the   Client counting period    select   Edit   to query the data for a different billing period     Query   img ui usage metrics period png   Keep in mind that Vault begins collecting data when the feature is enabled  For example  if you enabled the usage metrics in March of 2023  you cannot query data in January of 2023   Vault will return metrics from March of 2023 through most recent full month       Filter by namespaces  If you have  namespaces   vault docs enterprise namespaces   the dashboard displays the top ten namespaces by total clients     Namespace 
attribution example   img ui usage metrics namespace png   Use the   Filters   to view the metrics data of a specific namespace     Filter by namespace   img ui usage metrics filter png      Mount attribution  The clients are also shown as graphs per auth mount  The   Mount attribution   section displays the top auth mounts where you can expect to find your most used auth method mounts with respect to client usage  This allows you to detect which auth mount had the most number of total clients in the given billing period  You can filter for auth mounts within a namespace  or view auth mounts across namespaces  The mount attribution is available even if you are not using namespaces      Usage metrics by mount attribution   img ui usage metrics mounts png       Query usage metrics via CLI  Retrieve the usage metrics for the current billing period      shell session   vault operator usage        Example output      CodeBlockConfig hideClipboard      plaintxt Period start  2024 03 01T00 00 00Z Period end  2024 10 31T23 59 59Z  Namespace path                                        Entity Clients   Non Entity clients   Secret syncs   ACME clients   Active clients                                                                                                                                           root                                                 86               114                  0              0              200 education                                             31               31                   0              0              62 education certification                               18               25                   0              0              43 education training                                    192              197                  0              0              389 finance                                               18               26                   0              0              44 marketing                                             28    
           17                   0              0              45 test ns 1 with namespace length over 18 characters    84               75                   0              0              159 test ns 1                                             59               66                   0              0              125 test ns 2 with namespace length over 18 characters    58               46                   0              0              104 test ns 2                                             56               47                   0              0              103  Total                                                 630              644                  0              0              1274        CodeBlockConfig   The output shows client usage metrics for each namespace       Filter by namespace  You can narrow the scope for  education  namespace and its child namespaces      shell session   vault operator usage  namespace education  Period start  2024 03 01T00 00 00Z Period end  2024 10 31T23 59 59Z  Namespace path             Entity Clients   Non Entity clients   Secret syncs   ACME clients   Active clients                                                                                                               education                  31               31                   0              0              62 education certification    18               25                   0              0              43 education training         192              197                  0              0              389  Total                      241              253                  0              0              494          Query with a time frame  To query the client usage metrics for the month of June  2024  The start time is June 1  2024   2024 06 01T00 00 00Z   and the end time is June 30  2024   2024 06 30T23 59 59Z     The  start time  and  end time  should be an RFC3339 timestamp or Unix epoch time      shell session   vault operator usage         start time 2024 06 
01T00 00 00Z         end time 2024 06 30T23 59 59Z         Example output      CodeBlockConfig hideClipboard      plaintext Period start  2024 06 01T00 00 00Z Period end  2024 06 30T23 59 59Z  Namespace path                                        Entity Clients   Non Entity clients   Secret syncs   ACME clients   Active clients                                                                                                                                           root                                                 10               16                   0              0              26 education                                             7                1                    0              0              8 education certification                               2                4                    0              0              6 education training                                    37               30                   0              0              67 finance                                               3                6                    0              0              9 marketing                                             2                2                    0              0              4 test ns 1 with namespace length over 18 characters    6                9                    0              0              15 test ns 1                                             9                12                   0              0              21 test ns 2 with namespace length over 18 characters    5                5                    0              0              10 test ns 2                                             9                7                    0              0              16  Total                                                 90               92                   0              0              182        CodeBlockConfig       Export the metrics data  You can export the metrics data by clicking on the   Export attribution data   button     Metrics UI   
img ui usage metrics export png   This downloads the usage metrics data on your local drive in comma separated values format    csv   or JSON       API     Refer to the     sys internal counters    vault api docs system internal counters client count    page to retrieve client count using API     Activity export API   vault api docs system internal counters activity export  to   export activity log "}
{"questions":"vault Deprecation announcements updates and migration plans for Vault page title Deprecation notices Vault implements a multi phased approach to deprecations to provide users with layout docs Deprecation notices","answers":"---\nlayout: docs\npage_title: Deprecation notices\ndescription: >-\n  Deprecation announcements, updates, and migration plans for Vault.\n---\n\n# Deprecation notices\n\nVault implements a multi-phased approach to deprecations to provide users with\nadvanced warning, minimize business disruptions, and allow for the safe handling\nof data affected by a feature removal.\n\n<Highlight title=\"Have questions?\">\n\nIf you have questions or concerns about a deprecated feature, please create a\ntopic on the [Vault community forum](https:\/\/discuss.hashicorp.com\/c\/vault\/30)\nor raise a ticket with your support team.\n\n<\/Highlight>\n\n<a id=\"announcements\" \/>\n\n## Recent announcements\n\n<Tabs>\n<Tab heading=\"DEPRECATED\">\n\n<EnterpriseAlert product=\"vault\">\n  The Vault Support Team can provide <b>limited<\/b> help with a deprecated feature.\n  Limited support includes troubleshooting solutions and workarounds but does not\n  include software patches or bug fixes. 
Refer to\n  the <a href=\"https:\/\/support.hashicorp.com\/hc\/en-us\/articles\/360021185113-Support-Period-and-End-of-Life-EOL-Policy\">HashiCorp Support Policy<\/a> for\n  more information on the product support timeline.\n<\/EnterpriseAlert>\n\n@include 'deprecation\/ruby-client-library.mdx'\n\n@include 'deprecation\/active-directory-secrets-engine.mdx'\n\n<\/Tab>\n<Tab heading=\"PENDING REMOVAL\">\n\n@include 'deprecation\/vault-agent-api-proxy.mdx'\n\n@include 'deprecation\/aws-field-change.mdx'\n\n@include 'deprecation\/centrify-auth-method.mdx'\n\n<\/Tab>\n<Tab heading=\"REMOVED\">\n\n@include 'deprecation\/duplicative-docker-images.mdx'\n\n@include 'deprecation\/azure-password-policy.mdx'\n\n<\/Tab>\n<\/Tabs>\n\n<a id=\"phases\" \/>\n\n## Deprecation phases\n\nThe lifecycle of a Vault feature or plugin includes 4 phases:\n\n- **supported** - generally available (GA), functioning as expected, and under\n  active maintenance\n- **deprecated** - marked for removal in a future release\n- **pending removal** - support ended or replaced by another feature\n- **removed** - end of lifecycle\n\n### Deprecated ((#deprecated))\n\n\"Deprecated\" is the first phase of the deprecation process and indicates that\nthe feature is marked for removal in a future release. 
When you upgrade Vault,\nnewly deprecated features will begin alerting that the feature is deprecated:\n\n- Built-in authentication and secrets plugins log `Warn`-level messages on\n  unseal.\n- All deprecated features log `Warn`-level messages.\n- All `POST`, `GET`, and `LIST` endpoints associated with the feature return\n  warnings in response data.\n\nBuilt-in Vault authentication and secrets plugins also expose their deprecation\nstatus through the Vault CLI and Vault API.\n\nCLI command                                                                  | API endpoint\n---------------------------------------------------------------------------- | --------------\nN\/A                                                                          | [`\/sys\/plugins\/catalog`](\/vault\/api-docs\/system\/plugins-catalog)\n[`vault plugin info auth <PLUGIN_NAME>`](\/vault\/docs\/commands\/plugin\/info)   | [`\/sys\/plugins\/catalog\/auth\/:name`](\/vault\/api-docs\/system\/plugins-catalog#list-plugins-1)\n[`vault plugin info secret <PLUGIN_NAME>`](\/vault\/docs\/commands\/plugin\/info) | [`\/sys\/plugins\/catalog\/secret\/:name`](\/vault\/api-docs\/system\/plugins-catalog#list-plugins-1)\n\n### Pending removal\n\n\"Pending removal\" is the second phase of the deprecation process and indicates\nthat the feature behavior is fundamentally altered in the following ways:\n\n- Built-in authentication and secrets plugins log `Error`-level messages and\n  cause an immediate shutdown of the Vault core.\n- All features pending removal fail and log `Error`-level messages.\n- All CLI commands and API endpoints associated with the feature fail and return\n  errors.\n\n<Warning title=\"Use with caution\">\n\n  In critical situations, you may be able to override the pending removal behavior with the\n  [`VAULT_ALLOW_PENDING_REMOVAL_MOUNTS`](\/vault\/docs\/commands\/server#vault_allow_pending_removal_mounts)\n  environment variable, which forces Vault to treat some features that are 
pending\n  removal as if they were still only deprecated.\n\n<\/Warning>\n\n### Removed\n\n\"Removed\" is the last phase of the deprecation process and indicates that the\nfeature is no longer supported and no longer exists within Vault.\n\n## Migrate from deprecated features\n\nFeatures in the \"pending removal\" and \"removed\" phases will fail, log errors,\nand, for built-in authentication or secret plugins, cause an immediate shutdown\nof the Vault core.\n\nTo migrate away from a deprecated feature and successfully upgrade to newer Vault\nversions, you must eliminate the deprecated features:\n\n1. Downgrade Vault to a previous version if necessary.\n1. Replace any \"Removed\" or \"Pending removal\" feature with the recommended\n   alternative.\n1. Upgrade to the latest desired version.","site":"vault","answers_cleaned":"    layout  docs page title  Deprecation notices description       Deprecation announcements  updates  and migration plans for Vault         Deprecation notices  Vault implements a multi phased approach to deprecations to provide users with advanced warning  minimize business disruptions  and allow for the safe handling of data affected by a feature removal    Highlight title  Have questions     If you have questions or concerns about a deprecated feature  please create a topic on the  Vault community forum  https   discuss hashicorp com c vault 30  or raise a ticket with your support team     Highlight    a id  announcements         Recent announcements   Tabs   Tab heading  DEPRECATED     EnterpriseAlert product  vault     The Vault Support Team can provide  b limited  b  help with a deprecated feature    Limited support includes troubleshooting solutions and workarounds but does not   include software patches or bug fixes  Refer to   the  a href  https   support hashicorp com hc en us articles 360021185113 Support Period and End of Life EOL Policy  HashiCorp Support Policy  a  for   more information on the product support timeline    EnterpriseAlert 
   include  deprecation ruby client library mdx    include  deprecation active directory secrets engine mdx     Tab   Tab heading  PENDING REMOVAL     include  deprecation vault agent api proxy mdx    include  deprecation aws field change mdx    include  deprecation centrify auth method mdx     Tab   Tab heading  REMOVED     include  deprecation duplicative docker images mdx    include  deprecation azure password policy mdx     Tab    Tabs    a id  phases         Deprecation phases  The lifecycle of a Vault feature or plugin includes 4 phases       supported     generally available  GA   functioning as expected  and under   active maintenance     deprecated     marked for removal in a future release     pending removal     support ended or replaced by another feature     removed     end of lifecycle      Deprecated    deprecated     Deprecated  is the first phase of the deprecation process and indicates that the feature is marked for removal in a future release  When you upgrade Vault  newly deprecated features will begin alerting that the feature is deprecated     Built in authentication and secrets plugins log  Warn  level messages on   unseal    All deprecated features log  Warn  level messages    All  POST    GET   and  LIST  endpoints associated with the feature return   warnings in response data   Built in Vault authentication and secrets plugins also expose their deprecation status through the Vault CLI and Vault API   CLI command                                                                    API endpoint                                                                                               N A                                                                               sys plugins catalog    vault api docs system plugins catalog    vault plugin info auth  PLUGIN NAME     vault docs commands plugin info         sys plugins catalog auth  name    vault api docs system plugins catalog list plugins 1    vault plugin info secret  PLUGIN NAME     vault 
docs commands plugin info       sys plugins catalog secret  name    vault api docs system plugins catalog list plugins 1       Pending removal   Pending removal  is the second phase of the deprecation process and indicates that the feature behavior is fundamentally altered in the following ways     Built in authentication and secrets plugins log  Error  level messages and   cause an immediate shutdown of the Vault core    All features pending removal fail and log  Error  level messages    All CLI commands and API endpoints associated with the feature fail and return   errors    Warning title  Use with caution      In critical situations  you may be able to override the pending removal behavior with the     VAULT ALLOW PENDING REMOVAL MOUNTS    vault docs commands server vault allow pending removal mounts    environment variable  which forces Vault to treat some features that are pending   removal as if they were still only deprecated     Warning       Removed   Removed  is the last phase of the deprecation process and indicates that the feature is no longer supported and no longer exists within Vault      Migrate from deprecated features  Features in the  pending removal  and  removed  phases will fail  log errors  and  for built in authentication or secret plugins  cause an immediate shutdown of the Vault core   Migrate away from a deprecated feature and successfully upgrade to newer Vault versions  you must eliminate the deprecated features   1  Downgrade Vault to a previous version if necessary  1  Replace any  Removed  or  Pending removal  feature with the recommended    alternative  1  Upgrade to latest desired version "}
{"questions":"vault it for a certain duration page title debug Command The debug command monitors a Vault server probing information about layout docs debug","answers":"---\nlayout: docs\npage_title: debug - Command\ndescription: |-\n  The \"debug\" command monitors a Vault server, probing information about\n  it for a certain duration.\n---\n\n# debug\n\nThe `debug` command starts a process that monitors a Vault server, probing\ninformation about it for a certain duration.\n\nGathering information about the state of the Vault cluster often requires the\noperator to access all necessary information via various API calls and terminal\ncommands. The `debug` command aims to provide a simple workflow that produces a\nconsistent output to help operators retrieve and share information about the\nserver in question.\n\nThe `debug` command honors the same variables that the base command\naccepts, such as the token stored via a previous login or the environment\nvariables `VAULT_TOKEN` and `VAULT_ADDR`. The token used determines the\npermissions and, in turn, the information that `debug` may be able to collect.\nThe address specified determines the target server that will be probed against.\n\nIf the command is interrupted, the information collected up until that\npoint gets persisted to an output directory.\n\n## Permissions\n\nRegardless of whether a particular target is provided, the ability for `debug`\nto fetch data for the target depends on the token provided. Some targets, such\nas `server-status`, query unauthenticated endpoints, which means they can be\nqueried at any time. Other targets require the token to have ACL permissions to\nquery the matching endpoint in order to get a proper response.\n
Any errors\nencountered during capture due to permissions or otherwise will be logged in the\nindex file.\n\nThe following policy can be used for generating debug packages with all targets:\n\n```hcl\npath \"auth\/token\/lookup-self\" {\n  capabilities = [\"read\"]\n}\n\npath \"sys\/pprof\/*\" {\n  capabilities = [\"read\"]\n}\n\npath \"sys\/config\/state\/sanitized\" {\n  capabilities = [\"read\"]\n}\n\npath \"sys\/monitor\" {\n  capabilities = [\"read\"]\n}\n\npath \"sys\/host-info\" {\n  capabilities = [\"read\"]\n}\n\npath \"sys\/in-flight-req\" {\n  capabilities = [\"read\"]\n}\n```\n\n## Capture targets\n\nThe `-target` flag can be specified multiple times to capture specific\ninformation when debug is running. By default, it captures all information.\n\n| Target               | Description                                                                       |\n| :------------------- | :-------------------------------------------------------------------------------- |\n| `config`             | Sanitized version of the configuration state.                                     |\n| `host`               | Information about the instance running the server, such as CPU, memory, and disk. |\n| `metrics`            | Telemetry information.                                                            |\n| `pprof`              | Runtime profiling data, including heap, CPU, goroutine, and trace profiling.      |\n| `replication-status` | Replication status.                                                               |\n| `server-status`      | Health and seal status.                                                           
|\n\nNote that the `config`, `host`,`metrics`, and `pprof` targets are only queried\non active and performance standby nodes due to the the fact that the information\npertains to the local node and the request should not be forwarded.\n\nAdditionally, host information is not available on the OpenBSD platform due to\nlibrary limitations in fetching the data without enabling `cgo`.\n\n[Enterprise] Telemetry can be gathered from a DR Secondary active node via the\n`metrics` target if [unauthenticated_metrics_access](\/vault\/docs\/configuration\/listener\/tcp#unauthenticated_metrics_access) is enabled.\n\n## Output layout\n\nThe output of the bundled information, once decompressed, is contained within a\nsingle directory. Each target, with the exception of profiling data, is captured\nin a single file. For each of those targets collection is represented as a JSON\narray object, with each entry captured at each interval as a JSON object.\n\n```shell-session\n$ tree vault-debug-2019-10-15T21-44-49Z\/\nvault-debug-2019-10-15T21-44-49Z\/\n\u251c\u2500\u2500 2019-10-15T21-44-49Z\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 goroutine.prof\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 heap.prof\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 profile.prof\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 trace.out\n\u251c\u2500\u2500 2019-10-15T21-45-19Z\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 goroutine.prof\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 heap.prof\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 profile.prof\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 trace.out\n\u251c\u2500\u2500 2019-10-15T21-45-49Z\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 goroutine.prof\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 heap.prof\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 profile.prof\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 trace.out\n\u251c\u2500\u2500 2019-10-15T21-46-19Z\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 goroutine.prof\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 heap.prof\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 profile.prof\n\u2502\u00a0\u00a0 
\u2514\u2500\u2500 trace.out\n\u251c\u2500\u2500 2019-10-15T21-46-49Z\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 goroutine.prof\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 heap.prof\n\u251c\u2500\u2500 config.json\n\u251c\u2500\u2500 host_info.json\n\u251c\u2500\u2500 index.json\n\u251c\u2500\u2500 metrics.json\n\u251c\u2500\u2500 replication_status.json\n\u2514\u2500\u2500 server_status.json\n```\n\n## Examples\n\nStart debug using reasonable defaults:\n\n```shell-session\n$ vault debug\n```\n\nStart debug with different duration, intervals, and metrics interval values, and\nskip compression:\n\n```shell-session\n$ vault debug -duration=1m -interval=10s -metrics-interval=5s -compress=false\n```\n\nStart debug with specific targets:\n\n```shell-session\n$ vault debug -target=host -target=metrics\n```\n\n## Usage\n\nThe following flags are available in addition to the [standard set of\nflags](\/vault\/docs\/commands) included on all commands.\n\n### Command options\n\n- `-compress` `(bool: true)` - Toggles whether to compress the output package. The\n  default is true.\n\n- `-duration` `(int or time string: \"2m\")` - Duration to run the command. The\n  default is 2m0s.\n\n- `-interval` `(int or time string: \"30s\")` - The polling interval at which to\n  collect profiling data and server state. The default is 30s.\n\n- `-log-format` `(string: \"standard\")` - Log format to be captured if the \"log\"\n  target is specified. Supported values are \"standard\" and \"json\". The default is\n  \"standard\".\n\n- `-metrics-interval` `(int or time string: \"10s\")` - The polling interval at\n  which to collect metrics data. The default is 10s.\n\n- `-output` `(string)` - Specifies the output path for the debug package. Defaults\n  to a time-based generated file name.\n\n- `-target` `(string: all targets)` - Target to capture, defaulting to all if\n  none specified. This can be specified multiple times to capture multiple\n  targets.\n
Available targets are: config, host, metrics, pprof,\n  replication-status, server-status.","site":"vault","answers_cleaned":"    layout  docs page title  debug   Command description       The  debug  command monitors a Vault server  probing information about   it for a certain duration         debug  The  debug  command starts a process that monitors a Vault server  probing information about it for a certain duration   Gathering information about the state of the Vault cluster often requires the operator to access all necessary information via various API calls and terminal commands  The  debug  command aims to provide a simple workflow that produces a consistent output to help operators retrieve and share information about the server in question   The  debug  command honors the same variables that the base command accepts  such as the token stored via a previous login or the environment variables  VAULT TOKEN  and  VAULT ADDR   The token used determines the permissions and  in turn  the information that  debug  may be able to collect  The address specified determines the target server that will be probed against   If the command is interrupted  the information collected up until that point gets persisted to an output directory      Permissions  Regardless of whether a particular target is provided  the ability for  debug  to fetch data for the target depends on the token provided  Some targets  such as  server status   queries unauthenticated endpoints which means that it can be queried all the time  Other targets require the token to have ACL permissions to query the matching endpoint in order to get a proper response  Any errors encountered during capture due to permissions or otherwise will be logged in the index file   The following policy can be used for generating debug packages with all targets      hcl path  auth token lookup self      capabilities     read      path  sys pprof        capabilities     read      path  sys config state sanitized      
capabilities     read      path  sys monitor      capabilities     read      path  sys host info      capabilities     read      path  sys in flight req      capabilities     read             Capture targets  The   target  flag can be specified multiple times to capture specific information when debug is running  By default  it captures all information     Target                 Description                                                                                                                                                                                         config                Sanitized version of the configuration state                                           host                  Information about the instance running the server  such as CPU  memory  and disk       metrics               Telemetry information                                                                  pprof                 Runtime profiling data  including heap  CPU  goroutine  and trace profiling            replication status    Replication status                                                                     server status         Health and seal status                                                               Note that the  config    host   metrics   and  pprof  targets are only queried on active and performance standby nodes due to the the fact that the information pertains to the local node and the request should not be forwarded   Additionally  host information is not available on the OpenBSD platform due to library limitations in fetching the data without enabling  cgo     Enterprise  Telemetry can be gathered from a DR Secondary active node via the  metrics  target if  unauthenticated metrics access   vault docs configuration listener tcp unauthenticated metrics access  is enabled      Output layout  The output of the bundled information  once decompressed  is contained within a single directory  Each target  with the exception of profiling data  is captured 
in a single file  For each of those targets collection is represented as a JSON array object  with each entry captured at each interval as a JSON object      shell session   tree vault debug 2019 10 15T21 44 49Z  vault debug 2019 10 15T21 44 49Z      2019 10 15T21 44 49Z         goroutine prof         heap prof         profile prof         trace out     2019 10 15T21 45 19Z         goroutine prof         heap prof         profile prof         trace out     2019 10 15T21 45 49Z         goroutine prof         heap prof         profile prof         trace out     2019 10 15T21 46 19Z         goroutine prof         heap prof         profile prof         trace out     2019 10 15T21 46 49Z         goroutine prof         heap prof     config json     host info json     index json     metrics json     replication status json     server status json         Examples  Start debug using reasonable defaults      shell session   vault debug      Start debug with different duration  intervals  and metrics interval values  and skip compression      shell session   vault debug  duration 1m  interval 10s  metrics interval 5s  compress false      Start debug with specific targets      shell session   vault debug  target host  target metrics         Usage  The following flags are available in addition to the  standard set of flags   vault docs commands  included on all commands       Command options      compress    bool  true     Toggles whether to compress output package The   default is true       duration    int or time string   2m      Duration to run the command  The   default is 2m0s       interval    int or time string   30s      The polling interval at which to   collect profiling data and server state  The default is 30s       log format    string   standard      Log format to be captured if  log    target specified  Supported values are  standard  and  json   The default is    standard        metrics interval    int or time string   10s      The polling interval at   which 
to collect metrics data  The default is 10s       output    string     Specifies the output path for the debug package  Defaults   to a time based generated file name       target    string  all targets     Target to capture  defaulting to all if   none specified  This can be specified multiple times to capture multiple   targets  Available targets are  config  host  metrics  pprof    replication status  server status "}
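The example tree in the `vault debug` record above has five timestamped capture directories, which follows from the defaults: one profiling snapshot is taken at the start, then one per polling interval. A back-of-the-envelope sketch of that arithmetic (a rough model for reading the output, not Vault source):

```shell
# Rough model of how many timestamped capture directories `vault debug`
# produces: one snapshot at t=0 plus one per completed polling interval.
duration_secs=120   # -duration=2m (default)
interval_secs=30    # -interval=30s (default)

captures=$(( duration_secs / interval_secs + 1 ))
echo "expected capture directories: $captures"
```

With the defaults this prints 5, matching the five `2019-10-15T21-4x-xxZ` directories in the example listing (the final directory may hold fewer files if the command ends mid-capture).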
{"questions":"vault page title events Command The events command interacts with the Vault events notifications subsystem events layout docs EnterpriseAlert product vault","answers":"---\nlayout: docs\npage_title: events - Command\ndescription: |-\n  The \"events\" command interacts with the Vault events notifications subsystem.\n---\n\n# events\n\n<EnterpriseAlert product=\"vault\" \/>\n\nUse the `events` command to get a real-time display of\n[event notifications](\/vault\/docs\/concepts\/events) generated by Vault and to subscribe to Vault\nevent notifications. Note that `events subscribe` runs indefinitely and will not exit on\nits own unless it encounters an unexpected error. Similar to `tail -f` in the\nUnix world, you must terminate the process from the command line to end the\n`events` command.\n\nSpecify the desired event types (also called \"topics\") as a glob pattern. To\nmatch against multiple event types, use `*` as a wildcard. The command returns\nserialized JSON objects in the default protobuf JSON serialization format with\none line per event received.\n\n## Examples\n\nSubscribe to all event notifications:\n\n```shell-session\n$ vault events subscribe '*'\n```\n\nSubscribe to all KV event notifications:\n\n```shell-session\n$ vault events subscribe 'kv*'\n```\n\nSubscribe to all `kv-v2\/data-write` event notifications:\n\n```shell-session\n$ vault events subscribe kv-v2\/data-write\n```\n\nSubscribe to all KV event notifications in the current and `ns1` namespaces for the secret `secret\/data\/foo` that do not involve writing data:\n\n```shell-session\n$ vault events subscribe -namespaces=ns1 -filter='data_path == secret\/data\/foo and operation != \"data-write\"' 'kv*'\n```\n\n## Usage\n\n`events subscribe` supports the following flags in addition to the [standard set of\nflags](\/vault\/docs\/commands) included on all commands.\n\n### Options\n\n- `-timeout` `(duration: \"\")` - Close the WebSocket automatically after the\n  specified 
duration.\n\n- `-namespaces` `(string)` - Additional **child** namespaces for the\n  subscription. Repeat the flag to add additional namespace patterns to the\n  subscription request. Vault automatically prepends the issuing namespace for\n  the request to the provided namespace. For example, if you include\n  `-namespaces=ns2` on a request made in the `ns1` namespace, Vault will attempt\n  to subscribe you to event notifications under the `ns1\/ns2` and `ns1` namespaces. You can\n  use the `*` character to include wildcards in the namespace pattern. By\n  default, Vault will only subscribe to event notifications in the requesting namespace.\n\n<Note>\n  To subscribe to event notifications across multiple namespaces, you must provide a root\n  token or a token associated with appropriate policies across all the targeted\n  namespaces. Refer to\n  the <a href=\"\/vault\/tutorials\/enterprise\/namespaces\">Secure multi-tenancy with\n  namespaces<\/a> tutorial for configuring your Vault instance appropriately.\n<\/Note>\n\n- `-filter` `(string: \"\")` - Filter expression used to select event notifications to be sent\n  through the WebSocket.\n\nRefer to the [Filter expressions](\/vault\/docs\/concepts\/filtering) guide for a complete\nlist of filtering options and an explanation of how Vault evaluates filter expressions.\n\n  The following values are available in the filter expression:\n  - `event_type`: the event type, e.g., `kv-v2\/data-write`.\n  - `operation`: the operation name that caused the event notification, e.g., `write`.\n  - `source_plugin_mount`: the mount of the plugin that produced the event notification,\n    e.g., `secret\/`.\n  - `data_path`: the API path that can be used to access the data of the secret related to the event notification, e.g., `secret\/data\/foo`.\n  - `namespace`: the path of the namespace that created the event notification, e.g., `ns1\/`.\n\n    The filter string is empty by default. 
Unfiltered subscription requests match to\n    all event notifications that the requestor has access to for the target event type. When the\n    filter string is not empty, Vault applies the filter conditions after the policy\n    checks to narrow the event notifications provided in the response.\n\n    Filters can be straightforward path matches like\n    `data_path == secret\/data\/foo`, which tells Vault to return only event\n    notifications that refer to the `secret\/data\/foo` secret to the WebSocket,\n    or more complex statements that exclude specific operations. For example:\n      ```\n      data_path == secret\/data\/foo and operation != write\n      ```","site":"vault","answers_cleaned":"    layout  docs page title  events   Command description       The  events  command interacts with the Vault events notifications subsystem         events   EnterpriseAlert product  vault      Use the  events  command to get a real time display of  event notifications   vault docs concepts events  generated by Vault and to subscribe to Vault event notifications  Note that the  events subscribe  runs indefinitely and will not exit on its own unless it encounters an unexpected error  Similar to  tail  f  in the Unix world  you must terminate the process from the command line to end the  events  command   Specify the desired event types  also called  topics   as a glob pattern  To match against multiple event types  use     as a wildcard  The command returns serialized JSON objects in the default protobuf JSON serialization format with one line per event received      Examples  Subscribe to all event notifications      shell session   vault events subscribe          Subscribe to all KV event notifications      shell session   vault events subscribe  kv        Subscribe to all  kv v2 data write  event notifications      shell session   vault events subscribe kv v2 data write      Subscribe to all KV event notifications in the current and  ns1  namespaces for 
the secret  secret data foo  that do not involve writing data      shell session   vault events subscribe  namespaces ns1  filter  data path    secret data foo and operation     data write    kv           Usage   events subscribe  supports the following flags in addition to the  standard set of flags   vault docs commands  included on all commands       Options      timeout     duration         close the WebSocket automatically after the   specified duration       namespaces    string     Additional   child   namespaces for the   subscription  Repeat the flag to add additional namespace patterns to the   subscription request  Vault automatically prepends the issuing namespace for   the request to the provided namespace  For example  if you include     namespaces ns2  on a request made in the  ns1  namespace  Vault will attempt   to subscribe you to event notifications under the  ns1 ns2  and  ns1  namespaces  You can   use the     character to include wildcards in the namespace pattern  By   default  Vault will only subscribe to event notifications in the requesting namespace    Note    To subscribe to event notifications across multiple namespaces  you must provide a root   token or a token associated with appropriate policies across all the targeted   namespaces  Refer to   the  a href   vault tutorials enterprise namespaces  Secure multi tenancy with   namespaces  a tutorial for configuring your Vault instance appropriately    Note       filter    string         Filter expression used to select event notifications to be sent   through the WebSocket   Refer to the  Filter expressions   vault docs concepts filtering  guide for a complete list of filtering options and an explanation on how Vault evaluates filter expressions     The following values are available in the filter expression       event type   the event type  e g    kv v2 data write       operation   the operation name that caused the event notification  e g    write       source plugin mount   the 
mount of the plugin that produced the event notification      e g    secret       data path   the API path that can be used to access the data of the secret related to the event notification  e g    secret data foo      namespace   the path of the namespace that created the event notification  e g    ns1        The filter string is empty by default  Unfiltered subscription requests match to     all event notifications that the requestor has access to for the target event type  When the     filter string is not empty  Vault applies the filter conditions after the policy     checks to narrow the event notifications provided in the response       Filters can be straightforward path matches like      data path    secret data foo   which specifies that Vault should pass     return event notifications that refer to the  secret data foo  secret to the WebSocket      Or more complex statements that exclude specific operations  For example                  data path    secret data foo and operation    write          "}
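Because `vault events subscribe` topics are shell-style glob patterns, a pattern can be sanity-checked locally before subscribing. A small POSIX-sh sketch (the `kv-v2/data-write` topic is from the examples above; the non-KV topic name is made up for illustration):

```shell
# Report whether a sample topic would be matched by a subscription
# pattern, using the shell's own glob matching to mirror wildcard
# topic patterns like 'kv*'.
matches() {
  case "$2" in
    $1) echo "match: $2" ;;
    *)  echo "no match: $2" ;;
  esac
}

matches 'kv*' 'kv-v2/data-write'      # topic from the examples above
matches 'kv*' 'database/creds-create' # hypothetical non-KV topic
```

The first call prints `match: kv-v2/data-write`, the second `no match: database/creds-create`, mirroring which notifications a `'kv*'` subscription would receive.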
{"questions":"vault page title Use a custom token helper The Vault CLI supports external token helpers to help simplify retrieving layout docs Use a custom token helper setting and erasing authentication tokens","answers":"---\nlayout: docs\npage_title: Use a custom token helper\ndescription: >-\n  The Vault CLI supports external token helpers to help simplify retrieving,\n  setting and erasing authentication tokens.\n---\n\n# Use a custom token helper\n\nA **token helper** is a program or script that saves, retrieves, or erases a\nsaved authentication token.\n\nBy default, the Vault CLI includes a token helper that caches tokens from any\nenabled authentication backend in a `~\/.vault-token` file. You can customize\nthe caching behavior with a custom token helper.\n\n## Step 1: Script your helper\n\nYour token helper must accept a single command-line argument:\n\nArgument | Action\n-------- | ------\n`get`    | Fetch and print a cached authentication token to `stdout`\n`store`  | Read an authentication token from `stdin` and save it in a secure location\n`erase`  | Delete a cached authentication token\n\nYou can manage the authentication tokens in whatever way you prefer, but your\nhelper must adhere to following output requirements:\n\n- Limit `stdout` writes to token strings.\n- Write all error messages to `stderr`.\n- Write all non-error and non-token output to `syslog` or a log file.\n- Return the status code `0` on success.\n- Return non-zero status codes for errors.\n\n## Step 2: Configure Vault\n\nTo configure a custom token helper, edit (or create) a CLI configuration file\ncalled `.vault` under your home directory and set the `token_helper` parameter\nwith the **fully qualified path** to your new helper:\n\n<Tabs>\n\n<Tab heading=\"Linux shell\" group=\"nix\">\n\n```\necho 'token_helper = \"\/path\/to\/token\/helper.sh\"' >> ${HOME}\/.vault\n```\n\n<\/Tab>\n\n<Tab heading=\"Powershell\" group=\"ps\">\n\nMake sure to use UTF-8 encoding (`ascii`) or Vault 
may complain about invalid\ncharacters when reading the configuration file:\n\n```powershell\n'token_helper = \"\\\\path\\\\to\\\\token\\\\helper.ps1\"' | `\nOut-File -FilePath ${env:USERPROFILE}\/.vault -Encoding ascii -Append\n```\n\n<\/Tab>\n\n<\/Tabs>\n\n<Tip>\n\n  Make sure the script is executable by the Vault binary.\n\n<\/Tip>\n\n## Example token helper\n\nThe following token helper manages tokens in a JSON file in the home directory\ncalled `.vault_tokens`.\n\nThe helper relies on the `$VAULT_ADDR` environment variable to store and\nretrieve tokens from different Vault servers.\n\n\n<CodeTabs>\n\n```shell-session\n#!\/bin\/bash\n\nfunction write_error(){ >&2 echo $@; }\n\n# Customize the hash key for tokens. Currently, we remove the strings\n# 'https:\/\/', 'http:\/\/', '.', and ':' from the passed address (the Vault\n# address environment variable by default) because jq has trouble with\n# special characters in JSON field names\nfunction createHashKey {\n  \n  local key=\"\"\n\n  if [[ -z \"${1}\" ]] ; then key=\"${VAULT_ADDR}\" \n  else                      key=\"${1}\"\n  fi\n  \n  # We index the token according to the Vault server address by default so\n  # return an error if the address is empty\n  if [[ -z \"${key}\" ]] ; then\n    write_error \"Error: VAULT_ADDR environment variable unset.\"\n    exit 100\n  fi\n\n  key=${key\/\/\"https:\/\/\"\/\"\"}\n  key=${key\/\/\"http:\/\/\"\/\"\"}\n  key=${key\/\/\".\"\/\"_\"}\n  key=${key\/\/\":\"\/\"_\"}\n\n  echo \"addr-${key}\"\n}\n\nTOKEN_FILE=\"${HOME}\/.vault_tokens\"\nKEY=$(createHashKey)\nTOKEN=\"null\"\n\n# If the token file does not exist, create it\nif [ ! 
-f ${TOKEN_FILE} ] ; then\n   echo \"{}\" > ${TOKEN_FILE}\nfi\n\ncase \"${1}\" in\n    \"get\")\n\n      # Read the current JSON data and pull the token associated with ${KEY}\n      TOKEN=$(cat ${TOKEN_FILE} | jq --arg key \"${KEY}\" -r '.[$key]')\n      \n      # If the token is not the string \"null\", print it to stdout\n      # (jq returns \"null\" if the key was not found in the JSON data)\n      if [ \"${TOKEN}\" != \"null\" ] ; then\n        echo \"${TOKEN}\"\n      fi\n      exit 0\n    ;;\n\n    \"store\")\n      \n      # Get the token from stdin\n      read TOKEN\n\n      # Read the current JSON data and add a new entry\n      JSON=$(\n        jq                      \\\n        --arg key \"${KEY}\"      \\\n        --arg token \"${TOKEN}\"  \\\n        '.[$key] = $token' ${TOKEN_FILE}\n      )\n      \n    ;;\n\n    \"erase\")\n      # Read the current JSON data and remove the entry if it exists\n      JSON=$(\n        jq                      \\\n        --arg key \"${KEY}\"      \\\n        'del(.[$key])' ${TOKEN_FILE}\n      )\n    \n    ;;\n\n    *)\n      write_error \"Error: Provide a valid command: get, store, or erase.\"\n      exit 101\n    ;;\nesac\n\n# Update the JSON file and return success\necho $JSON | jq \".\" > ${TOKEN_FILE}\nexit 0\n```\n\n```powershell\n<#\n.Synopsis\n  Vault token helper script\n.INPUTS\n  Positional\/command line argument: get, store, erase\n.OUTPUTS\n  get: prints a cached authentication token to stdout (if it exists)\n  store: no output, updates the token cache\n  erase: no output, updates the token cache\n#>\n\n<#\n.Synopsis\n  CreateHashKey\n.DESCRIPTION\n  Customize the hash key for tokens. 
Currently, we remove the strings\n  'https:\/\/', '.', and ':' from the passed address (the Vault address\n  environment variable by default) to simplify the hash key string\n#>\nfunction CreateHashKey {\n\n  Param($address = \"${env:VAULT_ADDR}\")\n  \n  # We index the token according to the Vault server address by default so\n  # return an error if the address is empty\n  if ( -not $address) {\n    Write-Error \"[Missing value] env:VAULT_ADDR currently unset.\"\n    exit 101\n  }\n\n  $key = ${address}.Replace(\"\/\",\"\").Replace(\".\",\"_\").Replace(\":\",\"_\")\n  return ${key}.Replace(\"https_\", \"addr-\").Replace(\"http_\", \"addr-\")\n}\n\n<#\n.Synopsis\n  GetTokenCache\n.DESCRIPTION\n  Read in or create a new token cache and initialize the hash\n#>\nfunction GetTokenCache {\n\n  Param($filename)\n\n  # Read the JSON file (token cache) and initialize the hash data or create an\n  # empty hash if the file does not exist yet\n  if ( Get-Item -Path \"${filename}\" -ErrorAction SilentlyContinue ) {\n    $fileData = (Get-Content \"${filename}\" -Raw | ConvertFrom-Json -AsHashtable)\n  } else {\n    $fileData = (Write-Output \"{}\" | ConvertFrom-Json -AsHashtable)\n  } \n\n  return $fileData\n}\n\n<#\n.Synopsis\n  UpdateTokenCache\n.DESCRIPTION\n  Write the token hash out to the cache\n#>\nfunction UpdateTokenCache {\n\n  Param($filename, $fileData)\n  $jsonData = ($fileData | ConvertTo-Json) \n\n  # Convert the hash to JSON and update the token cache\n  $jsonData | Out-File -Encoding ascii \"${filename}\"\n\n  return\n}\n\n$tokenFile = \"${env:USERPROFILE}\/.vault_tokens\"\n$hashData  = (GetTokenCache \"${tokenFile}\")\n$key       = (CreateHashKey)\n$token     = $null\n\nswitch -Exact -CaseSensitive (${args}[0]) {\n\n  \"get\" {\n    # Print the token to stdout and return success\n    Write-Output $hashData[\"${key}\"]\n    exit 0\n  }\n\n  \"store\" {\n    $token = Read-Host\n    # Add the new token to the hash\n    $hashData[\"${key}\"] = \"${token}\" \n  }\n\n  \"erase\" {\n    # Erase the 
token entry if it exists\n    if ($hashData.ContainsKey(\"${key}\") ) {\n      $hashData.Remove(\"${key}\")\n    }\n  }\n\n  Default {\n    # The argument was invalid so return an error\n    Write-Error \"[Invalid argument] Command must be: get, store, or erase.\"\n    exit 102\n  }\n\n}\n\n# Update the token cache and return success\nUpdateTokenCache ${tokenFile} ${hashData}\n\nexit 0\n```\n\n```ruby\n#!\/usr\/bin\/env ruby\n\nrequire 'json'\n\n# We index the token according to the Vault server address\n# so the VAULT_ADDR variable is required\nunless ENV['VAULT_ADDR']\n  STDERR.puts \"No VAULT_ADDR environment variable set. Set it and run me again!\"\n  exit 100\nend\n\n# If the token file does not exist, create and initialize the hashmap\nbegin\n  tokens = JSON.parse(File.read(\"#{ENV['HOME']}\/.vault_tokens\"))\nrescue Errno::ENOENT\n  # file doesn't exist so create a blank hash for it\n  tokens = {}\nend\n\n# Get the first command line argument\ncase ARGV.first\nwhen 'get'\n  # Write the token to stdout if it exists\n  print tokens[ENV['VAULT_ADDR']] if tokens[ENV['VAULT_ADDR']]\n  exit 0\nwhen 'store'\n  # Read the token from stdin\n  tokens[ENV['VAULT_ADDR']] = STDIN.read\nwhen 'erase'\n  # Delete the token entry if it exists\n  tokens.delete(ENV['VAULT_ADDR'])\nend\n\n\n# Update the token file\nFile.open(\"#{ENV['HOME']}\/.vault_tokens\", 'w') { |file| file.write(tokens.to_json) }\n```\n<\/CodeTabs>","site":"vault","answers_cleaned":"    layout  docs page title  Use a custom token helper description       The Vault CLI supports external token helpers to help simplify retrieving    setting and erasing authentication tokens         Use a custom token helper  A   token helper   is a program or script that saves  retrieves  or erases a saved authentication token   By default  the Vault CLI includes a token helper that caches tokens from any enabled authentication backend in a     vault token  file  You can customize the caching 
behavior with a custom token helper      Step 1  Script your helper  Your token helper must accept a single command line argument   Argument   Action                    get       Fetch and print a cached authentication token to  stdout   store     Read an authentication token from  stdin  and save it in a secure location  erase     Delete a cached authentication token  You can manage the authentication tokens in whatever way you prefer  but your helper must adhere to following output requirements     Limit  stdout  writes to token strings    Write all error messages to  stderr     Write all non error and non token output to  syslog  or a log file    Return the status code  0  on success    Return non zero status codes for errors      Step 2  Configure Vault  To configure a custom token helper  edit  or create  a CLI configuration file called   vault  under your home directory and set the  token helper  parameter with the   fully qualified path   to your new helper    Tabs    Tab heading  Linux shell  group  nix        echo  token helper     path to token helper sh        HOME   vault        Tab    Tab heading  Powershell  group  ps    Make sure to use UTF 8 encoding   ascii   or Vault may complain about invalid characters when reading the configuration file      powershell  token helper      path  to  token  helper ps1       Out File  FilePath   env USERPROFILE   vault  Encoding ascii  Append        Tab     Tabs    Tip     Make sure the script is executable by the Vault binary     Tip      Example token helper  The following token helper manages tokens in a JSON file in the home directory called   vault tokens    The helper relies on the   VAULT ADDR  environment variable to store and retrieve tokens from different Vault servers     CodeTabs      shell session    bin bash  function write error      2 echo          Customize the hash key for tokens  Currently  we remove the strings    https           and     from the passed address  Vault address environment   by 
default  because jq has trouble with special characeters in JSON field   names function createHashKey        local key       if     z    1        then key    VAULT ADDR      else                      key    1     fi        We index the token according to the Vault server address by default so     return an error if the address is empty   if     z    key        then     write error  Error  VAULT ADDR environment variable unset       exit 100   fi    key   key   http           key   key             key   key              echo  addr   key      TOKEN FILE    HOME   vault token  KEY   createHashKey  TOKEN  null     If the token file does not exist  create it if      f   TOKEN FILE      then    echo          TOKEN FILE  fi  case    1   in      get            Read the current JSON data and pull the token associated with   KEY        TOKEN   cat   TOKEN FILE    jq   arg key    KEY    r     key                   If the token    to the string  null   print the token to stdout          jq returns  null  if the key was not found in the JSON data       if        TOKEN       null      then         echo    TOKEN         fi       exit 0              store                  Get the token from stdin       read TOKEN          Read the current JSON data and add a new entry       JSON            jq                                  arg key    KEY                    arg token    TOKEN                  key     token    TOKEN FILE                              erase           Read the current JSON data and remove the entry if it exists       JSON            jq                                  arg key    KEY                    arg token    TOKEN               del    key      TOKEN FILE                                      change to stderr for real code       write error  Error  Provide a valid command  get  store  or erase         exit 101 esac    Update the JSON file and return success echo  JSON   jq         TOKEN FILE  exit 0         powershell     Synopsis   Vault token helper script  
INPUTS   Positional command line argument  get  store  erase  OUTPUTS   get  prints a cached authentication token to stdin  if it exists    store  no output  updates the token cache   erase  no output  updates the token cache         Synopsis   CreateHashKey  DESCRIPTION   Customize the hash key for tokens  Currently  we remove the strings    https           and     from the passed address  Vault address environment by   default  variable to simplify the hash key string    function CreateHashKey      Param  address      env VAULT ADDR           We index the token according to the Vault server address by default so     return an error if the address is empty   if    not  address        Write Error   Missing value  env VAULT ADDR currently unset       exit 101         key     address  Replace         Replace          Replace            return   key  Replace  http     addr           Synopsis   GetTokenCache  DESCRIPTION   Read in or create a new token cache and initialize the hash     function GetTokenCache      Param  filename       Read the JSON file  token cache  and initialize the hash data or create an     empty hash if the file does not exist yet   if   Get Item  Path      filename    ErrorAction SilentlyContinue          fileData    Get Content    filename    Raw   ConvertFrom Json  AsHashtable      else        fileData    Write Output        ConvertFrom Json  AsHashtable          return  fileData        Synopsis   UpdateTokenCache  DESCRIPTION   Write the token hash out to the cache     function UpdateTokenCache      Param  filename   fileData     jsonData     fileData   ConvertTo Json        Convert the hash to JSON and update the token cache    jsonData   Out File  Encoding ascii    filename      return     tokenFile      env USERPROFILE   vault token   hashData     GetTokenCache    tokenFile     key          CreateHashKey   token        null  switch  Exact  CaseSensitive    args  0         get          Print the token to stdin and return success     Write 
Output   hashData    key      exit 0         store         token   Read Host       Add the new token to the hash      hashData    key         token            erase          Erase the token entry if it exists     if   hashData ContainsKey    key               hashData Remove    key                 Default         The argument was invalid so return an error     Write Error   Invalid argument  Command must be  get  store  or erase       exit 102           Update the token cache and return success UpdateTokenCache   tokenFile    hashData   exit 0         ruby    usr bin env ruby  require  json      We index the token according to the Vault server address    so the VAULT ADDR variable is required  unless ENV  VAULT ADDR     STDERR puts  No VAULT ADDR environment variable set  Set it and run me again     exit 100 end     If the token file does not exist  create and initialize the hashmap begin   tokens   JSON parse File read    ENV  HOME     vault tokens    rescue Errno  ENOENT    e     file doesn t exist so create a blank hash for it   tokens      end     Get the first command line argument case ARGV first when  get       Write the token to stdout if it exists   print tokens ENV  VAULT ADDR    if tokens ENV  VAULT ADDR      exit 0 when  store       Read the token from stdin   tokens ENV  VAULT ADDR      STDIN read when  erase       Delete the token entry if it exists   tokens delete  ENV  VAULT ADDR    end      Update the token file File open    ENV  HOME     vault tokens    w      file  file write tokens to json          CodeTabs "}
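Stripped of the per-server hashing used in the helpers above, the get/store/erase contract reduces to a few lines. A toy flat-file sketch for illustration only (the file path and token value are made up; a real helper must protect the token and keep non-token output off stdout):

```shell
# Minimal token "helper" sketch: get prints the cached token, store
# reads one from stdin, erase deletes the cache. No security -- demo only.
TOKEN_FILE="${TMPDIR:-/tmp}/demo_vault_token"

helper() {
  case "$1" in
    get)   if [ -f "$TOKEN_FILE" ]; then cat "$TOKEN_FILE"; fi ;;
    store) cat > "$TOKEN_FILE" ;;
    erase) rm -f "$TOKEN_FILE" ;;
    *)     echo "usage: helper get|store|erase" >&2; return 101 ;;
  esac
}

printf 'hvs.FAKETOKEN' | helper store  # cache a made-up token from stdin
helper get                             # prints the cached token
helper erase
```

Each branch follows the contract: `get` writes only the token to stdout (nothing if the cache is empty), errors go to stderr with a non-zero status.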
{"questions":"vault The Vault CLI is a static binary that wraps the Vault API While every CLI page title Vault CLI usage layout docs Vault CLI Technical reference for the Vault CLI","answers":"---\nlayout: docs\npage_title: Vault CLI usage\ndescription: >-\n  Technical reference for the Vault CLI\n---\n\n# Vault CLI\n\nThe Vault CLI is a static binary that wraps the Vault API. While every CLI\ncommand maps directly to one or more APIs internally, not every endpoint is\nexposed publicly and not every API endpoint has a corresponding CLI command.\n\n\n\n## Usage \n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault <command> [subcommand(s)] [flag(s)] [command-argument(s)]\n\n$ vault <command> [subcommand(s)] [-help | -h]\n```\n\n<\/CodeBlockConfig>\n\n<Tip>\n\n  Use the `-help` flag with any command to see a description of the command and a\n  list of supported options and flags.\n\n<\/Tip>\n\nThe Vault CLI returns different exit codes depending on whether and where an\nerror occurred:\n\n- **`0`** - Success\n- **`1`** - Local\/terminal error (invalid flags, failed validation, wrong\n  numbers of arguments, etc.)\n- **`2`** - Remote\/server error (API failures, bad TLS, incorrect API\n  parameters, etc.)\n\n\n### Authenticating to Vault\n\nUnauthenticated users can use CLI commands with the `--help` flag, but must use\n[`vault login`](\/vault\/docs\/commands\/login) or set the\n[`VAULT_TOKEN`](\/vault\/docs\/commands#standard-vault_token) environment variable\nto use the CLI.\n\nThe CLI uses a token helper to cache access tokens after authenticating with\n`vault login` The default file for cached tokens is `~\/.vault-token` and\ndeleting the file forcibly logs the user out of Vault.\n\nIf you prefer to use a custom token helper,\n[you can create your own](\/vault\/docs\/commands\/token-helper) and configure the\nCLI to use it.\n\n\n### Passing command arguments\n\nCommand arguments include any relevant configuration settings and\ncommand-specific options. 
Command options pass input data as `key=value` pairs,\nwhich you can provide inline, as a `stdin` stream, or from a local file.\n\n<Tabs>\n\n<Tab heading=\"Inline\">\n\nTo pass input inline with the command, use the `<option-name>=<value>` syntax:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault audit enable file file_path=\"\/var\/log\/vault.log\"\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"stdin\">\n\nTo pass input from `stdin`, use `-` as a stand-in for the entire `key=value`\npair or a specific option value.\n\nTo pipe the option and value, use a JSON object with the option name and value:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ echo -n '{\"file_path\":\"\/var\/log\/vault.log\"}' | vault audit enable file -\n```\n\n<\/CodeBlockConfig>\n\nTo pipe the option value by itself, provide the option name inline:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ echo -n \"\/var\/log\/vault.log\" | vault audit enable file file_path=-\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n\n<Tab heading=\"Local file\">\n\nTo pass data as a file, use `@` as a stand-in for the entire\n`<option-name>=<value>` pair or a specific option value.\n\nTo pass the option and value, use a JSON file:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\ndata.json:\n{\n  \"file_path\":\"\/var\/log\/vault.log\"\n}\n\n$ vault audit enable file @data.json\n```\n\n<\/CodeBlockConfig>\n\nTo pass the option value by itself, use the option name inline and pass the\nvalue as text:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\ndata.txt:\n\/var\/log\/vault.log\n\n$ vault audit enable file file_path=@data.txt\n```\n\n<\/CodeBlockConfig>\n\nIf you use `@` as part of an argument **name** in `<option-name>=<value>`\nformat, Vault treats the `@` as part of the key name, rather than a file\nreference. 
As a result, Vault does not support filenames that include the `=`\ncharacter.\n\n<Note title=\"Escape literal '@' values\">\n\n  To keep Vault from parsing values that begin with a literal `@`, escape the\n  value with a backslash (`\\`):\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault login -method userpass  \\\n    username=\"mitchellh\"        \\\n    password=\"\\@badpasswordisbad\"\n```\n\n<\/CodeBlockConfig>\n\n<\/Note>\n\n<\/Tab>\n\n<\/Tabs>\n\n\n### Calling API endpoints \n\nTo invoke an API endpoint with the Vault CLI, you can use one of the following\nCLI commands with the associated endpoint path:\n\nCLI command    | HTTP verbs\n-------------- | -------------\n`vault read`   | `GET`\n`vault write`  | `PUT`, `POST`\n`vault delete` | `DELETE`\n`vault list`   | `LIST`\n\nFor example, to call the UpsertLDAPGroup endpoint,\n`\/auth\/ldap\/groups\/{group-name}` to create a new LDAP group called `admin`:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault write \/auth\/ldap\/groups\/admin policies=\"admin,default\"\n```\n\n<\/CodeBlockConfig>\n\n<Tip title=\"Core plugins have dedicated commands\">\n\n  You can use `read`, `write`, `delete`, or `list` with the relevant paths for\n  any valid API endpoint, but some plugins are central to the functionality\n  of Vault and have dedicated CLI commands:\n\n  - [`vault kv`](\/vault\/docs\/commands\/kv)\n  - [`vault transit`](\/vault\/docs\/commands\/transit)\n  - [`vault transform`](\/vault\/docs\/commands\/transform)\n  - [`vault token`](\/vault\/docs\/commands\/token)\n\n<\/Tip>\n\n\n\n## Enable autocomplete\n\nThe CLI does not autocomplete commands by default. To enable autocomplete for\nflags, subcommands, and arguments (where supported), use the\n`-autocomplete-install` flag and **restart your shell session**:\n\n```shell-session\n$ vault -autocomplete-install\n```\n\nTo use autocomplete, press `<tab>` while typing a command to show a list of\navailable completions. 
Or, use the `-<tab>` flag to show available flag\ncompletions for the current command.\n\n<Tip>\n\n  If you have configured the `VAULT_*` environment variables needed to connect to\n  your Vault instance, the autocomplete feature automatically queries the Vault\n  server and returns helpful argument suggestions.\n\n<\/Tip>\n\n\n\n## Configure environment variables\n\nYou can use environment variables to configure the CLI globally. Some\nconfiguration settings have a corresponding CLI flag to configure a specific\ncommand.\n\nFor example, `export VAULT_ADDR='http:\/\/localhost:8200'` sets the\naddress of your Vault server globally, while\n`-address='http:\/\/someotherhost:8200'` overrides the value for a specific\ncommand.\n\n---\n\n@include 'global-settings\/all-env-variables.mdx'\n\n\n\n## Troubleshoot CLI errors\n\nIf you run into errors when executing a particular CLI command, the following\nflags and commands can help you track down the problem.\n\n\n### Confirm you are using the right endpoint or command\n\nIf a command behaves differently than expected or you need details about a\nspecific endpoint, you can use the\n[`vault path-help`](\/vault\/docs\/commands\/path-help) command to see the help text\nfor a given endpoint path.\n\nFor example, to see the help for `sys\/mounts`:\n\n```shell-session\n$ vault path-help sys\/mounts\nRequest:        mounts\nMatching Route: ^mounts$\n\nList the currently mounted backends.\n\n## DESCRIPTION\n\nThis path responds to the following HTTP methods.\n\n    GET \/\n        Lists all the mounted secret backends.\n\n    GET \/<mount point>\n        Get information about the mount at the specified path.\n\n    POST \/<mount point>\n        Mount a new secret backend to the mount point in the URL.\n\n    POST \/<mount point>\/tune\n        Tune configuration parameters for the given mount point.\n\n    DELETE \/<mount point>\n        Unmount the specified mount point.\n```\n\n### Construct the associated cURL command\n\nTo 
determine if the problem exists with the structure of your CLI command or the\nassociated endpoint, you can use the `-output-curl-string` flag:\n\nFor example, to test that a `vault write` command to create a new user is not\nfailing due to an issue with the `\/auth\/userpass\/users\/{username}` endpoint, use\nthe generated cURL command to call the endpoint directly:\n\n```shell-session\n$ vault write -output-curl-string  auth\/userpass\/users\/bob password=\"long-password\"\n\ncurl -X PUT -H \"X-Vault-Request: true\" -H \"X-Vault-Token: $(vault print token)\" -d '{\"password\":\"long-password\"}' http:\/\/127.0.0.1:8200\/v1\/auth\/userpass\/users\/bob\n```\n\n### Construct the required Vault policy\n\nTo determine if the problem relates to insufficient permissions, you can use the\n`-output-policy` flag to construct a minimal Vault policy that grants the\npermissions needed to execute the relevant command.\n\nFor example, to confirm you have permission to write a secret to the `kv`\nplugin, mounted at `kv\/secret`, use `-output-policy` then confirm you have the\ncapabilities listed:\n\n```\n$ vault kv put -output-policy kv\/secret value=itsasecret\n\npath \"kv\/data\/secret\" {\n  capabilities = [\"create\", \"update\"]\n}\n```","site":"vault","answers_cleaned":"    layout  docs page title  Vault CLI usage description       Technical reference for the Vault CLI        Vault CLI  The Vault CLI is a static binary that wraps the Vault API  While every CLI command maps directly to one or more APIs internally  not every endpoint is exposed publicly and not every API endpoint has a corresponding CLI command        Usage    CodeBlockConfig hideClipboard      shell session   vault  command   subcommand s    flag s    command argument s      vault  command   subcommand s     help    h         CodeBlockConfig    Tip     Use the   help  flag with any command to see a description of the command and a   list of supported options and flags     Tip   The Vault CLI returns 
different exit codes depending on whether and where an error occurred        0      Success      1      Local terminal error  invalid flags  failed validation  wrong   numbers of arguments  etc        2      Remote server error  API failures  bad TLS  incorrect API   parameters  etc         Authenticating to Vault  Unauthenticated users can use CLI commands with the    help  flag  but must use   vault login    vault docs commands login  or set the   VAULT TOKEN    vault docs commands standard vault token  environment variable to use the CLI   The CLI uses a token helper to cache access tokens after authenticating with  vault login  The default file for cached tokens is     vault token  and deleting the file forcibly logs the user out of Vault   If you prefer to use a custom token helper   you can create your own   vault docs commands token helper  and configure the CLI to use it        Passing command arguments  Command arguments include any relevant configuration settings and command specific options  Command options pass input data as  key value  pairs  which you can provided inline  as a  stdin  stream  or from a local file    Tabs    Tab heading  Inline    To pass input inline with the command  use the   option name   value   syntax    CodeBlockConfig hideClipboard      shell session   vault audit enable file file path   var log vault log         CodeBlockConfig     Tab    Tab heading  stdin    To pass input from  stdin   use     as a stand in for the entire  key value  pair or a specific option value   To pipe the option and value  use a JSON object with the option name and value    CodeBlockConfig hideClipboard      shell session   echo  n    file path    var log vault log      vault audit enable file          CodeBlockConfig   To pipe the option value by itself  provide the option name inline    CodeBlockConfig hideClipboard      shell session   echo  n   var log vault log    vault audit enable file file path          CodeBlockConfig     Tab    Tab heading  
Local file    To pass data as a file  use     as a stand in for the entire   option name   value   pair or a specific option value   To pass the option and value  use a JSON file    CodeBlockConfig hideClipboard      shell session data json       file path    var log vault log       vault audit enable file  data json        CodeBlockConfig   To pass the option value by itself  use the option name inline and pass the value as text    CodeBlockConfig hideClipboard      shell session data txt   var log vault log    vault audit enable file file path  data txt        CodeBlockConfig   If you use     as part of an argument   name   in   option name   value   format  Vault treats the     as part of the key name  rather than a file reference  As a result  Vault does not support filenames that include the     character    Note title  Escape literal     values      To keep Vault from parsing values that begin with a literal      escape the   value with a backslash          CodeBlockConfig hideClipboard      shell session   vault login  method userpass        username  mitchellh               password    badpasswordisbad         CodeBlockConfig     Note     Tab     Tabs        Calling API endpoints   To invoke an API endpoint with the Vault CLI  you can use one of the following CLI commands with the associated endpoint path   CLI command      HTTP verbs                                 vault read       GET   vault write      PUT    POST   vault delete     DELETE   vault list       LIST   For example  to call the UpsertLDAPGroup endpoint    auth ldap groups  group name   to create a new LDAP group called  admin     CodeBlockConfig hideClipboard      shell session   vault write  auth ldap groups admin policies  admin default         CodeBlockConfig    Tip title  Core plugins have dedicated commands      You can use  read    write    delete   or  list  with the relevant paths for   any valid API endpoint  but some plugins are central to the functionality   of Vault and have 
dedicated CLI commands         vault kv    vault docs commands kv        vault transit    vault docs commands transit        vault transform    vault docs commands transform        vault token    vault docs commands token     Tip        Enable autocomplete  The CLI does not autocomplete commands by default  To enable autocomplete for flags  subcommands  and arguments  where supported   use the   autocomplete install  flag and   restart your shell session        shell session   vault  autocomplete install      To use autocomplete  press   tab   while typing a command to show a list of available completions  Or  use the    tab   flag to show available flag completions for the current command    Tip     If you have configured the  VAULT    environment variables needed to connect to   your Vault instance  the autocomplete feature automatically queries the Vault   server and returns helpful argument suggestions     Tip        Configure environment variables  You can use environment variables to configure the CLI globally  Some configuration settings have a corresponding CLI flag to configure a specific command   For example   export VAULT ADDR  http   localhost 8200   sets the address of your Vault server globally  while   address  http   someotherhost 8200   overrides the value for a specific command         include  global settings all env variables mdx        Troubleshoot CLI errors  If you run into errors when executing a particular CLI command  the following flags and commands can help you track down the problem        Confirm you are using the right endpoint or command  If a command behaves differently than expected or you need details about a specific endpoint  you can use the   vault path help    vault docs commands path help  command to see the help text for a given endpoint path   For example  to see the help for  sys mounts       shell session   vault path help sys mounts Request         mounts Matching Route   mounts   List the currently mounted backends     
 DESCRIPTION  This path responds to the following HTTP methods       GET           Lists all the mounted secret backends       GET   mount point          Get information about the mount at the specified path       POST   mount point          Mount a new secret backend to the mount point in the URL       POST   mount point  tune         Tune configuration parameters for the given mount point       DELETE   mount point          Unmount the specified mount point           Construct the associated cURL command  To determine if the problem exists with the structure of your CLI command or the associated endpoint  you can use the   output curl string  flag   For example  to test that a  vault write  command to create a new user is not failing due to an issue with the   auth userpass users  username   endpoint  use the generated cURL command to call the endpoint directly      shell session   vault write  output curl string  auth userpass users bob password  long password   curl  X PUT  H  X Vault Request  true   H  X Vault Token    vault print token    d    password   long password    http   127 0 0 1 8200 v1 auth userpass users bob          Construct the required Vault policy  To determine if the problem relates to insufficient permissions  you can use the   output policy  flag to construct a minimal Vault policy that grants the permissions needed to execute the relevant command   For example  to confirm you have permission to write a secret to the  kv  plugin  mounted at  kv secret   use   output policy  then confirm you have the capabilities listed         vault kv put  output policy kv secret value itsasecret  path  kv data secret      capabilities     create    update        "}
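Because every CLI command wraps an HTTP API call, the request that `-output-curl-string` prints can be rebuilt in any HTTP client. A sketch in Ruby using the standard `net/http` library (the address and token are placeholders; the request is built but not sent):

```ruby
require 'json'
require 'net/http'
require 'uri'

# Build (but do not send) the request behind:
#   vault write auth/userpass/users/bob password="long-password"
# i.e. a PUT to /v1/<path> with the X-Vault-Token header, matching the
# cURL command shown by -output-curl-string.
def build_vault_request(addr, token, path, payload)
  req = Net::HTTP::Put.new(URI("#{addr}/v1/#{path}"))
  req['X-Vault-Token']   = token
  req['X-Vault-Request'] = 'true'
  req.body = JSON.generate(payload)
  req
end

req = build_vault_request('http://127.0.0.1:8200',  # VAULT_ADDR (placeholder)
                          's.example-token',        # placeholder token
                          'auth/userpass/users/bob',
                          { 'password' => 'long-password' })
# Sending it (requires a running Vault server):
#   Net::HTTP.start(req.uri.hostname, req.uri.port) { |http| http.request(req) }
```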
{"questions":"vault The server command starts a Vault server that responds to API requests By page title server Command server default Vault will start in a sealed state The Vault cluster must be layout docs initialized before use","answers":"---\nlayout: docs\npage_title: server - Command\ndescription: |-\n  The \"server\" command starts a Vault server that responds to API requests. By\n  default, Vault will start in a \"sealed\" state. The Vault cluster must be\n  initialized before use.\n---\n\n# server\n\nThe `server` command starts a Vault server that responds to API requests. By\ndefault, Vault will start in a \"sealed\" state. The Vault cluster must be\ninitialized before use, usually by the `vault operator init` command. Each Vault\nserver must also be unsealed using the `vault operator unseal` command or the\nAPI before the server can respond to requests.\n\nFor more information, please see:\n\n- [`operator init` command](\/vault\/docs\/commands\/operator\/init) for information\n  on initializing a Vault server.\n\n- [`operator unseal` command](\/vault\/docs\/commands\/operator\/unseal) for\n  information on providing unseal keys.\n\n- [Vault configuration](\/vault\/docs\/configuration) for the syntax and\n  various configuration options for a Vault server.\n\n## Examples\n\nStart a server with a configuration file:\n\n```shell-session\n$ vault server -config=\/etc\/vault\/config.hcl\n```\n\nRun in \"dev\" mode with a custom initial root token:\n\n```shell-session\n$ vault server -dev -dev-root-token-id=\"root\"\n```\n\n## Usage\n\nThe following flags are available in addition to the [standard set of\nflags](\/vault\/docs\/commands) included on all commands.\n\n### Command options\n\n- `-config` `(string: \"\")` - Path to a configuration file or directory of\n  configuration files. This flag can be specified multiple times to load\n  multiple configurations. 
If the path is a directory, all files which end in\n  .hcl or .json are loaded.\n\n- `-log-level` ((#\\_log_level)) `(string: \"info\")` - Log verbosity level. Supported values (in\n  order of descending detail) are `trace`, `debug`, `info`, `warn`, and `error`. This can\n  also be specified via the `VAULT_LOG_LEVEL` environment variable.\n\n- `-log-format` ((#\\_log_format)) `(string: \"standard\")` - Log format. Supported values\n  are `standard` and `json`. This can also be specified via the\n  `VAULT_LOG_FORMAT` environment variable.\n\n- `-log-file` ((#\\_log_file)) - The absolute path where Vault should save log\n  messages in addition to other, existing outputs like journald \/ stdout. Paths\n  that end with a path separator use the default file name, `vault.log`. Paths\n  that do not end with a file extension use the default `.log` extension. If the\n  log file rotates, Vault appends the current timestamp to the file name\n  at the time of rotation. For example:\n\n  | `log-file`              | Full log file           | Rotated log file                    |\n  |-------------------------|-------------------------|-------------------------------------|\n  | `\/var\/log`              | `\/var\/log\/vault.log`    | `\/var\/log\/vault-{timestamp}.log`    |\n  | `\/var\/log\/my-diary`     | `\/var\/log\/my-diary.log` | `\/var\/log\/my-diary-{timestamp}.log` |\n  | `\/var\/log\/my-diary.txt` | `\/var\/log\/my-diary.txt` | `\/var\/log\/my-diary-{timestamp}.txt` |\n\n- `-log-rotate-bytes` ((#\\_log_rotate_bytes)) - to specify the number of\n  bytes that should be written to a log before it needs to be rotated. Unless specified,\n  there is no limit to the number of bytes that can be written to a log file.\n\n- `-log-rotate-duration` ((#\\_log_rotate_duration)) - to specify the maximum\n  duration a log should be written to before it needs to be rotated. Must be a duration\n  value such as 30s. 
Defaults to 24h.\n\n- `-log-rotate-max-files` ((#\\_log_rotate_max_files)) - to specify the maximum\n  number of older log file archives to keep. Defaults to 0 (no files are ever deleted).\n  Set to -1 to discard old log files when a new one is created.\n\n- `-experiment` `(string array: [])` - The name of an experiment to enable for this node.\n  This flag can be specified multiple times to enable multiple experiments. Experiments\n  should NOT be used in production, and the associated APIs may have backwards incompatible\n  changes between releases. Additional experiments can also be specified via the\n  `VAULT_EXPERIMENTS` environment variable as a comma-separated list, or via the\n  [`experiments`](\/vault\/docs\/configuration#experiments) config key.\n\n- `-pprof-dump-dir` `(string: \"\")` - Directory where the generated profiles are\n  created. Vault does not generate profiles when `pprof-dump-dir` is unset.\n  Use `pprof-dump-dir` temporarily during debugging sessions. Do not use\n  `pprof-dump-dir` in regular production processes.\n\n- `VAULT_ALLOW_PENDING_REMOVAL_MOUNTS` `(bool: false)` - (environment variable)\n  Allow Vault to be started with builtin engines which have the `Pending Removal`\n  deprecation state. This is a temporary stopgap in place in order to perform an\n  upgrade and disable these engines. Once these engines are marked `Removed` (in\n  the next major release of Vault), the environment variable will no longer work\n  and a downgrade must be performed in order to remove the offending engines. For\n  more information, see the [deprecation faq](\/vault\/docs\/deprecation\/faq\/#q-what-are-the-phases-of-deprecation).\n\n### Dev options\n\n- `-dev` `(bool: false)` - Enable development mode. In this mode, Vault runs\n  in-memory and starts unsealed. As the name implies, do not run \"dev\" mode in\n  production.\n\n- `-dev-tls` `(bool: false)` - Enable TLS development mode. 
In this mode, Vault runs\n  in-memory and starts unsealed with a generated TLS CA, certificate and key.\n  As the name implies, do not run \"dev\" mode in production.\n\n- `-dev-tls-cert-dir` `(string: \"\")` - Directory where generated TLS files are created if `-dev-tls` is specified. If left unset, files are generated in a temporary directory.\n\n- `-dev-listen-address` `(string: \"127.0.0.1:8200\")` - Address to bind to in\n  \"dev\" mode. This can also be specified via the `VAULT_DEV_LISTEN_ADDRESS`\n  environment variable.\n\n- `-dev-root-token-id` `(string: \"\")` - Initial root token. This only applies\n  when running in \"dev\" mode. This can also be specified via the\n  `VAULT_DEV_ROOT_TOKEN_ID` environment variable.\n\n  _Note:_ The token ID should not start with the `s.` prefix.\n\n- `-dev-no-store-token` `(string: \"\")` - Do not persist the dev root token to\n  the token helper (usually the local filesystem) for use in future requests.\n  The token will only be displayed in the command output.\n\n- `-dev-plugin-dir` `(string: \"\")` - Directory from which plugins are allowed to be loaded. 
Only applies in \"dev\" mode, it will automatically register all the plugins in the provided directory.","site":"vault","answers_cleaned":"    layout  docs page title  server   Command description       The  server  command starts a Vault server that responds to API requests  By   default  Vault will start in a  sealed  state  The Vault cluster must be   initialized before use         server  The  server  command starts a Vault server that responds to API requests  By default  Vault will start in a  sealed  state  The Vault cluster must be initialized before use  usually by the  vault operator init  command  Each Vault server must also be unsealed using the  vault operator unseal  command or the API before the server can respond to requests   For more information  please see       operator init  command   vault docs commands operator init  for information   on initializing a Vault server       operator unseal  command   vault docs commands operator unseal  for   information on providing unseal keys      Vault configuration   vault docs configuration  for the syntax and   various configuration options for a Vault server      Examples  Start a server with a configuration file      shell session   vault server  config  etc vault config hcl      Run in  dev  mode with a custom initial root token      shell session   vault server  dev  dev root token id  root          Usage  The following flags are available in addition to the  standard set of flags   vault docs commands  included on all commands       Command options      config    string         Path to a configuration file or directory of   configuration files  This flag can be specified multiple times to load   multiple configurations  If the path is a directory  all files which end in    hcl or  json are loaded       log level       log level     string   info      Log verbosity level  Supported values  in   order of descending detail  are  trace    debug    info    warn   and  error   This can   also be specified 
via the  VAULT LOG LEVEL  environment variable       log format       log format     string   standard      Log format  Supported values   are  standard  and  json   This can also be specified via the    VAULT LOG FORMAT  environment variable       log file       log file     The absolute path where Vault should save log   messages in addition to other  existing outputs like journald   stdout  Paths   that end with a path separator use the default file name   vault log   Paths   that do not end with a file extension use the default   log  extension  If the   log file rotates  Vault appends the current timestamp to the file name   at the time of rotation  For example        log file                 Full log file             Rotated log file                                                                                                                          var log                   var log vault log         var log vault  timestamp  log             var log my diary          var log my diary log      var log my diary  timestamp  log          var log my diary txt      var log my diary txt      var log my diary  timestamp  txt         log rotate bytes       log rotate bytes     to specify the number of   bytes that should be written to a log before it needs to be rotated  Unless specified    there is no limit to the number of bytes that can be written to a log file       log rotate duration       log rotate duration     to specify the maximum   duration a log should be written to before it needs to be rotated  Must be a duration   value such as 30s  Defaults to 24h       log rotate max files       log rotate max files     to specify the maximum   number of older log file archives to keep  Defaults to 0  no files are ever deleted     Set to  1 to discard old log files when a new one is created       experiment    string array         The name of an experiment to enable for this node    This flag can be specified multiple times to enable multiple experiments  
Experiments   should NOT be used in production  and the associated APIs may have backwards incompatible   changes between releases  Additional experiments can also be specified via the    VAULT EXPERIMENTS  environment variable as a comma separated list  or via the     experiments    vault docs configuration experiments  config key       pprof dump dir    string         Directory where the generated profiles are   created  Vault does not generate profiles when  pprof dump dir  is unset    Use  pprof dump dir  temporarily during debugging sessions  Do not use    pprof dump dir  in regular production processes      VAULT ALLOW PENDING REMOVAL MOUNTS    bool  false      environment variable    Allow Vault to be started with builtin engines which have the  Pending Removal    deprecation state  This is a temporary stopgap in place in order to perform an   upgrade and disable these engines  Once these engines are marked  Removed   in   the next major release of Vault   the environment variable will no longer work   and a downgrade must be performed in order to remove the offending engines  For   more information  see the  deprecation faq   vault docs deprecation faq  q what are the phases of deprecation        Dev options      dev    bool  false     Enable development mode  In this mode  Vault runs   in memory and starts unsealed  As the name implies  do not run  dev  mode in   production       dev tls    bool  false     Enable TLS development mode  In this mode  Vault runs   in memory and starts unsealed with a generated TLS CA  certificate and key    As the name implies  do not run  dev  mode in production       dev tls cert dir    string         Directory where generated TLS files are created if   dev tls  is specified  If left unset  files are generated in a temporary directory       dev listen address    string   127 0 0 1 8200      Address to bind to in    dev  mode  This can also be specified via the  VAULT DEV LISTEN ADDRESS    environment variable       dev root 
token id    string         Initial root token  This only applies   when running in  dev  mode  This can also be specified via the    VAULT DEV ROOT TOKEN ID  environment variable      Note   The token ID should not start with the  s   prefix       dev no store token    string         Do not persist the dev root token to   the token helper  usually the local filesystem  for use in future requests    The token will only be displayed in the command output       dev plugin dir    string         Directory from which plugins are allowed to be loaded  Only applies in  dev  mode  it will automatically register all the plugins in the provided directory "}
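The `-log-file` naming and rotation rules tabulated above can be expressed directly. This is a sketch of the rules as documented, not Vault's actual implementation (`resolve_log_file` and `rotated_name` are illustrative names):

```ruby
# Documented -log-file rules: a directory (or trailing separator) gets the
# default name vault.log; a path without an extension gets the .log extension.
def resolve_log_file(path)
  return File.join(path, 'vault.log') if path.end_with?('/') || File.directory?(path)
  return "#{path}.log" if File.extname(path).empty?
  path
end

# On rotation, the current timestamp is inserted before the extension.
def rotated_name(log_file, timestamp)
  ext = File.extname(log_file)
  "#{log_file.delete_suffix(ext)}-#{timestamp}#{ext}"
end

puts resolve_log_file('/var/log/my-diary')               # /var/log/my-diary.log
puts rotated_name('/var/log/vault.log', '1700000000')    # /var/log/vault-1700000000.log
puts rotated_name('/var/log/my-diary.txt', '1700000000') # /var/log/my-diary-1700000000.txt
```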
{"questions":"vault login The login command authenticates users or machines to Vault using the conceptually similar to a session token on a website page title login Command provided arguments A successful authentication results in a Vault token layout docs","answers":"---\nlayout: docs\npage_title: login - Command\ndescription: |-\n  The \"login\" command authenticates users or machines to Vault using the\n  provided arguments. A successful authentication results in a Vault token -\n  conceptually similar to a session token on a website.\n---\n\n# login\n\nThe `login` command authenticates users or machines to Vault using the provided\narguments. A successful authentication results in a Vault token - conceptually\nsimilar to a session token on a website. By default, this token is cached on the\nlocal machine for future requests.\n\nThe `-method` flag allows using other auth methods, such as userpass,\ngithub, or cert. For these, additional \"K=V\" pairs may be required. For more\ninformation about the list of configuration parameters available for a given\nauth method, use the \"vault auth help TYPE\" command. You can also use \"vault\nauth list\" to see the list of enabled auth methods.\n\nIf an auth method is enabled at a non-standard path, the `-method`\nflag still refers to the canonical type, but the `-path` flag refers to the\nenabled path.\n\nIf the authentication is requested with response wrapping (via `-wrap-ttl`),\nthe returned token is automatically unwrapped unless:\n\n- The `-token-only` flag is used, in which case this command will output\n  the wrapping token.\n\n- The `-no-store` flag is used, in which case this command will output the\n  details of the wrapping token.\n\n## Examples\n\nBy default, login uses a \"token\" method and reads from stdin:\n\n```shell-session\n$ vault login\nToken (will be hidden):\nSuccess! You are now authenticated. The token information displayed below\nis already stored in the token helper. 
You do NOT need to run \"vault login\"\nagain. Future Vault requests will automatically use this token.\n\nKey                  Value\n---                  -----\ntoken                s.nDj4BB2tK8NaFffwBZBxyIa1\ntoken_accessor       ZuaObqdTeCHZ4oa9HWmdQJuZ\ntoken_duration       \u221e\ntoken_renewable      false\ntoken_policies       [\"root\"]\nidentity_policies    []\npolicies             [\"root\"]\n```\n\nAlternatively, the token may be provided as a command line argument (note that\nthis may be captured by shell history or process listings):\n\n```shell-session\n$ vault login s.3jnbMAKl1i4YS3QoKdbHzGXq\nSuccess! You are now authenticated. The token information displayed below\nis already stored in the token helper. You do NOT need to run \"vault login\"\nagain. Future Vault requests will automatically use this token.\n\nKey                  Value\n---                  -----\ntoken                s.3jnbMAKl1i4YS3QoKdbHzGXq\ntoken_accessor       7Uod1Rm0ejUAz77Oh7SxpAM0\ntoken_duration       767h59m49s\ntoken_renewable      true\ntoken_policies       [\"admin\" \"default\"]\nidentity_policies    []\npolicies             [\"admin\" \"default\"]\n```\n\nTo login with a different method, use `-method`:\n\n```shell-session\n$ vault login -method=userpass username=my-username\nPassword (will be hidden):\nSuccess! You are now authenticated. The token information below is already\nstored in the token helper. You do NOT need to run \"vault login\" again. 
Future\nrequests will use this token automatically.\n\nKey                    Value\n---                    -----\ntoken                  s.2y4SU3Sk46dK3p2Y8q2jSBwL\ntoken_accessor         8J125x9SZyB76MI9uF2jSJZf\ntoken_duration         768h\ntoken_renewable        true\ntoken_policies         [\"default\"]\nidentity_policies      []\npolicies               [\"default\"]\ntoken_meta_username    my-username\n```\n\n~> Notice that the command option (`-method=userpass`) precedes the command\nargument (`username=my-username`).\n\nIf a github auth method was enabled at the path \"github-prod\":\n\n```shell-session\n$ vault login -method=github -path=github-prod\nSuccess! You are now authenticated. The token information below is already\nstored in the token helper. You do NOT need to run \"vault login\" again. Future\nrequests will use this token automatically.\n\nKey                    Value\n---                    -----\ntoken                  s.2f3c5L1MHtnqbuNCbx90utmC\ntoken_accessor         JLUIXJ6ltUftTt2UYRl2lTAC\ntoken_duration         768h\ntoken_renewable        true\ntoken_policies         [\"default\"]\nidentity_policies      []\npolicies               [\"default\"]\ntoken_meta_org         hashicorp\ntoken_meta_username    my-username\n```\n\n## Usage\n\nThe following flags are available in addition to the [standard set of\nflags](\/vault\/docs\/commands) included on all commands.\n\n### Output options\n\n- `-field` `(string: \"\")` - Print only the field with the given name, in the format\n  specified in the `-format` directive. The result will not have a trailing\n  newline making it ideal for piping to other processes.\n\n- `-format` `(string: \"table\")` - Print the output in the given format. Valid\n  formats are \"table\", \"json\", or \"yaml\". This can also be specified via the\n  `VAULT_FORMAT` environment variable.\n\n### Command options\n\n- `-method` `(string \"token\")` - Type of authentication to use such as\n  \"userpass\" or \"ldap\". 
Note this corresponds to the TYPE, not the enabled path.\n  Use -path to specify the path where the authentication is enabled.\n\n- `-no-print` `(bool: false)` - Do not display the token. The token will\n  still be stored to the configured token helper. The default is false.\n\n- `-no-store` `(bool: false)` - Do not persist the token to the token helper\n  (usually the local filesystem) after authentication for use in future\n  requests. The token will only be displayed in the command output.\n\n- `-path` `(string: \"\")` - Remote path in Vault where the auth method\n  is enabled. This defaults to the TYPE of method (e.g. userpass -> userpass\/).\n\n- `-token-only` `(bool: false)` - Output only the token with no verification.\n  This flag is a shortcut for \"-field=token -no-store\". Setting those\n  flags to other values will have no effect.","site":"vault","answers_cleaned":"    layout  docs page title  login   Command description       The  login  command authenticates users or machines to Vault using the   provided arguments  A successful authentication results in a Vault token     conceptually similar to a session token on a website         login  The  login  command authenticates users or machines to Vault using the provided arguments  A successful authentication results in a Vault token   conceptually similar to a session token on a website  By default  this token is cached on the local machine for future requests   The   method  flag allows using other auth methods  such as userpass  github  or cert  For these  additional  K V  pairs may be required  For more information about the list of configuration parameters available for a given auth method  use the  vault auth help TYPE  command  You can also use  vault auth list  to see the list of enabled auth methods   If an auth method is enabled at a non standard path  the   method  flag still refers to the canonical type  but the   path  flag refers to the enabled path   If the authentication is requested with 
response wrapping  via   wrap ttl    the returned token is automatically unwrapped unless     The   token only  flag is used  in which case this command will output   the wrapping token     The   no store  flag is used  in which case this command will output the   details of the wrapping token      Examples  By default  login uses a  token  method and reads from stdin      shell session   vault login Token  will be hidden   Success  You are now authenticated  The token information displayed below is already stored in the token helper  You do NOT need to run  vault login  again  Future Vault requests will automatically use this token   Key                  Value                            token                s nDj4BB2tK8NaFffwBZBxyIa1 token accessor       ZuaObqdTeCHZ4oa9HWmdQJuZ token duration         token renewable      false token policies         root   identity policies       policies               root        Alternatively  the token may be provided as a command line argument  note that this may be captured by shell history or process listings       shell session   vault login s 3jnbMAKl1i4YS3QoKdbHzGXq Success  You are now authenticated  The token information displayed below is already stored in the token helper  You do NOT need to run  vault login  again  Future Vault requests will automatically use this token   Key                  Value                            token                s 3jnbMAKl1i4YS3QoKdbHzGXq token accessor       7Uod1Rm0ejUAz77Oh7SxpAM0 token duration       767h59m49s token renewable      true token policies         admin   default   identity policies       policies               admin   default        To login with a different method  use   method       shell session   vault login  method userpass username my username Password  will be hidden   Success  You are now authenticated  The token information below is already stored in the token helper  You do NOT need to run  vault login  again  Future requests will use this token 
automatically   Key                    Value                              token                  s 2y4SU3Sk46dK3p2Y8q2jSBwL token accessor         8J125x9SZyB76MI9uF2jSJZf token duration         768h token renewable        true token policies           default   identity policies         policies                 default   token meta username    my username         Notice that the command option    method userpass   precedes the command argument   username my username     If a github auth method was enabled at the path  github prod       shell session   vault login  method github  path github prod Success  You are now authenticated  The token information below is already stored in the token helper  You do NOT need to run  vault login  again  Future requests will use this token automatically   Key                    Value                              token                  s 2f3c5L1MHtnqbuNCbx90utmC token accessor         JLUIXJ6ltUftTt2UYRl2lTAC token duration         768h token renewable        true token policies           default   identity policies         policies                 default   token meta org         hashicorp token meta username    my username         Usage  The following flags are available in addition to the  standard set of flags   vault docs commands  included on all commands       Output options      field    string         Print only the field with the given name  in the format   specified in the   format  directive  The result will not have a trailing   newline making it ideal for piping to other processes       format    string   table      Print the output in the given format  Valid   formats are  table    json   or  yaml   This can also be specified via the    VAULT FORMAT  environment variable       Command options      method    string  token      Type of authentication to use such as    userpass  or  ldap   Note this corresponds to the TYPE  not the enabled path    Use  path to specify the path where the authentication is enabled       
no print    bool  false     Do not display the token  The token will   still be stored to the configured token helper  The default is false       no store    bool  false     Do not persist the token to the token helper    usually the local filesystem  after authentication for use in future   requests  The token will only be displayed in the command output       path    string         Remote path in Vault where the auth method   is enabled  This defaults to the TYPE of method  e g  userpass    userpass         token only    bool  false     Output only the token with no verification    This flag is a shortcut for   field token  no store   Setting those   flags to other values will have no affect "}
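The userpass example in the login record above can also be expressed as the raw API call the CLI performs; a hedged sketch (assuming `VAULT_ADDR` points at a reachable server and userpass is enabled at its default path `userpass/`):

```shell-session
$ curl --request POST \
    --data '{"password": "my-password"}' \
    "$VAULT_ADDR/v1/auth/userpass/login/my-username"
```

The token returned under `auth.client_token` in the JSON response is what `vault login` stores in the token helper.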
{"questions":"vault credentials secrets configuration or arbitrary data The specific behavior page title write Command layout docs of this command is determined at the thing mounted at the path write The write command writes data to Vault at the given path The data can be","answers":"---\nlayout: docs\npage_title: write - Command\ndescription: |-\n  The \"write\" command writes data to Vault at the given path. The data can be\n  credentials, secrets, configuration, or arbitrary data. The specific behavior\n  of this command is determined at the thing mounted at the path.\n---\n\n# write\n\nThe `write` command writes data to Vault at the given path (wrapper command for\nHTTP PUT or POST). The data can be credentials, secrets, configuration, or\narbitrary data. The specific behavior of the `write` command is determined at\nthe thing mounted at the path.\n\nData is specified as \"**key=value**\" pairs on the command line. If the value begins\nwith an \"**@**\", then it is loaded from a file. If the value for a key is \"**-**\", Vault\nwill read the value from stdin rather than the command line.\n\nSome API fields require more advanced structures such as maps. These cannot\ndirectly be represented on the command line. However, direct control of the\nrequest parameters can be achieved by using `-` as the only data argument.\nThis causes `vault write` to read a JSON blob containing all request parameters\nfrom stdin. 
This argument will be ignored if used in conjunction with any\n\"key=value\" pairs.\n\nFor a full list of examples and paths, please see the documentation that\ncorresponds to the secrets engines in use.\n\n## Examples\n\nStore an arbitrary secret in the token's cubbyhole:\n\n```shell-session\n$ vault write cubbyhole\/git-credentials username=\"student01\" password=\"p@$$w0rd\"\n```\n\nCreate a new encryption key in the transit secrets engine:\n\n```shell-session\n$ vault write -force transit\/keys\/my-key\n```\n\nThe `-force` flag allows the write operation without input data. (See [command\noptions](#command-options).)\n\nUpload an AWS IAM policy from a file on disk:\n\n```shell-session\n$ vault write aws\/roles\/ops policy=@policy.json\n```\n\nConfigure access to Consul by providing an access token:\n\n```shell-session\n$ echo $MY_TOKEN | vault write consul\/config\/access token=-\n```\n\nSet role-level TTL values for a user named \"alice\" so the generated lease has a\ndefault TTL of 8 hours (28800 seconds) and maximum TTL of 12 hours\n(43200 seconds):\n\n```shell-session\n$ VAULT_TOKEN=$VAULT_TOKEN vault write \/auth\/userpass\/users\/alice \\\n    token_ttl=\"8h\" token_max_ttl=\"12h\"\n```\n\n### API versus CLI\n\nCreate a token with TTL set to 8 hours, limited to 3 uses, and attach `admin`\nand `secops` policies.\n\n```shell-session\n$ vault write auth\/token\/create policies=\"admin\" policies=\"secops\" ttl=8h num_uses=3\n```\n\nEquivalent cURL command for this operation:\n\n```shell-session\n$ tee request_payload.json -<<EOF\n{\n   \"policies\": [\"admin\", \"secops\"],\n   \"ttl\": \"8h\",\n   \"num_uses\": 3\n}\nEOF\n\n$ curl --header \"X-Vault-Token: $VAULT_TOKEN\" \\\n    --request POST \\\n    --data @request_payload.json \\\n    $VAULT_ADDR\/v1\/auth\/token\/create\n```\n\nThe `vault write` command simplifies the API call.\n\nSince token management is a common task, Vault CLI provides a\n[`token`](\/vault\/docs\/commands\/token) command 
with\n[`create`](\/vault\/docs\/commands\/token\/create) subcommand. The CLI command simplifies\ntoken creation. Use the `vault token create` command with options to set the token\nTTL, policies, and use limit.\n\n```shell-session\n$ vault token create -policy=admin -policy=secops -ttl=8h -use-limit=3\n```\n\n-> **Syntax:** The command options start with `-` (e.g. `-ttl`) while API path\nparameters do not (e.g. `ttl`). You always set the API parameters after the path\nyou are invoking.\n\n## Usage\n\nThe following flags are available in addition to the [standard set of\nflags](\/vault\/docs\/commands) included on all commands.\n\n### Output options\n\n- `-field` `(string: \"\")` - Print only the field with the given name, in the format\n  specified in the `-format` directive. The result will not have a trailing\n  newline making it ideal for piping to other processes.\n\n- `-format` `(string: \"table\")` - Print the output in the given format. Valid\n  formats are \"table\", \"json\", or \"yaml\". This can also be specified via the\n  `VAULT_FORMAT` environment variable.\n\n### Command options\n\n- `-force` `(bool: false)` - Allow the operation to continue with no key=value\n  pairs. This allows writing to keys that do not need or expect data. 
This is\n  aliased as `-f`.","site":"vault","answers_cleaned":"    layout  docs page title  write   Command description       The  write  command writes data to Vault at the given path  The data can be   credentials  secrets  configuration  or arbitrary data  The specific behavior   of this command is determined at the thing mounted at the path         write  The  write  command writes data to Vault at the given path  wrapper command for HTTP PUT or POST   The data can be credentials  secrets  configuration  or arbitrary data  The specific behavior of the  write  command is determined at the thing mounted at the path   Data is specified as    key value    pairs on the command line  If the value begins with an          then it is loaded from a file  If the value for a key is          Vault will read the value from stdin rather than the command line   Some API fields require more advanced structures such as maps  These cannot directly be represented on the command line  However  direct control of the request parameters can be achieved by using     as the only data argument  This causes  vault write  to read a JSON blob containing all request parameters from stdin  This argument will be ignored if used in conjunction with any  key value  pairs   For a full list of examples and paths  please see the documentation that corresponds to the secrets engines in use      Examples  Store an arbitrary secrets in the token s cubbyhole      shell session   vault write cubbyhole git credentials username  student01  password  p   w0rd       Create a new encryption key in the transit secrets engine      shell session   vault write  force transit keys my key      The   force  flag allows the write operation without input data   See  command options   command options     Upload an AWS IAM policy from a file on disk      shell session   vault write aws roles ops policy  policy json      Configure access to Consul by providing an access token      shell session   echo  MY TOKEN   vault 
write consul config access token        Set role level TTL values for a user named  alice  so the generated lease has a default TTL of 8 hours  28800 seconds  and maximum TTL of 12 hours  43200 seconds       shell session   VAULT TOKEN  VAULT TOKEN vault write  auth userpass users alice       token ttl  8h  token max ttl  12h           API versus CLI  Create a token with TTL set to 8 hours  limited to 3 uses  and attach  admin  and  secops  policies      shell session   vault write auth token create policies  admin  policies  secops  ttl 8h num uses 3      Equivalent cURL command for this operation      shell session   tee request payload json    EOF       policies     admin    secops        ttl    8h       num uses   3   EOF    curl   header  X Vault Token   VAULT TOKEN          request POST         data  request payload json        VAULT ADDR v1 auth token create      The  vault write  command simplifies the API call   Since token management is a common task  Vault CLI provides a   token    vault docs commands token  command with   create    vault docs commands token create  subcommand  The CLI command simplifies the token creation  Use the  vault create  command with options to set the token TTL  policies  and use limit      shell session   vault token create  policy admin  policy secops  ttl 8h  use limit 3           Syntax    The command options start with      e g    ttl   while API path parameters do not  e g   ttl    You always set the API parameters after the path you are invoking      Usage  The following flags are available in addition to the  standard set of flags   vault docs commands  included on all commands       Output options      field    string         Print only the field with the given name  in the format   specified in the   format  directive  The result will not have a trailing   newline making it ideal for piping to other processes       format    string   table      Print the output in the given format  Valid   formats are  table    json   
or  yaml   This can also be specified via the    VAULT FORMAT  environment variable       Command options      force    bool  false     Allow the operation to continue with no key value   pairs  This allows writing to keys that do not need or expect data  This is   aliased as   f  "}
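The stdin JSON-blob behavior described in the `write` record above (`-` as the only data argument) can be sketched against that record's own `auth/token/create` example (assuming an authenticated `vault` CLI):

```shell-session
$ echo '{"policies": ["admin", "secops"], "ttl": "8h", "num_uses": 3}' | \
    vault write auth/token/create -
```

Because the whole request body arrives as JSON, this form can carry map- and list-valued fields that cannot be expressed as flat "key=value" pairs on the command line.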
{"questions":"vault ssh page title ssh Command credentials obtained from an SSH secrets engine The ssh command establishes an SSH connection with the target machine using layout docs","answers":"---\nlayout: docs\npage_title: ssh - Command\ndescription: |-\n  The \"ssh\" command establishes an SSH connection with the target machine using\n  credentials obtained from an SSH secrets engine.\n---\n\n# ssh\n\nThe `ssh` command establishes an SSH connection with the target machine.\n\nThis command uses one of the SSH secrets engines to authenticate and\nautomatically establish an SSH connection to a host. This operation requires\nthat the SSH secrets engine is mounted and configured.\n\nThe user must have `ssh` installed locally - this command will exec out to it\nwith the proper commands to provide an \"SSH-like\" consistent experience.\n\n## Examples\n\nSSH using the OTP mode (requires [sshpass](https:\/\/linux.die.net\/man\/1\/sshpass)\nfor full automation):\n\n```shell-session\n$ vault ssh -mode=otp -role=my-role user@1.2.3.4\n```\n\nSSH using the CA mode:\n\n```shell-session\n$ vault ssh -mode=ca -role=my-role user@1.2.3.4\n```\n\nSSH using CA mode with host key verification:\n\n```shell-session\n$ vault ssh \\\n    -mode=ca \\\n    -role=my-role \\\n    -host-key-mount-point=host-signer \\\n    -host-key-hostnames=example.com \\\n    user@example.com\n```\n\nFor step-by-step guides and instructions for each of the available SSH\nauth methods, please see the corresponding [SSH secrets\nengine](\/vault\/docs\/secrets\/ssh).\n\n## Usage\n\nThe following flags are available in addition to the [standard set of\nflags](\/vault\/docs\/commands) included on all commands.\n\n### Output options\n\n- `-field` `(string: \"\")` - Print only the field with the given name, in the format\n  specified in the `-format` directive. 
The result will not have a trailing\n  newline making it ideal for piping to other processes.\n\n- `-format` `(string: \"table\")` - Print the output in the given format. Valid\n  formats are \"table\", \"json\", or \"yaml\". This can also be specified via the\n  `VAULT_FORMAT` environment variable.\n\n### SSH options\n\n- `-mode` `(string: \"\")` - Name of the authentication mode (ca, dynamic, otp).\n\n- `-mount-point` `(string: \"ssh\/\")` - Mount point to the SSH secrets engine.\n\n- `-no-exec` `(bool: false)` - Print the generated credentials, but do not\n  establish a connection.\n\n- `-role` `(string: \"\")` - Name of the role to use to generate the key.\n\n- `-strict-host-key-checking` `(string: \"\")` - Value to use for the SSH\n  configuration option \"StrictHostKeyChecking\". The default is ask. This can\n  also be specified via the `VAULT_SSH_STRICT_HOST_KEY_CHECKING` environment\n  variable.\n\n- `-user-known-hosts-file` `(string: \"~\/.ssh\/known_hosts\")` - Value to use for\n  the SSH configuration option \"UserKnownHostsFile\". This can also be specified\n  via the `VAULT_SSH_USER_KNOWN_HOSTS_FILE` environment variable.\n\n### CA mode options\n\n- `-host-key-hostnames` `(string: \"*\")` - List of hostnames to delegate for the\n  CA. The default value allows all domains and IPs. This is specified as a\n  comma-separated list of values. This can also be specified via the\n  `VAULT_SSH_HOST_KEY_HOSTNAMES` environment variable.\n\n- `-host-key-mount-point` `(string: \"\")` - Mount point to the SSH\n  secrets engine where host keys are signed. When given a value, Vault will\n  generate a custom \"known_hosts\" file with delegation to the CA at the provided\n  mount point to verify the SSH connection's host keys against the provided CA.\n  By default, host keys are validated against the user's local \"known_hosts\"\n  file. This flag forces strict host key checking and ignores a custom user\n  known hosts file. 
This can also be specified via the\n  `VAULT_SSH_HOST_KEY_MOUNT_POINT` environment variable.\n\n- `-private-key-path` `(string: \"~\/.ssh\/id_rsa\")` - Path to the SSH private key\n  to use for authentication. This must be the corresponding private key to\n  `-public-key-path`.\n\n- `-public-key-path` `(string: \"~\/.ssh\/id_rsa.pub\")` - Path to the SSH public\n  key to send to Vault for signing.","site":"vault","answers_cleaned":"    layout  docs page title  ssh   Command description       The  ssh  command establishes an SSH connection with the target machine using   credentials obtained from an SSH secrets engine         ssh  The  ssh  command establishes an SSH connection with the target machine   This command uses one of the SSH secrets engines to authenticate and automatically establish an SSH connection to a host  This operation requires that the SSH secrets engine is mounted and configured   The user must have  ssh  installed locally   this command will exec out to it with the proper commands to provide an  SSH like  consistent experience      Examples  SSH using the OTP mode  requires  sshpass  https   linux die net man 1 sshpass  for full automation       shell session   vault ssh  mode otp  role my role user 1 2 3 4      SSH using the CA mode      shell session   vault ssh  mode ca  role my role user 1 2 3 4      SSH using CA mode with host key verification      shell session   vault ssh        mode ca        role my role        host key mount point host signer        host key hostnames example com       user example com      For step by step guides and instructions for each of the available SSH auth methods  please see the corresponding  SSH secrets engine   vault docs secrets ssh       Usage  The following flags are available in addition to the  standard set of flags   vault docs commands  included on all commands       Output options      field    string         Print only the field with the given name  in the format   specified in the   format  
directive  The result will not have a trailing   newline making it ideal for piping to other processes       format    string   table      Print the output in the given format  Valid   formats are  table    json   or  yaml   This can also be specified via the    VAULT FORMAT  environment variable       SSH options      mode    string         Name of the authentication mode  ca  dynamic  otp         mount point    string   ssh       Mount point to the SSH secrets engine       no exec    bool  false     Print the generated credentials  but do not   establish a connection       role    string         Name of the role to use to generate the key       strict host key checking    string         Value to use for the SSH   configuration option  StrictHostKeyChecking   The default is ask  This can   also be specified via the  VAULT SSH STRICT HOST KEY CHECKING  environment   variable       user known hosts file    string      ssh known hosts      Value to use for   the SSH configuration option  UserKnownHostsFile   This can also be specified   via the  VAULT SSH USER KNOWN HOSTS FILE  environment variable       CA mode options      host key hostnames    string          List of hostnames to delegate for the   CA  The default value allows all domains and IPs  This is specified as a   comma separated list of values  This can also be specified via the    VAULT SSH HOST KEY HOSTNAMES  environment variable       host key mount point    string         Mount point to the SSH   secrets engine where host keys are signed  When given a value  Vault will   generate a custom  known hosts  file with delegation to the CA at the provided   mount point to verify the SSH connection s host keys against the provided CA    By default  host keys are validated against the user s local  known hosts    file  This flag forces strict key host checking and ignores a custom user   known hosts file  This can also be specified via the    VAULT SSH HOST KEY MOUNT POINT  environment variable       private 
key path    string      ssh id rsa      Path to the SSH private key   to use for authentication  This must be the corresponding private key to     public key path        public key path    string      ssh id rsa pub      Path to the SSH public   key to send to Vault for signing "}
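The `-no-exec` flag documented in the ssh record above pairs naturally with OTP mode to inspect credentials without opening a connection (a sketch reusing that record's role and host):

```shell-session
$ vault ssh -mode=otp -role=my-role -no-exec user@1.2.3.4
```

The generated one-time password is printed so it can be supplied to `sshpass` or typed at the password prompt manually.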
{"questions":"vault authenticated token The generated token will inherit all policies and The token create command creates a new token that can be used for page title token create Command permissions of the currently authenticated token unless you explicitly define authentication This token will be created as a child of the currently layout docs a subset list policies to assign to the token","answers":"---\nlayout: docs\npage_title: token create - Command\ndescription: |-\n  The \"token create\" command creates a new token that can be used for\n  authentication. This token will be created as a child of the currently\n  authenticated token. The generated token will inherit all policies and\n  permissions of the currently authenticated token unless you explicitly define\n  a subset list of policies to assign to the token.\n---\n\n# token create\n\nThe `token create` command creates a new token that can be used for\nauthentication. This token will be created as a child of the currently\nauthenticated token. The generated token will inherit all policies and\npermissions of the currently authenticated token unless you explicitly define a\nsubset list of policies to assign to the token.\n\nA ttl can also be associated with the token. If a ttl is not associated with the\ntoken, then it cannot be renewed. 
If a ttl is associated with the token, it will\nexpire after that amount of time unless it is renewed.\n\nMetadata associated with the token (specified with `-metadata`) is written to\nthe audit log when the token is used.\n\nIf a role is specified, the role may override parameters specified here.\n\n## Examples\n\nCreate a token attached to specific policies:\n\n```shell-session\n$ vault token create -policy=my-policy -policy=other-policy\nKey                Value\n---                -----\ntoken              95eba8ed-f6fc-958a-f490-c7fd0eda5e9e\ntoken_accessor     882d4a40-3796-d06e-c4f0-604e8503750b\ntoken_duration     768h\ntoken_renewable    true\ntoken_policies     [default my-policy other-policy]\n```\n\nCreate a periodic token:\n\n```shell-session\n$ vault token create -period=30m\nKey                Value\n---                -----\ntoken              fdb90d58-af87-024f-fdcd-9f95039e353a\ntoken_accessor     4cd9177c-034b-a004-c62d-54bc56c0e9bd\ntoken_duration     30m\ntoken_renewable    true\ntoken_policies     [my-policy]\n```\n\n## Usage\n\nThe following flags are available in addition to the [standard set of\nflags](\/vault\/docs\/commands) included on all commands.\n\n### Output options\n\n- `-field` `(string: \"\")` - Print only the field with the given name. Specifying\n  this option will take precedence over other formatting directives. The result\n  will not have a trailing newline making it ideal for piping to other processes.\n\n- `-format` `(string: \"table\")` - Print the output in the given format. Valid\n  formats are \"table\", \"json\", or \"yaml\". This can also be specified via the\n  `VAULT_FORMAT` environment variable.\n\n### Command options\n\n- `-display-name` `(string: \"\")` - Name to associate with this token. This is a\n  non-sensitive value that can be used to help identify created secrets (e.g.\n  prefixes).\n\n- `-entity-alias`  `(string: \"\")` - Name of the entity alias to associate with\n  during token creation. 
Only works in combination with -role argument and used\n  entity alias must be listed in allowed_entity_aliases. If this has been\n  specified, the entity will not be inherited from the parent.\n\n- `-explicit-max-ttl` `(duration: \"\")` - Explicit maximum lifetime for the\n  token. Unlike normal TTLs, the maximum TTL is a hard limit and cannot be\n  exceeded. Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `-id` `(string: \"\")` - Value for the token. By default, this is an\n  auto-generated value. Specifying this value requires sudo permissions.\n\n- `-metadata` `(k=v: \"\")` - Arbitrary key=value metadata to associate with the\n  token. This metadata will show in the audit log when the token is used. This\n  can be specified multiple times to add multiple pieces of metadata.\n\n- `-no-default-policy` `(bool: false)` - Detach the \"default\" policy from the\n  policy set for this token.\n\n- `-orphan` `(bool: false)` - Create the token with no parent. This prevents the\n  token from being revoked when the token which created it expires. Setting this\n  value requires sudo permissions.\n\n- `-period` `(duration: \"\")` - If specified, every renewal will use the given\n  period. Periodic tokens do not expire as long as they are actively being\n  renewed (unless `-explicit-max-ttl` is also provided). Setting this value\n  requires sudo permissions. Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `-policy` `(string: \"\")` - Name of a policy to associate with this token. This\n  can be specified multiple times to attach multiple policies.\n\n- `-renewable` `(bool: true)` - Allow the token to be renewed up to its maximum\n  TTL.\n\n- `-role` `(string: \"\")` - Name of the role to create the token against.\n  Specifying -role may override other arguments. 
The locally authenticated Vault\n  token must have permission for `auth\/token\/create\/<role>`.\n\n- `-ttl` `(duration: \"\")` - Initial TTL to associate with the token. Token\n  renewals may be able to extend beyond this value, depending on the configured\n  maximum TTLs. Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `-type` `(string: \"service\")` - The type of token to create. Can be \"service\" or \"batch\".\n\n- `-use-limit` `(int: 0)` - Number of times this token can be used. After the\n  last use, the token is automatically revoked. By default, tokens can be used\n  an unlimited number of times until their expiration.\n\n- `-wrap-ttl` `(duration: \"\")` - Wraps the response in a cubbyhole token with the\n  requested TTL. The response is available via the \"vault unwrap\" command. The TTL\n  is specified as a numeric string with suffix like \"30s\" or \"5m\". This can also be\n  specified via the `VAULT_WRAP_TTL` environment variable.","site":"vault","answers_cleaned":"    layout  docs page title  token create   Command description       The  token create  command creates a new token that can be used for   authentication  This token will be created as a child of the currently   authenticated token  The generated token will inherit all policies and   permissions of the currently authenticated token unless you explicitly define   a subset list policies to assign to the token         token create  The  token create  command creates a new token that can be used for authentication  This token will be created as a child of the currently authenticated token  The generated token will inherit all policies and permissions of the currently authenticated token unless you explicitly define a subset list policies to assign to the token   A ttl can also be associated with the token  If a ttl is not associated with the token  then it cannot be renewed  If a ttl is associated with the token  it will expire after that amount of time unless it 
is renewed   Metadata associated with the token  specified with   metadata   is written to the audit log when the token is used   If a role is specified  the role may override parameters specified here      Examples  Create a token attached to specific policies      shell session   vault token create  policy my policy  policy other policy Key                Value                          token              95eba8ed f6fc 958a f490 c7fd0eda5e9e token accessor     882d4a40 3796 d06e c4f0 604e8503750b token duration     768h token renewable    true token policies      default my policy other policy       Create a periodic token      shell session   vault token create  period 30m Key                Value                          token              fdb90d58 af87 024f fdcd 9f95039e353a token accessor     4cd9177c 034b a004 c62d 54bc56c0e9bd token duration     30m token renewable    true token policies      my policy          Usage  The following flags are available in addition to the  standard set of flags   vault docs commands  included on all commands       Output options      field    string         Print only the field with the given name  Specifying   this option will take precedence over other formatting directives  The result   will not have a trailing newline making it ideal for piping to other processes       format    string   table      Print the output in the given format  Valid   formats are  table    json   or  yaml   This can also be specified via the    VAULT FORMAT  environment variable       Command options      display name    string         Name to associate with this token  This is a   non sensitive value that can be used to help identify created secrets  e g    prefixes        entity alias     string         Name of the entity alias to associate with   during token creation  Only works in combination with  role argument and used   entity alias must be listed in allowed entity aliases  If this has been   specified  the entity will not be inherited 
from the parent       explicit max ttl    duration         Explicit maximum lifetime for the   token  Unlike normal TTLs  the maximum TTL is a hard limit and cannot be   exceeded  Uses  duration format strings   vault docs concepts duration format        id    string         Value for the token  By default  this is an   auto generated value  Specifying this value requires sudo permissions       metadata    k v         Arbitrary key value metadata to associate with the   token  This metadata will show in the audit log when the token is used  This   can be specified multiple times to add multiple pieces of metadata       no default policy    bool  false     Detach the  default  policy from the   policy set for this token       orphan    bool  false     Create the token with no parent  This prevents the   token from being revoked when the token which created it expires  Setting this   value requires sudo permissions       period    duration         If specified  every renewal will use the given   period  Periodic tokens do not expire as long as they are actively being   renewed  unless   explicit max ttl  is also provided   Setting this value   requires sudo permissions  Uses  duration format strings   vault docs concepts duration format        policy    string         Name of a policy to associate with this token  This   can be specified multiple times to attach multiple policies       renewable    bool  true     Allow the token to be renewed up to it s maximum   TTL       role    string         Name of the role to create the token against    Specifying  role may override other arguments  The locally authenticated Vault   token must have permission for  auth token create  role         ttl    duration         Initial TTL to associate with the token  Token   renewals may be able to extend beyond this value  depending on the configured   maximumTTLs  Uses  duration format strings   vault docs concepts duration format        type    string   service      The type of 
token to create  Can be  service  or  batch        use limit    int  0     Number of times this token can be used  After the   last use  the token is automatically revoked  By default  tokens can be used   an unlimited number of times until their expiration       wrap ttl    duration          Wraps the response in a cubbyhole token with the   requested TTL  The response is available via the  vault unwrap  command  The TTL   is specified as a numeric string with suffix like  30s  or  5m   This can also be   specified via the  VAULT WRAP TTL  environment variable "}
{"questions":"vault The kv metadata command has subcommands for interacting with the metadata kv metadata page title kv metadata Command layout docs endpoint in Vault s key value store","answers":"---\nlayout: docs\npage_title: kv metadata - Command\ndescription: |-\n  The \"kv metadata\" command has subcommands for interacting with the metadata\n  endpoint in Vault's key-value store.\n---\n\n# kv metadata\n\n~> **NOTE:** This is a [KV version 2](\/vault\/docs\/secrets\/kv\/kv-v2) secrets\nengine command, and not available for Version 1.\n\nThe `kv metadata` command has subcommands for interacting with the metadata and\nversions for the versioned secrets (KV version 2 secrets engine) at the\nspecified path.\n\n## Usage\n\n```text\nUsage: vault kv metadata <subcommand> [options] [args]\n\n  # ...\n\nSubcommands:\n    delete    Deletes all versions and metadata for a key in the KV store\n    get       Retrieves key metadata from the KV store\n    put       Sets or updates key settings in the KV store\n```\n\n### kv metadata delete\n\nThe `kv metadata delete` command deletes all versions and metadata for the\nprovided key.\n\n#### Examples\n\nDeletes all versions and metadata of the key \"creds\":\n\n```shell-session\n$ vault kv metadata delete -mount=secret creds\nSuccess! Data deleted (if it existed) at: secret\/metadata\/creds\n```\n\n### kv metadata get\n\nThe `kv metadata get` command retrieves the metadata of the versioned secrets at\nthe given key name. 
If no key exists with that name, an error is returned.\n\n#### Examples\n\nRetrieves the metadata of the key name, \"creds\":\n\n```shell-session\n$ vault kv metadata get -mount=secret creds\n=== Metadata Path ===\nsecret\/metadata\/creds\n\n========== Metadata ==========\nKey                     Value\n---                     -----\ncas_required            false\ncreated_time            2019-06-28T15:53:30.395814Z\ncurrent_version         5\ndelete_version_after    0s\nmax_versions            0\noldest_version          0\nupdated_time            2019-06-28T16:01:47.40064Z\n\n====== Version 1 ======\nKey              Value\n---              -----\ncreated_time     2019-06-28T15:53:30.395814Z\ndeletion_time    n\/a\ndestroyed        false\n\n====== Version 2 ======\nKey              Value\n---              -----\ncreated_time     2019-06-28T16:01:36.676912Z\ndeletion_time    n\/a\ndestroyed        false\n\n...\n```\n\n### kv metadata put\n\nThe `kv metadata put` command can be used to create a blank key in the KV v2\nsecrets engine or to update key configuration for a specified key.\n\n#### Examples\n\nCreate a key in the KV v2 with no data at the key \"creds\":\n\n```shell-session\n$ vault kv metadata put -mount=secret creds\nSuccess! Data written to: secret\/metadata\/creds\n```\n\nSet the maximum number of versions to keep for the key \"creds\":\n\n```shell-session\n$ vault kv metadata put -mount=secret -max-versions=5 creds\nSuccess! Data written to: secret\/metadata\/creds\n```\n\n**NOTE:** If not set, the backend\u2019s configured max version is used. Once a key\nhas more than the configured allowed versions the oldest version will be\npermanently deleted.\n\nRequire Check-and-Set for the key \"creds\":\n\n```shell-session\n$ vault kv metadata put -mount=secret -cas-required creds\n```\n\n**NOTE:** When check-and-set is required, the key will require the `cas`\nparameter to be set on all write requests. 
Otherwise, the backend\u2019s\nconfiguration will be used.\n\nSet the length of time before a version is deleted for the key \"creds\":\n\n```shell-session\n$ vault kv metadata put -mount=secret -delete-version-after=\"3h25m19s\" creds\n```\n\n**NOTE:** If not set, the backend's configured Delete-Version-After is used. If\nset to a duration greater than the backend's, the backend's Delete-Version-After\nsetting will be used. Any changes to the Delete-Version-After setting will only\nbe applied to new versions.\n\n#### Output options\n\n- `-format` `(string: \"table\")` - Print the output in the given format. Valid\n  formats are \"table\", \"json\", or \"yaml\". This can also be specified via the\n  `VAULT_FORMAT` environment variable.\n\n#### Subcommand options\n\n- `-cas-required` `(bool: false)` - If true the key will require the cas\n  parameter to be set on all write requests. If false, the backend\u2019s\n  configuration will be used. The default is false.\n\n- `-max-versions` `(int: 0)` - The number of versions to keep per key. If not\n  set, the backend\u2019s configured max version is used. Once a key has more than the\n  configured allowed versions the oldest version will be permanently deleted.\n\n- `-delete-version-after` `(string:\"0s\")` \u2013 Set the `delete-version-after` value\n  to a duration to specify the `deletion_time` for all new versions written to\n  this key. If not set, the backend's `delete_version_after` will be used. If\n  the value is greater than the backend's `delete_version_after`, the backend's\n  `delete_version_after` will be used. Accepts [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `-custom-metadata` `(string: \"\")` - Specifies a key-value pair for the\n  `custom_metadata` field. This can be specified multiple times to add multiple\n  pieces of metadata.\n\n- `-mount` `(string: \"\")` - Specifies the path where the KV backend is mounted. 
\n  If specified, the next argument will be interpreted as the secret path. If \n  this flag is not specified, the next argument will be interpreted as the \n  combined mount path and secret path, with \/data\/ automatically inserted for \n  KV v2 secrets.","site":"vault","answers_cleaned":"    layout  docs page title  kv metadata   Command description       The  kv metadata  command has subcommands for interacting with the metadata   endpoint in Vault s key value store         kv metadata       NOTE    This is a  KV version 2   vault docs secrets kv kv v2  secrets engine command  and not available for Version 1   The  kv metadata  command has subcommands for interacting with the metadata and versions for the versioned secrets  KV version 2 secrets engine  at the specified path      Usage     text Usage  vault kv metadata  subcommand   options   args            Subcommands      delete    Deletes all versions and metadata for a key in the KV store     get       Retrieves key metadata from the KV store     put       Sets or updates key settings in the KV store          kv metadata delete  The  kv metadata delete  command deletes all versions and metadata for the provided key        Examples  Deletes all versions and metadata of the key  creds       shell session   vault kv metadata delete  mount secret creds Success  Data deleted  if it existed  at  secret metadata creds          kv metadata get  The  kv metadata get  command retrieves the metadata of the versioned secrets at the given key name  If no key exists with that name  an error is returned        Examples  Retrieves the metadata of the key name   creds       shell session   vault kv metadata get  mount secret creds     Metadata Path     secret metadata creds             Metadata            Key                     Value                               cas required            false created time            2019 06 28T15 53 30 395814Z current version         5 delete version after    0s max versions            0 
oldest version          0 updated time            2019 06 28T16 01 47 40064Z         Version 1        Key              Value                        created time     2019 06 28T15 53 30 395814Z deletion time    n a destroyed        false         Version 2        Key              Value                        created time     2019 06 28T16 01 36 676912Z deletion time    n a destroyed        false               kv metadata put  The  kv metadata put  command can be used to create a blank key in the KV v2 secrets engine or to update key configuration for a specified key        Examples  Create a key in the KV v2 with no data at the key  creds       shell session   vault kv metadata put  mount secret creds Success  Data written to  secret metadata creds      Set the maximum number of versions to keep for the key  creds       shell session   vault kv metadata put  mount secret  max versions 5 creds Success  Data written to  secret metadata creds        NOTE    If not set  the backend s configured max version is used  Once a key has more than the configured allowed versions the oldest version will be permanently deleted   Require Check and Set for the key  creds       shell session   vault kv metadata put  mount secret  cas required creds        NOTE    When check and set is required  the key will require the  cas  parameter to be set on all write requests  Otherwise  the backend s configuration will be used   Set the length of time before a version is deleted for the key  creds       shell session   vault kv metadata put  mount secret  delete version after  3h25m19s  creds        NOTE    If not set  the backend s configured Delete Version After is used  If set to a duration greater than the backend s  the backend s Delete Version After setting will be used  Any changes to the Delete Version After setting will only be applied to new versions        Output options      format    string   table      Print the output in the given format  Valid   formats are  table    json   or 
 yaml   This can also be specified via the    VAULT FORMAT  environment variable        Subcommand options      cas required    bool  false     If true the key will require the cas   parameter to be set on all write requests  If false  the backend s   configuration will be used  The default is false       max versions    int  0     The number of versions to keep per key  If not   set  the backend s configured max version is used  Once a key has more than the   configured allowed versions the oldest version will be permanently deleted       delete version after    string  0s      Set the  delete version after  value   to a duration to specify the  deletion time  for all new versions written to   this key  If not set  the backend s  delete version after  will be used  If   the value is greater than the backend s  delete version after   the backend s    delete version after  will be used  Accepts  duration format strings   vault docs concepts duration format        custom metadata    string         Specifies a key value pair for the    custom metadata  field  This can be specified multiple times to add multiple   pieces of metadata       mount    string         Specifies the path where the KV backend is mounted     If specified  the next argument will be interpreted as the secret path  If    this flag is not specified  the next argument will be interpreted as the    combined mount path and secret path  with  data  automatically inserted for    KV v2 secrets "}
{"questions":"vault kv page title kv Command The kv command groups subcommands for interacting with Vault s key value secret engine layout docs","answers":"---\nlayout: docs\npage_title: kv - Command\ndescription: |-\n  The \"kv\" command groups subcommands for interacting with Vault's key\/value\n  secret engine.\n---\n\n# kv\n\nThe `kv` command groups subcommands for interacting with Vault's key\/value\nsecrets engine (both [KV version 1](\/vault\/docs\/secrets\/kv\/kv-v1) and [KV\nVersion 2](\/vault\/docs\/secrets\/kv\/kv-v2)).\n\n## Syntax\n\nOption flags for a given subcommand are provided after the subcommand, but before the arguments.\n\nThe path to where the secrets engine is mounted can be indicated with the `-mount` flag, such as `vault kv get -mount=secret creds`.\n\nThe deprecated path-like syntax can also be used (e.g. `vault kv get secret\/creds`), but this should be avoided\nfor KV v2, because it is not actually the full API path to the secret\n(secret\/data\/foo) and may cause confusion.\n\n~> A `flag provided but not defined: -mount` error means you are using an older version of Vault before the\nmount flag syntax was introduced. Upgrade to at least Vault 1.11, or refer to previous versions of the docs\nwhich only use the old syntax to refer to the mount path.\n\n## Mount flag syntax (KV)\n\nAll `kv` commands can alternatively refer to the path to the KV secrets engine using a flag-based syntax like `$ vault kv get -mount=secret password`\ninstead of `$ vault kv get secret\/password`. The mount flag syntax was created to mitigate confusion caused by the fact that for KV v2 secrets,\ntheir full path (used in policies and raw API calls) actually contains a nested `\/data\/` element (e.g. `secret\/data\/password`) which can be easily overlooked when using\nthe above KV v1-like syntax `secret\/password`. 
To avoid this confusion, all KV-specific docs pages will use the `-mount` flag.\n\n## Exit codes\n\nThe Vault CLI aims to be consistent and well-behaved unless documented\notherwise.\n\n- Local errors such as incorrect flags, failed validations, or wrong numbers\n  of arguments return an exit code of 1.\n\n- Any remote errors such as API failures, bad TLS, or incorrect API parameters\n  return an exit code of 2.\n\nSome commands override this default where it makes sense. These commands\ndocument this anomaly.\n\n## Examples\n\nCreate or update the key named \"creds\" in the KV version 2 enabled at \"secret\"\nwith the value \"passcode=my-long-passcode\":\n\n```shell-session\n$ vault kv put -mount=secret creds passcode=my-long-passcode\n== Secret Path ==\nsecret\/data\/creds\n\n======= Metadata =======\nKey                Value\n---                -----\ncreated_time       2022-06-15T20:14:17.107852Z\ncustom_metadata    <nil>\ndeletion_time      n\/a\ndestroyed          false\nversion            1\n```\n\nRead this value back:\n\n```shell-session\n$ vault kv get -mount=secret creds\n== Secret Path ==\nsecret\/data\/creds\n\n======= Metadata =======\nKey                Value\n---                -----\ncreated_time       2022-06-15T20:14:17.107852Z\ncustom_metadata    <nil>\ndeletion_time      n\/a\ndestroyed          false\nversion            1\n\n====== Data ======\nKey         Value\n---         -----\npasscode    my-long-passcode\n```\n\nGet metadata for the key named \"creds\":\n\n```shell-session\n$ vault kv metadata get -mount=secret creds\n=== Metadata Path ===\nsecret\/metadata\/creds\n\n========== Metadata ==========\nKey                     Value\n---                     -----\ncas_required            false\ncreated_time            2022-06-15T20:14:17.107852Z\ncurrent_version         1\ncustom_metadata         <nil>\ndelete_version_after    0s\nmax_versions            0\noldest_version          0\nupdated_time            
2022-06-15T20:14:17.107852Z\n\n====== Version 1 ======\nKey              Value\n---              -----\ncreated_time     2022-06-15T20:14:17.107852Z\ndeletion_time    n\/a\ndestroyed        false\n```\n\nGet a specific version of the key named \"creds\":\n\n```shell-session\n$ vault kv get -mount=secret -version=1 creds\n== Secret Path ==\nsecret\/data\/creds\n\n======= Metadata =======\nKey                Value\n---                -----\ncreated_time       2022-06-15T20:14:17.107852Z\ncustom_metadata    <nil>\ndeletion_time      n\/a\ndestroyed          false\nversion            1\n\n====== Data ======\nKey         Value\n---         -----\npasscode    my-long-passcode\n```\n\n## Usage\n\n```text\nUsage: vault kv <subcommand> [options] [args]\n\n  # ...\n\nSubcommands:\n    delete               Deletes versions in the KV store\n    destroy              Permanently removes one or more versions in the KV store\n    enable-versioning    Turns on versioning for a KV store\n    get                  Retrieves data from the KV store\n    list                 List data or secrets\n    metadata             Interact with Vault's Key-Value storage\n    patch                Sets or updates data in the KV store without overwriting\n    put                  Sets or updates data in the KV store\n    rollback             Rolls back to a previous version of data\n    undelete             Undeletes versions in the KV store\n```\n\nFor more information, examples, and usage about a subcommand, click on the name\nof the subcommand in the sidebar.","site":"vault","answers_cleaned":"    layout  docs page title  kv   Command description       The  kv  command groups subcommands for interacting with Vault s key value   secret engine         kv  The  kv  command groups subcommands for interacting with Vault s key value secrets engine  both  KV version 1   vault docs secrets kv kv v1  and  KV Version 2   vault docs secrets kv kv v2       Syntax  Option flags for a given subcommand are 
provided after the subcommand  but before the arguments   The path to where the secrets engine is mounted can be indicated with the   mount  flag  such as  vault kv get  mount secret creds    The deprecated path like syntax can also be used  e g   vault kv get secret creds    but this should be avoided for KV v2  because it is not actually the full API path to the secret   secret data foo  and may cause confusion      A  flag provided but not defined   mount  error means you are using an older version of Vault before the  mount flag syntax was introduced  Upgrade to at least Vault 1 11  or refer to previous versions of the docs  which only use the old syntax to refer to the mount path      Mount flag syntax  KV   All  kv  commands can alternatively refer to the path to the KV secrets engine using a flag based syntax like    vault kv get  mount secret password  instead of    vault kv get secret password   The mount flag syntax was created to mitigate confusion caused by the fact that for KV v2 secrets  their full path  used in policies and raw API calls  actually contains a nested   data   element  e g   secret data password   which can be easily overlooked when using the above KV v1 like syntax  secret password   To avoid this confusion  all KV specific docs pages will use the   mount  flag      Exit codes  The Vault CLI aims to be consistent and well behaved unless documented otherwise     Local errors such as incorrect flags  failed validations  or wrong numbers   of arguments return an exit code of 1     Any remote errors such as API failures  bad TLS  or incorrect API parameters   return an exit status of 2  Some commands override this default where it makes sense  These commands document this anomaly       Examples  Create or update the key named  creds  in the KV version 2 enabled at  secret  with the value  passcode my long passcode       shell session   vault kv put  mount secret creds passcode my long passcode    Secret Path    secret data creds          
Metadata         Key                Value                          created time       2022 06 15T20 14 17 107852Z custom metadata     nil  deletion time      n a destroyed          false version            1      Read this value back      shell session   vault kv get  mount secret creds    Secret Path    secret data creds          Metadata         Key                Value                          created time       2022 06 15T20 14 17 107852Z custom metadata     nil  deletion time      n a destroyed          false version            1         Data        Key         Value                   passcode    my long passcode      Get metadata for the key named  creds       shell session   vault kv metadata get  mount secret creds     Metadata Path     secret metadata creds             Metadata            Key                     Value                               cas required            false created time            2022 06 15T20 14 17 107852Z current version         1 custom metadata          nil  delete version after    0s max versions            0 oldest version          0 updated time            2022 06 15T20 14 17 107852Z         Version 1        Key              Value                        created time     2022 06 15T20 14 17 107852Z deletion time    n a destroyed        false      Get a specific version of the key named  creds       shell session   vault kv get  mount secret  version 1 creds    Secret Path    secret data creds          Metadata         Key                Value                          created time       2022 06 15T20 14 17 107852Z custom metadata     nil  deletion time      n a destroyed          false version            1         Data        Key         Value                   passcode    my long passcode         Usage     text Usage  vault kv  subcommand   options   args            Subcommands      delete               Deletes versions in the KV store     destroy              Permanently removes one or more versions in the KV store     enable 
versioning    Turns on versioning for a KV store     get                  Retrieves data from the KV store     list                 List data or secrets     metadata             Interact with Vault s Key Value storage     patch                Sets or updates data in the KV store without overwriting     put                  Sets or updates data in the KV store     rollback             Rolls back to a previous version of data     undelete             Undeletes versions in the KV store      For more information  examples  and usage about a subcommand  click on the name of the subcommand in the sidebar "}
{"questions":"vault The auth tune command tunes the configuration options for the auth method at page title auth tune Command auth tune layout docs the given PATH","answers":"---\nlayout: docs\npage_title: auth tune - Command\ndescription: |-\n  The \"auth tune\" command tunes the configuration options for the auth method at\n  the given PATH.\n---\n\n# auth tune\n\nThe `auth tune` command tunes the configuration options for the auth method at\nthe given PATH.\n\n<Note>\n\nThe argument corresponds to the **path** where the auth method is\nenabled, not the auth **type**.\n\n<\/Note>\n\n## Examples\n\nBefore tuning the auth method configuration, view the current configuration of the\nauth method enabled at `github\/`.\n\n```shell-session\n$ vault read sys\/auth\/github\/tune\nKey                  Value\n---                  -----\ndefault_lease_ttl    768h\ndescription          n\/a\nforce_no_cache       false\nmax_lease_ttl        768h\ntoken_type           default-service\n```\n\nThe default lease for the auth method enabled at `github\/` is currently set to\n768 hours. Tune this value to 72 hours.\n\n```shell-session\n$ vault auth tune -default-lease-ttl=72h github\/\nSuccess! Tuned the auth method at: github\/\n```\n\nVerify the updated configuration.\n\n<CodeBlockConfig highlight=\"1,4\">\n\n```shell-session\n$ vault read sys\/auth\/github\/tune\nKey                  Value\n---                  -----\ndefault_lease_ttl    72h\ndescription          n\/a\nforce_no_cache       false\nmax_lease_ttl        768h\ntoken_type           default-service\n```\n\n<\/CodeBlockConfig>\n\nTo restore the system default, you can use `-1`.\n\n```shell-session\n$ vault auth tune -default-lease-ttl=-1 github\/\nSuccess! 
Tuned the auth method at: github\/\n```\n\nVerify the updated configuration.\n\n<CodeBlockConfig highlight=\"1,4\">\n\n```shell-session\n$ vault read sys\/auth\/github\/tune\nKey                  Value\n---                  -----\ndefault_lease_ttl    768h\ndescription          n\/a\nforce_no_cache       false\nmax_lease_ttl        768h\ntoken_type           default-service\n```\n\n<\/CodeBlockConfig>\n\nYou can specify multiple audit non-hmac request keys.\n\n```shell-session\n$ vault auth tune -audit-non-hmac-request-keys=value1 -audit-non-hmac-request-keys=value2 github\/\nSuccess! Tuned the auth method at: github\/\n```\n\n### Enable user lockout\n\nUser lockout feature is only supported for\n[userpass](\/vault\/docs\/auth\/userpass), [ldap](\/vault\/docs\/auth\/ldap), and\n[approle](\/vault\/docs\/auth\/approle) auth methods.\n\nTune the `userpass\/` auth method to lock out the user after 10 failed login\nattempts within 10 minutes.\n\n```shell-session\n$ vault auth tune -user-lockout-threshold=10  -user-lockout-duration=10m userpass\/\nSuccess! 
Tuned the auth method at: userpass\/\n```\n\nView the current configuration of the auth method enabled at `userpass\/`.\n\n<CodeBlockConfig highlight=\"1,11-13\">\n\n```shell-session\n$ vault read sys\/auth\/userpass\/tune\n\nKey                  Value\n---                  -----\ndefault_lease_ttl    768h\ndescription          n\/a\nforce_no_cache       false\nmax_lease_ttl        768h\ntoken_type           default-service\nuser_lockout_counter_reset_duration    0s\nuser_lockout_disable                   false\nuser_lockout_duration                  10m\nuser_lockout_threshold                 10\n```\n\n<\/CodeBlockConfig>\n\n\n## Usage\n\nThe following flags are available in addition to the [standard set of\nflags](\/vault\/docs\/commands) included on all commands.\n\n- `-allowed-response-headers` `(string: \"\")` - response header values that the auth\n  method will be allowed to set.\n\n- `-audit-non-hmac-request-keys` `(string: \"\")` - Key that will not be HMAC'd\n  by audit devices in the request data object. Note that multiple keys may be\n  specified by providing this option multiple times, each time with 1 key.\n\n- `-audit-non-hmac-response-keys` `(string: \"\")` - Key that will not be HMAC'd\n  by audit devices in the response data object. Note that multiple keys may be\n  specified by providing this option multiple times, each time with 1 key.\n\n- `-default-lease-ttl` `(duration: \"\")` - The default lease TTL for this auth\n  method. If unspecified, this defaults to the Vault server's globally\n  configured default lease TTL, or a previously configured value for the auth\n  method.\n\n- `-description` `(string: \"\")` - Specifies the description of the auth method.\n  This overrides the current stored value, if any.\n\n- `-listing-visibility` `(string: \"\")` - The flag to toggle whether to show the\n  mount in the UI-specific listing endpoint. 
Valid values are `\"unauth\"` or `\"hidden\"`.\n  Passing empty string leaves the current setting unchanged.\n\n- `-max-lease-ttl` `(duration: \"\")` - The maximum lease TTL for this auth\n  method. If unspecified, this defaults to the Vault server's globally\n  configured [maximum lease TTL](\/vault\/docs\/configuration#max_lease_ttl), or a\n  previously configured value for the auth method. This value is allowed to\n  override the server's global max TTL; it can be longer or shorter.\n\n- `-passthrough-request-headers` `(string: \"\")` - request header values that will\n  be sent to the auth method. Note that multiple keys may be\n  specified by providing this option multiple times, each time with 1 key.\n\n- `-token-type` `(string: \"\")` - Specifies the type of tokens that should be\n  returned by the auth method.\n\n- `-trim-request-trailing-slashes` `(bool: false)` - If true, requests to\n  this mount with trailing slashes will have those slashes trimmed. \n  Necessary for some standards based APIs handled by Vault.\n\n- `-plugin-version` `(string: \"\")` - Configures the semantic version of the plugin\n  to use. The new version will not start running until the mount is\n  [reloaded](\/vault\/docs\/commands\/plugin\/reload).\n\n- `-user-lockout-threshold` `(string: \"\")` - Specifies the number of failed login attempts \n  after which the user is locked out. User lockout feature was added in Vault 1.13. \n\n- `-user-lockout-duration` `(duration: \"\")` - Specifies the duration for which a user will be locked out.\n  User lockout feature was added in Vault 1.13.\n\n- `-user-lockout-counter-reset-duration` `(duration: \"\")` - Specifies the duration after which the lockout \n  counter is reset with no failed login attempts. User lockout feature was added in Vault 1.13.\n\n- `-user-lockout-disable` `(bool: false)` - Disables the user lockout feature if set to true. 
User lockout feature was added in Vault 1.13.\n","site":"vault"}
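The `vault read sys/auth/<path>/tune` listings quoted in the record above are plain two-column Key/Value tables. As a rough illustration (the parser below is a hypothetical helper, not part of the Vault CLI), such output can be turned into a dict:

```python
def parse_tune_output(text: str) -> dict:
    """Parse the two-column Key/Value table printed by `vault read`."""
    rows = {}
    for line in text.strip().splitlines():
        parts = line.split(None, 1)  # split on the first run of whitespace
        if len(parts) != 2:
            continue
        key, value = parts
        if key == "Key" or set(key) == {"-"}:
            continue  # skip the header row and the "---  -----" separator
        rows[key] = value.strip()
    return rows

# Sample taken verbatim from the `github/` tune output above.
sample = """Key                  Value
---                  -----
default_lease_ttl    768h
description          n/a
force_no_cache       false
max_lease_ttl        768h
token_type           default-service"""

config = parse_tune_output(sample)
```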
{"questions":"vault The secrets enable command enables a secrets engine at a given path If an page title secrets enable Command layout docs the secrets engine is enabled it usually needs configuration The secrets engine already exists at the given path an error is returned After configuration varies by secrets engine","answers":"---\nlayout: docs\npage_title: secrets enable - Command\ndescription: |-\n  The \"secrets enable\" command enables a secrets engine at a given path. If a\n  secrets engine already exists at the given path, an error is returned. After\n  the secrets engine is enabled, it usually needs configuration. The\n  configuration varies by secrets engine.\n---\n\n# secrets enable\n\nThe `secrets enable` command enables a secrets engine at a given path. If a\nsecrets engine already exists at the given path, an error is returned. After the\nsecrets engine is enabled, it usually needs configuration. The configuration\nvaries by secrets engine.\n\nBy default, secrets engines are enabled at the path corresponding to their TYPE,\nbut users can customize the path using the `-path` option.\n\nSome secrets engines persist data, some act as data pass-through, and some\ngenerate dynamic credentials. The secrets engine will likely require\nconfiguration after it is mounted. For details on the specific configuration\noptions, please see the [secrets engine\ndocumentation](\/vault\/docs\/secrets).\n\n## Examples\n\nEnable the AWS secrets engine at \"aws\/\":\n\n```shell-session\n$ vault secrets enable aws\nSuccess! 
Enabled the aws secrets engine at: aws\/\n```\n\nEnable the SSH secrets engine at ssh-prod\/:\n\n```shell-session\n$ vault secrets enable -path=ssh-prod ssh\n```\n\nEnable the database secrets engine with an explicit maximum TTL of 30m:\n\n```shell-session\n$ vault secrets enable -max-lease-ttl=30m database\n```\n\nEnable a custom plugin (after it is registered in the plugin registry):\n\n```shell-session\n$ vault secrets enable -path=my-secrets my-plugin\n```\n\nFor more information on the specific configuration options and paths, please see\nthe [secrets engine](\/vault\/docs\/secrets) documentation.\n\n## Usage\n\nThe following flags are available in addition to the [standard set of\nflags](\/vault\/docs\/commands) included on all commands.\n\n- `-audit-non-hmac-request-keys` `(string: \"\")` - Key that will not be HMAC'd\n  by audit devices in the request data object. Note that multiple keys may be\n  specified by providing this option multiple times, each time with 1 key.\n  An example of this is provided in the [tune section](\/vault\/docs\/commands\/secrets\/tune).\n\n- `-audit-non-hmac-response-keys` `(string: \"\")` - Key that will not be HMAC'd\n  by audit devices in the response data object. Note that multiple keys may be\n  specified by providing this option multiple times, each time with 1 key.\n\n- `-default-lease-ttl` `(duration: \"\")` - The default lease TTL for this secrets\n  engine. If unspecified, this defaults to the Vault server's globally\n  configured default lease TTL.\n\n- `-description` `(string: \"\")` - Human-friendly description for the purpose of\n  this engine.\n\n- `-force-no-cache` `(bool: false)` - Force the secrets engine to disable\n  caching. If unspecified, this defaults to the Vault server's globally\n  configured cache settings. This does not affect caching of the underlying\n  encrypted data storage.\n\n- `-local` `(bool: false)` - Mark the secrets engine as local-only. 
Local\n  engines are not replicated or removed by replication.\n\n- `-max-lease-ttl` `(duration: \"\")` - The maximum lease TTL for this secrets\n  engine. If unspecified, this defaults to the Vault server's globally\n  configured maximum lease TTL.\n\n- `-path` `(string: \"\")` - Place where the secrets engine will be accessible. This\n  must be unique across all secrets engines. This defaults to the \"type\" of the\n  secrets engine.\n\n  !> **Case-sensitive:** The path where you enable secrets engines is case-sensitive. For\n  example, the KV secrets engine enabled at `kv\/` and `KV\/` are treated as two\n  distinct instances of KV secrets engine.\n\n- `-passthrough-request-headers` `(string: \"\")` - request header values that will\n  be sent to the secrets engine. Note that multiple keys may be\n  specified by providing this option multiple times, each time with 1 key.\n\n- `-allowed-response-headers` `(string: \"\")` - response header values that the secrets\n  engine will be allowed to set. Note that multiple keys may be\n  specified by providing this option multiple times, each time with 1 key.\n\n- `-allowed-managed-keys` `(string: \"\")` - Managed key name(s) that the mount\n  in question is allowed to access. Note that multiple keys may be specified\n  by providing this option multiple times, each time with 1 key.\n\n- `-delegated-auth-accessors` `(string: \"\")` - An authorized accessor the auth\n  backend can delegate authentication to. To allow multiple accessors, provide\n  the `delegated-auth-accessors` multiple times, each time with 1 accessor.\n\n- `-trim-request-trailing-slashes` `(bool: false)` - If true, requests to\n  this mount with trailing slashes will have those slashes trimmed.\n  Necessary for some standards-based APIs handled by Vault.\n\n- `-plugin-version` `(string: \"\")` - Configures the semantic version of the plugin\n  to use. 
If unspecified, implies the built-in or any matching unversioned plugin\n  that may have been registered.","site":"vault"}
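TTL flags such as `-max-lease-ttl=30m` and server defaults like `768h` are duration strings. A minimal sketch of converting single-unit values to seconds (an assumption for illustration: Vault's real duration format also accepts compound values like `1h30m` and sub-second units, which this toy parser does not handle):

```python
import re

UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600}

def duration_to_seconds(d: str) -> int:
    """Convert a single-unit duration string like '30m' or '768h' to seconds."""
    m = re.fullmatch(r"(\d+)([smh])", d)
    if m is None:
        raise ValueError(f"unsupported duration: {d!r}")
    return int(m.group(1)) * UNIT_SECONDS[m.group(2)]
```

For example, `duration_to_seconds("30m")` yields `1800`, matching the explicit 30-minute max TTL in the database example.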
{"questions":"vault The secrets tune command tunes the configuration options for the secrets secrets tune The secrets tune command tunes the configuration options for the secrets engine at the given PATH page title secrets tune Command layout docs","answers":"---\nlayout: docs\npage_title: secrets tune - Command\ndescription: |-\n  The \"secrets tune\" command tunes the configuration options for the secrets engine at the given PATH.\n---\n\n# secrets tune\n\nThe `secrets tune` command tunes the configuration options for the secrets\nengine at the given PATH. The argument corresponds to the PATH where the secrets\nengine is enabled, not the type.\n\n## Examples\n\nBefore tuning the secret mount, view the current configuration of the\nmount enabled at \"pki\/\":\n\n```shell-session\n$ vault read sys\/mounts\/pki\/tune\nKey                             Value\n---                             -----\ndefault_lease_ttl               12h\ndescription                     Example PKI mount\nforce_no_cache                  false\nmax_lease_ttl                   24h\n```\n\nTune the default lease, exclude `common_name` and `serial_number` from being HMAC'd in the audit log for the PKI secrets engine:\n\n```shell-session\n$ vault secrets tune -default-lease-ttl=18h -audit-non-hmac-request-keys=common_name -audit-non-hmac-response-keys=serial_number pki\/\nSuccess! 
Tuned the secrets engine at: pki\/\n\n$ vault read sys\/mounts\/pki\/tune\nKey                             Value\n---                             -----\naudit_non_hmac_request_keys     [common_name]\naudit_non_hmac_response_keys    [serial_number]\ndefault_lease_ttl               18h\ndescription                     Example PKI mount\nforce_no_cache                  false\nmax_lease_ttl                   24h\n```\n\nSpecify multiple audit non-hmac request keys:\n\n```shell-session\n$ vault secrets tune -audit-non-hmac-request-keys=common_name -audit-non-hmac-request-keys=ttl pki\/\n```\n\n## Usage\n\nThe following flags are available in addition to the [standard set of\nflags](\/vault\/docs\/commands) included on all commands.\n\n- `-allowed-response-headers` `(string: \"\")` - response header values that the\n  secrets engine will be allowed to set. Note that multiple keys may be\n  specified by providing this option multiple times, each time with 1 key.\n\n- `-audit-non-hmac-request-keys` `(string: \"\")` - Key that will not be HMAC'd\n  by audit devices in the request data object. Note that multiple keys may be\n  specified by providing this option multiple times, each time with 1 key.\n\n- `-audit-non-hmac-response-keys` `(string: \"\")` - Key that will not be HMAC'd\n  by audit devices in the response data object. Note that multiple keys may be\n  specified by providing this option multiple times, each time with 1 key.\n\n- `-default-lease-ttl` `(duration: \"\")` - The default lease TTL for this secrets\n  engine. If unspecified, this defaults to the Vault server's globally\n  configured default lease TTL, or a previously configured value for the secrets\n  engine. 
Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `-description` `(string: \"\")` - Specifies the description of the mount.\n  This overrides the current stored value, if any.\n\n- `-listing-visibility` `(string: \"\")` - The flag to toggle whether to show the\n  mount in the UI-specific listing endpoint. Valid values are `\"unauth\"` or `\"hidden\"`.\n  Passing empty string leaves the current setting unchanged.\n\n- `-max-lease-ttl` `(duration: \"\")` - The maximum lease TTL for this secrets\n  engine. If unspecified, this defaults to the Vault server's globally\n  configured [maximum lease TTL](\/vault\/docs\/configuration#max_lease_ttl), or a\n  previously configured value for the secrets engine. This value is allowed to\n  override the server's global max TTL; it can be longer or shorter.\n  Uses [duration format strings](\/vault\/docs\/concepts\/duration-format).\n\n- `-passthrough-request-headers` `(string: \"\")` - request header values that will\n  be sent to the secrets engine. Note that multiple keys may be\n  specified by providing this option multiple times, each time with 1 key.\n\n- `-allowed-managed-keys` `(string: \"\")` - Managed key name(s) that the mount\n  in question is allowed to access. Note that multiple keys may be specified\n  by providing this option multiple times, each time with 1 key.\n\n- `-delegated-auth-accessors` `(string: \"\")` - An authorized accessor the auth\n  backend can delegate authentication to. To allow multiple accessors, provide\n  the `delegated-auth-accessors` multiple times, each time with 1 accessor.\n\n- `-trim-request-trailing-slashes` `(bool: false)` - If true, requests to\n  this mount with trailing slashes will have those slashes trimmed. \n  Necessary for some standards based APIs handled by Vault.\n\n- `-plugin-version` `(string: \"\")` - Configures the semantic version of the plugin\n  to use. 
The new version will not start running until the mount is\n  [reloaded](\/vault\/docs\/commands\/plugin\/reload).","site":"vault"}
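Several of the flags above (e.g. `-audit-non-hmac-request-keys`) are repeatable, contributing one key per occurrence. A toy model of that accumulation using Python's argparse (the double-dash spelling here is hypothetical; the real Vault CLI takes single-dash flags):

```python
import argparse

parser = argparse.ArgumentParser(prog="secrets-tune-demo")
# action="append" collects one value per occurrence of the flag, mirroring
# how repeated -audit-non-hmac-request-keys flags accumulate into a list.
parser.add_argument("--audit-non-hmac-request-keys",
                    dest="request_keys", action="append", default=[])

args = parser.parse_args([
    "--audit-non-hmac-request-keys", "common_name",
    "--audit-non-hmac-request-keys", "ttl",
])
```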
{"questions":"vault vault agent Start an instance of Vault Agent Use vault agent to start an instance of Vault Agent layout docs page title agent Vault CLI","answers":"---\nlayout: docs\npage_title: \"agent - Vault CLI\"\ndescription: >-\n  Use vault agent to start an instance of Vault Agent.\n---\n\n# `vault agent`\n\nStart an instance of Vault Agent.\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault agent -config <config_file>\n\n$ vault agent [-help | -h]\n```\n\n<\/CodeBlockConfig>\n\n## Description\n\n`vault agent` starts an instance of Vault Agent, which automatically\nauthenticates and fetches secrets for client applications.\n\n<Tip title=\"Related API endpoints\">\n\n  **None**\n\n<\/Tip>\n\n## Command arguments\n\nNone.\n\n## Command options\n\nNone.\n\n## Command flags\n\n<br \/>\n\n@include 'cli\/agent\/flags\/config.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/agent\/flags\/exit-after-auth.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/shared\/flags\/log-file.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/shared\/flags\/log-format.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/shared\/flags\/log-level.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/shared\/flags\/log-rotate-bytes.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/shared\/flags\/log-rotate-duration.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/shared\/flags\/log-rotate-max-files.mdx'\n\n## Standard flags\n\n<br \/>\n\n@include 'cli\/standard-settings\/all-standard-flags-but-format.mdx'\n\n## Examples\n\nStart Vault Agent with a single configuration file:\n\n```shell-session\n$ vault agent -config=\/etc\/vault\/agent\/config.hcl\n```\n\nStart Vault Agent with two discrete configuration files:\n\n```shell-session\n$ vault agent                                   \\\n    -config=\/etc\/vault\/agent\/base-config.hcl    \\\n    -config=\/etc\/vault\/agent\/auto-auth-config.hcl\n```\n\nStart Vault Agent with a set of configuration files under the `` 
directory:\n\n```shell-session\n$ vault agent -config=\/etc\/vault\/agent\/config-files\/\n```\n","site":"vault"}
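The `-config` flags above point at HCL files. For orientation, a minimal illustrative Agent configuration; every concrete value here (server address, file paths, the AppRole method) is a placeholder assumption, not taken from the text:

```hcl
pid_file = "./vault-agent.pid"

vault {
  address = "https://vault.example.com:8200"   # placeholder server address
}

auto_auth {
  method "approle" {
    config = {
      role_id_file_path   = "/etc/vault/agent/role-id"     # placeholder
      secret_id_file_path = "/etc/vault/agent/secret-id"   # placeholder
    }
  }

  sink "file" {
    config = {
      path = "/etc/vault/agent/token"   # where the client token is written
    }
  }
}
```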
{"questions":"vault file from secrets plugin data agent generate config layout docs page title agent generate config Vault CLI Use vault agent generate config to generate a basic Vault Agent configuration","answers":"---\nlayout: docs\npage_title: \"agent generate-config - Vault CLI\"\ndescription: >-\n  Use vault agent generate-config to generate a basic Vault Agent configuration\n  file from secrets plugin data.\n---\n\n# `agent generate-config`\n\nUse secrets plugin data to generate a basic\n[configuration file](\/vault\/docs\/agent-and-proxy\/agent#configuration-file-options)\nfor running Vault Agent in [process supervisor mode](\/vault\/docs\/agent-and-proxy\/agent\/process-supervisor).\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault agent generate-config -type <config_file_type> [options] [<file_path>]\n```\n\n<\/CodeBlockConfig>\n\n## Description\n\n`agent generate-config` composes configuration details for Vault Agent\nbased on the configuration `type` and writes a local configuration file for\nrunning Vault Agent in process supervisor mode.\n\n<Tip title=\"Related API endpoints\">\n\n  - None\n\n<\/Tip>\n\n### Limitations and warnings\n\nLimitations:\n\n- Plugin support limited to KV plugins.\n- Configuration type limited to environment variable templates.\n\n<Warning title=\"Not appropriate for production\">\n\n  The file created by `agent generate-config` includes an `auto_auth` section\n  configured to use the `token_file` authentication method.\n\n  Token files are convenient for local testing, but **are not** appropriate for\n  production use. 
Refer to the full list of Vault Agent\n  [autoAuth methods](\/vault\/docs\/agent-and-proxy\/autoauth\/methods) for available\n  production-ready authentication methods.\n\n<\/Warning>\n\n## Arguments\n\n<br \/>\n\n@include 'cli\/agent\/args\/file_path.mdx'\n\n\n\n## Options\n\nNone.\n\n\n\n## Command Flags\n\n<br \/>\n\n@include 'cli\/agent\/flags\/exec.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/agent\/flags\/path.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/agent\/flags\/type.mdx'\n\n\n\n## Global flags\n\n<br \/>\n\n@include 'cli\/standard-settings\/all-standard-flags-but-format.mdx'\n\n\n\n## Examples\n\nGenerate an environment variable template configuration for the `foo` secrets\nplugin:\n\n```shell-session\n$ vault agent generate-config  \\\n    -type=\"env-template\"       \\\n    -exec=\".\/my-app arg1 arg2\" \\\n    -path=\"secret\/foo\"\n\nCommand output\n```\n\nGenerate an environment variable template configuration for more than one\nsecrets plugin:\n\n```shell-session\n$ vault agent generate-config -type=\"env-template\" \\\n    -exec=\".\/my-app arg1 arg2\" \\\n    -path=\"secret\/foo\" \\\n    -path=\"secret\/bar\" \\\n    -path=\"secret\/my-app\/*\"\n``","site":"vault","answers_cleaned":"    layout  docs page title   agent generate config   Vault CLI  description       Use vault agent generate config to generate a basic Vault Agent configuration   file from secrets plugin data          agent generate config   Use secrets plugin data to generate a basic  configuration file   vault docs agent and proxy agent configuration file options  for running Vault Agent in  process supervisor mode   vault docs agent and proxy agent process supervisor     CodeBlockConfig hideClipboard      shell session   vault agent generate config  type  config file type   options    file path          CodeBlockConfig      Description    agent generate config  composes configuration details for Vault Agent based on the configuration  type  and writes a local 
configuration file for running Vault agent in process supervisor mode    Tip title  Related API endpoints        None    Tip       Limitations and warnings  Limitations     Plugin support limited to KV plugins    Configuration type limited to environment variable templates    Warning title  Not appropriate for production      The file created by  agent generate config  includes an  auto auth  section   configured to use the  token file  authentication method     Token files are convenient for local testing  but   are not   appropriates for   production use  Refer to the full list of Vault Agent    autoAuth methods   vault docs agent and proxy autoauth methods  for available   production ready authentication methods     Warning      Arguments   br      include  cli agent args file path mdx        Options  None        Command Flags   br      include  cli agent flags exec mdx    br    hr    br      include  cli agent flags path mdx    br    hr    br      include  cli agent flags type mdx        Global flags   br      include  cli standard settings all standard flags but format mdx        Examples  Generate an environment variable template configuration for the  foo  secrets plugin      shell session   vault agent generate config         type  env template               exec    my app arg1 arg2         path  secret foo   Command output      Generate an environment variable template configuration for more than one secrets plugin      shell session   vault agent generate config  type  env template         exec    my app arg1 arg2         path  secret foo         path  secret bar         path  secret my app      "}
{"questions":"vault Create and enable a new audit device to capture log data from Vault Enable a new audit device page title audit enable Vault CLI layout docs audit enable","answers":"---\nlayout: docs\npage_title: \"audit enable - Vault CLI\"\ndescription: >-\n  Create and enable a new audit device to capture log data from Vault.\n---\n\n# `audit enable`\n\nEnable a new audit device.\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault audit enable [flags] <device_type> [options] [<config_argument=value>...]\n\n$ vault audit enable [-help | -h]\n```\n\n<\/CodeBlockConfig>\n\n## Description\n\n`audit enable` creates and enables an audit device at the given path or returns\nan error if an audit device already exists at the given path. The device\nconfiguration parameters depend on the audit device type.\n\n<Tip title=\"Related API endpoints\">\n\n  EnableAuditDevice - [`POST:\/sys\/audit\/{mount-path}`](\/vault\/api-docs\/system\/audit#enable-audit-device)\n\n<\/Tip>\n\n\n\n## Command arguments\n\n@include 'cli\/audit\/args\/device_type.mdx'\n\nEach audit device type also has a set of configuration arguments:\n\n<Tabs>\n\n<Tab heading=\"File\">\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault audit enable [flags] file [options] \\\n    file_path=<path\/to\/log\/file>            \\\n    [mode=<file_permissions>]\n```\n\n<\/CodeBlockConfig>\n\n<br \/>\n\n@include 'cli\/audit\/args\/file\/file_path.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/args\/file\/mode.mdx'\n\n<\/Tab>\n\n<Tab heading=\"Socket\">\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault audit enable [flags] socket [options] \\\n    [address=<server_address>]                \\\n    [socket_type=<protocol>]                  \\\n    [write_timeout=<wait_time>]\n```\n\n<\/CodeBlockConfig>\n\n<br \/>\n\n@include 'cli\/audit\/args\/socket\/address.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/args\/socket\/socket_type.mdx'\n\n<br \/><hr \/><br 
\/>\n\n@include 'cli\/audit\/args\/socket\/write_timeout.mdx'\n\n<\/Tab>\n\n<Tab heading=\"Syslog\">\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n$ vault audit enable [flags] syslog [options] \\\n    [facility=<process_entry_source>]         \\\n    [tag=<program_entry_source>]\n```\n\n<\/CodeBlockConfig>\n\n<br \/>\n\n@include 'cli\/audit\/args\/syslog\/facility.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/args\/syslog\/tag.mdx'\n\n<\/Tab>\n\n<\/Tabs>\n\n## Command options\n\n<br \/>\n\n@include 'cli\/audit\/options\/elide_list_responses.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/options\/exclude.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/options\/fallback.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/options\/filter.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/options\/format.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/options\/hmac_accessor.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/options\/log_raw.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/options\/prefix.mdx'\n\n## Command flags\n\n<br \/>\n\n@include 'cli\/audit\/flags\/description.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/flags\/local.mdx'\n\n<br \/><hr \/><br \/>\n\n@include 'cli\/audit\/flags\/path.mdx'\n\n## Standard flags\n\n<br \/>\n\n@include 'cli\/standard-settings\/all-standard-flags-but-format.mdx'\n\n## Examples\n\nEnable a `file` type audit device at the default path, `file\/`:\n\n```shell-session\n$ vault audit enable file file_path=\/tmp\/my-file.txt\nSuccess! Enabled the file audit device at: file\/\n```\n\nEnable a `file` type audit device at the path, `audit\/file`:\n\n```shell-session\n$ vault audit enable -path=audit\/file file file_path=\/tmp\/my-file.txt\nSuccess! 
Enabled the file audit device at: audit\/file\/\n```\n","site":"vault","answers_cleaned":"    layout  docs page title   audit enable   Vault CLI  description       Create and enable a new audit device to capture log data from Vault          audit enable   Enable a new audit device    CodeBlockConfig hideClipboard      shell session   vault audit enable  flags   device type   options    config argument value         vault audit enable   help    h         CodeBlockConfig      Description   audit enable  creates and enables an audit device at the given path or returns an error if an audit device already exists at the given path  The device configuration parameters depend on the audit device type    Tip title  Related API endpoints      EnableAuditDevice     POST  sys audit  mount path     vault api docs system audit enable audit device     Tip        Command arguments   include  cli audit args device type mdx   Each audit device type also has a set of configuration arguments    Tabs    Tab heading  File     CodeBlockConfig hideClipboard      shell session   vault audit enable  flags  file  options        file path  path to log file                    mode  file permissions          CodeBlockConfig    br      include  cli audit args file file path mdx    br    hr    br      include  cli audit args file mode mdx     Tab    Tab heading  Socket     CodeBlockConfig hideClipboard      shell session   vault audit enable  flags  socket  options         address  server address                         socket type  protocol                           write timeout  wait time          CodeBlockConfig    br      include  cli audit args socket address mdx    br    hr    br      include  cli audit args socket socket type mdx    br    hr    br      include  cli audit args socket write timeout mdx     Tab    Tab heading  Syslog     CodeBlockConfig hideClipboard      shell session   vault audit enable  flags  syslog  options         facility  process entry source                  tag  
program entry source          CodeBlockConfig    br      include  cli audit args syslog facility mdx    br    hr    br      include  cli audit args syslog tag mdx     Tab     Tabs      Command options   br      include  cli audit options elide list responses mdx    br    hr    br      include  cli audit options exclude mdx    br    hr    br      include  cli audit options fallback mdx    br    hr    br      include  cli audit options filter mdx    br    hr    br      include  cli audit options format mdx    br    hr    br      include  cli audit options hmac accessor mdx    br    hr    br      include  cli audit options log raw mdx    br    hr    br      include  cli audit options prefix mdx      Command flags   br      include  cli audit flags description mdx    br    hr    br      include  cli audit flags local mdx    br    hr    br      include  cli audit flags path mdx      Standard flags   br      include  cli standard settings all standard flags but format mdx      Examples  Enable a  file  type audit device at the default path   file        shell session   vault audit enable file file path  tmp my file txt Success  Enabled the file audit device at  file       Enable a  file  type audit device at the path   audit file       shell session   vault audit enable  path audit file file file path  tmp my file txt Success  Enabled the file audit device at  audit file      "}
{"questions":"vault facilitate level with no decryption involved page title operator migrate Command layout docs migrating Vault between configurations It operates directly at the storage The operator migrate command copies data between storage backends to","answers":"---\nlayout: docs\npage_title: operator migrate - Command\ndescription: >-\n  The \"operator migrate\" command copies data between storage backends to\n  facilitate migrating Vault between configurations. It operates directly at\n  the storage level, with no decryption involved.\n---\n\n# operator migrate\n\nThe `operator migrate` command copies data between storage backends to facilitate\nmigrating Vault between configurations. It operates directly at the storage\nlevel, with no decryption involved. Keys in the destination storage backend will\nbe overwritten, and the destination should _not_ be initialized prior to the\nmigrate operation. The source data is not modified, with the exception of a small lock\nkey added during migration.\n\nThis is intended to be an offline operation to ensure data consistency, and Vault\nwill not allow starting the server if a migration is in progress.\n\n## Examples\n\nMigrate all keys:\n\n```shell-session\n$ vault operator migrate -config migrate.hcl\n\n2018-09-20T14:23:23.656-0700 [INFO ] copied key: data\/core\/seal-config\n2018-09-20T14:23:23.657-0700 [INFO ] copied key: data\/core\/wrapping\/jwtkey\n2018-09-20T14:23:23.658-0700 [INFO ] copied key: data\/logical\/fd1bed89-ffc4-d631-00dd-0696c9f930c6\/31c8e6d9-2a17-d98f-bdf1-aa868afa1291\/archive\/metadata\n2018-09-20T14:23:23.660-0700 [INFO ] copied key: data\/logical\/fd1bed89-ffc4-d631-00dd-0696c9f930c6\/31c8e6d9-2a17-d98f-bdf1-aa868afa1291\/metadata\/5kKFZ4YnzgNfy9UcWOzxxzOMpqlp61rYuq6laqpLQDnB3RawKpqi7yBTrawj1P\n...\n```\n\nMigration is done in a consistent, sorted order. If the migration is halted or\nexits before completion (e.g. 
due to a connection error with a storage backend),\nit may be resumed from an arbitrary key prefix:\n\n```shell-session\n$ vault operator migrate -config migrate.hcl -start \"data\/logical\/fd\"\n```\n\n## Configuration\n\nThe `operator migrate` command uses a dedicated configuration file to specify the source\nand destination storage backends. The format of the storage stanzas is identical\nto that used to [configure Vault](\/vault\/docs\/configuration\/storage),\nwith the only difference being that two stanzas are required: `storage_source` and `storage_destination`.\n\n```hcl\nstorage_source \"mysql\" {\n  username = \"user1234\"\n  password = \"secret123!\"\n  database = \"vault\"\n}\n\nstorage_destination \"consul\" {\n  address = \"127.0.0.1:8500\"\n  path    = \"vault\/\"\n}\n```\n\n## Migrating to integrated raft storage\n\n### Example configuration\n\nThe below configuration will migrate away from Consul storage to integrated\nraft storage. The raft data will be stored on the local filesystem in the\ndefined `path`. `node_id` can optionally be set to identify this node.\n[cluster_addr](\/vault\/docs\/configuration#cluster_addr) must be set to the\ncluster hostname of this node. For more configuration options see the [raft\nstorage configuration documentation](\/vault\/docs\/configuration\/storage\/raft).\n\nIf the original configuration uses \"raft\" for `ha_storage` a different\n`path` needs to be declared for the path in `storage_destination` and the new\nconfiguration for the node post-migration.\n\n```hcl\nstorage_source \"consul\" {\n  address = \"127.0.0.1:8500\"\n  path    = \"vault\"\n}\n\nstorage_destination \"raft\" {\n  path = \"\/path\/to\/raft\/data\"\n  node_id = \"raft_node_1\"\n}\ncluster_addr = \"http:\/\/127.0.0.1:8201\"\n```\n\n### Run the migration\n\nVault will need to be offline during the migration process. 
First, stop Vault.\nThen, run the migration on the server you wish to become the new Vault node.\n\n```shell-session\n$ vault operator migrate -config migrate.hcl\n\n2018-09-20T14:23:23.656-0700 [INFO ] copied key: data\/core\/seal-config\n2018-09-20T14:23:23.657-0700 [INFO ] copied key: data\/core\/wrapping\/jwtkey\n2018-09-20T14:23:23.658-0700 [INFO ] copied key: data\/logical\/fd1bed89-ffc4-d631-00dd-0696c9f930c6\/31c8e6d9-2a17-d98f-bdf1-aa868afa1291\/archive\/metadata\n2018-09-20T14:23:23.660-0700 [INFO ] copied key: data\/logical\/fd1bed89-ffc4-d631-00dd-0696c9f930c6\/31c8e6d9-2a17-d98f-bdf1-aa868afa1291\/metadata\/5kKFZ4YnzgNfy9UcWOzxxzOMpqlp61rYuq6laqpLQDnB3RawKpqi7yBTrawj1P\n...\n```\n\nAfter migration has completed, the data is stored on the local file system. To\nuse the new storage backend with Vault, update Vault's configuration file as\ndescribed in the [raft storage configuration\ndocumentation](\/vault\/docs\/configuration\/storage\/raft). Then start and unseal the\nvault server.\n\n### Join additional nodes\n\nAfter migration the raft cluster will only have a single node. Additional peers\nshould be joined to this node.\n\nIf the cluster was previously HA-enabled using \"raft\" as the `ha_storage`, the\nnodes will have to re-join the migrated node before unsealing.\n\n## Usage\n\nThe following flags are available for the `operator migrate` command.\n\n- `-config` `(string: <required>)` - Path to the migration configuration file.\n\n- `-start` `(string: \"\")` - Migration starting key prefix. Only keys at or after this value will be copied.\n\n- `-reset` - Reset the migration lock. A lock file is added during migration to prevent\n  starting the Vault server or another migration. The `-reset` option can be used to\n  remove a stale lock file if present.\n\n- `-max-parallel` `(int: 10)` - Allows the operator to specify the maximum number of lightweight threads (goroutines)\n  which may be used to migrate data in parallel. 
This can potentially speed up migration on slower backends at\n  the cost of more resources (e.g. CPU, memory). Permitted values range from `1` (synchronous) to the maximum value\n  for an `integer`. If not supplied, a default of `10` parallel goroutines will be used.\n\n  ~> Note: The maximum number of concurrent requests handled by a storage backend is ultimately governed by the\n  storage backend configuration setting, which enforces a maximum number of concurrent requests (`max_parallel`).","site":"vault","answers_cleaned":"    layout  docs page title  operator migrate   Command description       The  operator migrate  command copies data between storage backends to   facilitate    migrating Vault between configurations  It operates directly at the storage    level  with no decryption involved         operator migrate  The  operator migrate  command copies data between storage backends to facilitate migrating Vault between configurations  It operates directly at the storage level  with no decryption involved  Keys in the destination storage backend will be overwritten  and the destination should  not  be initialized prior to the migrate operation  The source data is not modified  with the exception of a small lock key added during migration   This is intended to be an offline operation to ensure data consistency  and Vault will not allow starting the server if a migration is in progress      Examples  Migrate all keys      shell session   vault operator migrate  config migrate hcl  2018 09 20T14 23 23 656 0700  INFO   copied key  data core seal config 2018 09 20T14 23 23 657 0700  INFO   copied key  data core wrapping jwtkey 2018 09 20T14 23 23 658 0700  INFO   copied key  data logical fd1bed89 ffc4 d631 00dd 0696c9f930c6 31c8e6d9 2a17 d98f bdf1 aa868afa1291 archive metadata 2018 09 20T14 23 23 660 0700  INFO   copied key  data logical fd1bed89 ffc4 d631 00dd 0696c9f930c6 31c8e6d9 2a17 d98f bdf1 aa868afa1291 metadata 
5kKFZ4YnzgNfy9UcWOzxxzOMpqlp61rYuq6laqpLQDnB3RawKpqi7yBTrawj1P          Migration is done in a consistent  sorted order  If the migration is halted or exits before completion  e g  due to a connection error with a storage backend   it may be resumed from an arbitrary key prefix      shell session   vault operator migrate  config migrate hcl  start  data logical fd          Configuration  The  operator migrate  command uses a dedicated configuration file to specify the source and destination storage backends  The format of the storage stanzas is identical to that used to  configure Vault   vault docs configuration storage   with the only difference being that two stanzas are required   storage source  and  storage destination       hcl storage source  mysql      username    user1234    password    secret123     database    vault     storage destination  consul      address    127 0 0 1 8500    path       vault             Migrating to integrated raft storage      Example configuration  The below configuration will migrate away from Consul storage to integrated raft storage  The raft data will be stored on the local filesystem in the defined  path    node id  can optionally be set to identify this node   cluster addr   vault docs configuration cluster addr  must be set to the cluster hostname of this node  For more configuration options see the  raft storage configuration documentation   vault docs configuration storage raft    If the original configuration uses  raft  for  ha storage  a different  path  needs to be declared for the path in  storage destination  and the new configuration for the node post migration      hcl storage source  consul      address    127 0 0 1 8500    path       vault     storage destination  raft      path     path to raft data    node id    raft node 1    cluster addr    http   127 0 0 1 8201           Run the migration  Vault will need to be offline during the migration process  First  stop Vault  Then  run the migration on the server 
you wish to become a the new Vault node      shell session   vault operator migrate  config migrate hcl  2018 09 20T14 23 23 656 0700  INFO   copied key  data core seal config 2018 09 20T14 23 23 657 0700  INFO   copied key  data core wrapping jwtkey 2018 09 20T14 23 23 658 0700  INFO   copied key  data logical fd1bed89 ffc4 d631 00dd 0696c9f930c6 31c8e6d9 2a17 d98f bdf1 aa868afa1291 archive metadata 2018 09 20T14 23 23 660 0700  INFO   copied key  data logical fd1bed89 ffc4 d631 00dd 0696c9f930c6 31c8e6d9 2a17 d98f bdf1 aa868afa1291 metadata 5kKFZ4YnzgNfy9UcWOzxxzOMpqlp61rYuq6laqpLQDnB3RawKpqi7yBTrawj1P          After migration has completed  the data is stored on the local file system  To use the new storage backend with Vault  update Vault s configuration file as described in the  raft storage configuration documentation   vault docs configuration storage raft   Then start and unseal the vault server       Join additional nodes  After migration the raft cluster will only have a single node  Additional peers should be joined to this node   If the cluster was previously HA enabled using  raft  as the  ha storage   the nodes will have to re join to the migrated node before unsealing      Usage  The following flags are available for the  operator migrate  command       config    string   required      Path to the migration configuration file       start    string         Migration starting key prefix  Only keys at or after this value will be copied       reset    Reset the migration lock  A lock file is added during migration to prevent   starting the Vault server or another migration  The   reset  option can be used to   remove a stale lock file if present       max parallel   int  10    Allows the operator to specify the maximum number of lightweight threads  goroutines    which may be used to migrate data in parallel  This can potentially speed up migration on slower backends at   the cost of more resources  e g  CPU  memory   Permitted values range from  1   
synchronous  to the maximum value   for an  integer   If not supplied  a default of  10  parallel goroutines will be used        Note  The maximum number of concurrent requests handled by a storage backend is ultimately governed by the   storage backend configuration setting  which enforces a maximum number of concurrent requests   max parallel   "}
{"questions":"vault The operator import command imports secrets from external systems in to Vault layout docs page title operator import Command operator import","answers":"---\nlayout: docs\npage_title: operator import - Command\ndescription: >-\n  The \"operator import\" command imports secrets from external systems\n  into Vault.\n---\n\n# operator import\n\n@include 'alerts\/enterprise-only.mdx'\n\n@include 'alerts\/alpha.mdx'\n\nThe `operator import` command imports secrets from external systems into Vault.\nSecrets with the same name at the same storage path will be overwritten upon import.\n\n<Note title=\"Imports can be long-running processes\">\n\nYou can write import plans that read from as many sources as you want. The\namount of data migrated from each source depends on the filters applied and the\ndataset available. Be mindful of the time needed to read from each source,\napply any filters, and store the data in Vault.\n\n<\/Note>\n\n## Examples\n\nRead the config file `import.hcl` to generate a new import plan:\n\n```shell-session\n$ vault operator import -config import.hcl plan\n```\n\nOutput:\n\n<CodeBlockConfig hideClipboard>\n\n\t-----------\n\tImport plan\n\t-----------\n\tThe following namespaces are missing:\n\t* ns-1\/\n\n\tThe following mounts are missing:\n\t* ns-1\/mount-1\n\n\tSecrets to be imported to the destination \"my-dest-1\":\n\t* secret-1\n\t* secret-2\n\n<\/CodeBlockConfig>\n\n## Configuration\n\nThe `operator import` command uses a dedicated configuration file to specify the source,\ndestination, and mapping rules. 
To learn more about these types and secrets importing in \ngeneral, refer to the [Secrets Import documentation](\/vault\/docs\/import).\n\n```hcl\nsource_gcp {\n  name        = \"my-gcp-source-1\"\n  credentials = \"@\/path\/to\/service-account-key.json\"\n}\n\ndestination_vault {\n  name      = \"my-dest-1\"\n  address   = \"http:\/\/127.0.0.1:8200\/\"\n  token     = \"root\"\n  namespace = \"ns-1\"\n  mount     = \"mount-1\"\n}\n\nmapping_passthrough {\n  name        = \"my-map-1\"\n  source      = \"my-gcp-1\"\n  destination = \"my-dest-1\"\n  priority    = 1\n}\n```\n\n## Usage\n\n### Arguments\n\n- `plan` - Executes a read-only operation to let operators preview the secrets to import based on the configuration file.\n\n- `apply` - Executes the import operations to read the specified secrets from the source and write them into Vault.\n  Apply first executes a plan, then asks the user to approve the results before performing the actual import.\n\n### Flags\n\nThe `operator import` command accepts the following flags:\n\n- `-config` `(string: \"import.hcl\")` - Path to the import configuration HCL file. The default path is `import.hcl`.\n\n- `-auto-approve` `(bool: <false>)` - Automatically responds \"yes\" to all user-input prompts for the `apply` command.\n\n- `-auto-create` `(bool: <false>)` - Automatically creates any missing namespaces and KVv2 mounts when\n  running the `apply` command.\n\n- `-log-level` ((#\\_log_level)) `(string: \"info\")` - Log verbosity level. Supported values (in\n  order of descending detail) are `trace`, `debug`, `info`, `warn`, and `error`. 
You can also set log-level with the `VAULT_LOG_LEVEL` environment variable.","site":"vault","answers_cleaned":"    layout  docs page title  operator import   Command description       The  operator import  command imports secrets from external systems   in to Vault         operator import   include  alerts enterprise only mdx    include  alerts alpha mdx   The  operator import  command imports secrets from external systems in to Vault  Secrets with the same name at the same storage path will be overwritten upon import    Note title  Imports can be long running processes    You can write import plans that read from as many sources as you want  The amount of data migrated from each source depends on the filters applied and the dataset available  Be mindful of the time needed to read from each source  apply any filters  and store the data in Vault     Note      Examples  Read the config file  import hcl  to generate a new import plan      shell session   vault operator import  config import hcl plan      Output    CodeBlockConfig hideClipboard                 Import plan               The following namespaces are missing     ns 1    The following mounts are missing     ns 1 mount 1   Secrets to be imported to the destination  my dest 1      secret 1    secret 2    CodeBlockConfig      Configuration  The  operator import  command uses a dedicated configuration file to specify the source  destination  and mapping rules  To learn more about these types and secrets importing in  general  refer to the  Secrets Import documentation   vault docs import       hcl source gcp     name           my gcp source 1    credentials      path to service account key json     destination vault     name         my dest 1    address      http   127 0 0 1 8200     token        root    namespace    ns 1    mount        mount 1     mapping passthrough     name           my map 1    source         my gcp 1    destination    my dest 1    priority      1           Usage      Arguments     plan   
 Executes a read only operation to let operators preview the secrets to import based on the configuration file      apply    Executes the import operations to read the specified secrets from the source and write them into Vault    Apply first executes a plan  then asks the user to approve the results before performing the actual import       Flags  The  operator import  command accepts the following flags       config    string   import hcl      Path to the import configuration HCL file  The default path is  import hcl        auto approve    bool   false      Automatically responds  yes  to all user input prompts for the  apply  command       auto create    bool   false      Automatically creates any missing namespaces and KVv2 mounts when   running the  apply  command       log level       log level     string   info      Log verbosity level  Supported values  in   order of descending detail  are  trace    debug    info    warn   and  error   You can also set log level with the  VAULT LOG LEVEL  environment variable "}
{"questions":"vault process by which Vault s storage backend is prepared to receive data Since The operator init command initializes a Vault server Initialization is the initialize one Vault to initialize the storage backend Vault servers share the same storage backend in HA mode you only need to layout docs page title operator init Command","answers":"---\nlayout: docs\npage_title: operator init - Command\ndescription: |-\n  The \"operator init\" command initializes a Vault server. Initialization is the\n  process by which Vault's storage backend is prepared to receive data. Since\n  Vault servers share the same storage backend in HA mode, you only need to\n  initialize one Vault to initialize the storage backend.\n---\n\n# operator init\n\nThe `operator init` command initializes a Vault server. Initialization is the\nprocess by which Vault's storage backend is prepared to receive data. Since\nVault servers share the same storage backend in HA mode, you only need to\ninitialize one Vault to initialize the storage backend.\nThis command cannot be run against an already-initialized Vault cluster.\n\nDuring initialization, Vault generates a root key, which is stored in the storage backend alongside all other Vault data. The root key itself is encrypted and requires an _unseal key_ to decrypt it.\n\nThe default Vault configuration uses [Shamir's Secret Sharing](https:\/\/en.wikipedia.org\/wiki\/Shamir%27s_Secret_Sharing) to split the root key into a configured number of shards (referred to as key shares, or unseal keys). 
A certain threshold of shards is required to reconstruct the root key, which is then used to decrypt Vault's encryption key.\n\nRefer to the [Seal\/Unseal](\/vault\/docs\/concepts\/seal#seal-unseal) documentation for further details.\n\n## Examples\n\nStart initialization with the default options:\n\n```shell-session\n$ vault operator init\n```\n\nInitialize, but encrypt the unseal keys with pgp keys:\n\n```shell-session\n$ vault operator init \\\n    -key-shares=3 \\\n    -key-threshold=2 \\\n    -pgp-keys=\"keybase:hashicorp,keybase:jefferai,keybase:sethvargo\"\n```\n\nInitialize Auto Unseal with a non-default threshold and number of recovery keys, and encrypt the recovery keys with pgp keys:\n\n```shell-session\n$ vault operator init \\\n    -recovery-shares=7 \\\n    -recovery-threshold=4 \\\n    -recovery-pgp-keys=\"keybase:jeff,keybase:chris,keybase:brian,keybase:calvin,keybase:matthew,keybase:vishal,keybase:nick\"\n```\n\nEncrypt the initial root token using a pgp key:\n\n```shell-session\n$ vault operator init -root-token-pgp-key=\"keybase:hashicorp\"\n```\n\n## Usage\n\nThe following flags are available in addition to the [standard set of\nflags](\/vault\/docs\/commands) included on all commands.\n\n### Output options\n\n- `-format` `(string: \"\")` - Print the output in the given format. Valid formats\n  are \"table\", \"json\", or \"yaml\". The default is table. This can also be\n  specified via the `VAULT_FORMAT` environment variable.\n\n### Common options\n\n- `-key-shares` `(int: 5)` - Number of key shares to split the generated root\n  key into. This is the number of \"unseal keys\" to generate. This is aliased as\n  `-n`.\n\n- `-key-threshold` `(int: 3)` - Number of key shares required to reconstruct the\n  root key. This must be less than or equal to -key-shares. 
This is aliased as\n  `-t`.\n\n- `-pgp-keys` `(string: \"...\")` - Comma-separated list of paths to files on disk\n  containing public PGP keys OR a comma-separated list of Keybase usernames\n  using the format `keybase:<username>`. When supplied, the generated unseal\n  keys will be encrypted and base64-encoded in the order specified in this list.\n  The number of entries must match -key-shares, unless -stored-shares is used.\n\n- `-root-token-pgp-key` `(string: \"\")` - Path to a file on disk containing a\n  binary or base64-encoded public PGP key. This can also be specified as a\n  Keybase username using the format `keybase:<username>`. When supplied, the\n  generated root token will be encrypted and base64-encoded with the given\n  public key.\n\n- `-status` `(bool: false)` - Print the current initialization status. An exit\n  code of 0 means the Vault is already initialized. An exit code of 1 means an\n  error occurred. An exit code of 2 means the Vault is not initialized.\n\n### Consul options\n\n- `-consul-auto` `(bool: false)` - Perform automatic service discovery using\n  Consul in HA mode. When all nodes in a Vault HA cluster are registered with\n  Consul, enabling this option will trigger automatic service discovery based on\n  the provided -consul-service value. When Consul is Vault's HA backend, this\n  functionality is automatically enabled. Ensure the proper Consul environment\n  variables are set (CONSUL_HTTP_ADDR, etc). When only one Vault server is\n  discovered, it will be initialized automatically. When more than one Vault\n  server is discovered, they will each be output for selection. The default is\n  false.\n\n- `-consul-service` `(string: \"vault\")` - Name of the service in Consul under\n  which the Vault servers are registered.\n\n### HSM and KMS options\n\n- `-recovery-pgp-keys` `(string: \"...\")` - Behaves like `-pgp-keys`, but for the\n  recovery key shares. 
This is only available with [Auto Unseal](\/vault\/docs\/concepts\/seal#auto-unseal) seals (HSM, KMS and Transit seals).\n\n- `-recovery-shares` `(int: 5)` - Number of key shares to split the recovery key\n  into. This is only available with [Auto Unseal](\/vault\/docs\/concepts\/seal#auto-unseal) seals (HSM, KMS and Transit seals).\n\n- `-recovery-threshold` `(int: 3)` - Number of key shares required to\n  reconstruct the recovery key. This is only available with [Auto Unseal](\/vault\/docs\/concepts\/seal#auto-unseal) seals (HSM, KMS and Transit seals).\n\n- `-stored-shares` `(int: 0)` - Number of unseal keys to store on an HSM. This\n  must be equal to `-key-shares`.\n\n-> **Recovery keys:** Refer to the\n [Seal\/Unseal](\/vault\/docs\/concepts\/seal#recovery-key) documentation to learn more\n about recovery keys.","site":"vault"}
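The `-status` exit codes documented above lend themselves to first-boot automation. A minimal sketch, assuming a `vault` binary on `PATH`; `init_action` and `ensure_initialized` are hypothetical helper names, and the decision logic is kept in a pure function so it can be exercised without a server:

```python
import subprocess

# Exit codes of `vault operator init -status`, per the flag description above:
# 0 = already initialized, 1 = error, 2 = not initialized.
def init_action(exit_code):
    # Pure decision helper: maps an exit code to what the script should do.
    return {0: "skip", 2: "init"}.get(exit_code, "fail")

def ensure_initialized():
    # Assumes `vault` is installed and VAULT_ADDR points at the server.
    code = subprocess.run(["vault", "operator", "init", "-status"]).returncode
    action = init_action(code)
    if action == "init":
        subprocess.run(["vault", "operator", "init",
                        "-key-shares=5", "-key-threshold=3"], check=True)
    elif action == "fail":
        raise RuntimeError("could not determine initialization status")
```

Capturing and storing the unseal keys emitted by `init` securely is left out of this sketch.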
{"questions":"vault operator raft The operator raft command is used to interact with the integrated Raft storage backend layout docs This command groups subcommands for operators to manage the Integrated Storage Raft backend page title operator raft Command","answers":"---\nlayout: docs\npage_title: operator raft - Command\ndescription: >-\n  The \"operator raft\" command is used to interact with the integrated Raft storage backend.\n---\n\n# operator raft\n\nThis command groups subcommands for operators to manage the Integrated Storage Raft backend.\n\n```text\nUsage: vault operator raft <subcommand> [options] [args]\n\n This command groups subcommands for operators interacting with the Vault\n integrated Raft storage backend. Most users will not need to interact with these\n commands. Here are a few examples of the Raft operator commands:\n\nSubcommands:\n    join           Joins a node to the Raft cluster\n    list-peers     Returns the Raft peer set\n    remove-peer    Removes a node from the Raft cluster\n    snapshot       Restores and saves snapshots from the Raft cluster\n```\n\n## join\n\nThis command is used to join a new node as a peer to the Raft cluster. In order\nto join, there must be at least one existing member of the cluster. If Shamir\nseal is in use, then unseal keys are to be supplied before or after the\njoin process, depending on whether it's being used exclusively for HA.\n\nIf raft is used for `storage`, the node must be joined before unsealing and the\n`leader-api-addr` argument must be provided. 
If raft is used for `ha_storage`,\nthe node must be first unsealed before joining and the `leader-api-addr` must\n_not_ be provided.\n\n```text\nUsage: vault operator raft join [options] <leader-api-addr>\n\n  Join the current node as a peer to the Raft cluster by providing the address\n  of the Raft leader node.\n\n\t  $ vault operator raft join \"http:\/\/127.0.0.2:8200\"\n```\n\nThe `join` command also allows operators to specify cloud auto-join configuration\ninstead of a static IP address or hostname. When provided, Vault will attempt to\nautomatically discover and resolve potential leader addresses based on the provided\nauto-join configuration.\n\nVault uses go-discover to support the auto-join functionality. Please see the\ngo-discover\n[README](https:\/\/github.com\/hashicorp\/go-discover\/blob\/master\/README.md) for\ndetails on the format.\n\nBy default, Vault will attempt to reach discovered peers using HTTPS and port 8200.\nOperators may override these through the `--auto-join-scheme` and `--auto-join-port`\nCLI flags respectively.\n\n```text\nUsage: vault operator raft join [options] <auto-join-configuration>\n  Join the current node as a peer to the Raft cluster by providing cloud auto-join\n  metadata configuration.\n    $ vault operator raft join \"provider=aws region=eu-west-1 ...\"\n```\n\n### Parameters\n\nThe following flags are available for the `operator raft join` command.\n\n- `-leader-ca-cert` `(string: \"\")` - CA cert to communicate with Raft leader.\n\n- `-leader-client-cert` `(string: \"\")` - Client cert to authenticate to Raft leader.\n\n- `-leader-client-key` `(string: \"\")` - Client key to authenticate to Raft leader.\n\n- `-non-voter` `(bool: false) (enterprise)` - This flag is used to make the\n  server not participate in the Raft quorum, and have it only receive the data\n  replication stream. This can be used to add read scalability to a cluster in\n  cases where a high volume of reads to servers are needed. 
The default is false.\n  See [`retry_join_as_non_voter`](\/vault\/docs\/configuration\/storage\/raft#retry_join_as_non_voter)\n  for the equivalent config option when using `retry_join` stanzas instead.\n\n- `-retry` `(bool: false)` - Continuously retry joining the Raft cluster upon\n  failures. The default is false.\n\n~> **Note:** Please be aware that the content (not the path to the file) of the certificate or key is expected for these parameters: `-leader-ca-cert`, `-leader-client-cert`, `-leader-client-key`.\n\n## list-peers\n\nThis command is used to list the full set of peers in the Raft cluster.\n\n```text\nUsage: vault operator raft list-peers\n\n  Provides the details of all the peers in the Raft cluster.\n\n\t  $ vault operator raft list-peers\n```\n\n### Example output\n\n```json\n{\n ...\n  \"data\": {\n    \"config\": {\n      \"index\": 62,\n      \"servers\": [\n        {\n          \"address\": \"127.0.0.2:8201\",\n          \"leader\": true,\n          \"node_id\": \"node1\",\n          \"protocol_version\": \"3\",\n          \"voter\": true\n        },\n        {\n          \"address\": \"127.0.0.4:8201\",\n          \"leader\": false,\n          \"node_id\": \"node3\",\n          \"protocol_version\": \"3\",\n          \"voter\": true\n        }\n      ]\n    }\n  }\n}\n```\n\nUse the output of `list-peers` to ensure that your cluster is in an expected state.\nIf you've removed a server using `remove-peer`, the server should no longer be\nlisted in the `list-peers` output. If you've added a server using `join` or\nthrough `retry_join`, check the `list-peers` output to see that it has been added\nto the cluster and (if the node has not been added as a non-voter)\nit has been promoted to a voter.\n\n## remove-peer\n\nThis command is used to remove a node from being a peer to the Raft cluster. 
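Before removing a peer, it is worth confirming cluster state from the `list-peers` JSON shown above. A sketch, assuming `-format=json` output shaped like the example; the payload here is abbreviated and its values are illustrative:

```python
import json

# Abbreviated sample shaped like `vault operator raft list-peers -format=json`
# output (see the example output above); values are illustrative.
payload = json.loads("""
{"data": {"config": {"index": 62, "servers": [
  {"address": "127.0.0.2:8201", "leader": true,  "node_id": "node1",
   "protocol_version": "3", "voter": true},
  {"address": "127.0.0.4:8201", "leader": false, "node_id": "node3",
   "protocol_version": "3", "voter": true}
]}}}
""")

servers = payload["data"]["config"]["servers"]
leader = next(s["node_id"] for s in servers if s["leader"])
voters = [s["node_id"] for s in servers if s["voter"]]
# A Raft cluster with v voters tolerates (v - 1) // 2 gradual failures,
# so removing a peer from a small cluster can drop tolerance to zero.
failure_tolerance = (len(voters) - 1) // 2
```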
In\ncertain cases where a peer may be left behind in the Raft configuration even\nthough the server is no longer present and known to the cluster, this command\ncan be used to remove the failed server so that it no longer affects the Raft\nquorum.\n\n```text\nUsage: vault operator raft remove-peer <server_id>\n\n  Removes a node from the Raft cluster.\n\n\t  $ vault operator raft remove-peer node1\n```\n\n<Note>\n  Once a node is removed, its Raft data needs to be deleted before it may be joined back into an existing cluster. This requires shutting down the Vault process, deleting the data, then restarting the Vault process on the removed node.\n<\/Note>\n\n## snapshot\n\nThis command groups subcommands for operators interacting with the snapshot\nfunctionality of the integrated Raft storage backend. There are 3 subcommands\nsupported: `save`, `restore`, and `inspect`.\n\n```text\nUsage: vault operator raft snapshot <subcommand> [options] [args]\n\n  This command groups subcommands for operators interacting with the snapshot\n  functionality of the integrated Raft storage backend.\n\nSubcommands:\n    restore    Installs the provided snapshot, returning the cluster to the state defined in it\n    save       Saves a snapshot of the current state of the Raft cluster into a file\n```\n\n### snapshot save\n\nTakes a snapshot of the Vault data. 
The snapshot can be used to restore Vault to\nthe point in time when a snapshot was taken.\n\n```text\nUsage: vault operator raft snapshot save <snapshot_file>\n\n  Saves a snapshot of the current state of the Raft cluster into a file.\n\n\t  $ vault operator raft snapshot save raft.snap\n```\n\n~> **Note:** Snapshot is not supported when Raft is used only for `ha_storage`.\n\n### snapshot restore\n\nRestores a snapshot of Vault data taken with `vault operator raft snapshot save`.\n\n```text\nUsage: vault operator raft snapshot restore <snapshot_file>\n\n  Installs the provided snapshot, returning the cluster to the state defined in it.\n\n\t  $ vault operator raft snapshot restore raft.snap\n```\n\n### snapshot inspect\n\nInspects a snapshot file taken from a Vault Raft cluster and prints a table showing the number of keys and the amount of space used.\n\n```text\nUsage: vault operator raft snapshot inspect <snapshot_file>\n```\n\nFor example:\n\n```shell-session\n$ vault operator raft snapshot inspect raft.snap\n```\n\n## autopilot\n\nThis command groups subcommands for operators interacting with the autopilot\nfunctionality of the integrated Raft storage backend. 
There are 3 subcommands\nsupported: `get-config`, `set-config` and `state`.\n\nFor a more detailed overview of autopilot features, see the [concepts page](\/vault\/docs\/concepts\/integrated-storage\/autopilot).\n\n```text\nUsage: vault operator raft autopilot <subcommand> [options] [args]\n\nThis command groups subcommands for operators interacting with the autopilot\nfunctionality of the integrated Raft storage backend.\n\nSubcommands:\n    get-config    Returns the configuration of the autopilot subsystem under integrated storage\n    set-config    Modify the configuration of the autopilot subsystem under integrated storage\n    state         Displays the state of the raft cluster under integrated storage as seen by autopilot\n```\n\n### autopilot state\n\nDisplays the state of the raft cluster under integrated storage as seen by\nautopilot. It shows whether autopilot thinks the cluster is healthy or not.\n\nState includes a list of all servers by nodeID and IP address.\n\n```text\nUsage: vault operator raft autopilot state\n\n  Displays the state of the raft cluster under integrated storage as seen by autopilot.\n\n    $ vault operator raft autopilot state\n```\n\n#### Example output\n\n```text\nHealthy:                         true\nFailure Tolerance:               1\nLeader:                          vault_1\nVoters:\n   vault_1\n   vault_2\n   vault_3\nServers:\n   vault_1\n      Name:              vault_1\n      Address:           127.0.0.1:8201\n      Status:            leader\n      Node Status:       alive\n      Healthy:           true\n      Last Contact:      0s\n      Last Term:         3\n      Last Index:        61\n      Version:           1.17.3\n      Node Type:         voter\n   vault_2\n      Name:              vault_2\n      Address:           127.0.0.1:8203\n      Status:            voter\n      Node Status:       alive\n      Healthy:           true\n      Last Contact:      564.765375ms\n      Last Term:         3\n      Last Index:        
61\n      Version:           1.17.3\n      Node Type:         voter\n   vault_3\n      Name:              vault_3\n      Address:           127.0.0.1:8205\n      Status:            voter\n      Node Status:       alive\n      Healthy:           true\n      Last Contact:      3.814017875s\n      Last Term:         3\n      Last Index:        61\n      Version:           1.17.3\n      Node Type:         voter\n```\n\nThe \"Failure Tolerance\" of a cluster is the number of nodes in the cluster that could\nfail gradually without causing an outage.\n\nWhen verifying the health of your cluster, check the following fields of each server:\n- Healthy: whether Autopilot considers this node healthy or not\n- Status: the voting status of the node. This will be `voter`, `leader`, or [`non-voter`](\/vault\/docs\/concepts\/integrated-storage#non-voting-nodes-enterprise-only).\n- Last Index: the index of the last applied Raft log. This should be close to the \"Last Index\" value of the leader.\n- Version: the version of Vault running on the server\n- Node Type: the type of node. On CE, this will always be `voter`. 
See below for an explanation of Enterprise node types.\n\nVault Enterprise will include additional output related to automated upgrades, optimistic failure tolerance, and redundancy zones.\n\n#### Example Vault enterprise output\n\n```text\nRedundancy Zones:\n   a\n      Servers: vault_1, vault_2, vault_5\n      Voters: vault_1\n      Failure Tolerance: 2\n   b\n      Servers: vault_3, vault_4\n      Voters: vault_3\n      Failure Tolerance: 1\nUpgrade Info:\n   Status: await-new-voters\n   Target Version: 1.17.5\n   Target Version Voters:\n   Target Version Non-Voters: vault_5\n   Other Version Voters: vault_1, vault_3\n   Other Version Non-Voters: vault_2, vault_4\n   Redundancy Zones:\n      a\n         Target Version Voters:\n         Target Version Non-Voters: vault_5\n         Other Version Voters: vault_1\n         Other Version Non-Voters: vault_2\n      b\n         Target Version Voters:\n         Target Version Non-Voters:\n         Other Version Voters: vault_3\n         Other Version Non-Voters: vault_4\n```\n\n\"Optimistic Failure Tolerance\" describes the number of healthy active and\nback-up voting servers that can fail gradually without causing an outage.\n\n@include 'autopilot\/node-types.mdx'\n\n### autopilot get-config\n\nReturns the configuration of the autopilot subsystem under integrated storage.\n\n```text\nUsage: vault operator raft autopilot get-config\n\n  Returns the configuration of the autopilot subsystem under integrated storage.\n\n    $ vault operator raft autopilot get-config\n```\n\n### autopilot set-config\n\nModify the configuration of the autopilot subsystem under integrated storage.\n\n```text\nUsage: vault operator raft autopilot set-config [options]\n\n  Modify the configuration of the autopilot subsystem under integrated storage.\n\n\t  $ vault operator raft autopilot set-config -server-stabilization-time 10s\n\n```\n\nFlags applicable to this command are the following:\n\n- `cleanup-dead-servers` `(bool: false)` - Controls 
whether to remove dead servers from\n  the Raft peer list periodically or when a new server joins. This requires that\n  `min-quorum` is also set.\n\n- `last-contact-threshold` `(string: \"10s\")` - Limit on the amount of time a server can\n  go without leader contact before being considered unhealthy.\n\n- `dead-server-last-contact-threshold` `(string: \"24h\")` - Limit on the amount of time\na server can go without leader contact before being considered failed. This\ntakes effect only when `cleanup_dead_servers` is set. When adding new nodes\nto your cluster, the `dead_server_last_contact_threshold` needs to be larger\nthan the amount of time that it takes to load a Raft snapshot, otherwise the\nnewly added nodes will be removed from your cluster before they have finished\nloading the snapshot and starting up. If you are using an [HSM](\/vault\/docs\/enterprise\/hsm), your\n`dead_server_last_contact_threshold` needs to be larger than the response\ntime of the HSM.\n\n<Warning>\n\n  We strongly recommend keeping `dead_server_last_contact_threshold` at a high\n  duration, such as a day, as setting it too low could result in the removal of nodes\n  that aren't actually dead.\n\n<\/Warning>\n\n- `max-trailing-logs` `(int: 1000)` - Number of entries in the Raft log that a server\n  can be behind before being considered unhealthy. If this value is too low,\n  it can cause the cluster to lose quorum if a follower falls behind. This\n  value only needs to be increased from the default if you have a very high\n  write load on Vault and you see that it takes a long time to promote new\n  servers to voters. This is an unlikely scenario and most users\n  should not modify this value.\n\n- `min-quorum` `(int)` - The minimum number of servers that should always be\npresent in a cluster. 
Autopilot will not prune servers below this number.\n**There is no default for this value** and it should be set to the expected\nnumber of voters in your cluster when `cleanup_dead_servers` is set as `true`.\nUse the [quorum size guidance](\/vault\/docs\/internals\/integrated-storage#quorum-size-and-failure-tolerance)\nto determine the proper minimum quorum size for your cluster.\n\n- `server-stabilization-time` `(string: \"10s\")` - Minimum amount of time a server must be in a healthy state before it\n  can become a voter. Until that happens, it will be visible as a peer in the cluster, but as a non-voter, meaning it\n  won't contribute to quorum.\n\n- `disable-upgrade-migration` `(bool: false)` - Controls whether to disable automated\n  upgrade migrations, an Enterprise-only feature.","site":"vault","answers_cleaned":"    layout  docs page title  operator raft   Command description       The  operator raft  command is used to interact with the integrated Raft storage backend         operator raft  This command groups subcommands for operators to manage the Integrated Storage Raft backend      text Usage  vault operator raft  subcommand   options   args    This command groups subcommands for operators interacting with the Vault  integrated Raft storage backend  Most users will not need to interact with these  commands  Here are a few examples of the Raft operator commands   Subcommands      join           Joins a node to the Raft cluster     list peers     Returns the Raft peer set     remove peer    Removes a node from the Raft cluster     snapshot       Restores and saves snapshots from the Raft cluster         join  This command is used to join a new node as a peer to the Raft cluster  In order to join  there must be at least one existing member of the cluster  If Shamir seal is in use  then unseal keys are to be supplied before or after the join process  depending on whether it s being used exclusively for HA   If raft is used for  storage   the node must be 
joined before unsealing and the  leader api addr  argument must be provided  If raft is used for  ha storage   the node must be first unsealed before joining and the  leader api addr  must  not  be provided      text Usage  vault operator raft join  options   leader api addr     Join the current node as a peer to the Raft cluster by providing the address   of the Raft leader node        vault operator raft join  http   127 0 0 2 8200       The  join  command also allows operators to specify cloud auto join configuration instead of a static IP address or hostname  When provided  Vault will attempt to automatically discover and resolve potential leader addresses based on the provided auto join configuration   Vault uses go discover to support the auto join functionality  Please see the go discover  README  https   github com hashicorp go discover blob master README md  for details on the format   By default  Vault will attempt to reach discovered peers using HTTPS and port 8200  Operators may override these through the    auto join scheme  and    auto join port  CLI flags respectively      text Usage  vault operator raft join  options   auto join configuration    Join the current node as a peer to the Raft cluster by providing cloud auto join   metadata configuration        vault operator raft join  provider aws region eu west 1               Parameters  The following flags are available for the  operator raft join  command       leader ca cert    string         CA cert to communicate with Raft leader       leader client cert    string         Client cert to authenticate to Raft leader       leader client key    string         Client key to authenticate to Raft leader       non voter    bool  false   enterprise     This flag is used to make the   server not participate in the Raft quorum  and have it only receive the data   replication stream  This can be used to add read scalability to a cluster in   cases where a high volume of reads to servers are needed  The 
  default is false. See [retry_join_as_non_voter](/vault/docs/configuration/storage/raft#retry_join_as_non_voter)
  for the equivalent config option when using `retry_join` stanzas instead.

- `-retry` `(bool: false)` - Continuously retry joining the Raft cluster upon
  failures. The default is false.

<Note>

Please be aware that the content (not the path to the file) of the certificate
or key is expected for these parameters: `leader-ca-cert`, `leader-client-cert`,
`leader-client-key`.

</Note>

## list-peers

This command is used to list the full set of peers in the Raft cluster.

```text
Usage: vault operator raft list-peers

  Provides the details of all the peers in the Raft cluster.

      $ vault operator raft list-peers
```

Example output:

```json
{
  "data": {
    "config": {
      "index": 62,
      "servers": [
        {
          "address": "127.0.0.2:8201",
          "leader": true,
          "node_id": "node1",
          "protocol_version": "3",
          "voter": true
        },
        {
          "address": "127.0.0.4:8201",
          "leader": false,
          "node_id": "node3",
          "protocol_version": "3",
          "voter": true
        }
      ]
    }
  }
}
```

Use the output of `list-peers` to ensure that your cluster is in an expected
state. If you've removed a server using `remove-peer`, the server should no
longer be listed in the `list-peers` output. If you've added a server using
`add-peer` or through `retry_join`, check the `list-peers` output to see that
it has been added to the cluster and, if the node has not been added as a
non-voter, that it has been promoted to a voter.

## remove-peer

This command is used to remove a node from being a peer to the Raft cluster. In
certain cases where a peer may be left behind in the Raft configuration even
though the server is no longer present and known to the cluster, this command
can be used to remove the failed server so that it no longer affects the Raft
quorum.

```text
Usage: vault operator raft remove-peer <server_id>

  Removes a node from the Raft cluster.

      $ vault operator raft remove-peer node1
```

<Note>

Once a node is removed, its Raft data needs to be deleted before it may be
joined back into an existing cluster. This requires shutting down the Vault
process, deleting the data, then restarting the Vault process on the removed
node.

</Note>

## snapshot

This command groups subcommands for operators interacting with the snapshot
functionality of the integrated Raft storage backend. There are 2 subcommands
supported: `save` and `restore`.

```text
Usage: vault operator raft snapshot <subcommand> [options] [args]

  This command groups subcommands for operators interacting with the snapshot
  functionality of the integrated Raft storage backend.

Subcommands:
    restore    Installs the provided snapshot, returning the cluster to the state defined in it
    save       Saves a snapshot of the current state of the Raft cluster into a file
```

### snapshot save

Takes a snapshot of the Vault data. The snapshot can be used to restore Vault
to the point in time when the snapshot was taken.

```text
Usage: vault operator raft snapshot save <snapshot_file>

  Saves a snapshot of the current state of the Raft cluster into a file.

      $ vault operator raft snapshot save raft.snap
```

<Note>

Snapshot is not supported when Raft is used only for `ha_storage`.

</Note>

### snapshot restore

Restores a snapshot of Vault data taken with `vault operator raft snapshot save`.

```text
Usage: vault operator raft snapshot restore <snapshot_file>

  Installs the provided snapshot, returning the cluster to the state defined in it.

      $ vault operator raft snapshot restore raft.snap
```

### snapshot inspect

Inspects a snapshot file taken from a Vault Raft cluster and prints a table
showing the number of keys and the amount of space used.

```text
Usage: vault operator raft snapshot inspect <snapshot_file>
```

For example:

```shell-session
$ vault operator raft snapshot inspect raft.snap
```

## autopilot

This command groups subcommands for operators interacting with the autopilot
functionality of the integrated Raft storage backend. There are 3 subcommands
supported: `get-config`, `set-config`, and `state`.

For a more detailed overview of autopilot features, see the
[concepts page](/vault/docs/concepts/integrated-storage/autopilot).

```text
Usage: vault operator raft autopilot <subcommand> [options] [args]

  This command groups subcommands for operators interacting with the autopilot
  functionality of the integrated Raft storage backend.

Subcommands:
    get-config    Returns the configuration of the autopilot subsystem under integrated storage
    set-config    Modify the configuration of the autopilot subsystem under integrated storage
    state         Displays the state of the raft cluster under integrated storage as seen by autopilot
```

### autopilot state

Displays the state of the raft cluster under integrated storage as seen by
autopilot. It shows whether autopilot thinks the cluster is healthy or not.
State includes a list of all servers by nodeID and IP address.

```text
Usage: vault operator raft autopilot state

  Displays the state of the raft cluster under integrated storage as seen by autopilot.

      $ vault operator raft autopilot state
```

Example output:

```text
Healthy:                      true
Failure Tolerance:            1
Leader:                       vault_1
Voters:
   vault_1
   vault_2
   vault_3
Servers:
   vault_1
      Name:              vault_1
      Address:           127.0.0.1:8201
      Status:            leader
      Node Status:       alive
      Healthy:           true
      Last Contact:      0s
      Last Term:         3
      Last Index:        61
      Version:           1.17.3
      Node Type:         voter
   vault_2
      Name:              vault_2
      Address:           127.0.0.1:8203
      Status:            voter
      Node Status:       alive
      Healthy:           true
      Last Contact:      564.765375ms
      Last Term:         3
      Last Index:        61
      Version:           1.17.3
      Node Type:         voter
   vault_3
      Name:              vault_3
      Address:           127.0.0.1:8205
      Status:            voter
      Node Status:       alive
      Healthy:           true
      Last Contact:      3.814017875s
      Last Term:         3
      Last Index:        61
      Version:           1.17.3
      Node Type:         voter
```

The `Failure Tolerance` of a cluster is the number of nodes in the cluster that
could fail gradually without causing an outage.

When verifying the health of your cluster, check the following fields of each
server:

- Healthy: whether Autopilot considers this node healthy or not.
- Status: the voting status of the node. This will be `voter`, `leader`, or
  [`non-voter`](/vault/docs/concepts/integrated-storage#non-voting-nodes-enterprise-only)
  (Enterprise only).
- Last Index: the index of the last applied Raft log. This should be close to
  the `Last Index` value of the leader.
- Version: the version of Vault running on the server.
- Node Type: the type of node. On CE, this will always be `voter`. See below
  for an explanation of Enterprise node types.

Vault Enterprise will include additional output related to automated upgrades,
optimistic failure tolerance, and redundancy zones.

Example Vault Enterprise output:

```text
Redundancy Zones:
   a:
      Servers: vault_1, vault_2, vault_5
      Voters: vault_1
      Failure Tolerance: 2
   b:
      Servers: vault_3, vault_4
      Voters: vault_3
      Failure Tolerance: 1
Upgrade Info:
   Status: await-new-voters
   Target Version: 1.17.5
   Target Version Voters:
   Target Version Non-Voters: vault_5
   Other Version Voters: vault_1, vault_3
   Other Version Non-Voters: vault_2, vault_4
   Redundancy Zones:
      a:
         Target Version Voters:
         Target Version Non-Voters: vault_5
         Other Version Voters: vault_1
         Other Version Non-Voters: vault_2
      b:
         Target Version Voters:
         Target Version Non-Voters:
         Other Version Voters: vault_3
         Other Version Non-Voters: vault_4
```

`Optimistic Failure Tolerance` describes the number of healthy active and
back-up voting servers that can fail gradually without causing an outage.

@include 'autopilot/node-types.mdx'

### autopilot get-config

Returns the configuration of the autopilot subsystem under integrated storage.

```text
Usage: vault operator raft autopilot get-config

  Returns the configuration of the autopilot subsystem under integrated storage.

      $ vault operator raft autopilot get-config
```

### autopilot set-config

Modify the configuration of the autopilot subsystem under integrated storage.

```text
Usage: vault operator raft autopilot set-config [options]

  Modify the configuration of the autopilot subsystem under integrated storage.

      $ vault operator raft autopilot set-config -server-stabilization-time 10s
```

Flags applicable to this command are the following:

- `-cleanup-dead-servers` `(bool: false)` - Controls whether to remove dead
  servers from the Raft peer list periodically or when a new server joins. This
  requires that `min-quorum` is also set.

- `-last-contact-threshold` `(string: "10s")` - Limit on the amount of time a
  server can go without leader contact before being considered unhealthy.

- `-dead-server-last-contact-threshold` `(string: "24h")` - Limit on the amount
  of time a server can go without leader contact before being considered
  failed. This takes effect only when `cleanup_dead_servers` is set. When
  adding new nodes to your cluster, the `dead_server_last_contact_threshold`
  needs to be larger than the amount of time that it takes to load a Raft
  snapshot, otherwise the newly added nodes will be removed from your cluster
  before they have finished loading the snapshot and starting up. If you are
  using an [HSM](/vault/docs/enterprise/hsm), your
  `dead_server_last_contact_threshold` needs to be larger than the response
  time of the HSM.

<Warning>

We strongly recommend keeping `dead_server_last_contact_threshold` at a high
duration, such as a day, as it being too low could result in removal of nodes
that aren't actually dead.

</Warning>

- `-max-trailing-logs` `(int: 1000)` - Amount of entries in the Raft Log that a
  server can be behind before being considered unhealthy. If this value is too
  low, it can cause the cluster to lose quorum if a follower falls behind. This
  value only needs to be increased from the default if you have a very high
  write load on Vault and you see that it takes a long time to promote new
  servers to becoming voters. This is an unlikely scenario and most users
  should not modify this value.

- `-min-quorum` `(int)` - The minimum number of servers that should always be
  present in a cluster. Autopilot will not prune servers below this number.
  There is no default for this value, and it should be set to the expected
  number of voters in your cluster when `cleanup-dead-servers` is set as
  `true`. Use the
  [quorum size guidance](/vault/docs/internals/integrated-storage#quorum-size-and-failure-tolerance)
  to determine the proper minimum quorum size for your cluster.

- `-server-stabilization-time` `(string: "10s")` - Minimum amount of time a
  server must be in a healthy state before it can become a voter. Until that
  happens, it will be visible as a peer in the cluster, but as a non-voter,
  meaning it won't contribute to quorum.

- `-disable-upgrade-migration` `(bool: false)` - Controls whether to disable
  automated upgrade migrations, an Enterprise-only feature.
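The quorum size referenced by `-min-quorum` and the `Failure Tolerance` field reported by `autopilot state` both follow from simple Raft majority arithmetic: a cluster of `n` voters needs `n // 2 + 1` of them for quorum, leaving `n - quorum` that can fail. The sketch below (not part of the Vault CLI; it only assumes the JSON shape shown in the `list-peers` example output above) derives both numbers from a `list-peers` payload:

```python
import json

# Payload in the shape shown in the `list-peers` example output above.
payload = json.loads("""
{
  "data": {
    "config": {
      "index": 62,
      "servers": [
        {"address": "127.0.0.2:8201", "leader": true,
         "node_id": "node1", "protocol_version": "3", "voter": true},
        {"address": "127.0.0.4:8201", "leader": false,
         "node_id": "node3", "protocol_version": "3", "voter": true}
      ]
    }
  }
}
""")

def quorum_summary(payload):
    """Summarize voter count, quorum size, and failure tolerance."""
    servers = payload["data"]["config"]["servers"]
    voters = [s["node_id"] for s in servers if s["voter"]]
    n = len(voters)
    quorum = n // 2 + 1     # majority needed for Raft to make progress
    tolerance = n - quorum  # equals (n - 1) // 2 for odd n
    return {"voters": voters, "quorum": quorum, "failure_tolerance": tolerance}

print(quorum_summary(payload))
```

Note that the two-voter example cluster has a failure tolerance of zero, which is why the quorum size guidance recommends an odd number of voters.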
---
layout: docs
page_title: operator rekey - Command
description: |-
  The "operator rekey" command generates a new set of unseal keys. This can
  optionally change the total number of key shares or the required threshold of
  those key shares to reconstruct the root key. This operation is zero
  downtime, but it requires that Vault is unsealed and a quorum of existing
  unseal keys are provided.
---

# operator rekey

The `operator rekey` command generates a new set of unseal keys. This can
optionally change the total number of key shares or the required threshold of
those key shares to reconstruct the root key. This operation is zero downtime,
but it requires that Vault is unsealed and a quorum of existing unseal keys are
provided.

An unseal key may be provided directly on the command line as an argument to the
command. If key is specified as "-", the command will read from stdin. If a TTY
is available, the command will prompt for text.

## Examples

Initialize a rekey:

```shell-session
$ vault operator rekey \
    -init \
    -key-shares=15 \
    -key-threshold=9
```

Initialize a rekey when Auto Unseal is used for the Vault cluster:

```shell-session
$ vault operator rekey \
    -target=recovery \
    -init \
    -key-shares=15 \
    -key-threshold=9
```

Initialize a rekey and activate the verification process:

```shell-session
$ vault operator rekey \
    -init \
    -key-shares=15 \
    -key-threshold=9 \
    -verify
```

Rekey and encrypt the resulting unseal keys with PGP:

```shell-session
$ vault operator rekey \
    -init \
    -key-shares=3 \
    -key-threshold=2 \
    -pgp-keys="keybase:hashicorp,keybase:jefferai,keybase:sethvargo"
```

Rekey an Auto Unseal vault and encrypt the resulting recovery keys with PGP:

```shell-session
$ vault operator rekey \
    -target=recovery \
    -init \
    -pgp-keys=keybase:grahamhashicorp \
    -key-shares=1 \
    -key-threshold=1
```

Store encrypted PGP keys in Vault's core:

```shell-session
$ vault operator rekey \
    -init \
    -pgp-keys="..." \
    -backup
```

Retrieve backed-up unseal keys:

```shell-session
$ vault operator rekey -backup-retrieve
```

Delete backed-up unseal keys:

```shell-session
$ vault operator rekey -backup-delete
```

Perform the verification of the rekey using the verification nonce:

```shell-session
$ vault operator rekey -verify -nonce="..."
```

## Usage

The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.

### Output options

- `-format` `(string: "table")` - Print the output in the given format. Valid
  formats are "table", "json", or "yaml". This can also be specified via the
  `VAULT_FORMAT` environment variable.

### Command options

- `-cancel` `(bool: false)` - Reset the rekeying progress. This will discard
  any submitted unseal keys or configuration. The default is false.

- `-init` `(bool: false)` - Initialize the rekeying operation. This can only be
  done if no rekeying operation is in progress. Customize the new number of key
  shares and key threshold using the `-key-shares` and `-key-threshold` flags.

- `-key-shares` `(int: 5)` - Number of key shares to split the generated root
  key into. This is the number of "unseal keys" to generate. This is aliased as
  `-n`.

- `-key-threshold` `(int: 3)` - Number of key shares required to reconstruct the
  root key. This must be less than or equal to `-key-shares`. This is aliased as
  `-t`.

- `-nonce` `(string: "")` - Nonce value provided at initialization. The same
  nonce value must be provided with each unseal key.

- `-pgp-keys` `(string: "...")` - Comma-separated list of paths to files on disk
  containing public PGP keys OR a comma-separated list of Keybase usernames
  using the format `keybase:<username>`. When supplied, the generated unseal
  keys will be encrypted and base64-encoded in the order specified in this list.

- `-status` `(bool: false)` - Print the status of the current attempt without
  providing an unseal key. The default is false.

- `-target` `(string: "barrier")` - Target for rekeying. "recovery" only applies
  when HSM support is enabled or using [Auto Unseal](/vault/docs/concepts/seal#auto-unseal).

- `-verify` `(bool: false)` - Indicate during the phase `-init` that the
  verification process is activated for the rekey. Along with the `-nonce`
  option it indicates that the nonce given is for the verification process.

### Backup options

- `-backup` `(bool: false)` - Store a backup of the current PGP encrypted unseal
  keys in Vault's core. The encrypted values can be recovered in the event of
  failure or discarded after success. See the `-backup-delete` and
  `-backup-retrieve` options for more information. This option only applies when
  the existing unseal keys were PGP encrypted.

- `-backup-delete` `(bool: false)` - Delete any stored backup unseal keys.

- `-backup-retrieve` `(bool: false)` - Retrieve the backed-up unseal keys. This
  option is only available if the PGP keys were provided and the backup has not
  been deleted.
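The constraints on `-key-shares` and `-key-threshold` stated above (both at least 1, and the threshold no larger than the share count) can be checked before initializing a rekey. This is a hypothetical helper for illustration, not part of the Vault CLI:

```python
def validate_rekey_options(key_shares: int, key_threshold: int) -> None:
    """Sanity-check a -key-shares/-key-threshold pair per the documented rules.

    Raises ValueError if the pair could not be used to initialize a rekey.
    """
    if key_shares < 1:
        raise ValueError("-key-shares must be at least 1")
    if key_threshold < 1:
        raise ValueError("-key-threshold must be at least 1")
    if key_threshold > key_shares:
        # Docs: -key-threshold "must be less than or equal to -key-shares".
        raise ValueError("-key-threshold must be <= -key-shares")

# The defaults (5 shares, threshold 3) and the examples above all pass:
validate_rekey_options(5, 3)
validate_rekey_options(15, 9)
validate_rekey_options(1, 1)
```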
---
layout: docs
page_title: operator diagnose - Command
description: |-
  "vault operator diagnose" is a new operator-centric command, focused on providing a clear description
  of what is working in Vault, and what is not working. The command focuses on why Vault cannot serve requests,
  but will also warn on configurations or statuses that it deems to be unsafe in some way.
---

# operator diagnose

The operator diagnose command should be used primarily when vault is down or
partially inoperational. The command can be used safely regardless of the state
vault is in, but may return meaningless results for some of the test cases if the
vault server is already running.

Note: if you run the diagnose command proactively, either before a server
starts or while a server is operational, please consult the documentation
on the individual checks below to see which checks are returning false error
messages or warnings.

## Usage

The following flags are available in addition to the [standard set of
flags](/vault/docs/commands) included on all commands.

### Output options

- `-format` `(string: "table")` - Print the output in the given format. Valid
  formats are "table", "json", or "yaml". This can also be specified via the
  `VAULT_FORMAT` environment variable.

#### Output layout

The operator diagnose command will output a set of lines in the CLI.
Each line will begin with a prefix in square brackets. These are:

- `[ success ]` - Denotes that the check was successful.
- `[ warning ]` - Denotes that the check has passed, but that there may be potential
issues to look into that may relate to the issues vault is experiencing. Diagnose warns
frequently. These warnings are meant to serve as starting points in the debugging process.
- `[ failure ]` - Denotes that the check has failed. Failures are critical issues in the eyes
of the diagnose command.

In addition to these prefixed lines, there may be output lines that are not prefixed, but are
color-coded purple. These are advice lines from Diagnose, and are meant to offer general guidance
on how to go about fixing potential warnings or failures that may arise.

Warn or fail prefixes in nested checks will bubble up to the parent if the prefix supersedes the
parent prefix. Fail supersedes warn, and warn supersedes ok. For example, if the TLS checks under
the Storage check fail, the `[ failure ]` prefix will bubble up to the Storage check.

### Command options

- `-config` `(string: "")` - The path to the vault configuration file used by
the vault server on startup.

### Diagnose checks

The following section details the various checks that Diagnose runs. Check names in documentation
will be separated by slashes to denote that they are nested, when applicable. For example, a check
documented as `A / B` will show up as `B` in the `operator diagnose` output, and will be nested
(indented) under `A`.

#### Vault diagnose

`Vault Diagnose` is the top level check that contains the rest of the checks. It will report the
overall status of the checks nested under it.

#### Check operating system / check open file limit

`Check Open File Limit` verifies that the open file limit value is set high enough for vault
to run effectively. We recommend setting these limits to at least 1024768.

This check will be skipped on openbsd, arm, and windows.

#### Check operating system / check disk usage

`Check Disk Usage` will report disk usage for each partition. For each partition on a prod host,
we recommend having at least 5% of the partition free to use, and at least 1 GB of space.

This check will be skipped on openbsd and arm.

#### Parse configuration

`Parse Configuration` will check the vault server config file for syntax errors. It will check
for extra values in the configuration file, repeated stanzas, and stanzas that do not belong
in the configuration file (for example a "tcpp" listener as opposed to a tcp listener).

Currently, the `storage` stanza is not checked.

#### Check storage / create storage backend

`Create Storage Backend` ensures that the storage stanza configured in the vault server config
has enough information to create a storage object internally. Common errors will have to do
with misconfigured fields in the storage stanza.

#### Check storage / check consul TLS

`Check Consul TLS` verifies TLS information included in the storage stanza if the storage type
is consul. If a certificate chain is provided, Diagnose parses the root, intermediate, and leaf
certificates, and checks each one for correctness.

#### Check storage / check consul direct storage access

`Check Consul Direct Storage Access` is a consul-specific check that ensures Vault is not accessing
the consul server directly, but rather through a local agent.

#### Check storage / check raft folder permissions

`Check Raft Folder Permissions` computes the permissions on the raft folder, checks that a boltDB file
has been initialized within the folder previously, and ensures that the folder is not too permissive, but
at the same time has enough permissions to be used. The raft folder should not have `other` permissions, but
should have `group rw` or `owner rw`, depending on different setups. This check also warns if it detects a
symlink being used.

Note that this check will warn that a raft file has not been created if diagnose is run without any
pre-existing server runs.

This check will be skipped on windows.

#### Check storage / check raft folder ownership

`Check Raft Folder Ownership` ensures that vault does not need to run as root to access the boltDB folder.

Note that this check will warn that a raft file has not been created if diagnose is run without any
pre-existing server runs.

This check will be skipped on windows.

#### Check storage / check for raft quorum

`Check For Raft Quorum` uses the FSM to ensure that there were an odd number of voters in the raft quorum when
vault was last running.

Note that this check will warn that there are 0 voters if diagnose is run without any pre-existing server runs.

#### Check storage / check storage access

`Check Storage Access` will try to write a dud value, named `diagnose/latency/<uuid>`, to storage.
Ensure that there is no important data at this location before running diagnose, as this check
will overwrite that data. This check will then try to list and read the value it wrote to ensure
the name and value is as expected.

`Check Storage Access` will warn if any operation takes longer than 100ms, and error out if the
entire check takes longer than 30s.

#### Check service discovery / check consul service discovery TLS

`Check Consul Service Discovery TLS` verifies TLS information included in the service discovery
stanza if the storage type is consul. If a certificate chain is provided, Diagnose parses
the root, intermediate, and leaf certificates, and checks each one for correctness.

#### Check service discovery / check consul direct service discovery

`Check Consul Direct Service Discovery` is a consul-specific check that ensures Vault
is not accessing the consul server directly, but rather through a local agent.

#### Create Vault server configuration seals

`Create Vault Server Configuration Seals` creates seals from the vault configuration
stanza and verifies they can be initialized and finalized.

#### Check transit seal TLS

`Check Transit Seal TLS` checks the TLS client certificate, key, and CA certificate
provided in a transit seal stanza (if one exists) for correctness.

#### Create core configuration / initialize randomness for core

`Initialize Randomness for Core` ensures that vault has access to the randReader that
the vault core uses.

#### HA storage

This check and any nested checks will be the same as the `Check Storage` checks.
The only difference is that the checks here will be run on whatever is specified in the
`ha_storage` section of the vault configuration, as opposed to the `storage` section.

#### Determine redirect address

Ensures that one of the `VAULT_API_ADDR`, `VAULT_REDIRECT_ADDR`, or `VAULT_ADVERTISE_ADDR`
environment variables is set, or that the redirect address is specified in the vault
configuration.

#### Check cluster address

Parses the cluster address from the `VAULT_CLUSTER_ADDR` environment variable, or from the
redirect address or cluster address specified in the vault configuration, and checks that
the address is of the form `host:port`.

#### Check core creation

`Check Core Creation` verifies the logical configuration checks that vault does when it
creates a core object. These are runtime checks, meaning any errors thrown by this diagnose
test will also be thrown by the vault server itself when it is run.

#### Check for autoloaded license

`Check For Autoloaded License` is an enterprise diagnose check, which verifies that vault
has access to a valid autoloaded license that will not expire in the next 30 days.

#### Start listeners / check listener TLS

`Check Listener TLS` verifies the server certificate file and key are valid and matching.
It also checks the client CA file, if one is provided, for a valid certificate, and performs
the standard runtime listener checks on the listener configuration stanza, such as verifying
that the minimum and maximum TLS versions are within the bounds of what vault supports.

Like all the other Diagnose TLS checks, it will warn if any of the certificates provided are
set to expire within the next month.

#### Start listeners / create listeners

`Create Listeners` uses the listener configuration to initialize the listeners, erroring with
a server error if anything goes wrong.

#### Check autounseal encryption

`Check Autounseal Encryption` will initialize the barrier using the seal stanza, if the seal
type is not a shamir seal, and use it to encrypt and decrypt a dud value.

#### Check server before runtime

`Check Server Before Runtime` achieves parity with the server run command, running through
the runtime code checks before the server is initialized to ensure that nothing fails.
This check will never fail without another diagnose check failing.
will warn that there are 0 voters if diagnose is run without any pre existing server runs         Check storage   check storage access   Check Storage Access  will try to write a dud value  named  diagnose latency  uuid    to storage   Ensure that there is no important data at this location before running diagnose  as this check  will overwrite that data  This check will then try to list and read the value it wrote to ensure  the name and value is as expected     Check Storage Access  will warn if any operation takes longer than 100ms  and error out if the  entire check takes longer than 30s         Check service discovery   check consul service discovery TLS   Check Consul Service Discovery TLS  verifies TLS information included in the service discovery  stanza if the storage type is consul  If a certificate chain is provided  Diagnose parses   the root  intermediate  and leaf certificates  and checks each one for correctness         Check service discovery   check consul direct service discovery   Check Consul Direct Service Discovery  is a consul specific check that ensures Vault is not accessing the consul server directly  but rather through a local agent         Create Vault server configuration seals   Create Vault Server Configuration Seals  creates seals from the vault configuration  stanza and verifies they can be initialized and finalized         Check transit seal TLS   Check Transit Seal TLS  checks the TLS client certificate  key  and CA certificate provided in a transit seal stanza  if one exists  for correctness         Create core configuration   initialize randomness for core   Initialize Randomness for Core  ensures that vault has access to the randReader that  the vault core uses         HA storage  This check and any nested checks will be the same as the  Check Storage  checks  The only difference is that the checks here will be run on whatever is specified in the   ha storage  section of the vault configuration  as opposed to the  storage  
section         Determine redirect address  Ensures that one of the  VAULT API ADDR    VAULT REDIRECT ADDR   or  VAULT ADVERTISE ADDR   environment variables are set  or that the redirect address is specified in the vault  configuration        Check cluster address  Parses the cluster address from the  VAULT CLUSTER ADDR  environment variable  or from the  redirect address or cluster address specified in the vault configuration  and checks that  the address is of the form  host port          Check core creation   Check Core Creation  verifies the logical configuration checks that vault does when it  creates a core object  These are runtime checks  meaning any errors thrown by this diagnose test will also be thrown by the vault server itself when it is run         Check for autoloaded license   Check For Autoloaded License  is an enterprise diagnose check  which verifies that vault  has access to a valid autoloaded license that will not expire in the next 30 days        Start listeners   check listener TLS   Check Listener TLS  verifies the server certificate file and key are valid and matching   It also checks the client CA file  if one is provided  for a valid certificate  and performs the standard runtime listener checks on the listener configuration stanza  such as verifying  that the minimum and maximum TLS versions are within the bounds of what vault supports    Like all the other Diagnose TLS checks  it will warn if any of the certificates provided are  set to expire within the next month         Start listeners   create listeners   Create Listeners  uses the listener configuration to initialize the listeners  erroring with  a server error if anything goes wrong         Check autounseal encryption   Check Autounseal Encryption  will initialize the barrier using the seal stanza  if the seal type is not a shamir seal  and use it to encrypt and decrypt a dud value         Check server before runtime   Check Server Before Runtime  achieves parity with the server 
run command  running through  the runtime code checks before the server is initialized to ensure that nothing fails   This check will never fail without another diagnose check failing  "}
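The prefix bubble-up precedence described for nested checks (fail supersedes warn, warn supersedes ok) can be sketched as a small shell reduction. This is an illustrative sketch only: the `bubble_up` helper and the bare `ok`/`warn`/`fail` tokens are assumptions for the example, not actual `operator diagnose` output.

```shell
# Sketch: reduce a list of child-check statuses to the single status that
# should bubble up to the parent, per the documented precedence:
# fail supersedes warn, warn supersedes ok.
bubble_up() {
  worst=ok
  for status in "$@"; do
    case "$status" in
      fail) worst=fail ;;
      warn) [ "$worst" = ok ] && worst=warn ;;
    esac
  done
  echo "$worst"
}

bubble_up ok warn ok    # prints: warn
bubble_up ok warn fail  # prints: fail
```

Under this model, a single failing TLS check under Storage is enough for the Storage parent to report failure, regardless of how many sibling checks passed.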
{"questions":"vault engine mount against an optional configuration pki health check The pki health check command verifies the health of the given PKI secrets layout docs page title pki health check Command","answers":"---\nlayout: docs\npage_title: pki health-check - Command\ndescription: |-\n  The \"pki health-check\" command verifies the health of the given PKI secrets\n  engine mount against an optional configuration.\n---\n\n# pki health-check\n\nThe `pki health-check` command verifies the health of the given PKI secrets\nengine mount against an optional configuration.\n\nThis runs with the permissions of the given token, reading various APIs from\nthe mount and `\/sys` against the given Vault server\n\nMounts need to be specified with any namespaces prefixed in the path, e.g.,\n`ns1\/pki`.\n\n## Examples\n\nPerforms a basic health check against the `pki-root` mount:\n\n```shell-session\n$ vault pki health-check pki-root\/\n```\n\nConfiguration can be specified using the `-health-config` flag:\n\n```shell-session\n$ vault pki health-check -health-config=mycorp-root.json pki-root\/\n```\n\nUsing the `-list` flag will show the list of health checks and any\nknown configuration values (with their defaults) that will be run\nagainst this mount:\n\n```shell-session\n$ vault pki health-check -list pki-root\/\n```\n\n## Usage\n\nThe following flags are unique to this command:\n\n - `-default-disabled` - When specified, results in all health checks being\n   disabled by default unless enabled by the configuration file explicitly.\n  The default is `false`, meaning all default-enabled health checks will run.\n\n - `-health-config` `(string: \"\")` - Path to JSON configuration file to\n   modify health check execution and parameters.\n\n - `-list` - When specified, no health checks are run, but all known health\n   checks are printed. Still requires a positional mount argument. 
The default\n   is `false`, meaning no listing is printed and health checks will execute.\n\n - `-return-indicator` `(string: \"default\")` - Behavior of the return value\n   (exit code) of this command:\n     - `permission`, for exiting with a non-zero code when the tool lacks\n       permissions or has a version mismatch with\n       the server;\n     - `critical`, for exiting with a non-zero code when a check returns a\n       critical status in addition to the above;\n     - `warning`, for exiting with a non-zero status when a check returns a\n       warning status in addition to the above;\n     - `informational`, for exiting with a non-zero status when a check\n       returns an informational status in addition to the above;\n     - `default`, for the default behavior based on severity of message\n        and only returning a zero exit status when all checks have passed\n        and no execution errors have occurred.\n\nThis command respects the `-format` parameter to control the presentation of\noutput sent to stdout. Fatal errors that prevent health checks from executing\nmay not follow this formatting.\n\n## Return status and output\n\nThis command returns the following exit codes:\n\n - `0` - Everything is good.\n - `1` - Usage error (check CLI parameters).\n - `2` - Informational message from a health check.\n - `3` - Warning message from a health check.\n - `4` - Critical message from a health check.\n - `5` - A version mismatch between health check and Vault Server occurred,\n\t     preventing one or more health checks from being fully run.\n - `6` - A permission denied message was returned from Vault Server for\n\t     one or more health checks.\n\nNote that an exit code of `5` (due to a version mismatch) is not necessarily\nfatal to the health check. 
For example, the `crl_validity_period` health\ncheck will return an invalid version warning when run against Vault 1.11 as\nno Delta CRL exists for this version of Vault, but this will not impact its\nability to check the complete CRL.\n\nEach health check outputs one or more results in a list. This list contains a\nmapping of keys (`status`, `status_code`, `endpoint`, and `message`) to\nvalues returned by the health check. An endpoint may occur in more than\none health check and is not necessarily guaranteed to exist on the server\n(e.g., using wildcards to indicate all matching paths have the same\nresult). Tabular form elides the status code, as this is meant to be\nconsumed programmatically.\n\nThese correspond to the following health check status values:\n\n - status `not_applicable` \/ status code `0`: exit code `0`.\n - status `ok` \/ status code `1`: exit code `0`.\n - status `informational` \/ status code `2`: exit code `2`.\n - status `warning` \/ status code `3`: exit code `3`.\n - status `critical` \/ status code `4`: exit code `4`.\n - status `invalid_version` \/ status code `5`: exit code `5`.\n - status `insufficient_permissions` \/ status code `6`: exit code `6`.\n\n## Health checks\n\nThe following health checks are currently implemented. 
More health checks may\nbe added in future releases and may default to being enabled.\n\n### CA validity period\n\n**Name**: `ca_validity_period`\n\n**Accessed APIs**:\n\n - `LIST \/issuers` (unauthenticated)\n - `READ \/issuer\/:issuer_ref\/json` (unauthenticated)\n\n**Config Parameters**:\n\n - `root_expiry_critical` `(duration: 182d)` - for a duration within which the root's lifetime is considered critical\n - `intermediate_expiry_critical` `(duration: 30d)` - for a duration within which the intermediate's lifetime is considered critical\n - `root_expiry_warning` `(duration: 365d)` - for a duration within which the root's lifetime is considered warning\n - `intermediate_expiry_warning` `(duration: 60d)` - for a duration within which the intermediate's lifetime is considered warning\n - `root_expiry_informational` `(duration: 730d)` - for a duration within which the root's lifetime is considered informational\n - `intermediate_expiry_informational` `(duration: 180d)` - for a duration within which the intermediate's lifetime is considered informational\n\nThis health check will check each issuer in the mount for validity status, returning a list. If a root CA expires within the next 6 months or an intermediate CA within the next 30 days, the result will be critical. If a root CA expires within the next 12 months or an intermediate CA within the next 2 months, the result will be a warning. If a root CA expires within 24 months or an intermediate CA within 6 months, the result will be informational.\n\n**Remediation steps**:\n\n1. Perform a [CA rotation operation](\/vault\/docs\/secrets\/pki\/rotation-primitives)\n   to check for CAs that are about to expire.\n1. Migrate from expiring CAs to new CAs.\n1. 
Delete any expired CAs with one of the following options:\n  - Run [tidy](\/vault\/api-docs\/secret\/pki#tidy) manually with `vault write <mount>\/tidy tidy_expired_issuers=true`.\n  - Use the Vault API to call [delete issuer](\/vault\/api-docs\/secret\/pki#delete-issuer).\n\n\n### CRL validity period\n\n**Name**: `crl_validity_period`\n\n**Accessed APIs**:\n\n - `LIST \/issuers` (unauthenticated)\n - `READ \/config\/crl` (optional)\n - `READ \/issuer\/:issuer_ref\/crl` (unauthenticated)\n - `READ \/issuer\/:issuer_ref\/crl\/delta` (unauthenticated)\n\n**Config Parameters**:\n\n - `crl_expiry_pct_critical` `(int: 95)` - the percentage of validity period after which a CRL should be considered critically close to expiry\n - `delta_crl_expiry_pct_critical` `(int: 95)` - the percentage of validity period after which a Delta CRL should be considered critically close to expiry\n\nThis health check checks each issuer's CRL for validity status, returning a list. Unlike CAs, where a date-based duration makes sense due to effort required to successfully rotate, rotating CRLs are much easier, so a percentage based approach makes sense. If the chosen percentage exceeds that of the `grace_period` from the CRL configuration, an informational message will be issued rather than OK.\n\nFor informational purposes, it reads the CRL config and suggests enabling auto-rebuild CRLs if not enabled.\n\n**Remediation steps**:\n\nUse `vault write` to enable CRL auto-rebuild:\n\n```shell-session\n$ vault write <mount>\/config\/crl auto_rebuild=true\n```\n\n### Hardware-Backed root certificate\n\n**Name**: `hardware_backed_root`\n\n**APIs**:\n\n - `LIST \/issuers` (unauthenticated)\n - `READ \/issuer\/:issuer_ref`\n - `READ \/key\/:key_ref`\n\n**Config Parameters**:\n\n - `enabled` `(boolean: false)` - defaults to not being run.\n\nThis health check checks issuers for root CAs backed by software keys. 
While Vault is secure, for production root certificates, we'd recommend the additional integrity of KMS-backed keys. This is an informational check only. When all roots are KMS-backed, we'll return OK; when no issuers are roots, we'll return not applicable.\n\nRead more about hardware-backed keys within [Vault Enterprise Managed Keys](\/vault\/docs\/enterprise\/managed-keys)\n\n### Root certificate issued Non-CA leaves\n\n**Name**: `root_issued_leaves`\n\n**APIs**:\n\n - `LIST \/issuers` (unauthenticated)\n - `READ \/issuer\/:issuer_ref\/pem` (unauthenticated)\n - `LIST \/certs`\n - `READ \/certs\/:serial` (unauthenticated)\n\n**Config Parameters**:\n\n - `certs_to_fetch` `(int: 100)` - a quantity of leaf certificates to fetch to see if any leaves have been issued by a root directly.\n\nThis health check verifies whether a proper CA hierarchy is in use. We do this by fetching `certs_to_fetch` leaf certificates (configurable) and seeing if they are a non-issuer leaf and if they were signed by a root issuer in this mount. If one is found, we'll issue a warning about this, and recommend setting up an intermediate CA.\n\n**Remediation steps**:\n\n1. Restrict the use of `sign`, `sign-verbatim`, `issue`, and ACME APIs against\n   the root issuer.\n1. Create an intermediary issuer in a different mount.\n1. Have the root issuer sign the new intermediary issuer.\n1. Issue new leaf certificates using the intermediary issuer.\n\n### Role allows implicit localhost issuance\n\n**Name**: `role_allows_localhost`\n\n**APIs**:\n\n - `LIST \/roles`\n - `READ \/roles\/:name`\n\n**Config Parameters**: (none)\n\nChecks whether any roles exist that allow implicit localhost based issuance\n(`allow_localhost=true`) with a non-empty `allowed_domains` value.\n\n**Remediation steps**:\n\n1. Set `allow_localhost` to `false` for all roles.\n1. 
Update the `allowed_domains` field with an explicit list of allowed\n   localhost-like domains.\n\n### Role allows Glob-Based wildcard issuance\n\n**Name**: `role_allows_glob_wildcards`\n\n**APIs**:\n\n - `LIST \/roles`\n - `READ \/roles\/:name`\n\n**Config Parameters**:\n\n - `allowed_roles` `(list: nil)` - an allow-list of roles to ignore.\n\nCheck each role to see whether or not it allows wildcard issuance **and** glob\ndomains. Wildcards and globs can interact and result in nested wildcards among\nother (potentially dangerous) quirks.\n\n**Remediation steps**:\n\n1. Split any role that needs both of `allow_glob_domains` and `allow_wildcard_certificates` to be true into two roles.\n1. Continue splitting roles until both of the following are true for all roles:\n    - The role has `allow_glob_domains` **or** `allow_wildcard_certificates`, but\n      not both.\n    - Roles with `allow_glob_domains` **and** `allow_wildcard_certificates` are\n      the only roles required for **all** SANs on the certificate.\n1. Add the roles that allow glob domains and wildcards to `allowed_roles` so\n   Vault ignores them in future checks.\n\n### Role sets `no_store=false` and performance\n\n**Name**: `role_no_store_false`\n\n**APIs**:\n\n - `LIST \/roles`\n - `READ \/roles\/:name`\n - `LIST \/certs`\n - `READ \/config\/crl`\n\n**Config Parameters**:\n\n - `allowed_roles` `(list: nil)` - an allow-list of roles to ignore.\n\nChecks each role to see whether `no_store` is set to `false`.\n\n<Warning>\n\nVault will provide warnings and performance will suffer if you have a large\nnumber of certificates without temporal CRL auto-rebuilding and set `no_store`\nto `false`.\n\n<\/Warning>\n\n**Remediation steps**:\n\n1. Update non-ACME roles with `no_store=true`. **NOTE**: Roles used for ACME\n   issuance must have `no_store` set to `false`.\n1. Set your certificate lifetimes as short as possible.\n1. 
Use [BYOC revocations](\/vault\/api-docs\/secret\/pki#revoke-certificate) to\n   revoke certificates as needed.\n\n### Accessibility of audit information\n\n**Name**: `audit_visibility`\n\n**APIs**:\n\n - `READ \/sys\/mounts\/:mount\/tune`\n\n**Config Parameters**:\n\n - `ignored_parameters` `(list: nil)` - a list of parameters to ignore their HMAC status.\n\nThis health check checks whether audit information is accessible to log consumers, validating whether our list of safe and unsafe audit parameters are generally followed. These are informational responses, if any are present.\n\n**Remediation steps**:\n\nUse `vault secrets tune` to set the desired audit parameters:\n\n```shell-session\n$ vault secrets tune \\\n  -audit-non-hmac-response-keys=certificate \\\n  -audit-non-hmac-response-keys=issuing_ca \\\n  -audit-non-hmac-response-keys=serial_number \\\n  -audit-non-hmac-response-keys=error \\\n  -audit-non-hmac-response-keys=ca_chain \\\n  -audit-non-hmac-request-keys=certificate \\\n  -audit-non-hmac-request-keys=issuer_ref \\\n  -audit-non-hmac-request-keys=common_name \\\n  -audit-non-hmac-request-keys=alt_names \\\n  -audit-non-hmac-request-keys=other_sans \\\n  -audit-non-hmac-request-keys=ip_sans \\\n  -audit-non-hmac-request-keys=uri_sans \\\n  -audit-non-hmac-request-keys=ttl \\\n  -audit-non-hmac-request-keys=not_after \\\n  -audit-non-hmac-request-keys=serial_number \\\n  -audit-non-hmac-request-keys=key_type \\\n  -audit-non-hmac-request-keys=private_key_format \\\n  -audit-non-hmac-request-keys=managed_key_name \\\n  -audit-non-hmac-request-keys=managed_key_id \\\n  -audit-non-hmac-request-keys=ou \\\n  -audit-non-hmac-request-keys=organization \\\n  -audit-non-hmac-request-keys=country \\\n  -audit-non-hmac-request-keys=locality \\\n  -audit-non-hmac-request-keys=province \\\n  -audit-non-hmac-request-keys=street_address \\\n  -audit-non-hmac-request-keys=postal_code \\\n  -audit-non-hmac-request-keys=permitted_dns_domains \\\n  
-audit-non-hmac-request-keys=policy_identifiers \\\n  -audit-non-hmac-request-keys=ext_key_usage_oids \\\n  -audit-non-hmac-request-keys=csr \\\n   <mount>\n```\n\n### ACL policies allow problematic endpoints\n\n**Name**: `policy_allow_endpoints`\n\n**APIs**:\n\n - `LIST \/sys\/policy`\n - `READ \/sys\/policy\/:name`\n\n**Config Parameters**:\n\n - `allowed_policies` `(list: nil)` -  a list of policies to allow-list for access to insecure APIs.\n\nThis health check checks whether unsafe access to APIs (such as `sign-intermediate`, `sign-verbatim`, and `sign-self-issued`) are allowed. Any findings are a critical result and should be rectified by the administrator or explicitly allowed.\n\n### Allow If-Modified-Since requests\n\n**Name**: `allow_if_modified_since`\n\n**APIs**:\n\n - `READ \/sys\/internal\/ui\/mounts`\n\n**Config Parameters**: (none)\n\nThis health check verifies if the `If-Modified-Since` header has been added to `passthrough_request_headers` and if `Last-Modified` header has been added to `allowed_response_headers`. This is an informational message if both haven't been configured, or a warning if only one has been configured.\n\n**Remediation steps**:\n\n1. Update `allowed_response_headers` and `passthrough_request_headers` for all\n   policies with `vault secrets tune`:\n\n  ```shell-session\n  $ vault secrets tune \\\n      -passthrough-request-headers=\"If-Modified-Since\" \\\n      -allowed-response-headers=\"Last-Modified\" \\\n      <mount>\n  ```\n\n1. 
Update ACME-specific headers with `vault secrets tune` (if you are using ACME):\n\n  ```shell-session\n  $ vault secrets tune \\\n      -passthrough-request-headers=\"If-Modified-Since\" \\\n      -allowed-response-headers=\"Last-Modified\" \\\n      -allowed-response-headers=\"Replay-Nonce\" \\\n      -allowed-response-headers=\"Link\" \\\n      -allowed-response-headers=\"Location\" \\\n      <mount>\n  ```\n\n### Auto-Tidy disabled\n\n**Name**: `enable_auto_tidy`\n\n**APIs**:\n\n - `READ \/config\/auto-tidy`\n\n**Config Parameters**:\n\n - `interval_duration_critical` `(duration: 7d)` - the maximum allowed interval_duration to hit critical threshold.\n - `interval_duration_warning` `(duration: 2d)` - the maximum allowed interval_duration to hit a warning threshold.\n - `pause_duration_critical` `(duration: 1s)` - the maximum allowed pause_duration to hit a critical threshold.\n - `pause_duration_warning` `(duration: 200ms)` - the maximum allowed pause_duration to hit a warning threshold.\n\nThis health check verifies that auto-tidy is enabled, with sane defaults for interval_duration and pause_duration. Any disabled findings will be informational, as this is a best-practice but not strictly required, but other findings w.r.t. 
`interval_duration` or `pause_duration` will be critical\/warnings.\n\n**Remediation steps**\n\nUse `vault write` to enable auto-tidy with the recommended defaults:\n\n```shell-session\n$ vault write <mount>\/config\/auto-tidy \\\n    enabled=true \\\n    tidy_cert_store=true \\\n    tidy_revoked_certs=true \\\n    tidy_acme=true \\\n    tidy_revocation_queue=true \\\n    tidy_cross_cluster_revoked_certs=true \\\n    tidy_revoked_cert_issuer_associations=true\n```\n\n### Tidy hasn't run\n\n**Name**: `tidy_last_run`\n\n**APIs**:\n\n - `READ \/tidy-status`\n\n**Config Parameters**:\n\n - `last_run_critical` `(duration: 7d)` - the critical delay threshold between when tidy should have last run.\n - `last_run_warning` `(duration: 2d)` - the warning delay threshold between when tidy should have last run.\n\nThis health check verifies that tidy has run within the last run window. This can be critical\/warning alerts as this can start to seriously impact Vault's performance.\n\n**Remediation steps**:\n\n1. Schedule a manual run of tidy with `vault write`:\n\n  ```shell-session\n  $ vault write <mount>\/tidy \\\n      tidy_cert_store=true \\\n      tidy_revoked_certs=true \\\n      tidy_acme=true \\\n      tidy_revocation_queue=true \\\n      tidy_cross_cluster_revoked_certs=true \\\n      tidy_revoked_cert_issuer_associations=true\n  ```\n\n1. Review the tidy status endpoint, `vault read <mount>\/tidy-status` for\n   additional information.\n1. Re-configure auto-tidy based on the log information and results of your\n   manual run.\n\n\n### Too many certificates\n\n**Name**: `too_many_certs`\n\n**APIs**:\n\n - `READ \/tidy-status`\n - `LIST \/certs`\n\n**Config Parameters**:\n\n - `count_critical` `(int: 250000)` - the critical threshold at which there are too many certs.\n - `count_warning` `(int: 50000)` - the warning threshold at which there are too many certs.\n\nThis health check verifies that this cluster has a reasonable number of certificates. 
Ideally this would be fetched from tidy's status or a new metric reporting format, but as a fallback when tidy hasn't run, a list operation will be performed instead.\n\n**Remediation steps**:\n\n1. Verify that tidy ran recently with `vault read`:\n    ```shell-session\n    $ vault read <mount>\/tidy-status\n    ```\n1. Schedule a manual run of tidy with `vault write`:\n    ```shell-session\n    $ vault write <mount>\/tidy \\\n      tidy_cert_store=true \\\n      tidy_revoked_certs=true \\\n      tidy_acme=true \\\n      tidy_revocation_queue=true \\\n      tidy_cross_cluster_revoked_certs=true \\\n      tidy_revoked_cert_issuer_associations=true\n    ```\n1. Enable `auto-tidy`.\n1. Make sure that you are not renewing certificates too soon. Certificate\n   lifetimes should reflect the expected usage of the certificate. If the TTL is\n   set appropriately, most certificates renew at approximately 2\/3 of their\n   lifespan.\n1. Consider setting the `no_store` field for all roles to `true` and use [BYOC revocations](\/vault\/api-docs\/secret\/pki#revoke-certificate) to avoid storage.\n\n### Enable ACME issuance\n\n**Name**: `enable_acme_issuance`\n\n**APIs**:\n\n- `READ \/config\/acme`\n- `READ \/config\/cluster`\n- `LIST \/issuers` (unauthenticated)\n- `READ \/issuer\/:issuer_ref\/json` (unauthenticated)\n\n**Config Parameters**: (none)\n\nThis health check verifies that ACME is enabled within a mount that contains an intermediary issuer, as this is considered a best-practice to support a self-rotating PKI infrastructure.\n\nReview the [ACME Certificate Issuance](\/vault\/api-docs\/secret\/pki#acme-certificate-issuance)\nAPI documentation to learn about enabling ACME support in Vault.\n\n### ACME response headers\n\n**Name**: `allow_acme_headers`\n\n**APIs**:\n\n- `READ \/sys\/internal\/ui\/mounts`\n\n**Config Parameters**: (none)\n\nThis health check verifies if the `Replay-Nonce`, `Link`, and `Location` headers have been added to `allowed_response_headers`, 
when the ACME feature is enabled. The ACME protocol will not work if these headers are not added to the mount.\n\n**Remediation steps**:\n\nUse `vault secrets tune` to add the missing headers to `allowed_response_headers`:\n```shell-session\n$ vault secrets tune \\\n  -allowed-response-headers=\"Last-Modified\" \\\n  -allowed-response-headers=\"Replay-Nonce\" \\\n  -allowed-response-headers=\"Link\" \\\n  -allowed-response-headers=\"Location\" \\\n  <mount>\n```","site":"vault"}
rather than OK   For informational purposes  it reads the CRL config and suggests enabling auto rebuild CRLs if not enabled     Remediation steps     Use  vault write  to enable CRL auto rebuild      shell session   vault write  mount  config crl auto rebuild true          Hardware Backed root certificate    Name     hardware backed root     APIs         LIST  issuers   unauthenticated      READ  issuer  issuer ref      READ  key  key ref     Config Parameters         enabled    boolean  false     defaults to not being run   This health check checks issuers for root CAs backed by software keys  While Vault is secure  for production root certificates  we d recommend the additional integrity of KMS backed keys  This is an informational check only  When all roots are KMS backed  we ll return OK  when no issuers are roots  we ll return not applicable   Read more about hardware backed keys within  Vault Enterprise Managed Keys   vault docs enterprise managed keys       Root certificate issued Non CA leaves    Name     root issued leaves     APIs         LIST  issuers   unauthenticated      READ  issuer  issuer ref pem   unauthenticated      LIST  certs      READ  certs  serial   unauthenticated     Config Parameters         certs to fetch    int  100     a quantity of leaf certificates to fetch to see if any leaves have been issued by a root directly   This health check verifies whether a proper CA hierarchy is in use  We do this by fetching  certs to fetch  leaf certificates  configurable  and seeing if they are a non issuer leaf and if they were signed by a root issuer in this mount  If one is found  we ll issue a warning about this  and recommend setting up an intermediate CA     Remediation steps     1  Restrict the use of  sign    sign verbatim    issue   and ACME APIs against    the root issuer  1  Create an intermediary issuer in a different mount  1  Have the root issuer sign the new intermediary issuer  1  Issue new leaf certificates using the intermediary 
issuer       Role allows implicit localhost issuance    Name     role allows localhost     APIs         LIST  roles      READ  roles  name     Config Parameters     none   Checks whether any roles exist that allow implicit localhost based issuance   allow localhost true   with a non empty  allowed domains  value     Remediation steps     1  Set  allow localhost  to  false  for all roles  1  Update the  allowed domains  field with an explicit list of allowed    localhost like domains       Role allows Glob Based wildcard issuance    Name     role allows glob wildcards     APIs         LIST  roles      READ  roles  name     Config Parameters         allowed roles    list  nil     an allow list of roles to ignore   Check each role to see whether or not it allows wildcard issuance   and   glob domains  Wildcards and globs can interact and result in nested wildcards among other  potentially dangerous  quirks     Remediation steps     1  Split any role that need both of  allow glob domains  and  allow wildcard certificates  to be true into two roles  1  Continue splitting roles until both of the following are true for all roles        The role has  allow glob domains    or    allow wildcard certificates   but       not both        Roles with  allow glob domains    and    allow wildcard certificates  are       the only roles required for   all   SANs on the certificate  1  Add the roles that allow glob domains and wildcards to  allowed roles  so    Vault ignores them in future checks       Role sets  no store false  and performance    Name     role no store false     APIs         LIST  roles      READ  roles  name      LIST  certs      READ  config crl     Config Parameters         allowed roles    list  nil     an allow list of roles to ignore   Checks each role to see whether  no store  is set to  false     Warning   Vault will provide warnings and performance will suffer if you have a large number of certificates without temporal CRL auto rebuilding and set  no store  
to  true      Warning      Remediation steps     1  Update none ACME roles with  no store false     NOTE    Roles used for ACME    issuance must have  no store   set to  true   1  Set your certificate lifetimes as short as possible  1  Use  BYOC revocations   vault api docs secret pki revoke certificate  to    revoke certificates as needed       Accessibility of audit information    Name     audit visibility     APIs         READ  sys mounts  mount tune     Config Parameters         ignored parameters    list  nil     a list of parameters to ignore their HMAC status   This health check checks whether audit information is accessible to log consumers  validating whether our list of safe and unsafe audit parameters are generally followed  These are informational responses  if any are present     Remediation steps     Use  vault secrets tune  to set the desired audit parameters      shell session   vault secrets tune      audit non hmac response keys certificate      audit non hmac response keys issuing ca      audit non hmac response keys serial number      audit non hmac response keys error      audit non hmac response keys ca chain      audit non hmac request keys certificate      audit non hmac request keys issuer ref      audit non hmac request keys common name      audit non hmac request keys alt names      audit non hmac request keys other sans      audit non hmac request keys ip sans      audit non hmac request keys uri sans      audit non hmac request keys ttl      audit non hmac request keys not after      audit non hmac request keys serial number      audit non hmac request keys key type      audit non hmac request keys private key format      audit non hmac request keys managed key name      audit non hmac request keys managed key id      audit non hmac request keys ou      audit non hmac request keys organization      audit non hmac request keys country      audit non hmac request keys locality      audit non hmac request keys province      audit non hmac 
request keys street address      audit non hmac request keys postal code      audit non hmac request keys permitted dns domains      audit non hmac request keys policy identifiers      audit non hmac request keys ext key usage oids      audit non hmac request keys csr       mount           ACL policies allow problematic endpoints    Name     policy allow endpoints     APIs         LIST  sys policy      READ  sys policy  name     Config Parameters         allowed policies    list  nil      a list of policies to allow list for access to insecure APIs   This health check checks whether unsafe access to APIs  such as  sign intermediate    sign verbatim   and  sign self issued   are allowed  Any findings are a critical result and should be rectified by the administrator or explicitly allowed       Allow If Modified Since requests    Name     allow if modified since     APIs         READ  sys internal ui mounts     Config Parameters     none   This health check verifies if the  If Modified Since  header has been added to  passthrough request headers  and if  Last Modified  header has been added to  allowed response headers   This is an informational message if both haven t been configured  or a warning if only one has been configured     Remediation steps     1  Update  allowed response headers  and  passthrough request headers  for all    policies with  vault secrets tune         shell session     vault secrets tune          passthrough request headers  If Modified Since           allowed response headers  Last Modified           mount         1  Update ACME specific headers with  vault secrets tune   if you are using ACME         shell session     vault secrets tune          passthrough request headers  If Modified Since           allowed response headers  Last Modified           allowed response headers  Replay Nonce           allowed response headers  Link           allowed response headers  Location           mount             Auto Tidy disabled    Name     enable 
auto tidy     APIs         READ  config auto tidy     Config Parameters         interval duration critical    duration  7d     the maximum allowed interval duration to hit critical threshold      interval duration warning    duration  2d     the maximum allowed interval duration to hit a warning threshold      pause duration critical    duration  1s     the maximum allowed pause duration to hit a critical threshold      pause duration warning    duration  200ms     the maximum allowed pause duration to hit a warning threshold   This health check verifies that auto tidy is enabled  with sane defaults for interval duration and pause duration  Any disabled findings will be informational  as this is a best practice but not strictly required  but other findings w r t   interval duration  or  pause duration  will be critical warnings     Remediation steps    Use  vault write  to enable auto tidy with the recommended defaults      shell session   vault write  mount  config auto tidy       enabled true       tidy cert store true       tidy revoked certs true       tidy acme true       tidy revocation queue true       tidy cross cluster revoked certs true       tidy revoked cert issuer associations true          Tidy hasn t run    Name     tidy last run     APIs         READ  tidy status     Config Parameters         last run critical    duration  7d     the critical delay threshold between when tidy should have last run      last run warning    duration  2d     the warning delay threshold between when tidy should have last run   This health check verifies that tidy has run within the last run window  This can be critical warning alerts as this can start to seriously impact Vault s performance     Remediation steps     1  Schedule a manual run of tidy with  vault write         shell session     vault write  mount  tidy         tidy cert store true         tidy revoked certs true         tidy acme true         tidy revocation queue true         tidy cross cluster revoked 
certs true         tidy revoked cert issuer associations true        1  Review the tidy status endpoint   vault read  mount  tidy status  for    additional information  1  Re configure auto tidy based on the log information and results of your    manual run        Too many certificates    Name     too many certs     APIs         READ  tidy status      LIST  certs     Config Parameters         count critical    int  250000     the critical threshold at which there are too many certs      count warning    int  50000     the warning threshold at which there are too many certs   This health check verifies that this cluster has a reasonable number of certificates  Ideally this would be fetched from tidy s status or a new metric reporting format  but as a fallback when tidy hasn t run  a list operation will be performed instead     Remediation steps     1  Verify that tidy ran recently with  vault read          shell session       vault read  mount  tidy status          1  Schedule a manual run of tidy with  vault write          shell session       vault write  mount  tidy         tidy cert store true         tidy revoked certs true         tidy acme true         tidy revocation queue true         tidy cross cluster revoked certs true         tidy revoked cert issuer associations true         1  Enable  auto tidy   1  Make sure that you are not renewing certificates too soon  Certificate    lifetimes should reflect the expected usage of the certificate  If the TTL is    set appropriately  most certificates renew at approximately 2 3 of their    lifespan  1  Consider setting the  no store  field for all roles to  true  and use  BYOC revocations   vault api docs secret pki revoke certificate  to avoid storage       Enable ACME issuance    Name     enable acme issuance     APIs        READ  config acme     READ  config cluster     LIST  issuers   unauthenticated     READ  issuer  issuer ref json   unauthenticated     Config Parameters     none   This health check verifies 
that ACME is enabled within a mount that contains an intermediary issuer  as this is considered a best practice to support a self rotating PKI infrastructure   Review the  ACME Certificate Issuance   vault api docs secret pki acme certificate issuance  API documentation to learn about enabling ACME support in Vault       ACME response headers    Name     allow acme headers     APIs        READ  sys internal ui mounts     Config Parameters     none   This health check verifies if the   Replay Nonce    Link   and  Location  headers have been added to  allowed response headers   when the ACME feature is enabled  The ACME protocol will not work if these headers are not added to the mount     Remediation steps     Use  vault secrets tune  to add the missing headers to  allowed response headers      shell session   vault secrets tune      allowed response headers  Last Modified       allowed response headers  Replay Nonce       allowed response headers  Link       allowed response headers  Location       mount     "}
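Since the health-check output is meant to be consumed programmatically, a minimal sketch of turning a result list into an overall exit code may help. This is not part of Vault; the status names and codes come from the table in the documentation above, and the result-dict shape (a `status` key per entry) follows the documented output keys:

```python
# Map of health-check status -> (status_code, exit_code), per the docs above.
STATUS = {
    "not_applicable": (0, 0),
    "ok": (1, 0),
    "informational": (2, 2),
    "warning": (3, 3),
    "critical": (4, 4),
    "invalid_version": (5, 5),
    "insufficient_permissions": (6, 6),
}

def overall_exit_code(results):
    """Given a list of health-check results (dicts with a 'status' key),
    return the exit code for the most severe status code seen."""
    worst = max((STATUS[r["status"]][0] for r in results), default=0)
    # Look up the exit code paired with that status code.
    return next(exit_ for code, exit_ in STATUS.values() if code == worst)

# Example: one ok result plus one warning yields the warning exit code.
print(overall_exit_code([{"status": "ok"}, {"status": "warning"}]))  # -> 3
```

Note that `ok` maps to exit code 0 even though its status code is 1, which is why the status code and exit code are tracked separately.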
{"questions":"vault This page contains the list of deprecations and important or breaking changes page title Upgrading to Vault 0 9 0 Guides Overview layout docs for Vault 0 9 0 Please read it carefully","answers":"---\nlayout: docs\npage_title: Upgrading to Vault 0.9.0 - Guides\ndescription: |-\n  This page contains the list of deprecations and important or breaking changes\n  for Vault 0.9.0. Please read it carefully.\n---\n\n# Overview\n\nThis page contains the list of deprecations and important or breaking changes\nfor Vault 0.9.0 compared to the most recent release. Please read it carefully.\n\n### PKI root generation (Since 0.8.1)\n\nCalling [`pki\/root\/generate`][generate-root] when a CA cert\/key already exists will now return a\n`204` instead of overwriting an existing root. If you want to recreate the\nroot, first run a delete operation on `pki\/root` (requires `sudo` capability),\nthen generate it again.\n\n### Token period in AWS IAM auth (Since 0.8.2)\n\nIn prior versions of Vault, if authenticating via AWS IAM and requesting a\nperiodic token, the period was not properly respected. This could lead to\ntokens expiring unexpectedly, or a token lifetime being longer than expected.\nUpon token renewal with Vault 0.8.2 the period will be properly enforced.\n\n### SSH CLI parameters (Since 0.8.2)\n\n`vault ssh` users should supply `-mode` and `-role` to reduce the number of API\ncalls. A future version of Vault will mark these optional values as required.\nFailure to supply `-mode` or `-role` will result in a warning.\n\n### Vault plugin init (Since 0.8.2)\n\nVault plugins will first briefly run a restricted version of the plugin to\nfetch metadata, and then lazy-load the plugin on first request to prevent\ncrash\/deadlock of Vault during the unseal process. 
Plugins will need to be\nbuilt with the latest changes in order for them to run properly.\n\n### Policy input format standardization (Since 0.8.3)\n\nFor all built-in authentication backends, policies can now be specified as a\ncomma-delimited string or an array if using JSON as API input; on read,\npolicies will be returned as an array; and the `default` policy will not be\nforcefully added to policies saved in configurations. Please note that the\n`default` policy will continue to be added to generated tokens, however, rather\nthan backends adding `default` to the given set of input policies (in some\ncases, and not in others), the stored set will reflect the user-specified set.\n\n### PKI `sign-self-issued` modifies `Issuer` in generated certificates (Since 0.8.3)\n\nIn 0.8.2 the endpoint would not modify the Issuer in the generated certificate,\nleaving the output self-issued. Although theoretically valid, in practice\ncrypto stacks were unhappy validating paths containing such certs. As a result,\n`sign-self-issued` now encodes the signing CA's Subject DN into the Issuer DN\nof the generated certificate.\n\n### `sys\/raw` requires enabling (Since 0.8.3)\n\nWhile the `sys\/raw` endpoint can be extremely useful in break-glass or support\nscenarios, it is also extremely dangerous. As of now, a configuration file\noption `raw_storage_endpoint` must be set in order to enable this API endpoint.\nOnce set, the available functionality has been enhanced slightly; it now\nsupports listing and decrypting most of Vault's core data structures, except\nfor the encryption keyring itself.\n\n### `generic` is now `kv` (Since 0.8.3)\n\nTo better reflect its actual use, the `generic` backend is now `kv`. 
Using\n`generic` will still work for backwards compatibility.\n\n### HSM users need to specify new config options (In 0.9)\n\nWhen using Vault with an HSM, a new parameter is required: `hmac_key_label`.\nThis performs a similar function to `key_label` but for the HMAC key Vault will\nuse. Vault will generate a suitable key if this value is specified and\n`generate_key` is set to true. See [the seal configuration page][pkcs11-seal] for\nmore information.\n\n### API HTTP client behavior (In 0.9)\n\nWhen calling `NewClient` the API no longer modifies the provided\nclient\/transport. In particular this means it will no longer enable redirection\nlimiting and HTTP\/2 support on custom clients. If you want to make changes to\nan HTTP client, it is suggested that you start from one created by\n`DefaultConfig`.\n\n### AWS EC2 client nonce behavior (In 0.9)\n\nThe client nonce generated by the backend that gets returned along with the\nauthentication response will be audited in plaintext. If this is undesired, the\nclients can choose to supply a custom nonce to the login endpoint. From now on,\na custom nonce set by the client will not be returned with the authentication\nresponse, and hence will not be audit logged.\n\n### AWS auth role options (In 0.9)\n\nThe API will now error when trying to create or update a role with the\nmutually-exclusive options `disallow_reauthentication` and\n`allow_instance_migration`.\n\n### SSH CA role read changes (In 0.9)\n\nWhen reading back a role from the `ssh` backend, the TTL\/max TTL values will\nnow be an integer number of seconds rather than a string. 
This better matches\nthe API elsewhere in Vault.\n\n### SSH role list changes (In 0.9)\n\nWhen listing roles from the `ssh` backend via the API, the response data will\nadditionally return a `key_info` map that will contain a map of each key with a\ncorresponding object containing the `key_type`.\n\n### More granularity in audit logs (In 0.9)\n\nAudit request and response entries are still in RFC3339 format but now have a\ngranularity of nanoseconds.\n\n[generate-root]: \/vault\/api-docs\/secret\/pki#generate-root\n[pkcs11-seal]: \/vault\/docs\/configuration\/seal\/pkcs11","site":"vault"}
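The policy input standardization described in the 0.8.3 notes above (comma-delimited string or JSON array in, array out, with `default` no longer forced into the stored set) can be sketched as follows. This is a hypothetical normalizer for illustration, not Vault's actual implementation; the lowercasing mirrors Vault's case-insensitive treatment of policy names and is an assumption here:

```python
def normalize_policies(raw):
    """Accept either a comma-delimited string or a list of policy names and
    return the canonical list form, as built-in auth backends now return
    on read. Note: 'default' is NOT forcibly added to the stored set; it is
    only added to generated tokens."""
    if isinstance(raw, str):
        raw = raw.split(",")
    # Trim whitespace, lowercase (assumed case-insensitivity), drop empties.
    return [p.strip().lower() for p in raw if p.strip()]

print(normalize_policies("dev, Ops"))  # -> ['dev', 'ops']
print(normalize_policies(["admin"]))   # -> ['admin']
```

Either input shape yields the same stored, user-specified set on read.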
{"questions":"vault page title Upgrading to Vault 1 13 x Guides This page contains the list of deprecations and important or breaking changes for Vault 1 13 x Please read it carefully layout docs Overview","answers":"---\nlayout: docs\npage_title: Upgrading to Vault 1.13.x - Guides\ndescription: |-\n  This page contains the list of deprecations and important or breaking changes\n  for Vault 1.13.x. Please read it carefully.\n---\n\n# Overview\n\nThis page contains the list of deprecations and important or breaking changes\nfor Vault 1.13.x compared to 1.12. Please read it carefully.\n\n## Changes\n\n@include 'consul-dataplane-upgrade-note.mdx'\n\n### Undo logs\n\nVault 1.13 introduced changes to add extra resiliency to log shipping with undo logs. These logs can help prevent several Merkle syncs from occurring due to rapid key changes in the primary Merkle tree as the secondary tries to synchronize. For integrated storage users, upgrading Vault to 1.13 will enable this feature by default. For Consul storage users, Consul also needs to be upgraded to 1.14 to use this feature.\n\n### User lockout\n\nAs of version 1.13, Vault will stop trying to validate user credentials if the\nuser submits multiple invalid credentials in quick succession. During lockout,\nVault ignores requests from the barred user rather than responding with a\npermission denied error.\n\nUser lockout is enabled by default with a lockout threshold of 5 attempts, a\nlockout duration of 15 minutes, and a counter reset window of 15 minutes.\n\nFor more information, refer to the [User lockout](\/vault\/docs\/concepts\/user-lockout)\noverview.\n\n### Active directory secrets engine deprecation\n\nThe Active Directory (AD) secrets engine has been deprecated as of the Vault 1.13 release.\nWe will continue to support the AD secrets engine in maintenance mode for six major Vault\nreleases. Maintenance mode means that we will fix bugs and security issues but will not add\nnew features. 
For additional information, see the [deprecation table](\/vault\/docs\/deprecation)\nand [migration guide](\/vault\/docs\/secrets\/ad\/migration-guide).\n\n### AliCloud auth role parameter\n\nThe AliCloud auth plugin will now require the `role` parameter on login. This\nhas always been documented as a required field but the requirement will now be\nenforced.\n\n### Mounts associated with removed builtin plugins will result in core shutdown on upgrade\n\nAs of 1.13.0 Standalone (logical) DB Engines and the AppId Auth Method have been\nmarked with the `Removed` status. Any attempt to unseal Vault with\nmounts backed by one of these builtin plugins will result in an immediate\nshutdown of the Vault core.\n\n-> **NOTE** In the event that an external plugin with the same name and type as\na deprecated builtin is deregistered, any subsequent unseal will continue to\nunseal with an unusable auth backend, and a corresponding ERROR log.\n\n```shell-session\n$ vault plugin register -sha256=c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1 auth app-id\nSuccess! Registered plugin: app-id\n$ vault auth enable -plugin-name=app-id plugin\nSuccess! Enabled app-id auth method at: app-id\/\n$ vault auth list -detailed | grep \"app-id\"\napp-id\/    app-id    auth_app-id_3a8f2e24    system         system     default-service    replicated     false        false                      map[]      n\/a                        0018263c-0d64-7a70-fd5c-50e05c5f5dc3    n\/a        n\/a                      c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1    n\/a\n$ vault plugin deregister auth app-id\nSuccess! 
Deregistered plugin (if it was registered): app-id\n$ vault plugin list -detailed | grep \"app-id\"\napp-id                               auth        v1.13.0+builtin.vault                                 removed\n$ curl --header \"X-Vault-Token: $VAULT_TOKEN\" --request POST http:\/\/127.0.0.2:8200\/v1\/sys\/seal\n$ vault operator unseal <key1>\n...\n$ vault operator unseal <key2>\n...\n$ vault operator unseal <key3>\n...\n$ grep \"app-id\" \/path\/to\/vault.log\n[ERROR] core: skipping deprecated auth entry: name=app-id path=app-id\/ error=\"mount entry associated with removed builtin\"\n[ERROR] core: skipping initialization for nil auth backend: path=app-id\/ type=app-id version=\"v1.13.0+builtin.vault\"\n```\n\nThe remediation for affected mounts is to downgrade to the previously-used version of Vault\nand replace any `Removed` feature with the\n[preferred alternative\nfeature](\/vault\/docs\/deprecation\/faq#q-what-should-i-do-if-i-use-mount-filters-appid-or-any-of-the-standalone-db-engines).\n\nFor more information on the phases of deprecation, see the [Deprecation Notices\nFAQ](\/vault\/docs\/deprecation\/faq#q-what-are-the-phases-of-deprecation).\n\n#### Impacted versions\n\nAffects upgrading from any version of Vault to 1.13.x. 
All other upgrade paths\nare unaffected.\n\n### Application of Sentinel Role Governing Policies (RGPs) via identity groups\n\n@include 'application-of-sentinel-rgps-via-identity-groups.mdx'\n\n## Known issues\n\n@include 'tokenization-rotation-persistence.mdx'\n\n@include 'known-issues\/ocsp-redirect.mdx'\n\n@include 'known-issues\/1_13-reload-census-panic-standby.mdx'\n\n### PKI revocation request forwarding\n\nIf a revocation request comes in to a standby or performance secondary node,\nfor a certificate that is present locally, the request will not be correctly\nforwarded to the active node of this cluster.\n\nAs a workaround, submit revocation requests to the active node only.\n\n### STS credentials do not return a lease_duration\nVault 1.13.0 introduced a change to the AWS Secrets Engine such that it no longer creates leases for STS credentials due\nto the fact that they cannot be revoked or renewed. As part of this change, a bug was introduced which causes `lease_duration`\nto always return zero. This prevents the Vault Agent from refreshing STS credentials and may introduce undesired behaviour\nfor anything which relies on a non-zero `lease_duration`.\n\nFor applications that can control what value to look for, the `ttl` value in the response can be used to know when to\nrequest STS credentials next.\n\nAn additional workaround for users rendering STS credentials via the Vault Agent is to set the\n`static-secret-render-interval` for a template using the credentials. Setting this configuration to 15 minutes\naccommodates the default minimum duration of an STS token and overrides the default render interval of 5 minutes.\n\n#### Impacted versions\n\nAffects Vault 1.13.0 only.\n\n### LDAP pagination issue\n\nThere was a regression introduced in 1.13.2 relating to LDAP maximum page sizes, resulting in\nan error `no LDAP groups found in groupDN [...] only policies from locally-defined groups available`.  
The issue\noccurs when upgrading Vault with an instance that has an existing LDAP Auth configuration.\n\nAs a workaround, disable paged searching using the following:\n```shell-session\nvault write auth\/ldap\/config max_page_size=-1\n```\n\n#### Impacted versions\n\nAffects Vault 1.13.2.\n\n### PKI Cross-Cluster revocation requests and unified CRL\/OCSP\n\nWhen revoking certificates on a cluster that doesn't own the\ncertificate, writing the revocation request will fail with\na message like `error persisting cross-cluster revocation request`.\nSimilar errors will appear in the log for failure to write\nunified CRL and unified delta CRL WAL entries.\n\nAs a workaround, submit revocation requests to the cluster which\nissued the certificate, or use BYOC revocation. Use cluster-local\nOCSP and CRLs until this is resolved.\n\n#### Impacted versions\n\nAffects Vault 1.13.0 to 1.13.2. Fixed in 1.13.3.\n\nOn upgrade, all local revocations will be synchronized between\nclusters; revocation requests are not persisted when failing to\nwrite cross-cluster.\n\n### Slow startup time when storing PKI certificates\n\nThere was a regression introduced in 1.13.0 where Vault is slow to start because the\nPKI secret engine performs a list operation on the stored certificates. 
If a large number\nof certificates are stored this can cause long start times on active and standby nodes.\n\nThere is currently no workaround for this other than limiting the number of certificates stored\nin Vault via the [PKI tidy](\/vault\/api-docs\/secret\/pki#tidy) or using `no_store`\nflag for [PKI roles](\/vault\/api-docs\/secret\/pki#createupdate-role).\n\n#### Impacted versions\n\nAffects Vault 1.13.0+\n\n@include 'perf-standby-token-create-forwarding-failure.mdx'\n\n@include 'known-issues\/update-primary-data-loss.mdx'\n\n@include 'pki-double-migration-bug.mdx'\n\n@include 'known-issues\/update-primary-addrs-panic.mdx'\n\n@include 'known-issues\/transit-managed-keys-panics.mdx'\n\n@include 'known-issues\/internal-error-namespace-missing-policy.mdx'\n\n@include 'known-issues\/ephemeral-loggers-memory-leak.mdx'\n\n@include 'known-issues\/sublogger-levels-unchanged-on-reload.mdx'\n\n@include 'known-issues\/expiration-metrics-fatal-error.mdx'\n\n@include 'known-issues\/perf-secondary-many-mounts-deadlock.mdx'","site":"vault","answers_cleaned":"    layout  docs page title  Upgrading to Vault 1 13 x   Guides description       This page contains the list of deprecations and important or breaking changes   for Vault 1 13 x  Please read it carefully         Overview  This page contains the list of deprecations and important or breaking changes for Vault 1 13 x compared to 1 12  Please read it carefully      Changes   include  consul dataplane upgrade note mdx       Undo logs  Vault 1 13 introduced changes to add extra resiliency to log shipping with undo logs  These logs can help prevent several Merkle syncs from occurring due to rapid key changes in the primary Merkle tree as the secondary tries to synchronize  For integrated storage users  upgrading Vault to 1 13 will enable this feature by default  For Consul storage users  Consul also needs to be upgraded to 1 14 to use this feature       User lockout  As of version 1 13  Vault will stop trying to validate 
user credentials if the user submits multiple invalid credentials in quick succession  During lockout  Vault ignores requests from the barred user rather than responding with a permission denied error   User lockout is enabled by default with a lockout threshold of 5 attempt  a lockout duration of 15 minutes  and a counter reset window of 15 minutes   For more information  refer to the  User lockout   vault docs concepts user lockout  overview       Active directory secrets engine deprecation  The Active Directory  AD  secrets engine has been deprecated as of the Vault 1 13 release  We will continue to support the AD secrets engine in maintenance mode for six major Vault releases  Maintenance mode means that we will fix bugs and security issues but will not add new features  For additional information  see the  deprecation table   vault docs deprecation  and  migration guide   vault docs secrets ad migration guide        AliCloud auth role parameter  The AliCloud auth plugin will now require the  role  parameter on login  This has always been documented as a required field but the requirement will now be enforced       Mounts associated with removed builtin plugins will result in core shutdown on upgrade  As of 1 13 0 Standalone  logical  DB Engines and the AppId Auth Method have been marked with the  Removed  status  Any attempt to unseal Vault with mounts backed by one of these builtin plugins will result in an immediate shutdown of the Vault core        NOTE   In the event that an external plugin with the same name and type as a deprecated builtin is deregistered  any subsequent unseal will continue to unseal with an unusable auth backend  and a corresponding ERROR log      shell session   vault plugin register  sha256 c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1 auth app id Success  Registered plugin  app id   vault auth enable  plugin name app id plugin Success  Enabled app id auth method at  app id    vault auth list  detailed   grep  app 
id  app id     app id    auth app id 3a8f2e24    system         system     default service    replicated     false        false                      map        n a                        0018263c 0d64 7a70 fd5c 50e05c5f5dc3    n a        n a                      c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1    n a   vault plugin deregister auth app id Success  Deregistered plugin  if it was registered   app id   vault plugin list  detailed   grep  app id  app id                               auth        v1 13 0 builtin vault                                 removed   curl   header  X Vault Token   VAULT TOKEN    request POST http   127 0 0 2 8200 v1 sys seal   vault operator unseal  key1        vault operator unseal  key2        vault operator unseal  key3        grep  app id   path to vault log  ERROR  core  skipping deprecated auth entry  name app id path app id  error  mount entry associated with removed builtin   ERROR  core  skipping initialization for nil auth backend  path app id  type app id version  v1 13 0 builtin vault       The remediation for affected mounts is to downgrade to the previously used version of Vault environment variable and replace any  Removed  feature with the  preferred alternative feature   vault docs deprecation faq q what should i do if i use mount filters appid or any of the standalone db engines    For more information on the phases of deprecation  see the  Deprecation Notices FAQ   vault docs deprecation faq q what are the phases of deprecation         Impacted versions  Affects upgrading from any version of Vault to 1 13 x  All other upgrade paths are unaffected       Application of Sentinel Role Governing Policies  RGPs  via identity groups   include  application of sentinel rgps via identity groups mdx      Known issues   include  tokenization rotation persistence mdx    include  known issues ocsp redirect mdx    include  known issues 1 13 reload census panic standby mdx       PKI revocation request 
forwarding  If a revocation request comes in to a standby or performance secondary node  for a certificate that is present locally  the request will not be correctly forwarded to the active node of this cluster   As a workaround  submit revocation requests to the active node only       STS credentials do not return a lease duration Vault 1 13 0 introduced a change to the AWS Secrets Engine such that it no longer creates leases for STS credentials due to the fact that they cannot be revoked or renewed  As part of this change  a bug was introduced which causes  lease duration  to always return zero  This prevents the Vault Agent from refreshing STS credentials and may introduce undesired behaviour for anything which relies on a non zero  lease duration    For applications that can control what value to look for  the  ttl  value in the response can be used to know when to request STS credentials next   An additional workaround for users rendering STS credentials via the Vault Agent is to set the  static secret render interval  for a template using the credentials  Setting this configuration to 15 minutes accommodates the default minimum duration of an STS token and overrides the default render interval of 5 minutes        Impacted versions  Affects Vault 1 13 0 only       LDAP pagination issue  There was a regression introduced in 1 13 2 relating to LDAP maximum page sizes  resulting in an error  no LDAP groups found in groupDN       only policies from locally defined groups available    The issue occurs when upgrading Vault with an instance that has an existing LDAP Auth configuration   As a workaround  disable paged searching using the following     shell session vault write auth ldap config max page size  1           Impacted versions  Affects Vault 1 13 2       PKI Cross Cluster revocation requests and unified CRL OCSP  When revoking certificates on a cluster that doesn t own the certificate  writing the revocation request will fail with a message like  error 
persisting cross cluster revocation request   Similar errors will appear in the log for failure to write unified CRL and unified delta CRL WAL entries   As a workaround  submit revocation requests to the cluster which issued the certificate  or use BYOC revocation  Use cluster local OCSP and CRLs until this is resolved        Impacted versions  Affects Vault 1 13 0 to 1 13 2  Fixed in 1 13 3   On upgrade  all local revocations will be synchronized between clusters  revocation requests are not persisted when failing to write cross cluster       Slow startup time when storing PKI certificates  There was a regression introduced in 1 13 0 where Vault is slow to start because the PKI secret engine performs a list operation on the stored certificates  If a large number of certificates are stored this can cause long start times on active and standby nodes   There is currently no workaround for this other than limiting the number of certificates stored in Vault via the  PKI tidy   vault api docs secret pki tidy  or using  no store  flag for  PKI roles   vault api docs secret pki createupdate role         Impacted versions  Affects Vault 1 13 0    include  perf standby token create forwarding failure mdx    include  known issues update primary data loss mdx    include  pki double migration bug mdx    include  known issues update primary addrs panic mdx    include  known issues transit managed keys panics mdx    include  known issues internal error namespace missing policy mdx    include  known issues ephemeral loggers memory leak mdx    include  known issues sublogger levels unchanged on reload mdx    include  known issues expiration metrics fatal error mdx    include  known issues perf secondary many mounts deadlock mdx "}
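The unseal transcript above shows the ERROR lines Vault 1.13 emits for mounts backed by removed builtin plugins. As a rough pre-upgrade check, those lines can be pulled out of an existing server log with standard tools; this is a sketch, not an official utility, and the heredoc sample log is fabricated for illustration:

```shell
#!/bin/sh
# Sketch: list mount paths that reference removed builtin plugins,
# based on the ERROR lines Vault 1.13 logs on unseal.
# The heredoc below stands in for a real /path/to/vault.log.
LOG="$(mktemp)"
cat > "$LOG" <<'EOF'
[INFO]  core: post-unseal setup starting
[ERROR] core: skipping deprecated auth entry: name=app-id path=app-id/ error="mount entry associated with removed builtin"
[ERROR] core: skipping initialization for nil auth backend: path=app-id/ type=app-id version="v1.13.0+builtin.vault"
EOF
# Keep only the "removed builtin" errors and extract the mount path.
grep 'removed builtin' "$LOG" | sed -n 's/.*path=\([^ ]*\).*/\1/p' | sort -u
rm -f "$LOG"
```

Run against a real log, any output means the corresponding mounts need remediation before the 1.13.x upgrade.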
{"questions":"vault These are general upgrade instructions for Vault plugins Plugin upgrade procedure Upgrading Vault plugins page title Upgrading Plugins Guides layout docs","answers":"---\nlayout: docs\npage_title: Upgrading Plugins - Guides\ndescription: These are general upgrade instructions for Vault plugins.\n---\n\n# Upgrading Vault plugins\n\n## Plugin upgrade procedure\n\nThe following procedures detail steps for upgrading a plugin that has been mounted\nat a path on a running server. The steps are the same whether the plugin being\nupgraded is built-in or external.\n\n~> [Plugin versioning](\/vault\/docs\/plugins#plugin-versioning) was introduced\n  with Vault 1.12.0, so if your Vault server is on 1.11.x or earlier, see the\n  [1.11.x version of this page](\/vault\/docs\/v1.11.x\/upgrading\/plugins)\n  for plugin upgrade instructions.\n\n### Upgrading auth and secrets plugins\n\nThe process is nearly identical for auth and secret plugins. If you are upgrading\nan auth plugin, just replace all usages of `secrets` or `secret` with `auth`.\n\n1. [Register][plugin_registration] the first version of your plugin to the catalog.\n   Skip this step if your initial plugin is built-in or already registered.\n\n    ```shell-session\n    $ vault plugin register \\\n        -sha256=<SHA256 Hex value of the plugin binary> \\\n        secret \\\n        my-secret-plugin\n    Success! Registered plugin: my-secret-plugin\n    ```\n\n1. [Mount][plugin_management] the plugin. Skip this step if your initial plugin\n   is already mounted.\n\n    ```shell-session\n    $ vault secrets enable my-secret-plugin\n    Success! Enabled the my-secret-plugin secrets engine at: my-secret-plugin\/\n    ```\n\n1. Register a second version of your plugin. You **must** use the same plugin\n   type and name (the last two arguments) as the plugin being upgraded. 
This is\n   true regardless of whether the plugin being upgraded is built-in or external.\n\n    ```shell-session\n    $ vault plugin register \\\n        -sha256=<SHA256 Hex value of the plugin binary> \\\n        -command=my-secret-plugin-1.0.1 \\\n        -version=v1.0.1 \\\n        secret \\\n        my-secret-plugin\n    Success! Registered plugin: my-secret-plugin\n    ```\n\n1. Set the new version as the cluster's pinned version.\n\n   ```shell-session\n   $ vault write sys\/plugins\/pins\/secret\/my-secret-plugin version=v1.0.1\n   ```\n\n1. Trigger a global [plugin reload](\/vault\/docs\/commands\/plugin\/reload) to\n   reload all instances of the plugin.\n\n    ```shell-session\n    $ vault plugin reload -type=secret -plugin=my-secret-plugin -scope=global\n    Success! Reloading plugin: my-secret-plugin, reload_id: 98b1e875-4217-745d-07f2-93d14219fb3c\n    ```\n\n1. **Optional:** Check the \"Running Version\" field to verify the new version is\n    running:\n\n    ```shell-session\n    $ vault secrets list -detailed\n    ```\n\nUntil the reload step, the mount will still run the first version of `my-secret-plugin`. When\nthe reload is triggered, Vault will kill `my-secret-plugin`\u2019s process and start the\nnew plugin process for `my-secret-plugin` version 1.0.1.\n\n### Upgrading database plugins\n\n1. [Register][plugin_registration] the first version of your plugin to the catalog.\n   Skip this step if your initial plugin is built-in or already registered.\n\n    ```shell-session\n    $ vault plugin register \\\n        -sha256=<SHA256 Hex value of the plugin binary> \\\n        database \\\n        my-db-plugin\n    Success! Registered plugin: my-db-plugin\n    ```\n\n1. [Mount][plugin_management] the plugin. Skip this step if your initial plugin\n   is already mounted.\n\n    ```shell-session\n    $ vault secrets enable database\n    $ vault write database\/config\/my-db \\\n        plugin_name=my-db-plugin \\\n        # ...\n    Success! 
Data written to: database\/config\/my-db\n    ```\n\n1. Register a second version of your plugin. You **must** use the same plugin\n   type and name (the last two arguments) as the plugin being upgraded. This is\n   true regardless of whether the plugin being upgraded is built-in or external.\n\n    ```shell-session\n    $ vault plugin register \\\n        -sha256=<SHA256 Hex value of the plugin binary> \\\n        -command=my-db-plugin-1.0.1 \\\n        -version=v1.0.1 \\\n        database \\\n        my-db-plugin\n    Success! Registered plugin: my-db-plugin\n    ```\n\n1. Set the new version as the cluster's pinned version.\n\n    ```shell-session\n    $ vault write sys\/plugins\/pins\/database\/my-db-plugin version=v1.0.1\n    ```\n\n1. Trigger a global [plugin reload](\/vault\/docs\/commands\/plugin\/reload) to\n   reload all instances of the plugin.\n\n    ```shell-session\n    $ vault plugin reload -type=database -plugin=my-db-plugin -scope=global\n    Success! Reloading plugin: my-db-plugin, reload_id: 98b1e875-4217-745d-07f2-93d14219fb3c\n    ```\n\n1. **Optional:** Verify the current version of the running plugin:\n\n    ```shell-session\n    $ vault read database\/config\/my-db\n    ```\n\nUntil the reload step, the mount will still run the first version of `my-db-plugin`. When\nthe reload is triggered, Vault will kill `my-db-plugin`\u2019s process and start the\nnew plugin process for `my-db-plugin` version 1.0.1.\n\n### Downgrading plugins\n\nPlugin downgrades follow the same procedure as upgrades. 
You can use the Vault\nplugin list command to check what plugin versions are available to downgrade to:\n\n```shell-session\n$ vault plugin list secret\nName                Version\n----                -------\nad                  v0.14.0+builtin\nalicloud            v0.13.0+builtin\naws                 v1.12.0+builtin.vault\nazure               v0.14.0+builtin\ncassandra           v1.12.0+builtin.vault\nconsul              v1.12.0+builtin.vault\ngcp                 v0.14.0+builtin\ngcpkms              v0.13.0+builtin\nkv                  v0.13.3+builtin\nldap                v1.12.0+builtin.vault\nmongodb             v1.12.0+builtin.vault\nmongodbatlas        v0.8.0+builtin\nmssql               v1.12.0+builtin.vault\nmysql               v1.12.0+builtin.vault\nnomad               v1.12.0+builtin.vault\nopenldap            v0.9.0+builtin\npki                 v1.12.0+builtin.vault\npostgresql          v1.12.0+builtin.vault\nrabbitmq            v1.12.0+builtin.vault\nssh                 v1.12.0+builtin.vault\nterraform           v0.6.0+builtin\ntotp                v1.12.0+builtin.vault\ntransit             v1.12.0+builtin.vault\n```\n\n### Additional upgrade notes\n\n* As mentioned earlier, disabling existing mounts will wipe the existing data.\n* Overwriting an existing version in the catalog will affect all uses of that\n  plugin version. So if you have 5 different Azure Secrets mounts using v1.0.0,\n  they'll all start using the new binary if you overwrite it. We recommend\n  treating plugin versions in the catalog as immutable, much like version control\n  tags.\n* Each plugin has its own data within Vault storage. While it is rare for HashiCorp\n  maintained plugins to update their storage schema, it is up to plugin authors\n  to manage schema upgrades and downgrades. 
Check the plugin release notes for\n  any unsupported upgrade or downgrade transitions, especially before moving to\n  a new major version or downgrading.\n* Existing Vault [leases](\/vault\/docs\/concepts\/lease) and [tokens](\/vault\/docs\/concepts\/tokens)\n  are generally unaffected by plugin upgrades and reloads. This is because the lifecycle\n  of leases and tokens is handled by core systems within Vault. The plugin itself only\n  handles renewal and revocation of them when it\u2019s requested by those core systems.\n\n[plugin_reload_api]: \/vault\/api-docs\/system\/plugins-reload\n[plugin_registration]: \/vault\/docs\/plugins\/plugin-architecture#plugin-registration\n[plugin_management]: \/vault\/docs\/plugins\/plugin-management#enabling-disabling-external-plugins","site":"vault","answers_cleaned":"    layout  docs page title  Upgrading Plugins   Guides description  These are general upgrade instructions for Vault plugins         Upgrading Vault plugins     Plugin upgrade procedure  The following procedures detail steps for upgrading a plugin that has been mounted at a path on a running server  The steps are the same whether the plugin being upgraded is built in or external       Plugin versioning   vault docs plugins plugin versioning  was introduced   with Vault 1 12 0  so if your Vault server is on 1 11 x or earlier  see the    1 11 x version of this page   vault docs v1 11 x upgrading plugins    for plugin upgrade instructions       Upgrading auth and secrets plugins  The process is nearly identical for auth and secret plugins  If you are upgrading an auth plugin  just replace all usages of  secrets  or  secret  with  auth    1   Register  plugin registration  the first version of your plugin to the catalog     Skip this step if your initial plugin is built in or already registered          shell session       vault plugin register            sha256  SHA256 Hex value of the plugin binary            secret           my secret plugin     Success  
Registered plugin  my secret plugin          1   Mount  plugin management  the plugin  Skip this step if your initial plugin    is already mounted          shell session       vault secrets enable my secret plugin     Success  Enabled the my secret plugin secrets engine at  my secret plugin           1  Register a second version of your plugin  You   must   use the same plugin    type and name  the last two arguments  as the plugin being upgraded  This is    true regardless of whether the plugin being upgraded is built in or external          shell session       vault plugin register            sha256  SHA256 Hex value of the plugin binary             command my secret plugin 1 0 1            version v1 0 1           secret           my secret plugin     Success  Registered plugin  my secret plugin          1  Set the new version as the cluster s pinned version         shell session      vault write sys plugins pins secret my secret plugin version v1 0 1         1  Trigger a global  plugin reload   vault docs commands plugin reload  to    reload all instances of the plugin          shell session       vault plugin reload  type secret  plugin my secret plugin  scope global     Success  Reloading plugin  my secret plugin  reload id  98b1e875 4217 745d 07f2 93d14219fb3c          1    Optional    Check the  Running Version  field to verify the new version is     running          shell session       vault secrets list  detailed          Until the reload step  the mount will still run the first version of  my secret plugin   When the reload is triggered  Vault will kill  my secret plugin  s process and start the new plugin process for  my secret plugin  version 1 0 1       Upgrading database plugins  1   Register  plugin registration  the first version of your plugin to the catalog     Skip this step if your initial plugin is built in or already registered          shell session       vault plugin register          sha256  SHA256 Hex value of the plugin binary            
database           my db plugin     Success  Registered plugin  my db plugin          1   Mount  plugin management  the plugin  Skip this step if your initial plugin    is already mounted          shell session       vault secrets enable database       vault write database config my db           plugin name my db plugin                     Success  Data written to  database config my db          1  Register a second version of your plugin  You   must   use the same plugin    type and name  the last two arguments  as the plugin being upgraded  This is    true regardless of whether the plugin being upgraded is built in or external          shell session       vault plugin register            sha256  SHA256 Hex value of the plugin binary             command my db plugin 1 0 1            version v1 0 1           database           my db plugin     Success  Registered plugin  my db plugin          1  Set the new version as the cluster s pinned version          shell session       vault write sys plugins pins database my db plugin version v1 0 1          1  Trigger a global  plugin reload   vault docs commands plugin reload  to    reload all instances of the plugin          shell session       vault plugin reload  type database  plugin my db plugin  scope global     Success  Reloading plugin  my db plugin  reload id  98b1e875 4217 745d 07f2 93d14219fb3c          1    Optional    Verify the current version of the running plugin          shell session       vault read database config my db          Until the reload step  the mount will still run the first version of  my db plugin   When the reload is triggered  Vault will kill  my db plugin  s process and start the new plugin process for  my db plugin  version 1 0 1       Downgrading plugins  Plugin downgrades follow the same procedure as upgrades  You can use the Vault plugin list command to check what plugin versions are available to downgrade to      shell session   vault plugin list secret Name                Version   
                          ad                  v0 14 0 builtin alicloud            v0 13 0 builtin aws                 v1 12 0 builtin vault azure               v0 14 0 builtin cassandra           v1 12 0 builtin vault consul              v1 12 0 builtin vault gcp                 v0 14 0 builtin gcpkms              v0 13 0 builtin kv                  v0 13 3 builtin ldap                v1 12 0 builtin vault mongodb             v1 12 0 builtin vault mongodbatlas        v0 8 0 builtin mssql               v1 12 0 builtin vault mysql               v1 12 0 builtin vault nomad               v1 12 0 builtin vault openldap            v0 9 0 builtin pki                 v1 12 0 builtin vault postgresql          v1 12 0 builtin vault rabbitmq            v1 12 0 builtin vault ssh                 v1 12 0 builtin vault terraform           v0 6 0 builtin totp                v1 12 0 builtin vault transit             v1 12 0 builtin vault          Additional upgrade notes    As mentioned earlier  disabling existing mounts will wipe the existing data    Overwriting an existing version in the catalog will affect all uses of that   plugin version  So if you have 5 different Azure Secrets mounts using v1 0 0    they ll all start using the new binary if you overwrite it  We recommend   treating plugin versions in the catalog as immutable  much like version control   tags    Each plugin has its own data within Vault storage  While it is rare for HashiCorp   maintained plugins to update their storage schema  it is up to plugin authors   to manage schema upgrades and downgrades  Check the plugin release notes for   any unsupported upgrade or downgrade transitions  especially before moving to   a new major version or downgrading    Existing Vault  leases   vault docs concepts lease  and  tokens   vault docs concepts tokens    are generally unaffected by plugin upgrades and reloads  This is because the lifecycle   of leases and tokens is handled by core systems within Vault  The plugin itself 
only   handles renewal and revocation of them when it s requested by those core systems    plugin reload api    vault api docs system plugins reload  plugin registration    vault docs plugins plugin architecture plugin registration  plugin management    vault docs plugins plugin management enabling disabling external plugins"}
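Each `vault plugin register` step above needs the SHA256 hex digest of the plugin binary for its `-sha256` flag. A minimal sketch of computing that value with standard tools, assuming `sha256sum` is available; the temp file here is a throwaway stand-in, not a real plugin binary:

```shell
#!/bin/sh
# Sketch: derive the -sha256 argument for `vault plugin register`.
# A temp file stands in for the real plugin binary, e.g. one placed in
# the server's configured plugin directory as my-secret-plugin-1.0.1.
PLUGIN_BIN="$(mktemp)"
printf 'stand-in plugin binary' > "$PLUGIN_BIN"
SHA256="$(sha256sum "$PLUGIN_BIN" | awk '{print $1}')"
echo "$SHA256"
# The digest then feeds the register step (not executed here):
#   vault plugin register -sha256="$SHA256" \
#       -command=my-secret-plugin-1.0.1 -version=v1.0.1 secret my-secret-plugin
rm -f "$PLUGIN_BIN"
```

On systems without `sha256sum` (e.g. macOS), `shasum -a 256` produces the same digest.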
{"questions":"vault page title Upgrade to Vault 1 15 x Guides for anyone upgrading to 1 15 x from Vault 1 14 x Deprecations important or breaking changes and remediation recommendations layout docs Overview","answers":"---\nlayout: docs\npage_title: Upgrade to Vault 1.15.x - Guides\ndescription: |-\n  Deprecations, important or breaking changes, and remediation recommendations\n  for anyone upgrading to 1.15.x from Vault 1.14.x.\n---\n\n# Overview\n\nThe Vault 1.15.x upgrade guide contains information on deprecations, important\nor breaking changes, and remediation recommendations for anyone upgrading from\nVault 1.14. **Please read carefully**.\n\n## Consul service registration\n\nAs of version 1.15, `service_tags` supplied to Vault for the purpose of [Consul\nservice registration](\/vault\/docs\/configuration\/service-registration\/consul#service_tags)\nwill be **case-sensitive**.\n\nIn previous versions of Vault tags were converted to lowercase which led to issues,\nfor example when tags contained Traefik rules which use case-sensitive method names\nsuch as `Host()`.\n\nIf you previously used Consul service registration tags ignoring case, or relied\non the lowercase tags created by Vault, then this change may cause unexpected behavior.\n\nPlease audit your Consul storage stanza to ensure that you either:\n\n* Manually convert your `service_tags` to lowercase if required\n* Ensure that any system that relies on the tags is aware of the new case-preserving behavior\n\n## Rollback metrics\n\nVault no longer measures and reports the metrics `vault.rollback.attempts.{MOUNTPOINT}` and `vault.route.rollback.{MOUNTPOINT}` by default. 
The new default metrics are `vault.rollback.attempts`\nand `vault.route.rollback`, which **do not** contain the mount point in the metric name.\n\nTo continue measuring `vault.rollback.attempts.{MOUNTPOINT}` and\n`vault.route.rollback.{MOUNTPOINT}`, you must explicitly enable mount-specific\nmetrics in the `telemetry` stanza of your Vault configuration with the\n[`add_mount_point_rollback_metrics`](\/vault\/docs\/configuration\/telemetry#add_mount_point_rollback_metrics)\noption.\n\n## Application of Sentinel Role Governing Policies (RGPs) via identity groups\n\n@include 'application-of-sentinel-rgps-via-identity-groups.mdx'\n\n## Docker image no longer contains `curl`\n\nAs of 1.15.13 and later, the `curl` binary is no longer included in the published Docker container\nimages for Vault and Vault Enterprise. If your workflow depends on `curl` being available in the\ncontainer, consider one of the following strategies:\n\n### Create a wrapper container image\n\nUse the HashiCorp image as a base image to create a new container image with `curl` installed.\n\n```Dockerfile\nFROM hashicorp\/vault-enterprise\nRUN apk add curl\n```\n\n**NOTE:** While this is the preferred option, it will require managing your own registry and rebuilding new images.\n\n### Install it at runtime dynamically\n\nWhen running the image as root (not recommended), you can install it at runtime dynamically by using the `apk` package manager:\n\n```shell-session\ndocker exec <CONTAINER-ID> apk add curl\n```\n```shell-session\nkubectl exec -ti <NAME> 
-- wget https:\/\/github.com\/moparisthebest\/static-curl\/releases\/latest\/download\/curl-amd64 -O \/home\/vault\/curl && chmod +x \/home\/vault\/curl\n```\n\n**NOTE:** When using this option you'll want to verify that the static binary comes from a trusted source.\n\n## Known issues and workarounds\n\n@include 'known-issues\/1_15-auto-upgrade.mdx'\n\n@include 'known-issues\/transit-managed-keys-panics.mdx'\n\n@include 'known-issues\/transit-managed-keys-sign-fails.mdx'\n\n@include 'known-issues\/aws-auth-panics.mdx'\n\n@include 'known-issues\/ui-collapsed-navbar.mdx'\n\n@include 'known-issues\/1_15-audit-file-sighup-does-not-trigger-reload.mdx'\n\n@include 'known-issues\/internal-error-namespace-missing-policy.mdx'\n\n@include 'known-issues\/ephemeral-loggers-memory-leak.mdx'\n\n@include 'known-issues\/sublogger-levels-unchanged-on-reload.mdx'\n\n@include 'known-issues\/kv2-url-change.mdx'\n\n@include 'known-issues\/expiration-metrics-fatal-error.mdx'\n\n@include 'known-issues\/1_15_audit-use-of-log-raw-applies-to-all-devices.mdx'\n\n@include 'known-issues\/1_15_openldap-rotate-credentials.mdx'\n\n@include 'known-issues\/perf-secondary-many-mounts-deadlock.mdx'\n\n@include 'known-issues\/1_15-audit-panic-handling-with-eventlogger.mdx'\n\n@include 'known-issues\/ocsp-redirect.mdx'\n\n@include 'known-issues\/1_15-audit-vault-enterprise-perf-standby-logs-all-headers.mdx'\n\n@include 'known-issues\/perf-standbys-revert-to-standby.mdx'\n\n@include 'known-issues\/1_13-reload-census-panic-standby.mdx'\n\n@include 'known-issues\/autopilot-upgrade-upgrade-version.mdx'\n\n@include 'known-issues\/config_listener_proxy_protocol_behavior_issue.mdx'\n\n@include 'known-issues\/duplicate-identity-groups.mdx'\n\n@include 'known-issues\/manual-entity-merge-does-not-persist.mdx'\n\n","site":"vault"}
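The record above tells readers to verify that the static `curl` binary comes from a trusted source but does not show how. A minimal sketch of a checksum check, assuming you obtained a known-good SHA-256 digest out of band (the `verify_sha256` helper name and the placeholder digest are illustrative, not part of the Vault docs):

```shell
# verify_sha256 FILE EXPECTED_DIGEST
# Succeeds only when FILE hashes to EXPECTED_DIGEST.
verify_sha256() {
  actual="$(sha256sum "$1" | awk '{print $1}')"
  [ "$actual" = "$2" ]
}

# Example: after downloading the static binary into the container, compare it
# against the digest published by the source you trust (placeholder below).
# verify_sha256 /home/vault/curl "<known-good-sha256>" || echo "do not trust this binary" >&2
```

The comparison runs entirely with tools already present in most base images (`sha256sum`, `awk`), so it works in the same constrained containers the docs describe.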
{"questions":"vault page title Upgrade to Vault 1 17 x Guides Deprecations important or breaking changes and remediation recommendations layout docs for anyone upgrading to 1 17 x from Vault 1 16 x Overview","answers":"---\nlayout: docs\npage_title: Upgrade to Vault 1.17.x - Guides\ndescription: |-\n  Deprecations, important or breaking changes, and remediation recommendations\n  for anyone upgrading to 1.17.x from Vault 1.16.x.\n---\n\n# Overview\n\nThe Vault 1.17.x upgrade guide contains information on deprecations, important\nor breaking changes, and remediation recommendations for anyone upgrading from\nVault 1.16. **Please read carefully**.\n\n## Important changes\n\n<a id=\"audit-headers\" \/>\n\n### Allowed audit headers now have unremovable defaults\n\nThe [config auditing API endpoint](\/vault\/api-docs\/system\/config-auditing#create-update-audit-request-header)\ntells Vault to log incoming request headers (when present) in the audit log.\n\nPreviously, Vault only logged headers that were explicitly configured for\nlogging. As of version 1.17, Vault automatically logs a predefined set of\n[default headers](\/vault\/docs\/audit#default-headers). By default, the header\nvalues are not HMAC encrypted. You must explicitly configure the\n[HMAC setting](\/vault\/api-docs\/system\/config-auditing#hmac) for each of the\ndefault headers if required.\n\nRefer to the\n[audit request headers documentation](\/vault\/docs\/audit#audit-request-headers)\nfor more information.\n\n<a id=\"pki-truncate\" \/>\n\n### PKI sign-intermediate now truncates notAfter field to signing issuer\n\nPrior to 1.17.x, Vault allowed the calculated sign-intermediate `notAfter` field\nto go beyond the signing issuer `notAfter` field. The extended value lead to a\nCA chain that would not validate properly. 
As of 1.17.x, Vault truncates the\nintermediary `notAfter` value to the signing issuer `notAfter` if the calculated\nfield is greater.\n\n#### How to opt out\n\nYou can use the new `enforce_leaf_not_after_behavior` flag on the\nsign-intermediate API along with the `leaf_not_after_behavior` flag for the\nsigning issuer to opt out of the truncating behavior.\n\nWhen you set `enforce_leaf_not_after_behavior` to true, the sign-intermediate\nAPI uses the `leaf_not_after_behavior` value configured for the signing issuer\nto control truncation the behavior. Setting the issuer `leaf_not_after_behavior`\nfield to `permit` and `enforce_leaf_not_after_behavior` to true restores the\nlegacy behavior.\n\n<a id=\"request-limiter\" \/>\n\n### Request limiter deprecation\n\nVault 1.16.0 included an experimental request limiter. The limiter was disabled\nby default. Further testing indicated that an alternative approach improves\nperformance and reduces risk for many workloads. Vault 1.17.0 includes a\nnew [adaptive overload\nprotection](\/vault\/docs\/concepts\/adaptive-overload-protection) feature that\nprevents outages when Vault is overwhelmed by write requests. Adaptive overload\nprotection is a beta feature in 1.17.0 and is disabled by default.\n\nThe beta request limiter will be removed from Vault entirely in a later release.\n\n### JWT auth login requires bound audiences on the role\n\nThe `bound_audiences` parameter of \"jwt\" roles is **mandatory** if the JWT contains an audience\n(which is more often than not the case), and **must** match at least one of\nthe JWT's associated `aud` claims. The `aud` claim claim can be a single string\nor a list of strings as per [RFC 7519 Section 4.1.3](https:\/\/datatracker.ietf.org\/doc\/html\/rfc7519#section-4.1.3).\nIf the JWT's `aud` claim is not set, then the role's `bound_audiences`\nparameter is not required.\n\nUsers may not be able to log into Vault if the JWT role is configured\nincorrectly. 
For additional details, refer to the\n[JWT auth method (API)](\/vault\/api-docs\/auth\/jwt) documentation.\n\n### Activity Log Changes\n\n#### Default Activity Log Querying Period\n\nAs of 1.17.9 and later, the field `default_report_months` can no longer be configured or read. Any previously set values\nwill be ignored by the system.\n\n\nAttempts to modify `default_report_months` through the\n[\/sys\/internal\/counters\/config](\/vault\/api-docs\/system\/internal-counters#update-the-client-count-configuration)\nendpoint will result in the following warning from Vault:\n\n<CodeBlockConfig hideClipboard>\n\n  ```shell-session\n\n  WARNING! The following warnings were returned from Vault:\n\n  * default_report_months is deprecated: defaulting to billing start time\n\n\n  ```\n\n<\/CodeBlockConfig>\n\n\nThe `current_billing_period` toggle for `\/sys\/internal\/counters\/activity` is also deprecated, as this will be set\nto true by default.\n\nAttempts to set `current_billing_period` will result in the following warning from Vault:\n\n<CodeBlockConfig hideClipboard>\n\n  ```shell-session\n\n  WARNING! The following warnings were returned from Vault:\n\n  * current_billing_period is deprecated; unless otherwise specified, all requests will default to the current billing period\n\n\n  ```\n\n<\/CodeBlockConfig>\n\n### Auto-rolled billing start date\n\nAs of 1.17.3 and later, the billing start date (license start date if not configured) rolls over to the latest billing year at the end of the last cycle.\n\n@include 'auto-roll-billing-start.mdx'\n\n@include 'auto-roll-billing-start-example.mdx'\n\n### Docker image no longer contains `curl`\n\nAs of 1.17.3 and later, the `curl` binary is no longer included in the published Docker container\nimages for Vault and Vault Enterprise. If your workflow depends on `curl` being available in the\ncontainer, consider one of the following strategies:\n\n#### Create a wrapper container image\n\nUse the HashiCorp image as a base image to create a new container image with `curl` installed.\n\n```Dockerfile\nFROM hashicorp\/vault-enterprise\nRUN apk add curl\n```\n\n**NOTE:** While this is the preferred option, it requires managing your own registry and rebuilding new images.\n\n#### Install it at runtime dynamically\n\nWhen running the image as root (not recommended), you can install `curl` at runtime by using the `apk` package manager:\n\n```shell-session\ndocker exec <CONTAINER-ID> apk add curl\n```\n```shell-session\nkubectl exec -ti <NAME> -- apk add curl\n```\n\nWhen running the image as non-root without privilege escalation (recommended), you can use existing\ntools to install a static binary of `curl` into the `vault` user's home directory:\n\n```shell-session\ndocker exec <CONTAINER-ID> wget https:\/\/github.com\/moparisthebest\/static-curl\/releases\/latest\/download\/curl-amd64 -O \/home\/vault\/curl && chmod +x \/home\/vault\/curl\n```\n```shell-session\nkubectl exec -ti <NAME> -- wget https:\/\/github.com\/moparisthebest\/static-curl\/releases\/latest\/download\/curl-amd64 -O \/home\/vault\/curl && chmod +x \/home\/vault\/curl\n```\n\n**NOTE:** When using this option, you'll want to verify that the static binary comes from a trusted source.\n\n### Product usage reporting\n\nAs of 1.17.9, Vault will collect anonymous product usage metrics for HashiCorp. 
This information will be collected\nalongside client activity data, and will be sent automatically if automated reporting is configured, or added to manual\nreports if manual reporting is preferred.\n\nSee the main page for [Vault product usage metrics reporting](\/vault\/docs\/enterprise\/license\/product-usage-reporting) for\nmore details, and information about opt-out.\n\n## Known issues and workarounds\n\n@include 'known-issues\/1_17_audit-log-hmac.mdx'\n\n@include 'known-issues\/ocsp-redirect.mdx'\n\n@include 'known-issues\/agent-and-proxy-excessive-cpu-1-17.mdx'\n\n@include 'known-issues\/config_listener_proxy_protocol_behavior_issue.mdx'\n\n@include 'known-issues\/transit-input-on-cmac-response.mdx'\n\n@include 'known-issues\/dangling-entity-aliases-in-memory.mdx'\n\n@include 'known-issues\/duplicate-identity-groups.mdx'\n\n@include 'known-issues\/manual-entity-merge-does-not-persist.mdx'\n\n@include 'known-issues\/aws-auth-external-id.mdx'\n\n@include 'known-issues\/sync-activation-flags-cache-not-updated.mdx'\n\n@include 'known-issues\/duplicate-hsm-key.mdx'","site":"vault"}
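The JWT `bound_audiences` change in the record above is easier to remediate if you can see which `aud` claim your tokens actually carry before configuring the role. A minimal sketch that decodes a JWT payload locally with POSIX shell and `base64` (the `jwt_payload` helper name and the sample token are illustrative; this performs no signature verification):

```shell
# jwt_payload TOKEN
# Prints the decoded JSON payload of a JWT (second dot-separated segment).
jwt_payload() {
  payload="$(printf '%s' "$1" | cut -d. -f2)"
  # base64url -> base64: swap the alphabet and restore '=' padding
  payload="$(printf '%s' "$payload" | tr '_-' '/+')"
  case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  printf '%s' "$payload" | base64 -d
}

# Example: a token whose payload decodes to {"aud":"vault"} -- the role's
# bound_audiences must then include "vault".
# jwt_payload 'header.eyJhdWQiOiJ2YXVsdCJ9.signature'
```

Piping the output through `jq -r '.aud'` (if available) isolates the audience value, which may be a single string or a list per RFC 7519.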
{"questions":"vault This page contains the list of deprecations and important or breaking changes page title Upgrading to Vault 1 10 x Guides for Vault 1 10 x Please read it carefully layout docs Overview","answers":"---\nlayout: docs\npage_title: Upgrading to Vault 1.10.x - Guides\ndescription: |-\n  This page contains the list of deprecations and important or breaking changes\n  for Vault 1.10.x. Please read it carefully.\n---\n\n# Overview\n\nThis page contains the list of deprecations and important or breaking changes\nfor Vault 1.10.x compared to 1.9. Please read it carefully.\n\n## SSH secrets engine\n\nThe new default value of `algorithm_signer` for SSH CA roles has been changed\nto `rsa-sha2-256` from `ssh-rsa`. Existing roles will be migrated to\nexplicitly specify the `algorithm_signer=ssh-rsa` for RSA keys if they used\nthe implicit (empty) default, but newly created roles will use the new default\nvalue (preferring a literal `default` which presently uses `rsa-sha2-256`).\n\n## Etcd v2 API no longer supported\n\nSupport for the Etcd v2 API is removed in Vault 1.10. The Etcd v2 API\nwas deprecated with the release of [Etcd v3.5](https:\/\/etcd.io\/blog\/2021\/announcing-etcd-3.5\/),\nand will be decommissioned in a forthcoming Etcd release.\n\nUsers of the `etcd` storage backend with the etcdv2 API that are\nupgrading to Vault 1.10 should [migrate](\/vault\/docs\/commands\/operator\/migrate)\nVault storage to an Etcd v3 cluster prior to upgrading to Vault 1.10.\nAll storage migrations should have\n[backups](\/vault\/docs\/concepts\/storage#backing-up-vault-s-persisted-data)\ntaken prior to migration.\n\n## OTP generation process\n\nCustomers passing in OTPs during the process of generating root tokens must modify\nthe OTP generation to include an additional 2 characters before upgrading so that the\nOTP can be xor-ed with the encoded root token. This change was implemented as a result\nof the change in the prefix from hvs. to s. 
for service tokens.\n\n## New error response for requests to perf standbys lagging behind active node\n\nThe introduction of [Server Side Consistent Tokens](\/vault\/docs\/faq\/ssct) means that\nwhen issuing a request to a perf standby right after having obtained a token (e.g.\nvia login), if the token and its lease haven't yet been replicated to the perf\nstandby, an HTTP 412 error will be returned.  Before 1.10.0 this typically would've\nresulted in a 400, 403, or 50x response.\n\n## Token format change\n\nToken prefixes were updated to be more easily identifiable.\n\n- Service tokens previously started with s. now start with hvs.\n- Batch tokens previously started with b. now start with hvb.\n- Recovery tokens previously started with r. now start with hvr.\n\nAdditionally, non-root service tokens are now longer than before. Previously, service tokens\nwere 26 characters; they now have a minimum of 95 characters. However, existing tokens will\nstill work.\n\nRefer to the [Server Side Consistent Token FAQ](\/vault\/docs\/faq\/ssct) for details.\n\n## OIDC provider built-in resources\n\nIn Vault 1.9, the [OIDC identity provider](\/vault\/docs\/secrets\/identity\/oidc-provider) feature\nwas released as a tech preview. 
In Vault 1.10, built-in resources were introduced to the\nOIDC provider system to reduce configuration steps and enhance usability.\n\nThe following built-in resources are included in each Vault namespace starting with Vault\n1.10:\n\n - A `default` OIDC provider that's usable by all client applications\n - A `default` key for signing and verification of ID tokens\n - An `allow_all` assignment which authorizes all Vault entities to authenticate via a\n   client application\n\nIf you created an [OIDC provider](\/vault\/api-docs\/secret\/identity\/oidc-provider#create-or-update-a-provider)\nwith the name `default`, [key](\/vault\/api-docs\/secret\/identity\/tokens#create-a-named-key) with the\nname `default`, or [assignment](\/vault\/api-docs\/secret\/identity\/oidc-provider#create-or-update-an-assignment)\nwith the name `allow_all` using the Vault 1.9 tech preview, the installation of these built-in\nresources will be skipped. We _strongly recommend_ that you delete any existing resources\nthat have naming collisions before upgrading to Vault 1.10. Failing to delete resources with\nnaming collisions could result in unexpected default behavior. Additionally, we recommend reading\nthe corresponding details in the OIDC provider [concepts](\/vault\/docs\/concepts\/oidc-provider) document\nto understand how the built-in resources are used in the system.\n\n## Known issues\n\n@include 'raft-retry-join-failure.mdx'\n\n@include 'raft-panic-old-tls-key.mdx'\n\n@include 'tokenization-rotation-persistence.mdx'\n\n### Errors returned by perf standbys lagging behind active node with consul storage\n\nThe introduction of [Server Side Consistent Tokens](\/vault\/docs\/faq\/ssct) means that\nwhen issuing a request to a perf standby right after having obtained a token (e.g.\nvia login), if the token and its lease haven't yet been replicated to the perf\nstandby, an HTTP 412 error will be returned.  
Before 1.10.0 this wouldn't have\nresulted in the client seeing errors with Consul storage.\n\n### Single Vault follower restart causes election even with established quorum\n\nWe now support Server Side Consistent Tokens (see [Replication](\/vault\/docs\/configuration\/replication) and [Vault Eventual Consistency](\/vault\/docs\/enterprise\/consistency)), which introduces a new token format that can only be used on nodes of 1.10 or higher version. This new format is enabled by default upon upgrading to the new version. Old format tokens can be read by Vault 1.10, but the new format Vault 1.10 tokens cannot be read by older Vault versions.\n\nFor more details, see the [Server Side Consistent Tokens FAQ](\/vault\/docs\/faq\/ssct).\n\nSince service tokens are always created on the leader, as long as the leader is not upgraded before performance standbys, service tokens will be of the old format and still be usable during the upgrade process. However, the usual upgrade process we recommend can't be relied upon to always upgrade the leader last. Due to this known [issue](https:\/\/github.com\/hashicorp\/vault\/issues\/14153), a Vault cluster using Integrated Storage may result in a leader not being upgraded last, and this can trigger a re-election. This re-election can cause the upgraded node to become the leader, resulting in the newly created tokens on the leader being unusable on nodes that have not yet been upgraded. Note that this issue does not impact Vault Community Edition users.\n\nWe will have a fix for this issue in Vault 1.10.3. Until this issue is fixed, you may be at risk of having performance standbys unable to service requests until all nodes are upgraded. We recommend that you plan for a maintenance window to upgrade.\n\n### Limited policy shows unhelpful message in UI after mounting a secret engine\n\nWhen a user has a policy that allows creating a secret engine but not reading it, after successful creation, the user sees the message `n is undefined` instead of a permissions error. We will have a fix for this issue in an upcoming minor release.\n\n### Adding\/Modifying Duo MFA method for enterprise MFA triggers a panic error\n\nWhen adding or modifying a Duo MFA method for step-up Enterprise MFA using the `sys\/mfa\/method\/duo` endpoint, a panic gets triggered due to a missing schema field. We will have a fix for this in Vault 1.10.1. Until this issue is fixed, avoid making any changes to your Duo configuration if you are upgrading Vault to v1.10.0.\n\n### Sign in to UI using OIDC auth method results in an error\n\nSigning in to the Vault UI using an OIDC auth mount listed in the \"tabs\" of the form will result\nin the following error: \"Authentication failed: role with oidc role_type is not allowed\".\nThe auth mounts listed in the \"tabs\" of the form are those that have [listing_visibility](\/vault\/api-docs\/system\/auth#listing_visibility-1)\nset to `unauth`.\n\nThere is a workaround for this error that will allow you to sign in to Vault using the OIDC\nauth method. Select the \"Other\" tab instead of selecting the specific OIDC auth mount tab.\nFrom there, select \"OIDC\" from the \"Method\" select box and proceed to sign in to Vault.\n\n### Login MFA not enforced after restart\n\nA serious bug was identified in the Login MFA feature introduced in 1.10.0:\n[#15108](https:\/\/github.com\/hashicorp\/vault\/issues\/15108).\nUpon restart, Vault is not populating its in-memory MFA data structures based\non what is found in storage. 
Although Vault is persisting to storage MFA methods\nand login enforcement configs populated via \/identity\/mfa, they will effectively\ndisappear after the process is restarted.\n\nWe plan to issue a new 1.10.3 release to address this soon. We recommend delaying\nany rollouts of Login MFA until that release.\n","site":"vault","answers_cleaned":"    layout  docs page title  Upgrading to Vault 1 10 x   Guides description       This page contains the list of deprecations and important or breaking changes   for Vault 1 10 x  Please read it carefully         Overview  This page contains the list of deprecations and important or breaking changes for Vault 1 10 x compared to 1 9  Please read it carefully      SSH secrets engine  The new default value of  algorithm signer  for SSH CA roles has been changed to  rsa sha2 256  from  ssh rsa   Existing roles will be migrated to explicitly specify the  algorithm signer ssh rsa  for RSA keys if they used the implicit  empty  default  but newly created roles will use the new default value  preferring a literal  default  which presently uses  rsa sha2 256        Etcd v2 API no longer supported  Support for the Etcd v2 API is removed in Vault 1 10  The Etcd v2 API was deprecated with the release of  Etcd v3 5  https   etcd io blog 2021 announcing etcd 3 5    and will be decommissioned in a forthcoming Etcd release   Users of the  etcd  storage backend with the etcdv2 API that are upgrading to Vault 1 10 should  migrate   vault docs commands operator migrate  Vault storage to an Etcd v3 cluster prior to upgrading to Vault 1 10  All storage migrations should have  backups   vault docs concepts storage backing up vault s persisted data  taken prior to migration      OTP generation process  Customers passing in OTPs during the process of generating root tokens must modify the OTP generation to include an additional 2 characters before upgrading so that the OTP can be xor ed with the encoded root token  This change was implemented as a 
result of the change in the prefix from hvs  to s  for service tokens      New error response for requests to perf standbys lagging behind active node  The introduction of  Server Side Consistent Tokens   vault docs faq ssct  means that when issuing a request to a perf standby right after having obtained a token  e g  via login   if the token and its lease haven t yet been replicated to the perf standby  an HTTP 412 error will be returned   Before 1 10 0 this typically would ve resulted in a 400  403  or 50x response      Token format change  Token prefixes were updated to be more easily identifiable     Service tokens previously started with s  now start with hvs    Batch tokens previously started with b  now start with hvb    Recovery tokens previously started with r  now start with hvr   Additionally  non root service tokens are now longer than before  Previously  service tokens were 26 characters  they now have a minimum of 95 characters  However  existing tokens will still work   Refer to the  Server Side Consistent Token FAQ   vault docs faq ssct  for details      OIDC provider built in resources  In Vault 1 9  the  OIDC identity provider   vault docs secrets identity oidc provider  feature was released as a tech preview  In Vault 1 10  built in resources were introduced to the OIDC provider system to reduce configuration steps and enhance usability   The following built in resources are included in each Vault namespace starting with Vault 1 10      A  default  OIDC provider that s usable by all client applications    A  default  key for signing and verification of ID tokens    An  allow all  assignment which authorizes all Vault entities to authenticate via a    client application  If you created an  OIDC provider   vault api docs secret identity oidc provider create or update a provider  with the name  default    key   vault api docs secret identity tokens create a named key  with the name  default   or  assignment   vault api docs secret identity oidc 
provider create or update an assignment  with the name  allow all  using the Vault 1 9 tech preview  the installation of these built in resources will be skipped  We  strongly recommend  that you delete any existing resources that have naming collisions before upgrading to Vault 1 10  Failing to delete resources with naming collisions could result unexpected default behavior  Additionally  we recommend reading the corresponding details in the OIDC provider  concepts   vault docs concepts oidc provider  document to understand how the built in resources are used in the system      Known issues   include  raft retry join failure mdx    include  raft panic old tls key mdx    include  tokenization rotation persistence mdx       Errors returned by perf standbys lagging behind active node with consul storage  The introduction of  Server Side Consistent Tokens   vault docs faq ssct  means that when issuing a request to a perf standby right after having obtained a token  e g  via login   if the token and its lease haven t yet been replicated to the perf standby  an HTTP 412 error will be returned   Before 1 10 0 this wouldn t have resulted in the client seeing errors with Consul storage       Single Vault follower restart causes election even with established quorum  We now support Server Side Consistent Tokens  See  Replication   vault docs configuration replication  and  Vault Eventual Consistency   vault docs enterprise consistency    which introduces a new token format that can only be used on nodes of 1 10 or higher version  This new format is enabled by default upon upgrading to the new version  Old format tokens can be read by Vault 1 10  but the new format Vault 1 10 tokens cannot be read by older Vault versions   For more details  see the  Server Side Consistent Tokens FAQ   vault docs faq ssct    Since service tokens are always created on the leader  as long as the leader is not upgraded before performance standbys  service tokens will be of the old format and 
still be usable during the upgrade process  However  the usual upgrade process we recommend can t be relied upon to always upgrade the leader last  Due to this known  issue  https   github com hashicorp vault issues 14153   a Vault cluster using Integrated Storage may result in a leader not being upgraded last  and this can trigger a re election  This re election can cause the upgraded node to become the leader  resulting in the newly created tokens on the leader to be unusable on nodes that have not yet been upgraded  Note that this issue does not impact Vault Community Edition users   We will have a fix for this issue in Vault 1 10 3  Until this issue is fixed  you may be at risk of having performance standbys unable to service requests until all nodes are upgraded  We recommended that you plan for a maintenance window to upgrade       Limited policy shows unhelpful message in UI after mounting a secret engine  When a user has a policy that allows creating a secret engine but not reading it  after successful creation  the user sees a message n is undefined instead of a permissions error  We will have a fix for this issue in an upcoming minor release       Adding Modifying Duo MFA method for enterprise MFA triggers a panic error  When adding or modifying a Duo MFA method for step up Enterprise MFA using the  sys mfa method duo  endpoint  a panic gets triggered due to a missing schema field  We will have a fix for this in Vault 1 10 1  Until this issue is fixed  avoid making any changes to your Duo configuration if you are upgrading Vault to v1 10 0       Sign in to UI using OIDC auth method results in an error  Signing in to the Vault UI using an OIDC auth mount listed in the  tabs  of the form will result in the following error   Authentication failed  role with oidc role type is not allowed   The auth mounts listed in the  tabs  of the form are those that have  listing visibility   vault api docs system auth listing visibility 1  set to  unauth    There is a 
workaround for this error that will allow you to sign in to Vault using the OIDC auth method  Select the  Other  tab instead of selecting the specific OIDC auth mount tab  From there  select  OIDC  from the  Method  select box and proceed to sign in to Vault       Login MFA not enforced after restart  A serious bug was identified in the Login MFA feature introduced in 1 10 0    15108  https   github com hashicorp vault issues 15108   Upon restart  Vault is not populating its in memory MFA data structures based on what is found in storage  Although Vault is persisting to storage MFA methods and login enforcement configs populated via  identity mfa  they will effectively disappear after the process is restarted   We plan to issue a new 1 10 3 release to address this soon  We recommend delaying any rollouts of Login MFA until that release  "}
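The token format change above (service `hvs.`, batch `hvb.`, recovery `hvr.`, with legacy `s.`/`b.`/`r.` tokens still accepted) can be sketched as a small shell helper. This is illustrative only, not a Vault command; `classify_token` is a hypothetical name:

```shell
# Hypothetical helper (not part of Vault): report a token's type and
# whether it uses the new Vault 1.10 prefix or the legacy prefix.
classify_token() {
  case "$1" in
    hvs.*) echo "service (new format)" ;;
    hvb.*) echo "batch (new format)" ;;
    hvr.*) echo "recovery (new format)" ;;
    s.*)   echo "service (old format)" ;;
    b.*)   echo "batch (old format)" ;;
    r.*)   echo "recovery (old format)" ;;
    *)     echo "unknown" ;;
  esac
}

classify_token "hvs.CAESIExampleToken"      # prints: service (new format)
classify_token "s.iyNUhq8Ov4hIAx6snw5mB2nL" # prints: service (old format)
```

Old-format tokens remain valid after the upgrade, so a mixed fleet will see both prefixes until tokens are reissued.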
{"questions":"vault These are general upgrade instructions for Vault for both non HA and HA page title Upgrading Vault Guides setups Please ensure that you also read the version specific upgrade notes layout docs Upgrading Vault","answers":"---\nlayout: docs\npage_title: Upgrading Vault - Guides\ndescription: |-\n  These are general upgrade instructions for Vault for both non-HA and HA\n  setups. Please ensure that you also read the version-specific upgrade notes.\n---\n\n# Upgrading Vault\n\nThese are general upgrade instructions for Vault for both non-HA and HA setups.\n_Please ensure that you also read any version-specific upgrade notes which can be\nfound in the sidebar._\n\n!> **Important:** Always back up your data before upgrading! Vault does not\nmake backward-compatibility guarantees for its data store. Simply replacing the\nnewly-installed Vault binary with the previous version will not cleanly\ndowngrade Vault, as upgrades may perform changes to the underlying data\nstructure that make the data incompatible with a downgrade. If you need to roll\nback to a previous version of Vault, you should roll back your data store as\nwell.\n\nVault upgrades are designed such that large jumps (ie 1.3.10 -> 1.7.x) are\nsupported. The upgrade notes for each intervening version must be reviewed. The\nupgrade notes may describe additional steps or configuration to update before,\nduring, or after the upgrade.\n\nWe also recommend you consult the\n[deprecation notices](\/vault\/docs\/deprecation). The notice page includes\na comprehensive list of deprecated features and the Vault versions where\nthe feature was removed or is scheduled to be removed.\n\n@include 'versions.mdx'\n\n## Integrated storage autopilot\n\nVault 1.11 introduced [automated\nupgrades](\/vault\/docs\/concepts\/integrated-storage\/autopilot#automated-upgrades) as\npart of the Integrated Storage Autopilot feature. 
If your Vault environment is\nconfigured to use Integrated Storage, consider leveraging this new feature to\nupgrade your Vault environment.\n\n-> **Tutorial:** Refer to the [Automate Upgrades with Vault\n Enterprise](\/vault\/tutorials\/raft\/raft-upgrade-automation)\n tutorial for more details.\n\n## Agent\n\nThe Vault Agent is an API client of the Vault Server. Vault APIs are almost\nalways backwards compatible. When they are not, this is called out in the\nupgrade guide for the new Vault version, and there is a lengthy deprecation\nperiod. The Vault Agent version can lag behind the Vault Server version, though\nwe recommend keeping all Vault instances up to date with the most recent minor Vault version\nto the extent possible.\n\n## Testing the upgrade\n\nIt's always a good idea to try to ensure that the upgrade will be successful in\nyour environment. The ideal way to do this is to take a snapshot of your data\nand load it into a test cluster. However, if you are issuing secrets to third-party\nresources (cloud credentials, database credentials, etc.), ensure that you\ndo not allow external network connectivity during testing, in case credentials\nexpire. This prevents the test cluster from trying to revoke these resources\nalong with the non-test cluster.\n\n## Upgrading from Community to Enterprise editions\n\nUpgrades to Vault Enterprise installations follow the same steps as Community edition upgrades, except that the Vault Enterprise binary is used and, when applicable, the license file is [applied](\/vault\/api-docs\/system\/license#install-license). The Enterprise binary and license file can be obtained through your HashiCorp sales team.\n\n## Non-HA installations\n\nUpgrading non-HA installations of Vault is as simple as replacing the Vault\nbinary with the new version and restarting Vault. 
Any upgrade tasks that can be\nperformed for you will be taken care of when Vault is unsealed.\n\nAlways use `SIGINT` or `SIGTERM` to properly shut down Vault.\n\nBe sure to also read and follow any instructions in the version-specific\nupgrade notes.\n\n## HA installations\n\nThe recommended upgrade procedure depends on the version of Vault you're currently on and the storage backend of Vault. If you're currently running on Vault 1.11 or later with Integrated Storage and you have Autopilot enabled, you should let Autopilot do the upgrade for you, as that's easier and\nless prone to human error. Please refer to our [automated\nupgrades](\/vault\/docs\/concepts\/integrated-storage\/autopilot#automated-upgrades) documentation for information on this feature and our\n[Automate Upgrades with Vault\nEnterprise](\/vault\/tutorials\/raft\/raft-upgrade-automation)\ntutorial for more details.\n\nIf you're currently on a version of Vault before 1.11, if you've chosen to opt out of the Autopilot automated upgrade features when running Vault 1.11 or later with Integrated Storage, or if you are running Vault with another storage backend such as Consul, please refer to our [Vault HA upgrades Pre 1.11\/Without Autopilot Upgrade Automation](\/vault\/docs\/upgrading\/vault-ha-upgrade) documentation for more details. 
Please note that this upgrade procedure also applies if you are upgrading Vault from pre-1.11 to post-1.11.\n\n## Enterprise replication installations\n\n<Note>\n\nPrior to any upgrade, be sure to also read and follow any instructions in the\nversion-specific upgrade notes which are found in the navigation menu for this\ndocumentation.\n\n<\/Note>\n\nUpgrading Vault Enterprise clusters which participate in [Enterprise\nReplication](\/vault\/docs\/enterprise\/replication) requires the following basic\norder of operations:\n\n- **Upgrade the replication secondary instances first** using appropriate\n  guidance from the previous sections\n- Verify functionality of each secondary instance after upgrading\n- When satisfied with functionality of upgraded secondary instances, upgrade\n  the primary instance\n\n<Note>\n\nIt is not safe to replicate from a newer version of Vault to an older version.\nWhen upgrading replicated clusters, ensure that upstream clusters are always on\nolder versions of Vault than downstream clusters.\n\n<\/Note>\n\nHere is an example of upgrading four replicated Vault clusters:\n\n![Upgrading multiple replicated clusters](\/img\/vault-replication-upgrade.png)\n\nIn the above scenario, the ideal upgrade procedure would be as follows,\nverifying functionality after each cluster upgrade.\n\n1. Upgrade Clusters B and D. These clusters have no downstream clusters, so they\n   should be upgraded first, but the ordering of B vs D does not matter.\n2. Upgrade Cluster C, which now has an upgraded downstream cluster (Cluster D).\n   Because Cluster C is itself a cluster, it should also use the HA upgrade process.\n3. Finally, upgrade Cluster A. All clusters downstream of A will already be\n   upgraded. 
It should be upgraded last, as it is a Performance Primary and a DR\n   Primary.","site":"vault"}
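The downstream-first ordering described above can be computed mechanically for larger topologies. A minimal sketch using the standard `tsort` utility, encoding each replication link as a "downstream before upstream" pair for the example clusters (B and D are leaves, C is upstream of D, A is the primary):

```shell
# Each "X Y" pair means "upgrade X before Y". Downstream clusters are
# listed before the clusters they replicate from, so tsort emits a safe
# upgrade order (leaf clusters first, the primary last).
printf '%s\n' \
  'B A' \
  'D C' \
  'C A' | tsort
```

`tsort` prints one cluster per line; any valid order it emits places D before C and Cluster A, the Performance/DR Primary, last.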
{"questions":"vault This page contains the list of deprecations and important or breaking changes page title Upgrading to Vault 1 12 x Guides layout docs for Vault 1 12 x Please read it carefully Overview","answers":"---\nlayout: docs\npage_title: Upgrading to Vault 1.12.x - Guides\ndescription: |-\n  This page contains the list of deprecations and important or breaking changes\n  for Vault 1.12.x. Please read it carefully.\n---\n\n# Overview\n\nThis page contains the list of deprecations and important or breaking changes\nfor Vault 1.12.x compared to 1.11. Please read it carefully.\n\n## Changes\n\n### Supported storage backends\n\nVault Enterprise will now perform a supported storage check at startup. There is no impact on Vault Community Edition users.\n\n@include 'ent-supported-storage.mdx'\n\n@include 'consul-dataplane-upgrade-note.mdx'\n\n### External plugin loading\n\nVault 1.12.0 introduced a change to how external plugins are loaded. Prior to\nVault 1.12.0 plugins were lazy loaded on startup. This means that plugin\nprocesses were killed after a successful mount and then respawned when a\nrequest is routed to them. Vault 1.12.0 introduced auto mutual TLS for\nsecrets\/auth plugins so we do not lazy load them on startup anymore.\n\n## Known issues\n\n### Pinning to builtin plugin versions may cause failure on upgrade\n\n1.12.0 introduced plugin versions, and with it, the ability to explicitly specify\nthe builtin version of a plugin when mounting an auth, database or secrets plugin.\nFor example, `vault auth enable -plugin-version=v1.12.0+builtin.vault approle`. If\nthere are any mounts where the _builtin_ version was explicitly specified in this way,\nVault may fail to start on upgrading to 1.12.1 due to the specified version no\nlonger being available.\n\nTo check whether a mount path is affected, read the tune information, or the\ndatabase config. 
The affected plugins are `snowflake-database-plugin@v0.6.0+builtin`\nand any plugins with `+builtin.vault` metadata in their version.\n\nIn this example, the first two mounts are affected because `plugin_version` is\nexplicitly set and is one of the affected versions. The third mount is not\naffected because it only has `+builtin` metadata, and is not the Snowflake\ndatabase plugin. All mounts where the version is omitted, or the plugin is\nexternal (regardless of whether the version is specified) are unaffected.\n\n-> **NOTE:** Make sure you use Vault CLI 1.12.0 or later to check mounts.\n\n```shell-session\n$ vault read sys\/auth\/approle\/tune\nKey                  Value\n---                  -----\n...\nplugin_version       v1.12.0+builtin.vault\n\n$ vault read database\/config\/snowflake\nKey                                   Value\n---                                   -----\n...\nplugin_name                           snowflake-database-plugin\nplugin_version                        v0.6.0+builtin\n\n$ vault read sys\/auth\/kubernetes\/tune\nKey                  Value\n---                  -----\n...\nplugin_version       v0.14.0+builtin\n```\n\nAs it is not currently possible to unset the plugin version, there are 3 possible\nremediations if you have any affected mounts:\n\n* Upgrade Vault directly to 1.12.2 once released\n* Upgrade to an external version of the plugin before upgrading to 1.12.1;\n  * Using the [tune API](\/vault\/api-docs\/system\/auth#tune-auth-method) for auth methods\n  * Using the [tune API](\/vault\/api-docs\/system\/mounts#tune-mount-configuration) for secrets plugins\n  * Or using the [configure connection](\/vault\/api-docs\/secret\/databases#configure-connection)\n    API for database plugins\n* Unmount and remount the path without a version specified before upgrading to 1.12.1.\n  **Note:** This will delete all data and leases associated with the mount.\n\nThe bug was introduced by 
commit\nhttps:\/\/github.com\/hashicorp\/vault\/commit\/c36330f4c713b886a8a23c08cbbd862a7c530fc8.\n\n#### Impacted versions\n\nAffects upgrading from 1.12.0 to 1.12.1. All other upgrade paths are unaffected.\n1.12.2 will introduce a fix that enables upgrades from affected deployments of\n1.12.0.\n\n### Mounts associated with deprecated builtin plugins will result in core shutdown on upgrade\n\nAs of 1.12.0 Standalone (logical) DB Engines and the AppId Auth Method have been\nmarked with the `Pending Removal` status. Any attempt to unseal Vault with\nmounts backed by one of these builtin plugins will result in an immediate\nshutdown of the Vault core.\n\n-> **NOTE** In the event that an external plugin with the same name and type as\na deprecated builtin is deregistered, any subsequent unseal of Vault will also\nresult in a core shutdown.\n\n```shell-session\n$ vault plugin register -sha256=c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1 auth app-id\nSuccess! Registered plugin: app-id\n$ vault auth enable -plugin-name=app-id plugin\nSuccess! Enabled app-id auth method at: app-id\/\n$ vault auth list -detailed\napp-id\/    app-id    auth_app-id_3a8f2e24    system         system     default-service    replicated     false        false                      map[]      n\/a                        0018263c-0d64-7a70-fd5c-50e05c5f5dc3    n\/a        n\/a                      c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1    n\/a\n$ vault plugin deregister auth app-id\nSuccess! 
Deregistered plugin (if it was registered): app-id\n$ vault plugin list -detailed | grep \"app-id\"\napp-id                               auth        v1.12.0+builtin.vault                                 pending removal\n```\n\nThe remediation for affected mounts is to set the\n[VAULT_ALLOW_PENDING_REMOVAL_MOUNTS](\/vault\/docs\/commands\/server#vault_allow_pending_removal_mounts)\nenvironment variable and replace any `Pending Removal` feature with the\n[preferred alternative\nfeature](\/vault\/docs\/deprecation\/faq#q-what-should-i-do-if-i-use-mount-filters-appid-or-any-of-the-standalone-db-engines).\n\nFor more information on the phases of deprecation, see the [Deprecation Notices\nFAQ](\/vault\/docs\/deprecation\/faq#q-what-are-the-phases-of-deprecation).\n\n#### Impacted versions\n\nAffects upgrading from any version of Vault to 1.12.x. All other upgrade paths\nare unaffected.\n\n### `vault plugin list` fails when audit logging is enabled\n\nIf audit logging is enabled, Vault will fail to audit the response from any\ncalls to the [`GET \/v1\/sys\/plugins\/catalog`](\/vault\/api-docs\/system\/plugins-catalog#list-plugins)\nendpoint, which causes the whole request to fail and return a 500 internal\nserver error. 
From the CLI, this looks like the following:\n\n```shell-session\n$ vault plugin list\nError listing available plugins: data from server response is empty\n```\n\nIt will produce errors in Vault Server's logs such as:\n\n```text\n2022-11-30T20:04:22.397Z [ERROR] audit: panic during logging: request_path=sys\/plugins\/catalog error=\"reflect: reflect.Value.Set using value obtained using unexported field\"\n2022-11-30T20:04:22.398Z [ERROR] core: failed to audit response: request_path=sys\/plugins\/catalog\n  error=\n  | 1 error occurred:\n  | \t* panic generating audit log\n  |\n```\n\nAs a workaround, [listing plugins by type](\/vault\/api-docs\/system\/plugins-catalog#list-plugins-1)\nwill succeed:\n\n* `vault list sys\/plugins\/catalog\/auth`\n* `vault list sys\/plugins\/catalog\/database`\n* `vault list sys\/plugins\/catalog\/secret`\n\nThe bug was introduced by commit\nhttps:\/\/github.com\/hashicorp\/vault\/commit\/76165052e54f884ed0aa2caa496083dc84ad1c19.\n\n#### Impacted versions\n\nAffects versions 1.12.0, 1.12.1, and 1.12.2. A fix will be released in 1.12.3.\n\n### PKI OCSP GET requests return malformed request responses\n\nIf an OCSP GET request contains a '+' character, a malformed request response will be\nreturned instead of properly processing the request due to a double decoding issue within the\nhandler.\n\nAs a workaround, OCSP POST requests can be used which are unaffected.\n\n#### Impacted versions\n\nAffects version 1.12.3. A fix will be released in 1.12.4.\n\n@include 'tokenization-rotation-persistence.mdx'\n\n@include 'known-issues\/ocsp-redirect.mdx'\n\n### LDAP pagination issue\n\nThere was a regression introduced in 1.12.6 relating to LDAP maximum page sizes, resulting in\nan error `no LDAP groups found in groupDN [...] only policies from locally-defined groups available`.  
The issue\noccurs when upgrading Vault with an instance that has an existing LDAP Auth configuration.\n\nAs a workaround, disable paged searching using the following:\n```shell-session\nvault write auth\/ldap\/config max_page_size=-1\n```\n\n#### Impacted versions\n\nAffects Vault 1.12.6.\n\n### Slow startup time when storing PKI certificates\n\nThere was a regression introduced in 1.12.0 where Vault is slow to start because the\nPKI secret engine performs a list operation on the stored certificates. If a large number\nof certificates are stored this can cause long start times on active and standby nodes.\n\nThere is currently no workaround for this other than limiting the number of certificates stored\nin Vault via the [PKI tidy](\/vault\/api-docs\/secret\/pki#tidy) or using `no_store`\nflag for [PKI roles](\/vault\/api-docs\/secret\/pki#createupdate-role).\n\n#### Impacted versions\n\nAffects Vault 1.12.0+\n\n@include 'pki-double-migration-bug.mdx'","site":"vault","answers_cleaned":"    layout  docs page title  Upgrading to Vault 1 12 x   Guides description       This page contains the list of deprecations and important or breaking changes   for Vault 1 12 x  Please read it carefully         Overview  This page contains the list of deprecations and important or breaking changes for Vault 1 12 x compared to 1 11  Please read it carefully      Changes      Supported storage backends  Vault Enterprise will now perform a supported storage check at startup  There is no impact on Vault Community Edition users    include  ent supported storage mdx    include  consul dataplane upgrade note mdx       External plugin loading  Vault 1 12 0 introduced a change to how external plugins are loaded  Prior to Vault 1 12 0 plugins were lazy loaded on startup  This means that plugin processes were killed after a successful mount and then respawned when a request is routed to them  Vault 1 12 0 introduced auto mutual TLS for secrets auth plugins so we do not lazy load them on 
startup anymore      Known issues      Pinning to builtin plugin versions may cause failure on upgrade  1 12 0 introduced plugin versions  and with it  the ability to explicitly specify the builtin version of a plugin when mounting an auth  database or secrets plugin  For example   vault auth enable  plugin version v1 12 0 builtin vault approle   If there are any mounts where the  builtin  version was explicitly specified in this way  Vault may fail to start on upgrading to 1 12 1 due to the specified version no longer being available   To check whether a mount path is affected  read the tune information  or the database config  The affected plugins are  snowflake database plugin v0 6 0 builtin  and any plugins with   builtin vault  metadata in their version   In this example  the first two mounts are affected because  plugin version  is explicitly set and is one of the affected versions  The third mount is not affected because it only has   builtin  metadata  and is not the Snowflake database plugin  All mounts where the version is omitted  or the plugin is external  regardless of whether the version is specified  are unaffected        NOTE    Make sure you use Vault CLI 1 12 0 or later to check mounts      shell session   vault read sys auth approle tune Key                  Value                                plugin version       v1 12 0 builtin vault    vault read database config snowflake Key                                   Value                                                 plugin name                           snowflake database plugin plugin version                        v0 6 0 builtin    vault read sys auth kubernetes tune Key                  Value                                plugin version       v0 14 0 builtin      As it is not currently possible to unset the plugin version  there are 3 possible remediations if you have any affected mounts     Upgrade Vault directly to 1 12 2 once released   Upgrade to an external version of the plugin before 
upgrading to 1 12 1      Using the  tune API   vault api docs system auth tune auth method  for auth methods     Using the  tune API   vault api docs system mounts tune mount configuration  for secrets plugins     Or using the  configure connection   vault api docs secret databases configure connection      API for database plugins   Unmount and remount the path without a version specified before upgrading to 1 12 1      Note    This will delete all data and leases associated with the mount   The bug was introduced by commit https   github com hashicorp vault commit c36330f4c713b886a8a23c08cbbd862a7c530fc8        Impacted versions  Affects upgrading from 1 12 0 to 1 12 1  All other upgrade paths are unaffected  1 12 2 will introduce a fix that enables upgrades from affected deployments of 1 12 0       Mounts associated with deprecated builtin plugins will result in core shutdown on upgrade  As of 1 12 0 Standalone  logical  DB Engines and the AppId Auth Method have been marked with the  Pending Removal  status  Any attempt to unseal Vault with mounts backed by one of these builtin plugins will result in an immediate shutdown of the Vault core        NOTE   In the event that an external plugin with the same name and type as a deprecated builtin is deregistered  any subsequent unseal of Vault will also result in a core shutdown      shell session   vault plugin register  sha256 c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1 auth app id Success  Registered plugin  app id   vault auth enable  plugin name app id plugin Success  Enabled app id auth method at  app id    vault auth list  detailed app id     app id    auth app id 3a8f2e24    system         system     default service    replicated     false        false                      map        n a                        0018263c 0d64 7a70 fd5c 50e05c5f5dc3    n a        n a                      c805cf3b69f704dfcd5176ef1c7599f88adbfd7374e9c76da7f24a32a97abfe1    n a   vault plugin deregister auth app 
id Success  Deregistered plugin  if it was registered   app id   vault plugin list  detailed   grep  app id  app id                               auth        v1 12 0 builtin vault                                 pending removal      The remediation for affected mounts is to set the  VAULT ALLOW PENDING REMOVAL MOUNTS   vault docs commands server vault allow pending removal mounts  environment variable and replace any  Pending Removal  feature with the  preferred alternative feature   vault docs deprecation faq q what should i do if i use mount filters appid or any of the standalone db engines    For more information on the phases of deprecation  see the  Deprecation Notices FAQ   vault docs deprecation faq q what are the phases of deprecation         Impacted versions  Affects upgrading from any version of Vault to 1 12 x  All other upgrade paths are unaffected        vault plugin list  fails when audit logging is enabled  If audit logging is enabled  Vault will fail to audit the response from any calls to the   GET  v1 sys plugins catalog    vault api docs system plugins catalog list plugins  endpoint  which causes the whole request to fail and return a 500 internal server error  From the CLI  this looks like the following      shell session   vault plugin list Error listing available plugins  data from server response is empty      It will produce errors in Vault Server s logs such as      text 2022 11 30T20 04 22 397Z  ERROR  audit  panic during logging  request path sys plugins catalog error  reflect  reflect Value Set using value obtained using unexported field  2022 11 30T20 04 22 398Z  ERROR  core  failed to audit response  request path sys plugins catalog   error      1 error occurred         panic generating audit log          As a workaround   listing plugins by type   vault api docs system plugins catalog list plugins 1  will succeed      vault list sys plugins catalog auth     vault list sys plugins catalog database     vault list sys plugins catalog 
secret   The bug was introduced by commit https   github com hashicorp vault commit 76165052e54f884ed0aa2caa496083dc84ad1c19        Impacted versions  Affects versions 1 12 0  1 12 1  and 1 12 2  A fix will be released in 1 12 3       PKI OCSP GET requests return malformed request responses  If an OCSP GET request contains a     character  a malformed request response will be returned instead of properly processing the request due to a double decoding issue within the handler   As a workaround  OCSP POST requests can be used which are unaffected        Impacted versions  Affects version 1 12 3  A fix will be released in 1 12 4    include  tokenization rotation persistence mdx    include  known issues ocsp redirect mdx       LDAP pagination issue  There was a regression introduced in 1 12 6 relating to LDAP maximum page sizes  resulting in an error  no LDAP groups found in groupDN       only policies from locally defined groups available    The issue occurs when upgrading Vault with an instance that has an existing LDAP Auth configuration   As a workaround  disable paged searching using the following     shell session vault write auth ldap config max page size  1           Impacted versions  Affects Vault 1 12 6       Slow startup time when storing PKI certificates  There was a regression introduced in 1 12 0 where Vault is slow to start because the PKI secret engine performs a list operation on the stored certificates  If a large number of certificates are stored this can cause long start times on active and standby nodes   There is currently no workaround for this other than limiting the number of certificates stored in Vault via the  PKI tidy   vault api docs secret pki tidy  or using  no store  flag for  PKI roles   vault api docs secret pki createupdate role         Impacted versions  Affects Vault 1 12 0    include  pki double migration bug mdx "}
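The PKI slow-startup workaround described above (tidy, or the `no_store` role flag) can be applied from the CLI. A minimal sketch, assuming a PKI secrets engine mounted at `pki/` and a role named `example` — both placeholder names; note `no_store=true` only affects newly issued certificates, while tidy prunes certificates already stored:

```shell-session
$ # Stop persisting certificates issued by this role going forward:
$ vault write pki/roles/example no_store=true

$ # Prune expired certificates already in the store (older than safety_buffer):
$ vault write pki/tidy tidy_cert_store=true safety_buffer=72h
```

These commands require a running, unsealed Vault with the PKI engine mounted, so they are shown as a documentation-style session rather than a runnable script.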
{"questions":"vault for anyone upgrading to 1 16 x from Vault 1 15 x Deprecations important or breaking changes and remediation recommendations layout docs page title Upgrade to Vault 1 16 x Guides Overview","answers":"---\nlayout: docs\npage_title: Upgrade to Vault 1.16.x - Guides\ndescription: |-\n  Deprecations, important or breaking changes, and remediation recommendations\n  for anyone upgrading to 1.16.x from Vault 1.15.x.\n---\n\n# Overview\n\nThe Vault 1.16.x upgrade guide contains information on deprecations, important\nor breaking changes, and remediation recommendations for anyone upgrading from\nVault 1.15. **Please read carefully**.\n\n## Important changes\n\n### External plugin variables take precedence over system variables ((#external-plugin-variables))\n\nVault gives precedence to plugin environment variables over system environment\nvariables when loading external plugins. The behavior for builtin plugins and\nplugins that do not specify additional environment variables is unaffected.\n\nFor example, if you register an external plugin with `SOURCE=child` in the\n[env](\/vault\/api-docs\/system\/plugins-catalog#env) parameter but the main Vault\nprocess already has `SOURCE=parent` defined, the plugin process starts\nwith `SOURCE=child`.\n\nRefer to the [plugin management](\/vault\/docs\/plugins\/plugin-management) page for\nmore details on plugin environment variables.\n\n<Highlight title=\"Avoid conflicts with containerized plugins\">\n\n  Containerized plugins do not inherit system-defined environment variables. 
As\n  a result, containerized plugins cannot have conflicts with Vault environment\n  variables.\n\n<\/Highlight>\n\n#### How to opt out\n\nTo opt out of the precedence change, set the\n`VAULT_PLUGIN_USE_LEGACY_ENV_LAYERING` environment variable to `true` for the\nmain Vault process:\n\n```shell-session\n$ export VAULT_PLUGIN_USE_LEGACY_ENV_LAYERING=true\n```\n\nSetting `VAULT_PLUGIN_USE_LEGACY_ENV_LAYERING` to `true` tells Vault to:\n\n1. prioritize environment variables from the Vault server environment whenever\n   the system detects a variable conflict.\n1. report on plugin variable conflicts during the unseal process by printing\n   warnings for plugins with conflicting environment variables or logging an\n   informational entry when there are no conflicts.\n\nFor example, assume you set `VAULT_PLUGIN_USE_LEGACY_ENV_LAYERING` to `true`\nand have an environment variable `SOURCE=parent`.\n\nIf you register an external plugin called `myplugin` with `SOURCE=child`, the\nplugin process starts with `SOURCE=parent` and Vault reports a conflict for\n`myplugin`.\n\n### LDAP auth login changes\n\nUsers cannot log in using LDAP unless the LDAP plugin is configured\nwith a `userdn` value scoped to an organizational unit (OU) where the\nuser resides.\n\n### LDAP auth entity alias names no longer include upndomain\n\nThe `userattr` field on the LDAP auth config is now used as the entity alias.\nPrior to 1.16, the LDAP auth method would detect if `upndomain` was configured\non the mount and then use `<cn>@<upndomain>` as the entity alias value.\n\nIf `userattr` is not configured correctly, users may not have the\ncorrect policies attached to their tokens when logging in.\n\n#### How to opt out\n\nTo opt out of the entity alias change, update the `userattr` field on the config:\n\n```\nuserattr=\"userprincipalname\"\n```\n\nRefer to the [LDAP auth method (API)](\/vault\/api-docs\/auth\/ldap) page for\nmore details on the configuration.\n\n### Secrets Sync now 
requires setting a one-time flag before use\n\nTo use the Secrets Sync feature, you must first activate it with a one-time\noperation called an activation flag. The feature is gated until a Vault operator\ndecides to trigger the flag. More information can be found in the\n[secrets sync documentation](\/vault\/docs\/sync#activating-the-feature).\n\n### Activity Log Changes\n\n#### Default Activity Log Querying Period\n\nAs of 1.16.13 and later, the field `default_report_months` can no longer be configured or read. Any previously set values\nwill be ignored by the system.\n\n\nAttempts to modify `default_report_months` through the\n[\/sys\/internal\/counters\/config](\/vault\/api-docs\/system\/internal-counters#update-the-client-count-configuration)\nendpoint will result in the following warning from Vault:\n\n<CodeBlockConfig hideClipboard>\n\n  ```shell-session\n\n  WARNING! The following warnings were returned from Vault:\n\n  * default_report_months is deprecated: defaulting to billing start time\n\n\n  ```\n\n<\/CodeBlockConfig>\n\n\nThe `current_billing_period` toggle for `\/sys\/internal\/counters\/activity` is also deprecated, as it will now be set to\ntrue by default.\n\nAttempts to set `current_billing_period` will result in the following warning from Vault:\n\n<CodeBlockConfig hideClipboard>\n\n  ```shell-session\n\n  WARNING! 
The following warnings were returned from Vault:\n\n  * current_billing_period is deprecated; unless otherwise specified, all requests will default to the current billing period\n\n\n  ```\n\n<\/CodeBlockConfig>\n\n### Auto-rolled billing start date\n\nAs of 1.16.7 and later, the billing start date (license start date if not configured) automatically rolls over to the latest billing year at the end of the last cycle.\n\n@include 'auto-roll-billing-start.mdx'\n\n@include 'auto-roll-billing-start-example.mdx'\n\n### Docker image no longer contains `curl`\n\nAs of 1.16.7 and later, the `curl` binary is no longer included in the published Docker container\nimages for Vault and Vault Enterprise. If your workflow depends on `curl` being available in the\ncontainer, consider one of the following strategies:\n\n#### Create a wrapper container image\n\nUse the HashiCorp image as a base image to create a new container image with `curl` installed.\n\n```Dockerfile\nFROM hashicorp\/vault-enterprise\nRUN apk add curl\n```\n\n**NOTE:** While this is the preferred option, it will require managing your own registry and rebuilding new images.\n\n#### Install it at runtime dynamically\n\nWhen running the image as root (not recommended), you can install it at runtime using the `apk` package manager:\n\n```shell-session\ndocker exec <CONTAINER-ID> apk add curl\n```\n```shell-session\nkubectl exec -ti <NAME> -- apk add curl\n```\n\nWhen running the image as non-root without privilege escalation (recommended), you can use existing\ntools to install a static binary of `curl` into the `vault` user's home directory:\n\n```shell-session\ndocker exec <CONTAINER-ID> wget https:\/\/github.com\/moparisthebest\/static-curl\/releases\/latest\/download\/curl-amd64 -O \/home\/vault\/curl && chmod +x \/home\/vault\/curl\n```\n```shell-session\nkubectl exec -ti <NAME> -- wget https:\/\/github.com\/moparisthebest\/static-curl\/releases\/latest\/download\/curl-amd64 -O \/home\/vault\/curl 
&& chmod +x \/home\/vault\/curl\n```\n\n**NOTE:** When using this option you'll want to verify that the static binary comes from a trusted source.\n\n### Product usage reporting\n\nAs of 1.16.13, Vault will collect anonymous product usage metrics for HashiCorp. This information will be collected\nalongside client activity data, and will be sent automatically if automated reporting is configured, or added to manual\nreports if manual reporting is preferred.\n\nSee the main page for [Vault product usage metrics reporting](\/vault\/docs\/enterprise\/license\/product-usage-reporting) for\nmore details, and information about opt-out.\n\n## Known issues and workarounds\n\n@include 'known-issues\/1_17_audit-log-hmac.mdx'\n\n@include 'known-issues\/1_16-jwt_auth_bound_audiences.mdx'\n\n@include 'known-issues\/1_16-jwt_auth_config.mdx'\n\n@include 'known-issues\/1_16-ldap_auth_login_anonymous_group_search.mdx'\n\n@include 'known-issues\/1_16-ldap_auth_login_missing_entity_alias.mdx'\n\n@include 'known-issues\/1_16-default-policy-needs-to-be-updated.mdx'\n\n@include 'known-issues\/1_16-default-lcq-pre-1_9-upgrade.mdx'\n\n@include 'known-issues\/ocsp-redirect.mdx'\n\n@include 'known-issues\/1_16_azure-secrets-engine-client-id.mdx'\n\n@include 'known-issues\/perf-standbys-revert-to-standby.mdx'\n\n@include 'known-issues\/1_13-reload-census-panic-standby.mdx'\n\n@include 'known-issues\/autopilot-upgrade-upgrade-version.mdx'\n\n@include 'known-issues\/1_16_secrets-sync-chroot-activation.mdx'\n\n@include 'known-issues\/config_listener_proxy_protocol_behavior_issue.mdx'\n\n@include 'known-issues\/dangling-entity-aliases-in-memory.mdx'\n\n@include 'known-issues\/duplicate-identity-groups.mdx'\n\n@include 'known-issues\/manual-entity-merge-does-not-persist.mdx'\n\n@include 'known-issues\/duplicate-hsm-key.mdx'\n","site":"vault"}
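The plugin environment variable precedence change in the 1.16 guide above follows ordinary process environment layering. A runnable sketch in plain `sh`, with `SOURCE` standing in for a plugin-registered variable (Vault itself is not involved; this only illustrates which value a child process sees):

```shell
# The Vault server (parent) environment defines SOURCE=parent.
export SOURCE=parent

# Under the 1.16 default, the plugin's registered value (SOURCE=child) wins,
# analogous to launching the plugin process with its own override:
env SOURCE=child sh -c 'echo "plugin sees SOURCE=$SOURCE"'

# The parent process environment is unchanged; with the legacy layering
# opt-out, Vault would instead hand this parent value to the plugin:
sh -c 'echo "parent still has SOURCE=$SOURCE"'
```

Running this prints `plugin sees SOURCE=child` followed by `parent still has SOURCE=parent`, mirroring the `myplugin` example in the guide.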
{"questions":"vault for anyone upgrading to 1 18 x from Vault 1 17 x Deprecations important or breaking changes and remediation recommendations layout docs Overview page title Upgrade to Vault 1 18 x Guides","answers":"---\nlayout: docs\npage_title: Upgrade to Vault 1.18.x - Guides\ndescription: |-\n  Deprecations, important or breaking changes, and remediation recommendations\n  for anyone upgrading to 1.18.x from Vault 1.17.x.\n---\n\n# Overview\n\nThe Vault 1.18.x upgrade guide contains information on deprecations, important\nor breaking changes, and remediation recommendations for anyone upgrading from\nVault 1.17. **Please read carefully**.\n\n## Important changes\n\n### Activity Log Changes\n\n#### Default Activity Log Querying Period\n\nThe field `default_report_months` can no longer be configured or read. Any previously set values\nwill be ignored by the system.\n\n\nAttempts to modify `default_report_months` through the\n[\/sys\/internal\/counters\/config](\/vault\/api-docs\/system\/internal-counters#update-the-client-count-configuration)\nendpoint will result in the following warning from Vault:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n\nWARNING! The following warnings were returned from Vault:\n\n  * default_report_months is deprecated: defaulting to billing start time\n\n\n```\n\n<\/CodeBlockConfig>\n\n\nThe `current_billing_period` toggle for `\/sys\/internal\/counters\/activity` is also deprecated, as it will now be set to\ntrue by default.\n\nAttempts to set `current_billing_period` will result in the following warning from Vault:\n\n<CodeBlockConfig hideClipboard>\n\n```shell-session\n\nWARNING! 
The following warnings were returned from Vault:\n\n  * current_billing_period is deprecated; unless otherwise specified, all requests will default to the current billing period\n\n\n```\n\n<\/CodeBlockConfig>\n\n### Docker image no longer contains `curl`\n\nThe `curl` binary is no longer included in the published Docker container images for Vault and Vault\nEnterprise. If your workflow depends on `curl` being available in the container, consider one of the\nfollowing strategies:\n\n#### Create a wrapper container image\n\nUse the HashiCorp image as a base image to create a new container image with `curl` installed.\n\n```Dockerfile\nFROM hashicorp\/vault-enterprise\nRUN apk add curl\n```\n\n**NOTE:** While this is the preferred option, it will require managing your own registry and rebuilding new images.\n\n#### Install it at runtime dynamically\n\nWhen running the image as root (not recommended), you can install it at runtime using the `apk` package manager:\n\n```shell-session\ndocker exec <CONTAINER-ID> apk add curl\n```\n```shell-session\nkubectl exec -ti <NAME> -- apk add curl\n```\n\nWhen running the image as non-root without privilege escalation (recommended), you can use existing\ntools to install a static binary of `curl` into the `vault` user's home directory:\n\n```shell-session\ndocker exec <CONTAINER-ID> wget https:\/\/github.com\/moparisthebest\/static-curl\/releases\/latest\/download\/curl-amd64 -O \/home\/vault\/curl && chmod +x \/home\/vault\/curl\n```\n```shell-session\nkubectl exec -ti <NAME> -- wget https:\/\/github.com\/moparisthebest\/static-curl\/releases\/latest\/download\/curl-amd64 -O \/home\/vault\/curl && chmod +x \/home\/vault\/curl\n```\n\n**NOTE:** When using this option, you'll want to verify that the static binary comes from a trusted source.\n\n### Request limiter configuration removal\n\nVault 1.16.0 included an experimental request limiter. 
The limiter was disabled\nby default with an opt-in `request_limiter` configuration.\n\nFurther testing indicated that an alternative approach improves performance and\nreduces risk for many workloads. Vault 1.17.0 included a new [adaptive overload\nprotection](\/vault\/docs\/concepts\/adaptive-overload-protection) feature that\nprevents outages when Vault is overwhelmed by write requests.\n\nAdaptive overload protection was a beta feature in 1.17.0.\n\nAs of Vault 1.18.0, the adaptive overload protection feature for writes is\nnow GA and enabled by default for the integrated storage backend.\n\nThe beta `request_limiter` configuration stanza is officially removed in Vault 1.18.0.\n\nVault will output two types of warnings if the `request_limiter` stanza is\ndetected in your Vault config.\n\n1. A UI warning message printed to `stderr`:\n\n```text\nWARNING: Request Limiter configuration is no longer supported; overriding server configuration to disable\n```\n\n2. A log line with level `WARN`, appearing in Vault's logs:\n\n```text\n... [WARN]  unknown or unsupported field request_limiter found in configuration at config.hcl:22:1\n```\n\n### Product usage reporting\n\nAs of 1.18.2, Vault will collect anonymous product usage metrics for HashiCorp. 
This information will be collected\nalongside client activity data, and will be sent automatically if automated reporting is configured, or added to manual\nreports if manual reporting is preferred.\n\nSee the main page for [Vault product usage metrics reporting](\/vault\/docs\/enterprise\/license\/product-usage-reporting) for\nmore details, and information about opt-out.\n\n## Known issues and workarounds\n\n@include 'known-issues\/duplicate-hsm-key.mdx'","site":"vault"}
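Before upgrading to 1.18.x, it is worth checking your configuration for the removed `request_limiter` stanza rather than waiting for the startup warnings described above. A minimal, self-contained sketch (the config path and stanza contents are stand-ins for your real `config.hcl`):

```shell
# Write a sample config containing the removed stanza (illustration only):
cat > /tmp/example-vault.hcl <<'EOF'
request_limiter {
  disable = false
}
EOF

# Scan for the stanza; a match means the config should be cleaned up
# before upgrading to 1.18.x:
if grep -q 'request_limiter' /tmp/example-vault.hcl; then
  echo "request_limiter stanza found: remove it before upgrading"
fi
```

In practice you would point `grep` at your actual config directory (for example `grep -Rn 'request_limiter' /etc/vault.d/`, path assumed).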
{"questions":"vault This page contains the list of deprecations and important or breaking changes page title Upgrading to Vault 0 10 0 Guides for Vault 0 10 0 Please read it carefully layout docs Overview","answers":"---\nlayout: docs\npage_title: Upgrading to Vault 0.10.0 - Guides\ndescription: |-\n  This page contains the list of deprecations and important or breaking changes\n  for Vault 0.10.0. Please read it carefully.\n---\n\n# Overview\n\nThis page contains the list of deprecations and important or breaking changes\nfor Vault 0.10.0 compared to 0.9.0. Please read it carefully.\n\n## Changes since 0.9.6\n\n### Database plugin compatibility\n\nThe database plugin interface was enhanced to support some additional\nfunctionality related to root credential rotation and supporting templated\nURL strings. The changes were made in a backwards-compatible way, and all\nbuiltin plugins were updated with the new features. Custom plugins not built\ninto Vault will need to be upgraded to support templated URL strings and\nroot rotation. Additionally, the Initialize method was deprecated in favor\nof a new Init method that allows configuration modifications made in the\nplugin to be persisted back to the primary data store.\n\n### Removal of returned secret information\n\nFor a long time, Vault has returned configuration given to various secret\nengines and auth methods with secret values (such as secret API keys or\npasswords) still intact, and with a warning to the user on write that anyone\nwith read access could see the secret. This was mostly done to make it easy for\ntools like Terraform to judge whether state had drifted. However, it also feels\nquite un-Vault-y to do this and we've never felt very comfortable doing so. In\n0.10 we have gone through and removed this behavior from the various backends;\nfields which contained secret values are simply no longer returned on read. 
We\nare working with the Terraform team to make changes to their provider to\naccommodate this as best as possible, and users of other tools may have to make\nadjustments, but in the end we felt that the ends did not justify the means and\nwe needed to prioritize security over operational convenience.\n\n### LDAP auth method case sensitivity\n\nWe now treat usernames and groups configured locally for policy assignment in a\ncase insensitive fashion by default. Existing configurations will continue to\nwork as they do now; however, the next time a configuration is written\n`case_sensitive_names` will need to be explicitly set to `true`.\n\n### TTL handling moved to core\n\nAll lease TTL handling has been centralized within the core of Vault to ensure\nconsistency across all backends. Since this was previously delegated to\nindividual backends, there may be some slight differences in TTLs generated\nfrom some backends.\n\n### Default `secret\/` mount is deprecated\n\nIn 0.12 we will stop mounting `secret\/` by default at initialization time (it\nwill still be available in `dev` mode).\n\n## Full list since 0.9.0\n\n### Change to AWS role output\n\nThe AWS authentication backend now allows binds for inputs as either a\ncomma-delimited string or a string array. 
However, to keep consistency with\ninput and output, when reading a role the binds will now be returned as string\narrays rather than strings.\n\n### Change to AWS IAM auth ARN prefix matching\n\nIn order to prefix-match IAM role and instance profile ARNs in AWS auth\nbackend, you now must explicitly opt-in by adding a `*` to the end of the ARN.\nExisting configurations will be upgraded automatically, but when writing a new\nrole configuration the updated behavior will be used.\n\n### Backwards compatible CLI changes\n\nThis upgrade guide is typically reserved for breaking changes, however it\nis worth calling out that the CLI interface to Vault has been completely\nrevamped while maintaining backwards compatibility. This could lead to\npotential confusion while browsing the latest version of the Vault\ndocumentation on vaultproject.io.\n\nAll previous CLI commands should continue to work and are backwards\ncompatible in almost all cases.\n\nDocumentation for previous versions of Vault can be accessed using\nthe GitHub interface by browsing tags (eg [0.9.1 website tree](https:\/\/github.com\/hashicorp\/vault\/tree\/v0.9.1\/website)) or by\n[building the Vault website locally](https:\/\/github.com\/hashicorp\/vault\/tree\/v0.9.1\/website#running-the-site-locally).\n\n### `sys\/health` DR secondary reporting\n\nThe `replication_dr_secondary` bool returned by `sys\/health` could be\nmisleading since it would be `false` both when a cluster was not a DR secondary\nbut also when the node is a standby in the cluster and has not yet fully\nreceived state from the active node. This could cause health checks on LBs to\ndecide that the node was acceptable for traffic even though DR secondaries\ncannot handle normal Vault traffic. (In other words, the bool could only convey\n\"yes\" or \"no\" but not \"not sure yet\".) 
This has been replaced by\n`replication_dr_mode` and `replication_perf_mode` which are string values that\nconvey the current state of the node; a value of `disabled` indicates that\nreplication is disabled or the state is still being discovered. As a result, an\nLB check can positively verify that the node is both not `disabled` and is not\na DR secondary, and avoid sending traffic to it if either is true.\n\n### PKI secret backend roles parameter types\n\nFor `ou` and `organization` in role definitions in the PKI secret backend,\ninput can now be a comma-separated string or an array of strings. Reading a\nrole will now return arrays for these parameters.\n\n### Plugin API changes\n\nThe plugin API has been updated to utilize golang's context.Context package.\nMany function signatures now accept a context object as the first parameter.\nExisting plugins will need to pull in the latest Vault code and update their\nfunction signatures to begin using context and the new gRPC transport.\n\n### AppRole case sensitivity\n\nIn prior versions of Vault, `list` operations against AppRole roles would\nrequire preserving case in the role name, even though most other operations\nwithin AppRole are case-insensitive with respect to the role name. This has\nbeen fixed; existing roles will behave as they have in the past, but new roles\nwill act case-insensitively in these cases.\n\n### Token auth backend roles parameter types\n\nFor `allowed_policies` and `disallowed_policies` in role definitions in the\ntoken auth backend, input can now be a comma-separated string or an array of\nstrings. 
Reading a role will now return arrays for these parameters.\n\n### Transit key exporting\n\nYou can now mark a key in the `transit` backend as `exportable` at any time,\nrather than just at creation time; however, once this value is set, it still\ncannot be unset.\n\n### PKI secret backend roles parameter types\n\nFor `allowed_domains` and `key_usage` in role definitions in the PKI secret\nbackend, input can now be a comma-separated string or an array of strings.\nReading a role will now return arrays for these parameters.\n\n### SSH dynamic keys method defaults to 2048-bit keys\n\nWhen using the dynamic key method in the SSH backend, the default is now to use\n2048-bit keys if no specific key bit size is specified.\n\n### Consul secret backend lease handling\n\nThe `consul` secret backend can now accept both strings and integer numbers of\nseconds for its lease value. The value returned on a role read will be an\ninteger number of seconds instead of a human-friendly string.\n\n### Unprintable characters not allowed in API paths\n\nUnprintable characters are no longer allowed in names in the API (paths and\npath parameters), with an extra restriction on whitespace characters. 
Allowed\ncharacters are those that are considered printable by Unicode plus spaces.","site":"vault","answers_cleaned":"    layout  docs page title  Upgrading to Vault 0 10 0   Guides description       This page contains the list of deprecations and important or breaking changes   for Vault 0 10 0  Please read it carefully         Overview  This page contains the list of deprecations and important or breaking changes for Vault 0 10 0 compared to 0 9 0  Please read it carefully      Changes since 0 9 6      Database plugin compatibility  The database plugin interface was enhanced to support some additional functionality related to root credential rotation and supporting templated URL strings  The changes were made in a backwards compatible way and all builtin plugins were updated with the new features  Custom plugins not built into Vault will need to be upgraded to support templated URL strings and root rotation  Additionally  the Initialize method was deprecated in favor of a new Init method that supports configuration modifications that occur in the plugin back to the primary data store       Removal of returned secret information  For a long time Vault has returned configuration given to various secret engines and auth methods with secret values  such as secret API keys or passwords  still intact  and with a warning to the user on write that anyone with read access could see the secret  This was mostly done to make it easy for tools like Terraform to judge whether state had drifted  However  it also feels quite un Vault y to do this and we ve never felt very comfortable doing so  In 0 10 we have gone through and removed this behavior from the various backends  fields which contained secret values are simply no longer returned on read  We are working with the Terraform team to make changes to their provider to accommodate this as best as possible  and users of other tools may have to make adjustments  but in the end we felt that the ends did not justify the means 
and we needed to prioritize security over operational convenience       LDAP auth method case sensitivity  We now treat usernames and groups configured locally for policy assignment in a case insensitive fashion by default  Existing configurations will continue to work as they do now  however  the next time a configuration is written  case sensitive names  will need to be explicitly set to  true        TTL handling moved to core  All lease TTL handling has been centralized within the core of Vault to ensure consistency across all backends  Since this was previously delegated to individual backends  there may be some slight differences in TTLs generated from some backends       Default  secret   mount is deprecated  In 0 12 we will stop mounting  secret   by default at initialization time  it will still be available in  dev  mode       Full list since 0 9 0      Change to AWS role output  The AWS authentication backend now allows binds for inputs as either a comma delimited string or a string array  However  to keep consistency with input and output  when reading a role the binds will now be returned as string arrays rather than strings       Change to AWS IAM auth ARN prefix matching  In order to prefix match IAM role and instance profile ARNs in AWS auth backend  you now must explicitly opt in by adding a     to the end of the ARN  Existing configurations will be upgraded automatically  but when writing a new role configuration the updated behavior will be used       Backwards compatible CLI changes  This upgrade guide is typically reserved for breaking changes  however it is worth calling out that the CLI interface to Vault has been completely revamped while maintaining backwards compatibility  This could lead to potential confusion while browsing the latest version of the Vault documentation on vaultproject io   All previous CLI commands should continue to work and are backwards compatible in almost all cases   Documentation for previous versions of Vault can be 
accessed using the GitHub interface by browsing tags  eg  0 9 1 website tree  https   github com hashicorp vault tree v0 9 1 website   or by  building the Vault website locally  https   github com hashicorp vault tree v0 9 1 website running the site locally         sys health  DR secondary reporting  The  replication dr secondary  bool returned by  sys health  could be misleading since it would be  false  both when a cluster was not a DR secondary but also when the node is a standby in the cluster and has not yet fully received state from the active node  This could cause health checks on LBs to decide that the node was acceptable for traffic even though DR secondaries cannot handle normal Vault traffic   In other words  the bool could only convey  yes  or  no  but not  not sure yet    This has been replaced by  replication dr mode  and  replication perf mode  which are string values that convey the current state of the node  a value of  disabled  indicates that replication is disabled or the state is still being discovered  As a result  an LB check can positively verify that the node is both not  disabled  and is not a DR secondary  and avoid sending traffic to it if either is true       PKI secret backend roles parameter types  For  ou  and  organization  in role definitions in the PKI secret backend  input can now be a comma separated string or an array of strings  Reading a role will now return arrays for these parameters       Plugin API changes  The plugin API has been updated to utilize golang s context Context package  Many function signatures now accept a context object as the first parameter  Existing plugins will need to pull in the latest Vault code and update their function signatures to begin using context and the new gRPC transport       AppRole case sensitivity  In prior versions of Vault   list  operations against AppRole roles would require preserving case in the role name  even though most other operations within AppRole are case insensitive with 
respect to the role name  This has been fixed  existing roles will behave as they have in the past  but new roles will act case insensitively in these cases       Token auth backend roles parameter types  For  allowed policies  and  disallowed policies  in role definitions in the token auth backend  input can now be a comma separated string or an array of strings  Reading a role will now return arrays for these parameters       Transit key exporting  You can now mark a key in the  transit  backend as  exportable  at any time  rather than just at creation time  however  once this value is set  it still cannot be unset       PKI secret backend roles parameter types  For  allowed domains  and  key usage  in role definitions in the PKI secret backend  input can now be a comma separated string or an array of strings  Reading a role will now return arrays for these parameters       SSH dynamic keys method defaults to 2048 bit keys  When using the dynamic key method in the SSH backend  the default is now to use 2048 bit keys if no specific key bit size is specified       Consul secret backend lease handling  The  consul  secret backend can now accept both strings and integer numbers of seconds for its lease value  The value returned on a role read will be an integer number of seconds instead of a human friendly string       Unprintable characters not allowed in API paths  Unprintable characters are no longer allowed in names in the API  paths and path parameters   with an extra restriction on whitespace characters  Allowed characters are those that are considered printable by Unicode plus spaces "}
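Several of the 0.10.0 changes above (AWS role binds, `ou`/`organization`, `allowed_policies`/`disallowed_policies`, `allowed_domains`/`key_usage`) follow the same pattern: input may be either a comma-separated string or an array of strings, but reads always return arrays. A minimal Python sketch of that normalization, as an illustration only (this is not Vault's actual Go implementation):

```python
def normalize_list_param(value):
    """Accept a comma-separated string or a list of strings and always
    return a list of trimmed strings, mirroring how these role fields
    are now returned on read."""
    if isinstance(value, str):
        return [item.strip() for item in value.split(",") if item.strip()]
    return [str(item) for item in value]
```

Both `"default,dev"` and `["default", "dev"]` normalize to the same array, which is why existing comma-separated configurations keep working while reads become consistent.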
{"questions":"vault This page contains the list of deprecations and important or breaking changes for Vault 1 2 0 Please read it carefully page title Upgrading to Vault 1 2 0 Guides layout docs Overview","answers":"---\nlayout: docs\npage_title: Upgrading to Vault 1.2.0 - Guides\ndescription: |-\n  This page contains the list of deprecations and important or breaking changes\n  for Vault 1.2.0. Please read it carefully.\n---\n\n# Overview\n\nThis page contains the list of deprecations and important or breaking changes\nfor Vault 1.2.0 compared to 1.1.0. Please read it carefully.\n\n## Known issues\n\n### AppRole upgrade issue\n\nDue to a bug, on upgrade AppRole roles cannot be read properly. If using AppRole, do not upgrade until this issue is fixed in 1.2.1.\n\n## Changes\/Deprecations\n\n### Path character handling\n\nDue to underlying changes in Go's runtime past version 1.11.5, Vault is now\nstricter about what characters it will accept in path names. Whereas before it\nwould filter out unprintable characters (and this could be turned off), control\ncharacters and other invalid characters are now rejected within Go's HTTP\nlibrary before the request is passed to Vault, and this cannot be disabled. To\ncontinue using these (e.g. for already-written paths), they must be properly\npercent-encoded (e.g. `\\r` becomes `%0D`, `\\x00` becomes `%00`, and so on).\n\n### AWSKMS seal region\n\nThe user-configured regions on the AWSKMS seal stanza will now be preferred\nover regions set in the enclosing environment.\n\n### Audit logging of empty values\n\nAll values in audit logs now are omitted if they are empty. 
This helps reduce\nthe size of audit log entries by not reproducing keys in each entry that\ncommonly don't contain any value, which can help in cases where audit log\nentries are above the maximum UDP packet size and others.\n\n### Rollback logging\n\nRollback will no longer display log messages when it runs; it will only display\nmessages if an error occurs.\n\n### Database plugins\n\nDatabase plugins now default to 4 max open connections rather than 2. This\nshould be safe in nearly all cases and fixes some issues where a single\noperation could fail with the default configuration because it needed three\nconnections just for that operation. However, this could result in an increase\nin held open file descriptors for each database configuration, so ensure that\nthere is sufficient overhead.\n\n### AppRole various changes\n\n- AppRole uses new, common token fields for values that overlap with other auth\n  methods. `period` and `policies` will continue to work, with priority being\n  given to the `token_` prefixed versions of these fields, but the values for\n  those will only be returned on read if they were set initially.\n- `default` is no longer automatically added to policies after submission. It\n  was a no-op anyways since Vault's core would always add it, and changing this\n  behavior allows AppRole to support the new `token_no_default_policy`\n  parameter\n- The long-deprecated `bound_cidr_list` is no longer returned when reading a\n  role.\n\n### Token store roles changes\n\nToken store roles use new, common token fields for the values that overlap with\nother auth backends. `period`, `explicit_max_ttl`, and `bound_cidrs` will\ncontinue to work, with priority being given to the `token_` prefixed versions\nof those parameters. 
They will also be returned when doing a read on the role\nif they were used to provide values initially; however, in Vault 1.4 if\n`period` or `explicit_max_ttl` is zero they will no longer be returned.\n(`explicit_max_ttl` was already not returned if empty.)\n\n### Go API\/SDK changes\n\nVault now uses Go's official dependency management system, Go Modules, to\nmanage dependencies. As a result, to both reduce transitive dependencies for\nAPI library users and plugin authors, and to work around various conflicts, we\nhave moved various helpers around, mostly under an `sdk\/` submodule. A couple of\nfunctions have also moved from plugin helper code to the `api\/` submodule. If\nyou are a plugin author, take a look at some of our official plugins and the\npaths they are importing for guidance.\n\n### Change in LDAP group CN handling\n\nA bug fix put in place in Vault 1.1.1 to allow group CNs to be found from an\nLDAP server in lowercase `cn` as well as uppercase `CN` had an unintended\nconsequence. If prior to that a group used `cn`, as in `cn=foo,ou=bar` then the\ngroup that would need to be put into place in the LDAP plugin to match against\npolicies is `cn=foo,ou=bar` since the CN would not be correctly found. After\nthe change, the CN was correctly found, but this would result in the group name\nbeing parsed as `foo` and would not match groups using the full DN. In 1.1.5+,\nthere is a boolean config setting `use_pre111_group_cn_behavior` to allow\nreverting to the old matching behavior; we also attempt to upgrade existing\nconfigs to have that defaulted to true.\n\n### JWT\/OIDC plugin\n\nLogins of role_type \"oidc\" via the \/login path are no longer allowed.\n\n### ACL wildcards\n\nNew ordering put into place in Vault 1.1.1 defines which policy wins when there\nare multiple inexact matches and at least one path contains `+`. `+*` is now\nillegal in policy paths. 
The previous behavior simply selected any matching\nsegment-wildcard path that matched.\n\n### Replication\n\nDue to technical limitations, mounting and unmounting was not previously\npossible from a performance secondary. These have been resolved, and these\noperations may now be run from a performance secondary.","site":"vault","answers_cleaned":"    layout  docs page title  Upgrading to Vault 1 2 0   Guides description       This page contains the list of deprecations and important or breaking changes   for Vault 1 2 0  Please read it carefully         Overview  This page contains the list of deprecations and important or breaking changes for Vault 1 2 0 compared to 1 1 0  Please read it carefully      Known issues      AppRole upgrade issue  Due to a bug  on upgrade AppRole roles cannot be read properly  If using AppRole  do not upgrade until this issue is fixed in 1 2 1      Changes Deprecations      Path character handling  Due to underlying changes in Go s runtime past version 1 11 5  Vault is now stricter about what characters it will accept in path names  Whereas before it would filter out unprintable characters  and this could be turned off   control characters and other invalid characters are now rejected within Go s HTTP library before the request is passed to Vault  and this cannot be disabled  To continue using these  e g  for already written paths   they must be properly percent encoded  e g    r  becomes   0D     x00  becomes   00   and so on        AWSKMS seal region  The user configured regions on the AWSKMS seal stanza will now be preferred over regions set in the enclosing environment       Audit logging of empty values  All values in audit logs now are omitted if they are empty  This helps reduce the size of audit log entries by not reproducing keys in each entry that commonly don t contain any value  which can help in cases where audit log entries are above the maximum UDP packet size and others       Rollback logging  Rollback will no longer 
display log messages when it runs  it will only display messages if an error occurs       Database plugins  Database plugins now default to 4 max open connections rather than 2  This should be safe in nearly all cases and fixes some issues where a single operation could fail with the default configuration because it needed three connections just for that operation  However  this could result in an increase in held open file descriptors for each database configuration  so ensure that there is sufficient overhead       AppRole various changes    AppRole uses new  common token fields for values that overlap with other auth   methods   period  and  policies  will continue to work  with priority being   given to the  token   prefixed versions of these fields  but the values for   those will only be returned on read if they were set initially     default  is no longer automatically added to policies after submission  It   was a no op anyways since Vault s core would always add it  and changing this   behavior allows AppRole to support the new  token no default policy    parameter   The long deprecated  bound cidr list  is no longer returned when reading a   role       Token store roles changes  Token store roles use new  common token fields for the values that overlap with other auth backends   period    explicit max ttl   and  bound cidrs  will continue to work  with priority being given to the  token   prefixed versions of those parameters  They will also be returned when doing a read on the role if they were used to provide values initially  however  in Vault 1 4 if  period  or  explicit max ttl  is zero they will no longer be returned    explicit max ttl  was already not returned if empty        Go API SDK changes  Vault now uses Go s official dependency management system  Go Modules  to manage dependencies  As a result to both reduce transitive dependencies for API library users and plugin authors  and to work around various conflicts  we have moved various helpers 
around  mostly under an  sdk   submodule  A couple of functions have also moved from plugin helper code to the  api   submodule  If you are a plugin author  take a look at some of our official plugins and the paths they are importing for guidance       Change in LDAP group CN handling  A bug fix put in place in Vault 1 1 1 to allow group CNs to be found from an LDAP server in lowercase  cn  as well as uppercase  CN  had an unintended consequence  If prior to that a group used  cn   as in  cn foo ou bar  then the group that would need to be put into place in the LDAP plugin to match against policies is  cn foo ou bar  since the CN would not be correctly found  After the change  the CN was correctly found  but this would result in the group name being parsed as  foo  and would not match groups using the full DN  In 1 1 5   there is a boolean config setting  use pre111 group cn behavior  to allow reverting to the old matching behavior  we also attempt to upgrade exiting configs to have that defaulted to true       JWT OIDC plugin  Logins of role type  oidc  via the  login path are no longer allowed       ACL wildcards  New ordering put into place in Vault 1 1 1 defines which policy wins when there are multiple inexact matches and at least one path contains           is now illegal in policy paths  The previous behavior simply selected any matching segment wildcard path that matched       Replication  Due to technical limitations  mounting and unmounting was not previously possible from a performance secondary  These have been resolved  and these operations may now be run from a performance secondary "}
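The path-character change described for Vault 1.2.0 means control characters in already-written paths must be percent-encoded by the client before the request reaches Go's HTTP library. A small Python illustration of the encoding the guide describes (`encode_path_segment` is a hypothetical helper, not part of any Vault client library):

```python
from urllib.parse import quote

def encode_path_segment(segment: str) -> str:
    """Percent-encode a single path segment so control characters
    survive Go's stricter HTTP parsing; safe="" also encodes "/"."""
    return quote(segment, safe="")
```

As in the guide's examples, `"\r"` becomes `%0D` and `"\x00"` becomes `%00`.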
{"questions":"vault This page contains the list of breaking changes for Vault 0 6 1 Please read page title Upgrading to Vault 0 6 1 Guides it carefully layout docs Overview","answers":"---\nlayout: docs\npage_title: Upgrading to Vault 0.6.1 - Guides\ndescription: |-\n  This page contains the list of breaking changes for Vault 0.6.1. Please read\n  it carefully.\n---\n\n# Overview\n\nThis page contains the list of breaking changes for Vault 0.6.1. Please read it\ncarefully.\n\n## Standby nodes must be 0.6.1 as well\n\nOnce an active node is running 0.6.1, only standby nodes running 0.6.1+ will be\nable to form an HA cluster. If following our [general upgrade\ninstructions](\/vault\/docs\/upgrading) this will\nnot be an issue.\n\n## Health endpoint status code changes\n\nPrior to 0.6.1, the health endpoint would return a `500` (Internal Server\nError) for both a sealed and uninitialized state. In both states this was\nconfusing, since it was hard to tell, based on the status code, an actual\ninternal error from Vault from a Vault that was simply uninitialized or sealed,\nnot to mention differentiating between those two states.\n\nIn 0.6.1, a sealed Vault will return a `503` (Service Unavailable) status code.\nAs before, this can be adjusted with the `sealedcode` query parameter. An\nuninitialized Vault will return a `501` (Not Implemented) status code. This can\nbe adjusted with the `uninitcode` query parameter.\n\nThis removes ambiguity\/confusion and falls more in line with the intention of\neach status code (including `500`).\n\n## Root token creation restrictions\n\nRoot tokens (tokens with the `root` policy) can no longer be created except by\nanother root token or the\n[`generate-root`](\/vault\/api-docs\/system\/generate-root)\nendpoint or CLI command.\n\n## PKI backend certificates will contain default key usages\n\nIssued certificates from the `pki` backend against roles created or modified\nafter upgrading will contain a set of default key usages. 
This increases\ncompatibility with some software that requires strict adherence to RFCs, such\nas OpenVPN.\n\nThis behavior is fully adjustable; see the [PKI backend\ndocumentation](\/vault\/docs\/secrets\/pki) for\ndetails.\n\n## DynamoDB does not support HA by default\n\nIf using DynamoDB and want to use HA support, you will need to explicitly\nenable it in Vault's configuration; see the\n[documentation](\/vault\/docs\/configuration#ha_storage)\nfor details.\n\nIf you are already using DynamoDB in an HA fashion and wish to keep doing so,\nit is _very important_ that you set this option **before** upgrading your Vault\ninstances. Without doing so, each Vault instance will believe that it is\nstandalone and there could be consistency issues.\n\n## LDAP auth method forgets bind password and insecure TLS settings\n\nDue to a bug, these two settings are forgotten if they have been configured in\nthe LDAP backend prior to 0.6.1. If you are using these settings with LDAP,\nplease be sure to re-submit your LDAP configuration to Vault after the upgrade,\nso ensure that you have a valid token to do so before upgrading if you are\nrelying on LDAP authentication for permissions to modify the backend itself.\n\n## LDAP auth method does not search `memberOf`\n\nThe LDAP backend went from a model where all permutations of storing and\nfiltering groups were tried in all cases to one where specific filters are\ndefined by the administrator. This vastly increases overall directory\ncompatibility, especially with Active Directory when using nested groups, but\nunfortunately has the side effect that `memberOf` is no longer searched for by\ndefault, which is a breaking change for many existing setups.\n\n`Scenario 2` in the [updated\ndocumentation](\/vault\/docs\/auth\/ldap) shows an\nexample of configuring the backend to query `memberOf`. 
It is recommended that\na test Vault server be set up and that successful authentication can be\nperformed using the new configuration before upgrading a primary or production\nVault instance.\n\nIn addition, if LDAP is relied upon for authentication, operators should ensure\nthat they have valid tokens with policies allowing modification of LDAP\nparameters before upgrading, so that once an upgrade is performed, the new\nconfiguration can be specified successfully.\n\n## App-ID is deprecated\n\nWith the addition of the new [AppRole\nbackend](\/vault\/docs\/auth\/approle), App-ID is\ndeprecated. There are no current plans to remove it, but we encourage using\nAppRole whenever possible, as it offers enhanced functionality and can\naccommodate many more types of authentication paradigms. App-ID will receive\nsecurity-related fixes only.","site":"vault"}
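The LDAP note above presumes a token whose policy allows rewriting the auth backend's configuration. A minimal policy sketch, assuming the default `auth/ldap` mount path and written in current `capabilities` syntax (releases contemporary with 0.6.1 used the older `policy = "write"` form):

```hcl
# Illustrative only: grants what is needed to re-submit the LDAP bind
# password and TLS settings after the upgrade (default mount path assumed).
path "auth/ldap/config" {
  capabilities = ["read", "update"]
}
```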
{"questions":"vault This page contains the list of deprecations and important or breaking changes for Vault 0 11 0 Please read it carefully page title Upgrading to Vault 0 11 0 Guides layout docs Overview","answers":"---\nlayout: docs\npage_title: Upgrading to Vault 0.11.0 - Guides\ndescription: |-\n  This page contains the list of deprecations and important or breaking changes\n  for Vault 0.11.0. Please read it carefully.\n---\n\n# Overview\n\nThis page contains the list of deprecations and important or breaking changes\nfor Vault 0.11.0 compared to 0.10.0. Please read it carefully.\n\n## Known issues\n\n### Nomad integration\n\nUsers that integrate Vault with Nomad should hold off on upgrading. A modification to\nVault's API is causing a runtime issue with the Nomad to Vault integration.\n\n### Minified JSON policies\n\nUsers that generate policies in minified JSON may encounter parsing errors due to\na regression in the policy parser when it encounters repeating brackets. Although\nHCL is the official language for policies in Vault, HCL is JSON compatible and JSON\nshould work in place of HCL. To work around this error, pretty print the JSON policies\nor add spaces between repeating brackets. This regression will be addressed in\na future release.\n\n### Common mount prefixes\n\nBefore running the upgrade, users should run `vault secrets list` and `vault auth list`\nto check their mount table to ensure that mounts do not have common prefix \"folders\".\nFor example, if there is a mount with path `team1\/` and a mount with path `team1\/secrets`,\nVault will fail to unseal. Before upgrade, these mounts must be remounted at a path that\ndoes not share a common prefix.\n\n## Changes since 0.10.4\n\n### Request timeouts\n\nA default request timeout of 90s is now enforced. This setting can be\noverwritten in the config file. If you anticipate requests taking longer than\n90s, this setting should be configured before upgrading.\n\n### `sys\/` top level injection\n\nFor the last two years, for backwards compatibility, data for various `sys\/`\nroutes has been injected into both the Secret's Data map and into the top level\nof the JSON response object. However, this has some subtle issues that pop up\nfrom time to time and is becoming increasingly complicated to maintain, so it's\nfinally being removed.\n\n### Path fallback for list operations\n\nFor a very long time Vault has automatically adjusted `list` operations to\nalways end in a `\/`, as list operations operate on prefixes, so all list\noperations by definition end with `\/`. This was done server-side, so it affects all\nclients. However, this has also led to a lot of confusion for users writing\npolicies that assume that the path that they use in the CLI is the path used\ninternally. Starting in 0.11, ACL policies gain a new fallback rule for\nlisting: they will use a matching path ending in `\/` if available, but if not\nfound, they will look for the same path without a trailing `\/`. This allows\nputting `list` capabilities in the same path block as most other capabilities\nfor that path, while not providing any extra access if `list` wasn't actually\nprovided there.\n\n### Performance standbys on by default\n\nIf your flavor\/license of Vault Enterprise supports Performance Standbys, they\nare on by default. You can disable this behavior per-node with the\n`disable_performance_standby` configuration flag.\n\n### AWS secret engine roles\n\nRoles in the AWS Secret Engine were previously ambiguous. For example, if the\n`arn` parameter had been specified, that could have been interpreted as the ARN\nof an AWS IAM policy to attach to an IAM user or it could have been the ARN of\nan AWS role to assume. Now, types are explicit, both in terms of what\ncredential type is being requested (e.g., an IAM User or an Assumed Role?) as\nwell as the parameters being sent to Vault (e.g., the IAM policy document\nattached to an IAM user or used during a GetFederationToken call). All\ncredential retrieval remains backwards compatible, as does updating role data.\nHowever, the data returned when reading role data is now different and\nbreaking, so anything which reads role data out of Vault will need to be\nupdated to handle the new role data format.\n\nWhile creating\/updating roles remains backwards compatible, the old parameters\nare now considered deprecated. You should use the new parameters as documented\nin the API docs.\n\nAs part of this, the `\/aws\/creds\/` and `\/aws\/sts\/` endpoints have been merged,\nwith the behavior only differing as specified below. The `\/aws\/sts\/` endpoint\nis considered deprecated and should only be used when needing backwards\ncompatibility.\n\nAll roles will be automatically updated to the new role format when accessed.\nHowever, due to the way role data was previously being stored in Vault, it's\npossible that invalid data was stored that both makes the upgrade impossible and\nwould have made the role unable to retrieve credentials. In this\nsituation, the previous role data is returned in an `invalid_data` key so you\ncan inspect what used to be in the role and correct the role data if desired.\nOne consequence of the prior AWS role storage format is that a single Vault\nrole could have led to two different AWS credential types being retrieved when\na `policy` parameter was stored. In this case, these legacy roles will be\nallowed to retrieve both IAM User and Federation Token credentials, with the\ncredential type depending on the path used to access it (IAM User if accessed\nvia the `\/aws\/creds\/<role_name>` endpoint and Federation Token if accessed via\nthe `\/aws\/sts\/<role_name>` endpoint).\n\n## Full list since 0.10.0\n\n### Revocations of dynamic secrets leases now asynchronous\n\nDynamic secret lease revocations are now queued\/asynchronous rather\nthan synchronous. This allows Vault to take responsibility for revocation\neven if the initial attempt fails. The previous synchronous behavior can be\nattained via the `-sync` CLI flag or `sync` API parameter. When in\nsynchronous mode, if the operation results in failure it is up to the user\nto retry.\n\n### CLI retries\n\nThe CLI will no longer retry commands on 5xx errors. This was a\nsource of confusion to users as to why Vault would \"hang\" before returning a\n5xx error. The Go API client still defaults to two retries.\n\n### Identity entity alias metadata\n\nYou can no longer manually set metadata on\nentity aliases. All alias data (except the canonical entity ID it refers to)\nis intended to be managed by the plugin providing the alias information, so\nallowing it to be set manually didn't make sense.\n\n### Convergent encryption version 3\n\nIf you are using `transit`'s convergent encryption feature, which prior to this\nrelease was at version 2, we recommend\n[rotating](\/vault\/api-docs\/secret\/transit#rotate-key)\nyour encryption key (the new key will use version 3) and\n[rewrapping](\/vault\/api-docs\/secret\/transit#rewrap-data)\nyour data to mitigate the chance of offline plaintext-confirmation attacks.\n\n### PKI duration return types\n\nThe PKI backend now returns durations (e.g. when reading a role) as an integer\nnumber of seconds instead of a Go-style string.","site":"vault"}
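The 0.11 list-fallback rule described above can be made concrete with a policy fragment (the paths are hypothetical):

```hcl
# Hypothetical path. A list on "secret/team1/" first looks for a block
# matching "secret/team1/" and, if none exists, falls back to this one --
# so "list" can now sit next to the path's other capabilities.
path "secret/team1" {
  capabilities = ["read", "list"]
}

# Pre-0.11, "list" only took effect in a block whose path ended in "/":
# path "secret/team1/" {
#   capabilities = ["list"]
# }
```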
{"questions":"vault page title Vault HA upgrades without Autopilot Upgrade Automation Pre 1 11 Upgrade instructions for Vault HA Pre 1 11 or Vault without autopilot upgrade automation being enabled Be sure to read the Upgrading Vault Guides as well Vault HA upgrades without autopilot upgrade automation Pre 1 11 layout docs This is our recommended upgrade procedure if one of the following applies","answers":"---\nlayout: docs\npage_title: Vault HA upgrades without Autopilot Upgrade Automation (Pre 1.11)\ndescription: |-\n  Upgrade instructions for Vault HA Pre 1.11 or Vault without autopilot upgrade automation being enabled. Be sure to read the Upgrading-Vault Guides as well.\n---\n\n# Vault HA upgrades without autopilot upgrade automation (Pre 1.11)\n\nThis is our recommended upgrade procedure if **one** of the following applies:\n\n- Running Vault version earlier than 1.11\n- Opt-out the [Autopilot automated upgrade](\/vault\/docs\/concepts\/integrated-storage\/autopilot#automated-upgrade) features with Vault 1.11 or later\n- Running Vault with external storage backend such as Consul\n\nYou should consider how to apply the steps described in this document to your\nparticular setup since HA setups can differ on whether a load balancer is in\nuse, what addresses clients are being given to connect to Vault (standby +\nleader, leader-only, or discovered via service discovery), etc.\n\nIf you are running on Vault 1.11+ with Integrated Storage and wish to enable the\nAutopilot upgrade automation features, read to the [automated\nupgrades](\/vault\/docs\/concepts\/integrated-storage\/autopilot#automated-upgrades)\ndocumentation for details and the [Automate Upgrades with Vault\nEnterprise](\/vault\/tutorials\/raft\/raft-upgrade-automation) tutorial for\nadditional guidance.\n\n\n## HA installations\n\nRegardless of the method you use, do not fail over from a newer version of Vault\nto an older version. 
Our suggested procedure is designed to prevent this.\n\nPlease note that Vault does not support true zero-downtime upgrades, but with a\nproper upgrade procedure the downtime should be very short (a few hundred\nmilliseconds to a second, depending on the speed of access to the storage\nbackend).\n\n<Warning title=\"Important\">\n\nIf you are currently running on Vault 1.11+ with Integrated Storage and have\nchosen to opt out of the Autopilot automated upgrade features, please disable the\ndefault automated upgrade migrations feature of Vault. To disable this\nfeature, follow the [Automate Upgrades with Vault Enterprise Autopilot\nconfiguration](\/vault\/tutorials\/raft\/raft-upgrade-automation#autopilot-configuration)\ntutorial for more details. Without disabling this feature, you may run into the Lost\nQuorum issue as described in the [Quorum lost while upgrading the vault from\n1.11.0 to later version of\nit](https:\/\/support.hashicorp.com\/hc\/en-us\/articles\/7122445204755-Quorum-lost-while-upgrading-the-vault-from-1-11-0-to-later-version-of-it)\narticle.\n\n<\/Warning>\n\nPerform these steps on each standby:\n\n1. Properly shut down Vault on the standby node via `SIGINT` or `SIGTERM`\n2. Replace the Vault binary with the new version; ensure that `mlock()`\n   capability is added to the new binary with\n   [setcap](\/vault\/docs\/configuration#disable_mlock)\n3. Start the standby node\n4. Unseal the standby node\n5. Verify `vault status` shows correct Version and HA Mode is `standby`\n6. Review the node's logs to ensure successful startup and unseal\n\nAt this point all standby nodes are upgraded and ready to take over. The\nupgrade will not complete until one of the upgraded standby nodes takes over\nactive duty.\n\nTo complete the cluster upgrade:\n\n1. Properly shut down the remaining (active) node via `SIGINT` or `SIGTERM`\n\n   <Warning title=\"Important\">\n\n   DO NOT attempt to issue a [step-down](\/vault\/docs\/commands\/operator\/step-down) \n   operation at any time during the upgrade process. \n\n   <\/Warning>\n\n   <Note>\n\n   It is important that you shut the node down properly.\n   This will release the current leadership and the HA lock, allowing a standby\n   node to take over with a very short delay.\n   If you kill Vault without letting it release the lock, a standby node will\n   not be able to take over until the lock's timeout period has expired. This\n   is backend-specific but could be ten seconds or more.\n\n   <\/Note>\n\n2. Replace the Vault binary with the new version; ensure that `mlock()`\n   capability is added to the new binary with\n   [setcap](\/vault\/docs\/configuration#disable_mlock)\n3. Start the node\n4. Unseal the node\n5. Verify `vault status` shows correct Version and HA Mode is `standby`\n6. Review the node's logs to ensure successful startup and unseal\n\nInternal upgrade tasks will happen after one of the upgraded standby nodes\ntakes over active duty.\n\nBe sure to also read and follow any instructions in the version-specific\nupgrade notes.\n\n## Enterprise replication installations\n\nSee the main [upgrading](\/vault\/docs\/upgrading#enterprise-replication-installations) page.","site":"vault"}
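The warning above comes down to one Autopilot setting. A sketch for Vault Enterprise 1.11+ with Integrated Storage (confirm the flag name against your version's `vault operator raft autopilot` documentation before relying on it):

```shell
# Run before starting the rolling upgrade so upgraded nodes are not
# auto-promoted by version, which is what can cost the cluster quorum.
vault operator raft autopilot set-config -disable-upgrade-migration=true

# Confirm the resulting Autopilot configuration:
vault operator raft autopilot get-config
```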
{"questions":"vault This page contains the list of deprecations and important or breaking changes for Vault 1 9 x Please read it carefully layout docs page title Upgrading to Vault 1 9 x Guides Overview","answers":"---\nlayout: docs\npage_title: Upgrading to Vault 1.9.x - Guides\ndescription: |-\n  This page contains the list of deprecations and important or breaking changes\n  for Vault 1.9.x. Please read it carefully.\n---\n\n# Overview\n\nThis page contains the list of deprecations and important or breaking changes\nfor Vault 1.9.x compared to 1.8. Please read it carefully.\n\n## OIDC provider\n\nVault 1.9.0 introduced the ability for Vault to be an OpenID Connect (OIDC) identity\nprovider. To support the feature, Vault's [default policy](\/vault\/docs\/concepts\/policies#default-policy)\nwas modified to include an ACL rule for its Authorization Endpoint. Due to the handling\nof Vault's default policy during upgrades, existing deployments of Vault that are upgraded\nto 1.9.0 will not have this required ACL rule.\n\nIf you're upgrading to 1.9.0 and want to use the new OIDC provider feature, the following\nACL rule must be added to the default policy **or** a policy associated with the Vault\n[Auth Method](\/vault\/docs\/auth) used to authenticate end-users during\nthe OIDC flow.\n\n```hcl\n# Allow a token to make requests to the authorization endpoint for OIDC providers.\npath \"identity\/oidc\/provider\/+\/authorize\" {\n  capabilities = [\"read\", \"update\"]\n}\n```\n\n## Identity tokens\n\nThe Identity secrets engine has changed the procedure for creating Identity\ntoken roles. When creating a role, the key parameter is required and the key\nmust exist. 
Previously, it was possible to create a role and assign it a named\nkey that did not yet exist despite the documentation stating otherwise.\n\nAll calls to [create or update a role](\/vault\/api-docs\/secret\/identity\/tokens#create-or-update-a-role)\nmust be checked to ensure that roles are not being created or updated with\nnon-existent keys.\n\n## SSH role parameter `allowed_extensions` behavior change\n\nPrior versions of Vault allowed clients to specify any extension when requesting\nSSH certificate [signing requests](\/vault\/api-docs\/secret\/ssh#sign-ssh-key)\nif their role had an `allowed_extensions` set to `\"\"` or was missing.\n\nNow, Vault will reject a client request that specifies extensions if the role\nparameter `allowed_extensions` is empty or missing from the role they are\nassociated with.\n\nTo re-enable the old behavior, update the roles with a value\nof `\"*\"` to the `allowed_extensions` parameter allowing any\/all extensions to be\nspecified by clients.\n\n@include 'entity-alias-mapping.mdx'\n\n## Deprecations\n\n### HTTP request counter deprecation\n\nIn Vault 1.9, the internal HTTP Request count\n[API](\/vault\/api-docs\/v1.8.x\/system\/internal-counters#http-requests)\nwill be removed from the product. Calls to the endpoint will result in a 404\nerror with a message stating that `functionality on this path has been removed`.\n\nVault does not make backwards compatible guarantees on internal APIs (those\nprefaced with `sys\/internal`). They are subject to change and may disappear\nwithout notice.\n\n### Etcd v2\n\nSupport for Etcd v2 will be removed from Vault in Vault 1.10 (not this Vault\nrelease, but the next one). 
The Etcd v2 API\nwas deprecated with the release of [Etcd\nv3.5](https:\/\/etcd.io\/blog\/2021\/announcing-etcd-3.5\/), and will be\ndecommissioned in the Etcd v3.6 release.\n\nUsers upgrading to Vault 1.9 and planning to eventually upgrade to Vault 1.10\nshould prepare to [migrate](\/vault\/docs\/commands\/operator\/migrate) Vault storage to\nan Etcd v3 cluster prior to upgrading to Vault 1.10. All storage migrations\nshould have [backups](\/vault\/docs\/concepts\/storage#backing-up-vault-s-persisted-data)\ntaken prior to migration.\n\n## TLS cipher suites changes\n\nIn Vault 1.9, due to changes in Go 1.17, the `tls_prefer_server_cipher_suites`\nTCP configuration parameter has been deprecated and its value will be ignored.\n\nAdditionally, Go has begun doing automated cipher suite ordering and no longer\nrespects the order of suites given in `tls_cipher_suites`.\n\nSee [this blog post](https:\/\/go.dev\/blog\/tls-cipher-suites) for more information.\n\n@include 'pki-forwarding-bug.mdx'\n\n\n## Known issues\n\n@include 'raft-panic-old-tls-key.mdx'\n\n### Identity token backend key rotations\n\nExisting Vault installations that use the [Identity Token\nbackend](\/vault\/api-docs\/secret\/identity\/tokens) and have [named\nkeys](\/vault\/api-docs\/secret\/identity\/tokens#create-a-named-key) generated will\nencounter a panic when any of those existing keys pass their\n`rotation_period`. This issue affects Vault 1.9.0, and is fixed in Vault 1.9.1.\nUsers should upgrade directly to 1.9.1 or above in order to avoid this panic.\n\nIf a panic is encountered after an upgrade to Vault 1.9.0, the named key will be\ncorrupted on storage and become unusable. In this case, the key will need to be\ndeleted and re-created. 
A fix to fully mitigate this panic will be addressed in\nVault 1.9.3.\n\n### Activity log Non-Entity tokens\n\nWhen upgrading Vault from 1.8 (or earlier) to 1.9 (or later), client counts of [non-entity tokens](\/vault\/docs\/concepts\/client-count#non-entity-tokens) will only include the tokens used after the upgrade.\n\nStarting in Vault 1.9, the activity log records and de-duplicates non-entity tokens by using the namespace and token's policies to generate a unique identifier. Because Vault did not create identifiers for these tokens before 1.9, the activity log cannot know whether this token has been seen pre-1.9. To prevent inaccurate and inflated counts, the activity log will ignore any counts of non-entity tokens that were created before the upgrade and only the non-entity tokens from versions 1.9 and later will be counted.\n\nBefore upgrading, you should [query Vault usage metrics](\/vault\/tutorials\/monitoring\/usage-metrics#querying-usage-metrics) and report the usage data for billing purposes.\n\nSee the client count [overview](\/vault\/docs\/concepts\/client-count) and [FAQ](\/vault\/docs\/concepts\/client-count\/faq) for more information.","site":"vault"}
log Non Entity tokens  When upgrading Vault from 1 8  or earlier  to 1 9  or later   client counts of  non entity tokens   vault docs concepts client count non entity tokens  will only include the tokens used after the upgrade   Starting in Vault 1 9  the activity log records and de duplicates non entity tokens by using the namespace and token s policies to generate a unique identifier  Because Vault did not create identifiers for these tokens before 1 9  the activity log cannot know whether this token has been seen pre 1 9  To prevent inaccurate and inflated counts  the activity log will ignore any counts of non entity tokens that were created before the upgrade and only the non entity tokens from versions 1 9 and later will be counted   Before upgrading  you should  query Vault usage metrics   vault tutorials monitoring usage metrics querying usage metrics  and report the usage data for billing purposes   See the client count  overview   vault docs concepts client count  and  FAQ   vault docs concepts client count faq  for more information "}
{"questions":"vault page title Upgrading to Vault 0 5 0 Guides layout docs actions you must take to facilitate a smooth upgrade path This page contains the full list of breaking changes for Vault 0 5 including Overview","answers":"---\nlayout: docs\npage_title: Upgrading to Vault 0.5.0 - Guides\ndescription: |-\n  This page contains the full list of breaking changes for Vault 0.5, including\n  actions you must take to facilitate a smooth upgrade path.\n---\n\n# Overview\n\nThis page contains the list of breaking changes for Vault 0.5. Please read it\ncarefully.\n\nPlease note that these are changes to Vault itself. Client libraries maintained\nby HashiCorp have been updated with support for these changes, but if you are\nusing community-supported libraries, you should ensure that they are ready for\nVault 0.5 before upgrading.\n\n## Rekey requires nonce\n\nVault now generates a nonce when a rekey operation is started in order to\nensure that the operation cannot be hijacked. The nonce is output when the\nrekey operation is started and when rekey status is requested.\n\nThe nonce must be provided as part of the request parameters when providing an\nunseal key. The nonce can be communicated from the request initiator to unseal\nkey holders via side channels; the unseal key holders can then verify the nonce\n(by providing it) when they submit their unseal key.\n\nAs a convenience, if using the CLI interactively to provide the unseal key, the\nnonce will be displayed for verification but the user will not be required to\nmanually re-type it.\n\n## `TTL` field in token lookup\n\nPreviously, the `ttl` field returned when calling `lookup` or `lookup-self` on\nthe token auth method displayed the TTL set at token creation. 
It\nnow displays the time remaining (in seconds) for the token's validity period.\nThe original behavior has been moved to a field named `creation_ttl`.\n\n## Grace periods removed\n\nVault no longer uses grace periods internally for leases or token TTLs.\nPreviously these were set by backends and could differ greatly from one backend\nto another, causing confusion. TTLs (the `lease_duration` field for a lease,\nor, for a token lookup, the `ttl`) are now exact.\n\n## `token-renew` CLI command\n\nIf the token given for renewal is the same as the token in use by the client,\nthe `renew-self` endpoint will be used in the API rather than the `renew`\nendpoint. Since the `default` policy contains `auth\/token\/renew-self` this\nmakes it much more likely that the request will succeed rather than somewhat\nconfusingly failing due to a lack of permissions on `auth\/token\/renew`.\n\n## `status` CLI command\n\nThe `status` CLI command now returns an exit code of `0` for an unsealed Vault\n(as before), `2` for a sealed Vault, and `1` for an error. This keeps error\nreturn codes consistent across commands.\n\n## Transit upsertion behavior uses capabilities\n\nPreviously, attempting to encrypt with a key that did not exist would create a\nkey with default values. This was convenient but ultimately allowed a client to\npotentially escape an ACL policy restriction, albeit without any dangerous\naccess. Now that Vault supports more granular capabilities in policies,\nupsertion behavior is controlled by whether the client has the `create`\ncapability for the request (upsertion is allowed) or only the `update`\ncapability (upsertion is denied).\n\n## etcd physical backend uses `sync`\n\nThe `etcd` physical backend now supports `sync` functionality and it is turned\non by default, which maps to the upstream library's default. 
It can be\ndisabled; see the configuration page for information.\n\n## S3 physical backend prefers environment variables\n\nThe `s3` physical backend now prefers environment variables over configuration\nfile variables. This matches the behavior of the rest of the backends and of\nVault generally.\n\n## Lease default and renewal handling\n\nAll backends now honor system and mount-specific default and maximum lease\ntimes, except when specifically overridden by backend configuration or role\nparameters, or when doing so would not make sense (e.g. AWS STS tokens cannot\nhave a lifetime of greater than 1 hour).\n\nThis allows for a _much_ more uniform approach to managing leases on both the\noperational side and the user side, and removes much ambiguity and uncertainty\nresulting from backend-hardcoded limits.\n\nHowever, this also means that the leases generated by the backends may return\nsignificantly different TTLs in 0.5 than in previous versions, unless they have\nbeen preconfigured. You can use the `mount-tune` CLI command or the\n`\/sys\/mounts\/<mount point>\/tune` endpoint to adjust default and max TTL\nbehavior for any mount. This is supported in 0.4, so you can perform this\ntuning before upgrading.\n\nThe following list details the ways in which lease handling has changed\nper-backend. In all cases the \"mount TTL\" means the mount-specific value for\ndefault or max TTL; however, if no value is set on a given mount, the system\ndefault\/max values are used. This lists only the changes; any lease-issuing\nor renew function not listed here behaves the same as in 0.4.\n\n(As a refresher: the default TTL is the amount of time that the initial\nlease\/token is valid for before it must be renewed; the maximum TTL is the\namount of time a lease or token is valid for before it can no longer be renewed\nand must be reissued. 
A mount can be more restrictive with its maximum TTL, but\ncannot be less restrictive than the system's maximum TTL.)\n\n#### Credential (Auth) backends\n\n- `github` \u2013 The renewal function now uses the backend's configured maximum\n  TTL, if set; otherwise, the mount maximum TTL is used.\n- `ldap` \u2013 The renewal function now uses the mount default TTL instead of always\n  using one hour.\n- `token` \u2013 Tokens can no longer be renewed forever; instead, they now honor the\n  mount default\/max TTL.\n- `userpass` \u2013 The renew function now uses the backend's configured maximum TTL,\n  if set; otherwise the mount maximum TTL is used.\n\n#### Secrets engines\n\n- `aws` \u2013 New IAM roles no longer always have a default TTL of one hour, instead\n  honoring the configured default if available and the mount default TTL if not\n  (renewal always used the configured values if available). STS tokens return a\n  TTL corresponding to the lifetime of the token in AWS and cannot be renewed.\n- `cassandra` \u2013 `lease_grace_period` has been removed since Vault no longer uses\n  grace periods.\n- `consul` \u2013 The mount default TTL is now used as the default TTL if there is no\n  backend configuration parameter. Renewal now uses the mount default and\n  maximum TTLs.\n- `mysql` \u2013 The mount default TTL is now used as the default TTL if there is no\n  backend configuration parameter.\n- `postgresql` \u2013 The mount default TTL is now used as the default TTL if there\n  is no backend configuration parameter. 
In addition, there is no longer any\n  grace period with the time configured for password expiration within Postgres\n  itself.","site":"vault","answers_cleaned":"    layout  docs page title  Upgrading to Vault 0 5 0   Guides description       This page contains the full list of breaking changes for Vault 0 5  including   actions you must take to facilitate a smooth upgrade path         Overview  This page contains the list of breaking changes for Vault 0 5  Please read it carefully   Please note that these are changes to Vault itself  Client libraries maintained by HashiCorp have been updated with support for these changes  but if you are using community supported libraries  you should ensure that they are ready for Vault 0 5 before upgrading      Rekey requires nonce  Vault now generates a nonce when a rekey operation is started in order to ensure that the operation cannot be hijacked  The nonce is output when the rekey operation is started and when rekey status is requested   The nonce must be provided as part of the request parameters when providing an unseal key  The nonce can be communicated from the request initiator to unseal key holders via side channels  the unseal key holders can then verify the nonce  by providing it  when they submit their unseal key   As a convenience  if using the CLI interactively to provide the unseal key  the nonce will be displayed for verification but the user will not be required to manually re type it       TTL  field in token lookup  Previously  the  ttl  field returned when calling  lookup  or  lookup self  on the token auth method displayed the TTL set at token creation  It now displays the time remaining  in seconds  for the token s validity period  The original behavior has been moved to a field named  creation ttl       Grace periods removed  Vault no longer uses grace periods internally for leases or token TTLs  Previously these were set by backends and could differ greatly from one backend to another  causing confusion 
 TTLs  the  lease duration  field for a lease  or  for a token lookup  the  ttl   are now exact       token renew  CLI command  If the token given for renewal is the same as the token in use by the client  the  renew self  endpoint will be used in the API rather than the  renew  endpoint  Since the  default  policy contains  auth token renew self  this makes it much more likely that the request will succeed rather than somewhat confusingly failing due to a lack of permissions on  auth token renew        status  CLI command  The  status  CLI command now returns an exit code of  0  for an unsealed Vault  as before    2  for a sealed Vault  and  1  for an error  This keeps error return codes consistent across commands      Transit upsertion behavior uses capabilities  Previously  attempting to encrypt with a key that did not exist would create a key with default values  This was convenient but ultimately allowed a client to potentially escape an ACL policy restriction  albeit without any dangerous access  Now that Vault supports more granular capabilities in policies  upsertion behavior is controlled by whether the client has the  create  capability for the request  upsertion is allowed  or only the  update  capability  upsertion is denied       etcd physical backend uses  sync   The  etcd  physical backend now supports  sync  functionality and it is turned on by default  which maps to the upstream library s default  It can be disabled  see the configuration page for information      S3 physical backend prefers environment variables  The  s3  physical backend now prefers environment variables over configuration file variables  This matches the behavior of the rest of the backends and of Vault generally      Lease default and renewal handling  All backends now honor system and mount specific default and maximum lease times  except when specifically overridden by backend configuration or role parameters  or when doing so would not make sense  e g  AWS STS tokens cannot 
have a lifetime of greater than 1 hour    This allows for a  much  more uniform approach to managing leases on both the operational side and the user side  and removes much ambiguity and uncertainty resulting from backend hardcoded limits   However  also this means that the leases generated by the backends may return significantly different TTLs in 0 5 than in previous versions  unless they have been preconfigured  You can use the  mount tune  CLI command or the   sys mounts  mount point  tune  endpoint to adjust default and max TTL behavior for any mount  This is supported in 0 4  so you can perform this tuning before upgrading   The following list details the ways in which lease handling has changed per backend  In all cases the  mount TTL  means the mount specific value for default or max TTL  however  if no value is set on a given mount  the system default max values are used  This lists only the changes  any lease issuing or renew function not listed here behaves the same as in 0 4    As a refresher  the default TTL is the amount of time that the initial lease token is valid for before it must be renewed  the maximum TTL is the amount of time a lease or token is valid for before it can no longer be renewed and must be reissued  A mount can be more restrictive with its maximum TTL  but cannot be less restrictive than the mount s maximum TTL         Credential  Auth  backends     github    The renewal function now uses the backend s configured maximum   TTL  if set  otherwise  the mount maximum TTL is used     ldap    The renewal function now uses the mount default TTL instead of always   using one hour     token    Tokens can no longer be renewed forever  instead  they now honor the   mount default max TTL     userpass    The renew function now uses the backend s configured maximum TTL    if set  otherwise the mount maximum TTL is used        Secrets engines     aws    New IAM roles no longer always have a default TTL of one hour  instead   honoring the 
configured default if available and the mount default TTL if not    renewal always used the configured values if available   STS tokens return a   TTL corresponding to the lifetime of the token in AWS and cannot be renewed     cassandra     lease grace period  has been removed since Vault no longer uses   grace periods     consul    The mount default TTL is now used as the default TTL if there is no   backend configuration parameter  Renewal now uses the mount default and   maximum TTLs     mysql    The mount default TTL is now used as the default TTL if there is no   backend configuration parameter     postgresql    The mount default TTL is now used as the default TTL if there   is no backend configuration parameter  In addition  there is no longer any   grace period with the time configured for password expiration within Postgres   itself "}
{"questions":"vault The Vercel Project sync destination allows Vault to safely synchronize secrets as Vercel environment variables Sync secrets from Vault to Vercel Project Automatically sync and unsync the secrets from Vault to a Vercel project to centralize visibility and control of secrets lifecycle management layout docs page title Sync secrets from Vault to Vercel Project","answers":"---\nlayout: docs\npage_title: Sync secrets from Vault to Vercel Project\ndescription: >-\n  Automatically sync and unsync the secrets from Vault to a Vercel project to centralize visibility and control of secrets lifecycle management.\n---\n\n# Sync secrets from Vault to Vercel Project\n\nThe Vercel Project sync destination allows Vault to safely synchronize secrets as Vercel environment variables.\nThis is a low footprint option that enables your applications to benefit from Vault-managed secrets without requiring them\nto connect directly with Vault. This guide walks you through the configuration process.\n\nPrerequisites:\n* Ability to read or create KVv2 secrets\n* Ability to create Vercel tokens with access to modify project environment variables\n* Ability to create sync destinations and associations on your Vault server\n\n## Setup\n\n1. If you do not already have a Vercel token, navigate to [your account settings](https:\/\/vercel.com\/account\/tokens) to\n   generate credentials with the necessary permissions to manage your project's environment variables.\n\n1. Next you need to locate your project ID. It can be found under the `Settings` tab in your project's overview page.\n\n1. 
Configure a sync destination with the access token and project ID obtained in the previous steps.\n\n  ```shell-session\n  $ vault write sys\/sync\/destinations\/vercel-project\/my-dest \\\n      access_token=\"$TOKEN\" \\\n      project_id=\"$PROJECT_ID\" \\\n      deployment_environments=development \\\n      deployment_environments=preview \\\n      deployment_environments=production\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  Key                   Value\n  ---                   -----\n  connection_details    map[access_token:***** deployment_environments:[development preview production] project_id:<project-id>]\n  name                  my-dest\n  type                  vercel-project\n  ```\n\n  <\/CodeBlockConfig>\n\n## Usage\n\n1. If you do not already have a KVv2 secret to sync, mount a new KVv2 secrets engine.\n\n  ```shell-session\n  $ vault secrets enable -path='my-kv' kv-v2\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  Success! Enabled the kv-v2 secrets engine at: my-kv\/\n  ```\n\n  <\/CodeBlockConfig>\n\n1. Create secrets you wish to sync with a target Vercel project.\n\n  ```shell-session\n  $ vault kv put -mount='my-kv' my-secret key1='val1'\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  ==== Secret Path ====\n  my-kv\/data\/my-secret\n\n  ======= Metadata =======\n  Key                Value\n  ---                -----\n  created_time       <timestamp>\n  custom_metadata    <nil>\n  deletion_time      n\/a\n  destroyed          false\n  version            1\n  ```\n\n  <\/CodeBlockConfig>\n\n1. 
Create an association between the destination and a secret to synchronize.\n\n  ```shell-session\n  $ vault write sys\/sync\/destinations\/vercel-project\/my-dest\/associations\/set \\\n    mount='my-kv' \\\n    secret_name='my-secret'\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  Key                   Value\n  ---                   -----\n  associated_secrets    map[kv_1234\/my-secret:map[accessor:kv_1234 secret_name:my-secret sync_status:SYNCED updated_at:<timestamp>]]\n  store_name            my-dest\n  store_type            vercel-project\n  ```\n\n  <\/CodeBlockConfig>\n\n1. Navigate to your project's settings under the `Environment Variables` section to confirm your secret was successfully\n  created in your Vercel project.\n\nMoving forward, any modification on the Vault secret will be propagated in near real time to its Vercel environment variable\ncounterpart. Creating a new secret version in Vault will overwrite the value in your Vercel Project. Deleting the secret\nor the association in Vault will delete the secret on Vercel as well.\n\n<Note>\n\nVault syncs secrets differently depending on whether you have configured \n`secret-key` or `secret-path` [granularity](\/vault\/docs\/sync#granularity):\n\n- `secret-key` granularity splits KVv2 secrets from Vault into key-value pairs\n  and stores the pairs as distinct entries in Vercel. For example,\n  `secrets.key1=\"val1\"` and `secrets.key2=\"val2\"`.\n\n- `secret-path` granularity stores secrets as a single JSON string that contains\n  all the associated key-value pairs. 
For example, `{\"key1\":\"val1\", \"key2\":\"val2\"}`.\n\nSince Vercel projects limit environment variables to single-value secrets, the\nsync granularity defaults to `secret-key`.\n\n<\/Note>\n\n## API\n\nPlease see the [secrets sync API](\/vault\/api-docs\/system\/secrets-sync) for more details.","site":"vault","answers_cleaned":"    layout  docs page title  Sync secrets from Vault to Vercel Project description       Automatically sync and unsync the secrets from Vault to a Vercel project to centralize visibility and control of secrets lifecycle management         Sync secrets from Vault to Vercel Project  The Vercel Project sync destination allows Vault to safely synchronize secrets as Vercel environment variables  This is a low footprint option that enables your applications to benefit from Vault managed secrets without requiring them to connect directly with Vault  This guide walks you through the configuration process   Prerequisites    Ability to read or create KVv2 secrets   Ability to create Vercel tokens with access to modify project environment variables   Ability to create sync destinations and associations on your Vault server     Setup  1  If you do not already have a Vercel token  navigate  your account settings  https   vercel com account tokens  to    generate credentials with the necessary permissions to manage your project s environment variables   1  Next you need to locate your project ID  It can be found under the  Settings  tab in your project s overview page   1  Configure a sync destination with the access token and project ID obtained in the previous steps        shell session     vault write sys sync destinations vercel project my dest         access token   TOKEN          project id   PROJECT ID          deployment environments development         deployment environments preview         deployment environments production            Output        CodeBlockConfig hideClipboard        plaintext   Key                   Value                  
               connection details    map access token       deployment environments  development preview production  project id  project id     name                  my dest   type                  vercel project            CodeBlockConfig      Usage  1  If you do not already have a KVv2 secret to sync  mount a new KVv2 secrets engine        shell session     vault secrets enable  path  my kv  kv v2            Output        CodeBlockConfig hideClipboard        plaintext   Success  Enabled the kv v2 secrets engine at  my kv             CodeBlockConfig   1  Create secrets you wish to sync with a target Vercel project        shell session     vault kv put  mount  my kv  my secret key1  val1             Output        CodeBlockConfig hideClipboard        plaintext        Secret Path        my kv data my secret            Metadata           Key                Value                              created time        timestamp    custom metadata     nil    deletion time      n a   destroyed          false   version            1            CodeBlockConfig   1  Create an association between the destination and a secret to synchronize        shell session     vault write sys sync destinations vercel project my dest associations set       mount  my kv        secret name  my secret             Output        CodeBlockConfig hideClipboard        plaintext   Key                   Value                                 associated secrets    map kv 1234 my secret map accessor kv 1234 secret name my secret sync status SYNCED updated at  timestamp      store name            my dest   store type            vercel project            CodeBlockConfig   1  Navigate to your project s settings under the  Environment Variables  section to confirm your secret was successfully   created in your Vercel project   Moving forward  any modification on the Vault secret will be propagated in near real time to its Vercel environment variable counterpart  Creating a new secret version in Vault will 
overwrite the value in your Vercel Project  Deleting the secret or the association in Vault will delete the secret on Vercel as well    Note   Vault syncs secrets differently depending on whether you have configured   secret key  or  secret path   granularity   vault docs sync granularity       secret key  granularity splits KVv2 secrets from Vault into key value pairs   and stores the pairs as distinct entries in Vercel  For example     secrets key1  val1   and  secrets key2  val2        secret path  granularity stores secrets as a single JSON string that contains   all the associated key value pairs  For example     key1   val1    key2   val2      Since Vercel projects limit environment variables to single value secrets  the sync granularity defaults to  secret key      Note      API  Please see the  secrets sync API   vault api docs system secrets sync  for more details "}
{"questions":"vault The Google Cloud Platform GCP Secret Manager sync destination allows Vault to safely synchronize secrets to your GCP projects page title Sync secrets from Vault to GCP Secret Manager Sync secrets from Vault to GCP Secret Manager layout docs Automatically sync and unsync the secrets from Vault to GCP Secret Manager to centralize visibility and control of secrets lifecycle management","answers":"---\nlayout: docs\npage_title: Sync secrets from Vault to GCP Secret Manager\ndescription: >-\n  Automatically sync and unsync the secrets from Vault to GCP Secret Manager to centralize visibility and control of secrets lifecycle management.\n---\n\n# Sync secrets from Vault to GCP Secret Manager\n\nThe Google Cloud Platform (GCP) Secret Manager sync destination allows Vault to safely synchronize secrets to your GCP projects.\nThis is a low footprint option that enables your applications to benefit from Vault-managed secrets without requiring them\nto connect directly with Vault. This guide walks you through the configuration process.\n\nPrerequisites:\n* Ability to read or create KVv2 secrets\n* Ability to create GCP Service Account credentials with access to the Secret Manager\n* Ability to create sync destinations and associations on your Vault server\n\n## Setup\n\n1. If you do not already have a Service Account, navigate to the IAM & Admin page in the Google Cloud console to\n  [create a new Service Account](https:\/\/cloud.google.com\/iam\/docs\/service-accounts-create) with the\n  [necessary permissions](\/vault\/docs\/sync\/gcpsm#permissions). [Instructions](\/vault\/docs\/sync\/gcpsm#provision-service-account)\n  to provision this Service Account via Terraform can be found below.\n\n1. Configure a sync destination with the Service Account JSON credentials created in the previous step. 
See docs for\n  [alternative ways](\/vault\/docs\/secrets\/gcp#authentication) to pass in the `credentials` parameter.\n\n  ```shell-session\n  $ vault write sys\/sync\/destinations\/gcp-sm\/my-dest \\\n      credentials='@path\/to\/credentials.json' \\\n      replication_locations='us-east1,us-west1'\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  Key                   Value\n  ---                   -----\n  connection_details    map[credentials:***** replication_locations:us-east1,us-west1]\n  name                  my-dest\n  type                  gcp-sm\n  ```\n\n  <\/CodeBlockConfig>\n\n## Usage\n\n1. If you do not already have a KVv2 secret to sync, mount a new KVv2 secrets engine.\n\n  ```shell-session\n  $ vault secrets enable -path=my-kv kv-v2\n  ```\n\n  **Output**:\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  Success! Enabled the kv-v2 secrets engine at: my-kv\/\n  ```\n\n  <\/CodeBlockConfig>\n\n1. Create secrets you wish to sync with a target GCP Secret Manager.\n\n  ```shell-session\n  $ vault kv put -mount=my-kv my-secret foo='bar'\n  ```\n\n  **Output**:\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  ==== Secret Path ====\n  my-kv\/data\/my-secret\n\n  ======= Metadata =======\n  Key                Value\n  ---                -----\n  created_time       <timestamp>\n  custom_metadata    <nil>\n  deletion_time      n\/a\n  destroyed          false\n  version            1\n  ```\n\n  <\/CodeBlockConfig>\n\n1. 
Create an association between the destination and a secret to synchronize.\n\n  ```shell-session\n  $ vault write sys\/sync\/destinations\/gcp-sm\/my-dest\/associations\/set \\\n      mount='my-kv' \\\n      secret_name='my-secret'\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  Key                   Value\n  ---                   -----\n  associated_secrets    map[kv_1234\/my-secret:map[accessor:kv_1234 secret_name:my-secret sync_status:SYNCED updated_at:<timestamp>]]\n  store_name            my-dest\n  store_type            gcp-sm\n  ```\n\n  <\/CodeBlockConfig>\n\n1. Navigate to the [Secret Manager](https:\/\/console.cloud.google.com\/security\/secret-manager) in the Google Cloud console\n  to confirm your secret was successfully created in your GCP project.\n\nMoving forward, any modification on the Vault secret will be propagated in near real time to its GCP Secret Manager\ncounterpart. Creating a new secret version in Vault will create a new version in GCP Secret Manager. Deleting the secret\nor the association in Vault will delete the secret in your GCP project as well.\n\n### Replication policy\n\nGCP can target specific geographic regions to provide strict control on where\nyour applications store data and sync secrets. You can target specific GCP\nregions for each sync destination during creation, which limits where Vault writes\nsecrets. 
\n\nRegardless of the region limits on writes, synced secrets are always readable\nglobally when the client has the required permissions.\n\n## Permissions\n\nThe credentials given to Vault must have the following permissions to synchronize secrets:\n\n```plaintext\nsecretmanager.secrets.create\nsecretmanager.secrets.delete\nsecretmanager.secrets.update\nsecretmanager.versions.add\nsecretmanager.versions.destroy\n```\n\n## Provision service account\n\nVault needs to be configured with credentials to establish a trust relationship with your GCP project so it can manage\nSecret Manager secrets on your behalf. The IAM & Admin page in the Google Cloud console can be used to\n[create a new Service Account](https:\/\/cloud.google.com\/iam\/docs\/service-accounts-create) with access to the Secret Manager.\n\nYou can equally use the [Terraform Google provider](https:\/\/registry.terraform.io\/providers\/hashicorp\/google\/latest\/docs#authentication-and-configuration)\nto provision a GCP Service Account with the appropriate policies.\n\n1. Copy-paste this HCL snippet into a `secrets-sync-setup.tf` file.\n\n  ```hcl\n  provider \"google\" {\n      \/\/ See https:\/\/registry.terraform.io\/providers\/hashicorp\/google\/latest\/docs#authentication-and-configuration\n      \/\/ for options on how to configure this provider. 
The following parameters or environment\n      \/\/ variables are typically used.\n\n      \/\/ Parameters\n      \/\/ region = \"\" (Optional)\n      \/\/ project = \"\"\n      \/\/ credentials = \"\"\n\n      \/\/ Environment Variables\n      \/\/ GOOGLE_REGION (optional)\n      \/\/ GOOGLE_PROJECT\n      \/\/ GOOGLE_CREDENTIALS (The path to a service account key file with the\n      \/\/                    \"Service Account Admin\", \"Service Account Key Admin\",\n      \/\/                    \"Secret Manager Admin\", and \"Project IAM Admin\" roles\n      \/\/                    attached)\n  }\n\n  data \"google_client_config\" \"config\" {}\n\n  resource \"google_service_account\" \"vault_secrets_sync_account\" {\n    account_id  = \"gcp-sm-vault-secrets-sync\"\n    description = \"service account for Vault Secrets Sync feature\"\n  }\n\n  \/\/ Production environments should use a more restricted role.\n  \/\/ The built-in Secret Manager admin role is used as an example for simplicity.\n  resource \"google_project_iam_member\" \"vault_secrets_sync_iam_member\" {\n    project = data.google_client_config.config.project\n    role    = \"roles\/secretmanager.admin\"\n    member  = google_service_account.vault_secrets_sync_account.member\n  }\n\n  resource \"google_service_account_key\" \"vault_secrets_sync_account_key\" {\n    service_account_id = google_service_account.vault_secrets_sync_account.name\n    public_key_type    = \"TYPE_X509_PEM_FILE\"\n  }\n\n  resource \"local_file\" \"vault_secrets_sync_credentials_file\" {\n    content  = base64decode(google_service_account_key.vault_secrets_sync_account_key.private_key)\n    filename = \"gcp-sm-sync-service-account-credentials.json\"\n  }\n\n  output 
\"vault_secrets_sync_credentials_file_path\" {\n    value = abspath(\"${path.module}\/${local_file.sync_service_account_credentials_file.filename}\")\n  }\n  ```\n\n1. Execute a plan to validate the Terraform Google provider is properly configured.\n\n  ```shell-session\n  $ terraform init && terraform plan\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  (...)\n  Plan: 4 to add, 0 to change, 0 to destroy.\n  ```\n\n  <\/CodeBlockConfig>\n\n1. Execute an apply to provision the Service Account.\n\n  ```shell-session\n  $ terraform apply\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  (...)\n  Apply complete! Resources: 4 added, 0 changed, 0 destroyed.\n\n  Outputs:\n\n  sync_service_account_credentials_file = \"\/path\/to\/credentials\/file\/gcp-sm-sync-service-account-credentials.json\"\n  ```\n\n  <\/CodeBlockConfig>\n\nThe generated Service Account credentials file can then be used to configure the Vault GCP Secret Manager destination\nfollowing the [setup](\/vault\/docs\/sync\/gcpsm#setup) steps.\n\n## Targeting specific GCP projects\n\nBy default, the target GCP project to sync secrets with is derived from the service \naccount JSON [credentials](\/vault\/api-docs\/system\/secrets-sync#credentials) or application \ndefault credentials for a particular GCP sync destination. This means secrets will be synced\nwithin the parent project of the configured service account.\n\nIn some cases, it's desirable to use a single service account or [workload identity](https:\/\/cloud.google.com\/compute\/docs\/access\/create-enable-service-accounts-for-instances)\nto sync secrets with any number of GCP projects within an organization. 
To achieve this, \nyou can set the `project_id` parameter to the target project to sync secrets with:\n\n```shell-session\n  $ vault write sys\/sync\/destinations\/gcp-sm\/my-dest \\\n      project_id='target-project-id'\n```\n\nThis overrides the project ID derived from the service account JSON credentials or application\ndefault credentials. The service account must be [authorized](https:\/\/cloud.google.com\/iam\/docs\/service-account-overview#locations)\nto perform Secret Manager actions in the target project.\n\n## Access management\n\nYou can allow or restrict access to secrets based on\n[IAM conditions](https:\/\/cloud.google.com\/iam\/docs\/conditions-resource-attributes#resource-name)\nagainst the fully-qualified resource name. For secrets in Secret Manager, a fully-qualified resource name must have the following\nformat:\n\n  `projects\/<project_number>\/secrets\/<secret_name>`\n\n<Tip title=\"Use the project number, not the project ID\"> \n\n  The project **number** is not the same as the project **ID**. Project numbers\n  are **numeric** while project IDs are **alphanumeric**. They can be found on\n  the Project info panel in the web dashboard or on the Welcome screen.\n\n<\/Tip>\n\nFor example, the default secret name template prepends the word `vault` to the\nbeginning of secret names. 
To prevent Vault from modifying secrets that were not\ncreated by a sync operation, you can use a role binding against the resource\nname with the `startsWith` condition:\n\n  <CodeBlockConfig hideClipboard>\n\n    resource.name.startsWith(\"projects\/<project_number>\/secrets\/vault\")\n\n  <\/CodeBlockConfig>\n\nTo prevent out-of-band overwrites, add a negative condition with `!` on any\nwrite-access role bindings not being used by Vault that contain Secret Manager permissions:\n\n  <CodeBlockConfig hideClipboard>\n\n    !(resource.name.startsWith(\"projects\/<project_number>\/secrets\/vault\"))\n\n  <\/CodeBlockConfig>\n\nTo add conditions to IAM principals in GCP, click \"+ADD IAM CONDITION\" on the **Assign Roles** screen.\n\n![Assign Roles screen in GCP with the \"+ADD IAM CONDITION\" link circled in red](\/img\/gcp-add-iam-conditions_light.png#light-theme-only)\n![Assign Roles screen in GCP with the \"+ADD IAM CONDITION\" link circled in red](\/img\/gcp-add-iam-conditions_dark.png#dark-theme-only)\n\n<Tip title=\"Refer to Google's Overview of IAM Conditions documentation\">\n\n  [Google's documentation](https:\/\/cloud.google.com\/iam\/docs\/conditions-overview) on IAM Conditions provides\n  further information on how they work and how they should be used, as well as their limits.\n\n<\/Tip>\n\n## API\n\nPlease see the [secrets sync API](\/vault\/api-docs\/system\/secrets-sync) for more details.","site":"vault"}
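The `startsWith` conditions in the access-management section can be sanity-checked offline. This sketch (the project number and secret names are placeholders; the prefix check simply mirrors the CEL expression) shows which Secret Manager resource names a Vault-scoped role binding would cover:

```python
# Sketch mirroring the CEL condition from the access-management section:
#   resource.name.startsWith("projects/<project_number>/secrets/vault")
# The project number and secret names below are illustrative placeholders.
PROJECT_NUMBER = "123456789012"
VAULT_PREFIX = f"projects/{PROJECT_NUMBER}/secrets/vault"

def matches_vault_condition(resource_name: str) -> bool:
    # Python's str.startswith behaves like CEL's startsWith for this check.
    return resource_name.startswith(VAULT_PREFIX)

# A secret whose name came from the default template (which prepends "vault"):
print(matches_vault_condition(f"projects/{PROJECT_NUMBER}/secrets/vault-my-secret"))   # True
# A secret created out of band, which the negated binding would protect:
print(matches_vault_condition(f"projects/{PROJECT_NUMBER}/secrets/app-db-password"))   # False
```

Note the condition operates on the fully-qualified resource name, so it must use the numeric project number, not the alphanumeric project ID.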
{"questions":"vault page title Sync secrets from Vault to GitHub Sync secrets from Vault to GitHub Automatically sync and unsync the secrets from Vault to GitHub to centralize visibility and control of secrets lifecycle management layout docs The GitHub actions sync destination allows Vault to safely synchronize secrets as GitHub organization repository or environment secrets","answers":"---\nlayout: docs\npage_title: Sync secrets from Vault to GitHub\ndescription: >-\n  Automatically sync and unsync the secrets from Vault to GitHub to centralize visibility and control of secrets lifecycle management.\n---\n\n# Sync secrets from Vault to GitHub\n\nThe GitHub Actions sync destination allows Vault to safely synchronize secrets as GitHub organization, repository, or environment secrets.\nThis is a low footprint option that enables your applications to benefit from Vault-managed secrets without requiring them\nto connect directly with Vault. This guide walks you through the configuration process.\n\nPrerequisites:\n* Ability to read or create KVv2 secrets\n* Ability to create GitHub fine-grained or personal tokens (or a GitHub application) with access to modify organization and\/or repository secrets\n* Ability to create sync destinations and associations on your Vault server\n\n## Setup\n\n1. To get started with syncing Vault secrets to GitHub, you will need a configured [GitHub application](#github-application) or an\n  [access token](https:\/\/docs.github.com\/en\/authentication\/keeping-your-account-and-data-secure\/managing-your-personal-access-tokens)\n  that has write permission on the target sync location in GitHub for \"Secrets\". The \"Secrets\" permission in GitHub automatically includes read-only \"Metadata\" access.\n\n<Warning title=\"Pitfalls of using an access token\">\n\n  Access tokens are tied to a user account and can be revoked at any time, causing disruptions to the sync process.\n  GitHub applications are long-lived and do not expire. 
Using a GitHub application for authentication is preferred over using a personal access token.\n  \n<\/Warning>\n\n### Repositories\n\nUse `vault write` to configure a repository sync destination with an access token:\n\n  ```shell-session\n  $ vault write sys\/sync\/destinations\/gh\/DESTINATION_NAME \\\n      access_token=\"GITHUB_ACCESS_TOKEN\"                  \\\n      secrets_location=\"GITHUB_SECRETS_LOCATION\"          \\\n      repository_owner=\"GITHUB_OWNER_NAME\"                \\\n      repository_name=\"GITHUB_REPO_NAME\"\n  ```\n\n  For example:\n\n  <CodeBlockConfig hideClipboard>\n\n  ```\n  $ vault write sys\/sync\/destinations\/gh\/hcrepo-sandbox       \\\n      access_token=\"github_pat_11ABC000000000000000000000DEF\" \\\n      secrets_location=\"repository\"                           \\\n      repository_owner=\"hashicorp\"                            \\\n      repository_name=\"hcrepo\"\n\n  Key                   Value\n  ---                   -----\n  connection_details    map[access_token:***** secrets_location:repository repository_owner:hashicorp repository_name:hcrepo]\n  name                  hcrepo-sandbox\n  type                  gh\n  ```\n\n  <\/CodeBlockConfig>\n\n### Environments\n\nUse `vault write` to configure an environment sync destination:\n\n  ```shell-session\n  $ vault write sys\/sync\/destinations\/gh\/DESTINATION_NAME \\\n      access_token=\"GITHUB_ACCESS_TOKEN\"                  \\\n      secrets_location=\"GITHUB_SECRETS_LOCATION\"          \\\n      repository_owner=\"GITHUB_OWNER_NAME\"                \\\n      repository_name=\"GITHUB_REPO_NAME\"                  \\\n      environment_name=\"GITHUB_ENVIRONMENT_NAME\"\n  ```\n\n  For example:\n\n  <CodeBlockConfig hideClipboard>\n\n  ```\n  $ vault write sys\/sync\/destinations\/gh\/hcrepo-sandbox       \\\n      access_token=\"github_pat_11ABC000000000000000000000DEF\" \\\n      secrets_location=\"repository\"                           \\\n      
repository_owner=\"hashicorp\"                            \\\n      repository_name=\"hcrepo\"                                \\\n      environment_name=\"sandbox\"\n\n  Key                   Value\n  ---                   -----\n  connection_details    map[access_token:***** secrets_location:repository environment_name:sandbox repository_owner:hashicorp repository_name:hcrepo]\n  name                  hcrepo-sandbox\n  type                  gh\n  ```\n\n  <\/CodeBlockConfig>\n\n### Organizations\n\n@include 'alerts\/beta.mdx'\n\nBeta limitations:\n\n- You cannot update visibility (`organization_visibility`) after creating a\n  secrets sync destination.\n- You cannot update the list of repositories with access to synced secrets\n  (`selected_repository_names`) after creating a secrets sync destination.\n\nSync secrets to a GitHub organization to share those secrets across repositories\nin the organization. You can make secrets global to the organization,\nlimited to private\/internal repos, or limited to specifically named repositories.\n\nRefer to the [Secrets sync API docs](\/vault\/docs\/sync\/github#api) for detailed\nconfiguration information.\n\n<Warning>\n\n  Organization secrets are\n  [not visible to private repositories for GitHub Free accounts](https:\/\/docs.github.com\/en\/actions\/security-for-github-actions\/security-guides\/using-secrets-in-github-actions#creating-secrets-for-an-organization).\n\n<\/Warning>\n\nUse `vault write` to configure an organization sync destination:\n\n  ```shell-session\n  $ vault write sys\/sync\/destinations\/gh\/DESTINATION_NAME \\\n      access_token=\"GITHUB_ACCESS_TOKEN\"                  \\\n      secrets_location=\"GITHUB_SECRETS_LOCATION\"          \\\n      organization_name=\"ORGANIZATION_NAME\"               \\\n      organization_visibility=\"ORGANIZATION_VISIBILITY\"\n  ```\n\n  For example:\n\n  <CodeBlockConfig hideClipboard>\n\n  ```\n  $ vault write sys\/sync\/destinations\/gh\/hcrepo-sandbox      
 \\\n      access_token=\"github_pat_11ABC000000000000000000000DEF\" \\\n      secrets_location=\"organization\"                         \\\n      organization_name=\"hashicorp\"                           \\\n      organization_visibility=\"selected\"                      \\\n      selected_repository_names=\"hcrepo-1,hcrepo-2\"\n\n  Key                   Value\n  ---                   -----\n  connection_details    map[access_token:***** secrets_location:organization organization_name:hashicorp organization_visibility:selected selected_repository_names:[hcrepo-1 hcrepo-2]]\n  name                  hcrepo-sandbox\n  type                  gh\n  ```\n\n  <\/CodeBlockConfig>\n\n## Usage\n\n1. If you do not already have a KVv2 secret to sync, mount a new KVv2 secrets engine.\n\n  ```shell-session\n  $ vault secrets enable -path=my-kv kv-v2\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  Success! Enabled the kv-v2 secrets engine at: my-kv\/\n  ```\n\n  <\/CodeBlockConfig>\n\n1. Create secrets you wish to sync with a target GitHub repository for Actions.\n\n  ```shell-session\n  $ vault kv put -mount='my-kv' my-secret key1='val1' key2='val2'\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  ==== Secret Path ====\n  my-kv\/data\/my-secret\n\n  ======= Metadata =======\n  Key                Value\n  ---                -----\n  created_time       <timestamp>\n  custom_metadata    <nil>\n  deletion_time      n\/a\n  destroyed          false\n  version            1\n  ```\n\n  <\/CodeBlockConfig>\n\n1. 
Create an association between the destination and a secret to synchronize.\n\n  ```shell-session\n  $ vault write sys\/sync\/destinations\/gh\/my-dest\/associations\/set \\\n      mount='my-kv' \\\n      secret_name='my-secret'\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  Key                   Value\n  ---                   -----\n  associated_secrets    map[kv_1234\/my-secret:map[accessor:kv_1234 secret_name:my-secret sync_status:SYNCED updated_at:<timestamp>]]\n  store_name            my-dest\n  store_type            gh\n  ```\n\n  <\/CodeBlockConfig>\n\n1. Navigate to your GitHub repository settings to confirm your secret was successfully created.\n\nMoving forward, any modification to the Vault secret will be propagated in near-real time to its GitHub secrets\ncounterpart. Creating a new secret version in Vault will create a new version in GitHub. Deleting the secret\nor the association in Vault will delete the secret in GitHub as well.\n\n## Security\n\n<Note>\n\nVault syncs secrets differently depending on whether you have configured\n`secret-key` or `secret-path` [granularity](\/vault\/docs\/sync#granularity):\n\n- `secret-key` granularity splits KVv2 secrets from Vault into key-value pairs\n  and stores the pairs as distinct entries in GitHub. For example,\n  `secrets.key1=\"val1\"` and `secrets.key2=\"val2\"`.\n\n- `secret-path` granularity stores secrets as a single JSON string that contains\n  all the associated key-value pairs. For example, `{\"key1\":\"val1\", \"key2\":\"val2\"}`.\n\nSince GitHub limits secrets to single-value secrets, the sync granularity defaults to `secret-key`.\n\n<\/Note>\n\nIf using the secret-path granularity, it is strongly advised to mask individual values for each sub-key to prevent the\nunintended disclosure of secrets in any GitHub Action outputs. 
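To make the granularity difference concrete, here is a small sketch (the helper names and the exact GitHub secret naming scheme are illustrative, not the sync engine's real code) of what each mode would store for the `my-secret` example above:

```python
import json

# Illustrative sketch of the two granularity modes described in the note
# above. The naming scheme is simplified; Vault's real secret_name_template
# differs.
secret = {"key1": "val1", "key2": "val2"}

def secret_key_granularity(name: str, data: dict) -> dict:
    # One GitHub Actions secret per key-value pair.
    base = name.upper().replace("-", "_")
    return {f"{base}_{k.upper()}": v for k, v in data.items()}

def secret_path_granularity(name: str, data: dict) -> dict:
    # A single GitHub Actions secret holding every pair as one JSON string.
    return {name.upper().replace("-", "_"): json.dumps(data, separators=(",", ":"))}

print(secret_key_granularity("my-secret", secret))
# {'MY_SECRET_KEY1': 'val1', 'MY_SECRET_KEY2': 'val2'}
print(secret_path_granularity("my-secret", secret))
# {'MY_SECRET': '{"key1":"val1","key2":"val2"}'}
```

The secret-path form is why the masking advice matters: GitHub sees the JSON string as one opaque value, so the individual sub-values inside it are not auto-masked.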
The following snippet illustrates how to mask each secret value:\n\n```yaml\n  name: Mask synced secret values\n\n  on:\n    workflow_dispatch\n\n  jobs:\n    synced-secret-examples:\n      runs-on: ubuntu-latest\n      steps:\n        - name: \u2713 Mask synced secret values\n          run: |\n            for v in $(echo '$' | jq -r '.[]'); do\n              echo \"::add-mask::$v\"\n            done\n```\n\nIf the GitHub destination uses the default `secret-key` granularity, the values are masked by GitHub automatically.\n\n## GitHub application\n\nInstead of authenticating with a personal access token, you can choose to\nauthenticate with a\n[custom GitHub application](https:\/\/docs.github.com\/en\/apps\/creating-github-apps\/registering-a-github-app\/registering-a-github-app).\n\nStart by following the GitHub instructions for\n[installing a GitHub app](https:\/\/docs.github.com\/en\/apps\/using-github-apps\/installing-your-own-github-app)\nto install your GitHub application on a specified repository and note the\nassigned installation ID.\n\n<Tip title=\"Your installation ID is in the app URL\">\n\n  You can find your assigned installation ID in the URL path parameter:\n  `https:\/\/github.com\/settings\/installations\/<INSTALLATION_ID>`\n\n<\/Tip>\n\nThen add your GitHub application to your Vault instance.\n\nTo use your GitHub application with Vault:\n\n- The application must have permission to read and write secrets.\n- You must generate a private key for the application on GitHub.\n- The application must be installed on the repository you want to sync secrets with.\n- You must know the application ID assigned by GitHub.\n- You must know the installation ID assigned by GitHub.\n\nCallback, redirect URLs, and webhooks are not required at this time.\n\nTo configure the application in Vault, use `vault write` with the\n`sys\/sync\/github-apps` endpoint to assign a unique name and set the relevant\ninformation:\n\n<CodeBlockConfig 
hideClipboard>

```shell-session
$ vault write sys/sync/github-apps/<APP_NAME> \
  app_id=<APP_ID> \
  private_key=@/path/to/private/key

Key            Value
---            -----
app_id         <app-id>
fingerprint    <fingerprint>
name           <app-name>
private_key    *****
```

</CodeBlockConfig>

<Tip title="Fingerprint verification">

Vault returns the fingerprint of the `private_key` provided to ensure that the
correct private key was configured and that it was not tampered with along the way.
You can compare the fingerprint to the one provided by GitHub.
For more information, see [Verifying private keys](https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/managing-private-keys-for-github-apps#verifying-private-keys).

</Tip>

Next, use `vault write` with the `sys/sync/destinations/gh` endpoint to
configure a GitHub destination that references your new GitHub application:

<CodeBlockConfig hideClipboard>

```shell-session
$ vault write sys/sync/destinations/gh/<DESTINATION_NAME> \
  installation_id=<INSTALLATION_ID> \
  repository_owner=<GITHUB_USER> \
  repository_name=<MY_REPO_NAME> \
  app_name=<APP_NAME>

Key                   Value
---                   -----
connection_details    map[app_config:map[app_name:<app-name>] installation_id:<installation-id> repository_name:<repo-name> repository_owner:<repo-owner>]
name                  my-dest
options               map[custom_tags:map[] granularity_level:secret-key secret_name_template:VAULT___]
type                  gh
```

</CodeBlockConfig>

You can now [use your GitHub application to sync secrets with your GitHub repository](#usage).

## API

Please see the [secrets sync API](/vault/api-docs/system/secrets-sync) for more details.
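The CLI call above maps directly onto the sync HTTP API. As a hedged illustration only, the following Python sketch builds the equivalent request body; the field names mirror the CLI parameters shown on this page, and the example values are placeholders, not real identifiers:

```python
def github_app_destination_payload(installation_id: str, repository_owner: str,
                                   repository_name: str, app_name: str) -> dict:
    # Mirrors the parameters passed to:
    #   vault write sys/sync/destinations/gh/<DESTINATION_NAME> ...
    # The resulting dict is the JSON body for the corresponding API call.
    return {
        "installation_id": installation_id,
        "repository_owner": repository_owner,
        "repository_name": repository_name,
        "app_name": app_name,
    }

# Placeholder values for illustration only.
payload = github_app_destination_payload("123456", "my-org", "my-repo", "my-app")
```

The payload would be sent as the body of a write to `v1/sys/sync/destinations/gh/<DESTINATION_NAME>`; see the secrets sync API reference for the authoritative request shape.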
---
layout: docs
page_title: Secrets sync
description: >-
  Use the secrets sync feature to automatically sync Vault-managed secrets with external destinations to centralize secrets lifecycle management.
---

# Secrets sync

<EnterpriseAlert product="vault" />

In certain circumstances, fetching secrets directly from Vault is impossible or impractical. To help with this challenge,
Vault can maintain a one-way sync for KVv2 secrets into various destinations that are easier to access for some clients.
With this, Vault remains the system of record but can cache a subset of secrets on various external systems acting as
trusted last-mile delivery systems.

A secret that is associated from a Vault KVv2 secrets engine into an external destination is actively managed by a continuous
process. If the secret value is updated in Vault, the secret is updated in the destination as well. If the secret is deleted
from Vault, it is deleted on the external system as well. This process is asynchronous and event-based. Vault propagates
modifications into the proper destinations automatically in a handful of seconds.

<Note title="Not related to HCP Vault Secrets">

  Secrets sync is a Vault Enterprise feature. For information on secrets sync
  with [HCP Vault Secrets](/hcp/docs/vault-secrets), refer to the HashiCorp Cloud
  Platform documentation for
  [Vault Secrets integrations](/hcp/docs/vault-secrets/integrations).

</Note>

## Activating the feature

The secrets sync feature requires manual activation through a one-time trigger. If a sync-related endpoint is called prior to
activation, the error response indicates that the feature has not been activated yet.
Be sure to understand the
potential [client count impacts](#client-counts) of using secrets sync before proceeding.

Activating the feature can be done through one of several methods:

  1. Activation directly through the UI.

  1. Activation through the CLI:

    ```shell-session
    $ vault write -f sys/activation-flags/secrets-sync/activate
    ```

  1. Activation through a POST or PUT request:

    ```shell-session
    $ curl \
      --request PUT \
      --header "X-Vault-Token: ..." \
      http://127.0.0.1:8200/v1/sys/activation-flags/secrets-sync/activate
    ```

## Destinations

Secrets can be synced into various external systems, called destinations. The supported destinations are:

* [AWS Secrets Manager](/vault/docs/sync/awssm)
* [Azure Key Vault](/vault/docs/sync/azurekv)
* [GCP Secret Manager](/vault/docs/sync/gcpsm)
* [GitHub Repository Actions](/vault/docs/sync/github)
* [Vercel Projects](/vault/docs/sync/vercelproject)

## Associations

Syncing a secret into one of the external systems is done by creating a connection between it and a destination, which is
called an association. These associations are created via Vault's API by adding a KVv2 secret target to one of the configured
destinations. Each association keeps track of that secret's current sync status, the timestamp of its last status change, and
the error code of the last sync or unsync operation if it failed. Each destination can have any number of secret associations.

## Sync statuses

There are several sync statuses which relay information about the outcome of the latest sync
operation to have occurred on that secret.
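Client code that consumes these statuses usually needs to separate transitional states from terminal ones before acting. As a hedged illustration (the helper functions are not part of Vault's API, only the status names are from this page):

```python
# Illustrative grouping of the sync statuses documented in this section.
TRANSITIONAL = {"UNKNOWN", "PENDING"}
TERMINAL_OK = {"SYNCED", "UNSYNCED"}
TERMINAL_ERROR = {
    "INTERNAL_VAULT_ERROR",    # issue internal to Vault
    "CLIENT_SIDE_ERROR",       # configuration error, e.g. invalid privileges
    "EXTERNAL_SERVICE_ERROR",  # external service issue, e.g. temporary downtime
}

def should_keep_polling(status: str) -> bool:
    """True while the association has not reached a settled state."""
    return status in TRANSITIONAL

def is_failure(status: str) -> bool:
    """True when the last sync or unsync operation failed."""
    return status in TERMINAL_ERROR
```

A caller would read the status from the association object returned by the sync endpoints and keep polling while `should_keep_polling` returns true.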
The status information is stored inside each
association object returned by the endpoint and, upon failure, includes an error code describing the cause of the failure.

| Status                   | Description                                                                                     |
|:-------------------------|:------------------------------------------------------------------------------------------------|
| `UNKNOWN`                | Vault is unable to determine the current state of the secret in regard to the external service. |
| `PENDING`                | An operation is queued for that secret and has not been processed yet.                          |
| `SYNCED`                 | The sync operation was successful and sent the secret to the external destination.              |
| `UNSYNCED`               | The unsync operation was successful and removed the secret from the external destination.       |
| `INTERNAL_VAULT_ERROR`   | The operation failed due to an issue internal to Vault.                                         |
| `CLIENT_SIDE_ERROR`      | The operation failed due to a configuration error such as invalid privileges.                   |
| `EXTERNAL_SERVICE_ERROR` | The operation failed due to an issue with the external service such as a temporary downtime.    |

## Name template

By default, the name of synced secrets follows this format: `vault/<accessor>/<secret-path>`. The casing and delimiters
may change as they are normalized according to the valid character set of each destination type. This pattern was chosen to
prevent accidental name collisions and to clearly identify where the secret is coming from.

Every destination allows you to customize this name pattern by configuring a `secret_name_template` field to best suit
individual use cases.
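As a rough illustration of how a default name is assembled and then normalized for a destination's character set, consider the Python sketch below. The normalization rule shown is an assumption for illustration only; each destination type applies its own rules:

```python
import re

def default_sync_name(accessor: str, secret_path: str) -> str:
    # Default pattern described above: vault/<accessor>/<secret-path>.
    return f"vault/{accessor}/{secret_path}"

def normalize(name: str) -> str:
    # Hypothetical destination that only allows [A-Za-z0-9_]: every other
    # character collapses to an underscore. Real per-destination rules differ.
    return re.sub(r"[^A-Za-z0-9_]", "_", name)

name = default_sync_name("kv_1234", "path/to/secret1")
# name       == "vault/kv_1234/path/to/secret1"
# normalized == "vault_kv_1234_path_to_secret1"
normalized = normalize(name)
```

This is why the same Vault secret can surface under slightly different names (casing, delimiters) across destination types.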
The templates use a subset of the go-template syntax for extra flexibility.

The following placeholders are available:

| Placeholder         | Description                                                                                                 |
|:--------------------|:------------------------------------------------------------------------------------------------------------|
| `DestinationType`   | The type of the destination, e.g. "aws-sm"                                                                  |
| `DestinationName`   | The name of the destination                                                                                 |
| `NamespacePath`     | The full namespace path where the secret being synced is located                                            |
| `NamespaceBaseName` | The segment following the last `/` character from the full path                                             |
| `NamespaceID`       | The internal unique ID identifying the namespace, e.g. `RQegM`                                              |
| `MountPath`         | The full mount path where the secret being synced is located                                                |
| `MountBaseName`     | The segment following the last `/` character from the full path                                             |
| `MountAccessor`     | The internal unique ID identifying the mount, e.g. `kv_1234`                                                |
| `SecretPath`        | The full secret path                                                                                        |
| `SecretBaseName`    | The segment following the last `/` character from the full path                                             |
| `SecretKey`         | The individual secret key being synced, only available if the destination uses the `secret-key` granularity |

Let's assume we want to sync the following secret:

  <CodeBlockConfig hideClipboard>

    $ VAULT_NAMESPACE=ns1/ns2 vault kv get -mount=path/to/kv1 path/to/secret1

    ========== Secret Path ==========
    path/to/kv1/data/path/to/secret1

    ======= Metadata =======
    (...)

    === Data ===
    Key    Value
    ---    -----
    foo    bar

  </CodeBlockConfig>

Let's look at some name template examples and the resulting secret name at the sync destination.

| Name template                            | Result                 |
|:-----------------------------------------|:-----------------------|
| `prefix-{{ .SecretPath }}`               | prefix-path/to/secret1 |
| `{{ .SecretBaseName \| uppercase }}`     | SECRET1                |
| `{{ .MountAccessor }}_{{ .SecretKey }}`  | kv_1234_foo            |
| `{{ .SecretPath \| replace "/" "_" }}`   | path_to_secret1        |

Name templates can be updated. The new template is only effective for new secrets associated with the destination and does
not affect the secrets synced with the previous template. It is possible to update an association to force a recreate operation:
the secret synced with the old template is deleted and a new secret using the new template is synced.

## Custom tags

A destination can also have custom tags so that every secret associated with it that is synced will share that same set of tags.
Additionally, a default tag value of `hashicorp:vault` is used to denote any secret that is synced via Vault Enterprise.
Similar
to secret names, tag keys and values are normalized according to the valid character set of each destination type.

## Granularity

Vault KVv2 secrets are multi-value and their data is represented in JSON. Multi-value secrets are useful to bundle closely
related information together, like a username and password pair. However, most secret management systems only support single-value
entries. Secrets sync allows you to choose the granularity that best suits your use case for each destination by specifying a `granularity`
field.

The `secret-path` granularity syncs the entire JSON content of the Vault secret as a single entry at the destination. If
the destination does not support multi-value secrets, the JSON is encoded as a single-value JSON string.

The `secret-key` granularity syncs each Vault key-value pair as a distinct entry at the destination. If the value itself is a list or map,
it is encoded as a JSON blob.

Granularity can be updated. The new granularity only affects secrets newly associated with the destination and does
not modify previously synced secrets. It is possible to update an association to force a recreate operation:
the secret synced with the old granularity is deleted and new secrets are synced according to the new granularity.

## Security

~> Note: Vault does not control the permissions at the destination. It is the responsibility
of the operator to configure and maintain proper access controls on the external system so synced
secrets are not accessed unintentionally.

### Vault access requirements

Vault verifies the client has read access on the secret before syncing it with any destination.
This additional check is
there to prevent users from maliciously or unintentionally leveraging elevated permissions on an external system to access
secrets they normally wouldn't be able to read.

Let's assume we have a secret located at `path/to/data/my-secret-1` and a user with write access to the sync feature,
but no read access to that secret. This scenario is equivalent to this ACL policy:

  <CodeBlockConfig hideClipboard>

    # Allow full access to the sync feature
    path "sys/sync/*" {
      capabilities = ["read", "list", "create", "update", "delete"]
    }

    # Allow read access to the secret mount path/to
    path "path/to/*" {
      capabilities = ["read"]
    }

    # Deny access to a specific secret
    path "path/to/data/my-secret-1" {
      capabilities = ["deny"]
    }

  </CodeBlockConfig>

If a client with this policy tries to read this secret they will receive an unauthorized error:

  <CodeBlockConfig hideClipboard>

    $ vault kv get -mount=path/to my-secret-1

    Error reading path/to/data/my-secret-1: Error making API request.

    URL: GET http://127.0.0.1:8200/v1/path/to/data/my-secret-1
    Code: 403. Errors:

    * 1 error occurred:
      * permission denied

  </CodeBlockConfig>

Likewise, if the client tries to sync this secret to any destination they will receive a similar unauthorized error:

  <CodeBlockConfig hideClipboard>

    $ vault write sys/sync/destinations/$TYPE/$NAME/associations/set \
    mount="path/to" \
    secret_name="my-secret-1"

    Error writing data to sys/sync/destinations/$TYPE/$NAME/associations/set: Error making API request.

    URL: PUT http://127.0.0.1:8200/v1/sys/sync/destinations/$TYPE/$NAME/associations/set
    Code: 403. Errors:

    * permission denied to read the content of the secret my-secret-1 in mount path/to

  </CodeBlockConfig>

This read access verification is only done when creating or updating an association. Once the association is created, revoking
read access from the policy that was used to sync the secret has no effect.

### Collisions and overwrites

Secrets sync operates with a last-write-wins strategy. If a secret with the same name already exists at the destination,
Vault overwrites it when syncing a secret. There are also no automatic mechanisms to prevent a principal with sufficient
privileges at the destination from overwriting a secret synced by Vault.

To prevent Vault from accidentally overwriting existing secrets, it is recommended to use either a name pattern or
built-in tags as an extra policy condition on the role used to configure a Vault sync destination. A negative condition on other
policies may be used to prevent out-of-band overwrites to Vault secrets from non-Vault roles.

To see examples of policies that provide this type of restriction, refer to the access management section of the documentation
for each destination type below:

* [AWS Access Management](/vault/docs/sync/awssm#access-management)
* [GCP Access Management](/vault/docs/sync/gcpsm#access-management)

## Reconciliation

Vault secrets sync is designed to automatically recover from transient failures
in two ways: operation retries and reconciliation scans.

Operation retries happen when a sync operation fails. Vault automatically
retries the operation with exponential backoff. Operation retries help in
situations where your network becomes unreliable or overwhelmed.

Reconciliation scans happen periodically in a background thread.
Vault scans all
secrets currently managed by the sync system to identify and update out-of-date
secrets, and to ensure that any configured destinations are up to date.
Reconciliation scans help in situations where external service downtimes
outside of your control occur, and provide a way to automatically recover and self-heal.

Operation retries and reconciliation scans are both enabled by default.

Note that reconciliation scans do not protect from out-of-band updates
that occur directly in the external service. The secrets sync system is designed to be
one-way and does not support bidirectional sync at this time.

## Client counts

Each secret that is synced with one or more destinations is counted as a
distinct client in Vault's client counting. See [entity assignments with secret
sync](/vault/docs/concepts/client-count#secret-sync-clients)
for more information.

## API

Please see the [secrets sync API](/vault/api-docs/system/secrets-sync) for more details.
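As a hedged companion to the API reference, the sketch below builds the request for the `associations/set` endpoint used earlier in this page with Python's standard library. The address, token, and names are placeholders; consult the secrets sync API reference for the authoritative request shape:

```python
import json
import urllib.request

def associations_set_request(addr: str, token: str, dest_type: str,
                             dest_name: str, mount: str, secret_name: str):
    """Build a PUT request for sys/sync/destinations/<type>/<name>/associations/set."""
    url = f"{addr}/v1/sys/sync/destinations/{dest_type}/{dest_name}/associations/set"
    payload = {"mount": mount, "secret_name": secret_name}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        method="PUT",
        headers={"X-Vault-Token": token},
    )

# Placeholder values; submitting the request requires a running Vault server.
req = associations_set_request(
    "http://127.0.0.1:8200", "...", "gh", "my-dest", "my-kv", "my-secret")
# urllib.request.urlopen(req) would perform the call.
```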
if the destination uses the  secret key  granularity    Let s assume we want to sync the following secret      CodeBlockConfig hideClipboard         VAULT NAMESPACE ns1 ns2 vault kv get  mount path to kv1 path to secret1                 Secret Path                path to kv1 data path to secret1              Metadata                            Data         Key    Value                      foo    bar      CodeBlockConfig   Let s look at some name template examples and the resulting secret name at the sync destination     Name template                              Result                                                                                           prefix                    prefix path to secret1              SECRET1                           kv 1234 foo                   path to secret1           Name templates can be updated  The new template is only effective for new secrets associated with the destination and does not affect the secrets synced with the previous template  It is possible to update an association to force a recreate operation  The secret synced with the old template will be deleted and a new secret using the new template version will be synced      Custom tags  A destination can also have custom tags so that every secret associated to it that is synced will share that same set of tags  Additionally  a default tag value of  hashicorp vault  is used to denote any secret that is synced via Vault Enterprise  Similar to secret names  tag keys and values are normalized according to the valid character set of each destination type      Granularity  Vault KV v2 secrets are multi value and their data is represented in JSON  Multi value secrets are useful to bundle closely related information together like a username   password pair  However  most secret management systems only support single value entries  Secrets sync allows you to choose the granularity that best suits your use case for each destination by specifying a  granularity  field   The 
 secret path  granularity syncs the entire JSON content of the Vault secret as a single entry at the destination  If the destination does not support multi value secret the JSON is encoded as a single value JSON string   The  secret key  granularity syncs each Vault key value pair as a distinct entry at the destination  If the value itself is a list or map it is encoded as a JSON blob   Granularity can be updated  The new granularity only affects secrets newly associated with the destination and does not modify the previously synced secrets  It is possible to update an association to force a recreate operation  The secret synced with the old granularity will be deleted and new secrets will be synced according to the new granularity      Security     Note  Vault does not control the permissions at the destination  It is the responsibility of the operator to configure and maintain proper access controls on the external system so synced secrets are not accessed unintentionally       Vault access requirements  Vault verifies the client has read access on the secret before syncing it with any destination  This additional check is there to prevent users from maliciously or unintentionally leveraging elevated permissions on an external system to access secrets they normally wouldn t be able to   Let s assume we have a secret located at  path to data secret1  and a user with write access to the sync feature  but no read access to that secret  This scenario is equivalent to this ACL policy      CodeBlockConfig hideClipboard         Allow full access to the sync feature     path  sys sync            capabilities     read    list    create    update    delete                Allow read access to the secret mount path to     path  path to            capabilities     read                Deny access to a specific secret     path  path to data my secret 1          capabilities     deny              CodeBlockConfig   If a client with this policy tries to read this secret they will 
receive an unauthorized error      CodeBlockConfig hideClipboard         vault kv get  mount path to my secret 1      Error reading path to data my secret 1  Error making API request       URL  GET http   127 0 0 1 8200 v1 path to data my secret 1     Code  403  Errors         1 error occurred          permission denied      CodeBlockConfig   Likewise  if the client tries to sync this secret to any destination they will receive a similar unauthorized error      CodeBlockConfig hideClipboard         vault write sys sync destinations  TYPE  NAME associations set       mount  path to        secret name  my secret 1       Error writing data to sys sync destinations  TYPE  NAME associations set  Error making API request       URL  PUT http   127 0 0 1 8200 v1 sys sync destinations  TYPE  NAME associations set     Code  403  Errors         permission denied to read the content of the secret my secret 1 in mount path to      CodeBlockConfig   This read access verification is only done when creating or updating an association  Once the association is created  revoking read access to the policy that was used to sync the secret has no effect       Collisions and overwrites  Secrets Sync operates with a last write wins strategy  If a secret with the same name already exists at the destination  Vault overwrites it when syncing a secret  There are also no automatic mechanisms to prevent a principal with sufficient privileges at the destination from overwriting a secret synced by Vault   To prevent Vault from accidentally overwriting existing secrets  it is recommended to use either a name pattern or built in tags as an extra policy condition on the role used to configure a Vault sync destination  A negative condition on other policies may be used to prevent out of band overwrites to Vault secrets from non Vault roles   To see examples of policies that provide this type of restriction  refer to the access management section of the documentation for each destination type below    
  AWS Access Management   vault docs sync awssm access management     GCP Access Management   vault docs sync gcpsm access management      Reconciliation  Vault Secrets Sync is designed to automatically recover from transient failures in two ways  operation retries and reconciliation scans   Operation retries happen when a sync operation fails  Vault automatically retries the operation with exponential backoff  Operation retries help in situations where your network becomes unreliable or overwhelmed   Reconciliation scans happen periodically in a background thread  Vault scans all secrets currently managed by the sync system to identify and update out of date secrets  and to ensure that any configured destinations are up to date  Reconciliation scans help in situations where there are external service downtimes that are outside of your control and provide a way to automatically recover and self heal   Operation retries and reconciliation scans are both enabled by default   Note that reconciliation process do not protect from out of band updates that occur directly in the external service  The secrets sync system is designed to be one way and does not support bidirectional sync at this time      Client counts  Each secret that is synced with one or more destinations is counted as a distinct client in Vault s client counting  See  entity assignments with secret sync   vault docs concepts client count secret sync clients  for more information      API  Please see the  secrets sync API   vault api docs system secrets sync  for more details "}
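The casing and delimiter normalization mentioned in the name template discussion above can be pictured with a small stand-alone sketch. This is a hypothetical illustration: the character set here is loosely modeled on destinations that only accept uppercase alphanumerics and underscores (GitHub Actions secrets behave roughly this way); Vault's real normalization rules vary per destination type.

```shell
# Hypothetical normalization sketch: uppercase everything, then replace any
# character outside A-Z, 0-9, and '_' with an underscore. Not Vault's actual
# algorithm -- only an illustration of why "vault/kv_1234/path/to/secret1"
# can end up looking quite different at the destination.
name="vault/kv_1234/path/to/secret1"
normalized=$(printf '%s' "$name" | tr 'a-z' 'A-Z' | tr -c 'A-Z0-9_' '_')
echo "$normalized"   # VAULT_KV_1234_PATH_TO_SECRET1
```

Because collisions become more likely after normalization (distinct delimiters collapse to the same character), the default `vault/<accessor>/<secret-path>` prefix is what keeps names unique across mounts.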
{"questions":"vault page title Sync secrets from Vault to Azure Key Vault The Azure Key Vault destination enables Vault to sync and unsync secrets of your choosing into Sync secrets from Vault to Azure Key Vault layout docs Automatically sync and unsync the secrets from Vault to Azure Key Vault to centralize visibility and control of secrets lifecycle management","answers":"---\nlayout: docs\npage_title: Sync secrets from Vault to Azure Key Vault\ndescription: >-\n  Automatically sync and unsync the secrets from Vault to Azure Key Vault to centralize visibility and control of secrets lifecycle management.\n---\n\n# Sync secrets from Vault to Azure Key Vault\n\nThe Azure Key Vault destination enables Vault to sync and unsync secrets of your choosing into\nan external Azure account. When configured, Vault will actively maintain the state of each externally-synced\nsecret in realtime. This includes sending new secrets, updating existing secret values, and removing\nsecrets when they either get dissociated from the destination or deleted from Vault.\n\nPrerequisites:\n* Ability to read or create KVv2 secrets\n* Ability to create Azure AD user credentials with access to an Azure Key Vault\n* Ability to create sync destinations and associations on your Vault server\n\n## Setup\n\n1. If you do not already have an Azure Key Vault instance, navigate to the Azure Portal to create a new\n  [Key Vault](https:\/\/learn.microsoft.com\/en-us\/azure\/key-vault\/general\/quick-create-portal).\n\n1. A service principal with a client id and client secret will be needed to configure Azure Key Vault as a\n  sync destination. This [guide](https:\/\/learn.microsoft.com\/en-us\/azure\/active-directory\/develop\/howto-create-service-principal-portal)\n  will walk you through creating the service principal.\n\n1. 
Once the service principal is created, the next step is to\n  [grant the service principal](https:\/\/learn.microsoft.com\/en-us\/azure\/key-vault\/general\/rbac-guide?tabs=azure-cli)\n  access to Azure Key Vault. To quickly get started, we recommend using the \"Key Vault Secrets Officer\" built-in role,\n  which gives sufficient access to manage secrets. For more information, see the [Permissions](#permissions) section.\n\n\n1. Configure a sync destination with the service principal credentials and Key Vault URI created in the previous steps.\n\n  ```shell-session\n  $ vault write sys\/sync\/destinations\/azure-kv\/my-azure-1 \\\n      key_vault_uri=\"$KEY_VAULT_URI\" \\\n      client_id=\"$CLIENT_ID\" \\\n      client_secret=\"$CLIENT_SECRET\" \\\n      tenant_id=\"$TENANT_ID\"\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  Key                   Value\n  ---                   -----\n  connection_details    map[client_id:123 client_secret:***** key_vault_uri:***** tenant_id:123]\n  name                  my-azure-1\n  type                  azure-kv\n  ```\n\n  <\/CodeBlockConfig>\n\n## Usage\n\n1. If you do not already have a KVv2 secret to sync, mount a new KVv2 secrets engine.\n\n  ```shell-session\n  $ vault secrets enable -path='my-kv' kv-v2\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  Success! Enabled the kv-v2 secrets engine at: my-kv\/\n  ```\n\n  <\/CodeBlockConfig>\n\n1. 
Create secrets you wish to sync with a target Azure Key Vault.\n\n  ```shell-session\n  $ vault kv put -mount='my-kv' my-secret foo='bar'\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  ==== Secret Path ====\n  my-kv\/data\/my-secret\n\n  ======= Metadata =======\n  Key                Value\n  ---                -----\n  created_time       2023-09-19T13:17:23.395109Z\n  custom_metadata    <nil>\n  deletion_time      n\/a\n  destroyed          false\n  version            1\n  ```\n\n  <\/CodeBlockConfig>\n\n1. Create an association between the destination and a secret to synchronize.\n\n  ```shell-session\n  $ vault write sys\/sync\/destinations\/azure-kv\/my-azure-1\/associations\/set \\\n      mount='my-kv' \\\n      secret_name='my-secret'\n  ```\n\n  **Output:**\n\n  <CodeBlockConfig hideClipboard>\n\n  ```plaintext\n  Key                   Value\n  ---                   -----\n  associated_secrets    map[kv_7532a8b4\/my-secret:map[accessor:kv_7532a8b4 secret_name:my-secret sync_status:SYNCED updated_at:2023-09-21T13:53:24.839885-07:00]]\n  store_name            my-azure-1\n  store_type            azure-kv\n  ```\n\n  <\/CodeBlockConfig>\n\n1. Navigate to [Azure Key Vault](https:\/\/portal.azure.com\/#view\/HubsExtension\/BrowseResource\/resourceType\/Microsoft.KeyVault%2Fvaults)\n  in the Azure portal to confirm your secret was successfully created.\n\nMoving forward, any modification on the Vault secret will be propagated in near real time to its Azure Key Vault\ncounterpart. Creating a new secret version in Vault will create a new version in Azure Key Vault. Deleting the secret\nor the association in Vault will delete the secret in your Azure Key Vault as well.\n\n\n## Permissions\n\nFor a more minimal set of permissions, you can create a\n[custom role](https:\/\/learn.microsoft.com\/en-us\/azure\/role-based-access-control\/custom-roles#steps-to-create-a-custom-role)\nusing the following JSON role definition. 
Be sure to replace the subscription id placeholder.\n\n```json\n{\n  \"properties\": {\n    \"roleName\": \"Key Vault Secrets Reader Writer\",\n    \"description\": \"Custom role for reading and updating Azure Key Vault secrets.\",\n    \"permissions\": [\n      {\n        \"actions\": [\n          \"Microsoft.KeyVault\/vaults\/secrets\/read\",\n          \"Microsoft.KeyVault\/vaults\/secrets\/write\"\n        ],\n        \"notActions\": [],\n        \"dataActions\": [\n          \"Microsoft.KeyVault\/vaults\/secrets\/delete\",\n          \"Microsoft.KeyVault\/vaults\/secrets\/backup\/action\",\n          \"Microsoft.KeyVault\/vaults\/secrets\/purge\/action\",\n          \"Microsoft.KeyVault\/vaults\/secrets\/recover\/action\",\n          \"Microsoft.KeyVault\/vaults\/secrets\/restore\/action\",\n          \"Microsoft.KeyVault\/vaults\/secrets\/readMetadata\/action\",\n          \"Microsoft.KeyVault\/vaults\/secrets\/getSecret\/action\",\n          \"Microsoft.KeyVault\/vaults\/secrets\/setSecret\/action\"\n        ],\n        \"notDataActions\": []\n      }\n    ],\n    \"assignableScopes\": [\n      \"\/subscriptions\/{subscriptionId}\/\"\n    ]\n  }\n}\n```\n\n## Access management\n\nYou can allow or restrict access to secrets by using a separate Azure Key Vault instance for Vault sync destinations.\nThis corresponds with Microsoft's currently-recommended \n[best practices](https:\/\/learn.microsoft.com\/en-us\/azure\/key-vault\/general\/best-practices)\nfor managing secrets in Key Vault. 
Maintaining a boundary between Vault-managed secrets and other secrets through\nseparate Key Vaults provides increased security and access control.\n\nAzure roles can be created to grant the necessary permissions for the service principal to access the Key Vault\nwith [role-based access control](https:\/\/learn.microsoft.com\/en-us\/azure\/role-based-access-control\/overview).\nA role assignment can be set for the Vault user principal to provide it the role's permissions within the Key Vault\ninstance, its resource group, or subscription. Additionally,\n[Azure policies](https:\/\/learn.microsoft.com\/en-us\/azure\/key-vault\/general\/azure-policy) may further refine access control\nlimitations, such as denying the Vault user principal access to non-Vault related Key Vaults. The inverse, denying other\nusers any write-access to the Vault-related Key Vault, may be another choice.\n\n## API\n\nPlease see the [secrets sync API](\/vault\/api-docs\/system\/secrets-sync) for more details.","site":"vault","answers_cleaned":"    layout  docs page title  Sync secrets from Vault to Azure Key Vault description       Automatically sync and unsync the secrets from Vault to Azure Key Vault to centralize visibility and control of secrets lifecycle management         Sync secrets from Vault to Azure Key Vault  The Azure Key Vault destination enables Vault to sync and unsync secrets of your choosing into an external Azure account  When configured  Vault will actively maintain the state of each externally synced secret in realtime  This includes sending new secrets  updating existing secret values  and removing secrets when they either get dissociated from the destination or deleted from Vault   Prerequisites    Ability to read or create KVv2 secrets   Ability to create Azure AD user credentials with access to an Azure Key Vault   Ability to create sync destinations and associations on your Vault server     Setup  1  If you do not already have an Azure Key Vault instance  navigate to 
the Azure Portal to create a new    Key Vault  https   learn microsoft com en us azure key vault general quick create portal    1  A service principal with a client id and client secret will be needed to configure Azure Key Vault as a   sync destination  This  guide  https   learn microsoft com en us azure active directory develop howto create service principal portal    will walk you through creating the service principal   1  Once the service principal is created  the next step is to    grant the service principal  https   learn microsoft com en us azure key vault general rbac guide tabs azure cli    access to Azure Key Vault  To quickly get started  we recommend using the  Key Vault Secrets Officer  built in role    which gives sufficient access to manage secrets  For more information  see the  Permissions   permissions  section    1  Configure a sync destination with the service principal credentials and Key Vault URI created in the previous steps        shell session     vault write sys sync destinations azure kv my azure 1         key vault uri   KEY VAULT URI          client id   CLIENT ID          client secret   CLIENT SECRET          tenant id   TENANT ID             Output        CodeBlockConfig hideClipboard        plaintext   Key                   Value                                 connection details    map client id 123 client secret       key vault uri       tenant id 123    name                  my azure 1   type                  azure kv            CodeBlockConfig      Usage  1  If you do not already have a KVv2 secret to sync  mount a new KVv2 secrets engine        shell session     vault secrets enable  path  my kv  kv v2            Output        CodeBlockConfig hideClipboard        plaintext   Success  Enabled the kv v2 secrets engine at  my kv             CodeBlockConfig   1  Create secrets you wish to sync with a target Azure Key Vault        shell session     vault kv put  mount  my kv  my secret foo  bar             Output        
CodeBlockConfig hideClipboard        plaintext        Secret Path        my kv data my secret            Metadata           Key                Value                              created time       2023 09 19T13 17 23 395109Z   custom metadata     nil    deletion time      n a   destroyed          false   version            1            CodeBlockConfig   1  Create an association between the destination and a secret to synchronize        shell session     vault write sys sync destinations azure kv my azure 1 associations set         mount  my kv          secret name  my secret             Output        CodeBlockConfig hideClipboard        plaintext   Key                   Value                                 associated secrets    map kv 7532a8b4 my secret map accessor kv 7532a8b4 secret name my secret sync status SYNCED updated at 2023 09 21T13 53 24 839885 07 00     store name            my azure 1   store type            azure kv            CodeBlockConfig   1  Navigate to  Azure Key Vault  https   portal azure com  view HubsExtension BrowseResource resourceType Microsoft KeyVault 2Fvaults    in the Azure portal to confirm your secret was successfully created   Moving forward  any modification on the Vault secret will be propagated in near real time to its Azure Key Vault counterpart  Creating a new secret version in Vault will create a new version in Azure Key Vault  Deleting the secret or the association in Vault will delete the secret in your Azure Key Vault as well       Permissions  For a more minimal set of permissions  you can create a  custom role  https   learn microsoft com en us azure role based access control custom roles steps to create a custom role  using the following JSON role definition  Be sure to replace the subscription id placeholder      json      properties          roleName    Key Vault Secrets Reader Writer        description    Custom role for reading and updating Azure Key Vault secrets         permissions                      actions   
             Microsoft KeyVault vaults secrets read              Microsoft KeyVault vaults secrets write                      notActions                dataActions                Microsoft KeyVault vaults secrets delete              Microsoft KeyVault vaults secrets backup action              Microsoft KeyVault vaults secrets purge action              Microsoft KeyVault vaults secrets recover action              Microsoft KeyVault vaults secrets restore action              Microsoft KeyVault vaults secrets readMetadata action              Microsoft KeyVault vaults secrets getSecret action              Microsoft KeyVault vaults secrets setSecret action                      notDataActions                          assignableScopes             subscriptions  subscriptionId                        Access management  You can allow or restrict access to secrets by using a separate Azure Key Vault instance for Vault sync destinations  This corresponds with Microsoft s currently recommended   best practices  https   learn microsoft com en us azure key vault general best practices  for managing secrets in Key Vault  Maintaining a boundary between Vault managed secrets and other secrets through separate Key Vaults provides increased security and access control   Azure roles can be created to grant the necessary permissions for the service principal to access the Key Vault with  role based access control  https   learn microsoft com en us azure role based access control overview   A role assignment can be set for the Vault user principal to provide it the role s permissions within the Key Vault instance  its resource group  or subscription  Additionally   Azure policies  https   learn microsoft com en us azure key vault general azure policy  may further refine access control limitations  such as denying the Vault user principal access to non Vault related Key Vaults  The inverse  denying other users any write access to the Vault related Key Vault  may be another choice      API 
 Please see the  secrets sync API   vault api docs system secrets sync  for more details "}
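Before passing `key_vault_uri` to the `vault write sys/sync/destinations/azure-kv/...` command shown above, a quick local sanity check can catch typos early. A minimal sketch, assuming the standard public-cloud Key Vault URI shape `https://<name>.vault.azure.net/` (sovereign clouds use different suffixes, so adjust the pattern if needed):

```shell
# Illustrative pre-flight check: confirm the Key Vault URI matches the usual
# https://<name>.vault.azure.net/ form before configuring the sync destination.
KEY_VAULT_URI="https://my-keyvault.vault.azure.net/"
case "$KEY_VAULT_URI" in
  https://*.vault.azure.net/) status="valid" ;;
  *) status="invalid" ;;
esac
echo "Key Vault URI looks $status"
```

A check like this is purely local; it does not verify that the vault exists or that the service principal can reach it.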
{"questions":"vault Sync secrets from Vault to AWS Secrets Manager Automatically sync and unsync the secrets from Vault to AWS Secrets Manager to centralize visibility and control of secrets lifecycle management page title Sync secrets from Vault to AWS Secrets Manager The AWS Secrets Manager destination enables Vault to sync and unsync secrets of your choosing into layout docs","answers":"---\nlayout: docs\npage_title: Sync secrets from Vault to AWS Secrets Manager\ndescription: >-\n  Automatically sync and unsync the secrets from Vault to AWS Secrets Manager to centralize visibility and control of secrets lifecycle management.\n---\n\n# Sync secrets from Vault to AWS Secrets Manager\n\nThe AWS Secrets Manager destination enables Vault to sync and unsync secrets of your choosing into\nan external AWS account. When configured, Vault will actively maintain the state of each externally-synced\nsecret in near-realtime. This includes sending new secrets, updating existing secret values, and removing\nsecrets when they either get dissociated from the destination or deleted from Vault. This enables the\nability to keep control of all your secrets localized while leveraging the benefits of the AWS Secrets Manager.\n\nPrerequisites:\n* Ability to read or create KVv2 secrets\n* Ability to create AWS IAM user and access keys with access to the Secrets Manager\n* Ability to create sync destinations and associations on your Vault server\n\n## Setup\n\n1. Navigate to the [AWS Identity and Access Management (IAM) console](https:\/\/us-east-1.console.aws.amazon.com\/iamv2\/home#\/home)\n  to configure an IAM user with access to the Secrets Manager. 
The following is an example policy outlining the required\n  permissions to use secrets syncing.\n\n  ```json\n  {\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n          \"Effect\": \"Allow\",\n          \"Action\": [\n            \"secretsmanager:Create*\",\n            \"secretsmanager:Update*\",\n            \"secretsmanager:Delete*\",\n            \"secretsmanager:TagResource\"\n          ],\n          \"Resource\": \"arn:aws:secretsmanager:*:*:secret:vault*\"\n        }\n    ]\n  }\n  ```\n\n1.\tConfigure a sync destination with the IAM user credentials created in the previous step.\n\n\t```shell-session\n\t$ vault write sys\/sync\/destinations\/aws-sm\/my-awssm-1 \\\n\t\t  access_key_id=\"$ACCESS_KEY_ID\" \\\n\t\t  secret_access_key=\"$SECRET_ACCESS_KEY\" \\\n\t\t  region='us-east-1'\n\t```\n\n\t**Output:**\n\n\t<CodeBlockConfig hideClipboard>\n\n\t```plaintext\n\tKey                   Value\n\t---                   -----\n\tconnection_details    map[access_key_id:***** region:us-east-1 secret_access_key:*****]\n\tname                  my-awssm-1\n\ttype                  aws-sm\n\t```\n\n\t<\/CodeBlockConfig>\n\n## Usage\n\n1.\tIf you do not already have a KVv2 secret to sync, mount a new KVv2 secrets engine.\n\n\t```shell-session\n\t$ vault secrets enable -path=my-kv kv-v2\n\t```\n\n\t**Output:**\n\n\t<CodeBlockConfig hideClipboard>\n\n\t```plaintext\n\tSuccess! Enabled the kv-v2 secrets engine at: my-kv\/\n\t```\n\n\t<\/CodeBlockConfig>\n\n1. 
Create secrets you wish to sync with a target AWS Secrets Manager.\n\n\t```shell-session\n\t$ vault kv put -mount=my-kv my-secret foo='bar'\n\t```\n\n\t**Output:**\n\n\t<CodeBlockConfig hideClipboard>\n\n\t```plaintext\n\t==== Secret Path ====\n\tmy-kv\/data\/my-secret\n\n\t======= Metadata =======\n\tKey                Value\n\t---                -----\n\tcreated_time       2023-09-19T13:17:23.395109Z\n\tcustom_metadata    <nil>\n\tdeletion_time      n\/a\n\tdestroyed          false\n\tversion            1\n\t```\n\n\t<\/CodeBlockConfig>\n\n1.\tCreate an association between the destination and a secret to synchronize.\n\n\t```shell-session\n\t$ vault write sys\/sync\/destinations\/aws-sm\/my-awssm-1\/associations\/set \\\n\t\tmount='my-kv' \\\n\t\tsecret_name='my-secret'\n\t```\n\n\t**Output:**\n\n\t<CodeBlockConfig hideClipboard>\n\n\t```plaintext\n\tKey                   Value\n\t---                   -----\n\tassociated_secrets    map[kv_37993f8a\/my-secret:map[accessor:kv_37993f8a secret_name:my-secret sync_status:SYNCED updated_at:2023-09-19T13:17:35.085581-05:00]]\n\tstore_name            aws1\n\tstore_type            aws-sm\n\t```\n\n\t<\/CodeBlockConfig>\n\n1. Navigate to the [Secrets Manager](https:\/\/console.aws.amazon.com\/secretsmanager\/) in the AWS console\n\tto confirm your secret was successfully synced.\n\nMoving forward, any modification on the Vault secret will be propagated to its AWS Secrets Manager\ncounterpart. Creating a new secret version in Vault will update the one in AWS to the new version. Deleting either\nthe secret or the association in Vault will delete the secret in your AWS account as well.\n\n## Access management\n\nYou can allow or restrict access to secrets by attaching AWS Resource Tags\nto secrets. 
For example, the following AWS IAM policy prevents Vault from\nmodifying secrets that were not created by a sync operation:\n\n  <CodeBlockConfig hideClipboard>\n\n    {\n      \"Version\": \"2012-10-17\",\n      \"Statement\": [\n        {\n          \"Effect\": \"Allow\",\n          \"Action\": [\n            \"secretsmanager:*\"\n          ],\n          \"Resource\": \"*\",\n          \"Condition\": {\n            \"StringEquals\": {\n              \"secretsmanager:ResourceTag\/hashicorp:vault\": \"\" # This tag is automatically added by Vault on every synced secret\n            }\n          }\n        }\n      ]\n    }\n\n  <\/CodeBlockConfig>\n\nTo prevent out-of-band overwrites, we recommend adding a negative condition on\nall write-access policies not used by Vault:\n\n  <CodeBlockConfig hideClipboard>\n\n    {\n      \"Version\": \"2012-10-17\",\n      \"Statement\": [\n        {\n          \"Effect\": \"Deny\",\n          \"Action\": [\n            \"secretsmanager:*\"\n          ],\n          \"Resource\": \"*\",\n          \"Condition\": {\n            \"StringNotEquals\": {\n              \"secretsmanager:ResourceTag\/hashicorp:vault\": \"\" # This tag is automatically added by Vault on every synced secret\n            }\n          }\n        }\n      ]\n    }\n\n  <\/CodeBlockConfig>\n\n<Warning title=\"Use wildcards with extreme caution\">\n\n  The previous examples use wildcards for the sake of brevity. 
We strongly\n  recommend you use the principle of least privilege to restrict actions and\n  resources for each use case to the minimum necessary requirements.\n\n<\/Warning>\n\n## Tutorial\n\nRefer to the [Vault Enterprise Secrets Sync tutorial](\/vault\/tutorials\/enterprise\/secrets-sync)\nto learn how to configure secrets sync between Vault and AWS Secrets Manager.\n\n## API\n\nPlease see the [secrets sync API](\/vault\/api-docs\/system\/secrets-sync) for more details.","site":"vault"}
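The Allow/Deny pair above hinges on a single resource tag, `hashicorp:vault`, that Vault adds to every synced secret. As a rough illustration of that condition logic (a simplified sketch, not the real AWS IAM policy evaluator; the function names are invented for this example):

```python
# Illustrative sketch only -- NOT the real AWS IAM evaluator.
# Models the two example policies: Allow secretsmanager:* when the
# resource carries the hashicorp:vault tag (with an empty value), and
# Deny out-of-band writes when that tag is missing or different.

VAULT_TAG = "hashicorp:vault"

def vault_policy_allows(resource_tags: dict) -> bool:
    """Allow statement: StringEquals on the vault tag (value is "")."""
    return resource_tags.get(VAULT_TAG) == ""

def out_of_band_policy_denies(resource_tags: dict) -> bool:
    """Deny statement: StringNotEquals -- fires when the tag is absent
    or has any other value, mirroring IAM's missing-key behavior."""
    return resource_tags.get(VAULT_TAG) != ""

# A secret created by a Vault sync carries the tag automatically.
synced = {VAULT_TAG: ""}
manual = {"team": "payments"}

assert vault_policy_allows(synced) and not vault_policy_allows(manual)
assert out_of_band_policy_denies(manual) and not out_of_band_policy_denies(synced)
```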
{"questions":"vault Secrets import allows you to safely onboard secrets from external sources into Vault KV for management include alerts enterprise only mdx page title Secrets import layout docs Secrets import","answers":"---\nlayout: docs\npage_title: Secrets import\ndescription: Secrets import allows you to safely onboard secrets from external sources into Vault KV for management.\n---\n\n\n# Secrets import\n\n@include 'alerts\/enterprise-only.mdx'\n\n@include 'alerts\/alpha.mdx'\n\nDistributing sensitive information across multiple external systems creates\nseveral challenges, including:\n\n- Increased operational overhead.\n- Increased exposure risk from data sprawl.\n- Increased risk of outdated and out-of-sync information.\n\nUsing Vault as a single source of truth (SSOT) for sensitive data increases\nsecurity and reduces management overhead, but migrating preexisting data from multiple\nand\/or varied sources can be complex and costly.\n\nThe secrets import process helps you automate and streamline your sensitive data\nmigration with codified import plans as HCL files. Import plans tell Vault which KVv2 secrets\nengine instance to store the expected secret data in, the source system from which the data will be\nread, and how to filter this data. Three HCL blocks make this possible:\n\n- The `destination` block defines target KVv2 mounts.\n- The `source` block provides credentials for connecting to the external system.\n- The `mapping` block defines how Vault should decide which data gets imported before\n  writing the information to KVv2.\n\n## Destinations\n\nVault stores imported secrets in a Vault KVv2 secrets engine mount. Destination\nblocks start with `destination_vault` and define the desired KVv2 mount path and\nan optional namespace. 
The combination of these represents the exact location in your\nVault instance where you want the information stored.\n\n### HCL syntax\n\n\n```hcl\ndestination_vault {\n  name      = \"my-dest-1\"\n  namespace = \"ns-1\"\n  mount     = \"mount-1\"\n}\n```\n\n- `name` `(string: <required>)` - A unique name for the destination block that can\n  be referenced in subsequent mapping blocks.\n\n- `mount` `(string: <required>)` - The mount path for the target KVv2 instance.\n\n- `address` `(string)` - Optional network address of the Vault server with the\n  KVv2 secrets engine enabled. By default, the Vault client's address will be used.\n\n- `token` `(string)` - Optional authentication token for the Vault server at the\n  specified address. By default, the Vault client's token will be used.\n\n- `namespace` `(string)` - Optional namespace path containing the specified KVv2\n  mount. By default, Vault looks for the KVv2 mount under the root namespace.\n\n\n\n## Sources\n\nVault can import secrets from the following sources:\n\n- [GCP Secret Manager](\/vault\/docs\/import\/gcpsm)\n\nTo pull data from a source during import, Vault needs read credentials for the\nexternal system. You can provide credentials directly as part of the import\nplan, or use Vault to automatically generate dynamic credentials if you already\nhave the corresponding secrets engine configured.\n\n### HCL syntax\n\nSource blocks start with `source_<external_system>` and include any connection\ninformation required by the target system, or by the secrets engine to leverage for\ndynamic credentials. 
For example:\n\n```hcl\nsource_gcp {\n  name        = \"my-gcp-source-1\"\n  credentials = \"@\/path\/to\/service-account-key.json\"\n}\n```\n\n- `name` `(string: <required>)` - A unique name for the source block that can be\n  referenced in subsequent mapping blocks.\n\n- `credentials` `(string: <required>)` - Path to a credential file or token with\n  read permissions for the target system.\n\nDepending on the source system, additional information may be required. Refer to\nthe connection documentation for your source system to determine the full set of\nrequired fields for that system type.\n\n\n\n## Mappings\n\nMappings glue the source and destination together and filter the migrated data,\nto determine what is imported and what is ignored. Vault currently supports the\nfollowing mapping methods:\n\n- [mapping_passthrough](\/vault\/docs\/import\/mappings#passthrough)\n- [mapping_metadata](\/vault\/docs\/import\/mappings#metadata)\n- [mapping_regex](\/vault\/docs\/import\/mappings#regex)\n\n### HCL syntax\n\nMapping blocks start with `mapping_<filter_type>` and require a source name,\ndestination name, an execution priority, and any corresponding transformations\nor filters that apply for each mapping type. For example:\n\n```hcl\nmapping_regex {\n  name        = \"my-map-1\"\n  source      = \"my-gcp-source-1\"\n  destination = \"my-dest-1\"\n  priority    = 1\n  expression  = \"^database\/.*$\"\n}\n```\n\n- `name` `(string: <required>)` - A unique name for the mapping block.\n\n- `source` `(string: <required>)` - The name of a previously-defined source block\n  **from** which the data should be read.\n\n- `destination` `(string: <required>)` - The name of a previously defined\n  destination block **to** which the data should be written.\n\n- `priority` `(integer: <required>)` - The order in which Vault should apply the\n  mapping block during the import process. The lower the number, the higher the\n  priority. 
For example, a mapping with priority 1 executes before a mapping\n  with priority 2.\n\nDepending on the filter type, additional fields may be required or possible. Refer\nto the [import mappings documentation](\/vault\/docs\/import\/mappings) for the available\nsupported options and for a list of each mapping's specific fields.\n\n<Tip title=\"Priority matters\">\n\n  Vault applies mapping definitions in priority order and a given secret only\n  matches to the first mapping that applies. Once Vault imports a secret with a\n  particular mapping, subsequent reads from the same source will ignore that\n  secret. See the [priority section](\/vault\/docs\/import\/mappings#priority) for an example.\n\n<\/Tip>","site":"vault"}
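To make the relationship between the three block types concrete, here is a small sketch that checks a plan shaped like the examples above. The plan is modeled as plain Python dicts rather than parsed HCL, and `validate_plan` is a hypothetical helper for illustration, not a Vault API:

```python
# Sketch: structural rules for an import plan, using plain dicts in
# place of parsed HCL. Field names mirror the docs; validate_plan is
# hypothetical, not part of Vault.

REQUIRED = {
    "destination_vault": {"name", "mount"},
    "source_gcp": {"name", "credentials"},
    "mapping_regex": {"name", "source", "destination", "priority", "expression"},
}

def validate_plan(blocks):
    """Check required fields, and that mapping blocks reference
    previously defined source/destination names."""
    names = {b["name"] for kind, b in blocks if not kind.startswith("mapping")}
    for kind, body in blocks:
        missing = REQUIRED[kind] - body.keys()
        if missing:
            raise ValueError(f"{kind} missing {sorted(missing)}")
        if kind.startswith("mapping"):
            for ref in (body["source"], body["destination"]):
                if ref not in names:
                    raise ValueError(f"unknown block reference: {ref}")

plan = [
    ("destination_vault", {"name": "my-dest-1", "namespace": "ns-1", "mount": "mount-1"}),
    ("source_gcp", {"name": "my-gcp-source-1",
                    "credentials": "@/path/to/service-account-key.json"}),
    ("mapping_regex", {"name": "my-map-1", "source": "my-gcp-source-1",
                       "destination": "my-dest-1", "priority": 1,
                       "expression": "^database/.*$"}),
]
validate_plan(plan)  # raises ValueError on a malformed plan; silent here
```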
{"questions":"vault used to filter the scanned secrets and determine which will be imported in to Vault Vault supports multiple filter types for mapping blocks Each of the types provides a different mechanism Import mappings Mappings lets users apply various filtering methods to secrets being imported in to Vault layout docs page title Secrets import mappings","answers":"---\nlayout: docs\npage_title: Secrets import mappings\ndescription: Mappings let users apply various filtering methods to secrets being imported into Vault.\n---\n\n# Import mappings\n\nVault supports multiple filter types for mapping blocks. Each type provides a different mechanism\nused to filter the scanned secrets and determine which will be imported into Vault.\n\n\n## Argument reference\n\nRefer to the [HCL syntax](\/vault\/docs\/import#hcl-syntax-2) for arguments common to all mapping types.\n\n## Passthrough mapping filters\n\nThe passthrough mapping block `mapping_passthrough` allows all secrets through from the specified source to the\nspecified destination. For example, one use case is setting it as a base case for imported secrets. By assigning\nit the lowest priority in the import plan, all other mapping blocks will be applied first. 
Secrets that fail\nto match any of the previous mappings will fall through to the passthrough block and be collected in a single\nKVv2 location.\n\n### Additional arguments\n\nThere are no extra arguments to specify in a `mapping_passthrough` block.\n\n### Example\n\nIn this example, every secret that `my-gcp-source-1` scans from GCP Secret Manager will be imported\nto the KVv2 secrets engine mount defined in `my-dest-1`.\n\n```hcl\nmapping_passthrough {\n  name        = \"my-map-1\"\n  source      = \"my-gcp-source-1\"\n  destination = \"my-dest-1\"\n  priority    = 1\n}\n```\n\n## Metadata\n\nThe metadata mapping block `mapping_metadata` allows secrets through from the specified source to the specified\ndestination if they contain matching metadata key-value pairs. Not all external secret management systems support\nmetadata, and those that do may use different terminology for it. For example, AWS allows tags\non secrets while [GCP](\/vault\/docs\/import\/gcpsm) allows labels.\n\n### Additional arguments\n\n* `tags` `(string: <required>)` - A set of key-value pairs to match on secrets from the external system. All of the specified\nkeys must be found on a secret and all of the values must be exact matches. Specifying a key with\nan empty value (`\"\"`) acts as a wildcard, matching any value for that key in the external system.\n\n### Example\n\nIn this example, `my-map-1` will only import the secrets into the destination `my-dest-1` that contain a tag with\na key named `importable` and its value set to `true`. 
\n\n```hcl\nmapping_metadata {\n  name        = \"my-map-1\"\n  source      = \"my-gcp-source-1\"\n  destination = \"my-dest-1\"\n  priority    = 1\n\n  tags = {\n    \"importable\" = \"true\"\n  }\n}\n```\n\n## Regex\n\nThe regex mapping block `mapping_regex` allows secrets through from the specified source to the specified\ndestination if their secret name passes a regular expression check.\n\n### Additional arguments\n\n* `expression` `(string: <required>)` - The regular expression used to match secrets' names from the external system.\n\n### Example\n\nIn this example, any secret in the GCP source whose name begins with `database\/` will be imported into Vault.\n\n```hcl\nmapping_regex {\n  name        = \"my-map-1\"\n  source      = \"my-gcp-source-1\"\n  destination = \"my-dest-1\"\n  priority    = 1\n  expression  = \"^database\/.*$\"\n}\n```\n\n## Priority\n\nPriority works in a \"first match\" fashion where lower values are higher priority. To explain in more detail,\nconsider the above metadata example with a second additional mapping.\n\nBelow are two metadata mappings. The first, `my-map-1`, has a priority of 1 and only imports secrets\ninto the destination `my-dest-1` that contain both tag keys `database` and `importable`, with values\n`users` and `true` respectively. The second, `my-map-2`, has a priority of 2. Even though all\nthe secrets matched by the first mapping would also qualify for the second mapping's filtering rule, those secrets\nare imported only into `my-dest-1`, because `my-map-1` takes precedence. All remaining secrets that have the tag\n`importable` with a value of `true` will be imported into `my-dest-2`. 
\n\n```hcl\nmapping_metadata {\n  name        = \"my-map-1\"\n  source      = \"my-gcp-source-1\"\n  destination = \"my-dest-1\"\n  priority    = 1\n\n  tags = {\n    \"database\"   = \"users\"\n    \"importable\" = \"true\"\n  }\n}\n\nmapping_metadata {\n  name        = \"my-map-2\"\n  source      = \"my-gcp-source-1\"\n  destination = \"my-dest-2\"\n  priority    = 2\n\n  tags = {\n    \"importable\" = \"true\"\n  }\n}\n```","site":"vault"}
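The first-match routing described above can be sketched as follows. The mapping and secret shapes are simplified stand-ins for illustration (the empty-value wildcard match is not modeled), not Vault's internal implementation:

```python
# Sketch of "first match wins" metadata routing: mappings are tried in
# ascending priority order, and a secret goes to the first mapping
# whose tags all match exactly.

def route(secret_tags, mappings):
    """Return the destination of the highest-priority (lowest number)
    matching mapping, or None if no mapping matches."""
    for m in sorted(mappings, key=lambda m: m["priority"]):
        if all(secret_tags.get(k) == v for k, v in m["tags"].items()):
            return m["destination"]
    return None

mappings = [
    {"name": "my-map-2", "destination": "my-dest-2", "priority": 2,
     "tags": {"importable": "true"}},
    {"name": "my-map-1", "destination": "my-dest-1", "priority": 1,
     "tags": {"database": "users", "importable": "true"}},
]

# Qualifies for both mappings, but priority 1 is evaluated first:
assert route({"database": "users", "importable": "true"}, mappings) == "my-dest-1"
# Only satisfies the broader mapping:
assert route({"importable": "true"}, mappings) == "my-dest-2"
# Matches nothing:
assert route({"team": "db"}, mappings) is None
```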
{"questions":"vault This quick start will explore how to use Vault client libraries inside your application code to store and retrieve your first secret value Vault takes the security burden away from developers by providing a secure centralized secret store for an application s sensitive data credentials certificates encryption keys and more Learn how to store and retrieve your first secret Developer quick start layout docs page title Developer Quick Start","answers":"---\nlayout: docs\npage_title: Developer Quick Start\ndescription: Learn how to store and retrieve your first secret.\n---\n\n# Developer quick start\n\nThis quick start will explore how to use Vault client libraries inside your application code to store and retrieve your first secret value. Vault takes the security burden away from developers by providing a secure, centralized secret store for an application\u2019s sensitive data: credentials, certificates, encryption keys, and more.\n\nThe complete code samples for the steps below are available here:\n\n- [Go](https:\/\/github.com\/hashicorp\/vault-examples\/blob\/main\/examples\/_quick-start\/go\/example.go)\n- [Ruby](https:\/\/github.com\/hashicorp\/vault-examples\/blob\/main\/examples\/_quick-start\/ruby\/example.rb)\n- [C#](https:\/\/github.com\/hashicorp\/vault-examples\/blob\/main\/examples\/_quick-start\/dotnet\/Example.cs)\n- [Python](https:\/\/github.com\/hashicorp\/vault-examples\/blob\/main\/examples\/_quick-start\/python\/example.py)\n- [Java (Spring)](https:\/\/github.com\/hashicorp\/vault-examples\/blob\/main\/examples\/_quick-start\/java\/Example.java)\n- [OpenAPI-based Go](https:\/\/github.com\/hashicorp\/vault-client-go\/#getting-started)\n- [OpenAPI-based .NET](https:\/\/github.com\/hashicorp\/vault-client-dotnet\/#getting-started)\n\nFor an out-of-the-box runnable demo application showcasing these concepts and more, see the hello-vault repositories ([Go](https:\/\/github.com\/hashicorp\/hello-vault-go), 
[C#](https:\/\/github.com\/hashicorp\/hello-vault-dotnet) and [Java\/Spring Boot](https:\/\/github.com\/hashicorp\/hello-vault-spring)).\n\n## Prerequisites\n\n- [Docker](https:\/\/docs.docker.com\/get-docker\/) or a [local installation](\/vault\/tutorials\/getting-started\/getting-started-install) of the Vault binary\n- A development environment applicable to one of the languages in this quick start (currently **Go**, **Ruby**, **C#**, **Python**, **Java (Spring)**, and **Bash (curl)**)\n\n-> **Note**: Make sure you are using the [latest version](https:\/\/docs.docker.com\/engine\/release-notes\/) of Docker. Older versions may not work. As of 1.12.0, the recommended version of Docker is 20.10.17 or higher.\n\n## Step 1: start Vault\n\n!> **Warning**: This in-memory \u201cdev\u201d server is useful for practicing with Vault locally for the first time, but is insecure and **should never be used in production**. For developers who need to manage their own production Vault installations, this [page](\/vault\/tutorials\/operations\/production-hardening) provides some guidance on how to make your setup more production-friendly.\n\nRun the Vault server in a non-production \"dev\" mode in one of the following ways:\n\n**For Docker users, run this command**:\n\n```shell-session\n$ docker run -p 8200:8200 -e 'VAULT_DEV_ROOT_TOKEN_ID=dev-only-token' hashicorp\/vault\n```\n\n**For non-Docker users, run this command**:\n\n```shell-session\n$ vault server -dev -dev-root-token-id=\"dev-only-token\"\n```\n\nThe `-dev-root-token-id` flag for dev servers tells the Vault server to allow full root access to anyone who presents a token with the specified value (in this case \"dev-only-token\").\n\n!> **Warning**: The [root token](\/vault\/docs\/concepts\/tokens#root-tokens) is useful for development, but allows full access to all data and functionality of Vault, so it must be carefully guarded in production. 
Ideally, even an administrator of Vault would use their own token with limited privileges instead of the root token.\n\nVault is now listening over HTTP on port **8200**. With all the setup out of the way, it's time to get coding!\n\n## Step 2: install a client library\n\nTo read and write secrets in your application, you need to first configure a client to connect to Vault.\nLet's install the Vault client library for your language of choice.\n\n-> **Note**: Some of these libraries are currently community-maintained.\n\n<Tabs>\n<Tab heading=\"Go\" group=\"go\">\n\n[Go](https:\/\/pkg.go.dev\/github.com\/hashicorp\/vault\/api) (official) client library:\n\n```shell-session\n$ go get github.com\/hashicorp\/vault\/api\n```\n\nNow, let's add the import statements for the client library to the top of the file.\n\n<CodeBlockConfig heading=\"import statements for client library\" lineNumbers>\n\n```go\nimport vault \"github.com\/hashicorp\/vault\/api\"\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n<Tab heading=\"Ruby\" group=\"ruby\">\n\n\n[Ruby](https:\/\/github.com\/hashicorp\/vault-ruby) (official) client library:\n\n```shell-session\n$ gem install vault\n```\n\nNow, let's add the import statements for the client library to the top of the file.\n\n<CodeBlockConfig heading=\"import statements for client library\" lineNumbers>\n\n```ruby\nrequire \"vault\"\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n<Tab heading=\"C#\" group=\"cs\">\n\n\n[C#](https:\/\/github.com\/rajanadar\/VaultSharp) client library:\n\n```shell-session\n$ dotnet add package VaultSharp\n```\n\nNow, let's add the import statements for the client library to the top of the file.\n\n<CodeBlockConfig heading=\"import statements for client library\" lineNumbers>\n\n```cs\nusing VaultSharp;\nusing VaultSharp.V1.AuthMethods;\nusing VaultSharp.V1.AuthMethods.Token;\nusing VaultSharp.V1.Commons;\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n<Tab heading=\"Python\" group=\"python\">\n\n\n[Python](https:\/\/github.com\/hvac\/hvac) 
client library:\n\n```shell-session\n$ pip install hvac\n```\n\nNow, let's add the import statements for the client library to the top of the file.\n\n<CodeBlockConfig heading=\"import statements for client library\" lineNumbers>\n\n```Python\nimport hvac\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n<Tab heading=\"Java\" group=\"java\">\n\n\n[Java (Spring)](https:\/\/spring.io\/projects\/spring-vault) client library:\n\nAdd the following to pom.xml:\n\n```xml\n<dependency>\n     <groupId>org.springframework.vault<\/groupId>\n     <artifactId>spring-vault-core<\/artifactId>\n     <version>2.3.1<\/version>\n<\/dependency>\n```\n\nNow, let's add the import statements for the client library to the top of the file.\n\n<CodeBlockConfig heading=\"import statements for client library\" lineNumbers>\n\n```Java\nimport org.springframework.vault.authentication.TokenAuthentication;\nimport org.springframework.vault.client.VaultEndpoint;\nimport org.springframework.vault.support.Versioned;\nimport org.springframework.vault.core.VaultTemplate;\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n<Tab heading=\"OpenAPI Go (Beta)\" group=\"openAPI-go\">\n\n\n[OpenAPI Go](https:\/\/github.com\/hashicorp\/vault-client-go) (Beta) client library:\n\n```shell-session\n$ go get github.com\/hashicorp\/vault-client-go\n```\n\nNow, let's add the import statements for the client library to the top of the file.\n\n<CodeBlockConfig heading=\"import statements for client library\" lineNumbers>\n\n```go\nimport (\n\t\"github.com\/hashicorp\/vault-client-go\"\n\t\"github.com\/hashicorp\/vault-client-go\/schema\"\n)\n```\n\n<\/CodeBlockConfig>\n\n\n<\/Tab>\n<Tab heading=\"OpenAPI .NET (Beta)\" group=\"openAPI-dotnet\">\n\n\n[OpenAPI .NET](https:\/\/github.com\/hashicorp\/vault-client-dotnet) (Beta) client library:\n\nVault is a package available at [Hashicorp Nuget](https:\/\/www.nuget.org\/profiles\/hashicorp).\n\n\n```shell-session\n$ nuget install HashiCorp.Vault -Version 
\"0.1.0-beta\"\n```\n\n**Or:**\n\n```shell-session\n$ dotnet add package Hashicorp.Vault -version \"0.1.0-beta\"\n```\n\nNow, let's add the import statements for the client library to the top of the file.\n\n<CodeBlockConfig heading=\"import statements for client library\" lineNumbers>\n\n```cs\nusing Vault;\nusing Vault.Client;\n```\n\n<\/CodeBlockConfig>\n\n<\/Tab>\n<\/Tabs>\n\n\n## Step 3: authenticate to Vault\n\nA variety of [authentication methods](\/vault\/docs\/auth) can be used to prove your application's identity to the Vault server. To explore more secure authentication methods, such as via Kubernetes or your cloud provider, see the auth code snippets in the [vault-examples](https:\/\/github.com\/hashicorp\/vault-examples) repository.\n\nTo keep things simple for our example, we'll just use the root token created in **Step 1**.\nPaste the following code to initialize a new Vault client that will use token-based authentication for all its requests:\n\n<Tabs>\n<Tab heading=\"Go\">\n\n```go\nconfig := vault.DefaultConfig()\n\nconfig.Address = \"http:\/\/127.0.0.1:8200\"\n\nclient, err := vault.NewClient(config)\nif err != nil {\n    log.Fatalf(\"unable to initialize Vault client: %v\", err)\n}\n\nclient.SetToken(\"dev-only-token\")\n```\n\n<\/Tab>\n<Tab heading=\"Ruby\" group=\"ruby\">\n\n```ruby\nVault.configure do |config|\n    config.address = \"http:\/\/127.0.0.1:8200\"\n    config.token = \"dev-only-token\"\nend\n```\n\n<\/Tab>\n<Tab heading=\"C#\" group=\"cs\">\n\n```cs\nIAuthMethodInfo authMethod = new TokenAuthMethodInfo(vaultToken: \"dev-only-token\");\n\nVaultClientSettings vaultClientSettings = new\nVaultClientSettings(\"http:\/\/127.0.0.1:8200\", authMethod);\nIVaultClient vaultClient = new VaultClient(vaultClientSettings);\n```\n\n<\/Tab>\n<Tab heading=\"Python\" group=\"python\">\n\n```Python\nclient = hvac.Client(\n    url='http:\/\/127.0.0.1:8200',\n    token='dev-only-token',\n)\n```\n\n<\/Tab>\n<Tab heading=\"Java\" 
group=\"java\">\n\n```Java\nVaultEndpoint vaultEndpoint = new VaultEndpoint();\n\nvaultEndpoint.setHost(\"127.0.0.1\");\nvaultEndpoint.setPort(8200);\nvaultEndpoint.setScheme(\"http\");\n\nVaultTemplate vaultTemplate = new VaultTemplate(\n    vaultEndpoint,\n    new TokenAuthentication(\"dev-only-token\")\n);\n```\n\n<\/Tab>\n<Tab heading=\"Bash\" group=\"bash\">\n\n```shell-session\n$ export VAULT_TOKEN=\"dev-only-token\"\n```\n\n<\/Tab>\n<Tab heading=\"OpenAPI Go (Beta)\" group=\"openAPI-go\">\n\n```go\nclient, err := vault.New(\n   vault.WithAddress(\"http:\/\/127.0.0.1:8200\"),\n   vault.WithRequestTimeout(30*time.Second),\n)\nif err != nil {\n   log.Fatal(err)\n}\n\nif err := client.SetToken(\"dev-only-token\"); err != nil {\n   log.Fatal(err)\n}\n```\n\n<\/Tab>\n<Tab heading=\"OpenAPI .NET (Beta)\" group=\"openAPI-dotnet\">\n\n```cs\nstring address = \"http:\/\/127.0.0.1:8200\";\nVaultConfiguration config = new VaultConfiguration(address);\n\nVaultClient vaultClient = new VaultClient(config);\nvaultClient.SetToken(\"dev-only-token\");\n```\n\n<\/Tab>\n<\/Tabs>\n\n## Step 4: store a secret\n\nSecrets are sensitive data like API keys and passwords that we shouldn\u2019t be storing in our code or configuration files. 
Instead, we want to store values like this in Vault.\n\nWe'll use the Vault client we just initialized to write a secret to Vault, like so:\n\n<Tabs>\n<Tab heading=\"Go\">\n\n```go\nsecretData := map[string]interface{}{\n    \"password\": \"Hashi123\",\n}\n\n\n_, err = client.KVv2(\"secret\").Put(context.Background(), \"my-secret-password\", secretData)\nif err != nil {\n    log.Fatalf(\"unable to write secret: %v\", err)\n}\n\nfmt.Println(\"Secret written successfully.\")\n```\n\n<\/Tab>\n<Tab heading=\"Ruby\" group=\"ruby\">\n\n```ruby\nsecret_data = {data: {password: \"Hashi123\"}}\nVault.logical.write(\"secret\/data\/my-secret-password\", secret_data)\n\nputs \"Secret written successfully.\"\n```\n\n<\/Tab>\n<Tab heading=\"C#\" group=\"cs\">\n\n```cs\nvar secretData = new Dictionary<string, object> { { \"password\", \"Hashi123\" } };\nvaultClient.V1.Secrets.KeyValue.V2.WriteSecretAsync(\n    path: \"\/my-secret-password\",\n    data: secretData,\n    mountPoint: \"secret\"\n).Wait();\n\nConsole.WriteLine(\"Secret written successfully.\");\n```\n\n<\/Tab>\n<Tab heading=\"Python\" group=\"python\">\n\n```Python\ncreate_response = client.secrets.kv.v2.create_or_update_secret(\n    path='my-secret-password',\n    secret=dict(password='Hashi123'),\n)\n\nprint('Secret written successfully.')\n```\n\n<\/Tab>\n<Tab heading=\"Java\" group=\"java\">\n\n```Java\nMap<String, String> data = new HashMap<>();\ndata.put(\"password\", \"Hashi123\");\n\nVersioned.Metadata createResponse = vaultTemplate\n    .opsForVersionedKeyValue(\"secret\")\n    .put(\"my-secret-password\", data);\n\nSystem.out.println(\"Secret written successfully.\");\n```\n\n<\/Tab>\n<Tab heading=\"Bash\" group=\"bash\">\n\n```shell-session\n$ curl \\\n    --header \"X-Vault-Token: $VAULT_TOKEN\" \\\n    --header \"Content-Type: application\/json\" \\\n    --request POST \\\n    --data '{\"data\": {\"password\": \"Hashi123\"}}' \\\n    
http:\/\/127.0.0.1:8200\/v1\/secret\/data\/my-secret-password\n```\n\n<\/Tab>\n<Tab heading=\"OpenAPI Go (Beta)\" group=\"openAPI-go\">\n\n```go\n_, err = client.Secrets.KVv2Write(context.Background(), \"my-secret-password\", schema.KVv2WriteRequest{\n   Data: map[string]any{\n      \"password\": \"Hashi123\",\n   },\n})\nif err != nil {\n   log.Fatal(err)\n}\n\nlog.Println(\"Secret written successfully.\")\n```\n\n<\/Tab>\n<Tab heading=\"OpenAPI .NET (Beta)\" group=\"openAPI-dotnet\">\n\n```cs\nvar secretData = new Dictionary<string, string> { { \"password\", \"Hashi123\" } };\n\n\/\/ Write a secret\nvar kvRequestData = new KVv2WriteRequest(secretData);\n\nvaultClient.Secrets.KVv2Write(\"my-secret-password\", kvRequestData);\n```\n\n<\/Tab>\n<\/Tabs>\n\nA common way of storing secrets is as key-value pairs using the [KV secrets engine (v2)](\/vault\/docs\/secrets\/kv\/kv-v2). In the code we've just added, `password` is the key in the key-value pair, and `Hashi123` is the value.\n\nWe also provided the path to our secret in Vault. We will reference this path in a moment when we learn how to retrieve our secret.\n\nRun the code now, and you should see `Secret written successfully`. 
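One detail worth noting: the KV v2 engine prefixes secret paths with `data/` under the mount, which is why the raw curl call in the Bash tab targets `/v1/secret/data/my-secret-password` while the client libraries take the mount point (`secret`) and the secret path (`my-secret-password`) as separate arguments. A minimal Python sketch of that path mapping (the helper name is illustrative only, not part of hvac or any official client):

```python
def kv2_data_path(mount: str, secret_path: str) -> str:
    """Return the KV v2 HTTP API path for reading or writing a secret.

    Illustrative only -- client libraries such as hvac build this path
    for you from the mount point and the secret path.
    """
    return f"/v1/{mount.strip('/')}/data/{secret_path.strip('/')}"

# Matches the URL used in the Bash tab above:
print(kv2_data_path("secret", "my-secret-password"))
# /v1/secret/data/my-secret-password
```

Secret version metadata for the same secret lives under `/v1/secret/metadata/my-secret-password`; forgetting the `data/` segment is a common source of 404s when calling the HTTP API directly.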
If not, check that you've used the correct value for the root token and Vault server address.\n\n## Step 5: retrieve a secret\n\nNow that we know how to write a secret, let's practice reading one.\n\nUnderneath the line where you wrote a secret to Vault, let's add a few more lines, where we will be retrieving the secret and unpacking the value:\n\n<Tabs>\n<Tab heading=\"Go\">\n\n```go\nsecret, err := client.KVv2(\"secret\").Get(context.Background(), \"my-secret-password\")\nif err != nil {\n    log.Fatalf(\"unable to read secret: %v\", err)\n}\n\nvalue, ok := secret.Data[\"password\"].(string)\nif !ok {\n    log.Fatalf(\"value type assertion failed: %T %#v\", secret.Data[\"password\"], secret.Data[\"password\"])\n}\n```\n\n<\/Tab>\n<Tab heading=\"Ruby\" group=\"ruby\">\n\n```ruby\nsecret = Vault.logical.read(\"secret\/data\/my-secret-password\")\npassword = secret.data[:data][:password]\n```\n\n<\/Tab>\n<Tab heading=\"C#\" group=\"cs\">\n\n```cs\nSecret<SecretData> secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(\n    path: \"\/my-secret-password\",\n    mountPoint: \"secret\"\n).Result;\n\nvar password = secret.Data.Data[\"password\"];\n```\n\n<\/Tab>\n<Tab heading=\"Python\" group=\"python\">\n\n```Python\nread_response = client.secrets.kv.read_secret_version(path='my-secret-password')\n\npassword = read_response['data']['data']['password']\n```\n\n<\/Tab>\n<Tab heading=\"Java\" group=\"java\">\n\n```Java\nVersioned<Map<String, Object>> readResponse = vaultTemplate\n    .opsForVersionedKeyValue(\"secret\")\n    .get(\"my-secret-password\");\n\nString password = \"\";\nif (readResponse != null && readResponse.hasData()) {\n    password = (String) readResponse.getData().get(\"password\");\n}\n```\n\n<\/Tab>\n<Tab heading=\"Bash\" group=\"bash\">\n\n```shell-session\n$ curl \\\n    --header \"X-Vault-Token: $VAULT_TOKEN\" \\\n    http:\/\/127.0.0.1:8200\/v1\/secret\/data\/my-secret-password > secrets.json\n```\n\n<\/Tab>\n<Tab heading=\"OpenAPI Go (Beta)\" 
group=\"openAPI-go\">\n\n```go\ns, err := client.Secrets.KVv2Read(context.Background(), \"my-secret-password\")\nif err != nil {\n   log.Fatal(err)\n}\n\nlog.Println(\"Secret retrieved:\", s.Data)\n```\n\n<\/Tab>\n<Tab heading=\"OpenAPI .NET (Beta)\" group=\"openAPI-dotnet\">\n\n```cs\nVaultResponse<Object> resp = vaultClient.Secrets.KVv2Read(\"my-secret-password\");\nConsole.WriteLine(resp.Data);\n```\n\n<\/Tab>\n<\/Tabs>\n\nLast, confirm that the value we unpacked from the read response is correct:\n\n<Tabs>\n<Tab heading=\"Go\">\n\n```go\nif value != \"Hashi123\" {\n    log.Fatalf(\"unexpected password value %q retrieved from vault\", value)\n}\n\nfmt.Println(\"Access granted!\")\n```\n\n<\/Tab>\n<Tab heading=\"Ruby\" group=\"ruby\">\n\n```ruby\nabort \"Unexpected password\" if password != \"Hashi123\"\n\nputs \"Access granted!\"\n```\n\n<\/Tab>\n<Tab heading=\"C#\" group=\"cs\">\n\n```cs\nif (password.ToString() != \"Hashi123\")\n{\n    throw new System.Exception(\"Unexpected password\");\n}\n\nConsole.WriteLine(\"Access granted!\");\n```\n\n<\/Tab>\n<Tab heading=\"Python\" group=\"python\">\n\n```Python\nif password != 'Hashi123':\n    sys.exit('unexpected password')\n\nprint('Access granted!')\n```\n\n<\/Tab>\n<Tab heading=\"Java\" group=\"java\">\n\n```Java\nif (!password.equals(\"Hashi123\")) {\n    throw new Exception(\"Unexpected password\");\n}\n\nSystem.out.println(\"Access granted!\");\n```\n\n<\/Tab>\n<Tab heading=\"Bash\" group=\"bash\">\n\n```shell-session\n$ cat secrets.json | jq '.data.data'\n```\n\n<\/Tab>\n<\/Tabs>\n\nIf the secret was fetched successfully, you should see the `Access granted!` message after you run the code. If not, check to see if you provided the correct path to your secret.\n\n**That's it! 
You've just written and retrieved your first Vault secret!**\n\n## Additional examples\n\nFor more secure examples of client authentication, see the auth snippets in the [vault-examples](https:\/\/github.com\/hashicorp\/vault-examples) repo.\n\nFor a runnable demo app that demonstrates more features, for example, how to keep your connection to Vault alive and how to connect to a database using Vault's dynamic database credentials, see the sample application hello-vault ([Go](https:\/\/github.com\/hashicorp\/hello-vault-go), [C#](https:\/\/github.com\/hashicorp\/hello-vault-dotnet)).\n\nTo learn how to integrate applications with Vault without needing to always change your application code, see the [Vault Agent](\/vault\/docs\/agent-and-proxy\/agent) documentation.","site":"vault","answers_cleaned":""}
{"questions":"tekton weight 102 Migrating From Tekton to Tekton Migrating from Tekton v1alpha1","answers":"<!--\n---\nlinkTitle: \"Migrating from Tekton v1alpha1\"\nweight: 102\n---\n-->\n\n# Migrating From Tekton `v1alpha1` to Tekton `v1beta1`\n\n- [Changes to fields](#changes-to-fields)\n- [Changes to input parameters](#changes-to-input-parameters)\n- [Replacing `PipelineResources` with `Tasks`](#replacing-pipelineresources-with-tasks)\n- [Changes to `PipelineResources`](#changes-to-pipelineresources)\n\nThis document describes the differences between `v1alpha1` Tekton entities and their\n`v1beta1` counterparts. It also describes how to replace the supported types of\n`PipelineResources` with `Tasks` from the Tekton Catalog of equivalent functionality.\n\n## Changes to fields\n\nIn Tekton `v1beta1`, the following fields have been changed:\n\n| Old field | New field |\n| --------- | ----------|\n| `spec.inputs.params` | [`spec.params`](#changes-to-input-parameters) |\n| `spec.inputs` | Removed from `Tasks` |\n| `spec.outputs` | Removed from `Tasks` |\n| `spec.inputs.resources` | [`spec.resources.inputs`](#changes-to-pipelineresources) |\n| `spec.outputs.resources` | [`spec.resources.outputs`](#changes-to-pipelineresources) |\n\n## Changes to input parameters\n\nIn Tekton `v1beta1`, input parameters have been moved from `spec.inputs.params` to `spec.params`.\n\nFor example, consider the following `v1alpha1` parameters:\n\n```yaml\n# Task.yaml (v1alpha1)\nspec:\n  inputs:\n    params:\n      - name: 
ADDR\n        value: https:\/\/example.com\/foo.json\n```\n\nThe above parameters are now represented as follows in `v1beta1`:\n\n```yaml\n# Task.yaml (v1beta1)\nspec:\n  params:\n    - name: ADDR\n      description: Address to curl.\n      type: string\n\n# TaskRun.yaml (v1beta1)\nspec:\n  params:\n    - name: ADDR\n      value: https:\/\/example.com\/foo.json\n```\n\n## Replacing `PipelineResources` with `Tasks`\n\nSee [\"Replacing PipelineResources with Tasks\"](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/pipelineresources.md#replacing-pipelineresources-with-tasks) for information and examples on how to replace PipelineResources when migrating from v1alpha1 to v1beta1.\n\n## Changes to PipelineResources\n\nIn Tekton `v1beta1`, `PipelineResources` have been moved from `spec.input.resources`\nand `spec.output.resources` to `spec.resources.inputs` and `spec.resources.outputs`,\nrespectively.\n\nFor example, consider the following `v1alpha1` definition:\n\n```yaml\n# Task.yaml (v1alpha1)\nspec:\n  inputs:\n    resources:\n      - name: skaffold\n        type: git\n  outputs:\n    resources:\n      - name: baked-image\n        type: image\n\n# TaskRun.yaml (v1alpha1)\nspec:\n  inputs:\n    resources:\n      - name: skaffold\n        resourceSpec:\n          type: git\n          params:\n            - name: revision\n              value: v0.32.0\n            - name: url\n              value: https:\/\/github.com\/GoogleContainerTools\/skaffold\n  outputs:\n    resources:\n      - name: baked-image\n        resourceSpec:\n          type: image\n          params:\n            - name: url\n              value: gcr.io\/foo\/bar\n```\n\nThe above definition becomes the following in `v1beta1`:\n\n```yaml\n# Task.yaml (v1beta1)\nspec:\n  resources:\n    inputs:\n      - name: skaffold\n        type: git\n    outputs:\n      - name: baked-image\n        type: image\n\n# TaskRun.yaml (v1beta1)\nspec:\n  resources:\n    inputs:\n      - name: skaffold\n    
    resourceSpec:\n          type: git\n          params:\n            - name: revision\n              value: v0.32.0\n            - name: url\n              value: https:\/\/github.com\/GoogleContainerTools\/skaffold\n    outputs:\n      - name: baked-image\n        resourceSpec:\n          type: image\n          params:\n            - name: url\n              value: gcr.io\/foo\/bar\n```","site":"tekton","answers_cleaned":"
example  consider the following  v1alpha1  parameters      yaml   Task yaml  v1alpha1  spec    inputs      params          name  ADDR         description  Address to curl          type  string    TaskRun yaml  v1alpha1  spec    inputs      params          name  ADDR         value  https   example com foo json      The above parameters are now represented as follows in  v1beta1       yaml   Task yaml  v1beta1  spec    params        name  ADDR       description  Address to curl        type  string    TaskRun yaml  v1beta1  spec    params        name  ADDR       value  https   example com foo json         Replacing  PipelineResources  with  Tasks  See   Replacing PipelineResources with Tasks   https   github com tektoncd pipeline blob main docs pipelineresources md replacing pipelineresources with tasks  for information and examples on how to replace PipelineResources when migrating from v1alpha1 to v1beta1      Changes to PipelineResources  In Tekton  v1beta1    PipelineResources  have been moved from  spec input resources  and  spec output resources  to  spec resources inputs  and  spec resources outputs   respectively   For example  consider the following  v1alpha1  definition      yaml   Task yaml  v1alpha1  spec    inputs      resources          name  skaffold         type  git   outputs      resources          name  baked image         type  image    TaskRun yaml  v1alpha1  spec    inputs      resources          name  skaffold         resourceSpec            type  git           params                name  revision               value  v0 32 0               name  url               value  https   github com GoogleContainerTools skaffold   outputs      resources          name  baked image         resourceSpec              type  image             params                  name  url                 value  gcr io foo bar      The above definition becomes the following in  v1beta1       yaml   Task yaml  v1beta1  spec    resources      inputs          name  src repo      
   type  git     outputs          name  baked image         type  image    TaskRun yaml  v1beta1  spec    resources      inputs          name  src repo         resourceSpec            type  git           params                name  revision               value  main               name  url               value  https   github com tektoncd pipeline     outputs          name  baked image         resourceSpec              type  image             params                  name  url                 value  gcr io foo bar    "}
{"questions":"tekton toc weight 204 PipelineRuns PipelineRuns","answers":"<!--\n---\nlinkTitle: \"PipelineRuns\"\nweight: 204\n---\n-->\n\n# PipelineRuns\n\n<!-- toc -->\n- [PipelineRuns](#pipelineruns)\n  - [Overview](#overview)\n  - [Configuring a <code>PipelineRun<\/code>](#configuring-a-pipelinerun)\n    - [Specifying the target <code>Pipeline<\/code>](#specifying-the-target-pipeline)\n      - [Tekton Bundles](#tekton-bundles)\n      - [Remote Pipelines](#remote-pipelines)\n    - [Specifying Task-level `ComputeResources`](#specifying-task-level-computeresources)\n    - [Specifying <code>Parameters<\/code>](#specifying-parameters)\n      - [Propagated Parameters](#propagated-parameters)\n        - [Scope and Precedence](#scope-and-precedence)\n        - [Default Values](#default-values)\n        - [Object Parameters](#object-parameters)\n    - [Specifying custom <code>ServiceAccount<\/code> credentials](#specifying-custom-serviceaccount-credentials)\n    - [Mapping <code>ServiceAccount<\/code> credentials to <code>Tasks<\/code>](#mapping-serviceaccount-credentials-to-tasks)\n    - [Specifying a <code>Pod<\/code> template](#specifying-a-pod-template)\n    - [Specifying taskRunSpecs](#specifying-taskrunspecs)\n    - [Specifying <code>Workspaces<\/code>](#specifying-workspaces)\n      - [Propagated Workspaces](#propagated-workspaces)\n        - [Referenced TaskRuns within Embedded PipelineRuns](#referenced-taskruns-within-embedded-pipelineruns)\n    - [Specifying <code>LimitRange<\/code> values](#specifying-limitrange-values)\n    - [Configuring a failure timeout](#configuring-a-failure-timeout)\n  - [<code>PipelineRun<\/code> status](#pipelinerun-status)\n    - [The <code>status<\/code> field](#the-status-field)\n    - [Monitoring execution status](#monitoring-execution-status)\n    - [Marking off user errors](#marking-off-user-errors)\n  - [Cancelling a <code>PipelineRun<\/code>](#cancelling-a-pipelinerun)\n  - [Gracefully cancelling a 
<code>PipelineRun<\/code>](#gracefully-cancelling-a-pipelinerun)\n  - [Gracefully stopping a <code>PipelineRun<\/code>](#gracefully-stopping-a-pipelinerun)\n  - [Pending <code>PipelineRuns<\/code>](#pending-pipelineruns)\n<!-- \/toc -->\n\n\n## Overview\n\nA `PipelineRun` allows you to instantiate and execute a [`Pipeline`](pipelines.md) on-cluster.\nA `Pipeline` specifies one or more `Tasks` in the desired order of execution. A `PipelineRun`\nexecutes the `Tasks` in the `Pipeline` in the order they are specified until all `Tasks` have\nexecuted successfully or a failure occurs.\n\n**Note:** A `PipelineRun` automatically creates corresponding `TaskRuns` for every\n`Task` in your `Pipeline`.\n\nThe `Status` field tracks the current state of a `PipelineRun`, and can be used to monitor\nprogress.\nThis field contains the status of every `TaskRun`, as well as the full `PipelineSpec` used\nto instantiate this `PipelineRun`, for full auditability.\n\n## Configuring a `PipelineRun`\n\nA `PipelineRun` definition supports the following fields:\n\n- Required:\n  - [`apiVersion`][kubernetes-overview] - Specifies the API version. For example\n    `tekton.dev\/v1beta1`.\n  - [`kind`][kubernetes-overview] - Indicates that this resource object is a `PipelineRun` object.\n  - [`metadata`][kubernetes-overview] - Specifies the metadata that uniquely identifies the\n    `PipelineRun` object. 
For example, a `name`.\n  - [`spec`][kubernetes-overview] - Specifies the configuration information for\n    this `PipelineRun` object.\n    - [`pipelineRef` or `pipelineSpec`](#specifying-the-target-pipeline) - Specifies the target [`Pipeline`](pipelines.md).\n- Optional:\n  - [`params`](#specifying-parameters) - Specifies the desired execution parameters for the `Pipeline`.\n  - [`serviceAccountName`](#specifying-custom-serviceaccount-credentials) - Specifies a `ServiceAccount`\n    object that supplies specific execution credentials for the `Pipeline`.\n  - [`status`](#cancelling-a-pipelinerun) - Specifies options for cancelling a `PipelineRun`.\n  - [`taskRunSpecs`](#specifying-taskrunspecs) - Specifies a list of `PipelineTaskRunSpec` which allows for setting `ServiceAccountName`, [`Pod` template](.\/podtemplates.md), and `Metadata` for each task. This overrides the `Pod` template set for the entire `Pipeline`.\n  - [`timeout`](#configuring-a-failure-timeout) - Specifies the timeout before the `PipelineRun` fails. `timeout` is deprecated and will eventually be removed, so consider using `timeouts` instead.\n  - [`timeouts`](#configuring-a-failure-timeout) - Specifies the timeout before the `PipelineRun` fails. 
`timeouts` allows more granular timeout configuration at the `pipeline`, `tasks`, and `finally` levels.\n  - [`podTemplate`](#specifying-a-pod-template) - Specifies a [`Pod` template](.\/podtemplates.md) to use as the basis for the configuration of the `Pod` that executes each `Task`.\n  - [`workspaces`](#specifying-workspaces) - Specifies a set of workspace bindings which must match the names of workspaces declared in the pipeline being used.\n\n[kubernetes-overview]:\n  https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/kubernetes-objects\/#required-fields\n\n### Specifying the target `Pipeline`\n\nYou must specify the target `Pipeline` that you want the `PipelineRun` to execute, either by referencing\nan existing `Pipeline` definition, or by embedding a `Pipeline` definition directly in the `PipelineRun`.\n\nTo specify the target `Pipeline` by reference, use the `pipelineRef` field:\n\n```yaml\nspec:\n  pipelineRef:\n    name: mypipeline\n```\n\nTo embed a `Pipeline` definition in the `PipelineRun`, use the `pipelineSpec` field:\n\n```yaml\nspec:\n  pipelineSpec:\n    tasks:\n      - name: task1\n        taskRef:\n          name: mytask\n```\n\nThe `Pipeline` in the [`pipelineSpec` example](..\/examples\/v1\/pipelineruns\/pipelinerun-with-pipelinespec.yaml)\ndisplays morning and evening greetings. 
Once you create and execute it, you can check the logs for its `Pods`:\n\n```bash\nkubectl logs $(kubectl get pods -o name | grep pipelinerun-echo-greetings-echo-good-morning)\nGood Morning, Bob!\n\nkubectl logs $(kubectl get pods -o name | grep pipelinerun-echo-greetings-echo-good-night)\nGood Night, Bob!\n```\n\nYou can also embed a `Task` definition in the embedded `Pipeline` definition:\n\n```yaml\nspec:\n  pipelineSpec:\n    tasks:\n      - name: task1\n        taskSpec:\n          steps: ...\n```\n\nIn the [`taskSpec` in `pipelineSpec` example](..\/examples\/v1\/pipelineruns\/pipelinerun-with-pipelinespec-and-taskspec.yaml)\nit's `Tasks` all the way down!\n\nYou can also specify labels and annotations with `taskSpec`, which are propagated to each `TaskRun` and then to the\nrespective `Pods`. These labels can be used to identify and filter `Pods` for further actions (such as collecting `Pod` metrics\nor cleaning up completed `Pods` with certain labels), even when they are part of a single `Pipeline`.\n\n```yaml\nspec:\n  pipelineSpec:\n    tasks:\n      - name: task1\n        taskSpec:\n          metadata:\n            labels:\n              pipeline-sdk-type: kfp\n        # ...\n      - name: task2\n        taskSpec:\n          metadata:\n            labels:\n              pipeline-sdk-type: tfx\n        # ...\n```\n\n#### Tekton Bundles\n\nA `Tekton Bundle` is an OCI artifact that contains Tekton resources like `Tasks` which can be referenced within a `taskRef`.\n\nYou can reference a `Tekton bundle` in a `TaskRef` in both `v1` and `v1beta1` using [remote resolution](.\/bundle-resolver.md#pipeline-resolution). 
The example syntax shown below for `v1` uses remote resolution and requires enabling [beta features](.\/additional-configs.md#beta-features).\n\n```yaml\nspec:\n  pipelineRef:\n    resolver: bundles\n    params:\n    - name: bundle\n      value: docker.io\/myrepo\/mycatalog:v1.0\n    - name: name\n      value: mypipeline\n    - name: kind\n      value: Pipeline\n```\n\nThe syntax and caveats are similar to using `Tekton Bundles` for  `Task` references\nin [Pipelines](pipelines.md#tekton-bundles) or [TaskRuns](taskruns.md#tekton-bundles).\n\n`Tekton Bundles` may be constructed with any toolsets that produce valid OCI image artifacts\nso long as the artifact adheres to the [contract](tekton-bundle-contracts.md).\n\n#### Remote Pipelines\n\n**([beta feature](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/install.md#beta-features))**\n\nA `pipelineRef` field may specify a Pipeline in a remote location such as git.\nSupport for specific types of remote will depend on the Resolvers your\ncluster's operator has installed. For more information including a tutorial, please check [resolution docs](resolution.md). 
The below example demonstrates\nreferencing a Pipeline in git:\n\n```yaml\nspec:\n  pipelineRef:\n    resolver: git\n    params:\n    - name: url\n      value: https:\/\/github.com\/tektoncd\/catalog.git\n    - name: revision\n      value: abc123\n    - name: pathInRepo\n      value: \/pipeline\/buildpacks\/0.1\/buildpacks.yaml\n```\n\n### Specifying Task-level `ComputeResources`\n\n**([alpha only](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/additional-configs.md#alpha-features))**\n\nTask-level compute resources can be configured in `PipelineRun.TaskRunSpecs.ComputeResources` or `TaskRun.ComputeResources`.\n\ne.g.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: pipeline\nspec:\n  tasks:\n    - name: task\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: pipelinerun\nspec:\n  pipelineRef:\n    name: pipeline\n  taskRunSpecs:\n    - pipelineTaskName: task\n      computeResources:\n        requests:\n          cpu: 2\n```\n\nFurther details and examples can be found in [Compute Resources in Tekton](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/compute-resources.md).\n\n### Specifying `Parameters`\n\n(See also [Specifying Parameters in Tasks](tasks.md#specifying-parameters))\n\nYou can specify `Parameters` that you want to pass to the `Pipeline` during execution,\nincluding different values of the same parameter for different `Tasks` in the `Pipeline`.\n\n**Note:** You must specify all the `Parameters` that the `Pipeline` expects and that have no default values. `Parameters`\nwith default values specified in the `Pipeline` do not need to be provided by the `PipelineRun`.\n\nFor example:\n\n```yaml\nspec:\n  params:\n    - name: pl-param-x\n      value: \"100\"\n    - name: pl-param-y\n      value: \"500\"\n```\n\nYou can pass in extra `Parameters` if needed depending on your use cases. 
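As a sketch of that flexibility (the pipeline name and the extra parameter are hypothetical), a `PipelineRun` can supply a parameter the `Pipeline` never declares alongside the expected ones:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: ci-run-
spec:
  pipelineRef:
    name: mypipeline # assumed to declare only pl-param-x and pl-param-y
  params:
    - name: pl-param-x
      value: "100"
    - name: pl-param-y
      value: "500"
    - name: ci-build-id # extra parameter injected by a CI system; unused by the Pipeline
      value: "build-42"
```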
An example use\ncase is when your CI system autogenerates `PipelineRuns` and it has `Parameters` it wants to\nprovide to all `PipelineRuns`. Because you can pass in extra `Parameters`, you don't have to\ngo through the complexity of checking each `Pipeline` and providing only the required params.\n\n#### Parameter Enums\n\n> :seedling: **`enum` is an [alpha](additional-configs.md#alpha-features) feature.** The `enable-param-enum` feature flag must be set to `\"true\"` to enable this feature.\n\nIf a `Parameter` is guarded by `Enum` in the `Pipeline`, you can only provide `Parameter` values in the `PipelineRun` that are predefined in the `Param.Enum` in the `Pipeline`. The `PipelineRun` will fail with reason `InvalidParamValue` otherwise.\n\nTekton will also validate the `param` values passed to any referenced `Tasks` (via `taskRef`) if `Enum` is specified for the `Task`. The `PipelineRun` will fail with reason `InvalidParamValue` if `Enum` validation fails for any of the `PipelineTasks`.\n\nYou can also specify `Enum` in an embedded `Pipeline` in a `PipelineRun`. The same `Param` validation will be executed in this scenario.\n\nSee more details in [Param.Enum](.\/pipelines.md#param-enum).\n\n#### Propagated Parameters\n\nWhen using an inlined spec, parameters from the parent `PipelineRun` will be\npropagated to any inlined specs without needing to be explicitly defined. 
This\nallows authors to simplify specs by automatically propagating top-level\nparameters down to other inlined resources.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: pr-echo-\nspec:\n  params:\n    - name: HELLO\n      value: \"Hello World!\"\n    - name: BYE\n      value: \"Bye World!\"\n  pipelineSpec:\n    tasks:\n      - name: echo-hello\n        taskSpec:\n          steps:\n            - name: echo\n              image: ubuntu\n              script: |\n                #!\/usr\/bin\/env bash\n                echo \"$(params.HELLO)\"\n      - name: echo-bye\n        taskSpec:\n          steps:\n            - name: echo\n              image: ubuntu\n              script: |\n                #!\/usr\/bin\/env bash\n                echo \"$(params.BYE)\"\n```\n\nOn executing the `PipelineRun`, the parameters are interpolated during resolution.\nThe specifications are not mutated before storage, so they remain the same.\nOnly the status is updated.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: pr-echo-szzs9\n  ...\nspec:\n  params:\n  - name: HELLO\n    value: Hello World!\n  - name: BYE\n    value: Bye World!\n  pipelineSpec:\n    tasks:\n    - name: echo-hello\n      taskSpec:\n        steps:\n        - image: ubuntu\n          name: echo\n          script: |\n            #!\/usr\/bin\/env bash\n            echo \"$(params.HELLO)\"\n    - name: echo-bye\n      taskSpec:\n        steps:\n        - image: ubuntu\n          name: echo\n          script: |\n            #!\/usr\/bin\/env bash\n            echo \"$(params.BYE)\"\nstatus:\n  conditions:\n  - lastTransitionTime: \"2022-04-07T12:34:58Z\"\n    message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'\n    reason: Succeeded\n    status: \"True\"\n    type: Succeeded\n  pipelineSpec:\n    ...\n  childReferences:\n  - name: pr-echo-szzs9-echo-hello\n    pipelineTaskName: 
echo-hello\n    kind: TaskRun\n  - name: pr-echo-szzs9-echo-bye\n    pipelineTaskName: echo-bye\n    kind: TaskRun\n```\n\n##### Scope and Precedence\n\nWhen `Parameter` names conflict, the inner scope takes precedence, as shown in this example:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: pr-echo-\nspec:\n  params:\n  - name: HELLO\n    value: \"Hello World!\"\n  - name: BYE\n    value: \"Bye World!\"\n  pipelineSpec:\n    tasks:\n      - name: echo-hello\n        params:\n        - name: HELLO\n          value: \"Sasa World!\"\n        taskSpec:\n          params:\n            - name: HELLO\n              type: string\n          steps:\n            - name: echo\n              image: ubuntu\n              script: |\n                #!\/usr\/bin\/env bash\n                echo \"$(params.HELLO)\"\n    ...\n```\n\nresolves to\n\n```yaml\n# Successful execution of the above PipelineRun\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: pr-echo-szzs9\n  ...\nspec:\n  ...\nstatus:\n  conditions:\n    - lastTransitionTime: \"2022-04-07T12:34:58Z\"\n      message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'\n      reason: Succeeded\n      status: \"True\"\n      type: Succeeded\n  ...\n  childReferences:\n  - name: pr-echo-szzs9-echo-hello\n    pipelineTaskName: echo-hello\n    kind: TaskRun\n  ...\n```\n\n##### Default Values\n\nWhen `Parameter` specifications have default values, the `Parameter` value provided at runtime takes precedence, giving users control, as shown in this example:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: pr-echo-\nspec:\n  params:\n  - name: HELLO\n    value: \"Hello World!\"\n  - name: BYE\n    value: \"Bye World!\"\n  pipelineSpec:\n    tasks:\n      - name: echo-hello\n        taskSpec:\n          params:\n          - name: HELLO\n            type: 
string\n            default: \"Sasa World!\"\n          steps:\n            - name: echo\n              image: ubuntu\n              script: |\n                #!\/usr\/bin\/env bash\n                echo \"$(params.HELLO)\"\n    ...\n```\n\nresolves to\n\n```yaml\n# Successful execution of the above PipelineRun\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: pr-echo-szzs9\n  ...\nspec:\n  ...\nstatus:\n  conditions:\n    - lastTransitionTime: \"2022-04-07T12:34:58Z\"\n      message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'\n      reason: Succeeded\n      status: \"True\"\n      type: Succeeded\n  ...\n  childReferences:\n  - name: pr-echo-szzs9-echo-hello\n    pipelineTaskName: echo-hello\n    kind: TaskRun\n  ...\n```\n\n##### Referenced Resources\n\nWhen a PipelineRun definition has referenced specifications but does not explicitly pass Parameters, the PipelineRun will be created but the execution will fail because of missing Parameters.\n\n```yaml\n# Invalid PipelineRun attempting to propagate Parameters to referenced Tasks\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: pr-echo-\nspec:\n  params:\n  - name: HELLO\n    value: \"Hello World!\"\n  - name: BYE\n    value: \"Bye World!\"\n  pipelineSpec:\n    tasks:\n      - name: echo-hello\n        taskRef:\n          name: echo-hello\n      - name: echo-bye\n        taskRef:\n          name: echo-bye\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: echo-hello\nspec:\n  steps:\n    - name: echo\n      image: ubuntu\n      script: |\n        #!\/usr\/bin\/env bash\n        echo \"$(params.HELLO)\"\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: echo-bye\nspec:\n  steps:\n    - name: echo\n      image: ubuntu\n      script: |\n        #!\/usr\/bin\/env bash\n        echo \"$(params.BYE)\"\n```\n\nFails as 
follows:\n\n```yaml\n# Failed execution of the above PipelineRun\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: pr-echo-24lmf\n  ...\nspec:\n  params:\n  - name: HELLO\n    value: Hello World!\n  - name: BYE\n    value: Bye World!\n  pipelineSpec:\n    tasks:\n    - name: echo-hello\n      taskRef:\n        kind: Task\n        name: echo-hello\n    - name: echo-bye\n      taskRef:\n        kind: Task\n        name: echo-bye\nstatus:\n  conditions:\n  - lastTransitionTime: \"2022-04-07T20:24:51Z\"\n    message: 'invalid input params for task echo-hello: missing values for\n              these params which have no default values: [HELLO]'\n    reason: PipelineValidationFailed\n    status: \"False\"\n    type: Succeeded\n  ...\n```\n\n##### Object Parameters\n\nWhen using an inlined spec, object parameters from the parent `PipelineRun` will also be\npropagated to any inlined specs without needing to be explicitly defined. This\nallows authors to simplify specs by automatically propagating top-level\nparameters down to other inlined resources.\nWhen propagating object parameters, scope and precedence also holds as shown below.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: pipelinerun-object-param-result\nspec:\n  params:\n    - name: gitrepo\n      value:\n        url: abc.com\n        commit: sha123\n  pipelineSpec:\n    tasks:\n      - name: task1\n        params:\n          - name: gitrepo\n            value:\n              branch: main\n              url: xyz.com\n        taskSpec:\n          steps:\n            - name: write-result\n              image: bash\n              args: [\n                \"echo\",\n                \"--url=$(params.gitrepo.url)\",\n                \"--commit=$(params.gitrepo.commit)\",\n                \"--branch=$(params.gitrepo.branch)\",\n              ]\n```\n\nresolves to\n\n```yaml\napiVersion: tekton.dev\/v1 # or 
tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: pipelinerun-object-param-resultpxp59\n  ...\nspec:\n  params:\n  - name: gitrepo\n    value:\n      commit: sha123\n      url: abc.com\n  pipelineSpec:\n    tasks:\n    - name: task1\n      params:\n      - name: gitrepo\n        value:\n          branch: main\n          url: xyz.com\n      taskSpec:\n        metadata: {}\n        spec: null\n        steps:\n        - args:\n          - echo\n          - --url=$(params.gitrepo.url)\n          - --commit=$(params.gitrepo.commit)\n          - --branch=$(params.gitrepo.branch)\n          image: bash\n          name: write-result\nstatus:\n  completionTime: \"2022-09-08T17:22:01Z\"\n  conditions:\n  - lastTransitionTime: \"2022-09-08T17:22:01Z\"\n    message: 'Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0'\n    reason: Succeeded\n    status: \"True\"\n    type: Succeeded\n  pipelineSpec:\n    tasks:\n    - name: task1\n      params:\n      - name: gitrepo\n        value:\n          branch: main\n          url: xyz.com\n      taskSpec:\n        metadata: {}\n        spec: null\n        steps:\n        - args:\n          - echo\n          - --url=xyz.com\n          - --commit=sha123\n          - --branch=main\n          image: bash\n          name: write-result\n  startTime: \"2022-09-08T17:21:57Z\"\n  childReferences:\n  - name: pipelinerun-object-param-resultpxp59-task1\n    pipelineTaskName: task1\n    kind: TaskRun\n```\n\n### Specifying custom `ServiceAccount` credentials\n\nYou can execute the `Pipeline` in your `PipelineRun` with a specific set of credentials by\nspecifying a `ServiceAccount` object name in the `serviceAccountName` field in your `PipelineRun`\ndefinition. 
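For example, a minimal sketch (assuming a `ServiceAccount` named `build-bot` already exists in the target namespace; in `v1` the field lives under `taskRunTemplate` instead):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: mypipelinerun
spec:
  pipelineRef:
    name: mypipeline
  serviceAccountName: build-bot # every TaskRun created by this PipelineRun uses these credentials
```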
If you do not explicitly specify this, the `TaskRuns` created by your `PipelineRun`\nwill execute with the credentials specified in the `configmap-defaults` `ConfigMap`. If this\ndefault is not specified, the `TaskRuns` will execute with the [`default` service account](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/#use-the-default-service-account-to-access-the-api-server)\nset for the target [`namespace`](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/namespaces\/).\n\nFor more information, see [`ServiceAccount`](auth.md).\n\n[`Custom tasks`](pipelines.md#using-custom-tasks) may or may not use a service account name.\nConsult the documentation of the custom task that you are using to determine whether it supports a service account name.\n\n### Mapping `ServiceAccount` credentials to `Tasks`\n\nIf you require more granularity in specifying execution credentials, use the `taskRunSpecs[].taskServiceAccountName` field to\nmap a specific `serviceAccountName` value to a specific `Task` in the `Pipeline`. 
This overrides the global\n`serviceAccountName` you may have set for the `Pipeline` as described in the previous section.\n\nFor example, if you specify these mappings:\n\n```yaml\n# tekton.dev\/v1\nspec:\n  taskRunTemplate:\n    serviceAccountName: sa-1\n  taskRunSpecs:\n    - pipelineTaskName: build-task\n      serviceAccountName: sa-for-build\n```\n\n```yaml\n# tekton.dev\/v1beta1\nspec:\n  serviceAccountName: sa-1\n  taskRunSpecs:\n    - pipelineTaskName: build-task\n      taskServiceAccountName: sa-for-build\n```\n\nfor this `Pipeline`:\n\n```yaml\nkind: Pipeline\nspec:\n  tasks:\n    - name: build-task\n      taskRef:\n        name: build-push\n    - name: test-task\n      taskRef:\n        name: test\n```\n\nthen `test-task` will execute using the `sa-1` account while `build-task` will execute with `sa-for-build`.\n\n#### Propagated Results\n\nWhen using an embedded spec, `Results` from the parent `PipelineRun` will be\npropagated to any inlined specs without needing to be explicitly defined. This\nallows authors to simplify specs by automatically propagating top-level\nresults down to other inlined resources.\n**`Result` substitutions will only be made for the `name`, `command`, `args`, `env` and `script` fields of `steps` and `sidecars`.**\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: PipelineRun\nmetadata:\n  name: uid-pipeline-run\nspec:\n  pipelineSpec:\n    tasks:\n    - name: add-uid\n      taskSpec:\n        results:\n          - name: uid\n            type: string\n        steps:\n          - name: add-uid\n            image: busybox\n            command: [\"\/bin\/sh\", \"-c\"]\n            args:\n              - echo \"1001\" | tee $(results.uid.path)\n    - name: show-uid\n      # params:\n      #   - name: uid\n      #     value: $(tasks.add-uid.results.uid)\n      taskSpec:\n        steps:\n          - name: show-uid\n            image: busybox\n            command: [\"\/bin\/sh\", \"-c\"]\n            args:\n              - echo $(tasks.add-uid.results.uid)\n              # - 
echo $(params.uid)\n```\n\nOn executing the `PipelineRun`, the `Results` will be interpolated during resolution.\n\n```yaml\nname:         uid-pipeline-run-show-uid\napiVersion:  tekton.dev\/v1\nkind:         TaskRun\nmetadata:\n  ...\nspec:\n  taskSpec:\n    steps:\n      args:\n        echo 1001\n      command:\n        - \/bin\/sh\n        - -c\n      image:  busybox\n      name:   show-uid\nstatus:\n  completionTime:  2023-09-11T07:34:28Z\n  conditions:\n    lastTransitionTime:  2023-09-11T07:34:28Z\n    message:               All Steps have completed executing\n    reason:                Succeeded\n    status:                True\n    type:                  Succeeded\n  podName:                uid-pipeline-run-show-uid-pod\n  steps:\n    container:  step-show-uid\n    name:       show-uid\n  taskSpec:\n    steps:\n      args:\n        echo 1001\n      command:\n        \/bin\/sh\n        -c\n      computeResources:\n      image:  busybox\n      name:   show-uid\n```\n\n### Specifying a `Pod` template\n\nYou can specify a [`Pod` template](podtemplates.md) configuration that will serve as the configuration starting\npoint for the `Pod` in which the container images specified in your `Tasks` will execute. This allows you to\ncustomize the `Pod` configuration specifically for each `TaskRun`.\n\nIn the following example, the `Task` defines a `volumeMount` object named `my-cache`. 
The `PipelineRun`\nprovisions this object for the `Task` using a `persistentVolumeClaim` and executes it as user 1001.\n\n\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: Task\nmetadata:\n  name: mytask\nspec:\n  steps:\n    - name: writesomething\n      image: ubuntu\n      command: [\"bash\", \"-c\"]\n      args: [\"echo 'foo' > \/my-cache\/bar\"]\n      volumeMounts:\n        - name: my-cache\n          mountPath: \/my-cache\n---\napiVersion: tekton.dev\/v1\nkind: Pipeline\nmetadata:\n  name: mypipeline\nspec:\n  tasks:\n    - name: task1\n      taskRef:\n        name: mytask\n---\napiVersion: tekton.dev\/v1\nkind: PipelineRun\nmetadata:\n  name: mypipelinerun\nspec:\n  pipelineRef:\n    name: mypipeline\n  taskRunTemplate:\n    podTemplate:\n      securityContext:\n        runAsNonRoot: true\n        runAsUser: 1001\n      volumes:\n        - name: my-cache\n          persistentVolumeClaim:\n            claimName: my-volume-claim\n```\n\n\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: mytask\nspec:\n  steps:\n    - name: writesomething\n      image: ubuntu\n      command: [\"bash\", \"-c\"]\n      args: [\"echo 'foo' > \/my-cache\/bar\"]\n      volumeMounts:\n        - name: my-cache\n          mountPath: \/my-cache\n---\napiVersion: tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: mypipeline\nspec:\n  tasks:\n    - name: task1\n      taskRef:\n        name: mytask\n---\napiVersion: tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: mypipelinerun\nspec:\n  pipelineRef:\n    name: mypipeline\n  podTemplate:\n    securityContext:\n      runAsNonRoot: true\n      runAsUser: 1001\n    volumes:\n      - name: my-cache\n        persistentVolumeClaim:\n          claimName: my-volume-claim\n```\n\n\n\n[`Custom tasks`](pipelines.md#using-custom-tasks) may or may not use a pod template.\nConsult the documentation of the custom task that you are using to determine whether it supports a pod template.\n\n### Specifying 
taskRunSpecs\n\nSpecifies a list of `PipelineTaskRunSpec` entries, each of which contains a `PipelineTaskName`, a `TaskServiceAccountName`,\nand a `TaskPodTemplate`. Each spec is mapped to the corresponding `PipelineTask` by name, and that task then runs with the\nconfigured `TaskServiceAccountName` and `TaskPodTemplate`, overriding the pipeline-wide `ServiceAccountName`\nand [`podTemplate`](.\/podtemplates.md) configuration. For example:\n\n```yaml\n# tekton.dev\/v1\nspec:\n  podTemplate:\n    securityContext:\n      runAsUser: 1000\n      runAsGroup: 2000\n      fsGroup: 3000\n  taskRunSpecs:\n    - pipelineTaskName: build-task\n      serviceAccountName: sa-for-build\n      podTemplate:\n        nodeSelector:\n          disktype: ssd\n```\n\n```yaml\n# tekton.dev\/v1beta1\nspec:\n  podTemplate:\n    securityContext:\n      runAsUser: 1000\n      runAsGroup: 2000\n      fsGroup: 3000\n  taskRunSpecs:\n    - pipelineTaskName: build-task\n      taskServiceAccountName: sa-for-build\n      taskPodTemplate:\n        nodeSelector:\n          disktype: ssd\n```\n\nIf used with this `Pipeline`, `build-task` will use the task-specific `PodTemplate` (where `nodeSelector` has `disktype` equal to `ssd`)\nalong with the `securityContext` from `pipelineRun.spec.podTemplate`.\n`PipelineTaskRunSpec` may also contain `StepSpecs` and `SidecarSpecs`; see\n[Overriding `Task` `Steps` and `Sidecars`](.\/taskruns.md#overriding-task-steps-and-sidecars) for more information.\n\nOptional annotations and labels can be added under a `metadata` field for a specific runtime context.\n\ne.g.\n\nRendering needed secrets with Vault:\n\n```yaml\nspec:\n  pipelineRef:\n    name: pipeline-name\n  taskRunSpecs:\n    - pipelineTaskName: task-name\n      metadata:\n        annotations:\n          vault.hashicorp.com\/agent-inject-secret-foo: \"\/path\/to\/foo\"\n          vault.hashicorp.com\/role: role-name\n```\n\nUpdating labels applied in a runtime context:\n\n```yaml\nspec:\n  pipelineRef:\n    name: pipeline-name\n  
taskRunSpecs:\n    - pipelineTaskName: task-name\n      metadata:\n        labels:\n          app: cloudevent\n```\n\nIf a metadata key is present in different levels, the value that will be used in the `PipelineRun` is determined using this precedence order: `PipelineRun.spec.taskRunSpec.metadata` > `PipelineRun.metadata` > `Pipeline.spec.tasks.taskSpec.metadata`.\n\n### Specifying `Workspaces`\n\nIf your `Pipeline` specifies one or more `Workspaces`, you must map those `Workspaces` to\nthe corresponding physical volumes in your `PipelineRun` definition. For example, you\ncan map a `PersistentVolumeClaim` volume to a `Workspace` as follows:\n\n```yaml\nworkspaces:\n  - name: myworkspace # must match workspace name in Task\n    persistentVolumeClaim:\n      claimName: mypvc # this PVC must already exist\n    subPath: my-subdir\n```\n\n`workspaces[].subPath` can be an absolute value or can reference `pipelineRun` context variables, such as,\n`$(context.pipelineRun.name)` or `$(context.pipelineRun.uid)`.\n\nYou can pass in extra `Workspaces` if needed depending on your use cases. An example use\ncase is when your CI system autogenerates `PipelineRuns` and it has `Workspaces` it wants to\nprovide to all `PipelineRuns`. 
Because you can pass in extra `Workspaces`, you don't have to\ngo through the complexity of checking each `Pipeline` and providing only the required `Workspaces`:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: pipeline\nspec:\n  tasks:\n    - name: task\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: pipelinerun\nspec:\n  pipelineRef:\n    name: pipeline\n  workspaces:\n    - name: unusedworkspace\n      persistentVolumeClaim:\n        claimName: mypvc\n```\n\nFor more information, see the following topics:\n- For information on mapping `Workspaces` to `Volumes`, see [Specifying `Workspaces` in `PipelineRuns`](workspaces.md#specifying-workspaces-in-pipelineruns).\n- For a list of supported `Volume` types, see [Specifying `VolumeSources` in `Workspaces`](workspaces.md#specifying-volumesources-in-workspaces).\n- For an end-to-end example, see [`Workspaces` in a `PipelineRun`](..\/examples\/v1\/pipelineruns\/workspaces.yaml).\n\n[`Custom tasks`](pipelines.md#using-custom-tasks) may or may not use workspaces.\nConsult the documentation of the custom task that you are using to determine whether it supports workspaces.\n\n#### Propagated Workspaces\n\nWhen using an embedded spec, workspaces from the parent `PipelineRun` will be\npropagated to any inlined specs without needing to be explicitly defined. 
This\nallows authors to simplify specs by automatically propagating top-level\nworkspaces down to other inlined resources.\n**Workspace substitutions will only be made for `commands`, `args` and `script` fields of `steps`, `stepTemplates`, and `sidecars`.**\n\n```yaml\n# Inline specifications of a PipelineRun\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: recipe-time-\nspec:\n  workspaces:\n    - name: shared-data\n      volumeClaimTemplate:\n        spec:\n          accessModes:\n            - ReadWriteOnce\n          resources:\n            requests:\n              storage: 16Mi\n          volumeMode: Filesystem\n  pipelineSpec:\n    #workspaces:\n    #  - name: shared-data\n    tasks:\n    - name: fetch-secure-data\n      # workspaces:\n      #   - name: shared-data\n      taskSpec:\n        # workspaces:\n        #   - name: shared-data\n        steps:\n        - name: fetch-and-write-secure\n          image: ubuntu\n          script: |\n            echo hi >> $(workspaces.shared-data.path)\/recipe.txt\n    - name: print-the-recipe\n      # workspaces:\n      #   - name: shared-data\n      runAfter:\n        - fetch-secure-data\n      taskSpec:\n        # workspaces:\n        #   - name: shared-data\n        steps:\n        - name: print-secrets\n          image: ubuntu\n          script: cat $(workspaces.shared-data.path)\/recipe.txt\n```\n\nWhen the `PipelineRun` executes, the workspaces are interpolated during resolution.\n\n```yaml\n# Successful execution of the above PipelineRun\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: recipe-time-\n  ...\nspec:\n  pipelineSpec:\n  ...\nstatus:\n  completionTime: \"2022-06-02T18:17:02Z\"\n  conditions:\n  - lastTransitionTime: \"2022-06-02T18:17:02Z\"\n    message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'\n    reason: Succeeded\n    status: \"True\"\n    type: Succeeded\n  pipelineSpec:\n    ...\n 
 childReferences:\n  - name: recipe-time-lslt9-fetch-secure-data\n    pipelineTaskName: fetch-secure-data\n    kind: TaskRun\n  - name: recipe-time-lslt9-print-the-recipe\n    pipelineTaskName: print-the-recipe\n    kind: TaskRun\n```\n\n##### Workspace Referenced Resources\n\n`Workspaces` cannot be propagated to referenced specifications. For example, the following Pipeline will fail when executed because the workspaces defined in the PipelineRun cannot be propagated to the referenced Pipeline.\n\n```yaml\n# PipelineRun attempting to propagate Workspaces to referenced Tasks\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: shared-task-storage\nspec:\n  resources:\n    requests:\n      storage: 16Mi\n  volumeMode: Filesystem\n  accessModes:\n    - ReadWriteOnce\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: fetch-and-print-recipe\nspec:\n  tasks:\n  - name: fetch-the-recipe\n    taskRef:\n      name: fetch-secure-data\n  - name: print-the-recipe\n    taskRef:\n      name: print-data\n    runAfter:\n      - fetch-the-recipe\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: recipe-time-\nspec:\n  pipelineRef:\n    name: fetch-and-print-recipe\n  workspaces:\n  - name: shared-data\n    persistentVolumeClaim:\n      claimName: shared-task-storage\n```\n\nUpon execution, this will cause failures:\n\n```yaml\n# Failed execution of the above PipelineRun\n\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: recipe-time-\n  ...\nspec:\n  pipelineRef:\n    name: fetch-and-print-recipe\n  workspaces:\n  - name: shared-data\n    persistentVolumeClaim:\n      claimName: shared-task-storage\nstatus:\n  completionTime: \"2022-06-02T19:02:58Z\"\n  conditions:\n  - lastTransitionTime: \"2022-06-02T19:02:58Z\"\n    message: 'Tasks Completed: 1 (Failed: 1, Canceled 0), Skipped: 1'\n    reason: Failed\n    status: 
\"False\"\n    type: Succeeded\n  pipelineSpec:\n    ...\n  childReferences:\n  - name: recipe-time-v5scg-fetch-the-recipe\n    pipelineTaskName: fetch-the-recipe\n    kind: TaskRun\n```\n\n#### Referenced TaskRuns within Embedded PipelineRuns\nAs mentioned in the [Workspace Referenced Resources](#workspace-referenced-resources), workspaces can only be propagated from PipelineRuns to embedded Pipeline specs, not Pipeline references. Similarly, workspaces can only be propagated from a Pipeline to embedded Task specs, not referenced Tasks. For example:\n\n```yaml\n# PipelineRun attempting to propagate Workspaces to referenced Tasks\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: fetch-secure-data\nspec:\n  workspaces: # If Referenced, Workspaces need to be explicitly declared\n  - name: shared-data\n  steps:\n  - name: fetch-and-write\n    image: ubuntu\n    script: |\n      echo $(workspaces.shared-data.path)\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: recipe-time-\nspec:\n  workspaces:\n  - name: shared-data\n    persistentVolumeClaim:\n      claimName: shared-task-storage\n  pipelineSpec:\n    # workspaces: # Since this is embedded specs, Workspaces don\u2019t need to be declared\n    #    ...\n    tasks:\n    - name: fetch-the-recipe\n      workspaces: # If referencing resources, Workspaces need to be explicitly declared\n      - name: shared-data\n      taskRef: # Referencing a resource\n        name: fetch-secure-data\n    - name: print-the-recipe\n      # workspaces: # Since this is embedded specs, Workspaces don\u2019t need to be declared\n      #    ...\n      taskSpec:\n        # workspaces: # Since this is embedded specs, Workspaces don\u2019t need to be declared\n        #    ...\n        steps:\n        - name: print-secrets\n          image: ubuntu\n          script: cat $(workspaces.shared-data.path)\/recipe.txt\n      runAfter:\n        - 
fetch-the-recipe\n```\n\nThe above pipelinerun successfully resolves to:\n\n```yaml\n# Successful execution of the above PipelineRun\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: recipe-time-\n  ...\nspec:\n  pipelineSpec:\n    ...\n  workspaces:\n  - name: shared-data\n    persistentVolumeClaim:\n      claimName: shared-task-storage\nstatus:\n  completionTime: \"2022-06-09T18:42:14Z\"\n  conditions:\n  - lastTransitionTime: \"2022-06-09T18:42:14Z\"\n    message: 'Tasks Completed: 2 (Failed: 0, Cancelled 0), Skipped: 0'\n    reason: Succeeded\n    status: \"True\"\n    type: Succeeded\n  pipelineSpec:\n    ...\n  childReferences:\n  - name: recipe-time-pj6l7-fetch-the-recipe\n    pipelineTaskName: fetch-the-recipe\n    kind: TaskRun\n  - name: recipe-time-pj6l7-print-the-recipe\n    pipelineTaskName: print-the-recipe\n    kind: TaskRun\n```\n\n### Specifying `LimitRange` values\n\nIn order to only consume the bare minimum amount of resources needed to execute one `Step` at a\ntime from the invoked `Task`, Tekton will request the compute values for CPU, memory, and ephemeral\nstorage for each `Step` based on the [`LimitRange`](https:\/\/kubernetes.io\/docs\/concepts\/policy\/limit-range\/)\nobject(s), if present. Any `Request` or `Limit` specified by the user (on `Task` for example) will be left unchanged.\n\nFor more information, see the [`LimitRange` support in Pipeline](.\/compute-resources.md#limitrange-support).\n\n### Configuring a failure timeout\n\nYou can use the `timeouts` field to set the `PipelineRun's` desired timeout value in minutes.\nThere are three sub-fields:\n- `pipeline`: specifies the timeout for the entire PipelineRun. 
Defaults to the global configurable default timeout of 60 minutes.\nWhen `timeouts.pipeline` has elapsed, any running child TaskRuns will be canceled, regardless of whether they are normal Tasks\nor `finally` Tasks, and the PipelineRun will fail.\n- `tasks`: specifies the timeout for the cumulative time taken by non-`finally` Tasks specified in `pipeline.spec.tasks`.\nTo specify a timeout for an individual Task, use `pipeline.spec.tasks[].timeout`.\nWhen `timeouts.tasks` has elapsed, any running child TaskRuns will be canceled, finally Tasks will run if `timeouts.finally` is specified,\nand the PipelineRun will fail.\n- `finally`: the timeout for the cumulative time taken by `finally` Tasks specified in `pipeline.spec.finally`.\n(Since all `finally` Tasks run in parallel, this is functionally equivalent to the timeout for any `finally` Task.)\nWhen `timeouts.finally` has elapsed, any running `finally` TaskRuns will be canceled,\nand the PipelineRun will fail.\n\nFor example:\n\n```yaml\ntimeouts:\n  pipeline: \"0h0m60s\"\n  tasks: \"0h0m40s\"\n  finally: \"0h0m20s\"\n```\n\nAll three sub-fields are optional, and will be automatically processed according to the following constraint:\n* `timeouts.pipeline >= timeouts.tasks + timeouts.finally`\n\nEach `timeout` field is a `duration` conforming to Go's\n[`ParseDuration`](https:\/\/golang.org\/pkg\/time\/#ParseDuration) format. For example, valid\nvalues are `1h30m`, `1h`, `1m`, and `60s`.\n\nIf any of the sub-fields are set to \"0\", there is no timeout for that section of the PipelineRun,\nmeaning that it will run until it completes successfully or encounters an error.\nTo set `timeouts.tasks` or `timeouts.finally` to \"0\", you must also set `timeouts.pipeline` to \"0\".\n\nThe global default timeout is set to 60 minutes when you first install Tekton.\n
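To illustrate the duration format and the sum constraint, here is a minimal Go sketch (not Tekton's actual validation code; the function name and error messages are invented for illustration) that parses the three sub-fields with `time.ParseDuration` and checks `timeouts.pipeline >= timeouts.tasks + timeouts.finally`:

```go
package main

import (
	"fmt"
	"time"
)

// validateTimeouts is an illustrative sketch of the constraint on the three
// timeouts sub-fields. Each argument is a Go duration string such as "1h30m",
// "60s", or "0" (meaning "no timeout" for that section).
func validateTimeouts(pipeline, tasks, finally string) error {
	p, err := time.ParseDuration(pipeline)
	if err != nil {
		return fmt.Errorf("invalid timeouts.pipeline %q: %w", pipeline, err)
	}
	t, err := time.ParseDuration(tasks)
	if err != nil {
		return fmt.Errorf("invalid timeouts.tasks %q: %w", tasks, err)
	}
	f, err := time.ParseDuration(finally)
	if err != nil {
		return fmt.Errorf("invalid timeouts.finally %q: %w", finally, err)
	}
	// "0" disables the pipeline-level timeout, so the sum constraint only
	// applies when timeouts.pipeline is actually set.
	if p > 0 && t+f > p {
		return fmt.Errorf("timeouts.tasks (%v) + timeouts.finally (%v) exceeds timeouts.pipeline (%v)", t, f, p)
	}
	return nil
}

func main() {
	// The example above: 60s >= 40s + 20s, so this passes.
	fmt.Println(validateTimeouts("0h0m60s", "0h0m40s", "0h0m20s")) // <nil>
	// 30s + 40s > 60s violates the constraint and returns an error.
	fmt.Println(validateTimeouts("60s", "30s", "40s"))
}
```

Because `"0"` disables the timeout for a section, the sketch skips the sum check when `timeouts.pipeline` is zero, mirroring the rule that a zero `timeouts.tasks` or `timeouts.finally` requires a zero `timeouts.pipeline`.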
You can set\na different global default timeout value using the `default-timeout-minutes` field in\n[`config\/config-defaults.yaml`](.\/..\/config\/config-defaults.yaml).\n\nExample `timeouts` usages are as follows:\n\nCombination 1: Set the timeout for the entire `pipeline` and reserve a portion of it for `tasks`.\n\n```yaml\nkind: PipelineRun\nspec:\n  timeouts:\n    pipeline: \"0h4m0s\"\n    tasks: \"0h1m0s\"\n```\n\nCombination 2: Set the timeout for the entire `pipeline` and reserve a portion of it for `finally`.\n\n```yaml\nkind: PipelineRun\nspec:\n  timeouts:\n    pipeline: \"0h4m0s\"\n    finally: \"0h3m0s\"\n```\n\nCombination 3: Set only a `tasks` timeout, with no timeout for the entire `pipeline`.\n\n```yaml\nkind: PipelineRun\nspec:\n  timeouts:\n    pipeline: \"0\"  # No timeout\n    tasks: \"0h3m0s\"\n```\n\nCombination 4: Set only a `finally` timeout, with no timeout for the entire `pipeline`.\n\n```yaml\nkind: PipelineRun\nspec:\n  timeouts:\n    pipeline: \"0\"  # No timeout\n    finally: \"0h3m0s\"\n```\n\nYou can also use the *deprecated* `timeout` field to set the `PipelineRun`'s desired timeout value in minutes.\nIf you do not specify this value in the `PipelineRun`, the global default timeout value applies.\nIf you set the timeout to 0, the `PipelineRun` fails immediately upon encountering an error.\n\n> :warning: **`timeout` is deprecated and will be removed in future versions. Consider using `timeouts` instead.**\n\n> :note: An internal detail of the `PipelineRun` and `TaskRun` reconcilers in the Tekton controller is that, under certain conditions, they will requeue a `PipelineRun` or `TaskRun` for re-evaluation rather than wait for the next update.\n
The wait time for that re-queueing is the timeout minus the elapsed time; however, if the timeout is set to '0', that calculation produces a negative number, the new reconciliation event fires immediately, and overall performance suffers, which is counter to the intent of the wait-time calculation. So instead, the reconcilers use the configured global timeout as the wait time when the associated timeout has been set to '0'.\n\n## `PipelineRun` status\n\n### The `status` field\n\nYour `PipelineRun`'s `status` field can contain the following fields:\n- Required:\n  <!-- wokeignore:rule=master -->\n  - `status` - Most relevant is `status.conditions`, which contains the latest observations of the `PipelineRun`'s state. [See here](https:\/\/github.com\/kubernetes\/community\/blob\/master\/contributors\/devel\/sig-architecture\/api-conventions.md#typical-status-properties) for information on typical status properties.\n  - `startTime` - The time at which the `PipelineRun` began executing, in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339) format.\n  - `completionTime` - The time at which the `PipelineRun` finished executing, in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339) format.\n  - [`pipelineSpec`](pipelines.md#configuring-a-pipeline) - The exact `PipelineSpec` used when starting the `PipelineRun`.\n- Optional:\n  - [`pipelineResults`](pipelines.md#emitting-results-from-a-pipeline) - Results emitted by this `PipelineRun`.\n  - `skippedTasks` - A list of `Task`s which were skipped when running this `PipelineRun` due to [when expressions](pipelines.md#guard-task-execution-using-when-expressions), including the when expressions applying to the skipped task.\n  - `childReferences` - A list of references to each `TaskRun` or `Run` in this `PipelineRun`, which can be used to look up the status of the underlying `TaskRun` or `Run`.\n
Each entry contains the following:\n    - [`kind`][kubernetes-overview] - Generally either `TaskRun` or `Run`.\n    - [`apiVersion`][kubernetes-overview] - The API version for the underlying `TaskRun` or `Run`.\n    - [`whenExpressions`](pipelines.md#guard-task-execution-using-when-expressions) - The list of when expressions guarding the execution of this task.\n  - `provenance` - Metadata about the runtime configuration and the resources used in the PipelineRun. The data in the `provenance` field will be recorded into the build provenance by the provenance generator (e.g. Tekton Chains). Currently, there are two subfields:\n    - `refSource`: the source from which a remote pipeline definition was fetched.\n    - `featureFlags`: the configuration data of the `feature-flags` configmap.\n  - `finallyStartTime` - The time at which the PipelineRun's `finally` Tasks, if any, began\n  executing, in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339) format.\n\n### Monitoring execution status\n\nAs your `PipelineRun` executes, its `status` field accumulates information on the execution of each `TaskRun`\nas well as the `PipelineRun` as a whole.\n
This information includes the name of the pipeline `Task` associated\nwith a `TaskRun`, the complete [status of the `TaskRun`](taskruns.md#monitoring-execution-status) and details\nabout `whenExpressions` that may be associated with a `TaskRun`.\n\nThe following example shows an extract from the `status` field of a `PipelineRun` that has executed successfully:\n\n```yaml\ncompletionTime: \"2020-05-04T02:19:14Z\"\nconditions:\n  - lastTransitionTime: \"2020-05-04T02:19:14Z\"\n    message: \"Tasks Completed: 4, Skipped: 0\"\n    reason: Succeeded\n    status: \"True\"\n    type: Succeeded\nstartTime: \"2020-05-04T02:00:11Z\"\nchildReferences:\n- name: triggers-release-nightly-frwmw-build\n  pipelineTaskName: build\n  kind: TaskRun\n```\n\nThe following table shows how to read the overall status of a `PipelineRun`.\nCompletion time is set once a `PipelineRun` reaches status `True` or `False`:\n\n`status` | `reason`           | `completionTime` is set |                                                                           Description\n:--------|:-------------------|:-----------------------:|-------------------------------------------------------------------------------------:\nUnknown  | Started            |           No            |                          The `PipelineRun` has just been picked up by the controller.\nUnknown  | Running            |           No            |                 The `PipelineRun` has been validated and started to perform its work.\nUnknown  | Cancelled          |           No            | The user requested the PipelineRun to be cancelled. Cancellation has not completed yet.\nTrue     | Succeeded          |           Yes           |                                             The `PipelineRun` completed successfully.\nTrue     | Completed          |           Yes           |             The `PipelineRun` completed successfully, one or more Tasks were skipped.\nFalse    | Failed             |           Yes           |                        The `PipelineRun` failed because one of the `TaskRuns` failed.\nFalse    | \[Error message\]  |           Yes           |                 The `PipelineRun` failed with a permanent error (usually validation).\nFalse    | Cancelled          |           Yes           |                                         The `PipelineRun` was cancelled successfully.\nFalse    | PipelineRunTimeout |           Yes           |                                                          The `PipelineRun` timed out.\nFalse    | CreateRunFailed    |           Yes           |                              The `PipelineRun` failed to create its child resources.\n\nWhen a `PipelineRun` changes status, [events](events.md#pipelineruns) are triggered accordingly.\n\nWhen a `PipelineRun` has `Tasks` that were `skipped`, the `reason` for skipping the task will be listed in the `Skipped Tasks` section of the `status` of the `PipelineRun`.\n\nWhen a `PipelineRun` has `Tasks` with [`when` expressions](pipelines.md#guard-task-execution-using-when-expressions):\n- If the `when` expressions evaluate to `true`, the `Task` is executed, and the `TaskRun` and its resolved `when` expressions will be listed in the `Task Runs` section of the `status` of the `PipelineRun`.\n- If the `when` expressions evaluate to `false`, the `Task` is skipped, and its name and its resolved `when` expressions will be listed in the `Skipped Tasks` section of the `status` of the `PipelineRun`.\n\n```yaml\nConditions:\n  Last Transition Time:  2020-08-27T15:07:34Z\n  Message:               Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 1\n  Reason:                Completed\n  Status:                True\n  Type:                  Succeeded\nSkipped Tasks:\n  Name:       skip-this-task\n  Reason:     When Expressions evaluated to false\n  When Expressions:\n    Input:     foo\n    Operator:  in\n    Values:\n      bar\n    Input:     foo\n    Operator:  notin\n    Values:\n      foo\nChildReferences:\n- Name: pipelinerun-to-skip-task-run-this-task\n  Pipeline Task Name:  run-this-task\n  Kind: TaskRun\n```\n\nThe names of the `TaskRuns` and `Runs` owned by a `PipelineRun` are uniquely associated with the owning resource.\nIf a `PipelineRun` resource is deleted and created with the same name, the child `TaskRuns` will be created with the\nsame name as before. The base format of the name is `<pipelinerun-name>-<pipelinetask-name>`. If the `PipelineTask`\nhas a `Matrix`, the name will have an int suffix with format `<pipelinerun-name>-<pipelinetask-name>-<combination-id>`.\nThe name may vary according to the logic of [`kmeta.ChildName`](https:\/\/pkg.go.dev\/github.com\/knative\/pkg\/kmeta#ChildName).\n\nSome examples:\n\n| `PipelineRun` Name                                       | `PipelineTask` Name                                          | `TaskRun` Names                                                                        |\n|----------------------------------------------------------|--------------------------------------------------------------|----------------------------------------------------------------------------------------|\n| pipeline-run                                             | task1                                                        | pipeline-run-task1                                                                     |\n| pipeline-run                                             | task2-0123456789-0123456789-0123456789-0123456789-0123456789 | pipeline-runee4a397d6eab67777d4e6f9991cd19e6-task2-0                                   |\n| 
pipeline-run-0123456789-0123456789-0123456789-0123456789 | task3                                                        | pipeline-run-0123456789-0123456789-0123456789-0123456789-task3                         |\n| pipeline-run-0123456789-0123456789-0123456789-0123456789 | task2-0123456789-0123456789-0123456789-0123456789-0123456789 | pipeline-run-0123456789-012345607ad8c7aac5873cdfabe472a68996b5c                        |\n| pipeline-run                                             | task4 (with 2x2 `Matrix`)                                    | pipeline-run-task4-0, pipeline-run-task4-1, pipeline-run-task4-2, pipeline-run-task4-3 |\n\n### Marking off user errors\n\nA user error in Tekton is any mistake made by a user, such as a syntax error when specifying pipelines or tasks. User errors can occur at various stages of the Tekton pipeline, from authoring the pipeline configuration to executing the pipelines. They are currently explicitly labeled in the Run's conditions message, for example:\n\n```yaml\n# Failed PipelineRun with message labeled \"[User error]\"\napiVersion: tekton.dev\/v1\nkind: PipelineRun\nmetadata:\n  ...\nspec:\n  ...\nstatus:\n  ...\n  conditions:\n  - lastTransitionTime: \"2022-06-02T19:02:58Z\"\n    message: '[User error] PipelineRun default parameters is missing some parameters required by\n      Pipeline pipelinerun-with-params''s parameters: pipelineRun missing parameters:\n      [pl-param-x]'\n    reason: 'ParameterMissing'\n    status: \"False\"\n    type: Succeeded\n```\n\n```console\n~\/pipeline$ tkn pr list\nNAME                      STARTED         DURATION   STATUS\npipelinerun-with-params   5 seconds ago   0s         Failed(ParameterMissing)\n```\n\n## Cancelling a `PipelineRun`\n\nTo cancel a `PipelineRun` that's currently executing, update its definition\nto mark it as \"Cancelled\".\n
When you do so, the spawned `TaskRuns` are also marked\nas cancelled, all associated `Pods` are deleted, and their `Retries` are not executed.\nPending `finally` tasks are not scheduled.\n\nFor example:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: go-example-git\nspec:\n  # [\u2026]\n  status: \"Cancelled\"\n```\n\n## Gracefully cancelling a `PipelineRun`\n\nTo gracefully cancel a `PipelineRun` that's currently executing, update its definition\nto mark it as \"CancelledRunFinally\". When you do so, the spawned `TaskRuns` are also marked\nas cancelled, all associated `Pods` are deleted, and their `Retries` are not executed.\n`finally` tasks are scheduled normally.\n\nFor example:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: go-example-git\nspec:\n  # [\u2026]\n  status: \"CancelledRunFinally\"\n```\n\n\n## Gracefully stopping a `PipelineRun`\n\nTo gracefully stop a `PipelineRun` that's currently executing, update its definition\nto mark it as \"StoppedRunFinally\". When you do so, the spawned `TaskRuns` are completed normally,\nincluding executing their `retries`, but no new non-`finally` task is scheduled. 
`finally` tasks are executed afterwards.\nFor example:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: go-example-git\nspec:\n  # [\u2026]\n  status: \"StoppedRunFinally\"\n```\n\n## Pending `PipelineRuns`\n\nA `PipelineRun` can be created as a \"pending\" `PipelineRun`, meaning that it will not actually be started until the pending status is cleared.\n\nNote that a `PipelineRun` can only be marked \"pending\" before it has started; this setting is invalid after the `PipelineRun` has been started.\n\nTo mark a `PipelineRun` as pending, set `.spec.status` to `PipelineRunPending` when creating it:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: go-example-git\nspec:\n  # [\u2026]\n  status: \"PipelineRunPending\"\n```\n\nTo start the `PipelineRun`, clear the `.spec.status` field. Alternatively, update the value to `Cancelled` to cancel it.\n\n---\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).
Object Parameters   object parameters         Specifying custom  code ServiceAccount  code  credentials   specifying custom serviceaccount credentials         Mapping  code ServiceAccount  code  credentials to  code Tasks  code    mapping serviceaccount credentials to tasks         Specifying a  code Pod  code  template   specifying a pod template         Specifying taskRunSpecs   specifying taskrunspecs         Specifying  code Workspaces  code    specifying workspaces           Propagated Workspaces   propagated workspaces             Referenced TaskRuns within Embedded PipelineRuns   referenced taskruns within embedded pipelineruns         Specifying  code LimitRange  code  values   specifying limitrange values         Configuring a failure timeout   configuring a failure timeout        code PipelineRun  code  status   pipelinerun status         The  code status  code  field   the status field         Monitoring execution status   monitoring execution status         Marking off user errors   marking off user errors       Cancelling a  code PipelineRun  code    cancelling a pipelinerun       Gracefully cancelling a  code PipelineRun  code    gracefully cancelling a pipelinerun       Gracefully stopping a  code PipelineRun  code    gracefully stopping a pipelinerun       Pending  code PipelineRuns  code    pending pipelineruns        toc          Overview  A  PipelineRun  allows you to instantiate and execute a   Pipeline   pipelines md  on cluster  A  Pipeline  specifies one or more  Tasks  in the desired order of execution  A  PipelineRun  executes the  Tasks  in the  Pipeline  in the order they are specified until all  Tasks  have executed successfully or a failure occurs     Note    A  PipelineRun  automatically creates corresponding  TaskRuns  for every  Task  in your  Pipeline    The  Status  field tracks the current state of a  PipelineRun   and can be used to monitor progress  This field contains the status of every  TaskRun   as well as the full  
PipelineSpec  used to instantiate this  PipelineRun   for full auditability      Configuring a  PipelineRun   A  PipelineRun  definition supports the following fields     Required        apiVersion   kubernetes overview    Specifies the API version  For example      tekton dev v1beta1         kind   kubernetes overview    Indicates that this resource object is a  PipelineRun  object        metadata   kubernetes overview    Specifies the metadata that uniquely identifies the      PipelineRun  object  For example  a  name         spec   kubernetes overview    Specifies the configuration information for     this  PipelineRun  object          pipelineRef  or  pipelineSpec    specifying the target pipeline    Specifies the target   Pipeline   pipelines md     Optional        params    specifying parameters    Specifies the desired execution parameters for the  Pipeline         serviceAccountName    specifying custom serviceaccount credentials    Specifies a  ServiceAccount      object that supplies specific execution credentials for the  Pipeline         status    cancelling a pipelinerun    Specifies options for cancelling a  PipelineRun         taskRunSpecs    specifying taskrunspecs    Specifies a list of  PipelineRunTaskSpec  which allows for setting  ServiceAccountName     Pod  template    podtemplates md   and  Metadata  for each task  This overrides the  Pod  template set for the entire  Pipeline         timeout    configuring a failure timeout    Specifies the timeout before the  PipelineRun  fails   timeout  is deprecated and will eventually be removed  so consider using  timeouts  instead        timeouts    configuring a failure timeout    Specifies the timeout before the  PipelineRun  fails   timeouts  allows more granular timeout configuration  at the pipeline  tasks  and finally levels       podTemplate    specifying a pod template    Specifies a   Pod  template    podtemplates md  to use as the basis for the configuration of the  Pod  that executes each  
Task         workspaces    specifying workspaces    Specifies a set of workspace bindings which must match the names of workspaces declared in the pipeline being used    kubernetes overview     https   kubernetes io docs concepts overview working with objects kubernetes objects  required fields      Specifying the target  Pipeline   You must specify the target  Pipeline  that you want the  PipelineRun  to execute  either by referencing an existing  Pipeline  definition  or embedding a  Pipeline  definition directly in the  PipelineRun    To specify the target  Pipeline  by reference  use the  pipelineRef  field      yaml spec    pipelineRef      name  mypipeline     To embed a  Pipeline  definition in the  PipelineRun   use the  pipelineSpec  field      yaml spec    pipelineSpec      tasks          name  task1         taskRef            name  mytask      The  Pipeline  in the   pipelineSpec  example     examples v1 pipelineruns pipelinerun with pipelinespec yaml  example displays morning and evening greetings  Once you create and execute it  you can check the logs for its  Pods       bash kubectl logs   kubectl get pods  o name   grep pipelinerun echo greetings echo good morning  Good Morning  Bob   kubectl logs   kubectl get pods  o name   grep pipelinerun echo greetings echo good night  Good Night  Bob       You can also embed a  Task  definition the embedded  Pipeline  definition      yaml spec    pipelineSpec      tasks          name  task1         taskSpec            steps           In the   taskSpec  in  pipelineSpec  example     examples v1 pipelineruns pipelinerun with pipelinespec and taskspec yaml  it s  Tasks  all the way down   You can also specify labels and annotations with  taskSpec  which are propagated to each  taskRun  and then to the respective pods  These labels can be used to identify and filter pods for further actions  such as collecting pod metrics  and cleaning up completed pod with certain labels  etc  even being part of one single 
```yaml
spec:
  pipelineSpec:
    tasks:
      - name: task1
        taskSpec:
          metadata:
            labels:
              pipeline-sdk-type: kfp
          ...
      - name: task2
        taskSpec:
          metadata:
            labels:
              pipeline-sdk-type: tfx
          ...
```

### Tekton Bundles

A `Tekton Bundle` is an OCI artifact that contains Tekton resources like `Tasks` which can be referenced within a `taskRef`.

You can reference a `Tekton bundle` in a `TaskRef` in both `v1` and `v1beta1` using [remote resolution](bundle-resolver.md#pipeline-resolution). The example syntax shown below for `v1` uses remote resolution and requires enabling [beta features](additional-configs.md#beta-features).

```yaml
spec:
  pipelineRef:
    resolver: bundles
    params:
      - name: bundle
        value: docker.io/myrepo/mycatalog:v1.0
      - name: name
        value: mypipeline
      - name: kind
        value: Pipeline
```

The syntax and caveats are similar to using `Tekton Bundles` for [`Task` references in `Pipelines`](pipelines.md#tekton-bundles) or [`TaskRuns`](taskruns.md#tekton-bundles).

`Tekton Bundles` may be constructed with any toolsets that produce valid OCI image artifacts, so long as the artifact adheres to the [contract](tekton-bundle-contracts.md).

### Remote Pipelines

([beta feature](https://github.com/tektoncd/pipeline/blob/main/docs/install.md#beta-features))

A `pipelineRef` field may specify a Pipeline in a remote location such as git. Support for specific types of remote will depend on the Resolvers your cluster's operator has installed. For more information, including a tutorial, please check the [resolution docs](resolution.md). The example below demonstrates referencing a Pipeline in git:

```yaml
spec:
  pipelineRef:
    resolver: git
    params:
      - name: url
        value: https://github.com/tektoncd/catalog.git
      - name: revision
        value: abc123
      - name: pathInRepo
        value: /pipeline/buildpacks/0.1/buildpacks.yaml
```
## Specifying Task-level `ComputeResources`

([alpha only](https://github.com/tektoncd/pipeline/blob/main/docs/additional-configs.md#alpha-features))

Task-level compute resources can be configured in `PipelineRun.TaskRunSpecs.ComputeResources` or `TaskRun.ComputeResources`.

e.g.

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline
spec:
  tasks:
    - name: task
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipelinerun
spec:
  pipelineRef:
    name: pipeline
  taskRunSpecs:
    - pipelineTaskName: task
      computeResources:
        requests:
          cpu: 2
```

Further details and examples can be found in [Compute Resources in Tekton](https://github.com/tektoncd/pipeline/blob/main/docs/compute-resources.md).

## Specifying `Parameters`

(See also [Specifying Parameters in Tasks](tasks.md#specifying-parameters).)

You can specify `Parameters` that you want to pass to the `Pipeline` during execution, including different values of the same parameter for different `Tasks` in the `Pipeline`.

> **Note:** You must specify all the `Parameters` that the `Pipeline` expects. Parameters that have default values specified in the `Pipeline` are not required to be provided by the `PipelineRun`.

For example:

```yaml
spec:
  params:
    - name: pl-param-x
      value: "100"
    - name: pl-param-y
      value: "500"
```

You can pass in extra `Parameters` if needed depending on your use cases. An example use case is when your CI system autogenerates `PipelineRuns` and it has `Parameters` it wants to provide to all `PipelineRuns`. Because you can pass in extra `Parameters`, you don't have to go through the complexity of checking each `Pipeline` and providing only the required params.

### Parameter Enums

> :seedling: **`enum` is an [alpha](additional-configs.md#alpha-features) feature.** The `enable-param-enum` feature flag must be set to `"true"` to enable this feature.
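One way to set the flag is via the `feature-flags` ConfigMap; a minimal sketch, assuming the default `tekton-pipelines` install namespace:

```yaml
# Illustrative fragment only; merge into your existing feature-flags ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  enable-param-enum: "true"
```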
If a `Parameter` is guarded by `Enum` in the `Pipeline`, you can only provide `Parameter` values in the `PipelineRun` that are predefined in the `Param` `Enum` in the `Pipeline`. The `PipelineRun` will fail with reason `InvalidParamValue` otherwise.

Tekton will also validate the `param` values passed to any referenced `Tasks` (via `taskRef`) if `Enum` is specified for the `Task`. The `PipelineRun` will fail with reason `InvalidParamValue` if `Enum` validation fails for any of the `PipelineTasks`.

You can also specify `Enum` in an embedded `Pipeline` in a `PipelineRun`. The same `Param` validation will be executed in this scenario. See more details in [Param Enum](pipelines.md#param-enum).

### Propagated Parameters

When using an inlined spec, parameters from the parent `PipelineRun` will be propagated to any inlined specs without needing to be explicitly defined. This allows authors to simplify specs by automatically propagating top-level parameters down to other inlined resources.

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pr-echo-
spec:
  params:
    - name: HELLO
      value: "Hello World!"
    - name: BYE
      value: "Bye World!"
  pipelineSpec:
    tasks:
      - name: echo-hello
        taskSpec:
          steps:
            - name: echo
              image: ubuntu
              script: |
                #!/usr/bin/env bash
                echo "$(params.HELLO)"
      - name: echo-bye
        taskSpec:
          steps:
            - name: echo
              image: ubuntu
              script: |
                #!/usr/bin/env bash
                echo "$(params.BYE)"
```

On executing the pipeline run, the parameters will be interpolated during resolution. The specifications are not mutated before storage, so they remain the same. The status is updated:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pr-echo-szzs9
  ...
spec:
  params:
    - name: HELLO
      value: "Hello World!"
    - name: BYE
      value: "Bye World!"
  pipelineSpec:
    tasks:
      - name: echo-hello
        taskSpec:
          steps:
            - image: ubuntu
              name: echo
              script: |
                #!/usr/bin/env bash
                echo "$(params.HELLO)"
      - name: echo-bye
        taskSpec:
          steps:
            - image: ubuntu
              name: echo
              script: |
                #!/usr/bin/env bash
                echo "$(params.BYE)"
status:
  conditions:
    - lastTransitionTime: "2022-04-07T12:34:58Z"
      message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
      reason: Succeeded
      status: "True"
      type: Succeeded
  pipelineSpec:
    ...
  childReferences:
    - name: pr-echo-szzs9-echo-hello
      pipelineTaskName: echo-hello
      kind: TaskRun
    - name: pr-echo-szzs9-echo-bye
      pipelineTaskName: echo-bye
      kind: TaskRun
```

#### Scope and Precedence

When `Parameter` names conflict, the inner scope takes precedence, as shown in this example:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pr-echo-
spec:
  params:
    - name: HELLO
      value: "Hello World!"
    - name: BYE
      value: "Bye World!"
  pipelineSpec:
    tasks:
      - name: echo-hello
        params:
          - name: HELLO
            value: "Sasa World!"
        taskSpec:
          params:
            - name: HELLO
              type: string
          steps:
            - name: echo
              image: ubuntu
              script: |
                #!/usr/bin/env bash
                echo "$(params.HELLO)"
```

resolves to

```yaml
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pr-echo-szzs9
  ...
spec:
  ...
status:
  conditions:
    - lastTransitionTime: "2022-04-07T12:34:58Z"
      message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
      reason: Succeeded
      status: "True"
      type: Succeeded
  childReferences:
    - name: pr-echo-szzs9-echo-hello
      pipelineTaskName: echo-hello
      kind: TaskRun
      ...
```
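The scope rule can be sketched as a plain string substitution in which task-level values shadow `PipelineRun`-level ones. This is an illustrative sketch of the semantics, not Tekton's actual implementation:

```python
def resolve(template: str, outer: dict, inner: dict) -> str:
    # Inner (PipelineTask-level) params shadow outer (PipelineRun-level)
    # params, mirroring the precedence rule described above.
    scope = {**outer, **inner}
    for name, value in scope.items():
        template = template.replace(f"$(params.{name})", value)
    return template

print(resolve("echo $(params.HELLO)",
              outer={"HELLO": "Hello World!"},
              inner={"HELLO": "Sasa World!"}))
# the inner value wins: echo Sasa World!
```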
#### Default Values

When `Parameter` specifications have default values, the `Parameter` value provided at runtime takes precedence to give users control, as shown in this example:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pr-echo-
spec:
  params:
    - name: HELLO
      value: "Hello World!"
    - name: BYE
      value: "Bye World!"
  pipelineSpec:
    tasks:
      - name: echo-hello
        taskSpec:
          params:
            - name: HELLO
              type: string
              default: "Sasa World!"
          steps:
            - name: echo
              image: ubuntu
              script: |
                #!/usr/bin/env bash
                echo "$(params.HELLO)"
```

resolves to

```yaml
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pr-echo-szzs9
  ...
spec:
  ...
status:
  conditions:
    - lastTransitionTime: "2022-04-07T12:34:58Z"
      message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
      reason: Succeeded
      status: "True"
      type: Succeeded
  childReferences:
    - name: pr-echo-szzs9-echo-hello
      pipelineTaskName: echo-hello
      kind: TaskRun
      ...
```

#### Referenced Resources

When a `PipelineRun` definition has referenced specifications but does not explicitly pass `Parameters`, the `PipelineRun` will be created but the execution will fail because of missing `Parameters`:

```yaml
# Invalid PipelineRun attempting to propagate Parameters to referenced Tasks
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pr-echo-
spec:
  params:
    - name: HELLO
      value: "Hello World!"
    - name: BYE
      value: "Bye World!"
  pipelineSpec:
    tasks:
      - name: echo-hello
        taskRef:
          name: echo-hello
      - name: echo-bye
        taskRef:
          name: echo-bye
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    - name: echo
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        echo "$(params.HELLO)"
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-bye
spec:
  steps:
    - name: echo
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        echo "$(params.BYE)"
```
Fails as follows:

```yaml
# Failed execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pr-echo-24lmf
  ...
spec:
  params:
    - name: HELLO
      value: "Hello World!"
    - name: BYE
      value: "Bye World!"
  pipelineSpec:
    tasks:
      - name: echo-hello
        taskRef:
          kind: Task
          name: echo-hello
      - name: echo-bye
        taskRef:
          kind: Task
          name: echo-bye
status:
  conditions:
    - lastTransitionTime: "2022-04-07T20:24:51Z"
      message: 'invalid input params for task echo-hello: missing values for
        these params which have no default values: [HELLO]'
      reason: PipelineValidationFailed
      status: "False"
      type: Succeeded
```

#### Object Parameters

When using an inlined spec, object parameters from the parent `PipelineRun` will also be propagated to any inlined specs without needing to be explicitly defined. This allows authors to simplify specs by automatically propagating top-level parameters down to other inlined resources. When propagating object parameters, scope and precedence also hold, as shown below.

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipelinerun-object-param-result
spec:
  params:
    - name: gitrepo
      value:
        url: abc.com
        commit: sha123
  pipelineSpec:
    tasks:
      - name: task1
        params:
          - name: gitrepo
            value:
              branch: main
              url: xyz.com
        taskSpec:
          steps:
            - name: write-result
              image: bash
              args:
                - echo
                - url=$(params.gitrepo.url)
                - commit=$(params.gitrepo.commit)
                - branch=$(params.gitrepo.branch)
```
resolves to

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipelinerun-object-param-resultpxp59
  ...
spec:
  params:
    - name: gitrepo
      value:
        commit: sha123
        url: abc.com
  pipelineSpec:
    tasks:
      - name: task1
        params:
          - name: gitrepo
            value:
              branch: main
              url: xyz.com
        taskSpec:
          metadata: {}
          spec: null
          steps:
            - args:
                - echo
                - url=$(params.gitrepo.url)
                - commit=$(params.gitrepo.commit)
                - branch=$(params.gitrepo.branch)
              image: bash
              name: write-result
status:
  completionTime: "2022-09-08T17:22:01Z"
  conditions:
    - lastTransitionTime: "2022-09-08T17:22:01Z"
      message: 'Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0'
      reason: Succeeded
      status: "True"
      type: Succeeded
  pipelineSpec:
    tasks:
      - name: task1
        params:
          - name: gitrepo
            value:
              branch: main
              url: xyz.com
        taskSpec:
          metadata: {}
          spec: null
          steps:
            - args:
                - echo
                - url=xyz.com
                - commit=sha123
                - branch=main
              image: bash
              name: write-result
  startTime: "2022-09-08T17:21:57Z"
  childReferences:
    - name: pipelinerun-object-param-resultpxp59-task1
      pipelineTaskName: task1
      kind: TaskRun
      ...
      taskSpec:
        steps:
          - args:
              - echo
              - url=xyz.com
              - commit=sha123
              - branch=main
            image: bash
            name: write-result
```
## Specifying custom `ServiceAccount` credentials

You can execute the `Pipeline` in your `PipelineRun` with a specific set of credentials by specifying a `ServiceAccount` object name in the `serviceAccountName` field in your `PipelineRun` definition. If you do not explicitly specify this, the `TaskRuns` created by your `PipelineRun` will execute with the credentials specified in the `configmap-defaults` `ConfigMap`. If this default is not specified, the `TaskRuns` will execute with the [`default` service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) set for the target [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).

For more information, see [`ServiceAccount`](auth.md).

> **Note:** [Custom tasks](pipelines.md#using-custom-tasks) may or may not use a service account name. Consult the documentation of the custom task that you are using to determine whether it supports a service account name.

### Mapping `ServiceAccount` credentials to `Tasks`

If you require more granularity in specifying execution credentials, use the `taskRunSpecs[].taskServiceAccountName` field to map a specific `serviceAccountName` value to a specific `Task` in the `Pipeline`. This overrides the global `serviceAccountName` you may have set for the `Pipeline` as described in the previous section.

For example, if you specify these mappings:

```yaml
spec:
  taskRunTemplate:
    serviceAccountName: sa-1
  taskRunSpecs:
    - pipelineTaskName: build-task
      serviceAccountName: sa-for-build
```

```yaml
spec:
  serviceAccountName: sa-1
  taskRunSpecs:
    - pipelineTaskName: build-task
      taskServiceAccountName: sa-for-build
```

for this `Pipeline`:

```yaml
kind: Pipeline
spec:
  tasks:
    - name: build-task
      taskRef:
        name: build-push
    - name: test-task
      taskRef:
        name: test
```

then:
`test-task` will execute using the `sa-1` account while `build-task` will execute with `sa-for-build`.

### Propagated Results

When using an embedded spec, `Results` from the parent `PipelineRun` will be propagated to any inlined specs without needing to be explicitly defined. This allows authors to simplify specs by automatically propagating top-level results down to other inlined resources. `Result` substitutions will only be made for `name`, `commands`, `args`, `env` and `script` fields of `steps` and `sidecars`.

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: uid-pipeline-run
spec:
  pipelineSpec:
    tasks:
      - name: add-uid
        taskSpec:
          results:
            - name: uid
              type: string
          steps:
            - name: add-uid
              image: busybox
              command: ["/bin/sh", "-c"]
              args:
                - echo "1001" | tee $(results.uid.path)
      - name: show-uid
        params:
          - name: uid
            value: $(tasks.add-uid.results.uid)
        taskSpec:
          steps:
            - name: show-uid
              image: busybox
              command: ["/bin/sh", "-c"]
              args:
                - |
                  echo $(tasks.add-uid.results.uid)
                  echo $(params.uid)
```

On executing the `PipelineRun`, the `Results` will be interpolated during resolution:

```yaml
name: uid-pipeline-run-show-uid
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  ...
spec:
  taskSpec:
    steps:
      - args:
          - echo 1001
        command:
          - /bin/sh
          - -c
        image: busybox
        name: show-uid
status:
  completionTime: "2023-09-11T07:34:28Z"
  conditions:
    - lastTransitionTime: "2023-09-11T07:34:28Z"
      message: All Steps have completed executing
      reason: Succeeded
      status: "True"
      type: Succeeded
  podName: uid-pipeline-run-show-uid-pod
  steps:
    - container: step-show-uid
      name: show-uid
```
```yaml
  taskSpec:
    steps:
      - args:
          - echo 1001
        command:
          - /bin/sh
          - -c
        computeResources: {}
        image: busybox
        name: show-uid
```

## Specifying a `Pod` template

You can specify a [`Pod` template](podtemplates.md) configuration that will serve as the configuration starting point for the `Pod` in which the container images specified in your `Tasks` will execute. This allows you to customize the `Pod` configuration specifically for each `TaskRun`.

In the following example, the `Task` defines a `volumeMount` object named `my-cache`. The `PipelineRun` provisions this object for the `Task` using a `persistentVolumeClaim` and executes it as user 1001.

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: mytask
spec:
  steps:
    - name: writesomething
      image: ubuntu
      command: ["bash", "-c"]
      args: ["echo 'foo' > /my-cache/bar"]
      volumeMounts:
        - name: my-cache
          mountPath: /my-cache
---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: mypipeline
spec:
  tasks:
    - name: task1
      taskRef:
        name: mytask
---
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: mypipelinerun
spec:
  pipelineRef:
    name: mypipeline
  taskRunTemplate:
    podTemplate:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
      volumes:
        - name: my-cache
          persistentVolumeClaim:
            claimName: my-volume-claim
```

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: mytask
spec:
  steps:
    - name: writesomething
      image: ubuntu
      command: ["bash", "-c"]
      args: ["echo 'foo' > /my-cache/bar"]
      volumeMounts:
        - name: my-cache
          mountPath: /my-cache
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: mypipeline
spec:
  tasks:
    - name: task1
      taskRef:
        name: mytask
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: mypipelinerun
spec:
  pipelineRef:
    name: mypipeline
  podTemplate:
    securityContext:
      runAsNonRoot: true
      runAsUser: 1001
    volumes:
      - name: my-cache
        persistentVolumeClaim:
          claimName: my-volume-claim
```
> **Note:** [Custom tasks](pipelines.md#using-custom-tasks) may or may not use a pod template. Consult the documentation of the custom task that you are using to determine whether it supports a pod template.

## Specifying taskRunSpecs

Specifies a list of `PipelineTaskRunSpec` which contains `TaskServiceAccountName`, `TaskPodTemplate` and `PipelineTaskName`. By mapping the specs to the corresponding `Task` based upon the `TaskName`, a `PipelineTask` will run with the configured `TaskServiceAccountName` and `TaskPodTemplate`, overwriting the pipeline-wide `ServiceAccountName` and [`podTemplate`](podtemplates.md) configuration. For example:

```yaml
spec:
  podTemplate:
    securityContext:
      runAsUser: 1000
      runAsGroup: 2000
      fsGroup: 3000
  taskRunSpecs:
    - pipelineTaskName: build-task
      serviceAccountName: sa-for-build
      podTemplate:
        nodeSelector:
          disktype: ssd
```

```yaml
spec:
  podTemplate:
    securityContext:
      runAsUser: 1000
      runAsGroup: 2000
      fsGroup: 3000
  taskRunSpecs:
    - pipelineTaskName: build-task
      taskServiceAccountName: sa-for-build
      taskPodTemplate:
        nodeSelector:
          disktype: ssd
```

If used with this `Pipeline`, `build-task` will use the task-specific `PodTemplate` (where `nodeSelector` has `disktype` equal to `ssd`), along with the `securityContext` from the `pipelineRun.spec.podTemplate`.

`PipelineTaskRunSpec` may also contain `StepSpecs` and `SidecarSpecs`; see [Overriding `Task` `Steps` and `Sidecars`](taskruns.md#overriding-task-steps-and-sidecars) for more information.

Optional annotations and labels can be added under a `Metadata` field for a specific running context, e.g.:

Rendering needed secrets with Vault:
```yaml
spec:
  pipelineRef:
    name: pipeline-name
  taskRunSpecs:
    - pipelineTaskName: task-name
      metadata:
        annotations:
          vault.hashicorp.com/agent-inject-secret-foo: /path/to/foo
          vault.hashicorp.com/role: role-name
```

Updating labels applied in a runtime context:

```yaml
spec:
  pipelineRef:
    name: pipeline-name
  taskRunSpecs:
    - pipelineTaskName: task-name
      metadata:
        labels:
          app: cloudevent
```

If a metadata key is present at different levels, the value that will be used in the `PipelineRun` is determined using this precedence order: `PipelineRun.spec.taskRunSpec.metadata` > `PipelineRun.metadata` > `Pipeline.spec.tasks.taskSpec.metadata`.

## Specifying `Workspaces`

If your `Pipeline` specifies one or more `Workspaces`, you must map those `Workspaces` to the corresponding physical volumes in your `PipelineRun` definition. For example, you can map a `PersistentVolumeClaim` volume to a `Workspace` as follows:

```yaml
workspaces:
  - name: myworkspace # must match workspace name in Task
    persistentVolumeClaim:
      claimName: mypvc # this PVC must already exist
    subPath: my-subdir
```

A workspace's `subPath` can be an absolute value or can reference `pipelineRun` context variables, such as `$(context.pipelineRun.name)` or `$(context.pipelineRun.uid)`.

You can pass in extra `Workspaces` if needed depending on your use cases. An example use case is when your CI system autogenerates `PipelineRuns` and it has `Workspaces` it wants to provide to all `PipelineRuns`. Because you can pass in extra `Workspaces`, you don't have to go through the complexity of checking each `Pipeline` and providing only the required `Workspaces`.

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline
spec:
  tasks:
    - name: task
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipelinerun
spec:
  pipelineRef:
    name: pipeline
  workspaces:
    - name: unusedworkspace
      persistentVolumeClaim:
        claimName: mypvc
```
For more information, see the following topics:

- For information on mapping `Workspaces` to `Volumes`, see [Specifying `Workspaces` in `PipelineRuns`](workspaces.md#specifying-workspaces-in-pipelineruns).
- For a list of supported `Volume` types, see [Specifying `VolumeSources` in `Workspaces`](workspaces.md#specifying-volumesources-in-workspaces).
- For an end-to-end example, see [`Workspaces` in a `PipelineRun`](examples/v1/pipelineruns/workspaces.yaml).

> **Note:** [Custom tasks](pipelines.md#using-custom-tasks) may or may not use workspaces. Consult the documentation of the custom task that you are using to determine whether it supports workspaces.

### Propagated Workspaces

When using an embedded spec, workspaces from the parent `PipelineRun` will be propagated to any inlined specs without needing to be explicitly defined. This allows authors to simplify specs by automatically propagating top-level workspaces down to other inlined resources. Workspace substitutions will only be made for `commands`, `args` and `script` fields of `steps`, `stepTemplates`, and `sidecars`.

```yaml
# Inline specifications of a PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: recipe-time-
spec:
  workspaces:
    - name: shared-data
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 16Mi
          volumeMode: Filesystem
  pipelineSpec:
    workspaces:
      - name: shared-data
    tasks:
      - name: fetch-secure-data
        workspaces:
          - name: shared-data
        taskSpec:
          workspaces:
            - name: shared-data
          steps:
            - name: fetch-and-write-secure
              image: ubuntu
              script: |
                echo hi >> $(workspaces.shared-data.path)/recipe.txt
      - name: print-the-recipe
        workspaces:
          - name: shared-data
        runAfter:
          - fetch-secure-data
        taskSpec:
          workspaces:
            - name: shared-data
          steps:
            - name: print-secrets
              image: ubuntu
              script: cat $(workspaces.shared-data.path)/recipe.txt
```
On executing the pipeline run, the workspaces will be interpolated during resolution:

```yaml
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: recipe-time-
  ...
spec:
  pipelineSpec:
    ...
status:
  completionTime: "2022-06-02T18:17:02Z"
  conditions:
    - lastTransitionTime: "2022-06-02T18:17:02Z"
      message: 'Tasks Completed: 2 (Failed: 0, Canceled 0), Skipped: 0'
      reason: Succeeded
      status: "True"
      type: Succeeded
  pipelineSpec:
    ...
  childReferences:
    - name: recipe-time-lslt9-fetch-secure-data
      pipelineTaskName: fetch-secure-data
      kind: TaskRun
    - name: recipe-time-lslt9-print-the-recipe
      pipelineTaskName: print-the-recipe
      kind: TaskRun
```

#### Workspace Referenced Resources

`Workspaces` cannot be propagated to referenced specifications. For example, the following `Pipeline` will fail when executed because the workspaces defined in the `PipelineRun` cannot be propagated to the referenced `Pipeline`:

```yaml
# PipelineRun attempting to propagate Workspaces to referenced Tasks
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-task-storage
spec:
  resources:
    requests:
      storage: 16Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: fetch-and-print-recipe
spec:
  tasks:
    - name: fetch-the-recipe
      taskRef:
        name: fetch-secure-data
    - name: print-the-recipe
      taskRef:
        name: print-data
      runAfter:
        - fetch-the-recipe
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: recipe-time-
spec:
  pipelineRef:
    name: fetch-and-print-recipe
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: shared-task-storage
```
Upon execution, this will cause failures:

```yaml
# Failed execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: recipe-time-
  ...
spec:
  pipelineRef:
    name: fetch-and-print-recipe
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: shared-task-storage
status:
  completionTime: "2022-06-02T19:02:58Z"
  conditions:
    - lastTransitionTime: "2022-06-02T19:02:58Z"
      message: 'Tasks Completed: 1 (Failed: 1, Canceled 0), Skipped: 1'
      reason: Failed
      status: "False"
      type: Succeeded
  pipelineSpec:
    ...
  childReferences:
    - name: recipe-time-v5scg-fetch-the-recipe
      pipelineTaskName: fetch-the-recipe
      kind: TaskRun
```

#### Referenced TaskRuns within Embedded PipelineRuns

As mentioned in [Workspace Referenced Resources](#workspace-referenced-resources), workspaces can only be propagated from `PipelineRuns` to embedded `Pipeline` specs, not `Pipeline` references. Similarly, workspaces can only be propagated from a `Pipeline` to embedded `Task` specs, not referenced `Tasks`. For example:

```yaml
# PipelineRun attempting to propagate Workspaces to referenced Tasks
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: fetch-secure-data
spec:
  workspaces: # If referenced, Workspaces need to be explicitly declared
    - name: shared-data
  steps:
    - name: fetch-and-write
      image: ubuntu
      script: |
        echo $(workspaces.shared-data.path)
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: recipe-time-
spec:
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: shared-task-storage
  pipelineSpec:
    # workspaces: # Since this is an embedded spec, Workspaces don't need to be declared
    #   ...
    tasks:
      - name: fetch-the-recipe
        workspaces: # If referencing resources, Workspaces need to be explicitly declared
          - name: shared-data
        taskRef: # Referencing a resource
          name: fetch-secure-data
      - name: print-the-recipe
        # workspaces: # Since this is an embedded spec, Workspaces don't need to be declared
        #   ...
        taskSpec:
          # workspaces: # Since this is an embedded spec, Workspaces don't need to be declared
          #   ...
          steps:
            - name: print-secrets
              image: ubuntu
              script: cat $(workspaces.shared-data.path)/recipe.txt
        runAfter:
          - fetch-the-recipe
```
The above `PipelineRun` successfully resolves to:

```yaml
# Successful execution of the above PipelineRun
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: recipe-time-
  ...
spec:
  pipelineSpec:
    ...
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: shared-task-storage
status:
  completionTime: "2022-06-09T18:42:14Z"
  conditions:
    - lastTransitionTime: "2022-06-09T18:42:14Z"
      message: 'Tasks Completed: 2 (Failed: 0, Cancelled 0), Skipped: 0'
      reason: Succeeded
      status: "True"
      type: Succeeded
  pipelineSpec:
    ...
  childReferences:
    - name: recipe-time-pj6l7-fetch-the-recipe
      pipelineTaskName: fetch-the-recipe
      kind: TaskRun
    - name: recipe-time-pj6l7-print-the-recipe
      pipelineTaskName: print-the-recipe
      kind: TaskRun
```

## Specifying `LimitRange` values

In order to only consume the bare minimum amount of resources needed to execute one `Step` at a time from the invoked `Task`, Tekton will request the compute values for CPU, memory, and ephemeral storage for each `Step` based on the [`LimitRange`](https://kubernetes.io/docs/concepts/policy/limit-range/) object(s), if present. Any `Request` or `Limit` specified by the user (on `Task` for example) will be left unchanged.
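As an illustration, a namespace `LimitRange` such as the following (names and values are made up for this sketch) provides the container defaults that Tekton would take into account when computing `Step` requests:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-container # illustrative name
spec:
  limits:
    - type: Container
      defaultRequest: # applied when a container sets no request
        cpu: 100m
        memory: 64Mi
      default: # applied when a container sets no limit
        cpu: 250m
        memory: 128Mi
```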
For more information, see the [`LimitRange` support in Pipeline](compute-resources.md#limitrange-support).

## Configuring a failure timeout

You can use the `timeouts` field to set the `PipelineRun's` desired timeout value. There are three sub-fields:

- `pipeline` - Specifies the timeout for the entire `PipelineRun`. Defaults to the global configurable default timeout of 60 minutes. When `timeouts.pipeline` has elapsed, any running child `TaskRuns` will be canceled, regardless of whether they are normal `Tasks` or `finally` `Tasks`, and the `PipelineRun` will fail.
- `tasks` - Specifies the timeout for the cumulative time taken by non-`finally` `Tasks` specified in `pipeline.spec.tasks`. To specify a timeout for an individual `Task`, use `pipeline.spec.tasks[].timeout`. When `timeouts.tasks` has elapsed, any running child `TaskRuns` will be canceled, `finally` `Tasks` will run if `timeouts.finally` is specified, and the `PipelineRun` will fail.
- `finally` - Specifies the timeout for the cumulative time taken by `finally` `Tasks` specified in `pipeline.spec.finally`. (Since all `finally` `Tasks` run in parallel, this is functionally equivalent to the timeout for any `finally` `Task`.) When `timeouts.finally` has elapsed, any running `finally` `TaskRuns` will be canceled, and the `PipelineRun` will fail.

For example:

```yaml
timeouts:
  pipeline: "0h0m60s"
  tasks: "0h0m40s"
  finally: "0h0m20s"
```

All three sub-fields are optional, and will be automatically processed according to the following constraint: `timeouts.pipeline >= timeouts.tasks + timeouts.finally`.

Each `timeout` field is a `duration` conforming to Go's [`ParseDuration`](https://golang.org/pkg/time/#ParseDuration) format. For example, valid values are `1h30m`, `1h`, `1m`, and `60s`.

If any of the sub-fields are set to `"0"`, there is no timeout for that section of the `PipelineRun`, meaning that it will run until it completes successfully or encounters an error. To set `timeouts.tasks` or `timeouts.finally` to `"0"`, you must also set `timeouts.pipeline` to `"0"`.
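The constraint above can be checked mechanically. The following is an illustrative sketch (not Tekton's validation code) that parses the `h`/`m`/`s` components of the Go-style duration strings and verifies the invariant:

```python
import re

def seconds(duration: str) -> int:
    # Minimal parser for the h/m/s components of Go-style duration
    # strings like "0h4m0s"; a sketch, not Go's time.ParseDuration.
    units = {"h": 3600, "m": 60, "s": 1}
    return sum(int(n) * units[u] for n, u in re.findall(r"(\d+)([hms])", duration))

def valid(timeouts: dict) -> bool:
    # Enforce: timeouts.pipeline >= timeouts.tasks + timeouts.finally
    return seconds(timeouts["pipeline"]) >= (
        seconds(timeouts["tasks"]) + seconds(timeouts["finally"])
    )

print(valid({"pipeline": "0h0m60s", "tasks": "0h0m40s", "finally": "0h0m20s"}))  # True
```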
also set  timeouts pipeline  to  0    The global default timeout is set to 60 minutes when you first install Tekton  You can set a different global default timeout value using the  default timeout minutes  field in   config config defaults yaml        config config defaults yaml    Example timeouts usages are as follows   Combination 1  Set the timeout for the entire  pipeline  and reserve a portion of it for  tasks       yaml kind  PipelineRun spec    timeouts      pipeline   0h4m0s      tasks   0h1m0s       Combination 2  Set the timeout for the entire  pipeline  and reserve a portion of it for  finally       yaml kind  PipelineRun spec    timeouts      pipeline   0h4m0s      finally   0h3m0s       Combination 3  Set only a  tasks  timeout  with no timeout for the entire  pipeline       yaml kind  PipelineRun spec    timeouts      pipeline   0     No timeout     tasks   0h3m0s       Combination   Set only a  finally  timeout  with no timeout for the entire  pipeline       yaml kind  PipelineRun spec    timeouts      pipeline   0     No timeout     finally   0h3m0s       You can also use the  Deprecated   timeout  field to set the  PipelineRun s  desired timeout value in minutes  If you do not specify this value in the  PipelineRun   the global default timeout value applies  If you set the timeout to 0  the  PipelineRun  fails immediately upon encountering an error      warning      timeout  is deprecated and will be removed in future versions  Consider using  timeouts  instead      note  An internal detail of the  PipelineRun  and  TaskRun  reconcilers in the Tekton controller is that it will requeue a  PipelineRun  or  TaskRun  for re evaluation  versus waiting for the next update  under certain conditions   The wait time for that re queueing is the elapsed time subtracted from the timeout  however  if the timeout is set to  0   that calculation produces a negative number  and the new reconciliation event will fire immediately  which can impact overall 
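The `timeouts.pipeline >= timeouts.tasks + timeouts.finally` constraint above can be checked by hand. Below is a minimal illustrative sketch; the `parse_duration` helper is hypothetical, handles only the `h`/`m`/`s` subset of Go's `ParseDuration` format, and is not Tekton's actual validation code:

```python
import re
from datetime import timedelta

def parse_duration(s: str) -> timedelta:
    """Parse a Go-style duration such as '1h30m' or '0h4m0s' (h/m/s only)."""
    if s == "0":
        return timedelta(0)  # "0" means no timeout
    units = {"h": "hours", "m": "minutes", "s": "seconds"}
    return timedelta(**{units[u]: int(n) for n, u in re.findall(r"(\d+)([hms])", s)})

def timeouts_consistent(pipeline: str, tasks: str, finally_: str) -> bool:
    """Check the documented constraint: pipeline >= tasks + finally."""
    return parse_duration(pipeline) >= parse_duration(tasks) + parse_duration(finally_)

print(timeouts_consistent("0h0m60s", "0h0m40s", "0h0m20s"))  # True: 60s >= 40s + 20s
```

The same check run against `pipeline: "0h1m0s"`, `tasks: "0h1m0s"`, `finally: "0h1m0s"` would return `False`, since one minute cannot cover two minutes of cumulative budget.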
## `PipelineRun` status

### The `status` field

Your `PipelineRun`'s `status` field can contain the following fields:

Required:

<!-- wokeignore:rule=master -->
- `status`: Most relevant, `status.conditions`, which contains the latest observations of the `PipelineRun`'s state. See [here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for information on typical status properties.
- `startTime`: The time at which the `PipelineRun` began executing, in [RFC3339](https://tools.ietf.org/html/rfc3339) format.
- `completionTime`: The time at which the `PipelineRun` finished executing, in [RFC3339](https://tools.ietf.org/html/rfc3339) format.
- [`pipelineSpec`](pipelines.md#configuring-a-pipeline): The exact `PipelineSpec` used when starting the `PipelineRun`.

Optional:

- [`pipelineResults`](pipelines.md#emitting-results-from-a-pipeline): Results emitted by this `PipelineRun`.
- `skippedTasks`: A list of `Task`s which were skipped when running this `PipelineRun` due to [when expressions](pipelines.md#guard-task-execution-using-when-expressions), including the when expressions applying to the skipped task.
- `childReferences`: A list of references to each `TaskRun` or `Run` in this `PipelineRun`, which can be used to look up the status of the underlying `TaskRun` or `Run`. Each entry contains the following:
  - `kind`: Generally either `TaskRun` or `Run`.
  - `apiVersion`: The API version for the underlying `TaskRun` or `Run`.
  - [`whenExpressions`](pipelines.md#guard-task-execution-using-when-expressions): The list of when expressions guarding the execution of this task.
- `provenance`: Metadata about
  the runtime configuration and the resources used in the `PipelineRun`. The data in the `provenance` field will be recorded into the build provenance by the provenance generator (i.e. Tekton Chains). Currently, there are 2 subfields:
  - `refSource`: the source from where a remote pipeline definition was fetched.
  - `featureFlags`: the configuration data of the `feature-flags` configmap.
- `finallyStartTime`: The time at which the `PipelineRun`'s `finally` Tasks (if any) began executing, in [RFC3339](https://tools.ietf.org/html/rfc3339) format.

### Monitoring execution status

As your `PipelineRun` executes, its `status` field accumulates information on the execution of each `TaskRun` as well as the `PipelineRun` as a whole. This information includes the name of the pipeline `Task` associated to a `TaskRun`, the complete [status of the `TaskRun`](taskruns.md#monitoring-execution-status), and details about `whenExpressions` that may be associated to a `TaskRun`.

The following example shows an extract from the `status` field of a `PipelineRun` that has executed successfully:

```yaml
completionTime: "2020-05-04T02:19:14Z"
conditions:
  - lastTransitionTime: "2020-05-04T02:19:14Z"
    message: "Tasks Completed: 4, Skipped: 0"
    reason: Succeeded
    status: "True"
    type: Succeeded
startTime: "2020-05-04T02:00:11Z"
childReferences:
  - name: triggers-release-nightly-frwmw-build
    pipelineTaskName: build
    kind: TaskRun
```

The following table shows how to read the overall status of a `PipelineRun`. Completion time is set once a `PipelineRun` reaches status `True` or `False`:

| `status` | `reason`           | `completionTime` is set | Description                                                                               |
|----------|--------------------|-------------------------|-------------------------------------------------------------------------------------------|
| Unknown  | Started            | No                      | The `PipelineRun` has just been picked up by the controller.                              |
| Unknown  | Running            | No                      | The `PipelineRun` has been validated and has started to perform its work.                 |
| Unknown  | Cancelled          | No                      | The user requested the `PipelineRun` to be cancelled. Cancellation has not been done yet. |
| True     | Succeeded          | Yes                     | The `PipelineRun` completed successfully.                                                 |
| True     | Completed          | Yes                     | The `PipelineRun` completed successfully; one or more Tasks were skipped.                 |
| False    | Failed             | Yes                     | The `PipelineRun` failed because one of the `TaskRuns` failed.                            |
| False    | \[Error message\]  | Yes                     | The `PipelineRun` failed with a permanent error (usually validation).                     |
| False    | Cancelled          | Yes                     | The `PipelineRun` was cancelled successfully.                                             |
| False    | PipelineRunTimeout | Yes                     | The `PipelineRun` timed out.                                                              |
| False    | CreateRunFailed    | Yes                     | The `PipelineRun` failed to create its run resources.                                     |

When a `PipelineRun` changes status, [events](events.md#pipelineruns) are triggered accordingly.

When a `PipelineRun` has `Tasks` that were skipped, the `reason` for skipping the task will be listed in the `Skipped Tasks` section of the `status` of the `PipelineRun`.

When a `PipelineRun` has `Tasks` with [`when` expressions](pipelines.md#guard-task-execution-using-when-expressions):

- If the `when` expressions evaluate to `true`, the `Task` is executed, and the `TaskRun` and its resolved `when` expressions will be listed in the `Task Runs` section of the `status` of the `PipelineRun`.
- If the `when` expressions evaluate to `false`, the `Task` is skipped, and its name and its resolved `when` expressions will be listed in the `Skipped Tasks` section of the `status` of the `PipelineRun`.
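The `status`/`reason` table above can also be read mechanically from `status.conditions`. The following is a small sketch over plain dictionaries, not a Tekton client API:

```python
def overall_status(conditions: list) -> str:
    """Summarize a PipelineRun from its Succeeded condition, per the table above."""
    cond = next(c for c in conditions if c["type"] == "Succeeded")
    verdict = {"True": "succeeded", "False": "failed", "Unknown": "in progress"}[cond["status"]]
    return f"{verdict} (reason: {cond['reason']})"

print(overall_status([{"type": "Succeeded", "status": "True", "reason": "Completed"}]))
# succeeded (reason: Completed)
```

Note that a `reason` of `Completed` with `status: "True"` still means success; it merely indicates that one or more Tasks were skipped.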
```yaml
Conditions:
  Last Transition Time:  2020-08-27T15:07:34Z
  Message:               Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 1
  Reason:                Completed
  Status:                True
  Type:                  Succeeded
Skipped Tasks:
  Name:    skip-this-task
  Reason:  When Expressions evaluated to false
  When Expressions:
    Input:     foo
    Operator:  in
    Values:
      bar
    Input:     foo
    Operator:  notin
    Values:
      foo
ChildReferences:
  Name:                pipelinerun-to-skip-task-run-this-task
  Pipeline Task Name:  run-this-task
  Kind:                TaskRun
```

The names of the `TaskRuns` and `Runs` owned by a `PipelineRun` are univocally associated to the owning resource. If a `PipelineRun` resource is deleted and created with the same name, the child `TaskRuns` will be created with the same name as before. The base format of the name is `<pipelinerun-name>-<pipelinetask-name>`. If the `PipelineTask` has a `Matrix`, the name will have an int suffix with format `<pipelinerun-name>-<pipelinetask-name>-<combination-id>`. The name may vary according to the logic of [`kmeta.ChildName`](https://pkg.go.dev/github.com/knative/pkg/kmeta#ChildName).

Some examples:

| `PipelineRun` Name | `PipelineTask` Name | `TaskRun` Names |
|--------------------|---------------------|-----------------|
| pipeline-run | task1 | pipeline-run-task1 |
| pipeline-run | task2-0123456789-0123456789-0123456789-0123456789-0123456789 | pipeline-runee4a397d6eab67777d4e6f9991cd19e6-task2-0123456789-0 |
| pipeline-run-0123456789-0123456789-0123456789-0123456789 | task3 | pipeline-run-0123456789-0123456789-0123456789-0123456789-task3 |
| pipeline-run-0123456789-0123456789-0123456789-0123456789 | task2-0123456789-0123456789-0123456789-0123456789-0123456789 | pipeline-run-0123456789-012345607ad8c7aac5873cdfabe472a68996b5c |
| pipeline-run | task4 (with 2x2 `Matrix`) | pipeline-run-task1-0, pipeline-run-task1-2, pipeline-run-task1-3, pipeline-run-task1-4 |

### Marking off user errors

A user error in Tekton is any mistake made by the user, such as a syntax error when specifying pipelines or tasks. User errors can occur at various stages of the Tekton pipeline, from authoring the pipeline configuration to executing the pipelines. They are currently explicitly labeled in the Run's condition message, for example:

```yaml
# Failed PipelineRun with message labeled "[User error]"
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  ...
spec:
  ...
status:
  conditions:
  - lastTransitionTime: "2022-06-02T19:02:58Z"
    message: '[User error] PipelineRun default parameters is missing some parameters required by
      Pipeline pipelinerun-with-params''s parameters: pipelineRun missing parameters: [pl-param-x]'
    reason: ParameterMissing
    status: "False"
    type: Succeeded
```

```console
$ tkn pr list
NAME                      STARTED         DURATION   STATUS
pipelinerun-with-params   5 seconds ago   0s         Failed(ParameterMissing)
```
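Because the label appears as a plain `[User error]` prefix in the condition message, it can be detected with simple string matching. A minimal sketch (plain dicts, not a Tekton client API; the prefix is taken from the example above):

```python
def is_user_error(condition: dict) -> bool:
    """True if a failed condition is explicitly labeled as a user error."""
    return (
        condition.get("status") == "False"
        and condition.get("message", "").startswith("[User error]")
    )

cond = {
    "type": "Succeeded",
    "status": "False",
    "reason": "ParameterMissing",
    "message": "[User error] PipelineRun default parameters is missing some parameters",
}
print(is_user_error(cond))  # True
```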
## Cancelling a `PipelineRun`

To cancel a `PipelineRun` that's currently executing, update its definition to mark it as `Cancelled`. When you do so, the spawned `TaskRuns` are also marked as cancelled, all associated `Pods` are deleted, and their `Retries` are not executed. Pending `finally` tasks are not scheduled.

For example:

```yaml
apiVersion: tekton.dev/v1  # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: go-example-git
spec:
  # [...]
  status: "Cancelled"
```

## Gracefully cancelling a `PipelineRun`

To gracefully cancel a `PipelineRun` that's currently executing, update its definition to mark it as `CancelledRunFinally`. When you do so, the spawned `TaskRuns` are also marked as cancelled, all associated `Pods` are deleted, and their `Retries` are not executed. `finally` tasks are scheduled normally.

For example:

```yaml
apiVersion: tekton.dev/v1  # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: go-example-git
spec:
  # [...]
  status: "CancelledRunFinally"
```

## Gracefully stopping a `PipelineRun`

To gracefully stop a `PipelineRun` that's currently executing, update its definition to mark it as `StoppedRunFinally`. When you do so, the spawned `TaskRuns` are completed normally, including executing their `retries`, but no new non-`finally` task is scheduled. `finally` tasks are executed afterwards.

For example:

```yaml
apiVersion: tekton.dev/v1  # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: go-example-git
spec:
  # [...]
  status: "StoppedRunFinally"
```

## Pending `PipelineRuns`

A `PipelineRun` can be created as a "pending" `PipelineRun`, meaning that it will not actually be started until the pending status is cleared.

Note that a `PipelineRun` can only be marked "pending" before it has started; this setting is invalid after the `PipelineRun` has been started.

To mark a `PipelineRun` as pending, set `.spec.status` to `PipelineRunPending` when creating it:

```yaml
apiVersion: tekton.dev/v1  # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: go-example-git
spec:
  # [...]
  status: "PipelineRunPending"
```

To start the `PipelineRun`, clear the `.spec.status` field. Alternatively, update the value to `Cancelled` to cancel it.

---

Except as otherwise noted, the content of this page is licensed under the [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/), and code samples are licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
"}
{"questions":"tekton Debug Debug weight 108","answers":"<!--\n---\nlinkTitle: \"Debug\"\nweight: 108\n---\n-->\n\n# Debug\n\n- [Overview](#overview)\n- [Debugging TaskRuns](#debugging-taskruns)\n  - [Adding Breakpoints](#adding-breakpoints)\n    - [Breakpoint on Failure](#breakpoint-on-failure)\n      - [Failure of a Step](#failure-of-a-step)\n      - [Halting a Step on failure](#halting-a-step-on-failure)\n      - [Exiting onfailure breakpoint](#exiting-onfailure-breakpoint)\n    - [Breakpoint before step](#breakpoint-before-step)\n- [Debug Environment](#debug-environment)\n  - [Mounts](#mounts)\n  - [Debug Scripts](#debug-scripts)\n\n\n## Overview\n\n`Debug` spec is used for troubleshooting and breakpointing runtime resources. This doc helps understand the inner \nworkings of debug in Tekton. Currently only the `TaskRun` resource is supported. \n\nThis is an alpha feature. The `enable-api-fields` feature flag [must be set to `\"alpha\"`](.\/install.md)\nto specify `debug` in a `taskRun`.\n\n## Debugging TaskRuns\n\nThe following provides explanation on how Debugging TaskRuns is possible through Tekton. To understand how to use \nthe debug spec for TaskRuns follow the [TaskRun Debugging Documentation](taskruns.md#debugging-a-taskrun).\n\n### Breakpoint on Failure\n\nHalting a TaskRun execution on Failure of a step.\n\n#### Failure of a Step\n\nThe entrypoint binary is used to manage the lifecycle of a step. Steps are aligned beforehand by the TaskRun controller\nallowing each step to run in a particular order. This is done using `-wait_file` and the `-post_file` flags. 
The former \nlets the entrypoint binary know that it has to wait on creation of a particular file before starting execution of the step.\nAnd the latter provides information on the step number and signals the next step on completion of the step.\n\nOn success of a step, the `-post_file` is written as is, signalling the next step which would have the same argument given\nfor `-wait_file` to resume the entrypoint process and move ahead with the step. \n\nOn failure of a step, the `-post_file` is written with appending `.err` to it denoting that the previous step has failed with\nan error. The subsequent steps are skipped in this case as well, marking the TaskRun as a failure.\n\n#### Halting a Step on failure\n\nThe failed step writes `<step-no>.err` to `\/tekton\/run` and stops running completely. To be able to debug a step we would\nneed it to continue running (not exit), not skip the next steps and signal health of the step. By disabling step skipping, \nstopping write of the `<step-no>.err` file and waiting on a signal by the user to disable the halt, we would be simulating a \n\"breakpoint\".\n\nIn this breakpoint, which is essentially a limbo state the TaskRun finds itself in, the user can interact with the step \nenvironment using a CLI or an IDE. \n\n#### Exiting onfailure breakpoint\n\nTo exit a step which has been paused upon failure, the step would wait on a file similar to `<step-no>.breakpointexit` which \nwould unpause and exit the step container. eg: Step 0 fails and is paused. 
Writing `0.breakpointexit` in `\/tekton\/run`\nwould unpause and exit the step container.\n\n### Breakpoint before step\n\n\nTaskRun will be stuck waiting for user debugging before the step execution.\nWhen beforeStep-Breakpoint takes effect, the user can see the following information\nfrom the corresponding step container log:\n```\ndebug before step breakpoint has taken effect, waiting for user's decision:\n1) continue, use cmd: \/tekton\/debug\/scripts\/debug-beforestep-continue\n2) fail-continue, use cmd: \/tekton\/debug\/scripts\/debug-beforestep-fail-continue\n```\n1. Executing \/tekton\/debug\/scripts\/debug-beforestep-continue will continue to execute the step program\n2. Executing \/tekton\/debug\/scripts\/debug-beforestep-fail-continue will not continue to execute the task, and will mark the step as failed\n\n## Debug Environment \n\nAdditional environment augmentations made available to the TaskRun Pod to aid in troubleshooting and managing step lifecycle.\n\n### Mounts\n\n`\/tekton\/debug\/scripts` : Contains scripts which the user can run to mark the step as a success, failure or exit the breakpoint.\nShared between all the containers.\n\n`\/tekton\/debug\/info\/<n>` : Contains information about the step. Single EmptyDir shared between all step containers, but renamed \nto reflect step number. eg: Step 0 will have `\/tekton\/debug\/info\/0`, Step 1 will have `\/tekton\/debug\/info\/1` etc.\n\n### Debug Scripts\n\n`\/tekton\/debug\/scripts\/debug-continue` : Mark the step as completed with success by writing to `\/tekton\/run`. eg: User wants to exit\nonfailure breakpoint for failed step 0. Running this script would create `\/tekton\/run\/0` and `\/tekton\/run\/0\/out.breakpointexit`.\n\n`\/tekton\/debug\/scripts\/debug-fail-continue` : Mark the step as completed with failure by writing to `\/tekton\/run`. eg: User wants to exit\nonfailure breakpoint for failed step 0. 
Running this script would create `\/tekton\/run\/0` and `\/tekton\/run\/0\/out.breakpointexit.err`.\n\n`\/tekton\/debug\/scripts\/debug-beforestep-continue` : Mark the step continue to execute by writing to `\/tekton\/run`. eg: User wants to exit\nbefore step breakpoint for before step 0. Running this script would create `\/tekton\/run\/0` and `\/tekton\/run\/0\/out.beforestepexit`.\n\n`\/tekton\/debug\/scripts\/debug-beforestep-fail-continue` : Mark the step not continue to execute by writing to `\/tekton\/run`. eg: User wants to exit\nbefore step breakpoint for before step 0. Running this script would create `\/tekton\/run\/0` and `\/tekton\/run\/0\/out.beforestepexit.err`.","site":"tekton","answers_cleaned":"         linkTitle   Debug  weight  108            Debug     Overview   overview     Debugging TaskRuns   debugging taskruns       Adding Breakpoints   adding breakpoints         Breakpoint on Failure   breakpoint on failure           Failure of a Step   failure of a step           Halting a Step on failure   halting a step on failure           Exiting onfailure breakpoint   exiting onfailure breakpoint         Breakpoint before step   breakpoint before step     Debug Environment   debug environment       Mounts   mounts       Debug Scripts   debug scripts       Overview   Debug  spec is used for troubleshooting and breakpointing runtime resources  This doc helps understand the inner  workings of debug in Tekton  Currently only the  TaskRun  resource is supported    This is an alpha feature  The  enable api fields  feature flag  must be set to   alpha      install md  to specify  debug  in a  taskRun       Debugging TaskRuns  The following provides explanation on how Debugging TaskRuns is possible through Tekton  To understand how to use  the debug spec for TaskRuns follow the  TaskRun Debugging Documentation  taskruns md debugging a taskrun        Breakpoint on Failure  Halting a TaskRun execution on Failure of a step        Failure of a Step  The 
entrypoint binary is used to manage the lifecycle of a step  Steps are aligned beforehand by the TaskRun controller allowing each step to run in a particular order  This is done using   wait file  and the   post file  flags  The former  let s the entrypoint binary know that it has to wait on creation of a particular file before starting execution of the step  And the latter provides information on the step number and signal the next step on completion of the step   On success of a step  the   post file  is written as is  signalling the next step which would have the same argument given for   wait file  to resume the entrypoint process and move ahead with the step    On failure of a step  the   post file  is written with appending   err  to it denoting that the previous step has failed with and error  The subsequent steps are skipped in this case as well  marking the TaskRun as a failure        Halting a Step on failure  The failed step writes   step no  err  to   tekton run  and stops running completely  To be able to debug a step we would need it to continue running  not exit   not skip the next steps and signal health of the step  By disabling step skipping   stopping write of the   step no  err  file and waiting on a signal by the user to disable the halt  we would be simulating a   breakpoint    In this breakpoint  which is essentially a limbo state the TaskRun finds itself in  the user can interact with the step  environment using a CLI or an IDE         Exiting onfailure breakpoint  To exit a step which has been paused upon failure  the step would wait on a file similar to   step no  breakpointexit  which  would unpause and exit the step container  eg  Step 0 fails and is paused  Writing  0 breakpointexit  in   tekton run  would unpause and exit the step container       Breakpoint before step   TaskRun will be stuck waiting for user debugging before the step execution  When beforeStep Breakpoint takes effect  the user can see the following information from 
the corresponding step container log      debug before step breakpoint has taken effect  waiting for user s decision  1  continue  use cmd   tekton debug scripts debug beforestep continue 2  fail continue  use cmd   tekton debug scripts debug beforestep fail continue     1  Executing  tekton debug scripts debug beforestep continue will continue to execute the step program 2  Executing  tekton debug scripts debug beforestep fail continue will not continue to execute the task  and will mark the step as failed     Debug Environment   Additional environment augmentations made available to the TaskRun Pod to aid in troubleshooting and managing step lifecycle       Mounts    tekton debug scripts    Contains scripts which the user can run to mark the step as a success  failure or exit the breakpoint  Shared between all the containers     tekton debug info  n     Contains information about the step  Single EmptyDir shared between all step containers  but renamed  to reflect step number  eg  Step 0 will have   tekton debug info 0   Step 1 will have   tekton debug info 1  etc       Debug Scripts    tekton debug scripts debug continue    Mark the step as completed with success by writing to   tekton run   eg  User wants to exit onfailure breakpoint for failed step 0  Running this script would create   tekton run 0  and   tekton run 0 out breakpointexit      tekton debug scripts debug fail continue    Mark the step as completed with failure by writing to   tekton run   eg  User wants to exit onfailure breakpoint for failed step 0  Running this script would create   tekton run 0  and   tekton run 0 out breakpointexit err      tekton debug scripts debug beforestep continue    Mark the step continue to execute by writing to   tekton run   eg  User wants to exit before step breakpoint for before step 0  Running this script would create   tekton run 0  and   tekton run 0 out beforestepexit      tekton debug scripts debug beforestep fail continue    Mark the step not continue to 
execute by writing to   tekton run   eg  User wants to exit before step breakpoint for before step 0  Running this script would create   tekton run 0  and   tekton run 0 out beforestepexit err  "}
{"questions":"tekton Tekton Pipelines API Specification toc","answers":"# Tekton Pipelines API Specification\n\n<!-- toc -->\n- [Tekton Pipelines API Specification](#tekton-pipelines-api-specification)\n  - [Abstract](#abstract)\n  - [Background](#background)\n  - [Modifying This Specification](#modifying-this-specification)\n  - [Resource Overview - v1](#resource-overview---v1)\n    - [`Task`](#task)\n    - [`Pipeline`](#pipeline)\n    - [`TaskRun`](#taskrun)\n    - [`PipelineRun`](#pipelinerun)\n  - [Detailed Resource Types - `v1`](#detailed-resource-types---v1)\n    - [TypeMeta](#typemeta)\n    - [ObjectMeta](#objectmeta)\n    - [TaskSpec](#taskspec)\n    - [ParamSpec](#paramspec)\n    - [ParamType](#paramtype)\n    - [Step](#step)\n    - [Sidecar](#sidecar)\n    - [SecurityContext](#securitycontext)\n    - [TaskResult](#taskresult)\n    - [ResultsType](#resultstype)\n    - [PipelineSpec](#pipelinespec)\n    - [PipelineTask](#pipelinetask)\n    - [TaskRef](#taskref)\n    - [ResolverRef](#resolverref)\n    - [Param](#param)\n    - [ParamValue](#paramvalue)\n    - [PipelineResult](#pipelineresult)\n    - [TaskRunSpec](#taskrunspec)\n    - [TaskRunStatus](#taskrunstatus)\n    - [Condition](#condition)\n    - [StepState](#stepstate)\n    - [ContainerState](#containerstate)\n    - [`ContainerStateRunning`](#containerstaterunning)\n    - [`ContainerStateWaiting`](#containerstatewaiting)\n    - [`ContainerStateTerminated`](#containerstateterminated)\n    - [TaskRunResult](#taskrunresult)\n    - [SidecarState](#sidecarstate)\n    - [PipelineRunSpec](#pipelinerunspec)\n    - [PipelineRef](#pipelineref)\n    - [PipelineRunStatus](#pipelinerunstatus)\n    - [PipelineRunResult](#pipelinerunresult)\n    - [ChildStatusReference](#childstatusreference)\n    - [TimeoutFields](#timeoutfields)\n    - [WorkspaceDeclaration](#workspacedeclaration)\n    - [WorkspacePipelineTaskBinding](#workspacepipelinetaskbinding)\n    - 
[PipelineWorkspaceDeclaration](#pipelineworkspacedeclaration)\n    - [WorkspaceBinding](#workspacebinding)\n    - [EnvVar](#envvar)\n  - [Status Signalling](#status-signalling)\n<!-- \/toc -->\n\n## Abstract\n\nThe Tekton Pipelines platform provides common abstractions for describing and executing container-based, run-to-completion workflows, typically in service of CI\/CD scenarios. The Tekton Conformance Policy defines the requirements that Tekton implementations must meet to claim conformance with the Tekton API. [TEP-0131](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0131-tekton-conformance-policy.md) lays out details of the policy itself.\n\nAccording to the policy, Tekton implementations can claim Conformance on GA Primitives, thus, all API Spec in this doc is for Tekton V1 APIs. Implementations are only required to provide resource management (i.e. CRUD APIs) for Runtime Primitives (TaskRun and PipelineRun). For Authoring-time Primitives (Task and Pipeline), supporting CRUD APIs is not a requirement but we recommend referencing them in runtime types (e.g. from git, catalog, within the cluster etc.)\n\nThis document describes the structure and lifecycle of Tekton resources. This document does not define the [runtime contract](https:\/\/tekton.dev\/docs\/pipelines\/container-contract\/) nor prescribe specific implementations of supporting services such as access control, observability, or resource management.\n\nThis document makes reference in a few places to different profiles for Tekton installations. A profile in this context is a set of operations, resources, and fields that are accessible to a developer interacting with a Tekton installation. Currently, only a single (minimal) profile for Tekton Pipelines is defined, but additional profiles may be defined in the future to standardize advanced functionality. 
A minimal profile is one that implements all of the \u201cMUST\u201d, \u201cMUST NOT\u201d, and \u201cREQUIRED\u201d conditions of this document.\n\n## Background\n\nThe key words \u201cMUST\u201d, \u201cMUST NOT\u201d, \u201cREQUIRED\u201d, \u201cSHALL\u201d, \u201cSHALL NOT\u201d, \u201cSHOULD\u201d, \u201cSHOULD NOT\u201d, \u201cRECOMMENDED\u201d, \u201cNOT RECOMMENDED\u201d, \u201cMAY\u201d, and \u201cOPTIONAL\u201d are to be interpreted as described in [RFC 2119](https:\/\/tools.ietf.org\/html\/rfc2119).\n\nThere is no formal specification of the Kubernetes API and Resource Model. This document assumes Kubernetes 1.25 behavior; this behavior will typically be supported by many future Kubernetes versions. Additionally, this document may reference specific core Kubernetes resources; these references may be illustrative (i.e. an implementation on Kubernetes) or descriptive (i.e. this Kubernetes resource MUST be exposed). References to these core Kubernetes resources will be annotated as either illustrative or descriptive.\n\n## Modifying This Specification\n\nThis spec is a living document, meaning new resources and fields may be added, and may transition from being OPTIONAL to RECOMMENDED to REQUIRED over time. In general a resource or field should not be added\u00a0as REQUIRED directly, as this may cause unsuspecting previously-conformant implementations to suddenly no longer be conformant. These should be first OPTIONAL or RECOMMENDED, then change to be REQUIRED once a survey of conformant implementations indicates that doing so will not cause undue burden on any implementation.\n\n## Resource Overview - v1\n\nThe following schema defines a set of REQUIRED or RECOMMENDED resource fields on the Tekton resource types. 
Whether a field is REQUIRED or RECOMMENDED is denoted in the \"Requirement\" column.\n\nAdditional fields MAY be provided by particular implementations, however it is expected that most extension will be accomplished via the `metadata.labels` and `metadata.annotations` fields, as Tekton implementations MAY validate supplied resources against these fields and refuse resources which specify unknown fields.\n\nTekton implementations MUST NOT require `spec` fields outside this implementation; to do so would break interoperability between such implementations and implementations which implement validation of field names.\n\n**NB:** All fields and resources not listed below are assumed to be **OPTIONAL**, not RECOMMENDED or REQUIRED.\n\n### `Task`\n\nA Task is a collection of Steps that is defined and arranged in a sequential order of execution.\n\n| Field        | Type                        | Requirement | Notes                                          |\n|--------------|-----------------------------|-------------|------------------------------------------------|\n| `kind`       | string                      | RECOMMENDED | Describes the type of the resource i.e. `Task` |\n| `apiVersion` | string                      | RECOMMENDED | Schema version i.e. `v1`                       |\n| `metadata`   | [`ObjectMeta`](#objectmeta) | REQUIRED    | Common metadata about a resource               |\n| `spec`       | [`TaskSpec`](#taskspec)     | REQUIRED    | Defines the desired state of Task.             
|

**NB:** If `kind` and `apiVersion` are not supported, an alternative method of identifying the type of resource must be supported.

### `Pipeline`

A Pipeline is a collection of Tasks that is defined and arranged in a specific order of execution.

| Field        | Type                            | Requirement | Notes                                              |
|--------------|---------------------------------|-------------|----------------------------------------------------|
| `kind`       | string                          | RECOMMENDED | Describes the type of the resource i.e. `Pipeline` |
| `apiVersion` | string                          | RECOMMENDED | Schema version i.e. `v1`                           |
| `metadata`   | [`ObjectMeta`](#objectmeta)     | REQUIRED    | Common metadata about a resource                   |
| `spec`       | [`PipelineSpec`](#pipelinespec) | REQUIRED    | Defines the desired state of Pipeline.             |

**NB:** If `kind` and `apiVersion` are not supported, an alternative method of identifying the type of resource must be supported.

### `TaskRun`

A `TaskRun` represents an instantiation of a single execution of a `Task`. It can describe the steps of the Task directly.

| Field        | Type                              | Requirement | Notes                                             |
|--------------|-----------------------------------|-------------|---------------------------------------------------|
| `kind`       | string                            | RECOMMENDED | Describes the type of the resource i.e. `TaskRun` |
| `apiVersion` | string                            | RECOMMENDED | Schema version i.e. `v1`                          |
| `metadata`   | [`ObjectMeta`](#objectmeta)       | REQUIRED    | Common metadata about a resource                  |
| `spec`       | [`TaskRunSpec`](#taskrunspec)     | REQUIRED    | Defines the desired state of TaskRun              |
| `status`     | [`TaskRunStatus`](#taskrunstatus) | REQUIRED    | Defines the current status of TaskRun             |

**NB:** If `kind` and `apiVersion` are not supported, an alternative method of identifying the type of resource must be supported.

### `PipelineRun`

A `PipelineRun` represents an instantiation of a single execution of a `Pipeline`. It can describe the spec of the Pipeline directly.

| Field        | Type                                      | Requirement | Notes                                                 |
|--------------|-------------------------------------------|-------------|-------------------------------------------------------|
| `kind`       | string                                    | RECOMMENDED | Describes the type of the resource i.e. `PipelineRun` |
| `apiVersion` | string                                    | RECOMMENDED | Schema version i.e. 
`v1` |
| `metadata`   | [`ObjectMeta`](#objectmeta)               | REQUIRED    | Common metadata about a resource          |
| `spec`       | [`PipelineRunSpec`](#pipelinerunspec)     | REQUIRED    | Defines the desired state of PipelineRun  |
| `status`     | [`PipelineRunStatus`](#pipelinerunstatus) | REQUIRED    | Defines the current status of PipelineRun |

**NB:** If `kind` and `apiVersion` are not supported, an alternative method of identifying the type of resource must be supported.

## Detailed Resource Types - `v1`

### TypeMeta

Derived from [Kubernetes TypeMeta](https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#TypeMeta)

| Field        | Type   | Notes                                                             |
|--------------|--------|-------------------------------------------------------------------|
| `kind`       | string | A string value representing the resource this object represents.  |
| `apiVersion` | string | Defines the versioned schema of this representation of an object. 
|

### ObjectMeta

Derived from standard Kubernetes [meta.v1/ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#objectmeta-v1-meta) resource.

| Field               | Type               | Requirement         | Notes |
|---------------------|--------------------|---------------------|-------|
| `name`              | string             | REQUIRED            | Mutually exclusive with the `generateName` field. |
| `labels`            | map<string,string> | RECOMMENDED         |       |
| `annotations`       | map<string,string> | RECOMMENDED         | `annotations` are necessary in order to support integration with Tekton ecosystem tooling such as Results and Chains |
| `creationTimestamp` | string             | REQUIRED (see note) | `creationTimestamp` MUST be populated by the implementation, in [RFC3339](https://tools.ietf.org/html/rfc3339). <br>The field is REQUIRED for runtime types such as `TaskRun` and `PipelineRun` and RECOMMENDED for other types. 
|\n| `uid`               | string             | RECOMMENDED         | If `uid` is not supported, the implementation must support another way of uniquely identifying a runtime object such as using a combination of `namespace` and `name`                                                               |\n| `resourceVersion`   | string             | OPTIONAL            |                                                                                                                                                                                                                                     |\n| `generation`        | int64              | OPTIONAL            |                                                                                                                                                                                                                                     |\n| `generateName`      | string             | RECOMMENDED         | If supported by the implementation, when `generateName` is specified at creation, it MUST be prepended to a random string and set as the `name`, and not set on the subsequent response.                                            
|\n\n### TaskSpec\n\nDefines the desired state of Task\n\n| Field         | Type                                              | Requirement | Notes |\n|---------------|---------------------------------------------------|-------------|-------|\n| `description` | string                                            | REQUIRED    |       |\n| `params`      | [][`ParamSpec`](#paramspec)                       | REQUIRED    |       |\n| `steps`       | [][`Step`](#step)                                 | REQUIRED    |       |\n| `sidecars`    | [][`Sidecar`](#sidecar)                           | REQUIRED    |       |\n| `results`     | [][`TaskResult`](#taskresult)                     | REQUIRED    |       |\n| `workspaces`  | [][`WorkspaceDeclaration`](#workspacedeclaration) | REQUIRED    |       |\n\n### ParamSpec\n\nDeclares a parameter whose value has to be provided at runtime\n\n| Field Name    | Field Type                  | Requirement         | Notes                                                                                                                                                                                    |\n|---------------|-----------------------------|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `name`        | string                      | REQUIRED            |                                                                                                                                                                                          |\n| `description` | string                      | REQUIRED            |                                                                                                                                                                                          |\n| `type`        | [`ParamType`](#paramtype)   | REQUIRED (see note) | The values 
`string` and `array` for this field are REQUIRED, and the value `object` is RECOMMENDED. |
| `properties`  | map<string,PropertySpec>    | RECOMMENDED         | `PropertySpec` is a type that defines the spec of an individual key. See how to define the `properties` section in the [example](../examples/v1/taskruns/beta/object-param-result.yaml). |
| `default`     | [`ParamValue`](#paramvalue) | REQUIRED            |       |

### ParamType

Defines the type of a parameter

string enum, allowed values are `string`, `array`, and `object`. Supporting `string` and `array` is required, while supporting `object` is optional for conformance.

### Step

A Step is a reference to a container image that executes a specific tool on a specific input and produces a specific output.
**NB:** All other fields inherited from the [core.v1/Container](https://godoc.org/k8s.io/api/core/v1#Container) type supported by the Kubernetes implementation are **OPTIONAL** for the purposes of this spec.

| Field Name        | Field Type                            | Requirement | Notes |
|-------------------|---------------------------------------|-------------|-------|
| `name`            | string                                | REQUIRED    |       |
| `image`           | string                                | REQUIRED    |       |
| `args`            | []string                              | REQUIRED    |       |
| `command`         | []string                              | REQUIRED    |       |
| `workingDir`      | string                                | REQUIRED    |       |
| `env`             | [][`EnvVar`](#envvar)                 | REQUIRED    |       |
| `script`          | string 
| REQUIRED    |       |
| `securityContext` | [`SecurityContext`](#securitycontext) | REQUIRED    |       |

### Sidecar

Specifies a list of containers to run alongside the Steps in a Task. If sidecars are supported, the following fields are required:

| Field             | Type                                  | Requirement | Notes |
|-------------------|---------------------------------------|-------------|-------|
| `name`            | string                                | REQUIRED    | Name of the Sidecar specified as a DNS_LABEL. Each Sidecar in a Task must have a unique name (DNS_LABEL). Cannot be updated. |
| `image`           | string                                | REQUIRED    | [Container image name](https://kubernetes.io/docs/concepts/containers/images/#image-names) |
| `command`         | []string                              | REQUIRED    | Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. |
| `args`            | []string                              | REQUIRED    | Arguments to the entrypoint. The image's CMD is used if this is not provided. |
| `script`          | string                                | REQUIRED    | Script is the contents of an executable file to execute. If Script is not empty, the Sidecar cannot have a Command or Args. |
| `securityContext` | [`SecurityContext`](#securitycontext) | REQUIRED    | Defines the security options the Sidecar should be run with. 
|

### SecurityContext

All other fields derived from [core.v1/SecurityContext](https://pkg.go.dev/k8s.io/api/core/v1#SecurityContext) are OPTIONAL for the purposes of this spec.

| Field        | Type | Requirement | Notes                                                   |
|--------------|------|-------------|---------------------------------------------------------|
| `privileged` | bool | REQUIRED    | Run the container in privileged mode. Defaults to false |

### TaskResult

Defines a result produced by a Task

| Field         | Type                          | Requirement | Notes |
|---------------|-------------------------------|-------------|-------|
| `name`        | string                        | REQUIRED    | Declares the name by which a result is referenced. |
| `type`        | [`ResultsType`](#resultstype) | REQUIRED    | Type is the user-specified type of the result. The value `string` for this field is REQUIRED, and the values `array` and `object` are RECOMMENDED. |
| `description` | string                        | RECOMMENDED | Description of the result |
| `properties`  | map<string,PropertySpec>      | RECOMMENDED | `PropertySpec` is a type that defines the spec of an individual key. 
See how to define the `properties` section in the [example](../examples/v1/taskruns/beta/object-param-result.yaml). |

### ResultsType

ResultsType indicates the type of a result.

string enum, allowed values are `string`, `array`, and `object`. Supporting `string` is required while the other types are optional for conformance.

### PipelineSpec

Defines a Pipeline

| Field        | Type                                                              | Requirement | Notes |
|--------------|-------------------------------------------------------------------|-------------|-------|
| `params`     | [][`ParamSpec`](#paramspec)                                       | REQUIRED    | Params declares a list of input parameters that must be supplied when this Pipeline is run. |
| `tasks`      | [][`PipelineTask`](#pipelinetask)                                 | REQUIRED    | Tasks declares the graph of Tasks that execute when this Pipeline is run. |
| `results`    | [][`PipelineResult`](#pipelineresult)                             | REQUIRED    | Values that this pipeline can output once run. |
| `finally`    | [][`PipelineTask`](#pipelinetask)                                 | REQUIRED    | The list of Tasks that execute just before leaving the Pipeline. |
| `workspaces` | [][`PipelineWorkspaceDeclaration`](#pipelineworkspacedeclaration) | REQUIRED    | Workspaces declares a set of named workspaces that are expected to be provided by a PipelineRun. 
|

### PipelineTask

PipelineTask defines a task in a Pipeline, passing inputs from both `Params` and the outputs of previous tasks.

| Field        | Type                                                              | Requirement | Notes |
|--------------|-------------------------------------------------------------------|-------------|-------|
| `name`       | string                                                            | REQUIRED    | The name of this task within the context of a Pipeline. Used as a coordinate with the from and runAfter fields to establish the execution order of tasks relative to one another. |
| `taskRef`    | [`TaskRef`](#taskref)                                             | RECOMMENDED | TaskRef is a reference to a task definition. Mutually exclusive with TaskSpec |
| `taskSpec`   | [`TaskSpec`](#taskspec)                                           | REQUIRED    | TaskSpec is a specification of a task. Mutually exclusive with TaskRef |
| `runAfter`   | []string                                                          | REQUIRED    | RunAfter is the list of PipelineTask names that should be executed before this Task executes. (Used to force a specific ordering in graph execution.) |
| `params`     | [][`Param`](#param)                                               | REQUIRED    | Declares parameters passed to this task. 
|
| `workspaces` | [][`WorkspacePipelineTaskBinding`](#workspacepipelinetaskbinding) | REQUIRED    | Workspaces maps workspaces from the pipeline spec to the workspaces declared in the Task. |
| `timeout`    | int64                                                             | REQUIRED    | Time after which the TaskRun times out. Setting the timeout to 0 implies no timeout. There isn't a default max timeout set. |

### TaskRef

Refers to a Task. Tasks should be referred to either by name or by using the Remote Resolution framework.

| Field      | Type                | Requirement | Notes                   |
|------------|---------------------|-------------|-------------------------|
| `name`     | string              | RECOMMENDED | Name of the referent.   |
| `resolver` | string              | RECOMMENDED | A field of ResolverRef. |
| `params`   | [][`Param`](#param) | RECOMMENDED | A field of ResolverRef. |

### ResolverRef

| Field      | Type                | Requirement | Notes |
|------------|---------------------|-------------|-------|
| `resolver` | string              | RECOMMENDED | Resolver is the name of the resolver that should perform resolution of the referenced Tekton resource, such as "git". |
| `params`   | [][`Param`](#param) | RECOMMENDED | Contains the parameters used to identify the referenced Tekton resource. 
|

### Param

Provides a value for the named parameter.

| Field   | Type                        | Requirement | Notes |
|---------|-----------------------------|-------------|-------|
| `name`  | string                      | REQUIRED    |       |
| `value` | [`ParamValue`](#paramvalue) | REQUIRED    |       |

### ParamValue

A `ParamValue` may be a string, a list of strings, or a map of string to string.

### PipelineResult

| Field   | Type                          | Requirement | Notes |
|---------|-------------------------------|-------------|-------|
| `name`  | string                        | REQUIRED    |       |
| `type`  | [`ResultsType`](#resultstype) | REQUIRED    |       |
| `value` | [`ParamValue`](#paramvalue)   | REQUIRED    |       |

### TaskRunSpec

| Field                 | Type                                                | Requirement | Notes |
|-----------------------|-----------------------------------------------------|-------------|-------|
| `params`              | [][`Param`](#param)                                 | REQUIRED    |       |
| `taskRef`             | [`TaskRef`](#taskref)                               | REQUIRED    | 
|
| `taskSpec`            | [`TaskSpec`](#taskspec)                             | REQUIRED    |       |
| `workspaces`          | [][`WorkspaceBinding`](#workspacebinding)           | REQUIRED    |       |
| `timeout`             | string (duration)                                   | REQUIRED    | Time after which one retry attempt times out. Defaults to 1 hour. |
| `status`              | Enum:<br>- `""` (default)<br>- `"TaskRunCancelled"` | RECOMMENDED |       |
| `serviceAccountName`  | string                                              | RECOMMENDED | In the Kubernetes implementation, `serviceAccountName` refers to a Kubernetes `ServiceAccount` resource that is assumed to exist in the same namespace. 
Other implementations MAY interpret this string differently, and impose other requirements on specified values. |\n\n### TaskRunStatus\n\n| Field                | Type                                | Requirement | Notes                                                                                                                           |\n|----------------------|-------------------------------------|-------------|---------------------------------------------------------------------------------------------------------------------------------|\n| `conditions`         | [][`Condition`](#condition)         | REQUIRED    | Condition type `Succeeded` MUST be populated. See [Status Signalling](#status-signalling) for details. Other types are OPTIONAL |\n| `startTime`          | string                              | REQUIRED    | MUST be populated by the implementation, in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339).                                     |\n| `completionTime`     | string                              | REQUIRED    | MUST be populated by the implementation, in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339).                                     
|
| `taskSpec`           | [`TaskSpec`](#taskspec)             | REQUIRED    |       |
| `steps`              | [][`StepState`](#stepstate)         | REQUIRED    |       |
| `results`            | [][`TaskRunResult`](#taskrunresult) | REQUIRED    |       |
| `sidecars`           | [][`SidecarState`](#sidecarstate)   | RECOMMENDED |       |
| `observedGeneration` | int64                               | RECOMMENDED |       |

### Condition

| Field     | Type   | Requirement | Notes |
|-----------|--------|-------------|-------|
| `type`    | string | REQUIRED    | Required values: <br> `Succeeded`: specifies that the resource has finished.<br> Other OPTIONAL values: <br> `TaskRunResultsVerified` <br> `TrustedResourcesVerified` |
| `status`  | string | REQUIRED    | Valid values: <br> `"Unknown"` <br> `"True"` <br> `"False"` <br> (Also see [Status Signalling](#status-signalling)) |
| `reason`  | string | REQUIRED    | The reason for the condition's last transition. 
|
| `message` | string | RECOMMENDED | Message describing the status and reason. |

### StepState

| Field            | Type                                | Requirement | Notes                      |
|------------------|-------------------------------------|-------------|----------------------------|
| `name`           | string                              | REQUIRED    | Name of the StepState.     |
| `imageID`        | string                              | REQUIRED    | Image ID of the StepState. |
| `containerState` | [`ContainerState`](#containerstate) | REQUIRED    | State of the container.    |

### ContainerState

| Field         | Type                                                    | Requirement | Notes                                |
|---------------|---------------------------------------------------------|-------------|--------------------------------------|
| `waiting`*    | [`ContainerStateWaiting`](#containerstatewaiting)       | REQUIRED    | Details about a waiting container    |
| `running`*    | [`ContainerStateRunning`](#containerstaterunning)       | REQUIRED    | Details about a running container    |
| `terminated`* | [`ContainerStateTerminated`](#containerstateterminated) | REQUIRED    | Details about a terminated container |

\* Only one of `waiting`, `running` or `terminated` can be returned at a time.

### `ContainerStateRunning`

| Field Name   | Field Type | Requirement | Notes |
|--------------|------------|-------------|-------|
| `startedAt`* | string     | REQUIRED    | Time at which the container was last (re-)started. `startedAt` 
MUST be populated by the implementation, in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339). |\n\n### `ContainerStateWaiting`\n\n| Field Name | Field Type | Requirement | Notes                                                   |\n|------------|------------|-------------|---------------------------------------------------------|\n| `reason`   | string     | REQUIRED    | Reason the container is not yet running.                |\n| `message`  | string     | RECOMMENDED | Message regarding why the container is not yet running. |\n\n### `ContainerStateTerminated`\n\n| Field Name    | Field Type | Requirement | Notes                                                   |\n|---------------|------------|-------------|---------------------------------------------------------|\n| `exitCode`    | int32      | REQUIRED    | Exit status from the last termination of the container. |\n| `reason`      | string     | REQUIRED    | Reason from the last termination of the container.      |\n| `message`     | string     | RECOMMENDED | Message regarding the last termination of the container |\n| `startedAt`*  | string     | REQUIRED    | Time at which the container was last (re-)started.      |\n| `finishedAt`* | string     | REQUIRED    | Time at which the container last terminated.            
|\n\n\\* `startedAt` and `finishedAt` MUST be populated by the implementation, in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339).\n\n### TaskRunResult\n\n| Field   | Type                          | Requirement | Notes |\n|---------|-------------------------------|-------------|-------|\n| `name`  | string                        | REQUIRED    |       |\n| `type`  | [`ResultsType`](#resultstype) | REQUIRED    |       |\n| `value` | [`ParamValue`](#paramvalue)   | REQUIRED    |       |\n\n### SidecarState\n\n| Field            | Type                                | Requirement | Notes                         |\n|------------------|-------------------------------------|-------------|-------------------------------|\n| `name`           | string                              | RECOMMENDED | Name of the SidecarState.     |\n| `imageID`        | string                              | RECOMMENDED | Image ID of the SidecarState. |\n| `containerState` | [`ContainerState`](#containerstate) | RECOMMENDED | State of the container.       |\n\n### PipelineRunSpec\n\n| Field          | Type                                    | Requirement | Notes                                    |\n|----------------|-----------------------------------------|-------------|------------------------------------------|\n| `params`       | [][`Param`](#param)                     | REQUIRED    |                                          |\n| `pipelineRef`  | [`PipelineRef`](#pipelineref)           | RECOMMENDED |                                          |\n| `pipelineSpec` | [`PipelineSpec`](#pipelinespec)         | REQUIRED    |                                          |\n| `timeouts`     | [`TimeoutFields`](#timeoutfields)       | REQUIRED    | Time after which the Pipeline times out. 
|\n| `workspaces`   | [`WorkspaceBinding`](#workspacebinding) | REQUIRED    |                                          |\n\n### PipelineRef\n\n| Field      | Type              | Requirement | Note                    |\n|------------|-------------------|-------------|-------------------------|\n| `name`     | string            | RECOMMENDED | Name of the referent.   |\n| `resolver` | string            | RECOMMENDED | A field of ResolverRef. |\n| `params`   | [][Param](#param) | RECOMMENDED | A field of ResolverRef. |\n\n### PipelineRunStatus\n\n| Field             | Type                                            | Requirement | Notes                                                                                                                           |\n|-------------------|-------------------------------------------------|-------------|---------------------------------------------------------------------------------------------------------------------------------|\n| `conditions`      | [][`Condition`](#condition)                     | REQUIRED    | Condition type `Succeeded` MUST be populated. See [Status Signalling](#status-signalling) for details. Other types are OPTIONAL |\n| `startTime`       | string                                          | REQUIRED    | MUST be populated by the implementation, in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339).                                     |\n| `completionTime`  | string                                          | REQUIRED    | MUST be populated by the implementation, in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339).                                     
|\n| `pipelineSpec`    | [`PipelineSpec`](#pipelinespec)                 | RECOMMENDED | Resolved spec of the pipeline that was executed                                                                                 |\n| `results`         | [][`PipelineRunResult`](#pipelinerunresult)     | RECOMMENDED | Results produced from the pipeline                                                                                              |\n| `childReferences` | [][ChildStatusReference](#childstatusreference) | REQUIRED    | References to any child Runs created as part of executing the PipelineRun                                                       |\n\n### PipelineRunResult\n\n| Field   | Type                        | Requirement | Notes                                                               |\n|---------|-----------------------------|-------------|---------------------------------------------------------------------|\n| `name`  | string                      | RECOMMENDED | Name is the result's name as declared by the Pipeline               |\n| `value` | [`ParamValue`](#paramvalue) | RECOMMENDED | Value is the result returned from the execution of this PipelineRun |\n\n### ChildStatusReference\n\n| Field              | Type   | Requirement | Notes                                                                 |\n|--------------------|--------|-------------|-----------------------------------------------------------------------|\n| `Name`             | string | REQUIRED    | Name is the name of the TaskRun this is referencing.                  |\n| `PipelineTaskName` | string | REQUIRED    | PipelineTaskName is the name of the PipelineTask this is referencing. 
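As a non-normative sketch, the status types above might combine in a `PipelineRun` status as follows (names and values are hypothetical; serialized field casing follows the usual Kubernetes convention):

```yaml
# Hypothetical PipelineRun status fragment combining PipelineRunStatus,
# PipelineRunResult, and ChildStatusReference.
status:
  conditions:
    - type: Succeeded                     # MUST be populated; see Status Signalling
      status: "True"
      reason: Succeeded
      message: "Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0"
  startTime: "2024-01-15T10:00:00Z"       # RFC 3339
  completionTime: "2024-01-15T10:05:00Z"  # RFC 3339
  results:
    - name: image-digest                  # PipelineRunResult
      value: sha256:6c3c624b58dbbcd3c0dd82b4c53f04194d1247c6eb8b439760e9a9f2a0c2e8ee
  childReferences:
    - name: build-run-build               # ChildStatusReference: name of the child TaskRun
      pipelineTaskName: build
```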
|\n\n### TimeoutFields\n\n| Field      | Type              | Requirement | Notes                                                                                                                                                             |\n|------------|-------------------|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `Pipeline` | string (duration) | REQUIRED    | Pipeline sets the maximum allowed duration for execution of the entire pipeline. The sum of individual timeouts for tasks and finally must not exceed this value. |\n| `Tasks`    | string (duration) | REQUIRED    | Tasks sets the maximum allowed duration of this pipeline's tasks                                                                                                  |\n| `Finally`  | string (duration) | REQUIRED    | Finally sets the maximum allowed duration of this pipeline's finally                                                                                              |\n\n**string (duration)** :  A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as \"300ms\", \"-1.5h\" or \"2h45m\". Valid time units are \"ns\", \"us\" (or \"\u00b5s\"), \"ms\", \"s\", \"m\", \"h\"\n\n**Note:** Currently three keys are accepted in the map: pipeline, tasks and finally. Timeouts.pipeline >= Timeouts.tasks + Timeouts.finally\n\n### WorkspaceDeclaration\n\n| Field         | Type    | Requirement | Notes                                                         |\n|---------------|---------|-------------|---------------------------------------------------------------|\n| `name`        | string  | REQUIRED    | Name is the name by which you can bind the volume at runtime. 
|\n| `description` | string  | RECOMMENDED |                                                               |\n| `mountPath`   | string  | RECOMMENDED |                                                               |\n| `readOnly`    | boolean | RECOMMENDED | Defaults to false.                                            |\n\n### WorkspacePipelineTaskBinding\n\n| Field       | Type   | Requirement | Notes                                                           |\n|-------------|--------|-------------|-----------------------------------------------------------------|\n| `name`      | string | REQUIRED    | Name is the name of the workspace as declared by the task       |\n| `workspace` | string | REQUIRED    | Workspace is the name of the workspace declared by the pipeline |\n\n### PipelineWorkspaceDeclaration\n\n| Field  | Type   | Requirement | Notes                                                            |\n|--------|--------|-------------|------------------------------------------------------------------|\n| `name` | string | REQUIRED    | Name is the name of a workspace to be provided by a PipelineRun. 
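To show how the workspace types relate, here is a hypothetical sketch (resource names are invented): a `Pipeline` declares a workspace via a `PipelineWorkspaceDeclaration`, a `PipelineTask` maps it with a `WorkspacePipelineTaskBinding`, and a `PipelineRun` supplies it with a `WorkspaceBinding` (defined in the next section):

```yaml
# Hypothetical sketch; names are invented for illustration.
kind: Pipeline
spec:
  workspaces:
    - name: shared-data            # PipelineWorkspaceDeclaration
  tasks:
    - name: build
      taskRef:
        name: build-sources
      workspaces:
        - name: source             # workspace name declared by the task
          workspace: shared-data   # WorkspacePipelineTaskBinding
---
kind: PipelineRun
spec:
  pipelineRef:
    name: example-pipeline
  workspaces:
    - name: shared-data            # WorkspaceBinding supplied at runtime
      emptyDir: {}
```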
|\n\n### WorkspaceBinding\n\n| Field Name | Field Type   | Requirement | Notes |\n|------------|--------------|-------------|-------|\n| `name`     | string       | REQUIRED    |       |\n| `emptyDir` | empty struct | REQUIRED    |       |\n\n**NB:** All other Workspace types supported by the Kubernetes implementation are **OPTIONAL** for the purposes of this spec.\n### EnvVar\n\n| Field Name | Field Type | Requirement | Notes |\n|------------|------------|-------------|-------|\n| `name`     | string     | REQUIRED    |       |\n| `value`    | string     | REQUIRED    |       |\n\n**NB:** All other [EnvVar](https:\/\/godoc.org\/k8s.io\/api\/core\/v1#EnvVar) types inherited from [core.v1\/EnvVar](https:\/\/godoc.org\/k8s.io\/api\/core\/v1#EnvVar) and supported by the Kubernetes implementation (e.g., `valueFrom`) are **OPTIONAL** for the purposes of this spec.\n\n## Status Signalling\n <!-- wokeignore:rule=master --> \nThe Tekton Pipelines API uses the [Kubernetes Conditions convention](https:\/\/github.com\/kubernetes\/community\/blob\/master\/contributors\/devel\/sig-architecture\/api-conventions.md#typical-status-properties) to communicate status and errors to the user.\n\n`TaskRun`'s `status` field MUST have a `conditions` field, which must be a list of `Condition` objects of the following form:\n\n| Field                 | Type                                                          | Requirement |\n|-----------------------|---------------------------------------------------------------|-------------|\n| `type`                | string                                                        | REQUIRED    |\n| `status`              | Enum:<br>- `\"True\"`<br>- `\"False\"`<br>- `\"Unknown\"` (default) | REQUIRED    |\n| `reason`              | string                                                        | REQUIRED    |\n| `message`             | string                                                        | REQUIRED    |\n| `severity`            | Enum:<br>- 
`\"\"` (default)<br>- `\"Warning\"`<br>- `\"Info\"`      | REQUIRED    |\n| `lastTransitionTime`* | string                                                        | OPTIONAL    |\n\n\\* If `lastTransitionTime` is populated by the implementation, it must be in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339).\n\nAdditionally, the resource's `status.conditions` field MUST be managed as follows to enable clients to present useful diagnostic and error information to the user.\n\nIf a resource describes that it must report a Condition of the `type` `Succeeded`, then it must report it in the following manner:\n\n- If the `status` field is `\"True\"`, that means the execution finished successfully.\n- If the `status` field is `\"False\"`, that means the execution finished unsuccessfully -- the Condition's `reason` and `message` MUST include further diagnostic information.\n- If the `status` field is `\"Unknown\"`, that means the execution is still ongoing, and clients can check again later until the Condition's `status` reports either `\"True\"` or `\"False\"`.\n\nResources MAY report Conditions with other `type`s, but none are REQUIRED or RECOMMENDED.","site":"tekton","answers_cleaned":"  Tekton Pipelines API Specification       toc        Tekton Pipelines API Specification   tekton pipelines api specification       Abstract   abstract       Background   background       Modifying This Specification   modifying this specification       Resource Overview   v1   resource overview   v1          Task    task          Pipeline    pipeline          TaskRun    taskrun          PipelineRun    pipelinerun       Detailed Resource Types    v1    detailed resource types   v1         TypeMeta   typemeta         ObjectMeta   objectmeta         TaskSpec   taskspec         ParamSpec   paramspec         ParamType   paramtype         Step   step         Sidecar   sidecar         SecurityContext   securitycontext         TaskResult   taskresult         ResultsType   resultstype         
                                 runAfter        string                                                            REQUIRED      RunAfter is the list of PipelineTask names that should be executed before this Task executes   Used to force a specific ordering in graph execution                                    params            Param    param                                                  REQUIRED      Declares parameters passed to this task                                                                                                                                                workspaces        WorkspacePipelineTaskBinding    workspacepipelinetaskbinding    REQUIRED      Workspaces maps workspaces from the pipeline spec to the workspaces declared in the Task                                                                                               timeout       int64                                                               REQUIRED      Time after which the TaskRun times out   Setting the timeout to 0 implies no timeout  There isn t a default max timeout set                                                              TaskRef  Refers to a Task  Tasks should be referred either by a name or by using the Remote Resolution framework     Field        Type                  Requirement   Notes                                                                                                     name        string                RECOMMENDED   Name of the referent         resolver    string                RECOMMENDED   A field of ResolverRef       params          Param    param    RECOMMENDED   A field of ResolverRef         ResolverRef    Field        Type                  Requirement   Notes                                                                                                                                                                                                                                                                                   
              resolver    string                RECOMMENDED   Resolver is the name of the resolver that should perform resolution of the referenced Tekton resource  such as  git        params          Param    param    RECOMMENDED   Contains the parameters used to identify the referenced Tekton resource                                                      Param  Provides a value for the named paramter     Field     Type                          Requirement   Notes                                                                      name     string                        REQUIRED                 value      ParamValue    paramvalue    REQUIRED                   ParamValue  A  ParamValue  may be a string  a list of string  or a map of string to string       PipelineResult    Field     Type                            Requirement   Notes                                                                        name     string                          REQUIRED                 type       ResultsType    resultstype    REQUIRED                 value      ParamValue    paramvalue      REQUIRED                   TaskRunSpec    Field                   Type                                                  Requirement   Notes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                params                     Param    param                                    REQUIRED                                                               
                                                                                                                                                                                                                    taskRef                  TaskRef    taskref                                  REQUIRED                                                                                                                                                                                                                                                                                   taskSpec                 TaskSpec    taskspec                                REQUIRED                                                                                                                                                                                                                                                                                   workspaces                 WorkspaceBinding    workspacebinding              REQUIRED                                                                                                                                                                                                                                                                                   timeout                string  duration                                      REQUIRED      Time after which one retry attempt times out  Defaults to 1 hour                                                                                                                                                                                                             status                 Enum  br         default  br     TaskRunCancelled     RECOMMENDED                                                                                                                                                                                                                                                                    
            serviceAccountName     string                                                RECOMMENDED   In the Kubernetes implementation   serviceAccountName  refers to a Kubernetes  ServiceAccount  resource that is assumed to exist in the same namespace  Other implementations MAY interpret this string differently  and impose other requirements on specified values         TaskRunStatus    Field                  Type                                  Requirement   Notes                                                                                                                                                                                                                                                                                                                                               conditions                Condition    condition            REQUIRED      Condition type  Succeeded  MUST be populated  See  Status Signalling   status signalling  for details  Other types are OPTIONAL      startTime             string                                REQUIRED      MUST be populated by the implementation  in  RFC3339  https   tools ietf org html rfc3339                                            completionTime        string                                REQUIRED      MUST be populated by the implementation  in  RFC3339  https   tools ietf org html rfc3339                                            taskSpec                TaskSpec    taskspec                REQUIRED                                                                                                                                           steps                     StepState    stepstate            REQUIRED                                                                                                                                           results                   TaskRunResult    taskrunresult    REQUIRED                                                                                           
                                                sidecars                  SidecarState    sidecarstate      RECOMMENDED                                                                                                                                        observedGeneration    int64                                 RECOMMENDED                                                                                                                                          Condition    Field       Type     Requirement   Notes                                                                                                                                                                                                                                                                                                                                                                                     type       string   REQUIRED      Required values   br   Succeeded   specifies that the resource has finished  br  Other OPTIONAL values   br   TaskRunResultsVerified   br    TrustedResourcesVerified       status     string   REQUIRED      Valid values   br   UNKNOWN     br   TRUE   br   FALSE    Also see  Status Signalling   status signalling                                                                   reason     string   REQUIRED      The reason for the condition s last transition                                                                                                                              message    string   RECOMMENDED   Message describing the status and reason                                                                                                                                      StepState    Field              Type                                  Requirement   Notes                                                                                                                               name              string                                REQUIRED    
  Name of the StepState          imageID           string                                REQUIRED      Image ID of the StepState      containerState      ContainerState    containerstate    REQUIRED      State of the container           ContainerState    Field          Type                           Requirement   Notes                                                                                                                                          waiting         ContainerStateWaiting        REQUIRED      Details about a waiting container         running         ContainerStateRunning        REQUIRED      Details about a running container         terminated      ContainerStateTerminated     REQUIRED      Details about a terminated container       Only one of  waiting    running  or  terminated  can be returned at a time        ContainerStateRunning     Field Name     Field Type   Requirement   Notes                                                                                                                                                                                                                                                                                                                                                                  startedAt     string       REQUIRED      Time at which the container was last  re  started  startedAt  MUST be populated by the implementation  in  RFC3339  https   tools ietf org html rfc3339           ContainerStateWaiting     Field Name   Field Type   Requirement   Notes                                                                                                                                                            reason      string       REQUIRED      Reason the container is not yet running                      message     string       RECOMMENDED   Message regarding why the container is not yet running          ContainerStateTerminated     Field Name      Field Type   Requirement   Notes                   
                                                                                                                                            exitCode       int32        REQUIRED      Exit status from the last termination of the container       reason         string       REQUIRED      Reason from the last termination of the container            message        string       RECOMMENDED   Message regarding the last termination of the container      startedAt      string       REQUIRED      Time at which the container was last  re  started            finishedAt     string       REQUIRED      Time at which the container last terminated                    startedAt  and  finishedAt  MUST be populated by the implementation  in  RFC3339  https   tools ietf org html rfc3339        TaskRunResult    Field     Type                            Requirement   Notes                                                                        name     string                          REQUIRED                 type       ResultsType    resultstype    REQUIRED                 value      ParamValue    paramvalue      REQUIRED                   SidecarState    Field              Type                                  Requirement   Notes                                                                                                                                       name              string                                RECOMMENDED   Name of the SidecarState           imageID           string                                RECOMMENDED   Image ID of the SidecarState       containerState      ContainerState    containerstate    RECOMMENDED   State of the container               PipelineRunSpec    Field            Type                                      Requirement   Notes                                                                                                                                                               params              Param    param                        REQUIRED  
                                                  pipelineRef       PipelineRef    pipelineref              RECOMMENDED                                                 pipelineSpec      PipelineSpec    pipelinespec            REQUIRED                                                    timeouts          TimeoutFields    timeoutfields          REQUIRED      Time after which the Pipeline times out       workspaces        WorkspaceBinding    workspacebinding    REQUIRED                                                      PipelineRef    Field        Type                Requirement   Note                                                                                                    name        string              RECOMMENDED   Name of the referent         resolver    string              RECOMMENDED   A field of ResolverRef       params         Param   param    RECOMMENDED   A field of ResolverRef         PipelineRunStatus    Field               Type                                              Requirement   Notes                                                                                                                                                                                                                                                                                                                                                        conditions             Condition    condition                        REQUIRED      Condition type  Succeeded  MUST be populated  See  Status Signalling   status signalling  for details  Other types are OPTIONAL      startTime          string                                            REQUIRED      MUST be populated by the implementation  in  RFC3339  https   tools ietf org html rfc3339                                            completionTime     string                                            REQUIRED      MUST be populated by the implementation  in  RFC3339  https   tools ietf org html rfc3339                         
                   pipelineSpec         PipelineSpec    pipelinespec                    RECOMMEDED    Resolved spec of the pipeline that was executed                                                                                      results                PipelineRunResult    pipelinerunresult        RECOMMENDED   Results produced from the pipeline                                                                                                   childReferences       ChildStatusReference   childstatusreference    REQUIRED      References to any child Runs created as part of executing the pipelinerun                                                              PipelineRunResult    Field     Type                          Requirement   Notes                                                                                                                                                                                                  name     string                        RECOMMENDED   Name is the result s name as declared by the Pipeline                    value      ParamValue    paramvalue    RECOMMENDED   Value is the result returned from the execution of this PipelineRun        ChildStatusReference    Field                Type     Requirement   Notes                                                                                                                                                                                            Name                string   REQUIRED      Name is the name of the TaskRun this is referencing                        PipelineTaskName    string   REQUIRED      PipelineTaskName is the name of the PipelineTask this is referencing         TimeoutFields    Field        Type                Requirement   Notes                                                                                                                                                                                                                                           
                                                                                                                                            Pipeline    string  duration    REQUIRED      Pipeline sets the maximum allowed duration for execution of the entire pipeline  The sum of individual timeouts for tasks and finally must not exceed this value       Tasks       string  duration    REQUIRED      Tasks sets the maximum allowed duration of this pipeline s tasks                                                                                                       Finally     string  duration    REQUIRED      Finally sets the maximum allowed duration of this pipeline s finally                                                                                                   string  duration       A duration string is a possibly signed sequence of decimal numbers  each with optional fraction and a unit suffix  such as  300ms     1 5h  or  2h45m   Valid time units are  ns    us   or   s     ms    s    m    h     Note    Currently three keys are accepted in the map  pipeline  tasks and finally  Timeouts pipeline    Timeouts tasks   Timeouts finally      WorkspaceDeclaration    Field           Type      Requirement   Notes                                                                                                                                                                        name           string    REQUIRED      Name is the name by which you can bind the volume at runtime       description    string    RECOMMENDED                                                                      mountPath      string    RECOMMENDED                                                                      readOnly       boolean   RECOMMENDED   Defaults to false                                                    WorkspacePipelineTaskBinding    Field         Type     Requirement   Notes                                                                                                    
                                                                     name         string   REQUIRED      Name is the name of the workspace as declared by the task            workspace    string   REQUIRED      Workspace is the name of the workspace declared by the pipeline        PipelineWorkspaceDeclaration    Field    Type     Requirement   Notes                                                                                                                                                                      name    string   REQUIRED      Name is the name of a workspace to be provided by a PipelineRun         WorkspaceBinding    Field Name   Field Type     Requirement   Notes                                                          name        string         REQUIRED                 emptyDir    empty struct   REQUIRED                 NB    All other Workspace types supported by the Kubernetes implementation are   OPTIONAL   for the purposes of this spec      EnvVar    Field Name   Field Type   Requirement   Notes                                                        name        string       REQUIRED                 value       string       REQUIRED                 NB    All other  EnvVar  https   godoc org k8s io api core v1 EnvVar  types inherited from  core v1 EnvVar  https   godoc org k8s io api core v1 EnvVar  and supported by the Kubernetes implementation  e g    valueFrom   are   OPTIONAL   for the purposes of this spec      Status Signalling       wokeignore rule master      The Tekton Pipelines API uses the  Kubernetes Conditions convention  https   github com kubernetes community blob master contributors devel sig architecture api conventions md typical status properties  to communicate status and errors to the user    TaskRun  s  status  field MUST have a  conditions  field  which must be a list of  Condition  objects of the following form     Field                   Type                                                            Requirement            
                                                                                                  type                   string                                                          REQUIRED         status                 Enum  br     True   br     False   br     Unknown    default    REQUIRED         reason                 string                                                          REQUIRED         message                string                                                          REQUIRED         severity               Enum  br         default  br     Warning   br     Info          REQUIRED         lastTransitionTime     string                                                          OPTIONAL          If  lastTransitionTime  is populated by the implementation  it must be in  RFC3339  https   tools ietf org html rfc3339    Additionally  the resource s  status conditions  field MUST be managed as follows to enable clients to present useful diagnostic and error information to the user   If a resource describes that it must report a Condition of the  type   Succeeded   then it must report it in the following manner     If the  status  field is   True    that means the execution finished successfully    If the  status  field is   False    that means the execution finished unsuccessfully    the Condition s  reason  and  message  MUST include further diagnostic information    If the  status  field is   Unknown    that means the execution is still ongoing  and clients can check again later until the Condition s  status  reports either   True   or   False     Resources MAY report Conditions with other  type s  but none are REQUIRED or RECOMMENDED "}
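The run-level types above (`PipelineRunSpec`, `TimeoutFields`, `WorkspaceBinding`, `Param`/`ParamValue`) can be sketched together in one minimal manifest. This is an illustrative sketch only: the names `hello-pipeline`, `hello-pipeline-run`, `greeting`, and `shared-data` are assumptions, not part of the spec.

```yaml
# Minimal PipelineRun sketch exercising the fields described above.
# All resource and parameter names here are illustrative assumptions.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: hello-pipeline-run
spec:
  params:                  # PipelineRunSpec.params -> []Param
    - name: greeting
      value: "hello"       # ParamValue given as a plain string
  pipelineRef:             # PipelineRef, referring to the Pipeline by name
    name: hello-pipeline
  timeouts:                # TimeoutFields: duration strings, three accepted keys
    pipeline: "1h"         # maximum for the entire pipeline
    tasks: "45m"           # tasks + finally must not exceed the pipeline value
    finally: "15m"
  workspaces:              # WorkspaceBinding: emptyDir is the REQUIRED binding type
    - name: shared-data
      emptyDir: {}
```

Note how `timeouts` uses the lowercase map keys (`pipeline`, `tasks`, `finally`) called out in the `TimeoutFields` note, and how `emptyDir` is the one workspace type every conformant implementation must support.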
{"questions":"tekton remained in alpha while the other resource kinds were promoted to beta Replacing PipelineResources with Tasks Replacing PipelineResources with Tasks weight 207","answers":"<!--\n---\nlinkTitle: \"Replacing PipelineResources with Tasks\"\nweight: 207\n---\n-->\n\n## Replacing PipelineResources with Tasks\n\n`PipelineResources` remained in alpha while the other resource kinds were promoted to beta.\nSince then, **`PipelineResources` have been removed**.\nRead more about the deprecation in [TEP-0074](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0074-deprecate-pipelineresources.md).\n\n_More on the reasoning and what's left to do in\n[Why aren't PipelineResources in Beta?](resources.md#why-aren-t-pipelineresources-in-beta)._\n\nTo ease migration away from `PipelineResources`\n[some types have an equivalent `Task` in the Catalog](#replacing-pipelineresources-with-tasks).\nTo use these replacement `Tasks` you will need to combine them with your existing `Tasks` via a `Pipeline`.\n\nFor example, if you were using this `Task` which was fetching from `git` and building with\n`Kaniko`:\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: Task\nmetadata:\n  name: build-push-kaniko\nspec:\n  inputs:\n    resources:\n      - name: workspace\n        type: git\n    params:\n      - name: pathToDockerFile\n        description: The path to the dockerfile to build\n        default: \/workspace\/workspace\/Dockerfile\n      - name: pathToContext\n        description: The build context used by Kaniko\n        default: \/workspace\/workspace\n  outputs:\n    resources:\n      - name: builtImage\n        type: image\n  steps:\n    - name: build-and-push\n      image: gcr.io\/kaniko-project\/executor:v0.17.1\n      env:\n        - name: \"DOCKER_CONFIG\"\n          value: \"\/tekton\/home\/.docker\/\"\n      args:\n        - --dockerfile=$(inputs.params.pathToDockerFile)\n        - --destination=$(outputs.resources.builtImage.url)\n        - 
--context=$(inputs.params.pathToContext)\n        - --oci-layout-path=$(inputs.resources.builtImage.path)\n      securityContext:\n        runAsUser: 0\n```\n\nTo do the same thing with the `git` catalog `Task` and the kaniko `Task` you will need to combine them in a\n`Pipeline`.\n\nFor example this Pipeline uses the Kaniko and `git` catalog Tasks:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: kaniko-pipeline\nspec:\n  params:\n    - name: git-url\n    - name: git-revision\n    - name: image-name\n    - name: path-to-image-context\n    - name: path-to-dockerfile\n  workspaces:\n    - name: git-source\n  tasks:\n    - name: fetch-from-git\n      taskRef:\n        name: git-clone\n      params:\n        - name: url\n          value: $(params.git-url)\n        - name: revision\n          value: $(params.git-revision)\n      workspaces:\n        - name: output\n          workspace: git-source\n    - name: build-image\n      taskRef:\n        name: kaniko\n      params:\n        - name: IMAGE\n          value: $(params.image-name)\n        - name: CONTEXT\n          value: $(params.path-to-image-context)\n        - name: DOCKERFILE\n          value: $(params.path-to-dockerfile)\n      workspaces:\n        - name: source\n          workspace: git-source\n  # If you want you can add a Task that uses the IMAGE_DIGEST from the kaniko task\n  # via $(tasks.build-image.results.IMAGE_DIGEST) - this was a feature we hadn't been\n  # able to fully deliver with the Image PipelineResource!\n```\n\n_Note that [the `image` `PipelineResource` is gone in this example](#replacing-an-image-resource) (replaced with\na [`result`](tasks.md#emitting-results)), and also that now the `Task` doesn't need to know anything\nabout where the files come from that it builds from._\n\n### Replacing a `git` resource\n\nYou can replace a `git` resource with the [`git-clone` Catalog `Task`](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/git-clone).\n\n### 
Replacing a `pullrequest` resource\n\nYou can replace a `pullrequest` resource with the [`pullrequest` Catalog `Task`](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/pull-request).\n\n### Replacing a `gcs` resource\n\nYou can replace a `gcs` resource with the [`gcs` Catalog `Task`](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/gcs-generic).\n\n### Replacing an `image` resource\n\nSince the `image` resource is simply a way to share the digest of a built image with subsequent\n`Tasks` in your `Pipeline`, you can use [`Task` results](tasks.md#emitting-results) to\nachieve equivalent functionality.\n\nFor examples of replacing an `image` resource, see the following Catalog `Tasks`:\n\n- The [Kaniko Catalog `Task`](https:\/\/github.com\/tektoncd\/catalog\/blob\/v1beta1\/kaniko\/)\n  illustrates how to write the digest of an image to a result.\n- The [Buildah Catalog `Task`](https:\/\/github.com\/tektoncd\/catalog\/blob\/v1beta1\/buildah\/)\n  illustrates how to accept an image digest as a parameter.\n\n### Replacing a `cluster` resource\n\nYou can replace a `cluster` resource with the [`kubeconfig-creator` Catalog `Task`](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/kubeconfig-creator).\n\n### Replacing a `cloudEvent` resource\n\nYou can replace a `cloudEvent` resource with the [`CloudEvent` Catalog `Task`](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/cloudevent).","site":"tekton","answers_cleaned":"         linkTitle   Replacing PipelineResources with Tasks  weight  207             Replacing PipelineResources with Tasks   PipelineResources  remained in alpha while the other resource kinds were promoted to beta  Since then     PipelineResources  have been removed    Read more about the deprecation in  TEP 0074  https   github com tektoncd community blob main teps 0074 deprecate pipelineresources md     More on the reasoning and what s left to do in  Why aren t PipelineResources in Beta   resources md why aren t 
pipelineresources in beta     To ease migration away from  PipelineResources   some types have an equivalent  Task  in the Catalog   replacing pipelineresources with tasks   To use these replacement  Tasks  you will need to combine them with your existing  Tasks  via a  Pipeline    For example  if you were using this  Task  which was fetching from  git  and building with  Kaniko       yaml apiVersion  tekton dev v1alpha1 kind  Task metadata    name  build push kaniko spec    inputs      resources          name  workspace         type  git     params          name  pathToDockerFile         description  The path to the dockerfile to build         default   workspace workspace Dockerfile         name  pathToContext         description  The build context used by Kaniko         default   workspace workspace   outputs      resources          name  builtImage         type  image   steps        name  build and push       image  gcr io kaniko project executor v0 17 1       env            name   DOCKER CONFIG            value    tekton home  docker         args              dockerfile   inputs params pathToDockerFile              destination   outputs resources builtImage url              context   inputs params pathToContext              oci layout path   inputs resources builtImage path        securityContext          runAsUser  0      To do the same thing with the  git  catalog  Task  and the kaniko  Task  you will need to combine them in a  Pipeline    For example this Pipeline uses the Kaniko and  git  catalog Tasks      yaml apiVersion  tekton dev v1beta1 kind  Pipeline metadata    name  kaniko pipeline spec    params        name  git url       name  git revision       name  image name       name  path to image context       name  path to dockerfile   workspaces        name  git source   tasks        name  fetch from git       taskRef          name  git clone       params            name  url           value    params git url            name  revision           value   
 params git revision        workspaces            name  output           workspace  git source       name  build image       taskRef          name  kaniko       params            name  IMAGE           value    params image name            name  CONTEXT           value    params path to image context            name  DOCKERFILE           value    params path to dockerfile        workspaces            name  source           workspace  git source     If you want you can add a Task that uses the IMAGE DIGEST from the kaniko task     via   tasks build image results IMAGE DIGEST    this was a feature we hadn t been     able to fully deliver with the Image PipelineResource        Note that  the  image   PipelineResource  is gone in this example   replacing an image resource   replaced with a   result   tasks md emitting results    and also that now the  Task  doesn t need to know anything about where the files come from that it builds from        Replacing a  git  resource  You can replace a  git  resource with the   git clone  Catalog  Task   https   github com tektoncd catalog tree main task git clone        Replacing a  pullrequest  resource  You can replace a  pullrequest  resource with the   pullrequest  Catalog  Task   https   github com tektoncd catalog tree main task pull request        Replacing a  gcs  resource  You can replace a  gcs  resource with the   gcs  Catalog  Task   https   github com tektoncd catalog tree main task gcs generic        Replacing an  image  resource  Since the  image  resource is simply a way to share the digest of a built image with subsequent  Tasks  in your  Pipeline   you can use   Task  results  tasks md emitting results  to achieve equivalent functionality   For examples of replacing an  image  resource  see the following Catalog  Tasks      The  Kaniko Catalog  Task   https   github com tektoncd catalog blob v1beta1 kaniko     illustrates how to write the digest of an image to a result    The  Buildah Catalog  Task   https   
github com tektoncd catalog blob v1beta1 buildah     illustrates how to accept an image digest as a parameter       Replacing a  cluster  resource  You can replace a  cluster  resource with the   kubeconfig creator  Catalog  Task   https   github com tektoncd catalog tree main task kubeconfig creator        Replacing a  cloudEvent  resource  You can replace a  cloudEvent  resource with the   CloudEvent  Catalog  Task   https   github com tektoncd catalog tree main task cloudevent  "}
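The migration recipe in the record above (the `git-clone` and `kaniko` catalog Tasks combined in a Pipeline, with the `image` resource replaced by a Task result) can be sanity-checked off-cluster. A minimal Python sketch, assuming the task and param names from the doc's example, that assembles the same manifest as a dict and builds the `$(tasks.<task>.results.<result>)` reference used to consume `IMAGE_DIGEST`:

```python
# Sketch (illustrative, not part of the dataset record above): the
# replacement Pipeline from the migration doc as plain Python dicts.

def task_result_ref(task_name: str, result_name: str) -> str:
    """Build the variable used to consume another task's result,
    e.g. the kaniko task's IMAGE_DIGEST."""
    return f"$(tasks.{task_name}.results.{result_name})"

def kaniko_pipeline() -> dict:
    """The kaniko-pipeline example: fetch-from-git feeds build-image
    through the shared git-source workspace."""
    return {
        "apiVersion": "tekton.dev/v1beta1",
        "kind": "Pipeline",
        "metadata": {"name": "kaniko-pipeline"},
        "spec": {
            "params": [{"name": n} for n in (
                "git-url", "git-revision", "image-name",
                "path-to-image-context", "path-to-dockerfile")],
            "workspaces": [{"name": "git-source"}],
            "tasks": [
                {
                    "name": "fetch-from-git",
                    "taskRef": {"name": "git-clone"},
                    "params": [
                        {"name": "url", "value": "$(params.git-url)"},
                        {"name": "revision", "value": "$(params.git-revision)"},
                    ],
                    "workspaces": [{"name": "output", "workspace": "git-source"}],
                },
                {
                    "name": "build-image",
                    "taskRef": {"name": "kaniko"},
                    "params": [
                        {"name": "IMAGE", "value": "$(params.image-name)"},
                        {"name": "CONTEXT", "value": "$(params.path-to-image-context)"},
                        {"name": "DOCKERFILE", "value": "$(params.path-to-dockerfile)"},
                    ],
                    "workspaces": [{"name": "source", "workspace": "git-source"}],
                },
            ],
        },
    }

print(task_result_ref("build-image", "IMAGE_DIGEST"))
# -> $(tasks.build-image.results.IMAGE_DIGEST)
```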
{"questions":"tekton Get started with Resolvers weight 103 Getting Started with Resolvers","answers":"<!--\n---\nlinkTitle: \"Get started with Resolvers\"\nweight: 103\n---\n-->\n\n\n# Getting Started with Resolvers\n\n## Introduction\n\nThis guide will take you from an empty Kubernetes cluster to a\nfunctioning Tekton Pipelines installation and a PipelineRun executing\nwith a Pipeline stored in a git repo.\n\n## Prerequisites\n\n- A computer with\n  [`kubectl`](https:\/\/kubernetes.io\/docs\/tasks\/tools\/#kubectl).\n- A Kubernetes cluster running at least Kubernetes 1.28. A [`kind`\n  cluster](https:\/\/kind.sigs.k8s.io\/docs\/user\/quick-start\/#installation)\n  should work fine for following the guide on your local machine.\n- An image registry that you can push images to. If you're using `kind`\n  make sure your `KO_DOCKER_REPO` environment variable is set to\n  `kind.local`.\n- A publicly available git repository where you can put a pipeline yaml\n  file.\n\n## Step 1: Install Tekton Pipelines and the Resolvers\n\nSee [the installation instructions for Tekton Pipeline](.\/install.md#installing-tekton-pipelines-on-kubernetes), and\n[the installation instructions for the built-in resolvers](.\/install.md#installing-and-configuring-remote-task-and-pipeline-resolution).\n\n## Step 2: Ensure Pipelines is configured to enable resolvers\n\nStarting with v0.41.0, remote resolvers for Tekton Pipelines are enabled by default, \nbut can be disabled via feature flags in the `resolvers-feature-flags` configmap in \nthe `tekton-pipelines-resolvers` namespace. 
Check that configmap to verify that the\nresolvers you wish to have enabled are set to `\"true\"`.\n\nThe feature flags for the built-in resolvers are:\n\n* The `bundles` resolver: `enable-bundles-resolver`\n* The `git` resolver: `enable-git-resolver`\n* The `hub` resolver: `enable-hub-resolver`\n* The `cluster` resolver: `enable-cluster-resolver`\n\n## Step 3: Try it out!\n\nIn order to test out your install you'll need a Pipeline stored in a\npublic git repository. First cd into a clone of your repo and then\ncreate a new branch:\n\n```sh\n# checkout a new branch in the public repo you're using\ngit checkout -b add-a-simple-pipeline\n```\n\nThen create a basic pipeline:\n\n```sh\ncat <<\"EOF\" > pipeline.yaml\nkind: Pipeline\napiVersion: tekton.dev\/v1beta1\nmetadata:\n  name: a-simple-pipeline\nspec:\n  params:\n  - name: username\n  tasks:\n  - name: task-1\n    params:\n    - name: username\n      value: $(params.username)\n    taskSpec:\n      params:\n      - name: username\n      steps:\n      - image: alpine:3.15\n        script: |\n          echo \"hello $(params.username)\"\nEOF\n```\n\nCommit the pipeline and push it to your git repo:\n\n```sh\ngit add .\/pipeline.yaml\ngit commit -m \"Add a basic pipeline to test Tekton Pipeline remote resolution\"\n\n# push to your publicly accessible repository, replacing origin with\n# your git remote's name\ngit push origin add-a-simple-pipeline\n```\n\nAnd finally create a `PipelineRun` that uses your pipeline:\n\n```sh\n# first assign your public repo's url to an environment variable\nREPO_URL=# insert your repo's url here\n\n# create a pipelinerun yaml file\ncat <<EOF > pipelinerun.yaml\nkind: PipelineRun\napiVersion: tekton.dev\/v1beta1\nmetadata:\n  name: run-basic-pipeline-from-git\nspec:\n  pipelineRef:\n    resolver: git\n    params:\n    - name: url\n      value: ${REPO_URL}\n    - name: revision\n      value: add-a-simple-pipeline\n    - name: pathInRepo\n      value: pipeline.yaml\n  params:\n  - name: 
username\n    value: liza\nEOF\n\n# execute the pipelinerun\nkubectl apply -f .\/pipelinerun.yaml\n```\n\n## Step 4: Monitor the PipelineRun\n\nFirst let's watch the PipelineRun to see if it succeeds:\n\n```sh\nkubectl get pipelineruns -w\n```\n\nShortly the PipelineRun should move into a Succeeded state.\n\nNow we can check the logs of the PipelineRun's only task:\n\n```sh\nkubectl logs run-basic-pipeline-from-git-task-1-pod\n# This should print \"hello liza\"\n```\n\n---\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).","site":"tekton","answers_cleaned":"         linkTitle   Get started with Resolvers  weight  103             Getting Started with Resolvers     Introduction  This guide will take you from an empty Kubernetes cluster to a functioning Tekton Pipelines installation and a PipelineRun executing with a Pipeline stored in a git repo      Prerequisites    A computer with     kubectl   https   kubernetes io docs tasks tools  kubectl     A Kubernetes cluster running at least Kubernetes 1 28  A   kind    cluster  https   kind sigs k8s io docs user quick start  installation    should work fine for following the guide on your local machine    An image registry that you can push images to  If you re using  kind    make sure your  KO DOCKER REPO  environment variable is set to    kind local     A publicly available git repository where you can put a pipeline yaml   file      Step 1  Install Tekton Pipelines and the Resolvers  See  the installation instructions for Tekton Pipeline    install md installing tekton pipelines on kubernetes   and  the installation instructions for the built in resolvers    install md installing and configuring remote task and pipeline resolution       Step 2  Ensure Pipelines is configured to enable 
resolvers  Starting with v0 41 0  remote resolvers for Tekton Pipelines are enabled by default   but can be disabled via feature flags in the  resolvers feature flags  configmap in  the  tekton pipelines resolvers  namespace  Check that configmap to verify that the resolvers you wish to have enabled are set to   true     The feature flags for the built in resolvers are     The  bundles  resolver   enable bundles resolver    The  git  resolver   enable git resolver    The  hub  resolver   enable hub resolver    The  cluster  resolver   enable cluster resolver      Step 3  Try it out   In order to test out your install you ll need a Pipeline stored in a public git repository  First cd into a clone of your repo and then create a new branch      sh   checkout a new branch in the public repo you re using git checkout  b add a simple pipeline      Then create a basic pipeline      sh cat    EOF    pipeline yaml kind  Pipeline apiVersion  tekton dev v1beta1 metadata    name  a simple pipeline spec    params      name  username   tasks      name  task 1     params        name  username       value    params username      taskSpec        params          name  username       steps          image  alpine 3 15         script              echo  hello   params username   EOF      Commit the pipeline and push it to your git repo      sh git add   pipeline yaml git commit  m  Add a basic pipeline to test Tekton Pipeline remote resolution     push to your publicly accessible repository  replacing origin with   your git remote s name git push origin add a simple pipeline      And finally create a  PipelineRun  that uses your pipeline      sh   first assign your public repo s url to an environment variable REPO URL   insert your repo s url here    create a pipelinerun yaml file cat   EOF   pipelinerun yaml kind  PipelineRun apiVersion  tekton dev v1beta1 metadata    name  run basic pipeline from git spec    pipelineRef      resolver  git     params        name  url       value    
REPO URL        name  revision       value  add a simple pipeline       name  pathInRepo       value  pipeline yaml   params      name  username     value  liza EOF    execute the pipelinerun kubectl apply  f   pipelinerun yaml         Step 4  Monitor the PipelineRun  First let s watch the PipelineRun to see if it succeeds      sh kubectl get pipelineruns  w      Shortly the PipelineRun should move into a Succeeded state   Now we can check the logs of the PipelineRun s only task      sh kubectl logs run basic pipeline from git task 1 pod   This should print  hello liza            Except as otherwise noted  the content of this page is licensed under the  Creative Commons Attribution 4 0 License  https   creativecommons org licenses by 4 0    and code samples are licensed under the  Apache 2 0 License  https   www apache org licenses LICENSE 2 0  "}
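The `PipelineRun` heredoc in the getting-started record above can also be generated programmatically, which makes the git-resolver params (`url`, `revision`, `pathInRepo`) easy to assert on before applying. A minimal Python sketch; the repo URL below is a placeholder, not a real repository:

```python
# Sketch: build the run-basic-pipeline-from-git PipelineRun from the
# resolvers guide as a dict, mirroring `pipelineRef.resolver: git`.

def git_resolver_pipelinerun(repo_url: str,
                             revision: str = "add-a-simple-pipeline",
                             path_in_repo: str = "pipeline.yaml") -> dict:
    return {
        "apiVersion": "tekton.dev/v1beta1",
        "kind": "PipelineRun",
        "metadata": {"name": "run-basic-pipeline-from-git"},
        "spec": {
            "pipelineRef": {
                "resolver": "git",
                "params": [
                    {"name": "url", "value": repo_url},
                    {"name": "revision", "value": revision},
                    {"name": "pathInRepo", "value": path_in_repo},
                ],
            },
            # Runtime params for the pipeline itself, per the guide.
            "params": [{"name": "username", "value": "liza"}],
        },
    }

run = git_resolver_pipelinerun("https://example.com/you/your-repo.git")
print(run["spec"]["pipelineRef"]["resolver"])
# -> git
```

Serializing this dict to YAML and piping it to `kubectl apply -f -` is equivalent to the heredoc in the guide.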
{"questions":"tekton weight 402 Tekton Bundle Contract v0 1 When using a Tekton Bundle in a task or pipeline reference the OCI artifact backing the Tekton Bundles Contract","answers":"<!--\n---\nlinkTitle: \"Tekton Bundles Contract\"\nweight: 402\n---\n-->\n\n# Tekton Bundle Contract v0.1\n\nWhen using a Tekton Bundle in a task or pipeline reference, the OCI artifact backing the\nbundle must adhere to the following contract.\n\n## Contract\n\nOnly Tekton CRDs (eg, `Task` or `Pipeline`) may reside in a Tekton Bundle used as a Tekton\nbundle reference.\n\nEach layer of the image must map 1:1 with a single Tekton resource (eg Task).\n\n*No more than 20* individual layers (Pipelines and\/or Tasks) may be placed in a single image.\n\nEach layer must contain all of the following annotations:\n\n- `dev.tekton.image.name` => `ObjectMeta.Name` of the resource\n- `dev.tekton.image.kind` => `TypeMeta.Kind` of the resource, all lower-cased and singular (eg, `task`)\n- `dev.tekton.image.apiVersion` => `TypeMeta.APIVersion` of the resource (eg \n\"tekton.dev\/v1beta1\")  \n\nThe union of the { `dev.tekton.image.apiVersion`, `dev.tekton.image.kind`, `dev.tekton.image.name` }\nannotations on a given layer must be unique among all layers of that image. In practical terms, this means no two\n\"tasks\" can have the same name for example.\n\nEach layer must be compressed and stored with a supported OCI MIME type *except* for `+zstd` types. For a list of the \nsupported types see \n<!-- wokeignore:rule=master --> \n[the official spec](https:\/\/github.com\/opencontainers\/image-spec\/blob\/master\/layer.md#zstd-media-types).\n \nFurthermore, each layer must contain a YAML or JSON representation of the underlying resource. If the resource is \nmissing any identifying fields (missing an `apiVersion` for instance) then it will be considered invalid.\n\nAny tool creating a Tekton bundle must enforce this format and ensure that the annotations and contents all match and\nconform to this spec. 
Additionally, the Tekton controller will reject non-conforming Tekton Bundles.\n\n## Examples\n\nSay you wanted to create a Tekton Bundle out of the following resources: \n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: foo\n---\napiVersion: tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: bar\n---\napiVersion: tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: foobar\n```\n\nIf we imagine what the contents of the resulting bundle look like, it would look something like this (YAML is just for \nillustrative purposes):\n```\n# my-bundle\nlayers:\n  - annotations:\n    - name: \"dev.tekton.image.name\"\n      value: \"foo\"\n    - name: \"dev.tekton.image.kind\"\n      value: \"Task\"\n    - name: \"dev.tekton.image.apiVersion\"\n      value: \"tekton.dev\/v1beta1\"\n    contents: <compressed bytes of Task object>\n  - annotations:\n    - name: \"dev.tekton.image.name\"\n      value: \"bar\"\n    - name: \"dev.tekton.image.kind\"\n      value: \"Task\"\n    - name: \"dev.tekton.image.apiVersion\"\n      value: \"tekton.dev\/v1beta1\"\n    contents: <compressed bytes of Task object>\n  - annotations:\n    - name: \"dev.tekton.image.name\"\n      value: \"foobar\"\n    - name: \"dev.tekton.image.kind\"\n      value: \"Pipeline\"\n    - name: \"dev.tekton.image.apiVersion\"\n      value: \"tekton.dev\/v1beta1\"\n    contents: <compressed bytes of Pipeline object>\n```\n\n---\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).","site":"tekton","answers_cleaned":"         linkTitle   Tekton Bundles Contract  weight  402            Tekton Bundle Contract v0 1  When using a Tekton Bundle in a task or pipeline reference  the OCI artifact backing the bundle must adhere to the following contract      Contract  
Only Tekton CRDs  eg   Task  or  Pipeline   may reside in a Tekton Bundle used as a Tekton bundle reference   Each layer of the image must map 1 1 with a single Tekton resource  eg Task     No more than 20  individual layers  Pipelines and or Tasks  maybe placed in a single image   Each layer must contain all of the following annotations      dev tekton image name      ObjectMeta Name  of the resource    dev tekton image kind      TypeMeta Kind  of the resource  all lower cased and singular  eg   task      dev tekton image apiVersion      TypeMeta APIVersion  of the resource  eg   tekton dev v1beta1      The union of the    dev tekton image apiVersion    dev tekton image kind    dev tekton image name    annotations on a given layer must be unique among all layers of that image  In practical terms  this means no two  tasks  can have the same name for example   Each layer must be compressed and stored with a supported OCI MIME type  except  for   zstd  types  For list of the  supported types see       wokeignore rule master       the official spec  https   github com opencontainers image spec blob master layer md zstd media types     Furthermore  each layer must contain a YAML or JSON representation of the underlying resource  If the resource is  missing any identifying fields  missing an  apiVersion  for instance  then it will be considered invalid   Any tool creating a Tekton bundle must enforce this format and ensure that the annotations and contents all match and conform to this spec  Additionally  the Tekton controller will reject non conforming Tekton Bundles      Examples  Say you wanted to create a Tekton Bundle out of the following resources       yaml apiVersion  tekton dev v1beta1 kind  Task metadata    name  foo     apiVersion  tekton dev v1beta1 kind  Task metadata    name  bar     apiVersion  tekton dev v1beta1 kind  Pipeline metadata    name  foobar      If we imagine what the contents of the resulting bundle look like  it would look something like 
this  YAML is just for  illustrative purposes         my bundle layers      annotations        name   dev tekton image name        value   foo        name   dev tekton image kind        value   Task        name   dev tekton image apiVersion        value   tekton dev v1beta1      contents   compressed bytes of Task object      annotations        name   dev tekton image name        value   bar        name   dev tekton image kind        value   Task        name   dev tekton image apiVersion        value   tekton dev v1beta1      contents   compressed bytes of Task object      annotations        name   dev tekton image name        value   foobar        name   dev tekton image kind        value   Pipeline        name   dev tekton image apiVersion        value   tekton dev v1beta1      contents   compressed bytes of Pipeline object            Except as otherwise noted  the content of this page is licensed under the  Creative Commons Attribution 4 0 License  https   creativecommons org licenses by 4 0    and code samples are licensed under the  Apache 2 0 License  https   www apache org licenses LICENSE 2 0  "}
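The bundle contract in the record above is mechanical enough to check in a few lines: required annotations per layer, a unique (name, kind, apiVersion) triple across layers, and a layer-count cap. A minimal Python sketch, assuming layers are represented as annotation dicts; per the contract text, `dev.tekton.image.kind` is lower-cased and singular (`task`/`pipeline`), even though the illustrative YAML above shows `Task`/`Pipeline`:

```python
# Sketch: enforce the Tekton Bundle contract on a list of per-layer
# annotation dicts. Returns violations; an empty list means the bundle passes.

REQUIRED = ("dev.tekton.image.name",
            "dev.tekton.image.kind",
            "dev.tekton.image.apiVersion")

def validate_bundle_layers(layers: list) -> list:
    errors = []
    if len(layers) > 20:  # "No more than 20 individual layers"
        errors.append(f"too many layers: {len(layers)} > 20")
    seen = set()
    for i, annotations in enumerate(layers):
        missing = [k for k in REQUIRED if k not in annotations]
        if missing:
            errors.append(f"layer {i}: missing annotations {missing}")
            continue
        triple = tuple(annotations[k] for k in REQUIRED)
        if triple in seen:  # the union of the three annotations must be unique
            errors.append(f"layer {i}: duplicate resource {triple}")
        seen.add(triple)
    return errors

# The three-resource example from the contract doc: two Tasks and a Pipeline.
layers = [
    {"dev.tekton.image.name": "foo", "dev.tekton.image.kind": "task",
     "dev.tekton.image.apiVersion": "tekton.dev/v1beta1"},
    {"dev.tekton.image.name": "bar", "dev.tekton.image.kind": "task",
     "dev.tekton.image.apiVersion": "tekton.dev/v1beta1"},
    {"dev.tekton.image.name": "foobar", "dev.tekton.image.kind": "pipeline",
     "dev.tekton.image.apiVersion": "tekton.dev/v1beta1"},
]
print(validate_bundle_layers(layers))  # []
```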
{"questions":"tekton weight 404 ul Pipeline API p Packages p title Pipeline API","answers":"<!--\n---\ntitle: Pipeline API\nlinkTitle: Pipeline API\nweight: 404\n---\n-->\n\n<p>Packages:<\/p>\n<ul>\n<li>\n<a href=\"#resolution.tekton.dev%2fv1alpha1\">resolution.tekton.dev\/v1alpha1<\/a>\n<\/li>\n<li>\n<a href=\"#resolution.tekton.dev%2fv1beta1\">resolution.tekton.dev\/v1beta1<\/a>\n<\/li>\n<li>\n<a href=\"#tekton.dev%2fv1\">tekton.dev\/v1<\/a>\n<\/li>\n<li>\n<a href=\"#tekton.dev%2fv1alpha1\">tekton.dev\/v1alpha1<\/a>\n<\/li>\n<li>\n<a href=\"#tekton.dev%2fv1beta1\">tekton.dev\/v1beta1<\/a>\n<\/li>\n<\/ul>\n<h2 id=\"resolution.tekton.dev\/v1alpha1\">resolution.tekton.dev\/v1alpha1<\/h2>\n<div>\n<\/div>\nResource Types:\n<ul><\/ul>\n<h3 id=\"resolution.tekton.dev\/v1alpha1.ResolutionRequest\">ResolutionRequest\n<\/h3>\n<div>\n<p>ResolutionRequest is an object for requesting the content of\na Tekton resource like a pipeline.yaml.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#resolution.tekton.dev\/v1alpha1.ResolutionRequestSpec\">\nResolutionRequestSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec holds the information for the request part of the resource request.<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Parameters are the runtime attributes passed to\nthe resolver to help it figure out how to resolve the\nresource being requested. 
For example: repo URL, commit SHA,\npath to file, the kind of authentication to leverage, etc.<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#resolution.tekton.dev\/v1alpha1.ResolutionRequestStatus\">\nResolutionRequestStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status communicates the state of the request and, ultimately,\nthe content of the resolved resource.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"resolution.tekton.dev\/v1alpha1.ResolutionRequestSpec\">ResolutionRequestSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#resolution.tekton.dev\/v1alpha1.ResolutionRequest\">ResolutionRequest<\/a>)\n<\/p>\n<div>\n<p>ResolutionRequestSpec are all the fields in the spec of the\nResolutionRequest CRD.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Parameters are the runtime attributes passed to\nthe resolver to help it figure out how to resolve the\nresource being requested. 
For example: repo URL, commit SHA,\npath to file, the kind of authentication to leverage, etc.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"resolution.tekton.dev\/v1alpha1.ResolutionRequestStatus\">ResolutionRequestStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#resolution.tekton.dev\/v1alpha1.ResolutionRequest\">ResolutionRequest<\/a>)\n<\/p>\n<div>\n<p>ResolutionRequestStatus are all the fields in a ResolutionRequest&rsquo;s\nstatus subresource.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>Status<\/code><br\/>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/knative.dev\/pkg\/apis\/duck\/v1#Status\">\nknative.dev\/pkg\/apis\/duck\/v1.Status\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>Status<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ResolutionRequestStatusFields<\/code><br\/>\n<em>\n<a href=\"#resolution.tekton.dev\/v1alpha1.ResolutionRequestStatusFields\">\nResolutionRequestStatusFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>ResolutionRequestStatusFields<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"resolution.tekton.dev\/v1alpha1.ResolutionRequestStatusFields\">ResolutionRequestStatusFields\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#resolution.tekton.dev\/v1alpha1.ResolutionRequestStatus\">ResolutionRequestStatus<\/a>)\n<\/p>\n<div>\n<p>ResolutionRequestStatusFields are the ResolutionRequest-specific fields\nfor the status subresource.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>data<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Data is a string representation of the resolved content\nof the requested resource in-lined into the ResolutionRequest\nobject.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>refSource<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1.RefSource\">\nRefSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>RefSource is the source reference of the remote data that records where the remote\nfile came from including the url, digest and the entrypoint.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr\/>\n<h2 id=\"resolution.tekton.dev\/v1beta1\">resolution.tekton.dev\/v1beta1<\/h2>\n<div>\n<\/div>\nResource Types:\n<ul><\/ul>\n<h3 id=\"resolution.tekton.dev\/v1beta1.ResolutionRequest\">ResolutionRequest\n<\/h3>\n<div>\n<p>ResolutionRequest is an object for requesting the content of\na Tekton resource like a pipeline.yaml.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#resolution.tekton.dev\/v1beta1.ResolutionRequestSpec\">\nResolutionRequestSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec holds the information for the request part of the resource request.<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Param\">\n[]Param\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Parameters are the runtime attributes passed to\nthe resolver to help it figure out how to resolve the\nresource being requested. 
For example: repo URL, commit SHA,\npath to file, the kind of authentication to leverage, etc.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>url<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>URL is the runtime url passed to the resolver\nto help it figure out how to resolve the resource being\nrequested.\nThis is currently at an ALPHA stability level and subject to\nalpha API compatibility policies.<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#resolution.tekton.dev\/v1beta1.ResolutionRequestStatus\">\nResolutionRequestStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status communicates the state of the request and, ultimately,\nthe content of the resolved resource.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"resolution.tekton.dev\/v1beta1.ResolutionRequestSpec\">ResolutionRequestSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#resolution.tekton.dev\/v1beta1.ResolutionRequest\">ResolutionRequest<\/a>)\n<\/p>\n<div>\n<p>ResolutionRequestSpec are all the fields in the spec of the\nResolutionRequest CRD.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Param\">\n[]Param\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Parameters are the runtime attributes passed to\nthe resolver to help it figure out how to resolve the\nresource being requested. 
For example: repo URL, commit SHA,\npath to file, the kind of authentication to leverage, etc.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>url<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>URL is the runtime url passed to the resolver\nto help it figure out how to resolve the resource being\nrequested.\nThis is currently at an ALPHA stability level and subject to\nalpha API compatibility policies.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"resolution.tekton.dev\/v1beta1.ResolutionRequestStatus\">ResolutionRequestStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#resolution.tekton.dev\/v1beta1.ResolutionRequest\">ResolutionRequest<\/a>)\n<\/p>\n<div>\n<p>ResolutionRequestStatus are all the fields in a ResolutionRequest&rsquo;s\nstatus subresource.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>Status<\/code><br\/>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/knative.dev\/pkg\/apis\/duck\/v1#Status\">\nknative.dev\/pkg\/apis\/duck\/v1.Status\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>Status<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ResolutionRequestStatusFields<\/code><br\/>\n<em>\n<a href=\"#resolution.tekton.dev\/v1beta1.ResolutionRequestStatusFields\">\nResolutionRequestStatusFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>ResolutionRequestStatusFields<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"resolution.tekton.dev\/v1beta1.ResolutionRequestStatusFields\">ResolutionRequestStatusFields\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#resolution.tekton.dev\/v1beta1.ResolutionRequestStatus\">ResolutionRequestStatus<\/a>)\n<\/p>\n<div>\n<p>ResolutionRequestStatusFields are the ResolutionRequest-specific fields\nfor the status 
subresource.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>data<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Data is a string representation of the resolved content\nof the requested resource in-lined into the ResolutionRequest\nobject.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>source<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.RefSource\">\nRefSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Deprecated: Use RefSource instead<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>refSource<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.RefSource\">\nRefSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>RefSource is the source reference of the remote data that records the url, digest\nand the entrypoint.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr\/>\n<h2 id=\"tekton.dev\/v1\">tekton.dev\/v1<\/h2>\n<div>\n<p>Package v1 contains API Schema definitions for the pipeline v1 API group<\/p>\n<\/div>\nResource Types:\n<ul><li>\n<a href=\"#tekton.dev\/v1.Pipeline\">Pipeline<\/a>\n<\/li><li>\n<a href=\"#tekton.dev\/v1.PipelineRun\">PipelineRun<\/a>\n<\/li><li>\n<a href=\"#tekton.dev\/v1.Task\">Task<\/a>\n<\/li><li>\n<a href=\"#tekton.dev\/v1.TaskRun\">TaskRun<\/a>\n<\/li><\/ul>\n<h3 id=\"tekton.dev\/v1.Pipeline\">Pipeline\n<\/h3>\n<div>\n<p>Pipeline describes a list of Tasks to execute. 
It expresses how outputs\nof tasks feed into inputs of subsequent tasks.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>Pipeline<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineSpec\">\nPipelineSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec holds the desired state of the Pipeline from the client<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DisplayName is a user-facing name of the pipeline that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the pipeline that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tasks<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineTask\">\n[]PipelineTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Tasks declares the graph of Tasks that execute when this Pipeline is run.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params declares a list of input parameters that must be supplied when\nthis Pipeline is run.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1.PipelineWorkspaceDeclaration\">\n[]PipelineWorkspaceDeclaration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces declares a set of named workspaces that are expected to be\nprovided by a PipelineRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineResult\">\n[]PipelineResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results are values that this pipeline can output once run<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>finally<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineTask\">\n[]PipelineTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Finally declares the list of Tasks that execute just before leaving the Pipeline\ni.e. either after all Tasks are finished executing successfully\nor after a failure which would result in ending the Pipeline<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineRun\">PipelineRun\n<\/h3>\n<div>\n<p>PipelineRun represents a single execution of a Pipeline. PipelineRuns are how\nthe graph of Tasks declared in a Pipeline are executed; they specify inputs\nto Pipelines such as parameter values and capture operational aspects of the\nTasks execution such as service account and tolerations. 
Creating a\nPipelineRun creates TaskRuns for Tasks in the referenced Pipeline.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>PipelineRun<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineRunSpec\">\nPipelineRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>pipelineRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineRef\">\nPipelineRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineSpec\">\nPipelineSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specifying an inline PipelineSpec can be disabled by setting the\n<code>disable-inline-spec<\/code> feature flag.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params is a list of parameter names and values.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineRunSpecStatus\">\nPipelineRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a pipelinerun (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeouts<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1.TimeoutFields\">\nTimeoutFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which the Pipeline times out.\nCurrently three keys are accepted in the map:\npipeline, tasks and finally,\nwith Timeouts.pipeline &gt;= Timeouts.tasks + Timeouts.finally.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRunTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineTaskRunTemplate\">\nPipelineTaskRunTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskRunTemplate represents the template of a TaskRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces holds a set of workspace bindings that must match names\nwith those declared in the pipeline.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRunSpecs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineTaskRunSpec\">\n[]PipelineTaskRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskRunSpecs holds a set of runtime specs<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineRunStatus\">\nPipelineRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.Task\">Task\n<\/h3>\n<div>\n<p>Task represents a collection of sequential steps that are run as part of a\nPipeline using a set of inputs and producing a set of outputs. 
Tasks execute\nwhen TaskRuns are created that provide the input parameters and resources and\noutput resources the Task requires.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>Task<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskSpec\">\nTaskSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec holds the desired state of the Task from the client<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params is a list of input parameters required to run the task. 
Params\nmust be supplied as inputs in TaskRuns unless they declare a default\nvalue.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DisplayName is a user-facing name of the task that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the task that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>steps<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Step\">\n[]Step\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Steps are the steps of the build; each step is run sequentially with the\nsource mounted into \/workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumes<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volume-v1-core\">\n[]Kubernetes core\/v1.Volume\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Volumes is a collection of volumes that are available to mount into the\nsteps of the build.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.StepTemplate\">\nStepTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>StepTemplate can be used as the basis for all step containers within the\nTask, so that the steps inherit settings on the base container.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecars<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Sidecar\">\n[]Sidecar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Sidecars are run alongside the Task&rsquo;s step containers. 
They begin before\nthe steps start and end after the steps complete.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WorkspaceDeclaration\">\n[]WorkspaceDeclaration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Workspaces are the volumes that this Task requires.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskResult\">\n[]TaskResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Results are values that this Task can output<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRun\">TaskRun\n<\/h3>\n<div>\n<p>TaskRun represents a single execution of a Task. TaskRuns are how the steps\nspecified in a Task are executed; they specify the parameters and resources\nused to run the steps in a Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>TaskRun<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunSpec\">\nTaskRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>debug<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunDebug\">\nTaskRunDebug\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRef\">\nTaskRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>No more than one of TaskRef and TaskSpec may be specified.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskSpec\">\nTaskSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specifying an inline TaskSpec can be disabled by setting the\n<code>disable-inline-spec<\/code> feature flag.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunSpecStatus\">\nTaskRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a TaskRun (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>statusMessage<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunSpecStatusMessage\">\nTaskRunSpecStatusMessage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status message for cancellation.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retries<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Retries represents how many times this TaskRun should be retried in the event of task failure.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which one retry attempt times out. 
Defaults to 1 hour.\nRefer to Go&rsquo;s ParseDuration documentation for expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>podTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/unversioned.Template\">\nTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>PodTemplate holds pod-specific configuration.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepSpecs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunStepSpec\">\n[]TaskRunStepSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specs to apply to Steps in this TaskRun.\nIf a field is specified in both a Step and a StepSpec,\nthe value from the StepSpec will be used.\nThis field is only supported when the alpha feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecarSpecs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunSidecarSpec\">\n[]TaskRunSidecarSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specs to apply to Sidecars in this TaskRun.\nIf a field is specified in both a Sidecar and a SidecarSpec,\nthe value from the SidecarSpec will be used.\nThis field is only supported when the alpha feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>computeResources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Compute resources to use for this TaskRun<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1.TaskRunStatus\">\nTaskRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.Algorithm\">Algorithm\n(<code>string<\/code> alias)<\/h3>\n<div>\n<p>Algorithm is a standard cryptographic hash algorithm.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1.Artifact\">Artifact\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.Artifacts\">Artifacts<\/a>, <a href=\"#tekton.dev\/v1.StepState\">StepState<\/a>)\n<\/p>\n<div>\n<p>Artifact represents an artifact produced or used by a step within a task run.\nTaskRunStepArtifact directly uses the Artifact type for its structure.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The artifact&rsquo;s identifying category name<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>values<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ArtifactValue\">\n[]ArtifactValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>A collection of values related to the artifact<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>buildOutput<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>Indicate if the artifact is a build output or a by-product<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.ArtifactValue\">ArtifactValue\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.Artifact\">Artifact<\/a>)\n<\/p>\n<div>\n<p>ArtifactValue represents a specific value or data element within an Artifact.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>digest<\/code><br\/>\n<em>\nmap[github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1.Algorithm]string\n<\/em>\n<\/td>\n<td>\n<p>Algorithm-specific digests for verifying the content (e.g., SHA256)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>uri<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1.Artifacts\">Artifacts\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>Artifacts represents the collection of input and output artifacts associated with\na task run or a similar process. Artifacts in this context are units of data or resources\nthat the process either consumes as input or produces as output.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>inputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Artifact\">\n[]Artifact\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>outputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Artifact\">\n[]Artifact\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.ChildStatusReference\">ChildStatusReference\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>ChildStatusReference is used to point to the statuses of individual TaskRuns and Runs within this PipelineRun.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the TaskRun or Run this is referencing.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>DisplayName is a user-facing name of the pipelineTask that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineTaskName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>PipelineTaskName is the name of the PipelineTask this is referencing.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>whenExpressions<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WhenExpression\">\n[]WhenExpression\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>WhenExpressions is the list of checks guarding 
the execution of the PipelineTask<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.Combination\">Combination\n(<code>map[string]string<\/code> alias)<\/h3>\n<div>\n<p>Combination is a map, mainly defined to hold a single combination from a Matrix with key as param.Name and value as param.Value<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1.Combinations\">Combinations\n(<code>[]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1.Combination<\/code> alias)<\/h3>\n<div>\n<p>Combinations is a Combination list<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1.EmbeddedTask\">EmbeddedTask\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>EmbeddedTask is used to define a Task inline within a Pipeline&rsquo;s PipelineTasks.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/runtime.RawExtension\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec is a specification of a custom task<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>-<\/code><br\/>\n<em>\n[]byte\n<\/em>\n<\/td>\n<td>\n<p>Raw is the underlying serialization of this object.<\/p>\n<p>TODO: Determine how to detect ContentType and ContentEncoding of &lsquo;Raw&rsquo; data.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>-<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/runtime.Object\n<\/em>\n<\/td>\n<td>\n<p>Object can hold a representation of this extension - useful for working with versioned\nstructs.<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineTaskMetadata\">\nPipelineTaskMetadata\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>TaskSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskSpec\">\nTaskSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>TaskSpec<\/code> are embedded into this 
type.)\n<\/p>\n<em>(Optional)<\/em>\n<p>TaskSpec is a specification of a task<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.IncludeParams\">IncludeParams\n<\/h3>\n<div>\n<p>IncludeParams allows passing in specific combinations of Parameters into the Matrix.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the specified combination.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params takes only <code>Parameters<\/code> of type <code>&quot;string&quot;<\/code>.\nThe names of the <code>params<\/code> must match the names of the <code>params<\/code> in the underlying <code>Task<\/code>.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.Matrix\">Matrix\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>Matrix is used to fan out Tasks in a Pipeline<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params is a list of parameters used to fan out the pipelineTask.\nParams takes only <code>Parameters<\/code> of type <code>&quot;array&quot;<\/code>.\nEach array element is supplied to the <code>PipelineTask<\/code> by substituting <code>params<\/code> of type <code>&quot;string&quot;<\/code> in the underlying <code>Task<\/code>.\nThe names of the <code>params<\/code> in the <code>Matrix<\/code> must match the names of the <code>params<\/code> in the underlying <code>Task<\/code> that they will be substituting.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>include<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1.IncludeParamsList\">\nIncludeParamsList\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Include is a list of IncludeParams which allows passing in specific combinations of Parameters into the Matrix.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.OnErrorType\">OnErrorType\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.Step\">Step<\/a>)\n<\/p>\n<div>\n<p>OnErrorType defines the supported exit behaviors of a container on error.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;continue&#34;<\/p><\/td>\n<td><p>Continue indicates that the remaining steps continue executing irrespective of the container exit code.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;stopAndFail&#34;<\/p><\/td>\n<td><p>StopAndFail indicates that the taskRun exits if the container exits with a non-zero exit code.<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.Param\">Param\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#resolution.tekton.dev\/v1beta1.ResolutionRequestSpec\">ResolutionRequestSpec<\/a>)\n<\/p>\n<div>\n<p>Param declares a ParamValue to use for the parameter called name.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.ParamSpec\">ParamSpec\n<\/h3>\n<div>\n<p>ParamSpec defines arbitrary parameters needed beyond typed inputs (such as\nresources). 
Parameter values are provided by users as inputs on a TaskRun\nor PipelineRun.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name declares the name by which a parameter is referenced.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamType\">\nParamType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Type is the user-specified type of the parameter. The possible types\nare currently &ldquo;string&rdquo;, &ldquo;array&rdquo; and &ldquo;object&rdquo;, and &ldquo;string&rdquo; is the default.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the parameter that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>properties<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PropertySpec\">\nmap[string]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1.PropertySpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Properties is the JSON Schema properties used to support key-value pair parameters.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>default<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Default is the value a parameter takes if no input value is supplied. 
If\ndefault is set, a Task may be executed without a supplied value for the\nparameter.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>enum<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Enum declares a set of allowed param input values for tasks\/pipelines that can be validated.\nIf Enum is not set, no input validation is performed for the param.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.ParamSpecs\">ParamSpecs\n(<code>[]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1.ParamSpec<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineSpec\">PipelineSpec<\/a>, <a href=\"#tekton.dev\/v1.TaskSpec\">TaskSpec<\/a>, <a href=\"#tekton.dev\/v1alpha1.StepActionSpec\">StepActionSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.StepActionSpec\">StepActionSpec<\/a>)\n<\/p>\n<div>\n<p>ParamSpecs is a list of ParamSpec<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1.ParamType\">ParamType\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.ParamSpec\">ParamSpec<\/a>, <a href=\"#tekton.dev\/v1.ParamValue\">ParamValue<\/a>, <a href=\"#tekton.dev\/v1.PropertySpec\">PropertySpec<\/a>)\n<\/p>\n<div>\n<p>ParamType indicates the type of an input parameter;\nUsed to distinguish between a single string and an array of strings.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;array&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;object&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;string&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.ParamValue\">ParamValue\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.Param\">Param<\/a>, <a href=\"#tekton.dev\/v1.ParamSpec\">ParamSpec<\/a>, <a href=\"#tekton.dev\/v1.PipelineResult\">PipelineResult<\/a>, <a href=\"#tekton.dev\/v1.PipelineRunResult\">PipelineRunResult<\/a>, <a 
href=\"#tekton.dev\/v1.TaskResult\">TaskResult<\/a>, <a href=\"#tekton.dev\/v1.TaskRunResult\">TaskRunResult<\/a>)\n<\/p>\n<div>\n<p>ParamValue holds a parameter value; ResultValue is a type alias of ParamValue.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>Type<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamType\">\nParamType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Represents the stored type of ParamValues.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>StringVal<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ArrayVal<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ObjectVal<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.Params\">Params\n(<code>[]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1.Param<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.IncludeParams\">IncludeParams<\/a>, <a href=\"#tekton.dev\/v1.Matrix\">Matrix<\/a>, <a href=\"#tekton.dev\/v1.PipelineRunSpec\">PipelineRunSpec<\/a>, <a href=\"#tekton.dev\/v1.PipelineTask\">PipelineTask<\/a>, <a href=\"#tekton.dev\/v1.ResolverRef\">ResolverRef<\/a>, <a href=\"#tekton.dev\/v1.Step\">Step<\/a>, <a href=\"#tekton.dev\/v1.TaskRunInputs\">TaskRunInputs<\/a>, <a href=\"#tekton.dev\/v1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>Params is a list of Param<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1.PipelineRef\">PipelineRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRunSpec\">PipelineRunSpec<\/a>, <a href=\"#tekton.dev\/v1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>PipelineRef can be used to refer to a specific instance of a Pipeline.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the referent; More info: <a 
href=\"http:\/\/kubernetes.io\/docs\/user-guide\/identifiers#names\">http:\/\/kubernetes.io\/docs\/user-guide\/identifiers#names<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>API version of the referent<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ResolverRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ResolverRef\">\nResolverRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ResolverRef allows referencing a Pipeline in a remote location\nlike a git repo. This field is only supported when the alpha\nfeature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineResult\">PipelineResult\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineSpec\">PipelineSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineResult is used to describe the results of a pipeline.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the given name of the result.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ResultsType\">\nResultsType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Type is the user-specified type of the result.\nThe possible types are &lsquo;string&rsquo;, &lsquo;array&rsquo;, and &lsquo;object&rsquo;, with &lsquo;string&rsquo; as the default.\n&lsquo;array&rsquo; and &lsquo;object&rsquo; types are alpha features.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a human-readable description of the result<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Value is the expression used to retrieve the value.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1.PipelineRunReason\">PipelineRunReason\n(<code>string<\/code> alias)<\/h3>\n<div>\n<p>PipelineRunReason represents a reason for the pipeline run &ldquo;Succeeded&rdquo; condition<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;CELEvaluationFailed&#34;<\/p><\/td>\n<td><p>ReasonCELEvaluationFailed indicates the pipeline fails the CEL evaluation<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Cancelled&#34;<\/p><\/td>\n<td><p>PipelineRunReasonCancelled is the reason set when the PipelineRun is cancelled by the user.\nThis reason may be found with a corev1.ConditionFalse status, if the cancellation was processed successfully.\nThis reason may be found with a corev1.ConditionUnknown status, if the cancellation is being processed or failed<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;CancelledRunningFinally&#34;<\/p><\/td>\n<td><p>PipelineRunReasonCancelledRunningFinally indicates that the pipeline has been gracefully cancelled\nand no new Tasks will be scheduled by the controller, but final tasks are now running<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Completed&#34;<\/p><\/td>\n<td><p>PipelineRunReasonCompleted is the reason set when the PipelineRun completed successfully with one or more skipped Tasks<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineRunCouldntCancel&#34;<\/p><\/td>\n<td><p>ReasonCouldntCancel indicates that a PipelineRun was cancelled but attempting to update\nall of the running TaskRuns as cancelled failed.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;CouldntGetPipeline&#34;<\/p><\/td>\n<td><p>ReasonCouldntGetPipeline indicates that the reason for the failure status is that the\nassociated Pipeline couldn&rsquo;t be retrieved<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;CouldntGetPipelineResult&#34;<\/p><\/td>\n<td><p>PipelineRunReasonCouldntGetPipelineResult indicates that the pipeline fails to retrieve the\nreferenced result. 
This could be due to failed TaskRuns or Runs that were supposed to produce\nthe results<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;CouldntGetTask&#34;<\/p><\/td>\n<td><p>ReasonCouldntGetTask indicates that the reason for the failure status is that the\nassociated Pipeline&rsquo;s Tasks couldn&rsquo;t all be retrieved<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineRunCouldntTimeOut&#34;<\/p><\/td>\n<td><p>ReasonCouldntTimeOut indicates that a PipelineRun was timed out but attempting to update\nall of the running TaskRuns as timed out failed.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;CreateRunFailed&#34;<\/p><\/td>\n<td><p>ReasonCreateRunFailed indicates that the pipeline fails to create the taskrun or other run resources<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Failed&#34;<\/p><\/td>\n<td><p>PipelineRunReasonFailed is the reason set when the PipelineRun completed with a failure<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineValidationFailed&#34;<\/p><\/td>\n<td><p>ReasonFailedValidation indicates that the reason for failure status is\nthat the pipelinerun failed runtime validation<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;InvalidPipelineResourceBindings&#34;<\/p><\/td>\n<td><p>ReasonInvalidBindings indicates that the reason for the failure status is that the\nPipelineResources bound in the PipelineRun didn&rsquo;t match those declared in the Pipeline<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineInvalidGraph&#34;<\/p><\/td>\n<td><p>ReasonInvalidGraph indicates that the reason for the failure status is that the\nassociated Pipeline is an invalid graph (e.g. wrong order, cycle, \u2026)<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;InvalidMatrixParameterTypes&#34;<\/p><\/td>\n<td><p>ReasonInvalidMatrixParameterTypes indicates a matrix contains invalid parameter types<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;InvalidParamValue&#34;<\/p><\/td>\n<td><p>PipelineRunReasonInvalidParamValue indicates that the PipelineRun Param input value is not 
allowed.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;InvalidPipelineResultReference&#34;<\/p><\/td>\n<td><p>PipelineRunReasonInvalidPipelineResultReference indicates a pipeline result was declared\nby the pipeline but not initialized in the pipelineTask<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;InvalidTaskResultReference&#34;<\/p><\/td>\n<td><p>ReasonInvalidTaskResultReference indicates a task result was declared\nbut was not initialized by that task<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;InvalidTaskRunSpecs&#34;<\/p><\/td>\n<td><p>ReasonInvalidTaskRunSpec indicates that PipelineRun.Spec.TaskRunSpecs[].PipelineTaskName is defined with\na taskName that does not exist in pipelineSpec.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;InvalidWorkspaceBindings&#34;<\/p><\/td>\n<td><p>ReasonInvalidWorkspaceBinding indicates that a Pipeline expects a workspace but a\nPipelineRun has provided an invalid binding.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;ObjectParameterMissKeys&#34;<\/p><\/td>\n<td><p>ReasonObjectParameterMissKeys indicates that the object param value provided from the PipelineRun spec\nis missing some keys required for the object param declared in the Pipeline spec.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;ParamArrayIndexingInvalid&#34;<\/p><\/td>\n<td><p>ReasonParamArrayIndexingInvalid indicates that the use of param array indexing is out of bounds.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;ParameterMissing&#34;<\/p><\/td>\n<td><p>ReasonParameterMissing indicates that the reason for the failure status is that the\nassociated PipelineRun didn&rsquo;t provide all the required parameters<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;ParameterTypeMismatch&#34;<\/p><\/td>\n<td><p>ReasonParameterTypeMismatch indicates that the reason for the failure status is that\nparameter(s) declared in the PipelineRun do not have the same declared type as the\nparameter(s) declared in the Pipeline that they are supposed to override.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineRunPending&#34;<\/p><\/td>\n<td><p>PipelineRunReasonPending is the reason 
set when the PipelineRun is in the pending state<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;RequiredWorkspaceMarkedOptional&#34;<\/p><\/td>\n<td><p>ReasonRequiredWorkspaceMarkedOptional indicates an optional workspace\nhas been passed to a Task that is expecting a non-optional workspace<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;ResolvingPipelineRef&#34;<\/p><\/td>\n<td><p>ReasonResolvingPipelineRef indicates that the PipelineRun is waiting for\nits pipelineRef to be asynchronously resolved.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;ResourceVerificationFailed&#34;<\/p><\/td>\n<td><p>ReasonResourceVerificationFailed indicates that the pipeline fails the trusted resource verification:\nthe content may have changed, or the signature or public key is invalid<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Running&#34;<\/p><\/td>\n<td><p>PipelineRunReasonRunning is the reason set when the PipelineRun is running<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Started&#34;<\/p><\/td>\n<td><p>PipelineRunReasonStarted is the reason set when the PipelineRun has just started<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;StoppedRunningFinally&#34;<\/p><\/td>\n<td><p>PipelineRunReasonStoppedRunningFinally indicates that the pipeline has been gracefully stopped\nand no new Tasks will be scheduled by the controller, but final tasks are now running<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineRunStopping&#34;<\/p><\/td>\n<td><p>PipelineRunReasonStopping indicates that no new Tasks will be scheduled by the controller, and the\npipeline will stop once all running tasks complete their work<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Succeeded&#34;<\/p><\/td>\n<td><p>PipelineRunReasonSuccessful is the reason set when the PipelineRun completed successfully<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineRunTimeout&#34;<\/p><\/td>\n<td><p>PipelineRunReasonTimedOut is the reason set when the PipelineRun has timed out<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineRunResult\">PipelineRunResult\n<\/h3>\n<p>\n(<em>Appears 
on:<\/em><a href=\"#tekton.dev\/v1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>PipelineRunResult is used to describe the results of a pipeline<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the result&rsquo;s name as declared by the Pipeline<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Value is the result returned from the execution of this PipelineRun<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineRunRunStatus\">PipelineRunRunStatus\n<\/h3>\n<div>\n<p>PipelineRunRunStatus contains the name of the PipelineTask for this Run and the Run&rsquo;s Status<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipelineTaskName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>PipelineTaskName is the name of the PipelineTask.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CustomRunStatus\">\nCustomRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status is the RunStatus for the corresponding Run<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>whenExpressions<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WhenExpression\">\n[]WhenExpression\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineRunSpec\">PipelineRunSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRun\">PipelineRun<\/a>)\n<\/p>\n<div>\n<p>PipelineRunSpec defines the desired state of 
PipelineRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipelineRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineRef\">\nPipelineRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineSpec\">\nPipelineSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specifying PipelineSpec can be disabled by setting the\n<code>disable-inline-spec<\/code> feature flag.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params is a list of parameter names and values.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineRunSpecStatus\">\nPipelineRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a pipelinerun (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeouts<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TimeoutFields\">\nTimeoutFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which the Pipeline times out.\nCurrently three keys are accepted in the map:\npipeline, tasks and finally,\nwith Timeouts.pipeline &gt;= Timeouts.tasks + Timeouts.finally<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRunTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineTaskRunTemplate\">\nPipelineTaskRunTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskRunTemplate represents the template of the taskrun<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces holds a set of workspace bindings that must match names\nwith those declared in the pipeline.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRunSpecs<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1.PipelineTaskRunSpec\">\n[]PipelineTaskRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskRunSpecs holds a set of runtime specs<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineRunSpecStatus\">PipelineRunSpecStatus\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRunSpec\">PipelineRunSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineRunSpecStatus defines the pipelinerun spec status the user can provide<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1.PipelineRunStatus\">PipelineRunStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRun\">PipelineRun<\/a>)\n<\/p>\n<div>\n<p>PipelineRunStatus defines the observed state of PipelineRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>Status<\/code><br\/>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/knative.dev\/pkg\/apis\/duck\/v1#Status\">\nknative.dev\/pkg\/apis\/duck\/v1.Status\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>Status<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>PipelineRunStatusFields<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineRunStatusFields\">\nPipelineRunStatusFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>PipelineRunStatusFields<\/code> are embedded into this type.)\n<\/p>\n<p>PipelineRunStatusFields inlines the status fields.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineRunStatusFields\">PipelineRunStatusFields\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRunStatus\">PipelineRunStatus<\/a>)\n<\/p>\n<div>\n<p>PipelineRunStatusFields holds the fields of PipelineRunStatus&rsquo; status.\nThis is defined separately and inlined so that other types can readily\nconsume these fields via duck 
typing.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>startTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>StartTime is the time the PipelineRun is actually started.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>completionTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>CompletionTime is the time the PipelineRun completed.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineRunResult\">\n[]PipelineRunResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results are the list of results written out by the pipeline task&rsquo;s containers<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineSpec\">\nPipelineSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>PipelineRunSpec contains the exact spec used to instantiate the run<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>skippedTasks<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.SkippedTask\">\n[]SkippedTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>list of tasks that were skipped due to when expressions evaluating to false<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>childReferences<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ChildStatusReference\">\n[]ChildStatusReference\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>list of TaskRun and Run names, PipelineTask names, and API versions\/kinds for children of this PipelineRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>finallyStartTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes 
meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>FinallyStartTime is when all non-finally tasks have been completed and only finally tasks are being executed.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>provenance<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Provenance\">\nProvenance\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs\/outputs, etc.).<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spanContext<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<p>SpanContext contains tracing span context fields<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineRunTaskRunStatus\">PipelineRunTaskRunStatus\n<\/h3>\n<div>\n<p>PipelineRunTaskRunStatus contains the name of the PipelineTask for this TaskRun and the TaskRun&rsquo;s Status<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipelineTaskName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>PipelineTaskName is the name of the PipelineTask.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunStatus\">\nTaskRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status is the TaskRunStatus for the corresponding TaskRun<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>whenExpressions<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WhenExpression\">\n[]WhenExpression\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineSpec\">PipelineSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.Pipeline\">Pipeline<\/a>, <a href=\"#tekton.dev\/v1.PipelineRunSpec\">PipelineRunSpec<\/a>, <a 
href=\"#tekton.dev\/v1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>, <a href=\"#tekton.dev\/v1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>PipelineSpec defines the desired state of Pipeline.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DisplayName is a user-facing name of the pipeline that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the pipeline that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tasks<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineTask\">\n[]PipelineTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Tasks declares the graph of Tasks that execute when this Pipeline is run.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params declares a list of input parameters that must be supplied when\nthis Pipeline is run.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineWorkspaceDeclaration\">\n[]PipelineWorkspaceDeclaration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces declares a set of named workspaces that are expected to be\nprovided by a PipelineRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineResult\">\n[]PipelineResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results are values that this pipeline can output once run<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>finally<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineTask\">\n[]PipelineTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Finally declares the list of Tasks that execute just before leaving the 
Pipeline\ni.e. either after all Tasks are finished executing successfully\nor after a failure which would result in ending the Pipeline<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineTask\">PipelineTask\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineSpec\">PipelineSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineTask defines a task in a Pipeline, passing inputs from both\nParams and from the output of previous tasks.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of this task within the context of a Pipeline. Name is\nused as a coordinate with the <code>from<\/code> and <code>runAfter<\/code> fields to establish\nthe execution order of tasks relative to one another.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DisplayName is the display name of this task within the context of a Pipeline.\nThis display name may be used to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is the description of this task within the context of a Pipeline.\nThis description may be used to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRef\">\nTaskRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskRef is a reference to a task definition.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.EmbeddedTask\">\nEmbeddedTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskSpec is a specification of a task.\nSpecifying TaskSpec can be disabled by setting the\n<code>disable-inline-spec<\/code> feature flag.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>when<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1.WhenExpressions\">\nWhenExpressions\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>When is a list of when expressions that need to be true for the task to run<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retries<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Retries represents how many times this task should be retried in case of task failure: ConditionSucceeded set to False<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>runAfter<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>RunAfter is the list of PipelineTask names that should be executed before\nthis Task executes. (Used to force a specific ordering in graph execution.)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Parameters declares parameters passed to this task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>matrix<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Matrix\">\nMatrix\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Matrix declares parameters used to fan out this task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WorkspacePipelineTaskBinding\">\n[]WorkspacePipelineTaskBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces maps workspaces from the pipeline spec to the workspaces\ndeclared in the Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which the TaskRun times out. 
Defaults to 1 hour.\nRefer to Go&rsquo;s ParseDuration documentation for the expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineRef\">\nPipelineRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PipelineRef is a reference to a pipeline definition.\nNote: PipelineRef is in preview mode and not yet supported<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineSpec\">\nPipelineSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PipelineSpec is a specification of a pipeline.\nNote: PipelineSpec is in preview mode and not yet supported.\nSpecifying PipelineSpec can be disabled by setting the\n<code>disable-inline-spec<\/code> feature flag.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>onError<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineTaskOnErrorType\">\nPipelineTaskOnErrorType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>OnError defines the exiting behavior of a PipelineRun on error;\ncan be set to [ continue | stopAndFail ]<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineTaskMetadata\">PipelineTaskMetadata\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.EmbeddedTask\">EmbeddedTask<\/a>, <a href=\"#tekton.dev\/v1.PipelineTaskRunSpec\">PipelineTaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineTaskMetadata contains the labels or annotations for an EmbeddedTask<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>labels<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>annotations<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1.PipelineTaskOnErrorType\">PipelineTaskOnErrorType\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>PipelineTaskOnErrorType defines a list of supported failure handling behaviors of a PipelineTask on error<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;continue&#34;<\/p><\/td>\n<td><p>PipelineTaskContinue indicates to continue executing the rest of the DAG when the PipelineTask fails<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;stopAndFail&#34;<\/p><\/td>\n<td><p>PipelineTaskStopAndFail indicates to stop and fail the PipelineRun if the PipelineTask fails<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineTaskParam\">PipelineTaskParam\n<\/h3>\n<div>\n<p>PipelineTaskParam is used to provide arbitrary string parameters to a Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineTaskRun\">PipelineTaskRun\n<\/h3>\n<div>\n<p>PipelineTaskRun reports the results of running a step in the Task. 
Each\ntask has the potential to succeed or fail (based on the exit code)\nand produces logs.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineTaskRunSpec\">PipelineTaskRunSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRunSpec\">PipelineRunSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineTaskRunSpec  can be used to configure specific\nspecs for a concrete Task<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipelineTaskName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>podTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/unversioned.Template\">\nTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepSpecs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunStepSpec\">\n[]TaskRunStepSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecarSpecs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunSidecarSpec\">\n[]TaskRunSidecarSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PipelineTaskMetadata\">\nPipelineTaskMetadata\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>computeResources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Compute resources to use for this TaskRun<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineTaskRunTemplate\">PipelineTaskRunTemplate\n<\/h3>\n<p>\n(<em>Appears 
on:<\/em><a href=\"#tekton.dev\/v1.PipelineRunSpec\">PipelineRunSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineTaskRunTemplate is used to specify run specifications for all Tasks in a PipelineRun.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>podTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/unversioned.Template\">\nTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PipelineWorkspaceDeclaration\">PipelineWorkspaceDeclaration\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineSpec\">PipelineSpec<\/a>)\n<\/p>\n<div>\n<p>WorkspacePipelineDeclaration creates a named slot in a Pipeline that a PipelineRun\nis expected to populate with a workspace binding.<\/p>\n<p>Deprecated: use PipelineWorkspaceDeclaration type instead<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of a workspace to be provided by a PipelineRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a human readable string describing how the workspace will be\nused in the Pipeline. It can be useful to include a bit of detail about which\ntasks are intended to have access to the data on the workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>optional<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>Optional marks a Workspace as not being required in PipelineRuns. 
By default\nthis field is false and so declared workspaces are required.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.PropertySpec\">PropertySpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.ParamSpec\">ParamSpec<\/a>, <a href=\"#tekton.dev\/v1.StepResult\">StepResult<\/a>, <a href=\"#tekton.dev\/v1.TaskResult\">TaskResult<\/a>)\n<\/p>\n<div>\n<p>PropertySpec defines the struct for object keys<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamType\">\nParamType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.Provenance\">Provenance\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>, <a href=\"#tekton.dev\/v1.StepState\">StepState<\/a>, <a href=\"#tekton.dev\/v1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>Provenance contains metadata about resources used in the TaskRun\/PipelineRun\nsuch as the source from where a remote build definition was fetched.\nThis field aims to carry a minimum amount of metadata in *Run status so that\nTekton Chains can capture them in the provenance.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>refSource<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.RefSource\">\nRefSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>RefSource identifies the source where a remote task\/pipeline came from.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>featureFlags<\/code><br\/>\n<em>\ngithub.com\/tektoncd\/pipeline\/pkg\/apis\/config.FeatureFlags\n<\/em>\n<\/td>\n<td>\n<p>FeatureFlags identifies the feature flags that were used during the task\/pipeline run<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.Ref\">Ref\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a 
href=\"#tekton.dev\/v1.Step\">Step<\/a>)\n<\/p>\n<div>\n<p>Ref can be used to refer to a specific instance of a StepAction.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the referenced step<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ResolverRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ResolverRef\">\nResolverRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ResolverRef allows referencing a StepAction in a remote location\nlike a git repo.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.RefSource\">RefSource\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.Provenance\">Provenance<\/a>, <a href=\"#resolution.tekton.dev\/v1alpha1.ResolutionRequestStatusFields\">ResolutionRequestStatusFields<\/a>, <a href=\"#resolution.tekton.dev\/v1beta1.ResolutionRequestStatusFields\">ResolutionRequestStatusFields<\/a>)\n<\/p>\n<div>\n<p>RefSource contains the information that can uniquely identify where a remote\nbuild definition came from, i.e. 
Git repositories, Tekton Bundles in OCI registry\nand hub.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>uri<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>URI indicates the identity of the source of the build definition.\nExample: &ldquo;<a href=\"https:\/\/github.com\/tektoncd\/catalog\">https:\/\/github.com\/tektoncd\/catalog<\/a>&rdquo;<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>digest<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<p>Digest is a collection of cryptographic digests for the contents of the artifact specified by URI.\nExample: {&ldquo;sha1&rdquo;: &ldquo;f99d13e554ffcb696dee719fa85b695cb5b0f428&rdquo;}<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>entryPoint<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>EntryPoint identifies the entry point into the build. This is often a path to a\nbuild definition file and\/or a target label within that file.\nExample: &ldquo;task\/git-clone\/0.8\/git-clone.yaml&rdquo;<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.ResolverName\">ResolverName\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.ResolverRef\">ResolverRef<\/a>)\n<\/p>\n<div>\n<p>ResolverName is the name of a resolver from which a resource can be\nrequested.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1.ResolverRef\">ResolverRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRef\">PipelineRef<\/a>, <a href=\"#tekton.dev\/v1.Ref\">Ref<\/a>, <a href=\"#tekton.dev\/v1.TaskRef\">TaskRef<\/a>)\n<\/p>\n<div>\n<p>ResolverRef can be used to refer to a Pipeline or Task in a remote\nlocation like a git repo. 
This feature is in beta and these fields\nare only available when the beta feature gate is enabled.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>resolver<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ResolverName\">\nResolverName\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Resolver is the name of the resolver that should perform\nresolution of the referenced Tekton resource, such as &ldquo;git&rdquo;.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params contains the parameters used to identify the\nreferenced Tekton resource. Example entries might include\n&ldquo;repo&rdquo; or &ldquo;path&rdquo; but the set of params ultimately depends on\nthe chosen resolver.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.ResultRef\">ResultRef\n<\/h3>\n<div>\n<p>ResultRef is a type that represents a reference to a task run result<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipelineTask<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>result<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resultsIndex<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>property<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.ResultsType\">ResultsType\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineResult\">PipelineResult<\/a>, <a href=\"#tekton.dev\/v1.StepResult\">StepResult<\/a>, <a href=\"#tekton.dev\/v1.TaskResult\">TaskResult<\/a>, <a href=\"#tekton.dev\/v1.TaskRunResult\">TaskRunResult<\/a>)\n<\/p>\n<div>\n<p>ResultsType indicates the type of a result;\nUsed to 
distinguish between a single string and an array of strings.\nNote that there is ResultType used to find out whether a\nRunResult is from a task result or not, which is different from\nthis ResultsType.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;array&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;object&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><tr><td><p>&#34;string&#34;<\/p><\/td>\n<td><\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.Sidecar\">Sidecar\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>Sidecar has nearly the same data structure as Step but does not have the ability to timeout.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the Sidecar specified as a DNS_LABEL.\nEach Sidecar in a Task must have a unique name (DNS_LABEL).\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>image<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image reference name.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>command<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Entrypoint array. Not executed within a shell.\nThe image&rsquo;s ENTRYPOINT is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the Sidecar&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. 
Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>args<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Arguments to the entrypoint.\nThe image&rsquo;s CMD is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the Sidecar&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workingDir<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Sidecar&rsquo;s working directory.\nIf not specified, the container runtime&rsquo;s default will be used, which\nmight be configured in the container image.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ports<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#containerport-v1-core\">\n[]Kubernetes core\/v1.ContainerPort\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of ports to expose from the Sidecar. 
Exposing a port here gives\nthe system additional information about the network connections a\ncontainer uses, but is primarily informational. Not specifying a port here\nDOES NOT prevent that port from being exposed. Any port which is\nlistening on the default &ldquo;0.0.0.0&rdquo; address inside a container will be\naccessible from the network.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>envFrom<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envfromsource-v1-core\">\n[]Kubernetes core\/v1.EnvFromSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of sources to populate environment variables in the Sidecar.\nThe keys defined within a source must be a C_IDENTIFIER. All invalid keys\nwill be reported as an event when the container is starting. When a key exists in multiple\nsources, the value associated with the last source will take precedence.\nValues defined by an Env with a duplicate key will take precedence.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>env<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envvar-v1-core\">\n[]Kubernetes core\/v1.EnvVar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of environment variables to set in the Sidecar.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>computeResources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ComputeResources required by this Sidecar.\nCannot be updated.\nMore info: <a 
href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\">https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeMounts<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumemount-v1-core\">\n[]Kubernetes core\/v1.VolumeMount\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Volumes to mount into the Sidecar&rsquo;s filesystem.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeDevices<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumedevice-v1-core\">\n[]Kubernetes core\/v1.VolumeDevice\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>volumeDevices is the list of block devices to be used by the Sidecar.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>livenessProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Periodic probe of Sidecar liveness.\nContainer will be restarted if the probe fails.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>readinessProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Periodic probe of Sidecar service readiness.\nContainer will be removed from service endpoints if the probe fails.\nCannot be updated.\nMore info: <a 
href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>startupProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>StartupProbe indicates that the Pod the Sidecar is running in has successfully initialized.\nIf specified, no other probes are executed until this completes successfully.\nIf this probe fails, the Pod will be restarted, just as if the livenessProbe failed.\nThis can be used to provide different probe parameters at the beginning of a Pod&rsquo;s lifecycle,\nwhen it might take a long time to load data or warm a cache, than during steady-state operation.\nThis cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>lifecycle<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#lifecycle-v1-core\">\nKubernetes core\/v1.Lifecycle\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Actions that the management system should take in response to Sidecar lifecycle events.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>terminationMessagePath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional: Path at which the file to which the Sidecar&rsquo;s termination message\nwill be written is mounted into the Sidecar&rsquo;s filesystem.\nMessage written is intended to be brief final status, such as an assertion failure message.\nWill be truncated by the node if greater than 4096 bytes. 
The total message length across\nall containers will be limited to 12kb.\nDefaults to \/dev\/termination-log.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>terminationMessagePolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#terminationmessagepolicy-v1-core\">\nKubernetes core\/v1.TerminationMessagePolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Indicate how the termination message should be populated. File will use the contents of\nterminationMessagePath to populate the Sidecar status message on both success and failure.\nFallbackToLogsOnError will use the last chunk of Sidecar log output if the termination\nmessage file is empty and the Sidecar exited with an error.\nThe log output is limited to 2048 bytes or 80 lines, whichever is smaller.\nDefaults to File.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>imagePullPolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#pullpolicy-v1-core\">\nKubernetes core\/v1.PullPolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image pull policy.\nOne of Always, Never, IfNotPresent.\nDefaults to Always if :latest tag is specified, or IfNotPresent otherwise.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>securityContext<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#securitycontext-v1-core\">\nKubernetes core\/v1.SecurityContext\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecurityContext defines the security options the Sidecar should be run with.\nIf set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.\nMore info: <a 
href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stdin<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether this Sidecar should allocate a buffer for stdin in the container runtime. If this\nis not set, reads from stdin in the Sidecar will always result in EOF.\nDefault is false.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stdinOnce<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether the container runtime should close the stdin channel after it has been opened by\na single attach. When stdin is true the stdin stream will remain open across multiple attach\nsessions. If stdinOnce is set to true, stdin is opened on Sidecar start, is empty until the\nfirst client attaches to stdin, and then remains open and accepts data until the client disconnects,\nat which time stdin is closed and remains closed until the Sidecar is restarted. If this\nflag is false, a container processes that reads from stdin will never receive an EOF.\nDefault is false<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tty<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether this Sidecar should allocate a TTY for itself, also requires &lsquo;stdin&rsquo; to be true.\nDefault is false.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>script<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Script is the contents of an executable file to execute.<\/p>\n<p>If Script is not empty, the Step cannot have an Command or Args.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WorkspaceUsage\">\n[]WorkspaceUsage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>This is an alpha field. 
You must set the &ldquo;enable-api-fields&rdquo; feature flag to &ldquo;alpha&rdquo;\nfor this field to be supported.<\/p>\n<p>Workspaces is a list of workspaces from the Task that this Sidecar wants\nexclusive access to. Adding a workspace to this list means that any\nother Step or Sidecar that does not also request this Workspace will\nnot have access to it.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>restartPolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#containerrestartpolicy-v1-core\">\nKubernetes core\/v1.ContainerRestartPolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>RestartPolicy refers to the Kubernetes RestartPolicy. It can only be set for an\ninitContainer and must have its policy set to &ldquo;Always&rdquo;. It is currently\nleft optional to help support Kubernetes versions prior to 1.29, when this feature\nwas introduced.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.SidecarState\">SidecarState\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>SidecarState reports the results of running a sidecar in a Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>ContainerState<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#containerstate-v1-core\">\nKubernetes core\/v1.ContainerState\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>ContainerState<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>container<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>imageID<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1.SkippedTask\">SkippedTask\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>SkippedTask is used to describe the Tasks that were skipped due to their When Expressions\nevaluating to False. This is a struct because we are looking into including more details\nabout the When Expressions that caused this Task to be skipped.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the Pipeline Task name<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>reason<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.SkippingReason\">\nSkippingReason\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Reason is the cause of the PipelineTask being skipped.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>whenExpressions<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WhenExpression\">\n[]WhenExpression\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.SkippingReason\">SkippingReason\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.SkippedTask\">SkippedTask<\/a>)\n<\/p>\n<div>\n<p>SkippingReason explains why a PipelineTask was skipped.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;Matrix Parameters have an empty array&#34;<\/p><\/td>\n<td><p>EmptyArrayInMatrixParams means the task was skipped because Matrix parameters contain empty array.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineRun Finally timeout has been reached&#34;<\/p><\/td>\n<td><p>FinallyTimedOutSkip means the task was skipped because the PipelineRun has passed its Timeouts.Finally.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineRun was gracefully 
cancelled&#34;<\/p><\/td>\n<td><p>GracefullyCancelledSkip means the task was skipped because the pipeline run has been gracefully cancelled<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineRun was gracefully stopped&#34;<\/p><\/td>\n<td><p>GracefullyStoppedSkip means the task was skipped because the pipeline run has been gracefully stopped<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Results were missing&#34;<\/p><\/td>\n<td><p>MissingResultsSkip means the task was skipped because it&rsquo;s missing necessary results<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;None&#34;<\/p><\/td>\n<td><p>None means the task was not skipped<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Parent Tasks were skipped&#34;<\/p><\/td>\n<td><p>ParentTasksSkip means the task was skipped because its parent was skipped<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineRun timeout has been reached&#34;<\/p><\/td>\n<td><p>PipelineTimedOutSkip means the task was skipped because the PipelineRun has passed its overall timeout.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineRun was stopping&#34;<\/p><\/td>\n<td><p>StoppingSkip means the task was skipped because the pipeline run is stopping<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;PipelineRun Tasks timeout has been reached&#34;<\/p><\/td>\n<td><p>TasksTimedOutSkip means the task was skipped because the PipelineRun has passed its Timeouts.Tasks.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;When Expressions evaluated to false&#34;<\/p><\/td>\n<td><p>WhenExpressionsSkip means the task was skipped due to at least one of its when expressions evaluating to false<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.Step\">Step\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>Step runs a subcomponent of a Task<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the Step specified as a DNS_LABEL.\nEach Step 
in a Task must have a unique name.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>image<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Docker image name.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>command<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Entrypoint array. Not executed within a shell.\nThe image&rsquo;s ENTRYPOINT is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>args<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Arguments to the entrypoint.\nThe image&rsquo;s CMD is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. 
Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workingDir<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Step&rsquo;s working directory.\nIf not specified, the container runtime&rsquo;s default will be used, which\nmight be configured in the container image.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>envFrom<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envfromsource-v1-core\">\n[]Kubernetes core\/v1.EnvFromSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of sources to populate environment variables in the Step.\nThe keys defined within a source must be a C_IDENTIFIER. All invalid keys\nwill be reported as an event when the Step is starting. 
When a key exists in multiple\nsources, the value associated with the last source will take precedence.\nValues defined by an Env with a duplicate key will take precedence.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>env<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envvar-v1-core\">\n[]Kubernetes core\/v1.EnvVar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of environment variables to set in the Step.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>computeResources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ComputeResources required by this Step.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\">https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeMounts<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumemount-v1-core\">\n[]Kubernetes core\/v1.VolumeMount\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Volumes to mount into the Step&rsquo;s filesystem.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeDevices<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumedevice-v1-core\">\n[]Kubernetes core\/v1.VolumeDevice\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>volumeDevices is the list of block devices to be used by the Step.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>imagePullPolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#pullpolicy-v1-core\">\nKubernetes 
core\/v1.PullPolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image pull policy.\nOne of Always, Never, IfNotPresent.\nDefaults to Always if :latest tag is specified, or IfNotPresent otherwise.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>securityContext<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#securitycontext-v1-core\">\nKubernetes core\/v1.SecurityContext\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecurityContext defines the security options the Step should be run with.\nIf set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>script<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Script is the contents of an executable file to execute.<\/p>\n<p>If Script is not empty, the Step cannot have a Command, and the Args will be passed to the Script.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Timeout is the time after which the step times out. 
Defaults to never.\nRefer to Go&rsquo;s ParseDuration documentation for expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WorkspaceUsage\">\n[]WorkspaceUsage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>This is an alpha field. You must set the &ldquo;enable-api-fields&rdquo; feature flag to &ldquo;alpha&rdquo;\nfor this field to be supported.<\/p>\n<p>Workspaces is a list of workspaces from the Task that this Step wants\nexclusive access to. Adding a workspace to this list means that any\nother Step or Sidecar that does not also request this Workspace will\nnot have access to it.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>onError<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.OnErrorType\">\nOnErrorType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>OnError defines the exiting behavior of a container on error.\nIt can be set to [ continue | stopAndFail ].<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stdoutConfig<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.StepOutputConfig\">\nStepOutputConfig\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Stores configuration for the stdout stream of the step.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stderrConfig<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.StepOutputConfig\">\nStepOutputConfig\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Stores configuration for the stderr stream of the step.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ref<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Ref\">\nRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Contains the reference to an existing StepAction.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params declares parameters passed to this step 
action.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.StepResult\">\n[]StepResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results declares StepResults produced by the Step.<\/p>\n<p>This field is at an ALPHA stability level and gated by the &ldquo;enable-step-actions&rdquo; feature flag.<\/p>\n<p>It can be used in an inlined Step when used to store Results to $(step.results.resultName.path).\nIt cannot be used when referencing StepActions using [v1.Step.Ref].\nThe Results declared by the StepActions will be stored here instead.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>when<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WhenExpressions\">\nWhenExpressions\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>When is a list of when expressions that need to be true for the task to run.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.StepOutputConfig\">StepOutputConfig\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.Step\">Step<\/a>)\n<\/p>\n<div>\n<p>StepOutputConfig stores configuration for a step output stream.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>path<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Path to duplicate stdout stream to on container&rsquo;s local filesystem.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.StepResult\">StepResult\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.Step\">Step<\/a>, <a href=\"#tekton.dev\/v1alpha1.StepActionSpec\">StepActionSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.Step\">Step<\/a>, <a href=\"#tekton.dev\/v1beta1.StepActionSpec\">StepActionSpec<\/a>)\n<\/p>\n<div>\n<p>StepResult is used to describe the Results of a Step.<\/p>\n<p>This field is at a BETA stability level and gated by the &ldquo;enable-step-actions&rdquo; feature 
flag.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the given name of the result.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ResultsType\">\nResultsType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>The possible types are &lsquo;string&rsquo;, &lsquo;array&rsquo;, and &lsquo;object&rsquo;, with &lsquo;string&rsquo; as the default.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>properties<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PropertySpec\">\nmap[string]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1.PropertySpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Properties is the JSON Schema properties to support key-value pair results.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a human-readable description of the result.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.StepState\">StepState\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>StepState reports the results of running a step in a Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>ContainerState<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#containerstate-v1-core\">\nKubernetes core\/v1.ContainerState\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>ContainerState<\/code> are embedded into this 
type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>container<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>imageID<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunResult\">\n[]TaskRunResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>provenance<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Provenance\">\nProvenance\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>terminationReason<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>inputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Artifact\">\n[]Artifact\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>outputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Artifact\">\n[]Artifact\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.StepTemplate\">StepTemplate\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>StepTemplate is a template for a Step<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>image<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image reference name.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>command<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Entrypoint array. Not executed within a shell.\nThe image&rsquo;s ENTRYPOINT is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the Step&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. 
Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>args<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Arguments to the entrypoint.\nThe image&rsquo;s CMD is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the Step&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. 
Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workingDir<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Step&rsquo;s working directory.\nIf not specified, the container runtime&rsquo;s default will be used, which\nmight be configured in the container image.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>envFrom<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envfromsource-v1-core\">\n[]Kubernetes core\/v1.EnvFromSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of sources to populate environment variables in the Step.\nThe keys defined within a source must be a C_IDENTIFIER. All invalid keys\nwill be reported as an event when the Step is starting. 
When a key exists in multiple\nsources, the value associated with the last source will take precedence.\nValues defined by an Env with a duplicate key will take precedence.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>env<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envvar-v1-core\">\n[]Kubernetes core\/v1.EnvVar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of environment variables to set in the Step.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>computeResources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ComputeResources required by this Step.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\">https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeMounts<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumemount-v1-core\">\n[]Kubernetes core\/v1.VolumeMount\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Volumes to mount into the Step&rsquo;s filesystem.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeDevices<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumedevice-v1-core\">\n[]Kubernetes core\/v1.VolumeDevice\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>volumeDevices is the list of block devices to be used by the Step.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>imagePullPolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#pullpolicy-v1-core\">\nKubernetes 
core\/v1.PullPolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image pull policy.\nOne of Always, Never, IfNotPresent.\nDefaults to Always if :latest tag is specified, or IfNotPresent otherwise.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>securityContext<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#securitycontext-v1-core\">\nKubernetes core\/v1.SecurityContext\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecurityContext defines the security options the Step should be run with.\nIf set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/<\/a><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskBreakpoints\">TaskBreakpoints\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskRunDebug\">TaskRunDebug<\/a>)\n<\/p>\n<div>\n<p>TaskBreakpoints defines the breakpoint config for a particular Task<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>onFailure<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>if enabled, pause TaskRun on failure of a step\nfailed step will not exit<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>beforeSteps<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskKind\">TaskKind\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskRef\">TaskRef<\/a>)\n<\/p>\n<div>\n<p>TaskKind defines the type of Task used 
by the pipeline.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;ClusterTask&#34;<\/p><\/td>\n<td><p>ClusterTaskRefKind is the task type for a reference to a task with cluster scope.\nClusterTasks are not supported in v1, but v1 types may reference ClusterTasks.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Task&#34;<\/p><\/td>\n<td><p>NamespacedTaskKind indicates that the task type has a namespaced scope.<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRef\">TaskRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineTask\">PipelineTask<\/a>, <a href=\"#tekton.dev\/v1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRef can be used to refer to a specific instance of a task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the referent; More info: <a href=\"http:\/\/kubernetes.io\/docs\/user-guide\/identifiers#names\">http:\/\/kubernetes.io\/docs\/user-guide\/identifiers#names<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskKind\">\nTaskKind\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>TaskKind indicates the Kind of the Task:\n1. Namespaced Task when Kind is set to &ldquo;Task&rdquo;. If Kind is &ldquo;&rdquo;, it defaults to &ldquo;Task&rdquo;.\n2. 
Custom Task when Kind is non-empty and APIVersion is non-empty<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>API version of the referent\nNote: A Task with non-empty APIVersion and Kind is considered a Custom Task<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ResolverRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ResolverRef\">\nResolverRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ResolverRef allows referencing a Task in a remote location\nlike a git repo. This field is only supported when the alpha\nfeature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskResult\">TaskResult\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>TaskResult used to describe the results of a task<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name the given name<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ResultsType\">\nResultsType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Type is the user-specified type of the result. 
The possible type\nis currently &ldquo;string&rdquo; and will support &ldquo;array&rdquo; in following work.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>properties<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.PropertySpec\">\nmap[string]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1.PropertySpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Properties is the JSON Schema properties to support key-value pairs results.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a human-readable description of the result<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Value the expression used to retrieve the value of the result from an underlying Step.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRunDebug\">TaskRunDebug\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRunDebug defines the breakpoint config for a particular TaskRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>breakpoints<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskBreakpoints\">\nTaskBreakpoints\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRunInputs\">TaskRunInputs\n<\/h3>\n<div>\n<p>TaskRunInputs holds the input values that this task was invoked with.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRunReason\">TaskRunReason\n(<code>string<\/code> 
alias)<\/h3>\n<div>\n<p>TaskRunReason is an enum used to store all TaskRun reasons for\nthe Succeeded condition that are controlled by the TaskRun itself. Failure\nreasons that emerge from underlying resources are not included here<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;TaskRunCancelled&#34;<\/p><\/td>\n<td><p>TaskRunReasonCancelled is the reason set when the TaskRun is cancelled by the user<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Failed&#34;<\/p><\/td>\n<td><p>TaskRunReasonFailed is the reason set when the TaskRun completed with a failure<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;TaskRunResolutionFailed&#34;<\/p><\/td>\n<td><p>TaskRunReasonFailedResolution indicates that the reason for the failure status is\nthat references within the TaskRun could not be resolved<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;TaskRunValidationFailed&#34;<\/p><\/td>\n<td><p>TaskRunReasonFailedValidation indicates that the reason for the failure status is\nthat the TaskRun failed runtime validation<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;FailureIgnored&#34;<\/p><\/td>\n<td><p>TaskRunReasonFailureIgnored is the reason set when the TaskRun has failed due to a pod execution error and the failure is ignored for the owning PipelineRun.\nTaskRuns that failed due to a reconciler\/validation error should not use this reason.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;TaskRunImagePullFailed&#34;<\/p><\/td>\n<td><p>TaskRunReasonImagePullFailed is the reason set when a step of a task fails because its image could not be pulled<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;InvalidParamValue&#34;<\/p><\/td>\n<td><p>TaskRunReasonInvalidParamValue indicates that the TaskRun Param input value is not allowed.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;ResourceVerificationFailed&#34;<\/p><\/td>\n<td><p>TaskRunReasonResourceVerificationFailed indicates that the task failed trusted resource verification:\nthe content may have changed, the signature may be invalid, or the public key may be 
invalid<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;TaskRunResultLargerThanAllowedLimit&#34;<\/p><\/td>\n<td><p>TaskRunReasonResultLargerThanAllowedLimit is the reason set when one of the results exceeds its maximum allowed limit of 1 KB<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Running&#34;<\/p><\/td>\n<td><p>TaskRunReasonRunning is the reason set when the TaskRun is running<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Started&#34;<\/p><\/td>\n<td><p>TaskRunReasonStarted is the reason set when the TaskRun has just started<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;TaskRunStopSidecarFailed&#34;<\/p><\/td>\n<td><p>TaskRunReasonStopSidecarFailed indicates that the sidecar was not properly stopped.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;Succeeded&#34;<\/p><\/td>\n<td><p>TaskRunReasonSuccessful is the reason set when the TaskRun completed successfully<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;TaskValidationFailed&#34;<\/p><\/td>\n<td><p>TaskRunReasonTaskFailedValidation indicates that the reason for the failure status is\nthat the task failed runtime validation<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;TaskRunTimeout&#34;<\/p><\/td>\n<td><p>TaskRunReasonTimedOut is the reason set when a TaskRun execution has timed out<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;ToBeRetried&#34;<\/p><\/td>\n<td><p>TaskRunReasonToBeRetried is the reason set when the last TaskRun execution failed and will be retried<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRunResult\">TaskRunResult\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.StepState\">StepState<\/a>, <a href=\"#tekton.dev\/v1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>TaskRunStepResult is a type alias of TaskRunResult<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name the given name<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1.ResultsType\">\nResultsType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Type is the user-specified type of the result. The possible type\nis currently &ldquo;string&rdquo; and will support &ldquo;array&rdquo; in following work.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Value the given value of the result<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRunSidecarSpec\">TaskRunSidecarSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineTaskRunSpec\">PipelineTaskRunSpec<\/a>, <a href=\"#tekton.dev\/v1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRunSidecarSpec is used to override the values of a Sidecar in the corresponding Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The name of the Sidecar to override.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>computeResources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The resource requirements to apply to the Sidecar.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRunSpec\">TaskRunSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskRun\">TaskRun<\/a>)\n<\/p>\n<div>\n<p>TaskRunSpec defines the desired state of TaskRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>debug<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunDebug\">\nTaskRunDebug\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRef\">\nTaskRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>no more than one of the TaskRef and TaskSpec may be specified.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskSpec\">\nTaskSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specifying PipelineSpec can be disabled by setting\n<code>disable-inline-spec<\/code> feature flag..<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunSpecStatus\">\nTaskRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a TaskRun (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>statusMessage<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunSpecStatusMessage\">\nTaskRunSpecStatusMessage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status message for cancellation.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retries<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Retries represents how many times this TaskRun should be retried in the event of task failure.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which one retry attempt times out. 
Defaults to 1 hour.\nRefer Go&rsquo;s ParseDuration documentation for expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>podTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/unversioned.Template\">\nTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>PodTemplate holds pod specific configuration<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepSpecs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunStepSpec\">\n[]TaskRunStepSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specs to apply to Steps in this TaskRun.\nIf a field is specified in both a Step and a StepSpec,\nthe value from the StepSpec will be used.\nThis field is only supported when the alpha feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecarSpecs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunSidecarSpec\">\n[]TaskRunSidecarSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specs to apply to Sidecars in this TaskRun.\nIf a field is specified in both a Sidecar and a SidecarSpec,\nthe value from the SidecarSpec will be used.\nThis field is only supported when the alpha feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>computeResources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Compute resources to use for this TaskRun<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRunSpecStatus\">TaskRunSpecStatus\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a 
href=\"#tekton.dev\/v1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRunSpecStatus defines the TaskRun spec status the user can provide<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1.TaskRunSpecStatusMessage\">TaskRunSpecStatusMessage\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRunSpecStatusMessage defines human readable status messages for the TaskRun.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Value<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody><tr><td><p>&#34;TaskRun cancelled as the PipelineRun it belongs to has been cancelled.&#34;<\/p><\/td>\n<td><p>TaskRunCancelledByPipelineMsg indicates that the PipelineRun of which this\nTaskRun was a part of has been cancelled.<\/p>\n<\/td>\n<\/tr><tr><td><p>&#34;TaskRun cancelled as the PipelineRun it belongs to has timed out.&#34;<\/p><\/td>\n<td><p>TaskRunCancelledByPipelineTimeoutMsg indicates that the TaskRun was cancelled because the PipelineRun running it timed out.<\/p>\n<\/td>\n<\/tr><\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRunStatus\">TaskRunStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskRun\">TaskRun<\/a>, <a href=\"#tekton.dev\/v1.PipelineRunTaskRunStatus\">PipelineRunTaskRunStatus<\/a>, <a href=\"#tekton.dev\/v1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>TaskRunStatus defines the observed state of TaskRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>Status<\/code><br\/>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/knative.dev\/pkg\/apis\/duck\/v1#Status\">\nknative.dev\/pkg\/apis\/duck\/v1.Status\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>Status<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>TaskRunStatusFields<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1.TaskRunStatusFields\">\nTaskRunStatusFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>TaskRunStatusFields<\/code> are embedded into this type.)\n<\/p>\n<p>TaskRunStatusFields inlines the status fields.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRunStatusFields\">TaskRunStatusFields\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskRunStatus\">TaskRunStatus<\/a>)\n<\/p>\n<div>\n<p>TaskRunStatusFields holds the fields of TaskRun&rsquo;s status.  This is defined\nseparately and inlined so that other types can readily consume these fields\nvia duck typing.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>podName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>PodName is the name of the pod responsible for executing this task&rsquo;s steps.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>startTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>StartTime is the time the build is actually started.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>completionTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>CompletionTime is the time the build completed.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>steps<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.StepState\">\n[]StepState\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Steps describes the state of each build step container.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retriesStatus<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunStatus\">\n[]TaskRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>RetriesStatus contains the history of TaskRunStatus in case of a retry in order to keep record of 
failures.\nAll TaskRunStatus entries stored in RetriesStatus omit the date fields, as they would be redundant.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskRunResult\">\n[]TaskRunResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results are the list of results written out by the task&rsquo;s containers<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>artifacts<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Artifacts\">\nArtifacts\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Artifacts are the list of artifacts written out by the task&rsquo;s containers<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecars<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.SidecarState\">\n[]SidecarState\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The list has one entry per sidecar in the manifest. Each entry\nrepresents the image ID of the corresponding sidecar.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskSpec\">\nTaskSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>TaskSpec contains the Spec from the dereferenced Task definition used to instantiate this TaskRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>provenance<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Provenance\">\nProvenance\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs\/outputs, etc.).<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spanContext<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<p>SpanContext contains tracing span context fields<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskRunStepSpec\">TaskRunStepSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineTaskRunSpec\">PipelineTaskRunSpec<\/a>, <a href=\"#tekton.dev\/v1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRunStepSpec is used to override the values of a Step in the 
corresponding Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The name of the Step to override.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>computeResources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The resource requirements to apply to the Step.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TaskSpec\">TaskSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.Task\">Task<\/a>, <a href=\"#tekton.dev\/v1.EmbeddedTask\">EmbeddedTask<\/a>, <a href=\"#tekton.dev\/v1.TaskRunSpec\">TaskRunSpec<\/a>, <a href=\"#tekton.dev\/v1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>TaskSpec defines the desired state of Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params is a list of input parameters required to run the task. 
Params\nmust be supplied as inputs in TaskRuns unless they declare a default\nvalue.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DisplayName is a user-facing name of the task that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the task that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>steps<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Step\">\n[]Step\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Steps are the steps of the build; each step is run sequentially with the\nsource mounted into \/workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumes<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volume-v1-core\">\n[]Kubernetes core\/v1.Volume\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Volumes is a collection of volumes that are available to mount into the\nsteps of the build.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.StepTemplate\">\nStepTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>StepTemplate can be used as the basis for all step containers within the\nTask, so that the steps inherit settings on the base container.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecars<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.Sidecar\">\n[]Sidecar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Sidecars are run alongside the Task&rsquo;s step containers. 
They begin before\nthe steps start and end after the steps complete.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.WorkspaceDeclaration\">\n[]WorkspaceDeclaration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Workspaces are the volumes that this Task requires.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.TaskResult\">\n[]TaskResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Results are values that this Task can output<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.TimeoutFields\">TimeoutFields\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRunSpec\">PipelineRunSpec<\/a>)\n<\/p>\n<div>\n<p>TimeoutFields allows granular specification of pipeline, task, and finally timeouts<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipeline<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Pipeline sets the maximum allowed duration for execution of the entire pipeline. 
The sum of individual timeouts for tasks and finally must not exceed this value.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tasks<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Tasks sets the maximum allowed duration of this pipeline&rsquo;s tasks<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>finally<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Finally sets the maximum allowed duration of this pipeline&rsquo;s finally<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.WhenExpression\">WhenExpression\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.ChildStatusReference\">ChildStatusReference<\/a>, <a href=\"#tekton.dev\/v1.PipelineRunRunStatus\">PipelineRunRunStatus<\/a>, <a href=\"#tekton.dev\/v1.PipelineRunTaskRunStatus\">PipelineRunTaskRunStatus<\/a>, <a href=\"#tekton.dev\/v1.SkippedTask\">SkippedTask<\/a>)\n<\/p>\n<div>\n<p>WhenExpression allows a PipelineTask to declare expressions to be evaluated before the Task is run\nto determine whether the Task should be executed or skipped<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>input<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Input is the string for guard checking which can be a static input or an output from a parent Task<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>operator<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/selection.Operator\n<\/em>\n<\/td>\n<td>\n<p>Operator that represents an Input&rsquo;s relationship to the values<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>values<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<p>Values is an array of strings, which is compared against the input, for guard checking\nIt must be 
non-empty<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>cel<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>CEL is a string containing a Common Expression Language expression, which can be used to conditionally execute\nthe task based on the result of the expression evaluation.\nMore info about CEL syntax: <a href=\"https:\/\/github.com\/google\/cel-spec\/blob\/master\/doc\/langdef.md\">https:\/\/github.com\/google\/cel-spec\/blob\/master\/doc\/langdef.md<\/a><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.WhenExpressions\">WhenExpressions\n(<code>[]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1.WhenExpression<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineTask\">PipelineTask<\/a>, <a href=\"#tekton.dev\/v1.Step\">Step<\/a>)\n<\/p>\n<div>\n<p>WhenExpressions are used to specify whether a Task should be executed or skipped.\nAll of them need to evaluate to True for a guarded Task to be executed.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1.WorkspaceBinding\">WorkspaceBinding\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineRunSpec\">PipelineRunSpec<\/a>, <a href=\"#tekton.dev\/v1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>WorkspaceBinding maps a Task&rsquo;s declared workspace to a Volume.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the workspace populated by the volume.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>subPath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SubPath is optionally a directory on the volume which should be used\nfor this binding (i.e. 
the volume will be mounted at this sub directory).<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeClaimTemplate<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#persistentvolumeclaim-v1-core\">\nKubernetes core\/v1.PersistentVolumeClaim\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>VolumeClaimTemplate is a template for a claim that will be created in the same namespace.\nThe PipelineRun controller is responsible for creating a unique claim for each instance of PipelineRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>persistentVolumeClaim<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#persistentvolumeclaimvolumesource-v1-core\">\nKubernetes core\/v1.PersistentVolumeClaimVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PersistentVolumeClaimVolumeSource represents a reference to a\nPersistentVolumeClaim in the same namespace. Either this OR EmptyDir can be used.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>emptyDir<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#emptydirvolumesource-v1-core\">\nKubernetes core\/v1.EmptyDirVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>EmptyDir represents a temporary directory that shares a Task&rsquo;s lifetime.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#emptydir\">https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#emptydir<\/a>\nEither this OR PersistentVolumeClaim can be used.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>configMap<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#configmapvolumesource-v1-core\">\nKubernetes core\/v1.ConfigMapVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ConfigMap represents a configMap that should populate this 
workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secret<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#secretvolumesource-v1-core\">\nKubernetes core\/v1.SecretVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Secret represents a secret that should populate this workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>projected<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#projectedvolumesource-v1-core\">\nKubernetes core\/v1.ProjectedVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Projected represents a projected volume that should populate this workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>csi<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#csivolumesource-v1-core\">\nKubernetes core\/v1.CSIVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>CSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.WorkspaceDeclaration\">WorkspaceDeclaration\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>WorkspaceDeclaration is a declaration of a volume that a Task requires.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name by which you can bind the volume at runtime.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is an optional human readable description of this volume.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>mountPath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>MountPath overrides the directory 
that the volume will be made available at.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>readOnly<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>ReadOnly dictates whether a mounted volume is writable. By default this\nfield is false and so mounted volumes are writable.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>optional<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>Optional marks a Workspace as not being required in TaskRuns. By default\nthis field is false and so declared workspaces are required.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.WorkspacePipelineTaskBinding\">WorkspacePipelineTaskBinding\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>WorkspacePipelineTaskBinding describes how a workspace passed into the pipeline should be\nmapped to a task&rsquo;s declared workspace.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the workspace as declared by the task<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspace<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspace is the name of the workspace declared by the pipeline<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>subPath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SubPath is optionally a directory on the volume which should be used\nfor this binding (i.e. 
the volume will be mounted at this sub directory).<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1.WorkspaceUsage\">WorkspaceUsage\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1.Sidecar\">Sidecar<\/a>, <a href=\"#tekton.dev\/v1.Step\">Step<\/a>)\n<\/p>\n<div>\n<p>WorkspaceUsage is used by a Step or Sidecar to declare that it wants isolated access\nto a Workspace defined in a Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the workspace this Step or Sidecar wants access to.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>mountPath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>MountPath is the path that the workspace should be mounted to inside the Step or Sidecar,\noverriding any MountPath specified in the Task&rsquo;s WorkspaceDeclaration.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr\/>\n<h2 id=\"tekton.dev\/v1alpha1\">tekton.dev\/v1alpha1<\/h2>\n<div>\n<p>Package v1alpha1 contains API Schema definitions for the pipeline v1alpha1 API group<\/p>\n<\/div>\nResource Types:\n<ul><li>\n<a href=\"#tekton.dev\/v1alpha1.Run\">Run<\/a>\n<\/li><li>\n<a href=\"#tekton.dev\/v1alpha1.StepAction\">StepAction<\/a>\n<\/li><li>\n<a href=\"#tekton.dev\/v1alpha1.VerificationPolicy\">VerificationPolicy<\/a>\n<\/li><li>\n<a href=\"#tekton.dev\/v1alpha1.PipelineResource\">PipelineResource<\/a>\n<\/li><\/ul>\n<h3 id=\"tekton.dev\/v1alpha1.Run\">Run\n<\/h3>\n<div>\n<p>Run represents a single execution of a Custom Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1alpha1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>Run<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a 
href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.RunSpec\">\nRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>ref<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRef\">\nTaskRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.EmbeddedRunSpec\">\nEmbeddedRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec is a specification of a custom task<\/p>\n<br\/>\n<br\/>\n<table>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.RunSpecStatus\">\nRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a run (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>statusMessage<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.RunSpecStatusMessage\">\nRunSpecStatusMessage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status message for cancellation.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retries<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for propagating retries count to custom tasks<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>podTemplate<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/unversioned.Template\">\nTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PodTemplate holds pod-specific configuration<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which the custom-task times out.\nRefer to Go&rsquo;s ParseDuration documentation for the expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.RunStatus\">\nRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.StepAction\">StepAction\n<\/h3>\n<div>\n<p>StepAction represents the actionable components of Step.\nThe Step can only reference it from the cluster or using remote resolution.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1alpha1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>StepAction<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> 
field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.StepActionSpec\">\nStepActionSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec holds the desired state of the Step from the client<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the stepaction that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>image<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image reference name to run for this StepAction.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>command<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Entrypoint array. Not executed within a shell.\nThe image&rsquo;s ENTRYPOINT is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. 
Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>args<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Arguments to the entrypoint.\nThe image&rsquo;s CMD is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>env<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envvar-v1-core\">\n[]Kubernetes core\/v1.EnvVar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of environment variables to set in the container.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>script<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Script is the contents of an executable file to execute.<\/p>\n<p>If Script is not empty, the Step cannot have a Command and the Args will be passed to the 
Script.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workingDir<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Step&rsquo;s working directory.\nIf not specified, the container runtime&rsquo;s default will be used, which\nmight be configured in the container image.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params is a list of input parameters required to run the stepAction.\nParams must be supplied as inputs in Steps unless they declare a default value.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.StepResult\">\n[]StepResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results are values that this StepAction can output<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>securityContext<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#securitycontext-v1-core\">\nKubernetes core\/v1.SecurityContext\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecurityContext defines the security options the Step should be run with.\nIf set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/<\/a>\nThe value set in StepAction will take precedence over the value from Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeMounts<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumemount-v1-core\">\n[]Kubernetes core\/v1.VolumeMount\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Volumes to mount into the Step&rsquo;s filesystem.\nCannot be 
updated.<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.VerificationPolicy\">VerificationPolicy\n<\/h3>\n<div>\n<p>VerificationPolicy defines the rules to verify Tekton resources.\nVerificationPolicy can configure the mapping from resources to a list of public\nkeys, so when verifying the resources we can use the corresponding public keys.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1alpha1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>VerificationPolicy<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.VerificationPolicySpec\">\nVerificationPolicySpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Spec holds the desired state of the VerificationPolicy.<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.ResourcePattern\">\n[]ResourcePattern\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Resources defines the patterns of resource sources that should be subject to this policy.\nFor example, we may want to apply this Policy to resources from a certain GitHub repo.\nThen the ResourcesPattern should be a valid regex. E.g. 
If using the git resolver and we want to configure keys for a certain git repo,\n<code>ResourcesPattern<\/code> can be <code>https:\/\/github.com\/tektoncd\/catalog.git<\/code>; the regex will be used to filter out those resources.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>authorities<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.Authority\">\n[]Authority\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Authorities defines the rules for validating signatures.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>mode<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.ModeType\">\nModeType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Mode controls whether a failing policy will fail the taskrun\/pipelinerun, or only log warnings:\nenforce - fail the taskrun\/pipelinerun if verification fails (default)\nwarn - don&rsquo;t fail the taskrun\/pipelinerun if verification fails but log warnings<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.PipelineResource\">PipelineResource\n<\/h3>\n<div>\n<p>PipelineResource describes a resource that is an input to or output from a\nTask.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1alpha1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>PipelineResource<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1alpha1.PipelineResourceSpec\">\nPipelineResourceSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Spec holds the desired state of the PipelineResource from the client<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the resource that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.ResourceParam\">\n[]ResourceParam\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secrets<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.SecretParam\">\n[]SecretParam\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Secrets to fetch to populate some of the resource fields<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.PipelineResourceStatus\">\nPipelineResourceStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status is used to communicate the observed state of the PipelineResource from\nthe controller, but was unused as there is no controller for PipelineResource.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.Authority\">Authority\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.VerificationPolicySpec\">VerificationPolicySpec<\/a>)\n<\/p>\n<div>\n<p>The Authority block defines the keys for validating signatures.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name for this authority.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>key<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.KeyRef\">\nKeyRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Key contains the public key to validate the 
resource.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.EmbeddedRunSpec\">EmbeddedRunSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.RunSpec\">RunSpec<\/a>)\n<\/p>\n<div>\n<p>EmbeddedRunSpec allows custom task definitions to be embedded<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTaskMetadata\">\nPipelineTaskMetadata\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/runtime.RawExtension\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec is a specification of a custom task<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>-<\/code><br\/>\n<em>\n[]byte\n<\/em>\n<\/td>\n<td>\n<p>Raw is the underlying serialization of this object.<\/p>\n<p>TODO: Determine how to detect ContentType and ContentEncoding of &lsquo;Raw&rsquo; data.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>-<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/runtime.Object\n<\/em>\n<\/td>\n<td>\n<p>Object can hold a representation of this extension - useful for working with versioned\nstructs.<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.HashAlgorithm\">HashAlgorithm\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.KeyRef\">KeyRef<\/a>)\n<\/p>\n<div>\n<p>HashAlgorithm defines the hash algorithm used for the public key<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1alpha1.KeyRef\">KeyRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.Authority\">Authority<\/a>)\n<\/p>\n<div>\n<p>KeyRef defines the reference to a public key<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>secretRef<\/code><br\/>\n<em>\n<a 
href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#secretreference-v1-core\">\nKubernetes core\/v1.SecretReference\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecretRef sets a reference to a secret with the key.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>data<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Data contains the inline public key.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kms<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>KMS contains the KMS URL of the public key.\nSupported formats differ based on the KMS system used.\nOne example of a KMS URL could be:\ngcpkms:\/\/projects\/[PROJECT]\/locations\/[LOCATION]\/keyRings\/[KEYRING]\/cryptoKeys\/[KEY]\/cryptoKeyVersions\/[KEY_VERSION]\nFor more examples, please refer to <a href=\"https:\/\/docs.sigstore.dev\/cosign\/kms_support\">https:\/\/docs.sigstore.dev\/cosign\/kms_support<\/a>.\nNote that KMS is not supported yet.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>hashAlgorithm<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.HashAlgorithm\">\nHashAlgorithm\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>HashAlgorithm always defaults to sha256 if the algorithm hasn&rsquo;t been explicitly set<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.ModeType\">ModeType\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.VerificationPolicySpec\">VerificationPolicySpec<\/a>)\n<\/p>\n<div>\n<p>ModeType indicates the type of a mode for VerificationPolicy<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1alpha1.ResourcePattern\">ResourcePattern\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.VerificationPolicySpec\">VerificationPolicySpec<\/a>)\n<\/p>\n<div>\n<p>ResourcePattern defines the pattern of the resource 
source<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pattern<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Pattern defines a resource pattern. Regex is created to filter resources based on <code>Pattern<\/code>.\nExample patterns:\nGitHub resource: <a href=\"https:\/\/github.com\/tektoncd\/catalog.git\">https:\/\/github.com\/tektoncd\/catalog.git<\/a>, <a href=\"https:\/\/github.com\/tektoncd\/*\">https:\/\/github.com\/tektoncd\/*<\/a>\nBundle resource: gcr.io\/tekton-releases\/catalog\/upstream\/git-clone, gcr.io\/tekton-releases\/catalog\/upstream\/*\nHub resource: <a href=\"https:\/\/artifacthub.io\/*\">https:\/\/artifacthub.io\/*<\/a><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.RunReason\">RunReason\n(<code>string<\/code> alias)<\/h3>\n<div>\n<p>RunReason is an enum used to store all Run reasons for the Succeeded condition that are controlled by the Run itself.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1alpha1.RunSpec\">RunSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.Run\">Run<\/a>)\n<\/p>\n<div>\n<p>RunSpec defines the desired state of Run<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>ref<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRef\">\nTaskRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.EmbeddedRunSpec\">\nEmbeddedRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec is a specification of a custom task<\/p>\n<br\/>\n<br\/>\n<table>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1alpha1.RunSpecStatus\">\nRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a run (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>statusMessage<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.RunSpecStatusMessage\">\nRunSpecStatusMessage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status message for cancellation.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retries<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for propagating retries count to custom tasks<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>podTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/unversioned.Template\">\nTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PodTemplate holds pod-specific configuration<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which the custom-task times out.\nRefer to Go&rsquo;s ParseDuration documentation for the expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.RunSpecStatus\">RunSpecStatus\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.RunSpec\">RunSpec<\/a>)\n<\/p>\n<div>\n<p>RunSpecStatus defines the Run spec status the user can provide<\/p>\n<\/div>\n<h3 
id=\"tekton.dev\/v1alpha1.RunSpecStatusMessage\">RunSpecStatusMessage\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.RunSpec\">RunSpec<\/a>)\n<\/p>\n<div>\n<p>RunSpecStatusMessage defines human-readable status messages for the Run.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1alpha1.StepActionObject\">StepActionObject\n<\/h3>\n<div>\n<p>StepActionObject is implemented by StepAction<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1alpha1.StepActionSpec\">StepActionSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.StepAction\">StepAction<\/a>)\n<\/p>\n<div>\n<p>StepActionSpec contains the actionable components of a step.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the stepaction that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>image<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image reference name to run for this StepAction.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>command<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Entrypoint array. Not executed within a shell.\nThe image&rsquo;s ENTRYPOINT is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. 
Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>args<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Arguments to the entrypoint.\nThe image&rsquo;s CMD is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>env<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envvar-v1-core\">\n[]Kubernetes core\/v1.EnvVar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of environment variables to set in the container.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>script<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Script is the contents of an executable file to execute.<\/p>\n<p>If Script is not empty, the Step cannot have a Command and the Args will be passed to the 
Script.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workingDir<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Step&rsquo;s working directory.\nIf not specified, the container runtime&rsquo;s default will be used, which\nmight be configured in the container image.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params is a list of input parameters required to run the stepAction.\nParams must be supplied as inputs in Steps unless they declare a default value.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.StepResult\">\n[]StepResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results are values that this StepAction can output<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>securityContext<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#securitycontext-v1-core\">\nKubernetes core\/v1.SecurityContext\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecurityContext defines the security options the Step should be run with.\nIf set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/<\/a>\nThe value set in StepAction will take precedence over the value from Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeMounts<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumemount-v1-core\">\n[]Kubernetes core\/v1.VolumeMount\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Volumes to mount into the Step&rsquo;s filesystem.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1alpha1.VerificationPolicySpec\">VerificationPolicySpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.VerificationPolicy\">VerificationPolicy<\/a>)\n<\/p>\n<div>\n<p>VerificationPolicySpec defines the patterns and authorities.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.ResourcePattern\">\n[]ResourcePattern\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Resources defines the patterns of resources sources that should be subject to this policy.\nFor example, we may want to apply this Policy from a certain GitHub repo.\nThen the ResourcesPattern should be valid regex. E.g. If using gitresolver, and we want to config keys from a certain git repo.\n<code>ResourcesPattern<\/code> can be <code>https:\/\/github.com\/tektoncd\/catalog.git<\/code>, we will use regex to filter out those resources.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>authorities<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.Authority\">\n[]Authority\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Authorities defines the rules for validating signatures.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>mode<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.ModeType\">\nModeType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Mode controls whether a failing policy will fail the taskrun\/pipelinerun, or only log the warnings\nenforce - fail the taskrun\/pipelinerun if verification fails (default)\nwarn - don&rsquo;t fail the taskrun\/pipelinerun if verification fails but log warnings<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.PipelineResourceSpec\">PipelineResourceSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.PipelineResource\">PipelineResource<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineResourceBinding\">PipelineResourceBinding<\/a>)\n<\/p>\n<div>\n<p>PipelineResourceSpec defines an 
individual resource used in the pipeline.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the resource that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.ResourceParam\">\n[]ResourceParam\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secrets<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.SecretParam\">\n[]SecretParam\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Secrets to fetch to populate some of the resource fields<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.PipelineResourceStatus\">PipelineResourceStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.PipelineResource\">PipelineResource<\/a>)\n<\/p>\n<div>\n<p>PipelineResourceStatus does not contain anything because PipelineResources on their own\ndo not have a status<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1alpha1.ResourceDeclaration\">ResourceDeclaration\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskResource\">TaskResource<\/a>)\n<\/p>\n<div>\n<p>ResourceDeclaration defines an input or output PipelineResource declared as a requirement\nby another type such as a Task or Condition. 
The Name field will be used to refer to these\nPipelineResources within the type&rsquo;s definition, and when provided as an Input, the Name will be the\npath to the volume mounted containing this PipelineResource as an input (e.g.\nan input Resource named <code>workspace<\/code> will be mounted at <code>\/workspace<\/code>).<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name declares the name by which a resource is referenced in the\ndefinition. Resources may be referenced by name in the definition of a\nTask&rsquo;s steps.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Type is the type of this resource;<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the declared resource that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>targetPath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TargetPath is the path in workspace directory where the resource\nwill be copied.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>optional<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>Optional declares the resource as optional.\nBy default optional is set to false which makes a resource required.\noptional: true - the resource is considered optional\noptional: false - the resource is considered required (equivalent of not specifying it)<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.ResourceParam\">ResourceParam\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.PipelineResourceSpec\">PipelineResourceSpec<\/a>)\n<\/p>\n<div>\n<p>ResourceParam declares a string value to use for the parameter called Name, and is used 
in\nthe specific context of PipelineResources.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.SecretParam\">SecretParam\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.PipelineResourceSpec\">PipelineResourceSpec<\/a>)\n<\/p>\n<div>\n<p>SecretParam indicates which secret can be used to populate a field of the resource<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>fieldName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretKey<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secretName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.RunResult\">RunResult\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.RunStatusFields\">RunStatusFields<\/a>)\n<\/p>\n<div>\n<p>RunResult used to describe the results of a task<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name the given name<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Value the given value of the result<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.RunStatus\">RunStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.Run\">Run<\/a>, <a 
href=\"#tekton.dev\/v1alpha1.RunStatusFields\">RunStatusFields<\/a>)\n<\/p>\n<div>\n<p>RunStatus defines the observed state of Run<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>Status<\/code><br\/>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/knative.dev\/pkg\/apis\/duck\/v1#Status\">\nknative.dev\/pkg\/apis\/duck\/v1.Status\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>Status<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>RunStatusFields<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.RunStatusFields\">\nRunStatusFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>RunStatusFields<\/code> are embedded into this type.)\n<\/p>\n<p>RunStatusFields inlines the status fields.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1alpha1.RunStatusFields\">RunStatusFields\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.RunStatus\">RunStatus<\/a>)\n<\/p>\n<div>\n<p>RunStatusFields holds the fields of Run&rsquo;s status.  
This is defined\nseparately and inlined so that other types can readily consume these fields\nvia duck typing.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>startTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>StartTime is the time the build is actually started.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>completionTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>CompletionTime is the time the build completed.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.RunResult\">\n[]RunResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results reports any output result values to be consumed by later\ntasks in a pipeline.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retriesStatus<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.RunStatus\">\n[]RunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>RetriesStatus contains the history of RunStatus, in case of a retry.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>extraFields<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/runtime.RawExtension\n<\/em>\n<\/td>\n<td>\n<p>ExtraFields holds arbitrary fields provided by the custom task\ncontroller.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr\/>\n<h2 id=\"tekton.dev\/v1beta1\">tekton.dev\/v1beta1<\/h2>\n<div>\n<p>Package v1beta1 contains API Schema definitions for the pipeline v1beta1 API group<\/p>\n<\/div>\nResource Types:\n<ul><li>\n<a href=\"#tekton.dev\/v1beta1.ClusterTask\">ClusterTask<\/a>\n<\/li><li>\n<a href=\"#tekton.dev\/v1beta1.CustomRun\">CustomRun<\/a>\n<\/li><li>\n<a 
href=\"#tekton.dev\/v1beta1.Pipeline\">Pipeline<\/a>\n<\/li><li>\n<a href=\"#tekton.dev\/v1beta1.PipelineRun\">PipelineRun<\/a>\n<\/li><li>\n<a href=\"#tekton.dev\/v1beta1.StepAction\">StepAction<\/a>\n<\/li><li>\n<a href=\"#tekton.dev\/v1beta1.Task\">Task<\/a>\n<\/li><li>\n<a href=\"#tekton.dev\/v1beta1.TaskRun\">TaskRun<\/a>\n<\/li><\/ul>\n<h3 id=\"tekton.dev\/v1beta1.ClusterTask\">ClusterTask\n<\/h3>\n<div>\n<p>ClusterTask is a Task with a cluster scope. ClusterTasks are used to\nrepresent Tasks that should be publicly addressable from any namespace in the\ncluster.<\/p>\n<p>Deprecated: Please use the cluster resolver instead.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1beta1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>ClusterTask<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskSpec\">\nTaskSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec holds the desired state of the Task from the client<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResources\">\nTaskResources\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Resources is a list of input and output resources to run the task.\nResources are represented in TaskRuns as bindings to instances of\nPipelineResources.<\/p>\n<p>Deprecated: Unused, preserved only for backwards 
compatibility<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params is a list of input parameters required to run the task. Params\nmust be supplied as inputs in TaskRuns unless they declare a default\nvalue.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DisplayName is a user-facing name of the task that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the task that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>steps<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Step\">\n[]Step\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Steps are the steps of the build; each step is run sequentially with the\nsource mounted into \/workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumes<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volume-v1-core\">\n[]Kubernetes core\/v1.Volume\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Volumes is a collection of volumes that are available to mount into the\nsteps of the build.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.StepTemplate\">\nStepTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>StepTemplate can be used as the basis for all step containers within the\nTask, so that the steps inherit settings on the base container.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecars<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Sidecar\">\n[]Sidecar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Sidecars are run alongside the Task&rsquo;s step containers. 
They begin before\nthe steps start and end after the steps complete.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceDeclaration\">\n[]WorkspaceDeclaration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Workspaces are the volumes that this Task requires.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResult\">\n[]TaskResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Results are values that this Task can output<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.CustomRun\">CustomRun\n<\/h3>\n<div>\n<p>CustomRun represents a single execution of a Custom Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1beta1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>CustomRun<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CustomRunSpec\">\nCustomRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>customRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRef\">\nTaskRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>customSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.EmbeddedCustomRunSpec\">\nEmbeddedCustomRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec is a specification of a custom 
task<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CustomRunSpecStatus\">\nCustomRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a customrun (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>statusMessage<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CustomRunSpecStatusMessage\">\nCustomRunSpecStatusMessage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status message for cancellation.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retries<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for propagating retries count to custom tasks<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which the custom-task times out.\nRefer to Go&rsquo;s ParseDuration documentation for expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CustomRunStatus\">\nCustomRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1beta1.Pipeline\">Pipeline\n<\/h3>\n<div>\n<p>Pipeline describes a list of Tasks to execute. It expresses how outputs\nof tasks feed into inputs of subsequent tasks.<\/p>\n<p>Deprecated: Please use v1.Pipeline instead.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1beta1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>Pipeline<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineSpec\">\nPipelineSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec holds the desired state of the Pipeline from the client<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DisplayName is a user-facing name of the pipeline that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the pipeline that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineDeclaredResource\">\n[]PipelineDeclaredResource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tasks<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTask\">\n[]PipelineTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Tasks 
declares the graph of Tasks that execute when this Pipeline is run.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params declares a list of input parameters that must be supplied when\nthis Pipeline is run.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineWorkspaceDeclaration\">\n[]PipelineWorkspaceDeclaration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces declares a set of named workspaces that are expected to be\nprovided by a PipelineRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineResult\">\n[]PipelineResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results are values that this pipeline can output once run<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>finally<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTask\">\n[]PipelineTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Finally declares the list of Tasks that execute just before leaving the Pipeline\ni.e. either after all Tasks are finished executing successfully\nor after a failure which would result in ending the Pipeline<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineRun\">PipelineRun\n<\/h3>\n<div>\n<p>PipelineRun represents a single execution of a Pipeline. PipelineRuns are how\nthe graph of Tasks declared in a Pipeline are executed; they specify inputs\nto Pipelines such as parameter values and capture operational aspects of the\nTasks execution such as service account and tolerations. 
Creating a\nPipelineRun creates TaskRuns for Tasks in the referenced Pipeline.<\/p>\n<p>Deprecated: Please use v1.PipelineRun instead.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1beta1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>PipelineRun<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineRunSpec\">\nPipelineRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>pipelineRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineRef\">\nPipelineRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineSpec\">\nPipelineSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specifying PipelineSpec can be disabled by setting the\n<code>disable-inline-spec<\/code> feature flag.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineResourceBinding\">\n[]PipelineResourceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Resources is a list of bindings specifying which actual instances of\nPipelineResources to use for the resources the Pipeline has declared\nit needs.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params is a list of parameter names and values.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineRunSpecStatus\">\nPipelineRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a pipelinerun (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeouts<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TimeoutFields\">\nTimeoutFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which the Pipeline times out.\nCurrently three keys are accepted in the map\npipeline, tasks and finally\nwith Timeouts.pipeline &gt;= Timeouts.tasks + Timeouts.finally<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Timeout is the Time after which the Pipeline times out.\nDefaults to never.\nRefer to Go&rsquo;s ParseDuration documentation for expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<p>Deprecated: use pipelineRunSpec.Timeouts.Pipeline instead<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>podTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/unversioned.Template\">\nTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>PodTemplate holds pod specific configuration<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces holds a set of workspace bindings that must match names\nwith those declared in the 
pipeline.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRunSpecs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTaskRunSpec\">\n[]PipelineTaskRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskRunSpecs holds a set of runtime specs<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineRunStatus\">\nPipelineRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.StepAction\">StepAction\n<\/h3>\n<div>\n<p>StepAction represents the actionable components of Step.\nThe Step can only reference it from the cluster or using remote resolution.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1beta1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>StepAction<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.StepActionSpec\">\nStepActionSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec holds the desired state of the Step from the client<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the stepaction that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>image<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image reference name to run 
for this StepAction.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>command<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Entrypoint array. Not executed within a shell.\nThe image&rsquo;s ENTRYPOINT is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>args<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Arguments to the entrypoint.\nThe image&rsquo;s CMD is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. 
Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>env<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envvar-v1-core\">\n[]Kubernetes core\/v1.EnvVar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of environment variables to set in the container.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>script<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Script is the contents of an executable file to execute.<\/p>\n<p>If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workingDir<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Step&rsquo;s working directory.\nIf not specified, the container runtime&rsquo;s default will be used, which\nmight be configured in the container image.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params is a list of input parameters required to run the stepAction.\nParams must be supplied as inputs in Steps unless they declare a default value.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.StepResult\">\n[]StepResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results are values that this StepAction can output<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>securityContext<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#securitycontext-v1-core\">\nKubernetes 
core\/v1.SecurityContext\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecurityContext defines the security options the Step should be run with.\nIf set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/<\/a>\nThe value set in StepAction will take precedence over the value from Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeMounts<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumemount-v1-core\">\n[]Kubernetes core\/v1.VolumeMount\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Volumes to mount into the Step&rsquo;s filesystem.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.Task\">Task\n<\/h3>\n<div>\n<p>Task represents a collection of sequential steps that are run as part of a\nPipeline using a set of inputs and producing a set of outputs. 
Tasks execute\nwhen TaskRuns are created that provide the input parameters and resources and\noutput resources the Task requires.<\/p>\n<p>Deprecated: Please use v1.Task instead.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1beta1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>Task<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskSpec\">\nTaskSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec holds the desired state of the Task from the client<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResources\">\nTaskResources\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Resources is a list of input and output resources to run the task.\nResources are represented in TaskRuns as bindings to instances of\nPipelineResources.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params is a list of input parameters required to run the task. 
Params\nmust be supplied as inputs in TaskRuns unless they declare a default\nvalue.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DisplayName is a user-facing name of the task that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the task that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>steps<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Step\">\n[]Step\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Steps are the steps of the build; each step is run sequentially with the\nsource mounted into \/workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumes<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volume-v1-core\">\n[]Kubernetes core\/v1.Volume\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Volumes is a collection of volumes that are available to mount into the\nsteps of the build.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.StepTemplate\">\nStepTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>StepTemplate can be used as the basis for all step containers within the\nTask, so that the steps inherit settings on the base container.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecars<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Sidecar\">\n[]Sidecar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Sidecars are run alongside the Task&rsquo;s step containers. 
They begin before\nthe steps start and end after the steps complete.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceDeclaration\">\n[]WorkspaceDeclaration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Workspaces are the volumes that this Task requires.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResult\">\n[]TaskResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Results are values that this Task can output<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskRun\">TaskRun\n<\/h3>\n<div>\n<p>TaskRun represents a single execution of a Task. TaskRuns are how the steps\nspecified in a Task are executed; they specify the parameters and resources\nused to run the steps in a Task.<\/p>\n<p>Deprecated: Please use v1.TaskRun instead.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\nstring<\/td>\n<td>\n<code>\ntekton.dev\/v1beta1\n<\/code>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\nstring\n<\/td>\n<td><code>TaskRun<\/code><\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#objectmeta-v1-meta\">\nKubernetes meta\/v1.ObjectMeta\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\nRefer to the Kubernetes API documentation for the fields of the\n<code>metadata<\/code> field.\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunSpec\">\nTaskRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>debug<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunDebug\">\nTaskRunDebug\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunResources\">\nTaskRunResources\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRef\">\nTaskRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>No more than one of TaskRef and TaskSpec may be specified.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskSpec\">\nTaskSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specifying TaskSpec can be disabled by setting the\n<code>disable-inline-spec<\/code> feature flag.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunSpecStatus\">\nTaskRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a TaskRun (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>statusMessage<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunSpecStatusMessage\">\nTaskRunSpecStatusMessage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status message for cancellation.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retries<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Retries represents how many times this TaskRun should be retried in the event of Task failure.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which one retry attempt times out. 
Defaults to 1 hour.\nRefer Go&rsquo;s ParseDuration documentation for expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>podTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/unversioned.Template\">\nTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>PodTemplate holds pod specific configuration<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepOverrides<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunStepOverride\">\n[]TaskRunStepOverride\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Overrides to apply to Steps in this TaskRun.\nIf a field is specified in both a Step and a StepOverride,\nthe value from the StepOverride will be used.\nThis field is only supported when the alpha feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecarOverrides<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunSidecarOverride\">\n[]TaskRunSidecarOverride\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Overrides to apply to Sidecars in this TaskRun.\nIf a field is specified in both a Sidecar and a SidecarOverride,\nthe value from the SidecarOverride will be used.\nThis field is only supported when the alpha feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>computeResources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Compute resources to use for this TaskRun<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1beta1.TaskRunStatus\">\nTaskRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.Algorithm\">Algorithm\n(<code>string<\/code> alias)<\/h3>\n<div>\n<p>Algorithm is a standard cryptographic hash algorithm<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.Artifact\">Artifact\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.Artifacts\">Artifacts<\/a>, <a href=\"#tekton.dev\/v1beta1.StepState\">StepState<\/a>)\n<\/p>\n<div>\n<p>TaskRunStepArtifact represents an artifact produced or used by a step within a task run.\nIt directly uses the Artifact type for its structure.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The artifact&rsquo;s identifying category name<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>values<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ArtifactValue\">\n[]ArtifactValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>A collection of values related to the artifact<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>buildOutput<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>Indicates whether the artifact is a build output or a by-product<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.ArtifactValue\">ArtifactValue\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.Artifact\">Artifact<\/a>)\n<\/p>\n<div>\n<p>ArtifactValue represents a specific value or data element within an Artifact.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>digest<\/code><br\/>\n<em>\nmap[github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1beta1.Algorithm]string\n<\/em>\n<\/td>\n<td>\n<p>Algorithm-specific digests for verifying the content (e.g., 
SHA256)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>uri<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.Artifacts\">Artifacts\n<\/h3>\n<div>\n<p>Artifacts represents the collection of input and output artifacts associated with\na task run or a similar process. Artifacts in this context are units of data or resources\nthat the process either consumes as input or produces as output.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>inputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Artifact\">\n[]Artifact\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>outputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Artifact\">\n[]Artifact\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.ChildStatusReference\">ChildStatusReference\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>ChildStatusReference is used to point to the statuses of individual TaskRuns and Runs within this PipelineRun.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the TaskRun or Run this is referencing.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>DisplayName is a user-facing name of the pipelineTask that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineTaskName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>PipelineTaskName is the name of the PipelineTask this is referencing.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>whenExpressions<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WhenExpression\">\n[]WhenExpression\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>WhenExpressions is the list of checks guarding the execution of the 
PipelineTask<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.CloudEventCondition\">CloudEventCondition\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.CloudEventDeliveryState\">CloudEventDeliveryState<\/a>)\n<\/p>\n<div>\n<p>CloudEventCondition is a string that represents the condition of the event.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.CloudEventDelivery\">CloudEventDelivery\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>CloudEventDelivery is the target of a cloud event along with the state of\ndelivery.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>target<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Target points to an addressable<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CloudEventDeliveryState\">\nCloudEventDeliveryState\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.CloudEventDeliveryState\">CloudEventDeliveryState\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.CloudEventDelivery\">CloudEventDelivery<\/a>)\n<\/p>\n<div>\n<p>CloudEventDeliveryState reports the state of a cloud event to be sent.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>condition<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CloudEventCondition\">\nCloudEventCondition\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Current status<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sentAt<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SentAt is the time at which the last attempt to send the event was 
made<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>message<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Error is the text of the error (if any)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retryCount<\/code><br\/>\n<em>\nint32\n<\/em>\n<\/td>\n<td>\n<p>RetryCount is the number of attempts made to send the cloud event<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.Combination\">Combination\n(<code>map[string]string<\/code> alias)<\/h3>\n<div>\n<p>Combination is a map, mainly defined to hold a single combination from a Matrix with key as param.Name and value as param.Value<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.Combinations\">Combinations\n(<code>[]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1beta1.Combination<\/code> alias)<\/h3>\n<div>\n<p>Combinations is a Combination list<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.ConfigSource\">ConfigSource\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.Provenance\">Provenance<\/a>)\n<\/p>\n<div>\n<p>ConfigSource contains the information that can uniquely identify where a remote\nbuild definition came from, i.e. 
Git repositories, Tekton Bundles in OCI registry\nand hub.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>uri<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>URI indicates the identity of the source of the build definition.\nExample: &ldquo;<a href=\"https:\/\/github.com\/tektoncd\/catalog\">https:\/\/github.com\/tektoncd\/catalog<\/a>&rdquo;<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>digest<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<p>Digest is a collection of cryptographic digests for the contents of the artifact specified by URI.\nExample: {&ldquo;sha1&rdquo;: &ldquo;f99d13e554ffcb696dee719fa85b695cb5b0f428&rdquo;}<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>entryPoint<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>EntryPoint identifies the entry point into the build. This is often a path to a\nbuild definition file and\/or a target label within that file.\nExample: &ldquo;task\/git-clone\/0.8\/git-clone.yaml&rdquo;<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.CustomRunReason\">CustomRunReason\n(<code>string<\/code> alias)<\/h3>\n<div>\n<p>CustomRunReason is an enum used to store all Run reasons for the Succeeded condition that are controlled by the CustomRun itself.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.CustomRunSpec\">CustomRunSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.CustomRun\">CustomRun<\/a>)\n<\/p>\n<div>\n<p>CustomRunSpec defines the desired state of CustomRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>customRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRef\">\nTaskRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>customSpec<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1beta1.EmbeddedCustomRunSpec\">\nEmbeddedCustomRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec is a specification of a custom task<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CustomRunSpecStatus\">\nCustomRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a customrun (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>statusMessage<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CustomRunSpecStatusMessage\">\nCustomRunSpecStatusMessage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status message for cancellation.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retries<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for propagating retries count to custom tasks<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which the custom-task times out.\nRefer Go&rsquo;s ParseDuration documentation for expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1beta1.CustomRunSpecStatus\">CustomRunSpecStatus\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.CustomRunSpec\">CustomRunSpec<\/a>)\n<\/p>\n<div>\n<p>CustomRunSpecStatus defines the CustomRun spec status the user can provide<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.CustomRunSpecStatusMessage\">CustomRunSpecStatusMessage\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.CustomRunSpec\">CustomRunSpec<\/a>)\n<\/p>\n<div>\n<p>CustomRunSpecStatusMessage defines human-readable status messages for the CustomRun.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.EmbeddedCustomRunSpec\">EmbeddedCustomRunSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.CustomRunSpec\">CustomRunSpec<\/a>)\n<\/p>\n<div>\n<p>EmbeddedCustomRunSpec allows custom task definitions to be embedded<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTaskMetadata\">\nPipelineTaskMetadata\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/runtime.RawExtension\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec is a specification of a custom task<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>-<\/code><br\/>\n<em>\n[]byte\n<\/em>\n<\/td>\n<td>\n<p>Raw is the underlying serialization of this object.<\/p>\n<p>TODO: Determine how to detect ContentType and ContentEncoding of &lsquo;Raw&rsquo; data.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>-<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/runtime.Object\n<\/em>\n<\/td>\n<td>\n<p>Object can hold a representation of this extension - useful for working with versioned\nstructs.<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1beta1.EmbeddedTask\">EmbeddedTask\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>EmbeddedTask is used to define a Task inline within a Pipeline&rsquo;s PipelineTasks.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>spec<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/runtime.RawExtension\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Spec is a specification of a custom task<\/p>\n<br\/>\n<br\/>\n<table>\n<tr>\n<td>\n<code>-<\/code><br\/>\n<em>\n[]byte\n<\/em>\n<\/td>\n<td>\n<p>Raw is the underlying serialization of this object.<\/p>\n<p>TODO: Determine how to detect ContentType and ContentEncoding of &lsquo;Raw&rsquo; data.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>-<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/runtime.Object\n<\/em>\n<\/td>\n<td>\n<p>Object can hold a representation of this extension - useful for working with versioned\nstructs.<\/p>\n<\/td>\n<\/tr>\n<\/table>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTaskMetadata\">\nPipelineTaskMetadata\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>TaskSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskSpec\">\nTaskSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>TaskSpec<\/code> are embedded into this type.)\n<\/p>\n<em>(Optional)<\/em>\n<p>TaskSpec is a specification of a task<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.IncludeParams\">IncludeParams\n<\/h3>\n<div>\n<p>IncludeParams allows passing in specific combinations of Parameters into the Matrix.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Names the specified 
combination<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params takes only <code>Parameters<\/code> of type <code>&quot;string&quot;<\/code>\nThe names of the <code>params<\/code> must match the names of the <code>params<\/code> in the underlying <code>Task<\/code><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.InternalTaskModifier\">InternalTaskModifier\n<\/h3>\n<div>\n<p>InternalTaskModifier implements TaskModifier for resources that are built-in to Tekton Pipelines.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>stepsToPrepend<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Step\">\n[]Step\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepsToAppend<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Step\">\n[]Step\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumes<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volume-v1-core\">\n[]Kubernetes core\/v1.Volume\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.Matrix\">Matrix\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>Matrix is used to fan out Tasks in a Pipeline<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params is a list of parameters used to fan out the pipelineTask\nParams takes only <code>Parameters<\/code> of type <code>&quot;array&quot;<\/code>\nEach array element is supplied to the <code>PipelineTask<\/code> by 
substituting <code>params<\/code> of type <code>&quot;string&quot;<\/code> in the underlying <code>Task<\/code>.\nThe names of the <code>params<\/code> in the <code>Matrix<\/code> must match the names of the <code>params<\/code> in the underlying <code>Task<\/code> that they will be substituting.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>include<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.IncludeParamsList\">\nIncludeParamsList\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Include is a list of IncludeParams which allows passing in specific combinations of Parameters into the Matrix.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.OnErrorType\">OnErrorType\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.Step\">Step<\/a>)\n<\/p>\n<div>\n<p>OnErrorType defines a list of supported exiting behavior of a container on error<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.Param\">Param\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRunInputs\">TaskRunInputs<\/a>)\n<\/p>\n<div>\n<p>Param declares a ParamValue to use for the parameter called name.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.ParamSpec\">ParamSpec\n<\/h3>\n<div>\n<p>ParamSpec defines arbitrary parameters needed beyond typed inputs (such as\nresources). 
Parameter values are provided by users as inputs on a TaskRun\nor PipelineRun.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name declares the name by which a parameter is referenced.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamType\">\nParamType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Type is the user-specified type of the parameter. The possible types\nare currently &ldquo;string&rdquo;, &ldquo;array&rdquo; and &ldquo;object&rdquo;, and &ldquo;string&rdquo; is the default.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the parameter that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>properties<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PropertySpec\">\nmap[string]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1beta1.PropertySpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Properties is the JSON Schema properties to support key-value pairs parameter.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>default<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Default is the value a parameter takes if no input value is supplied. 
If\ndefault is set, a Task may be executed without a supplied value for the\nparameter.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>enum<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Enum declares a set of allowed param input values for tasks\/pipelines that can be validated.\nIf Enum is not set, no input validation is performed for the param.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.ParamSpecs\">ParamSpecs\n(<code>[]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1beta1.ParamSpec<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineSpec\">PipelineSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>ParamSpecs is a list of ParamSpec<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.ParamType\">ParamType\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.ParamSpec\">ParamSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.ParamValue\">ParamValue<\/a>, <a href=\"#tekton.dev\/v1beta1.PropertySpec\">PropertySpec<\/a>)\n<\/p>\n<div>\n<p>ParamType indicates the type of an input parameter;\nUsed to distinguish between a single string and an array of strings.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.ParamValue\">ParamValue\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.Param\">Param<\/a>, <a href=\"#tekton.dev\/v1beta1.ParamSpec\">ParamSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineResult\">PipelineResult<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineRunResult\">PipelineRunResult<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskResult\">TaskResult<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunResult\">TaskRunResult<\/a>)\n<\/p>\n<div>\n<p>ResultValue is a type alias of ParamValue<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>Type<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1beta1.ParamType\">\nParamType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>StringVal<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Represents the stored type of ParamValues.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ArrayVal<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ObjectVal<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.Params\">Params\n(<code>[]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1beta1.Param<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.RunSpec\">RunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.CustomRunSpec\">CustomRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.IncludeParams\">IncludeParams<\/a>, <a href=\"#tekton.dev\/v1beta1.Matrix\">Matrix<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineRunSpec\">PipelineRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineTask\">PipelineTask<\/a>, <a href=\"#tekton.dev\/v1beta1.ResolverRef\">ResolverRef<\/a>, <a href=\"#tekton.dev\/v1beta1.Step\">Step<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>Params is a list of Param<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.PipelineDeclaredResource\">PipelineDeclaredResource\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineSpec\">PipelineSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineDeclaredResource is used by a Pipeline to declare the types of the\nPipelineResources that it will require to run and names which can be used to\nrefer to these PipelineResources in PipelineTaskResourceBindings.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name that will be used by the Pipeline to refer to this 
resource.\nIt does not directly correspond to the name of any PipelineResources Task\ninputs or outputs, and it does not correspond to the actual names of the\nPipelineResources that will be bound in the PipelineRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Type is the type of the PipelineResource.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>optional<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>Optional declares the resource as optional.\noptional: true - the resource is considered optional\noptional: false - the resource is considered required (default\/equivalent of not specifying it)<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineObject\">PipelineObject\n<\/h3>\n<div>\n<p>PipelineObject is implemented by Pipeline<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.PipelineRef\">PipelineRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunSpec\">PipelineRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>PipelineRef can be used to refer to a specific instance of a Pipeline.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the referent; More info: <a href=\"http:\/\/kubernetes.io\/docs\/user-guide\/identifiers#names\">http:\/\/kubernetes.io\/docs\/user-guide\/identifiers#names<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>API version of the referent<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>bundle<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Bundle url reference to a Tekton Bundle.<\/p>\n<p>Deprecated: Please use ResolverRef with the bundles resolver instead.\nThe field is staying there for go client backward compatibility, but is not used\/allowed 
anymore.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ResolverRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ResolverRef\">\nResolverRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ResolverRef allows referencing a Pipeline in a remote location\nlike a git repo. This field is only supported when the alpha\nfeature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineResourceBinding\">PipelineResourceBinding\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunSpec\">PipelineRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskResourceBinding\">TaskResourceBinding<\/a>)\n<\/p>\n<div>\n<p>PipelineResourceBinding connects a reference to an instance of a PipelineResource\nwith a PipelineResource dependency that the Pipeline has declared<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the PipelineResource in the Pipeline&rsquo;s declaration<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resourceRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineResourceRef\">\nPipelineResourceRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ResourceRef is a reference to the instance of the actual PipelineResource\nthat should be used<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resourceSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.PipelineResourceSpec\">\nPipelineResourceSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ResourceSpec is a specification of a resource that should be created and\nconsumed by the task<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineResourceInterface\">PipelineResourceInterface\n<\/h3>\n<div>\n<p>PipelineResourceInterface is the interface to be implemented by different PipelineResource 
types<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.PipelineResourceRef\">PipelineResourceRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineResourceBinding\">PipelineResourceBinding<\/a>)\n<\/p>\n<div>\n<p>PipelineResourceRef can be used to refer to a specific instance of a Resource<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the referent; More info: <a href=\"http:\/\/kubernetes.io\/docs\/user-guide\/identifiers#names\">http:\/\/kubernetes.io\/docs\/user-guide\/identifiers#names<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>API version of the referent<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineResult\">PipelineResult\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineSpec\">PipelineSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineResult is used to describe the results of a pipeline<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the given name<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ResultsType\">\nResultsType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Type is the user-specified type of the result.\nThe possible types are &lsquo;string&rsquo;, &lsquo;array&rsquo;, and &lsquo;object&rsquo;, with &lsquo;string&rsquo; as the default.\n&lsquo;array&rsquo; and &lsquo;object&rsquo; types are alpha 
features.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a human-readable description of the result<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Value the expression used to retrieve the value<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineRunReason\">PipelineRunReason\n(<code>string<\/code> alias)<\/h3>\n<div>\n<p>PipelineRunReason represents a reason for the pipeline run &ldquo;Succeeded&rdquo; condition<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.PipelineRunResult\">PipelineRunResult\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>PipelineRunResult used to describe the results of a pipeline<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the result&rsquo;s name as declared by the Pipeline<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Value is the result returned from the execution of this PipelineRun<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineRunRunStatus\">PipelineRunRunStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>PipelineRunRunStatus contains the name of the PipelineTask for this CustomRun or Run and the CustomRun or Run&rsquo;s 
Status<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipelineTaskName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>PipelineTaskName is the name of the PipelineTask.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CustomRunStatus\">\nCustomRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status is the CustomRunStatus for the corresponding CustomRun or Run<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>whenExpressions<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WhenExpression\">\n[]WhenExpression\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineRunSpec\">PipelineRunSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRun\">PipelineRun<\/a>)\n<\/p>\n<div>\n<p>PipelineRunSpec defines the desired state of PipelineRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipelineRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineRef\">\nPipelineRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineSpec\">\nPipelineSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specifying PipelineSpec can be disabled by setting the\n<code>disable-inline-spec<\/code> feature flag.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineResourceBinding\">\n[]PipelineResourceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Resources is a list of bindings specifying which actual instances of\nPipelineResources to use for the resources the Pipeline has declared\nit needs.<\/p>\n<p>Deprecated: Unused, 
preserved only for backwards compatibility<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params is a list of parameter names and values.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineRunSpecStatus\">\nPipelineRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a pipelinerun (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeouts<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TimeoutFields\">\nTimeoutFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which the Pipeline times out.\nCurrently three keys are accepted in the map:\n<code>pipeline<\/code>, <code>tasks<\/code> and <code>finally<\/code>,\nwith Timeouts.pipeline &gt;= Timeouts.tasks + Timeouts.finally<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Timeout is the time after which the Pipeline times out.\nDefaults to never.\nRefer to Go&rsquo;s ParseDuration documentation for expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<p>Deprecated: use pipelineRunSpec.Timeouts.Pipeline instead<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>podTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/unversioned.Template\">\nTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>PodTemplate holds pod specific configuration<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces holds a set of 
workspace bindings that must match names\nwith those declared in the pipeline.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRunSpecs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTaskRunSpec\">\n[]PipelineTaskRunSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskRunSpecs holds a set of runtime specs<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineRunSpecStatus\">PipelineRunSpecStatus\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunSpec\">PipelineRunSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineRunSpecStatus defines the pipelinerun spec status the user can provide<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.PipelineRunStatus\">PipelineRunStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRun\">PipelineRun<\/a>)\n<\/p>\n<div>\n<p>PipelineRunStatus defines the observed state of PipelineRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>Status<\/code><br\/>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/knative.dev\/pkg\/apis\/duck\/v1#Status\">\nknative.dev\/pkg\/apis\/duck\/v1.Status\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>Status<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>PipelineRunStatusFields<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineRunStatusFields\">\nPipelineRunStatusFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>PipelineRunStatusFields<\/code> are embedded into this type.)\n<\/p>\n<p>PipelineRunStatusFields inlines the status fields.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineRunStatusFields\">PipelineRunStatusFields\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunStatus\">PipelineRunStatus<\/a>)\n<\/p>\n<div>\n<p>PipelineRunStatusFields holds the fields of PipelineRunStatus&rsquo; status.\nThis is defined 
separately and inlined so that other types can readily\nconsume these fields via duck typing.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>startTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>StartTime is the time the PipelineRun is actually started.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>completionTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>CompletionTime is the time the PipelineRun completed.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRuns<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineRunTaskRunStatus\">\nmap[string]*github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1beta1.PipelineRunTaskRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskRuns is a map of PipelineRunTaskRunStatus with the taskRun name as the key.<\/p>\n<p>Deprecated: use ChildReferences instead. As of v0.45.0, this field is no\nlonger populated and is only included for backwards compatibility with\nolder server versions.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>runs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineRunRunStatus\">\nmap[string]*github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1beta1.PipelineRunRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Runs is a map of PipelineRunRunStatus with the run name as the key<\/p>\n<p>Deprecated: use ChildReferences instead. 
As of v0.45.0, this field is no\nlonger populated and is only included for backwards compatibility with\nolder server versions.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineResults<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineRunResult\">\n[]PipelineRunResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PipelineResults are the list of results written out by the pipeline task&rsquo;s containers<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineSpec\">\nPipelineSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>PipelineSpec contains the exact spec used to instantiate the run<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>skippedTasks<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.SkippedTask\">\n[]SkippedTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of tasks that were skipped due to when expressions evaluating to false<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>childReferences<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ChildStatusReference\">\n[]ChildStatusReference\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of TaskRun and Run names, PipelineTask names, and API versions\/kinds for children of this PipelineRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>finallyStartTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>FinallyStartTime is when all non-finally tasks have been completed and only finally tasks are being executed.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>provenance<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Provenance\">\nProvenance\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs\/outputs, 
etc.).<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spanContext<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<p>SpanContext contains tracing span context fields<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineRunTaskRunStatus\">PipelineRunTaskRunStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>PipelineRunTaskRunStatus contains the name of the PipelineTask for this TaskRun and the TaskRun&rsquo;s Status<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipelineTaskName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>PipelineTaskName is the name of the PipelineTask.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunStatus\">\nTaskRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status is the TaskRunStatus for the corresponding TaskRun<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>whenExpressions<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WhenExpression\">\n[]WhenExpression\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineSpec\">PipelineSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.Pipeline\">Pipeline<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineRunSpec\">PipelineRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>PipelineSpec defines the desired state of 
Pipeline.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DisplayName is a user-facing name of the pipeline that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the pipeline that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineDeclaredResource\">\n[]PipelineDeclaredResource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tasks<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTask\">\n[]PipelineTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Tasks declares the graph of Tasks that execute when this Pipeline is run.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Params declares a list of input parameters that must be supplied when\nthis Pipeline is run.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineWorkspaceDeclaration\">\n[]PipelineWorkspaceDeclaration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces declares a set of named workspaces that are expected to be\nprovided by a PipelineRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineResult\">\n[]PipelineResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results are values that this pipeline can output once run<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>finally<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1beta1.PipelineTask\">\n[]PipelineTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Finally declares the list of Tasks that execute just before leaving the Pipeline\ni.e. either after all Tasks are finished executing successfully\nor after a failure which would result in ending the Pipeline<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineTask\">PipelineTask\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineSpec\">PipelineSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineTask defines a task in a Pipeline, passing inputs from both\nParams and from the output of previous tasks.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of this task within the context of a Pipeline. Name is\nused as a coordinate with the <code>from<\/code> and <code>runAfter<\/code> fields to establish\nthe execution order of tasks relative to one another.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DisplayName is the display name of this task within the context of a Pipeline.\nThis display name may be used to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is the description of this task within the context of a Pipeline.\nThis description may be used to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRef\">\nTaskRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskRef is a reference to a task definition.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.EmbeddedTask\">\nEmbeddedTask\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskSpec is a specification of a 
task.\nSpecifying TaskSpec can be disabled by setting the\n<code>disable-inline-spec<\/code> feature flag.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>when<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WhenExpressions\">\nWhenExpressions\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>WhenExpressions is a list of when expressions that need to be true for the task to run<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retries<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Retries represents how many times this task should be retried in case of task failure: ConditionSucceeded set to False<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>runAfter<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>RunAfter is the list of PipelineTask names that should be executed before\nthis Task executes. (Used to force a specific ordering in graph execution.)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTaskResources\">\nPipelineTaskResources\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Parameters declares parameters passed to this task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>matrix<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Matrix\">\nMatrix\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Matrix declares parameters used to fan out this task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspacePipelineTaskBinding\">\n[]WorkspacePipelineTaskBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces maps workspaces from the pipeline spec to the workspaces\ndeclared in the Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a 
href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which the TaskRun times out. Defaults to 1 hour.\nRefer to Go&rsquo;s ParseDuration documentation for expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineRef\">\nPipelineRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PipelineRef is a reference to a pipeline definition.\nNote: PipelineRef is in preview mode and not yet supported<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>pipelineSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineSpec\">\nPipelineSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PipelineSpec is a specification of a pipeline.\nNote: PipelineSpec is in preview mode and not yet supported.\nSpecifying PipelineSpec can be disabled by setting the\n<code>disable-inline-spec<\/code> feature flag.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>onError<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTaskOnErrorType\">\nPipelineTaskOnErrorType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>OnError defines the exit behavior of a PipelineRun on error;\nit can be set to [ continue | stopAndFail ]<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineTaskInputResource\">PipelineTaskInputResource\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineTaskResources\">PipelineTaskResources<\/a>)\n<\/p>\n<div>\n<p>PipelineTaskInputResource maps the name of a declared PipelineResource input\ndependency in a Task to the resource in the Pipeline&rsquo;s DeclaredPipelineResources\nthat should be used. 
This input may come from a previous task.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the PipelineResource as declared by the Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resource<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Resource is the name of the DeclaredPipelineResource to use.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>from<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>From is the list of PipelineTask names that the resource has to come from.\n(Implies an ordering in the execution graph.)<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineTaskMetadata\">PipelineTaskMetadata\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.EmbeddedRunSpec\">EmbeddedRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.EmbeddedCustomRunSpec\">EmbeddedCustomRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.EmbeddedTask\">EmbeddedTask<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineTaskRunSpec\">PipelineTaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineTaskMetadata contains the labels or annotations for an EmbeddedTask<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>labels<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>annotations<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineTaskOnErrorType\">PipelineTaskOnErrorType\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>PipelineTaskOnErrorType defines a list of supported failure handling 
behaviors of a PipelineTask on error<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.PipelineTaskOutputResource\">PipelineTaskOutputResource\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineTaskResources\">PipelineTaskResources<\/a>)\n<\/p>\n<div>\n<p>PipelineTaskOutputResource maps the name of a declared PipelineResource output\ndependency in a Task to the resource in the Pipeline&rsquo;s DeclaredPipelineResources\nthat should be used.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the PipelineResource as declared by the Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resource<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Resource is the name of the DeclaredPipelineResource to use.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineTaskParam\">PipelineTaskParam\n<\/h3>\n<div>\n<p>PipelineTaskParam is used to provide arbitrary string parameters to a Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineTaskResources\">PipelineTaskResources\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>PipelineTaskResources allows a Pipeline to declare how its DeclaredPipelineResources\nshould be provided to a Task as its inputs and outputs.<\/p>\n<p>Deprecated: Unused, preserved only for backwards 
compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>inputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTaskInputResource\">\n[]PipelineTaskInputResource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Inputs holds the mapping from the PipelineResources declared in\nDeclaredPipelineResources to the input PipelineResources required by the Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>outputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTaskOutputResource\">\n[]PipelineTaskOutputResource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Outputs holds the mapping from the PipelineResources declared in\nDeclaredPipelineResources to the output PipelineResources produced by the Task.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineTaskRun\">PipelineTaskRun\n<\/h3>\n<div>\n<p>PipelineTaskRun reports the results of running a step in the Task. Each\ntask has the potential to succeed or fail (based on the exit code)\nand produces logs.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineTaskRunSpec\">PipelineTaskRunSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunSpec\">PipelineRunSpec<\/a>)\n<\/p>\n<div>\n<p>PipelineTaskRunSpec can be used to configure specific\nspecs for a concrete Task<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipelineTaskName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskServiceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskPodTemplate<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/unversioned.Template\">\nTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepOverrides<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunStepOverride\">\n[]TaskRunStepOverride\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecarOverrides<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunSidecarOverride\">\n[]TaskRunSidecarOverride\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>metadata<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineTaskMetadata\">\nPipelineTaskMetadata\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>computeResources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Compute resources to use for this TaskRun<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PipelineWorkspaceDeclaration\">PipelineWorkspaceDeclaration\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineSpec\">PipelineSpec<\/a>)\n<\/p>\n<div>\n<p>WorkspacePipelineDeclaration creates a named slot in a Pipeline that a PipelineRun\nis expected to populate with a workspace binding.<\/p>\n<p>Deprecated: use PipelineWorkspaceDeclaration type instead<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of a workspace to be provided by a PipelineRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a human readable string describing how the workspace will be\nused in the Pipeline. 
It can be useful to include a bit of detail about which\ntasks are intended to have access to the data on the workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>optional<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>Optional marks a Workspace as not being required in PipelineRuns. By default\nthis field is false and so declared workspaces are required.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.PropertySpec\">PropertySpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.ParamSpec\">ParamSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskResult\">TaskResult<\/a>)\n<\/p>\n<div>\n<p>PropertySpec defines the struct for object keys<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamType\">\nParamType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.Provenance\">Provenance\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>, <a href=\"#tekton.dev\/v1beta1.StepState\">StepState<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>Provenance contains metadata about resources used in the TaskRun\/PipelineRun\nsuch as the source from where a remote build definition was fetched.\nThis field aims to carry the minimum amount of metadata in *Run status so that\nTekton Chains can capture them in the provenance.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>configSource<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ConfigSource\">\nConfigSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Deprecated: Use RefSource instead<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>refSource<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1beta1.RefSource\">\nRefSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>RefSource identifies the source where a remote task\/pipeline came from.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>featureFlags<\/code><br\/>\n<em>\ngithub.com\/tektoncd\/pipeline\/pkg\/apis\/config.FeatureFlags\n<\/em>\n<\/td>\n<td>\n<p>FeatureFlags identifies the feature flags that were used during the task\/pipeline run<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.Ref\">Ref\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.Step\">Step<\/a>)\n<\/p>\n<div>\n<p>Ref can be used to refer to a specific instance of a StepAction.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the referenced step<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ResolverRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ResolverRef\">\nResolverRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ResolverRef allows referencing a StepAction in a remote location\nlike a git repo.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.RefSource\">RefSource\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.Provenance\">Provenance<\/a>)\n<\/p>\n<div>\n<p>RefSource contains the information that can uniquely identify where a remote\nbuild definition came from, i.e. 
Git repositories, Tekton Bundles in OCI registry\nand hub.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>uri<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>URI indicates the identity of the source of the build definition.\nExample: &ldquo;<a href=\"https:\/\/github.com\/tektoncd\/catalog\">https:\/\/github.com\/tektoncd\/catalog<\/a>&rdquo;<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>digest<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<p>Digest is a collection of cryptographic digests for the contents of the artifact specified by URI.\nExample: {&ldquo;sha1&rdquo;: &ldquo;f99d13e554ffcb696dee719fa85b695cb5b0f428&rdquo;}<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>entryPoint<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>EntryPoint identifies the entry point into the build. This is often a path to a\nbuild definition file and\/or a target label within that file.\nExample: &ldquo;task\/git-clone\/0.8\/git-clone.yaml&rdquo;<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.ResolverName\">ResolverName\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.ResolverRef\">ResolverRef<\/a>)\n<\/p>\n<div>\n<p>ResolverName is the name of a resolver from which a resource can be\nrequested.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.ResolverRef\">ResolverRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRef\">PipelineRef<\/a>, <a href=\"#tekton.dev\/v1beta1.Ref\">Ref<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRef\">TaskRef<\/a>)\n<\/p>\n<div>\n<p>ResolverRef can be used to refer to a Pipeline or Task in a remote\nlocation like a git repo.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>resolver<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1beta1.ResolverName\">\nResolverName\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Resolver is the name of the resolver that should perform\nresolution of the referenced Tekton resource, such as &ldquo;git&rdquo;.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params contains the parameters used to identify the\nreferenced Tekton resource. Example entries might include\n&ldquo;repo&rdquo; or &ldquo;path&rdquo; but the set of params ultimately depends on\nthe chosen resolver.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.ResultRef\">ResultRef\n<\/h3>\n<div>\n<p>ResultRef is a type that represents a reference to a task run result<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipelineTask<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>result<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resultsIndex<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>property<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.ResultsType\">ResultsType\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineResult\">PipelineResult<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskResult\">TaskResult<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunResult\">TaskRunResult<\/a>)\n<\/p>\n<div>\n<p>ResultsType indicates the type of a result;\nUsed to distinguish between a single string and an array of strings.\nNote that there is ResultType used to find out whether a\nRunResult is from a task result or not, which is different from\nthis ResultsType.<\/p>\n<\/div>\n<h3 
id=\"tekton.dev\/v1beta1.RunObject\">RunObject\n<\/h3>\n<div>\n<p>RunObject is implemented by CustomRun and Run<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.Sidecar\">Sidecar\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>Sidecar has nearly the same data structure as Step but does not have the ability to timeout.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the Sidecar specified as a DNS_LABEL.\nEach Sidecar in a Task must have a unique name (DNS_LABEL).\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>image<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image name to be used by the Sidecar.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>command<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Entrypoint array. Not executed within a shell.\nThe image&rsquo;s ENTRYPOINT is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the Sidecar&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. 
Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>args<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Arguments to the entrypoint.\nThe image&rsquo;s CMD is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workingDir<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Sidecar&rsquo;s working directory.\nIf not specified, the container runtime&rsquo;s default will be used, which\nmight be configured in the container image.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ports<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#containerport-v1-core\">\n[]Kubernetes core\/v1.ContainerPort\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of ports to expose from the Sidecar. 
Exposing a port here gives\nthe system additional information about the network connections a\ncontainer uses, but is primarily informational. Not specifying a port here\nDOES NOT prevent that port from being exposed. Any port which is\nlistening on the default &ldquo;0.0.0.0&rdquo; address inside a container will be\naccessible from the network.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>envFrom<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envfromsource-v1-core\">\n[]Kubernetes core\/v1.EnvFromSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of sources to populate environment variables in the Sidecar.\nThe keys defined within a source must be a C_IDENTIFIER. All invalid keys\nwill be reported as an event when the Sidecar is starting. When a key exists in multiple\nsources, the value associated with the last source will take precedence.\nValues defined by an Env with a duplicate key will take precedence.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>env<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envvar-v1-core\">\n[]Kubernetes core\/v1.EnvVar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of environment variables to set in the Sidecar.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Compute Resources required by this Sidecar.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\">https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeMounts<\/code><br\/>\n<em>\n<a 
href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumemount-v1-core\">\n[]Kubernetes core\/v1.VolumeMount\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Volumes to mount into the Sidecar&rsquo;s filesystem.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeDevices<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumedevice-v1-core\">\n[]Kubernetes core\/v1.VolumeDevice\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>volumeDevices is the list of block devices to be used by the Sidecar.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>livenessProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Periodic probe of Sidecar liveness.\nContainer will be restarted if the probe fails.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>readinessProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Periodic probe of Sidecar service readiness.\nContainer will be removed from service endpoints if the probe fails.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>startupProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes 
core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>StartupProbe indicates that the Pod the Sidecar is running in has successfully initialized.\nIf specified, no other probes are executed until this completes successfully.\nIf this probe fails, the Pod will be restarted, just as if the livenessProbe failed.\nThis can be used to provide different probe parameters at the beginning of a Pod&rsquo;s lifecycle,\nwhen it might take a long time to load data or warm a cache, than during steady-state operation.\nThis cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>lifecycle<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#lifecycle-v1-core\">\nKubernetes core\/v1.Lifecycle\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Actions that the management system should take in response to Sidecar lifecycle events.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>terminationMessagePath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Optional: Path at which the file to which the Sidecar&rsquo;s termination message\nwill be written is mounted into the Sidecar&rsquo;s filesystem.\nMessage written is intended to be brief final status, such as an assertion failure message.\nWill be truncated by the node if greater than 4096 bytes. 
The total message length across\nall containers will be limited to 12kb.\nDefaults to \/dev\/termination-log.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>terminationMessagePolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#terminationmessagepolicy-v1-core\">\nKubernetes core\/v1.TerminationMessagePolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Indicate how the termination message should be populated. File will use the contents of\nterminationMessagePath to populate the Sidecar status message on both success and failure.\nFallbackToLogsOnError will use the last chunk of Sidecar log output if the termination\nmessage file is empty and the Sidecar exited with an error.\nThe log output is limited to 2048 bytes or 80 lines, whichever is smaller.\nDefaults to File.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>imagePullPolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#pullpolicy-v1-core\">\nKubernetes core\/v1.PullPolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image pull policy.\nOne of Always, Never, IfNotPresent.\nDefaults to Always if :latest tag is specified, or IfNotPresent otherwise.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>securityContext<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#securitycontext-v1-core\">\nKubernetes core\/v1.SecurityContext\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecurityContext defines the security options the Sidecar should be run with.\nIf set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.\nMore info: <a 
href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stdin<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether this Sidecar should allocate a buffer for stdin in the container runtime. If this\nis not set, reads from stdin in the Sidecar will always result in EOF.\nDefault is false.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stdinOnce<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether the container runtime should close the stdin channel after it has been opened by\na single attach. When stdin is true the stdin stream will remain open across multiple attach\nsessions. If stdinOnce is set to true, stdin is opened on Sidecar start, is empty until the\nfirst client attaches to stdin, and then remains open and accepts data until the client disconnects,\nat which time stdin is closed and remains closed until the Sidecar is restarted. If this\nflag is false, a container process that reads from stdin will never receive an EOF.\nDefault is false.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tty<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether this Sidecar should allocate a TTY for itself, also requires &lsquo;stdin&rsquo; to be true.\nDefault is false.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>script<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Script is the contents of an executable file to execute.<\/p>\n<p>If Script is not empty, the Step cannot have a Command or Args.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceUsage\">\n[]WorkspaceUsage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>This is an alpha field. 
You must set the &ldquo;enable-api-fields&rdquo; feature flag to &ldquo;alpha&rdquo;\nfor this field to be supported.<\/p>\n<p>Workspaces is a list of workspaces from the Task that this Sidecar wants\nexclusive access to. Adding a workspace to this list means that any\nother Step or Sidecar that does not also request this Workspace will\nnot have access to it.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>restartPolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#containerrestartpolicy-v1-core\">\nKubernetes core\/v1.ContainerRestartPolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>RestartPolicy refers to kubernetes RestartPolicy. It can only be set for an\ninitContainer and must have its policy set to &ldquo;Always&rdquo;. It is currently\nleft optional to help support Kubernetes versions prior to 1.29 when this feature\nwas introduced.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.SidecarState\">SidecarState\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>SidecarState reports the results of running a sidecar in a Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>ContainerState<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#containerstate-v1-core\">\nKubernetes core\/v1.ContainerState\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>ContainerState<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>container<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>imageID<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1beta1.SkippedTask\">SkippedTask\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunStatusFields\">PipelineRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>SkippedTask is used to describe the Tasks that were skipped due to their When Expressions\nevaluating to False. This is a struct because we are looking into including more details\nabout the When Expressions that caused this Task to be skipped.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the Pipeline Task name<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>reason<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.SkippingReason\">\nSkippingReason\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Reason is the cause of the PipelineTask being skipped.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>whenExpressions<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WhenExpression\">\n[]WhenExpression\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>WhenExpressions is the list of checks guarding the execution of the PipelineTask<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.SkippingReason\">SkippingReason\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.SkippedTask\">SkippedTask<\/a>)\n<\/p>\n<div>\n<p>SkippingReason explains why a PipelineTask was skipped.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.Step\">Step\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.InternalTaskModifier\">InternalTaskModifier<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>Step runs a subcomponent of a Task<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the Step specified as a DNS_LABEL.\nEach Step in a Task must have 
a unique name.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>image<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image reference name to run for this Step.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>command<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Entrypoint array. Not executed within a shell.\nThe image&rsquo;s ENTRYPOINT is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>args<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Arguments to the entrypoint.\nThe image&rsquo;s CMD is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. 
Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workingDir<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Step&rsquo;s working directory.\nIf not specified, the container runtime&rsquo;s default will be used, which\nmight be configured in the container image.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ports<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#containerport-v1-core\">\n[]Kubernetes core\/v1.ContainerPort\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of ports to expose from the Step&rsquo;s container. Exposing a port here gives\nthe system additional information about the network connections a\ncontainer uses, but is primarily informational. Not specifying a port here\nDOES NOT prevent that port from being exposed. Any port which is\nlistening on the default &ldquo;0.0.0.0&rdquo; address inside a container will be\naccessible from the network.\nCannot be updated.<\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>envFrom<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envfromsource-v1-core\">\n[]Kubernetes core\/v1.EnvFromSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of sources to populate environment variables in the container.\nThe keys defined within a source must be a C_IDENTIFIER. All invalid keys\nwill be reported as an event when the container is starting. 
When a key exists in multiple\nsources, the value associated with the last source will take precedence.\nValues defined by an Env with a duplicate key will take precedence.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>env<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envvar-v1-core\">\n[]Kubernetes core\/v1.EnvVar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of environment variables to set in the container.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Compute Resources required by this Step.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\">https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeMounts<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumemount-v1-core\">\n[]Kubernetes core\/v1.VolumeMount\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Volumes to mount into the Step&rsquo;s filesystem.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeDevices<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumedevice-v1-core\">\n[]Kubernetes core\/v1.VolumeDevice\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>volumeDevices is the list of block devices to be used by the Step.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>livenessProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes 
core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Periodic probe of container liveness.\nStep will be restarted if the probe fails.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>readinessProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Periodic probe of container service readiness.\nStep will be removed from service endpoints if the probe fails.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>startupProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DeprecatedStartupProbe indicates that the Pod this Step runs in has successfully initialized.\nIf specified, no other probes are executed until this completes successfully.\nIf this probe fails, the Pod will be restarted, just as if the livenessProbe failed.\nThis can be used to provide different probe parameters at the beginning of a Pod&rsquo;s lifecycle,\nwhen it might take a long time to load data or warm a cache, than during steady-state operation.\nThis cannot be updated.\nMore info: <a 
href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>lifecycle<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#lifecycle-v1-core\">\nKubernetes core\/v1.Lifecycle\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Actions that the management system should take in response to container lifecycle events.\nCannot be updated.<\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>terminationMessagePath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Deprecated: This field will be removed in a future release and can&rsquo;t be meaningfully used.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>terminationMessagePolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#terminationmessagepolicy-v1-core\">\nKubernetes core\/v1.TerminationMessagePolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Deprecated: This field will be removed in a future release and can&rsquo;t be meaningfully used.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>imagePullPolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#pullpolicy-v1-core\">\nKubernetes core\/v1.PullPolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image pull policy.\nOne of Always, Never, IfNotPresent.\nDefaults to Always if :latest tag is specified, or IfNotPresent otherwise.\nCannot be updated.\nMore info: <a 
href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>securityContext<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#securitycontext-v1-core\">\nKubernetes core\/v1.SecurityContext\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecurityContext defines the security options the Step should be run with.\nIf set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stdin<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether this container should allocate a buffer for stdin in the container runtime. If this\nis not set, reads from stdin in the container will always result in EOF.\nDefault is false.<\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stdinOnce<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether the container runtime should close the stdin channel after it has been opened by\na single attach. When stdin is true the stdin stream will remain open across multiple attach\nsessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the\nfirst client attaches to stdin, and then remains open and accepts data until the client disconnects,\nat which time stdin is closed and remains closed until the container is restarted. 
If this\nflag is false, a container process that reads from stdin will never receive an EOF.\nDefault is false.<\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tty<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether this container should allocate a DeprecatedTTY for itself, also requires &lsquo;stdin&rsquo; to be true.\nDefault is false.<\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>script<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Script is the contents of an executable file to execute.<\/p>\n<p>If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Timeout is the time after which the step times out. Defaults to never.\nRefer to Go&rsquo;s ParseDuration documentation for expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceUsage\">\n[]WorkspaceUsage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>This is an alpha field. You must set the &ldquo;enable-api-fields&rdquo; feature flag to &ldquo;alpha&rdquo;\nfor this field to be supported.<\/p>\n<p>Workspaces is a list of workspaces from the Task that this Step wants\nexclusive access to. 
Adding a workspace to this list means that any\nother Step or Sidecar that does not also request this Workspace will\nnot have access to it.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>onError<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.OnErrorType\">\nOnErrorType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>OnError defines the exiting behavior of a container on error.\nIt can be set to [ continue | stopAndFail ]<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stdoutConfig<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.StepOutputConfig\">\nStepOutputConfig\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Stores configuration for the stdout stream of the step.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stderrConfig<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.StepOutputConfig\">\nStepOutputConfig\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Stores configuration for the stderr stream of the step.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ref<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Ref\">\nRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Contains the reference to an existing StepAction.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params declares parameters passed to this step action.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.StepResult\">\n[]StepResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results declares StepResults produced by the Step.<\/p>\n<p>This field is at an ALPHA stability level and gated by the &ldquo;enable-step-actions&rdquo; feature flag.<\/p>\n<p>It can be used in an inlined Step when used to store Results to $(step.results.resultName.path).\nIt cannot be used when referencing StepActions using [v1beta1.Step.Ref].\nThe Results declared by the StepActions will be stored here 
instead.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>when<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WhenExpressions\">\nWhenExpressions\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.StepActionObject\">StepActionObject\n<\/h3>\n<div>\n<p>StepActionObject is implemented by StepAction<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.StepActionSpec\">StepActionSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.StepAction\">StepAction<\/a>)\n<\/p>\n<div>\n<p>StepActionSpec contains the actionable components of a step.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the stepaction that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>image<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image reference name to run for this StepAction.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>command<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Entrypoint array. Not executed within a shell.\nThe image&rsquo;s ENTRYPOINT is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. 
Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>args<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Arguments to the entrypoint.\nThe image&rsquo;s CMD is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the container&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>env<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envvar-v1-core\">\n[]Kubernetes core\/v1.EnvVar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of environment variables to set in the container.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>script<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Script is the contents of an executable file to execute.<\/p>\n<p>If Script is not empty, the Step cannot have a Command and the Args will be passed to the 
Script.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workingDir<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Step&rsquo;s working directory.\nIf not specified, the container runtime&rsquo;s default will be used, which\nmight be configured in the container image.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params is a list of input parameters required to run the stepAction.\nParams must be supplied as inputs in Steps unless they declare a default value.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1.StepResult\">\n[]StepResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results are values that this StepAction can output.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>securityContext<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#securitycontext-v1-core\">\nKubernetes core\/v1.SecurityContext\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecurityContext defines the security options the Step should be run with.\nIf set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/<\/a>\nThe value set in StepAction will take precedence over the value from Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeMounts<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumemount-v1-core\">\n[]Kubernetes core\/v1.VolumeMount\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Volumes to mount into the Step&rsquo;s filesystem.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1beta1.StepOutputConfig\">StepOutputConfig\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.Step\">Step<\/a>)\n<\/p>\n<div>\n<p>StepOutputConfig stores configuration for a step output stream.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>path<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Path to duplicate stdout stream to on container&rsquo;s local filesystem.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.StepState\">StepState\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>StepState reports the results of running a step in a Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>ContainerState<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#containerstate-v1-core\">\nKubernetes core\/v1.ContainerState\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>ContainerState<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>container<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>imageID<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunResult\">\n[]TaskRunResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>provenance<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Provenance\">\nProvenance\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>inputs<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1beta1.Artifact\">\n[]Artifact\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>outputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Artifact\">\n[]Artifact\n<\/a>\n<\/em>\n<\/td>\n<td>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.StepTemplate\">StepTemplate\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>StepTemplate is a template for a Step<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Default name for each Step specified as a DNS_LABEL.\nEach Step in a Task must have a unique name.\nCannot be updated.<\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>image<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Default image name to use for each Step.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images<\/a>\nThis field is optional to allow higher level config management to default or override\ncontainer images in workload controllers like Deployments and StatefulSets.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>command<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Entrypoint array. Not executed within a shell.\nThe docker image&rsquo;s ENTRYPOINT is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the Step&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. 
Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>args<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Arguments to the entrypoint.\nThe image&rsquo;s CMD is used if this is not provided.\nVariable references $(VAR_NAME) are expanded using the Step&rsquo;s environment. If a variable\ncannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced\nto a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. &ldquo;$$(VAR_NAME)&rdquo; will\nproduce the string literal &ldquo;$(VAR_NAME)&rdquo;. Escaped references will never be expanded, regardless\nof whether the variable exists or not. Cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell\">https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/define-command-argument-container\/#running-a-command-in-a-shell<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workingDir<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Step&rsquo;s working directory.\nIf not specified, the container runtime&rsquo;s default will be used, which\nmight be configured in the container image.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ports<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#containerport-v1-core\">\n[]Kubernetes core\/v1.ContainerPort\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of ports to expose from the Step&rsquo;s container. 
Exposing a port here gives\nthe system additional information about the network connections a\ncontainer uses, but is primarily informational. Not specifying a port here\nDOES NOT prevent that port from being exposed. Any port which is\nlistening on the default &ldquo;0.0.0.0&rdquo; address inside a container will be\naccessible from the network.\nCannot be updated.<\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>envFrom<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envfromsource-v1-core\">\n[]Kubernetes core\/v1.EnvFromSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of sources to populate environment variables in the Step.\nThe keys defined within a source must be a C_IDENTIFIER. All invalid keys\nwill be reported as an event when the container is starting. When a key exists in multiple\nsources, the value associated with the last source will take precedence.\nValues defined by an Env with a duplicate key will take precedence.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>env<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#envvar-v1-core\">\n[]Kubernetes core\/v1.EnvVar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>List of environment variables to set in the container.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Compute Resources required by this Step.\nCannot be updated.\nMore info: <a 
href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\">https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeMounts<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumemount-v1-core\">\n[]Kubernetes core\/v1.VolumeMount\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Volumes to mount into the Step&rsquo;s filesystem.\nCannot be updated.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeDevices<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volumedevice-v1-core\">\n[]Kubernetes core\/v1.VolumeDevice\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>volumeDevices is the list of block devices to be used by the Step.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>livenessProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Periodic probe of container liveness.\nContainer will be restarted if the probe fails.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>readinessProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Periodic probe of container service readiness.\nContainer will be removed from service endpoints if the probe fails.\nCannot be updated.\nMore info: <a 
href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>startupProbe<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#probe-v1-core\">\nKubernetes core\/v1.Probe\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DeprecatedStartupProbe indicates that the Pod has successfully initialized.\nIf specified, no other probes are executed until this completes successfully.\nIf this probe fails, the Pod will be restarted, just as if the livenessProbe failed.\nThis can be used to provide different probe parameters at the beginning of a Pod&rsquo;s lifecycle,\nwhen it might take a long time to load data or warm a cache, than during steady-state operation.\nThis cannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle#container-probes<\/a><\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>lifecycle<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#lifecycle-v1-core\">\nKubernetes core\/v1.Lifecycle\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Actions that the management system should take in response to container lifecycle events.\nCannot be updated.<\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>terminationMessagePath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Deprecated: This field will be removed in a future release and cannot be meaningfully 
used.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>terminationMessagePolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#terminationmessagepolicy-v1-core\">\nKubernetes core\/v1.TerminationMessagePolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Deprecated: This field will be removed in a future release and cannot be meaningfully used.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>imagePullPolicy<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#pullpolicy-v1-core\">\nKubernetes core\/v1.PullPolicy\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Image pull policy.\nOne of Always, Never, IfNotPresent.\nDefaults to Always if :latest tag is specified, or IfNotPresent otherwise.\nCannot be updated.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images\">https:\/\/kubernetes.io\/docs\/concepts\/containers\/images#updating-images<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>securityContext<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#securitycontext-v1-core\">\nKubernetes core\/v1.SecurityContext\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SecurityContext defines the security options the Step should be run with.\nIf set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stdin<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether this Step should allocate a buffer for stdin in the container runtime. 
If this\nis not set, reads from stdin in the Step will always result in EOF.\nDefault is false.<\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stdinOnce<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether the container runtime should close the stdin channel after it has been opened by\na single attach. When stdin is true the stdin stream will remain open across multiple attach\nsessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the\nfirst client attaches to stdin, and then remains open and accepts data until the client disconnects,\nat which time stdin is closed and remains closed until the container is restarted. If this\nflag is false, a container processes that reads from stdin will never receive an EOF.\nDefault is false<\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tty<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Whether this Step should allocate a DeprecatedTTY for itself, also requires &lsquo;stdin&rsquo; to be true.\nDefault is false.<\/p>\n<p>Deprecated: This field will be removed in a future release.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskBreakpoints\">TaskBreakpoints\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRunDebug\">TaskRunDebug<\/a>)\n<\/p>\n<div>\n<p>TaskBreakpoints defines the breakpoint config for a particular Task<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>onFailure<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>if enabled, pause TaskRun on failure of a step\nfailed step will not exit<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>beforeSteps<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1beta1.TaskKind\">TaskKind\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRef\">TaskRef<\/a>)\n<\/p>\n<div>\n<p>TaskKind defines the type of Task used by the pipeline.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.TaskModifier\">TaskModifier\n<\/h3>\n<div>\n<p>TaskModifier is an interface to be implemented by different PipelineResources<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.TaskObject\">TaskObject\n<\/h3>\n<div>\n<p>TaskObject is implemented by Task and ClusterTask<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.TaskRef\">TaskRef\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.RunSpec\">RunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.CustomRunSpec\">CustomRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineTask\">PipelineTask<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRef can be used to refer to a specific instance of a task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name of the referent; More info: <a href=\"http:\/\/kubernetes.io\/docs\/user-guide\/identifiers#names\">http:\/\/kubernetes.io\/docs\/user-guide\/identifiers#names<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>kind<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskKind\">\nTaskKind\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>TaskKind indicates the Kind of the Task:\n1. Namespaced Task when Kind is set to &ldquo;Task&rdquo;. If Kind is &ldquo;&rdquo;, it defaults to &ldquo;Task&rdquo;.\n2. Cluster-Scoped Task when Kind is set to &ldquo;ClusterTask&rdquo;\n3. 
Custom Task when Kind is non-empty and APIVersion is non-empty.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>apiVersion<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>API version of the referent.\nNote: A Task with non-empty APIVersion and Kind is considered a Custom Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>bundle<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Bundle URL reference to a Tekton Bundle.<\/p>\n<p>Deprecated: Please use ResolverRef with the bundles resolver instead.\nThe field remains for Go client backward compatibility, but is not used\/allowed anymore.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>ResolverRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ResolverRef\">\nResolverRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ResolverRef allows referencing a Task in a remote location\nlike a git repo. This field is only supported when the alpha\nfeature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskResource\">TaskResource\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskResources\">TaskResources<\/a>)\n<\/p>\n<div>\n<p>TaskResource defines an input or output Resource declared as a requirement\nby a Task. 
The Name field will be used to refer to these Resources within\nthe Task definition, and when provided as an Input, the Name will be the\npath to the volume mounted containing this Resource as an input (e.g.\nan input Resource named <code>workspace<\/code> will be mounted at <code>\/workspace<\/code>).<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>ResourceDeclaration<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1alpha1.ResourceDeclaration\">\nResourceDeclaration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>ResourceDeclaration<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskResourceBinding\">TaskResourceBinding\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRunInputs\">TaskRunInputs<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunOutputs\">TaskRunOutputs<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunResources\">TaskRunResources<\/a>)\n<\/p>\n<div>\n<p>TaskResourceBinding points to the PipelineResource that\nwill be used for the Task input or output called Name.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>PipelineResourceBinding<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PipelineResourceBinding\">\nPipelineResourceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>PipelineResourceBinding<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>paths<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Paths will probably be removed in #1284, and then PipelineResourceBinding can be used instead.\nThe optional Path field corresponds to a path on disk at which the Resource can be found\n(used when providing the 
resource via mounted volume, overriding the default logic to fetch the Resource).<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskResources\">TaskResources\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>TaskResources allows a Pipeline to declare how its DeclaredPipelineResources\nshould be provided to a Task as its inputs and outputs.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>inputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResource\">\n[]TaskResource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Inputs holds the mapping from the PipelineResources declared in\nDeclaredPipelineResources to the input PipelineResources required by the Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>outputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResource\">\n[]TaskResource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Outputs holds the mapping from the PipelineResources declared in\nDeclaredPipelineResources to the output PipelineResources produced by the Task.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskResult\">TaskResult\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>TaskResult is used to describe the results of a task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the given name.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ResultsType\">\nResultsType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Type is the user-specified type of the result. 
The only type currently\nsupported is &ldquo;string&rdquo;; &ldquo;array&rdquo; will be supported in future work.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>properties<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.PropertySpec\">\nmap[string]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1beta1.PropertySpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Properties are the JSON Schema properties that support key-value pair results.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a human-readable description of the result.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Value is the expression used to retrieve the value of the result from an underlying Step.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunConditionType\">TaskRunConditionType\n(<code>string<\/code> alias)<\/h3>\n<div>\n<p>TaskRunConditionType is an enum used to store TaskRun custom\nconditions such as the one used in SPIRE results verification.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunDebug\">TaskRunDebug\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRunDebug defines the breakpoint config for a particular TaskRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>breakpoints<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskBreakpoints\">\nTaskBreakpoints\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunInputs\">TaskRunInputs\n<\/h3>\n<div>\n<p>TaskRunInputs holds the input values that this task was invoked with.<\/p>\n<p>Deprecated: Unused, preserved only for backwards 
compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResourceBinding\">\n[]TaskResourceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Param\">\n[]Param\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunOutputs\">TaskRunOutputs\n<\/h3>\n<div>\n<p>TaskRunOutputs holds the output values that this task was invoked with.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResourceBinding\">\n[]TaskResourceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunReason\">TaskRunReason\n(<code>string<\/code> alias)<\/h3>\n<div>\n<p>TaskRunReason is an enum used to store all TaskRun reasons for\nthe Succeeded condition that are controlled by the TaskRun itself. 
Failure\nreasons that emerge from underlying resources are not included here.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunResources\">TaskRunResources\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRunResources allows a TaskRun to declare input and output TaskResourceBindings.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>inputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResourceBinding\">\n[]TaskResourceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Inputs holds the input resources this task was invoked with.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>outputs<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResourceBinding\">\n[]TaskResourceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Outputs holds the output resources this task was invoked with.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunResult\">TaskRunResult\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.StepState\">StepState<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>TaskRunStepResult is a type alias of TaskRunResult<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the given name.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>type<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ResultsType\">\nResultsType\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Type is the user-specified type of the result. 
The only type currently\nsupported is &ldquo;string&rdquo;; &ldquo;array&rdquo; will be supported in future work.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamValue\">\nParamValue\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Value is the given value of the result.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunSidecarOverride\">TaskRunSidecarOverride\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineTaskRunSpec\">PipelineTaskRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRunSidecarOverride is used to override the values of a Sidecar in the corresponding Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The name of the Sidecar to override.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The resource requirements to apply to the Sidecar.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunSpec\">TaskRunSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRun\">TaskRun<\/a>)\n<\/p>\n<div>\n<p>TaskRunSpec defines the desired state of TaskRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>debug<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunDebug\">\nTaskRunDebug\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a 
href=\"#tekton.dev\/v1beta1.Params\">\nParams\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunResources\">\nTaskRunResources\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>serviceAccountName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskRef<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRef\">\nTaskRef\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>no more than one of the TaskRef and TaskSpec may be specified.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskSpec\">\nTaskSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Specifying PipelineSpec can be disabled by setting\n<code>disable-inline-spec<\/code> feature flag..<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>status<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunSpecStatus\">\nTaskRunSpecStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Used for cancelling a TaskRun (and maybe more later on)<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>statusMessage<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunSpecStatusMessage\">\nTaskRunSpecStatusMessage\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Status message for cancellation.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retries<\/code><br\/>\n<em>\nint\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Retries represents how many times this TaskRun should be retried in the event of Task failure.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>timeout<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Time after which one retry attempt times out. 
Defaults to 1 hour.\nRefer to Go&rsquo;s ParseDuration documentation for the expected format: <a href=\"https:\/\/golang.org\/pkg\/time\/#ParseDuration\">https:\/\/golang.org\/pkg\/time\/#ParseDuration<\/a><\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>podTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/unversioned.Template\">\nTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>PodTemplate holds pod-specific configuration.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceBinding\">\n[]WorkspaceBinding\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspaces is a list of WorkspaceBindings from volumes to workspaces.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepOverrides<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunStepOverride\">\n[]TaskRunStepOverride\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Overrides to apply to Steps in this TaskRun.\nIf a field is specified in both a Step and a StepOverride,\nthe value from the StepOverride will be used.\nThis field is only supported when the alpha feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecarOverrides<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunSidecarOverride\">\n[]TaskRunSidecarOverride\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Overrides to apply to Sidecars in this TaskRun.\nIf a field is specified in both a Sidecar and a SidecarOverride,\nthe value from the SidecarOverride will be used.\nThis field is only supported when the alpha feature gate is enabled.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>computeResources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Compute resources to use for this TaskRun.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 
id=\"tekton.dev\/v1beta1.TaskRunSpecStatus\">TaskRunSpecStatus\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRunSpecStatus defines the TaskRun spec status the user can provide<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunSpecStatusMessage\">TaskRunSpecStatusMessage\n(<code>string<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRunSpecStatusMessage defines human readable status messages for the TaskRun.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunStatus\">TaskRunStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRun\">TaskRun<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineRunTaskRunStatus\">PipelineRunTaskRunStatus<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>TaskRunStatus defines the observed state of TaskRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>Status<\/code><br\/>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/knative.dev\/pkg\/apis\/duck\/v1#Status\">\nknative.dev\/pkg\/apis\/duck\/v1.Status\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>Status<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>TaskRunStatusFields<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunStatusFields\">\nTaskRunStatusFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>TaskRunStatusFields<\/code> are embedded into this type.)\n<\/p>\n<p>TaskRunStatusFields inlines the status fields.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunStatusFields\">TaskRunStatusFields\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskRunStatus\">TaskRunStatus<\/a>)\n<\/p>\n<div>\n<p>TaskRunStatusFields holds the fields of TaskRun&rsquo;s status.  
This is defined\nseparately and inlined so that other types can readily consume these fields\nvia duck typing.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>podName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>PodName is the name of the pod responsible for executing this task&rsquo;s steps.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>startTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>StartTime is the time the build is actually started.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>completionTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>CompletionTime is the time the build completed.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>steps<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.StepState\">\n[]StepState\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Steps describes the state of each build step container.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>cloudEvents<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CloudEventDelivery\">\n[]CloudEventDelivery\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>CloudEvents describe the state of each cloud event requested via a\nCloudEventResource.<\/p>\n<p>Deprecated: Removed in v0.44.0.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retriesStatus<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunStatus\">\n[]TaskRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>RetriesStatus contains the history of TaskRunStatus in case of a retry in order to keep record of failures.\nAll TaskRunStatus stored in RetriesStatus will have no date within the RetriesStatus as is 
redundant.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resourcesResult<\/code><br\/>\n<em>\n[]github.com\/tektoncd\/pipeline\/pkg\/result.RunResult\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results from Resources built during the TaskRun.\nThis was tomb-stoned along with the removal of pipelineResources.\nDeprecated: this field is not populated and is preserved only for backwards compatibility<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskResults<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskRunResult\">\n[]TaskRunResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>TaskRunResults are the list of results written out by the task&rsquo;s containers.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecars<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.SidecarState\">\n[]SidecarState\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The list has one entry per sidecar in the manifest. Each entry\nrepresents the image ID of the corresponding sidecar.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>taskSpec<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskSpec\">\nTaskSpec\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>TaskSpec contains the Spec from the dereferenced Task definition used to instantiate this TaskRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>provenance<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Provenance\">\nProvenance\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs\/outputs, etc.).<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>spanContext<\/code><br\/>\n<em>\nmap[string]string\n<\/em>\n<\/td>\n<td>\n<p>SpanContext contains tracing span context fields.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskRunStepOverride\">TaskRunStepOverride\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineTaskRunSpec\">PipelineTaskRunSpec<\/a>, <a 
href=\"#tekton.dev\/v1beta1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>TaskRunStepOverride is used to override the values of a Step in the corresponding Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>The name of the Step to override.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#resourcerequirements-v1-core\">\nKubernetes core\/v1.ResourceRequirements\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>The resource requirements to apply to the Step.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TaskSpec\">TaskSpec\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.ClusterTask\">ClusterTask<\/a>, <a href=\"#tekton.dev\/v1beta1.Task\">Task<\/a>, <a href=\"#tekton.dev\/v1beta1.EmbeddedTask\">EmbeddedTask<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunSpec\">TaskRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunStatusFields\">TaskRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>TaskSpec defines the desired state of Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>resources<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResources\">\nTaskResources\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Resources is a list input and output resource to run the task\nResources are represented in TaskRuns as bindings to instances of\nPipelineResources.<\/p>\n<p>Deprecated: Unused, preserved only for backwards compatibility<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>params<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.ParamSpecs\">\nParamSpecs\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Params is a list of input parameters required to run the task. 
Params\nmust be supplied as inputs in TaskRuns unless they declare a default\nvalue.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>displayName<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>DisplayName is a user-facing name of the task that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is a user-facing description of the task that may be\nused to populate a UI.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>steps<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Step\">\n[]Step\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Steps are the steps of the build; each step is run sequentially with the\nsource mounted into \/workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumes<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#volume-v1-core\">\n[]Kubernetes core\/v1.Volume\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Volumes is a collection of volumes that are available to mount into the\nsteps of the build.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>stepTemplate<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.StepTemplate\">\nStepTemplate\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>StepTemplate can be used as the basis for all step containers within the\nTask, so that the steps inherit settings on the base container.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>sidecars<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.Sidecar\">\n[]Sidecar\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Sidecars are run alongside the Task&rsquo;s step containers. 
They begin before\nthe steps start and end after the steps complete.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspaces<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.WorkspaceDeclaration\">\n[]WorkspaceDeclaration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Workspaces are the volumes that this Task requires.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.TaskResult\">\n[]TaskResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Results are values that this Task can output<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.TimeoutFields\">TimeoutFields\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineRunSpec\">PipelineRunSpec<\/a>)\n<\/p>\n<div>\n<p>TimeoutFields allows granular specification of pipeline, task, and finally timeouts<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>pipeline<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Pipeline sets the maximum allowed duration for execution of the entire pipeline. 
The sum of individual timeouts for tasks and finally must not exceed this value.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>tasks<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Tasks sets the maximum allowed duration of this pipeline&rsquo;s tasks.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>finally<\/code><br\/>\n<em>\n<a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/apis\/meta\/v1#Duration\">\nKubernetes meta\/v1.Duration\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>Finally sets the maximum allowed duration of this pipeline&rsquo;s finally tasks.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.WhenExpression\">WhenExpression\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.ChildStatusReference\">ChildStatusReference<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineRunRunStatus\">PipelineRunRunStatus<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineRunTaskRunStatus\">PipelineRunTaskRunStatus<\/a>, <a href=\"#tekton.dev\/v1beta1.SkippedTask\">SkippedTask<\/a>)\n<\/p>\n<div>\n<p>WhenExpression allows a PipelineTask to declare expressions to be evaluated before the Task is run\nto determine whether the Task should be executed or skipped.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>input<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Input is the string for guard checking, which can be a static input or an output from a parent Task.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>operator<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/selection.Operator\n<\/em>\n<\/td>\n<td>\n<p>Operator that represents an Input&rsquo;s relationship to the values.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>values<\/code><br\/>\n<em>\n[]string\n<\/em>\n<\/td>\n<td>\n<p>Values is an array of strings, which is compared against the input, for guard checking.\nIt must be 
non-empty.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>cel<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>CEL is a Common Expression Language expression string, which can be used to conditionally execute\nthe task based on the result of the expression evaluation.\nMore info about CEL syntax: <a href=\"https:\/\/github.com\/google\/cel-spec\/blob\/master\/doc\/langdef.md\">https:\/\/github.com\/google\/cel-spec\/blob\/master\/doc\/langdef.md<\/a><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.WhenExpressions\">WhenExpressions\n(<code>[]github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1beta1.WhenExpression<\/code> alias)<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineTask\">PipelineTask<\/a>, <a href=\"#tekton.dev\/v1beta1.Step\">Step<\/a>)\n<\/p>\n<div>\n<p>WhenExpressions are used to specify whether a Task should be executed or skipped.\nAll of them need to evaluate to True for a guarded Task to be executed.<\/p>\n<\/div>\n<h3 id=\"tekton.dev\/v1beta1.WorkspaceBinding\">WorkspaceBinding\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1alpha1.RunSpec\">RunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.CustomRunSpec\">CustomRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineRunSpec\">PipelineRunSpec<\/a>, <a href=\"#tekton.dev\/v1beta1.TaskRunSpec\">TaskRunSpec<\/a>)\n<\/p>\n<div>\n<p>WorkspaceBinding maps a Task&rsquo;s declared workspace to a Volume.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the workspace populated by the volume.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>subPath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SubPath is optionally a directory on the volume which should be used\nfor this binding (i.e. 
the volume will be mounted at this sub directory).<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>volumeClaimTemplate<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#persistentvolumeclaim-v1-core\">\nKubernetes core\/v1.PersistentVolumeClaim\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>VolumeClaimTemplate is a template for a claim that will be created in the same namespace.\nThe PipelineRun controller is responsible for creating a unique claim for each instance of PipelineRun.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>persistentVolumeClaim<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#persistentvolumeclaimvolumesource-v1-core\">\nKubernetes core\/v1.PersistentVolumeClaimVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>PersistentVolumeClaimVolumeSource represents a reference to a\nPersistentVolumeClaim in the same namespace. Either this OR EmptyDir can be used.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>emptyDir<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#emptydirvolumesource-v1-core\">\nKubernetes core\/v1.EmptyDirVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>EmptyDir represents a temporary directory that shares a Task&rsquo;s lifetime.\nMore info: <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#emptydir\">https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#emptydir<\/a>\nEither this OR PersistentVolumeClaim can be used.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>configMap<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#configmapvolumesource-v1-core\">\nKubernetes core\/v1.ConfigMapVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>ConfigMap represents a configMap that should populate this 
workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>secret<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#secretvolumesource-v1-core\">\nKubernetes core\/v1.SecretVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Secret represents a secret that should populate this workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>projected<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#projectedvolumesource-v1-core\">\nKubernetes core\/v1.ProjectedVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Projected represents a projected volume that should populate this workspace.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>csi<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#csivolumesource-v1-core\">\nKubernetes core\/v1.CSIVolumeSource\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>CSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.WorkspaceDeclaration\">WorkspaceDeclaration\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.TaskSpec\">TaskSpec<\/a>)\n<\/p>\n<div>\n<p>WorkspaceDeclaration is a declaration of a volume that a Task requires.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name by which you can bind the volume at runtime.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>description<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Description is an optional human readable description of this volume.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>mountPath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>MountPath overrides the 
directory that the volume will be made available at.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>readOnly<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>ReadOnly dictates whether a mounted volume is writable. By default this\nfield is false and so mounted volumes are writable.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>optional<\/code><br\/>\n<em>\nbool\n<\/em>\n<\/td>\n<td>\n<p>Optional marks a Workspace as not being required in TaskRuns. By default\nthis field is false and so declared workspaces are required.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.WorkspacePipelineTaskBinding\">WorkspacePipelineTaskBinding\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.PipelineTask\">PipelineTask<\/a>)\n<\/p>\n<div>\n<p>WorkspacePipelineTaskBinding describes how a workspace passed into the pipeline should be\nmapped to a task&rsquo;s declared workspace.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the workspace as declared by the task<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>workspace<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Workspace is the name of the workspace declared by the pipeline<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>subPath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>SubPath is optionally a directory on the volume which should be used\nfor this binding (i.e. 
the volume will be mounted at this sub directory).<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.WorkspaceUsage\">WorkspaceUsage\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.Sidecar\">Sidecar<\/a>, <a href=\"#tekton.dev\/v1beta1.Step\">Step<\/a>)\n<\/p>\n<div>\n<p>WorkspaceUsage is used by a Step or Sidecar to declare that it wants isolated access\nto a Workspace defined in a Task.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name is the name of the workspace this Step or Sidecar wants access to.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>mountPath<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>MountPath is the path that the workspace should be mounted to inside the Step or Sidecar,\noverriding any MountPath specified in the Task&rsquo;s WorkspaceDeclaration.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.CustomRunResult\">CustomRunResult\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.CustomRunStatusFields\">CustomRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>CustomRunResult used to describe the results of a task<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>name<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Name the given name<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>value<\/code><br\/>\n<em>\nstring\n<\/em>\n<\/td>\n<td>\n<p>Value the given value of the result<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.CustomRunStatus\">CustomRunStatus\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.CustomRun\">CustomRun<\/a>, <a href=\"#tekton.dev\/v1.PipelineRunRunStatus\">PipelineRunRunStatus<\/a>, <a href=\"#tekton.dev\/v1beta1.PipelineRunRunStatus\">PipelineRunRunStatus<\/a>, <a 
href=\"#tekton.dev\/v1beta1.CustomRunStatusFields\">CustomRunStatusFields<\/a>)\n<\/p>\n<div>\n<p>CustomRunStatus defines the observed state of CustomRun<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>Status<\/code><br\/>\n<em>\n<a href=\"https:\/\/pkg.go.dev\/knative.dev\/pkg\/apis\/duck\/v1#Status\">\nknative.dev\/pkg\/apis\/duck\/v1.Status\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>Status<\/code> are embedded into this type.)\n<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>CustomRunStatusFields<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CustomRunStatusFields\">\nCustomRunStatusFields\n<\/a>\n<\/em>\n<\/td>\n<td>\n<p>\n(Members of <code>CustomRunStatusFields<\/code> are embedded into this type.)\n<\/p>\n<p>CustomRunStatusFields inlines the status fields.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"tekton.dev\/v1beta1.CustomRunStatusFields\">CustomRunStatusFields\n<\/h3>\n<p>\n(<em>Appears on:<\/em><a href=\"#tekton.dev\/v1beta1.CustomRunStatus\">CustomRunStatus<\/a>)\n<\/p>\n<div>\n<p>CustomRunStatusFields holds the fields of CustomRun&rsquo;s status.  
This is defined\nseparately and inlined so that other types can readily consume these fields\nvia duck typing.<\/p>\n<\/div>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>\n<code>startTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>StartTime is the time the build is actually started.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>completionTime<\/code><br\/>\n<em>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.24\/#time-v1-meta\">\nKubernetes meta\/v1.Time\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>CompletionTime is the time the build completed.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>results<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CustomRunResult\">\n[]CustomRunResult\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>Results reports any output result values to be consumed by later\ntasks in a pipeline.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>retriesStatus<\/code><br\/>\n<em>\n<a href=\"#tekton.dev\/v1beta1.CustomRunStatus\">\n[]CustomRunStatus\n<\/a>\n<\/em>\n<\/td>\n<td>\n<em>(Optional)<\/em>\n<p>RetriesStatus contains the history of CustomRunStatus, in case of a retry.<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td>\n<code>extraFields<\/code><br\/>\n<em>\nk8s.io\/apimachinery\/pkg\/runtime.RawExtension\n<\/em>\n<\/td>\n<td>\n<p>ExtraFields holds arbitrary fields provided by the custom task\ncontroller.<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr\/>\n<p><em>\nGenerated with <code>gen-crd-api-reference-docs<\/code>\n.\n<\/em><\/p>","site":"tekton","answers_cleaned":"         title  Pipeline API linkTitle  Pipeline API weight  404           p Packages   p   ul   li   a href   resolution tekton dev 2fv1alpha1  resolution tekton dev v1alpha1  a    li   li   a href   resolution tekton dev 2fv1beta1  
resolution tekton dev v1beta1  a    li   li   a href   tekton dev 2fv1  tekton dev v1  a    li   li   a href   tekton dev 2fv1alpha1  tekton dev v1alpha1  a    li   li   a href   tekton dev 2fv1beta1  tekton dev v1beta1  a    li    ul   h2 id  resolution tekton dev v1alpha1  resolution tekton dev v1alpha1  h2   div    div  Resource Types   ul   ul   h3 id  resolution tekton dev v1alpha1 ResolutionRequest  ResolutionRequest   h3   div   p ResolutionRequest is an object for requesting the content of a Tekton resource like a pipeline yaml   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code metadata  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  objectmeta v1 meta   Kubernetes meta v1 ObjectMeta   a    em    td   td   em  Optional   em  Refer to the Kubernetes API documentation for the fields of the  code metadata  code  field    td    tr   tr   td   code spec  code  br    em   a href   resolution tekton dev v1alpha1 ResolutionRequestSpec   ResolutionRequestSpec   a    em    td   td   em  Optional   em   p Spec holds the information for the request part of the resource request   p   br    br    table   tr   td   code params  code  br    em  map string string   em    td   td   em  Optional   em   p Parameters are the runtime attributes passed to the resolver to help it figure out how to resolve the resource being requested  For example  repo URL  commit SHA  path to file  the kind of authentication to leverage  etc   p    td    tr    table    td    tr   tr   td   code status  code  br    em   a href   resolution tekton dev v1alpha1 ResolutionRequestStatus   ResolutionRequestStatus   a    em    td   td   em  Optional   em   p Status communicates the state of the request and  ultimately  the content of the resolved resource   p    td    tr    tbody    table   h3 id  resolution tekton dev v1alpha1 ResolutionRequestSpec  ResolutionRequestSpec   h3   p    em Appears on   
em  a href   resolution tekton dev v1alpha1 ResolutionRequest  ResolutionRequest  a     p   div   p ResolutionRequestSpec are all the fields in the spec of the ResolutionRequest CRD   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code params  code  br    em  map string string   em    td   td   em  Optional   em   p Parameters are the runtime attributes passed to the resolver to help it figure out how to resolve the resource being requested  For example  repo URL  commit SHA  path to file  the kind of authentication to leverage  etc   p    td    tr    tbody    table   h3 id  resolution tekton dev v1alpha1 ResolutionRequestStatus  ResolutionRequestStatus   h3   p    em Appears on   em  a href   resolution tekton dev v1alpha1 ResolutionRequest  ResolutionRequest  a     p   div   p ResolutionRequestStatus are all the fields in a ResolutionRequest rsquo s status subresource   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code Status  code  br    em   a href  https   pkg go dev knative dev pkg apis duck v1 Status   knative dev pkg apis duck v1 Status   a    em    td   td   p   Members of  code Status  code  are embedded into this type     p    td    tr   tr   td   code ResolutionRequestStatusFields  code  br    em   a href   resolution tekton dev v1alpha1 ResolutionRequestStatusFields   ResolutionRequestStatusFields   a    em    td   td   p   Members of  code ResolutionRequestStatusFields  code  are embedded into this type     p    td    tr    tbody    table   h3 id  resolution tekton dev v1alpha1 ResolutionRequestStatusFields  ResolutionRequestStatusFields   h3   p    em Appears on   em  a href   resolution tekton dev v1alpha1 ResolutionRequestStatus  ResolutionRequestStatus  a     p   div   p ResolutionRequestStatusFields are the ResolutionRequest specific fields for the status subresource   p    div   table   thead   tr   th Field  th   th Description  th    
tr    thead   tbody   tr   td   code data  code  br    em  string   em    td   td   p Data is a string representation of the resolved content of the requested resource in lined into the ResolutionRequest object   p    td    tr   tr   td   code refSource  code  br    em   a href   tekton dev v1 RefSource   RefSource   a    em    td   td   p RefSource is the source reference of the remote data that records where the remote file came from including the url  digest and the entrypoint   p    td    tr    tbody    table   hr    h2 id  resolution tekton dev v1beta1  resolution tekton dev v1beta1  h2   div    div  Resource Types   ul   ul   h3 id  resolution tekton dev v1beta1 ResolutionRequest  ResolutionRequest   h3   div   p ResolutionRequest is an object for requesting the content of a Tekton resource like a pipeline yaml   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code metadata  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  objectmeta v1 meta   Kubernetes meta v1 ObjectMeta   a    em    td   td   em  Optional   em  Refer to the Kubernetes API documentation for the fields of the  code metadata  code  field    td    tr   tr   td   code spec  code  br    em   a href   resolution tekton dev v1beta1 ResolutionRequestSpec   ResolutionRequestSpec   a    em    td   td   em  Optional   em   p Spec holds the information for the request part of the resource request   p   br    br    table   tr   td   code params  code  br    em   a href   tekton dev v1 Param     Param   a    em    td   td   em  Optional   em   p Parameters are the runtime attributes passed to the resolver to help it figure out how to resolve the resource being requested  For example  repo URL  commit SHA  path to file  the kind of authentication to leverage  etc   p    td    tr   tr   td   code url  code  br    em  string   em    td   td   em  Optional   em   p URL is the runtime url passed to the resolver 
to help it figure out how to resolve the resource being requested  This is currently at an ALPHA stability level and subject to alpha API compatibility policies   p    td    tr    table    td    tr   tr   td   code status  code  br    em   a href   resolution tekton dev v1beta1 ResolutionRequestStatus   ResolutionRequestStatus   a    em    td   td   em  Optional   em   p Status communicates the state of the request and  ultimately  the content of the resolved resource   p    td    tr    tbody    table   h3 id  resolution tekton dev v1beta1 ResolutionRequestSpec  ResolutionRequestSpec   h3   p    em Appears on   em  a href   resolution tekton dev v1beta1 ResolutionRequest  ResolutionRequest  a     p   div   p ResolutionRequestSpec are all the fields in the spec of the ResolutionRequest CRD   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code params  code  br    em   a href   tekton dev v1 Param     Param   a    em    td   td   em  Optional   em   p Parameters are the runtime attributes passed to the resolver to help it figure out how to resolve the resource being requested  For example  repo URL  commit SHA  path to file  the kind of authentication to leverage  etc   p    td    tr   tr   td   code url  code  br    em  string   em    td   td   em  Optional   em   p URL is the runtime url passed to the resolver to help it figure out how to resolve the resource being requested  This is currently at an ALPHA stability level and subject to alpha API compatibility policies   p    td    tr    tbody    table   h3 id  resolution tekton dev v1beta1 ResolutionRequestStatus  ResolutionRequestStatus   h3   p    em Appears on   em  a href   resolution tekton dev v1beta1 ResolutionRequest  ResolutionRequest  a     p   div   p ResolutionRequestStatus are all the fields in a ResolutionRequest rsquo s status subresource   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code 
Status  code  br    em   a href  https   pkg go dev knative dev pkg apis duck v1 Status   knative dev pkg apis duck v1 Status   a    em    td   td   p   Members of  code Status  code  are embedded into this type     p    td    tr   tr   td   code ResolutionRequestStatusFields  code  br    em   a href   resolution tekton dev v1beta1 ResolutionRequestStatusFields   ResolutionRequestStatusFields   a    em    td   td   p   Members of  code ResolutionRequestStatusFields  code  are embedded into this type     p    td    tr    tbody    table   h3 id  resolution tekton dev v1beta1 ResolutionRequestStatusFields  ResolutionRequestStatusFields   h3   p    em Appears on   em  a href   resolution tekton dev v1beta1 ResolutionRequestStatus  ResolutionRequestStatus  a     p   div   p ResolutionRequestStatusFields are the ResolutionRequest specific fields for the status subresource   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code data  code  br    em  string   em    td   td   p Data is a string representation of the resolved content of the requested resource in lined into the ResolutionRequest object   p    td    tr   tr   td   code source  code  br    em   a href   tekton dev v1 RefSource   RefSource   a    em    td   td   p Deprecated  Use RefSource instead  p    td    tr   tr   td   code refSource  code  br    em   a href   tekton dev v1 RefSource   RefSource   a    em    td   td   p RefSource is the source reference of the remote data that records the url  digest and the entrypoint   p    td    tr    tbody    table   hr    h2 id  tekton dev v1  tekton dev v1  h2   div   p Package v1 contains API Schema definitions for the pipeline v1 API group  p    div  Resource Types   ul  li   a href   tekton dev v1 Pipeline  Pipeline  a    li  li   a href   tekton dev v1 PipelineRun  PipelineRun  a    li  li   a href   tekton dev v1 Task  Task  a    li  li   a href   tekton dev v1 TaskRun  TaskRun  a    li   ul   h3 id  tekton dev 
v1 Pipeline  Pipeline   h3   div   p Pipeline describes a list of Tasks to execute  It expresses how outputs of tasks feed into inputs of subsequent tasks   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code apiVersion  code  br   string  td   td   code  tekton dev v1   code    td    tr   tr   td   code kind  code  br   string   td   td  code Pipeline  code   td    tr   tr   td   code metadata  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  objectmeta v1 meta   Kubernetes meta v1 ObjectMeta   a    em    td   td   em  Optional   em  Refer to the Kubernetes API documentation for the fields of the  code metadata  code  field    td    tr   tr   td   code spec  code  br    em   a href   tekton dev v1 PipelineSpec   PipelineSpec   a    em    td   td   em  Optional   em   p Spec holds the desired state of the Pipeline from the client  p   br    br    table   tr   td   code displayName  code  br    em  string   em    td   td   em  Optional   em   p DisplayName is a user facing name of the pipeline that may be used to populate a UI   p    td    tr   tr   td   code description  code  br    em  string   em    td   td   em  Optional   em   p Description is a user facing description of the pipeline that may be used to populate a UI   p    td    tr   tr   td   code tasks  code  br    em   a href   tekton dev v1 PipelineTask     PipelineTask   a    em    td   td   p Tasks declares the graph of Tasks that execute when this Pipeline is run   p    td    tr   tr   td   code params  code  br    em   a href   tekton dev v1 ParamSpecs   ParamSpecs   a    em    td   td   p Params declares a list of input parameters that must be supplied when this Pipeline is run   p    td    tr   tr   td   code workspaces  code  br    em   a href   tekton dev v1 PipelineWorkspaceDeclaration     PipelineWorkspaceDeclaration   a    em    td   td   em  Optional   em   p Workspaces declares a set of named 
workspaces that are expected to be provided by a PipelineRun   p    td    tr   tr   td   code results  code  br    em   a href   tekton dev v1 PipelineResult     PipelineResult   a    em    td   td   em  Optional   em   p Results are values that this pipeline can output once run  p    td    tr   tr   td   code finally  code  br    em   a href   tekton dev v1 PipelineTask     PipelineTask   a    em    td   td   p Finally declares the list of Tasks that execute just before leaving the Pipeline i e  either after all Tasks are finished executing successfully or after a failure which would result in ending the Pipeline  p    td    tr    table    td    tr    tbody    table   h3 id  tekton dev v1 PipelineRun  PipelineRun   h3   div   p PipelineRun represents a single execution of a Pipeline  PipelineRuns are how the graph of Tasks declared in a Pipeline are executed  they specify inputs to Pipelines such as parameter values and capture operational aspects of the Tasks execution such as service account and tolerations  Creating a PipelineRun creates TaskRuns for Tasks in the referenced Pipeline   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code apiVersion  code  br   string  td   td   code  tekton dev v1   code    td    tr   tr   td   code kind  code  br   string   td   td  code PipelineRun  code   td    tr   tr   td   code metadata  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  objectmeta v1 meta   Kubernetes meta v1 ObjectMeta   a    em    td   td   em  Optional   em  Refer to the Kubernetes API documentation for the fields of the  code metadata  code  field    td    tr   tr   td   code spec  code  br    em   a href   tekton dev v1 PipelineRunSpec   PipelineRunSpec   a    em    td   td   em  Optional   em   br    br    table   tr   td   code pipelineRef  code  br    em   a href   tekton dev v1 PipelineRef   PipelineRef   a    em    td   td   em  Optional   em    td    
tr   tr   td   code pipelineSpec  code  br    em   a href   tekton dev v1 PipelineSpec   PipelineSpec   a    em    td   td   em  Optional   em   p Specifying PipelineSpec can be disabled by setting  code disable inline spec  code  feature flag    p    td    tr   tr   td   code params  code  br    em   a href   tekton dev v1 Params   Params   a    em    td   td   p Params is a list of parameter names and values   p    td    tr   tr   td   code status  code  br    em   a href   tekton dev v1 PipelineRunSpecStatus   PipelineRunSpecStatus   a    em    td   td   em  Optional   em   p Used for cancelling a pipelinerun  and maybe more later on   p    td    tr   tr   td   code timeouts  code  br    em   a href   tekton dev v1 TimeoutFields   TimeoutFields   a    em    td   td   em  Optional   em   p Time after which the Pipeline times out  Currently three keys are accepted in the map pipeline  tasks and finally with Timeouts pipeline  gt   Timeouts tasks   Timeouts finally  p    td    tr   tr   td   code taskRunTemplate  code  br    em   a href   tekton dev v1 PipelineTaskRunTemplate   PipelineTaskRunTemplate   a    em    td   td   em  Optional   em   p TaskRunTemplate represent template of taskrun  p    td    tr   tr   td   code workspaces  code  br    em   a href   tekton dev v1 WorkspaceBinding     WorkspaceBinding   a    em    td   td   em  Optional   em   p Workspaces holds a set of workspace bindings that must match names with those declared in the pipeline   p    td    tr   tr   td   code taskRunSpecs  code  br    em   a href   tekton dev v1 PipelineTaskRunSpec     PipelineTaskRunSpec   a    em    td   td   em  Optional   em   p TaskRunSpecs holds a set of runtime specs  p    td    tr    table    td    tr   tr   td   code status  code  br    em   a href   tekton dev v1 PipelineRunStatus   PipelineRunStatus   a    em    td   td   em  Optional   em    td    tr    tbody    table   h3 id  tekton dev v1 Task  Task   h3   div   p Task represents a collection of sequential 
steps that are run as part of a Pipeline using a set of inputs and producing a set of outputs  Tasks execute when TaskRuns are created that provide the input parameters and resources and output resources the Task requires   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code apiVersion  code  br   string  td   td   code  tekton dev v1   code    td    tr   tr   td   code kind  code  br   string   td   td  code Task  code   td    tr   tr   td   code metadata  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  objectmeta v1 meta   Kubernetes meta v1 ObjectMeta   a    em    td   td   em  Optional   em  Refer to the Kubernetes API documentation for the fields of the  code metadata  code  field    td    tr   tr   td   code spec  code  br    em   a href   tekton dev v1 TaskSpec   TaskSpec   a    em    td   td   em  Optional   em   p Spec holds the desired state of the Task from the client  p   br    br    table   tr   td   code params  code  br    em   a href   tekton dev v1 ParamSpecs   ParamSpecs   a    em    td   td   em  Optional   em   p Params is a list of input parameters required to run the task  Params must be supplied as inputs in TaskRuns unless they declare a default value   p    td    tr   tr   td   code displayName  code  br    em  string   em    td   td   em  Optional   em   p DisplayName is a user facing name of the task that may be used to populate a UI   p    td    tr   tr   td   code description  code  br    em  string   em    td   td   em  Optional   em   p Description is a user facing description of the task that may be used to populate a UI   p    td    tr   tr   td   code steps  code  br    em   a href   tekton dev v1 Step     Step   a    em    td   td   p Steps are the steps of the build  each step is run sequentially with the source mounted into  workspace   p    td    tr   tr   td   code volumes  code  br    em   a href  https   kubernetes io docs 
reference generated kubernetes api v1 24  volume v1 core     Kubernetes core v1 Volume   a    em    td   td   p Volumes is a collection of volumes that are available to mount into the steps of the build   p    td    tr   tr   td   code stepTemplate  code  br    em   a href   tekton dev v1 StepTemplate   StepTemplate   a    em    td   td   p StepTemplate can be used as the basis for all step containers within the Task  so that the steps inherit settings on the base container   p    td    tr   tr   td   code sidecars  code  br    em   a href   tekton dev v1 Sidecar     Sidecar   a    em    td   td   p Sidecars are run alongside the Task rsquo s step containers  They begin before the steps start and end after the steps complete   p    td    tr   tr   td   code workspaces  code  br    em   a href   tekton dev v1 WorkspaceDeclaration     WorkspaceDeclaration   a    em    td   td   p Workspaces are the volumes that this Task requires   p    td    tr   tr   td   code results  code  br    em   a href   tekton dev v1 TaskResult     TaskResult   a    em    td   td   p Results are values that this Task can output  p    td    tr    table    td    tr    tbody    table   h3 id  tekton dev v1 TaskRun  TaskRun   h3   div   p TaskRun represents a single execution of a Task  TaskRuns are how the steps specified in a Task are executed  they specify the parameters and resources used to run the steps in a Task   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code apiVersion  code  br   string  td   td   code  tekton dev v1   code    td    tr   tr   td   code kind  code  br   string   td   td  code TaskRun  code   td    tr   tr   td   code metadata  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  objectmeta v1 meta   Kubernetes meta v1 ObjectMeta   a    em    td   td   em  Optional   em  Refer to the Kubernetes API documentation for the fields of the  code metadata  code  field    td    
| Field | Description |
|-------|-------------|
| `spec`<br>*[TaskRunSpec](#tekton.dev/v1.TaskRunSpec)* | *Optional*. Fields of the embedded `TaskRunSpec` are listed below. |
| `debug`<br>*[TaskRunDebug](#tekton.dev/v1.TaskRunDebug)* | *Optional* |
| `params`<br>*[Params](#tekton.dev/v1.Params)* | *Optional* |
| `serviceAccountName`<br>*string* | *Optional* |
| `taskRef`<br>*[TaskRef](#tekton.dev/v1.TaskRef)* | *Optional*. No more than one of the TaskRef and TaskSpec may be specified. |
| `taskSpec`<br>*[TaskSpec](#tekton.dev/v1.TaskSpec)* | *Optional*. Specifying TaskSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `status`<br>*[TaskRunSpecStatus](#tekton.dev/v1.TaskRunSpecStatus)* | *Optional*. Used for cancelling a TaskRun (and maybe more later on). |
| `statusMessage`<br>*[TaskRunSpecStatusMessage](#tekton.dev/v1.TaskRunSpecStatusMessage)* | *Optional*. Status message for cancellation. |
| `retries`<br>*int* | *Optional*. Retries represents how many times this TaskRun should be retried in the event of task failure. |
| `timeout`<br>*[Kubernetes meta/v1.Duration](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration)* | *Optional*. Time after which one retry attempt times out. Defaults to 1 hour. Refer to Go's [ParseDuration](https://golang.org/pkg/time/#ParseDuration) documentation for the expected format. |
| `podTemplate`<br>*[Template](#tekton.dev/unversioned.Template)* | PodTemplate holds pod-specific configuration. |
| `workspaces`<br>*[][WorkspaceBinding](#tekton.dev/v1.WorkspaceBinding)* | *Optional*. Workspaces is a list of WorkspaceBindings from volumes to workspaces. |
| `stepSpecs`<br>*[][TaskRunStepSpec](#tekton.dev/v1.TaskRunStepSpec)* | *Optional*. Specs to apply to Steps in this TaskRun. If a field is specified in both a Step and a StepSpec, the value from the StepSpec will be used. This field is only supported when the alpha feature gate is enabled. |
| `sidecarSpecs`<br>*[][TaskRunSidecarSpec](#tekton.dev/v1.TaskRunSidecarSpec)* | *Optional*. Specs to apply to Sidecars in this TaskRun. If a field is specified in both a Sidecar and a SidecarSpec, the value from the SidecarSpec will be used. This field is only supported when the alpha feature gate is enabled. |
| `computeResources`<br>*[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core)* | Compute resources to use for this TaskRun. |
| `status`<br>*[TaskRunStatus](#tekton.dev/v1.TaskRunStatus)* | *Optional* |

### Algorithm (`string` alias)

Algorithm is a standard cryptographic hash algorithm.

### Artifact

*Appears on:* [Artifacts](#tekton.dev/v1.Artifacts), [StepState](#tekton.dev/v1.StepState)

TaskRunStepArtifact represents an artifact produced or used by a step within a task run. It directly uses the Artifact type for its structure.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | The artifact's identifying category name. |
| `values`<br>*[][ArtifactValue](#tekton.dev/v1.ArtifactValue)* | A collection of values related to the artifact. |
| `buildOutput`<br>*bool* | Indicates whether the artifact is a build output or a by-product. |

### ArtifactValue

*Appears on:* [Artifact](#tekton.dev/v1.Artifact)

ArtifactValue represents a specific value or data element within an Artifact.

| Field | Description |
|-------|-------------|
| `digest`<br>*map[Algorithm]string* | Algorithm-specific digests for verifying the content, e.g. SHA256. |
| `uri`<br>*string* | |

### Artifacts

*Appears on:* [TaskRunStatusFields](#tekton.dev/v1.TaskRunStatusFields)

Artifacts represents the collection of input and output artifacts associated with a task run or a similar process. Artifacts in this context are units of data or resources that the process either consumes as input or produces as output.

| Field | Description |
|-------|-------------|
| `inputs`<br>*[][Artifact](#tekton.dev/v1.Artifact)* | |
| `outputs`<br>*[][Artifact](#tekton.dev/v1.Artifact)* | |

### ChildStatusReference

*Appears on:* [PipelineRunStatusFields](#tekton.dev/v1.PipelineRunStatusFields)

ChildStatusReference is used to point to the statuses of individual TaskRuns and Runs within this PipelineRun.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | Name is the name of the TaskRun or Run this is referencing. |
| `displayName`<br>*string* | DisplayName is a user-facing name of the pipelineTask that may be used to populate a UI. |
| `pipelineTaskName`<br>*string* | PipelineTaskName is the name of the PipelineTask this is referencing. |
| `whenExpressions`<br>*[][WhenExpression](#tekton.dev/v1.WhenExpression)* | *Optional*. WhenExpressions is the list of checks guarding the execution of the PipelineTask. |

### Combination (`map[string]string` alias)

Combination is a map, mainly defined to hold a single combination from a Matrix, with the param name as key and the param value as value.

### Combinations (`[]Combination` alias)

Combinations is a list of Combination.

### EmbeddedTask

*Appears on:* [PipelineTask](#tekton.dev/v1.PipelineTask)

EmbeddedTask is used to define a Task inline within a Pipeline's PipelineTasks.

| Field | Description |
|-------|-------------|
| `spec`<br>*k8s.io/apimachinery/pkg/runtime.RawExtension* | *Optional*. Spec is a specification of a custom task. The embedded `RawExtension` carries `Raw` (*[]byte*), the underlying serialization of this object (TODO: determine how to detect ContentType and ContentEncoding of `Raw` data), and `Object` (*k8s.io/apimachinery/pkg/runtime.Object*), which can hold a representation of this extension, useful for working with versioned structs. |
| `metadata`<br>*[PipelineTaskMetadata](#tekton.dev/v1.PipelineTaskMetadata)* | *Optional* |
| `TaskSpec`<br>*[TaskSpec](#tekton.dev/v1.TaskSpec)* | (Members of `TaskSpec` are embedded into this type.) *Optional*. TaskSpec is a specification of a task. |

### IncludeParams

IncludeParams allows passing in specific combinations of Parameters into the Matrix.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | Name of the specified combination. |
| `params`<br>*[Params](#tekton.dev/v1.Params)* | Params takes only `Parameters` of type `"string"`. The names of the `params` must match the names of the `params` in the underlying `Task`. |

### Matrix

*Appears on:* [PipelineTask](#tekton.dev/v1.PipelineTask)

Matrix is used to fan out Tasks in a Pipeline.
| Field | Description |
|-------|-------------|
| `params`<br>*[Params](#tekton.dev/v1.Params)* | Params is a list of parameters used to fan out the pipelineTask. Params takes only `Parameters` of type `"array"`. Each array element is supplied to the `PipelineTask` by substituting `params` of type `"string"` in the underlying `Task`. The names of the `params` in the `Matrix` must match the names of the `params` in the underlying `Task` that they will be substituting. |
| `include`<br>*[IncludeParamsList](#tekton.dev/v1.IncludeParamsList)* | *Optional*. Include is a list of IncludeParams which allows passing in specific combinations of Parameters into the Matrix. |

### OnErrorType (`string` alias)

*Appears on:* [Step](#tekton.dev/v1.Step)

OnErrorType defines the supported exit behaviors of a container on error.

| Value | Description |
|-------|-------------|
| `"continue"` | Continue indicates that the rest of the steps keep executing irrespective of the container exit code. |
| `"stopAndFail"` | StopAndFail indicates that the TaskRun exits if the container exits with a non-zero exit code. |

### Param

*Appears on:* [ResolutionRequestSpec](#resolution.tekton.dev/v1beta1.ResolutionRequestSpec)

Param declares a ParamValue to use for the parameter called name.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | |
| `value`<br>*[ParamValue](#tekton.dev/v1.ParamValue)* | |

### ParamSpec

ParamSpec defines arbitrary parameters needed beyond typed inputs (such as resources). Parameter values are provided by users as inputs on a TaskRun or PipelineRun.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | Name declares the name by which a parameter is referenced. |
| `type`<br>*[ParamType](#tekton.dev/v1.ParamType)* | *Optional*. Type is the user-specified type of the parameter. The possible types are currently "string", "array" and "object", and "string" is the default. |
| `description`<br>*string* | *Optional*. Description is a user-facing description of the parameter that may be used to populate a UI. |
| `properties`<br>*[map[string]PropertySpec](#tekton.dev/v1.PropertySpec)* | *Optional*. Properties is the JSON Schema properties to support key-value pair parameters. |
| `default`<br>*[ParamValue](#tekton.dev/v1.ParamValue)* | *Optional*. Default is the value a parameter takes if no input value is supplied. If default is set, a Task may be executed without a supplied value for the parameter. |
| `enum`<br>*[]string* | *Optional*. Enum declares a set of allowed param input values for tasks/pipelines that can be validated. If Enum is not set, no input validation is performed for the param. |

### ParamSpecs (`[]ParamSpec` alias)

*Appears on:* [PipelineSpec](#tekton.dev/v1.PipelineSpec), [TaskSpec](#tekton.dev/v1.TaskSpec), [StepActionSpec (v1alpha1)](#tekton.dev/v1alpha1.StepActionSpec), [StepActionSpec (v1beta1)](#tekton.dev/v1beta1.StepActionSpec)

ParamSpecs is a list of ParamSpec.

### ParamType (`string` alias)

*Appears on:* [ParamSpec](#tekton.dev/v1.ParamSpec), [ParamValue](#tekton.dev/v1.ParamValue), [PropertySpec](#tekton.dev/v1.PropertySpec)

ParamType indicates the type of an input parameter, used to distinguish between a single string and an array of strings. Possible values: `"string"`, `"array"`, `"object"`.

### ParamValue

*Appears on:* [Param](#tekton.dev/v1.Param), [ParamSpec](#tekton.dev/v1.ParamSpec), [PipelineResult](#tekton.dev/v1.PipelineResult), [PipelineRunResult](#tekton.dev/v1.PipelineRunResult), [TaskResult](#tekton.dev/v1.TaskResult), [TaskRunResult](#tekton.dev/v1.TaskRunResult)

ResultValue is a type alias of ParamValue.

| Field | Description |
|-------|-------------|
| `Type`<br>*[ParamType](#tekton.dev/v1.ParamType)* | Represents the stored type of ParamValues. |
| `StringVal`<br>*string* | |
| `ArrayVal`<br>*[]string* | |
| `ObjectVal`<br>*map[string]string* | |

### Params (`[]Param` alias)

*Appears on:* [IncludeParams](#tekton.dev/v1.IncludeParams), [Matrix](#tekton.dev/v1.Matrix), [PipelineRunSpec](#tekton.dev/v1.PipelineRunSpec), [PipelineTask](#tekton.dev/v1.PipelineTask), [ResolverRef](#tekton.dev/v1.ResolverRef), [Step](#tekton.dev/v1.Step), [TaskRunInputs](#tekton.dev/v1.TaskRunInputs), [TaskRunSpec](#tekton.dev/v1.TaskRunSpec)

Params is a list of Param.

### PipelineRef

*Appears on:* [PipelineRunSpec](#tekton.dev/v1.PipelineRunSpec), [PipelineTask](#tekton.dev/v1.PipelineTask)

PipelineRef can be used to refer to a specific instance of a Pipeline.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | Name of the referent. More info: [http://kubernetes.io/docs/user-guide/identifiers#names](http://kubernetes.io/docs/user-guide/identifiers#names) |
| `apiVersion`<br>*string* | *Optional*. API version of the referent. |
| `ResolverRef`<br>*[ResolverRef](#tekton.dev/v1.ResolverRef)* | *Optional*. ResolverRef allows referencing a Pipeline in a remote location like a git repo. This field is only supported when the alpha feature gate is enabled. |

### PipelineResult

*Appears on:* [PipelineSpec](#tekton.dev/v1.PipelineSpec)

PipelineResult is used to describe the results of a pipeline.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | Name is the given name. |
| `type`<br>*[ResultsType](#tekton.dev/v1.ResultsType)* | Type is the user-specified type of the result. The possible types are 'string', 'array', and 'object', with 'string' as the default. 'array' and 'object' types are alpha features. |
| `description`<br>*string* | *Optional*. Description is a human-readable description of the result. |
| `value`<br>*[ParamValue](#tekton.dev/v1.ParamValue)* | Value is the expression used to retrieve the value. |

### PipelineRunReason (`string` alias)

PipelineRunReason represents a reason for the pipeline run's "Succeeded" condition.

| Value | Description |
|-------|-------------|
| `"CELEvaluationFailed"` | ReasonCELEvaluationFailed indicates the pipeline fails the CEL evaluation. |
| `"Cancelled"` | PipelineRunReasonCancelled is the reason set when the PipelineRun is cancelled by the user. This reason may be found with a corev1.ConditionFalse status if the cancellation was processed successfully, or with a corev1.ConditionUnknown status if the cancellation is being processed or failed. |
| `"CancelledRunningFinally"` | PipelineRunReasonCancelledRunningFinally indicates that the pipeline has been gracefully cancelled and no new Tasks will be scheduled by the controller, but final tasks are now running. |
| `"Completed"` | PipelineRunReasonCompleted is the reason set when the PipelineRun completed successfully with one or more skipped Tasks. |
| `"PipelineRunCouldntCancel"` | ReasonCouldntCancel indicates that a PipelineRun was cancelled but attempting to update all of the running TaskRuns as cancelled failed. |
| `"CouldntGetPipeline"` | ReasonCouldntGetPipeline indicates that the reason for the failure status is that the associated Pipeline couldn't be retrieved. |
| `"CouldntGetPipelineResult"` | PipelineRunReasonCouldntGetPipelineResult indicates that the pipeline fails to retrieve the referenced result; this could be due to failed TaskRuns or Runs that were supposed to produce the results. |
| `"CouldntGetTask"` | ReasonCouldntGetTask indicates that the reason for the failure status is that the associated Pipeline's Tasks couldn't all be retrieved. |
| `"PipelineRunCouldntTimeOut"` | ReasonCouldntTimeOut indicates that a PipelineRun was timed out but attempting to update all of the running TaskRuns as timed out failed. |
| `"CreateRunFailed"` | ReasonCreateRunFailed indicates that the pipeline fails to create the taskrun or other run resources. |
| `"Failed"` | PipelineRunReasonFailed is the reason set when the PipelineRun completed with a failure. |
| `"PipelineValidationFailed"` | ReasonFailedValidation indicates that the reason for the failure status is that the pipelinerun failed runtime validation. |
| `"InvalidPipelineResourceBindings"` | ReasonInvalidBindings indicates that the PipelineResources bound in the PipelineRun didn't match those declared in the Pipeline. |
| `"PipelineInvalidGraph"` | ReasonInvalidGraph indicates that the associated Pipeline is an invalid graph (e.g. wrong order, a cycle, …). |
| `"InvalidMatrixParameterTypes"` | ReasonInvalidMatrixParameterTypes indicates a matrix contains invalid parameter types. |
| `"InvalidParamValue"` | PipelineRunReasonInvalidParamValue indicates that the PipelineRun Param input value is not allowed. |
| `"InvalidPipelineResultReference"` | PipelineRunReasonInvalidPipelineResultReference indicates a pipeline result was declared by the pipeline but not initialized in the pipelineTask. |
| `"InvalidTaskResultReference"` | ReasonInvalidTaskResultReference indicates a task result was declared but was not initialized by that task. |
| `"InvalidTaskRunSpecs"` | ReasonInvalidTaskRunSpec indicates that a PipelineRun.Spec.TaskRunSpecs[].PipelineTaskName refers to a task name that does not exist in the pipelineSpec. |
| `"InvalidWorkspaceBindings"` | ReasonInvalidWorkspaceBinding indicates that a Pipeline expects a workspace but a PipelineRun has provided an invalid binding. |
| `"ObjectParameterMissKeys"` | ReasonObjectParameterMissKeys indicates that the object param value provided from the PipelineRun spec misses some keys required for the object param declared in the Pipeline spec. |
| `"ParamArrayIndexingInvalid"` | ReasonParamArrayIndexingInvalid indicates that the use of param array indexing is out of bounds. |
| `"ParameterMissing"` | ReasonParameterMissing indicates that the associated PipelineRun didn't provide all the required parameters. |
| `"ParameterTypeMismatch"` | ReasonParameterTypeMismatch indicates that parameter(s) declared in the PipelineRun do not have the same declared type as the parameter(s) declared in the Pipeline that they are supposed to override. |
| `"PipelineRunPending"` | PipelineRunReasonPending is the reason set when the PipelineRun is in the pending state. |
| `"RequiredWorkspaceMarkedOptional"` | ReasonRequiredWorkspaceMarkedOptional indicates an optional workspace has been passed to a Task that is expecting a non-optional workspace. |
| `"ResolvingPipelineRef"` | ReasonResolvingPipelineRef indicates that the PipelineRun is waiting for its pipelineRef to be asynchronously resolved. |
| `"ResourceVerificationFailed"` | ReasonResourceVerificationFailed indicates that the pipeline fails the trusted resource verification; the content may have changed, or the signature or public key may be invalid. |
| `"Running"` | PipelineRunReasonRunning is the reason set when the PipelineRun is running. |
| `"Started"` | PipelineRunReasonStarted is the reason set when the PipelineRun has just started. |
| `"StoppedRunningFinally"` | PipelineRunReasonStoppedRunningFinally indicates that the pipeline has been gracefully stopped and no new Tasks will be scheduled by the controller, but final tasks are now running. |
| `"PipelineRunStopping"` | PipelineRunReasonStopping indicates that no new Tasks will be scheduled by the controller, and the pipeline will stop once all running tasks complete their work. |
| `"Succeeded"` | PipelineRunReasonSuccessful is the reason set when the PipelineRun completed successfully. |
| `"PipelineRunTimeout"` | PipelineRunReasonTimedOut is the reason set when the PipelineRun has timed out. |

### PipelineRunResult

*Appears on:* [PipelineRunStatusFields](#tekton.dev/v1.PipelineRunStatusFields)

PipelineRunResult is used to describe the results of a pipeline.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | Name is the result's name as declared by the Pipeline. |
| `value`<br>*[ParamValue](#tekton.dev/v1.ParamValue)* | Value is the result returned from the execution of this PipelineRun. |

### PipelineRunRunStatus

PipelineRunRunStatus contains the name of the PipelineTask for this Run and the Run's Status.

| Field | Description |
|-------|-------------|
| `pipelineTaskName`<br>*string* | PipelineTaskName is the name of the PipelineTask. |
| `status`<br>*[CustomRunStatus](#tekton.dev/v1beta1.CustomRunStatus)* | *Optional*. Status is the RunStatus for the corresponding Run. |
| `whenExpressions`<br>*[][WhenExpression](#tekton.dev/v1.WhenExpression)* | *Optional*. WhenExpressions is the list of checks guarding the execution of the PipelineTask. |
### PipelineRunSpec

*Appears on:* [PipelineRun](#tekton.dev/v1.PipelineRun)

PipelineRunSpec defines the desired state of PipelineRun.

| Field | Description |
|-------|-------------|
| `pipelineRef`<br>*[PipelineRef](#tekton.dev/v1.PipelineRef)* | *Optional* |
| `pipelineSpec`<br>*[PipelineSpec](#tekton.dev/v1.PipelineSpec)* | *Optional*. Specifying PipelineSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `params`<br>*[Params](#tekton.dev/v1.Params)* | Params is a list of parameter names and values. |
| `status`<br>*[PipelineRunSpecStatus](#tekton.dev/v1.PipelineRunSpecStatus)* | *Optional*. Used for cancelling a PipelineRun (and maybe more later on). |
| `timeouts`<br>*[TimeoutFields](#tekton.dev/v1.TimeoutFields)* | *Optional*. Time after which the Pipeline times out. Currently three keys are accepted in the map: `pipeline`, `tasks` and `finally`, with `Timeouts.pipeline >= Timeouts.tasks + Timeouts.finally`. |
| `taskRunTemplate`<br>*[PipelineTaskRunTemplate](#tekton.dev/v1.PipelineTaskRunTemplate)* | *Optional*. TaskRunTemplate represents the template of the TaskRun. |
| `workspaces`<br>*[][WorkspaceBinding](#tekton.dev/v1.WorkspaceBinding)* | *Optional*. Workspaces holds a set of workspace bindings that must match names with those declared in the pipeline. |
| `taskRunSpecs`<br>*[][PipelineTaskRunSpec](#tekton.dev/v1.PipelineTaskRunSpec)* | *Optional*. TaskRunSpecs holds a set of runtime specs. |

### PipelineRunSpecStatus (`string` alias)

*Appears on:* [PipelineRunSpec](#tekton.dev/v1.PipelineRunSpec)

PipelineRunSpecStatus defines the PipelineRun spec status the user can provide.

### PipelineRunStatus

*Appears on:* [PipelineRun](#tekton.dev/v1.PipelineRun)

PipelineRunStatus defines the observed state of PipelineRun.

| Field | Description |
|-------|-------------|
| `Status`<br>*[knative.dev/pkg/apis/duck/v1.Status](https://pkg.go.dev/knative.dev/pkg/apis/duck/v1#Status)* | (Members of `Status` are embedded into this type.) |
| `PipelineRunStatusFields`<br>*[PipelineRunStatusFields](#tekton.dev/v1.PipelineRunStatusFields)* | (Members of `PipelineRunStatusFields` are embedded into this type.) PipelineRunStatusFields inlines the status fields. |

### PipelineRunStatusFields

*Appears on:* [PipelineRunStatus](#tekton.dev/v1.PipelineRunStatus)

PipelineRunStatusFields holds the fields of PipelineRunStatus' status. This is defined separately and inlined so that other types can readily consume these fields via duck typing.

| Field | Description |
|-------|-------------|
| `startTime`<br>*[Kubernetes meta/v1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta)* | StartTime is the time the PipelineRun is actually started. |
| `completionTime`<br>*[Kubernetes meta/v1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta)* | CompletionTime is the time the PipelineRun completed. |
| `results`<br>*[][PipelineRunResult](#tekton.dev/v1.PipelineRunResult)* | *Optional*. Results are the list of results written out by the pipeline task's containers. |
| `pipelineSpec`<br>*[PipelineSpec](#tekton.dev/v1.PipelineSpec)* | PipelineRunSpec contains the exact spec used to instantiate the run. |
| `skippedTasks`<br>*[][SkippedTask](#tekton.dev/v1.SkippedTask)* | *Optional*. List of tasks that were skipped due to when expressions evaluating to false. |
| `childReferences`<br>*[][ChildStatusReference](#tekton.dev/v1.ChildStatusReference)* | *Optional*. List of TaskRun and Run names, PipelineTask names, and API versions/kinds for children of this PipelineRun. |
| `finallyStartTime`<br>*[Kubernetes meta/v1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta)* | *Optional*. FinallyStartTime is when all non-finally tasks have been completed and only finally tasks are being executed. |
| `provenance`<br>*[Provenance](#tekton.dev/v1.Provenance)* | *Optional*. Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs/outputs, etc.). |
| `spanContext`<br>*map[string]string* | SpanContext contains tracing span context fields. |

### PipelineRunTaskRunStatus

PipelineRunTaskRunStatus contains the name of the PipelineTask for this TaskRun and the TaskRun's Status.

| Field | Description |
|-------|-------------|
| `pipelineTaskName`<br>*string* | PipelineTaskName is the name of the PipelineTask. |
| `status`<br>*[TaskRunStatus](#tekton.dev/v1.TaskRunStatus)* | *Optional*. Status is the TaskRunStatus for the corresponding TaskRun. |
| `whenExpressions`<br>*[][WhenExpression](#tekton.dev/v1.WhenExpression)* | *Optional*. WhenExpressions is the list of checks guarding the execution of the PipelineTask. |

### PipelineSpec

*Appears on:* [Pipeline](#tekton.dev/v1.Pipeline), [PipelineRunSpec](#tekton.dev/v1.PipelineRunSpec), [PipelineRunStatusFields](#tekton.dev/v1.PipelineRunStatusFields), [PipelineTask](#tekton.dev/v1.PipelineTask)

PipelineSpec defines the desired state of Pipeline.

| Field | Description |
|-------|-------------|
| `displayName`<br>*string* | *Optional*. DisplayName is a user-facing name of the pipeline that may be used to populate a UI. |
| `description`<br>*string* | *Optional*. Description is a user-facing description of the pipeline that may be used to populate a UI. |
| `tasks`<br>*[][PipelineTask](#tekton.dev/v1.PipelineTask)* | Tasks declares the graph of Tasks that execute when this Pipeline is run. |
| `params`<br>*[ParamSpecs](#tekton.dev/v1.ParamSpecs)* | Params declares a list of input parameters that must be supplied when this Pipeline is run. |
| `workspaces`<br>*[][PipelineWorkspaceDeclaration](#tekton.dev/v1.PipelineWorkspaceDeclaration)* | *Optional*. Workspaces declares a set of named workspaces that are expected to be provided by a PipelineRun. |
| `results`<br>*[][PipelineResult](#tekton.dev/v1.PipelineResult)* | *Optional*. Results are values that this pipeline can output once run. |
| `finally`<br>*[][PipelineTask](#tekton.dev/v1.PipelineTask)* | Finally declares the list of Tasks that execute just before leaving the Pipeline, i.e. either after all Tasks are finished executing successfully or after a failure which would result in ending the Pipeline. |

### PipelineTask

*Appears on:* [PipelineSpec](#tekton.dev/v1.PipelineSpec)

PipelineTask defines a task in a Pipeline, passing inputs from both Params and from the output of previous tasks.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | Name is the name of this task within the context of a Pipeline. Name is used as a coordinate with the `from` and `runAfter` fields to establish the execution order of tasks relative to one another. |
| `displayName`<br>*string* | *Optional*. DisplayName
is the display name of this task within the context of a Pipeline  This display name may be used to populate a UI   p    td    tr   tr   td   code description  code  br    em  string   em    td   td   em  Optional   em   p Description is the description of this task within the context of a Pipeline  This description may be used to populate a UI   p    td    tr   tr   td   code taskRef  code  br    em   a href   tekton dev v1 TaskRef   TaskRef   a    em    td   td   em  Optional   em   p TaskRef is a reference to a task definition   p    td    tr   tr   td   code taskSpec  code  br    em   a href   tekton dev v1 EmbeddedTask   EmbeddedTask   a    em    td   td   em  Optional   em   p TaskSpec is a specification of a task Specifying TaskSpec can be disabled by setting  code disable inline spec  code  feature flag    p    td    tr   tr   td   code when  code  br    em   a href   tekton dev v1 WhenExpressions   WhenExpressions   a    em    td   td   em  Optional   em   p When is a list of when expressions that need to be true for the task to run  p    td    tr   tr   td   code retries  code  br    em  int   em    td   td   em  Optional   em   p Retries represents how many times this task should be retried in case of task failure  ConditionSucceeded set to False  p    td    tr   tr   td   code runAfter  code  br    em    string   em    td   td   em  Optional   em   p RunAfter is the list of PipelineTask names that should be executed before this Task executes   Used to force a specific ordering in graph execution    p    td    tr   tr   td   code params  code  br    em   a href   tekton dev v1 Params   Params   a    em    td   td   em  Optional   em   p Parameters declares parameters passed to this task   p    td    tr   tr   td   code matrix  code  br    em   a href   tekton dev v1 Matrix   Matrix   a    em    td   td   em  Optional   em   p Matrix declares parameters used to fan out this task   p    td    tr   tr   td   code workspaces  code  br    em   a href   tekton 
dev v1 WorkspacePipelineTaskBinding     WorkspacePipelineTaskBinding   a    em    td   td   em  Optional   em   p Workspaces maps workspaces from the pipeline spec to the workspaces declared in the Task   p    td    tr   tr   td   code timeout  code  br    em   a href  https   godoc org k8s io apimachinery pkg apis meta v1 Duration   Kubernetes meta v1 Duration   a    em    td   td   em  Optional   em   p Time after which the TaskRun times out  Defaults to 1 hour  Refer Go rsquo s ParseDuration documentation for expected format   a href  https   golang org pkg time  ParseDuration  https   golang org pkg time  ParseDuration  a   p    td    tr   tr   td   code pipelineRef  code  br    em   a href   tekton dev v1 PipelineRef   PipelineRef   a    em    td   td   em  Optional   em   p PipelineRef is a reference to a pipeline definition Note  PipelineRef is in preview mode and not yet supported  p    td    tr   tr   td   code pipelineSpec  code  br    em   a href   tekton dev v1 PipelineSpec   PipelineSpec   a    em    td   td   em  Optional   em   p PipelineSpec is a specification of a pipeline Note  PipelineSpec is in preview mode and not yet supported Specifying PipelineSpec can be disabled by setting  code disable inline spec  code  feature flag    p    td    tr   tr   td   code onError  code  br    em   a href   tekton dev v1 PipelineTaskOnErrorType   PipelineTaskOnErrorType   a    em    td   td   em  Optional   em   p OnError defines the exiting behavior of a PipelineRun on error can be set to   continue   stopAndFail    p    td    tr    tbody    table   h3 id  tekton dev v1 PipelineTaskMetadata  PipelineTaskMetadata   h3   p    em Appears on   em  a href   tekton dev v1 EmbeddedTask  EmbeddedTask  a    a href   tekton dev v1 PipelineTaskRunSpec  PipelineTaskRunSpec  a     p   div   p PipelineTaskMetadata contains the labels or annotations for an EmbeddedTask  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code 
labels  code  br    em  map string string   em    td   td   em  Optional   em    td    tr   tr   td   code annotations  code  br    em  map string string   em    td   td   em  Optional   em    td    tr    tbody    table   h3 id  tekton dev v1 PipelineTaskOnErrorType  PipelineTaskOnErrorType   code string  code  alias   h3   p    em Appears on   em  a href   tekton dev v1 PipelineTask  PipelineTask  a     p   div   p PipelineTaskOnErrorType defines a list of supported failure handling behaviors of a PipelineTask on error  p    div   table   thead   tr   th Value  th   th Description  th    tr    thead   tbody  tr  td  p   34 continue  34   p   td   td  p PipelineTaskContinue indicates to continue executing the rest of the DAG when the PipelineTask fails  p    td    tr  tr  td  p   34 stopAndFail  34   p   td   td  p PipelineTaskStopAndFail indicates to stop and fail the PipelineRun if the PipelineTask fails  p    td    tr   tbody    table   h3 id  tekton dev v1 PipelineTaskParam  PipelineTaskParam   h3   div   p PipelineTaskParam is used to provide arbitrary string parameters to a Task   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td    td    tr   tr   td   code value  code  br    em  string   em    td   td    td    tr    tbody    table   h3 id  tekton dev v1 PipelineTaskRun  PipelineTaskRun   h3   div   p PipelineTaskRun reports the results of running a step in the Task  Each task has the potential to succeed or fail  based on the exit code  and produces logs   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td    td    tr    tbody    table   h3 id  tekton dev v1 PipelineTaskRunSpec  PipelineTaskRunSpec   h3   p    em Appears on   em  a href   tekton dev v1 PipelineRunSpec  PipelineRunSpec  a     p   div   p PipelineTaskRunSpec  can be used to configure specific 
specs for a concrete Task  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code pipelineTaskName  code  br    em  string   em    td   td    td    tr   tr   td   code serviceAccountName  code  br    em  string   em    td   td    td    tr   tr   td   code podTemplate  code  br    em   a href   tekton dev unversioned Template   Template   a    em    td   td    td    tr   tr   td   code stepSpecs  code  br    em   a href   tekton dev v1 TaskRunStepSpec     TaskRunStepSpec   a    em    td   td    td    tr   tr   td   code sidecarSpecs  code  br    em   a href   tekton dev v1 TaskRunSidecarSpec     TaskRunSidecarSpec   a    em    td   td    td    tr   tr   td   code metadata  code  br    em   a href   tekton dev v1 PipelineTaskMetadata   PipelineTaskMetadata   a    em    td   td   em  Optional   em    td    tr   tr   td   code computeResources  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  resourcerequirements v1 core   Kubernetes core v1 ResourceRequirements   a    em    td   td   p Compute resources to use for this TaskRun  p    td    tr    tbody    table   h3 id  tekton dev v1 PipelineTaskRunTemplate  PipelineTaskRunTemplate   h3   p    em Appears on   em  a href   tekton dev v1 PipelineRunSpec  PipelineRunSpec  a     p   div   p PipelineTaskRunTemplate is used to specify run specifications for all Task in pipelinerun   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code podTemplate  code  br    em   a href   tekton dev unversioned Template   Template   a    em    td   td   em  Optional   em    td    tr   tr   td   code serviceAccountName  code  br    em  string   em    td   td   em  Optional   em    td    tr    tbody    table   h3 id  tekton dev v1 PipelineWorkspaceDeclaration  PipelineWorkspaceDeclaration   h3   p    em Appears on   em  a href   tekton dev v1 PipelineSpec  PipelineSpec  a     p   div   p 
WorkspacePipelineDeclaration creates a named slot in a Pipeline that a PipelineRun is expected to populate with a workspace binding   p   p Deprecated  use PipelineWorkspaceDeclaration type instead  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name is the name of a workspace to be provided by a PipelineRun   p    td    tr   tr   td   code description  code  br    em  string   em    td   td   em  Optional   em   p Description is a human readable string describing how the workspace will be used in the Pipeline  It can be useful to include a bit of detail about which tasks are intended to have access to the data on the workspace   p    td    tr   tr   td   code optional  code  br    em  bool   em    td   td   p Optional marks a Workspace as not being required in PipelineRuns  By default this field is false and so declared workspaces are required   p    td    tr    tbody    table   h3 id  tekton dev v1 PropertySpec  PropertySpec   h3   p    em Appears on   em  a href   tekton dev v1 ParamSpec  ParamSpec  a    a href   tekton dev v1 StepResult  StepResult  a    a href   tekton dev v1 TaskResult  TaskResult  a     p   div   p PropertySpec defines the struct for object keys  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code type  code  br    em   a href   tekton dev v1 ParamType   ParamType   a    em    td   td    td    tr    tbody    table   h3 id  tekton dev v1 Provenance  Provenance   h3   p    em Appears on   em  a href   tekton dev v1 PipelineRunStatusFields  PipelineRunStatusFields  a    a href   tekton dev v1 StepState  StepState  a    a href   tekton dev v1 TaskRunStatusFields  TaskRunStatusFields  a     p   div   p Provenance contains metadata about resources used in the TaskRun PipelineRun such as the source from where a remote build definition was fetched  This field aims to carry minimum amoumt of 
metadata in  Run status so that Tekton Chains can capture them in the provenance   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code refSource  code  br    em   a href   tekton dev v1 RefSource   RefSource   a    em    td   td   p RefSource identifies the source where a remote task pipeline came from   p    td    tr   tr   td   code featureFlags  code  br    em  github com tektoncd pipeline pkg apis config FeatureFlags   em    td   td   p FeatureFlags identifies the feature flags that were used during the task pipeline run  p    td    tr    tbody    table   h3 id  tekton dev v1 Ref  Ref   h3   p    em Appears on   em  a href   tekton dev v1 Step  Step  a     p   div   p Ref can be used to refer to a specific instance of a StepAction   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name of the referenced step  p    td    tr   tr   td   code ResolverRef  code  br    em   a href   tekton dev v1 ResolverRef   ResolverRef   a    em    td   td   em  Optional   em   p ResolverRef allows referencing a StepAction in a remote location like a git repo   p    td    tr    tbody    table   h3 id  tekton dev v1 RefSource  RefSource   h3   p    em Appears on   em  a href   tekton dev v1 Provenance  Provenance  a    a href   resolution tekton dev v1alpha1 ResolutionRequestStatusFields  ResolutionRequestStatusFields  a    a href   resolution tekton dev v1beta1 ResolutionRequestStatusFields  ResolutionRequestStatusFields  a     p   div   p RefSource contains the information that can uniquely identify where a remote built definition came from i e  Git repositories  Tekton Bundles in OCI registry and hub   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code uri  code  br    em  string   em    td   td   p URI indicates the identity of the source of the build definition  
Example   ldquo  a href  https   github com tektoncd catalog quot   https   github com tektoncd catalog rdquo   a   p    td    tr   tr   td   code digest  code  br    em  map string string   em    td   td   p Digest is a collection of cryptographic digests for the contents of the artifact specified by URI  Example    ldquo sha1 rdquo    ldquo f99d13e554ffcb696dee719fa85b695cb5b0f428 rdquo    p    td    tr   tr   td   code entryPoint  code  br    em  string   em    td   td   p EntryPoint identifies the entry point into the build  This is often a path to a build definition file and or a target label within that file  Example   ldquo task git clone 0 8 git clone yaml rdquo   p    td    tr    tbody    table   h3 id  tekton dev v1 ResolverName  ResolverName   code string  code  alias   h3   p    em Appears on   em  a href   tekton dev v1 ResolverRef  ResolverRef  a     p   div   p ResolverName is the name of a resolver from which a resource can be requested   p    div   h3 id  tekton dev v1 ResolverRef  ResolverRef   h3   p    em Appears on   em  a href   tekton dev v1 PipelineRef  PipelineRef  a    a href   tekton dev v1 Ref  Ref  a    a href   tekton dev v1 TaskRef  TaskRef  a     p   div   p ResolverRef can be used to refer to a Pipeline or Task in a remote location like a git repo  This feature is in beta and these fields are only available when the beta feature gate is enabled   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code resolver  code  br    em   a href   tekton dev v1 ResolverName   ResolverName   a    em    td   td   em  Optional   em   p Resolver is the name of the resolver that should perform resolution of the referenced Tekton resource  such as  ldquo git rdquo    p    td    tr   tr   td   code params  code  br    em   a href   tekton dev v1 Params   Params   a    em    td   td   em  Optional   em   p Params contains the parameters used to identify the referenced Tekton resource  Example entries 
might include  ldquo repo rdquo  or  ldquo path rdquo  but the set of params ultimately depends on the chosen resolver   p    td    tr    tbody    table   h3 id  tekton dev v1 ResultRef  ResultRef   h3   div   p ResultRef is a type that represents a reference to a task run result  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code pipelineTask  code  br    em  string   em    td   td    td    tr   tr   td   code result  code  br    em  string   em    td   td    td    tr   tr   td   code resultsIndex  code  br    em  int   em    td   td    td    tr   tr   td   code property  code  br    em  string   em    td   td    td    tr    tbody    table   h3 id  tekton dev v1 ResultsType  ResultsType   code string  code  alias   h3   p    em Appears on   em  a href   tekton dev v1 PipelineResult  PipelineResult  a    a href   tekton dev v1 StepResult  StepResult  a    a href   tekton dev v1 TaskResult  TaskResult  a    a href   tekton dev v1 TaskRunResult  TaskRunResult  a     p   div   p ResultsType indicates the type of a result  Used to distinguish between a single string and an array of strings  Note that there is ResultType used to find out whether a RunResult is from a task result or not  which is different from this ResultsType   p    div   table   thead   tr   th Value  th   th Description  th    tr    thead   tbody  tr  td  p   34 array  34   p   td   td   td    tr  tr  td  p   34 object  34   p   td   td   td    tr  tr  td  p   34 string  34   p   td   td   td    tr   tbody    table   h3 id  tekton dev v1 Sidecar  Sidecar   h3   p    em Appears on   em  a href   tekton dev v1 TaskSpec  TaskSpec  a     p   div   p Sidecar has nearly the same data structure as Step but does not have the ability to timeout   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name of the Sidecar specified as a DNS LABEL  Each 
Sidecar in a Task must have a unique name  DNS LABEL   Cannot be updated   p    td    tr   tr   td   code image  code  br    em  string   em    td   td   em  Optional   em   p Image reference name  More info   a href  https   kubernetes io docs concepts containers images  https   kubernetes io docs concepts containers images  a   p    td    tr   tr   td   code command  code  br    em    string   em    td   td   em  Optional   em   p Entrypoint array  Not executed within a shell  The image rsquo s ENTRYPOINT is used if this is not provided  Variable references   VAR NAME  are expanded using the Sidecar rsquo s environment  If a variable cannot be resolved  the reference in the input string will be unchanged  Double    are reduced to a single    which allows for escaping the   VAR NAME  syntax  i e   ldquo    VAR NAME  rdquo  will produce the string literal  ldquo   VAR NAME  rdquo   Escaped references will never be expanded  regardless of whether the variable exists or not  Cannot be updated  More info   a href  https   kubernetes io docs tasks inject data application define command argument container  running a command in a shell  https   kubernetes io docs tasks inject data application define command argument container  running a command in a shell  a   p    td    tr   tr   td   code args  code  br    em    string   em    td   td   em  Optional   em   p Arguments to the entrypoint  The image rsquo s CMD is used if this is not provided  Variable references   VAR NAME  are expanded using the Sidecar rsquo s environment  If a variable cannot be resolved  the reference in the input string will be unchanged  Double    are reduced to a single    which allows for escaping the   VAR NAME  syntax  i e   ldquo    VAR NAME  rdquo  will produce the string literal  ldquo   VAR NAME  rdquo   Escaped references will never be expanded  regardless of whether the variable exists or not  Cannot be updated  More info   a href  https   kubernetes io docs tasks inject data application 
define command argument container  running a command in a shell  https   kubernetes io docs tasks inject data application define command argument container  running a command in a shell  a   p    td    tr   tr   td   code workingDir  code  br    em  string   em    td   td   em  Optional   em   p Sidecar rsquo s working directory  If not specified  the container runtime rsquo s default will be used  which might be configured in the container image  Cannot be updated   p    td    tr   tr   td   code ports  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  containerport v1 core     Kubernetes core v1 ContainerPort   a    em    td   td   em  Optional   em   p List of ports to expose from the Sidecar  Exposing a port here gives the system additional information about the network connections a container uses  but is primarily informational  Not specifying a port here DOES NOT prevent that port from being exposed  Any port which is listening on the default  ldquo 0 0 0 0 rdquo  address inside a container will be accessible from the network  Cannot be updated   p    td    tr   tr   td   code envFrom  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  envfromsource v1 core     Kubernetes core v1 EnvFromSource   a    em    td   td   em  Optional   em   p List of sources to populate environment variables in the Sidecar  The keys defined within a source must be a C IDENTIFIER  All invalid keys will be reported as an event when the container is starting  When a key exists in multiple sources  the value associated with the last source will take precedence  Values defined by an Env with a duplicate key will take precedence  Cannot be updated   p    td    tr   tr   td   code env  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  envvar v1 core     Kubernetes core v1 EnvVar   a    em    td   td   em  Optional   em   p List of environment variables to 
set in the Sidecar  Cannot be updated   p    td    tr   tr   td   code computeResources  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  resourcerequirements v1 core   Kubernetes core v1 ResourceRequirements   a    em    td   td   em  Optional   em   p ComputeResources required by this Sidecar  Cannot be updated  More info   a href  https   kubernetes io docs concepts configuration manage resources containers   https   kubernetes io docs concepts configuration manage resources containers   a   p    td    tr   tr   td   code volumeMounts  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  volumemount v1 core     Kubernetes core v1 VolumeMount   a    em    td   td   em  Optional   em   p Volumes to mount into the Sidecar rsquo s filesystem  Cannot be updated   p    td    tr   tr   td   code volumeDevices  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  volumedevice v1 core     Kubernetes core v1 VolumeDevice   a    em    td   td   em  Optional   em   p volumeDevices is the list of block devices to be used by the Sidecar   p    td    tr   tr   td   code livenessProbe  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  probe v1 core   Kubernetes core v1 Probe   a    em    td   td   em  Optional   em   p Periodic probe of Sidecar liveness  Container will be restarted if the probe fails  Cannot be updated  More info   a href  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  a   p    td    tr   tr   td   code readinessProbe  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  probe v1 core   Kubernetes core v1 Probe   a    em    td   td   em  Optional   em   p Periodic probe of Sidecar service readiness  Container will be removed from 
service endpoints if the probe fails  Cannot be updated  More info   a href  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  a   p    td    tr   tr   td   code startupProbe  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  probe v1 core   Kubernetes core v1 Probe   a    em    td   td   em  Optional   em   p StartupProbe indicates that the Pod the Sidecar is running in has successfully initialized  If specified  no other probes are executed until this completes successfully  If this probe fails  the Pod will be restarted  just as if the livenessProbe failed  This can be used to provide different probe parameters at the beginning of a Pod rsquo s lifecycle  when it might take a long time to load data or warm a cache  than during steady state operation  This cannot be updated  More info   a href  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  a   p    td    tr   tr   td   code lifecycle  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  lifecycle v1 core   Kubernetes core v1 Lifecycle   a    em    td   td   em  Optional   em   p Actions that the management system should take in response to Sidecar lifecycle events  Cannot be updated   p    td    tr   tr   td   code terminationMessagePath  code  br    em  string   em    td   td   em  Optional   em   p Optional  Path at which the file to which the Sidecar rsquo s termination message will be written is mounted into the Sidecar rsquo s filesystem  Message written is intended to be brief final status  such as an assertion failure message  Will be truncated by the node if greater than 4096 bytes  The total message length across all containers will be limited to 12kb  Defaults to  dev termination log  
Cannot be updated   p    td    tr   tr   td   code terminationMessagePolicy  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  terminationmessagepolicy v1 core   Kubernetes core v1 TerminationMessagePolicy   a    em    td   td   em  Optional   em   p Indicate how the termination message should be populated  File will use the contents of terminationMessagePath to populate the Sidecar status message on both success and failure  FallbackToLogsOnError will use the last chunk of Sidecar log output if the termination message file is empty and the Sidecar exited with an error  The log output is limited to 2048 bytes or 80 lines  whichever is smaller  Defaults to File  Cannot be updated   p    td    tr   tr   td   code imagePullPolicy  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  pullpolicy v1 core   Kubernetes core v1 PullPolicy   a    em    td   td   em  Optional   em   p Image pull policy  One of Always  Never  IfNotPresent  Defaults to Always if  latest tag is specified  or IfNotPresent otherwise  Cannot be updated  More info   a href  https   kubernetes io docs concepts containers images updating images  https   kubernetes io docs concepts containers images updating images  a   p    td    tr   tr   td   code securityContext  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  securitycontext v1 core   Kubernetes core v1 SecurityContext   a    em    td   td   em  Optional   em   p SecurityContext defines the security options the Sidecar should be run with  If set  the fields of SecurityContext override the equivalent fields of PodSecurityContext  More info   a href  https   kubernetes io docs tasks configure pod container security context   https   kubernetes io docs tasks configure pod container security context   a   p    td    tr   tr   td   code stdin  code  br    em  bool   em    td   td   em  Optional   em   p Whether this Sidecar 
should allocate a buffer for stdin in the container runtime  If this is not set  reads from stdin in the Sidecar will always result in EOF  Default is false   p    td    tr   tr   td   code stdinOnce  code  br    em  bool   em    td   td   em  Optional   em   p Whether the container runtime should close the stdin channel after it has been opened by a single attach  When stdin is true the stdin stream will remain open across multiple attach sessions  If stdinOnce is set to true  stdin is opened on Sidecar start  is empty until the first client attaches to stdin  and then remains open and accepts data until the client disconnects  at which time stdin is closed and remains closed until the Sidecar is restarted  If this flag is false  a container processes that reads from stdin will never receive an EOF  Default is false  p    td    tr   tr   td   code tty  code  br    em  bool   em    td   td   em  Optional   em   p Whether this Sidecar should allocate a TTY for itself  also requires  lsquo stdin rsquo  to be true  Default is false   p    td    tr   tr   td   code script  code  br    em  string   em    td   td   em  Optional   em   p Script is the contents of an executable file to execute   p   p If Script is not empty  the Step cannot have an Command or Args   p    td    tr   tr   td   code workspaces  code  br    em   a href   tekton dev v1 WorkspaceUsage     WorkspaceUsage   a    em    td   td   em  Optional   em   p This is an alpha field  You must set the  ldquo enable api fields rdquo  feature flag to  ldquo alpha rdquo  for this field to be supported   p   p Workspaces is a list of workspaces from the Task that this Sidecar wants exclusive access to  Adding a workspace to this list means that any other Step or Sidecar that does not also request this Workspace will not have access to it   p    td    tr   tr   td   code restartPolicy  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  containerrestartpolicy v1 core   
Kubernetes core v1 ContainerRestartPolicy   a    em    td   td   em  Optional   em   p RestartPolicy refers to kubernetes RestartPolicy  It can only be set for an initContainer and must have it rsquo s policy set to  ldquo Always rdquo   It is currently left optional to help support Kubernetes versions prior to 1 29 when this feature was introduced   p    td    tr    tbody    table   h3 id  tekton dev v1 SidecarState  SidecarState   h3   p    em Appears on   em  a href   tekton dev v1 TaskRunStatusFields  TaskRunStatusFields  a     p   div   p SidecarState reports the results of running a sidecar in a Task   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code ContainerState  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  containerstate v1 core   Kubernetes core v1 ContainerState   a    em    td   td   p   Members of  code ContainerState  code  are embedded into this type     p    td    tr   tr   td   code name  code  br    em  string   em    td   td    td    tr   tr   td   code container  code  br    em  string   em    td   td    td    tr   tr   td   code imageID  code  br    em  string   em    td   td    td    tr    tbody    table   h3 id  tekton dev v1 SkippedTask  SkippedTask   h3   p    em Appears on   em  a href   tekton dev v1 PipelineRunStatusFields  PipelineRunStatusFields  a     p   div   p SkippedTask is used to describe the Tasks that were skipped due to their When Expressions evaluating to False  This is a struct because we are looking into including more details about the When Expressions that caused this Task to be skipped   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name is the Pipeline Task name  p    td    tr   tr   td   code reason  code  br    em   a href   tekton dev v1 SkippingReason   SkippingReason   a    em    td   td   p Reason is 
the cause of the PipelineTask being skipped   p    td    tr   tr   td   code whenExpressions  code  br    em   a href   tekton dev v1 WhenExpression     WhenExpression   a    em    td   td   em  Optional   em   p WhenExpressions is the list of checks guarding the execution of the PipelineTask  p    td    tr    tbody    table   h3 id  tekton dev v1 SkippingReason  SkippingReason   code string  code  alias   h3   p    em Appears on   em  a href   tekton dev v1 SkippedTask  SkippedTask  a     p   div   p SkippingReason explains why a PipelineTask was skipped   p    div   table   thead   tr   th Value  th   th Description  th    tr    thead   tbody  tr  td  p   34 Matrix Parameters have an empty array  34   p   td   td  p EmptyArrayInMatrixParams means the task was skipped because Matrix parameters contain empty array   p    td    tr  tr  td  p   34 PipelineRun Finally timeout has been reached  34   p   td   td  p FinallyTimedOutSkip means the task was skipped because the PipelineRun has passed its Timeouts Finally   p    td    tr  tr  td  p   34 PipelineRun was gracefully cancelled  34   p   td   td  p GracefullyCancelledSkip means the task was skipped because the pipeline run has been gracefully cancelled  p    td    tr  tr  td  p   34 PipelineRun was gracefully stopped  34   p   td   td  p GracefullyStoppedSkip means the task was skipped because the pipeline run has been gracefully stopped  p    td    tr  tr  td  p   34 Results were missing  34   p   td   td  p MissingResultsSkip means the task was skipped because it rsquo s missing necessary results  p    td    tr  tr  td  p   34 None  34   p   td   td  p None means the task was not skipped  p    td    tr  tr  td  p   34 Parent Tasks were skipped  34   p   td   td  p ParentTasksSkip means the task was skipped because its parent was skipped  p    td    tr  tr  td  p   34 PipelineRun timeout has been reached  34   p   td   td  p PipelineTimedOutSkip means the task was skipped because the PipelineRun has passed its 
| `"PipelineRun was stopping"` | StoppingSkip means the task was skipped because the pipeline run is stopping. |
| `"PipelineRun Tasks timeout has been reached"` | TasksTimedOutSkip means the task was skipped because the PipelineRun has passed its Timeouts.Tasks. |
| `"When Expressions evaluated to false"` | WhenExpressionsSkip means the task was skipped due to at least one of its when expressions evaluating to false. |

### Step

*Appears on:* [TaskSpec](#tekton.dev/v1.TaskSpec)

Step runs a subcomponent of a Task.

| Field | Description |
|---|---|
| `name`<br>*string* | Name of the Step specified as a DNS_LABEL. Each Step in a Task must have a unique name. |
| `image`<br>*string* | *(Optional)* Docker image name. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command`<br>*[]string* | *(Optional)* Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args`<br>*[]string* | *(Optional)* Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `workingDir`<br>*string* | *(Optional)* Step's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `envFrom`<br>*[[]Kubernetes core/v1.EnvFromSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core)* | *(Optional)* List of sources to populate environment variables in the Step. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the Step is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. |
| `env`<br>*[[]Kubernetes core/v1.EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core)* | *(Optional)* List of environment variables to set in the Step. Cannot be updated. |
| `computeResources`<br>*[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core)* | *(Optional)* ComputeResources required by this Step. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| `volumeMounts`<br>*[[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)* | *(Optional)* Volumes to mount into the Step's filesystem. Cannot be updated. |
| `volumeDevices`<br>*[[]Kubernetes core/v1.VolumeDevice](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumedevice-v1-core)* | *(Optional)* volumeDevices is the list of block devices to be used by the Step. |
| `imagePullPolicy`<br>*[Kubernetes core/v1.PullPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pullpolicy-v1-core)* | *(Optional)* Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if the `:latest` tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| `securityContext`<br>*[Kubernetes core/v1.SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core)* | *(Optional)* SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |
| `script`<br>*string* | *(Optional)* Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a Command, and the Args will be passed to the Script. |
| `timeout`<br>*[Kubernetes meta/v1.Duration](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration)* | *(Optional)* Timeout is the time after which the step times out. Defaults to never. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration |
| `workspaces`<br>*[[]WorkspaceUsage](#tekton.dev/v1.WorkspaceUsage)* | *(Optional)* This is an alpha field. You must set the "enable-api-fields" feature flag to "alpha" for this field to be supported. Workspaces is a list of workspaces from the Task that this Step wants exclusive access to. Adding a workspace to this list means that any other Step or Sidecar that does not also request this Workspace will not have access to it. |
| `onError`<br>*[OnErrorType](#tekton.dev/v1.OnErrorType)* | OnError defines the exiting behavior of a container on error; can be set to `continue` or `stopAndFail`. |
| `stdoutConfig`<br>*[StepOutputConfig](#tekton.dev/v1.StepOutputConfig)* | *(Optional)* Stores configuration for the stdout stream of the step. |
| `stderrConfig`<br>*[StepOutputConfig](#tekton.dev/v1.StepOutputConfig)* | *(Optional)* Stores configuration for the stderr stream of the step. |
| `ref`<br>*[Ref](#tekton.dev/v1.Ref)* | *(Optional)* Contains the reference to an existing StepAction. |
| `params`<br>*[Params](#tekton.dev/v1.Params)* | *(Optional)* Params declares parameters passed to this step action. |
| `results`<br>*[[]StepResult](#tekton.dev/v1.StepResult)* | *(Optional)* Results declares StepResults produced by the Step. This field is at an ALPHA stability level and gated by the "enable-step-actions" feature flag. It can be used in an inlined Step when used to store Results to $(step.results.resultName.path). It cannot be used when referencing StepActions using v1.Step.Ref; the Results declared by the StepActions will be stored here instead. |
| `when`<br>*[WhenExpressions](#tekton.dev/v1.WhenExpressions)* | *(Optional)* When is a list of when expressions that need to be true for the task to run. |
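As a sketch of how the Step fields above fit together, here is a hypothetical inline Task that uses `script` (instead of `command`/`args`), `timeout`, and `onError`; the task and step names and values are illustrative only:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: demo-task            # hypothetical name
spec:
  steps:
    - name: check
      image: alpine:3.19
      # script is the contents of an executable file; it replaces command,
      # and any args would be passed to the script instead
      script: |
        #!/bin/sh
        echo "running checks..."
        exit 1
      timeout: 90s           # Go ParseDuration format; defaults to never
      onError: continue      # later steps still run even if this step fails
```

Because `onError: continue` is set, the failing exit code does not stop the Task; with the default `stopAndFail`, the Task would fail at this step.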
### StepOutputConfig

*Appears on:* [Step](#tekton.dev/v1.Step)

StepOutputConfig stores configuration for a step output stream.

| Field | Description |
|---|---|
| `path`<br>*string* | *(Optional)* Path to duplicate the stdout stream to on the container's local filesystem. |

### StepResult

*Appears on:* [Step](#tekton.dev/v1.Step), [StepActionSpec (v1alpha1)](#tekton.dev/v1alpha1.StepActionSpec), [Step (v1beta1)](#tekton.dev/v1beta1.Step), [StepActionSpec (v1beta1)](#tekton.dev/v1beta1.StepActionSpec)

StepResult is used to describe the Results of a Step. This field is at a BETA stability level and gated by the "enable-step-actions" feature flag.

| Field | Description |
|---|---|
| `name`<br>*string* | Name the given name. |
| `type`<br>*[ResultsType](#tekton.dev/v1.ResultsType)* | *(Optional)* The possible types are 'string', 'array', and 'object', with 'string' as the default. |
| `properties`<br>*map[string]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1.PropertySpec* | *(Optional)* Properties is the JSON Schema properties to support key-value pairs results. |
| `description`<br>*string* | *(Optional)* Description is a human-readable description of the result. |

### StepState

*Appears on:* [TaskRunStatusFields](#tekton.dev/v1.TaskRunStatusFields)

StepState reports the results of running a step in a Task.

| Field | Description |
|---|---|
| `ContainerState`<br>*[Kubernetes core/v1.ContainerState](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerstate-v1-core)* | (Members of `ContainerState` are embedded into this type.) |
| `name`<br>*string* | |
| `container`<br>*string* | |
| `imageID`<br>*string* | |
| `results`<br>*[[]TaskRunResult](#tekton.dev/v1.TaskRunResult)* | |
| `provenance`<br>*[Provenance](#tekton.dev/v1.Provenance)* | |
| `terminationReason`<br>*string* | |
| `inputs`<br>*[[]Artifact](#tekton.dev/v1.Artifact)* | |
| `outputs`<br>*[[]Artifact](#tekton.dev/v1.Artifact)* | |

### StepTemplate

*Appears on:* [TaskSpec](#tekton.dev/v1.TaskSpec)

StepTemplate is a template for a Step.

| Field | Description |
|---|---|
| `image`<br>*string* | *(Optional)* Image reference name. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command`<br>*[]string* | *(Optional)* Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the Step's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args`<br>*[]string* | *(Optional)* Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the Step's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `workingDir`<br>*string* | *(Optional)* Step's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `envFrom`<br>*[[]Kubernetes core/v1.EnvFromSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core)* | *(Optional)* List of sources to populate environment variables in the Step. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the Step is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. |
| `env`<br>*[[]Kubernetes core/v1.EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core)* | *(Optional)* List of environment variables to set in the Step. Cannot be updated. |
| `computeResources`<br>*[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core)* | *(Optional)* ComputeResources required by this Step. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| `volumeMounts`<br>*[[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)* | *(Optional)* Volumes to mount into the Step's filesystem. Cannot be updated. |
| `volumeDevices`<br>*[[]Kubernetes core/v1.VolumeDevice](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumedevice-v1-core)* | *(Optional)* volumeDevices is the list of block devices to be used by the Step. |
| `imagePullPolicy`<br>*[Kubernetes core/v1.PullPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pullpolicy-v1-core)* | *(Optional)* Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if the `:latest` tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| `securityContext`<br>*[Kubernetes core/v1.SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core)* | *(Optional)* SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |

### TaskBreakpoints

*Appears on:* [TaskRunDebug](#tekton.dev/v1.TaskRunDebug)

TaskBreakpoints defines the breakpoint config for a particular Task.

| Field | Description |
|---|---|
| `onFailure`<br>*string* | *(Optional)* If enabled, pause the TaskRun on failure of a step; the failed step will not exit. |
| `beforeSteps`<br>*[]string* | *(Optional)* |
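The TaskBreakpoints fields above can be sketched in a TaskRun's `debug` section; this is a minimal, illustrative example (the TaskRun and step names are hypothetical, and the `"enabled"` value follows the convention used in the Tekton debug documentation):

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: debug-run            # hypothetical name
spec:
  taskRef:
    name: demo-task          # hypothetical Task
  debug:
    breakpoints:
      onFailure: "enabled"   # pause on a failed step; the step will not exit
      beforeSteps:           # pause before the named steps run
        - check
```

With a breakpoint hit, the pod stays alive so the step's environment can be inspected (e.g. via `kubectl exec`) before the step is allowed to continue or fail.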
### TaskKind (`string` alias)

*Appears on:* [TaskRef](#tekton.dev/v1.TaskRef)

TaskKind defines the type of Task used by the pipeline.

| Value | Description |
|---|---|
| `"ClusterTask"` | ClusterTaskRefKind is the task type for a reference to a task with cluster scope. ClusterTasks are not supported in v1, but v1 types may reference ClusterTasks. |
| `"Task"` | NamespacedTaskKind indicates that the task type has a namespaced scope. |

### TaskRef

*Appears on:* [PipelineTask](#tekton.dev/v1.PipelineTask), [TaskRunSpec](#tekton.dev/v1.TaskRunSpec)

TaskRef can be used to refer to a specific instance of a task.

| Field | Description |
|---|---|
| `name`<br>*string* | Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names |
| `kind`<br>*[TaskKind](#tekton.dev/v1.TaskKind)* | TaskKind indicates the Kind of the Task: 1. Namespaced Task when Kind is set to "Task"; if Kind is "", it defaults to "Task". 2. Custom Task when Kind is non-empty and APIVersion is non-empty. |
| `apiVersion`<br>*string* | *(Optional)* API version of the referent. Note: a Task with non-empty APIVersion and Kind is considered a Custom Task. |
| `ResolverRef`<br>*[ResolverRef](#tekton.dev/v1.ResolverRef)* | *(Optional)* ResolverRef allows referencing a Task in a remote location like a git repo. This field is only supported when the alpha feature gate is enabled. |

### TaskResult

*Appears on:* [TaskSpec](#tekton.dev/v1.TaskSpec)

TaskResult is used to describe the results of a task.

| Field | Description |
|---|---|
| `name`<br>*string* | Name the given name. |
| `type`<br>*[ResultsType](#tekton.dev/v1.ResultsType)* | *(Optional)* Type is the user-specified type of the result. The possible type is currently "string"; "array" will be supported in following work. |
| `properties`<br>*map[string]github.com/tektoncd/pipeline/pkg/apis/pipeline/v1.PropertySpec* | *(Optional)* Properties is the JSON Schema properties to support key-value pairs results. |
| `description`<br>*string* | *(Optional)* Description is a human-readable description of the result. |
| `value`<br>*[ParamValue](#tekton.dev/v1.ParamValue)* | *(Optional)* Value the expression used to retrieve the value of the result from an underlying Step. |

### TaskRunDebug

*Appears on:* [TaskRunSpec](#tekton.dev/v1.TaskRunSpec)

TaskRunDebug defines the breakpoint config for a particular TaskRun.

| Field | Description |
|---|---|
| `breakpoints`<br>*[TaskBreakpoints](#tekton.dev/v1.TaskBreakpoints)* | *(Optional)* |

### TaskRunInputs

TaskRunInputs holds the input values that this task was invoked with.

| Field | Description |
|---|---|
| `params`<br>*[Params](#tekton.dev/v1.Params)* | *(Optional)* |
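The `ResolverRef` embedded in TaskRef allows a TaskRun to fetch its Task from a remote location instead of the cluster. As a hedged sketch, this hypothetical TaskRun uses the git resolver; the repository URL and path are illustrative, and the `url`/`revision`/`pathInRepo` params are those documented for Tekton's git resolver:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: remote-task-run          # hypothetical name
spec:
  taskRef:
    resolver: git                # ResolverRef: resolve the Task from a git repo
    params:
      - name: url
        value: https://github.com/example/catalog.git   # illustrative repo
      - name: revision
        value: main
      - name: pathInRepo
        value: task/demo-task/0.1/demo-task.yaml        # illustrative path
```

Note that, per the table above, remote resolution via `ResolverRef` is only supported when the alpha feature gate is enabled.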
### TaskRunReason (`string` alias)

TaskRunReason is an enum used to store all TaskRun reasons for the Succeeded condition that are controlled by the TaskRun itself. Failure reasons that emerge from underlying resources are not included here.

| Value | Description |
|---|---|
| `"TaskRunCancelled"` | TaskRunReasonCancelled is the reason set when the TaskRun is cancelled by the user. |
| `"Failed"` | TaskRunReasonFailed is the reason set when the TaskRun completed with a failure. |
| `"TaskRunResolutionFailed"` | TaskRunReasonFailedResolution indicates that the failure status was set because references within the TaskRun could not be resolved. |
| `"TaskRunValidationFailed"` | TaskRunReasonFailedValidation indicates that the failure status was set because the TaskRun failed runtime validation. |
| `"FailureIgnored"` | TaskRunReasonFailureIgnored is the reason set when the TaskRun has failed due to a pod execution error and the failure is ignored for the owning PipelineRun. TaskRuns failed due to reconciler validation errors should not use this reason. |
| `"TaskRunImagePullFailed"` | TaskRunReasonImagePullFailed is the reason set when a step of a task fails because its image could not be pulled. |
| `"InvalidParamValue"` | TaskRunReasonInvalidParamValue indicates that a TaskRun Param input value is not allowed. |
| `"ResourceVerificationFailed"` | TaskRunReasonResourceVerificationFailed indicates that the task failed trusted resource verification: the content has changed, the signature is invalid, or the public key is invalid. |
| `"TaskRunResultLargerThanAllowedLimit"` | TaskRunReasonResultLargerThanAllowedLimit is the reason set when one of the results exceeds its maximum allowed limit of 1 KB. |
| `"Running"` | TaskRunReasonRunning is the reason set when the TaskRun is running. |
| `"Started"` | TaskRunReasonStarted is the reason set when the TaskRun has just started. |
| `"TaskRunStopSidecarFailed"` | TaskRunReasonStopSidecarFailed indicates that a sidecar was not properly stopped. |
| `"Succeeded"` | TaskRunReasonSuccessful is the reason set when the TaskRun completed successfully. |
| `"TaskValidationFailed"` | TaskRunReasonTaskFailedValidation indicates that the failure status was set because the task failed runtime validation. |
| `"TaskRunTimeout"` | TaskRunReasonTimedOut is the reason set when a TaskRun execution has timed out. |
| `"ToBeRetried"` | TaskRunReasonToBeRetried is the reason set when the last TaskRun execution failed and will be retried. |

### TaskRunResult

*Appears on:* [StepState](#tekton.dev/v1.StepState), [TaskRunStatusFields](#tekton.dev/v1.TaskRunStatusFields)

TaskRunStepResult is a type alias of TaskRunResult.

| Field | Description |
|---|---|
| `name`<br>*string* | Name the given name. |
| `type`<br>*[ResultsType](#tekton.dev/v1.ResultsType)* | *(Optional)* Type is the user-specified type of the result. The possible type is currently "string"; "array" will be supported in following work. |
| `value`<br>*[ParamValue](#tekton.dev/v1.ParamValue)* | Value the given value of the result. |

### TaskRunSidecarSpec

*Appears on:* [PipelineTaskRunSpec](#tekton.dev/v1.PipelineTaskRunSpec), [TaskRunSpec](#tekton.dev/v1.TaskRunSpec)

TaskRunSidecarSpec is used to override the values of a Sidecar in the corresponding Task.

| Field | Description |
|---|---|
| `name`<br>*string* | The name of the Sidecar to override. |
| `computeResources`<br>*[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core)* | The resource requirements to apply to the Sidecar. |

### TaskRunSpec

*Appears on:* [TaskRun](#tekton.dev/v1.TaskRun)

TaskRunSpec defines the desired state of TaskRun.

| Field | Description |
|---|---|
| `debug`<br>*[TaskRunDebug](#tekton.dev/v1.TaskRunDebug)* | *(Optional)* |
| `params`<br>*[Params](#tekton.dev/v1.Params)* | *(Optional)* |
| `serviceAccountName`<br>*string* | *(Optional)* |
| `taskRef`<br>*[TaskRef](#tekton.dev/v1.TaskRef)* | *(Optional)* No more than one of the TaskRef and TaskSpec may be specified. |
| `taskSpec`<br>*[TaskSpec](#tekton.dev/v1.TaskSpec)* | *(Optional)* Specifying an inline TaskSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `status`<br>*[TaskRunSpecStatus](#tekton.dev/v1.TaskRunSpecStatus)* | *(Optional)* Used for cancelling a TaskRun (and maybe more later on). |
| `statusMessage`<br>*[TaskRunSpecStatusMessage](#tekton.dev/v1.TaskRunSpecStatusMessage)* | *(Optional)* Status message for cancellation. |
| `retries`<br>*int* | *(Optional)* Retries represents how many times this TaskRun should be retried in the event of task failure. |
| `timeout`<br>*[Kubernetes meta/v1.Duration](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration)* | *(Optional)* Time after which one retry attempt times out. Defaults to 1 hour. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration |
| `podTemplate`<br>*[Template](#tekton.dev/unversioned.Template)* | PodTemplate holds pod-specific configuration. |
| `workspaces`<br>*[[]WorkspaceBinding](#tekton.dev/v1.WorkspaceBinding)* | *(Optional)* Workspaces is a list of WorkspaceBindings from volumes to workspaces. |
| `stepSpecs`<br>*[[]TaskRunStepSpec](#tekton.dev/v1.TaskRunStepSpec)* | *(Optional)* Specs to apply to Steps in this TaskRun. If a field is specified in both a Step and a StepSpec, the value from the StepSpec will be used. This field is only supported when the alpha feature gate is enabled. |
| `sidecarSpecs`<br>*[[]TaskRunSidecarSpec](#tekton.dev/v1.TaskRunSidecarSpec)* | *(Optional)* Specs to apply to Sidecars in this TaskRun. If a field is specified in both a Sidecar and a SidecarSpec, the value from the SidecarSpec will be used. This field is only supported when the alpha feature gate is enabled. |
| `computeResources`<br>*[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core)* | Compute resources to use for this TaskRun. |

### TaskRunSpecStatus (`string` alias)

*Appears on:* [TaskRunSpec](#tekton.dev/v1.TaskRunSpec)

TaskRunSpecStatus defines the TaskRun spec status the user can provide.

### TaskRunSpecStatusMessage (`string` alias)

*Appears on:* [TaskRunSpec](#tekton.dev/v1.TaskRunSpec)

TaskRunSpecStatusMessage defines human-readable status messages for the TaskRun.

| Value | Description |
|---|---|
| `"TaskRun cancelled as the PipelineRun it belongs to has been cancelled."` | TaskRunCancelledByPipelineMsg indicates that the PipelineRun of which this TaskRun was a part has been cancelled. |
| `"TaskRun cancelled as the PipelineRun it belongs to has timed out."` | TaskRunCancelledByPipelineTimeoutMsg indicates that the TaskRun was cancelled because the PipelineRun running it timed out. |
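Several of the TaskRunSpec fields above can be combined in a single TaskRun; the following is a minimal sketch, assuming a Task named `demo-task` exists, with all other names and values illustrative:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-run              # hypothetical name
spec:
  serviceAccountName: build-bot
  taskRef:
    name: demo-task            # taskRef and taskSpec are mutually exclusive
  params:
    - name: target
      value: linux/amd64
  retries: 2                   # retry up to twice on task failure
  timeout: 30m                 # per-attempt timeout; defaults to 1h
  workspaces:
    - name: source
      emptyDir: {}             # WorkspaceBinding from a volume to a workspace
```

Cancelling this run would be done by patching `spec.status` (a TaskRunSpecStatus value) rather than deleting the TaskRun.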
### TaskRunStatus

*Appears on:* `TaskRun`, `PipelineRunTaskRunStatus`, `TaskRunStatusFields`

TaskRunStatus defines the observed state of TaskRun.

| Field | Type | Description |
|-------|------|-------------|
| `Status` | [knative.dev/pkg/apis/duck/v1.Status](https://pkg.go.dev/knative.dev/pkg/apis/duck/v1#Status) | (Members of `Status` are embedded into this type.) |
| `TaskRunStatusFields` | `TaskRunStatusFields` | (Members of `TaskRunStatusFields` are embedded into this type.) TaskRunStatusFields inlines the status fields. |

### TaskRunStatusFields

*Appears on:* `TaskRunStatus`

TaskRunStatusFields holds the fields of TaskRun's status. This is defined separately and inlined so that other types can readily consume these fields via duck typing.

| Field | Type | Description |
|-------|------|-------------|
| `podName` | `string` | PodName is the name of the pod responsible for executing this task's steps. |
| `startTime` | [Kubernetes meta/v1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta) | StartTime is the time the build is actually started. |
| `completionTime` | [Kubernetes meta/v1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta) | CompletionTime is the time the build completed. |
| `steps` | `[]StepState` | *(Optional)* Steps describes the state of each build step container. |
| `retriesStatus` | `[]TaskRunStatus` | *(Optional)* RetriesStatus contains the history of TaskRunStatus, in case of a retry, in order to keep a record of failures. All TaskRunStatus stored in RetriesStatus will have no date within the RetriesStatus, as it is redundant. |
| `results` | `[]TaskRunResult` | *(Optional)* Results are the list of results written out by the task's containers. |
| `artifacts` | `Artifacts` | *(Optional)* Artifacts are the list of artifacts written out by the task's containers. |
| `sidecars` | `[]SidecarState` | The list has one entry per sidecar in the manifest. Each entry represents the image ID of the corresponding sidecar. |
| `taskSpec` | `TaskSpec` | TaskSpec contains the Spec from the dereferenced Task definition used to instantiate this TaskRun. |
| `provenance` | `Provenance` | *(Optional)* Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs/outputs, etc.). |
| `spanContext` | `map[string]string` | SpanContext contains tracing span context fields. |
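These status fields are written by the controller, not by users. As a sketch, the observed status of a completed TaskRun might look like this (all values are illustrative, not from a real run):

```yaml
# Illustrative TaskRunStatus fragment (hypothetical values)
status:
  podName: build-taskrun-pod          # pod that executed the task's steps
  startTime: "2024-05-01T10:00:00Z"
  completionTime: "2024-05-01T10:02:30Z"
  results:
    - name: digest                    # hypothetical result declared by the Task
      type: string
      value: sha256:0a1b2c3d
```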
### TaskRunStepSpec

*Appears on:* `PipelineTaskRunSpec`, `TaskRunSpec`

TaskRunStepSpec is used to override the values of a Step in the corresponding Task.

| Field | Type | Description |
|-------|------|-------------|
| `name` | `string` | The name of the Step to override. |
| `computeResources` | [Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core) | The resource requirements to apply to the Step. |

### TaskSpec

*Appears on:* `Task`, `EmbeddedTask`, `TaskRunSpec`, `TaskRunStatusFields`

TaskSpec defines the desired state of Task.

| Field | Type | Description |
|-------|------|-------------|
| `params` | `ParamSpecs` | *(Optional)* Params is a list of input parameters required to run the task. Params must be supplied as inputs in TaskRuns unless they declare a default value. |
| `displayName` | `string` | *(Optional)* DisplayName is a user-facing name of the task that may be used to populate a UI. |
| `description` | `string` | *(Optional)* Description is a user-facing description of the task that may be used to populate a UI. |
| `steps` | `[]Step` | Steps are the steps of the build; each step is run sequentially with the source mounted into /workspace. |
| `volumes` | [[]Kubernetes core/v1.Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core) | Volumes is a collection of volumes that are available to mount into the steps of the build. |
| `stepTemplate` | `StepTemplate` | StepTemplate can be used as the basis for all step containers within the Task, so that the steps inherit settings on the base container. |
| `sidecars` | `[]Sidecar` | Sidecars are run alongside the Task's step containers. They begin before the steps start and end after the steps complete. |
| `workspaces` | `[]WorkspaceDeclaration` | Workspaces are the volumes that this Task requires. |
| `results` | `[]TaskResult` | Results are values that this Task can output. |

### TimeoutFields

*Appears on:* `PipelineRunSpec`

TimeoutFields allows granular specification of pipeline, task, and finally timeouts.

| Field | Type | Description |
|-------|------|-------------|
| `pipeline` | [Kubernetes meta/v1.Duration](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | Pipeline sets the maximum allowed duration for execution of the entire pipeline. The sum of individual timeouts for tasks and finally must not exceed this value. |
| `tasks` | [Kubernetes meta/v1.Duration](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | Tasks sets the maximum allowed duration of this pipeline's tasks. |
| `finally` | [Kubernetes meta/v1.Duration](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | Finally sets the maximum allowed duration of this pipeline's finally tasks. |
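A Task manifest exercising the TaskSpec fields described above might look like this sketch (the task name, image, and script are hypothetical):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: lint                 # hypothetical name
spec:
  displayName: "Lint sources"
  params:
    - name: flags
      type: string
      default: "--fast"      # a default makes the param optional in TaskRuns
  workspaces:
    - name: source           # declared workspace; bound by the TaskRun
  steps:
    - name: run-lint
      image: golangci/golangci-lint:v1.57   # hypothetical image
      workingDir: $(workspaces.source.path)
      script: |
        golangci-lint run $(params.flags)
  results:
    - name: report-path
      description: Where the lint report was written
```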
### WhenExpression

*Appears on:* `ChildStatusReference`, `PipelineRunRunStatus`, `PipelineRunTaskRunStatus`, `SkippedTask`

WhenExpression allows a PipelineTask to declare expressions to be evaluated before the Task is run, to determine whether the Task should be executed or skipped.

| Field | Type | Description |
|-------|------|-------------|
| `input` | `string` | Input is the string for guard checking; it can be a static input or an output from a parent Task. |
| `operator` | `k8s.io/apimachinery/pkg/selection.Operator` | Operator that represents an Input's relationship to the values. |
| `values` | `[]string` | Values is an array of strings, which is compared against the input for guard checking. It must be non-empty. |
| `cel` | `string` | *(Optional)* CEL is a string of Common Expression Language, which can be used to conditionally execute the task based on the result of the expression evaluation. More info about CEL syntax: https://github.com/google/cel-spec/blob/master/doc/langdef.md |
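In a Pipeline, these fields appear under a task's `when` list; each entry uses either `input`/`operator`/`values` or `cel`. A sketch (task and param names are hypothetical, and the CEL form is behind an alpha feature flag):

```yaml
# Guarding a PipelineTask with when expressions (illustrative fragment)
tasks:
  - name: deploy
    taskRef:
      name: deploy-task                    # hypothetical Task
    when:
      - input: $(params.environment)       # static input or parent Task output
        operator: in
        values: ["staging", "production"]
      - cel: "'$(params.environment)' != 'dev'"   # CEL guard (alpha)
```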
### WhenExpressions (`[]WhenExpression` alias)

*Appears on:* `PipelineTask`, `Step`

WhenExpressions are used to specify whether a Task should be executed or skipped. All of them need to evaluate to True for a guarded Task to be executed.

### WorkspaceBinding

*Appears on:* `PipelineRunSpec`, `TaskRunSpec`

WorkspaceBinding maps a Task's declared workspace to a Volume.

| Field | Type | Description |
|-------|------|-------------|
| `name` | `string` | Name is the name of the workspace populated by the volume. |
| `subPath` | `string` | *(Optional)* SubPath is optionally a directory on the volume which should be used for this binding (i.e. the volume will be mounted at this sub directory). |
| `volumeClaimTemplate` | [Kubernetes core/v1.PersistentVolumeClaim](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaim-v1-core) | *(Optional)* VolumeClaimTemplate is a template for a claim that will be created in the same namespace. The PipelineRun controller is responsible for creating a unique claim for each instance of PipelineRun. |
| `persistentVolumeClaim` | [Kubernetes core/v1.PersistentVolumeClaimVolumeSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaimvolumesource-v1-core) | *(Optional)* PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. Either this OR EmptyDir can be used. |
| `emptyDir` | [Kubernetes core/v1.EmptyDirVolumeSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#emptydirvolumesource-v1-core) | *(Optional)* EmptyDir represents a temporary directory that shares a Task's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir. Either this OR PersistentVolumeClaim can be used. |
| `configMap` | [Kubernetes core/v1.ConfigMapVolumeSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#configmapvolumesource-v1-core) | *(Optional)* ConfigMap represents a configMap that should populate this workspace. |
| `secret` | [Kubernetes core/v1.SecretVolumeSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretvolumesource-v1-core) | *(Optional)* Secret represents a secret that should populate this workspace. |
| `projected` | [Kubernetes core/v1.ProjectedVolumeSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#projectedvolumesource-v1-core) | *(Optional)* Projected represents a projected volume that should populate this workspace. |
| `csi` | [Kubernetes core/v1.CSIVolumeSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#csivolumesource-v1-core) | *(Optional)* CSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers. |
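A TaskRun or PipelineRun binds a Task's declared workspaces with exactly one volume source per binding. A sketch of the `spec.workspaces` fragment (workspace and Secret names are hypothetical):

```yaml
# Binding declared workspaces in a TaskRun/PipelineRun (illustrative fragment)
workspaces:
  - name: source                 # backed by a per-run PVC from a template
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  - name: scratch                # ephemeral; shares the Task's lifetime
    emptyDir: {}
  - name: credentials
    secret:
      secretName: registry-creds # hypothetical Secret
```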
### WorkspaceDeclaration

*Appears on:* `TaskSpec`

WorkspaceDeclaration is a declaration of a volume that a Task requires.

| Field | Type | Description |
|-------|------|-------------|
| `name` | `string` | Name is the name by which you can bind the volume at runtime. |
| `description` | `string` | *(Optional)* Description is an optional human readable description of this volume. |
| `mountPath` | `string` | *(Optional)* MountPath overrides the directory that the volume will be made available at. |
| `readOnly` | `bool` | ReadOnly dictates whether a mounted volume is writable. By default this field is false, so mounted volumes are writable. |
| `optional` | `bool` | Optional marks a Workspace as not being required in TaskRuns. By default this field is false, so declared workspaces are required. |

### WorkspacePipelineTaskBinding

*Appears on:* `PipelineTask`

WorkspacePipelineTaskBinding describes how a workspace passed into the pipeline should be mapped to a task's declared workspace.

| Field | Type | Description |
|-------|------|-------------|
| `name` | `string` | Name is the name of the workspace as declared by the task. |
| `workspace` | `string` | *(Optional)* Workspace is the name of the workspace declared by the pipeline. |
| `subPath` | `string` | *(Optional)* SubPath is optionally a directory on the volume which should be used for this binding (i.e. the volume will be mounted at this sub directory). |

### WorkspaceUsage

*Appears on:* `Sidecar`, `Step`

WorkspaceUsage is used by a Step or Sidecar to declare that it wants isolated access to a Workspace defined in a Task.

| Field | Type | Description |
|-------|------|-------------|
| `name` | `string` | Name is the name of the workspace this Step or Sidecar wants access to. |
| `mountPath` | `string` | MountPath is the path that the workspace should be mounted to inside the Step or Sidecar, overriding any MountPath specified in the Task's WorkspaceDeclaration. |

***

## tekton.dev/v1alpha1

Package v1alpha1 contains API Schema definitions for the pipeline v1alpha1 API group.

Resource Types:

- `Run`
- `StepAction`
- `VerificationPolicy`
- `PipelineResource`
### Run

Run represents a single execution of a Custom Task.

| Field | Type | Description |
|-------|------|-------------|
| `apiVersion` | `string` | `tekton.dev/v1alpha1` |
| `kind` | `string` | `Run` |
| `metadata` | [Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta) | *(Optional)* Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` | `RunSpec` | *(Optional)* |
| `spec.ref` | `v1beta1.TaskRef` | *(Optional)* |
| `spec.spec` | `EmbeddedRunSpec` | *(Optional)* Spec is a specification of a custom task. |
| `spec.params` | `v1beta1.Params` | *(Optional)* |
| `spec.status` | `RunSpecStatus` | *(Optional)* Used for cancelling a run (and maybe more later on). |
| `spec.statusMessage` | `RunSpecStatusMessage` | *(Optional)* Status message for cancellation. |
| `spec.retries` | `int` | *(Optional)* Used for propagating the retries count to custom tasks. |
| `spec.serviceAccountName` | `string` | *(Optional)* |
| `spec.podTemplate` | `Template` | *(Optional)* PodTemplate holds pod specific configuration. |
| `spec.timeout` | [Kubernetes meta/v1.Duration](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | *(Optional)* Time after which the custom task times out. Refer to Go's [ParseDuration](https://golang.org/pkg/time/#ParseDuration) documentation for the expected format. |
| `spec.workspaces` | `[]v1beta1.WorkspaceBinding` | *(Optional)* Workspaces is a list of WorkspaceBindings from volumes to workspaces. |
| `status` | `RunStatus` | *(Optional)* |
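A Run points at a custom task kind served by a third-party controller. A sketch (the `example.dev` API group, kind, and names are hypothetical):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Run
metadata:
  name: example-run                  # hypothetical name
spec:
  retries: 1                         # propagated to the custom task controller
  timeout: 10m
  ref:
    apiVersion: example.dev/v1alpha1 # hypothetical custom task API group
    kind: Example
    name: my-example
  params:
    - name: message
      value: "hello"
```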
### StepAction

StepAction represents the actionable components of Step. The Step can only reference it from the cluster or using remote resolution.

| Field | Type | Description |
|-------|------|-------------|
| `apiVersion` | `string` | `tekton.dev/v1alpha1` |
| `kind` | `string` | `StepAction` |
| `metadata` | [Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta) | *(Optional)* Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` | `StepActionSpec` | *(Optional)* Spec holds the desired state of the Step from the client. |
| `spec.description` | `string` | *(Optional)* Description is a user-facing description of the stepaction that may be used to populate a UI. |
| `spec.image` | `string` | *(Optional)* Image reference name to run for this StepAction. More info: https://kubernetes.io/docs/concepts/containers/images |
| `spec.command` | `[]string` | *(Optional)* Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: [running a command in a shell](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell) |
| `spec.args` | `[]string` | *(Optional)* Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment, with the same expansion and escaping rules as `command`. Cannot be updated. More info: [running a command in a shell](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell) |
| `spec.env` | [[]Kubernetes core/v1.EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core) | *(Optional)* List of environment variables to set in the container. Cannot be updated. |
| `spec.script` | `string` | *(Optional)* Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a Command, and the Args will be passed to the Script. |
| `spec.workingDir` | `string` | *(Optional)* Step's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `spec.params` | `v1.ParamSpecs` | *(Optional)* Params is a list of input parameters required to run the stepAction. Params must be supplied as inputs in Steps unless they declare a default value. |
| `spec.results` | `[]v1.StepResult` | *(Optional)* Results are values that this StepAction can output. |
| `spec.securityContext` | [Kubernetes core/v1.SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core) | *(Optional)* SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/. The value set in StepAction will take precedence over the value from Task. |
| `spec.volumeMounts` | [[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core) | *(Optional)* Volumes to mount into the Step's filesystem. Cannot be updated. |
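A StepAction manifest tying these fields together might look like this sketch (name, image, and script are hypothetical):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
  name: print-message          # hypothetical name
spec:
  description: Prints a configurable message
  image: bash:5.2              # hypothetical image
  params:
    - name: message
      type: string
      default: "hello"         # default makes the param optional in Steps
  env:
    - name: MESSAGE
      value: $(params.message)
  script: |
    #!/usr/bin/env bash
    echo "$MESSAGE"
```

A Step can then reference it from the cluster, e.g. with `ref: {name: print-message}` inside a Task's step, optionally passing `params` to override the default.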
### VerificationPolicy

VerificationPolicy defines the rules to verify Tekton resources. VerificationPolicy can configure the mapping from resources to a list of public keys, so when verifying the resources we can use the corresponding public keys.

| Field | Type | Description |
|-------|------|-------------|
| `apiVersion` | `string` | `tekton.dev/v1alpha1` |
| `kind` | `string` | `VerificationPolicy` |
| `metadata` | [Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta) | *(Optional)* Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` | `VerificationPolicySpec` | Spec holds the desired state of the VerificationPolicy. |
| `spec.resources` | `[]ResourcePattern` | Resources defines the patterns of resource sources that should be subject to this policy. For example, we may want to apply this Policy only to resources from a certain GitHub repo; the ResourcesPattern should then be a valid regex. E.g. if using the git resolver and we want to configure keys for a certain git repo, `ResourcesPattern` can be `https://github.com/tektoncd/catalog.git`; the regex is used to filter out those resources. |
| `spec.authorities` | `[]Authority` | Authorities defines the rules for validating signatures. |
| `spec.mode` | `ModeType` | *(Optional)* Mode controls whether a failing policy will fail the taskrun/pipelinerun, or only log warnings. `enforce`: fail the taskrun/pipelinerun if verification fails (default). `warn`: don't fail the taskrun/pipelinerun if verification fails, but log warnings. |

### PipelineResource

PipelineResource describes a resource that is an input to or output from a Task.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Type | Description |
|-------|------|-------------|
| `apiVersion` | `string` | `tekton.dev/v1alpha1` |
| `kind` | `string` | `PipelineResource` |
| `metadata` | [Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta) | *(Optional)* Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec` | `PipelineResourceSpec` | Spec holds the desired state of the PipelineResource from the client. |
| `spec.description` | `string` | *(Optional)* Description is a user-facing description of the resource that may be used to populate a UI. |
| `spec.type` | `string` | |
| `spec.params` | `[]ResourceParam` | |
| `spec.secrets` | `[]SecretParam` | *(Optional)* Secrets to fetch to populate some of the resource fields. |
| `status` | `PipelineResourceStatus` | *(Optional)* Status is used to communicate the observed state of the PipelineResource from the controller, but was unused as there is no controller for PipelineResource. |

### Authority

*Appears on:* `VerificationPolicySpec`

The Authority block defines the keys for validating signatures.

| Field | Type | Description |
|-------|------|-------------|
| `name` | `string` | Name is the name for this authority. |
| `key` | `KeyRef` | Key contains the public key to validate the resource. |
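A sketch tying VerificationPolicy, Authority, and KeyRef together (the policy name and Secret reference are hypothetical):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: VerificationPolicy
metadata:
  name: catalog-policy                  # hypothetical name
spec:
  resources:
    - pattern: "https://github.com/tektoncd/catalog.git"  # regex on resource sources
  mode: enforce                         # fail runs when verification fails (default)
  authorities:
    - name: catalog-key
      key:
        secretRef:                      # hypothetical Secret holding the public key
          name: verification-secrets
          namespace: tekton-pipelines
        hashAlgorithm: sha256           # the default when unset
```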
  em    td   td   em  Optional   em    td    tr   tr   td   code spec  code  br    em  k8s io apimachinery pkg runtime RawExtension   em    td   td   em  Optional   em   p Spec is a specification of a custom task  p   br    br    table   tr   td   code    code  br    em    byte   em    td   td   p Raw is the underlying serialization of this object   p   p TODO  Determine how to detect ContentType and ContentEncoding of  lsquo Raw rsquo  data   p    td    tr   tr   td   code    code  br    em  k8s io apimachinery pkg runtime Object   em    td   td   p Object can hold a representation of this extension   useful for working with versioned structs   p    td    tr    table    td    tr    tbody    table   h3 id  tekton dev v1alpha1 HashAlgorithm  HashAlgorithm   code string  code  alias   h3   p    em Appears on   em  a href   tekton dev v1alpha1 KeyRef  KeyRef  a     p   div   p HashAlgorithm defines the hash algorithm used for the public key  p    div   h3 id  tekton dev v1alpha1 KeyRef  KeyRef   h3   p    em Appears on   em  a href   tekton dev v1alpha1 Authority  Authority  a     p   div   p KeyRef defines the reference to a public key  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code secretRef  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  secretreference v1 core   Kubernetes core v1 SecretReference   a    em    td   td   em  Optional   em   p SecretRef sets a reference to a secret with the key   p    td    tr   tr   td   code data  code  br    em  string   em    td   td   em  Optional   em   p Data contains the inline public key   p    td    tr   tr   td   code kms  code  br    em  string   em    td   td   em  Optional   em   p KMS contains the KMS url of the public key Supported formats differ based on the KMS system used  One example of a KMS url could be  gcpkms   projects  PROJECT  locations  LOCATION  gt  keyRings  KEYRING  cryptoKeys  KEY  
cryptoKeyVersions  KEY VERSION  For more examples please refer  a href  https   docs sigstore dev cosign kms support  https   docs sigstore dev cosign kms support  a   Note that the KMS is not supported yet   p    td    tr   tr   td   code hashAlgorithm  code  br    em   a href   tekton dev v1alpha1 HashAlgorithm   HashAlgorithm   a    em    td   td   em  Optional   em   p HashAlgorithm always defaults to sha256 if the algorithm hasn rsquo t been explicitly set  p    td    tr    tbody    table   h3 id  tekton dev v1alpha1 ModeType  ModeType   code string  code  alias   h3   p    em Appears on   em  a href   tekton dev v1alpha1 VerificationPolicySpec  VerificationPolicySpec  a     p   div   p ModeType indicates the type of a mode for VerificationPolicy  p    div   h3 id  tekton dev v1alpha1 ResourcePattern  ResourcePattern   h3   p    em Appears on   em  a href   tekton dev v1alpha1 VerificationPolicySpec  VerificationPolicySpec  a     p   div   p ResourcePattern defines the pattern of the resource source  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code pattern  code  br    em  string   em    td   td   p Pattern defines a resource pattern  Regex is created to filter resources based on  code Pattern  code  Example patterns  GitHub resource   a href  https   github com tektoncd catalog git  https   github com tektoncd catalog git  a    a href  https   github com tektoncd    https   github com tektoncd    a  Bundle resource  gcr io tekton releases catalog upstream git clone  gcr io tekton releases catalog upstream   Hub resource   a href  https   artifacthub io    https   artifacthub io    a    p    td    tr    tbody    table   h3 id  tekton dev v1alpha1 RunReason  RunReason   code string  code  alias   h3   div   p RunReason is an enum used to store all Run reason for the Succeeded condition that are controlled by the Run itself   p    div   h3 id  tekton dev v1alpha1 RunSpec  RunSpec   h3   p    em Appears on  
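The Authority and KeyRef blocks above are combined with resource patterns in a VerificationPolicy manifest. A minimal, hypothetical sketch (the name, pattern, and key material are placeholders; the top-level `resources` and `mode` fields are documented under VerificationPolicySpec):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: VerificationPolicy
metadata:
  name: example-policy            # hypothetical name
spec:
  resources:
    # Regex pattern selecting which resource sources this policy covers
    - pattern: "https://github.com/tektoncd/catalog.git"
  authorities:
    - name: example-authority
      key:
        # Inline public key; a secretRef or kms URL could be used instead
        data: |
          -----BEGIN PUBLIC KEY-----
          ...
          -----END PUBLIC KEY-----
        hashAlgorithm: sha256     # the default when unset
  mode: enforce                   # "warn" would only log verification failures
```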
### RunSpec

*Appears on:* `Run`

RunSpec defines the desired state of Run.

| Field | Description |
|-------|-------------|
| `ref`<br>_TaskRef_ | *(Optional)* |
| `spec`<br>_EmbeddedRunSpec_ | *(Optional)* Spec is a specification of a custom task. |
| `params`<br>_Params_ | *(Optional)* |
| `status`<br>_RunSpecStatus_ | *(Optional)* Used for cancelling a run (and maybe more later on). |
| `statusMessage`<br>_RunSpecStatusMessage_ | *(Optional)* Status message for cancellation. |
| `retries`<br>_int_ | *(Optional)* Used for propagating retries count to custom tasks. |
| `serviceAccountName`<br>_string_ | *(Optional)* |
| `podTemplate`<br>_Template_ | *(Optional)* PodTemplate holds pod-specific configuration. |
| `timeout`<br>_Kubernetes meta/v1.Duration_ | *(Optional)* Time after which the custom task times out. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration |
| `workspaces`<br>_[]WorkspaceBinding_ | *(Optional)* Workspaces is a list of WorkspaceBindings from volumes to workspaces. |

### RunSpecStatus (`string` alias)

*Appears on:* `RunSpec`

RunSpecStatus defines the taskrun spec status the user can provide.

### RunSpecStatusMessage (`string` alias)

*Appears on:* `RunSpec`

RunSpecStatusMessage defines human-readable status messages for the TaskRun.

### StepActionObject

StepActionObject is implemented by StepAction.

### StepActionSpec

*Appears on:* `StepAction`

StepActionSpec contains the actionable components of a step.

| Field | Description |
|-------|-------------|
| `description`<br>_string_ | *(Optional)* Description is a user-facing description of the stepaction that may be used to populate a UI. |
| `image`<br>_string_ | *(Optional)* Image reference name to run for this StepAction. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command`<br>_[]string_ | *(Optional)* Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args`<br>_[]string_ | *(Optional)* Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `$$(VAR_NAME)` will produce the string literal `$(VAR_NAME)`. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `env`<br>_[]Kubernetes core/v1.EnvVar_ | *(Optional)* List of environment variables to set in the container. Cannot be updated. |
| `script`<br>_string_ | *(Optional)* Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a Command, and the Args will be passed to the Script. |
| `workingDir`<br>_string_ | *(Optional)* Step's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `params`<br>_ParamSpecs_ | *(Optional)* Params is a list of input parameters required to run the stepAction. Params must be supplied as inputs in Steps unless they declare a default value. |
| `results`<br>_[]StepResult_ | *(Optional)* Results are values that this StepAction can output. |
| `securityContext`<br>_Kubernetes core/v1.SecurityContext_ | *(Optional)* SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ The value set in StepAction will take precedence over the value from Task. |
| `volumeMounts`<br>_[]Kubernetes core/v1.VolumeMount_ | *(Optional)* Volumes to mount into the Step's filesystem. Cannot be updated. |
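As a concrete illustration of StepActionSpec, a minimal hypothetical StepAction manifest (the name, image, and parameter are placeholders) could look like:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
  name: example-step-action          # hypothetical name
spec:
  description: Prints a greeting.
  image: docker.io/library/alpine:3.19   # placeholder image
  params:
    - name: who
      type: string
      default: world    # has a default, so Steps need not supply it
  env:
    - name: WHO
      value: $(params.who)
  # A script is used here; script and command are mutually exclusive
  script: |
    #!/bin/sh
    echo "hello ${WHO}"
```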
### VerificationPolicySpec

*Appears on:* `VerificationPolicy`

VerificationPolicySpec defines the patterns and authorities.

| Field | Description |
|-------|-------------|
| `resources`<br>_[]ResourcePattern_ | Resources defines the patterns of resource sources that should be subject to this policy. For example, we may want to apply this policy to resources from a certain GitHub repo; then the ResourcesPattern should be valid regex. E.g. if using the gitresolver and we want to configure keys for a certain git repo, `ResourcesPattern` can be `https://github.com/tektoncd/catalog.git`; we will use regex to filter out those resources. |
| `authorities`<br>_[]Authority_ | Authorities defines the rules for validating signatures. |
| `mode`<br>_ModeType_ | *(Optional)* Mode controls whether a failing policy will fail the taskrun/pipelinerun, or only log warnings. enforce: fail the taskrun/pipelinerun if verification fails (default). warn: don't fail the taskrun/pipelinerun if verification fails, but log warnings. |

### PipelineResourceSpec

*Appears on:* `PipelineResource`, `PipelineResourceBinding`

PipelineResourceSpec defines an individual resource used in the pipeline.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|-------|-------------|
| `description`<br>_string_ | *(Optional)* Description is a user-facing description of the resource that may be used to populate a UI. |
| `type`<br>_string_ | |
| `params`<br>_[]ResourceParam_ | |
| `secrets`<br>_[]SecretParam_ | *(Optional)* Secrets to fetch to populate some of resource fields. |

### PipelineResourceStatus

*Appears on:* `PipelineResource`

PipelineResourceStatus does not contain anything because PipelineResources on their own do not have a status.

Deprecated: Unused, preserved only for backwards compatibility.

### ResourceDeclaration

*Appears on:* `TaskResource`

ResourceDeclaration defines an input or output PipelineResource declared as a requirement by another type such as a Task or Condition. The Name field will be used to refer to these PipelineResources within the type's definition, and when provided as an Input, the Name will be the path to the volume mounted containing this PipelineResource as an input (e.g. an input Resource named `workspace` will be mounted at `/workspace`).

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|-------|-------------|
| `name`<br>_string_ | Name declares the name by which a resource is referenced in the definition. Resources may be referenced by name in the definition of a Task's steps. |
| `type`<br>_string_ | Type is the type of this resource. |
| `description`<br>_string_ | *(Optional)* Description is a user-facing description of the declared resource that may be used to populate a UI. |
| `targetPath`<br>_string_ | *(Optional)* TargetPath is the path in the workspace directory where the resource will be copied. |
| `optional`<br>_bool_ | Optional declares the resource as optional. By default, optional is set to false, which makes a resource required. optional: true - the resource is considered optional. optional: false - the resource is considered required (equivalent of not specifying it). |

### ResourceParam

*Appears on:* `PipelineResourceSpec`

ResourceParam declares a string value to use for the parameter called Name, and is used in the specific context of PipelineResources.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|-------|-------------|
| `name`<br>_string_ | |
| `value`<br>_string_ | |
### SecretParam

*Appears on:* `PipelineResourceSpec`

SecretParam indicates which secret can be used to populate a field of the resource.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|-------|-------------|
| `fieldName`<br>_string_ | |
| `secretKey`<br>_string_ | |
| `secretName`<br>_string_ | |

### RunResult

*Appears on:* `RunStatusFields`

RunResult is used to describe the results of a task.

| Field | Description |
|-------|-------------|
| `name`<br>_string_ | Name the given name. |
| `value`<br>_string_ | Value the given value of the result. |

### RunStatus

*Appears on:* `Run`, `RunStatusFields`

RunStatus defines the observed state of Run.

| Field | Description |
|-------|-------------|
| `Status`<br>_knative.dev/pkg/apis/duck/v1.Status_ | (Members of `Status` are embedded into this type.) |
| `RunStatusFields`<br>_RunStatusFields_ | (Members of `RunStatusFields` are embedded into this type.) RunStatusFields inlines the status fields. |

### RunStatusFields

*Appears on:* `RunStatus`

RunStatusFields holds the fields of Run's status. This is defined separately and inlined so that other types can readily consume these fields via duck typing.

| Field | Description |
|-------|-------------|
| `startTime`<br>_Kubernetes meta/v1.Time_ | *(Optional)* StartTime is the time the build is actually started. |
| `completionTime`<br>_Kubernetes meta/v1.Time_ | *(Optional)* CompletionTime is the time the build completed. |
| `results`<br>_[]RunResult_ | *(Optional)* Results reports any output result values to be consumed by later tasks in a pipeline. |
| `retriesStatus`<br>_[]RunStatus_ | *(Optional)* RetriesStatus contains the history of RunStatus, in case of a retry. |
| `extraFields`<br>_k8s.io/apimachinery/pkg/runtime.RawExtension_ | ExtraFields holds arbitrary fields provided by the custom task controller. |

***

## tekton.dev/v1beta1

Package v1beta1 contains API Schema definitions for the pipeline v1beta1 API group.

Resource Types:

- `ClusterTask`
- `CustomRun`
- `Pipeline`
- `PipelineRun`
- `StepAction`
- `Task`
- `TaskRun`
### ClusterTask

ClusterTask is a Task with a cluster scope. ClusterTasks are used to represent Tasks that should be publicly addressable from any namespace in the cluster.

Deprecated: Please use the cluster resolver instead.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>string | `tekton.dev/v1beta1` |
| `kind`<br>string | `ClusterTask` |
| `metadata`<br>_Kubernetes meta/v1.ObjectMeta_ | *(Optional)* Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec`<br>_TaskSpec_ | *(Optional)* Spec holds the desired state of the Task from the client. Its fields are shown inline below. |
| `spec.resources`<br>_TaskResources_ | *(Optional)* Resources is a list of input and output resources to run the task. Resources are represented in TaskRuns as bindings to instances of PipelineResources. Deprecated: Unused, preserved only for backwards compatibility. |
| `spec.params`<br>_ParamSpecs_ | *(Optional)* Params is a list of input parameters required to run the task. Params must be supplied as inputs in TaskRuns unless they declare a default value. |
| `spec.displayName`<br>_string_ | *(Optional)* DisplayName is a user-facing name of the task that may be used to populate a UI. |
| `spec.description`<br>_string_ | *(Optional)* Description is a user-facing description of the task that may be used to populate a UI. |
| `spec.steps`<br>_[]Step_ | Steps are the steps of the build; each step is run sequentially with the source mounted into /workspace. |
| `spec.volumes`<br>_[]Kubernetes core/v1.Volume_ | Volumes is a collection of volumes that are available to mount into the steps of the build. |
| `spec.stepTemplate`<br>_StepTemplate_ | StepTemplate can be used as the basis for all step containers within the Task, so that the steps inherit settings on the base container. |
| `spec.sidecars`<br>_[]Sidecar_ | Sidecars are run alongside the Task's step containers. They begin before the steps start and end after the steps complete. |
| `spec.workspaces`<br>_[]WorkspaceDeclaration_ | Workspaces are the volumes that this Task requires. |
| `spec.results`<br>_[]TaskResult_ | Results are values that this Task can output. |

### CustomRun

CustomRun represents a single execution of a Custom Task.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>string | `tekton.dev/v1beta1` |
| `kind`<br>string | `CustomRun` |
| `metadata`<br>_Kubernetes meta/v1.ObjectMeta_ | *(Optional)* Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec`<br>_CustomRunSpec_ | *(Optional)* Its fields are shown inline below. |
| `spec.customRef`<br>_TaskRef_ | *(Optional)* |
| `spec.customSpec`<br>_EmbeddedCustomRunSpec_ | *(Optional)* Spec is a specification of a custom task. |
| `spec.params`<br>_Params_ | *(Optional)* |
| `spec.status`<br>_CustomRunSpecStatus_ | *(Optional)* Used for cancelling a customrun (and maybe more later on). |
| `spec.statusMessage`<br>_CustomRunSpecStatusMessage_ | *(Optional)* Status message for cancellation. |
| `spec.retries`<br>_int_ | *(Optional)* Used for propagating retries count to custom tasks. |
| `spec.serviceAccountName`<br>_string_ | *(Optional)* |
| `spec.timeout`<br>_Kubernetes meta/v1.Duration_ | *(Optional)* Time after which the custom task times out. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration |
| `spec.workspaces`<br>_[]WorkspaceBinding_ | *(Optional)* Workspaces is a list of WorkspaceBindings from volumes to workspaces. |
| `status`<br>_CustomRunStatus_ | *(Optional)* |

### Pipeline

Pipeline describes a list of Tasks to execute. It expresses how outputs of tasks feed into inputs of subsequent tasks.

Deprecated: Please use v1 Pipeline instead.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>string | `tekton.dev/v1beta1` |
| `kind`<br>string | `Pipeline` |
| `metadata`<br>_Kubernetes meta/v1.ObjectMeta_ | *(Optional)* Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec`<br>_PipelineSpec_ | *(Optional)* Spec holds the desired state of the Pipeline from the client. Its fields are shown inline below. |
| `spec.displayName`<br>_string_ | *(Optional)* DisplayName is a user-facing name of the pipeline that may be used to populate a UI. |
| `spec.description`<br>_string_ | *(Optional)* Description is a user-facing description of the pipeline that may be used to populate a UI. |
| `spec.resources`<br>_[]PipelineDeclaredResource_ | Deprecated: Unused, preserved only for backwards compatibility. |
| `spec.tasks`<br>_[]PipelineTask_ | Tasks declares the graph of Tasks that execute when this Pipeline is run. |
| `spec.params`<br>_ParamSpecs_ | Params declares a list of input parameters that must be supplied when this Pipeline is run. |
| `spec.workspaces`<br>_[]PipelineWorkspaceDeclaration_ | *(Optional)* Workspaces declares a set of named workspaces that are expected to be provided by a PipelineRun. |
| `spec.results`<br>_[]PipelineResult_ | *(Optional)* Results are values that this pipeline can output once run. |
| `spec.finally`<br>_[]PipelineTask_ | Finally declares the list of Tasks that execute just before leaving the Pipeline, i.e. either after all Tasks are finished executing successfully or after a failure which would result in ending the Pipeline. |

### PipelineRun

PipelineRun represents a single execution of a Pipeline. PipelineRuns are how the graph of Tasks declared in a Pipeline are executed; they specify inputs to Pipelines such as parameter values, and capture operational aspects of the Tasks' execution such as service account and tolerations. Creating a PipelineRun creates TaskRuns for Tasks in the referenced Pipeline.

Deprecated: Please use v1 PipelineRun instead.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>string | `tekton.dev/v1beta1` |
| `kind`<br>string | `PipelineRun` |
| `metadata`<br>_Kubernetes meta/v1.ObjectMeta_ | *(Optional)* Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec`<br>_PipelineRunSpec_ | *(Optional)* Its fields are shown inline below. |
| `spec.pipelineRef`<br>_PipelineRef_ | *(Optional)* |
| `spec.pipelineSpec`<br>_PipelineSpec_ | *(Optional)* Specifying PipelineSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `spec.resources`<br>_[]PipelineResourceBinding_ | Resources is a list of bindings specifying which actual instances of PipelineResources to use for the resources the Pipeline has declared it needs. Deprecated: Unused, preserved only for backwards compatibility. |
| `spec.params`<br>_Params_ | Params is a list of parameter names and values. |
| `spec.serviceAccountName`<br>_string_ | *(Optional)* |
| `spec.status`<br>_PipelineRunSpecStatus_ | *(Optional)* Used for cancelling a pipelinerun (and maybe more later on). |
| `spec.timeouts`<br>_TimeoutFields_ | *(Optional)* Time after which the Pipeline times out. Currently three keys are accepted in the map: pipeline, tasks and finally, with Timeouts.pipeline >= Timeouts.tasks + Timeouts.finally. |
| `spec.timeout`<br>_Kubernetes meta/v1.Duration_ | *(Optional)* Timeout is the time after which the Pipeline times out. Defaults to never. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration Deprecated: use pipelineRunSpec.Timeouts.Pipeline instead. |
| `spec.podTemplate`<br>_Template_ | PodTemplate holds pod-specific configuration. |
| `spec.workspaces`<br>_[]WorkspaceBinding_ | *(Optional)* Workspaces holds a set of workspace bindings that must match names with those declared in the pipeline. |
| `spec.taskRunSpecs`<br>_[]PipelineTaskRunSpec_ | *(Optional)* TaskRunSpecs holds a set of runtime specs. |
| `status`<br>_PipelineRunStatus_ | *(Optional)* |
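A minimal, hypothetical PipelineRun using the fields above (the names and the timeout values are placeholders; note the documented constraint that the pipeline timeout must cover tasks plus finally):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-pipeline-run        # hypothetical name
spec:
  pipelineRef:
    name: example-pipeline          # an existing Pipeline in the namespace
  params:
    - name: revision
      value: main
  serviceAccountName: build-bot     # placeholder service account
  timeouts:
    pipeline: 1h    # must be >= tasks + finally
    tasks: 50m
    finally: 10m
  workspaces:
    # Binding names must match workspaces declared by the Pipeline
    - name: source
      emptyDir: {}
```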
the actionable components of Step. The Step can only reference it from the cluster or using remote resolution.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>string | `tekton.dev/v1beta1` |
| `kind`<br>string | `StepAction` |
| `metadata`<br>[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta) | *(Optional)* Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec`<br>*StepActionSpec* | *(Optional)* Spec holds the desired state of the Step from the client. See the fields listed below. |

Fields of `spec`:

| Field | Description |
|-------|-------------|
| `description`<br>string | *(Optional)* Description is a user facing description of the stepaction that may be used to populate a UI. |
| `image`<br>string | *(Optional)* Image reference name to run for this StepAction. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command`<br>[]string | *(Optional)* Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `"$$(VAR_NAME)"` will produce the string literal `"$(VAR_NAME)"`. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args`<br>[]string | *(Optional)* Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `"$$(VAR_NAME)"` will produce the string literal `"$(VAR_NAME)"`. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `env`<br>[[]Kubernetes core/v1.EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core) | *(Optional)* List of environment variables to set in the container. Cannot be updated. |
| `script`<br>string | *(Optional)* Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script. |
| `workingDir`<br>string | *(Optional)* Step's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `params`<br>*v1.ParamSpecs* | *(Optional)* Params is a list of input parameters required to run the stepAction. Params must be supplied as inputs in Steps unless they declare a default value. |
| `results`<br>*[]v1.StepResult* | *(Optional)* Results are values that this StepAction can output. |
| `securityContext`<br>[Kubernetes core/v1.SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core) | *(Optional)* SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/. The value set in StepAction will take precedence over the value from Task. |
| `volumeMounts`<br>[[]Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core) | *(Optional)* Volumes to mount into the Step's filesystem. Cannot be updated. |

### Task

Task represents a collection of sequential steps that are run as part of a Pipeline using a set of inputs and producing a set of outputs. Tasks execute when TaskRuns are created that provide the input parameters and resources and output resources the Task requires.

Deprecated: Please use v1 Task
instead.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>string | `tekton.dev/v1beta1` |
| `kind`<br>string | `Task` |
| `metadata`<br>[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta) | *(Optional)* Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec`<br>*TaskSpec* | *(Optional)* Spec holds the desired state of the Task from the client. See the fields listed below. |

Fields of `spec`:

| Field | Description |
|-------|-------------|
| `resources`<br>*TaskResources* | *(Optional)* Resources is a list of input and output resources to run the task. Resources are represented in TaskRuns as bindings to instances of PipelineResources. Deprecated: Unused, preserved only for backwards compatibility. |
| `params`<br>*ParamSpecs* | *(Optional)* Params is a list of input parameters required to run the task. Params must be supplied as inputs in TaskRuns unless they declare a default value. |
| `displayName`<br>string | *(Optional)* DisplayName is a user facing name of the task that may be used to populate a UI. |
| `description`<br>string | *(Optional)* Description is a user facing description of the task that may be used to populate a UI. |
| `steps`<br>*[]Step* | Steps are the steps of the build; each step is run sequentially with the source mounted into `/workspace`. |
| `volumes`<br>[[]Kubernetes core/v1.Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core) | Volumes is a collection of volumes that are available to mount into the steps of the build. |
| `stepTemplate`<br>*StepTemplate* | StepTemplate can be used as the basis for all step containers within the Task, so that the steps inherit settings on the base container. |
| `sidecars`<br>*[]Sidecar* | Sidecars are run alongside the Task's step containers. They begin before the steps start and end after the steps complete. |
| `workspaces`<br>*[]WorkspaceDeclaration* | Workspaces are the volumes that this Task requires. |
| `results`<br>*[]TaskResult* | Results are values that this Task can output. |

### TaskRun

TaskRun represents a single execution of a Task. TaskRuns are how the steps specified in a Task are executed; they specify the parameters and resources used to run the steps in a Task.

Deprecated: Please use v1 TaskRun instead.

| Field | Description |
|-------|-------------|
| `apiVersion`<br>string | `tekton.dev/v1beta1` |
| `kind`<br>string | `TaskRun` |
| `metadata`<br>[Kubernetes meta/v1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta) | *(Optional)* Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec`<br>*TaskRunSpec* | *(Optional)* See the fields listed below. |
| `status`<br>*TaskRunStatus* | *(Optional)* |

Fields of `spec`:

| Field | Description |
|-------|-------------|
| `debug`<br>*TaskRunDebug* | *(Optional)* |
| `params`<br>*Params* | *(Optional)* |
| `resources`<br>*TaskRunResources* | *(Optional)* Deprecated: Unused, preserved only for backwards compatibility. |
| `serviceAccountName`<br>string | *(Optional)* |
| `taskRef`<br>*TaskRef* | *(Optional)* No more than one of the TaskRef and TaskSpec may be specified. |
| `taskSpec`<br>*TaskSpec* | *(Optional)* Specifying PipelineSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `status`<br>*TaskRunSpecStatus* | *(Optional)* Used for cancelling a TaskRun (and maybe more later on). |
| `statusMessage`<br>*TaskRunSpecStatusMessage* | *(Optional)* Status message for cancellation. |
| `retries`<br>int | *(Optional)* Retries represents how many times this TaskRun should be retried in the event of Task failure. |
| `timeout`<br>[Kubernetes meta/v1.Duration](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | *(Optional)* Time after which one retry attempt times out. Defaults to 1 hour. Refer to Go's ParseDuration documentation for expected format: https://golang.org/pkg/time/#ParseDuration |
| `podTemplate`<br>*Template* | PodTemplate holds pod specific configuration. |
| `workspaces`<br>*[]WorkspaceBinding* | *(Optional)* Workspaces is a list of WorkspaceBindings from volumes to workspaces. |
| `stepOverrides`<br>*[]TaskRunStepOverride* | *(Optional)* Overrides to apply to Steps in this TaskRun. If a field is specified in both a Step and a StepOverride, the value from the StepOverride will be used. This field is only supported when the alpha feature gate is enabled. |
| `sidecarOverrides`<br>*[]TaskRunSidecarOverride* | *(Optional)* Overrides to apply to Sidecars in this TaskRun. If a field is specified in both a Sidecar and a SidecarOverride, the value from the SidecarOverride will be used. This field is only supported when the alpha feature gate is enabled. |
| `computeResources`<br>[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core) | Compute resources to use for this TaskRun. |

### Algorithm (`string` alias)

Algorithm is a standard cryptographic hash algorithm.

### Artifact

*Appears on:* `Artifacts`, `StepState`

TaskRunStepArtifact represents an artifact produced or used by a step within a task run. It directly uses the Artifact type for its structure.

| Field | Description |
|-------|-------------|
| `name`<br>string | The artifact's identifying category name. |
| `values`<br>*[]ArtifactValue* | A collection of values related to the artifact. |
| `buildOutput`<br>bool | Indicates if the artifact is a build output or a by-product. |

### ArtifactValue

*Appears on:* `Artifact`

ArtifactValue represents a specific value or data element within an Artifact.

| Field | Description |
|-------|-------------|
| `digest`<br>*map[Algorithm]string* | Algorithm-specific digests for verifying the content, e.g. SHA256. |
| `uri`<br>string | |
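To show how the v1beta1 `TaskRun` fields documented above fit together, here is a hedged minimal manifest; the resource names, the param, and the timeout value are illustrative assumptions, not values taken from this reference:

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: example-taskrun        # illustrative name
spec:
  serviceAccountName: default
  timeout: 30m                 # Go ParseDuration format; defaults to 1 hour per attempt
  params:
    - name: message            # must be declared by the referenced Task
      value: "hello"
  taskRef:                     # no more than one of taskRef and taskSpec may be set
    name: example-task         # illustrative; an existing v1beta1 Task in the cluster
```

Because `taskRef` and `taskSpec` are mutually exclusive, an inline `taskSpec` variant would replace the `taskRef` block entirely, and inline specs can be disabled cluster-wide via the `disable-inline-spec` feature flag.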
### Artifacts

Artifacts represents the collection of input and output artifacts associated with a task run or a similar process. Artifacts in this context are units of data or resources that the process either consumes as input or produces as output.

| Field | Description |
|-------|-------------|
| `inputs`<br>*[]Artifact* | |
| `outputs`<br>*[]Artifact* | |

### ChildStatusReference

*Appears on:* `PipelineRunStatusFields`

ChildStatusReference is used to point to the statuses of individual TaskRuns and Runs within this PipelineRun.

| Field | Description |
|-------|-------------|
| `name`<br>string | Name is the name of the TaskRun or Run this is referencing. |
| `displayName`<br>string | DisplayName is a user facing name of the pipelineTask that may be used to populate a UI. |
| `pipelineTaskName`<br>string | PipelineTaskName is the name of the PipelineTask this is referencing. |
| `whenExpressions`<br>*[]WhenExpression* | *(Optional)* WhenExpressions is the list of checks guarding the execution of the PipelineTask. |

### CloudEventCondition (`string` alias)

*Appears on:* `CloudEventDeliveryState`

CloudEventCondition is a string that represents the condition of the event.

### CloudEventDelivery

*Appears on:* `TaskRunStatusFields`

CloudEventDelivery is the target of a cloud event along with the state of delivery.

| Field | Description |
|-------|-------------|
| `target`<br>string | Target points to an addressable. |
| `status`<br>*CloudEventDeliveryState* | |

### CloudEventDeliveryState

*Appears on:* `CloudEventDelivery`

CloudEventDeliveryState reports the state of a cloud event to be sent.

| Field | Description |
|-------|-------------|
| `condition`<br>*CloudEventCondition* | Current status. |
| `sentAt`<br>[Kubernetes meta/v1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta) | *(Optional)* SentAt is the time at which the last attempt to send the event was made. |
| `message`<br>string | Error is the text of error (if any). |
| `retryCount`<br>int32 | RetryCount is the number of attempts of sending the cloud event. |

### Combination (`map[string]string` alias)
Combination is a map, mainly defined to hold a single combination from a Matrix, with key as param Name and value as param Value.

### Combinations (`[]Combination` alias)

Combinations is a Combination list.

### ConfigSource

*Appears on:* `Provenance`

ConfigSource contains the information that can uniquely identify where a remote built definition came from, i.e. Git repositories, Tekton Bundles in an OCI registry, and hub.

| Field | Description |
|-------|-------------|
| `uri`<br>string | URI indicates the identity of the source of the build definition. Example: "https://github.com/tektoncd/catalog" |
| `digest`<br>map[string]string | Digest is a collection of cryptographic digests for the contents of the artifact specified by URI. Example: `{"sha1": "f99d13e554ffcb696dee719fa85b695cb5b0f428"}` |
| `entryPoint`<br>string | EntryPoint identifies the entry point into the build. This is often a path to a build definition file and/or a target label within that file. Example: "task/git-clone/0.8/git-clone.yaml" |

### CustomRunReason (`string` alias)

CustomRunReason is an enum used to store all Run reasons for the Succeeded condition that are controlled by the CustomRun itself.

### CustomRunSpec

*Appears on:* `CustomRun`

CustomRunSpec defines the desired state of CustomRun.

| Field | Description |
|-------|-------------|
| `customRef`<br>*TaskRef* | *(Optional)* |
| `customSpec`<br>*EmbeddedCustomRunSpec* | *(Optional)* Spec is a specification of a custom task. |
| `params`<br>*Params* | *(Optional)* |
| `status`<br>*CustomRunSpecStatus* | *(Optional)* Used for cancelling a customrun (and maybe more later on). |
| `statusMessage`<br>*CustomRunSpecStatusMessage* | *(Optional)* Status message for cancellation. |
| `retries`<br>int | *(Optional)* Used for propagating retries count to custom tasks. |
| `serviceAccountName`<br>string | *(Optional)* |
| `timeout`<br>[Kubernetes meta/v1.Duration](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration) | *(Optional)* Time after which the custom task times out. Refer to Go's ParseDuration documentation for expected format: https://golang.org/pkg/time/#ParseDuration |
| `workspaces`<br>*[]WorkspaceBinding* | *(Optional)* Workspaces is a list of WorkspaceBindings from volumes to workspaces. |

### CustomRunSpecStatus (`string` alias)

*Appears on:* `CustomRunSpec`

CustomRunSpecStatus defines the taskrun spec status the user can provide.

### CustomRunSpecStatusMessage (`string` alias)

*Appears on:* `CustomRunSpec`

CustomRunSpecStatusMessage defines human readable status messages for the TaskRun.

### EmbeddedCustomRunSpec

*Appears on:* `CustomRunSpec`

EmbeddedCustomRunSpec allows custom task definitions to be embedded.

| Field | Description |
|-------|-------------|
| `metadata`<br>*PipelineTaskMetadata* | *(Optional)* |
| `spec`<br>*k8s.io/apimachinery/pkg/runtime.RawExtension* | *(Optional)* Spec is a specification of a custom task. Its raw members are: `-` ([]byte) — Raw is the underlying serialization of this object. TODO: Determine how to detect ContentType and ContentEncoding of 'Raw' data. `-` (k8s.io/apimachinery/pkg/runtime.Object) — Object can hold a representation of this extension, useful for working with versioned structs. |

### EmbeddedTask

*Appears on:* `PipelineTask`

EmbeddedTask is
used to define a Task inline within a Pipeline's PipelineTasks.

| Field | Description |
|-------|-------------|
| `spec`<br>*k8s.io/apimachinery/pkg/runtime.RawExtension* | *(Optional)* Spec is a specification of a custom task. Its raw members are: `-` ([]byte) — Raw is the underlying serialization of this object. TODO: Determine how to detect ContentType and ContentEncoding of 'Raw' data. `-` (k8s.io/apimachinery/pkg/runtime.Object) — Object can hold a representation of this extension, useful for working with versioned structs. |
| `metadata`<br>*PipelineTaskMetadata* | *(Optional)* |
| `TaskSpec`<br>*TaskSpec* | (Members of `TaskSpec` are embedded into this type.) *(Optional)* TaskSpec is a specification of a task. |

### IncludeParams

IncludeParams allows passing in specific combinations of Parameters into the Matrix.

| Field | Description |
|-------|-------------|
| `name`<br>string | Name the specified combination. |
| `params`<br>*Params* | Params takes only `Parameters` of type `"string"`. The names of the `params` must match the names of the `params` in the underlying `Task`. |

### InternalTaskModifier

InternalTaskModifier implements TaskModifier for resources that are built in to Tekton Pipelines.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|-------|-------------|
| `stepsToPrepend`<br>*[]Step* | |
| `stepsToAppend`<br>*[]Step* | |
| `volumes`<br>[[]Kubernetes core/v1.Volume](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core) | |

### Matrix

*Appears on:* `PipelineTask`

Matrix is used to fan out Tasks in a Pipeline.

| Field | Description |
|-------|-------------|
| `params`<br>*Params* | Params is a list of parameters used to fan out the pipelineTask. Params takes only `Parameters` of type `"array"`. Each array element is supplied to the `PipelineTask` by substituting `params` of type `"string"` in the underlying `Task`. The names of the `params` in the `Matrix` must match the names of the `params` in the underlying `Task` that they will be substituting. |
| `include`<br>*IncludeParamsList* | *(Optional)* Include is a list of IncludeParams which allows passing in specific combinations of Parameters into the Matrix. |
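As a sketch of how `Matrix.params` and `Matrix.include` combine, the following hypothetical Pipeline fragment fans a PipelineTask out over an `"array"` param and adds one explicit combination; all names and values here are illustrative assumptions:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: matrix-example              # illustrative name
spec:
  tasks:
    - name: build
      taskRef:
        name: build-task            # illustrative; must declare a string param "platform"
      matrix:
        params:
          - name: platform          # "array" type: one fanned-out TaskRun per element
            value:
              - linux/amd64
              - linux/arm64
        include:
          - name: s390x-combo       # IncludeParams: an explicit combination ("string" params)
            params:
              - name: platform
                value: linux/s390x
```

Note that the `params` names in the matrix must match `"string"` params declared by the underlying Task, as the reference above requires.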
### OnErrorType (`string` alias)

*Appears on:* `Step`

OnErrorType defines a list of supported exiting behaviors of a container on error.

### Param

*Appears on:* `TaskRunInputs`

Param declares a ParamValue to use for the parameter called name.

| Field | Description |
|-------|-------------|
| `name`<br>string | |
| `value`<br>*ParamValue* | |

### ParamSpec

ParamSpec defines arbitrary parameters needed beyond typed inputs (such as resources). Parameter values are provided by users as inputs on a TaskRun or PipelineRun.

| Field | Description |
|-------|-------------|
| `name`<br>string | Name declares the name by which a parameter is referenced. |
| `type`<br>*ParamType* | *(Optional)* Type is the user specified type of the parameter. The possible types are currently "string", "array" and "object", and "string" is the default. |
| `description`<br>string | *(Optional)* Description is a user facing description of the parameter that may be used to populate a UI. |
| `properties`<br>*map[string]PropertySpec* | *(Optional)* Properties is the JSON Schema properties to support key-value pairs parameters. |
| `default`<br>*ParamValue* | *(Optional)* Default is the value a parameter takes if no input value is supplied. If default is set, a Task may be executed without a supplied value for the parameter. |
| `enum`<br>[]string | *(Optional)* Enum declares a set of allowed param input values for tasks/pipelines that can be validated. If Enum is not set, no input validation is performed for the param. |

### ParamSpecs (`[]ParamSpec` alias)

*Appears on:* `PipelineSpec`, `TaskSpec`

ParamSpecs is a list of ParamSpec.

### ParamType (`string` alias)

*Appears on:* `ParamSpec`, `ParamValue`, `PropertySpec`

ParamType indicates the type of an input parameter. Used to distinguish between a single string and an array of strings.

### ParamValue

*Appears on:* `Param`, `ParamSpec`, `PipelineResult`, `PipelineRunResult`, `TaskResult`, `TaskRunResult`

ResultValue is a type alias of ParamValue.
Description  th    tr    thead   tbody   tr   td   code Type  code  br    em   a href   tekton dev v1beta1 ParamType   ParamType   a    em    td   td    td    tr   tr   td   code StringVal  code  br    em  string   em    td   td   p Represents the stored type of ParamValues   p    td    tr   tr   td   code ArrayVal  code  br    em    string   em    td   td    td    tr   tr   td   code ObjectVal  code  br    em  map string string   em    td   td    td    tr    tbody    table   h3 id  tekton dev v1beta1 Params  Params   code   github com tektoncd pipeline pkg apis pipeline v1beta1 Param  code  alias   h3   p    em Appears on   em  a href   tekton dev v1alpha1 RunSpec  RunSpec  a    a href   tekton dev v1beta1 CustomRunSpec  CustomRunSpec  a    a href   tekton dev v1beta1 IncludeParams  IncludeParams  a    a href   tekton dev v1beta1 Matrix  Matrix  a    a href   tekton dev v1beta1 PipelineRunSpec  PipelineRunSpec  a    a href   tekton dev v1beta1 PipelineTask  PipelineTask  a    a href   tekton dev v1beta1 ResolverRef  ResolverRef  a    a href   tekton dev v1beta1 Step  Step  a    a href   tekton dev v1beta1 TaskRunSpec  TaskRunSpec  a     p   div   p Params is a list of Param  p    div   h3 id  tekton dev v1beta1 PipelineDeclaredResource  PipelineDeclaredResource   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineSpec  PipelineSpec  a     p   div   p PipelineDeclaredResource is used by a Pipeline to declare the types of the PipelineResources that it will required to run and names which can be used to refer to these PipelineResources in PipelineTaskResourceBindings   p   p Deprecated  Unused  preserved only for backwards compatibility  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name is the name that will be used by the Pipeline to refer to this resource  It does not directly correspond to the name of any PipelineResources Task inputs or 
outputs  and it does not correspond to the actual names of the PipelineResources that will be bound in the PipelineRun   p    td    tr   tr   td   code type  code  br    em  string   em    td   td   p Type is the type of the PipelineResource   p    td    tr   tr   td   code optional  code  br    em  bool   em    td   td   p Optional declares the resource as optional  optional  true   the resource is considered optional optional  false   the resource is considered required  default equivalent of not specifying it   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 PipelineObject  PipelineObject   h3   div   p PipelineObject is implemented by Pipeline  p    div   h3 id  tekton dev v1beta1 PipelineRef  PipelineRef   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineRunSpec  PipelineRunSpec  a    a href   tekton dev v1beta1 PipelineTask  PipelineTask  a     p   div   p PipelineRef can be used to refer to a specific instance of a Pipeline   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name of the referent  More info   a href  http   kubernetes io docs user guide identifiers names  http   kubernetes io docs user guide identifiers names  a   p    td    tr   tr   td   code apiVersion  code  br    em  string   em    td   td   em  Optional   em   p API version of the referent  p    td    tr   tr   td   code bundle  code  br    em  string   em    td   td   em  Optional   em   p Bundle url reference to a Tekton Bundle   p   p Deprecated  Please use ResolverRef with the bundles resolver instead  The field is staying there for go client backward compatibility  but is not used allowed anymore   p    td    tr   tr   td   code ResolverRef  code  br    em   a href   tekton dev v1beta1 ResolverRef   ResolverRef   a    em    td   td   em  Optional   em   p ResolverRef allows referencing a Pipeline in a remote location like a git repo  This field is 
only supported when the alpha feature gate is enabled   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 PipelineResourceBinding  PipelineResourceBinding   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineRunSpec  PipelineRunSpec  a    a href   tekton dev v1beta1 TaskResourceBinding  TaskResourceBinding  a     p   div   p PipelineResourceBinding connects a reference to an instance of a PipelineResource with a PipelineResource dependency that the Pipeline has declared  p   p Deprecated  Unused  preserved only for backwards compatibility  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name is the name of the PipelineResource in the Pipeline rsquo s declaration  p    td    tr   tr   td   code resourceRef  code  br    em   a href   tekton dev v1beta1 PipelineResourceRef   PipelineResourceRef   a    em    td   td   em  Optional   em   p ResourceRef is a reference to the instance of the actual PipelineResource that should be used  p    td    tr   tr   td   code resourceSpec  code  br    em   a href   tekton dev v1alpha1 PipelineResourceSpec   PipelineResourceSpec   a    em    td   td   em  Optional   em   p ResourceSpec is specification of a resource that should be created and consumed by the task  p    td    tr    tbody    table   h3 id  tekton dev v1beta1 PipelineResourceInterface  PipelineResourceInterface   h3   div   p PipelineResourceInterface interface to be implemented by different PipelineResource types  p   p Deprecated  Unused  preserved only for backwards compatibility  p    div   h3 id  tekton dev v1beta1 PipelineResourceRef  PipelineResourceRef   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineResourceBinding  PipelineResourceBinding  a     p   div   p PipelineResourceRef can be used to refer to a specific instance of a Resource  p   p Deprecated  Unused  preserved only for backwards compatibility  p    
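The ParamType/ParamValue distinction above can be seen in how a Pipeline declares its parameters. A minimal sketch, illustrative only — the parameter names and defaults are made up:

```yaml
# Illustrative only: parameter names and values are hypothetical.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: demo-params
spec:
  params:
    - name: image           # type "string" is the default
      type: string
      default: "alpine:3.19"
    - name: flags           # a ParamValue holding an ArrayVal
      type: array
      default: ["-v", "--dry-run"]
    - name: labels          # a ParamValue holding an ObjectVal
      type: object
      properties:
        env:
          type: string
      default:
        env: "dev"
```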
### PipelineResourceRef

*Appears on:* PipelineResourceBinding

PipelineResourceRef can be used to refer to a specific instance of a Resource.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|---|---|
| `name`<br>*string* | Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names |
| `apiVersion`<br>*string* | *(Optional)* API version of the referent. |

### PipelineResult

*Appears on:* PipelineSpec

PipelineResult is used to describe the results of a pipeline.

| Field | Description |
|---|---|
| `name`<br>*string* | Name is the given name. |
| `type`<br>*ResultsType* | Type is the user-specified type of the result. The possible types are 'string', 'array', and 'object', with 'string' as the default. The 'array' and 'object' types are alpha features. |
| `description`<br>*string* | *(Optional)* Description is a human-readable description of the result. |
| `value`<br>*ParamValue* | Value is the expression used to retrieve the value. |

### PipelineRunReason (`string` alias)

PipelineRunReason represents a reason for the pipeline run's "Succeeded" condition.

### PipelineRunResult

*Appears on:* PipelineRunStatusFields

PipelineRunResult is used to describe the results of a pipeline.

| Field | Description |
|---|---|
| `name`<br>*string* | Name is the result's name as declared by the Pipeline. |
| `value`<br>*ParamValue* | Value is the result returned from the execution of this PipelineRun. |

### PipelineRunRunStatus

*Appears on:* PipelineRunStatusFields

PipelineRunRunStatus contains the name of the PipelineTask for this CustomRun or Run, and the CustomRun or Run's Status.

| Field | Description |
|---|---|
| `pipelineTaskName`<br>*string* | PipelineTaskName is the name of the PipelineTask. |
| `status`<br>*CustomRunStatus* | *(Optional)* Status is the CustomRunStatus for the corresponding CustomRun or Run. |
| `whenExpressions`<br>*[]WhenExpression* | *(Optional)* WhenExpressions is the list of checks guarding the execution of the PipelineTask. |

### PipelineRunSpec

*Appears on:* PipelineRun

PipelineRunSpec defines the desired state of PipelineRun.

| Field | Description |
|---|---|
| `pipelineRef`<br>*PipelineRef* | *(Optional)* |
| `pipelineSpec`<br>*PipelineSpec* | *(Optional)* Specifying PipelineSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `resources`<br>*[]PipelineResourceBinding* | Resources is a list of bindings specifying which actual instances of PipelineResources to use for the resources the Pipeline has declared it needs.<br>Deprecated: Unused, preserved only for backwards compatibility. |
| `params`<br>*Params* | Params is a list of parameter names and values. |
| `serviceAccountName`<br>*string* | *(Optional)* |
| `status`<br>*PipelineRunSpecStatus* | *(Optional)* Used for cancelling a pipelinerun (and maybe more later on). |
| `timeouts`<br>*TimeoutFields* | *(Optional)* Time after which the Pipeline times out. Currently three keys are accepted in the map: pipeline, tasks and finally, with Timeouts.pipeline >= Timeouts.tasks + Timeouts.finally. |
| `timeout`<br>*Kubernetes meta/v1.Duration* | *(Optional)* Timeout is the time after which the Pipeline times out. Defaults to never. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration<br>Deprecated: use pipelineRunSpec.Timeouts.Pipeline instead. |
| `podTemplate`<br>*Template* | PodTemplate holds pod-specific configuration. |
| `workspaces`<br>*[]WorkspaceBinding* | *(Optional)* Workspaces holds a set of workspace bindings that must match names with those declared in the pipeline. |
| `taskRunSpecs`<br>*[]PipelineTaskRunSpec* | *(Optional)* TaskRunSpecs holds a set of runtime specs. |

### PipelineRunSpecStatus (`string` alias)

*Appears on:* PipelineRunSpec

PipelineRunSpecStatus defines the pipelinerun spec status the user can provide.

### PipelineRunStatus

*Appears on:* PipelineRun

PipelineRunStatus defines the observed state of PipelineRun.

| Field | Description |
|---|---|
| `Status`<br>*knative.dev/pkg/apis/duck/v1.Status* | (Members of `Status` are embedded into this type.) |
| `PipelineRunStatusFields`<br>*PipelineRunStatusFields* | (Members of `PipelineRunStatusFields` are embedded into this type.) PipelineRunStatusFields inlines the status fields. |
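The PipelineRunSpec fields above combine as in this minimal sketch — the pipeline name, service account, and durations are illustrative only:

```yaml
# Illustrative only; names and durations are hypothetical.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: demo-run
spec:
  pipelineRef:
    name: demo-pipeline
  serviceAccountName: build-bot
  timeouts:
    pipeline: "1h"    # must be >= tasks + finally
    tasks: "45m"
    finally: "15m"
  params:
    - name: image
      value: "alpine:3.19"
```

Note that `timeouts` supersedes the deprecated single `timeout` field; the two should not be set together.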
### PipelineRunStatusFields

*Appears on:* PipelineRunStatus

PipelineRunStatusFields holds the fields of PipelineRunStatus' status. This is defined separately and inlined so that other types can readily consume these fields via duck typing.

| Field | Description |
|---|---|
| `startTime`<br>*Kubernetes meta/v1.Time* | StartTime is the time the PipelineRun is actually started. |
| `completionTime`<br>*Kubernetes meta/v1.Time* | CompletionTime is the time the PipelineRun completed. |
| `taskRuns`<br>*map[string]PipelineRunTaskRunStatus* | *(Optional)* TaskRuns is a map of PipelineRunTaskRunStatus with the taskRun name as the key.<br>Deprecated: use ChildReferences instead. As of v0.45.0, this field is no longer populated and is only included for backwards compatibility with older server versions. |
| `runs`<br>*map[string]PipelineRunRunStatus* | *(Optional)* Runs is a map of PipelineRunRunStatus with the run name as the key.<br>Deprecated: use ChildReferences instead. As of v0.45.0, this field is no longer populated and is only included for backwards compatibility with older server versions. |
| `pipelineResults`<br>*[]PipelineRunResult* | *(Optional)* PipelineResults are the list of results written out by the pipeline task's containers. |
| `pipelineSpec`<br>*PipelineSpec* | PipelineRunSpec contains the exact spec used to instantiate the run. |
| `skippedTasks`<br>*[]SkippedTask* | *(Optional)* List of tasks that were skipped due to when expressions evaluating to false. |
| `childReferences`<br>*[]ChildStatusReference* | *(Optional)* List of TaskRun and Run names, PipelineTask names, and API versions/kinds for children of this PipelineRun. |
| `finallyStartTime`<br>*Kubernetes meta/v1.Time* | *(Optional)* FinallyStartTime is when all non-finally tasks have been completed and only finally tasks are being executed. |
| `provenance`<br>*Provenance* | *(Optional)* Provenance contains some key authenticated metadata about how a software artifact was built (what sources, what inputs/outputs, etc.). |
| `spanContext`<br>*map[string]string* | SpanContext contains tracing span context fields. |
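Since `taskRuns` and `runs` are deprecated, child workloads are reported through `childReferences`. A hypothetical status excerpt (this block is written by the controller, never by the user; the names are made up):

```yaml
# Illustrative controller-written status excerpt; names are hypothetical.
status:
  startTime: "2024-01-01T00:00:00Z"
  completionTime: "2024-01-01T00:05:00Z"
  childReferences:
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: demo-run-build        # typically <PipelineRun name>-<PipelineTask name>
      pipelineTaskName: build
```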
### PipelineRunTaskRunStatus

*Appears on:* PipelineRunStatusFields

PipelineRunTaskRunStatus contains the name of the PipelineTask for this TaskRun and the TaskRun's Status.

| Field | Description |
|---|---|
| `pipelineTaskName`<br>*string* | PipelineTaskName is the name of the PipelineTask. |
| `status`<br>*TaskRunStatus* | *(Optional)* Status is the TaskRunStatus for the corresponding TaskRun. |
| `whenExpressions`<br>*[]WhenExpression* | *(Optional)* WhenExpressions is the list of checks guarding the execution of the PipelineTask. |

### PipelineSpec

*Appears on:* Pipeline, PipelineRunSpec, PipelineRunStatusFields, PipelineTask

PipelineSpec defines the desired state of Pipeline.

| Field | Description |
|---|---|
| `displayName`<br>*string* | *(Optional)* DisplayName is a user-facing name of the pipeline that may be used to populate a UI. |
| `description`<br>*string* | *(Optional)* Description is a user-facing description of the pipeline that may be used to populate a UI. |
| `resources`<br>*[]PipelineDeclaredResource* | Deprecated: Unused, preserved only for backwards compatibility. |
| `tasks`<br>*[]PipelineTask* | Tasks declares the graph of Tasks that execute when this Pipeline is run. |
| `params`<br>*ParamSpecs* | Params declares a list of input parameters that must be supplied when this Pipeline is run. |
| `workspaces`<br>*[]PipelineWorkspaceDeclaration* | *(Optional)* Workspaces declares a set of named workspaces that are expected to be provided by a PipelineRun. |
| `results`<br>*[]PipelineResult* | *(Optional)* Results are values that this pipeline can output once run. |
| `finally`<br>*[]PipelineTask* | Finally declares the list of Tasks that execute just before leaving the Pipeline, i.e. either after all Tasks have finished executing successfully or after a failure which would result in ending the Pipeline. |

### PipelineTask

*Appears on:* PipelineSpec

PipelineTask defines a task in a Pipeline, passing inputs from both Params and from the output of previous tasks.

| Field | Description |
|---|---|
| `name`<br>*string* | Name is the name of this task within the context of a Pipeline. Name is used as a coordinate with the `from` and `runAfter` fields to establish the execution order of tasks relative to one another. |
| `displayName`<br>*string* | *(Optional)* DisplayName is the display name of this task within the context of a Pipeline. This display name may be used to populate a UI. |
| `description`<br>*string* | *(Optional)* Description is the description of this task within the context of a Pipeline. This description may be used to populate a UI. |
| `taskRef`<br>*TaskRef* | *(Optional)* TaskRef is a reference to a task definition. |
| `taskSpec`<br>*EmbeddedTask* | *(Optional)* TaskSpec is a specification of a task. Specifying TaskSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `when`<br>*WhenExpressions* | *(Optional)* WhenExpressions is a list of when expressions that need to be true for the task to run. |
| `retries`<br>*int* | *(Optional)* Retries represents how many times this task should be retried in case of task failure (ConditionSucceeded set to False). |
| `runAfter`<br>*[]string* | *(Optional)* RunAfter is the list of PipelineTask names that should be executed before this Task executes. (Used to force a specific ordering in graph execution.) |
| `resources`<br>*PipelineTaskResources* | *(Optional)* Deprecated: Unused, preserved only for backwards compatibility. |
| `params`<br>*Params* | *(Optional)* Parameters declares parameters passed to this task. |
| `matrix`<br>*Matrix* | *(Optional)* Matrix declares parameters used to fan out this task. |
| `workspaces`<br>*[]WorkspacePipelineTaskBinding* | *(Optional)* Workspaces maps workspaces from the pipeline spec to the workspaces declared in the Task. |
| `timeout`<br>*Kubernetes meta/v1.Duration* | *(Optional)* Time after which the TaskRun times out. Defaults to 1 hour. Refer to Go's ParseDuration documentation for the expected format: https://golang.org/pkg/time/#ParseDuration |
| `pipelineRef`<br>*PipelineRef* | *(Optional)* PipelineRef is a reference to a pipeline definition. Note: PipelineRef is in preview mode and not yet supported. |
| `pipelineSpec`<br>*PipelineSpec* | *(Optional)* PipelineSpec is a specification of a pipeline. Note: PipelineSpec is in preview mode and not yet supported. Specifying PipelineSpec can be disabled by setting the `disable-inline-spec` feature flag. |
| `onError`<br>*PipelineTaskOnErrorType* | *(Optional)* OnError defines the exiting behavior of a PipelineRun on error; can be set to `continue` or `stopAndFail`. |
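A minimal sketch of how `runAfter`, `when`, `retries`, and `timeout` combine inside a Pipeline's `tasks` list — the task and parameter names are hypothetical:

```yaml
# Illustrative only; task and parameter names are made up.
tasks:
  - name: build
    taskRef:
      name: build-image
    params:
      - name: image
        value: "$(params.image)"
  - name: deploy
    runAfter: ["build"]        # forces ordering after the build task
    when:                      # deploy runs only when env is "prod"
      - input: "$(params.env)"
        operator: in
        values: ["prod"]
    taskRef:
      name: deploy-app
    retries: 2
    timeout: "30m"
```

If the when expressions evaluate to false, the task is recorded under the PipelineRun status' `skippedTasks` rather than failing the run.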
### PipelineTaskInputResource

*Appears on:* PipelineTaskResources

PipelineTaskInputResource maps the name of a declared PipelineResource input dependency in a Task to the resource in the Pipeline's DeclaredPipelineResources that should be used. This input may come from a previous task.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|---|---|
| `name`<br>*string* | Name is the name of the PipelineResource as declared by the Task. |
| `resource`<br>*string* | Resource is the name of the DeclaredPipelineResource to use. |
| `from`<br>*[]string* | *(Optional)* From is the list of PipelineTask names that the resource has to come from. (Implies an ordering in the execution graph.) |

### PipelineTaskMetadata

*Appears on:* EmbeddedRunSpec, EmbeddedCustomRunSpec, EmbeddedTask, PipelineTaskRunSpec

PipelineTaskMetadata contains the labels or annotations for an EmbeddedTask.

| Field | Description |
|---|---|
| `labels`<br>*map[string]string* | *(Optional)* |
| `annotations`<br>*map[string]string* | *(Optional)* |

### PipelineTaskOnErrorType (`string` alias)

*Appears on:* PipelineTask

PipelineTaskOnErrorType defines a list of supported failure handling behaviors of a PipelineTask on error.

### PipelineTaskOutputResource

*Appears on:* PipelineTaskResources

PipelineTaskOutputResource maps the name of a declared PipelineResource output dependency in a Task to the resource in the Pipeline's DeclaredPipelineResources that should be used.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|---|---|
| `name`<br>*string* | Name is the name of the PipelineResource as declared by the Task. |
| `resource`<br>*string* | Resource is the name of the DeclaredPipelineResource to use. |

### PipelineTaskParam

PipelineTaskParam is used to provide arbitrary string parameters to a Task.

| Field | Description |
|---|---|
| `name`<br>*string* |  |
| `value`<br>*string* |  |

### PipelineTaskResources

*Appears on:* PipelineTask

PipelineTaskResources allows a Pipeline to declare how its DeclaredPipelineResources should be provided to a Task as its inputs and outputs.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|---|---|
| `inputs`<br>*[]PipelineTaskInputResource* | Inputs holds the mapping from the PipelineResources declared in DeclaredPipelineResources to the input PipelineResources required by the Task. |
| `outputs`<br>*[]PipelineTaskOutputResource* | Outputs holds the mapping from the PipelineResources declared in DeclaredPipelineResources to the output PipelineResources required by the Task. |

### PipelineTaskRun

PipelineTaskRun reports the results of running a step in the Task. Each task has the potential to succeed or fail (based on the exit code) and produces logs.

| Field | Description |
|---|---|
| `name`<br>*string* |  |

### PipelineTaskRunSpec

*Appears on:* PipelineRunSpec

PipelineTaskRunSpec can be used to configure specific specs for a concrete Task.

| Field | Description |
|---|---|
| `pipelineTaskName`<br>*string* |  |
| `taskServiceAccountName`<br>*string* |  |
| `taskPodTemplate`<br>*Template* |  |
| `stepOverrides`<br>*[]TaskRunStepOverride* |  |
| `sidecarOverrides`<br>*[]TaskRunSidecarOverride* |  |
| `metadata`<br>*PipelineTaskMetadata* | *(Optional)* |
| `computeResources`<br>*Kubernetes core/v1.ResourceRequirements* | Compute resources to use for this TaskRun. |

### PipelineWorkspaceDeclaration

*Appears on:* PipelineSpec

WorkspacePipelineDeclaration creates a named slot in a Pipeline that a PipelineRun is expected to populate with a workspace binding.

Deprecated: use PipelineWorkspaceDeclaration type instead.

| Field | Description |
|---|---|
| `name`<br>*string* | Name is the name of a workspace to be provided by a PipelineRun. |
| `description`<br>*string* | *(Optional)* Description is a human-readable string describing how the workspace will be used in the Pipeline. It can be useful to include a bit of detail about which tasks are intended to have access to the data on the workspace. |
| `optional`<br>*bool* | Optional marks a Workspace as not being required in PipelineRuns. By default this field is false, and so declared workspaces are required. |
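The workspace declaration above is the Pipeline-side half of a pair: the PipelineRun supplies the matching WorkspaceBinding. A sketch with hypothetical names:

```yaml
# Illustrative only; workspace and claim names are made up.
# In the Pipeline spec: declare the named slot.
spec:
  workspaces:
    - name: shared-data
      description: "Holds cloned sources; intended for the build task."
      optional: false
---
# In the PipelineRun spec: bind the slot by the same name.
spec:
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: shared-data-pvc
```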
 thead   tbody   tr   td   code type  code  br    em   a href   tekton dev v1beta1 ParamType   ParamType   a    em    td   td    td    tr    tbody    table   h3 id  tekton dev v1beta1 Provenance  Provenance   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineRunStatusFields  PipelineRunStatusFields  a    a href   tekton dev v1beta1 StepState  StepState  a    a href   tekton dev v1beta1 TaskRunStatusFields  TaskRunStatusFields  a     p   div   p Provenance contains metadata about resources used in the TaskRun PipelineRun such as the source from where a remote build definition was fetched  This field aims to carry minimum amoumt of metadata in  Run status so that Tekton Chains can capture them in the provenance   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code configSource  code  br    em   a href   tekton dev v1beta1 ConfigSource   ConfigSource   a    em    td   td   p Deprecated  Use RefSource instead  p    td    tr   tr   td   code refSource  code  br    em   a href   tekton dev v1beta1 RefSource   RefSource   a    em    td   td   p RefSource identifies the source where a remote task pipeline came from   p    td    tr   tr   td   code featureFlags  code  br    em  github com tektoncd pipeline pkg apis config FeatureFlags   em    td   td   p FeatureFlags identifies the feature flags that were used during the task pipeline run  p    td    tr    tbody    table   h3 id  tekton dev v1beta1 Ref  Ref   h3   p    em Appears on   em  a href   tekton dev v1beta1 Step  Step  a     p   div   p Ref can be used to refer to a specific instance of a StepAction   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name of the referenced step  p    td    tr   tr   td   code ResolverRef  code  br    em   a href   tekton dev v1beta1 ResolverRef   ResolverRef   a    em    td   td   em  Optional   em   p 
ResolverRef allows referencing a StepAction in a remote location like a git repo   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 RefSource  RefSource   h3   p    em Appears on   em  a href   tekton dev v1beta1 Provenance  Provenance  a     p   div   p RefSource contains the information that can uniquely identify where a remote built definition came from i e  Git repositories  Tekton Bundles in OCI registry and hub   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code uri  code  br    em  string   em    td   td   p URI indicates the identity of the source of the build definition  Example   ldquo  a href  https   github com tektoncd catalog quot   https   github com tektoncd catalog rdquo   a   p    td    tr   tr   td   code digest  code  br    em  map string string   em    td   td   p Digest is a collection of cryptographic digests for the contents of the artifact specified by URI  Example    ldquo sha1 rdquo    ldquo f99d13e554ffcb696dee719fa85b695cb5b0f428 rdquo    p    td    tr   tr   td   code entryPoint  code  br    em  string   em    td   td   p EntryPoint identifies the entry point into the build  This is often a path to a build definition file and or a target label within that file  Example   ldquo task git clone 0 8 git clone yaml rdquo   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 ResolverName  ResolverName   code string  code  alias   h3   p    em Appears on   em  a href   tekton dev v1beta1 ResolverRef  ResolverRef  a     p   div   p ResolverName is the name of a resolver from which a resource can be requested   p    div   h3 id  tekton dev v1beta1 ResolverRef  ResolverRef   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineRef  PipelineRef  a    a href   tekton dev v1beta1 Ref  Ref  a    a href   tekton dev v1beta1 TaskRef  TaskRef  a     p   div   p ResolverRef can be used to refer to a Pipeline or Task in a remote location like a git repo   p    div   
table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code resolver  code  br    em   a href   tekton dev v1beta1 ResolverName   ResolverName   a    em    td   td   em  Optional   em   p Resolver is the name of the resolver that should perform resolution of the referenced Tekton resource  such as  ldquo git rdquo    p    td    tr   tr   td   code params  code  br    em   a href   tekton dev v1beta1 Params   Params   a    em    td   td   em  Optional   em   p Params contains the parameters used to identify the referenced Tekton resource  Example entries might include  ldquo repo rdquo  or  ldquo path rdquo  but the set of params ultimately depends on the chosen resolver   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 ResultRef  ResultRef   h3   div   p ResultRef is a type that represents a reference to a task run result  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code pipelineTask  code  br    em  string   em    td   td    td    tr   tr   td   code result  code  br    em  string   em    td   td    td    tr   tr   td   code resultsIndex  code  br    em  int   em    td   td    td    tr   tr   td   code property  code  br    em  string   em    td   td    td    tr    tbody    table   h3 id  tekton dev v1beta1 ResultsType  ResultsType   code string  code  alias   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineResult  PipelineResult  a    a href   tekton dev v1beta1 TaskResult  TaskResult  a    a href   tekton dev v1beta1 TaskRunResult  TaskRunResult  a     p   div   p ResultsType indicates the type of a result  Used to distinguish between a single string and an array of strings  Note that there is ResultType used to find out whether a RunResult is from a task result or not  which is different from this ResultsType   p    div   h3 id  tekton dev v1beta1 RunObject  RunObject   h3   div   p RunObject is implemented by CustomRun and Run  p    div   
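The `ResolverRef` fields described above are easiest to see in context. Below is a minimal sketch of a PipelineRun whose `pipelineRef` uses the built-in `git` resolver to fetch a remote Pipeline definition; the resource name, repository URL, and path are placeholders, and the param names (`url`, `revision`, `pathInRepo`) are those of the git resolver — other resolvers take a different param set:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: remote-pipeline-run          # hypothetical name
spec:
  pipelineRef:
    resolver: git                    # ResolverRef.resolver: which resolver performs resolution
    params:                          # ResolverRef.params: resolver-specific identification
      - name: url
        value: https://github.com/tektoncd/catalog.git   # placeholder repository
      - name: revision
        value: main
      - name: pathInRepo
        value: pipeline/example/pipeline.yaml            # placeholder path
```

The same `resolver`/`params` pair appears wherever `ResolverRef` is embedded, i.e. in `PipelineRef`, `TaskRef`, and a Step's `Ref` to a remote StepAction.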
<h3 id="tekton.dev/v1beta1.Sidecar">Sidecar</h3>

*Appears on:* [TaskSpec](#tekton.dev/v1beta1.TaskSpec)

Sidecar has nearly the same data structure as Step but does not have the ability to timeout.

| Field | Description |
| --- | --- |
| `name`<br>*string* | Name of the Sidecar specified as a DNS_LABEL. Each Sidecar in a Task must have a unique name (DNS_LABEL). Cannot be updated. |
| `image`<br>*string* | *(Optional)* Image name to be used by the Sidecar. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command`<br>*[]string* | *(Optional)* Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the Sidecar's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args`<br>*[]string* | *(Optional)* Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `workingDir`<br>*string* | *(Optional)* Sidecar's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `ports`<br>*[][Kubernetes core/v1.ContainerPort](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerport-v1-core)* | *(Optional)* List of ports to expose from the Sidecar. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated. |
| `envFrom`<br>*[][Kubernetes core/v1.EnvFromSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core)* | *(Optional)* List of sources to populate environment variables in the Sidecar. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the Sidecar is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. |
| `env`<br>*[][Kubernetes core/v1.EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core)* | *(Optional)* List of environment variables to set in the Sidecar. Cannot be updated. |
| `resources`<br>*[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core)* | *(Optional)* Compute Resources required by this Sidecar. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| `volumeMounts`<br>*[][Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)* | *(Optional)* Volumes to mount into the Sidecar's filesystem. Cannot be updated. |
| `volumeDevices`<br>*[][Kubernetes core/v1.VolumeDevice](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumedevice-v1-core)* | *(Optional)* volumeDevices is the list of block devices to be used by the Sidecar. |
| `livenessProbe`<br>*[Kubernetes core/v1.Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core)* | *(Optional)* Periodic probe of Sidecar liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes |
| `readinessProbe`<br>*[Kubernetes core/v1.Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core)* | *(Optional)* Periodic probe of Sidecar service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes |
| `startupProbe`<br>*[Kubernetes core/v1.Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core)* | *(Optional)* StartupProbe indicates that the Pod the Sidecar is running in has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes |
| `lifecycle`<br>*[Kubernetes core/v1.Lifecycle](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#lifecycle-v1-core)* | *(Optional)* Actions that the management system should take in response to Sidecar lifecycle events. Cannot be updated. |
| `terminationMessagePath`<br>*string* | *(Optional)* Optional: Path at which the file to which the Sidecar's termination message will be written is mounted into the Sidecar's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. |
| `terminationMessagePolicy`<br>*[Kubernetes core/v1.TerminationMessagePolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#terminationmessagepolicy-v1-core)* | *(Optional)* Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the Sidecar status message on both success and failure. FallbackToLogsOnError will use the last chunk of Sidecar log output if the termination message file is empty and the Sidecar exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. |
| `imagePullPolicy`<br>*[Kubernetes core/v1.PullPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pullpolicy-v1-core)* | *(Optional)* Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| `securityContext`<br>*[Kubernetes core/v1.SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core)* | *(Optional)* SecurityContext defines the security options the Sidecar should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |
| `stdin`<br>*bool* | *(Optional)* Whether this Sidecar should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the Sidecar will always result in EOF. Default is false. |
| `stdinOnce`<br>*bool* | *(Optional)* Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on Sidecar start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the Sidecar is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. |
| `tty`<br>*bool* | *(Optional)* Whether this Sidecar should allocate a TTY for itself; also requires 'stdin' to be true. Default is false. |
| `script`<br>*string* | *(Optional)* Script is the contents of an executable file to execute. If Script is not empty, the Step cannot have a Command or Args. |
| `workspaces`<br>*[][WorkspaceUsage](#tekton.dev/v1beta1.WorkspaceUsage)* | *(Optional)* This is an alpha field. You must set the "enable-api-fields" feature flag to "alpha" for this field to be supported. Workspaces is a list of workspaces from the Task that this Sidecar wants exclusive access to. Adding a workspace to this list means that any other Step or Sidecar that does not also request this Workspace will not have access to it. |
| `restartPolicy`<br>*[Kubernetes core/v1.ContainerRestartPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerrestartpolicy-v1-core)* | *(Optional)* RestartPolicy refers to kubernetes RestartPolicy. It can only be set for an initContainer and must have its policy set to "Always". It is currently left optional to help support Kubernetes versions prior to 1.29 when this feature was introduced. |

<h3 id="tekton.dev/v1beta1.SidecarState">SidecarState</h3>

*Appears on:* [TaskRunStatusFields](#tekton.dev/v1beta1.TaskRunStatusFields)

SidecarState reports the results of running a sidecar in a Task.

| Field | Description |
| --- | --- |
| `ContainerState`<br>*[Kubernetes core/v1.ContainerState](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerstate-v1-core)* | (Members of `ContainerState` are embedded into this type.) |
| `name`<br>*string* | |
| `container`<br>*string* | |
| `imageID`<br>*string* | |

<h3 id="tekton.dev/v1beta1.SkippedTask">SkippedTask</h3>

*Appears on:* [PipelineRunStatusFields](#tekton.dev/v1beta1.PipelineRunStatusFields)

SkippedTask is used to describe the Tasks that were skipped due to their When Expressions
evaluating to False  This is a struct because we are looking into including more details about the When Expressions that caused this Task to be skipped   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name is the Pipeline Task name  p    td    tr   tr   td   code reason  code  br    em   a href   tekton dev v1beta1 SkippingReason   SkippingReason   a    em    td   td   p Reason is the cause of the PipelineTask being skipped   p    td    tr   tr   td   code whenExpressions  code  br    em   a href   tekton dev v1beta1 WhenExpression     WhenExpression   a    em    td   td   em  Optional   em   p WhenExpressions is the list of checks guarding the execution of the PipelineTask  p    td    tr    tbody    table   h3 id  tekton dev v1beta1 SkippingReason  SkippingReason   code string  code  alias   h3   p    em Appears on   em  a href   tekton dev v1beta1 SkippedTask  SkippedTask  a     p   div   p SkippingReason explains why a PipelineTask was skipped   p    div   h3 id  tekton dev v1beta1 Step  Step   h3   p    em Appears on   em  a href   tekton dev v1beta1 InternalTaskModifier  InternalTaskModifier  a    a href   tekton dev v1beta1 TaskSpec  TaskSpec  a     p   div   p Step runs a subcomponent of a Task  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name of the Step specified as a DNS LABEL  Each Step in a Task must have a unique name   p    td    tr   tr   td   code image  code  br    em  string   em    td   td   em  Optional   em   p Image reference name to run for this Step  More info   a href  https   kubernetes io docs concepts containers images  https   kubernetes io docs concepts containers images  a   p    td    tr   tr   td   code command  code  br    em    string   em    td   td   em  Optional   em   p Entrypoint array  Not executed within a 
shell  The image rsquo s ENTRYPOINT is used if this is not provided  Variable references   VAR NAME  are expanded using the container rsquo s environment  If a variable cannot be resolved  the reference in the input string will be unchanged  Double    are reduced to a single    which allows for escaping the   VAR NAME  syntax  i e   ldquo    VAR NAME  rdquo  will produce the string literal  ldquo   VAR NAME  rdquo   Escaped references will never be expanded  regardless of whether the variable exists or not  Cannot be updated  More info   a href  https   kubernetes io docs tasks inject data application define command argument container  running a command in a shell  https   kubernetes io docs tasks inject data application define command argument container  running a command in a shell  a   p    td    tr   tr   td   code args  code  br    em    string   em    td   td   em  Optional   em   p Arguments to the entrypoint  The image rsquo s CMD is used if this is not provided  Variable references   VAR NAME  are expanded using the container rsquo s environment  If a variable cannot be resolved  the reference in the input string will be unchanged  Double    are reduced to a single    which allows for escaping the   VAR NAME  syntax  i e   ldquo    VAR NAME  rdquo  will produce the string literal  ldquo   VAR NAME  rdquo   Escaped references will never be expanded  regardless of whether the variable exists or not  Cannot be updated  More info   a href  https   kubernetes io docs tasks inject data application define command argument container  running a command in a shell  https   kubernetes io docs tasks inject data application define command argument container  running a command in a shell  a   p    td    tr   tr   td   code workingDir  code  br    em  string   em    td   td   em  Optional   em   p Step rsquo s working directory  If not specified  the container runtime rsquo s default will be used  which might be configured in the container image  Cannot be updated   p    
td    tr   tr   td   code ports  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  containerport v1 core     Kubernetes core v1 ContainerPort   a    em    td   td   em  Optional   em   p List of ports to expose from the Step rsquo s container  Exposing a port here gives the system additional information about the network connections a container uses  but is primarily informational  Not specifying a port here DOES NOT prevent that port from being exposed  Any port which is listening on the default  ldquo 0 0 0 0 rdquo  address inside a container will be accessible from the network  Cannot be updated   p   p Deprecated  This field will be removed in a future release   p    td    tr   tr   td   code envFrom  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  envfromsource v1 core     Kubernetes core v1 EnvFromSource   a    em    td   td   em  Optional   em   p List of sources to populate environment variables in the container  The keys defined within a source must be a C IDENTIFIER  All invalid keys will be reported as an event when the container is starting  When a key exists in multiple sources  the value associated with the last source will take precedence  Values defined by an Env with a duplicate key will take precedence  Cannot be updated   p    td    tr   tr   td   code env  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  envvar v1 core     Kubernetes core v1 EnvVar   a    em    td   td   em  Optional   em   p List of environment variables to set in the container  Cannot be updated   p    td    tr   tr   td   code resources  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  resourcerequirements v1 core   Kubernetes core v1 ResourceRequirements   a    em    td   td   em  Optional   em   p Compute Resources required by this Step  Cannot be updated  More info   a href  https   kubernetes io 
docs concepts configuration manage resources containers   https   kubernetes io docs concepts configuration manage resources containers   a   p    td    tr   tr   td   code volumeMounts  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  volumemount v1 core     Kubernetes core v1 VolumeMount   a    em    td   td   em  Optional   em   p Volumes to mount into the Step rsquo s filesystem  Cannot be updated   p    td    tr   tr   td   code volumeDevices  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  volumedevice v1 core     Kubernetes core v1 VolumeDevice   a    em    td   td   em  Optional   em   p volumeDevices is the list of block devices to be used by the Step   p    td    tr   tr   td   code livenessProbe  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  probe v1 core   Kubernetes core v1 Probe   a    em    td   td   em  Optional   em   p Periodic probe of container liveness  Step will be restarted if the probe fails  Cannot be updated  More info   a href  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  a   p   p Deprecated  This field will be removed in a future release   p    td    tr   tr   td   code readinessProbe  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  probe v1 core   Kubernetes core v1 Probe   a    em    td   td   em  Optional   em   p Periodic probe of container service readiness  Step will be removed from service endpoints if the probe fails  Cannot be updated  More info   a href  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  a   p   p Deprecated  This field will be removed in a future release   p    td    tr   tr   td   code 
startupProbe  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  probe v1 core   Kubernetes core v1 Probe   a    em    td   td   em  Optional   em   p DeprecatedStartupProbe indicates that the Pod this Step runs in has successfully initialized  If specified  no other probes are executed until this completes successfully  If this probe fails  the Pod will be restarted  just as if the livenessProbe failed  This can be used to provide different probe parameters at the beginning of a Pod rsquo s lifecycle  when it might take a long time to load data or warm a cache  than during steady state operation  This cannot be updated  More info   a href  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  https   kubernetes io docs concepts workloads pods pod lifecycle container probes  a   p   p Deprecated  This field will be removed in a future release   p    td    tr   tr   td   code lifecycle  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  lifecycle v1 core   Kubernetes core v1 Lifecycle   a    em    td   td   em  Optional   em   p Actions that the management system should take in response to container lifecycle events  Cannot be updated   p   p Deprecated  This field will be removed in a future release   p    td    tr   tr   td   code terminationMessagePath  code  br    em  string   em    td   td   em  Optional   em   p Deprecated  This field will be removed in a future release and can rsquo t be meaningfully used   p    td    tr   tr   td   code terminationMessagePolicy  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  terminationmessagepolicy v1 core   Kubernetes core v1 TerminationMessagePolicy   a    em    td   td   em  Optional   em   p Deprecated  This field will be removed in a future release and can rsquo t be meaningfully used   p    td    tr   tr   td   code imagePullPolicy  code  br    em   a href  
https   kubernetes io docs reference generated kubernetes api v1 24  pullpolicy v1 core   Kubernetes core v1 PullPolicy   a    em    td   td   em  Optional   em   p Image pull policy  One of Always  Never  IfNotPresent  Defaults to Always if  latest tag is specified  or IfNotPresent otherwise  Cannot be updated  More info   a href  https   kubernetes io docs concepts containers images updating images  https   kubernetes io docs concepts containers images updating images  a   p    td    tr   tr   td   code securityContext  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  securitycontext v1 core   Kubernetes core v1 SecurityContext   a    em    td   td   em  Optional   em   p SecurityContext defines the security options the Step should be run with  If set  the fields of SecurityContext override the equivalent fields of PodSecurityContext  More info   a href  https   kubernetes io docs tasks configure pod container security context   https   kubernetes io docs tasks configure pod container security context   a   p    td    tr   tr   td   code stdin  code  br    em  bool   em    td   td   em  Optional   em   p Whether this container should allocate a buffer for stdin in the container runtime  If this is not set  reads from stdin in the container will always result in EOF  Default is false   p   p Deprecated  This field will be removed in a future release   p    td    tr   tr   td   code stdinOnce  code  br    em  bool   em    td   td   em  Optional   em   p Whether the container runtime should close the stdin channel after it has been opened by a single attach  When stdin is true the stdin stream will remain open across multiple attach sessions  If stdinOnce is set to true  stdin is opened on container start  is empty until the first client attaches to stdin  and then remains open and accepts data until the client disconnects  at which time stdin is closed and remains closed until the container is restarted  If this flag is 
false, a container process that reads from stdin will never receive an EOF. Default is false.<br>*Deprecated: This field will be removed in a future release.* |
| `tty`<br>*bool* | *(Optional)* Whether this container should allocate a DeprecatedTTY for itself, also requires 'stdin' to be true. Default is false.<br>*Deprecated: This field will be removed in a future release.* |
| `script`<br>*string* | *(Optional)* Script is the contents of an executable file to execute.<br>If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script. |
| `timeout`<br>*[Kubernetes meta/v1.Duration](https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration)* | *(Optional)* Timeout is the time after which the step times out. Defaults to never. Refer to Go's ParseDuration documentation for expected format: https://golang.org/pkg/time/#ParseDuration |
| `workspaces`<br>*[][WorkspaceUsage](#tekton.dev/v1beta1.WorkspaceUsage)* | *(Optional)* This is an alpha field. You must set the "enable-api-fields" feature flag to "alpha" for this field to be supported.<br>Workspaces is a list of workspaces from the Task that this Step wants exclusive access to. Adding a workspace to this list means that any other Step or Sidecar that does not also request this Workspace will not have access to it. |
| `onError`<br>*[OnErrorType](#tekton.dev/v1beta1.OnErrorType)* | OnError defines the exiting behavior of a container on error; can be set to `continue` or `stopAndFail`. |
| `stdoutConfig`<br>*[StepOutputConfig](#tekton.dev/v1beta1.StepOutputConfig)* | *(Optional)* Stores configuration for the stdout stream of the step. |
| `stderrConfig`<br>*[StepOutputConfig](#tekton.dev/v1beta1.StepOutputConfig)* | *(Optional)* Stores configuration for the stderr stream of the step. |
| `ref`<br>*[Ref](#tekton.dev/v1beta1.Ref)* | *(Optional)* Contains the reference to an existing StepAction. |
| `params`<br>*[Params](#tekton.dev/v1beta1.Params)* | *(Optional)* Params declares parameters passed to this step action. |
| `results`<br>*[][StepResult](#tekton.dev/v1.StepResult)* | *(Optional)* Results declares StepResults produced by the Step.<br>This field is at an ALPHA stability level and gated by the "enable-step-actions" feature flag.<br>It can be used in an inlined Step when used to store Results to `$(step.results.resultName.path)`. It cannot be used when referencing StepActions using `v1beta1.Step.Ref`. The Results declared by the StepActions will be stored here instead. |
| `when`<br>*[WhenExpressions](#tekton.dev/v1beta1.WhenExpressions)* | |

### StepActionObject

StepActionObject is implemented by StepAction.
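The Step fields above can be combined in a Task; a minimal sketch (all names hypothetical) showing `script`, `onError`, `stdoutConfig`, and a `ref` to an existing StepAction:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: step-fields-demo        # hypothetical name
spec:
  steps:
    - name: build
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        echo "building..."
      onError: continue           # keep running later steps if this one fails
      stdoutConfig:
        path: /data/build-out.log # duplicate this step's stdout to a file
    - name: run-action
      ref:
        name: my-step-action      # hypothetical existing StepAction
      params:
        - name: greeting
          value: hello
```

Note that a step with `script` cannot also set `command`, and referencing a StepAction via `ref` is gated by the "enable-step-actions" feature flag.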
### StepActionSpec

*Appears on:* [StepAction](#tekton.dev/v1beta1.StepAction)

StepActionSpec contains the actionable components of a step.

| Field | Description |
|-------|-------------|
| `description`<br>*string* | *(Optional)* Description is a user-facing description of the stepaction that may be used to populate a UI. |
| `image`<br>*string* | *(Optional)* Image reference name to run for this StepAction. More info: https://kubernetes.io/docs/concepts/containers/images |
| `command`<br>*[]string* | *(Optional)* Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `"$$(VAR_NAME)"` will produce the string literal `"$(VAR_NAME)"`. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args`<br>*[]string* | *(Optional)* Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `"$$(VAR_NAME)"` will produce the string literal `"$(VAR_NAME)"`. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `env`<br>*[][Kubernetes core/v1.EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core)* | *(Optional)* List of environment variables to set in the container. Cannot be updated. |
| `script`<br>*string* | *(Optional)* Script is the contents of an executable file to execute.<br>If Script is not empty, the Step cannot have a Command and the Args will be passed to the Script. |
| `workingDir`<br>*string* | *(Optional)* Step's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `params`<br>*[ParamSpecs](#tekton.dev/v1.ParamSpecs)* | *(Optional)* Params is a list of input parameters required to run the stepAction. Params must be supplied as inputs in Steps unless they declare a default value. |
| `results`<br>*[][StepResult](#tekton.dev/v1.StepResult)* | *(Optional)* Results are values that this StepAction can output. |
| `securityContext`<br>*[Kubernetes core/v1.SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core)* | *(Optional)* SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ The value set in StepAction will take precedence over the value from Task. |
| `volumeMounts`<br>*[][Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)* | *(Optional)* Volumes to mount into the Step's filesystem. Cannot be updated. |

### StepOutputConfig

*Appears on:* [Step](#tekton.dev/v1beta1.Step)

StepOutputConfig stores configuration for a step output stream.

| Field | Description |
|-------|-------------|
| `path`<br>*string* | *(Optional)* Path to duplicate stdout stream to on container's local filesystem. |
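A StepActionSpec in practice is a standalone `StepAction` resource; a minimal sketch (name and values hypothetical) showing `params` with a default, `env` expansion, and `script`:

```yaml
apiVersion: tekton.dev/v1beta1
kind: StepAction
metadata:
  name: print-greeting       # hypothetical name
spec:
  description: Prints a greeting to stdout.
  image: bash:latest
  params:
    - name: greeting
      type: string
      default: hello         # params with a default need not be supplied by the Step
  env:
    - name: GREETING
      value: $(params.greeting)
  script: |
    #!/usr/bin/env bash
    echo "${GREETING}"
```

A Step would then consume it via `ref: {name: print-greeting}` and may override `greeting` through its own `params`.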
### StepState

*Appears on:* [TaskRunStatusFields](#tekton.dev/v1beta1.TaskRunStatusFields)

StepState reports the results of running a step in a Task.

| Field | Description |
|-------|-------------|
| `ContainerState`<br>*[Kubernetes core/v1.ContainerState](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerstate-v1-core)* | (Members of `ContainerState` are embedded into this type.) |
| `name`<br>*string* | |
| `container`<br>*string* | |
| `imageID`<br>*string* | |
| `results`<br>*[][TaskRunResult](#tekton.dev/v1beta1.TaskRunResult)* | |
| `provenance`<br>*[Provenance](#tekton.dev/v1beta1.Provenance)* | |
| `inputs`<br>*[][Artifact](#tekton.dev/v1beta1.Artifact)* | |
| `outputs`<br>*[][Artifact](#tekton.dev/v1beta1.Artifact)* | |
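StepState entries surface in a TaskRun's status. An illustrative fragment of TaskRun status output (all values invented for illustration) showing the embedded ContainerState alongside `name`, `container`, and `imageID`:

```yaml
# Illustrative fragment of a TaskRun's status (values invented):
status:
  steps:
    - name: build                  # Step name from the Task
      container: step-build        # name of the container that ran the step
      imageID: docker.io/library/bash@sha256:…   # hypothetical digest
      terminated:                  # embedded ContainerState member
        exitCode: 0
        reason: Completed
```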
### StepTemplate

*Appears on:* [TaskSpec](#tekton.dev/v1beta1.TaskSpec)

StepTemplate is a template for a Step.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | Default name for each Step specified as a DNS_LABEL. Each Step in a Task must have a unique name. Cannot be updated.<br>*Deprecated: This field will be removed in a future release.* |
| `image`<br>*string* | *(Optional)* Default image name to use for each Step. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. |
| `command`<br>*[]string* | *(Optional)* Entrypoint array. Not executed within a shell. The docker image's ENTRYPOINT is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the Step's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `"$$(VAR_NAME)"` will produce the string literal `"$(VAR_NAME)"`. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `args`<br>*[]string* | *(Optional)* Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references `$(VAR_NAME)` are expanded using the Step's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double `$$` are reduced to a single `$`, which allows for escaping the `$(VAR_NAME)` syntax: i.e. `"$$(VAR_NAME)"` will produce the string literal `"$(VAR_NAME)"`. Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| `workingDir`<br>*string* | *(Optional)* Step's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |
| `ports`<br>*[][Kubernetes core/v1.ContainerPort](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#containerport-v1-core)* | *(Optional)* List of ports to expose from the Step's container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.<br>*Deprecated: This field will be removed in a future release.* |
| `envFrom`<br>*[][Kubernetes core/v1.EnvFromSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envfromsource-v1-core)* | *(Optional)* List of sources to populate environment variables in the Step. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. |
| `env`<br>*[][Kubernetes core/v1.EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvar-v1-core)* | *(Optional)* List of environment variables to set in the container. Cannot be updated. |
| `resources`<br>*[Kubernetes core/v1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core)* | *(Optional)* Compute Resources required by this Step. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| `volumeMounts`<br>*[][Kubernetes core/v1.VolumeMount](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core)* | *(Optional)* Volumes to mount into the Step's filesystem. Cannot be updated. |
| `volumeDevices`<br>*[][Kubernetes core/v1.VolumeDevice](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumedevice-v1-core)* | *(Optional)* volumeDevices is the list of block devices to be used by the Step. |
| `livenessProbe`<br>*[Kubernetes core/v1.Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core)* | *(Optional)* Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes<br>*Deprecated: This field will be removed in a future release.* |
| `readinessProbe`<br>*[Kubernetes core/v1.Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core)* | *(Optional)* Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes<br>*Deprecated: This field will be removed in a future release.* |
| `startupProbe`<br>*[Kubernetes core/v1.Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core)* | *(Optional)* DeprecatedStartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes<br>*Deprecated: This field will be removed in a future release.* |
| `lifecycle`<br>*[Kubernetes core/v1.Lifecycle](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#lifecycle-v1-core)* | *(Optional)* Actions that the management system should take in response to container lifecycle events. Cannot be updated.<br>*Deprecated: This field will be removed in a future release.* |
| `terminationMessagePath`<br>*string* | *(Optional)* Deprecated: This field will be removed in a future release and cannot be meaningfully used. |
| `terminationMessagePolicy`<br>*[Kubernetes core/v1.TerminationMessagePolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#terminationmessagepolicy-v1-core)* | *(Optional)* Deprecated: This field will be removed in a future release and cannot be meaningfully used. |
| `imagePullPolicy`<br>*[Kubernetes core/v1.PullPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pullpolicy-v1-core)* | *(Optional)* Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if `:latest` tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| `securityContext`<br>*[Kubernetes core/v1.SecurityContext](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core)* | *(Optional)* SecurityContext defines the security options the Step should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |
| `stdin`<br>*bool* | *(Optional)* Whether this Step should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the Step will always result in EOF. Default is false.<br>*Deprecated: This field will be removed in a future release.* |
| `stdinOnce`<br>*bool* | *(Optional)* Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false.<br>*Deprecated: This field will be removed in a future release.* |
| `tty`<br>*bool* | *(Optional)* Whether this Step should allocate a DeprecatedTTY for itself, also requires 'stdin' to be true. Default is false.<br>*Deprecated: This field will be removed in a future release.* |

### TaskBreakpoints

*Appears on:* [TaskRunDebug](#tekton.dev/v1beta1.TaskRunDebug)

TaskBreakpoints defines the breakpoint config for a particular Task.

| Field | Description |
|-------|-------------|
| `onFailure`<br>*string* | *(Optional)* if enabled, pause TaskRun on failure of a step; failed step will not exit |
| `beforeSteps`<br>*[]string* | *(Optional)* |

### TaskKind (`string` alias)

*Appears on:* [TaskRef](#tekton.dev/v1beta1.TaskRef)

TaskKind defines the type of Task used by the pipeline.

### TaskModifier

TaskModifier is an interface to be implemented by different PipelineResources.

Deprecated: Unused, preserved only for backwards compatibility.

### TaskObject

TaskObject is implemented by Task and ClusterTask.
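A StepTemplate supplies defaults that each Step may override; a minimal sketch (names hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: step-template-demo   # hypothetical name
spec:
  stepTemplate:              # defaults applied to every Step below
    imagePullPolicy: IfNotPresent
    workingDir: /workspace
    env:
      - name: LOG_LEVEL
        value: info
  steps:
    - name: first
      image: bash:latest
      script: echo "uses the template's env and workingDir"
    - name: second
      image: bash:latest
      workingDir: /tmp       # overrides the template's workingDir
      script: echo "overrides where needed"
```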
### TaskRef

*Appears on:* [RunSpec](#tekton.dev/v1alpha1.RunSpec), [CustomRunSpec](#tekton.dev/v1beta1.CustomRunSpec), [PipelineTask](#tekton.dev/v1beta1.PipelineTask), [TaskRunSpec](#tekton.dev/v1beta1.TaskRunSpec)

TaskRef can be used to refer to a specific instance of a task.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names |
| `kind`<br>*[TaskKind](#tekton.dev/v1beta1.TaskKind)* | TaskKind indicates the Kind of the Task: 1. Namespaced Task when Kind is set to "Task". If Kind is "", it defaults to "Task". 2. Cluster-Scoped Task when Kind is set to "ClusterTask" 3. Custom Task when Kind is non-empty and APIVersion is non-empty |
| `apiVersion`<br>*string* | *(Optional)* API version of the referent. Note: A Task with non-empty APIVersion and Kind is considered a Custom Task |
| `bundle`<br>*string* | *(Optional)* Bundle url reference to a Tekton Bundle.<br>Deprecated: Please use ResolverRef with the bundles resolver instead. The field is staying there for go client backward compatibility, but is not allowed anymore. |
| `ResolverRef`<br>*[ResolverRef](#tekton.dev/v1beta1.ResolverRef)* | *(Optional)* ResolverRef allows referencing a Task in a remote location like a git repo. This field is only supported when the alpha feature gate is enabled. |

### TaskResource

*Appears on:* [TaskResources](#tekton.dev/v1beta1.TaskResources)

TaskResource defines an input or output Resource declared as a requirement by a Task. The Name field will be used to refer to these Resources within the Task definition, and when provided as an Input, the Name will be the path to the volume mounted containing this Resource as an input (e.g. an input Resource named `workspace` will be mounted at `/workspace`).

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|-------|-------------|
| `ResourceDeclaration`<br>*[ResourceDeclaration](#tekton.dev/v1alpha1.ResourceDeclaration)* | (Members of `ResourceDeclaration` are embedded into this type.) |

### TaskResourceBinding

*Appears on:* [TaskRunInputs](#tekton.dev/v1beta1.TaskRunInputs), [TaskRunOutputs](#tekton.dev/v1beta1.TaskRunOutputs), [TaskRunResources](#tekton.dev/v1beta1.TaskRunResources)

TaskResourceBinding points to the PipelineResource that will be used for the Task input or output called Name.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|-------|-------------|
| `PipelineResourceBinding`<br>*[PipelineResourceBinding](#tekton.dev/v1beta1.PipelineResourceBinding)* | (Members of `PipelineResourceBinding` are embedded into this type.) |
| `paths`<br>*[]string* | *(Optional)* Paths will probably be removed in #1284, and then PipelineResourceBinding can be used instead. The optional Path field corresponds to a path on disk at which the Resource can be found (used when providing the resource via mounted volume, overriding the default logic to fetch the Resource). |

### TaskResources

*Appears on:* [TaskSpec](#tekton.dev/v1beta1.TaskSpec)

TaskResources allows a Pipeline to declare how its DeclaredPipelineResources should be provided to a Task as its inputs and outputs.

Deprecated: Unused, preserved only for backwards compatibility.

| Field | Description |
|-------|-------------|
| `inputs`<br>*[][TaskResource](#tekton.dev/v1beta1.TaskResource)* | Inputs holds the mapping from the PipelineResources declared in DeclaredPipelineResources to the input PipelineResources required by the Task. |
| `outputs`<br>*[][TaskResource](#tekton.dev/v1beta1.TaskResource)* | Outputs holds the mapping from the PipelineResources declared in DeclaredPipelineResources to the output PipelineResources required by the Task. |

### TaskResult

*Appears on:* [TaskSpec](#tekton.dev/v1beta1.TaskSpec)

TaskResult used to describe the results of a task.

| Field | Description |
|-------|-------------|
| `name`<br>*string* | Name the given name |
| `type`<br>*[ResultsType](#tekton.dev/v1beta1.ResultsType)* | *(Optional)* Type is the user-specified type of the result. The possible type is currently "string" and will support "array" in following work. |
| `properties`<br>*map[string][PropertySpec](#tekton.dev/v1beta1.PropertySpec)* | *(Optional)* Properties is the JSON Schema properties to support key-value pairs results. |
| `description`<br>*string* | *(Optional)* Description is a human-readable description of the result. |
| `value`<br>*[ParamValue](#tekton.dev/v1beta1.ParamValue)* | *(Optional)* Value the expression used to retrieve the value of the result from an underlying Step. |
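A TaskResult is declared under a Task's `results` and written by a step; a minimal sketch (names and values hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: result-demo   # hypothetical name
spec:
  results:
    - name: commit
      type: string
      description: The resolved commit SHA.
  steps:
    - name: resolve
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        # Write the value to the result's path; Tekton surfaces it
        # as a TaskRunResult in the TaskRun's status.
        echo -n "deadbeef" > $(results.commit.path)
```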
tr    tbody    table   h3 id  tekton dev v1beta1 TaskRunConditionType  TaskRunConditionType   code string  code  alias   h3   div   p TaskRunConditionType is an enum used to store TaskRun custom conditions such as one used in spire results verification  p    div   h3 id  tekton dev v1beta1 TaskRunDebug  TaskRunDebug   h3   p    em Appears on   em  a href   tekton dev v1beta1 TaskRunSpec  TaskRunSpec  a     p   div   p TaskRunDebug defines the breakpoint config for a particular TaskRun  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code breakpoints  code  br    em   a href   tekton dev v1beta1 TaskBreakpoints   TaskBreakpoints   a    em    td   td   em  Optional   em    td    tr    tbody    table   h3 id  tekton dev v1beta1 TaskRunInputs  TaskRunInputs   h3   div   p TaskRunInputs holds the input values that this task was invoked with   p   p Deprecated  Unused  preserved only for backwards compatibility  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code resources  code  br    em   a href   tekton dev v1beta1 TaskResourceBinding     TaskResourceBinding   a    em    td   td   em  Optional   em    td    tr   tr   td   code params  code  br    em   a href   tekton dev v1beta1 Param     Param   a    em    td   td   em  Optional   em    td    tr    tbody    table   h3 id  tekton dev v1beta1 TaskRunOutputs  TaskRunOutputs   h3   div   p TaskRunOutputs holds the output values that this task was invoked with   p   p Deprecated  Unused  preserved only for backwards compatibility  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code resources  code  br    em   a href   tekton dev v1beta1 TaskResourceBinding     TaskResourceBinding   a    em    td   td   em  Optional   em    td    tr    tbody    table   h3 id  tekton dev v1beta1 TaskRunReason  TaskRunReason   code string  code  alias   h3   div   p TaskRunReason is 
an enum used to store all TaskRun reason for the Succeeded condition that are controlled by the TaskRun itself  Failure reasons that emerge from underlying resources are not included here  p    div   h3 id  tekton dev v1beta1 TaskRunResources  TaskRunResources   h3   p    em Appears on   em  a href   tekton dev v1beta1 TaskRunSpec  TaskRunSpec  a     p   div   p TaskRunResources allows a TaskRun to declare inputs and outputs TaskResourceBinding  p   p Deprecated  Unused  preserved only for backwards compatibility  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code inputs  code  br    em   a href   tekton dev v1beta1 TaskResourceBinding     TaskResourceBinding   a    em    td   td   p Inputs holds the inputs resources this task was invoked with  p    td    tr   tr   td   code outputs  code  br    em   a href   tekton dev v1beta1 TaskResourceBinding     TaskResourceBinding   a    em    td   td   p Outputs holds the inputs resources this task was invoked with  p    td    tr    tbody    table   h3 id  tekton dev v1beta1 TaskRunResult  TaskRunResult   h3   p    em Appears on   em  a href   tekton dev v1beta1 StepState  StepState  a    a href   tekton dev v1beta1 TaskRunStatusFields  TaskRunStatusFields  a     p   div   p TaskRunStepResult is a type alias of TaskRunResult  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name the given name  p    td    tr   tr   td   code type  code  br    em   a href   tekton dev v1beta1 ResultsType   ResultsType   a    em    td   td   em  Optional   em   p Type is the user specified type of the result  The possible type is currently  ldquo string rdquo  and will support  ldquo array rdquo  in following work   p    td    tr   tr   td   code value  code  br    em   a href   tekton dev v1beta1 ParamValue   ParamValue   a    em    td   td   p Value the given value of the result  p  
  td    tr    tbody    table   h3 id  tekton dev v1beta1 TaskRunSidecarOverride  TaskRunSidecarOverride   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineTaskRunSpec  PipelineTaskRunSpec  a    a href   tekton dev v1beta1 TaskRunSpec  TaskRunSpec  a     p   div   p TaskRunSidecarOverride is used to override the values of a Sidecar in the corresponding Task   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p The name of the Sidecar to override   p    td    tr   tr   td   code resources  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  resourcerequirements v1 core   Kubernetes core v1 ResourceRequirements   a    em    td   td   p The resource requirements to apply to the Sidecar   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 TaskRunSpec  TaskRunSpec   h3   p    em Appears on   em  a href   tekton dev v1beta1 TaskRun  TaskRun  a     p   div   p TaskRunSpec defines the desired state of TaskRun  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code debug  code  br    em   a href   tekton dev v1beta1 TaskRunDebug   TaskRunDebug   a    em    td   td   em  Optional   em    td    tr   tr   td   code params  code  br    em   a href   tekton dev v1beta1 Params   Params   a    em    td   td   em  Optional   em    td    tr   tr   td   code resources  code  br    em   a href   tekton dev v1beta1 TaskRunResources   TaskRunResources   a    em    td   td   em  Optional   em   p Deprecated  Unused  preserved only for backwards compatibility  p    td    tr   tr   td   code serviceAccountName  code  br    em  string   em    td   td   em  Optional   em    td    tr   tr   td   code taskRef  code  br    em   a href   tekton dev v1beta1 TaskRef   TaskRef   a    em    td   td   em  Optional   em   p no more than one of the TaskRef and TaskSpec may be specified 
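TaskRunDebug and TaskBreakpoints come together on a TaskRun's `debug` field; a minimal sketch (names hypothetical, and note debug is an alpha feature):

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: debug-run       # hypothetical name
spec:
  taskRef:
    name: some-task     # hypothetical Task to run
  debug:
    breakpoints:
      onFailure: enabled   # pause the TaskRun instead of exiting when a step fails
```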
  p    td    tr   tr   td   code taskSpec  code  br    em   a href   tekton dev v1beta1 TaskSpec   TaskSpec   a    em    td   td   em  Optional   em   p Specifying PipelineSpec can be disabled by setting  code disable inline spec  code  feature flag    p    td    tr   tr   td   code status  code  br    em   a href   tekton dev v1beta1 TaskRunSpecStatus   TaskRunSpecStatus   a    em    td   td   em  Optional   em   p Used for cancelling a TaskRun  and maybe more later on   p    td    tr   tr   td   code statusMessage  code  br    em   a href   tekton dev v1beta1 TaskRunSpecStatusMessage   TaskRunSpecStatusMessage   a    em    td   td   em  Optional   em   p Status message for cancellation   p    td    tr   tr   td   code retries  code  br    em  int   em    td   td   em  Optional   em   p Retries represents how many times this TaskRun should be retried in the event of Task failure   p    td    tr   tr   td   code timeout  code  br    em   a href  https   godoc org k8s io apimachinery pkg apis meta v1 Duration   Kubernetes meta v1 Duration   a    em    td   td   em  Optional   em   p Time after which one retry attempt times out  Defaults to 1 hour  Refer Go rsquo s ParseDuration documentation for expected format   a href  https   golang org pkg time  ParseDuration  https   golang org pkg time  ParseDuration  a   p    td    tr   tr   td   code podTemplate  code  br    em   a href   tekton dev unversioned Template   Template   a    em    td   td   p PodTemplate holds pod specific configuration  p    td    tr   tr   td   code workspaces  code  br    em   a href   tekton dev v1beta1 WorkspaceBinding     WorkspaceBinding   a    em    td   td   em  Optional   em   p Workspaces is a list of WorkspaceBindings from volumes to workspaces   p    td    tr   tr   td   code stepOverrides  code  br    em   a href   tekton dev v1beta1 TaskRunStepOverride     TaskRunStepOverride   a    em    td   td   em  Optional   em   p Overrides to apply to Steps in this TaskRun  If a field is 
specified in both a Step and a StepOverride  the value from the StepOverride will be used  This field is only supported when the alpha feature gate is enabled   p    td    tr   tr   td   code sidecarOverrides  code  br    em   a href   tekton dev v1beta1 TaskRunSidecarOverride     TaskRunSidecarOverride   a    em    td   td   em  Optional   em   p Overrides to apply to Sidecars in this TaskRun  If a field is specified in both a Sidecar and a SidecarOverride  the value from the SidecarOverride will be used  This field is only supported when the alpha feature gate is enabled   p    td    tr   tr   td   code computeResources  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  resourcerequirements v1 core   Kubernetes core v1 ResourceRequirements   a    em    td   td   p Compute resources to use for this TaskRun  p    td    tr    tbody    table   h3 id  tekton dev v1beta1 TaskRunSpecStatus  TaskRunSpecStatus   code string  code  alias   h3   p    em Appears on   em  a href   tekton dev v1beta1 TaskRunSpec  TaskRunSpec  a     p   div   p TaskRunSpecStatus defines the TaskRun spec status the user can provide  p    div   h3 id  tekton dev v1beta1 TaskRunSpecStatusMessage  TaskRunSpecStatusMessage   code string  code  alias   h3   p    em Appears on   em  a href   tekton dev v1beta1 TaskRunSpec  TaskRunSpec  a     p   div   p TaskRunSpecStatusMessage defines human readable status messages for the TaskRun   p    div   h3 id  tekton dev v1beta1 TaskRunStatus  TaskRunStatus   h3   p    em Appears on   em  a href   tekton dev v1beta1 TaskRun  TaskRun  a    a href   tekton dev v1beta1 PipelineRunTaskRunStatus  PipelineRunTaskRunStatus  a    a href   tekton dev v1beta1 TaskRunStatusFields  TaskRunStatusFields  a     p   div   p TaskRunStatus defines the observed state of TaskRun  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code Status  code  br    em   a href  https   pkg go dev 
knative dev pkg apis duck v1 Status   knative dev pkg apis duck v1 Status   a    em    td   td   p   Members of  code Status  code  are embedded into this type     p    td    tr   tr   td   code TaskRunStatusFields  code  br    em   a href   tekton dev v1beta1 TaskRunStatusFields   TaskRunStatusFields   a    em    td   td   p   Members of  code TaskRunStatusFields  code  are embedded into this type     p   p TaskRunStatusFields inlines the status fields   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 TaskRunStatusFields  TaskRunStatusFields   h3   p    em Appears on   em  a href   tekton dev v1beta1 TaskRunStatus  TaskRunStatus  a     p   div   p TaskRunStatusFields holds the fields of TaskRun rsquo s status   This is defined separately and inlined so that other types can readily consume these fields via duck typing   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code podName  code  br    em  string   em    td   td   p PodName is the name of the pod responsible for executing this task rsquo s steps   p    td    tr   tr   td   code startTime  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  time v1 meta   Kubernetes meta v1 Time   a    em    td   td   p StartTime is the time the build is actually started   p    td    tr   tr   td   code completionTime  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  time v1 meta   Kubernetes meta v1 Time   a    em    td   td   p CompletionTime is the time the build completed   p    td    tr   tr   td   code steps  code  br    em   a href   tekton dev v1beta1 StepState     StepState   a    em    td   td   em  Optional   em   p Steps describes the state of each build step container   p    td    tr   tr   td   code cloudEvents  code  br    em   a href   tekton dev v1beta1 CloudEventDelivery     CloudEventDelivery   a    em    td   td   em  Optional   em   p CloudEvents describe 
the state of each cloud event requested via a CloudEventResource   p   p Deprecated  Removed in v0 44 0   p    td    tr   tr   td   code retriesStatus  code  br    em   a href   tekton dev v1beta1 TaskRunStatus     TaskRunStatus   a    em    td   td   em  Optional   em   p RetriesStatus contains the history of TaskRunStatus in case of a retry in order to keep record of failures  All TaskRunStatus stored in RetriesStatus will have no date within the RetriesStatus as is redundant   p    td    tr   tr   td   code resourcesResult  code  br    em    github com tektoncd pipeline pkg result RunResult   em    td   td   em  Optional   em   p Results from Resources built during the TaskRun  This is tomb stoned along with the removal of pipelineResources Deprecated  this field is not populated and is preserved only for backwards compatibility  p    td    tr   tr   td   code taskResults  code  br    em   a href   tekton dev v1beta1 TaskRunResult     TaskRunResult   a    em    td   td   em  Optional   em   p TaskRunResults are the list of results written out by the task rsquo s containers  p    td    tr   tr   td   code sidecars  code  br    em   a href   tekton dev v1beta1 SidecarState     SidecarState   a    em    td   td   p The list has one entry per sidecar in the manifest  Each entry is represents the imageid of the corresponding sidecar   p    td    tr   tr   td   code taskSpec  code  br    em   a href   tekton dev v1beta1 TaskSpec   TaskSpec   a    em    td   td   p TaskSpec contains the Spec from the dereferenced Task definition used to instantiate this TaskRun   p    td    tr   tr   td   code provenance  code  br    em   a href   tekton dev v1beta1 Provenance   Provenance   a    em    td   td   em  Optional   em   p Provenance contains some key authenticated metadata about how a software artifact was built  what sources  what inputs outputs  etc     p    td    tr   tr   td   code spanContext  code  br    em  map string string   em    td   td   p SpanContext contains 
tracing span context fields  p    td    tr    tbody    table   h3 id  tekton dev v1beta1 TaskRunStepOverride  TaskRunStepOverride   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineTaskRunSpec  PipelineTaskRunSpec  a    a href   tekton dev v1beta1 TaskRunSpec  TaskRunSpec  a     p   div   p TaskRunStepOverride is used to override the values of a Step in the corresponding Task   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p The name of the Step to override   p    td    tr   tr   td   code resources  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  resourcerequirements v1 core   Kubernetes core v1 ResourceRequirements   a    em    td   td   p The resource requirements to apply to the Step   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 TaskSpec  TaskSpec   h3   p    em Appears on   em  a href   tekton dev v1beta1 ClusterTask  ClusterTask  a    a href   tekton dev v1beta1 Task  Task  a    a href   tekton dev v1beta1 EmbeddedTask  EmbeddedTask  a    a href   tekton dev v1beta1 TaskRunSpec  TaskRunSpec  a    a href   tekton dev v1beta1 TaskRunStatusFields  TaskRunStatusFields  a     p   div   p TaskSpec defines the desired state of Task   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code resources  code  br    em   a href   tekton dev v1beta1 TaskResources   TaskResources   a    em    td   td   em  Optional   em   p Resources is a list input and output resource to run the task Resources are represented in TaskRuns as bindings to instances of PipelineResources   p   p Deprecated  Unused  preserved only for backwards compatibility  p    td    tr   tr   td   code params  code  br    em   a href   tekton dev v1beta1 ParamSpecs   ParamSpecs   a    em    td   td   em  Optional   em   p Params is a list of input parameters required to run 
the task  Params must be supplied as inputs in TaskRuns unless they declare a default value   p    td    tr   tr   td   code displayName  code  br    em  string   em    td   td   em  Optional   em   p DisplayName is a user facing name of the task that may be used to populate a UI   p    td    tr   tr   td   code description  code  br    em  string   em    td   td   em  Optional   em   p Description is a user facing description of the task that may be used to populate a UI   p    td    tr   tr   td   code steps  code  br    em   a href   tekton dev v1beta1 Step     Step   a    em    td   td   p Steps are the steps of the build  each step is run sequentially with the source mounted into  workspace   p    td    tr   tr   td   code volumes  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  volume v1 core     Kubernetes core v1 Volume   a    em    td   td   p Volumes is a collection of volumes that are available to mount into the steps of the build   p    td    tr   tr   td   code stepTemplate  code  br    em   a href   tekton dev v1beta1 StepTemplate   StepTemplate   a    em    td   td   p StepTemplate can be used as the basis for all step containers within the Task  so that the steps inherit settings on the base container   p    td    tr   tr   td   code sidecars  code  br    em   a href   tekton dev v1beta1 Sidecar     Sidecar   a    em    td   td   p Sidecars are run alongside the Task rsquo s step containers  They begin before the steps start and end after the steps complete   p    td    tr   tr   td   code workspaces  code  br    em   a href   tekton dev v1beta1 WorkspaceDeclaration     WorkspaceDeclaration   a    em    td   td   p Workspaces are the volumes that this Task requires   p    td    tr   tr   td   code results  code  br    em   a href   tekton dev v1beta1 TaskResult     TaskResult   a    em    td   td   p Results are values that this Task can output  p    td    tr    tbody    table   h3 id  tekton dev v1beta1 
TimeoutFields  TimeoutFields   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineRunSpec  PipelineRunSpec  a     p   div   p TimeoutFields allows granular specification of pipeline  task  and finally timeouts  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code pipeline  code  br    em   a href  https   godoc org k8s io apimachinery pkg apis meta v1 Duration   Kubernetes meta v1 Duration   a    em    td   td   p Pipeline sets the maximum allowed duration for execution of the entire pipeline  The sum of individual timeouts for tasks and finally must not exceed this value   p    td    tr   tr   td   code tasks  code  br    em   a href  https   godoc org k8s io apimachinery pkg apis meta v1 Duration   Kubernetes meta v1 Duration   a    em    td   td   p Tasks sets the maximum allowed duration of this pipeline rsquo s tasks  p    td    tr   tr   td   code finally  code  br    em   a href  https   godoc org k8s io apimachinery pkg apis meta v1 Duration   Kubernetes meta v1 Duration   a    em    td   td   p Finally sets the maximum allowed duration of this pipeline rsquo s finally  p    td    tr    tbody    table   h3 id  tekton dev v1beta1 WhenExpression  WhenExpression   h3   p    em Appears on   em  a href   tekton dev v1beta1 ChildStatusReference  ChildStatusReference  a    a href   tekton dev v1beta1 PipelineRunRunStatus  PipelineRunRunStatus  a    a href   tekton dev v1beta1 PipelineRunTaskRunStatus  PipelineRunTaskRunStatus  a    a href   tekton dev v1beta1 SkippedTask  SkippedTask  a     p   div   p WhenExpression allows a PipelineTask to declare expressions to be evaluated before the Task is run to determine whether the Task should be executed or skipped  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code input  code  br    em  string   em    td   td   p Input is the string for guard checking which can be a static input or an output from a 
parent Task  p    td    tr   tr   td   code operator  code  br    em  k8s io apimachinery pkg selection Operator   em    td   td   p Operator that represents an Input rsquo s relationship to the values  p    td    tr   tr   td   code values  code  br    em    string   em    td   td   p Values is an array of strings  which is compared against the input  for guard checking It must be non empty  p    td    tr   tr   td   code cel  code  br    em  string   em    td   td   em  Optional   em   p CEL is a string of Common Language Expression  which can be used to conditionally execute the task based on the result of the expression evaluation More info about CEL syntax   a href  https   github com google cel spec blob master doc langdef md  https   github com google cel spec blob master doc langdef md  a   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 WhenExpressions  WhenExpressions   code   github com tektoncd pipeline pkg apis pipeline v1beta1 WhenExpression  code  alias   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineTask  PipelineTask  a    a href   tekton dev v1beta1 Step  Step  a     p   div   p WhenExpressions are used to specify whether a Task should be executed or skipped All of them need to evaluate to True for a guarded Task to be executed   p    div   h3 id  tekton dev v1beta1 WorkspaceBinding  WorkspaceBinding   h3   p    em Appears on   em  a href   tekton dev v1alpha1 RunSpec  RunSpec  a    a href   tekton dev v1beta1 CustomRunSpec  CustomRunSpec  a    a href   tekton dev v1beta1 PipelineRunSpec  PipelineRunSpec  a    a href   tekton dev v1beta1 TaskRunSpec  TaskRunSpec  a     p   div   p WorkspaceBinding maps a Task rsquo s declared workspace to a Volume   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name is the name of the workspace populated by the volume   p    td    tr   tr   td   code subPath  code  br    em  
string   em    td   td   em  Optional   em   p SubPath is optionally a directory on the volume which should be used for this binding  i e  the volume will be mounted at this sub directory    p    td    tr   tr   td   code volumeClaimTemplate  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  persistentvolumeclaim v1 core   Kubernetes core v1 PersistentVolumeClaim   a    em    td   td   em  Optional   em   p VolumeClaimTemplate is a template for a claim that will be created in the same namespace  The PipelineRun controller is responsible for creating a unique claim for each instance of PipelineRun   p    td    tr   tr   td   code persistentVolumeClaim  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  persistentvolumeclaimvolumesource v1 core   Kubernetes core v1 PersistentVolumeClaimVolumeSource   a    em    td   td   em  Optional   em   p PersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace  Either this OR EmptyDir can be used   p    td    tr   tr   td   code emptyDir  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  emptydirvolumesource v1 core   Kubernetes core v1 EmptyDirVolumeSource   a    em    td   td   em  Optional   em   p EmptyDir represents a temporary directory that shares a Task rsquo s lifetime  More info   a href  https   kubernetes io docs concepts storage volumes emptydir  https   kubernetes io docs concepts storage volumes emptydir  a  Either this OR PersistentVolumeClaim can be used   p    td    tr   tr   td   code configMap  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  configmapvolumesource v1 core   Kubernetes core v1 ConfigMapVolumeSource   a    em    td   td   em  Optional   em   p ConfigMap represents a configMap that should populate this workspace   p    td    tr   tr   td   code secret  code  br    em   a 
href  https   kubernetes io docs reference generated kubernetes api v1 24  secretvolumesource v1 core   Kubernetes core v1 SecretVolumeSource   a    em    td   td   em  Optional   em   p Secret represents a secret that should populate this workspace   p    td    tr   tr   td   code projected  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  projectedvolumesource v1 core   Kubernetes core v1 ProjectedVolumeSource   a    em    td   td   em  Optional   em   p Projected represents a projected volume that should populate this workspace   p    td    tr   tr   td   code csi  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  csivolumesource v1 core   Kubernetes core v1 CSIVolumeSource   a    em    td   td   em  Optional   em   p CSI  Container Storage Interface  represents ephemeral storage that is handled by certain external CSI drivers   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 WorkspaceDeclaration  WorkspaceDeclaration   h3   p    em Appears on   em  a href   tekton dev v1beta1 TaskSpec  TaskSpec  a     p   div   p WorkspaceDeclaration is a declaration of a volume that a Task requires   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name is the name by which you can bind the volume at runtime   p    td    tr   tr   td   code description  code  br    em  string   em    td   td   em  Optional   em   p Description is an optional human readable description of this volume   p    td    tr   tr   td   code mountPath  code  br    em  string   em    td   td   em  Optional   em   p MountPath overrides the directory that the volume will be made available at   p    td    tr   tr   td   code readOnly  code  br    em  bool   em    td   td   p ReadOnly dictates whether a mounted volume is writable  By default this field is false and so mounted volumes are writable   p    
td    tr   tr   td   code optional  code  br    em  bool   em    td   td   p Optional marks a Workspace as not being required in TaskRuns  By default this field is false and so declared workspaces are required   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 WorkspacePipelineTaskBinding  WorkspacePipelineTaskBinding   h3   p    em Appears on   em  a href   tekton dev v1beta1 PipelineTask  PipelineTask  a     p   div   p WorkspacePipelineTaskBinding describes how a workspace passed into the pipeline should be mapped to a task rsquo s declared workspace   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name is the name of the workspace as declared by the task  p    td    tr   tr   td   code workspace  code  br    em  string   em    td   td   em  Optional   em   p Workspace is the name of the workspace declared by the pipeline  p    td    tr   tr   td   code subPath  code  br    em  string   em    td   td   em  Optional   em   p SubPath is optionally a directory on the volume which should be used for this binding  i e  the volume will be mounted at this sub directory    p    td    tr    tbody    table   h3 id  tekton dev v1beta1 WorkspaceUsage  WorkspaceUsage   h3   p    em Appears on   em  a href   tekton dev v1beta1 Sidecar  Sidecar  a    a href   tekton dev v1beta1 Step  Step  a     p   div   p WorkspaceUsage is used by a Step or Sidecar to declare that it wants isolated access to a Workspace defined in a Task   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name is the name of the workspace this Step or Sidecar wants access to   p    td    tr   tr   td   code mountPath  code  br    em  string   em    td   td   p MountPath is the path that the workspace should be mounted to inside the Step or Sidecar  overriding any MountPath specified in 
the Task rsquo s WorkspaceDeclaration   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 CustomRunResult  CustomRunResult   h3   p    em Appears on   em  a href   tekton dev v1beta1 CustomRunStatusFields  CustomRunStatusFields  a     p   div   p CustomRunResult used to describe the results of a task  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code name  code  br    em  string   em    td   td   p Name the given name  p    td    tr   tr   td   code value  code  br    em  string   em    td   td   p Value the given value of the result  p    td    tr    tbody    table   h3 id  tekton dev v1beta1 CustomRunStatus  CustomRunStatus   h3   p    em Appears on   em  a href   tekton dev v1beta1 CustomRun  CustomRun  a    a href   tekton dev v1 PipelineRunRunStatus  PipelineRunRunStatus  a    a href   tekton dev v1beta1 PipelineRunRunStatus  PipelineRunRunStatus  a    a href   tekton dev v1beta1 CustomRunStatusFields  CustomRunStatusFields  a     p   div   p CustomRunStatus defines the observed state of CustomRun  p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code Status  code  br    em   a href  https   pkg go dev knative dev pkg apis duck v1 Status   knative dev pkg apis duck v1 Status   a    em    td   td   p   Members of  code Status  code  are embedded into this type     p    td    tr   tr   td   code CustomRunStatusFields  code  br    em   a href   tekton dev v1beta1 CustomRunStatusFields   CustomRunStatusFields   a    em    td   td   p   Members of  code CustomRunStatusFields  code  are embedded into this type     p   p CustomRunStatusFields inlines the status fields   p    td    tr    tbody    table   h3 id  tekton dev v1beta1 CustomRunStatusFields  CustomRunStatusFields   h3   p    em Appears on   em  a href   tekton dev v1beta1 CustomRunStatus  CustomRunStatus  a     p   div   p CustomRunStatusFields holds the fields of CustomRun rsquo s status   
This is defined separately and inlined so that other types can readily consume these fields via duck typing   p    div   table   thead   tr   th Field  th   th Description  th    tr    thead   tbody   tr   td   code startTime  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  time v1 meta   Kubernetes meta v1 Time   a    em    td   td   em  Optional   em   p StartTime is the time the build is actually started   p    td    tr   tr   td   code completionTime  code  br    em   a href  https   kubernetes io docs reference generated kubernetes api v1 24  time v1 meta   Kubernetes meta v1 Time   a    em    td   td   em  Optional   em   p CompletionTime is the time the build completed   p    td    tr   tr   td   code results  code  br    em   a href   tekton dev v1beta1 CustomRunResult     CustomRunResult   a    em    td   td   em  Optional   em   p Results reports any output result values to be consumed by later tasks in a pipeline   p    td    tr   tr   td   code retriesStatus  code  br    em   a href   tekton dev v1beta1 CustomRunStatus     CustomRunStatus   a    em    td   td   em  Optional   em   p RetriesStatus contains the history of CustomRunStatus  in case of a retry   p    td    tr   tr   td   code extraFields  code  br    em  k8s io apimachinery pkg runtime RawExtension   em    td   td   p ExtraFields holds arbitrary fields provided by the custom task controller   p    td    tr    tbody    table   hr    p  em  Generated with  code gen crd api reference docs  code      em   p "}
{"questions":"tekton Git Resolver Resolver Type weight 309 Simple Git Resolver","answers":"<!--\n---\nlinkTitle: \"Git Resolver\"\nweight: 309\n---\n-->\n\n# Simple Git Resolver\n\n## Resolver Type\n\nThis Resolver responds to type `git`.\n\n## Parameters\n\n| Param Name   | Description                                                                                                                                                                | Example Value                                               |\n|--------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------|\n| `url`        | URL of the repo to fetch and clone anonymously. Either `url`, or `repo` (with `org`) must be specified, but not both.                                                      | `https:\/\/github.com\/tektoncd\/catalog.git`                   |\n| `repo`       | The repository to find the resource in. Either `url`, or `repo` (with `org`) must be specified, but not both.                                                              | `pipeline`, `test-infra`                                    |\n| `org`        | The organization to find the repository in. Default can be set in [configuration](#configuration).                                                                         | `tektoncd`, `kubernetes`                                    |\n| `token`      | An optional secret name in the `PipelineRun` namespace to fetch the token from. Defaults to empty, meaning it will try to use the configuration from the global configmap. | `secret-name`, (empty)                                      |\n| `tokenKey`   | An optional key in the token secret name in the `PipelineRun` namespace to fetch the token from. Defaults to `token`.                                                      
| `token`                                                     |\n| `revision`   | Git revision to checkout a file from. This can be commit SHA, branch or tag.                                                                                               | `aeb957601cf41c012be462827053a21a420befca` `main` `v0.38.2` |\n| `pathInRepo` | Where to find the file in the repo.                                                                                                                                        | `task\/golang-build\/0.3\/golang-build.yaml`                   |\n| `serverURL`  | An optional server URL (that includes the https:\/\/ prefix) to connect for API operations                                                                                   | `https:\/\/github.mycompany.com`                              |\n| `scmType`    | An optional SCM type to use for API operations                                                                                                                             | `github`, `gitlab`, `gitea`                                 |\n\n## Requirements\n\n- A cluster running Tekton Pipeline v0.41.0 or later.\n- The [built-in remote resolvers installed](.\/install.md#installing-and-configuring-remote-task-and-pipeline-resolution).\n- The `enable-git-resolver` feature flag in the `resolvers-feature-flags` ConfigMap in the\n  `tekton-pipelines-resolvers` namespace set to `true`.\n- [Beta features](.\/additional-configs.md#beta-features) enabled.\n\n## Configuration\n\nThis resolver uses a `ConfigMap` for its settings. 
See\n[`..\/config\/resolvers\/git-resolver-config.yaml`](..\/config\/resolvers\/git-resolver-config.yaml)\nfor the name, namespace and defaults that the resolver ships with.\n\n### Options\n\n| Option Name                  | Description                                                                                                                                                   | Example Values                                                   |\n|------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------|\n| `default-revision`           | The default git revision to use if none is specified                                                                                                          | `main`                                                           |\n| `fetch-timeout`              | The maximum time any single git clone resolution may take. **Note**: a global maximum timeout of 1 minute is currently enforced on _all_ resolution requests. | `1m`, `2s`, `700ms`                                              |\n| `default-url`                | The default git repository URL to use for anonymous cloning if none is specified.                                                                             | `https:\/\/github.com\/tektoncd\/catalog.git`                        |\n| `scm-type`                   | The SCM provider type. Required if using the authenticated API with `org` and `repo`.                                                                         | `github`, `gitlab`, `gitea`, `bitbucketcloud`, `bitbucketserver` |\n| `server-url`                 | The SCM provider's base URL for use with the authenticated API. 
Not needed if using github.com, gitlab.com, or BitBucket Cloud                                | `api.internal-github.com`                                        |\n| `api-token-secret-name`      | The Kubernetes secret containing the SCM provider API token. Required if using the authenticated API with `org` and `repo`.                                   | `bot-token-secret`                                               |\n| `api-token-secret-key`       | The key within the token secret containing the actual secret. Required if using the authenticated API with `org` and `repo`.                                  | `oauth`, `token`                                                 |\n| `api-token-secret-namespace` | The namespace containing the token secret, if not `default`.                                                                                                  | `other-namespace`                                                |\n| `default-org`                | The default organization to look for repositories under when using the authenticated API, if not specified in the resolver parameters. Optional.              | `tektoncd`, `kubernetes`                                         |\n\n## Usage\n\nThe `git` resolver has two modes: cloning a repository anonymously, or fetching individual files via an SCM provider's API using an API token.\n\n### Anonymous Cloning\n\nAnonymous cloning is supported only for public repositories. 
This mode clones the full git repo.\n\n#### Task Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: git-clone-demo-tr\nspec:\n  taskRef:\n    resolver: git\n    params:\n    - name: url\n      value: https:\/\/github.com\/tektoncd\/catalog.git\n    - name: revision\n      value: main\n    - name: pathInRepo\n      value: task\/git-clone\/0.6\/git-clone.yaml\n```\n\n#### Pipeline Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: git-clone-demo-pr\nspec:\n  pipelineRef:\n    resolver: git\n    params:\n    - name: url\n      value: https:\/\/github.com\/tektoncd\/catalog.git\n    - name: revision\n      value: main\n    - name: pathInRepo\n      value: pipeline\/simple\/0.1\/simple.yaml\n  params:\n  - name: name\n    value: Ranni\n```\n\n### Authenticated API\n\nThe authenticated API supports private repositories and fetches only the file at the specified path rather than doing a full clone.\n\nWhen using the authenticated API, [providers with implementations in `go-scm`](https:\/\/github.com\/jenkins-x\/go-scm\/tree\/main\/scm\/driver) can be used.\nNote that not all `go-scm` implementations have been tested with the `git` resolver, but it is known to work with:\n  * github.com and GitHub Enterprise\n  * gitlab.com and self-hosted GitLab\n  * Gitea\n  * BitBucket Server\n  * BitBucket Cloud\n\n#### Task Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: git-api-demo-tr\nspec:\n  taskRef:\n    resolver: git\n    params:\n    - name: org\n      value: tektoncd\n    - name: repo\n      value: catalog\n    - name: revision\n      value: main\n    - name: pathInRepo\n      value: task\/git-clone\/0.6\/git-clone.yaml\n```\n\n#### Task Resolution with a custom token to a custom SCM provider\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: git-api-demo-tr\nspec:\n  taskRef:\n    resolver: git\n    params:\n    - name: 
org\n      value: tektoncd\n    - name: repo\n      value: catalog\n    - name: revision\n      value: main\n    - name: pathInRepo\n      value: task\/git-clone\/0.6\/git-clone.yaml\n    # my-secret-token should be created in the namespace where the\n    # pipelinerun is created and contain a GitHub personal access\n    # token in the token key of the secret.\n    - name: token\n      value: my-secret-token\n    - name: tokenKey\n      value: token\n    - name: scmType\n      value: github\n    - name: serverURL\n      value: https:\/\/ghe.mycompany.com\n```\n\n#### Pipeline Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: git-api-demo-pr\nspec:\n  pipelineRef:\n    resolver: git\n    params:\n    - name: org\n      value: tektoncd\n    - name: repo\n      value: catalog\n    - name: revision\n      value: main\n    - name: pathInRepo\n      value: pipeline\/simple\/0.1\/simple.yaml\n  params:\n  - name: name\n    value: Ranni\n```\n\n### Specifying Configuration for Multiple Git Providers\n\nYou can specify configurations for multiple providers, and even multiple configurations for the same provider, for use in\ndifferent Tekton resources. First, add each configuration to the ConfigMap with a unique identifier as a key prefix.\nTo use a specific configuration in a Tekton resource, pass that identifier to the resolver in an extra param named\n`configKey`. If no `configKey` param is passed, `default` is used. The default configuration for the git resolver can be\nspecified in the ConfigMap either by omitting the identifier prefix or by using the identifier `default`.\n\n**Note**: `configKey` must not contain `.` when specifying configurations in the ConfigMap.\n\n### Example Configmap\n\nMultiple configurations can be specified in the `git-resolver-config` ConfigMap like this. 
All keys mentioned above are supported.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: git-resolver-config\n  namespace: tekton-pipelines-resolvers\n  labels:\n    app.kubernetes.io\/component: resolvers\n    app.kubernetes.io\/instance: default\n    app.kubernetes.io\/part-of: tekton-pipelines\ndata:\n  # configuration 1, the default: used if no configKey param is provided or its value is default\n  fetch-timeout: \"1m\"\n  default-url: \"https:\/\/github.com\/tektoncd\/catalog.git\"\n  default-revision: \"main\"\n  scm-type: \"github\"\n  server-url: \"\"\n  api-token-secret-name: \"\"\n  api-token-secret-key: \"\"\n  api-token-secret-namespace: \"default\"\n  default-org: \"\"\n\n  # configuration 2, used if the configKey param is passed with value test1\n  test1.fetch-timeout: \"5m\"\n  test1.default-url: \"\"\n  test1.default-revision: \"stable\"\n  test1.scm-type: \"github\"\n  test1.server-url: \"api.internal-github.com\"\n  test1.api-token-secret-name: \"test1-secret\"\n  test1.api-token-secret-key: \"token\"\n  test1.api-token-secret-namespace: \"test1\"\n  test1.default-org: \"tektoncd\"\n\n  # configuration 3, used if the configKey param is passed with value test2\n  test2.fetch-timeout: \"10m\"\n  test2.default-url: \"\"\n  test2.default-revision: \"stable\"\n  test2.scm-type: \"gitlab\"\n  test2.server-url: \"api.internal-gitlab.com\"\n  test2.api-token-secret-name: \"test2-secret\"\n  test2.api-token-secret-key: \"pat\"\n  test2.api-token-secret-namespace: \"test2\"\n  test2.default-org: \"tektoncd-infra\"\n```\n\n#### Task Resolution\n\nA specific configuration from the ConfigMap can be selected by passing the parameter `configKey` with a value\nmatching one of the configuration keys used in the ConfigMap.\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: git-api-demo-tr\nspec:\n  taskRef:\n    resolver: git\n    params:\n    - name: org\n      value: tektoncd\n    - name: repo\n      value: 
catalog\n    - name: revision\n      value: main\n    - name: pathInRepo\n      value: task\/git-clone\/0.6\/git-clone.yaml\n    - name: configKey\n      value: test1\n```\n\n#### Pipeline Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: git-api-demo-pr\nspec:\n  pipelineRef:\n    resolver: git\n    params:\n    - name: org\n      value: tektoncd\n    - name: repo\n      value: catalog\n    - name: revision\n      value: main\n    - name: pathInRepo\n      value: pipeline\/simple\/0.1\/simple.yaml\n    - name: configKey\n      value: test2\n  params:\n  - name: name\n    value: Ranni\n```\n\n## `ResolutionRequest` Status\n\nThe `ResolutionRequest.Status.RefSource` field captures the source where the remote resource came from. It includes three subfields: `url`, `digest`, and `entrypoint`.\n- `url`\n  - If anonymous cloning is used, this is the user-provided value for the `url` param, in the [SPDX download format](https:\/\/spdx.github.io\/spdx-spec\/package-information\/#77-package-download-location-field).\n  - If the SCM API is used, this is the clone URL of the repo as returned by the SCM provider's repository service, in the [SPDX download format](https:\/\/spdx.github.io\/spdx-spec\/package-information\/#77-package-download-location-field).\n- `digest`\n  - The algorithm name is currently fixed to \"sha1\", but it is subject to change to \"sha256\" once Git transitions to SHA-256. 
See https:\/\/git-scm.com\/docs\/hash-function-transition for more details.\n  - The value is the actual commit SHA at the moment of resolving the resource, even if the user provides a tag or branch name for the `revision` param.\n- `entrypoint`: the user-provided value for the `pathInRepo` param.\n\nExample:\n- Pipeline Resolution\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: git-demo\nspec:\n  pipelineRef:\n    resolver: git\n    params:\n    - name: url\n      value: https:\/\/github.com\/<username>\/<reponame>.git\n    - name: revision\n      value: main\n    - name: pathInRepo\n      value: pipeline.yaml\n```\n\n- `ResolutionRequest`\n```yaml\napiVersion: resolution.tekton.dev\/v1alpha1\nkind: ResolutionRequest\nmetadata:\n  labels:\n    resolution.tekton.dev\/type: git\n  ...\nspec:\n  params:\n    pathInRepo: pipeline.yaml\n    revision: main\n    url: https:\/\/github.com\/<username>\/<reponame>.git\nstatus:\n  refSource:\n    uri: git+https:\/\/github.com\/<username>\/<reponame>.git\n    digest:\n      sha1: <The latest commit sha on main at the moment of resolving>\n    entrypoint: pipeline.yaml\n  data: a2luZDogUGxxxx...\n```\n\n---\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).","site":"tekton"}
{"questions":"tekton weight 106 High Availability Support HA Support for Tekton Pipeline Controllers","answers":"<!--\n---\nlinkTitle: \"High Availability Support\"\nweight: 106\n---\n-->\n\n# HA Support for Tekton Pipeline Controllers\n\n  - [Overview](#overview)\n  - [Controller HA](#controller-ha)\n    - [Configuring Controller Replicas](#configuring-controller-replicas)\n    - [Configuring Leader Election](#configuring-leader-election)\n    - [Disabling Controller HA](#disabling-controller-ha)\n  - [Webhook HA](#webhook-ha)\n    - [Configuring Webhook Replicas](#configuring-webhook-replicas)\n    - [Avoiding Disruptions](#avoiding-disruptions)\n\n## Overview\n\nThis document is aimed at helping Cluster Admins when configuring High Availability (HA) support for the Tekton Pipeline [Controller](.\/..\/config\/controller.yaml) and [Webhook](.\/..\/config\/webhook.yaml) components. HA support allows components to remain operational when a disruption occurs, such as nodes being drained for upgrades.\n\n## Controller HA\n\nFor the Controller, HA is achieved by following an active\/active model, where all replicas of the Controller can receive and process work items. In this HA approach the workqueue is distributed across buckets, where each replica owns a subset of those buckets and can process the load if the given replica is the leader of that bucket.\n\nBy default, only one Controller replica is configured, to reduce resource usage. This effectively disables HA for the Controller by default.\n\n### Configuring Controller Replicas\n\nIn order to achieve HA for the Controller, the number of replicas for the Controller should be greater than one. 
This allows other instances to take over in case of any disruption on the current active controller.\n\nYou can modify the number of replicas in the [Controller deployment](.\/..\/config\/controller.yaml) under `spec.replicas`, or apply an update to a running deployment:\n\n```sh\nkubectl -n tekton-pipelines scale deployment tekton-pipelines-controller --replicas=3\n```\n\n### Configuring Leader Election\n\nLeader election can be configured in [config-leader-election-controller.yaml](.\/..\/config\/config-leader-election-controller.yaml). The ConfigMap defines the following parameters:\n\n| Parameter            | Default  |\n| -------------------- | -------- |\n| `data.buckets`       | 1        |\n| `data.leaseDuration` | 15s      |\n| `data.renewDeadline` | 10s      |\n| `data.retryPeriod`   | 2s       |\n\n_Note_: The maximum value of `data.buckets` at this time is 10.\n\n### Disabling Controller HA\n\nIf HA is not required, you can disable it by scaling the deployment back to one replica. You can also modify the [controller deployment](.\/..\/config\/controller.yaml) by adding the `disable-ha` flag to the `tekton-pipelines-controller` container. For example:\n\n```yaml\nspec:\n  serviceAccountName: tekton-pipelines-controller\n  containers:\n    - name: tekton-pipelines-controller\n      # ...\n      args: [\n          # Other flags defined here...\n          \"-disable-ha=true\",\n        ]\n```\n\n**Note:** If you set `-disable-ha=false` and run multiple replicas of the Controller, each replica will process work items separately, which will lead to unwanted behavior when creating resources (e.g., `TaskRuns`).\n\nIn general, setting `-disable-ha=false` is not recommended. 
Instead, to disable HA, simply run one replica of the Controller deployment.\n\n## Webhook HA\n\nThe Webhook deployment is stateless, which means it can more easily be configured for HA, and can even autoscale replicas in response to load.\n\nBy default, only one Webhook replica is configured, to reduce resource usage. This effectively disables HA for the Webhook by default.\n\n### Configuring Webhook Replicas\n\nIn order to achieve HA for the Webhook deployment, you can modify the `replicas` number in the [Webhook deployment](.\/..\/config\/webhook.yaml) under `spec.replicas`, or apply an update to a running deployment:\n\n```sh\nkubectl -n tekton-pipelines scale deployment tekton-pipelines-webhook --replicas=3\n```\n\nYou can also modify the [HorizontalPodAutoscaler](.\/..\/config\/webhook-hpa.yaml) to set a minimum number of replicas:\n\n```yaml\napiVersion: autoscaling\/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n  name: tekton-pipelines-webhook\n# ...\nspec:\n  minReplicas: 1\n```\n<!-- wokeignore:rule=master -->\nBy default, the Webhook deployment does _not_ use the `cluster-autoscaler.kubernetes.io\/safe-to-evict` annotation to block a [Cluster Autoscaler](https:\/\/github.com\/kubernetes\/autoscaler\/tree\/master\/cluster-autoscaler) from scaling down the node that's running the only replica of the deployment.\nThis means that during node drains, the Webhook might be temporarily unavailable, during which time Tekton resources can't be created, updated, or deleted.\nTo avoid this, you can set the `safe-to-evict` annotation to `false` to block node drains during autoscaling, or, better yet, configure multiple replicas of the Webhook deployment.\n\n### Avoiding Disruptions\n\nTo avoid the Webhook Service becoming unavailable during node unavailability (e.g., during node upgrades), you can ensure that a minimum number of Webhook replicas are available at all times by defining a 
[`PodDisruptionBudget`](https:\/\/kubernetes.io\/docs\/tasks\/run-application\/configure-pdb\/) which sets a `minAvailable` greater than zero:\n\n```yaml\napiVersion: policy\/v1beta1\nkind: PodDisruptionBudget\nmetadata:\n  name: tekton-pipelines-webhook\n  namespace: tekton-pipelines\n  labels:\n    app.kubernetes.io\/name: webhook\n    app.kubernetes.io\/component: webhook\n    app.kubernetes.io\/instance: default\n    app.kubernetes.io\/part-of: tekton-pipelines\n    # ...\nspec:\n  minAvailable: 1\n  selector:\n    matchLabels:\n      app.kubernetes.io\/name: webhook\n      app.kubernetes.io\/component: webhook\n      app.kubernetes.io\/instance: default\n      app.kubernetes.io\/part-of: tekton-pipelines\n```\n\nWebhook replicas are configured to avoid being scheduled onto the same node by default, so that a single node disruption doesn't make all Webhook replicas unavailable.","site":"tekton"}
{"questions":"tekton Pipelines weight 203 Pipelines","answers":"<!--\n---\nlinkTitle: \"Pipelines\"\nweight: 203\n---\n-->\n\n# Pipelines\n\n- [Pipelines](#pipelines)\n  - [Overview](#overview)\n  - [Configuring a `Pipeline`](#configuring-a-pipeline)\n  - [Specifying `Workspaces`](#specifying-workspaces)\n  - [Specifying `Parameters`](#specifying-parameters)\n  - [Adding `Tasks` to the `Pipeline`](#adding-tasks-to-the-pipeline)\n    - [Specifying Display Name](#specifying-displayname-in-pipelinetasks)\n    - [Specifying Remote Tasks](#specifying-remote-tasks)\n    - [Specifying `Pipelines` in `PipelineTasks`](#specifying-pipelines-in-pipelinetasks)\n    - [Specifying `Parameters` in `PipelineTasks`](#specifying-parameters-in-pipelinetasks)\n    - [Specifying `Matrix` in `PipelineTasks`](#specifying-matrix-in-pipelinetasks)\n    - [Specifying `Workspaces` in `PipelineTasks`](#specifying-workspaces-in-pipelinetasks)\n    - [Tekton Bundles](#tekton-bundles)\n    - [Using the `runAfter` field](#using-the-runafter-field)\n    - [Using the `retries` field](#using-the-retries-field)\n    - [Using the `onError` field](#using-the-onerror-field)\n    - [Produce results with `OnError`](#produce-results-with-onerror)\n    - [Guard `Task` execution using `when` expressions](#guard-task-execution-using-when-expressions)\n      - [Guarding a `Task` and its dependent `Tasks`](#guarding-a-task-and-its-dependent-tasks)\n        - [Cascade `when` expressions to the specific dependent `Tasks`](#cascade-when-expressions-to-the-specific-dependent-tasks)\n        - [Compose using Pipelines in Pipelines](#compose-using-pipelines-in-pipelines)\n      - [Guarding a `Task` only](#guarding-a-task-only)\n    - [Configuring the failure timeout](#configuring-the-failure-timeout)\n  - [Using variable substitution](#using-variable-substitution)\n    - [Using the `retries` and `retry-count` variable substitutions](#using-the-retries-and-retry-count-variable-substitutions)\n  - [Using 
`Results`](#using-results)\n    - [Passing one Task's `Results` into the `Parameters` or `when` expressions of another](#passing-one-tasks-results-into-the-parameters-or-when-expressions-of-another)\n    - [Emitting `Results` from a `Pipeline`](#emitting-results-from-a-pipeline)\n  - [Configuring the `Task` execution order](#configuring-the-task-execution-order)\n  - [Adding a description](#adding-a-description)\n  - [Adding `Finally` to the `Pipeline`](#adding-finally-to-the-pipeline)\n    - [Specifying Display Name](#specifying-displayname-in-finally-tasks)\n    - [Specifying `Workspaces` in `finally` tasks](#specifying-workspaces-in-finally-tasks)\n    - [Specifying `Parameters` in `finally` tasks](#specifying-parameters-in-finally-tasks)\n    - [Specifying `matrix` in `finally` tasks](#specifying-matrix-in-finally-tasks)\n    - [Consuming `Task` execution results in `finally`](#consuming-task-execution-results-in-finally)\n    - [Consuming `Pipeline` result with `finally`](#consuming-pipeline-result-with-finally)\n    - [`PipelineRun` Status with `finally`](#pipelinerun-status-with-finally)\n    - [Using Execution `Status` of `pipelineTask`](#using-execution-status-of-pipelinetask)\n    - [Using Aggregate Execution `Status` of All `Tasks`](#using-aggregate-execution-status-of-all-tasks)\n    - [Guard `finally` `Task` execution using `when` expressions](#guard-finally-task-execution-using-when-expressions)\n      - [`when` expressions using `Parameters` in `finally` `Tasks`](#when-expressions-using-parameters-in-finally-tasks)\n      - [`when` expressions using `Results` in `finally` `Tasks`](#when-expressions-using-results-in-finally-tasks)\n      - [`when` expressions using `Execution Status` of `PipelineTask` in `finally` `tasks`](#when-expressions-using-execution-status-of-pipelinetask-in-finally-tasks)\n      - [`when` expressions using `Aggregate Execution Status` of `Tasks` in `finally` 
`tasks`](#when-expressions-using-aggregate-execution-status-of-tasks-in-finally-tasks)\n    - [Known Limitations](#known-limitations)\n      - [Cannot configure the `finally` task execution order](#cannot-configure-the-finally-task-execution-order)\n  - [Using Custom Tasks](#using-custom-tasks)\n    - [Specifying the target Custom Task](#specifying-the-target-custom-task)\n    - [Specifying a Custom Task Spec in-line (or embedded)](#specifying-a-custom-task-spec-in-line-or-embedded)\n    - [Specifying parameters](#specifying-parameters-1)\n    - [Specifying matrix](#specifying-matrix)\n    - [Specifying workspaces](#specifying-workspaces-1)\n    - [Using `Results`](#using-results-1)\n    - [Specifying `Timeout`](#specifying-timeout)\n    - [Specifying `Retries`](#specifying-retries)\n    - [Known Custom Tasks](#known-custom-tasks)\n  - [Code examples](#code-examples)\n\n## Overview\n\nA `Pipeline` is a collection of `Tasks` that you define and arrange in a specific order\nof execution as part of your continuous integration flow. Each `Task` in a `Pipeline`\nexecutes as a `Pod` on your Kubernetes cluster. You can configure various execution\nconditions to fit your business needs.\n\n## Configuring a `Pipeline`\n\nA `Pipeline` definition supports the following fields:\n\n- Required:\n  - [`apiVersion`][kubernetes-overview] - Specifies the API version, for example\n    `tekton.dev\/v1beta1`.\n  - [`kind`][kubernetes-overview] - Identifies this resource object as a `Pipeline` object.\n  - [`metadata`][kubernetes-overview] - Specifies metadata that uniquely identifies the\n    `Pipeline` object. For example, a `name`.\n  - [`spec`][kubernetes-overview] - Specifies the configuration information for\n    this `Pipeline` object. 
This must include:\n      - [`tasks`](#adding-tasks-to-the-pipeline) - Specifies the `Tasks` that comprise the `Pipeline`\n        and the details of their execution.\n- Optional:\n  - [`params`](#specifying-parameters) - Specifies the `Parameters` that the `Pipeline` requires.\n  - [`workspaces`](#specifying-workspaces) - Specifies a set of Workspaces that the `Pipeline` requires.\n  - [`tasks`](#adding-tasks-to-the-pipeline):\n      - [`name`](#adding-tasks-to-the-pipeline) - the name of this `Task` within the context of this `Pipeline`.\n      - [`displayName`](#specifying-displayname-in-pipelinetasks) - a user-facing name of this `Task` within the context of this `Pipeline`.\n      - [`description`](#adding-tasks-to-the-pipeline) - a description of this `Task` within the context of this `Pipeline`.\n      - [`taskRef`](#adding-tasks-to-the-pipeline) - a reference to a `Task` definition.\n      - [`taskSpec`](#adding-tasks-to-the-pipeline) - a specification of a `Task`.\n      - [`runAfter`](#using-the-runafter-field) - Indicates that a `Task` should execute after one or more other\n        `Tasks` without output linking.\n      - [`retries`](#using-the-retries-field) - Specifies the number of times to retry the execution of a `Task` after\n        a failure. 
Does not apply to execution cancellations.\n      - [`when`](#guard-finally-task-execution-using-when-expressions) - Specifies `when` expressions that guard\n        the execution of a `Task`; allow execution only when all `when` expressions evaluate to true.\n      - [`timeout`](#configuring-the-failure-timeout) - Specifies the timeout before a `Task` fails.\n      - [`params`](#specifying-parameters-in-pipelinetasks) - Specifies the `Parameters` that a `Task` requires.\n      - [`workspaces`](#specifying-workspaces-in-pipelinetasks) - Specifies the `Workspaces` that a `Task` requires.\n      - [`matrix`](#specifying-matrix-in-pipelinetasks) - Specifies the `Parameters` used to fan out a `Task` into\n        multiple `TaskRuns` or `Runs`.\n  - [`results`](#emitting-results-from-a-pipeline) - Specifies the location to which the `Pipeline` emits its execution\n    results.\n  - [`displayName`](#specifying-a-display-name) - is a user-facing name of the pipeline that may be used to populate a UI.\n  - [`description`](#adding-a-description) - Holds an informative description of the `Pipeline` object.\n  - [`finally`](#adding-finally-to-the-pipeline) - Specifies one or more `Tasks` to be executed in parallel after\n    all other tasks have completed.\n    - [`name`](#adding-finally-to-the-pipeline) - the name of this `Task` within the context of this `Pipeline`.\n    - [`displayName`](#specifying-displayname-in-finally-tasks) - a user-facing name of this `Task` within the context of this `Pipeline`.\n    - [`description`](#adding-finally-to-the-pipeline) - a description of this `Task` within the context of this `Pipeline`.\n    - [`taskRef`](#adding-finally-to-the-pipeline) - a reference to a `Task` definition.\n    - [`taskSpec`](#adding-finally-to-the-pipeline) - a specification of a `Task`.\n    - [`retries`](#using-the-retries-field) - Specifies the number of times to retry the execution of a `Task` after\n      a failure. 
Does not apply to execution cancellations.\n    - [`when`](#guard-finally-task-execution-using-when-expressions) - Specifies `when` expressions that guard\n      the execution of a `Task`; allow execution only when all `when` expressions evaluate to true.\n    - [`timeout`](#configuring-the-failure-timeout) - Specifies the timeout before a `Task` fails.\n    - [`params`](#specifying-parameters-in-finally-tasks) - Specifies the `Parameters` that a `Task` requires.\n    - [`workspaces`](#specifying-workspaces-in-finally-tasks) - Specifies the `Workspaces` that a `Task` requires.\n    - [`matrix`](#specifying-matrix-in-finally-tasks) - Specifies the `Parameters` used to fan out a `Task` into\n      multiple `TaskRuns` or `Runs`.\n\n[kubernetes-overview]:\n  https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/kubernetes-objects\/#required-fields\n\n## Specifying `Workspaces`\n\n`Workspaces` allow you to specify one or more volumes that each `Task` in the `Pipeline`\nrequires during execution. 
You specify one or more `Workspaces` in the `workspaces` field.\nFor example:\n\n```yaml\nspec:\n  workspaces:\n    - name: pipeline-ws1 # The name of the workspace in the Pipeline\n  tasks:\n    - name: use-ws-from-pipeline\n      taskRef:\n        name: gen-code # gen-code expects a workspace with name \"output\"\n      workspaces:\n        - name: output\n          workspace: pipeline-ws1\n    - name: use-ws-again\n      taskRef:\n        name: commit # commit expects a workspace with name \"src\"\n      runAfter:\n        - use-ws-from-pipeline # important: use-ws-from-pipeline writes to the workspace first\n      workspaces:\n        - name: src\n          workspace: pipeline-ws1\n```\n\nFor simplicity you can also map the name of the `Workspace` in `PipelineTask` to match with\nthe `Workspace` from the `Pipeline`.\nFor example:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: pipeline\nspec:\n  workspaces:\n    - name: source\n  tasks:\n    - name: gen-code\n      taskRef:\n        name: gen-code # gen-code expects a Workspace named \"source\"\n      workspaces:\n        - name: source # <- mapping workspace name\n    - name: commit\n      taskRef:\n        name: commit # commit expects a Workspace named \"source\"\n      workspaces:\n        - name: source # <- mapping workspace name\n      runAfter:\n        - gen-code\n```\n\nFor more information, see:\n- [Using `Workspaces` in `Pipelines`](workspaces.md#using-workspaces-in-pipelines)\n- The [`Workspaces` in a `PipelineRun`](..\/examples\/v1\/pipelineruns\/workspaces.yaml) code example\n- The [variables available in a `PipelineRun`](variables.md#variables-available-in-a-pipeline), including `workspaces.<name>.bound`.\n- [Mapping `Workspaces`](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0108-mapping-workspaces.md)\n\n## Specifying `Parameters`\n\n(See also [Specifying Parameters in Tasks](tasks.md#specifying-parameters))\n\nYou can specify 
global parameters, such as compilation flags or artifact names, that you want to supply\nto the `Pipeline` at execution time. `Parameters` are passed to the `Pipeline` from its corresponding\n`PipelineRun` and can replace template values specified within each `Task` in the `Pipeline`.\n\nParameter names:\n- Must only contain alphanumeric characters, hyphens (`-`), and underscores (`_`).\n- Must begin with a letter or an underscore (`_`).\n\nFor example, `fooIs-Bar_` is a valid parameter name, but `barIsBa$` or `0banana` are not.\n\nEach declared parameter has a `type` field, which can be set to either `array` or `string`.\n`array` is useful in cases where the number of compilation flags being supplied to the `Pipeline`\nvaries throughout its execution. If no value is specified, the `type` field defaults to `string`.\nWhen the actual parameter value is supplied, its parsed type is validated against the `type` field.\nThe `description` and `default` fields for a `Parameter` are optional.\n\nThe following example illustrates the use of `Parameters` in a `Pipeline`.\n\nThe following `Pipeline` declares two input parameters:\n\n- `context` which passes its value (a string) to the `Task` to set the value of the `pathToContext` parameter within the `Task`.\n- `flags` which passes its value (an array) to the `Task` to set the value of\n  the `flags` parameter within the `Task`. 
The `flags` parameter within the\n`Task` **must** also be an array.\nIf you specify a value for the `default` field and invoke this `Pipeline` in a `PipelineRun`\nwithout specifying a value for `context`, that value will be used.\n\n**Note:** Input parameter values can be used as variables throughout the `Pipeline`\nby using [variable substitution](variables.md#variables-available-in-a-pipeline).\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: pipeline-with-parameters\nspec:\n  params:\n    - name: context\n      type: string\n      description: Path to context\n      default: \/some\/where\/or\/other\n    - name: flags\n      type: array\n      description: List of flags\n  tasks:\n    - name: build-skaffold-web\n      taskRef:\n        name: build-push\n      params:\n        - name: pathToDockerFile\n          value: Dockerfile\n        - name: pathToContext\n          value: \"$(params.context)\"\n        - name: flags\n          value: [\"$(params.flags[*])\"]\n```\n\nThe following `PipelineRun` supplies a value for `context`:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: pipelinerun-with-parameters\nspec:\n  pipelineRef:\n    name: pipeline-with-parameters\n  params:\n    - name: \"context\"\n      value: \"\/workspace\/examples\/microservices\/leeroy-web\"\n    - name: \"flags\"\n      value:\n        - \"foo\"\n        - \"bar\"\n```\n\n#### Param enum\n> :seedling: **`enum` is an [alpha](additional-configs.md#alpha-features) feature.** The `enable-param-enum` feature flag must be set to `\"true\"` to enable this feature.\n\nParameter declarations can include `enum`, which is a predefined set of valid values that can be accepted by the `Pipeline` `Param`. If a `Param` has both `enum` and a default value, the default value must be in the `enum` set. 
For example, the valid\/allowed values for `Param` \"message\" are bounded to `v1` and `v2`:\n\n``` yaml\napiVersion: tekton.dev\/v1\nkind: Pipeline\nmetadata:\n  name: pipeline-param-enum\nspec:\n  params:\n  - name: message\n    enum: [\"v1\", \"v2\"]\n    default: \"v1\"\n  tasks:\n  - name: task1\n    params:\n      - name: message\n        value: $(params.message)\n    taskSpec:\n      params:\n        - name: message\n      steps:\n        - name: build\n          image: bash:3.2\n          script: |\n            echo \"$(params.message)\"\n```\n\nIf the `Param` value passed in by `PipelineRun` is **NOT** in the predefined `enum` list, the `PipelineRun` will fail with reason `InvalidParamValue`.\n\nIf a `PipelineTask` references a `Task` with `enum`, the `enums` specified in the Pipeline `spec.params` (pipeline-level `enum`) must be\na **subset** of the `enums` specified in the referenced `Task` (task-level `enum`). An empty pipeline-level `enum` is invalid\nin this scenario since an empty `enum` set indicates a \"universal set\" which allows all possible values. The same rules apply to `Pipelines` with embedded `Tasks`.\n\nIn the below example, the referenced `Task` accepts `v1` and `v2` as valid values, while the `Pipeline` further restricts the valid values to `v1`.\n\n``` yaml\napiVersion: tekton.dev\/v1\nkind: Task\nmetadata:\n  name: param-enum-demo\nspec:\n  params:\n  - name: message\n    type: string\n    enum: [\"v1\", \"v2\"]\n  steps:\n  - name: build\n    image: bash:latest\n    script: |\n      echo \"$(params.message)\"\n```\n\n``` yaml\napiVersion: tekton.dev\/v1\nkind: Pipeline\nmetadata:\n  name: pipeline-param-enum\nspec:\n  params:\n  - name: message\n    enum: [\"v1\"]  # note that an empty enum set is invalid\n  tasks:\n  - name: task1\n    params:\n      - name: message\n        value: $(params.message)\n    taskRef:\n      name: param-enum-demo\n```\n\nNote that this subset restriction only applies to the task-level `params` with a **direct single** reference to pipeline-level `params`. 
If a task-level `param` references multiple pipeline-level `params`, the subset validation is not applied.\n\n``` yaml\napiVersion: tekton.dev\/v1\nkind: Pipeline\n...\nspec:\n  params:\n  - name: message1\n    enum: [\"v1\"]\n  - name: message2\n    enum: [\"v2\"]\n  tasks:\n  - name: task1\n    params:\n      - name: message\n        value: \"$(params.message1) and $(params.message2)\"\n    taskSpec:\n      params:\n        - name: message\n          enum: [...] # the message enum is not required to be a subset of message1 or message2\n      ...\n```\n\nTekton validates user-provided values in a `PipelineRun` against the `enum` specified in the `PipelineSpec.params`. Tekton also validates\nany resolved `param` value against the `enum` specified in each `PipelineTask` before creating the `TaskRun`.\n\nSee usage in this [example](..\/examples\/v1\/pipelineruns\/alpha\/param-enum.yaml).\n\n#### Propagated Params\n\nLike with embedded [pipelineruns](pipelineruns.md#propagated-parameters), you can propagate `params` declared in the `pipeline` down to the inlined `pipelineTasks` and their inlined `Steps`. Wherever a resource (e.g. a `pipelineTask`) or a `StepAction` is referenced, the parameters need to be passed explicitly. 
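\n\nWhere a `pipelineTask` references a `Task` instead of inlining a `taskSpec`, propagation does not apply and the `params` must be passed explicitly. A minimal, hypothetical sketch (the `Task` name `echo-task` and its `HELLO` param are assumed for illustration):\n\n```yaml\n  tasks:\n    - name: echo-hello-ref\n      taskRef:\n        name: echo-task # referenced Task: params are not propagated\n      params:\n        - name: HELLO\n          value: \"$(params.HELLO)\" # passed explicitly from the pipeline-level param\n```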
\n\nFor example, the following is valid YAML.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: pipeline-propagated-params\nspec:\n  params:\n    - name: HELLO\n      default: \"Hello World!\"\n    - name: BYE\n      default: \"Bye World!\"\n  tasks:\n    - name: echo-hello\n      taskSpec:\n        steps:\n          - name: echo\n            image: ubuntu\n            script: |\n              #!\/usr\/bin\/env bash\n              echo \"$(params.HELLO)\"\n    - name: echo-bye\n      taskSpec:\n        steps:\n          - name: echo-action\n            ref:\n              name: step-action-echo\n            params:\n              - name: msg\n                value: \"$(params.BYE)\"\n```\nThe same rules defined in [pipelineruns](pipelineruns.md#propagated-parameters) apply here.\n\n\n## Adding `Tasks` to the `Pipeline`\n\nYour `Pipeline` definition must reference at least one [`Task`](tasks.md).\nEach `Task` within a `Pipeline` must have a [valid](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/names\/#names)\n`name` and a `taskRef` or a `taskSpec`. For example:\n\n```yaml\ntasks:\n  - name: build-the-image\n    taskRef:\n      name: build-push\n```\n\n**Note:** Using both `apiVersion` and `kind` will create a [CustomRun](customruns.md); don't set `apiVersion` if only referring to a [`Task`](tasks.md).\n\nor\n\n```yaml\ntasks:\n  - name: say-hello\n    taskSpec:\n      steps:\n      - image: ubuntu\n        script: echo 'hello there'\n```\n\nNote that any `task` specified in `taskSpec` will be the same version as the `Pipeline`.\n\n### Specifying `displayName` in `PipelineTasks`\n\nThe `displayName` field is an optional field that allows you to add a user-facing name of the `PipelineTask` that can be\nused to populate and distinguish in the dashboard. 
For example:\n\n```yaml\nspec:\n  tasks:\n    - name: scan\n      displayName: \"Code Scan\"\n      taskRef:\n        name: sonar-scan\n```\n\nThe `displayName` also allows you to parameterize the human-readable name of your choice based on the\n[params](#specifying-parameters), [the task results](#passing-one-tasks-results-into-the-parameters-or-when-expressions-of-another),\nand [the context variables](#context-variables). For example:\n\n```yaml\nspec:\n  params:\n    - name: application\n  tasks:\n    - name: scan\n      displayName: \"Code Scan for $(params.application)\"\n      taskRef:\n        name: sonar-scan\n    - name: upload-scan-report\n      displayName: \"Upload Scan Report $(tasks.scan.results.report)\"\n      taskRef:\n        name: upload\n```\n\nSpecifying task results in the `displayName` does not introduce an inherent resource dependency among `tasks`. The\npipeline author is responsible for specifying the dependency explicitly, either by using [runAfter](#using-the-runafter-field)\nor by relying on [whenExpressions](#guard-task-execution-using-when-expressions) or [task results in params](#using-results).\n\nFully resolved `displayName` is also available in the status as part of the `pipelineRun.status.childReferences`. The\nclients such as the dashboard, CLI, etc. can retrieve the `displayName` from the `childReferences`. The `displayName` mainly\ndrives a better user experience and at the same time it is not validated for the content or length by the controller.\n\n### Specifying Remote Tasks\n\n**([beta feature](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/install.md#beta-features))**\n\nA `taskRef` field may specify a Task in a remote location such as git.\nSupport for specific types of remote will depend on the Resolvers your\ncluster's operator has installed. For more information including a tutorial, please check [resolution docs](resolution.md). 
The below example demonstrates referencing a Task in git:\n\n```yaml\ntasks:\n- name: \"go-build\"\n  taskRef:\n    resolver: git\n    params:\n    - name: url\n      value: https:\/\/github.com\/tektoncd\/catalog.git\n    - name: revision\n      # value can use params declared at the pipeline level or a static value like main\n      value: $(params.gitRevision)\n    - name: pathInRepo\n      value: task\/golang-build\/0.3\/golang-build.yaml\n```\n\n### Specifying `Pipelines` in `PipelineTasks`\n\n> :seedling: **Specifying `pipelines` in `PipelineTasks` is an [alpha](additional-configs.md#alpha-features) feature.**\n> The `enable-api-fields` feature flag must be set to `\"alpha\"` to specify `PipelineRef` or `PipelineSpec` in a `PipelineTask`.\n> This feature is in **Preview Only** mode and not yet supported\/implemented.\n\nApart from `taskRef` and `taskSpec`, `pipelineRef` and `pipelineSpec` allow you to specify a `Pipeline` in a `pipelineTask`.\nThis allows you to generate a child `pipelineRun` which is inherited by the parent `pipelineRun`.\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: Pipeline\nmetadata:\n  name: security-scans\nspec:\n  tasks:\n    - name: scorecards\n      taskSpec:\n        steps:\n          - image: alpine\n            name: step-1\n            script: |\n              echo \"Generating scorecard report ...\"\n    - name: codeql\n      taskSpec:\n        steps:\n          - image: alpine\n            name: step-1\n            script: |\n              echo \"Generating codeql report ...\"\n---\napiVersion: tekton.dev\/v1\nkind: Pipeline\nmetadata:\n  name: clone-scan-notify\nspec:\n  tasks:\n    - name: git-clone\n      taskSpec:\n        steps:\n          - image: alpine\n            name: step-1\n            script: |\n              echo \"Cloning a repo to run security scans ...\"\n    - name: security-scans\n      runAfter:\n        - git-clone\n      pipelineRef:\n        name: security-scans\n---\n```\nFor further information, read [Pipelines in 
Pipelines](.\/pipelines-in-pipelines.md)\n\n\n### Specifying `Parameters` in `PipelineTasks`\n\nYou can also provide [`Parameters`](tasks.md#specifying-parameters):\n\n```yaml\nspec:\n  tasks:\n    - name: build-skaffold-web\n      taskRef:\n        name: build-push\n      params:\n        - name: pathToDockerFile\n          value: Dockerfile\n        - name: pathToContext\n          value: \/workspace\/examples\/microservices\/leeroy-web\n```\n\n### Specifying `Matrix` in `PipelineTasks`\n\n> :seedling: **`Matrix` is a [beta](additional-configs.md#beta-features) feature.**\n> The `enable-api-fields` feature flag can be set to `\"beta\"` to specify `Matrix` in a `PipelineTask`.\n\nYou can also provide [`Parameters`](tasks.md#specifying-parameters) through the `matrix` field:\n\n```yaml\nspec:\n  tasks:\n    - name: browser-test\n      taskRef:\n        name: browser-test\n      matrix:\n        params:\n        - name: browser\n          value:\n          - chrome\n          - safari\n          - firefox\n        include:\n          - name: build-1\n            params:\n              - name: browser\n                value: chrome\n              - name: url\n                value: some-url\n```\n\nFor further information, read [`Matrix`](.\/matrix.md).\n\n### Specifying `Workspaces` in `PipelineTasks`\n\nYou can also provide [`Workspaces`](tasks.md#specifying-workspaces):\n\n```yaml\nspec:\n  tasks:\n    - name: use-workspace\n      taskRef:\n        name: gen-code # gen-code expects a workspace with name \"output\"\n      workspaces:\n        - name: output\n          workspace: shared-ws\n```\n\n### Tekton Bundles\n\nA `Tekton Bundle` is an OCI artifact that contains Tekton resources like `Tasks` which can be referenced within a `taskRef`.\n\nThere is currently a hard limit of 20 objects in a bundle.\n\nYou can reference a `Tekton bundle` in a `TaskRef` in both `v1` and `v1beta1` using [remote resolution](.\/bundle-resolver.md#pipeline-resolution). 
The example syntax shown below for `v1` uses remote resolution and requires enabling [beta features](.\/additional-configs.md#beta-features).\n\n```yaml\nspec:\n  tasks:\n    - name: hello-world\n      taskRef:\n        resolver: bundles\n        params:\n        - name: bundle\n          value: docker.io\/myrepo\/mycatalog\n        - name: name\n          value: echo-task\n        - name: kind\n          value: Task\n```\n\nYou may also specify a `tag` as you would with a Docker image which will give you a fixed,\nrepeatable reference to a `Task`.\n\n```yaml\nspec:\n  taskRef:\n    resolver: bundles\n    params:\n    - name: bundle\n      value: docker.io\/myrepo\/mycatalog:v1.0.1\n    - name: name\n      value: echo-task\n    - name: kind\n      value: Task\n```\n\nYou may also specify a fixed digest instead of a tag.\n\n```yaml\nspec:\n  taskRef:\n    resolver: bundles\n    params:\n    - name: bundle\n      value: docker.io\/myrepo\/mycatalog@sha256:abc123\n    - name: name\n      value: echo-task\n    - name: kind\n      value: Task\n```\n\nAny of the above options will fetch the image using the `ImagePullSecrets` attached to the\n`ServiceAccount` specified in the `PipelineRun`.\nSee the [Service Account](pipelineruns.md#specifying-custom-serviceaccount-credentials) section\nfor details on how to configure a `ServiceAccount` on a `PipelineRun`. The `PipelineRun` will then\nrun that `Task` without registering it in the cluster allowing multiple versions of the same named\n`Task` to be run at once.\n\n`Tekton Bundles` may be constructed with any toolsets that produce valid OCI image artifacts\nso long as the artifact adheres to the [contract](tekton-bundle-contracts.md).\n\n### Using the `runAfter` field\n\nIf you need your `Tasks` to execute in a specific order within the `Pipeline`,\nuse the `runAfter` field to indicate that a `Task` must execute after\none or more other `Tasks`.\n\nIn the example below, we want to test the code before we build it. 
Since there\nis no output from the `test-app` `Task`, the `build-app` `Task` uses `runAfter`\nto indicate that `test-app` must run before it, regardless of the order in which\nthey are referenced in the `Pipeline` definition.\n\n```yaml\nworkspaces:\n- name: source\ntasks:\n- name: test-app\n  taskRef:\n    name: make-test\n  workspaces:\n  - name: source\n    workspace: source\n- name: build-app\n  taskRef:\n    name: kaniko-build\n  runAfter:\n    - test-app\n  workspaces:\n  - name: source\n    workspace: source\n```\n\n### Using the `retries` field\n\nFor each `Task` in the `Pipeline`, you can specify the number of times Tekton\nshould retry its execution when it fails. When a `Task` fails, the corresponding\n`TaskRun` sets its `Succeeded` `Condition` to `False`. The `retries` field\ninstructs Tekton to retry executing the `Task` when this happens. `retries` are executed\neven when other `Task`s in the `Pipeline` have failed, unless the `PipelineRun` has\nbeen [cancelled](.\/pipelineruns.md#cancelling-a-pipelinerun) or\n[gracefully cancelled](.\/pipelineruns.md#gracefully-cancelling-a-pipelinerun).\n\nIf you expect a `Task` to encounter problems during execution (for example,\nyou know that there will be issues with network connectivity or missing\ndependencies), set its `retries` field to a suitable value greater than 0.\nIf you don't explicitly specify a value, Tekton does not attempt to execute\nthe failed `Task` again.\n\nIn the example below, the execution of the `build-the-image` `Task` will be\nretried once after a failure; if the retried execution fails, too, the `Task`\nexecution fails as a whole.\n\n```yaml\ntasks:\n  - name: build-the-image\n    retries: 1\n    taskRef:\n      name: build-push\n```\n\n### Using the `onError` field\n\nWhen a `PipelineTask` fails, the rest of the `PipelineTasks` are skipped and the `PipelineRun` is declared a failure. 
If you would like to
ignore such a `PipelineTask` failure and continue executing the rest of the `PipelineTasks`, you can specify `onError` for that `PipelineTask`.

`OnError` can be set to `stopAndFail` (the default) or `continue`. The failure of a `PipelineTask` with `stopAndFail` stops and fails the whole `PipelineRun`. The failure of a `PipelineTask` with `continue` does not fail the whole `PipelineRun`, and the rest of the `PipelineTasks` will continue to execute.

To ignore a `PipelineTask` failure, set `onError` to `continue`:

``` yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: demo
spec:
  tasks:
    - name: task1
      onError: continue
      taskSpec:
        steps:
          - name: step1
            image: alpine
            script: |
              exit 1
```

At runtime, the failure is ignored when determining the `PipelineRun` status. The `PipelineRun` `message` contains the ignored failure info:

``` yaml
status:
  conditions:
  - lastTransitionTime: "2023-09-28T19:08:30Z"
    message: 'Tasks Completed: 1 (Failed: 1 (Ignored: 1), Cancelled 0), Skipped: 0'
    reason: Succeeded
    status: "True"
    type: Succeeded
  ...
```

Note that `OnError` does not change the `TaskRun` status itself. 
Failed but ignored `TaskRuns` result in a `failed` status with reason\n`FailureIgnored`.\n\nFor example, the `TaskRun` created by the above `PipelineRun` has the following status:\n\n``` bash\n$ kubectl get tr demo-run-task1\nNAME                                SUCCEEDED   REASON           STARTTIME   COMPLETIONTIME\ndemo-run-task1                      False       FailureIgnored   12m         12m\n```\n\nTo specify `onError` for a `step`, please see [specifying onError for a step](.\/tasks.md#specifying-onerror-for-a-step).\n\n**Note:** Setting [`Retry`](#specifying-retries) and `OnError:continue` at the same time is **NOT** allowed.\n\n### Produce results with `OnError`\n\nWhen a `PipelineTask` is set to ignore error and the `PipelineTask` is able to initialize a result before failing, the result is made available to the consumer `PipelineTasks`.\n\n``` yaml\n  tasks:\n    - name: task1\n      onError: continue\n      taskSpec:\n        results:\n          - name: result1\n        steps:\n          - name: step1\n            image: alpine\n            script: |\n              echo -n 123 | tee $(results.result1.path)\n              exit 1\n```\n\nThe consumer `PipelineTasks` can access the result by referencing `$(tasks.task1.results.result1)`.\n\nIf the result is **NOT** initialized before failing, and there is a `PipelineTask` consuming it:\n\n``` yaml\n  tasks:\n    - name: task1\n      onError: continue\n      taskSpec:\n        results:\n          - name: result1\n        steps:\n          - name: step1\n            image: alpine\n            script: |\n              exit 1\n              echo -n 123 | tee $(results.result1.path)\n```\n\n- If the consuming `PipelineTask` has `OnError:stopAndFail`, the `PipelineRun` will fail with `InvalidTaskResultReference`.\n- If the consuming `PipelineTask` has `OnError:continue`, the consuming `PipelineTask` will be skipped with reason `Results were missing`,\nand the `PipelineRun` will continue to execute.\n\n### Guard 
`Task` execution using `when` expressions

To run a `Task` only when certain conditions are met, it is possible to _guard_ task execution using the `when` field. The `when` field allows you to list a series of references to `when` expressions.

The components of `when` expressions are `input`, `operator` and `values`:

| Component  | Description | Syntax |
|------------|-------------|--------|
| `input`    | Input for the `when` expression; defaults to an empty string if not provided. | * Static values e.g. `"ubuntu"`<br/> * Variables ([parameters](#specifying-parameters) or [results](#using-results)) e.g. `"$(params.image)"` or `"$(tasks.task1.results.image)"` or `"$(tasks.task1.results.array-results[1])"` |
| `operator` | Represents an `input`'s relationship to a set of `values`; a valid `operator` must be provided. | `in` or `notin` |
| `values`   | An array of string values; the `values` array must be provided and must be non-empty. | * An array param e.g. `["$(params.images[*])"]`<br/> * An array result of a task `["$(tasks.task1.results.array-results[*])"]`<br/> * `values` can contain static values e.g. `"ubuntu"`<br/> * `values` can contain variables ([parameters](#specifying-parameters) or [results](#using-results)) or [a Workspace's `bound` state](#specifying-workspaces) e.g. `["$(params.image)"]` or `["$(tasks.task1.results.image)"]` or `["$(tasks.task1.results.array-results[1])"]` |

The [`Parameters`](#specifying-parameters) are read from the `Pipeline` and [`Results`](#using-results) are read directly from previous [`Tasks`](#adding-tasks-to-the-pipeline). Using [`Results`](#using-results) in a `when` expression in a guarded `Task` introduces a resource dependency on the previous `Task` that produced the `Result`.

The declared `when` expressions are evaluated before the `Task` is run. 
If all the `when` expressions evaluate to `True`, the `Task` is run. If any of the `when` expressions evaluate to `False`, the `Task` is not run and the `Task` is listed in the [`Skipped Tasks` section of the `PipelineRunStatus`](pipelineruns.md#monitoring-execution-status).

In these examples:

- the `first-create-file` task will only be executed if the `path` parameter is `README.md`
- the `echo-file-exists` task will only be executed if the `exists` result from the `check-file` task is `yes`
- the `run-lint` task will only be executed if the `lint-config` optional workspace has been provided by a `PipelineRun`
- the `deploy-in-blue` task will only be executed if `"blue"` is among the values of the `deployments` array parameter

```yaml
tasks:
  - name: first-create-file
    when:
      - input: "$(params.path)"
        operator: in
        values: ["README.md"]
    taskRef:
      name: first-create-file
---
tasks:
  - name: echo-file-exists
    when:
      - input: "$(tasks.check-file.results.exists)"
        operator: in
        values: ["yes"]
    taskRef:
      name: echo-file-exists
---
tasks:
  - name: run-lint
    when:
      - input: "$(workspaces.lint-config.bound)"
        operator: in
        values: ["true"]
    taskRef:
      name: lint-source
---
tasks:
  - name: deploy-in-blue
    when:
      - input: "blue"
        operator: in
        values: ["$(params.deployments[*])"]
    taskRef:
      name: deployment
```

For an end-to-end example, see [PipelineRun with `when` expressions](../examples/v1/pipelineruns/pipelinerun-with-when-expressions.yaml).

There are a lot of scenarios where `when` expressions can be really useful. 
Some of these are:
- Checking if the name of a git branch matches
- Checking if the `Result` of a previous `Task` is as expected
- Checking if a git file has changed in the previous commits
- Checking if an image exists in the registry
- Checking if the name of a CI job matches
- Checking if an optional Workspace has been provided

#### Use CEL expression in WhenExpression

> :seedling: **`CEL in WhenExpression` is an [alpha](additional-configs.md#alpha-features) feature.**
> The `enable-cel-in-whenexpression` feature flag must be set to `"true"` to enable the use of `CEL` in `WhenExpression`.

CEL (Common Expression Language) is a declarative language designed for simplicity, speed, safety, and portability, and can be used to express a wide variety of conditions and computations.

You can define a CEL expression in a `WhenExpression` to guard the execution of a `Task`. The CEL expression must evaluate to either `true` or `false`. You can use a single CEL string to replace a `WhenExpression`'s `input`+`operator`+`values`. For example:

```yaml
# current WhenExpressions
when:
  - input: "foo"
    operator: "in"
    values: ["foo", "bar"]
  - input: "duh"
    operator: "notin"
    values: ["foo", "bar"]

# with cel
when:
  - cel: "'foo' in ['foo', 'bar']"
  - cel: "!('duh' in ['foo', 'bar'])"
```

CEL also offers more conditional functions, such as numeric comparisons (e.g. `>`, `<=`), logical operators (e.g. `||`, `&&`) and regex pattern matching. 
For example:

```yaml
  when:
    # test coverage result is larger than 90%
    - cel: "'$(tasks.unit-test.results.test-coverage)' > 0.9"
    # param1 is not empty, or param2 is 8.5 or 8.6
    - cel: "'$(params.param1)' != '' || '$(params.param2)' == '8.5' || '$(params.param2)' == '8.6'"
    # param branch matches pattern `release/.*`
    - cel: "'$(params.branch)'.matches('release/.*')"
```

##### Variable substitution in CEL

`CEL` supports [string substitutions](https://github.com/tektoncd/pipeline/blob/main/docs/variables.md#variables-available-in-a-pipeline); you can reference a string value, an indexed array element, or an object key of a param/result. For example:

```yaml
  when:
    # string result
    - cel: "$(tasks.unit-test.results.test-coverage) > 0.9"
    # array indexing result
    - cel: "$(tasks.unit-test.results.test-coverage[0]) > 0.9"
    # object result key
    - cel: "'$(tasks.objectTask.results.repo.url)'.matches('github.com/tektoncd/.*')"
    # string param
    - cel: "'$(params.foo)' == 'foo'"
    # array indexing
    - cel: "'$(params.branch[0])' == 'foo'"
    # object param key
    - cel: "'$(params.repo.url)'.matches('github.com/tektoncd/.*')"
```

**Note:** the reference needs to be wrapped in single quotes.
Whole `Array` and `Object` replacements are not supported yet. 
The following usage is not supported:

```yaml
  when:
    - cel: "'foo' in '$(params.array_params[*])'"
    - cel: "'foo' in '$(params.object_params[*])'"
```
<!-- wokeignore:rule=master -->
In addition to the cases listed above, you can craft any valid CEL expression as defined by the [cel-spec language definition](https://github.com/google/cel-spec/blob/master/doc/langdef.md).

A `CEL` expression is validated by the admission webhook, and a validation error is returned if the expression is invalid.

**Note:** To use Tekton's [variable substitution](variables.md), you need to wrap the reference in single quotes. This also means that if you pass another CEL expression via `params` or `results`, it won't be executed; CEL injection is therefore disallowed.

For example:
```
This is valid: '$(params.foo)' == 'foo'
This is invalid: $(params.foo) == 'foo'
CEL's variable substitution is not supported yet and thus invalid: params.foo == 'foo'
```

#### Guarding a `Task` and its dependent `Tasks`

To guard a `Task` and its dependent `Tasks`:
- cascade the `when` expressions to the specific dependent `Tasks` to be guarded as well
- compose the `Task` and its dependent `Tasks` as a unit to be guarded and executed together using `Pipelines` in `Pipelines`

##### Cascade `when` expressions to the specific dependent `Tasks`

Pick and choose which specific dependent `Tasks` to guard as well, and cascade the `when` expressions to those `Tasks`.

Consider the use case below, in which a user wants to guard `manual-approval` and its dependent `Tasks`:

```
                                     tests
                                       |
                                       v
                                 manual-approval
                                 |            |
                                 v        (approver)
                            build-image       |
                                |             v
                                v          slack-msg
                            deploy-image
```

The user can design the `Pipeline` to solve their use case as such:

```yaml
tasks:
#...
- name: manual-approval
  runAfter:
    - tests
  when:
    - input: $(params.git-action)
      operator: in
      values:
        - merge
  taskRef:
    name: manual-approval

- name: build-image
  when:
    - input: $(params.git-action)
      operator: in
      values:
        - merge
  runAfter:
    - manual-approval
  taskRef:
    name: build-image

- name: deploy-image
  when:
    - input: $(params.git-action)
      operator: in
      values:
        - merge
  runAfter:
    - build-image
  taskRef:
    name: deploy-image

- name: slack-msg
  params:
    - name: approver
      value: $(tasks.manual-approval.results.approver)
  taskRef:
    name: slack-msg
```

##### Compose using Pipelines in Pipelines

Compose a set of `Tasks` as a unit of execution using `Pipelines` in `Pipelines`, which allows for guarding a `Task` and
its dependent `Tasks` (as a sub-`Pipeline`) using `when` expressions.

**Note:** `Pipelines` in `Pipelines` is an [experimental feature](https://github.com/tektoncd/experimental/tree/main/pipelines-in-pipelines).

Consider the use case below, in which a user wants to guard `manual-approval` and its dependent `Tasks`:

```
                                     tests
                                       |
                                       v
                                 manual-approval
                                 |            |
                                 v        (approver)
                            build-image       |
                                |             v
                                v          slack-msg
                            deploy-image
```

The user can design the `Pipelines` to solve their use case as such:

```yaml
## sub pipeline 
(approve-build-deploy-slack)\ntasks:\n  - name: manual-approval\n    runAfter:\n      - integration-tests\n    taskRef:\n      name: manual-approval\n\n  - name: build-image\n    runAfter:\n      - manual-approval\n    taskRef:\n      name: build-image\n\n  - name: deploy-image\n    runAfter:\n      - build-image\n    taskRef:\n      name: deploy-image\n\n  - name: slack-msg\n    params:\n      - name: approver\n        value: $(tasks.manual-approval.results.approver)\n    taskRef:\n      name: slack-msg\n\n---\n## main pipeline\ntasks:\n#...\n- name: approve-build-deploy-slack\n  runAfter:\n    - tests\n  when:\n    - input: $(params.git-action)\n      operator: in\n      values:\n        - merge\n  taskRef:\n    apiVersion: tekton.dev\/v1beta1\n    kind: Pipeline\n    name: approve-build-deploy-slack\n```\n\n#### Guarding a `Task` only\n\nWhen `when` expressions evaluate to `False`, the `Task` will be skipped and:\n- The ordering-dependent `Tasks` will be executed\n- The resource-dependent `Tasks` (and their dependencies) will be skipped because of missing `Results` from the skipped\n  parent `Task`. When we add support for [default `Results`](https:\/\/github.com\/tektoncd\/community\/pull\/240), then the\n  resource-dependent `Tasks` may be executed if the default `Results` from the skipped parent `Task` are specified. 
In\n  addition, if a resource-dependent `Task` needs a file from a guarded parent `Task` in a shared `Workspace`, make sure\n  to handle the execution of the child `Task` in case the expected file is missing from the `Workspace` because the\n  guarded parent `Task` is skipped.\n\nOn the other hand, the rest of the `Pipeline` will continue executing.\n\n```\n                                     tests\n                                       |\n                                       v\n                                 manual-approval\n                                 |            |\n                                 v        (approver)\n                            build-image       |\n                                |             v\n                                v          slack-msg\n                            deploy-image\n```\n\nTaking the use case above, a user who wants to guard `manual-approval` only can design the `Pipeline` as such:\n\n```yaml\ntasks:\n#...\n- name: manual-approval\n  runAfter:\n    - tests\n  when:\n    - input: $(params.git-action)\n      operator: in\n      values:\n        - merge\n  taskRef:\n    name: manual-approval\n\n- name: build-image\n  runAfter:\n    - manual-approval\n  taskRef:\n    name: build-image\n\n- name: deploy-image\n  runAfter:\n    - build-image\n  taskRef:\n    name: deploy-image\n\n- name: slack-msg\n  params:\n    - name: approver\n      value: $(tasks.manual-approval.results.approver)\n  taskRef:\n    name: slack-msg\n```\n\nIf `manual-approval` is skipped, execution of its dependent `Tasks` (`slack-msg`, `build-image` and `deploy-image`)\nwould be unblocked regardless:\n- `build-image` and `deploy-image` should be executed successfully\n- `slack-msg` will be skipped because it is missing the `approver` `Result` from `manual-approval`\n  - dependents of `slack-msg` would have been skipped too if it had any of them\n  - if `manual-approval` specifies a default `approver` `Result`, such as \"None\", then `slack-msg` 
would be executed
    ([supporting default `Results` is in progress](https://github.com/tektoncd/community/pull/240))

### Configuring the failure timeout

You can use the `Timeout` field in the `Task` spec within the `Pipeline` to set the timeout
of the `TaskRun` that executes that `Task` within the `PipelineRun` that executes your `Pipeline`.
The `Timeout` value is a `duration` conforming to Go's [`ParseDuration`](https://golang.org/pkg/time/#ParseDuration)
format. For example, valid values are `1h30m`, `1h`, `1m`, and `60s`.

**Note:** If you do not specify a `Timeout` value, Tekton instead honors the timeout for the [`PipelineRun`](pipelineruns.md#configuring-a-pipelinerun).

In the example below, the `build-the-image` `Task` is configured to time out after 90 seconds:

```yaml
spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      timeout: "0h1m30s"
```

## Using variable substitution

Tekton provides variables to inject values into the contents of certain fields.
The values you can inject come from a range of sources including other fields
in the Pipeline, context-sensitive information that Tekton provides, and runtime
information received from a PipelineRun.

The mechanism of variable substitution is quite simple - string replacement is
performed by the Tekton Controller when a PipelineRun is executed.

See the [complete list of variable substitutions for Pipelines](./variables.md#variables-available-in-a-pipeline)
and the [list of fields that accept substitutions](./variables.md#fields-that-accept-variable-substitutions).

For an end-to-end example, see [using context variables](../examples/v1/pipelineruns/using_context_variables.yaml).

### Using the `retries` and `retry-count` variable substitutions

Tekton supports variable substitution for the [`retries`](#using-the-retries-field)
parameter of `PipelineTask`. 
Variables like `context.pipelineTask.retries` and
`context.task.retry-count` can be added to the parameters of a `PipelineTask`.
`context.pipelineTask.retries` will be replaced by the `retries` value of the `PipelineTask`, while
`context.task.retry-count` will be replaced by the current retry number of the `PipelineTask`.

```yaml
params:
- name: pipelineTask-retries
  value: "$(context.pipelineTask.retries)"
taskSpec:
  params:
  - name: pipelineTask-retries
  steps:
  - image: ubuntu
    name: print-if-retries-exhausted
    script: |
      if [ "$(context.task.retry-count)" == "$(params.pipelineTask-retries)" ]
      then
        echo "This is the last retry."
      fi
      exit 1
```

**Note:** Every `PipelineTask` can only access its own `retries` and `retry-count`. These
values aren't accessible to other `PipelineTask`s.

## Using `Results`

Tasks can emit [`Results`](tasks.md#emitting-results) when they execute. A Pipeline can use these
`Results` for two different purposes:

1. A Pipeline can pass the `Result` of a `Task` into the `Parameters` or `when` expressions of another.
2. A Pipeline can itself emit `Results` and include data from the `Results` of its Tasks.

> **Note** Tekton does not enforce that results are produced at Task level. If a pipeline attempts to
> consume a result that was declared by a Task, but not produced, it will fail. [TEP-0048](https://github.com/tektoncd/community/blob/main/teps/0048-task-results-without-results.md)
> proposes introducing default values for results to help Pipeline authors manage this case.

### Passing one Task's `Results` into the `Parameters` or `when` expressions of another

Sharing `Results` between `Tasks` in a `Pipeline` happens via
[variable substitution](variables.md#variables-available-in-a-pipeline) - one `Task` emits
a `Result` and another receives it as a `Parameter` with a variable such as
`$(tasks.<task-name>.results.<result-name>)`. 
Pipelines support two additional types of
results and parameters: array `[]string` and object `map[string]string`.
Array results are a beta feature and can be enabled by setting `enable-api-fields` to `alpha` or `beta`.

| Result Type | Parameter Type | Specification                                    | `enable-api-fields` |
|-------------|----------------|--------------------------------------------------|---------------------|
| string      | string         | `$(tasks.<task-name>.results.<result-name>)`     | stable              |
| array       | array          | `$(tasks.<task-name>.results.<result-name>[*])`  | alpha or beta       |
| array       | string         | `$(tasks.<task-name>.results.<result-name>[i])`  | alpha or beta       |
| object      | object         | `$(tasks.<task-name>.results.<result-name>[*])`  | alpha or beta       |
| object      | string         | `$(tasks.<task-name>.results.<result-name>.key)` | alpha or beta       |

**Note:** Whole Array and Object `Results` (using star notation) cannot be referenced in a `script`.

When one `Task` receives the `Results` of another, a dependency is created between those
two `Tasks`. In order for the receiving `Task` to get data from another `Task's` `Result`,
the `Task` producing the `Result` must run first. Tekton enforces this `Task` ordering
by ensuring that the `Task` emitting the `Result` executes before any `Task` that uses it.

In the snippet below, a param is provided its value from the `commit` `Result` emitted by the
`checkout-source` `Task`. 
Tekton will make sure that the `checkout-source` `Task` runs\nbefore this one.\n\n```yaml\nparams:\n  - name: foo\n    value: \"$(tasks.checkout-source.results.commit)\"\n  - name: array-params\n    value: \"$(tasks.checkout-source.results.array-results[*])\"\n  - name: array-indexing-params\n    value: \"$(tasks.checkout-source.results.array-results[1])\"\n  - name: object-params\n    value: \"$(tasks.checkout-source.results.object-results[*])\"\n  - name: object-element-params\n    value: \"$(tasks.checkout-source.results.object-results.objectkey)\"\n```\n\n**Note:** If `checkout-source` exits successfully without initializing `commit` `Result`,\nthe receiving `Task` fails and causes the `Pipeline` to fail with `InvalidTaskResultReference`:\n\n```\nunable to find result referenced by param 'foo' in 'task';: Could not find result with name 'commit' for task run 'checkout-source'\n```\n\nIn the snippet below, a `when` expression is provided its value from the `exists` `Result` emitted by the\n`check-file` `Task`. Tekton will make sure that the `check-file` `Task` runs before this one.\n\n```yaml\nwhen:\n  - input: \"$(tasks.check-file.results.exists)\"\n    operator: in\n    values: [\"yes\"]\n```\n\nFor an end-to-end example, see [`Task` `Results` in a `PipelineRun`](..\/examples\/v1\/pipelineruns\/task_results_example.yaml).\n\nNote that `when` expressions are whitespace-sensitive.  In particular, when producing results intended for inputs to `when`\nexpressions that may include newlines at their close (e.g. 
`cat`, `jq`), you may wish to truncate them.\n\n```yaml\ntaskSpec:\n  params:\n  - name: jsonQuery-check\n  steps:\n  - image: ubuntu\n    name: store-name-in-results\n    script: |\n      curl -s https:\/\/my-json-server.typicode.com\/typicode\/demo\/profile | jq -r .name | tr -d '\\n' | tee $(results.name.path)\n```\n\n### Emitting `Results` from a `Pipeline`\n\nA `Pipeline` can emit `Results` of its own for a variety of reasons - an external\nsystem may need to read them when the `Pipeline` is complete, they might summarise\nthe most important `Results` from the `Pipeline's` `Tasks`, or they might simply\nbe used to expose non-critical messages generated during the execution of the `Pipeline`.\n\nA `Pipeline's` `Results` can be composed of one or many `Task` `Results` emitted during\nthe course of the `Pipeline's` execution. A `Pipeline` `Result` can refer to its `Tasks'`\n`Results` using a variable of the form `$(tasks.<task-name>.results.<result-name>)`.\n\nAfter a `Pipeline` has executed the `PipelineRun` will be populated with the `Results`\nemitted by the `Pipeline`. 
These will be written to the `PipelineRun's`\n`status.pipelineResults` field.\n\nIn the example below, the `Pipeline` specifies a `results` entry with the name `sum` that\nreferences the `outputValue` `Result` emitted by the `calculate-sum` `Task`.\n\n```yaml\nresults:\n  - name: sum\n    description: the sum of all three operands\n    value: $(tasks.calculate-sum.results.outputValue)\n```\n\nFor an end-to-end example, see [`Results` in a `PipelineRun`](..\/examples\/v1\/pipelineruns\/pipelinerun-results.yaml).\n\nIn the example below, the `Pipeline` collects array and object results from `Tasks`.\n\n```yaml\n    results:\n      - name: array-results\n        type: array\n        description: whole array\n        value: $(tasks.task1.results.array-results[*])\n      - name: array-indexing-results\n        type: string\n        description: array element\n        value: $(tasks.task1.results.array-results[1])\n      - name: object-results\n        type: object\n        description: whole object\n        value: $(tasks.task2.results.object-results[*])\n      - name: object-element\n        type: string\n        description: object element\n        value: $(tasks.task2.results.object-results.foo)\n```\n\nFor an end-to-end example see [`Array and Object Results` in a `PipelineRun`](..\/examples\/v1\/pipelineruns\/pipeline-emitting-results.yaml).\n\nA `Pipeline Result` is not emitted if any of the following are true:\n- A `PipelineTask` referenced by the `Pipeline Result` failed. The `PipelineRun` will also\nhave failed.\n- A `PipelineTask` referenced by the `Pipeline Result` was skipped.\n- A `PipelineTask` referenced by the `Pipeline Result` didn't emit the referenced `Task Result`. This\nshould be considered a bug in the `Task` and [may fail a `PipelineTask` in future](https:\/\/github.com\/tektoncd\/pipeline\/issues\/3497).\n- The `Pipeline Result` uses a variable that doesn't point to an actual `PipelineTask`. 
This will\nresult in an `InvalidTaskResultReference` validation error during `PipelineRun` execution.\n- The `Pipeline Result` uses a variable that doesn't point to an actual result in a `PipelineTask`.\nThis will cause an `InvalidTaskResultReference` validation error during `PipelineRun` execution.\n\n**Note:** Since a `Pipeline Result` can contain references to multiple `Task Results`, if any of those\n`Task Result` references are invalid the entire `Pipeline Result` is not emitted.\n**Note:** If a `PipelineTask` referenced by the `Pipeline Result` was skipped, the `Pipeline Result` will not be emitted and the `PipelineRun` will not fail due to a missing result.\n\n## Configuring the `Task` execution order\n\nYou can connect `Tasks` in a `Pipeline` so that they execute in a Directed Acyclic Graph (DAG).\nEach `Task` in the `Pipeline` becomes a node on the graph that can be connected with an edge\nso that one will run before another and the execution of the `Pipeline` progresses to completion\nwithout getting stuck in an infinite loop.\n\nThis is done using:\n- _resource dependencies_:\n  - [`results`](#emitting-results-from-a-pipeline) of one `Task` being passed into `params` or `when` expressions of\n    another\n\n- _ordering dependencies_:\n  - [`runAfter`](#using-the-runafter-field) clauses on the corresponding `Tasks`\n\nFor example, the `Pipeline` defined as follows\n\n```yaml\ntasks:\n- name: lint-repo\n  taskRef:\n    name: pylint\n- name: test-app\n  taskRef:\n    name: make-test\n- name: build-app\n  taskRef:\n    name: kaniko-build-app\n  runAfter:\n    - test-app\n- name: build-frontend\n  taskRef:\n    name: kaniko-build-frontend\n  runAfter:\n    - test-app\n- name: deploy-all\n  taskRef:\n    name: deploy-kubectl\n  runAfter:\n    - build-app\n    - build-frontend\n```\n\nexecutes according to the following graph:\n\n```none\n        |            |\n        v            v\n     test-app    lint-repo\n    \/        \\\n   v          v\nbuild-app  
build-frontend
   \          /
    v        v
    deploy-all
```

In particular:

1. The `lint-repo` and `test-app` `Tasks` have no `runAfter` clauses
   and start executing simultaneously.
2. Once `test-app` completes, both `build-app` and `build-frontend` start
   executing simultaneously since they both `runAfter` the `test-app` `Task`.
3. The `deploy-all` `Task` executes once both `build-app` and `build-frontend`
   complete, since it is supposed to `runAfter` them both.
4. The entire `Pipeline` completes execution once both `lint-repo` and `deploy-all`
   complete execution.

## Specifying a display name

The `displayName` field is an optional field that allows you to add a user-facing name of the `Pipeline` that can be used to populate a UI. For example:

```yaml
spec:
  displayName: "Code Scan"
  tasks:
    - name: scan
      taskRef:
        name: sonar-scan
```

## Adding a description

The `description` field is an optional field that can be used to provide a description of the `Pipeline`.

## Adding `Finally` to the `Pipeline`

You can specify a list of one or more final tasks under the `finally` section. `finally` tasks are guaranteed to be executed
in parallel after all `PipelineTasks` under `tasks` have completed, regardless of success or error. `finally` tasks are very
similar to `PipelineTasks` under the `tasks` section and follow the same syntax. Each `finally` task must have a
[valid](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names) `name` and a [taskRef or
taskSpec](taskruns.md#specifying-the-target-task). 
For example:\n\n```yaml\nspec:\n  tasks:\n    - name: tests\n      taskRef:\n        name: integration-test\n  finally:\n    - name: cleanup-test\n      taskRef:\n        name: cleanup\n```\n\n### Specifying `displayName` in `finally` tasks\n\nSimilar to [specifying `displayName` in `pipelineTasks`](#specifying-displayname-in-pipelinetasks), `finally` tasks also\nallow adding a user-facing name of the `finally` task that can be used to populate and distinguish entries in the dashboard.\nFor example:\n\n```yaml\nspec:\n  finally:\n    - name: notification\n      displayName: \"Notify\"\n      taskRef:\n        name: notification\n    - name: notification-using-context-variable\n      displayName: \"Notification from $(context.pipeline.name)\"\n      taskRef:\n        name: notification\n```\n\nThe `displayName` also allows you to parameterize the human-readable name of your choice based on the\n[params](#specifying-parameters), [the task results](#consuming-task-execution-results-in-finally),\nand [the context variables](#context-variables).\n\nThe fully resolved `displayName` is also available in the status as part of `pipelineRun.status.childReferences`, so\nclients such as the dashboard, CLI, etc. can retrieve it from the `childReferences`. The `displayName` mainly\ndrives a better user experience; it is not validated for content or length by the controller.\n\n### Specifying `Workspaces` in `finally` tasks\n\n`finally` tasks can specify [workspaces](workspaces.md) which `PipelineTasks` might have utilized,\ne.g. a mount point for credentials held in Secrets.\n
To support that requirement, you can specify one or more\n`Workspaces` in the `workspaces` field for the `finally` tasks, similar to `tasks`.\n\n```yaml\nspec:\n  workspaces:\n    - name: shared-workspace\n  tasks:\n    - name: clone-app-source\n      taskRef:\n        name: clone-app-repo-to-workspace\n      workspaces:\n        - name: shared-workspace\n          workspace: shared-workspace\n  finally:\n    - name: cleanup-workspace\n      taskRef:\n        name: cleanup-workspace\n      workspaces:\n        - name: shared-workspace\n          workspace: shared-workspace\n```\n\n### Specifying `Parameters` in `finally` tasks\n\nSimilar to `tasks`, you can specify [`Parameters`](tasks.md#specifying-parameters) in `finally` tasks:\n\n```yaml\nspec:\n  tasks:\n    - name: tests\n      taskRef:\n        name: integration-test\n  finally:\n    - name: report-results\n      taskRef:\n        name: report-results\n      params:\n        - name: url\n          value: \"someURL\"\n```\n\n### Specifying `matrix` in `finally` tasks\n\n> :seedling: **`Matrix` is a [beta](additional-configs.md#beta-features) feature.**\n> The `enable-api-fields` feature flag can be set to `\"beta\"` to specify `Matrix` in a `PipelineTask`.\n\nSimilar to `tasks`, you can also provide [`Parameters`](tasks.md#specifying-parameters) through `matrix`\nin `finally` tasks:\n\n```yaml\nspec:\n  tasks:\n    - name: tests\n      taskRef:\n        name: integration-test\n  finally:\n    - name: report-results\n      taskRef:\n        name: report-results\n      params:\n        - name: url\n          value: \"someURL\"\n      matrix:\n        params:\n        - name: slack-channel\n          value:\n          - \"foo\"\n          - \"bar\"\n        include:\n          - name: build-1\n            params:\n              - name: slack-channel\n                value: \"foo\"\n              - name: flags\n                value: \"-v\"\n```\n\nFor further information, read [`Matrix`](.\/matrix.md).\n\n### Consuming `Task` execution results in `finally`\n\n`finally` tasks can be configured to consume `Results` of a `PipelineTask` from the `tasks` section:\n\n```yaml\nspec:\n  tasks:\n    - name: clone-app-repo\n      taskRef:\n        name: git-clone\n  finally:\n    - name: discover-git-commit\n      params:\n        - name: commit\n          value: $(tasks.clone-app-repo.results.commit)\n```\n**Note:** The scheduling of such a `finally` task does not change; it will still be executed in parallel with other\n`finally` tasks after all non-`finally` tasks are done.\n\nThe controller resolves task results before executing the `finally` task `discover-git-commit`. If the task\n`clone-app-repo` failed before initializing `commit`, or was skipped due to a [when expression](#guard-task-execution-using-when-expressions),\nleaving the task result `commit` uninitialized, the `finally` task `discover-git-commit` will be included in the list of\n`skippedTasks`, and the rest of the `finally` tasks will continue executing. The pipeline exits with `completion` instead of\n`success` if a `finally` task is added to the list of `skippedTasks`.\n\n### Consuming `Pipeline` result with `finally`\n\n`finally` tasks can emit `Results`, and these results can be surfaced in the\n[Pipeline Results](#emitting-results-from-a-pipeline).\n
References of `Results` from `finally` will follow the same naming conventions as referencing `Results` from `tasks`: ```$(finally.<finally-pipelinetask-name>.results.<result-name>)```.\n\n```yaml\nresults:\n  - name: comment-count-validate\n    value: $(finally.check-count.results.comment-count-validate)\nfinally:\n  - name: check-count\n    taskRef:\n      name: example-task-name\n```\n\nIn this example, `pipelineResults` in `status` will show the name-value pair for the result `comment-count-validate`, which is produced by the `Task` `example-task-name`.\n\n\n### `PipelineRun` Status with `finally`\n\nWith `finally`, `PipelineRun` status is calculated based on `PipelineTasks` under the `tasks` section and `finally` tasks.\n\nWithout `finally`:\n\n| `PipelineTasks` under `tasks`                                                                           | `PipelineRun` status | Reason      |\n|---------------------------------------------------------------------------------------------------------|----------------------|-------------|\n| all `PipelineTasks` successful                                                                          | `true`               | `Succeeded` |\n| one or more `PipelineTasks` [skipped](#guard-task-execution-using-when-expressions) and rest successful | `true`               | `Completed` |\n| single failure of `PipelineTask`                                                                        | `false`              | `Failed`    |\n\nWith `finally`:\n\n| `PipelineTasks` under `tasks`                                                                          | `finally` tasks                        | `PipelineRun` status | Reason      |\n|--------------------------------------------------------------------------------------------------------|----------------------------------------|----------------------|-------------|\n| all `PipelineTask` successful                                                                          | all `finally` tasks successful         | `true`               | `Succeeded` |\n| all `PipelineTask` successful                                                                          | one or more failure of `finally` tasks | `false`              | `Failed`    |\n| one or more `PipelineTask` [skipped](#guard-task-execution-using-when-expressions) and rest successful | all `finally` tasks successful         | `true`               | `Completed` |\n| one or more `PipelineTask` [skipped](#guard-task-execution-using-when-expressions) and rest successful | one or more failure of `finally` tasks | `false`              | `Failed`    |\n| single failure of `PipelineTask`                                                                       | all `finally` tasks successful         | `false`              | `Failed`    |\n| single failure of `PipelineTask`                                                                       | one or more failure of `finally` tasks | `false`              | `Failed`    |\n\nOverall, `PipelineRun` state transitioning is explained below for respective scenarios:\n\n* All `PipelineTask` and `finally` tasks are successful: `Started` -> `Running` -> `Succeeded`\n* At least one `PipelineTask` skipped and rest successful: `Started` -> `Running` -> `Completed`\n* One `PipelineTask` failed \/ one or more `finally` tasks failed: `Started` -> `Running` -> `Failed`\n\nPlease refer to the [table](pipelineruns.md#monitoring-execution-status) under Monitoring Execution Status to learn about\nwhat kind of events are triggered based on the `PipelineRun` status.\n\n\n### Using Execution `Status` of `pipelineTask`\n\nA `pipeline` can check the status of a specific `pipelineTask` from the `tasks` section in `finally` through the task\nparameters:\n\n```yaml\nfinally:\n  - name: finaltask\n    params:\n      - name: task1Status\n        value: \"$(tasks.task1.status)\"\n    taskSpec:\n      params:\n        - name: task1Status\n      steps:\n        - image: ubuntu\n          name: print-task-status\n          script: |\n            if [ \"$(params.task1Status)\" = \"Failed\" ]\n            then\n              echo \"Task1 has failed, continue processing the failure\"\n            fi\n```\n\nThis kind of variable can have any one of the values from the following table:\n\n| Status      | Description                                                                                      |\n|-------------|--------------------------------------------------------------------------------------------------|\n| `Succeeded` | `taskRun` for the `pipelineTask` completed successfully                                          |\n| `Failed`    | `taskRun` for the `pipelineTask` completed with a failure or was cancelled by the user           |\n| `None`      | the `pipelineTask` has been skipped or no execution information is available for the `pipelineTask` |\n\nFor an end-to-end example, see [`status` in a `PipelineRun`](..\/examples\/v1\/pipelineruns\/pipelinerun-task-execution-status.yaml).\n\n### Using Aggregate Execution `Status` of All `Tasks`\n\nA `pipeline` can check the aggregate status of all the `tasks` in `finally` through the task parameters:\n\n```yaml\nfinally:\n  - name: finaltask\n    params:\n      - name: aggregateTasksStatus\n        value: \"$(tasks.status)\"\n    taskSpec:\n      params:\n        - name: aggregateTasksStatus\n      steps:\n        - image: ubuntu\n          name: check-task-status\n          script: |\n            if [ \"$(params.aggregateTasksStatus)\" = \"Failed\" ]\n            then\n              echo \"Looks like one or more tasks returned failure, continue processing the failure\"\n            fi\n```\n\nThis kind of variable can have any one of the values from the following table:\n\n| Status      | Description                                                                                                                       
|\n|-------------|-----------------------------------------------------------------------------------------------------------------------------------|\n| `Succeeded` | all `tasks` have succeeded                                                                                                        |\n| `Failed`    | one or more `tasks` failed                                                                                                        |\n| `Completed` | all `tasks` completed successfully, including one or more skipped tasks                                                           |\n| `None`      | no aggregate execution status available (i.e. none of the above); one or more `tasks` could be pending\/running\/cancelled\/timedout |\n\nFor an end-to-end example, see [`$(tasks.status)` usage in a `Pipeline`](..\/examples\/v1\/pipelineruns\/pipelinerun-task-execution-status.yaml).\n\n### Guard `finally` `Task` execution using `when` expressions\n\nSimilar to `Tasks`, `finally` `Tasks` can be guarded using [`when` expressions](#guard-task-execution-using-when-expressions)\nthat operate on static inputs or variables. Like in `Tasks`, `when` expressions in `finally` `Tasks` can operate on\n`Parameters` and `Results`.\n
Unlike in `Tasks`, `when` expressions in `finally` `tasks` can also operate on the [`Execution\nStatus`](#using-execution-status-of-pipelinetask) of `Tasks`.\n\n#### `when` expressions using `Parameters` in `finally` `Tasks`\n\n`when` expressions in `finally` `Tasks` can utilize `Parameters` as demonstrated using [`golang-build`](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/golang-build\/0.1)\nand [`send-to-channel-slack`](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/send-to-channel-slack\/0.1) Catalog\n`Tasks`:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: pipelinerun-\nspec:\n  pipelineSpec:\n    params:\n      - name: enable-notifications\n        type: string\n        description: a boolean indicating whether the notifications should be sent\n    tasks:\n      - name: golang-build\n        taskRef:\n          name: golang-build\n      # [\u2026]\n    finally:\n      - name: notify-build-failure # executed only when build task fails and notifications are enabled\n        when:\n          - input: $(tasks.golang-build.status)\n            operator: in\n            values: [\"Failed\"]\n          - input: $(params.enable-notifications)\n            operator: in\n            values: [\"true\"]\n        taskRef:\n          name: send-to-slack-channel\n      # [\u2026]\n  params:\n    - name: enable-notifications\n      value: \"true\"\n```\n\n#### `when` expressions using `Results` in `finally` `Tasks`\n\n`when` expressions in `finally` `tasks` can utilize `Results`, as demonstrated using [`git-clone`](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/git-clone\/0.2)\nand [`github-add-comment`](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/github-add-comment\/0.2) Catalog `Tasks`:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: pipelinerun-\nspec:\n  pipelineSpec:\n    tasks:\n      - name: git-clone\n        taskRef:\n          name: git-clone\n      - name: go-build\n      # [\u2026]\n    finally:\n      - name: notify-commit-sha # executed only when commit sha is not the expected sha\n        when:\n          - input: $(tasks.git-clone.results.commit)\n            operator: notin\n            values: [$(params.expected-sha)]\n        taskRef:\n          name: github-add-comment\n      # [\u2026]\n  params:\n    - name: expected-sha\n      value: 54dd3984affab47f3018852e61a1a6f9946ecfa\n```\n\nIf the `when` expressions in a `finally` `task` use `Results` from a skipped or failed non-finally `Task`, then the\n`finally` `task` would also be skipped and be included in the list of `Skipped Tasks` in the `Status`, [similarly to when using\n`Results` in other parts of the `finally` `task`](#consuming-task-execution-results-in-finally).\n\n#### `when` expressions using `Execution Status` of `PipelineTask` in `finally` `tasks`\n\n`when` expressions in `finally` `tasks` can utilize [`Execution Status` of `PipelineTasks`](#using-execution-status-of-pipelinetask),\nas demonstrated using [`golang-build`](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/golang-build\/0.1) and\n[`send-to-channel-slack`](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/send-to-channel-slack\/0.1) Catalog `Tasks`:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: pipelinerun-\nspec:\n  pipelineSpec:\n    tasks:\n      - name: golang-build\n        taskRef:\n          name: golang-build\n      # [\u2026]\n    finally:\n      - name: notify-build-failure # executed only when build task fails\n        when:\n          - input: $(tasks.golang-build.status)\n            operator: in\n            values: [\"Failed\"]\n        taskRef:\n          name: send-to-slack-channel\n      # [\u2026]\n```\n\nFor an end-to-end example, see [PipelineRun with `when` 
expressions](..\/examples\/v1\/pipelineruns\/pipelinerun-with-when-expressions.yaml).\n\n#### `when` expressions using `Aggregate Execution Status` of `Tasks` in `finally` `tasks`\n\n`when` expressions in `finally` `tasks` can utilize\n[`Aggregate Execution Status` of `Tasks`](#using-aggregate-execution-status-of-all-tasks) as demonstrated:\n\n```yaml\nfinally:\n  - name: notify-any-failure # executed only when one or more tasks fail\n    when:\n      - input: $(tasks.status)\n        operator: in\n        values: [\"Failed\"]\n    taskRef:\n      name: notify-failure\n```\n\nFor an end-to-end example, see [PipelineRun with `when` expressions](..\/examples\/v1\/pipelineruns\/pipelinerun-with-when-expressions.yaml).\n\n### Known Limitations\n\n#### Cannot configure the `finally` task execution order\n\nIt's not possible to configure or modify the execution order of the `finally` tasks. Unlike `Tasks` in a `Pipeline`,\nall `finally` tasks run simultaneously and start executing once all `PipelineTasks` under `tasks` have settled, which means\nno `runAfter` can be specified in `finally` tasks.\n\n## Using Custom Tasks\n\nCustom Tasks have been promoted from `v1alpha1` to `v1beta1`. From `v0.43.0` to `v0.46.0`, the Pipeline controller is able to create either `v1alpha1` or `v1beta1` Custom Tasks, gated by the feature flag `custom-task-version`, which defaults to `v1beta1`. You can set `custom-task-version` to `v1alpha1` or `v1beta1` to control which version to create.\n\nStarting from `v0.47.0`, the feature flag `custom-task-version` is removed and only `v1beta1` Custom Tasks are supported.\n
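\n\nAs a sketch of how that flag was set on a controller between `v0.43.0` and `v0.46.0` (assuming the standard `feature-flags` ConfigMap in the default `tekton-pipelines` installation namespace), pinning the legacy behavior could look like:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: feature-flags\n  namespace: tekton-pipelines\ndata:\n  # create v1alpha1 Run objects for Custom Tasks; the default was "v1beta1"\n  custom-task-version: "v1alpha1"\n```\n\nRemove this override when upgrading to `v0.47.0` or later, since only `v1beta1` is supported there.\n\n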
See the [migration doc](migrating-v1alpha1.Run-to-v1beta1.CustomRun.md) for details.\n\n[Custom Tasks](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0002-custom-tasks.md)\ncan implement behavior that doesn't correspond directly to running a workload in a `Pod` on the cluster.\nFor example, a custom task might execute some operation outside of the cluster and wait for its execution to complete.\n\nA `PipelineRun` starts a custom task by creating a [`CustomRun`](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/customruns.md) instead of a `TaskRun`.\nIn order for a custom task to execute, there must be a custom task controller running on the cluster\nthat is responsible for watching and updating `CustomRun`s which reference their type.\n\n### Specifying the target Custom Task\n\nTo specify the custom task type you want to execute, the `taskRef` field\nmust include the custom task's `apiVersion` and `kind` as shown below.\nUsing `apiVersion` will always create a `CustomRun`. 
If `apiVersion` is set, `kind` is required as well.\n\n```yaml\nspec:\n  tasks:\n    - name: run-custom-task\n      taskRef:\n        apiVersion: example.dev\/v1alpha1\n        kind: Example\n```\n\nThis creates a `Run\/CustomRun` of a custom task of type `Example` in the `example.dev` API group with the version `v1alpha1`.\n\nA validation error will be returned if `apiVersion` or `kind` is missing.\n\nYou can also specify the `name` of a custom task resource object previously defined in the cluster.\n\n```yaml\nspec:\n  tasks:\n    - name: run-custom-task\n      taskRef:\n        apiVersion: example.dev\/v1alpha1\n        kind: Example\n        name: myexample\n```\n\nIf the `taskRef` specifies a name, the custom task controller should look up the\n`Example` resource with that name and use that object to configure the execution.\n\nIf the `taskRef` does not specify a name, the custom task controller might support\nsome default behavior for executing unnamed tasks.\n\n### Specifying a Custom Task Spec in-line (or embedded)\n\n**For `v1alpha1.Run`**\n```yaml\nspec:\n  tasks:\n    - name: run-custom-task\n      taskSpec:\n        apiVersion: example.dev\/v1alpha1\n        kind: Example\n        spec:\n          field1: value1\n          field2: value2\n```\n\n**For `v1beta1.CustomRun`**\n```yaml\nspec:\n  tasks:\n    - name: run-custom-task\n      taskSpec:\n        apiVersion: example.dev\/v1alpha1\n        kind: Example\n        customSpec:\n          field1: value1\n          field2: value2\n```\n\nIf the custom task controller supports the in-line or embedded task spec, this will create a `Run\/CustomRun` of a custom task of\ntype `Example` in the `example.dev` API group with the version `v1alpha1`.\n\nIf the `taskSpec` is not supported, the custom task controller should produce proper validation errors.\n\nPlease take a look at the\ndeveloper guide for custom controllers supporting `taskSpec`:\n- [guidance for 
`Run`](runs.md#developer-guide-for-custom-controllers-supporting-spec)\n- [guidance for `CustomRun`](customruns.md#developer-guide-for-custom-controllers-supporting-customspec)\n\n`taskSpec` support for `pipelineRun` was designed and discussed in\n[TEP-0061](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0061-allow-custom-task-to-be-embedded-in-pipeline.md)\n\n### Specifying parameters\n\nIf a custom task supports [`parameters`](tasks.md#specifying-parameters), you can use the\n`params` field to specify their values:\n\n```yaml\nspec:\n  tasks:\n    - name: run-custom-task\n      taskRef:\n        apiVersion: example.dev\/v1alpha1\n        kind: Example\n        name: myexample\n      params:\n        - name: foo\n          value: bah\n```\n\n### Context Variables\n\nThe `Parameters` in the `Params` field will accept\n[context variables](variables.md) that will be substituted, including:\n\n* `PipelineRun` name, namespace and uid\n* `Pipeline` name\n* `PipelineTask` retries\n\n```yaml\nspec:\n  tasks:\n    - name: run-custom-task\n      taskRef:\n        apiVersion: example.dev\/v1alpha1\n        kind: Example\n        name: myexample\n      params:\n        - name: foo\n          value: $(context.pipeline.name)\n```\n\n### Specifying matrix\n\n> :seedling: **`Matrix` is an [alpha](additional-configs.md#alpha-features) feature.**\n> The `enable-api-fields` feature flag must be set to `\"alpha\"` to specify `Matrix` in a `PipelineTask`.\n\nIf a custom task supports [`parameters`](tasks.md#specifying-parameters), you can use the\n`matrix` field to specify their values if you want to fan out the `Task`:\n\n```yaml\nspec:\n  tasks:\n    - name: run-custom-task\n      taskRef:\n        apiVersion: example.dev\/v1alpha1\n        kind: Example\n        name: myexample\n      params:\n        - name: foo\n          value: bah\n      matrix:\n        params:\n        - name: bar\n          value:\n            - qux\n            - thud\n        include:\n          - name: build-1\n            params:\n            - name: common-package\n              value: path-to-common-pkg\n```\n\nFor further information, read [`Matrix`](.\/matrix.md).\n\n### Specifying workspaces\n\nIf the custom task supports it, you can provide [`Workspaces`](workspaces.md#using-workspaces-in-tasks) to share data with the custom task.\n\n```yaml\nspec:\n  tasks:\n    - name: run-custom-task\n      taskRef:\n        apiVersion: example.dev\/v1alpha1\n        kind: Example\n        name: myexample\n      workspaces:\n        - name: my-workspace\n```\n\nConsult the documentation of the custom task that you are using to determine whether it supports workspaces and how to name them.\n\n### Using `Results`\n\nIf the custom task produces results, you can reference them in a Pipeline using the normal syntax,\n`$(tasks.<task-name>.results.<result-name>)`.\n\n### Specifying `Timeout`\n\n#### `v1alpha1.Run`\nIf the custom task supports it as [we recommended](runs.md#developer-guide-for-custom-controllers-supporting-timeout), you can provide `timeout` to specify the maximum running time of a `Run` (including all retry attempts or other operations).\n\n#### `v1beta1.CustomRun`\nIf the custom task supports it as [we recommended](customruns.md#developer-guide-for-custom-controllers-supporting-timeout), you can provide `timeout` to specify the maximum running time of one `CustomRun` execution.\n\n```yaml\nspec:\n  tasks:\n    - name: run-custom-task\n      timeout: 2s\n      taskRef:\n        apiVersion: example.dev\/v1alpha1\n        kind: Example\n        name: myexample\n```\n\nConsult the documentation of the custom task that you are using to determine whether it supports `Timeout`.\n\n### Specifying `Retries`\nIf the custom task supports it, you can provide `retries` to specify how many times you want to retry the custom task.\n\n```yaml\nspec:\n  tasks:\n    - name: run-custom-task\n      retries: 2\n      taskRef:\n        apiVersion: example.dev\/v1alpha1\n        kind: Example\n        name: myexample\n```\n\nConsult the documentation of the custom task that you are using to determine whether it supports `Retries`.\n\n### Known Custom Tasks\n\nWe try to list as many known Custom Tasks as possible here so that users can easily find what they want. Please feel free to share the Custom Task you implemented in this table.\n\n#### v1beta1.CustomRun\n\n| Custom Task                      | Description                                                                                                                      |\n|:---------------------------------|:---------------------------------------------------------------------------------------------------------------------------------|\n| [Wait Task Beta][wait-task-beta] | Waits a given amount of time before succeeding, specified by an input parameter named duration. Supports `timeout` and `retries`. |\n| [Approvals][approvals-beta] | Pauses the execution of `PipelineRuns` and waits for manual approvals. Version 0.6.0 and up. |\n\n#### v1alpha1.Run\n\n| Custom Task                                      | Description                                                                                                |\n|:-------------------------------------------------|:-----------------------------------------------------------------------------------------------------------|\n| [Pipeline Loops][pipeline-loops]                 | Runs a `Pipeline` in a loop with varying `Parameter` values.                                               |\n| [Common Expression Language][cel]                | Provides Common Expression Language support in Tekton Pipelines.                                           |\n| [Wait][wait]                                     | Waits a given amount of time, specified by a `Parameter` named \"duration\", before succeeding.              |\n| [Approvals][approvals-alpha]                     | Pauses the execution of `PipelineRuns` and waits for manual approvals. 
Version up to (and including) 0.5.0 |\n| [Pipelines in Pipelines][pipelines-in-pipelines] | Defines and executes a `Pipeline` in a `Pipeline`.                                                         |\n| [Task Group][task-group]                         | Groups `Tasks` together as a `Task`.                                                                       |\n| [Pipeline in a Pod][pipeline-in-pod]             | Runs `Pipeline` in a `Pod`.                                                                                |\n\n[pipeline-loops]: https:\/\/github.com\/tektoncd\/experimental\/tree\/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c\/pipeline-loops\n[task-loops]: https:\/\/github.com\/tektoncd\/experimental\/tree\/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c\/task-loops\n[cel]: https:\/\/github.com\/tektoncd\/experimental\/tree\/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c\/cel\n[wait]: https:\/\/github.com\/tektoncd\/experimental\/tree\/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c\/wait-task\n[approvals-alpha]: https:\/\/github.com\/automatiko-io\/automatiko-approval-task\/tree\/v0.5.0\n[approvals-beta]: https:\/\/github.com\/automatiko-io\/automatiko-approval-task\/tree\/v0.6.1\n[task-group]: https:\/\/github.com\/openshift-pipelines\/tekton-task-group\/tree\/39823f26be8f59504f242a45b9f2e791d4b36e1c\n[pipelines-in-pipelines]: https:\/\/github.com\/tektoncd\/experimental\/tree\/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c\/pipelines-in-pipelines\n[pipeline-in-pod]: https:\/\/github.com\/tektoncd\/experimental\/tree\/f60e1cd8ce22ed745e335f6f547bb9a44580dc7c\/pipeline-in-pod\n[wait-task-beta]: https:\/\/github.com\/tektoncd\/pipeline\/tree\/a127323da31bcb933a04a6a1b5dbb6e0411e3dc1\/test\/custom-task-ctrls\/wait-task-beta\n\n## Code examples\n\nFor a better understanding of `Pipelines`, study [our code examples](https:\/\/github.com\/tektoncd\/pipeline\/tree\/main\/examples).\n\n---\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons 
Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).","site":"tekton","answers_cleaned":"         linkTitle   Pipelines  weight  203            Pipelines     Pipelines   pipelines       Overview   overview       Configuring a  Pipeline    configuring a pipeline       Specifying  Workspaces    specifying workspaces       Specifying  Parameters    specifying parameters       Adding  Tasks  to the  Pipeline    adding tasks to the pipeline         Specifying Display Name   specifying displayname in pipelinetasks         Specifying Remote Tasks   specifying remote tasks         Specifying  Pipelines  in  PipelineTasks    specifying pipelines in pipelinetasks         Specifying  Parameters  in  PipelineTasks    specifying parameters in pipelinetasks         Specifying  Matrix  in  PipelineTasks    specifying matrix in pipelinetasks         Specifying  Workspaces  in  PipelineTasks    specifying workspaces in pipelinetasks         Tekton Bundles   tekton bundles         Using the  runAfter  field   using the runafter field         Using the  retries  field   using the retries field         Using the  onError  field   using the onerror field         Produce results with  OnError    produce results with onerror         Guard  Task  execution using  when  expressions   guard task execution using when expressions           Guarding a  Task  and its dependent  Tasks    guarding a task and its dependent tasks             Cascade  when  expressions to the specific dependent  Tasks    cascade when expressions to the specific dependent tasks             Compose using Pipelines in Pipelines   compose using pipelines in pipelines           Guarding a  Task  only   guarding a task only         Configuring the failure timeout   configuring the failure timeout       Using variable substitution   using variable substitution         Using the  retries  
  - [Using the `retries` and `retry-count` variable substitutions](#using-the-retries-and-retry-count-variable-substitutions)
- [Using `Results`](#using-results)
  - [Passing one Task's `Results` into the `Parameters` or `when` expressions of another](#passing-one-tasks-results-into-the-parameters-or-when-expressions-of-another)
  - [Emitting `Results` from a `Pipeline`](#emitting-results-from-a-pipeline)
- [Configuring the `Task` execution order](#configuring-the-task-execution-order)
- [Adding a description](#adding-a-description)
- [Adding `Finally` to the `Pipeline`](#adding-finally-to-the-pipeline)
  - [Specifying Display Name](#specifying-displayname-in-finally-tasks)
  - [Specifying `Workspaces` in `finally` tasks](#specifying-workspaces-in-finally-tasks)
  - [Specifying `Parameters` in `finally` tasks](#specifying-parameters-in-finally-tasks)
  - [Specifying `matrix` in `finally` tasks](#specifying-matrix-in-finally-tasks)
  - [Consuming `Task` execution results in `finally`](#consuming-task-execution-results-in-finally)
  - [Consuming `Pipeline` result with `finally`](#consuming-pipeline-result-with-finally)
  - [`PipelineRun` Status with `finally`](#pipelinerun-status-with-finally)
  - [Using Execution `Status` of `pipelineTask`](#using-execution-status-of-pipelinetask)
  - [Using Aggregate Execution `Status` of All `Tasks`](#using-aggregate-execution-status-of-all-tasks)
  - [Guard `finally` `Task` execution using `when` expressions](#guard-finally-task-execution-using-when-expressions)
    - [`when` expressions using `Parameters` in `finally` `Tasks`](#when-expressions-using-parameters-in-finally-tasks)
    - [`when` expressions using `Results` in `finally` `Tasks`](#when-expressions-using-results-in-finally-tasks)
    - [`when` expressions using `Execution Status` of `PipelineTask` in `finally` `tasks`](#when-expressions-using-execution-status-of-pipelinetask-in-finally-tasks)
    - [`when` expressions using `Aggregate Execution Status` of `Tasks` in `finally` `tasks`](#when-expressions-using-aggregate-execution-status-of-tasks-in-finally-tasks)
  - [Known Limitations](#known-limitations)
    - [Cannot configure the `finally` task execution order](#cannot-configure-the-finally-task-execution-order)
- [Using Custom Tasks](#using-custom-tasks)
  - [Specifying the target Custom Task](#specifying-the-target-custom-task)
  - [Specifying a Custom Task Spec in-line (or embedded)](#specifying-a-custom-task-spec-in-line-or-embedded)
  - [Specifying parameters](#specifying-parameters-1)
  - [Specifying matrix](#specifying-matrix)
  - [Specifying workspaces](#specifying-workspaces-1)
  - [Using `Results`](#using-results-1)
  - [Specifying `Timeout`](#specifying-timeout)
  - [Specifying `Retries`](#specifying-retries)
  - [Known Custom Tasks](#known-custom-tasks)
- [Code examples](#code-examples)

## Overview

A `Pipeline` is a collection of `Tasks` that you define and arrange in a specific order of execution as part of your continuous integration flow. Each `Task` in a `Pipeline` executes as a `Pod` on your Kubernetes cluster. You can configure various execution conditions to fit your business needs.

## Configuring a `Pipeline`

A `Pipeline` definition supports the following fields:

- Required:
  - [`apiVersion`][kubernetes-overview] - Specifies the API version, for example `tekton.dev/v1beta1`.
  - [`kind`][kubernetes-overview] - Identifies this resource object as a `Pipeline` object.
  - [`metadata`][kubernetes-overview] - Specifies metadata that uniquely identifies the `Pipeline` object. For example, a `name`.
  - [`spec`][kubernetes-overview] - Specifies the configuration information for this `Pipeline` object. This must include:
    - [`tasks`](#adding-tasks-to-the-pipeline) - Specifies the `Tasks` that comprise the `Pipeline` and the details of their execution.
- Optional:
  - [`params`](#specifying-parameters) - Specifies the `Parameters` that the
    `Pipeline` requires.
  - [`workspaces`](#specifying-workspaces) - Specifies a set of Workspaces that the `Pipeline` requires.
  - [`tasks`](#adding-tasks-to-the-pipeline):
    - [`name`](#adding-tasks-to-the-pipeline) - the name of this `Task` within the context of this `Pipeline`.
    - [`displayName`](#specifying-displayname-in-pipelinetasks) - a user-facing name of this `Task` within the context of this `Pipeline`.
    - [`description`](#adding-tasks-to-the-pipeline) - a description of this `Task` within the context of this `Pipeline`.
    - [`taskRef`](#adding-tasks-to-the-pipeline) - a reference to a `Task` definition.
    - [`taskSpec`](#adding-tasks-to-the-pipeline) - a specification of a `Task`.
    - [`runAfter`](#using-the-runafter-field) - Indicates that a `Task` should execute after one or more other `Tasks` without output linking.
    - [`retries`](#using-the-retries-field) - Specifies the number of times to retry the execution of a `Task` after a failure. Does not apply to execution cancellations.
    - [`when`](#guard-task-execution-using-when-expressions) - Specifies `when` expressions that guard the execution of a `Task`; allow execution only when all `when` expressions evaluate to true.
    - [`timeout`](#configuring-the-failure-timeout) - Specifies the timeout before a `Task` fails.
    - [`params`](#specifying-parameters-in-pipelinetasks) - Specifies the `Parameters` that a `Task` requires.
    - [`workspaces`](#specifying-workspaces-in-pipelinetasks) - Specifies the `Workspaces` that a `Task` requires.
    - [`matrix`](#specifying-matrix-in-pipelinetasks) - Specifies the `Parameters` used to fan out a `Task` into multiple `TaskRuns` or `Runs`.
  - [`results`](#emitting-results-from-a-pipeline) - Specifies the location to which the `Pipeline` emits its execution results.
  - [`displayName`](#specifying-a-display-name) - is a user-facing name of the pipeline that may be used to populate a UI.
  - [`description`](#adding-a-description) - Holds an informative description of the `Pipeline` object.
  - [`finally`](#adding-finally-to-the-pipeline) - Specifies one or more `Tasks` to be executed in parallel after all other tasks have completed.
    - [`name`](#adding-finally-to-the-pipeline) - the name of this `Task` within the context of this `Pipeline`.
    - [`displayName`](#specifying-displayname-in-finally-tasks) - a user-facing name of this `Task` within the context of this `Pipeline`.
    - [`description`](#adding-finally-to-the-pipeline) - a description of this `Task` within the context of this `Pipeline`.
    - [`taskRef`](#adding-finally-to-the-pipeline) - a reference to a `Task` definition.
    - [`taskSpec`](#adding-finally-to-the-pipeline) - a specification of a `Task`.
    - [`retries`](#using-the-retries-field) - Specifies the number of times to retry the execution of a `Task` after a failure. Does not apply to execution cancellations.
    - [`when`](#guard-finally-task-execution-using-when-expressions) - Specifies `when` expressions that guard the execution of a `Task`; allow execution only when all `when` expressions evaluate to true.
    - [`timeout`](#configuring-the-failure-timeout) - Specifies the timeout before a `Task` fails.
    - [`params`](#specifying-parameters-in-finally-tasks) - Specifies the `Parameters` that a `Task` requires.
    - [`workspaces`](#specifying-workspaces-in-finally-tasks) - Specifies the `Workspaces` that a `Task` requires.
    - [`matrix`](#specifying-matrix-in-finally-tasks) - Specifies the `Parameters` used to fan out a `Task` into multiple `TaskRuns` or `Runs`.

[kubernetes-overview]: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields

## Specifying `Workspaces`

`Workspaces` allow you to specify one or more volumes that each `Task` in the `Pipeline` requires during execution. You specify one or more `Workspaces` in the `workspaces` field. For example:

```yaml
spec:
  workspaces:
    - name: pipeline-ws1 # The name of the workspace in the Pipeline
  tasks:
    - name: use-ws-from-pipeline
      taskRef:
        name: gen-code # gen-code expects a workspace with name "output"
      workspaces:
        - name: output
          workspace: pipeline-ws1
    - name: use-ws-again
      taskRef:
        name: commit # commit expects a workspace with name "src"
      runAfter:
        - use-ws-from-pipeline # important: use-ws-from-pipeline writes to the workspace first
      workspaces:
        - name: src
          workspace: pipeline-ws1
```

For simplicity you can also map the name of the `Workspace` in `PipelineTask` to match with the `Workspace` from the `Pipeline`. For example:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline
spec:
  workspaces:
    - name: source
  tasks:
    - name: gen-code
      taskRef:
        name: gen-code # gen-code expects a Workspace named "source"
      workspaces:
        - name: source # mapping workspace name
    - name: commit
      taskRef:
        name: commit # commit expects a Workspace named "source"
      workspaces:
        - name: source # mapping workspace name
      runAfter:
        - gen-code
```

For more information, see:
- [Using `Workspaces` in `Pipelines`](workspaces.md#using-workspaces-in-pipelines)
- The [`Workspaces` in a `PipelineRun`](../examples/v1/pipelineruns/workspaces.yaml) code example
- The [variables available in a `PipelineRun`](variables.md#variables-available-in-a-pipeline), including `workspaces.<name>.bound`
- [Mapping `Workspaces`](https://github.com/tektoncd/community/blob/main/teps/0108-mapping-workspaces.md)
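As a brief sketch of how a declared `Workspace` gets an actual volume at execution time (the `PipelineRun` name and the `emptyDir` binding here are illustrative, not part of the examples above), a `PipelineRun` binds the `source` Workspace like this:

```yaml
# Hypothetical PipelineRun binding the "source" Workspace of the Pipeline above.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: pipeline-run
spec:
  pipelineRef:
    name: pipeline
  workspaces:
    - name: source
      emptyDir: {} # ephemeral scratch space; a volumeClaimTemplate or existing PVC is typical for real builds
```

See the linked `Workspaces` in a `PipelineRun` example for the full set of supported bindings.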
## Specifying `Parameters`

(See also [Specifying `Parameters` in `Tasks`](tasks.md#specifying-parameters))

You can specify global parameters, such as compilation flags or artifact names, that you want to supply to the `Pipeline` at execution time. `Parameters` are passed to the `Pipeline` from its corresponding `PipelineRun` and can replace template values specified within each `Task` in the `Pipeline`.

Parameter names:
- Must only contain alphanumeric characters, hyphens (`-`), and underscores (`_`).
- Must begin with a letter or an underscore (`_`).

For example, `fooIs-Bar_` is a valid parameter name, but `barIsBa$` or `0banana` are not.

Each declared parameter has a `type` field, which can be set to either `array` or `string`. `array` is useful in cases where the number of compilation flags being supplied to the `Pipeline` varies throughout its execution. If no value is specified, the `type` field defaults to `string`. When the actual parameter value is supplied, its parsed type is validated against the `type` field. The `description` and `default` fields for a `Parameter` are optional.

The following example illustrates the use of `Parameters` in a `Pipeline`. The following `Pipeline` declares two input parameters:

- `context` which passes its value (a string) to the `Task` to set the value of the `pathToContext` parameter within the `Task`.
- `flags` which passes its value (an array) to the `Task` to set the value of the `flags` parameter within the `Task`. The `flags` parameter within the `Task` **must** also be an array.

If you specify a value for the `default` field and invoke this `Pipeline` in a `PipelineRun` without specifying a value for `context`, that value will be used.

**Note:** Input parameter values can be used as variables throughout the `Pipeline` by using [variable substitution](variables.md#variables-available-in-a-pipeline).

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-with-parameters
spec:
  params:
    - name: context
      type: string
      description: Path to context
      default: /some/where/or/other
    - name: flags
      type: array
      description: List of flags
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: $(params.context)
        - name: flags
          value: ["$(params.flags[*])"]
```

The following `PipelineRun` supplies a value for `context`:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipelinerun-with-parameters
spec:
  pipelineRef:
    name: pipeline-with-parameters
  params:
    - name: "context"
      value: "/workspace/examples/microservices/leeroy-web"
    - name: "flags"
      value:
        - "foo"
        - "bar"
```

### Param enum

> :seedling: **`enum` is an [alpha](additional-configs.md#alpha-features) feature.** The `enable-param-enum` feature flag must be set to `"true"` to enable this feature.

Parameter declarations can include `enum`, which is a predefined set of valid values that can be accepted by the `Pipeline` `Param`. If a `Param` has both `enum` and a default value, the default value must be in the `enum` set. For example, the valid/allowed values for `Param` `message` are bounded to `v1` and `v2`:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: pipeline-param-enum
spec:
  params:
    - name: message
      enum: ["v1", "v2"]
      default: "v1"
  tasks:
    - name: task1
      params:
        - name: message
          value: $(params.message)
      taskSpec:
        steps:
          - name: build
            image: bash:3.2
            script: |
              echo "$(params.message)"
```

If the `Param` value passed in by the `PipelineRun` is **not** in the predefined `enum` list, the `PipelineRun` will fail with reason `InvalidParamValue`.
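For instance, a `PipelineRun` that supplies a value outside the declared set would be rejected (this `PipelineRun` name is illustrative; the `pipeline-param-enum` `Pipeline` is the one declared above):

```yaml
# Hypothetical PipelineRun supplying a value outside the declared enum.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: pipelinerun-param-enum
spec:
  pipelineRef:
    name: pipeline-param-enum
  params:
    - name: message
      value: "v3" # not in ["v1", "v2"] -> the PipelineRun fails with reason InvalidParamValue
```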
If a `PipelineTask` references a `Task` with `enum`, the `enum`s specified in the Pipeline `spec.params` (pipeline-level `enum`) must be a **subset** of the `enum`s specified in the referenced `Task` (task-level `enum`). An empty pipeline-level `enum` is invalid in this scenario, since an empty `enum` set indicates a **universal set**, which allows all possible values. The same rules apply to `Pipelines` with embedded `Tasks`.

In the below example, the referenced `Task` accepts `v1` and `v2` as valid values; the `Pipeline` further restricts the valid value to `v1`:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: param-enum-demo
spec:
  params:
    - name: message
      type: string
      enum: ["v1", "v2"]
  steps:
    - name: build
      image: bash:latest
      script: |
        echo "$(params.message)"
```

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: pipeline-param-enum
spec:
  params:
    - name: message
      enum: ["v1"] # note that an empty enum set is invalid
  tasks:
    - name: task1
      params:
        - name: message
          value: $(params.message)
      taskRef:
        name: param-enum-demo
```

Note that this subset restriction only applies to task-level `params` with a **direct single** reference to pipeline-level `params`. If a task-level `param` references multiple pipeline-level `params`, the subset validation is not applied:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
...
spec:
  params:
    - name: message1
      enum: ["v1"]
    - name: message2
      enum: ["v2"]
  tasks:
    - name: task1
      params:
        - name: message
          value: "$(params.message1) and $(params.message2)"
      taskSpec:
        params:
          - name: message
            enum: [...] # the message enum is not required to be a subset of message1 or message2
```

Tekton validates user-provided values in a `PipelineRun` against the `enum` specified in the `PipelineSpec.params`. Tekton also validates any resolved `param` value against the `enum` specified in each `PipelineTask` before creating the `TaskRun`.

See usage in this [example](../examples/v1/pipelineruns/alpha/param-enum.yaml).

### Propagated Params

Like with embedded [pipelineruns](pipelineruns.md#propagated-parameters), you can propagate `params` declared in the `pipeline` down to the inlined `pipelineTasks` and their inlined `Steps`. Wherever a resource (e.g. a `pipelineTask`) or a `StepAction` is referenced, the parameters need to be passed explicitly.

For example, the following is a valid yaml:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-propagated-params
spec:
  params:
    - name: HELLO
      default: "Hello World!"
    - name: BYE
      default: "Bye World!"
  tasks:
    - name: echo-hello
      taskSpec:
        steps:
          - name: echo
            image: ubuntu
            script: |
              #!/usr/bin/env bash
              echo "$(params.HELLO)"
    - name: echo-bye
      taskSpec:
        steps:
          - name: echo-action
            ref:
              name: step-action-echo
            params:
              - name: msg
                value: "$(params.BYE)"
```

The same rules defined in [pipelineruns](pipelineruns.md#propagated-parameters) apply here.

## Adding `Tasks` to the `Pipeline`

Your `Pipeline` definition must reference at least one [`Task`](tasks.md). Each `Task` within a `Pipeline` must have a [valid](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/) `name` and a `taskRef` or a `taskSpec`. For example:

```yaml
tasks:
  - name: build-the-image
    taskRef:
      name: build-push
```

**Note:** Using both `apiVersion` and `kind` will create a [`CustomRun`](customruns.md); don't set `apiVersion` if only referring to a [`Task`](tasks.md).

or

```yaml
tasks:
  - name: say-hello
    taskSpec:
      steps:
        - image: ubuntu
          script: echo 'hello there'
```

Note that any `task` specified in `taskSpec` will be the same version as the `Pipeline`.
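To make the note about `apiVersion`/`kind` concrete, here is a hedged sketch: the `example.dev` API group and `Example` kind are hypothetical, and a matching Custom Task controller would have to be installed for this to run.

```yaml
tasks:
  - name: run-custom
    taskRef:
      apiVersion: example.dev/v1alpha1 # setting apiVersion makes this a Custom Task reference
      kind: Example                    # hypothetical Custom Task kind
      name: my-example                 # Tekton creates a CustomRun instead of a TaskRun
```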
### Specifying `displayName` in `PipelineTasks`

The `displayName` field is an optional field that allows you to add a user-facing name of the `PipelineTask` that can be used to populate and distinguish it in the dashboard. For example:

```yaml
spec:
  tasks:
    - name: scan
      displayName: "Code Scan"
      taskRef:
        name: sonar-scan
```

The `displayName` also allows you to parameterize the human-readable name of your choice based on the [`params`](#specifying-parameters), [the task results](#passing-one-tasks-results-into-the-parameters-or-when-expressions-of-another), and [the context variables](#context-variables). For example:

```yaml
spec:
  params:
    - name: application
  tasks:
    - name: scan
      displayName: "Code Scan for $(params.application)"
      taskRef:
        name: sonar-scan
    - name: upload-scan-report
      displayName: "Upload Scan Report $(tasks.scan.results.report)"
      taskRef:
        name: upload
```

Specifying task results in the `displayName` does not introduce an inherent resource dependency among `tasks`. The pipeline author is responsible for specifying dependency explicitly, either using [`runAfter`](#using-the-runafter-field), or relying on [`whenExpressions`](#guard-task-execution-using-when-expressions) or [task results in `params`](#using-results).

The fully resolved `displayName` is also available in the status as part of the `pipelineRun.status.childReferences`. Clients such as the dashboard, CLI, etc. can retrieve the `displayName` from the `childReferences`. The `displayName` mainly drives a better user experience; at the same time, it is not validated for content or length by the controller.

### Specifying Remote Tasks

**([beta feature](https://github.com/tektoncd/pipeline/blob/main/docs/install.md#beta-features))**

A `taskRef` field may specify a Task in a remote location such as git. Support for specific types of remote will depend on the Resolvers your cluster's operator has installed. For more information, including a tutorial, please check the [resolution docs](resolution.md). The below example demonstrates referencing a Task in git:

```yaml
tasks:
  - name: "go-build"
    taskRef:
      resolver: git
      params:
        - name: url
          value: https://github.com/tektoncd/catalog.git
        - name: revision
          # value can use params declared at the pipeline level or a static value like main
          value: $(params.gitRevision)
        - name: pathInRepo
          value: task/golang-build/0.3/golang-build.yaml
```

### Specifying `Pipelines` in `PipelineTasks`

> :seedling: **Specifying `pipelines` in `pipelineTasks` is an [alpha](additional-configs.md#alpha-features) feature.** The `enable-api-fields` feature flag must be set to `"alpha"` to specify `PipelineRef` or `PipelineSpec` in a `PipelineTask`. This feature is in **Preview Only** mode and not yet supported/implemented.

Apart from `taskRef` and `taskSpec`, `pipelineRef` and `pipelineSpec` allow you to specify a `pipeline` in a `pipelineTask`. This allows you to generate a child `pipelineRun` which is inherited by the parent `pipelineRun`.

```yaml
kind: Pipeline
metadata:
  name: security-scans
spec:
  tasks:
    - name: scorecards
      taskSpec:
        steps:
          - image: alpine
            name: step-1
            script: |
              echo "Generating scorecard report"
    - name: codeql
      taskSpec:
        steps:
          - image: alpine
            name: step-1
            script: |
              echo "Generating codeql report"
---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: clone-scan-notify
spec:
  tasks:
    - name: git-clone
      taskSpec:
        steps:
          - image: alpine
            name: step-1
            script: |
              echo "Cloning a repo to run security scans"
    - name: security-scans
      runAfter:
        - git-clone
      pipelineRef:
        name: security-scans
```

For further information, read [`Pipelines` in `Pipelines`](./pipelines-in-pipelines.md).

### Specifying `Parameters` in `PipelineTasks`

You can also provide [`Parameters`](tasks.md#specifying-parameters):

```yaml
spec:
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: /workspace/examples/microservices/leeroy-web
```

### Specifying `Matrix` in `PipelineTasks`

> :seedling: **`Matrix` is a [beta](additional-configs.md#beta-features) feature.** The `enable-api-fields` feature flag can be set to `"beta"` to specify `Matrix` in a `PipelineTask`.

You can also provide [`Parameters`](tasks.md#specifying-parameters) through the `matrix` field:

```yaml
spec:
  tasks:
    - name: browser-test
      taskRef:
        name: browser-test
      matrix:
        params:
          - name: browser
            value:
              - chrome
              - safari
              - firefox
        include:
          - name: build-1
            params:
              - name: browser
                value: chrome
              - name: url
                value: some-url
```

For further information, read [`Matrix`](./matrix.md).

### Specifying `Workspaces` in `PipelineTasks`

You can also provide [`Workspaces`](tasks.md#specifying-workspaces):

```yaml
spec:
  tasks:
    - name: use-workspace
      taskRef:
        name: gen-code # gen-code expects a workspace with name "output"
      workspaces:
        - name: output
          workspace: shared-ws
```

### Tekton Bundles

A `Tekton Bundle` is an OCI artifact that contains Tekton resources like `Tasks` which can be referenced within a `taskRef`. There is currently a hard limit of 20 objects in a bundle.

You can reference a `Tekton bundle` in a `TaskRef` in both `v1` and `v1beta1` using [remote resolution](./bundle-resolver.md#pipeline-resolution). The example syntax shown below for `v1` uses remote resolution and requires enabling [beta features](./additional-configs.md#beta-features).

```yaml
spec:
  tasks:
    - name: hello-world
      taskRef:
        resolver: bundles
        params:
          - name: bundle
            value: docker.io/myrepo/mycatalog
          - name: name
            value: echo-task
          - name: kind
            value: Task
```

You may also specify a `tag` as you would with a Docker
image, which will give you a fixed, repeatable reference to a `Task`:

```yaml
spec:
  taskRef:
    resolver: bundles
    params:
      - name: bundle
        value: docker.io/myrepo/mycatalog:v1.0.1
      - name: name
        value: echo-task
      - name: kind
        value: Task
```

You may also specify a fixed digest instead of a tag:

```yaml
spec:
  taskRef:
    resolver: bundles
    params:
      - name: bundle
        value: docker.io/myrepo/mycatalog@sha256:abc123
      - name: name
        value: echo-task
      - name: kind
        value: Task
```

Any of the above options will fetch the image using the `ImagePullSecrets` attached to the `ServiceAccount` specified in the `PipelineRun`. See the [Service Account](pipelineruns.md#specifying-custom-serviceaccount-credentials) section for details on how to configure a `ServiceAccount` on a `PipelineRun`. The `PipelineRun` will then run that `Task` without registering it in the cluster, allowing multiple versions of the same named `Task` to be run at once.

`Tekton Bundles` may be constructed with any toolsets that produce valid OCI image artifacts, so long as the artifact adheres to the [contract](tekton-bundle-contracts.md).

### Using the `runAfter` field

If you need your `Tasks` to execute in a specific order within the `Pipeline`, use the `runAfter` field to indicate that a `Task` must execute after one or more other `Tasks`.

In the example below, we want to test the code before we build it. Since there is no output from the `test-app` `Task`, the `build-app` `Task` uses `runAfter` to indicate that `test-app` must run before it, regardless of the order in which they are referenced in the `Pipeline` definition.

```yaml
workspaces:
  - name: source
tasks:
  - name: test-app
    taskRef:
      name: make-test
    workspaces:
      - name: source
        workspace: source
  - name: build-app
    taskRef:
      name: kaniko-build
    runAfter:
      - test-app
    workspaces:
      - name: source
        workspace: source
```

### Using the `retries` field

For each `Task` in the `Pipeline`, you can specify the number of times Tekton should retry its execution when it fails. When a `Task` fails, the corresponding `TaskRun` sets its `Succeeded` `Condition` to `False`. The `retries` field instructs Tekton to retry executing the `Task` when this happens. `retries` are executed even when other `Task`s in the `Pipeline` have failed, unless the `PipelineRun` has been [cancelled](pipelineruns.md#cancelling-a-pipelinerun) or [gracefully cancelled](pipelineruns.md#gracefully-cancelling-a-pipelinerun).

If you expect a `Task` to encounter problems during execution (for example, you know that there will be issues with network connectivity or missing dependencies), set its `retries` field to a suitable value greater than 0. If you don't explicitly specify a value, Tekton does not attempt to execute the failed `Task` again.

In the example below, the execution of the `build-the-image` `Task` will be retried once after a failure; if the retried execution fails too, the `Task` execution fails as a whole.

```yaml
tasks:
  - name: build-the-image
    retries: 1
    taskRef:
      name: build-push
```

### Using the `onError` field

When a `PipelineTask` fails, the rest of the `PipelineTasks` are skipped and the `PipelineRun` is declared a failure. If you would like to ignore such a `PipelineTask` failure and continue executing the rest of the `PipelineTasks`, you can specify `onError` for such a `PipelineTask`.

`onError` can be set to `stopAndFail` (default) or `continue`. The failure of a `PipelineTask` with `stopAndFail` stops and fails the whole `PipelineRun`. A `PipelineTask` that fails with `continue` does not fail the whole `PipelineRun`, and the rest of the `PipelineTask`s continue to execute.

To ignore a `PipelineTask` failure, set `onError` to `continue`:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: demo
spec:
  tasks:
    - name: task1
      onError: continue
      taskSpec:
        steps:
          - name: step1
            image: alpine
            script: |
              exit 1
```

At runtime, the failure is ignored when determining the `PipelineRun` status. The `PipelineRun` `message` contains the ignored failure info:

```yaml
status:
  conditions:
    - lastTransitionTime: "2023-09-28T19:08:30Z"
      message: 'Tasks Completed: 1 (Failed: 1 (Ignored: 1), Cancelled 0), Skipped: 0'
      reason: Succeeded
      status: "True"
      type: Succeeded
```

Note that the `TaskRun` status remains as it is, unaffected by `onError`. Failed but ignored `TaskRuns` result in a `failed` status with reason `FailureIgnored`. For example, the `TaskRun` created by the above `PipelineRun` has the following status:

```bash
$ kubectl get tr demo-run-task1
NAME             SUCCEEDED   REASON           STARTTIME   COMPLETIONTIME
demo-run-task1   False       FailureIgnored   12m         12m
```

To specify `onError` for a `step`, please see [specifying `onError` for a `step`](tasks.md#specifying-onerror-for-a-step).

**Note:** Setting [`retries`](#using-the-retries-field) and `onError: continue` at the same time is **NOT** allowed.

#### Produce results with `OnError`

When a `PipelineTask` is set to ignore error and the `PipelineTask` is able to initialize a result before failing, the result is made available to the consumer `PipelineTasks`:

```yaml
tasks:
  - name: task1
    onError: continue
    taskSpec:
      results:
        - name: result1
      steps:
        - name: step1
          image: alpine
          script: |
            echo -n 123 | tee $(results.result1.path)
            exit 1
```

The consumer `PipelineTasks` can access the result by referencing `$(tasks.task1.results.result1)`.

If the result is **NOT** initialized before failing, and there is a `PipelineTask` consuming it:

```yaml
tasks:
  - name: task1
    onError: continue
    taskSpec:
      results:
        - name: result1
      steps:
        - name: step1
          image: alpine
          script: |
            exit 1
            echo -n 123 | tee $(results.result1.path)
```

- If the consuming `PipelineTask` has `onError: stopAndFail`, the `PipelineRun` will fail with `InvalidTaskResultReference`.
- If the consuming `PipelineTask` has `onError: continue`, the consuming `PipelineTask` will be skipped with reason `Results were missing`, and the `PipelineRun` will continue to execute.

### Guard `Task` execution using `when` expressions

To run a `Task` only when certain conditions are met, it is possible to guard task execution using the `when` field. The `when` field allows you to list a series of references to `when` expressions.

The components of `when` expressions are `input`, `operator` and `values`:

| Component  | Description | Syntax |
|------------|-------------|--------|
| `input`    | Input for the `when` expression, defaults to an empty string if not provided. | Static values e.g. `"ubuntu"`<br/>Variables ([parameters](#specifying-parameters) or [results](#using-results)) e.g. `"$(params.image)"` or `"$(tasks.task1.results.image)"` or `"$(tasks.task1.results.array-results[1])"` |
| `operator` | `operator` represents an `input`'s relationship to a set of `values`; a valid `operator` must be provided. | `in` or `notin` |
| `values`   | An array of string values; the `values` array must be provided and has to be non-empty. | An array param e.g. `["$(params.images[*])"]`<br/>An array result of a task `["$(tasks.task1.results.array-results[*])"]`<br/>`values` can contain static values e.g. `"ubuntu"`<br/>`values` can contain variables ([parameters](#specifying-parameters) or [results](#using-results)) or a Workspace's [`bound` state](#specifying-workspaces) e.g. `["$(params.image)"]` or `["$(tasks.task1.results.image)"]` or `["$(tasks.task1.results.array-results[1])"]` |

The [`Parameters`](#specifying-parameters) are read from the `Pipeline` and [`Results`](#using-results) are read directly from previous [`Tasks`](#adding-tasks-to-the-pipeline). Using [`Results`](#using-results) in a `when` expression in a guarded `Task`
introduces a resource dependency on the previous  Task  that produced the  Result    The declared  when  expressions are evaluated before the  Task  is run  If all the  when  expressions evaluate to  True   the  Task  is run  If any of the  when  expressions evaluate to  False   the  Task  is not run and the  Task  is listed in the   Skipped Tasks  section of the  PipelineRunStatus   pipelineruns md monitoring execution status    In these examples   first create file  task will only be executed if the  path  parameter is  README md    echo file exists  task will only be executed if the  exists  result from  check file  task is  yes  and  run lint  task will only be executed if the  lint config  optional workspace has been provided by a PipelineRun      yaml tasks      name  first create file     when          input     params path           operator  in         values    README md       taskRef        name  first create file     tasks      name  echo file exists     when          input     tasks check file results exists           operator  in         values    yes       taskRef        name  echo file exists     tasks      name  run lint     when          input     workspaces lint config bound           operator  in         values    true       taskRef        name  lint source     tasks      name  deploy in blue     when          input   blue          operator  in         values      params deployments           taskRef        name  deployment      For an end to end example  see  PipelineRun with  when  expressions     examples v1 pipelineruns pipelinerun with when expressions yaml    There are a lot of scenarios where  when  expressions can be really useful  Some of these are    Checking if the name of a git branch matches   Checking if the  Result  of a previous  Task  is as expected   Checking if a git file has changed in the previous commits   Checking if an image exists in the registry   Checking if the name of a CI job matches   Checking if an optional 
### Use CEL expression in WhenExpression

> :seedling: **`CEL in WhenExpression` is an [alpha](additional-configs.md#alpha-features) feature.**
> The `enable-cel-in-whenexpression` feature flag must be set to `"true"` to enable the use of `CEL` in `WhenExpression`.

CEL (Common Expression Language) is a declarative language designed for simplicity, speed, safety and portability, which can be used to express a wide variety of conditions and computations. You can define a CEL expression in a `WhenExpression` to guard the execution of a `Task`. The CEL expression must evaluate to either `true` or `false`.

You can use a single line of CEL string to replace the current `WhenExpression`'s `input`, `operator` and `values`. For example:

```yaml
# current WhenExpressions
when:
  - input: "foo"
    operator: "in"
    values: ["foo", "bar"]
  - input: "duh"
    operator: "notin"
    values: ["foo", "bar"]
# with cel
when:
  - cel: "'foo' in ['foo', 'bar']"
  - cel: "'duh' in ['foo', 'bar']"
```

CEL can offer more conditional functions, such as numeric comparisons (e.g. `<`, `>=`, etc.), logic operators (e.g. `||`, `&&`) and regex pattern matching. For example:

```yaml
when:
  # test coverage result is larger than 90%
  - cel: "'$(tasks.unit-test.results.test-coverage)' > '0.9'"
  # params is not empty, or params2 is 8.5 or 8.6
  - cel: "'$(params.param1)' != '' || '$(params.param2)' == '8.5' || '$(params.param2)' == '8.6'"
  # param branch matches pattern "release/.*"
  - cel: "'$(params.branch)'.matches('release/.*')"
```

#### Variable substitution in CEL

`CEL` supports [string substitutions](https://github.com/tektoncd/pipeline/blob/main/docs/variables.md#variables-available-in-a-pipeline): you can reference the string, array-indexing or object value of a param or result. For example:

```yaml
when:
  # string result
  - cel: "'$(tasks.unit-test.results.test-coverage)' > '0.9'"
  # array indexing result
  - cel: "'$(tasks.unit-test.results.test-coverage[0])' > '0.9'"
  # object result key
  - cel: "'$(tasks.objectTask.results.repo.url)'.matches('github.com/tektoncd/.*')"
  # string param
  - cel: "'$(params.foo)' == 'foo'"
  # array indexing
  - cel: "'$(params.branch[0])' == 'foo'"
  # object param key
  - cel: "'$(params.repo.url)'.matches('github.com/tektoncd/.*')"
```

> **Note:** the reference needs to be wrapped with single quotes. Whole `Array` and `Object` replacements are not supported yet. The following usage is not supported:

```yaml
when:
  - cel: "'foo' in '$(params.array-params[*])'"
  - cel: "'foo' in '$(params.object-params[*])'"
```

<!-- wokeignore:rule=master -->
In addition to the cases listed above, you can craft any valid CEL expression as defined by the [cel-spec language definition](https://github.com/google/cel-spec/blob/master/doc/langdef.md).

The `CEL` expression is validated at admission webhook and a validation error will be returned if the expression is invalid.

> **Note:** To use Tekton's [variable substitution](variables.md), you need to wrap the reference with single quotes. This also means that if you pass another CEL expression via `params` or `results`, it won't be executed. Therefore CEL injection is disallowed. For example:

```
# This is valid
'$(params.foo)' == 'foo'
# This is invalid
$(params.foo) == 'foo'
# CEL's variable substitution is not supported yet and thus invalid
params.foo == 'foo'
```
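As an illustrative sketch (the task, param and result names here are hypothetical, not from this document), several `cel` entries in one `when` list behave like the classic form: all of them must evaluate to `true` for the guarded `Task` to run.

```yaml
tasks:
  - name: deploy-release   # hypothetical task, guarded by two CEL expressions
    when:
      # both expressions must be true for deploy-release to run
      - cel: "'$(params.branch)'.matches('release/.*')"
      - cel: "'$(tasks.unit-test.results.test-coverage)' > '0.9'"
    taskRef:
      name: deploy
```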
## Guarding a `Task` and its dependent `Tasks`

To guard a `Task` and its dependent `Tasks`:
- cascade the `when` expressions to the specific dependent `Tasks` to be guarded as well
- compose the `Task` and its dependent `Tasks` as a unit to be guarded and executed together using `Pipelines` in `Pipelines`

### Cascade `when` expressions to the specific dependent `Tasks`

Pick and choose which specific dependent `Tasks` to guard as well, and cascade the `when` expressions to those `Tasks`.

Taking the use case below, a user who wants to guard `manual-approval` and its dependent `Tasks`:

```
             tests
               |
               v
        manual-approval
        |             |
   (approver)         v
        |        build-image
        v             |
    slack-msg         v
                deploy-image
```

The user can design the `Pipeline` to solve their use case as such:

```yaml
tasks:
  - name: manual-approval
    runAfter:
      - tests
    when:
      - input: $(params.git-action)
        operator: in
        values:
          - merge
    taskRef:
      name: manual-approval

  - name: build-image
    when:
      - input: $(params.git-action)
        operator: in
        values:
          - merge
    runAfter:
      - manual-approval
    taskRef:
      name: build-image

  - name: deploy-image
    when:
      - input: $(params.git-action)
        operator: in
        values:
          - merge
    runAfter:
      - build-image
    taskRef:
      name: deploy-image

  - name: slack-msg
    params:
      - name: approver
        value: $(tasks.manual-approval.results.approver)
    taskRef:
      name: slack-msg
```

### Compose using Pipelines in Pipelines

Compose a set of `Tasks` as a unit of execution using `Pipelines` in `Pipelines`, which allows for guarding a `Task` and its dependent `Tasks` (as a sub-`Pipeline`) using `when` expressions.

> **Note:** `Pipelines` in `Pipelines` is an [experimental feature](https://github.com/tektoncd/experimental/tree/main/pipelines-in-pipelines).

Taking the use case below, a user who wants to guard `manual-approval` and its dependent `Tasks`:

```
             tests
               |
               v
        manual-approval
        |             |
   (approver)         v
        |        build-image
        v             |
    slack-msg         v
                deploy-image
```
The user can design the `Pipelines` to solve their use case as such:

```yaml
# sub pipeline (approve-build-deploy-slack)
tasks:
  - name: manual-approval
    runAfter:
      - integration-tests
    taskRef:
      name: manual-approval

  - name: build-image
    runAfter:
      - manual-approval
    taskRef:
      name: build-image

  - name: deploy-image
    runAfter:
      - build-image
    taskRef:
      name: deploy-image

  - name: slack-msg
    params:
      - name: approver
        value: $(tasks.manual-approval.results.approver)
    taskRef:
      name: slack-msg

---
# main pipeline
tasks:
  - name: approve-build-deploy-slack
    runAfter:
      - tests
    when:
      - input: $(params.git-action)
        operator: in
        values:
          - merge
    taskRef:
      apiVersion: tekton.dev/v1beta1
      kind: Pipeline
      name: approve-build-deploy-slack
```

## Guarding a `Task` only

When `when` expressions evaluate to `False`, the `Task` will be skipped and:

- The ordering-dependent `Tasks` will be executed.
- The resource-dependent `Tasks` (and their dependencies) will be skipped because of missing `Results` from the skipped parent `Task`. When we add support for [default `Results`](https://github.com/tektoncd/community/pull/240), the resource-dependent `Tasks` may be executed if the default `Results` from the skipped parent `Task` are specified. In addition, if a resource-dependent `Task` needs a file from a guarded parent `Task` in a shared `Workspace`, make sure to handle the execution of the child `Task` in case the expected file is missing from the `Workspace` because the guarded parent `Task` is skipped.

On the other hand, the rest of the `Pipeline` will continue executing.
```
             tests
               |
               v
        manual-approval
        |             |
   (approver)         v
        |        build-image
        v             |
    slack-msg         v
                deploy-image
```

Taking the use case above, a user who wants to guard `manual-approval` only can design the `Pipeline` as such:

```yaml
tasks:
  - name: manual-approval
    runAfter:
      - tests
    when:
      - input: $(params.git-action)
        operator: in
        values:
          - merge
    taskRef:
      name: manual-approval

  - name: build-image
    runAfter:
      - manual-approval
    taskRef:
      name: build-image

  - name: deploy-image
    runAfter:
      - build-image
    taskRef:
      name: deploy-image

  - name: slack-msg
    params:
      - name: approver
        value: $(tasks.manual-approval.results.approver)
    taskRef:
      name: slack-msg
```

If `manual-approval` is skipped, execution of its dependent `Tasks` (`slack-msg`, `build-image` and `deploy-image`) would be unblocked regardless:

- `build-image` and `deploy-image` should be executed successfully
- `slack-msg` will be skipped because it is missing the `approver` `Result` from `manual-approval`
  - dependents of `slack-msg` would have been skipped too if it had any of them
  - if `manual-approval` specifies a default `approver` `Result`, such as "None", then `slack-msg` would be executed ([supporting default `Results` is in progress](https://github.com/tektoncd/community/pull/240))

## Configuring the failure timeout

You can use the `Timeout` field in the `Task` spec within the `Pipeline` to set the timeout of the `TaskRun` that executes that `Task` within the `PipelineRun` that executes your `Pipeline`. The `Timeout` value is a `duration` conforming to Go's [`ParseDuration`](https://golang.org/pkg/time/#ParseDuration) format. For example, valid values are `1h30m`, `1h`, `1m`, and `60s`.

> **Note:** If you do not specify a `Timeout` value, Tekton instead honors the timeout for the [`PipelineRun`](pipelineruns.md#configuring-a-pipelinerun).

In the example below, the `build-the-image` `Task` is configured to time out after 90 seconds:

```yaml
spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      timeout: "0h1m30s"
```

## Using variable substitution

Tekton provides variables to inject values into the contents of certain fields. The values you can inject come from a range of sources, including other fields in the Pipeline, context-sensitive information that Tekton provides, and runtime information received from a PipelineRun.

The mechanism of variable substitution is quite simple: string replacement is performed by the Tekton Controller when a PipelineRun is executed.

See the [complete list of variable substitutions for Pipelines](variables.md#variables-available-in-a-pipeline) and the [list of fields that accept substitutions](variables.md#fields-that-accept-variable-substitutions).

For an end-to-end example, see [using context variables](../examples/v1/pipelineruns/using-context-variables.yaml).

### Using the `retries` and `retry-count` variable substitutions

Tekton supports variable substitution for the [`retries`](#using-the-retries-field) parameter of `PipelineTask`. Variables like `context.pipelineTask.retries` and `context.task.retry-count` can be added to the parameters of a `PipelineTask`. `context.pipelineTask.retries` will be replaced by the `retries` of the `PipelineTask`, while `context.task.retry-count` will be replaced by the current retry number of the `PipelineTask`.

```yaml
params:
  - name: pipelineTask-retries
    value: "$(context.pipelineTask.retries)"
taskSpec:
  params:
    - name: pipelineTask-retries
  steps:
    - image: ubuntu
      name: print-if-retries-exhausted
      script: |
        if [ "$(context.task.retry-count)" == "$(params.pipelineTask-retries)" ]
        then
          echo "This is the last retry."
        fi
        exit 1
```

> **Note:** Every `PipelineTask` can only access its own `retries` and `retry-count`. These values aren't accessible for other `PipelineTask`s.
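The `retries` field referenced above sits directly on the `PipelineTask`, next to the `timeout` field described earlier. A minimal sketch (the task names here are hypothetical) setting both together:

```yaml
spec:
  tasks:
    - name: flaky-integration-test   # hypothetical task
      retries: 2                     # re-run up to 2 more times on failure
      timeout: "5m"                  # duration in Go ParseDuration format
      taskRef:
        name: integration-test
```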
## Using `Results`

Tasks can emit [`Results`](tasks.md#emitting-results) when they execute. A Pipeline can use these `Results` for two different purposes:

1. A Pipeline can pass the `Result` of a `Task` into the `Parameters` or `when` expressions of another.
2. A Pipeline can itself emit `Results` and include data from the `Results` of its Tasks.

> **Note:** Tekton does not enforce that results are produced at Task level. If a pipeline attempts to consume a result that was declared by a Task, but not produced, it will fail. [TEP-0048](https://github.com/tektoncd/community/blob/main/teps/0048-task-results-without-results.md) proposes introducing default values for results to help Pipeline authors manage this case.

### Passing one Task's `Results` into the `Parameters` or `when` expressions of another

Sharing `Results` between `Tasks` in a `Pipeline` happens via [variable substitution](variables.md#variables-available-in-a-pipeline): one `Task` emits a `Result` and another receives it as a `Parameter` with a variable such as `$(tasks.<task-name>.results.<result-name>)`.

Pipeline supports two new types of results and parameters: array `[]string` and object `map[string]string`. Array result is a beta feature and can be enabled by setting `enable-api-fields` to `alpha` or `beta`.

| Result Type | Parameter Type | Specification | `enable-api-fields` |
|-------------|----------------|---------------|---------------------|
| string | string | `$(tasks.<task-name>.results.<result-name>)` | stable |
| array | array | `$(tasks.<task-name>.results.<result-name>[*])` | alpha or beta |
| array | string | `$(tasks.<task-name>.results.<result-name>[i])` | alpha or beta |
| object | object | `$(tasks.<task-name>.results.<result-name>[*])` | alpha or beta |
| object | string | `$(tasks.<task-name>.results.<result-name>.key)` | alpha or beta |

> **Note:** Whole Array and Object `Results` (using star notation) cannot be referred in `script`.

When one `Task` receives the `Results` of another, there is a dependency created between those two `Tasks`. In order for the receiving `Task` to get data from another `Task's` `Result`, the `Task` producing the `Result` must run first. Tekton enforces this `Task` ordering by ensuring that the `Task` emitting the `Result` executes before any `Task` that uses it.

In the snippet below, a param is provided its value from the `commit` `Result` emitted by the `checkout-source` `Task`. Tekton will make sure that the `checkout-source` `Task` runs before this one.

```yaml
params:
  - name: foo
    value: "$(tasks.checkout-source.results.commit)"
  - name: array-params
    value: "$(tasks.checkout-source.results.array-results[*])"
  - name: array-indexing-params
    value: "$(tasks.checkout-source.results.array-results[1])"
  - name: object-params
    value: "$(tasks.checkout-source.results.object-results[*])"
  - name: object-element-params
    value: "$(tasks.checkout-source.results.object-results.objectkey)"
```

> **Note:** If `checkout-source` exits successfully without initializing the `commit` `Result`, the receiving `Task` fails and causes the `Pipeline` to fail with `InvalidTaskResultReference`:
>
> ```
> unable to find result referenced by param "foo" in "task"; Could not find result with name "commit" for task run "checkout-source"
> ```

In the snippet below, a `when` expression is provided its value from the `exists` `Result` emitted by the `check-file` `Task`. Tekton will make sure that the `check-file` `Task` runs before this one.

```yaml
when:
  - input: "$(tasks.check-file.results.exists)"
    operator: in
    values: ["yes"]
```

For an end-to-end example, see [`Task` `Results` in a `PipelineRun`](../examples/v1/pipelineruns/task_results_example.yaml).

Note that `when` expressions are whitespace-sensitive. In particular, when producing results intended for inputs to `when` expressions that may include newlines at their close (e.g. `cat`, `jq`), you may wish to truncate them.

```yaml
taskSpec:
  params:
    - name: jsonQuery-check
  steps:
    - image: ubuntu
      name: store-name-in-results
      script: |
        curl -s https://my-json-server.typicode.com/typicode/demo/profile | jq -r .name | tr -d '\n' | tee $(results.name.path)
```

### Emitting `Results` from a `Pipeline`

A `Pipeline` can emit `Results` of its own for a variety of reasons: an external system may need to read them when the `Pipeline` is complete, they might summarise the most important `Results` from the `Pipeline's` `Tasks`, or they might simply be used to expose non-critical messages generated during the execution of the `Pipeline`.

A `Pipeline's` `Results` can be composed of one or many `Task` `Results` emitted during the course of the `Pipeline's` execution. A `Pipeline` `Result` can refer to its `Tasks'` `Results` using a variable of the form `$(tasks.<task-name>.results.<result-name>)`.

After a `Pipeline` has executed, the `PipelineRun` will be populated with the `Results` emitted by the `Pipeline`. These will be written to the `PipelineRun's` `status.pipelineResults` field.

In the example below, the `Pipeline` specifies a `results` entry with the name `sum` that references the `outputValue` `Result` emitted by the `calculate-sum` `Task`.

```yaml
results:
  - name: sum
    description: the sum of all three operands
    value: $(tasks.calculate-sum.results.outputValue)
```

For an end-to-end example, see [`Results` in a `PipelineRun`](../examples/v1/pipelineruns/pipelinerun-results.yaml).
In the example below, the `Pipeline` collects array and object results from `Tasks`.

```yaml
results:
  - name: array-results
    type: array
    description: whole array
    value: $(tasks.task1.results.array-results[*])
  - name: array-indexing-results
    type: string
    description: array element
    value: $(tasks.task1.results.array-results[1])
  - name: object-results
    type: object
    description: whole object
    value: $(tasks.task2.results.object-results[*])
  - name: object-element
    type: string
    description: object element
    value: $(tasks.task2.results.object-results.foo)
```

For an end-to-end example, see [Array and Object `Results` in a `PipelineRun`](../examples/v1/pipelineruns/pipeline-emitting-results.yaml).

A `Pipeline Result` is not emitted if any of the following are true:

- A `PipelineTask` referenced by the `Pipeline Result` failed. The `PipelineRun` will also have failed.
- A `PipelineTask` referenced by the `Pipeline Result` was skipped.
- A `PipelineTask` referenced by the `Pipeline Result` didn't emit the referenced `Task Result`. This should be considered a bug in the `Task` and [may fail a `PipelineTask` in future](https://github.com/tektoncd/pipeline/issues/3497).
- The `Pipeline Result` uses a variable that doesn't point to an actual `PipelineTask`. This will result in an `InvalidTaskResultReference` validation error during `PipelineRun` execution.
- The `Pipeline Result` uses a variable that doesn't point to an actual result in a `PipelineTask`. This will cause an `InvalidTaskResultReference` validation error during `PipelineRun` execution.

> **Note:** Since a `Pipeline Result` can contain references to multiple `Task Results`, if any of those `Task Result` references are invalid the entire `Pipeline Result` is not emitted.

> **Note:** If a `PipelineTask` referenced by the `Pipeline Result` was skipped, the `Pipeline Result` will not be emitted and the `PipelineRun` will not fail due to a missing result.
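To illustrate the skipped-task case above, here is a hedged sketch (the `guarded-task` name, `digest` result and `build-image` param are hypothetical): when `guarded-task` is skipped by its `when` expression, the `digest` Pipeline `Result` is simply not emitted, and the `PipelineRun` does not fail because of it.

```yaml
results:
  - name: digest        # omitted from status.pipelineResults if guarded-task is skipped
    description: image digest
    value: $(tasks.guarded-task.results.digest)
tasks:
  - name: guarded-task  # hypothetical guarded task
    when:
      - input: "$(params.build-image)"
        operator: in
        values: ["true"]
    taskRef:
      name: build-push
```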
## Configuring the `Task` execution order

You can connect `Tasks` in a `Pipeline` so that they execute in a Directed Acyclic Graph (DAG). Each `Task` in the `Pipeline` becomes a node on the graph that can be connected with an edge so that one will run before another and the execution of the `Pipeline` progresses to completion without getting stuck in an infinite loop.

This is done using:

- [resource dependencies](#emitting-results-from-a-pipeline): `results` of one `Task` being passed into `params` or `when` expressions of another
- [ordering dependencies](#using-the-runafter-field): `runAfter` clauses on the corresponding `Tasks`

For example, the `Pipeline` defined as follows:

```yaml
tasks:
- name: lint-repo
  taskRef:
    name: pylint
- name: test-app
  taskRef:
    name: make-test
- name: build-app
  taskRef:
    name: kaniko-build-app
  runAfter:
    - test-app
- name: build-frontend
  taskRef:
    name: kaniko-build-frontend
  runAfter:
    - test-app
- name: deploy-all
  taskRef:
    name: deploy-kubectl
  runAfter:
    - build-app
    - build-frontend
```

executes according to the following graph:

```none
        |            |
        v            v
     test-app    lint-repo
    /        \
   v          v
build-app  build-frontend
    \         /
     v       v
    deploy-all
```

In particular:

1. The `lint-repo` and `test-app` `Tasks` have no `runAfter` clauses and start executing simultaneously.
2. Once `test-app` completes, both `build-app` and `build-frontend` start executing simultaneously since they both `runAfter` the `test-app` `Task`.
3. The `deploy-all` `Task` executes once both `build-app` and `build-frontend` complete, since it is supposed to `runAfter` them both.
4. The entire `Pipeline` completes execution once both `lint-repo` and `deploy-all` complete execution.
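Note that a resource dependency orders `Tasks` implicitly, without any `runAfter` clause. A sketch with hypothetical task and result names:

```yaml
tasks:
- name: get-commit        # hypothetical task emitting a "commit" result
  taskRef:
    name: git-clone
- name: build             # runs after get-commit because it consumes its result
  params:
    - name: revision
      value: "$(tasks.get-commit.results.commit)"
  taskRef:
    name: build-app
```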
## Specifying a display name

The `displayName` field is an optional field that allows you to add a user-facing name of the `Pipeline` that can be used to populate a UI. For example:

```yaml
spec:
  displayName: "Code Scan"
  tasks:
    - name: scan
      taskRef:
        name: sonar-scan
```

## Adding a description

The `description` field is an optional field that can be used to provide a description of the `Pipeline`.

## Adding `Finally` to the `Pipeline`

You can specify a list of one or more final tasks under the `finally` section. `finally` tasks are guaranteed to be executed in parallel after all `PipelineTasks` under `tasks` have completed, regardless of success or error. `finally` tasks are very similar to `PipelineTasks` under the `tasks` section and follow the same syntax. Each `finally` task must have a [valid](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/) `name` and a [taskRef or taskSpec](taskruns.md#specifying-the-target-task). For example:

```yaml
spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: cleanup-test
      taskRef:
        name: cleanup
```

### Specifying `displayName` in `finally` tasks

Similar to [specifying `displayName` in `pipelineTasks`](#specifying-displayname-in-pipelinetasks), `finally` tasks also allow you to add a user-facing name of the `finally` task that can be used to populate and distinguish it in the dashboard. For example:

```yaml
spec:
  finally:
    - name: notification
      displayName: "Notify"
      taskRef:
        name: notification
    - name: notification-using-context-variable
      displayName: "Notification from $(context.pipeline.name)"
      taskRef:
        name: notification
```

The `displayName` also allows you to parameterize the human-readable name of your choice based on the [params](#specifying-parameters), [the task results](#consuming-task-execution-results-in-finally) and [the context variables](variables.md#context-variables).

The fully resolved `displayName` is also available in the status as part of the `pipelineRun.status.childReferences`.
Clients such as the dashboard, CLI, etc. can retrieve the `displayName` from the `childReferences`. The `displayName` mainly drives a better user experience, and at the same time it is not validated for content or length by the controller.

### Specifying `Workspaces` in `finally` tasks

`finally` tasks can specify [workspaces](workspaces.md) which `PipelineTasks` might have utilized, e.g. a mount point for credentials held in Secrets. To support that requirement, you can specify one or more `Workspaces` in the `workspaces` field for the `finally` tasks similar to `tasks`.

```yaml
spec:
  workspaces:
    - name: shared-workspace
  tasks:
    - name: clone-app-source
      taskRef:
        name: clone-app-repo-to-workspace
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
  finally:
    - name: cleanup-workspace
      taskRef:
        name: cleanup-workspace
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
```

### Specifying `Parameters` in `finally` tasks

Similar to `tasks`, you can specify [`Parameters`](tasks.md#specifying-parameters) in `finally` tasks:

```yaml
spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: report-results
      taskRef:
        name: report-results
      params:
        - name: url
          value: "someURL"
```

### Specifying `matrix` in `finally` tasks

> :seedling: **`Matrix` is a [beta](additional-configs.md#beta-features) feature.**
> The `enable-api-fields` feature flag can be set to `"beta"` to specify `Matrix` in a `PipelineTask`.

Similar to `tasks`, you can also provide [`Parameters`](tasks.md#specifying-parameters) through `matrix` in `finally` tasks:

```yaml
spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: report-results
      taskRef:
        name: report-results
      params:
        - name: url
          value: "someURL"
      matrix:
        params:
          - name: slack-channel
            value:
              - "foo"
              - "bar"
        include:
          - name: build-1
            params:
              - name: slack-channel
                value: "foo"
              - name: flags
                value: "-v"
```

For further information, read [`Matrix`](matrix.md).

### Consuming `Task` execution results in `finally`

`finally` tasks can be configured to consume `Results` of a `PipelineTask` from the `tasks` section:

```yaml
spec:
  tasks:
    - name: clone-app-repo
      taskRef:
        name: git-clone
  finally:
    - name: discover-git-commit
      params:
        - name: commit
          value: $(tasks.clone-app-repo.results.commit)
```

> **Note:** The scheduling of such a `finally` task does not change; it will still be executed in parallel with other `finally` tasks after all non-`finally` tasks are done.

The controller resolves task results before executing the `finally` task `discover-git-commit`. If the task `clone-app-repo` failed before initializing `commit`, or was skipped with a [when expression](#guard-task-execution-using-when-expressions) resulting in an uninitialized task result `commit`, the `finally` Task `discover-git-commit` will be included in the list of `skippedTasks` and the controller continues executing the rest of the `finally` tasks. The pipeline exits with `completion` instead of `success` if a `finally` task is added to the list of `skippedTasks`.

### Consuming `Pipeline` result with `finally`

`finally` tasks can emit `Results`, and these results emitted from the `finally` tasks can be configured in the [Pipeline Results](#emitting-results-from-a-pipeline). References of `Results` from `finally` will follow the same naming conventions as referencing `Results` from `tasks`: `$(finally.<finally-pipelinetask-name>.result.<result-name>)`.

```yaml
results:
  - name: comment-count-validate
    value: $(finally.check-count.results.comment-count-validate)
finally:
  - name: check-count
    taskRef:
      name: example-task-name
```
In this example, `pipelineResults` in `status` will show the name-value pair for the result `comment-count-validate`, which is produced in the `Task` `example-task-name`.

### `PipelineRun` Status with `finally`

With `finally`, the `PipelineRun` status is calculated based on the `PipelineTasks` under the `tasks` section and the `finally` tasks.

Without `finally`:

| `PipelineTasks` under `tasks` | `PipelineRun` status | Reason |
|-------------------------------|----------------------|--------|
| all `PipelineTasks` successful | `true` | `Succeeded` |
| one or more `PipelineTasks` [skipped](#guard-task-execution-using-when-expressions) and rest successful | `true` | `Completed` |
| single failure of `PipelineTask` | `false` | `failed` |

With `finally`:

| `PipelineTasks` under `tasks` | `finally` tasks | `PipelineRun` status | Reason |
|-------------------------------|-----------------|----------------------|--------|
| all `PipelineTask` successful | all `finally` tasks successful | `true` | `Succeeded` |
| all `PipelineTask` successful | one or more failure of `finally` tasks | `false` | `Failed` |
| one or more `PipelineTask` [skipped](#guard-task-execution-using-when-expressions) and rest successful | all `finally` tasks successful | `true` | `Completed` |
| one or more `PipelineTask` [skipped](#guard-task-execution-using-when-expressions) and rest successful | one or more failure of `finally` tasks | `false` | `Failed` |
| single failure of `PipelineTask` | all `finally` tasks successful | `false` | `failed` |
| single failure of `PipelineTask` | one or more failure of `finally` tasks | `false` | `failed` |

Overall, `PipelineRun` state transitioning is explained below for the respective scenarios:

- All `PipelineTask` and `finally` tasks are successful: `Started` -> `Running` -> `Succeeded`
- At least one `PipelineTask` skipped and rest successful: `Started` -> `Running` -> `Completed`
- One `PipelineTask` failed / one or more `finally` tasks failed: `Started` -> `Running` -> `Failed`

Please refer to the [table](pipelineruns.md#monitoring-execution-status) under Monitoring Execution Status to learn about what kind of events are triggered based on the `PipelineRun` status.
### Using Execution `Status` of `pipelineTask`

A `pipeline` can check the status of a specific `pipelineTask` from the `tasks` section in `finally` through the task parameters:

```yaml
finally:
  - name: finaltask
    params:
      - name: task1Status
        value: "$(tasks.task1.status)"
    taskSpec:
      params:
        - name: task1Status
      steps:
        - image: ubuntu
          name: print-task-status
          script: |
            if [ $(params.task1Status) == "Failed" ]
            then
              echo "Task1 has failed, continue processing the failure"
            fi
```

This kind of variable can have any one of the values from the following table:

| Status | Description |
|--------|-------------|
| `Succeeded` | `taskRun` for the `pipelineTask` completed successfully |
| `Failed` | `taskRun` for the `pipelineTask` completed with a failure or was cancelled by the user |
| `None` | the `pipelineTask` has been skipped or no execution information is available for the `pipelineTask` |

For an end-to-end example, see [`status` in a `PipelineRun`](../examples/v1/pipelineruns/pipelinerun-task-execution-status.yaml).

### Using Aggregate Execution `Status` of All `Tasks`

A `pipeline` can check the aggregate status of all the `tasks` section in `finally` through the task parameters:

```yaml
finally:
  - name: finaltask
    params:
      - name: aggregateTasksStatus
        value: "$(tasks.status)"
    taskSpec:
      params:
        - name: aggregateTasksStatus
      steps:
        - image: ubuntu
          name: check-task-status
          script: |
            if [ $(params.aggregateTasksStatus) == "Failed" ]
            then
              echo "Looks like one or more tasks returned failure, continue processing the failure"
            fi
```

This kind of variable can have any one of the values from the following table:

| Status | Description |
|--------|-------------|
| `Succeeded` | all `tasks` have succeeded |
| `Failed` | one or more `tasks` failed |
| `Completed` | all `tasks` completed successfully, including one or more skipped tasks |
| `None` | no aggregate execution status available (i.e. none of the above); one or more `tasks` could be pending/running/cancelled/timedout |

For an end-to-end example, see [`$(tasks.status)` usage in a `Pipeline`](../examples/v1/pipelineruns/pipelinerun-task-execution-status.yaml).

## Guard `finally` `Task` execution using `when` expressions

Similar to `Tasks`, `finally` `Tasks` can be guarded using [`when` expressions](#guard-task-execution-using-when-expressions) that operate on static inputs or variables. Like in `Tasks`, `when` expressions in `finally` `Tasks` can operate on `Parameters` and `Results`. Unlike in `Tasks`, `when` expressions in `finally` `tasks` can also operate on the [`Execution Status`](#using-execution-status-of-pipelinetask) of `Tasks`.

### `when` expressions using `Parameters` in `finally` `Tasks`

`when` expressions in `finally` `Tasks` can utilize `Parameters`, as demonstrated using the [`golang-build`](https://github.com/tektoncd/catalog/tree/main/task/golang-build/0.1) and [`send-to-channel-slack`](https://github.com/tektoncd/catalog/tree/main/task/send-to-channel-slack/0.1) Catalog `Tasks`:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipelinerun-
spec:
  pipelineSpec:
    params:
      - name: enable-notifications
        type: string
        description: a boolean indicating whether the notifications should be sent
    tasks:
      - name: golang-build
        taskRef:
          name: golang-build
    finally:
      - name: notify-build-failure # executed only when build task fails and notifications are enabled
        when:
          - input: $(tasks.golang-build.status)
            operator: in
            values: ["Failed"]
          - input: $(params.enable-notifications)
            operator: in
            values: ["true"]
        taskRef:
          name: send-to-slack-channel
  params:
    - name: enable-notifications
      value: true
```

### `when` expressions using `Results` in `finally` `Tasks`

`when` expressions in `finally` `tasks` can utilize `Results`, as demonstrated using the [`git-clone`](https://github.com/tektoncd/catalog/tree/main/task/git-clone/0.2) and [`github-add-comment`](https://github.com/tektoncd/catalog/tree/main/task/github-add-comment/0.2) Catalog `Tasks`:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipelinerun-
spec:
  pipelineSpec:
    tasks:
      - name: git-clone
        taskRef:
          name: git-clone
      - name: go-build
    finally:
      - name: notify-commit-sha # executed only when commit sha is not the expected sha
        when:
          - input: $(tasks.git-clone.results.commit)
            operator: notin
            values: ["$(params.expected-sha)"]
        taskRef:
          name: github-add-comment
  params:
    - name: expected-sha
      value: 54dd3984affab47f3018852e61a1a6f9946ecfa
```

If the `when` expressions in a `finally` `task` use `Results` from a skipped or failed non-finally `Task`, then the `finally` `task` would also be skipped and be included in the list of `Skipped Tasks` in the `Status`, [similarly to when using `Results` in other parts of the `finally` `task`](#consuming-task-execution-results-in-finally).

### `when` expressions using `Execution Status` of a `PipelineTask` in `finally` `tasks`

`when` expressions in `finally` `tasks` can utilize the [`Execution Status` of `PipelineTasks`](#using-execution-status-of-pipelinetask), as demonstrated using the [`golang-build`](https://github.com/tektoncd/catalog/tree/main/task/golang-build/0.1) and [`send-to-channel-slack`](https://github.com/tektoncd/catalog/tree/main/task/send-to-channel-slack/0.1) Catalog `Tasks`:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: pipelinerun-
spec:
  pipelineSpec:
    tasks:
      - name:
golang build         taskRef            name  golang build                 finally          name  notify build failure   executed only when build task fails         when              input    tasks golang build status              operator  in             values    Failed           taskRef            name  send to slack channel                  For an end to end example  see  PipelineRun with  when  expressions     examples v1 pipelineruns pipelinerun with when expressions yaml          when  expressions using  Aggregate Execution Status  of  Tasks  in  finally   tasks    when  expressions in  finally   tasks  can utilize   Aggregate Execution Status  of  Tasks    using aggregate execution status of all tasks  as demonstrated      yaml finally      name  notify any failure   executed only when one or more tasks fail     when          input    tasks status          operator  in         values    Failed       taskRef        name  notify failure      For an end to end example  see  PipelineRun with  when  expressions     examples v1 pipelineruns pipelinerun with when expressions yaml        Known Limitations       Cannot configure the  finally  task execution order  It s not possible to configure or modify the execution order of the  finally  tasks  Unlike  Tasks  in a  Pipeline   all  finally  tasks run simultaneously and start executing once all  PipelineTasks  under  tasks  have settled which means no  runAfter  can be specified in  finally  tasks      Using Custom Tasks  Custom Tasks have been promoted from  v1alpha1  to  v1beta1   Starting from  v0 43 0  to  v0 46 0   Pipeline Controller is able to create either  v1alpha1  or  v1beta1  Custom Task gated by a feature flag  custom task version   defaulting to  v1beta1   You can set  custom task version  to  v1alpha1  or  v1beta1  to control which version to create   Starting from  v0 47 0   feature flag  custom task version  is removed and only  v1beta1  Custom Task will be supported  See the  migration doc  
migrating v1alpha1 Run to v1beta1 CustomRun md  for details    Custom Tasks  https   github com tektoncd community blob main teps 0002 custom tasks md  can implement behavior that doesn t correspond directly to running a workload in a  Pod  on the cluster  For example  a custom task might execute some operation outside of the cluster and wait for its execution to complete   A  PipelineRun  starts a custom task by creating a   CustomRun   https   github com tektoncd pipeline blob main docs customruns md  instead of a  TaskRun   In order for a custom task to execute  there must be a custom task controller running on the cluster that is responsible for watching and updating  CustomRun s which reference their type       Specifying the target Custom Task  To specify the custom task type you want to execute  the  taskRef  field must include the custom task s  apiVersion  and  kind  as shown below  Using  apiVersion  will always create a  CustomRun   If  apiVersion  is set   kind  is required as well      yaml spec    tasks        name  run custom task       taskRef          apiVersion  example dev v1alpha1         kind  Example      This creates a  Run CustomRun  of a custom task of type  Example  in the  example dev  API group with the version  v1alpha1    Validation error will be returned if  apiVersion  or  kind  is missing   You can also specify the  name  of a custom task resource object previously defined in the cluster      yaml spec    tasks        name  run custom task       taskRef          apiVersion  example dev v1alpha1         kind  Example         name  myexample      If the  taskRef  specifies a name  the custom task controller should look up the  Example  resource with that name and use that object to configure the execution   If the  taskRef  does not specify a name  the custom task controller might support some default behavior for executing unnamed tasks       Specifying a Custom Task Spec in line  or embedded     For  v1alpha1 Run       yaml spec    
tasks        name  run custom task       taskSpec          apiVersion  example dev v1alpha1         kind  Example         spec            field1  value1           field2  value2        For  v1beta1 CustomRun       yaml spec    tasks        name  run custom task       taskSpec          apiVersion  example dev v1alpha1         kind  Example         customSpec            field1  value1           field2  value2      If the custom task controller supports the in line or embedded task spec  this will create a  Run CustomRun  of a custom task of type  Example  in the  example dev  API group with the version  v1alpha1    If the  taskSpec  is not supported  the custom task controller should produce proper validation errors   Please take a look at the developer guide for custom controllers supporting  taskSpec      guidance for  Run   runs md developer guide for custom controllers supporting spec     guidance for  CustomRun   customruns md developer guide for custom controllers supporting customspec    taskSpec  support for  pipelineRun  was designed and discussed in  TEP 0061  https   github com tektoncd community blob main teps 0061 allow custom task to be embedded in pipeline md       Specifying parameters  If a custom task supports   parameters   tasks md specifying parameters   you can use the  params  field to specify their values      yaml spec    tasks        name  run custom task       taskRef          apiVersion  example dev v1alpha1         kind  Example         name  myexample       params            name  foo           value  bah         Context Variables  The  Parameters  in the  Params  field will accept  context variables  variables md  that will be substituted  including      PipelineRun  name  namespace and uid    Pipeline  name    PipelineTask  retries     yaml spec    tasks        name  run custom task       taskRef          apiVersion  example dev v1alpha1         kind  Example         name  myexample       params            name  foo           value    
context pipeline name           Specifying matrix     seedling     Matrix  is an  alpha  additional configs md alpha features  feature      The  enable api fields  feature flag must be set to   alpha   to specify  Matrix  in a  PipelineTask    If a custom task supports   parameters   tasks md specifying parameters   you can use the  matrix  field to specify their values  if you want to fan      yaml spec    tasks        name  run custom task       taskRef          apiVersion  example dev v1alpha1         kind  Example         name  myexample       params            name  foo           value  bah       matrix          params            name  bar           value                qux               thud         include              name  build 1             params                name  common package               value  path to common pkg      For further information  read   Matrix     matrix md        Specifying workspaces  If the custom task supports it  you can provide   Workspaces   workspaces md using workspaces in tasks  to share data with the custom task      yaml spec    tasks        name  run custom task       taskRef          apiVersion  example dev v1alpha1         kind  Example         name  myexample       workspaces            name  my workspace      Consult the documentation of the custom task that you are using to determine whether it supports workspaces and how to name them       Using  Results   If the custom task produces results  you can reference them in a Pipeline using the normal syntax     tasks  task name  results  result name          Specifying  Timeout         v1alpha1 Run  If the custom task supports it as  we recommended  runs md developer guide for custom controllers supporting timeout   you can provide  timeout  to specify the maximum running time of a  CustomRun   including all retry attempts or other operations          v1beta1 CustomRun  If the custom task supports it as  we recommended  customruns md developer guide for custom 
controllers supporting timeout   you can provide  timeout  to specify the maximum running time of one  CustomRun  execution      yaml spec    tasks        name  run custom task       timeout  2s       taskRef          apiVersion  example dev v1alpha1         kind  Example         name  myexample      Consult the documentation of the custom task that you are using to determine whether it supports  Timeout        Specifying  Retries  If the custom task supports it  you can provide  retries  to specify how many times you want to retry the custom task      yaml spec    tasks        name  run custom task       retries  2       taskRef          apiVersion  example dev v1alpha1         kind  Example         name  myexample      Consult the documentation of the custom task that you are using to determine whether it supports  Retries        Known Custom Tasks  We try to list as many known Custom Tasks as possible here so that users can easily find what they want  Please feel free to share the Custom Task you implemented in this table        v1beta1 CustomRun    Custom Task                        Description                                                                                                                                                                                                                                                                                                   Wait Task Beta  wait task beta    Waits a given amount of time before succeeding  specified by an input parameter named duration  Support  timeout  and  retries        Approvals  approvals beta   Pauses the execution of  PipelineRuns  and waits for manual approvals  Version 0 6 0 and up          v1alpha1 Run    Custom Task                                        Description                                                                                                                                                                                                                           
                                            Pipeline Loops  pipeline loops                    Runs a  Pipeline  in a loop with varying  Parameter  values                                                     Common Expression Language  cel                   Provides Common Expression Language support in Tekton Pipelines                                                 Wait  wait                                        Waits a given amount of time  specified by a  Parameter  named  duration   before succeeding                    Approvals  approvals alpha                        Pauses the execution of  PipelineRuns  and waits for manual approvals  Version up to  and including  0 5 0      Pipelines in Pipelines  pipelines in pipelines    Defines and executes a  Pipeline  in a  Pipeline                                                                Task Group  task group                            Groups  Tasks  together as a  Task                                                                              Pipeline in a Pod  pipeline in pod                Runs  Pipeline  in a  Pod                                                                                      pipeline loops   https   github com tektoncd experimental tree f60e1cd8ce22ed745e335f6f547bb9a44580dc7c pipeline loops  task loops   https   github com tektoncd experimental tree f60e1cd8ce22ed745e335f6f547bb9a44580dc7c task loops  cel   https   github com tektoncd experimental tree f60e1cd8ce22ed745e335f6f547bb9a44580dc7c cel  wait   https   github com tektoncd experimental tree f60e1cd8ce22ed745e335f6f547bb9a44580dc7c wait task  approvals alpha   https   github com automatiko io automatiko approval task tree v0 5 0  approvals beta   https   github com automatiko io automatiko approval task tree v0 6 1  task group   https   github com openshift pipelines tekton task group tree 39823f26be8f59504f242a45b9f2e791d4b36e1c  pipelines in pipelines   https   github com tektoncd experimental tree 
f60e1cd8ce22ed745e335f6f547bb9a44580dc7c pipelines in pipelines  pipeline in pod   https   github com tektoncd experimental tree f60e1cd8ce22ed745e335f6f547bb9a44580dc7c pipeline in pod  wait task beta   https   github com tektoncd pipeline tree a127323da31bcb933a04a6a1b5dbb6e0411e3dc1 test custom task ctrls wait task beta     Code examples  For a better understanding of  Pipelines   study  our code examples  https   github com tektoncd pipeline tree main examples         Except as otherwise noted  the content of this page is licensed under the  Creative Commons Attribution 4 0 License  https   creativecommons org licenses by 4 0    and code samples are licensed under the  Apache 2 0 License  https   www apache org licenses LICENSE 2 0  "}
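The aggregate `$(tasks.status)` rules described in the Tekton `finally` documentation above (Succeeded / Failed / Completed / None) can be sketched as a small decision function. This is an illustrative sketch, not Tekton source code; the function name and the dict-of-statuses input representation are assumptions for illustration:

```python
def aggregate_tasks_status(statuses):
    """Sketch of the aggregate $(tasks.status) value for a Pipeline's
    non-finally tasks, following the documented table.

    `statuses` maps each task name to one of "Succeeded", "Failed",
    "Completed"-eligible "Skipped", or None (pending/running/
    cancelled/timed out, i.e. no execution status available).
    """
    values = list(statuses.values())
    # None: one or more tasks have no final execution status yet.
    if any(v is None for v in values):
        return "None"
    # Failed: one or more tasks failed.
    if any(v == "Failed" for v in values):
        return "Failed"
    # Completed: all tasks finished successfully, but some were skipped.
    if any(v == "Skipped" for v in values):
        return "Completed"
    # Succeeded: every task succeeded.
    return "Succeeded"
```

A `finally` task guarding on `$(tasks.status) == "Failed"`, as in the YAML snippets above, corresponds to checking this function's return value.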
{"questions":"tekton weight 312 Trusted Resources Trusted Resources","answers":"<!--\n---\nlinkTitle: \"Trusted Resources\"\nweight: 312\n---\n-->\n\n# Trusted Resources\n\n- [Overview](#overview)\n- [Instructions](#Instructions)\n - [Sign Resources](#sign-resources)\n - [Enable Trusted Resources](#enable-trusted-resources)\n\n## Overview\n\nTrusted Resources is a feature which can be used to sign Tekton Resources and verify them. Details of the design can be found in [TEP-0091](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0091-trusted-resources.md). This is an alpha feature and supports the `v1beta1` and `v1` versions of `Task` and `Pipeline`.\n\n**Note**: Currently, trusted resources only support verifying Tekton resources that come from remote places, i.e. git, OCI registry and Artifact Hub. To use the [cluster resolver](.\/cluster-resolver.md) for in-cluster resources, make sure to set all default values for the resources before applying them to the cluster, because the mutating webhook will update the default fields if they are not given and fail the verification.\n\nVerification failure will mark the corresponding taskrun\/pipelinerun as Failed and stop the execution.\n\n## Instructions\n\n### Sign Resources\nWe have added `sign` and `verify` to the [Tekton Cli](https:\/\/github.com\/tektoncd\/cli) as subcommands in release [v0.28.0 and later](https:\/\/github.com\/tektoncd\/cli\/releases\/tag\/v0.28.0). 
Please refer to the [cli docs](https:\/\/github.com\/tektoncd\/cli\/blob\/main\/docs\/cmd\/tkn_task_sign.md) to sign and verify Tekton resources.\n\nA signed task example:\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  annotations:\n    tekton.dev\/signature: MEYCIQDM8WHQAn\/yKJ6psTsa0BMjbI9IdguR+Zi6sPTVynxv6wIhAMy8JSETHP7A2Ncw7MyA7qp9eLsu\/1cCKOjRL1mFXIKV\n  creationTimestamp: null\n  name: example-task\n  namespace: tekton-trusted-resources\nspec:\n  steps:\n  - image: ubuntu\n    name: echo\n```\n\n### Enable Trusted Resources\n\n#### Enable feature flag\n\nUpdate the config map:\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: feature-flags\n  namespace: tekton-pipelines\n  labels:\n    app.kubernetes.io\/instance: default\n    app.kubernetes.io\/part-of: tekton-pipelines\ndata:\n  trusted-resources-verification-no-match-policy: \"fail\"\n```\n\n`trusted-resources-verification-no-match-policy` configurations:\n * `ignore`: if no matching policies are found, skip the verification, don't log, and don't fail the taskrun\/pipelinerun\n * `warn`: if no matching policies are found, skip the verification, log a warning, and don't fail the taskrun\/pipelinerun\n * `fail`: fail the taskrun\/pipelinerun if no matching policies are found\n\n**Notes:**\n * To skip the verification: make sure no policies exist and `trusted-resources-verification-no-match-policy` is set to `warn` or `ignore`.\n * To enable the verification: install a [VerificationPolicy](#config-key-at-verificationpolicy) that matches the resources.\n\nOr patch the new values:\n```bash\nkubectl patch configmap feature-flags -n tekton-pipelines -p='{\"data\":{\"trusted-resources-verification-no-match-policy\":\"fail\"}}'\n```\n\n#### TaskRun and PipelineRun status update\n<!-- wokeignore:rule=master -->\nTrusted resources will update the taskrun's 
[condition](https:\/\/github.com\/kubernetes\/community\/blob\/master\/contributors\/devel\/sig-architecture\/api-conventions.md#typical-status-properties) to indicate whether it passes verification or not.\n\nThe following tables illustrate how the conditions are impacted by the feature flag and the verification result. An empty cell means that the corresponding condition is not updated in that case.\n\n**No Matching Policies:**\n|                             | `Conditions.TrustedResourcesVerified` | `Conditions.Succeeded` |\n|-----------------------------|---------------------------------------|------------------------|\n| `no-match-policy`: \"ignore\" |                                       |                        |\n| `no-match-policy`: \"warn\"   | False                                 |                        |\n| `no-match-policy`: \"fail\"   | False                                 | False                  |\n\n**Matching Policies (no matter what the `trusted-resources-verification-no-match-policy` value is):**\n|                          | `Conditions.TrustedResourcesVerified` | `Conditions.Succeeded` |\n|--------------------------|---------------------------------------|------------------------|\n| all policies pass        | True                                  |                        |\n| any enforce policy fails | False                                 | False                  |\n| only warn policies fail  | False                                 |                        |\n\nA successful sample `TrustedResourcesVerified` condition is:\n```yaml\nstatus:\n  conditions:\n  - lastTransitionTime: \"2023-03-01T18:17:05Z\"\n    message: Trusted resource verification passed\n    status: \"True\"\n    type: TrustedResourcesVerified\n```\n\nFailed sample `TrustedResourcesVerified` and `Succeeded` conditions are:\n```yaml\nstatus:\n  conditions:\n  - lastTransitionTime: \"2023-03-01T18:17:05Z\"\n    message: resource verification failed # This will be filled with 
detailed error message.\n    status: \"False\"\n    type: TrustedResourcesVerified\n  - lastTransitionTime: \"2023-03-01T18:17:10Z\"\n    message: resource verification failed\n    status: \"False\"\n    type: Succeeded\n```\n\n#### Config key at VerificationPolicy\nVerificationPolicy supports a SecretRef or encoded public key data.\n\nHow does VerificationPolicy work?\nYou can create multiple `VerificationPolicy` objects and apply them to the cluster.\n1. Trusted resources will look up policies from the resource namespace (usually this is the same as the taskrun\/pipelinerun namespace).\n2. For each policy found, we check whether the resource URL matches any of the `patterns` in its `resources` list. If it matches, the policy is used for verification.\n3. If multiple policies are matched, the resource must pass all of the \"enforce\" mode policies. If the resource only matches policies in \"warn\" mode and fails to pass a \"warn\" policy, the taskrun or pipelinerun will not fail, but a warning is logged instead.\n4. 
To pass a policy, the resource only needs to be verified by one of the public keys in that policy.\n\nTake the following `VerificationPolicies` for example: a resource from \"https:\/\/github.com\/tektoncd\/catalog.git\" needs to pass both `verification-policy-a` and `verification-policy-b`; to pass `verification-policy-a`, the resource needs to pass either `key1` or `key2`.\n\nExample:\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: VerificationPolicy\nmetadata:\n  name: verification-policy-a\n  namespace: resource-namespace\nspec:\n  # resources defines a list of patterns\n  resources:\n    - pattern: \"https:\/\/github.com\/tektoncd\/catalog.git\"  # git resource pattern\n    - pattern: \"gcr.io\/tekton-releases\/catalog\/upstream\/git-clone\"  # bundle resource pattern\n    - pattern: \"https:\/\/artifacthub.io\/\"  # hub resource pattern\n  # authorities defines a list of public keys\n  authorities:\n    - name: key1\n      key:\n        # secretRef refers to a secret in the cluster, this secret should contain public keys data\n        secretRef:\n          name: secret-name-a\n          namespace: secret-namespace\n        hashAlgorithm: sha256\n    - name: key2\n      key:\n        # data stores the inline public key data\n        data: \"STRING_ENCODED_PUBLIC_KEY\"\n  # mode can be set to \"enforce\" (default) or \"warn\".\n  mode: enforce\n```\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: VerificationPolicy\nmetadata:\n  name: verification-policy-b\n  namespace: resource-namespace\nspec:\n  resources:\n    - pattern: \"https:\/\/github.com\/tektoncd\/catalog.git\"\n  authorities:\n    - name: key3\n      key:\n        # data stores the inline public key data\n        data: \"STRING_ENCODED_PUBLIC_KEY\"\n```\n\n`namespace` should be the same as the corresponding resource's namespace.\n\n`pattern` is used to filter remote resources by their source URL, e.g. a git resource pattern can be set to https:\/\/github.com\/tektoncd\/catalog.git. 
The `pattern` should follow regex syntax; we use the Go regexp library's [`Match`](https:\/\/pkg.go.dev\/regexp#Match) to match the pattern from the VerificationPolicy against the `ConfigSource` URL resolved by remote resolution. Note that `.*` will match all resources.\nTo learn more about regex syntax please refer to [syntax](https:\/\/pkg.go.dev\/regexp\/syntax).\nTo learn more about `ConfigSource` please refer to the resolver docs for more context, e.g. [gitresolver](.\/git-resolver.md).\n\n`key` is used to store the public key. It can be configured with `secretRef`, `data`, or `kms`; note that only one of these three fields can be configured.\n\n  * `secretRef`: refers to a secret in the cluster that stores the public key.\n  * `data`: contains the inline data of the public key in \"PEM-encoded byte slice\" format.\n  * `kms`: refers to the URI of the public key; it should follow the format defined in [sigstore](https:\/\/docs.sigstore.dev\/cosign\/kms_support).\n\n`hashAlgorithm` is the algorithm for the public key; the default is `sha256`. 
It also supports `SHA224`, `SHA384`, `SHA512`.\n\n`mode` controls whether a failing policy will fail the taskrun\/pipelinerun, or only log a warning:\n * enforce (default) - fail the taskrun\/pipelinerun if verification fails\n * warn - don't fail the taskrun\/pipelinerun if verification fails, but log a warning\n\n#### Migrate Config key at configmap to VerificationPolicy\n**Note:** key configuration in the configmap is deprecated.\nThe following usage of public keys in the configmap can be migrated to VerificationPolicy:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: config-trusted-resources\n  namespace: tekton-pipelines\n  labels:\n    app.kubernetes.io\/instance: default\n    app.kubernetes.io\/part-of: tekton-pipelines\ndata:\n  publickeys: \"\/etc\/verification-secrets\/cosign.pub, \/etc\/verification-secrets\/cosign2.pub\"\n```\n\nTo migrate to VerificationPolicy, store the public key files in a secret and configure the secret ref in the VerificationPolicy:\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: VerificationPolicy\nmetadata:\n  name: verification-policy-name\n  namespace: resource-namespace\nspec:\n  authorities:\n    - name: key1\n      key:\n        # secretRef refers to a secret in the cluster, this secret should contain public keys data\n        secretRef:\n          name: secret-name-cosign\n          namespace: secret-namespace\n        hashAlgorithm: sha256\n    - name: key2\n      key:\n        secretRef:\n          name: secret-name-cosign2\n          namespace: secret-namespace\n        hashAlgorithm: sha256\n```","site":"tekton","answers_cleaned":"         linkTitle   Trusted Resources  weight  312            Trusted Resources     Overview   overview     Instructions   Instructions      Sign Resources   sign resources      Enable Trusted Resources   enable trusted resources      Overview  Trusted Resources is a feature which can be used to sign Tekton Resources and verify them  Details of design can be found at  TEP  0091  
https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0091-trusted-resources.md. This is an alpha feature and supports `v1beta1` and `v1` versions of `Task` and `Pipeline`.\n\n**Note:** Currently, trusted resources only support verifying Tekton resources that come from remote places, i.e. git, OCI registry and Artifact Hub. To use the [cluster resolver](.\/cluster-resolver.md) for in-cluster resources, make sure to set all default values for the resources before applying them to the cluster, because the mutating webhook will update the default fields if not given and fail the verification.\n\nVerification failure will mark the corresponding taskrun\/pipelinerun with a Failed status and stop the execution.\n\n## Instructions\n\n### Sign Resources\nWe have added `sign` and `verify` into [Tekton Cli](https:\/\/github.com\/tektoncd\/cli) as a subcommand in release [v0.28.0 and later](https:\/\/github.com\/tektoncd\/cli\/releases\/tag\/v0.28.0). Please refer to the [cli docs](https:\/\/github.com\/tektoncd\/cli\/blob\/main\/docs\/cmd\/tkn_task_sign.md) to sign and verify Tekton resources.\n\nA signed task example:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  annotations:\n    tekton.dev\/signature: MEYCIQDM8WHQAn\/yKJ6psTsa0BMjbI9IdguR+Zi6sPTVynxv6wIhAMy8JSETHP7A2Ncw7MyA7qp9eLsu+1cCKOjRL1mFXIKV\n  creationTimestamp: null\n  name: example-task\n  namespace: tekton-trusted-resources\nspec:\n  steps:\n  - image: ubuntu\n    name: echo\n```\n\n### Enable Trusted Resources\n\n#### Enable feature flag\n\nUpdate the config map:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: feature-flags\n  namespace: tekton-pipelines\n  labels:\n    app.kubernetes.io\/instance: default\n    app.kubernetes.io\/part-of: tekton-pipelines\ndata:\n  trusted-resources-verification-no-match-policy: \"fail\"\n```\n\n`trusted-resources-verification-no-match-policy` configurations:\n* `ignore`: if no matching policies are found, skip the verification, don't log, and don't fail the taskrun\/pipelinerun\n* `warn`: if no matching policies are found, skip the verification, log a warning, and don't fail the taskrun\/pipelinerun\n* `fail`: fail the taskrun\/pipelinerun if no matching policies are found\n\n**Notes:**\n* To skip the verification, make sure no policies exist and `trusted-resources-verification-no-match-policy` is set to `warn` or `ignore`.\n* To enable the verification, install a [`VerificationPolicy`](#config-key-at-verificationpolicy) to match the resources.\n\nOr patch the new values:\n\n```bash\nkubectl patch configmap feature-flags -n tekton-pipelines -p='{\"data\":{\"trusted-resources-verification-no-match-policy\":\"fail\"}}'\n```\n\n### TaskRun and PipelineRun status update\n\n<!-- wokeignore:rule=master -->\nTrusted resources will update the taskrun's [condition](https:\/\/github.com\/kubernetes\/community\/blob\/master\/contributors\/devel\/sig-architecture\/api-conventions.md#typical-status-properties) to indicate whether it passes verification or not. The following tables illustrate how the conditions are impacted by the feature flag and the verification result. An empty cell means that case doesn't update the corresponding condition.\n\nNo Matching Policies:\n\n|                           | Conditions.TrustedResourcesVerified | Conditions.Succeeded |\n| ------------------------- | ----------------------------------- | -------------------- |\n| no-match-policy: \"ignore\" |                                     |                      |\n| no-match-policy: \"warn\"   | False                               |                      |\n| no-match-policy: \"fail\"   | False                               | False                |\n\nMatching Policies (no matter what the `trusted-resources-verification-no-match-policy` value is):\n\n|                          | Conditions.TrustedResourcesVerified | Conditions.Succeeded |\n| ------------------------ | ----------------------------------- | -------------------- |\n| all policies pass        | True                                |                      |\n| any enforce policy fails | False                               | False                |\n| only warn policies fail  | False                               |                      |\n\nA successful sample `TrustedResourcesVerified` condition is:\n\n```yaml\nstatus:\n  conditions:\n  - lastTransitionTime: \"2023-03-01T18:17:05Z\"\n    message: Trusted resource verification passed\n    status: \"True\"\n    type: TrustedResourcesVerified\n```\n\nFailed sample `TrustedResourcesVerified` and `Succeeded` conditions are:\n\n```yaml\nstatus:\n  conditions:\n  - lastTransitionTime: \"2023-03-01T18:17:05Z\"\n    message: resource verification failed # This will be filled with a detailed error message\n    status: \"False\"\n    type: TrustedResourcesVerified\n  - lastTransitionTime: \"2023-03-01T18:17:10Z\"\n    message: resource verification failed\n    status: \"False\"\n    type: Succeeded\n```\n\n### Config key at VerificationPolicy\nVerificationPolicy supports `secretRef` or encoded public key data.\n\n#### How does VerificationPolicy work?\nYou can create multiple `VerificationPolicy` objects and apply them to the cluster.\n1. Trusted resources will look up policies from the resource namespace (usually this is the same as the taskrun\/pipelinerun namespace).\n2. If multiple policies are found: for each policy we will check if the resource url matches any of the `patterns` in the `resources` list. If matched, then this policy will be used for verification.\n3. If multiple policies are matched, the resource must pass all the `enforce` mode policies. If the resource only matches policies in `warn` mode and fails to pass a `warn` policy, it will not fail the taskrun or pipelinerun, but log a warning instead.\n4. To pass one policy, the resource can pass any public key in the policy.\n\nTake the following `VerificationPolicies` for example: a resource from `https:\/\/github.com\/tektoncd\/catalog.git` needs to pass both `verification-policy-a` and `verification-policy-b`; to pass `verification-policy-a` the resource needs to pass either `key1` or `key2`.\n\nExample:\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: VerificationPolicy\nmetadata:\n  name: verification-policy-a\n  namespace: resource-namespace\nspec:\n  # resources defines a list of patterns\n  resources:\n    - pattern: \"https:\/\/github.com\/tektoncd\/catalog.git\" # git resource pattern\n    - pattern: \"gcr.io\/tekton-releases\/catalog\/upstream\/git-clone\" # bundle resource pattern\n    - pattern: \"https:\/\/artifacthub.io\/\" # hub resource pattern\n  # authorities defines a list of public keys\n  authorities:\n    - name: key1\n      key:\n        # secretRef refers to a secret in the cluster, this secret should contain public keys data\n        secretRef:\n          name: secret-name-a\n          namespace: secret-namespace\n        hashAlgorithm: sha256\n    - name: key2\n      key:\n        # data stores the inline public key data\n        data: \"STRING_ENCODED_PUBLIC_KEY\"\n  # mode can be set to \"enforce\" (default) or \"warn\"\n  mode: enforce\n```\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: VerificationPolicy\nmetadata:\n  name: verification-policy-b\n  namespace: resource-namespace\nspec:\n  resources:\n    - pattern: \"https:\/\/github.com\/tektoncd\/catalog.git\"\n  authorities:\n    - name: key3\n      key:\n        # data stores the inline public key data\n        data: \"STRING_ENCODED_PUBLIC_KEY\"\n```\n\n`namespace` should be the same as the corresponding resources' namespace.\n\n`pattern` is used to filter out remote resources by their source URL, e.g. a git resource's pattern can be set to `https:\/\/github.com\/tektoncd\/catalog.git`. The `pattern` should follow regex schema; we use the go regex library's [`Match`](https:\/\/pkg.go.dev\/regexp#Match) to match the pattern from the VerificationPolicy to the `ConfigSource` URL resolved by remote resolution. Note that `\".*\"` will match all resources. To learn more about regex syntax please refer to [syntax](https:\/\/pkg.go.dev\/regexp\/syntax). To learn more about `ConfigSource` please refer to the resolvers doc for more context, e.g. [gitresolver](.\/git-resolver.md).\n\n`key` is used to store the public key. `key` can be configured with `secretRef`, `data` or `kms`; note that only 1 of these 3 fields can be configured:\n* `secretRef`: refers to a secret in the cluster that stores the public key.\n* `data`: contains the inline data of the public key in `PEM encoded byte slice` format.\n* `kms`: refers to the uri of the public key; it should follow the format defined in [sigstore](https:\/\/docs.sigstore.dev\/cosign\/kms_support).\n\n`hashAlgorithm` is the algorithm for the public key, by default `sha256`. It also supports `SHA224`, `SHA384`, `SHA512`.\n\n`mode` controls whether a failing policy will fail the taskrun\/pipelinerun, or only log a warning:\n* `enforce` (default): fail the taskrun\/pipelinerun if verification fails\n* `warn`: don't fail the taskrun\/pipelinerun if verification fails but log a warning\n\n#### Migrate Config key at configmap to VerificationPolicy\n\n**Note:** key configuration in the configmap is deprecated. The following usage of public keys in the configmap can be migrated to VerificationPolicy:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: config-trusted-resources\n  namespace: tekton-pipelines\n  labels:\n    app.kubernetes.io\/instance: default\n    app.kubernetes.io\/part-of: tekton-pipelines\ndata:\n  publickeys: \"\/etc\/verification-secrets\/cosign.pub,\/etc\/verification-secrets\/cosign2.pub\"\n```\n\nTo migrate to VerificationPolicy: store the public key files in a secret, and configure the secret ref in the VerificationPolicy:\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: VerificationPolicy\nmetadata:\n  name: verification-policy-name\n  namespace: resource-namespace\nspec:\n  authorities:\n    - name: key1\n      key:\n        # secretRef refers to a secret in the cluster, this secret should contain public keys data\n        secretRef:\n          name: secret-name-cosign\n          namespace: secret-namespace\n        hashAlgorithm: sha256\n    - name: key2\n      key:\n        secretRef:\n          name: secret-name-cosign2\n          namespace: secret-namespace\n        hashAlgorithm: sha256\n```"}
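The policy matching and mode-aggregation rules described above (regex pattern matching of the resource URL, all `enforce` policies must pass, failing `warn` policies only warn, and the no-match-policy flag) can be sketched in Python. This is a simplified model, not Tekton's actual Go implementation; the dict shapes and the `passed` flag standing in for a real signature check are assumptions for illustration:

```python
import re

def evaluate_policies(resource_url, policies, no_match_policy="fail"):
    """Sketch of the verification outcome rules.

    Each policy is a dict: {"name", "patterns", "mode" ("enforce"|"warn"),
    "passed"} where "passed" stands in for the real signature check.
    Returns (verified, run_failed) mirroring the
    TrustedResourcesVerified / Succeeded conditions (None = not updated).
    """
    # A policy applies if the resource URL matches any of its patterns.
    matched = [p for p in policies
               if any(re.match(pat, resource_url) for pat in p["patterns"])]
    if not matched:
        # No matching policies: behavior depends on the feature flag.
        if no_match_policy == "fail":
            return False, True   # verification failed, run fails
        if no_match_policy == "warn":
            return False, False  # log a warning, run continues
        return None, False       # "ignore": condition not updated
    enforce_ok = all(p["passed"] for p in matched if p["mode"] == "enforce")
    warn_ok = all(p["passed"] for p in matched if p["mode"] == "warn")
    # Only failing *enforce* policies fail the run; warn failures just warn.
    return enforce_ok and warn_ok, not enforce_ok
```

For instance, a resource matching one passing `enforce` policy and one failing `warn` policy gets `TrustedResourcesVerified: False` but the run still proceeds, matching the "only warn policies fail" row of the table.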
{"questions":"tekton weight 4000 Migrating From Tekton to Tekton Migrating from Tekton v1beta1","answers":"<!--\n---\nlinkTitle: \"Migrating from Tekton v1beta1\"\nweight: 4000\n---\n-->\n\n# Migrating From Tekton `v1beta1` to Tekton `v1`\n\n- [Changes to fields](#changes-to-fields)\n- [Upgrading `PipelineRun.Timeout` to `PipelineRun.Timeouts`](#upgrading-pipelinerun.timeout-to-pipelinerun.timeouts)\n- [Replacing Resources from Task, TaskRun, Pipeline and PipelineRun](#replacing-resources-from-task,-taskrun,-pipeline-and-pipelinerun)\n- [Replacing `taskRef.bundle` and `pipelineRef.bundle` with Bundle Resolver](#replacing-taskRef.bundle-and-pipelineRef.bundle-with-bundle-resolver)\n- [Replacing ClusterTask with Remote Resolution](#replacing-clustertask-with-remote-resolution)\n- [Adding ServiceAccountName and PodTemplate under `TaskRunTemplate` in `PipelineRun.Spec`](#adding-serviceaccountname-and-podtemplate-under-taskruntemplate-in-pipelinerun.spec)\n\n\nThis document describes the differences between `v1beta1` Tekton entities and their\n`v1` counterparts. 
It also describes the fields that changed and the fields deprecated in v1.\n## Changes to fields\n\nIn Tekton `v1`, the following fields have been changed:\n\n| Old field | Replacement |\n| --------- | ----------|\n| `pipelineRun.spec.Timeout`| `pipelineRun.spec.timeouts.pipeline` |\n| `pipelineRun.spec.taskRunSpecs.taskServiceAccountName` | `pipelineRun.spec.taskRunSpecs.serviceAccountName` |\n| `pipelineRun.spec.taskRunSpecs.taskPodTemplate` | `pipelineRun.spec.taskRunSpecs.podTemplate` |\n| `taskRun.status.taskResults` | `taskRun.status.results` |\n| `pipelineRun.status.pipelineResults` | `pipelineRun.status.results` |\n| `taskRun.spec.taskRef.bundle` | `taskRun.spec.taskRef.resolver` |\n| `pipelineRun.spec.pipelineRef.bundle` | `pipelineRun.spec.pipelineRef.resolver` |\n| `task.spec.resources` | removed from `Task` |\n| `taskrun.spec.resources` | removed from `TaskRun` |\n| `taskRun.status.cloudEvents` | removed from `TaskRun` |\n| `taskRun.status.resourcesResult` | removed from `TaskRun` |\n| `pipeline.spec.resources` | removed from `Pipeline` |\n| `pipelineRun.spec.resources` | removed from `PipelineRun` |\n| `pipelineRun.spec.serviceAccountName` | [`pipelineRun.spec.taskRunTemplate.serviceAccountName`](#adding-serviceaccountname-and-podtemplate-under-taskruntemplate-in-pipelinerun.spec) |\n| `pipelineRun.spec.podTemplate` | [`pipelineRun.spec.taskRunTemplate.podTemplate`](#adding-serviceaccountname-and-podtemplate-under-taskruntemplate-in-pipelinerun.spec) |\n| `task.spec.steps[].resources` | `task.spec.steps[].computeResources` |\n| `task.spec.stepTemplate.resources` | `task.spec.stepTemplate.computeResources` |\n| `task.spec.sidecars[].resources` | `task.spec.sidecars[].computeResources` |\n| `taskRun.spec.sidecarOverrides`| `taskRun.spec.sidecarSpecs` |\n| `taskRun.spec.stepOverrides` | `taskRun.spec.stepSpecs` |\n| `taskRun.spec.sidecarSpecs[].resources` | `taskRun.spec.sidecarSpecs[].computeResources` |\n| `taskRun.spec.stepSpecs[].resources` | 
`taskRun.spec.stepSpecs[].computeResources` |\n\n## Replacing `resources` from Task, TaskRun, Pipeline and PipelineRun <a id='replacing-resources-from-task,-taskrun,-pipeline-and-pipelinerun'> <\/a>\n`PipelineResources` and the `resources` fields of Task, TaskRun, Pipeline and PipelineRun have been removed. Please use `Tasks` instead. For more information, see [Replacing PipelineResources](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/pipelineresources.md)\n\n## Replacing `taskRef.bundle` and `pipelineRef.bundle` with Bundle Resolver <a id='replacing-taskRef.bundle-and-pipelineRef.bundle-with-bundle-resolver'> <\/a>\n\n**Note: `taskRef.bundle` and `pipelineRef.bundle` are now removed from `v1beta1`. This is kept for \"history\" purposes**.\n\nBundle resolver in remote resolution should be used instead of `taskRun.spec.taskRef.bundle` and `pipelineRun.spec.pipelineRef.bundle`.\n\nThe [`enable-bundles-resolver`](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/install.md#customizing-the-pipelines-controller-behavior) feature flag must be enabled to use this feature.\n\n```yaml\n# Before in v1beta1:\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nspec:\n  taskRef:\n    name: example-task\n    bundle: python:3-alpine\n---\n# After in v1:\napiVersion: tekton.dev\/v1\nkind: TaskRun\nspec:\n  taskRef:\n    resolver: bundles\n    params:\n    - name: bundle\n      value: python:3-alpine\n    - name: name\n      value: taskName\n    - name: kind\n      value: Task\n```\n\n## Replacing ClusterTask with Remote Resolution\n`ClusterTask` is deprecated. 
Please use the `cluster` resolver instead.\n\nThe [`enable-cluster-resolver`](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/install.md#customizing-the-pipelines-controller-behavior) feature flag must be enabled to use this feature.\n\nThe `cluster` resolver allows `Pipeline`s, `PipelineRun`s, and `TaskRun`s to refer\nto `Pipeline`s and `Task`s defined in other namespaces in the cluster.\n\n```yaml\n# Before in v1beta1:\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: cluster-task-reference\nspec:\n  taskRef:\n    name: example-task\n    kind: ClusterTask\n---\n# After in v1:\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  name: cluster-task-reference\nspec:\n  taskRef:\n    resolver: cluster\n    params:\n    - name: kind\n      value: task\n    - name: name\n      value: example-task\n    - name: namespace\n      value: example-namespace\n```\n\nFor more information, see [Remote resolution](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0060-remote-resource-resolution.md).\n\n## Adding `ServiceAccountName` and `PodTemplate` under TaskRunTemplate in PipelineRun.Spec <a id='adding-serviceaccountname-and-podtemplate-under-taskruntemplate-in-pipelinerun.spec'><\/a>\n`ServiceAccountName` and `PodTemplate` are moved to `TaskRunTemplate` as `TaskRunTemplate.ServiceAccountName` and `TaskRunTemplate.PodTemplate` so that users can specify common configuration in `TaskRunTemplate` which will apply to all the TaskRuns.\n\n```yaml\n# Before in v1beta1:\napiVersion: tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: template-pr\nspec:\n  pipelineRef:\n    name: clone-test-build\n  serviceAccountName: build\n  podTemplate:\n    securityContext:\n      fsGroup: 65532\n---\n# After in v1:\napiVersion: tekton.dev\/v1\nkind: PipelineRun\nmetadata:\n  name: template-pr\nspec:\n  pipelineRef:\n    name: clone-test-build\n  taskRunTemplate:\n    serviceAccountName: build\n    podTemplate:\n      securityContext:\n        
fsGroup: 65532\n```\n\nFor more information, see [TEP-119](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0119-add-taskrun-template-in-pipelinerun.md).","site":"tekton"}
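The field moves in the migration doc above are mechanical, so they can be sketched as a small transformation over a manifest loaded as a dict. This is a hypothetical helper for illustration, not an official Tekton migration tool; it covers only the `timeout` and `serviceAccountName`/`podTemplate` moves, and the YAML key names (`timeout`, lowercase) are assumed from the examples:

```python
def migrate_pipelinerun(v1beta1):
    """Rewrite a v1beta1 PipelineRun dict into v1 shape (partial sketch)."""
    run = dict(v1beta1)
    spec = dict(run.get("spec", {}))
    # pipelineRun.spec.timeout -> pipelineRun.spec.timeouts.pipeline
    if "timeout" in spec:
        spec.setdefault("timeouts", {})["pipeline"] = spec.pop("timeout")
    # serviceAccountName / podTemplate move under taskRunTemplate
    template = dict(spec.get("taskRunTemplate", {}))
    for field in ("serviceAccountName", "podTemplate"):
        if field in spec:
            template[field] = spec.pop(field)
    if template:
        spec["taskRunTemplate"] = template
    run["spec"] = spec
    run["apiVersion"] = "tekton.dev/v1"
    return run
```

Running it on the `template-pr` example from the doc produces the "After in v1" manifest, with `serviceAccountName: build` and the `podTemplate` nested under `taskRunTemplate`.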
{"questions":"tekton Workspaces Workspaces weight 405","answers":"<!--\n---\nlinkTitle: \"Workspaces\"\nweight: 405\n---\n-->\n\n# Workspaces\n\n- [Overview](#overview)\n  - [`Workspaces` in `Tasks` and `TaskRuns`](#workspaces-in-tasks-and-taskruns)\n  - [`Workspaces` in `Pipelines` and `PipelineRuns`](#workspaces-in-pipelines-and-pipelineruns)\n  - [Optional `Workspaces`](#optional-workspaces)\n  - [Isolated `Workspaces`](#isolated-workspaces)\n- [Configuring `Workspaces`](#configuring-workspaces)\n  - [Using `Workspaces` in `Tasks`](#using-workspaces-in-tasks)\n    - [Isolating `Workspaces` to Specific `Steps` or `Sidecars`](#isolating-workspaces-to-specific-steps-or-sidecars)\n    - [Setting a default `TaskRun` `Workspace Binding`](#setting-a-default-taskrun-workspace-binding)\n    - [Using `Workspace` variables in `Tasks`](#using-workspace-variables-in-tasks)\n    - [Mapping `Workspaces` in `Tasks` to `TaskRuns`](#mapping-workspaces-in-tasks-to-taskruns)\n    - [Examples of `TaskRun` definition using `Workspaces`](#examples-of-taskrun-definition-using-workspaces)\n  - [Using `Workspaces` in `Pipelines`](#using-workspaces-in-pipelines)\n    - [Specifying `Workspace` order in a `Pipeline` and Affinity Assistants](#specifying-workspace-order-in-a-pipeline-and-affinity-assistants)\n    - [Specifying `Workspaces` in `PipelineRuns`](#specifying-workspaces-in-pipelineruns)\n    - [Example `PipelineRun` definition using `Workspaces`](#example-pipelinerun-definition-using-workspaces)\n  - [Specifying `VolumeSources` in `Workspaces`](#specifying-volumesources-in-workspaces)\n    - [Using `PersistentVolumeClaims` as `VolumeSource`](#using-persistentvolumeclaims-as-volumesource)\n    - [Using other types of `VolumeSources`](#using-other-types-of-volumesources)\n- [Using Persistent Volumes within a `PipelineRun`](#using-persistent-volumes-within-a-pipelinerun)\n- [More examples](#more-examples)\n\n## Overview\n\n`Workspaces` allow `Tasks` to declare parts of the filesystem 
that need to be provided\nat runtime by `TaskRuns`. A `TaskRun` can make these parts of the filesystem available\nin many ways: using a read-only `ConfigMap` or `Secret`, an existing `PersistentVolumeClaim`\nshared with other Tasks, a `PersistentVolumeClaim` created from a provided `VolumeClaimTemplate`, or simply an `emptyDir` that is discarded when the `TaskRun`\ncompletes.\n\n`Workspaces` are similar to `Volumes` except that they allow a `Task` author\nto defer to users and their `TaskRuns` when deciding which class of storage to use.\n\n`Workspaces` can serve the following purposes:\n\n- Storage of inputs and\/or outputs\n- Sharing data among `Tasks`\n- A mount point for credentials held in `Secrets`\n- A mount point for configurations held in `ConfigMaps`\n- A mount point for common tools shared by an organization\n- A cache of build artifacts that speed up jobs\n\n### `Workspaces` in `Tasks` and `TaskRuns`\n\n`Tasks` specify where a `Workspace` resides on disk for its `Steps`. At\nruntime, a `TaskRun` provides the specific details of the `Volume` that is\nmounted into that `Workspace`.\n\nThis separation of concerns allows for a lot of flexibility. For example, in isolation,\na single `TaskRun` might simply provide an `emptyDir` volume that mounts quickly\nand disappears at the end of the run. In a more complex system, however, a `TaskRun`\nmight use a `PersistentVolumeClaim` which is pre-populated with\ndata for the `Task` to process. In both scenarios the `Task's`\n`Workspace` declaration remains the same and only the runtime\ninformation in the `TaskRun` changes.\n\n`Tasks` can also share `Workspaces` with their `Sidecars`, though there's a little more\nconfiguration involved to add the required `volumeMount`. 
This allows for a\nlong-running process in a `Sidecar` to share data with the executing `Steps` of a `Task`.\n\n**Note**: If the `enable-api-fields` feature-flag is set to `\"beta\"` then workspaces\nwill automatically be available to `Sidecars` too!\n\n### `Workspaces` in `Pipelines` and `PipelineRuns`\n\nA `Pipeline` can use `Workspaces` to show how storage will be shared through\nits `Tasks`. For example, `Task` A might clone a source repository onto a `Workspace`\nand `Task` B might compile the code that it finds in that `Workspace`. It's\nthe `Pipeline's` job to ensure that the `Workspace` these two `Tasks` use is the\nsame, and more importantly, that the order in which they access the `Workspace` is\ncorrect.\n\n`PipelineRuns` perform mostly the same duties as `TaskRuns` - they provide the\nspecific `Volume` information to use for the `Workspaces` used by each `Pipeline`.\n`PipelineRuns` have the added responsibility of ensuring that whatever `Volume` type they\nprovide can be safely and correctly shared across multiple `Tasks`.\n\n### Optional `Workspaces`\n\nBoth `Tasks` and `Pipelines` can declare a `Workspace` \"optional\". When an optional `Workspace`\nis declared the `TaskRun` or `PipelineRun` may omit a `Workspace` Binding for that `Workspace`.\nThe `Task` or `Pipeline` behaviour may change when the Binding is omitted. This feature has\nmany uses:\n\n- A `Task` may optionally accept credentials to run authenticated commands.\n- A `Pipeline` may accept optional configuration that changes the linting or compilation\nparameters used.\n- An optional build cache may be provided to speed up compile times.\n\nSee the section [Using `Workspaces` in `Tasks`](#using-workspaces-in-tasks) for more info on\nthe `optional` field.\n\n### Isolated `Workspaces`\n\nThis is a beta feature. The `enable-api-fields` feature flag [must be set to `\"beta\"`](.\/install.md)\nfor Isolated Workspaces to function.\n\nCertain kinds of data are more sensitive than others. 
To reduce exposure of sensitive data Task\nauthors can isolate `Workspaces` to only those `Steps` and `Sidecars` that require access to\nthem. The primary use-case for this is credentials but it can apply to any data that should have\nits access strictly limited to only specific container images.\n\nSee the section [Isolating `Workspaces` to Specific `Steps` or `Sidecars`](#isolating-workspaces-to-specific-steps-or-sidecars)\nfor more info on this feature.\n\n## Configuring `Workspaces`\n\nThis section describes how to configure one or more `Workspaces` in a `TaskRun`.\n\n### Using `Workspaces` in `Tasks`\n\nTo configure one or more `Workspaces` in a `Task`, add a `workspaces` list with each entry using the following fields:\n\n- `name` -  (**required**) A **unique** string identifier that can be used to refer to the workspace\n- `description` - An informative string describing the purpose of the `Workspace`\n- `readOnly` - A boolean declaring whether the `Task` will write to the `Workspace`. Defaults to `false`.\n- `optional` - A boolean indicating whether a TaskRun can omit the `Workspace`. Defaults to `false`.\n- `mountPath` - A path to a location on disk where the workspace will be available to `Steps`. If a\n  `mountPath` is not provided the workspace will be placed by default at `\/workspace\/<name>` where `<name>`\n  is the workspace's unique name.\n\nNote the following:\n\n- A `Task` definition can include as many `Workspaces` as it needs. It is recommended that `Tasks` use\n  **at most** one _writeable_ `Workspace`.\n- A `readOnly` `Workspace` will have its volume mounted as read-only. 
Attempting to write\n  to a `readOnly` `Workspace` will result in errors and failed `TaskRuns`.\n\nBelow is an example `Task` definition that includes a `Workspace` called `messages` to which the `Task` writes a message:\n\n```yaml\nspec:\n  steps:\n    - name: write-message\n      image: ubuntu\n      script: |\n        #!\/usr\/bin\/env bash\n        set -xe\n        if [ \"$(workspaces.messages.bound)\" == \"true\" ] ; then\n          echo hello! > $(workspaces.messages.path)\/message\n        fi\n  workspaces:\n    - name: messages\n      description: |\n        The folder where we write the message to. If no workspace\n        is provided then the message will not be written.\n      optional: true\n      mountPath: \/custom\/path\/relative\/to\/root\n```\n\n#### Sharing `Workspaces` with `Sidecars`\n\nA `Task's` `Sidecars` are also able to access the `Workspaces` the `Task` defines but must have their\n`volumeMount` configuration set explicitly. Below is an example `Task` that shares a `Workspace` between\nits `Steps` and its `Sidecar`. In the example a `Sidecar` sleeps for a short amount of time and then writes\na `ready` file which the `Step` is waiting for:\n\n```yaml\nspec:\n  workspaces:\n    - name: signals\n  steps:\n    - image: alpine\n      script: |\n        while [ ! -f \"$(workspaces.signals.path)\/ready\" ]; do\n          echo \"Waiting for ready file...\"\n          sleep 1\n        done\n        echo \"Saw ready file!\"\n  sidecars:\n    - image: alpine\n      # Note: must explicitly include volumeMount for the workspace to be accessible in the Sidecar\n      volumeMounts:\n        - name: $(workspaces.signals.volume)\n          mountPath: $(workspaces.signals.path)\n      script: |\n        sleep 3\n        touch \"$(workspaces.signals.path)\/ready\"\n```\n\n**Note:** Starting in Pipelines v0.24.0 `Sidecars` automatically get access to `Workspaces`. 
This is a\nbeta feature and requires Pipelines to have [the \"beta\" feature gate enabled](.\/install.md#beta-features).\n\nIf a Sidecar already has a `volumeMount` at the location expected for a `workspace` then that `workspace` is\nnot bound to the Sidecar. This preserves backwards-compatibility with any existing uses of the `volumeMount`\ntrick described above.\n\n#### Isolating `Workspaces` to Specific `Steps` or `Sidecars`\n\nThis is a beta feature. The `enable-api-fields` feature flag [must be set to `\"beta\"`](.\/install.md#beta-features)\nfor Isolated Workspaces to function.\n\nTo limit access to a `Workspace` from a subset of a `Task's` `Steps` or `Sidecars` requires\nadding a `workspaces` declaration to those sections. In the following example a `Task` has several\n`Steps` but only the one that performs a `git clone` will be able to access the SSH credentials\npassed into it:\n\n```yaml\nspec:\n  workspaces:\n  - name: ssh-credentials\n    description: An .ssh directory with keys, known_host and config files used to clone the repo.\n  steps:\n  - name: clone-repo\n    workspaces:\n    - name: ssh-credentials # This Step receives the sensitive workspace; the others do not.\n    image: git\n    script: # git clone ...\n  - name: build-source\n    image: third-party-source-builder:latest # This image doesn't get access to ssh-credentials.\n  - name: lint-source\n    image: third-party-source-linter:latest # This image doesn't get access to ssh-credentials.\n```\n\nIt can potentially be useful to mount `Workspaces` to different locations on a per-`Step` or\nper-`Sidecar` basis and this is also supported:\n\n```yaml\nkind: Task\nspec:\n  workspaces:\n  - name: ws\n    mountPath: \/workspaces\/ws\n  steps:\n  - name: edit-files-1\n    workspaces:\n    - name: ws\n      mountPath: \/foo # overrides mountPath\n  - name: edit-files-2\n    workspaces:\n    - name: ws # no mountPath specified so will use \/workspaces\/ws\n  sidecars:\n  - name: 
watch-files-on-workspace\n    workspaces:\n    - name: ws\n      mountPath: \/files # overrides mountPath\n```\n\n#### Setting a default `TaskRun` `Workspace Binding`\n\nAn organization may want to specify default `Workspace` configuration for `TaskRuns`. This allows users to\nuse `Tasks` without having to know the specifics of `Workspaces` - they can simply rely on the platform\nto use the default configuration when a `Workspace` is missing. To support this Tekton allows a default\n`Workspace Binding` to be specified for `TaskRuns`. When the `TaskRun` executes, any `Workspaces` that \na `Task` requires but which are not provided by the `TaskRun` will be bound with the default configuration.\n\nThe configuration for the default `Workspace Binding` is added to the `config-defaults` `ConfigMap`, under\nthe `default-task-run-workspace-binding` key. For an example, see the [Customizing basic execution\nparameters](.\/additional-configs.md#customizing-basic-execution-parameters) section of the install doc.\n\n**Note:** the default configuration is used for any _required_ `Workspace` declared by a `Task`. Optional\n`Workspaces` are not populated with the default binding. This is because a `Task's` behaviour will typically\ndiffer slightly when an optional `Workspace` is bound.\n\n#### Using `Workspace` variables in `Tasks`\n\nThe following variables make information about `Workspaces` available to `Tasks`:\n\n- `$(workspaces.<name>.path)` - specifies the path to a `Workspace`\n   where `<name>` is the name of the `Workspace`. This will be an\n   empty string when a Workspace is declared optional and not provided\n   by a TaskRun.\n- `$(workspaces.<name>.bound)` - either `true` or `false`, specifies\n   whether a workspace was bound. Always `true` if the workspace is required.\n- `$(workspaces.<name>.claim)` - specifies the name of the `PersistentVolumeClaim` used as a volume source for the `Workspace` \n   where `<name>` is the name of the `Workspace`. 
If a volume source other than `PersistentVolumeClaim` is used, an empty string is returned.
- `$(workspaces.<name>.volume)` - specifies the name of the `Volume`
   provided for a `Workspace` where `<name>` is the name of the `Workspace`.

#### Mapping `Workspaces` in `Tasks` to `TaskRuns`

A `TaskRun` that executes a `Task` containing a `workspaces` list must bind
those `workspaces` to actual physical `Volumes`. To do so, the `TaskRun` includes
its own `workspaces` list. Each entry in the list contains the following fields:

- `name` - (**required**) The name of the `Workspace` within the `Task` for which the `Volume` is being provided
- `subPath` - An optional subdirectory on the `Volume` to store data for that `Workspace`

The entry must also include one `VolumeSource`. See [Specifying `VolumeSources` in `Workspaces`](#specifying-volumesources-in-workspaces) for more information.

**Caution:**
- The `Workspaces` declared in a `Task` must be available when executing the associated `TaskRun`.
  Otherwise, the `TaskRun` will fail.

#### Examples of `TaskRun` definition using `Workspaces`

The following example illustrates how to specify `Workspaces` in your `TaskRun` definition. Here an
[`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
is provided for a Task's `workspace` called `myworkspace`:

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: example-taskrun-
spec:
  taskRef:
    name: example-task
  workspaces:
    - name: myworkspace # this workspace name must be declared in the Task
      emptyDir: {}      # emptyDir volumes can be used for TaskRuns,
                        # but consider using a PersistentVolumeClaim for PipelineRuns
```
For examples of using other types of volume sources, see [Specifying `VolumeSources` in `Workspaces`](#specifying-volumesources-in-workspaces).
For a more in-depth example, see [`Workspaces` in a 
`TaskRun`](..\/examples\/v1\/taskruns\/workspace.yaml).\n\n### Using `Workspaces` in `Pipelines`\n\nWhile individual `Tasks` declare the `Workspaces` they need to run, the `Pipeline` decides\nwhich `Workspaces` are shared among its `Tasks`. To declare shared `Workspaces` in a `Pipeline`,\nyou must add the following information to your `Pipeline` definition:\n\n- A list of `Workspaces` that your `PipelineRuns` will be providing. Use the `workspaces` field to\n  specify the target `Workspaces` in your `Pipeline` definition as shown below. Each entry in the\n  list must have a unique name.\n- A mapping of `Workspace` names between the `Pipeline` and the `Task` definitions.\n\nThe example below defines a `Pipeline` with a `Workspace` named `pipeline-ws1`. This\n`Workspace` is bound in two `Tasks` - first as the `output` workspace declared by the `gen-code`\n`Task`, then as the `src` workspace declared by the `commit` `Task`. If the `Workspace`\nprovided by the `PipelineRun` is a `PersistentVolumeClaim` then these two `Tasks` can share\ndata within that `Workspace`.\n\n```yaml\nspec:\n  workspaces:\n    - name: pipeline-ws1 # Name of the workspace in the Pipeline\n    - name: pipeline-ws2\n      optional: true\n  tasks:\n    - name: use-ws-from-pipeline\n      taskRef:\n        name: gen-code # gen-code expects a workspace named \"output\"\n      workspaces:\n        - name: output\n          workspace: pipeline-ws1\n    - name: use-ws-again\n      taskRef:\n        name: commit # commit expects a workspace named \"src\"\n      workspaces:\n        - name: src\n          workspace: pipeline-ws1\n      runAfter:\n        - use-ws-from-pipeline # important: use-ws-from-pipeline writes to the workspace first\n```\n\nInclude a `subPath` in the `Workspace Binding` to mount different parts of the same volume for different Tasks. 
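As a minimal sketch, reusing the `gen-code` and `commit` `Tasks` from the example above (the `dir-1` and `dir-2` directory names are illustrative, not part of that example), each `Task` can be given its own subdirectory of the shared volume by adding `subPath` to its workspace binding in the `Pipeline`:

```yaml
spec:
  workspaces:
    - name: pipeline-ws1
  tasks:
    - name: use-ws-from-pipeline
      taskRef:
        name: gen-code
      workspaces:
        - name: output
          workspace: pipeline-ws1
          subPath: dir-1 # gen-code sees only <volume>/dir-1
    - name: use-ws-again
      taskRef:
        name: commit
      workspaces:
        - name: src
          workspace: pipeline-ws1
          subPath: dir-2 # commit sees only <volume>/dir-2
      runAfter:
        - use-ws-from-pipeline
```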
See [a full example of this kind of Pipeline](../examples/v1/pipelineruns/pipelinerun-using-different-subpaths-of-workspace.yaml) which writes data to two adjacent directories on the same Volume.

The `subPath` specified in a `Pipeline` will be appended to any `subPath` specified as part of the `PipelineRun` workspace declaration. So a `PipelineRun` declaring a `Workspace` with `subPath` of `/foo` for a `Pipeline` that binds it to a `Task` with `subPath` of `/bar` will end up mounting the `Volume`'s `/foo/bar` directory.

#### Specifying `Workspace` order in a `Pipeline` and Affinity Assistants

Sharing a `Workspace` between `Tasks` requires you to define the order in which those `Tasks`
write to or read from that `Workspace`. Use the `runAfter` field in your `Pipeline` definition
to define when a `Task` should be executed. For more information, see the [`runAfter` documentation](pipelines.md#using-the-runafter-parameter).

When a `PersistentVolumeClaim` is used as volume source for a `Workspace` in a `PipelineRun`,
an Affinity Assistant will be created. For more information, see the [`Affinity Assistants` documentation](affinityassistants.md).

**Note**: When `coschedule` is set to `workspaces` or `disabled`, binding multiple [`PersistentVolumeClaim` based workspaces](#using-persistentvolumeclaims-as-volumesource) to the same `TaskRun` in a `PipelineRun` is not allowed due to potential Availability Zone conflicts.
See more details in [Availability Zones](#availability-zones).

#### Specifying `Workspaces` in `PipelineRuns`

For a `PipelineRun` to execute a `Pipeline` that includes one or more `Workspaces`, it needs to
bind the `Workspace` names to volumes using its own `workspaces` field. Each entry in
this list must correspond to a `Workspace` declaration in the `Pipeline`. 
Each entry in the
`workspaces` list must specify the following:

- `name` - (**required**) the name of the `Workspace` specified in the `Pipeline` definition for which a volume is being provided.
- `subPath` - (optional) a directory on the volume that will store that `Workspace's` data. This directory must exist at the
  time the `TaskRun` executes, otherwise the execution will fail.

The entry must also include one `VolumeSource`. See [Using `VolumeSources` with `Workspaces`](#specifying-volumesources-in-workspaces) for more information.

**Note:** If the `Workspaces` specified by a `Pipeline` are not provided at runtime by a `PipelineRun`, that `PipelineRun` will fail.

You can pass in extra `Workspaces` if needed depending on your use cases. An example use
case is when your CI system autogenerates `PipelineRuns` and it has `Workspaces` it wants to
provide to all `PipelineRuns`. Because you can pass in extra `Workspaces`, you don't have to
go through the complexity of checking each `Pipeline` and providing only the required `Workspaces`.

#### Example `PipelineRun` definition using `Workspaces`

In the example below, a `volumeClaimTemplate` specifies how a `PersistentVolumeClaim` should be created for a workspace named
`myworkspace` declared in a `Pipeline`. When using `volumeClaimTemplate` a new `PersistentVolumeClaim` is created for
each `PipelineRun`, and it allows the user to specify e.g. 
size and StorageClass for the volume.

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: example-pipelinerun-
spec:
  pipelineRef:
    name: example-pipeline
  workspaces:
    - name: myworkspace # this workspace name must be declared in the Pipeline
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce # access mode may affect how you can use this volume in parallel tasks
          resources:
            requests:
              storage: 1Gi
```

For examples of using other types of volume sources, see [Specifying `VolumeSources` in `Workspaces`](#specifying-volumesources-in-workspaces).
For a more in-depth example, see the [`Workspaces` in `PipelineRun`](../examples/v1/pipelineruns/workspaces.yaml) YAML sample.

### Specifying `VolumeSources` in `Workspaces`

You can only use a single type of `VolumeSource` per `Workspace` entry. The configuration
options differ for each type. `Workspaces` support the following fields:

#### Using `PersistentVolumeClaims` as `VolumeSource`

`PersistentVolumeClaim` volumes are a good choice for sharing data among `Tasks` within a `Pipeline`.
Beware that the [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)
configured for the `PersistentVolumeClaim` affects how you can use the volume for parallel `Tasks` in a `Pipeline`. See
[Specifying `workspace` order in a `Pipeline` and Affinity Assistants](#specifying-workspace-order-in-a-pipeline-and-affinity-assistants) for more information about this.
There are two ways of using `PersistentVolumeClaims` as a `VolumeSource`.

##### `volumeClaimTemplate`

The `volumeClaimTemplate` is a template of a [`PersistentVolumeClaim` volume](https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim),
created for each `PipelineRun` or `TaskRun`. 
When the volume is created from a template in a `PipelineRun` or `TaskRun`
it will be deleted when the `PipelineRun` or `TaskRun` is deleted.

```yaml
workspaces:
  - name: myworkspace
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```

##### `persistentVolumeClaim`

The `persistentVolumeClaim` field references an *existing* [`persistentVolumeClaim` volume](https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim). The example below exposes only the subdirectory `my-subdir` from that `PersistentVolumeClaim`.

```yaml
workspaces:
  - name: myworkspace
    persistentVolumeClaim:
      claimName: mypvc
    subPath: my-subdir
```

#### Using other types of `VolumeSources`

##### `emptyDir`

The `emptyDir` field references an [`emptyDir` volume](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) which holds
a temporary directory that only lives as long as the `TaskRun` that invokes it. `emptyDir` volumes are **not** suitable for sharing data among `Tasks` within a `Pipeline`.
However, they work well for single `TaskRuns` where the data stored in the `emptyDir` needs to be shared among the `Steps` of the `Task` and discarded after execution.

```yaml
workspaces:
  - name: myworkspace
    emptyDir: {}
```

##### `configMap`

The `configMap` field references a [`configMap` volume](https://kubernetes.io/docs/concepts/storage/volumes/#configmap).
Using a `configMap` as a `Workspace` has the following limitations:

- `configMap` volume sources are always mounted as read-only. 
`Steps` cannot write to them and will error out if they try.
- The `configMap` you want to use as a `Workspace` must exist prior to submitting the `TaskRun`.
- `configMaps` are [size-limited to 1MB](https://github.com/kubernetes/kubernetes/blob/f16bfb069a22241a5501f6fe530f5d4e2a82cf0e/pkg/apis/core/validation/validation.go#L5042).

```yaml
workspaces:
  - name: myworkspace
    configMap:
      name: my-configmap
```

##### `secret`

The `secret` field references a [`secret` volume](https://kubernetes.io/docs/concepts/storage/volumes/#secret).
Using a `secret` volume has the following limitations:

- `secret` volume sources are always mounted as read-only. `Steps` cannot write to them and will error out if they try.
- The `secret` you want to use as a `Workspace` must exist prior to submitting the `TaskRun`.
- `secrets` are [size-limited to 1MB](https://github.com/kubernetes/kubernetes/blob/f16bfb069a22241a5501f6fe530f5d4e2a82cf0e/pkg/apis/core/validation/validation.go#L5042).

```yaml
workspaces:
  - name: myworkspace
    secret:
      secretName: my-secret
```

##### `projected`

The `projected` field references a [`projected` volume](https://kubernetes.io/docs/concepts/storage/projected-volumes).
`projected` volume workspaces are a [beta feature](./additional-configs.md#beta-features).
Using a `projected` volume has the following limitations:

- `projected` volume sources are always mounted as read-only. 
`Steps` cannot write to them and will error out if they try.
- The volumes you want to project as a `Workspace` must exist prior to submitting the `TaskRun`.
- The following volumes can be projected: `configMap`, `secret`, `serviceAccountToken` and `downwardApi`.

```yaml
workspaces:
  - name: myworkspace
    projected:
      sources:
        - configMap:
            name: my-configmap
        - secret:
            name: my-secret
```

##### `csi`

The `csi` field references a [`csi` volume](https://kubernetes.io/docs/concepts/storage/volumes/#csi).
`csi` workspaces are a [beta feature](./additional-configs.md#beta-features).
Using a `csi` volume has the following limitations:
<!-- wokeignore:rule=master -->
- `csi` volume sources require a volume driver to use, which must correspond to the value returned by the CSI driver as defined in the [CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md#getplugininfo).

```yaml
workspaces:
  - name: my-credentials
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "vault-database"
```

Example of a CSI workspace using HashiCorp Vault:

- Install the required CSI driver, e.g. [secrets-store-csi-driver](https://github.com/hashicorp/vault-csi-provider#using-yaml)
- Install the `vault` Provider onto the Kubernetes cluster. 
[Reference](https://learn.hashicorp.com/tutorials/vault/kubernetes-raft-deployment-guide)
- Deploy a provider via [example](https://gist.github.com/JeromeJu/cc8e4e758029b6694806604750b8911c)
- Create a SecretProviderClass Provider using the following [yaml](https://github.com/tektoncd/pipeline/blob/main/examples/v1/pipelineruns/no-ci/csi-workspace.yaml#L1-L19)
- Specify the ServiceAccount via vault:

```bash
vault write auth/kubernetes/role/database \
bound_service_account_names=default \
bound_service_account_namespaces=default \
policies=internal-app \
ttl=20m
```

If you need support for a `VolumeSource` type not listed above, [open an issue](https://github.com/tektoncd/pipeline/issues) or
a [pull request](https://github.com/tektoncd/pipeline/blob/main/CONTRIBUTING.md).

## Using Persistent Volumes within a `PipelineRun`

When using a workspace with a [`PersistentVolumeClaim` as `VolumeSource`](#using-persistentvolumeclaims-as-volumesource),
a Kubernetes [Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) is used within the `PipelineRun`.
There are some details that are good to know when using Persistent Volumes within a `PipelineRun`.

### Storage Class

`PersistentVolumeClaims` specify a [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for the underlying Persistent Volume. Storage Classes have specific
characteristics. If a StorageClassName is not specified for your `PersistentVolumeClaim`, the cluster-defined _default_
Storage Class is used. For _regional_ clusters - clusters that typically consist of Nodes located in multiple Availability
Zones - it is important to know whether your Storage Class is available to all Nodes. Default Storage Classes are typically
only available to Nodes within *one* Availability Zone. There is usually an option to use a _regional_ Storage Class,
but they have trade-offs, e.g. 
you need to pay for multiple volumes since they are replicated and your volume may have
substantially higher latency.

### Access Modes

A `PersistentVolumeClaim` specifies an [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes).
Available Access Modes are `ReadWriteOnce`, `ReadWriteMany` and `ReadOnlyMany`. Which Access Mode you can use depends on
the storage solution that you are using.

* `ReadWriteOnce` is the most commonly available Access Mode. A volume with this Access Mode can only be mounted on one
  Node at a time. This can be problematic for a `Pipeline` that has parallel `Tasks` that access the volume concurrently.
  The Affinity Assistant helps with this problem by scheduling all `Tasks` that use the same `PersistentVolumeClaim` to
  the same Node.

* `ReadOnlyMany` is read-only and is less common in a CI/CD pipeline. These volumes often need to be "prepared" with data
  in some way before use. Dynamically provisioned volumes can usually not be used in read-only mode.

* `ReadWriteMany` is the least commonly available Access Mode. If you use this access mode and these volumes are available
  to all Nodes within your cluster, you may want to disable the Affinity Assistant.

### Availability Zones
`Persistent Volumes` are "zonal" in some cloud providers like GKE (i.e. they live within a single Availability Zone and cannot be accessed from a `pod` living in another Availability Zone). When using a workspace backed by a `PersistentVolumeClaim` (typically only available within a Data Center), the `TaskRun` `pods` can be scheduled to any Availability Zone in a regional cluster. 
This results in a potential Availability Zone scheduling conflict when two `pods` requiring the same Volume are scheduled to different Availability Zones (see issues [#3480](https://github.com/tektoncd/pipeline/issues/3480) and [#5275](https://github.com/tektoncd/pipeline/issues/5275)).

To avoid such conflicts in `PipelineRuns`, Tekton provides [Affinity Assistants](affinityassistants.md), which schedule all `TaskRun` `pods` in a `PipelineRun`, or all `TaskRuns` sharing a `PersistentVolumeClaim`, to the same Node, depending on the `coschedule` mode.

Specifically, users of zonal clusters like GKE, or of `PersistentVolumeClaims` with the `ReadWriteOnce` access mode, should set `coschedule: workspaces` to schedule each `TaskRun` `pod` to the same zone as the associated `PersistentVolumeClaim`. In addition, users who want to bind multiple `PersistentVolumeClaims` to a single `TaskRun` should set `coschedule: pipelineruns` to schedule all `TaskRun` `pods` and `PersistentVolumeClaims` in a `PipelineRun` to the same zone.

## More examples

See the following in-depth examples of configuring `Workspaces`:

- [`Workspaces` in a `TaskRun`](../examples/v1/taskruns/workspace.yaml)
- [`Workspaces` in a `PipelineRun`](../examples/v1/pipelineruns/workspaces.yaml)
- [`Workspaces` from a volumeClaimTemplate in a `PipelineRun`](../examples/v1/pipelineruns/workspace-from-volumeclaimtemplate.yaml)
of the same volume for different Tasks  See  a full example of this kind of Pipeline     examples v1 pipelineruns pipelinerun using different subpaths of workspace yaml  which writes data to two adjacent directories on the same Volume   The  subPath  specified in a  Pipeline  will be appended to any  subPath  specified as part of the  PipelineRun  workspace declaration  So a  PipelineRun  declaring a  Workspace  with  subPath  of   foo  for a  Pipeline  who binds it to a  Task  with  subPath  of   bar  will end up mounting the  Volume  s   foo bar  directory        Specifying  Workspace  order in a  Pipeline  and Affinity Assistants  Sharing a  Workspace  between  Tasks  requires you to define the order in which those  Tasks  write to or read from that  Workspace   Use the  runAfter  field in your  Pipeline  definition to define when a  Task  should be executed  For more information  see the   runAfter  documentation  pipelines md using the runafter parameter    When a  PersistentVolumeClaim  is used as volume source for a  Workspace  in a  PipelineRun   an Affinity Assistant will be created  For more information  see the   Affinity Assistants  documentation  affinityassistants md      Note    When  coschedule  is set to  workspaces  or  disabled   it is not allowed to bind multiple   PersistentVolumeClaim  based workspaces   using persistentvolumeclaims as volumesource  to the same  TaskRun  in a  PipelineRun  due to potential Availability Zone conflicts  See more details in  Availability Zones   availability zones         Specifying  Workspaces  in  PipelineRuns   For a  PipelineRun  to execute a  Pipeline  that includes one or more  Workspaces   it needs to bind the  Workspace  names to volumes using its own  workspaces  field  Each entry in this list must correspond to a  Workspace  declaration in the  Pipeline   Each entry in the  workspaces  list must specify the following      name       required    the name of the  Workspace  specified in the  Pipeline  
definition for which a volume is being provided     subPath     optional  a directory on the volume that will store that  Workspace s  data  This directory must exist at the   time the  TaskRun  executes  otherwise the execution will fail   The entry must also include one  VolumeSource   See  Using  VolumeSources  with  Workspaces    specifying volumesources in workspaces  for more information     Note    If the  Workspaces  specified by a  Pipeline  are not provided at runtime by a  PipelineRun   that  PipelineRun  will fail   You can pass in extra  Workspaces  if needed depending on your use cases  An example use case is when your CI system autogenerates  PipelineRuns  and it has  Workspaces  it wants to provide to all  PipelineRuns   Because you can pass in extra  Workspaces   you don t have to go through the complexity of checking each  Pipeline  and providing only the required  Workspaces         Example  PipelineRun  definition using  Workspaces   In the example below  a  volumeClaimTemplate  is provided for how a  PersistentVolumeClaim  should be created for a workspace named  myworkspace  declared in a  Pipeline   When using  volumeClaimTemplate  a new  PersistentVolumeClaim  is created for  each  PipelineRun  and it allows the user to specify e g  size and StorageClass for the volume      yaml apiVersion  tekton dev v1beta1 kind  PipelineRun metadata    generateName  example pipelinerun  spec    pipelineRef      name  example pipeline   workspaces        name  myworkspace   this workspace name must be declared in the Pipeline       volumeClaimTemplate          spec            accessModes                ReadWriteOnce   access mode may affect how you can use this volume in parallel tasks           resources              requests                storage  1Gi      For examples of using other types of volume sources  see  Specifying  VolumeSources  in  Workspaces    specifying volumesources in workspaces   For a more in depth example  see the   Workspaces  in  
PipelineRun      examples v1 pipelineruns workspaces yaml  YAML sample       Specifying  VolumeSources  in  Workspaces   You can only use a single type of  VolumeSource  per  Workspace  entry  The configuration options differ for each type   Workspaces  support the following fields        Using  PersistentVolumeClaims  as  VolumeSource    PersistentVolumeClaim  volumes are a good choice for sharing data among  Tasks  within a  Pipeline   Beware that the  access mode  https   kubernetes io docs concepts storage persistent volumes  access modes  configured for the  PersistentVolumeClaim  effects how you can use the volume for parallel  Tasks  in a  Pipeline   See  Specifying  workspace  order in a  Pipeline  and Affinity Assistants   specifying workspace order in a pipeline and affinity assistants  for more information about this  There are two ways of using  PersistentVolumeClaims  as a  VolumeSource           volumeClaimTemplate   The  volumeClaimTemplate  is a template of a   PersistentVolumeClaim  volume  https   kubernetes io docs concepts storage volumes  persistentvolumeclaim   created for each  PipelineRun  or  TaskRun   When the volume is created from a template in a  PipelineRun  or  TaskRun   it will be deleted when the  PipelineRun  or  TaskRun  is deleted      yaml workspaces      name  myworkspace     volumeClaimTemplate        spec          accessModes              ReadWriteOnce         resources            requests              storage  1Gi             persistentVolumeClaim   The  persistentVolumeClaim  field references an  existing    persistentVolumeClaim  volume  https   kubernetes io docs concepts storage volumes  persistentvolumeclaim   The example exposes only the subdirectory  my subdir  from that  PersistentVolumeClaim      yaml workspaces      name  myworkspace     persistentVolumeClaim        claimName  mypvc     subPath  my subdir           Using other types of  VolumeSources          emptyDir   The  emptyDir  field references an   emptyDir 
 volume  https   kubernetes io docs concepts storage volumes  emptydir  which holds a temporary directory that only lives as long as the  TaskRun  that invokes it   emptyDir  volumes are   not   suitable for sharing data among  Tasks  within a  Pipeline   However  they work well for single  TaskRuns  where the data stored in the  emptyDir  needs to be shared among the  Steps  of the  Task  and discarded after execution      yaml workspaces      name  myworkspace     emptyDir                 configMap   The  configMap  field references a   configMap  volume  https   kubernetes io docs concepts storage volumes  configmap   Using a  configMap  as a  Workspace  has the following limitations      configMap  volume sources are always mounted as read only   Steps  cannot write to them and will error out if they try    The  configMap  you want to use as a  Workspace  must exist prior to submitting the  TaskRun      configMaps  are  size limited to 1MB  https   github com kubernetes kubernetes blob f16bfb069a22241a5501f6fe530f5d4e2a82cf0e pkg apis core validation validation go L5042       yaml workspaces      name  myworkspace     configmap        name  my configmap             secret   The  secret  field references a   secret  volume  https   kubernetes io docs concepts storage volumes  secret   Using a  secret  volume has the following limitations      secret  volume sources are always mounted as read only   Steps  cannot write to them and will error out if they try    The  secret  you want to use as a  Workspace  must exist prior to submitting the  TaskRun      secret  are  size limited to 1MB  https   github com kubernetes kubernetes blob f16bfb069a22241a5501f6fe530f5d4e2a82cf0e pkg apis core validation validation go L5042       yaml workspaces      name  myworkspace     secret        secretName  my secret             projected   The  projected  field references a   projected  volume  https   kubernetes io docs concepts storage projected volumes    projected  volume 
workspaces are a  beta feature    additional configs md beta features   Using a  projected  volume has the following limitations      projected  volume sources are always mounted as read only   Steps  cannot write to them and will error out if they try    The volumes you want to project as a  Workspace  must exist prior to submitting the  TaskRun     The following volumes can be projected   configMap    secret    serviceAccountToken  and  downwardApi      yaml workspaces      name  myworkspace     projected        sources            configMap              name  my configmap           secret              name  my secret             csi   The  csi  field references a   csi  volume  https   kubernetes io docs concepts storage volumes  csi    csi  workspaces are a  beta feature    additional configs md beta features   Using a  csi  volume has the following limitations       wokeignore rule master         csi  volume sources require a volume driver to use  which must correspond to the value by the CSI driver as defined in the  CSI spec  https   github com container storage interface spec blob master spec md getplugininfo       yaml workspaces      name  my credentials     csi        driver  secrets store csi k8s io       readOnly  true       volumeAttributes          secretProviderClass   vault database       Example of CSI workspace using Hashicorp Vault     Install the required csi driver  eg   secrets store csi driver  https   github com hashicorp vault csi provider using yaml    Install the  vault  Provider onto the kubernetes cluster   Reference  https   learn hashicorp com tutorials vault kubernetes raft deployment guide    Deploy a provider via  example  https   gist github com JeromeJu cc8e4e758029b6694806604750b8911c    Create a SecretProviderClass Provider using the following  yaml  https   github com tektoncd pipeline blob main examples v1 pipelineruns no ci csi workspace yaml L1 L19    Specify the ServiceAccount via vault       vault write auth kubernetes 
role database   bound service account names default   bound service account namespaces default   policies internal app   ttl 20m      If you need support for a  VolumeSource  type not listed above   open an issue  https   github com tektoncd pipeline issues  or a  pull request  https   github com tektoncd pipeline blob main CONTRIBUTING md       Using Persistent Volumes within a  PipelineRun   When using a workspace with a   PersistentVolumeClaim  as  VolumeSource    using persistentvolumeclaims as volumesource   a Kubernetes  Persistent Volumes  https   kubernetes io docs concepts storage persistent volumes   is used within the  PipelineRun   There are some details that are good to know when using Persistent Volumes within a  PipelineRun        Storage Class   PersistentVolumeClaims  specify a  Storage Class  https   kubernetes io docs concepts storage storage classes   for the underlying Persistent Volume  Storage Classes have specific characteristics  If a StorageClassName is not specified for your  PersistentVolumeClaim   the cluster defined  default  Storage Class is used  For  regional  clusters   clusters that typically consist of Nodes located in multiple Availability Zones   it is important to know whether your Storage Class is available to all Nodes  Default Storage Classes are typically only available to Nodes within  one  Availability Zone  There is usually an option to use a  regional  Storage Class  but they have trade offs  e g  you need to pay for multiple volumes since they are replicated and your volume may have  substantially higher latency       Access Modes  A  PersistentVolumeClaim  specifies an  Access Mode  https   kubernetes io docs concepts storage persistent volumes  access modes   Available Access Modes are  ReadWriteOnce    ReadWriteMany  and  ReadOnlyMany   What Access Mode you can use depend on the storage solution that you are using      ReadWriteOnce  is the most commonly available Access Mode  A volume with this Access Mode can 
only be mounted on one   Node at a time  This can be problematic for a  Pipeline  that has parallel  Tasks  that access the volume concurrently    The Affinity Assistant helps with this problem by scheduling all  Tasks  that use the same  PersistentVolumeClaim  to   the same Node        ReadOnlyMany  is read only and is less common in a CI CD pipeline  These volumes often need to be  prepared  with data   in some way before use  Dynamically provided volumes can usually not be used in read only mode      ReadWriteMany  is the least commonly available Access Mode  If you use this access mode and these volumes are available   to all Nodes within your cluster  you may want to disable the Affinity Assistant       Availability Zones  Persistent Volumes  are  zonal  in some cloud providers like GKE  i e  they live within a single Availability Zone and cannot be accessed from a  pod  living in another Availability Zone   When using a workspace backed by a  PersistentVolumeClaim   typically only available within a Data Center   the  TaskRun   pods  can be scheduled to any Availability Zone in a regional cluster  This results in potential Availability Zone scheduling conflict when two  pods  requiring the same Volume are scheduled to different Availability Zones  see issue   3480  https   github com tektoncd pipeline issues 3480  and   5275  https   github com tektoncd pipeline issues 5275     To avoid such conflict in  PipelineRuns   Tekton provides  Affinity Assistants  affinityassistants md  which schedule all  TaskRun   pods  or all  TaskRun  sharing a  PersistentVolumeClaim  in a  PipelineRun  to the same Node depending on the  coschedule  mode   Specifically  for users use zonal clusters like GKE or use  PersistentVolumeClaim  in ReadWriteOnce access modes  please set  coschedule  workspaces  to schedule each of the  TaskRun   pod  to the same zone as the associated  PersistentVolumeClaim   In addition  for users want to bind multiple  PersistentVolumeClaims  to a 
single  TaskRun   please set  coschedule  pipelineruns  to schedule all  TaskRun   pods  and  PersistentVolumeClaim  in a  PipelineRun  to the same zone      More examples  See the following in depth examples of configuring  Workspaces        Workspaces  in a  TaskRun      examples v1 taskruns workspace yaml      Workspaces  in a  PipelineRun      examples v1 pipelineruns workspaces yaml      Workspaces  from a volumeClaimTemplate in a  PipelineRun      examples v1 pipelineruns workspace from volumeclaimtemplate yaml "}
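The default `TaskRun` workspace binding described earlier is configured under the `default-task-run-workspace-binding` key of the `config-defaults` `ConfigMap`. A minimal sketch of what that might look like (the `emptyDir` default and the `tekton-pipelines` namespace are illustrative assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines # assumes the standard install namespace
data:
  # any required Workspace a TaskRun does not bind falls back to this binding
  default-task-run-workspace-binding: |
    emptyDir: {}
```

With this in place, a `TaskRun` that omits a required `Workspace` still runs, backed by a throwaway `emptyDir` volume.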
{"questions":"tekton Pipelines in Pipelines Pipelines in Pipelines weight 406","answers":"<!--\n---\nlinkTitle: \"Pipelines in Pipelines\"\nweight: 406\n---\n-->\n\n# Pipelines in Pipelines\n\n- [Overview](#overview)\n- [Specifying `pipelineRef` in `pipelineTasks`](#specifying-pipelineref-in-pipelinetasks)\n- [Specifying `pipelineSpec` in `pipelineTasks`](#specifying-pipelinespec-in-pipelinetasks)\n- [Specifying `Parameters`](#specifying-parameters)\n\n## Overview\n\nPipelines in Pipelines is a mechanism to define and execute a Pipeline within another Pipeline, alongside Tasks and Custom Tasks. For a more in-depth background and inspiration, refer to the proposal [TEP-0056](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0056-pipelines-in-pipelines.md \"Proposal\").\n\n> :seedling: **Pipelines in Pipelines is an [alpha](additional-configs.md#alpha-features) feature.**\n> The `enable-api-fields` feature flag must be set to `\"alpha\"` to specify `pipelineRef` or `pipelineSpec` in a `pipelineTask`.\n> This feature is in Preview Only mode and not yet supported\/implemented.\n\n## Specifying `pipelineRef` in `pipelineTasks`\n\nPipelines in Pipelines are defined at authoring time by specifying either the `pipelineRef` or `pipelineSpec` field on a `PipelineTask`, alongside `taskRef` and `taskSpec`.\n\nFor example, the following shows a Pipeline named `security-scans` which is run within a Pipeline named `clone-scan-notify` using `pipelineRef`:\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: Pipeline\nmetadata:\n  name: security-scans\nspec:\n  tasks:\n    - name: scorecards\n      taskRef:\n        name: scorecards\n    - name: codeql\n      taskRef:\n        name: codeql\n---\napiVersion: tekton.dev\/v1\nkind: Pipeline\nmetadata:\n  name: clone-scan-notify\nspec:\n  tasks:\n    - name: git-clone\n      taskRef:\n        name: git-clone\n    - name: security-scans\n      pipelineRef:\n        name: security-scans\n    - name: notification\n      taskRef:\n        name: notification\n```\n\n## Specifying `pipelineSpec` in
`pipelineTasks`\n\nThe `pipelineRef` [example](#specifying-pipelineref-in-pipelinetasks) can be modified to use `pipelineSpec` instead of `pipelineRef`, embedding the Pipeline specification directly:\n```yaml\napiVersion: tekton.dev\/v1\nkind: Pipeline\nmetadata:\n  name: clone-scan-notify\nspec:\n  tasks:\n    - name: git-clone\n      taskRef:\n        name: git-clone\n    - name: security-scans\n      pipelineSpec:\n        tasks:\n          - name: scorecards\n            taskRef:\n              name: scorecards\n          - name: codeql\n            taskRef:\n              name: codeql\n    - name: notification\n      taskRef:\n        name: notification\n```\n\n## Specifying `Parameters`\n\nPipelines in Pipelines consume `Parameters` in the same way as Tasks in Pipelines:\n```yaml\napiVersion: tekton.dev\/v1\nkind: Pipeline\nmetadata:\n  name: clone-scan-notify\nspec:\n  params:\n    - name: repo\n  tasks:\n    - name: git-clone\n      params:\n        - name: repo\n          value: $(params.repo)\n      taskRef:\n        name: git-clone\n    - name: security-scans\n      params:\n        - name: repo\n          value: $(params.repo)\n      pipelineRef:\n        name: security-scans\n    - name: notification\n      taskRef:\n        name: notification\n```","site":"tekton"}
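A `PipelineRun` that triggers the `clone-scan-notify` Pipeline shown in the record above would supply the `repo` parameter in the usual way. A minimal sketch (the `generateName` prefix and repository URL are illustrative assumptions, not taken from the source):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: clone-scan-notify-run- # hypothetical name prefix
spec:
  pipelineRef:
    name: clone-scan-notify
  params:
    - name: repo
      value: https://github.com/example/repo.git # hypothetical repository
```

The `repo` value flows from the `PipelineRun` into the outer Pipeline, which forwards it to both the `git-clone` Task and the nested `security-scans` Pipeline via `$(params.repo)`.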
{"questions":"tekton CustomRuns CustomRuns weight 206","answers":"<!--\n---\nlinkTitle: \"CustomRuns\"\nweight: 206\n---\n-->\n\n# CustomRuns\n\n- [Overview](#overview)\n- [Configuring a `CustomRun`](#configuring-a-customrun)\n  - [Specifying the target Custom Task](#specifying-the-target-custom-task)\n  - [Cancellation](#cancellation)\n  - [Specifying `Timeout`](#specifying-timeout)\n  - [Specifying `Retries`](#specifying-retries)\n  - [Specifying Parameters](#specifying-parameters)\n  - [Specifying Workspaces](#specifying-workspaces)\n  - [Specifying Service Account](#specifying-a-serviceaccount)\n- [Monitoring execution status](#monitoring-execution-status)\n  - [Status Reporting](#status-reporting)\n  - [Monitoring `Results`](#monitoring-results)\n- [Code examples](#code-examples)\n  - [Example `CustomRun` with a referenced custom task](#example-customrun-with-a-referenced-custom-task)\n  - [Example `CustomRun` with an unnamed custom task](#example-customrun-with-an-unnamed-custom-task)\n  - [Example of specifying parameters](#example-of-specifying-parameters)\n\n# Overview\n\n*`v1beta1.CustomRun` has replaced `v1alpha1.Run` for executing `Custom Tasks`. Please refer to the [migration doc](migrating-v1alpha1.Run-to-v1beta1.CustomRun.md) for details\non updating `v1alpha1.Run` to `v1beta1.CustomRun` before upgrading to a release that does not support `v1alpha1.Run`.*\n\nA `CustomRun` allows you to instantiate and execute a [Custom\nTask](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0002-custom-tasks.md),\nwhich can be implemented by a custom task controller running on-cluster. Custom\nTasks can implement behavior that's independent of how Tekton TaskRun is implemented.\n\nIn order for a `CustomRun` to actually execute, there must be a custom task\ncontroller running on the cluster that is responsible for watching and updating\n`CustomRun`s which reference their type. 
If no such controller is running, `CustomRun`s\nwill have no `.status` value and no further action will be taken.\n\n## Configuring a `CustomRun`\n\nA `CustomRun` definition supports the following fields:\n\n- Required:\n  - [`apiVersion`][kubernetes-overview] - Specifies the API version. Currently, only\n    `tekton.dev\/v1beta1` is supported.\n  - [`kind`][kubernetes-overview] - Identifies this resource object as a `CustomRun` object.\n  - [`metadata`][kubernetes-overview] - Specifies the metadata that uniquely identifies the\n    `CustomRun`, such as a `name`.\n  - [`spec`][kubernetes-overview] - Specifies the configuration for the `CustomRun`.\n    - [`customRef`](#specifying-the-target-custom-task-with-customref) - Specifies the type and\n      (optionally) name of the custom task type to execute.\n    - [`customSpec`](#specifying-the-target-custom-task-by-embedding-its-customspec) - Embeds the custom task resource spec\n      directly in a `CustomRun`.\n- Optional:\n  - [`timeout`](#specifying-timeout) - Specifies the maximum duration of a single execution\n    of a `CustomRun`.\n  - [`retries`](#specifying-retries) - Specifies the number of retries to execute upon `CustomRun` failure.\n  - [`params`](#specifying-parameters) - Specifies the desired execution\n    parameters for the custom task.\n  - [`serviceAccountName`](#specifying-a-serviceaccount) - Specifies a `ServiceAccount`\n    object for executing the `CustomRun`.\n  - [`workspaces`](#specifying-workspaces) - Specifies the physical volumes to use for the\n    [`Workspaces`](workspaces.md) required by a custom task.\n\n[kubernetes-overview]:\n  https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/kubernetes-objects\/#required-fields\n\n### Specifying the target Custom Task\n\nA custom task resource's `CustomSpec` may be directly embedded in the `CustomRun`, or it may\nbe referred to by a `CustomRef`, but not both at the same time.\n\n1. 
[Specifying the target Custom Task with customRef](#specifying-the-target-custom-task-with-customref)\n   Referring to a custom task (i.e. `CustomRef`) promotes reuse of custom task definitions.\n\n2. [Specifying the target Custom Task by embedding its spec](#specifying-the-target-custom-task-by-embedding-its-customspec)\n   Embedding a custom task (i.e. `CustomSpec`) helps in avoiding name collisions with other users within the same namespace.\n   Additionally, in a pipeline with multiple embedded custom tasks, the details of the entire pipeline can be fetched in a\n   single API request.\n\n#### Specifying the target Custom Task with customRef\n\nTo specify the custom task type you want to execute in your `CustomRun`, use the\n`customRef` field as shown below:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: CustomRun\nmetadata:\n  name: my-example-run\nspec:\n  customRef:\n    apiVersion: example.dev\/v1beta1\n    kind: MyCustomKind\n  params:\n  - name: duration\n    value: 10s\n```\n\nWhen this `CustomRun` is created, the Custom Task controller responsible for\nreconciling objects of kind \"MyCustomKind\" in the \"example.dev\" API group\nwill execute it based on the input params.\n\nYou can also specify the `name` and optional `namespace` (default is `default`)\nof a custom task resource object previously defined in the cluster.\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: CustomRun\nmetadata:\n  name: my-example-run\nspec:\n  customRef:\n    apiVersion: example.dev\/v1beta1\n    kind: Example\n    name: an-existing-example-task\n```\n\nIf the `customRef` specifies a name, the custom task controller should look up the\n`Example` resource with that name, and use that object to configure the\nexecution.\n\nIf the `customRef` does not specify a name, the custom task controller might support\nsome default behavior for executing unnamed tasks.\n\nIn either case, if the named resource cannot be found, or if unnamed tasks are\nnot supported, the custom 
task controller should update the `CustomRun`'s status to\nindicate the error.\n\n#### Specifying the target Custom Task by embedding its customSpec\n\nTo specify the custom task spec, it can be embedded directly into a\n`CustomRun`'s spec as shown below:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: CustomRun\nmetadata:\n  name: embedded-run\nspec:\n  customSpec:\n    apiVersion: example.dev\/v1beta1\n    kind: Example\n    spec:\n      field1: value1\n      field2: value2\n```\n\nWhen this `CustomRun` is created, the custom task controller responsible for\nreconciling objects of kind `Example` in the `example.dev` API group will\nexecute it.\n\n#### Developer guide for custom controllers supporting `customSpec`\n\n1. A custom controller may or may not support a `Spec`. In cases where it is\n   not supported, the custom controller should respond with a proper validation\n   error.\n\n2. Validation of the fields of the custom task is delegated to the custom\n   task controller. It is recommended to implement validations asynchronously\n   (i.e. at reconcile time), rather than as part of the webhook. Using a webhook\n   for validation is problematic because it is not possible to filter custom\n   task resource objects before the validation step; as a result, each custom task\n   resource has to undergo validation by all the installed custom task\n   controllers.\n\n3. A custom task may have an empty spec, but cannot have an empty\n   `apiVersion` and `kind`. Custom task controllers should handle\n   an empty spec, either with a default behaviour or, in case no default\n   behaviour is supported, by updating the `CustomRun`'s status with an\n   appropriate validation error.\n\n### Cancellation\n\nThe custom task is responsible for implementing `cancellation` to support `PipelineRun`-level `timeouts` and `cancellation`. 
If the Custom Task implementor does not support cancellation via `.spec.status`, the `Pipeline` **cannot** time out within the specified interval\/duration and **cannot** be cancelled as expected upon request.\n\nThe Pipeline Controller sets the `spec.Status` and `spec.StatusMessage` to signal `CustomRuns` about the `Cancellation`, while the `CustomRun` controller updates its `status.conditions` as follows once it notices the change on `spec.Status`:\n\n```yaml\nstatus:\n  conditions:\n  - type: Succeeded\n    status: False\n    reason: CustomRunCancelled\n```\n\n### Specifying `Timeout`\n\nA custom task specification can be created with `Timeout` as follows:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: CustomRun\nmetadata:\n  generateName: simpleexample\nspec:\n  timeout: 10s # set timeouts here.\n  params:\n    - name: searching\n      value: the purpose of my existence\n  customRef:\n    apiVersion: custom.tekton.dev\/v1alpha1\n    kind: Example\n    name: exampleName\n```\n\nSupporting timeouts is optional but recommended.\n\n#### Developer guide for custom controllers supporting `Timeout`\n\n1. Tekton controllers will never directly update the status of the `CustomRun`;\n   it is the responsibility of the custom task controller to support timeout.\n   If timeout is not supported, it's the responsibility of the custom task\n   controller to reject `CustomRun`s that specify a timeout value.\n2. When `CustomRun.Spec.Status` is updated to `RunCancelled`, the custom task controller\n   MUST cancel the `CustomRun`. Otherwise, pipeline-level `Timeout` and\n   `Cancellation` won't work for the Custom Task.\n3. A Custom Task controller can watch for this status update\n   (i.e. `CustomRun.Spec.Status == RunCancelled`) and\/or `CustomRun.HasTimedOut()`\n   and take any corresponding actions (i.e. a clean-up, e.g. cancel a cloud build,\n   stop the waiting timer, tear down the approval listener).\n4. 
Once resources or timers are cleaned up, it is **REQUIRED** to set a\n   `Succeeded\/False` condition on the `CustomRun`'s `status`, with an optional\n   `Reason` of `CustomRunTimedOut`.\n5. `Timeout` is specified per `retry attempt` rather than for all `retries` combined.\n\n### Specifying `Retries`\n\nA custom task specification can be created with `Retries` as follows:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: CustomRun\nmetadata:\n  generateName: simpleexample\nspec:\n  retries: 3 # set retries\n  params:\n    - name: searching\n      value: the purpose of my existence\n  customRef:\n    apiVersion: custom.tekton.dev\/v1alpha1\n    kind: Example\n    name: exampleName\n```\n\nSupporting retries is optional but recommended.\n\n#### Developer guide for custom controllers supporting `retries`\n\n1. The Tekton controller only depends on `ConditionSucceeded` to determine the\n   termination status of a `CustomRun`; therefore, Custom Task implementors\n   MUST NOT set `ConditionSucceeded` to `False` until all retries are exhausted.\n2. Custom tasks that do not wish to support retries can simply ignore the field.\n3. It is recommended that the custom task update the `RetriesStatus` field\n   of the `CustomRun` on each retry it performs.\n4. The Tekton controller does not validate that the number of entries in `RetriesStatus`\n   matches the specified retries count.\n\n### Specifying `Parameters`\n\nIf a custom task supports [`parameters`](tasks.md#parameters), you can use the\n`params` field in the `CustomRun` to specify their values:\n\n```yaml\nspec:\n  params:\n    - name: my-param\n      value: chicken\n```\n\nIf the custom task controller knows how to interpret the parameter value, it\nwill do so. 
It might enforce that some parameter values must be specified, or\nreject unknown parameter values.\n\n### Specifying workspaces\n\nIf the custom task supports it, you can provide [`Workspaces`](workspaces.md) to share data with the custom task.\n\n```yaml\nspec:\n  workspaces:\n    - name: my-workspace\n      emptyDir: {}\n```\n\nConsult the documentation of the custom task that you are using to determine whether it supports workspaces and how to name them.\n\n### Specifying a ServiceAccount\n\nIf the custom task supports it, you can execute the `CustomRun` with a specific set of credentials by\nspecifying a `ServiceAccount` object name in the `serviceAccountName` field in your `CustomRun`\ndefinition. If you do not explicitly specify this, the `CustomRun` executes with the service account\nspecified in the `configmap-defaults` `ConfigMap`. If this default is not specified, `CustomRuns`\nwill execute with the [`default` service account](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/#use-the-default-service-account-to-access-the-api-server)\nset for the target [`namespace`](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/namespaces\/).\n\n```yaml\nspec:\n  serviceAccountName: my-account\n```\n\nConsult the documentation of the custom task that you are using to determine whether it supports a service account name.\n\n## Monitoring execution status\n\nAs your `CustomRun` executes, its `status` field accumulates information on the\nexecution of the `CustomRun`. This information includes the current state of the\n`CustomRun`, start and completion times, and any output `results` reported by\nthe custom task controller.\n\n### Status Reporting\n\nWhen the `CustomRun<Example>` is validated and created, the Custom Task controller will be notified and is expected to begin doing some operation. 
When the operation begins, the controller **MUST** update the `CustomRun`'s `.status.conditions` to report that it's ongoing:\n\n```yaml\nstatus:\n  conditions:\n  - type: Succeeded\n    status: Unknown\n```\n\nWhen the operation completes, if it was successful, the condition **MUST** report `status: True`, and optionally a brief `reason` and human-readable `message`:\n\n```yaml\nstatus:\n  conditions:\n  - type: Succeeded\n    status: True\n    reason: ExampleComplete # optional\n    message: Yay, good times # optional\n```\n\nIf the operation was _unsuccessful_, the condition **MUST** report `status: False`, and optionally a `reason` and human-readable `message`:\n\n```yaml\nstatus:\n  conditions:\n  - type: Succeeded\n    status: False\n    reason: ExampleFailed # optional\n    message: Oh no bad times # optional\n```\n\nIf the `CustomRun` was _cancelled_, the condition **MUST** report `status: False`, `reason: CustomRunCancelled`, and optionally a human-readable `message`:\n\n```yaml\nstatus:\n  conditions:\n  - type: Succeeded\n    status: False\n    reason: CustomRunCancelled\n    message: Oh it's cancelled # optional\n```\n\nThe following table shows the overall status of a `CustomRun`:\n\n`status`|Description\n:-------|:-----------\n<unset>|The custom task controller has not taken any action on the `CustomRun`.\nUnknown|The custom task controller has started execution and the `CustomRun` is ongoing.\nTrue|The `CustomRun` completed successfully.\nFalse|The `CustomRun` completed unsuccessfully, and all retries were exhausted.\n\nThe `CustomRun` type's `.status` will also allow controllers to report other fields, such as `startTime`, `completionTime`, `results` (see below), and arbitrary context-dependent fields the Custom Task author wants to report. 
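Read together, these transitions form a small state mapping. The following is a minimal sketch of how a controller might derive the `Succeeded` condition's `status` value; it is hypothetical Go written for illustration (`succeededStatus` and its boolean inputs are invented, not part of any Tekton API):

```go
package main

import "fmt"

// succeededStatus maps a controller's view of an operation to the value of
// the Succeeded condition's status field:
// not started -> "" (<unset>), running -> Unknown, done and ok -> True,
// otherwise -> False (reported only once retries are exhausted).
func succeededStatus(started, done, ok bool) string {
	switch {
	case !started:
		return "" // <unset>: the controller has not taken any action yet
	case !done:
		return "Unknown" // execution is ongoing
	case ok:
		return "True" // completed successfully
	default:
		return "False" // completed unsuccessfully
	}
}

func main() {
	fmt.Println(succeededStatus(true, false, false)) // Unknown
	fmt.Println(succeededStatus(true, true, true))   // True
}
```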
A fully-specified `CustomRun` status might look like:\n\n```yaml\nstatus:\n  conditions:\n  - type: Succeeded\n    status: True\n    reason: ExampleComplete\n    message: Yay, good times\n  completionTime: \"2020-06-18T11:55:01Z\"\n  startTime: \"2020-06-18T11:55:01Z\"\n  results:\n  - name: first-name\n    value: Bob\n  - name: last-name\n    value: Smith\n  arbitraryField: hello world\n  arbitraryStructuredField:\n    listOfThings: [\"a\", \"b\", \"c\"]\n```\n\n### Monitoring `Results`\n\nAfter the `CustomRun` completes, the custom task controller can report output\nvalues in the `results` field:\n\n```yaml\nresults:\n- name: my-result\n  value: chicken\n```\n\n## Code examples\n\nTo better understand `CustomRuns`, study the following code examples:\n\n- [Example `CustomRun` with a referenced custom task](#example-customrun-with-a-referenced-custom-task)\n- [Example `CustomRun` with an unnamed custom task](#example-customrun-with-an-unnamed-custom-task)\n- [Example of specifying parameters](#example-of-specifying-parameters)\n\n### Example `CustomRun` with a referenced custom task\n\nIn this example, a `CustomRun` named `my-example-run` invokes a custom task of the `v1alpha1`\nversion of the `Example` kind in the `example.dev` API group, with the name\n`my-example-task`.\n\nIn this case the custom task controller is expected to look up the `Example`\nresource named `my-example-task` and to use that configuration to configure the\nexecution of the `CustomRun`.\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: CustomRun\nmetadata:\n  name: my-example-run\nspec:\n  customRef:\n    apiVersion: example.dev\/v1alpha1\n    kind: Example\n    name: my-example-task\n```\n\n### Example `CustomRun` with an unnamed custom task\n\nIn this example, a `CustomRun` named `my-example-run` invokes a custom task of the `v1alpha1`\nversion of the `Example` kind in the `example.dev` API group, without a specified name.\n\nIn this case the custom task controller is expected to provide some 
default\nbehavior when the referenced task is unnamed.\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: CustomRun\nmetadata:\n  name: my-example-run\nspec:\n  customRef:\n    apiVersion: example.dev\/v1alpha1\n    kind: Example\n```\n\n### Example of specifying parameters\n\nIn this example, a `CustomRun` named `my-example-run` invokes a custom task, and\nspecifies some parameter values to further configure the execution's behavior.\n\nIn this case the custom task controller is expected to validate and interpret\nthese parameter values and use them to configure the `CustomRun`'s execution.\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: CustomRun\nmetadata:\n  name: my-example-run\nspec:\n  customRef:\n    apiVersion: example.dev\/v1alpha1\n    kind: Example\n    name: my-example-task\n  params:\n    - name: my-first-param\n      value: i'm number one\n    - name: my-second-param\n      value: close second\n```","site":"tekton"}
{"questions":"tekton Windows weight 306 Windows","answers":"<!--\n---\nlinkTitle: \"Windows\"\nweight: 306\n---\n-->\n\n# Windows\n\n- [Overview](#overview)\n- [Scheduling Tasks on Windows Nodes](#scheduling-tasks-on-windows-nodes)\n  - [Node Selectors](#node-selectors)\n  - [Node Affinity](#node-affinity)\n  \n## Overview\n\nIf you need a Windows environment as part of a Tekton Task or Pipeline, you can include Windows container images in your Task steps. Because Windows containers can only run on a Windows host, you will need to have Windows nodes available in your Kubernetes cluster. You should read [Windows support in Kubernetes](https:\/\/kubernetes.io\/docs\/setup\/production-environment\/windows\/intro-windows-in-kubernetes\/) to understand the functionality and limitations of Kubernetes on Windows.\n\nSome important things to note about **Windows containers and Kubernetes**:\n\n- Windows containers cannot run on a Linux host.\n- Kubernetes does not support *Windows-only* clusters. The Kubernetes control plane components can only run on Linux.\n- Kubernetes currently only supports process-isolated containers, which means a container's base image OS version **must** match that of the host OS. See [Windows container version compatibility](https:\/\/docs.microsoft.com\/en-us\/virtualization\/windowscontainers\/deploy-containers\/version-compatibility?tabs=windows-server-20H2%2Cwindows-10-20H2) for more information.\n- A Kubernetes Pod cannot contain both Windows and Linux containers.\n\nSome important things to note about **Windows support in Tekton**:\n\n- A Task can only have Windows or Linux containers, but not both.\n- A Pipeline can contain both Windows and Linux Tasks.\n- In a mixed-OS cluster, TaskRuns and PipelineRuns will need to be scheduled to the correct node using one of the methods [described below](#scheduling-tasks-on-windows-nodes). 
\n- Tekton's controller components can only run on Linux nodes.\n\n## Scheduling Tasks on Windows Nodes\n\nIn order to ensure that Tasks are scheduled to a node with the correct host OS, you will need to update the TaskRun or PipelineRun spec with rules to define this behaviour. This can be done in a couple of different ways, but the simplest option is to specify a node selector. \n\n### Node Selectors\n\nNode selectors are the simplest way to schedule pods to a Windows or Linux node. By default, Kubernetes nodes include a label `kubernetes.io\/os` to identify the host OS. The Kubelet populates this with `runtime.GOOS` as defined by Go. Use `spec.podTemplate.nodeSelector` (or `spec.taskRunSpecs[i].podTemplate.nodeSelector` in a PipelineRun) to schedule Tasks to a node with a specific label and value.\n\nFor example:\n\n``` yaml\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  name: windows-taskrun\nspec:\n  taskRef:\n    name: windows-task\n  podTemplate:\n    nodeSelector:\n      kubernetes.io\/os: windows\n---\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  name: linux-taskrun\nspec:\n  taskRef:\n    name: linux-task\n  podTemplate:\n    nodeSelector:\n      kubernetes.io\/os: linux\n```\n\n### Node Affinity\n\nNode affinity can be used as an alternative method of defining the OS requirement of a Task. These rules can be set under `spec.podTemplate.affinity.nodeAffinity` in a TaskRun definition. 
The example below produces the same result as the previous example which used node selectors.\n\nFor example:\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  name: windows-taskrun\nspec:\n  taskRef:\n    name: windows-task\n  podTemplate:\n    affinity:\n        nodeAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            nodeSelectorTerms:\n              - matchExpressions:\n                - key: kubernetes.io\/os\n                  operator: In\n                  values:\n                  - windows\n---\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  name: linux-taskrun\nspec:\n  taskRef:\n    name: linux-task\n  podTemplate:\n    affinity:\n        nodeAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            nodeSelectorTerms:\n              - matchExpressions:\n                - key: kubernetes.io\/os\n                  operator: In\n                  values:\n                  - linux\n```","site":"tekton","answers_cleaned":"         linkTitle   Windows  weight  306            Windows     Overview   overview     Scheduling Tasks on Windows Nodes   scheduling tasks on windows nodes       Node Selectors   node selectors       Node Affinity   node affinity        Overview  If you need a Windows environment as part of a Tekton Task or Pipeline  you can include Windows container images in your Task steps  Because Windows containers can only run on a Windows host  you will need to have Windows nodes available in your Kubernetes cluster  You should read  Windows support in Kubernetes  https   kubernetes io docs setup production environment windows intro windows in kubernetes   to understand the functionality and limitations of Kubernetes on Windows   Some important things to note about   Windows containers and Kubernetes       Windows containers cannot run on a Linux host    Kubernetes does not support  Windows only  clusters  The Kubernetes control plane components can only run on Linux 
Kubernetes currently only supports process-isolated containers, which means that a container's base image OS version **must** match that of the host OS. See [Windows container version compatibility](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-20H2%2Cwindows-10-20H2) for more information. A Kubernetes Pod cannot contain both Windows and Linux containers.

Some important things to note about Windows support in Tekton:

- A Task can only have Windows or Linux containers, but not both
- A Pipeline can contain both Windows and Linux Tasks
- In a mixed-OS cluster, TaskRuns and PipelineRuns will need to be scheduled to the correct node using one of the [methods described below](#scheduling-tasks-on-windows-nodes)
- Tekton's controller components can only run on Linux nodes

## Scheduling Tasks on Windows Nodes

In order to ensure that Tasks are scheduled to a node with the correct host OS, you will need to update the TaskRun or PipelineRun spec with rules to define this behaviour. This can be done in a couple of different ways, but the simplest option is to specify a node selector.

### Node Selectors

Node selectors are the simplest way to schedule pods to a Windows or Linux node. By default, Kubernetes nodes include the label `kubernetes.io/os` to identify the host OS. The kubelet populates this with `runtime.GOOS` as defined by Go. Use `spec.podTemplate.nodeSelector` (or `spec.taskRunSpecs[i].podTemplate.nodeSelector` in a PipelineRun) to schedule Tasks to a node with a specific label and value.

For example:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: windows-taskrun
spec:
  taskRef:
    name: windows-task
  podTemplate:
    nodeSelector:
      kubernetes.io/os: windows
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: linux-taskrun
spec:
  taskRef:
    name: linux-task
  podTemplate:
    nodeSelector:
      kubernetes.io/os: linux
```

### Node Affinity

Node affinity can be used as an alternative method of defining the OS requirement of a Task. These rules can be set under `spec.podTemplate.affinity.nodeAffinity` in a TaskRun definition. The example below produces the same result as the previous example, which used node selectors.

For example:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: windows-taskrun
spec:
  taskRef:
    name: windows-task
  podTemplate:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/os
                  operator: In
                  values:
                    - windows
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: linux-taskrun
spec:
  taskRef:
    name: linux-task
  podTemplate:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/os
                  operator: In
                  values:
                    - linux
```
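For mixed-OS Pipelines, the per-Task override mentioned above (`spec.taskRunSpecs[i].podTemplate.nodeSelector`) can pin each Task of a single PipelineRun to the right OS. A minimal sketch, in which the pipeline and task names are hypothetical:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: mixed-os-pipelinerun      # hypothetical name
spec:
  pipelineRef:
    name: mixed-os-pipeline       # hypothetical pipeline with one Linux and one Windows Task
  taskRunSpecs:
    # each entry overrides the pod template only for the named pipeline task
    - pipelineTaskName: linux-task
      podTemplate:
        nodeSelector:
          kubernetes.io/os: linux
    - pipelineTaskName: windows-task
      podTemplate:
        nodeSelector:
          kubernetes.io/os: windows
```

Because the override is scoped per pipeline task, one PipelineRun can drive both Windows and Linux Tasks without touching the Pipeline definition itself.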
<!--
---
linkTitle: "Artifacts"
weight: 201
---
-->

# Artifacts

- [Overview](#overview)
- [Artifact Provenance Data](#artifact-provenance-data)
  - [Passing Artifacts between Steps](#passing-artifacts-between-steps)
  - [Passing Artifacts between Tasks](#passing-artifacts-between-tasks)

## Overview
> :seedling: **`Artifacts` is an [alpha](additional-configs.md#alpha-features) feature.**
> The `enable-artifacts` feature flag must be set to `"true"` to read or write artifacts in a step.

Artifacts provide a way to track the origin of data produced and consumed within your Tekton Tasks.

## Artifact Provenance Data
Artifacts fall into two categories:

- Inputs: artifacts downloaded and used by the Step/Task.
- Outputs: artifacts created and uploaded by the Step/Task.

Example structure:
```json
{
  "inputs": [
    {
      "name": "<input-category-name>",
      "values": [
        {
          "uri": "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
          "digest": { "sha256": "b35caccc..." }
        }
      ]
    }
  ],
  "outputs": [
    {
      "name": "<output-category-name>",
      "values": [
        {
          "uri": "pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
          "digest": {
            "sha256": "df85b9e3...",
            "sha1": "95588b8f..."
          }
        }
      ]
    }
  ]
}
```

The content is written by the `Step` to the file `$(step.artifacts.path)`:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: step-artifacts-
spec:
  taskSpec:
    description: |
      A simple task that populates artifacts to TaskRun stepState
    steps:
      - name: artifacts-producer
        image: bash:latest
        script: |
          cat > $(step.artifacts.path) << EOF
          {
            "inputs":[
              {
                "name":"source",
                "values":[
                  {
                    "uri":"pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
                    "digest":{
                      "sha256":"b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
                    }
                  }
                ]
              }
            ],
            "outputs":[
              {
                "name":"image",
                "values":[
                  {
                    "uri":"pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
                    "digest":{
                      "sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
                      "sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
                    }
                  }
                ]
              }
            ]
          }
          EOF
```

The content can also be written to the `Task`-level file `$(artifacts.path)`:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: step-artifacts-
spec:
  taskSpec:
    description: |
      A simple task that populates artifacts to TaskRun stepState
    steps:
      - name: artifacts-producer
        image: bash:latest
        script: |
          cat > $(artifacts.path) << EOF
          {
            "inputs":[
              {
                "name":"source",
                "values":[
                  {
                    "uri":"pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
                    "digest":{
                      "sha256":"b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
                    }
                  }
                ]
              }
            ],
            "outputs":[
              {
                "name":"image",
                "values":[
                  {
                    "uri":"pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
                    "digest":{
                      "sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
                      "sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
                    }
                  }
                ]
              }
            ]
          }
          EOF
```

It is recommended to use the [purl format](https://github.com/package-url/purl-spec/blob/master/PURL-SPECIFICATION.rst) for artifact URIs, as shown in the examples.

### Output Artifacts in SLSA Provenance

Artifacts are classified as either:

- Build outputs: packages, images, etc. that are published by the build.
- Build byproducts: logs, caches, etc. that are produced incidentally by the build.

By default, Tekton Chains considers all output artifacts `byProducts` when generating the [SLSA provenance](https://slsa.dev/spec/v1.0/provenance). In order to treat an artifact as a [subject](https://slsa.dev/spec/v1.0/provenance#schema) of the build, you must set the boolean field `"buildOutput": true` on the output artifact.

For example:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: step-artifacts-
spec:
  taskSpec:
    description: |
      A simple task that populates artifacts to TaskRun stepState
    steps:
      - name: artifacts-producer
        image: bash:latest
        script: |
          cat > $(artifacts.path) << EOF
          {
            "outputs":[
              {
                "name":"image",
                "buildOutput": true,
                "values":[
                  {
                    "uri":"pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
                    "digest":{
                      "sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
                      "sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
                    }
                  }
                ]
              }
            ]
          }
          EOF
```

This informs Tekton Chains of your intent to treat the artifact as a build output.

> [!TIP]
> When authoring a `StepAction` or a `Task`, you can parametrize this field to let users indicate their intent depending on what they are uploading - this can be useful for actions that may produce either a build output or a byproduct depending on the context!

### Passing Artifacts between Steps
You can pass artifacts from one step to the next using:
- Specific artifact: `$(steps.<step-name>.inputs.<artifact-category-name>)` or `$(steps.<step-name>.outputs.<artifact-category-name>)`

The example below shows how to access a previous step's artifacts from another step in the same task:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: step-artifacts-
spec:
  taskSpec:
    description: |
      A simple task that populates artifacts to TaskRun stepState
    steps:
      - name: artifacts-producer
        image: bash:latest
        script: |
          # the script is for creating the output artifacts
          cat > $(step.artifacts.path) << EOF
          {
            "inputs":[
              {
                "name":"source",
                "values":[
                  {
                    "uri":"pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
                    "digest":{
                      "sha256":"b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
                    }
                  }
                ]
              }
            ],
            "outputs":[
              {
                "name":"image",
                "values":[
                  {
                    "uri":"pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
                    "digest":{
                      "sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
                      "sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
                    }
                  }
                ]
              }
            ]
          }
          EOF
      - name: artifacts-consumer
        image: bash:latest
        script: |
          echo $(steps.artifacts-producer.outputs.image)
```

The resolved value of `$(steps.<step-name>.outputs.<artifact-category-name>)` is the list of values of that artifact. For this example, `$(steps.artifacts-producer.outputs.image)` resolves to:
```json
[
  {
    "uri": "pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library",
    "digest": {
      "sha256": "df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
      "sha1": "95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
    }
  }
]
```

Upon resolution and execution of the `TaskRun`, the `Status` will look something like:

```json
{
  "artifacts": {
    "inputs": [
      {
        "name": "source",
        "values": [
          {
            "digest": {
              "sha256": "b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
            },
            "uri": "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c"
          }
        ]
      }
    ],
    "outputs": [
      {
        "name": "image",
        "values": [
          {
            "digest": {
              "sha1": "95588b8f34c31eb7d62c92aaa4e6506639b06ef2",
              "sha256": "df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48"
            },
            "uri": "pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library"
          }
        ]
      }
    ]
  },
  "steps": [
    {
      "container": "step-artifacts-producer",
      "imageID": "docker.io/library/bash@sha256:5353512b79d2963e92a2b97d9cb52df72d32f94661aa825fcfa0aede73304743",
      "inputs": [
        {
          "name": "source",
          "values": [
            {
              "digest": {
                "sha256": "b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
              },
              "uri": "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c"
            }
          ]
        }
      ],
      "name": "artifacts-producer",
      "outputs": [
        {
          "name": "image",
          "values": [
            {
              "digest": {
                "sha1": "95588b8f34c31eb7d62c92aaa4e6506639b06ef2",
                "sha256": "df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48"
              },
              "uri": "pkg:oci/nginx:stable-alpine3.17-slim?repository_url=docker.io/library"
            }
          ]
        }
      ],
      "terminated": {
        "containerID": "containerd://010f02d103d1db48531327a1fe09797c87c1d50b6a216892319b3af93e0f56e7",
        "exitCode": 0,
        "finishedAt": "2024-03-18T17:05:06Z",
        "message": "...",
        "reason": "Completed",
        "startedAt": "2024-03-18T17:05:06Z"
      },
      "terminationReason": "Completed"
    },
    {
      "container": "step-artifacts-consumer",
      "imageID": "docker.io/library/bash@sha256:5353512b79d2963e92a2b97d9cb52df72d32f94661aa825fcfa0aede73304743",
      "name": "artifacts-consumer",
      "terminated": {
        "containerID": "containerd://42428aa7e5a507eba924239f213d185dd4bc0882b6f217a79e6792f7fec3586e",
        "exitCode": 0,
        "finishedAt": "2024-03-18T17:05:06Z",
        "reason": "Completed",
        "startedAt": "2024-03-18T17:05:06Z"
      },
      "terminationReason": "Completed"
    }
  ]
}
```

### Passing Artifacts between Tasks
You can pass artifacts from one task to another using:

- Specific artifact: `$(tasks.<task-name>.inputs.<artifact-category-name>)` or `$(tasks.<task-name>.outputs.<artifact-category-name>)`

The example below shows how to access a previous task's artifacts from another task in a pipeline:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: pipelinerun-consume-tasks-artifacts
spec:
  pipelineSpec:
    tasks:
      - name: produce-artifacts-task
        taskSpec:
          description: |
            A simple task that produces artifacts
          steps:
            - name: produce-artifacts
              image: bash:latest
              script: |
                #!/usr/bin/env bash
                cat > $(artifacts.path) << EOF
                {
                  "inputs":[
                    {
                      "name":"input-artifacts",
                      "values":[
                        {
                          "uri":"pkg:example.github.com/inputs",
                          "digest":{
                            "sha256":"b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
                          }
                        }
                      ]
                    }
                  ],
                  "outputs":[
                    {
                      "name":"image",
                      "values":[
                        {
                          "uri":"pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
                          "digest":{
                            "sha256":"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
                            "sha1":"95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
                          }
                        }
                      ]
                    }
                  ]
                }
                EOF
      - name: consume-artifacts
        runAfter:
          - produce-artifacts-task
        taskSpec:
          steps:
            - name: artifacts-consumer-python
              image: python:latest
              script: |
                #!/usr/bin/env python3
                import json
                data = json.loads('$(tasks.produce-artifacts-task.outputs.image)')
                if data[0]['uri'] != "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c":
                  exit(1)
```

Similar to step artifacts, the resolved value of `$(tasks.<task-name>.outputs.<artifact-category-name>)` is the list of values of that artifact. For this example, `$(tasks.produce-artifacts-task.outputs.image)` resolves to:
```json
[
  {
    "uri": "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
    "digest": {
      "sha256": "df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
      "sha1": "95588b8f34c31eb7d62c92aaa4e6506639b06ef2"
    }
  }
]
```
Upon resolution and execution of the `TaskRun`, the `Status` will look something like:
```json
{
  "artifacts": {
    "inputs": [
      {
        "name": "input-artifacts",
        "values": [
          {
            "digest": {
              "sha256": "b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0"
            },
            "uri": "pkg:example.github.com/inputs"
          }
        ]
      }
    ],
    "outputs": [
      {
        "name": "image",
        "values": [
          {
            "digest": {
              "sha1": "95588b8f34c31eb7d62c92aaa4e6506639b06ef2",
              "sha256": "df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48"
            },
            "uri": "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c"
          }
        ]
      }
    ]
  },
  "completionTime": "2024-05-28T14:10:58Z",
  "conditions": [
    {
      "lastTransitionTime": "2024-05-28T14:10:58Z",
      "message": "All Steps have completed executing",
      "reason": "Succeeded",
      "status": "True",
      "type": "Succeeded"
    }
  ],
  "podName": "pipelinerun-consume-tasks-a41ee44e4f964e95adfd3aea417d52f90-pod",
  "provenance": {
    "featureFlags": {
      "AwaitSidecarReadiness": true,
      "Coschedule": "workspaces",
      "DisableAffinityAssistant": false,
      "DisableCredsInit": false,
      "DisableInlineSpec": "",
      "EnableAPIFields": "beta",
      "EnableArtifacts": true,
      "EnableCELInWhenExpression": false,
      "EnableConciseResolverSyntax": false,
      "EnableKeepPodOnCancel": false,
      "EnableParamEnum": false,
      "EnableProvenanceInStatus": true,
      "EnableStepActions": true,
      "EnableTektonOCIBundles": false,
      "EnforceNonfalsifiability": "none",
      "MaxResultSize": 4096,
      "RequireGitSSHSecretKnownHosts": false,
      "ResultExtractionMethod": "termination-message",
      "RunningInEnvWithInjectedSidecars": true,
      "ScopeWhenExpressionsToTask": false,
      "SendCloudEventsForRuns": false,
      "SetSecurityContext": false,
      "VerificationNoMatchPolicy": "ignore"
    }
  },
  "startTime": "2024-05-28T14:10:48Z",
  "steps": [
    {
      "container": "step-produce-artifacts",
      "imageID": "docker.io/library/bash@sha256:23f90212fd89e4c292d7b41386ef1a6ac2b8a02bbc6947680bfe184cbc1a2899",
      "name": "produce-artifacts",
      "terminated": {
        "containerID": "containerd://1291ce07b175a7897beee6ba62eaa1528427bacb1f76b31435eeba68828c445a",
        "exitCode": 0,
        "finishedAt": "2024-05-28T14:10:57Z",
        "message": "...",
        "reason": "Completed",
        "startedAt": "2024-05-28T14:10:57Z"
      },
      "terminationReason": "Completed"
    }
  ],
  "taskSpec": {
    "description": "A simple task that produces artifacts\n",
    "steps": [
      {
        "computeResources": {},
        "image": "bash:latest",
        "name": "produce-artifacts",
        "script": "#!/usr/bin/env bash\ncat > /tekton/artifacts/provenance.json << EOF\n{\n  \"inputs\":[\n    {\n      \"name\":\"input-artifacts\",\n      \"values\":[\n        {\n          \"uri\":\"pkg:example.github.com/inputs\",\n          \"digest\":{\n            \"sha256\":\"b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0\"\n          }\n        }\n      ]\n    }\n  ],\n  \"outputs\":[\n    {\n      \"name\":\"image\",\n      \"values\":[\n        {\n          \"uri\":\"pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c\",\n          \"digest\":{\n            \"sha256\":\"df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48\",\n            \"sha1\":\"95588b8f34c31eb7d62c92aaa4e6506639b06ef2\"\n          }\n        }\n      ]\n    }\n  ]\n}\nEOF\n"
      }
    ]
  }
}
```
description    A simple task that produces artifacts n          steps                          computeResources                  image    bash latest              name    produce artifacts              script       usr bin env bash ncat    tekton artifacts provenance json    EOF n  n    inputs     n      n        name     input artifacts    n        values     n          n            uri     pkg example github com inputs    n            digest     n              sha256     b35cacccfdb1e24dc497d15d553891345fd155713ffe647c281c583269eaaae0   n            n          n        n      n     n    outputs     n      n        name     image    n        values     n          n            uri     pkg github package url purl spec 244fd47e07d1004f0aed9c    n            digest     n              sha256     df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48    n              sha1     95588b8f34c31eb7d62c92aaa4e6506639b06ef2   n            n          n        n      n    n  nEOF n                               "}
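The consume step's check can be exercised outside a cluster. Below is a minimal Python sketch of the same logic; the `resolved` literal stands in for the substituted value of `$(tasks.produce-artifacts-task.outputs.image)` from the example, and `first_uri_matches` is an illustrative helper name, not part of Tekton:

```python
import json

# Stand-in for the resolved artifact reference: Tekton substitutes the
# artifact's "values" list (uri + digest entries) as a JSON string.
resolved = json.dumps([
    {
        "uri": "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c",
        "digest": {
            "sha256": "df85b9e3983fe2ce20ef76ad675ecf435cc99fc9350adc54fa230bae8c32ce48",
            "sha1": "95588b8f34c31eb7d62c92aaa4e6506639b06ef2",
        },
    }
])


def first_uri_matches(payload: str, expected_uri: str) -> bool:
    """Mirror the artifacts-consumer-python step: parse the resolved JSON
    and compare the first value's uri against the expected package URL."""
    data = json.loads(payload)
    return data[0]["uri"] == expected_uri


assert first_uri_matches(resolved, "pkg:github/package-url/purl-spec@244fd47e07d1004f0aed9c")
```

In the pipeline itself the same comparison runs inside the consume step's script, exiting non-zero (and failing the task) when the URI does not match.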
{"questions":"tekton Install Tekton Pipelines weight 101 Install Tekton Pipelines on your cluster title Install Tekton Pipelines","answers":"<!--\n---\ntitle: \"Install Tekton Pipelines\"\nlinkTitle: \"Install Tekton Pipelines\"\nweight: 101\ndescription: >\n  Install Tekton Pipelines on your cluster\n---\n-->\n\n\nTo view the full contents of this page, go to the \n<a href=\"http:\/\/tekton.dev\/docs\/installation\/pipelines\/\">Tekton website<\/a>.\n\n\n\n\n\n\nThis guide explains how to install Tekton Pipelines.\n\n## Prerequisites\n\n-   A [Kubernetes cluster][k8s] running version 1.28 or later.\n-   [Kubectl][].\n-   Grant `cluster-admin` privileges to the current user. See the [Kubernetes\n    role-based access control (RBAC) docs][rbac] for more information.\n-   (Optional) Install a [Metrics Server][metrics] if you need support for high\n    availability use cases.\n\nSee the [local installation guide][local-install] if you want to test Tekton on\nyour computer.\n\n## Installation\n\n\n\n\nTo install Tekton Pipelines on a Kubernetes cluster:\n\n1. Run one of the following commands depending on which version of Tekton\n   Pipelines you want to install:\n\n   - **Latest official release:**\n\n     ```bash\n     kubectl apply --filename https:\/\/storage.googleapis.com\/tekton-releases\/pipeline\/latest\/release.yaml\n     ```\n     \n      Note: These instructions are ideal as a quick start installation guide with Tekton Pipelines and not meant for the production use. Please refer to the [operator](https:\/\/github.com\/tektoncd\/operator) to install, upgrade and manage Tekton projects. 
\n\n   - **Nightly release:**\n\n     ```bash\n     kubectl apply --filename https:\/\/storage.googleapis.com\/tekton-releases-nightly\/pipeline\/latest\/release.yaml\n     ```\n\n   - **Specific release:**\n\n     ```bash\n     kubectl apply --filename https:\/\/storage.googleapis.com\/tekton-releases\/pipeline\/previous\/<version_number>\/release.yaml\n     ```\n\n     Replace `<version_number>` with the numbered version you want to install.\n     For example, `v0.26.0`.\n\n   - **Untagged release:**\n\n     If your container runtime does not support `image-reference:tag@digest`:\n\n     ```bash\n     kubectl apply --filename https:\/\/storage.googleapis.com\/tekton-releases\/pipeline\/latest\/release.notags.yaml\n     ```\n\n   Multi-tenant installation is only partially supported today; read the [guide](.\/developers\/multi-tenant-support.md)\n   for reference.\n\n1. Monitor the installation:\n\n   ```bash\n   kubectl get pods --namespace tekton-pipelines --watch\n   ```\n\n   When all components show `1\/1` under the `READY` column, the installation is\n   complete. Hit *Ctrl + C* to stop monitoring.\n\nCongratulations! You have successfully installed Tekton Pipelines on your\nKubernetes cluster.\n\n## Additional configuration options\n\nYou can enable additional alpha and beta features, customize execution\nparameters, configure availability, and many more options. 
See the\n[additional configuration options](.\/additional-configs.md) for more information.\n\n## Next steps\n\nTo get started with Tekton check the [Introductory tutorials][quickstarts],\nthe [how-to guides][howtos], and the [examples folder][examples].\n\n---\n\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License][cca4], and code samples are licensed\nunder the [Apache 2.0 License][apache2l].\n\n\n[quickstarts]: https:\/\/tekton.dev\/docs\/getting-started\/\n[howtos]: https:\/\/tekton.dev\/docs\/how-to-guides\/\n[examples]: https:\/\/github.com\/tektoncd\/pipeline\/tree\/main\/examples\/\n[cca4]: https:\/\/creativecommons.org\/licenses\/by\/4.0\/\n[apache2l]: https:\/\/www.apache.org\/licenses\/LICENSE-2.0\n[k8s]: https:\/\/www.downloadkubernetes.com\/\n[kubectl]: https:\/\/www.downloadkubernetes.com\/\n[rbac]: https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/rbac\/\n[metrics]: https:\/\/github.com\/kubernetes-sigs\/metrics-server\n[local-install]: https:\/\/tekton.dev\/docs\/installation\/local-installation\/\n","site":"tekton"}
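The three `kubectl apply` commands above differ only in which `release.yaml` URL they pull. The sketch below assembles those URLs; `release_url` is a hypothetical helper written for illustration, not part of Tekton or kubectl:

```python
def release_url(version=None, nightly=False, notags=False):
    """Build the release.yaml URL used with `kubectl apply --filename`.

    Hypothetical helper: mirrors the URL patterns quoted in the guide above.
    """
    base = "https://storage.googleapis.com/tekton-releases"
    if nightly:
        # Nightly builds are published to a separate bucket.
        return f"{base}-nightly/pipeline/latest/release.yaml"
    if version:
        # Pinned releases, e.g. "v0.26.0", live under previous/<version_number>/.
        return f"{base}/pipeline/previous/{version}/release.yaml"
    # Latest official release; release.notags.yaml is for container runtimes
    # that do not support image-reference:tag@digest.
    name = "release.notags.yaml" if notags else "release.yaml"
    return f"{base}/pipeline/latest/{name}"


print(release_url(version="v0.26.0"))
```

The resulting URL is what you would pass to `kubectl apply --filename …` in the steps above.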
{"questions":"tekton Compute Resources in Tekton Compute Resources Limits weight 408 Background Resource Requirements in Kubernetes","answers":"<!--\n---\nlinkTitle: \"Compute Resources Limits\"\nweight: 408\n---\n-->\n\n# Compute Resources in Tekton\n\n## Background: Resource Requirements in Kubernetes\n\nKubernetes allows users to specify CPU, memory, and ephemeral storage constraints\nfor [containers](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/).\nResource requests determine the resources reserved for a pod when it's scheduled,\nand affect likelihood of pod eviction. Resource limits constrain the maximum amount of\na resource a container can use. A container that exceeds its memory limits will be killed,\nand a container that exceeds its CPU limits will be throttled.\n\nA pod's [effective resource requests and limits](https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/init-containers\/#resources)\nare the higher of:\n- the sum of all app containers request\/limit for a resource\n- the effective init container request\/limit for a resource\n\nThis formula exists because Kubernetes runs init containers sequentially and app containers\nin parallel. (There is no distinction made between app containers and sidecar containers\nin Kubernetes; a sidecar is used in the following example to illustrate this.)\n\nFor example, consider a pod with the following containers:\n\n| Container           | CPU request | CPU limit |\n| ------------------- | ----------- | --------- |\n| init container 1    | 1           | 2         |\n| init container 2    | 2           | 3         |\n| app container 1     | 1           | 2         |\n| app container 2     | 2           | 3         |\n| sidecar container 1 | 3           | no limit  |\n\nThe sum of all app container CPU requests is 6 (including the sidecar container), which is\ngreater than the maximum init container CPU request (2). 
Therefore, the pod's effective CPU\nrequest will be 6.\n\nSince the sidecar container has no CPU limit, this is treated as the highest CPU limit.\nTherefore, the pod will have no effective CPU limit.\n\n## Task-level Compute Resources Configuration\n\n**([beta](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/additional-configs.md#beta-features))**\n\nTekton allows users to specify resource requirements of [`Steps`](.\/tasks.md#defining-steps),\nwhich run sequentially. However, the pod's effective resource requirements are still the\nsum of its containers' resource requirements. This means that when specifying resource \nrequirements for `Step` containers, they must be treated as if they are running in parallel.\n\nTekton adjusts `Step` resource requirements to comply with [LimitRanges](#limitrange-support).\n[ResourceQuotas](#resourcequota-support) are not currently supported.\n\nInstead of specifying resource requirements on each `Step`, users can choose to specify resource requirements at the Task-level. 
If users specify a Task-level resource request, it will ensure that the kubelet reserves only that amount of resources to execute the `Task`'s `Steps`.\nIf users specify a Task-level resource limit, no `Step` may use more than that amount of resources.\n\nEach of these details is explained in more depth below.\n\nSome points to note:\n\n- Task-level resource requests and limits do not apply to sidecars, which can be configured separately.\n- If only limits are configured at the task level, they are also applied as the task-level requests.\n- Resource requirements configured in `Step` or `StepTemplate` of the referenced `Task` will be overridden by the task-level requirements.\n- A `TaskRun` configured with both `StepOverrides` and task-level requirements will be rejected.\n\n### Configure Task-level Compute Resources\n\nTask-level resource requirements can be configured in `TaskRun.ComputeResources` or `PipelineRun.TaskRunSpecs.ComputeResources`.\n\ne.g.\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: foo\nspec:\n  computeResources:\n    requests:\n      cpu: 1\n    limits:\n      cpu: 2\n```\n\nThe following TaskRun will be rejected, because it configures both stepOverrides and task-level compute resource requirements:\n\n```yaml\nkind: TaskRun\nspec:\n  stepOverrides:\n    - name: foo\n      resources:\n        requests:\n          cpu: 1\n  computeResources:\n    requests:\n      cpu: 2\n```\n\nThe same applies to a PipelineRun that configures both in its taskRunSpecs:\n\n```yaml\nkind: PipelineRun\nspec:\n  taskRunSpecs:\n    - pipelineTaskName: foo\n      stepOverrides:\n        - name: foo\n          resources:\n            requests:\n              cpu: 1\n      computeResources:\n        requests:\n          cpu: 2\n```\n\n### Configure Resource Requirements with Sidecar\n\nUsers can specify compute resources separately for a sidecar while configuring task-level resource requirements on a TaskRun.\n\ne.g.\n\n```yaml\nkind: TaskRun\nspec:\n  sidecarOverrides:\n    - name: sidecar\n      resources:\n        
requests:\n          cpu: 750m\n        limits:\n          cpu: 1\n  computeResources:\n    requests:\n      cpu: 2\n```\n\n## LimitRange Support\n\nKubernetes allows users to configure [LimitRanges](https:\/\/kubernetes.io\/docs\/concepts\/policy\/limit-range\/),\nwhich constrain compute resources of pods, containers, or PVCs running in the same namespace.\n\nLimitRanges can:\n\n- Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.\n- Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.\n- Enforce a ratio between request and limit for a resource in a namespace.\n- Set default request\/limit for compute resources in a namespace and automatically inject them to Containers at runtime.\n\nTekton applies the resource requirements specified by users directly to the containers\nin a `Task's` pod, unless there is a LimitRange present in the namespace.\nTekton supports LimitRange minimum, maximum, and default resource requirements for containers,\nbut does not support LimitRange ratios between requests and limits ([#4230](https:\/\/github.com\/tektoncd\/pipeline\/issues\/4230)).\nLimitRange types other than \"Container\" are not considered for purposes of resource requirements.\n\nTekton doesn't allow users to configure init containers for a `Task`, but any `default` and `defaultRequest` from a LimitRange\nwill be applied to the init containers that Tekton injects into a `TaskRun`'s pod.\n\n### Requests\n\nIf a Step container does not have requests defined, Tekton will divide a LimitRange's `defaultRequest` by the number of Step containers and apply these requests to the Steps.\nThis results in a TaskRun with overall requests equal to the LimitRange `defaultRequest`.\nIf this value is less than the LimitRange minimum, the LimitRange minimum will be used instead.\nLimitRange `defaultRequest` values are applied as-is to init containers or Sidecar containers that don't specify requests.\n\nContainers that do 
specify requests will not be modified. If these requests are lower than LimitRange minimums, Kubernetes will reject the resulting TaskRun's pod.\n\n### Limits\n\nTekton does not adjust container limits, regardless of whether a container is a Step, Sidecar, or init container.\nIf a container does not have limits defined, Kubernetes will apply the LimitRange `default` to the container's limits.\nIf a container does define limits, and they are less than the LimitRange `default`, Kubernetes will reject the resulting TaskRun's pod.\n\n### Examples\n\nConsider the following LimitRange:\n\n```\napiVersion: v1\nkind: LimitRange\nmetadata:\n  name: limitrange-example\nspec:\n  limits:\n  - default:  # The default limits\n      cpu: 2\n    defaultRequest:  # The default requests\n      cpu: 1\n    max:  # The maximum limits\n      cpu: 3\n    min:  # The minimum requests\n      cpu: 300m\n    type: Container\n```\n\nA `Task` with 2 `Steps` and no resources specified would result in a pod with the following containers:\n\n| Container    | CPU request | CPU limit |\n| ------------ | ----------- | --------- |\n| container 1  | 500m        | 2         |\n| container 2  | 500m        | 2         |\n\nHere, the default CPU request was divided among the step containers, and this value was used since it was greater\nthan the minimum request specified by the LimitRange.\nThe CPU limits are 2 for each container, as this is the default limit specified in the LimitRange.\n\nA `Task` with 2 `Steps` and 1 `Sidecar`, with no resources specified, would result in a pod with the following containers:\n\n| Container    | CPU request | CPU limit |\n| ------------ | ----------- | --------- |\n| container 1  | 500m        | 2         |\n| container 2  | 500m        | 2         |\n| container 3  | 1           | 2         |\n\nFor the first two containers, the default CPU request was divided among the step containers, and this value was used since it was greater\nthan the minimum request specified by the LimitRange. The third container is a sidecar; since it is not a step container, it gets the full\ndefault CPU request of 1. As before, the CPU limits are 2 for each container, as this is the default limit specified in the LimitRange.\n\nNow, consider a `Task` with the following `Step`s:\n\n| Step   | CPU request | CPU limit |\n| ------ | ----------- | --------- |\n| step 1 | 200m        | 2         |\n| step 2 | 1           | 4         |\n\nThe resulting pod would have the following containers:\n\n| Container    | CPU request | CPU limit |\n| ------------ | ----------- | --------- |\n| container 1  | 300m        | 2         |\n| container 2  | 1           | 3         |\n\nHere, the first `Step's` request was less than the LimitRange minimum, so the output request is the minimum (300m).\nThe second `Step's` request is unchanged. The first `Step's` limit is less than the maximum, so it is unchanged,\nwhile the second `Step's` limit is greater than the maximum, so the maximum (3) is used.\n\n### Support for multiple LimitRanges\n\nTekton supports running `TaskRuns` in namespaces with multiple LimitRanges.\nFor a given resource, the minimum used will be the largest of any of the LimitRanges' minimum values,\nand the maximum used will be the smallest of any of the LimitRanges' maximum values.\nThe default value used will be the minimum of any default values defined;\nif the resulting default value is less than the resulting minimum value, the minimum value is used as the default.\n\nIt's possible for multiple LimitRanges to be defined which are not compatible with each other, preventing pods from being scheduled.\n\n#### Example\n\nConsider a namespace with the following LimitRanges defined:\n\n```\napiVersion: v1\nkind: LimitRange\nmetadata:\n  name: 
limitrange-1\nspec:\n  limits:\n  - default:  # The default limits\n      cpu: 2\n    defaultRequest:  # The default requests\n      cpu: 750m\n    max:  # The maximum limits\n      cpu: 3\n    min:  # The minimum requests\n      cpu: 500m\n    type: Container\n```\n\n```\napiVersion: v1\nkind: LimitRange\nmetadata:\n  name: limitrange-2\nspec:\n  limits:\n  - default:  # The default limits\n      cpu: 1.5\n    defaultRequest:  # The default requests\n      cpu: 1\n    max:  # The maximum limits\n      cpu: 2.5\n    min:  # The minimum requests\n      cpu: 300m\n    type: Container\n```\n\nA namespace with limitrange-1 and limitrange-2 would be treated as if it contained only the following LimitRange:\n\n```\napiVersion: v1\nkind: LimitRange\nmetadata:\n  name: aggregate-limitrange\nspec:\n  limits:\n  - default:  # The default limits\n      cpu: 1.5\n    defaultRequest:  # The default requests\n      cpu: 750m\n    max:  # The maximum limits\n      cpu: 2.5\n    min:  # The minimum requests\n      cpu: 300m\n    type: Container\n```\n\nHere, the minimum of the \"max\" values is the output \"max\" value, and likewise for \"default\" and \"defaultRequest\".\nThe maximum of the \"min\" values is the output \"min\" value.\n\n## ResourceQuota Support\n\nKubernetes allows users to define [ResourceQuotas](https:\/\/kubernetes.io\/docs\/concepts\/policy\/resource-quotas\/),\nwhich restrict the maximum resource requests and limits of all pods running in a namespace.\n\nTo deploy Tekton TaskRuns or PipelineRuns in namespaces with ResourceQuotas, compute resource requirements\nmust be set for all containers in a `TaskRun`'s pod, including the init containers injected by Tekton.\n`Step` and `Sidecar` resource requirements can be configured directly through the API, as described in\n[Task Resource Requirements](#task-resource-requirements). To configure resource requirements for Tekton's init containers,\ndeploy a LimitRange in the same namespace. 
The LimitRange's `default` and `defaultRequest` will be applied to the init containers,\nand divided among the `Steps` and `Sidecars`, as described in [LimitRange Support](#limitrange-support).\n\n[#2933](https:\/\/github.com\/tektoncd\/pipeline\/issues\/2933) tracks support for running `TaskRuns` in a namespace with a ResourceQuota\nwithout having to use LimitRanges.\n\nResourceQuotas consider the effective resource requests and limits of a pod, which Kubernetes determines by summing the resource requirements\nof its containers (under the assumption that they run in parallel). When using LimitRanges to set compute resources for `TaskRun` pods,\nLimitRange default requests are divided among `Step` containers, meaning that the pod's effective requests reflect the actual requests\nthat the pod needs. However, LimitRange default limits are not divided among containers, meaning the pod's effective limits are much larger\nthan the limits applied during execution of any given `Step`. For example, if a ResourceQuota restricts a namespace to a limit of 10 CPU,\nand a user creates a TaskRun with 20 steps with a limit of 1 CPU each, the pod would not be schedulable even though it is\nlimited to 1 CPU at each point in time. Therefore, it is recommended to use ResourceQuotas to restrict only requests of `TaskRun` pods,\nnot limits (tracked in [#4976](https:\/\/github.com\/tektoncd\/pipeline\/issues\/4976)). \n\n## Quality of Service (QoS)\n\nBy default, pods that run Tekton TaskRuns will have a [Quality of Service (QoS)](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/quality-service-pod\/)\nof \"BestEffort\". 
If compute resource requirements are set for any Step or Sidecar, the pod will have a \"Burstable\" QoS.\nTo get a \"Guaranteed\" QoS, a TaskRun pod must have compute resources set for all of its containers, including init containers which are\ninjected by Tekton, and all containers must have their requests equal to their limits.\nThis can be achieved by using LimitRanges to apply default requests and limits.\n\n# References\n\n- [LimitRange in k8s docs](https:\/\/kubernetes.io\/docs\/concepts\/policy\/limit-range\/)\n- [Configure default memory requests and limits for a Namespace](https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/manage-resources\/memory-default-namespace\/)\n- [Configure default CPU requests and limits for a Namespace](https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/manage-resources\/cpu-default-namespace\/)\n- [Configure Minimum and Maximum CPU constraints for a Namespace](https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/manage-resources\/cpu-constraint-namespace\/)\n- [Configure Minimum and Maximum Memory constraints for a Namespace](https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/manage-resources\/memory-constraint-namespace\/)\n- [Managing Resources for Containers](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/)\n- [Kubernetes best practices: Resource requests and limits](https:\/\/cloud.google.com\/blog\/products\/containers-kubernetes\/kubernetes-best-practices-resource-requests-and-limits)\n- [Restrict resource consumption with limit ranges](https:\/\/docs.openshift.com\/container-platform\/4.8\/nodes\/clusters\/nodes-cluster-limit-ranges.html)","site":"tekton","answers_cleaned":"         linkTitle   Compute Resources Limits  weight  408            Compute Resources in Tekton     Background  Resource Requirements in Kubernetes  Kubernetes allows users to specify CPU  memory  and ephemeral storage constraints for  containers  https   kubernetes io docs concepts 
configuration manage resources containers    Resource requests determine the resources reserved for a pod when it s scheduled  and affect likelihood of pod eviction  Resource limits constrain the maximum amount of a resource a container can use  A container that exceeds its memory limits will be killed  and a container that exceeds its CPU limits will be throttled   A pod s  effective resource requests and limits  https   kubernetes io docs concepts workloads pods init containers  resources  are the higher of    the sum of all app containers request limit for a resource   the effective init container request limit for a resource  This formula exists because Kubernetes runs init containers sequentially and app containers in parallel   There is no distinction made between app containers and sidecar containers in Kubernetes  a sidecar is used in the following example to illustrate this    For example  consider a pod with the following containers     Container             CPU request   CPU limit                                                       init container 1      1             2             init container 2      2             3             app container 1       1             2             app container 2       2             3             sidecar container 1   3             no limit     The sum of all app container CPU requests is 6  including the sidecar container   which is greater than the maximum init container CPU request  2   Therefore  the pod s effective CPU request will be 6   Since the sidecar container has no CPU limit  this is treated as the highest CPU limit  Therefore  the pod will have no effective CPU limit      Task level Compute Resources Configuration      beta  https   github com tektoncd pipeline blob main docs additional configs md beta features      Tekton allows users to specify resource requirements of   Steps     tasks md defining steps   which run sequentially  However  the pod s effective resource requirements are still the sum of its 
This means that when specifying resource requirements for `Step` containers, they must be treated as if they are running in parallel.

Tekton adjusts `Step` resource requirements to comply with [LimitRanges](#limitrange-support). [ResourceQuotas](#resourcequota-support) are not currently supported.

Instead of specifying resource requirements on each `Step`, users can choose to specify resource requirements at the Task level. If users specify a Task-level resource request, it will ensure that the kubelet reserves only that amount of resources to execute the `Task`'s `Steps`. If users specify a Task-level resource limit, no `Step` may use more than that amount of resources. Each of these details is explained in more depth below.

Some points to note:

- Task-level resource requests and limits do not apply to sidecars, which can be configured separately.
- If only limits are configured at the Task level, they are also applied as the Task-level requests.
- Resource requirements configured in the `Step` or `StepTemplate` of the referenced `Task` will be overridden by the Task-level requirements.
- A `TaskRun` configured with both `StepOverrides` and Task-level requirements will be rejected.

### Configure Task-level Compute Resources

Task-level resource requirements can be configured in `TaskRun.ComputeResources` or `PipelineRun.TaskRunSpecs.ComputeResources`, e.g.:

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: foo
spec:
  computeResources:
    requests:
      cpu: 1
    limits:
      cpu: 2
```

The following `TaskRun` will be rejected, because it configures both `stepOverrides` and Task-level compute resource requirements:

```yaml
kind: TaskRun
spec:
  stepOverrides:
    - name: foo
      resources:
        requests:
          cpu: 1
  computeResources:
    requests:
      cpu: 2
```

Likewise, the following `PipelineRun` will be rejected:

```yaml
kind: PipelineRun
spec:
  taskRunSpecs:
    - pipelineTaskName: foo
      stepOverrides:
        - name: foo
          resources:
            requests:
              cpu: 1
      computeResources:
        requests:
          cpu: 2
```

### Configure Resource Requirements with Sidecar

Users can specify compute resources separately for a sidecar while configuring Task-level resource requirements on the `TaskRun`, e.g.:

```yaml
kind: TaskRun
spec:
  sidecarOverrides:
    - name: sidecar
      resources:
        requests:
          cpu: 750m
        limits:
          cpu: 1
  computeResources:
    requests:
      cpu: 2
```

## LimitRange Support

Kubernetes allows users to configure [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/), which constrain the compute resources of pods, containers, or PVCs running in the same namespace. LimitRanges can:

- Enforce minimum and maximum compute resource usage per Pod or Container in a namespace.
- Enforce minimum and maximum storage requests per PersistentVolumeClaim in a namespace.
- Enforce a ratio between request and limit for a resource in a namespace.
- Set default requests/limits for compute resources in a namespace and automatically inject them into Containers at runtime.

Tekton applies the resource requirements specified by users directly to the containers in a `Task`'s pod, unless there is a LimitRange present in the namespace. Tekton supports LimitRange minimum, maximum, and default resource requirements for containers, but does not support LimitRange ratios between requests and limits ([#4230](https://github.com/tektoncd/pipeline/issues/4230)). LimitRange types other than "Container" are not considered for purposes of resource requirements.

Tekton doesn't allow users to configure init containers for a `Task`, but any `default` and `defaultRequest` from a LimitRange will be applied to the init containers that Tekton injects into a `TaskRun`'s pod.

### Requests

If a `Step` container does not have requests defined, Tekton will divide a LimitRange's `defaultRequest` by the number of `Step` containers and apply these requests to the `Steps`.
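This division rule can be sketched as follows (plain Python for illustration, not Tekton code; the numbers mirror the LimitRange example below: a `defaultRequest` of 1 CPU, a minimum of 300m, and two `Steps`):

```python
def divided_step_requests(default_request, minimum, num_steps):
    """Divide a LimitRange defaultRequest among the Step containers,
    flooring each share at the LimitRange minimum."""
    share = default_request / num_steps
    return [max(share, minimum)] * num_steps

print(divided_step_requests(default_request=1.0, minimum=0.3, num_steps=2))  # -> [0.5, 0.5]
```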
This results in a `TaskRun` with overall requests equal to the LimitRange `defaultRequest`. If this value is less than the LimitRange minimum, the LimitRange minimum will be used instead. LimitRange `defaultRequest` values are applied as-is to init containers or `Sidecar` containers that don't specify requests.

Containers that do specify requests will not be modified. If these requests are lower than the LimitRange minimums, Kubernetes will reject the resulting `TaskRun`'s pod.

### Limits

Tekton does not adjust container limits, regardless of whether a container is a `Step`, `Sidecar`, or init container. If a container does not have limits defined, Kubernetes will apply the LimitRange `default` to the container's limits. If a container does define limits, and they are less than the LimitRange minimum, Kubernetes will reject the resulting `TaskRun`'s pod.

### Examples

Consider the following LimitRange:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-example
spec:
  limits:
    - default: # The default limits
        cpu: 2
      defaultRequest: # The default requests
        cpu: 1
      max: # The maximum limits
        cpu: 3
      min: # The minimum requests
        cpu: 300m
      type: Container
```

A `Task` with 2 `Steps` and no resources specified would result in a pod with the following containers:

| Container   | CPU request | CPU limit |
|-------------|-------------|-----------|
| container 1 | 500m        | 2         |
| container 2 | 500m        | 2         |

Here, the default CPU request was divided among the `Step` containers, and this value was used since it was greater than the minimum request specified by the LimitRange. The CPU limits are 2 for each container, as this is the default limit specified in the LimitRange.

A `Task` with 2 `Steps` and 1 `Sidecar`, with no resources specified, would result in a pod with the following containers:

| Container   | CPU request | CPU limit |
|-------------|-------------|-----------|
| container 1 | 500m        | 2         |
| container 2 | 500m        | 2         |
| container 3 | 1           | 2         |

For the first two containers, the default CPU request was divided among the `Step` containers, and this value was used since it was greater than the minimum request specified by the LimitRange. The third container is a `Sidecar`; since it is not a `Step` container, it gets the full default CPU request of 1. As before, the CPU limits are 2 for each container, as this is the default limit specified in the LimitRange.

Now, consider a `Task` with the following `Steps`:

| Step   | CPU request | CPU limit |
|--------|-------------|-----------|
| step 1 | 200m        | 2         |
| step 2 | 1           | 4         |

The resulting pod would have the following containers:

| Container   | CPU request | CPU limit |
|-------------|-------------|-----------|
| container 1 | 300m        | 2         |
| container 2 | 1           | 3         |

Here, the first `Step`'s request was less than the LimitRange minimum, so the minimum (300m) is used instead. The second `Step`'s request is unchanged. The first `Step`'s limit is less than the maximum, so it is unchanged, while the second `Step`'s limit is greater than the maximum, so the maximum (3) is used.

### Support for multiple LimitRanges

Tekton supports running `TaskRuns` in namespaces with multiple LimitRanges. For a given resource, the minimum resource requirement used will be the largest of any of the LimitRanges' minimum values, and the maximum resource requirement used will be the smallest of any of the maximum values defined. The default value used will be the smallest of any of the default values defined. If the resulting default value is less than the resulting minimum value, the minimum value will be used as the default. It's possible for multiple LimitRanges to be defined which are not compatible with each other, preventing pods from being scheduled.
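These aggregation rules can be sketched as follows (plain Python for illustration, not Tekton code; values in CPUs, matching the example below; flooring `defaultRequest` at the minimum is assumed by analogy with `default`):

```python
def aggregate_limit_ranges(limit_ranges):
    """Combine LimitRanges per the rules above: largest min, smallest max,
    smallest default/defaultRequest, with defaults floored at the
    aggregate minimum."""
    agg = {
        "min": max(lr["min"] for lr in limit_ranges),
        "max": min(lr["max"] for lr in limit_ranges),
        "default": min(lr["default"] for lr in limit_ranges),
        "defaultRequest": min(lr["defaultRequest"] for lr in limit_ranges),
    }
    agg["default"] = max(agg["default"], agg["min"])
    agg["defaultRequest"] = max(agg["defaultRequest"], agg["min"])
    return agg

lr1 = {"default": 2.0, "defaultRequest": 0.75, "max": 3.0, "min": 0.5}
lr2 = {"default": 1.5, "defaultRequest": 1.0, "max": 2.5, "min": 0.3}
print(aggregate_limit_ranges([lr1, lr2]))
# -> {'min': 0.5, 'max': 2.5, 'default': 1.5, 'defaultRequest': 0.75}
```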
#### Example

Consider a namespace with the following LimitRanges defined:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-1
spec:
  limits:
    - default: # The default limits
        cpu: 2
      defaultRequest: # The default requests
        cpu: 750m
      max: # The maximum limits
        cpu: 3
      min: # The minimum requests
        cpu: 500m
      type: Container
```

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-2
spec:
  limits:
    - default: # The default limits
        cpu: 1.5
      defaultRequest: # The default requests
        cpu: 1
      max: # The maximum limits
        cpu: 2.5
      min: # The minimum requests
        cpu: 300m
      type: Container
```

A namespace with limitrange-1 and limitrange-2 would be treated as if it contained only the following LimitRange:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: aggregate-limitrange
spec:
  limits:
    - default: # The default limits
        cpu: 1.5
      defaultRequest: # The default requests
        cpu: 750m
      max: # The maximum limits
        cpu: 2.5
      min: # The minimum requests
        cpu: 500m
      type: Container
```

Here, the minimum of the `max` values is the output `max` value, and likewise for `default` and `defaultRequest`. The maximum of the `min` values is the output `min` value (500m).

## ResourceQuota Support

Kubernetes allows users to define [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/), which restrict the maximum resource requests and limits of all pods running in a namespace. To deploy Tekton `TaskRuns` or `PipelineRuns` in namespaces with ResourceQuotas, compute resource requirements must be set for all containers in a `TaskRun`'s pod, including the init containers injected by Tekton. `Step` and `Sidecar` resource requirements can be configured directly through the API, as described in [Task Resource Requirements](#task-resource-requirements). To configure resource requirements for Tekton's init containers, deploy a LimitRange in the same namespace. The LimitRange's `default` and `defaultRequest` will be applied to the init containers, and divided among the `Steps` and `Sidecars`, as described in [LimitRange Support](#limitrange-support). [#2933](https://github.com/tektoncd/pipeline/issues/2933) tracks support for running `TaskRuns` in a namespace with a ResourceQuota without having to use LimitRanges.

ResourceQuotas consider the effective resource requests and limits of a pod, which Kubernetes determines by summing the resource requirements of its containers, under the assumption that they run in parallel. When using LimitRanges to set compute resources for `TaskRun` pods, LimitRange default requests are divided among `Step` containers, meaning that the pod's effective requests reflect the actual requests the pod needs. However, LimitRange default limits are not divided among containers, meaning that the pod's effective limits are much larger than the limits applied during the execution of any given `Step`. For example, if a ResourceQuota restricts a namespace to a limit of 10 CPU, and a user creates a `TaskRun` with 20 steps with a limit of 1 CPU each, the pod would not be schedulable, even though it is limited to 1 CPU at each point in time. Therefore, it is recommended to use ResourceQuotas to restrict only the requests of `TaskRun` pods, not their limits (tracked in [#4976](https://github.com/tektoncd/pipeline/issues/4976)).

## Quality of Service (QoS)

By default, pods that run Tekton `TaskRuns` will have a [Quality of Service (QoS)](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) of "BestEffort". If compute resource requirements are set for any `Step` or `Sidecar`, the pod will have a "Burstable" QoS. To get a "Guaranteed" QoS, a `TaskRun` pod must have compute resources set for all of its containers, including the init containers injected by Tekton, and all containers must have their requests equal to their limits.
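The resulting QoS class can be sketched as follows (a simplified illustration of the Kubernetes rules, not Tekton code):

```python
def qos_class(containers):
    """Simplified Kubernetes QoS classification: Guaranteed when every
    container sets requests equal to its limits, Burstable when any
    container sets any requests or limits, BestEffort otherwise."""
    if all(c.get("requests") and c.get("requests") == c.get("limits") for c in containers):
        return "Guaranteed"
    if any(c.get("requests") or c.get("limits") for c in containers):
        return "Burstable"
    return "BestEffort"

print(qos_class([{}]))                                                  # -> BestEffort
print(qos_class([{"requests": {"cpu": "500m"}}]))                       # -> Burstable
print(qos_class([{"requests": {"cpu": "1"}, "limits": {"cpu": "1"}}]))  # -> Guaranteed
```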
This can be achieved by using LimitRanges to apply default requests and limits.

## References

- [LimitRange in k8s docs](https://kubernetes.io/docs/concepts/policy/limit-range/)
- [Configure default memory requests and limits for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
- [Configure default CPU requests and limits for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
- [Configure Minimum and Maximum CPU constraints for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
- [Configure Minimum and Maximum Memory constraints for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
- [Managing Resources for Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)
- [Kubernetes best practices: Resource requests and limits](https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits)
- [Restrict resource consumption with limit ranges](https://docs.openshift.com/container-platform/4.8/nodes/clusters/nodes-cluster-limit-ranges.html)
<!--
---
linkTitle: "Pod templates"
weight: 409
---
-->

# Pod templates

A Pod template defines a portion of a [`PodSpec`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#pod-v1-core) configuration that Tekton can use as "boilerplate" for a Pod that runs your `Tasks` and `Pipelines`.

You can specify a Pod template for `TaskRuns` and `PipelineRuns`. In the template, you can specify custom values for fields governing the execution of individual `Tasks` or for all `Tasks` executed by a given `PipelineRun`.

You also have the option to define a global Pod template [in your Tekton config](./additional-configs.md#customizing-basic-execution-parameters) using the key `default-pod-template`. However, this global template is merged with any templates you specify in your `TaskRuns` and `PipelineRuns`: except for the `env` and `volumes` fields, fields that exist in both the global template and the `TaskRun`'s or `PipelineRun`'s template are taken from the `TaskRun` or `PipelineRun`. The `env` and `volumes` fields are merged by the `name` value in the array elements.
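For instance, the `env` merge behaves like the following sketch (plain Python for illustration, not Tekton source; variable names are made up):

```python
def merge_by_name(global_items, run_items):
    """Merge two lists of {'name': ...} entries by name; entries from the
    TaskRun/PipelineRun template win on name collisions."""
    merged = {item["name"]: item for item in global_items}
    merged.update({item["name"]: item for item in run_items})
    return list(merged.values())

global_env = [{"name": "TZ", "value": "UTC"}, {"name": "LOG_LEVEL", "value": "info"}]
run_env = [{"name": "LOG_LEVEL", "value": "debug"}]
print(merge_by_name(global_env, run_env))
# -> [{'name': 'TZ', 'value': 'UTC'}, {'name': 'LOG_LEVEL', 'value': 'debug'}]
```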
If the item's `name` is the same, the item from the `TaskRun` or `PipelineRun` will be used.

See the following for examples of specifying a Pod template:
- [Specifying a Pod template for a `TaskRun`](./taskruns.md#specifying-a-pod-template)
- [Specifying a Pod template for a `PipelineRun`](./pipelineruns.md#specifying-a-pod-template)

## Supported fields

Pod templates support the fields listed in the table below.

<table>
	<thead>
		<th>Field</th>
		<th>Description</th>
	</thead>
	<tbody>
		<tr>
			<td><code>env</code></td>
			<td>Environment variables defined in the Pod template at <code>TaskRun</code> and <code>PipelineRun</code> level take precedence over the ones defined in <code>steps</code> and <code>stepTemplate</code>.</td>
		</tr>
		<tr>
			<td><code>nodeSelector</code></td>
			<td>Must be true for <a href=https://kubernetes.io/docs/concepts/configuration/assign-pod-node/>the Pod to fit on a node</a>.</td>
		</tr>
		<tr>
			<td><code>tolerations</code></td>
			<td>Allows (but does not require) the Pods to schedule onto nodes with matching taints.</td>
		</tr>
		<tr>
			<td><code>affinity</code></td>
			<td>Allows constraining the set of nodes for which the Pod can be scheduled based on the labels present on the node.</td>
		</tr>
		<tr>
			<td><code>securityContext</code></td>
			<td>Specifies Pod-level security attributes and common container settings such as <code>runAsUser</code> and <code>selinux</code>.</td>
		</tr>
		<tr>
			<td><code>volumes</code></td>
			<td>Specifies a list of volumes that containers within the Pod can mount. This allows you to specify a volume type for each <code>volumeMount</code> in a <code>Task</code>.</td>
		</tr>
		<tr>
			<td><code>runtimeClassName</code></td>
			<td>Specifies the <a href=https://kubernetes.io/docs/concepts/containers/runtime-class/>runtime class</a> for the Pod.</td>
		</tr>
		<tr>
			<td><code>automountServiceAccountToken</code></td>
			<td><b>Default:</b> <code>true</code>. Determines whether Tekton automatically provides the token for the service account used by the Pod inside containers at a predefined path.</td>
		</tr>
		<tr>
			<td><code>dnsPolicy</code></td>
			<td><b>Default:</b> <code>ClusterFirst</code>. Specifies the <a href=https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy>DNS policy</a> for the Pod. Legal values are <code>ClusterFirst</code>, <code>Default</code>, and <code>None</code>. Does <b>not</b> support <code>ClusterFirstWithHostNet</code> because Tekton Pods cannot run with host networking.</td>
		</tr>
		<tr>
			<td><code>dnsConfig</code></td>
			<td>Specifies <a href=https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config>additional DNS configuration for the Pod</a>, such as name servers and search domains.</td>
		</tr>
		<tr>
			<td><code>enableServiceLinks</code></td>
			<td><b>Default:</b> <code>true</code>. Determines whether services in the Pod's namespace are exposed as environment variables to the Pod, similarly to Docker service links.</td>
		</tr>
		<tr>
			<td><code>priorityClassName</code></td>
			<td>Specifies the <a href=https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/>priority class</a> for the Pod. Allows you to selectively enable preemption on lower-priority workloads.</td>
		</tr>
		<tr>
			<td><code>schedulerName</code></td>
			<td>Specifies the <a href=https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/>scheduler</a> to use when dispatching the Pod. You can specify different schedulers for different types of workloads, such as <code>volcano.sh</code> for machine learning workloads.</td>
		</tr>
		<tr>
			<td><code>imagePullSecrets</code></td>
			<td>Specifies the <a href=https://kubernetes.io/docs/concepts/configuration/secret/>secret</a> to use when <a href=https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/>pulling a container image</a>.</td>
		</tr>
		<tr>
			<td><code>hostNetwork</code></td>
			<td><b>Default:</b> <code>false</code>. Determines whether to use the host network namespace.</td>
		</tr>
		<tr>
			<td><code>hostAliases</code></td>
			<td>Adds entries to a Pod's <code>/etc/hosts</code> to provide Pod-level overrides of hostnames. For more information, see <a href=https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/>Kubernetes' docs for this field</a>.</td>
		</tr>
		<tr>
			<td><code>topologySpreadConstraints</code></td>
			<td>Specifies how Pods are spread across your cluster among topology domains.</td>
		</tr>
	</tbody>
</table>

## Use `imagePullSecrets` to lookup entrypoint

If no command is configured in the `task` and `imagePullSecrets` is configured in the `podTemplate`, the Tekton controller will look up the entrypoint of the image using `imagePullSecrets`. The Tekton controller's service account is given access to secrets by default; see [this](https://github.com/tektoncd/pipeline/blob/main/config/200-clusterrole.yaml) for reference.
If the Tekton controller's service account is not granted access to secrets in a different namespace, you need to grant access via a `RoleBinding`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: creds-getter
  namespace: my-ns
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["creds"]
  verbs: ["get"]
```

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: creds-getter-binding
  namespace: my-ns
subjects:
- kind: ServiceAccount
  name: tekton-pipelines-controller
  namespace: tekton-pipelines
roleRef:
  kind: Role
  name: creds-getter
  apiGroup: rbac.authorization.k8s.io
```

# Affinity Assistant Pod templates

The Pod templates specified in `TaskRuns` and `PipelineRuns` also apply to the [affinity assistant Pods](./workspaces.md#specifying-workspace-order-in-a-pipeline-and-affinity-assistants) that are created when using Workspaces, but only for selected fields.

The supported fields for affinity assistant pods are `tolerations`, `nodeSelector`, `securityContext`, `priorityClassName`, and `imagePullSecrets` (see the table above for more details about these fields).

Similarly to the global Pod template, you have the option to define a global affinity assistant Pod template [in your Tekton config](./additional-configs.md#customizing-basic-execution-parameters) using the key `default-affinity-assistant-pod-template`. The merge strategy is the same as the one described above for the supported fields.

---

Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
<!--
---
linkTitle: "Bundles Resolver"
weight: 308
---
-->

# Bundles Resolver

## Resolver Type

This Resolver responds to type `bundles`.

## Parameters

| Param Name | Description                                                          | Example Value                                              |
|------------|----------------------------------------------------------------------|------------------------------------------------------------|
| `secret`   | The name of the secret to use when constructing registry credentials | `default`                                                  |
| `bundle`   | The bundle URL pointing at the image to fetch                        | `gcr.io/tekton-releases/catalog/upstream/golang-build:0.1` |
| `name`     | The name of the resource to pull out of the bundle                   | `golang-build`                                             |
| `kind`     | The resource kind to pull out of the bundle                          | `task`                                                     |

## Requirements

- A cluster running Tekton Pipeline v0.41.0 or later.
- The [built-in remote resolvers installed](./install.md#installing-and-configuring-remote-task-and-pipeline-resolution).
- The `enable-bundles-resolver` feature flag in the `resolvers-feature-flags` ConfigMap in the `tekton-pipelines-resolvers` namespace set to `true`.
- [Beta features](./additional-configs.md#beta-features) enabled.

## Configuration

This resolver uses a `ConfigMap` for its settings. See
[`../config/resolvers/bundleresolver-config.yaml`](../config/resolvers/bundleresolver-config.yaml)
for the name, namespace, and defaults that the resolver ships with.

### Options

| Option Name    | Description                                 | Example Values     |
|----------------|---------------------------------------------|--------------------|
| `default-kind` | The default layer kind in the bundle image. | `task`, `pipeline` |

## Usage

### Task Resolution

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: remote-task-reference
spec:
  taskRef:
    resolver: bundles
    params:
    - name: bundle
      value: docker.io/ptasci67/example-oci@sha256:053a6cb9f3711d4527dd0d37ac610e8727ec0288a898d5dfbd79b25bcaa29828
    - name: name
      value: hello-world
    - name: kind
      value: task
```

### Pipeline Resolution

Unfortunately, the Tekton Catalog does not publish pipelines at the moment. Here's an example PipelineRun that talks to a private registry, but it won't work unless you tweak the `bundle` field to point to a registry with a pipeline in it:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: bundle-demo
spec:
  pipelineRef:
    resolver: bundles
    params:
    - name: bundle
      value: 10.96.190.208:5000/simple/pipeline:latest
    - name: name
      value: hello-pipeline
    - name: kind
      value: pipeline
  params:
  - name: username
    value: "tekton pipelines"
```

## `ResolutionRequest` Status

The `ResolutionRequest.Status.RefSource` field captures the source the remote resource came from. It includes three subfields: `uri`, `digest`, and `entrypoint`.
- `uri`: The image repository URI.
- `digest`: A map from the algorithm portion to the hex-encoded portion of the image digest.
- `entrypoint`: The resource name in the OCI bundle image.

Example:
- TaskRun Resolution
```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: remote-task-reference
spec:
  taskRef:
    resolver: bundles
    params:
    - name: bundle
      value: gcr.io/tekton-releases/catalog/upstream/git-clone:0.7
    - name: name
      value: git-clone
    - name: kind
      value: task
  params:
    - name: url
      value: https://github.com/octocat/Hello-World
  workspaces:
    - name: output
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 500Mi
```

- `ResolutionRequest`
```yaml
apiVersion: resolution.tekton.dev/v1beta1
kind: ResolutionRequest
metadata:
  ...
  labels:
    resolution.tekton.dev/type: bundles
  name: bundles-21ad80ec13f3e8b73fed5880a64d4611
  ...
spec:
  params:
  - name: bundle
    value: gcr.io/tekton-releases/catalog/upstream/git-clone:0.7
  - name: name
    value: git-clone
  - name: kind
    value: task
status:
  annotations: ...
  ...
  data: xxx
  observedGeneration: 1
  refSource:
    digest:
      sha256: f51ca50f1c065acba8290ef14adec8461915ecc5f70a8eb26190c6e8e0ededaf
    entryPoint: git-clone
    uri: gcr.io/tekton-releases/catalog/upstream/git-clone
```

---

Except as otherwise noted, the content of this page is licensed under the
[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/),
and code samples are licensed under the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
Resolver     Resolver Type  This Resolver responds to type  bundles       Parameters    Param Name         Description                                                                     Example Value                                                                                                                                                                                                                     secret    The name of the secret to use when constructing registry credentials    default                                                        bundle            The bundle url pointing at the image to fetch                                    gcr io tekton releases catalog upstream golang build 0 1       name              The name of the resource to pull out of the bundle                               golang build                                                   kind              The resource kind to pull out of the bundle                                      task                                                            Requirements    A cluster running Tekton Pipeline v0 41 0 or later    The  built in remote resolvers installed    install md installing and configuring remote task and pipeline resolution     The  enable bundles resolver  feature flag in the  resolvers feature flags  ConfigMap   in the  tekton pipelines resolvers  namespace set to  true      Beta features    additional configs md beta features  enabled      Configuration  This resolver uses a  ConfigMap  for its settings  See      config resolvers bundleresolver config yaml      config resolvers bundleresolver config yaml  for the name  namespace and defaults that the resolver ships with       Options    Option Name                 Description                                                    Example Values                                                                                                                                  default kind               The default layer kind in the 
bundle image                      task    pipeline           Usage      Task Resolution     yaml apiVersion  tekton dev v1beta1 kind  TaskRun metadata    name  remote task reference spec    taskRef      resolver  bundles     params        name  bundle       value  docker io ptasci67 example oci sha256 053a6cb9f3711d4527dd0d37ac610e8727ec0288a898d5dfbd79b25bcaa29828       name  name       value  hello world       name  kind       value  task          Pipeline Resolution  Unfortunately the Tekton Catalog does not publish pipelines at the moment  Here s an example PipelineRun that talks to a private registry but won t work unless you tweak the  bundle  field to point to a registry with a pipeline in it      yaml apiVersion  tekton dev v1beta1 kind  PipelineRun metadata    name  bundle demo spec    pipelineRef      resolver  bundles     params        name  bundle       value  10 96 190 208 5000 simple pipeline latest       name  name       value  hello pipeline       name  kind       value  pipeline   params      name  username     value   tekton pipelines           ResolutionRequest  Status  ResolutionRequest Status RefSource  field captures the source where the remote resource came from  It includes the 3 subfields   url    digest  and  entrypoint      uri   The image repository URI    digest   The map of the algorithm portion    the hex encoded portion of the image digest     entrypoint   The resource name in the OCI bundle image   Example    TaskRun Resolution    yaml apiVersion  tekton dev v1beta1 kind  TaskRun metadata    name  remote task reference spec    taskRef      resolver  bundles     params        name  bundle       value  gcr io tekton releases catalog upstream git clone 0 7       name  name       value  git clone       name  kind       value  task   params        name  url       value  https   github com octocat Hello World   workspaces        name  output       volumeClaimTemplate          spec            accessModes                ReadWriteOnce        
   resources              requests                storage  500Mi         ResolutionRequest     yaml apiVersion  resolution tekton dev v1beta1 kind  ResolutionRequest metadata          labels      resolution tekton dev type  bundles   name  bundles 21ad80ec13f3e8b73fed5880a64d4611       spec    params      name  bundle     value  gcr io tekton releases catalog upstream git clone 0 7     name  name     value  git clone     name  kind     value  task status    annotations              data  xxx   observedGeneration  1   refSource      digest        sha256  f51ca50f1c065acba8290ef14adec8461915ecc5f70a8eb26190c6e8e0ededaf     entryPoint  git clone     uri  gcr io tekton releases catalog upstream git clone           Except as otherwise noted  the content of this page is licensed under the  Creative Commons Attribution 4 0 License  https   creativecommons org licenses by 4 0    and code samples are licensed under the  Apache 2 0 License  https   www apache org licenses LICENSE 2 0  "}
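The `refSource` status in the bundles-resolver record above pairs a repository `uri` with a `digest` map keyed by algorithm. As a minimal illustration (plain Python, not Tekton code; `parse_bundle_ref` is a hypothetical helper), splitting an OCI bundle reference into that shape might look like:

```python
def parse_bundle_ref(ref: str) -> tuple[str, dict[str, str]]:
    """Split an OCI bundle reference into (uri, digest map), mirroring
    the refSource fields of a ResolutionRequest status (sketch only)."""
    if "@" in ref:  # digest-pinned reference: repo@sha256:<hex>
        uri, _, digest = ref.partition("@")
        algo, _, hexval = digest.partition(":")
        return uri, {algo: hexval}
    # Tag reference: a ":" in the last path segment is a tag to drop;
    # a ":" elsewhere (e.g. registry:port/...) is part of the URI.
    head, sep, tail = ref.rpartition(":")
    if sep and "/" not in tail:
        return head, {}
    return ref, {}
```

For example, the `git-clone:0.7` bundle above yields the repository URI with an empty digest map, since a tag reference carries no digest.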
{"questions":"tekton Use resolver type Hub Resolver weight 311 Hub Resolver","answers":"<!--\n---\nlinkTitle: \"Hub Resolver\"\nweight: 311\n---\n-->\n\n# Hub Resolver\n\nUse resolver type `hub`.\n\n## Parameters\n\n| Param Name       | Description                                                                   | Example Value                                              |\n|------------------|-------------------------------------------------------------------------------|------------------------------------------------------------|\n| `catalog`        | The catalog from where to pull the resource (Optional)                        | Default:  `tekton-catalog-tasks` (for `task` kind);  `tekton-catalog-pipelines` (for `pipeline` kind)                                        |\n| `type`           | The type of Hub from where to pull the resource (Optional). Either `artifact` or `tekton` | Default:  `artifact`                                         |\n| `kind`           | Either `task` or `pipeline` (Optional)                                        | Default: `task`                                                     |\n| `name`           | The name of the task or pipeline to fetch from the hub                        | `golang-build`                                             |\n| `version`        | Version or a Constraint (see [below](#version-constraint) of a task or a pipeline to pull in from. Wrap the number in quotes!   | `\"0.5.0\"`, `\">= 0.5.0\"`                                                    |\n\nThe Catalogs in the Artifact Hub follows the semVer (i.e.` <major-version>.<minor-version>.0`) and the Catalogs in the Tekton Hub follows the simplified semVer (i.e. `<major-version>.<minor-version>`). Both full and simplified semantic versioning will be accepted by the `version` parameter. 
The Hub Resolver will map the version to the format expected by the target Hub `type`.\n\n## Requirements\n\n- A cluster running Tekton Pipeline v0.41.0 or later.\n- The [built-in remote resolvers installed](.\/install.md#installing-and-configuring-remote-task-and-pipeline-resolution).\n- The `enable-hub-resolver` feature flag in the `resolvers-feature-flags` ConfigMap in the\n  `tekton-pipelines-resolvers` namespace set to `true`.\n- [Beta features](.\/additional-configs.md#beta-features) enabled.\n\n## Configuration\n\nThis resolver uses a `ConfigMap` for its settings. See\n[`..\/config\/resolvers\/hubresolver-config.yaml`](..\/config\/resolvers\/hubresolver-config.yaml)\nfor the name, namespace and defaults that the resolver ships with.\n\n### Options\n\n| Option Name | Description | Example Values |\n|-------------|-------------|----------------|\n| `default-tekton-hub-catalog` | The default Tekton Hub catalog from where to pull the resource. | `Tekton` |\n| `default-artifact-hub-task-catalog` | The default Artifact Hub catalog from where to pull the resource for the `task` kind. | `tekton-catalog-tasks` |\n| `default-artifact-hub-pipeline-catalog` | The default Artifact Hub catalog from where to pull the resource for the `pipeline` kind. | `tekton-catalog-pipelines` |\n| `default-kind` | The default object kind for references. | `task`, `pipeline` |\n| `default-type` | The default hub from where to pull the resource. | `artifact`, `tekton` |\n\n\n### Configuring the Hub API endpoint\n\nThe Hub Resolver supports resolving resources from the [Artifact Hub](https:\/\/artifacthub.io\/) and the [Tekton Hub](https:\/\/hub.tekton.dev\/),\nwhich can be configured by setting the `type` field of the resolver.
\n\n*(Please note that the [Tekton Hub](https:\/\/hub.tekton.dev\/) will be deprecated after [migration to the Artifact Hub](https:\/\/github.com\/tektoncd\/hub\/issues\/667) is done.)*\n\nWhen setting the `type` field to `artifact`, the resolver will hit the public Artifact Hub API at https:\/\/artifacthub.io\/ by default,\nbut you can configure your own (for example to use a private hub\ninstance) by setting the `ARTIFACT_HUB_API` environment variable in\n[`..\/config\/resolvers\/resolvers-deployment.yaml`](..\/config\/resolvers\/resolvers-deployment.yaml). Example:\n\n```yaml\nenv:\n- name: ARTIFACT_HUB_API\n  value: \"https:\/\/artifacthub.io\/\"\n```\n\nWhen setting the `type` field to `tekton`, the resolver will hit the public\nTekton catalog API at https:\/\/api.hub.tekton.dev by default, but you can configure\nyour own instance of the Tekton Hub by setting the `TEKTON_HUB_API` environment\nvariable in\n[`..\/config\/resolvers\/resolvers-deployment.yaml`](..\/config\/resolvers\/resolvers-deployment.yaml).
Example:\n\n```yaml\nenv:\n- name: TEKTON_HUB_API\n  value: \"https:\/\/api.private.hub.instance.dev\"\n```\n\nThe Tekton Hub deployment guide can be found [here](https:\/\/github.com\/tektoncd\/hub\/blob\/main\/docs\/DEPLOYMENT.md).\n\n## Usage\n\n### Task Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: remote-task-reference\nspec:\n  taskRef:\n    resolver: hub\n    params:\n    - name: catalog # optional\n      value: tekton-catalog-tasks\n    - name: type # optional\n      value: artifact\n    - name: kind\n      value: task\n    - name: name\n      value: git-clone\n    - name: version\n      value: \"0.6\"\n```\n\n### Pipeline Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: hub-demo\nspec:\n  pipelineRef:\n    resolver: hub\n    params:\n    - name: catalog # optional\n      value: tekton-catalog-pipelines\n    - name: type # optional\n      value: artifact\n    - name: kind\n      value: pipeline\n    - name: name\n      value: buildpacks\n    - name: version\n      value: \"0.1\"\n  # Note: the buildpacks pipeline requires parameters.\n  # Resolution of the pipeline will succeed but the PipelineRun\n  # overall will not succeed without those parameters.\n```\n\n### Version constraint\n\nInstead of a version you can specify a constraint to choose from.
The constraint is a string as documented in the [go-version](https:\/\/github.com\/hashicorp\/go-version) library.\n\nSome examples:\n\n```yaml\nparams:\n  - name: name\n    value: git-clone\n  - name: version\n    value: \">=0.7.0\"\n```\n\nThis will select the latest git-clone task at version `0.7.0` or above.\n\n```yaml\nparams:\n  - name: name\n    value: git-clone\n  - name: version\n    value: \">=0.7.0, < 2.0.0\"\n```\n\nThis will select the **latest** git-clone task at version `0.7.0` or above and\nbelow version `2.0.0`, so if the latest available version is `0.9.0` it will\nbe selected.\n\nOther operators are available for comparisons; see the\n[go-version](https:\/\/github.com\/hashicorp\/go-version\/blob\/644291d14038339745c2d883a1a114488e30b702\/constraint.go#L40C2-L48)\nsource code.\n\n---\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).","site":"tekton"}
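The version-constraint selection described in the hub-resolver record above (go-version strings such as `">=0.7.0, < 2.0.0"`) can be sketched in a few lines. The following is a minimal Python approximation, not the actual go-version implementation, that picks the latest version satisfying comma-separated clauses:

```python
import operator

def _parse(v: str) -> tuple:
    """Parse '0.7' or '0.7.0' into a comparable tuple, padded to 3 parts."""
    parts = [int(p) for p in v.strip().split(".")]
    return tuple(parts + [0] * (3 - len(parts)))

def _satisfies(version: str, constraint: str) -> bool:
    """Check a comma-separated constraint string like '>=0.7.0, < 2.0.0'."""
    ops = {">=": operator.ge, "<=": operator.le, ">": operator.gt,
           "<": operator.lt, "=": operator.eq}
    for clause in constraint.split(","):
        clause = clause.strip()
        for sym in (">=", "<=", ">", "<", "="):
            if clause.startswith(sym):
                if not ops[sym](_parse(version), _parse(clause[len(sym):])):
                    return False
                break
        else:  # bare version means exact match
            if _parse(version) != _parse(clause):
                return False
    return True

def latest_matching(versions, constraint):
    """Return the highest version satisfying the constraint, mirroring how
    the resolver picks the latest task within a range (None if no match)."""
    matching = [v for v in versions if _satisfies(v, constraint)]
    return max(matching, key=_parse) if matching else None
```

With versions `["0.5.0", "0.7.0", "0.9.0", "2.1.0"]` and constraint `">=0.7.0, < 2.0.0"`, this selects `0.9.0`, matching the behaviour described for git-clone above.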
{"questions":"tekton Tasks Tasks weight 201","answers":"<!--\n---\nlinkTitle: \"Tasks\"\nweight: 201\n---\n-->\n\n# Tasks\n\n- [Overview](#overview)\n- [Configuring a `Task`](#configuring-a-task)\n  - [`Task` vs. `ClusterTask`](#task-vs-clustertask)\n  - [Defining `Steps`](#defining-steps)\n    - [Reserved directories](#reserved-directories)\n    - [Running scripts within `Steps`](#running-scripts-within-steps)\n      - [Windows scripts](#windows-scripts)\n    - [Specifying a timeout](#specifying-a-timeout)\n    - [Specifying `onError` for a `step`](#specifying-onerror-for-a-step)\n    - [Accessing Step's `exitCode` in subsequent `Steps`](#accessing-steps-exitcode-in-subsequent-steps)\n    - [Produce a task result with `onError`](#produce-a-task-result-with-onerror)\n    - [Breakpoint on failure with `onError`](#breakpoint-on-failure-with-onerror)\n    - [Redirecting step output streams with `stdoutConfig` and `stderrConfig`](#redirecting-step-output-streams-with-stdoutConfig-and-stderrConfig)\n  - [Specifying `Parameters`](#specifying-parameters)\n  - [Specifying `Workspaces`](#specifying-workspaces)\n  - [Emitting `Results`](#emitting-results)\n    - [Larger `Results` using sidecar logs](#larger-results-using-sidecar-logs)\n  - [Specifying `Volumes`](#specifying-volumes)\n  - [Specifying a `Step` template](#specifying-a-step-template)\n  - [Specifying `Sidecars`](#specifying-sidecars)\n  - [Specifying a `DisplayName`](#specifying-a-display-name)\n  - [Adding a description](#adding-a-description)\n  - [Using variable substitution](#using-variable-substitution)\n    - [Substituting parameters and resources](#substituting-parameters-and-resources)\n    - [Substituting `Array` parameters](#substituting-array-parameters)\n    - [Substituting `Workspace` paths](#substituting-workspace-paths)\n    - [Substituting `Volume` names and types](#substituting-volume-names-and-types)\n    - [Substituting in `Script` blocks](#substituting-in-script-blocks)\n- [Code 
examples](#code-examples)\n  - [Building and pushing a Docker image](#building-and-pushing-a-docker-image)\n    - [Mounting multiple `Volumes`](#mounting-multiple-volumes)\n    - [Mounting a `ConfigMap` as a `Volume` source](#mounting-a-configmap-as-a-volume-source)\n    - [Using a `Secret` as an environment source](#using-a-secret-as-an-environment-source)\n    - [Using a `Sidecar` in a `Task`](#using-a-sidecar-in-a-task)\n- [Debugging](#debugging)\n  - [Inspecting the file structure](#inspecting-the-file-structure)\n  - [Inspecting the `Pod`](#inspecting-the-pod)\n  - [Running Step Containers as a Non Root User](#running-step-containers-as-a-non-root-user)\n- [`Task` Authoring Recommendations](#task-authoring-recommendations)\n\n## Overview\n\nA `Task` is a collection of `Steps` that you\ndefine and arrange in a specific order of execution as part of your continuous integration flow.\nA `Task` executes as a Pod on your Kubernetes cluster. A `Task` is available within a specific\nnamespace, while a `ClusterTask` is available across the entire cluster.\n\nA `Task` declaration includes the following elements:\n\n- [Parameters](#specifying-parameters)\n- [Steps](#defining-steps)\n- [Workspaces](#specifying-workspaces)\n- [Results](#emitting-results)\n\n## Configuring a `Task`\n\nA `Task` definition supports the following fields:\n\n- Required:\n  - [`apiVersion`][kubernetes-overview] - Specifies the API version. For example,\n    `tekton.dev\/v1beta1`.\n  - [`kind`][kubernetes-overview] - Identifies this resource object as a `Task` object.\n  - [`metadata`][kubernetes-overview] - Specifies metadata that uniquely identifies the\n    `Task` resource object. 
For example, a `name`.\n  - [`spec`][kubernetes-overview] - Specifies the configuration information for\n    this `Task` resource object.\n  - [`steps`](#defining-steps) - Specifies one or more container images to run in the `Task`.\n- Optional:\n  - [`description`](#adding-a-description) - An informative description of the `Task`.\n  - [`params`](#specifying-parameters) - Specifies execution parameters for the `Task`.\n  - [`workspaces`](#specifying-workspaces) - Specifies paths to volumes required by the `Task`.\n  - [`results`](#emitting-results) - Specifies the names under which `Tasks` write execution results.\n  - [`volumes`](#specifying-volumes) - Specifies one or more volumes that will be available to the `Steps` in the `Task`.\n  - [`stepTemplate`](#specifying-a-step-template) - Specifies a `Container` step definition to use as the basis for all `Steps` in the `Task`.\n  - [`sidecars`](#specifying-sidecars) - Specifies `Sidecar` containers to run alongside the `Steps` in the `Task`.\n\n[kubernetes-overview]:\n  https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/kubernetes-objects\/#required-fields\n\nThe non-functional example below demonstrates the use of most of the above-mentioned fields:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: example-task-name\nspec:\n  params:\n    - name: pathToDockerFile\n      type: string\n      description: The path to the dockerfile to build\n      default: \/workspace\/workspace\/Dockerfile\n    - name: builtImageUrl\n      type: string\n      description: location to push the built image to\n  steps:\n    - name: ubuntu-example\n      image: ubuntu\n      args: [\"ubuntu-build-example\", \"SECRETS-example.md\"]\n    - image: gcr.io\/example-builders\/build-example\n      command: [\"echo\"]\n      args: [\"$(params.pathToDockerFile)\"]\n    - name: dockerfile-pushexample\n      image: gcr.io\/example-builders\/push-example\n      args: [\"push\", 
\"$(params.builtImageUrl)\"]\n      volumeMounts:\n        - name: docker-socket-example\n          mountPath: \/var\/run\/docker.sock\n  volumes:\n    - name: example-volume\n      emptyDir: {}\n```\n\n### `Task` vs. `ClusterTask`\n\n**Note: ClusterTasks are deprecated.** Please use the [cluster resolver](.\/cluster-resolver.md) instead.\n\nA `ClusterTask` is a `Task` scoped to the entire cluster instead of a single namespace.\nA `ClusterTask` behaves identically to a `Task` and therefore everything in this document\napplies to both.\n\n**Note:** When using a `ClusterTask`, you must explicitly set the `kind` sub-field in the `taskRef` field to `ClusterTask`.\n          If not specified, the `kind` sub-field defaults to `Task.`\n\nBelow is an example of a Pipeline declaration that uses a `ClusterTask`:\n**Note**:\n- There is no `v1` API specification for `ClusterTask` but a `v1beta1 clustertask` can still be referenced in a `v1 pipeline`.\n- The cluster resolver syntax below can be used to reference any task, not just a clustertask.\n\n\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: Pipeline\nmetadata:\n  name: demo-pipeline\nspec:\n  tasks:\n    - name: build-skaffold-web\n      taskRef:\n        resolver: cluster\n        params:\n        - name: kind\n          value: task\n        - name: name\n          value: build-push\n        - name: namespace\n          value: default\n```\n\n\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: demo-pipeline\n  namespace: default\nspec:\n  tasks:\n    - name: build-skaffold-web\n      taskRef:\n        name: build-push\n        kind: ClusterTask\n      params: ....\n```\n\n\n\n### Defining `Steps`\n\nA `Step` is a reference to a container image that executes a specific tool on a\nspecific input and produces a specific output. To add `Steps` to a `Task` you\ndefine a `steps` field (required) containing a list of desired `Steps`. 
The order in\nwhich the `Steps` appear in this list is the order in which they will execute.\n\nThe following requirements apply to each container image referenced in a `steps` field:\n\n- The container image must abide by the [container contract](.\/container-contract.md).\n- Each container image runs to completion or until the first failure occurs.\n- The CPU, memory, and ephemeral storage resource requests set on `Step`s\n  will be adjusted to comply with any [`LimitRange`](https:\/\/kubernetes.io\/docs\/concepts\/policy\/limit-range\/)s\n  present in the `Namespace`. In addition, Kubernetes determines a pod's effective resource\n  requests and limits by summing the requests and limits for all its containers, even\n  though Tekton runs `Steps` sequentially.\n  For more detail, see [Compute Resources in Tekton](.\/compute-resources.md).\n\n**Note:** If the image referenced in the `step` field is from a private registry, `TaskRuns` or `PipelineRuns` that consume the task\n          must provide the `imagePullSecrets` in a [podTemplate](.\/podtemplates.md).\n\nBelow is an example of setting the resource requests and limits for a step:\n\n\n\n\n```yaml\nspec:\n  steps:\n    - name: step-with-limits\n      computeResources:\n        requests:\n          memory: 1Gi\n          cpu: 500m\n        limits:\n          memory: 2Gi\n          cpu: 800m\n```\n\n\n\n```yaml\nspec:\n  steps:\n    - name: step-with-limits\n      resources:\n        requests:\n          memory: 1Gi\n          cpu: 500m\n        limits:\n          memory: 2Gi\n          cpu: 800m\n```\n\n\n\n#### Reserved directories\n\nThere are several directories that all `Tasks` run by Tekton will treat as special:\n\n* `\/workspace` - This directory is where [resources](#specifying-resources) and [workspaces](#specifying-workspaces)\n  are mounted.
Paths to these are available to `Task` authors via [variable substitution](variables.md)\n* `\/tekton` - This directory is used for Tekton specific functionality:\n    * `\/tekton\/results` is where [results](#emitting-results) are written to.\n      The path is available to `Task` authors via [`$(results.name.path)`](variables.md)\n    * There are other subfolders which are [implementation details of Tekton](developers\/README.md#reserved-directories)\n      and **users should not rely on their specific behavior as it may change in the future**\n\n#### Running scripts within `Steps`\n\nA step can specify a `script` field, which contains the body of a script. That script is\ninvoked as if it were stored inside the container image, and any `args` are passed directly\nto it.\n\n**Note:** If the `script` field is present, the step cannot also contain a `command` field.\n\nScripts that do not start with a [shebang](https:\/\/en.wikipedia.org\/wiki\/Shebang_(Unix))\nline will have the following default preamble prepended:\n\n```bash\n#!\/bin\/sh\nset -e\n```\n\nYou can override this default preamble by prepending a shebang that specifies the desired parser.\nThis parser must be present within that `Step's` container image.\n\nThe example below executes a Bash script:\n\n```yaml\nsteps:\n  - image: ubuntu # contains bash\n    script: |\n      #!\/usr\/bin\/env bash\n      echo \"Hello from Bash!\"\n```\n\nThe example below executes a Python script:\n\n```yaml\nsteps:\n  - image: python # contains python\n    script: |\n      #!\/usr\/bin\/env python3\n      print(\"Hello from Python!\")\n```\n\nThe example below executes a Node script:\n\n```yaml\nsteps:\n  - image: node # contains node\n    script: |\n      #!\/usr\/bin\/env node\n      console.log(\"Hello from Node!\")\n```\n\nYou can execute scripts directly in the workspace:\n\n```yaml\nsteps:\n  - image: ubuntu\n    script: |\n      #!\/usr\/bin\/env bash\n      \/workspace\/my-script.sh  # provided by an input 
resource\n```\n\nYou can also execute scripts within the container image:\n\n```yaml\nsteps:\n  - image: my-image # contains \/bin\/my-binary\n    script: |\n      #!\/usr\/bin\/env bash\n      \/bin\/my-binary\n```\n\n##### Windows scripts\n\nScripts in tasks that will eventually run on Windows nodes need a custom shebang line, so that Tekton knows how to run the script. The format of the shebang line is:\n\n`#!win <interpreter command> <args>`\n\nUnlike Linux, we need to specify how to interpret the script file which is generated by Tekton. The example below shows how to execute a PowerShell script:\n\n```yaml\nsteps:\n  - image: mcr.microsoft.com\/windows\/servercore:1809\n    script: |\n      #!win powershell.exe -File\n      echo 'Hello from PowerShell'\n```\n\nMicrosoft provides `powershell` images, which contain PowerShell Core (which is slightly different from the PowerShell found in standard Windows images). The example below shows how to use these images:\n\n```yaml\nsteps:\n  - image: mcr.microsoft.com\/powershell:nanoserver\n    script: |\n      #!win pwsh.exe -File\n      echo 'Hello from PowerShell Core'\n```\n\nAs can be seen, the command is different. The Windows shebang can be used for any interpreter, as long as it exists in the image and can interpret commands from a file. The example below executes a Python script:\n\n```yaml\nsteps:\n  - image: python\n    script: |\n      #!win python\n      print(\"Hello from Python!\")\n```\n\nNote that other than the `#!win` shebang the example is identical to the earlier Linux example.\n\nFinally, if no interpreter is specified on the `#!win` line then the script will be treated as a Windows `.cmd` file and executed.
The example below shows this:\n```yaml\nsteps:\n  - image: mcr.microsoft.com\/powershell:lts-nanoserver-1809\n    script: |\n      #!win\n      echo Hello from the default cmd file\n```\n\n#### Specifying a timeout\n\nA `Step` can specify a `timeout` field.\nIf the `Step` execution time exceeds the specified timeout, the `Step` kills\nits running process and any subsequent `Steps` in the `TaskRun` will not be\nexecuted. The `TaskRun` is placed into a `Failed` condition. An accompanying log\ndescribing which `Step` timed out is written as the `Failed` condition's message.\n\nThe timeout specification follows the duration format as specified in the [Go time package](https:\/\/golang.org\/pkg\/time\/#ParseDuration) (e.g. 1s or 1ms).\n\nThe example `Step` below is supposed to sleep for 60 seconds but will be canceled by the specified 5-second timeout.\n```yaml\nsteps:\n  - name: sleep-then-timeout\n    image: ubuntu\n    script: |\n      #!\/usr\/bin\/env bash\n      echo \"I am supposed to sleep for 60 seconds!\"\n      sleep 60\n    timeout: 5s\n```\n\n#### Specifying `onError` for a `step`\n\nWhen a `step` in a `task` results in a failure, the rest of the steps in the `task` are skipped and the `taskRun` is\ndeclared a failure. If you would like to ignore such step errors and continue executing the rest of the steps in\nthe task, you can specify `onError` for such a `step`.\n\n`onError` can be set to either `continue` or `stopAndFail` as part of the step definition. If `onError` is\nset to `continue`, the entrypoint sets the original failed exit code of the [script](#running-scripts-within-steps)\nin the container terminated state. 
A `step` with `onError` set to `continue` does not fail the `taskRun` and continues\nexecuting the rest of the steps in a task.\n\nTo ignore a step error, set `onError` to `continue`:\n\n```yaml\nsteps:\n  - image: docker.io\/library\/golang:latest\n    name: ignore-unit-test-failure\n    onError: continue\n    script: |\n      go test .\n```\n\nThe original failed exit code of the [script](#running-scripts-within-steps) is available in the terminated state of\nthe container.\n\n```\nkubectl get tr taskrun-unit-test-t6qcl -o json | jq .status\n{\n  \"conditions\": [\n    {\n      \"message\": \"All Steps have completed executing\",\n      \"reason\": \"Succeeded\",\n      \"status\": \"True\",\n      \"type\": \"Succeeded\"\n    }\n  ],\n  \"steps\": [\n    {\n      \"container\": \"step-ignore-unit-test-failure\",\n      \"imageID\": \"...\",\n      \"name\": \"ignore-unit-test-failure\",\n      \"terminated\": {\n        \"containerID\": \"...\",\n        \"exitCode\": 1,\n        \"reason\": \"Completed\",\n      }\n    },\n  ],\n```\n\nFor an end-to-end example, see [the taskRun ignoring a step error](..\/examples\/v1\/taskruns\/ignore-step-error.yaml)\nand [the pipelineRun ignoring a step error](..\/examples\/v1\/pipelineruns\/ignore-step-error.yaml).\n\n#### Accessing Step's `exitCode` in subsequent `Steps`\n\nA step can access the exit code of any previous step by reading the file pointed to by the `exitCode` path variable:\n\n```shell\ncat $(steps.step-<step-name>.exitCode.path)\n```\n\nThe `exitCode` of a step without any name can be referenced using:\n\n```shell\ncat $(steps.step-unnamed-<step-index>.exitCode.path)\n```\n\n#### Produce a task result with `onError`\n\nWhen a step is set to ignore the step error and if that step is able to initialize a result file before failing,\nthat result is made available to its consumer task.\n\n```yaml\nsteps:\n  - name: ignore-failure-and-produce-a-result\n    onError: continue\n    image: busybox\n    script: |\n   
   echo -n 123 | tee $(results.result1.path)\n      exit 1\n```\n\nThe task consuming the result using the result reference `$(tasks.task1.results.result1)` in a `pipeline` will be able\nto access the result and run with the resolved value.\n\nNow, a step can fail before initializing a result, and the `pipeline` can ignore such a step failure. However, the `pipeline`\nwill fail with `InvalidTaskResultReference` if it has a task consuming that task result. For example, any task\nconsuming `$(tasks.task1.results.result2)` will cause the pipeline to fail.\n\n```yaml\nsteps:\n  - name: ignore-failure-and-produce-a-result\n    onError: continue\n    image: busybox\n    script: |\n      echo -n 123 | tee $(results.result1.path)\n      exit 1\n      echo -n 456 | tee $(results.result2.path)\n```\n\n#### Breakpoint on failure with `onError`\n\n[Debugging](taskruns.md#debugging-a-taskrun) a taskRun is supported so that you can debug a container, and comes with a set of\n[tools](taskruns.md#debug-environment) to declare the step as a failure or a success. Specifying\n[breakpoint](taskruns.md#breakpoint-on-failure) at the `taskRun` level overrides ignoring a step error using `onError`.\n\n#### Redirecting step output streams with `stdoutConfig` and `stderrConfig`\n\nThis is an alpha feature. The `enable-api-fields` feature flag [must be set to `\"alpha\"`](.\/install.md)\nfor Redirecting Step Output Streams to function.\n\nThis feature defines optional `Step` fields `stdoutConfig` and `stderrConfig` which can be used to redirect the output streams `stdout` and `stderr` respectively:\n\n```yaml\n- name: ...\n  ...\n  stdoutConfig:\n    path: ...\n  stderrConfig:\n    path: ...\n```\n\nOnce `stdoutConfig.path` or `stderrConfig.path` is specified, the corresponding output stream will be duplicated to both the given file and the standard output stream of the container, so users can still view the output through the Pod log API. 
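\n\nFor instance, a minimal sketch (the step name, image, and path here are illustrative assumptions, not a prescribed layout):\n\n```yaml\nsteps:\n  - name: list-workspace   # hypothetical step name\n    image: busybox\n    script: |\n      ls \/workspace\n    stdoutConfig:\n      path: \/data\/ls-output  # stdout is duplicated to this file and still visible via the Pod log API\n```\n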
If both `stdoutConfig.path` and `stderrConfig.path` are set to the same value, outputs from both streams will be interleaved in the same file, but there will be no ordering guarantee on the data. If multiple `Steps`' `stdoutConfig.path` fields are set to the same value, the file content will be overwritten by the last outputting step.\n\nVariable substitution will be applied to the new fields, so one could specify `$(results.<name>.path)` in the `stdoutConfig.path` or `stderrConfig.path` field to extract the stdout of a step into a Task result.\n\n##### Example Usage\n\nRedirecting stdout of `boskosctl` to `jq` and publishing the resulting `project-id` as a Task result:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: boskos-acquire\nspec:\n  results:\n  - name: project-id\n  steps:\n  - name: boskosctl\n    image: gcr.io\/k8s-staging-boskos\/boskosctl\n    args:\n    - acquire\n    - --server-url=http:\/\/boskos.test-pods.svc.cluster.local\n    - --owner-name=christie-test-boskos\n    - --type=gke-project\n    - --state=free\n    - --target-state=busy\n    stdoutConfig:\n      path: \/data\/boskosctl-stdout\n    volumeMounts:\n    - name: data\n      mountPath: \/data\n  - name: parse-project-id\n    image: imega\/jq\n    args:\n    - -r\n    - .name\n    - \/data\/boskosctl-stdout\n    stdoutConfig:\n      path: $(results.project-id.path)\n    volumeMounts:\n    - name: data\n      mountPath: \/data\n  volumes:\n  - name: data\n    emptyDir: {}\n```\n\n> NOTE:\n>\n> - If the intent is to share output between `Step`s via a file, the user must ensure that the paths provided are shared between the `Step`s (e.g. via `volumes`).\n> - There is currently a limit on the overall size of the `Task` results. If the stdout\/stderr of a step is set to the path of a `Task` result and the step prints too much data, the result manifest would become too large. 
Currently the entrypoint binary will fail if that happens.\n> - If the stdout\/stderr of a `Step` is set to the path of a `Task` result, e.g. `$(results.empty.path)`, but that result is not defined for the `Task`, the `Step` will run but the output will be captured in a file named `$(results.empty.path)` in the current working directory. Similarly, any substitution that is not valid, e.g. `$(some.invalid.path)\/out.txt`, will be left as-is and will result in a file path `$(some.invalid.path)\/out.txt` relative to the current working directory.\n\n### Specifying `Parameters`\n\nYou can specify parameters, such as compilation flags or artifact names, that you want to supply to the `Task` at execution time.\n`Parameters` are passed to the `Task` from its corresponding `TaskRun`.\n\n#### Parameter name\nParameter name format:\n- Must only contain alphanumeric characters, hyphens (`-`), underscores (`_`), and dots (`.`). However, an `object` parameter's name and its key names can't contain dots (`.`). See the reasons in the third item added in this [PR](https:\/\/github.com\/tektoncd\/community\/pull\/711).\n- Must begin with a letter or an underscore (`_`).\n\nFor example, `foo.Is-Bar_` is a valid parameter name for the string or array type, but is invalid for an object parameter because it contains dots. On the other hand, `barIsBa$` and `0banana` are invalid for all types.\n\n> NOTE:\n> 1. Parameter names are **case insensitive**. For example, `APPLE` and `apple` will be treated as equal. If both appear in the same TaskSpec's params, the spec will be rejected as invalid.\n> 2. If a parameter name contains dots (.), it must be referenced by using the [bracket notation](#substituting-parameters-and-resources) with either single or double quotes, i.e. `$(params['foo.bar'])`, `$(params[\"foo.bar\"])`. 
See the following example for more information.\n\n#### Parameter type\nEach declared parameter has a `type` field, which can be set to `string`, `array` or `object`.\n\n##### `object` type\n\nThe `object` type is useful in cases where users want to group related parameters. For example, an object parameter called `gitrepo` can contain both the `url` and the `commit` to group related information:\n\n```yaml\nspec:\n  params:\n    - name: gitrepo\n      type: object\n      properties:\n        url:\n          type: string\n        commit:\n          type: string\n```\n\nRefer to the [TaskRun example](..\/examples\/v1\/taskruns\/object-param-result.yaml) and the [PipelineRun example](..\/examples\/v1\/pipelineruns\/pipeline-object-param-and-result.yaml) in which `object` parameters are demonstrated.\n\n  > NOTE:\n  > - An `object` param must specify the `properties` section to define the schema, i.e. what keys are available for this object param. See how to define the `properties` section in the following example and [TEP-0075](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0075-object-param-and-result-types.md#defaulting-to-string-types-for-values).\n  > - When providing values for an `object` param, one may provide values for just a subset of keys in the spec's `default`, and provide values for the rest of the keys at runtime ([example](..\/examples\/v1\/taskruns\/object-param-result.yaml)).\n  > - When using an object in variable replacement, users can only access an individual key (\"child\" member) of the object by its name, i.e. `$(params.gitrepo.url)`. Using an entire object as a value is only allowed when the value is also an object, like [this example](..\/examples\/v1\/pipelineruns\/pipeline-object-param-and-result.yaml). 
See more details about using object params in [TEP-0075](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0075-object-param-and-result-types.md#using-objects-in-variable-replacement).\n\n##### `array` type\n\nThe `array` type is useful in cases where the number of compilation flags being supplied to a task varies throughout the `Task's` execution.\nAn `array` param can be defined by setting `type` to `array`. Also, `array` params only support `string` arrays, i.e.\neach array element has to be of type `string`.\n\n```yaml\nspec:\n  params:\n    - name: flags\n      type: array\n```\n\n##### `string` type\n\nIf not specified, the `type` field defaults to `string`. When the actual parameter value is supplied, its parsed type is validated against the `type` field.\n\nThe following example illustrates the use of `Parameters` in a `Task`. The `Task` declares 3 input parameters named `gitrepo` (of type `object`), `flags`\n(of type `array`) and `someURL` (of type `string`). These parameters are used in the `steps.args` list:\n  - For an `object` parameter, you can only use its individual members (aka keys).\n  - You can expand parameters of type `array` inside an existing array using the star operator. 
In this example, `flags` contains the star operator: `$(params.flags[*])`.\n\n**Note:** Input parameter values can be used as variables throughout the `Task` by using [variable substitution](#using-variable-substitution).\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: task-with-parameters\nspec:\n  params:\n    - name: gitrepo\n      type: object\n      properties:\n        url:\n          type: string\n        commit:\n          type: string\n    - name: flags\n      type: array\n    - name: someURL\n      type: string\n    - name: foo.bar\n      description: \"the name contains dot character\"\n      default: \"test\"\n  steps:\n    - name: do-the-clone\n      image: some-git-image\n      args: [\n        \"-url=$(params.gitrepo.url)\",\n        \"-revision=$(params.gitrepo.commit)\"\n      ]\n    - name: build\n      image: my-builder\n      args: [\n        \"build\",\n        \"$(params.flags[*])\",\n        # It would be equivalent to use $(params[\"someURL\"]) here,\n        # which is necessary when the parameter name contains '.'\n        # characters (e.g. `$(params[\"some.other.URL\"])`). 
See the example in step \"echo-param\"\n        'url=$(params.someURL)',\n      ]\n    - name: echo-param\n      image: bash\n      args: [\n        \"echo\",\n        \"$(params['foo.bar'])\",\n      ]\n```\n\nThe following `TaskRun` supplies values for the parameters `gitrepo`, `flags` and `someURL`:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: run-with-parameters\nspec:\n  taskRef:\n    name: task-with-parameters\n  params:\n    - name: gitrepo\n      value:\n        url: \"abc.com\"\n        commit: \"c12b72\"\n    - name: flags\n      value:\n        - \"--set\"\n        - \"arg1=foo\"\n        - \"--randomflag\"\n        - \"--someotherflag\"\n    - name: someURL\n      value: \"http:\/\/google.com\"\n```\n\n#### Default value\nParameter declarations (within Tasks and Pipelines) can include default values which will be used if the parameter is\nnot specified, for example to specify defaults for both string params and array params\n([full example](..\/examples\/v1\/taskruns\/array-default.yaml)):\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: task-with-array-default\nspec:\n  params:\n    - name: flags\n      type: array\n      default:\n        - \"--set\"\n        - \"arg1=foo\"\n        - \"--randomflag\"\n        - \"--someotherflag\"\n```\n\n#### Param enum\n> :seedling: **`enum` is an [alpha](additional-configs.md#alpha-features) feature.** The `enable-param-enum` feature flag must be set to `\"true\"` to enable this feature.\n\nParameter declarations can include an `enum`, which is a predefined set of valid values that can be accepted by the `Param`. If a `Param` has both an `enum` and a default value, the default value must be in the `enum` set. 
For example, the valid\/allowed values for the `Param` \"message\" are bounded to `v1`, `v2` and `v3`:\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: Task\nmetadata:\n  name: param-enum-demo\nspec:\n  params:\n  - name: message\n    type: string\n    enum: [\"v1\", \"v2\", \"v3\"]\n    default: \"v1\"\n  steps:\n  - name: build\n    image: bash:latest\n    script: |\n      echo \"$(params.message)\"\n```\n\nIf the `Param` value passed in by `TaskRuns` is **NOT** in the predefined `enum` list, the `TaskRuns` will fail with reason `InvalidParamValue`.\n\nSee usage in this [example](..\/examples\/v1\/taskruns\/alpha\/param-enum.yaml).\n\n### Specifying `Workspaces`\n\n[`Workspaces`](workspaces.md#using-workspaces-in-tasks) allow you to specify\none or more volumes that your `Task` requires during execution. It is recommended that `Tasks` use **at most**\none writeable `Workspace`. For example:\n\n```yaml\nspec:\n  steps:\n    - name: write-message\n      image: ubuntu\n      script: |\n        #!\/usr\/bin\/env bash\n        set -xe\n        echo hello! > $(workspaces.messages.path)\/message\n  workspaces:\n    - name: messages\n      description: The folder where we write the message to\n      mountPath: \/custom\/path\/relative\/to\/root\n```\n\nFor more information, see [Using `Workspaces` in `Tasks`](workspaces.md#using-workspaces-in-tasks)\nand the [`Workspaces` in a `TaskRun`](..\/examples\/v1\/taskruns\/workspace.yaml) example YAML file.\n\n### Propagated `Workspaces`\n\nWorkspaces can be propagated to embedded task specs, not referenced Tasks. For more information, see [Propagated Workspaces](taskruns.md#propagated-workspaces).\n\n### Emitting `Results`\n\nA Task is able to emit string results that can be viewed by users and passed to other Tasks in a Pipeline. These\nresults have a wide variety of potential uses. 
To highlight just a few examples from the Tekton Catalog: the\n[`git-clone` Task](https:\/\/github.com\/tektoncd\/catalog\/blob\/main\/task\/git-clone\/0.1\/git-clone.yaml) emits a\ncloned commit SHA as a result, the [`generate-build-id` Task](https:\/\/github.com\/tektoncd\/catalog\/blob\/main\/task\/generate-build-id\/0.1\/generate-build-id.yaml)\nemits a randomized ID as a result, and the [`kaniko` Task](https:\/\/github.com\/tektoncd\/catalog\/tree\/main\/task\/kaniko\/0.1)\nemits a container image digest as a result. In each case these results convey information for users to see when\nlooking at their TaskRuns and can also be used in a Pipeline to pass data along from one Task to the next.\n`Task` results are best suited for holding small amounts of data, such as commit SHAs, branch names,\nephemeral namespaces, and so on.\n\nTo define a `Task's` results, use the `results` field.\nIn the example below, the `Task` specifies two files in the `results` field:\n`current-date-unix-timestamp` and `current-date-human-readable`.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: print-date\n  annotations:\n    description: |\n      A simple task that prints the date\nspec:\n  results:\n    - name: current-date-unix-timestamp\n      description: The current date in unix timestamp format\n    - name: current-date-human-readable\n      description: The current date in human readable format\n  steps:\n    - name: print-date-unix-timestamp\n      image: bash:latest\n      script: |\n        #!\/usr\/bin\/env bash\n        date +%s | tee $(results.current-date-unix-timestamp.path)\n    - name: print-date-human-readable\n      image: bash:latest\n      script: |\n        #!\/usr\/bin\/env bash\n        date | tee $(results.current-date-human-readable.path)\n```\n\nIn this example, [`$(results.name.path)`](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/variables.md#variables-available-in-a-task)\nis replaced with the 
path where Tekton will store the Task's results.\n\nWhen this Task is executed in a TaskRun, the results will appear in the TaskRun's status.\n\nWith the `v1` API version, they appear under `results`:\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: TaskRun\n# ...\nstatus:\n  # ...\n  results:\n    - name: current-date-human-readable\n      value: |\n        Wed Jan 22 19:47:26 UTC 2020\n    - name: current-date-unix-timestamp\n      value: |\n        1579722445\n```\n\nWith the `v1beta1` API version, they appear under `taskResults`:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\n# ...\nstatus:\n  # ...\n  taskResults:\n    - name: current-date-human-readable\n      value: |\n        Wed Jan 22 19:47:26 UTC 2020\n    - name: current-date-unix-timestamp\n      value: |\n        1579722445\n```\n\nTekton does not perform any processing on the contents of results; they are emitted\nverbatim from your Task including any leading or trailing whitespace characters. Make sure to write only the\nprecise string you want returned from your `Task` into the result files that your `Task` creates.\n\nThe stored results can be used [at the `Task` level](.\/pipelines.md#passing-one-tasks-results-into-the-parameters-or-when-expressions-of-another)\nor [at the `Pipeline` level](.\/pipelines.md#emitting-results-from-a-pipeline).\n\n> **Note** Tekton does not enforce Task results unless there is a consumer: when a Task declares a result,\n> it may complete successfully even if no result was actually produced. When a Task that declares results is\n> used in a Pipeline, and a component of the Pipeline attempts to consume the Task's result, if the result\n> was not produced the pipeline will fail. 
[TEP-0048](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0048-task-results-without-results.md)\n> proposes introducing default values for results to help Pipeline authors manage this case.\n\n#### Emitting Object `Results`\nEmitting a task result of type `object` is implemented based on\n[TEP-0075](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0075-object-param-and-result-types.md#emitting-object-results).\nYou can initialize `object` results from a `task` using a JSON-escaped string. For example, to assign the following data to an object result:\n\n```\n{\"url\":\"abc.dev\/sampler\",\"digest\":\"19f02276bf8dbdd62f069b922f10c65262cc34b710eea26ff928129a736be791\"}\n```\n\nYou will need to use escaped JSON to write to the pod termination message:\n\n```\n{\\\"url\\\":\\\"abc.dev\/sampler\\\",\\\"digest\\\":\\\"19f02276bf8dbdd62f069b922f10c65262cc34b710eea26ff928129a736be791\\\"}\n```\n\nAn example of a task definition producing an object result:\n\n```yaml\nkind: Task\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nmetadata:\n  name: write-object\n  annotations:\n    description: |\n      A simple task that writes object\nspec:\n  results:\n    - name: object-results\n      type: object\n      description: The object results\n      properties:\n        url:\n          type: string\n        digest:\n          type: string\n  steps:\n    - name: write-object\n      image: bash:latest\n      script: |\n        #!\/usr\/bin\/env bash\n        echo -n \"{\\\"url\\\":\\\"abc.dev\/sampler\\\",\\\"digest\\\":\\\"19f02276bf8dbdd62f069b922f10c65262cc34b710eea26ff928129a736be791\\\"}\" | tee $(results.object-results.path)\n```\n\n> **Note:**\n> - The opening and closing braces are mandatory, along with the escaped JSON.\n> - An object result must specify the `properties` section to define the schema, i.e. what keys are available for this object result. 
Failing to emit keys from the defined object results will result in a validation error at runtime.\n\n#### Emitting Array `Results`\n\nTekton Tasks also support defining results of type `array` and `object` in addition to `string`.\nEmitting a task result of type `array` is a `beta` feature implemented based on\n[TEP-0076](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0076-array-result-types.md#emitting-array-results).\nYou can initialize `array` results from a `task` using a JSON-escaped string. For example, to assign the following\nlist of animals to an array result:\n\n```\n[\"cat\", \"dog\", \"squirrel\"]\n```\n\nYou will have to initialize the pod termination message as escaped JSON:\n\n```\n[\\\"cat\\\", \\\"dog\\\", \\\"squirrel\\\"]\n```\n\nAn example of a task definition producing an array result with the greetings `[\"hello\", \"world\"]`:\n\n```yaml\nkind: Task\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nmetadata:\n  name: write-array\n  annotations:\n    description: |\n      A simple task that writes array\nspec:\n  results:\n    - name: array-results\n      type: array\n      description: The array results\n  steps:\n    - name: write-array\n      image: bash:latest\n      script: |\n        #!\/usr\/bin\/env bash\n        echo -n \"[\\\"hello\\\",\\\"world\\\"]\" | tee $(results.array-results.path)\n```\n\n**Note** that the opening and closing square brackets are mandatory along with the escaped JSON.\n\nSimilar to Go zero-valued slices, an array result is considered uninitialized (i.e. `nil`) if it is set to an empty\narray, i.e. `[]`. For example, `echo -n \"[]\" | tee $(results.result.path);` is equivalent to `result := []string{}`.\nA result initialized in this way has zero length, and trying to access this array with the star notation, i.e.\n`$(tasks.write-array-results.results.result[*])`, or an element of such an array, i.e. 
`$(tasks.write-array-results.results.result[0])`\nresults in `InvalidTaskResultReference` with `index out of range`.\n\nDepending on your use case, you might have to initialize a result array to the desired length, just like using the `make()` function in Go.\nThe `make()` function allocates an array and returns a slice of the specified length, i.e.\n`result := make([]string, 5)` results in `[\"\", \"\", \"\", \"\", \"\"]`. Similarly, set the array result to one of the following JSON-escaped\nexpressions to allocate an array of size 2:\n\n```\necho -n \"[\\\"\\\", \\\"\\\"]\" | tee $(results.array-results.path) # an array of size 2 with empty strings\necho -n \"[\\\"first-array-element\\\", \\\"\\\"]\" | tee $(results.array-results.path) # an array of size 2 with only the first element initialized\necho -n \"[\\\"\\\", \\\"second-array-element\\\"]\" | tee $(results.array-results.path) # an array of size 2 with only the second element initialized\necho -n \"[\\\"first-array-element\\\", \\\"second-array-element\\\"]\" | tee $(results.array-results.path) # an array of size 2 with both elements initialized\n```\n\nInitialization is also important for maintaining the order of the elements in an array. The order in which the task result was\ninitialized is the order in which the result is consumed by the dependent tasks. For example, suppose a task produces\ntwo array results, `images` and `configmaps`. 
The pipeline author can implement deployment by indexing into each array result:\n\n```yaml\n    - name: deploy-stage-1\n      taskRef:\n        name: deploy\n      params:\n        - name: image\n          value: $(tasks.setup.results.images[0])\n        - name: configmap\n          value: $(tasks.setup.results.configmaps[0])\n      ...\n    - name: deploy-stage-2\n      taskRef:\n        name: deploy\n      params:\n        - name: image\n          value: $(tasks.setup.results.images[1])\n        - name: configmap\n          value: $(tasks.setup.results.configmaps[1])\n```\n\nAs a task author, make sure the task array results are initialized accordingly, or set to a zero value when there is no\n`image` or `configmap`, to maintain the order.\n\n**Note**: Tekton uses [termination\nmessages](https:\/\/kubernetes.io\/docs\/tasks\/debug\/debug-application\/determine-reason-pod-failure\/#writing-and-reading-a-termination-message). As\nwritten in\n[tektoncd\/pipeline#4808](https:\/\/github.com\/tektoncd\/pipeline\/issues\/4808),\nthe maximum size of a `Task's` results is limited by the container termination message feature of Kubernetes.\nAt present, the limit is [\"4096 bytes\"](https:\/\/github.com\/kubernetes\/kubernetes\/blob\/96e13de777a9eb57f87889072b68ac40467209ac\/pkg\/kubelet\/container\/runtime.go#L632).\nThis also means that the number of Steps in a Task affects the maximum size of a Result,\nas each Step is implemented as a container in the TaskRun's pod.\nThe more containers we have in our pod, *the smaller the allowed size of each container's\nmessage*, meaning that the **more steps you have in a Task, the smaller the result for each step can be**.\nFor example, if you have 10 steps, the size of each step's Result will have a maximum of less than 1KB.\n\nIf your `Task` writes a large number of small results, you can work around this limitation\nby writing each result from a separate `Step` so that each `Step` has its own termination message.\nIf a termination 
message is detected as being too large, the TaskRun will be placed into a failed state\nwith the following message: `Termination message is above max allowed size 4096, caused by large task\nresult`. Since Tekton also uses the termination message for some internal information, the real\navailable size will be less than 4096 bytes.\n\nAs a general rule-of-thumb, if a result needs to be larger than a kilobyte, you should likely use a\n[`Workspace`](#specifying-workspaces) to store and pass it between `Tasks` within a `Pipeline`.\n\n#### Larger `Results` using sidecar logs\n\nThis is a beta feature which is guarded behind its own feature flag. The `results-from` feature flag must be set to\n[`\"sidecar-logs\"`](.\/install.md#enabling-larger-results-using-sidecar-logs) to enable larger results using sidecar logs.\n\nInstead of using termination messages to store results, the taskrun controller injects a sidecar container which monitors\nthe results of all the steps. The sidecar mounts the volume where the results of all the steps are stored. As soon as it\nfinds a new result, it logs it to stdout. The controller has access to the logs of the sidecar container.\n**CAUTION**: you must enable access to [Kubernetes pod\/logs](.\/install.md#enabling-larger-results-using-sidecar-logs).\n\nThis feature allows users to store up to 4 KB per result by default. Because we are not limited by the size of the\ntermination messages, users can have as many results as they require (or until the CRD reaches its limit). 
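\n\nAssuming a default installation, where feature flags live in the `feature-flags` ConfigMap in the `tekton-pipelines` namespace (an assumption about a standard install; adjust the namespace and ConfigMap name for your cluster), enabling the flag might look like:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: feature-flags\n  namespace: tekton-pipelines\ndata:\n  results-from: \"sidecar-logs\"\n```\n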
If the size\nof a result exceeds this limit, then the TaskRun will be placed into a failed state with the following message: `Result\nexceeded the maximum allowed limit.`\n\n**Note**: If you require even larger results, you can specify a different upper limit per result by setting the\n`max-result-size` feature flag to your desired size in bytes ([see instructions](.\/install.md#enabling-larger-results-using-sidecar-logs)).\n**CAUTION**: the larger you make the size, the more likely the CRD will reach its maximum limit enforced by the `etcd` server,\nleading to a bad user experience.\n\nRefer to the detailed instructions listed in [additional config](additional-configs.md#enabling-larger-results-using-sidecar-logs)\nto learn how to enable this feature.\n\n### Specifying `Volumes`\n\nSpecifies one or more [`Volumes`](https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes\/) that the `Steps` in your\n`Task` require to execute in addition to volumes that are implicitly created for input and output resources.\n\nFor example, you can use `Volumes` to do the following:\n\n- [Mount a Kubernetes `Secret`](auth.md).\n- Create an `emptyDir` persistent `Volume` that caches data across multiple `Steps`.\n- Mount a [Kubernetes `ConfigMap`](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-pod-configmap\/)\n  as a `Volume` source.\n- Mount a host's Docker socket to use a `Dockerfile` for building container images.\n  **Note:** Building a container image on-cluster using `docker build` is **very\n  unsafe** and is mentioned only for the sake of the example. Use [kaniko](https:\/\/github.com\/GoogleContainerTools\/kaniko) instead.\n\n### Specifying a `Step` template\n\nThe `stepTemplate` field specifies a [`Container`](https:\/\/kubernetes.io\/docs\/concepts\/containers\/)\nconfiguration that will be used as the starting point for all of the `Steps` in your\n`Task`. 
Individual configurations specified within `Steps` supersede the template wherever\noverlap occurs.\n\nIn the example below, the `Task` specifies a `stepTemplate` field with the environment variable\n`FOO` set to `bar`. The first `Step` in the `Task` uses that value for `FOO`, but the second `Step`\noverrides the value set in the template with `baz`. Additionally, the `stepTemplate` sets the environment\nvariable `TOKEN` to `public`. The last `Step` in the `Task` overrides that value with `private`, taken\nfrom the referenced secret.\n\n```yaml\nstepTemplate:\n  env:\n    - name: \"FOO\"\n      value: \"bar\"\n    - name: \"TOKEN\"\n      value: \"public\"\nsteps:\n  - image: ubuntu\n    command: [echo]\n    args: [\"FOO is ${FOO}\"]\n  - image: ubuntu\n    command: [echo]\n    args: [\"FOO is ${FOO}\"]\n    env:\n      - name: \"FOO\"\n        value: \"baz\"\n  - image: ubuntu\n    command: [echo]\n    args: [\"TOKEN is ${TOKEN}\"]\n    env:\n      - name: \"TOKEN\"\n        valueFrom:\n          secretKeyRef:\n            key: \"token\"\n            name: \"test\"\n---\n# The data of the secret 'test' is as follows.\ndata:\n  # The decoded value of 'cHJpdmF0ZQo=' is 'private'.\n  token: \"cHJpdmF0ZQo=\"\n```\n\n### Specifying `Sidecars`\n\nThe `sidecars` field specifies a list of [`Containers`](https:\/\/kubernetes.io\/docs\/concepts\/containers\/)\nto run alongside the `Steps` in your `Task`. 
You can use `Sidecars` to provide auxiliary functionality, such as\n[Docker in Docker](https:\/\/hub.docker.com\/_\/docker) or running a mock API server that your app can hit during testing.\n`Sidecars` spin up before your `Task` executes and are deleted after the `Task` execution completes.\nFor further information, see [`Sidecars` in `TaskRuns`](taskruns.md#specifying-sidecars).\n\n**Note**: Starting in v0.62, you can enable native Kubernetes sidecar support using the `enable-kubernetes-sidecar` feature flag ([see instructions](.\/additional-configs.md#customizing-the-pipelines-controller-behavior)). If Kubernetes does not wait for your sidecar application to be ready, use a `startupProbe` to help Kubernetes identify when it is ready.\n\nIn the example below, a `Step` uses a Docker-in-Docker `Sidecar` to build a Docker image:\n\n```yaml\nsteps:\n  - image: docker\n    name: client\n    script: |\n      #!\/usr\/bin\/env bash\n      cat > Dockerfile << EOF\n      FROM ubuntu\n      RUN apt-get update\n      ENTRYPOINT [\"echo\", \"hello\"]\n      EOF\n      docker build -t hello . 
&& docker run hello\n      docker images\n    volumeMounts:\n      - mountPath: \/var\/run\/\n        name: dind-socket\nsidecars:\n  - image: docker:18.05-dind\n    name: server\n    securityContext:\n      privileged: true\n    volumeMounts:\n      - mountPath: \/var\/lib\/docker\n        name: dind-storage\n      - mountPath: \/var\/run\/\n        name: dind-socket\nvolumes:\n  - name: dind-storage\n    emptyDir: {}\n  - name: dind-socket\n    emptyDir: {}\n```\n\nSidecars, just like `Steps`, can also run scripts:\n\n```yaml\nsidecars:\n  - image: busybox\n    name: hello-sidecar\n    script: |\n      echo 'Hello from sidecar!'\n```\n\n**Note:** Tekton's current `Sidecar` implementation contains a bug.\nTekton uses a container image named `nop` to terminate `Sidecars`.\nThat image is configured by passing a flag to the Tekton controller.\nIf the configured `nop` image contains the exact command the `Sidecar`\nwas executing before receiving a \"stop\" signal, the `Sidecar` keeps\nrunning, eventually causing the `TaskRun` to time out with an error.\nFor more information, see [issue 1347](https:\/\/github.com\/tektoncd\/pipeline\/issues\/1347).\n\n### Specifying a display name\n\nThe `displayName` field is an optional field that allows you to add a user-facing name to the `Task` that may be used to populate a UI.\n\n### Adding a description\n\nThe `description` field is an optional field that allows you to add an informative description to the `Task`.\n\n### Using variable substitution\n\nTekton provides variables to inject values into the contents of certain fields.\nThe values you can inject come from a range of sources, including other fields\nin the Task, context-sensitive information that Tekton provides, and runtime\ninformation received from a TaskRun.\n\nThe mechanism of variable substitution is quite simple: string replacement is\nperformed by the Tekton Controller when a TaskRun is executed.\n\n`Tasks` allow you to substitute variable names for the following 
entities:\n\n- [Parameters and resources](#substituting-parameters-and-resources)\n- [`Array` parameters](#substituting-array-parameters)\n- [`Workspaces`](#substituting-workspace-paths)\n- [`Volume` names and types](#substituting-volume-names-and-paths)\n\nSee the [complete list of variable substitutions for Tasks](.\/variables.md#variables-available-in-a-task)\nand the [list of fields that accept substitutions](.\/variables.md#fields-that-accept-variable-substitutions).\n\n#### Substituting parameters and resources\n\n[`params`](#specifying-parameters) and [`resources`](#specifying-resources) attributes can replace\nvariable values as follows:\n\n- To reference a parameter in a `Task`, use the following syntax, where `<name>` is the name of the parameter:\n  ```shell\n  # dot notation\n  # Here, the name cannot contain dots (e.g. foo.bar is not allowed). If the name contains dots, it can only be accessed via the bracket notation.\n  $(params.<name>)\n  # or bracket notation (wrapping <name> with either single or double quotes):\n  # Here, the name can contain dots (e.g. foo.bar is allowed).\n  $(params['<name>'])\n  $(params[\"<name>\"])\n  ```\n- To access parameter values from resources, see [variable substitution](resources.md#variable-substitution).\n\n#### Substituting `Array` parameters\n\nYou can expand referenced parameters of type `array` using the star operator. 
To do so, add the operator (`[*]`)\nto the named parameter to insert the array elements in the spot of the reference string.\n\nFor example, given a `params` field with the contents listed below, you can expand\n`command: [\"first\", \"$(params.array-param[*])\", \"last\"]` to `command: [\"first\", \"some\", \"array\", \"elements\", \"last\"]`:\n\n```yaml\nparams:\n  - name: array-param\n    value:\n      - \"some\"\n      - \"array\"\n      - \"elements\"\n```\n\nYou **must** reference parameters of type `array` in a completely isolated string within a larger `string` array.\nReferencing an `array` parameter in any other way will result in an error. For example, if `build-args` is a parameter of\ntype `array`, then the following example is an invalid `Step` because the string isn't isolated:\n\n```yaml\n- name: build-step\n  image: gcr.io\/cloud-builders\/some-image\n  args: [\"build\", \"additionalArg $(params.build-args[*])\"]\n```\n\nSimilarly, referencing `build-args` in a non-`array` field is also invalid:\n\n```yaml\n- name: build-step\n  image: \"$(params.build-args[*])\"\n  args: [\"build\", \"args\"]\n```\n\nA valid reference to the `build-args` parameter is isolated and in an eligible field (`args`, in this case):\n\n```yaml\n- name: build-step\n  image: gcr.io\/cloud-builders\/some-image\n  args: [\"build\", \"$(params.build-args[*])\", \"additionalArg\"]\n```\n\nAn `array` parameter referenced in the `args` section of a `Step` can be used in the `script` as command-line arguments:\n\n```yaml\n- name: build-step\n  image: gcr.io\/cloud-builders\/some-image\n  args: [\"$(params.flags[*])\"]\n  script: |\n    #!\/usr\/bin\/env bash\n    echo \"The script received $# flags.\"\n    echo \"The first command line argument is $1.\"\n```\n\nIndexing into an array to reference an individual array element is supported as an **alpha** feature (`enable-api-fields: alpha`).\nReferencing an individual array element in `args`:\n\n```yaml\n- name: build-step\n  image: 
gcr.io\/cloud-builders\/some-image\n  args: [\"$(params.flags[0])\"]\n```\n\nReferencing an individual array element in `script`:\n\n```yaml\n- name: build-step\n  image: gcr.io\/cloud-builders\/some-image\n  script: |\n    #!\/usr\/bin\/env bash\n    echo \"$(params.flags[0])\"\n```\n\n#### Substituting `Workspace` paths\n\nYou can substitute paths to `Workspaces` specified within a `Task` as follows:\n\n```yaml\n$(workspaces.myworkspace.path)\n```\n\nSince the `Volume` name is randomized and only set when the `Task` executes, you can also\nsubstitute the volume name as follows:\n\n```yaml\n$(workspaces.myworkspace.volume)\n```\n\n#### Substituting `Volume` names and types\n\nYou can substitute `Volume` names and [types](https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes\/#types-of-volumes)\nby parameterizing them. Tekton supports popular `Volume` types such as `ConfigMap`, `Secret`, and `PersistentVolumeClaim`.\nSee this [example](#mounting-a-configmap-as-a-volume-source) to find out how to perform this type of substitution\nin your `Task`.\n\n#### Substituting in `Script` blocks\n\nVariables can contain any string, including snippets of script that can\nbe injected into a Task's `Script` field. If you are using Tekton's variables\nin your Task's `Script` field, be aware that the strings you're interpolating\ncould include executable instructions.\n\nPreventing a substituted variable from executing as code depends on the container\nimage, language, or shell that your Task uses. Here's an example of interpolating\na Tekton variable into a `bash` `Script` block that prevents the variable's string\ncontents from being executed:\n\n```yaml\n# Task.yaml\nspec:\n  steps:\n    - image: an-image-that-runs-bash\n      env:\n        - name: SCRIPT_CONTENTS\n          value: $(params.script)\n      script: |\n        printf '%s' \"${SCRIPT_CONTENTS}\" > input-script\n```\n\nThis works by injecting Tekton's variable as an environment variable into the Step's\ncontainer. 
The `printf` program is then used to write the environment variable's\ncontent to a file.\n\n## Code examples\n\nStudy the following code examples to better understand how to configure your `Tasks`:\n\n- [Building and pushing a Docker image](#building-and-pushing-a-docker-image)\n- [Mounting multiple `Volumes`](#mounting-multiple-volumes)\n- [Mounting a `ConfigMap` as a `Volume` source](#mounting-a-configmap-as-a-volume-source)\n- [Using a `Secret` as an environment source](#using-a-secret-as-an-environment-source)\n- [Using a `Sidecar` in a `Task`](#using-a-sidecar-in-a-task)\n\n_Tip: See the collection of Tasks in the\n[Tekton community catalog](https:\/\/github.com\/tektoncd\/catalog) for\nmore examples._\n\n### Building and pushing a Docker image\n\nThe following example `Task` builds and pushes a `Dockerfile`-built image.\n\n**Note:** Building a container image using `docker build` on-cluster is **very\nunsafe** and is shown here only as a demonstration. Use [kaniko](https:\/\/github.com\/GoogleContainerTools\/kaniko) instead.\n\n```yaml\nspec:\n  params:\n    # This may be overridden, but is a sensible default.\n    - name: dockerfileName\n      type: string\n      description: The name of the Dockerfile\n      default: Dockerfile\n    - name: image\n      type: string\n      description: The image to build and push\n  workspaces:\n  - name: source\n  steps:\n    - name: dockerfile-build\n      image: gcr.io\/cloud-builders\/docker\n      workingDir: \"$(workspaces.source.path)\"\n      args:\n        [\n          \"build\",\n          \"--no-cache\",\n          \"--tag\",\n          \"$(params.image)\",\n          \"--file\",\n          \"$(params.dockerfileName)\",\n          \".\",\n        ]\n      volumeMounts:\n        - name: docker-socket\n          mountPath: \/var\/run\/docker.sock\n\n    - name: dockerfile-push\n      image: gcr.io\/cloud-builders\/docker\n      args: [\"push\", \"$(params.image)\"]\n      volumeMounts:\n        - name: 
docker-socket\n          mountPath: \/var\/run\/docker.sock\n\n  # As an implementation detail, this Task mounts the host's daemon socket.\n  volumes:\n    - name: docker-socket\n      hostPath:\n        path: \/var\/run\/docker.sock\n        type: Socket\n```\n\n### Mounting multiple `Volumes`\n\nThe example below illustrates mounting multiple `Volumes`:\n\n```yaml\nspec:\n  steps:\n    - image: ubuntu\n      script: |\n        #!\/usr\/bin\/env bash\n        curl https:\/\/foo.com > \/var\/my-volume\n      volumeMounts:\n        - name: my-volume\n          mountPath: \/var\/my-volume\n\n    - image: ubuntu\n      script: |\n        #!\/usr\/bin\/env bash\n        cat \/etc\/my-volume\n      volumeMounts:\n        - name: my-volume\n          mountPath: \/etc\/my-volume\n\n  volumes:\n    - name: my-volume\n      emptyDir: {}\n```\n\n### Mounting a `ConfigMap` as a `Volume` source\n\nThe example below illustrates how to mount a `ConfigMap` to act as a `Volume` source:\n\n```yaml\nspec:\n  params:\n    - name: CFGNAME\n      type: string\n      description: Name of config map\n    - name: volumeName\n      type: string\n      description: Name of volume\n  steps:\n    - image: ubuntu\n      script: |\n        #!\/usr\/bin\/env bash\n        cat \/var\/configmap\/test\n      volumeMounts:\n        - name: \"$(params.volumeName)\"\n          mountPath: \/var\/configmap\n\n  volumes:\n    - name: \"$(params.volumeName)\"\n      configMap:\n        name: \"$(params.CFGNAME)\"\n```\n\n### Using a `Secret` as an environment source\n\nThe example below illustrates how to use a `Secret` as an environment source:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: goreleaser\nspec:\n  params:\n    - name: package\n      type: string\n      description: base package to build in\n    - name: github-token-secret\n      type: string\n      description: name of the secret holding the github-token\n      default: github-token\n  
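# Note: the Secret named by $(params.github-token-secret) (default \"github-token\")\n  # must already exist in the Task's namespace and contain the key 'bot-token',\n  # which the GITHUB_TOKEN environment variable below reads via a secretKeyRef.\n  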
workspaces:\n  - name: source\n  steps:\n    - name: release\n      image: goreleaser\/goreleaser\n      workingDir: $(workspaces.source.path)\/$(params.package)\n      command:\n        - goreleaser\n      args:\n        - release\n      env:\n        - name: GOPATH\n          value: \/workspace\n        - name: GITHUB_TOKEN\n          valueFrom:\n            secretKeyRef:\n              name: $(params.github-token-secret)\n              key: bot-token\n```\n\n### Using a `Sidecar` in a `Task`\n\nThe example below illustrates how to use a `Sidecar` in your `Task`:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: with-sidecar-task\nspec:\n  params:\n    - name: sidecar-image\n      type: string\n      description: Image name of the sidecar container\n    - name: sidecar-env\n      type: string\n      description: Environment variable value\n  sidecars:\n    - name: sidecar\n      image: $(params.sidecar-image)\n      env:\n        - name: SIDECAR_ENV\n          value: $(params.sidecar-env)\n  steps:\n    - name: test\n      image: hello-world\n```\n\n## Debugging\n\nThis section describes techniques for debugging the most common issues in `Tasks`.\n\n### Inspecting the file structure\n\nA common issue when configuring `Tasks` stems from not knowing the location of your data.\nFor the most part, files ingested and output by your `Task` live in the `\/workspace` directory,\nbut the specifics can vary. To inspect the file structure of your `Task`, add a step that outputs\nthe name of every file stored in the `\/workspace` directory to the build log. 
For example:\n\n```yaml\n- name: build-and-push-1\n  image: ubuntu\n  command:\n    - \/bin\/bash\n  args:\n    - -c\n    - |\n      set -ex\n      find \/workspace\n```\n\nYou can also choose to examine the *contents* of every file used by your `Task`:\n\n```yaml\n- name: build-and-push-1\n  image: ubuntu\n  command:\n    - \/bin\/bash\n  args:\n    - -c\n    - |\n      set -ex\n      find \/workspace | xargs cat\n```\n\n### Inspecting the `Pod`\n\nTo inspect the contents of the `Pod` used by your `Task` at a specific stage in the `Task`'s execution,\nlog into the `Pod` and add a `Step` that pauses the `Task` at the desired stage. For example:\n\n```yaml\n- name: pause\n  image: docker\n  args: [\"sleep\", \"6000\"]\n```\n\n### Running Step Containers as a Non Root User\n\nAll steps that do not need to run as root should make use of TaskRun features to\nensure that the container for a step runs as a user without root permissions. As a best practice,\nrunning containers as non-root users should be built into the container image to avoid any possibility\nof the container being run as root. 
However, as a further measure of enforcing this practice,\nsteps can make use of a `securityContext` to specify how the container should run.\n\nAn example of running Task steps as a non-root user is shown below:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: show-non-root-steps\nspec:\n  steps:\n    # no securityContext specified so will use\n    # securityContext from TaskRun podTemplate\n    - name: show-user-1001\n      image: ubuntu\n      command:\n        - ps\n      args:\n        - \"aux\"\n    # securityContext specified so will run as\n    # user 2000 instead of 1001\n    - name: show-user-2000\n      image: ubuntu\n      command:\n        - ps\n      args:\n        - \"aux\"\n      securityContext:\n        runAsUser: 2000\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  generateName: show-non-root-steps-run-\nspec:\n  taskRef:\n    name: show-non-root-steps\n  podTemplate:\n    securityContext:\n      runAsNonRoot: true\n      runAsUser: 1001\n```\n\nIn the example above, the step `show-user-2000` specifies via a `securityContext` that the container\nfor the step should run as user 2000. A `securityContext` must still be specified via a TaskRun `podTemplate`\nfor this TaskRun to run in a Kubernetes environment that enforces running containers as non-root.\n\nThe `runAsNonRoot` property specified via the `podTemplate` above validates that steps that are part of this TaskRun\nrun as non-root users and will fail to start any step container that attempts to run as root. Only specifying\n`runAsNonRoot: true` will not actually run containers as non-root; the property simply validates that steps are not\nrunning as root. 
It is the `runAsUser` property that is actually used to set the non-root user ID for the container.\n\nIf a step defines its own `securityContext`, it will be applied to the step container in place of the `securityContext`\nspecified at the pod level via the TaskRun `podTemplate`.\n\nMore information about Pod and Container Security Contexts can be found via the [Kubernetes website](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/#set-the-security-context-for-a-pod).\n\nThe example Task\/TaskRun above can be found as a [TaskRun example](..\/examples\/v1\/taskruns\/run-steps-as-non-root.yaml).\n\n## `Task` Authoring Recommendations\n\nRecommendations for authoring `Tasks` are available in the [Tekton Catalog][recommendations].\n\n[recommendations]: https:\/\/github.com\/tektoncd\/catalog\/blob\/main\/recommendations.md\n\n---\n\nExcept as otherwise noted, the contents of this page are licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/).\nCode samples are licensed under the [Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).
streams with  stdoutConfig  and  stderrConfig    redirecting step output streams with stdoutConfig and stderrConfig       Specifying  Parameters    specifying parameters       Specifying  Workspaces    specifying workspaces       Emitting  Results    emitting results         Larger  Results  using sidecar logs   larger results using sidecar logs       Specifying  Volumes    specifying volumes       Specifying a  Step  template   specifying a step template       Specifying  Sidecars    specifying sidecars       Specifying a  DisplayName    specifying a display name       Adding a description   adding a description       Using variable substitution   using variable substitution         Substituting parameters and resources   substituting parameters and resources         Substituting  Array  parameters   substituting array parameters         Substituting  Workspace  paths   substituting workspace paths         Substituting  Volume  names and types   substituting volume names and types         Substituting in  Script  blocks   substituting in script blocks     Code examples   code examples       Building and pushing a Docker image   building and pushing a docker image         Mounting multiple  Volumes    mounting multiple volumes         Mounting a  ConfigMap  as a  Volume  source   mounting a configmap as a volume source         Using a  Secret  as an environment source   using a secret as an environment source         Using a  Sidecar  in a  Task    using a sidecar in a task     Debugging   debugging       Inspecting the file structure   inspecting the file structure       Inspecting the  Pod    inspecting the pod       Running Step Containers as a Non Root User   running step containers as a non root user      Task  Authoring Recommendations   task authoring recommendations      Overview  A  Task  is a collection of  Steps  that you define and arrange in a specific order of execution as part of your continuous integration flow  A  Task  executes as a Pod on your 
Kubernetes cluster  A  Task  is available within a specific namespace  while a  ClusterTask  is available across the entire cluster   A  Task  declaration includes the following elements      Parameters   specifying parameters     Steps   defining steps     Workspaces   specifying workspaces     Results   emitting results      Configuring a  Task   A  Task  definition supports the following fields     Required        apiVersion   kubernetes overview    Specifies the API version  For example       tekton dev v1beta1         kind   kubernetes overview    Identifies this resource object as a  Task  object        metadata   kubernetes overview    Specifies metadata that uniquely identifies the      Task  resource object  For example  a  name         spec   kubernetes overview    Specifies the configuration information for     this  Task  resource object        steps    defining steps    Specifies one or more container images to run in the  Task     Optional        description    adding a description    An informative description of the  Task         params    specifying parameters    Specifies execution parameters for the  Task         workspaces    specifying workspaces    Specifies paths to volumes required by the  Task         results    emitting results    Specifies the names under which  Tasks  write execution results        volumes    specifying volumes    Specifies one or more volumes that will be available to the  Steps  in the  Task         stepTemplate    specifying a step template    Specifies a  Container  step definition to use as the basis for all  Steps  in the  Task         sidecars    specifying sidecars    Specifies  Sidecar  containers to run alongside the  Steps  in the  Task     kubernetes overview     https   kubernetes io docs concepts overview working with objects kubernetes objects  required fields  The non functional example below demonstrates the use of most of the above mentioned fields      yaml apiVersion  tekton dev v1   or tekton dev 
v1beta1 kind  Task metadata    name  example task name spec    params        name  pathToDockerFile       type  string       description  The path to the dockerfile to build       default   workspace workspace Dockerfile       name  builtImageUrl       type  string       description  location to push the built image to   steps        name  ubuntu example       image  ubuntu       args    ubuntu build example    SECRETS example md         image  gcr io example builders build example       command    echo         args      params pathToDockerFile          name  dockerfile pushexample       image  gcr io example builders push example       args    push      params builtImageUrl          volumeMounts            name  docker socket example           mountPath   var run docker sock   volumes        name  example volume       emptyDir               Task  vs   ClusterTask     Note  ClusterTasks are deprecated    Please use the  cluster resolver    cluster resolver md  instead   A  ClusterTask  is a  Task  scoped to the entire cluster instead of a single namespace  A  ClusterTask  behaves identically to a  Task  and therefore everything in this document applies to both     Note    When using a  ClusterTask   you must explicitly set the  kind  sub field in the  taskRef  field to  ClusterTask             If not specified  the  kind  sub field defaults to  Task    Below is an example of a Pipeline declaration that uses a  ClusterTask     Note      There is no  v1  API specification for  ClusterTask  but a  v1beta1 clustertask  can still be referenced in a  v1 pipeline     The cluster resolver syntax below can be used to reference any task  not just a clustertask        yaml apiVersion  tekton dev v1 kind  Pipeline metadata    name  demo pipeline spec    tasks        name  build skaffold web       taskRef          resolver  cluster         params            name  kind           value  task           name  name           value  build push           name  namespace           
value  default           yaml apiVersion  tekton dev v1beta1 kind  Pipeline metadata    name  demo pipeline   namespace  default spec    tasks        name  build skaffold web       taskRef          name  build push         kind  ClusterTask       params                  Defining  Steps   A  Step  is a reference to a container image that executes a specific tool on a specific input and produces a specific output  To add  Steps  to a  Task  you define a  steps  field  required  containing a list of desired  Steps   The order in which the  Steps  appear in this list is the order in which they will execute   The following requirements apply to each container image referenced in a  steps  field     The container image must abide by the  container contract    container contract md     Each container image runs to completion or until the first failure occurs    The CPU  memory  and ephemeral storage resource requests set on  Step s   will be adjusted to comply with any   LimitRange   https   kubernetes io docs concepts policy limit range  s   present in the  Namespace   In addition  Kubernetes determines a pod s effective resource   requests and limits by summing the requests and limits for all its containers  even   though Tekton runs  Steps  sequentially    For more detail  see  Compute Resources in Tekton    compute resources md      Note    If the image referenced in the  step  field is from a private registry   TaskRuns  or  PipelineRuns  that consume the task           must provide the  imagePullSecrets  in a  podTemplate    podtemplates md    Below is an example of setting the resource requests and limits for a step         yaml spec    steps        name  step with limts       computeResources          requests            memory  1Gi           cpu  500m         limits            memory  2Gi           cpu  800m           yaml spec    steps        name  step with limts       resources          requests            memory  1Gi           cpu  500m         limits         
   memory  2Gi           cpu  800m             Reserved directories  There are several directories that all  Tasks  run by Tekton will treat as special      workspace    This directory is where  resources   specifying resources  and  workspaces   specifying workspaces    are mounted  Paths to these are available to  Task  authors via  variable substitution  variables md      tekton    This directory is used for Tekton specific functionality          tekton results  is where  results   emitting results  are written to        The path is available to  Task  authors via     results name path    variables md        There are other subfolders which are  implementation details of Tekton  developers README md reserved directories        and   users should not rely on their specific behavior as it may change in the future         Running scripts within  Steps   A step can specify a  script  field  which contains the body of a script  That script is invoked as if it were stored inside the container image  and any  args  are passed directly to it     Note    If the  script  field is present  the step cannot also contain a  command  field   Scripts that do not start with a  shebang  https   en wikipedia org wiki Shebang  Unix   line will have the following default preamble prepended      bash    bin sh set  e      You can override this default preamble by prepending a shebang that specifies the desired parser  This parser must be present within that  Step s  container image   The example below executes a Bash script      yaml steps      image  ubuntu   contains bash     script             usr bin env bash       echo  Hello from Bash        The example below executes a Python script      yaml steps      image  python   contains python     script             usr bin env python3       print  Hello from Python         The example below executes a Node script      yaml steps      image  node   contains node     script             usr bin env node       console log  Hello from Node 
        You can execute scripts directly in the workspace      yaml steps      image  ubuntu     script             usr bin env bash        workspace my script sh    provided by an input resource      You can also execute scripts within the container image      yaml steps      image  my image   contains  bin my binary     script             usr bin env bash        bin my binary            Windows scripts  Scripts in tasks that will eventually run on windows nodes need a custom shebang line  so that Tekton knows how to run the script  The format of the shebang line is      win  interpreter command   args    Unlike linux  we need to specify how to interpret the script file which is generated by Tekton  The example below shows how to execute a powershell script      yaml steps      image  mcr microsoft com windows servercore 1809     script            win powershell exe  File       echo  Hello from PowerShell       Microsoft provide  powershell  images  which contain Powershell Core  which is slightly different from powershell found in standard windows images   The example below shows how to use these images     yaml steps      image  mcr microsoft com powershell nanoserver     script            win pwsh exe  File       echo  Hello from PowerShell Core       As can be seen the command is different  The windows shebang can be used for any interpreter  as long as it exists in the image and can interpret commands from a file  The example below executes a Python script     yaml   steps      image  python     script            win python       print  Hello from Python        Note that other than the    win  shebang the example is identical to the earlier linux example   Finally  if no interpreter is specified on the    win  line then the script will be treated as a windows   cmd  file which will be excecuted  The example below shows this     yaml   steps      image  mcr microsoft com powershell lts nanoserver 1809     script            win       echo Hello from the default 
cmd file           Specifying a timeout  A  Step  can specify a  timeout  field  If the  Step  execution time exceeds the specified timeout  the  Step  kills its running process and any subsequent  Steps  in the  TaskRun  will not be executed  The  TaskRun  is placed into a  Failed  condition   An accompanying log describing which  Step  timed out is written as the  Failed  condition s message   The timeout specification follows the duration format as specified in the  Go time package  https   golang org pkg time  ParseDuration   e g  1s or 1ms    The example  Step  below is supposed to sleep for 60 seconds but will be canceled by the specified 5 second timeout     yaml steps      name  sleep then timeout     image  ubuntu     script             usr bin env bash       echo  I am supposed to sleep for 60 seconds         sleep 60     timeout  5s           Specifying  onError  for a  step   When a  step  in a  task  results in a failure  the rest of the steps in the  task  are skipped and the  taskRun  is declared a failure  If you would like to ignore such step errors and continue executing the rest of the steps in the task  you can specify  onError  for such a  step     onError  can be set to either  continue  or  stopAndFail  as part of the step definition  If  onError  is set to  continue   the entrypoint sets the original failed exit code of the  script   running scripts within steps  in the container terminated state  A  step  with  onError  set to  continue  does not fail the  taskRun  and continues executing the rest of the steps in a task   To ignore a step error  set  onError  to  continue       yaml steps      image  docker io library golang latest     name  ignore unit test failure     onError  continue     script          go test        The original failed exit code of the  script   running scripts within steps  is available in the terminated state of the container       kubectl get tr taskrun unit test t6qcl  o json   jq  status      conditions           
```bash
kubectl get tr taskrun-unit-test-t6qcl -o json | jq .status
{
  "conditions": [
    {
      "message": "All Steps have completed executing",
      "reason": "Succeeded",
      "status": "True",
      "type": "Succeeded"
    }
  ],
  "steps": [
    {
      "container": "step-ignore-unit-test-failure",
      "imageID": "...",
      "name": "ignore-unit-test-failure",
      "terminated": {
        "containerID": "...",
        "exitCode": 1,
        "reason": "Completed"
      }
    }
  ]
}
```

For an end-to-end example, see [the taskRun ignoring a step error](../examples/v1/taskruns/ignore-step-error.yaml) and [the pipelineRun ignoring a step error](../examples/v1/pipelineruns/ignore-step-error.yaml).

#### Accessing Step's `exitCode` in subsequent `Steps`

A step can access the exit code of any previous step by reading the file pointed to by the `exitCode` path variable:

```shell
cat $(steps.step-<step-name>.exitCode.path)
```

The `exitCode` of a step without any name can be referenced using:

```shell
cat $(steps.step-unnamed-<step-index>.exitCode.path)
```

#### Produce a task result with `onError`

When a step is set to ignore the step error and that step is able to initialize a result file before failing, that result is made available to its consumer task.

```yaml
steps:
  - name: ignore-failure-and-produce-a-result
    onError: continue
    image: busybox
    script: |
      echo -n 123 | tee $(results.result1.path)
      exit 1
```

The task consuming the result using the result reference `$(tasks.task1.results.result1)` in a `pipeline` will be able to access the result and run with the resolved value.

Now, a step can fail before initializing a result, and the `pipeline` can ignore such a step failure. But the `pipeline` will fail with `InvalidTaskResultReference` if it has a task consuming that task result. For example, any task consuming `$(tasks.task1.results.result2)` will cause the pipeline to fail:

```yaml
steps:
  - name: ignore-failure-and-produce-a-result
    onError: continue
    image: busybox
    script: |
      echo -n 123 | tee $(results.result1.path)
      exit 1
      echo -n 456 | tee $(results.result2.path)
```
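The `onError: continue` pattern works because the result file is written before the failing `exit`, so the file survives the step's non-zero exit code. A plain shell sketch of that ordering (the temp file stands in for `$(results.result1.path)`):

```shell
# A result file written before the step fails is still available afterwards;
# the temp file below stands in for $(results.result1.path).
RESULT_FILE="$(mktemp)"
( echo -n 123 | tee "$RESULT_FILE" >/dev/null; exit 1 )
STEP_EXIT=$?
echo "step exit code: $STEP_EXIT"    # the original exit code (1) is preserved
cat "$RESULT_FILE"                   # the result content (123) survives the failure
```

This mirrors what the entrypoint records: the failed exit code goes into the container's terminated state, while the result file remains readable.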
#### Breakpoint on failure with `onError`

[Debugging](taskruns.md#debugging-a-taskrun) a taskRun is supported to debug a container and comes with a set of [tools](taskruns.md#debug-environment) to declare the step as a failure or a success. Specifying [breakpoint](taskruns.md#breakpoint-on-failure) at the `taskRun` level overrides ignoring a step error using `onError`.

#### Redirecting step output streams with `stdoutConfig` and `stderrConfig`

This is an alpha feature. The `enable-api-fields` feature flag [must be set to `"alpha"`](install.md) for Redirecting Step Output Streams to function.

This feature defines optional `Step` fields `stdoutConfig` and `stderrConfig`, which can be used to redirect the output streams `stdout` and `stderr` respectively:

```yaml
- name: ...
  stdoutConfig:
    path: ...
  stderrConfig:
    path: ...
```

Once `stdoutConfig.path` or `stderrConfig.path` is specified, the corresponding output stream will be duplicated to both the given file and the standard output stream of the container, so users can still view the output through the Pod log API. If both `stdoutConfig.path` and `stderrConfig.path` are set to the same value, outputs from both streams will be interleaved in the same file, but there will be no ordering guarantee on the data. If multiple `Step`s' `stdoutConfig.path` fields are set to the same value, the file content will be overwritten by the last outputting step.

Variable substitution will be applied to the new fields, so one could specify `$(results.<name>.path)` in the `stdoutConfig.path` or `stderrConfig.path` field to extract the stdout of a step into a Task result.
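Duplicating a stream to a file while keeping it visible on stdout is exactly what `tee` does in a shell, which makes a convenient mental model for `stdoutConfig` (the temp file is a stand-in for the configured `path`):

```shell
# tee mirrors stdoutConfig's behavior: output goes to the file *and*
# stays on stdout (so the Pod log API would still see it).
OUT_FILE="$(mktemp)"                  # stand-in for stdoutConfig.path
echo "build log line" | tee "$OUT_FILE"
```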
##### Example Usage

Redirecting stdout of `boskosctl` to `jq` and publishing the resulting `project-id` as a Task result:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: boskos-acquire
spec:
  results:
  - name: project-id
  steps:
  - name: boskosctl
    image: gcr.io/k8s-staging-boskos/boskosctl
    args:
    - acquire
    - --server-url=http://boskos.test-pods.svc.cluster.local
    - --owner-name=christie-test-boskos
    - --type=gke-project
    - --state=free
    - --target-state=busy
    stdoutConfig:
      path: /data/boskosctl-stdout
    volumeMounts:
    - name: data
      mountPath: /data
  - name: parse-project-id
    image: imega/jq
    args:
    - -r
    - .name
    - /data/boskosctl-stdout
    stdoutConfig:
      path: $(results.project-id.path)
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
```

NOTE:

- If the intent is to share output between `Step`s via a file, the user must ensure that the paths provided are shared between the `Step`s (e.g. via `volumes`).
- There is currently a limit on the overall size of the `Task` results. If the stdout/stderr of a step is set to the path of a `Task` result and the step prints too much data, the result manifest would become too large; currently the entrypoint binary will fail if that happens.
- If the stdout/stderr of a `Step` is set to the path of a `Task` result (e.g. `$(results.empty.path)`), but that result is not defined for the `Task`, the `Step` will run but the output will be captured in a file named `$(results.empty.path)` in the current working directory. Similarly, any substitution that is not valid (e.g. `$(some.invalid.path)/out.txt`) will be left as-is and will result in a file path `$(some.invalid.path)/out.txt` relative to the current working directory.

### Specifying `Parameters`

You can specify parameters, such as compilation flags or artifact names, that you want to supply to the `Task` at execution time. `Parameters` are passed to the `Task` from its corresponding `TaskRun`.
#### Parameter name

Parameter name format:

- Must only contain alphanumeric characters, hyphens (`-`), underscores (`_`), and dots (`.`). However, `object` parameter names and their key names can't contain dots (`.`). See the reasons in the third item added in this [PR](https://github.com/tektoncd/community/pull/711).
- Must begin with a letter or an underscore (`_`).

For example, `foo.Is-Bar_` is a valid parameter name for string or array type, but is invalid for object parameters because it contains dots. On the other hand, `barIsBa$` or `0banana` are invalid for all types.

NOTE:

1. Parameter names are **case insensitive**. For example, `APPLE` and `apple` will be treated as equal. If they appear in the same TaskSpec's params, it will be rejected as invalid.
2. If a parameter name contains dots (`.`), it must be referenced by using the [bracket notation](#substituting-parameters-and-resources) with either single or double quotes, i.e. `$(params['foo.bar'])` or `$(params["foo.bar"])`. See the following example for more information.

#### Parameter type

Each declared parameter has a `type` field, which can be set to `string`, `array`, or `object`.

##### `object` type

`object` type is useful in cases where users want to group related parameters. For example, an object parameter called `gitrepo` can contain both the `url` and the `commit` to group related information:

```yaml
spec:
  params:
    - name: gitrepo
      type: object
      properties:
        url:
          type: string
        commit:
          type: string
```

Refer to the [TaskRun example](../examples/v1/taskruns/object-param-result.yaml) and the [PipelineRun example](../examples/v1/pipelineruns/pipeline-object-param-and-result.yaml) in which `object` parameters are demonstrated.

NOTE:

- An `object` param must specify the `properties` section to define the schema, i.e. what keys are available for this object param. See how to define the `properties` section in the following example and in [TEP-0075](https://github.com/tektoncd/community/blob/main/teps/0075-object-param-and-result-types.md#defaulting-to-string-types-for-values).
- When providing a value for an `object` param, one may provide values for just a subset of keys in the spec's `default`, and provide values for the rest of the keys at runtime ([example](../examples/v1/taskruns/object-param-result.yaml)).
- When using an object in variable replacement, users can only access its individual keys ("child" members) by name, i.e. `$(params.gitrepo.url)`. Using an entire object as a value is only allowed when the value is also an object, like [this example](../examples/v1/pipelineruns/pipeline-object-param-and-result.yaml). See more details about using object params in [TEP-0075](https://github.com/tektoncd/community/blob/main/teps/0075-object-param-and-result-types.md#using-objects-in-variable-replacement).
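To illustrate defaulting only a subset of object keys, here is a minimal sketch (the URL is made up) of an `object` param whose `default` covers only `url`, leaving `commit` to be supplied by the `TaskRun` at runtime:

```yaml
spec:
  params:
    - name: gitrepo
      type: object
      properties:
        url:
          type: string
        commit:
          type: string
      default:
        url: "https://example.com/repo"  # hypothetical default for one key only
        # "commit" is intentionally omitted and must be provided at runtime
```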
##### `array` type

`array` type is useful in cases where the number of compilation flags being supplied to a task varies throughout the `Task`'s execution. `array` params can be defined by setting `type` to `array`. Also, `array` params only support `string` arrays, i.e. each array element has to be of type `string`:

```yaml
spec:
  params:
    - name: flags
      type: array
```

##### `string` type

If not specified, the `type` field defaults to `string`. When the actual parameter value is supplied, its parsed type is validated against the `type` field.

The following example illustrates the use of `Parameters` in a `Task`. The `Task` declares 3 input parameters named `gitrepo` (of type `object`), `flags` (of type `array`), and `someURL` (of type `string`). These parameters are used in the `steps.args` list.

- For an `object` parameter, you can only use its individual members (aka keys).
- You can expand parameters of type `array` inside an existing array using the star operator. In this example, `flags` contains the star operator: `$(params.flags[*])`.

**Note:** Input parameter values can be used as variables throughout the `Task` by using [variable substitution](#using-variable-substitution).
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: task-with-parameters
spec:
  params:
    - name: gitrepo
      type: object
      properties:
        url:
          type: string
        commit:
          type: string
    - name: flags
      type: array
    - name: someURL
      type: string
    - name: foo.bar
      description: "the name contains dot character"
      default: "test"
  steps:
    - name: do-the-clone
      image: some-git-image
      args: [
        "-url=$(params.gitrepo.url)",
        "-revision=$(params.gitrepo.commit)"
      ]
    - name: build
      image: my-builder
      args: [
        "build",
        "$(params.flags[*])",
        # It would be equivalent to use $(params["someURL"]) here,
        # which is necessary when the parameter name contains '.' characters
        # (e.g. $(params["some.other.URL"])). See the example in step "echo-param".
        "url=$(params.someURL)"
      ]
    - name: echo-param
      image: bash
      args: [
        "echo",
        "$(params['foo.bar'])"
      ]
```

The following `TaskRun` supplies the values for the parameters `gitrepo`, `flags` and `someURL`:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: run-with-parameters
spec:
  taskRef:
    name: task-with-parameters
  params:
    - name: gitrepo
      value:
        url: "abc.com"
        commit: "c12b72"
    - name: flags
      value:
        - "--set"
        - "arg1=foo"
        - "--randomflag"
        - "--someotherflag"
    - name: someURL
      value: "http://google.com"
```
#### Default value

Parameter declarations (within `Tasks` and `Pipelines`) can include default values which will be used if the parameter is not specified, for example to specify defaults for both string params and array params ([full example](../examples/v1/taskruns/array-default.yaml)):

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: task-with-array-default
spec:
  params:
    - name: flags
      type: array
      default:
        - "--set"
        - "arg1=foo"
        - "--randomflag"
        - "--someotherflag"
```

#### Param enum

:seedling: `enum` is an [alpha](additional-configs.md#alpha-features) feature. The `enable-param-enum` feature flag must be set to `"true"` to enable this feature.

Parameter declarations can include an `enum`, which is a predefined set of valid values that can be accepted by the `Param`. If a `Param` has both an `enum` and a default value, the default value must be in the `enum` set. For example, the valid values for `Param` "message" are bounded to `v1`, `v2` and `v3`:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: param-enum-demo
spec:
  params:
  - name: message
    type: string
    enum: ["v1", "v2", "v3"]
    default: "v1"
  steps:
  - name: build
    image: bash:latest
    script: |
      echo "$(params.message)"
```

If the `Param` value passed in by `TaskRuns` is **NOT** in the predefined `enum` list, the `TaskRuns` will fail with reason `InvalidParamValue`. See usage in this [example](../examples/v1/taskruns/alpha/param-enum.yaml).

### Specifying `Workspaces`

[`Workspaces`](workspaces.md#using-workspaces-in-tasks) allow you to specify one or more volumes that your `Task` requires during execution. It is recommended that `Tasks` use **at most** one writeable `Workspace`. For example:

```yaml
spec:
  steps:
    - name: write-message
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        set -xe
        echo hello! > $(workspaces.messages.path)/message
  workspaces:
    - name: messages
      description: The folder where we write the message to
      mountPath: /custom/path/relative/to/root
```

For more information, see [Using `Workspaces` in `Tasks`](workspaces.md#using-workspaces-in-tasks) and the [`Workspaces` in a `TaskRun`](../examples/v1/taskruns/workspace.yaml) example YAML file.
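At runtime a workspace is simply a directory mounted into the step containers; `$(workspaces.messages.path)` resolves to its mount path. A local sketch of the write-then-read flow (a temp directory stands in for the mounted volume):

```shell
# The temp dir stands in for the resolved $(workspaces.messages.path).
WS="$(mktemp -d)"
echo hello! > "$WS/message"      # what the write-message step does
MSG="$(cat "$WS/message")"       # a later step sharing the workspace reads it
echo "$MSG"
```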
#### Propagated `Workspaces`

Workspaces can be propagated to embedded task specs, but not to referenced `Tasks`. For more information, see [Propagated Workspaces](taskruns.md#propagated-workspaces).

### Emitting `Results`

A `Task` is able to emit string results that can be viewed by users and passed to other `Tasks` in a `Pipeline`. These results have a wide variety of potential uses. To highlight just a few examples from the Tekton Catalog: the [`git-clone` Task](https://github.com/tektoncd/catalog/blob/main/task/git-clone/0.1/git-clone.yaml) emits a cloned commit SHA as a result, the [`generate-build-id` Task](https://github.com/tektoncd/catalog/blob/main/task/generate-build-id/0.1/generate-build-id.yaml) emits a randomized ID as a result, and the [`kaniko` Task](https://github.com/tektoncd/catalog/tree/main/task/kaniko/0.1) emits a container image digest as a result. In each case these results convey information for users to see when looking at their TaskRuns and can also be used in a `Pipeline` to pass data along from one `Task` to the next.

`Task` results are best suited for holding small amounts of data, such as commit SHAs, branch names, ephemeral namespaces, and so on.

To define a `Task`'s results, use the `results` field. In the example below, the `Task` specifies two files in the `results` field: `current-date-unix-timestamp` and `current-date-human-readable`.

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: print-date
  annotations:
    description: |
      A simple task that prints the date
spec:
  results:
    - name: current-date-unix-timestamp
      description: The current date in unix timestamp format
    - name: current-date-human-readable
      description: The current date in human readable format
  steps:
    - name: print-date-unix-timestamp
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        date +%s | tee $(results.current-date-unix-timestamp.path)
    - name: print-date-human-readable
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        date | tee $(results.current-date-human-readable.path)
```
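Under the hood, each `$(results.<name>.path)` reference resolves to a file the step writes and the controller later reads back. A local sketch of what the first step produces (a temp directory stands in for the real results location):

```shell
# The temp dir stands in for the directory behind $(results.<name>.path).
RESULTS_DIR="$(mktemp -d)"
date +%s | tee "$RESULTS_DIR/current-date-unix-timestamp" >/dev/null
# The "result" is just the file's verbatim content:
cat "$RESULTS_DIR/current-date-unix-timestamp"
```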
In this example, [`$(results.<name>.path)`](https://github.com/tektoncd/pipeline/blob/main/docs/variables.md#variables-available-in-a-task) is replaced with the path where Tekton will store the `Task`'s results. When this `Task` is executed in a `TaskRun`, the results will appear in the `TaskRun`'s status:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
...
status:
  ...
  results:
    - name: current-date-human-readable
      value: |
        Wed Jan 22 19:47:26 UTC 2020
    - name: current-date-unix-timestamp
      value: |
        1579722445
```

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
...
status:
  ...
  taskResults:
    - name: current-date-human-readable
      value: |
        Wed Jan 22 19:47:26 UTC 2020
    - name: current-date-unix-timestamp
      value: |
        1579722445
```

Tekton does not perform any processing on the contents of results; they are emitted verbatim from your `Task`, including any leading or trailing whitespace characters. Make sure to write only the precise string you want returned from your `Task` into the result files that your `Task` creates.

The stored results can be used [at the `Task` level](pipelines.md#passing-one-tasks-results-into-the-parameters-or-when-expressions-of-another) or [at the `Pipeline` level](pipelines.md#emitting-results-from-a-pipeline).

**Note:** Tekton does not enforce `Task` results unless there is a consumer: when a `Task` declares a result, it may complete successfully even if no result was actually produced. When a `Task` that declares results is used in a `Pipeline`, and a component of the `Pipeline` attempts to consume the `Task`'s result, the pipeline will fail if the result was not produced. [TEP-0048](https://github.com/tektoncd/community/blob/main/teps/0048-task-results-without-results.md) proposes introducing default values for results to help `Pipeline` authors manage this case.
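Because results are emitted verbatim, the difference between `echo` and `echo -n` ends up in the result itself: the trailing newline becomes part of the value. A quick sketch:

```shell
# Plain echo appends a newline, which would become part of the result value.
WITH_NL="$(mktemp)"; WITHOUT_NL="$(mktemp)"
echo    "abc" > "$WITH_NL"       # 4 bytes: 'a' 'b' 'c' '\n'
echo -n "abc" > "$WITHOUT_NL"    # 3 bytes: 'a' 'b' 'c'
wc -c < "$WITH_NL"
wc -c < "$WITHOUT_NL"
```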
#### Emitting Object `Results`

Emitting a task result of type `object` is implemented based on [TEP-0075](https://github.com/tektoncd/community/blob/main/teps/0075-object-param-and-result-types.md#emitting-object-results). You can initialize `object` results from a `task` using a JSON escaped string. For example, to assign the following data to an object result:

```
{"url":"abc.dev/sampler", "digest":"19f02276bf8dbdd62f069b922f10c65262cc34b710eea26ff928129a736be791"}
```

You will need to use escaped JSON to write to the pod termination message:

```
{\"url\":\"abc.dev/sampler\", \"digest\":\"19f02276bf8dbdd62f069b922f10c65262cc34b710eea26ff928129a736be791\"}
```

An example of a task definition producing an object result:

```yaml
kind: Task
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
metadata:
  name: write-object
  annotations:
    description: |
      A simple task that writes object
spec:
  results:
    - name: object-results
      type: object
      description: The object results
      properties:
        url:
          type: string
        digest:
          type: string
  steps:
    - name: write-object
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        echo -n "{\"url\":\"abc.dev/sampler\", \"digest\":\"19f02276bf8dbdd62f069b922f10c65262cc34b710eea26ff928129a736be791\"}" | tee $(results.object-results.path)
```

Note:

- The opening and closing braces are mandatory along with an escaped JSON.
- An object result must specify the `properties` section to define the schema, i.e. what keys are available for this object result. Failing to emit keys from the defined object results will result in a validation error at runtime.
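The backslashes in a script like the one above are shell quoting; what actually lands in the result file is plain JSON with the declared keys. A local sketch (the temp file stands in for `$(results.object-results.path)`, and the digest is shortened for brevity):

```shell
# The shell strips the backslash escapes, so the file receives plain JSON.
OUT="$(mktemp)"    # stand-in for $(results.object-results.path)
echo -n "{\"url\":\"abc.dev/sampler\", \"digest\":\"19f02276\"}" | tee "$OUT" >/dev/null
cat "$OUT"
```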
#### Emitting Array `Results`

Tekton `Tasks` also support defining a result of type `array` and `object` in addition to `string`. Emitting a task result of type `array` is a `beta` feature implemented based on [TEP-0076](https://github.com/tektoncd/community/blob/main/teps/0076-array-result-types.md#emitting-array-results). You can initialize `array` results from a `task` using a JSON escaped string; for example, to assign the following list of animals to an array result:

```
["cat", "dog", "squirrel"]
```

You will have to initialize the pod termination message as escaped JSON:

```
[\"cat\", \"dog\", \"squirrel\"]
```

An example of a task definition producing an array result with the greetings `["hello", "world"]`:

```yaml
kind: Task
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
metadata:
  name: write-array
  annotations:
    description: |
      A simple task that writes array
spec:
  results:
    - name: array-results
      type: array
      description: The array results
  steps:
    - name: write-array
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        echo -n "[\"hello\", \"world\"]" | tee $(results.array-results.path)
```

**Note:** the opening and closing square brackets are mandatory along with an escaped JSON.

Now, similar to Go's zero-valued slices, an array result is considered uninitialized (i.e. `nil`) if it's set to an empty array, i.e. `[]`. For example, `echo -n "[]" | tee $(results.result.path)` is equivalent to `result: []string{}`. The result initialized in this way will have zero length, and trying to access this array with the star notation, i.e. `$(tasks.write-array-results.results.result[*])`, or an element of such an array, i.e. `$(tasks.write-array-results.results.result[0])`, results in `InvalidTaskResultReference` with `index out of range`.

Depending on your use case, you might have to initialize a result array to the desired length, just like using the `make()` function in Go. The `make()` function allocates an array and returns a slice of the specified length, i.e. `result := make([]string, 5)` results in `["", "", "", "", ""]`. Similarly, set the array result to the following JSON escaped expressions to allocate an array of size 2:

```
echo -n "[\"\", \"\"]" | tee $(results.array-results.path)                                          # an array of size 2 with empty strings
echo -n "[\"first array element\", \"\"]" | tee $(results.array-results.path)                       # an array of size 2 with only the first element initialized
echo -n "[\"\", \"second array element\"]" | tee $(results.array-results.path)                      # an array of size 2 with only the second element initialized
echo -n "[\"first array element\", \"second array element\"]" | tee $(results.array-results.path)   # an array of size 2 with both elements initialized
```
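Pre-sizing keeps positional references stable: index `[1]` still means "second element" even when the first is empty. A POSIX-shell sketch of that indexing, with positional parameters standing in for the array result:

```shell
# Positional parameters stand in for a size-2 array result where only the
# second element is initialized, as in the examples above.
set -- "" "second array element"
LEN="$#"          # length stays 2 even though the first element is empty
SECOND="$2"       # 1-based $2 corresponds to index [1] in the result
echo "length: $LEN"
echo "element[1]: $SECOND"
```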
It is also important to maintain the order of the elements in an array. The order in which the task result was initialized is the order in which it is consumed by the dependent tasks. For example, say a task produces two array results, `images` and `configmaps`. The pipeline author can implement deployment by indexing into each array result:

```yaml
    - name: deploy-stage-1
      taskRef:
        name: deploy
      params:
        - name: image
          value: $(tasks.setup.results.images[0])
        - name: configmap
          value: $(tasks.setup.results.configmap[0])
    - name: deploy-stage-2
      taskRef:
        name: deploy
      params:
        - name: image
          value: $(tasks.setup.results.images[1])
        - name: configmap
          value: $(tasks.setup.results.configmap[1])
```

As a task author, make sure the task array results are initialized accordingly, or set to a zero value when there is no `image` or `configmap`, to maintain the order.

**Note:** Tekton uses [termination messages](https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/#writing-and-reading-a-termination-message). As written in [tektoncd/pipeline#4808](https://github.com/tektoncd/pipeline/issues/4808), the maximum size of a `Task`'s results is limited by the container termination message feature of Kubernetes. At present, the limit is [4096 bytes](https://github.com/kubernetes/kubernetes/blob/96e13de777a9eb57f87889072b68ac40467209ac/pkg/kubelet/container/runtime.go#L632).
This also means that the number of Steps in a Task affects the maximum size of a Result, as each Step is implemented as a container in the TaskRun's pod. The more containers we have in our pod, the smaller the allowed size of each container's message, meaning that the more steps you have in a Task, the smaller the result for each step can be. For example, if you have 10 steps, each step's Result will have a maximum size of less than 1KB.

If your `Task` writes a large number of small results, you can work around this limitation by writing each result from a separate `Step` so that each `Step` has its own termination message. If a termination message is detected as being too large the TaskRun will be placed into a failed state with the following message: "Termination message is above max allowed size 4096, caused by large task result". Since Tekton also uses the termination message for some internal information, the real available size will be less than 4096 bytes.

As a general rule of thumb, if a result needs to be larger than a kilobyte, you should likely use a [`Workspace`](#specifying-workspaces) to store and pass it between `Tasks` within a `Pipeline`.
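The per-step arithmetic is a simple division of the shared budget across step containers. This is a simplified model (the real accounting also includes Tekton's internal metadata in the termination message), but it shows why more steps means smaller per-step results:

```shell
# Rough budget: the termination-message space is shared across step
# containers, so more steps means a smaller per-step result budget.
# Simplified model; Tekton's own metadata reduces the real budget further.
LIMIT=4096      # bytes for a container termination message
STEPS=10
PER_STEP=$(( LIMIT / STEPS ))
echo "per-step result budget: ~${PER_STEP} bytes"
```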
#### Larger `Results` using sidecar logs

This is a beta feature which is guarded behind its own feature flag. The `results-from` feature flag [must be set to `"sidecar-logs"`](install.md#enabling-larger-results-using-sidecar-logs) to enable larger results using sidecar logs.

Instead of using termination messages to store results, the taskrun controller injects a sidecar container which monitors the results of all the steps. The sidecar mounts the volume where the results of all the steps are stored. As soon as it finds a new result, it logs it to std out. The controller has access to the logs of the sidecar container (**CAUTION:** you need to enable access to [kubernetes pod/logs](install.md#enabling-larger-results-using-sidecar-logs)).

This feature allows users to store up to 4 KB per result by default. Because we are not limited by the size of the termination messages, users can have as many results as they require (or until the CRD reaches its limit). If the size of a result exceeds this limit, the TaskRun will be placed into a failed state with the following message: "Result exceeded the maximum allowed limit."

**Note:** If you require even larger results, you can specify a different upper limit per result by setting the `max-result-size` feature flag to your desired size in bytes ([see instructions](install.md#enabling-larger-results-using-sidecar-logs)). **CAUTION:** the larger you make the size, the more likely the CRD will reach the max limit enforced by the `etcd` server, leading to a bad user experience.

Refer to the detailed instructions listed in [additional config](additional-configs.md#enabling-larger-results-using-sidecar-logs) to learn how to enable this feature.

### Specifying `Volumes`

Specifies one or more [`Volumes`](https://kubernetes.io/docs/concepts/storage/volumes/) that the `Steps` in your `Task` require to execute, in addition to volumes that are implicitly created for input and output resources.

For example, you can use `Volumes` to do the following:

- Mount a Kubernetes [`Secret`](auth.md).
- Create an `emptyDir` persistent `Volume` that caches data across multiple `Steps`.
- Mount a [Kubernetes `ConfigMap`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) as a `Volume` source.
- Mount a host's Docker socket to use a `Dockerfile` for building container images. **Note:** Building a container image on-cluster using `docker build` is **very unsafe** and is mentioned only for the sake of the example. Use [kaniko](https://github.com/GoogleContainerTools/kaniko) instead.
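As a minimal sketch of the `emptyDir` caching pattern above (the step names and paths are illustrative, not from the Tekton docs), a volume mounted into two `Steps` lets the second reuse what the first produced:

```yaml
spec:
  steps:
    - name: populate-cache      # hypothetical step names
      image: ubuntu
      script: echo cached-data > /cache/item
      volumeMounts:
        - name: cache
          mountPath: /cache
    - name: use-cache
      image: ubuntu
      script: cat /cache/item
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir: {}
```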
### Specifying a `Step` template

The `stepTemplate` field specifies a [`Container`](https://kubernetes.io/docs/concepts/containers/) configuration that will be used as the starting point for all of the `Steps` in your `Task`. Individual configurations specified within `Steps` supersede the template wherever overlap occurs.

In the example below, the `Task` specifies a `stepTemplate` field with the environment variable `FOO` set to `bar`. The first `Step` in the `Task` uses that value for `FOO`, but the second `Step` overrides the value set in the template with `baz`. Additionally, the `stepTemplate` sets the environment variable `TOKEN` to `public`. The last `Step` in the `Task` uses the value `private` from the referenced secret to override the value set in the template:

```yaml
stepTemplate:
  env:
    - name: "FOO"
      value: "bar"
    - name: "TOKEN"
      value: "public"
steps:
  - image: ubuntu
    command: [echo]
    args: ["FOO is $(FOO)"]
  - image: ubuntu
    command: [echo]
    args: ["FOO is $(FOO)"]
    env:
      - name: "FOO"
        value: "baz"
  - image: ubuntu
    command: [echo]
    args: ["TOKEN is $(TOKEN)"]
    env:
      - name: "TOKEN"
        valueFrom:
          secretKeyRef:
            key: token
            name: test
```

The relevant part of the secret `test`'s data is as follows:

```yaml
data:
  # The decoded value of "cHJpdmF0ZQo=" is "private"
  token: cHJpdmF0ZQo=
```
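You can check the claim about the secret's contents locally; `base64 -d` recovers the plaintext value the last step receives:

```shell
# Decode the secret value shown above; command substitution strips the
# trailing newline that was encoded along with "private".
DECODED="$(printf '%s' 'cHJpdmF0ZQo=' | base64 -d)"
echo "$DECODED"
```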
### Specifying `Sidecars`

The `sidecars` field specifies a list of [`Containers`](https://kubernetes.io/docs/concepts/containers/) to run alongside the `Steps` in your `Task`. You can use `Sidecars` to provide auxiliary functionality, such as [Docker in Docker](https://hub.docker.com/_/docker) or running a mock API server that your app can hit during testing. `Sidecars` spin up before your `Task` executes and are deleted after the `Task` execution completes. For further information, see [`Sidecars` in `TaskRuns`](taskruns.md#specifying-sidecars).

**Note:** Starting in v0.62, you can enable native Kubernetes sidecar support using the `enable-kubernetes-sidecar` feature flag ([see instructions](additional-configs.md#customizing-the-pipelines-controller-behavior)). If Kubernetes does not wait for your sidecar application to be ready, use a `startupProbe` to help Kubernetes identify when it is ready. Refer to the detailed instructions listed in [additional config](additional-configs.md#enabling-larger-results-using-sidecar-logs) to learn how to enable this feature.

In the example below, a `Step` uses a Docker-in-Docker `Sidecar` to build a Docker image:

```yaml
steps:
  - image: docker
    name: client
    script: |
      #!/usr/bin/env bash
      cat > Dockerfile << EOF
      FROM ubuntu
      RUN apt-get update
      ENTRYPOINT ["echo", "hello"]
      EOF
      docker build -t hello . && docker run hello
      docker images
    volumeMounts:
      - mountPath: /var/run/
        name: dind-socket
sidecars:
  - image: docker:18.05-dind
    name: server
    securityContext:
      privileged: true
    volumeMounts:
      - mountPath: /var/lib/docker
        name: dind-storage
      - mountPath: /var/run/
        name: dind-socket
volumes:
  - name: dind-storage
    emptyDir: {}
  - name: dind-socket
    emptyDir: {}
```

`Sidecars`, just like `Steps`, can also run scripts:

```yaml
sidecars:
  - image: busybox
    name: hello-sidecar
    script: |
      echo 'Hello from sidecar!'
```

**Note:** Tekton's current `Sidecar` implementation contains a bug. Tekton uses a container image named `nop` to terminate `Sidecars`. That image is configured by passing a flag to the Tekton controller. If the configured `nop` image contains the exact command the `Sidecar` was executing before receiving a "stop" signal, the `Sidecar` keeps running, eventually causing the `TaskRun` to time out with an error. For more information, see [issue 1347](https://github.com/tektoncd/pipeline/issues/1347).

### Specifying a display name

The `displayName` field is an optional field that allows you to add a user-facing name to the task that may be used to populate a UI.

### Adding a description

The `description` field is an optional field that allows you to add an informative description to the `Task`.
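Tekton resolves references like `$(params.someURL)` by plain string replacement when the TaskRun executes. A rough local sketch of that mechanism (`sed` stands in for the controller; the param name and value are made up):

```shell
# Plain string replacement, roughly as the controller performs it; sed is
# only a stand-in and the substituted value is hypothetical.
TEMPLATE='echo "building $(params.someURL)"'
RESOLVED="$(printf '%s' "$TEMPLATE" | sed 's|\$(params\.someURL)|http://google.com|')"
echo "$RESOLVED"
```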
### Using variable substitution

Tekton provides variables to inject values into the contents of certain fields. The values you can inject come from a range of sources, including other fields in the `Task`, context-sensitive information that Tekton provides, and runtime information received from a `TaskRun`.

The mechanism of variable substitution is quite simple: string replacement is performed by the Tekton Controller when a `TaskRun` is executed.

`Tasks` allow you to substitute variable names for the following entities:

- [Parameters and resources](#substituting-parameters-and-resources)
- [`Array` parameters](#substituting-array-parameters)
- [`Workspaces`](#substituting-workspace-paths)
- [`Volume` names and types](#substituting-volume-names-and-paths)

See the [complete list of variable substitutions for Tasks](variables.md#variables-available-in-a-task) and the [list of fields that accept substitutions](variables.md#fields-that-accept-variable-substitutions).

#### Substituting parameters and resources

[`params`](#specifying-parameters) and [`resources`](#specifying-resources) attributes can replace variable values as follows:

To reference a parameter in a `Task`, use the following syntax, where `<name>` is the name of the parameter:

```shell
# dot notation
# Here, the name cannot contain dots, e.g. foo.bar is not allowed.
# If the name contains dots, it can only be accessed via the bracket notation.
$(params.<name>)

# or bracket notation (wrapping <name> with either single or double quotes):
# Here, the name can contain dots, e.g. foo.bar is allowed.
$(params['<name>'])
$(params["<name>"])
```

To access parameter values from resources, see [variable substitution](resources.md#variable-substitution).

#### Substituting `Array` parameters

You can expand referenced parameters of type `array` using the star operator. To do so, add the operator (`[*]`) to the named parameter to insert the array elements in the spot of the reference string.
example  given a  params  field with the contents listed below  you can expand  command    first      params array param        last    to  command    first    some    array    elements    last         yaml params      name  array param     value           some           array           elements       You   must   reference parameters of type  array  in a completely isolated string within a larger  string  array  Referencing an  array  parameter in any other way will result in an error  For example  if  build args  is a parameter of type  array   then the following example is an invalid  Step  because the string isn t isolated      yaml   name  build step   image  gcr io cloud builders some image   args    build    additionalArg   params build args            Similarly  referencing  build args  in a non  array  field is also invalid      yaml   name  build step   image     params build args        args    build    args        A valid reference to the  build args  parameter is isolated and in an eligible field   args   in this case       yaml   name  build step   image  gcr io cloud builders some image   args    build      params build args        additionalArg         array  param when referenced in  args  section of the  step  can be utilized in the  script  as command line arguments      yaml   name  build step   image  gcr io cloud builders some image   args      params flags         script           usr bin env bash     echo  The script received    flags       echo  The first command line argument is  1        Indexing into an array to reference an individual array element is supported as an   alpha   feature   enable api fields  alpha    Referencing an individual array element in  args       yaml   name  build step   image  gcr io cloud builders some image   args      params flags 0          Referencing an individual array element in  script       yaml   name  build step   image  gcr io cloud builders some image   script           usr bin env bash     echo    
params flags 0              Substituting  Workspace  paths  You can substitute paths to  Workspaces  specified within a  Task  as follows      yaml   workspaces myworkspace path       Since the  Volume  name is randomized and only set when the  Task  executes  you can also substitute the volume name as follows      yaml   workspaces myworkspace volume            Substituting  Volume  names and types  You can substitute  Volume  names and  types  https   kubernetes io docs concepts storage volumes  types of volumes  by parameterizing them  Tekton supports popular  Volume  types such as  ConfigMap    Secret   and  PersistentVolumeClaim   See this  example   mounting a configmap as a volume source  to find out how to perform this type of substitution in your  Task         Substituting in  Script  blocks  Variables can contain any string  including snippets of script that can be injected into a Task s  Script  field  If you are using Tekton s variables in your Task s  Script  field be aware that the strings you re interpolating could include executable instructions   Preventing a substituted variable from executing as code depends on the container image  language or shell that your Task uses  Here s an example of interpolating a Tekton variable into a  bash   Script  block that prevents the variable s string contents from being executed      yaml   Task yaml spec    steps        image  an image that runs bash       env            name  SCRIPT CONTENTS           value    params script        script            printf   s     SCRIPT CONTENTS     input script      This works by injecting Tekton s variable as an environment variable into the Step s container  The  printf  program is then used to write the environment variable s content to a file      Code examples  Study the following code examples to better understand how to configure your  Tasks       Building and pushing a Docker image   building and pushing a docker image     Mounting multiple  Volumes    mounting 
multiple volumes     Mounting a  ConfigMap  as a  Volume  source   mounting a configmap as a volume source     Using a  Secret  as an environment source   using a secret as an environment source     Using a  Sidecar  in a  Task    using a sidecar in a task    Tip  See the collection of Tasks in the  Tekton community catalog  https   github com tektoncd catalog  for more examples       Building and pushing a Docker image  The following example  Task  builds and pushes a  Dockerfile  built image     Note    Building a container image using  docker build  on cluster is   very unsafe   and is shown here only as a demonstration  Use  kaniko  https   github com GoogleContainerTools kaniko  instead      yaml spec    params        This may be overridden  but is a sensible default        name  dockerfileName       type  string       description  The name of the Dockerfile       default  Dockerfile       name  image       type  string       description  The image to build and push   workspaces      name  source   steps        name  dockerfile build       image  gcr io cloud builders docker       workingDir     workspaces source path         args                       build                no cache                tag                params image                 file                params dockerfileName                                   volumeMounts            name  docker socket           mountPath   var run docker sock        name  dockerfile push       image  gcr io cloud builders docker       args    push      params image          volumeMounts            name  docker socket           mountPath   var run docker sock      As an implementation detail  this Task mounts the host s daemon socket    volumes        name  docker socket       hostPath          path   var run docker sock         type  Socket           Mounting multiple  Volumes   The example below illustrates mounting multiple  Volumes       yaml spec    steps        image  ubuntu       script               usr bin 
env bash         curl https   foo com    var my volume       volumeMounts            name  my volume           mountPath   var my volume        image  ubuntu       script               usr bin env bash         cat  etc my volume       volumeMounts            name  my volume           mountPath   etc my volume    volumes        name  my volume       emptyDir               Mounting a  ConfigMap  as a  Volume  source  The example below illustrates how to mount a  ConfigMap  to act as a  Volume  source      yaml spec    params        name  CFGNAME       type  string       description  Name of config map       name  volumeName       type  string       description  Name of volume   steps        image  ubuntu       script               usr bin env bash         cat  var configmap test       volumeMounts            name     params volumeName             mountPath   var configmap    volumes        name     params volumeName         configMap          name     params CFGNAME             Using a  Secret  as an environment source  The example below illustrates how to use a  Secret  as an environment source      yaml apiVersion  tekton dev v1   or tekton dev v1beta1 kind  Task metadata    name  goreleaser spec    params        name  package       type  string       description  base package to build in       name  github token secret       type  string       description  name of the secret holding the github token       default  github token   workspaces      name  source   steps        name  release       image  goreleaser goreleaser       workingDir    workspaces source path    params package        command            goreleaser       args            release       env            name  GOPATH           value   workspace           name  GITHUB TOKEN           valueFrom              secretKeyRef                name    params github token secret                key  bot token           Using a  Sidecar  in a  Task   The example below illustrates how to use a  Sidecar  in your  Task 
      yaml apiVersion  tekton dev v1   or tekton dev v1beta1 kind  Task metadata    name  with sidecar task spec    params        name  sidecar image       type  string       description  Image name of the sidecar container       name  sidecar env       type  string       description  Environment variable value   sidecars        name  sidecar       image    params sidecar image        env            name  SIDECAR ENV           value    params sidecar env    steps        name  test       image  hello world         Debugging  This section describes techniques for debugging the most common issues in  Tasks        Inspecting the file structure  A common issue when configuring  Tasks  stems from not knowing the location of your data  For the most part  files ingested and output by your  Task  live in the   workspace  directory  but the specifics can vary  To inspect the file structure of your  Task   add a step that outputs the name of every file stored in the   workspace  directory to the build log  For example      yaml   name  build and push 1   image  ubuntu   command         bin bash   args         c               set  ex       find  workspace      You can also choose to examine the  contents  of every file used by your  Task       yaml   name  build and push 1   image  ubuntu   command         bin bash   args         c               set  ex       find  workspace   xargs cat          Inspecting the  Pod   To inspect the contents of the  Pod  used by your  Task  at a specific stage in the  Task s  execution  log into the  Pod  and add a  Step  that pauses the  Task  at the desired stage  For example      yaml   name  pause   image  docker   args    sleep    6000            Running Step Containers as a Non Root User  All steps that do not require to be run as a root user should make use of TaskRun features to designate the container for a step runs as a user without root permissions  As a best practice  running containers as non root should be built into the 
container image to avoid any possibility of the container being run as root  However  as a further measure of enforcing this practice  steps can make use of a  securityContext  to specify how the container should run   An example of running Task steps as a non root user is shown below      yaml apiVersion  tekton dev v1   or tekton dev v1beta1 kind  Task metadata    name  show non root steps spec    steps        no securityContext specified so will use       securityContext from TaskRun podTemplate       name  show user 1001       image  ubuntu       command            ps       args             aux        securityContext specified so will run as       user 2000 instead of 1001       name  show user 2000       image  ubuntu       command            ps       args             aux        securityContext          runAsUser  2000     apiVersion  tekton dev v1   or tekton dev v1beta1 kind  TaskRun metadata    generateName  show non root steps run  spec    taskRef      name  show non root steps   podTemplate      securityContext        runAsNonRoot  true       runAsUser  1001      In the example above  the step  show user 2000  specifies via a  securityContext  that the container for the step should run as user 2000  A  securityContext  must still be specified via a TaskRun  podTemplate  for this TaskRun to run in a Kubernetes environment that enforces running containers as non root as a requirement   The  runAsNonRoot  property specified via the  podTemplate  above validates that steps part of this TaskRun are running as non root users and will fail to start any step container that attempts to run as root  Only specifying  runAsNonRoot  true  will not actually run containers as non root as the property simply validates that steps are not running as root  It is the  runAsUser  property that is actually used to set the non root user ID for the container   If a step defines its own  securityContext   it will be applied for the step container over the  securityContext  
specified at the pod level via the TaskRun  podTemplate    More information about Pod and Container Security Contexts can be found via the  Kubernetes website  https   kubernetes io docs tasks configure pod container security context  set the security context for a pod    The example Task TaskRun above can be found as a  TaskRun example     examples v1 taskruns run steps as non root yaml        Task  Authoring Recommendations  Recommendations for authoring  Tasks  are available in the  Tekton Catalog  recommendations     recommendations   https   github com tektoncd catalog blob main recommendations md       Except as otherwise noted  the contents of this page are licensed under the  Creative Commons Attribution 4 0 License  https   creativecommons org licenses by 4 0    Code samples are licensed under the  Apache 2 0 License  https   www apache org licenses LICENSE 2 0  "}
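The variable-substitution mechanism described in the Tasks documentation above is plain string replacement performed by the controller, with isolated `$(params.x[*])` references splicing array elements into the surrounding list. A minimal, hypothetical sketch of that behavior (not Tekton's actual implementation; `substitute_params` is an illustrative name):

```python
import re

def substitute_params(args, params):
    """Sketch of Tekton-style $(params.<name>) substitution.

    An arg that is exactly an isolated "$(params.x[*])" reference is
    replaced by the array's elements; otherwise plain string replacement
    of dot- and bracket-notation references is performed.
    """
    star = re.compile(r'^\$\(params\.([\w-]+)\[\*\]\)$')
    out = []
    for arg in args:
        m = star.match(arg)
        if m and isinstance(params.get(m.group(1)), list):
            # Isolated array reference: splice the elements in place.
            out.extend(params[m.group(1)])
            continue
        for name, value in params.items():
            if isinstance(value, str):
                # Dot and bracket notations are equivalent for simple names.
                for pat in (f"$(params.{name})",
                            f"$(params['{name}'])",
                            f'$(params["{name}"])'):
                    arg = arg.replace(pat, value)
        out.append(arg)
    return out
```

For example, `substitute_params(["first", "$(params.array-param[*])", "last"], {"array-param": ["some", "array", "elements"]})` expands to `["first", "some", "array", "elements", "last"]`, mirroring the `command` expansion shown in the documentation.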
{"questions":"tekton Cluster Resolver Resolver Type weight 310 Cluster Resolver","answers":"<!--\n---\nlinkTitle: \"Cluster Resolver\"\nweight: 310\n---\n-->\n\n# Cluster Resolver\n\n## Resolver Type\n\nThis Resolver responds to type `cluster`.\n\n## Parameters\n\n| Param Name  | Description                                           | Example Value                    |\n|-------------|-------------------------------------------------------|----------------------------------|\n| `kind`      | The kind of resource to fetch.                        | `task`, `pipeline`, `stepaction` |\n| `name`      | The name of the resource to fetch.                    | `some-pipeline`, `some-task`     |\n| `namespace` | The namespace in the cluster containing the resource. | `default`, `other-namespace`     |\n\n## Requirements\n\n- A cluster running Tekton Pipeline v0.41.0 or later.\n- The [built-in remote resolvers installed](.\/install.md#installing-and-configuring-remote-task-and-pipeline-resolution).\n- The `enable-cluster-resolver` feature flag in the `resolvers-feature-flags` ConfigMap\n  in the `tekton-pipelines-resolvers` namespace set to `true`.\n- [Beta features](.\/additional-configs.md#beta-features) enabled.\n\n## Configuration\n\nThis resolver uses a `ConfigMap` for its settings. 
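For illustration, a minimal override of the resolver's settings might look like the sketch below. The ConfigMap name and namespace follow the shipped `cluster-resolver-config.yaml`; the option values are hypothetical:

```yaml
# Hypothetical settings for the cluster resolver; option names come from
# the Options table in this document, values are illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-resolver-config
  namespace: tekton-pipelines-resolvers
data:
  default-kind: task
  default-namespace: default
  blocked-namespaces: kube-system,tekton-pipelines
```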
See\n[`..\/config\/resolvers\/cluster-resolver-config.yaml`](..\/config\/resolvers\/cluster-resolver-config.yaml)\nfor the name, namespace and defaults that the resolver ships with.\n\n### Options\n\n| Option Name | Description | Example Values |\n|-------------|-------------|----------------|\n| `default-kind` | The default resource kind to fetch if not specified in parameters. | `task`, `pipeline`, `stepaction` |\n| `default-namespace` | The default namespace to fetch resources from if not specified in parameters. | `default`, `some-namespace` |\n| `allowed-namespaces` | An optional comma-separated list of namespaces which the resolver is allowed to access. Defaults to empty, meaning all namespaces are allowed. | `default,some-namespace`, (empty) |\n| `blocked-namespaces` | An optional comma-separated list of namespaces which the resolver is blocked from accessing. If the value is `*`, all namespaces will be disallowed, and allowed namespaces will need to be explicitly listed in `allowed-namespaces`. Defaults to empty, meaning all namespaces are allowed. 
| `default,other-namespace`, `*`, (empty) |\n\n## Usage\n\n### Task Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: remote-task-reference\nspec:\n  taskRef:\n    resolver: cluster\n    params:\n    - name: kind\n      value: task\n    - name: name\n      value: some-task\n    - name: namespace\n      value: namespace-containing-task\n```\n\n### StepAction Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: remote-stepaction-reference\nspec:\n  steps:\n  - name: step-action-example\n    ref:\n      resolver: cluster\n      params:\n      - name: kind\n        value: stepaction\n      - name: name\n        value: some-stepaction\n      - name: namespace\n        value: namespace-containing-stepaction\n```\n\n### Pipeline Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: remote-pipeline-reference\nspec:\n  pipelineRef:\n    resolver: cluster\n    params:\n    - name: kind\n      value: pipeline\n    - name: name\n      value: some-pipeline\n    - name: namespace\n      value: namespace-containing-pipeline\n```\n\n## `ResolutionRequest` Status\nThe `ResolutionRequest.Status.RefSource` field captures the source where the remote resource came from. It includes 3 subfields: `url`, `digest` and `entrypoint`.\n- `url`: the unique full identifier for the resource in the cluster. It is in the format `<resource uri>@<uid>`. The resource URI part is the namespace-scoped URI, i.e. `\/apis\/GROUP\/VERSION\/namespaces\/NAMESPACE\/RESOURCETYPE\/NAME`. See [K8s Resource URIs](https:\/\/kubernetes.io\/docs\/reference\/using-api\/api-concepts\/#resource-uris) for more details.\n- `digest`: hex-encoded sha256 checksum of the content in the in-cluster resource's spec field. The checksum is computed over the spec content rather than the whole object because the metadata of in-cluster resources might be modified, e.g. annotations. 
Therefore, the checksum of the spec content should be sufficient for source verifiers to verify if things have been changed maliciously even though the metadata is modified with good intentions.\n- `entrypoint`: ***empty*** because the path information is already available in the url field.\n\nExample:\n- TaskRun Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: cluster-demo\nspec:\n  taskRef:\n    resolver: cluster\n    params:\n    - name: kind\n      value: task\n    - name: name\n      value: a-simple-task\n    - name: namespace\n      value: default\n```\n\n\n- `ResolutionRequest`\n```yaml\napiVersion: resolution.tekton.dev\/v1beta1\nkind: ResolutionRequest\nmetadata:\n  labels:\n    resolution.tekton.dev\/type: cluster\n  name: cluster-7a04be6baa3eeedd232542036b7f3b2d\n  namespace: default\n  ownerReferences: ...\nspec:\n  params:\n  - name: kind\n    value: task\n  - name: name\n    value: a-simple-task\n  - name: namespace\n    value: default\nstatus:\n  annotations: ...\n  conditions: ...\n  data: xxx\n  refSource:\n    digest:\n      sha256: 245b1aa918434cc8195b4d4d026f2e43df09199e2ed31d4dfd9c2cbea1c7ce54\n    uri: \/apis\/tekton.dev\/v1beta1\/namespaces\/default\/task\/a-simple-task@3b82d8c4-f89e-47ea-a49d-3be0dca4c038\n```\n---\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).","site":"tekton","answers_cleaned":"         linkTitle   Cluster Resolver  weight  310            Cluster Resolver     Resolver Type  This Resolver responds to type  cluster       Parameters    Param Name    Description                                             Example Value                                                                                                                                    
kind         The kind of resource to fetch                            task    pipeline    stepaction       name         The name of the resource to fetch                        some pipeline    some task           namespace    The namespace in the cluster containing the resource     default    other namespace            Requirements    A cluster running Tekton Pipeline v0 41 0 or later    The  built in remote resolvers installed    install md installing and configuring remote task and pipeline resolution     The  enable cluster resolver  feature flag in the  resolvers feature flags  ConfigMap   in the  tekton pipelines resolvers  namespace set to  true      Beta features    additional configs md beta features  enabled      Configuration  This resolver uses a  ConfigMap  for its settings  See      config resolvers cluster resolver config yaml      config resolvers cluster resolver config yaml  for the name  namespace and defaults that the resolver ships with       Options    Option Name            Description                                                                                                                                           Example Values                                                                                                                                                                                                                                              default kind          The default resource kind to fetch if not specified in parameters                                                                                      task    pipeline    stepaction         default namespace     The default namespace to fetch resources from if not specified in parameters                                                                           default    some namespace              allowed namespaces    An optional comma separated list of namespaces which the resolver is allowed to access  Defaults to empty  meaning all namespaces are 
allowed          default some namespace    empty        blocked namespaces    An optional comma separated list of namespaces which the resolver is blocked from accessing  If the value is a     all namespaces will be disallowed and allowed namespace will need to be explicitely listed in  allowed namespaces   Defaults to empty  meaning all namespaces are allowed     default other namespace         empty        Usage      Task Resolution     yaml apiVersion  tekton dev v1beta1 kind  TaskRun metadata    name  remote task reference spec    taskRef      resolver  cluster     params        name  kind       value  task       name  name       value  some task       name  namespace       value  namespace containing task          StepAction Resolution     yaml apiVersion  tekton dev v1beta1 kind  Task metadata    name  remote stepaction reference spec    steps      name  step action example     ref       resolver  cluster       params          name  kind         value  stepaction         name  name         value  some stepaction         name  namespace         value  namespace containing stepaction          Pipeline resolution     yaml apiVersion  tekton dev v1beta1 kind  PipelineRun metadata    name  remote pipeline reference spec    pipelineRef      resolver  cluster     params        name  kind       value  pipeline       name  name       value  some pipeline       name  namespace       value  namespace containing pipeline          ResolutionRequest  Status  ResolutionRequest Status RefSource  field captures the source where the remote resource came from  It includes the 3 subfields   url    digest  and  entrypoint      url   url is the unique full identifier for the resource in the cluster  It is in the format of   resource uri   uid    Resource URI part is the namespace scoped uri i e    apis GROUP VERSION namespaces NAMESPACE RESOURCETYPE NAME   See  K8s Resource URIs  https   kubernetes io docs reference using api api concepts  resource uris  for more details     
digest   hex encoded sha256 checksum of the content in the in cluster resource s spec field  The reason why it s the checksum of the spec content rather than the whole object is because the metadata of in cluster resources might be modified i e  annotations  Therefore  the checksum of the spec content should be sufficient for source verifiers to verify if things have been changed maliciously even though the metadata is modified with good intentions     entrypoint      empty    because the path information is already available in the url field   Example    TaskRun Resolution     yaml apiVersion  tekton dev v1beta1 kind  TaskRun metadata    name  cluster demo spec    taskRef      resolver  cluster     params        name  kind       value  task       name  name       value  a simple task       name  namespace       value  default          ResolutionRequest     yaml apiVersion  resolution tekton dev v1beta1 kind  ResolutionRequest metadata    labels      resolution tekton dev type  cluster   name  cluster 7a04be6baa3eeedd232542036b7f3b2d   namespace  default   ownerReferences      spec    params      name  kind     value  task     name  name     value  a simple task     name  namespace     value  default status    annotations        conditions        data  xxx   refSource      digest        sha256  245b1aa918434cc8195b4d4d026f2e43df09199e2ed31d4dfd9c2cbea1c7ce54     uri   apis tekton dev v1beta1 namespaces default task a simple task 3b82d8c4 f89e 47ea a49d 3be0dca4c038          Except as otherwise noted  the content of this page is licensed under the  Creative Commons Attribution 4 0 License  https   creativecommons org licenses by 4 0    and code samples are licensed under the  Apache 2 0 License  https   www apache org licenses LICENSE 2 0  "}
{"questions":"tekton based on so that it possible for the taskruns to execute parallel while sharing volume weight 405 Affinity Assistants Affinity Assistant is a feature to coschedule to the same node Affinity Assistants","answers":"<!--\n---\nlinkTitle: \"Affinity Assistants\"\nweight: 405\n---\n-->\n\n# Affinity Assistants\nAffinity Assistant is a feature to coschedule `PipelineRun` `pods` to the same node\nbased on [kubernetes pod affinity](https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#inter-pod-affinity-and-anti-affinity) so that it is possible for the TaskRuns to execute in parallel while sharing a volume.\nAvailable Affinity Assistant Modes are **coschedule workspaces**, **coschedule pipelineruns**,\n**isolate pipelinerun** and **disabled**.\n\n> :seedling: **coschedule pipelineruns** and **isolate pipelinerun** modes are [**alpha features**](.\/additional-configs.md#alpha-features).\n> **coschedule workspaces** is a **stable feature**.\n\n* **coschedule workspaces** - When a `PersistentVolumeClaim` is used as a volume source for a `Workspace` in a `PipelineRun`,\nall `TaskRun` pods within the `PipelineRun` that share the `Workspace` will be scheduled to the same Node. (**Note:** Only one pvc-backed workspace can be mounted to each TaskRun in this mode.)\n\n* **coschedule pipelineruns** - All `TaskRun` pods within the `PipelineRun` will be scheduled to the same Node.\n\n* **isolate pipelinerun** - All `TaskRun` pods within the `PipelineRun` will be scheduled to the same Node,\nand only one PipelineRun is allowed to run on a node at a time.\n\n* **disabled** - The Affinity Assistant is disabled. No pod coscheduling behavior.\n\nThis means that the Affinity Assistant is incompatible with other affinity rules\nconfigured for the `TaskRun` pods (i.e. 
other affinity rules specified in a custom [PodTemplate](pipelineruns.md#specifying-a-pod-template) will be overwritten by the Affinity Assistant).\nIf the `PipelineRun` has a custom [PodTemplate](pipelineruns.md#specifying-a-pod-template) configured, the `NodeSelector` and `Tolerations` fields will also be set on the Affinity Assistant pod. The Affinity Assistant\nis deleted when the `PipelineRun` is completed.\n\nCurrently, the Affinity Assistant Modes can be configured by the `disable-affinity-assistant` and `coschedule` feature flags.\nThe `disable-affinity-assistant` feature flag is now deprecated and will be removed in release `v0.60`. At that time, the Affinity Assistant Mode will be determined solely by the `coschedule` feature flag.\n\nThe following chart summarizes the Affinity Assistant Modes with different combinations of the `disable-affinity-assistant` and `coschedule` feature flags during migration (when both feature flags are present) and after the migration (when only the `coschedule` flag is present).\n\n<table>\n    <thead>\n        <tr>\n            <th>disable-affinity-assistant<\/th>\n            <th>coschedule<\/th>\n            <th>behavior during migration<\/th>\n            <th>behavior after migration<\/th>\n        <\/tr>\n    <\/thead>\n    <tbody>\n        <tr>\n            <td>false (default)<\/td>\n            <td>disabled<\/td>\n            <td>N\/A: invalid<\/td>\n            <td>disabled<\/td>\n        <\/tr>\n        <tr>\n            <td>false (default)<\/td>\n            <td>workspaces (default)<\/td>\n            <td>coschedule workspaces<\/td>\n            <td>coschedule workspaces<\/td>\n        <\/tr>\n        <tr>\n            <td>false (default)<\/td>\n            <td>pipelineruns<\/td>\n            <td>N\/A: invalid<\/td>\n            <td>coschedule pipelineruns<\/td>\n        <\/tr>\n        <tr>\n            <td>false (default)<\/td>\n            <td>isolate-pipelinerun<\/td>\n            <td>N\/A: invalid<\/td>\n           
 <td>isolate pipelinerun<\/td>\n        <\/tr>\n        <tr>\n            <td>true<\/td>\n            <td>disabled<\/td>\n            <td>disabled<\/td>\n            <td>disabled<\/td>\n        <\/tr>\n        <tr>\n            <td>true<\/td>\n            <td>workspaces (default)<\/td>\n            <td>disabled<\/td>\n            <td>coschedule workspaces<\/td>\n        <\/tr>\n        <tr>\n            <td>true<\/td>\n            <td>pipelineruns<\/td>\n            <td>coschedule pipelineruns<\/td>\n            <td>coschedule pipelineruns<\/td>\n        <\/tr>\n        <tr>\n            <td>true<\/td>\n            <td>isolate-pipelinerun<\/td>\n            <td>isolate pipelinerun<\/td>\n            <td>isolate pipelinerun<\/td>\n        <\/tr>\n    <\/tbody>\n<\/table>\n\n**Note:** If you previously accepted the default behavior (`disable-affinity-assistant`: `false`) but now want one of the new features, you need to set `disable-affinity-assistant` to \"true\" and then turn on the new behavior by setting the `coschedule` flag. If you previously disabled the affinity assistant but want one of the new features, just set the `coschedule` flag accordingly.\n\n**Note:** The Affinity Assistant uses [Inter-pod affinity and anti-affinity](https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#inter-pod-affinity-and-anti-affinity), which requires a substantial amount of processing and can slow down scheduling in large clusters significantly. We do not recommend using the Affinity Assistant in clusters larger than several hundred nodes.\n\n**Note:** Pod anti-affinity requires nodes to be consistently labelled; in other words, every\nnode in the cluster must have an appropriate label matching `topologyKey`. 
If some or all nodes\nare missing the specified `topologyKey` label, it can lead to unintended behavior.\n\n**Note:** Any time during the execution of a `pipelineRun`, if the node with a placeholder Affinity Assistant pod and\nthe `taskRun` pods sharing a `workspace` is `cordoned` or disabled for scheduling anything new (`tainted`), the\n`pipelineRun` controller deletes the placeholder pod. The `taskRun` pods on a `cordoned` node continue running\nuntil completion. The deletion of a placeholder pod triggers the creation of a new placeholder pod on any available node\nsuch that the rest of the `pipelineRun` can continue without any disruption until it finishes.","site":"tekton","answers_cleaned":"         linkTitle   Affinity Assistants  weight  405            Affinity Assistants Affinity Assistant is a feature to coschedule  PipelineRun   pods  to the same node based on  kubernetes pod affinity  https   kubernetes io docs concepts scheduling eviction assign pod node  inter pod affinity and anti affinity  so that it possible for the taskruns to execute parallel while sharing volume  Available Affinity Assistant Modes are   coschedule workspaces      coschedule pipelineruns      isolate pipelinerun   and   disabled         seedling    coschedule pipelineruns   and   isolate pipelinerun   modes are    alpha features      additional configs md alpha features       coschedule workspaces   is a   stable feature        coschedule workspaces     When a  PersistentVolumeClaim  is used as volume source for a  Workspace  in a  PipelineRun   all  TaskRun  pods within the  PipelineRun  that share the  Workspace  will be scheduled to the same Node     Note    Only one pvc backed workspace can be mounted to each TaskRun in this mode        coschedule pipelineruns     All  TaskRun  pods within the  PipelineRun  will be scheduled to the same Node       isolate pipelinerun     All  TaskRun  pods within the  PipelineRun  will be scheduled to the same Node  and only one PipelineRun is 
allowed to run on a node at a time       disabled     The Affinity Assistant is disabled  No pod coscheduling behavior   This means that Affinity Assistant is incompatible with other affinity rules configured for the  TaskRun  pods  i e  other affinity rules specified in custom  PodTemplate  pipelineruns md specifying a pod template  will be overwritten by Affinity Assistant   If the  PipelineRun  has a custom  PodTemplate  pipelineruns md specifying a pod template  configured  the  NodeSelector  and  Tolerations  fields will also be set on the Affinity Assistant pod  The Affinity Assistant is deleted when the  PipelineRun  is completed    Currently  the Affinity Assistant Modes can be configured by the  disable affinity assistant  and  coschedule  feature flags   The  disable affinity assistant  feature flag is now deprecated and will be removed in release  v0 60   At the time  the Affinity Assistant Modes will be only determined by the  coschedule  feature flag    The following chart summarizes the Affinity Assistant Modes with different combinations of the  disable affinity assistant  and  coschedule  feature flags during migration  when both feature flags are present  and after the migration  when only the  coschedule  flag is present     table       thead           tr               th disable affinity assistant  th               th coschedule  th               th behavior during migration  th               th behavior after migration  th            tr        thead       tbody           tr               td false  default   td               td disabled  td               td N A  invalid  td               td disabled  td            tr           tr               td false  default   td               td workspaces  default   td               td coschedule workspaces  td               td coschedule workspaces  td            tr           tr               td false  default   td               td pipelineruns  td               td N A  invalid  td               td 
coschedule pipelineruns  td            tr           tr               td false  default   td               td isolate pipelinerun  td               td N A  invalid  td               td isolate pipelinerun  td            tr           tr               td true  td               td disabled  td               td disabled  td               td disabled  td            tr           tr               td true  td               td workspaces  default   td               td disabled  td               td coschedule workspaces  td            tr           tr               td true  td               td pipelineruns  td               td coschedule pipelineruns  td               td coschedule pipelineruns  td            tr           tr               td true  td               td isolate pipelinerun  td               td isolate pipelinerun  td               td isolate pipelinerun  td            tr        tbody    table     Note    For users who previously accepted the default behavior   disable affinity assistant    false   but now want one of the new features  you need to set  disable affinity assistant  to  true  and then turn on the new behavior by setting the  coschedule  flag  For users who previously disabled the affinity assistant but want one of the new features  just set the  coschedule  flag accordingly     Note    Affinity Assistant use  Inter pod affinity and anti affinity  https   kubernetes io docs concepts scheduling eviction assign pod node  inter pod affinity and anti affinity  that require substantial amount of processing which can slow down scheduling in large clusters significantly  We do not recommend using the affinity assistant in clusters larger than several hundred nodes    Note    Pod anti affinity requires nodes to be consistently labelled  in other words every node in the cluster must have an appropriate label matching  topologyKey   If some or all nodes are missing the specified  topologyKey  label  it can lead to unintended behavior     Note    Any time during 
the execution of a  pipelineRun   if the node with a placeholder Affinity Assistant pod and the  taskRun  pods sharing a  workspace  is  cordoned  or disabled for scheduling anything new   tainted    the  pipelineRun  controller deletes the placeholder pod  The  taskRun  pods on a  cordoned  node continues running until completion  The deletion of a placeholder pod triggers creating a new placeholder pod on any available node such that the rest of the  pipelineRun  can continue without any disruption until it finishes"}
{"questions":"tekton Authentication Authentication at Run Time weight 301 This document describes how Tekton handles authentication when executing","answers":"<!--\n---\nlinkTitle: \"Authentication\"\nweight: 301\n---\n-->\n\n# Authentication at Run Time\n\nThis document describes how Tekton handles authentication when executing\n`TaskRuns` and `PipelineRuns`. Since authentication concepts and processes\napply to both of those entities in the same manner, this document collectively\nrefers to `TaskRuns` and `PipelineRuns` as `Runs` for the sake of brevity.\n\n- [Overview](#overview)\n- [Understanding credential selection](#understanding-credential-selection)\n- [Using `Secrets` as a non-root user](#using-secrets-as-a-non-root-user)\n- [Limiting `Secret` access to specific `Steps`](#limiting-secret-access-to-specific-steps)\n- [Configuring authentication for Git](#configuring-authentication-for-git)\n  - [Configuring `basic-auth` authentication for Git](#configuring-basic-auth-authentication-for-git)\n  - [Configuring `ssh-auth` authentication for Git](#configuring-ssh-auth-authentication-for-git)\n  - [Using a custom port for SSH authentication](#using-a-custom-port-for-ssh-authentication)\n  - [Using SSH authentication in `git` type `Tasks`](#using-ssh-authentication-in-git-type-tasks)\n- [Configuring authentication for Docker](#configuring-authentication-for-docker)\n  - [Configuring `basic-auth` authentication for Docker](#configuring-basic-auth-authentication-for-docker)\n  - [Configuring `docker*` authentication for Docker](#configuring-docker-authentication-for-docker)\n- [Technical reference](#technical-reference)\n  - [`basic-auth` for Git](#basic-auth-for-git)\n  - [`ssh-auth` for Git](#ssh-auth-for-git)\n  - [`basic-auth` for Docker](#basic-auth-for-docker)\n  - [Errors and their meaning](#errors-and-their-meaning)\n    - [\"unsuccessful cred copy\" Warning](#unsuccessful-cred-copy-warning)\n      - [Multiple Steps with varying 
UIDs](#multiple-steps-with-varying-uids)\n      - [A Workspace or Volume is also Mounted for the same credentials](#a-workspace-or-volume-is-also-mounted-for-the-same-credentials)\n      - [A Task employs a read-only Workspace or Volume for `$HOME`](#a-task-employs-a-read-only-workspace-or-volume-for-home)\n      - [The contents of `$HOME` are `chown`ed to a different user](#the-contents-of-home-are-chowned-to-a-different-user)\n      - [The Step is named `image-digest-exporter`](#the-step-is-named-image-digest-exporter)\n- [Disabling Tekton's Built-In Auth](#disabling-tektons-built-in-auth)\n  - [Why would an organization want to do this?](#why-would-an-organization-want-to-do-this)\n  - [What are the effects of making this change?](#what-are-the-effects-of-making-this-change)\n  - [How to disable the built-in auth](#how-to-disable-the-built-in-auth)\n\n## Overview\n\nTekton supports authentication via the Kubernetes first-class `Secret` types listed below.\n\n<table>\n\t<thead>\n\t\t<th>Git<\/th>\n\t\t<th>Docker<\/th>\n\t<\/thead>\n\t<tbody>\n\t\t<tr>\n\t\t\t<td><code>kubernetes.io\/basic-auth<\/code><br>\n\t\t\t\t<code>kubernetes.io\/ssh-auth<\/code>\n\t\t\t<\/td>\n\t\t\t<td><code>kubernetes.io\/basic-auth<\/code><br>\n\t\t\t\t<code>kubernetes.io\/dockercfg<\/code><br>\n\t\t\t\t<code>kubernetes.io\/dockerconfigjson<\/code>\n\t\t\t<\/td>\n\t\t<\/tr>\n\t<\/tbody>\n<\/table>\n\nA `Run` gains access to these `Secrets` through its associated `ServiceAccount`. Tekton requires that each\nsupported `Secret` includes a [Tekton-specific annotation](#understanding-credential-selection).\n\nTekton converts properly annotated `Secrets` of the supported types and stores them in a `Step's` container as follows:\n\n - **Git:** Tekton produces a `~\/.gitconfig` file or a `~\/.ssh` directory.\n - **Docker:** Tekton produces a `~\/.docker\/config.json` file.\n\nEach `Secret` type supports multiple credentials covering multiple domains and establishes specific rules governing\ncredential formatting and merging. 
Tekton follows those rules when merging credentials of each supported type.\n\nTo consume these `Secrets`, Tekton performs credential initialization within every `Pod` it instantiates, before executing\nany `Steps` in the `Run`. During credential initialization, Tekton accesses each `Secret` associated with the `Run` and\naggregates them into a `\/tekton\/creds` directory. Tekton then copies or symlinks files from this directory into the user's\n`$HOME` directory.\n\nTODO(#5357): Update docs to explain recommended methods of passing secrets in via workspaces\n\n## Understanding credential selection\n\nA `Run` might require multiple types of authentication. For example, a `Run` might require access to\nmultiple private Git and Docker repositories. You must properly annotate each `Secret` to specify the\ndomains for which Tekton can use the credentials that the `Secret` contains. Tekton **ignores** all\n`Secrets` that are not properly annotated.\n\nA credential annotation key must begin with `tekton.dev\/git-` or `tekton.dev\/docker-` and its value is the\nURL of the host for which you want Tekton to use that credential. 
In the following example, Tekton uses a\n`basic-auth` (username\/password pair) `Secret` to access Git repositories at `github.com` and `gitlab.com`\nas well as Docker repositories at `gcr.io`:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  annotations:\n    tekton.dev\/git-0: https:\/\/github.com\n    tekton.dev\/git-1: https:\/\/gitlab.com\n    tekton.dev\/docker-0: https:\/\/gcr.io\ntype: kubernetes.io\/basic-auth\nstringData:\n  username: <cleartext username>\n  password: <cleartext password>\n```\n\nAnd in this example, Tekton uses an `ssh-auth` `Secret` to access Git repositories\nat `github.com` only:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  annotations:\n    tekton.dev\/git-0: github.com\ntype: kubernetes.io\/ssh-auth\nstringData:\n  ssh-privatekey: <private-key>\n  # This is non-standard, but its use is encouraged to make this more secure.\n  # Omitting this results in the server's public key being blindly accepted.\n  known_hosts: <known-hosts>\n```\n\n## Using `Secrets` as a non-root user\n\nIn certain scenarios you might need to use `Secrets` as a non-root user. 
For example:\n\n- Your platform randomizes the user and\/or groups that your containers use to execute.\n- The `Steps` in your `Task` define a non-root `securityContext`.\n- Your `Task` specifies a global non-root `securityContext` that applies to all `Steps` in the `Task`.\n\nThe following are considerations for executing `Runs` as a non-root user:\n\n- `ssh-auth` for Git requires the user to have a valid home directory configured in `\/etc\/passwd`.\n  Specifying a UID that has no valid home directory results in authentication failure.\n- Since SSH authentication ignores the `$HOME` environment variable, you must either move or symlink\n  the appropriate `Secret` files from the `$HOME` directory defined by Tekton (`\/tekton\/home`) to\n  the non-root user's valid home directory to use SSH authentication for either Git or Docker.\n\nFor an example of configuring SSH authentication in a non-root `securityContext`,\nsee [`authenticating-git-commands`](..\/examples\/v1\/taskruns\/authenticating-git-commands.yaml).\n\n## Limiting `Secret` access to specific `Steps`\n\nAs described earlier in this document, Tekton stores supported `Secrets` in the\n`$HOME` directory defined by Tekton (`\/tekton\/home`) and makes them available to all `Steps` within a `Task`. 
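\n\nWhen a credential should only be visible to some `Steps`, an alternative is to skip this mechanism and mount the `Secret` as a `Volume` into just the `Steps` that need it. The following is a minimal sketch, not taken from the Tekton examples; the `Task` name, `Secret` name, and mount path are illustrative:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: scoped-secret-example # illustrative name\nspec:\n  volumes:\n    - name: git-creds\n      secret:\n        secretName: ssh-key # an existing kubernetes.io\/ssh-auth Secret\n  steps:\n    - name: uses-creds\n      image: alpine\n      volumeMounts:\n        - name: git-creds\n          mountPath: \/creds # only this Step mounts the Secret\n          readOnly: true\n      script: ls \/creds\n    - name: no-creds\n      image: alpine\n      script: echo \"this Step cannot read the Secret\"\n```\n\nBecause `volumeMounts` is declared per `Step`, the second `Step` never sees the credential files.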
\n\nIf you want to limit a `Secret` to only be accessible to specific `Steps` but not\nothers, you must explicitly specify a `Volume` using the `Secret` definition and\nmanually `VolumeMount` it into the desired `Steps` instead of using the procedures\ndescribed later in this document.\n\n## Configuring authentication for Git\n\nThis section describes how to configure the following authentication schemes for use with Git:\n\n- [Configuring `basic-auth` authentication for Git](#configuring-basic-auth-authentication-for-git)\n- [Configuring `ssh-auth` authentication for Git](#configuring-ssh-auth-authentication-for-git)\n- [Using a custom port for SSH authentication](#using-a-custom-port-for-ssh-authentication)\n- [Using SSH authentication in `git` type `Tasks`](#using-ssh-authentication-in-git-type-tasks)\n\n### Configuring `basic-auth` authentication for Git\n\nThis section describes how to configure a `basic-auth` type `Secret` for use with Git. In the example below,\nbefore executing any `Steps` in the `Run`, Tekton creates a `~\/.gitconfig` file containing the credentials\nspecified in the `Secret`.\n\n**Note:** GitHub has deprecated basic authentication with username and password. You can still use basic authentication, but you will need to use a personal access token instead of the cleartext password in the following example. You can find out how to create such a token on the [GitHub documentation site](https:\/\/docs.github.com\/en\/github\/authenticating-to-github\/creating-a-personal-access-token).\n\n1. 
In `secret.yaml`, define a `Secret` that specifies the username and password that you want Tekton\n   to use to access the target Git repository:\n\n   ```yaml\n   apiVersion: v1\n   kind: Secret\n   metadata:\n     name: basic-user-pass\n     annotations:\n       tekton.dev\/git-0: https:\/\/github.com # Described below\n   type: kubernetes.io\/basic-auth\n   stringData:\n     username: <cleartext username>\n     password: <cleartext password>\n   ```\n\n   In the above example, the value for `tekton.dev\/git-0` specifies the URL for which Tekton will use this `Secret`,\n   as described in [Understanding credential selection](#understanding-credential-selection).\n\n1. In `serviceaccount.yaml`, associate the `Secret` with the desired `ServiceAccount`:\n\n   ```yaml\n   apiVersion: v1\n   kind: ServiceAccount\n   metadata:\n     name: build-bot\n   secrets:\n     - name: basic-user-pass\n   ```\n\n1. In `run.yaml`, associate the `ServiceAccount` with your `Run` by doing one of the following:\n\n   - Associate the `ServiceAccount` with your `TaskRun`:\n\n     ```yaml\n     apiVersion: tekton.dev\/v1beta1\n     kind: TaskRun\n     metadata:\n       name: build-push-task-run-2\n     spec:\n       serviceAccountName: build-bot\n       taskRef:\n         name: build-push\n     ```\n\n   - Associate the `ServiceAccount` with your `PipelineRun`:\n\n     ```yaml\n     apiVersion: tekton.dev\/v1beta1\n     kind: PipelineRun\n     metadata:\n       name: demo-pipeline\n       namespace: default\n     spec:\n       serviceAccountName: build-bot\n       pipelineRef:\n         name: demo-pipeline\n     ```\n\n1. Execute the `Run`:\n\n   ```shell\n   kubectl apply --filename secret.yaml,serviceaccount.yaml,run.yaml\n   ```\n\n### Configuring `ssh-auth` authentication for Git\n\nThis section describes how to configure an `ssh-auth` type `Secret` for use with Git. 
In the example below,\nbefore executing any `Steps` in the `Run`, Tekton creates a `~\/.ssh\/config` file containing the SSH key\nspecified in the `Secret`.\n\n1. In `secret.yaml`, define a `Secret` that specifies your SSH private key:\n\n   ```yaml\n   apiVersion: v1\n   kind: Secret\n   metadata:\n     name: ssh-key\n     annotations:\n       tekton.dev\/git-0: github.com # Described below\n   type: kubernetes.io\/ssh-auth\n   stringData:\n     ssh-privatekey: <private-key>\n     # This is non-standard, but its use is encouraged to make this more secure.\n     # If it is not provided then the git server's public key will be requested\n     # when the repo is first fetched.\n     known_hosts: <known-hosts>\n   ```\n\n   In the above example, the value for `tekton.dev\/git-0` specifies the URL for which Tekton will use this `Secret`,\n   as described in [Understanding credential selection](#understanding-credential-selection).\n\n1. Generate the `ssh-privatekey` value. For example:\n\n   `cat ~\/.ssh\/id_rsa`\n\n1. Set the value of the `ssh-privatekey` field to the output of the previous step, and set the value of the\n   `known_hosts` field to the public key entries for your Git server (for example, the output of `ssh-keyscan github.com`).\n\n1. In `serviceaccount.yaml`, associate the `Secret` with the desired `ServiceAccount`:\n\n   ```yaml\n   apiVersion: v1\n   kind: ServiceAccount\n   metadata:\n     name: build-bot\n   secrets:\n     - name: ssh-key\n   ```\n\n1. 
In `run.yaml`, associate the `ServiceAccount` with your `Run` by doing one of the following:\n\n   - Associate the `ServiceAccount` with your `TaskRun`:\n\n     ```yaml\n     apiVersion: tekton.dev\/v1beta1\n     kind: TaskRun\n     metadata:\n       name: build-push-task-run-2\n     spec:\n       serviceAccountName: build-bot\n       taskRef:\n         name: build-push\n     ```\n\n   - Associate the `ServiceAccount` with your `PipelineRun`:\n\n     ```yaml\n     apiVersion: tekton.dev\/v1beta1\n     kind: PipelineRun\n     metadata:\n       name: demo-pipeline\n       namespace: default\n     spec:\n       serviceAccountName: build-bot\n     pipelineRef:\n       name: demo-pipeline\n     ```\n\n1. Execute the `Run`:\n\n   ```shell\n   kubectl apply --filename secret.yaml,serviceaccount.yaml,run.yaml\n   ```\n\n### Using a custom port for SSH authentication\n\nYou can specify a custom SSH port in your `Secret`.\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: ssh-key-custom-port\n  annotations:\n    tekton.dev\/git-0: example.com:2222\ntype: kubernetes.io\/ssh-auth\nstringData:\n  ssh-privatekey: <private-key>\n  known_hosts: <known-hosts>\n```\n\n### Using SSH authentication in `git` type `Tasks`\n\nYou can use SSH authentication as described earlier in this document when invoking `git` commands\ndirectly in the `Steps` of a `Task`. 
Since `ssh` ignores the `$HOME` variable and only uses the\nuser's home directory specified in `\/etc\/passwd`, each `Step` must symlink `\/tekton\/home\/.ssh`\nto the home directory of its associated user.\n\n**Note:** This explicit symlinking is not necessary when using the\n[`git-clone` `Task`](https:\/\/github.com\/tektoncd\/catalog\/tree\/v1beta1\/git) from Tekton Catalog.\n\nFor example usage, see [`authenticating-git-commands`](..\/examples\/v1\/taskruns\/authenticating-git-commands.yaml).\n\n## Configuring authentication for Docker\n\nThis section describes how to configure the following authentication schemes for use with Docker:\n\n- [Configuring `basic-auth` authentication for Docker](#configuring-basic-auth-authentication-for-docker)\n- [Configuring `docker*` authentication for Docker](#configuring-docker-authentication-for-docker)\n\n### Configuring `basic-auth` authentication for Docker\n\nThis section describes how to configure the `basic-auth` (username\/password pair) type `Secret` for use with Docker.\n\nIn the example below, before executing any `Steps` in the `Run`, Tekton creates a `~\/.docker\/config.json` file containing\nthe credentials specified in the `Secret`.\n\n1. In `secret.yaml`, define a `Secret` that specifies the username and password that you want Tekton\n   to use to access the target Docker registry:\n\n   ```yaml\n   apiVersion: v1\n   kind: Secret\n   metadata:\n     name: basic-user-pass\n     annotations:\n       tekton.dev\/docker-0: https:\/\/gcr.io # Described below\n   type: kubernetes.io\/basic-auth\n   stringData:\n     username: <cleartext username>\n     password: <cleartext password>\n   ```\n\n   In the above example, the value for `tekton.dev\/docker-0` specifies the URL for which Tekton will use this `Secret`,\n   as described in [Understanding credential selection](#understanding-credential-selection).\n\n1. 
In `serviceaccount.yaml`, associate the `Secret` with the desired `ServiceAccount`:\n\n   ```yaml\n   apiVersion: v1\n   kind: ServiceAccount\n   metadata:\n     name: build-bot\n   secrets:\n     - name: basic-user-pass\n   ```\n\n1. In `run.yaml`, associate the `ServiceAccount` with your `Run` by doing one of the following:\n\n   - Associate the `ServiceAccount` with your `TaskRun`:\n\n     ```yaml\n     apiVersion: tekton.dev\/v1beta1\n     kind: TaskRun\n     metadata:\n       name: build-push-task-run-2\n     spec:\n       serviceAccountName: build-bot\n       taskRef:\n         name: build-push\n     ```\n\n   - Associate the `ServiceAccount` with your `PipelineRun`:\n\n     ```yaml\n     apiVersion: tekton.dev\/v1beta1\n     kind: PipelineRun\n     metadata:\n       name: demo-pipeline\n       namespace: default\n     spec:\n       serviceAccountName: build-bot\n       pipelineRef:\n         name: demo-pipeline\n     ```\n\n1. Execute the `Run`:\n\n   ```shell\n   kubectl apply --filename secret.yaml,serviceaccount.yaml,run.yaml\n   ```\n\n### Configuring `docker*` authentication for Docker\n\nThis section describes how to configure authentication using the `dockercfg` and `dockerconfigjson` type\n`Secrets` for use with Docker. In the example below, before executing any `Steps` in the `Run`, Tekton creates\na `~\/.docker\/config.json` file containing the credentials specified in the `Secret`. When the `Steps` execute,\nTekton uses those credentials to access the target Docker registry.\n\n**Note:** If you specify both the Tekton `basic-auth` and the above Kubernetes `Secrets`, Tekton merges all\ncredentials from all specified `Secrets` but Tekton's `basic-auth` `Secret` overrides either of the\nKubernetes `Secrets`.\n\n1. 
Define a `Secret` based on your Docker client configuration file.\n   \n   ```bash\n   kubectl create secret generic regcred \\\n    --from-file=.dockerconfigjson=<path\/to\/.docker\/config.json> \\\n    --type=kubernetes.io\/dockerconfigjson\n   ```\n   For more information, see [Pull an Image from a Private Registry](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/pull-image-private-registry\/)\n   in the Kubernetes documentation.\n\n1. In `serviceaccount.yaml`, associate the `Secret` with the desired `ServiceAccount`:\n\n   ```yaml\n   apiVersion: v1\n   kind: ServiceAccount\n   metadata:\n     name: build-bot\n   secrets:\n     - name: regcred\n   ```\n\n1. In `run.yaml`, associate the `ServiceAccount` with your `Run` by doing one of the following:\n\n   - Associate the `ServiceAccount` with your `TaskRun`:\n\n     ```yaml\n     apiVersion: tekton.dev\/v1beta1\n     kind: TaskRun\n     metadata:\n       name: build-with-basic-auth\n     spec:\n       serviceAccountName: build-bot\n       steps:\n       # ...\n     ```\n\n   - Associate the `ServiceAccount` with your `PipelineRun`:\n\n     ```yaml\n     apiVersion: tekton.dev\/v1beta1\n     kind: PipelineRun\n     metadata:\n       name: demo-pipeline\n       namespace: default\n     spec:\n       serviceAccountName: build-bot\n       pipelineRef:\n         name: demo-pipeline\n     ```\n\n1. 
Execute the build:\n\n   ```shell\n   kubectl apply --filename secret.yaml --filename serviceaccount.yaml --filename taskrun.yaml\n   ```\n\n## Technical reference\n\nThis section provides a technical reference for the implementation of the authentication mechanisms\ndescribed earlier in this document.\n\n### `basic-auth` for Git\n\nGiven URLs, usernames, and passwords of the form: `https:\/\/url{n}.com`,\n`user{n}`, and `pass{n}`, Tekton generates the following:\n\n```\n=== ~\/.gitconfig ===\n[credential]\n    helper = store\n[credential \"https:\/\/url1.com\"]\n    username = \"user1\"\n[credential \"https:\/\/url2.com\"]\n    username = \"user2\"\n...\n=== ~\/.git-credentials ===\nhttps:\/\/user1:pass1@url1.com\nhttps:\/\/user2:pass2@url2.com\n...\n```\n\n### `ssh-auth` for Git\n\nGiven hostnames, private keys, and `known_hosts` of the form: `url{n}.com`,\n`key{n}`, and `known_hosts{n}`, Tekton generates the following. \n\nBy default, if no value is specified for `known_hosts`, Tekton configures SSH to accept\n**any public key** returned by the server on first query. Tekton does this\nby setting Git's `core.sshCommand` variable to `ssh -o StrictHostKeyChecking=accept-new`.\nThis behaviour can be prevented\n[using a feature-flag: `require-git-ssh-secret-known-hosts`](.\/install.md#customizing-the-pipelines-controller-behavior).\nSet this flag to `true` and all Git SSH Secrets _must_ include a `known_hosts`.\n\n```\n=== ~\/.ssh\/id_key1 ===\n{contents of key1}\n=== ~\/.ssh\/id_key2 ===\n{contents of key2}\n...\n=== ~\/.ssh\/config ===\nHost url1.com\n    HostName url1.com\n    IdentityFile ~\/.ssh\/id_key1\nHost url2.com\n    HostName url2.com\n    IdentityFile ~\/.ssh\/id_key2\n...\n=== ~\/.ssh\/known_hosts ===\n{contents of known_hosts1}\n{contents of known_hosts2}\n...\n```\n\n### `basic-auth` for Docker\n\nGiven URLs, usernames, and passwords of the form: `https:\/\/url{n}.com`,\n`user{n}`, and `pass{n}`, Tekton generates the following. 
Since Docker doesn't\nsupport the `kubernetes.io\/ssh-auth` type `Secret`, Tekton ignores annotations\non `Secrets` of that type.\n\n```\n=== ~\/.docker\/config.json ===\n{\n  \"auths\": {\n    \"https:\/\/url1.com\": {\n      \"auth\": \"$(echo -n user1:pass1 | base64)\",\n      \"email\": \"not@val.id\",\n    },\n    \"https:\/\/url2.com\": {\n      \"auth\": \"$(echo -n user2:pass2 | base64)\",\n      \"email\": \"not@val.id\",\n    },\n    ...\n  }\n}\n```\n\n## Errors and their meaning\n\n### \"unsuccessful cred copy\" Warning\n\nThis message has the following format:\n\n> `warning: unsuccessful cred copy: \".docker\" from \"\/tekton\/creds\" to\n> \"\/tekton\/home\": unable to open destination: open\n> \/tekton\/home\/.docker\/config.json: permission denied`\n\nThe precise credential and paths mentioned can vary. This message is only a\nwarning but can be indicative of the following problems:\n\n#### Multiple Steps with varying UIDs\n\nMultiple Steps with different users \/ UIDs are trying to initialize docker\nor git credentials in the same Task. If those Steps need access to the\ncredentials then they may fail as they might not have permission to access them.\n\nThis happens because, by default, `\/tekton\/home` is set to be a Step user's home\ndirectory and Tekton makes this directory a shared volume that all Steps in a\nTask have access to. Any credentials initialized by one Step are overwritten\nby subsequent Steps also initializing credentials.\n\nIf the Steps reporting this warning do not use the credentials mentioned\nin the message then you can safely ignore it.\n\nThis can most easily be resolved by ensuring that each Step executing in your\nTask and TaskRun runs with the same UID. 
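\n\nFor example, a `TaskRun` pod template can pin every `Step` to a single UID; this is a sketch under the assumption that UID `1001` suits your images:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: same-uid-run # illustrative name\nspec:\n  taskRef:\n    name: build-push\n  podTemplate:\n    securityContext:\n      runAsUser: 1001 # every Step in the TaskRun runs with this UID\n```\n\n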
A blanket UID can be set with [a\nTaskRun's `Pod template` field](.\/taskruns.md#specifying-a-pod-template).\n\nIf you require Steps to run with different UIDs then you should disable\nTekton's built-in credential initialization and use Workspaces to mount\ncredentials from Secrets instead. See [the section on disabling Tekton's\ncredential initialization](#disabling-tektons-built-in-auth).\n\n#### A Workspace or Volume is also Mounted for the same credentials\n\nA Task has mounted both a Workspace (or Volume) for credentials and the TaskRun\nhas attached a service account with git or docker credentials that Tekton will\ntry to initialize.\n\nThe simplest solution to this problem is to not mix credentials mounted via\nWorkspace with those initialized using the process described in this document.\nSee [the section on disabling Tekton's credential initialization](#disabling-tektons-built-in-auth).\n\n#### A Task employs a read-only Workspace or Volume for `$HOME`\n\nA Task has mounted a read-only Workspace (or Volume) for the user's `HOME`\ndirectory and the TaskRun attaches a service account with git or docker\ncredentials that Tekton will try to initialize.\n\nThe simplest solution to this problem is to not mix credentials mounted via\nWorkspace with those initialized using the process described in this document.\nSee [the section on disabling Tekton's credential initialization](#disabling-tektons-built-in-auth).\n\n#### The contents of `$HOME` are `chown`ed to a different user\n\nA Task Step that modifies the ownership of files in the user home directory\nmay prevent subsequent Steps from initializing credentials in that same home\ndirectory. The simplest solution to this problem is to avoid running chown\non files and directories under `\/tekton`. 
Another option is to run all Steps\nwith the same UID.\n\n#### The Step is named `image-digest-exporter`\n\nIf you see this warning reported specifically by an `image-digest-exporter` Step,\nyou can safely ignore this message. The reason it appears is that this Step is\ninjected by Tekton and runs with a non-root UID\nthat can differ from those of the Steps in the Task. The Step does not use\nthese credentials.\n\n---\n\n## Disabling Tekton's Built-In Auth\n\n### Why would an organization want to do this?\n\nThere are a number of reasons that an organization may want to disable\nTekton's built-in credential handling:\n\n1. The mechanism can be quite difficult to debug.\n2. The set of supported credential types is extremely limited.\n3. Tasks with Steps that have different UIDs can break if multiple Steps\nare trying to share access to the same credentials.\n4. Tasks with Steps that have different UIDs can log more warning messages,\ncreating more noise in TaskRun logs. Again, this is because multiple Steps\nwith differing UIDs cannot share access to the same credential files.\n\n### What are the effects of making this change?\n\n1. 
Credentials must now be passed explicitly to Tasks either with [Workspaces](.\/workspaces.md#using-workspaces-in-tasks),\nenvironment variables (using [`envFrom`](https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/#use-case-as-container-environment-variables) in your Steps and a Task param to\nspecify a Secret), or a custom volume and volumeMount definition.\n\n### How to disable the built-in auth\n\nTo disable Tekton's built-in auth, edit the `feature-flags` `ConfigMap` in the\n`tekton-pipelines` namespace and update the value of `disable-creds-init`\nfrom `\"false\"` to `\"true\"`.\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).","site":"tekton","answers_cleaned":"
kubernetes io basic auth stringData    username   cleartext username    password   cleartext password       And in this example  Tekton uses an  ssh auth   Secret  to access Git repositories at  github com  only      yaml apiVersion  v1 kind  Secret metadata    annotations      tekton dev git 0  github com type  kubernetes io ssh auth stringData    ssh privatekey   private key      This is non standard  but its use is encouraged to make this more secure      Omitting this results in the server s public key being blindly accepted          Using  Secrets  as a non root user  In certain scenarios you might need to use  Secrets  as a non root user  For example     Your platform randomizes the user and or groups that your containers use to execute    The  Steps  in your  Task  define a non root  securityContext     Your  Task  specifies a global non root  securityContext  that applies to all  Steps  in the  Task    The following are considerations for executing  Runs  as a non root user      ssh auth  for Git requires the user to have a valid home directory configured in   etc passwd     Specifying a UID that has no valid home directory results in authentication failure    Since SSH authentication ignores the   HOME  environment variable  you must either move or symlink   the appropriate  Secret  files from the   HOME  directory defined by Tekton    tekton home   to   the non root user s valid home directory to use SSH authentication for either Git or Docker   For an example of configuring SSH authentication in a non root  securityContext   see   authenticating git commands      examples v1 taskruns authenticating git commands yaml       Limiting  Secret  access to specific  Steps   As described earlier in this document  Tekton stores supported  Secrets  in   HOME tekton home  and makes them available to all  Steps  within a  Task     If you want to limit a  Secret  to only be accessible to specific  Steps  but not others  you must explicitly specify a  Volume  using 
the  Secret  definition and manually  VolumeMount  it into the desired  Steps  instead of using the procedures described later in this document      Configuring authentication for Git  This section describes how to configure the following authentication schemes for use with Git      Configuring  basic auth  authentication for Git   configuring basic auth authentication for git     Configuring  ssh auth  authentication for Git   configuring ssh auth authentication for git     Using a custom port for SSH authentication   using a custom port for ssh authentication     Using SSH authentication in  git  type  Tasks    using ssh authentication in git type tasks       Configuring  basic auth  authentication for Git  This section describes how to configure a  basic auth  type  Secret  for use with Git  In the example below  before executing any  Steps  in the  Run   Tekton creates a     gitconfig  file containing the credentials specified in the  Secret     Note  Github deprecated basic authentication with username and password  You can still use basic authentication  but you wil need to use a personal access token instead of the cleartext password in the following example  You can find out how to create such a token on the  Github documentation site  https   docs github com en github authenticating to github creating a personal access token    1  In  secret yaml   define a  Secret  that specifies the username and password that you want Tekton    to use to access the target Git repository         yaml    apiVersion  v1    kind  Secret    metadata       name  basic user pass      annotations         tekton dev git 0  https   github com   Described below    type  kubernetes io basic auth    stringData       username   cleartext username       password   cleartext password             In the above example  the value for  tekton dev git 0  specifies the URL for which Tekton will use this  Secret      as described in  Understanding credential selection   understanding 
credential selection    1  In  serviceaccount yaml   associate the  Secret  with the desired  ServiceAccount          yaml    apiVersion  v1    kind  ServiceAccount    metadata       name  build bot    secrets         name  basic user pass         1  In  run yaml   associate the  ServiceAccount  with your  Run  by doing one of the following        Associate the  ServiceAccount  with your  TaskRun            yaml      apiVersion  tekton dev v1beta1      kind  TaskRun      metadata         name  build push task run 2      spec         serviceAccountName  build bot        taskRef           name  build push               Associate the  ServiceAccount  with your  PipelineRun            yaml      apiVersion  tekton dev v1beta1      kind  PipelineRun      metadata         name  demo pipeline        namespace  default      spec         serviceAccountName  build bot        pipelineRef           name  demo pipeline           1  Execute the  Run          shell    kubectl apply   filename secret yaml serviceaccount yaml run yaml             Configuring  ssh auth  authentication for Git  This section describes how to configure an  ssh auth  type  Secret  for use with Git  In the example below  before executing any  Steps  in the  Run   Tekton creates a     ssh config  file containing the SSH key specified in the  Secret    1  In  secret yaml   define a  Secret  that specifies your SSH private key         yaml    apiVersion  v1    kind  Secret    metadata       name  ssh key      annotations         tekton dev git 0  github com   Described below    type  kubernetes io ssh auth    stringData       ssh privatekey   private key         This is non standard  but its use is encouraged to make this more secure         If it is not provided then the git server s public key will be requested        when the repo is first fetched       known hosts   known hosts             In the above example  the value for  tekton dev git 0  specifies the URL for which Tekton will use this  Secret      
as described in  Understanding credential selection   understanding credential selection    1  Generate the  ssh privatekey  value  For example       cat    ssh id rsa   1  Set the value of the  known hosts  field to the generated  ssh privatekey  value from the previous step   1  In  serviceaccount yaml   associate the  Secret  with the desired  ServiceAccount          yaml    apiVersion  v1    kind  ServiceAccount    metadata       name  build bot    secrets         name  ssh key         1  In  run yaml   associate the  ServiceAccount  with your  Run  by doing one of the following        Associate the  ServiceAccount  with your  TaskRun            yaml      apiVersion  tekton dev v1beta1      kind  TaskRun      metadata         name  build push task run 2      spec         serviceAccountName  build bot        taskRef           name  build push                Associate the  ServiceAccount  with your  PipelineRun          yaml    apiVersion  tekton dev v1beta1    kind  PipelineRun    metadata       name  demo pipeline      namespace  default    spec       serviceAccountName  build bot      pipelineRef         name  demo pipeline         1  Execute the  Run          shell    kubectl apply   filename secret yaml serviceaccount yaml run yaml             Using a custom port for SSH authentication  You can specify a custom SSH port in your  Secret         apiVersion  v1 kind  Secret metadata    name  ssh key custom port   annotations      tekton dev git 0  example com 2222 type  kubernetes io ssh auth stringData    ssh privatekey   private key    known hosts   known hosts           Using SSH authentication in  git  type  Tasks   You can use SSH authentication as described earlier in this document when invoking  git  commands directly in the  Steps  of a  Task   Since  ssh  ignores the   HOME  variable and only uses the user s home directory specified in   etc passwd   each  Step  must symlink   tekton home  ssh  to the home directory of its associated user     Note    
This explicit symlinking is not necessary when using the   git clone   Task   https   github com tektoncd catalog tree v1beta1 git  from Tekton Catalog   For example usage  see   authenticating git commands      examples v1 taskruns authenticating git commands yaml       Configuring authentication for Docker  This section describes how to configure the following authentication schemes for use with Docker      Configuring  basic auth  authentication for Docker   configuring basic auth authentication for docker     Configuring  docker   authentication for Docker   configuring docker authentication for docker       Configuring  basic auth  authentication for Docker  This section describes how to configure the  basic auth   username password pair  type  Secret  for use with Docker   In the example below  before executing any  Steps  in the  Run   Tekton creates a     docker config json  file containing the credentials specified in the  Secret    1  In  secret yaml   define a  Secret  that specifies the username and password that you want Tekton    to use to access the target Docker registry         yaml    apiVersion  v1    kind  Secret    metadata       name  basic user pass      annotations         tekton dev docker 0  https   gcr io   Described below    type  kubernetes io basic auth    stringData       username   cleartext username       password   cleartext password             In the above example  the value for  tekton dev docker 0  specifies the URL for which Tekton will use this  Secret      as described in  Understanding credential selection   understanding credential selection    1  In  serviceaccount yaml   associate the  Secret  with the desired  ServiceAccount          yaml    apiVersion  v1    kind  ServiceAccount    metadata       name  build bot    secrets         name  basic user pass         1  In  run yaml   associate the  ServiceAccount  with your  Run  by doing one of the following        Associate the  ServiceAccount  with your  TaskRun           
 yaml      apiVersion  tekton dev v1beta1      kind  TaskRun      metadata         name  build push task run 2      spec         serviceAccountName  build bot        taskRef           name  build push                Associate the  ServiceAccount  with your  PipelineRun            yaml      apiVersion  tekton dev v1beta1      kind  PipelineRun      metadata         name  demo pipeline        namespace  default      spec         serviceAccountName  build bot        pipelineRef           name  demo pipeline           1  Execute the  Run          shell    kubectl apply   filename secret yaml serviceaccount yaml run yaml            Configuring  docker   authentication for Docker  This section describes how to configure authentication using the  dockercfg  and  dockerconfigjson  type  Secrets  for use with Docker  In the example below  before executing any  Steps  in the  Run   Tekton creates a     docker config json  file containing the credentials specified in the  Secret   When the  Steps  execute  Tekton uses those credentials to access the target Docker registry  f   Note    If you specify both the Tekton  basic auth  and the above Kubernetes  Secrets   Tekton merges all credentials from all specified  Secrets  but Tekton s  basic auth   Secret  overrides either of the Kubernetes  Secrets    1  Define a  Secret  based on your Docker client configuration file            bash    kubectl create secret generic regcred         from file  dockerconfigjson  path to  docker config json          type kubernetes io dockerconfigjson           For more information  see  Pull an Image from a Private Registry  https   kubernetes io docs tasks configure pod container pull image private registry      in the Kubernetes documentation   1  In  serviceaccount yaml   associate the  Secret  with the desired  ServiceAccount          yaml    apiVersion  v1    kind  ServiceAccount    metadata       name  build bot    secrets         name  regcred         1  In  run yaml   associate the  
ServiceAccount  with your  Run  by doing one of the following        Associate the  ServiceAccount  with your  TaskRun            yaml      apiVersion  tekton dev v1beta1      kind  TaskRun      metadata         name  build with basic auth      spec         serviceAccountName  build bot        steps                              Associate the  ServiceAccount  with your  PipelineRun            yaml      apiVersion  tekton dev v1beta1      kind  PipelineRun      metadata         name  demo pipeline        namespace  default      spec         serviceAccountName  build bot        pipelineRef           name  demo pipeline           1  Execute the build         shell    kubectl apply   filename secret yaml   filename serviceaccount yaml   filename taskrun yaml            Technical reference  This section provides a technical reference for the implementation of the authentication mechanisms described earlier in this document        basic auth  for Git  Given URLs  usernames  and passwords of the form   https   url n  com    user n    and  pass n    Tekton generates the following              gitconfig      credential      helper   store  credential  https   url1 com       username    user1   credential  https   url2 com       username    user2             git credentials     https   user1 pass1 url1 com https   user2 pass2 url2 com               ssh auth  for Git  Given hostnames  private keys  and  known hosts  of the form   url n  com    key n    and  known hosts n    Tekton generates the following    By default  if no value is specified for  known hosts   Tekton configures SSH to accept   any public key   returned by the server on first query  Tekton does this by setting Git s  core sshCommand  variable to  ssh  o StrictHostKeyChecking accept new   This behaviour can be prevented  using a feature flag   require git ssh secret known hosts     install md customizing the pipelines controller behavior   Set this flag to  true  and all Git SSH Secrets  must  include a  known 
hosts               ssh id key1      contents of key1         ssh id key2      contents of key2             ssh config     Host url1 com     HostName url1 com     IdentityFile    ssh id key1 Host url2 com     HostName url2 com     IdentityFile    ssh id key2            ssh known hosts      contents of known hosts1   contents of known hosts2                basic auth  for Docker  Given URLs  usernames  and passwords of the form   https   url n  com    user n    and  pass n    Tekton generates the following  Since Docker doesn t support the  kubernetes io ssh auth  type  Secret   Tekton ignores annotations on  Secrets  of that type              docker config json          auths          https   url1 com            auth      echo  n user1 pass1   base64           email    not val id               https   url2 com            auth      echo  n user2 pass2   base64           email    not val id                                Errors and their meaning       unsuccessful cred copy  Warning  This message has the following format      warning  unsuccessful cred copy    docker  from   tekton creds  to     tekton home   unable to open destination  open    tekton home  docker config json  permission denied   The precise credential and paths mentioned can vary  This message is only a warning but can be indicative of the following problems        Multiple Steps with varying UIDs  Multiple Steps with different users   UIDs are trying to initialize docker or git credentials in the same Task  If those Steps need access to the credentials then they may fail as they might not have permission to access them   This happens because  by default    tekton home  is set to be a Step user s home directory and Tekton makes this directory a shared volume that all Steps in a Task have access to  Any credentials initialized by one Step are overwritten by subsequent Steps also initializing credentials   If the Steps reporting this warning do not use the credentials mentioned in the message then you 
can safely ignore it   This can most easily be resolved by ensuring that each Step executing in your Task and TaskRun runs with the same UID  A blanket UID can be set with  a TaskRun s  Pod template  field    taskruns md specifying a pod template    If you require Steps to run with different UIDs then you should disable Tekton s built in credential initialization and use Workspaces to mount credentials from Secrets instead  See  the section on disabling Tekton s credential initialization   disabling tektons built in auth         A Workspace or Volume is also Mounted for the same credentials  A Task has mounted both a Workspace  or Volume  for credentials and the TaskRun has attached a service account with git or docker credentials that Tekton will try to initialize   The simplest solution to this problem is to not mix credentials mounted via Workspace with those initialized using the process described in this document  See  the section on disabling Tekton s credential initialization   disabling tektons built in auth         A Task employs a read only Workspace or Volume for   HOME   A Task has mounted a read only Workspace  or Volume  for the user s  HOME  directory and the TaskRun attaches a service account with git or docker credentials that Tekton will try to initialize   The simplest solution to this problem is to not mix credentials mounted via Workspace with those initialized using the process described in this document  See  the section on disabling Tekton s credential initialization   disabling tektons built in auth         The contents of   HOME  are  chown ed to a different user  A Task Step that modifies the ownership of files in the user home directory may prevent subsequent Steps from initializing credentials in that same home directory  The simplest solution to this problem is to avoid running chown on files and directories under   tekton   Another option is to run all Steps with the same UID        The Step is named  image digest exporter   If you 
see this warning reported specifically by an  image digest exporter  Step you can safely ignore this message  The reason it appears is that this Step is injected by Tekton and it runs with a non root UID that can differ from those of the Steps in the Task  The Step does not use these credentials           Disabling Tekton s Built In Auth      Why would an organization want to do this   There are a number of reasons that an organization may want to disable Tekton s built in credential handling   1  The mechanism can be quite difficult to debug  2  There are an extremely limited set of supported credential types  3  Tasks with Steps that have different UIDs can break if multiple Steps are trying to share access to the same credentials  4  Tasks with Steps that have different UIDs can log more warning messages  creating more noise in TaskRun logs  Again this is because multiple Steps with differing UIDs cannot share access to the same credential files        What are the effects of making this change   1  Credentials must now be passed explicitly to Tasks either with  Workspaces    workspaces md using workspaces in tasks   environment variables  using   envFrom   https   kubernetes io docs concepts configuration secret  use case as container environment variables  in your Steps and a Task param to specify a Secret   or a custom volume and volumeMount definition       How to disable the built in auth  To disable Tekton s built in auth  edit the  feature flag   ConfigMap  in the  tekton pipelines  namespace and update the value of  disable creds init  from   false   to   true     Except as otherwise noted  the content of this page is licensed under the  Creative Commons Attribution 4 0 License  https   creativecommons org licenses by 4 0    and code samples are licensed under the  Apache 2 0 License  https   www apache org licenses LICENSE 2 0  "}
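As an illustration of the credential files described in the technical reference above, the following hedged Python sketch renders `.git-credentials` lines and a Docker `config.json` body from basic-auth username/password pairs. This is not Tekton's actual (Go, in-cluster) credential initializer; the `render_*` helper names and the sample `creds` list are invented for this example:

```python
import base64
import json

def render_git_credentials(creds):
    """Render .git-credentials lines ("https://user:pass@host"), mirroring
    the layout described in the "basic-auth for Git" reference section."""
    lines = []
    for url, user, password in creds:
        host = url.split("://", 1)[1]
        lines.append(f"https://{user}:{password}@{host}")
    return "\n".join(lines)

def render_docker_config(creds):
    """Render a ~/.docker/config.json body where each "auth" value is
    base64("user:pass"), as in the "basic-auth for Docker" reference."""
    auths = {}
    for url, user, password in creds:
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        auths[url] = {"auth": token, "email": "not@val.id"}
    return json.dumps({"auths": auths}, indent=2)

creds = [
    ("https://url1.com", "user1", "pass1"),
    ("https://url2.com", "user2", "pass2"),
]
print(render_git_credentials(creds))
print(render_docker_config(creds))
```

Note how both outputs merge multiple credentials for multiple hosts into a single file, which is the merging behaviour the reference section describes.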
{"questions":"tekton Matrix Matrix weight 406","answers":"<!--\n---\nlinkTitle: \"Matrix\"\nweight: 406\n---\n-->\n\n# Matrix\n\n- [Overview](#overview)\n- [Configuring a Matrix](#configuring-a-matrix)\n  - [Generating Combinations](#generating-combinations)\n  - [Explicit Combinations](#explicit-combinations)\n- [Concurrency Control](#concurrency-control)\n- [Parameters](#parameters)\n  - [Parameters in Matrix.Params](#parameters-in-matrixparams-1)\n  - [Parameters in Matrix.Include.Params](#parameters-in-matrixincludeparams)\n  - [Specifying both `params` and `matrix` in a `PipelineTask`](#specifying-both-params-and-matrix-in-a-pipelinetask)\n- [Context Variables](#context-variables)\n  - [Access Matrix Combinations Length](#access-matrix-combinations-length)\n  - [Access Aggregated Results Length](#access-aggregated-results-length)\n- [Results](#results)\n  - [Specifying Results in a Matrix](#specifying-results-in-a-matrix)\n    - [Results in Matrix.Params](#results-in-matrixparams)\n    - [Results in Matrix.Include.Params](#results-in-matrixincludeparams)\n  - [Results from fanned out PipelineTasks](#results-from-fanned-out-pipelinetasks)\n- [Retries](#retries)\n- [Examples](#examples)\n  - [`Matrix` Combinations with `Matrix.Params` only](#-matrix--combinations-with--matrixparams--only)\n  - [`Matrix` Combinations with `Matrix.Params` and `Matrix.Include`](#-matrix--combinations-with--matrixparams--and--matrixinclude-)\n  - [`PipelineTasks` with `Tasks`](#-pipelinetasks--with--tasks-)\n  - [`PipelineTasks` with `Custom Tasks`](#-pipelinetasks--with--custom-tasks-)\n\n## Overview\n\n`Matrix` is used to fan out `Tasks` in a `Pipeline`. 
This doc will explain the details of `matrix` support in\nTekton.\n\nDocumentation for specifying `Matrix` in a `Pipeline`:\n- [Specifying `Matrix` in `Tasks`](pipelines.md#specifying-matrix-in-pipelinetasks)\n- [Specifying `Matrix` in `Finally Tasks`](pipelines.md#specifying-matrix-in-finally-tasks)\n- [Specifying `Matrix` in `Custom Tasks`](pipelines.md#specifying-matrix)\n\n> :seedling: **`Matrix` is a [beta](additional-configs.md#beta-features) feature.**\n> The `enable-api-fields` feature flag can be set to `\"beta\"` to specify `Matrix` in a `PipelineTask`.\n\n## Configuring a Matrix\n\nA `Matrix` allows you to generate combinations and specify explicit combinations to fan out a `PipelineTask`.\n\n### Generating Combinations\n\nThe `Matrix.Params` is used to generate combinations to fan out a `PipelineTask`.\n\n```yaml\n    matrix:\n      params:\n        - name: platform\n          value:\n          - linux\n          - mac\n        - name: browser\n          value:\n          - safari\n          - chrome\n  ...\n```\n\nCombinations generated\n\n```json!\n{ \"platform\": \"linux\", \"browser\": \"safari\" }\n{ \"platform\": \"linux\", \"browser\": \"chrome\"}\n{ \"platform\": \"mac\", \"browser\": \"safari\" }\n{ \"platform\": \"mac\", \"browser\": \"chrome\"}\n```\n[See another example](#-matrix--combinations-with--matrixparams--only)\n\n### Explicit Combinations\n\nThe `Matrix.Include` is used to add explicit combinations to fan out a `PipelineTask`.\n\n```yaml\n    matrix:\n      params:\n        - name: platform\n          value:\n          - linux\n          - mac\n        - name: browser\n          value:\n          - safari\n          - chrome\n      include:\n        - name: linux-url\n          params:\n            - name: platform\n              value: linux\n            - name: url\n              value: some-url\n        - name: non-existent-browser\n          params:\n            - name: browser\n              value: \"i-do-not-exist\"\n  
...\n```\n\nThe first `Matrix.Include` clause adds `\"url\": \"some-url\"` only to the original `matrix` combinations that include `\"platform\": \"linux\"` and the second `Matrix.Include` clause cannot be added to any original `matrix` combination without overwriting any `params` of the original combinations, so it is added as an additional `matrix` combination:\n\nCombinations generated\n```json!\n{ \"platform\": \"linux\", \"browser\": \"safari\", \"url\": \"some-url\" }\n{ \"platform\": \"linux\", \"browser\": \"chrome\", \"url\": \"some-url\"}\n{ \"platform\": \"mac\", \"browser\": \"safari\" }\n{ \"platform\": \"mac\", \"browser\": \"chrome\"}\n{ \"browser\": \"i-do-not-exist\"}\n```\n\n[See another example](#-matrix--combinations-with--matrixparams--and--matrixinclude-)\n\nThe `Matrix.Include` can also be used without `Matrix.Params` to generate explicit combinations to fan out a `PipelineTask`.\n\n```yaml\n    matrix:\n        include:\n          - name: build-1\n            params:\n              - name: IMAGE\n                value: \"image-1\"\n              - name: DOCKERFILE\n                value: \"path\/to\/Dockerfile1\"\n          - name: build-2\n            params:\n              - name: IMAGE\n                value: \"image-2\"\n              - name: DOCKERFILE\n                value: \"path\/to\/Dockerfile2\"\n          - name: build-3\n            params:\n              - name: IMAGE\n                value: \"image-3\"\n              - name: DOCKERFILE\n                value: \"path\/to\/Dockerfile3\"\n  ...\n```\n\nThis configuration allows users to take advantage of `Matrix` to fan out without having an auto-populated `Matrix`. 
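The fan-out of `Matrix.Params` described above is a plain cartesian product of the param value lists. A minimal sketch (illustrative Python, not Tekton's actual Go implementation; the `fan_out` helper name is invented here):

```python
import itertools

def fan_out(params):
    # One combination per element of the cartesian product of the matrix
    # param value lists, as in the "Generating Combinations" section.
    names = [p["name"] for p in params]
    value_lists = [p["value"] for p in params]
    return [dict(zip(names, values)) for values in itertools.product(*value_lists)]

matrix_params = [
    {"name": "platform", "value": ["linux", "mac"]},
    {"name": "browser", "value": ["safari", "chrome"]},
]
combos = fan_out(matrix_params)
# Four combinations: linux/safari, linux/chrome, mac/safari, mac/chrome.
print(combos)
```

With an `Include`-only `matrix` there is no product to compute: each `include` entry's `params` become one combination directly.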
A `Matrix` with an `Include` section but no `Params` section creates the number of `TaskRuns` specified in the `Include` section, with the specified `Parameters`.\n\n\nCombinations generated\n\n```json!\n{ \"IMAGE\": \"image-1\", \"DOCKERFILE\": \"path\/to\/Dockerfile1\" }\n{ \"IMAGE\": \"image-2\", \"DOCKERFILE\": \"path\/to\/Dockerfile2\"}\n{ \"IMAGE\": \"image-3\", \"DOCKERFILE\": \"path\/to\/Dockerfile3\"}\n```\n\n## DisplayName\n\nMatrix creates multiple `taskRuns` with the same `pipelineTask`. Each `taskRun` has its own unique combination of `params` based\non the `matrix` specifications. These `params` can now be surfaced and used to configure a unique name for each `matrix`\ninstance, making it easier to distinguish the instances based on their inputs.\n\n```yaml\npipelineSpec:\n  tasks:\n    - name: platforms-and-browsers\n      displayName: \"Platforms and Browsers: $(params.platform) and $(params.browser)\"\n      matrix:\n        params:\n          - name: platform\n            value:\n              - linux\n              - mac\n              - windows\n          - name: browser\n            value:\n              - chrome\n              - safari\n              - firefox\n      taskRef:\n        name: platform-browsers\n```\n\nThe `displayName` is available as part of `pipelineRun.status.childReferences` with each `taskRun`.\nThis allows clients to consume `displayName` wherever needed:\n\n```json\n[\n  {\n    \"apiVersion\": \"tekton.dev\/v1\",\n    \"displayName\": \"Platforms and Browsers: linux and chrome\",\n    \"kind\": \"TaskRun\",\n    \"name\": \"matrixed-pr-vcx79-platforms-and-browsers-0\",\n    \"pipelineTaskName\": \"platforms-and-browsers\"\n  },\n  {\n    \"apiVersion\": \"tekton.dev\/v1\",\n    \"displayName\": \"Platforms and Browsers: mac and safari\",\n    \"kind\": \"TaskRun\",\n    \"name\": \"matrixed-pr-vcx79-platforms-and-browsers-1\",\n    \"pipelineTaskName\": \"platforms-and-browsers\"\n  }\n]\n```\n\n### 
`matrix.include[].name`\n\n`matrix.include[]` section allows specifying a `name` along with a list of `params`. This `name` field is available as\npart of the `pipelineRun.status.childReferences[].displayName` if specified.\n\n`displayName` and `matrix.include[].name` can co-exist but `matrix.include[].name` takes higher precedence. It is also\npossible for the pipeline author to specify `params` in `matrix.include[].name` which are resolved in the `childReferences`.\n\n```yaml\n- name: platforms-and-browsers-with-include\n  matrix:\n    include:\n      - name: \"Platform: $(params.platform)\"\n        params:\n          - name: platform\n            value: linux111\n  params:\n    - name: browser\n      value: chrome\n```\n\n### Precedence Order\n\n| specification                                             | precedence                      | `childReferences[].displayName` |\n|-----------------------------------------------------------|---------------------------------|---------------------------------|\n| `tasks[].displayName`                                     | `tasks[].displayName`           | `tasks[].displayName`           |\n| `tasks[].matrix.include[].name`                           | `tasks[].matrix.include[].name` | `tasks[].matrix.include[].name` |\n| `tasks[].displayName` and `tasks[].matrix.include[].name` | `tasks[].matrix.include[].name` | `tasks[].matrix.include[].name` |\n\n## Concurrency Control\n\nThe default maximum count of `TaskRuns` or `Runs` from a given `Matrix` is **256**. To customize the maximum count of\n`TaskRuns` or `Runs` generated from a given `Matrix`, configure the `default-max-matrix-combinations-count` in\n[config defaults](\/config\/config-defaults.yaml). 
When a `Matrix` in a `PipelineTask` would generate more than the maximum number of\n`TaskRuns` or `Runs`, `Pipeline` validation fails.\n\nNote: The matrix combination count includes combinations generated from both `Matrix.Params` and `Matrix.Include.Params`.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: config-defaults\ndata:\n  default-service-account: \"tekton\"\n  default-timeout-minutes: \"20\"\n  default-max-matrix-combinations-count: \"1024\"\n  ...\n```\n\nFor more information, see [installation customizations](.\/additional-configs.md#customizing-basic-execution-parameters).\n\n## Parameters\n\n`Matrix` takes in `Parameters` in two sections:\n- `Matrix.Params`: used to generate combinations to fan out the `PipelineTask`.\n- `Matrix.Include.Params`: used to specify explicit combinations to fan out the `PipelineTask`.\n\nNote that:\n- The names of the `Parameters` in the `Matrix` must match the names of the `Parameters` in the underlying\n`Task` that they will be substituting.\n- The names of the `Parameters` in the `Matrix` must be unique. 
Specifying the same parameter multiple times\nwill result in a validation error.\n- A `Parameter` can be passed to either the `matrix` or `params` field, not both.\n- If the `Matrix` has an empty array `Parameter`, then the `PipelineTask` will be skipped.\n\nFor further details on specifying `Parameters` in the `Pipeline` and passing them to\n`PipelineTasks`, see [documentation](pipelines.md#specifying-parameters).\n\n#### Parameters in Matrix.Params\n\n`Matrix.Params` supports string replacements from `Parameters` of type String, Array or Object.\n\n```yaml\ntasks:\n...\n- name: task-4\n  taskRef:\n    name: task-4\n  matrix:\n    params:\n    - name: param-one\n      value:\n      - $(params.foo) # string replacement from string param\n      - $(params.bar[0]) # string replacement from array param\n      - $(params.rad.key) # string replacement from object param\n    - name: param-two\n      value: $(params.bar) # array replacement from array param\n```\n\n`Matrix.Params` supports whole array replacements from array `Parameters`.\n\n```yaml\ntasks:\n...\n- name: task-4\n  taskRef:\n    name: task-4\n  matrix:\n    params:\n    - name: param-one\n      value: $(params.bar[*]) # whole array replacement from array param\n```\n#### Parameters in Matrix.Include.Params\n\n`Matrix.Include.Params` takes string replacements from `Parameters` of type String, Array or Object.\n\n```yaml\ntasks:\n...\n- name: task-4\n  taskRef:\n    name: task-4\n  matrix:\n    include:\n      - name: foo-bar-rad\n        params:\n        - name: foo\n          value: $(params.foo) # string replacement from string param\n        - name: bar\n          value: $(params.bar[0]) # string replacement from array param\n        - name: rad\n          value: $(params.rad.key) # string replacement from object param\n```\n\n### Specifying both `params` and `matrix` in a `PipelineTask`\n\nIn the example below, the *test* `Task` takes *browser* and *platform* `Parameters` of type\n`\"string\"`. 
A `Pipeline` used to run the `Task` on three browsers (using `matrix`) and one
platform (using `params`) would be specified as follows and execute three `TaskRuns`:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: platform-browser-tests
spec:
  tasks:
  - name: fetch-repository
    taskRef:
      name: git-clone
    ...
  - name: test
    matrix:
      params:
        - name: browser
          value:
            - chrome
            - safari
            - firefox
    params:
      - name: platform
        value: linux
    taskRef:
      name: browser-test
  ...
```

## Context Variables

Similarly to the `Parameters` in the `Params` field, the `Parameters` in the `Matrix` field will accept
[context variables](variables.md) that will be substituted, including:

* `PipelineRun` name, namespace and uid
* `Pipeline` name
* `PipelineTask` retries

The following `context` variables allow users to access the `matrix` runtime data. Note: to create an ordering dependency, use `runAfter` or `taskResult` consumption as part of the same `pipelineTask`.

#### Access Matrix Combinations Length

Pipeline authors can access the total number of instances created as part of the `matrix` using the syntax: `tasks.<pipelineTaskName>.matrix.length`.

```yaml
      - name: matrixed-echo-length
        runAfter:
          - matrix-emitting-results
        params:
          - name: matrixlength
            value: $(tasks.matrix-emitting-results.matrix.length)
```

#### Access Aggregated Results Length

Pipeline authors can access the length of the array of aggregated results that were
actually produced using the syntax: `tasks.<pipelineTaskName>.matrix.<resultName>.length`.
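As a rough mental model of these two context variables (plain Python, not Tekton code; the matrix values and result names below are made up for illustration):

```python
import itertools

# Hypothetical matrix: 3 platforms x 3 browsers.
matrix_params = {
    "platform": ["linux", "mac", "windows"],
    "browser": ["chrome", "safari", "firefox"],
}
combinations = list(itertools.product(*matrix_params.values()))

# tasks.<pipelineTaskName>.matrix.length: total fanned-out instances.
matrix_length = len(combinations)
assert matrix_length == 9

# tasks.<pipelineTaskName>.matrix.<resultName>.length: number of results
# actually aggregated. If only 8 of the 9 TaskRuns emitted "a-result",
# the consumer sees an array of length 8.
a_result = ["report-url-%d" % i for i in range(8)]  # made-up values
assert len(a_result) == 8
```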
This will allow users to loop over the results produced.

```yaml
      - name: matrixed-echo-results-length
        runAfter:
          - matrix-emitting-results
        params:
          - name: matrixlength
            value: $(tasks.matrix-emitting-results.matrix.a-result.length)
```

See the full example here: [pr-with-matrix-context-variables]

## Results

### Specifying Results in a Matrix

Consuming `Results` from previous `TaskRuns` or `Runs` in a `Matrix`, which would dynamically generate
`TaskRuns` or `Runs` from the fanned out `PipelineTask`, is supported. Producing `Results` from a
`PipelineTask` with a `Matrix` is not yet supported; see [further details](#results-from-fanned-out-matrixed-pipelinetasks).

See the end-to-end example in [`PipelineRun` with `Matrix` and `Results`][pr-with-matrix-and-results].

#### Results in Matrix.Params

`Matrix.Params` supports whole array replacements and string replacements from `Results` of type String, Array or Object.

```yaml
tasks:
...
- name: task-4
  taskRef:
    name: task-4
  matrix:
    params:
    - name: values
      value: $(tasks.task-4.results.whole-array[*])
```

```yaml
tasks:
...
- name: task-5
  taskRef:
    name: task-5
  matrix:
    params:
    - name: values
      value:
        - $(tasks.task-1.results.a-string-result)
        - $(tasks.task-2.results.an-array-result[0])
        - $(tasks.task-3.results.an-object-result.key)
```

For further information, see the example in [`PipelineRun` with `Matrix` and `Results`][pr-with-matrix-and-results].

#### Results in Matrix.Include.Params

`Matrix.Include.Params` supports string replacements from `Results` of type String, Array or Object.

```yaml
tasks:
...
- name: task-4
  taskRef:
    name: task-4
  matrix:
    include:
      - name: foo-bar-duh
        params:
          - name: foo
            value: $(tasks.task-1.results.foo) # string replacement from string result
          - name: bar
            value: $(tasks.task-2.results.bar[0]) # string replacement from array result
          - name: duh
            value: $(tasks.task-2.results.duh.key) # string replacement from object result
```

### Results from fanned out Matrixed PipelineTasks

Emitting `Results` from fanned out `PipelineTasks` is now supported. Each fanned out
`TaskRun` that produces a `Result` of type `string` is aggregated into an `array`
of `Results` during reconciliation; the whole `array` of `Results` can then be consumed by another `pipelineTask` using the star notation `[*]`.
Note: A known limitation is not being able to consume a singular result or specific
combinations of results produced by a previous fanned out `PipelineTask`.

| Result Type in `taskRef` or `taskSpec` | Parameter Type of Consumer | Specification                                         |
|----------------------------------------|----------------------------|-------------------------------------------------------|
| string                                 | array                      | `$(tasks.<pipelineTaskName>.results.<resultName>[*])` |
| array                                  | Not Supported              | Not Supported                                         |
| object                                 | Not Supported              | Not Supported                                         |

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: platform-browser-tests
spec:
  tasks:
    - name: matrix-emitting-results
      matrix:
        params:
          - name: platform
            value:
              - linux
              - mac
              - windows
          - name: browser
            value:
              - chrome
              - safari
              - firefox
      taskRef:
        name: taskwithresults
        kind: Task
    - name: task-consuming-results
      taskRef:
        name: echoarrayurl
        kind: Task
      params:
        - name: url
          value: $(tasks.matrix-emitting-results.results.report-url[*])
  ...
```

See the full example here: [pr-with-matrix-emitting-results]

## Retries

The `retries` field is used to specify the number of times a `PipelineTask` should be retried when its `TaskRun` or
`Run` fails; see the [documentation][retries] for further details. When a `PipelineTask` is fanned out using `Matrix`,
each `TaskRun` or `Run` executed will be retried up to the number of times specified in the `retries` field of the `PipelineTask`.

For example, the `PipelineTask` in this `PipelineRun` will be fanned out into three `TaskRuns`, each of which will be
retried once:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: matrixed-pr-with-retries-
spec:
  pipelineSpec:
    tasks:
      - name: matrix-and-params
        matrix:
          params:
            - name: platform
              value:
                - linux
                - mac
                - windows
        params:
          - name: browser
            value: chrome
        retries: 1
        taskSpec:
          params:
            - name: platform
            - name: browser
          steps:
            - name: echo
              image: alpine
              script: |
                echo "$(params.platform) and $(params.browser)"
                exit 1
```

## Examples

### `Matrix` Combinations with `Matrix.Params` only

```yaml
matrix:
  params:
    - name: GOARCH
      value:
        - "linux/amd64"
        - "linux/ppc64le"
        - "linux/s390x"
    - name: version
      value:
        - "go1.17"
        - "go1.18.1"
```

This `matrix` specification will result in six `taskRuns` with the following `matrix` combinations:

```json
{ "GOARCH": "linux/amd64", "version": "go1.17" }
{ "GOARCH": "linux/amd64", "version": "go1.18.1" }
{ "GOARCH": "linux/ppc64le", "version": "go1.17" }
{ "GOARCH": "linux/ppc64le", "version": "go1.18.1" }
{ "GOARCH": "linux/s390x", "version": "go1.17" }
{ "GOARCH": "linux/s390x", "version": "go1.18.1" }
```

Let's expand this use case to showcase more complex combinations in the next example.

### `Matrix` Combinations with `Matrix.Params` and `Matrix.Include`

Now, let's introduce `include` with a couple of `Parameters`: `"package"`, `"flags"` and `"context"`:

```yaml
      matrix:
        params:
          - name: GOARCH
            value:
              - "linux/amd64"
              - "linux/ppc64le"
              - "linux/s390x"
          - name: version
            value:
              - "go1.17"
              - "go1.18.1"
        include:
          - name: common-package
            params:
              - name: package
                value: "path/to/common/package/"
          - name: s390x-no-race
            params:
              - name: GOARCH
                value: "linux/s390x"
              - name: flags
                value: "-cover -v"
          - name: go117-context
            params:
              - name: version
                value: "go1.17"
              - name: context
                value: "path/to/go117/context"
          - name: non-existent-arch
            params:
              - name: GOARCH
                value: "I-do-not-exist"
```

The first `include` clause is added to all the original `matrix` combinations without overwriting any `parameters` of
the original combinations:

```json
{ "GOARCH": "linux/amd64", "version": "go1.17", "package": "path/to/common/package/" }
{ "GOARCH": "linux/amd64", "version": "go1.18.1", "package": "path/to/common/package/" }
{ "GOARCH": "linux/ppc64le", "version": "go1.17", "package": "path/to/common/package/" }
{ "GOARCH": "linux/ppc64le", "version": "go1.18.1", "package": "path/to/common/package/" }
{ "GOARCH": "linux/s390x", "version": "go1.17", "package": "path/to/common/package/" }
{ "GOARCH": "linux/s390x", "version": "go1.18.1", "package": "path/to/common/package/" }
```

The second `include` clause adds `"flags": "-cover -v"` only to the original `matrix` combinations that include
`"GOARCH": "linux/s390x"`:

```json
{ "GOARCH": "linux/s390x", "version": "go1.17", "package": "path/to/common/package/", "flags": "-cover -v" }
{ "GOARCH": "linux/s390x", "version": "go1.18.1", "package": "path/to/common/package/", "flags": "-cover -v" }
```

The third `include` clause adds `"context": "path/to/go117/context"` only to the original `matrix` combinations
that include `"version": "go1.17"`:

```json
{ "GOARCH": "linux/amd64", "version": "go1.17", "package": "path/to/common/package/", "context": "path/to/go117/context" }
{ "GOARCH": "linux/ppc64le", "version": "go1.17", "package": "path/to/common/package/", "context": "path/to/go117/context" }
{ "GOARCH": "linux/s390x", "version": "go1.17", "package": "path/to/common/package/", "flags": "-cover -v", "context": "path/to/go117/context" }
```

The fourth `include` clause cannot be added to any original `matrix` combination without overwriting its `params`,
so it is added as an additional `matrix` combination:

```json
{ "GOARCH": "I-do-not-exist" }
```

The above specification will result in seven `taskRuns` with the following matrix combinations:

```json
{ "GOARCH": "linux/amd64", "version": "go1.17", "package": "path/to/common/package/", "context": "path/to/go117/context" }
{ "GOARCH": "linux/amd64", "version": "go1.18.1", "package": "path/to/common/package/" }
{ "GOARCH": "linux/ppc64le", "version": "go1.17", "package": "path/to/common/package/", "context": "path/to/go117/context" }
{ "GOARCH": "linux/ppc64le", "version": "go1.18.1", "package": "path/to/common/package/" }
{ "GOARCH": "linux/s390x", "version": "go1.17", "package": "path/to/common/package/", "flags": "-cover -v", "context": "path/to/go117/context" }
{ "GOARCH": "linux/s390x", "version": "go1.18.1", "package": "path/to/common/package/", "flags": "-cover -v" }
{ "GOARCH": "I-do-not-exist" }
```

### `PipelineTasks` with `Tasks`

When a `PipelineTask` has a `Task` and a `Matrix`, the `Task` will be executed in parallel `TaskRuns` with
substitutions from combinations of `Parameters`.

In the example below, nine `TaskRuns` are created with combinations of platforms ("linux", "mac", "windows")
and browsers ("chrome", "safari", "firefox").

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: platform-browsers
  annotations:
    description: |
      A task that does something cool with platforms and browsers
spec:
  params:
    - name: platform
    - name: browser
  steps:
    - name: echo
      image: alpine
      script: |
        echo "$(params.platform) and $(params.browser)"
---
# run platform-browsers task with:
#   platforms: linux, mac, windows
#   browsers: chrome, safari, firefox
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: matrixed-pr-
spec:
  serviceAccountName: 'default'
  pipelineSpec:
    tasks:
      - name: platforms-and-browsers
        matrix:
          params:
            - name: platform
              value:
                - linux
                - mac
                - windows
            - name: browser
              value:
                - chrome
                - safari
                - firefox
        taskRef:
          name: platform-browsers
```

When the above `PipelineRun` is executed, these are the `TaskRuns` that are created:

```shell
$ tkn taskruns list

NAME                                         STARTED          DURATION    STATUS
matrixed-pr-6lvzk-platforms-and-browsers-8   11 seconds ago   7 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-6   12 seconds ago   7 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-7   12 seconds ago   9 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-4   12 seconds ago   7 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-5   12 seconds ago   6 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-3   13 seconds ago   7 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-1   13 seconds ago   8 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-2   13 seconds ago   8 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-0   13 seconds ago   8 seconds   Succeeded
```

When the above `PipelineRun` is executed, its status is populated with `ChildReferences` of the above `TaskRuns`. The
`PipelineRun` status tracks the status of all the fanned out `TaskRuns`.
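The nine `TaskRun` names in the listing above are simply the `PipelineRun` and `PipelineTask` names suffixed with a combination index, one per platform/browser pair; a quick sketch of that fan-out (prefix taken from the listing above):

```python
import itertools

platforms = ["linux", "mac", "windows"]
browsers = ["chrome", "safari", "firefox"]

# One TaskRun per combination; names are suffixed with the combination index.
names = [
    f"matrixed-pr-6lvzk-platforms-and-browsers-{i}"
    for i, _combo in enumerate(itertools.product(platforms, browsers))
]
assert len(names) == 9
assert names[0].endswith("-0")
assert names[-1].endswith("-8")
```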
This is the `PipelineRun` after completing
successfully:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: matrixed-pr-
  labels:
    tekton.dev/pipeline: matrixed-pr-6lvzk
  name: matrixed-pr-6lvzk
  namespace: default
spec:
  pipelineSpec:
    tasks:
    - matrix:
        params:
          - name: platform
            value:
              - linux
              - mac
              - windows
          - name: browser
            value:
              - chrome
              - safari
              - firefox
      name: platforms-and-browsers
      taskRef:
        kind: Task
        name: platform-browsers
  serviceAccountName: default
  timeout: 1h0m0s
status:
  pipelineSpec:
    tasks:
      - matrix:
          params:
            - name: platform
              value:
                - linux
                - mac
                - windows
            - name: browser
              value:
                - chrome
                - safari
                - firefox
        name: platforms-and-browsers
        taskRef:
          kind: Task
          name: platform-browsers
  startTime: "2022-06-23T23:01:11Z"
  completionTime: "2022-06-23T23:01:20Z"
  conditions:
    - lastTransitionTime: "2022-06-23T23:01:20Z"
      message: 'Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0'
      reason: Succeeded
      status: "True"
      type: Succeeded
  childReferences:
  - apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    name: matrixed-pr-6lvzk-platforms-and-browsers-4
    pipelineTaskName: platforms-and-browsers
  - apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    name: matrixed-pr-6lvzk-platforms-and-browsers-6
    pipelineTaskName: platforms-and-browsers
  - apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    name: matrixed-pr-6lvzk-platforms-and-browsers-2
    pipelineTaskName: platforms-and-browsers
  - apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    name: matrixed-pr-6lvzk-platforms-and-browsers-1
    pipelineTaskName: platforms-and-browsers
  - apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    name: matrixed-pr-6lvzk-platforms-and-browsers-7
    pipelineTaskName: platforms-and-browsers
  - apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    name: matrixed-pr-6lvzk-platforms-and-browsers-0
    pipelineTaskName: platforms-and-browsers
  - apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    name: matrixed-pr-6lvzk-platforms-and-browsers-8
    pipelineTaskName: platforms-and-browsers
  - apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    name: matrixed-pr-6lvzk-platforms-and-browsers-3
    pipelineTaskName: platforms-and-browsers
  - apiVersion: tekton.dev/v1beta1
    kind: TaskRun
    name: matrixed-pr-6lvzk-platforms-and-browsers-5
    pipelineTaskName: platforms-and-browsers
```

To execute this example yourself, run [`PipelineRun` with `Matrix`][pr-with-matrix].

### `PipelineTasks` with `Custom Tasks`

When a `PipelineTask` has a `Custom Task` and a `Matrix`, the `Custom Task` will be executed in parallel `Runs` with
substitutions from combinations of `Parameters`.

In the example below, eight `Runs` are created with combinations of CEL expressions, using the [CEL `Custom Task`][cel].

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: matrixed-pr-
spec:
  serviceAccountName: 'default'
  pipelineSpec:
    tasks:
      - name: platforms-and-browsers
        matrix:
          params:
            - name: type
              value:
                - "type(1)"
                - "type(1.0)"
            - name: colors
              value:
                - "{'blue': '0x000080', 'red': '0xFF0000'}['blue']"
                - "{'blue': '0x000080', 'red': '0xFF0000'}['red']"
            - name: bool
              value:
                - "type(1) == int"
                - "{'blue': '0x000080', 'red': '0xFF0000'}['red'] == '0xFF0000'"
        taskRef:
          apiVersion: cel.tekton.dev/v1alpha1
          kind: CEL
```

When the above `PipelineRun` is executed, these `Runs` are created:

```shell
$ kubectl get run.tekton.dev

NAME                                         SUCCEEDED   REASON              STARTTIME   COMPLETIONTIME
matrixed-pr-4djw9-platforms-and-browsers-0   True        EvaluationSuccess   10s         10s
matrixed-pr-4djw9-platforms-and-browsers-1   True        EvaluationSuccess   10s         10s
matrixed-pr-4djw9-platforms-and-browsers-2   True        EvaluationSuccess   10s         10s
matrixed-pr-4djw9-platforms-and-browsers-3   True        EvaluationSuccess   9s          9s
matrixed-pr-4djw9-platforms-and-browsers-4   True        EvaluationSuccess   9s          9s
matrixed-pr-4djw9-platforms-and-browsers-5   True        EvaluationSuccess   9s          9s
matrixed-pr-4djw9-platforms-and-browsers-6   True        EvaluationSuccess   9s          9s
matrixed-pr-4djw9-platforms-and-browsers-7   True        EvaluationSuccess   9s          9s
```

When the above `PipelineRun` is executed, its status is populated with `ChildReferences` of the above `Runs`. The
`PipelineRun` status tracks the status of all the fanned out `Runs`.
This is the `PipelineRun` after completing:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: matrixed-pr-\n  labels:\n    tekton.dev\/pipeline: matrixed-pr-4djw9\n  name: matrixed-pr-4djw9\n  namespace: default\nspec:\n  pipelineSpec:\n    tasks:\n      - matrix:\n          params:\n            - name: type\n              value:\n                - type(1)\n                - type(1.0)\n            - name: colors\n              value:\n                - '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''blue'']'\n                - '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''red'']'\n            - name: bool\n              value:\n                - type(1) == int\n                - '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''red''] == ''0xFF0000'''\n        name: platforms-and-browsers\n        taskRef:\n          apiVersion: cel.tekton.dev\/v1alpha1\n          kind: CEL\n  serviceAccountName: default\n  timeout: 1h0m0s\nstatus:\n  pipelineSpec:\n    tasks:\n      - matrix:\n          params:\n            - name: type\n              value:\n                - type(1)\n                - type(1.0)\n            - name: colors\n              value:\n                - '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''blue'']'\n                - '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''red'']'\n            - name: bool\n              value:\n                - type(1) == int\n                - '{''blue'': ''0x000080'', ''red'': ''0xFF0000''}[''red''] == ''0xFF0000'''\n        name: platforms-and-browsers\n        taskRef:\n          apiVersion: cel.tekton.dev\/v1alpha1\n          kind: CEL\n  startTime: \"2022-06-28T20:49:40Z\"\n  completionTime: \"2022-06-28T20:49:41Z\"\n  conditions:\n    - lastTransitionTime: \"2022-06-28T20:49:41Z\"\n      message: 'Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0'\n      reason: Succeeded\n      status: \"True\"\n      type: Succeeded\n  childReferences:\n    - 
apiVersion: tekton.dev\/v1alpha1\n      kind: Run\n      name: matrixed-pr-4djw9-platforms-and-browsers-1\n      pipelineTaskName: platforms-and-browsers\n    - apiVersion: tekton.dev\/v1alpha1\n      kind: Run\n      name: matrixed-pr-4djw9-platforms-and-browsers-2\n      pipelineTaskName: platforms-and-browsers\n    - apiVersion: tekton.dev\/v1alpha1\n      kind: Run\n      name: matrixed-pr-4djw9-platforms-and-browsers-3\n      pipelineTaskName: platforms-and-browsers\n    - apiVersion: tekton.dev\/v1alpha1\n      kind: Run\n      name: matrixed-pr-4djw9-platforms-and-browsers-4\n      pipelineTaskName: platforms-and-browsers\n    - apiVersion: tekton.dev\/v1alpha1\n      kind: Run\n      name: matrixed-pr-4djw9-platforms-and-browsers-5\n      pipelineTaskName: platforms-and-browsers\n    - apiVersion: tekton.dev\/v1alpha1\n      kind: Run\n      name: matrixed-pr-4djw9-platforms-and-browsers-6\n      pipelineTaskName: platforms-and-browsers\n    - apiVersion: tekton.dev\/v1alpha1\n      kind: Run\n      name: matrixed-pr-4djw9-platforms-and-browsers-7\n      pipelineTaskName: platforms-and-browsers\n    - apiVersion: tekton.dev\/v1alpha1\n      kind: Run\n      name: matrixed-pr-4djw9-platforms-and-browsers-0\n      pipelineTaskName: platforms-and-browsers\n```\n\n[cel]: https:\/\/github.com\/tektoncd\/experimental\/tree\/1609827ea81d05c8d00f8933c5c9d6150cd36989\/cel\n[pr-with-matrix]: https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/examples\/v1\/pipelineruns\/beta\/pipelinerun-with-matrix.yaml\n[pr-with-matrix-and-results]: https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/examples\/v1\/pipelineruns\/beta\/pipelinerun-with-matrix-and-results.yaml\n[pr-with-matrix-context-variables]: https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/examples\/v1\/pipelineruns\/beta\/pipelinerun-with-matrix-context-variables.yaml\n[pr-with-matrix-emitting-results]: 
https://github.com/tektoncd/pipeline/blob/main/examples/v1/pipelineruns/beta/pipelinerun-with-matrix-emitting-results.yaml
[retries]: pipelines.md#using-the-retries-field
has an empty array `Parameter`, the `PipelineTask` will be skipped.

For further details on specifying `Parameters` in the `Pipeline` and passing them to `PipelineTasks`, see the [documentation](pipelines.md#specifying-parameters).

### Parameters in Matrix.Params

`Matrix.Params` supports string replacements from `Parameters` of type String, Array or Object:

```yaml
tasks:
  - name: task-4
    taskRef:
      name: task-4
    matrix:
      params:
        - name: param-one
          value:
            - $(params.foo) # string replacement from string param
            - $(params.bar[0]) # string replacement from array param
            - $(params.rad.key) # string replacement from object param
        - name: param-two
          value: $(params.bar) # array replacement from array param
```

`Matrix.Params` supports whole array replacements from array `Parameters`:

```yaml
tasks:
  - name: task-4
    taskRef:
      name: task-4
    matrix:
      params:
        - name: param-one
          value: $(params.bar[*]) # whole array replacement from array param
```

### Parameters in Matrix.Include.Params

`Matrix.Include.Params` takes string replacements from `Parameters` of type String, Array or Object:

```yaml
tasks:
  - name: task-4
    taskRef:
      name: task-4
    matrix:
      include:
        - name: foo-bar-rad
          params:
            - name: foo
              value: $(params.foo) # string replacement from string param
            - name: bar
              value: $(params.bar[0]) # string replacement from array param
            - name: rad
              value: $(params.rad.key) # string replacement from object param
```

## Specifying both `params` and `matrix` in a `PipelineTask`

In the example below, the `test` `Task` takes `browser` and `platform` `Parameters` of type `string`. A `Pipeline` used to run the `Task` on three browsers (using `matrix`) and one platform (using `params`) would be specified as such, and would execute three `TaskRuns`:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: platform-browser-tests
spec:
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone
    - name: test
      matrix:
        params:
          - name: browser
            value:
              - chrome
              - safari
              - firefox
      params:
        - name: platform
          value: linux
      taskRef:
        name: browser-test
```

## Context Variables

Similarly to the `Parameters` in the `Params` field, the `Parameters` in the `Matrix` field will accept [context variables](variables.md) that will be substituted, including:

- `PipelineRun` name, namespace and uid
- `Pipeline` name
- `PipelineTask` retries

The following context variables allow users to access the `matrix` runtime data. **Note:** In order to create an ordering dependency, use `runAfter` or `taskResult` consumption as part of the same `pipelineTask`.

### Access Matrix Combinations Length

The pipeline authors can access the total number of instances created as part of the `matrix` using the syntax `$(tasks.<pipelineTaskName>.matrix.length)`:

```yaml
    - name: matrixed-echo-length
      runAfter:
        - matrix-emitting-results
      params:
        - name: matrixlength
          value: $(tasks.matrix-emitting-results.matrix.length)
```

### Access Aggregated Results Length

The pipeline authors can access the length of the array of aggregated results that were actually produced using the syntax `$(tasks.<pipelineTaskName>.matrix.<resultName>.length)`. This allows users to loop over the results produced:

```yaml
    - name: matrixed-echo-results-length
      runAfter:
        - matrix-emitting-results
      params:
        - name: matrixlength
          value: $(tasks.matrix-emitting-results.matrix.a-result.length)
```

See the full example here: [`PipelineRun` with `Matrix` context variables][pr-with-matrix-context-variables].

## Results

### Specifying Results in a Matrix

Consuming `Results` from previous `TaskRuns` or `Runs` in a `Matrix`, which would dynamically generate `TaskRuns` or `Runs` from the fanned
out `PipelineTask`, is supported. Producing `Results` from a `PipelineTask` with a `Matrix` is not yet supported; see [further details](#results-from-fanned-out-matrixed-pipelinetasks).

See the end-to-end example in [`PipelineRun` with `Matrix` and `Results`][pr-with-matrix-and-results].

### Results in Matrix.Params

`Matrix.Params` supports whole array replacements and string replacements from `Results` of type String, Array or Object:

```yaml
tasks:
  - name: task-4
    taskRef:
      name: task-4
    matrix:
      params:
        - name: values
          value: $(tasks.task-4.results.whole-array[*])
```

```yaml
tasks:
  - name: task-5
    taskRef:
      name: task-5
    matrix:
      params:
        - name: values
          value:
            - $(tasks.task-1.results.a-string-result)
            - $(tasks.task-2.results.an-array-result[0])
            - $(tasks.task-3.results.an-object-result.key)
```

For further information, see the example in [`PipelineRun` with `Matrix` and `Results`][pr-with-matrix-and-results].

### Results in Matrix.Include.Params

`Matrix.Include.Params` supports string replacements from `Results` of type String, Array or Object:

```yaml
tasks:
  - name: task-4
    taskRef:
      name: task-4
    matrix:
      include:
        - name: foo-bar-duh
          params:
            - name: foo
              value: $(tasks.task-1.results.foo) # string replacement from string result
            - name: bar
              value: $(tasks.task-2.results.bar[0]) # string replacement from array result
            - name: duh
              value: $(tasks.task-2.results.duh.key) # string replacement from object result
```

### Results from fanned out Matrixed PipelineTasks

Emitting `Results` from fanned out `PipelineTasks` is now supported. Each fanned out `TaskRun` that produces `Results` of type `string` will be aggregated into an `array` of `Results` during reconciliation; the whole `array` of `Results` can then be consumed by another `pipelineTask` using the star notation.

**Note:** A known limitation is not being able
to consume a singular result or specific combinations of results produced by a previous fanned out `PipelineTask`.

| Result Type in `taskRef` or `taskSpec` | Parameter Type of Consumer | Specification |
|----------------------------------------|----------------------------|---------------|
| `string` | `array` | `$(tasks.<pipelineTaskName>.results.<resultName>[*])` |
| `array`  | Not Supported | Not Supported |
| `object` | Not Supported | Not Supported |

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: platform-browser-tests
spec:
  tasks:
    - name: matrix-emitting-results
      matrix:
        params:
          - name: platform
            value:
              - linux
              - mac
              - windows
          - name: browser
            value:
              - chrome
              - safari
              - firefox
      taskRef:
        name: taskwithresults
        kind: Task
    - name: task-consuming-results
      taskRef:
        name: echoarrayurl
        kind: Task
      params:
        - name: url
          value: $(tasks.matrix-emitting-results.results.report-url[*])
```

See the full example: [`PipelineRun` with `Matrix` emitting `Results`][pr-with-matrix-emitting-results].

## Retries

The `retries` field is used to specify the number of times a `PipelineTask` should be retried when its `TaskRun` or `Run` fails; see the [documentation][retries] for further details. When a `PipelineTask` is fanned out using `Matrix`, each `TaskRun` or `Run` executed will be retried as many times as specified in the `retries` field of the `PipelineTask`.

For example, the `PipelineTask` in this `PipelineRun` will be fanned out into three `TaskRuns`,
each of which will be retried once:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: matrixed-pr-with-retries-
spec:
  pipelineSpec:
    tasks:
      - name: matrix-and-params
        matrix:
          params:
            - name: platform
              value:
                - linux
                - mac
                - windows
        params:
          - name: browser
            value: chrome
        retries: 1
        taskSpec:
          params:
            - name: platform
            - name: browser
          steps:
            - name: echo
              image: alpine
              script: |
                echo "$(params.platform) and $(params.browser)"
                exit 1
```

## Examples

### `Matrix` Combinations with `Matrix.Params` only

```yaml
matrix:
  params:
    - name: GOARCH
      value:
        - linux/amd64
        - linux/ppc64le
        - linux/s390x
    - name: version
      value:
        - go1.17
        - go1.18.1
```

This `matrix` specification will result in six `taskRuns` with the following `matrix` combinations:

```json
{ "GOARCH": "linux/amd64", "version": "go1.17" }
{ "GOARCH": "linux/amd64", "version": "go1.18.1" }
{ "GOARCH": "linux/ppc64le", "version": "go1.17" }
{ "GOARCH": "linux/ppc64le", "version": "go1.18.1" }
{ "GOARCH": "linux/s390x", "version": "go1.17" }
{ "GOARCH": "linux/s390x", "version": "go1.18.1" }
```

Let's expand this use case to showcase slightly more complex combinations in the next example.

### `Matrix` Combinations with `Matrix.Params` and `Matrix.Include`

Now, let's introduce `include` with a couple of `Parameters`: `package`, `flags` and `context`:

```yaml
  matrix:
    params:
      - name: GOARCH
        value:
          - linux/amd64
          - linux/ppc64le
          - linux/s390x
      - name: version
        value:
          - go1.17
          - go1.18.1
    include:
      - name: common-package
        params:
          - name: package
            value: path/to/common/package
      - name: s390x-no-race
        params:
          - name: GOARCH
            value: linux/s390x
          - name: flags
            value: '-cover -v'
      - name: go117-context
        params:
          - name: version
            value: go1.17
          - name: context
            value: path/to/go117/context
      - name: non-existent-arch
        params:
          - name: GOARCH
            value: I-do-not-exist
```

The first `include` clause is added to all the original `matrix` combinations without overwriting any `parameters` of the original combinations:

```json
{ "GOARCH": "linux/amd64", "version": "go1.17", "package": "path/to/common/package" }
{ "GOARCH": "linux/amd64", "version": "go1.18.1", "package": "path/to/common/package" }
{ "GOARCH": "linux/ppc64le", "version": "go1.17", "package": "path/to/common/package" }
{ "GOARCH": "linux/ppc64le", "version": "go1.18.1", "package": "path/to/common/package" }
{ "GOARCH": "linux/s390x", "version": "go1.17", "package": "path/to/common/package" }
{ "GOARCH": "linux/s390x", "version": "go1.18.1", "package": "path/to/common/package" }
```

The second `include` clause adds `"flags": "-cover -v"` only to the original `matrix` combinations that include `"GOARCH": "linux/s390x"`:

```json
{ "GOARCH": "linux/s390x", "version": "go1.17", "package": "path/to/common/package", "flags": "-cover -v" }
{ "GOARCH": "linux/s390x", "version": "go1.18.1", "package": "path/to/common/package", "flags": "-cover -v" }
```

The third `include` clause adds `"context": "path/to/go117/context"` only to the original `matrix` combinations that include `"version": "go1.17"`:

```json
{ "GOARCH": "linux/amd64", "version": "go1.17", "package": "path/to/common/package", "context": "path/to/go117/context" }
{ "GOARCH": "linux/ppc64le", "version": "go1.17", "package": "path/to/common/package", "context": "path/to/go117/context" }
{ "GOARCH": "linux/s390x", "version": "go1.17", "package": "path/to/common/package", "flags": "-cover -v", "context": "path/to/go117/context" }
```

The fourth `include` clause cannot be added to any original `matrix` combination without overwriting `params` of the original combinations, so it is added as an additional `matrix` combination:

```json
{ "GOARCH": "I-do-not-exist" }
```

The above specification will result in seven `taskRuns` with the following `matrix` combinations:

```json
{ "GOARCH": "linux/amd64", "version": "go1.17", "package": "path/to/common/package", "context": "path/to/go117/context" }
{ "GOARCH": "linux/amd64", "version": "go1.18.1", "package": "path/to/common/package" }
{ "GOARCH": "linux/ppc64le", "version": "go1.17", "package": "path/to/common/package", "context": "path/to/go117/context" }
{ "GOARCH": "linux/ppc64le", "version": "go1.18.1", "package": "path/to/common/package" }
{ "GOARCH": "linux/s390x", "version": "go1.17", "package": "path/to/common/package", "flags": "-cover -v", "context": "path/to/go117/context" }
{ "GOARCH": "linux/s390x", "version": "go1.18.1", "package": "path/to/common/package", "flags": "-cover -v" }
{ "GOARCH": "I-do-not-exist" }
```

### `PipelineTasks` with `Tasks`

When a `PipelineTask` has a `Task` and a `Matrix`, the `Task` will be executed in parallel `TaskRuns` with substitutions from combinations of `Parameters`.

In the example below, nine `TaskRuns` are created with combinations of platforms (`linux`, `mac`, `windows`) and browsers (`chrome`, `safari`, `firefox`):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: platform-browsers
  annotations:
    description: |
      A task that does something cool with platforms and browsers
spec:
  params:
    - name: platform
    - name: browser
  steps:
    - name: echo
      image: alpine
      script: |
        echo "$(params.platform) and $(params.browser)"
---
# run platform-browsers task with:
#   platforms: linux, mac, windows
#   browsers: chrome, safari, firefox
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: matrixed-pr-
spec:
  serviceAccountName: "default"
  pipelineSpec:
    tasks:
      - name: platforms-and-browsers
        matrix:
          params:
            - name: platform
              value:
                - linux
                - mac
                - windows
            - name: browser
              value:
                - chrome
                - safari
                - firefox
        taskRef:
          name: platform-browsers
```

When the above `PipelineRun` is executed, these are the `TaskRuns` that are created:

```shell
$ tkn taskruns list

NAME                                         STARTED          DURATION    STATUS
matrixed-pr-6lvzk-platforms-and-browsers-8   11 seconds ago   7 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-6   12 seconds ago   7 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-7   12 seconds ago   9 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-4   12 seconds ago   7 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-5   12 seconds ago   6 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-3   13 seconds ago   7 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-1   13 seconds ago   8 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-2   13 seconds ago   8 seconds   Succeeded
matrixed-pr-6lvzk-platforms-and-browsers-0   13 seconds ago   8 seconds   Succeeded
```

When the above `Pipeline` is executed, its status is populated with `ChildReferences` of the above `TaskRuns`. The `PipelineRun` status tracks the status of all the fanned out `TaskRuns`. This is the `PipelineRun` after completing successfully:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: matrixed-pr-
  labels:
    tekton.dev/pipeline: matrixed-pr-6lvzk
  name: matrixed-pr-6lvzk
  namespace: default
spec:
  pipelineSpec:
    tasks:
      - matrix:
          params:
            - name: platform
              value:
                - linux
                - mac
                - windows
            - name: browser
              value:
                - chrome
                - safari
                - firefox
        name: platforms-and-browsers
        taskRef:
          kind: Task
          name: platform-browsers
  serviceAccountName: default
  timeout: 1h0m0s
status:
  pipelineSpec:
    tasks:
      - matrix:
          params:
            - name: platform
              value:
                - linux
                - mac
                - windows
            - name: browser
              value:
                - chrome
                - safari
                - firefox
        name: platforms-and-browsers
        taskRef:
          kind: Task
          name: platform-browsers
  startTime: "2022-06-23T23:01:11Z"
  completionTime: "2022-06-23T23:01:20Z"
  conditions:
    - lastTransitionTime: "2022-06-23T23:01:20Z"
      message: 'Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0'
      reason: Succeeded
      status: "True"
      type: Succeeded
  childReferences:
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: matrixed-pr-6lvzk-platforms-and-browsers-4
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: matrixed-pr-6lvzk-platforms-and-browsers-6
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: matrixed-pr-6lvzk-platforms-and-browsers-2
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: matrixed-pr-6lvzk-platforms-and-browsers-1
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: matrixed-pr-6lvzk-platforms-and-browsers-7
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: matrixed-pr-6lvzk-platforms-and-browsers-0
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: matrixed-pr-6lvzk-platforms-and-browsers-8
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: matrixed-pr-6lvzk-platforms-and-browsers-3
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: matrixed-pr-6lvzk-platforms-and-browsers-5
      pipelineTaskName: platforms-and-browsers
```

To execute this example yourself, run [`PipelineRun` with `Matrix`][pr-with-matrix].

### `PipelineTasks` with `Custom Tasks`

When a `PipelineTask` has a `Custom Task` and a `Matrix`, the `Custom Task` will be executed in parallel `Runs` with substitutions from combinations of `Parameters`.

In the example below, eight `Runs` are created with combinations of CEL expressions, using the [`CEL` `Custom Task`][cel]:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: matrixed-pr-
spec:
  serviceAccountName: "default"
  pipelineSpec:
    tasks:
      - name: platforms-and-browsers
        matrix:
          params:
            - name: type
              value:
                - "type(1)"
                - "type(1.0)"
            - name: colors
              value:
                - "{'blue': '0x000080', 'red': '0xFF0000'}['blue']"
                - "{'blue': '0x000080', 'red': '0xFF0000'}['red']"
            - name: bool
              value:
                - "type(1) == int"
                - "{'blue': '0x000080', 'red': '0xFF0000'}['red'] == '0xFF0000'"
        taskRef:
          apiVersion: cel.tekton.dev/v1alpha1
          kind: CEL
```

When the above `PipelineRun` is executed, these `Runs` are created:

```shell
$ k get run.tekton.dev

NAME                                         SUCCEEDED   REASON              STARTTIME   COMPLETIONTIME
matrixed-pr-4djw9-platforms-and-browsers-0   True        EvaluationSuccess   10s         10s
matrixed-pr-4djw9-platforms-and-browsers-1   True        EvaluationSuccess   10s         10s
matrixed-pr-4djw9-platforms-and-browsers-2   True        EvaluationSuccess   10s         10s
matrixed-pr-4djw9-platforms-and-browsers-3   True        EvaluationSuccess   9s          9s
matrixed-pr-4djw9-platforms-and-browsers-4   True        EvaluationSuccess   9s          9s
matrixed-pr-4djw9-platforms-and-browsers-5   True        EvaluationSuccess   9s          9s
matrixed-pr-4djw9-platforms-and-browsers-6   True        EvaluationSuccess   9s          9s
matrixed-pr-4djw9-platforms-and-browsers-7   True        EvaluationSuccess   9s          9s
```

When the above `PipelineRun` is executed, its status is populated with `ChildReferences` of the above `Runs`. The `PipelineRun` status tracks the status of all the fanned out `Runs`. This is the `PipelineRun` after completing:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: matrixed-pr-
  labels:
    tekton.dev/pipeline: matrixed-pr-4djw9
  name: matrixed-pr-4djw9
  namespace: default
spec:
  pipelineSpec:
    tasks:
      - matrix:
          params:
            - name: type
              value:
                - "type(1)"
                - "type(1.0)"
            - name: colors
              value:
                - "{'blue': '0x000080', 'red': '0xFF0000'}['blue']"
                - "{'blue': '0x000080', 'red': '0xFF0000'}['red']"
            - name: bool
              value:
                - "type(1) == int"
                - "{'blue': '0x000080', 'red': '0xFF0000'}['red'] == '0xFF0000'"
        name: platforms-and-browsers
        taskRef:
          apiVersion: cel.tekton.dev/v1alpha1
          kind: CEL
  serviceAccountName: default
  timeout: 1h0m0s
status:
  pipelineSpec:
    tasks:
      - matrix:
          params:
            - name: type
              value:
                - "type(1)"
                - "type(1.0)"
            - name: colors
              value:
                - "{'blue': '0x000080', 'red': '0xFF0000'}['blue']"
                - "{'blue': '0x000080', 'red': '0xFF0000'}['red']"
            - name: bool
              value:
                - "type(1) == int"
                - "{'blue': '0x000080', 'red': '0xFF0000'}['red'] == '0xFF0000'"
        name: platforms-and-browsers
        taskRef:
          apiVersion: cel.tekton.dev/v1alpha1
          kind: CEL
  startTime: "2022-06-28T20:49:40Z"
  completionTime: "2022-06-28T20:49:41Z"
  conditions:
    - lastTransitionTime: "2022-06-28T20:49:41Z"
      message: 'Tasks Completed: 1 (Failed: 0, Cancelled 0), Skipped: 0'
      reason: Succeeded
      status: "True"
      type: Succeeded
  childReferences:
    - apiVersion: tekton.dev/v1alpha1
      kind: Run
      name: matrixed-pr-4djw9-platforms-and-browsers-1
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1alpha1
      kind: Run
      name: matrixed-pr-4djw9-platforms-and-browsers-2
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1alpha1
      kind: Run
      name: matrixed-pr-4djw9-platforms-and-browsers-3
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1alpha1
      kind: Run
      name: matrixed-pr-4djw9-platforms-and-browsers-4
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1alpha1
      kind: Run
      name: matrixed-pr-4djw9-platforms-and-browsers-5
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1alpha1
      kind: Run
      name: matrixed-pr-4djw9-platforms-and-browsers-6
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1alpha1
      kind: Run
      name: matrixed-pr-4djw9-platforms-and-browsers-7
      pipelineTaskName: platforms-and-browsers
    - apiVersion: tekton.dev/v1alpha1
      kind: Run
      name: matrixed-pr-4djw9-platforms-and-browsers-0
      pipelineTaskName: platforms-and-browsers
```

[cel]: https://github.com/tektoncd/experimental/tree/1609827ea81d05c8d00f8933c5c9d6150cd36989/cel
[pr-with-matrix]: https://github.com/tektoncd/pipeline/blob/main/examples/v1/pipelineruns/beta/pipelinerun-with-matrix.yaml
[pr-with-matrix-and-results]: https://github.com/tektoncd/pipeline/blob/main/examples/v1/pipelineruns/beta/pipelinerun-with-matrix-and-results.yaml
[pr-with-matrix-context-variables]: https://github.com/tektoncd/pipeline/blob/main/examples/v1/pipelineruns/beta/pipelinerun-with-matrix-context-variables.yaml
[pr-with-matrix-emitting-results]: https://github.com/tektoncd/pipeline/blob/main/examples/v1/pipelineruns/beta/pipelinerun-with-matrix-emitting-results.yaml
[retries]: pipelines.md#using-the-retries-field
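The combination semantics described in the examples above (a cross-product of the `Matrix.Params` arrays, plus `include` clauses merged into compatible combinations, with an incompatible `include` appended as an extra combination) can be sketched in a few lines of Python. This is an illustrative model only, not Tekton's actual Go implementation; the `fan_out` helper is a hypothetical name introduced here:

```python
from itertools import product

def fan_out(matrix_params, includes=()):
    """Model Matrix fan-out: cross-product of matrix params, then merge
    each include clause per the rules described in the examples above."""
    names = list(matrix_params)
    combos = [dict(zip(names, values))
              for values in product(*matrix_params.values())]
    for inc in includes:
        matched = False
        for combo in combos:
            # An include is compatible with a combination if every matrix
            # param it specifies has the same value in that combination.
            if all(combo.get(k) == v for k, v in inc.items() if k in names):
                # Merge only the non-matrix params, never overwriting.
                combo.update({k: v for k, v in inc.items() if k not in names})
                matched = True
        if not matched:
            combos.append(dict(inc))  # appended as an extra combination
    return combos

combos = fan_out(
    {"GOARCH": ["linux/amd64", "linux/ppc64le", "linux/s390x"],
     "version": ["go1.17", "go1.18.1"]},
    includes=[
        {"package": "path/to/common/package"},
        {"GOARCH": "linux/s390x", "flags": "-cover -v"},
        {"version": "go1.17", "context": "path/to/go117/context"},
        {"GOARCH": "I-do-not-exist"},
    ],
)
print(len(combos))  # 7, matching the seven taskRuns in the example above
```

Running it against the `GOARCH`/`version` example reproduces the seven combinations listed above, including the appended `I-do-not-exist` one.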
{"questions":"tekton weight 305 Labels and Annotations Labels and Annotations Tekton allows you to use custom","answers":"<!--\n---\nlinkTitle: \"Labels and Annotations\"\nweight: 305\n---\n-->\n\n# Labels and Annotations\n\nTekton allows you to use custom [Kubernetes Labels](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/)\nto easily mark Tekton entities belonging to the same conceptual execution chain. Tekton also automatically adds select labels\nto more easily identify resource relationships. This document describes the label propagation scheme, automatic labeling, and\nprovides usage examples.\n\n---\n\n- [Label propagation](#label-propagation)\n- [Automatic labeling](#automatic-labeling)\n- [Usage examples](#usage-examples)\n\n---\n\n## Label propagation\n\nLabels propagate among Tekton entities as follows:\n\n- For `Pipelines` instantiated using a `PipelineRun`, labels propagate\nautomatically from `Pipelines` to `PipelineRuns` to `TaskRuns`, and then to\nthe associated `Pods`. If a label is present in both `Pipeline` and\n`PipelineRun`, the label in `PipelineRun` takes precedence.\n\n- Labels from `Tasks` referenced by `TaskRuns` within a `PipelineRun` propagate to the corresponding `TaskRuns`,\nand then to the associated `Pods`. As for `Pipeline` and `PipelineRun`, if a label is present in both `Task` and\n`TaskRun`, the label in `TaskRun` takes precedence.\n\n- For standalone `TaskRuns` (that is, ones not executing as part of a `Pipeline`), labels\npropagate from the [referenced `Task`](taskruns.md#specifying-the-target-task), if one exists, to\nthe corresponding `TaskRun`, and then to the associated `Pod`. The same as above applies.\n\n## Automatic labeling\n\nTekton automatically adds labels to Tekton entities as described in the following table.\n\n**Note:** `*.tekton.dev` labels are reserved for Tekton's internal use only. 
Do not add or remove them manually.\n\n<table >\n\t<tbody>\n\t\t<tr>\n\t\t\t<td><b>Label<\/b><\/td>\n\t\t\t<td><b>Added To<\/b><\/td>\n\t\t\t<td><b>Propagates To<\/b><\/td>\n\t\t\t<td><b>Contains<\/b><\/td>\n\t\t<\/tr>\n\t\t<tr>\n\t\t\t<td><code>tekton.dev\/pipeline<\/code><\/td>\n\t\t\t<td><code>PipelineRuns<\/code><\/td>\n\t\t\t<td><code>TaskRuns, Pods<\/code><\/td>\n\t\t\t<td>Name of the <code>Pipeline<\/code> that the <code>PipelineRun<\/code> references.<\/td>\n\t\t<\/tr>\n\t\t<tr>\n\t\t\t<td><code>tekton.dev\/pipelineRun<\/code><\/td>\n\t\t\t<td><code>TaskRuns<\/code> that are created automatically during the execution of a <code>PipelineRun<\/code>.<\/td>\n\t\t\t<td><code>TaskRuns, Pods<\/code><\/td>\n\t\t\t<td>Name of the <code>PipelineRun<\/code> that triggered the creation of the <code>TaskRun<\/code>.<\/td>\n\t\t<\/tr>\n\t\t<tr>\n\t\t\t<td><code>tekton.dev\/task<\/code><\/td>\n\t\t\t<td><code>TaskRuns<\/code> that <a href=\"taskruns.md#specifying-the-target-task\">reference an existing <code>Task<\/code><\/a>.<\/td>\n\t\t\t<td><code>Pods<\/code><\/td>\n\t\t\t<td>Name of the <code>Task<\/code> that the <code>TaskRun<\/code> references.<\/td>\n\t\t<\/tr>\n\t\t<tr>\n\t\t\t<td><code>tekton.dev\/clusterTask<\/code><\/td>\n\t\t\t<td><code>TaskRuns<\/code> that reference an existing <code>ClusterTask<\/code>.<\/td>\n\t\t\t<td><code>Pods<\/code><\/td>\n\t\t\t<td>Name of the <code>ClusterTask<\/code> that the <code>TaskRun<\/code> references.<\/td>\n\t\t<\/tr>\n\t\t<tr>\n\t\t\t<td><code>tekton.dev\/taskRun<\/code><\/td>\n\t\t\t<td><code>Pods<\/code><\/td>\n\t\t\t<td>No propagation.<\/td>\n\t\t\t<td>Name of the <code>TaskRun<\/code> that created the <code>Pod<\/code>.<\/td>\n\t\t<\/tr>\n\t\t<tr>\n\t\t\t<td><code>tekton.dev\/memberOf<\/code><\/td>\n\t\t\t<td><code>TaskRuns<\/code> that are created automatically during the execution of a <code>PipelineRun<\/code>.<\/td>\n\t\t\t<td><code>TaskRuns, Pods<\/code><\/td>\n\t\t\t<td><code>tasks<\/code> or 
<code>finally<\/code> depending on the <code>PipelineTask<\/code>'s membership in the <code>Pipeline<\/code>.<\/td>\n\t\t<\/tr>\n\t\t<tr>\n\t\t\t<td><code>app.kubernetes.io\/instance<\/code>, <code>app.kubernetes.io\/component<\/code><\/td>\n\t\t\t<td><code>Pods<\/code>, <code>StatefulSets<\/code> (Affinity Assistant)<\/td>\n\t\t\t<td>No propagation.<\/td>\n\t\t\t<td><code>Pod<\/code> affinity values for <code>TaskRuns<\/code>.<\/td>\n\t\t<\/tr>\n\t<\/tbody>\n<\/table>\n\n## Usage examples\n\nBelow are some examples of using labels:\n\nThe following command finds all `Pods` created by a `PipelineRun` named `test-pipelinerun`:\n\n```shell\nkubectl get pods --all-namespaces -l tekton.dev\/pipelineRun=test-pipelinerun\n```\n\nThe following command finds all `TaskRuns` that reference a `Task` named `test-task`:\n\n```shell\nkubectl get taskruns --all-namespaces -l tekton.dev\/task=test-task\n```\n\nThe following command finds all `TaskRuns` that reference a `ClusterTask` named `test-clustertask`:\n\n```shell\nkubectl get taskruns --all-namespaces -l tekton.dev\/clusterTask=test-clustertask\n```\n\n## Annotations propagation\n\nAnnotations propagate among Tekton entities as follows (similar to Labels):\n\n- For `Pipelines` instantiated using a `PipelineRun`, annotations propagate\nautomatically from `Pipelines` to `PipelineRuns` to `TaskRuns`, and then to\nthe associated `Pods`. If an annotation is present in both `Pipeline` and\n`PipelineRun`, the annotation in `PipelineRun` takes precedence.\n\n- Annotations from `Tasks` referenced by `TaskRuns` within a `PipelineRun` propagate to the corresponding `TaskRuns`,\nand then to the associated `Pods`. 
As for `Pipeline` and `PipelineRun`, if an annotation is present in both `Task` and\n`TaskRun`, the annotation in `TaskRun` takes precedence.\n\n- For standalone `TaskRuns` (that is, ones not executing as part of a `Pipeline`), annotations\npropagate from the [referenced `Task`](taskruns.md#specifying-the-target-task), if one exists, to\nthe corresponding `TaskRun`, and then to the associated `Pod`. The same as above applies.","site":"tekton"}
{"questions":"tekton HTTP Resolver weight 311 HTTP Resolver","answers":"<!--\n---\n\nlinkTitle: \"HTTP Resolver\"\nweight: 311\n---\n-->\n\n# HTTP Resolver\n\nThis resolver responds to type `http`.\n\n## Parameters\n\n| Param Name                 | Description                                                                                                                                                                | Example Value                                                                                   |   |\n|----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|---|\n| `url`                      | The URL to fetch from                                                                                                                                                      | <https:\/\/raw.githubusercontent.com\/tektoncd-catalog\/git-clone\/main\/task\/git-clone\/git-clone.yaml> |   |\n| `http-username`            | An optional username when fetching a task with credentials (need to be used in conjunction with `http-password-secret`)                                                    | `git`                                                                                           |   |\n| `http-password-secret`     | An optional secret in the PipelineRun namespace with a reference to a password when fetching a task with credentials (need to be used in conjunction with `http-username`) | `http-password`                                                                                 |   |\n| `http-password-secret-key` | An optional key in the `http-password-secret` to be used when fetching a task with credentials                                                                             | Default: `password`              
                                                               |   |\n\nA valid URL must be provided. Only HTTP or HTTPS URLs are supported.\n\n## Requirements\n\n- A cluster running Tekton Pipeline v0.41.0 or later.\n- The [built-in remote resolvers installed](.\/install.md#installing-and-configuring-remote-task-and-pipeline-resolution).\n- The `enable-http-resolver` feature flag in the `resolvers-feature-flags` ConfigMap in the\n  `tekton-pipelines-resolvers` namespace set to `true`.\n- [Beta features](.\/additional-configs.md#beta-features) enabled.\n\n## Configuration\n\nThis resolver uses a `ConfigMap` for its settings. See\n[`..\/config\/resolvers\/http-resolver-config.yaml`](..\/config\/resolvers\/http-resolver-config.yaml)\nfor the name, namespace and defaults that the resolver ships with.\n\n### Options\n\n| Option Name                 | Description                                          | Example Values         |\n|-----------------------------|------------------------------------------------------|------------------------|\n| `fetch-timeout`              | The maximum time any fetching of URL resolution may take. **Note**: a global maximum timeout of 1 minute is currently enforced on _all_ resolution requests. 
| `1m`, `2s`, `700ms` |\n\n## Usage\n\n### Task Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: remote-task-reference\nspec:\n  taskRef:\n    resolver: http\n    params:\n    - name: url\n      value: https:\/\/raw.githubusercontent.com\/tektoncd-catalog\/git-clone\/main\/task\/git-clone\/git-clone.yaml\n```\n\n### Task Resolution with Basic Auth\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: remote-task-reference\nspec:\n  taskRef:\n    resolver: http\n    params:\n    - name: url\n      value: https:\/\/raw.githubusercontent.com\/owner\/private-repo\/main\/task\/task.yaml\n    - name: http-username\n      value: git\n    - name: http-password-secret\n      value: git-secret\n    - name: http-password-secret-key\n      value: git-token\n```\n\n### Pipeline Resolution\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: http-demo\nspec:\n  pipelineRef:\n    resolver: http\n    params:\n    - name: url\n      value: https:\/\/raw.githubusercontent.com\/tektoncd\/catalog\/main\/pipeline\/build-push-gke-deploy\/0.1\/build-push-gke-deploy.yaml\n```\n\n---\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).","site":"tekton"}
{"questions":"tekton weight 1660 TaskRun result attestations is currently an alpha experimental feature Currently all that is implemented is support for configuring Tekton to connect to SPIRE See TEP 0089 for details on the overall design and feature set TaskRun Result Attestation This is a work in progress SPIRE support is not yet functional","answers":"<!--\n---\nlinkTitle: \"TaskRun Result Attestation\"\nweight: 1660\n---\n-->\n\u26a0\ufe0f This is a work in progress: SPIRE support is not yet functional\n\nTaskRun result attestation is currently an alpha experimental feature. Currently, all that is implemented is support for configuring Tekton to connect to SPIRE. See TEP-0089 for details on the overall design and feature set.\n\nSince this is a large feature, it will be implemented in the following phases. This document will be updated as we implement new phases.\n1.  Add a client for SPIRE (done).\n2.  Add a configMap which initializes SPIRE (in progress).\n3.  Modify TaskRun to sign and verify TaskRun Results using SPIRE.\n4.  Modify Tekton Chains to verify the TaskRun Results.\n\n## Architecture Overview\n\nThis feature relies on a SPIRE installation. 
This is how it integrates into the architecture of Tekton:\n\n```\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510  Register TaskRun Workload Identity           \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502             \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25ba\u2502          \u2502\n\u2502  Tekton     \u2502                                               \u2502  SPIRE   \u2502\n\u2502  Controller \u2502\u25c4\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510                                  \u2502  Server  \u2502\n\u2502             \u2502            \u2502 Listen on TaskRun                \u2502          \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2518            \u2502                                  \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u25b2           \u2502     \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510     \u25b2\n \u2502           \u2502     \u2502           Tekton TaskRun              \u2502     \u2502\n \u2502           \u2502     \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518     \u2502\n \u2502  Configure\u2502                                          \u25b2        \u2502 Attest\n \u2502  Pod &    \u2502                             
             \u2502        \u2502   +\n \u2502  check    \u2502                                          \u2502        \u2502 Request\n \u2502  ready    \u2502     \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510                        \u2502        \u2502 SVIDs\n \u2502           \u2514\u2500\u2500\u2500\u2500\u25ba\u2502  TaskRun  \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518        \u2502\n \u2502                 \u2502  Pod      \u2502                                 \u2502\n \u2502                 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518     TaskRun Entrypointer        \u2502\n \u2502                   \u25b2               Sign Result and update      \u2502\n \u2502 Get               \u2502 Get SVID      TaskRun status with         \u2502\n \u2502 SPIRE             \u2502               signature + cert            \u2502\n \u2502 server            \u2502                                           \u2502\n \u2502 Credentials       \u2502                                           \u25bc\n\u250c\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502                                                                          \u2502\n\u2502   SPIRE Agent    ( Runs as   )                                           \u2502\n\u2502   + CSI Driver   ( Daemonset )                                           \u2502\n\u2502                                                                          
\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\nInitial Setup:\n1. As part of the SPIRE deployment, the SPIRE server attests the agents running on each node in the cluster.\n1. The Tekton Controller is configured to have workload identity entry creation permissions to the SPIRE server.\n1. As part of the Tekton Controller operations, the Tekton Controller will retrieve an identity that it can use to talk to the SPIRE server to register TaskRun workloads.\n\nWhen a TaskRun is created:\n1. The Tekton Controller creates a TaskRun pod and its associated resources\n1. When the TaskRun pod is ready, the Tekton Controller registers an identity with the information of the pod to the SPIRE server. This will tell the SPIRE server the identity of the TaskRun to use as well as how to attest the workload\/pod.\n1. After the TaskRun steps complete, as part of the entrypointer code, it requests an SVID from SPIFFE workload API (via the SPIRE agent socket)\n1. The SPIRE agent will attest the workload and request an SVID.\n1. The entrypointer receives an x509 SVID, containing the x509 certificate and associated private key. \n1. The entrypointer signs the results of the TaskRun and emits the signatures and x509 certificate to the TaskRun results for later verification.\n\n## Enabling TaskRun result attestations\n\nTo enable TaskRun attestations:\n1. Make sure `enforce-nonfalsifiability` is set to `\"spire\"`. See [`additional-configs.md`](.\/additional-configs.md#customizing-the-pipelines-controller-behavior) for details\n1. 
Create a SPIRE deployment containing a SPIRE server, SPIRE agents and the SPIRE CSI driver. For convenience, [this sample single cluster deployment](https:\/\/github.com\/spiffe\/spiffe-csi\/tree\/main\/example\/config) can be used.\n1. Register the SPIRE workload entry for Tekton with the \"Admin\" flag, which will allow the Tekton controller to communicate with the SPIRE server to manage the TaskRun identities dynamically.\n    ```\n\n    # This example is assuming use of the above SPIRE deployment\n    # Example where trust domain is \"example.org\" and cluster name is \"example-cluster\"\n    \n    # Register a node alias for all nodes on which the Tekton Controller may reside\n    kubectl -n spire exec -it \\\n        deployment\/spire-server -- \\\n        \/opt\/spire\/bin\/spire-server entry create \\\n            -node \\\n            -spiffeID spiffe:\/\/example.org\/allnodes \\\n            -selector k8s_psat:cluster:example-cluster\n    \n    # Register the tekton controller workload to have access to creating entries in the SPIRE server\n    kubectl -n spire exec -it \\\n        deployment\/spire-server -- \\\n        \/opt\/spire\/bin\/spire-server entry create \\\n            -admin \\\n            -spiffeID spiffe:\/\/example.org\/tekton\/controller \\\n            -parentID spiffe:\/\/example.org\/allnodes \\\n            -selector k8s:ns:tekton-pipelines \\\n            -selector k8s:pod-label:app:tekton-pipelines-controller \\\n            -selector k8s:sa:tekton-pipelines-controller\n    \n    ```\n    \n1. Modify the controller (`config\/controller.yaml`) to provide access to the SPIRE agent socket.\n    ```yaml\n    # Add the following to the volumeMounts of the \"tekton-pipelines-controller\" container\n    - name: spiffe-workload-api\n      mountPath: \/spiffe-workload-api\n      readOnly: true\n    \n    # Add the following to the volumes of the controller pod\n    - name: spiffe-workload-api\n      csi:\n        driver: \"csi.spiffe.io\"\n    ```\n1. (Optional) Modify the configmap (`config\/config-spire.yaml`) to configure non-default SPIRE options.\n    ```yaml\n    apiVersion: v1\n    kind: ConfigMap\n    metadata:\n      name: config-spire\n      namespace: tekton-pipelines\n      labels:\n        app.kubernetes.io\/instance: default\n        app.kubernetes.io\/part-of: tekton-pipelines\n    data:\n      # More explanation about the fields is at the SPIRE Server Configuration file\n      # https:\/\/spiffe.io\/docs\/latest\/deploying\/spire_server\/#server-configuration-file\n      # spire-trust-domain specifies the SPIRE trust domain to use.\n      spire-trust-domain: \"example.org\"\n      # spire-socket-path specifies the SPIRE agent socket for SPIFFE workload API.\n      spire-socket-path: \"unix:\/\/\/spiffe-workload-api\/spire-agent.sock\"\n      # spire-server-addr specifies the SPIRE server address for workload\/node registration.\n      spire-server-addr: \"spire-server.spire.svc.cluster.local:8081\"\n      # spire-node-alias-prefix specifies the SPIRE node alias prefix to use.\n      spire-node-alias-prefix: \"\/tekton-node\/\"\n    ```","site":"tekton"}
{"questions":"tekton StepActions StepActions weight 201","answers":"<!--\n---\nlinkTitle: \"StepActions\"\nweight: 201\n---\n-->\n\n# StepActions\n\n- [Overview](#overview)\n- [Configuring a StepAction](#configuring-a-stepaction)\n  - [Declaring Parameters](#declaring-parameters)\n    - [Passing Params to StepAction](#passing-params-to-stepaction)\n  - [Emitting Results](#emitting-results)\n    - [Fetching Emitted Results from StepActions](#fetching-emitted-results-from-stepactions)\n  - [Declaring WorkingDir](#declaring-workingdir)\n  - [Declaring SecurityContext](#declaring-securitycontext)\n  - [Declaring VolumeMounts](#declaring-volumemounts)\n  - [Referencing a StepAction](#referencing-a-stepaction)\n    - [Specifying Remote StepActions](#specifying-remote-stepactions)\n  - [Controlling Step Execution with when Expressions](#controlling-step-execution-with-when-expressions)\n\n## Overview\n> :seedling: **`StepActions` is an [beta](additional-configs.md#beta-features) feature.**\n> The `enable-step-actions` feature flag must be set to `\"true\"` to specify a `StepAction` in a `Step`.\n\nA `StepAction` is the reusable and scriptable unit of work that is performed by a `Step`.\n\nA `Step` is not reusable, the work it performs is reusable and referenceable. `Steps` are in-lined in the `Task` definition and either perform work directly or perform a `StepAction`. A `StepAction` cannot be run stand-alone (unlike a `TaskRun` or a `PipelineRun`). It has to be referenced by a `Step`. Another way to think about this is that a `Step` is not composed of `StepActions` (unlike a `Task` being composed of `Steps` and `Sidecars`). Instead, a `Step` is an actionable component, meaning that it has the ability to refer to a `StepAction`. 
The author of the `Step` must be able to compose a `Step` using a `StepAction` and provide all the necessary context (or orchestration) to it.\n\n\n## Configuring a `StepAction`\n\nA `StepAction` definition supports the following fields:\n\n- Required\n  - [`apiVersion`][kubernetes-overview] - Specifies the API version. For example,\n    `tekton.dev\/v1beta1`.\n  - [`kind`][kubernetes-overview] - Identifies this resource object as a `StepAction` object.\n  - [`metadata`][kubernetes-overview] - Specifies metadata that uniquely identifies the\n    `StepAction` resource object. For example, a `name`.\n  - [`spec`][kubernetes-overview] - Specifies the configuration information for this `StepAction` resource object.\n  - `image` - Specifies the image to use for the `Step`.\n    - The container image must abide by the [container contract](.\/container-contract.md).\n- Optional\n  - `command`\n    - cannot be used at the same time as using `script`.\n  - `args`\n  - `script`\n    - cannot be used at the same time as using `command`.\n  - `env`\n  - [`params`](#declaring-parameters)\n  - [`results`](#emitting-results)\n  - [`workingDir`](#declaring-workingdir)\n  - [`securityContext`](#declaring-securitycontext)\n  - [`volumeMounts`](#declaring-volumemounts)\n\n[kubernetes-overview]:\n  https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/kubernetes-objects\/#required-fields\n\nThe example below demonstrates the use of most of the above-mentioned fields:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: StepAction\nmetadata:\n  name: example-stepaction-name\nspec:\n  env:\n    - name: HOME\n      value: \/home\n  image: ubuntu\n  command: [\"ls\"]\n  args: [\"-lh\"]\n```\n\n### Declaring Parameters\n\nLike with `Tasks`, a `StepAction` must declare all the parameters that it uses. 
The same rules for `Parameter` [name](.\/tasks.md\/#parameter-name), [type](.\/tasks.md\/#parameter-type) (including [object](.\/tasks.md\/#object-type), [array](.\/tasks.md\/#array-type) and [string](.\/tasks.md\/#string-type)) apply as when declaring them in `Tasks`. A `StepAction` can also provide a [default value](.\/tasks.md\/#default-value) to a `Parameter`.\n\n`Parameters` are passed to the `StepAction` from its corresponding `Step` referencing it.\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: StepAction\nmetadata:\n  name: stepaction-using-params\nspec:\n  params:\n    - name: gitrepo\n      type: object\n      properties:\n        url:\n          type: string\n        commit:\n          type: string\n    - name: flags\n      type: array\n    - name: outputPath\n      type: string\n      default: \"\/workspace\"\n  image: some-git-image\n  args: [\n    \"-url=$(params.gitrepo.url)\",\n    \"-revision=$(params.gitrepo.commit)\",\n    \"-output=$(params.outputPath)\",\n    \"$(params.flags[*])\",\n  ]\n```\n\n> :seedling: **`params` cannot be directly used in a `script` in `StepActions`.**\n> Directly substituting `params` in `scripts` makes the workload prone to shell attacks. Therefore, we do not allow direct usage of `params` in `scripts` in `StepActions`. Instead, rely on passing `params` to `env` variables and reference them in `scripts`. We cannot do the same for `inlined-steps` because it breaks `v1 API` compatibility for existing users.\n\n#### Passing Params to StepAction\n\nA `StepAction` may require [params](#declaring-parameters). 
In this case, a `Task` needs to ensure that the `StepAction` has access to all the required `params`.\nWhen referencing a `StepAction`, a `Step` can also provide it with `params`, just like how a `TaskRun` provides params to the underlying `Task`.\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: Task\nmetadata:\n  name: step-action\nspec:\n  params:\n    - name: param-for-step-action\n      description: \"this is a param that the step action needs.\"\n  steps:\n    - name: action-runner\n      ref:\n        name: step-action\n      params:\n        - name: step-action-param\n          value: $(params.param-for-step-action)\n```\n\n**Note:** If a `Step` declares `params` for an `inlined Step`, it will lead to a validation error. This is because an `inlined Step` gets its `params` from the `TaskRun`.\n\n### Emitting Results\n\nA `StepAction` also declares the results that it will emit.\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: StepAction\nmetadata:\n  name: stepaction-declaring-results\nspec:\n  results:\n    - name: current-date-unix-timestamp\n      description: The current date in unix timestamp format\n    - name: current-date-human-readable\n      description: The current date in human readable format\n  image: bash:latest\n  script: |\n    #!\/usr\/bin\/env bash\n    date +%s | tee $(results.current-date-unix-timestamp.path)\n    date | tee $(results.current-date-human-readable.path)\n```\n\nIt is possible that a `StepAction` with `Results` is used multiple times in the same `Task`, or that multiple `StepActions` in the same `Task` produce `Results` with the same name. Resolving the `Result` names becomes critical; otherwise, there could be unexpected outcomes. The `Task` needs to be able to resolve these `Result` name clashes by mapping them to different `Result` names. 
For this reason, we introduce the capability to store results on a `Step` level.\n\n`StepActions` can also emit `Results` to `$(step.results.<resultName>.path)`.\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: StepAction\nmetadata:\n  name: stepaction-declaring-results\nspec:\n  results:\n    - name: current-date-unix-timestamp\n      description: The current date in unix timestamp format\n    - name: current-date-human-readable\n      description: The current date in human readable format\n  image: bash:latest\n  script: |\n    #!\/usr\/bin\/env bash\n    date +%s | tee $(step.results.current-date-unix-timestamp.path)\n    date | tee $(step.results.current-date-human-readable.path)\n```\n\n`Results` from the above `StepAction` can be [fetched by the `Task`](#fetching-emitted-results-from-stepactions) or in [another `Step\/StepAction`](#passing-results-between-steps) via `$(steps.<stepName>.results.<resultName>)`.\n\n#### Fetching Emitted Results from StepActions\n\nA `Task` can fetch `Results` produced by the `StepActions` (i.e. only `Results` emitted to `$(step.results.<resultName>.path)`, NOT `$(results.<resultName>.path)`) using variable replacement syntax. We introduce a field to [`Task Results`](.\/tasks.md#emitting-results) called `Value` whose value can be set to the variable `$(steps.<stepName>.results.<resultName>)`.\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: Task\nmetadata:\n  name: task-fetching-results\nspec:\n  results:\n    - name: git-url\n      description: \"url of git repo\"\n      value: $(steps.git-clone.results.url)\n    - name: registry-url\n      description: \"url of docker registry\"\n      value: $(steps.kaniko.results.url)\n  steps:\n    - name: git-clone\n      ref:\n        name: clone-step-action\n    - name: kaniko\n      ref:\n        name: kaniko-step-action\n```\n\n`Results` emitted to `$(step.results.<resultName>.path)` are not automatically available as `TaskRun Results`. 
The `Task` must explicitly fetch them from the underlying `Step` referencing `StepActions`.\n\nFor example, let's assume that in the previous example, the \"kaniko\" `StepAction` also produced a `Result` named \"digest\". In that case, the `Task` should also fetch the \"digest\" from the \"kaniko\" `Step`.\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: Task\nmetadata:\n  name: task-fetching-results\nspec:\n  results:\n    - name: git-url\n      description: \"url of git repo\"\n      value: $(steps.git-clone.results.url)\n    - name: registry-url\n      description: \"url of docker registry\"\n      value: $(steps.kaniko.results.url)\n    - name: digest\n      description: \"digest of the image\"\n      value: $(steps.kaniko.results.digest)\n  steps:\n    - name: git-clone\n      ref:\n        name: clone-step-action\n    - name: kaniko\n      ref:\n        name: kaniko-step-action\n```\n\n#### Passing Results between Steps\n\n`StepResults` (i.e. results written to `$(step.results.<result-name>.path)`, NOT `$(results.<result-name>.path)`) can be shared with following steps via the replacement variable `$(steps.<step-name>.results.<result-name>)`.\n\nPipeline supports two new types of results and parameters: array `[]string` and object `map[string]string`.\n\n| Result Type | Parameter Type | Specification                                    | `enable-api-fields` |\n|-------------|----------------|--------------------------------------------------|---------------------|\n| string      | string         | `$(steps.<step-name>.results.<result-name>)`     | stable              |\n| array       | array          | `$(steps.<step-name>.results.<result-name>[*])`  | alpha or beta       |\n| array       | string         | `$(steps.<step-name>.results.<result-name>[i])`  | alpha or beta       |\n| object      | string         | `$(steps.<step-name>.results.<result-name>.key)` | alpha or beta       |\n\n**Note:** Whole Array `Results` (using star notation) cannot be referenced in `script` and `env`.\n\nThe example below shows how you could pass `step results` from a `step` into the following steps, in this case, into a `StepAction`.\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  name: step-action-run\nspec:\n  taskSpec:\n    steps:\n      - name: inline-step\n        results:\n          - name: result1\n            type: array\n          - name: result2\n            type: string\n          - name: result3\n            type: object\n            properties:\n              IMAGE_URL:\n                type: string\n              IMAGE_DIGEST:\n                type: string\n        image: alpine\n        script: |\n          echo -n \"[\\\"image1\\\", \\\"image2\\\", \\\"image3\\\"]\" | tee $(step.results.result1.path)\n          echo -n \"foo\" | tee $(step.results.result2.path)\n          echo -n \"{\\\"IMAGE_URL\\\":\\\"ar.com\\\", \\\"IMAGE_DIGEST\\\":\\\"sha234\\\"}\" | tee $(step.results.result3.path)\n      - name: action-runner\n        ref:\n          name: step-action\n        params:\n          - name: param1\n            value: $(steps.inline-step.results.result1[*])\n          - name: param2\n            value: $(steps.inline-step.results.result2)\n          - name: param3\n            value: $(steps.inline-step.results.result3[*])\n```\n\n**Note:** `Step Results` can only be referenced in a `Step's\/StepAction's` `env`, `command` and `args`. Referencing them in any other field will throw an error.\n\n### Declaring WorkingDir\n\nYou can declare `workingDir` in a `StepAction`:\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: StepAction\nmetadata:\n  name: example-stepaction-name\nspec:\n  image: gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/git-init:latest\n  workingDir: \/workspace\n  script: |\n    # clone the repo\n    ...\n```\n\nThe `Task` using the `StepAction` has more context about how the `Steps` have been orchestrated. 
As such, the `Task` should be able to update the `workingDir` of the `StepAction` so that the `StepAction` is executed from the correct location.\nThe `StepAction` can parametrize the `workingDir` and work relative to it. This way, the `Task` does not really need control over the `workingDir`; it just needs to pass the path as a parameter.\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: StepAction\nmetadata:\n  name: example-stepaction-name\nspec:\n  image: ubuntu\n  params:\n    - name: source\n      description: \"The path to the source code.\"\n  workingDir: $(params.source)\n```\n\n\n### Declaring SecurityContext\n\nYou can declare `securityContext` in a `StepAction`:\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: StepAction\nmetadata:\n  name: example-stepaction-name\nspec:\n  image: gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/git-init:latest\n  securityContext:\n    runAsUser: 0\n  script: |\n    # clone the repo\n    ...\n```\n\nNote that the `securityContext` from `StepAction` will overwrite the `securityContext` from [`TaskRun`](.\/taskruns.md\/#example-of-running-step-containers-as-a-non-root-user).\n\n### Declaring VolumeMounts\n\nYou can define `VolumeMounts` in `StepActions`. The `name` of the `VolumeMount` MUST be a single reference to a string `Parameter`. For example, `$(params.registryConfig)` is valid while `$(params.registryConfig)-foo` and `\"unparametrized-name\"` are invalid. This is to ensure reusability of `StepActions` such that `Task` authors have control of which `Volumes` they bind to the `VolumeMounts`.\n\n```yaml\napiVersion: tekton.dev\/v1alpha1\nkind: StepAction\nmetadata:\n  name: my-step\nspec:\n  params:\n    - name: registryConfig\n    - name: otherConfig\n  volumeMounts:\n    - name: $(params.registryConfig)\n      mountPath: \/registry-config\n    - name: $(params.otherConfig)\n      mountPath: \/other-config\n  image: ...\n  script: ...\n```\n\n\n### Referencing a StepAction\n\n`StepActions` can be referenced from the `Step` using the `ref` field, as follows:\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  name: step-action-run\nspec:\n  taskSpec:\n    steps:\n      - name: action-runner\n        ref:\n          name: step-action\n```\n\nUpon resolution and execution of the `TaskRun`, the `Status` will look something like:\n\n```yaml\nstatus:\n  completionTime: \"2023-10-24T20:28:42Z\"\n  conditions:\n  - lastTransitionTime: \"2023-10-24T20:28:42Z\"\n    message: All Steps have completed executing\n    reason: Succeeded\n    status: \"True\"\n    type: Succeeded\n  podName: step-action-run-pod\n  provenance:\n    featureFlags:\n      EnableStepActions: true\n      ...\n  startTime: \"2023-10-24T20:28:32Z\"\n  steps:\n  - container: step-action-runner\n    imageID: docker.io\/library\/alpine@sha256:eece025e432126ce23f223450a0326fbebde39cdf496a85d8c016293fc851978\n    name: action-runner\n    terminationReason: Completed\n    terminated:\n      containerID: containerd:\/\/46a836588967202c05b594696077b147a0eb0621976534765478925bb7ce57f6\n      exitCode: 0\n      finishedAt: \"2023-10-24T20:28:42Z\"\n      reason: Completed\n      startedAt: \"2023-10-24T20:28:42Z\"\n  taskSpec:\n    steps:\n    - computeResources: {}\n      image: alpine\n      name: action-runner\n```\n\nIf a `Step` is referencing a `StepAction`, it cannot contain the fields supported by `StepActions`. 
This includes:\n- `image`\n- `command`\n- `args`\n- `script`\n- `env`\n- `volumeMounts`\n\nUsing any of the above fields and referencing a `StepAction` in the same `Step` is not allowed and will cause a validation error.\n\n```yaml\n# This is not allowed and will result in a validation error\n# because the image is expected to be provided by the StepAction\n# and not inlined.\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  name: step-action-run\nspec:\n  taskSpec:\n    steps:\n      - name: action-runner\n        ref:\n          name: step-action\n        image: ubuntu\n```\nExecuting the above `TaskRun` will result in an error that looks like:\n\n```\nError from server (BadRequest): error when creating \"STDIN\": admission webhook \"validation.webhook.pipeline.tekton.dev\" denied the request: validation failed: image cannot be used with Ref: spec.taskSpec.steps[0].image\n```\n\nWhen a `Step` is referencing a `StepAction`, it can contain the following fields:\n- `computeResources`\n- `workspaces` (Isolated workspaces)\n- `volumeDevices`\n- `imagePullPolicy`\n- `onError`\n- `stdoutConfig`\n- `stderrConfig`\n- `securityContext`\n- `envFrom`\n- `timeout`\n- `ref`\n- `params`\n\nUsing any of the above fields and referencing a `StepAction` is allowed and will not cause an error. 
For example, the `TaskRun` below will execute without any errors:\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  name: step-action-run\nspec:\n  taskSpec:\n    steps:\n      - name: action-runner\n        ref:\n          name: step-action\n        params:\n          - name: step-action-param\n            value: hello\n        computeResources:\n          requests:\n            memory: 1Gi\n            cpu: 500m\n        timeout: 1h\n        onError: continue\n```\n\n#### Specifying Remote StepActions\n\nA `ref` field may specify a `StepAction` in a remote location such as git.\nSupport for specific types of remote will depend on the `Resolvers` your\ncluster's operator has installed. For more information including a tutorial, please check [resolution docs](resolution.md). The below example demonstrates referencing a `StepAction` in git:\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  generateName: step-action-run-\nspec:\n  taskSpec:\n    steps:\n      - name: action-runner\n        ref:\n          resolver: git\n          params:\n            - name: url\n              value: https:\/\/github.com\/repo\/repo.git\n            - name: revision\n              value: main\n            - name: pathInRepo\n              value: remote_step.yaml\n```\n\nThe default resolver type can be configured by the `default-resolver-type` field in the `config-defaults` ConfigMap (`alpha` feature). See [additional-configs.md](.\/additional-configs.md) for details.\n\n### Controlling Step Execution with when Expressions\n\nYou can define `when` in a `step` to control its execution. 
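For instance, a `Step` can be gated on a `Parameter` value. The sketch below is illustrative only (the `deploy-env` param and step name are hypothetical, assuming a `Task` that declares such a param):

```yaml
# Illustrative sketch: run the step only when the (hypothetical) deploy-env param is "prod".
steps:
  - name: deploy-only-in-prod   # hypothetical step name
    image: alpine
    script: |
      echo "deploying..."
    when:
      - input: "$(params.deploy-env)"
        operator: in
        values: ["prod"]
```

The full set of `when` components is described next.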
\n\nThe components of `when` expressions are `input`, `operator`, `values`, `cel`:\n\n| Component  | Description | Syntax |\n|------------|-------------|--------|\n| `input`    | Input for the `when` expression, defaults to an empty string if not provided. | * Static values e.g. `\"ubuntu\"`<br\/> * Variables (parameters or results) e.g. `\"$(params.image)\"` or `\"$(tasks.task1.results.image)\"` or `\"$(tasks.task1.results.array-results[1])\"` |\n| `operator` | `operator` represents an `input`'s relationship to a set of `values`; a valid `operator` must be provided. | `in` or `notin` |\n| `values`   | An array of string values; the `values` array must be provided and has to be non-empty. | * An array param e.g. `[\"$(params.images[*])\"]`<br\/> * An array result of a task e.g. `[\"$(tasks.task1.results.array-results[*])\"]`<br\/> * An array result of a step e.g. `[\"$(steps.step1.results.array-results[*])\"]`<br\/> * Static values e.g. `\"ubuntu\"`<br\/> * Variables (parameters or results) or a Workspace's `bound` state e.g. `[\"$(params.image)\"]` or `[\"$(steps.step1.results.image)\"]` or `[\"$(tasks.task1.results.array-results[1])\"]` or `[\"$(steps.step1.results.array-results[1])\"]` |\n| `cel`      | The Common Expression Language (CEL) implements common semantics for expression evaluation, enabling different applications to more easily interoperate. This is an `alpha` feature; `enable-cel-in-whenexpression` must be set to `\"true\"` to use it. | [cel-syntax](https:\/\/github.com\/google\/cel-spec\/blob\/master\/doc\/langdef.md#syntax) |\n\nThe below example shows how to use `when` expressions to control step execution:\n\n```yaml\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: my-pvc-2\nspec:\n  resources:\n    requests:\n      storage: 5Gi\n  volumeMode: Filesystem\n  accessModes:\n    - ReadWriteOnce\n---\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  generateName: step-when-example-\nspec:\n  workspaces:\n    - name: custom\n      persistentVolumeClaim:\n        claimName: my-pvc-2\n  taskSpec:\n    description: |\n      A simple task that shows how to use `when` to determine if a step should be executed\n    steps:\n      - name: should-execute\n        image: bash:latest\n        script: |\n          #!\/usr\/bin\/env bash\n          echo \"executed...\"\n        when:\n          - input: \"$(workspaces.custom.bound)\"\n            operator: in\n            values: [ \"true\" ]\n      - name: should-skip\n        image: bash:latest\n        script: |\n          #!\/usr\/bin\/env bash\n          echo skipskipskip\n        when:\n          - input: \"$(workspaces.custom2.bound)\"\n            operator: in\n            values: [ \"true\" ]\n      - name: should-continue\n        image: bash:latest\n        script: |\n          #!\/usr\/bin\/env bash\n          echo blabalbaba\n      - name: produce-step\n        image: alpine\n        results:\n          - name: result2\n            type: string\n        script: |\n          echo -n \"foo\" | tee $(step.results.result2.path)\n      - name: run-based-on-step-results\n        image: alpine\n        script: |\n          echo \"wooooooo\"\n        when:\n          - input: \"$(steps.produce-step.results.result2)\"\n            operator: in\n            values: [ \"bar\" ]\n    workspaces:\n      - name: custom\n```\n\nThe `StepState` for a skipped step looks something like the below:\n```yaml\n      {\n        \"container\": \"step-run-based-on-step-results\",\n        \"imageID\": \"docker.io\/library\/alpine@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b\",\n        \"name\": \"run-based-on-step-results\",\n        \"terminated\": {\n          \"containerID\": \"containerd:\/\/bf81162e79cf66a2bbc03e3654942d3464db06ff368c0be263a8a70f363a899b\",\n          \"exitCode\": 0,\n          \"finishedAt\": \"2024-03-26T03:57:47Z\",\n          \"reason\": \"Completed\",\n          \"startedAt\": \"2024-03-26T03:57:47Z\"\n        },\n        \"terminationReason\": \"Skipped\"\n      }\n```\nWhere `terminated.exitCode` is `0` and `terminationReason` is `Skipped`, to indicate the Step exited successfully and was skipped.
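The table above also lists a `cel` component (alpha). With `enable-cel-in-whenexpression` set to `"true"`, the same kind of gating can be written as a CEL expression instead of `input`/`operator`/`values`. A minimal sketch (step and param names are hypothetical):

```yaml
# Sketch only: requires enable-cel-in-whenexpression: "true" (alpha feature),
# and a Task/TaskRun that declares a "branch" param (hypothetical).
steps:
  - name: cel-gated-step        # hypothetical step name
    image: alpine
    script: |
      echo "ran because the CEL expression evaluated to true"
    when:
      - cel: "'$(params.branch)' == 'main'"
```

A skipped CEL-gated step surfaces in the `StepState` the same way as the `in`/`notin` examples above.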
","site":"tekton","answers_cleaned":"         linkTitle   StepActions  weight  201            StepActions     Overview   overview     Configuring a StepAction   configuring a stepaction       Declaring Parameters   declaring parameters         Passing Params to StepAction   passing params to stepaction       Emitting Results   emitting results         Fetching Emitted Results from StepActions   fetching emitted results from stepactions       Declaring WorkingDir   declaring workingdir       Declaring SecurityContext   declaring securitycontext       Declaring VolumeMounts   declaring volumemounts       Referencing a StepAction   referencing a stepaction         Specifying Remote StepActions   specifying remote stepactions       Controlling Step Execution with when Expressions   controlling step execution with when expressions      Overview    seedling     StepActions  is an  beta  additional configs md beta features  feature      The  enable step actions  feature flag must be set to   true   to specify a  StepAction  in a  Step    A  StepAction  is the reusable and scriptable unit of work that is performed by a  Step    A  Step  is not reusable  the work it performs is reusable and referenceable   Steps  are in lined in the  Task  definition and either perform work directly or perform a  StepAction   A  StepAction  cannot be run stand alone  unlike a  TaskRun  or a  PipelineRun    It has to be referenced by a  Step   Another way to think about this is that a  Step  is not composed of  StepActions   unlike a  Task  being composed of  Steps  and  Sidecars    Instead  a  Step  is an actionable component  meaning that it has the ability to refer to a  StepAction   The author of the  StepAction  must be able to compose a  Step  using a  StepAction  and provide all the necessary context  or orchestration  to it       Configuring a  StepAction   A  StepAction  definition supports the following fields     Required       apiVersion   kubernetes overview    Specifies the API 
version. For example, `tekton.dev/v1alpha1`.
- [`kind`][kubernetes-overview] - Identifies this resource object as a `StepAction` object.
- [`metadata`][kubernetes-overview] - Specifies metadata that uniquely identifies the `StepAction` resource object. For example, a `name`.
- [`spec`][kubernetes-overview] - Specifies the configuration information for this `StepAction` resource object.
  - `image` - Specifies the image to use for the `Step`. The container image must abide by the [container contract](./container-contract.md).
- Optional:
  - `command` - cannot be used at the same time as using `script`.
  - `args`
  - `script` - cannot be used at the same time as using `command`.
  - `env`
  - `params` - [declaring parameters](#declaring-parameters)
  - `results` - [emitting results](#emitting-results)
  - `workingDir` - [declaring workingdir](#declaring-workingdir)
  - `securityContext` - [declaring securitycontext](#declaring-securitycontext)
  - `volumeMounts` - [declaring volumemounts](#declaring-volumemounts)

[kubernetes-overview]: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields

The example below demonstrates the use of most of the above-mentioned fields:

```yaml
apiVersion: tekton.dev/v1beta1
kind: StepAction
metadata:
  name: example-stepaction-name
spec:
  env:
    - name: HOME
      value: /home
  image: ubuntu
  command: ["ls"]
  args: ["-lh"]
```

## Declaring Parameters

Like with `Tasks`, a `StepAction` must declare all the parameters that it uses. The same rules for [`Parameter` name](./tasks.md#parameter-name) and [type](./tasks.md#parameter-type) (including [`object`](./tasks.md#object-type), [`array`](./tasks.md#array-type) and [`string`](./tasks.md#string-type)) apply as when declaring them in `Tasks`. A `StepAction` can also provide a [default value](./tasks.md#default-value) to a `Parameter`.

`Parameters` are passed to the `StepAction` from its corresponding `Step` referencing it.

```yaml
apiVersion: tekton.dev/v1beta1
kind: StepAction
metadata:
  name: stepaction-using-params
spec:
  params:
    - name: gitrepo
      type: object
      properties:
        url:
          type: string
        commit:
          type: string
    - name: flags
      type: array
    - name: outputPath
      type: string
      default: "/workspace"
  image: some-git-image
  args:
    - "--url=$(params.gitrepo.url)"
    - "--revision=$(params.gitrepo.commit)"
    - "--output=$(params.outputPath)"
    - "$(params.flags[*])"
```

> :seedling: **`params` cannot be directly used in a `script` in `StepActions`.**
> Directly substituting `params` in `scripts` makes the workload prone to shell attacks. Therefore, we do not allow direct usage of `params` in `scripts` in `StepActions`. Instead, rely on passing `params` to `env` variables and reference them in `scripts`. We cannot do the same for `inlined steps` because it breaks `v1 API` compatibility for existing users.

### Passing Params to StepAction

A `StepAction` may require [params](#declaring-parameters). In this case, a `Task` needs to ensure that the `StepAction` has access to all the required `params`. When referencing a `StepAction`, a `Step` can also provide it with `params`, just like how a `TaskRun` provides params to the underlying `Task`.

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: step-action
spec:
  params:
    - name: param-for-step-action
      description: "this is a param that the step action needs"
  steps:
    - name: action-runner
      ref:
        name: step-action
      params:
        - name: step-action-param
          value: $(params.param-for-step-action)
```

**Note:** If a `Step` declares `params` for an `inlined Step`, it will also lead to a validation error. This is because an `inlined Step` gets its `params` from the `TaskRun`.

## Emitting Results

A `StepAction` also declares the results that it will emit.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
  name: stepaction-declaring-results
spec:
  results:
    - name: current-date-unix-timestamp
      description: The current date in unix timestamp format
    - name: current-date-human-readable
      description: The current date in human readable format
  image: bash:latest
  script: |
    #!/usr/bin/env bash
    date +%s | tee $(results.current-date-unix-timestamp.path)
    date | tee $(results.current-date-human-readable.path)
```

It is possible that a `StepAction` with `Results` is used multiple times in the same `Task`, or that multiple `StepActions` in the same `Task` produce `Results` with the same name. Resolving the `Result` names becomes critical, otherwise there could be unexpected outcomes. The `Task` needs to be able to resolve these `Result` name clashes by mapping them to different `Result` names. For this reason, we introduce the capability to store results on a `Step` level.

`StepActions` can also emit `Results` to `$(step.results.<resultName>.path)`.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
  name: stepaction-declaring-results
spec:
  results:
    - name: current-date-unix-timestamp
      description: The current date in unix timestamp format
    - name: current-date-human-readable
      description: The current date in human readable format
  image: bash:latest
  script: |
    #!/usr/bin/env bash
    date +%s | tee $(step.results.current-date-unix-timestamp.path)
    date | tee $(step.results.current-date-human-readable.path)
```

`Results` from the above `StepAction` can be [fetched by the `Task`](#fetching-emitted-results-from-stepactions) or in [another `Step`/`StepAction`](#passing-results-between-steps) via `$(steps.<stepName>.results.<resultName>)`.

### Fetching Emitted Results from StepActions

A `Task` can fetch `Results` produced by `StepActions` (i.e. only `Results` emitted to `$(step.results.<resultName>.path)`, NOT `$(results.<resultName>.path)`) using variable replacement syntax. We introduce a field to [`Task Results`](./tasks.md#emitting-results) called `Value` whose value can be set to the variable `$(steps.<stepName>.results.<resultName>)`.

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: task-fetching-results
spec:
  results:
    - name: git-url
      description: "url of git repo"
      value: $(steps.git-clone.results.url)
    - name: registry-url
      description: "url of docker registry"
      value: $(steps.kaniko.results.url)
  steps:
    - name: git-clone
      ref:
        name: clone-step-action
    - name: kaniko
      ref:
        name: kaniko-step-action
```

`Results` emitted to `$(step.results.<resultName>.path)` are not automatically available as `TaskRun Results`. The `Task` must explicitly fetch them from the underlying `Steps` referencing `StepActions`.

For example, let's assume that in the previous example, the `kaniko` `StepAction` also produced a `Result` named `digest`. In that case, the `Task` should also fetch the `digest` from the `kaniko` `Step`.

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: task-fetching-results
spec:
  results:
    - name: git-url
      description: "url of git repo"
      value: $(steps.git-clone.results.url)
    - name: registry-url
      description: "url of docker registry"
      value: $(steps.kaniko.results.url)
    - name: digest
      description: "digest of the image"
      value: $(steps.kaniko.results.digest)
  steps:
    - name: git-clone
      ref:
        name: clone-step-action
    - name: kaniko
      ref:
        name: kaniko-step-action
```

### Passing Results between Steps

`StepResults` (i.e. results written to `$(step.results.<result-name>.path)`, NOT `$(results.<result-name>.path)`) can be shared with following steps via the replacement variable `$(steps.<step-name>.results.<result-name>)`.

Pipeline supports two new types of results and parameters: array `[]string` and object `map[string]string`.

| Result Type | Parameter Type | Specification | `enable-api-fields` |
|-------------|----------------|---------------|---------------------|
| string | string | `$(steps.<step-name>.results.<result-name>)` | stable |
| array | array | `$(steps.<step-name>.results.<result-name>[*])` | alpha or beta |
| array | string | `$(steps.<step-name>.results.<result-name>[i])` | alpha or beta |
| object | string | `$(tasks.<task-name>.results.<result-name>.key)` | alpha or beta |

**Note:** Whole Array `Results` (using star notation) cannot be referred to in `script` and `env`.

The example below shows how you could pass `step results` from a `step` into following steps, in this case, into a `StepAction`:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: step-action-run
spec:
  taskSpec:
    steps:
      - name: inline-step
        results:
          - name: result1
            type: array
          - name: result2
            type: string
          - name: result3
            type: object
            properties:
              IMAGE_URL:
                type: string
              IMAGE_DIGEST:
                type: string
        image: alpine
        script: |
          echo -n "[\"image1\", \"image2\", \"image3\"]" | tee $(step.results.result1.path)
          echo -n "foo" | tee $(step.results.result2.path)
          echo -n "{\"IMAGE_URL\":\"ar.com\", \"IMAGE_DIGEST\":\"sha234\"}" | tee $(step.results.result3.path)
      - name: action-runner
        ref:
          name: step-action
        params:
          - name: param1
            value: $(steps.inline-step.results.result1[*])
          - name: param2
            value: $(steps.inline-step.results.result2)
          - name: param3
            value: $(steps.inline-step.results.result3[*])
```

**Note:** `Step Results` can only be referenced in a `Step`'s/`StepAction`'s `env`, `command` and `args`. Referencing them in any other field will throw an error.

## Declaring WorkingDir

You can declare `workingDir` in a `StepAction`:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
  name: example-stepaction-name
spec:
  image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:latest
  workingDir: /workspace
  script: |
    # clone the repo
    ...
```

The `Task` using the `StepAction` has more context about how the `Steps` have been orchestrated. As such, the `Task` should be able to update the `workingDir` of the `StepAction` so that the `StepAction` is executed from the correct location. The `StepAction` can parametrize the `workingDir` and work relative to it. This way, the `Task` does not really need control over the `workingDir`, it just needs to pass the path as a parameter.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
  name: example-stepaction-name
spec:
  image: ubuntu
  params:
    - name: source
      description: "The path to the source code"
  workingDir: $(params.source)
```

## Declaring SecurityContext

You can declare `securityContext` in a `StepAction`:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
  name: example-stepaction-name
spec:
  image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:latest
  securityContext:
    runAsUser: 0
  script: |
    # clone the repo
    ...
```

Note that the `securityContext` from the `StepAction` will overwrite the `securityContext` from the [`TaskRun`](./taskruns.md#example-of-running-step-containers-as-a-non-root-user).

## Declaring VolumeMounts

You can define `VolumeMounts` in `StepActions`. The `name` of the `VolumeMount` MUST be a single reference to a string `Parameter`. For example, `$(params.registryConfig)` is valid while `$(params.registryConfig)-foo` and `unparametrized-name` are invalid. This is to ensure reusability of `StepActions`, such that `Task` authors have control of which `Volumes` they bind to the `VolumeMounts`.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: StepAction
metadata:
  name: myStep
spec:
  params:
    - name: registryConfig
    - name: otherConfig
  volumeMounts:
    - name: $(params.registryConfig)
      mountPath: /registry-config
    - name: $(params.otherConfig)
      mountPath: /other-config
  image: ...
  script: ...
```

## Referencing a StepAction

`StepActions` can be referenced from a `Step` using the `ref` field, as follows:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: step-action-run
spec:
  taskSpec:
    steps:
      - name: action-runner
        ref:
          name: step-action
```

Upon resolution and execution of the `TaskRun`, the `Status` will look something like:

```yaml
status:
  completionTime: "2023-10-24T20:28:42Z"
  conditions:
    - lastTransitionTime: "2023-10-24T20:28:42Z"
      message: All Steps have completed executing
      reason: Succeeded
      status: "True"
      type: Succeeded
  podName: step-action-run-pod
  provenance:
    featureFlags:
      EnableStepActions: true
  startTime: "2023-10-24T20:28:32Z"
  steps:
    - container: step-action-runner
      imageID: docker.io/library/alpine@sha256:eece025e432126ce23f223450a0326fbebde39cdf496a85d8c016293fc851978
      name: action-runner
      terminationReason: Completed
      terminated:
        containerID: containerd://46a836588967202c05b594696077b147a0eb0621976534765478925bb7ce57f6
        exitCode: 0
        finishedAt: "2023-10-24T20:28:42Z"
        reason: Completed
        startedAt: "2023-10-24T20:28:42Z"
  taskSpec:
    steps:
      - computeResources: {}
        image: alpine
        name: action-runner
```

If a `Step` is referencing a `StepAction`, it cannot contain the fields supported by `StepActions`. This includes:

- `image`
- `command`
- `args`
- `script`
- `env`
- `volumeMounts`

Using any of the above fields and referencing a `StepAction` in the same `Step` is not allowed and will cause a validation error.

```yaml
# This is not allowed and will result in a validation error
# because the image is expected to be provided by the StepAction
# and not inlined.
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: step-action-run
spec:
  taskSpec:
    steps:
      - name: action-runner
        ref:
          name: step-action
        image: ubuntu
```

Executing the above `TaskRun` will result in an error that looks like:

```
Error from server (BadRequest): error when creating "STDIN": admission webhook "validation.webhook.pipeline.tekton.dev" denied the request: validation failed: image cannot be used with Ref: spec.taskSpec.steps[0].image
```

When a `Step` is referencing a `StepAction`, it can contain the following fields:

- `computeResources`
- `workspaces` (Isolated workspaces)
- `volumeDevices`
- `imagePullPolicy`
- `onError`
- `stdoutConfig`
- `stderrConfig`
- `securityContext`
- `envFrom`
- `timeout`
- `ref`
- `params`

Using any of the above fields and referencing a `StepAction` is allowed and will not cause an error. For example, the `TaskRun` below will execute without any errors:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: step-action-run
spec:
  taskSpec:
    steps:
      - name: action-runner
        ref:
          name: step-action
        params:
          - name: step-action-param
            value: hello
        computeResources:
          requests:
            memory: 1Gi
            cpu: 500m
        timeout: 1h
        onError: continue
```

### Specifying Remote StepActions

A `ref` field may specify a `StepAction` in a remote location such as git. Support for specific types of remote will depend on the `Resolvers` your cluster's operator has installed. For more information, including a tutorial, please check the [resolution docs](./resolution.md). The below example demonstrates referencing a `StepAction` in git:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: step-action-run-
spec:
  taskSpec:
    steps:
      - name: action-runner
        ref:
          resolver: git
          params:
            - name: url
              value: https://github.com/repo/repo.git
            - name: revision
              value: main
            - name: pathInRepo
              value: remote-step.yaml
```

The default resolver type can be configured by the `default-resolver-type` field in the `config-defaults` ConfigMap (`alpha` feature). See [additional-configs.md](./additional-configs.md) for details.

## Controlling Step Execution with when Expressions

You can define `when` in a `step` to control its execution.

The components of `when` expressions are `input`, `operator`, `values` and `cel`:

| Component | Description | Syntax |
|-----------|-------------|--------|
| `input` | Input for the `when` expression, defaults to an empty string if not provided. | Static values e.g. `"ubuntu"` <br> Variables (parameters or results) e.g. `$(params.image)` or `$(tasks.task1.results.image)` or `$(tasks.task1.results.array-results[1])` |
| `operator` | `operator` represents an `input`'s relationship to a set of `values`; a valid `operator` must be provided. | `in` or `notin` |
| `values` | An array of string values; the `values` array must be provided and has to be non-empty. | An array param e.g. `["$(params.images[*])"]` <br> An array result of a task `["$(tasks.task1.results.array-results[*])"]` <br> An array result of a step `["$(steps.step1.results.array-results[*])"]` <br> `values` can contain static values e.g. `"ubuntu"` <br> `values` can contain variables (parameters or results) or a Workspace's `bound` state e.g. `["$(params.image)"]` or `["$(steps.step1.results.image)"]` or `["$(tasks.task1.results.array-results[1])"]` or `["$(steps.step1.results.array-results[1])"]` |
| `cel` | The Common Expression Language (CEL) implements common semantics for expression evaluation, enabling different applications to more easily interoperate. This is an `alpha` feature; `enable-cel-in-whenexpression` needs to be set to true to use this feature. | [cel syntax](https://github.com/google/cel-spec/blob/master/doc/langdef.md#syntax) |

The below example shows how to use when expressions to control step executions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-2
spec:
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: step-when-example
spec:
  workspaces:
    - name: custom
      persistentVolumeClaim:
        claimName: my-pvc-2
  taskSpec:
    description: |
      A simple task that shows how to use when to determine if a step should be executed
    steps:
      - name: should-execute
        image: bash:latest
        script: |
          #!/usr/bin/env bash
          echo "executed..."
        when:
          - input: "$(workspaces.custom.bound)"
            operator: in
            values: ["true"]
      - name: should-skip
        image: bash:latest
        script: |
          #!/usr/bin/env bash
          echo skipskipskip
        when:
          - input: "$(workspaces.custom2.bound)"
            operator: in
            values: ["true"]
      - name: should-continue
        image: bash:latest
        script: |
          #!/usr/bin/env bash
          echo blabalbaba
      - name: produce-step
        image: alpine
        results:
          - name: result2
            type: string
        script: |
          echo -n "foo" | tee $(step.results.result2.path)
      - name: run-based-on-step-results
        image: alpine
        script: |
          echo "wooooooo"
        when:
          - input: "$(steps.produce-step.results.result2)"
            operator: in
            values: ["bar"]
    workspaces:
      - name: custom
```

The StepState for a skipped step looks something like the below:

```yaml
{
  "container": "step-run-based-on-step-results",
  "imageID": "docker.io/library/alpine@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b",
  "name": "run-based-on-step-results",
  "terminated": {
    "containerID": "containerd://bf81162e79cf66a2bbc03e3654942d3464db06ff368c0be263a8a70f363a899b",
    "exitCode": 0,
    "finishedAt": "2024-03-26T03:57:47Z",
    "reason": "Completed",
    "startedAt": "2024-03-26T03:57:47Z"
  },
  "terminationReason": "Skipped"
}
```

Where `terminated.exitCode` is `0` and `terminationReason` is `Skipped` to indicate the Step exited successfully and was skipped. "}
{"questions":"tekton This how to will outline the steps a developer needs to take when creating weight 104 How to write a Resolver How to write a Resolver","answers":"<!--\n---\nlinkTitle: \"How to write a Resolver\"\nweight: 104\n---\n-->\n\n# How to write a Resolver\n\nThis how-to will outline the steps a developer needs to take when creating\na new (very basic) Resolver. Rather than focus on support for a particular version\ncontrol system or cloud platform, this Resolver will simply respond with\nsome hard-coded YAML.\n\nIf you aren't yet familiar with the meaning of \"resolution\" when it\ncomes to Tekton, a short summary follows. You might also want to read a\nlittle bit into Tekton Pipelines, particularly [the docs on specifying a\ntarget Pipeline to\nrun](.\/pipelineruns.md#specifying-the-target-pipeline)\nand, if you're feeling particularly brave or bored, the [really long\ndesign doc describing Tekton\nResolution](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0060-remote-resource-resolution.md).\n\n## What's a Resolver?\n\nA Resolver is a program that runs in a Kubernetes cluster alongside\n[Tekton Pipelines](https:\/\/github.com\/tektoncd\/pipeline) and \"resolves\"\nrequests for `Tasks` and `Pipelines` from remote locations. More\nconcretely: if a user submitted a `PipelineRun` that needed a Pipeline\nYAML stored in a git repo, then it would be a `Resolver` that's\nresponsible for fetching the YAML file from git and returning it to\nTekton Pipelines.\n\nThis pattern extends beyond just git, allowing a developer to integrate\nsupport for other version control systems, cloud buckets, or storage systems\nwithout having to modify Tekton Pipelines itself.\n\n## Just want to see the working example?\n\nIf you'd prefer to look at the end result of this howto you can take a\nlook at the\n[`.\/resolver-template`](.\/resolver-template)\nin the Tekton Resolution repo. 
That template is built on the code from\nthis howto to get you up and running quickly.\n\n## Pre-requisites\n\nBefore getting started with this howto you'll need to be comfortable\ndeveloping in Go and have a general understanding of how Tekton\nResolution works.\n\nYou'll also need the following:\n\n- A computer with\n  [`kubectl`](https:\/\/kubernetes.io\/docs\/tasks\/tools\/#kubectl) and\n  [`ko`](https:\/\/github.com\/google\/ko) installed.\n- A Kubernetes cluster running at least Kubernetes 1.28. A [`kind`\n  cluster](https:\/\/kind.sigs.k8s.io\/docs\/user\/quick-start\/#installation)\n  should work fine for following the guide on your local machine.\n- An image registry that you can push images to. If you're using `kind`\n  make sure your `KO_DOCKER_REPO` environment variable is set to\n  `kind.local`.\n- Tekton Pipelines and remote resolvers installed in your Kubernetes\n  cluster. See [the installation\n  guide](.\/install.md#installing-and-configuring-remote-task-and-pipeline-resolution) for\n  instructions on installing it.\n\n## First Steps\n\nThe first thing to do is create an initial directory structure for your\nproject. 
For this example we'll create a directory and initialize a new\ngo module with a few subdirectories for our code:\n\n```bash\n$ mkdir demoresolver\n\n$ cd demoresolver\n\n$ go mod init example.com\/demoresolver\n\n$ mkdir -p cmd\/demoresolver\n\n$ mkdir config\n```\n\nThe `cmd\/demoresolver` directory will contain code for the resolver and the\n`config` directory will eventually contain a yaml file for deploying the\nresolver to Kubernetes.\n\n## Initializing the resolver's binary\n\nA Resolver is ultimately just a program running in your cluster, so the\nfirst step is to fill out the initial code for starting that program.\nOur resolver here is going to be extremely simple and doesn't need any\nflags or special environment variables, so we'll just initialize it with\na little bit of boilerplate.\n\nCreate `cmd\/demoresolver\/main.go` with the following setup code:\n\n\n\n\n```go\npackage main\n\nimport (\n  \"context\"\n  \"github.com\/tektoncd\/pipeline\/pkg\/remoteresolution\/resolver\/framework\"\n  \"knative.dev\/pkg\/injection\/sharedmain\"\n)\n\nfunc main() {\n  sharedmain.Main(\"controller\",\n    framework.NewController(context.Background(), &resolver{}),\n  )\n}\n\ntype resolver struct {}\n```\n\n\n\n```go\npackage main\n\nimport (\n  \"context\"\n  \"github.com\/tektoncd\/pipeline\/pkg\/resolution\/resolver\/framework\"\n  \"knative.dev\/pkg\/injection\/sharedmain\"\n)\n\nfunc main() {\n  sharedmain.Main(\"controller\",\n    framework.NewController(context.Background(), &resolver{}),\n  )\n}\n\ntype resolver struct {}\n```\n\n\n\n\n\nThis won't compile yet but you can download the dependencies by running:\n\n```bash\n# Depending on your go version you might not need the -compat flag\n$ go mod tidy -compat=1.17\n```\n\n## Writing the Resolver\n\nIf you try to build the binary right now you'll receive the following\nerror:\n\n```bash\n$ go build -o \/dev\/null .\/cmd\/demoresolver\n\ncmd\/demoresolver\/main.go:11:78: cannot use &resolver{} (type *resolver) 
as\ntype framework.Resolver in argument to framework.NewController:\n        *resolver does not implement framework.Resolver (missing GetName method)\n```\n\nWe've already defined our own `resolver` type, but in order to get the\nresolver running you'll need to add the methods defined in [the\n`framework.Resolver` interface](..\/pkg\/resolution\/resolver\/framework\/interface.go)\nto your `main.go` file. Going through each method in turn:\n\n## The `Initialize` method\n\nThis method is used to start any libraries or otherwise set up any\nprerequisites your resolver needs. For this example we won't need\nanything so this method can just return `nil`.\n\n```go\n\/\/ Initialize sets up any dependencies needed by the resolver. None atm.\nfunc (r *resolver) Initialize(context.Context) error {\n  return nil\n}\n```\n\n## The `GetName` method\n\nThis method returns a string name that will be used to refer to this\nresolver. You'd see this name show up in places like logs. For this\nsimple example we'll return `\"Demo\"`:\n\n```go\n\/\/ GetName returns a string name to refer to this resolver by.\nfunc (r *resolver) GetName(context.Context) string {\n  return \"Demo\"\n}\n```\n\n## The `GetSelector` method\n\nThis method should return a map of string labels and their values that\nwill be used to direct requests to this resolver. 
For this example the\nonly label we're interested in matching on is defined by\n`tektoncd\/resolution`:\n\n```go\n\/\/ GetSelector returns a map of labels to match requests to this resolver.\nfunc (r *resolver) GetSelector(context.Context) map[string]string {\n  return map[string]string{\n    common.LabelKeyResolverType: \"demo\",\n  }\n}\n```\n\nWhat this does is tell the resolver framework that any\n`ResolutionRequest` object with a label of\n`\"resolution.tekton.dev\/type\": \"demo\"` should be routed to our\nexample resolver.\n\nWe'll also need to add another import for this package at the top:\n\n\n\n\n```go\nimport (\n  \"context\"\n  \n  \/\/ Add this one; it defines LabelKeyResolverType we use in GetSelector\n  \"github.com\/tektoncd\/pipeline\/pkg\/resolution\/common\"\n  \"github.com\/tektoncd\/pipeline\/pkg\/remoteresolution\/resolver\/framework\"\n  \"knative.dev\/pkg\/injection\/sharedmain\"\n  pipelinev1 \"github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1\"\n)\n```\n\n\n\n\n```go\nimport (\n  \"context\"\n  \n  \/\/ Add this one; it defines LabelKeyResolverType we use in GetSelector\n  \"github.com\/tektoncd\/pipeline\/pkg\/resolution\/common\"\n  \"github.com\/tektoncd\/pipeline\/pkg\/resolution\/resolver\/framework\"\n  \"knative.dev\/pkg\/injection\/sharedmain\"\n  pipelinev1 \"github.com\/tektoncd\/pipeline\/pkg\/apis\/pipeline\/v1\"\n)\n```\n\n\n\n\n\n## The `Validate` method\n\nThe `Validate` method checks that the resolution spec submitted as part of\na resolution request is valid. Our example resolver doesn't expect\nany params in the spec, so we'll simply ensure that there are no params.\nOur example resolver also expects the `url` to be of the form `demoscheme:\/\/<path>`, so we'll validate this format.\nIn the previous version, this method was instead called `ValidateParams`. 
See below \nfor the differences.\n\n\n\n\n\n```go\n\/\/ Validate ensures that the resolution spec from a request is as expected.\nfunc (r *resolver) Validate(ctx context.Context, req *v1beta1.ResolutionRequestSpec) error {\n  if len(req.Params) > 0 {\n    return errors.New(\"no params allowed\")\n  }\n  url := req.URL\n  u, err := neturl.ParseRequestURI(url)\n  if err != nil {\n    return err\n  }\n  if u.Scheme != \"demoscheme\" {\n    return fmt.Errorf(\"Invalid Scheme. Want %s, Got %s\", \"demoscheme\", u.Scheme)\n  }\n  if u.Path == \"\" {\n    return errors.New(\"Empty path.\")\n  }\n  return nil\n}\n```\n\nYou'll also need to import `net\/url` as `neturl` and add the `\"errors\"` and `\"fmt\"` packages to your list of imports at\nthe top of the file.\n\n\n\n\n```go\n\/\/ ValidateParams ensures that the params from a request are as expected.\nfunc (r *resolver) ValidateParams(ctx context.Context, params []pipelinev1.Param) error {\n  if len(params) > 0 {\n    return errors.New(\"no params allowed\")\n  }\n  return nil\n}\n```\n\n\n\n\n\nYou'll also need to add the `\"errors\"` package to your list of imports at\nthe top of the file.\n\n## The `Resolve` method\n\nWe implement the `Resolve` method to do the heavy lifting of fetching\nthe contents of a file and returning them. It takes in the resolution request spec as input.\nFor this example we're just going to return a hard-coded string of YAML. Since Tekton Pipelines\ncurrently only supports fetching Pipeline resources via remote\nresolution, that's what we'll return.\n\n\nThe method signature we're implementing here has a\n`framework.ResolvedResource` interface as one of its return values. 
This\nis another type we have to implement but it has a small footprint:\n\n\n\n\n\n\n```go\n\/\/ Resolve uses the given resolution spec to resolve the requested file or resource.\nfunc (r *resolver) Resolve(ctx context.Context, req *v1beta1.ResolutionRequestSpec) (framework.ResolvedResource, error) {\n  return &myResolvedResource{}, nil\n}\n\n\/\/ our hard-coded resolved file to return\nconst pipeline = `\napiVersion: tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: my-pipeline\nspec:\n  tasks:\n  - name: hello-world\n    taskSpec:\n      steps:\n      - image: alpine:3.15.1\n        script: |\n          echo \"hello world\"\n`\n\n\/\/ myResolvedResource wraps the data we want to return to Pipelines\ntype myResolvedResource struct {}\n\n\/\/ Data returns the bytes of our hard-coded Pipeline\nfunc (*myResolvedResource) Data() []byte {\n  return []byte(pipeline)\n}\n\n\/\/ Annotations returns any metadata needed alongside the data. None atm.\nfunc (*myResolvedResource) Annotations() map[string]string {\n  return nil\n}\n\n\/\/ RefSource is the source reference of the remote data that records where the remote \n\/\/ file came from including the url, digest and the entrypoint. 
None atm.\nfunc (*myResolvedResource) RefSource() *pipelinev1.RefSource {\n\treturn nil\n}\n```\n\n\n\n\n\n\n```go\n\/\/ Resolve uses the given resolution spec to resolve the requested file or resource.\nfunc (r *resolver) Resolve(ctx context.Context, params []pipelinev1.Param) (framework.ResolvedResource, error) {\n  return &myResolvedResource{}, nil\n}\n\n\/\/ our hard-coded resolved file to return\nconst pipeline = `\napiVersion: tekton.dev\/v1beta1\nkind: Pipeline\nmetadata:\n  name: my-pipeline\nspec:\n  tasks:\n  - name: hello-world\n    taskSpec:\n      steps:\n      - image: alpine:3.15.1\n        script: |\n          echo \"hello world\"\n`\n\n\/\/ myResolvedResource wraps the data we want to return to Pipelines\ntype myResolvedResource struct {}\n\n\/\/ Data returns the bytes of our hard-coded Pipeline\nfunc (*myResolvedResource) Data() []byte {\n  return []byte(pipeline)\n}\n\n\/\/ Annotations returns any metadata needed alongside the data. None atm.\nfunc (*myResolvedResource) Annotations() map[string]string {\n  return nil\n}\n\n\/\/ RefSource is the source reference of the remote data that records where the remote \n\/\/ file came from including the url, digest and the entrypoint. 
None atm.\nfunc (*myResolvedResource) RefSource() *pipelinev1.RefSource {\n\treturn nil\n}\n```\n\nBest practice: In order to enable Tekton Chains to record the source \ninformation of the remote data in the SLSA provenance, the resolver should \nimplement the `RefSource()` method to return a correct RefSource value. 
See the \nfollowing example.\n\n```go\n\/\/ RefSource is the source reference of the remote data that records where the remote \n\/\/ file came from including the url, digest and the entrypoint.\nfunc (*myResolvedResource) RefSource() *pipelinev1.RefSource {\n\treturn &pipelinev1.RefSource{\n\t\tURI: \"https:\/\/github.com\/user\/example\",\n\t\tDigest: map[string]string{\n\t\t\t\"sha1\": \"example\",\n\t\t},\n\t\tEntryPoint: \"foo\/bar\/task.yaml\",\n\t}\n}\n```\n\n## The deployment configuration\n\nFinally, our resolver needs some deployment configuration so that it can\nrun in Kubernetes.\n\nA full description of the config is beyond the scope of a short howto\nbut in summary we'll tell Kubernetes to run our resolver application\nalong with some environment variables and other configuration that the\nunderlying `knative` framework expects. The deployed application is put\nin the `tekton-pipelines-resolvers` namespace and uses `ko` to build its\ncontainer image. Finally, the `ServiceAccount` our deployment uses is\n`tekton-pipelines-resolvers`, which is the default `ServiceAccount` shared by all\nresolvers in the `tekton-pipelines-resolvers` namespace.\n\nThe full configuration follows:\n\n```yaml\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: demoresolver\n  namespace: tekton-pipelines-resolvers\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: demoresolver\n  template:\n    metadata:\n      labels:\n        app: demoresolver\n    spec:\n      affinity:\n        podAntiAffinity:\n          preferredDuringSchedulingIgnoredDuringExecution:\n          - podAffinityTerm:\n              labelSelector:\n                matchLabels:\n                  app: demoresolver\n              topologyKey: kubernetes.io\/hostname\n            weight: 100\n      serviceAccountName: tekton-pipelines-resolvers\n      containers:\n      - name: controller\n        image: ko:\/\/example.com\/demoresolver\/cmd\/demoresolver\n        resources:\n          requests:\n         
   cpu: 100m\n            memory: 100Mi\n          limits:\n            cpu: 1000m\n            memory: 1000Mi\n        ports:\n        - name: metrics\n          containerPort: 9090\n        env:\n        - name: SYSTEM_NAMESPACE\n          valueFrom:\n            fieldRef:\n              fieldPath: metadata.namespace\n        - name: CONFIG_LOGGING_NAME\n          value: config-logging\n        - name: CONFIG_OBSERVABILITY_NAME\n          value: config-observability\n        - name: METRICS_DOMAIN\n          value: tekton.dev\/resolution\n        securityContext:\n          allowPrivilegeEscalation: false\n          readOnlyRootFilesystem: true\n          runAsNonRoot: true\n          capabilities:\n            drop:\n            - all\n```\n\nPhew, ok, put all that in a file at `config\/demo-resolver-deployment.yaml`\nand you'll be ready to deploy your application to Kubernetes and see it\nwork!\n\n## Trying it out\n\nNow that all the code is written your new resolver should be ready to\ndeploy to a Kubernetes cluster. We'll use `ko` to build and deploy the\napplication:\n\n```bash\n$ ko apply -f .\/config\/demo-resolver-deployment.yaml\n```\n\nAssuming the resolver deployed successfully you should be able to see it\nin the output from the following command:\n\n```bash\n$ kubectl get deployments -n tekton-pipelines-resolvers\n\n# And here's approximately what you should see when you run this command:\nNAME          READY   UP-TO-DATE   AVAILABLE   AGE\ncontroller    1\/1     1            1           2d21h\ndemoresolver  1\/1     1            1           91s\nwebhook       1\/1     1            1           2d21h\n```\n\nTo exercise your new resolver, let's submit a request for its hard-coded\npipeline. 
Create a file called `test-request.yaml` with the following\ncontent:\n\n```yaml\napiVersion: resolution.tekton.dev\/v1beta1\nkind: ResolutionRequest\nmetadata:\n  name: test-request\n  labels:\n    resolution.tekton.dev\/type: demo\n```\n\nAnd submit this request with the following command:\n\n```bash\n$ kubectl apply -f .\/test-request.yaml && kubectl get --watch resolutionrequests\n```\n\nYou should soon see your ResolutionRequest printed to screen with a True\nvalue in its SUCCEEDED column:\n\n```bash\nresolutionrequest.resolution.tekton.dev\/test-request created\nNAME           SUCCEEDED   REASON\ntest-request   True\n```\n\nPress Ctrl-C to get back to the command line.\n\nIf you now take a look at the ResolutionRequest's YAML you'll see the\nhard-coded pipeline yaml in its `status.data` field. It won't be totally\nrecognizable, though, because it's encoded as base64. Have a look with the\nfollowing command:\n\n```bash\n$ kubectl get resolutionrequest test-request -o yaml\n```\n\nYou can convert that base64 data back into yaml with the following\ncommand:\n\n```bash\n$ kubectl get resolutionrequest test-request -o jsonpath=\"{$.status.data}\" | base64 -d\n```\n\nGreat work, you've successfully written a Resolver from scratch!\n\n## Next Steps\n\nAt this point you could start to expand the `Resolve()` method in your\nResolver to fetch data from your storage backend of choice.\n\nOr if you prefer to take a look at a more fully-realized example of a\nResolver, see the [code for the `gitresolver` hosted in the Tekton\nPipeline repo](https:\/\/github.com\/tektoncd\/pipeline\/tree\/main\/pkg\/resolution\/resolver\/git\/).\n\nFinally, another direction you could take this would be to try writing a\n`PipelineRun` for Tekton Pipelines that speaks to your Resolver. 
Can\nyou get a `PipelineRun` to execute successfully that uses the hard-coded\n`Pipeline` your Resolver returns?\n\n---\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).","site":"tekton"}
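The `base64 -d` decoding step near the end of the resolver howto can also be done programmatically. Here is a minimal stdlib-only Go sketch; the helper name `decodePipelineData` and the sample string are illustrative, not part of the Tekton API:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodePipelineData reverses the base64 encoding applied to the
// ResolutionRequest's status.data field, returning the original YAML.
// In practice you would feed it the output of:
//   kubectl get resolutionrequest test-request -o jsonpath="{$.status.data}"
func decodePipelineData(encoded string) (string, error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return "", err
	}
	return string(raw), nil
}

func main() {
	// Illustrative sample: base64 of a tiny YAML fragment, not real resolver output.
	yaml, err := decodePipelineData("a2luZDogUGlwZWxpbmU=")
	if err != nil {
		panic(err)
	}
	fmt.Println(yaml) // prints: kind: Pipeline
}
```

This mirrors what `base64 -d` does on the command line and is handy if a client program needs to consume the resolved pipeline directly.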
{"questions":"tekton Migrating from v1alpha1 Run to v1beta1 CustomRun weight 4000 Migrating from v1alpha1 Run to v1beta1 CustomRun This document describes how to migrate from and","answers":"<!--\n---\nlinkTitle: \"Migrating from v1alpha1.Run to v1beta1.CustomRun\"\nweight: 4000\n---\n-->\n\n# Migrating from v1alpha1.Run to v1beta1.CustomRun\n\nThis document describes how to migrate from `v1alpha1.Run` to `v1beta1.CustomRun`.\n\n- [Changes to fields](#changes-to-fields)\n  - [Changes to the specification](#changes-to-the-specification)\n  - [Changes to the reference](#changes-to-the-reference)\n  - [Support `PodTemplate` in Custom Task Spec](#support-podtemplate-in-custom-task-spec)\n- [Changes to implementation instructions](#changes-to-implementation-instructions)\n  - [Status Reporting](#status-reporting)\n  - [Cancellation](#cancellation)\n  - [Timeout](#timeout)\n  - [Retries & RetriesStatus](#4-retries--retriesstatus)\n- [New feature flag `custom-task-version` for migration](#new-feature-flag-custom-task-version)\n\n## Changes to fields\n\nComparing `v1alpha1.Run` with `v1beta1.CustomRun`, the following fields have been changed:\n\n| Old field              | New field         |\n| ---------------------- | ------------------|\n| `spec.spec`            | [`spec.customSpec`](#changes-to-the-specification) |\n| `spec.ref`             | [`spec.customRef`](#changes-to-the-reference) |\n| `spec.podTemplate`     | removed, see [this section](#support-podtemplate-in-custom-task-spec) if you still want to support it |\n\n### Changes to the specification\n\nIn `v1beta1.CustomRun`, the specification is renamed from `spec.spec` to `spec.customSpec`.\n\n```yaml\n# before (v1alpha1.Run)\napiVersion: tekton.dev\/v1alpha1\nkind: Run\nmetadata:\n  name: run-with-spec\nspec:\n  spec:\n    apiVersion: example.dev\/v1alpha1\n    kind: Example\n    spec:\n      field1: value1\n      field2: value2\n---\n# after (v1beta1.CustomRun)\napiVersion: tekton.dev\/v1beta1\nkind: 
CustomRun\nmetadata:\n  name: customrun-with-spec\nspec:\n  customSpec:\n    apiVersion: example.dev\/v1beta1\n    kind: Example\n    spec:\n      field1: value1\n      field2: value2\n```\n\n### Changes to the reference\n\nIn `v1beta1.CustomRun`, the reference is renamed from `spec.ref` to `spec.customRef`.\n\n```yaml\n# before (v1alpha1.Run)\napiVersion: tekton.dev\/v1alpha1\nkind: Run\nmetadata:\n  name: run-with-reference\nspec:\n  ref:\n    apiVersion: example.dev\/v1alpha1\n    kind: Example\n    name: my-example\n---\n# after (v1beta1.CustomRun)\napiVersion: tekton.dev\/v1beta1\nkind: CustomRun\nmetadata:\n  name: customrun-with-reference\nspec:\n  customRef:\n    apiVersion: example.dev\/v1beta1\n    kind: Example\n    name: my-customtask\n```\n\n### Support `PodTemplate` in Custom Task Spec\n\n`spec.podTemplate` is removed in `v1beta1.CustomRun`. You can support that field in your own custom task spec if you want to. For example:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: CustomRun\nmetadata:\n  name: customrun-with-podtemplate\nspec:\n  customSpec:\n    apiVersion: example.dev\/v1beta1\n    kind: Example\n    spec:\n      podTemplate:\n        securityContext:\n          runAsUser: 1001\n```\n\n## Changes to implementation instructions\n\nWe've changed four implementation instructions. Note that `status reporting` is the only required instruction to follow; the others are recommendations.\n\n### Status Reporting\n\nYou **MUST** report `v1beta1.CustomRun` as `Done` (set its `Conditions.Succeeded` status as `True` or `False`) **ONLY** when you want the PipelineRun controller to consider it as finished.\n\nFor example, if the `CustomRun` failed but still has remaining `retries`, and you want the PipelineRun controller to continue watching its status, you **MUST NOT** mark it as `Done`. 
Otherwise, the `PipelineRun` controller may consider it as finished.\n\nHere are example statuses indicating that the `CustomRun` is done:\n\n```yaml\nType: Succeeded\nStatus: False\nReason: TimedOut\n```\n\n```yaml\nType: Succeeded\nStatus: True\nReason: Succeeded\n```\n\n### Cancellation\n\nCustom Task implementors are responsible for implementing `cancellation` to **support pipelineRun level timeouts and cancellation**. If a Custom Task implementor does not support cancellation via `customRun.spec.status`, `PipelineRun` cannot time out within the specified interval\/duration and cannot be cancelled as expected upon request.\n\nIt is recommended to update the `CustomRun` status as follows once you notice that `customRun.Spec.Status` has been updated to `RunCancelled`:\n\n```yaml\nType: Succeeded\nStatus: False\nReason: CustomRunCancelled\n```\n\n### Timeout\n\nWe recommend setting `customRun.Timeout` for each retry attempt rather than for all retries combined. That is, if one `CustomRun` execution fails on `Timeout` and it has remaining retries, the `CustomRun` controller SHOULD NOT set the status of that `CustomRun` as `False`. Instead, it SHOULD initiate another round of execution.\n\n### 4. 
Retries & RetriesStatus\n\nWe recommend using `customRun.Spec.Retries` if you want to implement `retry` logic for your `Custom Task`, and archiving the historical `customRun.Status` of each attempt in `customRun.Status.RetriesStatus`.\n\nSay you started a `CustomRun` by setting the following condition:\n\n```yaml\nStatus:\n  Conditions:\n  - Type: Succeeded\n    Status: Unknown\n    Reason: Running\n```\n\nNow say it failed for some reason but still has remaining retries. Instead of marking its condition as failed:\n\n```yaml\nStatus:\n  Conditions:\n  - Type: Succeeded\n    Status: False\n    Reason: Failed\n```\n\nwe **recommend** that you archive the failure status in `customRun.Status.retriesStatus` and keep the top-level `customRun.Status` set to `Unknown`:\n\n```yaml\nStatus:\n  Conditions:\n  - Type: Succeeded\n    Status: Unknown\n    Reason: \"xxx\"\n  RetriesStatus:\n  - Conditions:\n    - Type: Succeeded\n      Status: False\n      Reason: Failed\n```\n\n## New feature flag `custom-task-version` for migration\n\nYou can use `custom-task-version` to control whether `v1alpha1.Run` or `v1beta1.CustomRun` should be created when a `Custom Task` is specified in a `Pipeline`. 
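The retry bookkeeping recommended in the Retries & RetriesStatus section above (archive the failed condition under `retriesStatus`, then reset the top-level condition to `Unknown`) can be sketched in Go. The structs below are simplified stand-ins for illustration only, not the real Tekton or Knative API types, and the `Retrying` reason is our own placeholder:

```go
package main

import "fmt"

// Condition is a simplified stand-in for the knative-style condition
// used in the YAML examples above (illustrative only).
type Condition struct {
	Type   string
	Status string
	Reason string
}

// CustomRunStatus is a simplified stand-in for the CustomRun status,
// holding current conditions plus archived per-attempt statuses.
type CustomRunStatus struct {
	Conditions    []Condition
	RetriesStatus []CustomRunStatus
}

// archiveForRetry copies the current (failed) conditions into RetriesStatus
// and resets the top-level condition to Unknown, so the PipelineRun
// controller keeps watching instead of treating the run as done.
func archiveForRetry(s *CustomRunStatus) {
	s.RetriesStatus = append(s.RetriesStatus, CustomRunStatus{Conditions: s.Conditions})
	// "Retrying" is a placeholder reason; pick whatever fits your controller.
	s.Conditions = []Condition{{Type: "Succeeded", Status: "Unknown", Reason: "Retrying"}}
}

func main() {
	status := &CustomRunStatus{
		Conditions: []Condition{{Type: "Succeeded", Status: "False", Reason: "Failed"}},
	}
	archiveForRetry(status)
	fmt.Println(status.Conditions[0].Status, len(status.RetriesStatus)) // prints: Unknown 1
}
```

The key design point is that the failure is preserved for observability while the top-level condition stays non-terminal, matching the status-reporting requirement above.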
The feature flag currently supports two values: `v1alpha1` and `v1beta1`.\n\nWe'll change its default value per the following timeline:\n- v0.43.*: default to `v1alpha1`.\n- v0.44.*: switch the default to `v1beta1`.\n- v0.47.*: remove the feature flag and stop supporting the creation of `v1alpha1.Run`.\n","site":"tekton","answers_cleaned":"
continue watching its status changing  you   MUST NOT   mark it as  Done   Otherwise  the  PipelineRun  controller may consider it as finished   Here are statuses that indicating the  CustomRun  is done     yaml Type  Succeeded Status  False Reason  TimedOut        yaml Type  Succeeded Status  True Reason  Succeeded          Cancellation  Custom Task implementors are responsible for implementing  cancellation  to   support pipelineRun level timeouts and cancellation    If a Custom Task implementor does not support cancellation via  customRun spec status    PipelineRun  can not timeout within the specified interval duration and can not be cancelled as expected upon request   It is recommended to update the  CustomRun  status  as following  once noticing  customRun Spec Status  is updated to  RunCancelled      yaml Type  Succeeded Status  False Reason  CustomRunCancelled          Timeout  We recommend  customRun Timeout  to be set for each retry attempt instead of all retries  That s said  if one  CustomRun  execution fails on  Timeout  and it has remaining retries  the  CustomRun  controller SHOULD NOT set the status of that  CustomRun  as  False   Instead  it SHOULD initiate another round of execution       4  Retries   RetriesStatus  We recommend you use  customRun Spec Retries  if you want to implement the  retry  logic for your  Custom Task   and archive history  customRun Status  in  customRun Status RetriesStatus    Say you started a  CustomRun  by setting the following condition     yaml Status    Conditions      Type  Succeeded     Status  Unknown     Reason  Running     Now it failed for some reasons but has remaining retries  instead of setting Conditions as failed on TimedOut     yaml Status    Conditions      Type  Succeeded     Status  False     Reason  Failed     We   recommend   you to do archive the failure status to  customRun Status retriesStatus   and keep setting  customRun Status  as  Unknown      yaml Status    Conditions      Type  Succeeded   
  Status  Unknown     Reason   xxx    RetriesStatus      Conditions        Type  Succeeded       Status  False       Reason  Failed          New feature flag  custom task version  for migration  You can use  custom task version  to control whether  v1alpha1 Run  or  v1beta1 CustomRun  should be created when a  Custom Task  is specified in a  Pipeline   The feature flag currently supports two values   v1alpha1  and  v1beta1    We ll change its default value per the following timeline    v0 43    default to  v1alpha1     v0 44    switch the default to  v1beta1    v0 47    remove the feature flag  stop supporting creating  v1alpha1 Run     cancel pr   https   github com tektoncd pipeline blob main docs pipelineruns md cancelling a pipelinerun  gracefully cancel pr    https   github com tektoncd pipeline blob main docs pipelineruns md gracefully cancelling a pipelinerun"}
{"questions":"tekton TaskRuns weight 202 toc TaskRuns","answers":"<!--\n---\nlinkTitle: \"TaskRuns\"\nweight: 202\n---\n-->\n\n# TaskRuns\n\n<!-- toc -->\n- [Overview](#overview)\n- [Configuring a `TaskRun`](#configuring-a-taskrun)\n  - [Specifying the target `Task`](#specifying-the-target-task)\n    - [Tekton Bundles](#tekton-bundles)\n    - [Remote Tasks](#remote-tasks)\n  - [Specifying `Parameters`](#specifying-parameters)\n    - [Propagated Parameters](#propagated-parameters)\n    - [Propagated Object Parameters](#propagated-object-parameters)\n    - [Extra Parameters](#extra-parameters)\n  - [Specifying `Resource` limits](#specifying-resource-limits)\n  - [Specifying Task-level `ComputeResources`](#specifying-task-level-computeresources)\n  - [Specifying a `Pod` template](#specifying-a-pod-template)\n  - [Specifying `Workspaces`](#specifying-workspaces)\n    - [Propagated Workspaces](#propagated-workspaces)\n  - [Specifying `Sidecars`](#specifying-sidecars)\n  - [Configuring `Task` `Steps` and `Sidecars` in a TaskRun](#configuring-task-steps-and-sidecars-in-a-taskrun)\n  - [Specifying `LimitRange` values](#specifying-limitrange-values)\n  - [Specifying `Retries`](#specifying-retries)\n  - [Configuring the failure timeout](#configuring-the-failure-timeout)\n  - [Specifying `ServiceAccount` credentials](#specifying-serviceaccount-credentials)\n- [<code>TaskRun<\/code> status](#taskrun-status)\n  - [The <code>status<\/code> field](#the-status-field)\n- [Monitoring execution status](#monitoring-execution-status)\n    - [Monitoring `Steps`](#monitoring-steps)\n    - [Steps](#steps)\n    - [Monitoring `Results`](#monitoring-results)\n- [Cancelling a `TaskRun`](#cancelling-a-taskrun)\n- [Debugging a `TaskRun`](#debugging-a-taskrun)\n    - [Breakpoint on Failure](#breakpoint-on-failure)\n    - [Debug Environment](#debug-environment)\n- [Events](events.md#taskruns)\n- [Running a TaskRun Hermetically](hermetic.md)\n- [Code examples](#code-examples)\n  - [Example 
`TaskRun` with a referenced `Task`](#example-taskrun-with-a-referenced-task)\n  - [Example `TaskRun` with an embedded `Task`](#example-taskrun-with-an-embedded-task)\n  - [Example of reusing a `Task`](#example-of-reusing-a-task)\n  - [Example of Using custom `ServiceAccount` credentials](#example-of-using-custom-serviceaccount-credentials)\n  - [Example of Running Step Containers as a Non Root User](#example-of-running-step-containers-as-a-non-root-user)\n<!-- \/toc -->\n\n## Overview\n\nA `TaskRun` allows you to instantiate and execute a [`Task`](tasks.md) on-cluster. A `Task` specifies one or more\n`Steps` that execute container images and each container image performs a specific piece of build work. A `TaskRun` executes the\n`Steps` in the `Task` in the order they are specified until all `Steps` have executed successfully or a failure occurs.\n\n## Configuring a `TaskRun`\n\nA `TaskRun` definition supports the following fields:\n\n- Required:\n  - [`apiVersion`][kubernetes-overview] - Specifies the API version, for example\n    `tekton.dev\/v1beta1`.\n  - [`kind`][kubernetes-overview] - Identifies this resource object as a `TaskRun` object.\n  - [`metadata`][kubernetes-overview] - Specifies the metadata that uniquely identifies the\n    `TaskRun`, such as a `name`.\n  - [`spec`][kubernetes-overview] - Specifies the configuration for the `TaskRun`.\n    - [`taskRef` or `taskSpec`](#specifying-the-target-task) - Specifies the `Tasks` that the\n    `TaskRun` will execute.\n- Optional:\n  - [`serviceAccountName`](#specifying-serviceaccount-credentials) - Specifies a `ServiceAccount`\n    object that provides custom credentials for executing the `TaskRun`.\n  - [`params`](#specifying-parameters) - Specifies the desired execution parameters for the `Task`.\n  - [`timeout`](#configuring-the-failure-timeout) - Specifies the timeout before the `TaskRun` fails.\n  - [`podTemplate`](#specifying-a-pod-template) - Specifies a [`Pod` template](podtemplates.md) to use as\n    
the starting point for configuring the `Pods` for the `Task`.\n  - [`workspaces`](#specifying-workspaces) - Specifies the physical volumes to use for the\n    [`Workspaces`](workspaces.md#using-workspaces-in-tasks) declared by a `Task`.\n  - [`debug`](#debugging-a-taskrun) - Specifies any breakpoints and debugging configuration for the `Task` execution.\n  - [`stepOverrides`](#configuring-task-steps-and-sidecars-in-a-taskrun) - Specifies configuration to use to override the `Task`'s `Step`s.\n  - [`sidecarOverrides`](#configuring-task-steps-and-sidecars-in-a-taskrun) - Specifies configuration to use to override the `Task`'s `Sidecar`s.\n\n[kubernetes-overview]:\n  https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/kubernetes-objects\/#required-fields\n\n### Specifying the target `Task`\n\nTo specify the `Task` you want to execute in your `TaskRun`, use the `taskRef` field as shown below:\n\n```yaml\nspec:\n  taskRef:\n    name: read-task\n```\n\nYou can also embed the desired `Task` definition directly in the `TaskRun` using the `taskSpec` field:\n\n```yaml\nspec:\n  taskSpec:\n    workspaces:\n    - name: source\n    steps:\n      - name: build-and-push\n        image: gcr.io\/kaniko-project\/executor:v0.17.1\n        # specifying DOCKER_CONFIG is required to allow kaniko to detect docker credentials\n        workingDir: $(workspaces.source.path)\n        env:\n          - name: \"DOCKER_CONFIG\"\n            value: \"\/tekton\/home\/.docker\/\"\n        command:\n          - \/kaniko\/executor\n        args:\n          - --destination=gcr.io\/my-project\/gohelloworld\n```\n\n#### Tekton Bundles\n\nA `Tekton Bundle` is an OCI artifact that contains Tekton resources like `Tasks` which can be referenced within a `taskRef`.\n\nYou can reference a `Tekton bundle` in a `TaskRef` in both `v1` and `v1beta1` using [remote resolution](.\/bundle-resolver.md#pipeline-resolution). 
The example syntax shown below for `v1` uses remote resolution and requires enabling [beta features](.\/additional-configs.md#beta-features).\n\n```yaml\nspec:\n  taskRef:\n    resolver: bundles\n    params:\n    - name: bundle\n      value: docker.io\/myrepo\/mycatalog\n    - name: name\n      value: echo-task\n    - name: kind\n      value: Task\n```\n\nYou may also specify a `tag` as you would with a Docker image, which will give you a repeatable reference to a `Task`.\n\n```yaml\nspec:\n  taskRef:\n    resolver: bundles\n    params:\n    - name: bundle\n      value: docker.io\/myrepo\/mycatalog:v1.0.1\n    - name: name\n      value: echo-task\n    - name: kind\n      value: Task\n```\n\nYou may also specify a fixed digest instead of a tag, which ensures the referenced task is constant.\n\n```yaml\nspec:\n  taskRef:\n    resolver: bundles\n    params:\n    - name: bundle\n      value: docker.io\/myrepo\/mycatalog@sha256:abc123\n    - name: name\n      value: echo-task\n    - name: kind\n      value: Task\n```\n\nA working example can be found [here](..\/examples\/v1beta1\/taskruns\/no-ci\/tekton-bundles.yaml).\n\nAny of the above options will fetch the image using the `ImagePullSecrets` attached to the\n`ServiceAccount` specified in the `TaskRun`. See the [Service Account](#specifying-serviceaccount-credentials)\nsection for details on how to configure a `ServiceAccount` on a `TaskRun`. The `TaskRun`\nwill then run that `Task` without registering it in the cluster, allowing multiple versions\nof the same named `Task` to be run at once.\n\n`Tekton Bundles` may be constructed with any toolset that produces valid OCI image artifacts so long as\nthe artifact adheres to the [contract](tekton-bundle-contracts.md). 
Additionally, you may use the `tkn`\nCLI *(coming soon)*.\n\n#### Remote Tasks\n\n**([beta feature](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/install.md#beta-features))**\n\nA `taskRef` field may specify a Task in a remote location such as git.\nSupport for specific types of remote location depends on the Resolvers your\ncluster's operator has installed. For more information, including a tutorial, see the [resolution docs](resolution.md). The example below demonstrates referencing a Task in git:\n\n```yaml\nspec:\n  taskRef:\n    resolver: git\n    params:\n    - name: url\n      value: https:\/\/github.com\/tektoncd\/catalog.git\n    - name: revision\n      value: abc123\n    - name: pathInRepo\n      value: \/task\/golang-build\/0.3\/golang-build.yaml\n```\n\n### Specifying `Parameters`\n\nIf a `Task` has [`parameters`](tasks.md#specifying-parameters), you can use the `params` field to specify their values:\n\n```yaml\nspec:\n  params:\n    - name: flags\n      value: -someflag\n```\n\n**Note:** If a parameter does not have an implicit default value, you must explicitly set its value.\n\n#### Propagated Parameters\n\nWhen using an inlined `taskSpec`, parameters from the parent `TaskRun` will be\navailable to the `Task` without needing to be explicitly defined.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  generateName: hello-\nspec:\n  params:\n    - name: message\n      value: \"hello world!\"\n  taskSpec:\n    # There are no explicit params defined here.\n    # They are derived from the TaskRun params above.\n    steps:\n    - name: default\n      image: ubuntu\n      script: |\n        echo $(params.message)\n```\n\nOn executing the task run, the parameters will be interpolated during resolution.\nThe specification is not mutated before storage, so it remains the same; only the status is updated.\n\n```yaml\nkind: TaskRun\nmetadata:\n  name: hello-dlqm9\n  ...\nspec:\n  params:\n  - name: 
message\n    value: hello world!\n  serviceAccountName: default\n  taskSpec:\n    steps:\n    - image: ubuntu\n      name: default\n      script: |\n        echo $(params.message)\nstatus:\n  conditions:\n  - lastTransitionTime: \"2022-05-20T15:24:41Z\"\n    message: All Steps have completed executing\n    reason: Succeeded\n    status: \"True\"\n    type: Succeeded\n  ...\n  steps:\n  - container: step-default\n    ...\n  taskSpec:\n    steps:\n    - image: ubuntu\n      name: default\n      script: |\n        echo \"hello world!\"\n```\n\n#### Propagated Object Parameters\n\nWhen using an inlined `taskSpec`, object parameters from the parent `TaskRun` will be\navailable to the `Task` without needing to be explicitly defined.\n\n**Note:** If an object parameter is being defined explicitly, then you must define the spec of the object in `Properties`.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  generateName: object-param-result-\nspec:\n  params:\n  - name: gitrepo\n    value:\n      commit: sha123\n      url: xyz.com\n  taskSpec:\n    steps:\n    - name: echo-object-params\n      image: bash\n      args:\n      - echo\n      - --url=$(params.gitrepo.url)\n      - --commit=$(params.gitrepo.commit)\n```\n\nOn executing the task run, the object parameters will be interpolated during resolution.\nThe specification is not mutated before storage, so it remains the same; only the status is updated.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: object-param-result-vlnmb\n  ...\nspec:\n  params:\n  - name: gitrepo\n    value:\n      commit: sha123\n      url: xyz.com\n  serviceAccountName: default\n  taskSpec:\n    steps:\n    - args:\n      - echo\n      - --url=$(params.gitrepo.url)\n      - --commit=$(params.gitrepo.commit)\n      image: bash\n      name: echo-object-params\nstatus:\n  completionTime: \"2022-09-08T17:09:37Z\"\n  conditions:\n  - lastTransitionTime: 
\"2022-09-08T17:09:37Z\"\n    message: All Steps have completed executing\n    reason: Succeeded\n    status: \"True\"\n    type: Succeeded\n    ...\n  steps:\n  - container: step-echo-object-params\n    ...\n  taskSpec:\n    steps:\n    - args:\n      - echo\n      - --url=xyz.com\n      - --commit=sha123\n      image: bash\n      name: echo-object-params\n```\n\n#### Extra Parameters\n\n**([alpha only](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/additional-configs.md#alpha-features))**\n\nYou can pass in extra `Parameters` if needed depending on your use cases. An example use\ncase is when your CI system autogenerates `TaskRuns` and it has `Parameters` it wants to\nprovide to all `TaskRuns`. Because you can pass in extra `Parameters`, you don't have to\ngo through the complexity of checking each `Task` and providing only the required params.\n\n#### Parameter Enums\n\n> :seedling: **`enum` is an [alpha](additional-configs.md#alpha-features) feature.** The `enable-param-enum` feature flag must be set to `\"true\"` to enable this feature.\n\nIf a `Parameter` is guarded by `Enum` in the `Task`, you can only provide `Parameter` values in the `TaskRun` that are predefined in the `Param.Enum` in the `Task`. The `TaskRun` will fail with reason `InvalidParamValue` otherwise.\n\nYou can also specify `Enum` for [`TaskRun` with an embedded `Task`](#example-taskrun-with-an-embedded-task). The same param validation will be executed in this scenario.\n\nSee more details in [Param.Enum](.\/tasks.md#param-enum).\n\n### Specifying `Resource` limits\n\nEach Step in a Task can specify its resource requirements. See\n[Defining `Steps`](tasks.md#defining-steps). 
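For instance, a minimal sketch of a `v1` Task whose Step declares its own requirements via the Step-level `computeResources` field (the name and values here are purely illustrative):\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: Task\nmetadata:\n  name: resource-demo\nspec:\n  steps:\n    - name: build\n      image: ubuntu\n      script: echo building\n      computeResources:\n        requests:\n          memory: 512Mi\n        limits:\n          memory: 1Gi\n```\n\n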
Resource requirements defined in Steps and Sidecars\nmay be overridden by a TaskRun's StepSpecs and SidecarSpecs.\n\n### Specifying Task-level `ComputeResources`\n\n**([beta only](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/additional-configs.md#beta-features))**\n\nTask-level compute resources can be configured in `TaskRun.ComputeResources` or `PipelineRun.TaskRunSpecs.ComputeResources`.\n\nFor example:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: task\nspec:\n  steps:\n    - name: foo\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: taskrun\nspec:\n  taskRef:\n    name: task\n  computeResources:\n    requests:\n      cpu: 1\n    limits:\n      cpu: 2\n```\n\nFurther details and examples can be found in [Compute Resources in Tekton](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/compute-resources.md).\n\n### Specifying a `Pod` template\n\nYou can specify a [`Pod` template](podtemplates.md) configuration that will serve as the configuration starting\npoint for the `Pod` in which the container images specified in your `Task` will execute. This allows you to\ncustomize the `Pod` configuration specifically for that `TaskRun`.\n\nIn the following example, the `Task` declares a `volumeMount` named `my-cache`, which the `TaskRun` provides\nusing a `PersistentVolumeClaim` volume. 
A specific scheduler is also configured in the  `SchedulerName` field.\nThe `Pod` executes with regular (non-root) user permissions.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: mytask\n  namespace: default\nspec:\n  steps:\n    - name: writesomething\n      image: ubuntu\n      command: [\"bash\", \"-c\"]\n      args: [\"echo 'foo' > \/my-cache\/bar\"]\n      volumeMounts:\n        - name: my-cache\n          mountPath: \/my-cache\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: mytaskrun\n  namespace: default\nspec:\n  taskRef:\n    name: mytask\n  podTemplate:\n    schedulerName: volcano\n    securityContext:\n      runAsNonRoot: true\n      runAsUser: 1001\n    volumes:\n      - name: my-cache\n        persistentVolumeClaim:\n          claimName: my-volume-claim\n```\n\n### Specifying `Workspaces`\n\nIf a `Task` specifies one or more `Workspaces`, you must map those `Workspaces` to\nthe corresponding physical volumes in your `TaskRun` definition. For example, you\ncan map a `PersistentVolumeClaim` volume to a `Workspace` as follows:\n\n```yaml\nworkspaces:\n  - name: myworkspace # must match workspace name in the Task\n    persistentVolumeClaim:\n      claimName: mypvc # this PVC must already exist\n    subPath: my-subdir\n```\n\nFor more information, see the following topics:\n- For information on mapping `Workspaces` to `Volumes`, see [Using `Workspace` variables in `TaskRuns`](workspaces.md#using-workspace-variables-in-taskruns).\n- For a list of supported `Volume` types, see [Specifying `VolumeSources` in `Workspaces`](workspaces.md#specifying-volumesources-in-workspaces).\n- For an end-to-end example, see [`Workspaces` in a `TaskRun`](..\/examples\/v1\/taskruns\/workspace.yaml).\n\n#### Propagated Workspaces\n\nWhen using an embedded spec, workspaces from the parent `TaskRun` will be\npropagated to any inlined specs without needing to be explicitly defined. 
This\nallows authors to simplify specs by automatically propagating top-level\nworkspaces down to other inlined resources.\n**Workspace substitutions will only be made for `commands`, `args` and `script` fields of `steps`, `stepTemplates`, and `sidecars`.**\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  generateName: propagating-workspaces-\nspec:\n  taskSpec:\n    steps:\n      - name: simple-step\n        image: ubuntu\n        command:\n          - echo\n        args:\n          - $(workspaces.tr-workspace.path)\n  workspaces:\n  - emptyDir: {}\n    name: tr-workspace\n```\n\nUpon execution, the workspaces will be interpolated during resolution through to the `taskSpec`.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: propagating-workspaces-ndxnc\n  ...\nspec:\n  ...\nstatus:\n  ...\n  taskSpec:\n    steps:\n      ...\n    workspaces:\n    - name: tr-workspace\n```\n\n##### Propagating Workspaces to Referenced Tasks\n\nWorkspaces can only be propagated to `embedded` task specs, not `referenced` Tasks.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: workspace-propagation\nspec:\n  steps:\n    - name: simple-step\n      image: ubuntu\n      command:\n        - echo\n      args:\n        - $(workspaces.tr-workspace.path)\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  generateName: propagating-workspaces-\nspec:\n  taskRef:\n    name: workspace-propagation\n  workspaces:\n  - emptyDir: {}\n    name: tr-workspace\n```\n\nUpon execution, the above `TaskRun` will fail because the `Task` is referenced and the workspace is not propagated. 
It must be explicitly defined in the `spec` of the defined `Task`.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  ...\nspec:\n  taskRef:\n    kind: Task\n    name: workspace-propagation\n  workspaces:\n  - emptyDir: {}\n    name: tr-workspace\nstatus:\n  conditions:\n  - lastTransitionTime: \"2022-09-13T15:12:35Z\"\n    message: workspace binding \"tr-workspace\" does not match any declared workspace\n    reason: TaskRunValidationFailed\n    status: \"False\"\n    type: Succeeded\n  ...\n```\n\n### Specifying `Sidecars`\n\nA `Sidecar` is a container that runs alongside the containers specified\nin the `Steps` of a task to provide auxiliary support to the execution of\nthose `Steps`. For example, a `Sidecar` can run a logging daemon, a service\nthat updates files on a shared volume, or a network proxy.\n\nTekton supports the injection of `Sidecars` into a `Pod` belonging to\na `TaskRun` with the condition that each `Sidecar` running inside the\n`Pod` is terminated as soon as all `Steps` in the `Task` complete execution.\nThis might result in the `Pod` including each affected `Sidecar` with a\nretry count of 1 and a different container image than expected.\n\nWe are aware of the following issues affecting Tekton's implementation of `Sidecars`:\n\n- The configured `nop` image **must not** provide the command that the\n`Sidecar` is expected to run; otherwise it will not exit, resulting in the `Sidecar`\nrunning forever and the Task eventually timing out. For more information, see the\n[associated issue](https:\/\/github.com\/tektoncd\/pipeline\/issues\/1347).\n\n- The `kubectl get pods` command returns the status of the `Pod` as \"Completed\" if a\n`Sidecar` exits successfully and as \"Error\" if a `Sidecar` exits with an error,\ndisregarding the exit codes of the container images that actually executed the `Steps`\ninside the `Pod`. Only the above command is affected. 
The `Pod's` description correctly\ndenotes a \"Failed\" status and the container statuses correctly denote their exit codes\nand reasons.\n\n### Configuring Task Steps and Sidecars in a TaskRun\n\n**([beta only](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/additional-configs.md#beta-features))**\n\nA TaskRun can specify `StepSpecs` or `SidecarSpecs` to configure the Steps or Sidecars\nspecified in a Task. Only named Steps and Sidecars may be configured.\n\nFor example, given the following Task definition:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: image-build-task\nspec:\n  steps:\n    - name: build\n      image: gcr.io\/kaniko-project\/executor:latest\n  sidecars:\n    - name: logging\n      image: my-logging-image\n```\n\nAn example `v1` TaskRun definition could look like:\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: TaskRun\nmetadata:\n  name: image-build-taskrun\nspec:\n  taskRef:\n    name: image-build-task\n  stepSpecs:\n    - name: build\n      computeResources:\n        requests:\n          memory: 1Gi\n  sidecarSpecs:\n    - name: logging\n      computeResources:\n        requests:\n          cpu: 100m\n        limits:\n          cpu: 500m\n```\n\nIn `v1beta1`, the equivalent fields are `stepOverrides` and `sidecarOverrides`:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: image-build-taskrun\nspec:\n  taskRef:\n    name: image-build-task\n  stepOverrides:\n    - name: build\n      resources:\n        requests:\n          memory: 1Gi\n  sidecarOverrides:\n    - name: logging\n      resources:\n        requests:\n          cpu: 100m\n        limits:\n          cpu: 500m\n```\n\n`StepSpecs` and `SidecarSpecs` must include the `name` field and may include `resources`.\nNo other fields can be overridden.\nIf the overridden `Task` uses a [`StepTemplate`](.\/tasks.md#specifying-a-step-template), configuration on\n`Step` will take precedence over configuration in `StepTemplate`, and configuration in `StepSpec` will\ntake precedence over 
both.\n\nWhen merging resource requirements, different resource types are considered independently.\nFor example, if a `Step` configures both CPU and memory, and a `StepSpec` configures only memory,\nthe CPU values from the `Step` will be preserved. Requests and limits are also considered independently.\nFor example, if a `Step` configures a memory request and limit, and a `StepSpec` configures only a\nmemory request, the memory limit from the `Step` will be preserved.\n\n### Specifying `LimitRange` values\n\nIn order to only consume the bare minimum amount of resources needed to execute one `Step` at a\ntime from the invoked `Task`, Tekton will request the compute values for CPU, memory, and ephemeral\nstorage for each `Step` based on the [`LimitRange`](https:\/\/kubernetes.io\/docs\/concepts\/policy\/limit-range\/)\nobject(s), if present. Any `Request` or `Limit` specified by the user (on `Task` for example) will be left unchanged.\n\nFor more information, see the [`LimitRange` support in Pipeline](.\/compute-resources.md#limitrange-support).\n\n### Specifying `Retries`\nYou can use the `retries` field to set how many times you want to retry on a failed TaskRun.\nAll TaskRun failures are retriable except for `Cancellation`.\n\nFor a retriable `TaskRun`, when an error occurs:\n- The error status is archived in `status.RetriesStatus`\n- The `Succeeded` condition in `status` is updated:\n```\nType: Succeeded\nStatus: Unknown\nReason: ToBeRetried\n```\n- `status.StartTime`, `status.PodName` and `status.Results` are unset to trigger another retry attempt.\n\n### Configuring the failure timeout\n\nYou can use the `timeout` field to set the `TaskRun's` desired timeout value for **each retry attempt**. If you do\nnot specify this value, the global default timeout value applies (the same, to `each retry attempt`). 
If you set the timeout to 0,\nthe `TaskRun` will have no timeout and will run until it completes successfully or fails from an error.\n\nThe `timeout` value is a `duration` conforming to Go's\n[`ParseDuration`](https:\/\/golang.org\/pkg\/time\/#ParseDuration) format. For example, valid\nvalues are `1h30m`, `1h`, `1m`, `60s`, and `0`.\n\nIf a `TaskRun` runs longer than its timeout value, the pod associated with the `TaskRun` will be deleted. This\nmeans that the logs of the `TaskRun` are not preserved. The deletion of the `TaskRun` pod is necessary in order to\nstop `TaskRun` step containers from running.\n\nThe global default timeout is set to 60 minutes when you first install Tekton. You can set\na different global default timeout value using the `default-timeout-minutes` field in\n[`config\/config-defaults.yaml`](.\/..\/config\/config-defaults.yaml). If you set the global timeout to 0,\nall `TaskRuns` that do not have a timeout set will have no timeout and will run until they complete successfully\nor fail from an error.\n\n> :note: As an internal detail, the `PipelineRun` and `TaskRun` reconcilers in the Tekton controller will, under certain conditions, requeue a `PipelineRun` or `TaskRun` for re-evaluation rather than waiting for the next update. The wait time for that re-queueing is the timeout minus the elapsed time; if the timeout is set to '0', that calculation produces a negative number and the new reconciliation event fires immediately, which can hurt overall performance and defeats the purpose of the wait-time calculation. To avoid this, the reconcilers instead use the configured global timeout as the wait time when the associated timeout has been set to '0'.\n\n### Specifying `ServiceAccount` credentials\n\nYou can execute the `Task` in your `TaskRun` with a specific set of credentials by\nspecifying a `ServiceAccount` object name in the `serviceAccountName` field in your `TaskRun`\ndefinition. 
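For example, a short sketch assuming a `ServiceAccount` named `build-bot` already exists in the target namespace:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: sa-taskrun\nspec:\n  serviceAccountName: build-bot # the Task's steps run with this ServiceAccount's credentials\n  taskRef:\n    name: read-task\n```\n\n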
If you do not explicitly specify this, the `TaskRun` executes with the credentials\nspecified in the `configmap-defaults` `ConfigMap`. If this default is not specified, `TaskRuns`\nwill execute with the [`default` service account](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/#use-the-default-service-account-to-access-the-api-server)\nset for the target [`namespace`](https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/namespaces\/).\n\nFor more information, see [`ServiceAccount`](auth.md).\n\n## `TaskRun` status\n\nThe `status` field defines the observed state of the `TaskRun`.\n\n### The `status` field\n\n- Required:\n  - `status` - The most relevant information about the TaskRun's state. This field includes:\n  <!-- wokeignore:rule=master -->\n    - `status.conditions`, which contains the latest observations of the `TaskRun`'s state. [See here](https:\/\/github.com\/kubernetes\/community\/blob\/master\/contributors\/devel\/sig-architecture\/api-conventions.md#typical-status-properties) for information on typical status properties.\n  - `podName` - Name of the pod containing the containers responsible for executing this `task`'s `step`s.\n  - `startTime` - The time at which the `TaskRun` began executing, in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339) format.\n  - `completionTime` - The time at which the `TaskRun` finished executing, in [RFC3339](https:\/\/tools.ietf.org\/html\/rfc3339) format.\n  - [`taskSpec`](tasks.md#configuring-a-task) - `TaskSpec` defines the desired state of the `Task` executed via the `TaskRun`.\n\n- Optional:\n  - `results` - List of results written out by the `task`'s containers.\n\n  - `provenance` - Provenance contains metadata about resources used in the `TaskRun`, such as the source from where a remote `task` definition was fetched. 
It carries a minimal amount of metadata in the `TaskRun` `status` so that `Tekton Chains` can use it for provenance. Its two subfields are:\n    - `refSource`: the source from which a remote `Task` definition was fetched.\n    - `featureFlags`: identifies the feature flags used during the `TaskRun`.\n  - `steps` - Contains the `state` of each `step` container.\n    - `steps[].terminationReason` - When the step is terminated, it stores the step's final state.\n  - `retriesStatus` - Contains the history of the `TaskRun`'s `status` in case of a retry, in order to keep a record of failures. No `status` stored within `retriesStatus` contains any `date` fields, as they would be redundant.\n\n  - [`sidecars`](tasks.md#using-a-sidecar-in-a-task) - This field is a list with one entry per `sidecar` in the manifest. Each entry records the image ID of the corresponding sidecar.\n  - `spanContext` - Contains tracing span context fields.\n\n## Monitoring execution status\n\nAs your `TaskRun` executes, its `status` field accumulates information on the execution of each `Step`\nas well as the `TaskRun` as a whole.
This information includes start and stop times, exit codes, the\nfully-qualified name of the container image, and the corresponding digest.\n\n**Note:** If any `Pods` have been [`OOMKilled`](https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/out-of-resource\/)\nby Kubernetes, the `TaskRun` is marked as failed even if its exit code is 0.\n\nThe following example shows the `status` field of a `TaskRun` that has executed successfully:\n\n```yaml\ncompletionTime: \"2019-08-12T18:22:57Z\"\nconditions:\n  - lastTransitionTime: \"2019-08-12T18:22:57Z\"\n    message: All Steps have completed executing\n    reason: Succeeded\n    status: \"True\"\n    type: Succeeded\npodName: status-taskrun-pod\nstartTime: \"2019-08-12T18:22:51Z\"\nsteps:\n  - container: step-hello\n    imageID: docker-pullable:\/\/busybox@sha256:895ab622e92e18d6b461d671081757af7dbaa3b00e3e28e12505af7817f73649\n    name: hello\n    terminationReason: Completed\n    terminated:\n      containerID: docker:\/\/d5a54f5bbb8e7a6fd3bc7761b78410403244cf4c9c5822087fb0209bf59e3621\n      exitCode: 0\n      finishedAt: \"2019-08-12T18:22:56Z\"\n      reason: Completed\n      startedAt: \"2019-08-12T18:22:54Z\"\n```\n\nThe following table shows how to read the overall status of a `TaskRun`:\n\n| `status` | `reason`               | `message`                                                         | `completionTime` is set |                                                                                       Description |\n|:---------|:-----------------------|:------------------------------------------------------------------|:-----------------------:|--------------------------------------------------------------------------------------------------:|\n| Unknown  | Started                | n\/a                                                               |           No            |                                            The TaskRun has just been picked up by the controller. 
|\n| Unknown  | Pending                | n\/a                                                               |           No            |                                                The TaskRun is waiting on a Pod in status Pending. |\n| Unknown  | Running                | n\/a                                                               |           No            |                                   The TaskRun has been validated and started to perform its work. |\n| Unknown  | TaskRunCancelled       | n\/a                                                               |           No            |               The user requested the TaskRun to be cancelled. Cancellation has not been done yet. |\n| True     | Succeeded              | n\/a                                                               |           Yes           |                                                               The TaskRun completed successfully. |\n| False    | Failed                 | n\/a                                                               |           Yes           |                                               The TaskRun failed because one of the steps failed. |\n| False    | \\[Error message\\]      | n\/a                                                               |           No            | The TaskRun encountered a non-permanent error, and it's still running. It may ultimately succeed. |\n| False    | \\[Error message\\]      | n\/a                                                               |           Yes           |                                   The TaskRun failed with a permanent error (usually validation). |\n| False    | TaskRunCancelled       | n\/a                                                               |           Yes           |                                                           The TaskRun was cancelled successfully. |\n| False    | TaskRunCancelled       | TaskRun cancelled as the PipelineRun it belongs to has timed out. 
|           Yes           |                                      The TaskRun was cancelled because the PipelineRun timed out. |\n| False    | TaskRunTimeout         | n\/a                                                               |           Yes           |                                                                            The TaskRun timed out. |\n| False    | TaskRunImagePullFailed | n\/a                                                               |           Yes           |                      The TaskRun failed due to one of its steps not being able to pull the image. |\n| False    | FailureIgnored         | n\/a                                                               |           Yes           |                                                   The TaskRun failed but the failure was ignored. |\n\nWhen a `TaskRun` changes status, [events](events.md#taskruns) are triggered accordingly.\n\nThe name of the `Pod` owned by a `TaskRun` is uniquely associated with the owning resource.\nIf a `TaskRun` resource is deleted and created with the same name, the child `Pod` will be created with the same name\nas before. The base format of the name is `<taskrun-name>-pod`. The name may vary according to the logic of\n[`kmeta.ChildName`](https:\/\/pkg.go.dev\/github.com\/knative\/pkg\/kmeta#ChildName).
In case of retries of a `TaskRun`\ntriggered by the `PipelineRun` controller, the base format of the name is `<taskrun-name>-pod-retry<N>` starting from\nthe first retry.\n\nSome examples:\n\n| `TaskRun` Name                                                             | `Pod` Name                                                      |\n|----------------------------------------------------------------------------|-----------------------------------------------------------------|\n| task-run                                                                   | task-run-pod                                                    |\n| task-run-0123456789-0123456789-0123456789-0123456789-0123456789-0123456789 | task-run-0123456789-01234560d38957287bb0283c59440df14069f59-pod |\n\n\n### Monitoring `Steps`\n\nIf multiple `Steps` are defined in the `Task` invoked by the `TaskRun`, you can monitor their execution\nstatus in the `status.steps` field using the following command, where `<name>` is the name of the target\n`TaskRun`:\n\n```bash\nkubectl get taskrun <name> -o yaml\n```\n\nThe exact Task Spec used to instantiate the TaskRun is also included in the Status for full auditability.\n\n### Steps\n\nThe corresponding statuses appear in the `status.steps` list in the order in which the `Steps` have been\nspecified in the `Task` definition.\n\n### Monitoring `Results`\n\nIf one or more `results` fields have been specified in the invoked `Task`, the `TaskRun's` execution\nstatus will include a `Task Results` section, in which the `Results` appear verbatim, including original\nline returns and whitespace. 
For example:\n\n```yaml\nStatus:\n  # [\u2026]\n  Steps:\n  # [\u2026]\n  Task Results:\n    Name:   current-date-human-readable\n    Value:  Thu Jan 23 16:29:06 UTC 2020\n\n    Name:   current-date-unix-timestamp\n    Value:  1579796946\n\n```\n\n## Cancelling a `TaskRun`\n\nTo cancel a `TaskRun` that's currently executing, update its `spec.status` field to mark it as cancelled.\n\nWhen you cancel a `TaskRun`, the running pod associated with that `TaskRun` is deleted. This\nmeans that the logs of the `TaskRun` are not preserved. The deletion of the `TaskRun` pod is necessary\nin order to stop `TaskRun` step containers from running.\n\n**Note: if `keep-pod-on-cancel` is set to\n`\"true\"` in the `feature-flags` ConfigMap, the pod associated with that `TaskRun` will not be deleted.**\n\nExample of cancelling a `TaskRun`:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: go-example-git\nspec:\n  # [\u2026]\n  status: \"TaskRunCancelled\"\n```\n\n\n## Debugging a `TaskRun`\n\n### Breakpoint on Failure\n\n`TaskRuns` can be halted on failure for troubleshooting by providing the following spec patch:\n\n```yaml\nspec:\n  debug:\n    breakpoints:\n      onFailure: \"enabled\"\n```\n\n### Breakpoint before step\n\nIf you want to set a breakpoint before a step is executed, add the step's name to the `beforeSteps` field in the following way:\n\n```yaml\nspec:\n  debug:\n    breakpoints:\n      beforeSteps:\n        - <step-name>\n```\n\nUpon failure of a step, the TaskRun Pod execution is halted.
If this TaskRun Pod continues to run without any lifecycle\nchange done by the user (running the debug-continue or debug-fail-continue script), the TaskRun would be subject to\n[TaskRunTimeout](#configuring-the-failure-timeout).\nDuring this time, the user\/client can get remote shell access to the step container with a command such as the following.\n\n```bash\nkubectl exec -it print-date-d7tj5-pod -c step-print-date-human-readable -- sh\n```\n\n### Debug Environment\n\nAfter the user\/client has access to the container environment, they can investigate what caused\ntheir step to fail.\n\nTo control the lifecycle of the step, that is, to mark it as a success or a failure or to close the breakpoint, scripts\nare provided in the `\/tekton\/debug\/scripts` directory in the container. The scripts and the tasks they\nperform are:\n\n`debug-continue`: Mark the step as a success and exit the breakpoint.\n\n`debug-fail-continue`: Mark the step as a failure and exit the breakpoint.\n\n`debug-beforestep-continue`: Mark the step to continue execution.\n\n`debug-beforestep-fail-continue`: Mark the step to not continue execution.\n\n*More information on the inner workings of debug can be found in the [Debug documentation](debug.md)*\n\n## Code examples\n\nTo better understand `TaskRuns`, study the following code examples:\n\n- [Example `TaskRun` with a referenced `Task`](#example-taskrun-with-a-referenced-task)\n- [Example `TaskRun` with an embedded `Task`](#example-taskrun-with-an-embedded-task)\n- [Example of reusing a `Task`](#example-of-reusing-a-task)\n- [Example of Using custom `ServiceAccount` credentials](#example-of-using-custom-serviceaccount-credentials)\n- [Example of Running Step Containers as a Non Root User](#example-of-running-step-containers-as-a-non-root-user)\n\n### Example `TaskRun` with a referenced `Task`\n\nIn this example, a `TaskRun` named `read-repo-run` invokes and executes an existing\n`Task` named `read-task`.
This `Task` reads the repository from the\n\"input\" `workspace`.\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: read-task\nspec:\n  workspaces:\n  - name: input\n  steps:\n    - name: readme\n      image: ubuntu\n      script: cat $(workspaces.input.path)\/README.md\n---\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: read-repo-run\nspec:\n  taskRef:\n    name: read-task\n  workspaces:\n  - name: input\n    persistentVolumeClaim:\n      claimName: mypvc\n    subPath: my-subdir\n```\n\n### Example `TaskRun` with an embedded `Task`\n\nIn this example, a `TaskRun` named `build-push-task-run-2` directly executes\na `Task` from its definition embedded in the `TaskRun's` `taskSpec` field:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: build-push-task-run-2\nspec:\n  workspaces:\n  - name: source\n    persistentVolumeClaim:\n      claimName: my-pvc\n  taskSpec:\n    workspaces:\n    - name: source\n    steps:\n      - name: build-and-push\n        image: gcr.io\/kaniko-project\/executor:v0.17.1\n        workingDir: $(workspaces.source.path)\n        # specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential\n        env:\n          - name: \"DOCKER_CONFIG\"\n            value: \"\/tekton\/home\/.docker\/\"\n        command:\n          - \/kaniko\/executor\n        args:\n          - --destination=gcr.io\/my-project\/gohelloworld\n```\n\n### Example of Using custom `ServiceAccount` credentials\n\nThe example below illustrates how to specify a `ServiceAccount` to access a private `git` repository:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: test-task-with-serviceaccount-git-ssh\nspec:\n  serviceAccountName: test-task-robot-git-ssh\n  workspaces:\n  - name: source\n    persistentVolumeClaim:\n      claimName: repo-pvc\n  - name: ssh-creds\n    secret:\n      
secretName: test-git-ssh\n  params:\n    - name: url\n      value: https:\/\/github.com\/tektoncd\/pipeline.git\n  taskRef:\n    name: git-clone\n```\n\nIn the above code snippet, `serviceAccountName: test-task-robot-git-ssh` references the following\n`ServiceAccount`:\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: test-task-robot-git-ssh\nsecrets:\n  - name: test-git-ssh\n```\n\nAnd `secretName: test-git-ssh` references the following `Secret`:\n\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n  name: test-git-ssh\n  annotations:\n    tekton.dev\/git-0: github.com\ntype: kubernetes.io\/ssh-auth\ndata:\n  # Generated by:\n  # cat id_rsa | base64 -w 0\n  ssh-privatekey: LS0tLS1CRUdJTiBSU0EgUFJJVk.....[example]\n  # Generated by:\n  # ssh-keyscan github.com | base64 -w 0\n  known_hosts: Z2l0aHViLmNvbSBzc2g.....[example]\n```\n\n### Example of Running Step Containers as a Non Root User\n\nAll steps that do not need to run as a root user should make use of TaskRun features to\ndesignate that the container for a step runs as a user without root permissions. As a best practice,\nrunning containers as non-root should be built into the container image to avoid any possibility\nof the container being run as root.
However, as a further measure of enforcing this practice,\nTaskRun pod templates can be used to specify how containers should be run within a TaskRun pod.\n\nThe example below uses a TaskRun pod template to specify that containers running in this\nTaskRun's pod should run as non-root, as user 1001, if the container itself does not specify what\nuser to run as:\n\n```yaml\napiVersion: tekton.dev\/v1 # or tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  generateName: show-non-root-steps-run-\nspec:\n  taskRef:\n    name: show-non-root-steps\n  podTemplate:\n    securityContext:\n      runAsNonRoot: true\n      runAsUser: 1001\n```\n\nIf a Task step specifies that it is to run as a different user than what is specified in the pod template,\nthe step's `securityContext` will be applied instead of what is specified at the pod level. An example of\nthis is available as a [TaskRun example](..\/examples\/v1\/taskruns\/run-steps-as-non-root.yaml).\n\nMore information about Pod and Container Security Contexts can be found via the [Kubernetes website](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/#set-the-security-context-for-a-pod).\n\n---\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).
 Extra Parameters   extra parameters       Specifying  Resource  limits   specifying resource limits       Specifying Task level  ComputeResources    specifying task level computeresources       Specifying a  Pod  template   specifying a pod template       Specifying  Workspaces    specifying workspaces         Propagated Workspaces   propagated workspaces       Specifying  Sidecars    specifying sidecars       Configuring  Task   Steps  and  Sidecars  in a TaskRun   configuring task steps and sidecars in a taskrun       Specifying  LimitRange  values   specifying limitrange values       Specifying  Retries    specifying retries       Configuring the failure timeout   configuring the failure timeout       Specifying  ServiceAccount  credentials   specifying serviceaccount credentials      code TaskRun  code  status   taskrun status       The  code status  code  field   the status field     Monitoring execution status   monitoring execution status         Monitoring  Steps    monitoring steps         Steps   steps         Monitoring  Results    monitoring results     Cancelling a  TaskRun    cancelling a taskrun     Debugging a  TaskRun    debugging a taskrun         Breakpoint on Failure   breakpoint on failure         Debug Environment   debug environment     Events  events md taskruns     Running a TaskRun Hermetically  hermetic md     Code examples   code examples       Example  TaskRun  with a referenced  Task    example taskrun with a referenced task       Example  TaskRun  with an embedded  Task    example taskrun with an embedded task       Example of reusing a  Task    example of reusing a task       Example of Using custom  ServiceAccount  credentials   example of using custom serviceaccount credentials       Example of Running Step Containers as a Non Root User   example of running step containers as a non root user        toc         Overview  A  TaskRun  allows you to instantiate and execute a   Task   tasks md  on cluster  A  Task  specifies one or 
more  Steps  that execute container images and each container image performs a specific piece of build work  A  TaskRun  executes the  Steps  in the  Task  in the order they are specified until all  Steps  have executed successfully or a failure occurs      Configuring a  TaskRun   A  TaskRun  definition supports the following fields     Required        apiVersion   kubernetes overview    Specifies the API version  for example      tekton dev v1beta1         kind   kubernetes overview    Identifies this resource object as a  TaskRun  object        metadata   kubernetes overview    Specifies the metadata that uniquely identifies the      TaskRun   such as a  name         spec   kubernetes overview    Specifies the configuration for the  TaskRun           taskRef  or  taskSpec    specifying the target task    Specifies the  Tasks  that the      TaskRun  will execute    Optional        serviceAccountName    specifying serviceaccount credentials    Specifies a  ServiceAccount      object that provides custom credentials for executing the  TaskRun         params    specifying parameters    Specifies the desired execution parameters for the  Task         timeout    configuring the failure timeout    Specifies the timeout before the  TaskRun  fails        podTemplate    specifying a pod template    Specifies a   Pod  template  podtemplates md  to use as     the starting point for configuring the  Pods  for the  Task         workspaces    specifying workspaces    Specifies the physical volumes to use for the       Workspaces   workspaces md using workspaces in tasks  declared by a  Task         debug    debugging a taskrun   Specifies any breakpoints and debugging configuration for the  Task  execution        stepOverrides    overriding task steps and sidecars    Specifies configuration to use to override the  Task  s  Step s        sidecarOverrides    overriding task steps and sidecars    Specifies configuration to use to override the  Task  s  Sidecar s    kubernetes 
overview     https   kubernetes io docs concepts overview working with objects kubernetes objects  required fields      Specifying the target  Task   To specify the  Task  you want to execute in your  TaskRun   use the  taskRef  field as shown below      yaml spec    taskRef      name  read task      You can also embed the desired  Task  definition directly in the  TaskRun  using the  taskSpec  field      yaml spec    taskSpec      workspaces        name  source     steps          name  build and push         image  gcr io kaniko project executor v0 17 1           specifying DOCKER CONFIG is required to allow kaniko to detect docker credential         workingDir    workspaces source path          env              name   DOCKER CONFIG              value    tekton home  docker           command               kaniko executor         args                destination gcr io my project gohelloworld           Tekton Bundles  A  Tekton Bundle  is an OCI artifact that contains Tekton resources like  Tasks  which can be referenced within a  taskRef    You can reference a  Tekton bundle  in a  TaskRef  in both  v1  and  v1beta1  using  remote resolution    bundle resolver md pipeline resolution   The example syntax shown below for  v1  uses remote resolution and requires enabling  beta features    additional configs md beta features       yaml spec    taskRef      resolver  bundles     params        name  bundle       value  docker io myrepo mycatalog       name  name       value  echo task       name  kind       value  Task      You may also specify a  tag  as you would with a Docker image which will give you a repeatable reference to a  Task       yaml spec    taskRef      resolver  bundles     params        name  bundle       value  docker io myrepo mycatalog v1 0 1       name  name       value  echo task       name  kind       value  Task      You may also specify a fixed digest instead of a tag which ensures the referenced task is constant      yaml spec    taskRef      
resolver  bundles     params        name  bundle       value  docker io myrepo mycatalog sha256 abc123       name  name       value  echo task       name  kind       value  Task      A working example can be found  here     examples v1beta1 taskruns no ci tekton bundles yaml    Any of the above options will fetch the image using the  ImagePullSecrets  attached to the  ServiceAccount  specified in the  TaskRun   See the  Service Account   service account  section for details on how to configure a  ServiceAccount  on a  TaskRun   The  TaskRun  will then run that  Task  without registering it in the cluster allowing multiple versions of the same named  Task  to be run at once    Tekton Bundles  may be constructed with any toolsets that produces valid OCI image artifacts so long as the artifact adheres to the  contract  tekton bundle contracts md   Additionally  you may also use the  tkn  cli   coming soon          Remote Tasks      beta feature  https   github com tektoncd pipeline blob main docs install md beta features      A  taskRef  field may specify a Task in a remote location such as git  Support for specific types of remote will depend on the Resolvers your cluster s operator has installed  For more information including a tutorial  please check  resolution docs  resolution md   The below example demonstrates referencing a Task in git      yaml spec    taskRef      resolver  git     params        name  url       value  https   github com tektoncd catalog git       name  revision       value  abc123       name  pathInRepo       value   task golang build 0 3 golang build yaml          Specifying  Parameters   If a  Task  has   parameters   tasks md specifying parameters   you can use the  params  field to specify their values      yaml spec    params        name  flags       value   someflag        Note    If a parameter does not have an implicit default value  you must explicitly set its value        Propagated Parameters  When using an inlined  taskSpec   
parameters from the parent  TaskRun  will be available to the  Task  without needing to be explicitly defined      yaml apiVersion  tekton dev v1   or tekton dev v1beta1 kind  TaskRun metadata    generateName  hello  spec    params        name  message       value   hello world     taskSpec        There are no explicit params defined here        They are derived from the TaskRun params above      steps        name  default       image  ubuntu       script            echo   params message       On executing the task run  the parameters will be interpolated during resolution  The specifications are not mutated before storage and so it remains the same  The status is updated      yaml kind  TaskRun metadata    name  hello dlqm9       spec    params      name  message     value  hello world    serviceAccountName  default   taskSpec      steps        image  ubuntu       name  default       script            echo   params message  status    conditions      lastTransitionTime   2022 05 20T15 24 41Z      message  All Steps have completed executing     reason  Succeeded     status   True      type  Succeeded         steps      container  step default           taskSpec      steps        image  ubuntu       name  default       script            echo  hello world             Propagated Object Parameters When using an inlined  taskSpec   object parameters from the parent  TaskRun  will be available to the  Task  without needing to be explicitly defined      Note    If an object parameter is being defined explicitly then you must define the spec of the object in  Properties       yaml apiVersion  tekton dev v1   or tekton dev v1beta1 kind  TaskRun metadata    generateName  object param result  spec    params      name  gitrepo     value        commit  sha123       url  xyz com   taskSpec      steps        name  echo object params       image  bash       args          echo           url   params gitrepo url            commit   params gitrepo commit       On executing the task 
run  the object parameters will be interpolated during resolution  The specifications are not mutated before storage and so it remains the same  The status is updated      yaml apiVersion  tekton dev v1   or tekton dev v1beta1 kind  TaskRun metadata    name  object param result vlnmb       spec    params      name  gitrepo     value        commit  sha123       url  xyz com   serviceAccountName  default   taskSpec      steps        args          echo           url   params gitrepo url            commit   params gitrepo commit        image  bash       name  echo object params status    completionTime   2022 09 08T17 09 37Z    conditions      lastTransitionTime   2022 09 08T17 09 37Z      message  All Steps have completed executing     reason  Succeeded     status   True      type  Succeeded           steps      container  step echo object params           taskSpec      steps        args          echo           url xyz com           commit sha123       image  bash       name  echo object params           Extra Parameters      alpha only  https   github com tektoncd pipeline blob main docs additional configs md alpha features      You can pass in extra  Parameters  if needed depending on your use cases  An example use case is when your CI system autogenerates  TaskRuns  and it has  Parameters  it wants to provide to all  TaskRuns   Because you can pass in extra  Parameters   you don t have to go through the complexity of checking each  Task  and providing only the required params        Parameter Enums     seedling     enum  is an  alpha  additional configs md alpha features  feature    The  enable param enum  feature flag must be set to   true   to enable this feature   If a  Parameter  is guarded by  Enum  in the  Task   you can only provide  Parameter  values in the  TaskRun  that are predefined in the  Param Enum  in the  Task   The  TaskRun  will fail with reason  InvalidParamValue  otherwise   You can also specify  Enum  for   TaskRun  with an embedded  Task    
[example TaskRun with an embedded Task](#example-taskrun-with-an-embedded-task). The same param validation will be executed in this scenario. See more details in [Param Enum](tasks.md#param-enum).

## Specifying `Resource` limits

Each Step in a Task can specify its resource requirements. See
[Defining `Steps`](tasks.md#defining-steps). Resource requirements defined in Steps and Sidecars may be overridden by a TaskRun's StepSpecs and SidecarSpecs.

## Specifying Task-level `ComputeResources`

([beta only](https://github.com/tektoncd/pipeline/blob/main/docs/additional-configs.md#beta-features))

Task-level compute resources can be configured in `TaskRun.ComputeResources` or `PipelineRun.TaskRunSpecs.ComputeResources`.

e.g.

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: task
spec:
  steps:
    - name: foo
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: taskrun
spec:
  taskRef:
    name: task
  computeResources:
    requests:
      cpu: 1
    limits:
      cpu: 2
```

Further details and examples can be found in [Compute Resources in Tekton](https://github.com/tektoncd/pipeline/blob/main/docs/compute-resources.md).

## Specifying a `Pod` template

You can specify a [`Pod` template](podtemplates.md) configuration that will serve as the configuration starting point for the `Pod` in which the container images specified in your `Task` will execute. This allows you to customize the `Pod` configuration specifically for that `TaskRun`.

In the following example, the `Task` specifies a `volumeMount` (`my-cache`) object, also provided by the `TaskRun`, using a `PersistentVolumeClaim` volume. A specific scheduler is also configured in the `SchedulerName` field. The `Pod` executes with regular (non-root) user permissions.

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: mytask
  namespace: default
spec:
  steps:
    - name: writesomething
      image: ubuntu
      command: ["bash", "-c"]
      args: ["echo 'foo' > /my-cache/bar"]
      volumeMounts:
        - name: my-cache
          mountPath: /my-cache
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: mytaskrun
  namespace: default
spec:
  taskRef:
    name: mytask
  podTemplate:
    schedulerName: volcano
    securityContext:
      runAsNonRoot: true
      runAsUser: 1001
    volumes:
      - name: my-cache
        persistentVolumeClaim:
          claimName: my-volume-claim
```

## Specifying `Workspaces`

If a `Task` specifies one or more `Workspaces`, you must map those `Workspaces` to the corresponding physical volumes in your `TaskRun` definition. For example, you can map a `PersistentVolumeClaim` volume to a `Workspace` as follows:

```yaml
workspaces:
  - name: myworkspace # must match workspace name in the Task
    persistentVolumeClaim:
      claimName: mypvc # this PVC must already exist
    subPath: my-subdir
```

For more information, see the following topics:

- For information on mapping `Workspaces` to `Volumes`, see [Using `Workspace` variables in `TaskRuns`](workspaces.md#using-workspace-variables-in-taskruns).
- For a list of supported `Volume` types, see [Specifying `VolumeSources` in `Workspaces`](workspaces.md#specifying-volumesources-in-workspaces).
- For an end-to-end example, see [`Workspaces` in a `TaskRun`](../examples/v1/taskruns/workspace.yaml).

### Propagated Workspaces

When using an embedded spec, workspaces from the parent `TaskRun` will be propagated to any inlined specs without needing to be explicitly defined. This allows authors to simplify specs by automatically propagating top-level workspaces down to other inlined resources.

**Workspace substitutions will only be made for `commands`, `args`, and `script` fields of `steps`, `stepTemplates`, and `sidecars`.**

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: propagating-workspaces-
spec:
  taskSpec:
    steps:
      - name: simple-step
        image: ubuntu
        command:
          - echo
        args:
          - $(workspaces.tr-workspace.path)
  workspaces:
    - emptyDir: {}
      name: tr-workspace
```

Upon execution, the workspaces will be interpolated during resolution through to the `taskSpec`:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: propagating-workspaces-ndxnc
  ...
spec:
  ...
status:
  ...
  taskSpec:
    steps:
      - ...
        workspaces:
          - name: tr-workspace
```

#### Propagating Workspaces to Referenced Tasks

Workspaces can only be propagated to *embedded* task specs, not *referenced* Tasks:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: workspace-propagation
spec:
  steps:
    - name: simple-step
      image: ubuntu
      command:
        - echo
      args:
        - $(workspaces.tr-workspace.path)
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: propagating-workspaces-
spec:
  taskRef:
    name: workspace-propagation
  workspaces:
    - emptyDir: {}
      name: tr-workspace
```

Upon execution, the above `TaskRun` will fail because the `Task` is referenced and the workspace is not propagated. It must be explicitly defined in the `spec` of the defined `Task`:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  ...
spec:
  taskRef:
    kind: Task
    name: workspace-propagation
  workspaces:
    - emptyDir: {}
      name: tr-workspace
status:
  conditions:
    - lastTransitionTime: "2022-09-13T15:12:35Z"
      message: 'workspace binding "tr-workspace" does not match any declared workspace'
      reason: TaskRunValidationFailed
      status: "False"
      type: Succeeded
```
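The failing run above can be made valid by declaring the workspace in the referenced `Task` itself. A minimal sketch, reusing the `workspace-propagation` names from the example (not a verbatim snippet from the Tekton docs):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: workspace-propagation
spec:
  # Explicitly declaring the workspace makes the TaskRun's
  # "tr-workspace" binding match a declared workspace.
  workspaces:
    - name: tr-workspace
  steps:
    - name: simple-step
      image: ubuntu
      command:
        - echo
      args:
        - $(workspaces.tr-workspace.path)
```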
## Specifying `Sidecars`

A `Sidecar` is a container that runs alongside the containers specified in the `Steps` of a task to provide auxiliary support to the execution of those `Steps`. For example, a `Sidecar` can run a logging daemon, a service that updates files on a shared volume, or a network proxy.

Tekton supports the injection of `Sidecars` into a `Pod` belonging to a `TaskRun`, with the condition that each `Sidecar` running inside the `Pod` is terminated as soon as all `Steps` in the `Task` complete execution. This might result in the `Pod` including each affected `Sidecar` with a retry count of 1 and a different container image than expected.

We are aware of the following issues affecting Tekton's implementation of `Sidecars`:

- The configured `nop` image **must not** provide the command that the `Sidecar` is expected to run, otherwise it will not exit, resulting in the `Sidecar` running forever and the Task eventually timing out. For more information, see the [associated issue](https://github.com/tektoncd/pipeline/issues/1347).

- The `kubectl get pods` command returns the status of the `Pod` as `Completed` if a `Sidecar` exits successfully and as `Error` if a `Sidecar` exits with an error, disregarding the exit codes of the container images that actually executed the `Steps` inside the `Pod`. Only the above command is affected. The `Pod`'s description correctly denotes a `Failed` status and the container statuses correctly denote their exit codes and reasons.

## Configuring Task Steps and Sidecars in a TaskRun

([beta only](https://github.com/tektoncd/pipeline/blob/main/docs/additional-configs.md#beta-features))

A TaskRun can specify `StepSpecs` or `SidecarSpecs` to configure a Step or Sidecar specified in a Task. Only named Steps and Sidecars may be configured.

For example, given the following Task definition:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: image-build-task
spec:
  steps:
    - name: build
      image: gcr.io/kaniko-project/executor:latest
  sidecars:
    - name: logging
      image: my-logging-image
```

An example TaskRun definition could look like:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: image-build-taskrun
spec:
  taskRef:
    name: image-build-task
  stepSpecs:
    - name: build
      computeResources:
        requests:
          memory: 1Gi
  sidecarSpecs:
    - name: logging
      computeResources:
        requests:
          cpu: 100m
        limits:
          cpu: 500m
```

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: image-build-taskrun
spec:
  taskRef:
    name: image-build-task
  stepOverrides:
    - name: build
      resources:
        requests:
          memory: 1Gi
  sidecarOverrides:
    - name: logging
      resources:
        requests:
          cpu: 100m
        limits:
          cpu: 500m
```

`StepSpecs` and `SidecarSpecs` must include the `name` field and may include `resources`. No other fields can be overridden.

If the overridden `Task` uses a [`StepTemplate`](tasks.md#specifying-a-step-template), configuration on `Step` will take precedence over configuration in `StepTemplate`, and configuration in `StepSpec` will take precedence over both.

When merging resource requirements, different resource types are considered independently. For example, if a `Step` configures both CPU and memory, and a `StepSpec` configures only memory, the CPU values from the `Step` will be preserved. Requests and limits are also considered independently. For example, if a `Step` configures a memory request and limit, and a `StepSpec` configures only a memory request, the memory limit from the `Step` will be preserved.

## Specifying `LimitRange` values

In order to only consume the bare minimum amount of resources needed to execute one `Step` at a time from the invoked `Task`, Tekton will request the compute values for CPU, memory, and ephemeral storage for each `Step` based on the [`LimitRange`](https://kubernetes.io/docs/concepts/policy/limit-range/) object(s), if present. Any `Request` or `Limit` specified by the user (on `Task` for example) will be left unchanged.

For more information, see the [`LimitRange` support in Pipeline](compute-resources.md#limitrange-support).
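For illustration, a namespace-level `LimitRange` like the following is the kind of object Tekton consults when computing `Step` resource requests. This is a standard Kubernetes object, not Tekton-specific, and the name and values here are hypothetical:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-example # hypothetical name
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest: # applied to containers that declare no requests
        cpu: 100m
        memory: 128Mi
      default: # applied to containers that declare no limits
        cpu: 500m
        memory: 256Mi
```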
## Specifying `Retries`

You can use the `retries` field to set how many times you want to retry on a failed TaskRun. All TaskRun failures are retriable except for `Cancellation`.

For a retriable `TaskRun`, when an error occurs:

- The error status is archived in `status.RetriesStatus`.
- The `Succeeded` condition in `status` is updated:

  ```yaml
  Type: Succeeded
  Status: Unknown
  Reason: ToBeRetried
  ```

- `status.StartTime`, `status.PodName`, and `status.Results` are unset to trigger another retry attempt.

## Configuring the failure timeout

You can use the `timeout` field to set the `TaskRun`'s desired timeout value for **each retry attempt**. If you do not specify this value, the global default timeout value applies (the same, to **each retry attempt**). If you set the timeout to 0, the `TaskRun` will have no timeout and will run until it completes successfully or fails from an error.

The `timeout` value is a `duration` conforming to Go's [`ParseDuration`](https://golang.org/pkg/time/#ParseDuration) format. For example, valid values are `1h30m`, `1h`, `1m`, `60s`, and `0`.

If a `TaskRun` runs longer than its timeout value, the pod associated with the `TaskRun` will be deleted. This means that the logs of the `TaskRun` are not preserved. The deletion of the `TaskRun` pod is necessary in order to stop `TaskRun` step containers from running.

The global default timeout is set to 60 minutes when you first install Tekton. You can set a different global default timeout value using the `default-timeout-minutes` field in [`config/config-defaults.yaml`](./../config/config-defaults.yaml). If you set the global timeout to 0, all `TaskRuns` that do not have a timeout set will have no timeout and will run until they complete successfully or fail from an error.
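Putting the `retries` and `timeout` fields together, a minimal sketch might look like the following; the `TaskRun` and `Task` names are hypothetical:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: retried-taskrun # hypothetical name
spec:
  retries: 3     # retry a failed run up to 3 times
  timeout: 1h30m # applies to each retry attempt individually
  taskRef:
    name: flaky-task # hypothetical Task
```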
**Note:** An internal detail of the `PipelineRun` and `TaskRun` reconcilers in the Tekton controller is that they will requeue a `PipelineRun` or `TaskRun` for re-evaluation (versus waiting for the next update) under certain conditions. The wait time for that re-queueing is the elapsed time subtracted from the timeout; however, if the timeout is set to `0`, that calculation produces a negative number, and the new reconciliation event will fire immediately. This can impact overall performance, which is counter to the intent of the wait time calculation. So instead, the reconcilers will use the configured global timeout as the wait time when the associated timeout has been set to `0`.

## Specifying `ServiceAccount` credentials

You can execute the `Task` in your `TaskRun` with a specific set of credentials by specifying a `ServiceAccount` object name in the `serviceAccountName` field in your `TaskRun` definition. If you do not explicitly specify this, the `TaskRun` executes with the credentials specified in the `configmap-defaults` `ConfigMap`. If this default is not specified, `TaskRuns` will execute with the [`default` service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) set for the target [`namespace`](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).

For more information, see [`ServiceAccount`](auth.md).

## `TaskRun` status

The `status` field defines the observed state of the `TaskRun`.

### The `status` field

- **Required:**
  - `status` - The most relevant information about the TaskRun's state. This field includes:
    <!-- wokeignore:rule=master -->
    - `status.conditions` - contains the latest observations of the `TaskRun`'s state. [See here](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) for information on typical status properties.
    - `podName` - name of the pod containing the containers responsible for executing this `task`'s `step`s.
    - `startTime` - the time at which the `TaskRun` began executing; conforms to [RFC3339](https://tools.ietf.org/html/rfc3339) format.
    - `completionTime` - the time at which the `TaskRun` finished executing; conforms to [RFC3339](https://tools.ietf.org/html/rfc3339) format.
    - [`taskSpec`](tasks.md#configuring-a-task) - `TaskSpec` defines the desired state of the `Task` executed via the `TaskRun`.
- **Optional:**
  - `results` - list of results written out by the `task`'s containers.
  - `provenance` - contains metadata about resources used in the `TaskRun`, such as the source from where a remote `task` definition was fetched. It carries a minimum amount of metadata in `TaskRun` `status` so that Tekton Chains can utilize it for provenance. Its two subfields are:
    - `refSource`: the source from where a remote `Task` definition was fetched.
    - `featureFlags`: identifies the feature flags used during the `TaskRun`.
  - `steps` - contains the `state` of each `step` container.
    - `steps.terminationReason` - when the step is terminated, it stores the step's final state.
  - `retriesStatus` - contains the history of the `TaskRun`'s `status` in case of a retry, in order to keep a record of failures. No `status` stored within `retriesStatus` will have any `date` within, as it is redundant.
  - [`sidecars`](tasks.md#using-a-sidecar-in-a-task) - this field is a list. The list has one entry per `sidecar` in the manifest. Each entry represents the imageid of the corresponding sidecar.
  - `spanContext` - contains tracing span context fields.

### Monitoring execution status

As your `TaskRun` executes, its `status` field accumulates information on the execution of each `Step` as well as the `TaskRun` as a whole. This information includes start and stop times, exit codes, the fully-qualified name of the container image, and the corresponding digest.

**Note:** If any `Pods` have been [`OOMKilled`](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/) by Kubernetes, the `TaskRun` is marked as failed even if its exit code is 0.
The following example shows the `status` field of a `TaskRun` that has executed successfully:

```yaml
completionTime: "2019-08-12T18:22:57Z"
conditions:
  - lastTransitionTime: "2019-08-12T18:22:57Z"
    message: All Steps have completed executing
    reason: Succeeded
    status: "True"
    type: Succeeded
podName: status-taskrun-pod
startTime: "2019-08-12T18:22:51Z"
steps:
  - container: step-hello
    imageID: docker-pullable://busybox@sha256:895ab622e92e18d6b461d671081757af7dbaa3b00e3e28e12505af7817f73649
    name: hello
    terminationReason: Completed
    terminated:
      containerID: docker://d5a54f5bbb8e7a6fd3bc7761b78410403244cf4c9c5822087fb0209bf59e3621
      exitCode: 0
      finishedAt: "2019-08-12T18:22:56Z"
      reason: Completed
      startedAt: "2019-08-12T18:22:54Z"
```

The following table shows how to read the overall status of a `TaskRun`:

| `status` | `reason` | `message` | `completionTime` is set | Description |
|:---------|:---------|:----------|:-----------------------:|:------------|
| Unknown | Started | n/a | No | The TaskRun has just been picked up by the controller. |
| Unknown | Pending | n/a | No | The TaskRun is waiting on a Pod in status Pending. |
| Unknown | Running | n/a | No | The TaskRun has been validated and started to perform its work. |
| Unknown | TaskRunCancelled | n/a | No | The user requested the TaskRun to be cancelled. Cancellation has not been done yet. |
| True | Succeeded | n/a | Yes | The TaskRun completed successfully. |
| False | Failed | n/a | Yes | The TaskRun failed because one of the steps failed. |
| False | \[Error message\] | n/a | No | The TaskRun encountered a non-permanent error, and it's still running. It may ultimately succeed. |
| False | \[Error message\] | n/a | Yes | The TaskRun failed with a permanent error (usually validation). |
| False | TaskRunCancelled | n/a | Yes | The TaskRun was cancelled successfully. |
| False | TaskRunCancelled | TaskRun cancelled as the PipelineRun it belongs to has timed out | Yes | The TaskRun was cancelled because the PipelineRun timed out. |
| False | TaskRunTimeout | n/a | Yes | The TaskRun timed out. |
| False | TaskRunImagePullFailed | n/a | Yes | The TaskRun failed due to one of its steps not being able to pull the image. |
| False | FailureIgnored | n/a | Yes | The TaskRun failed but the failure was ignored. |

When a `TaskRun` changes status, [events](events.md#taskruns) are triggered accordingly.

The name of the `Pod` owned by a `TaskRun` is univocally associated to the owning resource. If a `TaskRun` resource is deleted and created with the same name, the child `Pod` will be created with the same name as before. The base format of the name is `<taskrun-name>-pod`. The name may vary according to the logic of [`kmeta.ChildName`](https://pkg.go.dev/github.com/knative/pkg/kmeta#ChildName). In case of retries of a `TaskRun` triggered by the `PipelineRun` controller, the base format of the name is `<taskrun-name>-pod-retry<N>`, starting from the first retry.

Some examples:

| `TaskRun` Name | `Pod` Name |
|:---------------|:-----------|
| task-run | task-run-pod |
| task-run-0123456789-0123456789-0123456789-0123456789-0123456789-0123456789 | task-run-0123456789-01234560d38957287bb0283c59440df14069f59-pod |

### Monitoring `Steps`

If multiple `Steps` are defined in the `Task` invoked by the `TaskRun`, you can monitor their execution status in the `status.steps` field using the following command, where `<name>` is the name of the target `TaskRun`:

```bash
kubectl get taskrun <name> -o yaml
```

The exact Task Spec used to instantiate the TaskRun is also included in the Status for full auditability.
#### Steps

The corresponding statuses appear in the `status.steps` list in the order in which the `Steps` have been specified in the `Task` definition.

#### Monitoring `Results`

If one or more `results` fields have been specified in the invoked `Task`, the `TaskRun`'s execution status will include a `Task Results` section, in which the `Results` appear verbatim, including original line returns and whitespace. For example:

```yaml
Status:
  # …
  Steps:
  # …
  Task Results:
    Name:   current-date-human-readable
    Value:  Thu Jan 23 16:29:06 UTC 2020
    Name:   current-date-unix-timestamp
    Value:  1579796946
```

## Cancelling a `TaskRun`

To cancel a `TaskRun` that's currently executing, update its status to mark it as cancelled.

When you cancel a TaskRun, the running pod associated with that `TaskRun` is deleted. This means that the logs of the `TaskRun` are not preserved. The deletion of the `TaskRun` pod is necessary in order to stop `TaskRun` step containers from running.

**Note:** if `keep-pod-on-cancel` is set to `"true"` in the `feature-flags`, the pod associated with that `TaskRun` will not be deleted.

Example of cancelling a `TaskRun`:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: go-example-git
spec:
  # […]
  status: "TaskRunCancelled"
```

## Debugging a `TaskRun`

### Breakpoint on Failure

TaskRuns can be halted on failure for troubleshooting by providing the following spec patch:

```yaml
spec:
  debug:
    breakpoints:
      onFailure: "enabled"
```

### Breakpoint before step

If you want to set a breakpoint before the step is executed, you can add the step name to the `beforeSteps` field in the following way:

```yaml
spec:
  debug:
    breakpoints:
      beforeSteps:
        - <step-name>
```

Upon failure of a step, the TaskRun Pod execution is halted.
If this TaskRun Pod continues to run without any lifecycle change done by the user (running the debug-continue or debug-fail-continue script), the TaskRun would be subject to [TaskRunTimeout](#configuring-the-failure-timeout).

During this time, the user/client can get remote shell access to the step container with a command such as the following:

```bash
kubectl exec -it print-date-d7tj5-pod -c step-print-date-human-readable sh
```

### Debug Environment

After the user/client has access to the container environment, they can scour for any missing parts because of which their step might have failed.

To control the lifecycle of the step, that is, to mark it as a success or a failure or to close the breakpoint, there are scripts provided in the `/tekton/debug/scripts` directory in the container. The following are the scripts and the tasks they perform:

- `debug-continue`: marks the step as a success and exits the breakpoint.
- `debug-fail-continue`: marks the step as a failure and exits the breakpoint.
- `debug-beforestep-continue`: marks the step to continue execution.
- `debug-beforestep-fail-continue`: marks the step to not continue execution.

More information on the inner workings of debug can be found in the [Debug documentation](debug.md).

## Code examples

To better understand `TaskRuns`, study the following code examples:

- [Example `TaskRun` with a referenced `Task`](#example-taskrun-with-a-referenced-task)
- [Example `TaskRun` with an embedded `Task`](#example-taskrun-with-an-embedded-task)
- [Example of reusing a `Task`](#example-of-reusing-a-task)
- [Example of using custom `ServiceAccount` credentials](#example-of-using-custom-serviceaccount-credentials)
- [Example of running Step containers as a non-root user](#example-of-running-step-containers-as-a-non-root-user)

### Example `TaskRun` with a referenced `Task`

In this example, a `TaskRun` named `read-repo-run` invokes and executes an existing `Task` named `read-task`. This `Task` reads the repository from the `input` workspace:
```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: Task
metadata:
  name: read-task
spec:
  workspaces:
    - name: input
  steps:
    - name: readme
      image: ubuntu
      script: cat $(workspaces.input.path)/README.md
---
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: read-repo-run
spec:
  taskRef:
    name: read-task
  workspaces:
    - name: input
      persistentVolumeClaim:
        claimName: mypvc
      subPath: my-subdir
```

### Example `TaskRun` with an embedded `Task`

In this example, a `TaskRun` named `build-push-task-run-2` directly executes a `Task` from its definition embedded in the `TaskRun`'s `taskSpec` field:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-push-task-run-2
spec:
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: my-pvc
  taskSpec:
    workspaces:
      - name: source
    steps:
      - name: build-and-push
        image: gcr.io/kaniko-project/executor:v0.17.1
        workingDir: $(workspaces.source.path)
        # specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential
        env:
          - name: "DOCKER_CONFIG"
            value: "/tekton/home/.docker/"
        command:
          - /kaniko/executor
        args:
          - --destination=gcr.io/my-project/gohelloworld
```

### Example of using custom `ServiceAccount` credentials

The example below illustrates how to specify a `ServiceAccount` to access a private `git` repository:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: test-task-with-serviceaccount-git-ssh
spec:
  serviceAccountName: test-task-robot-git-ssh
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: repo-pvc
    - name: ssh-creds
      secret:
        secretName: test-git-ssh
  params:
    - name: url
      value: https://github.com/tektoncd/pipeline.git
  taskRef:
    name: git-clone
```

In the above code snippet, `serviceAccountName: test-task-robot-git-ssh` references the following `ServiceAccount`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-task-robot-git-ssh
secrets:
  - name: test-git-ssh
```

And `secretName: test-git-ssh` references the following `Secret`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-git-ssh
  annotations:
    tekton.dev/git-0: github.com
type: kubernetes.io/ssh-auth
data:
  # Generated by:
  # cat id_rsa | base64 -w 0
  ssh-privatekey: LS0tLS1CRUdJTiBSU0EgUFJJVk... # example
  # Generated by:
  # ssh-keyscan github.com | base64 -w 0
  known_hosts: Z2l0aHViLmNvbSBzc2g... # example
```

### Example of running Step containers as a non-root user

All steps that do not require to be run as a root user should make use of TaskRun features to designate that the container for a step runs as a user without root permissions. As a best practice, running containers as non-root should be built into the container image to avoid any possibility of the container being run as root. However, as a further measure of enforcing this practice, TaskRun pod templates can be used to specify how containers should be run within a TaskRun pod.

An example of using a TaskRun pod template is shown below to specify that containers running via this TaskRun's pod should run as non-root and run as user 1001 if the container itself does not specify what user to run as:

```yaml
apiVersion: tekton.dev/v1 # or tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: show-non-root-steps-run-
spec:
  taskRef:
    name: show-non-root-steps
  podTemplate:
    securityContext:
      runAsNonRoot: true
      runAsUser: 1001
```

If a Task step specifies that it is to run as a different user than what is specified in the pod template, the step's `securityContext` will be applied instead of what is specified at the pod level. An example of this is available as a [`TaskRun` example](../examples/v1/taskruns/run-steps-as-non-root.yaml).
More information about Pod and Container Security Contexts can be found via the [Kubernetes website](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod).

---

Except as otherwise noted, the content of this page is licensed under the [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/), and code samples are licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
{"questions":"tekton Events in Tekton weight 302 Tekton s controllers emits Events","answers":"<!--\n---\nlinkTitle: \"Events\"\nweight: 302\n---\n-->\n\n# Events in Tekton\n\nTekton's controllers emits [Kubernetes events](https:\/\/kubernetes.io\/docs\/reference\/generated\/kubernetes-api\/v1.18\/#event-v1-core)\nwhen `TaskRuns` and `PipelineRuns` execute. This allows you to monitor and react to what's happening during execution by\nretrieving those events using the `kubectl describe` command. Tekton can also emit [CloudEvents](https:\/\/github.com\/cloudevents\/spec).\n\n**Note:** `Conditions` [do not emit events](https:\/\/github.com\/tektoncd\/pipeline\/issues\/2461)\nbut the underlying `TaskRun` do.\n\n## Events in `TaskRuns`\n\n`TaskRuns` emit events for the following `Reasons`:\n\n- `Started`: emitted the first time the `TaskRun` is picked by the\n  reconciler from its work queue, so it only happens if webhook validation was\n  successful. This event in itself does not indicate that a `Step` is executing;\n  the `Step` executes once the following conditions are satisfied:\n  - Validation of the `Task` and  its associated resources must succeed, and\n  - Checks for associated `Conditions` must succeed, and\n  - Scheduling of the associated `Pod` must succeed.\n- `Succeeded`: emitted once all steps in the `TaskRun` have executed successfully,\n   including post-steps injected by Tekton.\n- `Failed`: emitted if the `TaskRun` finishes running unsuccessfully because a `Step` failed,\n   or the `TaskRun` timed out or was cancelled. A `TaskRun` also emits `Failed` events\n   if it cannot execute at all due to failing validation.\n\n## Events in `PipelineRuns`\n\n`PipelineRuns` emit events for the following `Reasons`:\n\n- `Started`: emitted the first time the `PipelineRun` is picked by the\n  reconciler from its work queue, so it only happens if webhook validation was\n  successful. 
This event in itself does not indicate that a `Step` is executing;\n  the `Step` executes once validation for the `Pipeline` as well as all associated `Tasks`\n  and `Resources` is successful.\n- `Running`: emitted when the `PipelineRun` passes validation and\n  actually begins execution.\n- `Succeeded`: emitted once all `Tasks` reachable via the DAG have\n  executed successfully.\n- `Failed`: emitted if the `PipelineRun` finishes running unsuccessfully because a `Task` failed or the\n  `PipelineRun` timed out or was cancelled. A `PipelineRun` also emits `Failed` events if it cannot\n  execute at all due to failing validation.\n\n# Events via `CloudEvents`\n\nWhen you [configure a sink](.\/additional-configs.md#configuring-cloudevents-notifications), Tekton emits\nevents as described in the table below.\n\nTekton sends cloud events in a parallel routine to allow for retries without blocking the\nreconciler. A routine is started every time the `Succeeded` condition changes - either state,\nreason or message. 
Retries are sent using an exponential back-off strategy.\nBecause of retries, events are not guaranteed to be sent to the target sink in the order they happened.\n\nResource      |Event    |Event Type\n:-------------|:-------:|:----------------------------------------------------------\n`TaskRun`     | `Started` | `dev.tekton.event.taskrun.started.v1`\n`TaskRun`     | `Running` | `dev.tekton.event.taskrun.running.v1`\n`TaskRun`     | `Condition Change while Running` | `dev.tekton.event.taskrun.unknown.v1`\n`TaskRun`     | `Succeed` | `dev.tekton.event.taskrun.successful.v1`\n`TaskRun`     | `Failed`  | `dev.tekton.event.taskrun.failed.v1`\n`PipelineRun` | `Started` | `dev.tekton.event.pipelinerun.started.v1`\n`PipelineRun` | `Running` | `dev.tekton.event.pipelinerun.running.v1`\n`PipelineRun` | `Condition Change while Running` | `dev.tekton.event.pipelinerun.unknown.v1`\n`PipelineRun` | `Succeed` | `dev.tekton.event.pipelinerun.successful.v1`\n`PipelineRun` | `Failed`  | `dev.tekton.event.pipelinerun.failed.v1`\n`Run`         | `Started` | `dev.tekton.event.run.started.v1`\n`Run`         | `Running` | `dev.tekton.event.run.running.v1`\n`Run`         | `Succeed` | `dev.tekton.event.run.successful.v1`\n`Run`         | `Failed`  | `dev.tekton.event.run.failed.v1`\n\n`CloudEvents` for `Runs` are only sent when enabled in the [configuration](.\/additional-configs.md#configuring-cloudevents-notifications).\n\n**Note**: `CloudEvents` for `Runs` rely on an ephemeral cache to avoid duplicate\nevents. In case of controller restart, the cache is reset and duplicate events\nmay be sent.\n\n## Format of `CloudEvents`\n\nAccording to the [`CloudEvents` spec](https:\/\/github.com\/cloudevents\/spec\/blob\/main\/cloudevents\/spec.md), HTTP headers are included to match the context fields. 
For example:\n\n```\n\"Ce-Id\": \"77f78ae7-ff6d-4e39-9d05-b9a0b7850527\",\n\"Ce-Source\": \"\/apis\/tekton.dev\/v1beta1\/namespaces\/default\/taskruns\/curl-run-6gplk\",\n\"Ce-Specversion\": \"1.0\",\n\"Ce-Subject\": \"curl-run-6gplk\",\n\"Ce-Time\": \"2021-01-29T14:47:58.157819Z\",\n\"Ce-Type\": \"dev.tekton.event.taskrun.unknown.v1\",\n```\n\nOther HTTP headers are:\n```\n\"Accept-Encoding\": \"gzip\",\n\"Connection\": \"close\",\n\"Content-Length\": \"3519\",\n\"Content-Type\": \"application\/json\",\n\"User-Agent\": \"Go-http-client\/1.1\"\n```\n\nThe payload is JSON, a map with a single root key `taskRun` or `pipelineRun`, depending on the source\nof the event. Inside the root key, the whole `spec` and `status` of the resource is included. For example:\n\n```json\n{\n  \"taskRun\": {\n    \"metadata\": {\n      \"annotations\": {\n        \"pipeline.tekton.dev\/release\": \"v0.20.1\",\n        \"tekton.dev\/pipelines.minVersion\": \"0.12.1\",\n        \"tekton.dev\/tags\": \"search\"\n      },\n      \"creationTimestamp\": \"2021-01-29T14:47:57Z\",\n      \"generateName\": \"curl-run-\",\n      \"generation\": 1,\n      \"labels\": {\n        \"app.kubernetes.io\/managed-by\": \"tekton-pipelines\",\n        \"app.kubernetes.io\/version\": \"0.1\",\n        \"tekton.dev\/task\": \"curl\"\n      },\n      \"managedFields\": \"(...)\",\n      \"name\": \"curl-run-6gplk\",\n      \"namespace\": \"default\",\n      \"resourceVersion\": \"156770\",\n      \"selfLink\": \"\/apis\/tekton.dev\/v1beta1\/namespaces\/default\/taskruns\/curl-run-6gplk\",\n      \"uid\": \"4ccb4f01-3ecc-4eb4-87e1-76f04efeee5c\"\n    },\n    \"spec\": {\n      \"params\": [\n        {\n          \"name\": \"url\",\n          \"value\": \"https:\/\/api.hub.tekton.dev\/resource\/96\"\n        }\n      ],\n      \"resources\": {},\n      \"serviceAccountName\": \"default\",\n      \"taskRef\": {\n        \"kind\": \"Task\",\n        \"name\": \"curl\"\n      },\n      \"timeout\": \"1h0m0s\"\n  
  },\n    \"status\": {\n      \"conditions\": [\n        {\n          \"lastTransitionTime\": \"2021-01-29T14:47:58Z\",\n          \"message\": \"pod status \\\"Initialized\\\":\\\"False\\\"; message: \\\"containers with incomplete status: [place-tools]\\\"\",\n          \"reason\": \"Pending\",\n          \"status\": \"Unknown\",\n          \"type\": \"Succeeded\"\n        }\n      ],\n      \"podName\": \"curl-run-6gplk-pod\",\n      \"startTime\": \"2021-01-29T14:47:57Z\",\n      \"steps\": [\n        {\n          \"container\": \"step-curl\",\n          \"name\": \"curl\",\n          \"waiting\": {\n            \"reason\": \"PodInitializing\"\n          }\n        }\n      ],\n      \"taskSpec\": {\n        \"description\": \"This task performs curl operation to transfer data from internet.\",\n        \"params\": [\n          {\n            \"description\": \"URL to curl'ed\",\n            \"name\": \"url\",\n            \"type\": \"string\"\n          },\n          {\n            \"default\": [],\n            \"description\": \"options of url\",\n            \"name\": \"options\",\n            \"type\": \"array\"\n          },\n          {\n            \"default\": \"docker.io\/curlimages\/curl:7.72.0@sha256:3c3ff0c379abb1150bb586c7d55848ed4dcde4a6486b6f37d6815aed569332fe\",\n            \"description\": \"option of curl image\",\n            \"name\": \"curl-image\",\n            \"type\": \"string\"\n          }\n        ],\n        \"steps\": [\n          {\n            \"args\": [\n              \"$(params.options[*])\",\n              \"$(params.url)\"\n            ],\n            \"command\": [\n              \"curl\"\n            ],\n            \"image\": \"$(params.curl-image)\",\n            \"name\": \"curl\",\n            \"resources\": {}\n          }\n        ]\n      }\n    }\n  }\n}\n```","site":"tekton","answers_cleaned":"         linkTitle   Events  weight  302            Events in Tekton  Tekton s controllers emits  Kubernetes events  
https   kubernetes io docs reference generated kubernetes api v1 18  event v1 core  when  TaskRuns  and  PipelineRuns  execute  This allows you to monitor and react to what s happening during execution by retrieving those events using the  kubectl describe  command  Tekton can also emit  CloudEvents  https   github com cloudevents spec      Note     Conditions   do not emit events  https   github com tektoncd pipeline issues 2461  but the underlying  TaskRun  do      Events in  TaskRuns    TaskRuns  emit events for the following  Reasons       Started   emitted the first time the  TaskRun  is picked by the   reconciler from its work queue  so it only happens if webhook validation was   successful  This event in itself does not indicate that a  Step  is executing    the  Step  executes once the following conditions are satisfied      Validation of the  Task  and  its associated resources must succeed  and     Checks for associated  Conditions  must succeed  and     Scheduling of the associated  Pod  must succeed     Succeeded   emitted once all steps in the  TaskRun  have executed successfully     including post steps injected by Tekton     Failed   emitted if the  TaskRun  finishes running unsuccessfully because a  Step  failed     or the  TaskRun  timed out or was cancelled  A  TaskRun  also emits  Failed  events    if it cannot execute at all due to failing validation      Events in  PipelineRuns    PipelineRuns  emit events for the following  Reasons       Started   emitted the first time the  PipelineRun  is picked by the   reconciler from its work queue  so it only happens if webhook validation was   successful  This event in itself does not indicate that a  Step  is executing    the  Step  executes once validation for the  Pipeline  as well as all associated  Tasks    and  Resources  is successful     Running   emitted when the  PipelineRun  passes validation and   actually begins execution     Succeeded   emitted once all  Tasks  reachable via the DAG have   
executed successfully     Failed   emitted if the  PipelineRun  finishes running unsuccessfully because a  Task  failed or the    PipelineRun  timed out or was cancelled  A  PipelineRun  also emits  Failed  events if it cannot   execute at all due to failing validation     Events via  CloudEvents   When you  configure a sink    additional configs md configuring cloudevents notifications   Tekton emits events as described in the table below   Tekton sends cloud events in a parallel routine to allow for retries without blocking the reconciler  A routine is started every time the  Succeeded  condition changes   either state  reason or message  Retries are sent using an exponential back off strategy  Because of retries  events are not guaranteed to be sent to the target sink in the order they happened   Resource       Event     Event Type                                                                                       TaskRun         Started     dev tekton event taskrun started v1   TaskRun         Running     dev tekton event taskrun running v1   TaskRun         Condition Change while Running     dev tekton event taskrun unknown v1   TaskRun         Succeed     dev tekton event taskrun successful v1   TaskRun         Failed      dev tekton event taskrun failed v1   PipelineRun     Started     dev tekton event pipelinerun started v1   PipelineRun     Running     dev tekton event pipelinerun running v1   PipelineRun     Condition Change while Running     dev tekton event pipelinerun unknown v1   PipelineRun     Succeed     dev tekton event pipelinerun successful v1   PipelineRun     Failed      dev tekton event pipelinerun failed v1   Run             Started     dev tekton event run started v1   Run             Running     dev tekton event run running v1   Run             Succeed     dev tekton event run successful v1   Run             Failed      dev tekton event run failed v1    CloudEvents  for  Runs  are only sent when enabled in the  configuration    
additional configs md configuring cloudevents notifications      Note     CloudEvents  for  Runs  rely on an ephemeral cache to avoid duplicate events  In case of controller restart  the cache is reset and duplicate events may be sent      Format of  CloudEvents   According to the   CloudEvents  spec  https   github com cloudevents spec blob main cloudevents spec md   HTTP headers are included to match the context fields  For example        Ce Id    77f78ae7 ff6d 4e39 9d05 b9a0b7850527    Ce Source     apis tekton dev v1beta1 namespaces default taskruns curl run 6gplk    Ce Specversion    1 0    Ce Subject    curl run 6gplk    Ce Time    2021 01 29T14 47 58 157819Z    Ce Type    dev tekton event taskrun unknown v1        Other HTTP headers are       Accept Encoding    gzip    Connection    close    Content Length    3519    Content Type    application json    User Agent    Go http client 1 1       The payload is JSON  a map with a single root key  taskRun  or  pipelineRun   depending on the source of the event  Inside the root key  the whole  spec  and  status  of the resource is included  For example      json      taskRun          metadata            annotations              pipeline tekton dev release    v0 20 1            tekton dev pipelines minVersion    0 12 1            tekton dev tags    search                  creationTimestamp    2021 01 29T14 47 57Z          generateName    curl run           generation   1         labels              app kubernetes io managed by    tekton pipelines            app kubernetes io version    0 1            tekton dev task    curl                  managedFields                   name    curl run 6gplk          namespace    default          resourceVersion    156770          selfLink     apis tekton dev v1beta1 namespaces default taskruns curl run 6gplk          uid    4ccb4f01 3ecc 4eb4 87e1 76f04efeee5c              spec            params                          name    url              value    https   api hub tekton dev 
resource 96                            resources              serviceAccountName    default          taskRef              kind    Task            name    curl                  timeout    1h0m0s              status            conditions                          lastTransitionTime    2021 01 29T14 47 58Z              message    pod status   Initialized     False    message    containers with incomplete status   place tools                 reason    Pending              status    Unknown              type    Succeeded                            podName    curl run 6gplk pod          startTime    2021 01 29T14 47 57Z          steps                          container    step curl              name    curl              waiting                  reason    PodInitializing                                        taskSpec              description    This task performs curl operation to transfer data from internet             params                              description    URL to curl ed                name    url                type    string                                        default                    description    options of url                name    options                type    array                                        default    docker io curlimages curl 7 72 0 sha256 3c3ff0c379abb1150bb586c7d55848ed4dcde4a6486b6f37d6815aed569332fe                description    option of curl image                name    curl image                type    string                                  steps                              args                      params options                        params url                               command                    curl                              image      params curl image                 name    curl                resources                                                   "}
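The CloudEvent delivery described in the record above — context attributes arriving as `Ce-*` HTTP headers, and a JSON body whose single root key is `taskRun` or `pipelineRun` with the resource's `spec` and `status` inside — can be exercised with a small sink. A minimal sketch, assuming binary content mode; the `summarize_event` helper is illustrative and not part of Tekton:

```python
import json


def summarize_event(headers: dict, body: bytes) -> str:
    """Summarize a Tekton CloudEvent delivered in binary content mode.

    Context attributes arrive as Ce-* HTTP headers; the JSON body has a
    single root key, "taskRun" or "pipelineRun", holding spec and status.
    """
    event_type = headers.get("Ce-Type", "")     # e.g. dev.tekton.event.taskrun.unknown.v1
    subject = headers.get("Ce-Subject", "")     # resource name, e.g. curl-run-6gplk

    payload = json.loads(body)
    # Exactly one root key is expected: "taskRun" or "pipelineRun".
    kind, resource = next(iter(payload.items()))

    # The Succeeded condition's reason ("Pending", "Running", ...) if present.
    conditions = resource.get("status", {}).get("conditions", [])
    reason = conditions[0].get("reason", "unknown") if conditions else "unknown"
    return f"{event_type}: {kind} {subject} ({reason})"


if __name__ == "__main__":
    headers = {
        "Ce-Type": "dev.tekton.event.taskrun.unknown.v1",
        "Ce-Subject": "curl-run-6gplk",
    }
    body = json.dumps(
        {"taskRun": {"status": {"conditions": [{"reason": "Pending"}]}}}
    ).encode()
    print(summarize_event(headers, body))
    # dev.tekton.event.taskrun.unknown.v1: taskRun curl-run-6gplk (Pending)
```

In a real sink this function would run inside an HTTP handler; returning a 2xx response tells the sender the event was accepted, which matters because Tekton retries delivery with exponential back-off.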
{"questions":"tekton Variable Substitutions Supported by and Variable Substitutions weight 407 This page documents the variable substitutions supported by and","answers":"<!--\n---\nlinkTitle: \"Variable Substitutions\"\nweight: 407\n---\n-->\n\n# Variable Substitutions Supported by `Tasks` and `Pipelines`\n\nThis page documents the variable substitutions supported by `Tasks` and `Pipelines`.\n\nFor instructions on using variable substitutions see the relevant section of [the Tasks doc](tasks.md#using-variable-substitution).\n\n**Note:** Tekton does not escape the contents of variables. Task authors are responsible for properly escaping a variable's value according to the shell, image or scripting language that the variable will be used in.\n\n## Variables available in a `Pipeline`\n\n| Variable                                           | Description                                                                                                                                                                                                                                                                                                                         |\n|----------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `params.<param name>`                              | The value of the parameter at runtime.                                                                                                                                                                                                                                                                                              
|\n| `params['<param name>']`                           | (see above)                                                                                                                                                                                                                                                                                                                         |\n| `params[\"<param name>\"]`                           | (see above)                                                                                                                                                                                                                                                                                                                         |\n| `params.<param name>[*]`                           | Get the whole param array or object.                                                                                                                                                                                                                                                                                                |\n| `params['<param name>'][*]`                        | (see above)                                                                                                                                                                                                                                                                                                                         |\n| `params[\"<param name>\"][*]`                        | (see above)                                                                                                                                                                                                                                                                                                                         |\n| `params.<param name>[i]`                           | Get the i-th element of 
param array. This is an alpha feature; set `enable-api-fields` to `alpha` to use it. |\n| `params['<param name>'][i]` | (see above) |\n| `params[\"<param name>\"][i]` | (see above) |\n| `params.<object-param-name>[*]` | Get the value of the whole object param. This is an alpha feature; set `enable-api-fields` to `alpha` to use it. |\n| `params.<object-param-name>.<individual-key-name>` | Get the value of an individual child of an object param. This is an alpha feature; set `enable-api-fields` to `alpha` to use it. |\n| `tasks.<taskName>.matrix.length` | The length of the `Matrix` combination count. |\n| `tasks.<taskName>.results.<resultName>` | The value of the `Task's` result. (Can alter `Task` execution order within a `Pipeline`.) |\n| `tasks.<taskName>.results.<resultName>[i]` | The i-th value of the `Task's` array result. (Can alter `Task` execution order within a `Pipeline`.) |\n| `tasks.<taskName>.results.<resultName>[*]` | The array value of the `Task's` result. (Can alter `Task` execution order within a `Pipeline`. Cannot be used in `script`.) |\n| `tasks.<taskName>.results.<resultName>.key` | The `key` value of the `Task's` object result. (Can alter `Task` execution order within a `Pipeline`.) |\n| `tasks.<taskName>.matrix.<resultName>.length` | The length of the matrixed `Task's` results. (Can alter `Task` execution order within a `Pipeline`.)
                                                                                                                          |\n| `workspaces.<workspaceName>.bound`                 | Whether a `Workspace` has been bound or not. \"false\" if the `Workspace` declaration has `optional: true` and the Workspace binding was omitted by the PipelineRun.                                                                                                                                                                  |\n| `context.pipelineRun.name`                         | The name of the `PipelineRun` that this `Pipeline` is running in.                                                                                                                                                                                                                                                                   |\n| `context.pipelineRun.namespace`                    | The namespace of the `PipelineRun` that this `Pipeline` is running in.                                                                                                                                                                                                                                                              |\n| `context.pipelineRun.uid`                          | The uid of the `PipelineRun` that this `Pipeline` is running in.                                                                                                                                                                                                                                                                    |\n| `context.pipeline.name`                            | The name of this `Pipeline` .                                                                                                                                                                                                                                                                     
                                  |\n| `tasks.<pipelineTaskName>.status`                  | The execution status of the specified `pipelineTask`, only available in `finally` tasks. The execution status can be set to any one of the values (`Succeeded`, `Failed`, or `None`) described [here](pipelines.md#using-execution-status-of-pipelinetask).                                                                         |\n| `tasks.<pipelineTaskName>.reason`                  | The execution reason of the specified `pipelineTask`, only available in `finally` tasks. The reason can be set to any one of the values (`Failed`, `TaskRunCancelled`, `TaskRunTimeout`, `FailureIgnored`, etc ) described [here](taskruns.md#monitoring-execution-status).                                                         |\n| `tasks.status`                                     | An aggregate status of all the `pipelineTasks` under the `tasks` section (excluding the `finally` section). This variable is only available in the `finally` tasks and can have any one of the values (`Succeeded`, `Failed`, `Completed`, or `None`) described [here](pipelines.md#using-aggregate-execution-status-of-all-tasks). |\n| `context.pipelineTask.retries`                     | The retries of this `PipelineTask`.                                                                                                                                                                                                                                                                                                 
|\n| `tasks.<taskName>.outputs.<artifactName>`          | The value of a specific output artifact of the `Task`                                                                                                                                                                                                                                                                               |\n| `tasks.<taskName>.inputs.<artifactName>`           | The value of a specific input artifact of the `Task`                                                                                                                                                                                                                                                                                |\n\n## Variables available in a `Task`\n\n| Variable                                           | Description                                                                                                                    |\n|----------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|\n| `params.<param name>`                              | The value of the parameter at runtime.                                                                                         |\n| `params['<param name>']`                           | (see above)                                                                                                                    |\n| `params[\"<param name>\"]`                           | (see above)                                                                                                                    |\n| `params.<param name>[*]`                           | Get the whole param array or object.                                                                                           
|\n| `params['<param name>'][*]` | (see above) |\n| `params[\"<param name>\"][*]` | (see above) |\n| `params.<param name>[i]` | Get the i-th element of param array. This is an alpha feature; set `enable-api-fields` to `alpha` to use it. |\n| `params['<param name>'][i]` | (see above) |\n| `params[\"<param name>\"][i]` | (see above) |\n| `params.<object-param-name>.<individual-key-name>` | Get the value of an individual child of an object param. This is an alpha feature; set `enable-api-fields` to `alpha` to use it. |\n| `results.<resultName>.path` | The path to the file where the `Task` writes its results data. |\n| `results['<resultName>'].path` | (see above) |\n| `results[\"<resultName>\"].path` | (see above) |\n| `workspaces.<workspaceName>.path` | The path to the mounted `Workspace`. Empty string if an optional `Workspace` has not been provided by the TaskRun. |\n| `workspaces.<workspaceName>.bound` | Whether a `Workspace` has been bound or not. \"false\" if an optional `Workspace` has not been provided by the TaskRun. |\n| `workspaces.<workspaceName>.claim` | The name of the `PersistentVolumeClaim` specified as a volume source for the `Workspace`. Empty string for other volume types. |\n| `workspaces.<workspaceName>.volume` | The name of the volume populating the `Workspace`. |\n| `credentials.path` | The path to credentials injected from Secrets with matching annotations. |\n| `context.taskRun.name` | The name of the `TaskRun` that this `Task` is running in. |\n| `context.taskRun.namespace` | The namespace of the `TaskRun` that this `Task` is running in. |\n| `context.taskRun.uid` | The uid of the `TaskRun` that this `Task` is running in. |\n| `context.task.name` | The name of this `Task`. |\n| `context.task.retry-count` | The current retry number of this `Task`. |\n| `steps.step-<stepName>.exitCode.path` | The path to the file where a Step's exit code is stored. |\n| `steps.step-unnamed-<stepIndex>.exitCode.path` | The path to the file where a Step's exit code is stored for a step without any name.
|\n| `artifacts.path`                                   | The path to the file where the `Task` writes its artifacts data.                                                               |\n\n## Fields that accept variable substitutions\n\n| CRD           | Field                                                           |\n|---------------|-----------------------------------------------------------------|\n| `Task`        | `spec.steps[].name`                                             |\n| `Task`        | `spec.steps[].image`                                            |\n| `Task`        | `spec.steps[].imagePullPolicy`                                  |\n| `Task`        | `spec.steps[].command`                                          |\n| `Task`        | `spec.steps[].args`                                             |\n| `Task`        | `spec.steps[].script`                                           |\n| `Task`        | `spec.steps[].onError`                                          |\n| `Task`        | `spec.steps[].env.value`                                        |\n| `Task`        | `spec.steps[].env.valueFrom.secretKeyRef.name`                  |\n| `Task`        | `spec.steps[].env.valueFrom.secretKeyRef.key`                   |\n| `Task`        | `spec.steps[].env.valueFrom.configMapKeyRef.name`               |\n| `Task`        | `spec.steps[].env.valueFrom.configMapKeyRef.key`                |\n| `Task`        | `spec.steps[].volumeMounts.name`                                |\n| `Task`        | `spec.steps[].volumeMounts.mountPath`                           |\n| `Task`        | `spec.steps[].volumeMounts.subPath`                             |\n| `Task`        | `spec.volumes[].name`                                           |\n| `Task`        | `spec.volumes[].configMap.name`                                 |\n| `Task`        | `spec.volumes[].configMap.items[].key`                          |\n| `Task`        | `spec.volumes[].configMap.items[].path`     
                    |\n| `Task`        | `spec.volumes[].secret.secretName`                              |\n| `Task`        | `spec.volumes[].secret.items[].key`                             |\n| `Task`        | `spec.volumes[].secret.items[].path`                            |\n| `Task`        | `spec.volumes[].persistentVolumeClaim.claimName`                |\n| `Task`        | `spec.volumes[].projected.sources.configMap.name`               |\n| `Task`        | `spec.volumes[].projected.sources.secret.name`                  |\n| `Task`        | `spec.volumes[].projected.sources.serviceAccountToken.audience` |\n| `Task`        | `spec.volumes[].csi.nodePublishSecretRef.name`                  |\n| `Task`        | `spec.volumes[].csi.volumeAttributes.* `                        |\n| `Task`        | `spec.sidecars[].name`                                          |\n| `Task`        | `spec.sidecars[].image`                                         |\n| `Task`        | `spec.sidecars[].imagePullPolicy`                               |\n| `Task`        | `spec.sidecars[].env.value`                                     |\n| `Task`        | `spec.sidecars[].env.valueFrom.secretKeyRef.name`               |\n| `Task`        | `spec.sidecars[].env.valueFrom.secretKeyRef.key`                |\n| `Task`        | `spec.sidecars[].env.valueFrom.configMapKeyRef.name`            |\n| `Task`        | `spec.sidecars[].env.valueFrom.configMapKeyRef.key`             |\n| `Task`        | `spec.sidecars[].volumeMounts.name`                             |\n| `Task`        | `spec.sidecars[].volumeMounts.mountPath`                        |\n| `Task`        | `spec.sidecars[].volumeMounts.subPath`                          |\n| `Task`        | `spec.sidecars[].command`                                       |\n| `Task`        | `spec.sidecars[].args`                                          |\n| `Task`        | `spec.sidecars[].script`                                        |\n| `Task`        | 
`spec.workspaces[].mountPath`                                   |\n| `TaskRun`     | `spec.workspaces[].subPath`                                     |\n| `TaskRun`     | `spec.workspaces[].persistentVolumeClaim.claimName`             |\n| `TaskRun`     | `spec.workspaces[].configMap.name`                              |\n| `TaskRun`     | `spec.workspaces[].configMap.items[].key`                       |\n| `TaskRun`     | `spec.workspaces[].configMap.items[].path`                      |\n| `TaskRun`     | `spec.workspaces[].secret.secretName`                           |\n| `TaskRun`     | `spec.workspaces[].secret.items[].key`                          |\n| `TaskRun`     | `spec.workspaces[].secret.items[].path`                         |\n| `TaskRun`     | `spec.workspaces[].projected.sources[].secret.name`             |\n| `TaskRun`     | `spec.workspaces[].projected.sources[].secret.items[].key`      |\n| `TaskRun`     | `spec.workspaces[].projected.sources[].secret.items[].path`     |\n| `TaskRun`     | `spec.workspaces[].projected.sources[].configMap.name`          |\n| `TaskRun`     | `spec.workspaces[].projected.sources[].configMap.items[].key`   |\n| `TaskRun`     | `spec.workspaces[].projected.sources[].configMap.items[].path`  |\n| `TaskRun`     | `spec.workspaces[].csi.driver`                                  |\n| `TaskRun`     | `spec.workspaces[].csi.nodePublishSecretRef.name`               |\n| `Pipeline`    | `spec.tasks[].params[].value`                                   |\n| `Pipeline`    | `spec.tasks[].conditions[].params[].value`                      |\n| `Pipeline`    | `spec.results[].value`                                          |\n| `Pipeline`    | `spec.tasks[].when[].input`                                     |\n| `Pipeline`    | `spec.tasks[].when[].values`                                    |\n| `Pipeline`    | `spec.tasks[].workspaces[].subPath`                             |\n| `Pipeline`    | `spec.tasks[].displayName`                   
                   |\n
| `PipelineRun` | `spec.workspaces[].subPath` |\n
| `PipelineRun` | `spec.workspaces[].persistentVolumeClaim.claimName` |\n
| `PipelineRun` | `spec.workspaces[].configMap.name` |\n
| `PipelineRun` | `spec.workspaces[].configMap.items[].key` |\n
| `PipelineRun` | `spec.workspaces[].configMap.items[].path` |\n
| `PipelineRun` | `spec.workspaces[].secret.secretName` |\n
| `PipelineRun` | `spec.workspaces[].secret.items[].key` |\n
| `PipelineRun` | `spec.workspaces[].secret.items[].path` |\n
| `PipelineRun` | `spec.workspaces[].projected.sources[].secret.name` |\n
| `PipelineRun` | `spec.workspaces[].projected.sources[].secret.items[].key` |\n
| `PipelineRun` | `spec.workspaces[].projected.sources[].secret.items[].path` |\n
| `PipelineRun` | `spec.workspaces[].projected.sources[].configMap.name` |\n
| `PipelineRun` | `spec.workspaces[].projected.sources[].configMap.items[].key` |\n
| `PipelineRun` | `spec.workspaces[].projected.sources[].configMap.items[].path` |\n
| `PipelineRun` | `spec.workspaces[].csi.driver` |\n
| `PipelineRun` | `spec.workspaces[].csi.nodePublishSecretRef.name` |","site":"tekton","answers_cleaned":"<!--\n---\nlinkTitle: Variable Substitutions\nweight: 407\n---\n-->\n\n
# Variable Substitutions Supported by `Tasks` and `Pipelines`\n\n
This page documents the variable substitutions supported by `Tasks` and `Pipelines`. For instructions on using variable substitutions see [the relevant section of the Tasks doc](tasks.md#using-variable-substitution).\n\n
**Note:** Tekton does not escape the contents of variables. Task authors are responsible for properly escaping a variable's value according to the shell, image or scripting language that the variable will be used in.\n\n
## Variables available in a `Pipeline`\n\n
| Variable | Description |\n
| -------- | ----------- |\n
| `params.<param name>` | The value of the parameter at runtime |\n
| `params['<param name>']` | (see above) |\n
| `params[\"<param name>\"]` | (see above) |\n
| `params.<param name>[*]` | Get the whole param array or object |\n
| `params['<param name>'][*]` | (see above) |\n
| `params[\"<param name>\"][*]` | (see above) |\n
| `params.<param name>[i]` | Get the i-th element of a param array. This is an alpha feature; set `enable-api-fields` to `alpha` to use it |\n
| `params['<param name>'][i]` | (see above) |\n
| `params[\"<param name>\"][i]` | (see above) |\n
| `params.<object param name>[*]` | Get the value of the whole object param. This is an alpha feature; set `enable-api-fields` to `alpha` to use it |\n
| `params.<object param name>.<individual key name>` | Get the value of an individual child of an object param. This is an alpha feature; set `enable-api-fields` to `alpha` to use it |\n
| `tasks.<taskName>.matrix.length` | The length of the `Matrix` combination count |\n
| `tasks.<taskName>.results.<resultName>` | The value of the `Task's` result. Can alter `Task` execution order within a `Pipeline` |\n
| `tasks.<taskName>.results.<resultName>[i]` | The i-th value of the `Task's` array result. Can alter `Task` execution order within a `Pipeline` |\n
| `tasks.<taskName>.results.<resultName>[*]` | The array value of the `Task's` result. Can alter `Task` execution order within a `Pipeline`. Cannot be used in `script` |\n
| `tasks.<taskName>.results.<resultName>.key` | The `key` value of the `Task's` object result. Can alter `Task` execution order within a `Pipeline` |\n
| `tasks.<taskName>.matrix.<resultName>.length` | The length of the matrixed `Task's` results. Can alter `Task` execution order within a `Pipeline` |\n
| `workspaces.<workspaceName>.bound` | Whether a `Workspace` has been bound or not. `false` if the `Workspace` declaration has `optional: true` and the Workspace binding was omitted by the PipelineRun |\n
| `context.pipelineRun.name` | The name of the `PipelineRun` that this `Pipeline` is running in |\n
| `context.pipelineRun.namespace` | The namespace of the `PipelineRun` that this `Pipeline` is running in |\n
| `context.pipelineRun.uid` | The uid of the `PipelineRun` that this `Pipeline` is running in |\n
| `context.pipeline.name` | The name of this `Pipeline` |\n
| `tasks.<pipelineTaskName>.status` | The execution status of the specified `pipelineTask`, only available in `finally` tasks. The execution status can be set to any one of the values (`Succeeded`, `Failed` or `None`) described [here](pipelines.md#using-execution-status-of-pipelinetask) |\n
| `tasks.<pipelineTaskName>.reason` | The execution reason of the specified `pipelineTask`, only available in `finally` tasks. The reason can be set to any one of the values (`Failed`, `TaskRunCancelled`, `TaskRunTimeout`, `FailureIgnored`, etc.) described [here](taskruns.md#monitoring-execution-status) |\n
| `tasks.status` | An aggregate status of all the `pipelineTasks` under the `tasks` section (excluding the `finally` section). This variable is only available in the `finally` tasks and can have any one of the values (`Succeeded`, `Failed`, `Completed` or `None`) described [here](pipelines.md#using-aggregate-execution-status-of-all-tasks) |\n
| `context.pipelineTask.retries` | The retries of this `PipelineTask` |\n
| `tasks.<taskName>.outputs.<artifactName>` | The value of a specific output artifact of the `Task` |\n
| `tasks.<taskName>.inputs.<artifactName>` | The value of a specific input artifact of the `Task` |\n\n
## Variables available in a `Task`\n\n
| Variable | Description |\n
| -------- | ----------- |\n
| `params.<param name>` | The value of the parameter at runtime |\n
| `params['<param name>']` | (see above) |\n
| `params[\"<param name>\"]` | (see above) |\n
| `params.<param name>[*]` | Get the whole param array or object |\n
| `params['<param name>'][*]` | (see above) |\n
| `params[\"<param name>\"][*]` | (see above) |\n
| `params.<param name>[i]` | Get the i-th element of a param array. This is an alpha feature; set `enable-api-fields` to `alpha` to use it |\n
| `params['<param name>'][i]` | (see above) |\n
| `params[\"<param name>\"][i]` | (see above) |\n
| `params.<object param name>.<individual key name>` | Get the value of an individual child of an object param. This is an alpha feature; set `enable-api-fields` to `alpha` to use it |\n
| `results.<resultName>.path` | The path to the file where the `Task` writes its results data |\n
| `results['<resultName>'].path` | (see above) |\n
| `results[\"<resultName>\"].path` | (see above) |\n
| `workspaces.<workspaceName>.path` | The path to the mounted `Workspace`. Empty string if an optional `Workspace` has not been provided by the TaskRun |\n
| `workspaces.<workspaceName>.bound` | Whether a `Workspace` has been bound or not. `false` if an optional `Workspace` has not been provided by the TaskRun |\n
| `workspaces.<workspaceName>.claim` | The name of the `PersistentVolumeClaim` specified as a volume source for the `Workspace`. Empty string for other volume types |\n
| `workspaces.<workspaceName>.volume` | The name of the volume populating the `Workspace` |\n
| `credentials.path` | The path to credentials injected from Secrets with matching annotations |\n
| `context.taskRun.name` | The name of the `TaskRun` that this `Task` is running in |\n
| `context.taskRun.namespace` | The namespace of the `TaskRun` that this `Task` is running in |\n
| `context.taskRun.uid` | The uid of the `TaskRun` that this `Task` is running in |\n
| `context.task.name` | The name of this `Task` |\n
| `context.task.retry-count` | The current retry number of this `Task` |\n
| `steps.step-<stepName>.exitCode.path` | The path to the file where a Step's exit code is stored |\n
| `steps.step-unnamed-<stepIndex>.exitCode.path` | The path to the file where a Step's exit code is stored, for a step without any name |\n
| `artifacts.path` | The path to the file where the `Task` writes its artifacts data |\n\n
## Fields that accept variable substitutions\n\n
- `Task` steps: `spec.steps[].name`, `spec.steps[].image`, `spec.steps[].imagePullPolicy`, `spec.steps[].command`, `spec.steps[].args`, `spec.steps[].script`, `spec.steps[].onError`, `spec.steps[].env.value`, `spec.steps[].env.valueFrom.secretKeyRef.name`, `spec.steps[].env.valueFrom.secretKeyRef.key`, `spec.steps[].env.valueFrom.configMapKeyRef.name`, `spec.steps[].env.valueFrom.configMapKeyRef.key`, `spec.steps[].volumeMounts.name`, `spec.steps[].volumeMounts.mountPath`, `spec.steps[].volumeMounts.subPath`\n
- `Task` volumes: `spec.volumes[].name`, `spec.volumes[].configMap.name`, `spec.volumes[].configMap.items[].key`, `spec.volumes[].configMap.items[].path`, `spec.volumes[].secret.secretName`, `spec.volumes[].secret.items[].key`, `spec.volumes[].secret.items[].path`, `spec.volumes[].persistentVolumeClaim.claimName`, `spec.volumes[].projected.sources.configMap.name`, `spec.volumes[].projected.sources.secret.name`, `spec.volumes[].projected.sources.serviceAccountToken.audience`, `spec.volumes[].csi.nodePublishSecretRef.name`, `spec.volumes[].csi.volumeAttributes`\n
- `Task` sidecars: `spec.sidecars[].name`, `spec.sidecars[].image`, `spec.sidecars[].imagePullPolicy`, `spec.sidecars[].env.value`, `spec.sidecars[].env.valueFrom.secretKeyRef.name`, `spec.sidecars[].env.valueFrom.secretKeyRef.key`, `spec.sidecars[].env.valueFrom.configMapKeyRef.name`, `spec.sidecars[].env.valueFrom.configMapKeyRef.key`, `spec.sidecars[].volumeMounts.name`, `spec.sidecars[].volumeMounts.mountPath`, `spec.sidecars[].volumeMounts.subPath`, `spec.sidecars[].command`, `spec.sidecars[].args`, `spec.sidecars[].script`\n
- `Task` workspaces: `spec.workspaces[].mountPath`\n
- `TaskRun` workspaces: `spec.workspaces[].subPath`, `spec.workspaces[].persistentVolumeClaim.claimName`, `spec.workspaces[].configMap.name`, `spec.workspaces[].configMap.items[].key`, `spec.workspaces[].configMap.items[].path`, `spec.workspaces[].secret.secretName`, `spec.workspaces[].secret.items[].key`, `spec.workspaces[].secret.items[].path`, `spec.workspaces[].projected.sources[].secret.name`, `spec.workspaces[].projected.sources[].secret.items[].key`, `spec.workspaces[].projected.sources[].secret.items[].path`, `spec.workspaces[].projected.sources[].configMap.name`, `spec.workspaces[].projected.sources[].configMap.items[].key`, `spec.workspaces[].projected.sources[].configMap.items[].path`, `spec.workspaces[].csi.driver`, `spec.workspaces[].csi.nodePublishSecretRef.name`\n
- `Pipeline`: `spec.tasks[].params[].value`, `spec.tasks[].conditions[].params[].value`, `spec.results[].value`, `spec.tasks[].when[].input`, `spec.tasks[].when[].values`, `spec.tasks[].workspaces[].subPath`, `spec.tasks[].displayName`\n
- `PipelineRun` workspaces: the same `spec.workspaces[]` fields as listed for `TaskRun` above"}
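The variables documented in the record above are written as `$(...)` references in Task and Pipeline definitions. A minimal sketch (hypothetical Task, parameter, result, and workspace names) showing several substitutions together; Tekton replaces each `$(...)` reference with its value before the step's script runs:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: greet          # hypothetical name
spec:
  params:
    - name: who        # hypothetical parameter
      type: string
  results:
    - name: greeting   # hypothetical result
  workspaces:
    - name: source     # hypothetical workspace
  steps:
    - name: echo
      image: ubuntu
      script: |
        # $(params.who), $(context.taskRun.name), $(workspaces.source.path)
        # and $(results.greeting.path) are substituted by Tekton, not by the shell.
        echo "Hello $(params.who) from $(context.taskRun.name) in $(workspaces.source.path)" \
          | tee $(results.greeting.path)
```

Because substitution happens before the container starts, the shell never sees the `$(...)` syntax, so there is no conflict with shell command substitution.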
{"questions":"tekton Additional configurations when installing Tekton Pipelines title Additional Configuration Options Additional Configuration Options weight 109","answers":"<!--\n---\ntitle: \"Additional Configuration Options\"\nlinkTitle: \"Additional Configuration Options\"\nweight: 109\ndescription: >\n  Additional configurations when installing Tekton Pipelines\n---\n-->\n\nThis document describes additional options to configure your Tekton Pipelines\ninstallation.\n\n## Table of Contents\n\n  - [Configuring built-in remote Task and Pipeline resolution](#configuring-built-in-remote-task-and-pipeline-resolution)\n  - [Configuring CloudEvents notifications](#configuring-cloudevents-notifications)\n  - [Configuring self-signed cert for private registry](#configuring-self-signed-cert-for-private-registry)\n  - [Configuring environment variables](#configuring-environment-variables)\n  - [Customizing basic execution parameters](#customizing-basic-execution-parameters)\n    - [Customizing the Pipelines Controller behavior](#customizing-the-pipelines-controller-behavior)\n    - [Alpha Features](#alpha-features)\n    - [Beta Features](#beta-features)\n  - [Enabling larger results using sidecar logs](#enabling-larger-results-using-sidecar-logs)\n  - [Configuring High Availability](#configuring-high-availability)\n  - [Configuring tekton pipeline controller performance](#configuring-tekton-pipeline-controller-performance)\n  - [Platform Support](#platform-support)\n  - [Creating a custom release of Tekton Pipelines](#creating-a-custom-release-of-tekton-pipelines)\n  - [Verify Tekton Pipelines Release](#verify-tekton-pipelines-release)\n    - [Verify signatures using `cosign`](#verify-signatures-using-cosign)\n    - [Verify the transparency logs using `rekor-cli`](#verify-the-transparency-logs-using-rekor-cli)\n  - [Verify Tekton Resources](#verify-tekton-resources)\n  - [PipelineRuns with Affinity Assistant](#pipelineruns-with-affinity-assistant)\n  - [TaskRuns with 
`imagePullBackOff` Timeout](#taskruns-with-imagepullbackoff-timeout)\n  - [Disabling Inline Spec in TaskRun and PipelineRun](#disabling-inline-spec-in-taskrun-and-pipelinerun)\n  - [Next steps](#next-steps)\n\n\n## Configuring built-in remote Task and Pipeline resolution\n\nFour remote resolvers are currently provided as part of the Tekton Pipelines installation.\nBy default, these remote resolvers are enabled. Each resolver can be disabled by setting\nthe appropriate feature flag in the `resolvers-feature-flags` ConfigMap in the `tekton-pipelines-resolvers`\nnamespace:\n\n1. [The `bundles` resolver](.\/bundle-resolver.md), disabled by setting the `enable-bundles-resolver`\n   feature flag to `false`.\n1. [The `git` resolver](.\/git-resolver.md), disabled by setting the `enable-git-resolver`\n   feature flag to `false`.\n1. [The `hub` resolver](.\/hub-resolver.md), disabled by setting the `enable-hub-resolver`\n   feature flag to `false`.\n1. [The `cluster` resolver](.\/cluster-resolver.md), disabled by setting the `enable-cluster-resolver`\n   feature flag to `false`.\n\n## Configuring CloudEvents notifications\n\nWhen configured to do so, Tekton can generate `CloudEvents` for `TaskRun`,\n`PipelineRun` and `CustomRun` lifecycle events. The main configuration parameter is the\nURL of the sink.\n
When not set, no notification is generated.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: config-events\n  namespace: tekton-pipelines\n  labels:\n    app.kubernetes.io\/instance: default\n    app.kubernetes.io\/part-of: tekton-pipelines\ndata:\n  formats: tektonv1\n  sink: https:\/\/my-sink-url\n```\n\nThe sink used to be configured in the `config-defaults` ConfigMap.\nThis option is still available, but deprecated, and will be removed.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: config-defaults\n  namespace: tekton-pipelines\n  labels:\n    app.kubernetes.io\/instance: default\n    app.kubernetes.io\/part-of: tekton-pipelines\ndata:\n  default-cloud-events-sink: https:\/\/my-sink-url\n```\n\nAdditionally, CloudEvents for `CustomRuns` require an extra configuration to be\nenabled. This setting exists to avoid collisions with CloudEvents that might\nbe sent by custom task controllers:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: feature-flags\n  namespace: tekton-pipelines\n  labels:\n    app.kubernetes.io\/instance: default\n    app.kubernetes.io\/part-of: tekton-pipelines\ndata:\n  send-cloudevents-for-runs: \"true\"\n```\n\n## Configuring self-signed cert for private registry\n\nThe `SSL_CERT_DIR` is set to `\/etc\/ssl\/certs` as the default cert directory. If you are using a self-signed cert for a private registry and the cert file is not under the default cert directory, configure your registry cert in the `config-registry-cert` `ConfigMap` with the key `cert`.\n\n## Configuring environment variables\n\nEnvironment variables can be configured in the following ways, listed in order of precedence from lowest to highest:\n\n1. Implicit environment variables.\n2. `Step`\/`StepTemplate` environment variables.\n3. Environment variables specified via a `default` `PodTemplate`.\n4. 
Environment variables specified via a `PodTemplate`.\n\nEnvironment variables specified by a `PodTemplate` supersede those specified in any other way, with one exception: environment variables named in the `default-forbidden-env` configuration option cannot be updated via a `PodTemplate`.\n\nFor example:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: config-defaults\n  namespace: tekton-pipelines\ndata:\n  default-timeout-minutes: \"50\"\n  default-service-account: \"tekton\"\n  default-forbidden-env: \"TEST_TEKTON\"\n---\napiVersion: tekton.dev\/v1beta1\nkind: Task\nmetadata:\n  name: mytask\n  namespace: default\nspec:\n  steps:\n    - name: echo-env\n      image: ubuntu\n      command: [\"bash\", \"-c\"]\n      args: [\"echo $TEST_TEKTON \"]\n      env:\n        - name: \"TEST_TEKTON\"\n          value: \"true\"\n---\napiVersion: tekton.dev\/v1beta1\nkind: TaskRun\nmetadata:\n  name: mytaskrun\n  namespace: default\nspec:\n  taskRef:\n    name: mytask\n  podTemplate:\n    env:\n      - name: \"TEST_TEKTON\"\n        value: \"false\"\n```\n\n_In the example above, the environment variable `TEST_TEKTON` will not be overridden by the value specified in the `podTemplate`, because the `config-defaults` option `default-forbidden-env` lists `TEST_TEKTON`._\n\n\n## Configuring default resource requirements\n\nResource requirements of containers created by the controller can be assigned default values.\n
This allows you to fully control the resource requirements of a `TaskRun`.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: config-defaults\n  namespace: tekton-pipelines\ndata:\n  default-container-resource-requirements: |\n    place-scripts: # updates resource requirements of a 'place-scripts' container\n      requests:\n        memory: \"64Mi\"\n        cpu: \"250m\"\n      limits:\n        memory: \"128Mi\"\n        cpu: \"500m\"\n\n    prepare: # updates resource requirements of a 'prepare' container\n      requests:\n        memory: \"64Mi\"\n        cpu: \"250m\"\n      limits:\n        memory: \"256Mi\"\n        cpu: \"500m\"\n\n    working-dir-initializer: # updates resource requirements of a 'working-dir-initializer' container\n      requests:\n        memory: \"64Mi\"\n        cpu: \"250m\"\n      limits:\n        memory: \"512Mi\"\n        cpu: \"500m\"\n\n    prefix-scripts: # updates resource requirements of containers whose names start with 'scripts-'\n      requests:\n        memory: \"64Mi\"\n        cpu: \"250m\"\n      limits:\n        memory: \"128Mi\"\n        cpu: \"500m\"\n\n    prefix-sidecar-scripts: # updates resource requirements of containers whose names start with 'sidecar-scripts-'\n      requests:\n        memory: \"64Mi\"\n        cpu: \"250m\"\n      limits:\n        memory: \"128Mi\"\n        cpu: \"500m\"\n\n    default: # updates resource requirements of init-containers and containers that have empty resource requirements\n      requests:\n        memory: \"64Mi\"\n        cpu: \"250m\"\n      limits:\n        memory: \"256Mi\"\n        cpu: \"500m\"\n```\n\nAny resource requirements set at the `Task` and `TaskRun` levels will override the defaults specified in the `config-defaults` ConfigMap.\n\n## Customizing basic execution parameters\n\nYou can specify your own values that replace the default service account (`ServiceAccount`), timeout (`Timeout`), resolver (`Resolver`), and Pod template (`PodTemplate`)
values used by Tekton Pipelines in `TaskRun` and `PipelineRun` definitions. To do so, modify the ConfigMap `config-defaults` with your desired values.\n\nThe example below customizes the following:\n\n- the default service account from `default` to `tekton`.\n- the default timeout from 60 minutes to 20 minutes.\n- the default `app.kubernetes.io\/managed-by` label is applied to all Pods created to execute `TaskRuns`.\n- the default Pod template to include a node selector to select the node where the Pod will be scheduled by default. A list of supported fields is available [here](.\/podtemplates.md#supported-fields).\n  For more information, see [`PodTemplate` in `TaskRuns`](.\/taskruns.md#specifying-a-pod-template) or [`PodTemplate` in `PipelineRuns`](.\/pipelineruns.md#specifying-a-pod-template).\n- the default `Workspace` configuration can be set for any `Workspaces` that a Task declares but that a TaskRun does not explicitly provide.\n- the default maximum combinations of `Parameters` in a `Matrix` that can be used to fan out a `PipelineTask`. 
For\nmore information, see [`Matrix`](matrix.md).\n- the default resolver type to `git`.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: config-defaults\ndata:\n  default-service-account: \"tekton\"\n  default-timeout-minutes: \"20\"\n  default-pod-template: |\n    nodeSelector:\n      kops.k8s.io\/instancegroup: build-instance-group\n  default-managed-by-label-value: \"my-tekton-installation\"\n  default-task-run-workspace-binding: |\n    emptyDir: {}\n  default-max-matrix-combinations-count: \"1024\"\n  default-resolver-type: \"git\"\n```\n\n**Note:** The `_example` key in the provided [config-defaults.yaml](.\/..\/config\/config-defaults.yaml)\nfile lists the keys you can customize along with their default values.\n\n### Customizing the Pipelines Controller behavior\n\nTo customize the behavior of the Pipelines Controller, modify the ConfigMap `feature-flags` via\n`kubectl edit configmap feature-flags -n tekton-pipelines`.\n\n**Note:** Changing feature flags may result in undefined behavior for TaskRuns and PipelineRuns\nthat are running while the change occurs.\n\nThe flags in this ConfigMap are as follows:\n\n- `disable-affinity-assistant` - set this flag to `true` to disable the [Affinity Assistant](.\/affinityassistants)\n  that is used to provide Node Affinity for `TaskRun` pods that share a workspace volume.\n  The Affinity Assistant is incompatible with other affinity rules\n  configured for `TaskRun` pods.\n\n  **Note:** This feature flag is deprecated and will be removed in release `v0.60`. Consider using the `coschedule` feature flag to configure Affinity Assistant behavior.\n\n  **Note:** The Affinity Assistant uses [Inter-pod affinity and anti-affinity](https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#inter-pod-affinity-and-anti-affinity)\n  rules, which require a substantial amount of processing and can slow down scheduling in large clusters\n  significantly.
We do not recommend using them in clusters larger than several hundred nodes.\n\n  **Note:** Pod anti-affinity requires nodes to be consistently labelled, in other words every\n  node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes\n  are missing the specified `topologyKey` label, it can lead to unintended behavior.\n\n- `coschedule`: this flag determines how PipelineRun Pods are scheduled with the [Affinity Assistant](.\/affinityassistants).\nAcceptable values are \"workspaces\" (default), \"pipelineruns\", \"isolate-pipelinerun\", or \"disabled\".\nSetting it to \"workspaces\" will schedule all the taskruns sharing the same PVC-based workspace in a pipelinerun to the same node.\nSetting it to \"pipelineruns\" will schedule all the taskruns in a pipelinerun to the same node.\nSetting it to \"isolate-pipelinerun\" will schedule all the taskruns in a pipelinerun to the same node,\nand only allows one pipelinerun to run on a node at a time. Setting it to \"disabled\" will not apply any coschedule policy.\n\n- `await-sidecar-readiness`: set this flag to `\"false\"` to allow the Tekton controller to start a\nTaskRun's first step immediately without waiting for sidecar containers to be running first. Using\nthis option should decrease the time it takes for a TaskRun to start running, and will allow TaskRun\npods to be scheduled in environments that don't support [Downward API](https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/downward-api-volume-expose-pod-information\/)\nvolumes (e.g. some virtual kubelet implementations). However, this may lead to unexpected behavior\nwith Tasks that use sidecars, or in clusters that use injected sidecars (e.g. Istio).
Setting this flag\nto `\"false\"` will mean the `running-in-environment-with-injected-sidecars` flag has no effect.\n\n- `running-in-environment-with-injected-sidecars`: set this flag to `\"false\"` to allow the\nTekton controller to start a TaskRun's first step immediately if it has no Sidecars specified.\nUsing this option should decrease the time it takes for a TaskRun to start running.\nHowever, for clusters that use injected sidecars (e.g. Istio) this can lead to unexpected behavior.\n\n- `require-git-ssh-secret-known-hosts`: set this flag to `\"true\"` to require that\nGit SSH Secrets include a `known_hosts` field. This ensures that a git remote server's\nkey is validated before data is accepted from it when authenticating over SSH. Secrets\nthat don't include a `known_hosts` field will result in the TaskRun failing validation and\nnot running.\n\n- `enable-tekton-oci-bundles`: set this flag to `\"true\"` to enable the\n  tekton OCI bundle usage (see [the tekton bundle\n  contract](.\/tekton-bundle-contracts.md)). Enabling this option\n  allows the use of the `bundle` field in `taskRef` and `pipelineRef` for\n  `Pipeline`, `PipelineRun` and `TaskRun`. By default, this option is\n  disabled (`\"false\"`), which means the `bundle` field is not allowed.\n\n- `disable-creds-init` - set this flag to `\"true\"` to [disable Tekton's built-in credential initialization](auth.md#disabling-tektons-built-in-auth)\nand use Workspaces to mount credentials from Secrets instead.\nThe default is `false`. For more information, see the [associated issue](https:\/\/github.com\/tektoncd\/pipeline\/issues\/3399).\n\n- `enable-api-fields`: When using v1beta1 APIs, setting this field to \"stable\" or \"beta\"\nenables [beta features](#beta-features).
When using v1 APIs, setting this field to \"stable\"\nallows only stable features, and setting it to \"beta\" allows only beta features.\nSet this field to \"alpha\" to allow [alpha features](#alpha-features) to be used.\n\n- `enable-kubernetes-sidecar`: Set this flag to `\"true\"` to enable native Kubernetes sidecar support. This allows Tekton sidecars to run as Kubernetes sidecars. Requires Kubernetes v1.29 or greater.\n\nFor example:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: feature-flags\ndata:\n  enable-api-fields: \"alpha\" # Allow alpha fields to be used in Tasks and Pipelines.\n```\n\n- `trusted-resources-verification-no-match-policy`: Setting this flag to `fail` will fail the taskrun\/pipelinerun if no matching policies are found. Setting it to `warn` will skip verification and log a warning if no matching policies are found, but not fail the taskrun\/pipelinerun. Setting it to `ignore` will skip verification if no matching policies are found.\nDefaults to \"ignore\".\n\n- `results-from`: set this flag to \"termination-message\" to fetch results from the container's termination message. This is the default method of extracting results. Set it to \"sidecar-logs\" to extract results from a results sidecar's logs instead of the termination message.\n\n- `enable-provenance-in-status`: Set this flag to `\"true\"` to enable populating\n  the `provenance` field in `TaskRun` and `PipelineRun` status. The `provenance`\n  field contains metadata about resources used in the TaskRun\/PipelineRun, such as the\n  source from which a remote Task\/Pipeline definition was fetched. By default, this is set to `true`.\n  To disable populating this field, set this flag to `\"false\"`.\n\n- `set-security-context`: Set this flag to `true` to set a security context for containers injected by Tekton that will allow TaskRun pods\nto run in namespaces with `restricted` pod security admission.
By default, this is set to `false`.\n\n### Alpha Features\n\nAlpha features in the following table are still in development and their syntax is subject to change.\n- To enable the features ***without*** an individual flag:\n  set the `enable-api-fields` feature flag to `\"alpha\"` in the `feature-flags` ConfigMap alongside your Tekton Pipelines deployment via `kubectl patch cm feature-flags -n tekton-pipelines -p '{\"data\":{\"enable-api-fields\":\"alpha\"}}'`.\n- To enable the features ***with*** an individual flag:\n  set the individual flag accordingly in the `feature-flags` ConfigMap alongside your Tekton Pipelines deployment. Example: `kubectl patch cm feature-flags -n tekton-pipelines -p '{\"data\":{\"<FLAG-NAME>\":\"<FLAG-VALUE>\"}}'`.\n\n\nFeatures currently in \"alpha\" are:\n\n| Feature | Proposal | Release | Individual Flag |\n|:--------|:---------|:--------|:----------------|\n| [Bundles](.\/pipelineruns.md#tekton-bundles) | [TEP-0005](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0005-tekton-oci-bundles.md) | [v0.18.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.18.0) | `enable-tekton-oci-bundles` |\n| [Hermetic Execution Mode](.\/hermetic.md) |
[TEP-0025](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0025-hermekton.md)                                   | [v0.25.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.25.0) |                                                  |\n| [Windows Scripts](.\/tasks.md#windows-scripts)                                                                | [TEP-0057](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0057-windows-support.md)                             | [v0.28.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.28.0) |                                                  |\n| [Debug](.\/debug.md)                                                                                          | [TEP-0042](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0042-taskrun-breakpoint-on-failure.md)               | [v0.26.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.26.0) |                                                  |\n| [StdoutConfig and StderrConfig](.\/tasks#redirecting-step-output-streams-with-stdoutConfig-and-stderrConfig) | [TEP-0011](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0011-redirecting-step-output-streams.md)             | [v0.38.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.38.0) |                                                  |\n| [Trusted Resources](.\/trusted-resources.md)                                                                  | [TEP-0091](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0091-trusted-resources.md)                           | [v0.49.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.49.0) | `trusted-resources-verification-no-match-policy` |\n| [Configure Default Resolver](.\/resolution.md#configuring-built-in-resolvers)                                 | [TEP-0133](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0133-configure-default-resolver.md)                  | 
[v0.46.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.46.0) |                                                  |\n| [Coschedule](.\/affinityassistants.md)                                                                        | [TEP-0135](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0135-coscheduling-pipelinerun-pods.md)               | [v0.51.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.51.0) | `coschedule`                                     |\n| [keep pod on cancel](.\/taskruns.md#cancelling-a-taskrun)                                                     | N\/A                                                                                                                  | [v0.52.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.52.0) | `keep-pod-on-cancel`                             |\n| [CEL in WhenExpression](.\/pipelines.md#use-cel-expression-in-whenexpression)                                                  | [TEP-0145](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0145-cel-in-whenexpression.md)                       | [v0.53.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.53.0) | `enable-cel-in-whenexpression`                   |\n| [Param Enum](.\/taskruns.md#parameter-enums)                                                                  | [TEP-0144](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0144-param-enum.md)                                  | [v0.54.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.54.0) | `enable-param-enum`                              |\n\n### Beta Features\n\nBeta features are fields of stable CRDs that follow our \"beta\" [compatibility policy](..\/api_compatibility_policy.md).\nTo enable these features, set the `enable-api-fields` feature flag to `\"beta\"` in\nthe `feature-flags` ConfigMap alongside your Tekton Pipelines deployment via\n`kubectl patch cm feature-flags -n tekton-pipelines -p 
'{\"data\":{\"enable-api-fields\":\"beta\"}}'`.\n\nFeatures currently in \"beta\" are:\n\n| Feature                                                                                               | Proposal                                                                                                 | Alpha Release                                                        | Beta Release                                                         | Individual Flag               |\n|:------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------|:---------------------------------------------------------------------|:------------------------------|\n| [Remote Tasks](.\/taskruns.md#remote-tasks) and [Remote Pipelines](.\/pipelineruns.md#remote-pipelines) | [TEP-0060](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0060-remote-resolution.md)               |                                                                      | [v0.41.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.41.0) |                               |\n| [`Provenance` field in Status](pipeline-api.md#provenance)                                            | [issue#5550](https:\/\/github.com\/tektoncd\/pipeline\/issues\/5550)                                           | [v0.41.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.41.0) | [v0.48.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.48.0) | `enable-provenance-in-status` |\n| [Isolated `Step` & `Sidecar` `Workspaces`](.\/workspaces.md#isolated-workspaces)                       | [TEP-0029](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0029-step-workspaces.md)                 | [v0.24.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.24.0) | 
[v0.50.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.50.0) | |\n| [Matrix](.\/matrix.md) | [TEP-0090](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0090-matrix.md) | [v0.38.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.38.0) | [v0.53.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.53.0) | |\n| [Task-level Resource Requirements](compute-resources.md#task-level-compute-resources-configuration) | [TEP-0104](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0104-tasklevel-resource-requirements.md) | [v0.39.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.39.0) | [v0.53.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.53.0) | |\n| [Reusable Steps via StepActions](.\/stepactions.md) | [TEP-0142](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0142-enable-step-reusability.md) | [v0.54.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.54.0) | | `enable-step-actions` |\n| [Larger Results via Sidecar Logs](#enabling-larger-results-using-sidecar-logs) | [TEP-0127](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0127-larger-results-via-sidecar-logs.md) | [v0.43.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.43.0) | [v0.61.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.61.0) | `results-from` |\n| [Step and Sidecar Overrides](.\/taskruns.md#overriding-task-steps-and-sidecars) | [TEP-0094](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0094-specifying-resource-requirements-at-runtime.md) |
[v0.34.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.34.0) | [v0.61.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.61.0) | |\n| [Ignore Task Failure](.\/pipelines.md#using-the-onerror-field) | [TEP-0050](https:\/\/github.com\/tektoncd\/community\/blob\/main\/teps\/0050-ignore-task-failures.md) | [v0.55.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.55.0) | [v0.62.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.62.0) | N\/A |\n\n## Enabling larger results using sidecar logs\n\n**Note**: The maximum size of a Task's results is limited by the container termination message feature of Kubernetes,\nas results are passed back to the controller via this mechanism. At present, the limit per task is 4096 bytes. All\nresults produced by the task share this upper limit.\n\nTo exceed this limit of 4096 bytes, you can enable larger results using sidecar logs. By enabling this feature, you will\nhave a configurable limit (with a default of 4096 bytes) per result with no restriction on the number of results. The\nresults are still stored in the taskRun CRD, so they should not exceed the 1.5MB CRD size limit.\n\n**Note**: to enable this feature, you need to grant the `tekton-pipelines-controller` `get` access to all `pods\/log`.\nThis means that the Tekton pipelines controller has the ability to access the pod logs.\n\n1. Create a cluster role and rolebinding by applying the following spec to provide log access to `tekton-pipelines-controller`.\n\n```\nkubectl apply -f optional_config\/enable-log-access-to-controller\/\n```\n\n2.
Set the `results-from` feature flag to use sidecar logs by setting `results-from: sidecar-logs` in the\n[configMap](#customizing-the-pipelines-controller-behavior).\n\n```\nkubectl patch cm feature-flags -n tekton-pipelines -p '{\"data\":{\"results-from\":\"sidecar-logs\"}}'\n```\n\n3. If you want the size per result to be something other than 4096 bytes, you can set the `max-result-size` feature flag\nto the desired number of bytes, e.g. `max-result-size: 8192`. **Note:** The value you set here cannot exceed\nthe CRD size limit of 1.5 MB.\n\n```\nkubectl patch cm feature-flags -n tekton-pipelines -p '{\"data\":{\"max-result-size\":\"<VALUE-IN-BYTES>\"}}'\n```\n\n## Configuring High Availability\n\nIf you want to run Tekton Pipelines in a way that makes webhooks resilient to failures and supports\nhigh-concurrency scenarios, you need to run a [Metrics Server](https:\/\/github.com\/kubernetes-sigs\/metrics-server) in\nyour Kubernetes cluster. This is required by the [Horizontal Pod Autoscalers](https:\/\/kubernetes.io\/docs\/tasks\/run-application\/horizontal-pod-autoscale\/)\nto compute the replica count.\n\nSee [HA Support for Tekton Pipeline Controllers](.\/enabling-ha.md) for instructions on configuring\nHigh Availability in the Tekton Pipelines Controller.\n\nThe default configuration is defined in [webhook-hpa.yaml](.\/..\/config\/webhook-hpa.yaml), which can be customized\nto better fit specific use cases.\n\n## Configuring tekton pipeline controller performance\n\nOut of the box, the Tekton Pipelines Controller is configured for relatively small-scale deployments, but several options are available for configuring Pipelines' performance.
See the [Performance Configuration](tekton-controller-performance-configuration.md) document which describes how to change the default ThreadsPerController, QPS and Burst settings to meet your requirements.\n\n## Running TaskRuns and PipelineRuns with restricted pod security standards\n\nTo allow TaskRuns and PipelineRuns to run in namespaces with [restricted pod security standards](https:\/\/kubernetes.io\/docs\/concepts\/security\/pod-security-standards\/),\nset the \"set-security-context\" feature flag to \"true\" in the [feature-flags configMap](#customizing-the-pipelines-controller-behavior). This configuration option applies a [SecurityContext](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/security-context\/)\nto any containers injected into TaskRuns by the Pipelines controller. If the [Affinity Assistants](affinityassistants.md) feature is enabled, the SecurityContext is also applied to those containers.\nThis SecurityContext may not be supported in all Kubernetes implementations (for example, OpenShift).\n\n**Note**: running TaskRuns and PipelineRuns in the \"tekton-pipelines\" namespace is discouraged.\n\n## Platform Support\n\nThe Tekton project provides support for running on x86 Linux Kubernetes nodes.\n\nThe project produces images capable of running on other architectures and operating systems, but may not be able to help debug issues specific to those platforms as readily as those that affect Linux on x86.\n\nThe controller and webhook components are currently built for:\n\n- linux\/amd64\n- linux\/arm64\n- linux\/arm (Arm v7)\n- linux\/ppc64le (PowerPC)\n- linux\/s390x (IBM Z)\n\nThe entrypoint component is also built for Windows, which enables TaskRun workloads to execute on Windows nodes.\nSee [Windows documentation](windows.md) for more information.\n\n## Creating a custom release of Tekton Pipelines\n\nYou can create a custom release of Tekton Pipelines by following and customizing the steps in [Creating an official 
release](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/tekton\/README.md#create-an-official-release). For example, you might want to customize the container images built and used by Tekton Pipelines.\n\n## Verify Tekton Pipelines Release\n\n> We will refine this process over time to be more streamlined. For now, please follow the steps listed in this section\nto verify a Tekton Pipelines release.\n\nTekton Pipelines images have been signed by [Tekton Chains](https:\/\/github.com\/tektoncd\/chains) since [v0.27.1](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.27.1). You can verify the images with\n`cosign` using [Tekton's public key](https:\/\/raw.githubusercontent.com\/tektoncd\/chains\/main\/tekton.pub).\n\n### Verify signatures using `cosign`\n\nWith Go 1.16+, you can install `cosign` by running:\n\n```shell\ngo install github.com\/sigstore\/cosign\/cmd\/cosign@latest\n```\n\nYou can verify Tekton Pipelines official images using the Tekton public key:\n\n```shell\ncosign verify --key https:\/\/raw.githubusercontent.com\/tektoncd\/chains\/main\/tekton.pub gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/controller:v0.28.1\n```\n\nwhich results in:\n\n```shell\nVerification for gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/controller:v0.28.1 --\nThe following checks were performed on each of these signatures:\n  - The cosign claims were validated\n  - The signatures were verified against the specified public key\n  - Any certificates were verified against the Fulcio roots.\n{\n  \"Critical\": {\n    \"Identity\": {\n      \"docker-reference\": \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/controller\"\n    },\n    \"Image\": {\n      \"Docker-manifest-digest\": \"sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8\"\n    },\n    \"Type\": \"Tekton container signature\"\n  },\n  \"Optional\": {}\n}\n```\n\nThe verification shows a list of checks performed and returns the digest
in `Critical.Image.Docker-manifest-digest`\nwhich can be used to retrieve the provenance from the transparency logs for that image using `rekor-cli`.\n\n### Verify the transparency logs using `rekor-cli`\n\nInstall the `rekor-cli` by running:\n\n```shell\ngo install -v github.com\/sigstore\/rekor\/cmd\/rekor-cli@latest\n```\n\nNow, use the digest collected from the previous [section](#verify-signatures-using-cosign) in\n`Critical.Image.Docker-manifest-digest`, for example,\n`sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8`.\n\nSearch the transparency log with the digest just collected:\n\n```shell\nrekor-cli search --sha sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8\n```\n\nwhich results in:\n\n```shell\nFound matching entries (listed by UUID):\n68a53d0e75463d805dc9437dda5815171502475dd704459a5ce3078edba96226\n```\n\nTekton Chains generates provenance based on the custom [format](https:\/\/github.com\/tektoncd\/chains\/blob\/main\/PROVENANCE_SPEC.md)\nin which the `subject` holds the list of artifacts which were built as part of the release. For the Pipeline release,\n`subject` includes a list of images including pipeline controller, pipeline webhook, etc. 
Use the `UUID` to get the provenance:\n\n```shell\nrekor-cli get --uuid 68a53d0e75463d805dc9437dda5815171502475dd704459a5ce3078edba96226 --format json | jq -r .Attestation | base64 --decode | jq\n```\n\nwhich results in:\n\n```shell\n{\n  \"_type\": \"https:\/\/in-toto.io\/Statement\/v0.1\",\n  \"predicateType\": \"https:\/\/tekton.dev\/chains\/provenance\",\n  \"subject\": [\n    {\n      \"name\": \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/controller\",\n      \"digest\": {\n        \"sha256\": \"0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8\"\n      }\n    },\n    {\n      \"name\": \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/entrypoint\",\n      \"digest\": {\n        \"sha256\": \"2fa7f7c3408f52ff21b2d8c4271374dac4f5b113b1c4dbc7d5189131e71ce721\"\n      }\n    },\n    {\n      \"name\": \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/git-init\",\n      \"digest\": {\n        \"sha256\": \"83d5ec6addece4aac79898c9631ee669f5fee5a710a2ed1f98a6d40c19fb88f7\"\n      }\n    },\n    {\n      \"name\": \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/imagedigestexporter\",\n      \"digest\": {\n        \"sha256\": \"e4d77b5b8902270f37812f85feb70d57d6d0e1fed2f3b46f86baf534f19cd9c0\"\n      }\n    },\n    {\n      \"name\": \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/nop\",\n      \"digest\": {\n        \"sha256\": \"59b5304bcfdd9834150a2701720cf66e3ebe6d6e4d361ae1612d9430089591f8\"\n      }\n    },\n    {\n      \"name\": \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/pullrequest-init\",\n      \"digest\": {\n        \"sha256\": \"4992491b2714a73c0a84553030e6056e6495b3d9d5cc6b20cf7bc8c51be779bb\"\n      }\n    },\n    {\n      \"name\": \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/webhook\",\n      \"digest\": {\n        \"sha256\": \"bf0ef565b301a1981cb2e0d11eb6961c694f6d2401928dccebe7d1e9d8c914de\"\n      }\n    }\n  ],\n  
...\n```\n\nNow, verify the digest in the `release.yaml` by matching it with the provenance, for example, the digest for the release `v0.28.1`:\n\n```shell\ncurl -s https:\/\/storage.googleapis.com\/tekton-releases\/pipeline\/previous\/v0.28.1\/release.yaml | grep github.com\/tektoncd\/pipeline\/cmd\/controller:v0.28.1 | awk -F\"github.com\/tektoncd\/pipeline\/cmd\/controller:v0.28.1@\" '{print $2}'\n```\n\nwhich results in:\n\n```shell\nsha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8\n```\n\nNow, you can verify the deployment specifications in the `release.yaml` to match each of these images and their digest.\nThe `tekton-pipelines-controller` deployment specification has a container named `tekton-pipeline-controller` and a\nlist of image references with their digest as part of the `args`:\n\n```yaml\n      containers:\n        - name: tekton-pipelines-controller\n          image: gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/controller:v0.28.1@sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8\n          args: [\n            # These images are built on-demand by `ko resolve` and are replaced\n            # by image references by digest.\n              \"-git-image\",\n              \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/git-init:v0.28.1@sha256:83d5ec6addece4aac79898c9631ee669f5fee5a710a2ed1f98a6d40c19fb88f7\",\n              \"-entrypoint-image\",\n              \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/entrypoint:v0.28.1@sha256:2fa7f7c3408f52ff21b2d8c4271374dac4f5b113b1c4dbc7d5189131e71ce721\",\n              \"-nop-image\",\n              \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/nop:v0.28.1@sha256:59b5304bcfdd9834150a2701720cf66e3ebe6d6e4d361ae1612d9430089591f8\",\n              \"-imagedigest-exporter-image\",\n              
\"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/imagedigestexporter:v0.28.1@sha256:e4d77b5b8902270f37812f85feb70d57d6d0e1fed2f3b46f86baf534f19cd9c0\",\n              \"-pr-image\",\n              \"gcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/pullrequest-init:v0.28.1@sha256:4992491b2714a73c0a84553030e6056e6495b3d9d5cc6b20cf7bc8c51be779bb\",\n```\n\nSimilarly, you can verify the rest of the images which were published as part of the Tekton Pipelines release:\n\n```shell\ngcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/git-init\ngcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/entrypoint\ngcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/nop\ngcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/imagedigestexporter\ngcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/pullrequest-init\ngcr.io\/tekton-releases\/github.com\/tektoncd\/pipeline\/cmd\/webhook\n```\n\n## Verify Tekton Resources\n\nTrusted Resources is a feature to verify Tekton Tasks and Pipelines. The current\nversion of feature supports `v1beta1` `Task` and `Pipeline`. For more details\nplease take a look at [Trusted Resources](.\/trusted-resources.md).\n\n## Pipelineruns with Affinity Assistant\n\nThe cluster operators can review the [guidelines](developers\/affinity-assistant.md) to `cordon` a node in the cluster\nwith the tekton controller and the affinity assistant is enabled.\n\n## TaskRuns with `imagePullBackOff` Timeout\n\nTekton pipelines has adopted a fail fast strategy with a taskRun failing with `TaskRunImagePullFailed` in case of an\n`imagePullBackOff`. This can be limited in some cases, and it generally depends on the infrastructure. 
To allow\ncluster operators to decide whether to wait in case of an `imagePullBackOff`, a setting is available to configure\nthe wait time: the controller waits for the specified duration before declaring a failure.\nFor example, with the following `config-defaults`, the controller does not mark the taskRun as failed until 5 minutes after\nthe pod is scheduled if the image pull fails with `imagePullBackOff`. The `default-imagepullbackoff-timeout` is\nof type `time.Duration` and can be set to a duration such as \"1m\", \"5m\", \"10s\", \"1h\", etc.\nSee issue https:\/\/github.com\/tektoncd\/pipeline\/issues\/5987 for more details.\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: config-defaults\n  namespace: tekton-pipelines\ndata:\n  default-imagepullbackoff-timeout: \"5m\"\n```\n\n## Disabling Inline Spec in Pipeline, TaskRun and PipelineRun\n\nTekton users may embed the specification of a `Task` (via `taskSpec`) or a `Pipeline` (via `pipelineSpec`) as an alternative to referring to an external resource via `taskRef` and `pipelineRef` respectively. This behaviour can be selectively disabled for three Tekton resources: `TaskRun`, `PipelineRun` and `Pipeline`.\n\nIn certain clusters and scenarios, an admin might want to disable the customisation of `Tasks` and `Pipelines` and only allow users to run pre-defined resources. To achieve that, the admin should disable embedded specifications via the `disable-inline-spec` flag, as well as remote resolvers.\n\nTo disable inline specification for all three resources, set the `disable-inline-spec` flag to `\"pipeline,pipelinerun,taskrun\"`\nin the `feature-flags` configmap:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: feature-flags\n  namespace: tekton-pipelines\n  labels:\n    app.kubernetes.io\/instance: default\n    app.kubernetes.io\/part-of: tekton-pipelines\ndata:\n  disable-inline-spec: \"pipeline,pipelinerun,taskrun\"\n```\n\nInline specifications can be disabled for specific resources only. 
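For example, to keep inline specifications available for `TaskRun` and `PipelineRun` but disable them for `Pipeline` resources, you could list only that resource in the flag (a sketch based on the `feature-flags` ConfigMap format shown above):\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: feature-flags\n  namespace: tekton-pipelines\ndata:\n  # Only Pipeline objects reject embedded specs; taskSpec\/pipelineSpec\n  # remain allowed in TaskRuns and PipelineRuns.\n  disable-inline-spec: \"pipeline\"\n```\n\n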
To achieve that, set the `disable-inline-spec` flag to a comma-separated list of the desired resources. Valid values are `pipeline`, `pipelinerun` and `taskrun`.\n\nThe default value of `disable-inline-spec` is `\"\"`, which means inline specification is enabled in all cases.\n\n## Next steps\n\nTo get started with Tekton, check the [Introductory tutorials][quickstarts],\nthe [how-to guides][howtos], and the [examples folder][examples].\n\n---\n\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License][cca4], and code samples are licensed\nunder the [Apache 2.0 License][apache2l].\n\n\n[quickstarts]: https:\/\/tekton.dev\/docs\/getting-started\/\n[howtos]: https:\/\/tekton.dev\/docs\/how-to-guides\/\n[examples]: https:\/\/github.com\/tektoncd\/pipeline\/tree\/main\/examples\/\n[cca4]: https:\/\/creativecommons.org\/licenses\/by\/4.0\/\n[apache2l]: https:\/\/www.apache.org\/licenses\/LICENSE-2.0","site":"tekton","answers_cleaned":"
0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8              Type    Tekton container signature          Optional             The verification shows a list of checks performed and returns the digest in  Critical Image Docker manifest digest  which can be used to retrieve the provenance from the transparency logs for that image using  rekor cli        Verify the transparency logs using  rekor cli   Install the  rekor cli  by running      shell go install  v github com sigstore rekor cmd rekor cli latest      Now  use the digest collected from the previous  section   verify signatures using cosign  in  Critical Image Docker manifest digest   for example   sha256 0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8    Search the transparency log with the digest just collected      shell rekor cli search   sha sha256 0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8      which results in      shell Found matching entries  listed by UUID   68a53d0e75463d805dc9437dda5815171502475dd704459a5ce3078edba96226      Tekton Chains generates provenance based on the custom  format  https   github com tektoncd chains blob main PROVENANCE SPEC md  in which the  subject  holds the list of artifacts which were built as part of the release  For the Pipeline release   subject  includes a list of images including pipeline controller  pipeline webhook  etc  Use the  UUID  to get the provenance      shell rekor cli get   uuid 68a53d0e75463d805dc9437dda5815171502475dd704459a5ce3078edba96226   format json   jq  r  Attestation   base64   decode   jq      which results in      shell       type    https   in toto io Statement v0 1      predicateType    https   tekton dev chains provenance      subject                  name    gcr io tekton releases github com tektoncd pipeline cmd controller          digest              sha256    0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8                              name    gcr io tekton 
releases github com tektoncd pipeline cmd entrypoint          digest              sha256    2fa7f7c3408f52ff21b2d8c4271374dac4f5b113b1c4dbc7d5189131e71ce721                              name    gcr io tekton releases github com tektoncd pipeline cmd git init          digest              sha256    83d5ec6addece4aac79898c9631ee669f5fee5a710a2ed1f98a6d40c19fb88f7                              name    gcr io tekton releases github com tektoncd pipeline cmd imagedigestexporter          digest              sha256    e4d77b5b8902270f37812f85feb70d57d6d0e1fed2f3b46f86baf534f19cd9c0                              name    gcr io tekton releases github com tektoncd pipeline cmd nop          digest              sha256    59b5304bcfdd9834150a2701720cf66e3ebe6d6e4d361ae1612d9430089591f8                              name    gcr io tekton releases github com tektoncd pipeline cmd pullrequest init          digest              sha256    4992491b2714a73c0a84553030e6056e6495b3d9d5cc6b20cf7bc8c51be779bb                              name    gcr io tekton releases github com tektoncd pipeline cmd webhook          digest              sha256    bf0ef565b301a1981cb2e0d11eb6961c694f6d2401928dccebe7d1e9d8c914de                                Now  verify the digest in the  release yaml  by matching it with the provenance  for example  the digest for the release  v0 28 1       shell curl  s https   storage googleapis com tekton releases pipeline previous v0 28 1 release yaml   grep github com tektoncd pipeline cmd controller v0 28 1   awk  F github com tektoncd pipeline cmd controller v0 28 1     print  2        which results in      shell sha256 0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8      Now  you can verify the deployment specifications in the  release yaml  to match each of these images and their digest  The  tekton pipelines controller  deployment specification has a container named  tekton pipeline controller  and a list of image references with their digest as part 
of the  args       yaml       containers            name  tekton pipelines controller           image  gcr io tekton releases github com tektoncd pipeline cmd controller v0 28 1 sha256 0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8           args                  These images are built on demand by  ko resolve  and are replaced               by image references by digest                  git image                  gcr io tekton releases github com tektoncd pipeline cmd git init v0 28 1 sha256 83d5ec6addece4aac79898c9631ee669f5fee5a710a2ed1f98a6d40c19fb88f7                   entrypoint image                  gcr io tekton releases github com tektoncd pipeline cmd entrypoint v0 28 1 sha256 2fa7f7c3408f52ff21b2d8c4271374dac4f5b113b1c4dbc7d5189131e71ce721                   nop image                  gcr io tekton releases github com tektoncd pipeline cmd nop v0 28 1 sha256 59b5304bcfdd9834150a2701720cf66e3ebe6d6e4d361ae1612d9430089591f8                   imagedigest exporter image                  gcr io tekton releases github com tektoncd pipeline cmd imagedigestexporter v0 28 1 sha256 e4d77b5b8902270f37812f85feb70d57d6d0e1fed2f3b46f86baf534f19cd9c0                   pr image                  gcr io tekton releases github com tektoncd pipeline cmd pullrequest init v0 28 1 sha256 4992491b2714a73c0a84553030e6056e6495b3d9d5cc6b20cf7bc8c51be779bb        Similarly  you can verify the rest of the images which were published as part of the Tekton Pipelines release      shell gcr io tekton releases github com tektoncd pipeline cmd git init gcr io tekton releases github com tektoncd pipeline cmd entrypoint gcr io tekton releases github com tektoncd pipeline cmd nop gcr io tekton releases github com tektoncd pipeline cmd imagedigestexporter gcr io tekton releases github com tektoncd pipeline cmd pullrequest init gcr io tekton releases github com tektoncd pipeline cmd webhook         Verify Tekton Resources  Trusted Resources is a feature to verify Tekton Tasks 
and Pipelines  The current version of feature supports  v1beta1   Task  and  Pipeline   For more details please take a look at  Trusted Resources    trusted resources md       Pipelineruns with Affinity Assistant  The cluster operators can review the  guidelines  developers affinity assistant md  to  cordon  a node in the cluster with the tekton controller and the affinity assistant is enabled      TaskRuns with  imagePullBackOff  Timeout  Tekton pipelines has adopted a fail fast strategy with a taskRun failing with  TaskRunImagePullFailed  in case of an  imagePullBackOff   This can be limited in some cases  and it generally depends on the infrastructure  To allow the cluster operators to decide whether to wait in case of an  imagePullBackOff   a setting is available to configure the wait time such that the controller will wait for the specified duration before declaring a failure  For example  with the following  config defaults   the controller does not mark the taskRun as failure for 5 minutes since the pod is scheduled in case the image pull fails with  imagePullBackOff   The  default imagepullbackoff timeout  is of type  time Duration  and can be set to a duration such as  1m    5m    10s    1h   etc  See issue https   github com tektoncd pipeline issues 5987 for more details      yaml apiVersion  v1 kind  ConfigMap metadata    name  config defaults   namespace  tekton pipelines data    default imagepullbackoff timeout   5m          Disabling Inline Spec in Pipeline  TaskRun and PipelineRun  Tekton users may embed the specification of a  Task   via  taskSpec   or a  Pipeline   via  pipelineSpec   as an alternative to referring to an external resource via  taskRef  and  pipelineRef  respectively   This behaviour can be selectively disabled for three Tekton resources   TaskRun    PipelineRun  and  Pipeline     In certain clusters and scenarios  an admin might want to disable the customisation of  Tasks  and  Pipelines  and only allow users to run pre defined 
resources  To achieve that the admin should disable embedded specification via the  disable inline spec  flag  and remote resolvers too   To disable inline specification  set the  disable inline spec  flag to   pipeline pipelinerun taskrun   in the  feature flags  configmap     yaml apiVersion  v1 kind  ConfigMap metadata    name  feature flags   namespace  tekton pipelines   labels      app kubernetes io instance  default     app kubernetes io part of  tekton pipelines data    disable inline spec   pipeline pipelinerun taskrun       Inline specifications can be disabled for specific resources only  To achieve that  set the disable inline spec flag to a comma separated list of the desired resources  Valid values are pipeline  pipelinerun and taskrun   The default value of disable inline spec is     which means inline specification is enabled in all cases      Next steps  To get started with Tekton check the  Introductory tutorials  quickstarts   the  how to guides  howtos   and the  examples folder  examples          Except as otherwise noted  the content of this page is licensed under the  Creative Commons Attribution 4 0 License  cca4   and code samples are licensed under the  Apache 2 0 License  apache2l      quickstarts   https   tekton dev docs getting started   howtos   https   tekton dev docs how to guides   examples   https   github com tektoncd pipeline tree main examples   cca4   https   creativecommons org licenses by 4 0   apache2l   https   www apache org licenses LICENSE 2 0"}
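The `curl | grep | awk` digest-extraction step above can be exercised offline. This sketch feeds `awk` a hard-coded sample line (an assumption standing in for the real `release.yaml` contents) and splits on the image reference to recover the digest:

```shell
# Sample line standing in for the grep output from release.yaml (assumed shape).
line='image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.28.1@sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8'
# Split on everything up to and including the '@' and keep the digest part.
digest=$(printf '%s\n' "$line" | awk -F 'controller:v0.28.1@' '{print $2}')
echo "$digest"   # sha256:0c320bc09e91e22ce7f01e47c9f3cb3449749a5f72d5eaecb96e710d999c28e8
```

The extracted value is what you compare against the `subject` digests in the Rekor provenance.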
{"questions":"tekton Resolver Template This directory contains a working Resolver based on the instructions Parameters Resolver Type from the This Resolver responds to type","answers":"# Resolver Template\n\nThis directory contains a working Resolver based on the instructions\nfrom the [developer howto in the docs](..\/how-to-write-a-resolver.md).\n\n## Resolver Type\n\nThis Resolver responds to type `demo`.\n\n## Parameters\n\n| Name   | Description                  | Example Value               |\n|--------|------------------------------|-----------------------------|\n| `url`  | The repository URL.          | `https:\/\/example.com\/repo\/` |\n\n## Using the template to start a new Resolver\n\nYou can use this as a template to quickly get a new Resolver up and\nrunning with your own preferred storage backend.\n\nTo reuse the template, simply copy this entire subdirectory to a new\ndirectory. \n\nThe entire program for the `latest` framework is defined in \n[`.\/cmd\/resolver\/main.go`](.\/cmd\/resolver\/main.go) and provides stub\nimplementations of all the methods defined by the [`framework.Resolver`\ninterface](..\/..\/pkg\/remoteresolution\/resolver\/framework\/interface.go).\n\nIf you choose to use the previous framework (deprecated), the program is defined in\n[`.\/cmd\/demoresolver\/main.go`](.\/cmd\/demoresolver\/main.go) and provides stub\nimplementations of all the methods defined by the [`framework.Resolver`\ninterface](..\/..\/pkg\/resolution\/resolver\/framework\/interface.go).\n\n\nOnce copied you'll need to run `go mod init` and `go mod tidy` at the root\nof your project. We don't need this in `tektoncd\/resolution` because this\nsubmodule relies on the `go.mod` and `go.sum` defined at the root of the repo.\n\nAfter your go module is initialized and dependencies tidied, update\n`config\/demo-resolver-deployment.yaml`. 
The `image` field of the container\nwill need to point to your new go module's name, with a `ko:\/\/` prefix.\n\n## Deploying the Resolver\n\n### Requirements\n\n- A computer with\n  [`kubectl`](https:\/\/kubernetes.io\/docs\/tasks\/tools\/#kubectl) and\n  [`ko`](https:\/\/github.com\/google\/ko) installed.\n- The `tekton-pipelines` namespace and `ResolutionRequest`\n  controller installed. See [the getting started\n  guide](.\/getting-started.md#step-3-install-tekton-resolution) for\n  instructions.\n\n### Install\n\n1. Install the `\"demo\"` Resolver:\n\n```bash\n$ ko apply -f .\/config\/demo-resolver-deployment.yaml\n```\n\n### Testing\n\nTry creating a `ResolutionRequest` targeting `\"demo\"` with no parameters:\n\n```bash\n$ cat <<EOF > rrtest.yaml\napiVersion: resolution.tekton.dev\/v1beta1\nkind: ResolutionRequest\nmetadata:\n  name: test-resolver-template\n  labels:\n    resolution.tekton.dev\/type: demo\nEOF\n\n$ kubectl apply -f .\/rrtest.yaml\n\n$ kubectl get resolutionrequest -w test-resolver-template\n```\n\nYou should shortly see the `ResolutionRequest` succeed and the content of\na hello-world `Pipeline` base64-encoded in the object's `status.data`\nfield.\n\n### Example PipelineRun\n\nHere's an example PipelineRun that uses the hard-coded demo Pipeline:\n\n```yaml\napiVersion: tekton.dev\/v1beta1\nkind: PipelineRun\nmetadata:\n  name: resolver-demo\nspec:\n  pipelineRef:\n    resolver: demo\n```\n\n## What's Supported?\n\n- Just one hard-coded `Pipeline` for demonstration purposes.\n\n---\n\nExcept as otherwise noted, the content of this page is licensed under the\n[Creative Commons Attribution 4.0 License](https:\/\/creativecommons.org\/licenses\/by\/4.0\/),\nand code samples are licensed under the\n[Apache 2.0 License](https:\/\/www.apache.org\/licenses\/LICENSE-2.0).","site":"tekton"}
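The `status.data` field described above is base64-encoded. A minimal offline sketch of the decoding step, using a stand-in value instead of reading a live `ResolutionRequest` object:

```shell
# Stand-in for the value you would read from the cluster, e.g. with:
#   kubectl get resolutionrequest test-resolver-template -o jsonpath='{.status.data}'
data=$(printf 'kind: Pipeline' | base64)
# Decode it back into the resolved manifest text.
printf '%s' "$data" | base64 --decode   # prints: kind: Pipeline
```

Against a real cluster, piping the `jsonpath` output through `base64 --decode` yields the hello-world `Pipeline` YAML the demo resolver returns.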
{"questions":"tekton explains This section gives an overview of how the affinity assistant is resilient to a cluster maintenance without losing how an affinity assistant is created when a is used as a volume source for a in a Please refer to the same section for more details on the affinity assistant the running Please refer to the issue https github com tektoncd pipeline issues 6586 for more details Affinity Assistant","answers":"# Affinity Assistant\n\n\n[Specifying `Workspaces` in a `Pipeline`](..\/workspaces.md#specifying-workspace-order-in-a-pipeline-and-affinity-assistants) explains\nhow an affinity assistant is created when a `persistentVolumeClaim` is used as a volume source for a `workspace` in a `pipelineRun`.\nPlease refer to the same section for more details on the affinity assistant.\n\nThis section gives an overview of how the affinity assistant is resilient to cluster maintenance without losing\nthe running `pipelineRun`. (Please refer to the issue https:\/\/github.com\/tektoncd\/pipeline\/issues\/6586 for more details.)\n\nWhen a list of `tasks` share a single workspace, the affinity assistant pod gets created on a `node` along with all\n`taskRun` pods. It is very common for a `pipeline` author to design long-running tasks with a single workspace.\nWith these long-running tasks, a `node` on which these pods are scheduled can be cordoned while the `pipelineRun` is\nstill running. 
The tekton controller migrates the affinity assistant pod to any available `node` in a cluster along with\nthe rest of the `taskRun` pods sharing the same workspace.\n\nLet's understand this with a sample `pipelineRun`:\n\n```yaml\napiVersion: tekton.dev\/v1\nkind: PipelineRun\nmetadata:\n  generateName: pipeline-run-\nspec:\n  workspaces:\n  - name: source\n    volumeClaimTemplate:\n      spec:\n        accessModes:\n          - ReadWriteOnce\n        resources:\n          requests:\n            storage: 10Mi\n  pipelineSpec:\n    workspaces:\n    - name: source\n    tasks:\n    - name: first-task\n      taskSpec:\n        workspaces:\n        - name: source\n        steps:\n        - image: alpine\n          script: |\n            echo $(workspaces.source.path)\n            sleep 60\n      workspaces:\n      - name: source\n    - name: last-task\n      taskSpec:\n        workspaces:\n        - name: source\n        steps:\n        - image: alpine\n          script: |\n            echo $(workspaces.source.path)\n            sleep 60\n      runAfter: [\"first-task\"]\n      workspaces:\n      - name: source\n```\n\nThis `pipelineRun` has two long-running tasks, `first-task` and `last-task`. 
Both of these tasks are sharing a single\nvolume with the access mode set to `ReadWriteOnce` which means the volume can be mounted to a single `node` at any\ngiven point of time.\n\nCreate a `pipelineRun` and determine on which `node` the affinity assistant pod is scheduled:\n\n```shell\nkubectl get pods -l app.kubernetes.io\/component=affinity-assistant -o wide -w\nNAME                              READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES\naffinity-assistant-c7b485007a-0   0\/1     Pending   0          0s    <none>   <none>   <none>           <none>\naffinity-assistant-c7b485007a-0   0\/1     Pending   0          0s    <none>   kind-multinode-worker1   <none>           <none>\naffinity-assistant-c7b485007a-0   0\/1     ContainerCreating   0          0s    <none>   kind-multinode-worker1   <none>           <none>\naffinity-assistant-c7b485007a-0   0\/1     ContainerCreating   0          1s    <none>   kind-multinode-worker1   <none>           <none>\naffinity-assistant-c7b485007a-0   1\/1     Running             0          5s    10.244.1.144   kind-multinode-worker1   <none>           <none>\n```\n\nNow, `cordon` that node to mark it unschedulable for any new pods:\n\n```shell\nkubectl cordon kind-multinode-worker1\nnode\/kind-multinode-worker1 cordoned\n```\n\nThe node is cordoned:\n\n```shell\nkubectl get node\nNAME                           STATUS                     ROLES           AGE   VERSION\nkind-multinode-control-plane   Ready                      control-plane   13d   v1.26.3\nkind-multinode-worker1         Ready,SchedulingDisabled   <none>          13d   v1.26.3\nkind-multinode-worker2         Ready                      <none>          13d   v1.26.3\n```\n\nNow, watch the affinity assistant pod getting transferred onto other available node `kind-multinode-worker2`:\n\n```shell\nkubectl get pods -l app.kubernetes.io\/component=affinity-assistant -o wide -w\nNAME                              READY   STATUS    
RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES\naffinity-assistant-c7b485007a-0   1\/1     Running   0          49s   10.244.1.144   kind-multinode-worker1   <none>           <none>\naffinity-assistant-c7b485007a-0   1\/1     Terminating   0          70s   10.244.1.144   kind-multinode-worker1   <none>           <none>\naffinity-assistant-c7b485007a-0   1\/1     Terminating   0          70s   10.244.1.144   kind-multinode-worker1   <none>           <none>\naffinity-assistant-c7b485007a-0   0\/1     Terminating   0          70s   10.244.1.144   kind-multinode-worker1   <none>           <none>\naffinity-assistant-c7b485007a-0   0\/1     Terminating   0          70s   10.244.1.144   kind-multinode-worker1   <none>           <none>\naffinity-assistant-c7b485007a-0   0\/1     Terminating   0          70s   10.244.1.144   kind-multinode-worker1   <none>           <none>\naffinity-assistant-c7b485007a-0   0\/1     Pending       0          0s    <none>          <none>          <none>           <none>\naffinity-assistant-c7b485007a-0   0\/1     Pending       0          1s    <none>          kind-multinode-worker2   <none>           <none>\naffinity-assistant-c7b485007a-0   0\/1     ContainerCreating   0          1s    <none>          kind-multinode-worker2   <none>           <none>\naffinity-assistant-c7b485007a-0   0\/1     ContainerCreating   0          2s    <none>          kind-multinode-worker2   <none>           <none>\naffinity-assistant-c7b485007a-0   1\/1     Running             0          4s    10.244.2.144    kind-multinode-worker2   <none>           <none>\n```\n\nAnd, the `pipelineRun` finishes to completion:\n\n```shell\nkubectl get pr\nNAME                     SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME\npipeline-run-r2c7k       True        Succeeded   4m22s       2m1s\n\nkubectl get tr\nNAME                            SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME\npipeline-run-r2c7k-first-task   True        
Succeeded   5m16s       4m7s\npipeline-run-r2c7k-last-task    True        Succeeded   4m6s        2m56s\n```","site":"tekton"}
{"questions":"tekton This section provides guidelines for running Tekton on your local workstation via the following methods Using Docker Desktop Prerequisites Local Setup","answers":"# Local Setup\n\nThis section provides guidelines for running Tekton on your local workstation via the following methods:\n\n- [Docker for Desktop](#using-docker-desktop)\n- [Minikube](#using-minikube)\n\n## Using Docker Desktop\n\n### Prerequisites\n\nComplete these prerequisites to run Tekton locally using Docker Desktop:\n\n- Install the [required tools](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/DEVELOPMENT.md#requirements).\n- Install [Docker Desktop](https:\/\/www.docker.com\/products\/docker-desktop).\n- Configure Docker Desktop ([Mac](https:\/\/docs.docker.com\/docker-for-mac\/#resources), [Windows](https:\/\/docs.docker.com\/docker-for-windows\/#resources)) to use six CPUs, 10 GB of RAM, and 2 GB of swap space.\n- Set `host.docker.internal:5000` as an insecure registry with Docker for Desktop. See the [Docker insecure registry documentation](https:\/\/docs.docker.com\/registry\/insecure\/)\n  for details.\n- Pass `--insecure` as an argument to your Kaniko tasks so that you can push to an insecure registry.\n- Run a local (insecure) Docker registry as follows:\n\n  `docker run -d -p 5000:5000 --name registry-srv -e REGISTRY_STORAGE_DELETE_ENABLED=true registry:2`\n\n- (Optional) Install a Docker registry viewer to verify the images have been pushed:\n\n  `docker run -it -p 8080:8080 --name registry-web --link registry-srv -e REGISTRY_URL=http:\/\/registry-srv:5000\/v2 -e REGISTRY_NAME=localhost:5000 hyper\/docker-registry-web`\n\n- Verify that you can push to `host.docker.internal:5000\/myregistry\/<image_name>`.\n\n### Reconfigure logging\n\n- You can keep your logs in memory only without sending them to a logging service\n  such as [Stackdriver](https:\/\/cloud.google.com\/logging\/).\n- You can deploy Elasticsearch, Beats, or Kibana locally to view logs. 
You can find an\n  example configuration at <https:\/\/github.com\/mgreau\/tekton-pipelines-elastic-tutorials>.\n- To learn more about obtaining logs, see [Logs](logs.md).\n\n## Using Minikube\n\n### Prerequisites\n\nComplete these prerequisites to run Tekton locally using Minikube:\n\n- Install the [required tools](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/DEVELOPMENT.md#requirements).\n- Install [minikube](https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-minikube\/) and start a session as follows:\n```bash\nminikube start --memory 6144 --cpus 2\n```\n- Point your shell to minikube's docker-daemon by running `eval $(minikube -p minikube docker-env)`\n<!-- wokeignore:rule=master -->\n- Set up a [registry on minikube](https:\/\/github.com\/kubernetes\/minikube\/tree\/master\/deploy\/addons\/registry-aliases) by running `minikube addons enable registry` and `minikube addons enable registry-aliases`\n\n### Reconfigure logging\n\nSee the information in the \"Docker for Desktop\" section.\n\n## Using kind and local docker registry\n\n### Prerequisites\n\nComplete these prerequisites to run Tekton locally using Kind:\n\n- Install the [required tools](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/DEVELOPMENT.md#requirements).\n- Install [Docker](https:\/\/www.docker.com\/get-started).\n- Install [kind](https:\/\/kind.sigs.k8s.io\/).\n\n### Use local registry without authentication\n\nSee [Using KinD](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/DEVELOPMENT.md#using-kind).\n\n### Use local private registry\n\n1. Create a password file for basic auth.\n\n```bash\nexport TEST_USER=testuser\nexport TEST_PASS=testpassword\nif [ ! -d auth ]; then\n    mkdir auth\nfi\ndocker run \\\n --entrypoint htpasswd \\\n httpd:2 -Bbn $TEST_USER $TEST_PASS > auth\/htpasswd\n```\n\n2. 
Start the kind cluster and the local private registry by executing the following script.\n\n```shell\n#!\/bin\/sh\nset -o errexit\n\n# create registry container unless it already exists\nreg_name='kind-registry'\nreg_port='5000'\nrunning=\"$(docker inspect -f '{{.State.Running}}' \"${reg_name}\" 2>\/dev\/null || true)\"\nif [ \"${running}\" != 'true' ]; then\n docker run \\\n   -d --restart=always -p \"127.0.0.1:${reg_port}:5000\" --name \"${reg_name}\" \\\n   -v \"$(pwd)\"\/auth:\/auth \\\n   -e \"REGISTRY_AUTH=htpasswd\" \\\n   -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm\" \\\n   -e REGISTRY_AUTH_HTPASSWD_PATH=\/auth\/htpasswd \\\n   registry:2\nfi\n\n# create a cluster with the local registry enabled in containerd\ncat <<EOF | kind create cluster --config=-\nkind: Cluster\napiVersion: kind.x-k8s.io\/v1alpha4\ncontainerdConfigPatches:\n- |-\n [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"localhost:${reg_port}\"]\n   endpoint = [\"http:\/\/${reg_name}:5000\"]\nEOF\n\n# connect the registry to the cluster network\n# (the network may already be connected)\ndocker network connect \"kind\" \"${reg_name}\" || true\n# Document the local registry\n# <!-- wokeignore:rule=master -->\n# https:\/\/github.com\/kubernetes\/enhancements\/tree\/master\/keps\/sig-cluster-lifecycle\/generic\/1755-communicating-a-local-registry\ncat <<EOF | kubectl apply -f -\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: local-registry-hosting\n namespace: kube-public\ndata:\n localRegistryHosting.v1: |\n   host: \"localhost:${reg_port}\"\n   help: \"https:\/\/kind.sigs.k8s.io\/docs\/user\/local-registry\/\"\nEOF\n```\n\n3. Install Tekton [Pipeline](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/install.md) and create the secret in the cluster.\n\n```bash\nkubectl create secret docker-registry secret-tekton \\\n  --docker-username=$TEST_USER \\\n  --docker-password=$TEST_PASS \\\n  --docker-server=localhost:5000 \\\n  --namespace=tekton-pipelines\n```\n\n4. 
Configure [ko](https:\/\/github.com\/google\/ko#install) and add the secret to the service account.\n\n```bash\nexport KO_DOCKER_REPO='localhost:5000'\n```\n\n`200-serviceaccount.yaml`\n\n```yaml\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: tekton-pipelines-controller\n namespace: tekton-pipelines\n labels:\n   app.kubernetes.io\/component: controller\n   app.kubernetes.io\/instance: default\n   app.kubernetes.io\/part-of: tekton-pipelines\nimagePullSecrets:\n - name: secret-tekton\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: tekton-pipelines-webhook\n namespace: tekton-pipelines\n labels:\n   app.kubernetes.io\/component: webhook\n   app.kubernetes.io\/instance: default\n   app.kubernetes.io\/part-of: tekton-pipelines\nimagePullSecrets:\n - name: secret-tekton\n```","site":"tekton"}
{"questions":"tekton You may follow the existing Tekton feature flags demo for detailed reference Feature Versioning API driven features are features that are accessed via a specific field in pipeline API They comply to the and the specified in the For example is an API driven feature Adding feature gates for API driven features The stability levels of features feature versioning are independent of CRD","answers":"# Feature Versioning\n\nThe stability levels of features (feature versioning) are independent of CRD [API versioning](.\/api-versioning.md).\n\nYou may follow the existing Tekton feature flags demo for detailed reference:\n- [Tekton Per-feature Flag Demo Slides](https:\/\/docs.google.com\/presentation\/d\/1MAwBTKYUN40SZcd6om6LMw217TtNSppbmA8MatMjyjk\/edit?usp=sharing&resourcekey=0-JY7-QhCrWJrzFgsFbJGROg)\n- [Tekton Per-feature Flag Demo Recording](https:\/\/drive.google.com\/file\/d\/1myFHtqps3gt2I6wBkvGIghDaJElOYOq1\/view?usp=sharing)\n\n## Adding feature gates for API-driven features\nAPI-driven features are features that are accessed via a specific field in the pipeline API. They comply with the [feature gates](..\/..\/api_compatibility_policy.md#feature-gates) and the [feature graduation process](..\/..\/api_compatibility_policy.md#feature-graduation-process) specified in the [API compatibility policy](..\/..\/api_compatibility_policy.md). For example, [remote tasks](https:\/\/github.com\/tektoncd\/pipeline\/blob\/454bfd340d102f16f4f2838cf4487198537e3cfa\/docs\/taskruns.md#remote-tasks) is an API-driven feature.\n\n## Adding feature gated API fields for API-driven features\n### Per-feature flag\n\nAll new features added after [v0.53.0](https:\/\/github.com\/tektoncd\/pipeline\/releases\/tag\/v0.53.0) will be enabled by their dedicated feature flags. 
To introduce a new per-feature flag, we will proceed as follows:\n- Add default values to the new per-feature flag for the new API-driven feature following the `PerFeatureFlag` struct in [feature_flags.go](.\/..\/..\/pkg\/apis\/config\/feature_flags.go).\n- Write unit tests to verify the new feature flag and update all test cases that require the configMap setting, such as those related to provenance propagation.\n- To add integration tests:\n    - First, add the tests to `pull-tekton-pipeline-alpha-integration-test` by enabling the newly-introduced per-feature flag at [alpha test Prow environment](.\/..\/..\/test\/e2e-tests-kind-prow-alpha.env).\n    - When the flag is promoted to `beta` stability level, change the test to use [beta Prow environment setup](.\/..\/..\/test\/e2e-tests-kind-prow-beta.env).\n    - To add additional CI tests for combinations of feature flags, add tests for all alpha feature flags being turned on, with one alpha feature turned off at a time.\n- Add the tested new per-feature flag key value to [the pipeline configMap](.\/..\/..\/config\/config-feature-flags.yaml).\n- Update the documentation for the new alpha feature at [alpha-stability-level](.\/..\/additional-configs.md#alpha-features).\n\n#### Example of adding a new Per-feature flag\n1. Add the default value following the Per-Feature flag struct:\n```golang\nconst enableExampleNewFeatureKey = \"example-new-feature\"\n\nvar DefaultExampleNewFeature = PerFeatureFlag{\n        Name:      enableExampleNewFeatureKey,\n        Stability: AlphaAPIFields,\n        Enabled:   DefaultAlphaFeatureEnabled,\n}\n```\n2. Add unit tests with the newly-introduced YAMLs `feature-flags-example-new-feature` and `feature-flags-invalid-example-new-feature` according to the existing testing framework.\n\n3. For integration tests, add `example-new-feature: true` to [alpha test Prow environment](.\/..\/..\/test\/e2e-tests-kind-prow-alpha.env).\n\n4. 
Add `example-new-feature: false` to [the pipeline configMap](.\/..\/..\/config\/config-feature-flags.yaml) with a release note.\n\n5. Update the documentation for the new alpha feature at [alpha-stability-level](.\/..\/additional-configs.md#alpha-features).\n\n### `enable-api-fields`\n\nPrior to [v0.53.0](https:\/\/github.com\/tektoncd\/pipeline\/tree\/release-v0.53.x), we had the global feature flag `enable-api-fields` in\nthe [config-feature-flags.yaml file](..\/..\/config\/config-feature-flags.yaml)\ndeployed as part of our releases.\n\n_Note that the `enable-api-fields` flag has been deprecated since [v0.53.0](https:\/\/github.com\/tektoncd\/pipeline\/tree\/release-v0.53.x) and we will transition to [Per-feature flags](#per-feature-flag) instead._\n\nThis field can be configured to be either `alpha`, `beta`, or `stable`. This field is\ndocumented as part of our\n[install docs](..\/install.md#customizing-the-pipelines-controller-behavior).\n\nFor developers adding new features to Pipelines' CRDs we've got a couple of\nhelpful tools to make gating those features simpler and to provide a consistent\ntesting experience.\n\n### Guarding Features with Feature Gates\n\nWriting new features is made trickier when you need to support both the existing\nstable behaviour as well as your new alpha behaviour.\n\nIn reconciler code you can guard your new features with an `if` statement such\nas the following:\n\n```go\nalphaAPIEnabled := config.FromContextOrDefaults(ctx).FeatureFlags.EnableAPIFields == \"alpha\"\nif alphaAPIEnabled {\n  \/\/ new feature code goes here\n} else {\n  \/\/ existing stable code goes here\n}\n```\n\nNotice that you'll need a context object to be passed into your function for\nthis to work. 
When writing new features keep in mind that you might need to\ninclude this in your new function signatures.\n\n### Guarding Validations with Feature Gates\n\nJust because your application code might be correctly observing the feature gate\nflag doesn't mean you're done yet! When a user submits a Tekton resource it's\nvalidated by Pipelines' webhook. Here too you'll need to ensure your new\nfeatures aren't accidentally accepted when the feature gate suggests they\nshouldn't be. We've got a helper function,\n[`ValidateEnabledAPIFields`](..\/..\/pkg\/apis\/version\/version_validation.go),\nto make validating the current feature gate easier. Use it like this:\n\n```go\nrequiredVersion := config.AlphaAPIFields\n\/\/ errs is an instance of *apis.FieldError, a common type in our validation code\nerrs = errs.Also(ValidateEnabledAPIFields(ctx, \"your feature name\", requiredVersion))\n```\n\nIf the user's cluster isn't configured with the required feature gate it'll\nreturn an error like this:\n\n```\n<your feature> requires \"enable-api-fields\" feature gate to be \"alpha\" but it is \"stable\"\n```\n\n### Unit Testing with Feature Gates\n\nAny new code you write that uses the `ctx` context variable is trivially unit\ntested with different feature gate settings. You should make sure to unit test\nyour code both with and without a feature gate enabled to make sure it's\nproperly guarded. 
See the following for an example of a unit test that sets the\nfeature gate to test behaviour:\n\n```go\n\/\/ EnableAlphaAPIFields enables alpha features in an existing context (for use in testing)\nfunc EnableAlphaAPIFields(ctx context.Context) context.Context {\n\treturn setEnableAPIFields(ctx, config.AlphaAPIFields)\n}\n\nfunc setEnableAPIFields(ctx context.Context, want string) context.Context {\n\tfeatureFlags, _ := config.NewFeatureFlagsFromMap(map[string]string{\n\t\t\"enable-api-fields\": want,\n\t})\n\tcfg := &config.Config{\n\t\tDefaults: &config.Defaults{\n\t\t\tDefaultTimeoutMinutes: 60,\n\t\t},\n\t\tFeatureFlags: featureFlags,\n\t}\n\treturn config.ToContext(ctx, cfg)\n}\n```\n\n### Example YAMLs\n\nWriting new YAML examples that require a feature gate to be set is easy. New\nYAML example files typically go in a directory called something like\n`examples\/v1\/taskruns` in the root of the repo. To create a YAML that\nshould only be exercised when the `enable-api-fields` flag is `alpha` just put\nit in an `alpha` subdirectory so the structure looks like:\n\n```\nexamples\/v1\/taskruns\/alpha\/your-example.yaml\n```\n\nThis should work for both taskruns and pipelineruns.\n\n**Note**: To execute alpha examples with the integration test runner you must\nmanually set the `enable-api-fields` feature flag to `alpha` in your testing\ncluster before kicking off the tests.\n\nWhen you set this flag to `stable` in your cluster it will prevent `alpha`\nexamples from being created by the test runner. 
When you set the flag to `alpha`\nall examples are run, since we want to exercise backwards-compatibility of the\nexamples under alpha conditions.\n\n### Integration Tests\n\nFor integration tests we provide the\n[`requireAnyGate` function](..\/..\/test\/gate.go) which should be passed to the\n`setup` function used by tests:\n\n```go\nc, namespace := setup(ctx, t, requireAnyGate(map[string]string{\"enable-api-fields\": \"alpha\"}))\n```\n\nThis will skip your integration test if the feature gate is not set to `alpha`,\nwith a clear message explaining why it was skipped.\n\n**Note**: As with running example YAMLs, you have to manually set the\n`enable-api-fields` flag to `alpha` in your test cluster to see your alpha\nintegration tests run. When the flag in your cluster is `alpha`, _all_\nintegration tests are executed, both `stable` and `alpha`. Setting the feature\nflag to `stable` will exclude `alpha` tests.","site":"tekton"}
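The `enable-api-fields` gating described above reduces to an ordering over stability levels: a feature is admitted only when the cluster setting is at least as permissive as the level the feature requires. Here is a standalone sketch of that comparison; the names are hypothetical stand-ins, since the real check is `ValidateEnabledAPIFields`, which reads the flags from the context rather than taking them as parameters.

```go
package main

import "fmt"

// Stand-ins for Tekton's stability levels; "alpha" is the most permissive
// cluster setting and therefore enables everything.
const (
	StableAPIFields = "stable"
	BetaAPIFields   = "beta"
	AlphaAPIFields  = "alpha"
)

var permissiveness = map[string]int{StableAPIFields: 0, BetaAPIFields: 1, AlphaAPIFields: 2}

// validateEnabledAPIFields mirrors the shape of the error shown above: it
// rejects a feature when the cluster's enable-api-fields value is less
// permissive than the stability level the feature requires.
func validateEnabledAPIFields(current, feature, required string) error {
	if permissiveness[current] < permissiveness[required] {
		return fmt.Errorf("%s requires %q feature gate to be %q but it is %q",
			feature, "enable-api-fields", required, current)
	}
	return nil
}

func main() {
	fmt.Println(validateEnabledAPIFields(StableAPIFields, "my feature", AlphaAPIFields))
	fmt.Println(validateEnabledAPIFields(AlphaAPIFields, "my feature", BetaAPIFields)) // <nil>
}
```

The first call reproduces the documented error text ("my feature requires \"enable-api-fields\" feature gate to be \"alpha\" but it is \"stable\""); the second passes because `alpha` enables all lower levels.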
{"questions":"tekton If specified wait until the file has non zero size user provided binary for each container to manage the execution order of the containers Entrypoint rewriting and step ordering This doc describes how TaskRuns are implemented using pods The binary has the following arguments Tekton releases include a binary called the entrypoint which wraps the If specified file to wait for","answers":"This doc describes how TaskRuns are implemented using pods.\n\n## Entrypoint rewriting and step ordering\n\nTekton releases include a binary called the \"entrypoint\", which wraps the\nuser-provided binary for each `Step` container to manage the execution order of the containers.\nThe `entrypoint` binary has the following arguments:\n\n- `wait_file` - If specified, file to wait for\n- `wait_file_content` - If specified, wait until the file has non-zero size\n- `post_file` - If specified, file to write upon completion\n- `entrypoint` - The command to run in the image being wrapped\n\nAs part of the PodSpec created by a `TaskRun`, the entrypoint for each `Task` step\nis changed to the entrypoint binary with the mentioned arguments, and a volume\nwith the binary and file(s) is mounted.\n\nIf the image is from a private registry, the service account should include an\n[ImagePullSecret](https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-service-account\/#add-imagepullsecrets-to-a-service-account).\n\nFor more details, see [entrypoint\/README.md](..\/..\/cmd\/entrypoint\/README.md)\nor the talk [\"Russian Doll: Extending Containers with Nested Processes\"](https:\/\/www.youtube.com\/watch?v=iz9_omZ0ctk).\n\n## How to access the exit code of a step from any subsequent step in a task\n\nThe entrypoint now allows a step to exit with an error while the rest of the\nsteps in a task continue running, i.e., it is possible for a step to exit with a non-zero exit\ncode. 
Now, it is possible to design a task with a step which can take an action\ndepending on the exit code of any prior steps. The user can access the exit code\nof a step by reading the file pointed to by the path variable\n`$(steps.step-<step-name>.exitCode.path)` or\n`$(steps.step-unnamed-<step-index>.exitCode.path)`. For example:\n\n- `$(steps.step-my-awesome-step.exitCode.path)` where the step name is\n  `my-awesome-step`.\n- `$(steps.step-unnamed-0.exitCode.path)` where the first step in a task has no\n  name.\n\nThe exit code of a step is stored in a file named `exitCode` under a directory\n`\/tekton\/steps\/step-<step-name>\/` or `\/tekton\/steps\/step-unnamed-<step-index>\/`,\nwhich is reserved for any other step-specific information in the future.\n\nIf you would like to use the Tekton-internal path, you can access the exit code\nby reading the file (which is not recommended, though, since the path might change\nin the future):\n\n```shell\ncat \/tekton\/steps\/step-<step-name>\/exitCode\n```\n\nAnd, to access the step exit code without a step name:\n\n```shell\ncat \/tekton\/steps\/step-unnamed-<step-index>\/exitCode\n```\n\nOr, you can access the step metadata directory via symlink, for example, use\n`cat \/tekton\/steps\/0\/exitCode` for the first step in a task.\n\n## TaskRun Use of Pod Termination Messages\n\nTekton Pipelines uses a `Pod's`\n[termination message](https:\/\/kubernetes.io\/docs\/tasks\/debug-application-cluster\/determine-reason-pod-failure\/)\nto pass data from a Step's container to the Pipelines controller. Examples of\nthis data include: the time that execution of the user's step began, contents of\ntask results, and contents of pipeline resource results.\n\nThe contents and format of the termination message can change. At the time of\nwriting the message takes the form of a serialized JSON blob.
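As a rough sketch of this data flow (the payload below is invented for illustration; the real serialized format is internal to Tekton and can change), extracting a value from a termination-message-style JSON blob might look like:

```shell
# Invented example payload -- NOT the exact Tekton termination message format.
msg='[{"key":"my-result","value":"hello"}]'
# Pull out the "value" field (a real controller would use a proper JSON parser).
echo "$msg" | sed -E 's/.*"value":"([^"]*)".*/\1/'
# prints: hello
```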
Some of the data\nfrom the message is internal to Tekton Pipelines, used for book-keeping, and\nsome is distributed across a number of fields of the `TaskRun's` `status`. For\nexample, a `TaskRun's` `status.taskResults` is populated from the termination\nmessage.\n\n## Reserved directories\n\n### \/workspace\n\n- `\/workspace` - This directory is where volumes for [resources](#resources) and\n  [workspaces](#workspaces) are mounted.\n\n### \/tekton\n\nThe `\/tekton\/` directory is reserved on containers for internal usage.\n\nHere is an example of a directory layout for a simple Task with 2 script steps:\n\n```\n\/tekton\n|-- bin\n    `-- entrypoint\n|-- creds\n|-- downward\n|   |-- ..2021_09_16_18_31_06.270542700\n|   |   `-- ready\n|   |-- ..data -> ..2021_09_16_18_31_06.270542700\n|   `-- ready -> ..data\/ready\n|-- home\n|-- results\n|-- run\n    `-- 0\n        `-- out\n        `-- status\n            `-- exitCode\n|-- scripts\n|   |-- script-0-t4jd8\n|   `-- script-1-4pjwp\n|-- steps\n|   |-- 0 -> \/tekton\/run\/0\/status\n|   |-- 1 -> \/tekton\/run\/1\/status\n|   |-- step-foo -> \/tekton\/run\/1\/status\n|   `-- step-unnamed-0 -> \/tekton\/run\/0\/status\n`-- termination\n```\n\n| Path                | Description                                                                                                                                                                                                                                                                                              |\n| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| \/tekton             | Directory used for Tekton specific functionality                                                                          
|\n| \/tekton\/bin         | Tekton provided binaries \/ tools |\n| \/tekton\/creds       | Location of Tekton mounted secrets. See [Authentication at Run Time](..\/auth.md) for more details. |\n| \/tekton\/debug       | Contains [Debug scripts](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/debug.md#debug-scripts) used to manage the step lifecycle during debugging at a breakpoint, and the [Debug Info](https:\/\/github.com\/tektoncd\/pipeline\/blob\/main\/docs\/debug.md#mounts) mount used to assist with the same. |\n| \/tekton\/downward    | Location of data mounted via the [Downward API](https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/downward-api-volume-expose-pod-information\/#the-downward-api). |\n| \/tekton\/home        | (deprecated - see https:\/\/github.com\/tektoncd\/pipeline\/issues\/2013) Default home directory for user containers. 
|\n| \/tekton\/results     | Where [results](#results) are written to (path available to `Task` authors via [`$(results.name.path)`](..\/variables.md)). |\n| \/tekton\/run         | Runtime variable data. [Used for coordinating step ordering](#entrypoint-rewriting-and-step-ordering). |\n| \/tekton\/scripts     | Contains user-provided scripts specified in the TaskSpec. |\n| \/tekton\/steps       | Where the `step` exitCodes are written to (path available to `Task` authors via [`$(steps.<stepName>.exitCode.path)`](..\/variables.md#variables-available-in-a-task)). |\n| \/tekton\/termination | Where the eventual [termination log message](https:\/\/kubernetes.io\/docs\/tasks\/debug-application-cluster\/determine-reason-pod-failure\/#writing-and-reading-a-termination-message) is written to. See [Sequencing step containers](#entrypoint-rewriting-and-step-ordering). |\n\nThe following directories are covered by the\n[Tekton API Compatibility policy](..\/..\/api_compatibility_policy.md), and can be\nrelied on for stability:\n\n- `\/tekton\/results`\n\nAll other files\/directories are internal implementation details of Tekton -\n**users should not rely on specific paths or behaviors as it may change in
the\nfuture**.\n\n## What and Why of `\/tekton\/run`\n\n`\/tekton\/run` is a collection of implicit volumes mounted on a pod and created\nfor storing step-specific information\/metadata. Steps can only write\nmetadata to their own `\/tekton\/run` directory - all other step volumes are mounted as\n`readonly`. The `\/tekton\/run` directories are considered internal implementation details\nof Tekton and are not bound by the API compatibility policy - the contents and\nstructure can be safely changed so long as user behavior remains the same.\n\n### `\/tekton\/steps`\n\nUnder `\/tekton\/steps`, special subdirectories are created for each step in a task -\neach directory is actually a symlink to a directory in the Step's corresponding\n`\/tekton\/run` volume. This is done to ensure that step directories can only be\nmodified by their own Step. To ensure that these symlinks are not modified, the\nentire `\/tekton\/steps` volume is initially populated by an initContainer, and\nmounted `readonly` on all user steps.\n\nThese symlinks are created as a part of the `step-init` entrypoint subcommand\ninitContainer on each Task Pod.\n\n### Entrypoint configuration\n\nThe entrypoint is modified to include an additional flag representing the\nstep-specific directory where step metadata should be written:\n\n```\nstep_metadata_dir - the dir specified in this flag is created to hold step-specific metadata\n```\n\n`step_metadata_dir` is set to `\/tekton\/run\/<step #>\/status` for the entrypoint\nof each step.\n\n### Example\n\nLet's take an example of a task with two steps, each exiting with a non-zero exit\ncode:\n\n```yaml\nkind: TaskRun\napiVersion: tekton.dev\/v1beta1\nmetadata:\n  generateName: test-taskrun-\nspec:\n  taskSpec:\n    steps:\n      - image: alpine\n        name: step0\n        onError: continue\n        script: |\n          exit 1\n      - image: alpine\n        onError: continue\n        script: |\n          exit 2\n```\n\nDuring `step-step0`, the first container
is actively running so none of the\noutput files are populated yet. The `\/tekton\/steps` directories are symlinked to\nlocations that do not yet exist, but will be populated during execution.\n\n```\n\/tekton\n|-- run\n|   |-- 0\n|   `-- 1\n|-- steps\n    |-- 0 -> \/tekton\/run\/0\/status\n    |-- 1 -> \/tekton\/run\/1\/status\n    |-- step-step0 -> \/tekton\/run\/0\/status\n    `-- step-unnamed-1 -> \/tekton\/run\/1\/status\n```\n\nDuring `step-unnamed-1`, the first container has now finished. The output files\nfor the first step are now populated, and the folder pointed to by\n`\/tekton\/steps\/0` now exists, and is populated with a file named `exitCode`\nwhich contains the exit code of the first step.\n\n```\n\/tekton\n|-- run\n|   |-- 0\n|   |   |-- out\n|   |   `-- status\n|   |       `-- exitCode\n|   `-- 1\n|-- steps\n    |-- 0 -> \/tekton\/run\/0\/status\n    |-- 1 -> \/tekton\/run\/1\/status\n    |-- step-step0 -> \/tekton\/run\/0\/status\n    `-- step-unnamed-1 -> \/tekton\/run\/1\/status\n```\n\nNotice that there are multiple symlinks showing under `\/tekton\/steps\/` pointing\nto the same `\/tekton\/run` location. These symbolic links are created to provide\nsimplified access to the step metadata directories, i.e., instead of referring to\na directory by the step name, you can access it via the step index. The step index\nbecomes complex and hard to keep track of in a task with a long list of steps,\nfor example, a task with 20 steps. Creating the step metadata directory using a\nstep name and creating a symbolic link using the step index gives the user\nflexibility, and an option to choose whatever works best for them.\n\n## Handling of injected sidecars\n\nTekton has to take some special steps to support sidecars that are injected into\nTaskRun Pods. Without intervention sidecars will typically run for the entire\nlifetime of a Pod but in Tekton's case it's desirable for the sidecars to run\nonly as long as Steps take to complete.
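This lifecycle hinges on simple file-based signalling. As a local sketch of the wait-until-a-file-has-non-zero-size behavior (the entrypoint's `wait_file_content` argument listed earlier; the background job here merely stands in for the real annotation update):

```shell
# Local simulation only: in a real pod the ready file is projected from a
# Pod annotation via the Downward API, not written by a background job.
ready=$(mktemp)
: > "$ready"                                 # file starts empty
( sleep 0.2; printf 'READY' > "$ready" ) &   # sidecars become ready
while [ ! -s "$ready" ]; do sleep 0.05; done # spin until file is non-empty
echo "steps may begin"
```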
There's also a need for Tekton to\nschedule the sidecars to start before a Task's Steps begin, just in case the\nSteps rely on a sidecar's behavior, for example to join an Istio service mesh. To\nhandle all of this, Tekton Pipelines implements the following lifecycle for\nsidecar containers:\n\nFirst, the\n[Downward API](https:\/\/kubernetes.io\/docs\/tasks\/inject-data-application\/downward-api-volume-expose-pod-information\/#the-downward-api)\nis used to project an annotation on the TaskRun's Pod into the `entrypoint`\ncontainer as a file. The annotation starts as an empty string, so the file\nprojected by the downward API has zero length. The entrypointer spins, waiting\nfor that file to have non-zero size.\n\nThe sidecar containers start up. Once they're all in a ready state, the\nannotation is populated with the string \"READY\", which in turn populates the\nDownward API projected file. The entrypoint binary recognizes that the projected\nfile has a non-zero size and allows the Task's steps to begin.\n\nOn completion of all steps in a Task, the TaskRun reconciler stops any sidecar\ncontainers. The `Image` field of any sidecar containers is swapped to the nop\nimage. Kubernetes observes the change and relaunches the container with the updated\ncontainer image. The nop container image exits immediately _because it does not\nprovide the command that the sidecar is configured to run_. The container is\nconsidered `Terminated` by Kubernetes and the TaskRun's Pod stops.\n\nThere are known issues with the existing implementation of sidecars:\n\n- When the `nop` image does provide the sidecar's command, the sidecar will\n  continue to run even after `nop` has been swapped into the sidecar container's\n  image field. See\n  [the issue tracking this bug](https:\/\/github.com\/tektoncd\/pipeline\/issues\/1347).
Until this issue is resolved, the best way to\n  avoid it is to avoid overriding the `nop` image when deploying the Tekton\n  controller, or to ensure that the overridden `nop` image contains as few\n  commands as possible.\n\n- `kubectl get pods` will show a Completed pod when a sidecar exits successfully,\n  but an Error when the sidecar exits with an error. This is only apparent when\n  using `kubectl` to get the pods of a TaskRun, not when describing the Pod\n  using `kubectl describe pod ...` nor when looking at the TaskRun, but can be\n  quite confusing.\n\n## Breakpoint on Failure\n\nHalting a TaskRun's execution on failure of a step.\n\n### Failure of a Step\n\nThe entrypoint binary is used to manage the lifecycle of a step. Steps are sequenced beforehand by the TaskRun controller,\nallowing each step to run in a particular order. This is done using the `-wait_file` and `-post_file` flags. The former\nlets the entrypoint binary know that it has to wait on the creation of a particular file before starting execution of the step,\nand the latter provides information on the step number and signals the next step on completion of the step.\n\nOn success of a step, the `-post_file` is written as is, signalling the next step, which has the same argument given\nfor `-wait_file`, to resume the entrypoint process and move ahead with the step.\n\nOn failure of a step, the `-post_file` is written with `.err` appended to it, denoting that the previous step has failed with\nan error. The subsequent steps are skipped in this case as well, marking the TaskRun as a failure.\n\n### Halting a Step on failure\n\nThe failed step writes `<step-no>.err` to `\/tekton\/run` and stops running completely. To be able to debug a step we would\nneed it to continue running (not exit), not skip the next steps, and signal the health of the step.
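The `-post_file` \/ `.err` signalling described above can be sketched with plain files (the paths and helper function are illustrative, not Tekton's actual implementation):

```shell
# Simulate step N signalling the next step (illustrative; real markers live
# under /tekton/run and are written by the entrypoint binary).
dir=$(mktemp -d)
run_step() { # run_step <n> <cmd...>: write <n> on success, <n>.err on failure
  n=$1; shift
  if "$@"; then : > "$dir/$n"; else : > "$dir/$n.err"; fi
}
run_step 0 false                    # step 0 fails
if [ -f "$dir/0.err" ]; then
  echo "step 1 sees the failure marker and is skipped"
fi
```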
By disabling step skipping,\nsuppressing the write of the `<step-no>.err` file, and waiting on a signal by the user to lift the halt, we would be simulating a\n\"breakpoint\".\n\nIn this breakpoint, which is essentially a limbo state the TaskRun finds itself in, the user can interact with the step\nenvironment using a CLI or an IDE.\n\n### Exiting the on-failure breakpoint\n\nTo exit a step which has been paused upon failure, the step waits on a file named like `<step-no>.breakpointexit`, which\nunpauses and exits the step container. E.g., if step 0 fails and is paused, writing `0.breakpointexit` in `\/tekton\/run`\nwould unpause and exit the step container.\n\n## Breakpoint before step\n\nThe TaskRun will be stuck waiting for user debugging before the step execution.\n\n### Halting a Step before execution\n\nThe step program is executed after all of the `-wait_file` monitoring ends. If the user wants to enter debugging before the step is executed,\na `debug_before_step` parameter needs to be passed to the `entrypoint`;\nafter the monitoring of `waitFiles` ends, the `entrypoint` pauses again,\nwaiting for the `\/tekton\/run\/0\/out.beforestepexit` file.\n\n### Exiting the before-step breakpoint\n\nThe `entrypoint` watches `\/tekton\/run\/<step-no>\/out.beforestepexit` or `\/tekton\/run\/<step-no>\/out.beforestepexit.err` to\ndecide whether to proceed with the step: `out.beforestepexit` means continue with the step,\nwhile `out.beforestepexit.err` means do not continue with the step.","site":"tekton"}
